# The magnetic neutron scattering resonance of high-$`T_c`$ superconductors in external magnetic fields: an SO(5) study
## Abstract

The magnetic resonance at 41 meV observed in neutron scattering studies of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> holds a key position in the understanding of high-$`T_\mathrm{c}`$ superconductivity. Within the SO(5) model for superconductivity and antiferromagnetism, we have calculated the effect of an applied magnetic field on the neutron scattering cross-section of the magnetic resonance. In the presence of Abrikosov vortices, the neutron scattering cross-section shows clear signatures of not only the fluctuations in the superconducting order parameter $`\psi`$, but also the modulation of the phase of $`\psi`$ due to vortices. In reciprocal space we find that i) the scattering amplitude is zero at $`(\pi/a,\pi/a)`$, ii) the resonance peak is split into a ring with radius $`\pi/d`$ centered at $`(\pi/a,\pi/a)`$, $`d`$ being the vortex lattice constant, and consequently, iii) the splitting $`\pi/d`$ scales with the magnetic field as $`\sqrt{B}`$.
Soon after the discovery of high-$`T_c`$ superconductivity in the doped cuprate compounds, its intimate relation to antiferromagnetism was realized. A key discovery in the unraveling of this relationship was the observation of the so-called 41 meV magnetic resonance, later also denoted the $`\pi`$ resonance. In inelastic neutron scattering experiments on YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> at temperatures below $`T_\mathrm{c}\approx 90\,\mathrm{K}`$, Rossat-Mignod et al. found a sharp peak at $`\hbar\omega \approx 41\,\mathrm{meV}`$ and $`\mathbf{q}=(\pi /a,\pi /a)`$, $`a`$ being the lattice constant of the square lattice in the copper-oxide planes. Later its antiferromagnetic origin was confirmed by Mook et al. in a polarized neutron scattering experiment, and subsequently Fong et al. found that the magnetic scattering appears only in the superconducting state. Recently, Fong *et al.* have also observed the $`\pi`$ resonance in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub>, which means that it is a general feature of high-$`T_c`$ superconductors and not a phenomenon restricted to YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub>. This gives strong experimental evidence for the $`\pi`$ resonance being related to antiferromagnetic fluctuations within the superconducting state. Conversely, it may be noted that angle-resolved photoemission spectroscopy has shown how the single-particle gap within the antiferromagnetic state inherits the $`d`$-wave modulation of the superconducting state.
A number of different models have been proposed to explain the $`\pi `$ resonance. In particular, Zhang was inspired by the existence of antiferromagnetic fluctuations in the superconducting state to suggest a unified SO(5) theory of antiferromagnetism and $`d`$-wave superconductivity in the high-$`T_\mathrm{c}`$ superconductors. It is of great interest to extend the different theoretical explanations to make predictions for the behavior of the $`\pi `$ resonance *e.g.* in an applied magnetic field. An experimental test of such predictions will put important constraints on theoretical explanations of the $`\pi `$ resonance in particular and of high-$`T_c`$ superconductivity in general. In this paper we treat the $`\pi `$ resonance in the presence of an applied magnetic field within the SO(5) model.
Zhang proposed that the cuprates at low temperatures can be understood in terms of a competition between $`d`$-wave superconductivity and antiferromagnetism in a system which at higher temperatures possesses SO(5) symmetry. The SO(5) symmetry group is the minimal group that contains both the gauge group U(1) \[$`=`$SO(2)\], which is broken in the superconducting state, and the spin rotation group SO(3), which is broken in the antiferromagnetic state. Furthermore, the SO(5) group also contains rotations of the superspin between the antiferromagnetic sector and the superconducting sector. The relevant order parameter is a real vector $`\mathbf{n}=(n_1,n_2,n_3,n_4,n_5)`$ in a five-dimensional superspin space with a length which is fixed ($`\left|\mathbf{n}\right|^2=1`$) at low temperatures. This order parameter is related to the complex superconducting order parameter, $`\psi`$, and the antiferromagnetic order parameter, $`\mathbf{m}`$, in each copper-oxide plane as follows: $`\psi =fe^{i\varphi }=n_1+in_5`$ and $`\mathbf{m}=(n_2,n_3,n_4)`$. Zhang argued that in terms of the five-dimensional superspin one can construct an effective Lagrangian $`\mathcal{L}(\mathbf{n})`$ describing the low-energy physics of the $`t`$-$`J`$ limit of the Hubbard model.
Two comments are appropriate here. Firstly, we note that relaxing the constraint $`\left|\mathbf{n}\right|^2=1`$ in the bulk superconducting state will introduce high-energy modes, but these can safely be ignored at low temperatures. Moreover, they do not alter the topology of vortices in the order parameter, which is our main concern. Secondly, one may worry that results obtained from a pure SO(5) model deviate substantially from those obtained from the recently developed, physically more correct projected SO(5) theory. However, the two models differ significantly only close to half filling, and our study concerns AF modes in the bulk superconductor in a weak magnetic field, a state which, although endowed with the topology of vortices, is far from half filling. For simplicity, we thus restrict the calculations in this paper to the original form of the SO(5) theory.
In the superconducting state the SO(5) symmetry is spontaneously broken, which leads to a "high"-energy collective mode where the approximate SO(5) symmetry allows for rotations of $`\mathbf{n}`$ between the superconducting and the antiferromagnetic phases. These rotations have an energy cost $`\hbar\omega _\pi`$ corresponding to the $`\pi`$ resonance, and fluctuations in $`\mathbf{n}`$ will thus give rise to a neutron scattering peak at $`\hbar\omega _\pi`$ which, through the antiferromagnetic part of the superspin, is located at $`\mathbf{q}=\mathbf{Q}`$, where $`\mathbf{Q}=(\pi /a,\pi /a)`$ is the antiferromagnetic ordering vector. The uniform superconducting state ($`f=1`$) can be characterized by a superspin $`\mathbf{n}=(f\mathrm{cos}\varphi ,0,0,0,f\mathrm{sin}\varphi )`$, and the $`\pi`$ mode is a fluctuation $`\delta \mathbf{n}(t)\propto (0,0,0,fe^{i\omega _\pi t},0)`$ around the static solution, where $`\widehat{\mathbf{z}}`$ has been chosen as an arbitrary direction for $`\delta \mathbf{m}`$. In this case with $`f=1`$ we have $`\delta \mathbf{m}\propto e^{i\omega _\pi t}`$, i.e. a sharp peak at $`\omega =\omega _\pi`$ and $`\mathbf{q}=\mathbf{Q}`$.
In the presence of an applied magnetic field, the superconductor will be penetrated by flux quanta, each forming a vortex with a flux $`h/2e`$ by which the complex superconducting order parameter $`\psi`$ acquires a phase shift of $`2\pi`$ when moving around the vortex. In YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> the vortices arrange themselves in a triangular vortex lattice with the area of the hexagonal unit cell given by $`\mathcal{A}=h/2eB`$ and consequently a lattice constant given by $`d=3^{-1/4}\sqrt{h/eB}`$. In the work by Arovas et al., Bruus et al., and Alama et al. the problem of Abrikosov vortices was studied within the SO(5) model of Zhang. In the center of a vortex core, the superconducting part of the order parameter is forced to zero. This leaves two possibilities: i) either the vortex core is in a metallic normal state (as is the case in conventional superconductors), corresponding to a vanishing superspin, or ii) the superspin remains intact but is rotated from the superconducting sector into the antiferromagnetic sector. The prediction of the possibility of antiferromagnetically ordered insulating vortex cores is thus quite novel and allows for a direct experimental test of the SO(5) theory. However, the antiferromagnetic ordering of vortices has, to our knowledge, yet to be confirmed experimentally. In this paper we report a different consequence of the SO(5) theory in neutron scattering experiments; we consider the $`\pi`$ mode in the presence of vortices and show that the peak at $`\mathbf{q}=\mathbf{Q}`$ splits into a ring with a radius $`\pi /d`$ centered at $`\mathbf{q}=\mathbf{Q}`$, where it has zero amplitude. Consequently the splitting scales with the magnetic field $`B`$ as $`\pi /d\propto \sqrt{B}`$.
We start by considering just one vortex and then generalize the result to a vortex lattice. To make our calculations quantitative, we consider YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub>, for which $`a=3.8\,\mathrm{\AA }`$, $`\kappa \approx 84`$, and $`\xi \approx 16\,\mathrm{\AA }`$ for the lattice constant, the Ginzburg-Landau parameter, and the coherence length, respectively. The order parameter can be written in the form
$$\mathbf{n}(\mathbf{r})=(f(r)\mathrm{cos}\varphi _{\mathbf{r}},0,m(r),0,f(r)\mathrm{sin}\varphi _{\mathbf{r}}),$$
(1)
where $`\varphi _{\mathbf{r}}=\mathrm{arg}(\mathbf{r})`$. The isotropy of the antiferromagnetic subspace allows us to choose $`\mathbf{m}`$ to lie in the $`y`$-direction without loss of generality. Static numerical solutions for $`f(r)`$, and thereby also $`m(r)`$, in the presence of a vortex are derived as described in Refs. . Due to the high value of $`\kappa`$ the absolute value $`f`$ of the superconducting order parameter $`\psi`$ increases from zero at the center of the vortex ($`r=0`$) to its bulk value ($`f=1`$) at a distance of the order $`\xi`$ from the center. The antiferromagnetic order parameter follows from $`f`$ since $`m=\sqrt{1-f^2}`$.
For the $`\pi `$ mode in the presence of a vortex, Bruus et al. found that the fluctuation of the superspin is
$$\delta \mathbf{n}(\mathbf{r},t)=(0,0,0,\delta \theta f(r)\mathrm{cos}\varphi _{\mathbf{r}}e^{i\omega _\pi t},0),$$
(2)
where the small angle $`\delta \theta`$ by which $`\mathbf{n}`$ rotates into the antiferromagnetic sector is undetermined. Since the excitation depends on $`f`$ and not on $`m`$, it is a delocalized excitation with zero amplitude at the centers of the vortices; in terms of energy it corresponds to an energy at the bottom edge of the continuum of an effective potential associated with the vortices.
For an isotropic spin space, the magnetic scattering cross-section for neutrons is proportional to the dynamic structure factor, which is the Fourier transform of the spin-spin correlation function (see e.g. Ref. ),
$$\mathcal{S}(\mathbf{q},\omega )=\int _{-\infty }^{\infty }dt\,e^{i\omega t}\sum _{\mathbf{R},\mathbf{R}^{\prime }}e^{i\mathbf{q}\cdot (\mathbf{R}-\mathbf{R}^{\prime })}\langle \widehat{\mathbf{S}}_{\mathbf{R}}(t)\cdot \widehat{\mathbf{S}}_{\mathbf{R}^{\prime }}(0)\rangle .$$
(3)
To make a connection to the SO(5) calculations we make the semiclassical approximation $`\langle \widehat{\mathbf{S}}_{\mathbf{R}}(t)\cdot \widehat{\mathbf{S}}_{\mathbf{R}^{\prime }}(0)\rangle \approx \langle \widehat{\mathbf{S}}_{\mathbf{R}}(t)\rangle \cdot \langle \widehat{\mathbf{S}}_{\mathbf{R}^{\prime }}(0)\rangle `$ so that
$`\mathcal{S}(\mathbf{q},\omega )`$ $`\approx `$ $`{\displaystyle \int _{-\infty }^{\infty }}dt\,e^{i\omega t}{\displaystyle \sum _{\mathbf{R},\mathbf{R}^{\prime }}}e^{i\left(\mathbf{q}+\mathbf{Q}\right)\cdot (\mathbf{R}-\mathbf{R}^{\prime })}`$ (4)
$`\times \mathbf{m}(\mathbf{R},t)\cdot \mathbf{m}(\mathbf{R}^{\prime },0),`$ (5)
where $`\mathbf{m}(\mathbf{R},t)=e^{i\mathbf{Q}\cdot \mathbf{R}}\mathbf{S}_{\mathbf{R}}(t)`$ is the antiferromagnetic order parameter which enters the superspin $`\mathbf{n}`$.
With a superspin given by $`\mathbf{n}(\mathbf{r},t)=\mathbf{n}(\mathbf{r})+\delta \mathbf{n}(\mathbf{r},t)`$ the dynamical structure factor has two components: an elastic and an inelastic one. The elastic component
$$\mathcal{S}_{\mathrm{el}}(\mathbf{q},\omega )=\left|\sum _{\mathbf{R}}e^{i(\mathbf{q}+\mathbf{Q})\cdot \mathbf{R}}m(R)\right|^2 2\pi \delta (\omega ),$$
(6)
is located at $`\mathbf{q}=\mathbf{Q}`$ and has a width $`\sim \pi /\xi `$. In elastic neutron scattering experiments the observation of this peak would directly prove the antiferromagnetic ordering in the vortex cores.
The inelastic contribution is
$`\mathcal{S}_{\mathrm{in}}(\mathbf{q},\omega )`$ $`=`$ $`\left(\delta \theta \right)^2\left|{\displaystyle \sum _{\mathbf{R}}}e^{i(\mathbf{q}+\mathbf{Q})\cdot \mathbf{R}}f(R)\mathrm{cos}\varphi _{\mathbf{R}}\right|^2`$ (7)
$`\times 2\pi \delta (\omega -\omega _\pi ).`$ (8)
For $`\mathbf{q}=\mathbf{Q}`$ the phase factor $`e^{i(\mathbf{q}+\mathbf{Q})\cdot \mathbf{R}}`$ reduces to unity, and the cosine factor makes the different terms in the summation cancel pairwise, so that $`\mathcal{S}_{\mathrm{in}}(\mathbf{Q},\omega _\pi )=0`$. The presence of a single vortex moves the intensity away from $`\mathbf{q}=\mathbf{Q}`$, and a ring-shaped peak with radius $`\delta q\sim \pi /L`$ centered at $`\mathbf{q}=\mathbf{Q}`$ is formed, $`L\sim \sqrt{A}`$ being the size of the sample. In the semiclassical approximation the zero amplitude at $`\mathbf{q}=\mathbf{Q}`$ is a topological feature, which is independent of the detailed radial form $`f(r)`$ of the vortex. This robustness relies on the identification of the $`\pi`$ mode as being proportional to the superconducting order parameter (including its phase). Quantum fluctuations may add some amplitude at $`\mathbf{q}=\mathbf{Q}`$, but such an analysis beyond leading order is outside the scope of this work.
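The pairwise cancellation at $`\mathbf{q}=\mathbf{Q}`$ and the finite weight away from it can be illustrated by a minimal numerical sketch (illustrative only, not part of the original calculation). It evaluates the lattice sum of Eq. (8) for a single vortex, using an assumed profile $`f(r)=\mathrm{tanh}(r/\xi )`$ that merely has the correct topology: $`f(0)=0`$ at the core and $`f\rightarrow 1`$ in the bulk.

```python
import numpy as np

def s_in_amplitude(dq, xi=16.0, a=3.8, n=40):
    """|sum_R e^{i dq.R} f(R) cos(phi_R)|^2 with dq = q - Q (in 1/Angstrom).

    Assumes f(r) = tanh(r/xi), a hypothetical profile with the correct
    topology: f(0) = 0 at the vortex core and f -> 1 in the bulk.
    """
    xs = a * np.arange(-n, n + 1)
    X, Y = np.meshgrid(xs, xs)      # Cu lattice sites R = (X, Y), vortex at origin
    r = np.hypot(X, Y)
    f = np.tanh(r / xi)
    # cos(phi_R) = x/r, with the core site (r = 0) set to zero since f(0) = 0
    cos_phi = np.divide(X, r, out=np.zeros_like(X), where=r > 0)
    amp = np.sum(f * cos_phi * np.exp(1j * (dq[0] * X + dq[1] * Y)))
    return abs(amp) ** 2

center = s_in_amplitude(np.array([0.0, 0.0]))   # q = Q: exact pairwise cancellation
offset = s_in_amplitude(np.array([0.02, 0.0]))  # q slightly away from Q: finite weight
print(center, offset)
```

The zero at $`\mathbf{q}=\mathbf{Q}`$ comes from the antisymmetry of $`\mathrm{cos}\varphi _{\mathbf{R}}`$ alone, independent of the assumed radial profile, which is the topological robustness noted above.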
It is interesting to see how this result compares to predictions based on the BCS theory. The neutron scattering cross-section is given by the spin susceptibility, which for a homogeneous (vortex-free) superconductor has been calculated via the BCS-Lindhard function. Here we briefly consider how the BCS coherence factor $`[u_kv_{k+q}-v_ku_{k+q}]^2`$ appearing in the Lindhard function is modified by the presence of vortices. In a semiclassical approximation the spatial variation of the superconducting phase $`\varphi (\mathbf{r})`$ leads to a coherence factor of the form $`[u_k(\mathbf{r}_1)e^{i\varphi (\mathbf{r}_1)/2}v_{k+q}(\mathbf{r}_2)e^{-i\varphi (\mathbf{r}_2)/2}-v_k(\mathbf{r}_1)e^{-i\varphi (\mathbf{r}_1)/2}u_{k+q}(\mathbf{r}_2)e^{i\varphi (\mathbf{r}_2)/2}]^2`$. Therefore, in contrast to Eq. (8), the superconducting phase does not separate in the two spatial positions, and consequently the spatial average in general is not zero at $`\mathbf{q}=\mathbf{Q}`$. It thus appears that the above-mentioned ring-shaped peak in the dynamic structure factor is special to the SO(5) model.
We now generalize the single-vortex SO(5) result to the case of a vortex lattice. For non-overlapping vortices we construct the full superconducting order parameter as
$$\stackrel{~}{\psi }(\mathbf{r})=\stackrel{~}{f}(\mathbf{r})e^{i\stackrel{~}{\varphi }(\mathbf{r})}=\prod _j\psi (\mathbf{r}-\mathbf{r}_j),$$
(9)
where the $`\mathbf{r}_j`$ denote the positions of the vortices. The function $`\stackrel{~}{f}(\mathbf{r})=\prod _jf(\mathbf{r}-\mathbf{r}_j)`$ is $`1`$ except close to the vortices, where it dips to zero. The phase $`\stackrel{~}{\varphi }(\mathbf{r})=\sum _j\mathrm{arg}(\mathbf{r}-\mathbf{r}_j)`$ also has by construction the periodicity of the vortex lattice (modulo $`2\pi`$), and the contour integral $`\oint _Cd\mathbf{l}\cdot \mathbf{\nabla }\stackrel{~}{\varphi }(\mathbf{r})`$ equals $`2\pi n`$, where $`n`$ is the number of vortices enclosed by the contour $`C`$. In the limit of non-overlapping vortices we can capture the main physics by considering the single-vortex solution within a unit cell of the vortex lattice. We comment on the inclusion of the entire vortex lattice further on, but for now we restrict the summation in Eq. (8) to lattice sites $`\mathbf{R}`$ inside the vortex lattice unit cell. In Fig. 1 we show the result for a magnetic field $`B=10\,\mathrm{T}`$. As seen, the presence of vortices moves the intensity away from $`\mathbf{q}=\mathbf{Q}`$, and a ring-shaped peak with radius $`\delta q`$ centered at $`\mathbf{q}=\mathbf{Q}`$ is formed. We note that the only relevant length scale available is the vortex lattice constant $`d`$, and consequently we expect that $`\delta q=\pi /d`$. Since $`d=3^{-1/4}\sqrt{h/eB}`$ we consequently expect that $`\delta q=3^{1/4}\pi \sqrt{eB/h}\approx 0.008\times (\pi /a)\sqrt{B/[\mathrm{T}]}`$. Had we included all the vortex lattice unit cells in our analysis, the structure factor of the hexagonal vortex lattice would have led to a breaking of the ring in Fig. 1 into six sub-peaks sitting on top of the ring. In a real experiment these sub-peaks could easily be smeared back into a ring-shaped scattering peak if either the vortex lattice were slightly imperfect or the resolution of the spectrometer were too low. To describe the main effect of the SO(5) theory we therefore continue to use the single-unit-cell approximation.
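The quoted numerical prefactor follows from the constants alone; a short check (ours, using SI values of $`h`$ and $`e`$) reproduces the $`0.008\times (\pi /a)\sqrt{B/[\mathrm{T}]}`$ estimate and the vortex lattice constant:

```python
import math

h = 6.62607015e-34   # Planck constant (J s)
e = 1.602176634e-19  # elementary charge (C)
a = 3.8e-10          # Cu-O plane lattice constant of YBCO (m)

def delta_q_over_pi_a(B):
    """Ring radius delta_q = 3^{1/4} * pi * sqrt(eB/h), in units of pi/a."""
    return 3 ** 0.25 * a * math.sqrt(e * B / h)

def vortex_lattice_constant(B):
    """Triangular vortex lattice constant d = 3^{-1/4} * sqrt(h/eB), in meters."""
    return 3 ** (-0.25) * math.sqrt(h / (e * B))

print(delta_q_over_pi_a(1.0))         # ~0.0078, i.e. the quoted 0.008*sqrt(B/[T])
print(vortex_lattice_constant(10.0))  # ~1.5e-8 m (about 155 Angstrom) at B = 10 T
```

By construction $`\pi /d`$ and $`\delta q`$ agree identically, which is the stated $`\sqrt{B}`$ scaling of the splitting.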
In Fig. 2 we show the splitting as a function of the magnetic field, and indeed we find the expected scaling, with a pre-factor confirming that the splitting is given by $`\delta q=\pi /d`$. The full width at half maximum of the ring is given by $`\mathrm{\Gamma }\approx 3.1\times \delta q=3.1\times \pi /d`$.
In Fig. 3 we show the amplitude of the ring as a function of magnetic field. The amplitude decreases approximately as $`1/B`$ with the magnetic field, but with a small deviation. This deviation makes the $`\mathbf{q}`$-integrated intensity, which is proportional to the amplitude times $`(\delta q)^2`$, decrease as $`I(B)/I(0)\approx 1-0.004\times B/[\mathrm{T}]`$, which reflects that the area occupied by vortices increases linearly with $`B`$ and consequently the superconducting region decreases linearly with $`B`$. In fact, the reduction is given by $`\mathcal{A}^{-1}\int 2\pi r\,dr\,m^2(r)\approx 0.004\times B/[\mathrm{T}]`$, where the integral gives the effective area of the vortex. The reduction in integrated intensity should be relatively easy to observe experimentally, but it is not a unique feature of the SO(5) model. Thus, while it will help to prove that the $`\pi`$ resonance only resides in the superconducting phase, it will not clearly distinguish between different theories.
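As a rough consistency check (our own estimate, with an assumed profile rather than the numerically derived one): taking $`f(r)=\mathrm{tanh}(r/\xi )`$ gives $`m^2(r)=\mathrm{sech}^2(r/\xi )`$, so the effective vortex area is $`\int 2\pi r\,dr\,m^2(r)=2\pi \xi ^2\mathrm{ln}2`$, and dividing by the unit-cell area $`\mathcal{A}=h/2eB`$ yields a reduction per tesla of the same order as the quoted $`0.004`$:

```python
import math

xi = 16.0                  # coherence length (Angstrom)
phi0 = 2.067833848e-15     # flux quantum h/2e (Wb)
A_1T = phi0 * 1e20         # vortex unit-cell area at B = 1 T, in Angstrom^2

# Effective vortex area with the assumed profile m(r)^2 = sech(r/xi)^2:
# integral 2*pi*r*sech(r/xi)^2 dr = 2*pi*xi^2*ln(2).
area_eff = 2 * math.pi * xi ** 2 * math.log(2)

reduction_per_tesla = area_eff / A_1T
print(reduction_per_tesla)   # ~0.005, same order as the quoted 0.004
```

The residual difference simply reflects that the true $`m(r)`$ falls off somewhat faster than the assumed profile.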
In order to discuss the experimental possibilities for testing our predictions, we note that the original observation of the zero-field $`\pi`$ resonance was an experimental achievement and hence that the experiment proposed here constitutes a great challenge. However, since the first observation of the $`\pi`$ resonance in 1991, the field of neutron scattering has developed considerably. To observe the ring-like shape (see inset of Fig. 1) of the excitation would require a resolution better than $`\pi /d`$ along two directions in reciprocal space, which seems unachievable with current spectrometers. However, the overall width of the ring can in fact be measured with good resolution along just one direction in the reciprocal plane. Scans along this direction (as in Fig. 1) could then reveal a broadening of $`3.1\times \pi /d`$. With a sufficiently optimized spectrometer we believe this to be possible, and the reward is a stringent test of a quantitative prediction of the SO(5) theory. We note that Bourges et al. have investigated the $`\pi`$ resonance in a magnetic field of $`B=11.5\,\mathrm{T}`$ and report a broadening in energy, but do not report data on the $`\mathbf{q}`$-shape.
In conclusion, we have found that within the SO(5) model the $`\pi`$ resonance splits into a ring centered at $`\mathbf{q}=(\pi /a,\pi /a)`$ in the presence of a magnetic field. The ring has radius $`\pi /d`$ and a full width at half maximum of about $`3.1\times \pi /d`$, where $`d`$ is the vortex lattice constant. Consequently the splitting is found to scale with the magnetic field as $`B^{1/2}`$. We emphasize that the amplitude of the $`\pi`$ resonance is zero at $`\mathbf{q}=(\pi /a,\pi /a)`$ in the presence of a magnetic field.
We acknowledge useful discussions with J. Jensen, N. H. Andersen, A.-P. Jauho and D. F. McMorrow. H.M.R. is supported by the Danish Research Academy and H.B. by the Danish Natural Science Research Council through Ole Rømer Grant No. 9600548.
# Roughening and preroughening transitions in crystal surfaces with double-height steps
## Abstract
We investigate phase transitions in a solid-on-solid model where double-height steps as well as single-height steps are allowed. Without the double-height steps, repulsive interactions between up-up or down-down step pairs give rise to a disordered flat phase. When the double-height steps are allowed, two single-height steps can merge into a double-height step (step doubling). We find that the step doubling reduces repulsive interaction strength between single-height steps and that the disordered flat phase is suppressed. As a control parameter a step doubling energy is introduced, which is assigned to each step doubling vertex. From transfer matrix type finite-size-scaling studies of interface free energies, we obtain the phase diagram in the parameter space of the step energy, the interaction energy, and the step doubling energy.
Much attention has been paid to phase transitions in crystal surfaces since they show rich critical phenomena. The interplay between roughening and reconstruction results in interesting phases, such as a disordered flat (DOF) phase, as well as flat and rough phases. In the DOF phase the surface is filled with a macroscopic number of steps which are disordered positionally but have up-down order. Several solid-on-solid (SOS) type models have been studied, revealing that the DOF phase is stabilized by repulsive step-step interactions or by specific topological properties of surfaces, e.g., Si(001).
Most SOS-type model studies have been done in cases where the nearest-neighbor (NN) height difference, $`\mathrm{\Delta }h`$, is restricted to be at most 1 in units of the lattice constant. However, in real crystals there also appear steps with $`|\mathrm{\Delta }h|>1`$. For example, double-height steps on W(430) become more favorable than single-height steps at high temperatures since they have lower kink energy. In this paper we investigate the phase transitions in crystal surfaces in the presence of double-height steps with $`|\mathrm{\Delta }h|=2`$, especially focusing on the stability of the DOF phase. We study a generalized version of the restricted solid-on-solid (RSOS) model on a square lattice with the Hamiltonian given in Eq. (2). We study the model under periodic and anti-periodic boundary conditions, from which various interface free energies are defined. The interface free energy is calculated from numerical diagonalizations of the transfer matrix, and the phase diagram is obtained by analyzing their finite-size-scaling (FSS) properties.
In the RSOS model the surface is described by integer-valued heights $`h_{\mathbf{r}}`$ at each site $`\mathbf{r}=(n,m)`$ on a square lattice. (The lattice constant in the $`z`$ direction is set to 1.) Only the single-height step (S step) with $`|\mathrm{\Delta }h|=1`$ is allowed. It was found that the RSOS model with NN and next-nearest-neighbor (NNN) interactions between heights displays the DOF phase when the NNN coupling strength is large enough. The NNN coupling accounts for the repulsive interactions between parallel (up-up or down-down) step pairs. Parallel step pairs cost more energy than anti-parallel (up-down or down-up) step pairs.
The double-height step (D step) is incorporated into the RSOS model by relaxing the restriction on the NN height difference to $`|\mathrm{\Delta }h|=0,1,2`$. We only consider quadratic NN and NNN interactions between heights since they are sufficient to describe the key feature of the phase transitions. The total Hamiltonian is written as
$$H_0=K\sum _{\langle \mathbf{r},\mathbf{r}^{\prime }\rangle }(h_{\mathbf{r}}-h_{\mathbf{r}^{\prime }})^2+L\sum _{(\mathbf{r},\mathbf{r}^{\prime \prime })}(h_{\mathbf{r}}-h_{\mathbf{r}^{\prime \prime }})^2$$
(1)
where $`\langle \cdot ,\cdot \rangle `$ and $`(\cdot ,\cdot )`$ denote pairs of NN and NNN sites, respectively. With this Hamiltonian, a D step costs more energy than two separate S steps by an amount of $`2K+4L`$ per unit length. Even though the D steps are energetically unfavorable, we will show that their effect is not negligible. We also consider a step-doubling energy $`E_D`$ to study the effect of the step doubling. It is assigned to each vertex where two S steps merge into a D step (see Fig. 1). The electronic state at step edges may differ from that at a flat surface, which contributes to the step energy. When two S steps merge into a D step, the electronic state near the vertex may be changed. The change leads to an additional energy cost, which is reflected by $`E_D`$. When $`E_D`$ is positive (negative), it suppresses (enhances) the step doubling. The Hamiltonian including $`H_0`$ and the step-doubling energy is then given by
$$H=H_0+E_DN_D$$
(2)
where $`N_D`$ is the total number of step-doubling vertices. (For notational convenience the energy is measured in units of $`k_BT`$.) The model with the Hamiltonian Eq. (2) with $`E_D=0`$ and with the restriction $`|\mathrm{\Delta }h|=0,1`$ will be referred to as the RSOS3 model, and the model with the Hamiltonian Eq. (2) and with $`|\mathrm{\Delta }h|=0,1,2`$ will be referred to as the RSOS5 model.
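As a concrete illustration (our own sketch, not the transfer-matrix code used in the paper), the bulk part $`H_0`$ of Eq. (2) can be evaluated for any height configuration; the step-doubling term $`E_DN_D`$ is omitted here, since counting doubling vertices requires the vertex geometry of Fig. 1:

```python
import numpy as np

def rsos5_energy(h, K, L):
    """H_0 = K * sum_NN (dh)^2 + L * sum_NNN (dh)^2 for integer heights h
    on an open square lattice, with the RSOS5 restriction |dh| <= 2 on NN bonds.
    Illustrative only: the paper works on a lattice rotated by 45 degrees.
    """
    h = np.asarray(h, dtype=int)
    dx, dy = np.diff(h, axis=0), np.diff(h, axis=1)   # NN height differences
    if np.abs(dx).max(initial=0) > 2 or np.abs(dy).max(initial=0) > 2:
        raise ValueError("configuration violates |dh| <= 2")
    d1 = h[1:, 1:] - h[:-1, :-1]                      # NNN diagonal differences
    d2 = h[1:, :-1] - h[:-1, 1:]
    return (K * (np.sum(dx**2) + np.sum(dy**2))
            + L * (np.sum(d1**2) + np.sum(d2**2)))

flat = np.zeros((3, 3), int)           # flat surface: zero energy
bump = flat.copy(); bump[1, 1] = 1     # single raised site: 4 NN and 4 NNN bonds broken
print(rsos5_energy(flat, 1.0, 2.0))    # 0
print(rsos5_energy(bump, 1.0, 2.0))    # 4K + 4L = 12 for K = 1, L = 2
```

A D-step configuration (e.g. a column of heights 0, 2) is accepted by the restriction check, whereas a jump of 3 or more raises an error.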
In a continuum description phase transitions in crystal surfaces are described by the sine-Gordon model
$$H=\int d^2\mathbf{r}\left[\frac{1}{2}K_G(\mathbf{\nabla }\varphi )^2-\sum _{q=1}^{\infty }u_q\mathrm{cos}(2q\pi \varphi )\right],$$
(3)
where $`\varphi (\mathbf{r})\in (-\infty ,\infty )`$ is a real-valued local average height field, $`K_G`$ the stiffness constant, and $`u_q`$ the fugacity of the $`q`$-charge. In the renormalization-group sense $`u_1`$ is irrelevant at high temperatures, where the model renormalizes to the Gaussian model with a renormalized stiffness constant $`K_G<\frac{\pi }{2}`$ describing the rough phase. As the temperature decreases, $`u_1`$ becomes relevant at the roughening transition temperature. There appear two kinds of low-temperature phases depending on the sign of $`u_1`$: For positive $`u_1`$ the Hamiltonian favors an integer average height and hence the surface is flat. For negative $`u_1`$ it favors a half-integer average height. Since the microscopic height is integer-valued, the surface can take the half-integer average height by forming steps with up-down order, i.e., the surface is in the DOF phase. As the temperature decreases further, the sign of $`u_1`$ changes and the surface falls into the flat phase. At the roughening transition between the rough phase and the flat or DOF phase, the renormalized stiffness constant takes the universal value of $`\frac{\pi }{2}`$. The flat and DOF phases are separated by the preroughening transition characterized by $`u_1=0`$.
The phase boundaries can be obtained using FSS properties of the interface free energies. Consider the model on a finite $`N\times M`$ square lattice rotated by $`45^{\circ }`$ under various boundary conditions (BCs): The periodic BC, $`h(n+N,m)=h(n,m)+a`$ with integer $`a`$, and the anti-periodic BC, $`h(n+N,m)=h(n,m)+a(\text{mod }2)`$ with $`a=0\text{ and }1`$. They will be referred to as $`(\pm ,a)`$ BCs (the upper (lower) sign for the (anti-)periodic BCs). The free energy is obtained from the largest eigenvalue of the transfer matrix. A detailed description of the transfer-matrix setup can be found in Ref. . The boundary conditions except for the $`(+,0)`$ BC induce a frustration in the surface. The interface free energy $`\eta _\kappa `$ is defined as the excess free energy per unit length under the $`\kappa `$ BC with $`\kappa =(\pm ,a)`$ relative to that under the $`(+,0)`$ BC:
$$\eta _\kappa =-\frac{1}{M}\mathrm{ln}\frac{Z_\kappa }{Z_{(+,0)}}$$
(4)
with $`Z_\kappa `$ the partition function satisfying the $`\kappa `$-BC.
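As a toy illustration of the machinery (not the rotated-lattice transfer matrix of the paper), a one-dimensional SOS interface with the $`|\mathrm{\Delta }h|\le 2`$ restriction already shows how a free energy per column is obtained from the largest transfer-matrix eigenvalue:

```python
import numpy as np

def sos_free_energy(K, hmax=10, max_dh=2):
    """Free energy per column, f = -ln(lambda_max), of a 1D SOS interface
    with column-to-column weight exp(-K*(dh)^2) and restriction |dh| <= max_dh.
    Heights are truncated to 0..hmax; this toy model only illustrates the
    transfer-matrix method, not the 2D RSOS5 model of Eq. (2).
    """
    h = np.arange(hmax + 1)
    dh = h[:, None] - h[None, :]
    T = np.where(np.abs(dh) <= max_dh, np.exp(-K * dh ** 2), 0.0)
    lam = np.linalg.eigvalsh(T).max()   # T is symmetric: largest eigenvalue is real
    return -np.log(lam)

print(sos_free_energy(0.5))  # lower free energy: steps are cheap, interface wanders
print(sos_free_energy(5.0))  # close to 0: a stiff interface stays essentially flat
```

In the paper the analogous quantity is computed under the different BCs and the differences yield the interface free energies $`\eta _\kappa `$.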
The interface free energies have characteristic FSS properties in each phase. In the rough phase they show the universal $`1/N`$ scaling in the semi-infinite limit $`M\rightarrow \infty `$ as
$`\eta _{(+,a)}`$ $`=`$ $`{\displaystyle \frac{\zeta }{2}}{\displaystyle \frac{K_Ga^2}{N}}+o\left({\displaystyle \frac{1}{N}}\right)`$ (5)
$`\eta _{(-,a)}`$ $`=`$ $`{\displaystyle \frac{\pi \zeta }{4N}}+o\left({\displaystyle \frac{1}{N}}\right),`$ (6)
where $`K_G\le \frac{\pi }{2}`$ is the renormalized stiffness constant of the Gaussian model and $`\zeta`$ is the aspect ratio of the lattice constants in the horizontal and vertical directions. In the flat phase $`\eta _{(+,a)}`$ and $`\eta _{(-,1)}`$ are finite because at least one step is induced under the $`(+,a)`$ and $`(-,1)`$ BCs, while $`\eta _{(-,0)}`$ is exponentially small in $`N`$ since the $`(-,0)`$ BC need not induce any steps. In the DOF phase the $`(-,1)`$ BC does not induce any frustration in the step up-down order, but the $`(+,a)`$ and $`(-,0)`$ BCs do. So $`\eta _{(-,1)}`$ is exponentially small in $`N`$, while $`\eta _{(+,a)}`$ and $`\eta _{(-,0)}`$ are finite. From these FSS properties the roughening points can be estimated from
$$\eta _{(+,1)}=\frac{\pi \zeta }{4N},$$
(7)
where the universal value of $`K_G=\frac{\pi }{2}`$ at the roughening transition is used in Eq. (5). The preroughening points between the flat and the DOF phase can be estimated from the crossing behaviors of $`N\eta _{(-,0)}`$ or $`N\eta _{(-,1)}`$, which converge to zero in one phase and diverge to infinity in the other as $`N`$ grows.
The estimation of transition points using the interface free energies suffers from slow convergence due to corrections to scaling. These may smooth out the crossing behaviors of $`N\eta _{(-,0)}`$ and $`N\eta _{(-,1)}`$ at the preroughening transitions for small $`N`$. But one can safely cancel out the leading corrections to scaling by taking the ratio or the difference of the two, which can be seen as follows. Consider the lattice version of the continuum model in Eq. (3). It is obvious, using the transformation $`\varphi \rightarrow \varphi +1/2`$, that the model under the $`(-,0)`$ BC is the same as that under the $`(-,1)`$ BC with $`u_q`$ replaced by $`-u_q`$ for odd $`q`$. It yields the relation
$$\eta _{(-,0)}(u_1,u_2,u_3,\ldots )=\eta _{(-,1)}(-u_1,u_2,-u_3,\ldots ).$$
(8)
So if one neglects all higher-order contributions from $`u_{q\ge 3}`$, the location of $`u_1=0`$ is found from the condition $`\eta _{(-,0)}-\eta _{(-,1)}=0`$ or $`R=1`$ with
$$R\equiv \frac{\eta _{(-,0)}}{\eta _{(-,1)}}.$$
(9)
It is not influenced by corrections to scaling from $`u_2`$. Therefore the relation $`R=1`$ can be used to locate the $`u_1=0`$ point more accurately. One can easily see that $`R>1`$ for negative $`u_1`$ and $`R<1`$ for positive $`u_1`$. It approaches 1 in the rough phase and at the preroughening transition points, diverges in the DOF phase, and vanishes in the flat phase as $`N\rightarrow \infty `$.
In the RSOS3 model the exact point with $`u_1=0`$ is known along the line $`L=0`$. It is called the self-dual point and is located at $`K=K_{SD}=\mathrm{ln}[\frac{1}{2}(\sqrt{5}+1)]`$. From numerical studies of the RSOS3 model transfer matrix, we could reproduce the exact value of $`K_{SD}`$ with an error of less than $`10^{-12}`$ by solving $`R=1`$ even with the small system size $`N=4`$, which indicates that $`R`$ is a useful quantity for determining the preroughening transition points accurately. It will be used in the analysis of the RSOS5 model.
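Numerically, the self-dual point is the logarithm of the golden ratio; a one-line check:

```python
import math

# Self-dual point K_SD = ln[(sqrt(5)+1)/2] of the RSOS3 model along L = 0,
# i.e. the logarithm of the golden ratio.
K_SD = math.log((math.sqrt(5) + 1) / 2)
print(K_SD)   # ~0.4812
```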
We first consider the RSOS5 model in the special case $`E_D=0`$ and compare its phase diagram with that of the RSOS3 model to gain insight into the role of the D step. At low temperatures the D step is unfavorable due to its larger free energy cost compared with the S step. So the nature of the low-temperature phase in the RSOS5 model is no different from that in the RSOS3 model, i.e., it is the flat phase. At high temperatures, the surface is in the rough phase in the RSOS3 model. Since the rough phase is critical and there is no characteristic length scale, there will be no difference between S and D steps. So the RSOS5 model will also have the rough phase as its high-temperature phase. There is a significant difference in the intermediate temperature range, where the repulsive step interactions stabilize the DOF phase in the RSOS3 model. Without the D steps, parallel steps have less meandering entropy than anti-parallel ones. It is energetically unfavorable for parallel steps to approach each other closer than the interaction range, while anti-parallel steps can approach each other at will. However, if one allows the D step, two parallel S steps can approach each other and form a D step without the interaction energy cost. Provided that the energy cost of the D step is not too high, the presence of the D step effectively reduces the repulsive interaction strength and enhances the meandering entropy of parallel steps. It will then suppress the DOF phase.
To see such effects quantitatively, we calculate the ratio $`R`$ for the RSOS3 model and the RSOS5 model with $`E_D=0`$ along the line $`L=5K`$ (see Fig. 2). The strip width for the transfer matrix is $`N=4,6,8`$, and $`10`$ for the RSOS3 model and $`N=4,6`$, and $`8`$ for the RSOS5 model. The RSOS3 model displays the roughening and the preroughening transitions along the line $`L=5K`$, which is manifest in Fig. 2(a). There are three regions in which the $`N`$ dependence of $`R`$ is distinct from one another. The surface is in the rough phase with negative $`u_1`$ in the small-$`L`$ (high-temperature) region, where $`R`$ approaches $`1`$ from above. And the surface is in the DOF (flat) phase in the intermediate (large) $`L`$ region, where $`R`$ grows (vanishes). The roughening and preroughening transition points are estimated from Eq. (7) and from $`R=1`$ with $`R`$ in Eq. (9), respectively, and are represented by broken vertical lines.
The situation changes qualitatively in the RSOS5 model. As can be seen in Fig. 2(b), $`R`$ is always less than 1, and there are only two regions with distinct $`N`$ dependence of $`R`$. In the small-$`L`$ region $`R`$ approaches $`1`$ from below, and in the large-$`L`$ region $`R`$ vanishes as $`N`$ increases. These correspond to the rough phase with positive $`u_1`$ and to the flat phase, respectively. The roughening transition point is estimated from Eq. (7) and represented by the broken vertical line. This shows that the DOF phase is suppressed in the presence of the D step. We have also checked that $`R`$ is always less than 1 ($`u_1>0`$) and that the DOF phase does not appear at any values of $`K`$ and $`L`$ in the RSOS5 model with $`E_D=0`$.
We can argue why the DOF phase disappears in the presence of the D step as follows. Consider two parallel S steps merging at a vertex. If the D step is not allowed, the only possible vertex configuration is the one shown in Fig. 3(a), with an energy cost of $`2K+4L`$. On the other hand, if the D step is allowed, step doubling may occur in two ways, as shown in Fig. 3(b), each with an energy cost of $`3K+5L`$. Though the step doubling costs more energy ($`K+L`$), the entropic contribution of the step doubling ($`\mathrm{ln}2`$) may lower the free energy of parallel steps below the value without the step doubling. Our numerical results above show that the step doubling suppresses the DOF phase entirely in the $`E_D=0`$ case. In our model a D step costs more energy than two separate S steps. The two energy scales may be comparable to each other in a more realistic model, where the suppression effect will be stronger.
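The energy-entropy balance in this argument can be made explicit with a small numerical sketch. The snippet below is an illustration only: the function names, and the treatment of $`K`$ and $`L`$ as couplings measured in units of $`k_BT`$, are our assumptions rather than part of the model definition. It compares the Boltzmann-weighted free energy of a merging vertex with and without the two step-doubling configurations of Fig. 3(b):

```python
import math

def vertex_free_energy(K, L, allow_doubling):
    """Free energy (in units of k_B*T) of two parallel S steps meeting
    at a vertex.  Without step doubling there is a single configuration
    of energy 2K + 4L; allowing the D step opens two additional
    configurations of energy 3K + 5L each (illustrative counting)."""
    weights = [math.exp(-(2.0*K + 4.0*L))]
    if allow_doubling:
        weights += 2 * [math.exp(-(3.0*K + 5.0*L))]
    return -math.log(sum(weights))

def doubling_preferred(K, L):
    """The doubled channel alone beats the undoubled vertex once its
    extra energy cost K + L is outweighed by the ln(2) entropy gain."""
    return (K + L) < math.log(2.0)
```

Since the doubling configurations only add states, the full vertex free energy is always lowered; `doubling_preferred` isolates the crossover $`K+L=\mathrm{ln}2`$ at which the doubled channel dominates, consistent with the suppression of the DOF phase at small couplings (high temperature).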
From the above arguments, one finds that step doubling plays an important role in the phase transitions. We therefore introduce a new term $`E_DN_D`$ in Eq. (2), with the step-doubling energy $`E_D`$, and study the phase diagram in the parameter space $`(K,L,E_D)`$. When $`E_D<0`$ ($`>0`$), the step doubling is favored (suppressed). One can thus expect that the DOF phase does not appear for negative $`E_D`$.
For positive $`E_D`$ the step doubling is suppressed and the effect of the step interaction becomes important, so we expect the DOF phase to appear on the positive-$`E_D`$ side of the parameter space. In Fig. 4 we show the ratio $`R`$ for $`e^{-E_D}=0.2`$ along the line $`L=5K`$. Though the convergence is not as good as in Fig. 2(a), one can identify three regions as the rough, DOF, and flat phases from the $`N`$ dependence of $`R`$. The roughening point between the rough phase and the DOF phase is estimated using Eq. (7), and the preroughening point using $`R=1`$ for $`N=8`$; they are denoted by broken vertical lines.
We obtain the phase diagram in the whole parameter space using the conditions $`\eta _{(+,1)}=\frac{\pi \zeta }{4N}`$ for the roughening transition boundary and $`R=1`$ for the preroughening transition boundary. It is obtained for strip widths $`N=4,6`$, and $`8`$. Since the maximum $`N`$ we can handle is small, the convergence of the phase boundary is poor, especially as one approaches $`e^{-E_D}=0`$, but there is no qualitative change in its shape. We therefore present only the phase diagram obtained from $`N=8`$ in Fig. 5. The region under the surface represented by broken lines corresponds to the rough phase. The DOF phase is bounded by the surfaces of broken lines and solid lines, and the region above the surfaces corresponds to the flat phase. One should notice that there is a critical value of $`E_D`$, approximately $`0.071`$, below which the DOF phase does not appear.
In summary, we have studied the phase transitions in the RSOS5 model with the Hamiltonian in Eq. (2), which includes D steps as well as S steps. We have found that the D step, which has not been considered in previous works, plays an important role in the phase transitions of crystal surfaces. The presence of the D step reduces the strength of the repulsive interaction between parallel steps through step doubling, and hence suppresses the DOF phase. We have also found that the step-doubling energy is an important quantity characterizing a surface upon roughening.
I would like to thank D. Kim and M. den Nijs for helpful discussions. I wish to acknowledge the financial support of the Korea Research Foundation made in the program year 1997. This work is also supported by the KOSEF through the SRC program of SNU-CTP.
# Stability of Trions in Strongly Spin Polarized Two-Dimensional Electron Gases
## Abstract
Low-temperature magneto-photoluminescence studies of negatively charged excitons ($`X_s^{-}`$ trions) are reported for n-type modulation-doped ZnSe/Zn(Cd,Mn)Se quantum wells over a wide range of Fermi energy and spin-splitting. The magnetic composition is chosen such that these magnetic two-dimensional electron gases (2DEGs) are highly spin-polarized even at low magnetic fields, throughout the entire range of electron densities studied ($`5\times 10^{10}`$ to $`6.5\times 10^{11}`$ cm<sup>-2</sup>). This spin polarization has a pronounced effect on the formation and energy of $`X_s^{-}`$, with the striking result that the trion ionization energy (the energy separating $`X_s^{-}`$ from the neutral exciton) follows the temperature- and magnetic field-tunable Fermi energy. The large Zeeman energy destabilizes $`X_s^{-}`$ at the $`\nu =1`$ quantum limit, beyond which a new PL peak appears and persists to 60 Tesla, suggesting the formation of spin-triplet charged excitons.
Magnetic two-dimensional electron gases (2DEGs) represent a relatively new class of semiconductor quantum structure in which an electron gas is made to interact strongly with embedded magnetic moments. Typically, magnetic 2DEGs (and 2D hole gases) are realized in modulation-doped II-VI diluted magnetic semiconductor quantum wells in which paramagnetic spins (Mn<sup>2+</sup>, $`S=\frac{5}{2}`$) interact with the confined electrons via a strong $`J_{sd}`$ exchange interaction. This interaction leads to an enhanced spin splitting of the electron Landau levels which follows the Brillouin-like Mn<sup>2+</sup> magnetization, saturating in the range 10-20 meV by a few Tesla. Since the spin splitting can greatly exceed both the cyclotron ($`\sim 1`$ meV/T) and Fermi energies, these magnetic 2DEGs consist largely of spin-polarized Landau levels, and serve as interesting templates for studies of quantum transport in the absence of spin gaps. In addition, it has been recognized that this interplay between the cyclotron, Zeeman and Fermi energies may also be exploited in magneto-optical experiments to gain insights into the rich spectrum of optical excitations found in 2DEGs. The aim of this paper is to use strongly spin-polarized magnetic 2DEGs, containing a wide range of electron densities, to shed new light on the spin-dependent properties of negatively charged excitons (or trions).
Predicted in 1958 by Lampert and first observed by Kheng in 1993, the singlet state of the negatively charged exciton (the $`X_s^{-}`$ trion) consists of a spin-up and a spin-down electron bound to a single hole. The energy required to remove one of these electrons (leaving behind a neutral exciton $`X^0`$) is the $`X_s^{-}`$ ionization energy $`\mathrm{\Delta }E_X`$, usually defined as the energy between the $`X_s^{-}`$ and $`X^0`$ features in optical studies. $`\mathrm{\Delta }E_X`$ is small; typically only $`\sim 1`$ meV, $`\sim 3`$ meV, and $`\sim 6`$ meV in GaAs-, CdTe-, and ZnSe-based 2DEGs respectively. The spin-singlet nature of the two electrons in $`X_s^{-}`$ suggests that $`\mathrm{\Delta }E_X`$ (and hence trion stability) should be sensitive to the Zeeman energy and spin-polarization of the 2DEG. Here, we explicitly study highly spin-polarized magnetic 2DEGs to establish empirical correlations between Zeeman energy and trion stability over a broad range of carrier densities. In particular, magneto-photoluminescence (PL) measurements demonstrate the striking result that $`\mathrm{\Delta }E_X`$ follows the energy of the Fermi surface, which can be tuned independently from the Landau levels via the strong Zeeman dependence on temperature and applied field. The role of the Fermi and Zeeman energies in determining $`\mathrm{\Delta }E_X`$ is studied for all carrier densities, and qualitative agreement with numerical calculations is found. The giant spin-splitting in these systems is found to reduce $`\mathrm{\Delta }E_X`$, eventually driving a rapid suppression of $`X_s^{-}`$ by the $`\nu =1`$ quantum limit, beyond which the formation of a new peak in the PL (which persists to 60 T) may signify the formation of spin-triplet charged excitons.
These experiments are performed at the National High Magnetic Field Laboratory, in the generator-driven 60 Tesla Long-Pulse magnet and a 40 T capacitor-driven magnet (with 2000 ms and 500 ms pulse duration, respectively), as well as a 20 T superconducting magnet. Light is coupled to and from the samples via single optical fibers (200 $`\mu m`$ or 600 $`\mu m`$ diameter), and the excitation power is kept below 200 $`\mu W`$. Thin-film circular polarizers between the fiber and sample permit polarization-sensitive PL studies. In the pulsed magnet experiments, a high-speed CCD camera acquires complete optical spectra every 1.5 ms, enabling reconstruction of the entire spectra vs. field dependence in a single magnet shot. The magnetic 2DEG samples are MBE-grown n-type modulation-doped 105 $`\AA `$ wide single quantum wells into which Mn<sup>2+</sup> ions are "digitally" introduced in the form of equally-spaced fractional monolayers of MnSe. Specifically, the quantum wells are paramagnetic digital alloys of (Zn<sub>1-x</sub>Cd<sub>x</sub>Se)<sub>m-f</sub>(MnSe)<sub>f</sub> with x = 0.1 to 0.2, m = 5 and f = 1/8 or 1/16 effective monolayer thickness. The electron densities, determined from Shubnikov-deHaas (SdH) oscillations in transport, range between $`5\times 10^{10}`$ and $`6.5\times 10^{11}`$ cm<sup>-2</sup>. All samples show a large spin splitting at 1.5 K, with "effective" g-factors in the range $`70<g_e^{eff}(H\to 0)<100`$.
Figure 1a shows the evolution of the PL spectra in a magnetic 2DEG with a relatively low carrier density of $`1.24\times 10^{11}`$ cm<sup>-2</sup> and $`g_{eff}=73`$ at 1.5 K. This sample has a mobility of 14000 cm<sup>2</sup>/Vs and exhibits clear SdH oscillations in transport. At $`H=0`$, the data show a strong PL peak at 2.74 eV with a small satellite $`\sim 6`$ meV higher in energy. With applied field, the peaks shift rapidly to lower energy in the $`\sigma ^+`$ polarization due to the large Zeeman energy (the $`\sigma ^{-}`$ emission disappears completely at low fields in all the magnetic 2DEGs, much like their undoped counterparts). By 1 T, the satellite develops into a clear peak of comparable amplitude, and as will be verified in Fig. 2, we assign the high- and low-energy PL features to $`X^0`$ and $`X_s^{-}`$. At $`\nu =1`$ (5.5 T), the smooth evolution of the PL spectra changes abruptly as the $`X_s^{-}`$ resonance collapses and a strong, single PL peak emerges at an energy between that of $`X^0`$ and $`X_s^{-}`$, as shown. This new PL feature persists to 60 T. Fig. 1b shows the energies of the PL peaks (the data are fit to Gaussians), where the discontinuity at $`\nu =1`$ is clearly seen. The $`X_s^{-}`$ ionization energy $`\mathrm{\Delta }E_X`$ decreases and oscillates with magnetic field (inset, Fig. 1b). Anticipating Figs. 3 and 4, we note that $`\mathrm{\Delta }E_X`$ qualitatively mimics the Fermi energy in this low-density magnetic 2DEG (plotted in Fig. 1a, inset).
Owing to the giant spin splitting in this sample, the "ordinary" Landau level (LL) fan diagram of non-magnetic 2DEGs (with Landau levels evenly spaced by $`\hbar \omega _c`$, and spin splitting $`\ll \hbar \omega _c`$) is replaced by that shown in the inset of Fig. 1a. The LLs are simply calculated as
$$\epsilon _{l,s}=\hbar \omega _c(l+\frac{1}{2})+sE_ZB_{5/2}(5g_{Mn}\mu _BH/2k_BT^{*})$$
(1)
where $`l`$ is the orbital angular momentum index and $`s`$ is the electron spin ($`\pm \frac{1}{2}`$). Here, $`\hbar \omega _c`$ = 0.83 meV/T is the electron cyclotron energy, and the second term is the Zeeman energy: $`B_{5/2}`$ is the Brillouin function describing the magnetization of the $`S=\frac{5}{2}`$ Mn<sup>2+</sup> moments, $`E_Z`$ is the saturation value of the electron splitting, $`g_{Mn}`$=2.0, and $`T^{*}`$ is an empirical "effective temperature" which best fits the low-field energy shifts. We ignore the much smaller contribution to the Zeeman energy arising from the bare electron g-factor. At low fields, the spin-down LLs (solid lines) are Zeeman-shifted well below the spin-up LLs (dotted lines), leading to a highly spin-polarized electron gas - e.g., by 1 T, over 95% of the electrons are oriented spin-down in this sample. The Fermi energy $`\epsilon _F`$ (thick line) is calculated numerically by inverting the integral
$$N_e=\int _{-\infty }^{\infty }g[\epsilon ,B,T]\,f[\epsilon ,\epsilon _F,T]\,d\epsilon .$$
(2)
Here, $`N_e`$ is the known electron density, $`f[\epsilon ,\epsilon _F,T]`$ is the Fermi-Dirac distribution and $`g[\epsilon ,B,T]`$ is the density of states, taken to be the sum of Lorentzian LLs of width $`\mathrm{\Gamma }=\hbar /2\tau _s`$ centered at the energies $`\epsilon _{l,s}`$ given in Eq. 1. The electron scattering time $`\tau _s`$ is obtained from analyzing SdH oscillations, or alternatively from the measured mobility.
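As an illustration of this inversion, the sketch below implements Eqs. (1)-(2) in the zero-temperature limit (the Fermi-Dirac factor replaced by a step function) for Lorentzian-broadened levels. All parameter values here ($`E_Z`$ = 12.6 meV, $`T^{*}`$ = 4 K, $`\mathrm{\Gamma }`$ = 1.4 meV, the level cutoff) are our illustrative assumptions, not fitted values from the experiment:

```python
import math

MU_B = 0.05788   # Bohr magneton (meV/T)
K_B  = 0.08617   # Boltzmann constant (meV/K)
DEG  = 2.418e10  # Landau-level degeneracy per spin, e/h (cm^-2 T^-1)

def brillouin(J, x):
    """Brillouin function B_J(x) for the Mn2+ magnetization."""
    if x == 0.0:
        return 0.0
    a, b = (2.0*J + 1.0)/(2.0*J), 1.0/(2.0*J)
    return a/math.tanh(a*x) - b/math.tanh(b*x)

def landau_levels(B, lmax=60, hbar_wc=0.83, E_Z=12.6, g_Mn=2.0, T_star=4.0):
    """Level energies eps_{l,s} of Eq. (1) in meV; E_Z, T_star and lmax
    are illustrative assumptions."""
    bz = brillouin(2.5, 5.0*g_Mn*MU_B*B/(2.0*K_B*T_star))
    return [hbar_wc*B*(l + 0.5) + s*E_Z*bz
            for l in range(lmax) for s in (-0.5, +0.5)]

def filled_states(eF, levels, B, gamma=1.4):
    """T=0 version of Eq. (2): integrated Lorentzian DOS up to eF (cm^-2)."""
    return DEG*B*sum(0.5 + math.atan((eF - e)/gamma)/math.pi for e in levels)

def fermi_energy(n_e, B, gamma=1.4):
    """Invert Eq. (2) for eps_F (meV) by bisection; n_e in cm^-2, B in T."""
    levels = landau_levels(B)
    lo, hi = -100.0, 1000.0
    for _ in range(100):
        mid = 0.5*(lo + hi)
        if filled_states(mid, levels, B, gamma) < n_e:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)
```

With these assumptions, the filling factor $`\nu =n_e/(e B/h)`$ of the high-density sample ($`n_e=4.3\times 10^{11}`$ cm<sup>-2</sup>) passes through 1 near 18 T, close to the 17 T quoted in the text.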
Typically, identification of $`X^0`$ and $`X_s^{-}`$ relies on their polarization properties in reflection or absorption, i.e., measurements which directly probe the available density of states. However, in these magnetic 2DEGs the huge Zeeman splitting and the relatively broad spectral linewidths (resulting from the high Mn<sup>2+</sup> concentration) complicate these standard analyses. While reflectivity studies in these samples do confirm the presence of two bound states at zero field (as expected for $`X^0`$ and $`X_s^{-}`$), we rely on spin-polarized PL excitation measurements to verify the peaks in finite field, shown in Fig. 2. At fixed field and temperature, we record the PL while tuning the energy and helicity of the excitation laser (a frequency-doubled cw Ti:Sapphire laser). Since the PL is entirely $`\sigma ^+`$ polarized, it must arise from the recombination of a spin-down ($`m_s=-\frac{1}{2}`$) electron with a $`m_j=-\frac{3}{2}`$ valence hole (see diagram, Fig. 2c). If that $`m_s=-\frac{1}{2}`$ electron is part of an $`X_s^{-}`$ trion, emission will occur at the $`X_s^{-}`$ energy. Thus the probability of forming $`X_s^{-}`$ is related to the number of spin-up ($`m_s=+\frac{1}{2}`$) electrons present in the system. By specifically injecting spin-up electrons at the $`\sigma ^{-}`$ resonance, we do indeed observe an enhancement of the $`X_s^{-}`$ intensity (Fig. 2a). In contrast, injecting spin-down electrons with $`\sigma ^+`$ light can (and does) only favor the $`X^0`$ intensity (Fig. 2b). The amplitude ratio, I($`X_s^{-}`$)/I($`X^0`$), is plotted in Fig. 2c, where the effects of pumping spin-up and spin-down electrons are more easily seen. Of related interest, no difference in this ratio is observed when exciting above the ZnSe barriers (2.8 eV) - evidence that the injected spin is scrambled when the electrons spill into the well from the barrier regions.
With the aid of the diagram in Fig. 2c, the evolution of the PL spectra in Fig. 1 may be interpreted as follows: $`X_s^{-}`$ and $`X^0`$ are competing channels for exciton formation, with $`X_s^{-}`$ dominating at zero field. With a small applied field, the large spin-splitting drives a rapid depopulation of the spin-up electron bands, reducing the probability of $`X_s^{-}`$ formation and thus increasing $`X^0`$ formation, as observed. With increasing field and Zeeman energy, $`X_s^{-}`$ continues to form until it is no longer energetically favorable to bind a spin-up electron; in this case, evidently, at $`\nu =1`$, when the Fermi energy falls to the lowest LL. The PL peak which forms at $`\nu =1`$ (and persists to 60 T), with an energy between that of $`X_s^{-}`$ and $`X^0`$, represents the formation of a stable new ground state. A likely candidate is the spin-triplet state of the negatively charged exciton ($`X_t^{-}`$), wherein both bound electrons are oriented spin-down. The $`X_t^{-}`$ trion, predicted to become the ground state in nonmagnetic 2DEGs at sufficiently high magnetic field, may also form stably in highly spin-polarized magnetic 2DEGs due to Zeeman energy considerations, although no theoretical description of these effects exists at present.
We turn now to results from high-density samples. Fig. 3 shows PL spectra and energy shifts observed in a high-density magnetic 2DEG ($`n_e=4.3\times 10^{11}`$ cm<sup>-2</sup>, mobility = 2700 cm<sup>2</sup>/Vs, and $`g_e^{eff}(H\to 0)=95`$ at 1.5 K). These data are characteristic of those obtained in samples with $`n_e`$ up to $`6.5\times 10^{11}`$ cm<sup>-2</sup>, the highest density studied. Again, we observe a dominant PL peak at $`H=0`$ which shifts rapidly down in energy with applied field. However, in contrast with the low-density 2DEGs, the high-energy satellite peak does not appear until 2 Tesla (at 1.5 K). This satellite grows to a peak of comparable amplitude by 12 Tesla, and exhibits a similar sensitivity to the energy and helicity of the pump laser as seen in Fig. 2; therefore we again assign these features to $`X_s^{-}`$ and $`X^0`$. At $`\nu =1`$ (17 Tesla), these resonances collapse and are again replaced by a strong emission at an intermediate energy which persists to 60 T. The energies of the observed PL peaks at 1.5 K, 4 K, and 10 K are plotted in Fig. 3b, along with $`\mathrm{\Delta }E_X`$ (inset). Several features are notable. First, the $`X^0`$ peak only becomes visible at a particular spin splitting - not field - in support of the assertion that $`X^0`$ forms readily only when the spin-up electron subbands depopulate to a particular degree. In addition, the collapse of the $`X^0`$ and $`X_s^{-}`$ peaks occurs at $`\nu =1`$ independent of temperature, again indicating that the drop of the Fermi energy to the lowest LL destabilizes $`X_s^{-}`$. Finally, $`\mathrm{\Delta }E_X`$ again follows the calculated Fermi energy in this sample, exhibiting oscillations in phase with the Fermi edge.
This latter behavior is unexpected but appears to hold in all of our samples. In contrast with studies in nonmagnetic 2DEGs, these data clearly demonstrate the relevance of both the Zeeman energy and the Fermi energy in determining the trion ionization energy $`\mathrm{\Delta }E_X`$. In Figure 4 we explicitly study this behavior and reveal the surprising result that $`\mathrm{\Delta }E_X`$ closely follows the energy of the Fermi surface regardless of electron density, temperature, and applied field. Fig. 4a shows the measured field dependence of $`\mathrm{\Delta }E_X`$ in six magnetic 2DEGs with electron densities from $`n_e\sim 5\times 10^{10}`$ to $`2.5\times 10^{11}`$ cm<sup>-2</sup>. The data are plotted from the field at which distinct $`X^0`$ and $`X_s^{-}`$ PL peaks first appear, until the collapse of the PL spectra. $`\mathrm{\Delta }E_X`$ is seen to decrease rapidly with field at the lowest densities, but to remain roughly constant and exhibit weak oscillations at high densities. Further, a rough extrapolation (dotted lines) reveals that $`\mathrm{\Delta }E_X`$ at zero field increases from $`\sim 7`$ meV to 10 meV with carrier density. Aside from a $`\sim 7`$ meV difference in overall magnitude, these features are qualitatively reproduced by the numerical computation of the Fermi energy in these samples, plotted in the lower graph. It is natural to associate 7 meV with the "bare" ($`n_e\to 0`$) $`X_s^{-}`$ binding energy, in reasonable agreement with earlier studies in low-density, nonmagnetic ZnSe-based 2DEGs. Thus, at least at zero field, $`\mathrm{\Delta }E_X`$ reflects the "bare" $`X_s^{-}`$ binding energy plus the Fermi energy, in agreement with a recent viewpoint wherein the ionization process requires removing one electron from $`X_s^{-}`$ to the top of the Fermi sea.
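A rough numerical consistency check of this decomposition can be made from the zero-field Fermi energy of a spin-degenerate 2DEG, $`\epsilon _F=\pi \hbar ^2n_e/m^{*}`$. The sketch below evaluates it assuming an effective mass $`m^{*}\approx 0.15m_e`$ for the Zn(Cd)Se well; this mass value is our assumption, and the exact value is sample dependent:

```python
import math

HBAR = 1.0546e-34   # J*s
M_E  = 9.109e-31    # electron mass (kg)
MEV  = 1.602e-22    # J per meV

def fermi_energy_2d(n_e_cm2, m_star_ratio=0.15):
    """Zero-field Fermi energy (meV) of a spin-degenerate 2DEG,
    eps_F = pi*hbar^2*n_e/m*; m_star_ratio = m*/m_e is an assumption."""
    n_m2 = n_e_cm2*1.0e4              # convert cm^-2 -> m^-2
    return math.pi*HBAR**2*n_m2/(m_star_ratio*M_E)/MEV
```

With these inputs one obtains roughly 7 meV at $`n_e=4.3\times 10^{11}`$ cm<sup>-2</sup>, comparable to the Fermi energy quoted for the high-density sample; the residual difference is within the uncertainty of the assumed effective mass.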
In nonzero field, the Zeeman energy reduces the $`X_s^{-}`$ ionization energy. The explicit temperature dependence of $`\mathrm{\Delta }E_X`$ in the low-density magnetic 2DEG is particularly telling (Fig. 4b): here, the small Fermi energy should play a minimal role ($`\epsilon _F\sim 1.5`$ meV $`\ll `$ 9 meV total spin splitting), and the data should directly reveal the $`X_s^{-}`$ ionization energy. At different temperatures, $`\mathrm{\Delta }E_X`$ decreases from its zero-field value of $`\sim 7.5`$ meV at a rate which depends on the Brillouin-like spin splitting. In this sample the 2DEG is almost immediately completely spin-polarized - no gas of "spin-up" electrons remains - and thus the drop in $`\mathrm{\Delta }E_X`$ must reflect the influence of the Zeeman energy. Physically, the energy of the spin-up electron in $`X_s^{-}`$ increases with spin splitting, so that it becomes more weakly bound, reducing $`\mathrm{\Delta }E_X`$ by roughly half of the total Zeeman splitting until $`X_s^{-}`$ destabilizes. Within this scenario, however, the rolloff in the slope of the data towards zero field is puzzling, possibly indicating that the energy between the Fermi edge and the spin-up subbands (rather than the Zeeman energy itself) may be the relevant parameter, as the calculated Fermi energy shows precisely the same behavior. No theoretical framework for this behavior exists at present. Alternatively, Fig. 4c shows typical data from the high electron density sample, where the Fermi energy (7.7 meV) is comparable to the total spin splitting (12.6 meV). Here, the measured $`\mathrm{\Delta }E_X`$ clearly follows the oscillations of the calculated Fermi energy, with no clear indication of the role played by the Zeeman energy. We pose these questions for future theoretical models of $`X_s^{-}`$ formation, which must necessarily include the Zeeman energy and the influence of a finite Fermi energy.
In conclusion, we have presented a systematic study of charged exciton formation in strongly magnetic 2DEGs, wherein the giant spin splitting dominates the cyclotron energy and the electron gas is highly spin-polarized. The trion ionization energy $`\mathrm{\Delta }E_X`$ tracks the energy of the Fermi edge regardless of electron density, temperature or applied field, highlighting the important roles played by both the Fermi and Zeeman energies. With increasing electron density, the data suggest that $`\mathrm{\Delta }E_X`$ (at least at zero magnetic field) reflects the "bare" $`X_s^{-}`$ ionization energy of $`\sim 7`$ meV plus the Fermi energy. Studies in low density samples show that the "bare" $`X_s^{-}`$ binding energy is reduced by an amount proportional to the Zeeman energy, and in high density samples $`\mathrm{\Delta }E_X`$ follows the oscillations of the Fermi surface as it moves between Landau levels. Quantitative interpretation of these data must await a more complete theory of $`X_s^{-}`$ formation in electron gases. This work is supported by the NHMFL and NSF-DMR 9701072 and 9701484.
# Vector Mesons in Medium and Dileptons in Heavy-Ion Collisions
## 1 Introduction
The investigation of hadron properties inside atomic nuclei constitutes one of the traditional research objectives in nuclear physics. However, in terms of the underlying theory of strong interactions (QCD), even the description of the nuclear ground state has remained elusive so far. Valuable insights can be expected from a careful study of the transition regimes between hadronic and quark-gluon degrees of freedom. E.g., in electron-nucleus scattering experiments the corresponding control variable is the momentum transfer, whereas heavy-ion reactions, performed over a wide range of collision energies, aim at compressing and/or heating normal nuclear matter to witness potential phase transitions into a Quark-Gluon Plasma (QGP).
Among the key properties of the low-energy sector of strong interactions is the (approximate) chiral symmetry of the QCD Lagrangian and its spontaneous breaking in the vacuum. This is evident from such important phenomena as the build-up of a chiral condensate and constituent quark mass ($`M_q\sim 0.4`$ GeV), or the large mass splitting of $`\mathrm{\Delta }M\sim 0.5`$ GeV between "chiral partners" in the hadron spectrum (such as $`\pi (140)`$-$`\sigma (400-1200)`$, $`\rho (770)`$-$`a_1(1260)`$ or $`N(940)`$-$`N^{*}(1535)`$). It also indicates that medium modifications of hadron properties can be viewed as precursors of chiral symmetry restoration.
In this talk the focus will be on the vector (V) and axialvector (A) channels. The former is special in that it directly couples to the electromagnetic current (i.e., real and virtual photons), at which point it becomes "immune" to (strong) final state interactions, thus providing direct experimental access to in-medium properties of vector mesons, e.g., through photoabsorption/-production on nuclei, or dilepton ($`e^+e^{-}`$, $`\mu ^+\mu ^{-}`$) spectra in heavy-ion reactions. The key issue is then to relate the medium effects to mechanisms of chiral restoration. This necessitates the simultaneous consideration of the axialvector channel, which, however, largely has to rely on theoretical analyses.
This talk is structured as follows: Sect. 2 is devoted to vector-meson properties in nuclear matter, Sect. 3 contains applications to heavy-ion reactions and Sect. 4 finishes with conclusions. A more complete discussion of the presented topics can be found in a recent review .
## 2 (Axial-) Vector Mesons in Cold Nuclear Matter
### 2.1 Correlators and Duality Threshold
The general quantity that is common to most theoretical approaches is the current-current correlation function which in the (axial-) vector channel is defined by
$$\mathrm{\Pi }_{V,A}^{\mu \nu }(q)=i\int d^4x\,e^{iqx}\langle 0|\mathcal{T}\,j_{V,A}^\mu (x)j_{V,A}^\nu (0)|0\rangle .$$
(1)
For simplicity we will concentrate on the (prevailing) isospin $`I=1`$ (isovector) projections
$$j_{I=1}^\mu =\frac{1}{2}(\overline{u}\mathrm{\Gamma }^\mu u-\overline{d}\mathrm{\Gamma }^\mu d)\quad \mathrm{with}\quad \mathrm{\Gamma }_V^\mu =\gamma ^\mu ,\ \mathrm{\Gamma }_A^\mu =\gamma _5\gamma ^\mu .$$
(2)
At sufficiently high invariant mass both correlators can be described by their (identical) perturbative forms which read (up to $`\alpha _S`$ corrections)
$$\mathrm{Im}\mathrm{\Pi }_{V,I=1}^{\mu \nu }=\mathrm{Im}\mathrm{\Pi }_{A,I=1}^{\mu \nu }=(g^{\mu \nu }-\frac{q^\mu q^\nu }{M^2})\frac{M^2}{12}\frac{N_c}{2},\quad M\ge M_{dual}$$
(3)
($`M^2=q_0^2-\vec{q}\,^2`$). At low invariant masses the vector correlator is accurately saturated by the (hadronic) $`\rho `$ spectral function within the Vector Dominance Model (VDM), i.e.,
$`\mathrm{Im}\mathrm{\Pi }_{V,I=1}^{\mu \nu }`$ $`=`$ $`{\displaystyle \frac{(m_\rho ^{(0)})^4}{g_\rho ^2}}\mathrm{Im}D_\rho ^{\mu \nu },\quad M\le M_{dual}`$ (4)
$`\mathrm{Im}\mathrm{\Pi }_{A,I=1}^{\mu \nu }`$ $`=`$ $`{\displaystyle \frac{(m_{a_1}^{(0)})^4}{g_{a_1}^2}}\mathrm{Im}D_{a_1}^{\mu \nu }-f_\pi ^2\pi \delta (M^2-m_\pi ^2)q^\mu q^\nu ,\quad M\le M_{dual}`$ (5)
where the corresponding relation in the axialvector channel involves the $`a_1`$ meson together with the explicit pion pole. The spontaneous breaking of chiral symmetry (SBCS) manifests itself both in the difference of the $`a_1`$ and $`\rho `$ spectral functions and in the additional pionic piece in $`\mathrm{\Pi }_A`$ (notice that $`f_\pi `$ is another order parameter of SBCS). In the vacuum, the transition from the hadronic to the partonic regime ("duality threshold") is characterized by the onset of perturbative QCD around $`M_{dual}\sim 1.5`$ GeV. In the medium, chiral restoration requires the degeneracy of the $`V`$- and $`A`$-correlators over the entire mass range.
### 2.2 Model-Independent Results: V-A Mixing and Sum Rules
In a dilute gas the prevailing medium effect can be computed via low-density expansions. Using soft pion theorems and current algebra Krippa extended an earlier finite-temperature analysis to the finite-density case to obtain
$`\mathrm{\Pi }_V^{\mu \nu }(q)`$ $`=`$ $`(1-\xi )\mathrm{\Pi }_V^{\mu \nu }(q)+\xi \mathrm{\Pi }_A^{\mu \nu }(q)`$
$`\mathrm{\Pi }_A^{\mu \nu }(q)`$ $`=`$ $`(1-\xi )\mathrm{\Pi }_A^{\mu \nu }(q)+\xi \mathrm{\Pi }_V^{\mu \nu }(q)`$ (6)
i.e., the leading density effect is a mere "mixing" of the vacuum correlators $`\mathrm{\Pi }^{\mu \nu }`$ appearing on the right-hand sides. The "mixing" parameter
$$\xi =\frac{4\varrho _N\overline{\sigma }_{\pi N}}{3f_\pi ^2m_\pi ^2}$$
(7)
($`\varrho _N`$: nucleon density) is determined by the "long-range" part of the $`\pi N`$ sigma term,
$$\overline{\sigma }_{\pi N}=4\pi ^3m_\pi ^2\langle N|\pi ^2|N\rangle \simeq 20\,\mathrm{MeV}.$$
(8)
Chanfray et al. pointed out that $`\overline{\sigma }_{\pi N}`$ is in fact governed by the well-known nucleon- and delta-hole excitations in the pion cloud of the $`\rho `$ (or $`a_1`$) meson, which have been thoroughly studied within the hadronic models to be discussed in the following section. A naive extrapolation of Eq. (7) to the chiral restoration point, where $`\xi =1/2`$, gives $`\varrho _c\simeq 2.5\varrho _0`$, which is not unreasonable. Nonetheless, as we will see below, realistic models exhibit substantial medium modifications beyond the mixing effect.
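The quoted estimate is easily reproduced numerically. In the sketch below (an illustration; the standard values $`f_\pi =93`$ MeV, $`m_\pi =139.6`$ MeV and $`\varrho _0=0.16`$ fm<sup>-3</sup> are inserted by us), the density is converted to natural units with $`(\hbar c)^3`$ before evaluating Eq. (7):

```python
HBARC = 197.327      # hbar*c (MeV*fm)
F_PI  = 93.0         # pion decay constant (MeV)
M_PI  = 139.6        # pion mass (MeV)
SIGMA = 20.0         # long-range piece of the pi-N sigma term (MeV)
RHO_0 = 0.16         # nuclear saturation density (fm^-3)

def mixing_parameter(rho_fm3):
    """Eq. (7): xi = 4*rho*sigma/(3*f_pi^2*m_pi^2), rho given in fm^-3."""
    rho_mev3 = rho_fm3 * HBARC**3        # convert fm^-3 -> MeV^3
    return 4.0 * rho_mev3 * SIGMA / (3.0 * F_PI**2 * M_PI**2)

xi0 = mixing_parameter(RHO_0)            # mixing at saturation density
rho_c = 0.5 / xi0 * RHO_0                # density where xi reaches 1/2
```

One finds $`\xi (\varrho _0)\approx 0.19`$ and $`\varrho _c\approx 2.6\varrho _0`$, consistent with the $`\simeq 2.5\varrho _0`$ quoted above.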
Similar in spirit, i.e., combining low-density expansions with chiral constraints, is the so-called master formula approach applied in ref. : chiral Ward identities including the effects of explicit breaking are used to express medium corrections to the correlators through empirically inferred $`\pi N`$, $`\rho N`$ (or $`\gamma N`$, etc.) scattering amplitudes times density. Resummations to all orders in density cannot be performed in this framework either.
Model-independent relations which are in principle valid to all orders in density are provided by sum rules. Although typically of little predictive power, their evaluation in model calculations can give valuable insights. One example are the well-known QCD sum rules which have been used to analyze vector-meson spectral functions in refs. . It has been found, e.g., that the generic decrease of the quark and gluon condensates on the right-hand side (r.h.s.) is compatible with the phenomenological (left-hand) side if either (i) the vector-meson masses decrease (together with small resonance widths), or (ii) both width and mass increase (as found in most microscopic models).
Another example is provided by the sum rules derived by Weinberg , generalized to the in-medium case in ref. . The first Weinberg sum rule, e.g., connects the pion decay constant to the integrated difference between the $`V`$- and $`A`$-correlators:
$$f_\pi ^2=\int _0^{\infty }\frac{dq_0^2}{\pi (q_0^2-q^2)}\left[\mathrm{Im}\mathrm{\Pi }_V(q_0,q)-\mathrm{Im}\mathrm{\Pi }_A(q_0,q)\right]$$
(9)
for arbitrary three-momentum $`q`$ (here, the pionic piece has been explicitly separated out from $`\mathrm{\Pi }_A`$). We will come back to this relation below.
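At zero three-momentum, a quick consistency check of this relation can be made in the narrow-resonance approximation, where the $`\rho `$ and $`a_1`$ spectral functions are reduced to poles with residues $`F_\rho ^2`$ and $`F_{a_1}^2`$, so that Eq. (9) becomes $`f_\pi ^2=F_\rho ^2-F_{a_1}^2`$. The sketch below is our illustration: the KSFR value $`F_\rho ^2=2f_\pi ^2`$ and the second Weinberg sum rule $`F_\rho ^2m_\rho ^2=F_{a_1}^2m_{a_1}^2`$ are standard additional inputs not quoted in the text.

```python
import math

F_PI  = 93.0     # pion decay constant (MeV)
M_RHO = 775.0    # rho mass (MeV)

# Pole saturation of the first Weinberg sum rule: f_pi^2 = F_rho^2 - F_a1^2
F_RHO2 = 2.0 * F_PI**2          # KSFR relation (assumed input)
F_A12  = F_RHO2 - F_PI**2       # first Weinberg sum rule -> F_a1^2 = f_pi^2

# The second Weinberg sum rule, F_rho^2*m_rho^2 = F_a1^2*m_a1^2, then fixes
m_a1 = M_RHO * math.sqrt(F_RHO2 / F_A12)   # = sqrt(2)*m_rho
```

The resulting $`m_{a_1}\simeq 1.1`$ GeV lies reasonably close to the observed $`a_1(1260)`$, illustrating how tightly the sum rules tie the axialvector spectrum to $`f_\pi `$.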
### 2.3 Hadronic Models and Experimental Constraints
Among the most spectacular predictions for the behavior of vector mesons in medium is the Brown-Rho Scaling hypothesis . By imposing QCD scale invariance on a chiral effective Lagrangian at finite density and applying a mean-field approximation, it was conjectured that all light hadron masses (with the exception of the symmetry-protected Goldstone bosons) drop with increasing density following an approximately universal scaling law. The scaling also encompasses the pion decay constant (as well as an appropriate power of the quark condensate) and therefore establishes a direct link to chiral symmetry restoration, realized through the vanishing of all light hadron masses.
More conservative approaches rely on many-body techniques to calculate the selfenergy contributions to the vector-meson propagators $`D_V`$ arising from interactions with the surrounding matter particles (nucleons). These are computed from gauge-invariant (vector-current conserving) as well as chirally symmetric Lagrangians. The $`\rho `$ propagator, e.g., takes the form
$$D_\rho ^{L,T}(q_0,q;\varrho _N)=\left[M^2-(m_\rho ^{(0)})^2-\mathrm{\Sigma }_{\rho \pi \pi }^{L,T}(q_0,q;\varrho _N)-\mathrm{\Sigma }_{\rho BN}^{L,T}(q_0,q;\varrho _N)\right]^{-1}$$
(10)
for both transverse and longitudinal polarization states (which in matter, where Lorentz-invariance is lost, differ for $`q>0`$). $`\mathrm{\Sigma }_{\rho \pi \pi }`$ encodes the medium modifications in the pion cloud (through $`NN^{-1}`$ and $`\mathrm{\Delta }N^{-1}`$ bubbles, so-called "pisobars") , and $`\mathrm{\Sigma }_{\rho BN}`$ stems from direct "rhosobar" excitations of either $`S`$-wave ($`N(1520)N^{-1}`$, $`\mathrm{\Delta }(1700)N^{-1}`$, $`\mathrm{}`$) or $`P`$-wave type ($`\mathrm{\Delta }N^{-1}`$, $`N(1720)N^{-1}`$, $`\mathrm{}`$). The parameters of the interaction vertices (coupling constants and form factor cutoffs) can be estimated from free decay branching ratios of the involved resonances or more comprehensive scattering data (e.g., $`\pi N\to \rho N`$ , or $`\gamma N`$ absorption) which determine the low-density properties of the spectral functions. Additional finite-density constraints can be obtained from the analysis of photoabsorption data on nuclei. Invoking the VDM, the total photoabsorption cross section can be readily related to the imaginary part of the in-medium vector-meson selfenergy in the zero mass limit (needed for the coupling to real photons). An example of such a calculation is displayed in Fig. 2 where a reasonable fit to existing data on various nuclei has been achieved.
The low-density limit (represented by the long-dashed line in Fig. 2) cannot reproduce the disappearance of, in particular, the $`N`$(1520) seen in the data. A selfconsistent calculation to all orders in density , however, generates sufficiently large in-medium widths, on the order of $`\mathrm{\Gamma }_{N(1520)}^{med}\simeq `$ 200-300 MeV (resulting in the full line).
Fig. 2 shows the final result for the $`\rho `$ spectral function which has been subjected to the aforementioned constraints. The apparent strong broadening is consistent with other calculations . Similar features, albeit less pronounced, emerge within analogous treatments for $`\omega `$ and $`\varphi `$ mesons .
Let us now return to the question of what these findings might imply for chiral restoration. In a recent work by Kim et al. an effective chiral Lagrangian including $`a_1`$-meson degrees of freedom has been constructed. Medium modifications of the latter are introduced by an "$`a_1`$-sobar" through $`N(1900)N^{-1}`$ excitations to represent the chiral partner of the $`N(1520)N^{-1}`$ state. Pertinent (schematic) two-level models have been employed for both the $`\rho `$ and $`a_1`$ spectral densities which, in turn, have been inserted into the Weinberg sum rule, eq. (9) (supplemented by perturbative high energy continua).
The resulting density-dependence of the pion decay constant, displayed in Fig. 3, exhibits an appreciable decrease of about 30% at $`\varrho _N=\varrho _0`$, which bears some sensitivity to the assumed branching ratio of the $`N(1900)\to Na_1`$ decay (or $`N(1900)Na_1`$ coupling constant). However, the mechanism is likely to be robust: due to the low-lying $`\rho `$-$`N(1520)N^{-1}`$ and $`a_1`$-$`N(1900)N^{-1}`$ excitations, accompanied by a broadening of the elementary resonance peaks, the $`\rho `$ and $`a_1`$ spectral densities increasingly overlap, thus reducing $`f_\pi `$.
## 3 Electromagnetic Observables in Heavy-Ion Reactions
In central collisions of heavy nuclei at (ultra-) relativistic energies (ranging from $`p_{lab}`$=1-200 AGeV in current experiments to $`\sqrt{s}`$=0.2-10 ATeV at RHIC and LHC) hot and dense hadronic matter is created over extended time periods of about 20 fm/c. Local thermal equilibrium is probably reached within the first fm/c, after which the "fireball" expands and cools until the strong interactions cease ("thermal freezeout") and the particles stream freely to the detector. Electromagnetic radiation (real and virtual photons) is emitted continuously throughout this evolution, since it decouples from the strongly interacting matter at its point of creation.
The thermal production rate of $`e^+e^{}`$ pairs per unit 4-volume can be expressed through the electromagnetic current correlation function (summed over all isospin states $`I`$=0,1),
$$\frac{dN_{ee}^{th}}{d^4xd^4q}=\frac{\alpha ^2}{\pi ^3M^2}f^B(q_0;T)\frac{1}{3}(\mathrm{Im}\mathrm{\Pi }_{em}^L+2\mathrm{Im}\mathrm{\Pi }_{em}^T)$$
(11)
($`f^B`$: Bose distribution function; a similar expression holds for photons with $`M\to 0`$). Fig. 4 shows that the medium effects in the $`\rho `$ propagator (including interactions with nucleons as well as thermal pions, kaons, etc.) induce a substantial reshaping of the emission rate (full lines) as compared to free $`\pi \pi `$ annihilation (dashed line) already at rather moderate temperatures and densities (left panel). In fact, under conditions close to the expected phase boundary (right panel) the $`\rho `$ resonance is completely "melted" and the hadronic dilepton production rate is very reminiscent of the one from a perturbative Quark-Gluon Plasma (dashed-dotted lines in Fig. 4) down to rather low invariant masses of about 0.5 GeV ($`\alpha _S`$ corrections to the partonic rate might improve the agreement at still lower masses). It has been suggested to interpret this as a lowering of the in-medium quark-hadron duality threshold as a consequence of the approach towards chiral restoration.
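The rate formula can be turned into numbers directly. The sketch below evaluates eq. (11) for a toy spectral function — a single rho-like Breit-Wigner with made-up parameters, taking $`\mathrm{Im}\mathrm{\Pi }^L=\mathrm{Im}\mathrm{\Pi }^T`$ so the polarization average reduces to one function; the full in-medium correlator of the text is, of course, far richer.

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant

def f_bose(q0, temp):
    """Bose distribution factor f^B(q0; T)."""
    return 1.0 / (math.exp(q0 / temp) - 1.0)

def im_pi_em(M, m_res=0.77, width=0.15, strength=0.5):
    """Toy rho-like Breit-Wigner for Im Pi_em (GeV^2); parameters are
    hypothetical stand-ins for the in-medium spectral function."""
    return strength * M**2 * m_res * width / ((M**2 - m_res**2) ** 2 + (m_res * width) ** 2)

def dilepton_rate(M, q, temp):
    """dN/d^4x d^4q from eq. (11), with Im Pi^L = Im Pi^T assumed,
    so (1/3)(Im Pi^L + 2 Im Pi^T) reduces to Im Pi_em."""
    q0 = math.sqrt(M**2 + q**2)
    return ALPHA**2 / (math.pi**3 * M**2) * f_bose(q0, temp) * im_pi_em(M)

rate_peak = dilepton_rate(M=0.77, q=0.3, temp=0.150)  # on the rho peak
rate_high = dilepton_rate(M=1.50, q=0.3, temp=0.150)  # well above the resonance
```

Even this crude model reproduces the qualitative shape: the rate is strongly peaked at the resonance mass and cut off by the Bose factor at large $`q_0`$.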
The total thermal yield in a heavy-ion reaction is obtained by a space-time integration of eq. (11) over the density-temperature profile for a given collision system, modeled, e.g., within transport or hydrodynamic simulations. At CERN-SpS energies (160-200 AGeV) this "thermal" component is dominant over (or at least competitive with) final state hadron decays (at low $`M`$) and hard initial processes such as Drell-Yan annihilation (at high $`M`$) in the invariant mass range $`M`$ 0.2-2 GeV. A consistent description of the measured data is possible once hadronic many-body effects are included , cf. Figs. 6 and 6. However, at this point also the dropping mass scenario is compatible with the data (cf. dashed curve in Fig. 6).
Optimistically one may conclude that strongly interacting matter close to the hadron-QGP phase boundary has been observed at the CERN-SpS. Other observables such as hadro-chemistry or $`J/\mathrm{\Psi }`$ suppression also support this scenario. Nonetheless, further data are essential to substantiate the present status and resolve the open questions.
## 4 Conclusions
This talk has focused on medium modifications of vector mesons in connection with chiral symmetry restoration in hot/dense matter. In accordance with a variety of empirical information, hadronic spectral functions are characterized by the appearance of low-lying excitations as well as a broadening of the resonance structures. A schematic treatment of the $`a_1`$ meson on a similar footing shows that these features encode an approach towards chiral restoration in nuclear matter, as signaled by the decrease of the pion decay constant when evaluating the first Weinberg sum rule.
The application of these model calculations to electromagnetic observables as measured in recent heavy-ion experiments at the CERN-SpS leads to a reasonable description of the data from 0 to 2 GeV in invariant mass. The structureless in-medium hadronic dilepton production rates resemble perturbative $`q\overline{q}`$ annihilation in the vicinity of the expected phase boundary indicating that chiral restoration might be realized through a reduction of the quark-hadron duality threshold which in vacuum is located around 1.5 GeV. It would also corroborate the interrelation between temperature/density and momentum transfer in the transition from hadronic to partonic degrees of freedom.
In the near future further dilepton data will be taken by the PHENIX experiment at RHIC (advancing to a new energy frontier) as well as the precision experiment HADES at GSI. Thus electromagnetic observables can be expected to continue the progress in our understanding of strong interaction physics.
Acknowledgments
It is a pleasure to thank G.E. Brown, E.V. Shuryak and H. Sorge for collaboration and many fruitful discussions. |
# Path Integral Monte Carlo Calculation of the Deuterium Hugoniot
## I Introduction
Recent laser shock wave experiments on pre-compressed liquid deuterium have produced an unexpected equation of state for pressures up to 3.4 Mbar. It was found that deuterium has a significantly higher compressibility than predicted by the semi-empirical equation of state based on plasma many-body theory and lower pressure shock data (see SESAME model ). These experiments have triggered theoretical efforts to understand the state of compressed hydrogen in this range of density and temperature, made difficult because the experiments are in a regime where strong correlations and a significant degree of electron degeneracy are present. At this high density, it is problematic even to define the basic units such as molecules, atoms, free deuterons and electrons. Conductivity measurements as well as theoretical estimates suggest that in the experiment, a state of significant but not complete metalization was reached.
A variety of simulation techniques and analytical models have been advanced to describe hydrogen in this particular regime. There are ab initio methods such as restricted path integral Monte Carlo simulations (PIMC) and density functional theory molecular dynamics (DFT-MD) . Further, there are models that minimize an approximate free energy function constructed from known theoretical limits with respect to the chemical composition, which work very well in certain regimes. The most widely used include .
We present new results from PIMC simulations; what emerges is a relative consensus of theoretical calculations. First, we performed a finite size and time step study using a parallelized PIMC code that allowed us to simulate systems with $`N_P=64`$ pairs of electrons and deuterons and to decrease the time step from $`\tau ^{-1}=10^6\mathrm{K}`$ to $`\tau ^{-1}=8\cdot 10^6\mathrm{K}`$. More importantly, we studied the effect of the nodal restriction on the hugoniot.
## II Restricted path integrals
The density matrix of a quantum system at temperature $`k_BT=1/\beta `$ can be written as an integral over all paths $`๐_t`$,
$$\rho (๐_0,๐_\beta ;\beta )=\frac{1}{N!}\sum _{๐ซ}(\pm 1)^๐ซ\underset{๐_0\to ๐ซ๐_\beta }{\int }๐๐_t\,e^{-S[๐_t]}.$$
(1)
$`๐_t`$ stands for the entire paths of $`N`$ particles in $`3`$ dimensional space $`๐_t=(๐ซ_{1t},\mathrm{},๐ซ_{Nt})`$ beginning at $`๐_0`$ and connecting to $`๐ซ๐_\beta `$. $`๐ซ`$ labels the permutation of the particles. The upper sign corresponds to a system of bosons and the lower one to fermions. For non-relativistic particles interacting with a potential $`V(๐)`$, the action of the path $`S[๐_t]`$ is given by,
$$S[๐_t]=\int _0^\beta ๐t\left[\frac{m}{2}\left|\frac{d๐(t)}{\hbar dt}\right|^2+V(๐(t))\right]+\text{const}.$$
(2)
One can estimate quantum mechanical expectation values using Monte Carlo simulations with a finite number of imaginary time slices $`M`$ corresponding to a time step $`\tau =\beta /M`$.
For fermionic systems the integration is complicated due to the cancellation of positive and negative contributions to the integral (the fermion sign problem). It can be shown that the efficiency of the straightforward implementation scales like $`e^{-2\beta Nf}`$, where $`f`$ is the free energy difference per particle of a corresponding fermionic and bosonic system . In , it has been shown that one can evaluate the path integral by restricting the path to only specific positive contributions. One introduces a reference point $`๐^{}`$ on the path that specifies the nodes of the density matrix, $`\rho (๐,๐^{},t)=0`$. A node-avoiding path for $`0<t\le \beta `$ neither touches nor crosses a node: $`\rho (๐(t),๐^{},t)\ne 0`$. By restricting the integral to node-avoiding paths,
$$\rho _F(๐_\beta ,๐^{};\beta )=\int ๐๐_0\,\rho _F(๐_0,๐^{};0)\underset{๐_0\to ๐_\beta \in \mathrm{{\rm Y}}(๐^{})}{\int }๐๐_t\,e^{-S[๐_t]},$$
(4)
($`\mathrm{{\rm Y}}(๐^{})`$ denotes the restriction) the contributions are positive and therefore PIMC represents, in principle, a solution to the sign problem. The method is exact if the exact fermionic density matrix is used for the restriction. However, the exact density matrix is only known in a few cases. In practice, applications have approximated the fermionic density matrix, by a determinant of single particle density matrices,
$$\rho (๐,๐^{};\beta )=\left|\begin{array}{ccc}\rho _1(๐ซ_1,๐ซ_1^{};\beta )& \mathrm{}& \rho _1(๐ซ_N,๐ซ_1^{};\beta )\\ \mathrm{}& \mathrm{}& \mathrm{}\\ \rho _1(๐ซ_1,๐ซ_N^{};\beta )& \mathrm{}& \rho _1(๐ซ_N,๐ซ_N^{};\beta )\end{array}\right|.$$
(5)
This approach has been extensively applied using the free particle nodes,
$$\rho _1(๐ซ,๐ซ^{},\beta )=(4\pi \lambda \beta )^{-3/2}\text{exp}\left\{-(๐ซ-๐ซ^{})^2/4\lambda \beta \right\}$$
(6)
with $`\lambda =\hbar ^2/2m`$, including applications to dense hydrogen . It can be shown that for temperatures larger than the Fermi energy the interacting nodal surface approaches the free particle (FP) nodal surface. In addition, in the limit of low density, exchange effects are negligible; the nodal constraint then has a small effect on the path, and therefore its precise shape is not important. The FP nodes also become exact in the limit of high density when kinetic effects dominate over the interaction potential. However, for the densities and temperatures under consideration, interactions could have a significant effect on the fermionic density matrix.
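Two elementary properties of the kernel in eq. (6) underlie its use for slicing paths and restricting nodes: normalization and the convolution (semigroup) rule. The check below uses the 1D analogue of the kernel (exponent $`-1/2`$ instead of $`-3/2`$, since the 3D Gaussian factorizes); grid and parameter values are arbitrary.

```python
import numpy as np

# 1D free-particle density matrix (cf. eq. (6)):
#   rho_1(x, x'; beta) = (4 pi lam beta)^(-1/2) exp(-(x - x')^2 / (4 lam beta))
# Check (i): int dx' rho_1(x, x'; beta) = 1.
# Check (ii): int dy rho_1(x, y; b1) rho_1(y, x'; b2) = rho_1(x, x'; b1 + b2).

lam = 0.5   # lambda = hbar^2 / 2m in the units of the text

def rho_free(x, xp, beta):
    return np.exp(-(x - xp) ** 2 / (4.0 * lam * beta)) / np.sqrt(4.0 * np.pi * lam * beta)

x = np.linspace(-12.0, 12.0, 2401)
h = x[1] - x[0]

norm = rho_free(0.0, x, 0.5).sum() * h                          # check (i)

b1, b2 = 0.3, 0.7
val = (rho_free(0.0, x, b1) * rho_free(x, 1.0, b2)).sum() * h   # check (ii)
exact_val = rho_free(0.0, 1.0, b1 + b2)
```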
To gain some quantitative estimate of the possible effect of the nodal restriction on the thermodynamic properties, it is necessary to try an alternative. In addition to FP nodes, we used a restriction taken from a variational density matrix (VDM) that already includes interactions and atomic and molecular bound states.
The VDM is a variational solution of the Bloch equation. Assume a trial density matrix with parameters $`q_i`$ that depend on imaginary time $`\beta `$ and $`๐^{}`$,
$$\rho (๐,๐^{};\beta )=\rho (๐,q_1,\mathrm{},q_m).$$
(7)
By minimizing the integral:
$$\int ๐๐\left(\frac{\partial \rho (๐,๐^{};\beta )}{\partial \beta }+\mathcal{H}\rho (๐,๐^{};\beta )\right)^2=0,$$
(8)
one determines equations for the dynamics of the parameters in imaginary time:
$$\frac{1}{2}\frac{\partial H}{\partial \stackrel{}{q}}+\stackrel{}{๐ฉ}\dot{\stackrel{}{q}}=0\quad \text{where}\quad H\equiv \int \rho \,\mathcal{H}\,\rho \,๐๐.$$
(9)
The normalization matrix is:
$$๐ฉ_{ij}=\underset{q^{}\to q}{\mathrm{lim}}\frac{\partial ^2}{\partial q_i\partial q_j^{}}\left[\int ๐๐\,\rho (๐,\stackrel{}{q};\beta )\,\rho (๐,\stackrel{}{q}^{};\beta )\right].$$
(10)
We assume the density matrix is a Slater determinant of single particle Gaussian functions
$$\rho _1(๐ซ,๐ซ^{},\beta )=(\pi w)^{-3/2}\text{exp}\left\{-(๐ซ-๐ฆ)^2/w+d\right\}$$
(11)
where the variational parameters are the mean $`๐ฆ`$, squared width $`w`$ and amplitude $`d`$. The differential equations for this ansatz are given in . The initial conditions at $`\beta \to 0`$ are $`w=2\beta `$, $`๐ฆ=๐ซ^{}`$ and $`d=0`$ in order to regain the correct FP limit. It follows from Eq. 8 that at low temperature, the VDM goes to the lowest energy wave function within the variational basis. For an isolated atom or molecule this will be a bound state, in contrast to the delocalized state of the FP density matrix. A further discussion of the VDM properties is given in . Note that this discussion concerns only the nodal restriction. In performing the PIMC simulation, the complete potential between the interacting charges is taken into account as discussed in detail in .
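A quick consistency check on the Gaussian ansatz of eq. (11): with the stated initial conditions $`w=2\beta `$, $`๐ฆ=๐ซ^{}`$, $`d=0`$ and $`\lambda =1/2`$ (the electron value in the units used above), the 1D factor of the ansatz coincides exactly with the free-particle density matrix of eq. (6), so the VDM evolution really does start from the FP limit. The negative exponents written into the code are the standard sign conventions.

```python
import math

lam = 0.5   # lambda = hbar^2 / 2m for the electron in these units

def rho_fp(x, xp, beta):
    """1D free-particle density matrix, cf. eq. (6)."""
    return (4.0 * math.pi * lam * beta) ** -0.5 * math.exp(-(x - xp) ** 2 / (4.0 * lam * beta))

def rho_vdm(x, m, w, d):
    """1D factor of the Gaussian VDM ansatz, cf. eq. (11)."""
    return (math.pi * w) ** -0.5 * math.exp(-(x - m) ** 2 / w + d)

beta, xp = 0.2, 0.7
# initial conditions: w = 2*beta, m = x', d = 0
diffs = [abs(rho_vdm(x, xp, 2.0 * beta, 0.0) - rho_fp(x, xp, beta))
         for x in (-1.0, 0.0, 0.5, 2.0)]
```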
Simulations with VDM nodes lead to lower internal energies than those with FP nodes, as shown in Fig. 1. Since the free energy $`F`$ is the integral of the internal energy over temperature, one can conclude that VDM nodes yield a smaller $`F`$ and hence are the more appropriate nodal surface.
For the two densities considered here, the state of deuterium goes from a plasma of strongly interacting but un-bound deuterons and electrons at high $`T`$ to a regime at low $`T`$, which is characterized by a significant electronic degeneracy and bound states. Also at decreasing $`T`$, one finds an increasing number of electrons involved in long permutation cycles. Additionally, for $`T\le \mathrm{15\hspace{0.17em}625}\mathrm{K}`$, molecular formation is observed. Comparing FP and VDM nodes, one finds that VDM predicts a higher molecular fraction and fewer permutations, hinting at more localized electrons.
## III Shock Hugoniot
The recent experiments measured the shock velocity, propagating through a sample of pre-compressed liquid deuterium characterized by an initial state ($`E_0`$, $`V_0`$, $`p_0`$) with $`T=19.6\mathrm{K}`$ and $`\rho _0=0.171\mathrm{g}/\mathrm{cm}^3`$. Assuming an ideal planar shock front, the variables of the shocked material ($`E`$, $`V`$, $`p`$) satisfy the hugoniot relation ,
$$H=E-E_0+\frac{1}{2}(V-V_0)(p+p_0)=0.$$
(12)
We set $`E_0`$ to its exact value of $`-15.886\mathrm{eV}`$ per atom and $`p_0=0`$. Using the simulation results for $`p`$ and $`E`$, we calculate $`H(T,\rho )`$ and then interpolate $`H`$ linearly at constant $`T`$ between the two densities corresponding to $`r_s=1.86`$ and $`2`$ to obtain a point on the hugoniot in the $`(p,\rho )`$ plane. (Results at $`r_s=1.93`$ confirm the function is linear within the statistical errors). The PIMC data for $`p`$, $`E`$, and the hugoniot are given in Tab. I.
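The mechanics of extracting a hugoniot point from eq. (12) can be illustrated with a toy equation of state. Below, the PIMC tables for $`E`$ and $`p`$ are replaced by a monatomic ideal gas, $`E=\frac{3}{2}pV`$, with made-up initial-state values; solving $`H=0`$ then shows the strong-shock compression approaching the 4-fold limit $`(\gamma +1)/(\gamma -1)=4`$ quoted below for high pressures.

```python
# Toy hugoniot from eq. (12): H = E - E0 + (1/2)(V - V0)(p + p0) = 0,
# with a monatomic ideal gas E = (3/2) p V standing in for the PIMC E(T, rho).
# V0, p0, E0 are illustrative values, not the experimental initial state.

V0, p0, E0 = 1.0, 0.0, 0.01

def hugoniot(V, p):
    E = 1.5 * p * V
    return E - E0 + 0.5 * (V - V0) * (p + p0)

def solve_V(p, lo=1e-6, hi=1.0, tol=1e-12):
    """Bisection for the root of H(V; p) = 0 in V."""
    f_lo = hugoniot(lo, p)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if hugoniot(mid, p) * f_lo > 0.0:
            lo, f_lo = mid, hugoniot(mid, p)
        else:
            hi = mid
    return 0.5 * (lo + hi)

compression_low = V0 / solve_V(p=1.0)     # modest shock: below 4-fold
compression_high = V0 / solve_V(p=1e6)    # strong shock: approaches 4-fold
```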
In Fig. 2, we compare the effects of different approximations made in the PIMC simulations such as time step $`\tau `$, number of pairs $`N_P`$ and the type of nodal restriction. For pressures above 3 Mbar, all these approximations have a very small effect. The reason is that PIMC simulations become increasingly accurate as temperature increases. The first noticeable difference occurs at $`p\approx 2.7\mathrm{Mbar}`$, which corresponds to $`T=\mathrm{62\hspace{0.17em}500}\mathrm{K}`$. At lower pressures, the differences become more and more pronounced. We have performed simulations with free particle nodes and $`N_P=32`$ for three different values of $`\tau `$. Using a smaller time step makes the simulations computationally more demanding and it shifts the hugoniot curves to lower densities. These differences come mainly from enforcing the nodal surfaces more accurately, which seems to be more relevant than the simultaneous improvements in the accuracy of the action $`S`$; that is, the time step is constrained more by the Fermi statistics than it is by the potential energy. We improved the efficiency of the algorithm by using a smaller time step $`\tau _F`$ for evaluating the Fermi action than the time step $`\tau _B`$ used for the potential action. Unless specified otherwise, we used $`\tau _F=\tau _B=\tau `$. At even lower pressures not shown in Fig. 2, all of the hugoniot curves with FP nodes turn around and go to low densities as expected.
As a next step, we replaced the FP nodes by VDM nodes. Those results show that the form of the nodes has a significant effect for $`p`$ below 2 Mbar. Using a smaller $`\tau `$ also shifts the curve to slightly lower densities. In the region where atoms and molecules are forming, it is plausible that VDM nodes are more accurate than free nodes because they can describe those states . We also show a hugoniot derived on the basis of the VDM alone (dashed line). These results are quite reasonable considering the approximations (Hartree-Fock) made in that calculation. Therefore, we consider the PIMC simulation with the smallest time step using VDM nodes to be our most reliable hugoniot. Going to bigger system sizes $`N_P=64`$ and using FP nodes also shows a shift towards lower densities.
Fig. 3 compares the hugoniot from laser shock wave experiments with the PIMC simulation (VDM nodes, $`\tau ^{-1}=2\cdot 10^6\mathrm{K}`$) and several theoretical approaches: the SESAME model by Kerley (thin solid line), the linear mixing model by Ross (dashed line) , DFT-MD by Lenosky et al. (dash-dotted line), the Padé approximation in the chemical picture (PACH) by Ebeling et al. (dotted line), and the work by Saumon et al. (thin dash-dotted line).
The differences of the various PIMC curves in Fig. 2 are small compared to the deviation from the experimental results . There, an increased compressibility with a maximum value of $`6\pm 1`$ was found, while PIMC predicts $`4.3\pm 0.1`$, only slightly higher than that given by the SESAME model. Only for $`p>2.5\mathrm{Mbar}`$ does our hugoniot lie within the experimental errorbars. In this regime, the deviations in the PIMC and PACH hugoniot are relatively small, less than $`0.05\mathrm{g}\mathrm{cm}^{-3}`$ in density. In the high pressure limit, the hugoniot goes to the FP limit of 4-fold compression. This trend is also present in the experimental findings. For pressures below 1 Mbar, the PIMC hugoniot goes back to lower densities and shows the expected tendency towards the experimental values from earlier gas gun work and the lowest data points from . For these low pressures, differences between PIMC and DFT-MD are also relatively small.
## IV Conclusions
We reported results from PIMC simulations and performed a finite size and time step study. Special emphasis was put on improving the fermion nodes where we presented the first PIMC results with variational instead of FP nodes. We find a slightly increased compressibility of $`4.3\pm 0.1`$ compared to the SESAME model but we cannot reproduce the experimental findings of values of about $`6\pm 1`$. Further theoretical and experimental work will be needed to resolve this discrepancy.
###### Acknowledgements.
The authors would like to thank W. Magro for the collaboration concerning the parallel PIMC simulations and E.L. Pollock for the contributions to the VDM method. This work was supported by the CSAR program and the Department of Physics at the University of Illinois. We used the computational facilities at the National Center for Supercomputing Applications and Lawrence Livermore National Laboratories. |
## Introduction
Let $`B`$ be an ideal in a polynomial ring $`R=k[X_1,\mathrm{},X_n]`$ in $`n`$ variables over a field $`k`$. The local cohomology of $`R`$ at $`B`$ is defined by
$$H_B^i(R)=\underset{d}{\mathrm{lim}}\,\mathrm{Ext}_R^i(R/B^d,R).$$
In general, this limit is not well behaved: the natural maps
$$\mathrm{Ext}_R^i(R/B^d,R)\to H_B^i(R)$$
are not injective and it is difficult to understand how their images converge to $`H_B^i(R)`$ (see Eisenbud, Mustaţă and Stillman for a discussion of related problems).
However, in the case when $`B`$ is a monomial ideal we will see that the situation is especially nice if instead of the sequence $`\{B^d\}_{d\ge 1}`$ we consider the cofinal sequence of ideals $`\{B_0^{[d]}\}_{d\ge 1}`$, consisting of the "Frobenius powers" of the ideal $`B_0=\mathrm{radical}(B)`$. They are defined as follows: if $`m_1,\mathrm{},m_r`$ are monomial generators of $`B_0`$, then
$$B_0^{[d]}=(m_1^d,\mathrm{},m_r^d).$$
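For concreteness, here is a minimal sketch of the Frobenius-power construction with monomials stored as exponent vectors; the sample ideal $`B_0=(ab,cd)`$ is only an illustration (it reappears as the example ideal further below), and the membership test is the usual one for monomial ideals — divisibility by some generator.

```python
# Monomials as exponent tuples in k[a, b, c, d]; e.g. (1, 1, 0, 0) is ab.

def frobenius_power(gens, d):
    """Generators of B0^[d] = (m1^d, ..., mr^d)."""
    return [tuple(d * e for e in m) for m in gens]

def in_monomial_ideal(mono, gens):
    """A monomial lies in a monomial ideal iff some generator divides it."""
    return any(all(x >= y for x, y in zip(mono, g)) for g in gens)

B0 = [(1, 1, 0, 0), (0, 0, 1, 1)]      # B0 = (ab, cd), a sample radical ideal
B0_d2 = frobenius_power(B0, 2)         # (a^2 b^2, c^2 d^2)
```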
Our first main result is that the natural map
$$\mathrm{Ext}_R^i(R/B_0^{[d]},R)\to H_B^i(R)$$
is an isomorphism onto the submodule of $`H_B^i(R)`$ of elements of multidegree $`\alpha `$, with $`\alpha _j\ge -d`$ for all $`j`$.
The second main result gives a filtration of $`\mathrm{Ext}_R^i(R/B,R)`$ for a squarefree monomial ideal $`B`$. For $`\alpha \in \{0,1\}^n`$, let $`\mathrm{supp}(\alpha )=\{j|\alpha _j=1\}`$ and $`P_\alpha =(X_j|j\in \mathrm{supp}(\alpha ))`$.
We describe a canonical filtration of
$$\mathrm{Ext}_R^i(R/B,R):0=M_0\subseteq \mathrm{}\subseteq M_n=\mathrm{Ext}_R^i(R/B,R)$$
such that for every $`l`$,
$$M_l/M_{l-1}\cong \underset{|\alpha |=l}{\bigoplus }(R/P_\alpha (\alpha ))^{\beta _{l-i,\alpha }(B^{})}.$$
The numbers $`\beta _{l-i,\alpha }(B^{})`$ are the Betti numbers of $`B^{}`$, the Alexander dual ideal of $`B`$ (see section 3 below for the related definitions). For an interpretation of this filtration in terms of Betti diagrams, see Remark 1 after Theorem 3.3, below.
In a slightly weaker form, this result has been conjectured by David Eisenbud.
Let's see this filtration for a simple example: $`R=k[a,b,c,d]`$, $`B=(ab,cd)`$ and $`i=2`$. Since $`B`$ is a complete intersection, we get $`\mathrm{Ext}_R^2(R/B,R)\cong R/B(1,1,1,1)`$. Our filtration is $`M_0=M_1=0`$, $`M_2=R\overline{a}\overline{c}+R\overline{a}\overline{d}+R\overline{b}\overline{c}+R\overline{b}\overline{d}`$, $`M_3=R\overline{a}+R\overline{b}+R\overline{c}+R\overline{d}`$ and $`M_4=\mathrm{Ext}_R^2(R/B,R)`$.
From the description of $`\mathrm{Ext}_R^2(R/B,R)`$ it follows that
$$M_2/M_1=M_2\cong R/(b,d)(0,1,0,1)\oplus R/(b,c)(0,1,1,0)$$
$$\oplus R/(a,d)(1,0,0,1)\oplus R/(a,c)(1,0,1,0),$$
$$M_3/M_2\cong R/(b,c,d)(0,1,1,1)\oplus R/(a,c,d)(1,0,1,1)$$
$$\oplus R/(a,b,d)(1,1,0,1)\oplus R/(a,b,c)(1,1,1,0),$$
$$M_4/M_3\cong R/(a,b,c,d)(1,1,1,1).$$
On the other hand, $`B^{}=(bd,bc,ad,ac)`$. If $`F_{}`$ is the minimal multigraded resolution of $`B^{}`$, then
$$F_0=R(0,-1,0,-1)\oplus R(0,-1,-1,0)\oplus R(-1,0,0,-1)\oplus R(-1,0,-1,0),$$
$$F_1=R(0,-1,-1,-1)\oplus R(-1,0,-1,-1)\oplus R(-1,-1,0,-1)\oplus R(-1,-1,-1,0),$$
$$F_2=R(-1,-1,-1,-1).$$
We see that for each $`\alpha \in \{0,1\}^4`$ such that $`R(-\alpha )`$ appears in $`F_{l-2}`$, there is a corresponding summand $`R/P_\alpha (\alpha )`$ in $`M_l/M_{l-1}`$.
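The dual generators used above can be recomputed combinatorially: for a squarefree monomial ideal, the Alexander dual $`B^{}`$ is generated by the monomials of the minimal vertex covers (minimal transversals) of the supports of the generators of $`B`$. The brute-force sketch below recovers exactly $`\{bd,bc,ad,ac\}`$ for $`B=(ab,cd)`$.

```python
from itertools import combinations

def minimal_transversals(sets, universe):
    """All inclusion-minimal subsets of `universe` meeting every set in `sets`
    (brute force; fine for tiny examples like this one)."""
    hits = [frozenset(T) for r in range(1, len(universe) + 1)
            for T in combinations(sorted(universe), r)
            if all(set(T) & S for S in sets)]
    return {T for T in hits if not any(U < T for U in hits)}

supports = [{'a', 'b'}, {'c', 'd'}]              # supports of ab and cd
dual_gens = minimal_transversals(supports, {'a', 'b', 'c', 'd'})
```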
In order to prove this result about the filtration of $`\mathrm{Ext}_R^i(R/B,R)`$ we will study the multigraded components of this module and how an element of the form $`X_j\in R`$ acts on these components. As we have seen, it is enough to study the same problem for $`H_B^i(R)`$.
We give two descriptions for the degree $`\alpha `$ part of $`H_B^i(R)`$, as simplicial cohomology groups of certain simplicial complexes depending only on $`B`$ and the signs of the components of $`\alpha `$. The first complex is on the set of minimal generators of $`B`$ and the second one is a full subcomplex of the simplicial complex associated to $`B^{}`$ via the Stanley-Reisner correspondence. The module structure on $`H_B^i(R)`$ is described by the maps induced in cohomology by inclusion of simplicial complexes.
As a first consequence of these results and using also a formula of Hochster , we obtain an isomorphism
$$\mathrm{Ext}_R^i(R/B,R)_\alpha \cong \mathrm{Tor}_{|\alpha |-i}^R(B^{},k)_\alpha ,$$
for every $`\alpha \in \{0,1\}^n`$.
This result is equivalent to the fact that in our filtration the numbers are as stated above. This isomorphism has been obtained also by Yanagawa . It can be considered as a strong form of the inequality of Bayer, Charalambous and Popescu between the Betti numbers of $`B`$ and those of $`B^{}`$. As shown in that paper, this implies that $`B`$ and $`B^{}`$ have the same extremal Betti numbers, extending results of Eagon and Reiner and Terai .
As a final application of our analysis of the graded pieces of $`\mathrm{Ext}_R^i(R/B,R)`$, we give a topological description for the associated primes of $`\mathrm{Ext}_R^i(R/B,R)`$. In the terminology of Vasconcelos , these are the homological associated primes of $`R/B`$. In particular, we characterize the minimal associated primes of $`\mathrm{Ext}_R^i(R/B,R)`$ using only the Betti numbers of $`B^{}`$.
We mention here the recent work of Terai on the Hilbert function of the modules $`H_B^i(R)`$. It is easy to see that using the results in our paper one can deduce Terai's formula for this Hilbert function.
The problem of effectively computing the local cohomology modules with respect to an arbitrary ideal is quite difficult since these modules are not finitely generated. The general approach is to use the $`D`$-module structure for the local cohomology (see, for example, Walther ). However, in the special case of monomial ideals our results show that it is possible to make this computation with elementary methods.
Our main motivation for studying local cohomology at monomial ideals comes from the applications in the context of toric varieties. Via the homogeneous coordinate ring, the cohomology of sheaves on such a variety can be expressed as local cohomology of modules at the โirrelevant idealโ, which is a squarefree monomial ideal. For a method of computing the cohomology of sheaves on toric varieties in this way, see Eisenbud, Mustaลฃว and Stillman . For applications to vanishing theorems on toric varieties and related results, see Mustaลฃว .
The main reference for the definitions and the results that we use is Eisenbud . For the basic facts about the cohomology of simplicial complexes, see Munkres . Cohomology of simplicial complexes is always taken to be reduced cohomology. Notice also that we make a distinction between the empty complex which contains just the empty set (which has nontrivial cohomology in degree $`-1`$) and the void complex which doesn't contain any set (whose cohomology is trivial in any degree).
This work has been done in connection with a joint project with David Eisenbud and Mike Stillman. We would like to thank them for their constant encouragement and for generously sharing their insight with us. We are also grateful to Josep Alvarez Montaner for pointing out a mistake in an earlier version of this paper.
## §1. Local cohomology as a union of Ext modules
Let $`B\subset R=k[X_1,\mathrm{},X_n]`$ be a squarefree monomial ideal. All the modules which appear are $`๐^n`$-graded. We partially order the elements of $`๐^n`$ by setting $`\alpha \le \beta `$ iff $`\alpha _j\le \beta _j`$, for all $`j`$.
###### Theorem 1.1
For each $`i`$ and $`d`$, the natural map
$$\mathrm{Ext}_R^i(R/B^{[d]},R)\to H_B^i(R)$$
is an isomorphism onto the submodule of $`H_B^i(R)`$ of elements of degree $`\ge (-d,\mathrm{},-d)`$.
Proof. We will compute $`\mathrm{Ext}_R^i(R/B^{[d]},R)`$ using the Taylor resolution $`F_{}^d`$ of $`R/B^{[d]}`$ (see Eisenbud , exercise 17.11). The inclusion $`B^{[d+1]}\subseteq B^{[d]}`$, $`d\ge 1`$, induces a morphism of complexes $`\varphi ^d:F_{}^{d+1}\to F_{}^d`$. The assertions in the theorem are consequences of the more precise lemma below.
###### Lemma 1.2
If $`(\varphi ^d)^{}:(F_{}^d)^{}\to (F_{}^{d+1})^{}`$ is the dual $`Hom_R(\varphi ^d,R)`$ of the above map, then in a multidegree $`\alpha \in ๐^n`$ we have:
(a) If $`\alpha \ge (-d,\mathrm{},-d)`$, then $`(\varphi ^d)_\alpha ^{}`$ is an isomorphism of complexes.
(b) If $`\alpha _j<-d`$ for some $`j`$, $`1\le j\le n`$, then $`(F_{}^d)_\alpha ^{}=0`$, so $`(\varphi ^d)_\alpha ^{}`$ is the zero map.
Proof of the lemma. Let $`m_1,\mathrm{},m_r`$ be monomial minimal generators of $`B`$. For any subset $`I`$ of $`\{1,\mathrm{},r\}`$ we set
$$m_I=\mathrm{LCM}\{m_i\mid i\in I\}.$$
As each $`m_I`$ is square-free, $`\mathrm{deg}m_I\in ๐^n`$ is a vector of ones and zeros.
Recall from Eisenbud that $`F_{}^d`$ is a free $`R`$-module with basis $`\{f_I^d|I\subseteq \{1,\mathrm{},r\}\}`$, where $`deg(f_I^d)=d\cdot deg(m_I)`$. Therefore, the degree $`\alpha `$ part of $`(F_{}^d)^{}`$ has a vector space basis consisting of elements of the form $`ne_I^d`$ where $`n\in R`$ is a monomial, $`e_I^d=(f_I^d)^{}`$ has degree equal to $`-d\cdot deg(m_I)`$, and $`deg(n)-d\cdot deg(m_I)=\alpha .`$
Part (b) of the Lemma follows at once. For part (a), note that $`(\varphi ^d)^{}:(F_{}^d)^{}\to (F_{}^{d+1})^{}`$ takes $`e_I^d`$ to $`m_Ie_I^{d+1}`$. The vector $`deg(e_I^{d+1})=-(d+1)deg(m_I)`$ has entry $`-(d+1)`$ wherever $`deg(m_I)`$ has entry 1, so any element $`ne_I^{d+1}`$ of degree $`\alpha \ge (-d,-d,\mathrm{},-d)`$ must have $`n`$ divisible by $`m_I`$. It is thus of the form $`(\varphi ^d)^{}(x)`$ for the unique element $`x=(n/m_I)e_I^d`$, as required.
## §2. Local cohomology as simplicial cohomology
To describe $`H_B^i(R)`$ in a multidegree $`\alpha \in ๐^n`$, we will use two simplicial complexes associated with $`B`$ and $`\alpha `$. We will assume that $`B\ne (0)`$.
By computing local cohomology using the Taylor complex we will express $`H_B^i(R)_\alpha `$ as the simplicial cohomology of a complex on the set of minimal generators of $`B`$. We will interpret this later as the cohomology of another complex, this time on the potentially smaller set $`\{1,\mathrm{},n\}`$. This one is a full subcomplex of the complex associated to the dual ideal $`B^{}`$ via the Stanley-Reisner correspondence. In fact, this is the complex used in the computation of the Betti numbers of $`B^{}`$ (see the next section for the definitions). We will use this result to derive the relation between $`\mathrm{Ext}(R/B,R)`$ and $`\mathrm{Tor}^R(B^{},k)`$ in Corollary 3.1 below.
Let $`m_1,\ldots,m_r`$ be the minimal monomial generators of $`B`$. As above, for $`J\subseteq \{1,\ldots,r\}`$, $`m_J`$ will denote $`\mathrm{LCM}(m_j\,;\,j\in J)`$.
For $`i\in \{1,\ldots,n\}`$, we define
$$T_i:=\{J\subseteq \{1,\ldots,r\}\,|\,X_i\nmid m_J\}.$$
For every subset $`I\subseteq \{1,\ldots,n\}`$, we define $`T_I=\bigcup _{i\in I}T_i`$. When $`I=\emptyset `$, we take $`T_I`$ to be the void complex. It is clear that each $`T_i`$ is a simplicial complex on the set $`\{1,\ldots,r\}`$, and therefore so is $`T_I`$.
For $`\alpha \in \mathbf{Z}^n`$, we take $`I_\alpha =\{i\,|\,\alpha _i\le -1\}\subseteq \{1,\ldots,n\}`$. Note that the complex $`T_{I_\alpha }`$ depends only on the signs of the components of $`\alpha `$ (and, of course, on $`B`$).
If $`e_1,\ldots,e_n`$ is the canonical basis of $`\mathbf{Z}^n`$ and $`\alpha ^{}=\alpha +e_l`$, we obviously have $`I_{\alpha ^{}}\subseteq I_\alpha `$, with equality iff $`\alpha _l\ne -1`$. Therefore, $`T_{I_{\alpha ^{}}}`$ is a subcomplex of $`T_{I_\alpha }`$.
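These definitions are easy to experiment with on a computer. The sketch below is ours, not from the paper (all function and variable names are invented); it encodes squarefree monomials by their supports and follows the convention that $`T_i`$ collects the subsets $`J`$ whose LCM is not divisible by $`X_i`$, as used in the nerve computation in the proof of Corollary 2.2 below.

```python
from itertools import chain, combinations

def lcm_support(gens, J):
    """Support of m_J, the LCM of the squarefree generators indexed by J."""
    s = set()
    for j in J:
        s |= gens[j]
    return s

def T(gens, i):
    """T_i = { J : X_i does not divide m_J }, a simplicial complex on the generators."""
    r = len(gens)
    all_J = chain.from_iterable(combinations(range(r), k) for k in range(r + 1))
    return {frozenset(J) for J in all_J if i not in lcm_support(gens, J)}

def T_I(gens, I):
    """T_I = union of the T_i over i in I (void when I is empty)."""
    return set().union(*(T(gens, i) for i in I)) if I else set()

def I_alpha(alpha):
    """I_alpha = { i : alpha_i <= -1 }."""
    return {i for i, a in enumerate(alpha) if a <= -1}

# B = (ab, bc, cd, ad, ac) in k[a,b,c,d]; generators as supports, with a=0,...,d=3
gens = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {0, 2}]
print(I_alpha((-1, 0, -1, 2)))          # {0, 2}
print(frozenset({1}) in T(gens, 0))     # True: a does not divide m_{bc} = bc
```

Closure under subsets holds because $`m_{J^{}}`$ divides $`m_J`$ whenever $`J^{}\subseteq J`$, which is why each `T(gens, i)` really is a simplicial complex.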
###### Theorem 2.1
(a) With the above notation, we have
$$H_B^i(R)_\alpha \cong H^{i-2}(T_{I_\alpha };k).$$
(b) Via the isomorphisms given in (a), the multiplication by $`X_l`$:
$$\nu _{X_l}:H_B^i(R)_\alpha \to H_B^i(R)_{\alpha ^{}}$$
corresponds to the morphism:
$$H^{i-2}(T_{I_\alpha };k)\to H^{i-2}(T_{I_{\alpha ^{}}};k),$$
induced in cohomology by the inclusion $`T_{I_{\alpha ^{}}}\subseteq T_{I_\alpha }`$. In particular, if $`\alpha _l\ne -1`$, then $`\nu _{X_l}`$ is an isomorphism.
Proof. We have seen in Lemma 1.2 that
$$\mathrm{Ext}_R^i(R/B^{[d]},R)_\alpha \cong H_B^i(R)_\alpha $$
if $`\alpha \ge -(d,\ldots,d)`$. We fix such a $`d`$. With the notation of Lemma 1.2, we have seen that the degree $`\alpha `$ part of $`(F_{}^d)^{}`$ has a vector space basis consisting of elements of the form $`ne_J^d`$, where $`n\in R`$ is a monomial and $`\mathrm{deg}(n)-d\,\mathrm{deg}(m_J)=\alpha `$. Therefore, the basis of $`(F_p^d)_\alpha ^{}`$ is indexed by those $`J\subseteq \{1,\ldots,r\}`$ with $`|J|=p`$ and $`\alpha +d\,\mathrm{deg}(m_J)\ge (0,\ldots,0)`$. Because $`\alpha _j\le -1`$ iff $`j\in I_\alpha `$ and $`\alpha \ge -(d,\ldots,d)`$, the above inequality is equivalent to $`X_j|m_J`$ for every $`j\in I_\alpha `$, i.e. to $`J\notin T_{I_\alpha }`$.
Let $`G^{}`$ be the cochain complex computing the relative cohomology of the pair $`(D,T_{I_\alpha })`$ with coefficients in $`k`$, where $`D`$ is the full simplicial complex on the set $`\{1,\mathrm{},r\}`$.
If $`I_\alpha \ne \emptyset `$, then the degree $`\alpha `$ part of $`(F_p^d)^{}`$ is equal to $`G^{p-1}`$ for every $`p`$. Moreover, the maps are the same and therefore we get $`H_B^i(R)_\alpha \cong H^{i-1}(D,T_{I_\alpha };k)`$. Since $`D`$ is contractible, the long exact sequence in cohomology of the pair $`(D,T_{I_\alpha })`$ yields $`H_B^i(R)_\alpha \cong H^{i-2}(T_{I_\alpha };k)`$.
If $`I_\alpha =\emptyset `$, then $`(F_{}^d)^{}`$ in degree $`\alpha `$ is, up to a shift, the complex computing the reduced cohomology of $`D`$ with coefficients in $`k`$. Since $`D`$ is contractible, we get $`H_B^i(R)_\alpha =0=H^{i-2}(T_{I_\alpha };k)`$, which completes the proof of part (a).
For part (b), we may suppose that $`I_{\alpha ^{}}\ne \emptyset `$. With the above notations, $`\nu _{X_l}`$ is induced by the map $`\varphi _l:(F_p^d)_\alpha ^{}\to (F_p^d)_{\alpha ^{}}^{}`$, given by $`\varphi _l(ne_J^d)=X_lne_J^d`$.
If $`\widetilde{G}^{}`$ is the cochain complex constructed as above, but for $`\alpha ^{}`$ instead of $`\alpha `$, then via the isomorphisms:
$$(F_p^d)_\alpha ^{}\cong G^{p-1},$$
$$(F_p^d)_{\alpha ^{}}^{}\cong \widetilde{G}^{p-1},$$
the map $`\varphi _l`$ corresponds to the canonical projection $`G^{p-1}\to \widetilde{G}^{p-1}`$, which concludes the proof of part (b).
###### Remark
The last assertion in Theorem 2.1(b), that $`\nu _{X_l}`$ is an isomorphism if $`\alpha _l\ne -1`$, has also been obtained in Yanagawa .
The next corollary describes $`H_B^i(R)_\alpha `$ as the cohomology of a simplicial complex with vertex set $`\{1,\mathrm{},n\}`$.
We first introduce the complex $`\mathrm{\Delta }`$ defined by:
$$\mathrm{\Delta }:=\{F\subseteq \{1,\ldots,n\}\,|\,\prod _{j\notin F}X_j\in B\}.$$
In fact, by the Stanley-Reisner correspondence between square-free monomial ideals and simplicial complexes (see Bruns and Herzog ), $`\mathrm{\Delta }`$ corresponds to $`B^{}`$.
For any subset $`I\{1,\mathrm{},n\}`$, we define $`\mathrm{\Delta }_I`$ to be the full simplicial subcomplex of $`\mathrm{\Delta }`$ supported on $`I`$:
$$\mathrm{\Delta }_I:=\{F\subseteq \{1,\ldots,n\}\,|\,F\in \mathrm{\Delta },F\subseteq I\}.$$
When $`I=\emptyset `$, we take $`\mathrm{\Delta }_I`$ to be the void complex. It is clear that if $`I^{}\subseteq I`$, then $`\mathrm{\Delta }_{I^{}}`$ is a subcomplex of $`\mathrm{\Delta }_I`$. This is the case if $`\alpha ^{}=\alpha +e_l`$, $`I=I_\alpha `$ and $`I^{}=I_{\alpha ^{}}`$.
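To make the definition concrete, here is a small Python sketch (ours, not from the paper) of $`\mathrm{\Delta }`$ and its full subcomplexes, with the variables indexed $`0,\ldots,n-1`$ and squarefree monomials encoded by their supports:

```python
from itertools import chain, combinations

def powerset(n):
    return chain.from_iterable(combinations(range(n), k) for k in range(n + 1))

def Delta(n, gens):
    """Delta = { F : the product of the X_j with j not in F lies in B }.

    A squarefree monomial lies in B iff its support contains some generator's support."""
    return {frozenset(F) for F in powerset(n)
            if any(g <= set(range(n)) - set(F) for g in gens)}

def Delta_I(n, gens, I):
    """The full subcomplex of Delta supported on I."""
    return {F for F in Delta(n, gens) if F <= frozenset(I)}

# Example: B = (a, bc) in k[a,b,c], with a=0, b=1, c=2
gens = [{0}, {1, 2}]
print(sorted(sorted(F) for F in Delta(3, gens)))            # [[], [0], [1], [1, 2], [2]]
print(sorted(sorted(F) for F in Delta_I(3, gens, {0, 1})))  # [[], [0], [1]]
```

For this $`B`$ the complex is the disjoint union of the vertex $`a`$ and the edge $`\{b,c\}`$, which is exactly the complex used in Example 2 below.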
###### Corollary 2.2
(a) With the above notation, for any $`\alpha \in \mathbf{Z}^n`$
$$H_B^i(R)_\alpha \cong H^{i-2}(\mathrm{\Delta }_{I_\alpha };k).$$
(b) Via the isomorphisms given by (a), the multiplication map $`\nu _{X_l}`$ corresponds to the morphism:
$$H^{i-2}(\mathrm{\Delta }_{I_\alpha };k)\to H^{i-2}(\mathrm{\Delta }_{I_{\alpha ^{}}};k),$$
induced in cohomology by the inclusion $`\mathrm{\Delta }_{I_{\alpha ^{}}}\subseteq \mathrm{\Delta }_{I_\alpha }`$.
Proof. Using the notation in Theorem 2.1, if $`I_\alpha \ne \emptyset `$, then $`T_{I_\alpha }=\bigcup _{i\in I_\alpha }T_i`$.
If $`i_1,\ldots,i_k\in I_\alpha `$ and $`\bigcap _{1\le p\le k}T_{i_p}\ne \emptyset `$, then
$$\bigcap _{1\le p\le k}T_{i_p}=\{J\subseteq \{1,\ldots,r\}\,|\,X_{i_p}\nmid m_J,1\le p\le k\}$$
is the full simplicial complex on those $`j`$ with $`X_{i_p}\nmid m_j`$ for every $`p`$, $`1\le p\le k`$. Therefore it is contractible.
This shows that we can compute the cohomology of $`T_I`$ as the cohomology of the nerve $`\mathcal{N}`$ of the cover $`T_I=\bigcup _{i\in I}T_i`$ (see Godement ). But by definition, $`\{i_1,\ldots,i_k\}\subseteq I`$ is a simplex in $`\mathcal{N}`$ iff $`\bigcap _{1\le p\le k}T_{i_p}\ne \emptyset `$ iff there is $`j`$ such that $`X_{i_p}\nmid m_j`$ for every $`p`$, $`1\le p\le k`$. This shows that $`\mathcal{N}=\mathrm{\Delta }_I`$ and we get that $`H_B^i(R)_\alpha \cong H^{i-2}(\mathrm{\Delta }_I;k)`$ when $`I\ne \emptyset `$.
When $`I=\emptyset `$, $`H_B^i(R)_\alpha =0`$ by Theorem 2.1 and also $`H^{i-2}(\mathrm{\Delta }_I;k)=0`$ (the reduced cohomology of the void simplicial complex is zero).
Part (b) follows immediately from part (b) in Theorem 2.1 and the fact that the isomorphism between the cohomology of a space and that of the nerve of a cover as above is functorial.
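Corollary 2.2 reduces a local cohomology computation to finite-dimensional linear algebra over $`k`$. The sketch below is ours; for simplicity it works over GF(2), whereas the paper allows an arbitrary field. It computes the dimensions of reduced simplicial (co)homology by row-reducing boundary matrices; over a field, reduced homology and cohomology have equal dimensions in each degree.

```python
def betti_gf2(faces):
    """dim of reduced (co)homology H~^q(Delta; GF(2)) as a dict q -> dim.

    `faces` lists the faces as tuples of vertices; include the empty face ()
    whenever Delta is nonempty (augmented complex => reduced homology)."""
    faces = {tuple(sorted(f)) for f in faces}
    top = max(len(f) - 1 for f in faces)
    graded = {q: sorted(f for f in faces if len(f) == q + 1)
              for q in range(-1, top + 1)}

    def rank(q):
        # rank over GF(2) of the boundary map d_q : C_q -> C_{q-1}
        if q - 1 not in graded or not graded.get(q):
            return 0
        col = {f: i for i, f in enumerate(graded[q - 1])}
        rows = []
        for f in graded[q]:
            m = 0
            for v in f:                       # drop one vertex at a time
                m ^= 1 << col[tuple(x for x in f if x != v)]
            rows.append(m)
        r = 0                                 # Gaussian elimination on bitmask rows
        for bit in range(len(graded[q - 1])):
            piv = next((i for i in range(r, len(rows)) if rows[i] >> bit & 1), None)
            if piv is None:
                continue
            rows[r], rows[piv] = rows[piv], rows[r]
            for i in range(len(rows)):
                if i != r and rows[i] >> bit & 1:
                    rows[i] ^= rows[r]
            r += 1
        return r

    return {q: len(graded[q]) - rank(q) - rank(q + 1)
            for q in range(-1, top + 1)}

# Delta for B = (a, bc): the vertex {a} plus the edge {b,c}, so reduced H^0 = k
print(betti_gf2([(), (0,), (1,), (2,), (1, 2)]))  # {-1: 0, 0: 1, 1: 0}
```

A hollow triangle `[(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]` gives dimension 1 in degree 1 and 0 in degree 0, as expected for a circle.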
###### Remark
The same type of argument as in the proofs of Theorem 2.1 and Corollary 2.2 can be used to give a topological description of $`\mathrm{Ext}_R^i(R/B,R)_\alpha `$, for a possibly non-reduced nonzero monomial ideal $`B`$. Namely, for $`\alpha \in \mathbf{Z}^n`$, we define the simplicial complex $`\mathrm{\Delta }_\alpha `$ on $`\{1,\ldots,n\}`$ by $`J\in \mathrm{\Delta }_\alpha `$ iff there is a monomial $`m`$ in $`B`$ such that $`\mathrm{deg}(X^\alpha m)_j<0`$ for every $`j\in J`$. We make the convention that $`\mathrm{\Delta }_\alpha `$ is the void complex iff $`\alpha \ge 0`$. Then
$$\mathrm{Ext}_R^i(R/B,R)_\alpha \cong H^{i-2}(\mathrm{\Delta }_\alpha ;k).$$
Moreover, we can describe these $`k`$-vector spaces using a more geometric object. If we view $`B\subseteq \mathbf{N}^n\subseteq \mathbf{R}^n`$, let $`P_\alpha `$ be the subspace of $`\mathbf{R}^n`$ supported on $`B`$, translated by $`\alpha `$, minus the first quadrant. More precisely,
$$P_\alpha =\{x\in \mathbf{R}^n\,|\,x\ge \alpha +m,\mathrm{for}\mathrm{some}m\in B\}\setminus \mathbf{R}_+^n.$$
Then, using a similar argument to the one in the proof of Corollary 1.4, one can show that
$$\mathrm{Ext}_R^i(R/B,R)_\alpha \cong H^{i-2}(P_\alpha ;k),$$
where the right-hand side is the reduced singular cohomology group. Here we have to make the convention that for $`\alpha \ge 0`$, $`P_\alpha `$ is the "void topological space", with trivial reduced cohomology (as opposed to the empty topological space, which has nonzero reduced cohomology in degree $`-1`$).
We leave the details of the proof to the interested reader.
## §3. The filtration on the Ext modules
The Alexander dual of a reduced monomial ideal $`B`$ is defined by
$$B^{}=(X^F\,|\,F\subseteq \{1,\ldots,n\},X^{F^c}\notin B),$$
where $`F^c:=\{1,\ldots,n\}\setminus F`$ (see Bayer, Charalambous and Popescu for the interpretation in terms of Alexander duality). Note that $`(B^{})^{}=B`$.
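The definition can be transcribed directly into code. The sketch below (ours) brute-forces over all $`2^n`$ squarefree monomials and keeps the inclusion-minimal members, which are the minimal generators of $`B^{}`$:

```python
from itertools import combinations

def alexander_dual(n, gens):
    """Minimal generators of B^dual = (X^F : X^{F^c} not in B); monomials as supports.

    X^G lies in the squarefree ideal B iff G contains the support of some generator."""
    universe = set(range(n))
    members = []
    for k in range(n + 1):
        for F in combinations(range(n), k):
            if not any(g <= universe - set(F) for g in gens):   # X^{F^c} not in B
                members.append(set(F))
    # inclusion-minimal members are the minimal generators
    return [F for F in members if not any(G < F for G in members)]

# B = (a, bc) in k[a,b,c]  =>  B^dual = (ab, ac)
gens = [{0}, {1, 2}]
print(alexander_dual(3, gens))                     # [{0, 1}, {0, 2}]
print(alexander_dual(3, alexander_dual(3, gens)))  # [{0}, {1, 2}]
```

The second call illustrates the involution $`(B^{})^{}=B`$ noted above.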
We will derive first a relation between $`\mathrm{Ext}_R(R/B,R)`$ and $`\mathrm{Tor}^R(B^{},k)`$. This can be seen as a stronger form of the inequality in Bayer, Charalambous and Popescu between the Betti numbers of $`B`$ and $`B^{}`$.
For $`\alpha \in \mathbf{Z}^n`$, we will denote $`|\alpha |=\sum _i\alpha _i`$.
###### Corollary 3.1
Let $`B\subseteq R=k[X_1,\ldots,X_n]`$ be a reduced monomial ideal and $`\alpha \in \mathbf{Z}^n`$ a multidegree. If $`\alpha \notin \{0,1\}^n`$, then $`\mathrm{Tor}_i^R(B^{},k)_\alpha =0`$, and if $`\alpha \in \{0,1\}^n`$, then
$$\mathrm{Tor}_i^R(B^{},k)_\alpha \cong \mathrm{Ext}_R^{|\alpha |-i}(R/B,R)_{-\alpha }.$$
Proof. We will use Hochster's formula for the Betti numbers of reduced monomial ideals (see, for example, Hochster or Bayer, Charalambous and Popescu ). It says that if $`\alpha \notin \{0,1\}^n`$, then $`\mathrm{Tor}_i^R(B^{},k)_\alpha =0`$ and if $`\alpha \in \{0,1\}^n`$, then
$$\mathrm{Tor}_i^R(B^{},k)_\alpha \cong H^{|\alpha |-i-2}(\mathrm{\Delta }_I;k),$$
where $`I`$ is the support of $`\alpha `$.
Obviously, we may suppose that $`B\ne (0)`$. If $`\alpha \in \{0,1\}^n`$, then Corollary 2.2 gives
$$H^{|\alpha |-i-2}(\mathrm{\Delta }_I;k)\cong H_B^{|\alpha |-i}(R)_{-\alpha }$$
and Theorem 1.1 gives
$$H_B^{|\alpha |-i}(R)_{-\alpha }\cong \mathrm{Ext}_R^{|\alpha |-i}(R/B,R)_{-\alpha }.$$
Putting together these isomorphisms, we get the assertion of the corollary.
We recall that the multigraded Betti numbers of $`B`$ are defined by
$$\beta _{i,\alpha }(B):=\mathrm{dim}_k\mathrm{Tor}_i^R(B,k)_\alpha .$$
Equivalently, if $`F_{}`$ is a multigraded minimal resolution of $`B`$, then
$$F_i\cong \bigoplus _{\alpha \in \mathbf{N}^n}R(-\alpha )^{\beta _{i,\alpha }(B)}.$$
One says that $`(i,\alpha )`$ is extremal (or that $`\beta _{i,\alpha }`$ is extremal) if $`\beta _{j,\alpha ^{}}(B)=0`$ for all $`j\ge i`$ and $`\alpha ^{}>\alpha `$ such that $`|\alpha ^{}|-|\alpha |\ge j-i`$.
###### Remark
Using Theorems 1.1, 2.1(b) and Corollary 3.1 one can give a formula for the Hilbert function of $`H_B^i(R)`$ using the Betti numbers of $`B^{}`$. This formula is equivalent to the one which appears in Terai .
As a consequence of the above corollary, we obtain the inequality between the Betti numbers of $`B`$ and $`B^{}`$ from Bayer, Charalambous and Popescu . It implies the equality of extremal Betti numbers from that paper, in particular the equality $`\mathrm{reg}B=\mathrm{pd}(R/B^{})`$ from Terai .
###### Corollary 3.2
If $`B\subseteq R`$ is a reduced monomial ideal, then
$$\beta _{i,\alpha }(B)\le \sum _{\alpha \le \alpha ^{}}\beta _{|\alpha |-i-1,\alpha ^{}}(B^{}),$$
for every $`i\ge 0`$ and every $`\alpha \in \{0,1\}^n`$. If $`\beta _{|\alpha |-i-1,\alpha }(B^{})`$ is extremal, then so is $`\beta _{i,\alpha }(B)`$ and
$$\beta _{i,\alpha }(B)=\beta _{|\alpha |-i-1,\alpha }(B^{}).$$
Proof. Since $`\beta _{i,\alpha }(B)=\mathrm{dim}_k\mathrm{Tor}_i^R(B,k)_\alpha `$, by the previous corollary we get
$$\beta _{i,\alpha }(B)=\mathrm{dim}_k\mathrm{Ext}_R^{|\alpha |-i}(R/B^{},R)_{-\alpha }=\mathrm{dim}_kH^{|\alpha |-i}(\mathrm{Hom}(F_{},R))_{-\alpha },$$
where $`F_{}`$ is the minimal free resolution of $`R/B^{}`$.
Since $`F_{|\alpha |-i}=\bigoplus _{\alpha ^{}\in \mathbf{N}^n}R(-\alpha ^{})^{\beta _{|\alpha |-i-1,\alpha ^{}}(B^{})}`$, we get
$$\beta _{i,\alpha }(B)\le \sum _{\alpha ^{}\in \mathbf{N}^n}\beta _{|\alpha |-i-1,\alpha ^{}}(B^{})\,\mathrm{dim}_k(R(\alpha ^{})_{-\alpha })=\sum _{\alpha \le \alpha ^{}}\beta _{|\alpha |-i-1,\alpha ^{}}(B^{}).$$
If $`\beta _{|\alpha |-i-1,\alpha }(B^{})`$ is extremal, the above inequality becomes $`\beta _{i,\alpha }(B)\le \beta _{|\alpha |-i-1,\alpha }(B^{})`$. Applying the same inequality for $`j\ge i`$ and $`\alpha ^{}>\alpha `$ such that $`|\alpha ^{}|-|\alpha |\ge j-i`$, and the fact that $`\beta _{|\alpha |-i-1,\alpha }(B^{})`$ is extremal, we get that $`\beta _{i,\alpha }(B)`$ is extremal.
Applying the previous inequality with $`B`$ replaced by $`B^{}`$, we obtain $`\beta _{|\alpha |-i-1,\alpha }(B^{})\le \beta _{i,\alpha }(B)`$, which concludes the proof.
We fix some notation for the remainder of this section. Let $`[n]=\{0,1\}^n`$ and $`[n]_l=\{\alpha \in [n]\,|\,|\alpha |=l\}`$, for every $`l`$, $`0\le l\le n`$. For $`\alpha \in [n]`$, let $`\mathrm{supp}(\alpha )=\{j\,|\,\alpha _j=1\}`$ and $`P_\alpha =(X_j\,|\,j\in \mathrm{supp}(\alpha ))`$. The ideals $`P_\alpha `$, $`\alpha \in [n]`$, are exactly the monomial prime ideals of $`R`$.
The following theorem gives the canonical filtration of $`\mathrm{Ext}_R^i(R/B,R)`$ announced in the Introduction.
###### Theorem 3.3
Let $`B\subseteq R`$ be a squarefree monomial ideal. For each $`l`$, $`0\le l\le n`$, let $`M_l`$ be the submodule of $`\mathrm{Ext}_R^i(R/B,R)`$ generated by all $`\mathrm{Ext}_R^i(R/B,R)_{-\alpha }`$, for $`\alpha \in [n]`$, $`|\alpha |\le l`$. Then $`M_0=0`$, $`M_n=\mathrm{Ext}_R^i(R/B,R)`$ and for every $`l`$, $`1\le l\le n`$,
$$M_l/M_{l-1}\cong \bigoplus _{\alpha \in [n]_l}(R/P_\alpha (\alpha ))^{\beta _{l-i,\alpha }(B^{})}.$$
Proof. Clearly we may suppose $`B\ne 0`$. The fact that $`M_0=0`$ follows from Corollary 2.2(a).
Let's see first that $`M_n=\mathrm{Ext}_R^i(R/B,R)`$. For this, it is enough to prove that all the minimal homogeneous generators of $`\mathrm{Ext}_R^i(R/B,R)`$ are in degrees $`-\alpha `$, $`\alpha \in [n]`$.
Indeed, if $`\alpha _j\ge 1`$ for some $`j`$, then the multiplication by $`X_j`$ defines an isomorphism
$$\mathrm{Ext}_R^i(R/B,R)_{\alpha -e_j}\cong \mathrm{Ext}_R^i(R/B,R)_\alpha $$
by Corollary 2.2(b) and Theorem 1.1. In particular, there are no minimal generators in degree $`\alpha `$.
On the other hand, by Theorem 1.1, $`\mathrm{Ext}_R^i(R/B,R)_\alpha =0`$ if $`\alpha _j\le -2`$ for some $`j`$. Therefore we have $`M_n=\mathrm{Ext}_R^i(R/B,R)`$.
Suppose now that we have homogeneous elements $`f_1,\ldots,f_r`$ with $`\mathrm{deg}(f_q)\in -[n]_{l^{}}`$, $`l^{}\le l`$, for every $`q`$, $`1\le q\le r`$. We suppose that they are linearly independent over $`k`$ and that their linear span contains $`\mathrm{Ext}_R^i(R/B,R)_{-\alpha }`$, for every $`\alpha \in [n]_{l^{}}`$, $`l^{}\le l-1`$. We will suppose also that $`\mathrm{deg}(f_r)=-\alpha `$, $`|\alpha |=l`$. If $`T:=\sum _{1\le q\le r-1}Rf_q`$, let $`\overline{f}_r`$ be the image of $`f_r`$ in $`M_l/T`$.
Claim. With the above notations, $`\mathrm{Ann}_R(\overline{f}_r)=P_\alpha `$.
Let $`F=\mathrm{supp}(\alpha )`$. If $`j\in F`$, then $`\mathrm{deg}(X_jf_r)=-(\alpha -e_j)`$, $`\alpha -e_j\in [n]`$. By our assumption, it follows that $`X_jf_r\in T`$, so that $`P_\alpha \subseteq \mathrm{Ann}_R(\overline{f}_r)`$.
Conversely, consider now $`m=\prod X_j^{m_j}\in \mathrm{Ann}\overline{f}_r`$ and suppose that $`m\notin (X_j\,|\,j\in F)`$. We can suppose that $`m`$ has minimal degree. Let $`j`$ be such that $`m_j\ge 1`$. Then $`j\notin F`$ and therefore $`m_j-\alpha _j=m_j\ge 1`$. Since $`mf_r\in T`$, we can write
$$mf_r=\sum _{q<r}c_qn_qf_q,$$
where $`n_q`$ are monomials and $`c_q\in k`$. Since $`\mathrm{deg}(f_q)\le 0`$ for every $`q`$, in the above equality we may assume that $`X_j|n_q`$ for every $`q`$ such that $`c_q\ne 0`$. But by Corollary 2.2(b) and Theorem 1.1, the multiplication by $`X_j`$ is an isomorphism:
$$\mathrm{Ext}_R^i(R/B,R)_{-\alpha +\mathrm{deg}(m)-e_j}\cong \mathrm{Ext}_R^i(R/B,R)_{-\alpha +\mathrm{deg}(m)}.$$
Therefore $`m/X_j\in \mathrm{Ann}\overline{f}_r`$, in contradiction with the minimality of $`m`$. We get $`\mathrm{Ann}\overline{f}_r=(X_j\,|\,j\in F)`$, which completes the proof of the claim.
The first consequence is that for every nonzero $`f\in M_l`$ with $`\mathrm{deg}(f)=-\alpha `$, $`\alpha \in [n]_l`$, if $`\overline{f}`$ is the image of $`f`$ in $`M_l/M_{l-1}`$, then $`\mathrm{Ann}_R(\overline{f})=P_\alpha `$, so that $`R\overline{f}\cong R/P_\alpha (\alpha )`$.
Let's consider now a homogeneous basis $`f_1,\ldots,f_N`$ of $`\bigoplus _{\alpha \in [n]_l}\mathrm{Ext}_R^i(R/B,R)_{-\alpha }`$. By Corollary 3.1,
$$\mathrm{dim}_k\mathrm{Ext}_R^i(R/B,R)_{-\alpha }=\beta _{l-i,\alpha }(B^{}).$$
Therefore, to complete the proof of the theorem, it is enough to show that
$$M_l/M_{l-1}\cong \bigoplus _{1\le j\le N}R\overline{f}_j.$$
Here $`\overline{f}_j`$ denotes the image of $`f_j`$ in $`M_l/M_{l1}`$.
Since $`M_l=M_{l-1}+\sum _{1\le j\le N}Rf_j`$, we only have to show that if $`\sum _{1\le j\le N}n_jf_j\in M_{l-1}`$, then $`n_jf_j\in M_{l-1}`$ for every $`j`$, $`1\le j\le N`$.
Let $`\{g_1,\ldots,g_{N^{}}\}`$ be the union of homogeneous bases for $`\mathrm{Ext}_R^i(R/B,R)_{-\alpha }`$, for $`\alpha \in [n]_{l^{}}`$, $`l^{}\le l-1`$.
Let's fix some $`j`$, with $`1\le j\le N`$. If $`\mathrm{deg}(f_j)=-\alpha `$, by applying the above claim to $`f_j`$, as part of $`\{f_p\,|\,1\le p\le N\}\cup \{g_{p^{}}\,|\,1\le p^{}\le N^{}\}`$, we get that $`n_j\in P_\alpha `$. But we have already seen that $`P_\alpha f_j\subseteq M_{l-1}`$ and therefore the proof is complete.
###### Remark 1
We can interpret the statement of Theorem 3.3 using the multigraded Betti diagram of $`B^{}`$. This is the diagram having at the intersection of the $`i^{\mathrm{th}}`$ row with the $`j^{\mathrm{th}}`$ column the Betti numbers $`\beta _{j,\alpha }(B^{})`$, for $`\alpha \in \mathbf{N}^n`$, $`|\alpha |=i+j`$.
For each $`i`$ and $`j`$ we form a module corresponding to $`(i,j)`$:
$$E_{i,j}=\bigoplus _{\alpha \in [n]_{i+j}}(R/P_\alpha (\alpha ))^{\beta _{j,\alpha }(B^{})}.$$
Theorem 3.3 gives a filtration of $`\mathrm{Ext}_R^i(R/B,R)`$ having as quotients the modules constructed above corresponding to the $`i^{\mathrm{th}}`$ row: $`E_{i,j}`$, $`j\in \mathbf{N}`$.
Notice that by definition, $`\mathrm{Tor}_i^R(B^{},k)`$ is obtained by a "dual" procedure applied to the $`i^{\mathrm{th}}`$ column (in this case the extensions being trivial). Indeed, if for $`(j,i)`$ we put
$$E_{j,i}^{}=\bigoplus _{\alpha \in [n]_{i+j}}k(-\alpha )^{\beta _{i,\alpha }(B^{})},$$
then $`\mathrm{Tor}_i^R(B^{},k)\cong \bigoplus _{j\in \mathbf{N}}E_{j,i}^{}`$.
###### Remark 2
Using Theorem 3.3 one can compute the Hilbert series of $`\mathrm{Ext}_R^i(R/B,R)`$ in terms of the Betti numbers of $`B^{}`$. Using local duality, one can then derive the formula, due to Hochster , for the Hilbert series of the local cohomology modules $`H_{\underline{m}}^{n-i}(R/B)`$, where $`\underline{m}=(X_1,\ldots,X_n)`$ (see also Bruns and Herzog , Theorem 5.3.8).
We now describe the set of homological associated primes of $`R/B`$, i.e. the set
$$\bigcup _{i\ge 0}\mathrm{Ass}(\mathrm{Ext}_R^i(R/B,R))$$
(see Vasconcelos ). Since the module $`\mathrm{Ext}_R^i(R/B,R)`$ is $`\mathbf{Z}^n`$-graded, its associated primes are of the form $`P_\alpha `$, for some $`\alpha \in [n]`$. In fact, Theorem 3.3 shows that
$$\mathrm{Ass}(\mathrm{Ext}_R^i(R/B,R))\subseteq \{P_\alpha \,|\,\beta _{|\alpha |-i,\alpha }(B^{})\ne 0\}.$$
The next result gives a necessary and sufficient condition for a prime ideal $`P_\alpha `$ to be in $`\mathrm{Ass}(\mathrm{Ext}_R^i(R/B,R))`$. In particular, we get a characterization of the minimal associated primes of this module using only the Betti numbers of $`B^{}`$.
###### Theorem 3.4
Let $`B\subseteq R`$ be a nonzero square-free monomial ideal and $`\alpha \in [n]`$. Let $`F=\mathrm{supp}(\alpha )`$.
(a) The ideal $`P_\alpha `$ belongs to $`\mathrm{Ass}(\mathrm{Ext}_R^i(R/B,R))`$ iff
$$\bigcap _{j\in F}\mathrm{Ker}(H^{i-2}(\mathrm{\Delta }_F;k)\to H^{i-2}(\mathrm{\Delta }_{F\setminus \{j\}};k))\ne 0.$$
(b) The ideal $`P_\alpha `$ is a minimal prime in $`\mathrm{Ass}(\mathrm{Ext}_R^i(R/B,R))`$ iff
$$\beta _{|\alpha |-i,\alpha }(B^{})\ne 0$$
and
$$\beta _{|\alpha ^{}|-i,\alpha ^{}}(B^{})=0,$$
for every $`\alpha ^{}\in [n]`$, $`\alpha ^{}\ge \alpha `$, $`\alpha ^{}\ne \alpha `$.
Proof. By Corollary 2.2, the condition in (a) is equivalent to the existence of $`u\in \mathrm{Ext}_R^i(R/B,R)_{-\alpha }`$, $`u\ne 0`$, such that $`X_ju=0`$ for every $`j\in F`$. Since $`\alpha _j=0`$ for $`j\notin F`$, Corollary 2.2(b) and Theorem 1.1 imply that for every monomial $`m`$, $`m\notin P_\alpha `$, the multiplication by $`m`$ is injective on $`\mathrm{Ext}_R^i(R/B,R)_{-\alpha }`$.
Therefore, in the above situation we have $`\mathrm{Ann}_R(u)=P_\alpha `$, so that $`P_\alpha `$ is an element of $`\mathrm{Ass}(\mathrm{Ext}_R^i(R/B,R))`$.
Conversely, suppose that $`P_\alpha \in \mathrm{Ass}(\mathrm{Ext}_R^i(R/B,R))`$. Since $`P_\alpha `$ and $`\mathrm{Ext}_R^i(R/B,R)`$ are $`\mathbf{Z}^n`$-graded, this is equivalent to the existence of $`u\in \mathrm{Ext}_R^i(R/B,R)_{\alpha ^{}}`$, for some $`\alpha ^{}\in \mathbf{Z}^n`$, such that $`P_\alpha =\mathrm{Ann}_R(u)`$. To complete the proof of part (a), it is enough to show that we can take $`\alpha ^{}=-\alpha `$.
By Theorem 1.1, $`\alpha ^{}\ge -(1,\ldots,1)`$. Since $`X_ju=0`$ for $`j\in F`$, multiplication by $`X_j`$ on $`\mathrm{Ext}_R^i(R/B,R)_{\alpha ^{}}`$ is not injective, so that by Corollary 2.2(b) we must have $`\alpha _j^{}=-1`$ for $`j\in F`$.
Let's consider some $`j\notin F`$. If $`\alpha _j^{}\ge 1`$, by Corollary 2.2(b) there is $`u^{}\in \mathrm{Ext}_R^i(R/B,R)_{\alpha ^{\prime \prime }}`$, $`\alpha ^{\prime \prime }=\alpha ^{}-\alpha _j^{}e_j`$, such that $`X_j^{\alpha _j^{}}u^{}=u`$ and $`\mathrm{Ann}_R(u^{})=\mathrm{Ann}_R(u)=P_\alpha `$. Therefore, we may suppose that $`\alpha _j^{}\le 0`$.
If $`\alpha _j^{}=-1`$, since $`X_j\notin \mathrm{Ann}_R(u)`$, which is prime, we have $`\mathrm{Ann}_R(X_ju)=\mathrm{Ann}_R(u)=P_\alpha `$. This shows that we may suppose $`\alpha _j^{}=0`$ for every $`j\notin F`$, so that $`\alpha ^{}=-\alpha `$.
The sufficiency of the condition in part (b) follows directly from part (a) and Corollary 3.1. For the converse, it is enough to notice that if for some $`G\subseteq \{1,\ldots,n\}`$ there is $`0\ne u\in H^{i-2}(\mathrm{\Delta }_G;k)`$, then there is $`H\subseteq G`$ such that $`X^Hu`$ corresponds to a nonzero element in $`\bigcap _{j\in G\setminus H}\mathrm{Ker}(H^{i-2}(\mathrm{\Delta }_{G\setminus H};k)\to H^{i-2}(\mathrm{\Delta }_{G\setminus (H\cup \{j\})};k))`$.
###### Example 1
Let $`R=k[a,b,c,d]`$ and $`B=(ab,bc,cd,ad,ac)`$. Then $`\mathrm{\Delta }`$ is the simplicial complex:
Theorem 3.4(a) gives easily that
$$\mathrm{Ass}(\mathrm{Ext}_R^3(R/B,R))=\{(a,b,d),(b,c,d)\}.$$
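The figure for $`\mathrm{\Delta }`$ is not reproduced here; a few lines of Python (ours, using the description of $`\mathrm{\Delta }`$ from §2 with $`a=0,\ldots,d=3`$) recover its faces: $`\mathrm{\Delta }`$ has all four vertices and every edge except $`ac`$.

```python
from itertools import combinations

gens = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {0, 2}]   # ab, bc, cd, ad, ac
verts = {0, 1, 2, 3}
faces = [set(F) for k in range(5) for F in combinations(range(4), k)
         if any(g <= verts - set(F) for g in gens)]
edges = sorted(tuple(sorted(F)) for F in faces if len(F) == 2)
print(edges)   # [(0, 1), (0, 3), (1, 2), (1, 3), (2, 3)]
```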
###### Example 2
In general, it is not sufficient for $`\beta _{|\alpha |-i,\alpha }(B^{})`$ to be nonzero in order to have $`P_\alpha \in \mathrm{Ass}(\mathrm{Ext}_R^i(R/B,R))`$.
Letโs consider $`R=k[a,b,c]`$ and $`B=(a,bc)`$. Then $`\mathrm{\Delta }`$ is the simplicial complex:
Using Theorem 3.4(a), we get:
$$\mathrm{Ass}(\mathrm{Ext}_R^2(R/B,R))=\{(a,b),(a,c)\},$$
while
$$\{F\,|\,\beta _{|\alpha _F|-2,\alpha _F}(B^{})\ne 0\}=\{\{a,b,c\},\{a,b\},\{a,c\}\}.$$
References
D.Bayer, H.Charalambous, and S.Popescu (1998). Extremal Betti numbers and applications to monomial ideals, preprint.
W.Bruns and J.Herzog (1993). Cohen Macaulay rings, Cambridge Univ. Press.
J.Eagon and V.Reiner (1996). Resolutions of Stanley-Reisner rings and Alexander duality, preprint.
D.Eisenbud (1995). Commutative Algebra with a View Toward Algebraic Geometry, Springer.
D.Eisenbud, M.Mustață and M.Stillman (1998). Cohomology on toric varieties and local cohomology with monomial support, preprint.
R.Godement (1958). Topologie algebrique et theorie des faisceaux, Paris, Herman.
M.Hochster (1977). Cohen-Macaulay rings, combinatorics and simplicial complexes, in Ring theory II, B.R.McDonald, R.A.Morris (eds), Lecture Notes in Pure and Appl. Math., 26, M.Dekker.
J.R.Munkres (1984). Elements of algebraic topology, Benjamin/Cummings, Menlo Park CA.
M.Mustață (1999). Vanishing theorems on toric varieties, in preparation.
N.Terai (1997). Generalization of Eagon-Reiner theorem and h-vectors of graded rings, preprint.
N.Terai (1998). Local cohomology with respect to monomial ideals, in preparation.
W.V.Vasconcelos (1998). Computational methods in commutative algebra and algebraic geometry, Algorithms and Computation in Mathematics, vol.2, SpringerโVerlag.
U.Walther (1999). Algorithmic computation of local cohomology modules and the cohomological dimension of algebraic varieties, Journal for Pure and Applied Algebra, to appear.
K.Yanagawa (1998). Alexander duality for Stanley-Reisner rings and squarefree $`N^n`$ graded modules, preprint.
Author Address:
Mircea Mustata
Department of Mathematics, Univ. of California, Berkeley; Berkeley CA 94720
mustata@math.berkeley.edu Institute of Mathematics of the Romanian Academy, Calea Grivitei 21, Bucharest, Romania
mustata@stoilow.imar.ro |
# A Prototype RICH Detector Using Multi-Anode Photo Multiplier Tubes and Hybrid Photo-Diodes
## 1 Introduction
This paper reports results from a prototype Ring Imaging Cherenkov (RICH) counter and compares the performance of Multi-Anode Photomultiplier Tubes (MAPMT) and two types of Hybrid Photo-Diode detectors (HPD) for detecting the Cherenkov photons. The experimental arrangement represents a prototype of the downstream RICH detector of the LHCb experiment at CERN.
The LHCb experiment will make precision measurements of CP asymmetries in B decays. Particle identification by the RICH detectors is an important tool and an essential component of LHCb. For example, separating pions and kaons using the RICH suppresses backgrounds coming from $`B_d^0\to K^+\pi ^-`$, $`B_s^0\to K^+\pi ^-`$ and $`B_s^0\to K^+K^-`$ when selecting $`B_d^0\to \pi ^+\pi ^-`$ decays, and backgrounds coming from $`B_s\to D_s^\pm \pi ^{\mp }`$ when selecting the $`B_s\to D_s^\pm K^{\mp }`$ decay mode.
LHCb has two RICH detectors. Together they cover polar angles from 10 to 330 mrad. The upstream detector, RICH1, uses aerogel and $`C_4F_{10}`$ radiators to identify particles with momenta from 1 to 65 GeV/c. The downstream detector, RICH2, has 180 cm of $`CF_4`$ radiator and identifies particles with momenta up to 150 GeV/c. It uses a spherical focusing mirror with a radius of curvature of 820 cm which is tilted by 370 mrad to bring the image out of the acceptance of the spectrometer. A flat mirror then reflects this image onto the photodetector plane. For tracks with $`\beta \simeq 1`$, RICH2 is expected to detect about 30 photoelectrons .
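For orientation, the quoted yield can be cross-checked with the standard Cherenkov relations $`\mathrm{cos}\theta _c=1/(n\beta )`$ and $`N_{pe}\approx N_0L\mathrm{sin}^2\theta _c`$. The sketch below is ours, not from the paper; in particular the figure of merit $`N_0`$ and the refractive index are assumed values, with $`N_0`$ chosen to reproduce the quoted 30 photoelectrons.

```python
import math

def cherenkov_angle(n, beta=1.0):
    """Cherenkov angle in rad from cos(theta_c) = 1/(n*beta); None below threshold."""
    c = 1.0 / (n * beta)
    return math.acos(c) if c <= 1.0 else None

def photoelectrons(n, length_cm, n0_per_cm=170.0, beta=1.0):
    """N_pe ~ N0 * L * sin^2(theta_c); N0 = 170 /cm is an assumed figure of merit."""
    theta = cherenkov_angle(n, beta)
    return 0.0 if theta is None else n0_per_cm * length_cm * math.sin(theta) ** 2

n_cf4 = 1.00048   # approximate visible-light refractive index of CF4 at STP
theta = cherenkov_angle(n_cf4)
print(f"theta_c ~ {1e3 * theta:.0f} mrad")            # ~31 mrad
print(f"N_pe    ~ {photoelectrons(n_cf4, 180):.0f}")  # close to the quoted 30
print(cherenkov_angle(n_cf4, beta=0.99))              # None: below threshold
```

The small Cherenkov angle of a gas radiator is what makes the long (180 cm) radiator necessary: $`\mathrm{sin}^2\theta _c`$ is of order $`10^{-3}`$, so the yield per unit length is low.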
The LHCb collaboration intends to use arrays of photodetectors with a sensitive granularity of $`2.5\mathrm{mm}\times 2.5\mathrm{mm}`$, covering an area of $`2.9\mathrm{m}^2`$ with a total of 340,000 channels, to detect the Cherenkov photons in both RICH detectors. These photodetectors are expected to cover an active area of at least 70% of the detector plane. Currently available commercial devices (HPDs from Delft Electronische Producten (DEP), The Netherlands; MAPMTs from Hamamatsu Photonics, Japan) have inadequate coverage of the active area, and their performance at LHC speeds remains to be proven. The beam tests described here used prototypes of three of the new photodetector designs that have been proposed for LHCb.
The results from the LHCb RICH1 prototype detector tests carried out during 1997 are reported in an accompanying publication . The data used in this paper were collected during the summer and autumn of 1998 at the CERN SPS facility. The main goals of these RICH2 prototype studies are:
* To test the performance of the $`CF_4`$ radiator, using the full-scale optical layout of RICH2,
* To test the performance of the photodetectors using the RICH2 geometry by measuring the Cherenkov angle resolution and photoelectron yields.
Section 2 of this paper describes the main features of the test beam setup. Section 3 describes the simulation of the experiment and is followed by a discussion of the photoelectron yields and Cherenkov angle resolution measurements for each of the photodetectors. Finally a summary is given in Section 6, with plans for future work.
## 2 Experimental Setup
The setup included scintillators and a silicon telescope which defined and measured the direction of incident charged particles, a radiator for the production of Cherenkov photons, a mirror for focusing these photons, photodetectors and the data acquisition system. A brief description of these components is given below, and a more complete description of the experimental setup can be found in . The photodetectors were mounted on a plate customised for particular detector configurations. A schematic diagram of the setup is shown in Figure 1.
### 2.1 Beam line
The experimental setup was mounted in the CERN X7 beam line. The beam was tuned to provide negative particles (mainly pions) with momenta between 10 and 120 GeV/c. The precision of the beam momentum for a given setting ($`\delta `$p/p) was better than 1%. Readout of the detectors was triggered by the passage of beam particles which produced time-correlated signals from two pairs of scintillation counters placed 8 metres apart along the beam line. The beam size was $`20\times 20\mathrm{m}\mathrm{m}^2`$ as defined by the smaller of these counters.
### 2.2 Beam Trajectory Measurement
The input beam direction and position were measured using a silicon telescope consisting of three planes of pixel detectors. Each of these planes has a $`22\times 22`$ array of silicon pixels with dimensions $`1.3\mathrm{mm}\times 1.3\mathrm{mm}`$. Two of the planes were placed upstream of the radiator and the third one downstream of the mirror. The first and third planes were separated by 8 metres.
Using the silicon telescope, the beam divergence was measured to be typically 0.3 mrad and 0.1 mrad in the horizontal and vertical planes respectively.
### 2.3 The RICH Detector
During different data-taking periods, air and $`CF_4`$ were used as radiators. The pressure and temperature of these radiators were monitored for correcting the refractive index . The gas circulation system which provided the $`CF_4`$ is described below.
During the $`CF_4`$ runs, data were taken at various pressures ranging from 865 mbar to 1015 mbar and at different temperatures between $`20^{\circ }\mathrm{C}`$ and $`30^{\circ }\mathrm{C}`$. The refractive index of $`CF_4`$ as a function of wavelength at STP, using the parametrization in , is plotted in Figure 2.
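The monitoring matters because $`n-1`$ scales, to a good approximation, with the gas density, i.e. with $`P/T`$ for an ideal gas. A minimal sketch of the correction (ours; the ideal-gas scaling and the STP index value are simplifying assumptions):

```python
def n_at(n_stp, p_mbar, t_celsius, p0_mbar=1013.25, t0_kelvin=273.15):
    """Rescale (n - 1) by the gas density ratio P/T relative to STP (ideal-gas assumption)."""
    density_ratio = (p_mbar / p0_mbar) * (t0_kelvin / (t_celsius + 273.15))
    return 1.0 + (n_stp - 1.0) * density_ratio

# e.g. CF4 with an assumed n_stp = 1.00048, at 950 mbar and 25 C
print(f"{n_at(1.00048, 950.0, 25.0):.6f}")   # 1.000412
```

Over the quoted pressure and temperature ranges this shifts $`n-1`$ by more than 10%, which directly shifts the reconstructed Cherenkov angle, hence the need to record $`P`$ and $`T`$ continuously.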
As shown in the schematic diagram in Figure 3, the prototype Cherenkov vessel was connected into the gas circulation system, which was supplied with $`CF_4`$ gas (CERN stores reference SCEM 60.56.10.100.7) at high pressure. A molecular sieve (13X pore size) was included in the circuit to remove water vapour. The system used a microprocessor interface (Siemens S595U) to set and stabilise the required gas pressure and to monitor and record pressure, temperature and concentrations of water vapour and oxygen throughout the data taking. The absolute pressure of the $`CF_4`$ in the Cherenkov vessel was maintained to within 1 mbar of the required value using electromagnetic valves which controlled the gas input flow and the output flow to the vent. Throughout the data taking the oxygen concentration was below 0.1$`\%`$ and the water vapour concentration was below 100 ppm by volume.
The Cherenkov photons emitted were reflected by a mirror of focal length 4003 mm which was tilted with respect to the beam axis by 314 mrad, similar to the optical layout of the LHCb RICH2. Using micrometer screws, the angle of tilt of the mirror was adjusted to reflect photons on different regions of the photodetector plane which was located 4003 mm from the mirror. The reflectivity of the mirror, measured as a function of the wavelength, is shown in Figure 4.
The important characteristics of the three different designs of photodetectors tested are briefly summarised as follows:
* The 61-pixel Hybrid Photo-Diode (HPD) is manufactured by DEP and has an S20 (trialkali) photocathode deposited on a quartz window. The quantum efficiency of a typical HPD measured by DEP, is plotted in Figure 5 as a function of the incoming photon wavelength. Photoelectrons are accelerated through a 12 kV potential over 12 mm onto a 61-pixel silicon detector. The image on the photocathode is magnified by 1.06 on the silicon detector surface. This device gives an approximate gain of 3000. The pixels are hexagonally close packed and measure 2 mm between their parallel edges. The signal is read out by a Viking VA2 ASIC.
* The 2048-pixel HPD is manufactured in collaboration with DEP. It has electrostatic cross-focusing by which the image on the photocathode is demagnified by a factor of four at the anode. The operating voltage of this HPD is 20 kV. The anode has a silicon detector, which provides an approximate gain of 5000, with an array of 2048 silicon pixels bump bonded to an LHC1 binary readout ASIC. Details of this device and its readout can be found in .
Using the measurements made by DEP, the quantum efficiency of the S20 photocathode used on the 2048-pixel HPD is plotted in Figure 5 as a function of the photon wavelength. This tube has an active input window diameter of 40 mm and the silicon pixels are rectangles of size 0.05 mm $`\times `$ 0.5 mm. It represents a half-scale prototype of a final tube which will have an 80 mm diameter input window and 1024 square pixels with 0.5 mm side.
* The 64-channel Multi-Anode PMT (MAPMT) is manufactured by Hamamatsu. It has a bialkali photocathode deposited on a borosilicate-glass window and 64 square anodes mounted in an 8 $`\times `$ 8 array with a pitch of 2.3 mm. The photoelectrons are multiplied using a 12-stage dynode chain resulting in an approximate overall gain of $`10^6`$ when operated at 900 V. From the measurements made by Hamamatsu, the quantum efficiency of a typical MAPMT as a function of the wavelength is shown in Figure 5.
During some runs, pyrex filters were placed in front of the photodetectors in order to limit the transmission to longer wavelengths where the refractive index of the radiators is almost constant. In Figure 6 the transmission of pyrex as a function of photon wavelength is plotted.
### 2.4 Experimental Configurations
The detector configurations used are summarised in Table 1. In configuration 1, seven 61-pixel HPDs and one MAPMT were placed on a ring of radius 113 mm on the detector plate. In configurations 2 and 3, a 2048-pixel HPD and three 61-pixel HPDs were placed on a ring of radius 90 mm on the detector plate. In addition to these configurations, the different radiator, beam and photodetector conditions used for the various runs are shown in Table 2.
### 2.5 Data Acquisition System
The 61-pixel HPDs and the MAPMT use analogue readout whereas the 2048-pixel HPD uses binary readout. A detailed description of their respective data acquisition systems can be found in and .
For the analogue readout system, the mean and width of the pedestal distributions for each pixel were calculated using dedicated pedestal runs, interleaved between data runs triggered with beam. Some data were also taken using light emitted from a pulsed Light Emitting Diode (LED) for detailed studies of the photoelectron spectra. Zero suppression was not used on analogue data from the photodetectors.
A pixel threshold map was established on the 2048-pixel HPD using an LED . For this, the high voltage applied on the tube was varied, and the voltage for each channel to become active was recorded. This threshold map was used to identify pixels with too low a threshold, which were then masked. It was also used to identify pixels with too high a threshold and hence insensitive to photoelectrons. A histogram of the threshold map is shown in Figure 7 where the pixels which were masked or insensitive (26$`\%`$) are indicated by the entries in the first bin. For this device, the noise ($`\sigma _N`$) of the readout electronics is 160 electrons (0.6 kV Silicon equivalent) and the distribution of the silicon pixel thresholds has an rms width of 1.6 kV.
In Figure 8 an online display, integrating all events in a run, with seven 61-pixel HPDs and an MAPMT in configuration 1 is shown. Part of the Cherenkov ring falls on the photodetectors and is clearly visible.
## 3 Simulation of RICH2 prototype
The RICH2 prototype configurations are simulated to allow detailed comparisons of expected performance with that found in data. The simulation program generates photons uniformly in energy and with the corresponding Cherenkov angle. The trajectories of these photons, and the photoelectrons they produce, are simulated using the beam divergence, beam composition and the optical characteristics of the various components of the RICH detector shown in Figures 2 to 6. The air radiator is simulated using a gas mixture consisting of 80% Nitrogen and 20% Oxygen.
The program also simulates the response of the various photodetectors. Since the 2048-pixel HPD used binary readout, to study its response the program simulates the threshold map (Figure 7) used for this readout. The simulation of the response of the silicon detector of this HPD is described in Section 4.1.
## 4 Estimates of Photoelectron yield
The average number of photoelectrons detected per event in a photodetector defines the photoelectron yield for that detector. This is determined for the configurations 1 and 2 indicated in Table 1. Since the 61-pixel HPD and the MAPMT use analogue readout, the distinction between signal and background depends upon the threshold above the pedestal peak assigned to the measured photoelectron spectrum. To get the true photoelectron yield at a given threshold, estimates are made for the level of background present and for the amount of signal loss that occurs as a result of applying the threshold cut, specified in terms of the width ($`\sigma `$) of the pedestal spectrum.
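The correction logic described above can be sketched in a few lines, assuming a simple model in which the raw above-threshold count is reduced by the estimated background and then scaled up for the signal loss; all numbers in the example are illustrative, not the measured values:

```python
def corrected_yield(raw_yield, background, signal_loss_fraction):
    """True photoelectron yield from a raw above-threshold count.

    raw_yield            -- mean hits per event above the threshold cut
    background           -- estimated background hits per event above the cut
    signal_loss_fraction -- fraction of genuine signal removed by the cut
    """
    return (raw_yield - background) / (1.0 - signal_loss_fraction)

# Illustrative numbers only: a raw yield of 0.50, a background of 0.05
# and a 10% signal loss give a corrected yield of 0.45/0.9 = 0.5.
print(corrected_yield(0.50, 0.05, 0.10))
```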
In the two types of HPDs, there is an 18$`\%`$ probability, at normal incidence, for electrons to backscatter at the silicon surface, causing some loss of signal. In the 61-pixel HPD, the backscattered electrons can "bounce" off the silicon surface more than once, whereas in the 2048-pixel HPD the electric field is such that they do not return to the silicon detector. Passage through the dead layers of the silicon wafer can also cause a small amount of signal loss in the HPDs. Since the 2048-pixel HPD uses binary readout, its photoelectron yield depends mainly upon the threshold map of the readout system.
From the estimate of the photoelectron yield ($`N_{pe}`$) of a photodetector, the figure of merit ($`N_0`$) is calculated using:
$`N_0=N_{pe}/(\epsilon _AL\mathrm{sin}^2\theta _c)`$ where $`\epsilon _A`$ is the fraction of the Cherenkov ring covered by the photodetector, $`L`$ is the length of the radiator and $`\theta _c`$ is the mean Cherenkov angle measured using the method described in Section 5.
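As a minimal sketch, the figure of merit can be evaluated directly from this formula; the numbers in the example are illustrative placeholders, not the measured yields or acceptances of these tests:

```python
import math

def figure_of_merit(n_pe, eps_a, length_cm, theta_c_rad):
    """N0 = N_pe / (eps_A * L * sin^2(theta_c)), in cm^-1."""
    return n_pe / (eps_a * length_cm * math.sin(theta_c_rad) ** 2)

# Illustrative values: 0.5 photoelectrons per detector, 25% ring
# coverage, a 180 cm air radiator and a 23 mrad Cherenkov angle.
print(figure_of_merit(0.5, 0.25, 180.0, 0.023))
```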
### 4.1 Photoelectron yield for the 2048-pixel HPD
The response of the silicon detector of this HPD is simulated as follows:
Each photoelectron is accelerated through a potential of 20 kV towards the silicon surface. The probability for backscattering at the silicon surface is 18 $`\%`$. During the backscattering process, only a fraction of the 20 keV energy is released in the silicon detector. For an energy release varying from 5 to 20 keV, the energy loss in the dead layer of the silicon ranges from 5 to 1.2 keV as described in and references therein. A readout channel is expected to fire only when the charge signal generated in the silicon detector exceeds the corresponding pixel threshold by at least 4 times the electronic noise.
A flat background of 0.01 photoelectrons per event is observed across the detector surface in the real data, arising from beam related sources such as photons and photoelectrons reflected in random directions from different surfaces in the prototype. This is also incorporated into the simulation. The resultant photoelectron yield from the simulation in the presence of a pyrex filter is shown in Figure 9(a), and in the absence of any filter is shown in Figure 9(b).
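A toy Monte Carlo illustrating this response model might look as follows. The gain of 270 electrons per keV, the flat 5 to 20 keV energy release for backscattered electrons and the example threshold are simplifying assumptions for the sketch, not the tuned values of the actual simulation:

```python
import random

def detected(threshold_e, noise_e=160, gain_per_kev=270, hv_kev=20.0,
             p_backscatter=0.18, rng=random):
    """Toy model of one photoelectron hitting the binary-readout silicon.

    A backscattered electron deposits only part of the 20 keV, drawn
    uniformly between 5 and 20 keV (a simplification of the dead-layer
    treatment); the channel fires when the charge signal exceeds the
    pixel threshold by 4 times the electronic noise.
    """
    energy = hv_kev
    if rng.random() < p_backscatter:
        energy = rng.uniform(0.25, 1.0) * hv_kev  # partial energy release
    charge = energy * gain_per_kev                # electrons
    return charge > threshold_e + 4 * noise_e

random.seed(1)
eff = sum(detected(3000) for _ in range(100000)) / 100000
print(round(eff, 3))  # detection efficiency of the toy model
```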
The systematic error in the photoelectron yield is evaluated from the simulation by varying the parameters which are listed below. The results of these variations are tabulated in Table 3.
* Quantum efficiency of the phototube: The quantum efficiency of the 2048-pixel HPD is found to be approximately half that of the 61-pixel HPD. The simulation is repeated by replacing the quantum efficiency of the 2048-pixel HPD with those from the 61-pixel HPD, scaled down by a factor of two.
* Amount of photon absorption in oxygen: The simulation is repeated with and without activating the photon absorption although this is significant only for wavelengths below 195 nm.
* Wavelength cutoff of the photocathode: To account for any variation in the active wavelength range among different versions of the photocathodes, the simulation is repeated by varying the lower cutoff between 190 nm and 200 nm, and the upper cutoff between 600 nm and 900 nm.
* Backscattering probability at the silicon surface: The simulation is repeated by varying the backscattering probability between 16$`\%`$ and 20 $`\%`$.
The simulated photoelectron yield per detector in the case without any filter is 0.46 $`\pm `$ 0.07, whereas in real data the yield is 0.49 (Figure 9 (b)). The simulated yield per detector, for the case with the pyrex filter, is 0.18 $`\pm `$ 0.02 and the corresponding yield in real data is 0.15 (Figure 9 (a)). Using these yields, the figure of merit is estimated to be 97 $`\pm `$ 16 $`cm^{-1}`$ in the case without any filter and 30 $`\pm `$ 5 $`cm^{-1}`$ in the case with the pyrex filter. For the case without any filter, an independent determination of the figure of merit for the same tube, agrees with the present estimate.
### 4.2 Photoelectron yield for the 61-pixel HPD
Figure 10 shows a typical photoelectron spectrum obtained from a single pixel in a 61-pixel HPD. The peaks corresponding to the pedestal and signal can be clearly seen. In similar distributions obtained for each of the pixels, the background contamination in the photoelectron yield and the amount of signal lost are estimated as a function of the threshold cut using two different analysis methods. One of these methods is described below and the other one is described in Section 4.3 where similar estimates are made for the MAPMT.
The signal loss is estimated using data where the signals were provided by photons from an LED as only these runs have adequate statistics for this purpose. The signal loss is considered to have a Gaussian component and a backscattering component which are described below.
An example of the spectra for each detector pixel in LED data is shown in Figure 11. It can be divided mainly into three parts identified as distributions for the pedestal, one photoelectron and two photoelectrons, in addition to two underlying distributions corresponding to the backscattering contributions to the single and double photoelectron spectra. In order to estimate these backscattering contributions, a backscattering probability of 18$`\%`$ is assumed. The energy distribution of the backscattered electrons is obtained by convolving the distribution of the energy fraction of the backscattered electrons for 10 keV electrons incident on aluminium, obtained from , with a Gaussian that has the same width as that of the pedestal spectrum in LED data.
The ADC spectrum in LED data is fitted with a function that models the spectrum as a sum of three Gaussians with contributions from two backscattering components. The three Gaussians correspond to the distributions of the pedestal, one photoelectron and two photoelectrons. The result of the fit is superimposed on the ADC spectrum in Figure 11. The widths of the Gaussians for the photoelectrons are then corrected to account for the slight difference in the widths of the pedestal observed in LED data and Cherenkov photon data.
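The core of such a fit model, leaving out the two backscattering components for brevity, can be sketched as a sum of three Gaussians; the peak parameters below are placeholders, not fitted values:

```python
import math

def gauss(x, amp, mu, sigma):
    """Unnormalized Gaussian peak."""
    return amp * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def spectrum_model(x, params):
    """Sum of pedestal, one- and two-photoelectron Gaussians.

    params = [(amp, mu, sigma), ...] for the three peaks; the two
    backscattering components of the real fit are omitted here.
    """
    return sum(gauss(x, *p) for p in params)

# Placeholder peak parameters in ADC counts (not fitted values):
peaks = [(1000.0, 0.0, 5.0), (300.0, 50.0, 8.0), (45.0, 100.0, 11.0)]
print(spectrum_model(50.0, peaks))  # dominated by the 1-pe peak
```

In practice such a model would be passed to a least-squares fitter with the backscattering shapes added as two extra templates.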
In the region below the threshold cut, the sum of the area which is under the one photoelectron Gaussian and the corresponding backscattering component is then taken as the sum of the Gaussian and backscattering components of the signal loss.
This procedure is repeated using a different LED run and varying the backscattering probability between 16$`\%`$ and 20$`\%`$. The resultant variations obtained in the signal loss estimate are taken as contributions to systematic error from this method.
At the threshold cut of 3$`\sigma `$, the Gaussian component of the signal loss is 0.9$`\%`$ whereas the backscattering component is 11.2$`\%`$.
The background remaining in the Cherenkov photoelectron spectrum after a given threshold cut is considered to have a Gaussian component due to electronic noise, and a non-Gaussian component induced by detector noise and photons from extraneous sources. For the first component, a single Gaussian is fitted to the pedestal part of this spectrum. The area under this fit spectrum above the threshold cut is then taken as the Gaussian component of the background. This procedure is repeated changing the upper range of the Gaussian fit from 1.2$`\sigma `$ to 2$`\sigma `$ and the resultant variation in the background estimate is taken as a contribution to the systematic error.
In order to evaluate the second component, data from pedestal runs are used. The fraction of the spectrum above the threshold cut, after removing the fit single Gaussian to the pedestal spectrum, is taken as the non-Gaussian component. The variation in this estimate obtained using different pedestal runs is taken as a contribution to the systematic error.
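For the Gaussian component, the fraction of pure-noise pedestal entries surviving a cut at $`k\sigma `$ is just the upper-tail probability of a normal distribution, which can be computed with the complementary error function:

```python
import math

def gaussian_tail_fraction(k):
    """Fraction of a Gaussian above mu + k*sigma."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

# Fraction of pure-noise pedestal entries surviving a 3-sigma cut:
print(gaussian_tail_fraction(3.0))  # about 1.35e-3 per pixel
```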
After correcting the distribution of the number of photoelectrons in each pixel for background and signal loss, their spatial distribution on the silicon surface is fitted with a function which assumes the Cherenkov angle distribution to be a Gaussian. A residual flat background observed in this fit is considered as beam related background and is subtracted from the photoelectron signal. The fit is repeated by varying the parameters of the function and the resultant variations in the background estimate are taken as contributions to the systematic error.
The results obtained for the photoelectron multiplicities after correcting for background and signal loss using the above method are reported below. These are in agreement with the results obtained from the alternative method described in the next section.
In these estimates, the statistical error is found to be negligible compared to the overall systematic error which is obtained by adding the various contributions in quadrature. The contributions to the systematic error are shown in Table 4. In Table 5, the corrected photoelectron yields for the data with pyrex filter and with no filter are shown along with the corresponding expectations from simulation. The yields from data and simulation agree.
As a systematic check, the stability of the corrected photoelectron yields obtained by varying the threshold cut from 2$`\sigma `$ to 5$`\sigma `$ for the data with pyrex filter, is shown in Table 6. The small variation seen in the yields between 3$`\sigma `$ and 4$`\sigma `$ is quoted as a systematic error contribution in Table 4. The fact that the corrected photoelectron yields estimated are independent of the threshold cut and that the two analysis methods yield similar results give confidence in the results shown in Table 5.
Using the yield estimates in Table 5, the figure of merit is estimated to be 89 $`\pm `$ 8 $`cm^{-1}`$ in the case with pyrex filter and 258 $`\pm `$ 24 $`cm^{-1}`$ in the case without any filter.
### 4.3 Photoelectron yield for MAPMT
Figure 12 shows a typical pulse height distribution for a pixel in the MAPMT in beam triggered runs. The photoelectron signal and pedestal peaks can be clearly distinguished. The amount of signal lost and the amount of background contamination to the photoelectron yield are estimated using the method described below.
This method also uses data where the photons from an LED provided signals to the MAPMT. A Gaussian is fit to the pedestal part of the pulse height distribution. The contribution of the pedestal is removed, and in the remaining spectrum that part below the threshold cut is taken to be the signal loss. The contributions to the systematic error in this estimate are listed below:
* The change in signal loss obtained by swapping the width of the pedestal in Cherenkov photon data with that from LED data, is taken as a contribution to the systematic error.
* In the Cherenkov photon data and LED data, the ranges of the fits to the pedestals are varied and any resultant change in the signal loss is taken as the contribution to the systematic error.
In order to estimate the background level, data from a special run are used where the pressure in the $`CF_4`$ radiator was reduced such that the Cherenkov ring passed through a different set of pixels than in the other runs. In these data, the photoelectron yield is estimated after applying the threshold cut to the spectrum from the pixels which are selected to be off the Cherenkov ring. Assuming a uniform background across the MAPMT, this yield is taken as the background contribution. This procedure is repeated by varying the set of pixels which are selected for this estimate and the resultant change in the background estimate is taken as a contribution to the systematic error.
These estimates for the background level and signal loss are repeated for different threshold cuts in the spectra with the results given in Table 7. The photoelectron yields resulting from these estimates are independent of the threshold cuts applied. The systematic error in this measurement is estimated in the same way as for the 61-pixel HPD described in the previous section. Above a threshold cut of 3 $`\sigma `$, the yield after the corrections is estimated to be 0.48 $`\pm `$ 0.03. The corresponding expectation from simulation is 0.52. The discrepancy between data and simulation is attributed to the uncertainty in the knowledge of the quantum efficiency of the particular MAPMT used in these tests. Using this yield estimate, the figure of merit is estimated to be 155 $`\pm `$ 13 $`cm^{-1}`$.
## 5 Resolution of the Reconstructed Cherenkov Angle
As described in , the reconstruction of the Cherenkov angle requires the coordinates of the hit on the photodetector, the centre of curvature of the mirror and the photon emission point (E) which is assumed to be the middle point of the track in the radiator. The point (M) where the photons are reflected off the mirror, is reconstructed using the fact that it lies in the plane defined by the aforementioned three points. The reconstructed Cherenkov angle is the angle between the beam direction and the line joining E and M.
Figures 13(a),(b) show the Cherenkov angle distribution obtained using air radiator and 100 GeV/c pions for the 2048-pixel HPD and a 61-pixel HPD which were diametrically opposite to each other on the detector plate in configuration 2 with pyrex filter. The 2048-pixel HPD has a better resolution than the 61-pixel HPD since the pixel granularity is 0.2 mm for the former and 2 mm for the latter. Figure 13(c) shows the Cherenkov angle distribution obtained using $`CF_4`$ radiator and 120 GeV/c pions for an MAPMT with 2.3 mm pixel granularity in configuration 1.
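For orientation, the Cherenkov angles expected for these beam settings can be estimated from $`\mathrm{cos}\theta _c=1/(n\beta )`$. The refractive indices below are assumed round numbers for visible light (about 1.000273 for air and 1.00048 for $`CF_4`$), not measured values from these tests:

```python
import math

def cherenkov_angle_mrad(n, p_gev, mass_gev):
    """theta_c = acos(1/(n*beta)); returns mrad, or None below threshold."""
    e = math.hypot(p_gev, mass_gev)   # total energy in GeV
    beta = p_gev / e
    c = 1.0 / (n * beta)
    return None if c > 1.0 else 1000.0 * math.acos(c)

M_PION = 0.13957  # GeV
# Assumed indices at visible wavelengths (illustrative, not measured):
print(cherenkov_angle_mrad(1.000273, 100.0, M_PION))  # air, 100 GeV/c pions
print(cherenkov_angle_mrad(1.00048, 120.0, M_PION))   # CF4, 120 GeV/c pions
```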
### 5.1 Sources of Uncertainty in the Cherenkov Angle Measurement
* Chromatic Error: This is due to the variation of refractive index of the radiator with wavelength and is largest in the UV region. Use of pyrex filters reduces this contribution.
* Emission point uncertainty: This comes from the fact that the mirror is tilted with respect to the beam axis and that the emission point is assumed to be in the middle of the radiator, regardless of the true but unknown point of emission.
* Pixel size of Photodetector.
* Measurement of beam trajectory: This contribution comes from the granularity of the pixels in the silicon detectors which are used to measure the direction of the incident beam particle.
* Alignment: This contribution comes from residual misalignments between the silicon telescope, the mirror and the photodetectors.
In Table 8 the resolutions from each of the above components are tabulated for each of the three photodetectors in typical configurations. In each case, the overall simulated resolution is in good agreement with that measured in the beam triggered data.
In configuration 1 with seven HPDs it was possible to perform a detailed investigation of the Cherenkov angle resolution. Figure 14(a) shows the resolution measured in data and from simulation for each of the seven 61-pixel HPDs in this configuration. Agreement is seen between data and simulation in all cases. Each HPD in this figure was located at a different azimuth on the detector plate and hence has a different emission point uncertainty. Hence the overall resolutions of the different HPDs differ. Figure 14(b) shows the same resolutions, for the data using the pyrex filter, which reduces the contribution from chromatic error.
The expectation from the LHCb Technical proposal is to have a resolution of 0.35 mrad which is already achieved for the MAPMT, the 2048-pixel HPD and some of the HPDs shown in Figure 14.
### 5.2 Multiphoton Resolution
The mean value of the Cherenkov angle from all the photoelectron hits in each event is calculated for the data from the seven 61-pixel HPDs in configuration 1 without pyrex filters. The width of this distribution versus the number of photoelectrons detected per trigger is plotted in Figure 15. For a perfectly aligned system, the width is expected to be inversely proportional to the square root of the number of photoelectrons as indicated by the curve. The disagreement between data and simulation is compatible with the residual misalignment in the system which is of the order of 0.1 mrad.
### 5.3 Particle Identification
Figure 16 shows the Cherenkov angle distribution for the 2048-pixel HPD without pyrex filter in configuration 3 where the beam used was a mixture of pions and electrons at 10.4 GeV/c. Good separation is obtained between the two particle types. Figure 17 shows the plot of the mean Cherenkov angle calculated from the hits in the 61-pixel HPDs without pyrex filter in configuration 1, where the beam was a mixture of kaons and pions, approximately in the ratio 1:9, at 50 GeV/c. Peaks corresponding to the two charged particle types can be seen in this figure.
## 6 Summary and Outlook for the Future
The goals set for the RICH2 prototype tests have largely been accomplished. The performance of the $`CF_4`$ radiator and the optical layout of the RICH2 detector have been tested. Photoelectron yields from the prototype HPDs and MAPMTs have been measured and found to agree with simulations. A Cherenkov angle precision of 0.35 mrad as assumed in the LHCb technical proposal has been demonstrated with all three photodetectors.
Improvements in the integrated quantum efficiency of both HPDs and MAPMTs are expected in future devices. The LHCb RICH detector will require photodetectors with a higher ratio of active to total area than those tested here. HPDs with 80% active area and a lens system for MAPMTs are currently being developed. These will be tested with LHC compatible readout (25 ns shaping time) during 1999-2000.
## 7 Acknowledgements
This work has benefited greatly from the technical support provided by our colleagues at the institutes participating in this project. In particular the mirror reflectivity and the pyrex transmission were measured by A. Braem. The radiator vessel extensions were manufactured by D. Clark and I. Clark. The printed circuits for the MAPMT were designed and assembled by S. Greenwood. The silicon telescope was provided by E. Chesi and J. Seguniot. We also received valuable advice and assistance from our colleagues in the LHCb collaboration, in particular from R. Forty, O. Ullaland and T. Ypsilantis.
Finally, we gratefully acknowledge the CERN PS division for providing the test beam facilities and the UK Particle Physics and Astronomy Research Council for the financial support. |
# Evidence for a dynamic origin of charge
## I The nature of charge
Since the discovery of the electron by J. J. Thomson the concept of electric charge has remained nearly unchanged. Apart from Lorentz' extended electron , or Abraham's electromagnetic electron , the charge of an electron remained a point like entity, in one way or another related to electron mass . In atomic nuclei we think of charge as a smeared out region of space, which is structured by the elementary constituents of nuclear particles, the quarks .
The first major modification in this picture occurred only in the last decades, when experiments on the quantum hall effect suggested the existence of "fractional charge" of electrons. Although this effect has later been explained on the basis of standard theory , its implications are worth a more thorough analysis, because it cannot be excluded that the same feature, fractional or even continuous charge, will show up in other experiments, especially since experimental practice focuses more and more on the properties of single particles. And in this case the conventional picture, which is based on discrete and unchangeable charge of particles, may soon prove too narrow a frame of reference. It seems therefore justified, at this point, to analyze the very nature of charge itself. A nature, which would reveal itself as an answer to the question: What is charge?
It must be noted, in this respect, that the picture of continuous charge, in classical theories, is due to the omission of the atomic structure of matter. In any modern sense, continuous charge can only be recovered by considering dynamic processes within the very particles themselves.
With this problem in mind, we reanalyze the fundamental equations of intrinsic particle properties . The consequences of this analysis are developed in two directions. First, we determine the interface between mechanic and electromagnetic properties of matter, where we find that only one fundamental constant describes it: Planck's constant $`\mathrm{\hbar }`$. And second, we compute the fields of interaction within a hydrogen atom, where we detect oscillations of the proton density of mass as their source. Finally, the implications of our results in view of unifying gravity and quantum theory are discussed and a new model of gravity waves derived, which is open to experimental tests.
## II The origin of dynamic charge
The intrinsic vector field $`\mathbf{E}(\mathbf{r},t)`$, the momentum density $`\mathbf{p}(\mathbf{r},t)`$, and the scalar field $`\varphi (\mathbf{r},t)`$ of a particle are described by (see , Eq. (18)):
$`\mathbf{E}(\mathbf{r},t)=-{\displaystyle \frac{1}{\overline{\sigma }}}\mathbf{\nabla }\varphi (\mathbf{r},t)+{\displaystyle \frac{1}{\overline{\sigma }}}{\displaystyle \frac{\partial }{\partial t}}\mathbf{p}(\mathbf{r},t)`$ (1)
Here $`\overline{\sigma }`$ is a dimensional constant introduced for reasons of consistency. Rewriting the equation with the help of the definitions:
$`\beta :={\displaystyle \frac{1}{\overline{\sigma }}}\qquad \beta \varphi (\mathbf{r},t):=\varphi (\mathbf{r},t)`$ (2)
we obtain the classical equation for the electric field, where in place of a vector potential $`\mathbf{A}(\mathbf{r},t)`$ we have the momentum density $`\mathbf{p}(\mathbf{r},t)`$. This similarity, as already noticed, bears on the Lorentz gauge as an expression of the energy principle ( Eqs. (26) - (28)).
$`\mathbf{E}(\mathbf{r},t)=-\mathbf{\nabla }\varphi (\mathbf{r},t)+\beta {\displaystyle \frac{\partial }{\partial t}}\mathbf{p}(\mathbf{r},t)`$ (3)
Note that $`\beta `$ describes the interface between dynamic and electromagnetic properties of the particle. Taking the divergence of (3) and using the continuity equation for $`\mathbf{p}(\mathbf{r},t)`$:
$`\mathbf{\nabla }\mathbf{p}(\mathbf{r},t)+{\displaystyle \frac{\partial }{\partial t}}\rho (\mathbf{r},t)=0`$ (4)
where $`\rho (\mathbf{r},t)`$ is the density of mass, we get the Poisson equation with an additional term. And if we include the source equation for the electric field $`\mathbf{E}(\mathbf{r},t)`$:
$`\mathbf{\nabla }\mathbf{E}(\mathbf{r},t)=\sigma (\mathbf{r},t),`$ (5)
$`\sigma (\mathbf{r},t)`$ being the density of charge, $`\epsilon `$ set to 1 for convenience, we end up with the modified Poisson equation:
$`\mathrm{\Delta }\varphi (\mathbf{r},t)=-\underset{\text{static charge}}{\underbrace{\sigma (\mathbf{r},t)}}-\underset{\text{dynamic charge}}{\underbrace{\beta {\displaystyle \frac{\partial ^2}{\partial t^2}}\rho (\mathbf{r},t)}}`$ (6)
The first term in (6) is the classical term in electrostatics. The second term does not have a classical analogue; it is an essentially novel source of the scalar field $`\varphi `$. Its novelty is due to the fact that no dynamic interpretation of the vector potential $`\mathbf{A}(\mathbf{r},t)`$ exists, whereas, in the current framework, $`\mathbf{p}(\mathbf{r},t)`$ has a dynamic meaning: that of momentum density.
To appreciate the importance of the new term, think of an aggregation of mass in a state of oscillation. In this case the second derivative of $`\rho `$ is a periodic function, which is, by virtue of Eq. (6), equal to periodic charge. Then this dynamic charge gives rise to a periodic scalar field $`\varphi `$. This field appears as a field of charge in periodic oscillations: hence its name, dynamic charge. It should be noted that dynamic charge is essentially different from a classical dipole: in that case the field can appear zero (cancellation of opposing effects), whereas in case of dynamic charge it is zero. Even, as shall be seen presently, for monopole oscillations.
## III Oscillations of a proton
We demonstrate the implications of Eq. (6) on an easy example: the radial oscillations of a proton. The treatment is confined to monopole oscillations, although the results can easily be generalized to any multipole. Let a proton's radius be a function of time, so that $`r_p=r_p(t)`$ will be:
$`r_p(t)=R_p+d\mathrm{sin}\omega _Ht`$ (7)
Here $`R_p`$ is the original radius, $`d`$ the oscillation amplitude, and $`\omega _H`$ its frequency. Then the volume of the proton $`V_p`$ and, consequently, its density of mass $`\rho _p`$ depend on time. In first order approximation we get:
$`\rho _p(t)`$ $`=`$ $`{\displaystyle \frac{3M_p}{4\pi }}\left(R_p+d\mathrm{sin}\omega _Ht\right)^{-3}`$ (8)
$`\rho _p(t)`$ $`\approx `$ $`\rho _0\left(1-x\mathrm{sin}\omega _Ht\right)\qquad x:={\displaystyle \frac{3d}{R_p}}`$ (9)
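The first-order expansion in Eq. (9) can be checked numerically against the exact inverse-cube density of Eq. (8); units and amplitudes below are arbitrary, with the proton mass set to 1 and $`dR_p`$:

```python
import math

# Compare rho0*(1 - x*sin(w t)) with the exact (R + d*sin(w t))**-3
# density for a small oscillation amplitude d.
R, d, w = 1.0, 0.01, 1.0                 # arbitrary units, d << R
x = 3.0 * d / R
rho0 = 3.0 / (4.0 * math.pi * R ** 3)    # mass M_p set to 1

worst = 0.0
for i in range(100):
    t = 0.1 * i
    exact = 3.0 / (4.0 * math.pi * (R + d * math.sin(w * t)) ** 3)
    approx = rho0 * (1.0 - x * math.sin(w * t))
    worst = max(worst, abs(exact - approx) / exact)

print(worst)  # relative error of order (d/R)^2
```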
The Poisson equation for the dynamic contribution to proton charge then reads:
$`\mathrm{\Delta }\varphi (\mathbf{r},t)=-\beta x\rho _0\omega _H^2\mathrm{sin}\omega _Ht`$ (10)
Integrating over the volume of the proton we find for the dynamic charge of the oscillating proton the expression:
$`q_D(t)={\displaystyle \int _{V_p}}d^3r\,\beta x\rho _0\omega _H^2\mathrm{sin}\omega _Ht=\beta xM_p\omega _H^2\mathrm{sin}\omega _Ht`$ (11)
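Equation (11) can be verified by differentiating the first-order density model twice in time with finite differences ($`\beta `$ set to 1, arbitrary illustrative units):

```python
import math

# Finite-difference check of Eq. (11): the second time derivative of
# rho_p(t)*V_p = M_p*(1 - x*sin(w t)) equals x * M_p * w^2 * sin(w t),
# which is the dynamic charge for beta = 1.
M_p, x, w = 1.0, 0.03, 2.0
rho_V = lambda t: M_p * (1.0 - x * math.sin(w * t))

h, worst = 1e-4, 0.0
for i in range(50):
    t = 0.05 * i
    d2 = (rho_V(t + h) - 2.0 * rho_V(t) + rho_V(t - h)) / h ** 2
    q_dynamic = d2  # = beta * x * M_p * w^2 * sin(w t) per Eq. (11)
    worst = max(worst, abs(q_dynamic - x * M_p * w ** 2 * math.sin(w * t)))

print(worst)
```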
This charge gives rise to a periodic field within the hydrogen atom, as already analyzed in some detail and in a slightly different context . We shall turn to the calculation of a hydrogen's fields of interaction in the following sections. But in order to fully appreciate the meaning of the dynamic aspect it is necessary to digress at this point and to turn to the discussion of electromagnetic units.
## IV Natural electromagnetic units
By virtue of the Poisson equation (6) dynamic charge must be dimensionally equal to static charge, which for a proton is + e. But since it is, in the current framework, based on dynamic variables, the choice of $`\beta `$ also defines the interface between dynamic and electromagnetic units. From (11) we get, dimensionally:
$`[e]=[\beta ][M_p\omega _H^2]\qquad [\beta ]=\left[{\displaystyle \frac{e}{M_p\omega _H^2}}\right]`$ (12)
The unit of $`\beta `$ is therefore, in SI units:
$`[\beta ]=C{\displaystyle \frac{s^2}{kg}}=C{\displaystyle \frac{m^2}{J}}[SI]`$ (13)
We define now the natural system of electromagnetic units by setting $`\beta `$ equal to 1. Thus:
$`[\beta ]:=1\qquad [C]={\displaystyle \frac{J}{m^2}}`$ (14)
The unit of charge C is then energy per unit area of a surface. Why, it could be asked, should this definition make sense? Because, would be the answer, it is the only suitable definition, if electrostatic interactions are accomplished by photons.
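These unit relations can be checked mechanically. The sketch below does exponent bookkeeping over the SI base units (kg, m, s) and confirms that, once C is identified with J/m², the combination C s²/kg = C m²/J of Eq. (13) carries no residual dimension, consistent with setting $`\beta =1`$ in Eq. (14):

```python
# Dimensional bookkeeping over SI base units (kg, m, s): each unit is a
# tuple of exponents, multiplication adds exponents.
def mul(a, b): return tuple(x + y for x, y in zip(a, b))
def inv(a):    return tuple(-x for x in a)

KG, M, S = (1, 0, 0), (0, 1, 0), (0, 0, 1)
J = mul(KG, mul(M, mul(M, mul(inv(S), inv(S)))))   # kg m^2 s^-2
C = mul(J, inv(mul(M, M)))                          # J / m^2 = kg s^-2

beta_a = mul(C, mul(mul(S, S), inv(KG)))            # C s^2 / kg
beta_b = mul(C, mul(mul(M, M), inv(J)))             # C m^2 / J

print(beta_a, beta_b)  # both (0, 0, 0): dimensionless
```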
Suppose a $`\delta ^3(\mathbf{r}-\mathbf{r}^{\prime })`$ like region around $`\mathbf{r}^{\prime }`$ is the origin of photons interacting with another $`\delta ^3(\mathbf{r}-\mathbf{r}^{\prime \prime })`$ like region around $`\mathbf{r}^{\prime \prime }`$. Then $`\mathbf{r}^{\prime }`$ is the location of charge. Due to the geometry of the problem the interaction energy will decrease with the square of $`|\mathbf{r}^{\prime }-\mathbf{r}^{\prime \prime }|`$. What remains constant, and thus characterizes the charge at $`\mathbf{r}^{\prime }`$, is only the interaction energy per surface unit. Thus the definition, which applies to all $`r^{-2}`$ like interactions, also, in principle, to gravity.
Returning to the question of natural units, we find that all the other electromagnetic units follow straightforwardly from the fundamental equations . They are displayed in Table I.
If we analyze the units in Lorentz' force equation, we observe, at first glance, an inconsistency.
$`\mathbf{F}_L=q\left(\mathbf{E}+\mathbf{u}\times \mathbf{B}\right)`$ (15)
The unit on the left, Newton, is not equal to the unit on the right. As a first step to solve the problem we include the dielectric constant $`\epsilon ^{-1}`$ in the equation, since this is the conventional definition of the electric field $`\mathbf{E}`$. Then we have:
$`[\mathbf{F}_L]={\displaystyle \frac{Nm}{m^2}}\left({\displaystyle \frac{m^4}{N}}{\displaystyle \frac{N}{m^3}}+{\displaystyle \frac{m}{s}}{\displaystyle \frac{Ns}{m^4}}\right)=N+N{\displaystyle \frac{N}{m^4}}`$ (16)
Interestingly, now the second term, which describes the magnetic forces, is wrong in the same manner as the first term was before we included the dielectric units. It seems thus that the dimensional problem can be solved by a constant $`\eta `$, which is dimensionally equal to $`\epsilon `$, and by rewriting the force equation (15) in the following manner:
$`๐
_L={\displaystyle \frac{q}{\eta }}\left(๐+๐ฎ\times ๐\right)`$ (17)
$`[\eta ]=Nm^{-4}=Cm^{-3}=[\sigma ]`$ (18)
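The dimensional bookkeeping behind Eqs. (16)-(18) can be checked mechanically. The following is a minimal sketch, not part of the original derivation, which encodes each unit as an exponent tuple over (N, m, s) and assumes the assignments worked out above: [C] = N/m, [E] = N/m^3, [B] = N s/m^4, and [eta] = N m^-4.

```python
# Each unit is a tuple of exponents over (N, m, s); multiplying units adds exponents.
def mul(*units):
    return tuple(sum(e) for e in zip(*units))

q   = (1, -1, 0)    # charge, C = J/m^2 = N/m
eta = (1, -4, 0)    # assumed [eta] = N m^-4, matching [epsilon]
E   = (1, -3, 0)    # electric field, N/m^3
B   = (1, -4, 1)    # magnetic field, N s/m^4
u   = (0, 1, -1)    # velocity, m/s

inv_eta = tuple(-e for e in eta)
electric = mul(q, inv_eta, E)        # units of (q/eta) E
magnetic = mul(q, inv_eta, u, B)     # units of (q/eta) u x B
print(electric, magnetic)            # both (1, 0, 0), i.e. Newton
```

Both terms of Eq. (17) then come out in Newtons, which is exactly the consistency that the constant $`\eta `$ was introduced to provide.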
The modification of (15) has an implicit meaning, which is worth emphasizing. It is common knowledge in special relativity that electric and magnetic fields are only different aspects of one situation. They are part of a common field tensor $`F_{\mu \nu }`$ and transform into each other under Lorentz transformations. From this point of view the treatment of electric and magnetic fields in the SI, where we end up with two different constants ($`ฯต,\mu `$), seems to go against the requirement of simplicity. On the other hand, the approach in quantum field theory, where one employs in general only a dimensionless constant at the interface to electrodynamics, the fine-structure constant $`\alpha `$, overshoots the mark, because the information whether we deal with the electromagnetic or the mechanical aspect of a situation is lost. The natural system, although not completely free of difficulties, as seen further down, seems a suitable compromise. Different aspects of the intrinsic properties, which are generally electromagnetic, are not distinguished; no scaling is necessary between $`๐ฉ,๐`$ and $`๐`$. The only constant necessary is at the interface to mechanical properties, which is $`\eta `$. This also holds for the fields of radiation, which we can describe by:
$`\varphi _{Rad}(๐ซ,t)={\displaystyle \frac{1}{8\pi \eta }}\left(๐^2+c^2๐^2\right)`$ (19)
Note that in the natural system the usage, or the omission, of $`\eta `$ ultimately determines whether a variable is to be interpreted as an electromagnetic or a mechanical property. Forces and energies are mechanical, whereas momentum density is not. The numerical value of $`\eta `$ has to be determined by explicit calculations. This will be done in the next sections. We conclude this section by comparing the natural system of electromagnetic units to existing systems.
From an analysis of the Maxwell equations one finds three dimensional constants $`k_1,k_2,k_3`$, and a dimensionless one, $`\alpha `$, which acquire different values in different systems (see Table II).
Judging by the number of dimensional constants it seems that the natural system is most similar to the electrostatic system of units. However, since we have defined a separate interface to mechanic properties, it is free of the usual nuisance of fractional exponents without a clear physical meaning. The other difference is that c, in the esu, is a constant, whereas it only signifies the velocity of a particle in the natural system. For photons this velocity equals c, but for electrons it is generally much smaller. We note in passing that all the fundamental relations for the intrinsic fields remain valid. Also the conventional relations for the forces of interaction and the radiation energy remain functionally the same. Only the numerical values will be different.
Comparing with existing systems we note three distinct advantages: (i) The system reflects the dynamic origin of fields, and it is based on only three fundamental units: m, kg, s. A separate definition of the current is therefore obsolete. (ii) There is a clear cut interface between mechanics (forces, energies), and electrodynamics (fields of motion). (iii) The system provides a common framework for macroscopic and microscopic processes.
## V Interactions in hydrogen
Returning to proton oscillations, let us first restate the main differences between a free electron and an electron in a hydrogen atom: (i) The frequency of the hydrogen system is constant, $`\omega _H`$, as is the frequency of the electron wave. It is thought to arise from the oscillation properties of a proton. (ii) Due to this feature the wave equation of the momentum density $`๐ฉ(๐ซ,t)`$ is not homogeneous, but inhomogeneous:
$`\mathrm{\Delta }๐ฉ(๐ซ,t)-{\displaystyle \frac{1}{u^2}}{\displaystyle \frac{\partial ^2}{\partial t^2}}๐ฉ(๐ซ,t)=๐(t)\delta ^3(๐ซ)`$ (20)
for a proton at $`๐ซ=0`$ of the coordinate system. The source term is related to nuclear oscillations. We do not solve (20) directly, but use the energy principle to simplify the problem. From a free electron it is known that the total intrinsic energy density, the sum of a kinetic component $`\varphi _K`$ and a field component $`\varphi _{EM}`$, is a constant of motion:
$`\varphi _K(๐ซ)+\varphi _{EM}(๐ซ)=\rho _0u^2`$ (21)
where $`u`$ is the velocity of the electron and $`\rho _0`$ its density amplitude. We adopt this notion of energy conservation also for the hydrogen electron; we only modify it to account for the spherical setup:
$`\varphi _K(๐ซ)+\varphi _{EM}(๐ซ)={\displaystyle \frac{\rho _0}{r^2}}u^2`$ (22)
The radial velocity of the electron has discrete levels. Due to the boundary value problem at the atomic radius, it depends on the principal quantum number $`n`$. From the treatment of hydrogen we recall for $`u_n`$ and $`\rho _0`$ the results:
$`u_n={\displaystyle \frac{\omega _HR_H}{2\pi n}}\qquad \rho _0={\displaystyle \frac{M_e}{2\pi R_H}}`$ (23)
where $`R_H`$ is the radius of the hydrogen atom and $`M_e`$ the mass of an electron. Since $`\rho _0`$ includes the kinetic as well as the field components of the electron "mass", e.g. in Eq. (22), we can define a momentum density $`๐ฉ_0(๐ซ,t)`$, which equally includes both components. As the velocity $`u_n=u_n(t)`$ of the electron wave in hydrogen is periodic:
$`๐ฎ_n(t)=u_n\mathrm{cos}\omega _Ht๐^r`$ (24)
the momentum density $`๐ฉ_0(๐ซ,t)`$ is given by:
$`๐ฉ_0(๐ซ,t)={\displaystyle \frac{\rho _0u_n}{r^2}}\mathrm{cos}\omega _Ht๐^r`$ (25)
The combination of kinetic and field components in the variables has a physical background: it bears on the result that photons change both components of an electron wave. With these definitions we can use the relation between the electric field and the change of momentum, although now this equation refers to both components:
$`๐_0(๐ซ,t)={\displaystyle \frac{\partial }{\partial t}}๐ฉ_0(๐ซ,t)=-{\displaystyle \frac{\rho _0u_n}{r^2}}\omega _H\mathrm{sin}\omega _Ht๐^r`$ (26)
Note that charge, by definition, is included in the electric field itself. Integrating the dynamic charge of a proton from Eq. (11) and accounting for flow conservation in our spherical setup, the field of a proton will be:
$`๐_0(๐ซ,t)={\displaystyle \frac{q_D}{r^2}}={\displaystyle \frac{M_p\omega _H^2}{r^2}}x\mathrm{sin}\omega _Ht๐^r`$ (27)
Apart from a phase factor the two expressions must be equal. Recalling the values of $`u_n`$ and $`\rho _0`$ from (23), the amplitude $`x`$ of proton oscillation can be computed. We obtain:
$`x={\displaystyle \frac{3d}{R_p}}={\displaystyle \frac{M_e}{(2\pi )^2M_p}}{\displaystyle \frac{1}{n}}`$ (28)
In the highest state of excitation, which for the dynamic model is $`n=1`$, the amplitude is less than $`10^{-5}`$ times the proton radius: oscillations are therefore comparatively small. This result indicates that the scale of energies within the proton is much higher than within the electron. The result is therefore well in keeping with existing nuclear models. For higher $`n`$, and thus lower excitation energy, the amplitude becomes smaller and vanishes for $`n\to \mathrm{\infty }`$.
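The smallness claim can be verified directly from Eq. (28); a quick numerical sketch, where the electron and proton masses are assumed values not quoted in the text:

```python
import math

M_e = 9.109e-31   # electron mass, kg (assumed value)
M_p = 1.673e-27   # proton mass, kg (assumed value)

def amplitude(n):
    # Eq. (28): x = M_e / ((2*pi)^2 * M_p * n)
    return M_e / ((2 * math.pi) ** 2 * M_p * n)

x1 = amplitude(1)
print(x1)   # ~1.4e-5, so d = x1*R_p/3 stays below 1e-5 proton radii
```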
It is helpful to consider the different energy components within the hydrogen atom at a single state, say $`n=1`$, to understand how the electron is actually bound to the proton. The energy of the electron consists of two components.
$`\varphi _K(๐ซ,t)={\displaystyle \frac{\rho _0u_1^2}{r^2}}\mathrm{sin}^2k_1r\mathrm{cos}^2\omega _Ht`$ (29)
is the kinetic component of the electron energy ($`k_1`$ is now the wavevector of the wave). As in the free case, the kinetic component is accompanied by an intrinsic field, which accounts for the energy principle (i.e. the requirement that the total energy density at a given point is a constant of motion). Thus:
$`\varphi _{EM}(๐ซ,t)={\displaystyle \frac{\rho _0u_1^2}{r^2}}\mathrm{cos}^2k_1r\mathrm{cos}^2\omega _Ht`$ (30)
is the field component. The two components together make up the energy of the electron. Integrating over the volume of the atom and a single period $`\tau `$ of the oscillation, we obtain:
$`W_{el}`$ $`=`$ $`{\displaystyle \frac{1}{\tau }}{\displaystyle \int _0^\tau }dt{\displaystyle \int _{V_H}}d^3r\left(\varphi _K(๐ซ,t)+\varphi _{EM}(๐ซ,t)\right)`$ (31)
$`=`$ $`{\displaystyle \frac{1}{2}}M_eu_1^2`$ (32)
This is the energy of the electron in the hydrogen atom. $`W_{el}`$ is equal to 13.6 eV. The binding energy of the electron is the energy difference between a free electron of velocity $`u_1`$ and an electron in a hydrogen atom at the same velocity. Since the energy of the free electron $`W_{free}`$ is:
$`W_{free}=\hbar \omega _H=M_eu_1^2`$ (33)
the energy difference $`\mathrm{\Delta }W`$, the binding energy, comes to:
$`\mathrm{\Delta }W=W_{free}-W_{el}={\displaystyle \frac{1}{2}}M_eu_1^2`$ (34)
This value is also equal to 13.6 eV. It is, furthermore, the energy contained in the photon field $`\varphi _{Rad}(๐ซ,t)`$ of the proton's radiation
$`W_{Rad}`$ $`=`$ $`\mathrm{\Delta }W={\displaystyle \frac{1}{\tau }}{\displaystyle \int _0^\tau }dt{\displaystyle \int _{V_H}}d^3r\varphi _{Rad}(๐ซ,t)`$ (35)
$`=`$ $`{\displaystyle \frac{1}{2}}M_eu_1^2`$ (36)
This energy has to be gained by the electron in order to be freed from its bond; it is the ionization energy of hydrogen. However, in the dynamic picture the electron is not thought to move as a point particle in the static field of a central proton charge; the electron is, in this model, a dynamic and oscillating structure, which constantly emits and absorbs energy via the photon field of the central proton. In a very limited sense, the picture is still a statistical one, since the computation of energies involves the average over a full period.
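The energy bookkeeping of Eqs. (32)-(34) is easy to reproduce numerically, using the value $`\nu _H=6.57\times 10^{15}`$ Hz quoted in the next section; the SI constants below are assumed values:

```python
h    = 6.626e-34   # Planck constant, J s (assumed value)
M_e  = 9.109e-31   # electron mass, kg (assumed value)
eV   = 1.602e-19   # J per eV
nu_H = 6.57e15     # hydrogen frequency, Hz (Section VI)

W_free  = h * nu_H / eV            # Eq. (33): ~27.2 eV
Delta_W = W_free / 2               # Eq. (34): ~13.6 eV, the ionization energy
u_1     = (h * nu_H / M_e) ** 0.5  # implied electron velocity, ~2.2e6 m/s
print(W_free, Delta_W, u_1)
```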
## VI The meaning of $`\eta `$
The last problem we have to solve is the determination of $`\eta `$, the coupling constant between electromagnetic and mechanical variables. To this end we compute the energy of the radiation field $`W_{Rad}`$, using Eqs. (19), (27), and (28). From (19) and (27) we obtain:
$`\varphi _{Rad}(r,t)={\displaystyle \frac{1}{8\pi \eta }}๐^2={\displaystyle \frac{1}{8\pi \eta }}{\displaystyle \frac{M_p^2\omega _H^4}{r^4}}x^2\mathrm{sin}^2\omega _Ht`$ (37)
Integrating over one period and the volume of the atom this gives:
$`W_{Rad}`$ $`=`$ $`{\displaystyle \frac{1}{\tau }}{\displaystyle \int _0^\tau }dt{\displaystyle \int _{R_p}^{R_H}}4\pi r^2dr\,\varphi _{Rad}(๐ซ,t)`$ (38)
$`\approx `$ $`{\displaystyle \frac{1}{4\eta }}{\displaystyle \frac{M_p^2\omega _H^4x^2}{R_p}}`$ (39)
provided $`R_p`$, the radius of the proton, is much smaller than the radius of the atom. With the help of (28), and remembering that $`W_{Rad}`$ for $`n=1`$ equals half the electron's free energy $`\hbar \omega _H`$, this finally leads to:
$`W_{Rad}={\displaystyle \frac{1}{4\eta }}{\displaystyle \frac{M_p^2\omega _H^4x^2}{R_p}}={\displaystyle \frac{1}{2}}\hbar \omega _H`$ (40)
$`\eta ={\displaystyle \frac{M_e^2\nu _H^3}{2hR_p}}={\displaystyle \frac{1.78\times 10^{20}}{R_p}}`$ (41)
since the frequency $`\nu _H`$ of the hydrogen atom equals $`6.57\times 10^{15}`$ Hz. Then $`\eta `$ can be calculated in terms of the proton radius $`R_p`$. This radius has to be inferred from experimental data, the currently most likely parametrization being:
$`{\displaystyle \frac{\rho _p(r)}{\rho _{p,0}}}={\displaystyle \frac{1}{1+e^{(r-1.07)/0.55}}}`$ (42)
with radii in fm. If the radius of the proton is defined as the radius where the density has decreased to $`\rho _{p,0}/e`$, with $`e`$ the Euler number, then the value is between 1.3 and 1.4 fm. Computing $`4\pi `$ times the inverse of $`\eta `$, we get, numerically:
$`{\displaystyle \frac{4\pi }{\eta }}`$ $`=`$ $`0.92\times 10^{-34}(R_p=1.3fm)`$ (43)
$`=`$ $`0.99\times 10^{-34}(R_p=1.4fm)`$ (44)
$`=`$ $`1.06\times 10^{-34}(R_p=1.5fm)`$ (45)
Numerically, this value is equal to the numerical value of Planck's constant $`\hbar `$:
$`\hbar _{UIP}=1.0546\times 10^{-34}`$ (46)
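Both numerical steps, the 1/e radius of the parametrization (42) and the values (43)-(45), can be reproduced from Eq. (41); a sketch with assumed SI constants:

```python
import math

h    = 6.626e-34   # Planck constant, J s (assumed value)
M_e  = 9.109e-31   # electron mass, kg (assumed value)
nu_H = 6.57e15     # hydrogen frequency, Hz

# radius (in fm) where parametrization (42) falls to 1/e of the density scale:
R_e_fold = 1.07 + 0.55 * math.log(math.e - 1)
print(R_e_fold)   # ~1.37 fm, between the quoted 1.3 and 1.4 fm

def four_pi_over_eta(R_p):
    # inverted Eq. (41): 4*pi/eta = 8*pi*h*R_p / (M_e^2 * nu_H^3)
    return 8 * math.pi * h * R_p / (M_e ** 2 * nu_H ** 3)

for R_p in (1.3e-15, 1.4e-15, 1.5e-15):
    print(R_p, four_pi_over_eta(R_p))   # ~0.92e-34, 0.99e-34, 1.06e-34
```

For $`R_p=1.4`$ fm the result lies within a percent of the quoted numerical value of Planck's constant.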
Given the conceptual difference in computing the radius, the agreement seems remarkable. Note that this is a genuine derivation of $`\hbar `$, because nuclear forces and radii fall completely outside the scope of the theory in its present form. If measurements of $`R_p`$ were any different, then we would be faced, at this point, with a meaningless numerical value. Reversing the argument, it can be said that the correct value - or rather the meaningful value - is a strong argument for the correctness of our theoretical assumptions. Since these assumptions involve to a greater or lesser extent the whole theory of matter waves developed so far, we devote the rest of this section mainly to a critical analysis of this result and shall show the most striking physical implications only at the end.
Starting with the approximations involved, we note (i) a first order approximation in $`d`$, and (ii) an approximation in the integration. Since $`d\sim 10^{-5}R_p`$ and $`R_p\sim 10^{-5}R_H`$, both errors are negligible. In view of the standard experimental error margins, the deviation of a few percent, depending on how we define the proton radius, also seems acceptable. On the positive side, there are two plausibility arguments indicating that we deal with more than a numerical coincidence: (i) The Planck constant describes the interface between frequency and energy in all fundamental experiments. Since we started with a frequency (the proton oscillations) and calculated an energy, it must have, at some point, entered the calculation. The only unknown quantity in the calculation was $`\eta `$: therefore it should contain $`\hbar `$. (ii) What we have in fact developed with this model of hydrogen is in spirit very close to the harmonic oscillator in quantum theory; the rest energy term is related to the energy of our photon field. In order to be compatible with quantum theory, the energy must contain $`\hbar `$. Again, the only variable which could contain it is $`\eta `$.
It can also be asked why electromagnetic variables are multiplied by $`\hbar `$ to give the energy of radiation, especially since the fine-structure constant contains a division by $`\hbar `$:
$`\alpha ={\displaystyle \frac{e^2}{\hbar c}}`$ (47)
To answer this question, consider a variable in electrodynamics, $`A_{ED}`$, and its corresponding variable $`A_M`$ in mechanics. Then the transition from $`A_{ED}`$ to $`A_M`$ is described by a transformation $`T`$, so that:
$`A_M=T_{M,ED}A_{ED}`$ (48)
Since the inverse transformation must exist and the variables are assumed to be unique, the transformation is unitary:
$`T_{M,ED}T_{ED,M}=T_{M,ED}T_{M,ED}^1=1`$ (49)
In our case the primary variables are the electromagnetic ones: $`๐ฉ,๐,๐`$. And the transformation involves a multiplication by $`\hbar `$.
$`A_M=\hbar A_{ED}`$ (50)
The fundamental units m, kg, s are, in this system, the natural system, tied to the electromagnetic variables. In quantum theory, on the other hand, the fundamental variables are Newtonian. Then the transformation between electromagnetic and mechanical variables involves the inverse transformation.
$`A_M(QM)=\hbar ^{-1}A_{ED}(QM)`$ (51)
If we consider, in addition, that charge has been included in the definition of $`๐`$, the transformation, in conventional units and in quantum theory, should read:
$`A_M(QM)={\displaystyle \frac{e^2}{\hbar }}A_{ED}(QM)`$ (52)
which is the fine-structure constant multiplied by $`c`$. And $`c`$ is, generally, only a matter of convention. Therefore we think the conclusion that $`4\pi /\eta `$ really is $`\hbar `$ is a reasonable and safe one. But in this case Planck's constant has not much bearing on a different scale of measurement, as is often invoked when there is talk about $`\hbar \to 0`$ on the macroscopic scale. The constant bears on two fundamental aspects of matter itself. As we see it, $`\hbar `$ describes the interface between the electromagnetic and mechanical variables of matter. It is therefore even more fundamental than currently assumed. For the following, let us redefine the symbol $`\hbar `$ by:
$`\hbar :=1.0546\times 10^{-34}[N^{-1}m^4]`$ (53)
Then we can rewrite the equations for $`\mathbf{F}`$, the Lorentz force, for $`\mathbf{L}`$, the angular momentum related to this force, and for $`\varphi _{Rad}`$, the radiation energy density of a photon, in a very suggestive form:
$`\mathbf{F}=\hbar q\left({\displaystyle \frac{\mathbf{E}}{4\pi }}+\mathbf{u}\times {\displaystyle \frac{\mathbf{B}}{4\pi }}\right)`$ (54)
$`\mathbf{L}=\hbar q\,\mathbf{r}\times \left({\displaystyle \frac{\mathbf{E}}{4\pi }}+\mathbf{u}\times {\displaystyle \frac{\mathbf{B}}{4\pi }}\right)`$ (55)
$`\varphi _{Rad}={\displaystyle \frac{\hbar }{2}}\left[\left({\displaystyle \frac{\mathbf{E}}{4\pi }}\right)^2+c^2\left({\displaystyle \frac{\mathbf{B}}{4\pi }}\right)^2\right]`$ (56)
Every calculation of mechanical properties involves a multiplication by $`\hbar `$. Since $`\hbar `$ is a scaling constant, the term "quantization", commonly used in this context, is misleading. Furthermore, it is completely irrelevant whether we compute an integral property (the force in (54)) or a density ($`\varphi _{Rad}`$ in (56); a force density can also be obtained by replacing the charge q by a density value). From the interaction fields within a hydrogen atom, e.g. Eq. (37), it is clear that the field varies locally and temporally and can reach any value between zero and its maximum. Although it is described by:
$`\varphi _{Rad}(r,t)={\displaystyle \frac{\hbar }{2}}{\displaystyle \frac{M_p^2\omega _H^4}{r^4}}x^2\mathrm{sin}^2\omega _Ht`$ (57)
it is not "quantized". Neither would be the forces based on the field $`๐_0`$, or the angular momenta, although in both cases they are proportional to $`\hbar `$. What is, in a sense, discontinuous is the mass contained in the shell of the atom. But this mass depends, as does the amplitude of $`\varphi _{Rad}(r,t)`$, on the mass of the atomic nucleus. So the only discontinuity left on the fundamental level is the mass of atomic nuclei. That the energy spectrum of atoms is discrete is a trivial observation in view of boundary conditions and finite radii. To sum up the argument: there are no quantum jumps.
All our calculations so far focus on single atoms. To get the values of mechanical variables in the SI units used in macrophysics, we have to include the scaling between the atomic domain and the domain of everyday measurements. Without proof, we assume this value to be $`N_A`$, Avogadro's number. The scale can be made plausible from solid state physics, where statistics on the properties of single electrons generally involve a number of $`N_A`$ particles in a volume of unit dimensions. And a dimensionless constant does not show up in any dimensional analysis.
## VII Solar gravity fields
We conclude this paper, which seems to open a new perspective on a number of very fundamental problems, with a brief discussion. The first issue concerns the nature of gravity. From the given treatment it is possible to conclude that there is maybe no fundamental difference between electrostatic and gravitational interaction. Both seem to be transmitted with the velocity of light, both obey an $`r^{-2}`$ law, both are related to the existence of mass, whether in its static or its dynamic features. So the conjecture that gravity, too, is transmitted by a "photon" has at least some basis. But here the similarities end. Because of the vast difference in coupling strength, roughly a factor of $`10^{22}`$, one must assume a very different frequency scale. From Eq. (27):
$`|\mathbf{E}|={\displaystyle \frac{q_D}{r^2}}\sim {\displaystyle \frac{M\omega ^2}{r^2}}`$ (58)
it can be inferred that the frequency scales for gravity and electrostatics would differ by a factor of about $`10^{11}`$.
$`\omega _G\sim 10^{-11}\omega _E`$ (59)
Here $`\omega _E`$ is the characteristic electromagnetic frequency and $`\omega _G`$ its gravitational counterpart. The hypothesis can in principle be tested. If we assume that $`\nu _G`$, the frequency of gravity radiation, is about $`10^{-11}`$ times the frequency of proton oscillation, we get:
$`\nu _G\sim 1-100\mathrm{kHz}`$ (60)
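The window in Eq. (60) follows from a one-line scaling of the hydrogen frequency quoted in Section VI:

```python
nu_H = 6.57e15        # proton/hydrogen oscillation frequency, Hz (Section VI)
nu_G = 1e-11 * nu_H   # Eq. (59) scaling
print(nu_G)           # ~6.6e4 Hz, inside the 1-100 kHz window of Eq. (60)
```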
If electromagnetic fields in this frequency range exist in space, we would therefore attribute these fields to solar gravity. To estimate the intensity of these hypothetical waves, we use Eq. (26):
$`๐_S(๐ซ,t)={\displaystyle \frac{\partial }{\partial t}}๐ฉ_E(๐ซ,t)`$ (61)
Here $`๐_S`$ is the solar gravity field. The momentum density and its derivative can be inferred from centrifugal acceleration.
$`{\displaystyle \frac{\partial }{\partial t}}๐ฉ_E(๐ซ,t)=\rho _Ea_C๐^r`$ (62)
$`\rho _E={\displaystyle \frac{3M_E}{4\pi R_E^3}}\qquad a_C=\omega _E^2R_O`$ (63)
where $`R_O`$ is the earth's orbital radius and where we have assumed an isotropic distribution of terrestrial mass. Then Eq. (37) leads to:
$`\varphi _G(r=R_O)={\displaystyle \frac{\hbar }{2}}\left({\displaystyle \frac{G_S}{4\pi }}\right)^2={\displaystyle \frac{\hbar }{2}}\left({\displaystyle \frac{3M_ER_O}{4R_E^3\tau _E^2}}\right)^2`$ (64)
Note the occurrence of Planck's constant also in this equation, although all masses and distances are astronomical. The intensity of the field, if calculated from (64), is very small. To give it in common measures, we compute the flow of gravitational energy through a surface element at the earth's position. In SI units we get:
$`J_G(R_O)=\varphi _G(R_O)N_Ac\approx 70mW/m^2`$ (65)
Compared to radiation in the near-visible range (the solar radiation amounts to over 300 W/m<sup>2</sup>), the value seems rather small. But considering that radiation in the visible range could also have an impact on terrestrial motion, the intensity of the gravity waves could, in fact, be much higher.
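Equations (64) and (65) can be evaluated numerically. The sketch below uses standard astronomical constants as assumed inputs (the text quotes only the final result) and treats the units as in the natural system described above:

```python
hbar  = 1.0546e-34   # scaling constant, Eq. (53)
M_E   = 5.972e24     # Earth mass, kg (assumed)
R_O   = 1.496e11     # Earth orbital radius, m (assumed)
R_E   = 6.371e6      # Earth radius, m (assumed)
tau_E = 3.156e7      # orbital period, s (assumed)
N_A   = 6.022e23     # Avogadro's number, the macroscopic scaling of Section VI
c     = 2.998e8      # velocity of light, m/s

G_over_4pi = 3 * M_E * R_O / (4 * R_E ** 3 * tau_E ** 2)  # inner bracket of Eq. (64)
phi_G = 0.5 * hbar * G_over_4pi ** 2                      # energy density, Eq. (64)
J_G = phi_G * N_A * c                                     # flux, Eq. (65)
print(J_G)   # ~0.06 W/m^2, i.e. some tens of mW/m^2
```

With these inputs the flux comes out near 65 mW/m<sup>2</sup>, of the order of the quoted 70 mW/m<sup>2</sup>.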
## VIII Is there static charge?
In the conventional models a particleโs charge is not only discrete, but has also a defined sign. Although anti-particles are thought to exist, the charge of protons is positive, the charge of electrons negative. Dynamic charge is neither discrete, nor does it possess a defined sign. Depending on the exact moment, the charge of a proton:
$`q_p=M_p\omega _H^2x\mathrm{sin}\omega _Ht`$ (66)
either has a positive or a negative value, which determines the direction of the energy flow within the hydrogen atom. The difference between electrons and protons in this model is mainly due to their density of mass.
Related to this feature is a shift of focus within the dynamic model of atoms. Although the states of the atom are described by quantum numbers ($`n`$ for the principal state, $`l`$ and $`m`$ if multipoles are included), these numbers refer primarily to nuclear states of oscillation. States of the atomโs electron are merely a reaction to them. Therefore the properties of an atom, in the dynamic model, refer to properties of the atomic nucleus. How this model bears on chemical properties, remains to be seen.
The last issue is a consequence of our treatment of the hydrogen atom. In this case the main features, the energy spectrum as well as the ionization energy and the energy of emitted photons, can be explained from dynamic charge alone. There is, in contrast to the conventional treatment, no necessity to invoke static potentials. It will also have been noted that, in natural units and based on dynamic processes, interactions are generally free of any notion of "charge" in its proper sense. So does that mean, it could be asked, that there is no charge? Based on the current evidence, and considering the situation in high energy physics, this is definitely too bold a statement. Considering, though, that the notion of a fixed "elementary charge" lies at the heart of all current accounts of these experiments, the degree of theoretical freedom in the dynamic picture is incomparably higher. So we might still, after a few years of development, end up with a tentative answer: maybe not. And in this case the question about the true nature of charge will have been answered: it is dynamic in nature, we would then say.
## IX Conclusions
In this paper we presented evidence for the existence of a dynamic component of charge. It derives, as shown, from the variation of a particle's density of mass. A new system of electromagnetic units, the natural system, has been developed, which bears on these dynamic sources. We have given a fully deterministic treatment of hydrogen, where we used our theoretical model to determine the fundamental scaling constant between electromagnetic and mechanical variables. The constant, we found, is $`\hbar `$, Planck's constant. The constant thus has no bearing on any length scale, as frequently thought. And finally we have discussed these results in view of unifying gravity and quantum theory. The intensity of the postulated solar gravity waves seems sufficiently high that these waves, in the low-frequency range of the electromagnetic spectrum, can in principle be detected.
## Acknowledgements
I'd like to thank Jaime Keller. In our discussions I realized, for the first time, that the most efficient theory of electrodynamics might be one without electrons.
## 1 Introduction
As the archetypal Galactic microquasar, GRS 1915+105 offers unique observational opportunities for investigating the formation of relativistic jets in black hole systems. To date, two types of ejection events have been observed from this system. The first of these, the "major" ejections, produce bright ($`\sim 1`$ Jy) resolvable radio jets which move with apparent velocities of $`v_{\mathrm{app}}=1.25c`$ and actual space velocities of $`v\sim 0.9c`$ (Mirabel & Rodriguez, 1994; Fender et al., 1999). The jets transition quickly from optically thick to optically thin spectra and then fade on timescales of several days. Due to the rarity of these events, coordinated pointed X-ray observations have not been possible to date.
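The connection between the apparent and actual jet speeds is the standard relativistic projection effect, $`v_{\mathrm{app}}=\beta c\mathrm{sin}\theta /(1-\beta \mathrm{cos}\theta )`$; a sketch using the approaching-jet parameters $`\beta \approx 0.92`$ and $`\theta \approx 70\mathrm{ยฐ}`$, the values reported by Mirabel & Rodriguez (1994) and assumed here:

```python
import math

beta = 0.92                  # ejecta speed v/c (assumed from Mirabel & Rodriguez 1994)
theta = math.radians(70.0)   # jet angle to the line of sight (assumed)

# apparent transverse speed, in units of c
v_app = beta * math.sin(theta) / (1 - beta * math.cos(theta))
print(v_app)   # ~1.26, i.e. apparently superluminal motion from v ~ 0.9c
```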
The second type of ejection event consists of X-ray oscillations with hard power-law dips and thermal flares, and associated synchrotron flares in the infrared (Eikenberry et al., 1998a,b) and radio bands (Mirabel et al., 1998; Fender & Pooley, 1998). We refer to these events as "Class B" flares to distinguish them from the larger "Class A" major ejection events. These smaller events have peak intensities in the range $`100-200`$ mJy from the infrared (IR) to radio bands, and the time of peak flux exhibits apparent delays as a function of wavelength which may indicate the expansion of a synchrotron bubble (Mirabel et al., 1998). The flares fade on timescales of several minutes and tend to repeat on timescales from $`30-50`$ minutes (e.g. Pooley & Fender, 1997; Eikenberry et al., 1998a).
In this paper, we present a third type of IR flare from GRS 1915+105 โ faint (sub-milliJansky) IR flares associated with X-ray soft-dip/soft-flare cycles. In Section 2, we present the observations and analysis of these flares. In Section 3, we discuss the implications of the flares for understanding relativistic jet formation in microquasars. In Section 4, we present our conclusions.
## 2 Observations and Analysis
### 2.1 July 1998 Observations
We observed GRS 1915+105 on the nights of 8-12 July 1998 UTC using the Palomar Observatory 5-m telescope and the Cassegrain near-infrared array camera in the K ($`2.2\mu `$m) band. Details of these observations and the data reduction will be presented in Eikenberry et al. (2000), and we summarize them here. We configured the camera to take 128x128-pixel (16x16-arcsec) images at a rate of 1 frame per second, with absolute timing provided by a WWV-B receiver with $`\sim 1`$ ms accuracy. We observed GRS 1915+105 in this mode for approximately 5 hours each night, obtaining $`1.5\times 10^4`$ frames per night. The field of view was large enough to capture both GRS 1915+105 and several nearby field stars, including "Star A", which has a magnitude of $`K=13.3`$ mag (Eikenberry & Fazio, 1997; Fender et al., 1997). After standard processing (sky/bias subtraction, flat-fielding, interpolation over bad pixels and cosmic ray hits) we used the nearby stars to perform differential photometry on GRS 1915+105, with the overall absolute calibration provided by Star A. We present the resulting flux density for GRS 1915+105 on July 10, 1998 UTC with 10-second time-resolution in Figure 1(a). We obtained X-ray observations on the same nights using the PCA instrument on the Rossi X-ray Timing Explorer (RXTE - see Greiner, Morgan, and Remillard (1996) and references therein for further details regarding the instrument and data modes). We present the X-ray intensity for July 10, 1998 in Figure 1(b).
The most obvious features in the IR lightcurve in Figure 1 are 6 faint flares. The flares have peak amplitudes of $`0.3-0.6`$ mJy (or $`5-10`$ mJy de-reddened for $`A_K\sim 3`$ mag), more than an order of magnitude fainter than the Class B flares (e.g. Fender et al., 1997; Eikenberry et al., 1998a). They have typical durations of $`\sim 500`$ seconds, and are roughly symmetric in time. Furthermore, they repeat on timescales from $`30-60`$ minutes. When simultaneous X-ray coverage is available, the IR flares appear to be associated with rapid X-ray fluctuations (Fig. 1b). Inspection with an expanded timescale shows several interesting aspects of these pairings (Fig. 2). The X-ray oscillations show a flare-dip-flare morphology. X-ray hardness ratios show that the dips are very soft (see also Figure 4 d-f), as opposed to the hard X-ray dips associated with Class B IR/radio flares. Furthermore, the rises of the IR flares in Figure 2 appear to precede the X-ray oscillations. Note that for the first two X-ray dips, there are IR flares $`1500-1800`$ seconds later, suggesting a possible correspondence between X-ray dips and highly delayed IR flares. However, if this were the case, we would expect X-ray dips at $`24600`$ s and $`30300`$ s to match the observed IR flares at 26200 s and 31900 s. Since we do not see X-ray dips at these times, we conclude that the actual IR/X-ray correspondence has IR flares preceding X-ray dips by $`200-600`$ s. Thus, these observations are the first to clearly demonstrate the time ordering of associated X-ray dips and IR flares in GRS 1915+105.
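The de-reddened amplitudes quoted above follow from the standard extinction correction $`F_{dered}=F_{obs}\times 10^{0.4A_K}`$; a quick sketch with $`A_K=3`$ mag:

```python
A_K = 3.0                    # assumed K-band extinction toward GRS 1915+105, mag
factor = 10 ** (0.4 * A_K)   # flux correction factor, ~15.8

low, high = 0.3 * factor, 0.6 * factor   # observed 0.3-0.6 mJy amplitudes
print(low, high)   # ~4.8 and ~9.5 mJy, i.e. the quoted 5-10 mJy de-reddened range
```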
### 2.2 August 1997 Observations
We also observed GRS 1915+105 simultaneously with the Palomar 5-m telescope and RXTE on 13-15 August 1997 (see also Eikenberry et al., 1998a,b). The basic observational parameters were similar to those for July 1998 described above. On 14-15 August 1997, we observed a series of Class B IR flares with their corresponding X-ray cycles of hard dips and thermal flares. We also noted that at times the IR flux from GRS 1915+105 showed a noticeable quasi-steady IR excess (Figure 3a), much lower than the flux levels from the Class B flares themselves, but higher than the apparent baseline IR emission of $`3.6`$ mJy on those nights. Interestingly, the episodes of excess IR emission appear to be associated with rapid X-ray oscillations (Figure 3b) that seem to resemble the X-ray cycles seen in July 1998 (Figure 2). Motivated by the X-ray/IR association we observed in the July 1998 data, we performed detailed X-ray spectral analyses of X-ray oscillations in both epochs. Figure 4 shows the resulting best-fit parameters to typical X-ray oscillations from both epochs at 1-second time resolution using the XSPEC package and an absorbed multi-temperature blackbody + power-law model (identical to those described in Muno et al., 1999). Not only are the morphologies of the events quite similar (although the August 1997 cycle is $`\sim 3`$ times faster), but the key spectral parameters of blackbody temperature and power-law index seem to evolve in a virtually identical manner for both epochs. These similarities in both morphology and spectrum confirm that the X-ray cycles from July 1998 and August 1997 are indeed the same phenomenon. Furthermore, note that the blackbody temperature drops and the power-law index rises during the X-ray dip, both of which effects cause a softening of the X-ray spectrum during the dip. The X-ray dips associated with Class B flares, on the other hand, show a decrease in the BB temperature and a marked decrease in the power-law index, making them spectrally hard.
Thus, the events we discuss here differ from those associated with Class B flares.
Based on these results, we then hypothesize that the IR excess seen on 14-15 August 1997 during the X-ray oscillations may be due to faint infrared flares such as those seen in Figures 1-2. Since the X-ray oscillations are separated by $`20-40`$ seconds in August 1997 and the typical width of the faint IR flares is $`\sim 500`$ seconds, many flares will be superposed on one another to create the appearance of a quasi-steady IR excess such as we observe. If we assume that each X-ray oscillation in Figure 3(b) has an associated IR flare and we approximate that flare as a Gaussian with $`0.3`$ mJy amplitude and 160 seconds FWHM (consistent with the faintest July 1998 flares), we calculate a predicted IR excess of 1.3 mJy. This value closely matches the excess of $`\sim 1.0`$ mJy we observed (Figure 3).
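The superposition estimate above is straightforward to reproduce: each Gaussian flare contributes its time-integrated flux once per recurrence interval. A minimal Python sketch (the amplitude and FWHM are the values quoted above; the uniform 40 s spacing is our simplifying assumption at the long end of the 20-40 s range):

```python
import math

# Mean excess from a train of identical Gaussian flares: each flare contributes
# its time-integrated flux once per recurrence interval T.
amp = 0.3          # flare amplitude (mJy), matching the faintest July 1998 flares
fwhm = 160.0       # flare FWHM (s)
T = 40.0           # assumed recurrence interval of the X-ray oscillations (s)

# integral of amp * exp(-4 ln2 t^2 / fwhm^2) dt = amp * fwhm * sqrt(pi / (4 ln 2))
integral = amp * fwhm * math.sqrt(math.pi / (4.0 * math.log(2.0)))
excess = integral / T
print(f"predicted quasi-steady IR excess: {excess:.2f} mJy")   # ~1.3 mJy
```

Shorter assumed spacings within the quoted 20-40 s range would predict a proportionally larger excess.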
## 3 Discussion
Based on these observations, we surmise that we have found a new type of IR flare associated with X-ray oscillations in GRS 1915+105. These events differ significantly from the previously-known Class B events in their IR brightness as well as the timescale, morphology, and spectral characteristics of the X-ray oscillations. In keeping with our proposed classification scheme for such flares — Class A being major ejection events and Class B being the $`100-200`$ mJy (de-reddened) IR/radio flares associated with hard X-ray dips — we assign these faint IR flares associated with soft X-ray dips the label “Class C”.
The July 1998 observations are useful not only in allowing us to identify this new phenomenon, but also in allowing us to determine the timing relationship between the X-ray and IR oscillations. Previous observations of Class B events (e.g. Eikenberry et al., 1998a) have been unable to unambiguously determine whether the IR/radio flares come from an ejection at the beginning of the preceding hard X-ray dip, at its end, or simultaneously with a soft X-ray “spike” seen during the dip. Mirabel et al. (1998) suggest that the ejection occurs at the time of the spike, based on timing/flux arguments and an expanding plasmoid (van der Laan) model for their IR/radio data. However, this model predicts an IR peak flux density $`\sim 20`$ times higher than observed, and thus this issue remains unresolved for now.
There are several physical phenomena which might produce the Class C behavior, but our understanding may be helped by recently published X-ray/radio observations of Feroci et al. (1999). Using BeppoSAX and the Ryle Telescope, they report an X-ray event very similar in both flux and spectral evolution to those we report here. Furthermore, they observed a $`\sim 40`$ mJy radio flare which peaked $`\sim 1000`$ seconds after the X-ray event. If we assume that this is a Class C event, and furthermore that it had an (unobserved) IR flare similar in flux density and timing to those we observed, then we must conclude that the flares have a flat peak flux density over several decades of frequency ($`F_\nu \propto \nu ^{0.15}`$), with longer wavelengths delayed compared to shorter wavelengths. This behavior closely resembles that of Class B flares (Mirabel et al., 1998), and thus suggests that the Class C flares are also due to synchrotron emission from an expanding plasma bubble.
The fact that the IR flares precede the X-ray oscillations suggests an “outside-in” model for these events. In such a model, a disturbance far from the black hole propagates inward, first creating the synchrotron flare. Then as the disturbance reaches the innermost portion of the accretion disk, which produces the majority of the thermal X-ray flux, it creates the X-ray flare-dip-flare cycle. Several possibilities may explain these observations. If we assume that Class C events are due to ejection events which occur before the inner disk is perturbed, we must conclude that the innermost portion of the accretion disk is not the site of origin for the ejections, contrary to what is generally believed for microquasars (and other relativistic jet systems). An alternative interpretation may be that the IR/radio flare comes from a plasma bubble created by a magnetic reconnection event in the accretion disk, which would generate a disturbance in the accretion flow. Theorists have hypothesized that such reconnection events may be commonplace in systems where jets are powered by magnetocentrifugal launching mechanisms. Yet another interpretation could be that the jets in GRS 1915+105 are not composed of discrete events, but are continuous low-luminosity outflows punctuated by the appearance of occasional high-luminosity shock events propagating through the flow (as has been suggested for the case of relativistic jets in AGN). In this case, the Class C events could be due to a reverse shock propagating through the jet back towards the disk. As it nears the inner disk, the shock would first produce a synchrotron bubble, generating the IR (and eventually radio) flares, and then reach the inner disk itself to disrupt the X-ray emission, as observed.
## 4 Conclusions
We have reported a new type of IR/X-ray oscillation in the microquasar GRS 1915+105. These oscillations show faint ($`\sim 0.5`$ mJy) IR flares with durations of $`\sim 500`$ seconds, and are associated with X-ray cycles of soft dips and thermal flares. This distinguishes them from previously known GRS 1915+105 behaviors which show either major radio flares (Class A) or brighter ($`100-200`$ mJy) IR/radio flares accompanied by X-ray events with hard dips and thermal flares (Class B). Thus, we label these events as “Class C”.
Combining our observations with X-ray/radio observations of a single Class C event by Feroci et al. (1999) indicates that the Class C events are due to synchrotron emission from an expanding plasmoid. Furthermore, in the Class C events the IR flare precedes the onset of the X-ray cycle by several hundred seconds, suggesting an “outside-in” model for them. Several possibilities exist for explaining this behavior, including magnetic reconnection events in the outer disk or reverse shocks propagating through a continuous jet medium.
The authors would like to thank the members of the Rossi X-ray Timing Explorer team, without whose work none of these investigations would have been possible. SE thanks R. Lovelace, M. Romanova, and R. Taam for helpful discussions of these observations. This work was supported in part by NASA grant NAG 5-7941.
## 1 Introduction
Universal twists connecting (affine) quantum groups to (elliptic) (dynamical) (affine) algebras have been constructed in . They show in particular the quasi-Hopf structure of elliptic and dynamical algebras. These twists transform the universal $`R`$-matrix $`\mathcal{R}`$ of the first object into the universal $`R`$-matrix $`\mathcal{R}^{\prime }`$ of the second one as follows:
$$\mathcal{R}_{12}^{\prime }=\mathcal{F}_{21}\mathcal{R}_{12}\mathcal{F}_{12}^{-1}.$$
(1.1)
The double degeneracy limits of elliptic $`R`$-matrices, whether of vertex type or face type, give rise to algebraic structures which have been variously characterised as scaled elliptic algebras , or double Yangian algebras . As pointed out earlier (we wish to thank S. Pakuliak for clarifying this point to us), although represented by formally identical Yang–Baxter relations $`RLL=LLR`$ , these two classes of objects differ fundamentally in their structures (as is reflected in the very different mode expansions of $`L`$ defining their individual generators) and must be considered separately.
In our previous paper we have defined, in the quantum inverse scattering or RLL formulation, various algebraic structures of double Yangian type connected by twist-like operators, i.e. such that their evaluated $`R`$-matrices were related as:
$$R_{12}^F=F_{21}R_{12}F_{12}^{-1}$$
(1.2)
for a particular matrix $`F`$. It was conjectured that these twist-like operators were indeed evaluation representations of universal twists obeying a shifted cocycle condition thereby raising the relation (1.2) to the status of a genuine twist connection (1.1) between quasi-Hopf algebras.
We shall be concerned here only with algebraic structures related to the algebra $`\widehat{sl(2)}_c`$, and henceforth dispense with indicating it explicitly: for instance $`\mathcal{D}Y`$ is thus to be understood as $`\mathcal{D}Y(\widehat{sl(2)}_c)`$.
It is our purpose here to establish such connections, at the level of universal $`R`$-matrices, between the double Yangian structures respectively known as $`\mathcal{D}Y`$, $`\mathcal{D}Y_r^{V6}`$, $`\mathcal{D}Y_r^{V8}`$ and $`\mathcal{D}Y_r^F`$. $`\mathcal{D}Y`$ is the double Yangian defined in . $`\mathcal{D}Y_r^{V6}`$ is characterised by a scaled “elliptic” $`R`$-matrix defined in , $`\mathcal{D}Y_r^{V8}`$ is characterised by a scaled $`R`$-matrix defined in . In connection with our previous caveat, note that these $`R`$-matrices are also used to describe respectively the scaled elliptic algebras $`\mathcal{A}_{\mathrm{\hbar },0}`$, $`\mathcal{A}_{\mathrm{\hbar },\eta }`$ . $`\mathcal{D}Y_r^F`$ is the deformed double Yangian obtained by a particular limit of the dynamical $`R`$-matrix characterising the elliptic algebra $`\mathcal{B}_{q,p,\lambda }`$ .
A crucial ingredient for our procedure is a linear (difference) equation obeyed by the twist. This type of equation for twist operators was first written in . It is also present in . Our method consists in i) finding a twist-like action in representation; ii) interpreting this representation as an infinite product; iii) defining a linear equation obeyed by this infinite product; iv) promoting this linear equation for the representation to the level of a linear equation for the universal twist; v) the solution of this linear equation is obtained as an infinite product as in , which vi) is then proved to obey the shifted cocycle condition as in , and vii) has an evaluation representation identical to the twist-like action found in i).
This provides us with the universal $`R`$-matrix and quasi-Hopf structure of the twisted algebras $`๐Y_r^{V6,V8,F}`$, thereby realising a fully consistent description of these algebraic structures.
The universal $`R`$-matrix and Hopf algebra structure for $`\mathcal{D}Y`$ were described in . We construct a universal twist between $`\mathcal{D}Y`$ and $`\mathcal{D}Y_r^{V6}`$. We then show the existence of a universal coboundary (trivial) twist, the evaluation of which realises the connection between the evaluated $`R`$-matrices of $`\mathcal{D}Y_r^{V6}`$ and $`\mathcal{D}Y_r^{V8}`$, leading to identification of these two as quasi-Hopf algebras. Finally another universal coboundary-like twist realises, when evaluated, the connection between the $`R`$-matrices of $`\mathcal{D}Y_r^{V6}`$ and $`\mathcal{D}Y_r^F`$.
It follows that the three deformed structures are in fact one single quasi-Hopf algebra described by three different choices of generators, more precisely given in three different gauges.
We shall denote throughout this paper $`\mathcal{F}[\mathcal{A};\mathcal{B}]`$ the universal or represented twist connecting $`R`$-matrices as $`\mathcal{R}_{\mathcal{B}}=\mathcal{F}_{21}[\mathcal{A};\mathcal{B}]\mathcal{R}_{\mathcal{A}}\mathcal{F}_{12}^{-1}[\mathcal{A};\mathcal{B}]`$.
## 2 Presentation of the double Yangians $`\mathcal{D}Y`$ and $`\mathcal{D}Y_r`$
### 2.1 Double Yangian $`\mathcal{D}Y`$
The double Yangian $`\mathcal{D}Y`$ is defined by the $`R`$-matrix
$$R(\beta )=\rho (\beta )\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& \frac{i\beta }{i\beta +\pi }& \frac{\pi }{i\beta +\pi }& 0\\ 0& \frac{\pi }{i\beta +\pi }& \frac{i\beta }{i\beta +\pi }& 0\\ 0& 0& 0& 1\end{array}\right),$$
(2.1)
with the normalisation factor
$$\rho (\beta )=\frac{\mathrm{\Gamma }_1(\frac{i\beta }{\pi }|\mathrm{\hspace{0.33em}2})\mathrm{\Gamma }_1(2+\frac{i\beta }{\pi }|\mathrm{\hspace{0.33em}2})}{\mathrm{\Gamma }_1(1+\frac{i\beta }{\pi }|\mathrm{\hspace{0.33em}2})^2},$$
(2.2)
together with the relations
$`R_{12}(\beta _1-\beta _2)L_1^\pm (\beta _1)L_2^\pm (\beta _2)`$ $`=`$ $`L_2^\pm (\beta _2)L_1^\pm (\beta _1)R_{12}(\beta _1-\beta _2).`$ (2.3)
$`R_{12}(\beta _1-\beta _2-i\pi c)L_1^{}(\beta _1)L_2^+(\beta _2)`$ $`=`$ $`L_2^+(\beta _2)L_1^{}(\beta _1)R_{12}(\beta _1-\beta _2).`$ (2.4)
and the mode expansions
$$L^+(\beta )=\underset{k\geq 0}{\sum }L_k^+\beta ^k\text{and}L^{}(\beta )=\underset{k\geq 0}{\sum }L_k^{}\beta ^k.$$
(2.5)
It is important to point out that $`L^+`$ and $`L^{}`$ are independent. There exists in this case a Gauss decomposition of the Lax matrices allowing for an alternative Drinfeld presentation .
Indeed, $`L^\pm `$ are decomposed as
$$L^\pm (x)=\left(\begin{array}{cc}1& f^\pm (x^{})\\ 0& 1\end{array}\right)\left(\begin{array}{cc}k_1^\pm (x)& 0\\ 0& k_2^\pm (x)\end{array}\right)\left(\begin{array}{cc}1& 0\\ e^\pm (x)& 1\end{array}\right)$$
(2.6)
with $`x^+\equiv x-\frac{i\beta }{\pi }`$ and $`x^{}\equiv x-c`$. Furthermore, $`k_1^\pm (x)k_2^\pm (x-1)=1`$ and one defines $`h^\pm (x)\equiv k_2^\pm (x)^{-1}k_1^\pm (x)`$.
The evaluation representation $`\pi _x`$ is then easily defined by its action on a two-dimensional vector space by
$$\pi _x(e_k)=x^k\sigma ^+,\pi _x(f_k)=x^k\sigma ^{},\pi _x(h_k)=x^k\sigma ^3,$$
(2.7)
where
$$e^\pm (u)\equiv \pm \underset{\begin{array}{c}k\geq 0\\ k<0\end{array}}{\sum }e_ku^{-k-1},f^\pm (u)\equiv \pm \underset{\begin{array}{c}k\geq 0\\ k<0\end{array}}{\sum }f_ku^{-k-1},h^\pm (u)\equiv 1\pm \underset{\begin{array}{c}k\geq 0\\ k<0\end{array}}{\sum }h_ku^{-k-1}.$$
(2.8)
### 2.2 Deformed double Yangian $`\mathcal{D}Y_r^{V6}`$
The $`R`$-matrix of the deformed double Yangian $`\mathcal{D}Y_r^{V6}`$ is related to the two-body $`S`$-matrix of the sine–Gordon theory $`S_{SG}(\beta ,r)`$ and is given by
$$R_{V6}(\beta ,r)=cotg(\frac{i\beta }{2})S_{SG}(\beta ,r)=\rho _r(\beta )\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& \frac{\mathrm{sin}\frac{i\beta }{r}}{\mathrm{sin}\frac{\pi +i\beta }{r}}& \frac{\mathrm{sin}\frac{\pi }{r}}{\mathrm{sin}\frac{\pi +i\beta }{r}}& 0\\ 0& \frac{\mathrm{sin}\frac{\pi }{r}}{\mathrm{sin}\frac{\pi +i\beta }{r}}& \frac{\mathrm{sin}\frac{i\beta }{r}}{\mathrm{sin}\frac{\pi +i\beta }{r}}& 0\\ 0& 0& 0& 1\end{array}\right),$$
(2.9)
where the normalisation factor is
$$\rho _r(\beta )=\frac{S_2^2(1+\frac{i\beta }{\pi }|r,2)}{S_2(\frac{i\beta }{\pi }|r,2)S_2(2+\frac{i\beta }{\pi }|r,2)}.$$
(2.10)
$`S_2(x|\omega _1,\omega _2)`$ is Barnesโ double sine function of periods $`\omega _1`$ and $`\omega _2`$ defined by:
$$S_2(x|\omega _1,\omega _2)=\frac{\mathrm{\Gamma }_2(\omega _1+\omega _2-x|\omega _1,\omega _2)}{\mathrm{\Gamma }_2(x|\omega _1,\omega _2)},$$
(2.11)
where $`\mathrm{\Gamma }_r`$ is the multiple Gamma function of order $`r`$ given by
$$\mathrm{\Gamma }_r(x|\omega _1,\mathrm{\dots },\omega _r)=\mathrm{exp}\left(\frac{\partial }{\partial s}\underset{n_1,\mathrm{\dots },n_r\geq 0}{\sum }(x+n_1\omega _1+\mathrm{}+n_r\omega _r)^{-s}|_{s=0}\right).$$
(2.12)
The algebra $`\mathcal{D}Y_r^{V6}`$ is defined by
$$R_{12}(\beta _1-\beta _2)L_1(\beta _1)L_2(\beta _2)=L_2(\beta _2)L_1(\beta _1)R_{12}^{*}(\beta _1-\beta _2),$$
(2.13)
where $`R_{12}^{*}(\beta ,r)\equiv R_{12}(\beta ,r-c)`$.
The Lax matrix $`L`$ must now be expanded on both positive *and* negative powers as
$$L(\beta )=\underset{k}{\sum }L_k\beta ^k.$$
(2.14)
A presentation similar to the double Yangian case is achieved by introducing the following two matrices:
$`L^+(\beta )\equiv L(\beta -i\pi c),`$ (2.15)
$`L^{}(\beta )\equiv \sigma _3L(\beta -i\pi r)\sigma _3.`$ (2.16)
They obey coupled exchange relations following from (2.13) and periodicity/unitarity properties of the matrices $`R_{12}`$ and $`R_{12}^{*}`$:
$`R_{12}(\beta _1-\beta _2)L_1^\pm (\beta _1)L_2^\pm (\beta _2)=L_2^\pm (\beta _2)L_1^\pm (\beta _1)R_{12}^{*}(\beta _1-\beta _2),`$ (2.17)
$`R_{12}(\beta _1-\beta _2-i\pi c)L_1^+(\beta _1)L_2^{}(\beta _2)=L_2^{}(\beta _2)L_1^+(\beta _1)R_{12}^{*}(\beta _1-\beta _2).`$ (2.18)
Contrary to the case of the double Yangian, the matrices $`L^+`$ and $`L^{}`$ are *not* independent. Note also that, due to conflicting conventions, the $`r\rightarrow \mathrm{\infty }`$ limit of $`L^\pm `$ in $`\mathcal{D}Y_r^{V6}`$ corresponds to $`L^{\mp }`$ in $`\mathcal{D}Y`$.
### 2.3 Deformed double Yangian $`\mathcal{D}Y_r^{V8}`$
The $`R`$-matrix of the deformed double Yangian $`\mathcal{D}Y_r^{V8}`$ is obtained as the scaling limit of the $`R`$-matrix of the elliptic algebra $`\mathcal{A}_{q,p}`$ . It reads
$$R_{V8}(\beta ,r)=\rho _r(\beta )\left(\begin{array}{cccc}\frac{\mathrm{cos}\frac{i\beta }{2r}\mathrm{cos}\frac{\pi }{2r}}{\mathrm{cos}\frac{\pi +i\beta }{2r}}& 0& 0& -\frac{\mathrm{sin}\frac{i\beta }{2r}\mathrm{sin}\frac{\pi }{2r}}{\mathrm{cos}\frac{\pi +i\beta }{2r}}\\ 0& \frac{\mathrm{sin}\frac{i\beta }{2r}\mathrm{cos}\frac{\pi }{2r}}{\mathrm{sin}\frac{\pi +i\beta }{2r}}& \frac{\mathrm{cos}\frac{i\beta }{2r}\mathrm{sin}\frac{\pi }{2r}}{\mathrm{sin}\frac{\pi +i\beta }{2r}}& 0\\ 0& \frac{\mathrm{cos}\frac{i\beta }{2r}\mathrm{sin}\frac{\pi }{2r}}{\mathrm{sin}\frac{\pi +i\beta }{2r}}& \frac{\mathrm{sin}\frac{i\beta }{2r}\mathrm{cos}\frac{\pi }{2r}}{\mathrm{sin}\frac{\pi +i\beta }{2r}}& 0\\ -\frac{\mathrm{sin}\frac{i\beta }{2r}\mathrm{sin}\frac{\pi }{2r}}{\mathrm{cos}\frac{\pi +i\beta }{2r}}& 0& 0& \frac{\mathrm{cos}\frac{i\beta }{2r}\mathrm{cos}\frac{\pi }{2r}}{\mathrm{cos}\frac{\pi +i\beta }{2r}}\end{array}\right).$$
(2.19)
It is also obtained from the $`R`$-matrix of $`\mathcal{D}Y_r^{V6}`$ by a gauge transformation . The algebra $`\mathcal{D}Y_r^{V8}`$ is defined by the same relation as $`\mathcal{D}Y_r^{V6}`$, albeit with the matrix $`R_{V8}`$, and the same type of Lax matrix with positive and negative modes.
### 2.4 Deformed double Yangian $`\mathcal{D}Y_r^F`$
The $`R`$-matrix of $`\mathcal{D}Y_r^F`$ is given by
$$R(\beta ;r)=\rho _r(\beta )\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& \frac{\mathrm{sin}\frac{i\beta }{r}}{\mathrm{sin}\frac{\pi +i\beta }{r}}& e^{\beta /r}\frac{\mathrm{sin}\frac{\pi }{r}}{\mathrm{sin}\frac{\pi +i\beta }{r}}& 0\\ 0& e^{-\beta /r}\frac{\mathrm{sin}\frac{\pi }{r}}{\mathrm{sin}\frac{\pi +i\beta }{r}}& \frac{\mathrm{sin}\frac{i\beta }{r}}{\mathrm{sin}\frac{\pi +i\beta }{r}}& 0\\ 0& 0& 0& 1\end{array}\right).$$
(2.20)
The normalisation factor is the same as for $`\mathcal{D}Y_r^{V6}`$. The definition of the algebra and the Lax operator are again formally identical.
## 3 Twist from $`\mathcal{D}Y`$ to $`\mathcal{D}Y_r`$: representation formula
### 3.1 A notation for $`P_{12}`$ invariant matrices
Let us denote by $`M(b^+,b^{})`$ the $`4\times 4`$ matrix given by
$$M(b^+,b^{})\equiv \left(\begin{array}{cccc}1& 0& 0& 0\\ 0& \frac{1}{2}(b^++b^{})& \frac{1}{2}(b^+-b^{})& 0\\ 0& \frac{1}{2}(b^+-b^{})& \frac{1}{2}(b^++b^{})& 0\\ 0& 0& 0& 1\end{array}\right).$$
(3.1)
With this definition, we have $`M(a,b)M(a^{\prime },b^{\prime })=M(aa^{\prime },bb^{\prime })`$ and $`M(a,b)^{-1}=M(a^{-1},b^{-1})`$.
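These properties are immediate to check numerically; a quick Python sketch (the sample values of $`a,b`$ are arbitrary):

```python
import numpy as np

def M(bp, bm):
    """The P12-invariant matrix M(b+, b-) of eq. (3.1)."""
    s, d = 0.5 * (bp + bm), 0.5 * (bp - bm)
    return np.array([[1, 0, 0, 0],
                     [0, s, d, 0],
                     [0, d, s, 0],
                     [0, 0, 0, 1]], dtype=float)

a, b, a2, b2 = 1.7, -0.4, 0.9, 2.3
assert np.allclose(M(a, b) @ M(a2, b2), M(a * a2, b * b2))
assert np.allclose(np.linalg.inv(M(a, b)), M(1 / a, 1 / b))

# M also commutes with the permutation P12, hence "P12-invariant"
P = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]], dtype=float)
assert np.allclose(P @ M(a, b) @ P, M(a, b))
```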
Now,
$$R[\mathcal{D}Y](\beta )=\rho (\beta )M(1,\frac{i\beta -\pi }{i\beta +\pi }).$$
(3.2)
We have $`R[\mathcal{D}Y_r^{V6}](\beta )=\rho _r(\beta )M(b_r^+,b_r^{})`$, with
$`b_r^+`$ $`=`$ $`{\displaystyle \frac{\mathrm{cos}\frac{i\beta -\pi }{2r}}{\mathrm{cos}\frac{i\beta +\pi }{2r}}}={\displaystyle \frac{\mathrm{\Gamma }_1(r+\frac{i\beta }{\pi }+1|2r)\mathrm{\Gamma }_1(r-\frac{i\beta }{\pi }-1|2r)}{\mathrm{\Gamma }_1(r+\frac{i\beta }{\pi }-1|2r)\mathrm{\Gamma }_1(r-\frac{i\beta }{\pi }+1|2r)}},`$ (3.3)
$`b_r^{}`$ $`=`$ $`{\displaystyle \frac{\mathrm{sin}\frac{i\beta -\pi }{2r}}{\mathrm{sin}\frac{i\beta +\pi }{2r}}}={\displaystyle \frac{\mathrm{\Gamma }_1(\frac{i\beta }{\pi }+1|2r)\mathrm{\Gamma }_1(2r-\frac{i\beta }{\pi }-1|2r)}{\mathrm{\Gamma }_1(\frac{i\beta }{\pi }-1|2r)\mathrm{\Gamma }_1(2r-\frac{i\beta }{\pi }+1|2r)}}`$ (3.4)
$`=`$ $`{\displaystyle \frac{\mathrm{\Gamma }_1(2r+\frac{i\beta }{\pi }+1|2r)\mathrm{\Gamma }_1(2r-\frac{i\beta }{\pi }-1|2r)}{\mathrm{\Gamma }_1(2r+\frac{i\beta }{\pi }-1|2r)\mathrm{\Gamma }_1(2r-\frac{i\beta }{\pi }+1|2r)}}{\displaystyle \frac{i\beta -\pi }{i\beta +\pi }}.`$ (3.5)
### 3.2 The linear equation in representation
We remark that the normalisation factor of $`๐Y_r^{V6}`$ can be rewritten as:
$$\rho _r(\beta )=\rho _F(-\beta ;r)\rho (\beta )\rho _F(\beta ;r)^{-1}$$
(3.6)
with
$$\rho _F(\beta )=\frac{\mathrm{\Gamma }_2(\frac{i\beta }{\pi }+1+r|\mathrm{\hspace{0.33em}2},r)^2}{\mathrm{\Gamma }_2(\frac{i\beta }{\pi }+r|\mathrm{\hspace{0.33em}2},r)\mathrm{\Gamma }_2(\frac{i\beta }{\pi }+2+r|\mathrm{\hspace{0.33em}2},r)}.$$
(3.7)
Equations (3.2-3.6) allow us to write:
$$R[\mathcal{D}Y_r^{V6}]=F_{21}(\beta )R[\mathcal{D}Y]F_{12}(\beta )^{-1}.$$
(3.8)
Using the notation (3.1), $`F_{12}(\beta )`$ is given by
$$F_{12}(\beta )=\rho _F(\beta )M(\frac{\mathrm{\Gamma }_1(\frac{i\beta }{\pi }+r-1|2r)}{\mathrm{\Gamma }_1(\frac{i\beta }{\pi }+r+1|2r)},\frac{\mathrm{\Gamma }_1(\frac{i\beta }{\pi }+2r-1|2r)}{\mathrm{\Gamma }_1(\frac{i\beta }{\pi }+2r+1|2r)}).$$
(3.9)
This twist-like matrix reads
$`F_{12}(\beta )`$ $`=`$ $`\rho _F(\beta ){\displaystyle \underset{n=1}{\overset{\mathrm{\infty }}{\prod }}}M(1,{\displaystyle \frac{i\beta +\pi +2n\pi r}{i\beta -\pi +2n\pi r}})M({\displaystyle \frac{i\beta +\pi +(2n-1)\pi r}{i\beta -\pi +(2n-1)\pi r}},1)`$ (3.10)
$`=`$ $`{\displaystyle \underset{n=1}{\overset{\mathrm{\infty }}{\prod }}}R(\beta -i(2n)\pi r)^{-1}\tau (R(\beta -i(2n-1)\pi r)^{-1})`$ (3.11)
with
$$\tau (M(a,b))=M(b,a)$$
(3.12)
and where, unless differently specified, $`R`$ is the $`R`$-matrix of $`๐Y`$. One uses here the representation of $`\rho _F(\beta )`$ as an infinite product
$$\rho _F(\beta )=\underset{n=1}{\overset{\mathrm{\infty }}{\prod }}\rho (\beta -in\pi r)^{-1}.$$
(3.13)
The automorphism $`\tau `$ may be interpreted as the adjoint action of $`(-1)^{\frac{1}{2}h_0^{(1)}}`$, so that
$`F_{12}(\beta )`$ $`=`$ $`{\displaystyle \underset{n=1}{\overset{\mathrm{\infty }}{\prod }}}R(\beta -i(2n)\pi r)^{-1}Ad\left((-1)^{\frac{1}{2}h_0^{(1)}}\right)R(\beta -i(2n-1)\pi r)^{-1}`$ (3.14)
$`=`$ $`{\displaystyle \underset{n=1}{\overset{\mathrm{\infty }}{\prod }}}Ad\left((-1)^{\frac{n}{2}h_0^{(1)}}\right)R(\beta -in\pi r)^{-1}.`$ (3.15)
Hence $`F`$ is solution of the difference equation
$$F(\beta -i\pi r)=(-1)^{\frac{1}{2}h_0^{(1)}}F(\beta )(-1)^{-\frac{1}{2}h_0^{(1)}}R(\beta -i\pi r).$$
(3.16)
It would be tempting to relate the automorphism $`\tau `$ to the one used in , although the naive scaling of the latter does not give back the former. For instance, our $`\tau `$ is inner, not outer.
All the infinite products are logarithmically divergent. They are consistently regularised by the $`\mathrm{\Gamma }_1`$ and $`\mathrm{\Gamma }_2`$ functions. In particular, $`\underset{r\rightarrow \mathrm{\infty }}{lim}F=M(1,1)=\text{1I}_4`$.
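The matrix ($`M`$-) parts of the twist relation (3.8)-(3.9) and of the difference equation (3.16) can be checked numerically. The sketch below uses the stand-in $`\mathrm{\Gamma }_1(x|\omega )\to \mathrm{\Gamma }(x/\omega )`$ (the $`x`$-dependent prefactor distinguishing the two cancels in every ratio involved here), evaluates at a real value of $`z=i\beta /\pi `$ (legitimate since the identities are analytic in $`\beta `$), and reads $`F_{21}(\beta )`$ in (3.8) as the twist evaluated at $`-\beta `$, the $`M`$-matrices themselves being $`P_{12}`$-invariant:

```python
import math

r = 3.0     # arbitrary sample value of the deformation parameter
z = 0.37    # z = i*beta/pi, taken real

def g1(x, omega=2 * r):           # stand-in for Gamma_1(x | omega)
    return math.gamma(x / omega)

def bF(z):                        # matrix part (b+, b-) of the twist F, eq. (3.9)
    return (g1(z + r - 1) / g1(z + r + 1),
            g1(z + 2 * r - 1) / g1(z + 2 * r + 1))

def bR(z):                        # matrix part of R[DY], eq. (3.2)
    return (1.0, (z - 1) / (z + 1))

def bV6(z):                       # matrix part of R[DY_r^V6], eqs. (3.3)-(3.4)
    return (math.cos(math.pi * (z - 1) / (2 * r)) / math.cos(math.pi * (z + 1) / (2 * r)),
            math.sin(math.pi * (z - 1) / (2 * r)) / math.sin(math.pi * (z + 1) / (2 * r)))

# eq. (3.8) componentwise: b_r = b_F(-z) * b_R(z) / b_F(z)
for fm, f, br, bv in zip(bF(-z), bF(z), bR(z), bV6(z)):
    assert abs(fm * br / f - bv) < 1e-12

# matrix part of the difference equation (3.16): under beta -> beta - i*pi*r
# (z -> z + r) the components of F swap (the action of tau), up to the
# matrix part of R(beta - i*pi*r)
bp_shift, bm_shift = bF(z + r)
bp, bm = bF(z)
assert abs(bp_shift - bm) < 1e-12
assert abs(bm_shift - bp * (z + r - 1) / (z + r + 1)) < 1e-12
```

The second block relies only on the recursion $`\mathrm{\Gamma }(u+1)=u\mathrm{\Gamma }(u)`$, while the first follows from the reflection formula, mirroring the regularisation role of the $`\mathrm{\Gamma }_1`$ functions noted above.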
## 4 The universal form of $`\mathcal{F}[\mathcal{D}Y;\mathcal{D}Y_r^{V6}]`$
We construct a universal twist $`\mathcal{F}`$ from $`\mathcal{D}Y`$ to $`\mathcal{D}Y_r^{V6}`$, such that
$$F(\beta _1-\beta _2)=\pi _{\beta _1}\otimes \pi _{\beta _2}(\mathcal{F}).$$
(4.1)
The form of the difference equation (3.16) obeyed by the conjectural representation of the twist, together with the known generic structures of linear equations obeyed by universal twists lead us to postulate the following linear equation for $`\mathcal{F}`$:
$$\mathcal{F}(r)=Ad(\varphi ^{-1}\otimes \text{1I})(\mathcal{F}(r))\mathcal{O}$$
(4.2)
with
$`\varphi `$ $`=`$ $`(-1)^{\frac{1}{2}h_0}e^{(r+c)d},`$ (4.3)
$`\mathcal{O}`$ $`\equiv `$ $`e^{\alpha c\otimes d-\gamma d\otimes c}.`$ (4.4)
We now prove the consistency of these postulates. We will use the following preliminary properties:
* The operator $`d`$ in the double Yangian $`\mathcal{D}Y`$ is defined by $`[d,e(u)]=\frac{d}{du}e(u)`$ (see ). The evaluation representations are related through $`\pi _{\beta +\beta ^{\prime }}=\pi _\beta Ad(\mathrm{exp}(\frac{i\beta ^{\prime }}{\pi }d))`$.
* The operator $`d`$ satisfies $`\mathrm{\Delta }(d)=d\otimes 1+1\otimes d`$.
* The generator $`h_0`$ of $`\mathcal{D}Y`$ is such that
$$h_0e(u)=e(u)(h_0+2),h_0f(u)=f(u)(h_0-2),[h_0,h(u)]=0,$$
(4.5)
and hence $`\tau =Ad\left((-1)^{\frac{1}{2}h_0^{(1)}}\right)`$ satisfies $`\tau ^2=1`$.
The equation (4.2) can be solved by
$$\mathcal{F}(r)=\underset{k}{\overset{\leftarrow }{\prod }}\mathcal{F}_k(r),\mathcal{F}_k(r)=\varphi _1^k\mathcal{O}_{12}^{-1}\varphi _1^{-k}.$$
(4.6)
It is easily seen that equation (3.15) is the evaluation representation of this universal formula.
As in , $`\mathcal{F}_k`$ satisfy the following properties:
$`(\mathrm{\Delta }\otimes \text{id})(\mathcal{F}_k(r))`$ $`=`$ $`\mathcal{F}_k^{(23)}(r+c_1)\mathcal{F}_k^{(13)}\left(r+c_2+{\displaystyle \frac{\alpha }{k}}c_2\right),`$ (4.7)
$`(\text{id}\otimes \mathrm{\Delta })(\mathcal{F}_k(r))`$ $`=`$ $`\mathcal{F}_k^{(12)}(r)\mathcal{F}_k^{(13)}\left(r-{\displaystyle \frac{\gamma }{k}}c_2\right),`$ (4.8)
and
$$\mathcal{F}_k^{(12)}(r)\mathcal{F}_{k+l}^{(13)}\left(r+\frac{l-\gamma }{k+l}c_2\right)\mathcal{F}_l^{(23)}(r+c_1)=\mathcal{F}_l^{(23)}(r+c_1)\mathcal{F}_{k+l}^{(13)}\left(r+\frac{l+\alpha }{k+l}c_2\right)\mathcal{F}_k^{(12)}(r).$$
(4.9)
It is then straightforward to follow to prove the shifted cocycle relation, provided that $`\alpha +\gamma =1`$.
We then have
$$\mathcal{F}^{(12)}(r)(\mathrm{\Delta }\otimes \text{id})(\mathcal{F}(r))=\mathcal{F}^{(23)}\left(r+c^{(1)}\right)(\text{id}\otimes \mathrm{\Delta })(\mathcal{F}(r)).$$
(4.10)
It follows that $`\mathcal{R}_{12}^{\mathcal{D}Y_r^{V6}}=\mathcal{F}_{21}\mathcal{R}_{12}\mathcal{F}_{12}^{-1}`$ satisfies a shifted Yang–Baxter equation
$$\mathcal{R}_{12}(r+c^{(3)})\mathcal{R}_{13}(r)\mathcal{R}_{23}(r+c^{(1)})=\mathcal{R}_{23}(r)\mathcal{R}_{13}(r+c^{(2)})\mathcal{R}_{12}(r),$$
(4.11)
and that $`\mathcal{D}Y_r^{V6}`$ is a quasi-Hopf algebra with $`\mathrm{\Delta }^{\prime }(x)=\mathcal{F}\mathrm{\Delta }(x)\mathcal{F}^{-1}`$ and $`\mathrm{\Phi }_{123}=\mathcal{F}_{23}(r)\mathcal{F}_{23}(r+c^{(1)})^{-1}`$.
## 5 Twist to $`\mathcal{D}Y_r^{V8}`$
### 5.1 In representation
The $`R`$-matrices of $`\mathcal{D}Y_r^{V6}`$ and $`\mathcal{D}Y_r^{V8}`$ are related by
$$R[\mathcal{D}Y_r^{V8}]=K_{21}R[\mathcal{D}Y_r^{V6}]K_{12}^{-1},$$
(5.1)
where
$$K=V\otimes V\text{with}V=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}\hfill 1& \hfill 1\\ \hfill -1& \hfill 1\end{array}\right).$$
(5.2)
This implies an isomorphism between $`\mathcal{D}Y_r^{V8}`$ and $`\mathcal{D}Y_r^{V6}`$ where the Lax operators are connected by $`L_{V8}=VL_{V6}V^{-1}`$.
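The gauge relation (5.1) can be checked numerically at the level of the (unnormalised) matrix parts. A sketch with numpy; note that $`K=V\otimes V`$ is itself $`P_{12}`$-invariant, so $`K_{21}=K_{12}`$, and we take the second row of $`V`$ as $`(-1,1)`$ and the corner entries of $`R_{V8}`$ with a minus sign, as the conjugation requires:

```python
import numpy as np

r, z = 3.0, 0.37                      # sample values; z = i*beta/pi taken real

# 6-vertex matrix part of R[DY_r^V6], eq. (2.9) (normalisation rho_r dropped)
b = np.sin(np.pi * z / r) / np.sin(np.pi * (z + 1) / r)
c = np.sin(np.pi / r) / np.sin(np.pi * (z + 1) / r)
R6 = np.array([[1, 0, 0, 0],
               [0, b, c, 0],
               [0, c, b, 0],
               [0, 0, 0, 1]])

# 8-vertex matrix part of R[DY_r^V8], eq. (2.19), with half-angle arguments
ca, cb, h = np.pi * z / (2 * r), np.pi * (z + 1) / (2 * r), np.pi / (2 * r)
R8 = np.array([
    [np.cos(ca) * np.cos(h) / np.cos(cb), 0, 0, -np.sin(ca) * np.sin(h) / np.cos(cb)],
    [0, np.sin(ca) * np.cos(h) / np.sin(cb), np.cos(ca) * np.sin(h) / np.sin(cb), 0],
    [0, np.cos(ca) * np.sin(h) / np.sin(cb), np.sin(ca) * np.cos(h) / np.sin(cb), 0],
    [-np.sin(ca) * np.sin(h) / np.cos(cb), 0, 0, np.cos(ca) * np.cos(h) / np.cos(cb)],
])

V = np.array([[1, 1], [-1, 1]]) / np.sqrt(2)
K = np.kron(V, V)
assert np.allclose(K @ R6 @ np.linalg.inv(K), R8)
```

In Pauli language the rotation sends $`\sigma _z\otimes \sigma _z`$ to $`\sigma _x\otimes \sigma _x`$ and vice versa, which is precisely what turns the 6-vertex form into the 8-vertex one.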
### 5.2 Universal form
We identify $`V`$ with an evaluation representation of an element $`g`$
$$V\equiv \pi _x(g)\text{with}g=\mathrm{exp}\left(\frac{\pi }{2}(f_0-e_0)\right).$$
(5.3)
Since $`e_0`$ and $`f_0`$ lie in the undeformed Hopf subalgebra $`sl(2)`$ of $`\mathcal{D}Y`$ , the coproduct of $`g`$ reads
$$\mathrm{\Delta }(g)=g\otimes g$$
(5.4)
so that
$$g_1g_2\mathrm{\Delta }^{\prime }(g^{-1})=g_1g_2\mathcal{F}_{12}g_1^{-1}g_2^{-1}\mathcal{F}_{12}^{-1}.$$
(5.5)
The two-cocycle $`g_1g_2\mathrm{\Delta }^{\prime }(g^{-1})`$ is a coboundary (with respect to the coproduct $`\mathrm{\Delta }^{\prime }`$). In representation, (5.5) is equal to the scaling limit of the represented twist from $`\mathcal{U}_q`$ to $`\mathcal{A}_{q,p}`$ .
Note that this case is similar to the gauge transformation used in although $`g`$ is not purely Cartan.
It follows that
$$\mathcal{R}[\mathcal{D}Y_r^{V8}]\equiv g_1g_2\mathrm{\Delta }_{21}^{\prime }(g^{-1})\mathcal{R}[\mathcal{D}Y_r^{V6}]\mathrm{\Delta }_{12}^{\prime }(g)g_1^{-1}g_2^{-1}$$
(5.6)
satisfies the shifted Yang–Baxter equation (4.11).
To recover (5.1), use (5.5) and remark that $`\pi _x\otimes \pi _x(g\otimes g)`$ commutes with $`R[\mathcal{D}Y]`$.
## 6 Twist to $`\mathcal{D}Y_r^F`$
### 6.1 Twist in representation
The $`R`$-matrices of $`\mathcal{D}Y_r^{V6}`$ and $`\mathcal{D}Y_r^F`$ are related by:
$$R[\mathcal{D}Y_r^F](\beta _1-\beta _2)=K_{21}^{(6)}(\beta _2,\beta _1)R[\mathcal{D}Y_r^{V6}](\beta _1-\beta _2)(K_{12}^{(6)})^{-1}(\beta _1,\beta _2),$$
(6.1)
where
$$K^{(6)}(\beta _1,\beta _2)=V^{\prime }(\beta _1)\otimes V^{\prime }(\beta _2)\text{with}V^{\prime }(\beta )=\left(\begin{array}{cc}\hfill e^{\frac{\beta }{2r}}& \hfill 0\\ \hfill 0& \hfill e^{-\frac{\beta }{2r}}\end{array}\right).$$
(6.2)
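This diagonal gauge can also be verified directly: conjugating the 6-vertex matrix part by $`K_{12}^{(6)}`$ leaves the diagonal untouched and dresses the two off-diagonal entries with $`e^{\pm \beta /r}`$, reproducing the matrix of $`\mathcal{D}Y_r^F`$ above. A numpy sketch, with $`V^{\prime }(\beta )=\mathrm{diag}(e^{\beta /2r},e^{-\beta /2r})`$:

```python
import numpy as np

r = 3.0
b1, b2 = 0.9, 0.4                 # sample rapidities beta_1, beta_2 (real)
beta = b1 - b2
z = 1j * beta / np.pi             # i*beta/pi (complex for real beta)

# matrix part of R[DY_r^V6] at beta_1 - beta_2
b = np.sin(np.pi * z / r) / np.sin(np.pi * (z + 1) / r)
c = np.sin(np.pi / r) / np.sin(np.pi * (z + 1) / r)
R6 = np.array([[1, 0, 0, 0], [0, b, c, 0], [0, c, b, 0], [0, 0, 0, 1]])

def Vp(beta):                     # V'(beta) = diag(exp(beta/2r), exp(-beta/2r))
    return np.diag([np.exp(beta / (2 * r)), np.exp(-beta / (2 * r))])

# swapping the tensor legs of V'(b2) (x) V'(b1) gives back V'(b1) (x) V'(b2),
# so K21^(6)(b2, b1) coincides with K12^(6)(b1, b2)
K12 = np.kron(Vp(b1), Vp(b2))
RF = K12 @ R6 @ np.linalg.inv(K12)

assert np.allclose(RF[1, 2], c * np.exp(beta / r))    # picks up e^{+beta/r}
assert np.allclose(RF[2, 1], c * np.exp(-beta / r))   # picks up e^{-beta/r}
assert np.allclose(np.diag(RF), np.diag(R6))          # diagonal unchanged
```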
### 6.2 Universal twist
Again, one identifies $`V^{\prime }(\beta )`$ as the evaluation representation of an algebra element
$$V^{\prime }(\beta )=\pi _\beta \left(g^{\prime }\right),$$
(6.3)
where
$$g^{\prime }=\mathrm{exp}\left(\frac{h_1}{2r}\right).$$
(6.4)
One then defines the following shifted coboundary
$$\mathcal{K}_{12}(r)=g^{\prime }(r)\otimes g^{\prime }(r+c^{(1)})\mathrm{\Delta }^{\prime }((g^{\prime })^{-1}).$$
(6.5)
It obeys a shifted cocycle condition
$$\mathcal{K}_{12}(r)(\mathrm{\Delta }^{\prime }\otimes \text{id})\mathcal{K}(r)=\mathcal{K}_{23}(r+c^{(1)})(\text{id}\otimes \mathrm{\Delta }^{\prime \prime })\mathcal{K}(r),$$
(6.6)
with $`\mathcal{F}_{23}^{\prime }(r)=\mathcal{F}_{23}(r+c^{(1)})`$, as a consequence of
$$(\mathrm{\Delta }^{\prime }\otimes \text{id})\mathrm{\Delta }^{\prime }((g^{\prime })^{-1})=(\text{id}\otimes \mathrm{\Delta }^{\prime \prime })\mathrm{\Delta }^{\prime }((g^{\prime })^{-1}),$$
(6.7)
which is the coassociativity property for $`\mathrm{\Delta }^{\prime }`$.
Finally
$$\mathcal{R}[\mathcal{D}Y_r^F]\equiv \mathcal{K}_{21}(r)\mathcal{R}[\mathcal{D}Y_r^{V6}]\mathcal{K}_{12}^{-1}(r)$$
(6.8)
satisfies the shifted Yang–Baxter equation (4.11). Moreover, (6.8) together with (6.5) show that $`\mathcal{D}Y_r^F`$ and $`\mathcal{D}Y_r^{V6}`$ are the same quasi-Hopf algebra.
Acknowledgements
This work was supported in part by CNRS and EC network contract number FMRX-CT96-0012.
M.R. was supported by an EPSRC research grant no. GR/K 79437 and CNR-NATO fellowship.
D.A., L.F. and E.R. are most grateful to RIMS for hospitality. We warmly thank M. Jimbo, H. Konno, T. Miwa and J. Shiraishi for fruitful and stimulating discussions.
We are also indebted to S. Pakuliak for his enlightening comments.
J.A. wishes to thank the LAPTH for its kind hospitality.
# Accelerations of Water Masers in NGC4258
## 1 Introduction
Water masers were first detected in the galaxy NGC4258 (M106) by Claussen, Heiligman, & Lo (1984) and Henkel et al. (1984). The $`6_{16}\rightarrow 5_{23}`$ rotational transition of H<sub>2</sub>O, at a rest frequency of 22.235080 GHz, produces this emission. Observers found maser sources spanning a velocity range of about $`200\mathrm{km}\mathrm{s}^{-1}`$, approximately centered on the systemic velocity of the galaxy, which Cecil, Wilson, & Tully (1992) measured to be $`472\pm 4\mathrm{km}\mathrm{s}^{-1}`$. (This velocity, along with the rest of the velocities below, uses the radio definition of the Doppler shift, $`v/c=\mathrm{\Delta }\nu /\nu _0`$, in the LSR frame.) These maser lines are referred to as the systemic-velocity spectral features. The peak flux density varies in time but is generally between 2 and 10 Jy, with a typical value of about 4 Jy. Early Very Long Baseline Interferometry (VLBI) observations of these features revealed that the systemic emission is quite compact, on the order of one-hundredth of a parsec (Claussen et al. 1988), but is elongated with a velocity gradient along its major axis (7970 $`\pm `$ 40 km s<sup>-1</sup>pc<sup>-1</sup>), most likely indicative of circular motion seen edge-on (Greenhill et al. 1995a). Haschick & Baan (1990) were the first to note the velocity drift (acceleration) of one systemic feature. Later studies found the whole of the systemic emission to be increasing in velocity at a rate of about 10 km s<sup>-1</sup>yr<sup>-1</sup> (Haschick, Baan, & Peng 1994; Greenhill et al. 1995b; Nakai et al. 1995).
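For concreteness, the radio convention quoted above converts between LSR velocity and observing frequency as $`\nu =\nu _0(1-v/c)`$. A small sketch using the rest frequency and velocities from the text:

```python
nu0 = 22.235080e9        # rest frequency of the maser transition (Hz)
c = 2.99792458e5         # speed of light (km/s)

def sky_freq(v_lsr):
    """Radio-definition Doppler shift: v/c = (nu0 - nu)/nu0  =>  nu = nu0*(1 - v/c)."""
    return nu0 * (1.0 - v_lsr / c)

# systemic velocity vs. the most redshifted high-velocity emission (~1460 km/s)
print(f"{sky_freq(472.0)/1e9:.4f} GHz")    # ~22.2001 GHz
print(f"{sky_freq(1460.0)/1e9:.4f} GHz")   # ~22.1268 GHz
```

The full maser complex thus spans only about 0.15 GHz of observing bandwidth, which is why broad-band spectrometers were needed to find the high-velocity features.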
Nakai, Inoue, & Miyoshi (1993) observed NGC4258 with a spectrometer of very broad bandwidth and discovered additional maser emission at velocities offset approximately $`\pm 1000\mathrm{km}\mathrm{s}^{-1}`$ from the previously known systemic emission. These are called the high-velocity spectral features. Unlike the systemic features, which appear as a thicket of overlapping lines, the high-velocity features are more sparse, occurring in small, well-separated clusters of a few narrow ($`\sim `$ 1 km s<sup>-1</sup>), overlapping lines. The redshifted high-velocity features have peak flux densities around 1 Jy or less and have velocities of about 1230 to 1460 $`\mathrm{km}\mathrm{s}^{-1}`$. In contrast, the blueshifted high-velocity features have peak flux densities of about 0.1 Jy or less, and cover velocities in the range of $`-520`$ to $`-290`$ $`\mathrm{km}\mathrm{s}^{-1}`$. In response to the observation of the systemic emission position-velocity gradient, the measurement of the systemic emission velocity drift, and the detection of the high-velocity emission, Watson & Wallin (1994) and Greenhill et al. (1995a) proposed that the systemic- and high-velocity features are all part of a rotating disk about 0.2 pc in radius viewed nearly edge-on. The systemic features originate in a small region on the front side of the disk, producing the linear position-velocity gradient as well as the line-of-sight acceleration observed for these features. The high-velocity features were attributed to gas at large impact parameters where the disk's orbital motion is parallel to the line of sight. The velocity range of the high-velocity spectrum was believed to reflect the broad radial width of the disk, though the high-velocity maser positions were unknown at this time (Greenhill et al. 1995a).
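The gradient and the drift together fix the systemic masers' orbital radius with no distance assumption: for circular motion seen directly in front of the center, the line-of-sight acceleration is $`a=v^2/R`$ and the gradient is $`g=v/R`$, so $`R=a/g^2`$. A quick sketch with the numbers quoted above:

```python
pc_km = 3.0857e13      # one parsec in km
yr_s = 3.156e7         # one year in seconds

g = 7970.0             # position-velocity gradient (km/s per pc)
a = 10.0 / yr_s        # velocity drift of ~10 km/s/yr, converted to km/s^2

R = a / g**2 * pc_km   # a/g^2 carries units of pc^2/km; pc_km converts to pc
v = g * R              # implied orbital speed (km/s)
print(f"R ~ {R:.2f} pc, v ~ {v:.0f} km/s")   # R ~ 0.15 pc, v ~ 1230 km/s
```

The resulting ~0.15 pc is close to the systemic-maser radius later measured directly with the VLBA (Miyoshi et al. 1995).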
VLBA observations of both the previously studied systemic features and the more recently discovered high-velocity features provide strong confirmation that the masers are embedded in a rotating disk that we view nearly edge-on (Miyoshi et al. 1995). The results are summarized in Figure 1. The observations show that the features are distributed in a linear fashion with small vertical spread, suggestive of a thin disk. However, the disk is slightly warped; the red- and blueshifted high-velocity features do not precisely “line up,” but rather appear to trace out portions of a curve. The rotation curve inferred from the high-velocity features is Keplerian. The masers are located at disk radii between 0.14 pc and 0.28 pc, and the central mass is 3.9 $`\times `$ $`10^7`$ M<sub>⊙</sub> for a calculated distance of 7.2 Mpc, most likely in the form of a supermassive black hole (Herrnstein et al. 1999; Maoz 1995a, 1998).
The line-of-sight velocity for a maser in Keplerian rotation around a mass $`M`$ at a disk radius $`R`$ is given by $`v=\left(\frac{GM}{R}\right)^{1/2}\mathrm{cos}\theta +v_{gal}`$, where $`\theta `$ is the azimuthal position measured from the midline (diameter perpendicular to the line of sight), and $`v_{gal}`$ is the systemic velocity of the galaxy. Because the masers are well fit by a Keplerian rotation curve, the high-velocity masers must all be located close to a single diameter through the disk, i.e., $`\mathrm{cos}\theta `$ is a constant. The midline (where $`\mathrm{cos}\theta =1`$) is the most likely candidate for two reasons. First, the line-of-sight velocity gradient (along the line of sight) is zero there, which maximizes the gain path along which maser amplification can occur. Second, the line-of-sight accelerations measured for the high-velocity features are small (as discussed below).
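This relation is easy to evaluate numerically. The sketch below is our own illustration, not code from the paper; it uses standard values for the physical constants and the mass ($`3.9\times 10^7`$ M) and systemic velocity ($`472`$ km s<sup>-1</sup>) quoted in the text, with the function name and default arguments chosen here for convenience:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
PC = 3.086e16       # parsec, m

def v_los(M_msun, R_pc, theta_deg, v_gal=472.0):
    """Line-of-sight velocity (km/s) of a maser in Keplerian rotation:
    v = (G M / R)^(1/2) * cos(theta) + v_gal."""
    v_orb = math.sqrt(G * M_msun * M_SUN / (R_pc * PC)) / 1e3  # km/s
    return v_orb * math.cos(math.radians(theta_deg)) + v_gal

# A redshifted maser on the midline (theta = 0) at R = 0.17 pc around a
# 3.9e7 M_sun central mass lands near the top of the observed red range:
print(v_los(3.9e7, 0.17, 0.0))   # ~1465 km/s
```

For a systemic feature ($`\theta 90^{}`$) the orbital term nearly vanishes and the result reduces to the galaxy's systemic velocity, as expected.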
The line-of-sight velocity for a maser in Keplerian rotation at a disk radius $`R`$ can also be expressed as $`v=\left(\frac{GM}{R}\right)^{1/2}\left(\frac{b}{R}\right)+v_{gal}`$, where $`b`$ is the impact parameter in the plane of the sky measured from the center of the disk. Because the systemic-velocity masers exhibit a linear velocity gradient with impact parameter, they must all be located close to a single disk radius; i.e., $`R`$ is a constant. The systemic-velocity features are probably located in front of the diskโs dynamical center for two reasons. First, the velocities of these features are near the systemic velocity of the galaxy. Second, these features occur spatially midway between the red- and blueshifted high-velocity features. The distinctly non-zero accelerations measured for the systemic-velocity features are consistent with this interpretation.
Motivated by hints of periodicity in the spacings of high-velocity emission in position and velocity (see Figure Accelerations of Water Masers in NGC4258), Maoz (1995b) proposed that spiral structure is present in the disk and that masing occurs at the density maxima located where the spiral arms intersect the midline. Later, Maoz & McKee (1998) expanded upon this idea and suggested that the disk contains spiral shock waves and that masing occurs only in thin post-shock regions seen in locations where the spiral arms are tangent to the line of sight. In this model the high-velocity features decrease in velocity (magnitude) at a predictable rate, as the spiral structure rotates and different portions of the spiral arms become tangent to the line of sight. Consequently, within the model the masers are not distinct physical entities but rather locations in the disk marking the passage of the spiral excitation wave.
Four previous studies of maser feature accelerations have been made. All four studies measured accelerations for the systemic features; two of the studies also examined high-velocity accelerations. Greenhill et al. (1995b) found accelerations for twelve systemic features using a series of spectra taken at the Effelsberg 100 m telescope in 1984โ1986. They measured a range of values between 8.1 and 10.9 $`\mathrm{km}\mathrm{s}^1\mathrm{yr}^1`$, with an average drift of $`9.5\pm 1.1\mathrm{km}\mathrm{s}^1\mathrm{yr}^1`$. Haschick et al. (1994) observed the systemic features with the Haystack 37 m telescope at roughly monthly intervals from 1986 to 1993. They found accelerations for four clusters of masers of between 6.2 and 10.4 $`\mathrm{km}\mathrm{s}^1\mathrm{yr}^1`$, with an average value of 7.5 $`\mathrm{km}\mathrm{s}^1\mathrm{yr}^1`$. Nakai et al. (1995) observed the systemic features with the Nobeyama Radio Observatory 45 m telescope approximately every week in 1992. They measured accelerations for thirteen features of between 8.7 and 10.2 $`\mathrm{km}\mathrm{s}^1\mathrm{yr}^1`$, with an average rate of $`9.6\pm 1.0`$ $`\mathrm{km}\mathrm{s}^1\mathrm{yr}^1`$. All three of the above studies determined the accelerations by following local maxima in spectra through a time series and looking for velocity drifts as a linear function of time. In the fourth study, Herrnstein (1997) used spectra from four epochs of VLBA observations, four to nine months apart. These large time gaps precluded following the features โby eye.โ Instead, features were tracked by a Bayesian analysis that considered all possible pairings of features among the epochs. The measured accelerations were between 6.8 and 11.6 km s<sup>-1</sup>yr<sup>-1</sup>, with most values close to 9 km s<sup>-1</sup>yr<sup>-1</sup>. The small range in measured acceleration supports the idea that the systemic masers originate in a relatively narrow band of radii.
Two of the previous four studies also examined the high-velocity features, but neither detected any statistically significant accelerations. Greenhill et al. (1995b) measured upper limits of 1 km s<sup>-1</sup>yr<sup>-1</sup> for 20 redshifted lines and 3 blueshifted lines in a series of spectra taken in 1993. Nakai et al. (1995) also tracked redshifted and blueshifted spectral lines and found upper limits on the acceleration of 0.7 $`\mathrm{km}\mathrm{s}^1\mathrm{yr}^1`$ and 2.8 $`\mathrm{km}\mathrm{s}^1\mathrm{yr}^1`$, respectively.
The purpose of our study was to obtain precise emission feature velocities at regular intervals and thereby measure the accelerations of the high-velocity features. These observations are an improvement over past efforts because of the long time baseline (nearly three years) and frequent observations. The data permit us to derive line-of-sight positions of masers in the disk, to test the predictions of the Maoz & McKee model, and to look for correlations between maser positions and physical properties such as linewidth and intensity.
In §2 we describe the observations and limits on systematic measurement errors, and in §3 we present the measured accelerations. In §4 we compare our results to the Maoz & McKee model, derive maser positions within the disk, and discuss possible correlations in maser properties. A summary of our conclusions is contained in §5. In the appendix we present a quantitative analysis of the relative robustness and sensitivity of maser position estimates obtained individually from measurements of acceleration, position, and line-of-sight velocity. A preliminary version of these results was presented by Bragg et al. (1998).
## 2 Observations and Data Reduction
This study uses observations from three different instruments: the Very Large Array (VLA) and the Very Long Baseline Array (VLBA) of the NRAO (the National Radio Astronomy Observatory is operated by Associated Universities, Inc., under cooperative agreement with the National Science Foundation), and the Effelsberg 100 m telescope of the Max Planck Institute for Radio Astronomy. We summarize the observations in Table 1 and display a time-series of systemic, redshifted, and blueshifted spectra in Figures Accelerations of Water Masers in NGC4258, Accelerations of Water Masers in NGC4258, and Accelerations of Water Masers in NGC4258, respectively.
### 2.1 VLA Observations
We observed NGC4258 with the VLA seventeen times between 1995 January and 1997 February (approximately every one to two months) in order to obtain a series of spectra of the masers without large time gaps. We used two IFs to observe adjacent velocity ranges. The IFs were tuned to fixed sky frequencies and Doppler tracking was implemented in software. The bandwidth of each IF was 3.125 MHz ($``$ 42 km s<sup>-1</sup>) which was divided into 128 channels of width 24.4 kHz (0.329 km s<sup>-1</sup>). The instantaneous bandwidth for the spectrometer configured in this way is about 80 km s<sup>-1</sup>, and a series of seven integrations covered the entire region of interest in the spectrum.
We observed the systemic and redshifted features for all epochs, and the blueshifted features in all but the final three. For all epochs except the first, we observed the systemic features over the range $`390`$ to $`600\mathrm{km}\mathrm{s}^1`$, the red features over the range $`1235`$ to $`1460\mathrm{km}\mathrm{s}^1`$, and the blue features over the ranges $`-460`$ to $`-420\mathrm{km}\mathrm{s}^1`$ and $`-390`$ to $`-350\mathrm{km}\mathrm{s}^1`$. (For the first epoch, these ranges were all shifted by $`20\mathrm{km}\mathrm{s}^1`$ towards higher velocities.) For some epochs, we observed additional velocity bands, including $`-550`$ to $`-510\mathrm{km}\mathrm{s}^1`$, $`-330`$ to $`-290\mathrm{km}\mathrm{s}^1`$, $`1475`$ to $`1515\mathrm{km}\mathrm{s}^1`$, and $`1560`$ to $`1635\mathrm{km}\mathrm{s}^1`$, but detected no new emission. Typical integration times for each velocity range were 18 minutes, the exception being the blueshifted velocities, for which we used 36-minute integrations. We observed 1146+399 for phase calibration, 3C286 for flux calibration, and 3C273 for bandpass calibration.
We edited and calibrated the data with standard routines in AIPS. The overall amplitude calibration is accurate to 20% and the relative calibration within each epoch is accurate to 15%. For each epoch, we computed spectra from vector averages of data for all baselines. The angular extent of the maser emission is much smaller than the resolution of the VLA in any configuration, so imaging was unnecessary. For the epoch on 1996 June 27, thunderstorms resulted in the loss of all data. For the epochs on 1995 July 29 and 1995 September 9, large atmospheric phase variations led to the loss of high-velocity data; we recovered part of the systemic data from 1995 July by self-calibration of peaks in the maser spectrum.
### 2.2 VLBA and Effelsberg Data
The VLBA data we include in this study consist of five spectra of the high-velocity masers taken from 1994 April to 1996 September (Herrnstein 1997; A. Trotter 1998, private communication). For these observations the total bandwidth was $``$400 km s<sup>-1</sup>, which was divided into 2048 spectral channels each of width 0.211 km s<sup>-1</sup>. The velocity coverage of these observations exceeded that of the VLA spectra, so we have used only the portions that overlap the VLA spectra. As for the VLA data, the amplitude calibration is good to within 20% and Doppler tracking was implemented in software after correlation. The VLBA spectra have better signal-to-noise than the VLA observations largely because the integration times were typically much longer (about 12 hours).
The Effelsberg 100 m telescope data consist of five spectra of the redshifted high-velocity features taken between 1995 March and 1995 June. These observations were obtained in total-power mode with a bandwidth of $``$333 km s<sup>-1</sup> divided into 1024 spectral channels of width 0.329 km s<sup>-1</sup>. The integration times were between 6 and 12 minutes on-source. These observations covered a velocity range from 1180 to 1510 km s<sup>-1</sup>, and the amplitude calibration is accurate to within 20%. We corrected the spectra to the radio definition of the Doppler shift. (We note that by default the band-center velocities of Effelsberg spectra assume the optical definition of the Doppler shift.) At the band-center frequency used, the difference between velocities using the two definitions is $`6.052`$ km s<sup>-1</sup>. The Effelsberg spectra have lower signal-to-noise than those from the VLA largely because of the short integration times.
### 2.3 Feature Fitting
#### 2.3.1 High-Velocity Features
To measure the velocity of the maser medium for each feature, as well as spectral feature amplitudes and linewidths, we fit Gaussian profiles to the spectral data. However, the high-velocity features occur in clusters of a few partially-overlapping lines, so fits of multiple components are necessary. We used a nonlinear, multiple-Gaussian-component least-squares fit. To optimize the fitting, we split the redshifted high-velocity spectra from the VLA and VLBA into three segments, each containing between five and twelve features: from 1225 $`\mathrm{km}\mathrm{s}^1`$ to 1300 $`\mathrm{km}\mathrm{s}^1`$, from 1300 $`\mathrm{km}\mathrm{s}^1`$ to 1375 $`\mathrm{km}\mathrm{s}^1`$, and from 1375 $`\mathrm{km}\mathrm{s}^1`$ to 1460 $`\mathrm{km}\mathrm{s}^1`$. We fit each Effelsberg spectrum over the entire 300 km s<sup>-1</sup> range at once, as each spectrum contains fewer detectable features because of the lower signal-to-noise ratios.
Each spectrum was fit iteratively. First, we identified and fit the four or five most prominent peaks. Second, we identified additional features in the residuals by convolving the residuals with a five-channel boxcar function and searching for peaks among these smoothed residuals. Third, we refit the spectrum to include any additional features. We repeated this process until both the rms deviation of the residuals was comparable to the noise level in the spectrum, and no additional features could be picked out โby eyeโ in the smoothed residuals. Occasionally, we split VLA spectra into 20 to 30 km s<sup>-1</sup> velocity segments (instead of 75 km s<sup>-1</sup>) when they were especially crowded.
#### 2.3.2 Systemic Features
We have identified ten to fifteen peaks in each systemic-velocity spectrum. Though it is desirable to obtain velocities from formal fitting, the density of features near the systemic velocity is too great to permit a unique decomposition of the spectra into individual masing components. For these features, we followed the method used by Greenhill et al. (1995b): first, we convolved each spectrum with a three-channel boxcar, second, we identified local maxima and minima, and third, we selected maxima that exceed one of their two neighboring minima by more than $`13\sigma `$. We have chosen a factor of thirteen to preclude the selection of noise spikes as peaks but to still include most of the visually โrealโ peaks. This method is not sensitive to lower amplitude peaks but seemed to do reasonably well in the dense part of the spectrum. Because the expected accelerations are greater than for the high-velocity portion of the spectrum, high precision measurement of velocity is less critical.
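The peak-selection procedure above can be sketched compactly. The following is our own minimal implementation of the described steps (three-channel boxcar, local extrema, 13σ contrast cut); the edge handling and tie-breaking at flat smoothed maxima are our assumptions, not details from the paper:

```python
def find_peaks(spectrum, sigma, k=13.0, box=3):
    """Select spectral peaks: boxcar-smooth, locate local maxima, and keep
    those exceeding at least one neighboring local minimum by > k*sigma."""
    n = len(spectrum)
    h = box // 2
    # boxcar smoothing, with the window truncated at the spectrum edges
    s = [sum(spectrum[max(i - h, 0):i + h + 1]) / len(spectrum[max(i - h, 0):i + h + 1])
         for i in range(n)]
    maxima = [i for i in range(1, n - 1) if s[i - 1] < s[i] >= s[i + 1]]
    minima = [i for i in range(1, n - 1) if s[i - 1] > s[i] <= s[i + 1]]
    peaks = []
    for m in maxima:
        left = max((j for j in minima if j < m), default=None)
        right = min((j for j in minima if j > m), default=None)
        neigh = [s[j] for j in (left, right) if j is not None]
        # keep the maximum if it exceeds EITHER neighboring minimum by k*sigma
        if neigh and s[m] - min(neigh) > k * sigma:
            peaks.append(m)
    return peaks
```

A 50σ spike on a flat baseline survives the cut (50/3 after smoothing exceeds 13σ), while a 30σ spike is smoothed to 10σ of contrast and is rejected, illustrating why the method is insensitive to lower-amplitude peaks.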
### 2.4 Velocity Error Budget
Because the accelerations of the high-velocity features are quite small ($`<1`$ km s<sup>-1</sup>yr<sup>-1</sup>), it was important that our individual velocity measurements be accurate to a level not normally needed in radio astronomy. We used a standard routine in AIPS to implement Doppler tracking, for which the uncertainty is $``$ 0.004 km s<sup>-1</sup> due to the omission of Jupiter's influence. (The routine accounts for the rotation of the earth, motion of the earth about the earth-moon barycenter, revolution of the earth-moon barycenter about the sun, and the motion of the sun with respect to the local standard of rest.) To verify and possibly to further constrain this uncertainty, we compared sample velocity shifts computed in AIPS to those from the CfA Planetary Ephemeris Program (PEP), which is accurate to 1 mm s<sup>-1</sup> (J. Chandler 1998, private communication). We found the two programs to agree to within the quoted uncertainty of the AIPS routine (0.004 km s<sup>-1</sup>). Thus we conclude that any long-term accelerations we observe in the maser features above this level must be due to a real physical effect.
## 3 Results
### 3.1 Accelerations of the High-Velocity Maser Features
We have measured accelerations for seventeen redshifted high-velocity features and two blueshifted high-velocity features and find them to range between $`0.77`$ and 0.38 km s<sup>-1</sup>yr<sup>-1</sup> (Figure Accelerations of Water Masers in NGC4258, Table 2). To do this, we plotted centroid velocity as a function of time for all fitted peaks and identified isolated and minimally-blended features that were resolved by the fitting process described above. We identified three biases in this procedure: (1) features with low accelerations are favored in the presence of blends, (2) feature amplitude and linewidth may be correlated in blends (note the feature at 1450 km s<sup>-1</sup> in Figure Accelerations of Water Masers in NGC4258), and (3) features extremely near each other in velocity with time varying fluxes may appear to be a single feature for which a non-physical acceleration could be measured.
We fit a linear function to the time-series of velocities for each tracked feature and estimate accelerations. The $`\chi _\nu ^2`$ values derived for these fits are not very good, indicating that the assumption of constant acceleration is not entirely correct. In some cases, the spectral features exhibit a clear systematic wander from constant acceleration, i.e., a "wobble." The feature at 1306 km s<sup>-1</sup>, in particular, exhibits a wander that does not appear to be a result of scatter from measurement error. This "wobble" is likely a result of either feature blending (two nearby features being mistaken for a single feature) or real changes in the structure of the masing gas cloud that cause an apparent velocity shift. In any event, the wobble is introduced as a source of random noise of unknown amplitude that must be combined with the measurement noise to estimate the acceleration. In order to quantify this "wobble" and find the uncertainties in the acceleration values, we have used a maximum likelihood technique similar to that used to fit simple linear functions, but with the inclusion of a "wobble" uncertainty added in quadrature to the uncertainty in the measurement of the velocity for each feature. This model for the motion of the masers primarily changes the weighting of the measured peak velocities; it is similar to rescaling the errors by $`\chi _\nu ^2`$, but our method better quantifies the observed deviations in sensible units. In cases like that of the feature at 1306 km s<sup>-1</sup>, the uncertainty in the estimate of acceleration is almost entirely due to the wobble contribution, whereas for the feature at -440 km s<sup>-1</sup>, the uncertainty is largely due to the measurement errors. The likelihood is given by:
$$P=\prod _i\frac{1}{\left(\sigma _i^2+\sigma _w^2\right)^{\frac{1}{2}}}\mathrm{exp}\left[-\frac{\left(v_i-\left(a+bx_i\right)\right)^2}{2\left(\sigma _i^2+\sigma _w^2\right)}\right],$$
(1)
where $`i`$ enumerates the epochs, $`v_i`$ is the feature velocity measured on day $`x_i`$, with error $`\sigma _i`$, $`a`$ and $`b`$ are the usual linear fit parameters (intercept and slope), and $`\sigma _w`$ is the so-called "wobble" factor, which is taken to be constant in time, but different for each feature. We obtained the parameter values that result in the maximum likelihood by taking derivatives of $`\mathrm{ln}P`$ with respect to $`a`$, $`b`$, and $`\sigma _w`$, setting them equal to zero, and solving iteratively to find the acceleration ($`b`$) and "wobble" factor ($`\sigma _w`$) for each feature. Finally, we estimated the uncertainties by executing a linear least-squares fit with $`\sigma _w`$ held fixed. Note that the units of the wobble factors listed in Table 2 are km s<sup>-1</sup>. On the simple assumption that the measurements are approximately uniformly distributed over the three years of observations, the contribution of the wobble to the uncertainty in the measured acceleration will be about $`0.3\sigma _w`$ km s<sup>-1</sup>yr<sup>-1</sup>. For cases where this term approaches the measurement uncertainty quoted for the acceleration, the uncertainty is due mostly to the wobble contribution. When the term is small, the uncertainty is due predominantly to measurement error.
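The iterative solution can be sketched as follows. This is a hypothetical implementation of the alternation between a weighted linear fit and the stationarity condition in $`\sigma _w`$ (setting the $`\sigma _w`$ derivative of the log-likelihood to zero gives $`\sigma _w^2=(r_i^2w_i^2\sigma _i^2w_i^2)/w_i^2`$ with $`w_i=1/(\sigma _i^2+\sigma _w^2)`$); the function names and iteration count are our choices:

```python
import math

def wls(x, v, w):
    """Weighted least-squares line v = a + b*x with weights w."""
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sv = sum(wi * vi for wi, vi in zip(w, v))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxv = sum(wi * xi * vi for wi, xi, vi in zip(w, x, v))
    D = S * Sxx - Sx * Sx
    return (Sv * Sxx - Sx * Sxv) / D, (S * Sxv - Sx * Sv) / D  # a, b

def fit_with_wobble(x, v, sigma, n_iter=200):
    """Alternate a weighted linear fit with the fixed-point update for the
    wobble: sw^2 = (sum r^2 w^2 - sum sigma^2 w^2) / sum w^2,
    where w = 1/(sigma_i^2 + sw^2) and r are the fit residuals."""
    sw = 0.0
    for _ in range(n_iter):
        w = [1.0 / (s * s + sw * sw) for s in sigma]
        a, b = wls(x, v, w)
        r2 = [(vi - (a + b * xi)) ** 2 for xi, vi in zip(x, v)]
        den = sum(wi * wi for wi in w)
        excess = (sum(ri * wi * wi for ri, wi in zip(r2, w))
                  - sum(s * s * wi * wi for s, wi in zip(sigma, w))) / den
        sw = math.sqrt(max(excess, 0.0))   # wobble floored at zero
    return a, b, sw
```

When the residuals are consistent with the measurement errors alone, the excess is negative and the wobble collapses to zero, recovering an ordinary weighted fit; when the scatter exceeds the errors, $`\sigma _w`$ absorbs the excess and down-weights every epoch equally, as described in the text.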
### 3.2 Accelerations of the Systemic-Velocity Features
We have measured accelerations for twelve systemic-velocity features, with values between 7.5 and 10.4 km s<sup>-1</sup>yr<sup>-1</sup>. As in the case of the high-velocity features, we tracked the emission lines by eye in a plot of velocity versus time for local maxima in the spectra. A straight-line least-squares fit to the data for each feature yields an acceleration. We assumed an error of 0.3 km s<sup>-1</sup> for each maximum (corresponding to the width of the velocity channels). Table 3 contains the results of these fits and Figure Accelerations of Water Masers in NGC4258 shows the data and the fitted lines. The range of accelerations agrees well with those obtained by past studies. The average value is 9.1$`\pm `$0.8 km s<sup>-1</sup>yr<sup>-1</sup>, where 0.8 km s<sup>-1</sup>yr<sup>-1</sup> is the rms deviation from the mean.
It has been proposed that there is a persistent gap or dip in the spectrum at the systemic velocity ($`472`$ km s<sup>-1</sup>). Theories invoked to explain this putative characteristic involve an absorbing layer of non-inverted H<sub>2</sub>O in the disk (Watson & Wallin 1994; Maoz & McKee 1998). We do not find evidence for such a gap in our spectral data; indeed, we find features moving through the systemic velocity (Figure Accelerations of Water Masers in NGC4258). At least one such feature is prominent in the data presented here, during the epochs between 1996 January 11 and 1996 May 10 (Figure Accelerations of Water Masers in NGC4258). Other features moving through the systemic velocity were observed by Greenhill et al. (1995a).
## 4 Discussion
### 4.1 Comparisons with Spiral Model
We have used the measured accelerations to test the predictions of the spiral shock model of Maoz & McKee (1998). A primary motivation for this model was to explain a perceived periodicity in the positions of the groups of high-velocity masers and the relative weakness of the blueshifted features when compared to the redshifted features. In the model the high-velocity masers are produced in thin post-shock regions, and we should observe maser emission where the proposed spiral arms are parallel to the line of sight, which occurs along a diameter that is at an angle to the midline equal to the pitch angle of the spiral. For a trailing spiral, this geometry places the redshifted features in front of the midline, and the blueshifted features behind it, where they are subject to absorption as the emission passes through the disk. The model is illustrated in Figure Accelerations of Water Masers in NGC4258, the first panel of which shows spiral arms with an exaggerated pitch angle of 20°, the disk midline, and the diametrical chord that makes a 20° angle with the midline. (Note that the arms are parallel to the line of sight where they intersect the 20° chord.) Assuming a logarithmic spiral, Maoz & McKee predict an acceleration of $`0.05(\theta _p/2.5^{})`$ km s<sup>-1</sup>yr<sup>-1</sup> towards smaller *absolute* velocities for all maser features, where $`\theta _p`$ is the pitch angle of the spiral. This acceleration is *not* a result of the Keplerian motion of an individual maser, but rather occurs because of the rotation of the spiral arms at the pattern speed. As the structure rotates, different portions of the arms become tangent to the line of sight. For a trailing spiral, the rotation causes the tangent point of each arm with the line of sight to move outward in radius. Because the rotational velocity is smaller at larger radii, the velocities of all features appear to decrease in magnitude.
This apparent deceleration is not dynamical in origin because different clumps of gas are visible in different epochs.
The fundamental signature of the model is a step function of acceleration with position, where the magnitude of the step is proportional to the pitch angle. The middle panel of Figure Accelerations of Water Masers in NGC4258 displays accelerations predicted by Maoz & McKee for a pitch angle of 2.5°. We do not see this signature in the data. No choice of pitch angle can reproduce the observed accelerations because statistically significant positive and negative accelerations are measured for both redshifted and blueshifted high-velocity features (Figure Accelerations of Water Masers in NGC4258, bottom). These accelerations must occur for some other reason. We suggest that the measured accelerations simply reflect line-of-sight projections for features only slightly off the midline. We conclude that spiral shock waves are probably not the dominant cause of the measured accelerations.
Maoz & McKee also predict that the blueshifted features will *always* be weaker than the redshifted features for maser emission originating in a trailing spiral shock wave. We note that there exist two examples in which blueshifted features have been observed to be stronger than redshifted ones. NGC3079 always exhibits strong blue emission (Nakai et al. 1995; Trotter et al. 1998), though the disk is not well defined, and it would be premature to speculate on the presence of a spiral instability. NGC5793 has also been observed on occasion to have stronger blueshifted high-velocity emission (Hagiwara et al. 1997). Herrnstein, Greenhill, & Moran (1996) suggested that in the case of NGC4258, the persistent relative weakness of the blueshifted features is due to absorption of these features along the line of sight, which passes through gas ionized by the central engine. The redshifted features are not absorbed because the disk warp is anti-symmetric, and their line of sight passes through less heavily ionized gas that is โshadowedโ by the disk. For this model, either high-velocity group could be the stronger for any particular maser source.
### 4.2 Geometric Model
We have assumed that the accelerations are a direct manifestation of the physical motion of discrete clumps of gas in a Keplerian disk. There are three direct ways that the azimuthal positions can be determined for a flat thin disk slightly inclined to the line of sight: analysis of the measured positions in the plane of the sky; analysis of deviations of line-of-sight velocities from an assumed Keplerian rotation curve; and analysis of accelerations. In the appendix we show that the third technique is the most sensitive for the conditions in NGC4258. We use this technique here. We solve for the azimuthal position angle of each maser (with respect to the midline) from the line-of-sight velocity,
$$v_{los}=\left(\frac{GM}{R}\right)^{\frac{1}{2}}\mathrm{cos}\theta ,$$
(2)
and the line-of-sight acceleration,
$$a_{los}=\frac{GM}{R^2}\mathrm{sin}\theta .$$
(3)
Eliminating $`R`$ from eqs. (2) and (3), we obtain
$$f(\theta )=\frac{\mathrm{sin}\theta }{\mathrm{cos}^4\theta }=GM\frac{a_{los}}{v_{los}^4},$$
(4)
In the small-angle approximation $`f(\theta )\approx \theta `$, but we solved the transcendental eq. (4) to estimate the values of $`\theta `$ for all the high- and systemic-velocity maser features. We find that the high-velocity masers lie between $`-13.6^{}`$ and $`9.3^{}`$ in azimuth. Individual results are listed in Table 2 and shown in Figure Accelerations of Water Masers in NGC4258 along with the amplitudes and linewidths of the features at all epochs. The dominant uncertainty in $`\theta `$ is due to measurement uncertainty in $`a_{los}`$. The 4% uncertainty in distance (Herrnstein et al. 1999) contributes an uncertainty in $`M`$ of 4% and hence an uncertainty in angle of 4%.
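Because $`f(\theta )`$ is monotonic on $`(90^{},90^{})`$, eq. (4) can be inverted by simple bisection. The sketch below is our own illustration with standard constants and the $`3.9\times 10^7`$ M central mass from the text; the example velocity and acceleration are representative values, not entries from Table 2:

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
YR = 3.156e7        # s

def azimuth(M_msun, v_los_kms, a_los_kms_yr):
    """Solve sin(t)/cos(t)^4 = G*M*a_los/v_los^4 for the azimuth t
    (radians from the midline) by bisection; f is monotonic increasing
    on (-pi/2, pi/2), so the root is unique. v_los is the orbital
    line-of-sight velocity (galaxy systemic velocity subtracted)."""
    rhs = (G * M_msun * M_SUN) * (a_los_kms_yr * 1e3 / YR) / (v_los_kms * 1e3) ** 4
    f = lambda t: math.sin(t) / math.cos(t) ** 4 - rhs
    lo, hi = -math.pi / 2 + 1e-6, math.pi / 2 - 1e-6
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# A feature with orbital v_los = 900 km/s and a_los = +0.3 km/s/yr sits
# a few degrees in front of the midline:
print(math.degrees(azimuth(3.9e7, 900.0, 0.3)))   # ~4.25 degrees
```

Negative accelerations yield negative angles (behind the midline), so the sign of the measured acceleration directly gives the side of the midline on which a feature lies.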
These positions are consistent with those found by the disk modeling of Herrnstein (1997). The standard deviation of the positions of the high-velocity masers derived here is $`\sigma _\theta =4.9^{}`$, which is consistent with the statistical scatter about the midline of $`6^{}`$ found with the VLBA. However, the positions of the high-velocity masers do not compare well in detail, probably because Herrnstein's azimuthal positions are highly model dependent. Nonetheless, for his models, the feature at -434 km s<sup>-1</sup> is located about 10° behind the midline for four epochs of VLBI observation, which agrees reasonably well with the position found for it here, 6° behind the midline.
If the accelerations of the systemic-velocity features are due to Keplerian motion, then the radius of each feature is given by $`R=(GM/a)^{1/2}`$, where $`R`$ is the disk radius of the emitting gas, $`a`$ is the measured acceleration, and $`M`$ is the mass at the center of the disk (assuming the values of $`\theta `$ are nearly $`90^{}`$). Accelerations between 7.5 and 10.4 km s<sup>-1</sup>yr<sup>-1</sup> correspond to radii between 0.127 pc and 0.152 pc. The average and standard deviation of the radii of these features are $`R=0.138\pm 0.006`$ pc. Moran et al. (1995) found a typical radial spread of only about 0.005 pc, but their Figure 4 shows that some features lie farther out than this. In general, our results agree well with theirs; most features lie within a fairly narrow range of radii with a few outlying points.
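The arithmetic above is easy to check numerically. This sketch is ours (standard constants assumed, central mass $`3.9\times 10^7`$ M as quoted in the text), and it reproduces the quoted range of radii:

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
PC = 3.086e16       # m
YR = 3.156e7        # s

def radius_pc(M_msun, a_kms_yr):
    """R = (G M / a)^(1/2): disk radius (pc) of a systemic maser with
    measured line-of-sight acceleration a (km/s/yr), theta ~ 90 deg."""
    a_si = a_kms_yr * 1e3 / YR                      # m/s^2
    return math.sqrt(G * M_msun * M_SUN / a_si) / PC

# Accelerations of 10.4 and 7.5 km/s/yr bracket radii of roughly
# 0.128 pc and 0.151 pc, matching the range quoted in the text.
print(radius_pc(3.9e7, 10.4), radius_pc(3.9e7, 7.5))
```

Note that larger accelerations map to smaller radii, so the spread of measured accelerations translates directly into the narrow radial band occupied by the systemic masers.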
### 4.3 Physical Conditions in the High-Velocity Maser Medium
A fundamental question is whether maser features are discrete physical entities or just markers of locations in streaming gas where conditions are favorable for maser emission. The former hypothesis is supported by the fact that masers are discrete points in VLBI images with persistent, Gaussian-like spatial profiles with measured proper motions and spectral line profiles that vary little in time. Naturally, if the clumpiness of the systemic masers is established, then the same should be true in the high-velocity maser medium. On the other hand, the fact that the masers tend to lie near the midline where the velocity gradients are minimum suggests the latter hypothesis. A reconciliation of these views can be found in the mechanism whereby discrete clumps are more likely to amplify each otherโs emission in regions where velocity gradients are small (e.g., Deguchi & Watson 1989). To understand more clearly the physical conditions, we have analyzed the maser properties as a function of position and time.
#### 4.3.1 Maser Amplitude and Linewidth as a Function of Position
We find that the maser features with the largest average amplitudes are those located near the midline; specifically, there is an upper envelope visible in the data indicating a falling-off of average amplitude with position away from the midline (Figure Accelerations of Water Masers in NGC4258). This is reasonable in the context of the anticipated gain lengths. The velocity coherence length $`l`$ (the path length in the disk over which the line-of-sight velocity is constant to within one maser linewidth) decreases away from the midline ($`l\propto 1/\theta `$ for $`\theta `$ between about 3° and 15°). A shorter possible path length for amplification could result in weaker maser features.
We do not observe an obvious correlation between average amplitude and radial position of the masers. We might expect such a relation to come about for at least two reasons. First, the coherence length, defined above, is greater at larger radii ($`l\propto R^{5/4}`$), because of the smaller velocity gradients farther from the disk center. Given constant pumping, increasing the coherence length would be expected to result in larger gains. Second, the height of the disk increases with $`R`$, as $`H\propto R^{3/2}`$ for an isothermal, hydrostatic disk in Keplerian rotation (Frank, King, & Raine 1992). This implies a larger emission region at larger radii, which could lead to increasing emission with radius. The lack of an amplitude increase with longer coherence lengths suggests that the observed amplitude is not limited by the coherence length, but rather by some tighter and radius-independent constraint. For example, if the masers originate in aligned clumps of material that amplify each other, then the emission would likely be less sensitive to changes in the coherence length. The clumps would need to be close enough to one another that their emission would be beamed into an angle greater than the local inclination angle in order for us to see it. For clumps approximately the size of the masing layer thickness ($`h\approx 0.0003`$ pc, Moran et al. 1995), separated by the maximum possible coherence length, $`l`$, the beaming angle (given by $`h/l`$) is only $`2^{}`$, but disk modeling (Herrnstein 1997) reveals that the inclination angle is larger than this. Therefore, smaller clump separations are required (about a third of the maximum coherence length), confining the volume from which an individual spectral feature could originate and precluding any possible trends with radius resulting from increasing maximum coherence lengths.
Due to such beaming effects, clumps of gas significantly smaller than the disk height would need to be quite close together to be visible, weakening the restriction of features to the midline. As we only observe features near the midline, such small clumps are unlikely.
Finally, we find a marginal trend between linewidth and $`R`$. The average linewidth decreases with radius, but time variability makes such a correlation difficult to see. The best-fit slope for the average width versus radial position is $`-5.82\pm 2.36`$ km s<sup>-1</sup>pc<sup>-1</sup> when we weight the data using the rms deviations of the widths. For the individual epochs, the slopes range between $`-10.3`$ and $`-0.1`$ km s<sup>-1</sup>pc<sup>-1</sup>. The weighted average slope is $`-2.14`$ km s<sup>-1</sup>pc<sup>-1</sup>. Note that the fitted slope is negative for all observed epochs, strengthening the conclusion that the gradient exists. The most likely explanation for the decrease of linewidth with radius is that the temperature decreases with radius. For a Shakura-Sunyaev thin disk, the temperature is proportional to $`R^{-0.75}`$ (Frank et al. 1992), which is consistent with our data.
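The weighted averaging of per-epoch slopes described above is straightforward to reproduce with inverse-variance weights. In the sketch below, the slope and error values are hypothetical, chosen only to span the reported range of epoch-to-epoch slopes; they are not the actual fitted values.

```python
import math

def weighted_mean(values, sigmas):
    # Inverse-variance weighted mean and its formal 1-sigma uncertainty.
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    err = math.sqrt(1.0 / sum(w))
    return mean, err

# Hypothetical per-epoch slopes (km/s/pc), spanning the reported -10.3 to -0.1 range.
slopes = [-10.3, -4.0, -1.5, -0.1]
sigmas = [5.0, 2.0, 1.0, 2.5]
mean, err = weighted_mean(slopes, sigmas)
```

With these made-up inputs the weighted mean is pulled toward the best-determined epoch, illustrating why the weighted average can differ substantially from a simple mean of the epoch slopes.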
#### 4.3.2 Time Variability of the High-Velocity Maser Features
The amplitudes of the high-velocity maser features are highly time variable, but their linewidths remain fairly constant. This provides some information about the saturation condition of the masers. All the features tracked varied by at least a factor of 2 in amplitude, and the most variable feature changed by a factor of 32. The average variation was a factor of 8, although the rms variation was only about 23%. There could be many reasons for these fluctuations: change in physical size of the masing medium, change in direction of the maser beam with respect to the earth, change in pump conditions for a saturated maser, or change in a background source in the case of unsaturated amplification.
The flux density from a saturated maser with a cylindrical geometry that is beamed toward the observer is (e.g., Goldreich & Keeley 1972)
$$F=\frac{1}{2}\frac{h\nu n\mathrm{\Delta }Pl^3}{\mathrm{\Delta }\nu D^2},$$
(5)
where $`h`$ is Planck's constant, $`\nu `$ is the frequency, $`\mathrm{\Delta }\nu `$ is the linewidth, $`l`$ is the length of the maser, $`n`$ is the population density in the pump level, $`\mathrm{\Delta }P`$ is the differential pump rate, and $`D`$ is the distance to the maser. The flux density is independent of the cross-sectional area of the maser and depends on the cube of the length because of the beaming effect. Hence, for small changes in length the fractional amplitude variation is three times the fractional length variation. Thus an rms variation of 23% in amplitude would require a variation of 8% in length. To account for a variation of a factor of 32 would require a change in length of a factor of 3. Such a large physical length variation seems unrealistic, which suggests that the masers may be unsaturated.
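The cube-law scaling in Equation (5) fixes the arithmetic quoted above; a minimal check with the numbers from the text:

```python
def length_factor_saturated(flux_ratio):
    # F is proportional to l^3, so the required length ratio is the
    # cube root of the flux ratio.
    return flux_ratio ** (1.0 / 3.0)

def frac_length_change_saturated(frac_flux_change):
    # Small-variation limit of F ~ l^3: dF/F = 3 dl/l.
    return frac_flux_change / 3.0

length_factor_saturated(32.0)        # a factor of ~3 in length for a factor of 32 in flux
frac_length_change_saturated(0.23)   # ~8% length change for a 23% rms flux variation
```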
If the beam angle of the masers is small enough, the maser can be unsaturated. The brightness temperature, $`T_B`$, of the maser is $`F\lambda ^2/(2k\theta _s^2)`$, where $`\theta _s`$ is the angular size of the maser. The masers are unresolved at a level of about 100 $`\mu `$as, which means that their brightness temperatures are greater than $`2\times 10^{11}`$ K for a typical flux density of 1 Jy. On the other hand, the brightness temperature at which a maser saturates (e.g., Reid & Moran 1988) is
$$T_S=\frac{h\nu }{2k}\frac{\mathrm{\Gamma }}{A}\frac{4\pi }{\theta _m^2},$$
(6)
where $`k`$ is Boltzmann's constant, $`\theta _m`$ is the maser beam angle, $`\mathrm{\Gamma }`$ is the maser decay rate and $`A`$ is the Einstein coefficient for the maser transition, 1 s<sup>-1</sup> and $`2\times 10^9`$ s<sup>-1</sup>, respectively. Hence, the maser can be unsaturated as long as the beam angle is small enough that $`T_B<T_S`$, or
$$\theta _m<\left[\frac{4\pi h\nu ^3\mathrm{\Gamma }}{c^2FA}\theta _s^2\right]^{\frac{1}{2}}.$$
(7)
For $`F`$ = 1 Jy and $`\theta _s`$ = 100 $`\mu `$as, the beam angle needs to be less than about 6° for a maser to be unsaturated, which is a reasonable expectation. However, if the cross sections of the masers are equal to the hydrostatic thickness of the maser layer of the disk for a temperature of 1000 K (10 $`\mu `$as), then the beam angle would need to be less than 0.6°, i.e., an aspect ratio of greater than 100:1.
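Equation (7) can be evaluated numerically to reproduce the quoted limits. The sketch below works in SI units and assumes the 22.235 GHz water transition together with the $`\mathrm{\Gamma }`$ and $`A`$ values given in the text.

```python
import math

# Numerical check of the unsaturated-beam-angle condition (Eq. 7):
# theta_m < [4*pi*h*nu^3*Gamma*theta_s^2 / (c^2*F*A)]^(1/2), all in SI units.
H = 6.62607015e-34                          # Planck constant [J s]
C = 2.99792458e8                            # speed of light [m/s]
JY = 1.0e-26                                # 1 jansky [W m^-2 Hz^-1]
MUAS = math.pi / 180.0 / 3600.0 * 1.0e-6    # 1 microarcsecond [rad]

def max_unsaturated_beam_angle(F_jy, theta_s_muas,
                               nu=22.235e9, Gamma=1.0, A=2.0e-9):
    """Largest beam angle (radians) for which T_B < T_S, i.e. the maser
    can still be unsaturated."""
    theta_s = theta_s_muas * MUAS
    return math.sqrt(4.0 * math.pi * H * nu**3 * Gamma * theta_s**2
                     / (C**2 * F_jy * JY * A))

theta_m = max_unsaturated_beam_angle(1.0, 100.0)   # ~0.11 rad, roughly 6 degrees
```

Rerunning with a 10 $`\mu `$as cross section shrinks the limit by the same factor of ten, reproducing the 0.6° figure.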
The flux density of an unsaturated maser pointed at the observer is
$$F=I_o\theta _s^2e^{\alpha l},$$
(8)
where $`I_o`$ is the input intensity of the maser, and $`\alpha `$ is the gain coefficient of the maser. Therefore,
$$\frac{\mathrm{\Delta }l}{l}=\frac{1}{\alpha l}\mathrm{ln}(F_2/F_1).$$
(9)
A reasonable estimate of the gain of a water maser, $`\alpha l`$, is 25 (e.g., Reid & Moran 1988). Thus, a change in amplitude by a factor of 32 can be accomplished with a 14% change in length with constant cross sectional area.
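For an unsaturated maser, Equation (9) gives the quoted 14% directly; a one-line check, assuming the fiducial gain of 25:

```python
import math

def frac_length_change_unsaturated(flux_ratio, gain=25.0):
    # Equation (9): F ~ exp(alpha*l), so dl/l = ln(F2/F1) / (alpha*l).
    return math.log(flux_ratio) / gain

frac_length_change_unsaturated(32.0)   # ~0.14, i.e. a 14% length change
```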
In the simple theory of masers, the linewidth is expected to narrow during unsaturated growth and rebroaden to the thermal linewidth during saturation. During unsaturated growth the linewidth is (Goldreich & Kwan 1974)
$$\mathrm{\Delta }\nu =\frac{\mathrm{\Delta }\nu _D}{\sqrt{\alpha l}},$$
(10)
where $`\mathrm{\Delta }\nu _D`$ is the Doppler linewidth of the feature. Figure 13 shows two examples of linewidth versus amplitude. Ignoring the possible variation in the 1303 km s<sup>-1</sup> feature at low amplitude, we conclude that there is no evidence for linewidth changes with amplitude; i.e., the linewidth changes by less than 10% for an amplitude change of a factor of 10. The change in linewidth, $`\delta \mathrm{\Delta }\nu `$, as a function of change in length is
$$\frac{\delta \mathrm{\Delta }\nu }{\mathrm{\Delta }\nu }=\frac{1}{2}\frac{\mathrm{\Delta }l}{l}.$$
(11)
Substituting Equation (11) into Equation (9) to eliminate $`\mathrm{\Delta }l/l`$ gives an estimate of the maser gain,
$$\alpha l=\frac{\mathrm{ln}(F_2/F_1)}{2\frac{\delta \mathrm{\Delta }\nu }{\mathrm{\Delta }\nu }}.$$
(12)
Thus, a gain of greater than 12 would make the linewidth variation undetectable given that we see less than 10% linewidth variation for an amplitude change of a factor of 10.
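This bound follows directly from Equation (12); a quick numerical check with the numbers in the text (flux change of a factor of 10, fractional linewidth change below 10%):

```python
import math

def gain_from_linewidth_limit(flux_ratio, frac_width_change):
    # Equation (12): alpha*l = ln(F2/F1) / (2 * dDeltaNu/DeltaNu).
    return math.log(flux_ratio) / (2.0 * frac_width_change)

gain_from_linewidth_limit(10.0, 0.10)   # ~11.5, i.e. a gain of about 12
```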
An alternative explanation for the lack of variation in the linewidth is that the masers are actually saturated. If hydrostatic support of the disk limits the sizes of (unsaturated) maser clumps to $`<10\mu `$as, then masers should not be visible over a broad range of radii, because the local inclinations mostly exceed the beam angle of $`0.6^{\circ }`$ required to keep the masers unsaturated. However, for saturated emission, significant variability must imply large changes in path length or pump conditions (Equation 5). Significant changes in emission rate and beam angle can be accomplished if a maser gain path is crossed by clumps (of varying sizes) that are moving at similar line-of-sight velocities within the disk. However, crossing times comparable to the observed time scale of intensity fluctuations may be difficult to realize. Instead, local pump efficiency may be time variable along a gain path if the maser pump energy is supplied by X-ray irradiation (and cooling) of the disk gas (Neufeld, Maloney, & Conger 1994). However, this mechanism is complex and detailed modeling is necessary to investigate it.
## 5 Conclusions
Accelerations have been measured for the water maser features in NGC4258. The average acceleration measured for the systemic velocity features is 9.1$`\pm `$0.8 km s<sup>-1</sup>yr<sup>-1</sup>, which is consistent with past observations. The scatter probably indicates that the masers lie over a range of radii within the disk of about 17%. The accelerations of the high-velocity features were successfully measured for the first time and found to lie between $`-0.77`$ and 0.38 km s<sup>-1</sup>yr<sup>-1</sup>. Maser positions, derived from a simple Keplerian disk model and measured line-of-sight velocities and accelerations of the high-velocity features, were between $`-13.6^{\circ }`$ and $`9.3^{\circ }`$ from the midline, with a standard deviation of $`4.9^{\circ }`$. There is no significant systematic bias in positions with respect to the midline. The average amplitudes of the masers are largest near the midline, as expected from velocity coherence arguments. The variability of the high-velocity features, the largest being a factor of 32, suggests that the masers are unsaturated. The absence of linewidth variations implies that the maser gain is greater than 12 or else that the masers are saturated. There may be a marginal decrease in linewidth with radius, consistent with the thin disk accretion model. No evidence was found to support a spiral shock origin of the maser features.
The authors would like to thank J. Herrnstein and A. Trotter for access to their VLBA spectra as well as J. Chandler for providing PEP calculations for comparison with the AIPS program. A. E. B. is a National Science Foundation Graduate Fellow.
## Appendix A Appendix โ Positions Along the Line of Sight
In this paper, we use measured line-of-sight accelerations and velocities to solve for the positions of the high-velocity masers in a flat model disk. The simplest view is adopted, i.e. the masers arise from small clumps of gas in Keplerian orbits around a massive central object. While the impact parameters are measurable with VLBI, the positions of the masers along the line of sight for an edge-on disk (angular displacements from the midline) are difficult to estimate as precisely.
There are actually three ways to measure line-of-sight positions: from positions in the VLBA maps, from velocity deviations on a position-velocity diagram, and from line-of-sight accelerations. The first two techniques depend solely on imaging of the maser disk. We can investigate these methods and compare the error bars that each generates.
The azimuthal positions of high-velocity masers in a flat inclined disk can be found from the off-axis sky positions (direction perpendicular on the sky to that defined by the midline). The off-axis position of a particular maser spot located an angle $`\theta `$ from the midline at a radius $`R`$ in a disk with inclination angle $`\varphi `$ is given by
$$y=R\mathrm{sin}\theta \mathrm{sin}\varphi .$$
(A1)
Rearranging and in the limit of small angles:
$$\theta =\mathrm{arcsin}\frac{y}{R\mathrm{sin}\varphi }\approx \frac{y}{R\mathrm{sin}\varphi }.$$
And so,
$$\mathrm{\Delta }\theta =\frac{\mathrm{\Delta }y}{R\mathrm{sin}\varphi }.$$
(A2)
We know that the uncertainty in the y-position is related to the signal-to-noise ratio (SNR), angular resolution ($`\mathrm{\Delta }\mathrm{\Phi }`$), and distance to the source (D) as:
$$\mathrm{\Delta }y=\frac{1}{2}\frac{\mathrm{\Delta }\mathrm{\Phi }}{SNR}D,$$
which gives us
$$\mathrm{\Delta }\theta =\frac{1}{2}\frac{D\mathrm{\Delta }\mathrm{\Phi }}{R\mathrm{sin}\varphi }\frac{1}{SNR}.$$
But, $`R/D`$ is the angular offset ($`\mathrm{\Delta }\alpha `$) of the high velocity masers from the reference point (the systemic masers), so
$$\mathrm{\Delta }\theta =\frac{1}{2}\frac{\mathrm{\Delta }\mathrm{\Phi }}{\mathrm{\Delta }\alpha }\frac{1}{\mathrm{sin}\varphi }\frac{1}{SNR}.$$
(A3)
For a resolution of 500 $`\mu `$as, an angular offset ($`\mathrm{\Delta }\alpha `$) of 6000 $`\mu `$as, and an SNR of 10, values typical for the VLBA observations of the high-velocity masers, along with the observed inclination angle of 6°, we find that $`\mathrm{\Delta }\theta \approx 2.5^{\circ }`$, which is actually fairly large. In addition, because the disk in NGC4258 is not flat, applying this method requires a model of the warp in order to relate sky-position to location in the disk. Also, centroid fitting is used to find positions and could be affected by multiple spatially unresolved features.
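Equation (A3) with the stated values reproduces this number; the sketch below is a direct transcription of the formula:

```python
import math

def dtheta_position(dphi_muas, dalpha_muas, incl_deg, snr):
    # Equation (A3): dTheta = (1/2)(dPhi/dAlpha)(1/sin(phi))(1/SNR), in radians.
    return 0.5 * (dphi_muas / dalpha_muas) / (math.sin(math.radians(incl_deg)) * snr)

dt = dtheta_position(500.0, 6000.0, 6.0, 10.0)   # ~0.04 rad, about 2.3 degrees
```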
The deviations from Keplerian rotation provide another method for determining the azimuthal positions of the masers. In this case, it is necessary to fit an upper envelope to $`v`$ versus $`r`$, since the largest line-of-sight velocities occur on the midline. The velocity of a feature at an angle $`\theta `$ from the midline is given by
$$v=v_{}\mathrm{cos}\theta .$$
where $`v_{}`$ is the total rotational velocity of the feature (also the value we would see if the feature were on the midline). In the case of small angles, this can be written
$$v\approx v_{}\left(1-\frac{\theta ^2}{2}\right).$$
(A4)
The deviation of the maser velocity from the Keplerian velocity is thus given by
$$v_{}-v=v_{}\frac{\theta ^2}{2}.$$
Because this expression is quadratic in $`\theta `$ with no linear term, it is useless near $`\theta =0`$ ($`\frac{d\theta }{dv}`$ approaches infinity). Therefore, we estimate the uncertainty in $`\theta `$ by substituting the velocity uncertainty into the same quadratic relation, so
$$\delta \mathrm{\Delta }v=v_{}\frac{\left(\mathrm{\Delta }\theta \right)^2}{2}.$$
Rearranging,
$$\mathrm{\Delta }\theta \approx \left(\frac{2\delta \mathrm{\Delta }v}{v_{}}\right)^{\frac{1}{2}}.$$
(A5)
Using the fact that the uncertainty in the velocity of a fitted feature is related to its Doppler linewidth ($`\mathrm{\Delta }v_D`$) and signal-to-noise ratio:
$$\mathrm{\Delta }v=\frac{1}{2}\frac{\mathrm{\Delta }v_D}{SNR}.$$
We can substitute into Equation (A5) to show that
$$\mathrm{\Delta }\theta =\left(\frac{\mathrm{\Delta }v_D}{SNRv_{}}\right)^{\frac{1}{2}}.$$
(A6)
For a linewidth of 1 km s<sup>-1</sup>, a rotation velocity of 1000 km s<sup>-1</sup>, and a signal-to-noise ratio of 10, we find that $`\mathrm{\Delta }\theta \approx 0.6^{\circ }`$, which is better than the error obtained using positional information alone, although we note that to use this method we must assume that the features are very near the midline. Also, this method cannot distinguish between features in front of and behind the midline, it can only find their deviation from the midline. Deviations from Keplerian rotation resulting from the mass of the disk or an inclination warp would bias the results, as well.
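Equation (A6) with the stated values gives this uncertainty directly; a minimal transcription:

```python
def dtheta_velocity(dv_doppler, snr, v_rot):
    # Equation (A6): dTheta = sqrt(dv_D / (SNR * v_perp)), in radians.
    return (dv_doppler / (snr * v_rot)) ** 0.5

dt = dtheta_velocity(1.0, 10.0, 1000.0)   # 0.01 rad, about 0.6 degrees
```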
Finally, the line-of-sight accelerations can be used to measure $`\theta `$, as described in §4.2. The line-of-sight acceleration, $`a`$, of a maser feature an angle $`\theta `$ off the midline is given by
$$a=a_{}\mathrm{sin}\theta \approx a_{}\theta ,$$
(A7)
where $`a_{}`$ is the total acceleration of the feature, and the second relation assumes that $`\theta `$ is small. Hence,
$$\mathrm{\Delta }\theta =\frac{\mathrm{\Delta }a}{a_{}}.$$
(A8)
Replacing $`a_{}`$ with an expression for centripetal acceleration gives
$$\mathrm{\Delta }\theta =\frac{\mathrm{\Delta }a}{\left(\frac{v^2}{R}\right)}.$$
Also, we can use the fact that the uncertainty in the measured acceleration is related to the Doppler width of the line, the signal-to-noise ratio, and the time duration of the experiment (T):
$$\mathrm{\Delta }a\approx \frac{1}{2}\frac{\mathrm{\Delta }v_D}{SNRT}.$$
Thus,
$$\mathrm{\Delta }\theta =\frac{1}{2}\frac{\mathrm{\Delta }v_D}{SNRT\frac{v^2}{R}}=\frac{1}{2}\frac{\mathrm{\Delta }v_D}{v}\frac{1}{SNR\omega T}.$$
where $`\omega `$ is the angular velocity of the maser. Replacing $`\omega `$ with $`2\pi /T_R`$, where $`T_R`$ is the rotational period of the maser,
$$\mathrm{\Delta }\theta =\frac{1}{2}\frac{\mathrm{\Delta }v_D}{v}\frac{1}{SNR\mathrm{\hspace{0.25em}2}\pi }\frac{T_R}{T}$$
(A9)
Given a linewidth of 1 km s<sup>-1</sup>, a rotational velocity of $`1000`$ km s<sup>-1</sup>, a signal-to-noise ratio of 10, a rotation period of 800 years (Miyoshi et al. 1995), and an experiment 2 years long (roughly the time baseline for this experiment), we find $`\mathrm{\Delta }\theta \approx 0.2^{\circ }`$. Based on these geometric considerations, the accelerations are the most precise way to measure the azimuthal positions of the masers.
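For completeness, Equation (A9) can be evaluated with the stated values; a direct transcription:

```python
import math

def dtheta_acceleration(dv_doppler, v_rot, snr, t_rot, t_obs):
    # Equation (A9): dTheta = (1/2)(dv_D/v)(1/(SNR*2*pi))(T_R/T), in radians.
    return 0.5 * (dv_doppler / v_rot) / (snr * 2.0 * math.pi) * (t_rot / t_obs)

dt = dtheta_acceleration(1.0, 1000.0, 10.0, 800.0, 2.0)   # ~0.0032 rad, about 0.2 degrees
```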
# The rise and fall of V4334 Sgr (Sakurai's Object)
## 1 Introduction
After completing their core helium-burning phase, stars less massive than $`10.5M_{\odot }`$ develop in their AGB phase electron-degenerate cores of carbon and oxygen, or oxygen and neon at the massive end, and alternately burn helium or hydrogen in shells. Each quiescent helium-burning phase is preceded by a thermonuclear runaway in the degenerate helium layer. In the aftermath of each of these "thermal pulses", carbon and s-process elements are transported to the stellar surface (Iben & MacDonald 1995). Finally, these stars undergo extensive mass loss ("superwind phase"), and move in the Hertzsprung-Russell diagram to the region of central stars of planetary nebulae (PNN).
The peculiar variable star FG Sge, which is situated in the center of a planetary nebula, has inspired theoreticians to study the post-AGB evolution in more detail. It is now believed that in about 10% of all thermally pulsing stars, the last pulse can occur in a very late stage, when the star has already settled down as a PNN. In such a case, the last pulse is directly observable as a "final He flash", which drives the star from the region of the PNN back to the top of the AGB. This happened to FG Sge in the course of the 20th century. Such a "born-again giant" phase can last decades, centuries or millennia, but the object will return finally to the PNN region. During the final He flash phase, the outer layers of the star undergo extensive nucleosynthesis, including more or less complete processing of the surficial hydrogen. A large fraction of the outer layers is ejected, leading to the formation of a hydrogen-poor, carbon-rich nebulosity in the center of the planetary nebula.
Planetary nebulae with central hydrogen-poor condensations like Abell 30 or Abell 80 (Jacoby 1979, Jacoby & Ford 1983), H-deficient post-AGB stars like Wolf-Rayet type central stars of planetary nebulae, or white dwarfs of the PG1159 type (e.g. Werner et al. 1999) are possible end-products of final He flash objects.
The evolution of a final He flash from the PNN to the giant stage may take decades (as in FG Sge) or only a few years, as concluded in recent years from the present state and a few historical observations of "Nova Aquilae No. 4" of 1919, also known as V605 Aql (Seitter 1985, Clayton & De Marco 1997). Seventy-seven years after the flareup of V605 Aql, a "novalike object in Sagittarius" was soon recognized as another one of these rare events. It offers the first opportunity to study in detail the evolution of a fast final He flash.
Sakurai's object, later named V4334 Sgr, was discovered on 1996 February 20 as a star of $`11^{\mathrm{th}}`$ magnitude by Yukio Sakurai (Nakano, Benetti & Duerbeck 1996). Prediscovery observations by Y. Sakurai and K. Takamizawa showed that it had been at magnitude 12.5 in early 1995, and possibly at magnitude 15.5 in late 1994. Several groups have monitored the optical brightness evolution: a Russian group (Arkhipova & Noskova 1997, Arkhipova et al. 1998, 1999), a US group using automatic photometric telescopes (Margheim, Guinan & McCook 1997, Guinan et al. 1998), and a Chilean-European group, whose results are presented here. Furthermore, scattered observations made with larger telescopes during interesting phases of its evolution will also be discussed.
The UBVRiz observations made in 1996 by the Chilean-European group were published (Duerbeck et al. 1997, hereafter quoted as D97). In the present paper, photometric observations for the years 1997, 1998, and 1999 are reported, covering the complete rise and decline of the brightness of V4334 Sgr in the optical region (Sect. 2). Its properties before the outburst are investigated and constraints are given on its distance (Sect. 3). The light and color curves are constructed and interpreted (Sect. 4). The observed fadings are explained by dust formation, and similarities and differences of these events with fadings of R CrB stars are outlined (Sect. 5). The development of the energy distribution between 0.36 and 15 $`\mu `$m is investigated, and parallels to dust-forming classical novae are shown (Sect. 6). Finally, the time scales of the final He flashes in V4334 Sgr, V605 Aql and FG Sge are compared, and predictions for their future evolution are given (Sect. 7).
## 2 Observations
UBVRi observations of V4334 Sgr were carried out with the 0.91 m Dutch light collector at ESO La Silla, until its shutdown on April 1, 1999. Filters and comparison stars were the same as those used in D97 (the filters $`R`$ and $`i`$ refer to Cousins $`R_C`$ and Gunn $`i_G`$, respectively). The object was also observed until 1998 October with the 0.2 m $`f/1.5`$ Schmidt telescope of W. Liller in Re$`\stackrel{~}{\mathrm{n}}`$aca, Vi$`\stackrel{~}{\mathrm{n}}`$a del Mar, Chile. Additional observations were obtained with the 3.5 m Telescopio Nazionale Galileo (TNG)<sup>1</sup><sup>1</sup>1The Italian Telescopio Nazionale Galileo (TNG) is operated on the island of La Palma by the Centro Galileo Galilei of the CNAA (Consorzio Nazionale per l'Astronomia e l'Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias in July, 1999.
The early observations with the Dutch telescope were analyzed using ROMAFOT aperture photometry. The continuing decline in brightness, especially at short wavelengths, made it necessary to carry out DAOPHOT profile-fitting photometry of V4334 Sgr from 1997 September 15 onward. Because of the increase of exposure times, comparison star (1) of D97 became too bright to be used as a local standard, and the average magnitude of stars (2)–(6) (taken from D97) was used as the reference magnitude in all filters. The Re$`\stackrel{~}{\mathrm{n}}`$aca observations are based on CCD aperture photometry. Since they were always taken relative to star (1) through a non-standard $`V`$ filter which extends towards the red, a transformation was established from simultaneous Dutch ($`V,i`$) and Re$`\stackrel{~}{\mathrm{n}}`$aca ($`\mathrm{\Delta }V^{}`$) observations, which permits converting the Re$`\stackrel{~}{\mathrm{n}}`$aca observations into the standard $`V`$ system:
$$V=10.65+0.93\mathrm{\Delta }V^{}+0.047\left(V-i\right),$$
where $`\mathrm{\Delta }V^{}`$ is the magnitude difference relative to the bright comparison star; the color $`V-i`$ is taken from the "Dutch" observations, taken near the time of the Re$`\stackrel{~}{\mathrm{n}}`$aca observations. The UBVRi data of 1997–1999 are listed in Table 1.
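The transformation above is linear and easy to apply in software; the sketch below simply encodes the published coefficients (the input values in the example are hypothetical):

```python
def renaca_to_standard_V(dV_prime, V_minus_i):
    # Sect. 2 transformation: V = 10.65 + 0.93*dV' + 0.047*(V - i).
    return 10.65 + 0.93 * dV_prime + 0.047 * V_minus_i

renaca_to_standard_V(0.50, 1.20)   # hypothetical differential magnitude and color
```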
## 3 V4334 Sgr before its final He flash
V4334 Sgr before its final He flash was a faint blue star on the ESO/SERC Sky Atlas (Duerbeck & Benetti 1996, hereafter quoted as D96). In this section, the apparent magnitude is determined, the interstellar reddening is estimated, and upper and lower limits of its distance are given. This information is important for the derivation of the luminosity during the final He flash phase.
Deep images with the TNG telescope in 1999 July allow us to establish a preliminary faint-magnitude scale in the vicinity of V4334 Sgr (Fig. 1). A northern and a southern visual companion, each $`2\stackrel{}{\mathrm{.}}5`$ from V4334 Sgr, are seen on the Sky Atlas images (see plate 1 of D96). The northern companion has $`V=20.85,R=19.93`$, the southern one $`V=21.12,R=20.30`$. Due to the lack of $`B`$-magnitudes in the TNG observations, pseudo-$`B`$-magnitudes were assigned to some stars with the help of the available $`U`$ and $`V`$ magnitudes. Using this preliminary scale, the visibility of some fainter field stars was checked on the Sky Atlas plates, and the pre-outburst photographic magnitudes of V4334 Sgr were estimated to be $`m_B\approx 21^\mathrm{m}`$, $`m_R>21\stackrel{\mathrm{m}}{\mathrm{.}}5`$.
The interstellar reddening of V4334 Sgr is still poorly known. While $`E_{B-V}=0.54`$ was estimated by D96, the value $`0.71\pm 0.09`$ was derived by Pollacco (1999) from the observed H$`\alpha `$/H$`\beta `$ line ratio of the surrounding planetary nebula for case B. Another value, 1.15, was suggested by Eyres et al. (1998b). Kimeswenger & Kerber (1998) made a detailed study of interstellar reddening in the field around V4334 Sgr and found $`E_{B-V}=0.90\pm 0.09`$ for 18 stars with distances $`d2.0`$ kpc. Since we have reasons to believe that V4334 Sgr has a distance $`d2`$ kpc (see Sect. 6), we adopted $`E_{B-V}=0.8`$, which is compatible with both Pollacco's and Kimeswenger & Kerber's results.
The brightness of the pre-outburst magnitude of V4334 Sgr was compared with other PNNs. Absolute $`B`$ magnitudes of central stars of planetary nebulae were derived using information on trigonometric parallaxes given in Jacoby, De Marco & Sawyer (1998) and Acker et al. (1998). Apparent magnitudes were taken from the catalogue of Acker et al. (1992), the interstellar extinction was calculated with the model of Hakkila et al. (1997). Our sample is restricted to central stars in roundish bright and faint nebulae, while central stars inside irregular nebulae and stars which are obviously blended with field stars were not considered. The derived absolute magnitudes are listed in Table 2. Using the average value and its $`1\sigma `$ deviation, $`M_B=6.4\pm 1.2`$, and taking $`m_B=21^\mathrm{m}`$ for the pre-outburst magnitude of V4334 Sgr, its distance is derived as $`1800_{-800}^{+1400}`$ pc, assuming a reddening $`E_{B-V}=0.8`$. The value $`E_{B-V}=0.7`$ would increase the distance estimates by 20%. Jacoby et al. (1998) concluded that the distance remains poorly determined, with possible values lying in the range 1 to 4 kpc. This is in agreement with the present result. This issue will be taken up again in Sect. 6.
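The distance estimate follows from the standard distance modulus with a B-band extinction term. The sketch below assumes $`A_B=4.1E_{B-V}`$, a conventional extinction coefficient that is not stated explicitly in the text, so the result should be read as a consistency check rather than the authors' exact computation.

```python
def distance_pc(m_B, M_B, ebv, R_B=4.1):
    # Distance modulus: m - M - A_B = 5*log10(d) - 5, with A_B = R_B * E(B-V).
    # R_B = 4.1 is an assumed standard extinction coefficient.
    mu = m_B - M_B - R_B * ebv
    return 10.0 ** ((mu + 5.0) / 5.0)

distance_pc(21.0, 6.4, 0.8)   # ~1840 pc, consistent with the quoted ~1800 pc
```

Lowering the reddening from 0.8 to 0.7 increases the inferred distance by about 20%, matching the statement in the text.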
## 4 The optical outburst behavior of V4334 Sgr
### 4.1 The light curve
Figure 2 shows the complete $`V`$ light curve of V4334 Sgr. It can be divided into four characteristic stages, conveniently separated by the seasonal gaps when the star was too close to the sun.
The first stage is the "rise to maximum" of 1994–1995, which is covered only by Takamizawa's 12 prediscovery observations (Takamizawa 1997; see D97 for a discussion of the prediscovery light curve); note that the 1994 point is possibly only an upper limit. The second one is the "maximum stage" of 1996 and 1997, when quasiperiodic brightness fluctuations were superimposed on an almost constant $`V`$ magnitude of $`11^\mathrm{m}`$. The third one is the "dust onset stage" of early and mid-1998, when the object dropped in brightness by $`1^\mathrm{m}`$, and continued to show quasiperiodic fluctuations. Finally, the fourth one is the "massive dust stage" of late 1998 and 1999, when the object suffered dramatic declines of 3 to $`11^\mathrm{m}`$ in visible light, and when strong, erratic brightness fluctuations, unfortunately poorly documented, were present.
As already shown in D97, the first and second stages can be explained by an object with a slowly growing photosphere or pseudo-photosphere, radiating almost at constant luminosity. In contrast to a stable photosphere, a pseudo-photosphere is formed in an optically thick wind, which is driven by radiation pressure from an object radiating near Eddington luminosity (see, e.g. Bath & Harkness 1989). Such a behavior is found in classical novae at early outburst stages. Dynamical instabilities causing mass loss and dust formation have also been suggested for R CrB stars which radiate close to the Eddington limit (Asplund 1998). The photosphere of V4334 Sgr may be a true photosphere or a pseudo-photosphere, an attempt to decide between both cases will be made later (Sect. 6). The expansion and cooling of the photosphere is documented in the UBVRi light curves (Fig. 3), which show a shift of the radiation maximum towards longer wavelengths at later times.
Quasi-periodic or cyclic fluctuations are superimposed on this general photometric evolution. These variations can most easily be traced in the $`V`$ light curve, because the temporal coverage is highest, and because the effects of the temperature decline are least noticeable in V. Figure 4, which also includes observations of the Russian group (see Sect. 1), shows the $`V`$ fluctuations in detail. For each of the years 1996, 1997, and 1998, the linear long-term trend in brightness was removed. In 1996, oscillations of short duration (26, 22 and 16 days and some shorter ones) are superimposed on a 70 day oscillation. In 1997, a clear preference for a single oscillation with an amplitude of about $`0.18\pm 0.03`$ in $`V`$ and a period of 56 days, prevailing over more than four cycles, is seen. Secondary features are less important than in the year before (see also D97 and Duerbeck et al. 1998).
The 1998 observations already belong to the third stage, which is influenced by dust. It is difficult to decide whether the fluctuations seen in early and mid-1998 are still caused by a pulsation or by dust obscuration events, especially since the multicolor coverage is poor. A characteristic time scale of $`74\pm 8`$ days is seen, but the periodicity is poorly defined, and masked by the strong brightness decline that occurred in the second half of 1998. The amplitude in $`V`$ has increased to $`0\stackrel{\mathrm{m}}{\mathrm{.}}75`$.
In the fourth stage of late 1998 and of 1999, the object has faded dramatically. The poor temporal coverage, as well as the strong brightness fluctuations do not permit to study any underlying periodicities.
Summing up, the light curve between 1996 and mid-1998 can be described as a slow long-term trend with superimposed quasiperiodic fluctuations of increasing cycle length and amplitude. Arkhipova et al. (1999) claim that the variations can be described with a single period that increases linearly with time. Their ephemeris
$$\mathrm{J}.\mathrm{D}.\left(\mathrm{min}\right)=2450133.1+6.91048E+0.871788E^2$$
was used to calculate the moments of minimum light, which are shown as vertical bars for the years 1996–1998 in Fig. 4. While a trend towards longer periods and larger amplitudes at later times is clearly present, the representation of minima by the above formula is not satisfactory, and the occurrence of both well-expressed and marginal maxima and minima must be explained by the superposition of several pulsation modes, as was outlined in D97.
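The quadratic ephemeris of Arkhipova et al. (1999) is simple to evaluate for any cycle count $`E`$; a minimal transcription:

```python
def jd_min(E):
    # Arkhipova et al. (1999) ephemeris for the Julian Date of minimum
    # light at cycle count E.
    return 2450133.1 + 6.91048 * E + 0.871788 * E**2

jd_min(0)   # 2450133.1, the epoch of the ephemeris
```

The quadratic term makes the effective period grow with cycle count, which is the linear period increase claimed by those authors.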
### 4.2 V4334 Sgr in the two-color diagrams
The growth of the photosphere and the dust formation are studied in the multi-color light curve (Fig. 3) and in two-color diagrams. The development of V4334 Sgr in the ($`U-B`$ vs. $`B-V`$), ($`V-R`$ vs. $`B-V`$) and ($`V-i`$ vs. $`B-V`$) two-color diagrams is shown in Figs. 5 and 6. The colors are dereddened for $`E_{B-V}=0.8`$. The observed averaged color indices for time intervals of days up to several weeks, supplemented by observations of other authors at important phases, are given in Table 3.
The behavior of V4334 Sgr in the maximum stage is as follows. In 1996, the star was continuously cooling. In Fig. 5, it is moving along the two-color track of hydrogen-deficient carbon stars of hot to intermediate temperature, as calculated by Asplund (1997). This phase was interpreted by D97 as an object radiating at almost constant luminosity, while its photosphere is slowly moving outward with a velocity of $`1\mathrm{km}\mathrm{s}^{-1}`$. In 1997, the star had moved away from the two-color track, and kept similar $`U-B`$, $`B-V`$, $`V-R`$, $`V-i`$ color indices for about 150 days, indicating that the photosphere had become stationary. In the case of a pseudo-photosphere, whose location is determined by the actual mass-loss rate, this means that the mass-loss rate had stabilized.
During the dust onset and the massive dust stages, the colors show a reddening of increasing strength while the visible light of V4334 Sgr declined. The reddening increased noticeably in the deep decline of early October 1998, which is documented by observations of Jacoby & De Marco (1998). Since no $`U`$ magnitudes were reported, this episode is missing from Fig. 5 and is only illustrated in the other two-color diagrams (Fig. 6) and in the $`(V,B-V)`$ color-magnitude diagram (Fig. 7). In 1999, the star had become so faint in $`U`$ that no observations were obtained. For 1999 July, an upper limit of $`U=23\stackrel{\mathrm{m}}{\mathrm{.}}5`$ is derived, but the previously recorded color indices make it likely that the star was several magnitudes fainter. Our $`V`$-observations show fluctuations around $`19.5`$ to $`20^\mathrm{m}`$ in 1999 March, and a minimum brightness of $`22\stackrel{\mathrm{m}}{\mathrm{.}}1`$ in 1999 July. Note that the very last $`V-R`$ and $`V-i`$ indices of 1999, which are not included in the Figures because of the lack of $`B`$ magnitudes, indicate a decrease in reddening in spite of the faintness of the star (see also Sect. 5).
## 5 Dust forming events as seen in the optical region
Theoretical, spectroscopic and photometric results have pointed out a possible relation between R CrB stars and final He flash objects like FG Sge (Gonzalez et al. 1998, Jurcsik & Montesinos 1999) and V4334 Sgr (Asplund et al. 1997, Arkhipova et al. 1999). Dust-forming events, similar to those seen in R CrB stars, have been suspected in the light curve of the rapidly evolving final He flash object V605 Aql (Harrison 1996), and have been observed in recent years in the slowly evolving final He flash object FG Sge. Already in the first papers on V4334 Sgr, a dust-forming phase was predicted (D96, Duerbeck & Pollacco 1996). The declines observed in V4334 Sgr (and first announced by Liller et al. 1998a,b) can readily be compared with those observed in the other well-observed final He flash object FG Sge and in R CrB-type variables. Fading events of R CrB and V854 Cen, observed by Cottrell, Lawson & Buchhorn (1990), Lawson & Cottrell (1989) and Lawson et al. (1992) using multicolor photometry, were useful in comparing photometric and spectroscopic characteristics of He flash objects and R CrB variables.
### 5.1 Pulsations and declines
R CrB and related hydrogen-deficient stars show pulsations at maximum light. Periods are $`40`$ – $`100`$ days, with amplitudes of a few $`0\stackrel{\mathrm{m}}{\mathrm{.}}1`$. A tendency towards longer periods in cooler objects is seen (Lawson et al. 1990). In the R CrB star V854 Cen, the onset of a brightness decline usually occurs near maximum light of the pulsation cycle (Lawson et al. 1992), while in RY Sgr, it occurs at minimum light (Menzies & Feast 1997).
The slowly evolving final He flash object FG Sge has shown pulsation periods ranging from 5 to 138 days, with a definitive trend to longer periods at later times, when the object had cooled (van Genderen & Gautschy 1995). After the first steep decline in 1992, FG Sge showed a nearly constant pulsation period of 115 days, and R CrB type fadings often occurred near maximum light of the pulsation (Gonzalez et al. 1998).
V4334 Sgr showed variations with cycles of $`10`$ – $`74`$ days with a tendency towards better defined, longer periods at later (cooler) stages, and amplitudes increasing from $`0\stackrel{\mathrm{m}}{\mathrm{.}}1`$ to $`0\stackrel{\mathrm{m}}{\mathrm{.}}7`$. Apart from the rapid change of behavior, the pulsations of V4334 Sgr are comparable to those of R CrB stars, as well as to the pulsations of FG Sge. The three observed fading events of V4334 Sgr are separated by intervals of $`\sim 200`$ days, and do not seem to be related to the period of stellar pulsation. The well-documented second fading of 1998 September began around a minimum phase of pulsation.
### 5.2 Color evolution during declines
R CrB stars show both "blue" and "red" declines (Cottrell, Lawson & Buchhorn 1990). The blue events presumably occur when the obscuring cloud is smaller than the photosphere, as seen from the observer. In a red decline of R CrB, the star moved along the line $`\left(\mathrm{U-B}\right)/\left(\mathrm{B-V}\right)\approx 1`$ in the two-color diagram. A well-observed red decline of V854 Cen had $`\left(\mathrm{U-B}\right)/\left(\mathrm{B-V}\right)\approx 0.6`$, $`\left(\mathrm{V-R}\right)/\left(\mathrm{B-V}\right)\approx 0.8`$, $`\left(V-I\right)/\left(\mathrm{B-V}\right)\approx 1.6`$.
The slowly evolving final He flash object FG Sge has only shown "blue" declines (Jurcsik & Montesinos 1999). Thus, dust formation in FG Sge has been patchy until now.
The color behavior of V4334 Sgr is complex, because the growth of the photosphere, the dust formation, and even the interstellar reddening cause similar effects in two-color diagrams. While interstellar reddening just produces a constant shift in the diagram, the first two effects can most easily be disentangled with the aid of the $`V`$ vs. $`\mathrm{B-V}`$ diagram (Fig. 7). It shows that the initial dust formation episode in the line of sight did not begin until early 1998.
V4334 Sgr shows a "composite" of several red declines, indicating that in all cases the whole visible photosphere is obscured (Fig. 2). The first decline occurred at or before J.D. 2450853 (1998 February), and the star did not regain its former brightness; the second occurred around J.D. 2451045 (1998 September), and the star partly recovered; the third one occurred at or before J.D. 2451234 (1999 February), and after some fluctuations at $`\sim 20^\mathrm{m}`$, another decline followed, whose color characteristics are only poorly known. The value of $`\left(\mathrm{U-B}\right)/\left(\mathrm{B-V}\right)`$ appears to be always larger than in the case of R CrB stars. Following the onset of dust formation in 1998 February, a sequence of three normal points indicates the beginning of the fading of 1998 September (this is marked "early decline" in Fig. 5). After the deep decline in 1998 October, and the recovery in November, which was observed by Jacoby & De Marco (1998), another $`\mathrm{U-B}`$ color index is available (marked "recovery after deep minimum"). The overall slope of the 1998 $`\left(\mathrm{U-B}\right)/\left(\mathrm{B-V}\right)`$ data is $`2.4`$. This slope is substantially steeper than the slopes $`1`$ and $`0.6`$ observed in R CrB and V854 Cen. Whether this is caused by different dust properties or by a different amount of "chromospheric emission" above the dust in these objects cannot be decided on the basis of the available data.
The other color index ratios, $`\left(\mathrm{V-R}\right)/\left(\mathrm{B-V}\right)=1.0`$, $`\left(V-i\right)/\left(\mathrm{B-V}\right)\approx 1.8`$, derived for the interval 1998 – 1999 when dust obscuration was obviously present, are surprisingly similar to those observed in V854 Cen (and, cum grano salis, to the interstellar reddening lines), $`\left(\mathrm{V-R}\right)/\left(\mathrm{B-V}\right)=0.8`$, $`\left(V-I\right)/\left(\mathrm{B-V}\right)=1.6`$. Both V854 Cen and V4334 Sgr show a tendency to yield steeper slopes at very red colors and very faint magnitudes.
Even during a "red" decline, all color indices of V854 Cen turn to smaller values at very faint magnitudes. The fragmentary data of V4334 Sgr indicate that a similar effect occurred in 1999 July, when the $`V`$ magnitude reached its observed minimum near $`22^\mathrm{m}`$, and the $`\mathrm{V-R}`$ and $`V-i`$ indices were noticeably smaller than three months before. Whether this is also the signature of an imminent brightness recovery, as is found in R CrB stars, cannot be said because of the lack of data.
### 5.3 Depth and speed of declines
The deepest decline in R CrB or related stars ever observed in the visible region was $`8^\mathrm{m}`$ (see the light curve of R CrB by Mattei, Waagen & Foster 1991). Such a level may be reached during a single fading event or by a superposition of several fading events. In the latter case, the star does not become fainter, but may simply remain for a longer time at minimum level. Concerning the speed of declines, a "red" decline of R CrB, observed by Fernie, Percy & Richer (1986), showed a maximum rate of decline of $`0\stackrel{\mathrm{m}}{\mathrm{.}}13\mathrm{day}^{-1}`$ in its later stages; a rate of $`0\stackrel{\mathrm{m}}{\mathrm{.}}27\mathrm{day}^{-1}`$ was observed by Cottrell et al. (1990) during a "blue" decline. During the deep red decline of V854 Cen in 1991, with a superposition of three fading events, gradients up to $`0\stackrel{\mathrm{m}}{\mathrm{.}}7\mathrm{day}^{-1}`$ were observed (Lawson et al. 1992).
During 1998 – 1999, V4334 Sgr declined by $`11^\mathrm{m}`$ in $`V`$, i.e. the obscuration was at least an order of magnitude more efficient than ever observed for an R CrB star. This decline consists of several superimposed fading events, as described in Sect. 5.2. The object partially recovered from the first two events; the sparse data of 1999 indicate that V4334 Sgr is still obscured by the third fading event (it is also possible that a fourth fading event took place). The first decline was not covered by observations; the decline rates of the second and third declines were $`0\stackrel{\mathrm{m}}{\mathrm{.}}05\mathrm{day}^{-1}`$ and $`0\stackrel{\mathrm{m}}{\mathrm{.}}14\mathrm{day}^{-1}`$. The speed and form of these declines resemble those of slow "red" declines of R CrB variables.
### 5.4 Spectroscopic features of dust shells
Dust that forms near an R CrB star experiences a strong radiation force, moves outward and drags gas with it, which is collisionally excited. R CrB, while recovering from a decline and at subsequent maximum light, showed a P Cyg line of He I 10830 extending to $`240\mathrm{km}\mathrm{s}^{-1}`$ (Querci & Querci 1978).
In spectra of V4334 Sgr, a blueshifted absorption line of He I 10830 with an expansion velocity of $`550\mathrm{km}\mathrm{s}^{-1}`$ was observed in 1998 March by Eyres et al. (1999), i.e. shortly after the onset of dust formation in the line of sight. No line had been present in spectra taken in 1997 July. From 1998 August onward, the line shape changed to a P Cygni profile (Eyres et al. 1999, Tyne et al. 1999). During the deep minimum of 1999, the He I line was observed in emission only, and extended less to the red than the P Cyg line of the previous year. It showed a blueshift of about $`500\mathrm{km}\mathrm{s}^{-1}`$ relative to the radial velocity of V4334 Sgr (limits $`-700`$ and $`+130\mathrm{km}\mathrm{s}^{-1}`$). This indicates that (a) collisionally excited gas has existed since early 1998, in agreement with the photometrically observed onset of dust formation; (b) while the stellar background faded, the line kept its strength and appeared as a P Cyg line in a semi-transparent shell; (c) after the massive dust formation, emission originating in the region moving away from the observer is almost completely obscured by dust, and only the emission originating in the hemisphere facing the observer is seen.
### 5.5 The infrared behavior
The infrared behavior of R CrB stars correlates poorly with dust forming events in the line of sight, which mainly influence the flux in the optical region. Since the dust cloud is small, it converts only a small fraction of the total light of the star into infrared radiation. The infrared output of an R CrB star is dominated by radiation from the overall circumstellar dust shell, which is heated by the star (and also shows the pulsational light variations observed in the star). On the other hand, the dust flux does not change significantly when the star goes into an obscuration minimum. These findings are the best evidence for the patchiness of dust formation in the atmospheres of R CrB stars (Forrest, Gillett & Stein 1972, Feast et al. 1997).
The descent to minimum in V4334 Sgr was accompanied by a complete change in its energy distribution, including a dramatic increase at infrared wavelengths (especially in the poorly observed range $`\lambda >5\mu \mathrm{m}`$). A more detailed study of the steady growth of the infrared excess is given in Sect. 6.
### 5.6 V4334 Sgr and R CrB stars: concluding remarks
Summing up, the fading events of V4334 Sgr in the years 1998 – 1999 show striking similarities with the "red" declines of R CrB stars. While there are differences between individual stars in the $`\mathrm{U-B}/\mathrm{B-V}`$ ratio, all objects show a similar behavior at longer wavelengths.
In contrast to R CrB stars, the unusually deep, long-lasting minimum, the evolution of the He I 10830 line structure, and the connection between optical fading and infrared brightening indicate the formation of a complete dust shell around V4334 Sgr. The behavior of V4334 Sgr in 1998 and later should not necessarily be described as the "R CrB phase", as Arkhipova et al. (1999) have done: R CrB stars form no complete dust shells. Its behavior also shows striking similarities with dust-forming classical novae, as will be shown in Sect. 6.2.
## 6 Energy distribution and luminosity
The UBVRi photometry presented in this study can be combined with infrared photometry into a study of the overall energy distribution, the character of the infrared excess, and the time variation of the luminosity. Infrared photometry is available for each season: Feast & Whitelock (1999) for 20 well-distributed JHKL data sets, obtained between early 1996 and late 1999; Kamath & Ashok (1999) for 1996 and 1997 in JHK; Fouqué (in D96) for 1996 April in IJK; Arkhipova et al. (1998) for 1996 and 1997 in JHKLM; Kimeswenger et al. (1997) for 1997 March in IJK; Kerber et al. (1999) for 1997 and 1998 at 7 wavelengths between 4.5 and 12.0 $`\mu `$m, observed with ISOCAM of ISO; Lynch et al. (1998) for 1998 March and May in $`L^{\prime}`$, $`M^{\prime}`$, $`N^{\prime}`$; Kaeufl & Stecklum (1998) for 1998 June in $`N`$; Jacoby (1999) for 1999 April at 1.083 and 2.230 $`\mu `$m; Tyne et al. (1999) for 1999 April and May in JHKLM; Hinkle & Joyce (1999) for 1999 September in JHKLM.
A selection from these data was combined with quasi-simultaneous UBVRi data to construct twenty-one energy distributions of V4334 Sgr: seven for 1996, seven for 1997, three for 1998, and four for 1999. The magnitudes were dereddened for the value $`E_{B-V}=0.8`$, and were converted into monochromatic irradiances $`E_\lambda \left(\mathrm{in}\mathrm{W}\mathrm{m}^{-2}\mu \mathrm{m}^{-1}\right)`$. They are listed in Table 4. Integration over the irradiances $`E_\lambda `$ yielded total irradiances $`\left(\mathrm{in}\mathrm{W}\mathrm{m}^{-2}\right)`$, which are also given. Selected results are shown in Fig. 8.
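The conversion and integration described above can be sketched in a few lines. The Johnson-type zero points and the sample magnitudes below are illustrative assumptions, not the calibration actually used for Table 4:

```python
# Sketch of the Sect. 6 procedure: dereddened magnitudes -> monochromatic
# irradiances E_lambda, then trapezoidal integration to a total irradiance.
# The zero points and magnitudes are illustrative only.

def mag_to_irradiance(m, e0):
    """Monochromatic irradiance (W m^-2 um^-1) for magnitude m, given the
    zero-point irradiance e0 of an m = 0 star in that band."""
    return e0 * 10.0 ** (-0.4 * m)

def total_irradiance(wavelengths_um, e_lambda):
    """Trapezoidal integral of E_lambda over wavelength, in W m^-2."""
    total = 0.0
    for i in range(len(wavelengths_um) - 1):
        total += 0.5 * (e_lambda[i] + e_lambda[i + 1]) * \
                 (wavelengths_um[i + 1] - wavelengths_um[i])
    return total

# Approximate Johnson-system zero points (W m^-2 um^-1), assumed values:
ZERO_POINTS = {"B": 6.3e-8, "V": 3.6e-8, "R": 2.2e-8, "I": 1.1e-8}
```

A set of dereddened magnitudes per epoch then maps directly to one row of a table like Table 4, and the integral gives the corresponding total irradiance.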
### 6.1 The evolution of the infrared excess
Figure 8 shows that an excess of radiation at wavelengths $`>1\mathrm{\mu m}`$ already exists during the earliest observations of 1996. This excess increases in strength in 1997, and starts to dominate the spectrum in 1998. The stellar continuum peaks at $`B`$ in 1996, at $`V`$ in 1997, between $`V`$ and $`R`$ in 1998, and possibly at $`R`$ in 1999. From 1998 onwards, the optical radiation of the star is extinguished and re-radiated at infrared wavelengths. In 1999, hardly a trace of the stellar contribution is visible in the energy distribution.
A preliminary discussion of the infrared energy distributions is given by Kipper (1999). Several test runs using Dusty (Ivezić, Nenkova & Elitzur 1997) with parameters similar to those chosen by Kipper yielded non-optimal fits, which can possibly be explained by the different adopted value of the interstellar extinction and the different choice of the stellar atmosphere. A detailed analysis of the spectral energy distribution will be the subject of a future investigation.
Figure 8 permits estimates of the properties of the infrared excess, the character of the dust, and the size of the dust forming region; some qualitative estimates are given below. The radiation maximum of the infrared excess shifts towards longer wavelengths at later times. If the excess is approximated by a blackbody, its temperatures are $`\sim 3500`$, $`3000`$ – $`2000`$, $`830`$ and $`725`$ K in 1996, 1997, 1998 and 1999, respectively. The temperatures of 1996 and 1997 are too high for the formation of carbon dust.
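As a simple consistency check on these blackbody temperatures, Wien's displacement law locates the radiation maximum of the excess; the sketch below (with the displacement constant taken as 2898 µm K) shows the peak moving from the near- into the thermal infrared as the shell cools:

```python
def wien_peak_um(t_kelvin):
    """Wavelength of the blackbody radiation maximum in microns
    (Wien's displacement law, b = 2898 um K)."""
    return 2898.0 / t_kelvin

# Peak wavelength of the excess for the quoted temperatures:
for year, t in [(1996, 3500.0), (1998, 830.0), (1999, 725.0)]:
    print(year, round(wien_peak_um(t), 2))  # ~0.83, ~3.49, ~4.0 um
```

The 725 K blackbody of 1999 peaks near 4 µm, consistent with the remark that most of the flux then emerges in the poorly observed range $`\lambda >5\mu \mathrm{m}`$.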
The infrared excess of 1996 may be explained by free-free emission in the wind or expanding atmosphere of V4334 Sgr, which at that time had a surface temperature of $`7500`$ K. In 1997, the situation is not as clear. Woitke, Goeres & Sedlmayr (1996) have shown that carbon nucleation can take place in shocks that occur in pulsating R CrB stars with effective surface temperatures of 7000 K. Since V4334 Sgr provides similar conditions, patchy dust formation may have been possible in 1997. From 1998 onward, observational evidence of dust formation is beyond doubt, as was shown in Sect. 5.2.
The angular radius of the infrared emission region, calculated from $`\theta =2\times 10^{12}\left(\lambda F_\lambda \right)_{\mathrm{max}}^{1/2}T^{-2}`$ (Gallagher & Ney 1976) with $`\theta `$ in milli-arcsec and $`\left(\lambda F_\lambda \right)_{\mathrm{max}}`$ in $`\mathrm{W}\mathrm{m}^{-2}`$, is 0.3, 0.8, 9.4 and 11.4 milli-arcsec for the years 1996 – 1999, respectively. Figure 9 indicates that a rapid growth of the dust shell occurred between late 1997 and early 1998.
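With the temperature exponent written explicitly, the relation can be verified numerically. The peak irradiance used below is not tabulated in the text; it is the value implied by the quoted 1998 numbers (θ ≈ 9.4 milli-arcsec at T ≈ 830 K) and serves only as an illustration:

```python
def angular_size_mas(lam_f_lam_max, t_kelvin):
    """Gallagher & Ney (1976) blackbody relation:
    theta [mas] = 2e12 * (lambda F_lambda)_max^(1/2) * T^-2,
    with (lambda F_lambda)_max in W m^-2 and T in K."""
    return 2.0e12 * lam_f_lam_max ** 0.5 * t_kelvin ** -2

# Peak irradiance implied (not tabulated) by the quoted 1998 values:
print(angular_size_mas(1.05e-11, 830.0))  # ~9.4 mas
```

The $`T^{-2}`$ dependence means that, at fixed peak irradiance, a shell twice as hot would appear four times smaller, which is why the cooling of the excess in 1998 translates into such a large inferred angular size.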
Assuming a distance of 2 kpc for the object, the radius of the dust shell was about 19 AU in 1998, and 23 AU in 1999. If the ejection of material started in early 1995, at the time of the beginning of the final He flash, and if dust had condensed everywhere in the shell by mid-1998, a constant expansion velocity of $`25\mathrm{km}\mathrm{s}^{-1}`$ is derived for the dust-forming material. Between mid-1998 and mid-1999, the shell grew with a linear velocity of $`20\mathrm{km}\mathrm{s}^{-1}`$, which is in good agreement with the previous value. This velocity is much larger than the rate of growth of the photosphere (about $`1\mathrm{km}\mathrm{s}^{-1}`$), which was derived from the observations of 1996 (D97). Thus we may take it as evidence for an Eddington-driven outflow above the photosphere, which had been active since the beginning of the final helium flash, and had cooled to temperatures suitable for dust formation around the end of 1997.
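The two velocities follow from straightforward kinematics; a minimal check with rounded conversion constants (1 AU ≈ 1.496 × 10⁸ km, 1 yr ≈ 3.156 × 10⁷ s):

```python
AU_KM = 1.496e8  # kilometres per astronomical unit
YR_S = 3.156e7   # seconds per year

def mean_velocity_km_s(distance_au, elapsed_yr):
    """Mean expansion velocity for a shell covering the given distance
    in the given time."""
    return distance_au * AU_KM / (elapsed_yr * YR_S)

# 19 AU reached ~3.5 yr after the flash began (early 1995 -> mid-1998):
print(round(mean_velocity_km_s(19.0, 3.5)))  # ~26 km/s, i.e. ~25
# Growth from 19 AU to 23 AU between mid-1998 and mid-1999:
print(round(mean_velocity_km_s(4.0, 1.0)))   # ~19 km/s, i.e. ~20
```

Both estimates agree to within the precision of the angular sizes, supporting a roughly constant outflow speed.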
On the other hand, we can assume that the material was ejected at a later phase of the outburst, when the outer layers had already been enriched with carbon, and that the condensing dust experienced an acceleration due to radiation pressure. Then the resulting expansion velocity is higher, and may be similar to the velocity of the central condensations of the remnant of V605 Aql ($`100\mathrm{km}\mathrm{s}^{-1}`$ with a FWHM of $`225\mathrm{km}\mathrm{s}^{-1}`$, Pollacco et al. 1992). More observations are necessary to decide between the two scenarios.
### 6.2 The infrared evolution of V4334 Sgr and of classical novae
Final He flash objects and R CrB type stars have similar properties, as described in Sect. 5. Similarities, however, also exist between final He flash objects and dust-forming classical novae.
The infrared behavior of dust-forming classical novae consists of four phases: (a) the initial pseudo-photosphere blackbody, (b) a free-free phase, which leads to an infrared excess between 1 – 6 $`\mu `$m, (c) a rapid growth of dust, leading to a "red decline" in the optical, an increase of infrared flux and of angular diameter, and a slight drop in dust temperature ($`1200`$ – $`800\mathrm{K}`$), and (d) an exponential drop of the infrared flux and of the angular diameter, when dust is being dispersed and/or destroyed by radiation from the central source, accompanied by a recovery of UV and optical radiation (Ney & Hatfield 1978).
In 1996 and 1997, the derived blackbody temperature for the infrared excess of V4334 Sgr is too high for dust formation; it was already pointed out that the infrared excess of 1996 (and possibly 1997) is caused by free-free emission in the outflowing material that had passed the pseudo-photosphere and formed an extended atmosphere. Claims of the presence of dust with temperatures of 1500, 1800 and 680 K, in 1997 February, March and April (Kerber et al. 1999, Kimeswenger et al. 1997, Eyres et al. 1998a) are questionable and discrepant; part of the infrared excess may be carbon nucleation products, part of it may still be explained by free-free continuum emission.
At the end of 1997 or at the beginning of 1998, the temperature of the extended atmosphere had dropped below 2000 K, permitting the formation of carbon dust. This onset of dust formation led to (1) a rapid growth of the angular size of the infrared emitting region (Fig. 9), (2) an increase of the infrared flux, and (3) several dust forming events in the line of sight, which caused first a gentle, then a dramatic drop in visible light output (Sect. 5.2).
Comparing the evolution of V4334 Sgr with that of a dust-forming classical nova, e.g. NQ Vul (Ney & Hatfield 1978), yields the following: Phase (a) likely occurred in 1995 but was not observed, phase (b) occurred in 1996 – 1997, and phase (c) in 1998 – 1999. The onset of phase (d), the destruction or dispersion of the dust, may still be far in the future: 80 years after outburst, the central object of V605 Aql is still deeply embedded in circumstellar dust, and we can expect a similar behavior for V4334 Sgr.
The "speed" of a classical nova like NQ Vul in evolving through phases (a) to (c) is about 20 times higher than that of the final He flash object V4334 Sgr. One more noteworthy difference between a nova and a final He flash object exists: the spectrum emerging from the pseudo-photosphere of the nova near maximum light clearly reveals the speed of the outflowing material, and the expansion rate of the infrared dust shell (in milli-arcsecond $`\mathrm{day}^{-1}`$) can be used to determine the shell expansion parallax. High resolution spectra of V4334 Sgr described by D97, Kipper & Klochkova (1997) and Jacoby et al. (1998) show that the radial velocity of the photosphere is similar to that of the planetary nebula. In an echelle spectrum, however, taken 1996 April 23 by G. Wallerstein, the deep H$`\alpha `$ absorption line shows an underlying shallow, broad absorption trough ranging from $`-225`$ to $`+170\mathrm{km}\mathrm{s}^{-1}`$ relative to the star. This may be taken as evidence for optically thin, turbulent material which shows an average outflow velocity of $`25\mathrm{km}\mathrm{s}^{-1}`$, with a wide spread in velocities. A careful study of spectral features of the wind, in combination with data on the growth of the dust shell, may permit the derivation of a shell expansion parallax.
Dust-forming novae show optically thick winds with high outflow velocities that exist for time scales of weeks; V4334 Sgr shows an optically thin wind with an outflow velocity of $`25\mathrm{km}\mathrm{s}^{-1}`$ (or several times higher), which likely exists during all stages of the outburst. The increase of luminosity at later stages (Sect. 6.3), in combination with the cooling of the outer layers, may lead to an enhanced mass loss at later times.
### 6.3 Variations in luminosity and limits on the mass of V4334 Sgr
The data of Table 4 can be used to study the luminosity of V4334 Sgr at various stages of its evolution. The total irradiances (in units of $`10^{-12}\mathrm{W}\mathrm{m}^{-2}`$) are also shown in Fig. 9. From 1997 onward, an ever increasing part of the luminosity is radiated at wavelengths longward of $`4.5\mu \mathrm{m}`$, and nothing quantitative can be said about the luminosity evolution after early 1998, because far infrared data are lacking.
The flux increased by a factor of 4 from early 1996 to early 1998. This result depends only weakly on the assumed value of the interstellar extinction. It was already noted by D97 that the assumption of a constant luminosity of V4334 Sgr was not valid for the 1996 – 1997 light curve; the flux in the optical region increased by at least 30% over one year. A possible explanation of this behavior is that in early stages of the flash, a significant fraction of the energy release is used for the expansion of the object (this is also shown, implicitly, in theoretical tracks of final He flash objects, e.g. Figs. 14 and 15 in Blöcker 1995).
We take the "late", 1997 – 1998 luminosity of V4334 Sgr as the luminosity emerging from the remnant after most of the expansional work had been done. A total irradiance of $`2.2\times 10^{-11}\mathrm{W}\mathrm{m}^{-2}`$ is assigned to V4334 Sgr. This can easily be converted into a radiant flux release of $`10^{30}\left(\frac{d}{2\mathrm{kpc}}\right)^2`$ W, or
$$L_{\mathrm{V4334}\mathrm{Sgr}}\approx 2770L_{\odot }\left(\frac{d}{2\mathrm{kpc}}\right)^2$$
A high-mass post-AGB model of Blöcker (1995) has a mass of $`0.836M_{\odot }`$, and takes 50 years for its way from the planetary nebula nucleus back to the AGB during the final He flash; its mass may be taken as a lower limit to the mass of V4334 Sgr. The luminosity of the model is $`20,000L_{\odot }`$, which may serve as a reasonable lower limit to the luminosity, and thus to the distance of V4334 Sgr. Insertion in the above relation yields 5.4 kpc for V4334 Sgr. Even low-mass models of $`0.6M_{\odot }`$ yield distances $`>2\mathrm{kpc}`$, and the assumption of Sect. 3, $`d>2\mathrm{kpc}`$, is always fulfilled. If one were to use the range of absolute $`B`$-magnitudes of central stars of planetary nebulae to estimate the mass of V4334 Sgr, the upper distance limit of 3 kpc would yield a mass slightly above $`0.6M_{\odot }`$, which is somewhat unlikely in view of its fast evolution (see below and Sect. 7). Diagrams showing evolutionary speeds, envelope and core masses for born-again giants, as given by Blöcker & Schönberner (1997), do not cover the rapid evolution of V4334 Sgr. Nevertheless, an envelope mass of $`10^{-5}M_{\odot }`$ and a core mass around $`1M_{\odot }`$ are reasonable guesses. The high luminosity of such an object may even give support to the "long" distance scale of 8 kpc as suggested by D97. Further research on post-AGB evolution is clearly needed to constrain the distance of V4334 Sgr.
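The numbers in this subsection follow from the inverse-square law, F = L/(4πd²). The short check below assumes L_⊙ = 3.828 × 10²⁶ W and 1 pc = 3.086 × 10¹⁶ m; with slightly different constants it reproduces the coefficient in the relation above and the 5.4 kpc distance:

```python
import math

L_SUN_W = 3.828e26  # solar luminosity, W (IAU nominal value, an assumption)
PC_M = 3.086e16     # metres per parsec

def luminosity_solar(irradiance_w_m2, distance_kpc):
    """Luminosity in L_sun for a given total irradiance and distance."""
    d_m = distance_kpc * 1.0e3 * PC_M
    return 4.0 * math.pi * d_m ** 2 * irradiance_w_m2 / L_SUN_W

def distance_kpc_for(luminosity_lsun, irradiance_w_m2):
    """Distance at which the given luminosity yields the given irradiance."""
    d_m = math.sqrt(luminosity_lsun * L_SUN_W /
                    (4.0 * math.pi * irradiance_w_m2))
    return d_m / (1.0e3 * PC_M)

print(luminosity_solar(2.2e-11, 2.0))    # ~2750 L_sun (text: 2770)
print(distance_kpc_for(2.0e4, 2.2e-11))  # ~5.4 kpc
```

The small offset between 2750 and the quoted 2770 reflects only the choice of solar-luminosity and parsec constants, not the method.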
## 7 Time scales
Data on the evolution of two previous final He flash objects exist: FG Sge and V605 Aql. We omit from our discussion the 17th century object CK Vul, whose nature is still not clear and whose light curve covers only the brightest stages (Harrison 1996). Data were taken from Harrison (1996), Clayton & De Marco (1997), and Jurcsik & Montesinos (1999), and compared with the present data. Table 5 gives the time scales involved.
The only existing high-quality spectrum of V605 Aql was described by Bidelman (1973) to be โvery similar to the hydrogen-deficient carbon star HD 182040โ, which has a type C2,2 in the old Keenan-Morgan classification, and C-HD1$`\mathrm{C}_24^{}`$CH0 in the 1993 revised MK system of Keenan (Barnbaum, Stone & Keenan 1996). The appearance of the spectrum of V4334 Sgr, taken in May 1997, and analyzed by Pavlenko, Yakovina & Duerbeck (2000), is strikingly similar to that of V605 Aql, as illustrated by Clayton & De Marco (1997), and classified by Bidelman.
We compare the light curves of FG Sge, V605 Aql and V4334 Sgr in detail, taking the light curve of Harrison (1996) for V605 Aql, and assuming that maximum $`B`$ (or photographic) light occurred in 1968, 1919.6 and 1996.3 for the three objects, respectively. Note that Harrison's light curve of V605 Aql shows a minimum already in 1920; this "first" dust event, however, seems to be poorly documented and will not be taken into consideration here. The following time intervals are derived: the rise from about $`15^\mathrm{m}`$ to ($`B`$ or photographic) maximum took 74, 1.9 and 1.5 years, respectively. A comparable spectral type C2,2 was reached about 20 years, 2.1 and 1.0 years after maximum. Dust event onset, first dust event and first minor dust event were observed 24, 3 and 2.1 years after maximum. "Disappearance" due to a major dust event has not yet been observed for FG Sge, and occurred 4.4 and 2.9 years after maximum for the two other objects. The "total duration of visibility" (with moderate means) is thus 6.5 and 4.4 years for V605 Aql and V4334 Sgr, respectively. FG Sge has not yet entered the stage of faintness, and one can only compare the time from the onset of brightening to the onset of dust formation, which is 98 years for FG Sge, 4.9 years for V605 Aql, and 3.6 years for V4334 Sgr. "Averaging" the timescales of various events in the three objects, one finds that V4334 Sgr is the most rapidly evolving (and presumably the most massive) final He flash object known; V605 Aql is about 50% slower, and FG Sge is a factor of 25 – 50 slower. Extrapolating the lifetime of FG Sge to its expected disappearance due to a future major dust event yields a value of up to 220 years. Already one half of this time has elapsed; it will be interesting to monitor the future evolution of FG Sge.
## 8 Summary and outlook
The complete multi-color light curve of V4334 Sgr from its pre-discovery rise to the dust obscuration shows that the color indices increase quite smoothly. In 1995 – 1997, this is caused by the cooling of the expanding pseudo-photosphere of a mass-losing object that has a slowly increasing luminosity. Furthermore, the increasing infrared excess can be explained by free-free emission in an Eddington-driven outflow. Starting from 1998, brightness drops and their color characteristics mimic the "red declines" of R CrB variables. The increase in infrared flux and the behavior of the collisionally excited He I 10830 line indicate that a complete dust shell formed around the object in late 1998 – early 1999. Such a phenomenon also occurs in dust-forming classical novae.
The dust formation in V4334 Sgr, and possibly in most massive final He flash objects, is "catastrophic", i.e. a shell is formed which surrounds the whole star and which does not dissipate quickly. V605 Aql, after its disappearance in 1924, never recovered from its dust episode: plates of the Sonneberg sky patrol from 1928 – 1979, reaching (mostly photographic) magnitude 16 – 17.5, did not recover it (Fuhrmann 1981). It was only recovered at a very faint magnitude (Seitter 1985). It is quite certain that V4334 Sgr will behave in a similar way in the years to come.
Thus, final He flash objects show some similarity to R CrB stars, but apparently the onset of R CrB-like activity, at least for the quickly evolving objects like V605 Aql and V4334 Sgr, soon ends in a "catastrophic" decline, and does not extend over centuries of stellar evolution. The slowly evolving object FG Sge is also much more active than normal R CrB stars, but it has shown "blue declines", indicating that only localized dust formation has occurred until now. It will be extremely interesting to follow the future behavior of FG Sge. Final He flash objects as we know them are obviously not settling down as "normal" R CrB stars. Possibly low-mass, very slowly evolving final He flash objects (with FG Sge possibly defining the high mass limit) are the ones that may show up as R CrB stars during extended evolutionary phases.
The rapid evolution of V4334 Sgr (and of V605 Aql) indicates that we are observing here the massive objects undergoing post-AGB evolution. Models covering such masses and timescales are badly needed in order to constrain the masses, luminosities and distances of the observed events.
This paper profited much from a stay of H.W.D. at STScI Baltimore. He thanks M. Shara and N. Panagia for support and encouragement, and K. Sahu for arranging a seminar talk. Helpful electronic discussions with M. Asplund (Uppsala), A. Evans (Keele), U.S. Kamath and N.M. Ashok (Ahmedabad) and Ya. Pavlenko (Kiev) are gratefully recognized, and we are also very much indebted to P.A. Whitelock (SAAO) for communicating infrared data in advance of publication, to G. Wallerstein (Seattle) for communicating spectroscopic observations, and to W.C. Seitter for a careful reading of the manuscript. C.S. and T.A. acknowledge financial support from the Fund for Scientific Research Flanders (FWO). This research was supported by the Belgian Fund for Scientific Research (FWO) and by the Flemish Ministry for Foreign Policy, European Affairs, Science and Technology. Finally, we acknowledge the comments of a referee that were very helpful in improving the presentation of the paper. |
Search for the Decay $`\overline{B^0}\rightarrow D^{*0}\gamma `$
## Abstract
We report results of a search for the rare radiative decay $`\overline{B^0}\rightarrow D^{*0}\gamma `$. Using $`9.66\times 10^6`$ $`B\overline{B}`$ meson pairs collected with the CLEO detector at the Cornell Electron Storage Ring, we set an upper limit on the branching ratio for this decay of $`5.0\times 10^{-5}`$ at 90% CL. This provides evidence that anomalous enhancement is absent in $`W`$-exchange processes and that weak radiative $`B`$ decays are dominated by the short-distance $`b\rightarrow s\gamma `$ mechanism in the Standard Model.
preprint: CLNS 99/1655 CLEO 99-21
M. Artuso,<sup>1</sup> R. Ayad,<sup>1</sup> C. Boulahouache,<sup>1</sup> K. Bukin,<sup>1</sup> E. Dambasuren,<sup>1</sup> S. Karamov,<sup>1</sup> S. Kopp,<sup>1</sup> G. Majumder,<sup>1</sup> G. C. Moneti,<sup>1</sup> R. Mountain,<sup>1</sup> S. Schuh,<sup>1</sup> T. Skwarnicki,<sup>1</sup> S. Stone,<sup>1</sup> G. Viehhauser,<sup>1</sup> J.C. Wang,<sup>1</sup> A. Wolf,<sup>1</sup> J. Wu,<sup>1</sup> S. E. Csorna,<sup>2</sup> I. Danko,<sup>2</sup> K. W. McLean,<sup>2</sup> Sz. Mรกrka,<sup>2</sup> Z. Xu,<sup>2</sup> R. Godang,<sup>3</sup> K. Kinoshita,<sup>3,</sup><sup>*</sup><sup>*</sup>*Permanent address: University of Cincinnati, Cincinnati OH 45221 I. C. Lai,<sup>3</sup> S. Schrenk,<sup>3</sup> G. Bonvicini,<sup>4</sup> D. Cinabro,<sup>4</sup> L. P. Perera,<sup>4</sup> G. J. Zhou,<sup>4</sup> G. Eigen,<sup>5</sup> E. Lipeles,<sup>5</sup> M. Schmidtler,<sup>5</sup> A. Shapiro,<sup>5</sup> W. M. Sun,<sup>5</sup> A. J. Weinstein,<sup>5</sup> F. Wรผrthwein,<sup>5,</sup>Permanent address: Massachusetts Institute of Technology, Cambridge, MA 02139. D. E. Jaffe,<sup>6</sup> G. Masek,<sup>6</sup> H. P. Paar,<sup>6</sup> E. M. Potter,<sup>6</sup> S. Prell,<sup>6</sup> V. Sharma,<sup>6</sup> D. M. Asner,<sup>7</sup> A. Eppich,<sup>7</sup> T. S. Hill,<sup>7</sup> D. J. Lange,<sup>7</sup> R. J. Morrison,<sup>7</sup> R. A. Briere,<sup>8</sup> B. H. Behrens,<sup>9</sup> W. T. Ford,<sup>9</sup> A. Gritsan,<sup>9</sup> J. Roy,<sup>9</sup> J. G. Smith,<sup>9</sup> J. P. Alexander,<sup>10</sup> R. Baker,<sup>10</sup> C. Bebek,<sup>10</sup> B. E. Berger,<sup>10</sup> K. Berkelman,<sup>10</sup> F. Blanc,<sup>10</sup> V. Boisvert,<sup>10</sup> D. G. Cassel,<sup>10</sup> M. Dickson,<sup>10</sup> P. S. Drell,<sup>10</sup> K. M. Ecklund,<sup>10</sup> R. Ehrlich,<sup>10</sup> A. D. Foland,<sup>10</sup> P. Gaidarev,<sup>10</sup> L. Gibbons,<sup>10</sup> B. Gittelman,<sup>10</sup> S. W. Gray,<sup>10</sup> D. L. Hartill,<sup>10</sup> B. K. Heltsley,<sup>10</sup> P. I. 
Hopman,<sup>10</sup> C. D. Jones,<sup>10</sup> D. L. Kreinick,<sup>10</sup> M. Lohner,<sup>10</sup> A. Magerkurth,<sup>10</sup> T. O. Meyer,<sup>10</sup> N. B. Mistry,<sup>10</sup> E. Nordberg,<sup>10</sup> J. R. Patterson,<sup>10</sup> D. Peterson,<sup>10</sup> D. Riley,<sup>10</sup> J. G. Thayer,<sup>10</sup> P. G. Thies,<sup>10</sup> B. Valant-Spaight,<sup>10</sup> A. Warburton,<sup>10</sup> P. Avery,<sup>11</sup> C. Prescott,<sup>11</sup> A. I. Rubiera,<sup>11</sup> J. Yelton,<sup>11</sup> J. Zheng,<sup>11</sup> G. Brandenburg,<sup>12</sup> A. Ershov,<sup>12</sup> Y. S. Gao,<sup>12</sup> D. Y.-J. Kim,<sup>12</sup> R. Wilson,<sup>12</sup> T. E. Browder,<sup>13</sup> Y. Li,<sup>13</sup> J. L. Rodriguez,<sup>13</sup> H. Yamamoto,<sup>13</sup> T. Bergfeld,<sup>14</sup> B. I. Eisenstein,<sup>14</sup> J. Ernst,<sup>14</sup> G. E. Gladding,<sup>14</sup> G. D. Gollin,<sup>14</sup> R. M. Hans,<sup>14</sup> E. Johnson,<sup>14</sup> I. Karliner,<sup>14</sup> M. A. Marsh,<sup>14</sup> M. Palmer,<sup>14</sup> C. Plager,<sup>14</sup> C. Sedlack,<sup>14</sup> M. Selen,<sup>14</sup> J. J. Thaler,<sup>14</sup> J. Williams,<sup>14</sup> K. W. Edwards,<sup>15</sup> R. Janicek,<sup>16</sup> P. M. Patel,<sup>16</sup> A. J. Sadoff,<sup>17</sup> R. Ammar,<sup>18</sup> A. Bean,<sup>18</sup> D. Besson,<sup>18</sup> R. Davis,<sup>18</sup> N. Kwak,<sup>18</sup> X. Zhao,<sup>18</sup> S. Anderson,<sup>19</sup> V. V. Frolov,<sup>19</sup> Y. Kubota,<sup>19</sup> S. J. Lee,<sup>19</sup> R. Mahapatra,<sup>19</sup> J. J. OโNeill,<sup>19</sup> R. Poling,<sup>19</sup> T. Riehle,<sup>19</sup> A. Smith,<sup>19</sup> J. Urheim,<sup>19</sup> S. Ahmed,<sup>20</sup> M. S. Alam,<sup>20</sup> S. B. Athar,<sup>20</sup> L. Jian,<sup>20</sup> L. Ling,<sup>20</sup> A. H. Mahmood,<sup>20,</sup>Permanent address: University of Texas - Pan American, Edinburg TX 78539. M. Saleem,<sup>20</sup> S. Timm,<sup>20</sup> F. Wappler,<sup>20</sup> A. Anastassov,<sup>21</sup> J. E. Duboscq,<sup>21</sup> K. K. 
Gan,<sup>21</sup> C. Gwon,<sup>21</sup> T. Hart,<sup>21</sup> K. Honscheid,<sup>21</sup> D. Hufnagel,<sup>21</sup> H. Kagan,<sup>21</sup> R. Kass,<sup>21</sup> T. K. Pedlar,<sup>21</sup> H. Schwarthoff,<sup>21</sup> J. B. Thayer,<sup>21</sup> E. von Toerne,<sup>21</sup> M. M. Zoeller,<sup>21</sup> S. J. Richichi,<sup>22</sup> H. Severini,<sup>22</sup> P. Skubic,<sup>22</sup> A. Undrus,<sup>22</sup> S. Chen,<sup>23</sup> J. Fast,<sup>23</sup> J. W. Hinson,<sup>23</sup> J. Lee,<sup>23</sup> N. Menon,<sup>23</sup> D. H. Miller,<sup>23</sup> E. I. Shibata,<sup>23</sup> I. P. J. Shipsey,<sup>23</sup> V. Pavlunin,<sup>23</sup> D. Cronin-Hennessy,<sup>24</sup> Y. Kwon,<sup>24,</sup><sup>ยง</sup><sup>ยง</sup>ยงPermanent address: Yonsei University, Seoul 120-749, Korea. A.L. Lyon,<sup>24</sup> E. H. Thorndike,<sup>24</sup> C. P. Jessop,<sup>25</sup> H. Marsiske,<sup>25</sup> M. L. Perl,<sup>25</sup> V. Savinov,<sup>25</sup> D. Ugolini,<sup>25</sup> X. Zhou,<sup>25</sup> T. E. Coan,<sup>26</sup> V. Fadeyev,<sup>26</sup> Y. Maravin,<sup>26</sup> I. Narsky,<sup>26</sup> R. Stroynowski,<sup>26</sup> J. Ye,<sup>26</sup> and T. Wlodek<sup>26</sup>
<sup>1</sup>Syracuse University, Syracuse, New York 13244
<sup>2</sup>Vanderbilt University, Nashville, Tennessee 37235
<sup>3</sup>Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061
<sup>4</sup>Wayne State University, Detroit, Michigan 48202
<sup>5</sup>California Institute of Technology, Pasadena, California 91125
<sup>6</sup>University of California, San Diego, La Jolla, California 92093
<sup>7</sup>University of California, Santa Barbara, California 93106
<sup>8</sup>Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
<sup>9</sup>University of Colorado, Boulder, Colorado 80309-0390
<sup>10</sup>Cornell University, Ithaca, New York 14853
<sup>11</sup>University of Florida, Gainesville, Florida 32611
<sup>12</sup>Harvard University, Cambridge, Massachusetts 02138
<sup>13</sup>University of Hawaii at Manoa, Honolulu, Hawaii 96822
<sup>14</sup>University of Illinois, Urbana-Champaign, Illinois 61801
<sup>15</sup>Carleton University, Ottawa, Ontario, Canada K1S 5B6
and the Institute of Particle Physics, Canada
<sup>16</sup>McGill University, Montrรฉal, Quรฉbec, Canada H3A 2T8
and the Institute of Particle Physics, Canada
<sup>17</sup>Ithaca College, Ithaca, New York 14850
<sup>18</sup>University of Kansas, Lawrence, Kansas 66045
<sup>19</sup>University of Minnesota, Minneapolis, Minnesota 55455
<sup>20</sup>State University of New York at Albany, Albany, New York 12222
<sup>21</sup>Ohio State University, Columbus, Ohio 43210
<sup>22</sup>University of Oklahoma, Norman, Oklahoma 73019
<sup>23</sup>Purdue University, West Lafayette, Indiana 47907
<sup>24</sup>University of Rochester, Rochester, New York 14627
<sup>25</sup>Stanford Linear Accelerator Center, Stanford University, Stanford, California 94309
<sup>26</sup>Southern Methodist University, Dallas, Texas 75275
In recent years, exclusive and inclusive $`b\to s\gamma `$ transitions were discovered by CLEO. These observations confirmed the existence of effective flavor-changing neutral-current processes in the Standard Model (SM) and stirred significant theoretical interest by opening new avenues to search for new physical phenomena.
One of the essential ingredients of the inclusive $`b\to s\gamma `$ measurement by CLEO was the assumption that flavor-annihilation and $`W`$-exchange radiative transitions, represented by decays such as $`\overline{B^0}\to D^{*0}\gamma `$, are strongly suppressed. If this were not so, these decays could represent a serious experimental background to the inclusive photon spectrum used to deduce the $`b\to s\gamma `$ rate. The primary goal of the study presented in this Letter is to establish experimentally whether $`W`$-exchange (flavor-annihilation) processes are indeed strongly suppressed in $`B`$ decays.
We search for the decay $`\overline{B^0}\to D^{*0}\gamma `$ (and its charge-conjugate state). In the SM framework this decay proceeds via $`W`$-exchange between the $`b`$ and $`\overline{d}`$ quarks (Fig. 1). Naively, this transition is suppressed by helicity effects and by Quantum Chromodynamic (QCD) color corrections to the weak vertex. Two theoretical mechanisms to overcome this suppression have been proposed in the past. One invokes the emission of gluons from the initial-state quark, while the other assumes a large $`q\overline{q}g`$ (or color-octet) component in the $`B`$-meson wave function. Whether either mechanism could significantly enhance the rate is debatable. Theoretical estimates which take gluon emission into account predict a $`\overline{B^0}\to D^{*0}\gamma `$ branching fraction of the order of $`10^{-6}`$. Though numerical estimates of the rate for the color-octet hypothesis are not yet available, the rate could be enhanced by a factor of approximately ten, which is a typical color-suppression factor. So far the presence of a possible enhancement in the decay $`\overline{B^0}\to D^{*0}\gamma `$ has not been tested experimentally.
On the other hand, if QCD suppression is present in the decay $`\overline{B^0}\to D^{*0}\gamma `$, eventually we would like to measure the strength of this suppression. Theoretical predictions for the studied decay have large uncertainties; therefore, a precise knowledge of the branching fraction would allow the QCD radiative corrections to be quantified more reliably. Knowledge of these corrections becomes increasingly important as theorists suggest new ways to constrain the SM parameters using hadronic $`B`$ decays. This makes the decay $`\overline{B^0}\to D^{*0}\gamma `$ an interesting process to study even if QCD suppression is present.
The data analyzed in this study were collected at the Cornell Electron Storage Ring (CESR) with the CLEO detector. The results are based on $`9.66\times 10^6`$ $`B\overline{B}`$ meson pairs, corresponding to an integrated $`e^+e^{-}`$ luminosity of $`9.2\mathrm{fb}^{-1}`$ collected at the $`\mathrm{\Upsilon }(4\mathrm{S})`$ energy of 10.58 GeV. To optimize most of our selection criteria, we also employed $`4.6\mathrm{fb}^{-1}`$ of $`e^+e^{-}\to q\overline{q}`$ ($`q=u,d,s,c`$) annihilation data ("continuum") collected approximately 60 MeV below the $`\mathrm{\Upsilon }(4\mathrm{S})`$ energy. Our data sample was recorded with two configurations of the CLEO detector. The first third of the data was recorded with the CLEO II detector, which consisted of three cylindrical drift chambers placed in an axial solenoidal magnetic field of 1.5 T, a CsI(Tl)-crystal electromagnetic calorimeter, a time-of-flight plastic scintillator system and a muon system (proportional counters embedded at various depths in the steel absorber). The remaining two thirds of the data were taken with the CLEO II.V configuration of the detector, in which the innermost drift chamber was replaced by a silicon vertex detector and the argon-ethane gas of the main drift chamber was changed to a helium-propane mixture. This upgrade led to improved resolutions in momentum and in specific ionization energy loss ($`dE/dx`$). The response of the detector is modeled with a GEANT-based Monte Carlo simulation program. The data and simulated samples are processed by the same event reconstruction program. Whenever possible, the efficiencies are either calibrated or corrected for the difference between simulated and actual detector responses using direct measurements from independent data.
We search for $`\overline{B^0}\to D^{*0}\gamma `$ candidates among events where a photon with energy greater than 1.5 GeV is accompanied by a fully reconstructed $`D^{*0}`$ meson. The $`D^{*0}`$ mesons are reconstructed in their decays to $`D^0\pi ^0`$ and $`D^0\gamma `$, with the $`D^0`$ mesons decaying to $`K^{-}\pi ^+`$, $`K^{-}\pi ^+\pi ^0`$ or $`K^{-}\pi ^+\pi ^{-}\pi ^+`$. These reconstructed channels comprise 25% of the product branching fraction for the $`D^{*0}`$ and $`D^0`$ decays. Multiple entries are assigned a weight inversely proportional to the number of candidates identified in the event. As we apply selection criteria, the reweighting is performed appropriately. The average numbers of candidates per event before and after event selection are 10 and 1.1, respectively.
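The per-event candidate weighting just described can be sketched as follows (the event representation and helper function are illustrative, not CLEO code):

```python
def weighted_candidates(events, passes_cuts):
    """Assign each surviving candidate a weight 1/N, where N is the
    number of candidates left in its event; the weight is recomputed
    after the cuts are applied, as described in the text."""
    out = []
    for cands in events:                 # each event is a list of candidates
        kept = [c for c in cands if passes_cuts(c)]
        if kept:
            w = 1.0 / len(kept)          # reweight after selection
            out.extend((c, w) for c in kept)
    return out

# Toy usage: two events, keep odd-numbered candidates only
res = weighted_candidates([[1, 2, 3, 4], [5]], lambda c: c % 2 == 1)
print(res)  # [(1, 0.5), (3, 0.5), (5, 1.0)]
```

With this convention each event with at least one surviving candidate contributes unit total weight, so multiple entries do not inflate the yield.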
Efficient track and photon quality requirements have been designed to minimize systematic uncertainties. This includes selecting only those photons that are detected in the region of the calorimeter where the resolutions are well modeled. Kaon candidates are required to have measured $`dE/dx`$ within $`\pm 2.5`$ standard deviations ($`\sigma `$) of the expected energy loss. Pairs of photons combined to form the $`\pi ^0`$ candidates are required to have masses within $`-3.5\sigma `$ and $`+2.5\sigma `$ ($`\sigma \approx 6`$ $`\mathrm{MeV}/\mathrm{c}^2`$) of the $`\pi ^0`$ mass. To improve the mass resolution for parent particles, the $`\pi ^0`$ candidates are kinematically fit to this mass. To suppress combinatorial background, soft photons from the $`D^{*0}\to D^0\gamma `$ decays are required to have energies above 200 MeV. This selection is 50% efficient. The invariant mass of the $`D^0`$ candidates is required to be within $`\pm 2.5\sigma `$ ($`\sigma \approx 8.0\mathrm{MeV}/\mathrm{c}^2`$), $`\pm 2.0\sigma `$ ($`\sigma \approx 15.0\mathrm{MeV}/\mathrm{c}^2`$) and $`\pm 1.5\sigma `$ ($`\sigma \approx 7.5\mathrm{MeV}/\mathrm{c}^2`$) of the $`D^0`$ mass of $`1.8646\mathrm{GeV}/\mathrm{c}^2`$ in final states with one, two and three pions, respectively. The $`D^{*0}`$–$`D^0`$ mass difference $`\delta M`$ is required to be within $`\pm 2.0\sigma `$ of $`142.1\mathrm{MeV}/\mathrm{c}^2`$ ($`\sigma \approx 1.0`$ and $`5.0\mathrm{MeV}/\mathrm{c}^2`$ for the $`\pi ^0`$ and $`\gamma `$ decays of the $`D^{*0}`$, respectively). To select $`D^0\to K^{-}\pi ^+\pi ^0`$ candidates we require the $`K^{-}\pi ^0`$ and $`\pi ^+\pi ^0`$ invariant masses to be consistent with the resonant substructure of the $`D^0`$ decay. Continuum data were used to optimize these criteria to suppress combinatorial backgrounds.
The major sources of background are photons from initial-state radiation and from $`\pi ^0`$ decays, both from continuum and from $`B\overline{B}`$ events. To suppress the real-$`\pi ^0`$ background and to reduce the cross-feed between the $`\pi ^0`$ and $`\gamma `$ reconstruction channels of the $`D^{*0}`$, we apply a $`\pi ^0`$ veto to the photons from both the $`D^{*0}`$ decay and the $`\overline{B^0}`$ decay. This is done by rejecting photons that, when combined with another photon candidate, form $`\pi ^0`$ candidates within $`-4.5\sigma `$ and $`+3.5\sigma `$ of the $`\pi ^0`$ mass. To suppress the remaining continuum background, we use a Fisher discriminant technique. This discriminant is a linear combination of three angles and nine event-shape variables. The first angle is between the $`\overline{B^0}`$ candidate momentum and the $`e^+e^{-}`$ collision ("beam") axis. The second is the angle between the beam axis and the direction of the $`\overline{B^0}`$ candidate thrust axis. The third is the angle between the thrust axis of the $`\overline{B^0}`$ candidate and the thrust axis of the rest of the event. The nine event-shape variables are the amounts of energy detected in $`10^{\circ }`$ cones around the direction of the signal photon from the $`\overline{B^0}`$ decay. The Fisher discriminant coefficients are optimized to maximize the separation between continuum events, which are jetlike, and $`B\overline{B}`$ events, which are spherical in shape at the $`\mathrm{\Upsilon }(4\mathrm{S})`$ energy. This important selection criterion is optimized for each reconstruction channel separately using a combination of continuum data and simulated signal events, and has an efficiency between 40% and 70%, depending on the reconstruction channel.
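A Fisher discriminant is a fixed linear combination of input variables whose coefficients maximize the separation between the two class means relative to the within-class spread. A minimal two-variable sketch on made-up toy data (CLEO used twelve inputs and its own optimization; everything below is illustrative):

```python
def fisher_coefficients(sig, bkg):
    """w ~ S_w^{-1} (mu_sig - mu_bkg) for 2-D samples, using a diagonal
    approximation to the pooled within-class covariance S_w."""
    def mean(xs):
        return [sum(col) / len(xs) for col in zip(*xs)]
    def var(xs, mu):
        return [sum((x[i] - mu[i]) ** 2 for x in xs) / len(xs) for i in (0, 1)]
    mu_s, mu_b = mean(sig), mean(bkg)
    pooled = [vs + vb for vs, vb in zip(var(sig, mu_s), var(bkg, mu_b))]
    return [(ms - mb) / v for ms, mb, v in zip(mu_s, mu_b, pooled)]

def project(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Toy inputs: (sphericity-like, jettiness-like) for signal and continuum
sig = [[0.9, 0.2], [0.8, 0.3], [1.0, 0.1], [0.7, 0.25]]
bkg = [[0.2, 0.8], [0.3, 0.9], [0.1, 0.7], [0.25, 0.85]]
w = fisher_coefficients(sig, bkg)
print(min(project(w, x) for x in sig) > max(project(w, x) for x in bkg))  # True
```

A cut on the projected value then selects the spherical, signal-like events with a single number per candidate.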
We define the signal region in the two-dimensional plane of the beam-constrained $`B`$ mass $`M(B)=\sqrt{E_{\mathrm{beam}}^2-p(B)^2}`$ and the energy difference $`\mathrm{\Delta }E=E(B)-E_{\mathrm{beam}}`$, where $`E_{\mathrm{beam}}`$ is the beam energy, $`p(B)`$ is the momentum of the $`\overline{B^0}`$ candidate and $`E(B)`$ is its detected energy. The signal region is defined by $`M(B)>5.275\mathrm{GeV}/\mathrm{c}^2`$ and $`|\mathrm{\Delta }E|<100`$ MeV. The $`M(B)`$ requirement is 1.5$`\sigma `$ below the actual $`\overline{B^0}`$ mass ($`\sigma \approx 2.8\mathrm{MeV}/\mathrm{c}^2`$). These criteria are optimized to suppress the cross-feed from $`B`$ decays to higher-multiplicity final states. The signal-region selection is 78% efficient.
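These kinematic variables translate directly into code; a minimal sketch (the beam energy and the candidate values are made up for illustration, all in GeV):

```python
import math

E_BEAM = 5.290  # beam energy near the Upsilon(4S), GeV (illustrative)

def beam_constrained_mass(p_B, e_beam=E_BEAM):
    """M(B) = sqrt(E_beam^2 - p(B)^2), in GeV/c^2."""
    return math.sqrt(e_beam**2 - p_B**2)

def in_signal_region(p_B, E_B, e_beam=E_BEAM):
    """M(B) > 5.275 GeV/c^2 and |Delta E| < 0.100 GeV."""
    m_B = beam_constrained_mass(p_B, e_beam)
    delta_E = E_B - e_beam
    return m_B > 5.275 and abs(delta_E) < 0.100

# A candidate with p(B) = 0.30 GeV/c and E(B) = 5.28 GeV passes:
print(in_signal_region(0.30, 5.28))  # True
```

Because $`M(B)`$ uses the precisely known beam energy instead of the measured candidate energy, its resolution is far better than that of the raw invariant mass.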
No events are found in the signal region. Projections onto the $`\mathrm{\Delta }E`$ and $`M(B)`$ variables are shown in Fig. 2. On average we expect 0.5 continuum background events in the signal region. We estimate this number from continuum data by relaxing the event-selection requirements. The contribution from the decay $`\overline{B^0}\to D^{*0}\pi ^0`$ in the signal region is less than 0.9 events, assuming $`\mathrm{B}(\overline{B^0}\to D^{*0}\pi ^0)<4.4\times 10^{-4}`$ at 90% CL. The theoretical predictions for this branching fraction are of the order of $`10^{-4}`$. The contribution from all other known $`B`$ decays in the signal region is negligible. Six data events in the $`\mathrm{\Delta }E`$ sideband are consistent with Monte Carlo expectations for cross-feed from the decay $`B^+\to D^{*0}\rho ^+`$. This decay can produce $`\overline{B^0}\to D^{*0}\gamma `$ candidates with $`\mathrm{\Delta }E<-m_\pi `$ when the $`\pi ^0`$ decays asymmetrically and is emitted along the $`\rho ^+`$ direction.
To derive the upper limit we combine all six reconstruction channels. Efficiencies are weighted taking into account the branching fractions for the $`D^{*0}`$ and $`D^0`$ decays. The overall reconstruction efficiency is 2.3%, where the major contributions are due to the exclusive reconstruction approach (30%), the track and photon quality requirements (65%), the $`\delta M`$ requirement (30%) and the Fisher discriminant technique (58%). To estimate the upper limit, we conservatively reduce the reconstruction efficiency by its systematic error (18%). The largest contributions to this error are due to the uncertainties in the track and photon reconstruction efficiencies (11%), the $`D^0`$ branching fractions (9%), the Fisher discriminant (6%) and the efficiencies of the requirements on the reconstructed masses of the $`D^0`$ (5%) and $`\overline{B^0}`$ (5%) candidates. We also assume $`\mathrm{B}(\mathrm{\Upsilon }(4\mathrm{S})\to B^0\overline{B^0})=\mathrm{B}(\mathrm{\Upsilon }(4\mathrm{S})\to B^+B^{-})=0.5`$. The upper limit on the number of detected signal events is 2.3 at 90% CL and corresponds to an upper limit on the branching fraction for the decay $`\overline{B^0}\to D^{*0}\gamma `$ of $`5.0\times 10^{-5}`$ at 90% CL.
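The limit arithmetic can be reproduced in a few lines; the bookkeeping below (the 25% product branching fraction, the factor-of-two counting of neutral $`B`$ mesons, and the 18% efficiency reduction) is our reading of the text, not CLEO's actual machinery:

```python
import math

n_bb = 9.66e6          # Upsilon(4S) -> B Bbar pairs
f_00 = 0.5             # assumed B(Upsilon(4S) -> B0 B0bar)
eps_reco = 0.023       # overall reconstruction efficiency
prod_bf = 0.25         # product of D*0 and D0 branching fractions used
syst = 0.18            # systematic error on the efficiency

# 90% CL Poisson upper limit for zero observed events, no background
# subtraction: N such that exp(-N) = 0.10
n_ul = -math.log(0.10)                        # ~2.3 events

eps_eff = eps_reco * prod_bf * (1.0 - syst)   # conservatively reduced
n_b0 = 2 * n_bb * f_00                        # number of neutral B mesons
bf_ul = n_ul / (eps_eff * n_b0)
print(f"{bf_ul:.1e}")                         # close to the quoted 5.0e-05
```

That these inputs reproduce the quoted limit suggests the 2.3% efficiency is defined before applying the 25% product branching fraction.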
We performed the first search for the decay $`\overline{B^0}\to D^{*0}\gamma `$ and set an upper limit on its branching fraction of $`5.0\times 10^{-5}`$ at 90% CL. Our non-observation is consistent with the absence of anomalous enhancements that could have overcome short-distance color suppression in the studied process. We confirm theoretical predictions that weak radiative $`B`$ decays are dominated by the short-distance $`b\to s\gamma `$ mechanism. Finally, our results should be useful for studies of radiative and color-suppressed processes with heavy quarks at future high-statistics $`B`$-physics experiments. At these facilities the decay $`\overline{B^0}\to D^{*0}\gamma `$ could be used to verify whether the short-distance QCD radiative corrections are under firm theoretical control and, possibly, to search for new physical phenomena.
We would like to thank A. Khodjamirian, P. Kim, R. Schindler, and A. Vainshtein for useful conversations. We gratefully acknowledge the effort of the CESR staff in providing us with excellent luminosity and running conditions. This work was supported by the National Science Foundation, the U.S. Department of Energy, the Research Corporation, the Natural Sciences and Engineering Research Council of Canada, the A.P. Sloan Foundation, the Swiss National Science Foundation, and the Alexander von Humboldt Stiftung. |
# Discovering a Light Higgs Boson with Light
## Motivation
The Standard Model (SM) is very economical in the sense that the Higgs doublet responsible for electroweak symmetry breaking can also be used to generate fermion masses. The Higgs boson couplings to the gauge bosons, quarks, and leptons are therefore predicted in the Standard Model, where one expects the Higgs boson to decay mostly to $`b`$-jets and tau pairs (for low Higgs masses, $`M_h\stackrel{<}{\sim }140`$ GeV), or to $`WW`$ or $`ZZ`$ pairs (for higher Higgs masses, $`M_h\stackrel{>}{\sim }140`$ GeV). Since the Higgs boson is neutral and does not couple to photons at tree level, the branching ratio $`\mathrm{B}(h\to \gamma \gamma )`$ is predicted to be very small in the SM, of the order of $`10^{-3}`$–$`10^{-4}`$.
In a more general framework, however, where different sectors of the theory are responsible for the physics of flavor and electroweak symmetry breaking, one may expect deviations from the SM predictions, which may lead to drastic changes in the Higgs boson discovery signatures. One such example is the so-called "fermiophobic" (also known as "bosophilic" or "bosonic") Higgs, which has suppressed couplings to all fermions. It may arise in a variety of models, see e.g. bosmodels . A variation on this theme is the Higgs in certain topcolor models, which may couple to heavy quarks only topmodels . Some even more exotic possibilities have been suggested in the context of theories with large extra dimensions LED . Finally, in the minimal supersymmetric standard model (MSSM), the width into $`b\overline{b}`$ pairs can be suppressed by one-loop SUSY corrections, thus enhancing the branching ratios of a light Higgs into more exotic signatures CMW ; Mrenna . In all these cases, the Higgs boson decays to photon pairs are mediated through a $`W`$ or heavy-quark loop and dominate for $`M_h\stackrel{<}{\sim }100`$ GeV SMW . In the range $`100\stackrel{<}{\sim }M_h\stackrel{<}{\sim }160`$ GeV, they compete with the $`WW^{*}`$ mode, while for $`M_h\stackrel{>}{\sim }160`$ GeV, $`h\to WW`$ completely takes over. Current bounds from LEP LEP limits are limited by the kinematic reach of the machine. The existing Run I analyses at the Tevatron have utilized the diphoton plus 2 jets Lauer ; D0 ; Wilson and inclusive diphoton Wilson channels and were limited by statistics. Since they only looked for a "bosonic" Higgs bosmodels , they did not consider the Higgs production mechanism through gluon fusion, which can be a major additional source of signal in certain models topmodels . Since $`h\to \gamma \gamma `$ is a very clean signature, it will allow the Tevatron to extend those limits significantly in its next run.
In this study we shall evaluate the Higgs discovery potential of the upcoming Tevatron runs in several diphoton channels. We shall concentrate on the following two questions. First, what is the absolute reach in Higgs mass as a function of the $`h\to \gamma \gamma `$ branching ratio? Second, which signature (inclusive diphotons, diphotons plus one jet, or diphotons plus two jets) provides the best reach? We believe that neither of these two questions has been adequately addressed in the literature previously.
## Tevatron Reach for a Bosonic Higgs
Here we consider the case of a โbosonicโ Higgs, i.e. models where the Higgs couplings to all fermions are suppressed. Then, the main Higgs production modes at the Tevatron are associated $`Wh/Zh`$ production, as well as $`WW/ZZ`$ fusion. All of these processes have comparable rates Spira , so it makes sense to consider an inclusive signature first Wilson .
### Inclusive channel: analysis cuts
We use the following cuts for our inclusive study: two photons with $`p_T(\gamma )>20`$ GeV and rapidity $`|\eta (\gamma )|<2`$, motivated by the acceptance of the CDF and DØ detectors in Run II. Triggering on such a signature is trivial; both collaborations will have diphoton triggers that are nearly fully efficient with such offline cuts.
We assume 80% diphoton identification efficiency, which we apply to both the signal and background estimates on top of the kinematic and geometrical acceptance. This efficiency is motivated by the CDF/DØ EM ID efficiency in Run I and is not likely to change in Run II.
### Inclusive channel: background
The main backgrounds to the inclusive diphoton channel come from the QCD production of dijets, direct photons, and diphotons. In the former two cases a jet mimics a photon by fragmenting into a leading $`\pi ^0/\eta `$ meson that further decays into a pair of photons, not resolved in the calorimeter.
We used the PYTHIA PYTHIA event generator and the experimentally measured probability of a jet to fake a photon Lauer to calculate all three components of the QCD background. The faking probability depends significantly on the particular photon ID cuts, especially on the photon isolation requirement (see, e.g. Lauer ; diboson ; monopole ). For this study we used an $`E_T`$-dependent jet-faking-photon probability of
$$P(\mathrm{jet}\to \gamma )=\mathrm{exp}\left(-0.01\frac{E_T}{\text{(1 GeV)}}-7.5\right),$$
which is obtained by taking the $`\eta `$-averaged faking probabilities used in the DØ Run I searches Lauer . The fractional error on $`P(\text{jet}\to \gamma )`$ is about 25% and is dominated by the uncertainty on the direct-photon fraction in the $`\text{jet}+\gamma `$ sample used for its determination. (For high photon $`E_T`$, however, the error is dominated by the available statistics.) This probability is expected to remain approximately the same in Run II for both the CDF and DØ detectors. We used 80% ID efficiency for the pair of photons, and required the photons to be isolated from possible extra jets in the event. We accounted for NLO corrections via a constant $`k`$-factor of 1.34.
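As a numerical sketch of this fake rate (the minus signs in the exponent are our restoration, since the extracted formula lost them; a sub-per-mille probability that falls with $`E_T`$ is the physically sensible reading):

```python
import math

def p_jet_fakes_photon(et_gev):
    """Eta-averaged probability for a jet of transverse energy et_gev
    (GeV) to pass the photon ID, per the parametrization above
    (assumed form: exp(-0.01*E_T/GeV - 7.5))."""
    return math.exp(-0.01 * et_gev - 7.5)

# e.g. a 20 GeV jet fakes a photon with probability ~4.5e-4
print(f"{p_jet_fakes_photon(20.0):.2e}")
```

Convolving this per-jet probability with the PYTHIA dijet and photon-plus-jet spectra yields the reducible background components.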
Adding all background contributions, for the total background in the inclusive diphoton channel we obtain the following parametrization:
$$\frac{d\sigma }{dM_{\gamma \gamma }}=\left[p_3+p_4\left(\frac{M_{\gamma \gamma }}{1\mathrm{GeV}}\right)+p_5\left(\frac{M_{\gamma \gamma }}{1\mathrm{GeV}}\right)^2\right]\mathrm{exp}\left\{p_1+p_2\left(\frac{M_{\gamma \gamma }}{1\mathrm{GeV}}\right)\right\},$$
where $`p_1=6.45`$, $`p_2=-0.029`$, $`p_3=2.44`$, $`p_4=0.011`$ and $`p_5=0.00005`$. In the region $`M_{\gamma \gamma }>100`$ GeV it is dominated by direct diphoton production and hence is irreducible. The expected statistical-plus-systematic error on this background determination is at the level of 25%, based on the jet-faking-photon probability uncertainty. For larger invariant masses, however, the accuracy is dominated by the uncertainties in the direct diphoton production cross section, which will be difficult to measure independently in Run II, so one will still have to rely on the NLO predictions. On the other hand, for narrow-resonance searches one could perform a self-calibration of the background by calculating the expected background under the signal peak via interpolation of the measured diphoton mass spectrum between the regions just below and just above the assumed resonance mass. Therefore, in our case the background error will be dominated purely by the background statistics. A combination of the interpolation technique and the shape information from the theoretical NLO calculations of the direct diphoton cross section is expected to result in a significantly smaller background error in Run II.
The total background, as well as the individual contributions from $`\gamma \gamma `$, $`\gamma j`$ and $`jj`$ production, is shown in Fig. 1. Additional SM background sources in the inclusive diphoton channel include Drell-Yan production with both electrons misidentified as photons, $`W\gamma \gamma `$ production, etc., and are all negligible compared to the QCD background. The absolute normalization of the background obtained by the above method agrees well with the actual background measured by CDF and DØ in the diphoton mode Wilson ; monopole .
In Fig. 2 we show the 95% CL upper limit on the differential cross section after cuts $`d(\epsilon \times \sigma (\gamma \gamma +X))/dM_{\gamma \gamma }`$ as a function of the diphoton invariant mass $`M_{\gamma \gamma }`$, given the above background prediction (here $`\epsilon `$ is the product of the acceptance and all efficiencies). This limit represents $`1.96\sigma `$ sensitivity to a narrow signal when doing a counting experiment in 1 GeV diphoton mass bins. This plot can be used to obtain the sensitivity to any resonance decaying into two photons as follows. One first fixes the width of the mass window around the signal peak which is used in the analysis. Then one takes the average value of the 95% C.L. limit in $`d\sigma /dM_{\gamma \gamma }`$ across the mass window from Fig. 2 and multiplies it by $`\sqrt{w/\text{GeV}}`$, where $`w`$ is the width of the mass window<sup>1</sup><sup>1</sup>1The square root enters the calculation since the significance is proportional to the background to the $`1/2`$ power., to obtain the corresponding 95% CL upper limit on the signal cross-section after cuts. Similar scaling could be used if one is interested in the 3$`\sigma `$ or 5$`\sigma `$ reach.
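The scaling recipe of the preceding paragraph, in code (the numbers are purely illustrative):

```python
import math

def resonance_limit_fb(dsigma_limits_fb_per_gev, window_gev):
    """Convert the average 95% CL limit on dsigma/dM (fb/GeV) across a
    mass window of width w (GeV) into a 95% CL limit on the signal
    cross section after cuts (fb).  The sqrt(w/GeV) factor enters
    because the significance scales as 1/sqrt(background) and the
    background under the peak grows linearly with w."""
    avg = sum(dsigma_limits_fb_per_gev) / len(dsigma_limits_fb_per_gev)
    return avg * math.sqrt(window_gev)

# e.g. an average limit of 5 fb/GeV over a 4 GeV window -> 10 fb
print(resonance_limit_fb([4.0, 5.0, 6.0, 5.0], 4.0))
```

The same scaling, multiplied by 1.53 or 2.55, converts the 95% CL curve into a 3-sigma or 5-sigma reach estimate, respectively.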
### What is the optimum mass window cut?
When searching for narrow resonances in the presence of large backgrounds ($`B`$), the best sensitivity to signal ($`S`$) is achieved by performing an unbinned maximum-likelihood fit to the sum of the expected signal and background shapes. However, simple counting experiments give similar sensitivity if the size of the signal "window" is optimized. For narrow resonances the observed width<sup>2</sup><sup>2</sup>2Notice that the width is defined so that the cross-section at $`\pm \mathrm{\Gamma }/2`$ away from the peak is a factor of 2 smaller than the peak value (FWHM). For a Gaussian resonance the width is related to the standard deviation $`\sigma `$ by $`\mathrm{\Gamma }=2\sigma \sqrt{\mathrm{ln}4}\approx 2.35\sigma `$. $`\mathrm{\Gamma }`$ is dominated by instrumental effects, and is often Gaussian. The background in a narrow window centered on the assumed position $`M_0`$ of the peak in the signal invariant-mass distribution can be treated as linear. Therefore, the Gaussian significance of the signal, $`S/\sqrt{B}`$, as a function of the window width $`w`$, is given by:
$$\frac{S}{\sqrt{B}}\propto \frac{1}{\sqrt{w}}\frac{1}{\sqrt{2\pi }\sigma }\int _{M_0-w/2}^{M_0+w/2}d\sqrt{s}\mathrm{exp}\left(-\frac{(\sqrt{s}-M_0)^2}{2\sigma ^2}\right)\propto \frac{1}{\sqrt{w/\mathrm{\Gamma }}}\mathrm{erf}\left(\sqrt{\mathrm{ln}2}\frac{w}{\mathrm{\Gamma }}\right),$$
(1)
where erf$`(x)`$ is the error function
$$\mathrm{erf}(x)=\frac{2}{\sqrt{\pi }}\int _0^xe^{-t^2}dt.$$
The function (1) is shown in Fig. 3 and has a maximum at $`w\approx 1.2\mathrm{\Gamma }`$, which corresponds to a $`\pm 1.2(\mathrm{\Gamma }/2)`$ cut around the resonance maximum.
For resonances significantly wider than the experimental resolution, the shape is given by the Breit-Wigner function, and in this case the significance is:
$$\frac{S}{\sqrt{B}}\propto \frac{1}{\sqrt{w}}\int _{(M_0-w/2)^2}^{(M_0+w/2)^2}\frac{ds}{(s-M_0^2)^2+M_0^2\mathrm{\Gamma }^2}\propto \frac{1}{\sqrt{w/\mathrm{\Gamma }}}\mathrm{arctan}\left(\frac{w}{\mathrm{\Gamma }}\right).$$
(2)
This function, also shown in Fig. 3, peaks at a similar value of $`w`$ ($`w\approx 1.4\mathrm{\Gamma }`$). We see that for both Gaussian and Breit-Wigner resonances the significance does not change appreciably for cuts in the range $`w=(1-2)\mathrm{\Gamma }`$. For our analysis we shall use two representative choices, $`w=1.2\mathrm{\Gamma }`$ and $`w=2\mathrm{\Gamma }`$, for the mass window, which we shall always center on the actual Higgs mass.
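The quoted optima can be checked with a short numerical scan (our own cross-check of Eqs. (1) and (2), not part of the original analysis):

```python
import math

def sig_gauss(x):
    """Relative significance for a Gaussian peak; x = w / Gamma."""
    return math.erf(math.sqrt(math.log(2)) * x) / math.sqrt(x)

def sig_bw(x):
    """Relative significance for a Breit-Wigner peak; x = w / Gamma."""
    return math.atan(x) / math.sqrt(x)

def argmax(f, lo=0.2, hi=4.0, n=4000):
    """Brute-force grid search for the maximizer of f on [lo, hi]."""
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return max(xs, key=f)

print(round(argmax(sig_gauss), 1))  # -> 1.2
print(round(argmax(sig_bw), 1))     # -> 1.4
```

Both curves are quite flat around their maxima, which is why any window in the range $`(1-2)\mathrm{\Gamma }`$ performs nearly as well.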
Clearly, one can do even better in principle, by suitably resizing and repositioning the mass window around the bump in the combined $`S+B`$ distribution. Because of the steeply falling parton luminosities, the signal mass peak is skewed and its maximum will appear somewhat below the actual physical mass. In our analysis we choose not to take advantage of these slight improvements, thus accounting for unknown systematics.
### Inclusive channel: results
In Tables 1 and 2 we show the inclusive $`\gamma \gamma +X`$ background rates in fb for different Higgs masses, for $`w=1.2\mathrm{\Gamma }`$ and $`w=2\mathrm{\Gamma }`$ mass window cuts, respectively.
Here we have added the intrinsic width $`\mathrm{\Gamma }_h`$ and the experimental resolution $`\mathrm{\Gamma }_{\mathrm{exp}}=2\sqrt{\mathrm{ln}4}\times \sigma _{\mathrm{exp}}\approx 2.35\times 0.15\sqrt{2}\sqrt{E(\gamma )}\approx 0.35\sqrt{M_h}`$ (in GeV) in quadrature: $`\mathrm{\Gamma }=\left(\mathrm{\Gamma }_h^2+\mathrm{\Gamma }_{\mathrm{exp}}^2\right)^{1/2}`$. The width $`\mathrm{\Gamma }`$ varies between 3.5 GeV for $`M_h=100`$ GeV and 29.0 GeV for $`M_h=400`$ GeV. The two tables also show the significance (for 1 fb<sup>-1</sup> of data, and assuming $`\mathrm{B}(h\to \gamma \gamma )=100\%`$) in the inclusive diphoton channel when only associated $`Wh/Zh`$ production and $`WW/ZZ\to h`$ fusion are included in the signal sample. We see that (as can also be anticipated from Fig. 3) a $`w=1.2\mathrm{\Gamma }`$ cut around the Higgs mass typically gives a better statistical significance, especially for lighter (and therefore narrower) Higgs bosons.
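The quoted widths follow from the quadrature formula; a quick cross-check (the intrinsic widths fed in below are illustrative, whereas in the analysis they come from the model via HDECAY):

```python
import math

def total_width(m_h, gamma_h):
    """Observed peak width: intrinsic width gamma_h (GeV) added in
    quadrature with the diphoton mass resolution ~0.35*sqrt(M_h) GeV."""
    gamma_exp = 0.35 * math.sqrt(m_h)
    return math.hypot(gamma_h, gamma_exp)

# Light Higgs: intrinsic width negligible, resolution dominates
print(round(total_width(100.0, 0.1), 1))   # -> 3.5
# Heavy Higgs: an intrinsic width of ~28 GeV at 400 GeV dominates
print(round(total_width(400.0, 28.0), 1))  # ~29, cf. the quoted 29.0 GeV
```

This makes explicit why the resolution sets the window size at low mass while the intrinsic width takes over at high mass.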
### Exclusive channels: analysis
The next question is whether the sensitivity can be further improved by requiring additional objects in the event. The point is that a significant fraction of the signal events from both associated $`Wh/Zh`$ production and $`WW/ZZ`$ fusion will have additional hard objects, most often QCD jets. In Fig. 4 we show the "jet" multiplicity in associated $`Wh`$ production, where for detector simulation we have used the SHW package SHW with a few modifications as in SHWmod . Here we treat "jets" in a broader context, including electrons and tau jets as well.
Previous studies Wilson ; D0 have required two or more additional QCD jets. Here we shall also consider the signature with at least one additional "jet", where a "jet" is an object with $`|\eta |<2`$. The advantages of not requiring a second "jet" are twofold. First, in this way we can also pick up signal from $`WW/ZZ\to h`$ fusion, whose cross-section does not fall off as steeply with $`M_h`$, and in fact for $`M_h>200`$ GeV is larger than the cross-section for associated $`Wh/Zh`$ production<sup>3</sup><sup>3</sup>3In the case of a topcolor Higgs (see the next section) we would also pick up events with initial-state gluon radiation, comprising about 30% of the gluon-fusion signal, which is the dominant production process for any Higgs mass.. Events from $`WW/ZZ\to h`$ fusion typically contain two very hard forward jets, one of which may easily pass the jet selection cuts. In Fig. 5 we show the pseudorapidity distribution of the two spectator jets in $`WW/ZZ\to h`$ fusion (red) and associated $`Wh/Zh`$ production (blue). Second, by requiring only one additional jet, we gain in signal acceptance. In order to compensate for the corresponding background increase, we shall consider several $`p_T`$ thresholds for the additional jet and choose the one giving the largest significance.
For the exclusive channels we need to rescale the background from Fig. 1 as follows. From Monte Carlo we obtain reduction factors of $`4.6\pm 0.5`$, $`6.2\pm 1.0`$, $`7.6\pm 1.4`$, and $`8.6\pm 1.5`$ for the $`\gamma \gamma +1`$ jet channel, with $`p_T(j)>20`$, 25, 30 and 35 GeV, respectively. For the $`\gamma \gamma +2`$ jets channel the corresponding background reduction is $`21\pm 5`$, $`38\pm 12`$, $`58\pm 21`$, and $`74\pm 26`$, depending on the jet $`p_T`$ cuts. These scaling factors agree well with those from the CDF and DØ data from Run I.
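To see how an extra-jet requirement trades signal acceptance against background suppression, one can sketch the naive $`S/\sqrt{B}`$ scaling (illustrative only: the yields below are hypothetical, and the actual analysis uses the fully simulated samples and cuts described in the text).

```python
import math

def rescaled_significance(s, b, signal_eff, bg_reduction):
    # Naive S/sqrt(B): after the extra-jet cut the signal keeps a fraction
    # signal_eff while the background is divided by bg_reduction.
    return (signal_eff * s) / math.sqrt(b / bg_reduction)

# Hypothetical example: a cut keeping 50% of the signal while reducing the
# background by the factor 7.6 quoted above for p_T(j) > 30 GeV.
gain = (rescaled_significance(10.0, 100.0, 0.5, 7.6)
        / rescaled_significance(10.0, 100.0, 1.0, 1.0))
```

Algebraically the gain is $`\epsilon \sqrt{r}`$, so the cut pays off whenever the background reduction $`r`$ exceeds $`1/\epsilon ^2`$.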
Notice that we choose not to impose an invariant dijet mass ($`M_{jj}`$) cut for the $`\gamma \gamma +2`$ jets channel. We do not expect that it would lead to a gain in significance, for several reasons. First, given the relatively high jet $`p_T`$ cuts needed for the background suppression, there will be hardly any background events left with dijet invariant masses below the (very wide) $`W/Z`$ mass window. Second, the signal events from $`WW/ZZ`$ fusion, which typically comprise about 25–30% of our signal, will have a dijet invariant mass distribution very similar to that of the background. Finally, not imposing the $`M_{jj}`$ cut allows for a higher signal acceptance because of the inevitable combinatorial ambiguity for the events with $`>2`$ jets.
The significances for the two exclusive channels, with the four different jet $`p_T`$ cuts, are also shown in Tables 1 and 2. We see that the exclusive $`\gamma \gamma +2`$ jets channel with $`p_T(j)>30`$ GeV typically gives the largest significance, but our new exclusive $`\gamma \gamma +1`$ jet channel is following very close behind.
### Exclusive channels: results
We are now ready to present our results for the Run II Tevatron reach for a bosonic Higgs. In Fig. 6 we show the 95% CL upper limit on the branching ratio $`\mathrm{B}(h\to \gamma \gamma )`$, with 0.1 (cyan), 2.0 (green) and 30 $`\mathrm{fb}^{-1}`$ (red), as a function of $`M_h`$. For each mass point, we compare the significance for both the inclusive as well as the exclusive channels with all the different cuts, and for the limit we choose the channel with the set of cuts providing the best reach. It turns out that for the case at hand the winners are $`2\gamma +2j`$ with $`p_T(j)>25`$ GeV; $`2\gamma +2j`$ with $`p_T(j)>30`$ GeV; and $`2\gamma +1j`$ with $`p_T(j)>30`$ GeV. In the figure we also show the HDECAY hdecay prediction for $`\mathrm{B}(h\to \gamma \gamma )`$ in the case of a "bosonic" Higgs. The reach shown for 0.1 $`\mathrm{fb}^{-1}`$ is intended as a comparison to Run I; in fact, for the 0.1 $`\mathrm{fb}^{-1}`$ curve we scaled down both the signal and background cross-sections to their values at 1.8 TeV center-of-mass energy, keeping the efficiencies the same. In other words, the region marked as Run I′ would have been the hypothetical reach in Run I, if the improved Run II detectors had been available at that time. As seen from Fig. 6, the reach for a "bosonic" Higgs bosmodels (at 95% CL) in Run IIa and Run IIb is $`\sim 115`$ GeV and $`\sim 125`$ GeV, respectively. This is a significant improvement over the ultimate reach from LEP LEP limits of $`\sim 105`$ GeV.
## Tevatron Reach for a Topcolor Higgs
Here we consider the case of a "topcolor" bosonic Higgs, where the Higgs also couples to the top and other heavy quarks topmodels . We therefore include events from gluon fusion in our signal sample. We used the next-to-leading order cross-sections for gluon fusion from the HIGLU program higlu .
In Tables 3 and 4 we show the significance (for 1 fb<sup>-1</sup> of data, and again assuming $`\mathrm{B}(h\to \gamma \gamma )=100\%`$) in the inclusive and the two exclusive channels, for the topcolor Higgs case. Since gluon fusion, which rarely has additional hard jets, is the dominant production process, the inclusive channel typically provides the best reach. However, the $`2\gamma +1j`$ channel is again very competitive, since the additional hard jet requirement manages to suppress the background at a reasonable signal cost. We see that our new $`2\gamma +1j`$ channel clearly gives a better reach than the $`2\gamma +2j`$ channel Lauer ; D0 ; Wilson . For Higgs masses above $`\sim 180`$ GeV, it sometimes becomes marginally better even than the inclusive diphoton channel. The specific jet $`p_T`$ cut and mass window size $`w`$ seem to be less of an issue: from Tables 3 and 4 we see that $`p_T(j)>25`$ GeV, $`p_T(j)>30`$ GeV and $`p_T(j)>35`$ GeV work almost equally well, and for $`M_h\gtrsim 200`$ GeV both values of $`w`$ are acceptable.
In Fig. 7 we show the Run II reach for the branching ratio $`\mathrm{B}(h\to \gamma \gamma )`$ as a function of the Higgs mass, for the case of a "topcolor" Higgs boson. This time the channels with the best signal-to-noise ratio are the inclusive $`2\gamma +X`$ channel and the $`2\gamma +1j`$ channel with $`p_T(j)>30`$ GeV, both with $`w=1.2\mathrm{\Gamma }`$.
## Conclusions
We have studied the Tevatron reach for Higgs bosons decaying into photon pairs. For purely "bosonic" Higgses, which only couple to gauge bosons, the $`2\gamma +2j`$ channel offers the best reach, but the $`2\gamma +1j`$ channel is almost as good. For topcolor Higgs bosons, which can also be produced via gluon fusion, the inclusive $`2\gamma +X`$ channel is the best, but the $`2\gamma +1j`$ channel is again very competitive. We see that in both cases the $`2\gamma +1j`$ channel is a no-lose option!
Acknowledgments. We would like to thank S. Mrenna for many useful discussions and B. Dobrescu for comments on the manuscript. This research was supported in part by the U.S. Department of Energy under Grants No. DE-AC02-76CH03000 and DE-FG02-91ER40688. Fermilab is operated under DOE contract DE-AC02-76CH03000. |
# DUSTY TORI OF SEYFERT NUCLEI
## 1. INTRODUCTION
Dusty tori around active galactic nuclei (AGNs) play an important role in the classification of Seyfert galaxies (Antonucci & Miller 1985; see also Antonucci 1993 for a review). Seyfert galaxies observed from a face-on view of the torus are recognized as type 1 Seyferts (S1s) while those observed from an edge-on view are recognized as type 2 Seyferts (S2s). Therefore, the physical properties of dusty tori are of great interest. We briefly introduce three statistical studies investigating properties of dusty tori: 1) physical sizes of dusty tori based on water-vapor maser emission (Taniguchi & Murayama 1998), 2) the ionization condition of the inner wall of tori based on high-ionization emission lines (Murayama & Taniguchi 1998a,b), and 3) the viewing angle toward dusty tori based on mid-infrared color (Murayama, Mouri, & Taniguchi 2000). See the cited references for detailed discussion.
## 2. Dusty Tori of Seyfert Nuclei Posed by the Water Vapor Maser Emission
### 2.1. Water Vapor Maser Emission in Active Galactic Nuclei
The recent VLBI/VLBA measurements of the H<sub>2</sub>O maser emission of the nearby AGNs NGC 1068 (Gallimore et al. 1996; Greenhill et al. 1996; Greenhill & Gwinn 1997), NGC 4258 (Miyoshi et al. 1995; Greenhill et al. 1995a, 1995b), and NGC 4945 (Greenhill, Moran, & Herrnstein 1997) have shown that the masing clouds are located at distances of $`\sim `$ 0.1–1 pc from the nuclei. These distances are almost comparable to those of the molecular/dusty tori which are the most important ingredient to explain the observed diversity of AGN (Antonucci & Miller 1985; Antonucci 1993). It is therefore suggested that the masing clouds reside in the tori themselves (e.g., Greenhill et al. 1996). Therefore, the H<sub>2</sub>O maser emission provides a useful tool to study the physical properties of dusty tori, which are presumed to be the fueling agent onto the supermassive black hole (cf. Krolik & Begelman 1988; Murayama & Taniguchi 1997).
### 2.2. A Statistical Size of the Dusty Tori Inferred from the Frequency of Occurrence of H<sub>2</sub>O Masers
The recent comprehensive survey of the H<sub>2</sub>O maser emission for $`\sim `$ 350 AGNs by Braatz et al. (1997; hereafter BWH97) has shown that the H<sub>2</sub>O maser emission has not yet been observed in S1s and that the S2s with the H<sub>2</sub>O maser emission have higher H I column densities toward the central engine. It is hence strongly suggested that the maser emission can be detected only when the dusty torus is viewed from an almost edge-on direction. This is supported by the ubiquitous presence of the so-called main maser component, whose velocity is close to the systemic one, whenever the maser emission is observed, because this component arises from dense molecular gas clouds along the line of sight between the background amplifier (the central engine) and us (see, e.g., Miyoshi et al. 1995; Greenhill et al. 1995b).
Since the high H I column density is achieved only when we see the torus within the aspect angle, $`\varphi =\mathrm{tan}^{-1}(h/2b)`$ (see Figure 1), we are able to estimate $`b`$ because the detection rate of H<sub>2</sub>O maser emission, $`P_{\mathrm{maser}}`$, can be related to the aspect angle as $`P_{\mathrm{maser}}=N_{\mathrm{maser}}/(N_{\mathrm{maser}}+N_{\mathrm{non}\mathrm{maser}})=\mathrm{cos}(90\mathrm{°}-\varphi )`$, where $`N_{\mathrm{maser}}`$ and $`N_{\mathrm{non}\mathrm{maser}}`$ are the numbers of AGN with and without the H<sub>2</sub>O maser emission, respectively. This relation gives the outer radius, $`b=h[2\mathrm{tan}(90\mathrm{°}-\mathrm{cos}^{-1}P_{\mathrm{maser}})]^{-1}`$. Table 1 shows that a typical detection rate is $`P_{\mathrm{maser}}\sim `$ 0.05. However, this value should be regarded as a lower limit because some special properties of the torus may be necessary to cause the maser emission (Wilson 1998). If we take account of the new detections of H<sub>2</sub>O maser emission from NGC 5793 (Hagiwara et al. 1997) and NGC 3735 (Greenhill et al. 1997b), which were discovered by two other maser surveys independent of BWH97, the detection rate may be as high as $`\sim `$ 0.1 (Wilson 1998). Therefore, we estimate $`b`$ for the two cases: 1) $`P_{\mathrm{maser}}`$ = 0.05, and 2) $`P_{\mathrm{maser}}`$ = 0.1. These two rates correspond to the aspect angles $`\varphi \simeq 2.9\mathrm{°}`$ and $`\varphi \simeq 5.7\mathrm{°}`$, respectively. In Table 2, we give the estimates of $`b`$ for three cases, $`a`$ = 0.1, 0.5, and 1 pc. If $`a>`$ 1 pc, the H I column density becomes lower than $`10^{23}`$ cm<sup>-2</sup> given $`M_{\mathrm{gas}}=10^5M_{\odot }`$. Therefore, it is suggested that the inner radius may be in a range between 0.1 pc and 0.5 pc for typical Seyfert nuclei. The inner radii of the H<sub>2</sub>O masing regions in NGC 1068, NGC 4258, and NGC 4945 are indeed in this range (Greenhill et al. 1996; Miyoshi et al. 1997; Greenhill et al. 1997a). We thus obtain possible sizes of the dusty tori: ($`a,b,h`$) = (0.1–0.5 pc, 1.67–8.35 pc, 0.33–1.67 pc) for $`\varphi \simeq 5.7\mathrm{°}`$, and ($`a,b,h`$) = (0.1–0.5 pc, 3.29–16.5 pc, 0.33–1.67 pc) for $`\varphi \simeq 2.9\mathrm{°}`$. All the cases can achieve $`N_{\mathrm{HI}}>10^{23}`$ cm<sup>-2</sup>, consistent with the observations (BWH97).
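The geometry above can be checked with a few lines of arithmetic. The sketch below, under the thin-torus assumptions of Figure 1, recovers the quoted aspect angles and outer radii; it is only an illustration of the relations in the text.

```python
import math

def aspect_angle_deg(p_maser):
    # P_maser = cos(90 deg - phi) = sin(phi), so phi = arcsin(P_maser).
    return math.degrees(math.asin(p_maser))

def outer_radius(h, p_maser):
    # b = h * [2 tan(90 deg - arccos(P_maser))]^-1 = h / (2 tan phi).
    phi = math.asin(p_maser)
    return h / (2.0 * math.tan(phi))
```

For $`P_{\mathrm{maser}}=0.05`$ this gives $`\varphi \simeq 2.9\mathrm{°}`$ and, with $`h=1.67`$ pc, $`b\simeq 16.7`$ pc, close to the tabulated values.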
## 3. High-Ionization Nuclear Emission-Line Regions on the Inner Surface of Dusty Tori
### 3.1. High-Ionization Emission Lines in Seyfert Galaxies
Optical spectra of active galactic nuclei (AGN) often show very high ionization emission lines such as \[Fe VII\], \[Fe X\], and \[Fe XIV\] (the so-called coronal lines). According to the current unified models (Antonucci & Miller 1985; Antonucci 1993), it is generally believed that a dusty torus surrounds both the central engine and the BLR. Since the inner wall of the torus is exposed to intense radiation from the central engine, it is naturally expected that the wall can be one of the important sites for the high-ionization nuclear emission-line region (HINER; Pier & Voit 1995). If the inner wall is an important site of the HINER, the S1s should tend to have more intense HINER emission, because the inner wall would be obscured by the torus itself in S2s.
In order to examine whether or not the S1s tend to have excess HINER emission, we study the frequency distributions of the \[Fe VII\] $`\lambda `$6087/\[O III\] $`\lambda `$5007 intensity ratio for S1s and S2s. The data were compiled from the literature (Osterbrock 1977, 1985; Koski 1978; Osterbrock & Pogge 1985; Shuder & Osterbrock 1981) and our own optical spectroscopic data of one S1 (NGC 4051) and four S2s (NGC 591, NGC 5695, NGC 5929, and NGC 5033). In total, our sample contains 18 S1s and 17 S2s. The result is shown in Figure 2: the S1s are stronger \[Fe VII\] emitters than the S2s. In order to verify that this difference is really due to excess \[Fe VII\] emission, we compare the \[O III\] luminosity between the S1s and S2s and find that the \[O III\] luminosity distribution is nearly the same for the S1s and the S2s (Figure 3). Therefore, we conclude that the higher \[Fe VII\]/\[O III\] intensity ratio in the S1s is indeed due to excess \[Fe VII\] emission rather than weaker \[O III\] emission in the S1s. The presence of an excess \[Fe VII\] emission in S1s can only be explained if there is a fraction of the inner HINER that cannot be seen in the S2s. The height of the inner wall is of order 1 pc (Gallimore et al. 1997; Pier & Krolik 1992, 1993). Therefore, given that the torus obscures this HINER from our line of sight, the effective height of the torus should be significantly larger than 1 pc.
### 3.2. Three-Component HINER
Although our new finding strongly suggests that part of the HINER emission arises from the inner walls of dusty tori, it should be remembered that a number of S2s also have a HINER. In fact, the fraction of Seyfert nuclei with a HINER is nearly the same for S1s and S2s (Osterbrock 1977; Koski 1978). If the HINER were mostly concentrated in the inner 1 pc region, we would observe the HINER only in the S1s. Therefore the presence of a HINER in the S2s implies that there is another HINER component which has no viewing-angle dependence. A typical dimension of such a component is of order 100 pc, like that of the NLR. In addition, it is also known that some Seyfert nuclei have an extended HINER whose size amounts up to $`\sim `$ 1 kpc (Golev et al. 1994; Murayama, Taniguchi, & Iwasawa 1998). The presence of such extended HINERs is usually explained as the result of very low-density conditions in the interstellar medium ($`n_\mathrm{H}\lesssim 1`$ cm<sup>-3</sup>), which make it possible to achieve higher ionization conditions (Korista & Ferland 1989).
The arguments described here strongly suggest that there are three kinds of HINER: 1) the torus HINER ($`r<1`$ pc), 2) the HINER associated with the NLR ($`10<r<100`$ pc), and 3) the very extended HINER ($`r\sim `$ 1 kpc). A schematic illustration of the HINER is shown in Figure 4.
### 3.3. Dual-Component Photoionization Calculations for HINER
Any single-component photoionization model underpredicts the higher ionization emission lines (see Murayama & Taniguchi 1998b and references therein). We therefore proceed to construct dual-component models in which the inner surface of a torus is introduced as a new ionized-gas component, in addition to the traditional NLR component, with the photoionization code CLOUDY (Ferland 1996). The single-cloud model suggests that the ionization parameter lies in the range $`\mathrm{log}U\simeq -1.5`$ to $`-2`$. As for the electron density, it is often considered that the inner edges of tori have higher electron densities, e.g., $`n_\mathrm{e}\sim 10^{7\text{–}8}`$ cm<sup>-3</sup> (Pier & Voit 1995). Because the largest \[Fe VII\]/\[O III\] ratio of the observed data is $`\sim 0.5`$, the \[Fe VII\]/\[O III\] ratio of the torus component must be greater than 0.5. However, we find that ionization-bounded models cannot explain the observed large \[Fe VII\]/\[O III\] values by simply increasing electron densities up to $`10^9`$ cm<sup>-3</sup>. Further, such very high-density models yield unusually strong \[O I\] emission with respect to \[O III\]. We therefore assume "truncated" clouds with both large \[Fe VII\]/\[O III\] ratios and little low-ionization line emission for the HINER torus. The calculations were stopped at the hydrogen column density at which \[Fe VII\]/\[O III\] $`=1`$. We performed the photoionization calculations described above and finally adopted the model with $`n_\mathrm{H}=10^{7.5}`$ cm<sup>-3</sup> and $`\mathrm{log}U=-2.0`$ as the representative model for the HINER torus, taking the \[Fe X\]/\[Fe VII\] ratios predicted by the calculations into account.
Now we can construct dual-component models combining this torus component model with the NLR models. In Figure 5, we present the results of the dual-component models. Here the lowest dashed line shows the results of the NLR component models with $`\alpha =-1`$, $`\mathrm{log}U=-2`$, as a function of $`n_\mathrm{H}`$ from 1 cm<sup>-3</sup> to $`10^6`$ cm<sup>-3</sup>. If we allow the contribution from the torus component to reach up to $`\sim 50`$% in the Seyferts with very high \[Fe VII\]/\[O III\] ratios, we can explain all the data points without invoking an unusual iron overabundance. Note that the majority of objects can be explained by simply introducing a $`\sim 10`$% contribution from the HINER torus.
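The dual-component mixing itself is simple flux bookkeeping. In the sketch below (our own illustration, not the CLOUDY calculation), `f_torus` is the torus share of the total \[O III\] flux; the torus ratio of 1 is the truncation condition adopted above, while the NLR ratio of 0.05 is a hypothetical value.

```python
def combined_ratio(f_torus, r_torus=1.0, r_nlr=0.05):
    # Flux-weighted [Fe VII]/[O III] of the blend of the two components,
    # with f_torus the fraction of the total [O III] flux from the torus.
    return f_torus * r_torus + (1.0 - f_torus) * r_nlr
```

A 50% torus contribution then reproduces the largest observed ratio of about 0.5, while a 10% contribution already lifts the ratio well above the pure-NLR value.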
## 4. New Mid-Infrared Diagnostic of the Dusty Torus Model for Seyfert Nuclei
### 4.1. The New MIR Diagnostic
The current unified model of active galactic nuclei (AGNs) has introduced the dusty torus around the central engine (Antonucci 1993). Therefore, it is urgent to study the basic properties of dusty tori (e.g., Pier & Krolik 1992). Utilizing the anisotropic property of dusty torus emission, we propose a new MIR diagnostic to estimate a critical viewing angle of the dusty torus between type 1 and 2 AGNs.
Because of the anisotropic properties of the dusty torus emission, the emission at $`\lambda <`$ 10 $`\mu `$m is systematically stronger in type 1 AGNs than in type 2s while that at $`\lambda >`$ 20 $`\mu `$m is not significantly different between type 1 and type 2 AGNs. Therefore the luminosity ratio between 3.5 $`\mu `$m and 25 $`\mu `$m is expected to be highly useful to distinguish between type 1 and 2 AGNs (Figure 6). Here we define the above ratio as
$$R=\mathrm{log}\nu _{3.5\mu \mathrm{m}}f_{\nu _{3.5\mu \mathrm{m}}}/\nu _{25\mu \mathrm{m}}f_{\nu _{25\mu \mathrm{m}}}.$$
### 4.2. Results & Discussion
We adopt three samples chosen by different selection criteria and compile the photometric data in the $`L`$, $`N`$, and IRAS 25 μm bands:
1. 18 S1s and 6 S2s from the CfA Seyfert galaxies (Huchra & Burg 1992)
2. 20 S1s and 4 S2s from the sample of Ward et al. (1987), which is limited by the hard X-ray flux from 2 to 10 keV
3. 11 S1s and 11 S2s from the sample of Roche et al. (1991), which is composed of $`N`$-band bright objects
Since some objects are included in more than one sample, there are 31 S1s and 14 S2s in total.
The type 1 Seyferts are clearly distinguished from the type 2s with a critical value $`R\simeq -0.6`$: $`R>-0.6`$ for type 1s while $`R<-0.6`$ for type 2s (Figures 7a–d). If we apply the Kolmogorov–Smirnov (KS) test, the probability that the observed distributions of S1s and S2s originate in the same underlying population turns out to be 0.275%.
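The diagnostic reduces to one line of arithmetic. A minimal sketch, taking the critical value at $`R_{\mathrm{crit}}=-0.6`$ ($`R`$ is negative for these objects since $`\nu f_\nu `$ at 3.5 μm is typically the weaker of the two fluxes):

```python
import math

def mir_ratio(nu_f_nu_3p5, nu_f_nu_25):
    # R = log10[ nu f_nu(3.5 um) / nu f_nu(25 um) ]
    return math.log10(nu_f_nu_3p5 / nu_f_nu_25)

def seyfert_type(r, r_crit=-0.6):
    # R above the critical value suggests a type 1; below it, a type 2.
    return 1 if r > r_crit else 2
```

The classification thus needs only two broad-band fluxes per object, which is what makes the diagnostic convenient for heterogeneous samples.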
The upper panel of Figure 8 shows the theoretical models of Pier & Krolik (1992, 1993), which are characterized by $`a`$ (the inner radius of the torus), $`h`$ (the full height of the torus), $`\tau _\mathrm{r}`$ (the radial Thomson optical depth), $`\tau _\mathrm{z}`$ (the vertical Thomson optical depth), and $`T`$ (the effective temperature of the torus) \[see Figure 9\]. The intersection of each model locus with $`R=-0.6`$ gives a critical viewing angle. The critical viewing angle is expected to be nearly the same as the typical semi-opening angle of the ionization cones observed in Seyfert nuclei, $`\sim `$ 30°–40° (cf. Lawrence 1991 and references therein). Figure 9 shows that only two models give reasonable critical viewing angles, 46°–50°, though these values are slightly larger than the semi-opening angle of the cone. The model with $`a/h`$ = 0.3 may be suitable for tori in Seyfert nuclei because this inner aspect ratio gives a semi-opening angle of the torus of $`\sim `$ 30°, consistent with those of the observed ionization cones. Although there is some contamination from the host galaxies, circumnuclear starbursts, and dust emission in the narrow-line regions, the new diagnostic provides a powerful tool to study the critical viewing angle.
| Nucl. Instr. Meth. A324 (1993) 535 |
| --- |
| CBPF NF-013-92 |
| UMS/HEP/92-019 |
| FERMILAB-Pub-92-137-E |
The E791 Parallel Architecture Data Acquisition System
S. Amato, J. R. T. de Mello Neto (now at the Universidade Estadual do Rio de Janeiro, RJ, Brasil), and J. de Miranda
Centro Brasileiro de Pesquisas Físicas
Rio de Janeiro, Brasil
C. James
Fermilab, Batavia, IL 60510 USA
D. J. Summers
Department of Physics and Astronomy
University of Mississippi, Oxford, MS 38677 USA
S. B. Bracker
317 Belsize Drive
Toronto, Ontario M4S1M7 Canada
Abstract
To collect data for the study of charm particle decays, we built a high speed data acquisition system for use with the E791 magnetic spectrometer at Fermilab. The DA system read out 24 000 channels in 50 $`\mu `$s. Events were accepted at the rate of 9 000 per second. Eight large FIFOs were used to buffer event segments, which were then compressed and formatted by 54 processors housed in 6 VME crates. Data was written continuously to 42 Exabyte tape drives at the rate of 9.6 Mb/s. During the 1991 fixed target run at Fermilab, 20 billion physics events were recorded on 24 000 8 mm tapes; this 50 Tb (Terabyte) data set is now being analyzed.
1. Introduction
Experiment 791, Continued Study of Heavy Flavors, located in Fermilab's Proton-East experimental area, examines the properties of short lived particles containing a charm quark. Events involving charm quarks are rare and difficult to recognize in real time. The experiment's strategy was to impose only loose constraints when recording data, and select the events of interest offline when time and computing resources are more available. Therefore the DA system must collect and record data very quickly.
The Fermilab Tevatron delivered beam during a 23 second spill, with a 34 second interspill period, so that the experiment generated data for 23 seconds approximately every minute. The data consists of discrete packets known as events, each of which contains particle tracking information and calorimetry for one interaction. The E769 data acquisition system used previously for this detector was able to read data at 1400 kb/s during the beam spill, and record data at 625 kb/s during both the spill and interspill; the digitizing time per event was 840 $`\mu `$s. The physics goals of E791 called for recording at least 10 times the events collected by E769, in about the same amount of beam-time. The detector's digitizing and readout time had to be reduced by at least a factor of 10; a 50 $`\mu `$s dead time per event was achieved by replacing almost all the front-end digitizers with faster systems. Events arrived at the DA system at an average rate of 26 Mb/s during the beam spill, and were recorded at more than 9 Mb/s during both the spill and interspill using 42 Exabyte 8200 tape drives.
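A back-of-the-envelope check, using only the rates quoted above, shows why the 8 × 80 Mb of FIFO buffering described below is adequate: the buffers absorb the net excess during the 23 s spill, while the slight average imbalance over a full cycle is handled by the trigger inhibit tied to the buffers' Near Full status.

```python
SPILL_S = 23.0           # beam spill length, seconds
CYCLE_S = 23.0 + 34.0    # spill plus interspill
IN_RATE = 26.0           # Mb/s into the buffers, spill only
OUT_RATE = 9.6           # Mb/s to tape, spill and interspill alike
EFB_CAPACITY = 8 * 80.0  # eight Event FIFO Buffers, 80 Mb each

# Net backlog accumulated during one spill, and the amount drained per cycle.
backlog = (IN_RATE - OUT_RATE) * SPILL_S
drained = OUT_RATE * CYCLE_S
```

The spill backlog (about 377 Mb) fits comfortably in the 640 Mb of buffer memory, while the per-cycle drain (about 547 Mb) falls just short of the roughly 598 Mb delivered per spill, which is why triggers must occasionally be inhibited.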
The following section will discuss the overall architecture and the hardware components in more detail. Following that are sections on the software used in the DA processors, and a discussion of performance and possible upgrades.
2. Architecture and Hardware
A schematic of the E791 DA system is shown in Fig. 1. Events were digitized in a variety of front-end systems and delivered into Event FIFO Buffers (EFB) along eight parallel data paths. The buffers stored 80 Mb of data apiece, enough to allow the rest of the DA system to be active during both the spill and interspill. Care was taken to ensure that each data path carried about the same amount of data. Data were distributed through Event Buffer Interfaces (EBI) to processors housed in six VME crates. The processors (CPU) read event segments from the buffers, compressed them into formatted events, and recorded them on tape through a SCSI magnetic tape controller (MTC).
The DA system is parallel in several respects. Data arrives along parallel data paths. Processors act in parallel to prepare data for logging. Many parallel tape drives record data concurrently.
3. Front Ends
The E791 detector contained silicon microstrip detectors, drift chambers, and proportional wire chambers for tracking charged particles. Calorimeters based on scintillators and phototubes measured particle energies. Gas Čerenkov detectors performed particle identification, and plastic scintillators were used for muon identification. The detector elements were digitized by various electronics systems, which were in turn managed by front-end controllers which delivered data to the DA system. The front-end hardware is summarized in Table 1.
The DA system placed specific requirements on the front-end controllers. The data paths from the controllers conformed to the EFB inputs, which were 32-bit wide RS-485 lines accompanied by a single RS-485 strobe. Data was delivered at a maximum rate of 100 ns per 32-bit word. Each event segment on the data paths was delimited by a leading word count, calculated and placed there by the data path's front-end controller. A 4-bit event synchronization number was generated for each event by a scaler module and distributed to all front-end controllers. The controllers accepted this number and made it a part of each event's segments. The DA system used the synchronization number to ensure that all event segments presented at a given moment derived from the same event in the detector. Finally, because we had 16 digitizing controllers and only 8 data paths, each data path was shared by two front-end controllers using simple token passing.
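The 4-bit synchronization number makes segment mismatches cheap to detect. A minimal sketch of such a check (our own illustration; the representation of a segment as its sync number is a simplification):

```python
def segments_synchronized(sync_numbers):
    # Every segment of a given event must carry the same 4-bit sync number;
    # the counter wraps at 16, so only the low four bits are compared.
    values = {n & 0xF for n in sync_numbers}
    return len(values) == 1
```

Because the counter is only 4 bits wide, the check cannot distinguish events exactly 16 apart, but a slip of one event, the realistic failure mode, is caught immediately.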
4. Event FIFO Buffers
Each Event FIFO Buffer (EFB) consisted of an I/O card, a FIFO Controller card, five 16 Mb Memory cards, and a custom backplane, housed two per crate in 9U by 220 mm Eurocrates. The I/O card contained the RS-485 input and output data paths, Status and Strobe lines, and a Zilog Z80 processor with a serial port used for testing. The Controller card kept track of internal pointers and counters, and managed the write, read, and memory refresh cycles. The Memory cards used low cost 1 Mb by 8 DRAM SIMMs. In E791, the EFBs received data in bursts of up to 40 Mb/s and delivered data at several Mb/s concurrently.
The data was pushed into the EFBs through a 32-bit wide RS-485 data port, controlled by a strobe line driven by the attached front-end controller. Each longword of data delivered by a front-end controller was accompanied by the strobe, which latched the data in the EFB and updated the EFB's internal pointers. The output side of the EFB had a similar data port and strobe, driven by the receiving device. The EFB maintained 4 Status lines: Full, Near Full, Near Empty, and Empty. The thresholds for Near Full and Near Empty were set by the I/O card's processor. The Near Full LEMO outputs were used in the E791 trigger logic to inhibit triggers whenever any EFB was in danger of overflowing. The Near Empty Status was used by the event building processors, and is described below.
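The status-line logic amounts to comparing the current fill level against fixed watermarks. A sketch (the threshold values are hypothetical; in the real EFB they were programmed into the I/O card's Z80):

```python
def efb_status(fill_mb, capacity=80.0, near_full=70.0, near_empty=5.0):
    # The four EFB status lines as a function of the current fill level.
    return {
        "Empty": fill_mb <= 0.0,
        "Near Empty": fill_mb <= near_empty,
        "Near Full": fill_mb >= near_full,  # wired into the trigger inhibit
        "Full": fill_mb >= capacity,
    }
```

Asserting Near Full well before Full gives the trigger logic time to throttle the front ends before any data is actually lost.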
5. Event Buffer Interface
The EBI was a VME slave module designed specifically for the E791 DA system. Its job was to strobe 32-bit longwords out of an EFB and make them available to VME-based CPUs used to process events. Figure 2 details the connections between a single EFB and its EBIs. Each VME crate held one EBI for every EFB in the system, so that every CPU had access to the output data path from every buffer. The EFB status lines were also bussed to the EBIs, so that the CPUs could determine how much data was available in the buffers. At any moment in time, only one CPU is granted control of a particular EFB. When a CPU in one crate is finished reading data from an EFB, it passes control of the buffer to the next crate through a token passing mechanism built into the EBIs.
The EBI was a simple module with a few basic operations : (a) read a data word from the EFB and strobe the next word onto the output path, (b) read the EFB status, (c) check for the buffer control token, (d) pass the buffer control token to the next EBI, and (e) set or clear the buffer control token.
6. VME CPUs
The assembling of events was performed by VME based CPUs. They contained a 16 MHz Motorola 68020 processor, a 68881 coprocessor, and 2 Mb of memory, and were able to perform VME master single-word transfers at 2 Mb/s. There were 8 Event Handler CPUs in each VME crate, plus one Boss CPU. An Absoft Fortran compiler was available for the CPUs, and most of the E791 DA code was written in Fortran, except for a few time-critical subroutines which were written in 68020 Assembler.
7. The VAX-11/780
The VAX-11/780 was used to download and start the VME system; the DA system operator's console and status displays were also connected to the VAX. A low speed link between the VAX and VME was provided by a DR11-W on the VAX Unibus, a QBBC branch bus controller, and branch bus to VME interfaces (BVI) in each VME crate.
8. Magnetic Tape Controller and Drives
Tape writing was handled by a VME to SCSI interface, the Ciprico RF3513. The tape drives used were Exabyte 8200s writing single-density, 2.3 Gigabyte 8 mm cassettes. As shown in Table 2, the choice of Exabyte drives was driven by the media costs of storing the large amount of data we expected to record.
In principle, each Magnetic Tape Controller (MTC) could be connected to 7 Exabyte drives, but we found that a single SCSI bus saturated when writing continuously to only four drives. We required a data rate to tape of about 1.6 Mb/s in each VME crate, but Exabyte drives write at a speed of only 0.24 Mb/s. Our solution was to use 2 MTCs per VME crate, and connect them to 4 and 3 Exabytes, respectively. Thus there were 7 Exabyte drives controlled from each VME crate, for a total of 42 drives in the DA system.
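The drive count per crate follows directly from the bandwidth arithmetic just quoted; a sketch of the calculation:

```python
DRIVE_RATE = 0.24      # Mb/s sustained by one Exabyte 8200
PER_CRATE_NEED = 1.6   # Mb/s that each VME crate must log to tape
CRATES = 6

def drives_per_crate():
    # Smallest number of drives whose aggregate write rate meets the need.
    n = 1
    while n * DRIVE_RATE < PER_CRATE_NEED:
        n += 1
    return n
```

Seven drives per crate (1.68 Mb/s aggregate, split 4 and 3 across the two SCSI buses) across six crates give the 42 drives of the full system.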
The MTCs stored their SCSI commands in circular command descriptor queues. The queues for both MTCs in a VME crate were managed jointly by the MTCs themselves and by one CPU in that crate. The command descriptors held information on the VME address of a block of data and the length of the block. The MTC acted as a VME master and performed the actual transfer of a block of complete events from an event building CPU onto a single tape. The tape handling software was written to ensure that all 7 Exabyte drives on a VME crate were filling their tapes at about the same rate. All 42 drives were loaded with tapes at the same time, the DA system started, and all 42 tapes filled with data at approximately the same rate. All the tapes became full within a few minutes of each other, and all 42 tapes were stopped and unloaded at the same time. During data taking, the tapes were full when 3 hours of beam time had elapsed.
9. Software
The DA software comprised three main programs. At the top was VAX, which ran in the VAX-11/780. It accepted user commands, generated status displays and error logs, and fetched a tiny fraction of the incoming data to be monitored for data quality. Next was Boss, a program that ran in one CPU in each VME crate. It managed the other CPUs in its crate, and controlled the crate's magnetic tape system. Finally came EH, the Event Handler program, which ran in several CPUs in each VME crate. Event Handlers did most of the real work, reading and checking event data, formatting and compressing events, and assembling blocks of events for eventual output to tape. The interprocessor communication protocol used by the three programs was the same as used by the E769 DA system.
Operator commands were entered on a VAX terminal, transmitted to the crate bosses by VAX, and sent to the event handlers by Boss. Status information was gathered from the event handlers by Boss and compiled into a crate report; crate reports were gathered by VAX, which generated displays and report files for the operator.
All three programs consisted of a once-only initialization code and a processing loop which ran until the program was terminated. Specific tasks were placed on the processing loop, rather like beads on a string. Each time control passed to a task, it would proceed as far as possible without waiting for external responses, set a flag recording its present state, and pass control to the next task on the loop. When that task was re-entered on the next pass of the loop, it continued where it left off, and so on until the task was completed. Good real-time response was maintained while avoiding entirely the use of interrupts.
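The beads-on-a-string loop described above can be sketched with Python generators standing in for the resumable tasks; each visit advances a task one non-blocking step and then passes control onward. This is an illustrative model only, not the original code (which ran on the VME CPUs without any such runtime support):

```python
trace = []  # records (task name, step) in the order the work is done

def make_task(name, steps):
    """A task that does one non-blocking step per visit, then yields control."""
    def task():
        for step in range(steps):
            trace.append((name, step))  # proceed as far as possible...
            yield                       # ...then give up control; state is kept implicitly
    return task()

def processing_loop(tasks):
    """Visit each task in turn; a finished task is dropped from the loop."""
    pending = list(tasks)
    while pending:
        still_running = []
        for t in pending:
            try:
                next(t)                  # resume the task where it left off
                still_running.append(t)
            except StopIteration:        # task completed; remove its "bead"
                pass
        pending = still_running

processing_loop([make_task("status", 2), make_task("tape", 3)])
```

The tasks interleave one step at a time, which is exactly how good real-time response was obtained without interrupts: no task ever blocks, so control returns to every task on each pass of the loop.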
10. Event Handler Program
The EH program had two basic states, grabber and muncher. Only one CPU in each crate could be in the grabber state at any given time. The grabber's sole duty was to read event segments from the EFBs and place them in a large internal event array, big enough to hold 200–300 events. When the crate Boss noticed that a grabber's event array was becoming quite full, it changed that grabber to the munching state, and appointed a new grabber. Because the throughput of the entire system depended on efficient event grabbing, grabbers were free of all other obligations, and the grabbing code was written in assembly language.
Munchers took events from their event arrays, formatted and compressed the data, and grouped events into physical blocks suitable for output to tape. Munching the data could take several times longer than grabbing it, so that at any moment each crate would have one grabber and several busy munchers. Munchers were also subject to other obligations, such as responding to requests for status information and binning histograms requested by the operator.
In order to achieve high system throughput from these rather slow processors, event grabbing had to be orchestrated very carefully. At the start of data taking, one grabber would be appointed in each crate, and one crate would be designated number 1. As data arrived in the EFBs, the grabber in crate 1 would extract the event segment from EFB 1 and pass that buffer's token to crate 2. As the grabber in crate 1 moved on to reading the second segment of the first event from EFB 2, the grabber in crate 2 would start reading the first segment of the second event from EFB 1. Soon the grabbers in all six crates would be active, each reading from a different EFB. Because there were eight EFBs but only six crates with one grabber each, all the grabbers would be busy all the time.
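The net effect is that whole events are striped across crates while each event's segments are spread across the EFBs. The simplified model below captures only that data-flow pattern; it abstracts away the timing and token contention, and all names are illustrative, not the real grabber code:

```python
NUM_EFBS, NUM_CRATES = 8, 6

def grab_events(num_events):
    """Model of the token-passing readout: event n is assembled by the grabber
    in crate n % NUM_CRATES, which reads that event's segment from every EFB
    in order. Token passing guarantees each EFB is read in event order."""
    # Each EFB holds one segment of every event, in event order.
    efbs = [[(ev, f"seg{ev}.{efb}") for ev in range(num_events)]
            for efb in range(NUM_EFBS)]
    per_crate = {c: [] for c in range(NUM_CRATES)}
    for ev in range(num_events):
        crate = ev % NUM_CRATES
        segments = []
        for efb in range(NUM_EFBS):
            ev_read, data = efbs[efb].pop(0)  # reads from an EFB happen in event order
            assert ev_read == ev
            segments.append(data)
        per_crate[crate].append(segments)
    return per_crate
```

Running this for a dozen events shows each crate receiving every sixth event, each built from one segment per EFB.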
Normally the crate Boss would replace a grabber with a new one before the old grabber's event array became full. If that reassignment were delayed, the existing grabber would simply pass tokens through to the next crate without reading data, giving up the event to other grabbers that might be able to handle it. Only if all grabbers were glutted with data and no event handlers could be recruited as new grabbers would data taking slow down.
As grabbers read data from the EFBs, they checked to ensure that the event segment word counts were reasonable and that all event segments being joined together in an event had the same event synchronization number. Illegal word counts and unsynchronized events usually indicated that a front-end readout system had failed. To overlook such a failure would be very serious; pieces of unrelated data could end up being joined together into a bogus event, and the error would propagate forward for all subsequent events. When such failures were noted, the grabber notified its Boss, the Boss notified the VAX, and the VAX inhibited data taking, flushed the EFBs, and instructed the system to restart. Synchronization errors occurred with depressing regularity throughout the data taking, so it was fortunate that the DA system had the ability to recognize and respond to them quickly and automatically. A few spills a day were thus lost.
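The consistency checks performed during grabbing amount to a simple per-event validation: sane word counts and a single synchronization number across all segments. A minimal sketch (the field names and word-count limit are illustrative assumptions, not the actual assembly-language checks):

```python
def check_event(segments, max_words=512):
    """Validate one assembled event: each segment's word count must be
    reasonable, and all segments must carry the same event synchronization
    number. Returns None if the event is good, otherwise a reason string."""
    for seg in segments:
        if not 0 < seg["word_count"] <= max_words:
            return "illegal word count: %d" % seg["word_count"]
    sync_numbers = {seg["sync"] for seg in segments}
    if len(sync_numbers) != 1:
        return "unsynchronized event: sync numbers %s" % sorted(sync_numbers)
    return None
```

Because a single missed failure corrupts every subsequent event, any non-None result here corresponds to the grabber notifying its Boss and the system restarting.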
Event munching consisted of compressing the TDC data from the drift chambers (which arrived in a very inefficient format), formatting each event so that it conformed to the E791 standard, and packing events into tape buffers for output. Munchers did not control tape writing, however; they submitted output requests to their Boss, who queued the necessary commands to the tape controller, checked the status, and notified the event handler when the tape buffer could be reused. Each muncher had 10 tape buffers, each capable of holding a full-sized tape record of 65532 bytes. Although the Boss managed all tape writing, the data itself never passed to the Boss; the MTC extracted the data directly from the event handler's tape buffers.
Most of the event munching time was spent compressing TDC data to about $`\frac{2}{3}`$ of its original size. Since the TDC data was a large fraction of the total, it was important to compress the data, to conserve tape writing bandwidth and minimize tape use. In choosing readout hardware for high-rate experiments, it is important to evaluate the details of the data format very carefully (although in this instance we had no alternate choice of vendors).
11. The Boss Program
The CPU running the Boss program controlled the scheduling of each EH as a grabber or muncher. It polled the EHs on a regular basis to check the need for rescheduling. The main criterion for retiring a grabber and selecting a new one was whether the input event arrays were full or nearly full. When the system was heavily loaded, protection against too-frequent rescheduling was applied.
Managing tape writing was also the Boss's job. The Boss made periodic requests to all EHs for a list of tape buffers ready for writing. The EHs responded by giving the Boss the VME address and the length of their full tape buffers. The Boss used the information to construct the commands for the MTCs. The Boss also selected which MTC and tape drive to send a tape buffer to, based on how full the MTC's command queue was and how full the tape in the drive was. The MTCs performed the block transfer of the tape buffer from the EH processor to the Exabyte tape drive. When a tape buffer was written, the MTC informed the Boss, and the Boss in turn notified the EH that the particular tape buffer was ready for reuse.
The Bosses were also responsible for gathering status information and reports of recoverable errors and passing the information to the VAX program. The Bosses sent occasional Request Sense commands to the drives, which returned the number of blocks written to tape and the number of blocks rewritten (soft write errors). All commands sent to the Exabyte drives were returned by the MTC with a status block, and if a drive error occurred while writing data, the status block gave details on the error type. Drive errors of some types were not recoverable, and the offending drive was taken offline until the end of the data taking run. Likewise, any EH which did not respond to Boss commands within a given time limit was reset and temporarily removed from the active system. Event processing could continue even if a few EHs or Exabyte drives failed since there were multiple drives and EHs in each VME crate. The throughput of the DA system would be slightly reduced, but not stop.
12. The VAX Program
The VAX program managed and monitored the rest of the DA system. A schematic is shown in Fig. 3. The DA Control Console is shown in Fig. 4, and provided the user with general status information and a command menu. In regular data taking the user executed a LOAD after the tapes were placed in the drives, then a START to begin a data taking run. Another option was to read out the detector without sending the events to tape (START NOTAPE). During data taking the run could be suspended for a short time (PAUSE, RESUME) and under special circumstances the user could clear the EFBs (CLEAR\_BUFF). The Bosses polled the tape drives for fullness of the tapes, and sent the information to the VAX program. When 20% of the active drives were 95% full, the VAX program automatically sent the END command. The user could also END data taking whenever he wished.
In ending data-taking runs, it was necessary to allow a smooth run down of the system. The VAX first inhibited the triggers to stop the flow of data into the EFBs. The Bosses stopped the current grabber and did not schedule another one. The VAX cleared any data that remained in the EFBs, but all the events that were already in the EH input event arrays were allowed to be written to tape. The VAX waited until the Bosses reported that all tape writing was complete and file marks written before informing the user that the run was ended. The user could not START another data taking run or execute the tape drive UNLOAD command until this END process was complete.
The EHs stored a few events for online monitoring. During data taking, the VAX retrieved these events and passed them on to an event pool managed by VAXONLINE software . The event pool was accessible by other VAX workstations in the local cluster, and an entirely separate set of programs analyzed and displayed the pool events for online monitoring of the detector. Typically, the rate at which events were sent to the pool was fast enough for most monitoring needs. The DA system also provided a much faster alternative detector monitoring method. Monitoring a detector typically means making histograms (hit maps) of the detector elements, in which one can look for dead or noisy channels. Part of the EH munching code constructed such histograms upon user request. The user specified a particular section of the detector to histogram using a very simple program; the program sent the request to the VAX DA program using a DEC Mailbox facility. The request was distributed to the VME EH processors, and all the EHs in the system would accumulate all events for a period of about one minute. The Bosses and ultimately the VAX summed up the histogram contributions from each EH, and entered the final product into the event pool as a special event type. The user's program retrieved the histogram from the event pool and could use a variety of means to display it. In this way the user could get a hit map of a part of the detector with high statistics, 200 000 events or so, in a very short time.
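This hit-map scheme is essentially a map-reduce: each EH bins its own share of the events, and the Bosses and VAX sum the partial histograms element-wise. A minimal sketch (illustrative, with a hypothetical flat channel numbering):

```python
def bin_hits(events, num_channels):
    """EH side: accumulate a hit map over this processor's share of the events.
    Each event is a list of channel numbers that fired."""
    hist = [0] * num_channels
    for hit_channels in events:
        for ch in hit_channels:
            if 0 <= ch < num_channels:
                hist[ch] += 1
    return hist

def sum_histograms(partials):
    """Boss/VAX side: element-wise sum of the per-EH contributions."""
    return [sum(column) for column in zip(*partials)]
```

Because every EH sees every event for the accumulation period, the merged histogram has the full statistics of the system without any event ever leaving the VME crates.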
The VAX program retrieved status information from the Bosses on a regular basis while a data taking run was in progress. Information such as the numbers of events processed, the fullness of the tapes, and any errors that occurred was displayed on various monitors and on the DA Control Console. For every data taking run, a disk file was created which held a unique run number, the date and time the data was recorded, the number of events written to each drive during the run, the drive's soft error rate as a percent of blocks written, and whether the drive failed during the run. This file of numbers was entered automatically into an electronic database when the run was ended.
13. Performance and Conclusions
The DA system hardware performed well. As mentioned earlier, the system was tolerant of errors encountered by CPUs running the EH program and of Exabyte drives with write errors. While all the hardware components in the system experienced some infant mortality in the initial testing phases, all the components, with one exception, had very few failures in 9 months of data taking. The exception was the Exabyte drives, which often required head replacement after 2000 hours of operation. System-wide failures that halted data taking were extremely rare, and recovery, when they did occur, was rapid.
In a test mode, data was pushed into the DA system from the front-end controllers at a rate exceeding that of real data taking. The DA system then gave a maximum data rate to tape of about 9.6 Mb/s, or 1.6 Mb/s through each VME crate. Throughput in each part of the DA system was well matched: the data rate into the EFBs times the length of the beam spill matched the size of the EFBs; the grabbing speed matched the munching speed times the number of munchers in each VME crate; and the output rate from each crate matched the tape writing speed times the number of drives per crate. However, during real data taking, the maximum 9.6 Mb/s throughput was usually not attained, simply because the accelerator did not deliver enough beam to create the events.
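These rate-matching conditions are simple arithmetic; the sketch below checks them using the figures quoted here and the 0.24 Mb/s Exabyte 8200 drive speed given later in this section of the paper:

```python
crates = 6
drives_per_crate = 7
drive_rate = 0.24        # Mb/s per Exabyte 8200 drive
crate_rate = 1.6         # Mb/s through each VME crate

system_rate = crates * crate_rate                # total throughput to tape
tape_bandwidth = drives_per_crate * drive_rate   # tape-writing capacity per crate

print(round(system_rate, 2))     # 9.6 Mb/s, as quoted
print(round(tape_bandwidth, 2))  # 1.68 Mb/s, just above the 1.6 Mb/s crate rate
```

The per-crate tape bandwidth slightly exceeding the per-crate data rate is what lets all seven drives fill their tapes at about the same pace without the munchers backing up.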
In a 5 month period of data taking in 1991 and early 1992, E791 recorded 20 billion physics events on 24 000 8 mm tapes. This 50 Tb data set is now being analysed at parallel RISC computing facilities similar to those used previously in E769 . The experimentโs goal of 100 000 reconstructed charm particle decays should easily be met.
The parallel architecture of the E791 DA system is central to its success. The performance of the system could be increased with more parallel front-end controllers for faster read out, larger Event FIFO Buffers, faster CPUs with much better I/O capability, and by upgrading the 0.24 Mb/s Exabyte 8200 drives to double-speed, double-density Exabyte 8500 tape drives.
Acknowledgements
We thank the staffs of all the participating institutions and especially S. Hansen, A. Baumbaugh, K. Knickerbocker, and R. Adamo and his group, all of FNAL. This work was supported by the U. S. Department of Energy (DE-AC02-76CHO3000 and DE-FG05-91ER40622) and the Brazilian Conselho Nacional de Desenvolvimento Científico e Tecnológico.
References
* C. Gay and S. Bracker, โThe E769 Multiprocessor Based Data Acquisition Systemโ, IEEE Trans. Nucl. Sci. NS-34 (1987) 870.
* Exabyte Corp., 1745 38th Street, Boulder, CO 80301, USA.
* A. E. Baumbaugh et al., โA Real Time Data Compactor (sparsifier) and 8 Mb High Speed FIFO for HEPโ, IEEE Trans. Nucl. Sci. NS-33 (1985) 903;
K. L. Knickerbocker et al., โHigh Speed Video Data Acquisition System (VDAS) for HEPโ, IEEE Trans. Nucl. Sci. NS-34 (1986) 245.
* S. Bracker, โSpecification of the E791 Event Buffer Interfaceโ, E791 internal document;
S. Hansen, FNAL Physics Dept., personal communication.
* R. Hance et al., โThe ACP Branch Bus and Real Time Applications of the ACP Multiprocessor Systemโ, IEEE Trans. Nucl. Sci. NS-34 (1987) 878.
* Ciprico, 2955 Xenium Lane, Plymouth, Minnesota 55441, USA.
* V. White et al., โThe VAXONLINE Software System at Fermilabโ, IEEE Trans. Nucl. Sci. NS-34 (1987) 763.
* C. Stoughton and D. J. Summers, โUsing Multiple RISC CPUs in Parallel to study Charm Quarksโ, Computers in Physics 6 (1992) 371.
* Phillips Scientific, 305 Island Rd., Mahwah, New Jersey 07430, USA.
* LeCroy Research, 700 Chestnut Ridge Rd., Chestnut Ridge, NY 10977, USA.
* C. Rush, A. Nguyen and R. Sidwell, Dept. of Physics, The Ohio State University, personal communication.
* Nanometric Systems, 451 South Blvd., Oak Park, IL 60302, USA.
* M. Bernett et al., โFASTBUS Smart Crate Controller Manualโ, Fermilab Technical Document HN96 (1992).
* S. Bracker, โDescription of the Damn Yankee Controller (DYC)โ, E791 Internal Document;
S. Hansen, FNAL Physics Dept., personal communication.
* M. Purohit, Dept. of Physics, Princeton University, โPrinceton Scanner/Controller Manualโ, E791 Internal Document.
* S. Hansen et al., โFermilab Smart Crate Controllerโ, IEEE Trans. Nucl Sci. NS-34 (1987) 1003.
Table 1. E791 Front End Digitization Systems and Read Out Controllers.
| System | Drift Chamber | Čerenkov, Calorimeter | Silicon Microvertex Detector | Proportional Wire Chamber | CAMAC |
|---|---|---|---|---|---|
| Digitizer | Phillips 10C6 TDC | LeCroy 4300B FERA ADC | Ohio State, Nanometric N339P, Nanometric S710/810 Latches | LeCroy 2731A Latch | LeCroy 4448 Latch, 4508 PLU, 2551 Scaler |
| Mean Dead Time | 30 $`\mu `$s | 30 $`\mu `$s | 50 $`\mu `$s | 4 $`\mu `$s | 30 $`\mu `$s |
| Pre-Controllers | none | 2 LeCroy 4301s | 81 Princeton Scanners | 2 LeCroy 2738s | none |
| Controller | FSCC | Damn Yankee | Princeton | Damn Yankee | SCC |
| No. of Controllers | 10 | 2 | 2 | 1 | 1 |
| Channels / System | 6304 | 554 | 15896 | 1088 | 80 |
| Event Size to EFB | 480 longwords | 160 longwords | 110 longwords | 20 longwords | 11 longwords |
| Event Size to Tape | 300 longwords | 160 longwords | 110 longwords | 20 longwords | 12 longwords |
| On Tape Fraction | 50% | 27% | 18% | 3% | 2% |
Table 2. A Comparison of Storage Media. The 8 mm, 9-track, and 3480 tape prices are from the Fermilab stockroom catalog. The 4 mm DAT price is from the New York Times, 20 Jan. 1991, page 31. Prices do not include overhead.
| Tape Type | Length [m] | Capacity [Gb] | $/tape | $/50 Tbytes | Tapes/50 Tbytes |
|---|---|---|---|---|---|
| 8 mm video | 106 | 2.3 | $3.92 | $85 217 | 21 739 |
| 4 mm DAT | 60 | 1.2 | $7.79 | $324 583 | 41 667 |
| IBM 3480 | 165 | 0.22 | $4.60 | $1 045 455 | 227 272 |
| 9-track | 732 | 0.16 | $9.31 | $2 909 375 | 312 500 |
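The derived columns of Table 2 follow directly from the capacity and price columns; a quick consistency check in Python (taking 50 Tbytes as 50 000 Gb):

```python
# (capacity in Gb, price per tape in $), taken from Table 2
media = {
    "8 mm video": (2.3, 3.92),
    "4 mm DAT":   (1.2, 7.79),
    "IBM 3480":   (0.22, 4.60),
    "9-track":    (0.16, 9.31),
}

def tapes_and_cost(capacity_gb, price, dataset_gb=50_000):
    """Number of tapes and total media cost to hold the dataset."""
    tapes = dataset_gb / capacity_gb
    return tapes, tapes * price

for name, (cap, price) in media.items():
    tapes, cost = tapes_and_cost(cap, price)
    print(f"{name}: {tapes:,.0f} tapes, ${cost:,.0f}")
```

The recomputed values reproduce the table's last two columns to within rounding, confirming the roughly factor-of-34 cost advantage of 8 mm video tape over 9-track.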
Figure 1. A schematic of the VME part of the E791 DA system. Two complete VME crates are shown, with the Event Fifo Buffers and data paths from the digitizers at the base.
Figure 2. Detail of the connections between a single EFB and the six EBIs attached to it. Each EBI is in a different VME crate. The output data path and the EFB status lines are bussed across all six EBIs. The output data path connects to the VME backplane of each crate through the EBI. The EBIs share the data path by communicating along the EFB token line.
Figure 3. Schematic of the entire E791 DA system. The VAX 11/780 was the user interface to the VME part of the system, via the DA Control Display. The VAX part of the DA program handled the status and error displays, sent events for monitoring to the event pool, and received histogram requests via the mailbox. An entirely separate set of programs picked up events from the event pool or sent histogram requests to the mailbox.
Figure 4. Detail of the E791 DA Control Display. The lower half of the screen contained commands to the system, executed by using arrow keys to move the shaded box over the command. The upper half of the screen contained information on the current state of the system (RUNNING or IDLE, tapes LOADED or UNLOADED, tape writing ON or OFF), the Run Number if a data-taking run was in progress, and the number of events written to tape.
no-problem/0001/astro-ph0001485.html | ar5iv | text | # The Spectral Variability of Cygnus X-1 at MeV Energies
## Introduction
Observations by the instruments on CGRO, coupled with observations by other high-energy experiments (e.g., SIGMA, ASCA and RXTE) have provided a wealth of new information regarding the emission properties of galactic black hole candidates. An important aspect of these high energy radiations is spectral variability, observations of which can provide constraints on models which seek to describe the global emission processes. Based on observations by OSSE of seven transient galactic black hole candidates at soft $`\gamma `$-ray energies (i.e., below 1 MeV), two $`\gamma `$-ray spectral shapes have been identified that appear to be well-correlated with the soft X-ray state Grove1997 ; Grove1998 . In particular, these observations define a breaking $`\gamma `$-ray spectrum that corresponds to the low X-ray state and a power-law $`\gamma `$-ray spectrum that corresponds to the high X-ray state. (Here we emphasize that the โstateโ is that measured at soft X-ray energies, below 10 keV.)
At X-ray energies, the measured flux from Cyg X-1 is known to be variable over a wide range of time scales, ranging from msec to months. It spends most of its time in a low X-ray state, exhibiting a breaking spectrum at $`\gamma `$-ray energies that is often characterized as a Comptonization spectrum. In May of 1996, a transition of Cyg X-1 into a high X-ray state was observed by RXTE, beginning on May 10 Cui1997 . The 2–12 keV flux reached a level of 2 Crab on May 19, four times higher than its normal value. Meanwhile, at hard X-ray energies (20–200 keV), BATSE measured a significant decrease in flux Zhang1997 . Motivated by these dramatic changes, a target-of-opportunity (ToO) for CGRO, with observations by OSSE, COMPTEL and EGRET, began on June 14 (CGRO viewing period 522.5). Here we report on the results from an analysis of the COMPTEL data from this ToO observation.
## Observations and Data Analysis
COMPTEL has obtained numerous observations of the Cygnus region since its launch in 1991, providing the best available source of data for studies of Cyg X-1 at energies above 1 MeV. Figure 1 shows a plot of hard X-ray flux, as obtained from BATSE occultation monitoring, for each day in which Cyg X-1 was within 40° of the COMPTEL pointing direction.
In previous work, we have compiled a broad-band spectrum of Cyg X-1 using contemporaneous data from BATSE, OSSE and COMPTEL McConnell1999 ; McConnell2000 . The observations were chosen, in part, based on the level of hard X-ray flux measured by BATSE, the goal being to ensure a spectral measurement that corresponded to a common spectral state. In Figure 1, the data points from the selected observations are indicated by open diamonds. The resulting spectrum, corresponding to a low X-ray state, showed evidence for emission out to 5 MeV. The spectral shape, although consistent with the so-called breaking spectral state Grove1997 ; Grove1998 , was clearly not consistent with standard Comptonization models. The COMPTEL data provided evidence for a hard tail at energies above ∼1 MeV that extended to perhaps 5 MeV.
During the high X-ray state observations in May of 1996 (VP 522.5), COMPTEL collected 11 days of data at a favorable aspect angle of 5.3°. The hard X-ray flux for these days is denoted by open triangles in Figure 1. An analysis of COMPTEL data from this observation revealed some unusual characteristics. The 1–3 MeV image (Figure 2) showed an unusually strong signal from Cyg X-1 when compared with other observations of similar exposure. The flux level was significantly higher than the average flux seen from earlier observations McConnell1999 ; McConnell2000 . In the 1–3 MeV energy band, the flux had increased by a factor of 2.5, from $`8.6(\pm 2.7)\times 10^{-5}`$ cm<sup>-2</sup> s<sup>-1</sup> MeV<sup>-1</sup> to $`2.2(\pm 0.4)\times 10^{-4}`$ cm<sup>-2</sup> s<sup>-1</sup> MeV<sup>-1</sup>. The observed change in flux is significant at a level of $`2.6\sigma `$. In addition, unlike in previous measurements, there was no evidence for any emission at energies below 1 MeV. This fact is explained, in part, by a slowly degrading sensitivity of COMPTEL at energies below 1 MeV due to increasing energy thresholds in the lower (D2) detection plane. Part of the explanation, however, appears to be a much harder source spectrum.
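Reading the two 1–3 MeV fluxes as $`8.6\times 10^{-5}`$ and $`2.2\times 10^{-4}`$ cm<sup>-2</sup> s<sup>-1</sup> MeV<sup>-1</sup> (the negative exponents are implied by the stated factor of 2.5), the quoted flux increase can be checked directly:

```python
f_low, err_low = 8.6e-5, 2.7e-5    # low X-ray state flux and 1-sigma error
f_high, err_high = 2.2e-4, 0.4e-4  # high X-ray state (VP 522.5)

ratio = f_high / f_low
print(round(ratio, 1))  # 2.6, consistent with the quoted factor of ~2.5

# Naive significance of the flux change, adding the errors in quadrature;
# this is an illustration only — the paper's exact method may differ.
significance = (f_high - f_low) / (err_low**2 + err_high**2) ** 0.5
print(round(significance, 1))
```

The naive quadrature estimate comes out near 2.8σ, of the same order as the 2.6σ quoted in the text.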
A more complete picture of the MeV spectrum is obtained by combining the COMPTEL results with results from OSSE, extending the measured spectrum down to ∼50 keV. Unfortunately, a comparison of the COMPTEL and OSSE spectra for VP 522.5 shows indications for an offset between the two spectra by about a factor of two, with the OSSE flux points being lower than those of COMPTEL in the overlapping energy region near 1 MeV. A similar offset between OSSE and COMPTEL-BATSE is also evident in the contemporaneous low soft X-ray state spectrum McConnell1999 ; McConnell2000 . The origin of this offset is not clear. Here we shall assume that there exists some uncertainty in the instrument calibrations and that this uncertainty manifests itself in a global normalization offset. We have subsequently increased the flux for each OSSE data point by a factor of two. This provides a good match between COMPTEL and OSSE at 1 MeV for both the low-state and high-state spectra, but we are left with an uncertainty (by a factor of two) in the absolute normalization of the spectra.
We compare the resulting COMPTEL-OSSE spectra in Figure 3 (with the data points in both OSSE spectra increased by a factor of two). The low-state spectrum shows the breaking type spectrum that is typical of most high energy observations of Cyg X-1. The high-state spectrum, on the other hand, shows the power-law type spectrum that is characteristic of black hole candidates in their high X-ray state. This spectral behavior had already been reported for this time period based on observations with both BATSE Zhang1997b and OSSE Gierlinski1997 . The inclusion of the COMPTEL data provides evidence, for the first time, of a continuous power-law (with a photon spectral index of -2.6) extending beyond 1 MeV, up to ∼10 MeV.
A power-law spectrum had also been observed by both OSSE and BATSE during February of 1994 Phlips1996 ; Ling1997 , corresponding to the low level of hard X-ray flux near TJD 9400 in Figure 1. In this case, however, the amplitude of the power-law was too low for it to be detected by COMPTEL.
## Discussion
We can use the COMPTEL data alone to draw some important conclusions regarding the MeV variability of Cyg X-1. Most importantly, the flux measured by COMPTEL at energies above 1 MeV was observed to be higher (by a factor of 2.5) during the high X-ray state (in May of 1996) than it was during the low X-ray state. The lack of any detectable emission below 1 MeV further suggests a relatively hard spectrum.
Inclusion of the OSSE spectra clearly shows an evolution from a breaking type spectrum in the low X-ray state to a power-law spectrum in the high X-ray state. The COMPTEL data are consistent with a pivot point near 1 MeV. The power-law appears to extend to ∼10 MeV with no clear indication of a cut-off.
## Acknowledgements
The COMPTEL project is supported by NASA under contract NAS5-26645, by the Deutsche Agentur für Raumfahrtangelegenheiten (DARA) under grant 50 QV90968 and by the Netherlands Organization for Scientific Research NWO. This work was also supported by NASA grant NAG5-7745.
no-problem/0001/astro-ph0001034.html | ar5iv | text | # First Light Measurements of Capella with the Low Energy Transmission Grating Spectrometer aboard the Chandra X-ray Observatory
## 1 Introduction
The LETGS consists of three components of the Chandra Observatory: the High Resolution Mirror Assembly (HRMA) (Van Speybroeck et al., 1997), the Low Energy Transmission Grating (LETG) (Brinkman et al., 1987, 1997; Predehl et al., 1997), and the spectroscopic array of the High Resolution Camera (HRC-S) (Murray et al., 1997). The LETG, designed and manufactured in a collaborative effort of SRON in the Netherlands and MPE in Germany, consists of a toroidally shaped structure which supports 180 grating modules. Each module holds three 1.5-cm diameter grating facets which have a line density of 1008 lines/mm. The three flat detector elements of the HRC-S, each 10 cm long and 2 cm wide, are tilted to approximate the Rowland focal surface at all wavelengths, assuring a nearly coma-free spectral image. The detector can be moved in the cross-dispersion direction and along the optical axis, to optimize the focus for spectroscopy. Further information on LETGS components is found in the AXAF Observatory Guide (http://asc.harvard.edu/udocs/) and at the Chandra X-ray Center calibration webpages (http://asc.harvard.edu/cal/).
An image of the LETG spectrum is focused on the HRC-S with zeroth order at the focus position and dispersed positive and negative orders symmetric on either side of it. The dispersion is 1.15 Å/mm in first spectral order. The spectral width in the cross-dispersion direction is minimal at zeroth order and increases at larger wavelengths due to the intrinsic astigmatism of the Rowland circle spectrograph. The extraction of the spectrum from the image is done by applying a spatial filter around the spectral image and constructing a histogram of counts vs. position along the dispersion direction. The background is estimated from areas on the detector away from the spectral image and can be reduced by filtering events by pulse-height.
## 2 First Light Spectrum
Capella is a binary system at a distance of 12.9 pc consisting of G8 and G1 giants with an orbital period of 104 days (Hummel et al., 1994). It is the brightest quiescent coronal X-ray source in the sky after the Sun, and is therefore an obvious line source candidate for first light and for instrument calibration. X rays from Capella were discovered in 1975 (Catura, Acton, & Johnson, 1975; Mewe et al., 1975) and subsequent satellite observations provided evidence for a multi-temperature component plasma (e.g. Mewe (1991) for references). Recent spectra were obtained with EUVE longward of 70 Å with a resolution of about 0.5 Å (Dupree et al., 1993; Schrijver et al., 1995).
The LETG First Light observation of Capella was performed on 6 September 1999 (00h27m UT – 10h04m UT) with LETG and HRC-S. For the analysis we use a composite of six observations obtained in the week after first light, with a total observing time of 95 ksec. The HRC-S output was processed through standard pipeline processing. For LETG/HRC-S events, only the product of the wavelength and diffraction order is known because no diffraction order information can be extracted. Preliminary analysis of the pipeline output immediately revealed a beautiful line-rich spectrum. The complete background-subtracted, negative-order spectrum between 5 and 175 Å is shown in Fig. 1. Line identifications were made using previously measured and/or theoretical wavelengths from the literature. The most prominent lines are listed in Table 1.
The spectral resolution $`\mathrm{\Delta }\lambda `$ of the LETGS is nearly constant when expressed in wavelength units, and therefore the resolving power $`\lambda /\mathrm{\Delta }\lambda `$ is greatest at long wavelengths. With the current uncertainty of the LETGS wavelength scale of about 0.015 Å, this means that the prominent lines at 150 and 171 Å could be used to measure Doppler shifts as small as 30 km/sec, such as may occur during stellar-flare mass ejections, once the absolute wavelength calibration of the instrument has been established. This requires, however, that line rest-frame wavelengths are accurately known and that effects such as the orbital velocity of the Earth around the Sun are taken into account. Higher-order lines, such as the strong O VIII Ly$`\alpha `$ line at 18.97 Å, which is seen out to 6th order, can also be used.
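The 30 km/sec figure follows directly from the non-relativistic Doppler relation $`\mathrm{\Delta }v=c\mathrm{\Delta }\lambda /\lambda `$; a quick check:

```python
C_KM_S = 2.998e5   # speed of light in km/s

def doppler_limit(delta_lambda, wavelength):
    """Smallest measurable velocity shift (km/s) for a given wavelength
    uncertainty delta_lambda, both in the same units (here Angstrom)."""
    return C_KM_S * delta_lambda / wavelength

print(round(doppler_limit(0.015, 150)))   # 30 km/s, as quoted in the text
```

The limit improves linearly with wavelength, which is why the long-wavelength lines at 150 and 171 Å are the most useful ones for this purpose.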
## 3 Diagnostics
A quantitative analysis of the entire spectrum by multi-temperature fitting or differential emission measure modeling yields a detailed thermal structure of the corona, but this requires accurate detector efficiency calibration which has not yet been completed. However, some diagnostics based on intensity ratios of lines lying closely together can already be applied. In this letter we consider the helium-like line diagnostic and briefly discuss the resonance scattering in the Fe XVII 15.014 Å line.
### 3.1 Electron Density & Temperature Diagnostics
Electron densities, $`n_e`$, can be measured using density-sensitive spectral lines originating from metastable levels, such as the forbidden ($`f`$) $`2^3S`$–$`1^1S`$ line in helium-like ions. This line and the associated resonance ($`r`$) $`2^1P`$–$`1^1S`$ and intercombination ($`i`$) $`2^3P`$–$`1^1S`$ line make up the so-called helium-like "triplet" lines (Gabriel & Jordan, 1969; Pradhan, 1982; Mewe, Gronenschild, & Van den Oord, 1985). The intensity ratio $`(i+f)/r`$ varies with electron temperature, T, but more importantly, the ratio $`i/f`$ varies with $`n_e`$ due to the collisional coupling between the $`2^3S`$ and $`2^3P`$ levels.
The LETGS wavelength band contains the He-like triplets from C, N, O, Ne, Mg, and Si (∼ 40, 29, 22, 13.5, 9.2, and 6.6 Å, respectively). However, the Si and Mg triplets are not sufficiently resolved and the Ne IX triplet is too heavily blended with iron and nickel lines for unambiguous density analysis. The O VII lines are clean (see Fig. 2) and the C V and N VI lines can be separated from the blends by simultaneous fitting of all lines. These triplets are suited to diagnose plasmas in the range $`n_e`$ = 10<sup>8</sup>–10<sup>11</sup> cm<sup>-3</sup> and $`T`$ ∼ 1–3 MK. For the C, N, and O triplets the measured $`i/f`$ ratios are $`0.38\pm 0.14`$, $`0.52\pm 0.15`$, and $`0.250\pm 0.035`$, respectively, which imply (Pradhan, 1982) $`n_e`$ (in $`10^9`$ cm<sup>-3</sup>) = $`2.8\pm 1.3`$, $`6\pm 3`$, and ≲ 5 (1$`\sigma `$ upper limit), respectively, for typical temperatures as indicated by the $`(i+f)/r`$ ratios of 1, 1, and 3 MK, respectively. This concerns the lower temperature part of a multi-temperature structure which also contains a hot (∼6–8 MK), and dense (∼ 10<sup>12</sup> cm<sup>-3</sup>) compact plasma component (see Section 3.2). The derived densities are comparable to those of active regions on the Sun with a temperature of a few MK. Fig. 2 shows a fit to the O VII triplet measured in the −1 order. The He-like triplet diagnostic, which was first applied to the Sun (e.g., Acton et al. (1972); Wolfson, Doyle, & Phillips (1983)) has now for the first time been applied to a star other than the Sun.
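The density dependence of the forbidden-to-intercombination ratio is commonly parameterized as $`R(n_e)=R_0/(1+n_e/N_c)`$, where $`R_0`$ is the low-density limit and $`N_c`$ the critical density of the ion; inverting this gives the density from a measured ratio. The sketch below is illustrative only — the values of $`R_0`$ and $`N_c`$ are typical literature numbers for O VII, not the atomic data used in this paper:

```python
def density_from_ratio(R, R0, Nc):
    """Invert R = R0 / (1 + n_e/Nc) for the electron density n_e (cm^-3).
    Returns 0 if the measured f/i ratio is at or above the low-density limit R0."""
    if R >= R0:
        return 0.0
    return Nc * (R0 / R - 1.0)

# Illustrative numbers only: R0 ~ 3.9 and Nc ~ 3.4e10 cm^-3 are typical
# literature values for O VII, not taken from this paper.
ne = density_from_ratio(R=3.0, R0=3.9, Nc=3.4e10)
print(f"{ne:.2e}")   # ~1e10 cm^-3
```

Note that a measured ratio at the low-density limit yields only an upper bound on $`n_e`$, which is why the O VII result above is quoted as a 1σ upper limit.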
The long-wavelength region of the LETGS between 90 and 150 Å contains a number of density-sensitive lines from $`2\mathrm{}`$–$`2\mathrm{}^{}`$ transitions in the Fe-L ions Fe XX–XXII, which provide density diagnostics for relatively hot ($`\gtrsim `$ 5 MK) and dense ($`\sim `$ 10<sup>12</sup> cm<sup>-3</sup>) plasmas (Mewe, Gronenschild, & Van den Oord, 1985; Mewe, Lemen, & Schrijver, 1991; Brickhouse, Raymond & Smith, 1995). These have been applied in a few cases to EUVE spectra of late-type stars and, in the case of Capella, have suggested densities more than two orders of magnitude higher than found here for the cooler plasma (Dupree et al., 1993; Schrijver et al., 1995). These diagnostics will also be applied to the LETGS spectrum as soon as the long-wavelength efficiency calibration is established.
### 3.2 The 15–17 Å region: resonance scattering of Fe XVII?
Transitions in Ne-like Fe XVII yield the strongest emission lines in the range 15–17 Å (cf. Fig. 1). In principle, the optical depth, $`\tau `$, in the 15.014 Å line can be obtained by applying a simplified escape-factor model to the ratio of the Fe XVII 15.014 Å resonance line, which has a large oscillator strength, to a presumably optically thin Fe XVII line with a small oscillator strength. We use the 15.265 Å line because the 16.780 Å line can be affected by radiative cascades (Liedahl, 1999). Solar physicists have used this technique to derive the density in active regions on the Sun (e.g., Saba et al. (1999); Phillips et al. (1996, 1997)).
Various theoretical models predict 15.014/15.265 ratio values in the range 3.3–4.7, with only a slow variation ($`\sim `$ 5%) with temperature or energy in the region 2–5 MK or 0.1–0.3 keV (Brown et al., 1998; Bhatia & Doschek, 1992). The fact that most ratios observed in the Sun typically range from 1.5–2.8 (Brown et al. (1998), and references above), significantly lower than the theoretical ratios, supports claims that in solar active regions the 15.014 Å line is affected by resonant scattering. The 15.014/15.265 ratio recently measured in the Livermore Electron Beam Ion Trap (EBIT) (Brown et al., 1998) ranges from 2.77–3.15 (with individual uncertainties of about $`\pm 0.2`$) at energies between 0.85–1.3 keV, significantly lower than calculated values. Although the EBIT results do not include probably minor contributions from processes such as dielectronic recombination satellites and resonant excitation, this may imply that the amount of solar scattering has been overestimated in past analyses. Our measured Fe XVIII 16.078 Å / Fe XVII 15.265 Å ratio gives a temperature of $`\sim `$6 MK, and the photon flux ratio 15.014/15.265 is measured to be 2.64$`\pm 0.10`$. Comparing this to the recent EBIT results, we conclude that there is little or no evidence for opacity effects in the 15.014 Å line seen in our Capella spectrum.
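The size of the effect can be illustrated with a simple escape-factor inversion. The sketch below assumes a homogeneous-slab escape factor of the form $`P(\tau )=1/(1+0.43\tau )`$, one common parameterization in this literature; the exact model used in the analysis above is not specified in the text, so this is illustrative only.

```python
# Sketch of the simplified escape-factor estimate for the 15.014 A line.
# Assumes observed_ratio = thin_ratio * P(tau) with P(tau) = 1/(1 + 0.43*tau),
# a common homogeneous-slab parameterization (an assumption; the paper does
# not spell out its model).

def optical_depth(observed_ratio, thin_ratio, c=0.43):
    """Solve observed/thin = 1/(1 + c*tau) for tau (clipped at 0)."""
    tau = (thin_ratio / observed_ratio - 1.0) / c
    return max(tau, 0.0)

# Capella 15.014/15.265 measurement vs. the EBIT range of Brown et al. (1998):
for thin in (2.77, 3.15):
    print(thin, optical_depth(2.64, thin))
```

For the measured ratio of 2.64 and thin-limit values in the EBIT range, the inferred optical depth stays well below unity, consistent with the conclusion of little or no opacity in the 15.014 Å line.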
## 4 Conclusion
The Capella measurements with LETGS show a rich spectrum with excellent spectral resolution ($`\mathrm{\Delta }\lambda `$ $`\sim `$ 0.06 Å, FWHM). About 150 lines have been identified, of which the brightest hundred are presented in Table 1. The high-resolution spectra of the Chandra grating spectrometers allow us to carry out direct density diagnostics, using the He-like triplets of the most abundant elements in the LETGS-band, which were previously possible only for the Sun. Density estimates based on the C, N, and O He-like complexes indicate densities typical of solar active regions and some two or more orders of magnitude lower than density estimates for the hotter ($`>`$5 MK) plasma obtained from EUVE spectra. A preliminary investigation into the effect of resonance scattering in the Fe XVII line at 15.014 Å showed no clear evidence for opacity effects. After further LETGS in-flight calibration it is expected that relative Doppler velocities of the order of 30 km/s will be detectable at the longest wavelengths.
The LETGS data as presented here could only be produced after dedicated efforts of many people for many years. Our special gratitude goes to the technical and scientific colleagues at SRON, MPE and their subcontractors for making such a superb LETG and to the colleagues at many institutes for building the payload. Special thanks goes to the many teams who made Chandra a success, particularly the project scientist team, headed by Dr. Weisskopf, the MSFC project team, headed by Mr. Wojtalik, the TRW industrial teams and their subcontractors, the Chandra observatory team, headed by Dr. Tananbaum, and the crew of Space Shuttle flight STS-93. JJD, OJ, MJ, VK, SSM, DP, PR, and BJW were supported by Chandra X-ray Center NASA contract NAS8-39073 during the course of this research. |
no-problem/0001/math0001019.html | ar5iv | text | # On [๐ฟ]-homotopy groups
## 1. Introduction
A new approach to dimension theory, based on the notions of extension types of complexes and extension dimension, leads to the appearance of $`[L]`$-homotopy theory, which, in turn, allows one to introduce $`[L]`$-homotopy groups (see ). Perhaps the most natural problem related to $`[L]`$-homotopy groups is the problem of their computation. It is necessary to point out that $`[L]`$-homotopy groups may differ from the usual homotopy groups even for complexes.
More specifically, the problem of computation can be stated as follows: describe the $`[L]`$-homotopy groups of a space $`X`$ in terms of the usual homotopy groups of $`X`$ and the homotopy properties of the complex $`L`$.
The first step in this direction is apparently the computation of the $`n`$-th $`[L]`$-homotopy group of $`S^n`$ for a complex whose extension type lies between the extension types of $`S^n`$ and $`S^{n+1}`$.
In what follows we, in particular, perform this step.
## 2. Preliminaries
Following , we introduce the notions of extension types of complexes, extension dimension, $`[L]`$-homotopy, $`[L]`$-homotopy groups, and other related notions.
We also state Dranishnikov's theorem characterizing the extension properties of a complex .
All spaces are Polish, and all complexes are countable, finitely dominated $`CW`$ complexes.
For spaces $`X`$ and $`L`$, the notation $`L\in AE(X)`$ means that every map $`f:A\rightarrow L`$, defined on a closed subspace $`A`$ of $`X`$, admits an extension $`\overline{f}`$ over $`X`$.
Let $`L`$ and $`K`$ be complexes. We say (see ) that $`L\le K`$ if, for each space $`X`$, $`L\in AE(X)`$ implies $`K\in AE(X)`$. Equivalence classes of complexes with respect to this relation are called extension types. By $`[L]`$ we denote the extension type of $`L`$.
###### Definition 2.1.
(). The extension dimension of a space $`X`$ is the extension type $`ed(X)`$ such that $`ed(X)=\mathrm{min}\{[L]:L\in AE(X)\}`$.
Observe that if $`[L]\le [S^n]`$ and $`ed(X)\le [L]`$, then $`dimX\le n`$.
Now we can give the following
###### Definition 2.2.
We say that a space $`X`$ is an absolute (neighbourhood) extensor modulo $`L`$ (shortly, $`X`$ is $`\mathrm{A}(\mathrm{N})\mathrm{E}([L])`$) and write $`X\in \mathrm{A}(\mathrm{N})\mathrm{E}([L])`$ if $`X\in \mathrm{A}(\mathrm{N})\mathrm{E}(Y)`$ for each space $`Y`$ with $`ed(Y)\le [L]`$.
The definitions of $`[L]`$-homotopy and $`[L]`$-homotopy equivalence are essential for our considerations:
###### Definition 2.3.
Two maps $`f_0`$, $`f_1:X\rightarrow Y`$ are said to be $`[L]`$-homotopic (notation: $`f_0\stackrel{[L]}{\simeq }f_1`$) if for any map $`h:Z\rightarrow X\times [0,1]`$, where $`Z`$ is a space with $`ed(Z)\le [L]`$, the composition $`(f_0\cup f_1)h|_{h^{1}(X\times \{0,1\})}:h^{1}(X\times \{0,1\})\rightarrow Y`$ admits an extension $`H:Z\rightarrow Y`$.
###### Definition 2.4.
A map $`f:X\rightarrow Y`$ is said to be an $`[L]`$-homotopy equivalence if there is a map $`g:Y\rightarrow X`$ such that the compositions $`gf`$ and $`fg`$ are $`[L]`$-homotopic to $`id_X`$ and $`id_Y`$, respectively.
Let us observe (see ) that $`ANE([L])`$-spaces have the following $`[L]`$-homotopy extension property.
###### Proposition 2.1.
Let $`L`$ be a finitely dominated complex and $`X`$ be a Polish $`ANE([L])`$-space. Suppose that $`A`$ is closed in a space $`B`$ with $`ed(B)\le [L]`$. If maps $`f,g:A\rightarrow X`$ are $`[L]`$-homotopic and $`f`$ admits an extension $`F:B\rightarrow X`$, then $`g`$ also admits an extension $`G:B\rightarrow X`$, and it may be assumed that $`F`$ is $`[L]`$-homotopic to $`G`$.
To provide an important example of an $`[L]`$-homotopy equivalence, we need to introduce the class of approximately $`[L]`$-soft maps.
###### Definition 2.5.
A map $`f:X\rightarrow Y`$ is said to be approximately $`[L]`$-soft if for each space $`Z`$ with $`ed(Z)\le [L]`$, each closed subset $`A\subset Z`$, each open cover $`\mathcal{U}\in cov(Y)`$, and any two maps $`g:A\rightarrow X`$ and $`h:Z\rightarrow Y`$ such that $`fg=h|_A`$, there is a map $`k:Z\rightarrow X`$ satisfying the condition $`k|_A=g`$ and such that the composition $`fk`$ is $`\mathcal{U}`$-close to $`h`$.
###### Proposition 2.2.
Let $`f:X\rightarrow Y`$ be a map between $`ANE([L])`$-compacta with $`ed(Y)\le [L]`$. If $`f`$ is approximately $`[L]`$-soft, then $`f`$ is an $`[L]`$-homotopy equivalence.
In order to define $`[L]`$-homotopy groups, it is necessary to consider an $`n`$-th $`[L]`$-sphere $`S_{[L]}^n`$, namely, an $`[L]`$-dimensional $`ANE([L])`$-compactum admitting an approximately $`[L]`$-soft map onto $`S^n`$. It can be shown that all possible choices of an $`[L]`$-sphere $`S_{[L]}^n`$ are $`[L]`$-homotopy equivalent. This remark, coupled with the following proposition, allows us to consider, for every finite complex $`L`$, every $`n\ge 1`$, and any space $`X`$, the set $`\pi _n^{[L]}(X)=[S_{[L]}^n,X]_{[L]}`$ endowed with a natural group structure (see for details).
###### Theorem 2.3.
Let $`L`$ be a finitely dominated complex and let $`X`$ be a finite polyhedron or a compact Hilbert cube manifold. Then there exist an $`[L]`$-universal $`ANE([L])`$-compactum $`\mu _X^{[L]}`$ with $`ed(\mu _X^{[L]})=[L]`$ and an $`[L]`$-invertible and approximately $`[L]`$-soft map $`f_X^{[L]}:\mu _X^{[L]}\rightarrow X`$.
The following theorem is essential for our consideration.
###### Theorem 2.4.
Let $`L`$ be a simply connected $`CW`$-complex and $`X`$ a finite-dimensional compactum. Then $`L\in AE(X)`$ iff $`\mathrm{c}\mathrm{dim}_{H_i(L)}X\le i`$ for any $`i`$.
From the proof of Theorem 2.4 one can conclude that the following theorem also holds:
###### Theorem 2.5.
Let $`L`$ be a $`CW`$-complex (not necessarily simply connected). Then for any finite-dimensional compactum $`X`$, $`L\in AE(X)`$ implies that $`\mathrm{c}\mathrm{dim}_{H_i(L)}X\le i`$ for any $`i`$.
## 3. Cohomological properties of $`L`$
In this section we investigate some cohomological properties of complexes $`L`$ satisfying the condition $`[L]\le [S^{n+1}]`$ for some $`n`$. To establish these properties, let us first formulate the following
###### Proposition 3.1.
Let $`(X,A)`$ be a topological pair such that $`H_q(X,A)`$ is finitely generated for any $`q`$. Then the free submodules of $`H^q(X,A)`$ and $`H_q(X,A)`$ are isomorphic, and the torsion submodules of $`H^q(X,A)`$ and $`H_{q-1}(X,A)`$ are isomorphic.
Now we use Theorem 2.5 to obtain the following lemma.
###### Lemma 3.2.
Let $`L`$ be a finite $`CW`$ complex such that $`[L]\le [S^{n+1}]`$ and $`n`$ is minimal with this property. Then for any $`q\le n`$, $`H_q(L)`$ is a torsion group.
###### Proof.
Suppose that there exists $`q\le n`$ such that $`H_q(L)=\mathbb{Z}\oplus G`$ for some group $`G`$. To get a contradiction, let us show that $`[L]\le [S^q]`$. Consider $`X`$ such that $`L\in AE(X)`$. Observe that $`X`$ is finite-dimensional, since $`[L]\le [S^{n+1}]`$ by our assumption.
Denote $`H=H_q(L)`$. By Theorem 2.5 we have $`\mathrm{c}\mathrm{dim}_HX\le q`$. Hence, for any closed subset $`A\subset X`$ we have $`H^{q+1}(X,A;H)=\{0\}`$. On the other hand, the universal coefficients formula implies that
$`H^{q+1}(X,A;H)\cong H^{q+1}(X,A)\otimes H\oplus Tor(H^{q+2}(X,A),H)`$.
Hence, $`H^{q+1}(X,A)\otimes H=\{0\}`$. Observe, however, that by our assumption we have $`H^{q+1}(X,A)\otimes H=H^{q+1}(X,A)\otimes (\mathbb{Z}\oplus G)=H^{q+1}(X,A)\oplus (H^{q+1}(X,A)\otimes G)`$. Therefore, $`H^{q+1}(X,A)=0`$.
From the last fact we conclude that $`\mathrm{c}\mathrm{dim}X\le q`$ and therefore, since $`X`$ is finite-dimensional, $`dimX\le q`$, which implies $`S^q\in AE(X)`$. ∎
From this lemma and Proposition 3.1 we obtain
###### Corollary 3.3.
Under the same assumptions, $`H^q(L)`$ is a torsion group for any $`q\le n`$.
The following fact is essential for the construction, carried out further below, of compacta with certain specific properties.
###### Lemma 3.4.
Let $`L`$ be as in the previous lemma. For any $`m`$ there exists $`p\ge m`$ such that $`H^q(L;\mathbb{Z}_p)=\{0\}`$ for any $`q\le n`$.
###### Proof.
From Corollary 3.3 we conclude that $`H^q(L)=\underset{i=1}{\overset{l_q}{\bigoplus }}\mathbb{Z}_{m_{qi}}`$ for any $`q\le n`$. Additionally, let $`TorH^{n+1}(L)=\underset{i=1}{\overset{l_{n+1}}{\bigoplus }}\mathbb{Z}_{m_{(n+1)i}}`$. For any $`m`$, consider $`p\ge m`$ such that $`(p,m_{ki})=1`$ for every $`k=1,\mathrm{},n+1`$ and $`i=1,\mathrm{},l_k`$. The universal coefficients formula then implies that $`H^q(L;\mathbb{Z}_p)=\{0\}`$ for every $`q\le n`$. ∎
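For completeness, the coefficient computation invoked here is the standard universal-coefficients calculation, spelled out below for the reader (it is only cited, not written out, in the proof):

$$H^q(L;\mathbb{Z}_p)\cong \left(H^q(L)\otimes \mathbb{Z}_p\right)\oplus \mathrm{Tor}\left(H^{q+1}(L),\mathbb{Z}_p\right),$$

and for a cyclic summand $`\mathbb{Z}_d`$ with $`\mathrm{gcd}(d,p)=1`$ both $`\mathbb{Z}_d\otimes \mathbb{Z}_p=0`$ and $`\mathrm{Tor}(\mathbb{Z}_d,\mathbb{Z}_p)=0`$. Since for $`q\le n`$ the groups $`H^q(L)`$ are finite direct sums of such cyclic summands (Corollary 3.3), and $`p`$ is chosen coprime to all torsion orders occurring through degree $`n+1`$, both summands vanish, so $`H^q(L;\mathbb{Z}_p)=\{0\}`$.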
Finally, let us prove the following
###### Lemma 3.5.
Let $`X`$ be a metrizable compactum and $`A`$ a closed subset of $`X`$. Consider a map $`f:A\rightarrow S^n`$. If there exists an extension $`\overline{f}:X\rightarrow S^n`$, then for any $`k`$ we have $`\delta _{X,A}^{}(f^{}(\zeta ))=0`$ in the group $`H^{n+1}(X,A;\mathbb{Z}_k)`$, where $`\zeta `$ is a generator of $`H^n(S^n;\mathbb{Z}_k)`$.
###### Proof.
Let $`\overline{f}`$ be an extension of $`f`$. The pair of maps $`(\overline{f},f):(X,A)\rightarrow (S^n,S^n)`$ induces the following commutative diagram, whose commutativity implies the assertion of the lemma:
$$\begin{array}{ccc}H^n(A;\mathbb{Z}_k)& \stackrel{\delta _{X,A}^{}}{}& H^{n+1}(X,A;\mathbb{Z}_k)\\ f^{}\uparrow & & \uparrow \overline{f}^{}\\ H^n(S^n;\mathbb{Z}_k)& \stackrel{\delta _{S^n,S^n}^{}}{}& H^{n+1}(S^n,S^n;\mathbb{Z}_k)=\{0\}\end{array}$$
## 4. Some properties of \[L\]-homotopy groups
In this section we will investigate some properties of $`[L]`$-homotopy groups.
From this point until the end of the text we consider a finite complex $`L`$ such that $`[S^n]<[L]\le [S^{n+1}]`$ for some fixed $`n`$.
###### Remark 4.1.
Let us observe that for such complexes $`S_{[L]}^n`$ is $`[L]`$-homotopy equivalent to $`S^n`$ (see Proposition 2.2). Therefore, for any $`X`$, $`\pi _n^{[L]}(X)`$ is isomorphic to $`G=\pi _n(X)/N([L])`$, where $`N([L])`$ denotes the relation of $`[L]`$-homotopy equivalence between elements of $`\pi _n(X)`$.
From this observation one can easily obtain the following fact.
###### Proposition 4.1.
For $`\pi _n^{[L]}(S^n)`$ there are three possibilities: $`\pi _n^{[L]}(S^n)=\mathbb{Z}`$, $`\pi _n^{[L]}(S^n)=\mathbb{Z}_m`$ for some integer $`m`$, or the group is trivial.
Let us characterize the hypothetical equality $`\pi _n^{[L]}(S^n)=\mathbb{Z}_m`$ in terms of extensions of maps.
###### Proposition 4.2.
If $`\pi _n^{[L]}(S^n)=\mathbb{Z}_m`$, then for any $`X`$ such that $`ed(X)\le [L]`$, any closed subset $`A`$ of $`X`$, and any map $`f:A\rightarrow S^n`$, there exists an extension $`\overline{h}:X\rightarrow S^n`$ of the composition $`h=z_mf`$, where $`z_m:S^n\rightarrow S^n`$ is a map of degree $`m`$.
###### Proof.
Suppose that $`\pi _n^{[L]}(S^n)=\mathbb{Z}_m`$. Then from Remark 4.1, and since $`[z_m]=m[id_{S^n}]=0`$ in $`\pi _n^{[L]}(S^n)`$ (where $`[f]`$ denotes the homotopy class of $`f`$), we conclude that $`z_m:S^n\rightarrow S^n`$ is $`[L]`$-homotopic to a constant map. Let us show that $`h=z_mf:A\rightarrow S^n`$ is also $`[L]`$-homotopic to a constant map; this will prove our statement. Indeed, by our assumption $`ed(X)\le [L]`$ and $`S^n\in ANE`$, and therefore we can apply Proposition 2.1.
Consider $`Z`$ such that $`ed(Z)\le [L]`$ and a map $`H:Z\rightarrow A\times I`$, where $`I=[0,1]`$. Pick a point $`s\in S^n`$. Let $`f_0=z_mf`$ and let $`f_1\equiv s`$ be the constant map, considered as maps $`f_i:A\times \{i\}\rightarrow S^n`$, $`i=0,1`$.
Define $`F:A\times I\rightarrow S^n\times I`$ by $`F(a,t)=(f(a),t)`$ for each $`a\in A`$ and $`t\in I`$. Let $`f_0^{}\equiv z_m`$ and $`f_1^{}\equiv s`$, considered as maps $`f_i^{}:S^n\times \{i\}\rightarrow S^n`$, $`i=0,1`$.
Consider the composition $`G=FH:Z\rightarrow S^n\times I`$. By our assumption $`f_0^{}`$ is $`[L]`$-homotopic to $`f_1^{}`$. Therefore the map $`g:G^{1}(S^n\times \{0\}\cup S^n\times \{1\})\rightarrow S^n`$, defined by $`g|_{G^{1}(S^n\times \{i\})}=f_i^{}G`$ for $`i=0,1`$, can be extended over $`Z`$. On the other hand, we have $`G^{1}(S^n\times \{i\})=H^{1}(A\times \{i\})`$ and $`g|_{G^{1}(S^n\times \{i\})}=f_i^{}FH=f_iH`$ for $`i=0,1`$. This remark completes the proof. ∎
Now consider the special case of a complex of the form $`L=K_s\vee K`$ satisfying $`[S^n]<[L]\le [S^{n+1}]`$, where $`K_s`$ is the complex obtained by attaching an $`(n+1)`$-dimensional cell to $`S^n`$ via a map of degree $`s`$.
###### Proposition 4.3.
Let $`[\alpha ]\in \pi _n(X)`$ be an element of order $`s`$. Then $`\alpha `$ is $`[L]`$-homotopic to a constant map.
###### Proof.
Observe that, similarly to the proof of Proposition 4.2, it is enough to show that for every $`Z`$ with $`ed(Z)\le [L]`$, every closed subspace $`A`$ of $`Z`$, and any map $`f:A\rightarrow S^n`$, the composition $`\alpha f:A\rightarrow X`$ can be extended over $`Z`$.
Let $`g:S^n\rightarrow K_s^{(n)}`$ be an embedding (by $`M^{(n)}`$ we denote the $`n`$-dimensional skeleton of a complex $`M`$) and let $`r:L\rightarrow K_s`$ be a retraction.
Since $`ed(Z)\le [L]`$, the composition $`gf`$ admits an extension $`F:Z\rightarrow L`$. Let $`F^{}=rF`$, and let $`\alpha ^{}:K_s\rightarrow X`$ be an extension of the map $`\alpha `$ considered on $`K_s^{(n)}=S^n`$; such an extension exists because $`[\alpha ]`$ has order $`s`$, so $`\alpha `$ composed with the degree-$`s`$ attaching map is null-homotopic. Observe that $`\alpha ^{}F^{}`$ is the required extension of $`\alpha f`$. ∎
## 5. Computation of $`\pi _n^{[L]}(S^n)`$
In this section we will prove that $`\pi _n^{[L]}(S^n)=\mathbb{Z}`$.
Suppose the opposite, i.e., $`\pi _n^{[L]}(S^n)=\mathbb{Z}_m`$ (we use Proposition 4.1; the same arguments can be used to prove that $`\pi _n^{[L]}(S^n)`$ is non-trivial).
To get a contradiction we need to obtain a compactum with special extension properties. We will use a construction of .
Let us recall the following definition.
###### Definition 5.1.
An inverse sequence $`S=\{X_i,p_i^{i+1}:i\in \omega \}`$ consisting of metrizable compacta is said to be $`L`$-resolvable if for any $`i`$, any closed subspace $`A`$ of $`X_i`$, and any map $`f:A\rightarrow L`$, there exists $`k\ge i`$ such that the composition $`fp_i^k:(p_i^k)^{1}A\rightarrow L`$ can be extended over $`X_k`$.
The following lemma (see ) expresses an important property of $`[L]`$-resolvable inverse sequences.
###### Lemma 5.1.
Suppose that $`L`$ is a countable complex and that $`X`$ is a compactum such that $`X=limS`$, where $`S=\{(X_i,\lambda _i),q_i^{i+1}\}`$ is an $`L`$-resolvable inverse system of compact polyhedra $`X_i`$ with triangulations $`\lambda _i`$ such that $`mesh\{\lambda _i\}\rightarrow 0`$. Then $`L\in AE(X)`$.
Let us recall that in an inverse sequence $`S=\{(X_i,\tau _i),p_i^{i+1}\}`$ was constructed such that each $`X_i`$ is a compact polyhedron with a fixed triangulation $`\tau _i`$, $`X_0=S^{n+1}`$, $`mesh\tau _i\rightarrow 0`$, $`S`$ is $`[L]`$-resolvable, and for any $`x\in X_i`$ the fiber $`(p_i^{i+1})^{1}x`$ is homeomorphic to $`L`$ or to a point.
It is easy to see that, using the same construction, one can obtain an inverse sequence $`S=\{(X_i,\tau _i),p_i^{i+1}\}`$ with the same properties, with the exception that $`X_0=D^{n+1}`$, where $`D^{n+1}`$ is the $`(n+1)`$-dimensional disk.
Let $`X=limS`$. Observe that $`ed(X)\le [L]`$. Let $`p_0:X\rightarrow D^{n+1}`$ be the limit projection.
Pick $`p\ge m+1`$ as provided by Lemma 3.4. By the Vietoris-Begle theorem (see ) and our choice of $`p`$, for every $`i`$ and every $`X_i^{}\subset X_i`$ the homomorphism $`(p_i^{i+1})^{}:H^k(X_i^{};\mathbb{Z}_p)\rightarrow H^k((p_i^{i+1})^{1}X_i^{};\mathbb{Z}_p)`$ is an isomorphism for $`k\le n`$ and a monomorphism for $`k=n+1`$.
Therefore, for each $`D^{}\subset X_0=D^{n+1}`$ the homomorphism $`p_0^{}:H^k(D^{};\mathbb{Z}_p)\rightarrow H^k((p_0)^{1}D^{};\mathbb{Z}_p)`$ is an isomorphism for $`k\le n`$ and a monomorphism for $`k=n+1`$. In particular, $`H^n(X;\mathbb{Z}_p)=\{0\}`$, since $`X_0=D^{n+1}`$ has trivial cohomology groups.
Let $`A=(p_0)^{1}S^n`$ and let $`\zeta \in H^n(S^n;\mathbb{Z}_p)\cong \mathbb{Z}_p`$ be a generator.
Since $`p_0^{}:H^n(S^n;\mathbb{Z}_p)\rightarrow H^n(A;\mathbb{Z}_p)`$ is an isomorphism, $`p_0^{}(\zeta )`$ is a generator of $`H^n(A;\mathbb{Z}_p)\cong \mathbb{Z}_p`$. In particular, $`p_0^{}(\zeta )`$ is an element of order $`p`$.
From the exact sequence of the pair $`(X,A)`$
$$\begin{array}{ccccc}\mathrm{}\rightarrow H^n(X;\mathbb{Z}_p)=\{0\}& \stackrel{i_{X,A}^{}}{\rightarrow }& H^n(A;\mathbb{Z}_p)& \stackrel{\delta _{X,A}^{}}{\rightarrow }& H^{n+1}(X,A;\mathbb{Z}_p)\rightarrow \mathrm{}\end{array}$$
we conclude that $`\delta _{X,A}^{}`$ is a monomorphism, and hence $`\delta _{X,A}^{}(p_0^{}(\zeta ))\in H^{n+1}(X,A;\mathbb{Z}_p)`$ is an element of order $`p`$.
Consider now the composition $`h=z_mp_0|_A:A\rightarrow S^n`$. By our assumption, this map can be extended over $`X`$ (see Proposition 4.2). This fact, coupled with Lemma 3.5, implies that $`\delta _{X,A}^{}(h^{}(\zeta ))=0`$ in $`H^{n+1}(X,A;\mathbb{Z}_p)`$. But $`\delta _{X,A}^{}(h^{}(\zeta ))=m\delta _{X,A}^{}(p_0^{}(\zeta ))\ne 0`$, since $`\delta _{X,A}^{}(p_0^{}(\zeta ))`$ has order $`p>m`$. We arrive at a contradiction, which shows that
###### Theorem 5.2.
Let $`L`$ be a complex such that $`[S^n]<[L]\le [S^{n+1}]`$. Then $`\pi _n^{[L]}(S^n)=\mathbb{Z}`$.
The author is grateful to A. C. Chigogidze for useful discussions.
no-problem/0001/astro-ph0001470.html | ar5iv | text | # Harmonizing the RR Lyrae and Clump Distance Scales โ Stretching the Short Distance Scale to Intermediate Ranges?
## 1 Introduction
The Hubble Space Telescope Key Project (e.g., Madore et al. 1999) concluded that the biggest uncertainty in the Hubble constant, $`H_0`$, comes from the uncertainty in the distance to the LMC. Among the major methods that have been used to determine the distance to the LMC are: the echo of the supernova 1987A, solving parameters of eclipsing binaries, Cepheids, RR Lyrae stars, and red clump giants. They all suffer from some uncertainties and possible systematic errors. The echo of the supernova 1987A was a transient event with limited data and contradictory interpretations (Gould & Uza 1998 versus Panagia 1998). Only one attempt of solving eclipsing binary using space-based spectra was made by Guinan et al. (1998) for HV 2274. Their result is sensitive to the reddening toward HV 2274 (Udalski et al. 1998 versus Nelson et al. 2000). To be calibrated with high precision, Cepheids have to wait for the next generation astrometric missions (for the Hipparcos-based calibration see Feast & Catchpole 1997 and Pont 1999). The absolute $`V`$-magnitudes of RR Lyrae stars, $`M_V(RR)`$, are still under debate with a faint value given by the statistical parallax method and a bright value suggested by the main sequence fitting (see Popowski & Gould 1999). The major problem of the red clump method is the possibility that the absolute $`I`$-magnitude, $`M_I(RC)`$, is sensitive to the environment (Cole 1998; Girardi et al. 1998; Twarog, Anthony-Twarog, & Bricker 1999). The mentioned methods give results inconsistent within their estimated uncertainties, which suggests hidden systematics.
Here I concentrate on two horizontal-branch standard candles: red clump giants and RR Lyrae stars. I start with a very short review of their application to determine the distance to the LMC. Paczyลski & Stanek (1998) pointed out that clump giants should constitute an accurate distance indicator. In a study of the morphology of the red clump, Beaulieu & Sackett (1998) argued that a distance modulus of $`\mu ^{\mathrm{LMC}}=18.3`$ provides the best fit to the dereddened LMC color-magnitude diagram. Udalski et al. (1998a) and Stanek, Zaritsky, & Harris (1998) applied the I-magnitude based approach of Paczyลski and Stanek (1998) and found a very short distance to the LMC ($`\mu ^{LMC}18.1`$). In response, Cole (1998) and Girardi et al. (1998) suggested that clump giants are not standard candles and that their $`M_I(RC)`$ depend on the metallicity and age of the population. Udalski (1998b, 1998c) countered this criticism by showing that the metallicity dependence is at a low level of about $`0.1`$ mag/dex, and that the $`M_I(RC)`$ is approximately constant for cluster ages between 2 and 10 Gyr. The new determinations of the $`M_I(RC)`$ โ \[Fe/H\] relation by Stanek et al. (2000), Udalski (2000) and Popowski (2000) indicate a moderate slope of $`0.100.20`$ mag/dex. The only clump determination, which results in a truly long distance to the LMC is a study by Romaniello et al. (2000) who investigated the field around supernova SN 1987A, which is not well suited for extinction determinations. Romaniello et al. (2000) also assumed a bright $`M_I(RC)`$ from theoretical models. To address the issue of possible extinction overestimate in earlier studies (see e.g., Zaritsky 1999 for a discussion), Udalski (1998c, 2000) measured clump magnitudes in low extinction regions in and around the LMC clusters. The resulting $`\mu ^{LMC}=18.24\pm 0.08`$ (Udalski 2000) is often perceived as the least model-dependent distance modulus to the LMC obtained from clump giants.
Different methods to determine the RR Lyrae absolute magnitude are analyzed in Popowski & Gould (1999). The results depend on the methods used. When the kinematic or geometric determinations are employed, one obtains $`M_V(RR)=0.71\pm 0.07`$ at \[Fe/H\] $`=-1.6`$ (with $`M_V(RR)=0.77\pm 0.13`$ from the best understood method, statistical parallax). The other methods typically produce or are consistent with brighter values. The representative main sequence fitting to globular clusters gives $`M_V(RR)=0.45\pm 0.12`$ at \[Fe/H\] $`=-1.6`$ (Carretta et al. 2000). When coupled with the Walker (1992) photometry of globular clusters, Popowski & Gould's (1999) best $`M_V(RR)`$ results in $`\mu ^{LMC}=18.33\pm 0.08`$. When the Udalski et al. (1999) photometry of the LMC field RR Lyrae stars is used, one obtains $`\mu ^{LMC}=18.23\pm 0.08`$.
The essence of the approach presented here is a comparison between clump giants and RR Lyrae stars in different environments. If answers from two distance indicators agree then either the systematics have been reduced to negligible levels in both of them or the biases conspire to produce the same answer. This last problem can be tested with an attempt to synchronize distance scales in three different environments, because a conspiracy of systematic errors is not likely to repeat in all environments. Here I show that combining the information on RR Lyrae and red clump stars in the solar neighborhood, Galactic bulge, and LMC provides additional constraints on the local distance scale.
## 2 Assumptions and Observational Data
The results I present in §3 and §4 are not entirely general and have been obtained based on certain theoretical assumptions about the nature of standard candles and populations in different stellar systems. In addition, the conclusions depend on the source of photometry. One does not have much freedom in this regard, but I have made certain choices, which I describe in §2.2.
### 2.1 Theoretical assumptions
This investigation relies strongly on the following two assumptions:
1. The $`M_V(RR)`$–\[Fe/H\] relation for RR Lyrae stars is universal. More specifically, I assume that for every considered system, $`M_V(RR)`$ is only a linear function of this system's metallicity:
$$M_V(RR)=\alpha \left([\mathrm{Fe}/\mathrm{H}]+1.6\right)+\beta .$$
(1)
Moreover, I will assume that the slope $`\alpha =0.18\pm 0.03`$, which is not critical for the method but determines the numerical results. In the most general case, $`M_V(RR)`$ depends on the morphology of the horizontal branch (Lee, Demarque, & Zinn 1990; Caputo et al. 1993). However, for average non-extreme environments (here the character of the environment can be judged using the Lee 1989 index), a linear, universal $`M_V(RR)`$–\[Fe/H\] relation should be a reasonable description. For the RR Lyrae stars of the Galactic halo (either in the solar neighborhood or in Baade's Window) and of the LMC field or globular clusters, equation (1) with universal $`\alpha `$ and $`\beta `$ should approximately hold. The universal character of the calibration is essential to any distance determination with standard candles, and so this assumption is rather standard.
2. The absolute magnitude $`M_I^{\mathrm{BW}}(RC)`$ of the bulge clump giants is known, which in practice means one of two things: either one takes the results of population modeling or infers the value from the Hipparcos-calibrated $`M_I^{\mathrm{HIP}}(RC)`$ of the local clump stars. I will temporarily adopt the second route and assume that there are no population factors except metallicity that influence $`M_I^{\mathrm{BW}}(RC)`$ in the Galactic bulge (with respect to the local clump) or that their contributions cancel out. Again, this is somewhat similar to point 1., but here I am more flexible allowing $`M_I^{\mathrm{LMC}}(RC)`$ in the LMC not to follow the local Hipparcos calibration (that is, I allow population effects of all types).
### 2.2 Data
The calibration of clump giants in the solar neighborhood is based on Hipparcos (Perryman 1997) data for nearly 300 clump giants, as reported by Stanek & Garnavich (1998) and refined by Udalski (2000):
$$M_I^{\mathrm{HIP}}(RC)=(-0.26\pm 0.02)+(0.13\pm 0.07)([\mathrm{Fe}/\mathrm{H}]+0.25)$$
(2)
I assume that the metallicity of the bulge clump in Baade's Window is \[Fe/H\] $`=0.0\pm 0.3`$, consistent with Minniti et al. (1995). As a result, I set $`M_I^{\mathrm{BW}}(RC)=-0.23\pm 0.04`$ (see eq. (2) and §2.1), where the error of $`0.04`$ is dominated by the uncertainty in the metallicity of clump giants in Baade's Window. I stress that one can simply assume $`M_I^{\mathrm{BW}}(RC)`$ without any reference to Hipparcos results and obtain the conclusions reported later in Table 1. Equation (2) and the following considerations serve only as evidence that, in the lack of significant population effects, this choice of $`M_I^{\mathrm{BW}}(RC)`$ would be well justified.
The $`V`$\- and $`I`$-band photometry for the bulge clump giants and RR Lyrae stars originates from, or have been calibrated to the photometric zero-points of, phase-II of the Optical Gravitational Lensing Experiment (OGLE). That is, the data for Baadeโs Window come from Udalski (1998b) and were adjusted according to zero-point corrections given by Paczyลski et al. (1999). When taken at face value, these data result in $`(VI)_0`$ colors<sup>1</sup><sup>1</sup>1Here and thereafter subscript โ0โ indicates dereddened or extinction-free value. of both clump giant and RR Lyrae stars that are 0.11 redder than for their local counterparts. To further describe the input data let me define $`\mathrm{\Delta }`$ for a given stellar system as the difference between the mean dereddened I-magnitude of clump giants and the derredened V-magnitude of RR Lyrae stars at the metallicity of RR Lyrae stars in the Galactic bulge. The quantity $`\mathrm{\Delta }`$ allows one to compare the relative brightness of clump giants and RR Lyrae stars in different environments and so will be very useful for this study (for more discussion see Udalski 1998b and Popowski 2000). In the Baadeโs Window with anomalous horizontal branch colors $`\mathrm{\Delta }^{\mathrm{BW}}I_0^{\mathrm{BW}}(RC)V_0^{\mathrm{BW}}(RR)=1.04\pm 0.04`$. When the color correction considered by Popowski (2000) is taken into account one obtains $`\mathrm{\Delta }^{\mathrm{BW}}=0.93\pm 0.04`$.
In the LMC, I use dereddened $`I_0=17.91\pm 0.05`$ for a "representative red clump". Here "representative" means in clusters (compare to $`I_0=17.88\pm 0.05`$ from Udalski 1998c) or in fields around clusters (compare to $`I_0=17.94\pm 0.05`$ from Udalski 2000). The advantage of using $`I_0`$ from clusters and cluster fields is their low, well-controlled extinction (Udalski 1998c, 2000). I take $`V_0=18.94\pm 0.04`$ for field RR Lyrae stars at \[Fe/H\] $`=-1.6`$ from Udalski et al. (1999) and adopt $`V_0=18.98\pm 0.03`$ at \[Fe/H\] $`=-1.9`$ for the cluster RR Lyrae stars investigated by Walker (1992). The difference in photometry between Udalski et al. (1999) and Walker (1992) may have several sources. The least likely is that the cluster system is displaced with respect to the center of mass of the LMC field. Also, cluster RR Lyrae stars could be intrinsically fainter, but again this is not very probable. I conclude that the difference comes either from 1) extinction, or 2) the zero-points of photometry. The first case would probably point to an overestimation of extinction by OGLE, because it is harder to determine the exact extinction in the field than it is in the clusters. The second case can be tested with independent LMC photometry. In any case, the difference of $`\sim 0.1`$ mag is an indication of how well we currently measure $`V_0(RR)`$ in the LMC.
Finally, let us note that the homogeneity of photometric data was absolutely essential for the investigation of the global slope in the $`M_I(RC)`$ โ \[Fe/H\] relation (Popowski 2000). Here it is not as critical. Still, the common source of data for the Galactic bulge reduces the uncertainty in the $`M_V(RR)`$ calibration. On the other hand, the use of both OGLE and Walkerโs (1992) data for the LMC quantifies a possible level of extinction/photometry uncertainty.
## 3 The method and results
The distance modulus to the Galactic center from RR Lyrae stars is:
$$\mu ^{\mathrm{BW}}(RR)=V_0^{\mathrm{BW}}(RR)-M_V^{\mathrm{BW}}(RR).$$
(3)
I assume the RR Lyrae metallicity of $`[\mathrm{Fe}/\mathrm{H}]_{RR}^{\mathrm{BW}}=-1.0`$ from Walker & Terndrup (1991). The distance modulus to the Galactic center from the red clump can be expressed as:
$$\mu ^{\mathrm{BW}}(RC)=I_0^{\mathrm{BW}}(RC)-M_I^{\mathrm{BW}}(RC).$$
(4)
The condition that $`\mu ^{\mathrm{BW}}(RR)`$ and $`\mu ^{\mathrm{BW}}(RC)`$ are equal to each other<sup>2</sup><sup>2</sup>2For this condition to be exactly true one has to take into account the distribution of clump giants in the bar and of RR Lyrae stars in the spheroidal system, as well as the completeness characteristics of a survey. The analyses from OGLE did not reach this level of detail, and I neglect this small correction here. results in:
$$M_I^{\mathrm{BW}}(RC)-M_V^{\mathrm{BW}}(RR)=I_0^{\mathrm{BW}}(RC)-V_0^{\mathrm{BW}}(RR)$$
(5)
But the right hand side of equation (5) is just $`\mathrm{\Delta }^{\mathrm{BW}}`$, which is either directly taken from dereddened data or determined by solving the color problem (for more detail see Popowski 2000). If there are no population differences between the clump in Baade's Window and the solar neighborhood (as we assumed in §2.1), then $`M_I^{\mathrm{BW}}(RC)`$ is extremely well constrained by the Hipparcos results reported in equation (2). Therefore, equation (5) is in effect the calibration of the absolute magnitude of RR Lyrae stars:
$$M_V^{\mathrm{BW}}(RR)=M_I^{\mathrm{BW}}(RC)-\mathrm{\Delta }^{\mathrm{BW}}$$
(6)
If one calibrates the $`M_V(RR)`$ – \[Fe/H\] relation according to equation (6), then by construction the solar neighborhood's and Baade's Window's distance scales are consistent.
To determine $`M_I^{\mathrm{LMC}}(RC)`$, I construct Udalski's (1998b) diagram. However, both Udalski (1998b) and Popowski (2000) used such diagrams to determine a global slope of the $`M_I(RC)`$ – \[Fe/H\] relation. Because I am interested here just in the LMC, a more powerful approach is to treat the Udalski (1998b) diagram in a discrete way. That is, instead of fitting a line to a few points, one takes the difference between the Baade's Window and LMC values of $`\mathrm{\Delta }`$ as a measure of the difference in $`M_I(RC)`$ between these two stellar systems. Therefore:
$$M_I^{\mathrm{LMC}}(RC)=M_I^{\mathrm{BW}}(RC)-(\mathrm{\Delta }^{\mathrm{BW}}-\mathrm{\Delta }^{\mathrm{LMC}})$$
(7)
The interesting feature of equation (7) is that the calibration of $`M_I^{\mathrm{LMC}}(RC)`$, even though based on RR Lyrae stars, is independent of the zero-point $`\beta `$ of the $`M_V(RR)`$ – \[Fe/H\] relation. Because $`M_I^{\mathrm{LMC}}(RC)`$ leads to a specific value of $`\mu ^{LMC}`$, coupling $`\mu ^{LMC}`$ with the LMC RR Lyrae photometry allows one to calibrate the zero-point of the $`M_V(RR)`$ – \[Fe/H\] relation. However, this calibration is not independent of the one presented in equation (6) and so does not provide any additional information.
Using equations (6) and (7), I calibrate the zero point $`\beta `$ of the $`M_V(RR)`$ – \[Fe/H\] relation as well as $`M_I^{\mathrm{LMC}}(RC)`$ of clump giants in the LMC. The solutions are listed in Table 1. Different assumptions about the color anomaly in the Galactic bulge and the use of either OGLE-II or Walker's (1992) photometry in the LMC result in four classes of $`[M_V(RR),M_I^{LMC}(RC)]`$ solutions (column 1). Following the argument from §2.2, I use one universal $`I_0`$ for clump giants in the LMC (column 2). The brighter RR Lyrae photometry in the LMC comes from OGLE (Udalski et al. 1999) and the fainter from Walker (1992) \[column 3\]. In column 4, I report $`\mathrm{\Delta }^{\mathrm{LMC}}`$, which has been inferred from columns 2 and 3 assuming that the slope $`\alpha `$ of the $`M_V(RR)`$ – \[Fe/H\] relation is 0.18. In column 5, I give $`\mathrm{\Delta }^{BW}`$. The resulting $`M_V(RR)`$ at \[Fe/H\] = $`-1.6`$, $`M_I^{\mathrm{LMC}}(RC)`$, and the LMC distance modulus are shown in columns 6, 7, and 8, respectively.
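As a cross-check, the four classes of solutions follow from simple arithmetic on the numbers quoted above. The Python sketch below (a bookkeeping illustration, not part of the original analysis) applies equations (6) and (7); it assumes the Hipparcos clump calibration $`M_I^{\mathrm{BW}}(RC)=-0.23`$ implied by equation (8), the slope $`\alpha =0.18`$, the bulge RR Lyrae metallicity $`-1.0`$, and the sign convention $`\mathrm{\Delta }=I_0(RC)-V_0(RR)`$.

```python
# Reproduce the four [M_V(RR), M_I_LMC(RC), mu_LMC] solutions of Table 1.
# Assumed inputs (from the text): M_I_BW(RC) = -0.23 (Hipparcos, via eq. 8),
# alpha = 0.18, bulge RR [Fe/H] = -1.0, Delta_BW = -1.04 or -0.93.
M_I_BW = -0.23
ALPHA = 0.18
FEH_BULGE_RR = -1.0
I0_LMC = 17.91                      # representative LMC red clump

def solve(delta_bw, v0_lmc_rr, feh_lmc_rr):
    # Shift the LMC RR Lyrae photometry to the bulge RR metallicity (slope alpha)
    v0_at_bulge_feh = v0_lmc_rr + ALPHA * (FEH_BULGE_RR - feh_lmc_rr)
    delta_lmc = I0_LMC - v0_at_bulge_feh
    M_I_lmc = M_I_BW - (delta_bw - delta_lmc)        # equation (7)
    mu_lmc = I0_LMC - M_I_lmc                        # clump distance modulus
    M_V_rr = M_I_BW - delta_bw + ALPHA * (-1.6 - FEH_BULGE_RR)  # eq. (6), at -1.6
    return round(M_V_rr, 2), round(M_I_lmc, 2), round(mu_lmc, 2)

print(solve(-1.04, 18.94, -1.6))    # OGLE,   anomalous colors: (0.7, -0.33, 18.24)
print(solve(-1.04, 18.98, -1.9))    # Walker, anomalous colors: (0.7, -0.42, 18.33)
print(solve(-0.93, 18.94, -1.6))    # OGLE,   corrected colors: (0.59, -0.44, 18.35)
print(solve(-0.93, 18.98, -1.9))    # Walker, corrected colors: (0.59, -0.53, 18.44)
```

The recovered distance moduli span exactly the 18.24–18.44 range quoted in §4.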
The sensitivity of the results to the theoretical assumptions from §2 can be summarized in the following equation:
$$\delta \beta =\delta M_I^{\mathrm{LMC}}(RC)=-\delta \mu ^{\mathrm{LMC}}=-0.6(\alpha _{\mathrm{true}}-0.18)+(M_{I,\mathrm{true}}^{\mathrm{BW}}(RC)+0.23),$$
(8)
where the three $`\delta `$-type terms indicate potential corrections, $`\alpha _{\mathrm{true}}`$ is the true slope of the RR Lyrae $`M_V(RR)`$ – \[Fe/H\] relation and $`M_{I,\mathrm{true}}^{\mathrm{BW}}(RC)`$ is the true absolute magnitude of clump giants in the Bulge. The multiplying factor of 0.6 in the first term is the difference between the solar neighborhood and Baade's Window metallicities of RR Lyrae stars. The distance scale could be made longer with either a larger (steeper) slope $`\alpha _{\mathrm{true}}`$ or a brighter $`M_{I,\mathrm{true}}^{\mathrm{BW}}(RC)`$ value. Very few $`M_V(RR)`$ – \[Fe/H\] relation determinations argue for slopes steeper than 0.3, and clump giants in the Galactic bulge, which are old, are expected to be on average somewhat fainter than the ones in the solar neighborhood. To give an example of the application of equation (8), let us assume $`\alpha _{\mathrm{true}}=0.3`$ (e.g., Sandage 1993), and $`M_I^{\mathrm{BW}}(RC)=-0.15`$ (Girardi & Salaris 2000; inferred from their $`\mathrm{\Delta }M_I^{RC}`$ in Table 4 without any adjustment for a small \[Fe/H\] mismatch). The first term would result in a correction of $`-0.07`$ mag and the second term would contribute 0.08 mag. In this case the two corrections would almost entirely cancel out, resulting in both $`\beta `$ and $`M_I^{\mathrm{LMC}}(RC)`$ being 0.01 mag fainter and $`\mu ^{\mathrm{LMC}}`$ being 0.01 mag smaller. Even if one ignores the $`M_{I,\mathrm{true}}^{\mathrm{BW}}(RC)`$-related correction, it is hard to make the absolute magnitudes of RR Lyrae and clump stars brighter by more than $`0.07`$ mag. Consequently, the distance moduli to the LMC reported in Table 1 are unlikely to increase by more than $`0.07`$ mag as a result of adjustments to the theoretical assumptions from §2.
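The near-cancellation described above can be made explicit. This short sketch evaluates the two terms of equation (8), taking the sign convention (assumed here) in which a steeper slope and a brighter bulge clump both lengthen the distance scale:

```python
# Worked instance of the sensitivity relation, eq. (8):
# delta_beta = delta_MI_LMC = -delta_mu_LMC
#            = -0.6*(alpha_true - 0.18) + (MI_true_BW + 0.23)
def corrections(alpha_true, MI_true_BW):
    slope_term = -0.6 * (alpha_true - 0.18)
    clump_term = MI_true_BW + 0.23
    return slope_term, clump_term, slope_term + clump_term

# Sandage (1993) slope and the Girardi & Salaris (2000) bulge clump magnitude:
s, c, total = corrections(0.3, -0.15)
print(round(s, 3), round(c, 2), round(total, 2))   # -0.072 0.08 0.01
```

The two contributions nearly cancel, leaving a net shift of only 0.01 mag, as stated in the text.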
Another interesting question is the sensitivity of the results reported in Table 1 to the dereddened magnitudes adopted for the LMC. These dependences are described by the following equations:
$$\delta M_I^{\mathrm{LMC}}(RC)=\left(I_{0,\mathrm{true}}^{\mathrm{LMC}}(RC)-17.91\right)-\left(V_{0,\mathrm{true}}^{\mathrm{LMC}}(RR)-V_0^{\mathrm{LMC}}(RR)\right),$$
(9)
$$\delta \mu ^{\mathrm{LMC}}=\left(V_{0,\mathrm{true}}^{\mathrm{LMC}}(RR)-V_0^{\mathrm{LMC}}(RR)\right),$$
(10)
where $`V_0^{\mathrm{LMC}}(RR)`$ is either the Udalski et al. (1999) or the Walker (1992) value described in §2.2. In this treatment, the obtained distance modulus to the LMC does not depend on the dereddened I-magnitudes of clump giants! This is very fortunate because of the unresolved observational controversy \[$`I_0^{\mathrm{LMC}}(RC)\approx 17.9`$ from Udalski (1998c, 2000) versus $`I_0^{\mathrm{LMC}}(RC)\approx 18.1`$ from Zaritsky (1999) or Romaniello et al. (1999)\]. Note that keeping the current $`V_0^{\mathrm{LMC}}(RR)`$ and adopting a fainter $`I_0^{\mathrm{LMC}}(RC)`$ would result in rather faint values of $`M_I^{\mathrm{LMC}}(RC)\in (-0.33,-0.13)`$, in potential disagreement with population models (see Girardi & Salaris 2000). This may suggest that either Udalski's (1998c, 2000) dereddened clump magnitudes are more accurate or that the dereddened $`V`$-magnitudes for RR Lyrae stars need revision.
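The insensitivity of $`\mu ^{\mathrm{LMC}}`$ to the adopted clump magnitude can be made concrete. In the sketch below the baseline values $`M_I^{\mathrm{LMC}}(RC)=-0.33`$ and $`\mu ^{\mathrm{LMC}}=18.24`$ are assumed for illustration, and a Zaritsky-like $`I_0=18.1`$ is substituted with the RR Lyrae photometry held fixed:

```python
# Equations (9)-(10): a shift in the adopted LMC clump magnitude I_0 moves
# M_I(RC) but leaves the RR Lyrae-based distance modulus unchanged.
def shifted(MI_lmc, mu_lmc, I0_true, V0_true, I0=17.91, V0=18.94):
    dMI = (I0_true - I0) - (V0_true - V0)   # eq. (9)
    dmu = V0_true - V0                      # eq. (10)
    return round(MI_lmc + dMI, 2), round(mu_lmc + dmu, 2)

# Baseline assumed for illustration: M_I(RC) = -0.33, mu = 18.24;
# adopt I_0 = 18.1 while keeping the OGLE RR Lyrae V_0 fixed:
print(shifted(-0.33, 18.24, 18.1, 18.94))   # clump fainter, mu unchanged
```

The clump magnitude moves to the faint end of the range discussed above, while the distance modulus stays at 18.24.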
## 4 Discussion
Using RR Lyrae stars and clump giants, I showed that the requirement of consistency between standard candles in different environments is a powerful tool in calibrating absolute magnitudes and obtaining distances. If the anomalous character of $`(V-I)_0`$ in Baade's Window is real (i.e., not caused by problems with photometry or a misestimate of the coefficient of selective extinction), then the distance scale tends to be shorter. In particular, $`M_V(RR)=0.70\pm 0.05`$ at \[Fe/H\] = $`-1.6`$, and the distance modulus to the LMC spans the range from $`\mu ^{LMC}=18.24\pm 0.08`$ to $`18.33\pm 0.07`$. If the $`(V-I)_0`$ color of stars in Baade's Window is in error and should be standard, then the distance scale is longer. In particular, one can obtain $`M_V(RR)=0.59\pm 0.05`$ at \[Fe/H\] = $`-1.6`$ and the distance modulus from $`\mu ^{LMC}=18.35\pm 0.08`$ to $`18.44\pm 0.07`$. It is important to notice that the reported distance modulus ranges do not change with the assumed value of the dereddened $`I`$-magnitudes of the LMC clump giants, $`I_0^{\mathrm{LMC}}(RC)`$.
Are there any additional constraints that would allow one to select the preferred value of the RR Lyrae zero point $`\beta `$, $`M_I^{\mathrm{LMC}}(RC)`$, and $`\mu ^{\mathrm{LMC}}`$? One fact that indirectly favors the intermediate distance scale ($`\mu ^{\mathrm{LMC}}\approx 18.4`$) is its consistency with the results from classical Cepheids. The value of $`M_V(RR)`$ required for such a solution is only $`1.4\sigma `$ (combined) below the "kinematic" value of Popowski & Gould (1999) and $`1.3\sigma `$ (combined) below the statistical parallax result given by Gould & Popowski (1998), leaving us without a decisive hint. The Twarog et al. (1999) study of two open Galactic clusters (NGC 2420 and NGC 2506) indicates rather bright red clumps. However, the relevance of this result to the LMC is uncertain and, more importantly, its precision is too low to provide significant information. The Beaulieu and Sackett (1998) study of clump morphology in the LMC suggests $`\mu ^{LMC}\approx 18.3`$, probably consistent with the entire (18.24, 18.44) range.
The only significant but ambiguous clue is provided by Udalski's (2000) spectroscopically-based investigation of the red clump in the solar neighborhood. One may entertain the following argument. If uncorrelated metallicity and age are the only population effects influencing $`M_I(RC)`$ in different environments (with age argued to have no effect in this case – Udalski 1998c), then the Hipparcos-based calibration combined with $`M_I^{\mathrm{LMC}}(RC)`$ would naturally lead to an estimate of the average metallicity of clump giants in the LMC. The brightest $`M_I^{\mathrm{LMC}}(RC)=-0.53`$ from Table 1 would result in $`[\mathrm{Fe}/\mathrm{H}]^{\mathrm{LMC}}=-2.33`$! Such a low value is in violent disagreement with observations. Therefore, either uncorrelated metallicity and age are not the only population effects influencing $`M_I(RC)`$ (see Girardi & Salaris 2000 for a discussion) or the Udalski (2000) results coupled with typical LMC metallicities lend strong support to the shorter distance scale. However, unless the selective extinction coefficient toward Baade's Window is unusual, a very short distance scale comes at the price of anomalous $`(V-I)_0`$ bulge colors. Therefore, one is tempted to ask: "Is it normal that $`M_I(RC)`$ follows the local prescription and $`(V-I)_0`$ does not?".
In summary, with currently available photometry, it is possible to obtain mutually consistent RR Lyrae and clump giant distance scales that nevertheless differ by as much as 0.2 magnitudes between the different sets of assumptions. Furthermore, even the presented distance scales may require some additional shift due to possible adjustments in $`\alpha `$, $`M_I^{\mathrm{BW}}(RC)`$, and the zero-points of the adopted photometry. It is clear that further investigations of the population dependence of $`M_I(RC)`$, the Galactic bulge colors, and the zero points of the LMC photometry are needed to better constrain the local distance scale.
I would like to thank Andrew Gould for his valuable comments. I am grateful to the referee whose suggestions improved the presentation of the paper. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. |
# Composite Fermions in Fractional Quantum Hall Systems
## 1 Introduction
The study of the electronic properties of quasi-two-dimensional (2D) systems has resulted in a number of remarkable discoveries in the past two decades. Among the most interesting of these are the integral and fractional quantum Hall effects. In both of these effects, incompressible states of a 2D electron liquid are found at particular values of the electron density for a given value of the magnetic field applied normal to the 2D layer.
The integral quantum Hall effect (IQHE) is rather simple to understand. The incompressibility results from a cyclotron energy gap, $`\hbar \omega _c`$, in the single particle spectrum. When all states below the gap are filled and all states above it are empty, it takes a finite energy $`\hbar \omega _c`$ to produce an infinitesimal compression. Excited states consist of electron–hole pair excitations and require a finite excitation energy. Both localized and extended single particle states are necessary to understand the experimentally observed behavior of the magneto-conductivity.
The fractional quantum Hall effect (FQHE) is more difficult to understand and more interesting in terms of new basic physics. The energy gap that gives rise to the Laughlin incompressible fluid state is completely the result of the interaction between the electrons. The elementary excitations are fractionally charged Laughlin quasiparticles, which satisfy fractional statistics. The standard techniques of many body perturbation theory are incapable of treating FQH systems because of the complete degeneracy of the single particle levels in the absence of the interactions. Laughlin was able to determine the form of the ground state wave function and of the elementary excitations on the basis of physical insight into the nature of the many body correlations. Striking confirmation of Laughlin's picture was obtained by exact diagonalization of the interaction Hamiltonian within the subspace of the lowest Landau level of small systems. Jain, Lopez and Fradkin, and Halperin et al. have extended Laughlin's approach and developed a composite Fermion (CF) description of the 2D electron gas in a strong magnetic field. This CF description has offered a simple picture for the interpretation of many experimental results. However, the underlying reason for the validity of many of the approximations used with the CF approach is not completely understood.
The object of this review is to present a simple and understandable summary of the CF picture as applied to FQH systems. Exact numerical calculations for up to eleven electrons on a spherical surface will be compared with the predictions of the mean field CF picture. The CF hierarchy will be introduced, and its predictions compared with numerical results. It will be shown that sometimes the mean field CF hierarchy correctly predicts Laughlin-like incompressible ground states, and that sometimes it fails.
The CF hierarchy depends on the validity of the mean field approximation. This seems to work well in predicting not only the Laughlin–Jain families of incompressible ground states at particular values of the applied magnetic field, but also in predicting the lowest lying band of states at any value of the magnetic field. The question of when the mean field CF picture works and why will be discussed in some detail. As first suggested by Haldane, the behavior of the pseudopotential $`V(L)`$ describing the energy of interaction of a pair of electrons as a function of their total angular momentum $`L`$ is of critical importance. Some examples of other strongly interacting 2D Fermion systems will be presented, and some problems not yet completely understood will be discussed.
The plan of the paper is as follows. In section 2 the single particle states for electrons confined to a plane in the presence of an applied magnetic field are introduced. The integral and fractional quantum Hall effects are discussed briefly. Haldane's idea that the condensation of Laughlin quasiparticles leads to a hierarchy containing all odd denominator fractions is discussed. In section 3 the numerical calculations for a finite number of electrons confined to a spherical surface in the presence of a radial magnetic field are discussed. Results for a ten electron system at different values of the magnetic field are presented. In section 4 the ideas of fractional statistics and the Chern–Simons transformation are introduced. In section 5 Jain's CF approach is outlined. The sequence of Jain condensed states (given by filling factor $`\nu =n(1+2pn)^{-1}`$, where $`n`$ is any integer and $`p`$ is a positive integer) is shown to result from the mean field approximation. The application of the CF picture to electrons on a spherical surface is shown to predict the lowest band of angular momentum multiplets in a very simple way that involves only the elementary problem of addition of angular momenta. In section 6 the two energy scales, the Landau level separation $`\hbar \omega _c`$ and the Coulomb energy $`e^2/\lambda `$ (where $`\lambda `$ is the magnetic length), are discussed. It is emphasized that the Coulomb interactions and Chern–Simons gauge interactions between fluctuations (beyond the mean field) cannot possibly cancel for arbitrary values of the applied magnetic field. The reason for the success of the CF picture is discussed in terms of the behavior of the pseudopotential $`V(L)`$ and a kind of "Hund's rule" for monopole harmonics. In section 7, a phenomenological Fermi liquid picture is introduced to describe low lying excited states containing three or more Laughlin quasiparticles. In section 8 the CF hierarchy picture is introduced.
Comparison with exact numerical results indicates that the behavior of the quasiparticle pseudopotential is of critical importance in determining the validity of this picture at a particular level of the hierarchy. In section 9 systems containing electrons and valence band holes are investigated. The photoluminescence and the role of excitons and negatively charged exciton complexes are discussed. The final section is a summary.
## 2 Integral and Fractional Quantum Hall Effects
The Hamiltonian for an electron confined to the $`x`$–$`y`$ plane in the presence of a perpendicular magnetic field $`\mathbf{B}`$ is
$$H_0=\frac{1}{2\mu }\left(\mathbf{p}+\frac{e}{c}\mathbf{A}\right)^2.$$
(1)
Here $`\mu `$ is the effective mass, $`\mathbf{p}=(p_x,p_y,0)`$ is the momentum operator and $`\mathbf{A}(x,y)`$ is the vector potential (whose curl gives $`\mathbf{B}`$). For the "symmetric gauge," $`\mathbf{A}=\frac{1}{2}B(y,-x,0)`$, the single particle eigenfunctions are of the form $`\psi _{nm}(r,\theta )=e^{im\theta }u_{nm}(r)`$. The angular momentum of the state $`\psi _{nm}`$ is $`m`$ and its eigenenergy is given by
$$E_{nm}=\frac{1}{2}\hbar \omega _c(2n+1+|m|-m).$$
(2)
In these equations, $`\omega _c=eB/\mu c`$ is the cyclotron frequency, $`n=0`$, 1, 2, …, and $`m=0`$, $`\pm 1`$, $`\pm 2`$, …. The lowest energy states (lowest Landau level) have $`n=0`$ and $`m=0`$, 1, 2, … and energy $`E_{0m}=\frac{1}{2}\hbar \omega _c`$. It is convenient to introduce a complex coordinate $`z=re^{i\theta }=x-iy`$, and to write the lowest Landau level wavefunctions as
$$\psi _{0m}(z)=N_mz^me^{-|z|^2/4},$$
(3)
where $`N_m`$ is a normalization constant. In this expression we have used the magnetic length $`\lambda =\sqrt{\hbar c/eB}`$ as the unit of length. The function $`|\psi _{0m}|^2`$ has its maximum value at a radius $`r_m`$ which is proportional to $`\sqrt{m}`$. All single particle states belonging to a given Landau level are degenerate, and separated in energy from neighboring levels by $`\hbar \omega _c`$.
If the system has a "finite radial range," then the $`m`$ values are restricted to being less than some maximum value ($`m=0`$, 1, 2, …, $`N_\varphi -1`$). The value of $`N_\varphi `$ (the Landau level degeneracy) is equal to the total flux through the sample, $`BC`$ (where $`C`$ is the area), divided by the quantum of flux $`\varphi _0=hc/e`$. The filling factor $`\nu `$ is defined as the ratio of the number of electrons, $`N`$, to $`N_\varphi `$. When $`\nu `$ has an integral value, an infinitesimal decrease in the area $`C`$ requires the promotion of an electron across the cyclotron gap $`\hbar \omega _c`$ to the first unoccupied Landau level, making the system incompressible. This incompressibility, together with the existence of both localized and extended states in the system, is responsible for the observed behavior of the magneto-conductivity of quantum Hall systems at integral filling factors.
In order to construct a many electron wavefunction $`\mathrm{\Psi }(z_1,z_2,\dots ,z_N)`$ corresponding to a completely filled lowest Landau level, the product function which places one electron in each of the $`N_\varphi =N`$ orbitals $`\psi _{0m}`$ ($`m=0`$, 1, …, $`N_\varphi -1`$) must be antisymmetrized. This can be done with the aid of a Slater determinant
$$\mathrm{\Psi }\propto \left|\begin{array}{cccc}1& 1& \cdots & 1\\ z_1& z_2& \cdots & z_N\\ z_1^2& z_2^2& \cdots & z_N^2\\ \vdots & \vdots & & \vdots \\ z_1^{N-1}& z_2^{N-1}& \cdots & z_N^{N-1}\end{array}\right|\mathrm{exp}\left(-\frac{1}{4}\sum _k|z_k|^2\right).$$
(4)
The determinant in equation (4) is the well-known Vandermonde determinant. It is not difficult to show that it is equal to $`\prod _{i<j}(z_i-z_j)`$. Of course, $`N_\varphi `$ is equal to $`N`$ (since each of the $`N_\varphi `$ orbitals is occupied by one electron) and the filling factor $`\nu =1`$.
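The Vandermonde identity is easy to verify numerically. The short Python check below (with $`N=5`$, for which the number of pairs is even, so the two sign orderings of the product agree) compares the determinant of the matrix in equation (4) with the pair product directly:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                                   # N(N-1)/2 = 10 pairs, even, so signs match
z = rng.normal(size=N) + 1j * rng.normal(size=N)

# Matrix of equation (4): row m holds z_j^m for m = 0 .. N-1
V = np.array([[zj ** m for zj in z] for m in range(N)])
det = np.linalg.det(V)

# Pair product over i < j
prod = 1.0 + 0.0j
for i in range(N):
    for j in range(i + 1, N):
        prod *= z[i] - z[j]

print(np.allclose(det, prod))   # True: det = prod_{i<j} (z_i - z_j)
```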
Laughlin noticed that if the factor $`(z_i-z_j)`$ arising from the Vandermonde determinant was replaced by $`(z_i-z_j)^{2p+1}`$, where $`p`$ was an integer, the wavefunction
$$\mathrm{\Psi }_{2p+1}\propto \prod _{i<j}(z_i-z_j)^{2p+1}\mathrm{exp}\left(-\frac{1}{4}\sum _i|z_i|^2\right)$$
(5)
would be antisymmetric, keep the electrons further apart (and therefore reduce the Coulomb repulsion), and correspond to a filling factor $`\nu =(2p+1)^{-1}`$. This results because the highest power of $`z_i`$ in the polynomial factor in $`\mathrm{\Psi }_{2p+1}`$ is $`(2p+1)(N-1)`$ and it must be equal to the highest orbital index ($`m=N_\varphi -1`$), giving $`N_\varphi -1=(2p+1)(N-1)`$ and $`\nu =N/N_\varphi `$ equal to $`(2p+1)^{-1}`$ in the limit of large systems. The additional factor $`\prod _{i<j}(z_i-z_j)^{2p}`$ multiplying $`\mathrm{\Psi }_{m=1}`$ is the Jastrow factor which accounts for correlations between electrons.
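The role of the odd exponent can be illustrated numerically: under exchange of two particles each pair factor supplies a sign $`(-1)^{2p+1}=-1`$, so the trial function is antisymmetric, while an even exponent would give a symmetric (bosonic) function. A minimal check for three particles:

```python
import numpy as np

def psi(zs, m):
    """Laughlin-type trial function of eq. (5) with exponent m on each pair."""
    val = 1.0 + 0.0j
    for i in range(len(zs)):
        for j in range(i + 1, len(zs)):
            val *= (zs[i] - zs[j]) ** m
    return val * np.exp(-0.25 * sum(abs(z) ** 2 for z in zs))

rng = np.random.default_rng(1)
z = list(rng.normal(size=3) + 1j * rng.normal(size=3))
zs = [z[1], z[0], z[2]]                       # exchange particles 1 and 2

print(np.isclose(psi(zs, 3), -psi(z, 3)))     # odd exponent: antisymmetric -> True
print(np.isclose(psi(zs, 2), psi(z, 2)))      # even exponent: symmetric -> True
```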
It is observed experimentally that states with filling factors $`\nu =2/5`$, 3/5, 3/7, etc. exhibit FQH behavior in addition to the Laughlin $`\nu =(2p+1)^{-1}`$ states. Haldane suggested that a hierarchy of condensed states arose from the condensation of quasiparticles (QP's) of "parent" FQH states. In his picture, Laughlin condensed states of the electron system occurred when $`N_\varphi =(2p+1)N_e`$, where the exponent $`2p+1`$ in equation (5) was an odd integer and the symbol $`N_e`$ denoted the number of electrons. Condensed QP states occurred when $`N_e=2qN_{\mathrm{QP}}`$, because the number of places available for inserting a QP in a Laughlin state was $`N_e`$. Haldane required the exponent $`2q`$ to be even "because the QP's are bosons." This scheme gives rise to a hierarchy of condensed states which contains all odd denominator fractions. Haldane cautioned that the validity of the hierarchy scheme at a particular level depended upon the QP interactions, which were totally unknown.
## 3 Numerical Study of Small Systems
Haldane introduced the idea of putting a small number of electrons on a spherical surface of radius $`R`$ at the center of which is a magnetic monopole of strength $`2S\varphi _0`$. The single particle Hamiltonian can be expressed as
$$H_0=\frac{\hbar ^2}{2\mu R^2}\left(\mathbf{L}-S\widehat{R}\right)^2,$$
(6)
where $`\mathbf{L}`$ is the angular momentum operator (in units of $`\hbar `$), $`\widehat{R}`$ is the unit vector in the radial direction, and $`\mu `$ is the mass. The components of $`\mathbf{L}`$ satisfy the usual commutation rules $`[L_\alpha ,L_\beta ]=i\epsilon _{\alpha \beta \gamma }L_\gamma `$. The eigenstates of $`H_0`$ can be denoted by $`|l,m\rangle `$; they are eigenfunctions of $`L^2`$ and $`L_z`$ with eigenvalues $`l(l+1)`$ and $`m`$, respectively. The lowest energy eigenvalue (shell) occurs for $`l=S`$ and has energy $`\frac{1}{2}\hbar \omega _c`$. The $`n`$th excited shell has $`l=S+n`$, and
$$E_n=\frac{\hbar \omega _c}{2S}\left[l(l+1)-S^2\right]=\hbar \omega _c\left[n+\frac{1}{2}+\frac{n(n+1)}{2S}\right],$$
(7)
where the cyclotron energy is equal to $`\hbar \omega _c=S\hbar ^2/\mu R^2`$ and the magnetic length is $`\lambda =R/\sqrt{S}`$. If we concentrate on a partially filled lowest Landau level, we have only $`N_\varphi =2S+1`$ degenerate single particle states (since the electron angular momentum $`l`$ must be equal to $`S`$ and its $`z`$-component $`m`$ can take on values between $`-l`$ and $`l`$). The Hilbert space $`\mathcal{H}_{\mathrm{MB}}`$ of $`N`$ electrons in these $`N_\varphi `$ single particle states contains $`N_{\mathrm{MB}}=N_\varphi ![N!(N_\varphi -N)!]^{-1}`$ antisymmetric many body states. The single particle configurations $`|m_1,m_2,\dots ,m_N\rangle =c_{m_1}^{\dagger }c_{m_2}^{\dagger }\cdots c_{m_N}^{\dagger }|\mathrm{vac}\rangle `$ can be chosen as a basis of $`\mathcal{H}_{\mathrm{MB}}`$. Here $`c_m^{\dagger }`$ creates an electron in the single particle state $`|l=S,m\rangle `$, and $`|\mathrm{vac}\rangle `$ is the vacuum state. The space $`\mathcal{H}_{\mathrm{MB}}`$ can also be spanned by the angular momentum eigenfunctions, $`|L,M,\alpha \rangle `$, where $`L`$ is the total angular momentum, $`M`$ its $`z`$-component, and $`\alpha `$ is a label which distinguishes different multiplets with the same $`L`$. If $`\hbar \omega _c\gg e^2/\lambda `$, the diagonalization of the interaction Hamiltonian
$$H_I=\sum _{i<j}\frac{e^2}{r_{ij}}$$
(8)
in the Hilbert space $`\mathcal{H}_{\mathrm{MB}}`$ of the lowest Landau level gives an excellent approximation to the exact eigenstates of an interacting $`N`$ electron system. The single particle configuration basis is particularly convenient since the many body interaction matrix elements in this basis, $`\langle m_1,m_2,\dots ,m_N|H_I|m_1^{\prime },m_2^{\prime },\dots ,m_N^{\prime }\rangle `$, are expressed through the two body ones, $`\langle m_1,m_2|H_I|m_1^{\prime },m_2^{\prime }\rangle `$, in a very simple way. On the other hand, using the angular momentum eigenstates $`|L,M,\alpha \rangle `$ allows the explicit decomposition of the total Hilbert space $`\mathcal{H}_{\mathrm{MB}}`$ into total angular momentum eigensubspaces. Because the interaction Hamiltonian is a scalar, the Wigner–Eckart theorem tells us that
$$\langle L^{\prime },M^{\prime },\alpha ^{\prime }|H_I|L,M,\alpha \rangle =\delta _{LL^{\prime }}\delta _{MM^{\prime }}V_{\alpha \alpha ^{\prime }}(L),$$
(9)
where the reduced matrix element
$$V_{\alpha \alpha ^{\prime }}(L)=\langle L,\alpha ^{\prime }|H_I|L,\alpha \rangle $$
(10)
is independent of $`M`$. The eigenfunctions of $`L`$ are simpler to find than those of $`H_I`$, because efficient numerical techniques exist for obtaining eigenfunctions of operators with known eigenvalues. Finding the eigenfunctions of $`L`$ and then using the Wigner–Eckart theorem considerably reduces the dimensions of the matrices that must be diagonalized to obtain the eigenvalues of $`H_I`$. Some matrix dimensions are listed in table 1, where the degeneracy of the lowest Landau level and the dimensions of the total many body Hilbert space, $`N_{\mathrm{MB}}`$, and of the largest $`M`$ subspace, $`N_{\mathrm{MB}}(M=0)`$, are given for the Laughlin $`\nu =1/3`$ state of six to eleven electron systems (the $`N`$ electron Laughlin $`\nu =(2p+1)^{-1}`$ state occurs at $`N_\varphi =(2p+1)(N-1)`$).
For example, in the eleven electron system at $`\nu =1/3`$, the $`L=0`$ block that must be diagonalized to obtain the Laughlin ground state is only 1160 by 1160, small compared to the total dimension of 1,371,535 for the entire $`M=0`$ subspace.
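These dimensions are straightforward to reproduce. The sketch below counts, by dynamic programming, the number of $`N`$-electron configurations in each $`M`$ subspace for the eleven-electron $`\nu =1/3`$ system ($`2S=30`$); the text quotes 1,371,535 states with $`M=0`$, and the number of $`L=0`$ multiplets (1160) equals the $`M=0`$ count minus the $`M=1`$ count.

```python
from math import comb

def m_subspace_dimensions(N, two_S):
    """Count N-electron lowest-Landau-level configurations by doubled total M.

    Single-particle states have l = S with m = -S..S; doubled m values
    (2m = -2S, -2S+2, ..., 2S) keep the bookkeeping in integers.
    """
    dp = {(0, 0): 1}                 # (electrons placed, doubled total M) -> count
    for two_m in range(-two_S, two_S + 1, 2):
        new = dict(dp)
        for (k, s), c in dp.items():
            if k < N:
                key = (k + 1, s + two_m)
                new[key] = new.get(key, 0) + c
        dp = new
    return {s: c for (k, s), c in dp.items() if k == N}

N, two_S = 11, 30                    # eleven electrons at nu = 1/3
counts = m_subspace_dimensions(N, two_S)
total = sum(counts.values())
# Full Hilbert space, M = 0 block, and number of L = 0 multiplets
print(total, counts[0], counts[0] - counts[2])
```

The full dimension necessarily equals $`\binom{2S+1}{N}`$, and the $`M`$ distribution is symmetric about zero.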
Typical results for the energy spectrum are shown in figure 1 for $`N=10`$ and a few different values of $`2S`$ between 21 and 30. The low energy bands marked with open circles and solid lines will be discussed in detail in the following sections. Frames (a) and (f) show two $`L=0`$ incompressible ground states: the Laughlin state at $`\nu =1/3`$ and the Jain state at $`\nu =2/5`$, respectively. In the other frames, a number of QP's form the lowest energy bands.
## 4 Chern–Simons Transformation and Statistics in 2D Systems
Before discussing the Chern–Simons gauge transformation and its relation to particle statistics, it is useful to look at a system of two particles each of charge $`e`$ and mass $`\mu `$, confined to a plane, in the presence of a perpendicular magnetic field $`\mathbf{B}=(0,0,B)=\nabla \times \mathbf{A}(\mathbf{r})`$. Because $`\mathbf{A}`$ is linear in the coordinate $`\mathbf{r}=(x,y)`$ \[e.g., in the symmetric gauge, $`\mathbf{A}(\mathbf{r})=\frac{1}{2}B(y,-x)`$\], the Hamiltonian separates into the center of mass (CM) and relative (REL) coordinate pieces, with $`\mathbf{R}=\frac{1}{2}(\mathbf{r}_1+\mathbf{r}_2)`$ and $`\mathbf{r}=\mathbf{r}_1-\mathbf{r}_2`$ being the CM and REL coordinates, respectively. The energy spectra of $`H_{\mathrm{CM}}`$ and $`H_{\mathrm{REL}}`$ are identical to that of a single particle of mass $`\mu `$ and charge $`e`$. We have already seen that for the lowest Landau level $`\psi _{0m}=N_mr^me^{im\varphi }e^{-r^2/4\lambda ^2}`$. For the relative motion $`\varphi =\varphi _1-\varphi _2`$, and an interchange of the pair, $`P\psi (\mathbf{r}_1,\mathbf{r}_2)=\psi (\mathbf{r}_2,\mathbf{r}_1)`$, is accomplished by replacing $`\varphi `$ by $`\varphi +\pi `$. In 3D systems, where two consecutive interchanges must result in the original wavefunction, this implies that $`e^{im\pi }`$ must be equal to either $`+1`$ ($`m`$ even; Bosons) or $`-1`$ ($`m`$ odd; Fermions). It is well-known that for 2D systems $`m`$ need not be an integer. Interchange of a pair of identical particles can give $`P\psi (\mathbf{r}_1,\mathbf{r}_2)=e^{i\pi \theta }\psi (\mathbf{r}_1,\mathbf{r}_2)`$, where the statistical parameter $`\theta `$ can assume non-integral values leading to anyon statistics.
A Chern–Simons (CS) transformation is a singular gauge transformation in which an electron creation operator $`\psi _e^{\dagger }(\mathbf{r})`$ is replaced by a composite particle operator $`\psi ^{\dagger }(\mathbf{r})`$ given by
$$\psi ^{\dagger }(\mathbf{r})=\psi _e^{\dagger }(\mathbf{r})\mathrm{exp}\left[i\alpha \int d^2r^{\prime }\mathrm{arg}(\mathbf{r}-\mathbf{r}^{\prime })\psi ^{\dagger }(\mathbf{r}^{\prime })\psi (\mathbf{r}^{\prime })\right].$$
(11)
Here $`\mathrm{arg}(\mathbf{r}-\mathbf{r}^{\prime })`$ is the angle the vector $`\mathbf{r}-\mathbf{r}^{\prime }`$ makes with the $`x`$-axis and $`\alpha `$ is an arbitrary parameter. The kinetic energy operator can be written in terms of the transformed operator as
$$K=\frac{1}{2\mu }\int d^2r\,\psi ^{\dagger }(\mathbf{r})\left[-i\hbar \nabla +\frac{e}{c}\mathbf{A}(\mathbf{r})+\frac{e}{c}\mathbf{a}(\mathbf{r})\right]^2\psi (\mathbf{r}).$$
(12)
Here
$$\mathbf{a}_{\mathbf{r}^{\prime }}(\mathbf{r})=\frac{\alpha \varphi _0}{2\pi }\frac{\widehat{z}\times (\mathbf{r}-\mathbf{r}^{\prime })}{|\mathbf{r}-\mathbf{r}^{\prime }|^2}$$
(13)
and
$$\mathbf{a}(\mathbf{r})=\int d^2r^{\prime }\,\mathbf{a}_{\mathbf{r}^{\prime }}(\mathbf{r})\,\psi ^{\dagger }(\mathbf{r}^{\prime })\psi (\mathbf{r}^{\prime }),$$
(14)
where $`\widehat{z}`$ is a unit vector perpendicular to the 2D layer. The CS transformation can be thought of as the attachment to each particle of a flux tube carrying a fictitious flux $`\alpha \varphi _0`$ (where $`\varphi _0=hc/e`$ is the quantum of flux) and a fictitious charge $`e`$ which couples in the standard way to the vector potential caused by the flux tubes on every other particle. Here $`\mathbf{a}_{\mathbf{r}^{\prime }}(\mathbf{r})`$ is interpreted as the vector potential at position $`\mathbf{r}`$ due to a magnetic flux of strength $`\alpha \varphi _0`$ localized at $`\mathbf{r}^{\prime }`$, and $`\mathbf{a}(\mathbf{r})`$ is the total vector potential at position $`\mathbf{r}`$ due to all CS fluxes. The CS magnetic field associated with the particle at $`\mathbf{r}^{\prime }`$ is $`\mathbf{b}(\mathbf{r})=\nabla \times \mathbf{a}_{\mathbf{r}^{\prime }}(\mathbf{r})=\alpha \varphi _0\delta (\mathbf{r}-\mathbf{r}^{\prime })\widehat{z}`$. Because two charged particles cannot occupy the same position, one particle never senses the magnetic field of the other particles, but it does sense the vector potential resulting from their CS fluxes. The classical equations of motion are unchanged by the presence of the CS flux, but the quantum statistics of the particles are changed unless $`\alpha `$ is an even integer.
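The flux-tube interpretation can be checked numerically: the line integral of $`\mathbf{a}_{\mathbf{r}^{\prime }}(\mathbf{r})`$ around any loop enclosing $`\mathbf{r}^{\prime }`$ equals $`\alpha \varphi _0`$, and vanishes for loops that do not enclose it. A minimal sketch (our illustrative units, $`\varphi _0=1`$ and $`\alpha =2`$):

```python
import math

# Illustration in our units: phi0 = 1, and alpha = 2 flux quanta attached per particle.
alpha, phi0 = 2.0, 1.0
rp = (0.3, -0.2)   # position r' of the flux tube

def a_cs(x, y):
    """a_{r'}(r) = (alpha*phi0 / 2pi) zhat x (r - r') / |r - r'|^2."""
    dx, dy = x - rp[0], y - rp[1]
    r2 = dx*dx + dy*dy
    pref = alpha * phi0 / (2.0 * math.pi)
    return (-pref * dy / r2, pref * dx / r2)   # zhat x (dx, dy) = (-dy, dx)

def loop_integral(cx, cy, R, n=20000):
    """Closed line integral of a around a circle of radius R centered at (cx, cy)."""
    dt = 2.0 * math.pi / n
    total = 0.0
    for k in range(n):
        t = k * dt
        ax, ay = a_cs(cx + R*math.cos(t), cy + R*math.sin(t))
        total += (ax * (-R*math.sin(t)) + ay * (R*math.cos(t))) * dt
    return total

print(loop_integral(rp[0], rp[1], 1.0))        # ~ alpha*phi0 = 2, any enclosing radius
print(loop_integral(rp[0] + 0.1, rp[1], 0.5))  # off-center but still enclosing: ~ 2
print(loop_integral(5.0, 5.0, 0.5))            # loop not enclosing r': ~ 0
```

The enclosed flux is independent of the loop's shape or size, which is the statement $`\mathbf{b}(\mathbf{r})=\alpha \varphi _0\delta (\mathbf{r}-\mathbf{r}^{\prime })\widehat{z}`$ in integral form.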
For the two particle system, the vector potential associated with the CS flux, $`\mathbf{a}_{\mathbf{r}_2}(\mathbf{r}_1)`$, depends only on the relative coordinate $`\mathbf{r}=\mathbf{r}_1-\mathbf{r}_2`$. When $`\mathbf{a}(\mathbf{r})`$ is added to $`\mathbf{A}(\mathbf{r})`$, the vector potential of the applied magnetic field, the Schrödinger equation has a solution
$$\stackrel{~}{\psi }_m=e^{i\alpha \varphi }\psi _m,$$
(15)
where $`\psi _m`$ is the solution with $`\alpha =0`$ (i.e. in the absence of CS flux). If $`\alpha `$ is an odd integer, Boson and Fermion statistics are interchanged; if $`\alpha `$ is even, no change in statistics occurs and electrons are transformed into composite Fermions with an identical energy spectrum.
The Hamiltonian for the composite particle system (charged particles with attached flux tubes) is much more complicated than the original system with $`\alpha =0`$. What is gained by making the CS transformation? The answer is that one can use the "mean field" approximation, in which $`\mathbf{A}(\mathbf{r})+\mathbf{a}(\mathbf{r})`$, the vector potential of the external plus CS magnetic fields, is replaced by $`\mathbf{A}(\mathbf{r})+\langle \mathbf{a}(\mathbf{r})\rangle `$, where $`\langle \mathbf{a}(\mathbf{r})\rangle `$ is the mean field value of $`\mathbf{a}(\mathbf{r})`$ obtained by simply replacing $`\varrho (\mathbf{r}^{\prime })=\psi ^{\dagger }(\mathbf{r}^{\prime })\psi (\mathbf{r}^{\prime })`$ by its average value $`\varrho _0`$ in equation (14). A mean field energy spectrum can be constructed in which the massive degeneracy of the original partially filled electron Landau level disappears. One might then hope to treat both the Coulomb interaction and the CS gauge field interactions among the fluctuations (beyond the mean field) by standard many body perturbation techniques (e.g. by the random phase approximation, RPA). Unfortunately, there is no small parameter for a many body perturbation expansion unless $`\alpha `$, the number of CS flux quanta attached to each particle, is small compared to unity. However, a Landau–Silin type Fermi liquid approach can take account of the short range correlations phenomenologically. A number of excellent papers on anyon superconductivity treat CS gauge interactions by standard many body techniques. Halperin and collaborators have treated the half filled Landau level as a liquid of composite Fermions moving in zero effective magnetic field. Their RPA–Fermi-liquid approach gives a surprisingly satisfactory account of the properties of that state.
The vector potential associated with fluctuations beyond the mean field is given by $`\delta \mathbf{a}(\mathbf{r})=\mathbf{a}(\mathbf{r})-\langle \mathbf{a}(\mathbf{r})\rangle `$. The perturbation to the mean field Hamiltonian contains terms both linear and quadratic in $`\delta \mathbf{a}(\mathbf{r})`$, resulting in both two body interaction terms, containing $`\varrho (\mathbf{r}_1)\varrho (\mathbf{r}_2)`$, and three body interaction terms, containing $`\varrho (\mathbf{r}_1)\varrho (\mathbf{r}_2)\varrho (\mathbf{r}_3)`$. The three body terms are usually ignored, though for $`\alpha `$ of the order of unity this approximation is of questionable validity.
## 5 Jain's Composite Fermion Picture
Jain noted that in the mean field approximation, an effective filling factor $`\nu ^{*}`$ of the composite Fermions was related to the electron filling factor $`\nu `$ by the relation
$$(\nu ^{*})^{-1}=\nu ^{-1}-2p.$$
(16)
Remember that $`\nu ^{-1}`$ is equal to the number of flux quanta of the applied magnetic field per electron, and $`2p`$ is the (even) number of CS flux quanta (oriented opposite to the applied magnetic field) attached to each electron in the CS transformation. Equation (16) implies that when $`\nu ^{*}=\pm 1`$, $`\pm 2`$, … (negative values correspond to the effective magnetic field $`B^{*}`$ seen by the CFs being oriented opposite to $`B`$) and a non-degenerate mean field CF ground state occurs, then $`\nu =\nu ^{*}(1+2p\nu ^{*})^{-1}`$. This Jain sequence of condensed states ($`\nu =1/3`$, $`2/5`$, $`3/7`$, … and $`\nu =2/3`$, $`3/5`$, … for $`p=1`$) is the set of FQH states most prominent in experiment. When $`\nu ^{*}`$ is not an integer, QPs of the neighboring Jain state will occur.
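The relation (16) can be turned into a two-line generator of the Jain sequence; the sketch below (exact rational arithmetic, our illustration) reproduces the fillings quoted above:

```python
from fractions import Fraction

def jain_filling(nu_star, p):
    """Electron filling from the CF filling nu* via (nu*)^{-1} = nu^{-1} - 2p,
    i.e. nu = nu* / (2p nu* + 1); negative nu* means B* antiparallel to B."""
    return Fraction(nu_star, 2*p*nu_star + 1)

print([jain_filling(n, 1) for n in (1, 2, 3)])     # the p = 1 sequence 1/3, 2/5, 3/7
print(jain_filling(-2, 1), jain_filling(-3, 1))    # reversed-field members 2/3, 3/5
print([jain_filling(n, 2) for n in (1, 2)])        # the p = 2 family starts 1/5, 2/9
```

Using `Fraction` keeps the fillings exact; its automatic sign normalization handles the negative-$`\nu ^{*}`$ branch without any special casing.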
It is quite remarkable that the mean field CF picture predicts not only the Jain sequence of incompressible ground states, but also the correct band of low energy states for any value of the applied magnetic field. This is very nicely illustrated for the case of $`N`$ electrons on a Haldane sphere. When the monopole strength seen by an electron has the value $`2S`$, the effective monopole strength seen by a CF is $`2S^{*}=2S-2p(N-1)`$. This equation reflects the fact that a given CF senses the vector potential produced by the CS flux on all other particles, but not its own CS flux. In table 2 the ten particle system is described for a number of values of $`2S`$ between 29 and 15.
The Laughlin $`\nu =1/3`$ state occurs at $`2S_3=3(N-1)=27`$. For values of $`2S`$ different from this value, $`2S-2S_3=\pm N_{\mathrm{QP}}`$ ("$`+`$" corresponds to quasiholes, QH, and "$`-`$" to quasielectrons, QE). Let us apply the CF description to the ten electron spectra in figure 1. At $`2S=27`$, we take $`p=1`$ and attach two CS flux quanta to each electron. This gives $`2S^{*}=9`$, so that the ten CFs completely fill the $`2S^{*}+1`$ states in the lowest angular momentum shell (lowest Landau level). There is a gap $`\hbar \omega _c^{*}=\hbar eB^{*}/\mu c`$ to the next shell, which is responsible for the incompressibility of the Laughlin state. Just as $`|S|`$ played the role of the angular momentum of the lowest shell of electrons, $`l^{*}=|S^{*}|`$ plays the role of the CF angular momentum, and $`2|S^{*}|+1`$ is the degeneracy of the CF shell. Thus, the states with $`2S=26`$ and 28 contain a single quasielectron (QE) and quasihole (QH), respectively. For the QE state, $`2S^{*}=8`$ and the lowest shell of angular momentum $`l_0^{*}=4`$ can accommodate only nine CFs. The tenth is the QE in the $`l_1^{*}=l_0^{*}+1=5`$ shell, giving the total angular momentum $`L=5`$. For the QH state, $`2S^{*}=10`$ and the lowest shell can accommodate eleven CFs, each with angular momentum $`l_0^{*}=5`$. The one empty state (QH) gives $`L=l^{*}=5`$. For $`2S=25`$ we obtain $`2S^{*}=7`$, and there are two QEs, each of angular momentum $`l_1^{*}=9/2`$, in the first excited CF shell. Adding the angular momenta of the two QEs gives the band of multiplets $`L=0`$, 2, 4, 6, and 8. Similarly, for $`2S=29`$ we obtain $`2S^{*}=11`$, and there are two QHs, each with $`l_0^{*}=11/2`$, resulting in the allowed pair states at $`L=0`$, 2, 4, 6, 8, and 10. At $`2S=21`$, the lowest shell with $`l_0^{*}=3/2`$ can accommodate only four CFs, but the other six CFs exactly fill the excited $`l_1^{*}=5/2`$ shell.
The resulting incompressible ground state is the Jain $`\nu =2/5`$ state, since $`\nu ^{*}=2`$ for the two filled shells. A similar argument leads to $`\nu ^{*}=-2`$ (the minus sign means that $`B^{*}`$ is oriented opposite to $`B`$) and $`\nu =2/3`$ at $`2S=15`$. At $`2S=30`$, the addition of three QH angular momenta of $`l_0^{*}=6`$ gives the following band of low lying multiplets: $`L=1`$, $`3^2`$, 4, $`5^2`$, $`6^2`$, $`7^2`$, 8, $`9^2`$, 10, 11, 12, 13, and 15. As demonstrated on an example in figure 1, this simple mean field CF picture correctly predicts the band of low energy multiplets for any number of electrons $`N`$ and for any value of $`2S`$.
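The shell-filling bookkeeping used in these examples is easy to automate. The sketch below (our helper names; it handles at most one partially filled CF shell, which suffices for the cases quoted) reproduces the ten-electron counting:

```python
def cf_shells(N, two_S, p=1):
    """Mean-field CF bookkeeping on the Haldane sphere (a sketch; assumes at most
    one partially filled CF shell).  Returns (2S*, l0*, number of QEs, number of
    QHs, angular momentum of the quasiparticle shell, or None if the shell is full)."""
    two_S_star = two_S - 2*p*(N - 1)
    l0 = abs(two_S_star) / 2.0       # l0* = |S*|
    d0 = int(2*l0 + 1)               # degeneracy of the lowest CF shell
    if N <= d0:
        n_qe, n_qh = 0, d0 - N       # holes (if any) live in the lowest shell
        l_qp = l0 if n_qh else None
    else:
        n_qe, n_qh = N - d0, 0       # extra CFs go to the l1* = l0* + 1 shell
        l_qp = l0 + 1
    return two_S_star, l0, n_qe, n_qh, l_qp

def pair_L_values(l):
    """Allowed total L for two identical fermions with angular momentum l:
    antisymmetry keeps only L = 2l-1, 2l-3, ..."""
    out, L = [], 2*l - 1
    while L >= 0:
        out.append(int(L))
        L -= 2
    return sorted(out)

print(cf_shells(10, 27))    # Laughlin 1/3: 2S* = 9, filled CF shell, no quasiparticles
print(cf_shells(10, 25))    # two QEs in the l1* = 9/2 shell
print(pair_L_values(4.5))   # two-QE band at 2S = 25: L = 0, 2, 4, 6, 8
print(pair_L_values(5.5))   # two-QH band at 2S = 29: L = 0, 2, 4, 6, 8, 10
```

The two-quasiparticle bands quoted in the text come out of `pair_L_values` automatically, because identical fermions in a shell of angular momentum $`l`$ can only pair into $`L=2l-1,2l-3,\mathrm{\dots }`$.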
## 6 Energy Scales and the Electron Pseudopotentials
The mean field composite Fermion picture is remarkably successful in predicting the low energy multiplets in the spectrum of $`N`$ electrons on a Haldane sphere. It was suggested originally that this success resulted from the cancellation of the Coulomb and Chern–Simons gauge interactions among fluctuations beyond the mean field. In figure 2, we show the lowest bands of multiplets for eight non-interacting electrons and for the same number of non-interacting mean field CFs at $`2S=21`$.
The energy scale associated with the CS gauge interactions which convert the electron system in frame (a) to the CF system in frame (b) is $`\hbar \omega _c^{*}\propto B`$. The energy scale associated with the electron-electron Coulomb interaction is $`e^2/\lambda \propto \sqrt{B}`$. The Coulomb interaction lifts the degeneracy of the non-interacting electron bands in frame (a). However, for a very large value of $`B`$ the Coulomb energy can be made arbitrarily small compared to the CS energy (as marked with a shaded rectangle in figure 2), i.e. to the separation between the CF Landau levels. The energy separations in the mean field CF model are thus completely wrong, but the structure of the low lying states (i.e., which angular momentum multiplets form the low lying bands) is very similar to that of the fully interacting electron system and completely different from that of the non-interacting electron system.
### 6.1 Two Fermion Problem
An intuitive picture of why this occurs can be obtained by considering the two Fermion problem. The relative (REL) motion of a pair of electrons $`(ij)`$ is described by a coordinate $`z_{ij}=z_i-z_j=r_{ij}e^{i\varphi _{ij}}`$, and for the lowest Landau level its wavefunction contains a factor $`z_{ij}^m`$, where $`m=1`$, 3, 5, …. If every pair of particles has identical behavior, the many particle wavefunction must contain a similar factor for each pair, giving a total factor $`\prod _{i<j}z_{ij}^m`$. As we have seen, the highest power of $`z_i`$ in this product is $`m(N-1)`$. If $`m(N-1)`$ is equal to $`N_\varphi -1=2S`$, the maximum value of the $`z`$-component of the single particle angular momentum, the Laughlin $`\nu =m^{-1}`$ wavefunction results. For electrons, the $`m`$th cyclotron orbit, whose radius is $`r_m`$, encloses a flux $`m\varphi _0`$ (i.e. $`\pi r_m^2B=m\varphi _0`$). For a Laughlin $`\nu =m^{-1}`$ state the pair function must have a radius $`r_m=r_1\sqrt{m}`$. Let us describe the CF orbits by radius $`\varrho _{\stackrel{~}{m}}`$ and require that the $`\stackrel{~}{m}`$th orbit enclose $`\stackrel{~}{m}`$ flux quanta. It is apparent that if a flux tube carrying two flux quanta (oriented opposite to the applied magnetic field $`B`$) is attached to each electron in the CS transformation of the $`\nu =1/3`$ state, the smallest orbit of radius $`\varrho _{\stackrel{~}{m}=1}`$ has exactly the same size as $`r_{m=3}`$. Both orbits enclose three flux quanta of the applied field, but the CF orbit also encloses the two oppositely oriented CS flux quanta attached to the electrons to form the CFs. In the absence of electron–electron interactions, the energies of these orbits are unchanged, since they still belong to the degenerate single particle states of the lowest Landau level.
In the mean field approximation the CS fluxes are replaced by a spatially uniform magnetic field, leading to an effective field $`B^{*}=B/m`$. The orbits of the CF pair states in the mean field approximation are exactly the same as those of the exact CS Hamiltonian. The smallest orbit has radius $`\varrho _{\stackrel{~}{m}=1}`$, equivalent to the electron orbit $`r_{m=3}`$. However, in the mean field approximation the energies are changed (because $`\omega _c^{*}=eB^{*}/\mu c`$ replaces $`\omega _c`$). This energy change leads to completely incorrect mean field CF energies, but the mean field CF orbitals give the correct structure to the low lying set of multiplets.
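A quick numeric restatement of the orbit-size argument (our units, $`\varphi _0=B=1`$): the smallest CF orbit in the mean field $`B^{*}=B/3`$ coincides with the electron orbit $`r_{m=3}`$:

```python
import math

phi0, B = 1.0, 1.0   # our illustrative units

def orbit_radius(m_flux, field):
    """Radius of the orbit enclosing m_flux flux quanta of the given field:
    pi r^2 field = m_flux * phi0."""
    return math.sqrt(m_flux * phi0 / (math.pi * field))

r1, r3 = orbit_radius(1, B), orbit_radius(3, B)
rho1 = orbit_radius(1, B / 3.0)   # smallest CF orbit in the mean field B* = B/3
print(r3 / r1)       # sqrt(3): the scaling r_m = r_1 sqrt(m)
print(rho1, r3)      # equal: the smallest CF orbit has the size of r_{m=3}
```

The equality of `rho1` and `r3` is just flux counting: one effective flux quantum at $`B/3`$ occupies the same area as three flux quanta at $`B`$.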
In the presence of a repulsive interaction, the low lying energy states will have the largest possible value of $`m`$. For a monopole strength $`2S=m(N-1)`$, where $`m`$ is an odd integer, every pair can have radius $`r_m`$ and avoid the large repulsion associated with $`r_1`$, $`r_3`$, …, $`r_{m-2}`$. These ideas can be made somewhat more rigorous by using methods of atomic and nuclear physics for studying angular momentum shells of interacting Fermions.
### 6.2 Two Body Interaction Pseudopotential
As first suggested by Haldane , the behavior of the interacting many electron system depends entirely on the behavior of the two body interaction pseudopotential, which is defined as the interaction energy $`V`$ of a pair of electrons as a function of their pair angular momentum. In the spherical geometry, in order to allow for meaningful comparison of the pseudopotentials obtained for different values of $`2S`$ (and thus different single electron angular momenta $`l`$), it is convenient to use the "relative" angular momentum $`\mathcal{R}=2l-L_{12}`$ rather than $`L_{12}`$ (the length of $`\widehat{L}_{12}=\widehat{l}_1+\widehat{l}_2`$). The pair states with a given $`\mathcal{R}=m`$ (an odd integer) obtained on a sphere for different $`2S`$ are equivalent and correspond to the pair state on a plane with the relative (REL) motion described by angular momentum $`m`$ and radius $`r_m`$. The pair state with the smallest allowed orbit (and largest repulsion) has $`\mathcal{R}=1`$ on a sphere or $`m=1`$ on a plane, and larger $`\mathcal{R}`$ and $`m`$ means larger average separation. In the limit of $`\lambda /R\to 0`$ (i.e., either $`2S\to \mathrm{}`$ or $`R\to \mathrm{}`$), the pair wavefunctions and energies calculated on a sphere for $`\mathcal{R}=m`$ converge to the planar ones ($`\psi _{0m}`$ and its energy).
The pseudopotentials $`V(\mathcal{R})`$ are plotted in figure 3 for a number of values of the monopole strength $`2S`$.
The open circles mark the pseudopotential calculated on a plane ($`\mathcal{R}=m`$). At small $`\mathcal{R}`$ the pseudopotentials rise very quickly with decreasing $`\mathcal{R}`$ (i.e., decreasing separation). More importantly, they increase more quickly than linearly as a function of $`L_{12}(L_{12}+1)`$. The pseudopotentials with this property form a class of so-called "short range" repulsive pseudopotentials. If the repulsive interaction has short range, the low energy many body states must, to the extent that it is possible, avoid pair states with the smallest values of $`\mathcal{R}`$ (or $`m`$) and the maximum two body repulsion.
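For the planar limit the pseudopotential can be checked explicitly. In units of $`e^2/\lambda `$ (with $`\lambda =1`$ and relative-motion magnetic length $`\lambda _r=\sqrt{2}\lambda `$; the conventions here are our own), the lowest Landau level pseudopotential has the closed form $`V(m)=\mathrm{\Gamma }(m+1/2)/(2\,m!)`$, which the sketch below verifies against direct radial quadrature of $`e^2/r`$:

```python
import math

# Lowest-Landau-level pair pseudopotential on the plane, in units e^2/lambda with
# lambda = 1.  For the relative motion |psi_0m|^2 ~ r^{2m} e^{-r^2/4} (relative
# magnetic length sqrt(2)), and V(m) = <e^2/r> = Gamma(m + 1/2) / (2 m!).
def V_closed(m):
    return math.gamma(m + 0.5) / (2.0 * math.factorial(m))

def V_quadrature(m, rmax=30.0, n=100000):
    """Direct radial quadrature of <1/r> in the weight r^{2m} e^{-r^2/4}."""
    num = den = 0.0
    h = rmax / n
    for k in range(1, n):
        r = k * h
        w = r**(2*m) * math.exp(-r*r/4.0)
        num += w          # (1/r) * w * r dr = w dr
        den += w * r      # normalization: w * r dr
    return num / den

for m in (1, 3, 5):
    print(m, V_closed(m), round(V_quadrature(m), 5))
# V decreases with m: pairs with larger relative angular momentum (larger average
# separation, larger R on the sphere) feel a weaker Coulomb repulsion.
```

The "short range" property discussed above is a statement about how fast these values fall off at small separation, i.e. how steeply $`V`$ rises as $`\mathcal{R}`$ (or $`m`$) decreases.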
### 6.3 Fractional Grandparentage
It is well-known in atomic and nuclear physics that an eigenfunction of an $`N`$ Fermion system with total angular momentum $`L`$ can be written as
$$|l^N,L\alpha \rangle =\underset{L_{12}}{\sum }\underset{L^{\prime }\alpha ^{\prime }}{\sum }G_{L\alpha ,L^{\prime }\alpha ^{\prime }}(L_{12})\,|l^2,L_{12};l^{N-2},L^{\prime }\alpha ^{\prime };L\rangle .$$
(17)
Here, the totally antisymmetric state $`|l^N,L\alpha \rangle `$ is expanded in the basis of states $`|l^2,L_{12};l^{N-2},L^{\prime }\alpha ^{\prime };L\rangle `$ which are antisymmetric under permutation of particles 1 and 2 (which are in the pair eigenstate of angular momentum $`L_{12}`$) and under permutation of particles 3, 4, …, $`N`$ (which are in the $`N-2`$ particle eigenstate of angular momentum $`L^{\prime }`$). The labels $`\alpha `$ (and $`\alpha ^{\prime }`$) distinguish independent states with the same angular momentum $`L`$ (and $`L^{\prime }`$). The expansion coefficient $`G_{L\alpha ,L^{\prime }\alpha ^{\prime }}(L_{12})`$ is called the coefficient of fractional grandparentage (CFGP).
For a simple three Fermion system, equation (17) reduces to
$$|l^3,L\alpha \rangle =\underset{L_{12}}{\sum }F_{L\alpha }(L_{12})\,|l^2,L_{12};l;L\rangle ,$$
(18)
and $`F_{L\alpha }(L_{12})`$ is called the coefficient of fractional parentage (CFP). In the lowest Landau level, the individual Fermion angular momentum $`l`$ is equal to $`S`$, half the monopole strength, and the number of independent multiplets of angular momentum $`L`$ that can be formed by addition of the angular momenta of three identical Fermions is given in table 3.
Low energy many body states must, to the extent it is possible, avoid parentage from pair states with the largest repulsion (pair states with maximum angular momenta $`L_{ij}`$, or minimum $`\mathcal{R}`$). In particular, we expect that the lowest energy multiplets will avoid parentage from the pair state with $`\mathcal{R}=1`$. If $`\mathcal{R}=1`$, i.e. $`L_{12}=2l-1`$, the smallest possible value of the total angular momentum $`L`$ of the three Fermion system is obtained by addition of the vectors $`\mathbf{L}_{12}`$ (of length $`2l-1`$) and $`\mathbf{l}_3`$ (of length $`l`$), and it is equal to $`|(2l-1)-l|=l-1`$. Therefore, the three particle states with $`L<l-1`$ must not have parentage from $`\mathcal{R}=1`$. It is straightforward to show that if $`L<l-(2p-1)`$, where $`p=1`$, 2, 3, …, the three electron multiplet at $`L`$ has no fractional parentage from $`\mathcal{R}\le 2p-1`$. The multiplets that must avoid one, two, or three smallest values of $`\mathcal{R}`$ are underlined with an appropriate number of lines in table 3, and the resulting values of $`2L`$ that avoid $`\mathcal{R}=1`$, 3, and 5 for various values of $`2l`$ are listed in table 4.
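The three-Fermion multiplet counting behind tables 3 and 4 can be reproduced by elementary enumeration: the number of multiplets at $`L`$ is the number of distinct $`L_z=L`$ configurations minus the number at $`L_z=L+1`$. A sketch (our helper, with $`2l`$ and $`2L`$ kept as integers so that half-integral $`l`$ also works):

```python
from itertools import combinations

def multiplets(N, two_l):
    """Multiplet counts {2L: N_L} for N identical fermions in a shell of angular
    momentum l (2l + 1 orbitals): N_L = D(M = L) - D(M = L + 1), with D(M) the
    number of distinct m-choices summing to M (everything stored as 2m, 2L)."""
    two_ms = [two_l - 2*k for k in range(two_l + 1)]
    D = {}
    for combo in combinations(two_ms, N):
        M = sum(combo)
        D[M] = D.get(M, 0) + 1
    return {M: D[M] - D.get(M + 2, 0) for M in D
            if M >= 0 and D[M] - D.get(M + 2, 0) > 0}

# Three fermions at 2l = 6: L = 0, 2, 3, 4, 6 (the keys are 2L); of these, only
# L = 0 satisfies L < l - 1 and so avoids R = 1 (the Laughlin 1/3 state).
print(multiplets(3, 6))
# The avoiding set matches all multiplets at 2l - 4 = 2: a single L = 0 multiplet.
print(multiplets(3, 2))
```

This brute-force count is feasible because the dimension grows only combinatorially in the shell size; it is the same bookkeeping that underlies the tables.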
The $`L=0`$ states that appear at $`2l=6`$ ($`\mathcal{R}\ge 3`$), $`2l=10`$ ($`\mathcal{R}\ge 5`$), and $`2l=14`$ ($`\mathcal{R}\ge 7`$) are the only states for these values of $`2l`$ that can avoid one, two, or three largest pseudopotential parameters, respectively, and therefore are the non-degenerate ($`L=0`$) ground states. They are the Laughlin $`\nu =1/3`$, 1/5, and 1/7 states.
If only a single multiplet belongs to an angular momentum subspace, its form is completely determined by the requirement that it is an eigenstate of angular momentum with a given eigenvalue $`L`$. The wavefunction and the type of many body correlations then do not depend on the form of the interaction pseudopotential. For interactions that do not have short range, the state that avoids the largest two body repulsion (e.g. the $`L=0`$ multiplet at $`2l=6`$) might not have the lowest total three body interaction energy and be the ground state. If more than one multiplet belongs to a given angular momentum eigenvalue (e.g., two multiplets occur at $`L=3`$ for $`2l=8`$), the interparticle interaction must be diagonalized in this subspace (two-dimensional for $`2l=8`$ and $`L=3`$). Whether the lowest energy eigenstate in this subspace has Laughlin type correlations, i.e. avoids, as much as possible, the largest two body repulsion, depends critically on the short range of the interaction pseudopotential. For the Coulomb interaction, we find that Laughlin correlations occur and, whenever possible, the CFP of the lowest lying multiplets virtually vanishes (it would vanish exactly for an "ideal" short range pseudopotential which increases infinitely quickly with decreasing $`\mathcal{R}`$). For example, for the lower energy eigenstate at $`L=3`$ and $`2l=8`$, the CFP for $`\mathcal{R}=1`$ is less than $`10^{-3}`$. A similar thing occurs at $`2S=9`$ for $`L=9/2`$, at $`2S=10`$ for $`L=4`$ and 6, at $`2S=11`$ for $`L=9/2`$, 11/2, and 15/2, at $`2S=12`$ for $`L=5`$, 6, 7, and 9, at $`2S=13`$ for $`L=11/2`$, 13/2, 15/2, 17/2, and 21/2, and at $`2S=14`$ for $`L=6^2`$, 7, 8, 9, 10, and 12. At $`2S=14`$ for $`L=6`$ there are three allowed multiplets. The diagonalization of the Coulomb interaction gives the lowest state that avoids $`\mathcal{R}=1`$ (CFP $`\sim 10^{-7}`$) and $`\mathcal{R}=3`$ (CFP $`<10^{-2}`$), and the next lowest state that avoids $`\mathcal{R}=1`$ (CFP $`<10^{-5}`$); orthogonality to the lowest state then requires that it has significant parentage from $`\mathcal{R}=3`$ (CFP $`\approx 0.34`$).
One can see that the set of angular momentum multiplets $`L`$ that can be constructed at a given value of $`2l`$ without parentage from pair states with $`\mathcal{R}=1`$ is identical to the set of all allowed multiplets $`L`$ at $`2l^{*}=2l-4`$. For a short range repulsion (e.g. the Coulomb repulsion in the lowest Landau level), these multiplets will be (to a good approximation) the lowest energy eigenstates (the appropriate CFPs of the actual eigenstates will be very small, although not necessarily zero). More generally, in the lowest Landau level (remember that $`l=S`$), the set of multiplets $`L`$ that can be constructed at given $`2S`$ without parentage from $`\mathcal{R}\le 2p-1`$ (i.e. with $`\mathcal{R}\ge 2p+1`$ for all pairs; $`p=1`$, 2, …) is identical to the set of all allowed multiplets $`L`$ at $`2S^{*}=2S-2p(N-1)`$. The multiplets $`L`$ forming the lowest Coulomb energy band at a given $`2S`$ are all the multiplets allowed at $`2S^{*}`$. But $`2S^{*}=2S-2p(N-1)`$ is just the effective magnetic monopole strength in the mean field CF picture! Thus the CF picture with $`2p`$ attached flux quanta simply picks the subset of angular momentum multiplets which have no parentage from pair states with $`\mathcal{R}\le 2p-1`$, and neglects the long range part of the pseudopotential, $`V(\mathcal{R})`$ for $`\mathcal{R}\ge 2p+1`$.
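The identification of the low energy band at $`2S`$ with the full set of multiplets at $`2S^{*}=2S-2p(N-1)`$ can be illustrated with the same kind of $`L_z`$ counting; the sketch below (four electrons, $`p=1`$, our helper) checks that every multiplet allowed at $`2S^{*}=5`$ indeed occurs among the multiplets at $`2S=11`$:

```python
from itertools import combinations

def multiplet_set(N, two_S):
    """All multiplets {2L: count} of N fermions in the lowest shell (l = S) at
    monopole strength 2S, counted from L_z degeneracies."""
    two_ms = [two_S - 2*k for k in range(two_S + 1)]
    D = {}
    for combo in combinations(two_ms, N):
        M = sum(combo)
        D[M] = D.get(M, 0) + 1
    return {M: D[M] - D.get(M + 2, 0) for M in D
            if M >= 0 and D[M] - D.get(M + 2, 0) > 0}

# N = 4, p = 1: the CF picture says the low band at 2S = 11 consists of the
# multiplets allowed at 2S* = 11 - 2(N-1) = 5.
low_band = multiplet_set(4, 5)
full = multiplet_set(4, 11)
print(low_band)   # L = 0, 2, 4 (the keys are 2L): two QHs with l = 5/2
print(all(full.get(k, 0) >= v for k, v in low_band.items()))   # contained: True
```

The counting alone cannot tell which of the multiplets at $`2S=11`$ avoid $`\mathcal{R}=1`$ (that requires the CFGP machinery above), but it does produce the predicted band $`L=0`$, 2, 4 and confirms it is available in the full spectrum.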
### 6.4 Definition of the Short Range Pseudopotential
For systems containing more than three Fermions in an angular momentum shell, the simple addition of angular momentum to determine the smallest possible $`L`$ that has parentage from pair states with $`L_{12}=2l-1`$ is of no help. Instead, we make use of the following operator identity
$$\widehat{L}^2+N(N-2)\widehat{l}^2=\underset{i<j}{\sum }\widehat{L}_{ij}^2.$$
(19)
Here $`\widehat{L}=\sum _i\widehat{l}_i`$ and $`\widehat{L}_{ij}=\widehat{l}_i+\widehat{l}_j`$. The identity is easily proved by writing out the expressions for $`\widehat{L}^2`$ and for $`\sum _{i<j}\widehat{L}_{ij}^2`$ and eliminating $`\sum _{i<j}(\widehat{l}_i\cdot \widehat{l}_j)`$ from the pair of equations. Taking matrix elements of equation (19) between states $`|l^N,L\alpha \rangle `$ described by equation (17) gives
$$L(L+1)+N(N-2)l(l+1)=\left\langle l^N,L\alpha \left|\underset{i<j}{\sum }\widehat{L}_{ij}^2\right|l^N,L\alpha \right\rangle =\frac{1}{2}N(N-1)\underset{L_{12}}{\sum }\mathcal{G}_{L\alpha }(L_{12})L_{12}(L_{12}+1),$$
(20)
where
$$\mathcal{G}_{L\alpha }(L_{12})=\underset{L^{\prime }\alpha ^{\prime }}{\sum }\left|G_{L\alpha ,L^{\prime }\alpha ^{\prime }}(L_{12})\right|^2.$$
(21)
The coefficients of grandparentage satisfy the relation
$$\underset{L_{12}}{\sum }\underset{L^{\prime }\alpha ^{\prime }}{\sum }G_{L\alpha ,L^{\prime }\alpha ^{\prime }}(L_{12})\,G_{L\beta ,L^{\prime }\alpha ^{\prime }}(L_{12})=\delta _{\alpha \beta }.$$
(22)
Of course, the energy of the multiplet $`|l^N,L\alpha `$ is given by
$$E_\alpha (L)=\frac{1}{2}N(N-1)\underset{L_{12}}{\sum }\mathcal{G}_{L\alpha }(L_{12})V(L_{12}),$$
(23)
where $`V(L_{12})`$ is the electron pseudopotential.
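The operator identity (19) is easy to verify numerically by applying both sides to every product basis state; the sketch below does so for $`N=3`$ particles with $`l=1`$, building $`\widehat{L}^2`$ and the $`\widehat{L}_{ij}^2`$ from single particle ladder operators (a check of the identity itself, independent of any pseudopotential):

```python
import math
from itertools import product

l, N = 1, 3   # three particles, each with angular momentum l = 1

def apply_single(psi, i, kind):
    """Apply lz, l+, or l- of particle i to a wavefunction {basis tuple: amplitude}."""
    out = {}
    for s, a in psi.items():
        m = s[i]
        if kind == 'z':
            c, m2 = float(m), m
        else:
            m2 = m + (1 if kind == '+' else -1)
            c2 = l*(l + 1) - m*m2          # l+- matrix element squared
            if abs(m2) > l or c2 <= 0:
                continue
            c = math.sqrt(c2)
        t = list(s); t[i] = m2; t = tuple(t)
        out[t] = out.get(t, 0.0) + c*a
    return out

def apply_sum(psi, idx, kind):
    out = {}
    for i in idx:
        for s, a in apply_single(psi, i, kind).items():
            out[s] = out.get(s, 0.0) + a
    return out

def J2(psi, idx):
    """Apply the collective J^2 = Jz^2 + (J+ J- + J- J+)/2 for particles in idx."""
    out = {}
    for w, t in ((1.0, apply_sum(apply_sum(psi, idx, 'z'), idx, 'z')),
                 (0.5, apply_sum(apply_sum(psi, idx, '-'), idx, '+')),
                 (0.5, apply_sum(apply_sum(psi, idx, '+'), idx, '-'))):
        for s, a in t.items():
            out[s] = out.get(s, 0.0) + w*a
    return out

ok = True
for basis in product(range(-l, l + 1), repeat=N):
    psi = {basis: 1.0}
    lhs = {}                                   # sum_{i<j} L_ij^2 |psi>
    for i in range(N):
        for j in range(i + 1, N):
            for s, a in J2(psi, (i, j)).items():
                lhs[s] = lhs.get(s, 0.0) + a
    rhs = J2(psi, range(N))                    # L^2 |psi>
    rhs[basis] = rhs.get(basis, 0.0) + N*(N - 2)*l*(l + 1)
    for s in set(lhs) | set(rhs):
        if abs(lhs.get(s, 0.0) - rhs.get(s, 0.0)) > 1e-9:
            ok = False
print(ok)   # True
```

On the stretched state $`|1,1,1\rangle `$, for instance, each pair gives $`L_{12}(L_{12}+1)=6`$, so the left side is 18, matching $`L(L+1)+N(N-2)l(l+1)=12+6`$.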
It is important to make the following observations:
1. The expectation value of $`\sum _{i<j}\widehat{L}_{ij}^2`$ in a many body state $`|l^N,L\alpha \rangle `$ increases as $`L(L+1)`$, but it is totally independent of $`\alpha `$;
2. If the pseudopotential $`V_H(L_{12})`$ were a linear function of $`\widehat{L}_{12}^2`$ (we refer to $`V_H`$ as the "harmonic pseudopotential"), all many body multiplets with the same value of $`L`$ would be degenerate;
3. The difference $`\mathrm{\Delta }V(L_{12})=V(L_{12})-V_H(L_{12})`$ between the actual pseudopotential $`V`$ and its harmonic part $`V_H`$ lifts this degeneracy. If $`N_L`$ many body multiplets of $`V_H`$ occur at angular momentum $`L`$, the anharmonic term $`\mathrm{\Delta }V`$ in the pseudopotential causes them to "repel one another" and results in a band of $`N_L`$ non-degenerate multiplets.
Because the expectation value of $`\sum _{i<j}\widehat{L}_{ij}^2`$ in a many body state of angular momentum $`L`$ increases as $`L(L+1)`$, a strict Hund's rule holds for harmonic pseudopotentials: for $`V_H`$ that increases as a function of $`L_{12}`$, the highest energy state is always at the maximum possible value of $`L`$, equal to $`L^{\mathrm{MAX}}=Nl-N(N-1)/2`$, and the lowest energy state is at the minimum allowed value of $`L`$, equal to $`L^{\mathrm{MIN}}`$. If $`V_H`$ decreases as a function of $`L_{12}`$, the opposite occurs: the lowest energy state is at $`L^{\mathrm{MAX}}`$, and the highest energy state is at $`L^{\mathrm{MIN}}`$ (this is a standard Hund's rule of atomic physics).
Neither of these Hund's rules need remain true in the presence of a large anharmonic term $`\mathrm{\Delta }V`$. For example, if the number of multiplets $`N_L`$ at a value slightly larger than $`L^{\mathrm{MIN}}`$ is very large compared to $`N_{L^{\mathrm{MIN}}}`$, the strong level repulsion due to $`\mathrm{\Delta }V`$ within this $`L`$ subspace can overcome the difference in the expectation values of $`V_H`$, and the lowest eigenvalue of $`V`$ at $`L`$ can be lower than that at $`L^{\mathrm{MIN}}`$. However, only very few multiplets occur at large values of $`L`$: $`N_{L^{\mathrm{MAX}}}=1`$ (for $`M=L=L^{\mathrm{MAX}}`$, the only state is $`|l,l-1,\mathrm{\dots },l-N+1\rangle `$), $`N_{L^{\mathrm{MAX}}-1}=0`$, $`N_{L^{\mathrm{MAX}}-2}\le 1`$, $`N_{L^{\mathrm{MAX}}-3}\le 1`$, etc. As a result, breaking the Hund's rule that refers to the behavior of the energy at large $`L`$ requires stronger anharmonicity than at small $`L`$. For the Coulomb pseudopotential in the lowest Landau level we always find that the highest energy indeed occurs at $`L^{\mathrm{MAX}}`$. However, the ability to avoid parentage from pair states having large $`L_{ij}`$ often favors many body states at small $`L>L^{\mathrm{MIN}}`$ with large $`N_L`$, as prescribed by the CF picture.
The anharmonicity of the Coulomb pseudopotential in the lowest Landau level (which increases with increasing $`L_{12}`$) is critical for the behavior of the FQH systems. We have found that the condition for the occurrence of subbands separated by gaps in the energy spectrum, and, in particular, for the occurrence of non-degenerate incompressible fluid ground states at specific values of the filling factor, is that the anharmonic term $`\mathrm{\Delta }V(L_{12})`$ is positive and increases with increasing $`L_{12}`$. In other words, the total pseudopotential $`V(L_{12})`$ must increase more quickly than linearly as a function of $`L_{12}(L_{12}+1)`$.
### 6.5 Hidden Symmetry of the Short Range Repulsion
From our numerical studies we have arrived at the following conjectures:
1. The Hilbert space $`\mathcal{H}_{Nl}`$ of $`N`$ identical Fermions each with angular momentum $`l`$ contains subspaces $`\mathcal{H}_{Nl}^{(p)}`$ of states that have no parentage from $`\mathcal{R}\le 2p-1`$. The subspaces $`\stackrel{~}{\mathcal{H}}_{Nl}^{(p)}=\mathcal{H}_{Nl}^{(p)}\ominus \mathcal{H}_{Nl}^{(p+1)}`$ can be defined; they hold states without parentage from $`\mathcal{R}\le 2p-1`$, but with some parentage from $`\mathcal{R}=2p+1`$. Then
$$\mathcal{H}_{Nl}=\stackrel{~}{\mathcal{H}}_{Nl}^{(0)}\oplus \stackrel{~}{\mathcal{H}}_{Nl}^{(1)}\oplus \stackrel{~}{\mathcal{H}}_{Nl}^{(2)}\oplus \mathrm{\cdots }.$$
(24)
2. For an "ideal" short range repulsive pseudopotential $`V_{\mathrm{SR}}`$, for which $`V_{\mathrm{SR}}(\mathcal{R})\gg V_{\mathrm{SR}}(\mathcal{R}+2)`$, the huge difference between the energy scales associated with different pair states results in the following (dynamical) symmetry: (i) the subspaces $`\stackrel{~}{\mathcal{H}}_{Nl}^{(p)}`$ are the interaction eigensubspaces, (ii) $`p`$ is a good quantum number, (iii) the energy spectrum splits into bands (larger $`p`$ corresponds to lower energy), and (iv) the energy gap above the $`p`$th band scales as $`V(2p-2)-V(2p)`$.
3. For a finite short range pseudopotential $`V`$ (increasing more quickly than $`V_H`$ as a function of $`L_{12}`$), the above symmetry is only approximate, but the correlation between energy and parentage from highly repulsive pair states persists, and so do the gaps in the energy spectrum. The mixing between neighboring subbands is weak, although the structure of energy levels within each subband depends on the form of $`V(L_{12})`$ at $`\mathcal{R}\ge 2p+1`$.
4. The set of angular momentum multiplets in the subspace $`\mathcal{H}_{Nl}^{(p)}`$ is identical to $`\mathcal{H}_{Nl^{*}}`$, where $`l^{*}=l-p(N-1)`$.
Although at present we do not have a general analytic proof for the last conjecture, we have verified it for various small systems and have not found one for which it would fail.
The above conjectures can be immediately translated into the planar geometry. The harmonic pseudopotential $`V_H(m)`$, used to define the class of short range pseudopotentials, is that of a repulsive interaction potential $`V(r)`$ which is linear in $`r^2`$. Then,
$$\mathcal{H}_\nu =\stackrel{~}{\mathcal{H}}_\nu ^{(0)}\oplus \stackrel{~}{\mathcal{H}}_\nu ^{(1)}\oplus \stackrel{~}{\mathcal{H}}_\nu ^{(2)}\oplus \mathrm{\cdots },$$
(25)
where $`\mathcal{H}_\nu `$ is the Hilbert space of electrons filling a fraction $`\nu `$ of an infinitely degenerate Landau level, and the subspaces $`\stackrel{~}{\mathcal{H}}_\nu ^{(p)}`$ contain states without parentage from $`m\le 2p-1`$, but with some parentage from $`m=2p+1`$. The (approximate) dynamical symmetry holds for the Coulomb interaction, and the low energy band $`\mathcal{H}_\nu ^{(p)}`$ contains the same angular momentum multiplets as $`\mathcal{H}_{\nu ^{*}}`$, with $`\nu ^{*}`$ defined by the CF prescription in equation (16).
The validity of our conjectures for systems interacting through the Coulomb pseudopotential is illustrated in figure 4 for four electrons in the lowest Landau level at $`2S=5`$, 11, 17, and 23.
Different symbols mark the bands corresponding to the (approximate) subspaces $`\mathcal{H}_{Nl}^{(p)}`$ with different $`p`$. The same sets of multiplets reoccur for different $`2S`$ in bands related by $`\mathcal{H}_{Nl}^{(p)}\leftrightarrow \mathcal{H}_{Nl^{*}}`$.
### 6.6 Comparison with Atomic Shells: Hund's Rule
Our conjectures (verified by the numerical experiments) are based on the behavior of systems of interacting Fermions partially filling a shell of degenerate single particle states of angular momentum $`l`$. This is a central problem in atomic physics and in nuclear shell model studies of energy spectra. It is interesting to compare the behavior of the spherical harmonics of atomic physics with that of the monopole harmonics considered here. For monopole harmonics $`l=S+n`$, where $`S`$ is half of the monopole strength (and can be integral or half integral) and $`n`$ is a non-negative integer. For the lowest angular momentum shell $`l=S`$. For spherical harmonics $`S=0`$ and $`l=n`$. If in each case electrons are confined to a 2D spherical surface of radius $`R`$, one can evaluate the pair interaction energy $`V`$ as a function of the pair angular momentum $`L_{12}`$. The resulting pseudopotentials, $`V(\mathcal{R})`$ for the FQH system in the lowest Landau level, and $`V(L_{12})`$ for atomic shells in zero magnetic field, are shown in figure 5 for a few small values of $`l`$.
In obtaining these results we have restricted ourselves to spin-polarized shells, so only orbital angular momentum is considered. It is clear that in the case of spherical harmonics the largest pseudopotential coefficient occurs for the lowest pair angular momentum, exactly the opposite of what occurs for monopole harmonics. As a consequence of equation (19), which relates the total angular momentum $`L`$ to the average pair angular momentum $`L_{12}`$, the standard atomic Hund's rule predicts that the energy of a few electron system in an atomic shell will, on the average, decrease as a function of total angular momentum, which is opposite to the behavior of the energy of electrons in the lowest Landau level. The difference between the energy spectra of electrons interacting through the atomic and FQH pseudopotentials of figure 5 is demonstrated in figure 6, where we plot the results for four electrons in shells of angular momentum $`l=3`$ and 5.
The solid circles correspond to monopole harmonics and the open ones to spherical harmonics. Note that at $`L^{\mathrm{MAX}}`$ the former give the highest energy and the latter the lowest. Due to the anharmonicity of the pseudopotentials, the behavior of the energy at low $`L`$ does not always follow a simple Hund's rule for either the FQH or the atomic system. The FQH ground state for $`l=3`$ occurs at $`L=0`$ (this is the $`\nu =2/3`$ incompressible state). However, for $`l=5`$, the lowest of the three states at $`L=2`$ has lower energy than the only state at $`L=0`$. This ground state at $`L=2`$ contains one quasihole in the Laughlin $`\nu =1/3`$ state and it is the only four electron state at this filling in which electrons can avoid parentage from the $`\mathcal{R}=1`$ pair state. Exactly the opposite happens for the atomic system at $`l=5`$, where the anharmonicity is able to push the highest of the three $`L=2`$ states above the high energy state at $`L=0`$.
### 6.7 Higher Landau Levels
Thus far we have considered only the lowest angular momentum shell (lowest Landau level) with $`l=S`$. The interaction of a pair of electrons in the $`n`$th excited shell of angular momentum $`l=S+n`$ can easily be evaluated to obtain the pseudopotentials $`V(L_{12})`$ shown in figure 7.
Here we compare $`V_n(L_{12})`$ as a function of $`L_{12}(L_{12}+1)`$ for $`n=0`$, 1, and 2. It can readily be observed that $`V_{n=0}`$ increases more quickly than $`L_{12}(L_{12}+1)`$ in the entire range of $`L_{12}`$, while $`V_{n=1}`$ and $`V_{n=2}`$ do so only up to a certain value of $`L_{12}`$ (i.e., above a certain value of $`\mathcal{R}=2l-L_{12}`$). For $`n=1`$, the pseudopotential $`V_{n=1}`$ has short range for $`\mathcal{R}\ge 3`$ but is essentially linear in $`L_{12}(L_{12}+1)`$ from $`\mathcal{R}=1`$ to 5. For $`n=2`$, $`V_{n=2}`$ has short range for $`\mathcal{R}\ge 5`$ but is sublinear in $`L_{12}(L_{12}+1)`$ from $`\mathcal{R}=1`$ to 7. More generally, we find that the pseudopotential in the $`n`$th excited shell (Landau level) has short range for $`\mathcal{R}\ge 2n+1`$.
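For the lowest Landau level on a plane, the Coulomb pseudopotential has the standard closed form $`V_m=\Gamma (m+1/2)/(2\,m!)`$ in units of $`e^2/\lambda `$, where $`m`$ is the relative pair angular momentum. A minimal numerical sketch (function name ours) illustrates the steep growth toward small $`m`$ that the "short range" criterion captures:

```python
from math import gamma, factorial, sqrt, pi

def v_lll(m: int) -> float:
    """Planar lowest-Landau-level Coulomb pseudopotential, in units of
    e^2/lambda: V_m = Gamma(m + 1/2) / (2 * m!)."""
    return gamma(m + 0.5) / (2.0 * factorial(m))

vals = [v_lll(m) for m in range(8)]
print(round(vals[0], 4))   # sqrt(pi)/2 ~ 0.8862
# V_m rises steeply as the pair approaches (m -> 0), the hallmark of a
# "short range" pseudopotential in the lowest Landau level:
print(all(vals[m] > vals[m + 1] for m in range(7)))   # True
```

The analogous pseudopotentials in excited Landau levels pick up Laguerre-polynomial form factors and are not reproduced by this one-liner; only the lowest-level case is sketched here.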
Because the conclusions of the CF picture depend so critically on the short range of the pseudopotential, they are not expected to be valid for all fractional fillings of higher Landau levels. For example, the ground state at $`\nu =2+1/3=7/3`$ does not have Laughlin type correlations (i.e. electrons in the $`n=1`$ Landau level do not avoid parentage from $`\mathcal{R}=1`$) even though it is non-degenerate ($`L=0`$) and incompressible (as found experimentally ).
## 7 Fermi Liquid Model of Composite Fermions
The numerical results of the type shown in figure 1 have been understood in a very simple way using Jain's composite Fermion picture. For the ten particle system, the Laughlin $`\nu =1/3`$ incompressible ground state at $`L=0`$ occurs for $`2S=3(N-1)=27`$. The low lying excited states consist of a single QP pair, with the QE and QH having angular momenta $`l_{\mathrm{QE}}=11/2`$ and $`l_{\mathrm{QH}}=9/2`$. In the mean field CF picture, these states should form a degenerate band of states with angular momentum $`L=1`$, 2, …, 10. More generally, $`l_{\mathrm{QE}}=(N+1)/2`$ and $`l_{\mathrm{QH}}=(N-1)/2`$ for the Laughlin state of an $`N`$ electron system, and the maximum value of $`L`$ is $`N`$. The energy of this band would be $`E=\hbar \omega _c^{*}=\hbar \omega _c/3`$, the effective CF cyclotron energy needed to excite one CF from the (completely filled) lowest to the (completely empty) first excited CF Landau level. From the numerical results, two shortcomings of the mean field CF picture are apparent. First, due to the QE-QH interaction (neglected in the CF picture), the energy of the QE-QH band depends on $`L`$, and the "magnetoroton" QE-QH dispersion has a minimum at $`L=5`$. Second, the state at $`L=1`$ either does not appear, or is part of the continuum (in an infinite system) of higher energy states above the magnetoroton band.
At $`2S=27-1=26`$ and $`2S=27+1=28`$, the ground state contains a single quasiparticle (QE or QH, respectively), whose angular momenta $`l_{\mathrm{QE}}=l_{\mathrm{QH}}=N/2=5`$ result from the CS transformation which gives $`2S^{*}=2S-2(N-1)=8`$ for the QE and 10 for the QH (and $`l_{\mathrm{QE}}=S^{*}+1`$ and $`l_{\mathrm{QH}}=S^{*}`$). States containing two identical QP's form the lowest energy bands at $`2S=25`$ (two QE's) and $`2S=29`$ (two QH's). The allowed angular momenta of two identical CF QP's (which are Fermions), each with angular momentum $`l_{\mathrm{QP}}`$, are $`L=2l_{\mathrm{QP}}-j`$, where $`j`$ is an odd integer. Of course, $`l_{\mathrm{QP}}`$ depends on $`2S`$ in the CF picture, and at $`2S=25`$ we have $`l_{\mathrm{QE}}=S^{*}+1=S-(N-1)+1=9/2`$, yielding $`L=0`$, 2, 4, 6, and 8, while at $`2S=29`$ we have $`l_{\mathrm{QH}}=S^{*}=S-(N-1)=11/2`$ and $`L=0`$, 2, 4, 6, 8, 10. More generally, $`l_{\mathrm{QE}}=(N-1)/2`$ and $`l_{\mathrm{QH}}=(N+1)/2`$ in the 2QE and 2QH states of an $`N`$ electron system, and the maximum values of $`L`$ are $`N-2`$ for QE's and $`N`$ for QH's. As for the magnetoroton band at $`2S=27`$, the CF picture does not account for QP interactions and incorrectly predicts the degeneracy of the bands of 2QP states at $`2S=25`$ and 29.
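The angular momentum addition used here is easy to mechanize. The sketch below (helper names ours) enumerates the allowed total $`L`$ for a QE-QH pair and for two identical QP's, reproducing the bands quoted above for the $`N=10`$ system; the mean field CF picture would make each band degenerate, while the numerics show the $`L`$-dependence discussed in the text.

```python
from fractions import Fraction as F

def pair_identical(l):
    """Allowed total L for two identical fermions of angular momentum l:
    L = 2l - j with j an odd positive integer (Pauli-allowed pair states)."""
    return sorted(int(2 * l - j) for j in range(1, int(2 * l) + 1, 2))

def pair_distinct(la, lb):
    """Allowed total L for two distinguishable particles: |la-lb| ... la+lb."""
    lo, hi = abs(la - lb), la + lb
    return [int(lo + k) for k in range(int(hi - lo) + 1)]

# N = 10 Laughlin state, 2S = 27: l_QE = 11/2, l_QH = 9/2
print(pair_distinct(F(11, 2), F(9, 2)))   # QE-QH band: L = 1, 2, ..., 10
print(pair_identical(F(9, 2)))            # two QEs at 2S = 25: L = 0, 2, 4, 6, 8
print(pair_identical(F(11, 2)))           # two QHs at 2S = 29: L = 0, 2, ..., 10
```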
The energy spectra of states containing more than one CF quasiparticle can be described in the following phenomenological Fermi liquid picture. The creation of an elementary excitation, QE or QH, in a Laughlin incompressible ground state requires a finite energy, $`\epsilon _{\mathrm{QE}}`$ or $`\epsilon _{\mathrm{QH}}`$, respectively. In a state containing more than one Laughlin quasiparticle, QE's and/or QH's interact with one another through appropriate QE-QE, QH-QH, and QE-QH pseudopotentials.
An estimate of the QP energies can be obtained by comparing the energy of a single QE (for the $`N=10`$ electron system, the energy of the ground state at $`L=N/2=5`$ for $`2S=27-1=26`$) or a single QH (the $`L=N/2=5`$ ground state at $`2S=27+1=28`$) with the Laughlin $`L=0`$ ground state at $`2S=27`$. There can be finite size effects here, because the QP states occur at different values of $`2S`$ than the ground state, but using the correct magnetic length $`\lambda =R/\sqrt{S}`$ ($`R`$ is the radius of the sphere) in the unit of energy $`e^2/\lambda `$ at each value of $`2S`$, and extrapolating the results as a function of $`N^{-1}`$ to an infinite system, should give reliable estimates of $`\epsilon _{\mathrm{QE}}`$ and $`\epsilon _{\mathrm{QH}}`$ for a macroscopic system.
The QP pseudopotentials $`V_{\mathrm{QP}\mathrm{QP}}`$ can be obtained by subtracting from the energies of the 2QP states obtained numerically at $`2S=25`$ (2QE), $`2S=27`$ (QE-QH), and $`2S=29`$ (2QH), the energy of the Laughlin ground state at $`2S=27`$ and the energies of two appropriate non-interacting QP's. As for the single QP, the energies calculated at different $`2S`$ must be taken in the correct units of $`e^2/\lambda =\sqrt{S}e^2/R`$ to avoid finite size effects. This procedure was carried out in references .
In figure 8 we plot the QE-QE and QH-QH pseudopotentials for the Laughlin $`\nu =1/3`$ and 1/5 states.
As we have seen for two electrons (see figure 3), the angular momentum $`L_{12}`$ of a pair of identical Fermions in an angular momentum shell (or a Landau level) is quantized, and the convenient quantum number to label the pair states is $`\mathcal{R}=2l_{\mathrm{QP}}-L_{12}`$ (on a sphere) or the relative (REL) angular momentum $`m`$ (on a plane). When plotted as a function of $`\mathcal{R}`$, the pseudopotentials calculated for systems containing between six and eleven electrons (and thus for different QP angular momenta $`l_{\mathrm{QP}}`$) behave similarly and, for $`N\rightarrow \infty `$ (i.e., $`2S\rightarrow \infty `$), they seem to converge to the limiting pseudopotentials $`V_{\mathrm{QP}\mathrm{QP}}(\mathcal{R}=m)`$ describing an infinite planar system.
In figure 9 we plot the QE-QH pseudopotentials for the Laughlin $`\nu =1/3`$ and 1/5 states.
As for a conduction electron and a valence hole pair in a semiconductor (an exciton), the motion of a QE-QH pair, which does not carry a net electric charge, is not quantized in a magnetic field. The appropriate quantum number to label the states is the continuous wavevector $`k`$ (or momentum), which on a sphere is given by $`k=L/R=L/\sqrt{S}\lambda `$ (remember that $`L`$ is given in units of $`\hbar `$). When plotted as a function of $`k`$, the pseudopotentials calculated for systems containing between six and eleven electrons fall on the same curve that describes the continuous magnetoroton dispersion $`V_{\mathrm{QE}\mathrm{QH}}(k)`$ of an infinite planar system (the lines in figure 9 are only to guide the eye). Only the energies for $`L\ge 2`$ are shown in figure 9, since the single QE-QH pair state at $`L=1`$ is either disallowed (hard core) or falls into the continuum of states above the magnetoroton band. The magnetoroton minima for the Laughlin $`\nu =1/3`$ and 1/5 states occur at about $`k_0=1.4`$ $`\lambda ^{-1}`$ and $`k_0=1.1`$ $`\lambda ^{-1}`$, respectively. The magnetoroton band at $`\nu =1/3`$ is well decoupled from the continuum of higher states because the band width $`\approx 0.05e^2/\lambda `$ is much smaller than the energy gap $`\epsilon _{\mathrm{QE}}+\epsilon _{\mathrm{QH}}=0.1e^2/\lambda `$ for additional QE-QH pair excitations. At $`\nu =1/5`$, the band width $`\approx 0.015e^2/\lambda `$ is closer to the single particle gap $`\epsilon _{\mathrm{QE}}+\epsilon _{\mathrm{QH}}=0.021e^2/\lambda `$, and a state of two magnetorotons, each with $`k\approx k_0`$, can couple to the highest energy QE-QH pair states at $`k\approx 2k_0`$.
Knowing the QP-QP pseudopotentials and the bare QP energies allows us to evaluate the energies of states containing three or more QP's. Typical results are shown in figure 10.
In frame (a) we show the energy spectrum of three QE's in the Laughlin $`\nu =1/3`$ state of eleven electrons. The spectrum in frame (b) gives the energies of three QH's in the nine electron system at the same filling. The exact numerical results obtained by diagonalization of the eleven and nine electron systems are represented by plus signs, and the Fermi liquid picture results are marked by solid circles. The exact energies above the dashed lines correspond to higher energy states that contain additional QE-QH pairs. It should be noted that in the mean field CF picture, which neglects the QP-QP interactions, all of the 3QP states would be degenerate, and the energy gap separating the 3QP states from higher states would be equal to $`\hbar \omega _c^{*}=\hbar \omega _c/3`$. Although the fit in figure 10 is not perfect, it is quite good and justifies the use of the Fermi liquid picture to describe (compressible) states at $`\nu \approx (2p+1)^{-1}`$.
## 8 Composite Fermion Hierarchy
The sequence of Laughlin-Jain states with filling factor $`\nu `$ given by $`\nu =\nu ^{*}(1+2p\nu ^{*})^{-1}`$, where $`p=1`$, 2, …, and the CF filling factor $`\nu ^{*}`$ is any non-zero integer, is the most prominent set of condensed states observed experimentally. However, this sequence (together with the conjugate "hole" states, $`\nu \rightarrow 1-\nu `$) does not contain all odd denominator fractions the way the Haldane hierarchy scheme does. The question arises quite naturally of how to treat CF filling factors $`\nu ^{*}`$ which are not integers. The answer leads to the CF approach to the hierarchy of incompressible quantum fluid ground states .
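The Laughlin-Jain sequence can be enumerated directly from $`\nu =\nu ^{*}(1+2p\nu ^{*})^{-1}`$; a short sketch with exact rational arithmetic (function name ours), where negative $`\nu ^{*}`$ corresponds to an effective field anti-parallel to the applied one:

```python
from fractions import Fraction

def jain_filling(nu_star: int, p: int) -> Fraction:
    """nu = nu* / (1 + 2 p nu*) for integer CF filling nu* and p = 1, 2, ..."""
    return Fraction(nu_star, 1 + 2 * p * nu_star)

print([str(jain_filling(n, 1)) for n in (1, 2, 3)])  # ['1/3', '2/5', '3/7']
print(str(jain_filling(-2, 1)))                      # '2/3'
print([str(jain_filling(n, 2)) for n in (1, 2)])     # ['1/5', '2/9']
```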
Consider a state of $`N_0`$ electrons at a monopole strength $`2S_0`$ with a filling factor $`\nu _0`$. The CS transformation that attaches to each electron $`2p_0`$ flux quanta oriented opposite to the applied magnetic field results in a CF system at an effective filling factor $`\nu _0^{*}`$ given by $`(\nu _0^{*})^{-1}=\nu _0^{-1}-2p_0`$ and an effective monopole strength $`2S_0^{*}=2S_0-2p_0(N_0-1)`$. The procedure for handling non-integral values of the CF filling factor $`\nu _0^{*}`$ is to set it equal to $`\nu _0^{*}=n_1+\nu _1`$, where $`n_1`$ is an integer and $`\nu _1`$ is the fractional filling of the CF quasiparticle level (of the same sign as $`n_1`$ for QE's and of the opposite sign for QH's). Our problem is then that of placing $`N_1`$ quasiparticles into $`2l_1+1`$ available states of a CF shell (Landau level) of angular momentum $`l_1`$: the QE's into the lowest empty shell with $`l_1=|S_0^{*}|+n_1+1`$, or the QH's into the highest filled shell with $`l_1=|S_0^{*}|+n_1`$. We now ignore all completely filled and completely empty CF shells, and reapply the CS transformation by setting $`S_1=l_1`$ and attaching $`2p_1`$ flux quanta to each of the $`N_1`$ quasiparticles in the partially filled CF shell. This produces a new type of QP's and a new QP filling factor $`\nu _1^{*}`$ given by $`(\nu _1^{*})^{-1}=\nu _1^{-1}-2p_1`$. If $`\nu _1^{*}`$ is an integer, we obtain a daughter state of the hierarchy. If it is not, we write $`\nu _1^{*}=n_2+\nu _2`$, where $`\nu _2`$ represents the partial filling of the new QP shell, and repeat the mean field CF procedure. This leads to the set of equations:
$$\nu _l^{-1}=2p_l+(n_{l+1}+\nu _{l+1})^{-1},$$
(26)
where $`\nu _l`$ is the QP filling factor and $`2p_l`$ is the number of flux quanta attached to each Fermion at the $`l`$th level of the CF hierarchy.
As an example, consider a system of $`N_0=12`$ electrons at $`2S_0=30`$. We apply the mean field CF approximation by attaching to each electron $`2p_0=2`$ flux quanta. This gives the effective CF monopole strength $`2S_0^{*}=30-2(12-1)=8`$. The lowest CF shell is filled with nine particles, and there are $`N_1=3`$ quasielectrons in the first excited ($`n_1=1`$) CF shell of angular momentum $`l_1=5`$. The filling factor at this level of the hierarchy is $`\nu _0^{*}=1+\nu _1`$. We now reapply the CF transformation by attaching $`2p_1=4`$ flux quanta to each of the $`N_1=3`$ QE's at $`2S_1=10`$ and obtain $`2S_1^{*}=10-4(3-1)=2`$. The lowest CF shell of $`l=1`$ is now completely filled, yielding $`\nu _1^{*}=1`$. Using equation (26) we obtain $`\nu _1^{-1}=4+1^{-1}=5`$ and $`\nu _0^{-1}=2+(1+1/5)^{-1}=17/6`$.
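The bookkeeping in this example, and the evaluation of equation (26), can be scripted. In the sketch below (helper names ours) the recursion is terminated with $`\nu =0`$ below the deepest level:

```python
from fractions import Fraction

def cf_step(N, two_S, two_p):
    """One mean-field CS transformation on a sphere: 2S* = 2S - 2p(N - 1)."""
    return two_S - two_p * (N - 1)

def hierarchy_nu(p_list, n_list):
    """Equation (26): nu_l^{-1} = 2 p_l + (n_{l+1} + nu_{l+1})^{-1},
    evaluated from the deepest level up, with nu = 0 past the last level."""
    nu = Fraction(0)
    for p, n in zip(reversed(p_list), reversed(n_list)):
        nu = 1 / (2 * p + 1 / Fraction(n + nu))
    return nu

# 12 electrons at 2S = 30, 2p0 = 2:
two_S0_eff = cf_step(12, 30, 2)
print(two_S0_eff)                       # 8: lowest CF shell holds 9 particles
n_qe = 12 - (two_S0_eff + 1)            # 3 QEs left in the l1 = 5 shell
print(n_qe, cf_step(n_qe, 2 * 5, 4))    # second CS step gives 2S1* = 2
print(hierarchy_nu([1, 2], [1, 1]))     # 6/17
print(hierarchy_nu([1], [2]))           # 2/5 (a Jain state, nu* = 2)
```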
If the mean field CF picture worked on all levels of the hierarchy, the twelve electron system at $`2S=30`$ should have an incompressible $`L=0`$ ground state corresponding to the filling factor $`\nu =6/17`$. In figure 11(a) we show the low energy sector of the spectrum calculated for this system using the Fermi liquid picture (only the lowest energy states containing 3QE's in the Laughlin $`\nu =1/3`$ state are calculated).
Indeed, the $`\nu =6/17`$ hierarchy ground state at $`L=0`$ is separated from higher states by a small gap in the twelve electron spectrum (although it is not clear that this small gap will persist in the thermodynamic limit ).
Though the CF hierarchy picture seems to work in some cases, there are others where it is clearly in complete disagreement with the numerical results. For example, a CF transformation with $`2p_0=2`$ applied to an $`N_0=8`$ electron system at $`2S_0=18`$ gives $`2S_0^{*}=18-2(8-1)=4`$, $`n_1=1`$, and $`N_1=3`$ QE's left in the shell with $`l_1=3`$. Adding the three QE angular momenta of $`l_1=3`$ gives a low energy band at $`L=0`$, 2, 3, 4, and 6. Reapplication of the CF transformation with $`2p_1=2`$ gives $`2S_1^{*}=6-2(3-1)=2`$, i.e. the completely filled lowest shell, $`\nu _1^{*}=1`$ ($`n_2=1`$ and $`\nu _2=0`$). From equation (26) we get $`\nu _1=1/3`$ and $`\nu _0=4/11`$. In figure 11(b) we show the spectrum obtained by exact numerical diagonalization of the eight electron system at $`2S=18`$. It is apparent that the set of multiplets at $`L=0`$, 2, 3, 4, and 6 forms the low energy band. However, the reapplication of the mean field CF transformation to the three QE's in the $`l_1=3`$ shell (which predicts an $`L=0`$ incompressible ground state corresponding to $`\nu =4/11`$) is definitely wrong.
The reason why the CF hierarchy picture does not always work is not difficult to understand. The electron (Coulomb) pseudopotential in the lowest Landau level, $`V_e(\mathcal{R})`$, satisfies the "short range" criterion (i.e. it increases more quickly with decreasing $`\mathcal{R}`$ than the harmonic pseudopotential $`V_H`$) in the entire range of $`\mathcal{R}`$, which is the reason for the incompressibility of the principal Laughlin $`\nu =(2p+1)^{-1}`$ states. However, this does not generally hold for the QP pseudopotentials on higher levels of the hierarchy. In figure 8 we plotted $`V_{\mathrm{QE}\mathrm{QE}}(\mathcal{R})`$ and $`V_{\mathrm{QH}\mathrm{QH}}(\mathcal{R})`$ for the $`\nu =1/3`$ and $`\nu =1/5`$ Laughlin states of six to eleven electrons. Clearly, the QE and QH pseudopotentials are quite different, and neither one decreases monotonically with increasing $`\mathcal{R}`$. On the other hand, the corresponding pseudopotentials in the $`\nu =1/3`$ and 1/5 states look similar; only the energy scale is different. The convergence of energies at small $`\mathcal{R}`$ obtained for larger $`N`$ suggests that the maxima at $`\mathcal{R}=3`$ for QE's and at $`\mathcal{R}=1`$ and 5 for QH's, as well as the minima at $`\mathcal{R}=1`$ and 5 for QE's and at $`\mathcal{R}=3`$ and 7 for QH's, persist in the limit of large $`N`$ (i.e. for an infinite system on a plane). Consequently, the only incompressible daughter states of the Laughlin $`\nu =1/3`$ and 1/5 states are those with $`\nu _{\mathrm{QE}}=1`$ or $`\nu _{\mathrm{QH}}=1/3`$ and (maybe) $`\nu _{\mathrm{QE}}=1/5`$ and $`\nu _{\mathrm{QH}}=1/7`$. It is clear that no incompressible daughter states of the parent Laughlin $`\nu =1/3`$ state will form at e.g. $`\nu =4/11`$ ($`\nu _{\mathrm{QE}}=1/3`$) or 4/13 ($`\nu _{\mathrm{QH}}=1/5`$), but that they will form (at least, in finite systems ) at $`\nu =6/17`$ ($`\nu _{\mathrm{QE}}=1/5`$) or 6/19 ($`\nu _{\mathrm{QH}}=1/7`$).
From the CF hierarchy scheme we recover the Jain-Laughlin states when the CS transformation is applied directly to electrons (or to holes in a more than half-filled level). These states occur at integral values of $`\nu ^{*}`$, the effective CF filling factor, and correspond to completely filling a QP shell. For example, the $`\nu =2/5`$ state occurs when $`\nu ^{*}=2`$, and the CF's in the first excited shell (which are Laughlin QE's of the $`\nu =1/3`$ state) have $`\nu _{\mathrm{QP}}=1`$. The angular momenta of the two lowest CF shells are $`l_0^{*}=|S^{*}|`$ and $`l_1^{*}=|S^{*}|+1`$, so they contain $`2l_0^{*}+1`$ and $`2l_1^{*}+1`$ states, respectively. Since $`\nu _{\mathrm{QP}}=1`$, there are $`N_{\mathrm{QP}}=2l_1^{*}+1`$ CF quasiparticles. The total number of states filled by the $`N`$ Fermions is $`(2l_0^{*}+1)+(2l_1^{*}+1)=2N_{\mathrm{QP}}-2`$, giving $`N=2N_{\mathrm{QP}}-2`$. For an infinite system this is just Haldane's relation between the number of quasiparticles and the number of electrons, $`N=2qN_{\mathrm{QP}}`$, for the integer $`q=1`$. This demonstrates that integrally filled CF shells correspond to $`\nu _{\mathrm{QP}}=1`$, a completely filled shell of Laughlin QP's. Adding new Fermions to a system with $`\nu _{\mathrm{QP}}=1`$ requires creating a new type of QP's, and the counting of available QP states turns out to be exactly the same in the CF hierarchy and Haldane's Boson hierarchy pictures. Integral CF filling (i.e., $`\nu _{\mathrm{QP}}=1`$) gives a valid mean field picture independent of the QP-QP interactions, provided that the gap for creating new QP's is positive. When $`\nu ^{*}`$ is non-integral, the mean field picture is valid only at values of $`L`$ for which the "short range" requirement on the pseudopotential $`V_{\mathrm{QP}\mathrm{QP}}(L)`$ is satisfied. The form of the QP-QP interactions obtained from our numerical calculations makes it clear that the mean field approximation is not valid at certain quasiparticle fillings (e.g. for $`\nu _{\mathrm{QP}}=1/3`$ filling of the quasielectron levels of the electron $`\nu =1/3`$ state).
## 9 Systems Containing Electrons and Valence Band Holes
There has been a great deal of interest in the photoluminescence (PL) of 2D systems in high magnetic fields. An important ingredient in understanding PL is the negatively charged exciton ($`X^{-}`$). The $`X^{-}`$ consists of two electrons bound to a valence band hole. If the total spin of the pair of electrons, $`J_e`$, is zero, the $`X^{-}`$ is said to be a singlet ($`X_s^{-}`$); if $`J_e=1`$ the $`X^{-}`$ is called a triplet ($`X_t^{-}`$). Only the $`X_s^{-}`$ is bound in the absence of a magnetic field, but in an infinite magnetic field (so that only a single Landau level is relevant) only the $`X_t^{-}`$ is bound in a 2D system. It often occurs that the photoexcited hole is separated from the plane of the electron system by a small distance (this can happen, e.g., in wide GaAs quantum wells when the electron gas is confined to one GaAs/AlGaAs interface by remote ionized donors, and the photoexcited holes reside close to the other GaAs/AlGaAs interface). Several remarkable effects associated with electron-hole systems and charged excitons can be understood using the composite Fermion picture.
### 9.1 Charged Exciton and the Hidden Symmetry in the Lowest Landau Level
First let us consider the idealized 2D system at so large a magnetic field that only the lowest electron and hole Landau levels need be considered. The energy spectrum for a two-electron-one-hole system at $`2S=20`$ is shown in figure 12.
The triplet $`X^{-}`$ with angular momentum $`l_{X^{-}}=S-1`$ is the only bound state, with binding energy $`\approx 0.05e^2/\lambda `$. A pair of (unbound) singlet and triplet states occurs at an energy exactly equal to the exciton energy $`E_X`$. In these so-called "multiplicative" states a neutral exciton $`X`$ in its ground state is decoupled from the second electron. Addition of the exciton and electron angular momenta $`L_X=0`$ and $`l_e=S`$ gives a state of total angular momentum $`L=S`$, and addition of the two electron spins of $`1/2`$ gives both the $`J_e=0`$ and 1 spin configurations.
The occurrence of unbound states at $`E=E_X`$ and $`L=S`$ is a manifestation of the following "hidden symmetry": because of the exact overlap of electron and hole orbitals in the lowest Landau level (scaled with the same magnetic length $`\lambda `$), and thus the independence of the interaction strength of the type of particles involved, the commutator of the operator $`d_X^{\dagger }`$ that creates an exciton in its $`L_X=0`$ ground state (on a sphere, $`d_X^{\dagger }=\sum _m(-1)^mc_m^{\dagger }h_{-m}^{\dagger }`$, where $`c_m^{\dagger }`$ and $`h_m^{\dagger }`$ are electron and hole creation operators) with the interaction Hamiltonian $`H`$ is $`[H,d_X^{\dagger }]=E_Xd_X^{\dagger }`$. As a result, if $`\mathrm{\Psi }`$ is an eigenstate of $`N_e`$ electrons and $`N_h`$ holes with eigenenergy $`E`$ and angular momentum quantum numbers $`L`$ and $`M`$, then the multiplicative state $`d_X^{\dagger }\mathrm{\Psi }`$ of $`N_e+1`$ electrons and $`N_h+1`$ holes is also an eigenstate, with energy $`E+E_X`$ and the same $`L`$ and $`M`$. A good quantum number conserved due to the "hidden symmetry" is the number of decoupled excitons, $`N_X`$. In particular, the ground state for $`N_e=N_h=N`$ is the totally multiplicative state $`(d_X^{\dagger })^N|\mathrm{vac}\rangle `$ with $`N_X=N`$; for an infinite system this ground state can be viewed as a Bose condensate of non-interacting excitons. It can readily be found that the application of the PL operator that annihilates an optically active exciton ($`d_X`$) reduces $`N_X`$ by one, and therefore that only the multiplicative electron-hole states with $`N_X>0`$ are optically active (have non-vanishing PL intensity). In figure 12, the two multiplicative states at $`E=E_X`$ and $`L=S`$ have $`N_X=1`$, and all others have $`N_X=0`$.
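The eigenstate property of the multiplicative states follows in one line from the commutator quoted above:

```latex
H\left(d_X^{\dagger}\Psi\right)
  = d_X^{\dagger}H\Psi + [H,\,d_X^{\dagger}]\Psi
  = (E + E_X)\,d_X^{\dagger}\Psi ,
```

so attaching a ground-state exciton shifts the energy by exactly $`E_X`$ while leaving $`L`$ and $`M`$ unchanged.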
It is essential to realize that two independent symmetries forbid the recombination of the triplet $`X^{-}`$ ground state in figure 12:
* Due to the 2D translational/rotational space invariance, the PL operator $`d_X`$ conserves two angular momentum quantum numbers. On a sphere, these are $`L`$ and $`M`$, and the resulting optical selection rule allows only a state with $`L=S`$ to decay by $`e`$-$`h`$ recombination. On a plane, these are the total ($`L_{\mathrm{TOT}}`$) and center-of-mass ($`L_{\mathrm{CM}}`$) angular momenta, and the radiative channel for an $`X^{-}`$ is that with $`L_{\mathrm{REL}}\equiv L_{\mathrm{TOT}}-L_{\mathrm{CM}}=0`$. This (geometrical) symmetry can be broken by collisions, but persists in systems with a finite quantum well width, finite electron and hole layer separation, or Landau level mixing.
* Due to the equal strength of the $`e`$-$`e`$, $`h`$-$`h`$, and $`e`$-$`h`$ interactions, $`N_X`$ is a good quantum number. Since $`N_X`$ is decreased in a PL process, only the multiplicative ($`N_X>0`$) states are radiative. This (dynamical) symmetry is not broken by collisions; breaking it requires breaking the electron-hole orbital symmetry.
Since a number of independent factors are needed to allow for the recombination of a triplet $`X^{-}`$, this complex (in narrow and symmetric quantum wells and in high magnetic fields) is expected to be a well defined, long-lived quasiparticle. Its correlations, optical properties, etc. are expressed more easily in terms of this quasiparticle than in terms of individual electrons and holes. The finite angular momentum of an $`X^{-}`$ in spherical geometry (the decoupling of the CM excitations from the REL motion on a plane) can be viewed as the formation of a degenerate Landau level of this (charged) quasiparticle. As will be shown later, the interaction of $`X^{-}`$ quasiparticles with one another and with electrons can be described using the ideas familiar in the context of FQH systems (Laughlin correlations, composite Fermions, parentage, etc.).
### 9.2 Interaction of Charged Excitons
The simplest system in which to study the $`X^{-}`$-$`X^{-}`$ interaction contains four electrons and two holes. Its energy spectrum at $`2S=17`$ is shown in figure 13.
The low energy spectrum is characterized by four bands which we have identified as follows:
1. The lowest band, taking on all even values between $`L=0`$ and 12, consists of a pair of charged excitons $`X^{-}`$ (each with angular momentum $`l_{X^{-}}=S-1`$);
2. The next band contains an electron with $`l_e=S`$ and a negatively charged biexciton $`X_2^{-}`$ (a bound state of an $`X`$ and an $`X^{-}`$) with angular momentum $`l_{X_2^{-}}=S-2`$; the allowed $`L`$ values go from $`|l_e-l_{X_2^{-}}|=2`$ to $`l_e+l_{X_2^{-}}-1=14`$;
3. A band of multiplicative states containing an $`X`$, an $`X^{-}`$, and an electron; it begins at $`L=|l_e-l_{X^{-}}|=1`$ and goes to $`L=l_e+l_{X^{-}}-1=15`$;
4. A band of multiplicative states containing two neutral excitons and two free electrons; it takes on all even values of $`L`$ between zero and $`2l_e-1=16`$.
One interesting feature of figure 13 is that it gives us the effective pseudopotential $`V_{AB}(L)`$ for the interaction of a pair of Fermions $`AB`$ (where $`A`$ and $`B`$ can be $`e`$, $`X^{-}`$, $`X_2^{-}`$, etc.) as a function of angular momentum. As for electrons, it is convenient to use the relative pair angular momentum $`\mathcal{R}=l_A+l_B-L`$. For identical Fermions with angular momentum $`l`$, the allowed values of $`L`$ are $`2l-j`$, where $`j`$ is an odd integer, i.e., $`\mathcal{R}=1`$, 3, 5, …, and $`2l`$. For distinguishable Fermions $`A`$ and $`B`$, all values of $`L`$ between $`|l_A-l_B|`$ and $`l_A+l_B`$ are expected, i.e., $`\mathcal{R}=0`$, 1, 2, …, and $`2\mathrm{min}(l_A,l_B)`$. However, our numerical results display a "hard core" repulsion for composite particles, and one or more of the pair states with the largest values of $`L`$ (smallest $`\mathcal{R}`$) are forbidden (i.e. the corresponding pseudopotential parameters are effectively infinite). For $`A=X_n^{-}`$ and $`B=X_m^{-}`$, the smallest allowed value of $`\mathcal{R}`$ is given by
$$\mathcal{R}_{AB}^{\mathrm{MIN}}=2\mathrm{min}(n,m)+1.$$
(27)
The identification of the pair states $`AB`$ in figure 13 (marked with lines) was made possible by comparing the displayed $`4e`$-$`2h`$ spectrum with the pseudopotentials of point charge particles with appropriate angular momenta $`l_A`$ and $`l_B`$ and binding energies $`\epsilon _A`$ and $`\epsilon _B`$ . The appropriate values of the angular momenta $`l_A`$ and $`l_B`$ and of the binding energies $`\epsilon _A`$ and $`\epsilon _B`$ are obtained by diagonalizing smaller systems (e.g. the $`2e`$-$`1h`$ system in figure 12 for an $`X^{-}`$), and the point charge pseudopotentials are used to approximate the $`AB`$ interaction. The approximate $`AB`$ energies obtained in this way are rather close to the exact $`4e`$-$`2h`$ energies. This implies that, due to the different energy scales, the internal dynamics of the charged excitons is only weakly coupled to their scattering off one another or off electrons, and it allows for the interpretation of an electron-hole system in terms of well defined charged excitonic quasiparticles interacting with one another and with excess electrons through Coulomb-like forces. The slight difference between the actual pseudopotentials in figure 13 and those of point charge particles comes from the larger size of the charged excitons and their (nearly frozen) internal degrees of freedom. The latter can be accounted for phenomenologically by attributing to each type of composite particle a finite electric polarizability, describing the electric dipole moment induced by the electric field of the other charged particles. Due to the increased charge isotropy, the polarization effects are expected to be greatly reduced in larger systems, and to disappear completely in the fluid type states discussed in the following paragraphs.
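The band counting above, including the hard cores of equation (27), can be checked mechanically. In the sketch below (function names ours), an electron is treated as $`X_0^{-}`$ so that equation (27) also gives its hard core with any $`X_n^{-}`$:

```python
from fractions import Fraction as F

def allowed_L(la, lb, n, m, identical=False):
    """Pair states of composites X_n^- and X_m^- with angular momenta la, lb.
    R = la + lb - L; identical fermions keep only odd R; the hard core of
    equation (27) removes R < 2*min(n, m) + 1.  (n = 0 denotes an electron.)"""
    r_min = 2 * min(n, m) + 1
    out = []
    L = abs(la - lb)
    while L <= la + lb:
        R = la + lb - L
        if R >= r_min and (not identical or int(R) % 2 == 1):
            out.append(int(L))
        L += 1
    return sorted(out)

S = F(17, 2)                       # the 4e-2h system at 2S = 17
l_e, l_x, l_x2 = S, S - 1, S - 2   # e, X^-, X_2^-
print(allowed_L(l_x, l_x, 1, 1, identical=True))   # band 1: 0, 2, ..., 12
print(allowed_L(l_e, l_x2, 0, 2))                  # band 2: 2, 3, ..., 14
print(allowed_L(l_e, l_x, 0, 1))                   # band 3: 1, 2, ..., 15
print(allowed_L(l_e, l_e, 0, 0, identical=True))   # band 4: 0, 2, ..., 16
```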
### 9.3 Generalized Composite Fermion Picture for Charged Excitons
Suppose we have a system of different (distinguishable) charged Fermions ($`A`$, $`B`$, …). They can be distinguished either because they are different species (e.g., electrons and charged excitons) or because they are confined to different, spatially separated layers. If all the particles in such a system repel one another through short range pseudopotentials (as defined for the electron FQH systems), one can think of many body states with Laughlin-type correlations, given by a generalized (compare equation (5)) Laughlin-Jastrow factor
$$\prod (z_i^{(A)}-z_j^{(B)})^{m_{AB}},$$
(28)
where $`z_i^{(A)}`$ is the complex coordinate of the position of the $`i`$th Fermion of type $`A`$, and the product is over all pairs. The restrictions on the integers $`m_{AB}`$ are that $`m_{AA}`$ must be odd, $`m_{BA}=m_{AB}`$, and $`m_{AB}`$ must not be smaller than certain minimum values $`\mathcal{R}_{AB}^{\mathrm{MIN}}`$, to avoid the infinite hard cores for all pairs. In a state with the correlations given by equation (28), a number of pair states with the largest repulsion are avoided for each pair, $`\mathcal{R}_{AB}\ge m_{AB}`$. This is equivalent to saying that high energy collisions (in which a pair of particles would come very close to one another) are forbidden in such a state. This intuitive property of the Laughlin fluid states will be very useful in the discussion of collision assisted $`X^{-}`$ recombination.
A generalized CF picture can be constructed for a system with Laughlin correlations. In this picture, fictitious flux tubes carrying an integral number of flux quanta $`\varphi _0`$ are attached to each particle. In the multi-component system, each particle of type $`A`$ carries flux $`(m_{AA}-1)\varphi _0`$ that couples only to the charges on all other particles of the same type $`A`$, and fluxes $`m_{AB}\varphi _0`$ that couple to the charges on all particles of the other types $`B`$ ($`A`$ and $`B`$ are any of the types of Fermions). On a sphere, the effective monopole strength seen by a CF of type $`A`$ (CF-$`A`$) is
$$2S_A^{*}=2S-\sum _B(m_{AB}-\delta _{AB})(N_B-\delta _{AB}).$$
(29)
For different multi-component systems we expect generalized Laughlin incompressible states (for two components denoted as $`[m_{AA},m_{BB},m_{AB}]`$) when all the hard core pseudopotentials are avoided and the CF's of each kind completely fill an integral number of their CF shells (e.g. $`N_A=2l_A^{*}+1`$ for the lowest shell). In other cases, the low lying multiplets are expected to contain different kinds of CF quasiparticles (generalized QE's or QH's), QP-$`A`$, QP-$`B`$, …, in the neighboring incompressible state. It is interesting to realize that the effective monopole strengths $`2S_A^{*}`$, i.e. the effective magnetic fields $`B_A^{*}`$ seen by particles of different types, are not generally equal. One can think of effective CS charges and fluxes of different colors, but the resulting set of different effective CF magnetic fields of different colors can no longer be regarded as physical reality, and no cancellation between the gauge and Coulomb interactions is possible.
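Equation (29) is simple to evaluate for any composition. A sketch (dictionary layout ours) checks that the single-component case reduces to the familiar $`2S^{*}=2S-(m_{AA}-1)(N_A-1)`$:

```python
def eff_monopole(two_S, counts, m):
    """Equation (29): 2S*_A = 2S - sum_B (m_AB - delta_AB)(N_B - delta_AB).
    counts: {species: N_B}; m: {(A, B): m_AB}, symmetric in A, B."""
    out = {}
    for A in counts:
        s = two_S
        for B, NB in counts.items():
            d = 1 if A == B else 0
            s -= (m[(A, B)] - d) * (NB - d)
        out[A] = s
    return out

# single component: 10 electrons, m_ee = 3, 2S = 27 -> 2S* = 27 - 2*9 = 9,
# a filled lowest CF shell of l* = 9/2 (the Laughlin nu = 1/3 state)
print(eff_monopole(27, {"e": 10}, {("e", "e"): 3}))   # {'e': 9}

# a hypothetical two-component mixture of electrons and X^- (not a specific
# state from the text, just to show the bookkeeping of equation (29)):
counts = {"e": 2, "X": 4}
m = {("e", "e"): 3, ("X", "X"): 3, ("e", "X"): 2, ("X", "e"): 2}
print(eff_monopole(17, counts, m))                    # {'e': 7, 'X': 7}
```

Note that the two effective monopole strengths coincide here only by accident of the chosen exponents; in general $`2S_A^{*}\ne 2S_B^{*}`$, as stressed in the text.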
The multi-component (multi-color) CF picture can be applied to electrons and charged excitons in an electron–hole system. We have checked that the pseudopotentials describing interaction of identical composite particles in figure 13 all satisfy the short range criterion in the entire range of $`\mathcal{R}`$. For a pair of different particles, the pseudopotential may increase sufficiently quickly for some values of $`\mathcal{R}`$ but not the others and, for example, for $`e^{-}`$ and $`X^{-}`$ only the correlations described by odd exponents $`m_{e^{-}X^{-}}`$ are expected to occur. As an example, let us consider the $`12e`$–$`6h`$ system. In figure 14 we present its low energy spectrum at $`2S=17`$, calculated by diagonalizing systems of different combinations of electrons and composite particles interacting through effective pseudopotentials determined in figure 13.
The following combinations (groupings of $`12e`$ and $`6h`$ into bound complexes) have the highest total binding energy and thus form the lowest energy bands in the $`12e`$–$`6h`$ spectrum: (i) $`6X^{-}`$, (ii) $`e^{-}+5X^{-}`$, (iii) $`e^{-}+4X^{-}+X_2^{-}`$, (iv) $`2e^{-}+2X^{-}+2X_2^{-}`$, (v) $`2e^{-}+3X^{-}+X_3^{-}`$, (vi) $`2e^{-}+3X^{-}+X_2^{-}`$, (vii) $`2e^{-}+4X^{-}`$. Groupings (ii), (vi), and (vii) also contain neutral excitons, which however do not interact with charged particles due to the hidden symmetry. For each of these groupings, the CF transformation can be applied to determine correlations and identify the number and type of quasiparticles that occur in the lowest energy states. For example, for groupings (i)–(iii) the generalized CF picture makes the following predictions.
(i) For $`m_{X^{-}X^{-}}=3`$ we obtain the Laughlin $`\nu =1/3`$ state with total angular momentum $`L=0`$. Because of the hard core of $`V_{X^{-}X^{-}}`$, this is the only state of this grouping.
(ii) We set $`m_{X^{-}X^{-}}=3`$ and $`m_{e^{-}X^{-}}=1`$, 2, and 3. For $`m_{e^{-}X^{-}}=1`$ we obtain $`L=1`$, 2, $`3^2`$, $`4^2`$, $`5^3`$, $`6^3`$, $`7^3`$, $`8^2`$, $`9^2`$, 10, and 11. For $`m_{e^{-}X^{-}}=2`$ we obtain $`L=1`$, 2, 3, 4, 5, and 6. For $`m_{e^{-}X^{-}}=3`$ we obtain $`L=1`$.
(iii) We set $`m_{X^{-}X^{-}}=3`$, $`m_{e^{-}X_2^{-}}=1`$, $`m_{X^{-}X_2^{-}}=3`$, and $`m_{e^{-}X^{-}}=1`$, 2, or 3. For $`m_{e^{-}X^{-}}=1`$ we obtain $`L=2`$, 3, $`4^2`$, $`5^2`$, $`6^3`$, $`7^2`$, $`8^2`$, 9, and 10. For $`m_{e^{-}X^{-}}=2`$ we obtain $`L=2`$, 3, 4, 5, and 6. For $`m_{e^{-}X^{-}}=3`$ we obtain $`L=2`$.
In groupings (ii) and (iii), the sets of multiplets obtained for higher values of $`m_{e^{-}X^{-}}`$ are subsets of the sets obtained for lower values, and we would expect them to form lower energy bands since they avoid additional small values of $`\mathcal{R}_{e^{-}X^{-}}`$. However, note that the (ii) and (iii) states predicted for $`m_{e^{-}X^{-}}=3`$ (at $`L=1`$ and 2, respectively) do not form separate bands in figure 14. This is because $`V_{e^{-}X^{-}}`$ increases more slowly than linearly as a function of $`L(L+1)`$ in the vicinity of $`\mathcal{R}_{e^{-}X^{-}}=3`$ (see figure 13). In such a case the CF picture fails.
Our conclusion is that different kinds of long-lived Fermions (electrons and different charged excitonic complexes) formed in an electron–hole plasma in high magnetic fields can exhibit generalized incompressible FQH ground states with Laughlin-type correlations, and that these states can be described using a generalized CF model.
### 9.4 Spatially Separated Electron–Hole System
Even in very high magnetic fields (in the lowest Landau level), an asymmetry between $`e`$–$`e`$, $`h`$–$`h`$, and $`e`$–$`h`$ interactions can be introduced by spatially separating 2D electron and hole layers. Such separation, which occurs for example in asymmetrically doped wide quantum wells, breaks the hidden symmetry and allows for a rich photoluminescence (PL) spectrum, which (unlike that of a co-planar system) can therefore be used as a probe of the low lying electron–hole states.
Let us consider an ideal system, in which electrons and holes occupy 2D parallel planes separated by a distance $`d`$. The interaction potentials are $`V_{ee}(r)=V_{hh}(r)=1/r`$ and $`V_{eh}(r)=-1/\sqrt{r^2+d^2}`$. The energy spectrum of a seven-electron–one-hole system is shown in figure 15 for $`2S=15`$ and values of $`d`$ going from 0 to 5 (measured in units of the magnetic length $`\lambda `$).
For $`d=5\lambda `$, the $`e`$–$`h`$ interaction is weak and, as a first approximation, we can say that the lowest band of states will consist of the lowest CF band of the electron system plus the (constant) hole energy. The allowed angular momenta will be given by $`\mathrm{L}_\mathrm{e}`$, the angular momenta of the low lying electron states, added to the hole angular momentum $`\mathrm{l}_\mathrm{h}`$ of length $`l_h=S=15/2`$. At $`2S=15`$, the CF picture for the electrons gives $`2S^*=2S-2p(N-1)=15-2(7-1)=3`$. The seven electrons fill the $`l_0^*=3/2`$ shell plus three of the QE states in the shell $`l_{\mathrm{QE}}=5/2`$. The resulting electron angular momenta are $`L_e=3/2`$, 5/2, and 9/2. This gives three bands of low lying states, with total angular momenta $`6\le L\le 9`$, $`5\le L\le 10`$, and $`3\le L\le 12`$, respectively. These three bands can be clearly distinguished at $`d=5\lambda `$, and the states within each band become nearly degenerate at $`d\gtrsim 10\lambda `$.
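The band boundaries quoted above follow from ordinary angular momentum addition, $`|L_e-l_h|\le L\le L_e+l_h`$. A minimal sketch of this counting (function and variable names are ours):

```python
from fractions import Fraction as F

def coupled_range(l1, l2):
    """Allowed total angular momenta L from coupling l1 and l2:
    |l1 - l2|, |l1 - l2| + 1, ..., l1 + l2."""
    lo, hi = abs(l1 - l2), l1 + l2
    out = []
    L = lo
    while L <= hi:
        out.append(L)
        L += 1
    return out

l_h = F(15, 2)  # hole angular momentum, l_h = S = 15/2
for l_e in (F(3, 2), F(5, 2), F(9, 2)):  # low lying 7-electron multiplets
    band = coupled_range(l_e, l_h)
    print(l_e, "->", band[0], "...", band[-1])
# L_e = 3/2 -> 6...9, L_e = 5/2 -> 5...10, L_e = 9/2 -> 3...12
```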
For $`d=0`$, it is more useful to consider bound excitonic complexes ($`X`$ and $`X^{-}`$) and Laughlin quasiparticles of the $`e^{-}`$–$`X^{-}`$ fluid. First consider the multiplicative state with a single $`X`$ and six electrons. At $`2S=15`$ six electrons have the Laughlin $`\nu =1/3`$ ground state, since $`2S^*=15-2(6-1)=5`$ gives a CF shell which accommodates all six CF's. This is the lowest state at $`L=0`$, marked with a circle in frame (a). For a charge configuration containing one $`X^{-}`$ and five electrons, we can use the generalized CF model with $`m_{e^{-}e^{-}}=m_{e^{-}X^{-}}=2`$. This gives $`2S_e^*=2S-m_{e^{-}e^{-}}(N_e-1)-m_{e^{-}X^{-}}=5`$ and $`2S_{X^{-}}^*=2S-m_{e^{-}X^{-}}N_e=5`$, and the angular momenta $`l_e^*=S_e^*=5/2`$ and $`l_{X^{-}}^*=S_{X^{-}}^*-1=3/2`$. There is one empty state in the lowest CF-$`e^{-}`$ shell, giving $`L_e=5/2`$, and the CF-$`X^{-}`$ has $`L_{X^{-}}=3/2`$. Adding these two angular momenta gives $`L=1`$, 2, 3, and 4 as the lowest band of $`5e^{-}`$–$`X^{-}`$ states. The multiplicative state at $`L=0`$ (open circle) and the band of four multiplets containing an $`X^{-}`$ at $`L=1`$ to 4 (line) can clearly be seen at $`d=0`$ in frame (a). Although the hidden symmetry is only approximate at $`d>0`$, these bands can be easily identified at $`d=0.5\lambda `$ in frame (b).
At an intermediate separation of $`d=1.75\lambda `$ in frame (c), neither description used for $`d<\lambda `$ or $`d\gg \lambda `$ is valid, and it seems that a low energy band occurs at $`L=0`$, 1, 2, $`3^2`$, 4, 5, and 6. Most likely, the $`X^{-}`$ unbinds but the hole is still able to bind one electron, forming an exciton with a significant electric dipole moment. This dipole moment results in repulsion between the exciton and the remaining six electrons, so that the correlations are quite different from those at $`d=0`$, where the exciton decouples.
The PL spectrum can be evaluated from the eigenfunctions obtained in the numerical diagonalization of finite systems. For $`d\ne 0`$, between one and three peaks are observed in the PL spectrum. Their separations are related to the Laughlin gap (for creation of a QE–QH pair) and to the energy of interaction between the valence band hole and the electron system.
### 9.5 Charged Excitons at a Finite Magnetic Field
One final point is worth mentioning. The numerical calculations described so far were performed for an idealized model in which electrons and holes were confined to infinitely thin $`2D`$ layers, and only the lowest Landau level was considered. For realistic systems, effects due to spin, finite width of the quantum well, and Landau level mixing are very important. The energy spectra of the simple $`2e`$–$`1h`$ system calculated at $`2S=20`$ for parameters appropriate to an 11.5 nm GaAs/AlGaAs quantum well are shown in figure 16.
The two frames correspond to magnetic fields of $`B=13`$ T and 68 T. We used five electron and hole Landau levels ($`n\le 4`$) in the calculation, with the realistic magnetic field dependence of the hole cyclotron mass and the appropriate Zeeman splittings. The interaction matrix elements included finite (and different) effective widths of the electron and hole quasi-2D layers.
There are a number of bound $`X^{-}`$ states in both frames, in contrast to only one singlet bound state at $`B=0`$ or only one triplet bound state predicted for an idealized system at infinite $`B`$. Three of these bound states are of particular importance. The $`X_s^{-}`$ and $`X_{tb}^{-}`$ ($`b`$ for "bright"), the lowest singlet and triplet states at $`L=S`$, are the only well bound radiative states, while $`X_{td}^{-}`$ ($`d`$ for "dark") has by far the lowest energy of all non-radiative ($`L\ne S`$) states. The dark triplet state $`X_{td}^{-}`$ is the state discussed in the preceding sections; it is the only bound state in the lowest Landau level, but it unbinds at low magnetic fields. The bright singlet state $`X_s^{-}`$ is the only bound state at $`B=0`$, but it unbinds at very high fields due to the hidden symmetry. These states cross at $`B\approx 30`$ T, as predicted in an earlier calculation. The bright triplet state $`X_{tb}^{-}`$ has been discovered very recently. It occurs only at intermediate fields and crosses neither $`X_s^{-}`$ nor $`X_{td}^{-}`$. It has larger PL intensity than the $`X_s^{-}`$ state.
Although an isolated $`X_{td}^{-}`$ is non-radiative because of the angular momentum selection rule, its collisions with other $`X^{-}`$'s or with electrons (which break the translational symmetry) could be expected to allow for $`X_{td}^{-}`$ recombination. However, the Laughlin correlations limit high energy collisions at low filling density ($`\nu \sim 1/5`$ or less), and the PL intensity of a dark $`X_{td}^{-}`$ remains very low even in the presence of other particles. In consequence, the $`X_{td}^{-}`$ is not seen in PL, and there is no contradiction between experiment, which sees recombination of a triplet state at an energy above the singlet state up to 50 T, and theory, which predicts that the lowest triplet state crosses the singlet at roughly 30 T.
## 10 Summary
We have introduced the Jain CF mean field picture and shown how the low lying states can be understood by simple addition of angular momentum. The mean field CF picture gives the correct spectral structure not because of some cancellation between Chern–Simons and Coulomb interactions beyond the mean field, but because it selects a low angular momentum subset of the allowed multiplets that avoids the largest pair repulsion. The Laughlin correlations, which describe incompressible quantum fluid states, depend critically on the electron pseudopotential being of "short range" (by which we mean that $`V(L_{12})`$ increases more quickly than $`L_{12}(L_{12}+1)`$). The validity of Jain's picture also depends upon $`V(L_{12})`$ being of short range. The pseudopotentials describing quasiparticles of a Laughlin condensed state display short range behavior only at certain values of $`L_{12}`$. We have used this fact to explain why only certain states in the CF hierarchy give rise to incompressible states of the quasiparticle fluid (or daughter states in the hierarchy). The pseudopotentials $`V_n(L_{12})`$ for higher Landau levels ($`n>0`$) do not display short range behavior at all values of $`L_{12}`$, implying that Laughlin-like correlations will not necessarily result at $`\nu ^{\prime }=2p+\nu `$, where $`p`$ is an integer and $`\nu `$ is a Laughlin–Jain filling factor. The CF ideas have been applied successfully to multicomponent plasmas containing different types of Fermions, with the prediction of possible incompressible fluid states for these systems. Finally, the energy spectrum and PL of electron–hole systems can be interpreted in terms of CF's and Laughlin correlations.
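The "short range" criterion above can be phrased operationally: between successive pair states, $`V`$ must grow faster than $`L_{12}(L_{12}+1)`$, i.e. the successive slopes of $`V`$ plotted against $`L_{12}(L_{12}+1)`$ must increase. A toy check (entirely our own construction, with synthetic pseudopotentials):

```python
def is_short_range(pairs):
    """pairs: list of (L12, V) samples in increasing L12.
    Short range here means V(L12) rises faster than L12(L12+1),
    i.e. successive slopes dV / d[L(L+1)] strictly increase."""
    slopes = []
    for (l0, v0), (l1, v1) in zip(pairs, pairs[1:]):
        slopes.append((v1 - v0) / (l1 * (l1 + 1) - l0 * (l0 + 1)))
    return all(s1 > s0 for s0, s1 in zip(slopes, slopes[1:]))

harmonic = [(l, l * (l + 1)) for l in range(1, 8)]        # marginal: linear in L(L+1)
steeper = [(l, (l * (l + 1)) ** 2) for l in range(1, 8)]  # short range
```

The "harmonic" case, exactly linear in $`L_{12}(L_{12}+1)`$, is the marginal one and fails the strict test, while any superlinear pseudopotential passes.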
## Acknowledgment
The authors gratefully acknowledge the support of Grant DE-FG02-97ER45657 from the Materials Science Program – Basic Energy Sciences of the US Department of Energy. They wish to thank P. Hawrylak, P. Sitko, I. Szlufarska, and K.-S. Yi for helpful discussions on different aspects of this work.
## References
Everyone can understand quantum mechanics
Gao Shan
Institute of Quantum Mechanics
11-10, NO.10 Building, YueTan XiJie DongLi, XiCheng District
Beijing 100045, P.R.China
E-mail: gaoshan.iqm@263.net
## Abstract
We show that everyone can understand quantum mechanics, provided he rejects the following prejudice: that classical continuous motion (CCM) is the only possible and objective motion of particles.
I think I can safely say that nobody today understands quantum mechanics. – Feynman (1965)
When people talk about motion, they refer only to CCM; its uniqueness is taken for granted absolutely but unconsciously, and people never dream of another, different kind of motion in Nature. Yet, surprisingly, no one has given a definite answer up to now as to whether or not CCM is the only possible and objective motion, and whether CCM is the real motion or merely an apparent one.
In classical mechanics, CCM is undoubtedly the leading actor, while in quantum mechanics CCM is rejected by the orthodox interpretation from stem to stern. But why did no one guess that what quantum mechanics describes is simply another motion, different from CCM? As we see it, this is the most direct and natural idea: since classical mechanics describes CCM, quantum mechanics should correspondingly describe another kind of motion.
The only stumbling block is the huge prejudice rooted in people's minds, namely that classical continuous motion (CCM) is the only possible and objective motion of particles. Now let's see this more clearly by looking back at the history.
Bohr and his enthusiastic supporters held this prejudice strongly. They insisted that the Copenhagen interpretation is the only possible interpretation of quantum mechanics: since CCM can no longer account for the phenomena of quantum mechanics, we must essentially discard it; and since it was taken to be the only possible and objective motion, it then seemed evident that quantum mechanics provides no objective description of Nature at all, but only our knowledge about Nature.
Einstein held this prejudice even more strongly. He believed that if the objective picture of classical continuous motion contradicts quantum mechanics, the wrong side can only be quantum mechanics, not classical continuous motion, since in no case can we lose reality, and classical continuous motion is the only reality of Nature. Thus he became the strongest opponent of the Copenhagen interpretation, but his acerbic comments did not help him much; he failed to persuade Bohr, as well as his contemporaries.
Bohm also held this prejudice. His cleverness lay in providing a compromise hidden-variable picture between those of Bohr and Einstein, but neither side was satisfied with his way, and he himself was also tortured by the dualistic monster he had created.
Everett still held this prejudice. Even though he presented a daring many-worlds interpretation of quantum mechanics, his interpretation remains within the framework of CCM, only now applied to every branch of the extravagant many worlds; and no supporters would attempt quantum suicide to convince themselves that the many-worlds interpretation is right, let alone to convince anyone else.
More and more followers have tried to understand quantum mechanics, but since they still hold this prejudice firmly and unconsciously, they are doomed to fail; this is the destiny the prejudice imposes on them.
Then why cling to it until death like a miser? Loosen your grip! Reject it! And do not walk along this wrong way any more; it leads only to a blind alley, an impasse with no way out.
In our previous paper, from clear logical and physical analyses of motion, we have shown that the natural motion in continuous space-time is not CCM but a kind of essentially discontinuous motion, and that the Schrödinger equation of quantum mechanics is just its simplest nonrelativistic equation of motion. In real, discrete space-time the natural motion is also discontinuous, and it results in the collapse process of the wave function; this brings about the appearance of CCM in the macroscopic world. Thus CCM is by no means the real motion in Nature, let alone the only possible and objective motion; it is just a kind of ideal apparent motion in the macroscopic world where we live, while the real motion is essentially discontinuous.
Once we reject the apparent CCM and find the real motion in Nature, understanding quantum mechanics becomes an easy task. We can safely say that everybody can understand quantum mechanics easily from now on, and nobody will be plagued by its weirdness any more, since quantum mechanics is just the theory describing the real motion in Nature. Even though the real motion is more complex than CCM, it also has a clear picture, just like CCM; its weirdness results only from its particular existence and evolution. In fact, from a logical point of view, its existence and evolution are more natural than those of CCM; only because we are unfamiliar with it does it look bizarre to us.
Concretely speaking, the wave function $`\psi (x,t)`$ in quantum mechanics is an indirect mathematical complex describing the state of the real motion of a particle; the direct description quantities are $`\rho (x,t)`$ and $`j(x,t)`$, and their relation is $`\psi (x,t)=\rho ^{1/2}e^{iS(x,t)/\hbar }`$, where $`S(x,t)=m\int _{-\mathrm{\infty }}^xj(x^{\prime },t)/\rho (x^{\prime },t)\,dx^{\prime }+C(t)`$. The apparent wave-like form of $`\psi (x,t)`$ results essentially from the discontinuity of the real motion, not from any objective existence of a wave or field.
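Purely as an illustration of the relation just quoted (a discretized sketch of our own, not part of the original argument), $`\psi `$ can be assembled on a grid from given $`\rho `$ and $`j`$ by accumulating the phase integral with the trapezoid rule:

```python
import cmath

HBAR = 1.0  # work in units where hbar = 1

def psi_from_rho_j(x, rho, j, m=1.0, c=0.0):
    """psi = rho^(1/2) * exp(i S / hbar), with
    S(x) = m * integral of j/rho up to x (trapezoid rule) + c."""
    s, out = c, []
    for i in range(len(x)):
        if i > 0:
            dx = x[i] - x[i - 1]
            s += m * 0.5 * (j[i] / rho[i] + j[i - 1] / rho[i - 1]) * dx
        out.append((rho[i] ** 0.5) * cmath.exp(1j * s / HBAR))
    return out

# Uniform density and current give a plane wave, psi ~ exp(i k x) with k = j/rho.
x = [0.1 * i for i in range(11)]
psi = psi_from_rho_j(x, [1.0] * 11, [2.0] * 11)
```

For constant $`\rho `$ and $`j`$ the trapezoid sum is exact, so the modulus of each $`\psi `$ value equals $`\sqrt{\rho }`$ and the phase grows linearly in $`x`$.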
The evolution of the real motion includes two parts. One is the linear evolution part; it results in the interference pattern, which is usually the display of a classical wave, but the pattern is undoubtedly formed by a large number of particles undergoing the real motion. The other is the nonlinear stochastic evolution part; it results in the collapse process of the wave function. During measurement this process happens very quickly, and the wave function of the particle collapses into a local region; this brings about the appearance of a single event in measurement. The process is stochastic and indeterministic owing to the essential discontinuity and randomness of the real motion itself.
Certainly, one point needs to be stressed: even though the wave function does provide a complete description of the state of the real motion, present quantum theory does not provide a complete description of the evolution of the real motion, and it needs to be revised to include the stochastic evolution part.
Now we may also understand why people have not understood quantum mechanics in the more than seventy years since they found it. The reason is very simple: people always discuss and picture it in the framework of CCM, so they can only see the sky of CCM. Some of them would ruthlessly reject the reality of the quantum world rather than give another possible motion a glance; the others would never give up CCM. This is indeed a sorry state for science, but the most heart-rending part is that people are always complacent about their own choices and care little about the ideas of others. All of this will be fundamentally changed from now on.
## 1 Introduction
Cosmological models based on cold dark matter (CDM) and simple versions of inflation have had considerable success in accounting for the origin of cosmic structure. In this class of models, the primordial density fluctuations are Gaussian distributed, and the shape of their power spectrum is determined by a small number of physical parameters that describe the inflationary fluctuations themselves and the material contents of the universe. For specified cosmological parameters, the measurement of cosmic microwave background (CMB) anisotropies by the COBE-DMR experiment (Smoot et al. (1992); Bennett et al. (1996)) fixes the amplitude of the matter power spectrum on large scales with an uncertainty of $`20\%`$ (e.g., Bunn & White (1997)). In this paper, we combine the COBE normalization with a recent measurement of the matter power spectrum by Croft et al. (1999b, hereafter CWPHK) to test the inflation+CDM scenario and constrain its physical parameters. A modified version of the method developed here is applied to a more recent power spectrum measurement by Croft et al. (2001).
CWPHK infer the mass power spectrum $`P(k)`$ from measurements of Ly$`\alpha `$ forest absorption in the light of background quasars, at a mean absorption redshift $`z\approx 2.5`$. The method, introduced by Croft et al. (1998), is based on the physical picture of the Ly$`\alpha `$ forest that has emerged in recent years from 3-dimensional, hydrodynamic cosmological simulations and related analytic models (e.g., Cen et al. (1994); Zhang, Anninos, & Norman (1995); Hernquist et al. (1996); Bi & Davidsen (1997); Hui, Gnedin, & Zhang (1997)). By focusing on the absorption from diffuse intergalactic gas in mildly non-linear structures, this method sidesteps the complicated theoretical problem of biased galaxy formation; it directly estimates the linear theory mass power spectrum (over a limited range of scales) under the assumption of Gaussian initial conditions. Because the observational units are km s<sup>-1</sup>, the CWPHK measurement probes somewhat different comoving scales for different cosmological parameters: $`\lambda \equiv 2\pi /k=2`$–$`12h^{-1}\mathrm{Mpc}`$ for $`\mathrm{\Omega }_m`$=1, $`\lambda =3`$–$`16h^{-1}\mathrm{Mpc}`$ for $`\mathrm{\Omega }_m`$=0.55 and $`\mathrm{\Omega }_\mathrm{\Lambda }`$=0, and $`\lambda =4`$–$`22h^{-1}\mathrm{Mpc}`$ for $`\mathrm{\Omega }_m`$=0.4 and $`\mathrm{\Omega }_\mathrm{\Lambda }`$=0.6 ($`h\equiv H_0/100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$). CWPHK determine the logarithmic slope of $`P(k)`$ on these scales with an uncertainty $`\sim 0.2`$ and the amplitude with an uncertainty $`\sim 35\%`$. The extensive tests on simulations in Croft et al. (1998) and CWPHK suggest that the statistical uncertainties quoted here dominate over systematic errors in the method itself, though the measurement does depend on the assumption of Gaussian primordial fluctuations and on the broad physical picture of the Ly$`\alpha `$ forest described in the references above. For brevity, we will usually refer to the CWPHK determination of the mass power spectrum as "the Ly$`\alpha `$ $`P(k)`$."
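The velocity-to-comoving-scale conversion behind these numbers is the standard relation $`\lambda _{\mathrm{com}}=\lambda _v(1+z)/H(z)`$; a short sketch (our own code, using the cosmologies quoted above) makes the scaling explicit:

```python
import math

def mpc_per_kms(z, om, ol):
    """Comoving h^-1 Mpc per km/s at redshift z: (1+z)/H(z), with
    H(z) = 100h E(z) km/s/Mpc and
    E(z)^2 = om(1+z)^3 + (1 - om - ol)(1+z)^2 + ol."""
    a = 1.0 + z
    e = math.sqrt(om * a**3 + (1.0 - om - ol) * a**2 + ol)
    return a / (100.0 * e)

for om, ol in [(1.0, 0.0), (0.55, 0.0), (0.4, 0.6)]:
    f = mpc_per_kms(2.5, om, ol)
    print(om, ol, round(1000 * f, 2), "x10^-3 h^-1 Mpc per km/s")
```

The low density models give a larger conversion factor, which is why the same velocity range maps to larger comoving scales in those cosmologies.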
In the next Section, we discuss our choice of the parameter space for inflationary CDM models. The core of the paper is Section 3, where we combine the COBE normalization with the Ly$`\alpha `$ $`P(k)`$ to identify acceptable regions of the CDM parameter space. We focus on four representative models: a low density ($`\mathrm{\Omega }_m<1`$) open model, a low density flat model with a cosmological constant, and Einstein-de Sitter ($`\mathrm{\Omega }_m=1`$) models with pure CDM and with a mixture of CDM and hot dark matter. Because different parameters have nearly degenerate influences on the predicted Ly$`\alpha `$ $`P(k)`$, we are able to summarize our results in terms of simple equations that constrain combinations of these parameters. In Section 4, we consider other observational constraints that can break these degeneracies, such as the cluster mass function, the peculiar velocity power spectrum, the shape of the galaxy power spectrum, and the CMB anisotropy power spectrum. We review our conclusions in Section 5.
## 2 Parameter Space for CDM Models
In simple inflationary models, the power spectrum of density fluctuations in the linear regime can be well approximated as a power law, $`P(k)\propto k^n`$ (where $`n=1`$ is the scale-invariant spectrum), multiplied by the square of a transfer function $`T(k)`$ that depends on the relative energy densities of components with different equations of state. We will assume the standard radiation background (microwave background photons and three species of light neutrinos) and consider as other possible components cold dark matter, baryons, a "cosmological constant" vacuum energy, and neutrinos with a non-zero rest mass in the few eV range. Within this class of models, the shape of the power spectrum is therefore determined by the parameters $`n`$, $`\mathrm{\Omega }_{\mathrm{CDM}}`$, $`\mathrm{\Omega }_b`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$, $`\mathrm{\Omega }_\nu `$, and $`h`$ (since $`\rho _x=\mathrm{\Omega }_x\rho _c\propto \mathrm{\Omega }_xh^2`$). In place of $`\mathrm{\Omega }_b`$ and $`\mathrm{\Omega }_{\mathrm{CDM}}`$, we use the parameters
$$B\equiv \mathrm{\Omega }_bh^2,$$
(1)
which is constrained by light-element abundances through big bang nucleosynthesis (Walker et al. (1991)), and
$$\mathrm{\Omega }_m\equiv \mathrm{\Omega }_{\mathrm{CDM}}+\mathrm{\Omega }_b+\mathrm{\Omega }_\nu ,$$
(2)
which fixes $`\mathrm{\Omega }_{\mathrm{CDM}}`$ once $`B`$, $`h`$, and $`\mathrm{\Omega }_\nu `$ are specified. For non-zero $`\mathrm{\Omega }_\nu `$, we assume one dominant family of massive neutrinos. We do not consider arbitrary combinations of $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ but instead restrict our attention to the two theoretically simplest possibilities, spatially flat models with $`\mathrm{\Omega }_\mathrm{\Lambda }=1-\mathrm{\Omega }_m`$ and open models with $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$.
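The bookkeeping implied by equations (1) and (2) is simple enough to spell out; a minimal sketch (our own function name), evaluated for the $`\mathrm{\Lambda }`$CDM fiducial choices quoted in Section 3 ($`\mathrm{\Omega }_m=0.4`$, $`h=0.65`$, $`B=0.02`$):

```python
def omega_cdm(omega_m, b, h, omega_nu=0.0):
    """Equation (2) rearranged: Omega_CDM = Omega_m - Omega_b - Omega_nu,
    with Omega_b = B / h^2 from equation (1)."""
    return omega_m - b / h**2 - omega_nu

# LambdaCDM fiducial parameters -> Omega_b ~ 0.047, Omega_CDM ~ 0.353
print(round(omega_cdm(0.4, 0.02, 0.65), 4))
```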
Once the cosmological parameters are specified, normalizing to the results of the COBE-DMR experiment determines the amplitude of $`P(k)`$. For inflation models with $`n<1`$, the COBE normalization can also be affected by the presence of tensor fluctuations (gravity waves). We consider normalizations with no tensor contribution and normalizations with the quadrupole tensor-to-scalar ratio $`T/S=7(1-n)`$ predicted by simple power law inflation models (e.g., Davis et al. (1992)), but we do not consider arbitrary tensor contributions. We compute the COBE-normalized, linear theory, matter power spectrum $`P(k)`$ using the convenient and accurate fitting formulas of Eisenstein & Hu (1999), with the normalization procedures of Bunn & White (1997) for all flat cases and for the open case without a tensor contribution and Hu & White (1997) for the open case with a tensor contribution.
There are plausible variants of this family of inflationary CDM models that we do not analyze in this paper, because we lack the tools to easily calculate their predictions and because they would make our parameter space intractably larger. Prominent among these variants are models with a time-varying scalar field, a.k.a. "quintessence" (e.g., Peebles & Ratra (1988); Wang & Steinhardt (1998)), models in which the energy of the radiation background has been boosted above its standard value by a decaying particle species, a.k.a. "$`\tau `$CDM" (e.g., Bond & Efstathiou (1991)), and models in which inflation produces a power spectrum with broken scale invariance (e.g., Kates et al. (1995)). Given the observational evidence for a negative pressure component from Type Ia supernovae (Riess et al. (1998); Perlmutter et al. (1999)), the quintessence family might be especially interesting to explore in future work.
In sum, the free parameters of our family of cosmological models are $`\mathrm{\Omega }_m`$, $`h`$, $`n`$, $`B`$, $`\mathrm{\Omega }_\nu `$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$, and $`T/S`$. We allow $`\mathrm{\Omega }_m`$, $`h`$, $`n`$, $`B`$, and $`\mathrm{\Omega }_\nu `$ to assume a continuous range of values. For $`\mathrm{\Omega }_\mathrm{\Lambda }`$ and $`T/S`$ we consider only two discrete options: $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ or $`1-\mathrm{\Omega }_m`$, and $`T/S=0`$ or $`7(1-n)`$.
## 3 Cosmological Parameters and the Ly$`\alpha `$ Forest P(k)
To organize our discussion and guide our analysis, we focus on variations about four fiducial models, each motivated by a combination of theoretical and observational considerations. The fiducial models are a flat cold dark matter model with a non-zero cosmological constant ($`\mathrm{\Lambda }`$CDM), an open cold dark matter model with no cosmological constant (OCDM), an $`\mathrm{\Omega }_m=1`$ cold dark matter model with a significantly "tilted" inflationary spectrum (TCDM), and an $`\mathrm{\Omega }_m=1`$ model with a mixture of cold and hot dark matter (CHDM).
For all of the fiducial models we adopt $`B=0.02`$, based on recent measurements of the deuterium abundance in high-redshift Lyman limit absorbers (Burles & Tytler (1997), 1998). For the TCDM and CHDM models we adopt $`h=0.5`$ in order to obtain a reasonable age for the universe given the assumption that $`\mathrm{\Omega }_m=1`$. For the $`\mathrm{\Lambda }`$CDM and OCDM models we instead adopt $`h=0.65`$, which is better in line with recent direct estimates of the Hubble constant (e.g., Mould et al. (2000)). For the $`\mathrm{\Lambda }`$CDM model we take $`\mathrm{\Omega }_m=0.4`$, but for OCDM we adopt a rather high density, $`\mathrm{\Omega }_m=0.55`$, in anticipation of our results in Section 4, where we consider the cluster mass function as an additional observational constraint. For the CHDM model, we take $`\mathrm{\Omega }_\nu =0.2`$ and assume one dominant species of massive neutrino; for all other models $`\mathrm{\Omega }_\nu =0`$. With $`B`$, $`h`$, $`\mathrm{\Omega }_m`$, and $`\mathrm{\Omega }_\nu `$ fixed, we are left with one free parameter, the inflationary spectral index $`n`$, which we choose in order to fit the amplitude of the Ly$`\alpha `$ $`P(k)`$ while maintaining the COBE normalization. The required value of $`n`$ is different for models with no tensor contribution to CMB anisotropies than for models with tensor fluctuations; we refer to the fiducial models with tensor fluctuations as $`\mathrm{\Lambda }`$CDM2, OCDM2, and TCDM2. Because a value $`n>1`$ is required for CHDM and the assumption that $`T/S=7(1-n)`$ therefore cannot be correct in this case, we do not consider a CHDM model with tensor fluctuations. Table 1 lists the parameters of the fiducial models. For later reference, Table 1 also lists each model's value of $`\sigma _8`$, the rms linear theory mass fluctuation in spheres of radius $`8h^{-1}\mathrm{Mpc}`$ at $`z=0`$.
Figure 1 compares the power spectra of our fiducial models to the Ly$`\alpha `$ $`P(k)`$, shown as the filled circles with error bars. Note that the overall normalization of the data points is uncertain; at the $`1\sigma `$ level they can shift up or down coherently by the amount indicated by the error bar on the open circle (see CWPHK for details). The COBE normalization itself has a $`1\sigma `$ uncertainty of approximately 20% in $`P(k)`$, roughly half of the Ly$`\alpha `$ $`P(k)`$ normalization uncertainty. Panels (a) and (c) show the fiducial models with and without tensors, respectively, over a wide range of wavenumber. Panels (b) and (d) focus on the range of wavenumbers probed by the Ly$`\alpha `$ $`P(k)`$. Our first major result is already evident from Figure 1: all of the fiducial models reproduce the observed Ly$`\alpha `$ $`P(k)`$. Each model has a single adjustable parameter, the spectral index $`n`$, so their success in reproducing both the amplitude and slope of $`P(k)`$ is an important confirmation of a generic prediction of the inflationary CDM scenario, a point we will return to shortly.
Within the precision and dynamic range of the CWPHK measurement, the Ly$`\alpha `$ $`P(k)`$ can be adequately described by a power law. CWPHK find
$$\mathrm{\Delta }^2(k)\equiv \frac{k^3}{2\pi ^2}P(k)=\mathrm{\Delta }^2(k_p)\left(\frac{k}{k_p}\right)^{3+\nu },$$
(3)
with
$`k_p`$ $`=`$ $`0.008(\mathrm{km}\mathrm{s}^{-1})^{-1},`$ (4)
$`\mathrm{\Delta }^2(k_p)`$ $`=`$ $`0.573_{0.166}^{+0.233},`$ (5)
$`\nu `$ $`=`$ $`2.25\pm 0.18.`$ (6)
Here $`\mathrm{\Delta }^2(k)`$ is the contribution to the density variance per unit interval of $`\mathrm{ln}k`$, and $`k_p`$ is a “pivot” wavenumber near the middle of the range probed by the data.
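For readers who want to use the fit directly, the power law of equations (3)–(6) can be evaluated in a few lines. The sketch below is illustrative rather than part of the CWPHK analysis; note that the measured slope of $`P(k)`$ is negative, $`\nu =-2.25`$, so that $`\mathrm{\Delta }^2k^{0.75}`$ rises gently toward small scales:

```python
import math

# Best-fit power-law description of the Ly-alpha forest Delta^2(k)
# from equations (3)-(6): Delta^2(k) = Delta^2(k_p) * (k/k_p)^(3 + nu).
K_P = 0.008        # pivot wavenumber, in (km/s)^-1
DELTA2_KP = 0.573  # best-fit amplitude at the pivot
NU = -2.25         # best-fit index of P(k); note the negative sign

def delta2(k):
    """Contribution to the density variance per unit ln k."""
    return DELTA2_KP * (k / K_P) ** (3.0 + NU)

def pk(k):
    """The power spectrum itself, P(k) = 2 pi^2 Delta^2(k) / k^3."""
    return 2.0 * math.pi ** 2 * delta2(k) / k ** 3

# By construction, the amplitude at the pivot is the quoted best-fit value:
print(delta2(K_P))  # 0.573
```

Since $`3+\nu =0.75>0`$, `delta2` increases monotonically with $`k`$ over the measured range.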
In each panel of Figures 2 and 3, the central star shows the best fit values of $`\mathrm{\Delta }^2(k_p)`$ and $`\nu `$ quoted above, and the two large concentric circles show the $`1\sigma `$ (68%) and $`2\sigma `$ (95%) confidence contours on the parameter values. The calculation of these confidence contours is described in detail in Section 5 of CWPHK. Briefly, the likelihood distribution for the slope, $`\nu `$, is derived by fitting the power law form (eq. 3) to the $`P(k)`$ data points, using their covariance matrix. The likelihood distribution for the amplitude, $`\mathrm{\Delta }^2(k_p)`$, is obtained by convolving the distributions calculated from two separate sources of uncertainty involved in the $`P(k)`$ normalization. The joint confidence contours on the two parameters are obtained by multiplying together the two independent likelihood distributions. The $`1\sigma `$ and $`2\sigma `$ contours correspond to changes in $`-2\mathrm{log}_e\mathcal{L}`$ from its best fit value of $`2.30`$ and $`6.17`$, respectively, where $`\mathcal{L}`$ is the likelihood.
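The thresholds 2.30 and 6.17 are the standard values for joint confidence regions in two parameters: for a Gaussian likelihood, the change in $`-2\mathrm{ln}\mathcal{L}`$ follows a $`\chi ^2`$ distribution with two degrees of freedom, whose cumulative distribution inverts in closed form. A quick check (an illustrative sketch, not part of the CWPHK analysis):

```python
import math

def delta_2lnl(confidence, dof=2):
    """Change in -2 ln(likelihood) enclosing the given joint probability.
    For dof = 2 the chi^2 CDF is P(<x) = 1 - exp(-x/2), so the inverse
    is simply x = -2 ln(1 - P)."""
    if dof != 2:
        raise NotImplementedError("closed form shown for 2 dof only")
    return -2.0 * math.log(1.0 - confidence)

print(round(delta_2lnl(0.683), 2))  # close to the 1-sigma value 2.30
print(round(delta_2lnl(0.954), 2))  # close to the 2-sigma value 6.17
```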
The open circular point near the middle of each panel of these figures shows the fiducial model’s prediction of $`\mathrm{\Delta }^2(k_p)`$ and $`\nu `$. $`\mathrm{\Lambda }`$CDM, OCDM, and TCDM models without tensors appear in the left column of Figure 2, the corresponding models with tensors appear in the right column of Figure 2, and the CHDM model appears in Figure 3. As expected from Figure 1, the fiducial model predictions lie well within the 68% confidence contour in all cases. The 20% COBE normalization uncertainty adds a $`\mathrm{log}(1.2)\approx 0.08`$ error bar to the predicted value of $`\mathrm{log}\mathrm{\Delta }^2(k_p)`$, which we have not included in the plots. Because this uncertainty is small (once added in quadrature) compared to the Ly$`\alpha `$ $`P(k)`$ uncertainty itself, we have ignored it in the analysis of this paper. With a higher precision Ly$`\alpha `$ forest measurement, it would be important to include the COBE normalization uncertainty as an additional source of statistical error.
Changing any of the parameter values in any of the models shifts the predicted $`\mathrm{\Delta }^2(k_p)`$ and $`\nu `$, and the remaining points in Figures 2 and 3 show the effects of such parameter changes. Taking the $`\mathrm{\Lambda }`$CDM model of Figure 2a as an example, the two filled circles show the effect of increasing $`\mathrm{\Omega }_m`$ by 0.1 and 0.2 (to $`\mathrm{\Omega }_m=0.5`$ and $`\mathrm{\Omega }_m=0.6`$), while maintaining the condition $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1`$ and keeping all other parameters fixed at the fiducial values listed in Table 1. The two open circles show the effect of decreasing $`\mathrm{\Omega }_m`$ by 0.1 and 0.2. With $`\mathrm{\Omega }_m=0.2`$ and other parameters unchanged (leftmost open circle), the predicted amplitude $`\mathrm{\Delta }^2(k_p)`$ falls below the 95% confidence lower limit of CWPHK. In similar fashion, filled (open) pentagons show the effect of increasing (decreasing) $`n`$ by 0.05, filled (open) squares show the effect of increasing (decreasing) $`h`$ by 0.05, and filled (open) triangles show the effect of increasing (decreasing) $`\mathrm{\Omega }_b`$ by 0.01, in all cases keeping the other parameters fixed at their fiducial values. The format of the other panels of Figure 2 is identical, except that we do not show $`\mathrm{\Omega }_m`$ changes for TCDM. For $`\mathrm{\Lambda }`$CDM2 and OCDM2, we do not allow $`n>1`$. In Figure 3, filled (open) hexagons show the effect of increasing (decreasing) $`\mathrm{\Omega }_\nu `$ by 0.1 while keeping $`\mathrm{\Omega }_m=1`$. Open circles show the effect of decreasing $`\mathrm{\Omega }_m`$ by 0.1 while adding $`\mathrm{\Omega }_\mathrm{\Lambda }`$ to maintain flat space; results are virtually indistinguishable if $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is zero and the universe becomes (slightly) open. We do not consider changes that make $`\mathrm{\Omega }_m>1`$.
Parameter changes have similar effects in all of the models, and these effects can be easily understood by considering the physics that determines the shape and normalization of the matter power spectrum. The CDM transfer function has a single fundamental scale $`ct_{\mathrm{eq}}`$ determined by the size of the horizon at the time of matter-radiation equality; this scale is roughly the wavelength at which the power spectrum turns over. Increasing $`h`$, and hence the matter density $`\rho _m\propto \mathrm{\Omega }_mh^2`$, moves matter-radiation equality to higher redshift and lower $`t_{\mathrm{eq}}`$, shifting the model power spectrum towards smaller scales (to the right in Figure 1). This horizontal shift, combined with an upward vertical shift to maintain the COBE normalization on large scales, increases the amplitude of $`P(k)`$ on Ly$`\alpha `$ forest scales and translates a shallower (higher $`\nu `$) part of the spectrum to $`k_p`$. Increasing $`\mathrm{\Omega }_m`$ also lowers $`t_{\mathrm{eq}}`$ and therefore has a similar effect. Open models are more sensitive than flat models to changes in $`\mathrm{\Omega }_m`$ because the integrated Sachs-Wolfe effect makes a greater contribution to large scale CMB anisotropies (Sachs & Wolfe (1967); Hu, Sugiyama, & Silk (1997)). Increasing $`\mathrm{\Omega }_m`$ reduces the integrated Sachs-Wolfe effect and hence increases the matter fluctuation amplitude implied by COBE, shifting the power spectrum vertically upward. The value of $`\mathrm{\Delta }^2(k_p)`$ is sensitive to the spectral index $`n`$ because of the very long lever arm between the COBE normalization scale and the scale of the Ly$`\alpha `$ forest measurement. A small decrease in $`n`$ produces an equally small decrease in $`\nu `$ but a large decrease in $`\mathrm{\Delta }^2(k_p)`$.
The fluctuation amplitude is even more sensitive to $`n`$ in tensor models because, with $`T/S=7(1-n)`$, decreasing $`n`$ also increases the contribution of gravity waves to the observed COBE anisotropies and therefore reduces the implied amplitude of the (scalar) matter fluctuations. Since fluctuations in the baryon component can only grow after the baryons decouple from the photons, increasing $`B`$ depresses and steepens $`P(k)`$ on small scales and therefore reduces $`\mathrm{\Delta }^2(k_p)`$ and $`\nu `$. However, for our adopted parameters the baryons always contribute a small fraction of the overall mass density, so the influence of $`\mathrm{\Omega }_b`$ changes is small. Increasing $`\mathrm{\Omega }_\nu `$ in the CHDM model has a much greater effect in the same direction, since the suppression of small scale power by neutrino free streaming is much greater than the suppression by baryon-photon coupling.
Figures 2 and 3 re-emphasize the point made earlier in our discussion of Figure 1: the agreement between the predicted and measured slope of the Ly$`\alpha `$ $`P(k)`$ confirms a general prediction of the inflationary CDM scenario. Although the four fiducial models correspond to quite different versions of this scenario, all of them reproduce the measured value of $`\nu =-2.25`$ to well within its $`1\sigma `$ uncertainty once the value of $`n`$ is chosen to match the measured $`\mathrm{\Delta }^2(k_p)`$. However, if the measured value of $`\nu `$ had been substantially different, e.g. implying $`\nu >-2`$ or $`\nu <-2.5`$, then none of these models could have reproduced the measured $`\nu `$ while remaining consistent with the measured $`\mathrm{\Delta }^2(k_p)`$, even allowing changes in $`n`$, $`\mathrm{\Omega }_m`$, $`h`$, $`\mathrm{\Omega }_\nu `$, or $`\mathrm{\Omega }_b`$. A different value of $`\nu `$ would therefore have been a challenge to the inflationary CDM scenario itself rather than to any specific version of it. Note also that any of the models would match the observed $`\nu `$ within its $`1\sigma `$ uncertainty even if we had assumed a scale-invariant, $`n=1`$ inflationary spectrum; it is the $`\mathrm{\Delta }^2(k_p)`$ measurement that requires the departures from $`n=1`$. Because of the long lever arm from COBE to the Ly$`\alpha `$ $`P(k)`$, parameter changes that have a modest effect on $`\nu `$ have a large effect on $`\mathrm{\Delta }^2(k_p)`$.
Figure 2 also shows that changes of the different model parameters have nearly degenerate effects on the predicted values of $`\mathrm{\Delta }^2(k_p)`$ and $`\nu `$. For example, in the $`\mathrm{\Lambda }`$CDM model, increasing $`\mathrm{\Omega }_m`$ by 0.1 would increase the predicted slope and amplitude, but decreasing $`h`$ by 0.05 would almost exactly cancel this change. This near degeneracy allows us to summarize the constraints imposed by COBE and the Ly$`\alpha `$ $`P(k)`$ with simple formulas of the form
$$\mathrm{\Omega }_mh^\alpha n^\beta B^\gamma =k\pm \epsilon ,$$
(7)
where $`k`$ is the value obtained for the best-fit parameter values in Table 1 and the uncertainty $`\epsilon `$ defines the variation that is allowed before the model leaves the 68% confidence contour. Table 2 lists the values of $`\alpha `$, $`\beta `$, $`\gamma `$, $`k`$, and $`\epsilon `$ for all of the fiducial models. Although we do not show $`\mathrm{\Omega }_m`$ changes for the TCDM models in Figure 2, we vary it below $`1.0`$ (adding $`\mathrm{\Omega }_\mathrm{\Lambda }`$ to keep the universe flat) in order to derive the $`\alpha `$, $`\beta `$, and $`\gamma `$ indices, so that in all models their values reflect the importance of a change in $`h`$, $`n`$, or $`B`$ relative to a change in $`\mathrm{\Omega }_m`$.
Equation (7), together with Table 2, is our second principal result, defining the quantitative constraints placed on the parameters of inflationary CDM models by the combination of COBE and the Ly$`\alpha `$ forest $`P(k)`$. The values of the $`\alpha `$, $`\beta `$, and $`\gamma `$ indices reflect the sensitivity of the predicted power spectrum amplitude $`\mathrm{\Delta }^2(k_p)`$ to the model parameters, quantifying the impressions from Figure 2. Again taking $`\mathrm{\Lambda }`$CDM as an example, we see that small variations in $`h`$ and $`n`$ have much greater effect than small variations in $`\mathrm{\Omega }_m`$, and that the suppression of small scale power from increases in $`B`$ is always a modest effect. Models with tensors are much more sensitive to $`n`$ than models without tensors because of the influence of gravity waves on the $`P(k)`$ normalization, as discussed above. Although the index values are derived in all cases by considering small variations about the corresponding fiducial model, the constraint formula (7) remains accurate even for fairly large changes in the cosmological parameters. For example, plugging the TCDM values of $`\mathrm{\Omega }_m`$, $`h`$, $`n`$, $`B`$ into equation (7) with the $`\mathrm{\Lambda }`$CDM values of $`\alpha `$, $`\beta `$, and $`\gamma `$ yields $`k=0.47`$, compared to the value $`k=0.44`$ listed for $`\mathrm{\Lambda }`$CDM in Table 2.
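The constraint of equation (7) is straightforward to apply in code. The sketch below is illustrative only: the index values and the allowed half-width are hypothetical placeholders standing in for the Table 2 entries, which are not reproduced here; only the functional form and the $`\mathrm{\Lambda }`$CDM target $`k=0.44`$ come from the text:

```python
def cobe_lya_combination(omega_m, h, n, b, alpha, beta, gamma):
    """Left-hand side of equation (7): Omega_m * h^alpha * n^beta * B^gamma."""
    return omega_m * h**alpha * n**beta * b**gamma

def satisfies_constraint(value, k, eps, nsigma=1.0):
    """True if 'value' lies within k +/- nsigma * eps."""
    return abs(value - k) <= nsigma * eps

# ALPHA, BETA, GAMMA, EPS are hypothetical placeholders; the real values
# are tabulated in Table 2 of the paper. k = 0.44 for LambdaCDM is quoted
# in the text.
ALPHA, BETA, GAMMA = 2.0, 5.0, -0.2
K_TARGET, EPS = 0.44, 0.05

value = cobe_lya_combination(0.4, 0.65, 0.95, 0.02, ALPHA, BETA, GAMMA)
print(value, satisfies_constraint(value, K_TARGET, EPS))
```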
Figure 3 shows that the effects of parameter changes are less degenerate in the CHDM model. This difference in behavior is not surprising, since neutrino free streaming changes $`P(k)`$ by depressing it at small scales rather than simply shifting or tilting it. The slope $`\nu `$ is therefore much more sensitive to changes in $`\mathrm{\Omega }_\nu `$ than to changes in other parameters. Conversely, the influence of $`h`$ on $`\nu `$ through shifting $`t_{\mathrm{eq}}`$ is nearly cancelled by the effect of $`h`$ on the implied neutrino mass and free streaming length. We still analyze this case as above, adding a factor of $`\mathrm{\Omega }_\nu ^\delta `$ to obtain
$$\mathrm{\Omega }_mh^\alpha n^\beta B^\gamma \mathrm{\Omega }_\nu ^\delta =k\pm \epsilon ,$$
(8)
with parameters also listed in Table 2. However, this equation cannot describe the results of Figure 3 as accurately as equation (7) describes the results of Figure 2.
Recently, McDonald et al. (2000) measured the Ly$`\alpha `$ forest flux power spectrum in a sample of eight Keck HIRES spectra and used it to infer the amplitude and shape of the mass power spectrum. Their mean absorption redshift is $`z\approx 3`$ rather than $`z=2.5`$, and their data best constrain the $`P(k)`$ amplitude at $`k=0.04(\mathrm{km}\mathrm{s}^{-1})^{-1}`$ rather than $`0.008(\mathrm{km}\mathrm{s}^{-1})^{-1}`$. However, assuming gravitational instability and a CDM power spectrum shape, they extrapolate from their result to derive values of $`\nu `$ and $`\mathrm{\Delta }^2`$ that can be directly compared to CWPHK's measurement at $`z=2.5`$, $`k_p=0.008(\mathrm{km}\mathrm{s}^{-1})^{-1}`$, obtaining $`\nu =-2.24\pm 0.10`$ and $`\mathrm{\Delta }^2(k_p)=0.32\pm 0.07`$. Despite the entirely independent data sets and very different modelling procedures, the CWPHK and McDonald et al. (2000) measurements agree almost perfectly in slope and are consistent in amplitude at the $`1\sigma `$ level. We plot the McDonald et al. (2000) measurement as error crosses in Figures 2 and 3. McDonald et al. (2000) note that the small error bar on $`\mathrm{\Delta }^2`$ should be considered preliminary, since they have not fully investigated the sensitivity of their power spectrum normalization procedure to their modelling assumptions.
Clearly none of our qualitative conclusions about inflationary CDM models would change if we were to adopt the McDonald et al. (2000) $`P(k)`$ determination instead of the CWPHK determination. Conveniently, the McDonald et al. (2000) point lies almost exactly on our $`1\sigma `$ error contour, so to a good approximation one can obtain the parameter constraints (7) and (8) implied by the McDonald et al. (2000) measurement by simply replacing the values of $`k`$ in Table 2 by $`k-\epsilon `$.
## 4 Combining with other constraints
We have shown that the combination of COBE and the Ly$`\alpha `$ $`P(k)`$ yields constraints on degenerate combinations of cosmological parameters. To break these degeneracies, we now consider observational constraints from other studies of large scale structure and CMB anisotropies. Analyses of cosmological parameter constraints from multiple observations have been carried out by numerous groups (recent examples include Bahcall et al. (1999); Bridle et al. (1999); Steigman, Hata, & Felten (1999); Novosyadlyj et al. (2000); Wang, Tegmark, & Zaldarriaga (2001)). Our new contribution is to include the Ly$`\alpha `$ $`P(k)`$ as one of the observational constraints (also considered by Novosyadlyj et al. (2000) and Wang, Tegmark, & Zaldarriaga (2001)). We focus our attention on several other constraints that can be cast into a form that complements our results from Section 3: the mass function of galaxy clusters, the mass power spectrum inferred from galaxy peculiar velocities, the shape parameter of the galaxy power spectrum, and a constraint on $`n`$ from CMB anisotropy data. Our discussion in this Section will be more qualitative than our discussion in Section 3, in part because the uncertainties in these constraints are largely systematic, so that a straightforward statistical combination could be misleading.
In each panel of Figures 4 and 5, the heavy solid line shows the locus of $`(\mathrm{\Omega }_m,n)`$ values that yield a simultaneous match to COBE and the CWPHK measurement of the Ly$`\alpha `$ $`P(k)`$. These lines are very close to those implied by equation (7) and Table 2, but since those results are, strictly speaking, expansions about our fiducial model parameters, we compute the best-fit value of $`n`$ exactly for each $`\mathrm{\Omega }_m`$ rather than using equation (7). The $`\pm 1\sigma `$ constraints are shown as the lighter solid lines; these are close to the curves implied by equation (7) and Table 2 with $`k`$ replaced by $`k\pm \epsilon `$. Because the Ly$`\alpha `$ $`P(k)`$ constraint is not very sensitive to $`B`$, we keep $`B`$ fixed at our fiducial value of 0.02 in all cases. We show results for $`h=0.65`$, $`h=0.45`$, and $`h=0.85`$ in the upper, middle, and lower panels of each figure, with flat and open models in the left and right hand columns, respectively. Figure 4 shows models without tensors and Figure 5 models with tensors. For models with tensors, we restrict the parameter space to $`n\le 1`$, since our assumption that $`T/S=7(1-n)`$ only makes sense in this regime. The TCDM models can be considered as the limit of either the flat or open models at $`\mathrm{\Omega }_m=1`$. Note that the McDonald et al. (2000) estimate of the Ly$`\alpha `$ $`P(k)`$ corresponds very closely to our $`1\sigma `$ constraint, so to adopt McDonald et al. (2000) instead of CWPHK one simply follows the lower solid line instead of the middle solid line as the constraint.
For Gaussian initial conditions, the space density of clusters as a function of virial mass constrains a combination of $`\mathrm{\Omega }_m`$ and the mass fluctuation amplitude, since clusters of a given mass can be formed by the collapse of large volumes in a low density universe or smaller volumes in a higher density universe. This constraint can be summarized quite accurately in a formula relating $`\mathrm{\Omega }_m`$ to the rms mass fluctuation $`\sigma _8`$ (White, Efstathiou, & Frenk 1993a). We use the specific version of this formula obtained by Eke, Cole, & Frenk (1996, hereafter ECF) using N-body simulations and the Press & Schechter (1974) approximation:
$$\begin{array}{cc}\sigma _8=(0.52\pm 0.04)\mathrm{\Omega }_m^{-0.46+0.10\mathrm{\Omega }_m}\hfill & \mathrm{\Omega }_\mathrm{\Lambda }=0\hfill \\ \sigma _8=(0.52\pm 0.04)\mathrm{\Omega }_m^{-0.52+0.13\mathrm{\Omega }_m}\hfill & \mathrm{\Omega }_\mathrm{\Lambda }=1-\mathrm{\Omega }_m.\hfill \end{array}$$
(9)
For each value of $`\mathrm{\Omega }_m`$, we find the value of $`\sigma _8`$ required by the cluster mass function from equation (9). Given $`h`$ and $`B=0.02`$, we then find the value of $`n`$ required to produce this value of $`\sigma _8`$ by numerically integrating the CDM power spectrum. This constraint in the $`\mathrm{\Omega }_m`$–$`n`$ plane is shown by the dotted line in each panel of Figures 4 and 5, with an error bar that indicates the 8% uncertainty quoted in equation (9) from ECF.
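The cluster normalization itself is a one-line formula; a minimal sketch (note the negative exponents of the ECF relation, which make the required $`\sigma _8`$ rise as $`\mathrm{\Omega }_m`$ falls):

```python
def sigma8_cluster(omega_m, flat=True):
    """sigma_8 required by the cluster mass function (eq. 9, from
    Eke, Cole, & Frenk 1996). flat=True selects Omega_Lambda = 1 - Omega_m;
    flat=False selects Omega_Lambda = 0. Quoted uncertainty is ~8%."""
    exponent = (-0.52 + 0.13 * omega_m) if flat else (-0.46 + 0.10 * omega_m)
    return 0.52 * omega_m ** exponent

# Both cases reduce to sigma_8 = 0.52 at Omega_m = 1 (the value used in
# the CHDM comparison later in the text), and lower-density universes
# require larger fluctuation amplitudes:
print(sigma8_cluster(1.0))             # 0.52
print(round(sigma8_cluster(0.4), 2))
```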
For a given value of $`\mathrm{\Omega }_m`$, the matter power spectrum can also be estimated from the statistics of galaxy peculiar motions. Freudling et al. (1998) apply a maximum likelihood technique to the SFI peculiar velocity catalog to constrain COBE-normalized, inflationary CDM models for the matter power spectrum, obtaining the constraint
$$\mathrm{\Omega }_mh_{60}^\mu n^\nu =k\pm \epsilon ,$$
(10)
where $`\mu `$, $`\nu `$, $`k`$ and $`\epsilon `$ are dependent on the cosmology and $`h_{60}\equiv h/0.6`$. In a flat, $`\mathrm{\Omega }_\mathrm{\Lambda }=1-\mathrm{\Omega }_m`$ model with no tensor component, ($`\mu `$, $`\nu `$, $`k`$, $`\epsilon `$)=(1.3, 2.0, 0.58, 0.08), while if a tensor component is allowed they become (1.3, 3.9, 0.58, 0.08). For an open, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ model without a tensor component they are (0.9, 1.4, 0.68, 0.07). Freudling et al. (1998) do not consider open, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ cases with a tensor component. For specified $`h`$, equation (10) yields a constraint in the $`\mathrm{\Omega }_m`$–$`n`$ plane, shown by the short-dashed line in the panels of Figures 4 and 5. The associated $`1\sigma `$ error bars are based on the statistical uncertainties $`\epsilon `$ quoted by Freudling et al. (1998) and listed above. For brevity, we will refer to these curves as the velocity power spectrum constraint, though they represent the constraints on the density power spectrum implied by peculiar velocities.
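For fixed $`h`$, equation (10) can be inverted for the central value of $`n`$ preferred by the velocity data; an illustrative sketch using the flat, no-tensor values quoted above, $`(\mu ,\nu ,k,\epsilon )=(1.3,2.0,0.58,0.08)`$:

```python
# Freudling et al. (1998) constraint: Omega_m * h60^mu * n^nu = k +/- eps,
# with the flat, no-tensor values quoted in the text.
MU, NU, K, EPS = 1.3, 2.0, 0.58, 0.08

def n_from_velocities(omega_m, h):
    """Central value of n implied by the velocity power spectrum constraint."""
    h60 = h / 0.6
    return (K / (omega_m * h60**MU)) ** (1.0 / NU)

# The velocity data push toward high amplitudes: a low-density flat model
# needs a blue tilt (n > 1) to satisfy this constraint.
print(round(n_from_velocities(0.4, 0.65), 2))
```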
We do not want to use the amplitude of the galaxy power spectrum as one of our constraints because it can be strongly affected by biased galaxy formation. However, a variety of analytic and numerical arguments (e.g., Coles (1993); Fry & Gaztañaga (1993); Mann, Peacock, & Heavens (1998); Scherrer & Weinberg (1998); Narayanan, Berlind, & Weinberg (2000)) suggest that biased galaxy formation should not alter the shape of the galaxy power spectrum on scales in the linear regime, and on these scales the shape is directly related to the parameters of the inflationary CDM cosmology. We adopt the specific constraint found by Peacock & Dodds (1994) from their combined analysis of a number of galaxy clustering data sets:
$$\mathrm{\Gamma }_{\mathrm{eff}}\equiv \mathrm{\Omega }_mh\mathrm{exp}\left[-\mathrm{\Omega }_b\left(1+\frac{\sqrt{2h}}{\mathrm{\Omega }_m}\right)\right]-0.32\left(\frac{1}{n}-1\right)=0.255\pm 0.017.$$
(11)
For $`n=1`$, $`\mathrm{\Gamma }_{\mathrm{eff}}=\mathrm{\Gamma }`$, where $`\mathrm{\Gamma }`$ is the shape parameter in the conventional parameterization of the inflationary CDM power spectrum (Bardeen et al. (1986); the influence of $`\mathrm{\Omega }_b`$ is discussed by Sugiyama (1995)). While the effects of $`\mathrm{\Gamma }`$ and $`n`$ on the power spectrum shape are different, equation (11) combines them in a way that approximates their nearly degenerate influence over the range of scales currently probed by large scale clustering measurements. For specified $`h`$ and $`B`$, equation (11) becomes a constraint in the $`\mathrm{\Omega }_m`$–$`n`$ plane. We plot this constraint as the long-dashed line and associated error bar in the panels of Figures 4 and 5. We should note, however, that the Peacock & Dodds (1994) error bar may be overoptimistic, since independent estimates of $`\mathrm{\Gamma }_{\mathrm{eff}}`$ often fall outside this range. Eisenstein & Zaldarriaga (2001) have recently re-examined the spatial power spectrum inferred from the APM survey and conclude that the 68% confidence interval of $`\mathrm{\Gamma }`$ (for $`n=1`$) is $`0.19`$–$`0.37`$, much larger than the range implied by equation (11), and Efstathiou & Moody (2000) favor a lower central value ($`\mathrm{\Gamma }\approx 0.12`$, with $`2\sigma `$ range $`0.05\lesssim \mathrm{\Gamma }\lesssim 0.38`$). As older estimates of the galaxy power spectrum are supplanted by results from the 2dF and Sloan galaxy redshift surveys, the $`\mathrm{\Gamma }`$ parameterization itself may become an insufficiently accurate representation of the theoretical predictions (Percival et al. (2001)).
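Equation (11) is likewise simple to evaluate for a given parameter set. A minimal sketch, taking the fiducial $`\mathrm{\Lambda }`$CDM numbers quoted earlier ($`\mathrm{\Omega }_m=0.4`$, $`h=0.65`$, $`B=0.02`$, hence $`\mathrm{\Omega }_b=B/h^2`$) and $`n=1`$ for illustration, with the conventional minus sign in the baryon suppression factor:

```python
import math

def gamma_eff(omega_m, h, omega_b, n):
    """Effective shape parameter of equation (11) (Peacock & Dodds 1994):
    Gamma_eff = Omega_m h exp[-Omega_b (1 + sqrt(2h)/Omega_m)] - 0.32 (1/n - 1)."""
    gamma = omega_m * h * math.exp(-omega_b * (1.0 + math.sqrt(2.0 * h) / omega_m))
    return gamma - 0.32 * (1.0 / n - 1.0)

omega_b = 0.02 / 0.65**2  # from B = Omega_b h^2 = 0.02
print(round(gamma_eff(0.4, 0.65, omega_b, 1.0), 3))
# Tilting the spectrum red (n < 1) lowers Gamma_eff further:
print(gamma_eff(0.4, 0.65, omega_b, 0.9) < gamma_eff(0.4, 0.65, omega_b, 1.0))  # True
```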
A detailed consideration of constraints from smaller scale CMB anisotropy measurements is beyond the scope of this paper, but we do want to draw on limits that smaller scale measurements place on the inflationary index $`n`$. For the no-tensor models, we adopt the “weak prior” constraint $`n=0.96_{-0.09}^{+0.10}`$ of Netterfield et al. (2001), based on data from the BOOMERANG experiment, which we represent by the horizontal dot-dash line and $`1\sigma `$ error bar in Figure 4. Since Netterfield et al. (2001) do not consider models with tensor fluctuations, we take the corresponding constraint for the tensor models in Figure 5 from Wang et al. (2001). Their model space is less restrictive than ours because they do not impose the power-law inflation relation $`T/S=7(1-n)`$, and using CMB data alone they find only a very weak constraint on $`n`$. We therefore adopt their constraint from the combination of CMB and large scale structure data, $`n=0.91_{-0.05}^{+0.07}`$, where we have reduced the 95% confidence range quoted in their table 2 by a factor of two to get a representative $`1\sigma `$ uncertainty.
In Figures 4 and 5, the cluster mass function, velocity power spectrum, and shape parameter constraints tend to be roughly parallel to each other, with the shape parameter following a somewhat different track when tensor fluctuations are important. The shape parameter constraint is usually compatible with the cluster mass function constraint, at least if one allows for the possibility that the error bar in equation (11) is somewhat too small. However, the velocity power spectrum always implies a higher fluctuation amplitude than the cluster mass function, and the two constraints are not consistent within their stated $`1\sigma `$ uncertainties for any combination of $`\mathrm{\Omega }_m`$, $`n`$, and $`h`$. A recent analysis by Silberman et al. (2001) shows that the discrepancy is probably a result of non-linear effects on the velocity power spectrum, and that correcting for these yields results closer to the cluster constraint. We therefore regard the cluster constraint as more reliable, and we retain the velocity power spectrum curve mainly as a reminder of other data that can be brought to bear on these questions.
The Ly$`\alpha `$ $`P(k)`$ curve cuts across the other three constraints, requiring greater change in $`\mathrm{\Omega }_m`$ for a given change in $`n`$. The CMB anisotropy limit on $`n`$ cuts across all of the other constraints. The COBE-DMR measurement is represented implicitly in Figures 4 and 5 through its role in the Ly$`\alpha `$ $`P(k)`$ constraint, the velocity power spectrum constraint, and the CMB anisotropy constraint on $`n`$. The size of the $`1\sigma `$ error bars in these figures, and the probability that at least some of them are underestimated, prevents us from drawing sweeping conclusions. However, Figures 4 and 5 do have a number of suggestive implications if we look for models that lie within the overlapping $`1\sigma `$ uncertainties of the various constraints. Since it is not possible to satisfy the cluster mass function and velocity power spectrum constraints simultaneously within the class of models that we consider, the implications depend strongly on which of these constraints we take to be more reliable. The shape parameter implications are usually intermediate, but significantly closer to those of the cluster mass function.
The combination of the velocity power spectrum and Ly$`\alpha `$ $`P(k)`$ constraints implies a high density universe, with $`\mathrm{\Omega }_m\approx 1`$ preferred and $`\mathrm{\Omega }_m\lesssim 0.6`$ separating the two constraints by more than their $`1\sigma `$ error bars. The Ly$`\alpha `$ $`P(k)`$ constraint rules out the high values of $`n`$ that could otherwise allow low $`\mathrm{\Omega }_m`$ in equation (10). For $`h\gtrsim 0.65`$, intersection of the velocity power spectrum and Ly$`\alpha `$ $`P(k)`$ constraints occurs at $`n\lesssim 0.8`$, incompatible with the CMB anisotropy constraint. However, an $`\mathrm{\Omega }_m\approx 1`$ universe would require a low value of $`h`$ in any case because of the age constraint for globular cluster stars, and this would push the intersection to higher $`n`$. As noted earlier, the velocity power spectrum constraint shown here is probably biased towards high $`\mathrm{\Omega }_m`$ by the non-linear effects described by Silberman et al. (2001).
If we instead adopt the cluster mass function constraint, then consistency with the Ly$`\alpha `$ $`P(k)`$ and COBE requires $`\mathrm{\Omega }_m<1`$. For $`h=0.65`$, the constraints intersect at $`\mathrm{\Omega }_m\approx 0.4`$–$`0.5`$ in flat models and $`\mathrm{\Omega }_m\approx 0.5`$–$`0.6`$ in open models; increasing $`h`$ slightly decreases the preferred $`\mathrm{\Omega }_m`$ and vice versa. This conclusion — that the combination of COBE, the Ly$`\alpha `$ $`P(k)`$, and the cluster mass function implies a low density universe — is the most important and robust result to emerge from this multi-constraint analysis.
At one level, our conclusions about the matter density come as no surprise, since we have already argued, in Weinberg et al. (1999), that consistency between the cluster mass function and the Ly$`\alpha `$ $`P(k)`$ implies $`\mathrm{\Omega }_m`$ in this range independent of the COBE normalization. However, the nature of the argument is subtly different in this case. In Weinberg et al. (1999), we considered matter power spectra of the CDM form parameterized by $`\mathrm{\Gamma }`$ (with $`n=1`$), and by combining the Ly$`\alpha `$ $`P(k)`$ measurement with the cluster constraint (9), we found $`\mathrm{\Omega }_m=0.34+1.3(\mathrm{\Gamma }-0.2)`$ for flat models and $`\mathrm{\Omega }_m=0.46+1.3(\mathrm{\Gamma }-0.2)`$ for open models, with $`1\sigma `$ uncertainties of about $`0.1`$. However, the Ly$`\alpha `$ $`P(k)`$ alone could not rule out the solution of high $`\mathrm{\Omega }_m`$ and high $`\mathrm{\Gamma }`$, so Weinberg et al.'s (1999) conclusion that $`\mathrm{\Omega }_m<1`$ rested crucially on the empirical evidence for $`\mathrm{\Gamma }\approx 0.2`$ from the shape of the galaxy power spectrum. Within the class of CDM models considered here, the combination of COBE and the Ly$`\alpha `$ $`P(k)`$ determines $`n`$, and hence the effective value of $`\mathrm{\Gamma }`$ (eq. 11), once $`\mathrm{\Omega }_m`$, $`h`$, and $`B`$ are specified. Simultaneous consistency between COBE, the Ly$`\alpha `$ $`P(k)`$, and the cluster mass function requires low $`\mathrm{\Omega }_m`$ independent of the galaxy power spectrum shape, thereby strengthening the overall argument for a low density universe, and, by the by, for a matter power spectrum with low $`\mathrm{\Gamma }_{\mathrm{eff}}`$. The lower limit on $`\mathrm{\Omega }_m`$ from this combination of constraints varies with the choice of other parameters, but it never reaches as low as $`\mathrm{\Omega }_m=0.2`$ unless $`h\gtrsim 0.85`$.
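The Weinberg et al. (1999) relations quoted here are simple enough to code directly (a sketch; the quoted $`1\sigma `$ scatter of about 0.1 is not modeled):

```python
def omega_m_from_lya_clusters(gamma, flat=True):
    """Omega_m implied by combining the Ly-alpha P(k) with the cluster
    constraint, for an n = 1 CDM spectrum of shape parameter Gamma
    (Weinberg et al. 1999); 1-sigma uncertainty is about 0.1."""
    intercept = 0.34 if flat else 0.46
    return intercept + 1.3 * (gamma - 0.2)

# At the empirically favored Gamma = 0.2 the relations reduce to the
# quoted intercepts:
print(omega_m_from_lya_clusters(0.2))              # 0.34 (flat)
print(omega_m_from_lya_clusters(0.2, flat=False))  # 0.46 (open)
```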
For all of the models shown in Figures 4 and 5, the Ly$`\alpha `$ $`P(k)`$ and cluster mass function constraints intersect at values of $`n`$ consistent with the CMB anisotropy constraints, provided one takes the $`1\sigma `$ error ranges into account. A factor of two improvement in the precision of the Ly$`\alpha `$ $`P(k)`$ measurement could greatly restrict the range of models compatible with all three constraints, especially if the Ly$`\alpha `$ $`P(k)`$ amplitude is somewhat lower, as McDonald et al. (2000) find.
There are, of course, numerous other constraints on cosmological parameters, and we will briefly consider three of them: the cluster baryon fraction, the location of the first acoustic peak in the CMB power spectrum, and the evidence for accelerating expansion from Type Ia supernovae. (Our focus on $`h=0.65`$ as a fiducial case already reflects our assessment of the most convincing direct estimates of $`H_0`$.) If one assumes that baryons are not overrepresented relative to their universal value within the virial radii of rich clusters, then the combination of the measured gas mass fractions with big bang nucleosynthesis limits on $`\mathrm{\Omega }_b`$ yields an upper limit on $`\mathrm{\Omega }_m`$ (White et al. 1993b). Applying this argument, Evrard (1997) concludes that
$$\mathrm{\Omega }_m\mathrm{\Omega }_b^{-1}h^{-3/2}\le 23.1\;\Rightarrow \;\mathrm{\Omega }_m\le 0.57\left(\frac{B}{0.02}\right)\left(\frac{h}{0.65}\right)^{-1/2},$$
(12)
at the 95% confidence level. From Figures 4 and 5 we see that models matching COBE, the Ly$`\alpha `$ $`P(k)`$, and the cluster mass function are always consistent with this limit โ easily in the case of flat models, sometimes marginally in the case of open models. Models that match the velocity power spectrum instead of the cluster mass function are usually incompatible with this limit, though sometimes only marginally so.
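Since $`B=\mathrm{\Omega }_bh^2`$, the Evrard limit collapses to $`\mathrm{\Omega }_m\le 23.1Bh^{-1/2}`$ under this reading of equation (12); a quick check (an illustrative sketch) that this reproduces the quoted 0.57 for the fiducial $`B=0.02`$, $`h=0.65`$:

```python
def omega_m_baryon_limit(b, h):
    """95% upper limit on Omega_m from the cluster baryon fraction
    (Evrard 1997, eq. 12), rewritten with B = Omega_b h^2 so that
    Omega_m <= 23.1 * B * h^(-1/2)."""
    return 23.1 * b / h**0.5

print(round(omega_m_baryon_limit(0.02, 0.65), 2))  # 0.57, matching the text
```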
The location of the first acoustic peak in the CMB anisotropy spectrum is a strong diagnostic for space curvature (e.g., Doroshkevich, Zeldovich, & Sunyaev (1978); Wilson & Silk (1981); Sugiyama & Gouda (1992); Kamionkowski, Spergel, & Sugiyama (1994); Hu et al. 1997), and recent anisotropy measurements on degree scales favor a geometry that is close to flat (e.g., Miller et al. (1999); Melchiorri et al. (2000); Netterfield et al. (2001); Pryke et al. (2001)). Clearly our flat universe models are compatible with these results, as are the open universe models that match the Ly$`\alpha `$ $`P(k)`$ and the velocity power spectrum (all of which have $`\mathrm{\Omega }_m`$ close to one). The open models that match Ly$`\alpha `$ $`P(k)`$ and the cluster mass function are generally ruled out by the most recent, high precision limits on space curvature. The Type Ia supernova measurements of the cosmic expansion history (Riess et al. (1998); Perlmutter et al. (1999)) add a great deal of discriminatory power, since they constrain a parameter combination that is roughly $`\mathrm{\Omega }_m-\mathrm{\Omega }_\mathrm{\Lambda }`$ instead of $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }`$; Perlmutter et al. (1999) quote $`\mathrm{\Omega }_m-0.75\mathrm{\Omega }_\mathrm{\Lambda }\approx -0.25\pm 0.125`$. All of the open models miss this constraint by many $`\sigma `$, and the flat models matching the Ly$`\alpha `$ $`P(k)`$ and the velocity power spectrum fail because the values of $`\mathrm{\Omega }_m`$ are too high. The combination of COBE, the Ly$`\alpha `$ $`P(k)`$, and the cluster mass function, on the other hand, is compatible with the supernova results for flat models with a cosmological constant, though it favors somewhat higher values of $`\mathrm{\Omega }_m`$.
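Reading the Perlmutter et al. (1999) combination as $`\mathrm{\Omega }_m-0.75\mathrm{\Omega }_\mathrm{\Lambda }\approx -0.25\pm 0.125`$ (their $`0.8\mathrm{\Omega }_m-0.6\mathrm{\Omega }_\mathrm{\Lambda }\approx -0.2\pm 0.1`$ rescaled), the offsets described above can be computed directly (an illustrative sketch):

```python
def sn_offset_sigma(omega_m, omega_lambda):
    """Offset, in units of sigma, of a model from the SN Ia constraint
    Omega_m - 0.75*Omega_Lambda = -0.25 +/- 0.125 (Perlmutter et al. 1999)."""
    return (omega_m - 0.75 * omega_lambda + 0.25) / 0.125

# Flat fiducial LambdaCDM (Omega_m = 0.4, Omega_Lambda = 0.6):
# modestly high, as the text notes.
print(round(sn_offset_sigma(0.4, 0.6), 1))   # 1.6
# Open fiducial OCDM (Omega_m = 0.55, Omega_Lambda = 0): off by many sigma.
print(round(sn_offset_sigma(0.55, 0.0), 1))  # 6.4
```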
We have not carried out a similar multi-constraint analysis for the CHDM model because the formulas (11) and (10) for the shape parameter and velocity power spectrum constraints do not apply to it and the formula (9) for the cluster mass function constraint may be less accurate for non-zero $`\mathrm{\Omega }_\nu `$. However, our fiducial CHDM model, with $`\mathrm{\Omega }_\nu =0.2`$, has $`\sigma _8=0.96`$, with $`n=1.10`$. For $`\mathrm{\Omega }_\nu =0.3`$ we obtain $`\sigma _8=1.15`$ ($`n=1.23`$), for $`\mathrm{\Omega }_\nu =0.1`$ we obtain $`\sigma _8=0.81`$ ($`n=0.96`$), and for the TCDM model, which represents the limiting case of $`\mathrm{\Omega }_\nu =0`$, we obtain $`\sigma _8=0.77`$ ($`n=0.84`$). All of these models are likely to violate the cluster mass function constraint, which according to equation (9) implies $`\sigma _8=0.52\pm 0.04`$ for $`\mathrm{\Omega }_m=1`$. We conclude that COBE-normalized CHDM models with $`\mathrm{\Omega }_m=1`$, $`h\approx 0.5`$ cannot simultaneously match the Ly$`\alpha `$ $`P(k)`$ and the cluster mass function. The Ly$`\alpha `$ $`P(k)`$ strengthens the case against this class of CHDM models by ruling out the low values of $`n`$ that would otherwise allow them to match cluster masses (Ma (1996)). Of course CHDM models with $`\mathrm{\Omega }_m<1`$ can satisfy the observational constraints for appropriate parameter choices, and the general problem of using CMB measurements and the Ly$`\alpha `$ $`P(k)`$ to measure $`\mathrm{\Omega }_\nu `$ is discussed by Croft, Hu, & Davé (1999). However, the possible presence of a neutrino component does not alter our conclusion that COBE, the Ly$`\alpha `$ $`P(k)`$, and the cluster mass function together require a low density universe.
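The size of the conflict can be quantified by setting the quoted $`\sigma _8`$ values against the cluster constraint for $`\mathrm{\Omega }_m=1`$; a quick arithmetic check:

```python
# Cluster mass function requirement for Omega_m = 1 (values quoted above).
cluster_s8, cluster_err = 0.52, 0.04
model_s8 = {"CHDM Omega_nu=0.3": 1.15, "CHDM Omega_nu=0.2": 0.96,
            "CHDM Omega_nu=0.1": 0.81, "TCDM": 0.77}
# Tension in units of the quoted uncertainty on the cluster value.
tension = {name: (s8 - cluster_s8) / cluster_err for name, s8 in model_s8.items()}
for name, t in tension.items():
    print(f"{name}: sigma_8 exceeds the cluster value by {t:.1f} sigma")
```

Even the mildest case (TCDM) overshoots by more than 6 sigma, which is why these models are described as likely to violate the constraint.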
All in all, the CWPHK and McDonald et al. (2000) measurements of the Ly$`\alpha `$ $`P(k)`$ provide additional support for the current "consensus" model of structure formation, $`\mathrm{\Lambda }`$CDM with $`\mathrm{\Omega }_m\approx 0.4`$ and $`h\approx 0.65`$. Moderate improvements in the statistical precision of the constraints considered here could strengthen this support, or they could open fissures of disagreement. Improvements in the near future could also allow some interesting new tests, such as discriminating between models with no tensor fluctuations and models with the $`T/S=7(1-n)`$ contribution predicted by power law inflation.
A detailed consideration of the constraints from CMB anisotropy measurements is a major undertaking in itself, well beyond the scope of this paper. However, to illustrate the interplay between our results and recent CMB experiments, we plot in Figure 6 the predicted CMB power spectra of five of our fiducial models: $`\mathrm{\Lambda }`$CDM, $`\mathrm{\Lambda }`$CDM2, OCDM, TCDM, and CHDM. We computed these power spectra using CMBFAST (Seljak & Zaldarriaga (1996); Zaldarriaga, Seljak, & Bertschinger (1998)), with the cosmological parameter values listed in Table 1. The CHDM model stands out from the rest because matching the Ly$`\alpha `$ $`P(k)`$ requires a high value of $`n`$, which boosts the anisotropy on small scales. The OCDM model also stands out, albeit less dramatically, because the open space geometry shifts the acoustic peaks to smaller angles. The TCDM model lies below the $`\mathrm{\Lambda }`$CDM models because of its larger tilt, which suppresses small scale fluctuations. Figure 6 shows data points taken from the joint analysis of numerous CMB data sets by Wang et al. (2001; see their Table 1). The two $`\mathrm{\Lambda }`$CDM models fit these data points remarkably well, given that the choice of their parameters was not based on small scale CMB data at all. Because the combination of COBE and the Ly$`\alpha `$ $`P(k)`$ implies $`n`$ close to one for both of these models, their predictions are not very different, and the current CMB data do not distinguish between them. However, the TCDM, OCDM, and CHDM models are clearly ruled out, and while we have not attempted to adjust their parameters within the constraints allowed by equations (7) and (8), it appears unlikely that any such adjustment would allow these models to fit the current CMB data.
## 5 Conclusions
The slope of the mass power spectrum inferred by CWPHK from the Ly$`\alpha `$ forest, $`\nu =-2.25\pm 0.18`$ at $`k_p=0.008(\mathrm{km}\mathrm{s}^{-1})^{-1}`$ at $`z=2.5`$, confirms one of the basic predictions of the inflationary CDM scenario: an approximately scale-invariant spectrum of primeval inflationary fluctuations ($`n\approx 1`$) modulated by a transfer function that bends the power spectrum towards $`P(k)\propto k^{n-4}`$ on small scales. If the measured slope of the power spectrum had implied $`\nu >-2`$ or $`\nu <-2.5`$, we would have been unable to reproduce the Ly$`\alpha `$ $`P(k)`$ with any of the models considered here, even allowing wide variations in the cosmological parameters.
Because the amplitude of the COBE-normalized power spectrum on small scales is very sensitive to $`n`$, we are able to match the CWPHK measurement of $`\mathrm{\Delta }^2(k_p)`$ in most of the major variants of the CDM scenario ($`\mathrm{\Lambda }`$CDM, OCDM, TCDM, CHDM) by treating $`n`$ as a free parameter. Within each of these variants, we obtain constraints on the model parameters of the form $`\mathrm{\Omega }_mh^\alpha n^\beta B^\gamma =k\pm \epsilon `$ (eq. 7) or $`\mathrm{\Omega }_mh^\alpha n^\beta B^\gamma \mathrm{\Omega }_\nu ^\delta =k\pm \epsilon `$ (eq. 8), with the parameter values listed in Table 2. These constraints, together with the confirmation of the predicted slope, are the main results to emerge from combining the Ly$`\alpha `$ $`P(k)`$ measurement with the COBE-DMR result.
As shown in Figures 4 and 5, the parameter combination constrained by COBE and the Ly$`\alpha `$ $`P(k)`$ is different from the combinations constrained by other measurements of large scale structure and CMB anisotropy, so joint consideration of these constraints can break some of the degeneracies among the fundamental parameters. If we combine the Ly$`\alpha `$ $`P(k)`$ constraint with the constraint on $`\mathrm{\Omega }_m`$ and $`\sigma _8`$ inferred from the cluster mass function (White et al. 1993a; ECF), then we favor a low density universe, with $`\mathrm{\Omega }_m\approx 0.3`$–$`0.5`$ in flat models and $`\mathrm{\Omega }_m\approx 0.5`$–$`0.6`$ in open models. This combination is also consistent with CMB anisotropy constraints on $`n`$. The open models are inconsistent with the angular location of the first acoustic peak in the CMB power spectrum (Netterfield et al. (2001); Pryke et al. (2001)), and they are strongly inconsistent with Type Ia supernova results, which imply $`\mathrm{\Omega }_m-0.75\mathrm{\Omega }_\mathrm{\Lambda }=-0.25\pm 0.125`$ (Riess et al. (1998); Perlmutter et al. (1999)). The flat models are consistent with both constraints. On the whole, the CWPHK measurement of the Ly$`\alpha `$ $`P(k)`$ supports the consensus in favor of $`\mathrm{\Lambda }`$CDM with $`\mathrm{\Omega }_m\approx 0.4`$, $`h\approx 0.65`$. The contribution of the Ly$`\alpha `$ $`P(k)`$ to this consensus comes both from the slope, which confirms the generic inflationary CDM prediction, and from the amplitude, which has a different dependence on cosmological parameters than any of the other constraints considered here.
There are bright prospects for improvements of this approach in the near future. McDonald et al. (2000) have inferred the mass power spectrum from an independent Ly$`\alpha `$ forest data set using a different analysis method, obtaining a nearly identical slope and an amplitude lower by $`1\sigma `$. We have recently analyzed a much larger data set of high and moderate resolution spectra, using a variant of the Croft et al. (1998, 1999) method, and the improved data yield much higher statistical precision and better tests for systematic effects. Constraints from this new measurement of $`P(k)`$, using the method developed here, are presented in §7 of Croft et al. (2001). Recent measurements of CMB anisotropy have greatly improved the level of precision on small angular scales, and results from the MAP satellite should provide another major advance in the near future. These measurements yield tighter cosmological parameter constraints on their own, but they become substantially more powerful when combined with data that constrain the shape and amplitude of the matter power spectrum. It is evident from Figures 4 and 5 that simply reducing the error bars on $`n`$ and the Ly$`\alpha `$ $`P(k)`$ by a factor of two would already produce interesting new restrictions on the allowable range of models. These restrictions can become very powerful if ongoing studies of cluster masses using galaxy dynamics, X-ray properties, the Sunyaev-Zel'dovich effect, and gravitational lensing confirm the robustness of the cluster mass function constraint. In the slightly longer term, the 2dF and Sloan redshift surveys should produce measurements of the shape of the galaxy power spectrum that shrink the current statistical and systematic uncertainties, so that demanding consistency between the inferred value of $`\mathrm{\Gamma }_{\mathrm{eff}}`$ and other constraints becomes a useful additional test.
At the very least, these developments should lead to a powerful test of the inflationary CDM picture and high-precision determinations of its parameters. If we are lucky, improved measurements will reveal deficiencies of the simplest $`\mathrm{\Lambda }`$CDM models that are hidden within the current uncertainties, and resolving these discrepancies will lead us to a better understanding of the cosmic energy contents and the origin of primordial fluctuations in the hot early universe.
We thank Daniel Eisenstein and Wayne Hu for helpful advice on computing power spectra, Martin White for comments on the manuscript, and Nikolay Gnedin for a prompt and helpful referee's report. This work was supported by NASA Astrophysical Theory Grants NAG5-3111, NAG5-3922, and NAG5-3820, by NASA Long-Term Space Astrophysics Grant NAG5-3525, and by NSF grants AST-9802568, ASC 93-18185, and AST-9803137.
# Competitive Accretion in Clusters and the IMF
## 1 Introduction
One of the most important goals of a general theory of star formation is to explain the origin of the initial mass function (IMF). In order to do this, we need to understand the differences between low-mass and high-mass star formation. A stellar cluster is the natural size-scale on which to investigate these differences, as clusters contain the full mass range of stars. In this paper, we review how competitive accretion in clusters can form the basis of a theory for the IMF. Competitive accretion arises when a group of stars competes for a finite mass reservoir (Zinnecker 1982). If this accretion contributes a large fraction of the final stellar mass, then the competition process determines the overall distribution of stellar masses.
Surveys of star forming regions have found that the majority of pre-main sequence stars are found in clusters (e.g. Lada et al. 1991; Lada, Strom & Myers 1993; see also Clarke, Bonnell & Hillenbrand 2000). The fraction of stars in clusters depends on the molecular cloud considered but generally varies from 50 to $`\gtrsim 90`$ per cent. These clusters contain anywhere from tens to thousands of stars with typical numbers of around a hundred (Lada et al. 1991; Phelps & Lada 1997; Clarke et al. 2000). Cluster radii are generally a few tenths of a parsec such that mean stellar densities are of the order of $`10^3`$ stars/pc<sup>3</sup> (cf. Clarke et al. 2000) with central stellar densities of the larger clusters (e.g. the ONC) being $`\gtrsim 10^4`$ stars/pc<sup>3</sup> (McCaughrean & Stauffer 1994; Hillenbrand & Hartmann 1998; Carpenter et al. 1997).
Furthermore, young clusters are usually associated with massive clumps of molecular gas (Lada 1992). Generally, the mass of the gas in the youngest clusters is larger than that in stars (Lada 1991), with up to 90 % of the cluster mass in the form of gas. Gas can thus play an important role in the dynamics of the clusters and affect the final stellar masses through accretion.
Surveys of the stellar content of young (ages $`\sim 10^6`$ years) clusters (e.g. Hillenbrand 1997) reveal that they contain both low-mass and high-mass stars in the proportions expected from a field-star IMF (Hillenbrand 1997). Furthermore, there is a degree of mass segregation present in the clusters, with the most massive stars generally found in the cluster cores.
## 2 Mass Segregation
Young stellar clusters are commonly found to have their most massive stars in or near the centre (Hillenbrand 1997; Carpenter et al. 1997). This mass segregation is similar to that found in older clusters, but the young dynamical age of these systems offers the chance to test whether the mass segregation is an initial condition or due to the subsequent evolution. We know that two-body relaxation drives a stellar system towards equipartition of kinetic energy and thus towards mass segregation. In gravitational interactions, the massive stars tend to lose some of their kinetic energy to lower-mass stars and thus sink to the centre of the cluster.
Numerical simulations of two-body relaxation have shown that while some degree of mass segregation can occur over the short lifetimes of these young clusters, it is not sufficient to explain the observations (Bonnell & Davies 1998). Thus the observed positions of the massive stars near the centre of clusters like the ONC reflect the initial conditions of the cluster and of massive star formation that occurs preferentially in the centre of rich clusters.
Forming massive stars in the centre of clusters is not straightforward due to the high stellar density. Fragmentation of a star out of the general cloud requires that the Jeans radius, the minimum radius for the fragment to be gravitationally bound,
$$R_J\propto T^{1/2}\rho ^{-1/2},$$
(1)
be less than the stellar separation. This implies that the gas density has to be high, as you would expect at the centre of the cluster potential. The difficulty arises in that the high gas density implies that the fragment mass, being approximately the Jeans mass,
$$M_J\propto T^{3/2}\rho ^{-1/2},$$
(2)
is quite low. Thus, unless the temperature is unreasonably high in the centre of the cluster before fragmentation, the initial stellar mass is quite low. Equation (2) implies that the stars in the centre of the cluster should have the lowest masses, in direct contradiction with the observations. Therefore, we need a better explanation for the origin of massive stars in the centre of clusters.
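The competing scalings of equations (1) and (2) can be illustrated with a short dimensionless sketch (proportionality constants omitted; the factor-of-100 density contrast is an arbitrary illustrative choice):

```python
def jeans_scalings(T_ratio, rho_ratio):
    """Jeans radius and mass relative to a reference clump:
    R_J ~ T^(1/2) * rho^(-1/2),  M_J ~ T^(3/2) * rho^(-1/2)."""
    R_J = T_ratio ** 0.5 * rho_ratio ** -0.5
    M_J = T_ratio ** 1.5 * rho_ratio ** -0.5
    return R_J, M_J

# Same temperature, 100x denser gas (as in a cluster core): both the
# fragment radius and the fragment mass drop by a factor of 10.
print(jeans_scalings(1.0, 100.0))
# Recovering the original fragment mass at that density would require
# raising T by 100**(1/3) ~ 4.6, i.e. an unreasonably hot core.
```

This makes the dilemma in the text concrete: the high central density needed for small Jeans radii simultaneously forces the fragment masses down.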
## 3 The dynamics of accretion in clusters
Young stellar clusters are commonly found to be gas-rich with typically 50 % to 90 % of their total mass in the form of gas (e.g. Lada 1991). This gas can interact with, and be accreted by, the stars as both move in the cluster. If significant accretion occurs, it can affect both the dynamics and the masses of the individual stars (e.g. Larson 1992).
Fragmentation models of multiple systems and of stellar clusters show that the fragmentation is inherently inefficient with a small fraction of the total mass in the initial fragments (e.g. Larson 1978; Boss 1986; Bonnell et al. 1992; Boss 1996; Klessen, Burkert & Bate 1998). The remaining gas is accreted by the fragments on the gas free-fall timescale. This occurs as the gas is self-gravitating and is the dominant mass component. The free-fall timescale is roughly the crossing or dynamical time of a stellar cluster as both are related to the total mass in the cluster which is mostly in the form of gas. Thus the gas is accreted on the same timescale on which the stars move. An exception to this is if the pre-fragmented cluster is highly structured (e.g. Klessen et al. 1998), in which case the initial dynamical time can be significantly longer than the local gas free-fall timescale. Such clusters should have little gas by the time they have relaxed to a quasi-spherical distribution. Thus, for the remainder of this paper, we assume that the initial cluster is approximately spherical and that the gas and stars have similar distributions.
Simulations of accretion in clusters have been performed for clusters of 10 to 100 stars using a combined SPH and N-body code. These simulations found that accretion is a highly non-uniform process where a few stars accrete significantly more than the rest (Bonnell et al. 1997; Bonnell et al. 2000). Individual stars' accretion rates depend primarily on their position in the cluster (see Fig. 1), with those in the centre accreting more gas than those near the outside. Stars near the centre accrete more gas than do others further out due to the effect of the cluster potential which funnels the gas down towards the deepest part of the potential. The accretion rates can also be relatively large when the gas is the dominant component such that the final masses of the more massive stars are due to the accretion process. In contrast, many of the stars do not accrete significant amounts of gas and their final masses are a closer reflection of any initial mass distribution in the cluster due to the fragmentation.
Accretion in stellar clusters naturally leads to both a mass spectrum and mass segregation. Even from initially equal stellar masses, the competitive accretion results in a wide range of masses with the most massive stars located in or near the centre of the cluster. Furthermore, if the initial gas mass-fraction in clusters is generally equal, then larger clusters will produce higher-mass stars and a larger range of stellar masses as the competitive accretion process will have more gas to feed the few stars that accrete the most gas.
## 4 Modelling Competitive Accretion
The accretion process outlined above is termed "competitive accretion" (Zinnecker 1982) as each star competes for the available gas reservoir. In order to investigate the possible mass functions from this process, we need to consider larger clusters, or numbers of clusters, to get statistically significant numbers. This cannot be done by the above simulations as numerical resolution is presently inadequate to follow the accretion process with more than 100 stars. One option is an analytical or semi-analytical approach to competitive accretion which can then be applied to larger clusters (Bonnell et al. 2000).
Gas accretion by a star is given by the general formula
$$\dot{M}_{}\simeq \pi \rho vR_{\mathrm{acc}}^2,$$
(3)
where $`\rho `$ is the gas density, $`v`$ is the relative gas-star velocity and $`R_{\mathrm{acc}}`$ is the accretion radius. In a cluster model, we have the velocities and densities. Thus, in order to parametrise the accretion process, we need a description of the accretion radius $`R_{\mathrm{acc}}`$.
Accretion by a star was first explored by Bondi and Hoyle (Bondi & Hoyle 1944; Bondi 1952) in terms of an isolated star in a uniform, non-self-gravitating medium. In this model, the accretion radius is given by
$$R_{\mathrm{BH}}=\frac{2GM_{}}{v^2+c_s^2},$$
(4)
where $`M_{}`$ is the stellar mass, and $`c_s`$ is the gas sound speed. This approach neglects the self-gravity of the gas, the presence of other stars, the cluster potential and how these affect the accretion.
An alternative to Bondi-Hoyle accretion is to consider a tidal accretion radius where gas can only be accreted onto a star if it is more bound to that star than to the other stars or to the cluster as a whole. Accretion in this context is then similar to the Roche-lobe overflow problem. Taking the Roche-lobe radius to be the accretion radius we have
$$R_{\mathrm{roche}}=0.5\left(\frac{M_{}}{M_{\mathrm{enc}}}\right)^{1/3}R_{},$$
(5)
where $`M_{\mathrm{enc}}`$ is the mass enclosed in the cluster at the star's position $`R_{}`$. This approach is consistent with the tidal effects of the cluster but does not consider whether the gas is bound to the star in terms of its thermal and kinetic energies.
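To see which radius limits the accretion in practice, both can be evaluated for an invented illustrative configuration (the masses, position and velocities below are not taken from the simulations; the Bondi-Hoyle radius is written in its standard form $`2GM_{}/(v^2+c_s^2)`$):

```python
G = 6.674e-11                      # m^3 kg^-1 s^-2
M_SUN, PC, AU = 1.989e30, 3.086e16, 1.496e11

def r_bondi_hoyle(m_star, v, c_s):
    """Standard Bondi-Hoyle accretion radius, 2*G*M/(v^2 + c_s^2)."""
    return 2.0 * G * m_star / (v**2 + c_s**2)

def r_tidal(m_star, m_enc, r_star):
    """Roche-lobe (tidal) accretion radius of eq. (5)."""
    return 0.5 * (m_star / m_enc) ** (1.0 / 3.0) * r_star

# Invented illustrative numbers: a 0.5 M_sun star 0.1 pc from the centre
# of a cluster with 100 M_sun enclosed, moving at 0.5 km/s through gas
# with a 0.2 km/s sound speed.
m_star = 0.5 * M_SUN
r_bh = r_bondi_hoyle(m_star, 0.5e3, 0.2e3)
r_t = r_tidal(m_star, 100.0 * M_SUN, 0.1 * PC)
print(f"R_BH   = {r_bh / AU:.0f} AU")
print(f"R_tide = {r_t / AU:.0f} AU")
# The effective accretion radius is roughly min(R_BH, R_tide); here the
# tidal radius is the smaller of the two and so limits the accretion.
```

For a slowly moving star deep in the cluster potential, as here, the tidal radius is the binding one, which is the regime the text goes on to discuss.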
Support for using such a model comes from studies of accretion in binary systems (Bate 1997, Bate & Bonnell 1997). These studies found that the accretion of cold gas was well represented when the accretion radius was taken to be the Roche-lobe of the individual stars.
The simulations of accretion in clusters were used to test which model best represented the accretion process (Bonnell et al. 2000). Figure 3 shows a comparison of the SPH determined accretion rate versus that estimated from Bondi-Hoyle and from Roche-lobe accretion for a cluster of 30 stars embedded in cold gas. We see that the Bondi-Hoyle accretion is too high early on in the evolution when the SPH accretion rate is low and that overall there is little correspondence between the Bondi-Hoyle accretion rate which is nearly constant and the SPH determined accretion rate. In contrast, the Roche-lobe accretion follows the SPH determined accretion rate from the early low values to the much higher values that occur towards the end of the simulation.
That the Roche-lobe accretion works better when the Bondi-Hoyle accretion gives a higher accretion rate makes sense, as the Bondi-Hoyle radius is then larger than the Roche-lobe radius and the effective accretion radius would be the minimum of the two. In contrast, it is surprising (at first) that the Roche-lobe accretion works better even when the Bondi-Hoyle radius is smaller than the Roche radius. After closer inspection, it is apparent that the star is carrying an envelope of gas with it through the cluster and that this envelope approximately fills the Roche-lobe. This envelope forms while the stars are initially moving subsonically and subsequently acts to damp the relatively high-velocity gas so that it can become bound to the star. Simulations where the stars are initially moving supersonically and are devoid of an envelope are found to be less well modelled by Roche-lobe accretion, in agreement with this interpretation. In general, we expect that all stars will form with circumstellar envelopes and thus the Roche-lobe accretion should be a good estimate of the accretion rates.
## 5 Accretion and the IMF
Using the above formulation for Roche-lobe accretion in a stellar cluster, we can estimate plausible IMFs considering simple cluster models. Hillenbrand & Hartmann (1998) showed that the ONC can be approximated by a King model which has a stellar density profile $`n\propto r^{-2}`$. In this case, the number of stars at a given radius is constant, $`dn\propto dr`$. The gas density is somewhat more tricky but two possible distributions are $`\rho \propto r^{-2}`$, similar to the stellar distribution, or $`\rho \propto r^{-3/2}`$, corresponding to an accretion solution (e.g. Shu 1977; Foster & Chevalier 1993). Considering these two possibilities and that the stellar velocities are in virial equilibrium with the dominant gas distribution, we can calculate $`M_{}(r)`$ and thus an IMF.
In the first case where $`\rho \propto r^{-2}`$, we find that $`M_{}\propto r^{-2}`$ and that the resulting mass function is
$$dn\propto m^{-3/2}dm.$$
(6)
For the second case, where $`\rho \propto r^{-3/2}`$, we find $`M_{}\propto r^{-3/4}`$, and the IMF is then
$$dn\propto m^{-7/3}dm.$$
(7)
In both cases the massive stars are expected to be segregated towards the centre of the cluster.
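The first of these results is easy to verify by Monte Carlo: sampling stars uniformly in radius ($`dn\propto dr`$) and assigning masses that fall off as $`r^{-2}`$ should reproduce a $`dn\propto m^{-3/2}dm`$ mass function. A sketch (the radial range is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
# Case rho ~ r^-2: number of stars uniform in radius (dn ~ dr), with
# stellar mass falling off outward as m ~ r^-2.
r = rng.uniform(0.1, 1.0, 200_000)
m = r ** -2.0

# Fit the slope of dn/dm from a log-log histogram; analytic value: -3/2.
edges = np.geomspace(m.min(), m.max(), 30)
counts, _ = np.histogram(m, bins=edges)
centers = np.sqrt(edges[1:] * edges[:-1])
dm = np.diff(edges)
good = counts > 0
slope = np.polyfit(np.log(centers[good]), np.log(counts[good] / dm[good]), 1)[0]
print(f"fitted slope: {slope:.2f}  (analytic: -1.5)")
```

The fitted slope comes out within a few per cent of the analytic value, and the same exercise with $`m\propto r^{-3/4}`$ recovers the $`-7/3`$ case.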
A plausible evolutionary picture would have the gas density evolving from an initial $`\rho \propto r^{-2}`$ distribution to $`\rho \propto r^{-3/2}`$ as material is depleted due to the accretion and is replaced with gas inflowing from larger radii. As this occurs from the inside out, stars near the centre of the cluster, which are the more massive stars, would have the steeper mass profile. An illustration of the type of mass function that results from this process is shown in figure 4. A limitation of this approach is that it neglects the stellar dynamics of the cluster and assumes a simple prescription for the gas, and thus for the stellar velocities.
An alternative is to follow both the stellar and gas dynamics simultaneously but with a prescription of the accretion to lessen resolution requirements (Bonnell et al. 2000). Figure 5 shows the results of four clusters of 1000 stars each undergoing competitive accretion using the Roche-lobe formalism. All four clusters result in reasonable IMFs with exponents in the range of $`-1.5`$ to $`-2.5`$, where the Salpeter IMF is $`-2.35`$. The four models shown in figure 5 span the range of possible gas temperatures (virialised, cold) and equations of state (isothermal, adiabatic). One major limitation of these models is that they neglect both feedback from the stars and possible turbulence in the gas.
From these models we see that competitive accretion gives reasonable IMFs and fulfills a basic requirement of producing the more massive stars near the centre of the cluster. Unfortunately, there is an added complication in forming very massive stars, $`M_{}\gtrsim 10M_{}`$.
## 6 Formation of Massive Stars
The formation of massive stars is problematic not only for their special location in the cluster centre, but also due to the fact that the radiation pressure from massive stars is sufficient to halt the infall and accretion (Yorke & Krugel 1977; Yorke 1993). This occurs for stars of mass $`\gtrsim 10M_{}`$.
A secondary effect of accretion in clusters is that it can force the cluster core to contract significantly. The added mass increases the binding energy of the cluster while accretion of basically zero-momentum matter will remove kinetic energy. If the core is sufficiently small that its crossing time is relatively short compared to the accretion timescale, then the core, initially at $`n\sim 10^4`$ stars pc<sup>-3</sup>, can contract to the point where, at $`n\sim 10^8`$ stars pc<sup>-3</sup>, stellar collisions are significant (Bonnell, Bate & Zinnecker 1998). Collisions between intermediate mass stars ($`2M_{}\lesssim m\lesssim 10M_{}`$), whose mass has been accumulated through accretion in the cluster core, can then result in the formation of massive ($`m\gtrsim 50M_{}`$) stars. This model for the formation of massive stars predicts that the massive stars have to be significantly younger than the mean stellar age due to the time required for the core to contract (Bonnell et al. 1998).
Preliminary studies of possible IMFs that would result from a merger process (Bailey 1999) show that, plotted as a cumulative distribution to lessen the effects of small statistics, the mass function is compatible with the high-mass IMF of Kroupa, Tout & Gilmore (1990), where $`dn\propto m^{-2.5}dm`$.
## 7 Summary
Competitive accretion in young, gas rich stellar clusters is an appealing mechanism to explain the origin of the IMF. This one simple physical process can explain both the initial mass segregation in stellar clusters and potentially the exact mass-distribution. Stellar dynamical effects such as two-body relaxation are not able to explain the mass segregation found in clusters such as the ONC due to their extreme youth.
The accretion in a cluster environment is found to be better represented by a tidal or Roche-lobe accretion radius than by Bondi-Hoyle accretion. This occurs as the tidal radius determines when the gas is bound to the star compared to the cluster as a whole. Furthermore, this radius represents the maximum extent of any circumstellar envelope, which can act to sweep up the intracluster gas.
Gas accretion in a stellar cluster is highly competitive and uneven. Stars near the centre of the cluster accrete at significantly higher rates due to their position where they are aided in attracting the gas by the overall cluster potential. This competitive accretion naturally results in both a spectrum of stellar masses, and an initial mass segregation even if all the stars originate with equal masses.
Simple analytical models of the cluster and the competitive accretion yield IMFs which range from $`\gamma =-3/2`$ when the gas is assumed to be in the form $`\rho \propto r^{-2}`$ to $`\gamma =-7/3`$ when the gas is in a $`\rho \propto r^{-3/2}`$ distribution as in an accretion flow. These two power-laws could be expected to represent the low-mass stars in the outer part of the cluster and the higher-mass stars in the inner parts of the cluster, respectively, as the accretion flow would grow from the inside-out. Simulations of clusters undergoing the Roche-lobe prescription for accretion produce mass functions which are compatible with these limits.
Finally, massive stars may form through stellar collisions in the centre of dense clusters. The necessary density for collisions would result from the accretion process, which adds mass without significant momentum and thereby forces the core to contract to higher densities. Such a collisional model for the formation of massive stars evades the problem of accreting onto massive stars.
## 1 Introduction
Tidal torques act to establish synchronization between the spin of the non-degenerate companion star and the orbital motion. Whenever the spin angular velocity of the donor is perturbed (by a magnetic stellar wind, or by a change in its moment of inertia due to either expansion or mass loss in response to RLO), the tidal spin-orbit coupling will result in a change in the orbital angular momentum, leading to orbital shrinkage or expansion.
We have performed detailed numerical calculations of the non-conservative evolution of $`\sim 200`$ close binary systems with $`1.0`$–$`5.0M_{}`$ donor stars and a $`1.3M_{}`$ accreting neutron star. Rather than using analytical expressions for simple polytropes, we calculated the thermal response of the donor star to mass loss, using an updated version of Eggleton's numerical computer code, in order to determine the stability and follow the evolution of the mass transfer. We refer to Tauris & Savonije (1999) for a more detailed description of the computer code and the binary interactions considered.
## 2 The orbital angular momentum balance equation
Consider a circular binary (a good approximation, since tidal effects acting on the near-RLO giant star will circularize the orbit on a short timescale of $`\sim 10^4`$ yr; cf. Verbunt & Phinney 1995) with an (accreting) neutron star and a companion (donor) star with mass $`M_{\mathrm{NS}}`$ and $`M_2`$, respectively. The orbital angular momentum is given by: $`J_{\mathrm{orb}}=(M_{\mathrm{NS}}M_2/M)\mathrm{\Omega }a^2`$, where $`M=M_{\mathrm{NS}}+M_2`$ and $`\mathrm{\Omega }=\sqrt{GM/a^3}`$ is the orbital angular velocity. A simple logarithmic differentiation of this equation yields the rate of change in orbital separation:
$$\frac{\dot{a}}{a}=2\frac{\dot{J}_{\mathrm{orb}}}{J_{\mathrm{orb}}}-2\frac{\dot{M}_{\mathrm{NS}}}{M_{\mathrm{NS}}}-2\frac{\dot{M}_2}{M_2}+\frac{\dot{M}_{\mathrm{NS}}+\dot{M}_2}{M}$$
(1)
where the total change in orbital angular momentum can be expressed as:
$$\frac{\dot{J}_{\mathrm{orb}}}{J_{\mathrm{orb}}}=\frac{\dot{J}_{\mathrm{gwr}}}{J_{\mathrm{orb}}}+\frac{\dot{J}_{\mathrm{mb}}}{J_{\mathrm{orb}}}+\frac{\dot{J}_{\mathrm{ls}}}{J_{\mathrm{orb}}}+\frac{\dot{J}_{\mathrm{ml}}}{J_{\mathrm{orb}}}$$
(2)
The first term on the right side of this equation governs the loss of $`J_{\mathrm{orb}}`$ due to gravitational wave radiation (Landau & Lifshitz 1958). The second term arises due to a combination of a magnetic wind of the (low-mass) companion star and a tidal synchronization (locking) of the orbit. This mechanism of exchanging orbital into spin angular momentum is referred to as magnetic braking (see e.g. Verbunt & Zwaan 1981; Rappaport et al. 1983).
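The logarithmic differentiation leading to eq. (1) can be checked symbolically. A sketch with sympy, working with $`J_{\mathrm{orb}}^2`$ to avoid the square root in $`\mathrm{\Omega }`$:

```python
import sympy as sp

t, G = sp.symbols('t G')
M_NS = sp.Function('M_NS')(t)
M_2 = sp.Function('M_2')(t)
a = sp.Function('a')(t)
M = M_NS + M_2

# J_orb^2 = (M_NS*M_2/M)^2 * (G*M/a^3) * a^4 = G * M_NS^2 * M_2^2 * a / M
J2 = G * M_NS**2 * M_2**2 * a / M
Jdot_over_J = sp.diff(J2, t) / (2 * J2)      # d(ln J)/dt = (1/2) d(ln J^2)/dt

# Right-hand side of eq. (1), solved for adot/a:
rhs = (2 * Jdot_over_J - 2 * sp.diff(M_NS, t) / M_NS
       - 2 * sp.diff(M_2, t) / M_2
       + (sp.diff(M_NS, t) + sp.diff(M_2, t)) / M)
residual = sp.simplify(rhs - sp.diff(a, t) / a)
print(residual)   # -> 0
```

The residual vanishing confirms the sign pattern of eq. (1): a term $`+2\dot{J}_{\mathrm{orb}}/J_{\mathrm{orb}}`$, negative mass-derivative terms, and the total-mass term.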
### 2.1 Tidal torque and dissipation rate
The third term in eq.(2) was recently discussed by Tauris & Savonije (1999) and describes possible exchange of angular momentum between the orbit and the donor star due to its expansion or mass loss (note, we have neglected the tidal effects on the gas stream and the accretion disk). For both this term and the magnetic braking term we estimate whether or not the tidal torque is sufficiently strong to keep the donor star synchronized with the orbit. We estimate the tidal torque due to the interaction between the tidally induced flow and the convective motions in the stellar envelope by means of the simple mixing-length model for turbulent viscosity $`\nu =\alpha H_\mathrm{p}V_\mathrm{c}`$, where the mixing-length parameter $`\alpha `$ is adopted to be 2 or 3, $`H_\mathrm{p}`$ is the local pressure scaleheight, and $`V_\mathrm{c}`$ the local characteristic convective velocity. The rate of tidal energy dissipation can be expressed as (Terquem et al. 1998):
$$\frac{\mathrm{d}E}{\mathrm{d}t}=\frac{192\pi }{5}\mathrm{\Omega }^2\int _{R_i}^{R_o}\rho r^2\nu \left[\left(\frac{\xi _r}{r}\right)^2+6\left(\frac{\xi _h}{r}\right)^2\right]\mathrm{d}r$$
(3)
where the integration is over the convective envelope and $`\mathrm{\Omega }`$ is the orbital angular velocity, i.e. we neglect effects of stellar rotation. The radial and horizontal tidal displacements are approximated here by the values for the adiabatic equilibrium tide:
$$\xi _r=fr^2\rho \left(\frac{\mathrm{d}P}{\mathrm{d}r}\right)^{-1}\qquad \xi _h=\frac{1}{6r}\frac{\mathrm{d}(r^2\xi _r)}{\mathrm{d}r}$$
(4)
where for the dominant quadrupole tide ($`l=m=2`$) $`f=GM_2/(4a^3)`$.
The locally dissipated tidal energy is taken into account as an extra energy source in the standard energy balance equation of the star, while the corresponding tidal torque follows as: $`\mathrm{\Gamma }=(1/\mathrm{\Omega })(dE/dt)`$.
The tidal angular-momentum exchange $`dJ=\mathrm{\Gamma }dt`$ between the donor star and the orbit thus calculated during an evolutionary timestep $`dt`$ is taken into account in the angular-momentum balance of the system. If this exchange exceeds the amount required to keep the donor star synchronous with the orbital motion of the compact star, we adopt a smaller tidal angular-momentum exchange (and corresponding tidal dissipation rate) that keeps the donor star exactly synchronous.
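For concreteness, the dissipation integral of eq. (3) and the torque $`\mathrm{\Gamma }=(1/\mathrm{\Omega })(dE/dt)`$ can be evaluated by simple trapezoidal quadrature. The profiles below are arbitrary toy inputs chosen so the integral is known analytically, not a stellar model:

```python
import math

def tidal_dissipation(r, rho, nu, xi_r, xi_h, omega):
    """Trapezoidal evaluation of eq. (3) over the convective envelope.

    r, rho, nu, xi_r, xi_h are equal-length lists sampled on the radial grid;
    omega is the orbital angular velocity.  Returns (dE/dt, torque).
    """
    integrand = [rho[i] * r[i]**2 * nu[i]
                 * ((xi_r[i] / r[i])**2 + 6.0 * (xi_h[i] / r[i])**2)
                 for i in range(len(r))]
    integral = sum(0.5 * (integrand[i] + integrand[i + 1]) * (r[i + 1] - r[i])
                   for i in range(len(r) - 1))
    de_dt = (192.0 * math.pi / 5.0) * omega**2 * integral
    return de_dt, de_dt / omega   # torque  Gamma = (1/Omega) dE/dt

# Toy profile on r in [1, 2] with rho = nu = 1 and xi_r = xi_h = r,
# for which the integral is exactly 7*(2**3 - 1**3)/3 = 49/3.
n = 1001
r = [1.0 + i / (n - 1) for i in range(n)]
ones = [1.0] * n
de_dt, torque = tidal_dissipation(r, ones, ones, r, r, omega=1.0)
```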
### 2.2 Super-Eddington accretion and isotropic re-emission
The last term in eq.(2) is the dominant contribution and is caused by loss of mass from the system (see e.g. van den Heuvel 1994; Soberman et al. 1997). We have adopted the “isotropic re-emission” model, in which all of the matter flows over, in a conservative way, from the donor star to an accretion disk in the vicinity of the neutron star, and then a fraction $`\beta `$ of this material is ejected isotropically from the system with the specific orbital angular momentum of the neutron star. If the mass-transfer rate exceeds the Eddington accretion limit for the neutron star, then $`\beta >0`$. In our calculations we assumed $`\beta =\mathrm{max}[0,\mathrm{\hspace{0.33em}}1-\dot{M}_{\mathrm{Edd}}/\dot{M}_2]`$ and $`\dot{M}_{\mathrm{Edd}}=1.5\times 10^{-8}M_{\odot }`$ yr<sup>-1</sup>.
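The adopted prescription for the ejected fraction is a one-liner; the sketch below (our illustration, with rates as magnitudes in solar masses per year and example numbers of our own choosing) just evaluates it:

```python
def ejected_fraction(mdot_transfer, mdot_edd=1.5e-8):
    """beta = max[0, 1 - Mdot_Edd/Mdot_2] for the isotropic re-emission model.

    mdot_transfer is the magnitude of the mass-transfer rate from the donor
    (Msun/yr); anything above the Eddington limit is re-ejected.
    """
    return max(0.0, 1.0 - mdot_edd / mdot_transfer)

beta_super = ejected_fraction(1.5e-7)   # ten times Eddington -> beta = 0.9
beta_sub = ejected_fraction(1.0e-8)     # sub-Eddington       -> beta = 0
```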
## 3 Evolution neglecting spin-orbit couplings
Assuming $`\dot{J}_{\mathrm{gwr}}=\dot{J}_{\mathrm{mb}}=\dot{J}_{\mathrm{ls}}=0`$ and $`\dot{J}_{\mathrm{ml}}/J_{\mathrm{orb}}=\beta q^2\dot{M}_2/(M_2(1+q))`$ one easily obtains analytical solutions to eq.(1).
In Fig. 1 we have plotted
$$\frac{\partial \mathrm{ln}(a)}{\partial \mathrm{ln}(q)}=-2-\frac{q}{q+1}-q\frac{3\beta -5}{q(1-\beta )+1}$$
(5)
as a function of the mass ratio $`q=M_2/M_{\mathrm{NS}}`$. The sign of this quantity is important since it tells whether the orbit expands or contracts in response to mass transfer (note, $`\dot{q}<0`$). We notice that the orbit always expands when $`q<1`$ and it always decreases when $`q>1.28`$ \[solving $`\partial \mathrm{ln}(a)/\partial \mathrm{ln}(q)=0`$ for $`\beta =1`$ yields $`q=(1+\sqrt{17})/4\approx 1.28`$\]. If $`\beta >0`$ the orbit can still expand for $`1<q\le 1.28`$. Note, $`\partial \mathrm{ln}(a)/\partial \mathrm{ln}(q)=2/5`$ at $`q=3/2`$ independent of $`\beta `$.
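The quoted properties of this logarithmic derivative are easy to verify numerically. The sketch below is our own check, with the signs written so that the limits quoted in the text hold; it reproduces the root at $`q=(1+\sqrt{17})/4`$ for $`\beta =1`$ and the $`\beta `$-independent value 2/5 at $`q=3/2`$:

```python
import math

def dln_a_dln_q(q, beta):
    """d ln(a)/d ln(q) for the isotropic re-emission model, signs explicit."""
    return -2.0 - q / (q + 1.0) - q * (3.0 * beta - 5.0) / (q * (1.0 - beta) + 1.0)

q_root = (1.0 + math.sqrt(17.0)) / 4.0          # ~1.28, zero point for beta = 1
check_root = dln_a_dln_q(q_root, beta=1.0)       # ~0
check_32 = [dln_a_dln_q(1.5, b) for b in (0.0, 0.5, 1.0)]  # all equal 2/5
```

With $`\dot{q}<0`$ during mass transfer, a negative derivative (e.g. at $`q=0.5`$) means the orbit widens and a positive one (e.g. at $`q=2`$, $`\beta =1`$) means it shrinks, as stated above.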
## 4 Results including tidal spin-orbit couplings
In Fig. 2 we have plotted the orbital evolution of an X-ray binary. The solid lines show the evolution including tidal spin-orbit interactions and the dashed lines show the calculations without these interactions. In all cases the orbit always decreases initially as a result of the large initial mass ratio ($`q=4.0/1.3\approx 3.1`$). But when the tidal interactions are included, the effect of pumping angular momentum into the orbit (at the expense of spin angular momentum) is clearly seen. The tidal locking of the orbit acts to convert spin angular momentum into orbital angular momentum, causing the orbit to widen (or shrink less) in response to mass transfer/loss. The related so-called Pratt & Strittmatter (1976) mechanism has previously been discussed in the literature (e.g. Savonije 1978). When spin-orbit interactions are included, many binaries will survive an evolution which may otherwise end up in an unstable common envelope and spiral-in phase. An example of this is seen in Fig. 2, where the binary with initial $`P_{\mathrm{orb}}=2.5`$ days (solid line) only survives as a result of the spin-orbit couplings. The dashed line terminating at $`M_2\approx 3.0M_{\odot }`$ indicates the onset of a run-away mass-transfer process ($`\dot{M}_2>10^{-3}M_{\odot }`$ yr<sup>-1</sup>) and formation of a common envelope and possible collapse of the neutron star into a black hole. In fact, many of the systems with $`2.0<M_2/M_{\odot }<5.0`$ recently studied by Tauris, van den Heuvel & Savonije (2000) would not have survived the extreme mass-transfer phase if the spin-orbit couplings had been neglected.
The locations of the minimum orbital separations in Fig. 2 are marked by arrows in the case of $`P_{\mathrm{orb}}=8.0`$ days. Since the mass-transfer rates in such an intermediate-mass X-ray binary are shown to be highly super-Eddington (Tauris, van den Heuvel & Savonije 2000), we have $`\beta \approx 1`$. Hence, in the case of neglecting the tidal interactions (dashed line), we expect to find the minimum separation when $`q=1.28`$ (cf. Section 3). Since the neutron star at this stage has accreted only $`10^{-4}M_{\odot }`$, we find that the minimum orbital separation is reached when $`M_2=1.28\times 1.30M_{\odot }=1.66M_{\odot }`$. Including tidal interactions (solid line) results in an earlier spiral-out in the evolution, and the orbit is seen to widen when $`M_2\approx 1.92M_{\odot }`$ ($`q\approx 1.48`$).
### 4.1 Low-mass donors and pre-RLO orbital evolution
For low-mass ($`\lesssim 1.5M_{\odot }`$) donor stars there are two important consequences of the spin-orbit interactions which result in a reduction of the orbital separation: magnetic braking and expansion of the (sub)giant companion star. In the latter case the conversion of orbital angular momentum into spin angular momentum is caused by the reduced rotation rate of the expanding donor. However, in evolved stars there is a significant wind mass loss (Reimers 1975) which will cause the orbit to widen, and hence there is a competition between this effect and the tidal spin-orbit interactions in determining the orbital evolution prior to the RLO-phase. This is demonstrated in Fig. 3.
We assumed $`\dot{M}_{2\mathrm{wind}}=-4\times 10^{-13}\eta _{\mathrm{RW}}LR_2/M_2\mathrm{\hspace{0.33em}}M_{\odot }\text{ yr}^{-1}`$ where the mass, radius and luminosity are in solar units and $`\eta _{\mathrm{RW}}`$ is the mass-loss parameter. It is seen that only for binaries with $`P_{\mathrm{orb}}^{\mathrm{ZAMS}}>100`$ days will the wind mass loss be efficient enough to widen the orbit. For shorter periods the effects of the spin-orbit interactions dominate (caused by expansion of the donor) and loss of orbital angular momentum causes the orbit to shrink. This result is very important e.g. for population synthesis studies of the formation of millisecond pulsars, since $`P_{\mathrm{orb}}`$ in some cases will decrease significantly prior to RLO. As an example, a system with $`M_2=1.0M_{\odot }`$, $`M_{\mathrm{NS}}=1.3M_{\odot }`$ and $`P_{\mathrm{orb}}^{\mathrm{ZAMS}}=3.0`$ days will only have $`P_{\mathrm{orb}}^{\mathrm{RLO}}=1.0`$ days at the onset of the RLO.
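The competition described above can be explored by evaluating the Reimers prescription directly. The sketch below is our own illustration in solar units; the value of $`\eta _{\mathrm{RW}}`$ and the stellar parameters are assumed for the example, not taken from the paper:

```python
def reimers_wind(luminosity, radius, mass, eta_rw=0.5):
    """Reimers (1975) wind mass-loss rate in Msun/yr (negative = mass loss).

    luminosity, radius and mass are in solar units; eta_rw is the mass-loss
    parameter (0.5 here is just an illustrative choice).
    """
    return -4.0e-13 * eta_rw * luminosity * radius / mass

# A ~1 Msun star ascending the giant branch: the wind strengthens rapidly
# with L and R, which is why only wide (long-period) systems are affected.
early_rgb = reimers_wind(luminosity=30.0, radius=10.0, mass=1.0)
tip_rgb = reimers_wind(luminosity=2000.0, radius=150.0, mass=0.8)
```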
Near-threshold $`\eta `$ production in the
$`pd\rightarrow pd\eta `$ reaction
F. Hibou<sup> 1</sup>, C. Wilkin<sup> 2</sup>, A.M. Bergdolt<sup> 1</sup>, G. Bergdolt<sup> 1</sup>, O. Bing<sup> 1</sup>, M. Boivin<sup> 3</sup>, A. Bouchakour<sup> 1</sup>, F. Brochard<sup> 3,4</sup>, M.P. Combes-Comets<sup> 5</sup>, P. Courtat<sup> 5</sup>, R. Gacougnolle<sup> 5</sup>, Y. Le Bornec<sup> 5</sup>, A. Moalem<sup> 6</sup>, F. Plouin<sup> 3,4</sup>, F. Reide<sup> 5</sup>, B. Tatischeff<sup> 5</sup>, N. Willis<sup> 5</sup>.
<sup>1</sup> Institut de Recherches Subatomiques, IN2P3-CNRS/Université Louis Pasteur,
<sup>1</sup> B.P. 28, F-67037 Strasbourg Cedex 2, France
<sup>2</sup> University College London, London WC1E 6BT, United Kingdom
<sup>3</sup> Laboratoire National Saturne, F-91191 Gif-sur-Yvette Cedex, France
<sup>4</sup> LPNHE, Ecole Polytechnique, F-91128 Palaiseau, France
<sup>5</sup> Institut de Physique Nucléaire, IN2P3-CNRS/Université Paris-Sud,
<sup>5</sup> F-91406 Orsay Cedex, France
<sup>6</sup> Physics Department, Ben Gurion University, 84105 Beer Sheva, Israel
## Abstract
The total cross section of the $`pd\rightarrow pd\eta `$ reaction has been measured at two energies near threshold by detecting the final proton and deuteron in a magnetic spectrometer. The values are somewhat larger than expected on the basis of two simple theoretical estimates.
PACS: 13.60.Le, 25.10.+s, 25.40.Ve
Keywords: $`\eta `$ mesons, production threshold, two-step processes.
Corresponding author:
Colin Wilkin,
Department of Physics and Astronomy,
University College London,
Gower Street, London WC1E 6BT, UK.
E-mail: cw@hep.ucl.ac.uk
The production of $`\eta `$ mesons in the $`pd\rightarrow {}^3`$He$`\eta `$ reaction near threshold is remarkable for both its strength and its energy dependence. The threshold amplitude is of a similar size to that for pion production, despite the much larger momentum transfers associated with $`\eta `$ formation. Although the angular distribution remains isotropic, suggesting $`S`$-wave production, the square of the amplitude falls by a factor of three over a 5 MeV change in the c.m. excess energy $`Q`$. This has been taken as indicative of a nearby quasi-bound state of the $`\eta ^3`$He system, arising through strong $`\eta `$ multiple scatterings from the three nucleons in the recoiling nucleus.
In order to transfer such large momenta, Kilian and Nann suggested two-step processes involving intermediate pions. They showed that the threshold kinematics for $`pd\rightarrow {}^3`$He$`\eta `$ were in a sense magic. The momentum of the $`\eta `$ produced in the reaction is very similar to that obtained from the sequential physical processes of $`pp\rightarrow d\pi ^+`$ followed by $`\pi ^+n\rightarrow p\eta `$, when there is no relative momentum between the final $`pd`$ pair and all Fermi momenta are neglected. In such cases the final proton and deuteron are likely to stick to form the observed <sup>3</sup>He nucleus. The classical estimate of the enhancement due to the magic kinematics is broadly confirmed by quantum mechanical calculations, which reproduce the size of the near-threshold cross section to within about a factor of two.
The same two-step model should also be capable of explaining events where the final proton and deuteron emerge freely in the $`pd\rightarrow pd\eta `$ reaction, and the aim of the present investigation was to undertake a first exploration of this cross section near the threshold beam energy of $`T_p=901.2`$ MeV. Unfortunately, due to the closure of the laboratory, data could only be obtained at two beam energies.
The experiment was carried out at the Laboratoire National SATURNE (LNS), using the large acceptance magnetic spectrometer SPESIII, which was well adapted for the study of meson production in three-body final states near threshold through the detection of two charged particles. The experimental conditions concerning the beam monitoring, particle detection and identification were rather similar to those of previous studies of meson production in the $`pp\rightarrow ppX`$ reaction, where the meson $`X`$ was identified by the missing-mass method. A liquid deuterium target of 207 mg/cm<sup>2</sup> thickness was employed and, in order to improve the missing-mass resolution and the signal-to-background ratio, the opening of the vertical collimators of SPESIII was reduced to $`\pm 40`$ mr.
One special feature of the $`pd\rightarrow pd\eta `$ reaction near threshold is the rather low momentum, around 400-500 MeV/c, of the outgoing protons. This is to be compared with the standard 600-1400 MeV/c momentum range of the SPESIII spectrometer. The momentum of the recoiling deuterons is about 900 MeV/c and, in order to detect both particles simultaneously, the magnetic field was tuned down to accept momenta from about 360 to 960 MeV/c. Under normal SPESIII working conditions, the values of the particle momenta were obtained by using well established polynomial relations, taking the coordinates of the trajectories near the focal surface as input. The properties of SPESIII were not extensively studied with reduced fields and, in the present experiment, we used the polynomial parametrisation with the momenta of the particles scaled according to the ratio of the actual to the standard mean field, (2.03 Tesla)/(3.07 Tesla). This procedure essentially assumes that the field was reduced uniformly. A similar method was applied in the simulations, applying the same ratio to the momenta when tracking the particles. Such simulations are important for generating the expected missing-mass peak of the $`pd\rightarrow pd\eta `$ reaction as well as the background spectrum of the $`pd\rightarrow pd\mathrm{\hspace{0.17em}}2\pi `$ reaction.
Two-dimensional experimental and simulated scatter plots of the emerging proton and deuteron momenta are shown in Fig. 1. Superimposed upon a fairly uniform background, due to the $`pd\rightarrow pd\mathrm{\hspace{0.17em}}2\pi `$ reaction, there is a darker ellipse inside which the $`pd\rightarrow pd\eta `$ events are confined. The experimental missing-mass spectra of the $`pd\rightarrow pdX`$ reaction are shown in Figs. 2a1 and 2a2. Clear $`\eta `$ peaks are observed near the upper edges of phase space for the two nominal proton beam energies of 905 and 909 MeV. Simulated background spectra of the $`pd\rightarrow pd\mathrm{\hspace{0.17em}}2\pi `$ reaction are shown in Figs. 2b1 and 2b2 and simulated peaks of the $`pd\rightarrow pd\eta `$ reaction are represented by the solid lines in Figs. 2c1 and 2c2. To evaluate the number of $`pd\rightarrow pd\eta `$ events, the two simulated spectra were combined so as to fit the experimental data. After subtracting the simulated background from the experimental results, the remaining events (points with error bars) in Figs. 2c1 and 2c2 show good agreement with the simulated $`pd\rightarrow pd\eta `$ spectra (solid line).
Taking the mass of the $`\eta `$ meson to be 547.30 MeV/c<sup>2</sup>, the best fits were obtained by assuming incident proton energies which were 1-2 MeV lower than the nominal values derived from the Saturne machine parameters. The fits also suggested adjusting the mean field ratio to a value slightly below that of the initial 2.03/3.07 ansatz. However, due to the uncertainties in the effective field strength and the particle tracking, no definitive accurate values of the beam energies could be deduced from the fitting procedure. But, since the shift indicated here is very similar to the mean difference $`\mathrm{\Delta }T=T_{\text{nominal}}-T_{\text{measured}}=1.1\pm 1.0`$ MeV obtained at Saturne from other meson-production reactions near threshold, we adopt this energy correction $`\mathrm{\Delta }T`$, to be subtracted from the nominal values to obtain the “true” ones. The average values of the excess energies $`Q=\sqrt{(m_p+m_d)^2+2m_dT_p}-m_p-m_d-m_\eta `$, where the $`m_i`$ are particle masses, were determined using the corrected proton energies and taking into account energy losses in the target. These energies were used in the simulations required to evaluate the SPESIII acceptances, which were estimated assuming phase-space distributions of final particles. The acceptance decreases very quickly above threshold through one of the final particles falling outside the solid angle of SPESIII. Nevertheless, from the resulting angular dependence of the acceptance shown in Fig. 3 as a function of the cosine of the $`\eta `$ emission angle in the centre-of-mass system, it can be seen that all the regions of phase space were covered. The resulting overall acceptances of 24% and 7%, evaluated at the mid-target energies of $`T_p=903.7`$ and 907.7 MeV respectively, lead to the cross sections given in Table 1.
The first error on the cross sections given in the table includes the statistical error (10% and 18% respectively) and a 13% systematic error on the absolute normalisation. The $`\pm 0.6`$ MeV uncertainty in $`Q`$ gives rise to the additional quoted errors through the rapid acceptance variation. However, the latter errors have little effect on the comparison with theory since, if both values of $`Q`$ are increased by $`0.6`$ MeV, the experimental points move largely in the directions given by the theoretical curves.
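The excess-energy relation used above is easy to check. The sketch below is our own verification, using approximate proton and deuteron masses in MeV together with the $`\eta `$ mass of 547.30 MeV/c<sup>2</sup> quoted in the text; it reproduces $`Q\approx 0`$ at the quoted threshold beam energy:

```python
import math

M_P, M_D, M_ETA = 938.272, 1875.613, 547.30   # masses in MeV/c^2 (approximate)

def excess_energy(t_p):
    """c.m. excess energy Q = sqrt((m_p+m_d)^2 + 2 m_d T_p) - m_p - m_d - m_eta,
    for proton kinetic beam energy t_p in MeV."""
    return math.sqrt((M_P + M_D)**2 + 2.0 * M_D * t_p) - M_P - M_D - M_ETA

q_threshold = excess_energy(901.2)   # close to zero by construction
q_low = excess_energy(903.7)         # lower mid-target energy, Q ~ 1.5 MeV
q_high = excess_energy(907.7)        # upper mid-target energy, Q ~ 3.8 MeV
```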
Estimates have been made of the $`pd\rightarrow pd\eta `$ total cross section near threshold in the quantum two-step model of Ref. , though neglecting all final state interactions. The energy variation of
$$\sigma _T(pd\rightarrow pd\eta )=1.2Q^2\text{ nb}$$
(1)
is compatible with that of our data shown in Fig. 4, where $`Q`$ is measured in MeV. The predicted values of 2.7 and 17 nb, at $`Q=1.5`$ and 3.8 MeV respectively, are only about a factor of two smaller than our results, and this discrepancy could be due to the neglect of the strong final state interaction between the proton and deuteron.
There are two possible $`S`$-wave proton-deuteron final states, corresponding to spin $`\frac{1}{2}`$ and $`\frac{3}{2}`$. The low energy spin-quartet scattering wave functions show little structure at short distances, whereas the spin-doublet ones bear some similarity to the shape of the $`pd`$ distribution at short distances inside the bound <sup>3</sup>He nucleus.
The connection between the bound and scattering $`pd`$ wave functions can be exploited to estimate the production of $`\eta `$ mesons in the $`pd\rightarrow pd\eta `$ reaction, with the final $`pd`$ system in the spin-$`\frac{1}{2}`$ state, in terms of the cross section for $`pd\rightarrow {}^3\text{He}\eta `$. Provided that the structure of the deuteron is neglected, the relative normalisation of the bound and scattering wave functions at short distances is fixed purely by the proton-deuteron binding energy, $`\epsilon \approx 5.5`$ MeV, in the <sup>3</sup>He nucleus. If the meson production operator is also of short range, the production in the two- and three-body final states should be related through
$$\sigma _T(pd\rightarrow pd\eta )=\frac{1}{4}\left(\frac{Q}{\epsilon }\right)^{3/2}\left(1+\sqrt{1+Q/\epsilon }\right)^{-2}\times \sigma _T(pd\rightarrow {}^3\text{He}\eta ).$$
(2)
This approach reproduces about $`\frac{2}{3}`$ of the $`pd\rightarrow pd\pi ^0`$ total cross section in terms of that for $`pd\rightarrow {}^3\text{He}\pi ^0`$ and the residue could be due to spin-$`\frac{3}{2}`$ final states.
The very precise near-threshold $`pd\rightarrow {}^3`$He$`\eta `$ total cross section data may be parametrised as
$$\sigma _T(pd\rightarrow {}^3\text{He}\eta )=\left(\frac{p_\eta }{p_p}\right)\frac{22}{(1+1.6p_\eta )^2+(3.8p_\eta )^2}\mu \text{b},$$
(3)
where the $`\eta `$ and proton c.m. momenta $`p_\eta `$ and $`p_p`$ are measured in fm<sup>-1</sup>. This parametrisation is shown in Fig. 4.
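Under the short-range assumption, eqs. (2) and (3) combine into a simple numerical estimate. The sketch below is our reading of those formulas (with the suppression bracket entering to the power −2 and momenta in fm<sup>-1</sup>); the momentum values near $`Q=1.5`$ MeV are rough kinematic estimates of ours, purely illustrative:

```python
import math

def sigma_3he_eta(p_eta, p_p):
    """Eq. (3): sigma_T(pd -> 3He eta) in microbarn; momenta in fm^-1."""
    return (p_eta / p_p) * 22.0 / ((1.0 + 1.6 * p_eta)**2 + (3.8 * p_eta)**2)

def sigma_pd_eta(q, sigma_bound, eps=5.5):
    """Eq. (2): sigma_T(pd -> pd eta) from the bound-state cross section;
    excess energy q and pd binding energy eps in MeV."""
    return 0.25 * (q / eps)**1.5 / (1.0 + math.sqrt(1.0 + q / eps))**2 * sigma_bound

# Near Q = 1.5 MeV: p_eta ~ 0.19 fm^-1 and p_p ~ 4.5 fm^-1 (rough values).
s_bound = sigma_3he_eta(0.188, 4.48)          # ~0.42 microbarn
s_free = sigma_pd_eta(1.5, s_bound) * 1e3     # converted to nb, a few nb
```

The few-nb result sits about a factor of three below the measured points, consistent with the comparison discussed in the text.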
The predictions of Eq. (2) for the $`pd\rightarrow pd\eta `$ total cross sections are about a factor of three lower than our experimental results shown in Fig. 4. This may be due to the short-range assumption for the meson-production operator made in deriving Eq. (2). In the two-step model of ref. , the momentum transfer is provided through having a secondary interaction with an intermediate pion. Since high Fermi momenta are then not required, this means that the final $`pd`$ system is not necessarily produced at short distances. However, at larger distances the scattering wave functions are generally bigger than the bound state ones, which must die off exponentially. It would therefore be very desirable to have a microscopic two-step model calculation of the type of ref. but with the proton-deuteron final state interaction included.
We have made the first measurements of the $`pd\rightarrow pd\eta `$ reaction near threshold and obtained cross sections about a factor of 2-3 higher than those of two simple theoretical approaches. Since, for our data, $`Q`$ is less than the <sup>3</sup>He binding energy $`\epsilon `$, the difference in the energy dependence of the two present models comes principally from the striking behaviour of the $`pd\rightarrow {}^3\text{He}\eta `$ cross section. More detailed experiments, with a much better resolution in $`Q`$, are required to see if there is in fact a strong $`\eta `$ final state interaction in the $`pd\eta `$ system to match that in $`{}_{}{}^{3}\text{He}\eta `$.
We wish to thank the Saturne accelerator crew and support staff for providing us with working conditions which led to the present results. Discussions with U. Tengblad regarding Ref. were very useful. |
# The Deuterium Abundance in QSO Absorption Systems : A Mesoturbulent Approach<sup>1</sup><sup>1</sup>1Based on data obtained at the W. M. Keck Observatory, which is jointly operated by the University of California, the California Institute of Technology, and the National Aeronautics and Space Administration.
## 1. Introduction
The cosmological significance of the deuterium abundance measurements in metal-deficient QSO absorption systems has been widely discussed in the literature (see e.g. the review by Lemoine et al. 1999). Practical applications of such measurements were clearly outlined in Tytler & Burles (1997) : (i) the primordial D/H value gives the density of baryons $`\mathrm{\Omega }_\mathrm{b}`$ at the time of big bang nucleosynthesis (BBN); a precise value of $`\mathrm{\Omega }_\mathrm{b}`$ might then be used (ii) to determine the fraction of baryons which are missing, (iii) to specify Galactic chemical evolution, and (iv) to test models of high energy physics. A measurement of the D/H ratio, together with the other three light element abundances (<sup>3</sup>He/H, <sup>4</sup>He/H, and <sup>7</sup>Li/H), provides the complete test of the standard BBN model.
Deuterium has been reported up to now in a few QSO Lyman limit systems (LLS), i.e. the systems with neutral hydrogen column densities of $`N_{\mathrm{H}\mathrm{i}}\approx 10^{17}-10^{18}`$ cm<sup>-2</sup> (Kirkman et al. 1999). The difficulties inherent to measurements of D/H in QSO spectra are mainly caused by the confusion between the D i line (which is always partially blended with the blue wing of the saturated hydrogen line) and the numerous neighboring weak lines of H i observed in the Ly$`\alpha `$ forest at redshifts $`z>2`$ (Burles & Tytler 1998).
Currently, there are two methods to analyse the absorption spectra : (i) a conventional Voigt-profile fitting (VPF) procedure, which usually assumes several subcomponents with their own physical parameters to describe a complex absorption profile, and (ii) a mesoturbulent approach, which describes the line formation process in a continuous medium with fluctuating physical characteristics. It is hard to favor one method over the other if both provide good fits. But the observed increasing complexity of the line profiles with increasing spectral resolution gives some preference to the model of the fluctuating continuous medium.
Here, we set forward a mesoturbulent approach to measure D/H and metal abundances, which has many advantages over the standard VPF procedures. A brief description of a new Monte Carlo inversion (MCI) method is given in this report. For more details, the reader is referred to Levshakov, Agafonova, & Kegel (2000b).
An example of the MCI analysis of two “H+D”-like profiles with accompanying metal lines observed at $`z_\mathrm{a}=3.514`$ and 3.378 towards the quasar APM 08279+5255 is described. The high quality spectral data have been obtained with the Keck-I telescope and the HIRES spectrograph by Ellison et al. (1999).
## 2. The MCI method and results
The MCI method is based on a simulated annealing technique and is aimed at evaluating both the physical parameters of the gas cloud and the corresponding velocity and density distributions along the line of sight. We consider the line formation process in clumpy stochastic media with fluctuating velocity and density fields (mesoturbulence). The new approach generalizes our previous Reverse Monte Carlo (Levshakov, Kegel, & Takahara 1999) and Entropy-Regularized Minimization (Levshakov, Takahara, & Agafonova 1999) methods dealing with incompressible turbulence (i.e. the case of random bulk motions with homogeneous gas density $`n_\mathrm{H}`$ and kinetic temperature $`T`$).
The main goal is to solve the inverse problem, i.e. the problem of deducing physical parameters from a QSO absorption system. The inversion is always an optimization problem in which an objective function is minimized. To estimate the goodness of the minimization we used a $`\chi ^2`$ function augmented by a regularization term (a penalty function) to stabilize the MCI solutions. The MCI is a stochastic optimization procedure and one does not know in advance if the global minimum of the objective function is reached in a single run. Therefore, to check the convergence, several runs are executed for a given data set, with every calculation starting from a random point in the simulation parameter box and from completely random configurations of the velocity and density fields. After these runs, the median estimation of the model parameters is performed.
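The overall strategy (minimize $`\chi ^2`$ plus a penalty, repeat from random starting points, take the median over runs) can be illustrated on a toy one-parameter problem. This is only our schematic sketch of the strategy, not the actual MCI code:

```python
import math
import random
import statistics

def objective(p):
    """Toy chi^2 plus a small regularization penalty; true minimum near p = 3."""
    return (p - 3.0)**2 + 0.01 * abs(p)

def anneal(seed, n_steps=2000, p_lo=0.0, p_hi=6.0):
    """One simulated-annealing run from a random starting point."""
    rng = random.Random(seed)
    p = rng.uniform(p_lo, p_hi)
    best_p, best_f = p, objective(p)
    for i in range(n_steps):
        temp = max(1e-3, 1.0 - i / n_steps)             # linear cooling schedule
        trial = p + rng.gauss(0.0, 0.5 * temp + 0.01)   # smaller steps when cold
        df = objective(trial) - objective(p)
        if df < 0 or rng.random() < math.exp(-df / temp):
            p = trial
            if objective(p) < best_f:
                best_p, best_f = p, objective(p)
    return best_p

# Several independent runs, then a median estimate of the parameter.
estimates = [anneal(seed) for seed in range(5)]
p_median = statistics.median(estimates)
```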
Our model supposes a continuous absorbing gas slab of a thickness $`L`$. The velocity component along a given line of sight is described by a random function in which the velocities in neighboring volume elements are correlated with each other. The gas is optically thin in the Lyman continuum. We are considering a compressible gas, i.e. $`n_\mathrm{H}`$ is also a random function of the space coordinate, $`x`$. Following Donahue & Shull (1991) and assuming that the ionizing radiation field is constant, the ionization of different elements can be described by one parameter only, the ionization parameter $`U\propto 1/n_\mathrm{H}`$. Furthermore, for gas in thermal equilibrium, Donahue & Shull give an explicit relation between $`U`$ and $`T`$. The background ionizing spectrum is taken from Mathews & Ferland (1987).
In our computations, the continuous random functions $`v(x)`$ and the normalized density $`y(x)=n_\mathrm{H}(x)/n_0`$ , $`n_0`$ being the mean hydrogen density, are represented by their sampled values at equally spaced intervals $`\mathrm{\Delta }x`$, i.e. by the vectors {$`v_1,\mathrm{},v_k`$} and {$`y_1,\mathrm{},y_k`$} with $`k`$ large enough to describe the narrowest components of complex spectral lines. For the ionization parameter as a function of $`x`$, we have $`U(x)=\widehat{U}_0/y(x)`$, with $`\widehat{U}_0`$ being the reduced mean ionization parameter defined below.
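One simple way to realize such correlated random vectors {$`v_1,\mathrm{},v_k`$} is a first-order autoregressive (Markov) chain, in which each sample is drawn conditionally on its predecessor with correlation coefficient $`f`$. The sketch below is our own minimal construction, not the actual MCI field generator; the parameter values echo those quoted later for the $`z_\mathrm{a}=3.514`$ system:

```python
import random

def correlated_field(k, sigma, f, seed=0):
    """Sample k values of a zero-mean Gaussian field with rms sigma in which
    neighbouring points have correlation coefficient f (an AR(1) chain)."""
    rng = random.Random(seed)
    v = [rng.gauss(0.0, sigma)]
    for _ in range(k - 1):
        # conditional draw keeps the marginal variance equal to sigma^2
        v.append(f * v[-1] + rng.gauss(0.0, sigma * (1.0 - f * f) ** 0.5))
    return v

# e.g. a velocity field with sigma_v = 51 km/s and f_v = 0.95; a positive
# density field y(x) could be built from exp() of such a chain.
v = correlated_field(20000, sigma=51.0, f=0.95)
```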
Absorption system at $`z_\mathrm{a}=3.514`$. A measurement of the primordial D/H in the $`z_\mathrm{a}=3.514`$ system has been recently made by Molaro et al. (1999). They suggested that the blue wing of H i Ly$`\alpha `$ is contaminated by D i and evaluated a very low deuterium abundance of D/H $`\approx 1.5\times 10^{-5}`$ in the cloud with $`N_{\mathrm{H}\mathrm{i}}=(1.23_{-0.08}^{+0.09})\times 10^{18}`$ cm<sup>-2</sup>. They considered, however, the derived D abundance as a lower limit because their analysis was based on a simplified one-component VPF model which failed to fit the red wing of the Ly$`\alpha `$ line as well as the profiles of Si iii, Si iv, and C iv lines exhibiting complex structures over approximately 100 km s<sup>-1</sup> velocity range. They further assumed that additional components would decrease the H i column density for the major component and, thus, would yield a higher deuterium abundance. Given the MCI method, we can test this assumption since the MCI accounts self-consistently for the velocity and density fluctuations.
Our aim is to fit the model spectra simultaneously to the observed H i, C ii, C iv, Si iii, and Si iv profiles. In this case the mesoturbulent model requires the definition of a simulation box for the six parameters : the carbon and silicon abundances, $`Z_\mathrm{C}`$ and $`Z_{\mathrm{Si}}`$, respectively, the rms velocity $`\sigma _\mathrm{v}`$ and density dispersion $`\sigma _\mathrm{y}`$, the reduced total hydrogen column density $`\widehat{N}_\mathrm{H}=N_\mathrm{H}/(1+\sigma _\mathrm{y}^2)^{1/2}`$, and the reduced mean ionization parameter $`\widehat{U}_0=U_0/(1+\sigma _\mathrm{y}^2)^{1/2}`$. For the model parameters the following boundaries were adopted : $`Z_\mathrm{C}`$ ranges from $`10^{-6}`$ to $`4\times 10^{-4}`$, $`Z_{\mathrm{Si}}`$ from $`10^{-6}`$ to $`3\times 10^{-5}`$, $`\sigma _\mathrm{v}`$ from 25 to 80 km s<sup>-1</sup>, $`\sigma _\mathrm{y}`$ from 0.5 to 2.2, $`\widehat{N}_\mathrm{H}`$ from $`5\times 10^{17}`$ to $`8\times 10^{19}`$ cm<sup>-2</sup>, and $`\widehat{U}_0`$ ranges from $`5\times 10^{-4}`$ to $`5\times 10^{-2}`$. We fix $`z_\mathrm{a}=3.51374`$ (the value adopted by Molaro et al.) as a more or less arbitrary reference velocity at which $`v_j=0`$.
Having specified the parameter space, we minimize the $`\chi ^2`$ value. The objective function includes those pixels which are critical to the fit. In Fig. 1, these pixels are marked by shaded areas. In Fig. 1 (panels f and c), the observed profiles of C iv$`\lambda 1550`$ and, respectively, Si iv$`\lambda 1402`$ are shown together with the model spectra computed with the parameters derived from Ly$`\alpha `$, C ii$`\lambda 1334`$, C iv$`\lambda 1548`$, Si iii$`\lambda 1206`$, and Si iv$`\lambda 1393`$ fitting to illustrate the consistency. For the same reason the Ly$`\beta `$ model spectrum is shown in panel b at the expected position. All model spectra in Fig. 1 are drawn by continuous curves, whereas filled circles represent observations (normalized fluxes). The corresponding distributions of $`v(x)`$, $`y(x)`$, and $`T(x)`$ are shown in panels i, j, and k. The restored velocity field reveals a complex structure which is manifested in a non-Gaussian density-weighted velocity distribution as shown in panels l and m for the total hydrogen as well as for the individual ions. We found that the radial velocity distribution of H i in the vicinity of $`\mathrm{\Delta }v\approx -100`$ km s<sup>-1</sup> may mimic the deuterium absorption and, thus, the asymmetric blue wing of the hydrogen Ly$`\alpha `$ absorption may be readily explained by H i alone.
The median estimation of the model parameters gives $`N_\mathrm{H}=5.9\times 10^{18}`$ cm<sup>-2</sup>, $`N_{\mathrm{H}\mathrm{i}}=5.3\times 10^{15}`$ cm<sup>-2</sup>, $`U_0=1.6\times 10^{-2}`$, $`\sigma _\mathrm{v}=51`$ km s<sup>-1</sup>, and $`\sigma _\mathrm{y}=1.1`$. The results were obtained with $`k=100`$ and the correlation coefficients $`f_\mathrm{v}=f_\mathrm{y}=0.95`$ (for more details, see Levshakov, Agafonova, & Kegel 2000a).
The MCI allowed us to fit precisely not only the blue wing of the saturated Ly$`\alpha `$ line but the red one as well. We found that the actual neutral hydrogen column density may be a factor of 250 lower than the value obtained by Molaro et al. if one accounts for the velocity field structure. Besides, we did not confirm the extremely low metallicity of \[C/H\] $`\approx -4.0`$, and \[Si/H\] $`\approx -3.5`$ reported by Molaro et al. Our analysis yields \[C/H\] $`\approx -1.8`$, and \[Si/H\] $`\approx -0.7`$. A similar silicon overabundance has also been observed in halo (population II) stars (Henry & Worthey 1999).
Absorption system at $`z_\mathrm{a}=3.378`$. The following example illustrates how the reliability of the inversion procedure can be controlled. We have chosen the $`z_\mathrm{a}=3.378`$ system since at the position of the narrowest C iv subcomponent with $`z_{\mathrm{C}\mathrm{iv}}=3.37757`$ and $`b_{\mathrm{C}\mathrm{iv}}=6.5`$ km s<sup>-1</sup> (see Ellison et al.) one can see an “H+D”-like absorption in the blue wing of the saturated Ly$`\alpha `$ line (Fig. 2a). The C iv and Si iv profiles from this system were treated by Ellison et al. separately. They found $`N_{\mathrm{C}\mathrm{iv}}^{\mathrm{tot}}=(9.12_{-0.61}^{+0.65})\times 10^{13}`$ cm<sup>-2</sup> and $`N_{\mathrm{Si}\mathrm{iv}}^{\mathrm{tot}}=(1.70\pm 0.08)\times 10^{13}`$ cm<sup>-2</sup>. For the neutral hydrogen, they estimated $`N_{\mathrm{H}\mathrm{i}}=2.8\times 10^{15}`$ cm<sup>-2</sup> and $`b_{\mathrm{H}\mathrm{i}}=78.6`$ km s<sup>-1</sup> (errors for both quantities are greater than 30%).
In this example, we assumed no deuterium absorption and tried to force a common fit to the lines shown in panels a-c (Fig. 2). Pixels used in the fitting are labeled by shaded areas. The MCI fit, shown by solid curves, looks excellent, and gives $`N_{\mathrm{H}\mathrm{i}}=2.1\times 10^{17}`$ cm<sup>-2</sup>, $`N_{\mathrm{C}\mathrm{iv}}=1.0\times 10^{14}`$ cm<sup>-2</sup>, $`N_{\mathrm{Si}\mathrm{iv}}=2.6\times 10^{13}`$ cm<sup>-2</sup>, $`\sigma _\mathrm{v}=41.6`$ km s<sup>-1</sup>, and $`\sigma _\mathrm{y}=1.3`$. The results were obtained with $`k=150`$ and the correlation coefficients $`f_\mathrm{v}=f_\mathrm{y}=0.97`$.
The obtained MCI solution should, however, be rejected because the synthetic spectrum of C ii (solid curve in Fig. 2d), computed with the parameters derived from the Ly$`\alpha `$, C iv, and Si iv fitting, differs significantly from the observed intensities (dots in Fig. 2d). This example shows that we can always control the MCI results using additional portions of the analysed spectrum.
Another lesson from this example is that one can, in principle, fit an โH+Dโ-like absorption by H i alone even for systems with $`N_{\mathrm{H}\mathrm{i}}\sim 10^{17}`$ cm<sup>-2</sup> and accompanying metal lines. Examples of false deuterium identifications in systems with 10 times lower neutral hydrogen column densities and without supporting metal lines have been discussed by Tytler & Burles. Both cases stress the importance of a comprehensive approach to the analysis of each individual QSO system showing possible D absorption.
We may conclude that up to now deuterium has been detected in only four QSO spectra (Q 1937-1009, Q 1009+2956, Q 0130-4021, and Q 1718+4807) where $`N_{\mathrm{H}\mathrm{i}}`$ was measured with sufficiently high accuracy. These measurements are in concordance with $`\mathrm{D}/\mathrm{H}=(3-4)\times 10^{-5}`$.
Acknowledgements. This paper includes results obtained in collaboration with I. I. Agafonova and W. H. Kegel. The author is grateful to Ellison et al. for making their data available. I would also like to thank the conference organizers for financial assistance.
## References
Burles, S., & Tytler, D. 1998, in Primordial Nuclei and Their Galactic Evolution, ed. N. Prantzos, M. Tosi & R. von Steiger (Dordrecht : Kluwer Academic Publishers), 65
Donahue, M., & Shull, J. M. 1991, ApJ, 383, 511
Ellison, S. L., Lewis, G. F., Pettini, M., Sargent, W. L. W., Chaffee, F. H., Foltz, C. B., Rauch, M., & Irwin, M. J. 1999, PASP, 111, 919
Henry, R. B. C., & Worthey, G. 1999, PASP, 111, 919
Kirkman, D., Tytler, D., Burles, S., Lubin, D., & OโMeara, J. M. 1999, AAS, 194, 3001 (astro-ph/9907128)
Lemoine, M., Audouze, J., Jaffel, L. B., Feldman, P., Ferlet, R., Hรฉbrard, G., Jenkins, E. B., Mallouris, C., Moos, W., Sembach, K., Sonneborn, G., Vidal-Madjar, A., & York, D. C. 1999, New Astronomy, 4, 231
Levshakov, S. A., Kegel, W. H., & Takahara, F. 1999, MNRAS, 302, 707
Levshakov, S. A., Takahara, F., & Agafonova, I. I. 1999, ApJ, 517, 609
Levshakov, S. A., Agafonova, I. I., & Kegel, W. H. 2000a, A&A, in press
Levshakov, S. A., Agafonova, I. I., & Kegel, W. H. 2000b, A&A, submitted
Mathews, W. D., & Ferland, G. 1987, ApJ, 323, 456
Molaro, P., Bonifacio, P., Centurion, M., & Vladilo, G. 1999, A&A, 349, L13
Tytler, D., & Burles, S. 1997, in Origin of Matter and Evolution of Galaxies, ed. T. Kajino, Y. Yoshii & S. Kubono (Singapore : World Scientific), 37 |
# Chemical enrichment and star formation in the Milky Way disk
## 1 Introduction
The question of whether the Milky Way disk has experienced a smooth and constant star formation history (hereafter SFH) or a bursty one has been the subject of a number of studies since the initial suggestions by Scalo (scalo87 (1987)) and Barry (barry (1988)). Rocha-Pinto et al. (2000a; hereafter RPSMF) present a brief review of this question. There is evidence for three extended periods of enhanced star formation in the disk. The use of the word โburstโ for these features (usually lasting 1-3 Gyr) is based on the fact that all methods used to recover the SFH are likely to smear out the original data, so that the star formation enhancement features could be narrower than they seem, or be composed of a succession of smaller bursts. In this sense, they were named bursts A, B and C, after Majewski (majewski (1993)).
The most efficient way to find the SFH is using the stellar age distribution, which can be transformed into a star formation history after various corrections. Twarog (twar (1980)) summarized some of these steps. Although his SFH is usually quoted as evidence for the constancy of the star formation in the disk, he states that during the most recent 4 Gyr the SFH has been more or less constant, followed by a sharp increase from 4 to 8 Gyr ago, and a slow decline beyond that. His unsmoothed data were also reanalysed by Noh & Scalo (noh (1990)), who found further signs of irregularity.
Barry (barry (1988)) improved this situation substantially by using chromospheric ages. His conclusion was criticized by Soderblom et al. (soder91 (1991)), who showed that the empirical data would still be consistent with a constant SFH if the chromospheric emissionโage relation were suitably modified. However, Rocha-Pinto & Maciel (RPM98 (1998)) have recently argued that the scatter in Figure 13 of Soderblom et al. (soder91 (1991)), which is the main feature that could suggest a non-monotonic age calibration, is probably caused by contamination of the photometric indices by the chromospheric activity. The chromospheric activityโage relation was also further investigated by Donahue (don93 (1993), don98 (1998)), and the new proposed calibration still predicts a non-constant SFH if applied to Barryโs data.
The SFH derived in this paper is based on a new chromospheric sample compiled by us (Rocha-Pinto et al. 2000b , hereafter Paper I). This paper is organized as follows: In section 2, we address the transformation of the age distribution into SFH. The results are presented in section 3. In section 4, statistical significances for the SFH are provided by means of a number of simulations. The impact of the age errors on the recovered SFH is also studied. Some comparisions with observational constraints are addressed in section 5, and each particular feature of the SFH is discussed in section 6, in view of the results from the simulations and comparisons with other data. The case for a non-monotonic chromospheric activityโage relation is discussed in section 7. Our final conclusions follow in section 8. A summary of this work was presented in RPSMF.
## 2 Converting age distribution into SFH
Assuming that the sample under study is representative of the galactic disk, the star formation rate can be derived from its age distribution, since the number of stars in each age bin is supposed to be correlated with the number of stars initially born at that time.
We use the same 552 stars with which we have derived the AMR (Paper I), after correcting the metallicities of the active stars for the $`m_1`$ deficiency (Gimรฉnez et al. gimenez (1991); Rocha-Pinto & Maciel RPM98 (1998)), which accounts for the influence of the chromospheric activity on the photometric indices. The reader is referred to Paper I for details concerning the sample construction and the derivation of ages, from the chromospheric Ca H and K emission measurements.
The transformation of the chromospheric age distribution into history of the star formation rate comprises three intermediate corrections, namely the volume, evolutionary and scale height corrections. They are explained in what follows.
### 2.1 Volume correction
Since our sample is not volume-limited, there could be a bias in the relative number of stars in each age bin: stars with different chemical compositions have different magnitudes, thus the volume of space sampled varies from star to star. To correct for this effect, before counting the number of stars in each age bin, we have weighted each star (counting initially as 1) by the same factor $`d^{-3}`$ used for the case of the AMR, where $`d`$ is the maximum distance at which the star would still have an apparent magnitude lower than a limit of about 8.3 mag (see Paper I for details).
This correction changes the age distribution significantly, as can be seen in Figure 1.
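As a minimal sketch of this weighting (assuming, for illustration, that the weight is simply the inverse of the volume each star samples, proportional to $`d^{-3}`$; the absolute magnitudes below are hypothetical, only the 8.3 mag limit is from the text):

```python
def volume_weight(abs_mag, m_lim=8.3):
    """Weight proportional to 1/d_max^3, where d_max (in pc) is the largest
    distance at which the star would still appear brighter than m_lim."""
    # distance modulus: m - M = 5 log10(d / 10 pc)
    d_max = 10.0 ** (0.2 * (m_lim - abs_mag) + 1.0)
    return d_max ** -3

# an intrinsically faint (cool) star samples a smaller volume and therefore
# receives a larger weight than an intrinsically bright one
w_cool, w_hot = volume_weight(6.0), volume_weight(4.0)
```

Up-weighting the nearby, intrinsically faint stars in this way compensates for the small volume in which they remain above the magnitude limit of the sample.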
### 2.2 Evolutionary corrections
A correction due to stellar evolution is needed when a sample comprises stars with different masses. The more massive stars have a life expectancy shorter than the disk age, so they are missing from the older age bins. The mass of our stars was calculated from a characteristic massโmagnitude relation for the solar neighbourhood (Scalo scalo (1986)). In Figure 2, the mass distribution is shown. We take the mass range of our sample as 0.8 to 1.4 $`M_{}`$, which agrees well with the spectral-type range of the sample, from nearly F8 V to K1-K2 V. As an example of the necessity of these corrections, the stellar lifetime of a 1.2 $`M_{}`$ star is around 5.5 Gyr (see Figure 3 below). This means that only the most recent age bins are expected to have stars over the whole mass range of the sample.
The corrections are given by the following formalism. The number of stars born at time $`t`$ ago (present time corresponds to $`t=0`$), with mass between 0.8 and 1.4 $`M_{}`$ is
$$N^{}(t)=\psi (t)_{0.8}^{1.4}\varphi (m)๐m,$$
(1)
where $`\varphi (m)`$ is the initial mass function, assumed constant, and $`\psi (t)`$ is the star formation rate in units of $`M_{}`$ Gyr<sup>-1</sup>pc<sup>-2</sup>. The number of these objects that have already died today is
$$N^{}(t)=\psi (t)_{m_\tau (t)}^{1.4}\varphi (m)๐m,$$
(2)
where $`m_\tau (t)`$ is the mass whose lifetime corresponds to $`t`$. From these equations, we can write the number of still-living stars born at time $`t`$ as
$$N^{\mathrm{obs}}(t)=N^{}(t)N^{}(t).$$
(3)
Using equations (1) and (2), we have
$$N^{}(t)=\left[\frac{_{m_\tau (t)}^{1.4}\varphi (m)๐m}{_{0.8}^{1.4}\varphi (m)๐m}\right]N^{}(t)=\frac{\alpha (t)}{\beta }N^{}(t);$$
(4)
$$N^{}(t)=\epsilon (t)N^{\mathrm{obs}}(t),$$
(5)
where
$$\epsilon (t)=\left(1\frac{\alpha (t)}{\beta }\right)^1.$$
(6)
The number of objects initially born at each age bin can be calculated by using equation (6), so that we have to multiply the number of stars presently observed by the $`\epsilon `$ factor. These corrections were independently developed by Tinsley (tinsley74 (1974)), in a different formalism. RPSMF present another way to express this correction in terms of the stellar lifetime probability function. We stress that all these formalisms yield identical results.
The function $`m_\tau (t)`$ can be calculated by inverting stellar lifetime relations. Figure 3 shows stellar lifetimes from a number of studies published in the literature. Note the good agreement between the relations of the Padova group (Bressan et al. bressan (1993); Fagotto et al. fagotto (1994)a,b) and that by Schaller et al. (schallera (1992)), as well as with Bahcall & Piran (bahcall (1983))โs lifetimes. The stellar lifetimes for $`Z=0.0017`$ given by VandenBerg (VandenBerg (1985)) are underestimated, probably due to the old opacity tables used by him. The agreement in the stellar lifetimes shows that the error introduced in the SFH by the evolutionary corrections is not very large.
The adopted turnoff-mass relation was calculated from the stellar lifetimes by Bressan et al. (bressan (1993)) and Schaller et al. (schallera (1992)), for solar metallicity stars:
$$\mathrm{log}m_\tau (t)=7.59-1.25\mathrm{log}t+0.05(\mathrm{log}t)^2,$$
(7)
where $`t`$ is in yr. This equation is only valid for the mass range $`5M_{}>m>0.7M_{}`$.
We have also considered the effects of the metallicity-dependent lifetimes on the turnoff mass. To account for this dependence, we have adopted the stellar lifetimes for different chemical compositions, as given by Bressan et al. (bressan (1993)) and Fagotto et al. (fagotto (1994)a,b). Equations similar to Eq. (7) were derived for each set of isochrones and the metallicity dependence of the coefficients was calculated. We arrive at the following equation:
$$\mathrm{log}m_\tau (t)=a+b\mathrm{log}t+c(\mathrm{log}t)^2,$$
(8)
where $`a=7.62-1.56[\mathrm{Fe}/\mathrm{H}]`$, $`b=-1.26+0.34[\mathrm{Fe}/\mathrm{H}]`$, $`c=0.05-0.02[\mathrm{Fe}/\mathrm{H}]`$. Since \[Fe/H\] depends on time, we use a third-order polynomial fitted to the AMR derived in Paper I. In that work, we have also shown that the AMR is strongly affected at older ages by the errors in the chromospheric ages. The real AMR is probably steeper, with a disk initial metallicity around $`-0.70`$ dex. The effect of this on the SFH is small. The use of a steeper AMR increases the turnoff mass at older ages, decreasing the stellar evolutionary correction factors (Equation 6). As a result, the SFH features at young and intermediate age bins (ages lower than 8 Gyr) increase slightly relative to the older features, in units of relative birthrate, which is the kind of plot we will use in the next sections.
Note that equation (8) does not reduce to equation (7) when $`[\mathrm{Fe}/\mathrm{H}]=0`$. The latter was calculated from an average of two solar-metallicity stellar evolutionary models, while the former uses the results of a single model with varying composition. The difference in the turnoff mass from these equations amounts to 12-15% from 0.4 to 15 Gyr.
The initial mass function (IMF) also enters in the formalism of the $`\epsilon `$ factor. For the mass range under consideration, the IMF depends on the SFH, more specifically on the present star formation rate. It could be derived from open clusters, but they are probably severely affected by mass segregation, unresolved binaries and so on (Scalo scalo98 (1998)). We have adopted the IMF by Miller & Scalo (ms79 (1979)), for a constant SFH, which gives an average value for the mass range under study. Power-law IMFs were also used to see the effect on the results.
In Figure 4 we show how this factor varies with age. The curves represent Equations (7; dashed curve) and (8; solid curve) using the Miller-Scalo IMF. A third curve (shown by dots) gives the results using a Salpeter IMF with the turnoff mass given by Equation (7). The $`\epsilon `$ factor does not vary much when a different IMF is used. Because the Miller-Scalo IMF is flatter than the Salpeter IMF, its correction factors are higher. However, the effects of neglecting the metallicity dependence of the stellar lifetimes are much more important in the calculation of this correction factor. Since low-metallicity stars live less than their metal-richer counterparts, the turnoff masses at older ages are strongly affected. In the following sections, we will use the $`\epsilon `$ factors calculated for metallicity-dependent lifetimes.
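The evolutionary correction can be sketched numerically. The snippet below combines the solar-metallicity turnoff-mass fit of Equation (7) with a Salpeter power-law IMF, used here purely as a convenient stand-in for the Miller-Scalo IMF adopted in the text:

```python
import math

def turnoff_mass(t_gyr):
    """Eq. (7): solar-metallicity turnoff mass (in solar masses), t in Gyr."""
    logt = math.log10(t_gyr * 1e9)
    return 10.0 ** (7.59 - 1.25 * logt + 0.05 * logt ** 2)

def imf_integral(m_lo, m_hi, x=2.35):
    """Closed-form integral of a power-law IMF, phi(m) ~ m**(-x)."""
    return (m_lo ** (1.0 - x) - m_hi ** (1.0 - x)) / (x - 1.0)

def epsilon(t_gyr, m_min=0.8, m_max=1.4):
    """Eq. (6): factor converting observed counts into stars initially born."""
    m_to = min(max(turnoff_mass(t_gyr), m_min), m_max)
    alpha = imf_integral(m_to, m_max)   # IMF weight of stars already dead
    beta = imf_integral(m_min, m_max)   # IMF weight of the full mass range
    return 1.0 / (1.0 - alpha / beta)
```

For young bins the turnoff mass exceeds 1.4 $`M_{}`$, no sample star has died and $`\epsilon =1`$; towards 10 Gyr the factor grows above unity, qualitatively as in Figure 4.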
### 2.3 Scale height correction
Another depopulation mechanism, affecting samples limited to the galactic plane, is the heating of the stellar orbits which increases the scale heights of the older objects. To correct for this we use the following equations. Assuming that the scale heights in the disk are exponential, the transformation of the observed age distribution, $`N_0(t)`$, into the function $`N(t)`$ giving the total number of stars born at time $`t`$ is
$$N(t)=2H(t)N_0(t),$$
(9)
where $`H(t)`$ is the average scale height as a function of the stellar age. A problem arises since scale heights are always given as a function of absolute magnitude or mass. To solve for this, we use an average stellar age corresponding to a given mass, following the iterative procedure outlined in Noh & Scalo (noh (1990)). This average age, $`\tau `$, can be obtained by
$$\tau =\frac{_0^{\tau _m}tN(t)๐t}{_0^{\tau _m}N(t)๐t};$$
(10)
where $`\tau _m`$ is the lifetime of stars having mass $`m`$, and $`N(t)`$ is the star formation rate. Since $`\tau `$ depends on the star formation rate, which on the other hand depends on the average ages through the definition of $`H(t)`$, equations (9) and (10) can only be solved by iteration. We use the chromospheric age distribution as the first guess $`N_0(t)`$, and calculate the average ages $`\tau _0`$. These are used to convert $`H(m)`$ to $`H(t)`$, and the star formation history is found by equation (9), giving $`N_1(t)`$. This quantity is used to calculate $`\tau _1`$ and a new star formation rate, $`N_2(t)`$. Note that, in equation (9), the quantity that varies in each iteration is $`H(t)`$, not the chromospheric age distribution $`N_0(t)`$. Our calculations have shown that convergence is attained rapidly, generally after the second iteration.
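The iteration of Equations (9) and (10) can be sketched as follows. The lifetime and $`H(m)`$ relations below are invented toy functions; only the iterative structure follows the text:

```python
import numpy as np

ages = np.arange(0.2, 16.0, 0.4)      # age-bin centres [Gyr]
N_obs = np.ones_like(ages)            # observed, plane-limited age distribution
masses = np.linspace(0.8, 1.4, 13)

def lifetime(m):                      # toy main-sequence lifetime [Gyr]
    return 10.0 * m ** -2.5

def H_of_mass(m):                     # toy scale height [pc]; low-mass samples are older
    return 350.0 - 200.0 * (m - 0.8) / 0.6

N = N_obs.copy()
for _ in range(5):                    # the text reports convergence after ~2 iterations
    # Eq. (10): mean age of stars of mass m under the current SFH estimate
    tau = np.array([np.average(ages[ages < lifetime(m)],
                               weights=N[ages < lifetime(m)]) for m in masses])
    # map H(m) -> H(t): interpolate at the mass whose mean age is t
    H_of_t = np.interp(ages, tau[::-1], H_of_mass(masses)[::-1])
    # Eq. (9): undo the depopulation of the galactic plane
    N = 2.0 * H_of_t * N_obs
```

Note that, as in the text, only $`H(t)`$ is updated at each pass; the observed distribution is fixed, and the loop stabilizes quickly.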
Great uncertainties are still present in the scale heights for disk stars. Few works have addressed them since Scalo (scalo (1986))โs review (see e.g., Haywood, Robin & Crezรฉ hay (1997)). We will work with two different sets of scale heights, Scalo (scalo (1986)) and Holmberg & Flynn (holmberg (2000)), which are shown in Figure 5. Haywood et al.โs scale heights lie just in the middle of these, so the two sets bracket the effects on the derivation of the SFH.
The major effect of the scale heights is to increase the contribution of the older stars in the SFH. Better scale heights would not significantly change the results, so we limit our discussion to these two derivations.
## 3 Star formation history in the galactic disk
### 3.1 Previous chromospheric SFH determinations
In Figure 6, we show a comparison between two SFHs derived from chromospheric age distributions available in the literature: Barry (barry (1988), SFH given by Noh & Scalo noh (1990)) and Soderblom et al. (soder91 (1991), SFH given by Rana & Basu ranabasu (1992)). In this plot, as well as in subsequent figures, the SFH will always be expressed as a relative birthrate, defined as the star formation rate in units of the average past star formation rate (see Miller & Scalo ms79 (1979), for a rigorous definition).
Note that the SFHs in Figure 6 are very similar to each other, a result not really surprising since Soderblom et al. used the same sample as Barry. On the other hand, the events in Soderblom et al.โs SFH appear about 1 Gyr earlier than the corresponding events in Barryโs SFH. The different age calibrations used in these works are the sole cause of this discrepancy. Barry makes use of Barry et al. (barry87 (1987))โs calibration, which used a low-resolution index analogous to the Mount Wilson $`\mathrm{log}R_{\mathrm{HK}}^{}`$, while Soderblom et al. use a calibration derived by themselves. In Figure 7, we show a comparison of the ages for Barry (barry (1988))โs stars using both age calibrations. The difference in the ages is clearly caused by the slopes of the calibrations. Barry et al. (barry87 (1987))โs calibration gives higher ages compared to the other calibration, which explains the differences in the corresponding published SFHs.
### 3.2 Determination of the SFH
The three corrections described in section 2 are applied to our data in the following order: the age distribution is first weighted according to the volume corrections, then each age bin is multiplied by the $`\epsilon `$ factor, and we iterate the result according to equations (9) and (10). The final result is the best estimate of the star formation history. It is shown in Figure 8a, for an age bin of 0.4 Gyr and Scaloโs scale height. Three regions can be seen where the stars are more concentrated: at 0-1 Gyr, 2-5 Gyr and 7-9 Gyr ago. Beyond 10 Gyr of age, the SFH is very irregular, probably reflecting the sample incompleteness in this age range, and the age errors, rather than real features. These patterns are still present even considering a smaller age bin of 0.2 Gyr. Figure 8b shows the same for Holmberg & Flynn (holmberg (2000)) scale heights. The only difference comes from the amplitude of the events. In this plot, the importance of the older bursts is increased, since in Holmberg & Flynn (holmberg (2000)) the difference between the scale heights of the oldest and the youngest stars is greater than the corresponding value in Scaloโs scale heights.
We have extended the nomenclature of Majewski (majewski (1993)) to refer to the features found. At the age ranges where bursts B and C were thought to occur, double-peaked structures are now seen. Thus, we have used the terms B1 and B2, and C1 and C2, for these substructures. Also shown is the supposed burst D, as Majewski (majewski (1993)) had suggested. Their meaning will be discussed later. The lulls between the bursts were named the AB gap, BC gap and so on. Some of us have previously referred to the most recent lull as the โVaughan-Preston gapโ. We now avoid the use of this term because:
1. The Vaughan-Preston gap is a feature in the chromospheric activity distribution;
2. Due to the metallicity-dependence of the age calibration, the Vaughan-Preston gap is not linearly reflected in an age gap;
3. Henry et al. (HSDB (1996), hereafter HSDB) show that the Vaughan-Preston gap is less pronounced than was earlier thought, and resembles a transition zone rather than a gap.
Compared with other studies in the literature, our SFH appears markedly different. There are still three major star formation episodes, but their amplitudes, extensions and times of occurrence are not identical to those previously found by other authors. Table 1 summarizes the main characteristics of our SFH, comparing it to that of Barry (barry (1988), as derived in Noh & Scalo noh (1990)). In the Table, the entries with two values stand for the SFH derived with different scale heights. The first number refers to the SFH with Scaloโs scale height, and the other refers to that with Holmberg & Flynnโs.
As we can see, the main events of our SFH seem to occur earlier than the corresponding events in Barryโs SFH, by approximately 1 Gyr. This can also be seen in Figure 6: the SFH from Soderblom et al. (soder91 (1991))โs data has features earlier than Barryโs by about 1 Gyr. This comes mainly from the use of Soderblom et al. (soder91 (1991))โs age calibration, on which we have based our ages. This hypothesis is reinforced by the fact that the fraction of the stars formed in each burst is in reasonable agreement with the corresponding events in Barryโs SFH (see Table 1). The events we have found are most likely the same that have appeared in previous works, and the difference in the time of occurrence comes from the shrinking of the chronological scale of the age calibration.
The narrowing of the AB gap is one of the main differences of our SFH and that found by Barry. This can be expected since our sample does not show a well-marked Vaughan-Preston gap, contrary to what is found in the survey of Soderblom (soder (1985)), from which Barry (barry (1988)) selected his sample.
Some other differences in the amplitude and duration of the bursts can be understood as resulting from the differences in the samples used by us and by Barry. Nearly 70% of our stars come from HSDB survey. We have already shown in Paper I that HSDB and Soderblom (soder (1985)) surveys have different chromospheric activity distributions. These are directly reflected in the SFH.
We have found double peaks at bursts B and C. At the present moment, we cannot tell whether these features are real double-peaked bursts (that is, two unresolved bursts) or single smeared peaks. However, it is interesting to see that the previous chromospheric SFHs give some evidence for a double burst C. In Figure 6, burst C also seems to be formed by two peaks. On the other hand, the same does not occur for burst B. The feature called B2 corresponds more closely to burst B in the previous studies, but at the age where we have found B1, the other SFHs show a gap.
The resulting SFH comes directly from the age distribution, in an approach which assumes that the most frequent ages of the stars indicate the epochs when the star formation was more intense. Neither the evolutionary nor the scale height corrections change the clumps of stars already present in the age distribution. The only correction which could introduce spurious patterns in it is the volume correction, which must be applied before the other two. Figure 1 shows how it affects the age distribution. It is instructive that the major patterns of the age distribution are not much changed after this correction. We refer basically to the clumps of stars younger than 1 Gyr and stars with ages between 2 and 4 Gyr. These clumps will be identified with bursts A and B, respectively, after the application of the other corrections. Note also that the AB gap is clearly seen in the age distribution before the volume correction. In spite of this, it is necessary to know whether the presence of stars with very high weights (due to their proximity and low temperature) could affect the results. Therefore, we have recalculated the SFH, now disregarding the stars that have very high weights after the volume correction. We have cut the sample to those stars with weights not exceeding 2$`\sigma `$ and 3$`\sigma `$. The resulting SFHs are compared to the SFH of the whole sample in Figure 9. It is possible to see that the presence of outliers does not affect the global result. The uncertainty introduced affects mainly the amplitude of the events, at a level similar to that introduced by the uncertainty in the scale heights. We believe that the volume correction has not imposed artificial patterns on the data, and that the star formation history just derived directly reflects the observed distribution of stellar ages in the solar vicinity.
## 4 Statistical significance of the results
### 4.1 Inconsistency of the data with a constant SFH
There is a widespread myth in galactic evolutionary studies about the near constancy of the SFH in the disk. This comes primarily from earlier studies setting constraints on the present relative birthrate (e.g., Miller & Scalo ms79 (1979); Scalo scalo (1986)). The observational constraints have favoured a value near unity, and that was interpreted as a constant SFH.
This constraint refers only to the present star formation rate. As pointed out by OโConnell (oconnell (1997)) and Rocha-Pinto & Maciel (RPM97 (1997)), it is not the same as the star formation history.
A typical criticism of a plot like that shown in Figure 8 is that the results still do not rule out a constant SFH, since the oscillations of peaks and lulls around unity can be understood as fluctuations of a SFH that was โconstantโ in the mean. This is a common mistake of those who are accustomed to the strong, short-lived bursts in other galaxies.
The ability to find bursts of star formation depends on the resolution. Suppose a galaxy that has experienced a single strong star-forming burst during its entire lifetime. The burst had an intensity of a hundred times the average star formation in this galaxy and lasted $`10^7`$ yr, which are typical parameters of bursts in active galaxies. Figure 10 shows how this burst would be noticed, in a plot similar to the one we use, as a function of the bin size. In a bin size similar to that used throughout this paper (0.4 Gyr), the strong narrow burst would be seen as a feature with a relative birthrate of 3.5. If we were to convolve it with the age errors, like those we used in Paper I, we would find a broad smeared peak similar to those in Figure 8. For a bigger bin size (1 Gyr), the relative birthrate of the burst would be lower than 1.5. Hence, a relative birthrate of 1.5 in a SFH binned by 1 Gyr is by no means constant. A large bin size can simply hide a real burst that, if occurring presently in another galaxy, would be accepted without reservation.
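The dilution of such a burst follows from simple bin-averaging. Assuming the burst (a hundred times the quiescent rate, lasting 0.01 Gyr) falls entirely inside one bin, its apparent relative birthrate is:

```python
def apparent_birthrate(bin_gyr, strength=100.0, duration=0.01):
    """Bin-averaged SFR of a short burst, in units of the quiescent rate."""
    return (strength * duration + (bin_gyr - duration)) / bin_gyr

print(f"{apparent_birthrate(0.4):.1f}")  # 3.5, as quoted for a 0.4 Gyr bin
```

Broader bins dilute the peak further, and a burst that straddles a bin edge is diluted below even this simple estimate, which is how values under 1.5 are reached at a 1 Gyr bin.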
In the case of our Galaxy, the bin size presently cannot be smaller than 0.4 Gyr. This is caused by the magnitude of the age errors. We are then limited to features whose relative birthrate will be barely greater than 3.0, especially taking into consideration that the star formation in a spiral galaxy is more or less well distributed over its lifetime. Therefore, in a plot with a bin size of 0.4 Gyr, relative birthrates of 2.0 in fact represent big star formation events.
A conclusive way to avoid these mistakes is to calculate the expected fluctuations of a constant SFH in the plots we are using. We have calculated the Poisson deviations for a constant SFH composed of 552 stars. In Figure 11 we show the 2$`\sigma `$ lines (dotted lines) limiting the expected statistical fluctuations of a constant SFH.
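These 2$`\sigma `$ limits are easy to reproduce with a back-of-envelope estimate (a sketch assuming pure Poisson counting statistics for 552 stars spread over forty 0.4 Gyr bins; the band in Figure 11 additionally carries the sample corrections through):

```python
import math

n_stars, n_bins = 552, 40            # 552 stars, 0.4 Gyr bins over 16 Gyr
mean = n_stars / n_bins              # expected count per bin for a constant SFH
sigma = math.sqrt(mean)              # Poisson fluctuation per bin

# 2-sigma envelope expressed as a relative birthrate (constant SFH = 1)
upper = 1.0 + 2.0 * sigma / mean
lower = 1.0 - 2.0 * sigma / mean
print(f"{lower:.2f} - {upper:.2f}")  # 0.46 - 1.54
```

On this estimate, excursions beyond roughly 1.5 (such as the B1 peak at $`b=2.5`$ discussed below) cannot be Poisson fluctuations of a constant SFH.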
The Milky Way SFH, in this Figure, is presented with two sets of error bars, corresponding to extreme cases. The smallest error bars correspond to Poisson errors ($`\pm \sqrt{N}`$, where $`N`$ is the number of stars in each age bin). The thinner, longer error bar superposed on the first shows the maximum expected error in the SFH, coming from the combination of counting errors, IMF errors and scale height errors. These last two errors were estimated from Figures 4 and 5. The contribution of the scale height errors is greatest at an age of 3.0 Gyr, due to the steep increase of the scale heights around solar-mass stars. The effect of the IMF errors is the smallest, but grows in importance for the older age bins.
From the comparison of the maximum expected fluctuations of a constant SFH and the errors in the Milky Way SFH, it is evident that some trends are not consistent with a constant history, particularly bursts A and B, and the AB gap. We can conclude that the irregularities of our SFH cannot be caused by statistical fluctuations.
### 4.2 The uncertainty introduced by the age errors
The age errors affect most considerably the duration of the star formation events, since they tend to scatter the stars originally born in a burst. We can expect that these errors could smear out peaks and fill in gaps in the age distribution. A detailed and realistic investigation of the statistical meaning of our bursts has to be done in the framework of our method, following the observational data as closely as possible. In the case of the Milky Way, the input data are provided by the age distribution. We have supposed that this age distribution is depopulated of old objects, since some have died or left the galactic plane. Our method to find the SFH makes use of corrections to take these effects into account. However, some features in the age distribution could instead be caused by the incompleteness of the sample. These would propagate to the SFH, giving rise to features that could be taken as real when they are not.
Thus, if we want to differentiate our SFH from a constant one, we must begin with age distributions, generated by a constant SFH, depopulated in the same way as the Galactic age distribution. With this approach, we can check whether the SFH presented in Figure 8 can be produced by errors in the isochrone ages in conjunction with statistical fluctuations of an originally constant SFH.
We have run a set of 6000 simulations to study this. Each simulation was composed of the following steps:
1. A constant SFH composed of 3000 โstarsโ was built by randomly distributing the stars from 0 to 16 Gyr with uniform probability.
2. The stars are binned at 0.2 Gyr intervals. For each bin, we calculate the number of objects expected to have left the main sequence or the galactic plane. This corresponds to the number of objects which we have randomly eliminated from each age bin. The remaining stars (around 600-700 stars at each simulation) were put into an โobserved catalogueโ.
3. The real age of the stars in the โobserved catalogueโ is shifted randomly according to the average errors presented in Figure 5 of Paper I. After that, the โobserved catalogueโ looks more similar to the real data.
4. The SFH is then calculated just as it was done for the disk. From each SFH the following information is extracted: dispersion around the mean, amplitude and age of occurrence of the most prominent peak, amplitude and age of occurrence of the deepest lull.
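A single realization of the four steps above might look like the sketch below, where the depopulation law and the age-error model are toy stand-ins (the paper uses the actual corrections of Sect. 2 and the errors of Paper I):

```python
import numpy as np

rng = np.random.default_rng(42)

def one_simulation(n_born=3000, t_max=16.0):
    # step (1): constant SFH -- ages uniform in [0, t_max]
    ages = rng.uniform(0.0, t_max, n_born)
    # step (2): depopulate old bins (toy survival law standing in for
    # stellar death plus scale-height losses)
    survival = np.exp(-ages / 8.0)
    observed = ages[rng.random(n_born) < survival]
    # step (3): scatter the ages with a toy ~30% relative error
    observed = np.clip(observed * rng.normal(1.0, 0.3, observed.size),
                       0.0, t_max)
    # step (4): recover the SFH by undoing the mean depopulation, keeping
    # only the 0-10 Gyr range as in the text, and extract the statistics
    bins = np.arange(0.0, 10.4, 0.4)
    counts, _ = np.histogram(observed, bins)
    centres = 0.5 * (bins[:-1] + bins[1:])
    b = counts / np.exp(-centres / 8.0)
    b /= b.mean()                     # relative birthrate
    return b.std(), b.max()

dispersion, highest_peak = one_simulation()
```

Repeating this 6000 times yields the reference distributions of dispersions and highest peaks against which the Galactic values are compared.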
One of the problems that we have found is that, due to the size of the sample and the depopulation caused by stellar evolution and scale height effects, the SFH always presents large fluctuations beyond 10 Gyr. These fluctuations are by no means real. They arise from the fact that, in the observed sample (for the simulations, in the โobserved catalogueโ), the number of objects beyond 10 Gyr is very small, varying from 0 to 2 stars at most. In the method presented in the subsections above, we multiply the number of stars present in the older age bins by correction factors to find the number of stars originally born at that time. This multiplying factor increases with age and can be as high as 12 for stars older than 10 Gyr. In this way, by a simple small-number statistical effect, our sample can contain age bins where no star was observed neighbouring bins where there are one or more stars. In the recovered SFH, such an age bin will still present zero stars, while the neighbouring bins have their original number of stars multiplied by a factor of up to 12. This introduces large fluctuations at the older age bins, so that all statistical parameters of the simulated SFHs were calculated only from ages 0 to 10 Gyr.
In Figure 12, we present two histograms with the statistical parameters extracted from the simulations. The first panel shows the distribution of dispersions around the mean for the 6000 simulations. The arrow indicates the corresponding value for the Milky Way SFH. The dispersion of the SFH of our Galaxy is located in the farthest tail of the dispersion distribution. The probability of finding a dispersion similar to that of the Milky Way is lower than 1.7%, according to the plot. In other words, we can say, with a significance level of 98.3%, that the Milky Way SFH is not consistent with a constant SFH.
In panel b of Figure 12, a similar histogram is presented, now for the value of the most prominent peak found in each simulation. In the case of the Milky Way, we have the B1 peak with $`b=2.5`$. Just as in the previous case, it is located in the tail of the distribution. From the comparison with the highest peaks that could be produced by errors in the recovery of an originally constant SFH, we can conclude with a significance level of 99.5% that our Galaxy has not had a constant SFH.
The use of Holmberg & Flynn (holmberg (2000)) scale heights in the simulations increases these significance levels to 100% and 99.9%, respectively.
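The significance levels quoted above are simply tail fractions of the simulated distributions. A minimal sketch, in which both the 6000 simulated dispersions and the observed Milky Way value are faked for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
sim_dispersions = rng.normal(0.25, 0.05, 6000)  # placeholder for Fig. 12a
observed_dispersion = 0.40                      # placeholder observed value

# Fraction of constant-SFH simulations at least as "bursty" as observed.
p_value = float(np.mean(sim_dispersions >= observed_dispersion))
significance = 100.0 * (1.0 - p_value)
print(f"P(constant SFH gives this dispersion) = {100 * p_value:.2f}%")
print(f"significance level = {significance:.1f}%")
```

The same tail-fraction calculation, applied to the peak-height distribution of panel b, gives the second significance level.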
These significance levels refer to only one parameter of the SFH, namely the dispersion or the highest peak. For a rigorous estimate of the probability of obtaining a SFH like that presented in Figure 11 from an originally constant SFH, one has to calculate the probability of having neighbouring bins with high star formation followed by bins with low star formation, as a function of age. This can be calculated approximately from Figure 13, where we show box charts with the results of the 6000 simulations. Superimposed on these box charts, we show the SFH, now calculated with Holmberg & Flynn (holmberg (2000))'s scale heights. For the sake of consistency, the simulations shown in the figure also use these scale heights, but we stress that the same quantitative result is found using Scalo's scale heights.
A lot of information can be drawn from this figure. First, it can be seen that a typical constant SFH would not be recovered as an exactly “constant” function by this method. This is shown by the boxes with error bars, which delineate the 2$`\sigma `$ region, analogous to the lines shown in Figure 11. The boxes distribute around unity, but show a bump between 1 and 2 Gyr, where the average relative birthrate increases to 1.4. This is an artifact introduced by the age errors. In each individual simulation, the number of stars scattered away from their real ages increases with age. The recovered SFH suffers a substantial loss of stars at ages greater than 15 Gyr, since these are eliminated from the sample (originally, these stars had ages lower than 15 Gyr, and only after the incorporation of the age errors do they resemble older stars). This lowers the average star formation rate with respect to the original SFH, and the proportional number of young stars increases, because they are less scattered in age by the errors. This gives rise to a distortion in the expected loci of constant SFHs. Note also the increase in the 2$`\sigma `$ region towards older ages, reflecting the growing uncertainty of the chromospheric ages.
The diagram allows a direct estimate of the probability that each feature found in the Milky Way SFH was produced by fluctuations of a constant SFH. The box charts give the distribution of relative birthrates in each age bin. Average probabilities for the major events of our SFH are shown in Figure 13, beside the features of interest. Rigorously speaking, the probability that the whole Milky Way SFH is constant, not bursty, can be estimated by multiplying the probabilities of the individual events in this figure. It is clearly much less than the 2% level we calculated from only one parameter of the SFH. In particular, note that the AB gap has zero probability of being caused by a statistical fluctuation. All of these results show that the Milky Way SFH was by no means constant.
### 4.3 Flattening and Broadening of the Bursts
Since the errors in the chromospheric ages are not negligible, some smearing out must be present in the data. Because of this, a star formation burst found in the recovered SFH must originally have been much more pronounced. This mechanism affects older bursts much more strongly, since the age errors are larger at older ages and the depopulation by evolutionary and scale-height effects is more dramatic. If we find a feature like a burst at, say, 8 Gyr ago, it was probably much stronger originally in order to be preserved in the recovered SFH.
The first aspect we want to show is that the errors produce a significant flattening of the original peaks. To do so, we use simulations of a SFH composed of a single burst over a constant star formation rate. The “burst” is characterized by occurring at age $`\tau `$, having intensity $`c`$ times the value of the constant star formation rate, and lasting 1 Gyr. We want to know the fraction of the burst that is recovered, as a function of age and of burst intensity.
We have performed 50 simulations for each pair $`(\tau ,c)`$, with around 3000 stars in each simulation. A summary of these simulations is shown in Figure 14. In all the panels (for varying $`c`$), the fraction of the recovered burst is high for recent bursts and falls off smoothly until 8-9 Gyr, where it flattens out. This stabilization reflects the predominance of the statistical fluctuations, since the recovered fraction becomes the same regardless of the age of occurrence: the burst becomes more or less indistinguishable from the fluctuations. From this we conclude that it is difficult to find bursts older than 8-9 Gyr, irrespective of their original amplitude.
A second problem in the method is the broadening of the bursts. This depends sensitively on the age at which the burst occurs, and the results are even more dramatic. To illustrate this, another set of simulations was done. We now consider a SFH composed of a single burst of 1000 stars, lasting 0.4 Gyr; no star formation occurs outside the burst. We vary the age of occurrence from 0.3 Gyr to 6 Gyr ago. Just one simulation was done for each age of occurrence, since we are only looking for the magnitude of the broadening introduced by the errors, so the exact shape of the recovered SFH does not matter. The recovered SFHs are shown in Figure 15. Only the younger bursts are reasonably well recovered. The burst at 6 Gyr can still be seen, although many of its stars have been scattered over a large range of ages.
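The broadening effect can be illustrated with a minimal sketch: a 0.4 Gyr burst is scattered by an assumed constant 0.1 dex age error (the real errors grow with age) and re-binned. The parameters are placeholders, not those of the paper's simulations.

```python
import numpy as np

rng = np.random.default_rng(2)

def recovered_burst(burst_age, n_stars=1000, err_dex=0.1, bin_gyr=0.4):
    # True ages: a 0.4 Gyr wide burst centred on burst_age (in Gyr).
    true = rng.uniform(burst_age - 0.2, burst_age + 0.2, n_stars)
    # Multiplicative (dex) age errors, then re-binning as in the method.
    obs = true * 10.0 ** rng.normal(0.0, err_dex, n_stars)
    counts, _ = np.histogram(obs, bins=np.arange(0.0, 15.0 + bin_gyr, bin_gyr))
    return counts

for age in (0.3, 2.0, 6.0):
    counts = recovered_burst(age)
    frac = counts.max() / counts.sum()
    print(f"burst at {age} Gyr: {frac:.0%} of the stars stay in the tallest bin")
```

Because the error is a fraction of the age itself, the same dex scatter spreads an old burst over many more bins than a young one, which is the behaviour seen in Figure 15.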
## 5 Comparison with other constraints
### 5.1 The SFH driving the chemical enrichment of the disk
On theoretical grounds, there should be a correlation between the SFH and the age–metallicity relation (hereafter AMR). An increase in star formation leads to an increase in the rate at which new metals are produced and ejected into the interstellar medium. The correlation is not one-to-one, since the presence of infall and radial flows can also affect the enrichment rate of the system. Moreover, the enrichment rate is constrained by the amount of gas into which the new metals are diluted. Nevertheless, it is interesting to see whether the AMR we found in Paper I is consistent with the SFH derived from the same sample, especially because, to our knowledge, this has never been tried before.
From the basic chemical evolution equations (Tinsley tinsley (1980)), for a closed box model (i.e., no infall), the link between the AMR and the SFH can be written as
$$\frac{\mathrm{d}Z}{\mathrm{d}t}(t)\propto \frac{\psi (t)}{m_g(t)},$$
(11)
where $`Z(t)`$ gives the AMR, expressed by absolute metallicity, $`\psi (t)`$ is the SFH as in equation (1), and $`m_g(t)`$ is the total gas mass of the system, in units of $`M_{}`$ pc<sup>-2</sup>.
According to this equation, bursts in the SFH are echoed by an increase in the metal-enrichment rate. This is particularly true when the metallicity is measured by an element produced mostly in type II supernovae, like O. The gas mass can dilute the enrichment to a greater or lesser degree, changing the proportionality between it and the SFH at each age, but will not destroy the relationship. On the other hand, the intrinsic metallicity dispersion of the interstellar medium can somewhat obscure this proportionality, especially if it is as large as the AMR of Edvardsson et al. (Edv (1993), hereafter Edv93) suggests.
In Figure 16, we compare the metal-enrichment rate (top panel) with the SFH (bottom panel). The enrichment rate increases substantially in the last 2 Gyr, which could suggest a recent burst of star formation. However, the agreement between the two functions seems very poor. There is a peculiar bump in the enrichment rate between 4 and 6 Gyr, which is coeval with a feature in the SFH, but most probably this is mere coincidence.
Although we have used iron as a metallicity indicator, which invalidates Equation (11) due to non-recycling effects, we are not sure whether the situation would be improved by using O. The errors in both the AMR and the SFH are still large enough to render such a comparison extremely uncertain. However, it is a test that could be done with improved data. The more important result for chemical evolution studies is that, provided both functions are known accurately, the empirical AMR and SFH allow an estimate of the variation of the gas mass with time, which could lead to an estimate of the evolution of the infall rate. Future studies should attempt to explore this tool.
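The tool suggested above can be sketched numerically: given an empirical AMR $`Z(t)`$ and SFH $`\psi (t)`$, the closed-box relation of Equation (11) can be inverted for the gas-mass history, $`m_g(t)\psi (t)/(\mathrm{d}Z/\mathrm{d}t)`$ up to the yield constant. Both input curves below are made up for illustration.

```python
import numpy as np

t = np.linspace(0.1, 10.0, 100)            # Gyr
Z = 0.02 * (1.0 - np.exp(-t / 4.0))        # toy AMR (absolute metallicity)
psi = 1.0 + 0.5 * np.sin(t)                # toy SFH (arbitrary units)

dZdt = np.gradient(Z, t)                   # metal-enrichment rate
m_gas = psi / dZdt                         # gas mass, up to a yield constant

# With these toy inputs, the gas mass grows as enrichment slows while
# star formation continues.
print(f"m_gas grows by a factor {m_gas[-1] / m_gas[0]:.0f} over 10 Gyr")
```

With real data, differencing a noisy AMR would amplify the errors, which is exactly why the test above is described as requiring improved data.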
### 5.2 Scale length of the SFH
The stars in our sample are all presently situated within a small volume of about 100 pc radius around the Sun. The star formation history derived from these stars is nevertheless applicable to a quite wide section of the Galactic disk, since the stars which are presently in the Solar neighbourhood have mostly arrived at their present positions from a torus in the disk concentric with the Solar circle.
We have investigated how wide this section of the disk is by integrating the equations of motion for 361 stars of the “kinematic sample” (see Paper I) within a model of the Galactic potential. The potential consists of a thin exponential disk, a spherical bulge and a dark halo, and is described in detail in Flynn et al. (flynn1996 (1996)). For each star we determine the orbit by numerical integration, and measure the peri- and apogalactic distances, $`R_p`$ and $`R_a`$, and the mean Galactocentric radius, $`R_m=(R_p+R_a)/2`$, of the orbit (cf. Edvardsson et al. Edv (1993)).
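The orbit bookkeeping can be sketched as follows. This is a hedged stand-in for the full Flynn et al. (1996) model: a single star is integrated in a flat-rotation-curve (logarithmic) potential with an assumed circular speed of 220 km/s, and $`R_p`$, $`R_a`$ and $`R_m`$ are read off the trajectory.

```python
import numpy as np

V0 = 220.0                # km/s, assumed circular speed
KPC_PER_KMS_GYR = 0.9778  # 1 kpc/(km/s) expressed in Gyr (approximate)

def mean_radius(x, y, vx, vy, t_gyr=2.0, dt_gyr=1e-4):
    """Positions in kpc, velocities in km/s; returns (R_p, R_a, R_m)."""
    n = int(t_gyr / dt_gyr)
    dt = dt_gyr / KPC_PER_KMS_GYR   # time step in kpc/(km/s) units
    radii = np.empty(n)
    for i in range(n):
        r2 = x * x + y * y
        # Logarithmic potential: acceleration -V0^2 * rhat / r.
        ax, ay = -V0 ** 2 * x / r2, -V0 ** 2 * y / r2
        vx += ax * dt
        vy += ay * dt               # semi-implicit (symplectic) Euler step
        x += vx * dt
        y += vy * dt
        radii[i] = np.hypot(x, y)
    rp, ra = radii.min(), radii.max()
    return rp, ra, 0.5 * (rp + ra)

# A star at the solar radius with a 20 km/s outward peculiar radial velocity:
rp, ra, rm = mean_radius(8.0, 0.0, 20.0, 220.0)
print(f"R_p = {rp:.2f} kpc, R_a = {ra:.2f} kpc, R_m = {rm:.2f} kpc")
```

Repeating this for each star of the kinematic sample builds up the $`R_m`$ distribution of Figure 17.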
The distribution of $`R_m`$ is shown in Figure 17. Most of the stellar orbits have mean Galactocentric radii within 2 kpc of the Sun (here taken to be at $`R_{}=8`$ kpc), i.e. $`6<R_m<10`$ kpc. Very few stars in the sample are presently moving along orbits with mean radii beyond these limits.
As discussed by Wielen, Fuchs and Dettbarn (WFD (1996)), due to irregularities in the Galactic potential caused by (for example) giant molecular clouds and spiral arms, the present mean Galactocentric radius of a stellar orbit, $`R_m(t)`$ at time $`t`$, does not bear a simple relationship to the mean Galactocentric radius $`R_m(0)`$ of the orbit on which the star was born. Wielen, Fuchs and Dettbarn describe the process by which stars are scattered by these irregularities as orbital diffusion, and show that over time scales of several Gyr one cannot reconstruct from $`R_m`$ the radius at which any particular star was born to better than a few kpc. This is of the same order as the width of the distribution of $`R_m`$ seen in Figure 17. We therefore conclude that our stars fairly represent the star formation history within a few kpc of the present Solar radius, $`6<R_m<10`$ kpc, or the “middle distance” regions of the Galactic disc. The SFH of the inner disk/bulge and of the outer disk is not sampled.
However, Binney & Sellwood (binney (2000)) have criticized this conclusion. They show that during the lifetime of a star, the guiding centre of its orbit generally changes by no more than 5%. In this scenario, the value of $`R_m`$ that we have calculated is close to the Galactocentric radius of the star's birthplace, and our star formation history would still be representative of a considerable fraction of the Galactic disk, $`7<R_m<9`$ kpc.
Another important conclusion of kinematic studies is that the older a feature in the SFH is, the more damped it is recovered from the data, relative to its original amplitude (see, for example, Meusinger 1991b ), since the stars formed in the burst will have been scattered through a larger region. Hence, the younger bursts in our SFH are the most local features. This does not mean that they are most probably “local irregularities”: on time scales of 1-2 Gyr, the diffusion of stellar orbits homogenizes any irregularities in the azimuthal direction, so that the bursts would apply to the whole solar Galactocentric annulus.
### 5.3 The Galaxy and the Magellanic Clouds
When evidence for an intermittent SFH in the Galaxy was first discovered, Scalo (scalo87 (1987)) proposed that it could have originated from interactions between the Galaxy and the Magellanic Clouds. Indeed, the Magellanic Clouds have probably experienced several episodes of strong star formation. Butcher (acougueiro (1977)) first proposed, from the analysis of the luminosity function of field stars, that the bulk of star formation in the Large Magellanic Cloud (LMC) occurred 3-5 Gyr ago. Stryker et al. (stryker1 (1981)) and Stryker (stryker2 (1983)) subsequently confirmed this result. In the last few years, additional studies have arrived at much the same conclusions (Bertelli et al. bertelli (1992); Vallenari et al. 1996a ,b). Westerlund (wester (1990)) also remarked that star formation in the LMC seems to have been very weak from 0.7 to 2 Gyr ago. A very recent burst of star formation (around 150 Myr ago) was also found by the MACHO team (Alcock et al. todopau (1999)) from the period distribution of 1800 LMC cepheids. Their analysis presents compelling arguments favouring this hypothesis, as well as for the propagation of star formation to neighbouring regions.
However, these results have more recently been questioned on the basis of colour–magnitude diagram synthesis. Some authors argue that important information on the SFH is provided by the part of the colour–magnitude diagram below the turnoff mass, which could only be resolved with the most recent observations (Holtzman et al. vera\_holtz (1997), holtz99 (1999), and references therein; Olsen xxx (1999)). These papers conclude that star formation in the LMC has been a continuous process over much of its lifetime.
Note that continuity in the SFH does not mean constancy. Holtzman et al. (holtz99 (1999)) point out that their method cannot accurately constrain the burstiness of the SFH in the LMC on small time scales, particularly for ages greater than 4 Gyr. Nevertheless, they show evidence for an increase in the star formation rate in the last 2.5 Gyr. Dolphin (dolphin (2000)) arrives at the same conclusion from two different fields of the LMC, separated by around 2 kpc. The author recognizes that some large environmental alteration must have triggered an era of star formation in our neighbouring galaxy.
In spite of the controversy, it is impossible not to notice that some results on the SFH of the LMC are in apparent synchronism with events in the SFH of the Milky Way disk. This should not really be surprising. The Magellanic Clouds are satellites of our Galaxy, and past interactions between them were the rule, not the exception. Byrd & Howard (byrd (1992)) showed that a companion satellite whose mass is larger than 1% of that of the primary galaxy can excite large-scale tidal arms in the disk of the primary, and we know that spiral arms do induce, or at least organize, star formation. This number is to be compared with the mass ratio between the Clouds and our Galaxy, which is 0.20 (Byrd et al. byrdetal (1994)). Besides direct tidal effects, the Clouds can produce a dynamical wake in the halo that distorts the disk (Weinberg weinberg (1999)). It is quite possible that such an effect could also enhance the star formation in the disk (M. Weinberg, private communication).
Additional evidence comes from dynamical studies of the Magellanic Clouds. Several groups have worked on the derivation of their orbits around the Galaxy. The full orbit of the Magellanic Clouds is still unknown, but there is some agreement among the published works. Most importantly, all of them conclude that the most recent close encounter between the Clouds and the Milky Way occurred 0.2-0.5 Gyr ago, and that it was the closest encounter in the entire history of the system (however, Holtzman et al. holtz99 (1999) mention an unpublished work by Zhao in which the last perigalacticon occurred 2.5 Gyr ago). Murai & Fujimoto (murai (1980)) calculated that other close encounters occurred 1.5, 2.6 and 7.5 Gyr ago. Gardiner et al. (gardiner (1994)) revisited Murai & Fujimoto (murai (1980))'s model and recalculated the epochs of the close encounters as around 1.6, 3.4, 5.5, 7.6 and 10 Gyr ago. However, Lin et al. (lin (1995)) found different values: 2.6, 5.3, 8.4 and 11.8 Gyr ago.
From these results we can tentatively assume that, in the last 12 Gyr, the Clouds have had at most six close encounters with the Milky Way occurring more or less at 0.2-0.5, 1.4-1.5, 2.6-3.4, 5.3-5.5, 7.5-8.4 and 10-11.8 Gyr ago. Some of these encounters are not predicted by all the authors, while some are in good agreement. For the sake of simplicity, we will refer to these encounters as I, II, III, IV, V and VI, respectively.
There are similarities between the time of close encounters and the events of our derived SFH. In Figure 18 we show the epoch of these encounters superimposed over our SFH. We can associate burst A with encounter I, peak B1 with encounter III, and peak C1 with encounter V. It is not unlikely that peak B2 could also be associated with encounter IV. On the other hand, encounter VI probably cannot be responsible for any of the features found beyond 9 Gyr, since it occurs in an age range where the SFH is highly uncertain and subject to random fluctuations.
A significant exception to the rule is encounter II, which is thought to have happened in the middle of the AB gap. It seems strange that a close encounter between interacting galaxies could suppress star formation; some other mechanism must be responsible for the gap. On the other hand, Lin et al. (lin (1995)) did not find such an encounter. In fact, these authors predict that at that time the Clouds were located at their apogalacticon, more than 100 kpc away.
Although the comparison is very premature, we conclude that the data on the age distribution and orbits of the Magellanic Clouds show some agreement with the Milky Way SFH. Have the bursts of star formation in the Milky Way been produced by interactions with its satellite galaxies? The comparison above certainly points to this possibility, which deserves further investigation to be properly answered, since there is still much uncertainty in the Magellanic Cloud close encounters, as well as in the chronology of the chromospheric ages.
## 6 The features of the Milky Way SFH
We now can return to the discussion of the meaning of each feature found in the SFH derived in section 3.
### 6.1 Burst A
The most recent star formation burst is also the most likely to have occurred, since it took place in the very recent past and is thus least affected by the age errors. A recent enhancement in the SFH is also present in nearly all previous investigations of the SFH (Scalo scalo87 (1987); Barry barry (1988); Gómez et al. gomez (1990); Noh & Scalo noh (1990); Soderblom et al. soder91 (1991); Micela et al. micela (1993); Rocha-Pinto & Maciel RPM97 (1997); Chereul et al. chereul (1998)), and is consistent with the distribution of spectral types in class V stars (Vereshchagin & Chupina veresh (1993)). It is not present in the isochrone age distributions (Twarog twar (1980); Meusinger 1991a ), most probably due to the difficulty of measuring ages for stars near the zero-age main sequence, where we expect to find the components of this burst in the HR diagram.
We can conclude with confidence that it is a real feature of the SFH. However, being the youngest, it is also the most local feature, because the youngest stars have had no time to diffuse far from their birthsites. Thus, we cannot be sure (from our data alone) whether this feature applies to the Milky Way as a whole.
On the other hand, the Large Magellanic Cloud also appears to have experienced a recent burst of star formation (Westerlund wester (1990); Alcock et al. todopau (1999)), which is very well represented by its young population of open clusters, cepheids, OB associations and red supergiants. At the time of this burst, the two galaxies were closer than at any other time in their history (Lin et al. lin (1995)). This suggests that burst A could have been caused by tidal interactions between our Galaxy and the LMC.
### 6.2 AB gap
A substantial depression in the star formation rate 1-2 Gyr ago was found by many studies, beginning with Barry (barry (1988); see also the SFH derived from the massive white dwarf luminosity function derived by Isern et al. isern (1999)). This gap appears, although not directly, in the chromospheric age distribution (the so-called Vaughan-Preston gap) and in the spectral type distribution, between A and F dwarfs (Vereshchagin & Chupina veresh (1993)). A quiescence between 1 and 2 Gyr is also visible in Chereul et al. (chereul (1998)), in their study of the kinematical properties of A and F stars in the solar neighbourhood.
This feature has been present at every step of our work, from the initial age distribution in Figure 1 to the SFH. Note that the volume corrections have deepened this lull but have not changed its duration.
The AB gap is likely to have lasted for about a billion years. Previous studies have assigned it a longer duration, but we believe that this was caused by the use of a highly incomplete sample, together with a chromospheric age calibration that does not account for the varying chemical composition of the stars. Since it is a relatively recent feature, it only samples birthsites over a radial length scale of 1-2 kpc.
### 6.3 Burst B
The small lull between peaks B1 and B2 is not present in the initial age distribution (Figure 1), appearing only after the volume corrections. It is very narrow, and could well be caused by chance small weights of the stars in these age bins during the volume correction. This is why we presently have no means to tell whether burst B is a single burst or an unresolved double burst. At its age of occurrence, considerable broadening of the original features is expected. Either way, our simulations give strong support to this feature.
Previous studies have found star formation enhancements around 4 Gyr ago (Scalo scalo87 (1987); Barry barry (1988); Marsakov et al. marsakov (1990); Noh & Scalo noh (1990); Soderblom et al. soder91 (1991); Twarog twar (1980); Meusinger 1991a ). Note that a strong concentration of stars around this age can also be found in the age distribution of Edv93โs stars, that we show in Figure 19.
A significant exception is the SFH found by some of us (Rocha-Pinto & Maciel RPM97 (1997)), which suggests that burst B was much smaller than the preceding burst C. To find the SFH, Rocha-Pinto & Maciel used a method that extracts information from the G dwarf metallicity distribution (Rocha-Pinto & Maciel RPM96 (1996)) aided by the AMR (see also Prantzos & Silk prantzos (1998)). The authors used several AMRs from the literature, and a different SFH was found for each AMR. The SFHs recovered with the AMRs of Twarog (twar (1980)) and Meusinger et al. (meu (1991)) were preferred to that found with the Edvardsson et al. (Edv (1993)) AMR. To be consistent with our present result, we need to compare the present SFH with the one coming from Rocha-Pinto & Maciel's method for an AMR similar to that found from our sample (Paper I). Our AMR now looks very similar to the mean points of Edv93's AMR. Using Edv93's AMR, Rocha-Pinto & Maciel (RPM97 (1997)) found that burst B could have about the same intensity as burst C, and also a narrow AB gap lasting 1 Gyr at most. Figure 20 shows a comparison between their SFH (for Edv93's AMR) and the present history binned in 1 Gyr intervals.
### 6.4 BC gap and Burst C
The existence of the BC gap is directly linked to how much credit we give to burst C. From Figure 15, one could say that no burst can be found around 8-9 Gyr, and that all supposed features there are artificial patterns created by statistical fluctuations. To reinforce this theoretical expectation, we have done a simulation showing how the features above could be formed by a bursty SFH. We considered a SFH composed of three bursts: one occurring at 0.3 Gyr and lasting 0.2 Gyr, another at 4 Gyr, also lasting 0.2 Gyr, and the last occurring at 9 Gyr and lasting 0.5 Gyr. The first and last bursts are composed of 300 stars each, while the second burst is three times more intense. Star formation at other times is assumed to be highly inefficient, forming only 60 more stars over the whole lifetime of the galaxy. The recovered SFH is shown in Figure 21. Although the two more recent bursts are well recovered, there is no sign of burst C at 9 Gyr. We have tried other combinations of amplitude and time of occurrence, but in all cases the stars of burst C were scattered far from their original age.
If on theoretical grounds there are no convincing arguments to accept the existence of burst C, the same does not hold on observational grounds. This puzzling situation comes from the fact that burst C has appeared in a number of studies that have used not only different samples, but also different methods (Barry barry (1988); Noh & Scalo noh (1990); Soderblom et al. soder91 (1991); Twarog twar (1980); Meusinger 1991a ; Rocha-Pinto & Maciel RPM97 (1997)). And it appears double-peaked in some of them, as we saw in section 3.
The magnitude of the age errors prevents us from assigning a good statistical confidence to this particular feature.
However, it is not implausible that we have overestimated the age errors. A decrease of 0.05 dex in the age errors could alleviate the situation and allow the identification of peaks (although highly broadened) younger than 10 Gyr, which would suggest that burst C is a real feature. A better estimate of the age errors would not create new bursts, or flatten the recovered SFH in these age bins, but would give confidence limits for the ages where the features found are likely to be real and not just artifacts.
### 6.5 Burst D
The so-called burst D was proposed by Majewski (majewski (1993)), as a star formation event that would be responsible for the first stars of the disk, before the formation of the thin disk.
A superficial look at Figure 8 could give the impression that the peaks beyond 11 Gyr are remnants of this predicted burst. However, as we have shown above, it is presently impossible to recover the SFH correctly in this age range, even if our age errors are overestimated by as much as 0.05 dex. The SFH at older ages is dominated by fluctuations, superimposed on the strongly broadened original structures, in such a way that it is impossible to disentangle statistical fluctuations from real star formation events.
Theoretically, patterns as old as 13 Gyr could be found in the SFH, provided they did not occur very close to younger ones, if the age errors were decreased by 0.10 dex. This is hardly attainable at present, since it would require errors of the order of magnitude of the error in the $`\mathrm{log}R_{\mathrm{HK}}^{\prime}`$ index itself.
For these reasons, we give no credit to the peaks beyond 11 Gyr in Figure 8. If burst D has ever occurred, probably the present chromospheric age distribution is not an efficient tool to find its traces.
## 7 The shape of the chromospheric activity–age relation
Soderblom et al. (soder91 (1991)) argued that the interpretation of the chromospheric activity distribution as evidence for a non-constant SFH is premature. In particular, the authors have shown that the observations do not rule out a non-monotonic chromospheric activity–age relation, even though the simplest fit to the data is a power law, like the one we used.
At present, there is good indication that the chromospheric activity of a star is linked to its rotation, and that the rotation rate decreases slowly with time. However, it is unknown exactly how chromospheric activity arises and how it develops during the stellar lifetime. The data show that there is a chromospheric activity–age relation, but the scatter is such that it is not presently possible to know whether the chromospheric activity decreases steadily with time, or whether there are plateaux around some “preferred” activity levels. There is a possibility that the clumps we see in the chromospheric age distribution (which we further identify as bursts) are artifacts produced by assuming a monotonic chromospheric activity–age relation.
To preserve the constancy of the SFH, Soderblom et al. (soder91 (1991)) proposed an alternative chromospheric activity–age relation that is highly non-monotonic. We have checked this constant-sfr calibration with our sample, but the result is not a constant SFR. This is expected, since there are many differences between the chromospheric samples used by Soderblom (soder (1985)) and Soderblom et al. (soder91 (1991)) and the one we have used (see Figure 11 of Paper I). We have therefore calculated a new constant-sfr calibration, in the way outlined by Soderblom et al. (soder91 (1991)), using 328 stars from our sample (only the stars with solar metallicity, to avoid the metallicity dependence of $`\mathrm{log}R_{\mathrm{HK}}^{\prime}`$), with weights given by the volume correction (to account for the incompleteness of the sample) and using the scale-height correction factors to take the disk heating into account.
Figure 22 compares the chromospheric activity–age relation we have used (solid line) with the constant-sfr calibration proposed by Soderblom et al. (dotted line) and the constant-sfr calibration from our sample (dashed line). The data and symbols are the same as in Soderblom et al. (soder91 (1991)). The two constant-sfr calibrations agree reasonably well for the active stars, but deviate somewhat for the inactive stars. This is because, to be consistent with a constant SFR, the calibration must account for the increase in the relative proportion of inactive to active stars, especially around $`\mathrm{log}R_{\mathrm{HK}}^{\prime}=-4.90`$, after the survey of HSDB. Note that our constant-sfr chromospheric activity–age relation is still barely consistent with the data and cannot be ruled out. There are few data for stars older than the Sun in the plot, and it is not possible to know whether the plateau at $`\mathrm{log}R_{\mathrm{HK}}^{\prime}<-5.0`$ in this calibration is real or not.
We acknowledge that, given no other information, it is a subjective matter whether to prefer a complex star formation history or a complex activity-age relation. Nevertheless, there are numerous independent lines of evidence that also point to a bursty star formation history; the most recent and convincing is the paper by Hernandez, Valls-Gabaud & Gilmore (valls (2000)). They use a totally different technique (colourโmagnitude diagram inversion) and find clear signs of irregularity in the star formation. In section 6, we listed several other works that indicate a non-constant star formation history, and the majority of them use different assumptions and samples. Strongly discontinuous star formation histories are also found for some galaxies in the Local Group (see OโConnell oconnell (1997)), in spite of the initial expectations during the early studies of galactic evolution that these galaxies should have had smooth star formation histories.
For all these methods to give the same sort of result, all the different kinds of calibrations would have to contain complex structure. It is simpler to infer that it is the star formation history itself that has the complex structure. We think that when several independent methods all yield a similarly bursty star formation history (although with different age calibrations, so they do not match exactly), our conclusion is favoured over the solution of an irregular activity–age relation with a constant star formation rate.
## 8 Conclusions
A sample composed of 552 stars with chromospheric ages and photometric metallicities was used in the derivation of the star formation history in the solar neighbourhood. Our main conclusions can be summarized as follows:
1. Evidence for at least three epochs of enhanced star formation in the Galaxy was found, at 0-1, 2-5 and 7-9 Gyr ago. These "bursts" are similar to those previously found by a number of other studies.
2. We have tested the correlation between the SFH and the metal-enrichment rate, given by our AMR derived in Paper I. We have found no correlation between these parameters, although the use of Fe as a metallicity indicator, and the magnitude of the errors in both functions can still hinder the test.
3. We examined in some detail the possibility that the Galactic bursts are coeval with features in the star formation history of the Magellanic Clouds and close encounters between them and our Galaxy. While the comparison is still uncertain, it points to interesting coincidences that merit further investigation.
4. A number of simulations were performed to measure the probability that the features found are consistent with a constant SFH, in the face of the age errors that smear out the original features. This probability is shown to decrease for the younger features (being nearly 0% for the quiescence in the SFH between 1-2 Gyr), so that we cannot make a strong assertion about the burst at 7-9 Gyr. On the other hand, the simulations allow us to conclude, with more than 98% confidence, that the SFH of our Galaxy was not constant.
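The spirit of the simulations in item 4 can be sketched as follows: draw ages from a constant SFH, smear them with age errors, and ask how often the smeared histogram shows structure comparable to the observed one. Everything here except the sample size of 552 stars is an illustrative assumption (the error model, age range and binning are ours, not the paper's actual procedure):

```python
import random

def smeared_constant_sfh(n_stars=552, age_max=16.0, rel_err=0.3,
                         nbins=8, seed=1):
    """One Monte Carlo realization: ages drawn from a constant SFH,
    perturbed by Gaussian errors proportional to age, clipped to the
    allowed range, and binned into an SFH histogram."""
    rng = random.Random(seed)
    width = age_max / nbins
    hist = [0] * nbins
    for _ in range(n_stars):
        age = rng.uniform(0.0, age_max)
        obs = min(max(rng.gauss(age, rel_err * age), 0.0), age_max)
        hist[min(int(obs / width), nbins - 1)] += 1
    return hist

# Comparing many such realizations with the observed SFH yields the
# quoted consistency probabilities.
print(sum(smeared_constant_sfh()))  # -> 552
```

Repeating this for many random seeds and counting how often spurious "bursts" appear gives the consistency probability quoted in the text.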
There is plenty of room for improvement in the use of chromospheric ages to find evolutionary constraints. For instance, a reconsideration of the age calibration and a better estimate of the metallicity corrections could diminish substantially the age errors, which would not only improve the age determination but also give more confidence in the older features in the recovered SFH.
###### Acknowledgements.
We thank Johan Holmberg for kindly making his data on the scale heights available to us before publication, and Eric Bell for a critical reading of the manuscript and for important suggestions regarding the presentation of the paper. The referee, Dr. David Soderblom, raised several points which contributed to improving the paper. We have made extensive use of the SIMBAD database, operated at CDS, Strasbourg, France. This work was supported by FAPESP and CNPq grants to WJM and HJR-P, NASA Grant NAG 5-3107 to JMS, and the Finnish Academy to CF.
# Optical surface photometry of radio galaxies - II. Observations and data analysis
## 1 Introduction
Only a small fraction of elliptical galaxies emit at radio wavelengths. This is probably due to the combination of two factors: 1) the lifetime of the radio source is much less than the typical lifetime of the host galaxy; 2) not all elliptical galaxies may harbor the conditions for radio activity. Therefore, a detailed analysis and comparison of the properties of radio and non-radio galaxies are of great relevance for understanding the phenomenon. Previous optical studies (Hine & Longair 1979; Longair & Seldner 1979; Lilly & Prestage 1987; Prestage & Peacock 1988; Owen & Laing 1989; Smith & Heckman 1989a, b; Owen & White 1991; Gonzalez-Serrano et al. 1993; de Juan et al. 1994; Colina & de Juan 1995; Ledlow & Owen 1995) have investigated the role of galaxy-galaxy interaction in the creation and fueling of nuclear radio sources, as well as the connection between overall galaxy properties (e.g. luminosity and scale length) and radio morphology.
In order to contribute to the study of the photometrical and morphological properties of the galaxies hosting radio sources, we have undertaken a systematic study of low redshift radio galaxies in the optical band.
In a previous paper (Fasano et al. 1996; hereafter Paper I) we presented structural (position angle, ellipticity, Fourier coefficients) and photometric profiles together with isophotal contours in the B and R bands for 29 galaxies extracted from a complete sample of 95 radio galaxies in the redshift range $`0.01z0.12`$.
Here, we give the results for 50 more galaxies observed in the Cousins R band, bringing the total number of sources for which we were able to secure data of the required quality to 79, i.e. 83% of the original sample. A full discussion of the astrophysical implications of these observations is given in Govoni et al. (1999), where the overall properties of galaxies hosting radio emission are compared with those of radio-quiet ellipticals.
## 2 Observations and data analysis
### 2.1 The sample
The sample is composed of radio galaxies in the redshift range $`0.01z0.12`$ extracted from two complete surveys of radio sources.
The first one is the all-sky survey of radio sources with radio flux at 2.7 GHz greater than 2 Jy by Wall & Peacock (1985; hereafter WP). From this survey we extracted all objects, in the above redshift range, classified as radio galaxies (see also Tadhunter et al. 1993) at declination $`\delta <10^{\circ }`$.
The second list is the Ekers et al. (1989; hereafter EK) catalogue of radio galaxies with flux at 2.7 GHz greater than 0.25 Jy and $`m_b<17.0`$, in the declination zone $`-40^{\circ }<\delta <-17^{\circ }`$. All objects classified as E or S0 in the above redshift range were included in our list. Basic data for the 50 objects presented in this paper are summarized in Table 1. Columns 1 and 2 give for each object the IAU and other names, column 3 gives the subsample, columns 4 and 5 the equatorial coordinates (equinox 2000.0), and column 6 the redshift. Columns 7 and 8 report the K-correction in the R band and the galactic extinction in the V band. We derived the galactic extinction by interpolating the data for the galactic hydrogen column density given by Stark et al. (1992) and assuming $`A_V/E_{B-V}=R=3.2`$ and $`E_{B-V}=N_H/5.1\times 10^{21}`$ (Knapp & Kerr 1974). In columns 9 and 10 we give the radio classification and reference.
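The extinction recipe above amounts to a one-line conversion from HI column density to $`A_V`$; a minimal sketch (the function name and the example column density are ours):

```python
def extinction_Av(n_h, r=3.2, n_h_per_ebv=5.1e21):
    """Galactic V-band extinction from the HI column density N_H (cm^-2),
    using E(B-V) = N_H / 5.1e21 (Knapp & Kerr 1974) and A_V = R * E(B-V)
    with R = 3.2."""
    ebv = n_h / n_h_per_ebv
    return r * ebv

# A column density of 5.1e21 cm^-2 corresponds to E(B-V) = 1.0, i.e. A_V = 3.2
print(extinction_Av(5.1e21))  # -> 3.2
```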
Based on the radio morphology, sources were divided into the FRI and FRII radio classes following the Fanaroff and Riley scheme (Fanaroff & Riley 1974). Most sources in the WP sample were imaged by Morganti et al. (1993) with the Very Large Array (VLA) and the Australia Telescope Compact Array (ATCA), while VLA radio images are available for most objects in the EK sample. Objects with transitional properties or with unclear classification are marked as I/II, while a U marks the only unresolved source, $`1323-271`$. For IC 4374 (labeled with a question mark) we were not able to find a radio image; we therefore include it in the FRI class because its radio luminosity at 178 MHz is below $`2\times 10^{25}`$ W/Hz, the dividing luminosity between the two classes.
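The luminosity fallback used for IC 4374 can be written as a simple threshold test (a sketch; the function name is ours, and in practice morphology-based classification takes precedence whenever a radio image exists):

```python
def fr_class_from_power(l_178):
    """Tentative Fanaroff-Riley class from the 178 MHz radio luminosity
    (W/Hz), using the dividing luminosity of 2e25 W/Hz between the
    two classes."""
    return "FRI" if l_178 < 2e25 else "FRII"

print(fr_class_from_power(1e25))  # -> FRI
print(fr_class_from_power(8e25))  # -> FRII
```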
### 2.2 Observations
Observations were secured in five different observing runs. Besides three galaxies ($`0307-305`$, $`0332-391`$ and $`1928-340`$) included here, the results of the imaging in the B and R bands obtained with the ESO-2.2 m telescope during runs 1 and 2 (Table 2) have been reported in Paper I. The data presented here were obtained in three further observing runs (runs 3, 4 and 5) with either the ESO-Danish 1.5 m telescope or the Nordic Optical Telescope (NOT). The journal of observations is given in Table 3, where for each object we report the run of observation, the total integration time, the atmospheric seeing expressed as the full width at half maximum (FWHM) of stellar images, and the sky surface brightness, together with its estimated 1$`\sigma `$ uncertainty. The galaxy total apparent R-band magnitude (corrected for galactic extinction), computed by extrapolating the surface brightness profile to infinity, is also given. This value does not include the K-correction.
For most objects a short ($`\sim `$ 2 min.) and a long exposure (Table 3) were obtained, so we also have an unsaturated image of the nuclear region. In a few cases, the presence of bright stars in the field forced us to take several short exposures, subsequently combined to form a final, deep image. Photometric conditions were generally good during the observations, as confirmed by repeated observations of photometric standard stars selected from the Landolt (1992) list. Comparison of the photometric zero points for different nights indicates an average internal photometric accuracy of 5–10%. This, combined with the small uncertainty in the sky surface brightness (1–2%), gives a global internal photometric accuracy of the order of 10%. The atmospheric seeing was generally around $`1`$ arcsec, and the CCD pixel size (Table 2) was always sufficiently small to ensure proper sampling of the telescope point spread function (PSF).
### 2.3 Data reduction and surface photometry
Data reduction is extensively described in Paper I. Here we simply remind the reader that the IRAF-ccdred package was used for the basic reduction (bias subtraction, image trimming, flat fielding, cosmic rays, etc.). The dark current turned out to be insignificant and was neglected. After flat fielding, images were characterized by a quite regular sky background, well fitted by a first order polynomial.
Final images are shown in Fig. 1, where it is seen that the selected sources cover areas of hundreds of square arcseconds, ideal for two-dimensional isophotal analysis, and can be traced down to a surface brightness of $`\mu _R\simeq 25`$ mag/arcsec<sup>2</sup>. It is also evident that these radio galaxies are often observed in highly crowded regions, with stars and/or nearby galaxies projected on top. Sky subtraction and isophotal analysis (drawing, cleaning and fitting) were performed using the AIAP package (Fasano 1990), which, due to its high degree of interactivity, is particularly suitable for analyzing the morphology of galaxies embedded in such high-density regions.
The problem of obtaining reliable surface photometry of dumbbell systems was faced by adopting the two-galaxy fitting strategy outlined in Paper I, which allows us to fully separate the two galaxies. Contour plots for all dumbbell systems, together with those of the two members are shown in Fig. 2.
From this analysis we derived photometric and structural parameters (surface brightness, ellipticity, position angle and Fourier coefficients) as a function of the equivalent radius $`r=a\times (1-\epsilon )^{1/2}`$, where $`a`$ is the semi-major axis and $`\epsilon `$ is the ellipticity of the ellipse fitting a given isophote. Isophotes cannot be fitted in the innermost few arcsec of the galaxy because of the small number of pixels involved. To cope with this limitation, we extracted an azimuthally averaged radial profile, centered on the center of the first useful isophote. If the nucleus was saturated, the short exposure was used. The agreement between this average radial profile and that obtained from isophote fitting was always excellent in the common region, thus the two profiles were joined smoothly to fully model the $`core`$ and the outer region of the galaxies. In the following analysis we consider this combined profile as the final luminosity profile.
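The equivalent radius defined above is the radius of the circle with the same area as the fitted ellipse; a minimal sketch (the function name is ours):

```python
import math

def equivalent_radius(a, eps):
    """Equivalent radius r = a * (1 - eps)**0.5 of an isophote with
    semi-major axis a and ellipticity eps = 1 - b/a; this equals the
    radius of the circle whose area matches the fitted ellipse."""
    return a * math.sqrt(1.0 - eps)

# A circular isophote (eps = 0) has r = a; for eps = 0.19, r = 0.9 a
print(equivalent_radius(10.0, 0.19))  # -> 9.0
```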
After fitting the isophotes with ellipses, photometric and morphological profiles were obtained according to the procedure described in Paper I. For each galaxy, the luminosity radial profile, the major-axis position angle (defined from North to East), the ellipticity, and the $`c4`$ coefficient profiles as a function of the semi-major axis are shown in Fig. 3. The Fourier coefficient $`c4`$ measures the deviation of the isophotes from the best-fitting ellipse. A positive value indicates the isophote is elongated along the major axis, i.e., it is disky, while a negative $`c4`$ means the isophote is boxy.
The residual background variations inside each frame were used to derive, according to Fasano & Bonoli (1990), proper errors for morphological and photometric parameters.
For the dumbbell system in Fig. 3 we show only the luminosity profile of the radio source.
## 3 Comparison with previous results
Several objects presented here were previously studied by Lilly & Prestage (1987, LP87) and Smith & Heckman (1989a,b, SH89). There are 13 objects in common between our sample and LP87 ($`0255+058`$, $`0325+023`$, $`0427-539`$, $`0430+052`$, $`0453-206`$, $`0620-526`$, $`0625-354`$, $`0625-536`$, $`0915-118`$, $`0945+076`$, $`1318-434`$, $`1333-337`$, and $`2221-023`$), while eight are in common with SH89 ($`0255+058`$, $`0325+023`$, $`0430+052`$, $`0945+076`$, $`1251-122`$, $`1717-009`$, $`1949+023`$, and $`2221-023`$).
LP87 give Cousins metric R magnitudes for a fixed aperture of 19.2 kpc (for H<sub>0</sub>=50 km sec<sup>-1</sup> Mpc<sup>-1</sup>), whereas SH89 report V and B bands isophotal ($`m_{25}`$) magnitudes. In order to perform an external check on our photometry, we derived metric $`R`$ magnitudes at 19.2 kpc and isophotal magnitudes $`V_{25}`$ (assuming $`VR=0.6`$ as appropriate for low redshift elliptical galaxies) for the common objects (Fig. 4), finding on average:
$`<\mathrm{\Delta }R_{19.2}>_{LP87}=0.12`$ mag $`(r.m.s=0.29)`$
$`<\mathrm{\Delta }V_{25}>_{SH89}=0.02`$ mag $`(r.m.s=0.36)`$.
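The offsets and scatters quoted above are simple statistics of the per-object magnitude differences; a sketch of the computation (the numbers in the example are illustrative, not the actual measurements):

```python
import math

def mean_and_rms(deltas):
    """Mean offset and r.m.s. scatter about the mean for a list of
    magnitude differences (ours minus literature)."""
    n = len(deltas)
    mean = sum(deltas) / n
    rms = math.sqrt(sum((d - mean) ** 2 for d in deltas) / n)
    return mean, rms

# Illustrative differences for four objects
print(mean_and_rms([0.1, -0.1, 0.1, -0.1]))  # -> (0.0, 0.1)
```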
It is worth noticing that $`0255+058`$ and $`1251-122`$ are dumbbell galaxies, while a bright, edge-on spiral galaxy projects on top of $`1318-434`$. Measuring the luminosity of these galaxies is therefore particularly difficult and dependent on the details of the adopted measuring strategy. Not surprisingly, $`0255+058`$ is the object with the largest discrepancy with respect to SH89.
If we remove this object, the scatter becomes 0.27 and 0.19 for the comparison with LP87 and SH89, respectively. As a whole, our photometry agrees on average with previous photometry within $`0.1`$ magnitudes.
## 4 Conclusions
We have presented a surface photometry analysis of 50 radio galaxies, which complements our previous study of 29 objects from a complete sample of 95 low-redshift radio galaxies. Detailed morphological and photometric properties of the galaxies are reported. As previously found by other studies (e.g., SH89; Ledlow & Owen 1995), we confirm that the galaxies associated with this kind of radio emission have mostly elliptical morphology.
In some cases, however, the galaxy hosting the radio source is found to have a substantial disc component (S0โlike). Moreover the surface brightness radial profiles of radio galaxies often exhibit deviations from de Vaucouleurs $`r^{1/4}`$ profile, due to the presence of nuclear point sources (likely associated with the active nucleus) and/or to low surface brightness extended halos.
A detailed description of each object is given in the appendix, while a full discussion of the results for the whole observed sample of 79 radio galaxies is presented in Govoni et al. (1999).
## Acknowledgments
This work was partly supported by the Italian Ministry for University and Research (MURST) under grant Cofin98-02-32, and has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
## Appendix A Results for individual objects
$`\mathrm{0255}+\mathrm{058}:`$ The optical counterpart of 3C 75, the central radio source of the cluster Abell 400, is an interesting case of a dumbbell galaxy, with components separated by $`20^{\prime \prime }`$ and with a radial velocity difference of $`\sim 500`$ km/s. Twin radio jets depart from each of the two optical nuclei (Owen et al. 1985), making this radio source extremely unusual. The large-scale radio structure is classified as FRI (Morganti et al. 1993).
Unfortunately, a bright satellite crossed the field, passing close to the central part of the dumbbell system (see Fig. 1). Nevertheless, using the AIAP masking facility, we were able to obtain a photometric deblending, by iterative modeling of the two components. In spite of the appearance, the geometry of the two galaxies looks rather regular, apart from a slight off-centering of the outermost isophotes. Luminosity and geometrical profiles suggest both components are ellipticals. In particular, the luminosity profile (see Fig. 3) of the northern galaxy is consistent with the presence of a nuclear point source.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The optical counterpart of this radio source is a rather isolated galaxy at $`z0.066`$ (Scarpa et al. 1996). The geometrical profiles of this object are highly suggestive of an S0 morphology (increasing ellipticity, positive $`c4`$ coefficient).
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ This object seems to be a rather isolated galaxy. Its luminosity and geometrical profiles suggest a regular elliptical morphology, with a possible nuclear point source.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The source looks like a large elliptical with regular morphology. The luminosity profile strongly suggests the presence of a bright nuclear point source.
$`\mathrm{0325}+\mathrm{023}:`$ The radio morphology of 3C 88 is dominated by two symmetric and well developed lobes at PA$`=60^{\circ }`$, and is classified as FRII (Morganti et al. 1993). The optical counterpart, which coincides with the radio nucleus, is an elliptical galaxy whose major axis has position angle PA$`=-30^{\circ }`$. This is remarkably well defined and stable (Fig. 3) and, within the uncertainty, orthogonal to the radio structure.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ A radio map of this FRI radio source is reported by Jones & McAdam (1992). Fig. 1 shows that the host galaxy is embedded in a quite dense environment. In particular, it seems to interact with another (very similar) galaxy, whose angular separation from the radio galaxy is $`76^{\prime \prime }`$. The luminosity profile is consistent with the presence of a nuclear point source.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ As for the previous case, the galaxy hosting this radio source turns out to be embedded in a rich environment and seems to interact with a close elliptical companion. The luminosity profile is consistent with the presence of a nuclear point source, whereas the geometrical profiles suggest regular elliptical morphology. This source is also interesting for having strong optical emission lines (Scarpa et al. 1996), and complex radio morphology (Jones & McAdam 1992).
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The optical counterpart of this radio source is a spectacular case of dumbbell morphology in rich environment, with nuclei separated by $`30^{\prime \prime }`$ ($``$ 33 kpc). The radio source is associated with the South-East component of the dumbbell system. After deblending the two galaxies, we observe a regular elliptical morphology for the brightest object, whereas the other galaxy shows a rather amorphous and broad light distribution (see contours in Fig. 2).
$`\mathrm{0430}+\mathrm{052}:`$ 3C 120 is a well studied radio source, displaying superluminal motion (Zensus 1989). The spectrum of the optical counterpart is quasar-like (Tadhunter et al. 1993), and the galaxy has often been classified as a Seyfert 1, even though its spiral morphology has never been clearly established.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The galaxy hosting this radio source is embedded in a moderately rich environment. It has regular elliptical morphology, as illustrated by the luminosity and geometrical profiles, as well as by its isophotal contours. The luminosity profile is consistent with the presence of a nuclear point source.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The optical counterpart of this radio source lies in the very dense environment of the cluster Abell 514. The host galaxy is a normal elliptical.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ An interesting case of dumbbell galaxy. The radio source coincides with the Southernmost component. After deblending, both galaxies show very regular morphology (see Fig. 2), suggesting the dumbbell appearance may just be due to chance projection. Based on the shape of their radial profile we conclude the northernmost galaxy is an S0, and the other one an elliptical.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ In the optical band NGC 1692 looks like a large, undisturbed elliptical galaxy, in spite of the presence of two nearby galaxies both at a projected distance of $`66^{\prime \prime }.5`$ from the radio galaxy. The south-east companion is likely to be an edge-on spiral showing a pronounced C-shape.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ Even if the optical counterpart of this radio source is located in a rather poor environment, the host galaxy appears morphologically disturbed by the presence of some small companions. In particular, two small compact galaxies on opposite sides with respect to the galaxy center are aligned with an elongated structure extending for $`30^{\prime \prime }`$ westward.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The optical counterpart of this radio source is an elliptical galaxy, located at the end of a chain of small galaxies. The surrounding environment is very dense, with several galaxies superposed on the main object.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The host galaxy of this radio source is a large, normal elliptical. The only noticeable thing being the presence of a nearby galaxy pair.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ A rather normal elliptical galaxy in a rich environment, with several small galaxies projecting on its halo. The luminosity profile suggests the existence of a nuclear point source.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ This FRI radio source (Jones & McAdam 1992) shows weak emission lines superposed onto the continuum spectrum of a typical early-type galaxy (Simpson et al. 1996). The source was also detected in the X-ray band (Gioia & Luppino 1994). Unfortunately, our image of the galaxy is disturbed by the presence of several saturated columns of the CCD, due to a nearby bright star. Nevertheless, using appropriate masking we were able to produce a reliable radial profile, from which we infer the existence of a nuclear point source.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The optical counterpart of this FRI radio source, located at the center of the cluster Abell 3392, is a giant elliptical embedded in a rich environment. It is worth noticing the presence of a strong point source in the nucleus of this galaxy, also supported by the emission lines observed in its optical spectrum (Tadhunter et al. 1993).
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The optical counterpart of this radio source is the South-East member of a dumbbell system, located in the cluster Abell 3391. After deblending the two galaxies with the iterative two-galaxy fitting procedure (Fig. 2), we found both galaxies show strong displacement of the isophotal centers roughly perpendicular to their alignment, as expected in strong interactions.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The host galaxy is a normal elliptical lying in a rather poor environment. Strong emission lines have been observed in its optical spectrum (Simpson et al. 1996). The image of the galaxy is disturbed by the light from a satellite which passed close to the center of the source. This structure was masked during isophotal analysis.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The galaxy hosting this radio source is a normal elliptical with regular morphology in a poor environment.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The optical counterpart looks like a normal elliptical with regular morphology in a poor environment.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ 3C195 is an FRII radio source (Morganti et al. 1993), with the optical counterpart exhibiting emission lines in its optical spectrum (di Serego Alighieri et al. 1994). The host galaxy is located near a very bright star, which makes it difficult to derive luminosity and geometrical profiles extended to the outer regions. The brighter part of the galaxy, which can be reliably studied, suggests this galaxy has a complex structure and a nuclear point source. The environment is relatively rich and at least two small galaxies may be gravitationally interacting with the radio galaxy.
$`\mathrm{0915}-\mathrm{118}:`$ The optical counterpart of the FRI radio source 3C218 (Morganti et al. 1993) is an early-type galaxy, most likely a member of a small group. Ionization emission lines were detected in its optical spectrum (Simpson et al. 1996).
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The optical spectrum of this radio source is of an early-type galaxy with old stellar population (Scarpa et al. 1996). The optical morphology confirms this classification. The environment is poor.
$`\mathrm{0945}+\mathrm{076}:`$ The radio morphology of 3C227 is very elongated East-West with terminal hot spots, and is classified as FRII (Morganti et al. 1993). The optical counterpart resides in a poor environment and, apart from a slight tendency toward disky isophotes, its most important optical feature is the presence of a very bright nuclear source. This is consistent with the detection of strong emission lines with broad wings in the optical spectrum (Simpson et al. 1996). The major axis of the galaxy, at almost constant position angle PA$`\simeq 0^{\circ }`$, is perpendicular to the radio structure.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The optical counterpart is an elliptical galaxy, most likely interacting with a nearby companion at a projected distance of $`56^{\prime \prime }`$.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ Its optical spectrum is characteristic of an elliptical galaxy (Scarpa et al. 1996). The surface photometry of this galaxy is perturbed by the influence of a bright nearby star. Its morphology is clearly of elliptical type and its luminosity profile is reliable enough to indicate the presence of an outer halo. The environment looks relatively rich.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ This object lies in a rich environment and is surrounded by several small galaxies, some of which are likely to be interacting with the radio source. The surface photometry is made difficult both by the above-mentioned companions and by the presence of a relatively bright star projected onto the galaxy body. Both the morphology and the luminosity profile are consistent with the presence of an extended disk.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The radio structure of this source is large and complex (Jones & McAdam 1992). The optical counterpart lies in a rich environment and coincides with a close galaxy pair similar to $`0452190`$. The radio galaxy is the brightest of the two and has normal luminosity and geometrical profiles for an elliptical galaxy. The optical spectrum exhibits emission lines (Simpson et al. 1996).
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ NGC 3557 is an FRI radio source (Birkinshaw & Davies 1985) in which H$`\alpha `$+\[NII\] extended emission has been detected (Goudfrooij et al. 1994). The luminosity and geometrical profiles indicate a regular elliptical morphology.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ Its optical counterpart is a large elliptical galaxy with undisturbed morphology.
$`\mathrm{1251}-\mathrm{122}:`$ 3C 278 is an FRI radio source (Morganti et al. 1993). The optical counterpart is the Southernmost component of a dumbbell system. After deblending the images of the two galaxies, we found that both components are heavily disturbed by tidal interaction.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The host galaxy inhabits a relatively poor environment and shows undisturbed elliptical morphology in its outer part. On the contrary, the inner region appears irregular and elongated suggesting the presence of either a nuclear dust lane or a double nucleus.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ An elliptical galaxy with undisturbed morphology, surrounded by several nearby companions. The optical spectrum is typical of early type galaxies without emission lines (Scarpa et al. 1996).
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ This radio galaxy lies in the cluster Abell 3537. After masking the light from two nearby bright stars, the surface photometry indicates a regular elliptical morphology and a nuclear point source.
$`\mathrm{1318}-\mathrm{434}:`$ NGC 5090 hosts an FRI radio source (Morganti et al. 1993) studied in detail by Lloyd et al. (1996). The galaxy looks like a normal giant elliptical, over which projects NGC 5091, an edge-on spiral.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The host galaxy lies in cluster Abell 1736 and looks like a normal elliptical in interaction with a small S0 galaxy.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ A regular elliptical galaxy located at the center of cluster Abell 3565. Optical emission lines were detected within few arcsec of the nucleus (Goudfrooij et al. 1994).
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The host galaxy lies in a rather poor environment. Isophotal contours and geometrical profiles (increasing ellipticity, constant position angle and positive $`c4`$ coefficient) suggest it is an S0 galaxy. However, signatures of a disc are not seen in the luminosity profile.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ This radio galaxy inhabits a relatively poor environment. Its optical morphology is of an undisturbed elliptical galaxy, as confirmed also by the optical spectrum (Scarpa et al. 1996).
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The optical counterpart is a large, regular elliptical with a bright nucleus, located at the center of the cluster Abell 753.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ A large and undisturbed elliptical galaxy lying at the center of the Abell cluster 3581. Since no radio maps are available for this radio source, we infer its FRI morphology from the radio power–morphology relationship.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The radio source is estimated by Morganti et al. (1993) to have an FRII radio morphology. Optical spectra have revealed the presence of relatively strong emission lines (Tadhunter et al. 1993; Simpson et al. 1996). The luminosity profile of the host galaxy is consistent with the presence of a nuclear point source. The environment is rather poor.
$`\mathrm{1717}-\mathrm{009}:`$ According to Morganti et al. (1993), 3C 353 is an FRII radio source. Emission lines have been detected in its optical spectrum (Tadhunter et al. 1993), and a LINER-type spectrum has also been observed (Simpson et al. 1996). In our image the galaxy appears undisturbed in the outer regions, while in the center both the luminosity and geometrical profiles support the existence of a bright point source.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ An FRII radio source (Morganti et al. 1993), with emission lines in the optical spectrum (Simpson et al. 1996). The optical counterpart is most probably an elliptical as suggested by its radial profile which precisely follows a de Vaucouleurs law. A nuclear point source is also observed.
$`\mathrm{๐๐๐๐}\mathrm{๐๐๐}:`$ The host galaxy looks like a normal elliptical located in a small galaxy group.
$`\mathrm{1949}+\mathrm{023}:`$ 3C 403 is an FRII radio source (Morganti et al. 1993) located in a poor environment. Strong emission lines were observed in its optical spectrum (Simpson et al. 1996; Tadhunter et al. 1993). In spite of the presence of a nearby bright star, we derive for this galaxy a reliable profile, which clearly indicates a normal elliptical structure.
$`\mathrm{2221}-\mathrm{023}:`$ 3C 445 is a well known radio source, and broad emission lines have been observed in its optical spectrum (Eracleous & Halpern 1994; Corbett et al. 1998). The optical morphology is characterized by the presence of an extremely bright nuclear point source.
## References
Birkinshaw M., Davies R.L., 1985, ApJ 291, 32
Colina L., de Juan L., 1995, ApJ 448, 548
Corbett E.A., Robinson A., Axon D.J., Young S., Hough J.H., 1998, MNRAS 296, 721
de Juan L., Colina L., Perez-Fournon I., 1994, ApJS 91, 507
di Serego Alighieri S., Danziger I.J., Morganti R., Tadhunter C.N., 1994, MNRAS 269, 998
Ekers R.D., Wall J.V., Shaver P.A., et al., 1989, MNRAS 236, 737 (EK)
Eracleous M., Halpern J.P., 1994, ApJS 90, 1
Fanaroff B.L., Riley J.M., 1974, MNRAS 167, 31
Fasano G., 1990, internal report of Astr. Obs. of Padova
Fasano G., Bonoli C., 1990, A&A 234, 89
Fasano G., Falomo R., Scarpa R., 1996, MNRAS 282, 40 (Paper I)
Gioia I.M., Luppino G.A., 1994, ApJS 94, 583
Gonzalez-Serrano J.I., Carballo R., Perez-Fournon I., 1993, AJ 105, 1710
Goudfrooij P., Hansen L., Jorgensen H.E., Norgaard-Nielsen H.U., 1994, A&AS 105, 341
Govoni F., Falomo R., Fasano G., Scarpa R., 1999, A&A, submitted
Hine R.G, Longair M.S., 1979, MNRAS 188, 111
Jones P.A., McAdam W.B., 1992, ApJS 80, 137
Knapp G.R., Kerr F.J., 1974, A&A 35, 361
Landolt A.U., 1992, AJ 104, 340
Ledlow M.J., Owen F.N., 1995, AJ 109, 853
Lilly S.J., Prestage R.M., 1987, MNRAS 225, 531 (LP87)
Lloyd B.D., Jones P.A., Haynes R.F., 1996, MNRAS 279, 1197
Longair M.S., Seldner M., 1979, MNRAS 189, 433
Morganti R., Killeen N.E.B., Tadhunter C.N., 1993, MNRAS 263, 1023
Morganti R., Oosterloo T.A., Reynolds J.E., Tadhunter C.N., Migenes V., 1997, MNRAS 284, 541
Owen F.N., Laing R.A., 1989, MNRAS 238, 357
Owen F.N., White R.A., 1991, MNRAS 249, 164
Owen F.N., Odea C.P., Inoue M., Eilek J.A., 1985, ApJ 294, L85
Prestage R.M., Peacock J.A., 1988, MNRAS 230, 131
Scarpa R., Falomo R., Pesce J.E., 1996, A&AS 116, 295
Simpson C., Ward M., Clements D.L., Rawlings S., 1996, MNRAS 281, 509
Smith E.P., Heckman T.M., 1989a, ApJS 69, 365 (SH89)
Smith E.P., Heckman T.M., 1989b, ApJ 341, 658 (SH89)
Stark A.A., Gammie C.F., Wilson R.W., et al., 1992, ApJS 79, 77
Tadhunter C.N., Morganti R., Di Serego Alighieri S., Fosbury R.A.E., Danziger I.J., 1993, MNRAS 263, 999
Wall J.V., Peacock J.A., 1985, MNRAS 216, 173 (WP)
Zensus J.A., 1989, in BL Lac objects, ed. L. Maraschi, T. Maccacaro, M.H. Ulrich (Berlin: Springer), p.3
# Some Techniques for the Measurement of Complexity in Tierra
## 1 Introduction
The issue of what happens to complexity in an evolving system is of great interest. In natural (biological) evolution, the naive view is that life started simple, and evolved ever more complex life forms over time, leading to that pinnacle of complexity, Homo sapiens. The end points of that process are of course fixed. In the beginning, life must be simple. In our present era, there must exist intelligent organisms (namely us) pondering over the mystery of how we came to be. So the anthropic principle fixes the present day as having complex lifeforms. There is nothing within the Modern Synthesis of Darwinism that implies a steady interpolation between these two end points. In fact it is even plausible that more complex organisms than us existed in the past, but have since vanished into obscurity. However, examinations of the fossil record over the Phanerozoic (the last 550 million years of the Earth's history) indicate almost no growth in complexity by a number of different measures over that period, apart from an initial large jump at the Cambrian explosion.
The interesting thing is to ask what one might see if looking at another evolutionary system apart from the one in which we evolved. Would we see any growth in complexity at all? Since we don't have an extraterrestrial biology to observe (a few Martian meteorites aside), the only other systems available are Artificial Life systems evolving within a digital computer, such as Tierra or Avida. The Avida group has reported measuring the information content (complexity) of individual avidan organisms, or rather a lower bound of the organism's complexity. Their results are that this lower bound increases over time for the maximally fit organism, thus showing information accumulating as time progresses. One important critique of this work, however, is that organisms do not interact directly with each other, and in order to prevent evolution stagnating, an externally imposed task (e.g. computing a logical operation) is added to the system. Organisms are given "fitness points" depending on how well they perform this task. This heavily weights the system in favour of accruing information.
By contrast, in the Tierra system, the organisms interact with each other, providing a rich array of possible (intrinsic) tasks for the organisms to exploit. Since this is an evolving ecology with no externally imposed task, the above critique does not apply. However, the downside is that determining whether two genotypes are phenotypically equivalent is considerably more complex. In some work a couple of years ago, I studied the phenotypic properties of Tierran organisms to build up a picture of the genotype-to-phenotype landscape. A Tierran organism's phenotype can be characterised by a couple of numbers for each possible pairwise interaction in the ecology. Multiway interactions are ignored in this study, as experience has shown them to be relatively rare.
## 2 Complexity of a Digital Organism
The information content of a string is given by the difference between the maximal Shannon entropy of that string (i.e. considering the string to be random, or devoid of information), and the entropy given by assuming that the string codes for some phenotype $`p`$:
$$I(g)=H(g)-H(g|p)=\ell -\log _{32}N$$
(1)
where $`\ell `$ is the length of the genotype (in instructions), and $`N`$ is the number of genotypes that give rise to the same phenotype $`p`$. The base, 32, refers to the number of instructions in the Tierra instruction set. If $`N=32^{\ell }`$ (i.e., a completely random sequence), then $`I(g)=0`$. Similarly, if $`N=1`$ (there is only one genetic sequence encoding a genotype, or no redundancy), then $`I(g)=\ell `$.
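Eq. (1) is straightforward to evaluate once $`N`$ is known; a minimal numerical sketch (function and constant names here are mine, not Tierra's):

```python
import math

ALPHABET = 32  # size of the Tierra instruction set


def information_content(length, n_equivalents):
    """Eq. (1): I(g) = l - log_32(N), measured in instructions."""
    return length - math.log(n_equivalents, ALPHABET)


def volatile_site_estimate(length, n_volatile):
    """Volatile-site shortcut: approximating N by 32**v gives I(g) = l - v,
    since log_32(32**v) = v."""
    return length - n_volatile
```

A fully random genome ($`N=32^{\ell }`$) carries zero information, while a genome with no redundancy ($`N=1`$) carries $`\ell `$ instructions' worth.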
The most obvious way to compute $`N`$ is to search all $`32^{\ell }`$ genotypes for equivalent phenotypes. However, this is an enormous number of strings to check, and computationally infeasible. Adami recognised this problem, and took the approach of counting the number of volatile sites $`v`$ (sites that vary amongst phenotypic equivalents), and approximating $`N\simeq 32^v`$. In one sense this is an overestimate of $`N`$, so they argue that this gives a lower bound to the information $`I(g)`$. In another sense, however, it is not strictly a lower bound. If it turns out that fixing one of the volatile sites to a particular value allows one of the fixed sites to vary without altering the phenotype, then this would not be counted in $`N`$, so what we have is really an overestimate of an underestimate.
The same criticism applies to this work. We can compute the above-mentioned estimate fairly accurately; more precisely, we can find the size of the neutral network connected to $`g`$ by one-site neutral mutations. However, the possibility remains that there are other neutral networks of $`g`$ that aren't connected by single-site mutations to $`g`$. Probably the most efficient way of finding these is by using a genetic algorithm to explore genotype space, i.e. to run Tierra for a long time and see what it discovers! The way we use this in our experiment is to keep a list of neutrally equivalent organisms that Tierra discovers. As we explore the neutral network connected to $`g`$, we eliminate items from the list that we come across. The remaining names on the list can then be used as seeds to start the process again.
In this work, we use two different techniques to measure $`N`$. The first is a Monte Carlo random sampling technique to estimate the proportion of the $`32^v`$ strings found by varying the volatile sites. The second technique, which we use in conjunction with the Monte Carlo approach mentioned above, is to walk the neutral net. The Monte Carlo technique works well when the density of neutral variants is fairly high, whereas the latter technique is best on sparse networks. A decision on which technique to use for which site is based on estimated densities of neutral variants.
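Both estimators can be sketched as follows, with an opaque predicate `is_neutral` standing in for the expensive phenotype-equivalence test described in the next section (all names here are illustrative, not Tierra's):

```python
import random
from collections import deque

INSTRUCTIONS = range(32)  # the Tierra instruction set, abstractly


def monte_carlo_density(base, volatile_sites, is_neutral, trials, rng=random):
    """Estimate the fraction of the 32**v variants over the volatile sites
    that are phenotypically equivalent to `base`."""
    hits = 0
    for _ in range(trials):
        g = list(base)
        for s in volatile_sites:
            g[s] = rng.choice(INSTRUCTIONS)
        hits += is_neutral(tuple(g))
    return hits / trials


def walk_neutral_net(start, is_neutral):
    """Breadth-first walk of the neutral network reachable from `start`
    through single-site neutral mutations; returns the set of members."""
    seen = {start}
    queue = deque([start])
    while queue:
        g = queue.popleft()
        for site in range(len(g)):
            for ins in INSTRUCTIONS:
                m = g[:site] + (ins,) + g[site + 1:]
                if m != g and m not in seen and is_neutral(m):
                    seen.add(m)
                    queue.append(m)
    return seen
```

The walk enumerates a sparse network exactly, while the sampler is cheaper when the density of neutral variants over the volatile sites is high.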
## 3 Establishing Phenotypic Equivalence
Equation (4) of presents the dynamical equations of two species of Tierran organisms interacting. The precise form of the dynamics is not important here; however, the phenotype of the organism can be characterised by its interactions with all other possible Tierran phenotypes. Since it is impossible to have the complete set of all possible Tierran organisms, those organisms generated during a run of Tierra are used. Since Tierran organisms coevolve, the most important organisms should be contemporaneous with the test organism. The following characteristics are saved for each pair of organisms:
1. The outcome of the tournament. This may be one of the following:
The test organism never calls the divide instruction, or does not produce any recognisable progeny (essentially stillborn)
The organism produces progeny once, but then never repeats the act.
The organism continuously reproduces the same progeny. For this purpose we ignore what is produced the first time around, as it will be swamped by the number of later progeny.
The organism continuously reproduces, but the progeny is either different each time, or the CPU is in a different state each time the divide instruction is called, and thus the organism cannot be guaranteed to reproduce ad infinitum.
2. The name of the progeny organism. This is usually identical to the parent, but may be another type in the case of symbiosis or parasitism.
3. The number of timesteps it takes to reach the first divide instruction ($`\sigma _{ij}`$), and the time it takes between successive divide steps after that ($`\tau _{ij}`$).
4. The number of template matching operations made to the opposing organism prior to the first divide ($`\mu _{ij}`$) and between successive divides ($`\nu _{ij}`$).
Two organisms are neutrally equivalent if they have identical characteristics against all Tierran organisms. Once all organisms are paired with each other, we can produce a list of phenotypically unique organisms, which provides a smaller test list to pit trial mutants against. We may also eliminate some noninteractive pairings prior to simulation by trying to see if potential template matches could happen between organisms. This still produces a fairly large list of test organisms, so it is still computationally expensive. The high degree of parallelism in this problem allows it to be attacked in reasonable time on a parallel supercomputer.
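A sketch of the bookkeeping, grouping organisms whose pairwise characteristics (items 1–4 above) agree against every test opponent; the record layout and names are illustrative, not from the actual Tierra tooling:

```python
from collections import namedtuple

# One record per (test organism, opponent) pairing; fields follow the list above.
PairResult = namedtuple("PairResult", "outcome progeny sigma tau mu nu")


def signature(results_by_opponent):
    """An organism's phenotype: its pair results against every opponent,
    in a fixed (sorted) opponent order so signatures are comparable."""
    return tuple(sorted(results_by_opponent.items()))


def unique_phenotypes(results):
    """Group organisms whose signatures are identical (neutrally equivalent);
    returns one list of names per distinct phenotype."""
    groups = {}
    for name, per_opponent in results.items():
        groups.setdefault(signature(per_opponent), []).append(name)
    return list(groups.values())
```

Each pairing is independent of the others, which is what makes the problem embarrassingly parallel.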
A further refinement may be possible by producing an archetypal list, perhaps by ignoring the ($`\mu ,\nu ,\tau `$ and $`\sigma `$) parameters. The idea being that the archetypes contain a representative organism from each niche of the ecology, and ignoring minor differences such as reproductive rate. This would coarsen the approximation a little, but will probably give an acceptable result. At present this idea has not been tested.
## 4 Interim Results
Due to the time constraints of producing this paper, the analysis of a reasonable length Tierra run has not been completed. At the time of writing, a moderately large data set of 1660 organisms was generated from a 24 hour Tierra run. Tierra produces most of its diversity during the earliest stage of its running, so it becomes significantly more expensive to produce larger data sets. This data set was halved by removing every second organism, and then a phenotypic analysis was carried out. This set reduced to 103 distinct phenotypes, which formed the test list used for carrying out the complexity analysis. Each of these 103 organisms was then tested for phenotypic equivalence against its single-site nearest neighbours. The number of sites on which no mutation resulted in a phenotypically equivalent organism ("nonvolatile sites") is plotted against the time of speciation in figure 1.
# Radiative Processes and Geometry of Spectral States of Black-hole Binaries
## 1. Introduction
Black-hole (hereafter BH) binaries show two main states in their X-ray/$`\gamma `$-ray (hereafter X$`\gamma `$) spectra: a hard (also called low) one and a soft (also called high) one. The two states differ in the relative strength of the blackbody and power-law-like components, as illustrated by the case of Cyg X-1 in Figure 1 (Gierliński et al. 1999 \[G99\]). Apart from these two, an intermediate state (also shown in Figure 1) and an off state simply correspond to a transition period between the hard and the soft state, and to a very weak X-ray emission, respectively. Finally, a very high state is distinguished by both the above components being strong.
In this work, I will concentrate on radiative processes dominant during the hard and soft states. I will also discuss implications of correlations among the spectral and timing properties for the source geometry.
## 2. The Hard State
### 2.1. Thermal Comptonization
Thermal Comptonization of soft blackbody photons, as the radiative process expected to dominate in a hot accretion disk surrounded by a cold one, was proposed to take place in BH binaries by Eardley, Lightman, & Shapiro (1975) and Shapiro, Lightman, & Eardley (1976). (That geometry was also proposed by Thorne & Price 1975.) Shapiro et al. (1976) obtained a solution for spectral formation in this process and found it to qualitatively agree with the high-energy cutoff seen in balloon data for Cyg X-1. Then, Sunyaev & Trümper (1979) (and later some other authors) fitted hard X-ray data from Cyg X-1 using the optically-thick, nonrelativistic, Comptonization solution of Sunyaev & Titarchuk (1980). That fit neglected the presence of a Compton-reflection spectral component not known at that time (see §2.2 below). This, most likely, explains their fitted value of the electron temperature of $`kT=27`$ keV, which is much lower than the values of $`\sim 100`$ keV obtained in contemporary models (e.g., Gierliński et al. 1997). The effect is due to the spectral curvature in the hard X-ray regime being partly due to Compton reflection, as pointed out by Haardt et al. (1993). This can be seen in Figure 2 by comparing the total spectrum, curved in the $`\sim 15`$–100 keV range, with the Comptonization component, rather flat in that range.
Presently, the best evidence that the primary X$`\gamma `$ continua of BH binaries in the hard state are due to thermal Comptonization comes from observations by the CGRO/OSSE detector simultaneous with X-ray observations by other instruments. The obtained plasma parameters are $`kT\sim 50`$–100 keV and a Thomson optical depth of $`\tau _\mathrm{T}\sim 1`$ (in agreement with the values of Shapiro et al. 1976). These parameters have been obtained, e.g., for the hard state of Cyg X-1 (Figure 1, Gierliński et al. 1997) and GX 339–4 (Figure 2, Zdziarski et al. 1998 \[Z98\], and in preparation); in addition, they appear consistent with the spectra of transient BH binaries (Grove et al. 1998 \[G98\]).
The photon spectral index, $`\mathrm{\Gamma }`$, of the primary X-ray continuum is a function of $`\tau _\mathrm{T}`$ and $`kT`$; roughly, it depends on them through the Compton parameter, $`y4(kT/m_\mathrm{e}c^2)\tau _\mathrm{T}`$. At a given $`\tau _\mathrm{T}`$, $`kT`$ is determined by balance between heating of the plasma and its radiative cooling, with the cooling rate proportional to the flux of soft photons providing seeds for Comptonization. Then, the stronger the flux in the soft photons is, the larger $`\mathrm{\Gamma }`$ is (e.g., see Beloborodov 1999b).
The two spectra of GX 339–4 shown in Figure 2 have almost identical X-ray spectral slopes (corresponding to a constant $`y`$), but the fitted electron temperature increases from $`kT\sim 50`$ to $`\sim 80`$ keV when the luminosity, $`L`$, is smaller by a factor of $`\sim 2`$. A similar behaviour (but for a smaller range of $`L`$) is seen in four spectra of Cyg X-1 presented in Gierliński et al. (1997). A constant $`\mathrm{\Gamma }`$ (or $`y`$) implies an approximately constant geometry (determining the amplification factor of Comptonization, see §2.2 below). Then, a higher $`kT`$ at a lower $`L`$ corresponds to a proportionally smaller $`\tau _\mathrm{T}`$. Such a behaviour is expected in hot accretion disks (Shapiro et al. 1976; Abramowicz et al. 1995; Narayan & Yi 1995), in which $`\tau _\mathrm{T}`$ decreases with decreasing $`\dot{m}`$ ($`\equiv \dot{M}c^2/L_\mathrm{E}`$). Then, the character of the $`\tau _\mathrm{T}(L)`$ dependence may allow us to determine which branch of the hot disk solution, advection-dominated or cooling-dominated, is followed by the source. Hot disks parametrized by $`y`$ were studied by Zdziarski (1998), whose results applied to GX 339–4 appear to favour the advection-dominated solution branch (Zdziarski et al., in preparation).
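The constant-$`y`$ trade-off just described is easy to check numerically. A minimal sketch (constant and function names are mine, not from the cited papers):

```python
M_E_C2_KEV = 511.0  # electron rest energy in keV


def compton_y(kT_keV, tau_T):
    """Compton parameter y = 4 (kT / m_e c^2) tau_T (nonrelativistic form)."""
    return 4.0 * (kT_keV / M_E_C2_KEV) * tau_T
```

At fixed $`y`$ (i.e., fixed $`\mathrm{\Gamma }`$), halving $`\tau _\mathrm{T}`$ doubles $`kT`$: `compton_y(50, 2)` equals `compton_y(100, 1)`, which mirrors the behaviour of the two GX 339–4 spectra above.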
### 2.2. Compton Reflection and its Correlation with Spectral and Timing Properties
As illustrated in Figure 2, the X$`\gamma `$ spectra of BH binaries usually show a distinct component due to Compton reflection (Lightman & White 1988; Magdziarz & Zdziarski 1995) of the primary continuum from a cold medium, presumably an optically-thick accretion disk (Done et al. 1992; Gierliński et al. 1997; Z98; Życki, Done, & Smith 1998; 1999; Done & Życki 1999; Gilfanov, Churazov, & Revnivtsev 1999 \[GCR99\]; Revnivtsev, Gilfanov, & Churazov 1999; 2000 \[RGC00\]).
A very interesting property of Compton reflection is that its relative strength, $`R`$ ($`\equiv \mathrm{\Omega }/2\pi `$, where $`\mathrm{\Omega }`$ is the solid angle of the reflector as seen from the hot plasma), strongly correlates with the spectral and timing properties of the sources. Ueda, Ebisawa, & Done (1994) have found a correlation between $`R`$ and $`\mathrm{\Gamma }`$ in GX 339–4, albeit based on a few observations with relatively large errors. Then, Zdziarski, Lubiński, & Smith (1999 \[ZLS99\]) have shown the presence of a strong $`R`$-$`\mathrm{\Gamma }`$ correlation at a very high statistical significance in 47 Ginga observations of Seyfert 1s. Also, 23 Ginga observations of BH and neutron-star binaries were found to obey the same correlation (Zdziarski 1999 \[Z99\]).
The correlation has recently been unambiguously confirmed in the RXTE data on Cyg X-1 and GX 339–4. It is seen both in spectra obtained at different epochs and in Fourier-resolved spectra (i.e., corresponding to variability in a given range of Fourier frequencies) of a given observation (Revnivtsev et al. 1999; GCR99; RGC00). Figure 3 presents the RXTE results for those two objects and GS 1354–644 (M. Gilfanov, private comm.), as well as the Ginga results for 20 observations of three BH binaries (Z99).
Furthermore, GCR99 and RGC00 find that the strength of Compton reflection strongly correlates with the characteristic frequencies in the power-density spectrum (PDS), both in Cyg X-1 and GX 339–4. Their PDS per logarithm of frequency exhibit two peaks at frequencies, $`f`$, which correlate positively with both $`R`$ and $`\mathrm{\Gamma }`$ (with the ratio of the two peak frequencies remaining constant). Also, the spectra from both reflection and Fe K$`\alpha `$ fluorescence are smeared, with the amount of smearing increasing with $`R`$ and $`\mathrm{\Gamma }`$ (RGC00).
A likely general explanation of the $`R`$-$`\mathrm{\Gamma }`$ correlation appears to be a mutual interaction between a hot, thermal plasma and a cold medium, as proposed by ZLS99. Namely, the cold medium both reflects the hot-plasma emission and provides blackbody photons as seeds for Comptonization. Then, the larger the solid angle subtended by the reflector is, the stronger the flux of soft photons is, and, consequently, the stronger the cooling of the plasma is. In the case of thermal plasma (ยง2.1), the stronger the cooling by seed photons incident on the plasma is, the softer the resulting X-ray power law spectrum is.
ZLS99 considered 2 specific models: one with a central, hot disk surrounded by a cold disk, and one with bulk motion in a disk corona. Here, we discuss only the former model (see Beloborodov 1999a, b for discussion of the latter). In this model, the surrounding cold disk is assumed to extend from some large, outer radius down to such a (variable) transition radius that it may overlap with the hot disk (see also Poutanen, Krolik, & Ryde 1997), as shown in Figure 4. Then, the more inward the cold disk extends, the stronger Compton reflection is, the softer the Comptonization spectrum is (see above), and the higher the characteristic frequencies of the system are. The solid curve in Figure 3 corresponds to this model with the parameters of ZLS99. (Note that their soft photon energy is probably too low for the BH-binary case; on the other hand, other effects not included in that highly idealized model may also affect the values of $`R`$.) We see that it provides an excellent description of the data.
Note that the interpretation in terms of blackbody cooling is also in agreement with a theoretical prediction that thermal synchrotron emission provides a negligible flux of seed photons for Comptonization in luminous BH binaries (Wardziński & Zdziarski 2000). However, the thermal synchrotron process can be important at low luminosities, in which case departures from the $`R`$-$`\mathrm{\Gamma }`$ correlation are expected (an effect that might explain the soft spectrum with weak reflection seen in a low-$`L`$ state of GS 2023+338: Życki et al. 1999).
On the other hand, detailed interpretation of the correlation of $`R`$ with the peak PDS frequencies is probably not straightforward. Here, we simply compare the characteristic PDS frequencies with the Keplerian one, which is likely to represent an upper limit on the characteristic frequencies of physical processes taking place at a given radius. We can define the Keplerian radius, $`r_\mathrm{K}(f)`$, in units of $`GM/c^2`$,
$$r_\mathrm{K}(f)\simeq 10^3[m(f/1\mathrm{Hz})]^{-2/3},$$
(1)
where $`M=m\mathrm{M}_{\odot }`$. For the higher of the peak PDS frequencies in Cyg X-1, $`f\simeq 0.5`$–3 Hz (GCR99), $`r_\mathrm{K}\simeq 100`$–300, which may represent an upper limit on the range of radii responsible for that peak. This is then in agreement with the transition radii of $`r\simeq 20`$–50 found for the hard state of Cyg X-1 by Done & Życki (1999) by assuming that the observed smearing is solely due to Doppler and gravitational effects on the surface of a cold disk. We note, however, the result of Revnivtsev et al. (1999) that a given observed spectrum is a sum of spectra with different values of $`R`$ and $`\mathrm{\Gamma }`$ corresponding to different Fourier frequencies. Then, fitting such a sum by a single power law plus reflection will result in some smearing of the reflection in addition to Doppler/gravity effects. Therefore, the transition radii of Done & Życki (1999) may underestimate the actual values.
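Eq. (1) can be evaluated directly; the exponent is $`-2/3`$ (the Keplerian scaling $`rf^{-2/3}`$), which reproduces the radii quoted above. A sketch (the function name is mine):

```python
def keplerian_radius(m, f_hz):
    """Eq. (1): r_K ~ 1e3 * [m * (f / 1 Hz)]**(-2/3), in units of GM/c^2,
    for a black hole of m solar masses and Keplerian frequency f."""
    return 1.0e3 * (m * f_hz) ** (-2.0 / 3.0)
```

For $`m=10`$, $`f=0.5`$ Hz gives $`r_\mathrm{K}`$ of about 340 and $`f=3`$ Hz about 100, i.e., the 100–300 range quoted for the Cyg X-1 peak.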
The Fourier-resolved spectra show a positive correlation between $`R`$ and $`\mathrm{\Gamma }`$ (Revnivtsev et al. 1999; RGC00); also, the hardest spectra with the weakest reflections correspond to the highest Fourier frequencies. Although this is an opposite effect to the positive correlation of $`R`$ and $`\mathrm{\Gamma }`$ with the peak frequencies in the PDS spectra, it can also be explained by the hot/cold disk model discussed above. Namely, the spectra corresponding to the highest Fourier frequencies presumably originate close to the central BH where both the blackbody flux from the outer cold disk and the solid angle subtended by it are small. The hardest/weakest-reflection spectra shown by Revnivtsev et al. correspond to $`f<30`$ Hz, which then corresponds to $`r_\mathrm{K}>20`$ (at $`m=10`$).
Still, a realistic representation of the geometry will certainly be much more complex than the sketch in Figure 4. In particular, a major issue involves implications of the observed time lags of harder X-rays with respect to softer ones (see Cui 1999 for a recent review).
## 3. The Soft State
X$`\gamma `$ spectra in the soft state can be roughly described by a strong blackbody component dominating energetically, followed by a high-energy tail with $`\mathrm{\Gamma }\sim 2.5`$–3; see Figure 1. The blackbody component comes, most likely, from an optically-thick accretion disk. On the other hand, there is no consensus at present regarding the origin of the tail. Three main models have been proposed, all involving Comptonization of blackbody photons by high-energy electrons. The models differ in the distribution (and location) of the electrons, which are assumed to be either thermal (with a Maxwellian distribution), nonthermal (with a distribution close to a power law), or in free fall from about the minimum stable orbit down to the horizon of the black hole.
A crucial test of the models is given by how well they are able to reproduce the shape of the high-energy tail. Its major spectral feature is the lack of an observable high-energy cutoff in all BH binaries in the soft state observed so far (G98; Tomsick et al. 1999; G99; E. Grove, private comm.). In two objects with the best soft $`\gamma `$-ray data, GRO J1655–40 and GRS 1915+105, the power-law tail extends above $`\sim 0.5`$ MeV without any cutoff (G98; Tomsick et al. 1999; see §3.3 below). Also, the spectrum of the tail, at least in some well-studied cases, contains a component due to Compton reflection and Fe K$`\alpha `$ fluorescence (Cyg X-1: G99, GCR99; GRO J1655–40: Tomsick et al. 1999; GRS 1915+105: Coppi 1999; Nova Muscae 1991: Życki et al. 1998; 1999).
### 3.1. Thermal Comptonization
Thermal Comptonization was proposed to model spectra of the soft state of Cyg X-1 (Poutanen et al. 1997; Cui et al. 1998; Esin et al. 1998) and other BH binaries (Miyamoto et al. 1991; Esin, McClintock, & Narayan 1997). This model can, in principle, account for the X-ray part of the spectra. However, very high plasma temperatures are then required to account for the observed steep power-law tails extending to $`\sim 1`$ MeV, which then requires $`\tau _\mathrm{T}\ll 1`$ in order to keep the spectrum soft. This, in turn, causes distinct scattering profiles from consecutive orders of scattering to be visible in the spectrum (see Figure 7 of Coppi 1999), which are not seen in the soft-state data. For instance, a deep dip in the spectrum above the blackbody component is predicted by this model, whereas the Cyg X-1 data show instead an excess of photons in that region, resulting in a very bad fit of this model (G99). Thus, existing observations rule out this model in its simplest version with a single plasma component dominating the formation of the tail.
On the other hand, the observed spectra can possibly be reproduced by a suitable distribution of $`T`$ and $`\tau _\mathrm{T}`$. Such models are, in general, difficult to rule out. However, they appear to require a fine-tuning of the $`(T,\tau _\mathrm{T})`$ distribution. This is because a range of $`T`$ from nonrelativistic to relativistic values is required to account for the broadband tails, and Comptonization in those two regimes has different properties; in particular, the energy gain per scattering is $`\propto T`$ and $`\propto T^2`$, respectively. Then, a power-law distribution of $`T`$ would still result in a curved spectrum, most likely contrary to observations.
### 3.2. Bulk-motion Comptonization
Another model of bulk-motion Comptonization (hereafter abbreviated as BMC; Blandford & Payne 1981; Colpi 1988) was proposed to account for the soft-state, power-law spectra by Chakrabarti & Titarchuk (1995). In their model, an accretion flow passes through a shock at a radius close to the radius ($`r_{\mathrm{ms}}`$) of the minimum stable orbit, and it becomes quasi-spherical at smaller radii. Above the shock, the flow consists of a geometrically-thin, optically-thick accretion disk (as required by the observations of strong blackbody components) and an optically-thin flow above and below the disk. (Note that in the standard accretion-disk model, the disk passes through a sonic point close to $`r_{\mathrm{ms}}`$ without a shock and remains geometrically-thin to the horizon; e.g., see Muchotrzeb & Paczyński 1982.)
Then, the free-falling electrons acquire velocities of $`v\sim c`$ close to the horizon, and Comptonization using the large bulk inflow velocity of the electrons (as opposed to their assumed smaller thermal motions) gives rise to a power-law spectrum. The power-law index, $`\mathrm{\Gamma }`$, depends on $`\dot{m}`$ as shown, e.g., by Monte-Carlo calculations of Laurent & Titarchuk (1999 \[LT99\]). It decreases with increasing $`\dot{m}`$, and $`\mathrm{\Gamma }<3`$ is achieved for $`\dot{m}>2`$; see Figure 5.
A very attractive feature of the BMC model is that it links the presence of the high-energy tail to the lack of a hard surface for a BH. Free-falling electrons can achieve relativistic velocities only close to the BH horizon, unlike the case of accretion onto a neutron star. Indeed, no strong high-energy tails have been observed as yet from accreting neutron stars in their high states (although power-law spectra with $`\mathrm{\Gamma }\sim 2.5`$ and no observable high-energy cutoff have been seen, usually in low-$`L`$ states; e.g., see Barret et al. 1992; Goldwurm et al. 1996; Harmon et al. 1996; Piraino et al. 1999). Note that the free-falling electrons represent, in fact, an advection-dominated flow (with the distinction of the transition radius being at $`r_{\mathrm{ms}}`$), models of which also link certain features of BH accretion to the lack of a hard stellar surface (e.g., Narayan & Yi 1995).
However, the attractiveness of a model is not equivalent to its proof, and we should look for specific predictions of the model that can be confronted with data. The main such specific model prediction is the energy of a high-energy cutoff. Based on a nonrelativistic comparison of Compton upscattering and recoil, Ebisawa, Titarchuk, & Chakrabarti (1996) predicted a sharp cutoff at $`E/m_\mathrm{e}c^2\sim \dot{m}^{-1}`$. Given that $`\dot{m}>2`$, as required by $`\mathrm{\Gamma }<3`$, is typically observed, this cutoff would be significantly below 0.5 MeV. In addition, a cutoff or a break around $`m_\mathrm{e}c^2`$ is expected regardless of the value of $`\dot{m}`$ due to relativistic effects. First, the Klein-Nishina cross-section decreases with energy ($`\sigma _{\mathrm{KN}}\simeq 0.4\sigma _\mathrm{T}`$ at $`m_\mathrm{e}c^2`$ in the electron rest frame), which results in a spectral curvature. Second, photons with energies around $`m_\mathrm{e}c^2`$ are produced relatively close to the horizon, which results in only backscattered photons (whose energy is much less than the average energy after scattering) escaping the flow due to light bending. A related effect is that the escaping photons scattered close to the horizon will have their energies strongly reduced by the gravitational redshift.
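As a back-of-the-envelope check of that scaling (a coefficient of order unity is assumed; the names are mine):

```python
M_E_C2_KEV = 511.0  # electron rest energy in keV


def bmc_cutoff_kev(mdot):
    """Order-of-magnitude cutoff from E / (m_e c^2) ~ 1 / mdot,
    as in the nonrelativistic estimate quoted above."""
    return M_E_C2_KEV / mdot
```

For $`\dot{m}=2`$ this already gives about 256 keV, below 0.5 MeV, and larger $`\dot{m}`$ pushes the predicted cutoff lower still.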
All those effects have been taken into account in the Monte Carlo simulations, in the Schwarzschild metric, of LT99, whose work fully confirms the considerations above. Indeed, all model spectra shown by LT99 have sharp high-energy cutoffs above $`\sim 100`$ keV, with the flux at 200 keV being $`<0.5`$ of the extrapolation of the high-energy power law, as illustrated in Figure 5. This is also in agreement with the presence of sharp cutoffs at $`\sim 200`$ keV obtained in BMC models of Titarchuk, Mastichiadis, & Kylafis (1997) and Psaltis & Lamb (1999).
On the other hand, no high-energy cutoff has yet been discovered in the soft state of BH binaries. In particular, the OSSE spectra show no trace of any break around $`\sim 100`$–200 keV in the cases of the soft state of Cyg X-1 (G99), GRS 1915+105, GRS 1009–45, 4U 1543–47, GRS 1716–249, and GRO J1655–40 (G98). A spectrum of the last object from G98 (accumulated over $`\sim 30`$ observing days) is shown in Figure 6. We clearly see no hint of a cutoff up to at least 600 keV. More recent data show no cutoff to even higher energies (E. Grove, private comm.).
This lack of a cutoff is clearly incompatible with the BMC spectra. This is illustrated in Figure 6, which also shows the theoretical spectrum of LT99 for $`\dot{m}=2`$. This spectrum matches well the low-energy slope of the OSSE spectrum, but then shows a sharp cutoff with no photons found in the simulation above 200 keV. This strongly rules out the BMC model.
We note that this conclusion has not been reached by proponents of this model because they have performed no fits to the OSSE data. Shrader & Titarchuk (1998) fitted observations of GRO J1655–40 and GRS 1915+105 from RXTE/PCA, whose high-quality data extend up to $`<50`$ keV only. They also fitted data from CGRO/BATSE, an instrument that lacks the sensitivity to constrain the spectra at $`>100`$ keV. Parenthetically, we note that the models they show have no high-energy break up to $`300`$ keV, contrary to their reference to the models of Titarchuk et al. (1997), which have a sharp cutoff at $`200`$ keV. Then, Borozdin et al. (1999) fitted RXTE data of the above two objects, as well as of XTE J1755–324 and GRS 1739–278, and Exosat/GSPC data of EXO 1846–031. The usable energy ranges of all those data extend to $`<100`$ keV (see Figure 1 in Borozdin et al. 1999) and thus cannot be used to test the BMC model. Borozdin et al. (1999) do show OSSE data for two 6-day observing periods of GRO J1655–40 and for an observation of XTE J1755–324, but they do not present any fits to them. Finally, Shrader & Titarchuk (1999) fitted RXTE/PCA data for LMC X-1 and Ginga data for Nova Muscae 1991, both of which extend to energies $`<30`$ keV. In summary, all of the fitted data are insensitive to the presence or absence of the spectral breaks predicted by the BMC model to occur at $`100`$–200 keV.
In addition to the main problem of the high-energy cutoff, the BMC model appears to have a number of other problems when confronted with data. One issue involves the predicted dependence $`\mathrm{\Gamma }(\dot{m})`$. It can be compared to the soft-state data for Cyg X-1, for which $`\dot{m}\approx 0.5`$ and the observed $`\mathrm{\Gamma }\approx 2.5`$ (G99). On the other hand, Table 2 in LT99 gives $`\mathrm{\Gamma }=3.8`$ at this $`\dot{m}`$, i.e., a much softer spectrum than observed. Thus, unless advection strongly dominates in the soft state of Cyg X-1 and the actual $`\dot{m}>4`$, the BMC model is ruled out in this case, independently of the evidence from the lack of an observed high-energy cutoff.
On the other hand, a significant hardening of the slope can be obtained if the free-falling electrons also have thermal motion with a high enough temperature. Results of LT99 (see their Table 2) show that the slope can be hardened by $`\mathrm{\Delta }\mathrm{\Gamma }\approx 1`$ if $`kT=50`$ keV. However, the spectral formation is then almost fully due to thermal Comptonization, and although the BMC process is still taking place, its role is negligible. For example, at $`\dot{m}=2`$, an increase of $`kT`$ from 5 keV (when BMC dominates) to 50 keV leads to an increase of the photon flux at 100 keV by a factor of $`100`$ (see Figures 4 and 6 in LT99), an effect entirely due to Compton scattering on electrons with velocities dominated by their thermal motion. This thermal-Comptonization model can still be ruled out for the soft state based on the energy of its high-energy cutoff (see §3.1 above).
Another problem for the BMC model (as well as for any model with the source of the hard emission located below $`r_{\mathrm{ms}}`$) is the detection of Compton reflection in the soft-state spectra of GRO J1655–40 (Tomsick et al. 1999), GRS 1915+105 (Coppi 1999), Nova Muscae (Życki et al. 1998; 1999), and Cyg X-1 (G99; GCR99); see Figure 7. In objects which show state transitions, the strength of Compton reflection is highest in the soft state (GCR99; Życki et al. 1998; 1999) and usually consistent with $`\mathrm{\Omega }\approx 2\pi `$ (G99; Coppi 1999). This is clearly incompatible with the geometry of the BMC model, in which a thin disk is outside the central, spherical inflow (see Figure 2 in LT99).
Chakrabarti & Titarchuk (1995) have addressed this problem by proposing that the observed reflection-like features are due to partial covering by an absorbing medium with $`\tau _\mathrm{T}\approx 3`$. An absorbing medium has been detected in GRO J1655–40 at a distance of $`10^{10}`$ cm, but with $`\tau _\mathrm{T}\approx 0.2`$ only (Ueda et al. 1998). This low $`\tau _\mathrm{T}`$ cannot explain the observed reflection features. On the other hand, as noted by Chakrabarti & Titarchuk (1995), the Fe K$`\alpha `$ line will be very weak at $`\tau _\mathrm{T}\approx 3`$ (results of Makishima 1986 imply an equivalent width of $`10`$ eV), whereas the data show lines with typical equivalent widths $`>100`$ eV (Życki et al. 1998; Tomsick et al. 1999; G99). Furthermore, those data show broadening of the Fe K$`\alpha `$ line, implying that most of the reflection takes place from an inner disk, and arguing against disk flaring as a possible explanation of the large reflection fraction.
Another issue to be considered is the time lags of harder X-rays with respect to softer ones observed in the soft state (Cui et al. 1997; Li, Feng, & Chen 1999; Cui 1999) as well as in the hard state. In the hard state, those lags have been interpreted as due to either delays between consecutive Compton scatterings in a halo with a large size, $`r>10^4`$ (Kazanas, Hua, & Titarchuk 1997; Hua, Kazanas, & Titarchuk 1997), spectral evolution of a disk-corona system (Poutanen & Fabian 1999), or drift of blobs in a hot disk (Böttcher & Liang 1999). The lags in the soft state are shorter than those in the hard state but still reach $`>10`$ ms for photons in the tail with respect to the blackbody-peak photons in the case of Cyg X-1 (Figure 8 in Cui et al. 1997; Figure 5 in Li et al. 1999; Figure 2 in Cui 1999). On the other hand, the characteristic time lag expected due to scattering in a converging flow below $`r_{\mathrm{ms}}`$ is $`\sim 0.2`$ ms, i.e., much less than that observed.
Thus, the BMC model can be rejected based on the observed absence (in all objects with data extending to soft $`\gamma `$-rays) of a high-energy cutoff around $`100`$–200 keV. This cutoff is the specific prediction of the BMC model, making it highly testable. Furthermore, the model disagrees with data on some predictions related to its geometry of a central, very compact source, notably on those regarding the Fe K features and timing properties.
Naturally, the problems discussed above can be solved by assuming that the scattering electrons have sufficiently high nonthermal velocities, that the size of the source is much larger than $`r_{\mathrm{ms}}`$, and that there is a significant overlap between the hot plasma and the cold disk (possibilities mentioned by LT99). Then, however, the spectral formation will be due to Compton scattering by electrons with velocities dominated by their nonthermal motion, with their bulk motion playing a negligible role, and the model will lose its identity and become virtually indistinguishable from the nonthermal corona model (described below).
### 3.3. Nonthermal Comptonization
The spectral constraints discussed above strongly point to (1) a radiative process capable of producing power-law spectra with no cutoffs up to $`\sim 1`$ MeV, and (2) a geometry with a large solid angle subtended by the reflector as measured from the source of the power-law emission. Natural candidates for that radiative process and geometry are single Compton scattering of the blackbody photons by power-law electrons, and a disk-corona geometry, respectively. Such a model has been proposed by Poutanen & Coppi (1998) and developed in detail and tested against Cyg X-1 data by G99 (using the code of Coppi 1999).
The model consists of a corona above a standard, optically-thick accretion disk. Selected electrons from a thermal distribution in the corona are accelerated to relativistic energies, possibly in reconnection events. The relativistic electrons Compton upscatter the disk photons, forming the high-energy tail. The relativistic electrons also transfer some of their energy via Coulomb scattering to the thermal electrons (at the lowest energies of the total distribution), heating them to a temperature much above the Compton temperature. The thermal electrons then also efficiently upscatter the disk photons (in addition to nonthermal upscattering), which process forms the excess below $`10`$ keV observed in Cyg X-1; see Figure 7. The radiation of the corona is also partly Compton-reflected from the disk, as observed (Figure 7).
An important parameter of the coronal plasma is its compactness, i.e., the ratio of the luminosity to size. At a high compactness, copious $`e^\pm `$ pairs are produced in photon-photon collisions, which then leads to a distinct pair-annihilation feature (e.g., see Svensson 1987). Such a feature is not seen, which constrains the compactness from above. At a low compactness, Coulomb energy losses of relativistic electrons become dominant over the Compton losses. This leads to a break in the steady-state distribution of relativistic electrons. This, in turn, leads to a corresponding break in the photon spectrum. Again, such a break is not seen, which constrains the compactness from below. In the case of Cyg X-1, the allowed range of compactnesses corresponds to a characteristic size of the order of tens of $`GM/c^2`$ (G99). This corresponds to the range of radii at which most of the accretion energy is dissipated.
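To make the parameter concrete, the standard dimensionless compactness is l = L·σ_T/(R·m_e·c³). A minimal numerical sketch follows; the luminosity, mass, and radius used below are illustrative assumptions, not values taken from the fits discussed in the text:

```python
# Dimensionless coronal compactness, l = L * sigma_T / (R * m_e * c^3).
# The luminosity, mass, and radius below are illustrative assumptions only.
SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]
M_E = 9.109e-28       # electron mass [g]
C = 2.998e10          # speed of light [cm/s]
G = 6.674e-8          # gravitational constant [cgs]
M_SUN = 1.989e33      # solar mass [g]

def compactness(L, m, r):
    """L in erg/s, m = M/M_sun, r in units of GM/c^2."""
    R = r * G * m * M_SUN / C**2          # physical size [cm]
    return L * SIGMA_T / (R * M_E * C**3)

# e.g. a 10 M_sun black hole with a coronal luminosity of 3e37 erg/s at r = 40:
l_cor = compactness(3e37, 10.0, 40.0)     # of order ten
```

With these (assumed) numbers the corona sits at a compactness of order ten; the actual allowed range for Cyg X-1 is the one derived in G99.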
Note that this characteristic size is also in agreement with the timing data of Cyg X-1 in the soft state. The break frequency in the PDS spectrum is at 13–14 Hz (Cui et al. 1997), which corresponds to the Keplerian frequency at $`r\sim 40`$ (at $`m=10`$); see Equation (1). Also, the 6.5–13 keV photons are observed to lag the 2–6.5 keV ones by $`\sim 2`$ ms on average (over the 1–10 Hz Fourier periods; see Figure 9 in Cui et al. 1997). These two energy ranges are dominated by upscattering of disk photons by the thermal part of the electron distribution (G99); see Figure 7. If this time lag is interpreted as being due to light-travel delays in a scattering medium, the resulting characteristic size is $`r\sim 40`$ as well. Furthermore, the location of the corona at such radii is also consistent with the observed broadening of the Fe K$`\alpha `$ line in Cyg X-1 (G99).
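The two independent size estimates quoted above can be checked with elementary formulas (a sketch; it assumes a Keplerian frequency nu = c³/(2πGM)·r^(-3/2) with r in units of GM/c², a one-way light-travel time r·GM/c³, and m = M/M_sun = 10):

```python
# Quick consistency check of the two r ~ 40 estimates for the soft state of
# Cyg X-1 (a sketch; Newtonian Keplerian frequency, mass m = M/M_sun = 10).
import math

G, C, M_SUN = 6.674e-8, 2.998e10, 1.989e33   # cgs units

def kepler_freq_hz(r, m):
    """Keplerian orbital frequency at radius r (in GM/c^2) for mass m (M_sun)."""
    return C**3 / (2.0 * math.pi * G * m * M_SUN) * r**-1.5

def light_travel_ms(r, m):
    """One-way light-travel time across r gravitational radii, in ms."""
    return r * G * m * M_SUN / C**3 * 1e3

nu = kepler_freq_hz(40.0, 10.0)    # ~13 Hz, close to the observed PDS break
lag = light_travel_ms(40.0, 10.0)  # ~2 ms, close to the observed time lag
```

Both numbers land on the observed values, which is why the two arguments point to the same characteristic radius.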
Another parameter of interest is the power-law index, $`p`$, of the rate of electron acceleration. Its value determines the photon index of the high-energy tail via the relation $`p\approx 2(\mathrm{\Gamma }-1)`$ (taking into account the steepening of the electron distribution due to the energy loss). Then, the typical value of $`\mathrm{\Gamma }`$ being $`\approx 2.5`$ implies that the acceleration in the corona proceeds at a rate $`\gamma ^{-3}`$.
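The bookkeeping behind this relation can be spelled out as a chain of standard power-law indices (a sketch of the index arithmetic only, not of the full kinetic calculation of G99): injection at a rate ∝ γ^(-p), radiative cooling steepening the steady-state distribution by one power, and Comptonization of soft photons off N(γ) ∝ γ^(-s) giving a photon index Γ = (s+1)/2.

```python
# Standard power-law index chain behind p ~ 2*(Gamma - 1) (a sketch only).
def electron_steady_index(p):
    # radiative cooling (dgamma/dt ~ -gamma^2) steepens an injected gamma^-p
    # spectrum by one power in steady state
    return p + 1

def photon_index(s):
    # Compton upscattering off N(gamma) ~ gamma^-s gives photon index (s+1)/2
    return (s + 1) / 2.0

def acceleration_index(Gamma):
    # inverting the chain: p = 2*(Gamma - 1)
    return 2.0 * (Gamma - 1.0)

Gamma = photon_index(electron_steady_index(3.0))   # p = 3  ->  Gamma = 2.5
```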
The relative normalization of the tail with respect to the blackbody implies, from energy balance, the fraction of the accretion power released in the corona. In the case of Cyg X-1, it is $`\sim 0.5`$. Also, the disk in the soft state of Cyg X-1 is found to be gas-pressure dominated all the way to $`r_{\mathrm{ms}}`$ and, thus, stable (G99).
This nonthermal model has been successfully fitted to soft-state spectra of Cyg X-1 measured by ASCA, RXTE, and OSSE (G99) and by BeppoSAX (Frontera et al. 2000), as well as to RXTE data on GRS 1915+105 by Coppi (1999). Figures 1 and 7 present fits to Cyg X-1 data from ASCA, RXTE, and OSSE (G99).
## 4. Comparison with AGNs and Neutron Stars
X-ray spectra of Seyferts show power-law indices and reflection components rather similar to those of BH binaries in the hard state, as illustrated in Figure 8. It shows results from Ginga (ZLS99) as well as some RXTE fit results for MCG −6-30-15 (Lee et al. 1998), MCG −5-23-16 (Weaver, Krolik, & Pier 1998), NGC 5548 (Chiang et al. 2000), and IC 4329A (Done, Madejski, & Smith 2000).
We see that the popular notion that Compton reflection is weaker in BH binaries than in Seyferts is not confirmed by the data shown here. Specifically, BH binaries with hard spectra have stronger Compton-reflection components than Seyfert 1s with the same $`\mathrm{\Gamma }`$ (in the range $`\mathrm{\Gamma }<1.8`$). This is equivalent to AGNs having softer spectra at a given $`R`$. This effect can be explained by the difference in typical blackbody temperatures between BH binaries and AGNs. For a given amplification factor (presumably controlled by geometry), higher blackbody temperatures (in BH binaries) result in harder X-ray spectra (see Figure 10 in Z99).
We also see that Seyferts show much more scatter than BH binaries on the $`R`$-$`\mathrm{\Gamma }`$ diagram. This may be related to a wider range of physical conditions in Seyferts than in BH binaries. For instance, molecular tori in Seyferts are likely to contribute to reflection without a noticeable effect on the cooling of the central, hot plasma, which might explain objects with $`R>1`$. On the other hand, outflows (Beloborodov 1999a, b) may explain the weakness of reflection in some objects, especially broad-line radio galaxies (Woźniak et al. 1998; Z99); see Figure 8.
Then, Seyfert 1s with soft spectra and strong reflection, like MCG −6-30-15, may represent AGN counterparts of the soft state of BH binaries (as discussed in Done et al. 2000). On the other hand, those counterparts may be given by Narrow-Line Seyfert 1s, as proposed by Pounds, Done, & Osborne (1995), although the timing properties of the two classes appear to differ.
Typical plasma temperatures in Seyferts are relatively poorly determined (e.g., see Z99; Done et al. 2000), but they still appear similar to $`kT`$ in BH binaries. In particular, the Seyfert with the best-known soft $`\gamma `$-ray spectrum, NGC 4151 (Johnson et al. 1997), has an (intrinsic) average X$`\gamma `$ spectrum virtually identical to the Ginga-OSSE spectrum of GX 339–4 shown in Figure 2 (Z98). Then, the similarity of the values of both $`\mathrm{\Gamma }`$ and $`kT`$ implies similar values of $`\tau _\mathrm{T}`$ ($`\sim 1`$).
Among neutron-star binaries, the class closest to BH binaries is that of Type 1 X-ray bursters, which are characterized by disk accretion at a relatively low $`\dot{m}`$ and by magnetic fields weak enough not to dominate the dynamics of accretion. Between thermonuclear bursts, they show two spectral states, low (hard) and high (soft), similarly to BH binaries. The spectral and timing properties of BH binaries and X-ray bursters are relatively similar, as recently discussed by Barret et al. (2000). The main difference is that the X-ray spectra of bursters in the low state are, on average, softer (with $`\mathrm{\Gamma }>1.9`$) than those of BH binaries. They also show Compton reflection components with a range of $`R`$, roughly obeying the $`R`$-$`\mathrm{\Gamma }`$ correlation; see Figure 8. This figure shows the RXTE data for GS 1826–238 and SLX 1735–269 (Barret et al. 2000) and Ginga data for GS 1826–238 and 4U 1608–522 (ZLS99).
When fitted by thermal Comptonization, bursters in the low state usually show high-energy cutoffs corresponding to $`kT<30`$ keV, whereas BH binaries have $`kT>50`$ keV (Z98; Barret et al. 2000). However, there are cases of low-state spectra of bursters extending above $`100`$ keV without a measurable cutoff, e.g., 4U 0614+091 (Piraino et al. 1999) and SAX J1810.8–2609 (Natalucci et al. 2000), which seems to happen for relatively soft power laws with $`\mathrm{\Gamma }>2`$. In the case of 4U 0614+091, the power law is accompanied by reflection with $`R>1`$ showing a strong $`R`$-$`\mathrm{\Gamma }`$ correlation, but offset to $`\mathrm{\Gamma }\approx 2.4`$–3 (Piraino et al. 1999), which $`\mathrm{\Gamma }`$ is similar to those seen in the soft state of BH binaries (§3).
## 5. Conclusions
1. The main radiative process in the hard state of BH binaries is thermal Comptonization with $`kT\sim 50`$–100 keV and $`\tau _\mathrm{T}\sim 1`$.
2. The relative strength of Compton reflection correlates with the X-ray spectral index and with peak frequencies in the PDS spectrum. The simplest interpretation of these correlations appears to be in terms of a cold accretion disk overlapping with a central, hot disk.
3. Models of the power-law tail in the soft state in terms of thermal and bulk-motion Comptonization are shown to be ruled out by the data. An alternative, successful model involves Compton scattering by nonthermal electrons in a corona.
4. X$`\gamma `$ spectra of BH binaries in the hard state are very similar to those of Seyfert 1s, although the latter show more diversity in their spectral properties.
5. X$`\gamma `$ spectra of BH binaries in the hard state have, on average, harder X-ray power laws and higher high-energy cutoffs (corresponding to $`kT>50`$ keV) than those of X-ray bursters.
#### Acknowledgments.
This research has been supported in part by a grant from the Foundation for Polish Science and the KBN grants 2P03C00511p0(1,4) and 2P03D00614. I thank Marat Gilfanov, Eric Grove, and Philippe Laurent for providing me with their results in numerical form, and Andrei Beloborodov, Chris Done, Juri Poutanen, and Lev Titarchuk for valuable comments.
## References
Abramowicz, M. A., Chen, X., Kato, S., Lasota, J.-P., & Regev, O. 1995, ApJ, 438, L37
Barret, D., et al. 1992, ApJ, 394, 615
Barret, D., Olive, J. F., Boirin, L., Done, C., Skinner, G. K., & Grindlay, J. E. 2000, ApJ, 533, 329
Beloborodov, A. M. 1999a, ApJ, 510, L123
Beloborodov, A. M. 1999b, in ASP Conf. Ser. Vol. 161, High Energy Processes in Accreting Black Holes, eds. J. Poutanen & R. Svensson (San Francisco: ASP), 295
Blandford, R. D., & Payne, D. G. 1981, MNRAS, 194, 1041
Borozdin, K., Revnivtsev, M., Trudolyubov, S., Shrader, C., & Titarchuk, L. 1999, ApJ, 517, 367
Böttcher, M., & Liang, E. P. 1999, ApJ, 511, L37
Chakrabarti, S. K., & Titarchuk, L. G. 1995, ApJ, 455, 623
Chiang, J., Reynolds, C. S., Blaes, O. M., Nowak, M. A., Murray, N., Madejski, G. M., Marshall, H. L., & Magdziarz, P. 2000, ApJ, 528, 292
Colpi, M. 1988, ApJ, 326, 223
Coppi, P. S. 1999, in ASP Conf. Ser. Vol. 161, High Energy Processes in Accreting Black Holes, eds. J. Poutanen & R. Svensson (San Francisco: ASP), 375
Cui, W. 1999, in ASP Conf. Ser. Vol. 161, High Energy Processes in Accreting Black Holes, eds. J. Poutanen & R. Svensson (San Francisco: ASP), 97
Cui, W., Ebisawa, K., Dotani, T., & Kubota, A. 1998, ApJ, 493, L75
Cui, W., Zhang, S. N., Focke, W., & Swank, J. H. 1997, ApJ, 484, 383
Done, C., Madejski, G. M., & Życki, P. T. 2000, ApJ, in press
Done, C., Mulchaey, J. S., Mushotzky, R. F., & Arnaud, K. A. 1992, ApJ, 395, 275
Done, C., & Życki, P. T. 1999, MNRAS, 305, 457
Eardley, D. M., Lightman, A. P., & Shapiro, S. L. 1975, ApJ, 199, L153
Ebisawa, K., Titarchuk, L., & Chakrabarti, S. K. 1996, PASJ, 48, 59
Esin, A. A., McClintock, J. E., & Narayan, R. 1997, ApJ, 489, 865
Esin, A. A., Narayan, R., Cui, W., Grove, J. E., & Zhang, S.-N. 1998, ApJ, 505, 854
Frontera, F., Palazzi, E., Zdziarski, A. A., et al. 2000, ApJ, submitted
Gierliński, M., Zdziarski, A. A., Done, C., Johnson, W. N., Ebisawa, K., Ueda, Y., Haardt, F., & Phlips, B. F. 1997, MNRAS, 288, 958
Gierliński, M., Zdziarski, A. A., Poutanen, J., Coppi, P., Ebisawa, K., & Johnson, W. N. 1999, MNRAS, 309, 496 (G99)
Gilfanov, M., Churazov, E., & Revnivtsev, M. 1999, A&A, 352, 182 (GCR99)
Goldwurm, A., et al. 1996, A&A, 310, 857
Grove, J. E., Johnson, W. N., Kroeger, R. A., McNaron-Brown, K., & Skibo, J. G. 1998, ApJ, 500, 899 (G98)
Haardt, F., Done, C., Matt, G., & Fabian, A. C. 1993, ApJ, 411, L95
Harmon, B. A., Wilson, C. A., Tavani, M., Zhang, S. N., Rubin, B. C., Paciesas, W. S., Ford, E. C., & Kaaret, P. 1996, A&AS, 120, C197
Hua, X.-M., Kazanas, D., & Titarchuk, L. 1997, ApJ, 482, L57
Johnson, W. N., McNaron-Brown, K., Kurfess, J. D., Zdziarski, A. A., Magdziarz, P., & Gehrels, N. 1997, ApJ, 482, 173
Kazanas, D., Hua, X.-M., & Titarchuk, L. 1997, ApJ, 480, 735
Laurent, P., & Titarchuk, L. 1999, ApJ, 511, 289 (LT99)
Lee, J. C., Fabian, A. C., Reynolds, C. S., Iwasawa, K., & Brandt, W. N. 1998, MNRAS, 300, 583
Li, T. P., Feng, Y. X., & Chen, L. 1999, ApJ, 521, 789
Lightman, A. P., & White, T. R. 1988, ApJ, 335, 57
Magdziarz, P., & Zdziarski, A. A. 1995, MNRAS, 273, 837
Makishima, K. 1986, in The Physics of Accretion onto Compact Objects, eds. K. O. Mason, M. G. Watson, & N. E. White (Berlin: Springer), 249
Miyamoto, S., Kimura, K., Kitamoto, S., Dotani, T., & Ebisawa, K. 1991, ApJ, 383, 784
Muchotrzeb, B., & Paczyński, B. 1982, Acta Astron., 32, 1
Narayan, R., & Yi, I. 1995, ApJ, 452, 710
Natalucci, L., Bazzano, A., Cocci, M., Ubertini, P., Heise, J., Kuulkers, E., in 't Zand, J. J. M., & Smith, M. J. S. 2000, ApJ, 536, in press
Piraino, S., Santangelo, A., Ford, E. C., & Kaaret, P. 1999, A&A, 349, L77
Pounds, K. A., Done, C., & Osborne, J. P. 1995, MNRAS, 277, L5
Poutanen, J., & Coppi, P. S. 1998, Physica Scripta, T77, 57
Poutanen, J., & Fabian, A. C. 1999, MNRAS, 306, L31
Poutanen, J., Krolik, J. H., & Ryde, F. 1997, MNRAS, 292, L21
Psaltis, D., & Lamb, F. K. 1999, in ASP Conf. Ser. Vol. 161, High Energy Processes in Accreting Black Holes, eds. J. Poutanen & R. Svensson (San Francisco: ASP), 410
Revnivtsev, M., Gilfanov, M., & Churazov, E. 1999, A&A, 347, L23
Revnivtsev, M., Gilfanov, M., & Churazov, E. 2000, A&A, in press (RGC00)
Shapiro, S. L., Lightman, A. P., & Eardley, D. M. 1976, ApJ, 204, 187
Shrader, C. R., & Titarchuk, L. 1998, ApJ, 499, L31
Shrader, C. R., & Titarchuk, L. 1999, ApJ, 521, L121
Sunyaev, R. A., & Titarchuk, L. G. 1980, A&A, 86, 121
Sunyaev, R. A., & Trümper, J. 1979, Nature, 279, 506
Svensson, R. 1987, MNRAS, 227, 403
Thorne, K. S., & Price, R. H. 1975, ApJ, 195, L101
Titarchuk, L., Mastichiadis, A., & Kylafis, N. D. 1997, ApJ, 487, 834
Tomsick, J. A., Kaaret, P., Kroeger, R. A., & Remillard, R. A. 1999, ApJ, 512, 892
Ueda, Y., Ebisawa, K., & Done, C. 1994, PASJ, 46, 107
Ueda, Y., Inoue, H., Tanaka, Y., Ebisawa, K., Nagase, F., Kotani, T., & Gehrels, N. 1998, ApJ, 492, 782
Wardziński, G., & Zdziarski, A. A. 2000, MNRAS, 314, 183
Weaver, K. A., Krolik, J. H., & Pier, E. A. 1998, ApJ, 498, 213
Woźniak, P. R., Zdziarski, A. A., Smith, D., Madejski, G. M., & Johnson, W. N. 1998, MNRAS, 299, 449
Zdziarski, A. A. 1998, MNRAS, 296, L51
Zdziarski, A. A. 1999, in ASP Conf. Ser. Vol. 161, High Energy Processes in Accreting Black Holes, eds. J. Poutanen & R. Svensson (San Francisco: ASP), 16, astro-ph/9812449 (Z99)
Zdziarski, A. A., Lubiński, P., & Smith, D. A. 1999, MNRAS, 303, L11 (ZLS99)
Zdziarski, A. A., Poutanen, J., Mikołajewska, J., Gierliński, M., Ebisawa, K., & Johnson, W. N. 1998, MNRAS, 301, 435 (Z98)
Zdziarski, A. A., et al., in preparation
Życki, P. T., Done, C., & Smith, D. A. 1998, ApJ, 496, L25
Życki, P. T., Done, C., & Smith, D. A. 1999, MNRAS, 305, 231
# Parametric dependent Hamiltonians, wavefunctions, random-matrix-theory, and quantal-classical correspondence
## I Introduction
Consider a system whose total Hamiltonian is $`\mathcal{H}(Q,P;x)`$, where $`(Q,P)`$ is a set of canonical coordinates, and $`x`$ is a constant parameter. This parameter may represent the effect of some externally controlled field. We assume that both $`\mathcal{H}_0=\mathcal{H}(Q,P;x_0)`$ and $`\mathcal{H}=\mathcal{H}(Q,P;x)`$ generate classically chaotic dynamics of similar nature. Moreover, we assume that $`\delta x\equiv (x-x_0)`$ is classically small, meaning that it is possible to apply linear analysis in order to describe how the energy surfaces $`\mathcal{H}(Q,P;x)=E`$ are deformed as a result of changing the value of $`x`$. Quantum mechanically, we can use a basis where $`\mathcal{H}_0=\mathbf{E}_0`$ has a diagonal representation, while
$`\mathcal{H}=\mathbf{E}_0+\delta x\mathbf{B}`$ (1)
For reasonably small $`\hbar `$, it follows from general semiclassical considerations that $`\mathbf{B}`$ is a banded matrix. Generically, this matrix looks random, as if its off-diagonal elements were independent random numbers.
It was the idea of Wigner, forty years ago, to study a simplified model where the Hamiltonian is given by Eq.(1), and where $`\mathbf{B}`$ is a random banded matrix. This is known as Wigner's banded random matrix (WBRM) model. The applicability of such a model is a matter of conjecture. Obviously this conjecture should be tested. The most direct way to test it, which we are going to apply, is to take the matrix $`\mathbf{B}`$ of a "physical" Hamiltonian, and then to randomize the signs of its off-diagonal elements. The outcome of such an operation will be referred to as the effective WBRM model that is associated with the physical Hamiltonian. One issue of this paper is to make a comparison between the eigenstates of the physical Hamiltonian and those of the associated effective WBRM model.
The standard WBRM model (unlike the "effective" one) involves an additional simplification. Namely, one assumes that $`\mathbf{B}`$ has a rectangular band profile. The theory of eigenstates for the standard WBRM model is well known. Increasing $`x`$, starting from $`\delta x=0`$, the eigenstates of Eq.(1) change their nature. The general questions to address are:
1. What are the parametric regimes in the parametric evolution of the eigenstates?
2. How does the structure of the eigenstates change as we go through the subsequent regimes?
Recently, some ideas have been introduced on how to go beyond Wigner's theory in the case of physical Hamiltonians. It has been suggested that there are at least three generic parametric scales $`\delta x_c^{\text{qm}}\ll \delta x_{\text{prt}}\ll \delta x_{\text{SC}}`$ that control the parametric evolution of the eigenstates. We shall define these parametric scales later. Accordingly, one should distinguish between the standard perturbative regime ($`\delta x\ll \delta x_c^{\text{qm}}`$), the core-tail regime ($`\delta x_c^{\text{qm}}\ll \delta x\ll \delta x_{\text{prt}}`$), and the semiclassical regime ($`\delta x\gg \delta x_{\text{SC}}`$).
The purpose of this paper is not just to numerically establish (for the first time) the existence of the parametric regimes suggested in previous work, but mainly to address question (2) above. Namely, we would like to study how the structure of the eigenstates changes as we go through the subsequent regimes. In particular, we would like to understand the significance of RMT assumptions in the general theoretical considerations. The latter issue has been left unexplored in the "quantum chaos" literature. (Note, however, that literally the same question is addressed in numerous publications where the spectral statistics of eigenvalues, rather than the eigenstate structure, is concerned.) We also suggest a new procedure for "region analysis" of the eigenstate structure. We are going to distinguish between first-order tail regions (FOTRs), higher-order far-tail regions, and the non-perturbative (core) region. Our main conclusion is going to be that RMT is inadequate for the analysis of any features that go beyond first-order perturbation theory.
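For readers who want to experiment, the standard WBRM model of Eq.(1) is easy to set up numerically. Below is a minimal sketch; the matrix size, bandwidth, and perturbation strength are arbitrary illustrative choices, not the parameters used later in the paper:

```python
# Minimal numerical realization of the standard WBRM model: H = E0 + dx * B,
# with B a real symmetric random matrix with a rectangular band profile.
# Matrix size, bandwidth, and perturbation strength are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
N, b, dx, spacing = 400, 10, 0.5, 1.0

E0 = np.diag(spacing * np.arange(N))
B = rng.standard_normal((N, N))
B = np.triu(B, 1)
B = B + B.T                                   # symmetric, zero diagonal
i, j = np.indices((N, N))
B[np.abs(i - j) > b] = 0.0                    # rectangular band of half-width b

w, V = np.linalg.eigh(E0 + dx * B)            # V[:, n] is the n-th eigenstate
P = V**2                                      # P[m, n] = |<m(x0)|n(x)>|^2

# averaged profile P(r), staying away from the edges of the spectrum
r = np.arange(-3 * b, 3 * b + 1)
Pr = np.array([np.mean([P[m + k, m] for m in range(3 * b, N - 3 * b)]) for k in r])
```

Averaging the overlaps this way produces a profile peaked at r = 0 with tails extending over the bandwidth, which is the object studied throughout the rest of the paper.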
## II The model Hamiltonian
We study the Hamiltonian
$`\mathcal{H}(Q,P;x)=\frac{1}{2}(P_1^2+P_2^2+Q_1^2+Q_2^2)+xQ_1^2Q_2^2`$ (2)
with $`x=x_0+\delta x`$ and $`x_0=1`$. This Hamiltonian describes the motion of a particle in a 2D well (see Fig.1). The units are chosen such that the mass is equal to one, the frequency for small oscillations is one, and for $`\delta x=0`$ the coefficient of the anharmonic term is also one. The energy $`E`$ is the only dimensionless parameter of the classical motion. Our numerical study is focused on an energy window around $`E3`$ where the motion is mainly chaotic.
In the classical analysis there is only one parametric scale, which is $`\delta x_c^{\text{cl}}\sim 1`$. This scale determines the regime of (classical) linear analysis. For $`\delta x\ll \delta x_c^{\text{cl}}`$ the deformation of the energy surface $`\mathcal{H}_0(Q,P;x)=E`$ can be described as a linear process. Later we are going to give a precise mathematical formulation of this idea. From now on we assume that we are in the classical linear regime.
FIG.1: Left: equipotential contours of the model Hamiltonian (2) with $`x=x_0=1`$. Right: A Poincaré section of a long trajectory ($`0<t<1300`$) that we have picked in order to get the fluctuating quantity $`\mathcal{F}(t)`$. The initial conditions are $`(Q_1,Q_2,P_1,P_2)=(1,0,1,2)`$, corresponding to $`E=3`$. The trajectory is quite ergodic. It avoids some small quasi-integrable islands (the main one is around $`(0,0)`$).
Let us pick a very long ergodic trajectory $`(Q(t),P(t))`$ that covers densely the energy surface $`E`$. See Fig.1. Let us define the fluctuating quantity
$`\mathcal{F}(t)\equiv -(\partial \mathcal{H}/\partial x)=-Q_1^2Q_2^2`$ (3)
For the later analysis it is important to know the distribution of the variable $`\mathcal{F}`$, and to characterize its temporal correlations. The average value is $`F=\langle \mathcal{F}\rangle `$. The angular brackets stand for a microcanonical average over $`(Q(0),P(0))`$, which should be the same as a time ($`t`$) average (due to the assumed ergodicity). The auto-correlation function of $`\mathcal{F}(t)`$ is
$`C(\tau )=\langle (\mathcal{F}(t)-F)(\mathcal{F}(t+\tau )-F)\rangle `$ (4)
Note that $`C(\tau )`$ is independent of $`t`$, and that an average over $`t`$ should give the same result as a microcanonical average over $`(Q(0),P(0))`$.
The variance of the fluctuations is $`C(0)=\langle (\mathcal{F}-F)^2\rangle `$. The correlation time will be denoted by $`\tau _{\text{cl}}`$. Note that with our choice of units $`\tau _{\text{cl}}\approx 1.0`$ within the energy range of interest. The power spectrum $`\stackrel{~}{C}(\omega )`$ of the fluctuating $`\mathcal{F}(t)`$ is obtained via a Fourier transform of $`C(\tau )`$. See Fig.2. The average $`F`$ and the variance $`C(0)`$ determine just the first two moments of the $`\mathcal{F}`$ distribution. The probability density of $`\mathcal{F}`$ will be denoted by $`P_\text{F}(\mathcal{F})`$.
All the required information for the subsequent semiclassical analysis is contained in the functions $`C(\tau )`$ and $`P_\text{F}(\mathcal{F})`$ as defined above. All we have to do in order to numerically determine them is to generate one very long ergodic trajectory (see Fig.1), to compute the respective $`\mathcal{F}(t)`$, and from it to extract the desired information (see Fig.2 and Fig.3). It is convenient to express $`P_\text{F}(\mathcal{F})`$ in terms of a scaling function as follows
$`P_\text{F}(\mathcal{F})={\displaystyle \frac{1}{\sqrt{C(0)}}}\widehat{P}_{\text{cl}}\left({\displaystyle \frac{F-\mathcal{F}}{\sqrt{C(0)}}}\right)`$ (5)
By this definition the scaled distribution $`\widehat{P}_{\text{cl}}(f)`$ is characterized by a zero average ($`\langle f\rangle =0`$), a unit variance ($`\langle f^2\rangle =1`$), and it is properly normalized. Note that it is $`\widehat{P}_{\text{cl}}(-f)`$, rather than $`\widehat{P}_{\text{cl}}(f)`$, that corresponds to $`P_\text{F}(\mathcal{F})`$. This has been done for later convenience.
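The classical preparation step described in this section can be sketched in a few lines. The sketch below assumes an RK4 integrator, the sign convention F(t) = -dH/dx = -Q1²Q2², and a run shorter than the 0 < t < 1300 trajectory used for the figures:

```python
# Numerical sketch of the classical input: one chaotic orbit of
# H = (P1^2 + P2^2 + Q1^2 + Q2^2)/2 + Q1^2 Q2^2 (i.e. x = x0 = 1), sampled to
# estimate F = <F(t)>, the variance C(0), and the autocorrelation C(tau).
# Assumptions: RK4 with dt = 0.01, run length t = 400 (shorter than the paper's
# t = 1300), and the convention F(t) = -dH/dx = -Q1^2 Q2^2.
import numpy as np

def deriv(z):                       # z = (Q1, Q2, P1, P2); Hamilton's equations
    q1, q2, p1, p2 = z
    return np.array([p1, p2, -q1 - 2.0*q1*q2**2, -q2 - 2.0*q2*q1**2])

def rk4_orbit(z0, dt, nsteps):
    z, out = np.asarray(z0, float), np.empty((nsteps, 4))
    for k in range(nsteps):
        k1 = deriv(z); k2 = deriv(z + 0.5*dt*k1)
        k3 = deriv(z + 0.5*dt*k2); k4 = deriv(z + dt*k3)
        z = z + dt/6.0*(k1 + 2.0*k2 + 2.0*k3 + k4)
        out[k] = z
    return out

traj = rk4_orbit((1.0, 0.0, 1.0, 2.0), 0.01, 40000)   # same initial point, E = 3
F_t = -traj[:, 0]**2 * traj[:, 1]**2                  # fluctuating quantity F(t)
F_avg = F_t.mean()                                    # F = <F(t)>
dF = F_t - F_avg
C0 = (dF**2).mean()                                   # variance C(0)
nlag = 500                                            # lags tau = k*dt, k < nlag
C_tau = np.array([np.mean(dF[:len(dF)-k] * dF[k:]) for k in range(nlag)])
```

A Fourier transform of `C_tau` then gives the power spectrum entering the band-profile formula of the next section.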
## III The quantized Hamiltonian
Upon quantization we have a second dimensionless parameter, $`\hbar `$. For obvious reasons we are considering a de-symmetrized ($`1/8`$) well, with Dirichlet boundary conditions on the lines $`Q_1=0`$, $`Q_2=0`$, and $`Q_1=Q_2`$. The matrix representation of $`\mathcal{H}=\mathcal{H}(Q,P;x)`$ in the basis which is determined by $`\mathcal{H}(Q,P;0)`$ is very simple. The eigenstates ($`n=1,2,3,\dots `$) of the chaotic Hamiltonian $`\mathcal{H}_0=\mathcal{H}(Q,P;1)`$ have been found numerically.
The phase space volume ($`dQdP`$ integral) which is enclosed by an energy surface $`\mathcal{H}(Q,P;x)=E`$ is given by a function $`n=\mathrm{\Omega }(E,x)`$. It is convenient to measure phase space volume in units of $`(2\pi \hbar )^d`$, where $`d=2`$ is the dimensionality of our system. Upon quantization the phase space volume $`n`$ corresponds to the level index ($`n=1,2,3,\dots `$). This is known as Weyl's law. It follows that $`g(E)=\partial _E\mathrm{\Omega }(E,x)`$ corresponds to the density of states, and $`\mathrm{\Delta }=1/g(E)\propto \hbar ^d`$ is the mean level spacing.
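Ω(E), and hence Δ, can be estimated directly by Monte-Carlo sampling of the phase space. The sketch below treats the full (non-desymmetrized) well, so the desymmetrized spacing should be roughly 8 times larger; the sampling box, sample size, and finite-difference step are ad hoc choices:

```python
# Monte-Carlo sketch of Weyl's law for the model well: the number of states
# n = Omega(E)/(2 pi hbar)^2 below energy E, and the implied mean spacing
# Delta = 1/g(E).  Full (non-desymmetrized) well; box, sample size, and
# finite-difference step are ad hoc choices.
import numpy as np

rng = np.random.default_rng(1)

def weyl_count(E, hbar, nsamp=400_000, box=3.0):
    z = rng.uniform(-box, box, size=(nsamp, 4))       # (Q1, Q2, P1, P2)
    H = 0.5 * (z**2).sum(axis=1) + z[:, 0]**2 * z[:, 1]**2
    vol = (2.0 * box)**4 * np.mean(H <= E)            # phase-space volume
    return vol / (2.0 * np.pi * hbar)**2              # n = Omega / (2 pi hbar)^2

hbar, E, dE = 0.03, 3.0, 0.1
g = (weyl_count(E + dE, hbar) - weyl_count(E - dE, hbar)) / (2.0 * dE)
Delta = 1.0 / g     # full-well spacing; ~8x smaller than the desymmetrized value
```

Multiplying `Delta` by 8 for the desymmetrized well should land in the same ballpark as the spacing quoted in the next paragraph.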
In the following presentation we assume that our interest is restricted to an energy window which is "classically small" but "quantum mechanically large". In the numerical analysis of our model Hamiltonian the energy window was $`2.8<E<3.1`$, where the classical motion is predominantly chaotic. The mean level spacing for $`E\approx 3`$ is given approximately by the formula $`\mathrm{\Delta }\approx 4.3\hbar ^2`$. Our numerical analysis has been carried out for $`\hbar =0.03`$ and for $`\hbar =0.015`$. Smaller values of $`\hbar `$ were beyond our numerical capabilities, since the largest matrix that we can handle is of size $`5000\times 5000`$.
The representation of $`Q_1^2Q_2^2`$, in the basis which is determined by the chaotic Hamiltonian $`\mathcal{H}_0`$, gives the matrix $`\mathbf{B}`$ of Eq.(1). The banded matrix $`\mathbf{B}`$ and the band profile are illustrated in Fig.2. The band profile is implied by the semiclassical relation:
$`|\mathbf{B}_{nm}|^2\approx {\displaystyle \frac{\mathrm{\Delta }}{2\pi \hbar }}\stackrel{~}{C}\left({\displaystyle \frac{E_n-E_m}{\hbar }}\right)`$ (6)
As we see from Fig.2 the agreement with this formula is remarkable. For the bandwidth, Eq.(6) implies that $`\mathrm{\Delta }_b=2\pi \hbar /\tau _{\text{cl}}`$. It is common to define $`b=\mathrm{\Delta }_b/\mathrm{\Delta }`$.
FIG.2: The band profile $`(2\pi \hbar /\mathrm{\Delta })|\mathbf{B}_{nm}|^2`$ versus $`\omega =(E_n-E_m)/\hbar `$ is compared with the classical power spectrum $`\stackrel{~}{C}(\omega )`$. Inset: An image of a piece of the $`\mathbf{B}`$ matrix.
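The averaging that produces a band profile is simply a mean of the squared matrix elements over the sub-diagonals $`n-m=r`$. A schematic Python sketch, with a synthetic banded random matrix standing in for the matrix of Eq.(1) (the size and bandwidth are illustrative choices, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)
N, b = 400, 20                          # matrix size and bandwidth (illustrative)

# Synthetic banded symmetric matrix standing in for the matrix of Eq.(1).
B = rng.normal(size=(N, N))
B = (B + B.T) / 2.0
band_mask = np.abs(np.subtract.outer(np.arange(N), np.arange(N))) <= b
B = B * band_mask

def band_profile(B, r):
    """Average of the squared matrix elements along the sub-diagonal n - m = r."""
    return float(np.mean(np.diagonal(B, offset=r) ** 2))

profile = [band_profile(B, r) for r in range(2 * b)]
# The profile is O(1) inside the band (|r| <= b) and vanishes outside it.
```

For the physical matrix the profile inside the band is not flat but follows the classical power spectrum, as Eq.(6) asserts.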
## IV Definition of the LDOS profile
The quantum eigenstates of the Hamiltonian $`\mathcal{H}(Q,P;x)`$ are $`|n(x)\rangle `$, and the ordered eigen-energies are $`E_n(x)`$. We are interested in the parametric kernel
$`P(n|m)=|\langle n(x)|m(x_0)\rangle |^2=\text{trace}(\rho _n\rho _m)`$ (7)
In the equation above $`\rho _m(Q,P)`$ and $`\rho _n(Q,P)`$ are the Wigner functions that correspond to the eigenstates $`|m(x_0)\rangle `$ and $`|n(x)\rangle `$ respectively. The trace stands for a $`dQdP/(2\pi \hbar )^d`$ integration.
We can identify $`P(n|m)`$ as the local density of states (LDOS) by regarding it as a function of $`n`$, where $`m`$ is considered to be a fixed reference state. An average of $`P((m+r)|m)`$ over several $`m`$-states leads to the LDOS profile $`P(r)`$. Alternatively, fixing $`n`$, the vector $`P(n|m)`$ describes the shape of the $`n`$-th eigenstate in the $`\mathcal{H}_0`$ representation. By averaging $`P(n|(n-r))`$ over a few eigenstates one obtains the average shape of the eigenstate (ASOE). The ASOE is just $`P(-r)`$. Thus the ASOE and the LDOS are given by the same function. One would have to be more careful with these definitions if $`\mathcal{H}_0`$ were integrable while $`\mathcal{H}`$ were non-integrable.
The kernel $`P(n|m)`$ gives the overlap between the $`n`$th eigenstate of $`\mathcal{H}`$ and the $`m`$th eigenstate of $`\mathcal{H}_0`$. For $`\delta x=0`$ we simply have $`P(n|m)=\delta _{nm}`$. For $`\delta x>0`$ the kernel develops a structure, which is described by the LDOS profile $`P(r)`$. If $`\delta x`$ is very small then evidently $`P(r)`$ consists of a Kronecker delta (at $`r=0`$) and tail regions ($`|r|>0`$). Later we are going to distinguish between first-order tail regions (FOTRs) and higher-order far-tail regions. As $`\delta x`$ becomes larger a non-perturbative core region appears around $`r=0`$. Namely, the profile exhibits a bunch of states (rather than one) that share most of the probability. If $`\delta x`$ becomes even larger, the distinction between core and tail regions becomes meaningless, and the LDOS profile becomes purely non-perturbative. We are going to explain that the non-perturbative profile reflects the underlying classical phase space structure.
## V The classical approximation for the LDOS
The classical approximation for $`P(n|m)`$ follows naturally from the definition Eq.(7). It is obtained if we approximate $`\rho _n(Q,P)`$ by a microcanonical distribution that is supported by the energy surface $`\mathcal{H}(Q,P;x)=E_n(x)`$. Namely,
$`\rho _n(Q,P)`$ $`=`$ $`{\displaystyle \frac{1}{g(E)}}\delta (\mathcal{H}(Q,P;x)-E_n(x))`$ (8)
$`=`$ $`\delta (\mathrm{\Omega }(\mathcal{H}(Q,P;x))-n)`$ (9)
and a similar expression (with $`x=x_0`$) for $`\rho _m(Q,P)`$. In the classical limit $`n`$ is the phase space volume by which we label energy surfaces. Each energy surface $`n`$ is associated with a microcanonical state $`\rho _n(Q,P)`$. The classical LDOS profile will be denoted by $`P_{\text{cl}}(r)`$. The $`\delta x`$ regime where the classical approximation $`P(r)\approx P_{\text{cl}}(r)`$ applies will be discussed in a later section.
By definition, for $`\delta x\ll \delta x_c^{\text{cl}}`$ the deformed energy surfaces depart linearly from the $`\delta x=0`$ surfaces. As already stated in the Introduction, being in this classical linear regime is a fixed assumption of this paper. Now we want to explain the consequences of this assumption; one may regard these consequences as an operational definition of the classical linear regime. The dispersion (square root of the variance) of the classical profile in the classical linear regime is
$`\delta E_{\text{cl}}=\sqrt{C(0)}\times \delta x`$ (10)
(This should be divided by $`\mathrm{\Delta }`$ if we want the dispersion in proper $`r`$ units; see (13) below.) For our model Hamiltonian, for energies $`E\approx 3`$, we have found that $`\delta E_{\text{cl}}\approx 0.38\delta x`$. Eq.(10) can be regarded as a special consequence of the following scaling relation, which we derive below:
$`P_{\text{cl}}(r)={\displaystyle \frac{\mathrm{\Delta }}{\sqrt{C(0)}\delta x}}\widehat{P}_{\text{cl}}\left({\displaystyle \frac{\mathrm{\Delta }r}{\sqrt{C(0)}\delta x}}\right)`$ (11)
The scaling function has already been defined in Eq.(5), and it is illustrated in Fig.3. The classical profile $`P_{\text{cl}}(r)`$ is in general non-symmetric, but it follows from Eq.(11) that it must be characterized by $`\langle r\rangle =0`$. \[By definition the scaling function of Eq.(5) gives zero average.\] Another obvious feature is having sharp cutoffs, beyond which $`P_{\text{cl}}(r)=0`$. The existence of these outer "classically forbidden" regions follows from the observation that for large enough $`r`$ there is no longer classical overlap between the energy surfaces that correspond to $`|m(x_0)\rangle `$ and $`|n(x)\rangle `$ respectively.
The rest of this section is dedicated to technical clarifications of Eq.(11), and it can be skipped on first reading. The derivation is done in two steps. The first step is to establish a relation between $`P_{\text{cl}}(r)`$ and its trivially related version $`P_\text{E}(\epsilon )`$. The second step is to demonstrate that $`P_\text{E}(\epsilon )`$ is related to $`P_\text{F}(\mathcal{F})`$ of Eq.(5). It is also possible to make a one-step derivation that relates $`P_{\text{cl}}(r)`$ to $`P_\text{F}(\mathcal{F})`$, but we find the derivation below more physically appealing.
FIG.3: The scaled classical profile $`\widehat{P}_{\text{cl}}(f)`$. One unit on the horizontal axis corresponds to an energy difference $`\delta E_{\text{cl}}\approx 0.38\delta x`$. Note that $`\langle r\rangle =0`$ implies $`\langle E_n(x)-E_m(x_0)\rangle >0`$. The caustic is located at $`(E_n(x)-E_m(x_0))=0`$, while the anti-caustic is located at $`(E_n(x)-E_m(x_0))=1.65\delta x`$. The "forbidden regions" are defined as those regions where $`P_{\text{cl}}(r)=0`$. They are located to the left of the caustic and to the right of the anti-caustic.
By differentiating $`n=\mathrm{\Omega }(E,x)`$, keeping $`n`$ constant, we get the relation $`\delta E=-F(x)\delta x`$, where $`F(x)=\partial _x\mathrm{\Omega }(E,x)/g(E)`$ is known as the (generalized) conservative force. Using the latter expression it is a straightforward exercise to prove that $`F(x)=\langle \mathcal{F}\rangle `$. Alternatively, we can eliminate $`E`$ from the relation $`n=\mathrm{\Omega }(E,x)`$, and write the result as $`E=E_n(x)`$. Accordingly $`F(x)=-(\partial E_n(x)/\partial x)`$. Now we can write the following relation:
$`E_n(x)-E_m(x_0)={\displaystyle \frac{\partial E}{\partial x}}|_n\delta x+{\displaystyle \frac{\partial E}{\partial n}}|_x(n-m)`$ (12)
which can be re-written in the following form
$`\epsilon =-F(x)\delta x+(1/g(E))r`$ (13)
Whenever we regard the kernel $`P(n|m)`$ as a function of $`n-m`$ we use the notation $`P(r)`$. But sometimes it is convenient to regard $`P(n|m)`$ as an energy distribution $`P_\text{E}(\epsilon )`$. Due to the change of variables (13) we have the following relation:
$`P(r)={\displaystyle \frac{1}{g(E)}}P_\text{E}\left({\displaystyle \frac{1}{g(E)}}r-F(x)\delta x\right)`$ (14)
The energy distribution $`P_\text{E}(\epsilon )`$ can be formally defined as follows:
$`P_\text{E}(\epsilon )={\displaystyle \underset{n}{\sum }}P(n|m)\delta (\epsilon -(E_n(x)-E_m(x_0)))`$ (15)
In the classical limit the summation over $`n`$ should be interpreted as a $`dn`$ integral. For $`P(n|m)`$ in the above expression we can substitute the definition Eq.(7) with $`\rho _n`$ and $`\rho _m`$ approximated as in Eq.(9). A straightforward manipulation leads to the result:
$`P_\text{E}(\epsilon )`$ $`=`$ $`\langle \delta (\epsilon -(\mathcal{H}(Q,P;x)-\mathcal{H}(Q,P;x_0)))\rangle `$ (16)
$`=`$ $`\langle \delta (\epsilon +\delta x\mathcal{F}(t))\rangle ={\displaystyle \frac{1}{\delta x}}P_\text{F}\left(-{\displaystyle \frac{1}{\delta x}}\epsilon \right)`$ (17)
Together with (5) and (14), we get Eq.(11) along with the implied special result (10).
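The change of variables behind Eq.(17) can be illustrated with a minimal Monte Carlo sketch. Synthetic Gaussian samples stand in for the fluctuating perturbation recorded along a chaotic trajectory (the mean and variance below are arbitrary choices); the offset and width of the resulting energy distribution then scale linearly with $`\delta x`$, as Eq.(10) asserts.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic record standing in for the fluctuating perturbation along a
# chaotic trajectory; mean 0.4 and unit variance are arbitrary choices.
F = rng.normal(loc=0.4, scale=1.0, size=100_000)

def energy_differences(dx):
    """First-order energy differences epsilon = H(x) - H(x0) = -dx * F(t)."""
    return -dx * F

for dx in (0.05, 0.2):
    eps = energy_differences(dx)
    # Both the offset and the width of P_E scale linearly with dx.
    assert abs(eps.mean() + dx * F.mean()) < 1e-9
    assert abs(eps.std() - dx * F.std()) < 1e-9
```

The sign flip between the distributions of the samples and of the energy differences is the same one noted below Eq.(5).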
## VI Numerical determination of LDOS profiles
Given $`\delta x`$ we can determine numerically the LDOS profile $`P(r)`$. Representative profiles are displayed in Fig.4. For the purpose of further discussion we introduce the following definitions:
* The classical LDOS profile $`P_{\text{cl}}(r)`$
* The quantum mechanical LDOS profile $`P(r)`$
* The effective WBRM LDOS profile $`P_{\text{RMT}}(r)`$
* The first-order perturbative profile $`P_{\text{prt}}(r)`$
We have already discussed the classical LDOS profile. Below we explain how we numerically determine the quantum mechanical LDOS profiles $`P(r)`$ and $`P_{\text{RMT}}(r)`$, and we also define the profile $`P_{\text{prt}}(r)`$.
The numerical procedure for finding $`P(r)`$ is straightforward. For a given $`\delta x`$ we have to diagonalize the matrix (1). The columns of the diagonalization matrix $`\mathbf{T}_{mn}`$ are the eigenstates of the Hamiltonian, and by definition we have $`P(n|m)=|\mathbf{T}_{mn}|^2`$. Then $`P(r)`$ is computed by averaging over roughly 300 reference states that are located within the classically small energy window $`2.8<E<3.1`$. Fig.4 displays typical profiles.
The effective WBRM Hamiltonian is obtained by randomizing the signs of the off-diagonal elements of the $`\mathbf{B}`$ matrix. For the effective WBRM Hamiltonian exactly the same procedure (as for $`P(r)`$) is applied, leading to $`P_{\text{RMT}}(r)`$.
In order to analyze the structure of either $`P(r)`$ or $`P_{\text{RMT}}(r)`$ we have defined the first-order perturbative profile as follows:
$`P_{\text{prt}}(r)={\displaystyle \frac{\delta x^2|\mathbf{B}_{nm}|^2}{\mathrm{\Gamma }^2+(E_n-E_m)^2}}`$ (18)
It is implicit in this definition that $`(E_n-E_m)`$ and $`|\mathbf{B}_{nm}|^2`$ should be regarded as functions of $`r`$. The $`r=0`$ value of the band profile should be re-defined by an interpolation. The parameter $`\mathrm{\Gamma }\equiv b_0\mathrm{\Delta }`$ is determined (for a given $`\delta x`$) such that $`P_{\text{prt}}(r)`$ has unit normalization. Note that Wigner's Lorentzian would be obtained if the band profile were flat.
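Determining $`\mathrm{\Gamma }`$ from the unit-normalization condition is a one-dimensional root search, since the normalization decreases monotonically with $`\mathrm{\Gamma }`$. A sketch (the toy band profile and all parameter values are illustrative, not the paper's):

```python
import numpy as np

def P_prt(r, dx, band2, Delta, Gamma):
    """First-order profile of Eq.(18); band2[|r|] stands in for the band profile."""
    E = Delta * r                        # E_n - E_m for level distance r
    return dx ** 2 * band2[np.abs(r)] / (Gamma ** 2 + E ** 2)

def solve_gamma(dx, band2, Delta, lo=1e-8, hi=1e4, tol=1e-12):
    """Bisect for the Gamma that gives sum_r P_prt(r) = 1 (unit normalization)."""
    r = np.arange(-(len(band2) - 1), len(band2))
    norm = lambda G: float(P_prt(r, dx, band2, Delta, G).sum())
    while hi - lo > tol:                 # norm(G) decreases monotonically in G
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm(mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

band2 = np.exp(-np.arange(50) / 10.0)    # toy band profile (illustrative)
Gamma = solve_gamma(dx=0.1, band2=band2, Delta=1.0)
# b0 is then read off from Gamma = b0 * Delta.
```

For a flat `band2` this construction reduces to Wigner's Lorentzian, as noted above.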
## VII Region analysis for the quantal LDOS
By comparing $`P(r)`$ to $`P_{\text{prt}}(r)`$ as in Fig.4, we can determine the range $`-b_1\text{[left]}<r<+b_1\text{[right]}`$ where $`P_{\text{prt}}(r)`$ is a reasonable approximation to $`P(r)`$. Loosely speaking (avoiding the distinction between the "left" and "right" sides of the profile) we shall say that $`P_{\text{prt}}(r)`$ is a reasonable approximation for $`|r|<b_1`$. The core is defined as the region $`|r|<b_0`$. The FOTRs are $`b_0<|r|<b_1`$. The far-tail regions are $`|r|>b_1`$.
FIG.4: The quantal profile $`P(r)`$ is compared with $`P_{\text{prt}}(r)`$ and with $`P_{\text{RMT}}(r)`$. We are using here the $`\hbar =0.015`$ output. The insets are normal plots while the main figures are semilog plots. In the lower plot ($`\delta x=0.2123`$) the classical LDOS profile $`P_{\text{cl}}(r)`$ is represented by a heavy dashed line.
The results of this region analysis are summarized by Fig.5. In the following sections we are going to present a detailed discussion of this analysis. For the convenience of the reader we summarize:
* $`b_0=`$ border of the core region
* $`b_1=`$ border of the first order tail region (FOTR)
Having $`b_0\lesssim 1`$ implies a standard perturbative structure. Having $`1\ll b_0\ll b_1`$ implies that we have a well developed core-tail structure. Having $`b_0\sim b_1`$ implies a purely non-perturbative structure. In the latter case the distinction between core and tail regions becomes meaningless.
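The three cases can be summarized as a small decision rule. This is a schematic paraphrase of the classification above (the crossover values are soft in practice, not sharp thresholds):

```python
def regime(b0, b1):
    """Classify the LDOS structure from the core border b0 and FOTR border b1."""
    if b0 <= 1:
        return "standard perturbative"
    if b0 < b1:
        return "core-tail"
    return "purely non-perturbative"
```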
## VIII The standard perturbative regime
The standard perturbative regime $`\delta x\ll \delta x_c^{\text{qm}}`$ is defined by the requirement $`b_0(\delta x)\lesssim 1`$. This condition implies that $`P(n|m)\approx \delta _{nm}`$. For numerical purposes it is convenient to define $`\delta x_c^{\text{qm}}`$ as the value of $`\delta x`$ for which $`P(r=0)\approx 0.5`$. Theoretical considerations imply that $`\delta x_c^{\text{qm}}\propto \hbar ^{(1+d)/2}`$. The prefactor is a classical quantity whose precise value depends on the operational definition of $`\delta x_c^{\text{qm}}`$. With the operational definition given above we have extracted the result $`\delta x_c^{\text{qm}}\approx 3.8\hbar ^{3/2}`$.
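The operational definition above (the $`\delta x`$ at which the survival probability drops to $`0.5`$) is itself a one-dimensional search. A toy sketch with random matrices standing in for the model (spectrum, perturbation strength and sizes are all illustrative; the survival probability decreases with $`\delta x`$ only on average, so the bisection is heuristic):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200
H0 = np.diag(np.linspace(0.0, 1.0, N))   # toy unperturbed spectrum
B = rng.normal(size=(N, N))
B = (B + B.T) / np.sqrt(2.0 * N)

def survival(dx, m=N // 2):
    """Largest overlap of the reference state |m> with any perturbed eigenstate."""
    _, U = np.linalg.eigh(H0 + dx * B)
    return float(np.max(U[m] ** 2))

def dx_c_qm(target=0.5, lo=0.0, hi=1.0, steps=40):
    """Bisect for the dx at which the survival probability drops to `target`."""
    for _ in range(steps):               # survival decreases with dx on average
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if survival(mid) > target else (lo, mid)
    return 0.5 * (lo + hi)
```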
In the standard perturbative regime we can write schematically
$`P(n|m)\approx \delta _{nm}+\text{Tail}`$ (19)
The "Tail" is composed of FOTRs and far-tail regions. The former are given by Eq.(18), while the latter are determined by higher orders of perturbation theory. Note that for the standard WBRM we have by construction $`b_1\approx b`$, and more generally $`n`$-th order perturbation theory becomes essential for $`(n-1)\times b<|r|<n\times b`$. In the case of our physical Hamiltonian, as well as for the associated effective WBRM model, the boundary $`b_1`$ is $`\delta x`$ dependent.
By comparing $`P(r)`$ with $`P_{\text{RMT}}(r)`$ we can see that RMT cannot be trusted for the analysis of the far-tails, because system-specific interference phenomena become important there. Namely, the RMT profile $`P_{\text{RMT}}(r)`$ is almost indistinguishable from $`P_{\text{prt}}(r)`$. In contrast, the far-tails of $`P(r)`$ are dominated by either destructive interference (left tail) or constructive interference (right tail).
## IX The core-tail regime
The core-tail regime $`\delta x_c^{\text{qm}}\ll \delta x\ll \delta x_{\text{prt}}`$ is defined by the requirement $`1\ll b_0\ll b_1`$. Theoretical considerations imply that $`\delta x_{\text{prt}}\propto \hbar `$. The prefactor is a classical quantity whose precise value depends on the operational definition of $`\delta x_{\text{prt}}`$. In our numerical analysis we have defined $`\delta x_{\text{prt}}`$ as the $`\delta x`$ for which the contribution of the FOTRs to the variance becomes less than $`80\%`$. With this operational definition we have extracted (using the lower subplot of Fig.5) the result $`\delta x_{\text{prt}}\approx 5.3\hbar `$.
In the core-tail regime we can write schematically
$`P(n|m)\approx \text{Core}+\text{Tail}`$ (20)
Disregarding the far-tail regions, the large-scale behavior of $`P(r)`$ can be approximated by that of $`P_{\text{prt}}(r)`$. As in the standard perturbative regime, one observes that the far-tails are dominated by either destructive interference (left tail) or constructive interference (right tail).
The core is a non-perturbative region. This means that, unlike the far-tail, it cannot be obtained from any finite-order perturbation theory. Once the core appears, the validity of first-order perturbation theory becomes a non-trivial matter. A non-rigorous argument has been suggested in order to support the claim that, disregarding smoothing effects, the local mixing of neighboring levels does not affect the growth of the tail. An important ingredient in this argumentation is the (self-consistent) assumption that most of the probability is well contained in the core region. Indeed the analysis presented in Fig.5 is in agreement with this assumption.
The observation that the local mixing of neighboring levels does not affect the growth of the tail implies that the tail grows as $`\delta x^2`$, and not like, say, $`\delta x`$. (The latter type of dependence is implied by an over-simplified argument.) The $`\delta x^2`$ behavior is indeed confirmed by observing that $`P(r)\approx P_{\text{prt}}(r)`$ in the FOTRs.
Finally, it should be emphasized that the local mixing of levels on the small scale $`b_0`$ is not reflected by Eq.(18). In particular, one should not expect Eq.(18) to be literally valid within the core region ($`|r|<b_0`$).
## X The non-perturbative regime
In the non-perturbative regime ($`\delta x>\delta x_{\text{prt}}`$) one may say that the core spills over the FOTRs, and therefore $`P(n|m)`$ becomes purely non-perturbative. As an example of a non-perturbative profile let us consider the lower plot of Fig.4, corresponding to $`\delta x=0.2123`$. We see that there is poor resemblance between $`P(r)`$ and $`P_{\text{prt}}(r)`$. The LDOS profile $`P(r)`$ no longer contains predominant FOTRs. This claim can be quantified using the analysis in Fig.5. The lower subfigure there displays the FOTR contribution to the dispersion. For $`\delta x>\delta x_{\text{prt}}`$ the dispersion is no longer determined by the FOTR contribution.
The complete disappearance of the FOTRs is guaranteed only for $`\delta x\gg \delta x_{\text{prt}}`$. Evidently, for $`\delta x\gg \delta x_{\text{prt}}`$ the FOTRs must disappear, because $`P(r)`$ goes on expanding, while $`P_{\text{prt}}(r)`$ saturates. This is not captured by our numerics, since for $`\hbar =0.015`$ we cannot satisfy the strong inequality $`\delta x\gg \delta x_{\text{prt}}`$ and have a classically small $`\delta x`$ at the same time.
## XI The semiclassical regime
Looking back at the lower plot of Fig.4, we see that detailed QCC with the classical profile (represented by the heavy dashed line) starts to develop. The right far-tail contains a component where $`P(r)`$ and $`P_{\text{cl}}(r)`$ are indistinguishable. This detailed QCC obviously does not hold for the RMT profile.
Being in the non-perturbative regime does not imply detailed QCC. Detailed QCC means that $`P(r)`$ can be approximated by $`P_{\text{cl}}(r)`$. Having $`\delta x>\delta x_{\text{prt}}`$ is a necessary rather than a sufficient condition for detailed QCC.
A sufficient condition for detailed QCC is $`\delta x\gg \delta x_{\text{SC}}`$. The parametric scale $`\delta x_{\text{SC}}`$ has been defined previously, and for our system we obtain the rough theoretical estimate $`\delta x_{\text{SC}}\approx 4\hbar ^{2/3}`$.
In our numerical study we could not make $`\hbar `$ small enough such that $`\delta x_{\text{SC}}\ll \delta x_c^{\text{cl}}`$. Therefore, the lower profile in Fig.4 is neither reasonably approximated by $`P_{\text{prt}}(r)`$ nor by $`P_{\text{cl}}(r)`$. However, we have verified (by comparing the $`\hbar =0.03`$ output to the $`\hbar =0.015`$ output) that the detailed QCC between $`P(r)`$ and $`P_{\text{cl}}(r)`$ is easily improved by making $`\hbar `$ smaller. Comparing $`P(r)`$ to $`P_{\text{cl}}(r)`$ on the one hand, and $`P_{\text{RMT}}(r)`$ to $`P_{\text{cl}}(r)`$ on the other, leaves no doubt regarding the manifestation of underlying classical structures.
Using a phase-space picture it is evident that larger $`\delta x`$ leads to better QCC. The WBRM model does not have a classical limit, and one finds a quite different scenario. For large enough $`\delta x`$ the eigenstates of Eq.(1) become Anderson localized. This localization shows up in the ASOE provided the eigenstates are properly centered prior to averaging. In the (non-averaged) LDOS, localization manifests itself as sparsity, and therefore the various moments of the LDOS profile are not affected. This latter remark should be kept in mind while reading the next section.
FIG.5: The results of the region analysis. The common horizontal axis is $`\delta x`$. The upper subfigure presents the $`r`$ boundaries as a function of $`\delta x`$. The dotted lines $`\pm b_0`$ define the core region ($`|r|<b_0`$). The solid lines define the $`r`$ region in which $`50\%`$ of the probability is concentrated. The dashed lines are $`-b_1\text{[left]}`$ and $`+b_1\text{[right]}`$. The FOTRs are the regions where $`b_0<|r|<b_1`$. The light solid lines and the light dashed lines are for the effective WBRM model. The lower subfigure displays the dependence of $`\delta E_{\text{cl}}`$, $`\delta E_{\text{qm}}`$ and $`\delta E_{\text{prt}}`$ on $`\delta x`$. The quantal and the classical results are almost indistinguishable, whereas $`\delta E_{\text{prt}}`$ approaches saturation. The contribution of the FOTRs to $`\delta E_{\text{qm}}`$ is also displayed.
## XII Restricted QCC
It is important to distinguish between detailed QCC and restricted QCC. Let us denote the dispersion of the quantal LDOS profile by $`\delta E_{\text{qm}}`$; the corresponding classical quantity is given by Eq.(10). The two types of QCC are defined as follows:
* Detailed QCC means $`P(r)\approx P_{\text{cl}}(r)`$
* Restricted QCC means $`\delta E_{\text{qm}}\approx \delta E_{\text{cl}}`$
Obviously restricted QCC is a trivial consequence of detailed QCC, but the converse is not true. It turns out that restricted QCC is much more robust than detailed QCC. In Fig.5 we see that the dispersion $`\delta E_{\text{qm}}`$ of either $`P(r)`$ or $`P_{\text{RMT}}(r)`$ is almost indistinguishable from $`\delta E_{\text{cl}}`$. This is quite remarkable because the corresponding LDOS profiles (quantal versus classical) are very different!
It is important to realize that restricted QCC is implied by first-order perturbation theory. If we use Eq.(18) and take into account the FOTR dominance which is implied by $`\delta x\ll \delta x_{\text{prt}}`$, then we simply get
$`\delta E_{\text{qm}}^2={\displaystyle \sum _n}P(n|m)(E_n-E_m)^2=\delta x^2{\displaystyle \sum _n^{\prime }}|\mathbf{B}_{nm}|^2`$ (21)
where the prime indicates omission of the $`n=m`$ term. Using Eq.(6) one realizes that this result is in complete agreement with Eq.(10). In contrast, the higher moments of the perturbative profile are vanishingly small compared with the corresponding classical results. The latter fact is just a reflection of the absence of detailed QCC.
One may wonder what happens with Eq.(21) if we try to do a better job, taking into account the core width, as well as higher-order far-tail contributions. One may think that Eq.(21) is only the lowest-order approximation, which would imply that restricted QCC should become worse as $`\delta x`$ grows. However, the latter speculation turns out to be wrong.
We already saw that restricted QCC is implied on the one hand (for small $`\delta x`$) by first-order perturbation theory, and on the other hand (for large $`\delta x`$) by detailed QCC. Now we would like to argue that restricted QCC holds in general. It simply follows from the observation that $`\delta E_{\text{qm}}`$ is determined just by the band profile. The proof is very simple. The variance of $`P(n|m)`$ is determined by the first two moments of the Hamiltonian in the unperturbed basis. Namely,
$`\delta E_{\text{qm}}^2=\langle m|\mathcal{H}^2|m\rangle -\langle m|\mathcal{H}|m\rangle ^2`$ (22)
$`=\delta x^2(\langle m|\mathbf{B}^2|m\rangle -\langle m|\mathbf{B}|m\rangle ^2)`$ (23)
Thus, we get the same result as in first-order perturbation theory, without invoking any special assumptions regarding the nature of the profile. Having $`\delta E_{\text{qm}}`$ determined only by the band profile is the reason for restricted QCC, and is also the reason why restricted QCC is not sensitive to the RMT assumption.
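The identity expressed by Eqs.(22)-(23) is exact at any $`\delta x`$, perturbative or not, and is easy to verify numerically. A sketch with random symmetric matrices standing in for the Hamiltonian and the perturbation (all sizes and seeds are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 150
H0 = np.diag(np.sort(rng.normal(size=N)))   # toy unperturbed Hamiltonian
B = rng.normal(size=(N, N))
B = (B + B.T) / 2.0                         # toy perturbation matrix

def variance_exact(dx, m=N // 2):
    """LDOS variance from full diagonalization of H = H0 + dx*B."""
    E, U = np.linalg.eigh(H0 + dx * B)
    P = U[m] ** 2                           # P(n|m) for the reference state m
    return float(np.sum(P * E ** 2) - np.sum(P * E) ** 2)

def variance_band(dx, m=N // 2):
    """Eq.(23): dx^2 (<m|B^2|m> - <m|B|m>^2); no diagonalization needed."""
    return float(dx ** 2 * ((B @ B)[m, m] - B[m, m] ** 2))

# The two expressions agree for any dx, small or large.
```

This makes concrete why restricted QCC is insensitive to sign randomization of the off-diagonal elements: the quantity in Eq.(23) depends only on $`|\mathbf{B}_{nm}|^2`$.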
We thank Felix Izrailev for suggesting to study the model (2). We also thank ITAMP for their support. |
## 1 INTRODUCTION
Concentric semi-periodic shells (also termed arcs or rings) appear in the images of several asymptotic giant branch (AGB) stars, proto-planetary nebulae (proto-PNs) and young planetary nebulae (PNs). PNs and proto-PNs known to possess such shells are CRL 2688 (the "Egg" nebula; Sahai et al. 1998); IRAS 17150-3224 (Kwok, Su, & Hrivnak 1998); IRAS 17441-2411 (Su et al. 1998); Roberts 22 (although the shells are not really circular; Sahai et al. 1999); NGC 7027 and NGC 6543 (Bond 2000); and HB5. Presently the only AGB star known to possess shells is IRC+10216 (Mauron & Huggins 1999).
The main properties of the shells are: (1) They are semi-periodic, with time intervals between consecutive ejection events of 200-1,000 yrs. (2) They are spherical or almost spherical; however, a low degree of departure from sphericity is seen in some shells, e.g., in the Egg nebula (Sahai et al. 1998). (3) They can be almost complete, i.e., appear as rings, or incomplete, where only a fraction of a full circle is observed, i.e., they appear as arcs. (4) The shells' density enhancement relative to the inter-shell density is by a factor of a few up to a factor of $`10`$ (Mauron & Huggins 1999). (5) The centers of all shells in a given object coincide with the central star to within a few percent of their size. (6) All PNs and proto-PNs which possess concentric shells are bipolar, i.e., have two lobes with an equatorial waist between them (e.g., IRAS 17150-3224), or they are extreme ellipticals (NGC 6543). By extreme elliptical PNs I refer to PNs having a strong concentration of mass toward the equatorial plane, e.g., a torus (ring). Although there might be a selection effect in detecting the shells in bipolar PNs, since the central star is more attenuated (R. Sahai, private communication), I do not think this alone can explain the observations.
In the present paper I propose that these shells are produced by a solar-like magnetic cycle in the progenitor AGB stars. The enhanced magnetic activity at the cycle maximum results in more magnetic cool spots, which facilitate the formation of dust, hence increasing the mass loss rate (Soker 2000). In $`\mathrm{\S }2`$ I review the mechanisms previously proposed for the formation of these shells, and argue that none of them can account for all the properties of the shells. I then outline the main ingredients of the magnetic activity cycle mechanism. In $`\mathrm{\S }3`$ I examine some of the properties of a plausible mechanism that may amplify the magnetic field in upper AGB stars. My summary is in $`\mathrm{\S }4`$.
## 2 SUPPORT AND CONSTRAINTS FROM OBSERVATIONS
### 2.1 Previously Proposed Mechanisms
A discussion of several possible mechanisms for the formation of concentric semi-periodic shells is given by Sahai et al. (1998) and Bond (2000). Here I extend these discussions and examine each of the previously proposed models.
Helium-shell flashes (thermal pulses). Any mechanism based on helium-shell flashes is ruled out because the typical inter-flash period is $`10^4\mathrm{yrs}`$ (Sahai et al. 1998; Kwok et al. 1998).
Instability in the dust$`+`$gas outflow. The instability in the gas-dust coupling in the outflowing material was suggested as a mechanism for the formation of the shells in the Egg nebula by Deguchi (1997). This model cannot explain the formation of the shells, for a few reasons. First, from the results of Morris (1992) it seems that in most cases the time interval between consecutive shells predicted by this mechanism is too short. Second, the shells will be smoothed out within a short distance from the star (Mastrodemos, Morris & Castor 1996). Third, the instability in the gas-dust coupling is a local instability. Therefore it will form small-scale structure and short arcs, but will not form an almost complete shell (e.g., NGC 6543).
Chaos. Icke, Frank & Heske (1992) examined the response of the outer layers of an evolved AGB star to the oscillatory motion of an instability zone in the stellar interior. They found that, for the right initial conditions and parameters, the stellar surface shows multiperiodic or chaotic behavior in addition to the regular oscillations. I find some problems with this mechanism. Qualitatively, the chaotic or multiperiodic behavior found by Icke et al. (1992) does not have the correct character (the two lower panels of their fig. 11). There is no real semi-periodic behavior; rather, the time intervals between two consecutive high-amplitude episodes differ substantially from one interval to another. This behavior cannot account for the regularly spaced arcs in, e.g., IRAS 17150-3224 (Kwok et al. 1998). In some cases the duration of the maximum phase is longer than the duration of the low-amplitude intervals. These are not the observed properties of the concentric shells in most of the objects listed in the previous section. In the lower two panels of their figure 11 (panels 7 and 8), the maximum time interval between two consecutive maximum phases is only $`16`$ times as long as the regular oscillation period. This time interval is an order of magnitude shorter than the observed time intervals. In panels 5 and 6 of their figure 11 the maximum phase lasts several hundred years. However, they do not show more than one maximum phase, so I cannot comment on the long-term behavior of these cases.
A binary companion in an eccentric orbit. In this mechanism, proposed by Harpaz, Rappaport & Soker (1997), the periastron passage of a stellar companion in an eccentric orbit modulates the mass loss rate and/or geometry. The periodic periastron passages, on a time scale of several hundred years, can increase the mass loss rate, or decrease it by diverting the flow, leading to the formation of rings by these periodic modulations. Sahai et al. (1998) criticized this mechanism on the grounds that it predicts exactly circular shells with regular spacing between them, properties which are not observed in the Egg nebula. There is another reason to reject this mechanism for the formation of the shells: the eccentric orbit mechanism predicts that the centers of the shells will be displaced from the central star (Soker, Rappaport, & Harpaz 1998). Such a displacement is not observed.
A close binary companion. Mastrodemos & Morris (1999) show in their numerical simulations that the presence of a close companion, with an orbital separation of up to several tens of AU, leads to the formation of a spiral structure in the equatorial plane. When viewed at a large angle to the symmetry (rotation) axis, the circumstellar matter should show regularly spaced half-rings on each side of the symmetry axis. I find this model unsatisfactory, since it predicts on-off locations for the half-shells near the symmetry axis. That is, the dense rings on one side of the symmetry axis will be at radial distances which correspond to the inter-ring spaces on the other side. This is not observed. In addition, the arguments listed by Sahai et al. (1998) against the eccentric binary mechanism hold for this mechanism as well.
Giant convection cells. Sahai et al. (1998) present the idea that large cool convection cells form the concentric semi-periodic shells. Such giant cool convection cells make dust formation more efficient, hence increasing the mass loss rate. I see two problems. First, giant convection cells are expected to appear at specific locations on the surface, so it is not clear that they can form an almost complete shell. Second, the time scale of several hundred years is much too long for the lifetime of even a large convection cell in AGB stars.
To summarize, each of the mechanisms listed above is expected to leave some signature on some nebulae formed from AGB stars, but none of them can explain the concentric semi-periodic shells.
### 2.2 The Proposed Magnetic Cycle Mechanism
I conjecture that the mechanism behind the formation of the concentric semi-periodic shells is a solar-like magnetic activity cycle. Below I list the observations in support of the proposed mechanism, the basic processes of the mechanism, and the implications of this conjecture. In the next section I will elaborate on plausible dynamo processes to amplify the magnetic field.
#### 2.2.1 Supporting Observations
1) Magnetic fields in AGB stars. Kemball & Diamond (1997) detected a magnetic field in the extended atmosphere of the Mira variable TX Cam. Kemball & Diamond find the intensity of the magnetic field at the locations of SiO maser emission, at a radius of $`4.8\mathrm{AU}\simeq 2R_{\ast }`$, to be $`B<5\mathrm{G}`$. The detection of X-ray emission from a few M giants (Hünsch et al. 1998) also hints at the presence of magnetic fields in giant stars.
2) Solar cycle. From the solar cycle we know that magnetic activity can be semi-periodic, and possess a global pattern, i.e., the cycle affects the entire solar surface.
3) Inhomogeneity. We also know from the solar magnetic activity that the magnetic spots cover only a fraction of the solar surface. This inhomogeneity can explain incomplete shells and the inhomogeneity observed in many shells, e.g., in the Egg nebula.
4) Spot distribution with latitude. From Maunder's butterfly diagram, e.g., for the years 1954-1977 (Priest 1987), there is evidence that the spot distribution is most uniform when the number of spots is at maximum, and that the spots reach the highest latitude during this cycle maximum. At that phase the spots are distributed almost uniformly from close to the equator up to a latitude $`\theta _m`$. The spots are not distributed from the equator to $`\theta _m`$ in other phases of the solar cycle; at the beginning of a cycle they are concentrated in two annular regions around latitudes $`\pm 30^{\circ }<\theta _m`$, while toward the end of a cycle they are near the equator. Moreover, from the two solar cycles in these years it turns out that $`\theta _m`$ is larger when the maximum total number of spots is larger. This hints that for very strong magnetic activity, i.e., when there are many spots, as I speculate is the case for the AGB progenitors of the concentric semi-periodic shells, the spots are distributed uniformly, up to the inhomogeneity discussed above, over the entire stellar surface.
#### 2.2.2 Basic Processes
The processes by which magnetic activity regulates the mass loss rate from AGB stars are studied in earlier papers (Soker 1998, 2000; Soker & Clayton 1999). As in the sun, it is assumed that the magnetic activity leads to the formation of magnetic cool spots, which facilitate the formation of dust. Since the mass loss mechanism from AGB stars is radiation pressure on dust, higher magnetic activity leads to an enhanced mass loss rate. The goal in the earlier papers was to explain the transition from spherical to axisymmetrical mass loss in the AGB progenitors of elliptical PNs. The idea is that the increase in the magnetic activity (Soker & Harpaz 2000) and/or the increase in the mass loss rate (Soker 2000) which occur as the star is about to leave the AGB, increase the mass loss rate in the equatorial plane more than they do in the polar directions. This is based on the assumption, following the behavior of the sun, that the dynamo magnetic activity results in the formation of more magnetic cool spots near the equatorial plane than near the poles. Since that mechanism for axisymmetrical mass loss is intended to explain the formation of elliptical PNs, it is also assumed that the progenitors of elliptical PNs are slow rotators (Soker 2000), having angular velocities in the range of $`3\times 10^{-5}\omega _{\mathrm{Kep}}\lesssim \omega \lesssim 10^{-2}\omega _{\mathrm{Kep}}`$, where $`\omega _{\mathrm{Kep}}`$ is the equatorial Keplerian angular velocity. Such angular velocities could be gained from a planet companion of mass $`>0.1M_{\mathrm{Jupiter}}`$, which deposits its orbital angular momentum into the envelope, or even from single stars which are fast rotators on the main sequence.
In the present case, the AGB stars are progenitors of bipolar PNs or extreme ellipticals. In the binary model for the formation of bipolar PNs most of the AGB progenitors are tidally spun-up by close companions (Soker & Rappaport 2000), while the extreme elliptical PNs may be formed through a common envelope interaction (Soker 1997). In both cases we expect the AGB star to rotate with angular velocity of
$$0.01\omega _{\mathrm{Kep}}\lesssim \omega \lesssim 0.1\omega _{\mathrm{Kep}},$$
(1)
which is much larger than the value of $`\omega /\omega _{\mathrm{Kep}}`$ in the sun. We expect strong magnetic activity since the convective motion is very strong in these stars (next section). The upper limit on the angular velocity means that dynamical effects will not much influence the mass loss geometry, since the centrifugal forces are negligible.
#### 2.2.3 Implications
For the proposed mechanism to explain the spherical shape of the shells, whether complete or not, the following are implied.
1) Spherical magnetic activity. The average concentration of magnetic cool spots should be uniform on the stellar surface, although at any given moment the number of spots can be non-uniform, leading to the small-scale nonuniformity of the concentric semi-periodic shells. We note that the mass loss rate during the formation of the inter-shell medium is not very high (the shells and the inter-shell regions form a faint halo). It seems that large cool spots are required to regulate the mass loss rate when the mass loss rate is low (Frank 1995; Soker 2000). This means that only the medium to large cool magnetic spots are required to be distributed uniformly, but not the small spots. Only when the mass loss rate gets to be very high (for details see Soker 2000), as expected close to the termination of the AGB, do small cool spots facilitate the formation of dust as well.
2) No other mechanisms. The role of any other mechanism that causes departure from spherical mass loss should be very small, e.g., nonradial pulsations. This implies that any detached binary companion cannot be too close, i.e., no Roche lobe overflow, and that the AGB star cannot rotate at $`\omega \gtrsim 0.1\omega _{\mathrm{Kep}}`$. This is indeed the case in most progenitors of bipolar PNs, as discussed in $`\mathrm{\S }\mathrm{2.2.2}`$ above. The requirement that the companion spins up the mass losing star, but has only a minor dynamical influence on the mass loss process, puts severe constraints on the binary properties. Mainly, the companion should not form an accretion disk and blow a collimated fast wind (CFW) or jets when the shells are formed. This implies that the companion is likely to be a main sequence star of mass $`0.1M_{\odot }\lesssim M_2\lesssim 0.5M_{\odot }`$ (for the conditions for the formation of a CFW see Soker & Rappaport 2000). Only during the superwind phase, when the mass loss rate is very high, does the companion manage to blow a CFW, leading to the formation of a bipolar PN (Soker & Rappaport 2000). The constraints on the companion mass, of $`\sim 0.3M_{\odot }`$, and on the orbital separation, of $`a\simeq 5-30\mathrm{AU}`$, explain why many bipolar PNs and proto-PNs do not have concentric semi-periodic shells.
## 3 THE DYNAMO IN AGB STARS
Dynamo generation of magnetic fields in evolved AGB stars has two major differences from the dynamo in main sequence stars, e.g., the sun. First, the mass loss rate is very high, so that the mass leaving the star drags the magnetic field lines, rather than being dragged by the magnetic field lines. Second, the dynamo number is $`N_D\lesssim 1`$, whereas in main sequence stars $`N_D>1`$, as is required by standard $`\alpha \omega `$ dynamo models. The dynamo number is the square of the ratio of the magnetic field amplification rate in the $`\alpha \omega `$ dynamo model to the ohmic decay rate. A third difference from the situation in the sun, but not from all main sequence stars, is that in AGB stars the convective region is very thick, whereas in the sun its width is only $`0.3R_{\odot }`$. The aim of this section is to point to possible effects which these differences may have on the amplification of the magnetic field, and not to develop a new dynamo mechanism. Future, more complete, calculations should examine the exact conditions and mechanism(s) for the generation of magnetic fields in AGB stars.
### 3.1 Effects Due to a High Mass Loss Rate
In the sun, as in most main sequence stars, the mass loss rate is determined mainly by the magnetic activity. The magnetic pressure $`P_B=B^2/\left(8\pi \right)`$ on the stellar surface is much larger than the ram pressure of the wind $`P_w=\rho v_w^2`$, where $`\rho =\dot{M}_w/\left(4\pi R^2v_w\right)`$ is the density, $`v_w`$ is the wind velocity, $`R`$ is the stellar radius, and $`\dot{M}_w`$ is the mass loss rate to the wind, defined positively. Substituting typical values for the sun we find $`P_B\simeq 0.1\left(B/2\mathrm{G}\right)^2\mathrm{erg}\mathrm{cm}^{-3}`$ and $`P_w\simeq 10^{-3}\mathrm{erg}\mathrm{cm}^{-3}`$, hence $`P_B/P_w\simeq 100`$. The relative magnetic activity required to dictate the mass loss geometry from AGB stars via the enhanced dust formation above magnetic cool spots is much weaker, and can be as low as $`P_B/P_w\simeq 10^{-4}`$ (eq. 11 of Soker 1998). Therefore, while in main sequence stars the magnetic field drags the wind close to the stellar surface, in AGB stars the wind drags the magnetic field lines. Assuming that the wind conserves angular momentum, its angular velocity decreases as $`\left(R/r\right)^2`$, where $`r`$ is the distance of a parcel of gas from the center of the star. Hence near the surface
$$\left(\frac{d\omega }{dr}\right)_{\mathrm{surface}}=-\frac{2\omega }{R}.$$
(2)
Note that in the solar interior the differential rotation is weaker, $`\left|d\omega /dr\right|\lesssim \omega _{\odot }/R_{\odot }`$ (Tomczyk, Schou, & Thompson 1995; Charbonneau et al. 1998). Even if on the surface of AGB stars the shear is lower than the shear at the inner boundary of the convection region, it acts over a much larger area, since the inner boundary of AGB convective regions is at several $`\times R_{\odot }`$.
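As a numerical check of the pressure comparison above, the sketch below evaluates $`P_B`$ and $`P_w`$ for rough solar values. The wind parameters used here (a mass loss rate of $`2\times 10^{-14}M_{\odot }\mathrm{yr}^{-1}`$ and a wind speed of $`400\mathrm{km}\mathrm{s}^{-1}`$) are illustrative assumptions, not values taken from the text:

```python
import math

M_SUN_G = 1.989e33      # solar mass in grams
R_SUN_CM = 6.96e10      # solar radius in cm
YR_S = 3.156e7          # one year in seconds

def magnetic_pressure(B_gauss):
    """P_B = B^2 / (8 pi), in erg cm^-3 (B in gauss)."""
    return B_gauss**2 / (8.0 * math.pi)

def wind_ram_pressure(mdot_msun_yr, R_cm, v_wind_cm_s):
    """P_w = rho * v_w^2, with rho = Mdot / (4 pi R^2 v_w)."""
    mdot = mdot_msun_yr * M_SUN_G / YR_S                 # g/s
    rho = mdot / (4.0 * math.pi * R_cm**2 * v_wind_cm_s)  # g/cm^3
    return rho * v_wind_cm_s**2

# Rough solar numbers (assumed): B ~ 2 G, Mdot ~ 2e-14 Msun/yr, v_w ~ 400 km/s
P_B = magnetic_pressure(2.0)                       # ~0.16 erg cm^-3
P_w = wind_ram_pressure(2e-14, R_SUN_CM, 4.0e7)    # ~1e-3 erg cm^-3
```

With these inputs the ratio $`P_B/P_w`$ comes out near $`2\times 10^2`$, the same order of magnitude as the solar estimate of $`\sim 100`$ quoted above.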
Although the angular velocity shear is similar at the equator and poles, the winding of the field lines will be much stronger near the equator. Winding will be at large angles only when the wind is not much faster than the rotation velocity near the equator. This is indeed the case for the AGB stars considered in this paper, for which the angular velocity is given by equation (1). For a $`1M_{\odot }`$ AGB star with a stellar radius of $`R=2\mathrm{AU}`$, equation (1) gives for the rotation velocity on the equator $`0.2\mathrm{km}\mathrm{s}^{-1}\lesssim v_{\mathrm{eq}}\lesssim 2\mathrm{km}\mathrm{s}^{-1}`$. The distance along which the wind from upper AGB stars is accelerated is $`\sim R`$, and therefore within this distance from the surface the wind velocity is $`<10\mathrm{km}\mathrm{s}^{-1}`$, with a much lower velocity just above the surface: $`v_{ws}\ll 10\mathrm{km}\mathrm{s}^{-1}`$. The magnetic field lines on the surface will be inclined in the azimuthal direction at an angle $`\alpha `$ to the radial direction, which depends on the latitude $`\theta `$ ($`\theta =0`$ at the equator) according to
$$\mathrm{tan}\alpha \left(\theta \right)=\left(v_{eq}/v_{ws}\right)\mathrm{cos}\theta .$$
(3)
The conclusion from this subsection is that the amplification of the magnetic field at the AGB stellar surface, i.e., the outer boundary of the convection region, may be more significant than at the inner boundary of the convection region.
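Equation (3) is easy to evaluate. In the minimal sketch below the choice $`v_{ws}=2\mathrm{km}\mathrm{s}^{-1}`$ is a hypothetical illustrative value, since the text only bounds $`v_{ws}`$ from above:

```python
import math

def field_line_inclination_deg(v_eq, v_ws, theta_deg):
    """Azimuthal inclination alpha of the surface field lines, eq. (3):
    tan(alpha) = (v_eq / v_ws) * cos(theta), with theta = 0 at the equator."""
    theta = math.radians(theta_deg)
    return math.degrees(math.atan((v_eq / v_ws) * math.cos(theta)))

# Illustrative values (assumed): v_eq = 2 km/s, v_ws = 2 km/s
alpha_eq = field_line_inclination_deg(2.0, 2.0, 0.0)     # 45 degrees at the equator
alpha_pole = field_line_inclination_deg(2.0, 2.0, 90.0)  # ~0 at the pole
```

The monotonic fall of $`\alpha `$ with latitude is what makes the winding much stronger near the equator, as argued above.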
### 3.2 Small Dynamo Number
In the $`\alpha \omega `$ stellar dynamo mechanism the $`\alpha `$ effect, due to convection, generates the poloidal component of the magnetic field, while differential rotation generates the toroidal component (e.g., Priest 1987 and references therein). Theory predicts that this mechanism operates efficiently only when the dynamo number (which is the square of the ratio of the magnetic field amplification rate to the ohmic decay rate of the magnetic field) is $`N_D>1`$. When comparing with observations it is convenient to use the Rossby number (Noyes et al. 1984; Saar & Brandenburg 1999). The Rossby number is proportional to the ratio of the rotational period, $`P_{\mathrm{rot}}=2\pi /\omega `$, to the convective overturn time $`\tau _c`$. Following Noyes et al. (1984) I take $`\tau _c=2l_p/v_c`$, where $`l_p`$ is the pressure scale height and $`v_c`$ is the convective velocity, hence $`\mathrm{Ro}\equiv \left(\omega \tau _c\right)^{-1}=P_{\mathrm{rot}}v_c/\left(4\pi l_p\right)`$. Noyes et al. (1984) based their use of the Rossby number on the crude approximate relation $`N_D\sim \mathrm{Ro}^{-2}`$. Main sequence stars having magnetic activity obey $`\mathrm{Ro}\lesssim 0.25`$ (Saar & Brandenburg 1999), and hence $`N_D>1`$ as required for the $`\alpha \omega `$ dynamo mechanism. For the sun $`\mathrm{Ro}_{\odot }=0.16`$, while the values of the Rossby number for the superactive stars in the sample used by Saar & Brandenburg (1999) are in the range $`5\times 10^{-5}\lesssim \mathrm{Ro}\lesssim 10^{-2}`$. Because of the strong convection in the envelope of AGB stars we find that $`\mathrm{Ro}\left(AGB\right)\gtrsim 1`$. Using typical values for AGB stars (e.g., Figs. 1-5 of Soker & Harpaz 2000; note that the density in their Figs. 1-5 is lower by a factor of 10; the correct density scale is in their Fig. 6), we find the Rossby number to be
$$\mathrm{Ro}\left(AGB\right)=9\left(\frac{v_c}{10\mathrm{km}\mathrm{s}^{-1}}\right)\left(\frac{l_p}{40R_{\odot }}\right)^{-1}\left(\frac{\omega }{0.1\omega _{\mathrm{Kep}}}\right)^{-1}\left(\frac{P_{\mathrm{Kep}}}{1\mathrm{yr}}\right),$$
(4)
where $`P_{\mathrm{Kep}}`$ is the orbital period of a test particle moving in a Keplerian orbit along the equator of the star. The low value of the dynamo number, $`N_D\sim \mathrm{Ro}^{-2}\ll 1`$, suggests that the convective motion amplifies both the poloidal and toroidal magnetic components, but that the differential rotation, both inside the envelope and on the surface (see previous subsection), still plays a nonnegligible role. Hydrodynamic turbulence can amplify magnetic fields, although not as efficiently as the $`\alpha \omega `$ dynamo (see, e.g., Goldman & Rephaeli 1991, and references therein, for the amplification of magnetic fields in clusters of galaxies). Taking into account the observations that suggest the presence of magnetic fields in AGB stars ($`\mathrm{\S }\mathrm{2.2.1}`$ above), I conclude that the strong convective motion in AGB stars together with the rotation can indeed amplify the magnetic field via an $`\alpha ^2\omega `$ dynamo.
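The prefactor in eq. (4) can be reproduced directly from the definitions $`\tau _c=2l_p/v_c`$ and $`\mathrm{Ro}=\left(\omega \tau _c\right)^{-1}`$; a sketch using the fiducial values quoted in the text:

```python
import math

R_SUN_CM = 6.96e10   # solar radius in cm
YR_S = 3.156e7       # one year in seconds

def rossby_number(v_c_cm_s, l_p_cm, omega_s):
    """Ro = 1 / (omega * tau_c), with tau_c = 2 l_p / v_c (Noyes et al. 1984)."""
    tau_c = 2.0 * l_p_cm / v_c_cm_s
    return 1.0 / (omega_s * tau_c)

# Fiducial AGB values from eq. (4): v_c = 10 km/s, l_p = 40 R_sun,
# omega = 0.1 * omega_Kep with P_Kep = 1 yr
omega_kep = 2.0 * math.pi / YR_S
Ro = rossby_number(1.0e6, 40.0 * R_SUN_CM, 0.1 * omega_kep)
# Ro comes out ~9, matching the prefactor in eq. (4)
```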
In previous papers I argued that this $`\alpha ^2\omega `$ dynamo can operate, although at a low activity level, even in AGB stars rotating as slowly as $`\omega \simeq 10^{-4}\omega _{\mathrm{Kep}}`$ (Soker 1998), and in some cases even as low as $`\omega \simeq 3\times 10^{-5}\omega _{\mathrm{Kep}}`$ (Soker & Harpaz 2000). We note that at this angular velocity the mass loss time scale $`\tau _m=M_{\mathrm{env}}/\left|\dot{M}_{\mathrm{env}}\right|`$, where $`M_{\mathrm{env}}`$ is the envelope mass, is not much shorter than $`1/\omega `$. Substituting typical values for AGB stars which are expected to be the progenitors of elliptical PNs, during their superwind phase, we find
$$\frac{\omega ^{-1}}{\tau _m}=1.6\left(\frac{\omega }{10^{-4}\omega _{\mathrm{Kep}}}\right)^{-1}\left(\frac{P_{\mathrm{Kep}}}{1\mathrm{yr}}\right)\left(\frac{M_{\mathrm{env}}}{0.03M_{\odot }}\right)^{-1}\left(\frac{\left|\dot{M}_{\mathrm{env}}\right|}{3\times 10^{-5}M_{\odot }\mathrm{yr}^{-1}}\right).$$
(5)
This supports the assumption that the angular velocity plays a nonnegligible role in the $`\alpha ^2\omega `$ dynamo even in these very slowly rotating AGB stars. However, the magnetic activity is expected to be weak, and the magnetic cool spots to be concentrated in and near the equatorial plane (see $`\mathrm{\S }\mathrm{2.2.1}`$ above). In these stars, contrary to the case with the stars discussed in the present paper, the rotation is too slow to excite magnetic activity close to the poles, hence a higher mass loss rate near the equatorial plane leads later to the formation of an elliptical PN.
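The estimate in eq. (5) follows from $`\tau _m=M_{\mathrm{env}}/|\dot{M}_{\mathrm{env}}|`$ together with $`\omega =10^{-4}\omega _{\mathrm{Kep}}`$; a quick numerical check using the typical values quoted in the text:

```python
import math

# Typical values from the text (working in years and solar masses)
P_kep = 1.0                               # yr
omega = 1.0e-4 * (2.0 * math.pi / P_kep)  # rad/yr
M_env = 0.03                              # Msun
Mdot = 3.0e-5                             # Msun/yr

tau_m = M_env / Mdot                      # mass loss time scale: 1000 yr
ratio = (1.0 / omega) / tau_m             # ~1.6, reproducing eq. (5)
```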
## 4 SUMMARY
In this paper I propose that the concentric semi-periodic shells found around several PNs, proto-PNs, and AGB stars are formed by a magnetic activity cycle in upper AGB stars. The main assumptions, processes and implications of the proposed mechanism for the formation of the shells are listed below, together with the explanations for the shells' properties listed in section 1.
(1) It is assumed that the magnetic activity leads to the formation of a large number of magnetic cool spots. The cool spots enhance dust formation (Soker 2000 and references therein), and hence increase the mass loss rate. The magnetic field, though, has no dynamical effects; its only role is to form cool spots. The formation of dust above cool spots is a highly nonlinear process (Soker 2000), and therefore a relatively small increase in the number of cool spots will substantially increase the mass loss rate. This explains the observations that the shells are much denser than the inter-shell medium, by up to a factor of $`10`$.
(2) The sporadic nature of the appearance of magnetic cool spots on the stellar surface, as in the sun, explains the incompleteness of some shells and other small-scale shell inhomogeneities.
(3) The spherical shells mean that the magnetic cool spots are distributed uniformly over the entire AGB stellar surface (up to the inhomogeneities mentioned above). This is indeed expected for strong magnetic activity ($`\mathrm{\S }\mathrm{2.2.1}`$).
(4) The strong magnetic activity implies that AGB stars which form concentric shells are relatively fast rotators, $`0.01\lesssim \left(\omega /\omega _{\mathrm{Kep}}\right)\lesssim 0.1`$. They are spun up by a stellar companion via tidal interaction, or via a common envelope phase. Such tidal interactions are likely to form bipolar PNs (Soker & Rappaport 2000), whereas a common envelope interaction is likely to form an extreme elliptical PN. This explains why the shells are found in bipolar or extreme elliptical PNs and proto-PNs. Two things should be noted here: ($`i`$) The AGB stars cannot rotate too fast, since then the centrifugal force will become nonnegligible and the shells will not be spherical. It is indeed expected that $`\omega <0.3\omega _{\mathrm{Kep}}`$ in most progenitors of bipolar PNs (Soker & Rappaport 2000). The companion cannot blow a collimated fast wind, or jets, during the phase of shell formation. This constrains the companion to be a main sequence star, rather than a white dwarf, of relatively low mass $`M_2\simeq 0.1-0.5M_{\odot }`$. ($`ii`$) In very slowly rotating AGB stars the magnetic activity is very weak, the cycle period is extremely long, and the cool spots are expected to be concentrated near the equator (see $`\mathrm{\S }\mathrm{2.2.1}`$), hence leading to the formation of elliptical PNs (Soker 1998, 2000).
(5) It is also assumed that, as in the sun and other main sequence stars, the magnetic activity has a semi-periodic variation. This is the explanation for the semi-periodic nature of the shells. In main sequence stars the ratio of the period of the magnetic activity cycle $`P_{\mathrm{cyc}}`$ to the rotation period is (Baliunas et al. 1996; Saar & Brandenburg 1999) $`50\lesssim P_{\mathrm{cyc}}/P_{\mathrm{rot}}\lesssim 10^5`$. Therefore, for an AGB stellar rotation period of $`10-100\mathrm{yrs}`$, magnetic activity cycles with periods of $`200-10^3\mathrm{yrs}`$ require this ratio to be somewhat smaller, $`P_{\mathrm{cyc}}/P_{\mathrm{rot}}\sim 10`$.
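The required ratio can be verified with the numbers quoted in this item; a minimal sketch:

```python
# Rotation periods of 10-100 yr and cycle periods of 200-1e3 yr (from the text)
rot_periods = (10.0, 100.0)   # yr
cyc_periods = (200.0, 1.0e3)  # yr

# Extreme values of P_cyc / P_rot allowed by these ranges
ratio_min = cyc_periods[0] / rot_periods[1]   # 2
ratio_max = cyc_periods[1] / rot_periods[0]   # 100
# A characteristic value ~10 sits well below the main-sequence range quoted above
```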
(6) There is no dynamo model for AGB stars. Based on some observations listed in $`\mathrm{\S }\mathrm{2.2.1}`$, I assume that a dynamo can indeed amplify magnetic fields in AGB stars. In the present paper I did not develop or calculate any dynamo mechanism for AGB stars. I only point here ($`\mathrm{\S }3`$) to the two major differences between the standard $`\alpha \omega `$ dynamo mechanism for main sequence stars and any dynamo mechanism for AGB stars. First, the high mass loss rate means that the wind drags the magnetic field lines in AGB stars, contrary to the case with main sequence stars. This suggests that the azimuthal shear near the surface plays a role in the dynamo mechanism, as well as the shear in the stellar interior. Second, the dynamo number is $`N_D\lesssim 1`$ (or the Rossby number is $`\mathrm{Ro}\gtrsim 1`$) in AGB stars, whereas the standard $`\alpha \omega `$ dynamo mechanism requires $`N_D>1`$, as is the case in active main sequence stars. This suggests that the main amplification of magnetic fields in AGB stars is via the convective motion, but with a nonnegligible role for the rotation, i.e., an $`\alpha ^2\omega `$ dynamo.
(7) The mechanism proposed in the present paper has some predictions. $`\left(i\right)`$ It predicts that the AGB progenitors of the concentric semi-periodic shells have main sequence companions of mass $`0.1-0.5M_{\odot }`$, with orbital periods in the range of $`15-150\mathrm{yrs}`$. The orbital periods predicted by the binary models mentioned in $`\mathrm{\S }2.1`$, on the other hand, are in the range of $`200-10^3\mathrm{yrs}`$. $`\left(ii\right)`$ The proposed mechanism predicts that almost all PNs and proto-PNs with concentric semi-periodic shells are bipolar or extreme elliptical PNs. $`\left(iii\right)`$ In some cases the shells can be formed after the mass losing star was spun up via a common envelope evolution. In these cases most of the descendant PNs are expected to be extreme elliptical PNs, rather than bipolar PNs, and either the companion has a final orbital period of less than a year, possibly even only a few hours, or else the companion is completely destroyed in the common envelope.
ACKNOWLEDGMENTS: I thank Raghvendra Sahai for very helpful comments. This research was supported in part by grants from the Israel Science Foundation and the US-Israel Binational Science Foundation.
# Calculated temperature-dependent resistance in low density 2D hole gases in GaAs heterostructure
\[
## Abstract
We calculate the low temperature resistivity in low density 2D hole gases in GaAs heterostructures by including screened charged impurity and phonon scattering in the theory. Our calculated resistance, which shows striking temperature dependent non-monotonicity arising from the competition among screening, nondegeneracy, and phonon effects, is in excellent agreement with recent experimental data.
PACS Number : 73.40.-c; 71.30.+h; 73.50.Bk; 73.50.Dn
\]
A number of recent density-dependent low temperature transport measurements in dilute two dimensional (2D) n-Si MOSFET and p-GaAs heterostructure systems have attracted a great deal of attention because the experiments nominally exhibit a metal-insulator transition (2D MIT) as a function of 2D carrier density ($`n`$). In addition to this unexpected 2D MIT phenomenon (at this stage it is unclear whether the transition represents a true $`T=0`$ quantum phase transition (QPT) or a finite temperature crossover behavior), these measurements reveal a number of intriguing transport properties in dilute 2D systems, such as a remarkable temperature dependence of the low density resistivity in the nominally metallic phase, which deserve serious theoretical attention in their own right irrespective of whether the 2D MIT phenomenon is a true QPT or not.
In this paper we provide a quantitative theory for one such recent experiment carried out in a low density GaAs-based 2D hole gas. In our opinion, Ref. represents a particularly important experiment in relation to the 2D MIT phenomenon (although ironically no MIT is actually observed in Ref. ; even the lowest density data in Ref. are entirely in the nominally metallic phase) because the ultra-pure samples used in Ref. explore the 2D "metallic" regime of the highest mobility (i.e., the best quality or equivalently the lowest disorder), the lowest carrier density, and the lowest temperature so far studied in the context of the 2D MIT phenomenon. More specifically, there have been suggestions and speculations that the 2D MIT phenomenon is an interaction-driven QPT (the scaling theory of localization rules out a true localization transition in a 2D disordered system) with the dimensionless $`r_s`$ parameter, which is the ratio of the interaction energy to the noninteracting kinetic energy of the 2D electron system, being the tuning parameter which drives the QPT. It is important to emphasize that $`r_s`$ increases as $`n`$ decreases ($`r_s\propto n^{-1/2}`$), and therefore the 2D systems of Ref. represent the highest (lowest) $`r_s`$ ($`n`$) and consequently the most strongly interacting 2D systems experimentally studied so far in the context of the 2D MIT phenomenon. To be precise, the $`r_s`$ values of the nominally "metallic" 2D hole regime explored in Ref. reach values as high as $`r_s=26`$ (corresponding to the lowest hole density $`n=3.8\times 10^9\mathrm{cm}^{-2}`$ studied in Ref. ) with no sign of an MIT, whereas the other systems studied in the literature exhibit the 2D MIT at critical $`r_s`$ values as low as $`r_s\simeq 8-12`$ (Si MOSFETs) and $`10-20`$ (GaAs hole systems). The experimental results presented in Ref. 
thus compellingly demonstrate that interaction (i.e., the $`r_s`$ parameter) is by no means the only (or perhaps even the dominant) variable controlling the physics of 2D MIT โ disorder (and perhaps even temperature) also plays an important role.
Our transport theory for the 2D hole system employs the finite temperature Boltzmann equation technique, which has earlier been successful in n-Si MOSFETs and n-GaAs systems. We include the following effects in our calculation: (1) Subband confinement effects (i.e., we take into account the extent of the 2D system in the third dimension and do not assume it to be a zero-width 2D layer); (2) scattering by screened charged random impurity centers; (3) finite temperature and finite wave vector screening through the random phase approximation (RPA) (actually we employ a slightly modified version of RPA, the so-called 2D Hubbard approximation, which approximately and rather crudely incorporates the electron-electron interaction-induced vertex correction in the screening function, which may be important at the low carrier densities being investigated; it turns out that our calculated resistance with the Hubbard approximation is within $`30\%`$ of the corresponding RPA results); (4) phonon scattering. The effects we neglect in our theory are (1) all localization and multiple scattering corrections; (2) inelastic electron interaction effects; in fact, all effects of electron-electron interaction are neglected in our theory except for the long range screening through RPA and the (approximate) short-range vertex correction through the Hubbard approximation.
Our calculations are similar to the ones we recently carried out for electron inversion layers in n-Si MOSFETs, with two important differences: (1) we include the full hole density in the current calculations without subtracting out any critical density as done in Ref. ; this is, in fact, consistent with our Si MOSFET calculations since the critical density in Ref. must be extremely small, and in any case SdH measurements carried out in Ref. show that all the carriers are "free" and participating in the conduction process; (2) we include phonon scattering effects in the current calculations because phonon scattering is significant for GaAs holes already in the $`T=1-10\mathrm{K}`$ temperature range, whereas phonon scattering is negligibly small in n-Si MOSFETs in the $`1-10\mathrm{K}`$ temperature range. Details of the phonon scattering calculations are given in Ref. ; the essential point is that the phonon resistivity is proportional to $`T`$ for $`T>1\mathrm{K}`$ and is negligibly small in the low temperature Bloch-Grüneisen regime.
Our calculated resistivity for 2D holes in GaAs structures is shown in Figs. 1 and 2 for two different types of 2D quantum confinement: square well (Fig. 1) and heterojunction (inversion layer type, approximately "triangular") confinement (Fig. 2). The qualitative results for the two kinds of confinement are, as expected, very similar (although the actual quantitative resistance values depend on the nature of confinement, since the scattering and screening matrix elements are strongly confinement dependent through the wavefunction spread normal to the 2D confinement plane). The resistivity can be written as $`\rho (T)=\rho _0+\rho _{imp}(T)+\rho _{ph}(T)`$, where $`\rho _0\equiv \rho (T\rightarrow 0)`$ is the residual resistivity arising entirely from (screened) charged impurity scattering in our theory (for a weakly localized system $`\rho _0`$ diverges logarithmically as $`T\rightarrow 0`$; our theory is valid above the crossover temperature scale at which weak localization sets in, and no indication of the expected $`\mathrm{ln}T`$ weak localization divergence is seen in the experimental data of Ref. down to the lowest reported measurement temperature, $`35\mathrm{mK}`$). $`\rho _{ph}(T)`$ is the resistivity contribution from phonon scattering, which can be quite significant for 2D holes in GaAs already in the $`1-10\mathrm{K}`$ temperature range. Finally, $`\rho _{imp}(T)`$ is the temperature dependent part of the charged impurity (i.e., random disorder) scattering contribution to the resistivity, i.e., $`\rho _0+\rho _{imp}(T)\equiv \rho _i`$ is the total impurity contribution to the resistivity. We note that $`\rho _0`$, which sets the overall resistivity scale \[by definition, both $`\rho _{imp}(T)`$ and $`\rho _{ph}(T)`$ vanish as $`T\rightarrow 0`$\] in the problem, is determined by the amount of the random disorder in the system, which is in general unknown. The amount of random disorder (and consequently $`\rho _0`$) depends on the strength and the spatial distribution of all the impurity scattering centers in the system.
We parameterize the charged impurity density, assuming the impurities to be randomly distributed static Coulomb charged centers interacting with the 2D carriers via the screened Coulomb interaction. We adjust the charged impurity density to get agreement between theory and the experimental data; thus the scale $`\rho _0`$ is essentially an adjustable parameter in our theory since the actual impurity distribution in the 2D systems of interest is simply not known. We emphasize, however, that the charged impurity density needed in our theory to
obtain agreement between our calculations and the experimental data for $`\rho _0`$ is reasonable.
Before discussing our results we make three salient remarks about our calculation and model. First, we neglect scattering by interface roughness, alloy disorder, etc. in our calculation (including only charged impurity scattering in the theory), since it is well-known that the dominant low temperature resistive mechanism in high quality GaAs structures arises essentially from charged impurity scattering (it is straightforward to include additional scattering mechanisms in our calculations, with the unpleasant complication of having additional unknown parameters, such as the interface roughness strength, in the theory; our choice is to keep the number of unknown adjustable parameters at a minimum by assuming that all of the random disorder scattering is caused by randomly distributed charged impurity scattering, which should be an excellent approximation for the extreme high quality GaAs samples used in Ref. ). Second, Matthiessen's rule, which is implicitly assumed in separating out $`\rho _i(T)`$ and $`\rho _{ph}(T)`$, is known not to be strictly valid at finite temperatures because different scattering rates do not simply add in the total resistivity. It is important to emphasize, however, that we do not assume Matthiessen's rule in our theoretical calculations; the decomposition of $`\rho (T)`$ above is written down simply as a rough guide for qualitative discussion. In any case, the deviation from Matthiessen's rule is of the order of $`30\%`$ or less, which is not of much consequence for our discussion. Finally, the third remark we make is regarding our use of the single scattering Born approximation in our Boltzmann theory (neglecting all multiple scattering effects), which can be justified by noting that our calculated resistivity (and the corresponding experimental resistivity measured in Ref. ) always satisfies the weak scattering condition $`k_Fl\gg 1`$; in fact, our results are restricted to $`k_Fl>3`$ even in the worst situation (for our highest resistance results).
We therefore believe that the Born approximation may not be a poor approximation for our problem.
In Fig. 1 we show our calculated 2D hole resistivity for symmetric square well systems corresponding to the sample of Ref. . The actual sample configuration is shown schematically as an inset in Fig. 1. We also show some representative experimental results (from Fig. 2 of ref. ). We emphasize that the quantitative agreement with the data of ref. , while being certainly indicative of the essential validity of our theoretical approach, should not be taken too seriously; it is certainly not the feature of our theory we would focus on, particularly since the random impurity distribution in the experimental samples is unknown. It is the overall striking qualitative similarity between our microscopic theory and the experimental data which deserves attention. This is particularly so because the density and temperature dependence of the measured resistance in ref. shows a thoroughly nontrivial non-monotonic behavior which is completely reproduced in our calculations. This striking non-monotonicity in $`\rho (T)`$, at lower carrier densities, arises from a competition among three mechanisms: screening, which is particularly important at lower $`T`$; nondegeneracy and the associated quantum-classical crossover for $`T\sim T_F`$ ($`T_F\equiv E_F/k_B`$, the Fermi temperature), which was discussed in ref. in the context of n-Si MOSFETs; and the phonon scattering effect, which is negligible below $`1\mathrm{K}`$ but becomes quantitatively increasingly important for $`T>1\mathrm{K}`$. The Fermi temperature for the 2D hole system can be expressed as $`T_F=0.64(n/10^{10})\mathrm{K}`$, where $`n`$ is the 2D hole density measured in units of $`\mathrm{cm}^{-2}`$. Thus for densities between $`n=0.65\times 10^{10}\mathrm{cm}^{-2}`$ and $`n=4.8\times 10^{10}\mathrm{cm}^{-2}`$ in Fig. 1, $`T_F`$ varies between $`0.4\mathrm{K}`$ and $`3\mathrm{K}`$. This makes the quantum-classical crossover physics particularly significant for the results of Ref. , as was already noted by the authors in Ref. .
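The Fermi temperature formula quoted above is straightforward to apply to the extreme densities of Fig. 1; a small sketch:

```python
def fermi_temperature_K(n_cm2):
    """T_F = 0.64 * (n / 1e10) K for the 2D hole system, as quoted in the text."""
    return 0.64 * (n_cm2 / 1.0e10)

# Extreme densities of Fig. 1 (in cm^-2)
T_high = fermi_temperature_K(4.8e10)   # ~3 K at the highest density
T_low = fermi_temperature_K(0.65e10)   # ~0.4 K at the lowest density
```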
At higher densities (the bottom two curves in Fig. 1) the quantum-classical crossover effects are not particularly important because phonon scattering becomes important before the classical behavior $`\rho \sim T^{-1}`$ can show up, and the system makes a transition from the quantum regime to the phonon-scattering-dominated regime; the fast rise in $`\rho (T)`$ at high $`T`$ in Fig. 1 is the phonon scattering effect. At low enough densities, however, phonon scattering effects are absent (because phonons are frozen out in the low temperature Bloch-Grüneisen range ) at the quantum-classical crossover point, which occurs at very low temperatures around $`T\sim T_F<1K`$ (the top two curves in Fig. 1). In these low density results one can see $`\rho (T)`$ increasing with $`T`$ at lower temperatures due to screening effects ; then the quantum-classical crossover occurs in the intermediate temperature regime around $`T_F`$, where nondegeneracy effects make the resistivity decrease as $`\rho \sim T^{-1}`$; eventually at higher temperatures ($`T\gtrsim 1K`$) phonon scattering takes over and $`\rho (T)`$ increases with $`T`$ again. At higher densities $`T_F`$ is pushed up to the phonon scattering regime, and the quantum-classical crossover physics is pre-empted by phonons so that non-monotonicity effects are not manifest.
The non-monotonic behavior of $`\rho (T)`$ as a function of $`n`$ and $`T`$ is made more explicit in Fig. 2(a), where we show our calculated resistivity for the same density and temperature range as in Fig. 1 for a heterostructure inversion-layer-type "triangular" confinement 2D hole gas, separating out the pure impurity scattering contribution (i.e., the dashed curves in Fig. 2(a) leave out the phonon scattering contribution completely). First, we note that the resistivity results in Fig. 2(a) are very similar to those in Fig. 1, indicating that the transport behavior seen in Ref. is the generic behavior of a low density 2D GaAs hole system, and does not arise from any particular feature of the square well samples used in Ref. . Second, the interplay of screening (low temperature), phonons (high temperature), and nondegeneracy (high temperature and low density) is manifestly obvious in Fig. 2(a): the intriguing low density non-monotonicity in the observed $`\rho (T)`$ clearly arises from the fact that both the screening and phonon scattering mechanisms give rise to a $`\rho (T)`$ monotonically increasing with $`T`$ (at low temperatures for screening, and at high temperatures for phonons), whereas nondegeneracy effects produce a $`\rho (T)`$ decreasing with $`T`$ for $`T\gtrsim T_F`$. Since phonon scattering is the dominant temperature dependent scattering mechanism in GaAs holes for $`T>1K`$, the non-monotonicity can show up in any significant way only if $`T_F\lesssim 1K`$, which is precisely the experimental observation.
As an interesting comparison we show in Fig. 2(b) the calculated $`\rho (T)`$, without any phonon scattering, for the same densities (and impurity scattering parameters) as in Fig. 2(a) for 2D electrons confined in a GaAs heterostructure inversion layer (i.e., the only difference between the results of Fig. 2(a) and Fig. 2(b) is that the GaAs electron mass has been used in the calculations corresponding to Fig. 2(b) rather than the hole mass). The neglect of phonon scattering is justified by the fact that phonons contribute significantly to the GaAs 2D electron resistivity only for $`T>10K`$; in fact, inclusion of appropriate phonon scattering would produce results indistinguishable from the results shown in Fig. 2(b) (i.e., up to $`5K`$). The difference between the results of Figs. 2(a) (holes) and 2(b) (electrons) is striking: there is essentially no observable (on a log scale) temperature dependence at low temperatures in the 2D electron resistivity in the GaAs heterostructure down to 2D densities as low as $`n=0.38\times 10^{10}cm^{-2}`$. This essential temperature independence of the low temperature electronic resistance in high quality GaAs heterostructures, which is a well-known experimental fact, arises from the weak screening property (associated with the low effective mass and the correspondingly small electronic density of states) of 2D electrons in GaAs heterostructures compared with the higher mass 2D holes in GaAs or 2D electrons in Si MOSFETs. This weak screening behavior of GaAs electrons precludes any strongly temperature dependent $`\rho (T)`$ even at very low carrier densities (and temperatures). The quantum-classical crossover phenomenon, however, still occurs around $`T\sim T_F`$, leading to a $`\rho (T)\sim T^{-1}`$ for $`T\gg T_F`$, which is manifestly obvious in Fig. 2(b), particularly for lower densities. Note that the Fermi temperature in Fig. 2(b) corresponds to $`T_F=4.1(n/10^{10})K`$ with $`n`$ being the 2D electron density in Fig. 2(b) measured in units of $`10^{10}cm^{-2}`$. 
Thus the Fermi temperature in Fig. 2(b) ranges from $`1.5K`$ (top curve) to $`35.5K`$ (bottom curve). We note that the decreasing $`\rho (T)`$ at higher $`T`$ in Fig. 2(b) arises not only from the quantum to classical crossover (which is the dominant effect at lower densities when $`T_F`$ is low), but also from the finite temperature Fermi surface averaging in a degenerate quantum system. It is easy to show that the Fermi surface averaging effect at finite temperatures, by itself, always leads to a finite temperature resistivity which decreases weakly with temperature (even in the $`T\to 0`$ limit); in fact, this effect by itself leads to $`\rho (T)\simeq \rho _0[1-O(T/T_F)^2]`$, and can only be observed if the temperature dependent screening effects are unimportant. This effect was first observed in 2D electrons in GaAs heterostructures more than fifteen years ago .
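The quoted Fermi-temperature coefficients can be checked from the standard 2D relation $`E_F=\pi \mathrm{}^2n/m^{}`$ (spin degeneracy 2). A minimal sketch, assuming the standard GaAs electron mass $`m^{}=0.067m_e`$ and a hole mass $`m^{}0.43m_e`$ chosen to reproduce the quoted coefficient (the actual 2D hole mass is sample dependent):

```python
import math

# 2D Fermi temperature T_F = E_F / k_B with E_F = pi * hbar^2 * n / m*
# (spin degeneracy 2).  The hole mass 0.43 m_e is an assumed value, not
# taken from the paper.
HBAR = 1.054571817e-34     # J s
KB = 1.380649e-23          # J / K
ME = 9.1093837015e-31      # kg

def fermi_temperature_2d(n_cm2, m_eff_ratio):
    """T_F in kelvin for a 2D carrier density n_cm2 (in cm^-2)."""
    n = n_cm2 * 1e4                                   # cm^-2 -> m^-2
    e_f = math.pi * HBAR**2 * n / (m_eff_ratio * ME)  # Fermi energy in joules
    return e_f / KB

tf_electron = fermi_temperature_2d(1e10, 0.067)  # ~4.1 K, cf. T_F = 4.1 (n/1e10) K
tf_hole = fermi_temperature_2d(1e10, 0.43)       # ~0.64 K, cf. T_F = 0.64 (n/1e10) K
```

Both quoted coefficients follow directly once the effective masses are fixed, which is why the hole system reaches the nondegenerate regime at an order of magnitude lower temperature than the electron system at the same density.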
To conclude, we have developed a theory for the low temperature transport properties of 2D holes and electrons confined in low density and high mobility GaAs heterostructures. Our theory includes temperature dependent screening of impurity scattering and phonon scattering effects. Agreement between our theory and experiment suggests that screening and impurity scattering effects play an essential role in determining much of the intriguing temperature and density dependent transport properties in 2D systems, and that random disorder (mostly arising from charged impurity scattering) is an important ingredient in the physics of low density 2D systems.
This work is supported by the U.S.-ARO and the U.S.-ONR.
# RIKEN-AF-NP-324, SAGA-HE-143-99, KOBE-FHD-99-03, FUT-99-02: Polarized Parton Distribution Functions in the Nucleon
## I INTRODUCTION
For a long time, deep inelastic scattering (DIS) of leptons from the nucleon has served as an important tool for studying the nucleon substructure and testing quantum chromodynamics (QCD). Structure functions of the nucleon have been measured with this reaction in great precision, which often provides a firm basis for searches for new physics in hadron collisions. In addition, basic parameters of QCD such as $`\alpha _s`$ or $`\mathrm{\Lambda }_{\mathrm{QCD}}`$ have been obtained from the $`Q^2`$ dependence of the structure functions. Consequently, hadron-related reactions at high energies are described by the parton model and perturbative QCD with reasonable precision.
The measurement of the polarized structure function $`g_1^p(x,Q^2)`$ by the European Muon Collaboration (EMC) in 1988 has, however, revealed a more profound structure of the proton, which is often referred to as "the proton spin crisis". Their results are interpreted as a very small quark contribution to the nucleon spin. Then, the rest has to be carried by the gluon spin and/or by the angular momenta of quarks and gluons. Another consequence of their measurement was that the strange quark is negatively polarized, which was not anticipated in a naive quark model.
The progress in the data precision is remarkable in post-EMC experiments. The final results of the Spin Muon Collaboration (SMC) experiment have been reported, and its value of $`A_1^p`$ at the lowest $`x`$ has decreased in comparison with their previous one . The final results of high-precision $`A_1^p`$ and $`A_1^d`$ data have been presented by the Stanford Linear Accelerator Center (SLAC) E143 collaboration , and they consist of more than 200 data points. Moreover, the measurement of $`g_1^p(x,Q^2)`$ with the pure hydrogen target has been carried out by the HERMES collaboration . In addition to such improvements in the data precision, new programs are underway or in preparation at SLAC, Brookhaven National Laboratory - Relativistic Heavy Ion Collider (BNL-RHIC), European Organization for Nuclear Research (CERN) , etc., and results are expected to come out in the near future. On the other hand, theoretical advances such as the development of the next-to-leading-order (NLO) QCD calculations of polarized splitting functions stimulated many works on the QCD analysis of polarized parton distribution functions (PDFs) . There is an attempt to obtain the next-to-next-to-leading-order (NNLO) splitting functions , and we can expect further progress in the precise analysis of polarized PDFs.
In this paper, we present an analysis of world data on the cross section asymmetry $`A_1`$ in the polarized DIS processes for the proton, neutron, and deuteron targets. We formed a group called the Asymmetry Analysis Collaboration (AAC), and our goal is to determine polarized PDFs, $`\mathrm{\Delta }f_i(x,Q^2)`$, where $`i=u,d,s,\overline{u},\overline{d},\overline{s},\mathrm{\dots },`$ and $`g`$. Another possible approach is to parametrize structure functions, $`g_1^N(x,Q^2)`$ ($`N=p,n`$, and $`d`$), which can be expressed as linear combinations of the PDFs. In the analysis and predictions of the cross section asymmetry in polarized hadron-hadron collisions, however, what we need are polarized PDFs rather than structure functions, because the contribution of each quark flavor is weighted differently in subprocesses such as $`qg\to qg`$ than in DIS, where each flavor is weighted by its electric charge squared.
We choose $`A_1`$ as the object of the analysis, since it is closer to the direct observables in experiments than $`g_1^N(x,Q^2)`$. The $`g_1^N(x,Q^2)`$ data published by the experiments depend on the knowledge of the unpolarized structure functions at the time of their publication. By choosing $`A_1`$ as the object of the analysis, we can extend the analysis to include new sets of data easily, without any change in the previous data set.
As explained in Sec. II, we parametrize the polarized parton distributions at small momentum transfer squared $`Q^2=1.0`$ GeV<sup>2</sup> ($`Q_0^2`$) with a special emphasis on the positivity and quark counting rule. Then, they are evolved to the $`Q^2`$ points, where the experimental data were taken, by the leading-order (LO) or NLO $`Q^2`$ evolution program. Using one of well-established unpolarized parton distributions, we construct $`A_1`$ as
$$A_1(x,Q^2)\simeq \frac{g_1(x,Q^2)}{F_1(x,Q^2)},$$
(1)
to compare with the experimental data. The polarized parton distributions at the initial $`Q_0^2`$ are determined by a $`\chi ^2`$ analysis.
In Sec. II, we describe the outline of our analysis with the necessary formulation and the data set used in the analysis. Section III is devoted to the explanation of the LO and NLO $`Q^2`$ evolution programs which we developed for our fit. The parametrization of the polarized parton distribution functions at the initial $`Q_0^2`$ is described in Sec. IV, and the fitting results are discussed in Sec. V. The conclusions are given in Sec. VI.
## II PARTON MODEL ANALYSIS OF POLARIZED DIS DATA
In the experiments of polarized DIS, direct observables are the cross-section asymmetries $`A_\parallel `$ and $`A_{\perp }`$, which are defined as

$$A_\parallel =\frac{\sigma _{\uparrow \downarrow }-\sigma _{\uparrow \uparrow }}{\sigma _{\uparrow \downarrow }+\sigma _{\uparrow \uparrow }},A_{\perp }=\frac{\sigma _{\uparrow \rightarrow }-\sigma _{\downarrow \rightarrow }}{\sigma _{\uparrow \rightarrow }+\sigma _{\downarrow \rightarrow }}.$$
(2)
The $`\sigma _{\uparrow \uparrow }`$ and $`\sigma _{\uparrow \downarrow }`$ represent the cross sections for the lepton-nucleon scattering with parallel and anti-parallel helicity states, respectively. On the other hand, $`\sigma _{\uparrow \rightarrow }`$ and $`\sigma _{\downarrow \rightarrow }`$ are the scattering cross sections for a transversely polarized nucleon target. We suppress the dependence on $`x`$ and $`Q^2`$ where it is evident hereinafter. The asymmetries $`A_\parallel `$ and $`A_{\perp }`$ are related to the photon absorption cross section asymmetries, $`A_1`$ and $`A_2`$, by
$$A_\parallel =D(A_1+\eta A_2),A_{\perp }=d(A_2-\zeta A_1),$$
(3)
where $`D`$ represents the photon depolarization factor and $`\eta `$ is approximated as $`\gamma (1-y)/(1-y/2)`$ with $`\gamma =2Mx/\sqrt{Q^2}`$. The $`d`$ and $`\zeta `$ are other kinematical factors. The asymmetries, $`A_1`$ and $`A_2`$, can be expressed as:
$$A_1(x,Q^2)=\frac{\sigma _{T,\frac{1}{2}}-\sigma _{T,\frac{3}{2}}}{\sigma _{T,\frac{1}{2}}+\sigma _{T,\frac{3}{2}}}=\frac{g_1(x,Q^2)-\gamma ^2g_2(x,Q^2)}{F_1(x,Q^2)},$$
(4)
$$A_2(x,Q^2)=\frac{2\sigma _{LT}}{\sigma _{T,\frac{1}{2}}+\sigma _{T,\frac{3}{2}}}=\frac{\gamma [g_1(x,Q^2)+g_2(x,Q^2)]}{F_1(x,Q^2)}.$$
(5)
Here $`\sigma _{T,\frac{1}{2}}`$ and $`\sigma _{T,\frac{3}{2}}`$ are the absorption cross sections of the virtual transverse photon for the total helicity of the photon-nucleon system of $`\frac{1}{2}`$ and $`\frac{3}{2}`$, respectively; $`\sigma _{LT}`$ is the interference term between the transverse and longitudinal photon-nucleon amplitudes; $`F_1(x,Q^2)`$ is the unpolarized structure function of the nucleon. If we measure both $`A_\parallel `$ and $`A_{\perp }`$, we can extract both $`g_1(x,Q^2)`$ and $`g_2(x,Q^2)`$ from experimental data with minimal assumptions. Otherwise, $`\eta A_2`$ should be neglected in Eq. (3) to extract $`A_1`$. This is justified since $`\eta A_2`$ is much smaller than $`A_1`$ in the present kinematical region. However, its effect has to be included in the systematic error. In the small-$`x`$ or large-$`Q^2`$ region, $`\gamma ^2`$ is of the order of $`10^{-3}`$ to $`10^{-2}`$. The absolute value of $`g_2(x,Q^2)`$ has been measured to be significantly smaller than that of $`g_1(x,Q^2)`$. Therefore, the asymmetry in Eq. (4) can be expressed by
$$A_1(x,Q^2)\simeq \frac{g_1(x,Q^2)}{F_1(x,Q^2)},$$
(6)
to good approximation. Since the structure function usually extracted from unpolarized DIS experiments is $`F_2(x,Q^2)`$, we use $`F_2(x,Q^2)`$ instead of $`F_1(x,Q^2)`$ by the relation
$$F_1(x,Q^2)=\frac{F_2(x,Q^2)}{2x[1+R(x,Q^2)]}.$$
(7)
The function $`R(x,Q^2)`$ represents the cross-section ratio of the longitudinally polarized photon to the transverse one, $`\sigma _L/\sigma _T`$, which is determined experimentally in reasonably wide $`Q^2`$ and $`x`$ ranges in the SLAC experiment of Ref. . Recently published data on $`R(x,Q^2)`$ by NMC showed slightly different values from the SLAC measurement but mostly agreed within experimental uncertainties. Therefore, we decided to use the SLAC measurements to be consistent with most of the analyses of polarized DIS experiments.
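A minimal numerical sketch of the kinematic factors just introduced: the size of $`\gamma ^2`$ at typical DIS kinematics, and the conversion of Eq. (7); the sample kinematic points are illustrative.

```python
# gamma^2 = (2 M x)^2 / Q^2 and F_1 = F_2 / (2 x (1 + R)), cf. Eq. (7).
# The kinematic points and the sample (F_2, R) values are illustrative.
M_P = 0.938272  # proton mass in GeV

def gamma2(x, q2_gev2):
    """Kinematic factor gamma^2 entering Eqs. (3)-(5)."""
    return (2.0 * M_P * x) ** 2 / q2_gev2

def f1_from_f2(f2, x, r):
    """Eq. (7): F_1 = F_2 / (2 x (1 + R))."""
    return f2 / (2.0 * x * (1.0 + r))

# gamma^2 is indeed O(10^-3 - 10^-2) at typical small-x DIS kinematics
g2_small_x = gamma2(0.05, 2.0)   # ~ 4.4e-3
g2_mid_x = gamma2(0.10, 5.0)     # ~ 7.0e-3

f1 = f1_from_f2(0.4, 0.1, 0.2)   # = 0.4 / (0.2 * 1.2) = 5/3
```

This makes concrete why the $`\gamma ^2g_2`$ term in Eq. (4) is negligible at the percent level, and why a nonzero $`R`$ directly rescales the $`F_1`$ used in the denominator of $`A_1`$.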
The structure function $`F_2`$ can be written in terms of unpolarized PDFs with coefficient functions as
$$F_2(x,Q^2)=\underset{i=1}{\overset{n_f}{\sum }}e_i^2x\left\{C_q(x,\alpha _s)\otimes [q_i(x,Q^2)+\overline{q}_i(x,Q^2)]+C_g(x,\alpha _s)\otimes g(x,Q^2)\right\}.$$
(8)
Here $`q_i`$ and $`\overline{q}_i`$ are the distributions of quark and antiquark of flavor $`i`$ with electric charge $`e_i`$. The gluon distribution is represented by $`g(x,Q^2)`$. The convolution $`\otimes `$ is defined by
$$f(x)\otimes g(x)=\int _x^1\frac{dy}{y}f\left(\frac{x}{y}\right)g(y).$$
(9)
The coefficient functions, $`C_q`$ and $`C_g`$, are written as a series in $`\alpha _s`$ with $`x`$-dependent coefficients:
$$C(x,\alpha _s)=\underset{k=0}{\overset{\mathrm{\infty }}{\sum }}\left(\frac{\alpha _s}{2\pi }\right)^kC^{(k)}(x).$$
(10)
The LO coefficient functions are simply given by
$$C_q^{(0)}(x)=\delta (1x),C_g^{(0)}(x)=0.$$
(11)
In the same way, the polarized structure function $`g_1(x,Q^2)`$ is expressed as
$$g_1(x,Q^2)=\frac{1}{2}\underset{i=1}{\overset{n_f}{\sum }}e_i^2\left\{\mathrm{\Delta }C_q(x,\alpha _s)\otimes [\mathrm{\Delta }q_i(x,Q^2)+\mathrm{\Delta }\overline{q}_i(x,Q^2)]+\mathrm{\Delta }C_g(x,\alpha _s)\otimes \mathrm{\Delta }g(x,Q^2)\right\},$$
(12)
where $`\mathrm{\Delta }q_i\equiv q_i^{\uparrow }-q_i^{\downarrow }`$ ($`i=u,d,s,\mathrm{\dots }`$) represents the difference between the number densities of a quark with helicity parallel to that of the parent nucleon and with helicity anti-parallel. The definitions of $`\mathrm{\Delta }\overline{q}_i`$ and $`\mathrm{\Delta }g`$ are the same. The polarized coefficient functions $`\mathrm{\Delta }C_q`$ and $`\mathrm{\Delta }C_g`$ are defined similarly to the unpolarized case.
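Because the LO coefficient functions of Eq. (11) reduce the convolutions to simple products, $`g_1`$, $`F_2`$, and hence $`A_1`$ can be assembled directly from the distributions at a given $`x`$. A sketch with purely illustrative toy values (not a fit result):

```python
# LO construction of A_1 = g_1 / F_1 from Eqs. (6)-(8) and (12).  The
# distribution values below are toy numbers at a single x, not a fit.
E2 = {'u': 4.0 / 9.0, 'd': 1.0 / 9.0, 's': 1.0 / 9.0}  # quark charges squared

def g1_lo(dq_plus):
    """Eq. (12) at LO: g_1 = (1/2) sum_i e_i^2 [dq_i + dqbar_i]."""
    return 0.5 * sum(E2[f] * v for f, v in dq_plus.items())

def f2_lo(x, q_plus):
    """Eq. (8) at LO: F_2 = sum_i e_i^2 x [q_i + qbar_i]."""
    return x * sum(E2[f] * v for f, v in q_plus.items())

def a1_lo(x, dq_plus, q_plus, r):
    """A_1 ~ g_1 / F_1 with F_1 = F_2 / (2x(1+R)), cf. Eqs. (6)-(7)."""
    f1 = f2_lo(x, q_plus) / (2.0 * x * (1.0 + r))
    return g1_lo(dq_plus) / f1

x = 0.3
dq = {'u': 1.0, 'd': -0.2, 's': 0.0}  # toy Delta q + Delta qbar at this x
q = {'u': 2.0, 'd': 1.0, 's': 0.4}    # toy q + qbar at this x
a1 = a1_lo(x, dq, q, 0.2)             # lies between -1 and 1
```

In the NLO fit the products are replaced by the full convolutions, but the flavor weighting by $`e_i^2`$ is unchanged.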
Another separation of the quark distribution can be done by using flavor-singlet quark distribution $`\mathrm{\Delta }\mathrm{\Sigma }(x,Q^2)`$ and flavor-nonsinglet quark distributions for the proton and the neutron, $`\mathrm{\Delta }q_{NS}^p(x,Q^2)`$ and $`\mathrm{\Delta }q_{NS}^n(x,Q^2)`$, respectively. Those can be expressed with polarized PDFs as follows:
$`\mathrm{\Delta }\mathrm{\Sigma }(x)`$ $`=`$ $`a_0(x)=\mathrm{\Delta }u^+(x)+\mathrm{\Delta }d^+(x)+\mathrm{\Delta }s^+(x),`$ (13)
$`\mathrm{\Delta }q_{NS}^{p,n}(x)`$ $`=`$ $`\pm {\displaystyle \frac{3}{4}}a_3(x)+{\displaystyle \frac{1}{4}}a_8(x)`$ (14)
$`=`$ $`\pm {\displaystyle \frac{3}{4}}[\mathrm{\Delta }u^+(x)-\mathrm{\Delta }d^+(x)]+{\displaystyle \frac{1}{4}}[\mathrm{\Delta }u^+(x)+\mathrm{\Delta }d^+(x)-2\mathrm{\Delta }s^+(x)],`$ (15)
where $`\mathrm{\Delta }u^+(x)=\mathrm{\Delta }u(x)+\mathrm{\Delta }\overline{u}(x)`$ and similarly for $`\mathrm{\Delta }d^+(x)`$ and $`\mathrm{\Delta }s^+(x)`$. Analyses in Ref. and Ref. utilized this separation. Such a separation is useful in $`Q^2`$ evolution, and it is also natural when one wants to obtain the quark contribution to the proton spin, $`\int _0^1\mathrm{\Delta }\mathrm{\Sigma }(x)dx`$.
On the other hand, when we try to calculate the cross section for a polarized $`pp`$ reaction, e.g., Drell-Yan production of lepton pairs, we need the combination $`\mathrm{\Delta }q_i(x_1)\times \mathrm{\Delta }\overline{q}_i(x_2)`$ (multiplied by the electric charge squared). To allow such calculations with the above separation, we need a further assumption on the polarized antiquark distributions, e.g., a flavor symmetric sea, $`\mathrm{\Delta }u_{\mathrm{sea}}(x)=\mathrm{\Delta }\overline{u}(x)=\mathrm{\Delta }d_{\mathrm{sea}}(x)=\mathrm{\Delta }\overline{d}(x)=\mathrm{\Delta }s(x)=\mathrm{\Delta }\overline{s}(x)`$. With such an assumption, the above separation becomes equivalent to the PDF separation in the sense that one description can be translated into the other by a simple transformation.
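Under the flavor symmetric sea assumption, the translation between the two descriptions is a linear change of basis. A small sketch of Eqs. (13)-(15) and their inverse, with illustrative numbers:

```python
# Change of basis between (Delta u+, Delta d+, Delta s+) and the
# singlet/nonsinglet combinations (Delta Sigma, a_3, a_8) of Eqs. (13)-(15).
def pdf_to_singlet_basis(du, dd, ds):
    """(Delta u+, Delta d+, Delta s+) -> (Delta Sigma, a_3, a_8)."""
    sigma = du + dd + ds
    a3 = du - dd
    a8 = du + dd - 2.0 * ds
    return sigma, a3, a8

def singlet_to_pdf_basis(sigma, a3, a8):
    """Inverse transformation back to the flavor basis."""
    du = sigma / 3.0 + a3 / 2.0 + a8 / 6.0
    dd = sigma / 3.0 - a3 / 2.0 + a8 / 6.0
    ds = sigma / 3.0 - a8 / 3.0
    return du, dd, ds

# round trip with illustrative first-moment-like numbers
sigma, a3, a8 = pdf_to_singlet_basis(0.8, -0.4, -0.1)
du, dd, ds = singlet_to_pdf_basis(sigma, a3, a8)
```

Since the transformation is invertible, fitting in one basis determines the other; the nontrivial physics assumption is only in how the antiquark distributions are tied to the quark ones.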
Of course, we already know that unpolarized sea-quark distributions are not flavor symmetric from various experiments including Drell-Yan production of lepton pairs in $`pp`$ and $`pd`$ collisions. Therefore, this assumption is only justified as an approximation due to limited experimental data. In principle, charged-hadron production data could clarify this issue. Although a $`\chi ^2`$ analysis for the SMC and HERMES data seems to suggest a slight $`\mathrm{\Delta }\overline{u}`$ excess over $`\mathrm{\Delta }\overline{d}`$ , the present data are not accurate enough for finding such a flavor asymmetric signature. Future experiments with charged current at RHIC and polarized option at HERA will be very useful in improving our knowledge on the spin-flavor structure of the nucleon. Furthermore, as it has been done in the unpolarized studies, the difference between the polarized $`pp`$ and $`pd`$ cross sections provides a clue for the polarized flavor asymmetry although actual experimental possibility is uncertain at this stage.
The parametrization models studied so far have various differences in other aspects: (a) the choice of the renormalization scheme, (b) the functional form of the polarized parton distributions due to different physical requirements at $`Q_0^2`$, and (c) the physical quantity to be fitted. In the following, we describe our position on these issues.
* Renormalization Scheme
Although the parton distributions have no scheme dependence in the LO, they do depend on the renormalization scheme in the NLO and beyond. In the polarized case, we have different choices of the scheme due to the axial anomaly and the ambiguity in treating the $`\gamma _5`$ in $`n`$ dimensions. In the NLO analysis, the widely-used scheme is the modified minimal subtraction ($`\overline{\mathrm{MS}}`$) scheme, in which the first moment of the nonsinglet distribution is $`Q^2`$-independent. It was used, for example, by Mertig and van Neerven and Vogelsang . However, the first moment of the singlet distribution is $`Q^2`$-dependent in this scheme and thus it is rather difficult to compare the value of $`\mathrm{\Delta }\mathrm{\Sigma }(x,Q^2)`$ extracted from the DIS at large $`Q^2`$ with the one from the static quark model at small $`Q^2`$. To cure this difficulty, Ball, Forte and Ridolfi used the so-called AB (Adler-Bardeen) scheme, in which the first moment of the singlet distribution becomes independent of $`Q^2`$ because of the Adler-Bardeen theorem. In those schemes, however, some soft contributions are included in the Wilson coefficient functions and not completely absorbed into the PDFs. Another scheme called the JET scheme or the CI (chirally invariant) scheme has been recently proposed. All the hard effects are absorbed into the Wilson coefficient functions in this scheme.
Although we choose the $`\overline{\mathrm{MS}}`$ scheme in our analysis, the polarized PDFs in one scheme are related to those in other schemes with simple formulae .
* Functional Form of polarized PDF and Physical Requirements
Different functional forms have been proposed so far for the polarized PDFs by taking account of various physical conditions. We choose the functional form with special emphasis on the positivity condition and the quark counting rule at $`Q_0^2=1.0`$ GeV<sup>2</sup>.
The positivity condition originates in a probabilistic interpretation of the parton densities. The polarized PDFs should satisfy the condition
$$|\mathrm{\Delta }f_i(x,Q_0^2)|\le f_i(x,Q_0^2).$$
(16)
This is valid in the LO since we can have the complete probabilistic interpretation for each polarized distribution only at the LO. Even in NLO, however, the positivity condition for the polarized cross section $`\mathrm{\Delta }\sigma `$ with the unpolarized cross section $`\sigma `$,
$$|\mathrm{\Delta }\sigma |\le \sigma ,$$
(17)
should still apply for any processes to be calculated with the polarized PDFs to the order of $`๐ช(\alpha _s)`$. Since it is very difficult to calculate the polarized and unpolarized cross sections of the NLO for all the possible processes, it is not realistic to determine the polarized NLO distributions by the positivity condition of Eq. (17). In our analysis, we simply require that Eq. (16) should be satisfied in the LO and also NLO at $`Q_0^2`$. It is shown in Ref. that the NLO $`Q^2`$ evolution should preserve the positivity maintained at initial $`Q_0^2`$.
In many cases, Regge behavior has been assumed for $`x\to 0`$, and the color coherence of gluon couplings has also been used at $`x\to 0`$ . Furthermore, it is an interesting guiding principle that the polarized distributions have a similar behavior to the unpolarized ones in the large-$`x`$ region . Since the behavior of the distributions at large $`x`$ is determined by the term $`(1-x)^\beta `$ in the functions, where $`\beta `$ is a constant, we simply require that the polarized distributions should have the same $`(1-x)^\beta `$ term as the unpolarized ones.
Those physical requirements and assumptions have to be tested by comparing with the existing experimental data.
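The positivity condition of Eq. (16) is straightforward to impose numerically on a grid. A sketch with toy functional forms (sharing the same $`(1-x)^\beta `$ factor, as required above); the exponents and normalizations are illustrative, not fitted values:

```python
# Grid check of the positivity condition |Delta f(x)| <= f(x), Eq. (16).
# The functional forms and exponents below are illustrative only.
def satisfies_positivity(delta_f, f, xs):
    """True if |Delta f(x)| <= f(x) at every grid point."""
    return all(abs(delta_f(x)) <= f(x) for x in xs)

xs = [0.01 * i for i in range(1, 100)]              # x grid in (0, 1)

f_unpol = lambda x: x ** -0.2 * (1.0 - x) ** 3       # toy unpolarized PDF
df_ok = lambda x: 0.5 * x ** 0.8 * (1.0 - x) ** 3    # same (1-x)^beta; |df| <= f
df_bad = lambda x: 2.0 * f_unpol(x)                  # violates positivity
```

In an actual fit this check would be applied to each flavor at $`Q_0^2`$ after every parameter update, rejecting (or penalizing) parameter sets that violate the bound.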
As for the choice of $`Q_0^2`$, it has to be large enough to apply perturbative QCD, but it should be small enough to maintain a large set of experimental data. We find $`Q_0^2`$=1.0 GeV<sup>2</sup> to be a reasonable choice in our analysis.
* Physical Quantities to be Fitted
In most of the polarized experiments, the data have been presented for $`A_1(x,Q^2)`$ and $`g_1(x,Q^2)`$. Some analyses used the $`g_1(x,Q^2)`$ as data samples, while others used the $`A_1(x,Q^2)`$. It should be, however, noted that $`g_1(x,Q^2)`$ is obtained by multiplying $`A_1(x,Q^2)`$ by $`F_1(x,Q^2)`$, so that it is not free from ambiguity of the unpolarized structure function, $`F_1(x,Q^2)`$. Therefore, we consider that it is more advantageous to use the $`A_1(x,Q^2)`$ as the data samples not only for the current work but also for the convenience in expanding the data set to include new data set from SLAC, DESY (German Electron Synchrotron), CERN, and RHIC.
Another important quantity which we should carefully consider is the cross section ratio $`R(x,Q^2)=\sigma _L/\sigma _T`$, where $`\sigma _L`$ and $`\sigma _T`$ are the absorption cross sections of longitudinal and transverse photons, respectively. In principle, nonzero $`R(x,Q^2)`$ originates from radiative corrections in perturbative QCD, higher twist effects, and target mass effects. The higher twist contribution to $`R(x,Q^2)`$ is expected to be small in the large $`Q^2`$ region. So far, some analyses employed nonzero $`R(x,Q^2)`$, while other analyses assumed $`R(x,Q^2)=0`$. However, the latter is not consistent with the experimental analysis procedure, since $`R(x,Q^2)`$ is also used for the evaluation of the photon depolarization factor $`D`$. Indeed, our analysis shows that the world data prefer $`R(x,Q^2)\ne 0`$: the $`\chi ^2`$ increases significantly with $`R=0`$. Therefore, we use nonzero $`R(x,Q^2)`$ in fitting the data of $`A_1(x,Q^2)`$.
Table I summarizes experiments with published data on polarized DIS . These measurements cover a wide range of $`x`$ and $`Q^2`$ with various beam species and energies and various types of polarized nucleon target (not shown in the table). Listed are the numbers of data points above $`Q^2=1.0`$ GeV<sup>2</sup>; the total number of data points is 375.
We use the data with minimal manipulation to analyze them in our framework so as to be consistent with the $`Q^2`$ evolution, the unpolarized parton distributions, and the function $`R(x,Q^2)`$. For example, E143 provides the proton data which are obtained by combining the results of different beam energies using weights based on the unpolarized cross sections (28 points), in addition to "raw" data for each beam energy (81 points at $`Q^2>`$1 GeV<sup>2</sup>). Such weights depend on the choice of the unpolarized structure functions, which are being updated. To localize the dependence on the unpolarized structure functions in the final manipulation for getting $`g_1(x,Q^2)`$, i.e., $`A_1(x,Q^2)`$ multiplied by $`F_1(x,Q^2)`$, we decided to use the "raw" data in our analysis.
Table I also includes the analysis methods. One of the major differences in the analysis is the treatment of the $`A_2(x,Q^2)`$ and $`g_2(x,Q^2)`$ contributions to $`g_1(x,Q^2)/F_1(x,Q^2)`$. Some of the SLAC experiments measured both $`A_\parallel `$ and $`A_{\perp }`$ to enable direct extraction of $`g_1/F_1`$ and $`g_2/F_1`$. Other experiments included the possible contribution of $`\eta A_2`$ in their estimation of systematic errors.
As mentioned above, the choice of the function $`R(x,Q^2)`$ potentially affects $`A_1(x,Q^2)`$, thus final results on polarized PDFs, since the function affects the photon depolarization factor $`๐`$. While it was assumed to be constant in the analyses of the early days, its $`x`$-dependence and $`Q^2`$-dependence have been found to be significant . To reflect the most updated knowledge of $`R(x,Q^2)`$ on our analysis, we have reevaluated the E130 and EMC data by using $`R_{1990}(x,Q^2)`$ , which most of the experiments employed. However, we found changes of a few percent in EMC data and about 10% in E130 data: both of them are smaller than experimental errors.
## III Q<sup>2</sup> EVOLUTION
In our framework and in most of the analyses of structure functions in the parton model, the polarized parton distributions are provided at certain $`Q^2(=Q_0^{\mathrm{\hspace{0.17em}2}})`$ with a number of parameters, which are determined so as to fit polarized experimental data. The experimental data, in general, range over a wide $`Q^2`$ region. The polarized parton distributions have to be evolved from $`Q_0^{\mathrm{\hspace{0.17em}2}}`$ to the $`Q^2`$ points, where experimental data were obtained, in the $`\chi ^2`$ analysis. In calculating the distribution variation from $`Q_0^{\mathrm{\hspace{0.17em}2}}`$ to given $`Q^2`$, the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations are used.
To compare our parametrization with the data, we need to construct $`A_1(x,Q^2)`$ from the polarized and unpolarized PDFs. Since the determination of the unpolarized PDFs is not in our main scope, we decided to employ one of the widely-used sets of PDFs. Although there are slight variations among the unpolarized parametrizations, the calculated $`F_2(x,Q^2)`$ structure functions are essentially the same because almost the same set of experimental data is used in the unpolarized analyses. The Glück-Reya-Vogt (GRV) unpolarized distributions have been used in our analyses; however, the parametrization results do not change significantly even with other unpolarized distributions. We checked this point by comparing the GRV $`F_2(x,Q^2)`$ structure function with those of MRST (Martin-Roberts-Stirling-Thorne) and CTEQ (Coordinated Theoretical/Experimental Project on QCD Phenomenology and Tests of the Standard Model) at $`Q^2`$=5 GeV<sup>2</sup> in the $`x`$ range $`0.001<x<0.7`$. The differences between these distributions are less than about 3%. The differences depend on the $`x`$ region; however, we find no significant systematic deviation from the GRV distribution.
We calculate the GRV unpolarized distributions at $`Q_0^{\mathrm{\hspace{0.17em}2}}`$=1 GeV<sup>2</sup> in Ref. (the actual calculation has been done with the FORTRAN program obtained from http://durpdg.dur.ac.uk/HEPDATA/PDF). The distributions are evolved to those at $`Q^2`$ by the DGLAP equations, and then they are convoluted with the coefficient functions by Eq. (8). Because the unpolarized evolution equations are essentially the same as the longitudinally polarized ones in the following, except for the splitting functions, we do not discuss them in this paper. The interested reader may read, for example, Ref. .
The polarized PDFs are provided at the initial $`Q_0^{\mathrm{\hspace{0.17em}2}}`$; therefore, they should be evolved to $`Q^2`$ by the DGLAP equation in order to obtain $`g_1(x,Q^2)`$. The DGLAP equations are coupled integrodifferential equations with complicated splitting functions in the NLO case. Both the LO and NLO cases can be handled by the same DGLAP equation form; however, the NLO effects are included in the running coupling constant $`\alpha _s(Q^2)`$ and in the splitting functions $`\mathrm{\Delta }P_{ij}(x)`$.
In solving the evolution equations, it is more convenient to use the variable $`t`$ defined by
$$t\equiv \mathrm{ln}Q^2,$$
(18)
instead of the variable $`Q^2`$. Then, the flavor nonsinglet DGLAP equation is given by
$$\frac{\partial }{\partial t}\mathrm{\Delta }q_{_{NS}}(x,t)=\frac{\alpha _s(t)}{2\pi }\mathrm{\Delta }P_{q^\pm ,NS}(x)\otimes \mathrm{\Delta }q_{_{NS}}(x,t),$$
(19)
where $`\mathrm{\Delta }q_{_{NS}}(x,t)`$ is a longitudinally-polarized nonsinglet parton distribution, and $`\mathrm{\Delta }P_{q^\pm ,NS}`$ is the polarized nonsinglet splitting function. The notation $`q^\pm `$ in the splitting function indicates a "$`\mathrm{\Delta }q\pm \mathrm{\Delta }\overline{q}`$ type" distribution $`\sum _ia_i(\mathrm{\Delta }q_i\pm \mathrm{\Delta }\overline{q}_i)`$, where $`a_i`$ is a given constant for flavor $`i`$. The singlet evolution is more complicated than the nonsinglet one due to gluon participation in the evolution. The singlet quark distribution is defined by $`\mathrm{\Delta }\mathrm{\Sigma }(x,t)=\sum _i^{N_f}(\mathrm{\Delta }q_i+\mathrm{\Delta }\overline{q}_i)`$, and its evolution is described by the coupled integrodifferential equations,
$$\frac{\partial }{\partial t}\left(\begin{array}{c}\mathrm{\Delta }\mathrm{\Sigma }(x,t)\\ \mathrm{\Delta }g(x,t)\end{array}\right)=\frac{\alpha _s(t)}{2\pi }\left(\begin{array}{cc}\mathrm{\Delta }P_{qq}(x)& \mathrm{\Delta }P_{qg}(x)\\ \mathrm{\Delta }P_{gq}(x)& \mathrm{\Delta }P_{gg}(x)\end{array}\right)\otimes \left(\begin{array}{c}\mathrm{\Delta }\mathrm{\Sigma }(x,t)\\ \mathrm{\Delta }g(x,t)\end{array}\right).$$
(20)
The numerical solution of these integrodifferential equations is obtained by a so-called brute-force method. The variables $`t`$ and $`x`$ are divided into small steps, $`\delta t_i`$ and $`\delta x_i`$ respectively, and the differentiation and integration are then approximated by
$`{\displaystyle \frac{df(x)}{dx}}`$ $`={\displaystyle \frac{f(x_{m+1})-f(x_m)}{\delta x_m}},`$ (21)
$`{\displaystyle \int f(x)dx}`$ $`={\displaystyle \sum _{m=1}^{N_x}}\delta x_mf(x_m).`$ (22)
The evolution equations can be solved numerically with these replacements in the DGLAP equations. This method may seem too simple; however, it has an advantage over others not only in computing time but also in future applications. For example, the evolution equations with higher-twist effects cannot be solved by orthogonal polynomial methods, but they are solved rather easily by the brute-force method . Another popular method is to solve the equations in moment space. In that approach, the $`x`$ distributions are first transformed into the corresponding moments, the evolution is then solved numerically, and the evolved moments are finally transformed back into the $`x`$ distributions. If the distributions are simple enough to be handled analytically in the Mellin transformation, it is a useful method. However, if the distributions become complicated functions in the future, or if they are given numerically, errors may accumulate in the numerical Mellin and inverse Mellin transformations. Therefore, our method is expected to provide a potentially better numerical solution despite its simplicity.
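As a minimal illustration, the forward-difference and rectangle-rule replacements of Eqs. (21) and (22) can be sketched as follows; the function $`f(x)=x^2`$ is a toy integrand, not a parton distribution:

```python
# Brute-force discretization: forward difference for df/dx and a rectangle
# (Riemann-sum) rule for the integral, as in Eqs. (21) and (22).
# f(x) = x^2 is a toy integrand, NOT a parton distribution.
N_x = 10000
xs = [m / N_x for m in range(N_x + 1)]   # uniform grid on [0, 1]
f = [x * x for x in xs]

def derivative(m):
    """Forward-difference estimate of df/dx at x_m, Eq. (21)."""
    return (f[m + 1] - f[m]) / (xs[m + 1] - xs[m])

# Eq. (22): sum of delta_x_m * f(x_m); exact value of the integral is 1/3
integral = sum((xs[m + 1] - xs[m]) * f[m] for m in range(N_x))
```

In the actual evolution code the same replacements are applied to the $`t`$ derivative and the convolution integral of Eq. (19).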
The employed method is identical in concept to that in Ref. , but we had to improve the computing time of the program, since the evolution subroutine is called a few thousand times in searching for the optimum set of polarized distributions. There are two major modifications. The first is a change in the method of the convolution integrals, and the second is the introduction of cubic spline interpolation for obtaining the parton distributions during the evolution calculation. Previously, we calculated the convolution integral as $`\int _x^1\frac{dy}{y}\mathrm{\Delta }P(x/y)\mathrm{\Delta }q(y,t)`$. In this case, we had to calculate the splitting functions for each $`x`$ value in the numerical integration, since the integration variable and the argument of the splitting function are different. Because the NLO splitting functions are complicated, this part of the calculation consumed much time. In the present program, we evaluate the integral as $`\int _x^1\frac{dy}{y}\mathrm{\Delta }P(y)\mathrm{\Delta }q(x/y,t)`$, which is mathematically equivalent to the above integral, and thus we only need to calculate the splitting functions once, at a fixed set of $`x`$ values, before the actual evolution. For example, the nonsinglet equation, Eq. (19), becomes
$$\mathrm{\Delta }q_{_{NS}}(x_k,t_{j+1})=\mathrm{\Delta }q_{_{NS}}(x_k,t_j)+\delta t_j\frac{\alpha _s(t)}{2\pi }\sum _{m=k}^{N_x}\frac{\delta x_m}{x_m}\mathrm{\Delta }P_{q^\pm ,NS}(x_m)\mathrm{\Delta }q_{_{NS}}(\frac{x_k}{x_m},t_j).$$
(23)
If the initial distribution $`\mathrm{\Delta }q_{_{NS}}(x_k/x_m,t_0=0)`$ is provided, the next distribution $`\mathrm{\Delta }q_{_{NS}}(x_k,t_1)`$ is calculated by the above equation. Then, $`\mathrm{\Delta }q_{_{NS}}(x_k/x_m,t_1)`$ is obtained by cubic spline interpolation. Repeating this step $`N_t-1`$ times, we obtain the evolved nonsinglet distribution $`\mathrm{\Delta }q_{_{NS}}(x_k,t_{N_t})`$. With these refinements, the evolution equations are solved significantly faster, and the subroutine can be used in the parametrization study.
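The reordering of the convolution integral described above can be verified numerically. The sketch below uses toy functions (not the actual splitting functions or distributions) and a midpoint rule in place of Eq. (22); both orderings should agree up to discretization error:

```python
# Demonstration that the two orderings of the convolution integral agree:
#   int_x^1 (dy/y) P(x/y) q(y)  ==  int_x^1 (dy/y) P(y) q(x/y)
# (substitute y -> x/y). P and q below are toy functions, NOT the actual
# splitting functions or parton distributions.
def convolve_old(P, q, x, n=4000):
    """Old ordering: P evaluated at x/y, so P depends on the external x."""
    h = (1.0 - x) / n
    return sum(h / y * P(x / y) * q(y)
               for y in (x + (m + 0.5) * h for m in range(n)))

def convolve_new(P, q, x, n=4000):
    """New ordering: P evaluated at the integration variable only,
    so the kernel can be tabulated once on a fixed grid."""
    h = (1.0 - x) / n
    return sum(h / y * P(y) * q(x / y)
               for y in (x + (m + 0.5) * h for m in range(n)))

P = lambda z: z * z            # toy kernel
q = lambda z: (1.0 - z) ** 3   # toy distribution

old_val = convolve_old(P, q, 0.1)
new_val = convolve_new(P, q, 0.1)
```

With the new ordering, a tabulated kernel is reused for every $`x_k`$ and every $`t`$ step, which is the source of the speedup described in the text.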
We show the $`Q^2`$ dependence of $`g_1^p(x,Q^2)`$ and $`A_1^p(x,Q^2)`$ as a demonstration of the performance of our program. The numerical calculations are done such that the accuracy is better than about 2% in the asymmetry $`A_1^p`$. The LO and NLO (set NLO-1) parton distributions obtained in our analyses are used. The details of these distributions are discussed in Sec. V. The initial structure functions $`g_1`$ at $`Q^2=1.0`$ GeV<sup>2</sup> are evolved to those at $`Q^2=60.0`$ GeV<sup>2</sup>. Most of the used $`A_1`$ data are within this $`Q^2`$ range. The LO and NLO results are shown in Fig. 1 by the dashed and solid curves, respectively. The LO distributions tend to be shifted toward smaller $`x`$ compared with the NLO ones. There are two reasons for the differences between the LO and NLO distributions. One is the difference between the LO and NLO $`F_2`$ structure functions in fitting the same data set of $`A_1`$, and the other is the difference in the $`Q^2`$ evolution.
In Fig. 2, our $`Q^2`$ evolution curves at $`x=0.117`$ are shown together with the asymmetry $`A_1`$ data of the SMC , SLAC-E143 , and HERMES collaborations. The initial distributions are our LO and NLO parametrizations at $`Q^2=1`$ GeV<sup>2</sup>. The dashed and solid curves indicate the LO and NLO evolution results, respectively. In the large-$`Q^2`$ region, both $`Q^2`$ variations ($`\partial A_1/\partial \mathrm{ln}Q^2`$) are almost the same; however, they differ significantly at small $`Q^2`$, particularly in the region $`Q^2<`$2 GeV<sup>2</sup>. As $`Q^2`$ becomes smaller, the NLO contributions become more apparent. We find that the theoretical asymmetry has a $`Q^2`$ dependence, although it is not large at $`x=0.117`$. In extracting the $`g_1(x,Q^2)`$ structure functions, the experimental asymmetry $`A_1(x,Q^2)`$ is often assumed to be independent of $`Q^2`$ by neglecting the difference in $`Q^2`$ evolution between $`g_1(x,Q^2)`$ and $`F_1(x,Q^2)`$. This assumption has no physical basis. For a precise analysis, the $`Q^2`$ dependence of the asymmetry has to be taken into account properly, and our framework is ready for such precision studies.
## IV PARAMETRIZATION OF POLARIZED PARTON DISTRIBUTIONS
Now, we explain how the polarized parton distributions are parametrized. The unpolarized PDFs $`f_i(x,Q_0^2)`$ and polarized PDFs $`\mathrm{\Delta }f_i(x,Q_0^2)`$ are given at the initial scale $`Q_0^2`$. Here, the subscript $`i`$ represents the quark flavors and the gluon. These functions are generally assumed to take a factorized form: a power of $`x`$ inspired by Regge-like behavior at small $`x`$, a polynomial of $`x`$ at medium $`x`$, and a power of $`(1-x)`$ expected from the counting rule at large $`x`$:
$`f_i(x,Q_0^2)`$ $`=`$ $`C_ix^{\alpha _{1i}}(1-x)^{\alpha _{2i}}(1+{\displaystyle \sum _j}\alpha _{3i,j}x^{\alpha _{4i,j}}),`$ (4.24a)
$`\mathrm{\Delta }f_i(x,Q_0^2)`$ $`=`$ $`D_ix^{\beta _{1i}}(1-x)^{\beta _{2i}}(1+{\displaystyle \sum _j}\beta _{3i,j}x^{\beta _{4i,j}}),`$ (4.24b)
where $`C_i`$ and $`D_i`$ are normalization factors and $`\alpha _{1i}`$, $`\alpha _{2i}`$, $`\alpha _{3i,j}`$, $`\alpha _{4i,j}`$, $`\beta _{1i}`$, $`\beta _{2i}`$, $`\beta _{3i,j}`$, and $`\beta _{4i,j}`$ are free parameters.
From the best fit to all the experimental data of polarized DIS, including new data, we can in principle determine the parameters in Eq. (4.24b). In practice, however, some of the parameters are highly correlated with each other, and it is difficult to determine all of them independently. Therefore, it is desirable to reduce the number of parameters by applying physical conditions instead of leaving all these parameters free.
In the present analysis, to constrain the explicit forms of polarized PDFs, we require two natural conditions: (i) the positivity condition of the PDFs and (ii) the counting rule for the helicity-dependent parton distribution functions.
In order to make the positivity condition of Eq. (16) tractable in the numerical analysis, we modify the functional form of the polarized PDF as
$$\mathrm{\Delta }f_i(x,Q_0^2)=h_i(x)f_i(x,Q_0^2),$$
(25)
where
$$h_i(x)=A_ix^{\alpha _i}(1-x)^{\beta _i}(1+\gamma _ix^{\lambda _i}),$$
(26)
at the initial scale $`Q_0^2`$. Therefore, the positivity condition can be written as
$$|h_i(x)|\le 1.$$
(27)
Furthermore, taking into account the counting rule mentioned in Section II, we reduce Eq. (26) to
$$h_i(x)=A_ix^{\alpha _i}(1+\gamma _ix^{\lambda _i}),$$
(28)
and we have the following functional form of polarized PDFs at $`Q_0^2`$:
$$\mathrm{\Delta }f_i(x,Q_0^2)=A_ix^{\alpha _i}(1+\gamma _ix^{\lambda _i})f_i(x,Q_0^2).$$
(29)
Thus, we have four parameters ($`A_i`$, $`\alpha _i`$, $`\gamma _i`$ and $`\lambda _i`$) for each $`i`$.
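The ansatz of Eq. (29) and the positivity check of Eq. (27) can be sketched as follows. The unpolarized shape and the parameter values below are illustrative stand-ins, not the GRV fit or the fitted AAC parameters:

```python
# Sketch of Eq. (29): Delta f_i(x) = A_i x^alpha_i (1 + gamma_i x^lambda_i) f_i(x),
# plus a grid check of the positivity condition |h_i(x)| <= 1 of Eq. (27).
def h(x, A, alpha, gamma, lam):
    return A * x**alpha * (1.0 + gamma * x**lam)

def delta_f(x, f, A, alpha, gamma, lam):
    return h(x, A, alpha, gamma, lam) * f(x)

f = lambda x: x**-0.2 * (1.0 - x)**3                  # toy unpolarized shape, NOT the GRV fit
params = dict(A=0.4, alpha=0.5, gamma=1.0, lam=1.5)   # illustrative numbers only

grid = [10**(-4 + 0.05 * i) for i in range(81)]       # x from 1e-4 up to 1
h_max = max(abs(h(x, **params)) for x in grid)        # should not exceed 1
```

In the actual analysis this check is enforced during the fit; the technical refinements needed to keep $`|h_i(x)|\le 1`$ are described in Appendix A.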
We further reduce the number of free parameters by assuming SU(3) flavor symmetry for the sea-quark distributions at $`Q_0^2`$. As mentioned in Section II, this is simply a compromise due to the lack of experimental data. It should be noted that the sea-quark distributions are no longer SU(3) flavor symmetric at $`Q^2>Q_0^2`$, even if the distributions are symmetric at the initial $`Q_0^2`$.
When we assume this SU(3) flavor symmetric sea, the first moments of $`\mathrm{\Delta }u_v(x)`$ and $`\mathrm{\Delta }d_v(x)`$ for the LO, written as $`\eta _{u_v}`$ and $`\eta _{d_v}`$ respectively, can be expressed in terms of the axial charges of the octet baryons, $`F`$ and $`D`$, measured in hyperon and neutron $`\beta `$-decays as follows:
$`\eta _{u_v}\eta _{d_v}`$ $`=`$ $`F+D,`$ (30)
$`\eta _{u_v}+\eta _{d_v}`$ $`=`$ $`3FD.`$ (31)
Note that Eq. (31) is also used for the NLO ($`\overline{\mathrm{MS}}`$) case. Because the $`\beta `$-decay constants have recently been updated , we reevaluate $`F`$ and $`D`$ from the $`\chi ^2`$ fit to the experimental data of four different semileptonic decays, $`n\to p`$, $`\mathrm{\Lambda }\to p`$, $`\mathrm{\Xi }\to \mathrm{\Lambda }`$, and $`\mathrm{\Sigma }\to n`$, by assuming SU(3)<sub>f</sub> symmetry for the axial charges of the octet baryons. With $`\chi ^2`$/d.o.f=0.98, $`F`$ and $`D`$ are determined as
$`F=0.463\pm 0.008,`$ (32)
$`D=0.804\pm 0.008,`$ (33)
which lead to $`\eta _{u_v}=0.926\pm 0.014`$ and $`\eta _{d_v}=-0.341\pm 0.018`$. We fix these two moments at their central values, so that the two parameters $`A_{u_v}`$ and $`A_{d_v}`$ are determined by these first moments and the other parameter values. Thus, the remaining task is to determine the values of the remaining 14 parameters, $`A_{\overline{q}},A_g,\alpha _i,\gamma _i,\lambda _i(i=u_v,d_v,\overline{q},g)`$, by a $`\chi ^2`$ analysis of the polarized DIS experimental data.
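Solving Eqs. (30) and (31) for the two valence first moments is elementary; with the central values of $`F`$ and $`D`$ quoted above:

```python
# Solving Eqs. (30) and (31) for the valence first moments, using the central
# values F = 0.463 and D = 0.804 obtained from the beta-decay fit.
F, D = 0.463, 0.804
# eta_uv - eta_dv = F + D   and   eta_uv + eta_dv = 3F - D
eta_uv = ((F + D) + (3.0 * F - D)) / 2.0   # = 2F      = 0.926
eta_dv = ((3.0 * F - D) - (F + D)) / 2.0   # = F - D   = -0.341 (negative)
```

Adding and subtracting the two relations gives $`\eta _{u_v}=2F`$ and $`\eta _{d_v}=F-D`$, reproducing the central values quoted in the text.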
## V NUMERICAL ANALYSIS
### A $`\chi ^2`$ analysis
We determine the values of 14 parameters from the best fit to the $`A_1(x,Q^2)`$ data for the proton ($`p`$), neutron ($`n`$), and deuteron ($`d`$). Using the GRV parametrization for the unpolarized PDFs at the LO and NLO and the SLAC measurement of $`R(x,Q^2)`$, we construct $`A_1^{\mathrm{calc}}(x,Q^2)`$ for the $`p`$, $`n`$, and $`d`$. For the deuteron, we use $`g_1^d=\frac{1}{2}(g_1^p+g_1^n)(1-\frac{3}{2}\omega _D)`$ with the D-state probability in the deuteron $`\omega _D=0.05`$.
Then, the best parametrization is obtained by minimizing $`\chi ^2=\sum (A_1^{\mathrm{data}}(x,Q^2)-A_1^{\mathrm{calc}}(x,Q^2))^2/(\mathrm{\Delta }A_1^{\mathrm{data}}(x,Q^2))^2`$ with Minuit , where $`\mathrm{\Delta }A_1^{\mathrm{data}}`$ represents the error on the experimental data, including both systematic and statistical errors. Since some of the systematic errors are correlated, including all of them leads to an overestimation of the errors. On the other hand, if we fully exclude them, the uncertainties in the experimental data are not properly reflected in the analysis. Because of our choice to include the systematic errors, the $`\chi ^2`$ defined in our analysis is not properly normalized. The minimum $`\chi ^2`$ divided by the number of degrees of freedom achieved in the analysis is often smaller than unity. Consequently, the $`\chi ^2`$ in our analysis should be regarded only as a relative measure of the fit to the experimental data. In addition, the parameter errors are overestimated. We have confirmed that including only statistical errors in the $`\chi ^2`$ analysis does not change the results significantly, except for a 7% change of the $`\chi ^2`$, which is consistent with the change of the error size.
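The $`\chi ^2`$ above is a standard uncorrelated sum over data points; a minimal sketch with invented toy numbers (the real analysis minimizes this over the 14 parameters with Minuit):

```python
# Minimal sketch of the chi^2 of the fit. The "data", errors, and "theory"
# values below are invented numbers for illustration only.
def chi2(data, errors, model):
    return sum(((d - m) / e) ** 2 for d, m, e in zip(data, model, errors))

a1_data = [0.10, 0.18, 0.32, 0.55]   # hypothetical A1 measurements
a1_err  = [0.05, 0.05, 0.06, 0.08]   # combined statistical+systematic errors
a1_calc = [0.11, 0.17, 0.35, 0.50]   # a toy "theory" prediction

chi2_value = chi2(a1_data, a1_err, a1_calc)
```

A fit driver would recompute `a1_calc` from the parametrized distributions (after $`Q^2`$ evolution) at each Minuit iteration and return `chi2_value` to the minimizer.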
In evolving the distribution functions with $`Q^2`$, we neglect the charm-quark contributions to $`A_1(x,Q^2)`$ and take the flavor number $`N_f=3`$, because the $`Q^2`$ values of the $`A_1`$ experimental data are not large compared with the charm threshold. To be consistent with the unpolarized analysis, we use the same values as GRV: $`\mathrm{\Lambda }_{\mathrm{QCD}}^{(3)}=204\mathrm{MeV}`$ at LO and $`\mathrm{\Lambda }_{\mathrm{QCD}}^{(3)}=299\mathrm{MeV}`$ at NLO in the $`\overline{\mathrm{MS}}`$ scheme. The NLO scale parameter leads to $`\alpha _s(M_Z^2)=0.118.`$ In order to obtain a solution that satisfies the positivity condition, we make further refinements to the parametrization functions $`h_i(x)`$. The technical details are discussed in Appendix A.
The results are presented in Table II for the LO with $`\chi ^2`$/d.o.f=322.6/360 and in Table III for the NLO with $`\chi ^2`$/d.o.f=300.4/360. Because the values of $`A_i`$ are determined by the first moments of the $`\mathrm{\Delta }u_v`$ and $`\mathrm{\Delta }d_v`$ distributions, they are listed without errors. We show the LO and NLO fitting results for the asymmetry $`A_1`$ together with the experimental data in Fig. 3. The theoretical curves are calculated at $`Q^2`$=5 GeV<sup>2</sup>. The asymmetries are shown for the (a) proton, (b) neutron, and (c) deuteron. As the experimental data, the E130, E143, EMC, SMC, and HERMES proton data are shown in Fig. 3(a); the E142, E154, and HERMES neutron data in (b); and the E143, E155, and SMC deuteron data in (c). Kinematical conditions and analysis methods of these experiments are listed in Table I. We find from these figures that the obtained parameters reproduce the experimental $`A_1`$ data well in both the LO and NLO cases. However, there are slight differences between the LO and NLO curves in Fig. 3, to which three factors contribute. First, and most important, is the contribution of the polarized gluon distribution through the coefficient function. Second, the LO and NLO evolutions are different because not only the splitting functions but also the scale parameters are different. Third, the unpolarized GRV distributions differ between their LO and NLO versions.
### B Comparison of LO and NLO analyses
Comparing the value of $`\chi ^2`$/d.o.f for the LO with that for the NLO, we find a better description of the experimental data in the NLO analysis. The value of $`\chi ^2`$/d.o.f is improved by 7%. This implies that it is necessary to analyze the data in the NLO if one wants to obtain better information on the spin structure of the nucleon from the polarized DIS data.
The $`\chi ^2`$ contribution from each data set is listed in Table IV. The improvement is especially significant for the HERMES proton and E154 neutron data. The results for $`g_1`$ at the LO and NLO are shown in Fig. 4 and Fig. 5, respectively. The "experimental" $`g_1`$ data are calculated by using Eqs. (6) and (7) together with the raw data for the asymmetry $`A_1`$ and the GRV unpolarized distributions. The theoretical results are shown by the dashed, solid, and dotted curves at $`Q^2`$=1, 5, and 20 GeV<sup>2</sup>. As already shown in Fig. 1, the $`g_1`$ structure function shifts toward the smaller-$`x`$ region as $`Q^2`$ increases. It is rather difficult to discuss the agreement with the deuteron data in Figs. 4(c) and 5(c) because of the large experimental errors. However, the proton and neutron data at small $`x`$ tend to agree with the theoretical curves at $`Q^2`$=1 GeV<sup>2</sup>; this is particularly clear for the neutron $`g_1`$ in Figs. 4(b) and 5(b). Furthermore, the proton, neutron, and deuteron data at large $`x`$ agree with the LO and NLO curves at $`Q^2`$=20 GeV<sup>2</sup>. These correspondences arise because the small-$`x`$ data are typically in the small-$`Q^2`$ range ($`Q^2=1`$ to a few GeV<sup>2</sup>) and the large-$`x`$ data are in the large-$`Q^2`$ range ($`Q^2`$ larger than about 10 GeV<sup>2</sup>).
As seen in Figs. 4 and 5, the LO $`g_1^p`$ is slightly larger at small $`x`$ than the NLO $`g_1^p`$, while the LO $`g_1^n`$ is smaller than the NLO $`g_1^n`$ in the range $`0.01<x<0.2`$. The NLO fit agrees better with the data. The $`\chi ^2`$ improvement in the NLO for the HERMES and E154 data in Table IV is explained as follows by using Fig. 3. In comparing the theoretical curves with the data, we should note that the theoretical asymmetries are given at fixed $`Q^2`$ ($`Q^2`$=5 GeV<sup>2</sup>), whereas the data are at various $`Q^2`$ values. Nevertheless, as found in Fig. 3(a), the LO curve is slightly above both the NLO one and the HERMES data, which makes the $`\chi ^2`$ value larger in the LO analysis. In Fig. 3(b), it is clear that the LO curve deviates from the E154 neutron data, so that the $`\chi ^2`$ contribution from the E154 data becomes larger. It is well known that the difference between the NLO ($`\overline{\mathrm{MS}}`$ scheme) and LO originates from the polarized gluon contribution to the structure function $`g_1`$ via the Wilson coefficient. Accordingly, the result that the NLO fit is better than the LO implies that the polarized gluon makes a nonzero contribution to the nucleon spin, $`i.e.`$, $`\mathrm{\Delta }g\ne 0`$ at $`Q_0^2`$. Furthermore, we find in this analysis that the NLO fit is more sensitive to the polarized gluon distribution than the LO one. Therefore, we conclude that an NLO analysis is necessary to extract information on the polarized gluon distribution.
### C Behavior of polarized parton distribution functions
We show the behavior of the polarized parton distributions $`x\mathrm{\Delta }f_i(x,Q^2)`$ as functions of $`x`$ at $`Q^2=1`$ GeV<sup>2</sup> for the (a) LO and (b) NLO cases in Fig. 6. The first moment of $`\mathrm{\Delta }u_v(x)`$ is fixed at a positive value ($`\eta _{u_v}`$=0.926) and that of $`\mathrm{\Delta }d_v(x)`$ at a negative value ($`\eta _{d_v}=-0.341`$), so that the obtained distributions $`\mathrm{\Delta }u_v(x)`$ and $`\mathrm{\Delta }d_v(x)`$ become positive and negative, respectively. In the same way as in other $`\chi ^2`$-analysis results, the antiquark (gluon) distribution becomes negative (positive) in the small- and medium-$`x`$ regions. The gluon distribution cannot be determined well by the lepton scattering data alone. In particular, the gluon distribution enters $`g_1`$ only through the $`Q^2`$ evolution in the LO, so that $`\mathrm{\Delta }g(x)`$ cannot be uniquely determined. Even if it is neglected in the analysis ($`\mathrm{\Delta }g=0`$), the $`\chi ^2`$ difference is not significant in the LO. The NLO effects are apparent in comparing Fig. 6(a) with Fig. 6(b). In the NLO, the gluon distribution contributes to $`g_1`$ additionally through the coefficient function; therefore, it modifies the valence-quark distributions (particularly $`\mathrm{\Delta }u_v`$) and the antiquark distribution. The NLO distribution $`\mathrm{\Delta }u_v`$ becomes significantly smaller than the LO one at small $`x`$, and the NLO distribution $`\mathrm{\Delta }\overline{q}`$ becomes a more singular function as $`x\to 0`$. Because the gluon distribution is more directly involved in $`g_1`$, the determination of $`\mathrm{\Delta }g`$ is better in the NLO $`\chi ^2`$ analysis.
Recently, measurements of the polarized parton distributions of each flavor have been carried out by the SMC in semi-inclusive polarized DIS processes . Although we did not include the semi-inclusive data in our analysis, in consideration of the data precision and the analysis framework, it is still possible to compare our polarized PDFs with their analysis. In order to compare with the SMC data, the LO initial distributions are evolved to $`Q^2=10`$ GeV<sup>2</sup> by the LO evolution equations. Then, the ratios $`\mathrm{\Delta }u_v(x)/u_v(x)`$ and $`\mathrm{\Delta }d_v(x)/d_v(x)`$ are shown in Fig. 7 together with the SMC data. The theoretical ratios are roughly constant in the small-$`x`$ region ($`x<0.1`$), and $`\mathrm{\Delta }u_v(x)/u_v(x)`$ approaches $`+1`$ as $`x\to 1`$ whereas $`\mathrm{\Delta }d_v(x)/d_v(x)`$ approaches $`-1`$. We find that our LO parametrization is consistent with the data. However, it is unfortunate that our NLO parametrization cannot be compared with the data, since the SMC data are analyzed only in the LO.
### D Small-$`x`$ behavior of polarized antiquark distributions
As seen in the $`\chi ^2`$ analyses, the small-$`x`$ behavior of the parton distributions is controlled by the parameter $`\alpha `$. It is obvious from Tables II and III that the small-$`x`$ behavior of the antiquark and gluon distributions cannot be determined. For example, the obtained parameter is $`\alpha _{\overline{q}}(NLO)=0.32\pm 0.22`$, with a large error. This suggests that the small-$`x`$ part of the antiquark distribution cannot be fixed by the existing data. In order to clarify the situation, we need higher-energy facilities such as polarized HERA and eRHIC .
Because the present experimental data are not sufficient for determining the small-$`x`$ behavior, we should consider fixing the parameter $`\alpha `$ for the antiquark distribution by theoretical ideas. The gluon parameter $`\alpha _g`$ cannot be determined either; however, we leave that problem for future studies, because the lepton scattering data are not sufficient for determining the gluon distribution in any case. In the following, some predictions are made for $`\alpha _{\overline{q}}`$ by using Regge theory and perturbative QCD.
According to the Regge model, the structure function $`g_1`$ in the small-$`x`$ limit is controlled by the intercepts ($`\alpha `$) of the $`a_1(1260)`$, $`f_1(1285)`$, and $`f_1(1420)`$ trajectories:
$$g_1(x)\sim x^{-\alpha }\mathrm{as}x\to 0.$$
(34)
However, neither the $`a_1`$ intercept nor the $`f_1`$ intercepts are well known. It is usually assumed that $`\alpha _{a_1}=-0.5\sim 0`$ . Therefore, we expect $`\mathrm{\Delta }\overline{q}\sim x^{(0.0,\mathrm{\hspace{0.17em}0.5})}`$, where $`x^{(0.0,\mathrm{\hspace{0.17em}0.5})}`$ indicates that the function is in the range from $`x^{0.0}`$ to $`x^{0.5}`$. Since our parametrization is provided for the function $`h_i(x)=\mathrm{\Delta }f_i(x)/f_i(x)`$, we need the small-$`x`$ behavior of the unpolarized distribution. According to our numerical analysis, the GRV distribution has the property $`x\overline{q}\sim x^{-0.14}`$ at $`Q^2`$=1 GeV<sup>2</sup>. Taking these small-$`x`$ functions into account, the Regge prediction is
$$h_{\overline{q}}^{Regge}(x)\sim x^{(1.1,\mathrm{\hspace{0.17em}1.6})},$$
(35)
if the theory is applied at $`Q^2=1`$ GeV<sup>2</sup>. Our LO and NLO fits result in $`x^{0.59}`$ and $`x^{0.32}`$, respectively, as $`x\to 0`$. These functions look very different from Eq. (35); however, they are not inconsistent with it if the errors of Tables II and III are taken into account.
Perturbative QCD can also suggest the small-$`x`$ behavior. In the small-$`x`$ limit, the splitting functions are dominated by their most singular terms. Therefore, if we assume that the singlet-quark and gluon distributions are constants at a certain $`Q^2`$ ($`Q_1^2`$) in the limit $`x\to 0`$, their singular behavior is predicted from the evolution equations. According to this analysis, the singlet distribution behaves like
$$\mathrm{\Delta }\mathrm{\Sigma }(x,Q^2)\mathrm{exp}\left[2\sqrt{\frac{8C_A}{\beta _0}\xi (Q^2)\mathrm{ln}\frac{1}{x}}\right],$$
(36)
where $`\xi (Q^2)=\mathrm{ln}[\alpha _s(Q_1^2)/\alpha _s(Q^2)]`$, $`C_A=3`$, and $`\beta _0=11-2N_f/3`$. The problem is to find an appropriate $`Q_1^2`$ at which the singlet and gluon distributions are flat at small $`x`$. Choosing the range $`Q_1^2=0.3`$–0.5 GeV<sup>2</sup> and $`Q^2=1`$ GeV<sup>2</sup>, we fit the above equation numerically by the functional form $`x^\alpha `$ at small $`x`$. The obtained function is then in the range $`x^{(-0.12,\mathrm{\hspace{0.17em}-0.09})}`$. Because the unpolarized distribution is given by $`x\overline{q}\sim x^{-0.14}`$, perturbative QCD (with the assumption of the above $`Q_1^2`$ range) suggests
$$h_{\overline{q}}^{pQCD}(x)\sim x^{1.0}.$$
(37)
This function falls off much faster than ours at small $`x`$.
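A hedged numerical sketch of this estimate: Eq. (36) is evaluated with a one-loop coupling, and an effective power of $`x`$ is extracted by a log-log least-squares fit. The values of $`\mathrm{\Lambda }`$, $`Q_1^2`$, and the fit window below are illustrative assumptions, and the extracted exponent depends strongly on these choices, so only its sign (a rising singlet distribution as $`x\to 0`$) should be read off here:

```python
import math

# Sketch of the small-x estimate from Eq. (36). Lambda, Q1^2, and the fit
# window are illustrative assumptions; the fitted exponent depends on them.
Lam2 = 0.204 ** 2          # LO Lambda_QCD^2 in GeV^2 (value quoted in Sec. V A)
beta0 = 11.0 - 2.0 * 3 / 3 # N_f = 3 gives beta0 = 9
CA = 3.0

def alpha_s(Q2):
    """One-loop running coupling."""
    return 4.0 * math.pi / (beta0 * math.log(Q2 / Lam2))

def delta_sigma(x, Q2=1.0, Q12=0.4):
    """Double-log small-x behavior of the singlet distribution, Eq. (36)."""
    xi = math.log(alpha_s(Q12) / alpha_s(Q2))
    return math.exp(2.0 * math.sqrt(8.0 * CA / beta0 * xi * math.log(1.0 / x)))

# least-squares slope of ln(DeltaSigma) vs ln(x) over an assumed window
xs = [10 ** (-2 - 0.1 * i) for i in range(21)]   # x from 1e-2 down to 1e-4
lx = [math.log(x) for x in xs]
ly = [math.log(delta_sigma(x)) for x in xs]
n = len(xs)
mx, my = sum(lx) / n, sum(ly) / n
slope = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)           # negative slope: rising as x -> 0
```

The negative slope corresponds to the $`x^{(-0.12,\mathrm{\hspace{0.17em}-0.09})}`$ type behavior discussed in the text, although the precise value of the exponent is sensitive to the assumed $`Q_1^2`$ and fit range.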
In this way, we found that perturbative QCD and Regge theory suggest a small-$`x`$ distribution of $`h_{\overline{q}}\sim x^{(1.0,1.6)}`$. Because the small-$`x`$ behavior cannot be determined by the $`\chi ^2`$ analyses in Sec. V A, it is reasonable to fix the power of $`x`$ by these theoretical implications. In this subsection, NLO $`\chi ^2`$ analyses are reported with the parameter fixed at $`\alpha _{\overline{q}}`$=0.5, 1.0, and 1.6. The middle value is the perturbative QCD estimate, and the latter two are roughly in the Regge prediction range. The first is taken simply by considering a slightly more singular distribution than these theoretical predictions suggest.
The obtained parameters and $`\chi ^2`$ are listed in Table V. Compared with the NLO value $`\chi ^2`$=300.4 in Table IV, the $`\chi ^2`$ change is 0.1%, 1.8%, and 7.7% for $`\alpha _{\overline{q}}`$=0.5, 1.0, and 1.6, respectively. The $`\chi ^2`$ changes are so small for $`\alpha _{\overline{q}}`$=0.5 and 1.0 that both could equally be taken as good parametrizations in our studies. Using the obtained distributions with fixed $`\alpha _{\overline{q}}`$, we have the first moments and spin contents in Table VI. Because of the faster small-$`x`$ falloff for larger $`\alpha _{\overline{q}}`$, the antiquark first moment and the spin content change significantly. If the perturbative QCD and Regge prediction range ($`\alpha _{\overline{q}}`$=1.0 and 1.6) is taken, the calculated spin content is within the usually quoted values $`\mathrm{\Delta }\mathrm{\Sigma }=0.1`$–0.3. The obtained $`\chi ^2`$ value suggests that the $`\alpha _{\overline{q}}`$=1.0 solution could also be taken as one of the good fits to the data. In this sense, our results are not inconsistent with the previous analyses. However, the results indicate that a better solution could be obtained for smaller $`\alpha _{\overline{q}}`$, so that the spin content could be smaller than the usual values $`\mathrm{\Delta }\mathrm{\Sigma }=0.1`$–0.3. At least, we can state that the present data do not reach small enough $`x`$, so that the spin content cannot be determined uniquely.
We found that the $`\alpha _{\overline{q}}`$=0.5 and 1.0 results could also be considered good parametrizations of the experimental data. The $`\chi ^2`$ is so large in the $`\alpha _{\overline{q}}`$=1.6 analysis that this set cannot be considered a good fit to the data. Because the $`\alpha _{\overline{q}}`$=0.5 results are almost the same as the NLO ones in Sec. V B, it is redundant to take it as one of our parametrizations. Therefore, we propose the LO and NLO distributions (sets LO and NLO-1) of Sec. V B, together with the $`\alpha _{\overline{q}}`$=1.0 distributions (set NLO-2), as the three sets of AAC parametrizations.
Although the parametrization of $`\mathrm{\Delta }f_i/f_i`$ is necessary for imposing the positivity condition, it is rather cumbersome for practical applications, such as calculating other cross sections, in the sense that we always need both our parametrization results and the GRV unpolarized distributions at $`Q^2`$=1 GeV<sup>2</sup>. Furthermore, it is inconvenient that the analytical GRV distributions are not given at $`Q^2`$=1 GeV<sup>2</sup>. In Appendix B, we supply simple functions for the three AAC distributions, without resorting to the GRV parametrization, for practical calculations.
### E Spin contents of polarized quarks and gluons
The first moment of each polarized parton distribution and the integrated $`g_1`$ at $`Q^2`$=1, 5, and 10 GeV<sup>2</sup> are given in Table VII for the LO and NLO. At $`Q^2=1`$ GeV<sup>2</sup>, the amounts of quarks and gluons carrying the nucleon spin are
$`\mathrm{\Delta }\mathrm{\Sigma }`$ $`=`$ $`0.201,\mathrm{\Delta }g=0.831,\text{in the LO},`$ (38)
$`\mathrm{\Delta }\mathrm{\Sigma }`$ $`=`$ $`0.051,\mathrm{\Delta }g=0.532,\text{in the NLO-1},`$ (39)
$`\mathrm{\Delta }\mathrm{\Sigma }`$ $`=`$ $`0.241,\mathrm{\Delta }g=0.533,\text{in the NLO-2}.`$ (40)
These results confirm that the quarks carry a small amount of the nucleon spin. The first moments of the structure functions at $`Q^2=1`$ GeV<sup>2</sup> are
$`\mathrm{\Gamma }_1^p(Q^2)`$ $`=`$ $`0.144,\mathrm{\Gamma }_1^n(Q^2)=-0.067,\mathrm{\Gamma }_1^d(Q^2)=0.036,\text{in the LO},`$ (41)
$`\mathrm{\Gamma }_1^p(Q^2)`$ $`=`$ $`0.110,\mathrm{\Gamma }_1^n(Q^2)=-0.069,\mathrm{\Gamma }_1^d(Q^2)=0.019,\text{in the NLO-1},`$ (42)
$`\mathrm{\Gamma }_1^p(Q^2)`$ $`=`$ $`0.128,\mathrm{\Gamma }_1^n(Q^2)=-0.051,\mathrm{\Gamma }_1^d(Q^2)=0.035,\text{in the NLO-2}.`$ (43)
Because the first moment of $`\mathrm{\Delta }u_v-\mathrm{\Delta }d_v`$ is fixed by Eq. (31), the Bjorken sum rule is satisfied in both the LO and NLO at any $`Q^2`$ within the perturbative QCD range.
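This can be checked numerically at LO, where the $`\alpha _s`$ corrections to the Bjorken sum rule are absent: $`\mathrm{\Gamma }_1^p-\mathrm{\Gamma }_1^n`$ should equal $`(F+D)/6\approx 0.211`$, and the LO first moments quoted above (with $`\mathrm{\Gamma }_1^n`$ negative) indeed reproduce it within rounding:

```python
# Consistency check of the Bjorken sum rule at LO: without alpha_s corrections,
# Gamma_1^p - Gamma_1^n = (F + D)/6. All numbers are the values quoted in the text.
F, D = 0.463, 0.804
gamma1p_LO, gamma1n_LO = 0.144, -0.067   # LO first moments (Gamma_1^n < 0)
lhs = gamma1p_LO - gamma1n_LO            # 0.211
rhs = (F + D) / 6.0                      # about 0.2112
```

At NLO the right-hand side acquires the known $`(1-\alpha _s/\pi +\mathrm{})`$ correction factor, which accounts for the smaller NLO values of $`\mathrm{\Gamma }_1^p-\mathrm{\Gamma }_1^n`$.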
It should be noted that our $`\mathrm{\Delta }\mathrm{\Sigma }`$ in the NLO-1 is considerably smaller than the values published so far in many other papers. In fact, the recent SMC and Leader-Sidorov-Stamenov (LSS) parametrizations obtained $`\mathrm{\Delta }\mathrm{\Sigma }=`$0.19 and 0.28, respectively, at $`Q^2`$=1 GeV<sup>2</sup>. The difference originates mainly from the small-$`x`$ behavior of the antiquark distribution. We compare our NLO-1 distribution $`\mathrm{\Delta }\overline{q}`$, denoted AAC, with the other $`\overline{\mathrm{MS}}`$ distributions in Fig. 8. The LSS(1999) antiquark distribution is directly given in their parametrization, whereas the SMC distribution is calculated by using their singlet and nonsinglet distributions. Because the antiquark distribution is not directly given in the SMC analysis, we may call it a transformed SMC ("SMC") distribution. The transformed SMC distribution has a peculiar $`x`$ dependence at medium and large $`x`$; however, all the distributions essentially agree in the region ($`0.01<x<0.1`$) where accurate experimental data exist and the antiquark distribution plays an important role. On the other hand, it is clear that our distribution does not fall off as rapidly as the others as $`x\to 0`$. This is the reason why our NLO-1 spin content is significantly smaller.
In order to clarify the difference, we plot in Fig. 9 the spin content in the region between $`x_{min}`$ and 1 by calculating $`\mathrm{\Delta }\mathrm{\Sigma }(x_{min})=\int _{x_{min}}^1\mathrm{\Delta }\mathrm{\Sigma }(x)dx`$. Because the LSS and SMC distributions are less singular functions of $`x`$, their spin contents saturate even at $`x=10^{-4}`$, although our $`\mathrm{\Delta }\mathrm{\Sigma }`$ still decreases in this region. The difference simply reflects the fact that accurate experimental data are not available at small $`x`$. The parametrization results with fixed $`\alpha _{\overline{q}}`$ are also shown. As the antiquark distribution becomes less singular, the spin content becomes larger. As mentioned in the previous subsection, the $`\alpha _{\overline{q}}`$=1.0 result could be taken as a good fit. The spin content is 0.24 in this case, which is completely within the usual range $`\mathrm{\Delta }\mathrm{\Sigma }=0.1`$–0.3.
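The saturation behavior of the truncated moment can be illustrated with toy distributions. The shapes below are hypothetical stand-ins, chosen only to contrast a mildly behaved small-$`x`$ limit with a singular (but integrable) one; they are not the fitted AAC, SMC, or LSS distributions:

```python
# Truncated first moment int_{xmin}^1 f(x) dx for two toy small-x behaviors,
# illustrating why a more singular distribution saturates more slowly.
# The shapes are hypothetical, NOT the fitted AAC/SMC/LSS distributions.
def truncated_moment(f, xmin, n=20000):
    """Midpoint-rule estimate of the moment truncated at xmin."""
    h = (1.0 - xmin) / n
    return sum(h * f(xmin + (m + 0.5) * h) for m in range(n))

flat     = lambda x: x ** 0.2 * (1 - x) ** 3    # mildly behaved at small x
singular = lambda x: x ** -0.6 * (1 - x) ** 3   # singular but integrable at small x

def relative_change(f):
    """Fraction of the xmin=1e-4 moment accumulated between x=1e-4 and 1e-2."""
    m2, m4 = truncated_moment(f, 1e-2), truncated_moment(f, 1e-4)
    return (m4 - m2) / m4
```

The singular shape gains a sizable fraction of its moment below $`x=10^{-2}`$, whereas the mild shape has already saturated there; this is the mechanism behind the slow convergence of our $`\mathrm{\Delta }\mathrm{\Sigma }(x_{min})`$ in Fig. 9.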
The small-$`x`$ issue has been discussed in other publications. The idea itself stems from the publication of Close and Roberts , and it is also noted in the numerical analyses of Altarelli, Ball, Forte, and Ridolfi (ABFR) . In the ABFR parametrization, various fits are tried by assuming the small-$`x`$ behavior, and they obtain the first moment of $`a_0(x)`$ as $`a_0=0.02`$–0.18. Therefore, our NLO-1 analysis is consistent with their studies, although the spin content seems to be smaller than the usual one (0.1–0.3). In this way, although our NLO-1 analysis result may seem very different from those of many other publications, it is essentially consistent with them. It indicates that small-$`x`$ ($`x\sim 10^{-5}`$) data are absolutely necessary for the determination of the spin content.
### F Comparison with recent parametrizations
We have already partially discussed the comparison with recent parametrization results in the previous subsection. However, a more detailed discussion of the differences between these analyses is necessary in order to clarify their physical basis.
First, we discuss the differences between our parametrization and the LSS one. Before the detailed comparison, we implemented their $`\chi ^2`$-fitting procedure in our program and confirmed their numerical results. This indicates that both fitting programs are consistent, although the evolution methods and other subroutines are completely different.
Our parametrization functions are similar to theirs. In fact, both methods parametrize the ratio of the polarized distribution to the unpolarized one ($`\mathrm{\Delta }f_i(x)/f_i(x)=h_i(x)`$, $`i=u_v,d_v,\overline{q},g`$). The LSS parametrization employed a very simple function, $`h_i(x)=A_ix^{\alpha _i}`$, and we used a more complicated one, $`h_i(x)=A_ix^{\alpha _i}(1+\gamma _ix^{\lambda _i})`$. This may seem insignificant; however, the extra parameters give the functions wide room to readjust in the $`\chi ^2`$ analysis. According to our studies, the minimum $`\chi ^2`$ cannot reach anywhere close to our minimum point if the LSS function is used in our fit. Therefore, although it is a slight modification, the outcome differs significantly. Furthermore, the LSS gluon distribution fails to satisfy the positivity condition at large $`x`$, although this does not matter in practice at this stage.
Another important difference is how the spin asymmetry $`A_1`$ is calculated from the unpolarized distributions. There are two issues in this calculation procedure. One is that LSS kept the factor $`1+4M_N^2x^2/Q^2`$ in handling the SLAC data, whereas we neglected it. Another is that LSS calculated the structure function $`F_1`$ directly from the unpolarized distributions, whereas we calculated it by Eq. (7). As for the first point, we have checked that inclusion of the factor has no significant impact on the results. This is partly because the factor $`1+4M_N^2x^2/Q^2`$ modifies the asymmetry $`A_1`$ at large $`x`$, but the $`Q^2`$ values are generally large in such an $`x`$ region. The second point is more serious. Their method is correct in the light of perturbative QCD. However, the $`F_2`$ structure functions, rather than $`F_1`$, are generally used in obtaining the unpolarized PDFs. If there were no higher-twist contributions, it would not matter whether $`F_1`$ is calculated directly or Eq. (7) is used. However, it is well known that the higher-twist effects are rather large, as is obvious from the function $`R(x,Q^2)`$ in the SLAC-1990 analysis. It modifies the asymmetries by as much as 35%, and the modification is conspicuous in the whole $`x`$ region. In the LSS analysis, perturbative QCD contributions to the function $`R`$ are included due to the coefficient-function difference between $`F_1`$ and $`F_2`$, but they are small in the small- and medium-$`x`$ regions. This difference in handling $`F_1`$ creates the discrepancy between the LSS and our polarized antiquark distributions, and it is especially important for determining their small-$`x`$ behavior.
Next, we discuss the comparison with the SMC parametrization. Our $`\chi ^2`$ analysis differs from theirs in the parametrization functions. We parametrized the ratios $`\mathrm{\Delta }f_i/f_i`$ ($`i=u_v`$, $`d_v`$, $`\overline{q}`$, $`g`$). As mentioned in Sec. II, the analysis by the SMC in Ref. utilized the separation of the polarized quark distributions into $`\mathrm{\Delta }\mathrm{\Sigma }(x)`$, $`\mathrm{\Delta }q_{\mathrm{NS}}^p(x)`$, and $`\mathrm{\Delta }q_{\mathrm{NS}}^n(x)`$, which can, in principle, be transformed into $`\mathrm{\Delta }u^+(x)`$, $`\mathrm{\Delta }d^+(x)`$, and $`\mathrm{\Delta }s^+(x)`$.
When we perform this transformation of the SMC results to compare with the polarized sea-quark distributions from our analysis and LSS, we find that the polarized strange-quark distribution $`\mathrm{\Delta }s(x)`$ from the transformed SMC oscillates, as shown in Fig. 8. However, this simply implies that the conventionally used functional form has a limitation and that distribution functions obtained from different separations can be quite different. The uncertainty of the sea-quark distribution was also pointed out in the analysis by Gordon, Goshtasbpour, and Ramsey. We should re-emphasize that direct measurement of the sea-quark polarization is very important. At the highest energy of polarized $`pp`$ collisions at RHIC, weak bosons are copiously produced, and the parity-violating asymmetry $`A_L`$ for their production is very useful in elucidating the spin-flavor structure of the nucleon. With such direct measurements, the uncertainty in the polarized sea-quark distribution will be much reduced.
A difference from both the SMC and LSS analyses is that we use a large set of data tables for $`A_1`$ rather than the $`Q^2`$-averaged one. Although the present data may not have the accuracy to discuss the $`Q^2`$ dependence, it is desirable to use the full tables if one wishes to obtain better information on the gluon distribution. Furthermore, an advantage of our results is that the positivity condition is strictly satisfied, so that our parametrizations do not pose any serious problem in practical applications.
## VI CONCLUSIONS
We have analyzed the experimental data for the spin asymmetry $`A_1`$ of the proton, neutron, and deuteron by using a simple parametrization for the ratios of polarized parton distributions to the corresponding unpolarized ones. We discussed in detail the physical meaning behind our parametrization and also our $`Q^2`$ evolution method. As a consequence, we found that the asymmetry $`A_1`$ could have significant $`Q^2`$ dependence in the small-$`Q^2`$ region ($`Q^2<2`$ GeV<sup>2</sup>), so that the frequently used assumption of $`Q^2`$ independence in $`A_1`$ cannot be justified in a precise analysis. From the LO and NLO $`\chi ^2`$ analyses, we obtained good fits to the experimental data. Because the NLO $`\chi ^2`$ is significantly smaller than that of the LO, the NLO analysis should necessarily be used in parametrization studies. An advantage of our analysis is that the positivity condition is satisfied in the whole $`x`$ region. An important consequence of our analyses is that the small-$`x`$ behavior of the sea-quark distributions cannot be uniquely determined by the present data, so that the usual spin content $`\mathrm{\Delta }\mathrm{\Sigma }=0.1`$–$`0.3`$ could be significantly modified depending on future experimental data at small $`x`$ ($`10^{-5}`$). Our LO and NLO analyses suggested $`\mathrm{\Delta }\mathrm{\Sigma }`$=0.20 and 0.05, respectively. However, if we take the theoretical suggestions by "perturbative QCD" and Regge theory for the polarized antiquark distribution at small $`x`$, the spin content becomes $`\mathrm{\Delta }\mathrm{\Sigma }=0.24`$–$`0.28`$ in the NLO. The obtained gluon distributions are positive in both LO and NLO, but it is particularly difficult to determine $`\mathrm{\Delta }g`$ in the LO. From these analyses, we have proposed one LO set and two NLO sets of parametrizations as the AAC polarized parton distribution functions.
## Acknowledgments
The authors would like to thank A. Brüll, M. Grosse-Perdekamp, V. Hughes, R. L. Jaffe, K. Kobayakawa, and D. B. Stamenov for useful discussions or email communications. This work has been done partly within the framework of the RIKEN RHIC-Spin project, and it was partly supported by the Japan Society for the Promotion of Science and by the Japanese Ministry of Education, Science, and Culture.
## A Treatment of positivity condition <br>in our $`\chi ^2`$ analysis
Additional modification of the function $`h_i(x)`$ is desirable in the actual $`\chi ^2`$ fitting. Although Eq. (28) is a useful functional form, it is not very convenient for the $`\chi ^2`$ analysis in the sense that the positivity condition is rather difficult to satisfy. In fact, running our $`\chi ^2`$ program, we obtain a solution which does not necessarily meet the positivity requirement. In order to take this condition into account, the function is slightly modified, although it is equivalent in principle:
$`\begin{array}{cc}\hfill h_i(x)& =\xi _ix^{\nu _i}+\kappa _ix^{\mu _i}\hfill \\ & =\delta _ix^{\nu _i}-\kappa _i(x^{\nu _i}-x^{\mu _i}),\quad i=u_v,d_v,\overline{q},g\hfill \end{array}`$ (A1)
where $`\delta _i=\xi _i+\kappa _i`$. The following simple example shows why this function is more suitable at $`x=1`$. The original function there is given by two parameters, $`h_i(x=1)=A_i(1+\gamma _i)`$, whereas the modified one is given by only one parameter, $`h_i(x=1)=\delta _i`$. Therefore, it is easier to restrict the function $`h_i(x)`$ within the positivity-condition range. There is another advantage in that the parameters are rather independent of each other. For example, the parameter $`\lambda _i`$ is strongly correlated with $`\alpha _i`$ ($`\lambda _i\alpha _i`$) if we would like to avoid singular behavior as $`x\rightarrow 0`$. In this way, the functional form of Eq. (A1) is used in the actual $`\chi ^2`$ fitting, although it is mathematically equivalent to Eq. (28).
Although we could perform the $`\chi ^2`$ analysis with the supplied information, it is not straightforward to obtain a solution which satisfies the positivity condition. We describe the details of the analysis procedure. First, as already mentioned, the first moments of $`\mathrm{\Delta }u_v`$ and $`\mathrm{\Delta }d_v`$ are fixed by the $`F`$ and $`D`$ values, and they are given by
$$\eta _i=\int _0^1dx\left[\delta _ix^{\nu _i}-\kappa _i(x^{\nu _i}-x^{\mu _i})\right]f_i(x)\quad (i=u_v,d_v).$$
(A2)
Then, the parameters $`\kappa _{u_v}`$ and $`\kappa _{d_v}`$ are determined by
$$\kappa _i=\frac{\delta _i\int dx\,x^{\nu _i}f_i(x)-\eta _i}{\int dx\,(x^{\nu _i}-x^{\mu _i})f_i(x)}.$$
(A3)
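As a concrete illustration of Eqs. (A2)–(A3), the sketch below fixes $`\kappa `$ from a prescribed first moment and then verifies the constraint by integrating Eq. (A2) back. The shape of $`f(x)`$ and all parameter values are invented for the example; they are not the fitted AAC numbers.

```python
# Sketch of the moment constraint, Eqs. (A2)-(A3): kappa is fixed so that the
# first moment of h(x)*f(x) equals a prescribed eta.  The distribution f(x)
# and every numerical value below are illustrative assumptions only.

def f(x):                       # toy "unpolarized valence" shape
    return 6.0 * x * (1.0 - x)

def simpson(g, a, b, n=2000):   # composite Simpson rule, n even
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3.0

delta, nu, mu, eta = 0.6, 0.8, 0.3, 0.25

I_nu = simpson(lambda x: x ** nu * f(x), 0.0, 1.0)
I_diff = simpson(lambda x: (x ** nu - x ** mu) * f(x), 0.0, 1.0)
kappa = (delta * I_nu - eta) / I_diff                       # Eq. (A3)

# plug kappa back into Eq. (A2) as a consistency check
eta_check = simpson(
    lambda x: (delta * x ** nu - kappa * (x ** nu - x ** mu)) * f(x), 0.0, 1.0
)
```

By construction the recomputed moment reproduces the target $`\eta `$ to quadrature accuracy.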
As we explained in Sec. V D, theory suggests that the functions $`h_i`$ should not be singular functions of $`x`$ in the small-$`x`$ region. Therefore, we try to find a solution in the parameter range $`\mu _i,\nu _i\geq 0`$.
Next, we discuss the positivity condition. If the signs of the parameters $`\xi _i`$ and $`\kappa _i`$ are the same, the function $`h_i(x)`$ is a monotonically increasing or decreasing function, so that $`h_i(x=1)=\delta _i`$ should be within the range $`-1\leq \delta _i\leq +1`$ due to the positivity requirement. On the other hand, if the signs are different, the function could have an extreme value at a certain $`x`$ ($`X`$). If $`X`$ is larger than one, the function is monotonic in the range $`0\leq x\leq 1`$, and the same condition $`-1\leq \delta _i\leq +1`$ applies. However, if $`X`$ is smaller than one, the situation is slightly more complicated. Because the first and second terms have the same functional form in the first equation of Eq. (A1), we can have either $`\mu _i<\nu _i`$ or $`\mu _i>\nu _i`$. Therefore, the condition $`\mu _i<\nu _i`$ is taken (practically only for $`\mathrm{\Delta }\overline{q}`$ and $`\mathrm{\Delta }g`$) in the following analysis without losing generality. From Eq. (A1), we find that the extreme value is located at
$$X=\left(-\frac{\kappa _i\zeta _i}{\xi _i}\right)^{\frac{1}{\nu _i-\mu _i}},$$
(A4)
where $`\zeta _i=\mu _i/\nu _i`$ ($`0<\zeta _i<1`$). It is in the range $`0<X<1`$ if the condition $`0<-\kappa _i\zeta _i/\xi _i<1`$, namely
$$\begin{array}{cc}\hfill \frac{\delta _i}{1-\zeta _i}<\kappa _i& \text{for }0<\kappa _i,\hfill \\ \hfill \frac{\delta _i}{1-\zeta _i}>\kappa _i& \text{for }\kappa _i<0,\hfill \end{array}$$
(A5)
is satisfied. The extreme value is then obtained as
$$h_i(X)=\left(\frac{-\kappa _i\zeta _i}{\delta _i-\kappa _i}\right)^{\frac{\zeta _i}{1-\zeta _i}}\kappa _i(1-\zeta _i).$$
(A6)
Using the positivity condition $`|h_i(X)|\leq 1`$, we obtain the following constraint on the parameters:
$$g^+(\kappa _i)\equiv \kappa _i-\delta _i-\kappa _i\zeta _i\left[\kappa _i(1-\zeta _i)\right]^{\frac{1-\zeta _i}{\zeta _i}}\geq 0,$$
(A7)
in the case $`\kappa _i>0`$ ($`0<h_i(X)\leq 1`$). Because the function $`g^+(\kappa _i)`$ has a negative curvature, with its maximum at the extreme point $`\kappa _i=1/(1-\zeta _i)`$, we try to find a point $`\kappa _i^{*}`$ which satisfies $`g^+(\kappa _i^{*})=0`$. There is only one solution for negative $`\delta _i`$ and two solutions for positive $`\delta _i`$. In either case, we seek the solution $`\kappa _i^{*}`$ which is larger than the extreme point by Newton's method. Then, the parameter $`\kappa _i`$ is redefined as $`\kappa _i=\sigma _i\kappa _i^{*}`$. The parameters $`\sigma _i`$ are used in the $`\chi ^2`$ analysis for the antiquark and gluon distributions within the range $`0\leq \sigma _i\leq 1`$, so that the actual functional form is
$$h_i(x)=\delta _ix^{\alpha _i}-\sigma _i\kappa _i^{*}(x^{\alpha _i}-x^{\alpha _i\zeta _i})\quad \mathrm{for}\quad i=\overline{q},g.$$
(A8)
On the other hand, we find
$$g^{-}(\kappa _i)\equiv \kappa _i-\delta _i-\kappa _i\zeta _i\left[-\kappa _i(1-\zeta _i)\right]^{\frac{1-\zeta _i}{\zeta _i}}\leq 0,$$
(A9)
in the case $`\kappa _i<0`$ ($`-1\leq h_i(X)<0`$). A similar analysis is done for the function $`g^{-}(\kappa _i)`$ in order to satisfy the positivity condition. With these preparations, we can perform the $`\chi ^2`$ analysis.
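The Newton root search for the positivity boundary described above can be sketched as follows. The parameter values ($`\zeta =0.5`$, $`\delta =0.5`$) are chosen only because they make $`g^+`$ exactly quadratic, so the iterate can be checked against a closed form; this is an illustration, not the code used in the fit.

```python
# Newton's-method sketch for the positivity boundary of Eq. (A7):
#   g+(kappa) = kappa - delta - kappa*zeta*(kappa*(1-zeta))**((1-zeta)/zeta),
# seeking the root to the right of the extremum at kappa = 1/(1-zeta).
# Illustrative parameters only.

def g_plus(kappa, delta, zeta):
    return kappa - delta - kappa * zeta * (kappa * (1.0 - zeta)) ** ((1.0 - zeta) / zeta)

def g_plus_prime(kappa, zeta):
    # derivative of g+ with respect to kappa
    return 1.0 - (kappa * (1.0 - zeta)) ** ((1.0 - zeta) / zeta)

def kappa_star(delta, zeta, tol=1e-12, max_iter=100):
    k = 2.0 / (1.0 - zeta)            # start to the right of the extremum
    for _ in range(max_iter):
        step = g_plus(k, delta, zeta) / g_plus_prime(k, zeta)
        k -= step
        if abs(step) < tol:
            break
    return k

# For zeta = 1/2, g+ = kappa - delta - kappa**2/4, so kappa* = 2*(1 + sqrt(1 - delta)).
k_star = kappa_star(delta=0.5, zeta=0.5)
```

Starting to the right of the extremum, the iterates decrease monotonically to the larger of the two roots, consistent with the selection rule stated in the text.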
## B Practical polarized parton distributions
Our polarized parton distributions are given as the parametrized functions $`h_i(x)`$ multiplied by the GRV unpolarized distributions. For practical applications, we supply the following three sets of simple functions, which reproduce the $`\chi ^2`$ analysis results in Sec. V, as the AAC distributions at $`Q^2`$=1 GeV<sup>2</sup>:
$`\mathrm{Set}:\mathrm{AAC}\text{-}\mathrm{LO}`$ (B1)
$`x\mathrm{\Delta }u_v(x)`$ $`=0.4949x^{0.456}(1-x)^{2.84}(1+9.60x^{1.23}),`$ (B2)
$`x\mathrm{\Delta }d_v(x)`$ $`=-0.2040x^{0.456}(1-x)^{3.77}(1+14.6x^{1.36}),`$ (B3)
$`x\mathrm{\Delta }\overline{q}(x)`$ $`=-0.1146x^{0.536}(1-x)^{10.5}(1+39.4x^{1.93}),`$ (B4)
$`x\mathrm{\Delta }g(x)`$ $`=2.738x^{0.908}(1-x)^{5.61}(1+12.3x^{1.60}),`$ (B5)
$`\mathrm{Set}:\mathrm{AAC}\text{-}\mathrm{NLO}\text{-}1`$ (B6)
$`x\mathrm{\Delta }u_v(x)`$ $`=0.4029x^{0.478}(1-x)^{3.18}(1+15.1x^{1.07}),`$ (B7)
$`x\mathrm{\Delta }d_v(x)`$ $`=-0.2221x^{0.568}(1-x)^{3.92}(1+9.46x^{0.813}),`$ (B8)
$`x\mathrm{\Delta }\overline{q}(x)`$ $`=-0.03249x^{0.230}(1-x)^{7.77}(1+3.65x^{0.883}),`$ (B9)
$`x\mathrm{\Delta }g(x)`$ $`=8.844x^{1.77}(1-x)^{6.21}(1+13.6x^{1.51}),`$ (B10)
$`\mathrm{Set}:\mathrm{AAC}\text{-}\mathrm{NLO}\text{-}2`$ (B11)
$`x\mathrm{\Delta }u_v(x)`$ $`=0.4353x^{0.465}(1-x)^{2.94}(1+8.98x^{0.938}),`$ (B12)
$`x\mathrm{\Delta }d_v(x)`$ $`=-0.1850x^{0.471}(1-x)^{3.89}(1+14.0x^{1.11}),`$ (B13)
$`x\mathrm{\Delta }\overline{q}(x)`$ $`=-0.2452x^{0.752}(1-x)^{8.13},`$ (B14)
$`x\mathrm{\Delta }g(x)`$ $`=8.895x^{1.77}(1-x)^{6.22}(1+13.6x^{1.51}).`$ (B15)
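For readers who want to evaluate the distributions, the AAC-LO set of Eqs. (B2)–(B5) transcribes directly into code. This is a plain transcription for convenience, not an official release; the overall signs (positive $`\mathrm{\Delta }u_v`$ and $`\mathrm{\Delta }g`$, negative $`\mathrm{\Delta }d_v`$ and $`\mathrm{\Delta }\overline{q}`$) follow the conventions used above.

```python
# Plain transcription of the AAC-LO set, Eqs. (B2)-(B5), at Q^2 = 1 GeV^2.
# Each function returns x*Delta f(x); coefficients are copied from the text.

def x_delta_uv(x):
    return 0.4949 * x ** 0.456 * (1 - x) ** 2.84 * (1 + 9.60 * x ** 1.23)

def x_delta_dv(x):
    return -0.2040 * x ** 0.456 * (1 - x) ** 3.77 * (1 + 14.6 * x ** 1.36)

def x_delta_qbar(x):
    return -0.1146 * x ** 0.536 * (1 - x) ** 10.5 * (1 + 39.4 * x ** 1.93)

def x_delta_g(x):
    return 2.738 * x ** 0.908 * (1 - x) ** 5.61 * (1 + 12.3 * x ** 1.60)
```

All four vanish at $`x=1`$ and carry the expected signs at moderate $`x`$.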
no-problem/0001/cond-mat0001371.html | ar5iv | text | # Pressure Induced Quantum Critical Point and Non-Fermi-Liquid Behavior in BaVS3
## Abstract
The phase diagram of BaVS<sub>3</sub> is studied under pressure using resistivity measurements. The temperature of the metal to nonmagnetic Mott insulator transition decreases under pressure, and vanishes at the quantum critical point $`p_{\mathrm{cr}}=20`$kbar. We find two kinds of anomalous conducting states. The high-pressure metallic phase is a non-Fermi liquid described by $`\mathrm{\Delta }\rho \propto T^n`$ where $`n=`$1.2–1.3 at 1K$`<T<`$60K. At $`p<p_{\mathrm{cr}}`$, the transition is preceded by a wide precursor region with critically increasing resistivity, which we ascribe to the opening of a soft Coulomb gap.
Understanding the Mott transition, and clarifying the nature of the phases on either side of the transition, is a matter of great importance. Though metal–insulator transitions are often accompanied by an ordering transition and/or influenced by disorder, one may speak about a "pure" Mott transition, which is a local correlation effect in an ideal lattice fermion system and takes place without breaking any global symmetry. Many aspects of this problem can be studied through the multifaceted behavior of BaVS<sub>3</sub>.
The metal–insulator transition of the nearly isotropic 3D compound BaVS<sub>3</sub> offers a realization of the pure Mott transition in nature. Under atmospheric pressure BaVS<sub>3</sub> has three transitions: the hexagonal-to-orthorhombic transition at $`T_S=240`$K, which has only a slight effect on the electrical properties; the metal–insulator transition at $`T_{\mathrm{MI}}=69`$K, which does not seem to break any of the symmetries of the metallic phase; and the ordering transition at $`T_X=30`$K. In spite of decades of effort, the character of the phases and the driving force of the transitions at $`T_{\mathrm{MI}}`$ and $`T_X`$ remain mysterious.
Here we report the results of single-crystal resistivity measurements under hydrostatic pressure in the range 1 bar $`\leq p<`$ 25 kbar. These pressures encompass the entire insulating phase and part of a high-pressure low-$`T`$ conducting phase. We report the first observation of the quantum critical point in BaVS<sub>3</sub>, and we characterize the strange metallic phase lying beyond the critical pressure $`p_{\mathrm{cr}}`$. On the metallic side of the phase boundary, we identify two regimes with anomalous properties: (i) a broad region at $`p<p_{\mathrm{cr}}`$ in which the resistivity increases strongly with decreasing temperature, and (ii) a high-pressure non-Fermi-liquid state.
Single crystals of BaVS<sub>3</sub> were grown by the tellurium flux method. The crystals, obtained from the flux by sublimation, have typical dimensions of $`3\times 0.5\times 0.5`$ mm<sup>3</sup>. The resistivity was measured in a four-probe arrangement. The current was kept low enough to avoid self-heating of the sample. For the high-pressure measurements the crystal was inserted into a self-clamping cell with kerosene as a pressure medium. The pressure was monitored in situ by an InSb sensor. During cooldown of the cell there was a slight pressure loss, but its influence on the temperature dependence of the resistivity was negligible. Above about 15 kbar the pressure was stable within 0.1 kbar in the whole temperature range.
Figure 1 shows the temperature dependence of the resistivity for various pressures. As expected from earlier low-pressure data, $`T_{\mathrm{MI}}`$ decreases smoothly with increasing pressure. The linear plot highlights the contrasting behavior of $`\rho (T)`$ below and above the critical pressure, but does not include the regime of higher resistivities. Part of this is shown in the logarithmic plot of Fig. 2; one can see that the overall resistivity change at the transition remains roughly the same, even though $`T_{\mathrm{MI}}`$ is suppressed. The pressure dependence of the metal–insulator transition temperature was determined from the spikes of the logarithmic derivative, $`d(\mathrm{log}\rho )/d(1/T)`$, as shown for selected pressures in the lower panel of Fig. 2. The narrowness of the spikes demonstrates that the transition remains sharp under pressure. For $`p=19.8`$kbar we still found a metal–insulator transition at $`T_{\mathrm{MI}}\approx 5.6`$K, but for 21.4kbar the resistivity keeps decreasing at least down to 1K. We estimate that $`p_{\mathrm{cr}}\approx 20`$kbar. The phase boundary is shown in Fig. 3. Our resistivity measurements allow the division of the conducting phase into further regions of markedly different nature; the discussion of these follows.
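The derivative criterion can be illustrated on synthetic data: build a resistivity curve with a jump and an activated regime below a known $`T_{\mathrm{MI}}`$, and locate the spike of $`d(\mathrm{log}\rho )/d(1/T)`$. All numbers below are made up for the illustration; they are not the measured data.

```python
# Sketch of the T_MI criterion: locate the spike of d(log rho)/d(1/T) on a
# synthetic resistivity curve (activated below T_MI, metallic above, with a
# jump at the transition).  Illustrative numbers only.
import numpy as np

T_MI, gap = 40.0, 300.0                          # kelvin, illustrative
T = np.linspace(5.0, 120.0, 2000)
rho = np.where(T < T_MI, 10.0 * np.exp(gap * (1.0 / T - 1.0 / T_MI)), 1.0) * 1e-3

deriv = np.gradient(np.log(rho), 1.0 / T)        # d(log rho)/d(1/T)
T_est = T[np.argmax(deriv)]                      # the spike marks the transition
```

The estimated transition temperature lands on the grid point straddling the jump, mimicking how the spike position tracks $`T_{\mathrm{MI}}`$ in the measured curves.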
For $`p<p_{\mathrm{cr}}`$ the resistivity in the metallic phase has a marked minimum at $`T_{\mathrm{min}}(p)`$ preceding the metal–insulator transition. Finding $`d\rho /dT<0`$ in a metal is anomalous, and it is tempting to regard the interval $`T_{\mathrm{MI}}<T<T_{\mathrm{min}}(p)`$ as an extended precursor regime to the insulating phase. As shown in Fig. 3 by the dashed line, $`T_{\mathrm{min}}`$ drops to zero simultaneously with $`T_{\mathrm{MI}}`$: the insulator and its precursor vanish together. We believe that the resistivity minimum is a collective effect; if it were due to impurities, it would have no reason to disappear beyond $`p_{\mathrm{cr}}`$.
BaVS<sub>3</sub> is essentially an isotropic 3-dimensional system; thus the appearance of a wide precursor regime within the metallic phase is not a regular feature. There is, however, an interesting subclass of Mott systems to which it is common: e.g., similar behavior is seen above the Verwey transition in magnetite. The $`T_{\mathrm{min}}(p)`$ line does not have the significance of a phase boundary; it merely marks the temperature where fluctuations towards a gapped state become so strong that they determine the sign of $`d\rho /dT`$. The Hall results on polycrystalline BaVS<sub>3</sub> imply that the number of carriers changes in this temperature range, and we believe that the resistivity enhancement arises from the loss of charge carriers. A phenomenon of this nature is observed in Fe<sub>3</sub>O<sub>4</sub>, where increasing charge short-range order results in a diminishing effective number of carriers, and a resistivity minimum.
It is remarkable that the apparent opening of a soft charge gap is not accompanied by the opening of a spin gap; the magnetic susceptibility does not show any noticeable anomaly at $`T=T_{\mathrm{min}}`$. This suggests that the phenomenon which sets in at $`T_{\mathrm{min}}`$ is quite distinct from the opening of a real gap, which happens at $`T_{\mathrm{MI}}`$. $`T_{\mathrm{min}}(p)`$ can rather be associated with the onset of charge short-range order, and the appearance of a soft charge gap which has no effect on the magnetic properties. We note that a somewhat similar scenario, but with particular emphasis on possible 1D aspects, has been considered previously. However, the essentially isotropic conductivity rules out 1D interpretations.
The resistivity data suggest that the insulating state is approached through a regime of critically increasing resistivity. There are precedents showing that valuable insight into the nature of phase transitions in strongly correlated systems can be gained by trying to identify critical behavior in transport data. The critical behavior can be demonstrated by plotting the resistivity, $`\rho (t)`$, as a function of the reduced temperature on logarithmic scales ($`t=(T-T_{\mathrm{MI}})/T_{\mathrm{MI}}`$). The ambient pressure result is shown in Fig. 4: the power law $`\rho \propto t^{-0.4}`$ gives a good approximation over more than 30 K above $`T_{\mathrm{MI}}`$, over almost 2 decades in the reduced temperature $`t`$.
Phenomenologically, low-pressure BaVS<sub>3</sub> can be related to other (essentially 3D) systems which share at least some of its relevant features: the existence of the intermediate disordered insulating phase, the resistivity precursor, and the lack of a discernible Fermi edge in XPS spectra. In addition to magnetite we mention Ca<sub>2</sub>RuO<sub>4</sub> and Ti<sub>4</sub>O<sub>7</sub>. The detailed behavior of each of these systems is quite different from that of BaVS<sub>3</sub>, but we believe that there is also a common feature: the soft Coulomb gap due to short-range charge fluctuations.
Next we discuss the high-pressure metallic phase. Figure 5a reveals that the temperature dependence of the resistivity is characteristic of a bad metal. The high temperature behavior is sub-linear, and though $`\rho (T=300\mathrm{K})`$ corresponds to a mean free path $`l\approx `$ 5–8 Å, which is of the order of the lattice constant, $`\rho `$ continues to grow without any sign of saturation. It has been shown that strong electron–phonon scattering could account for such behavior, but we believe that in our case the electron–electron scattering dominates. This assumption is supported by the specific heat data: the electronic (primarily orbital) entropy keeps increasing even beyond 300K. The magnitude and the unusual shape of $`\rho (T)`$ indicate a new scattering process, which is to be associated with orbital fluctuations.
The low temperature region of the pressure-induced metallic phase is particularly interesting. For $`T<60`$K the resistivity does not follow the characteristic Fermi liquid behavior $`\mathrm{\Delta }\rho =\rho (T)-\rho _0\propto T^2`$ ($`\rho _0`$ is the residual resistivity). Furthermore, the log–log plot of $`\mathrm{\Delta }\rho (T)`$ is approximately linear in an extended range, allowing us to fit the resistivity with the customary non-Fermi-liquid "law" $`\mathrm{\Delta }\rho \propto T^n`$, where in general $`1\leq n<2`$. Figure 5b shows that $`\mathrm{\Delta }\rho \propto T^{1.25}`$ gives an excellent description for BaVS<sub>3</sub> for 1K$`<T<`$50K. A somewhat larger temperature range with $`n\approx 1.2`$ is found at $`p=21.4`$kbar. The extent of the non-Fermi-liquid regime is indicated by the columns in Fig. 3.
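The log–log fitting procedure can be sketched on synthetic data as follows. The numbers are made up, and $`\rho _0`$ is assumed to be known here, whereas in practice it is fitted as well.

```python
# Sketch: recover the non-Fermi-liquid exponent n in Delta-rho = A*T**n from a
# straight-line fit of log(Delta-rho) vs log(T).  Synthetic, illustrative data.
import numpy as np

rng = np.random.default_rng(0)
T = np.linspace(1.0, 50.0, 200)
rho0, A, n_true = 20.0, 0.8, 1.25          # residual resistivity, amplitude, exponent
rho = (rho0 + A * T ** n_true) * (1.0 + 1e-3 * rng.standard_normal(T.size))

delta_rho = rho - rho0                     # rho0 assumed known in this sketch
n_fit, log_A = np.polyfit(np.log(T), np.log(delta_rho), 1)
```

The slope of the straight line on the log–log plot is the exponent $`n`$, which the fit recovers to within a few percent for this noise level.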
The above low temperature behavior is similar to that of nearly antiferromagnetic $`f`$-electron systems such as CePd<sub>2</sub>Si<sub>2</sub>. The interpretation usually invokes nearness to a quantum critical point, or the existence of rare regions. In these cases the non-Fermi-liquid region is placed into a phase diagram where the static properties of the phases are in principle well understood. It is not so with BaVS<sub>3</sub>, for which there is no consensus either about the driving force of the metal–insulator transition, or the nature of the low-$`T`$ phases. Since even weak extrinsic disorder can have a drastic effect on the critical behavior, the relative importance of correlation and disorder should also be considered both for the insulator and for the various conducting regimes.
The effects of the vicinity of a ferromagnetic, or an antiferromagnetic, quantum critical point on the metallic resistivity have been worked out. The overall appearance of the susceptibility curve shows that BaVS<sub>3</sub> is dominated by antiferromagnetic spin–spin interactions, so the predictions concerning the resistivity of a nearly antiferromagnetic metal are relevant. It has been argued that for samples of sufficiently good quality, $`\mathrm{\Delta }\rho \propto T^n`$ where $`n<1.5`$, and finding $`n=1.2`$–1.3 over 1–2 decades of $`T`$ is a reasonable expectation. This is in full accordance with our results. The fact that at $`p=22.5`$kbar $`\rho _{300}/\rho _0\approx 100`$ shows that our sample is of good quality and disorder effects are weak as far as the high-pressure conductor is concerned.
Let us note here that CaRuO<sub>3</sub> provides another example of a $`d`$-electron system whose non-Fermi-liquid nature is probably explained by its being nearly antiferromagnetic. However, its resistivity follows the $`\mathrm{\Delta }\rho \propto T^{1.5}`$ relationship, which is expected for dirty samples.
The reason for the lack of magnetic long range order in the $`T_X<T<T_{\mathrm{MI}}`$ insulating phase is not evident. One may first think that in-plane frustration is responsible, because the V ions form triangular $`a`$–$`b`$ planes. However, neither the isotropic nor the anisotropic triangular Heisenberg model is, for any spin, frustrated enough to give a spin liquid. Here one may be tempted to invoke disorder: it is known that quantum antiferromagnets (especially for $`S=1/2`$) tend to be unstable against the formation of a random singlet phase. The theoretical issue is still open, but the outcome that antiferromagnetism should always be unstable against quenched disorder is considered implausible.
We believe that in a weakly frustrated system like BaVS<sub>3</sub>, the Heisenberg model would order, and BaVS<sub>3</sub> is non-magnetic because its effective hamiltonian is not a pure Heisenberg model. We argued in Ref. that, including the orbital degrees of freedom of the low-lying crystal field quasi-doublet, one finds a large number of energetically favorable dimer coverings of the triangular lattice such that the intra-dimer spin coupling is strong, while inter-dimer perturbations are weak. The equilibrium phase at $`T_X<T<T_{\mathrm{MI}}`$ can be visualized as a thermal average over a class of valence bond solid states. The intra-dimer interaction causes the opening of a spin gap, while inter-dimer interactions result in intermediate-range correlations and a $`Q`$-dependence in the spin excitation spectrum. In this model we do not have to invoke disorder to explain the singlet phase. Quite on the contrary: either sulfur off-stoichiometry or Ti-doping is known to break up singlet pairing. The well-developed singlet insulating phase is the hallmark of a clean system.
In conclusion, we studied the phase diagram of the non-magnetic Mott system BaVS<sub>3</sub> by resistivity measurements. A quantum critical point $`p_{\mathrm{cr}}\approx 20`$kbar was found, and the results revealed the existence of anomalous conducting regimes. The pressure-induced metallic phase is a non-Fermi liquid at $`T<60`$K. We suspect that at high temperatures the electrical transport is determined by scattering on orbital fluctuations. For $`p<p_{\mathrm{cr}}`$ an unusually wide precursor regime was identified above the metal–insulator transition. In this regime, which is attributed to the appearance of a soft charge gap, $`\rho (t)`$ can be well described by a power law of the reduced temperature. The microscopic nature of these regimes remains to be elucidated.
This work was supported by the Swiss National Foundation for Scientific Research and by Hungarian Research Funds OTKA T025505 and D32689, FKFP 0355 and B10, AKP 98-66 and Bolyai 118/99. |
no-problem/0001/cond-mat0001112.html | ar5iv | text | # Decay on several sorts of heterogeneous centers: Special monodisperse approximation in the situation of strong unsymmetry. 3. Numerical results for the special monodisperse approximation
## 1 Calculations
Now we shall turn to estimating the errors of the floating monodisperse approximation. The errors of substituting the rectangular form for the subintegral functions are known; they are rather small ($`\sim 0.1`$). But the error of the floating monodisperse approximation itself has to be estimated numerically.
Here again we can see that the error in the number of droplets formed on the first type of heterogeneous centers can be estimated in the frame of the standard iteration method, and it is small. So, only the error in the number of droplets formed on the second type of heterogeneous centers will be the subject of our interest.
Here again the worst situation occurs when there is no essential exhaustion of heterogeneous centers of the second type.
We have to recall the system of the condensation equations. Here it can be written in the following form
$$G=\int _0^z\mathrm{exp}(-G(x))\theta _1(x)(z-x)^3dx$$
$$\theta _1=\mathrm{exp}(-b\int _0^z\mathrm{exp}(-G(x))dx)$$
with a positive parameter $`b`$, and we have to estimate the error in
$$N=\int _0^{\infty }\mathrm{exp}(-lG(x))dx$$
with some parameter $`l`$.
We shall solve this problem numerically and compare our result with the already formulated models. In the model of the total monodisperse approximation we get
$$N_A=\int _0^{\infty }\mathrm{exp}(-lG_A(x))dx$$
where $`G_A`$ is
$$G_A=\frac{1}{b}(1-\mathrm{exp}(-bD))x^3$$
and the constant $`D`$ is given by
$$D=\int _0^{\infty }\mathrm{exp}(-x^4/4)dx=1.28$$
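The constant $`D`$ and the resulting $`N_A`$ are easy to check numerically. The sketch below uses a plain Simpson rule with finite cutoffs (safe because the integrands decay extremely fast); the sample values of $`b`$ and $`l`$ used later are arbitrary.

```python
# Numerical check of D = int_0^inf exp(-x**4/4) dx and of the total
# monodisperse N_A; finite cutoffs suffice since the integrands decay fast.
import math

def simpson(g, a, b, n=4000):         # composite Simpson rule, n even
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3.0

D = simpson(lambda x: math.exp(-x ** 4 / 4.0), 0.0, 6.0)     # about 1.28

def N_A(l, b, z_max=20.0):
    c = (1.0 - math.exp(-b * D)) / b                          # prefactor of z**3 in G_A
    return simpson(lambda z: math.exp(-l * c * z ** 3), 0.0, z_max)
```

For a cubic exponent the integral has a closed form, $`\int _0^{\infty }e^{-cz^3}dz=\mathrm{\Gamma }(4/3)c^{-1/3}`$, which the quadrature reproduces.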
Numerical results are shown in .
In the model of the floating monodisperse approximation we have to calculate the integral
$$N_B=\int _0^{\infty }\mathrm{exp}(-lG_B(x))dx$$
where $`G_B`$ is
$$G_B=\frac{1}{b}(1-\mathrm{exp}(-b\int _0^{z/4}\mathrm{exp}(-x^4/4)dx))z^3$$
$$G_B\approx \frac{1}{b}(1-\mathrm{exp}(-b(\mathrm{\Theta }(D-z/4)z/4+\mathrm{\Theta }(z/4-D)D)))z^3$$
We have tried all the mentioned approximations for $`b`$ from $`0.2`$ up to $`5.2`$ with a step of $`0.2`$, and for $`l`$ from $`0.2`$ up to $`5.2`$ with a step of $`0.2`$. We calculate the relative error in $`N`$. The results are drawn in Fig. 1 for $`N_B`$, where the relative errors are marked by $`r_2`$.
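The reference solution behind such error tables can be reproduced with a straightforward grid discretization of the system. The sketch below (rectangle-rule accuracy only; the grid and the sample $`(l,b)`$ values are illustrative) steps the Volterra equations and compares $`N`$ with the floating monodisperse $`N_B`$:

```python
# Grid discretization of G(z) = int_0^z exp(-G) theta_1 (z-x)**3 dx with
# theta_1(z) = exp(-b int_0^z exp(-G) dx), followed by N = int exp(-l G) dz.
# Rectangle rule on a uniform grid; grid and (l, b) values are illustrative.
import math

def solve_N(l, b, h=0.01, z_max=6.0):
    m = int(z_max / h)
    z = [i * h for i in range(m + 1)]
    w = [0.0] * (m + 1)        # w[i] = exp(-G(z_i)) * theta_1(z_i)
    acc = 0.0                  # running int_0^z exp(-G) dx (for theta_1)
    N = 0.0
    for j in range(m + 1):
        G = h * sum(w[i] * (z[j] - z[i]) ** 3 for i in range(j))
        e = math.exp(-G)
        w[j] = e * math.exp(-b * acc)
        acc += h * e
        N += h * math.exp(-l * G)
    return N

def N_float(l, b, D=1.28, h=0.005, z_max=6.0):
    # floating monodisperse approximation, with min(z/4, D) in the exponent
    total = 0.0
    for j in range(int(z_max / h) + 1):
        zj = j * h
        G = (1.0 - math.exp(-b * min(zj / 4.0, D))) / b * zj ** 3
        total += h * math.exp(-l * G)
    return total

N_exact = solve_N(1.0, 1.0)
N_approx = N_float(1.0, 1.0)
rel_err = abs(N_exact - N_approx) / N_exact
```

At moderate $`(l,b)`$ the floating approximation stays within a modest relative error of the stepped solution, consistent with the error surfaces of Figs. 1 and 2.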
The maximum of the errors in $`N_B`$ lies near $`l=0`$. So, we have to analyse the situation at small values of $`l`$. This is done in Fig. 2 for $`N_B`$. There we still cannot locate the maximum error; it lies near $`b=0`$. Then we have to calculate the situation with $`b=0`$. The value of $`l`$ cannot be put directly to $`l=0`$. Then we have to solve the following equation
$$G=\int _0^z\mathrm{exp}(-G(x))(z-x)^3\,dx$$
and to compare
$$N=\int _0^{\infty }\mathrm{exp}(-lG(x))\,dx$$
with
$$N_A=\int _0^{\infty }\mathrm{exp}(-lDz^3)\,dz$$
$$N_B=\int _0^{\infty }\mathrm{exp}\left(-l\left(\mathrm{\Theta }(z/4-D)\,Dz^3+\mathrm{\Theta }(D-z/4)\,z^4/4\right)\right)\,dz$$
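In this $`b=0`$ limit the two estimates can be compared numerically. Since the exponent in $`N_B`$ never exceeds $`Dz^3`$ (because $`z^4/4\le Dz^3`$ for $`z/4\le D`$), one expects $`N_B\ge N_A`$; the following sketch, with an illustrative grid and value of $`l`$, is consistent with this.

```python
import numpy as np

D = 1.28                      # the constant from the text
z = np.linspace(0.0, 6.0, 60001)
dz = z[1] - z[0]

def N_A0(l):
    """b -> 0 limit of the total monodisperse approximation."""
    return np.sum(np.exp(-l * D * z**3)) * dz

def N_B0(l):
    """b -> 0 limit of the floating monodisperse approximation."""
    G_B = np.where(z / 4.0 >= D, D * z**3, z**4 / 4.0)
    return np.sum(np.exp(-l * G_B)) * dz
```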
Results of this calculation will be presented together with the consideration of the "essential asymptotes" in the next section.
Fig.1
The relative error of $`N_B`$ drawn as a function of $`l`$ and $`b`$. Parameter $`l`$ goes from $`0.2`$ up to $`5.2`$ with a step of $`0.2`$. Parameter $`b`$ goes from $`0.2`$ up to $`5.2`$ with a step of $`0.2`$.
One can see a maximum at small $`l`$ and moderate $`b`$.
Fig.2
The relative error of $`N_B`$ drawn as a function of $`l`$ and $`b`$. Parameter $`l`$ goes from $`0.01`$ up to $`0.11`$ with a step of $`0.01`$. Parameter $`b`$ goes from $`0.2`$ up to $`5.2`$ with a step of $`0.2`$.
One can see a maximum at small $`l`$ and small $`b`$. Note that the values of $`b`$ corresponding to the maximum of the relative errors have now become small.
no-problem/0001/cond-mat0001205.html | ar5iv | text | # Irreversibility Line in Nb/CuMn Multilayers with a Regular Array of Antidots
## I Introduction
The task of increasing the critical current density $`J_c`$ in superconducting materials has always been a widely studied subject , gaining even more interest after the discovery of high temperature superconductors (HTS) . These studies are strictly related to the understanding of the flux line pinning mechanism and to the reduction of the vortex mobility. Introducing artificial defects in superconducting materials, such as non-superconducting distributed phases , columnar tracks of amorphous material obtained by high energy ions , or geometrical constrictions such as channels or dots , is a very useful tool for a better understanding of the vortex dynamics and for obtaining higher $`J_c`$ values. The recent developments of submicrometer electron beam lithographic techniques have made it possible to reduce the typical size of these geometrical constrictions to values much smaller than the period of the vortex lattice, and comparable to typical superconducting coherence lengths . Many experiments performed on systems with such a regular array of defects, and several numerical simulations , have been focused on the study of the vortex properties at low magnetic fields close to the matching fields $`H_n=n_p\mathrm{\Phi }_0`$ (here $`n_p`$ is the pin concentration and $`\mathrm{\Phi }_0`$ is the flux quantum).
Other studies, performed at higher magnetic fields, have been related to the analysis of the vortex lattice shear stress in superconducting layered systems in the presence of artificially obtained weak pinning channels embedded in a strong pinning environment . Due to the possibility of varying in a controlled way many different parameters, such as the density, the dimensionality and the nature of the pinning centers , the study of the vortex dynamics in artificially layered conventional superconductors with the simultaneous presence of a regular array of pinning centers perpendicular to the layers is of great interest. Moreover, the natural layered structure of HTS allows one to use these artificial systems as a model to help discriminate between intrinsic and dimensional effects in the transport properties of HTS compounds .
In this paper we report on current-voltage ($`IV`$) characteristics and resistance versus temperature, $`R(T,H)`$, measurements, in perpendicular external magnetic fields $`H`$, performed on two different series of Nb/CuMn multilayers with a square array of antidots. The choice of a superconducting (Nb)/spin glass (CuMn) layered system is particularly interesting in view of its use as a model of HTS compounds. The two series have antidots with the same diameter, $`D\approx 1`$ $`\mu m`$, and different lattice distances between the antidots: $`d\approx 2`$ $`\mu m`$ in one series and $`d\approx 1.6`$ $`\mu m`$ in the other. The experiments have been performed on samples having different anisotropies within each series. The $`IV`$ curves measured at different temperatures and at different values of $`H`$ have shown, in regions of the $`H`$-$`T`$ phase diagram depending on the anisotropy of the system, a hysteretic behavior with sudden voltage jumps, which disappears when approaching the critical field $`H_{c2}(T)`$ curve. From the analysis of the curvature of the logarithmic $`IV`$ characteristics and from the study of the shape of the Arrhenius plots of the resistive transition curves, we have been able to relate the disappearance of this hysteretic behavior to the presence of an irreversibility line (IL). We discuss different possible mechanisms responsible for this IL; among them, the most plausible for our samples seems to be vortex melting mainly induced by quantum fluctuations .
## II Experimental
Regular arrays of antidots have been fabricated using Electron Beam Lithography to pattern the resist on a 2 inch Si (100) wafer. The Nb and CuMn layers were deposited by a dual-source magnetically enhanced dc triode sputtering system with a movable substrate holder onto the patterned resist, and the final structures were obtained by the lift-off technique. Single antidots have a circular geometry and the antidot array is arranged in a square lattice configuration, see figure 1. The total area covered by the array is a square of 200 $`\times `$ 200 $`\mu m^2`$ with four separate contact pads connected to the vertices. Eight replicas of this structure are present on the same Si wafer to allow the fabrication of a series of multilayered samples with the same Nb thickness and variable CuMn thicknesses in only one deposition run . The resist used is UV III from Shipley; UV III is a chemically amplified photoresist for the deep-UV range, but is widely used as an e-beam resist because it provides a good tradeoff between reasonable sensitivity and high resolution. In our case it has been used as a positive resist, that is, the resist is retained where unexposed. A resist film thickness of 5800 Å has been obtained by spinning the wafer at 1800 rpm. The resist has been exposed using a Leica Cambridge EBMF 10 system operated at 50 kV. Several tests have been carried out in order to achieve structure profiles most suitable for lift-off. In particular, the desired profile is one showing a moderate undercut. Also, developing times and post-exposure treatments have been optimized for profile improvement. After developing and post-baking at 130 °C the samples underwent RIE in $`O_2`$ for 30 $`s`$ at 25 W rf power and 14 Pa oxygen pressure, in order to completely remove residual resist in the exposed areas. This treatment lowered the resist thickness to about 5000 Å. Fig.1 shows a Scanning Electron Microscope (SEM) image of the typical result of the fabrication process: in this case the antidots nominal diameter is 1 $`\mu m`$ and the nominal period of the structure is 2 $`\mu m`$.
Table I. Some of the relevant sample features. $`\rho _N`$ is the resistivity at $`T`$=10 K. See the text for the meaning of the other quantities.
The number of bilayers of Nb and CuMn is always equal to six. The first layer is CuMn, the last one is Nb. The Nb nominal thickness, $`d_{Nb}`$, is 250 Å for all the samples in both series. The CuMn thickness, $`d_{CuMn}`$, has been varied from 7 Å to 25 Å in one series ($`d\approx 2`$ $`\mu `$m) and from 4 Å to 20 Å in the other ($`d\approx 1.6`$ $`\mu `$m). The Mn percentage is always equal to 2.7. For reference, a Nb/CuMn multilayer with $`d_{CuMn}`$=13 Å (Mn %=2.7) without antidots, but with the same configuration (200$`\times `$200 $`\mu m^2`$ square and four pads connected to the vertices), has been fabricated. The sample parameters are summarized in Table I. $`IV`$ characteristics have been registered at $`T\le 4.2`$ K using a dc pulse technique. The temperature stabilization during the acquisition of the curves in the helium bath was better than $`10^{-2}`$ K. The magnetic field was obtained by a superconducting Nb-Ti solenoid. From the measured temperature dependencies of the perpendicular and parallel upper critical fields $`H_{c2}`$ (obtained at half of the resistive transitions $`R(T,H)`$) we deduced the values of the parallel and perpendicular coherence lengths at zero temperature, $`\xi _{\parallel }(0)`$ and $`\xi _{\perp }(0)`$ respectively, and then the value of the anisotropic Ginzburg-Landau mass ratio $`\gamma _0=\xi _{\parallel }(0)/\xi _{\perp }(0)`$ . The different values of the critical temperatures for the samples of the two series having similar values of $`d_{Nb}`$ and $`d_{CuMn}`$ are probably related to the different Nb quality obtained in the two deposition runs.
Figure 2a shows $`IV`$ curves for the sample NCMF at $`T`$=2.60 K for different applied magnetic fields in the range 0.03$`<\mu _0H<`$0.70 T. At low magnetic fields the curves present hysteresis, not due to thermal effects, when registered both in the forward and backward directions, i.e. increasing and decreasing the current . This hysteresis becomes smaller when $`H`$ is increased, disappearing completely, within our experimental accuracy, when approaching $`H_{c2}`$. These features have been repeatedly obtained for the same sample in different measurements and are typical of all the samples of the two series with antidots. In figure 2b the $`IV`$ characteristics for the sample NCMD are plotted in the temperature range 2.3 K$`<T<`$2.91 K for $`\mu _0H`$= 0.03 T. In this case the hysteresis disappears approaching $`T_c`$. The $`IV`$ curves for the sample NCMDA without antidots are always smooth and parabolic-like, typical of a type II superconductor.
In figure 3, we plot the $`IV`$ curves in double logarithmic scale for the sample RNCMC at $`T`$=2.96 K for different values of the magnetic field. A change in the curvature of the logV-logI curves clearly occurs at low voltages; this is usually related to the presence of an IL in the $`H`$-$`T`$ phase diagram , given by the points at which the logV-logI curve is linear. In all the samples measured, these points are always very close to the points at which the hysteresis in the $`IV`$ curves disappears. As an example, figures 4(a,b,c) show the $`H`$-$`T`$ phase diagrams for the samples NCMB, NCMC and NCMH, respectively. Circles distinguish the two different regimes of the vortex dynamics as determined from the disappearance of the hysteresis in the $`IV`$ curves; up triangles indicate the points where the change in the curvature of the logV-logI curves occurs; squares correspond to the values of the perpendicular critical magnetic field. It is then evident that the change in the hysteretic behavior of the $`IV`$ characteristics is related to the IL as defined by the change in the logV-logI curvature. It is also interesting to note that the position of the IL in the $`H`$-$`T`$ plane moves away from the $`H_{c2}`$ line as the anisotropy of the samples increases.
Another way to determine the presence of an IL in the $`H`$-$`T`$ phase diagram of a superconductor is the study of the Arrhenius plot of the resistance versus temperature curves . In figure 5 the Arrhenius plot of the transition curves of the sample RNCMA, recorded using a bias current of 2 mA, is shown at different perpendicular magnetic fields. A well defined field dependent temperature $`T^{*}`$ separates two zones with very different activation energies. In particular, at $`T<T^{*}`$ a sudden increase in the Arrhenius slope signals a transition in the transport properties of the sample which can be related to the presence of the IL. Figure 6 shows the measured $`H`$-$`T`$ phase diagram for the sample RNCMC. The solid squares correspond to the perpendicular magnetic fields; the open squares correspond to the points in the $`H`$-$`T`$ plane at which the onset of hysteresis in the $`IV`$ curves takes place, the open diamonds are defined taking into account the change of curvature of the logI-logV curves, and the open circles and up triangles are the points at which the slope in the Arrhenius plot of the transition curves, taken at bias currents of 2 mA and 6 mA respectively, changes as shown in figure 5. It is evident that also in this case all the points (open symbols) fall on the same curve, which can be identified as an IL.
## III Discussion
In all the measurements performed the bias current was applied in the plane of the film and perpendicular to the direction of the external magnetic field, see figure 1. In this configuration, the Lorentz force acting on the vortices tends to move them along the channels in between the antidots. On the other hand, in the narrower zones between adjacent antidots, the current density is much higher than in the channels, so that we locally have weaker pinning centers in these parts of the sample. Therefore, we can look at our superconducting layered system as made of alternating zones of strong pinning (the channels along the direction of action of the Lorentz force in figure 1) and weak pinning (the narrower zones between adjacent antidots), similarly to other cases reported in the literature . The value of the matching field in both series is very small, $`H_n\approx 5`$ Oe, and $`d\gg \xi (T)`$, where $`\xi (T)`$ is the temperature dependent coherence length, which in our multilayers has typical values of about 100 Å. Therefore, in the superconducting state it is possible to have a vortex lattice inside the channels between the antidots. The pinning of these interstitial vortices is determined mostly by the intrinsic properties of the superconducting materials .
To start the analysis of our data we first have to determine the dimensionality of the vortex lattice in our samples. At magnetic fields lower than a critical value $`H_{cr}\approx 4\mathrm{\Phi }_0/\gamma _0^2s^2`$, where $`s`$ is the interlayer distance between superconducting layers, the vortices in adjacent layers are strongly coupled and the vortex lines behave as three dimensional (3D). For our samples $`H_{cr}\approx 10^3`$ T, well above any applied field, so we can exclude the decoupling of the vortex lines in our layered systems .
On the other hand, when the shear modulus $`\mathrm{c}_{66}`$ of a vortex lattice is much lower than the tilt modulus $`\mathrm{c}_{44}`$, the system can behave two-dimensionally. In principle, thermal fluctuations could cause tilt deformations of the vortex line. The critical thickness $`d_{cr}`$ of a film beyond which thermal-fluctuation-induced tilt deformations become relevant is given by
$$d_{cr}\simeq \frac{4.4\,\xi _{\parallel }}{\sqrt{h(1-h)}}$$
(1)
where $`h=H/H_{c2}`$. In multilayers, however, the $`d_{cr}`$ value is reduced by a factor $`\gamma _0^2`$, which for our samples gives, in the range of measured temperatures and magnetic fields, $`d_{cr}^{multi}\approx 200\div 300`$ Å. Typical thicknesses of our samples are in the range $`1500\div 1700`$ Å. This means that the vortex lattice in our multilayers is in a strong 3D regime.
One of the proposed interpretations of the nature of the IL relies upon a depinning mechanism in which a crossover from flux creep to flux flow occurs . In our samples we never observe linearity of the $`IV`$ curves, neither at high voltage, where a uniform flux flow should be present, nor at small currents, where thermally assisted flux flow (TAFF) should take place. If we fit our data with the relation $`V\propto I^\alpha `$, the fitted exponent is always very high ($`\alpha >10`$). Therefore we exclude the possibility of a flux creep-flux flow crossover in the presence of a pinning strength distribution, which could also have been responsible for the curvature change of the $`IV`$ curves when plotted in double logarithmic scale .
The $`IV`$ curves shown in figures 2a and 2b are very similar to those obtained as a result of numerical simulations for a superconductor with periodic pinning close to the matching field . Considering that we are very far from $`H_n`$, we cannot apply the results of Ref. to explain our $`IV`$ curves. Nevertheless, the main observed features indicate the presence of a region in the $`H`$-$`T`$ plane where plastic vortex motion probably takes place. Below some crossover value $`H_{pl}`$, vortices experience plastic motion, which usually reveals itself in hysteretic curves . Above $`H_{pl}`$, the $`IV`$ curves are smooth, the hysteresis vanishes, and the vortex motion becomes the flow of a vortex liquid . The vortex melting scenario, along with the simultaneous presence of weak and strong pinning channels, can also explain the observed Arrhenius plots. In fact, immediately below $`T_c^{onset}`$ (defined as the temperature where the electrical resistance $`R`$ is 0.9 of its value in the normal state), the solidification of the vortices in the strong pinning channels determines the high values of the initial slope in the plots. At lower temperatures, the dissipation in the system is mainly due to the vortices in the weak pinning regions, and this results in a lower value of the activation energies. When these vortices also experience the transition from liquid to solid, at $`T=T^{*}`$, the slope in the plots increases again .
Melting of the vortex lattice can be induced by thermal fluctuations . The melting temperature, at which the vortex lattice goes from a solid phase to a liquid one, can be obtained by using the 3D thermal melting criterion
$$c_L^4\simeq \frac{3G_i}{\pi ^2}\frac{h}{(1-h)^3}\frac{t_m^2}{1-t_m}$$
(2)
where $`c_L`$ is the Lindemann number, $`T_c`$ is the superconducting transition temperature (defined in our case at the point where the electrical resistance $`R`$ of the sample becomes less than $`10^{-4}`$ $`\mathrm{\Omega }`$), $`t_m=T_m/T_c`$ is the reduced melting temperature, and $`G_i`$ is the Ginzburg number, which determines the contribution of the thermal fluctuations to the vortex melting and is given by $`G_i=(1/2)\left(2\pi \mu _0k_BT_c\lambda _{\parallel }^2(0)\gamma _0/\mathrm{\Phi }_0^2\xi _{\parallel }(0)\right)^2`$, where $`\lambda _{\parallel }`$ is the in-plane penetration depth, which for all our samples has been assumed equal to 1500 Å. Melting usually occurs when $`c_L\approx 0.1\div 0.3`$ . When we try to fit the IL observed in our samples with Eq.2 we get $`c_L\approx 10^{-4}`$. This extremely low value of the Lindemann number makes it unreasonable to consider the IL as due to 3D thermal melting.
However, as first pointed out by Blatter and Ivlev , in moderately anisotropic superconductors at low temperatures one cannot exclude the contribution of quantum fluctuations to the melting. In this case the total fluctuation displacement of the vortex line is $`<u>^2=<u>_{th}^2+<u>_q^2`$, where $`\sqrt{<u>_{th}^2}`$ is the average displacement due to thermal fluctuations and $`\sqrt{<u>_q^2}`$ is the average displacement due to quantum fluctuations. $`<u>_{th}^2`$ diminishes with temperature, while $`<u>_q^2`$ does not depend on temperature, so at low values of $`T`$ one can expect $`<u>_{th}^2\ll <u>_q^2`$. The amplitude of $`<u>_q^2`$ depends on the ratio $`Q^{*}/\sqrt{G_i}`$, where $`Q^{*}=e^2\rho _N/\hbar s`$, with $`\hbar `$ the Planck constant and $`e`$ the elementary charge. If $`Q^{*}/\sqrt{G_i}\gg 1`$, the contribution of quantum fluctuations is crucial. For the samples discussed here we always get $`Q^{*}/\sqrt{G_i}>30`$, which justifies the possibility of an important contribution coming from quantum fluctuations . In this case the melting line is given by
$$h_m=\frac{4\mathrm{\Theta }^2}{\left[1+\left(1+4Q\mathrm{\Theta }\frac{1}{t}\right)^{1/2}\right]^2}$$
(3)
where $`h_m=H_m/H_{c2}`$, $`t=T/T_c`$ is the reduced temperature, $`\mathrm{\Theta }=\pi c_L^2(t^{-1}-1)/\sqrt{G_i}`$, and $`Q=Q^{*}\mathrm{\Omega }\tau /\pi \sqrt{G_i}`$, with $`\mathrm{\Omega }`$ a cut-off frequency, generally of the order of the Debye frequency, and $`\tau `$ an effective electronic relaxation time ($`\hbar /k_BT_c\sim \tau `$) . As we have already shown , the values of $`\mathrm{\Omega }`$ and $`\tau `$ in Nb/CuMn are in the ranges $`(2\div 3)\times 10^{13}`$ $`s^{-1}`$ and $`(1\div 5)\times 10^{-13}`$ $`s`$, respectively. Therefore, for all the samples studied we have assumed $`\mathrm{\Omega }=3\times 10^{13}`$ $`s^{-1}`$ and $`\tau =5\times 10^{-13}`$ s. In this way we reduce the number of free fit parameters in Eq.3 to only one, namely the Lindemann number $`c_L`$.
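With $`\mathrm{\Omega }`$ and $`\tau `$ fixed, Eq.3 defines a one-parameter family of melting lines in the $`h`$-$`t`$ plane. A minimal numerical sketch follows; the values of $`c_L`$, $`G_i`$ and $`Q`$ passed in are illustrative, not the fitted ones.

```python
import numpy as np

def h_m(t, c_L, G_i, Q):
    """Quantum-corrected melting line of Eq.(3):
    h_m = 4 Theta^2 / [1 + (1 + 4 Q Theta / t)^(1/2)]^2,
    with Theta = pi c_L^2 (1/t - 1) / sqrt(G_i)."""
    theta = np.pi * c_L**2 * (1.0 / t - 1.0) / np.sqrt(G_i)
    return 4.0 * theta**2 / (1.0 + np.sqrt(1.0 + 4.0 * Q * theta / t))**2
```

For $`Q=0`$ the expression reduces to the purely thermal result $`h_m=\mathrm{\Theta }^2`$, and a finite $`Q`$ pushes the line below it.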
The solid line in Fig.6 has been calculated according to Eq.3 using for the Lindemann number a value of $`c_L`$=0.23. The solid lines in Figures 4a, 4b and 4c have also been calculated according to Eq.3. For the sample NCMC, Fig.4b, we obtain good agreement with the experimental data for $`c_L=0.19`$, while for the sample NCMH, Fig.4c, the solid line is obtained taking $`c_L=0.09`$. The agreement between the experimental data and the theoretical curves is very good for all the samples studied. The $`c_L`$ values, as shown in Table I, become smaller with increasing anisotropy, reaching in the case of sample NCMH a value slightly below 0.1 . When the anisotropy of the system increases, the coupling between adjacent superconducting layers is reduced and the vortex lines become softer. The influence of thermal fluctuations on the vortex dynamics depends strongly on this coupling. Qualitatively, therefore, the reduction of the $`c_L`$ values with increasing anisotropy could be related to the change in the topology of the vortex system.
On the other hand, the softening of the vortex system could also lead to a situation in which thermal fluctuations are able to cause tilt deformations. However, if we try to fit the experimental points in figures 4 and 6 using the 2D pure thermal melting curve
$$\frac{\alpha d}{\kappa ^2}\frac{H_{c2}(T)}{T(1.25-0.25t)}(1-0.58h_m-0.29h_m^2)(1-h_m)^2=1$$
(4)
we do not obtain any agreement with the data. Here $`\alpha =A\mathrm{\Phi }_0(1.07)^2/32\pi \mu _0k_B`$, $`\kappa =\lambda _{\parallel }/\xi _{\parallel }`$ and $`d`$ is the thickness of the sample. $`A`$ is a renormalization factor of the shear modulus $`c_{66}`$ due to non linear lattice vibrations and vortex lattice defects, and is $`A\approx 0.64`$ .
We want to point out that the quantum melting theory has been successfully applied to describe the vortex behavior also in non-perforated Nb/CuMn multilayers . In that case the melting line was determined by analyzing in an Arrhenius fashion the measured $`R(T)`$ curves in perpendicular magnetic fields. The shapes of the Arrhenius plots, see for example figure 1 in ref. , were very similar to those observed in the case of the perforated samples, suggesting the presence of two types of pinning centers also in the non-perforated samples. In non-perforated samples, edge pinning could be relevant and obviously stronger than intrinsic pinning . Therefore one could interpret the shape of the Arrhenius plots in non-perforated Nb/CuMn multilayers as due to the transition from liquid to solid first of the vortices at the edges and then, at lower temperatures, of the vortices intrinsically pinned in the inner part of the samples. If this interpretation is correct, the slopes of the Arrhenius plots measured in non-perforated samples in zone 2 (see insert in figure 7) should be very close to the slope measured in zone 1 in perforated samples. In fact, in both cases, this slope should be related to the activation energy of vortices intrinsically pinned inside the system. In figure 7 the solid points refer to the values of the activation energy in a typical non-perforated sample measured in zone 2, while the open symbols refer to the values of the activation energy in a perforated sample (RNCMA) measured in zone 1. The two samples have been chosen to have similar critical temperatures. Also the $`R(T,H)`$ curves have been taken using a similar value of the bias current density $`J_b`$. The quite good agreement between the two sets of data supports our idea that, in both cases, they are a measure of the intrinsic pinning in the material.
The presence of the antidot array in the multilayers makes the vortex melting easier to measure. In fact, in the case of perforated samples one is able to detect the change in the slope of the Arrhenius plots using low values of the bias current ($`\sim 100`$ $`\mu `$A), while for non-perforated samples bias currents of $`\sim 1`$ mA are needed to observe the same effect (the IL in a superconductor does not depend on the value of the bias current). This is consistent with the idea that in antidotted samples the melting takes place in the zones with weaker pinning, when compared to the case of non-perforated samples in which the measured vortex phase transition takes place in the zones of intrinsic pinning. As a consequence, also the hysteresis is much easier to detect in antidotted samples, in regions of the $`IV`$ characteristics not too close to the $`H_{c2}(T)`$ curve.
The influence of the regular array of antidots on the vortex properties is also confirmed by the behavior of the vortex correlation length in the liquid phase, $`\xi _+`$, defined as
$$\xi _+\simeq \xi _{+0}\mathrm{exp}\left\{b\left(\frac{T_m}{T-T_m}\right)^\nu \right\}$$
(5)
where $`b`$ is a constant of the order of unity, $`\nu =0.36963`$ , and $`\xi _{+0}`$, being the smallest characteristic length scale in the liquid, is of the order of $`a_0`$, the vortex lattice parameter. According to the melting theory, the shear viscosity $`\eta (T)`$ of the vortex liquid starts to grow when approaching the liquid-solid line from above, with $`\eta \propto \xi _+^2(T)`$ . The curves of $`\xi _+`$ versus temperature for different applied magnetic fields are reported in figure 8 for the sample NCMH. All the curves start to diverge when the value of $`\xi _+`$ becomes comparable to the average distance among the antidots, i.e. when $`\xi _+\simeq d=`$1 $`\mu m`$. Similar results have been obtained for all the samples investigated. This is exactly what we expect if we look at the investigated system as a vortex ensemble constrained in the narrow channels between the lines of antidots. In this case, in fact, the melting transition at $`H=H_m`$ is observed when the correlation length of the liquid $`\xi _+`$ reaches the width of the channels. The results shown in figure 8 clearly indicate the influence of the antidot lattice on the vortex dynamics in our samples.
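A minimal sketch of Eq.5 follows; the values of $`\xi _{+0}`$ and $`b`$ are illustrative placeholders (the text only fixes $`\nu `$ and states that $`b`$ is of order unity).

```python
import numpy as np

def xi_plus(T, T_m, xi_0=1.0, b=1.0, nu=0.36963):
    """Vortex-liquid correlation length of Eq.(5):
    xi_+ = xi_0 * exp(b * (T_m / (T - T_m))**nu), valid for T > T_m."""
    return xi_0 * np.exp(b * (T_m / (T - T_m))**nu)
```

The function diverges as $`T\to T_m^+`$, which is exactly the behavior used above to locate the melting line from the growth of $`\xi _+`$.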
In conclusion, we have studied the transport properties of superconducting (Nb)/spin glass (CuMn) multilayers with a regular array of antidots by measuring $`IV`$ curves in perpendicular magnetic fields. The measurements have been performed far above the matching conditions. The dynamic phase diagram has been extracted from the analysis of these measurements. Two regions, corresponding to plastic flux flow motion and to the motion of the vortex liquid, have been distinguished. Melting occurs mostly due to quantum fluctuations, and the presence of the antidots makes the melting easier to detect due to the weaker pinning in the zones with higher local current density.
no-problem/0001/hep-ex0001063.html | ar5iv | text | # Inclusive Jet Cross Sections in ๐ฬโข๐ Collisions at โ๐ = 630 and 1800 GeV
## I Introduction
Within the framework of quantum chromodynamics (QCD), inelastic scattering between a proton and an antiproton is described as a hard collision between their constituents (partons). After the collision, the outgoing partons manifest themselves as localized streams of particles, or "jets". Predictions for the inclusive jet cross section improved in the early nineties with next-to-leading order (NLO) perturbative QCD calculations and new, accurately measured parton density functions (pdf).
The DØ Collaboration has recently measured and published the cross section for the production of jets as a function of the jet energy transverse to the incident beams, $`E_T`$. The measurement is based on an integrated luminosity of about 92 pb<sup>-1</sup> of $`\overline{p}p`$ hard collisions collected with the DØ Detector at the Fermilab Tevatron Collider. This result allows a stringent test of QCD, with a total uncertainty substantially reduced relative to previous results . We have also measured the ratio of jet cross sections at two center-of-mass energies: $`630`$ GeV (based on an integrated luminosity of about $`0.537`$ pb<sup>-1</sup>) and $`1800`$ GeV. Experimental and theoretical uncertainties are significantly reduced in the ratio. This is due to the large correlation between the errors of the two cross section measurements, and to the suppression of the sensitivity to parton distribution functions (pdf) in the prediction. The ratio of cross sections thus provides a stronger test of the matrix element portion of the calculation than a single cross section measurement alone. Previous measurements of cross section ratios have been performed with smaller data sets by the UA2 and CDF experiments.
## II Jet Reconstruction and Data Selection
Jets are reconstructed using an iterative jet cone algorithm with a fixed cone radius of $`\mathcal{R}=0.7`$ in $`\eta `$-$`\varphi `$ space (pseudorapidity is defined as $`\eta =-\mathrm{ln}[\mathrm{tan}\frac{\theta }{2}]`$). The offline data selection procedure, which eliminates background caused by electrons, photons, noise, or cosmic rays, follows the methods described in Refs. .
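The two geometric ingredients of the cone algorithm, the pseudorapidity and the distance in $`\eta `$-$`\varphi `$ space, can be sketched as follows (a generic illustration, not the experiment's actual reconstruction code):

```python
import math

def pseudorapidity(theta):
    """eta = -ln[tan(theta/2)] for polar angle theta in radians."""
    return -math.log(math.tan(theta / 2.0))

def delta_R(eta1, phi1, eta2, phi2):
    """Distance in eta-phi space, with the azimuthal difference
    wrapped onto [0, pi]."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)
```

A calorimeter tower belongs to a cone of radius 0.7 around a jet axis when `delta_R(...) < 0.7`.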
## III Energy Corrections
The jet energy scale correction, described in , removes instrumentation effects associated with calorimeter response, showering, and noise, as well as the contribution from spectator partons (underlying event). The energy scale corrects jets from their reconstructed $`E_T`$ to their "true" $`E_T`$ on average. An unsmearing correction is applied later to remove the effect of a finite $`E_T`$ resolution .
## IV The Inclusive Jet Cross Section
The resulting inclusive double differential jet cross sections, $`d^2\sigma /(dE_Td\eta )`$, for $`|\eta |\le 0.5`$ and $`0.1\le |\eta |\le 0.7`$ (the second region for comparison to Ref. ), are compared with a NLO QCD theoretical prediction . Discussions of the different choices in the theoretical calculation (pdfs, renormalization and factorization scales $`\mu `$, and the clustering algorithm parameter $`R_{sep}`$) can be found in Refs. .
Figure 1 shows the ratios $`(D-T)/T`$ for the data ($`D`$) and JETRAD NLO theoretical ($`T`$) predictions based on the CTEQ3M, CTEQ4M and MRST pdfs for $`|\eta |\le 0.5`$. (The tabulated data for both the $`|\eta |\le 0.5`$ and $`0.1\le |\eta |\le 0.7`$ measurements can be found in Ref. .)
The predictions are in good quantitative agreement with the data, as verified with a $`\chi ^2=\sum _{i,j}(D_i-T_i)(C^{-1})_{ij}(D_j-T_j)`$ test, which incorporates the uncertainty covariance matrix $`C`$. Here $`D_i`$ and $`T_i`$ represent the $`i`$-th data and theory points, respectively. The overall systematic uncertainty is largely correlated.
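The covariance-weighted $`\chi ^2`$ can be computed as in the following generic sketch, where `D`, `T` and `C` stand for the data vector, the theory vector and the covariance matrix:

```python
import numpy as np

def chi2(D, T, C):
    """chi^2 = sum_{i,j} (D_i - T_i) (C^{-1})_{ij} (D_j - T_j).
    Solving the linear system avoids forming C^{-1} explicitly."""
    r = np.asarray(D, dtype=float) - np.asarray(T, dtype=float)
    return float(r @ np.linalg.solve(C, r))
```

With a diagonal $`C`$ this reduces to the usual sum of squared pulls; off-diagonal terms encode the correlated systematic uncertainty mentioned above.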
Table I lists $`\chi ^2`$ values for several JETRAD predictions using various parton distribution functions . The predictions describe both the $`|\eta |\le 0.5`$ and the $`0.1\le |\eta |\le 0.7`$ cross sections very well. The measurements by DØ and CDF are also in good quantitative agreement within their systematic uncertainties .
## V $`\eta `$ Dependence of the Inclusive Jet Cross Section
DØ has made a preliminary measurement of the pseudorapidity dependence of the inclusive jet cross section. Figure 2 shows the ratios $`(D-T)/T`$ for the data ($`D`$) and JETRAD NLO theoretical ($`T`$) predictions using the CTEQ3M pdf set for $`0.5\le |\eta |<1.0`$ and $`1.0\le |\eta |<1.5`$. The measurements and the predictions are in good qualitative agreement. The pseudorapidity reach of this measurement is currently being extended to $`\eta =3.0`$ and the detailed error analysis is being completed.
## VI Ratio of Scale Invariant Jet Cross Sections
A simple parton model would predict a jet cross section that scales with the center-of-mass energy. In this scenario, $`E_T^4E\frac{d^3\sigma }{dp^3}`$, plotted as a function of the jet $`x_T\equiv \frac{2E_T}{\sqrt{s}}`$, would remain constant with respect to the center-of-mass energy. Figure 3 shows the DØ measurement of $`E_T^4E\frac{d^3\sigma }{dp^3}`$ (stars) compared to JETRAD predictions (lines). There is poor agreement between the data and the NLO QCD calculations using the same $`\mu `$ in the numerator and the denominator (the probability of agreement is not greater than 10%). The agreement improves for predictions with different $`\mu `$ at the two center-of-mass energies.
In conclusion, we have made precise measurements of jet production cross sections. At $`\sqrt{s}`$=1800 GeV, there is good agreement between the measurements and the NLO QCD predictions. The ratio of cross sections at $`\sqrt{s}`$=1800 and 630 GeV, however, differs from NLO QCD predictions, unless different renormalization scales are introduced for the two center-of-mass energies.
## VII Acknowledgements
We thank the Fermilab and collaborating institution staffs for contributions to this work and acknowledge support from the Department of Energy and National Science Foundation (USA), Commissariat à L'Energie Atomique (France), Ministry for Science and Technology and Ministry for Atomic Energy (Russia), CAPES and CNPq (Brazil), Departments of Atomic Energy and Science and Education (India), Colciencias (Colombia), CONACyT (Mexico), Ministry of Education and KOSEF (Korea), and CONICET and UBACyT (Argentina).
no-problem/0001/astro-ph0001150.html | ar5iv | text | # Distribution of binary mergers around galaxies
## Introduction
In the last few years the astronomical community has moved much closer to unveiling the nature of GRBs. The discovery of afterglows and the identification of host galaxies for several bursts clearly link GRBs to some type of stellar event. Yet the nature of these events is unknown, and we still do not know what the GRB central engines are. Observations of GRB host galaxies and precise locations of GRBs within hosts provide a tool to test some of the possible central engine models. In this paper we discuss the consistency between the current observations and the results of binary population synthesis.
## The Model
The population synthesis code used here is described in bbrhere . One of the most important parameters determining the properties of the populations of binaries is the kick velocity a newly formed compact object receives at birth. Several studies cordeschernoff ; fryer98 indicate that the distribution of kick velocities consists of two components: a low-velocity component with a width of approximately $`200`$ km s<sup>-1</sup>, and a high-velocity component with a characteristic velocity around $`800`$ km s<sup>-1</sup>. About 80% of the kicks are drawn from the first component. It is also known that the production rate of compact object binaries falls off exponentially with increasing kick velocity, see Fig. 1 in bbrhere . Thus the population of compact object binaries will be dominated by objects formed in systems that received kicks drawn from the low-velocity component of the distribution. In the following we consider the properties of the compact object binaries for the case when the kick velocity is drawn from a Gaussian distribution with a width of $`200`$ km s<sup>-1</sup>.
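A minimal sketch of sampling from such a two-component kick distribution is given below. The Gaussian component shapes and the 500 km s<sup>-1</sup> comparison threshold (of order a galactic escape speed) are illustrative assumptions of this sketch, not values taken from the population synthesis code:

```python
import random

# Two-component kick-velocity model: ~80% of kicks from a low-velocity
# component (width ~200 km/s), ~20% from a high-velocity one (~800 km/s).
def sample_kick(rng):
    sigma = 200.0 if rng.random() < 0.8 else 800.0   # 80% low, 20% high
    return abs(rng.gauss(0.0, sigma))                # speed in km/s

rng = random.Random(42)
kicks = [sample_kick(rng) for _ in range(100_000)]

# Fraction of kicks exceeding an assumed ~500 km/s galactic escape speed.
frac_fast = sum(v > 500.0 for v in kicks) / len(kicks)
print(f"fraction of kicks above 500 km/s: {frac_fast:.3f}")
```

Even with 20% of kicks drawn from the fast component, only a small fraction of systems end up above a massive galaxy's escape speed, consistent with the qualitative picture described in the text.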
Little is known a priori about the masses and gravitational potentials of the host galaxies where GRB progenitors reside. Therefore, to find the expected distribution of merger sites around galaxies we consider two extreme cases: propagation in the potential of a large galaxy like the Milky Way, and propagation in empty space bbzmn .
## Results
In Figure 1 we present the distribution of center-of-mass velocities gained by systems in supernova explosions in the galactic potential, and the binary lifetimes (the time a binary takes to evolve from the ZAMS to the final merger of its two components), for four types of compact object binaries.
For the case of propagation in the potential of a massive galaxy only a fraction of the NS-NS binaries will be able to escape from their host galaxies. The BH-NS binaries tend to stay in the galaxy. Here, we have assumed that the kick velocity does not depend on the nature of the compact object formed in the supernova; however, it has been argued that the kicks black holes receive should be smaller than those of neutron stars. This is discussed in more detail in bbz . The helium star mergers stay in the host galaxies, while some of the white dwarf-black hole mergers have a chance of taking place outside the host, provided that the escape velocity from a given galaxy is not too large.
In the case of propagation in empty space quite a large number of binaries of any type will be able to escape from their birthplace.
In Figure 2 we present the cumulative distributions of the distances (projected on the sky) between merger sites and the host galaxies. In the case of propagation in empty space (left panel of Figure 2) most of the mergers take place far outside the host. In the case of propagation in the potential of a massive galaxy (right panel of Figure 2) black hole-neutron star mergers and helium star mergers take place inside the host, and only a small, but not negligible, fraction of double neutron star and black hole-white dwarf mergers happen outside the host.
## Discussion
We have learned at this conference Fruchter that the afterglow locations coincide with galaxies, and that there are typically intense star-forming processes in these galaxies.
In the compact object merger model of GRBs one has to take into account the fact that there are significant delays between star formation and the time of merger. This delay consists of the stellar evolutionary time leading to the supernova explosions and the formation of the compact object binary, plus the subsequent evolution of the compact object binary due to gravitational wave energy loss. The distribution of the delay times is rather wide and varies for different types of binaries bbrhere , see Figure 1. In the case of helium star mergers these delays can be as short as a few million years.
Assuming that GRBs are related to NS-NS, BH-NS, or BH-WD mergers, we do not expect any correlation between GRB sites and star formation, because the star formation processes could have ceased by the time the merger happens. Thus, within this model the GRB rate should be proportional to the luminous mass in the Universe. As most of the luminous mass is concentrated in massive galaxies, we expect to find GRBs within such galaxies. However, these should be typical galaxies, whereas GRB hosts tend to be found in small, star-forming ones Fruchter .
Let us now consider the case when the delays between star formation and the merger events are shorter than the star-forming episode itself. Naturally, a correlation between GRB sites and star-forming galaxies then exists. However, the observed host galaxies in this case are typically small. Thus, as shown above, a significant fraction of the mergers should take place outside the host galaxies, and we should be finding GRBs with no underlying host galaxies.
One can also argue that GRBs taking place outside the host galaxies do not produce significant afterglows because of the low density of the ambient medium. This would make a strong selection effect against detecting GRBs happening outside of the hosts. At this conference, however, we have heard that the Beppo SAX data are consistent with all GRBs having X-ray afterglows Costa . This means that GRBs without afterglows do not exist, at least within the sample observable by Beppo SAX.
The reasoning presented above strongly argues against the compact object merger model of GRBs. However, we must remember that afterglows have been detected only for the long bursts, and we do not know whether short bursts also produce afterglows, or where short bursts are located relative to their host galaxies. Moreover, numerical models of compact object coalescences kluzniak ; ruffert agree with analytical estimates and show that the timescales of these events cannot be stretched beyond a fraction of a second. Yet the long bursts have a median duration of approximately 20-30 s.
Thus we conclude that the compact object merger model appears to be inconsistent with the observations of afterglows and their locations relative to the host galaxies. Long bursts are therefore most probably not connected with compact object mergers. However, it is quite likely that we will find that the short bursts are connected with mergers of compact objects.
Acknowledgments. We acknowledge the support of the following grants: KBN-2P03D01616, KBN-2P03D00415. |
no-problem/0001/math0001137.html | ar5iv | text | # References
The Weinstein conjecture in the uniruled manifolds
Guangcun Lu<sup>1</sup><sup>1</sup>1Partially supported by the NNSF 19971045 of China.
Nankai Institute of Mathematics, Nankai University
Tianjin 300071, P. R. China
(E-mail: gclu@nankai.edu.cn)
## Abstract
In this note we prove the Weinstein conjecture for a class of symplectic manifolds including the uniruled manifolds based on Liu-Tian's result.
Key words : Weinstein conjecture, Gromov-Witten invariants, uniruled manifold.
1991 MSC : 53C15, 58F05, 57R57.
Since 1978, when A. Weinstein proposed his famous conjecture that every hypersurface of contact type in a symplectic manifold carries a closed characteristic (\[We\]), many results have been obtained (cf. \[C\]\[FHV\]\[H\]\[HV1\]\[HV2\]\[LiuT\]\[Lu1\]\[Lu2\]\[V1\]\[V2\]\[V3\]) after C. Viterbo first proved it in $`(\mathbb{R}^{2n},\omega _0)`$ in 1986 (\[V1\]). Not long ago Gang Liu and Gang Tian established a deep relation between this conjecture and the Gromov-Witten invariants and obtained several general results as corollaries (\[LiuT\]).
Assume that $`S`$ is a hypersurface of contact type in a closed connected symplectic manifold $`(V,\omega )`$ separating $`V`$ in the sense of \[LiuT\], i.e., there exist submanifolds $`V_+`$ and $`V_{-}`$ with common boundary $`S`$ such that $`V=V_+\cup V_{-}`$ and $`S=V_+\cap V_{-}`$; then the following result holds.
Theorem 1 (\[LiuT\]) If there exist $`A\in H_2(V;\mathbb{Z})`$ and $`\alpha _+,\alpha _{-}\in H_{*}(V;\mathbb{Q})`$, such that
$`supp(\alpha _+)\subset int(V_+)`$ and $`supp(\alpha _{-})\subset int(V_{-})`$,
the Gromov-Witten invariant $`\mathrm{\Psi }_{A,g,m+2}(C;\alpha _{-},\alpha _+,\beta _1,\ldots ,\beta _m)\ne 0`$ for some $`\beta _1,\ldots ,\beta _m\in H_{*}(V;\mathbb{Q})`$ and $`C\in H_{*}(\overline{\mathcal{M}}_{g,m+2};\mathbb{Q})`$,
then $`S`$ carries at least one closed characteristic.
Recall that for a given $`A\in H_2(V;\mathbb{Z})`$ the Gromov-Witten invariant of genus $`g`$ and with $`m+2`$ marked points is a homomorphism
$$\mathrm{\Psi }_{A,g,m+2}:H_{*}(\overline{\mathcal{M}}_{g,m+2};\mathbb{Q})\times H_{*}(V;\mathbb{Q})^{m+2}\to \mathbb{Q},$$
(see \[FO\]\[LiT\]\[R\]\[Si\]). Though one so far does not yet know whether the GW invariants defined in the four papers agree or not, we believe that they have the same vanishing or nonvanishing properties, i.e., for any given classes $`C\in H_{*}(\overline{\mathcal{M}}_{g,m+2};\mathbb{Q})`$ and $`\beta _1,\ldots ,\beta _{m+2}\in H_{*}(V;\mathbb{Q})`$ one of these four versions vanishes on $`(C;\beta _1,\ldots ,\beta _{m+2})`$ if and only if the other three vanish on it. In addition, the version of \[R\] is actually a homomorphism from $`H_{*}(\overline{\mathcal{M}}_{g,m+2};\mathbb{R})\times H_{*}(V;\mathbb{R})^{m+2}`$ to $`\mathbb{R}`$. However, using the facts that $`H_{*}(M;\mathbb{Q})`$ is dense in $`H_{*}(M;\mathbb{R})`$ for $`M=V,\overline{\mathcal{M}}_{g,k}`$ and that $`\mathrm{\Psi }_{A,g,m+2}`$ is always a homomorphism, one can naturally extend the other three versions to homomorphisms from $`H_{*}(\overline{\mathcal{M}}_{g,m+2};\mathbb{R})\times H_{*}(V;\mathbb{R})^{m+2}`$ to $`\mathbb{R}`$. Below we always mean the extended versions when this is not clearly stated in the original versions. Our main result is
Theorem 2 For a connected closed symplectic manifold $`(V,\omega )`$, if there exist $`A\in H_2(V;\mathbb{Z})`$, $`C\in H_{*}(\overline{\mathcal{M}}_{g,m+2};\mathbb{Q})`$ and $`\beta _1,\ldots ,\beta _{m+1}\in H_{*}(V;\mathbb{Q})`$ such that
$$\mathrm{\Psi }_{A,g,m+2}(C;[pt],\beta _1,\ldots ,\beta _{m+1})\ne 0$$
for $`(g,m)\ne (0,0)`$ and the single point class $`[pt]`$, then every hypersurface of contact type $`S`$ in the symplectic manifold $`V`$ separating $`V`$ carries a closed characteristic. In particular, if $`g=0`$ we can also guarantee that $`S`$ carries a closed characteristic that is contractible in $`V`$.
In the case $`g=0`$ it is not difficult to prove that Proposition 2.5(5) and Proposition 2.6 in \[RT\] still hold for any closed symplectic manifold $`(V,\omega )`$, using the method of \[R\]. That is,
$`\mathrm{\Psi }_{0,0,k}([pt];\alpha _1,\ldots ,\alpha _k)=\alpha _1\cdot \ldots \cdot \alpha _k`$ (the intersection number);
for the product manifold $`(V,\omega )=(V_1\times V_2,\omega _1\oplus \omega _2)`$ of any two closed symplectic manifolds $`(V_1,\omega _1)`$ and $`(V_2,\omega _2)`$ it holds that
$$\mathrm{\Psi }_{A_1\oplus A_2,0,k}^V([pt];\alpha _1\otimes \beta _1,\ldots ,\alpha _k\otimes \beta _k)=\mathrm{\Psi }_{A_1,0,k}^{V_1}([pt];\alpha _1,\ldots ,\alpha _k)\mathrm{\Psi }_{A_2,0,k}^{V_2}([pt];\beta _1,\ldots ,\beta _k).$$
Thus if $`\mathrm{\Psi }_{A_2,0,m+1}^{V_2}([pt];[pt],\beta _1,\ldots ,\beta _m)\ne 0`$ we get
$$\mathrm{\Psi }_{A_1\oplus A_2,0,m+1}^V([pt];[pt],\alpha _1\otimes \beta _1,\ldots ,\alpha _m\otimes \beta _m)\ne 0$$
for $`A_1=0`$ and $`\alpha _1=\cdots =\alpha _m=[V_1]`$. This leads to
Corollary 3 The Weinstein conjecture holds in the product symplectic manifold of any closed symplectic manifold and a symplectic manifold satisfying the condition of Theorem 2 for $`g=0`$.
Recall that a smooth Kähler manifold $`(M,\omega )`$ is called uniruled if it can be covered by rational curves. Y. Miyaoka and S. Mori showed that a smooth complex projective manifold $`X`$ is uniruled if and only if there exists a non-empty open subset $`U\subset X`$ such that for every $`x\in U`$ there is an irreducible curve $`C`$ with $`(K_X\cdot C)<0`$ through $`x`$ (\[MiMo\]). In particular, any Fano manifold is uniruled (\[Ko\]). The complex projective spaces, the complete intersections in them, the Grassmann manifolds and, more generally, flag manifolds are important examples of Fano manifolds. In \[R, Prop. 4.9\] it was proved that if a smooth Kähler manifold $`M`$ is symplectic deformation equivalent to a uniruled manifold, then $`M`$ is uniruled. Actually, as mentioned there, Kollar showed that on a uniruled manifold $`(M,\omega )`$ there exists a class $`A\in H_2(M;\mathbb{Z})`$ such that
$$\mathrm{\Phi }_{A,0,3}([pt];[pt],\beta _1,\beta _2)\ne 0$$
$`(1)`$
for some classes $`\beta _1`$ and $`\beta _2`$ (see \[R\] for the more general case). Combining these with Corollary 3 we get
Corollary 4 Every hypersurface $`S`$ of contact type in a uniruled manifold $`V`$, or in the product of any closed symplectic manifold and a uniruled manifold, carries a closed characteristic that is contractible in $`V`$.
The idea of the proof is to combine Liu-Tian's Theorem 1 above, the properties of the Gromov-Witten invariants, and Viterbo's trick from \[V4\].
Proof of Theorem 2 Under the assumptions of Theorem 2, the reduction formula of the Gromov-Witten invariants(\[R, Prop. C\]) implies that
$$\mathrm{\Psi }_{A,g,m+3}(\pi ^{*}(C);[pt],PD([\omega ]),\beta _1,\ldots ,\beta _{m+1})=\omega (A)\mathrm{\Psi }_{A,g,m+2}(C;[pt],\beta _1,\ldots ,\beta _{m+1})\ne 0$$
$`(2)`$
since $`A`$ contains nontrivial pseudoholomorphic curves. To use Theorem 1 we need to show that there exists a homology class $`\gamma \in H_{2n-2}(V;\mathbb{R})`$ with support $`supp(\gamma )\subset int(V_+)`$ (or $`int(V_{-})`$) such that
$$\mathrm{\Psi }_{A,g,m+3}(\pi ^{*}(C);[pt],\gamma ,\beta _1,\ldots ,\beta _{m+1})\ne 0.$$
$`(3)`$
To this end we note that $`S`$ is a hypersurface of contact type, and thus there exists a Liouville vector field $`X`$ defined in a neighborhood $`U`$ of $`S`$ which is transverse to $`S`$. The flow of $`X`$ defines a diffeomorphism $`\mathrm{\Phi }`$ from $`S\times (-3\epsilon ,3\epsilon )`$ onto an open neighborhood of $`S`$ in $`U`$ for some $`\epsilon >0`$. Here we may assume $`\mathrm{\Phi }(S\times (-3\epsilon ,0])\subset V_+`$ and $`\mathrm{\Phi }(S\times [0,3\epsilon ))\subset V_{-}`$. For any $`0<\delta <3\epsilon `$ let us denote $`U_\delta :=\mathrm{\Phi }(S\times [-\delta ,\delta ])`$. We also denote $`\alpha =i_X\omega `$; then $`d\alpha =\omega `$ on $`U`$. Choose a smooth function $`f:V\to \mathbb{R}`$ such that $`f|_{U_\epsilon }\equiv 1`$ and $`f`$ vanishes outside $`U_{2\epsilon }`$. Define $`\beta :=f\alpha `$. This is a smooth $`1`$-form on $`V`$, and $`d\beta =\omega `$ on $`U_\epsilon `$. Denote $`\widehat{\omega }=\omega -d\beta `$. Then $`\widehat{\omega }|_{U_\epsilon }\equiv 0`$, and thus the cohomology class $`[\omega ]=[\widehat{\omega }]`$ lies in $`H^2(V,U_\epsilon )`$. Now from the naturality of Poincaré-Lefschetz duality (\[p. 296, Sp\]): $`H_{2n-2}(V\setminus U_\epsilon )\cong H^2(V,U_\epsilon )`$, it follows that we can choose a cycle representative $`\widehat{\gamma }`$ of $`\gamma :=PD([\omega ])`$ with support $`supp(\widehat{\gamma })\subset int(V\setminus U_\epsilon )`$. Notice that $`V\setminus U_\epsilon \subset V\setminus S=int(V_+)\cup int(V_{-})`$ and $`int(V_+)\cap int(V_{-})=\mathrm{\varnothing }`$. We denote by $`\widehat{\gamma }_+`$ and $`\widehat{\gamma }_{-}`$ the unions of the connected components of $`\widehat{\gamma }`$ lying in $`int(V_+)`$ and $`int(V_{-})`$ respectively. Then the homology classes determined by them in $`H_{*}(V;\mathbb{R})`$ satisfy $`[\widehat{\gamma }_+]+[\widehat{\gamma }_{-}]=\gamma `$. Thus at least one of $`[\widehat{\gamma }_+]`$ and $`[\widehat{\gamma }_{-}]`$ is nonzero. By the properties of the Gromov-Witten invariants we get
$`(4)\mathrm{\Psi }_{A,g,m+3}(\pi ^{*}(C);[pt],\gamma ,\beta _1,\ldots ,\beta _{m+1})`$ $`=`$ $`\mathrm{\Psi }_{A,g,m+3}(\pi ^{*}(C);[pt],[\widehat{\gamma }_+],\beta _1,\ldots ,\beta _{m+1})`$
$`+`$ $`\mathrm{\Psi }_{A,g,m+3}(\pi ^{*}(C);[pt],[\widehat{\gamma }_{-}],\beta _1,\ldots ,\beta _{m+1})\ne 0.`$
Hence the right side of (4) has at least one nonzero term. Without loss of generality we assume that
$$\mathrm{\Psi }_{A,g,m+3}(\pi ^{*}(C);[pt],[\widehat{\gamma }_+],\beta _1,\ldots ,\beta _{m+1})\ne 0.$$
Then Theorem 1 directly leads to the first conclusion.
The second claim is easily obtained by carefully checking the arguments in \[LiuT\]. $`\mathrm{\square }`$
Remark 5 Actually we believe that Theorem 1 still holds if the hypersurface $`S`$ of contact type therein is replaced by a stable hypersurface in the sense of \[HV2\]. Hence the hypersurfaces of contact type in our results above may be replaced by stable hypersurfaces for which the symplectic form is exact in some open neighborhood.
Remark 6 In \[B\] it was proved that the system of Gromov-Witten invariants of the product of two varieties is equal to the tensor product of the systems of Gromov-Witten invariants of the two factors. Using the methods developed in \[FO\]\[LiT\]\[R\]\[Si\] we believe that one can still prove this product formula for Gromov-Witten invariants of any genus for any product of two closed symplectic manifolds, and thus that Corollary 3 also holds for any genus $`g`$.
Acknowledgements I would like to express my hearty thanks to Professor Claude Viterbo for many valuable discussions and for sending me his recent preprint \[V4\]. I also wish to thank Professor Janos Kollar for telling me some properties of uniruled manifolds, and the referee for good suggestions that improved the original version.
no-problem/0001/astro-ph0001197.html | ar5iv | text | # The Beginning of the End of the Anthropic Principle
## 1 A brief outline of the anthropic principle
It is probably fair to say that the existence of humans is an indisputable fact (pace Sartre). Yet curiously an explanation of this fact was, for most of history, held to be unnecessary. Until the time of Copernicus, it was widely believed that humanity was the center of the Universe, and the Universe was made for it, ideas exemplified by many creation myths in a wide variety of cultures. At least in western civilization, such an established view was overturned when Copernicus demonstrated that the Earth was in orbit around the Sun, and thus put on an equal footing with the other visible planets. Although there was no fundamental explanation for his observations, it removed humanity from a central place in the universe. The Earth became simply one of the six then known planets. The reason behind Copernicus' observations was found by Newton. The laws of universal gravitation and mechanics allowed for an explanation of the gross structure of the Solar system and also permitted one of the first anthropic questions to be asked. Why is it that the Earth is in an orbit with a mean distance from the Sun, $`r_{\oplus }\approx 1.50\times 10^{13}`$ cm, and a low eccentricity, $`e\approx 0.016`$? The advantages for life on Earth are easy to see: these orbital parameters provide a stable temperate environment in which human life, as we know it, can exist comfortably. A relatively small change in $`r_{\oplus }`$ would lead to an Earth that is either too cold or too hot, and a change in $`e`$ to a situation in which there were violent swings in temperature between the seasons.
As more has been understood, it has been noticed that, apparently, a number of features of the universe have to be more or less "just so", or humans would not exist. Specific examples have been offered for over a century (as reviewed in Barrow and Tipler ), and include noting that the Sun has to be very stable, the Earth cannot be too small (else it could not hold an atmosphere) or too large (as gravity would effectively crush organisms made of molecules), the universe has to be large and dark and old for life to exist because at least two and probably three generations of stars are needed to make the heavy elements life depends on, etc.
The essential point made in anthropic arguments is that we should not be surprised that the Earth is where it is, because if it were in a different orbit, then we would not be here to ask the question. The question is, by virtue of its self-referential nature, nugatory. That does not mean the question is not worth asking. It may well be that in the future, our understanding of how the Solar system was formed would enable us to argue successfully that terrestrial planets are inevitable in stellar systems of our type. Equally well, they may be unusual phenomena.
Carter appears to be the first to formalize what is meant by the anthropic principle. He described three types of scientific reasoning. One is "traditional." Arguments based on our existence are regarded as extra-scientific. The laws of nature are used to make predictions in a deductive way. There is some degree of arbitrariness involved because it is not exactly clear what the laws of nature are, what the constants of nature are, and what choices of boundary conditions or quantum states are to be made. In contrast, reasoning based on the "weak" anthropic principle allows us to place restrictions on what we are going to consider to be realistic. Our existence as observers is privileged in both space and time by virtue of our own existence. The weak anthropic principle is interesting and unobjectionable. It adds to our insights, but does not preclude a fully scientific explanation of any feature of the universe, including the origin of the universe and the understanding of why the laws of physics are what they are, with such explanations not depending in any way on knowing whether humans exist. What this is saying really is that our existence in a recognizable form is intimately related to the conditions prevailing now in our part of the galaxy. "We are here because we are here," would be the sound-bite associated with this attitude. It has not much predictive power except that since the Solar system does not seem to be particularly unusual in any way then it would be reasonable to suppose that life, and indeed quite possibly some form of civilization, should also be common at the present epoch. (A most un-anthropic conclusion).
The historical starting point, within the context of modern physics, for anthropic reasoning is to explain the so-called large number coincidence, first formalized by Dirac . Three large dimensionless quantities, all taking values of order $`10^{40}`$, can be found in cosmology. The first is the dimensionless gravitational coupling referred to the proton mass $`m_p`$,
$$\frac{\mathrm{}c}{Gm_p^2}2\times 10^{38}$$
(1)
The second is the Hubble time, $`T`$, referred to the same scale,
$$\frac{Tm_pc^2}{\mathrm{}}6\times 10^{41}$$
(2)
The final quantity is a measure of the mass $`M`$ of the visible universe also referred to the same scale
$$\sqrt{\frac{M}{m_p}}5\times 10^{39}$$
(3)
At first sight, the similarity of these numbers can be regarded either as an incredible coincidence or as a deep fact. But in reality we should not be surprised. As was first explained by Dicke , in a big-bang cosmology these relations are perfectly natural. The age of the Universe must be in a certain range: not too young as described earlier, but also not so old that stars have largely exhausted their hydrogen fuel. Dicke showed that the above bounds on the age of the universe were equivalent to the coincidence of the numerical values (1) and (2). The equivalence of (2) and (3) can be interpreted as saying that we must live in a universe with a density close to the critical density. The natural explanation of the second equivalence is then the existence of an inflationary epoch.
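These order-of-magnitude relations are easy to check numerically. In the sketch below the physical constants are in SI units, while the age and mass of the visible universe are rough assumed values, good only to a factor of a few:

```python
# Order-of-magnitude check of the three large numbers, Eqs. (1)-(3).
hbar = 1.0546e-34    # J s
c    = 2.998e8       # m / s
G    = 6.674e-11     # m^3 kg^-1 s^-2
m_p  = 1.6726e-27    # kg
T    = 4.4e17        # s   (~ Hubble time, assumed)
M    = 4e52          # kg  (mass of the visible universe, assumed)

N1 = hbar * c / (G * m_p**2)    # Eq. (1): ~ 2e38
N2 = T * m_p * c**2 / hbar      # Eq. (2): ~ 6e41
N3 = (M / m_p) ** 0.5           # Eq. (3): ~ 5e39
print(f"N1 = {N1:.1e}, N2 = {N2:.1e}, N3 = {N3:.1e}")
```

All three come out within a factor of a few of $`10^{40}`$, which is the coincidence the text refers to.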
However, anthropic reasoning can be rather dangerous since it has a tendency to lead one to draw conclusions that can be theological in nature. When some phenomenon cannot be understood simply within a particular system, it is tempting to ascribe its origin as supernatural, whereas a deeper understanding of physics may allow a perfectly rational description. The history of physical science is littered with examples, from medieval times up to the present.
For some people anthropic reasoning even leads to a โstrong formโ of the anthropic principle, which argues that our existence places strong restrictions on the types of theories that can be considered to explain the universe and the laws of physics, as well as the fundamental constants of nature. Some would even like to argue that the universe in some sense had to give rise to humans, or even was designed to do that. Many physicists are antagonistic to any version of these stronger arguments. While it is not yet established that the origin of the universe or the origin of the laws of physics can be understood scientifically, attempts to answer those questions are finally in the past decade or so topics of scientific research rather than speculation. It could happen that such questions do not have scientific answers, but until the effort is made and fails we will not know.
One approach to partially unifying the laws of physics is so-called "Grand Unification". In this approach, the weak, electromagnetic, and strong forces are unified into one simple gauge group of which the Standard Model gauge group $`SU(3)\times SU(2)\times U(1)`$ is a subgroup. In that case it has recently been observed (Hogan , Kane ) that since the ratios of the coupling strengths are fixed, in making anthropic calculations one cannot independently change the strong and electromagnetic forces. (For example, if the strong force strength is increased a little, the diproton could be bound and cut off nuclear burning in stars. It should however be noted that even without unification effects this issue is more subtle than is usually stated ). Further, it would be incorrect to argue, as some have done, that various "just so" probabilities should be multiplied together, since the underlying physics effects are correlated.
Increasing the strength of the electromagnetic repulsion is required since its ratio to the strong force strength is fixed by the theory, and that would decrease the diproton binding, so the net effect would be small. Thus in Grand Unified Theories a number of anthropic effects disappear.
Recently, Hogan has argued that some basic microscopic physics should be determined anthropically. He suggests that the Grand Unified theories go about as far as one can hope or expect to go in relating and explaining the fundamental parameters of a basic theory, and emphasizes that the sensitivity of the properties of the universe to a few quantities is very strong. His argument highlights the fact that in Grand Unified theories there are a number of independent parameters (particularly quark and lepton masses) that need to be specified. He argues that it is important that at least some of these parameters cannot be determined by the theory, or else we cannot understand why the universe is "just so".
The context of the above suggestion is that we are living in one of many universes where these numbers are chosen at random. The reason we see them as being what they are is that if they were different, even very slightly in some cases, then we could not possibly exist. It could be that these many universes are real and emerge as baby universes in some meta-universe as yet unobserved, or simply as a statistical ensemble of distinct universes. In either case, one is faced with the real difficulty of accounting for why the parameters of the Grand Unified theory are what they are. It turns out that although at first sight there appear to be a myriad of what might be called anthropic coincidences, only four of the parameters appear to be particularly critical, . They are $`m_e,m_u,m_d`$ and $`g`$, the mass of the electron, up and down quarks respectively, and the Grand Unified coupling constant (which determines the strengths of the strong, electromagnetic, and weak forces). Hogan makes the interesting claim that if these parameters were determined by the theory, it would be very hard to understand why the universe is "just so."
Today, there is a more ambitious approach to understanding and unifying the laws of physics, loosely called "string theory." We would like to argue precisely the opposite point of view to the one presented by Hogan, based on what is known about string theory, or perhaps more precisely its non-perturbative progenitor, M-theory.
By string theory we mean the effective 10 dimensional theory that incorporates gravity and quantum theory and the particles and forces of the Standard Model of particle physics. Whether string theory really describes our world is not yet known. This is certainly the first time in history when we have a theory to study which could unify and explain all of nature, conceivably providing an inevitable primary theory. Testing string theory ideas may be difficult but is not in any sense excluded โ one does not need to be present at the big bang, nor does one need to do experiments at the Planck scale to test them.
Our goals in writing this paper are first to stress that in string theories all of the parameters of the theory, in particular all quark and lepton masses, and all coupling strengths, are calculable, so there are no parameters left to allow anthropic arguments of the normal kind, or to allow the kind of freedom that Hogan has argued for. Second, we want to discuss in what ways, if any, there is room to account for why the universe is "just so".
## 2 The String Theory picture of low energy physics
In non-gravitational physics, the role of spacetime is quite clear. It provides an arena, Minkowski spacetime, in which calculations can be carried out. In classical gravitational physics, spacetime continues to exist, but the backgrounds are in general more exotic, representing diverse situations such as black holes or cosmological models. It is easy to graft onto this edifice the content of Grand Unified theories. However, the philosophy of string unification is to unify all the forces, including gravitation. At some level, one can successfully omit gravitation because the natural scale associated with it is $`10^{19}GeV`$ whereas the other forces become unified at scales noticeably less, around $`10^{16}GeV`$. But, if we are to explore energies beyond the unification scale, because we want a general theory, then gravity will become more important and must be included in the overall picture. To do this, one requires a theory of quantum gravity. Treating the gravitational field like a gauge theory leads to an unrenormalizable theory. To include gravitation, one has to resort to a theory of extended objects: strings. One way to think about string theory is to describe the string as an extended object propagating in a fixed background spacetime. The metric, or gravitational field, is just one of the massless degrees of freedom of the string, and it is possible to extend this picture to include backgrounds that correspond to any of the massless degrees of freedom of the string. One ingredient of string theory is that it is described by a conformally invariant theory on the string world-sheet. This requirement imposes a strict consistency condition on the allowed backgrounds in which the string lives. The backgrounds must have ten spacetime dimensions, and obey the supergravity equations of motion. 
We therefore regard string theory as a consistent quantum theory of gravity in the sense that the theory of fluctuating strings (including excitations of the string that correspond to gravitons) is finite, provided that the background obeys the equations of the supergravity theory that corresponds to the particular string theory under discussion.
Next, string theory needs to make contact with the known structure of the universe. There are, apparently, four spacetime dimensions. The six remaining directions of spacetime in string theory need to be removed by a process usually termed compactification. This process usually ends in simple ($`N=1`$) supergravity coupled to various matter fields. As a consequence, a severe restriction applies as to how the compactification takes place. One assumes that spacetime takes the form of $`M^4\times K,`$ where $`M^4`$ is some four-dimensional Lorentzian manifold, and $`K`$ is a compact space with six real dimensions. In order to have unbroken simple ($`N=1`$) supersymmetry, $`K`$ must be a so-called "Calabi-Yau" manifold, and its spatial extent must be sufficiently small that it has no direct observational signature, which restricts it to scales of around $`10^{-30}`$ cm, or roughly the Planck scale.
This in fact is an example of the weak anthropic principle at work. There seems to be no reason why one should compactify down to four dimensions. From the string theory perspective, there is nothing wrong with having a spacetime of any number of dimensions less than or equal to ten. However, one could certainly not have intelligent life in either one or two spatial dimensions. It is impossible to have a complicated interconnected set of nerve cells unless the number of space dimensions is at least three, since otherwise one is forced to have only nearest-neighbor connections. In spatial dimensions greater than three, even if one could have stars, one could not have stable Newtonian planetary orbits or stable atoms, and it is presumably impossible for a suitable environment for life to exist. More generally, the compactified dimensions could vary in size.
The Calabi-Yau space is determined by two distinct types of property. Firstly we must specify its topology and then specify the metric on it. The topology determines at least two important properties of the low-energy theory, the number of generations and the Yukawa couplings. The metric can just be one of a family of metrics on the given manifold. The fact that there appears to be some arbitrariness here is reflected in the presence of massless scalars, or moduli fields, in the low-energy theory.
If one were just doing field theory, then the above considerations would really be all that there is to it. However, the geometry of string theory is rather more interesting. In Riemannian geometry, one cannot continuously and smoothly deform a metric so that it interpolates between metrics on two topologically distinct manifolds. If one tries to do this, one ends up finding that there is some kind of singularity in the metric that causes the notion of a manifold to break down. In the background field approximation to string theory, this will also be true. However, we also know that there is more to string theory than that. It appears that one should replace the idea of a classical background geometry with what is usually called quantum geometry. Quantum geometry corresponds to those consistent conformal field theories that define the physics of the string itself. These conformal field theories are intrinsic to the string, and have an existence without any concept of spacetime. Thus, spacetime would be a derived property, rather than being fundamental. This is closer to the true philosophy of a fundamental theory, since we should not be trying to draw a distinction between string physics and the physics of spacetime. A proper discussion of quantum geometry includes an understanding of non-perturbative effects in string theory. However, there are two things that we already know for sure that support the viewpoint of spacetime being a derived property. The first is the phenomenon of mirror symmetry. It is known that Calabi-Yau manifolds come in pairs, related by the so-called mirror map, in which the Kähler and complex structures are interchanged. There is no obvious connection between the metrics on pairs of mirror manifolds. However, the conformal field theory associated with the string is in both cases identical. This indicates that the spacetime description is a derived one, rather than being fundamental.
A second property is that when non-perturbative phenomena are included, there is no problem from the string theory point of view in effecting continuous transitions between Calabi-Yau spaces of different topology. This shows that stringy ideas about geometry are really more general than those found in classical Riemannian geometry. The moduli space of Calabi-Yau manifolds should thus be regarded as a continuously connected whole, rather than a series of different spaces individually associated with different topological objects. Thus, questions about the topology of Calabi-Yau spaces must be treated on the same footing as questions about the metric on the spaces. That is, the issue of topology is another aspect of the moduli fields. These considerations are relevant to understanding the ground state of the universe.
However we arrive at the final picture of what happens at low energies in four dimensions, the end point does not contain any massless scalar fields. That in turn means that when the final $`N=1`$ supersymmetry is broken, all of the fields must be associated with effective potentials. The only exceptions are the genuinely massless fields associated with unbroken gauge symmetries: the photon for the electromagnetic $`U(1)`$, the gluons for the $`SU(3)`$ of color, and the graviton for diffeomorphism invariance. One could very plausibly conclude that all of the low energy physics must be determined as a result of this type of compactification plus supersymmetry breaking process.
Thus we have relegated most of the traditionally anthropic quantities to physics that we know, at least in principle, even if as yet the calculations are too technically complex to carry out. For example, in the scheme sketched here it now seems perfectly possible to compute from first principles quantities like $`m_d`$ and $`m_u`$ (see below). In fact, it would seem that all of low-energy physics is computable. This leads us to ask about the small number of possible remaining anthropic quantities. For example, what about the number of non-compact dimensions of spacetime? Whilst in string theory there seems to be no obvious reason why four should be singled out, in M-theory there is. One's usual attitude to string theory is that it can be derived from M-theory, whose low-energy limit is $`d=11`$ supergravity, by compactifying on a circle, and then saying that the resultant ten-dimensional spacetime is the arena for string theory. However, in the $`d=11`$ supergravity theory there is a four-form field strength that could pick out four dimensions as being different from the remaining seven if it has a vacuum expectation value. Thus, a four-seven split seems quite natural. In cosmological models, the observed universe is described by the Friedmann-Robertson-Walker models, and they have the property of being conformally flat. This means that they can be described most simply as a four-dimensional spacetime of constant curvature, together with a time-dependent scale factor. Whilst it is not presently understood how one might realize these ideas in practice, it is beginning to seem plausible that even something like the dimensionality of the large directions of spacetime might eventually be understood in M-theory without recourse to any anthropic arguments at all. Another observed fact is that there seems to be a small cosmological constant with a magnitude similar to the energy density of observed matter in the Universe.
Most string theorists think it is likely, or at least possible, that a better understanding of string theory will lead to an understanding of the small size of the cosmological constant, and possibly also its actual value. It is not inconceivable that a solution to the dimension problem would come together with a solution to the cosmological constant problem in M-theory.
## 3 The role of parameters in string theory
As discussed in the previous section, before string theory can be applied to our actual world several problems have to be solved. They include compactification to four dimensions, breaking the full supersymmetry of the theory (which is at best a hidden symmetry in our world), and finding the correct ground state (vacuum) of the theory. These problems are logically independent, though it could happen that one insight solves all of them. We assume here that ongoing research will find solutions to these problems, and consider the implications for our view of the world and for anthropic ideas.
Assuming (as described above) that the theory is successfully formulated as a 4D effective theory near the Planck scale, we describe qualitatively how force strengths and masses are viewed in string theory.
In string theory one can think of the theory as having only a gravitational force in 10D. The other forces arise in a way analogous to what happens in the old Kaluza-Klein theory, where one has a 5D world with only a 5D gravitational force, which splits into a 4D gravitational force plus electromagnetism when one dimension is compactified. Thus in string theory all of the force strength ratios are fixed by the structure of the theory. The coupling strengths, including that of gravity, are viewed as vacuum expectation values of scalar fields, so the overall force strengths are calculable too (though how to evaluate those vacuum expectation values is not yet known). If the string approach is correct, there is no room for any coupling strength to vary anthropically.
The situation is similar for masses, though more subtle. Physical masses are written as Yukawa couplings (determined by the compactification) times the electroweak Higgs field vacuum expectation value. The quarks and leptons are massless at high temperatures, until the universe cools through the electroweak phase transition (at about 100 GeV), at which point the quarks and leptons acquire mass. At higher scales one speaks of the Yukawa couplings. The string theory determines a function called the superpotential, and the coefficients of terms in the superpotential are the Yukawa couplings. In general the theory determines the Yukawa couplings and thus the masses. This comes about directly from the topology of the Calabi-Yau space itself: given its topological nature, the Yukawa couplings are determined. Thus, there is really not much freedom left at this level.
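The statement that physical masses are Yukawa couplings times the Higgs vacuum expectation value can be made concrete with a line of arithmetic. The following sketch is not part of the original argument; it assumes the standard tree-level relation m = y v / sqrt(2) with v = 246 GeV, and the quoted Yukawa values are illustrative:

```python
import math

HIGGS_VEV_GEV = 246.0  # electroweak Higgs vacuum expectation value (standard value)

def fermion_mass_gev(yukawa):
    """Tree-level fermion mass m = y * v / sqrt(2), in GeV."""
    return yukawa * HIGGS_VEV_GEV / math.sqrt(2.0)

# A Yukawa coupling of order one gives a top-quark-scale mass,
# while a coupling of order 3e-6 gives an electron-scale mass.
print(fermion_mass_gev(1.0))     # ~174 GeV
print(fermion_mass_gev(2.9e-6))  # ~5e-4 GeV (about 0.5 MeV)
```

In this language, "calculating the masses" amounts to computing the Yukawa couplings from the compactification; the electroweak factor is the same for every fermion.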
However, a subtlety may arise. At the most basic level of the theory it could happen that some Yukawa couplings are of the same order as the gauge couplings, giving the masses of the heavier particles such as the top quark, while the Yukawa couplings corresponding to lighter particles are zero. The lighter masses only arise when the full symmetry group of the theory is broken, perhaps when supersymmetry is broken or when the compactification occurs. Then calculating the lighter masses is technically more challenging. Nevertheless, it is expected that in principle, and eventually in practice, all of the masses are calculable, including the up and down quark masses, and the electron mass. There is no room for anthropic variation of the masses in string theory.
## 4 What is left that could be anthropic?
String theory can be any one of the five consistent perturbative superstring theories that contain gravitation. These are the two types of closed superstring, the two heterotic strings, and the open superstring. Each of them is characterized by a single dimensionful parameter, usually called $`\alpha ^{\prime }`$, the inverse string tension. This sets the scale for all observations and is as a consequence not a measurable parameter; it simply sets the scale for units. Each string theory has a second dimensionless parameter, the string coupling constant, that determines the strength with which strings interact. This constant is freely specifiable, and thus manifests itself as a massless scalar field in the theory, the dilaton. However, like all other massless scalar fields, it must acquire a mass through some quantum effects. The fact that such a massless scalar field has not been observed argues in support of this conclusion. Thus the string coupling constant must be determined intrinsically by the theory; in any given vacuum it is calculable. So all that is left is $`\alpha ^{\prime }`$, which is unobservable anyway. There still appears to be a choice of which string theory one should pick. However, the discovery of M-theory shows that in fact all string theories are equivalent, and so no choice needs to be made.
There are two issues associated today with the cosmological constant. The first is to explain why the actual cosmological constant is tiny compared to the amounts of vacuum energy generated by the electroweak vacuum or the QCD vacuum or other sources of vacuum energy. The second issue is why there is apparently a residual non-zero small vacuum energy which at the present epoch is of the same order as the matter contribution to the total energy density. The two issues are logically independent, and could be physically independent. We will not try to deal with the second issue, even though it could be an anthropic question because of the apparent coincidence that the cosmological energy density is of the same order as the combined forms of matter at the present time, even though the two forms depend differently on time. The first issue has been discussed as an anthropic one. However, the string theory point of view is outlined in the previous section. Another possibility is that there might be very light scalars left over in the theory. Standard wisdom has it that such objects would by now have been detected, but there is always the possibility that they couple so weakly to gravitation that they would not have been seen. What is interesting about the above scenario is that the potential for such a scalar field would look very much like an ordinary cosmological constant, except that the value of the scalar itself could vary both in time and space on cosmological distances or timescales. Under these circumstances, it is thus a possibility that the effective cosmological constant here and now is anthropically determined.
Another possibility at our present state of knowledge for understanding how not to be uncomfortable with a universe that seems to be "just so" arises from the observation that universes could be arising with different initial conditions and early histories, leading to different parameters. There are several approaches today to how universes might begin, and of course additional approaches may be required. Some or none of them could be correct. If many universes arise, various initial conditions could lead to various resulting sets of parameters. It is then like a random lottery in that someone wins, and even though they may feel singled out, from a broader viewpoint there is nothing special about whomever won.
Finally, the vacuum structure of string theory is expected to be very complicated. A 10D world is a consistent one as far as is known, and perhaps so are many compactified ones. There may be many stable solutions (local minima), each with different dimensions, topological characteristics, and parameters. Once a universe falls into one of those minima, the probability of tunneling to a deeper minimum may be extremely small, so that the lifetime in many minima is large compared to the lifetime needed for life to arise in those minima. For a discussion of such phase transitions see Adams and Laughlin. Then life would actually arise in those minima that were approximately "just so". Thus the "just so" issue is resolved by having a large number of possible vacua in which universes can end up. Eventually, understanding of M-theory may reach the stage where it is possible to calculate in practice all the possible vacua. In each vacuum, all of the quantities needed for a complete description of the universe, including the masses and couplings that Hogan argues need to be "just so", are calculable.
## 5 Concluding Remarks
We have argued that the usual anthropic arguments cannot be relevant to understanding our world if string theory is the right approach to understanding the law(s) of nature and the origins of the universe. Our arguments are predicated on this hypothesis. If the type of unification found in string theory is not an appropriate description of nature, then we are back to the beginning in trying to understand why the universe has been kind enough to us to allow us to live here. If any parameters such as force strengths or quark masses or the electron mass must be somehow adjustable, and not fixed by the theory, in order to understand why the universe is "just so", then string theory cannot be correct. We discuss various ways consistent with string theory in which different universes with different parameters could arise, so that the apparent "just so" nature of a number of parameters can be understood.
## 6 Acknowledgements
We would like to thank the Institute for Theoretical Physics in Santa Barbara, where this work was initiated, for its hospitality and support in part by National Science Foundation grant PHY94-07194, and similarly the California Institute of Technology, where it was completed. GLK was supported in part by the US Department of Energy, and MJP was partly supported by Trinity College Cambridge. We appreciate various discussions with Fred Adams, Roger Blandford, Mike Duff, Craig Hogan and Steven Weinberg.
# Collapsars
## Introduction
Massive stars ($`M_{ms}\text{ }>\text{ }`$ 25 M) may not always launch successful neutrino-driven explosions Fry99 . We describe here the continued evolution of such stars after their cores collapse to black holes and accrete the surrounding stellar mantle. For the most massive stars (M<sub>ms</sub> $`\text{ }>\text{ }`$ 35 M) with sufficient angular momentum, a collapsar (a rapidly accreting, $`\dot{M}`$ ~0.1 M s<sup>-1</sup>, stellar-mass black hole) forms promptly at the center of a collapsing star MW99 . We refer to these as Type I collapsars. Less rapidly accreting black holes can also form over longer time periods due to the fallback of stellar material which failed to escape during the initial supernova explosions (Type II collapsars). This probably happens for main sequence masses $`M_{ms}\text{ }>\text{ }`$ 20 M. Stars with masses below this explode as normal supernovae and leave behind neutron star remnants. Collapsars power jetted explosions by tapping a fraction of the binding energy released by the accreting star through magnetohydrodynamical processes or neutrino annihilation, or possibly by extracting some of the black hole spin energy. The vast majority of stellar explosions do not make collapsars, only those which make black holes and have sufficient angular momentum. Further, not all collapsars make GRBs: only those which occur in sufficiently small (in radius) stars and manage to accelerate a fraction of the explosion energy to a sufficiently high Lorentz factor. Other collapsar explosions may be responsible for hyper-energetic and asymmetric supernovae like SN1998bw.
Table 1 shows a range of observable phenomena possible from collapsars in various kinds of stars. Prompt and delayed black hole formation can occur in massive stars with a range of radii, depending on the evolutionary state of the star, its metallicity and its membership in a binary system. The key difference between the scenarios is the ratio of the time the engine operates, $`t_{\mathrm{engine}}`$, to the time the explosion takes to break out of the surface of the star, $`t_{bo}`$. The breakout time is $`t_{bo}=R_{star}/v_{jet}`$, where $`v_{jet}`$ is the propagation velocity of the explosion shock (the jet head) through the star. Typical velocities are 50,000 km s<sup>-1</sup> Aloy99 . If the engine operates for a sufficiently long time to continuously power the jet at its base after the jet head breaks out of the surface of the star, then highly relativistic outflow can be achieved for a fraction of the ejecta and a classical GRB can result. The column of stellar material pushed ahead of the jet, perhaps a few times 0.01 M, escapes the star and expands sideways, leaving a decreasing amount of material ahead of the jet.
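The ratio of engine time to breakout time that organizes Table 1 is easy to evaluate with the formula above. A minimal sketch: the jet-head speed is the 50,000 km/s value quoted in the text, while the stellar radii are assumed illustrative values, not taken from the paper:

```python
V_JET_CM_S = 5.0e9   # 50,000 km/s, the typical jet-head speed quoted above
R_SUN_CM = 6.96e10   # solar radius in cm

def breakout_time_s(r_star_rsun):
    """Breakout time t_bo = R_star / v_jet, in seconds."""
    return r_star_rsun * R_SUN_CM / V_JET_CM_S

# Illustrative (assumed) radii: a bare helium core versus a much larger envelope.
print(breakout_time_s(1.0))   # ~14 s for a ~1 R_sun helium core
print(breakout_time_s(30.0))  # ~420 s: a much longer-lived engine is needed in a larger star
```

The comparison with the engine time then decides between a classical GRB (engine outlives breakout) and a choked, supernova-like outcome.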
The engine time, $`t_{engine}`$, is the time the star is able to feed the black hole at a sufficient rate and depends on the stellar mass and angular momentum distribution at collapse. Viscous entropy generation in the accretion disk and centrifugal bounce can eject significant fractions of the accreting mass flux in a wind MW99 . This can affect the accretion rate onto the black hole and can shorten the accretion time by expelling the outer layers of the star and choking accretion onto the disk. The wind can also be important for ejecting radioactive nickel into the explosion. Until the star starts exploding, the engine time is given by the collapse time of the star onto the disk. It is not given by the initial disk mass divided by the accretion rate, since the disk is simply an intermediate repository of mass that is coming from the collapsing stellar envelope. Helium cores can accrete for tens of seconds to minutes, while the accretion time is longer for any size star if the star initially explodes and then partially reimplodes (Type II collapsar) MWH99 . Not all of this time is available for producing an explosion: there is an initial collapse phase lasting several seconds while the disk forms, the deposited energy can be advected into the hole for several seconds until the density is sufficiently low for energy input to reverse the infall and drive an explosion, and the explosion takes several seconds to break out of the stellar surface. The star can also explode after a sound crossing time, due to the lateral expansion of the jet shock (especially for "hot" jets) or due to a disk wind.
A key characteristic of the model is the degree of spreading of the jet as it passes through the stellar envelope. GRBs may have an approximately common total energy, $`E`$ ~ $`10^{52}`$ erg, yet produce a variety of stellar explosions depending on the degree to which the explosion is focussed into a jet. Two characteristics of the explosion determine the beaming of the jet: its entropy and its duration. In particular, explosions like SN1998bw can be explained by the jet being "hot" or "brief".
## "Hot" jets
"Hot" jets have large internal pressure compared to their ram pressure and the ambient stellar pressure. They expand laterally as they push through the star and share more of their energy with the stellar envelope. While it may be possible for a hot jet to escape the star and make a GRB, more of the star will participate in the explosion and the jet will take longer to penetrate the star. "Cold" jets, on the other hand, are capable of penetrating the star with relatively little sharing of energy with the stellar envelope. In some cases the star actually compresses the jet and helps to focus it MWH99 ; Aloy99 . In this case the supernova explosion is relatively weak and a large fraction of the energy goes into the narrow jet beam. These types of explosions can have long accretion episodes and leave large black hole remnants ($`M_{\mathrm{bh}}>5M_{}`$) since the lateral expansion of the jet is inefficient at exploding the star and choking the accretion feeding the hole.
## "Brief" jets
If the engine is only on for a short time ($`t_{engine}<t_{bo}`$), the power is lost before the jet head reaches the surface of the star, even for a small star like the He and C/O cores thought to be responsible for SN1998bw Iwa98 ; woo99 . In this case the jet expands quasi-isotropically into the star after it ceases to be energized at its base (Fig. 1, right panel). The resulting explosion is asymmetric since the explosion is initiated in the polar region. It could be distinguished from a conventional Type Ib/c supernova by high expansion velocities and large energy. SN1998bw may be an example of this, with the weak GRB coming from a small amount of moderately relativistic material Iwa98 ; woo99 interacting with the CSM.
## Relativistic Outflow from Massive Stars
Relativistic outflow can be achieved from an accreting black hole surrounded by a collapsing stellar mantle. The key is the prolonged deposition of energy into regions of the star which can expand due to their overpressure. A single impulsive release of energy, $`E_{dep}(t)=E_0\delta (t-t_0)`$, will explode a star if the energy input exceeds the binding energy. But the explosion will be "baryon loaded", a "dirty fireball", i.e. a supernova. The maximum Lorentz factor will be $`\mathrm{\Gamma }\text{ }<\text{ }\frac{E_{dep}}{m_bc^2}`$, where $`m_b`$ is the mass of baryons in the region where the energy is deposited. The Lorentz factor will be less than the asymptotic limit because the expanding fireball will do work in accelerating the surrounding material. For an impulsive energy deposition, a clean environment with $`m_b<\frac{E_{dep}}{\mathrm{\Gamma }c^2}`$ and $`\mathrm{\Gamma }\text{ }>\text{ }100`$ is necessary to make a classical GRB, or else the deposited energy will be shared with too many baryons (the "baryon pollution" problem).
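The baryon-loading bound quoted above can be checked with a few lines of arithmetic. A minimal sketch; the deposited energy and baryon masses are assumed illustrative values, not from the text:

```python
C_CM_S = 2.998e10      # speed of light in cm/s
M_SUN_G = 1.989e33     # solar mass in grams

def max_lorentz_factor(e_dep_erg, m_baryon_msun):
    """Upper bound Gamma <~ E_dep / (m_b c^2) for an impulsive energy release."""
    return e_dep_erg / (m_baryon_msun * M_SUN_G * C_CM_S**2)

def max_baryon_mass_msun(e_dep_erg, gamma):
    """Largest baryon mass (in M_sun) compatible with a given Lorentz factor."""
    return e_dep_erg / (gamma * C_CM_S**2 * M_SUN_G)

print(max_lorentz_factor(1e52, 1e-5))   # ~560: 1e-5 M_sun of baryons still allows Gamma > 100
print(max_baryon_mass_msun(1e52, 100))  # ~5.6e-5 M_sun: the "clean environment" requirement
```

The second number makes the pollution problem vivid: for a 10^52 erg release, even a ten-thousandth of a solar mass of entrained baryons is enough to prevent a classical GRB.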
The situation is different if the energy is deposited over a period long compared to the expansion time of the energy loading region. In this case, energy is injected into a region already expanding due to the previously deposited energy and the corresponding overpressure. Initially, $`m_b`$ ~ $`E/(\mathrm{\Gamma }c^2)`$ and the expansion is sub-relativistic, but as the gas expands the baryon density in the deposition region decreases and the energy per baryon increases (assuming a constant energy deposition rate per unit volume). The expanding gas must do work in accelerating its surroundings, so the deposited energy is shared with many baryons and extremely relativistic motion is initially impossible. Baryons can be mixed into the deposition region, but centrifugal force and pressure gradients directed away from the pole can inhibit this (Fig. 1, left panel). The amount of baryons which mix into the deposition region is important for determining the ultimate Lorentz factor that can be achieved. Current two-dimensional relativistic calculations for one particular model achieve $`\mathrm{\Gamma }`$ ~ 40 near the deposition region Aloy99 , and it is expected that higher $`\mathrm{\Gamma }`$ will result when the calculation is run longer or with greater and/or variable energy deposition. Detailed calculations are possible and will be performed soon. At late times, after the baryons initially present in the deposition region have expanded away, the Lorentz factor depends on the energy flux and mass flux into the deposition region as $`\mathrm{\Gamma }`$ ~ $`\dot{E}/(\dot{m}c^2)`$. The mass flux depends on the rate that hydrodynamical instabilities mix baryons into the deposition region and the rate at which the engine injects baryons.
In the collapsar model for GRBs, or any other similar model involving a massive star, the key to obtaining relativistic motion is the escape of an energy loaded bubble from its surroundings (the stellar mantle). In the case of the toroidal density distribution as in the collapsar MW99 , a low density channel is left behind by regions of the star along the rotational axis which lacked centrifugal support and fell into the black hole. Recent simulations MW99 ; Aloy99 ; Mul00 have shown that energy deposition leads to expansion of gas along the pole, a jet. The key to achieving high $`\mathrm{\Gamma }`$ is for energy deposition to continue at the base of the jet even after the jet head has broken out of the surface of the star and begun free expansion into the low density circumstellar environment. Subsequent energy deposition at the base of the jet continues to load energy into an increasingly baryon-free region with the expanding gas continuously channelled along the rotation axis of the star. |
# FOUR COMETARY BELTS ASSOCIATED WITH THE ORBITS OF GIANT PLANETS: A New View of the Outer Solar System's Structure Emerges From Numerical Simulations
## 1. Introduction
The outer Solar system beyond the four giant planets includes the Kuiper belt and the Oort cloud, which contain raw material left over from the formation of the system. The Kuiper belt objects are thought to be responsible for progressive replenishment of the observable cometary populations, and gravitational scattering of these objects on the four giant planets could provide their transport from the trans-Neptunian region all the way inward, down to Jupiter (Fernández & Ip, 1983; Torbett 1989; Levison & Duncan, 1997; for a review, see Malhotra, Duncan & Levison, 1999 and references therein). The present paper, which continues and develops an approach started by Ozernoy, Gorkavyi, & Taidakova 2000 (hereafter OGT), places the emphasis on the structure of cometary populations between Neptune and Jupiter, both in phase space, i.e. in the space of orbital elements {$`a,e,i`$}, and in real space. We argue that there are spatial accumulations of comets near the orbits of all four giant planets, which we name cometary belts. These populations have a dynamical nature, because the comets belonging to a given planet's belt are either in a resonance with the host or are gravitationally scattered predominantly on this planet.
Our approach (which has a number of common elements with the "particle-in-cell" computational method) is, in brief, as follows. Let us consider, for simplicity, a stationary particle distribution in the frame co-rotating with the planet (Neptune). The locus of the given particle's positions (taken, say, as $`6\times 10^3`$ positions every $`10^6`$ yrs, i.e. every $`6\times 10^3`$ Neptune revolutions about the Sun) is recorded, and these positions are considered as the positions of many other particles of the same origin but at a different time. After this particle "dies" (as a result of infall on a planet/the Sun or ejection from the system by a planet-perturber), its recorded positions sampled over its lifetime form a stationary distribution as if it were produced by many particles. Typically, each run includes $`0.7\times 10^6`$ Neptune revolutions ($`10^8`$ yrs), giving $`0.7\times 10^6`$ positions of a particle, which is equivalent, for a stationary distribution, to the same number of particles.
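The sampling idea (one particle's positions, recorded at fixed intervals over its lifetime, standing in for a snapshot of many particles) can be illustrated with a toy example. The sketch below is not the authors' integrator: it assumes a single fixed Kepler ellipse and simply shows that time-uniform sampling of one orbit reproduces the stationary, time-averaged radial distribution:

```python
import math
import random

def sample_radii(a=1.0, e=0.3, n_samples=60_000, seed=1):
    """Radii of one particle on a fixed Kepler ellipse, sampled uniformly
    in time via the mean anomaly: M = E - e sin E, r = a (1 - e cos E)."""
    rng = random.Random(seed)
    radii = []
    for _ in range(n_samples):
        M = rng.uniform(0.0, 2.0 * math.pi)  # mean anomaly: uniform in time
        E = M
        for _ in range(50):                   # fixed-point Kepler solve (converges for e < 1)
            E = M + e * math.sin(E)
        radii.append(a * (1.0 - e * math.cos(E)))
    return radii

radii = sample_radii()
# Time-averaging weights the slow apocenter part of the orbit, so the mean
# sampled radius exceeds the semi-major axis: analytically <r> = a (1 + e^2 / 2).
print(sum(radii) / len(radii))  # ~1.045 for a=1, e=0.3
```

In the same spirit, binning the recorded positions of each scattered comet builds up the stationary belt density without ever storing more than one particle at a time.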
We integrate the dynamical equations for the motion of a massless particle in the gravitational field of the Sun and the four giant planets, written in the rotating $`\{x,y,z\}`$-coordinate system with the $`x`$-axis directed along the radius and the $`y`$-axis in the direction of the orbital motion of Neptune, while the origin of coordinates is placed at the center of the Sun (thus the secular resonances are not considered here). We use an implicit second-order integrator (Taidakova 1997), appropriately adapted to achieve our goals as described in detail in OGT (2000); as shown there, the integrator for a dissipationless system provides the necessary accuracy of computations on the time scale of $`0.5\times 10^9`$ years. A big advantage of this integrator is its stability: an error in the energy (the Tisserand parameter) does not grow as the number of time steps increases if the value of the step remains the same. The latter situation is exemplified by a resonant particle: it does not approach too close to the planet, so that the same time step can be taken. In contrast to resonant particles, non-resonant ones, in due course of their gravitational scatterings, approach one or another planet from time to time, and therefore one has to change the time step near the planet. Obviously, whenever the time step diminishes near the planet, an error in the Tisserand parameter slowly grows together with an increased number of the smaller time steps. Nevertheless, in our simulations a fractional error in the Tisserand parameter typically does not exceed 0.001 during $`3\times 10^6`$ Neptune revolutions, which amounts to 0.5 Gyr (OGT 2000). To increase accuracy of computations, we use in the present paper a second iteration.
While the 1st iteration computes the gravitational field between points $`A`$ and $`B`$ using an approximate formula based on the particle parameters at point $`A`$ (because those at point $`B`$ are still unknown), the 2nd iteration enables us to compute the gravitational field between $`A`$ and $`B`$ using the middle position between them, because the position of $`B`$ is already given by the 1st iteration.
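The difference between the two passes can be illustrated on a toy problem (a Python sketch of the predictor-corrector idea only, not Taidakova's actual implicit integrator; the Sun-only field, the code units, and the routine names are our assumptions):

```python
import numpy as np

GM = 1.0  # gravitational parameter of the Sun (code units)

def accel(r):
    """Point-mass gravity at position r."""
    return -GM * r / np.linalg.norm(r) ** 3

def step(r, v, dt, second_iteration=True):
    """One step: pass 1 uses the field at the starting point A only;
    pass 2 re-evaluates it at the middle of A and the predicted B."""
    a1 = accel(r)                              # all that is known at A
    r_pred = r + v * dt + 0.5 * a1 * dt ** 2   # 1st-iteration estimate of B
    a_use = accel(0.5 * (r + r_pred)) if second_iteration else a1
    return r + v * dt + 0.5 * a_use * dt ** 2, v + a_use * dt

def energy(r, v):
    return 0.5 * np.dot(v, v) - GM / np.linalg.norm(r)

def energy_error(n, dt, second_iteration):
    """Relative energy error after n steps on a circular orbit."""
    r, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    e0 = energy(r, v)
    for _ in range(n):
        r, v = step(r, v, dt, second_iteration)
    return abs((energy(r, v) - e0) / e0)
```

On a circular test orbit the second pass reduces the energy drift by orders of magnitude for the same step, which is the point of the refinement.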
We commence this paper by examining the characteristics of cometary belts in the one-planet approximation, i.e. in the 3-body problem: the Sun, a planet, and a comet (Sec. 2). In Sec. 3, we consider how these cometary belts are modified when the influence of all four giant planets is taken into account. Sec. 4 contains the discussion and conclusions.
## 2. One-planet approximation
We consider as an example a giant planet of the Saturnian mass placed on a circular orbit of radius $`R_{pl}`$ with zero inclination. It is assumed that there is an outer source of comets, which injects them into the planetโs strong scattering zone. The boundaries of the latter are close to the region where the heliocentric orbits of comets cross the planetโs orbit; this region, which we hereinafter call the crossing zone, is defined as $`a(1-e)\le R_{pl}\le a(1+e)`$. An outer source of comets could be the Kuiper belt, the scattering zone of an outer planet, or (at a much earlier stage) the disk of planetesimals.
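For orientation, the crossing-zone condition can be written as a one-line test (an illustrative Python sketch; the function names are ours):

```python
def in_crossing_zone(a, e, r_pl):
    """True if an orbit with semimajor axis a and eccentricity e crosses
    the circular planetary orbit of radius r_pl: a(1-e) <= r_pl <= a(1+e)."""
    return a * (1.0 - e) <= r_pl <= a * (1.0 + e)

def boundary_eccentricity(a, r_pl):
    """Minimum eccentricity at which an orbit of semimajor axis a reaches
    the planet's orbit (the heavy boundary curves of Fig. 1a)."""
    return abs(1.0 - r_pl / a)
```

For example, an orbit with $`a=1.1R_{pl}`$ and $`e=0.2`$ crosses the planetary orbit, while $`a=2R_{pl}`$ with $`e=0.1`$ does not.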
The dynamical evolution of comets typically includes multiple gravitational scatterings on the planet, with eventual ejection from the system at a hyperbolic velocity. (If neighboring planets are included, some comets may enter the scattering zones of those planets.) On rare occasions, comets impact the planet.
We have computed the dynamical evolution of test comets by making a record of their orbital parameters each revolution of the planet. Assuming that the inflow of comets into the planetโs scattering zone does not change in time, we can interpret the sample of orbital parameters of test bodies as representing a stationary distribution of a large number of comets.
We simulated 26 distributions of cometary orbits totalling $`0.37\times 10^6`$ positions in the form of $`(a,e,i)`$-points. The initial conditions for integration of cometary orbits were taken out of resonances: $`a_0=1.1a_{planet}=10.5`$ AU (for $`e_0`$ and $`i_0`$, see Table 1, where details of this and other computational runs are given as well).
Earlier (OGT) we computed the stationary distribution of test comets in the space of the orbital coordinates $`(a,e)`$ and $`(a,i)`$. As distinct from OGT (1999), where each comet was represented as a point on the phase plane \[$`(a,e)`$ or $`(a,i)`$\], in the present paper we wish to compute the distribution of the "surface density" (more accurately, the 2D-density) of comets on the phase plane. To this end, we make a record of the coordinates of the test comets every 10 revolutions of the planet. We then sort the computed $`0.37\times 10^6`$ cometary coordinates into two 2D data files: a $`500\times 100`$ array in the $`(a,e)`$-plane ($`a<2.5R_{pl}`$) and a $`500\times 180`$ array in the $`(a,i)`$-space. The following bins are used: $`\mathrm{\Delta }a=0.005R_{pl},\mathrm{\Delta }e=0.01,\mathrm{\Delta }i=0.5^{\circ }`$. Figs. 1a,b show, in the $`(a,e)`$- and $`(a,i)`$-spaces, the surface density of comets gravitationally scattered on a planet of the Saturnian mass. The following features are worth mentioning:
(i) in the $`(a,e)`$-space, the comets are stretched along the boundaries of the planetโs strong scattering zone;
(ii) resonant gaps at the resonances 2:3, 1:1, 3:2, 2:1, 3:1, etc., are well pronounced;
(iii) outside the scattering zone, the dynamical evolution of test bodies occurs slowly (in a diffusive way), resulting in clusterings that could be named diffusive accumulations. As can be seen in Fig. 1a, these accumulations are separated from the right boundary $`a(1-e)=R_{pl}`$ of the crossing zone by a noticeable "trough" of decreased surface density of comets.
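The binning described above can be sketched as follows (an illustrative Python sketch; the function name and the use of `numpy.histogram2d` are our choices, with the text's bin sizes $`\mathrm{\Delta }a=0.005R_{pl}`$ and $`\mathrm{\Delta }e=0.01`$):

```python
import numpy as np

def density_map(a, e, r_pl, bins_a=500, bins_e=100):
    """Bin (a, e) samples of the stationary distribution into the
    500 x 100 2D-density array used for the grey-scale map of Fig. 1a
    (a < 2.5 R_pl, so da = 0.005 R_pl; de = 0.01)."""
    H, _, _ = np.histogram2d(
        np.asarray(a) / r_pl, np.asarray(e),
        bins=[bins_a, bins_e],
        range=[[0.0, 2.5], [0.0, 1.0]],
    )
    return H  # counts per cell; plot log10(1 + H) for a logarithmic grey scale
```

Each cell then holds the number of recorded positions, and the logarithmic grey scale of Fig. 1 corresponds to plotting the counts on a log scale.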
Figs. 2a,b,c,d show the distributions of comets in semimajor axis, pericenter distance, apocenter distance, and heliocentric distance, respectively. The vertical coordinate is a measure of the number of comets within a bin of 0.01 $`R_{pl}`$. The dashed line delineates the region occupied by comets with pericenter distances $`<0.5R_{pl}`$, i.e. those comets whose chances of being discovered are best. Such objects could be called "visible comets" (which is correct for Jupiter, although for more distant planets the visibility condition should be more stringent).
Fig. 2a reveals a rich resonant structure of the cometary belt. Arrows show the positions of particular resonances. A detailed analysis (OGT) indicates that the smaller the mass of the planet (in the range $`M_{Uranus}<M<M_{Jupiter}`$), the richer its resonant structure. This is caused by the fact that, for large planetary masses, different resonances partly overlap.
Fig. 2b demonstrates that the pericenter distances of the scattered comets lie close to the orbit of the planet, slightly exceeding it. This is determined by the dynamics of the comets and is an important feature of the cometary belt.
The distribution of comets in apocenter distance (Fig. 2c) demonstrates an appreciable concentration of comets toward the planetโs orbit. If we select only comets with small pericenter distances (as mentioned above, this is the condition for these comets to be visible), it turns out that the apocenter distances of those comets are indeed rather close to the orbit of the planet (see the dashed curve in Fig. 2c). As is known, it is this circumstance which has been used to define a particular "cometary family".
Fig. 2d (the distribution in heliocentric distance) demonstrates that a strong concentration of comets toward the planetary orbit exists not only in phase space, but in ordinary space as well. The cometary belt has a pronounced maximum near the host planetโs orbit. Interestingly, the simulated surface density distributions of both the visible and all comets have two maxima. For the curve of visible comets, this is explained by the fact that the probability of finding a comet at various distances from the Sun has two maxima, at the pericenter and the apocenter, and since the visible comets have quite similar orbital parameters those two maxima are clearly revealed. As for all the simulated comets belonging to the same cometary belt, the right (outer) peak in the surface density curve appears because the pericenter distances of all outer (relative to the planet) comets are rather close to each other, whereas their apocenter distances differ substantially. Meanwhile, the left (inner) peak appears because the apocenter distances of all inner (relative to the planet) comets are rather close to each other, whereas their pericenter distances are very different. Our simulations indicate that the regions of the largest surface density of comets are located slightly outside the boundaries of the crossing zone (see Fig. 1a); therefore, the pericenters of the outermost comets do not coincide with the apocenters of the innermost comets. As a result, the surface density of comets has two maxima near the planetโs orbit.
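The double-peaked residence-time distribution invoked here for the visible comets is easy to reproduce for a single Kepler orbit (an illustrative Python sketch; the orbital elements are arbitrary):

```python
import numpy as np

def radius_samples(a, e, n=100_000):
    """Heliocentric distances of one Kepler orbit sampled uniformly in time:
    draw mean anomalies M, solve Kepler's equation E - e sin E = M by
    fixed-point iteration (convergent for e < 1), return r = a(1 - e cos E)."""
    M = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    E = M.copy()
    for _ in range(60):
        E = M + e * np.sin(E)
    return a * (1.0 - e * np.cos(E))

# time spent per radial bin for a single orbit with a = 1, e = 0.3:
r = radius_samples(1.0, 0.3)
hist, _ = np.histogram(r, bins=20, range=(0.7, 1.3))
```

The first and last bins (pericenter and apocenter) dominate, since $`dr/dt`$ vanishes at both turning points.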
The distributions shown in Figs. 2a to 2d make it quite convincing that there is a spatial accumulation of comets near the planetary orbit. We name such an accumulation the cometary belt. This population is dynamical in nature, because the comets belonging to a given planetโs belt are either in a resonance with the host or are gravitationally scattered predominantly by this planet.
Summarizing the similarities and dissimilarities between the cometary belt and the cometary family, we conclude that the distribution in apocenter distance is the only one which looks alike for both, whereas all the other distributions differ, owing to the observational selection to which the cometary family is highly sensitive.
The above material concerns the cometary belt around a planet of the Saturnian mass. We have computed the surface density distribution of comets around planets of the Jovian, Uranian, and Neptunian masses as well. The details of our computational runs are given in Table 1. The data on the surface density of comets for all four giant planets, in the one-planet approximation, are shown in Fig. 3. As can be seen, the smaller the planetโs mass, the larger the surface density contrast.
## 3. Four-planet approximation
It is important to see which features of the cometary belts remain when we abandon the one-planet approximation and take into account the gravitational fields of all four giant planets. To save computational time, we continue to neglect the eccentricities and inclinations of the planets (as shown in OGT, neglecting the non-zero planetary eccentricities is not an oversimplification), and we also neglect the secular resonances. We have simulated 36 distributions of cometary orbits totalling $`25.7\times 10^6`$ positions, or $`(a,e,i)`$-points. The details of our computational runs are summarized in Table 1. The initial conditions for the orbit integrations are taken in the Kuiper belt: we consider those objects whose orbits intersect Neptuneโs orbit. They belong to the resonances 3:2 and 2:1, but the angle "test body โ Sun โ Neptune" was taken in such a way that the test body rapidly leaves the resonance owing to a close encounter with the planet. Available numerical computations (Malhotra et al. 1999 and refs. therein) confirm that the above resonances are temporary and that their de-population might indeed explain the origin of the so-called scattered disk objects.
Fig. 4a,b show the "surface density" (more accurately, the 2D-density) of comets in the $`(a,e)`$- and $`(a,i)`$-spaces, governed by gravitational scatterings on all four giant planets.
We use, as in Fig. 1, a logarithmic grey scale, with the only difference that each shade differs 100-fold from the neighboring one. The basic time step used is 0.001 (in units of one Neptune revolution, taken to be $`2\pi `$). The time step was reduced as a test comet approached the planet. These improved simulations provide a 4-fold better accuracy compared to our earlier approach (OGT), where we used a larger basic time step (0.002). The results of both simulations turn out to be close to each other.
As can be seen from Fig. 4a, the resonant gaps in Neptuneโs zone, as well as the gaps at the 1:1 resonances near Uranus, Saturn, and Jupiter, are well pronounced (at not too large eccentricities).
Our simulations indicate a progressive, sharp decrease in the surface density of comets between the orbits of Neptune and Jupiter (see Fig. 5d). This decrease is characterized by the transfer functions computed in OGT. We notice that a substantial decrease in the surface density of comets between the orbits of Neptune and Jupiter is consistent with the fact that the number of known Centaur objects has been found not to change substantially with heliocentric distance. Bearing in mind that observational selection becomes stronger the larger the distance of an object from both the Sun and the observer, the above fact implies that the number of Centaurs should sharply increase toward the Kuiper belt.
As can be seen from Figs. 4a and 4b, our simulations apparently do not explain the known comets with the largest eccentricities $`e>0.9`$ or inclinations $`i>40^{\circ }`$. Such objects, which mostly belong to the Saturn-, Uranus-, and Neptune-family comets, appear to be very rare in the dynamical evolution of short-period comets. A similar conclusion was reached by Levison & Duncan (1997), who argue that the majority of such comets (of Halley type) could be produced by a journey from the Oort cloud, and not from the Kuiper belt.
Figs. 5a to 5d show the distributions of the cometary populations governed by all four giant planets in semimajor axis, pericenter distance, apocenter distance, and heliocentric distance, respectively.
Fig. 5a demonstrates that the resonant structure of the entire cometary population is preserved despite the gravitational influence of all four giant planets. The resonant structure is especially rich in the outer part of the Neptunian cometary belt, where the influence of the other giant planets is somewhat weakened. On the other hand, the resonant structure in the distribution of comets with $`a<30`$ AU is much smoother, which can be explained by a strong interaction with all the giant planets, including the most massive ones.
Fig. 5b (the distribution of simulated comets in pericenter distance) reveals four major maxima, indicating the existence of four cometary belts associated with the giant planet orbits. The separation into four belts becomes more evident for comets with large $`a`$ (say, with $`39<a<75`$ AU, as can be seen in Fig. 4a), because such comets are dynamically governed by the planet whose orbit is located near the cometโs pericenter.
The general distribution of comets in apocenter distance (Fig. 5c) does not reveal appreciable concentrations toward any planetโs orbit, except that of Jupiter. The other, lower-contrast peaks are hard to see, because their apocenter maxima are easily destroyed by the influence of the planets (even the apocenter branch of the Neptunian belt is somewhat dissolved by the three innermost giant planets).
The distribution of comets in heliocentric distance (Fig. 5d) reveals a density peak near Neptune and another near Jupiter, i.e. around the boundaries where those planets are the only hosts. The absence of noticeable density peaks associated with the orbits of Uranus and Saturn is not surprising, because those peaks are overwhelmed by the vast number of comets belonging to the Neptunian cometary belt. To illustrate this, we show in Fig. 5d, in the one-planet approximation, the distribution of comets near each giant planetโs orbit. We assume that the density maxima in each belt are proportional to the transfer functions found by OGT (obviously, we need this assumption only to illustrate how the different belts could be populated relative to each other).
Our 36 runs performed in the four-planet approximation allow us to construct a steady-state distribution consisting of $`25.7\times 10^6`$ positions of test comets. Of this number, only 815 penetrated into the zone of visible comets, with pericenter distances less than 2.5 AU. Interestingly, this number turns out not to differ much from the total number of Jupiter-family comets estimated, accounting for observational selection, to be $`800\pm 300`$ (Fernández et al. 1999). This implies that our simulations indicate the total number of comets in the Solar system, with sizes of several km (typical of Jupiter-family comets), to be as large as 20-30 million.
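A back-of-the-envelope check of the quoted total (the numbers are taken from the text; the proportional scaling from the visible fraction is our assumption):

```python
n_positions = 25.7e6      # steady-state positions from the 36 runs
n_visible = 815           # of these, positions with pericenter < 2.5 AU
observed_jf = 800         # estimated Jupiter-family comets (Fernandez et al. 1999)

# If the visible fraction of the simulated population matches the real one,
# the implied total cometary population is:
implied_total = observed_jf * n_positions / n_visible
```

This gives roughly $`2.5\times 10^7`$ comets, i.e. within the quoted 20-30 million range.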
According to our simulations, the number of gravitationally scattered comets of the Neptunian belt is as large as $`(10-20)\times 10^6`$. Using recent observations (Marsden 1999), which indicate that the number of kuiperoids exceeds the number of scattered Neptunian comets by a factor of 50, we estimate the total number of kuiperoids to be $`5\times 10^8`$ to $`10^9`$ bodies, which is fairly consistent with the $`8\times 10^8`$ inferred by Jewitt (1999) from available observations.
## 4. Discussion and Conclusions
The one-planet approximation (the Sun plus one planet) employed in Sec. 2 suggests that each giant planet can host a cometary belt, an accumulation of comets associated with the planetโs orbit. This is a non-trivial result: in principle, the distribution of comets governed by the gravitational fields of the Sun and the planet could be like that of the main asteroid belt, i.e. without any concentration toward the planetโs orbit. The above accumulation is dynamical in nature, implying that each comet in the belt is either gravitationally scattered predominantly by this planet or is in a resonance with it. The major problem is to verify whether this accumulation, found in the one-planet approximation, survives when the influence of all the giant planets is taken into account. As shown in Sec. 3, this is indeed the case for a cometary population originating in the Kuiper belt and eventually distributed, in the steady state, between the orbits of Neptune and Jupiter. Although the cometary belts of all four giant planets can be traced using the distributions in semimajor axis, pericenter distance, and heliocentric distance, the belts overlap. The four-planet approximation indicates that only a tiny fraction of comets is able to penetrate from an outer planetโs zone into the zone of its nearest inner neighbor. As a result, the different cometary constituents are seen in the superposition of the belts with different confidence: the Uranian and Saturnian belts are barely visible against the background of the copious Neptunian belt, and only the latter (plus, in part, the Jovian belt) is well pronounced. Nevertheless, as shown in Sec. 2, the comets of each belt, regardless of its richness, are concentrated toward the hostโs orbit.
Despite the destructive influence of the four giant planets, the cometary belts maintain the major features found in the one-planet approximation: (i) the belts are the more clearly separated, the larger the semimajor axes of the comets, and (ii) the resonant accumulations and gaps, although somewhat weakened, are still well delineated.
To describe the spatial distribution of comets in the Solar system, astronomers traditionally use the term "cometary family" (e.g. the Jupiter family comets). This term, although introduced on a purely observational basis, has turned out to be very valuable, as it helped to reveal the first structures in the cometary population. However, since numerical simulations have shown that the cometary population in the zone of the giant planets is very populous (Levison & Duncan 1997; OGT), it is becoming increasingly clear that the part of this population observed in the form of the above cometary families is no more than the "tip of the iceberg". In this paper, we give new evidence in favor of this view. Moreover, the distribution of the cometary populations between Jupiter and Neptune simulated in OGT and the present paper is important for computing the distribution of dust in the outer Solar system (GOTM 1999).
Each cometary family is characterized by cometary apocenters close to the orbit of its host planet. Therefore, each family of comets turns out to be part of a more general dynamical structure described in this paper, the cometary belt. Such belts, as shown above, should exist near each giant planetโs orbit. The basic features of each cometary belt are determined by its dynamical interaction with the gravitational field of the host planet, and these features, as distinct from those of the cometary families, do not depend upon observational selection. The dynamical term "cometary belt" seems more justifiable and helpful than the observational term "cometary family". The latter is meaningful for characterizing the visible part of a cometary belt, which steadily grows as more and more faint objects are registered with improving techniques. Further simulations would be highly desirable to separate the possible contributions to the cometary belts from the Kuiper belt and the Oort cloud.
Acknowledgements. This work has been supported by NASA Grant NAG5-7065 to George Mason University. Discussions with, and the invariable support of, John Mather are highly appreciated. N.G. acknowledges the NRC-NAS associateship. We are very thankful to Alexander Krivov, the referee, for a number of helpful suggestions.
References
Fernández, J.A., Ip, W.-H. 1983, Icarus, 54, 377-387
Fernández, J.A., Tancredi, G., Rickman, H., Licandro, J. 1999, "Asteroids, Comets, Meteors" (Abstracts of the International Meeting at Cornell Univ., July 26-30), p. 99; Astronomy & Astrophysics (submitted)
Gorkavyi, N.N., Ozernoy, L.M., Taidakova, T., Mather, J.C. (= GOTM) 1999, Four circumsolar dust belts in the outer Solar system associated with the giant planets. "Asteroids, Comets, Meteors" (Abstracts of the International Meeting at Cornell University, July 26-30), p. 131
Jewitt, D. 1999, Kuiper belt objects, Ann. Rev. Earth. Planet. Sci., 27, 287-312
Levison, H.F., Duncan M.J. 1997. From the Kuiper belt to Jupiter-family comets: the spatial distribution of ecliptic comets. Icarus 127, 13-32 (LD).
Malhotra, R., Duncan, M., & Levison, H. 1999. Dynamics of the Kuiper belt objects. In Protostars and Planets IV (in press) $`=`$ astro-ph/9901155
Marsden, B.G. 1999, Minor Planet Center, Ephemerides and Orbital Elements http://cfa-www.harvard.edu/iau/Ephemerides/index.html
Ozernoy, L.M., Gorkavyi, N.N., Taidakova, T. (= OGT) 2000, Large scale structures in the outer Solar system: I. Cometary belts with resonant features associated with the giant planets. Icarus (submitted); an earlier version was posted as astro-ph/9812479
Taidakova, T. 1997, A new stable method for long-time integration in an N-body problem. Astronomical Data Analyses, Software and Systems VI, ed. G. Hunt & H.E.Payne, (San Francisco: ASP), ASP Conf. Ser. 125, p. 174-177
Torbett, M.V. 1989, Chaotic motion in a comet disk beyond Neptune: the delivery of short-period comets. Astron. J. 98, 1477-1481
Figure Captions
Figure 1.
a. 2D density of a cometary belt in the coordinates "eccentricity โ semimajor axis". To represent the number of comets in each cell, a logarithmic grey scale is used, i.e. each shade differs 10-fold from the neighboring one. Heavy curves represent the boundaries of the crossing zone, and the region above the dashed line is the zone of visible comets ($`q<0.5R_{pl}`$). Numerous resonant gaps are seen.
b. 2D density of a cometary belt in the coordinates "inclination angle โ semimajor axis". The same logarithmic grey scale as in a is used. Numerous resonant gaps are clearly seen at all inclinations.
Figure 2.
a. Distribution of comets in semimajor axis, with a bin size $`\mathrm{\Delta }a=0.005R_{planet}`$. Various resonant gaps are indicated by arrows. The region shown by dashed lines is occupied by visible comets, whose perihelion distances are the smallest.
b. Distribution of comets in the distance of pericenter. Dashed line indicates the region of visible comets concentrated at the left edge of the distribution.
c. Distribution of comets in the distance of apocenter. Visible comets form a peak near the planetโs orbit.
d. Surface density of the cometary population as a function of heliocentric distance. Visible comets are concentrated near or inside the host planetโs orbit.
Figure 3.
Surface density of cometary populations (logarithmic scale) as a function of the heliocentric distance shown for all four giant planets (in one-planet approximation).
Figure 4.
a. 2D density of the simulated cometary population of the Solar system in the coordinates "eccentricity โ semimajor axis" (the four-planet approximation). Four cometary belts of the giant planets can be seen. The boundaries of the crossing zones are shown by heavy lines, and the region occupied by visible comets ($`q<2`$ AU) is located above the dashed line. Crosses stand for asteroids of the main belt (the first 100 objects of the list), small triangles for short-period comets (112 objects), large triangles for Centaurs (15 objects), and diamonds for the Kuiper belt objects (191).
b. 2D density of the four cometary belts in the coordinates "inclination angle โ semimajor axis" (the four-planet approximation). Designations are the same as in a.
Figure 5.
a. Distribution of simulated comets of the Solar system in semimajor axis (the four-planet approximation). Arrows indicate various Neptunian resonances. The region inside Neptuneโs orbit, where the strong scattering zones of different planets overlap, looks more uniform than the well-structured outermost zone.
b. Distribution of comets in pericenter distance (the four-planet approximation). Arrows indicate the four well-pronounced peaks corresponding to the four cometary belts. The dashed line is for comets with $`39<a<75`$ AU.
c. Distribution of comets in apocenter distance (the four-planet approximation). There is a noticeable peak only around Jupiterโs orbit; the other peaks in this distribution are associated with various local resonances and diffusive accumulations of comets.
d. Surface density of the cometary population as a function of heliocentric distance (the four-planet approximation). Curve 1 is the four-planet approximation, and curve 2 is the sum of the 4 one-planet approximations (the dotted line shows the Neptunian belt, the dashed line the Uranian belt, the dash-dotted line the Saturnian belt, and the least-populated belt is that of Jupiter).
A Remark on One-Dimensional Many-Body Problems with Point Interactions
Sergio Albeverio<sup>1</sup>, Ludwik Dąbrowski<sup>2</sup> and Shao-Ming Fei<sup>3</sup>

<sup>1</sup> SFB 256; SFB 237; BiBoS; CERFIM (Locarno); Acc. Arch., USI (Mendrisio)
<sup>2</sup> SISSA, I-34014 Trieste, Italy
<sup>3</sup> Institute of Physics, Chinese Academy of Science, Beijing
Institut fรผr Angewandte Mathematik, Universitรคt Bonn, D-53115 Bonn
and
Fakultรคt fรผr Mathematik, Ruhr-Universitรคt Bochum, D-44780 Bochum
Abstract
The integrability of one-dimensional quantum mechanical many-body problems with general contact interactions is extensively studied. It is shown that besides the pure (repulsive or attractive) $`\delta `$-function interaction there is another singular point interaction which gives rise to a new one-parameter family of integrable quantum mechanical many-body systems. The bound states and scattering matrices are calculated for both bosonic and fermionic statistics.
Quantum mechanical solvable models describing a particle moving in a local singular potential concentrated at one or a discrete number of points have been extensively discussed in the literature, see e.g. and references therein. One-dimensional problems with contact interactions at, say, the origin ($`x=0`$) can be characterized by separated or nonseparated boundary conditions imposed on the (scalar) wave function $`\phi `$ at $`x=0`$. The classification of one-dimensional point interactions in terms of singular perturbations is given in . In the present paper we are interested in many-body problems with pairwise interactions given by such singular potentials. The first model of this type, with the pairwise interactions determined by $`\delta `$-functions, was suggested and investigated in . Intensive studies of this model applied to statistical mechanics (particles having boson or fermion statistics) are given in (these also lead to the well-known Yang-Baxter equations).
Nonseparated boundary conditions correspond to the cases where the perturbed operator is not equal to the orthogonal sum of two self-adjoint operators in $`L_2(-\infty ,0]`$ and $`L_2[0,\infty )`$. The family of point interactions for the one-dimensional Schrödinger operator $`-\frac{d^2}{dx^2}`$ can be described by unitary $`2\times 2`$ matrices via von Neumann formulas for self-adjoint extensions of symmetric operators, since the second derivative operator restricted to the domain $`C_0^{\infty }(I\!R\backslash \{0\})`$ has deficiency indices $`(2,2)`$. The boundary conditions describing the self-adjoint extensions have the following form
$$\left(\begin{array}{c}\phi \\ \phi ^{\prime }\end{array}\right)_{0^+}=e^{i\theta }\left(\begin{array}{cc}a& b\\ c& d\end{array}\right)\left(\begin{array}{c}\phi \\ \phi ^{\prime }\end{array}\right)_{0^{-}}$$
(1)
where
$$ad-bc=1,\qquad \theta ,a,b,c,d\in I\!R.$$
(2)
$`\phi (x)`$ is the scalar wave function of two spinless particles with relative coordinate $`x`$. (1) also describes two particles with spin $`s`$ but without any spin coupling between the particles when they meet (i.e. for $`x=0`$), in this case $`\phi `$ represents any one of the components of the wave function. The values $`\theta =b=0`$, $`a=d=1`$ in (1) correspond to the case of a positive (resp. negative) $`\delta `$-function potential for $`c>0`$ (resp. $`c<0`$). For general $`a,b,c`$ and $`d`$, the properties of the corresponding Hamiltonian systems have been studied in detail, see e.g. .
The separated boundary conditions are described by
$$\phi ^{\prime }(0_+)=h^+\phi (0_+),\qquad \phi ^{\prime }(0_{-})=h^{-}\phi (0_{-}),$$
(3)
where $`h^\pm \in I\!R\cup \{\infty \}`$. $`h^+=\infty `$ or $`h^-=\infty `$ correspond to Dirichlet boundary conditions, and $`h^+=0`$ or $`h^-=0`$ correspond to Neumann boundary conditions. In this case the perturbed operator can be expressed as the orthogonal sum of two self-adjoint operators in $`L_2(-\infty ,0]`$ and $`L_2[0,\infty )`$.
In the following we study the integrability of one-dimensional systems of $`N`$ identical particles with general contact interactions described by the boundary conditions (1) or (3), imposed on the relative coordinates of the particles. We first consider the case of two particles ($`N=2`$) with coordinates $`x_1`$, $`x_2`$ and momenta $`k_1`$, $`k_2`$ respectively. Each particle has $`n`$ "spin" states designated by $`s_1`$ and $`s_2`$, $`1\le s_i\le n`$. For $`x_1\ne x_2`$, these two particles are free. The wave functions $`\phi `$ are symmetric (resp. antisymmetric) with respect to the interchange $`(x_1,s_1)\leftrightarrow (x_2,s_2)`$ for bosons (resp. fermions). In the region $`x_1<x_2`$, from the Bethe ansatz, the wave function is of the form,
$$\phi =\alpha _{12}e^{i(k_1x_1+k_2x_2)}+\alpha _{21}e^{i(k_2x_1+k_1x_2)},$$
(4)
where $`\alpha _{12}`$ and $`\alpha _{21}`$ are $`n^2\times 1`$ column matrices. In the region $`x_1>x_2`$,
$$\phi =(P^{12}\alpha _{12})e^{i(k_1x_2+k_2x_1)}+(P^{12}\alpha _{21})e^{i(k_2x_2+k_1x_1)},$$
(5)
where, according to the symmetry or antisymmetry conditions, $`P^{12}=p^{12}`$ for bosons and $`P^{12}=-p^{12}`$ for fermions, $`p^{12}`$ being the operator on the $`n^2\times 1`$ column that interchanges $`s_1\leftrightarrow s_2`$.
Let $`k_{12}=(k_1-k_2)/2`$. In the center-of-mass coordinate $`X=(x_1+x_2)/2`$ and the relative coordinate $`x=x_2-x_1`$, we get, by substituting (4) and (5) into the boundary conditions at $`x=0`$,
$$\{\begin{array}{c}\alpha _{12}+\alpha _{21}=e^{i\theta }aP^{12}(\alpha _{12}+\alpha _{21})+ie^{i\theta }bk_{12}P^{12}(\alpha _{12}-\alpha _{21}),\hfill \\ ik_{12}(\alpha _{21}-\alpha _{12})=e^{i\theta }cP^{12}(\alpha _{12}+\alpha _{21})+ie^{i\theta }dk_{12}P^{12}(\alpha _{12}-\alpha _{21})\hfill \end{array}$$
(6)
for boundary condition (1), and
$$\{\begin{array}{c}ik_{12}(\alpha _{21}-\alpha _{12})=h_+(\alpha _{12}+\alpha _{21}),\hfill \\ ik_{12}P^{12}(\alpha _{12}-\alpha _{21})=h_{-}P^{12}(\alpha _{12}+\alpha _{21})\hfill \end{array}$$
(7)
for boundary condition (3) respectively.
Eliminating the term $`P^{12}\alpha _{12}`$ from (6) we obtain the relation
$$\alpha _{21}=Y_{21}^{12}\alpha _{12},$$
(8)
where
$$Y_{21}^{12}=\frac{2ie^{i\theta }k_{12}P^{12}+ik_{12}(a-d)+(k_{12})^2b+c}{ik_{12}(a+d)+(k_{12})^2b-c}.$$
(9)
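As a sanity check (our illustrative sketch, with $`n=2`$ spin states, arbitrary sample values of $`k_{12}`$ and $`c`$, and specializing to the $`\delta `$ case $`\theta =b=0`$, $`a=d=1`$), one can verify numerically that $`\alpha _{21}=Y_{21}^{12}\alpha _{12}`$ indeed solves both lines of (6):

```python
import numpy as np

# Spin-exchange operator p^{12} on C^2 x C^2 (basis index 2*s1 + s2)
P = np.zeros((4, 4))
for s1 in range(2):
    for s2 in range(2):
        P[2 * s2 + s1, 2 * s1 + s2] = 1.0

k, c = 0.83, 1.7                         # arbitrary sample values
Y = (2j * k * P + c * np.eye(4)) / (2j * k - c)   # (9) in the delta case

rng = np.random.default_rng(0)
a12 = rng.normal(size=4) + 1j * rng.normal(size=4)
a21 = Y @ a12

# residuals of the two lines of (6) with theta = 0, a = d = 1, b = 0:
line1 = (a12 + a21) - P @ (a12 + a21)
line2 = 1j * k * (a21 - a12) - (c * P @ (a12 + a21)
                                + 1j * k * P @ (a12 - a21))
```

Both residuals vanish to machine precision for any $`\alpha _{12}`$.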
We remark that the system (7) is contradictory unless
$$h_+=-h_-\equiv h\in I\!R\cup \{\infty \}.$$
(10)
In this case it also leads to equation (8) with
$$Y_{21}^{12}=\frac{ik_{12}+h}{ik_{12}-h}.$$
(11)
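The consistency requirement on system (7) can also be checked numerically (our illustration; the sample values of $`k_{12}`$ and $`h_\pm `$ are arbitrary). Its two lines fix the same ratio $`\alpha _{21}/\alpha _{12}`$ precisely for opposite boundary parameters, cf. (10):

```python
def ratios(k, h_plus, h_minus):
    """The ratio alpha_21/alpha_12 implied by each line of (7) separately."""
    r_upper = (1j * k + h_plus) / (1j * k - h_plus)    # from the first line
    r_lower = (1j * k - h_minus) / (1j * k + h_minus)  # from the second line
    return r_upper, r_lower

r1, r2 = ratios(k=0.7, h_plus=1.3, h_minus=-1.3)  # opposite: consistent
r3, r4 = ratios(k=0.7, h_plus=1.3, h_minus=+1.3)  # equal: contradictory
```

In the consistent case both ratios reduce to the expression (11).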
For $`N\ge 3`$ and $`x_1<x_2<\cdots <x_N`$, the wave function is given by
$$\psi =\alpha _{12\cdots N}e^{i(k_1x_1+k_2x_2+\cdots +k_Nx_N)}+\alpha _{21\cdots N}e^{i(k_2x_1+k_1x_2+\cdots +k_Nx_N)}+(N!-2)\ \mathrm{other}\ \mathrm{terms}.$$
(12)
The columns $`\alpha `$ have dimensions $`n^N\times 1`$. The wave functions in the other regions are determined from (12) by the requirement of symmetry (for bosons) or antisymmetry (for fermions). Along any plane $`x_i=x_{i+1}`$, $`i=1,2,\ldots ,N-1`$, from similar considerations as above we have
$$\alpha _{l_1l_2\cdots l_il_{i+1}\cdots l_N}=Y_{l_{i+1}l_i}^{ii+1}\alpha _{l_1l_2\cdots l_{i+1}l_i\cdots l_N},$$
(13)
where
$$Y_{l_{i+1}l_i}^{ii+1}=\frac{2ie^{i\theta }k_{l_il_{i+1}}P^{ii+1}+ik_{l_il_{i+1}}(a-d)+(k_{l_il_{i+1}})^2b+c}{ik_{l_il_{i+1}}(a+d)+(k_{l_il_{i+1}})^2b-c}$$
(14)
for nonseparated boundary condition and
$$Y_{l_{i+1}l_i}^{ii+1}=\frac{ik_{l_il_{i+1}}+h}{ik_{l_il_{i+1}}-h}$$
(15)
for separated boundary condition. Here $`k_{l_il_{i+1}}=(k_{l_i}-k_{l_{i+1}})/2`$ play the role of spectral parameters. $`P^{ii+1}=p^{ii+1}`$ for bosons and $`P^{ii+1}=-p^{ii+1}`$ for fermions, with $`p^{ii+1}`$ the operator on the $`n^N\times 1`$ column that interchanges $`s_i`$ and $`s_{i+1}`$.
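For concreteness, the operator $`p^{ii+1}`$ can be realized explicitly by viewing the $`n^N\times 1`$ column as an $`N`$-index array and transposing two adjacent spin indices. A minimal NumPy sketch (the values $`n=2`$, $`N=3`$ are illustrative choices, not fixed by the text):

```python
import numpy as np

def swap_op(i, n, N):
    """Matrix of p^{i,i+1}: exchanges spin slots i and i+1 (0-based) of an n^N column."""
    dim = n ** N
    P = np.zeros((dim, dim))
    for col in range(dim):
        e = np.zeros(dim)
        e[col] = 1.0
        t = e.reshape((n,) * N)        # view the column as an N-index array
        t = np.swapaxes(t, i, i + 1)   # exchange the two adjacent spin indices
        P[:, col] = t.reshape(dim)
    return P

p01 = swap_op(0, 2, 3)                 # p^{12} for n = 2, N = 3; p01 @ p01 = identity
```

On a product state the matrix simply exchanges the first two spin factors, and squaring it gives the identity, as an exchange operator must.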
For consistency $`Y`$ must satisfy the Yang-Baxter equation with spectral parameter, i.e.,
$$Y_{ij}^{m,m+1}Y_{kj}^{m+1,m+2}Y_{ki}^{m,m+1}=Y_{ki}^{m+1,m+2}Y_{kj}^{m,m+1}Y_{ij}^{m+1,m+2},$$
or
$$Y_{ij}^{mr}Y_{kj}^{rs}Y_{ki}^{mr}=Y_{ki}^{rs}Y_{kj}^{mr}Y_{ij}^{rs}$$
(16)
if $`m,r,s`$ are all unequal, and
$$Y_{ij}^{mr}Y_{ji}^{mr}=1,Y_{ij}^{mr}Y_{kl}^{sq}=Y_{kl}^{sq}Y_{ij}^{mr}$$
(17)
if $`m,r,s,q`$ are all unequal.
The operators $`Y`$ given by (14) satisfy the relation (17) for all $`\theta ,a,b,c,d`$. However the relations (16) are satisfied only when $`\theta =0`$, $`a=d`$ and $`b=0`$, that is, according to the constraint (2), $`\theta =0`$, $`a=d=\pm 1`$, $`b=0`$, $`c`$ arbitrary. The case $`a=d=1`$, $`\theta =b=0`$ corresponds to the usual $`\delta `$-function interactions, which has been investigated in . The case $`a=d=-1`$, $`\theta =b=0`$, which we shall refer to as "anti-$`\delta `$" interaction, is related to another singular interaction between any pair of particles (for $`a=d=-1`$ and $`\theta =b=c=0`$ see ). Associated with the separated boundary condition, the operators $`Y`$ given by (15) satisfy both the relations (17) and (16) for arbitrary $`h`$.
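The claim that (16) and (17) hold in the $`\delta `$ case ($`\theta =b=0`$, $`a=d=1`$) can be spot-checked numerically by representing the permutation operators as explicit matrices on three spin slots. The following sketch uses arbitrary test momenta and coupling (all numerical values are illustrative):

```python
import numpy as np

n, N = 2, 3
dim = n ** N

def swap(i):
    # permutation matrix exchanging spin slots i and i+1 on the n^N column
    P = np.zeros((dim, dim))
    for col in range(dim):
        e = np.zeros(dim); e[col] = 1.0
        P[:, col] = np.swapaxes(e.reshape((n,) * N), i, i + 1).reshape(dim)
    return P

P12, P23 = swap(0), swap(1)
c = 0.7                                   # illustrative coupling
k = {"i": 0.3, "j": -1.1, "k": 2.4}       # illustrative test momenta

def Y(a, b, P):
    # delta-case operator of type (21), spectral parameter k_a - k_b
    u = k[a] - k[b]
    return (1j * u * P + c * np.eye(dim)) / (1j * u - c)

lhs = Y("i", "j", P12) @ Y("k", "j", P23) @ Y("k", "i", P12)
rhs = Y("k", "i", P23) @ Y("k", "j", P12) @ Y("i", "j", P23)
print(np.allclose(lhs, rhs))              # Yang-Baxter relation (16) holds
print(np.allclose(Y("i", "j", P12) @ Y("j", "i", P12), np.eye(dim)))  # unitarity (17)
```

The check passes because the middle spectral parameter is the sum of the outer two, which is exactly the structure required by the relation.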
We have thus found that with respect to $`N`$-particle (either boson or fermion) problems, altogether there are three integrable one parameter families with contact interactions of type $`\delta `$, anti-$`\delta `$ and separated one, described respectively by one of the following conditions on the wave function along the plane $`x_i=x_j`$ for any pair of particles with coordinates $`x_i`$ and $`x_j`$,
$$\phi (0_+)=+\phi (0_-),\phi ^{\prime }(0_+)=c\phi (0_-)+\phi ^{\prime }(0_-),c\in \mathbb{R};$$
(18)
$$\phi (0_+)=-\phi (0_-),\phi ^{\prime }(0_+)=c\phi (0_-)-\phi ^{\prime }(0_-),c\in \mathbb{R};$$
(19)
$$\phi ^{\prime }(0_+)=h\phi (0_+),\phi ^{\prime }(0_-)=-h\phi (0_-),h\in \mathbb{R}\cup \{\infty \}.$$
(20)
The wave functions are given by (12) with the $`\alpha `$'s determined by (13) and initial conditions. The operators $`Y`$ in (13) are given respectively by
$$Y_{l_{i+1}l_i}^{ii+1}=\frac{i(k_{l_i}-k_{l_{i+1}})P^{ii+1}+c}{i(k_{l_i}-k_{l_{i+1}})-c};$$
(21)
$$Y_{l_{i+1}l_i}^{ii+1}=-\frac{i(k_{l_i}-k_{l_{i+1}})P^{ii+1}+c}{i(k_{l_i}-k_{l_{i+1}})+c};$$
(22)
and
$$Y_{l_{i+1}l_i}^{ii+1}=\frac{i(k_{l_i}-k_{l_{i+1}})+2h}{i(k_{l_i}-k_{l_{i+1}})-2h}.$$
(23)
Nevertheless, from (21) and (22) we see that if we simultaneously change $`c\to -c`$ and $`P^{ii+1}\to -P^{ii+1}`$, these two formulas are interchanged. There is a sort of duality between bosons (resp. fermions) with $`\delta `$-interaction of strength $`c`$ and fermions (resp. bosons) with anti-$`\delta `$ interaction of strength $`-c`$. It can be checked that under the "kink type" gauge transformation $`\mathcal{U}=\prod _{i>j}\mathrm{sgn}(x_i-x_j)`$, the N-boson (resp. fermion) $`\delta `$-type contact interaction goes over to the N-fermion (resp. boson) anti-$`\delta `$ interaction. Therefore these two situations are in fact unitarily equivalent under a gauge transformation $`\mathcal{U}`$ that is non-smooth and does not factorize through one particle Hilbert spaces.
The integrable system related to the case (23) is not unitarily equivalent to either the $`\delta `$ or anti-$`\delta `$ cases. In fact their spectra are different (see the bound states below). In the following we study further the one dimensional integrable $`N`$-particle systems associated with (23).
When $`h<0`$, there exist bound states. For $`N=2`$, the space part of the orthogonal basis (labeled by $`\pm `$) in the doubly degenerate bound state subspace has the form, in the relative coordinate $`x=x_2-x_1`$,
$$\psi _{2,\pm }=(\theta (x)\pm \theta (-x))e^{h|x|}.$$
(24)
The eigenvalue corresponding to the bound states (24) is $`-h^2`$. By generalization we get the $`2^{N(N-1)/2}`$ bound states for the $`N`$-particle system
$$\psi _{N,\underline{\epsilon }}=\alpha _{\underline{\epsilon }}\prod _{k>l}(\theta (x_k-x_l)+\epsilon _{kl}\theta (x_l-x_k))e^{h\sum _{i>j}|x_i-x_j|},$$
(25)
where $`\alpha _{\underline{\epsilon }}`$ is the spin wave function and $`\underline{\epsilon }\equiv \{\epsilon _{kl}:k>l\}`$; $`\epsilon _{kl}=\pm `$ labels the $`2^{N(N-1)/2}`$-fold degeneracy.
It can be checked that $`\psi _{N,\underline{\epsilon }}`$ satisfies the boundary condition (20) at $`x_i=x_j`$ for any $`i\ne j\in \{1,\dots ,N\}`$. The spin wave function $`\alpha `$ here satisfies $`P^{ij}\alpha =\epsilon _{ij}\alpha `$ for any $`i\ne j`$, that is, $`p^{ij}\alpha =\epsilon _{ij}\alpha `$ for bosons and $`p^{ij}\alpha =-\epsilon _{ij}\alpha `$ for fermions. $`\psi _{N,\underline{\epsilon }}`$ is of the form (12) in each region. For instance comparing $`\psi _{N,\underline{\epsilon }}`$ with (12) in the region $`x_1<x_2<\dots <x_N`$, we get
$$k_1=ih(N-1),k_2=k_1-2ih,k_3=k_2-2ih,\dots ,k_N=-k_1.$$
(26)
The energy of the bound state $`\psi _{N,\underline{\epsilon }}`$ is
$$E=-\frac{h^2}{3}N(N^2-1).$$
(27)
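As a quick consistency check, the momenta (26) can be written in closed form as $`k_m=-ih(2m-1-N)`$, and summing $`k_m^2`$ reproduces (27). A short numerical verification (the value of $`h`$ is an arbitrary test number, and $`E=\sum _mk_m^2`$ is assumed, as for free motion between the contact planes):

```python
# Check E = sum_m k_m^2 = -(h^2/3) N (N^2 - 1) for the bound-state momenta (26).
h = -0.8                      # any h < 0; illustrative value

for N in range(2, 8):
    ks = [-1j * h * (2 * m - 1 - N) for m in range(1, N + 1)]
    assert abs(ks[-1] + ks[0]) < 1e-12            # k_N = -k_1, as in (26)
    E = sum(kk ** 2 for kk in ks)
    E_formula = -(h ** 2) * N * (N ** 2 - 1) / 3.0
    assert abs(E.real - E_formula) < 1e-12 and abs(E.imag) < 1e-12
print("energy formula (27) verified for N = 2..7")
```

For $`N=2`$ this reduces to $`E=-2h^2`$, the total two-particle bound-state energy.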
The scattering matrix can readily be discussed. For real $`k_1<k_2<\dots <k_N`$, in each coordinate region such as $`x_1<x_2<\dots <x_N`$, the following term in (12) is an outgoing wave
$$\psi _{out}=\alpha _{12\dots N}e^{i(k_1x_1+\dots +k_Nx_N)}.$$
(28)
An incoming wave with the same exponential as (28) is given by
$$\psi _{in}=[P^{1N}P^{2(N-1)}\dots ]\alpha _{N(N-1)\dots 1}e^{i(k_Nx_N+\dots +k_1x_1)}$$
(29)
in the region $`x_N<x_{N-1}<\dots <x_1`$. The scattering matrix is defined by $`\psi _{out}=S\psi _{in}`$. From (13) we have
$$\begin{array}{c}\alpha _{12\dots N}=[Y_{21}^{12}Y_{31}^{23}\dots Y_{N1}^{(N-1)N}]\alpha _{2\dots N1}=\dots \hfill \\ =[Y_{21}^{12}Y_{31}^{23}\dots Y_{N1}^{(N-1)N}][Y_{32}^{12}Y_{42}^{23}\dots Y_{N2}^{(N-2)(N-1)}]\dots [Y_{N(N-1)}^{12}]\alpha _{N(N-1)\dots 1}\equiv S^{\prime }\alpha _{N(N-1)\dots 1},\hfill \end{array}$$
where $`Y_{l_{i+1}l_i}^{ii+1}`$ is given by (23). Therefore
$$S=S^{\prime }P^{N1}P^{(N-1)2}\dots P^{1N}=S^{\prime }[P^{12}][P^{23}P^{12}][P^{34}P^{23}P^{12}]\dots [P^{(N-1)N}\dots P^{12}].$$
Defining
$$X_{ij}=Y_{ij}^{ij}P^{ij}$$
(30)
we obtain
$$S=[X_{21}X_{31}\dots X_{N1}][X_{32}X_{42}\dots X_{N2}]\dots [X_{N(N-1)}].$$
(31)
The scattering matrix $`S`$ is unitary and symmetric due to the time reversal invariance of the interactions. $`\langle s_1^{\prime }s_2^{\prime }\dots s_N^{\prime }|S|s_1s_2\dots s_N\rangle `$ stands for the $`S`$ matrix element of the process from the state $`(k_1s_1,k_2s_2,\dots ,k_Ns_N)`$ to the state $`(k_1s_1^{\prime },k_2s_2^{\prime },\dots ,k_Ns_N^{\prime })`$. The momenta (26) are imaginary for bound states. The scattering of clusters (bound states) can be discussed in a similar way as in . For instance for the scattering of a bound state of two particles ($`x_1<x_2`$) on a bound state of three particles ($`x_3<x_4<x_5`$), the scattering matrix is $`S=[X_{32}X_{42}X_{52}][X_{31}X_{41}X_{51}]`$.
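Since the separated-case factors (23) are pure phases for real momenta, each $`X_{ij}`$ in (31) is unitary and hence so is $`S`$. A small numerical illustration for $`N=3`$ (the momenta, $`h`$ and the spin dimension are illustrative numbers, and the assignment of particle slots to the permutation matrices is schematic):

```python
import numpy as np

n, N = 2, 3
dim = n ** N
h = -0.35
ks = [-1.2, 0.4, 2.1]              # real test momenta, k1 < k2 < k3

def perm(a, b):
    # permutation matrix exchanging spin slots a and b (0-based) of the n^N column
    P = np.zeros((dim, dim))
    for col in range(dim):
        e = np.zeros(dim); e[col] = 1.0
        P[:, col] = np.swapaxes(e.reshape((n,) * N), a, b).reshape(dim)
    return P

def Y(i, j):
    # scalar factor of type (23): a pure phase for real momenta and real h
    u = ks[i] - ks[j]
    return (1j * u + 2 * h) / (1j * u - 2 * h)

# S for N = 3 as in (31): S = [X_21 X_31][X_32], with X_ij = Y_ij P^{ij}
S = (Y(1, 0) * perm(0, 1)) @ (Y(2, 0) * perm(0, 2)) @ (Y(2, 1) * perm(1, 2))
print(np.allclose(S @ S.conj().T, np.eye(dim)))   # S is unitary
```

Each factor is a unimodular phase times a permutation, so unitarity of the product is automatic; the sketch only makes that concrete.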
We have extensively investigated the integrability of one dimensional quantum mechanical many-body problems with general contact interactions. Besides the repulsive or attractive $`\delta `$ and anti-$`\delta `$ function interactions, there is another integrable one-parameter family associated with separated boundary conditions. From our calculations it is clear that these are all the integrable systems for one-dimensional quantum many-particle models of identical particles (with fermionic or bosonic statistics) and contact interactions. Here possible contact couplings of the spins of two particles are not taken into account. A further study along this direction would possibly give rise to more interesting integrable quantum many-body systems.
ACKNOWLEDGEMENTS: We would like to thank P. Kulish, P. Kurasov and V. Rittenberg for helpful comments.
## 1 Introduction
The $`2S`$ state of muonic hydrogen offers interesting possibilities to do precision tests of QED and to determine the proton RMS charge radius (see and references therein). An isolated $`(\mu p)_{2S}`$ is metastable with a lifetime mainly determined by muon decay (about $`2.2\mu `$s). In liquid or gaseous hydrogen the lifetime of the $`2S`$ state is shortened considerably because of Stark mixing followed by $`2P\to 1S`$ radiative transitions. If a sizeable fraction of muonic hydrogen atoms ends up in the $`2S`$ state with a sufficiently long lifetime, then precision laser experiments with this metastable $`2S`$ state become feasible. If the $`(\mu p)_{2S}`$ has kinetic energy below the $`2P`$ threshold (laboratory kinetic energy $`T_0=0.3`$ eV), then Stark transitions $`2S\to 2P`$ are energetically forbidden<sup>1</sup><sup>1</sup>1Some quenching will, however, occur because $`2S-2P`$ mixing during collisions allows radiative transitions to the $`1S`$ state (See Refs. ).. The metastable fraction of $`(\mu p)_{2S}`$ in hydrogen depends on the kinetic energy at the time of formation.
The first estimate of the $`(\mu p)_{2S}`$ lifetime was done by Kodosky and Leon . They calculated the inelastic $`2S\to 2P`$ cross section in a semiclassical framework and concluded that the $`2S`$ state for $`T>T_0`$ will be rapidly depopulated except for very small target densities. However, this model did not consider deceleration due to elastic $`2S\to 2S`$ scattering. A more elaborate approach was developed by Carboni and Fiorentini . They calculated both elastic $`2S\to 2S`$ and inelastic $`2S\to 2P`$ cross sections quantum mechanically and estimated the probability for a $`(\mu p)_{2S}`$ atom to slow down below threshold from a given initial energy. The results of their calculations show that a sizeable fraction of $`(\mu p)_{2S}`$ formed at kinetic energies less than 1.3 eV can slow down below the $`2P`$ threshold.
The metastable fraction of $`(\mu p)_{2S}`$ per stopped muon can in principle be calculated in a cascade model which takes the different processes (Stark mixing, radiative decays, etc.) into account . However, if one knows the fraction of stopped muons which reaches the $`2S`$ state (regardless of energy) and the kinetic energy distribution on arrival in this state, then it is sufficient to treat the final part of the cascade ($`n=1,2`$). This information can be obtained from experiments. The fraction of stopped muons which arrives in the $`2S`$ state can be determined from the radiative yields : it was found in Ref. that between 2% and 7% of the $`\mu p`$ reach the $`2S`$ state in the pressure range $`0.33800`$ hPa. The kinetic energy distribution for $`\mu p`$ in the $`1S`$ state, which for low pressures is expected to be very similar to that of the $`2S`$ state just after arrival, can be obtained from diffusion experiments . The median energy is found to be about 1.5 eV for a target pressure of 0.25 hPa .
The purpose of this paper is to calculate the fraction of $`\mu p`$ in the $`2S`$ state which reaches kinetic energies below the $`2P`$ threshold as a function of the initial kinetic energy $`T`$. We will also present a fully quantum mechanical calculation of $`\mu p+`$H differential cross sections which are used in our Monte Carlo simulation of the kinetics.
The paper is organized as follows. The theoretical framework of the quantum mechanical calculation of the cross sections is outlined in Section 2. The calculated cross sections are discussed in Section 3. Section 4 presents the calculations of the metastable $`2S`$ fraction. The summary of the results is given in Section 5.
Unless otherwise stated, atomic units ($`\mathrm{\hbar }=a_0=m_e=1`$) are used throughout this paper. The unit of cross section is $`a_0^2=2.8\times 10^{-17}\mathrm{cm}^2`$.
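For reference, the quoted value follows directly from the Bohr radius $`a_0\approx 0.529\times 10^{-8}`$ cm:

```python
a0_cm = 0.52918e-8      # Bohr radius in cm
print(a0_cm ** 2)       # ~2.8e-17 cm^2, the cross-section unit used in the paper
```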
## 2 Quantum Mechanical Approach to the Calculation of the Cross Sections $`(\mu p)_{nl}+\mathrm{H}\to (\mu p)_{nl^{\prime }}+\mathrm{H}`$
For the benefit of the reader, we briefly describe the quantum mechanical calculation of $`\mu p+`$H scattering in the coupled-channel approximation. The three-body wave function $`\psi (\rho ,\mathbf{R})`$ where the coordinates $`\rho `$ and $`\mathbf{R}`$ are defined in Fig. 1 satisfies the Schrödinger equation
$$H\psi (\rho ,\mathbf{R})=E\psi (\rho ,\mathbf{R})$$
(1)
where the Hamiltonian is given by
$$H=-\frac{\nabla _R^2}{2\mu }+H_{\mu p}+V(\rho ,\mathbf{R}).$$
(2)
Here $`\mu =m_pm_{\mu p}/(m_p+m_{\mu p})`$ is the reduced mass of the $`p\mu p`$ system, with $`m_p`$ being the proton mass and $`m_{\mu p}`$ the total $`\mu p`$ mass. The two-body Hamiltonian of the $`\mu p`$ atom, $`H_{\mu p}`$, includes the Coulomb interaction and a term that describes the shift of the $`nS`$ state (mainly because of the vacuum polarization) with respect to the states with $`l>0`$. For the case $`n=2`$ considered below, the $`2S`$ state is lower than the $`2P`$ by $`\mathrm{\Delta }E=0.21`$ eV. The much smaller fine and hyperfine structure splitting is neglected. The potential $`V(\rho ,\mathbf{R})`$ describes the interaction of the $`\mu p`$ system with the target proton<sup>2</sup><sup>2</sup>2For the sake of simplicity, we ignore the fact that the protons are identical particles.:
$$V(\rho ,\mathbf{R})=\frac{1}{|\mathbf{R}-\epsilon \rho |}-\frac{1}{|\mathbf{R}+(1-\epsilon )\rho |}$$
(3)
where $`\epsilon =m_\mu /m_{\mu p}=0.101`$.
Equation (1) is solved in the coupled-channel approximation by using a finite number of basis functions to describe the state of the $`\mu p`$. For the problem of $`nlm\to nl^{\prime }m^{\prime }`$ scattering considered in this paper, the set of $`n^2`$ eigenstates with principal quantum number $`n`$ has been selected but the basis can be extended in a straightforward manner. With $`n`$ fixed, let $`\chi _{lm}(\rho )`$ denote the normalized eigenfunctions of the atomic Hamiltonian $`H_{\mu p}`$ with the energy $`E_{nl}`$, the square of the $`\mu p`$ internal angular momentum $`\mathbf{l}^2`$ (eigenvalue $`l(l+1)`$) and its projection along the $`z`$-axis $`l_z`$ (eigenvalue $`m`$). The total wave function $`\psi (\rho ,\mathbf{R})`$ is expanded as follows
$$\psi (\rho ,\mathbf{R})=R^{-1}\sum _{JMLl}\xi _{JLl}(R)\mathcal{Y}_{Ll}^{JM}(\mathrm{\Omega },\rho )$$
(4)
where
$$\mathcal{Y}_{Ll}^{JM}(\mathrm{\Omega },\rho )=\sum _{M_Lm}\langle LlM_Lm|JM\rangle Y_{LM_L}(\mathrm{\Omega })\chi _{lm}(\rho ),\mathrm{\Omega }=\mathbf{R}/R.$$
(5)
are simultaneous eigenfunctions of $`\mathbf{J}^2`$, $`\mathbf{L}^2`$, $`\mathbf{l}^2`$ and $`J_z`$ with eigenvalues $`J(J+1)`$, $`L(L+1)`$, $`l(l+1)`$ and $`M`$, respectively. Here $`\mathbf{L}`$ is the $`p\mu p`$ relative angular momentum, $`\mathbf{J}=\mathbf{L}+\mathbf{l}`$ is the total orbital angular momentum of the system. For a given value of $`J`$ the system of radial Schrödinger equations has the form
$$\left(-\frac{1}{2\mu }\frac{d^2}{dR^2}+\frac{L(L+1)}{2\mu R^2}+E_{nl}-E\right)\xi _{JLl}(R)+\sum _{L^{\prime }l^{\prime }}\langle L^{\prime }l^{\prime }JM|V|LlJM\rangle \xi _{JL^{\prime }l^{\prime }}(R)=0$$
(6)
where the potential matrix elements are calculated in the basis (5):
$$\langle \mathrm{\Omega },\rho |LlJM\rangle =\mathcal{Y}_{Ll}^{JM}(\mathrm{\Omega },\rho ).$$
(7)
The matrix elements of the potential (3) have been calculated analytically; the corresponding formulas are rather lengthy and will be given elsewhere. From the asymptotic form of the solution of the $`n^2`$ coupled<sup>3</sup><sup>3</sup>3 Because of parity conservation the equations decouple into two sets of respectively $`n(n+1)/2`$ and $`n(n-1)/2`$ coupled equations. equations (6), the scattering matrix $`S`$ is extracted and cross sections can be calculated using standard formulas. The scattering amplitude for $`nlm\to nl^{\prime }m^{\prime }`$ is given by
$$f_{nlm\to nl^{\prime }m^{\prime }}(\mathrm{\Omega })=\frac{4\pi }{2i\sqrt{k^{\prime }k}}\sum _{L^{\prime }LM_L^{\prime }}i^{L-L^{\prime }}Y_{L^{\prime }M_L^{\prime }}(\mathrm{\Omega })\langle L^{\prime }l^{\prime }M_L^{\prime }m^{\prime }|S-1|Ll0m\rangle Y_{L0}^{\ast }(0)$$
(8)
where
$$\langle L^{\prime }l^{\prime }M_L^{\prime }m^{\prime }|S|Ll0m\rangle =\sum _{JM}\langle L^{\prime }l^{\prime }M_L^{\prime }m^{\prime }|JM\rangle \langle JM|Ll0m\rangle \langle L^{\prime }l^{\prime }J|S|LlJ\rangle .$$
(9)
As a consequence of rotational symmetry, the matrix elements $`\langle L^{\prime }l^{\prime }J|S|LlJ\rangle `$ do not depend on the quantum number $`M`$. The differential cross sections for the transitions $`nl\to nl^{\prime }`$ are given by
$$\frac{d\sigma _{nl\to nl^{\prime }}}{d\mathrm{\Omega }}=\frac{1}{(2l+1)}\sum _{m^{\prime }m}\frac{k^{\prime }}{k}|f_{nlm\to nl^{\prime }m^{\prime }}|^2$$
(10)
where $`k`$ and $`k^{\prime }`$ are the magnitudes of the relative momenta in the initial and final state, correspondingly.
The total cross sections of the transitions $`nl\to nl^{\prime }`$ have the form
$$\sigma _{nl\to nl^{\prime }}=\frac{1}{(2l+1)}\frac{\pi }{k^2}\sum _J(2J+1)\sum _{LL^{\prime }}|\langle L^{\prime }l^{\prime }J|S-1|LlJ\rangle |^2$$
(11)
and the corresponding transport cross sections are given by
$$\sigma _{nl\to nl^{\prime }}^{tr}=\int d\mathrm{\Omega }(1-\mathrm{cos}\theta )\frac{d\sigma _{nl\to nl^{\prime }}}{d\mathrm{\Omega }}.$$
(12)
In order to treat the long distance behaviour of the $`\mu p+`$H interaction properly, the effect of electron screening must be taken into account. This is done by multiplying the matrix elements $`\langle L^{\prime }l^{\prime }JM|V|LlJM\rangle `$ in Eq. (6) by the screening factor
$$F(R)=(1+2R+2R^2)e^{-2R}$$
(13)
which corresponds to the assumption that the electron of the hydrogen atom remains unaffected in the $`1S`$ state during the collision.
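The limits of (13) are easy to check: $`F(0)=1`$ (no screening deep inside the electron cloud) and $`F(R)\to 0`$ for $`R\gg 1`$, and $`F`$ decreases monotonically in between. A one-line numerical sketch (the sample radii are illustrative):

```python
import math

def F(R):
    # screening factor of Eq. (13), R in units of the electron Bohr radius
    return (1.0 + 2.0 * R + 2.0 * R * R) * math.exp(-2.0 * R)

print(F(0.0))   # 1.0: unscreened interaction at short distance
print(F(5.0))   # small: interaction strongly screened outside the electron cloud
```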
For $`p\mu p`$ separations $`R`$ smaller than a few units of the $`\mu p`$ Bohr radius, $`a_\mu =0.0054`$, our model cannot be expected to be valid because the truncated set of basis functions in Eq. (4) is not sufficient to describe the total three-body wave function $`\psi (\rho ,\mathbf{R})`$. Furthermore, exchange symmetry between the two protons must be taken into account. We can estimate the sensitivity of our results to the short range part of the interaction by using the dipole approximation for the potential (3). The interaction in the dipole approximation is given by the first nonzero term in the expansion of Eq. (3) in inverse powers of $`R`$:
$$V_{DA}(\rho ,\mathbf{R})=\frac{\rho \cdot \mathbf{R}}{R^3}=\frac{z}{R^2}.$$
(14)
A certain problem arises in the dipole approximation for a few low partial waves ($`J\le 5`$): the Schrödinger equation becomes ill defined because of the attractive $`1/R^2`$ singularity<sup>4</sup><sup>4</sup>4This is a problem only in the dipole approximation. The exact matrix elements are all finite for $`R=0`$.. Following Ref. we cure this difficulty by placing an infinitely repulsive sphere of radius $`r_{\mathrm{min}}`$ around the target proton. The sensitivity of the results to this cutoff parameter $`r_{\mathrm{min}}`$ will be used below as an estimate of the importance of a detailed description of the interaction at short distances.
In this paper we are interested in the $`2l\to 2l^{\prime }`$ transitions, and only the four $`n=2`$ states are used to describe the $`\mu p`$ part of the total wave function. The four coupled second order equations (6) are solved numerically for $`J=0,1,\dots ,J_{\mathrm{max}}`$ where the highest partial wave $`J_{\mathrm{max}}`$ is chosen large enough to ensure the convergence of the partial wave expansion at given collision energy.
Until now we have considered the $`\mu p`$ collisions with the atomic target. Treating the collisions with hydrogen molecules is a formidable task (even for $`\mu p`$ in the ground state ) which we do not attempt here. The inelastic threshold $`T_0`$ for $`2S\to 2P`$ transitions is 0.44 eV for an atomic target and 0.33 eV for a molecular target. To get the correct threshold value for the inelastic cross sections one can substitute the atomic hydrogen mass with the molecular one. By varying $`r_{\mathrm{min}}`$ and the target mass one can obtain an estimate of the theoretical uncertainty of our approach.
The present model for calculating cross sections for $`(\mu p)_{n=2}+`$H scattering is a straightforward extension of the one by Carboni and Fiorentini . There are three major differences: we solve the four coupled differential equations exactly while Ref. treated non-adiabatic terms as a perturbation. The second difference is that we include the full angular coupling while Ref. omitted some minor terms. Finally, we use exact matrix elements for the $`\mu pp`$ interaction while Ref. considers only the dipole approximation. These approximations were justified by Ref. as follows: for small kinetic energies the velocity of the muonic hydrogen atom is so low that the motion can be regarded as nearly adiabatic. The angular coupling terms that were omitted in Ref. are of the order 1 which is much smaller than the remaining angular coupling terms of the order of $`J(J+1)`$ (angular momenta as high as $`J\sim 15`$ contribute to the $`2S\to 2P`$ cross section at $`T=1`$ eV). The electric field from a hydrogen atom is strong enough to induce Stark transitions in the $`\mu p`$ for distances $`R\lesssim a_0`$. Therefore, the regions where the dipole approximation is valid ($`R\gg a_\mu `$) are supposed to be most important. The diffusion experiments have shown that a sizeable fraction of $`(\mu p)_{n=2}`$ atoms has kinetic energies of several eV. In this high energy region the non-adiabatic couplings become strong and the model of Ref. cannot be expected to give accurate results.
The problem of $`\mu p+`$H scattering has been treated fully quantum mechanically in Refs. . However, these calculations did not include the $`2S-2P`$ energy splitting and the question of the metastability of $`(\mu p)_{2S}`$ was not addressed. Stark mixing has been studied in the semiclassical straight-line-trajectory approximation in Refs. . We have calculated the $`2S\to 2P`$ Stark mixing cross sections in the semiclassical approach in order to compare with the quantum mechanical results. A more detailed comparison between semiclassical and quantum mechanical calculations of $`\mu p+\mathrm{H}`$ scattering will be given elsewhere.
## 3 The Cross Sections of $`(\mu p)_{2l}`$ Scattering from Hydrogen
Using the method described in Section 2 the S-matrix has been calculated for the laboratory kinetic energy range $`T_0<T<6`$ eV. Unless otherwise explicitly stated, the results shown are obtained with the exact potential (3). Electron screening is always taken into account. Both atomic and molecular mass of the target ($`M_{\mathrm{target}}=M_\mathrm{H}`$, $`M_{\mathrm{H}_2}`$) have been used.
Figure 2 shows the $`2S\to 2S`$ transport cross section and the $`2S\to 2P`$ Stark mixing cross section in comparison with the results from Ref. (the molecular mass is used in both cases). There is a good agreement for the Stark mixing $`2S\to 2P`$ cross section below 1.7 eV. For the $`2S\to 2S`$ transport cross section, the agreement is fair, with the discrepancy being typically less than 30%.
In order to estimate the theoretical uncertainty of our approach, we calculated the cross sections in the dipole approximation with short distance cutoff $`0.01\le r_{\mathrm{min}}\le 0.05`$ for $`M_{\mathrm{target}}=M_\mathrm{H}`$ and $`M_{\mathrm{target}}=M_{\mathrm{H}_2}`$. For fixed value of $`M_{\mathrm{target}}`$, the cross sections for the three reactions $`2S\to 2P`$, $`2P\to 2S`$ and $`2P\to 2P`$ are weakly dependent on $`r_{\mathrm{min}}`$. This shows that these reactions are dominated by the long range part of the interaction $`V(\rho ,\mathbf{R})`$. The only process rather sensitive to the value of $`r_{\mathrm{min}}`$ is the elastic scattering $`2S\to 2S`$. This can be understood by considering the adiabatic energy curves for low angular momentum. The energy curve which corresponds asymptotically to the $`2S`$ state is attractive while those corresponding to the three $`2P`$ states are repulsive. Therefore, in the adiabatic approximation the $`2S\to 2S`$ cross sections are expected to depend on the short range part of the potential while this is not the case for $`2P\to 2P`$ scattering. At energies above 2 eV the semiclassical approximation is in a good agreement with our quantum mechanical results. However, this semiclassical approximation does not treat the threshold behaviour correctly.
A more detailed comparison of the approximations used can be done by plotting the $`J`$ dependence of the partial wave cross sections $`\sigma _J`$ at fixed energy as shown in Fig. 3. The quantum mechanical results obtained for the exact potential and the dipole approximation with the short range cutoff agree well for angular momentum $`J>5`$ while the lowest partial waves are sensitive to the short range behaviour of the approximating potentials. The reason is that for $`J>5`$ the centrifugal barrier is strong enough to prevent the $`\mu p`$ from approaching close to the target proton. For $`J\le 5`$ the $`\mu p`$ can get very close to the proton and the use of a small number of atomic orbitals is not sufficient: a better description in this region is needed in a true three-body framework. It is seen that a substantial part of the $`2S\to 2S`$ cross section comes from partial waves with low $`J`$, so this result also explains why the uncertainty of the elastic $`2S`$ cross section is larger than for the other reactions. The semiclassical calculation can be compared with the partial wave cross sections by using the relation between the impact parameter $`\rho `$, the relative momentum $`k`$ and the angular momentum $`J`$
$$k\rho =J+1/2.$$
(15)
For large $`J`$ (large impact parameter) there is a very good agreement between the semiclassical contribution to the $`2S\to 2P`$ cross section and the quantum mechanical partial wave result. An example of the differential cross sections for the reaction $`2S\to 2S`$ given in Fig. 4 shows a characteristic pattern with a strong forward peak and a set of maxima and minima, which is in qualitative agreement with Ref. where the adiabatic approach was used.
## 4 The Surviving Fraction of the Metastable $`(\mu p)_{2S}`$ State
The surviving metastable fraction $`f(T)`$ is defined as the probability that the $`\mu p`$ atom in the $`2S`$ state with initial kinetic energy $`T`$ reaches the energy below the $`2P`$ threshold by slowing down in elastic collisions. Assuming that the rate of the radiative transition $`2P\to 1S`$, $`\lambda _{2P\to 1S}=1.2\times 10^{11}\mathrm{s}^{-1}`$, is much larger than the Stark mixing rate<sup>5</sup><sup>5</sup>5With our result for the Stark mixing rate $`2P\to 2S`$ at 1 eV as a function of the target density $`N`$, $`\lambda _{2P\to 2S}=Nv\sigma _{2P\to 2S}\approx 4\times 10^{12}(N/N_0)\mathrm{s}^{-1}`$ where $`N_0`$ is the liquid hydrogen density $`4.25\times 10^{22}`$ atoms/cm<sup>3</sup>, the range of validity is $`N\lesssim 0.03N_0`$., the surviving fraction $`f(T)`$ was estimated in by the formula
$$f(T)=\mathrm{exp}\left(-\frac{(m_{\mu p}+M_{\mathrm{target}})^2}{2m_{\mu p}M_{\mathrm{target}}}\int _{T_0}^T\frac{\sigma _{2S\to 2P}(T^{\prime })}{T^{\prime }\sigma _{2S\to 2S}^{tr}(T^{\prime })}dT^{\prime }\right)$$
(16)
with $`M_{\mathrm{target}}=M_{\mathrm{H}_2}`$. It was found that a sizeable fraction of $`(\mu p)_{2S}`$ atoms formed at kinetic energies below 1.3 eV slows down below threshold.
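To see the structure of estimate (16), one can adopt the toy assumption of an energy-independent ratio $`r=\sigma _{2S\to 2P}/\sigma _{2S\to 2S}^{tr}`$ (this assumption is not made in the paper, where both cross sections depend on energy); the integral then gives $`f(T)=(T_0/T)^{Ar}`$ with $`A=(m_{\mu p}+M_{\mathrm{target}})^2/(2m_{\mu p}M_{\mathrm{target}})`$. The sketch below checks a direct quadrature of (16) against this closed form; the masses and the ratio are illustrative numbers only:

```python
import math

# approximate masses in GeV (illustrative; any consistent units work, A is dimensionless)
m_mup, M_target = 1.044, 1.876       # mu-p atom and H2 molecule
A = (m_mup + M_target) ** 2 / (2.0 * m_mup * M_target)
r = 0.05                             # toy constant ratio sigma_2S2P / sigma^tr_2S2S
T0 = 0.33                            # 2P threshold (eV) for a molecular target

def f_quad(T, steps=20000):
    # direct midpoint-rule evaluation of Eq. (16) with the constant-ratio toy model
    dT = (T - T0) / steps
    integral = sum(r / (T0 + (s + 0.5) * dT) * dT for s in range(steps))
    return math.exp(-A * integral)

def f_closed(T):
    return (T0 / T) ** (A * r)

print(round(f_quad(1.0), 4), round(f_closed(1.0), 4))   # the two agree
```

The closed form makes the qualitative behaviour explicit: $`f(T)`$ falls off as a power of the initial energy, more steeply for a larger inelastic-to-elastic ratio.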
Equation (16) is based on the approximation of continuous energy loss. To provide a more realistic treatment of the evolution in kinetic energy we use a Monte Carlo program based on the differential cross sections for the four processes $`2S\to 2S`$, $`2S\to 2P`$, $`2P\to 2S`$ and $`2P\to 2P`$. In addition to the collisional processes, the $`2P\to 1S`$ radiative transition is also included in the code. The fate of a $`\mu p`$ formed in the $`2S`$ state with kinetic energy $`T`$ is thus either to undergo the $`2P\to 1S`$ radiative transition after the Stark mixing $`2S\to 2P`$ or to end up in the $`2S`$ state with kinetic energy below the threshold with probability $`f(T)`$. Figure 5 shows the rates, $`\lambda _{2l\to 2l^{\prime }}=N_0v\sigma _{2l\to 2l^{\prime }}`$, for the collisional transitions in liquid hydrogen in comparison with the radiative deexcitation rate $`\lambda _{2P\to 1S}`$. In liquid hydrogen the Stark mixing rates are so large that the $`\mu p`$ states are expected to be statistically populated for kinetic energies $`T\gtrsim 2`$ eV (where threshold effects can be neglected).
Figure 6 shows the surviving fraction $`f(T)`$ calculated with the Monte Carlo program for target density $`10^{-6}<N/N_0<10^{-2}`$. The approximation (16) gives somewhat higher values for the survival probability than the exact kinetics calculation at $`T<1.4`$ eV. The Monte Carlo results at high energies ($`T>1.5`$ eV) are significantly larger than those obtained from Eq. (16) where continuous energy loss is assumed. The reason is that the backward scattering (see Fig. 4) with maximum possible energy loss plays an important role in bringing the $`(\mu p)_{2S}`$ atoms below the $`2P`$ threshold for higher energies.
In order to estimate the theoretical uncertainty of $`f(T)`$ we performed the Monte Carlo calculation with the cross sections obtained in the dipole approximation. In all cases the Monte Carlo calculations are consistent with the results corresponding to the exact potential: at 2 eV the surviving fraction is in the range $`15-20\%`$ for atomic target mass and $`10-16\%`$ for molecular target mass. The use of the target mass $`M_\mathrm{H}`$ instead of $`M_{\mathrm{H}_2}`$ leads to somewhat higher survival fractions because of a simple kinematical reason: the loss of kinetic energy in a collision with the same angle in the CMS is larger for the target of smaller mass and, furthermore, the inelastic threshold is higher for the scattering from the atomic target. Merely substituting the atomic mass with the molecular mass for the hydrogen target does not account for the additional energy loss due to rotational and vibrational excitations of H<sub>2</sub>. One would therefore expect that the slowing down process is more efficient than this model suggests and the survival probability calculated with molecular target mass is underestimated. The opposite is true for calculations with atomic target: here the transfer of kinetic energy from the $`\mu p`$ to the individual hydrogen atoms is not restricted by molecular bindings. Thus results with atomic target probably give somewhat optimistic results for the surviving fraction.
## 5 Conclusion
The main results of this paper can be summarized as follows. The detailed Monte Carlo kinetics calculations predict the surviving metastable fraction of the $`2S`$ state of $`\mu p`$ to be larger than $`50\%`$ for the initial kinetic energy 1 eV in agreement with earlier estimates . For higher initial kinetic energies, our result is significantly larger than the earlier estimates: the surviving metastable fraction for $`T=5`$ eV is about $`4\%`$. This effect is due to a sizeable contribution of backward scattering in elastic collisions.
Our Monte Carlo calculations are based on the cross sections calculated in the coupled-channel approximation. The main limitation of this method for the problem concerned comes from the use of a small number of atomic states to describe the $`\mu p`$ system and the neglect of the molecular structure of the target. A more accurate treatment of the $`\mu pp`$ three-body problem is needed in order to do reliable calculations for a few lowest partial wave amplitudes. Our approach, however, is well suited for the description of the collisions with the characteristic scale of impact parameters of the order of $`a_0`$ which is exactly the case for the problem involved. Therefore our results provide a significantly improved basis for a better estimate of the metastable $`(\mu p)_{2S}`$ fraction which is very important for the planned Lamb-shift experiment at PSI .
Further details and more results concerning the scattering of the $`\mu p`$ atoms in the excited states $`n\ge 2`$ will be published elsewhere.
## Acknowledgement
We thank P. Hauser, F. Kottmann, L. Simons, D. Taqqu, and R. Pohl for fruitful and stimulating discussions and M.P. Locher for useful comments.
# WSRT 1.4 GHz Observations of the Hubble Deep Field
## 1. Background, Observations and Preliminary Results
Deep Radio observations of the Hubble Deep Field region are now advancing our understanding of the faint microJy radio source population. In particular, VLA and MERLIN observations of the HDF (Richards et al. 1999, Muxlow et al. 1999) suggest that faint sub-mJy and microJy radio sources are mostly identified with star forming galaxies, often located at moderate to high redshifts.
In the period April-May 1999 we observed the HDF and HFF with the newly upgraded Westerbork Synthesis Radio Telescope (WSRT) at 1.4 GHz for a total of 72 hours. Our aim was to utilise the WSRTโs superb brightness sensitivity to extend the investigation of the microJy source population to extended radio sources that might otherwise be resolved out or go undetected in the previous higher resolution or higher frequency radio observations.
Fig. 1 shows the WSRT image of the HDF/HFF convolved with a circular 15 arcsecond Gaussian restoring beam. This represents the deepest image made with the WSRT to date, reaching an rms noise level of $`8\mu `$Jy/beam. We detect radio emission from galaxies both in the HDF and HFF which have not been previously detected by recent MERLIN or VLA studies of the field. More than 30 new ($`>5\sigma `$) detections have been obtained in a $`10\times 10`$ arcmin field, centred on the HDF. Some of these sources are blends of two or more components, but a large fraction are discrete. Three of the new 1.4 GHz sources are located in the HDF itself, and have infra-red ISO detections. Two of the three are associated with spiral galaxies and the third is an irregular galaxy (the latter is detected by the VLA at 8.4 but not 1.4 GHz). The new WSRT detections indicate that a significant fraction of starburst galaxies may exhibit more extended radio emission than the previous (higher-resolution) VLA and MERLIN observations suggest.
Muxlow, T.W.B., Wilkinson, P.N., Richards, A.M.S., Kellermann, K.I., Richards, E.A., Garrett, M.A. (1999) New Astronomy Reviews, 43, 623.
Richards, E. A., Kellermann, K. I., Fomalont, E. B., Windhorst, R. A., Partridge, R. B. (1998) AJ, 116, 1039.
# Covariant two-point function for linear gravity in de Sitter space
## 1 Introduction
The graviton propagator on de Sitter (dS) space (in its usual linear approximation for the gravitational fields) has a pathological behaviour (infrared divergence) for widely separated points \[Allen, Turyn, $`1987`$; Floratos, Iliopoulos, Tomaras, $`1986`$; Antoniadis, Mottola, $`1991`$\]. Some authors proposed that the infrared divergence could rather be exploited in order to create instability of the dS universe \[Ford, $`1985`$; Antoniadis, Iliopoulos, Tomaras, $`1986`$\]. The field operator for linear gravity in dS space has been considered in this way by Tsamis and Woodard in terms of flat coordinates which cover only one-half of the dS hyperboloid \[Tsamis, Woodard, $`1992`$\]. They have examined the possibility of quantum instability and they have found a quantum field which breaks dS invariance. However, we show that this pathological behaviour of the traceless part of the field disappears if one uses the Gupta-Bleuler vacuum defined by \[de Bievre, Renaud, $`1998`$; Gazeau, Renaud, Takook, $`1999`$\]. On the other hand, such a procedure is unsuccessful for the pure-trace part of the field (conformal sector). In the general relativity framework, one cannot associate a dynamics with the conformal sector because the physical content of this field is not apparent. It is coordinate or gauge dependent. Therefore one may think that its troublesome behaviour originates from imposing the gauge invariance and has no actual physical consequence. But, in the presence of a matter quantum field, that part of the metric acquires a dynamical content and the problem appears in any attempt to quantize it.
In a previous paper, we have shown that one can write the rank-2 "massive" tensor field (divergenceless or "transverse" and traceless) in terms of a projection operator and a scalar field. At the "massless" limit, there appears a singularity in the projection operator. This type of singularity appears precisely because of the divergenceless condition. By dropping the divergenceless condition, we can make the mentioned singularity in the tensor field (for its traceless part only) disappear. In quantizing this field, there appears another singularity in the Wightman two-point function, as in the case of the "massless" minimally coupled scalar fields \[Allen, Folacci, $`1987`$\]. The latter type of singularity appears because of the zero mode problem for the Laplace-Beltrami operator on dS space. In order to remove it, we must follow the procedure already used for a completely covariant quantization of the minimally coupled scalar field \[Gazeau, Renaud, Takook, 1999\].
The organization of this paper is the following. Section $`2`$ is devoted to the traceless field and it is explained how the choice of the Gupta-Bleuler vacuum eliminates pathological behaviour. In Section $`3`$ we examine the questions raised by the pure-trace part. Section $`4`$ is a brief conclusion on the inflationary universe scenario.
## 2 Traceless part
Here, we briefly recall our de Sitterian notations. The de Sitter space-time is identified with the four-dimensional one-sheeted hyperboloid
$$X_H=\{x\in \text{I R}^5;x^2=\eta _{\alpha \beta }x^\alpha x^\beta =-H^{-2}\},\alpha ,\beta =0,1,2,3,4,$$
(1)
where $`\eta _{\alpha \beta }=`$diag$`(1,-1,-1,-1,-1)`$. The de Sitter metric is
$$ds^2=\eta _{\alpha \beta }dx^\alpha dx^\beta =g_{\mu \nu }^{dS}dX^\mu dX^\nu ,\mu =0,1,2,3,$$
(2)
where $`X^\mu `$ are the $`4`$ space-time coordinates on the dS hyperboloid. We use the tensor field notation $`K_{\alpha \beta }(x)`$ with respect to the ambient space, and the transversality condition $`x\cdot K(x)=0`$ is imposed. In this notation, it is simpler to express the tensor field (and also the two-point function) in terms of scalar fields.
The two-point function for the "massive" spin-$`2`$ field $`K_{\alpha \beta }^{tt}(x)`$ ("transverse" or divergenceless and traceless) is defined by \[Gazeau, Takook\]
$$\mathcal{W}_{\alpha \beta \alpha ^{\prime }\beta ^{\prime }}(x,x^{\prime })=\langle \mathrm{\Omega },K_{\alpha \beta }^{tt}(x)K_{\alpha ^{\prime }\beta ^{\prime }}^{tt}(x^{\prime })\mathrm{\Omega }\rangle $$
$$\mathcal{W}_{\alpha \beta \alpha ^{\prime }\beta ^{\prime }}(x,x^{\prime })=D_{\alpha \beta \alpha ^{\prime }\beta ^{\prime }}^{tt}(x,x^{\prime })\mathcal{W}(x,x^{\prime }).$$
(3)
$`\mathcal{W}(x,x^{\prime })`$ is the Wightman two-point function for the massive scalar field on dS space.
$`D_{\alpha \beta \alpha ^{\prime }\beta ^{\prime }}^{tt}(x,x^{\prime })`$ is a projection tensor, which satisfies the "divergenceless" and traceless conditions. In the limit of the "massless" spin-$`2`$ field there appear two types of singularity in the two-point function. The first one lies in the projection tensor $`D_{\alpha \beta \alpha ^{\prime }\beta ^{\prime }}^{tt}(x,x^{\prime })`$ and it disappears if one fixes the gauge (the dropping of the divergenceless condition). The other one lies in the scalar Wightman two-point function $`\mathcal{W}(x,x^{\prime })`$ (the minimally coupled scalar field) and it disappears if we follow the procedure presented in \[Gazeau, Renaud, Takook, $`1999`$\]. Then the two-point function is defined by \[Gazeau, Renaud, Takook\]
$$\mathcal{W}_{\alpha \beta \alpha ^{\prime }\beta ^{\prime }}(x,x^{\prime })=\langle \mathrm{\Omega },K_{\alpha \beta }^t(x)K_{\alpha ^{\prime }\beta ^{\prime }}^t(x^{\prime })\mathrm{\Omega }\rangle $$
$$\mathcal{W}_{\alpha \beta \alpha ^{\prime }\beta ^{\prime }}(x,x^{\prime })=\mathrm{\Delta }_{\alpha \beta \alpha ^{\prime }\beta ^{\prime }}^t(x,\partial ;x^{\prime },\partial ^{\prime })\mathcal{W}(x,x^{\prime }),$$
(4)
where $`\mathrm{\Delta }^t(x,\partial ;x^{\prime },\partial ^{\prime })`$ is a projection tensor which satisfies the traceless condition. $`\mathcal{W}`$ is the two-point function for the minimally coupled scalar field in the Gupta-Bleuler vacuum \[Takook, $`1997`$\]
$$\mathcal{W}(x,x^{\prime })=\frac{iH^2}{4\pi }\epsilon (x^0-x^{\prime 0})[\delta (1-\mathcal{Z}(x,x^{\prime }))-\theta (\mathcal{Z}(x,x^{\prime })-1)],$$
(5)
where $`\mathcal{Z}=-H^2x\cdot x^{\prime }`$ and $`\epsilon (x^0-x^{\prime 0})=\{\begin{array}{ccc}\hfill 1& x^0>x^{\prime 0}& \\ \hfill 0& x^0=x^{\prime 0}& \\ \hfill -1& x^0<x^{\prime 0}.& \end{array}`$
## 3 Conformal sector
The tensor field that we considered in the previous section is traceless. But in the general case the tensor field consists of a traceless part and a pure-trace part (conformal sector):
$$K_{\alpha \beta }(x)=K_{\alpha \beta }^t(x)+K_{\alpha \beta }^{pt}(x).$$
(6)
The pure trace part can be written in the form
$$K_{\alpha \beta }^{pt}(x)=\frac{1}{4}\theta _{\alpha \beta }\psi ,$$
where $`\psi `$ is a scalar field and $`\theta _{\alpha \beta }=\eta _{\alpha \beta }+H^2x_\alpha x_\beta `$. With a certain choice of the gauge condition , we are able to write down the following field equation for the scalar field $`\psi `$ \[Gazeau, Renaud, Takook\]
$$(\mathrm{\Box }_H-5H^2)\psi =0.$$
(7)
So this field cannot be interpreted in terms of a unitary irreducible representation of the dS group. Difficulties arise when we want to quantize such fields, which show a negative squared mass in their wave equation. The corresponding two-point functions have a pathological large-distance behaviour (infrared divergence) \[Gazeau, Renaud, Takook\]. We merely emphasize the fact that, so far, this degree of freedom should not appear as a physical one.
## 4 Conclusion
We conclude that the pathological large-distance behaviour of the physical degrees of freedom of linear gravity in the Wightman two-point function can be easily cured. Antoniadis, Iliopoulos and Tomaras have also shown that the pathological large-distance behaviour of the graviton propagator on a dS background does not manifest itself in the quadratic part of the effective action in the one-loop approximation \[Antoniadis, Iliopoulos and Tomaras; $`1996`$\]. That means that this behaviour may be gauge dependent and should not appear in an effective way in a physical quantity. On the other hand, it exists in an irreducible way in the pure-trace part (conformal sector). The conformal sector may be interesting for inflationary universe scenarios. In these theories, one introduces an inflaton scalar field. Because of this field, the conformal sector of the metric becomes dynamical and it must be quantized \[Antoniadis, Mazure, Mottola, $`1997`$\]. It then produces a gravitational instability. This gravitational instability and the primordial quantum fluctuation of the inflaton scalar field define the inflationary model. The latter can explain the formation of galaxies, clusters of galaxies and the large-scale structure of the universe \[Lesgourgues, Polarski, Starobinsky, $`1998`$\].
We may conclude that the quantum instability of dS space and the breaking of the dS invariance are both due to the quantization of the conformal sector.
Acknowledgements We are grateful to J-P. Gazeau, J. Iliopoulos and J. Renaud for very useful discussions.
## 1 Introduction
Due to their penetrating nature, dilepton probes are among the most promising observables to access the high temperature/density zones formed in the early phases of (ultra-) relativistic heavy-ion collisions (URHIC's). In the low-mass region (LMR, $`M\lesssim `$ 1 GeV) dilepton emission is governed by the light vector mesons $`\rho `$, $`\omega `$ and $`\varphi `$, attaching the main interest to their medium modifications and possible signatures for the restoration of chiral symmetry in strongly interacting matter. In the high-mass region (HMR, $`M\gtrsim `$ 3 GeV), the focus is on the dissolution of the heavy quarkonium bound states ($`J/\mathrm{\Psi }`$, $`\mathrm{{\rm Y}}`$) to detect the onset of deconfinement. In this talk we will address the intermediate-mass region (IMR) between the $`\varphi `$ and $`J/\mathrm{\Psi }`$ (1 GeV $`\lesssim M\lesssim `$ 3 GeV). Here, the thermal dilepton production rate is essentially structureless and can be rather well approximated by perturbative $`q\overline{q}\to l^+l^{-}`$ annihilation for both hadronic and quark-gluon phases (as can be inferred from the well-known total cross section $`\sigma (e^+e^{-}\to hadrons)`$ above $`M\simeq `$ 1.5 GeV). With final-state hadron decays being concentrated in the LMR, the main competitors with thermal radiation from an interacting fireball are primordial processes, most notably Drell-Yan (DY) annihilation and the simultaneous decays of associatedly produced open-charm (or bottom) mesons, e.g., $`D\to l^{-}\overline{\nu }X`$ and $`\overline{D}\to l^+\nu X`$. An excess over these sources, as extrapolated from proton-proton collisions, has long been proposed as a suitable signal for the early high-temperature phases in URHIC's . Since the expected temperatures $`T\ll M`$ in the IMR, the thermal signal might be sufficiently sensitive to reflect the initial temperature and lifetime of a possibly formed Quark-Gluon Plasma (QGP).
In the following we will investigate these issues in the context of data from the CERN-SpS (Sect. 2) and then apply our current understanding to assess upcoming measurements at RHIC (Sect. 3). We finish with some concluding remarks in Sect. 4.
## 2 I-M Dileptons at the SpS
### 2.1 Experimental Results and Previous Theoretical Analyses
At the CERN-SpS I-M dilepton spectra have been measured by the NA38/50 and HELIOS-3 collaborations. Both have found a significant excess, by a factor of $`\sim `$ 2, in central $`A`$-$`A`$ collisions over open-charm and Drell-Yan sources scaled up from $`p`$-$`A`$ systems.
Fig. 1 shows a comparison of the HELIOS-3 data with transport calculations of Li and Gale : apparently the additional yield from in-medium hadronic annihilation processes satisfactorily explains the data (left panel);
note that the prevailing channels, $`\pi a_1`$ and $`\pi \omega `$, are of "4-pion type" (right panel). Within the hadronic transport framework, QGP formation could not be explicitly addressed. In the context of the NA50 data other possibilities for the origin of the additional yield have been elaborated:
* the NA50 collaboration pointed out that an enhancement of the open-charm contribution by a factor of $`\sim `$ 3 gives a good account of the data in central Pb(158 AGeV)+Pb. From the theoretical side, however, such an effect is difficult to justify;
* Lin and Wang investigated whether a broadening of transverse-momentum distributions due to a (strong) rescattering of charm quarks in matter might enrich the yield within the NA50 acceptance . The resulting increase amounts to about 20% ;
* Spieles et al. evaluated "secondary" Drell-Yan processes (e.g., $`\pi N\to l^+l^{-}X`$) arising in the later stages of the collision and found a 10% enhancement of the primordial Drell-Yan around $`M\simeq `$ 2 GeV;
* thermal radiation from an equilibrated expanding fireball .
In the following, we will pursue the last item in more detail.
### 2.2 Drell-Yan Annihilation and NA50 Acceptance
To enable a comparison of our final results with NA50 data we need to determine their normalization and acceptance corrections, which we do by means of the primordial Drell-Yan contribution. For central $`A`$-$`A`$ collisions it is given by
$$\frac{dN_{DY}^{AA}}{dMdy}(b=0)=\frac{3}{4\pi R_0^2}A^{4/3}\frac{d\sigma _{DY}^{NN}}{dMdy},$$
(1)
which we also employ for slightly non-central ones with an accordingly reduced $`A=N_{part}/2`$. Exploiting the fact that the high-mass tail ($`M\gtrsim `$ 4 GeV) of the data is entirely saturated by DY-pairs, we obtain the overall normalization, which will also be applied to the thermal production. We furthermore use the calculated DY-spectrum to test our approximate acceptance: in addition to geometric cuts on the single-muon tracks imposed by the NA50 spectrometer set-up, the muons experience substantial absorption when traversing the hadron absorber. The latter can be roughly represented by a lower energy cutoff, which is determined by requiring that the DY-results of NA50 detector simulations be reproduced, cf. Fig. 3.
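For orientation, the $`A^{4/3}`$ scaling of Eq. (1) is straightforward to evaluate numerically. The sketch below is a minimal illustration, assuming an arbitrary nucleon-level cross section value and the standard radius parameterization $`R=R_0A^{1/3}`$ with $`R_0`$ treated as an input; it is not the analysis code used for the figures.

```python
import math

def dy_yield_central(A, dsigma_dMdy_nb, R0_fm=1.2):
    """Drell-Yan pair yield dN/dMdy for a central A-A collision, Eq. (1):
    (3 / (4*pi*R0^2)) * A^(4/3) * dsigma^{NN}/dMdy.

    A              -- mass number (or N_part/2 for slightly non-central events)
    dsigma_dMdy_nb -- assumed nucleon-nucleon DY cross section in nb/GeV
    R0_fm          -- nuclear radius parameter (assumption), R = R0 * A^(1/3)
    """
    nb_to_fm2 = 1e-7                                # 1 nb = 1e-33 cm^2 = 1e-7 fm^2
    prefactor = 3.0 / (4.0 * math.pi * R0_fm**2)    # geometric factor in fm^-2
    return prefactor * A**(4.0 / 3.0) * dsigma_dMdy_nb * nb_to_fm2

# illustrative numbers only: central Pb+Pb (A = 208) with an assumed 1 nb/GeV input
yield_pbpb = dy_yield_central(208, 1.0)
```

Doubling the nucleon-level cross section simply doubles the yield; the nuclear dependence enters only through the $`A^{4/3}`$ factor, which is why the high-mass DY tail fixes the overall normalization so reliably.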
### 2.3 Thermal Rates, Space-Time Evolution and Spectra
Based on the assumption that an interacting fireball formed in heavy-ion collisions is in local thermal equilibrium (in the "comoving" frame of expansion), the evaluation of the thermal dilepton yield requires two ingredients: production rates and the time evolution of volume/temperature. In the IMR the former turns out to be given in a rather model-independent way by the result from perturbative QCD for the $`q\overline{q}`$ annihilation process,
$$\frac{d^8N_{\mu \mu }^{therm}}{d^4xd^4q}=\frac{\alpha ^2}{4\pi ^4}f^B(q_0;T)\underset{q=u,d,s}{\sum }(e_q)^2+\mathcal{O}(\alpha _s)$$
(2)
for both QGP and hadron gas (HG) phases. This is a direct consequence of the well-known "duality" threshold located around 1.5 GeV in the inverse process of $`e^+e^{-}\to hadrons`$ annihilation. It is further corroborated by explicit hadronic rate calculations, as evidenced by the compilation displayed in Fig. 3 . $`\alpha _s`$-corrections may be as large as 20-30%, whereas higher-order temperature/density effects are smaller, being of order $`\mathcal{O}(T/M)`$, $`\mathcal{O}(\mu _q/M)`$.
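To get a feeling for the magnitude of the rate (2), it can be evaluated directly at a given pair energy and temperature. The sketch below keeps only the Born term with $`\sum (e_q)^2=2/3`$ for $`u,d,s`$ and drops the $`\mathcal{O}(\alpha _s)`$ correction; the chosen values of $`q_0`$ and $`T`$ are illustrative.

```python
import math

ALPHA = 1.0 / 137.036                             # fine-structure constant
SUM_EQ2 = (2/3)**2 + (1/3)**2 + (1/3)**2          # u, d, s charges squared: 2/3

def born_rate(q0_gev, t_gev):
    """Leading-order thermal dilepton rate d^8N / d^4x d^4q of Eq. (2)
    (dimensionless in natural units, hbar = c = 1)."""
    bose = 1.0 / (math.exp(q0_gev / t_gev) - 1.0)  # Bose factor f^B(q0; T)
    return ALPHA**2 / (4.0 * math.pi**4) * bose * SUM_EQ2

# e.g. a pair of total energy q0 = 2 GeV in matter at T = 200 MeV
rate = born_rate(2.0, 0.2)
```

The exponential sensitivity of the Bose factor to $`q_0/T`$ is what makes the IMR yield a thermometer for the early, hot phases.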
For the space-time evolution of central $`A`$-$`A`$ collisions we employ an expanding thermal fireball model which is based on an ideal-QGP and resonance-hadron-gas equation of state. Entropy and baryon-number conservation fix a trajectory in the $`T`$-$`\mu _N`$ plane with the ratio $`s/n_B`$ chosen in accord with experimental information on chemical freezeout at the SpS , cf. left panel of Fig. 4. Pion- and kaon-number conservation ensure the correct particle abundances at thermal freezeout towards which finite chemical potentials $`\mu _\pi `$, $`\mu _{K,\overline{K}}`$ build up. A time scale is introduced through a hydro-type volume expansion which yields realistic final flow velocities and transverse sizes . A QGP-HG mixed phase is constructed from standard entropy balancing resulting in a temperature evolution shown in the right panel of Fig. 4.
The thermal dilepton spectra are then computed as
$$\frac{dN_{\mu \mu }^{therm}}{dM}=\int _0^{t_{fo}}dt\,V_{FB}(t)\int \frac{Md^3q}{q_0}\frac{d^8N_{\mu \mu }}{d^4xd^4q}(M,q;T)\left[e^{\mu _\pi /T}\right]^4\mathrm{Acc}(M,q_t,y)$$
(3)
including the experimental acceptance as determined above. Note the explicit appearance of the pion-fugacity factor to the fourth power to appropriately account for off-equilibrium effects in 4-pion-type annihilation processes which dominate in the IMR (see right panel of Fig. 1). The final results of our calculation are displayed in Fig. 5: the experimentally observed excess is reasonably well reproduced by thermal radiation in both invariant-mass and transverse-momentum projections (very similar conclusions have been reached in ref. ). The contribution from the QGP part of the evolution constitutes a rather moderate fraction of $`\sim `$ 20%.
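Schematically, Eq. (3) folds the rate over the fireball history. The toy sketch below does this for an assumed three-stage $`(dt,V,T)`$ profile, Boltzmann-approximating the Bose factor and omitting the acceptance and fugacity factors; all numbers are illustrative and serve only to show how the invariant-mass slope reflects the temperature evolution.

```python
import math

ALPHA = 1.0 / 137.036
HBARC = 0.1973        # GeV*fm

def thermal_dNdM(M, profile, sum_eq2=2.0/3.0):
    """Toy version of Eq. (3): dN/dM from q-qbar annihilation in a cooling
    fireball.  profile = list of (dt [fm/c], V [fm^3], T [GeV]) stages."""
    rate_prefactor = ALPHA**2 / (4.0 * math.pi**4) * sum_eq2
    dN = 0.0
    for dt, V, T in profile:
        # 4*pi * int_M^inf sqrt(q0^2 - M^2) e^{-q0/T} dq0  (the d^3q/q0 integral)
        dq0, I = 0.01, 0.0
        for i in range(1, 2000):
            q0 = M + i * dq0
            I += math.sqrt(q0 * q0 - M * M) * math.exp(-q0 / T) * dq0
        # dt*V in fm^4 -> GeV^-4; extra factor M from the measure M d^3q/q0
        dN += dt * V / HBARC**4 * 4.0 * math.pi * M * rate_prefactor * I
    return dN

# assumed cooling history: roughly QGP -> mixed -> hadronic stages
history = [(1.0, 100.0, 0.230), (3.0, 400.0, 0.175), (5.0, 1500.0, 0.130)]
excess_15 = thermal_dNdM(1.5, history)
```

Because the integrand falls like $`e^{-M/T}`$, the late, cool stages contribute little above $`M\simeq `$ 1.5 GeV despite their large volume.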
## 3 I-M Dileptons at RHIC
The same approach as described in the preceding section is now applied to central Au+Au collisions at $`\sqrt{s}`$=200 AGeV. For definiteness the charged particle multiplicity at midrapidity has been fixed at $`N_{ch}`$=800 with an entropy per baryon of $`s/n_B`$=260.
The resulting IMR dilepton spectra are summarized in Fig. 6: up to $`M\lesssim `$ 1.5 GeV the hadron gas radiation dominates; in contrast to SpS conditions, the QGP contribution dominates around 2 GeV before DY annihilation takes over. Not shown is the yield from open-charm decays, which in fact could completely outshine the spectrum by a factor of 10 or so . If, however, charm quarks undergo appreciable energy loss ($`dE/dx\simeq `$ 1-2 GeV/fm) when propagating through hot/dense matter, they might thermalize, entailing a suppression of their contribution above $`M`$=1.5 GeV by factors of $`\sim `$ 100 .
Another complication at RHIC energies concerns the chemical under-saturation of gluon and especially quark densities in the early stages as predicted in various parton-based models : albeit thermalized, the parton distributions are characterized by fugacities $`\lambda _i`$=$`n_i(T)/n_i^{eq}(T)`$$`<`$1 ($`i`$=$`q`$,$`\overline{q}`$,$`g`$). Naively one would expect a substantial reduction of dilepton production in the $`q\overline{q}`$ channel as the rate is proportional to $`\lambda _q\lambda _{\overline{q}}`$. On the other hand, at given entropy (or energy) density, an under-saturated QGP has a larger temperature than in chemical equilibrium which in turn enhances the thermal emission. Using a parameterization of recent hydrodynamic evolution results (see also ref. ) we have recalculated the plasma contribution starting at the same initial entropy density as in the equilibrium scenario. The magnitude of the pertinent QGP signal in the final spectrum turns out to be quite similar with a somewhat harder slope for the off-equilibrium calculation (see left panel of Fig. 7), i.e., the reduction in the fugacities is largely compensated by the increase in initial temperature.
## 4 Conclusions and Outlook
Based on a thermal fireball model coupled with "standard" dilepton production rates, we have shown that the excess observed by NA50 in the IMR in central Pb(158AGeV)+Pb can be explained by thermal radiation. The contribution from early phases indicative of a QGP is moderate; however, our results corroborate the present understanding of the conditions probed at the CERN-SpS as being consistent with low-mass dilepton spectra, chemical freezeout analyses, etc., indicating that one is indeed producing QCD matter in the vicinity of the expected HG-QGP phase boundary.
The extrapolation of this approach to RHIC suggests that the plasma radiation exceeds HG- and DY-sources around $`M\simeq `$ 2 GeV. Chemical off-equilibrium effects do not seem to alter this conclusion as long as comparable initial energy densities are reached. A big question mark is attached to the open-charm contribution, i.e., whether energy-loss effects significantly redistribute the associated dilepton yields. Experimental input on these issues is eagerly awaited.
Acknowledgment
I thank E. Shuryak and B. Kämpfer for productive discussions. This work is supported in part by the A.-v.-Humboldt foundation (Feodor-Lynen program) and the US-DOE under grant no. DE-FG02-88ER40388.
# The temperature dependent behaviour of surface states in ferromagnetic semiconductors
## Abstract
We present a model calculation for the temperature dependent behaviour of surface states on a ferromagnetic local-moment film. The film is described within the s-f model featuring local magnetic moments being exchange coupled to the itinerant conduction electrons. The surface states are generated by modifying the hopping in the vicinity of the surface of the film. In the calculation for the temperature dependent behaviour of the surface states we are able to reproduce both Stoner-like and spin-mixing behaviour in agreement with recent (inverse) photoemission data on the temperature dependent behaviour of a Gd(0001) surface state.
In the recent past much theoretical and experimental research has been focussed on the intriguing properties of rare-earth metals and their compounds. On the experimental side, this interest was aroused after Weller et al. reported on the existence of the magnetically ordered Gd(0001) surface at temperatures where bulk Gadolinium is paramagnetic . Since then, a variety of different experimental techniques has been applied to the problem by different groups, yielding values of the surface Curie temperature enhancement, $`\mathrm{\Delta }T_C=T_C(\mathrm{surface})-T_C(\mathrm{bulk})`$, between 17K and 60K . Contrary to the groups cited above, Donath et al., using spin-resolved photoemission, did not find any indication for an enhanced Curie temperature at the Gd(0001) surface , fueling the controversial discussion.
Concerning the interplay between the electronic structure and the exceptional magnetic properties at the Gadolinium surface, a Gd(0001) surface state is believed to play a crucial role, and its temperature dependent behaviour has been discussed intensely . Recently, the investigation of the correlation between strain-induced alteration of the surface electronic structure and enhanced magnetization in Gd films has been addressed by a number of experimental works . A thorough account of the surface magnetism and the surface electronic structure of the lanthanides has been given by Dowben et al. .
Rare-earth materials are so-called local-moment systems, i.e. the magnetic moment stems from the partially filled 4f-shell of the rare-earth atom being strictly localized at the ion site. Thus the magnetic properties of these materials are determined by the localized magnetic moments. On the other hand, the electronic properties like electrical conductivity are borne by itinerant electrons in rather broad conduction bands, e.g. 6s, 5d for Gd. Many of the characteristics of local-moment systems can be attributed to a correlation between the localized moments and the itinerant conduction electrons. For this situation the s-f model has been proven to be an adequate description. In this model, the correlation between localized moments and conduction electrons is represented by an intraatomic exchange interaction.
In what follows we consider a film consisting of $`n`$ equivalent parallel layers. The lattice sites within the film are indicated by a Greek letter $`\alpha `$, $`\beta `$, $`\gamma `$, $`\mathrm{\dots }`$, denoting the layer, and by a Latin letter $`i`$, $`j`$, $`k`$, $`\mathrm{\dots }`$, numbering the sites within a given layer. Each layer possesses two-dimensional translational symmetry, so for any site-dependent operator $`A_{i\alpha }`$ we have:
$$\langle A_{i\alpha }\rangle \equiv \langle A_\alpha \rangle .$$
The Hamiltonian for the s-f model consists of three parts:
$$\mathcal{H}=\mathcal{H}_s+\mathcal{H}_f+\mathcal{H}_{sf}.$$
(1)
The first,
$$\mathcal{H}_s=\underset{ij\alpha \beta \sigma }{\sum }T_{ij}^{\alpha \beta }c_{i\alpha \sigma }^+c_{j\beta \sigma },$$
(2)
describes the itinerant conduction electrons as s-electrons, with $`c_{i\alpha \sigma }^+`$ ($`c_{i\alpha \sigma }`$) being the creation (annihilation) operator of an electron with spin $`\sigma `$ at the lattice site $`\mathbf{r}_{i\alpha }`$. $`T_{ij}^{\alpha \beta }`$ are the hopping integrals.
The second part of the Hamiltonian represents the system of the localized f-moments and consists itself of two parts,
$$\mathcal{H}_f=-\underset{ij\alpha \beta }{\sum }J_{ij}^{\alpha \beta }\mathbf{S}_{i\alpha }\cdot \mathbf{S}_{j\beta }-D_0\underset{i\alpha }{\sum }\left(S_{i\alpha }^z\right)^2,$$
(3)
where the first is the well-known Heisenberg interaction. Here the $`\mathbf{S}_{i\alpha }`$ are the spin operators of the localized magnetic moments, which are coupled by the exchange integrals, $`J_{ij}^{\alpha \beta }`$. The second contribution is a single-ion anisotropy term which arises from the necessity of having a collective magnetic order at finite temperatures, $`T>0`$ . This anisotropy has been assumed to be uniform within the film. The corresponding anisotropy constant $`D_0`$ is typically smaller by some orders of magnitude than the Heisenberg exchange integrals, $`D_0\ll J_{ij}^{\alpha \beta }`$.
In addition to the contribution of the s-electron system and the contribution of the localized f-moments we have a third term which accounts for an intraatomic interaction between the conduction electrons and the localized f-spins:
$$\mathcal{H}_{sf}=-\frac{J}{\mathrm{\hbar }}\underset{i\alpha }{\sum }\mathbf{S}_{i\alpha }\cdot \sigma _{i\alpha },$$
(4)
where $`J`$ is the s-f exchange interaction and $`\sigma _{i\alpha }`$ is the Pauli spin operator of the conduction electrons. In the case where $`J<0`$ the Hamiltonian (1) is that of the so-called Kondo lattice. However, here we are interested in the case of positive s-f coupling ($`J>0`$), which applies to the materials under consideration. Using the abbreviations,
$$S_{i\alpha }^\sigma =S_{i\alpha }^x+iz_\sigma S_{i\alpha }^y;z_{\uparrow (\downarrow )}=\pm 1,$$
the s-f Hamiltonian can be written in the form:
$$\mathcal{H}_{sf}=-\frac{J}{2}\underset{i\alpha \sigma }{\sum }\left(z_\sigma S_{i\alpha }^zn_{i\alpha \sigma }+S_{i\alpha }^\sigma c_{i\alpha -\sigma }^+c_{i\alpha \sigma }\right).$$
(5)
The problem posed by the Hamiltonian (1) can be solved by considering the retarded single-electron Green function
$`G_{ij\sigma }^{\alpha \beta }(E)`$ $`=`$ $`\langle \langle c_{i\alpha \sigma };c_{j\beta \sigma }^+\rangle \rangle _E`$ (6)
$`=`$ $`-\mathrm{i}{\displaystyle \int _0^{\mathrm{\infty }}}dt\,\mathrm{e}^{\frac{\mathrm{i}}{\mathrm{\hbar }}Et}\langle [c_{i\alpha \sigma }(t),c_{j\beta \sigma }^+(0)]_+\rangle ,`$ (7)
which is related to the spectral density $`S_{๐ค\sigma }^{\alpha \beta }(E)`$ and the local density of states $`\rho _{\alpha \sigma }(E)`$ via the relations:
$`G_{\mathbf{k}\sigma }^{\alpha \beta }(E)`$ $`=`$ $`{\displaystyle \frac{1}{N}}{\displaystyle \underset{ij}{\sum }}\mathrm{e}^{\mathrm{i}\mathbf{k}\cdot (\mathbf{R}_i-\mathbf{R}_j)}G_{ij\sigma }^{\alpha \beta }(E),`$ (8)
$`S_{\mathbf{k}\sigma }^{\alpha \beta }(E)`$ $`=`$ $`-{\displaystyle \frac{1}{\pi }}\mathrm{Im}G_{\mathbf{k}\sigma }^{\alpha \beta }(E+\mathrm{i0}^+),`$ (9)
$`\rho _{\alpha \sigma }(E)`$ $`=`$ $`{\displaystyle \frac{1}{\mathrm{\hbar }N}}{\displaystyle \underset{\mathbf{k}}{\sum }}S_{\mathbf{k}\sigma }^{\alpha \alpha }(E).`$ (10)
Due to the translational symmetry of the films, the Fourier transformation (8) has to be performed within the layers of the film. Accordingly, $`N`$ is the number of sites per layer, $`\mathbf{k}`$ is an in-plane wavevector from the first two-dimensional Brillouin zone, and $`\mathbf{R}_i`$ represents the in-plane part of the position vector, $`\mathbf{r}_{i\alpha }=\mathbf{R}_i+\mathbf{r}_\alpha `$.
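Relations (8)-(10) amount to a simple postprocessing step once the Green function is available on a grid of in-plane wavevectors. As a minimal illustration (a free single square-lattice layer with nearest-neighbour hopping $`t`$, a small broadening $`\eta `$ standing in for $`\mathrm{i0}^+`$, and $`\mathrm{\hbar }`$ set to 1 — all values assumed):

```python
import math

def free_layer_dos(E, t=1.0, nk=64, eta=0.05):
    """Local DOS of a single free layer via Eqs. (8)-(10):
    rho(E) = -(1/(pi*N)) * sum_k Im G_k(E + i*eta),
    with G_k = 1/(E + i*eta - eps_k) for a 2D tight-binding band."""
    rho = 0.0
    for i in range(nk):
        for j in range(nk):
            kx = 2.0 * math.pi * i / nk
            ky = 2.0 * math.pi * j / nk
            eps = -2.0 * t * (math.cos(kx) + math.cos(ky))  # square-lattice band
            G = 1.0 / complex(E - eps, eta)                 # free Green function
            rho -= G.imag / math.pi                         # Eq. (9)
    return rho / (nk * nk)                                  # k-sum normalization

# the band spans [-4t, 4t]; outside it only the Lorentzian tails survive
dos_band_center = free_layer_dos(0.0)
```

With the interaction switched on, the only change is that the free $`G_k`$ is replaced by the full Green function containing the self-energy; the postprocessing stays the same.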
The many-body problem that arises with the Hamiltonian (1) is far from being trivial, and a full solution is lacking even for the case of the bulk. In a previous paper we have presented an approximate treatment of the special case of a single electron in an otherwise empty conduction band . This solution, which holds for arbitrary temperatures, is based on the special case of an empty conduction band interacting with a ferromagnetically saturated local-moment system ($`T=0`$), which can be solved exactly for both the bulk case and the film geometry . This exactly soluble limiting case lends the approximate solution for finite temperatures a certain trustworthiness.
In the following formulas, we briefly recall the results of the calculations presented in . Due to the empty conduction band, which we consider throughout the whole paper, the Hamiltonian (1) can be split into an electronic part, $`\mathcal{H}_s+\mathcal{H}_{sf}`$, and a magnetic part, $`\mathcal{H}_f`$, which can be solved separately . For the electronic part,
$$\mathcal{H}^{el}=\mathcal{H}_s+\mathcal{H}_{sf},$$
(11)
we employ the single-electron Green function (6). The equation of motion for this Green function $`G_{ij\sigma }^{\alpha \beta }(E)`$ can be formally solved by introducing the self-energy $`M_{ij\sigma }^{\alpha \beta }(E)`$,
$$\langle \langle [c_{i\alpha \sigma },\mathcal{H}_{sf}]_{-};c_{j\beta \sigma }^+\rangle \rangle _E=\underset{m\mu }{\sum }M_{im\sigma }^{\alpha \mu }(E)G_{mj\sigma }^{\mu \beta }(E),$$
(12)
which contains all the information about correlation between the conduction band and the localized moments. With the help of (12) the equation of motion for the single-electron Green function simply becomes, after two-dimensional Fourier transform,
$$\mathbf{G}_{\mathbf{k}\sigma }(E)=\mathrm{\hbar }\left(E\mathbf{1}-\mathbf{T}_\mathbf{k}-\mathbf{M}_{\mathbf{k}\sigma }(E)\right)^{-1}.$$
(13)
Here, $`\mathbf{1}`$ represents the $`(n\times n)`$ identity matrix and the matrices $`\mathbf{G}_{\mathbf{k}\sigma }(E)`$, $`\mathbf{T}_\mathbf{k}`$, and $`\mathbf{M}_{\mathbf{k}\sigma }(E)`$ have as elements the layer-dependent functions $`G_{\mathbf{k}\sigma }^{\alpha \beta }(E)`$, $`T_\mathbf{k}^{\alpha \beta }`$, and $`M_{\mathbf{k}\sigma }^{\alpha \beta }(E)`$, respectively. The necessary computation of the self-energy $`M_{\mathbf{k}\sigma }^{\alpha \beta }(E)`$ involves the evaluation of higher Green functions originating from the equation of motion of the original single-electron Green function. For the sake of brevity we here omit the details of the calculations. If the self-energy is assumed to be a local entity,
$$M_{\mathbf{k}\sigma }^{\alpha \beta }(E)=\frac{J}{2}\delta _{\alpha \beta }m_\sigma ^\alpha (E),$$
(14)
it can be shown to have the structure:
$$m_\sigma ^\alpha (E)=\frac{Z_\sigma ^\alpha (E)}{N_\sigma ^\alpha (E)},$$
(15)
where the numerator and the denominator, respectively, have the structure:
$`Z_\sigma ^\alpha `$ $`=`$ $`z_\sigma \mathrm{\hbar }^2\langle S_\alpha ^z\rangle +{\displaystyle \frac{J}{2}}f_1^Z(\mathrm{\dots })+{\displaystyle \frac{J^2}{4}}f_2^Z(\mathrm{\dots })`$ (17)
$`N_\sigma ^\alpha `$ $`=`$ $`\mathrm{\hbar }^2+{\displaystyle \frac{J}{2}}f_1^N(\mathrm{\dots })+{\displaystyle \frac{J^2}{4}}f_2^N(\mathrm{\dots })`$ (18)
where the four functions $`f_{1,2}^{Z,N}(\mathrm{\dots })`$ themselves depend <sup>*</sup><sup>*</sup>*For the explicit form of Eqs. (15) see . on the self-energy, $`m_{\pm \sigma }^\alpha (E)`$, and on the layer-dependent coefficients:
$`\kappa _{\alpha \sigma }`$ $`=`$ $`\langle S_\alpha ^{-\sigma }S_\alpha ^\sigma \rangle -\lambda _{\alpha \sigma }^{(2)}\langle S_\alpha ^z\rangle ,`$ (19)
$`\lambda _{\alpha \sigma }^{(1)}`$ $`=`$ $`{\displaystyle \frac{\langle S_\alpha ^{-\sigma }S_\alpha ^\sigma S_\alpha ^z\rangle +z_\sigma \langle S_\alpha ^{-\sigma }S_\alpha ^\sigma \rangle }{\langle S_\alpha ^{-\sigma }S_\alpha ^\sigma \rangle }},`$ (20)
$`\lambda _{\alpha \sigma }^{(2)}`$ $`=`$ $`{\displaystyle \frac{\langle S_\alpha ^{-\sigma }S_\alpha ^\sigma S_\alpha ^z\rangle -\langle S_\alpha ^z\rangle \langle S_\alpha ^{-\sigma }S_\alpha ^\sigma \rangle }{\langle (S_\alpha ^z)^2\rangle -\langle S_\alpha ^z\rangle ^2}}.`$ (21)
Taking into account Eqs. (13)-(19), we now have a closed system of equations, provided that the f-spin correlation functions appearing in Eqs. (19) are known.
These can be evaluated by considering the magnetic subsystem, described by the Hamiltonian $`\mathcal{H}_f`$ (Eq. (3)). For our present purposes it is sufficient that, employing an RPA-type decoupling, a solution of the magnetic subsystem can be found and that it gives us all the necessary layer-dependent f-spin correlation functions as a function of temperature from $`T=0`$ to the Curie temperature, $`T=T_C`$. Mediated by Eqs. (15)-(19), the f-spin correlation functions contain the whole temperature dependence of the electronic subsystem.
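Once the hopping matrix and a self-energy are fixed, Eq. (13) is a plain matrix inversion per energy point, and the layer-resolved spectral density follows from the imaginary part of the diagonal Green-function elements. The following is a minimal numerical sketch only: the film size, hopping value, broadening, and the energy-independent placeholder self-energy are our illustrative assumptions, not the actual self-consistent $`m_\sigma ^\alpha (E)`$ of the paper.

```python
import numpy as np

def layer_spectral_density(energies, T_mat, M_diag, eta=0.02):
    """S^{aa}(E) = -(1/pi) Im G^{aa}(E + i*eta), with
    G(E) = (E*1 - T - M)^{-1}  (hbar set to 1, local self-energy M).
    The intra-layer dispersion at fixed k is absorbed into the diagonal."""
    n = T_mat.shape[0]
    S = np.empty((len(energies), n))
    for i, E in enumerate(energies):
        G = np.linalg.inv((E + 1j * eta) * np.eye(n) - T_mat - np.diag(M_diag))
        S[i] = -np.diag(G).imag / np.pi
    return S

# toy 20-layer film with uniform inter-layer hopping and a constant
# (energy-independent) test self-energy standing in for (J/2) m(E)
n_layers, T = 20, 0.1
T_mat = T * (np.eye(n_layers, k=1) + np.eye(n_layers, k=-1))
M_diag = np.full(n_layers, -0.05)
E_grid = np.linspace(-2.0, 2.0, 2001)
S = layer_spectral_density(E_grid, T_mat, M_diag)
```

The sum rule $`S^{\alpha \alpha }(E)dE=1`$ per layer, and the mirror symmetry of the film, provide quick numerical checks of the inversion.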
To briefly recall the main results presented in our previous paper, Fig. 1 shows the density of states of a s.c.-(100) double layer for different s-f interactions and different temperatures. In the case of ferromagnetic saturation, $`T=0`$, we see that for the spin-$`\uparrow `$ electron the density of states of the free case $`J=0`$ (dotted line) is just rigidly shifted when the interaction is switched on. This is due to the inability of the spin-$`\uparrow `$ electron to exchange its spin with the perfectly aligned local-moment system. For the spin-$`\downarrow `$ electron in the case of small s-f exchange coupling, $`J>0`$, a slight deformation of the free density of states sets in. For intermediate and strong couplings ($`J\gtrsim 0.2`$), the density of states splits into two parts, corresponding to two different spin exchange processes between the spin-$`\downarrow `$ electron and the localized f-spin system. The higher-energetic part represents a polarization of the immediate spin neighbourhood of the electron due to the repeated emission and reabsorption of magnons. The corresponding polaron-like quasiparticle is called the magnetic polaron. The low-energetic part of the spectrum is a scattering band which can be explained by the simple emission of a magnon by the spin-$`\downarrow `$ electron without reabsorption, but necessarily connected with a spin flip of the electron.
For $`T>0`$ we see in Fig. 1 that with increasing temperature spectral weight of the spin-$`\downarrow `$ electron is transferred from the high-energetic polaron peak to the low-energetic scattering peak. For the spin-$`\uparrow `$ electron, on the other hand, an additional peak rises at the high-energetic side of the spectrum at finite temperatures. This rise with increasing temperature is fueled by the loss of spectral weight of the low-energetic peak. The high-energetic peak simply represents the ability of the spin-$`\uparrow `$ electron to exchange its spin with a local-moment system that is no longer perfectly aligned, $`T>0`$. As a result of the shifts of spectral weight occurring for both the spin-$`\uparrow `$ and the spin-$`\downarrow `$ spectra, the spectra for the two spin directions approach each other with increasing temperature. In the limiting case $`T\rightarrow T_C`$ the system eventually loses its ability to distinguish between the two possible spin directions because of the loss of magnetization of the underlying local-moment system. Hence, for $`T=T_C`$ the densities of states of the spin-$`\uparrow `$ and the spin-$`\downarrow `$ electron are equal. Another feature which can be seen in Fig. 1 is that, while the positions of the quasiparticle subbands stay pretty much the same, the bands narrow with increasing temperature. This can be seen especially well in the temperature evolution of the spectrum of the spin-$`\downarrow `$ electron for $`J=0.2`$: whereas the scattering band and the polaron band are still merged for $`T=0`$, for $`T=T_C`$ both bands are clearly separated.
In this paper we are interested in surface states and their temperature-dependent behaviour. Surface states occur in the spectral density at energies different from the bulk energies and are localized in the vicinity of the surface of a crystal. The theory presented above is applied to a s.c. film consisting of $`n`$ layers oriented parallel to the (100) surface, as drawn schematically in Fig. 2.
The electron hopping in Eq. (2) is restricted to nearest neighbours,
$$T_{ij}^{\alpha \beta }=\delta _{i,j\pm \mathrm{\Delta }}^{\alpha \beta }T^{\alpha \alpha }+\delta _{ij}^{\alpha ,\beta \pm 1}T^{\alpha \beta },$$
(22)
where $`\mathrm{\Delta }`$ stands for the nearest neighbours within the same plane, $`\mathrm{\Delta }=(0,1),(0,\overline{1}),(1,0),(\overline{1},0)`$, and $`\delta _{ij}^{\alpha \beta }\equiv \delta _{\alpha \beta }\delta _{ij}`$. $`T^{\alpha \beta }`$ is the hopping between the adjacent layers $`\alpha `$ and $`\beta `$ and $`T^{\alpha \alpha }`$ is the hopping between nearest neighbours within the layer $`\alpha `$. In order to study surface states we vary the electron hopping within the vicinity of the surface,
$$T^{\alpha \beta }=\left(\begin{array}{cccccc}T_{\parallel }& T_{\perp }& 0& & \mathrm{\cdots }& 0\\ T_{\perp }& T& T& & & \mathrm{\vdots }\\ 0& T& & \mathrm{\ddots }& & \\ & & \mathrm{\ddots }& & T& 0\\ \mathrm{\vdots }& & & T& T& T_{\perp }\\ 0& \mathrm{\cdots }& & 0& T_{\perp }& T_{\parallel }\end{array}\right),$$
(24)
according to Fig. 2, and with
$$T_{\parallel }=\epsilon _{\parallel }T,\qquad T_{\perp }=\epsilon _{\perp }T.$$
(25)
Here, $`\epsilon _{\parallel }`$ and $`\epsilon _{\perp }`$ are considered as model parameters. In reality the variation of the hopping integrals in the vicinity of the surface may be caused e.g. by a relaxation of the interlayer distance. According to the scaling law $`T\propto r^{-5}`$ for the d-electrons, a relatively small top-layer relaxation $`\mathrm{\Delta }r/r`$ may result in a strong change of the hopping integral $`T`$. Thus e.g. a relaxation of the Gd(0001) surface layer of 3-6% (cf. and references therein) would yield a modification of the hopping integrals of up to 30%.
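The surface-modified hopping matrix of Eq. (24) is easy to assemble for any film thickness. A small sketch (the helper name and parameter values are ours, chosen only for illustration):

```python
import numpy as np

def hopping_matrix(n, T, eps_par=1.0, eps_perp=1.0):
    """n-layer hopping matrix: bulk hopping T everywhere, with the
    intra-layer hopping of the two surface layers scaled by eps_par
    (T_par, the corner diagonal entries) and the hopping between the
    surface and subsurface layers scaled by eps_perp (T_perp)."""
    M = T * (np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
    M[0, 0] = M[-1, -1] = eps_par * T          # T_par
    M[0, 1] = M[1, 0] = eps_perp * T           # T_perp (top surface)
    M[-1, -2] = M[-2, -1] = eps_perp * T       # T_perp (bottom surface)
    return M

# e.g. a 20-layer film with the surface intra-layer hopping enhanced by 50%
M = hopping_matrix(20, T=0.1, eps_par=1.5, eps_perp=1.0)
```

Varying `eps_par` and `eps_perp` then reproduces the different surface-state scenarios discussed below.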
In a previous paper we have been dealing with surface states for the exactly solvable case of a single electron in an otherwise empty conduction band and a ferromagnetically saturated f-spin system, $`T=0`$. For this special case we have shown that modifying the hopping in the vicinity of the surface according to Eqs. (22) leads to the appearance of surface states in the local spectral density $`S_{\mathbf{k}\sigma }^{\alpha \alpha }`$. Modifying the hopping within the surface layer by more than 25%, i.e. $`\epsilon _{\parallel }\le \frac{3}{4}`$ or $`\epsilon _{\parallel }\ge \frac{5}{4}`$, while keeping all the other hopping integrals unchanged results in a single surface state at the lower or the upper edge of the bulk band. This surface state first emerges at the $`\overline{\mathrm{\Gamma }}`$- and at the $`\overline{\mathrm{M}}`$-point from the bulk band and from there spreads for larger modifications of $`\epsilon _{\parallel }`$ to the rest of the Brillouin zone.
On the other hand, when the hopping within the first layer remains constant, $`\epsilon _{\parallel }\equiv 1`$, but the hopping between the first and the second layer is significantly increased, $`\epsilon _{\perp }\ge \sqrt{2}`$, then two surface states split off, one on each side of the bulk band. In this case the emergence of the surface states from the bulk band is $`\mathbf{k}`$-independent. For the special case of $`T=0`$, both types of surface states can be observed at the single bulk band of the spin-$`\uparrow `$ electron and on the high-energetic polaron band of the spin-$`\downarrow `$ electron.
Here, we are interested in the possible variations of the $`T=0`$ surface states when going to finite temperatures. Figs. 3 to 5 show the temperature dependence of surface states for the different possible variations of the hopping in the vicinity of the surface. All the calculations for Figs. 3 to 5 have been performed for a 20-layer s.c. film cut parallel to the (100)-plane of the s.c. crystal. For films much thinner than 20 layers the calculation of surface states becomes meaningless, since there is no real bulk-like environment in the center of the film to compare the electronic states at the surface to. It is therefore desirable to calculate thicker films for the discussion of surface states. The chosen thickness of our model film is basically a compromise between computational accuracy and computational time. The parameters for the uniform hopping according to Eqs. (22) and for the s-f exchange interaction are $`T=0.1`$ and $`J=0.1`$, respectively.
For our calculations we have employed a modification of the hopping in the surface layer and between the surface layer and the adjacent layer of 50% ($`\epsilon _{\parallel }=0.5,\mathrm{\hspace{0.17em}1.5}`$) and 100% ($`\epsilon _{\perp }=2`$), respectively (cf. Eq. (25)). Such a drastic modification of the hopping integrals in the vicinity of the surface is rather unlikely to occur in reality. However, as has been shown in , the actual peak position of a surface state depends only weakly on the variation of $`\epsilon _{\parallel }`$ and $`\epsilon _{\perp }`$, respectively. The selected strong variation of the hopping parameters $`T_{\parallel }`$ and $`T_{\perp }`$ gives rise to pronounced surface states which enable us to see more clearly the qualitative behaviour of the surface states as a function of temperature.
In Fig. 3 we have the case that the hopping within the surface layer is enhanced by 50%, leading to the existence of a surface state on the outer edge of the bulk dispersion. This surface state can most clearly be seen at the $`\overline{\mathrm{\Gamma }}`$-point and the $`\overline{\mathrm{M}}`$-point in the two-dimensional Brillouin zone. In Fig. 3, the spectral density of the first layer, $`S_{\mathbf{k}\sigma }^{11}(E)`$, for both of these points is displayed as a function of energy. If the temperature is increased, we see that the positions of the spin-$`\uparrow `$ and of the spin-$`\downarrow `$ surface states approach each other in a Stoner-like fashion until both peaks are equivalent for $`T=T_C`$. The oscillations in the spectral densities, which can be seen in Fig. 3 around -0.6 eV, 0.3 eV, and 0.7 eV, are due to the finite thickness of our model film, as are the respective oscillations in Figs. 4 and 5.
Also in Fig. 3, but upside down, the spectral density of one of the central layers of the 20-layer film, $`S_{\mathbf{k}\sigma }^{\mathrm{10\hspace{0.17em}10}}(E)`$, can be seen, again for the $`\overline{\mathrm{\Gamma }}`$-point and the $`\overline{\mathrm{M}}`$-point and both spin directions, indicating that the positions of the surface states visible in $`S_{\mathbf{k}\sigma }^{11}(E)`$ lie outside of the bulk spectrum of the crystal for all temperatures. The same is valid for the spectra displayed in Figs. 4 and 5. In these figures, however, the local spectral densities of the central layers have been omitted for clarity.
In Fig. 4, the temperature dependence of surface states is documented for the case where the hopping within the first layer is reduced by 50%, $`\epsilon _{\parallel }=0.5`$ ($`\epsilon _{\perp }\equiv 1`$). In this case we observe a spin-mixing behaviour where the positions of the spin-$`\uparrow `$ and of the spin-$`\downarrow `$ surface states stay the same when the temperature is raised but spectral weight is transferred between the different peaks. This results in equal populations of the spin-$`\uparrow `$ and the spin-$`\downarrow `$ peaks at $`T=T_C`$.
To round off the picture, we see in Fig. 5 the case where the hopping between the first and the second layer is modified, $`\epsilon _{\perp }=2`$, while the hopping within the first layer remains equal to the uniform hopping within the film, $`\epsilon _{\parallel }\equiv 1`$. Here we have for $`T=0`$ two surface states, one on each side of the bulk spectrum. At finite temperatures the surface states on the outer side of the bulk dispersion behave Stoner-like, while the surface states on the inner side of the bulk dispersion exhibit a spin-mixing behaviour.
Apparently, our model is able to reproduce a Stoner-like collapse of the spin-$`\uparrow `$ and the spin-$`\downarrow `$ peak positions for $`T\rightarrow T_C`$ as well as a spin-mixing behaviour. In the spin-mixing case there are two peaks which both have a majority- and a minority-spin contribution. When the temperature is increased, the spectral weight of these contributions is altered until, for $`T=T_C`$, the spin-$`\uparrow `$ and the spin-$`\downarrow `$ contributions of each peak have the same spectral weights.
In being able to reproduce both Stoner-like and spin-mixing behaviour, depending on the variation of the hopping and the position in the Brillouin zone, our model calculations are in harmony with more recent (inverse) photoemission studies on Gd(0001) films which abandoned the view of earlier works that the temperature-dependent behaviour of the Gd(0001) surface state has to be either Stoner-like or of spin-mixing type. In particular, it has been shown here that for certain parameters it is possible to observe both kinds of behaviour at the same time (see Fig. 5). This feature of our model calculation seems to be in strong agreement with a scenario proposed by Donath and Gubanka.
###### Acknowledgements.
This work was supported by the Deutsche Forschungsgemeinschaft within the Sonderforschungsbereich 290 ("Metallische dünne Filme: Struktur, Magnetismus, und elektronische Eigenschaften"). One of the authors (R.S.) gratefully acknowledges the support by the German National Merit Foundation ("Studienstiftung des deutschen Volkes"). |
no-problem/0001/astro-ph0001338.html | ar5iv | text | # The Fluence Duration Bias
## Introduction
The fluence duration bias is an instrumental bias causing some gamma-ray burst fluences and durations to be underestimated relative to their peak fluxes. The fluence duration bias does not manifest itself by altering the trigger rate, but instead alters measured burst properties. Elsewhere in this conference (Hakkila et al. 1999) we present evidence that the class of Intermediate bursts identified by statistical clustering analysis (Mukherjee et al. 1998) can be produced from the hardness vs. intensity correlation and the fluence duration bias. We also demonstrate how the bias can be responsible for decreasing fluences and durations of the longest low peak flux Class 1 bursts. In this paper, we describe the fluence duration bias in more detail.
## An Example
Figure 1 demonstrates the time history of a bright, Class 1 (Long) BATSE burst (trigger 2831) as measured in the 50 to 300 keV range on the 1024 ms timescale. This burst is complex with an overall duration in excess of 180 seconds.
Figure 2 is a Monte Carlo simulation of what this burst might look like if its 1024 ms peak flux were reduced in intensity to 15% of its measured value (Poisson fluctuations have been added to the reduced signal). If the reduced burst duration is assumed to be identical to that of the unreduced burst, then its measured fluence-to-peak flux ratio is unchanged from the actual value of 19.4 (we measure the result in terms of the fluence-to-peak flux ratio, because Poisson fluctuations can also cause a burstโs peak flux to change). If, however, the reduced burst duration is determined from โrecognizable pulsesโ (pulses that are clearly visible above background; our algorithm assumes that the first and last peaks larger than $`4\sigma `$ above background bound the burst duration because there is no formal algorithm used by a human operator), then the average fluence-to-peak flux ratio drops slightly to 94% of its actual value.
Figure 3 shows what the burst might look like if reduced to 2% of its actual value. Most of the burst fluence is confined to a temporal span of roughly 20 seconds. Our โrecognizable pulseโ algorithm finds that the burst is still considerably longer than this single pulse, but that the total burst duration is still underestimated for the purpose of measuring fluence. The fluence-to-peak flux ratio for the burst in question is only 61% of its actual value. This underestimate is even larger when the burst is reduced to a value closer to the trigger threshold.
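The reduction procedure described above is easy to reproduce schematically: scale a background-subtracted count history to a fraction of its original intensity, add the background back, and redraw each 1024 ms bin from a Poisson distribution. The following stand-alone sketch uses a synthetic two-pulse burst; the pulse shapes and background level are invented for illustration and are not BATSE data.

```python
import numpy as np

rng = np.random.default_rng(0)

def reduce_burst(source_counts, background, fraction, rng):
    """Scale the background-subtracted source counts to `fraction` of
    the original intensity, add the background back, and redraw every
    bin from a Poisson distribution."""
    return rng.poisson(fraction * source_counts + background)

def peak_significance(source_counts, background, fraction):
    """Expected per-bin significance above the (known) background."""
    return fraction * source_counts / np.sqrt(background)

# synthetic 1024 ms light curve: a bright leading pulse and a weaker
# trailing pulse roughly 110 bins later
t = np.arange(200.0)
source = 4000.0 * np.exp(-0.5 * ((t - 40.0) / 5.0) ** 2) \
       + 1500.0 * np.exp(-0.5 * ((t - 150.0) / 8.0) ** 2)
background = 8000.0

reduced = reduce_burst(source, background, 0.15, rng)
sig = peak_significance(source, background, 0.15)
```

With these numbers, at 15% intensity the leading peak retains a per-bin significance of about 6.7 sigma while the trailing peak drops to about 2.5 sigma, i.e. below a 4-sigma "recognizable pulse" criterion: the measured fluence duration (and with it the fluence) shrinks, mimicking the behavior in Figures 2 and 3.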
It is difficult to accurately model the process by which the fluence duration interval is chosen, since human interaction plays an important role. We suspect that the actual amount of the bias is less than the amount described here, since the human eye and mind are good at removing patterns from noise. Nonetheless, there is evidence that the bias is present, and that it is large enough to cause a depletion in the number of small peak flux, high fluence bursts as well as being responsible for producing some Class 3 burst characteristics from Class 1 bursts.
## Evidence for the Fluence Duration Bias in the 4B Catalog
Fluence appears to be one of BATSE's most accurately measured quantities because its statistical measurement errors are typically only $`\pm 5\%`$. However, there is no intensity-dependent component to this measurement error, as might be expected from Figures 1, 2, and 3. It should be mentioned that there are no BATSE bursts with fluences less than zero (as might be expected if background dominated the fluence measurement), and few with fluences less than the fluence found in the 1024 ms peak flux.
The formal fluence error is kept small in part by fitting the background for faint bursts with high-order polynomials. Unfortunately, this process can introduce systematic underestimates of burst fluence by overestimating background (Bonnell et al. 1997). The fluence error can also be reduced by decreasing the fluence duration. Figure 4 plots fluence durations for available bursts in the 4B Catalog. The sample has been limited to Class 1 bursts detected using the same trigger criteria (because Class 2 and Class 3 bursts are clearly shorter than the Class 1 bursts, and because different trigger criteria might alter the composition of the sample in a heterogeneous way).
Figure 4 indicates that there are few long Class 1 fluence durations near BATSE's detection threshold (1024 ms peak fluxes slightly greater than BATSE's 0% efficiency of 0.2 photons cm<sup>-2</sup> second<sup>-1</sup>). This is strong evidence for the existence of the fluence duration bias, and it indicates that the magnitude of the effect apparently strengthens for fainter bursts.
Figures 5, 6 and 7 demonstrate that the fluence duration bias is more difficult to cleanly delineate when peak flux and/or trigger timescales are shorter than 1024 ms (the effect is likewise more pronounced when longer timescales are used). We attribute this to the lower signal-to-noise ratio of shorter timescale measurements, which makes intensities measured on these timescales less accurate, with larger intrinsic scatter, than those measured on the 1024 ms timescale. The fluence duration bias is still present on shorter timescales; the scatter of these measures just makes it harder to recognize.
## Conclusions
Monte Carlo modeling of bursts with different temporal structures indicates that fluence duration is easy to underestimate, particularly for faint bursts. This causes some burst fluences and durations to be underestimated. Some bursts, such as trigger 2831, have temporal structures more susceptible to this bias than others. The strength of the bias is hard to judge for an individual burst, as it depends both on burst temporal morphology and on how the human operator selects a fluence duration interval. The magnitude of the bias depends on the time intervals chosen for both the peak flux and the trigger flux, since the fluence underestimate must be made relative to a "fixed" brightness measure. The fluence duration bias appears capable of producing observed characteristics of the fluence vs. 1024 ms peak flux diagram, and of making some Class 1 bursts (primarily faint ones) take on Class 3 characteristics. We are currently studying this effect in greater detail.
no-problem/0001/gr-qc0001044.html | ar5iv | text | # A power filter for the detection of burst sources of gravitational radiation in interferometric detectors
## I Introduction
Currently, the best understood and most highly developed technique for detecting gravitational waves with interferometric detectors is matched filtering. This technique is optimal if the waveform to be detected is known in advance. There are, however, potentially important sources of gravitational radiation that are not well enough modeled to obtain reliable waveforms. Included in this category are binary black hole mergers, which have been discussed in some detail by Flanagan and Hughes (FH), and supernovae.
In order to detect poorly modeled sources, new techniques must be developed. Clearly, these techniques must perform well with incomplete prior knowledge of the expected signal. A number of techniques are under active investigation.
The purpose of this article is to discuss one such technique, the power filter. This filter only requires prior knowledge of the duration and frequency band of the signal. It is therefore well suited to detecting black hole merger signals, for which FH have estimated these parameters. Furthermore, we shall show in the following sections that this filter is optimal in the sense that it gives the highest probability of correctly detecting a signal for a given false alarm probability. Our treatment here is cursory; a more comprehensive description of the filter is in preparation.
## II The Power Statistic
Consider the output $`h(t)`$ of the interferometric gravitational wave detector. It is sampled at a finite rate $`1/\mathrm{\Delta }t`$ to produce a time series $`h_j=h(j\mathrm{\Delta }t)`$, where $`j`$ is an integer. A segment of $`N`$ samples defines a vector $`\mathbf{h}=(h_j,\mathrm{\dots },h_{j+N-1})`$. This vector can be written as
$$\mathbf{h}=\mathbf{n}+\mathbf{s}$$
(1)
where $`\mathbf{n}`$ is detector noise and $`\mathbf{s}`$ is a (possibly absent) signal.
The noise is assumed to be stochastic. The vectors $`\mathbf{n}`$ and $`\mathbf{h}`$ are therefore described statistically. Statistical fluctuations lead to two types of errors in detecting a signal: false alarms, in which signals are detected when none are present, and false dismissals, in which signals are not detected when present. An optimal filter is defined to be one which minimizes false dismissals for a given false alarm rate.
Neyman and Pearson have shown that an optimal filter is one for which the likelihood ratio $`\mathrm{\Lambda }`$ is compared to a threshold. The likelihood ratio is defined to be
$$\mathrm{\Lambda }[\mathbf{h}]\equiv \int d[\mathbf{s}]\frac{p[\mathbf{h}|\mathbf{s}]}{p[\mathbf{h}|\mathbf{0}]},$$
(2)
where $`p[\mathbf{h}|\mathbf{s}]`$ ($`p[\mathbf{h}|\mathbf{0}]`$) is the probability of obtaining $`\mathbf{h}`$ given that a signal $`\mathbf{s}`$ is present (absent) and $`d[\mathbf{s}]`$ is a measure on the space of signals.
The quantities $`p[\mathbf{h}|\mathbf{s}]`$ and $`p[\mathbf{h}|\mathbf{0}]`$ depend on the statistical properties of the noise. For convenience, we assume in this article that the noise is stationary and Gaussian<sup>*</sup><sup>*</sup>* However, other types of noise can also be considered. We can therefore write the probability distribution for the noise as
$$p[\mathbf{n}]=C\mathrm{exp}\left[-\frac{\mathbf{n}\cdot \mathbf{n}}{2}\right],$$
(3)
where $`C`$ is a constant prefactor and $`\mathbf{n}\cdot \mathbf{n}`$ an inner product, both of which are determined by the autocorrelation matrix of the noise. When a signal is present (absent) we have $`\mathbf{n}=\mathbf{h}-\mathbf{s}`$ ($`\mathbf{n}=\mathbf{h}`$), and can easily use Eq. (3) to find the integrand in Eq. (2)
$$\frac{p[\mathbf{h}|\mathbf{s}]}{p[\mathbf{h}|\mathbf{0}]}=\mathrm{exp}\left[(\mathbf{s}\cdot \mathbf{h})-\frac{1}{2}(\mathbf{s}\cdot \mathbf{s})\right].$$
(4)
The measure $`d[\mathbf{s}]`$ in Eq. (2) reflects our prior knowledge about the signal. For those signal parameters about which we have no prior knowledge, we choose a measure which reflects our ignorance.
Consider now the case where we know the time window and frequency band in which a signal occurs. The measure $`d[\mathbf{s}]`$ restricts the integral in Eq. (2) to the projection $`\mathbf{h}_{\perp }`$ of $`\mathbf{h}`$ into the space of vectors with the given window and frequency band. Introducing the notation $`A^2=\mathbf{s}\cdot \mathbf{s}`$, $`\mathcal{P}=\mathbf{h}_{\perp }\cdot \mathbf{h}_{\perp }`$ and $`\mathbf{s}\cdot \mathbf{h}_{\perp }=A\mathcal{P}^{1/2}\mathrm{cos}\theta `$, we rewrite Eq. (2) as
$$\mathrm{\Lambda }[\mathbf{h}]=\int d[\theta ,A]\mathrm{exp}\left[A\mathcal{P}^{1/2}\mathrm{cos}\theta -\frac{1}{2}A^2\right].$$
(5)
Since we claim no prior knowledge of $`\theta `$, a suitable measure over $`\theta `$ is uniform over all possible angles between $`\mathbf{h}_{\perp }`$ and $`\mathbf{s}`$ (i.e. over an $`\mathcal{N}`$-sphere, where $`\mathcal{N}`$ is the dimension of the space of vectors with the required duration and frequency band), which reflects our lack of knowledge.
While a suitable measure over the signal amplitude $`A`$ can also be deduced, it is unnecessary here. Instead, one constructs a locally optimal statistic, $`\mathrm{\Lambda }_{\mathrm{loc}}[\mathbf{h}]`$, which is appropriate in the limit of weak signals. To construct this statistic, one expands the likelihood ratio (5) in a Taylor series about $`A=0`$. The statistic is simply the first non-vanishing coefficient (excluding the $`A^0`$ coefficient) in the expansion. Expanding (5) and integrating over $`\theta `$ we get
$$\mathrm{\Lambda }_{\mathrm{loc}}[\mathbf{h}]\propto \mathcal{P}+\text{terms independent of }\mathbf{h}.$$
(6)
The terms independent of $`\mathbf{h}`$ clearly do not discriminate between the presence and absence of a signal. Thus, the optimal statistic for detecting a signal of known duration and frequency band is simply the total power in the detector output over that time and band.
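In discrete terms, the statistic is the summed squared magnitude of the Fourier coefficients of the (whitened) data inside the chosen time interval and frequency band. A minimal sketch for unit-variance white noise; the sampling rate, window, and normalization conventions here are our own choices, not the paper's implementation:

```python
import numpy as np

def power_statistic(x, fs, t_window, f_band):
    """Total power of x over the time interval t_window = (t0, t1) and
    frequency band f_band = (f0, f1), normalized so that for whitened
    unit-variance noise the statistic is chi^2 distributed with roughly
    V = 2 * duration * bandwidth degrees of freedom."""
    i0, i1 = int(t_window[0] * fs), int(t_window[1] * fs)
    seg = x[i0:i1]
    n = len(seg)
    X = np.fft.rfft(seg)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (f >= f_band[0]) & (f < f_band[1])
    return 2.0 * np.sum(np.abs(X[band]) ** 2) / n

rng = np.random.default_rng(1)
fs = 1024.0
x = rng.standard_normal(int(4 * fs))   # 4 s of whitened detector output
P = power_statistic(x, fs, t_window=(1.0, 2.0), f_band=(100.0, 200.0))
# duration 1 s and bandwidth 100 Hz give V ~ 200, so P ~ 200 on average
```

An injected band-limited signal simply shifts the mean of `P` upward by its squared signal-to-noise ratio, which is what the threshold test of the next section exploits.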
## III Operating Characteristics
In the previous section we determined the optimal statistic $`\mathcal{P}`$ for signals of known duration and frequency band in stationary Gaussian noise. We construct the optimal filter from this statistic via a threshold decision rule. That is, we calculate at what value $`\mathcal{P}^*`$ of the statistic we incur the largest acceptable false alarm probability. We then compare values of $`\mathcal{P}`$ calculated from our data with $`\mathcal{P}^*`$. A signal is said to have been detected if $`\mathcal{P}>\mathcal{P}^*`$.
For Gaussian noise, false alarm and false dismissal probabilities can be calculated analytically up to quadrature. If no signal is present, $`\mathcal{P}`$ is just the sum of the squares of $`V\equiv 2\times \delta t\times \delta f`$ independent Gaussian random variables. Thus $`\mathcal{P}`$ has a $`\chi ^2`$ distribution with $`V`$ degrees of freedom, and the false alarm probability for a value $`\mathcal{P}^*`$ is just
$$P(\mathcal{P}>\mathcal{P}^*|A=0)=\frac{\mathrm{\Gamma }(V/2,\mathcal{P}^*/2)}{\mathrm{\Gamma }(V/2)}$$
(7)
where $`\mathrm{\Gamma }(a,x)=\int _x^{\infty }e^{-t}t^{a-1}dt`$ is the incomplete Gamma function.
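Eq. (7) is the survival function of a $`\chi ^2`$ variable with $`V`$ degrees of freedom (the regularized upper incomplete $`\mathrm{\Gamma }`$), so thresholds can be set with standard library routines. A quick consistency check, as our own illustration using SciPy (the numerical values of $`V`$ and the threshold are arbitrary):

```python
from scipy.special import gammaincc
from scipy.stats import chi2

V = 200            # degrees of freedom, roughly 2 * duration * bandwidth
P_star = 250.0     # threshold on the power statistic

false_alarm = chi2.sf(P_star, V)                      # P(P > P* | A = 0)
incomplete_gamma = gammaincc(V / 2.0, P_star / 2.0)   # Gamma(V/2, P*/2) / Gamma(V/2)
```

Inverting the relation, `chi2.isf(alpha, V)` yields the threshold for a desired false alarm probability `alpha`.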
If a signal of amplitude $`A`$ is present, $`\mathcal{P}`$ is distributed as a weighted sum of $`\chi ^2`$ probability distributions,
$$p(\mathcal{P}|V,A)=\underset{n=0}{\overset{\infty }{\sum }}\frac{e^{-A^2/2}(A^2/2)^n}{n!}\frac{e^{-\mathcal{P}/2}(\mathcal{P}/2)^{n+V/2-1}}{\mathrm{\Gamma }(n+V/2)}.$$
(8)
This is the non-central $`\chi ^2`$ probability distribution. The false dismissal probability is given by
$$P(\mathcal{P}<\mathcal{P}^*|A)=\int _0^{\mathcal{P}^*}p(\mathcal{P}|V,A)\,d\mathcal{P}$$
(9)
This probability can be integrated numerically.
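In practice no bespoke quadrature is required: the weighted sum of Eq. (8) is the noncentral $`\chi ^2`$ density with $`V`$ degrees of freedom and noncentrality parameter $`A^2`$, so Eq. (9) is its cumulative distribution function. A sketch (ours, using SciPy; the parameter values are arbitrary) that also checks the Poisson-weighted series against the closed form:

```python
import numpy as np
from scipy.stats import ncx2, chi2, poisson

V, A, P_star = 200, 12.0, 250.0

# Eq. (9): false dismissal probability, in closed form
false_dismissal = ncx2.cdf(P_star, df=V, nc=A**2)

# the same number from the series of Eq. (8): a Poisson(A^2/2)-weighted
# mixture of central chi^2 distributions with V + 2n degrees of freedom
n = np.arange(400)
series = np.sum(poisson.pmf(n, A**2 / 2.0) * chi2.cdf(P_star, V + 2 * n))
```

Scanning the amplitude $`A`$ at a fixed threshold traces out the filter's receiver operating characteristic.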
## IV Summary and Discussion
We have presented here a power filter to search for gravitational wave signals from burst sources in interferometric data. The filter is designed to look for signals of known duration and frequency bandwidth; when this is the only available information, and the noise is stationary and Gaussian, the power filter is optimal. Moreover, this filter is locally optimal for a wide class of non-Gaussian noise, thus making it a useful tool to analyze real interferometer data.
One shortcoming of the power filter is its inability to distinguish between gravitational wave signals and spurious instrumental artifacts which produce time- and band-limited signals. This is mitigated by using multiple instruments to detect gravitational wave bursts. As an added benefit, bursts identified as noise can then be used for detector characterization. The extension of the power filter to multiple instruments will appear in an upcoming article; this article also contains a more complete discussion of the filter, including implementation strategies and a comparison to matched filtering.
In conclusion, we think that the power filter provides a useful tool for gravitational wave data analysis. It should play a significant role in detector characterization for single interferometers, and should form the basic building block in an hierarchical detection strategy using multiple interferometers.
###### Acknowledgements.
This work was supported by the following NSF grants: PHY-9728704, PHY-9507740, PHY-9970821, and PHY-9722189. |
no-problem/0001/cond-mat0001052.html | ar5iv | text | # 1 fig2
28 July 1997
Influence of phase-sensitive interaction on the decoherence process in molecular systems
D. Kilin and M. Schreiber
Institut für Physik, Technische Universität, D-09107 Chemnitz, Germany
Abstract
The character of the interaction between an impurity vibrational mode and a heat bath leads to certain peculiarities in the relaxational dynamics of the excited states. We derive a non-Markovian equation of motion for the reduced density matrix of this system which is valid for the initial, intermediate, and kinetic stages of the relaxation. The linear phase-sensitive character of the interaction ensures the ultrafast disappearance of the quantum interference of an initially superpositional state and the effect of classical squeezing of an initially coherent state. On the other hand, the second-power interaction induces a partial conservation of the quantum interference.
Keywords: decoherence, superposition, non-Markovian, vibronic wavepacket
Corresponding author: Dmitri Kilin, Institut für Physik, Technische Universität, D-09107 Chemnitz, Germany; Fax: ++49-371-531-3143; e-mail: d.kilin@physik.tu-chemnitz.de
Motivation and model
Time-resolved experimental techniques sometimes allow the detection of quantum superpositional states in different physical systems. Such states provide a basis for possible applications like quantum computers. Unfortunately, these interesting states inevitably decay due to the coupling with a heat bath containing many degrees of freedom. Below we consider how the character of the coupling with the bath influences the dynamics of the superpositional state of the impurity vibrational mode. Describing this mode as a harmonic oscillator, one writes the interaction part of the Hamiltonian as
$$H_I=\hbar \underset{\xi }{\sum }K(\omega _\xi )\left(b_\xi ^+f(a,a^+)+b_\xi f^+(a,a^+)\right),$$
(1)
where the function $`K(\omega _\xi )`$ describes the intensity of the interaction between the vibronic mode and the bath mode operators $`b_\xi `$. $`f(a,a^+)`$ is a function of the vibronic mode operators. We consider two cases, namely $`f(a,a^+)=a+a^+`$, corresponding to the case of linear phase-sensitive interaction, and $`f(a,a^+)=a^2`$, yielding a quadratic interaction in the rotating wave approximation. Below we use "single" and "double" to indicate the baths with these two types of interactions.
Single bath
The first case describes processes in which system and bath exchange one quantum. Such a behavior is provided by the physical situation when the majority of the bath modes contributing to dephasing and thermalization of the system have approximately the same frequency $`\omega _\xi `$ as the system mode $`\omega `$. Applying the formalism of the time evolution operator, and restricting to the second-order cumulant expansion, we obtain the non-Markovian master equation for the reduced density matrix $`\sigma `$ of the impurity vibrational mode:
$`{\displaystyle \frac{\partial \sigma }{\partial t}}`$ $`=`$ $`-i\omega [a^+a,\sigma ]`$ (2)
$`+`$ $`(\gamma _{n+1}+\stackrel{~}{\gamma }_n^*)[a\sigma ,a^++a]`$ (3)
$`+`$ $`(\gamma _{n+1}^*+\stackrel{~}{\gamma }_n)[a^++a,\sigma a^+]`$ (4)
$`+`$ $`(\stackrel{~}{\gamma }_{n+1}+\gamma _n^*)[a^+\sigma ,a^++a]`$ (5)
$`+`$ $`(\stackrel{~}{\gamma }_{n+1}^*+\gamma _n)[a^++a,\sigma a].`$ (6)
We obtain four relaxation functions $`\gamma `$, which are shown in Fig. 1a. They originate from the correlations between the operators $`a(t)`$ and $`a^+(t)`$ of the system mode $`\omega `$ and the memory kernels of the bath, like $`\langle b_\xi ^+(0)b_\xi (\tau )\rangle `$, which appear in the second-order cumulant expansion of the evolution operator. These coefficients are found to be time-dependent. $`\gamma _{n+1}`$ and $`\gamma _n`$ describe the situation when emission-like processes $`\gamma _{n+1}`$ prevail over the absorption-like processes $`\gamma _n`$, where $`n`$ denotes the number of quanta in the bath modes. The functions $`\stackrel{~}{\gamma }_{n+1}`$, $`\stackrel{~}{\gamma }_n`$ correspond to the reverse situation and are always small.
The evolution of the different initial states of the system was found in analytical form for two cases, namely for the initial stage of relaxation, when all of the relaxation functions are linear in time, and for the kinetic stage of the relaxation, when the coefficients $`\stackrel{~}{\gamma }_n`$, $`\stackrel{~}{\gamma }_{n+1}`$ vanish, while $`\gamma _n`$, $`\gamma _{n+1}`$ become constants $`\mathrm{\Gamma }_1n_1`$ and $`\mathrm{\Gamma }_1(n_1+1)`$, respectively. Here $`n_1=n(\omega )`$ indicates the number of quanta in the bath mode at the system frequency. The analytical solution is based on the generating function formalism. The approaches corresponding to the above stages are applied to the evolution of the superposition of two coherent states $`|\alpha \rangle `$ and $`e^{i\varphi }|-\alpha \rangle `$
$$|\alpha ,\varphi \rangle =N^{-1}\left(|\alpha \rangle +e^{i\varphi }|-\alpha \rangle \right),$$
(7)
where $`N`$ is the normalization constant and $`|\alpha \rangle `$ is obtained by displacement of the vacuum state $`|0\rangle `$ by $`\alpha `$ as $`|\alpha \rangle =\mathrm{exp}(\alpha a^+-\alpha ^{*}a)|0\rangle `$. The dependence of the probability density $`P`$ on coordinate $`Q`$ and time is found to consist of classical and interference parts :
$$P(Q,t)=P_{\mathrm{class}}(Q,t)+P_{\mathrm{int}}(Q,t).$$
(8)
The interference part behaves somewhat differently under each approach, compare Fig. 1b. Nevertheless, energy relaxation of $`P_{\mathrm{class}}`$ and decoherence of $`P_{\mathrm{int}}`$ occur on different time scales, independent of the approach applied: the quantum interference $`P_{\mathrm{int}}`$ disappears faster. The phase-sensitive character of the relaxation induces small oscillations of the broadening of the initially coherent wave packet .
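The decomposition (8) is easy to make explicit at the initial time. The sketch below (our own illustration, not the authors' code; it assumes dimensionless units ℏ = m = ω = 1 and real α) evaluates the coordinate density of the superposition (7) and separates its classical and interference parts.

```python
import numpy as np

def cat_density(q, alpha, phi):
    """P(Q) at t=0 for the cat state N^-1 (|alpha> + e^{i phi} |-alpha>).

    Real alpha; the coherent-state wave function in these units is
    psi_a(q) = pi^(-1/4) exp(-(q - sqrt(2) a)^2 / 2).
    Returns (P_total, P_classical, P_interference), as in Eq. (8).
    """
    psi_p = np.pi ** -0.25 * np.exp(-(q - np.sqrt(2.0) * alpha) ** 2 / 2.0)
    psi_m = np.pi ** -0.25 * np.exp(-(q + np.sqrt(2.0) * alpha) ** 2 / 2.0)
    n2 = 2.0 + 2.0 * np.cos(phi) * np.exp(-2.0 * alpha ** 2)  # N^2 = <cat|cat>
    p_class = (psi_p ** 2 + psi_m ** 2) / n2
    p_int = 2.0 * np.cos(phi) * psi_p * psi_m / n2
    return p_class + p_int, p_class, p_int

q = np.linspace(-10.0, 10.0, 4001)
p_tot, p_cl, p_in = cat_density(q, alpha=2.0, phi=0.0)
dq = q[1] - q[0]
print("norm =", (p_tot * dq).sum())                  # close to 1
print("interference weight =", (p_in * dq).sum())    # small overlap term
```

The interference term carries only the small overlap weight proportional to exp(-2|α|²), which is exactly what decoherence suppresses first.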
Double bath
When the system interacts with a bath whose mode density peaks at twice the oscillator frequency, processes occur in which the system loses two quanta while the bath gains one. The reverse processes are also allowed. To describe such behavior we use $`f(a,a^+)=a^2`$ in the interaction Hamiltonian (1). The kinetic stage of the evolution of such a system follows the master equation
$`{\displaystyle \frac{\partial \sigma }{\partial t}}`$ $`=`$ $`-i\omega [a^+a,\sigma ]`$ (9)
$`+`$ $`\mathrm{\Gamma }_2(n_2+1)\left\{[a^2\sigma ,(a^+)^2]+[a^2,\sigma (a^+)^2]\right\}`$ (10)
$`+`$ $`\mathrm{\Gamma }_2n_2\left\{[(a^+)^2\sigma ,a^2]+[(a^+)^2,\sigma a^2]\right\},`$ (11)
where $`\mathrm{\Gamma }_2=\pi K^2g_2`$ is the decay rate of the vibrational amplitude. Here, the number of quanta in the bath mode $`n_2=n(2\omega )`$, the coupling function $`K=K(2\omega )`$, and the density of bath states $`g_2=g(2\omega )`$ are evaluated at the double frequency of the selected oscillator.
Rewritten in the basis of eigenstates $`|n\rangle `$ of the unperturbed oscillator, this master equation couples a matrix element $`\sigma _{m,n}=\left\langle m\right|\sigma \left|n\right\rangle `$ only to $`\sigma _{m+2,n+2}`$ and $`\sigma _{m-2,n-2}`$. In effect, even and odd initial states of the system remain distinguishable. The odd excited state $`|1\rangle `$ cannot relax into the ground state $`|0\rangle `$, but the even excited state $`|2\rangle `$ can.
The evolution of the system after preparation under different initial conditions was simulated numerically. The equations of motion of the density matrix elements are integrated using a fourth order Runge-Kutta algorithm with stepsize control. To make the set of differential equations finite we restrict the number of levels to $`m,n\le 20`$.
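A minimal version of this procedure is sketched below (our own illustration, not the authors' code): it integrates a double-bath master equation of the type of Eqs. (9)-(11), written in standard Lindblad form, with a fixed-step fourth order Runge-Kutta scheme in a truncated number basis (all parameter values are arbitrary choices). Because every term changes the level index by 0 or ±2, parity is conserved, so the odd state |1⟩ never populates the even sector, while |2⟩ relaxes toward |0⟩.

```python
import numpy as np

# Truncated harmonic-oscillator ladder operators, levels 0..20 as in the text
NLEV = 21
a = np.diag(np.sqrt(np.arange(1, NLEV)), k=1).astype(complex)
ad = a.conj().T
a2, ad2 = a @ a, ad @ ad

def rhs(sigma, omega=1.0, gamma2=0.05, n2=0.5):
    """Kinetic-stage double-bath master equation, Lindblad form."""
    c = lambda x, y: x @ y - y @ x
    out = -1j * omega * c(ad @ a, sigma)
    out += gamma2 * (n2 + 1.0) * (c(a2 @ sigma, ad2) + c(a2, sigma @ ad2))
    out += gamma2 * n2 * (c(ad2 @ sigma, a2) + c(ad2, sigma @ a2))
    return out

def rk4_step(sigma, dt):
    k1 = rhs(sigma)
    k2 = rhs(sigma + 0.5 * dt * k1)
    k3 = rhs(sigma + 0.5 * dt * k2)
    k4 = rhs(sigma + dt * k3)
    return sigma + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def evolve(n0, steps=4000, dt=0.01):
    sigma = np.zeros((NLEV, NLEV), complex)
    sigma[n0, n0] = 1.0
    for _ in range(steps):
        sigma = rk4_step(sigma, dt)
    return np.real(np.diag(sigma))

pops1 = evolve(1)   # odd initial state |1>
pops2 = evolve(2)   # even initial state |2>
print("even-sector population starting from |1>:", pops1[0::2].sum())
print("ground-state population starting from |2>:", pops2[0])
```

Since every dissipative term is a commutator, the trace of σ is preserved exactly, which is a convenient numerical check on the integrator.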
A representative example is the behavior of an initially coherent state of the system. For comparison with the usual behavior we have used the time dependence of the mean value of the coordinate. One can see in Fig. 2a that the relaxation consists of two stages. For the usual (single-bath) system the mean coordinate decreases at a constant rate during both stages. The same initial state coupled to the double bath shows a fast decay in the first stage and almost none in the second.
The influence of the type of bath on the evolution of the superpositional state is of special importance. The simulation was made at a temperature of $`k_BT=2\hbar \omega /\mathrm{ln}3`$, corresponding to $`n(2\omega )=0.5`$. To allow a direct comparison, we have simulated the evolution of the same superpositional state twice, the only difference being the type of bath, single or double. The coordinate representation of the wave packets is presented in Figs. 2b and 2c. Equal relaxation rates $`\mathrm{\Gamma }_1=\mathrm{\Gamma }_2`$ nevertheless produce different results. In the system coupled to the single bath the quantum interference disappears already during the first period, while the amplitude decreases only slightly. In the opposite case of the double bath, the interference remains almost unchanged, although a fast reduction of the amplitude occurs.
Therefore, the second system partially preserves the quantum superpositional state. Experimental investigation of systems displaying such properties is necessary, both for extracting typical parameter ranges for theoretical models and for practical applications like quantum computation and quantum cryptography.
Conclusions
The character of the coupling to a heat bath plays a dominating role in the time evolution of different excited states of a vibrational mode. Linear coupling ensures the ultrafast decay of the interference part of the superpositional state; this result remains true whichever of the approaches is applied. The quadratic type of coupling gives the same time scales for both the amplitude and interference relaxations. Some initial coherence properties, like the distinction between even and odd levels, survive on long time scales.
Acknowledgments
The authors thank DFG for financial support.
References
Femtochemistry: Ultrafast Chemical and Physical Processes in Molecular Systems, ed. by M. Chergui (World Scientific, Singapore, 1996).
J.F. Poyatos, J.I. Cirac, and P. Zoller, Phys. Rev. Lett. 77 (1996) 4728.
M. Brune, E. Hagley, J. Dreyer, X. Maître, A. Maali, C. Wunderlich, J.M. Raimond, and S. Haroche, Phys. Rev. Lett. 77 (1996) 4887.
J.I. Cirac and P. Zoller, Phys. Rev. Lett. 74 (1995) 4091.
W.H. Zurek, Phys. Today 44 (1991) 36.
M. Schreiber and D. Kilin, in: Proc. 2nd Int. Conf. Excitonic Processes in Condensed Matter, ed. by M. Schreiber (Dresden University Press, Dresden 1996) p. 331.
D. Kilin and M. Schreiber, Phys. Rev. A (submitted). |
no-problem/0001/cond-mat0001098.html | ar5iv | text | # Reentrant layering in rare gas adsorption: preroughening or premelting?
\[
## Abstract
The reentrant layering transitions found in rare gas adsorption on solid substrates have been explained in conflicting ways, either in terms of preroughening (PR) or of top-layer melting-solidification phenomena. We obtain adsorption isotherms of Lennard-Jones particles on an attractive substrate by off-lattice Grand Canonical Monte Carlo (GCMC) simulation, and reproduce reentrant layering. Microscopic analysis, including layer-by-layer occupancies, surface diffusion and pair correlations, confirms the switch of the top surface layer from solid to quasi-liquid across the transition temperature. At the same time, the top-layer occupancy is found to switch at each jump from close to full to close to half, indicating a disordered flat (DOF) surface and establishing preroughening as the underlying mechanism. Our results suggest that top-layer melting is essential in triggering preroughening, which thus represents the threshold transition to surface melting in rare gas solids.
\]
Rare gas solid surfaces and films provide an important testing ground for a variety of surface phase transitions. Surface melting , roughening , and more recently preroughening (PR) have been identified or at least claimed at the free rare gas solid-vapor interface. Layering transitions of thin rare gas films on smooth substrates have given rise to a wide literature . The discovery of reentrant layering (RL) — the unexpected disappearance and subsequent reappearance (well below the roughening temperature) of layering steps in adsorption isotherms on smooth substrates — has led to a debate . One possible explanation is PR, a phase transition which takes a surface from a low-temperature "ordered flat" state, with essentially full surface coverage ($`T<T_{PR}`$), to a high-temperature "disordered flat" (DOF) state, with half coverage and a network of meandering steps ($`T>T_{PR}`$). Layering would disappear at PR, but re-enter in the DOF state . The competing explanation is based on the possibility of a melting-solidification-melting sequence in the top surface layer, similar to that seen for increasing temperature in canonical molecular dynamics simulations . In this picture, RL would result directly from a layer-promotion-driven melting of the top surface layer, and the subsequent advance of a solid-liquid interface . Both approaches appear to capture some important physics, but both also have problems. The non-atomistic statistical mechanics lattice models provide, in the presence of an attractive substrate potential, an overall adsorption phase diagram with zig-zag lines of heat capacity peaks (whose behavior has been called "zippering" ) which are centered at $`T_{PR}`$ and strikingly resemble experimental observations . Because they contain PR, the models can naturally explain why the coverage jump across RL should be about half a monolayer, as seen in ellipsometry and in X-ray measurements .
However, they fail to account for continuous atom dynamics, in particular melting, and it remains unclear how bad the total neglect of these aspects might be at these relatively high temperatures. In Ar (111), RL takes place near $`69\mathrm{K}`$, not too far from melting at $`T_m=84\mathrm{K}`$. By contrast, the atomistic canonical simulation approach does not suffer from that problem, and can describe quite well all the surface degrees of freedom, including the thermal evolution of each surface layer from solid to liquid. It finds, realistically, that top-layer surface melting seems to set in precisely near the RL temperature. However, it does not explain the half-layer coverage jump across RL. A crucial underlying difficulty of this approach lies in the fixed particle number — a difficulty which the lattice models, being naturally grand canonical, do not encounter. In this situation Grand Canonical Monte Carlo (GCMC) atomistic simulation should be the method of choice, long applied to describe adsorption, albeit only of a single monolayer. Recently, we demonstrated how a free rare gas (111) surface can be realistically simulated by GCMC with a Lennard-Jones potential, and found indications that PR is indeed incipient at $`0.8T_m`$ . That work, however, remained incomplete, because full equilibrium stabilization of the grand canonical surface proved increasingly hard with increasing temperature, and failed above $`0.8T_m`$, where a value of $`\mu `$ that would cause neither decrease nor increase of the total particle number could no longer be found.
In this Letter we present results of a fully equilibrated realistic GCMC simulation of multilayer rare gas adsorption on a flat attractive substrate. In this case, the substrate potential naturally provides the necessary stabilization for the system. We obtain realistic adsorption isotherms, whose main features compare directly with experiment. Reentrant layering is recovered, and layer occupancies confirm its association with a DOF surface and thus with PR. At the same time, however, surface diffusion and pair correlations show that while the virtually full monolayer below $`T_{\mathrm{PR}}`$ is solid, with only a gas of adatoms and vacancies, the half-full monolayer found above $`T_{\mathrm{PR}}`$ is made of 2D liquid islands (even if in a strong periodic potential). A new picture emerges, where the fractional monolayer melting, besides opening the way to surface melting, is also a key element favoring the preroughening of these surfaces.
We simulate adsorption by classical GCMC, implementing small displacement moves (m), creations (c), and destructions (d) with relative probabilities $`\alpha ^{(m)}=1-2\alpha `$ and $`\alpha ^{(c)}=\alpha ^{(d)}=\alpha `$. Small moves apply to all particles, whereas creation/destruction is restricted to a fixed surface region, about four layers wide, since their acceptance in the fourth layer of this region is already negligible over the entire MC run. In standard bulk GCMC the fastest convergence of the Markov chain is obtained for $`\alpha =1/3`$ . For our surface geometry and our potential, the optimal value of $`\alpha `$ is found to be small, of order $`10^{-3}`$ (the precise value depending on the outer layer population relative to the total), as needed to allow for a more effective equilibration after each creation/destruction move. Creation and destruction acceptance probabilities were checked explicitly to satisfy detailed balance. We simulated adsorption of atoms interacting via the (12,6) Lennard-Jones potential truncated at $`2.5\sigma `$. The bulk fcc triple point temperature $`T_m`$ of this model is $`0.7\epsilon `$ (note that the pressure dependence is negligible, i.e. $`(P_m/T_m)(dT_m/dP)\simeq 2\times 10^{-4}`$ for Ar), and we will from now on switch notation to a reduced temperature $`t=T/T_m`$. The substrate was taken to be flat and unstructured. Periodic boundary conditions were assumed along the $`x`$ and $`y`$ directions, with a reflecting wall along $`z`$, placed well above the surface. Interactions between atoms and substrate were also of the Lennard-Jones form, giving rise to a laterally invariant (3,9) potential $`V(z)=A(B/z^9-C/z^3)`$, with $`A=40\pi /3`$, $`B=1/15`$ and $`C=1/2`$, the latter $`10`$ times larger than the true Ar/graphite value, so as to avoid the stabilization problems encountered previously with the free solid-vapor interface . The $`(xy)`$ simulation box size was $`22\times 23`$ in $`\sigma `$ units, and a full fcc layer contained $`N_l=480`$ atoms.
We focused on two temperatures, $`t_1=0.75`$ and $`t_2=0.86`$ (respectively below and above the RL temperature $`t\simeq 0.83`$), where we obtained full and converged adsorption isotherms. For each temperature we increased the chemical potential $`\mu `$ (i.e., increased the pressure of the fictitious perfect gas in contact with the system) in steps of $`0.02\epsilon `$ and waited for stabilization of both total energy and particle number. Generally half a million Monte Carlo (MC) moves per particle were sufficient to reach equilibrium. Then $`30`$ to $`50`$ uncorrelated configurations were generated from a subsequent half million MC moves and analyzed.
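The grand canonical bookkeeping behind such a run can be illustrated with a deliberately stripped-down sketch (our own illustration, not the authors' code): an ideal gas in a box, for which the exact answer ⟨N⟩ = e^{βμ}V/Λ³ is known, so the creation/destruction acceptance rules can be checked directly. Displacement moves are omitted since ideal-gas positions carry no energy; all parameter values are arbitrary.

```python
import random

def gcmc_ideal_gas(zV, moves=200_000, seed=1):
    """Toy GCMC for an ideal gas with activity*volume zV = e^{beta mu} V / Lambda^3.

    With no interactions, only creation/destruction matter (Metropolis rules):
      acc(create)  = min(1, zV / (N + 1))
      acc(destroy) = min(1, N / zV)
    The equilibrium distribution of N is Poisson with mean zV.
    """
    rng = random.Random(seed)
    n, total, samples = 0, 0, 0
    for step in range(moves):
        if rng.random() < 0.5:                      # attempt a creation
            if rng.random() < min(1.0, zV / (n + 1)):
                n += 1
        else:                                       # attempt a destruction
            if n > 0 and rng.random() < min(1.0, n / zV):
                n -= 1
        if step > moves // 10:                      # discard equilibration
            total += n
            samples += 1
    return total / samples

mean_n = gcmc_ideal_gas(zV=5.0)
print("estimated <N> =", mean_n, "  exact:", 5.0)
```

The same detailed-balance ratios, with the Boltzmann factor of the energy change inserted, are what the full Lennard-Jones simulation verifies explicitly.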
Fig. 1 shows the calculated adsorption isotherms — the number of adsorbed layers versus $`(\mu _0-\mu )^{-1/3}`$, $`\mu _0`$ being the saturation chemical potential (where a bulk quantity of matter would condense). At the lower temperature $`t_1`$ we find clear layering steps between consecutive integer layer numbers. Analysis of layer occupancies shows that after each coverage jump the first layer is nearly full, with 15–20% of vacancies, and only a few adatoms. In the subsequent plateau the adatom population gradually increases to 15–20% and vacancies in the first layer are filled, until the next jump suddenly occurs, and so on. Between $`t_1`$ and $`t_2`$ we generally observed that, as in experiments, the layering steps tended to disappear; however, here it became very difficult to obtain a stable surface and thus well-defined adsorption isotherms. At the higher temperature $`t_2`$ we did recover stability, and we found that layering was again present, but with two important qualitative differences from the low-temperature isotherm: coverage was shifted by half a monolayer, and plateaus were broader. Adsorption began at a half-full layer here, and it progressed continuously, leading to a broader plateau, until the next jump to another half-integer coverage. We plot in Fig. 1 the $`t_2`$ isotherm up to eight adsorbed layers, the maximum thickness before again encountering stabilization problems. The large plateau breadths are clearly due to our strong substrate potential. The film grand potential can be crudely modeled as a periodic part, say $`k\mathrm{cos}(2\pi n)`$, plus an effective interface repulsion, $`c/(2(n-n_0)^2)`$ , plus a growth term $`-\mu n`$ ($`n`$ is the total number of layers). The plateau breadth is thus predicted to decrease asymptotically in the form $`\mathrm{\Delta }n\simeq 1/(1+\gamma ^{-1}(n-n_0)^4)`$, where $`\gamma =3c/(4\pi ^2k)`$ measures the strength of the substrate. As Fig. 1 (inset) shows, this law fits the experimental data well, with $`c/k=1200`$. It also agrees fairly well with our actual GCMC plateau widths, once $`\gamma `$ is increased by the appropriate factor of $`10`$. We also note from Fig. 1 the relatively large compressibility $`k^{-1}`$ of the half-coverage state with respect to the low-temperature state.
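As a quick consistency check on the quoted γ (our own illustration; the constants below are arbitrary, not the fitted values): the combination γ⁻¹(n−n₀)⁴ entering the plateau-width formula is just the ratio of the maximal curvature 4π²k of the periodic term to the curvature 3c/(n−n₀)⁴ of the interface repulsion, as the sketch verifies numerically.

```python
import math

# Arbitrary illustrative constants (not the fitted values of the text)
k, c, n0 = 1.0, 40.0, 0.0

def repulsion(n):
    return c / (2.0 * (n - n0) ** 2)

def repulsion_curvature(n):
    """Analytic second derivative of c/(2(n - n0)^2), equal to 3c/(n - n0)^4."""
    return 3.0 * c / (n - n0) ** 4

def numeric_curvature(f, n, h=1e-3):
    return (f(n + h) - 2.0 * f(n) + f(n - h)) / h ** 2

gamma = 3.0 * c / (4.0 * math.pi ** 2 * k)
n = 3.7
ratio = (4.0 * math.pi ** 2 * k) / repulsion_curvature(n)
print("curvature check:", repulsion_curvature(n), numeric_curvature(repulsion, n))
print("gamma^-1 (n-n0)^4 =", (n - n0) ** 4 / gamma, "  same ratio:", ratio)
```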
We conclude that our simulation reproduces the basic RL phenomenon, making it possible to probe deeply into its nature. For a better understanding of the layering reentrance, we plot in Fig. 2 the occupancies, at $`t_2=0.86`$, of the different layers for increasing chemical potential. The jumps leading to fractional coverage states are clearly observable. Following each jump (A, layer #5), the coverage increases continuously by a fraction of a monolayer, enriching the adatom population as well as the first and second layers, until at (B) the surface (layer #6) is ready for the next jump, leading to (C) where, following the jump, former adatoms (layer #6) increase in density to form a new half layer, and a new adatom layer (#7) is started. We found no trace of the non-monotonic occupancies reported in earlier canonical studies. The top layer occupancy extrapolates to about $`50\%`$ for large adsorbate thickness, strongly supporting the identification with a DOF state: an ordinary 2D liquid should display a much higher average lateral density. The occupancies of the three outermost layers (0.1, 0.5, 0.8) for what we thus suppose to describe a realistic DOF state differ somewhat from the simplistic ones expected from lattice models, namely (0.0, 0.5, 1.0). The finding of a DOF surface at $`t_2`$, against an ordinary flat surface \[occupancies (0.15, 0.85, 1.0)\] at $`t_1`$, indicates that PR of the free rare gas solid surface must take place in between. This conclusion is also supported by the evidence of DOF phase separation taking place at $`t\simeq 0.83`$ independently obtained by canonical simulations of the free Lennard-Jones surface .
One might thus be led to think that, apart from details, the physics is just that dictated by simple SOS models . However, a closer look at our MC configurations reveals that the situation is different, and richer. Following , we studied the lateral positional ordering and diffusion coefficients of different layers at the two temperatures by examining pair correlation functions, in particular at $`t_2=0.86`$. For this purpose we carried out two separate canonical molecular dynamics simulations without substrate (the diffusion coefficient is ill-defined in a grand canonical simulation), one with an integer number of layers and another with a half-integer one. They were meant to approximate free stable grand canonical surfaces below and above $`t_{RL}=0.83`$, and were thus chosen with the same coverages ($`0.5`$ and $`1`$) as the grand canonical states A and B described earlier. Fig. 3 shows a selection of lateral pair correlation functions $`g(r)`$ calculated at $`t_2`$. The presence of shell-related peaks/shoulders indicates a solid layer, their absence a liquid layer. We see that the top layer is always liquid, but that it solidifies right after being covered by the next half layer. Consider for instance state A in Fig. 2. The upper layer (#6) has 10% of adatoms (a 2D gas), the lower layer (#4) has 20% of vacancies and is solid, but the middle half-filled layer (#5) is liquid. As coverage increases, layer #5 gets denser, but remains liquid until jump B (see Fig. 3). After that, at C, the former adatoms condense into another fluid half layer #6, while at the same time layer #5 solidifies, leading to a surface identical to the starting one except for one extra layer. This picture is close to that suggested by heat capacity studies . It is also similar to that described by canonical simulations , differing however in two crucial respects, namely (i) the lack of a solid-fluid-solid evolution for any layer and, more importantly, (ii) the half occupancy of the fluid layer.
The latter is the hallmark of the DOF state, which here therefore emerges as the likeliest explanation for RL.
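The layer-resolved analysis above rests on the standard estimator for the lateral pair correlation function. A compact version for a 2D periodic slab is sketched below (our own illustration, not the authors' code); the positions would come from the stored MC or MD configurations of each layer.

```python
import numpy as np

def g_of_r(pos, box, nbins=60, rmax=None):
    """Radial distribution function g(r) for 2D positions in a periodic box.

    pos: (N, 2) array; box: (Lx, Ly). Uses the minimum-image convention;
    normalized so that an ideal gas gives g(r) -> 1.
    """
    pos = np.asarray(pos, float)
    box = np.asarray(box, float)
    n = len(pos)
    if rmax is None:
        rmax = box.min() / 2.0
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                  # minimum image
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=nbins, range=(0.0, rmax))
    rho = n / box.prod()
    rc = 0.5 * (edges[1:] + edges[:-1])
    dr = edges[1] - edges[0]
    shell = 2.0 * np.pi * rc * dr                 # 2D shell area
    return rc, hist / (0.5 * n * rho * shell)

# Sanity check on a 10 x 10 square lattice with unit spacing:
xy = np.array([(i, j) for i in range(10) for j in range(10)], float)
rc, g = g_of_r(xy, box=(10.0, 10.0))
print("first peak near r =", rc[np.argmax(g)])    # nearest-neighbor distance
```

For a solid layer this estimator shows the sharp neighbor shells quoted in the text; for a liquid layer the shells beyond the first are washed out.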
In order to further elucidate the connection between surface melting and PR, we examined the lateral diffusion coefficient layer by layer. Mean square displacements were averaged for all particles spending time within three vertical windows corresponding to the adatom layer, first layer (surface), and second layer. Not surprisingly, adatoms are very diffusive (gas-like), while buried layers are solid, and poorly diffusive. The top layer diffusivity was always sizable, but larger by about a factor of two in the half-covered case, where it is similar to the surface mass transport coefficient near $`T_m`$ (Fig. 4). This confirms that a height jump by about half a layer across PR also takes the top layer from solid to liquid, in agreement with the GCMC analysis. Thus the sudden formation of the liquid half layer at PR represents the threshold for the first appearance of the liquid, which will subsequently extend to lower layers and grow critically to a thicker liquid film as temperature is further raised to approach $`T_m`$.
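Lateral diffusion coefficients of this kind are extracted from the slope of the mean square displacement, D = MSD(t)/(4t) in two dimensions. The sketch below (our own illustration; it substitutes an unbiased 2D lattice random walk for the MD trajectories) shows the estimator.

```python
import random

def msd_2d(trajectories):
    """Mean square displacement from the initial position, per time step.

    trajectories: list over particles of lists of (x, y) per step.
    Returns msd[t] averaged over particles.
    """
    nsteps = len(trajectories[0])
    msd = [0.0] * nsteps
    for traj in trajectories:
        x0, y0 = traj[0]
        for t, (x, y) in enumerate(traj):
            msd[t] += (x - x0) ** 2 + (y - y0) ** 2
    return [m / len(trajectories) for m in msd]

def random_walks(nwalkers=400, nsteps=200, seed=3):
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    walks = []
    for _ in range(nwalkers):
        x = y = 0
        traj = [(0, 0)]
        for _ in range(nsteps - 1):
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
            traj.append((x, y))
        walks.append(traj)
    return walks

msd = msd_2d(random_walks())
# For a unit-step lattice walk <r^2(t)> = t, so D = msd[t] / (4t) -> 1/4
print("D estimate:", msd[-1] / (4 * (len(msd) - 1)))
```

In the simulation the same average is simply restricted to particles residing in a given vertical window, which is how the layer-resolved D of Fig. 4 is obtained.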
Summarizing, our results explain the experimental evidence of RL in the adsorption of rare gases on a solid substrate. Layer-by-layer occupancies, together with direct insight into surface processes not accessible in experiments, confirm the interpretation of the reentrant layering transition in terms of preroughening. The DOF state consists of a half monolayer of barely percolating 2D liquid islands floating on top of a solid substrate. We found that the onset of premelting in the top layer coincides with a PR transition, where coverage jumps from full to partial. These two surface phenomena, apparently very different, appear here to be intimately connected. A lattice model addressing this connection has been published separately .
It is a pleasure to thank S. Prestipino, E. Jagla, and G. Santoro for many constructive discussions. We acknowledge support from INFM, and from MURST. Work at SISSA by F. C. was under European Commission sponsorship, contract ERBCHBGCT940636. |
no-problem/0001/hep-th0001143.html | ar5iv | text | # D-Sphalerons and the Topology of String Configuration Space
hep-th/0001143 CALT-68-2254 CITUSC/00-003 EFI-99-49 NSF-ITP-00-02
D-Sphalerons
and the Topology of String Configuration Space
Jeffrey A. Harvey<sup>1</sup>, Petr Hořava<sup>2</sup> and Per Kraus<sup>1,3</sup>
<sup>1</sup>Enrico Fermi Institute, University of Chicago, Chicago, IL 60637, USA
harvey, pkraus@theory.uchicago.edu
<sup>2</sup>CIT-USC Center for Theoretical Physics
California Institute of Technology, Pasadena, CA 91125, USA
horava@theory.caltech.edu
<sup>3</sup>Institute for Theoretical Physics, University of California
Santa Barbara, CA 93106, USA
We show that unstable D-branes play the role of "D-sphalerons" in string theory. Their existence implies that the configuration space of Type II string theory has a complicated homotopy structure, similar to that of an infinite Grassmannian. In particular, the configuration space of Type IIA (IIB) string theory on $`ℝ^{10}`$ has non-trivial homotopy groups $`\pi _k`$ for all $`k`$ even (odd).
January 2000
1. Introduction
Most of the recent progress in non-perturbative string theory has been facilitated by the powerful constraints imposed by supersymmetry. It seems vital, for both theoretical and phenomenological reasons, to extend our understanding to configurations where some of these constraints have been relaxed. Non-supersymmetric string vacua are of course notoriously difficult to study. It seems reasonable, therefore, to first analyze non-supersymmetric excitations in the supersymmetric vacua of the theory.
During the last year it has been realized that the spectrum in some vacua of string theory contains not only BPS D-branes, but also stable non-BPS D-branes \[1,,2,,3\]. A very useful perspective for the study of these non-supersymmetric D-brane configurations has been developed by Sen . In this framework, one views stable D-branes as bound states on the worldvolume of an unstable system composed of BPS D-branes and anti-D-branes with a higher worldvolume dimension. This construction has been further generalized \[3,,4\], leading to a systematic framework which implies that D-brane charges on a compactification manifold $`X`$ are classified by a generalized cohomology theory of $`X`$ known as K-theory as suggested by previous work on Ramond-Ramond charges .
A crucial role in this framework is played by unstable D$`p`$-brane systems. In dimensions where RR-charged D$`p`$-branes exist, one can construct unstable systems by considering D$`p`$-D$`\overline{p}`$ pairs. For the "wrong" values of $`p`$, where stable RR-charged D$`p`$-branes do not exist, it was realized \[6,,4\] that one can still construct an unstable D$`p`$-brane.<sup>1</sup> Such unstable D-branes (in particular, the spacetime-filling D9-branes) are indeed crucial in the systematic classification of D-brane charges in Type IIA theory and its relation to $`K^1(X)`$ groups of spacetime . Thus, in addition to the RR-charged BPS D-branes, there are unstable D$`p`$-branes for $`p`$ odd in Type IIA theory, and $`p`$ even in Type IIB theory.
Such unstable D-branes can be directly constructed in the boundary-state formalism. Consider Type IIA or IIB theory in $`ℝ^{10}`$.<sup>2</sup> In this paper, we focus on Type IIA and Type IIB string theory in $`ℝ^{10}`$, making only occasional comments about orientifolds and compactifications. The boundary state describing a D$`p`$-brane can have a contribution from the closed string NS-NS sector and RR sector. For each $`p`$, there is a unique boundary state in the NS-NS sector that implements the correct boundary conditions, and survives the corresponding GSO projection. The RR sector, on the other hand, contains a unique boundary state only for those D$`p`$-branes that can couple to an RR form $`C_{p+1}`$; for all other values of $`p`$, the GSO projection kills all possible boundary states in the RR sector.
The supersymmetric RR-charged D$`p`$-brane is described by
$$|\mathrm{D}p_{\mathrm{BPS}}\rangle =\frac{1}{\sqrt{2}}\left(|B_{\mathrm{NS}\mathrm{NS}}\rangle \pm |B_{\mathrm{R}\mathrm{R}}\rangle \right),$$
where the sign in front of the RR component of the boundary state determines the RR charge of the brane.
In contrast, the boundary state describing the D$`p`$-brane for the "wrong" values of $`p`$ — where $`C_{p+1}`$ is absent — contains only an NS-NS component,
$$|\mathrm{D}p\rangle =|B_{\mathrm{NS}\mathrm{NS}}\rangle .$$
The relative factor of $`\sqrt{2}`$ and the consistency of this set of boundary states follow from constructing the cylinder amplitudes with all possible pairs of boundary states and imposing the condition that the cylinder amplitude has a consistent open string interpretation \[7\].
The spectrum of open strings ending on unstable D-branes is non-supersymmetric and contains a tachyon. To see this, note that the absence of the RR sector in the boundary state implies the absence of the GSO projection in the open-string loop channel, and as a result, the open-string spectrum contains both the lowest tachyonic mode $`T`$ and the gauge field $`A_M`$. For $`N`$ coincident unstable D-branes in Type II theory, the gauge symmetry is $`U(N)`$, and the tachyon $`T`$ is in the adjoint representation of $`U(N)`$.
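Schematically (our paraphrase of the standard boundary-state argument, not a derivation from this paper), the self-overlap of the purely NS-NS boundary state transforms into an open-string trace with no GSO insertion,

```latex
Z \;=\; \int_0^\infty dl\,\langle \mathrm{D}p\,|\,e^{-lH_c}\,|\,\mathrm{D}p\rangle
\;\longrightarrow\; \int_0^\infty \frac{dt}{t}\,
\bigl(\mathrm{Tr}_{\mathrm{NS}}-\mathrm{Tr}_{\mathrm{R}}\bigr)\,e^{-2\pi tH_o}\,,
```

with no projector $\frac{1}{2}\bigl(1+(-1)^F\bigr)$ in either trace; the unprojected NS ground state is then the real tachyon $T$, of mass $m^2=-1/(2\alpha ')$, accompanying the gauge field $A_M$.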
The unstable D$`p`$-branes with worldvolumes of the "wrong" dimension represent legitimate classical solutions of open string theory, despite the fact that they are non-supersymmetric, unstable, and carry no charge. And, as for BPS D-branes, one expects these solutions to be a good approximation to solutions of the full closed and open string theory for small string coupling. The present paper is devoted to clarifying the physical interpretation of the unstable D-branes in string theory.
To whet the reader's appetite we offer the following observation. In Type IIA theory, unstable D$`p`$-branes exist for $`p`$ odd and, in particular, there is a Type IIA D-instanton. This D-instanton represents a Euclidean solution of the theory with a fluctuation spectrum containing one negative eigenvalue. Instantons with exactly one negative eigenvalue often represent a "bounce," or false vacuum decay; the square root of the fluctuation determinant is imaginary due to the single negative eigenvalue, and the imaginary part of the vacuum amplitude gives the vacuum decay rate. For a review see . In higher-dimensional theories with gravity, such vacuum decay often has disastrous consequences, leading to a complete annihilation of spacetime that starts by nucleation of a hole which then expands with a speed approaching the speed of light .
These observations lead to an intriguing question: does the existence of a D-instanton with one negative eigenvalue in Type IIA theory signal that its supersymmetric vacuum is false, and therefore unstable to decay? Before jumping to conclusions, declaring that the supersymmetric Type IIA vacuum (and, by duality, perhaps all other supersymmetric vacua) is unstable, and interpreting this as a string phenomenologist's dream, one needs to carefully examine whether the Type IIA D-instanton represents a bounce for false vacuum decay.
In an attempt to answer this question, we will clarify the role of all the unstable D$`p`$-branes. In particular, we will see that the Type IIA D-instanton does not represent a bounce signaling an instability of the supersymmetric Type IIA vacuum. Instead, the D-instanton is tied to a completely different physical phenomenon, also with a precedent in field theory. We will find that the unstable D-branes in superstring theory are intimately related to the surprisingly complicated topological structure of the configuration space of string theory. In field theory, classical solutions with a negative mode that are mandated by non-trivial homotopy of the configuration space are called sphalerons. The main observation of this paper is that the unstable D-branes are precise string-theoretical analogs of the sphalerons of field theory; we hope to convince the reader that it makes sense to call them D-sphalerons. Thus, the spectrum of D-branes in Type IIA theory consists of D$`(2p+1)`$-brane sphalerons ("D$`(2p+1)`$-sphalerons" for short) and BPS D$`2p`$-branes, while the Type IIB spectrum contains D$`2p`$-sphalerons and BPS D$`(2p+1)`$-branes. We will see that the existence of the D-sphalerons follows from the fact that the configuration space of IIA (IIB) string theory in $`ℝ^{10}`$ has nontrivial homotopy groups $`\pi _k`$ for all $`k`$ even (odd), and is thus homotopically at least as complicated as an infinite Grassmannian (the infinite unitary group).
2. Unstable D-branes
In our discussion, it will be convenient to use interchangeably several different representations of D$`p`$-branes, which we first review.
(i) The traditional representation, as a hypersurface $`\mathrm{\Sigma }_{p+1}`$ in spacetime where fundamental strings can end. In string perturbation theory, D-brane dynamics is described by open strings ending on the brane; the boundary conditions are summarized by the closed-string boundary states (1.1) and (1.1). The unstable D-brane carries no charge, and the only long-distance fields associated with it are the dilaton and graviton; the theory is in its supersymmetric vacuum in the regions far away from $`\mathrm{\Sigma }_{p+1}`$.
(ii) The topological defect representation, as a bound state extended along a submanifold $`\mathrm{\Sigma }_{p+1}`$ inside the worldvolume $`\mathrm{\Sigma }_{q+1}`$ of an unstable D$`q`$-brane system with $`q>p`$.
(iii) The spacetime representation in terms of a solution to the closed string equations of motion. This is well understood for BPS D-branes \[10,,11\] and will be partially developed in what follows for non-BPS D-branes.
There are two unstable D-brane systems relevant for the construction in point (ii). When $`q`$ is such that RR-charged D$`q`$-branes exist, the unstable system is given by $`N`$ D$`q`$-D$`\overline{q}`$ pairs. For the complementary values of $`q`$, the unstable system is simply the set of $`2N`$ unstable D$`q`$-branes of (1.1).
In both cases, the worldvolume theory on $`\mathrm{\Sigma }_{q+1}`$ contains a tachyon field $`T`$ which behaves as a Higgs field, rolling down to the minimum of its potential and Higgsing the gauge symmetry on $`\mathrm{\Sigma }_{q+1}`$. The structure of the gauge symmetries and the symmetry breaking patterns are summarized in the following table:
unstable system: $`N`$ D$`q`$-D$`\overline{q}`$ pairs; gauge symmetry: $`U(N)\times U(N)`$; tachyon: $`(N,\overline{N})`$; vacuum manifold: $`U(N)`$
unstable system: $`2N`$ unstable D$`q`$'s; gauge symmetry: $`U(2N)`$; tachyon: adjoint; vacuum manifold: $`U(2N)/U(N)\times U(N)`$
Notice that in the case of the unstable D$`q`$-branes, the correct spectrum of stable D$`p`$-branes as defects in flat spacetime is reproduced by the symmetric Higgs pattern, with $`U(2N)`$ broken to $`U(N)\times U(N)`$. The role of configurations with an odd number of unstable D$`q`$-branes, as well as asymmetric Higgs patterns, will be discussed in section 6.
The Higgs mechanism, whereby the tachyon uniformly condenses to the minimum of its potential, can be thought of as the worldvolume representation of how the unstable brane system decays to the vacuum. This interpretation of the Higgs mechanism leaves one obvious puzzle: the existence of the residual gauge symmetry, which should be absent in the true supersymmetric vacuum of the theory. Various attempts to resolve this puzzle have been proposed in the literature \[12,13\]. In this paper, we will not address this issue, and will simply assume that the unstable D-brane system with the tachyon uniformly condensed to the minimum of its potential is nothing but a somewhat awkward representation of the supersymmetric vacuum of the theory.<sup>3</sup> Strong evidence supporting this assumption has been recently obtained, with the use of string field theory, by Sen and Zwiebach in the closely related case of an unstable D-brane in the bosonic string.
The unstable D-brane systems (2.1) support a host of topological defects, which are interpreted as lower-dimensional D-branes. Stable defects are classified by non-trivial elements in the homotopy groups of the vacuum manifolds,
$$\begin{array}{cc}\hfill \pi _{2k+1}(U(N))& =\mathbf{Z},\hfill \\ \hfill \pi _{2k}(U(N))& =0,\hfill \end{array}$$
and
$$\begin{array}{cc}\hfill \pi _{2k+1}(U(2N)/U(N)\times U(N))& =0,\hfill \\ \hfill \pi _{2k}(U(2N)/U(N)\times U(N))& =\mathbf{Z}.\hfill \end{array}$$
(These formulas hold in the stable regime of $`N`$ sufficiently large for fixed $`k`$.) These homotopy groups are directly related to K-theory groups of spacetime \[4,3\]; therefore, D-brane charges are naturally described in K-theory.
First, consider a BPS D$`p`$-brane. This brane can be represented as a codimension $`p^{\prime }`$ defect along $`x^i=0`$, $`i=1,\dots ,p^{\prime }`$, in an unstable system of D$`q`$-branes with $`q=p+p^{\prime }`$. For a codimension $`p^{\prime }`$ defect, the tachyon field $`T`$ maps the sphere $`S^{p^{\prime }-1}`$ at infinity in the transverse dimensions to the vacuum manifold $`\mathcal{V}`$, thus defining an element of $`\pi _{p^{\prime }-1}(\mathcal{V})`$. In even codimension $`p^{\prime }=2k`$, the unstable system consists of $`N=2^{k-1}`$ D$`q`$-D$`\overline{q}`$ pairs, and in odd codimension $`p^{\prime }=2k-1`$, of $`N=2^{k-1}`$ unstable D$`q`$-branes. The corresponding tachyon condensate is given explicitly by
$$T=f(r)\mathrm{\Gamma }_ix^i,$$
where $`\mathrm{\Gamma }_i`$ are the gamma matrices of the rotation group $`SO(p^{\prime })`$ in the transverse dimensions $`x^i`$.<sup>4</sup> The convergence factor $`f(r)`$ only depends on the radial coordinate, and asymptotes to $`T_0/r`$ as $`r\to \infty `$, with $`T_0`$ one of the eigenvalues of $`T`$ at the minimum of its potential; $`f(0)=1`$. This convergence factor will be systematically omitted throughout the paper. The gauge field $`A_M`$ on the D$`q`$-brane system is also non-zero, such that the energy of the whole configuration is finite.
For even $`p^{\prime }`$, we will have occasion to use two distinct definitions of the gamma matrices. Let $`\mathcal{S}_+`$ and $`\mathcal{S}_{-}`$ be the two $`2^{n-1}`$-dimensional irreducible spinor representations of $`SO(2n)`$. We can either define $`\mathrm{\Gamma }_i`$ to be $`2^{n-1}\times 2^{n-1}`$ matrices mapping $`\mathcal{S}_+`$ to $`\mathcal{S}_{-}`$, or to be $`2^n\times 2^n`$ matrices mapping $`\mathcal{S}_+\oplus \mathcal{S}_{-}`$ to itself. Which definition is being used will always be implied by the stated dimensionality of the matrices.
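These conventions are easy to check numerically. The sketch below (ours, not the paper's; it uses the standard Pauli tensor-product construction of Hermitian gamma matrices) verifies the Clifford algebra for $`SO(9)`$ and confirms that the hedgehog $`T=\mathrm{\Gamma }_ix^i`$ squares to $`|x|^2`$ times the identity, so that $`T/|x|`$ has eigenvalues $`\pm 1`$ and lies on the vacuum manifold at infinity.

```python
import numpy as np

# Pauli matrices and the 2x2 identity
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
e2 = np.eye(2, dtype=complex)

def kron_list(mats):
    out = np.eye(1, dtype=complex)
    for m in mats:
        out = np.kron(out, m)
    return out

def so_gammas(d):
    """Hermitian gamma matrices of SO(d), of size 2^n x 2^n with n = d//2."""
    n = d // 2
    g = []
    for a in range(n):
        pre, post = [s3] * a, [e2] * (n - a - 1)
        g.append(kron_list(pre + [s1] + post))
        g.append(kron_list(pre + [s2] + post))
    if d % 2 == 1:
        g.append(kron_list([s3] * n))  # extra gamma for odd d
    return g

# SO(9): 16x16 matrices, as for the codimension-9 defects in the text
g = so_gammas(9)
eye = np.eye(16)
for i in range(9):
    for j in range(9):
        acomm = g[i] @ g[j] + g[j] @ g[i]
        assert np.allclose(acomm, 2.0 * (i == j) * eye)

# Hedgehog tachyon: T = Gamma_i x^i obeys T^2 = |x|^2 * 1, so T/|x| is a
# unit-eigenvalue configuration on the sphere at infinity.
rng = np.random.default_rng(0)
x = rng.normal(size=9)
T = sum(x[i] * g[i] for i in range(9))
assert np.allclose(T @ T, np.dot(x, x) * eye)
```

The same builder with `d = 10` gives the $`32\times 32`$ matrices used later for the Type IIA D-instanton.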
Now, consider an unstable D$`p`$-brane, described by the boundary state (1.1). The tachyon field now defines a homotopically trivial map to the vacuum manifold, which reflects the instability of the D$`p`$-brane. However, we still expect a solution whose core carries a finite energy density along the worldvolume $`\mathrm{\Sigma }_{p+1}`$. We will argue that this brane is also described by the same formula (2.1), as a defect of codimension $`p^{\prime }`$ in a corresponding unstable brane system, even though there is no direct topological argument as there is for BPS D-branes.
Consider as an example a D$`p`$-brane with $`p`$ even. In IIA theory this is a stable BPS brane and may be represented in various ways as a topological defect in higher dimensional unstable brane systems. The simplest is as a kink in the real tachyon field of the unstable D$`(p+1)`$-brane of IIA. Now we compare this to the unstable D$`p`$-brane of IIB, taking as our starting point the unstable D$`(p+1)`$-D$`\overline{(p+1)}`$ system with a complex tachyon and a "Mexican hat" potential. If we can establish that a cross-section of this potential gives the double-well potential of the IIA D$`p`$ system, then it is clear that the previous kink solution is again a solution, but now with an instability due to the possibility of pulling the kink off the top of the potential.
That the potential indeed has this property follows from the rules for assigning Chan-Paton factors. Open strings ending on an unstable D$`p`$-brane are assigned $`2\times 2`$ Chan-Paton matrices. The $`U(1)`$ gauge field is assigned to $`1`$ and the real tachyon is assigned to $`\sigma _1`$. Open strings of the D$`p`$-D$`\overline{p}`$ system also have $`2\times 2`$ Chan-Paton matrices: the $`U(1)\times U(1)`$ gauge fields are assigned to $`1`$ and $`\sigma _3`$, and the complex tachyon is assigned to $`\sigma _1`$ and $`\sigma _2`$. Then at the level of disk diagrams, the action for the tachyon of the unstable D$`p`$-brane is the same as for the $`\sigma _1`$ component of the D$`p`$-D$`\overline{p}`$ tachyon.
This line of argument actually establishes that any solution on the unstable D$`p`$-brane yields a solution on the D$`p`$-D$`\overline{p}`$ system, once the real tachyon is mapped to the $`\sigma _1`$ component of the complex tachyon, and the $`U(1)`$ gauge field is mapped to the $`1`$ component of the $`U(1)\times U(1)`$ gauge field. The remaining fields associated to $`\sigma _2`$, $`\sigma _3`$ will only appear at least quadratically in fluctuations, since the trace over Chan-Paton matrices eliminates any linear terms. The quadratic fluctuations may destabilize a solution constructed in this fashion, but won't change the fact that it is a solution. One can also translate solutions in the opposite direction; a similar argument establishes that solutions on two coincident unstable D$`p`$-branes can be mapped to solutions on a D$`p`$-D$`\overline{p}`$ system. Again, the additional fields on the D$`p`$-D$`\overline{p}`$ system appear at least quadratically in fluctuations, and so at worst destabilize the solution. We thus conclude that the unstable D$`p`$-brane is also described by (2.1) as a defect on the worldvolume of an unstable D$`q`$-brane system, despite the fact that this configuration is topologically unstable.
In the following we will make frequent use of the ability to translate stable and unstable solutions in the above manner, although we will not know the explicit solutions beyond their asymptotic behavior.
2.1. Type IIA D-instanton
Equipped with these different representations of unstable D-branes, let us return to the Type IIA D-instanton. For a classical Euclidean solution with one negative eigenvalue to represent a bounce, it has to satisfy several conditions.
Fig. 1: False vacuum decay and the Euclidean instanton (the โbounceโ) with one negative eigenvalue that dominates the path integral.
First of all, it has to be asymptotic to the false vacuum in all directions. The fate of the false vacuum after the tunneling can be read off from the bounce, by identifying its turning point and evolving the configuration classically in the Minkowski signature. For a solution to admit such a procedure, it has to have a reflection symmetry along a codimension-one surface, which we can identify with the surface of constant Euclidean time $`\tau _E=0`$. Moreover, we must be able to Wick-rotate the solution to the Minkowski regime. The turning point is defined to be a point on the trajectory where the kinetic energy vanishes, so by (Euclidean) energy conservation the potential energy at the turning point equals the potential energy of the false vacuum. Other points on the bounce trajectory have higher potential energy; they are under the barrier. If $`\mathrm{\Phi }(\stackrel{}{x},\tau _E)`$ represents the bounce, and $`\mathrm{\Phi }(\stackrel{}{x},-\tau _E)=\mathrm{\Phi }(\stackrel{}{x},\tau _E)`$, then the Euclidean kinetic energy vanishes at the turning point $`\tau _E=0`$.
Consider the Type IIA D-instanton, first in the supergravity approximation. The supergravity solution that represents our D-instanton should respect the $`SO(10)`$ rotation symmetry, and be asymptotic to the supersymmetric vacuum of Type IIA theory. The only fields that can be excited are the metric and the dilaton; unlike in Type IIB theory, there is no "axion" that could be excited. It is useful to interpret Type IIA theory as M-theory on $`S^1`$. The Type IIA dilaton is related to the 11-11 component of the eleven-dimensional metric. Therefore, the only field excited in the D-instanton background is the eleven-dimensional metric. The equations of motion for this metric are just the vacuum Einstein equations, constrained by the requirement of $`SO(10)`$ rotation symmetry and $`U(1)`$ translation symmetry along the eleventh dimension. By the eleven-dimensional analog of the Birkhoff theorem, the solution, at least away from the D-instanton core, has to be given by the Euclidean Schwarzschild metric, which in appropriate coordinates takes the form
$$ds^2=\left(1-\frac{M}{r^8}\right)(dx^{11})^2+\frac{dr^2}{1-M/r^8}+r^2d\mathrm{\Omega }_9^2.$$
Furthermore, it is clear that this solution of M-theory represents a D-brane in the sense of a possible end point for strings. To see this, note that a membrane wrapped on the "cigar" of Euclidean Schwarzschild represents a fundamental string far from the core, and this string clearly ends at the core of the solution.
In spite of all this circumstantial evidence, the usual smooth Euclidean Schwarzschild solution does not correctly represent the IIA D-instanton. In the solution (2.1) there are two parameters: the "mass" parameter $`M`$, and the value of the radius of the eleventh dimension. This is as it should be, because we expect two parameters in the IIA D-instanton system in the supergravity approximation: the string coupling constant at infinity, and the number $`N`$ of D-instantons. Supergravity cannot distinguish the discreteness of the second quantum number $`N`$, and sees it as a smooth parameter $`M`$. The string coupling is related in the usual way to the radius of the eleventh dimension $`x^{11}`$, and can be adjusted arbitrarily. This of course leaves a conical singularity at $`r=r_0\equiv M^{1/8}`$, corresponding to the location of the D-instanton(s) at $`r=r_0`$.
As we will now explain, such a singularity is inevitably present in the supergravity solution describing the D-instanton. If we impose the additional requirement of smoothness at $`r=r_0`$ on (2.1), we obtain the Euclidean Schwarzschild black hole, with the radius $`R_{11}`$ of $`S^1`$ uniquely determined by the parameter $`M`$, $`R_{11}=M^{1/8}/4`$. So far we have ignored the presence of fermions in the theory. The D-instanton is a non-supersymmetric solution in a supersymmetric theory, asymptotic at infinity to the supersymmetric vacuum. Therefore, the spin structure it carries has to preserve supersymmetry asymptotically at infinity. In the eleven-dimensional representation, the spin structure on the eleven-manifold (2.1) describing the D-instanton has to correspond to periodic boundary conditions on the fermions around the $`S^1`$. In contrast, the smoothness of the Euclidean black hole implies that it can carry only one spin structure, with fermions antiperiodic around the $`S^1`$. This in turn implies that the metric of the D-instanton always has to have a singularity at $`r=r_0`$, in order to carry the correct, that is periodic, spin structure.
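The relation $`R_{11}=M^{1/8}/4`$ follows from the standard near-tip expansion: with $`f(r)=1-M/r^8`$, the $`(x^{11},r)`$ cigar closes off without a conical singularity only if $`x^{11}`$ has period $`4\pi /f^{\prime }(r_0)`$. A quick symbolic check of this textbook computation (our sketch, not code from the paper):

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
f = 1 - M / r**8                 # the g_{11,11} component of the metric
r0 = M**sp.Rational(1, 8)        # tip of the cigar: f(r0) = 0
fp = sp.diff(f, r).subs(r, r0)   # f'(r0)

# Near r0 the metric is ds^2 ~ drho^2 + (f'(r0)/2)^2 rho^2 dx11^2 in terms
# of the proper radius rho, so smoothness requires an x^11 period of
# 4*pi/f'(r0), i.e. a radius R11 = 2/f'(r0).
R11 = sp.simplify(2 / fp)
assert sp.simplify(R11 - M**sp.Rational(1, 8) / 4) == 0
```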
As we have just seen, the singularity of the metric at the location of the D-instanton cannot be resolved by supergravity; in particular, one cannot count the number of negative modes of the solution in the supergravity approximation.
Fig. 2: Two supergravity solutions: (a) The Euclidean Schwarzschild black hole in eleven dimensions; (b) the Type IIA D-instanton.
On the other hand, the Euclidean Schwarzschild is a smooth solution with only one free parameter, the value of the string coupling at infinity, and will have exactly one negative mode. Since its spin structure is that of antiperiodic fermions around $`S^1`$ at infinity, the Euclidean Schwarzschild represents a bounce relevant for the fate of the vacuum in a large class of compactifications, related to M-theory (or string theory) on $`S^1`$ with the non-supersymmetric spin structure \[15,16\], and also describes black hole nucleation in M-theory at finite temperature.
The singularity found at the tip of the supergravity solution is resolved in string theory by the presence of the D-branes. In this representation, one can count the number of negative modes of this configuration. $`N`$ coincident D-instantons will have $`N^2`$ negative modes, from the open-string tachyon in the adjoint of $`U(N)`$. Clearly, it is only the single-instanton configuration that can in principle represent a bounce. Notice that $`N=1`$ will correspond to a small value of $`M`$, and therefore the supergravity approximation will be invalid for this system.
We will now analyze the turning point configuration for the Type IIA D-instanton by using the representation of the IIA D-instanton as a topological defect of the familiar form
$$T=\mathrm{\Gamma }_ix^i,$$
on 32 unstable D9-branes. Here $`\mathrm{\Gamma }_i`$ are the $`32\times 32`$ gamma matrices of $`SO(10)`$.
Consider a $`9+1`$ split of coordinates, $`x^i=(\stackrel{}{x},x^{10})`$. Using two equivalent representations of the $`\mathrm{\Gamma }`$ matrices of $`SO(10)`$, we can write (2.1) in two forms leading to two different physical interpretations of the D-instanton (2.1). First, (2.1) can be written as
$$T=\left(\begin{array}{cc}x^{10}\mathrm{\hspace{0.17em}1}_{16}& \stackrel{}{\mathrm{\Gamma }}\stackrel{}{x}\\ & \\ \stackrel{}{\mathrm{\Gamma }}\stackrel{}{x}& -x^{10}\mathrm{\hspace{0.17em}1}_{16}\end{array}\right),$$
where $`\stackrel{}{\mathrm{\Gamma }}`$ are the gamma matrices of $`SO(9)`$. This corresponds to first forming sixteen D8-branes and sixteen D$`\overline{8}`$-branes as kinks localized at $`x^{10}=0`$ on 32 D9-branes, represented in (2.1) by the terms along the diagonal. The D-instanton then appears as the bound state $`\stackrel{}{\mathrm{\Gamma }}\stackrel{}{x}`$ of sixteen D8-D$`\overline{8}`$ pairs.
Alternatively, one can write (2.1) as
$$T=\left(\begin{array}{cc}\stackrel{}{\mathrm{\Gamma }}\stackrel{}{x}& x^{10}\mathrm{\hspace{0.17em}1}_{16}\\ & \\ x^{10}\mathrm{\hspace{0.17em}1}_{16}& -\stackrel{}{\mathrm{\Gamma }}\stackrel{}{x}\end{array}\right).$$
In this picture, we use the D9-branes to first prepare a D0-D$`\overline{0}`$ pair (represented by the diagonal terms in (2.1)), with their worldline along $`x^{10}`$, and then form a kink along $`x^{10}`$ on the worldline of the D0-D$`\overline{0}`$ system. It is this latter representation (2.1) of (2.1) that is useful for determining the physical meaning of the โhalfway pointโ of the D-instanton. Setting $`x^{10}=0`$ in (2.1) leaves the configuration consisting of a D0-D$`\overline{0}`$ pair at Euclidean time $`x^{10}=0`$.
The D-instanton does indeed possess a reflection symmetry in $`x^{10}`$, which in the form (2.1) is given by $`T(\stackrel{}{x},-x^{10})=\sigma _3T(\stackrel{}{x},x^{10})\sigma _3^{-1}`$. However, because of the gauge transformation which accompanies the reflection, this symmetry does not imply vanishing of the kinetic energy at the symmetry point $`x^{10}=0`$, which therefore is not a proper turning point.<sup>5</sup> Strictly speaking, we should study the vanishing of the gauge-covariant kinetic energy, but turning on a non-zero gauge field to cancel the time derivative of $`T`$ will simply generate a non-zero electric field, leading to non-zero gauge kinetic energy. Alternatively, we can use the decomposition (2.1) of the D-instanton to see that an energy condition is violated at the halfway point, so the instanton cannot represent a legitimate bounce. We have seen that the halfway point consists of a D0-D$`\overline{0}`$ pair on top of the supersymmetric vacuum; however, such a configuration carries positive energy with respect to the supersymmetric vacuum, and its nucleation is forbidden by energy conservation. We conclude that the Type IIA D-instanton does not lead to false vacuum decay of the supersymmetric Type IIA vacuum.
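The algebra behind these block decompositions is easy to verify numerically. In the sketch below (an illustration of ours, written with the relative minus sign between the diagonal blocks that the Clifford algebra requires; `so9_gammas` is a helper we define via the standard Pauli tensor-product construction), the block tachyon squares to $`(|\stackrel{}{x}|^2+(x^{10})^2)`$ times the identity, and flipping $`x^{10}`$ acts by conjugation with $`\sigma _3\otimes 1_{16}`$:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
e2 = np.eye(2, dtype=complex)

def so9_gammas():
    """16x16 Hermitian SO(9) gamma matrices from Pauli tensor products."""
    def kron(mats):
        out = np.eye(1, dtype=complex)
        for m in mats:
            out = np.kron(out, m)
        return out
    g = []
    for a in range(4):
        pre, post = [s3] * a, [e2] * (3 - a)
        g.append(kron(pre + [s1] + post))
        g.append(kron(pre + [s2] + post))
    g.append(kron([s3] * 4))
    return g

g = so9_gammas()
rng = np.random.default_rng(1)
x, x10 = rng.normal(size=9), 0.7
A = sum(x[i] * g[i] for i in range(9))   # Gamma.x, 16x16
I16 = np.eye(16)

def T(x10):
    # the D-instanton tachyon in the D0-Dbar0 decomposition
    return np.block([[A, x10 * I16], [x10 * I16, -A]])

# T^2 = (|x|^2 + (x^10)^2) * 1_32: a ten-dimensional Clifford hedgehog
assert np.allclose(T(x10) @ T(x10), (np.dot(x, x) + x10**2) * np.eye(32))

# Reflecting x^10 -> -x^10 is conjugation by sigma_3 (x) 1_16
S3 = np.kron(s3, I16)
assert np.allclose(T(-x10), S3 @ T(x10) @ np.linalg.inv(S3))
```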
3. Unstable D-Branes as D-Sphalerons
We have argued that the Type IIA D-instanton does not represent a bounce for false vacuum decay of the supersymmetric vacuum in Type IIA theory. In this section, we start collecting evidence leading to a different physical interpretation of all the unstable D-branes, and the Type IIA D-instanton in particular.
3.1. Sphalerons in field theory
In field theory, sphalerons are static solutions of the classical equations of motion with a single negative mode, whose existence is implied by a non-contractible loop in the configuration space of the theory.
Fig. 3: The topological argument tying the existence of a non-contractible loop in the configuration space with the existence of a static solution with one negative eigenvalue (the sphaleron). The vertical axis corresponds to the energy.
The argument \[19,18\] goes as follows. Consider Yang-Mills gauge theory with matter in $`D+1`$ spacetime dimensions. This theory has a configuration space $`\mathcal{C}`$, of all physically inequivalent, finite energy configurations on the $`D`$-dimensional space. Assume now that $`\mathcal{C}`$ contains a non-contractible loop, i.e., that $`\pi _1(\mathcal{C})\ne 0`$. If $`\mathcal{C}`$ is sufficiently compact, the situation can be visualized as in Figure 3. Choose an arbitrary non-contractible loop $`\ell `$ in $`\mathcal{C}`$ which begins and ends in the vacuum, and parameterize this loop by $`t\in [0,2\pi ]`$. Without any loss of generality, assume that the energy along $`\ell `$ grows monotonically as we move away from the vacuum, and reaches its absolute maximum $`E(\ell )`$ at the half-point $`t=\pi `$. Since $`\ell `$ is non-contractible, there is a loop $`\ell _0`$ homotopically equivalent to $`\ell `$ and such that
$$E_0\equiv E(\ell _0)\le E(\ell ^{\prime })$$
for all loops $`\ell ^{\prime }`$ that are homotopically equivalent to $`\ell `$. The point in the configuration space $`\mathcal{C}`$ that corresponds to $`t=\pi `$ along such a minimal loop $`\ell _0`$ is guaranteed to be a static, finite-energy solution of the theory, called the sphaleron. The spectrum of fluctuations around the sphaleron will contain precisely one negative eigenvalue, corresponding to the two directions in which the sphaleron can slide down to the true vacuum along the loop $`\ell _0`$ in the configuration space.
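A toy model makes the minimax logic concrete: take the configuration space itself to be a circle with energy $`V(\theta )=1-\mathrm{cos}\theta `$. The loop winding once around the circle is non-contractible, its energy maximum sits at $`\theta =\pi `$, and the curvature there is negative, which is the single unstable mode. (A numerical illustration of ours, not part of the paper's construction.)

```python
import numpy as np

# Energy along the non-contractible loop (the circle itself), parametrized
# by theta in [0, 2*pi]; the vacuum sits at theta = 0 ~ 2*pi.
V = lambda theta: 1.0 - np.cos(theta)

theta = np.linspace(0.0, 2.0 * np.pi, 100001)
sphaleron = theta[np.argmax(V(theta))]   # top of the barrier along the loop

# The "sphaleron" sits at theta = pi ...
assert abs(sphaleron - np.pi) < 1e-3
# ... and V''(theta) = cos(theta) is negative there: one unstable direction,
# along which the configuration can slide back to the vacuum either way.
assert np.cos(sphaleron) < 0
```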
A loop $`\ell `$ in the configuration space $`\mathcal{C}`$ represents a one-parameter set of $`D`$-dimensional configurations, and can be viewed as a $`(D+1)`$-dimensional Euclidean configuration with the Euclidean time given by the loop parameter $`t`$. The loop $`\ell `$ will be non-contractible if this $`(D+1)`$-dimensional configuration is topologically stable. At infinity in all Euclidean dimensions, this configuration is mapped to the vacuum manifold $`\mathcal{V}`$ of the theory. Thus, the non-contractible loop determines a non-trivial element of $`\pi _D(\mathcal{V})`$.
Notice that the sphaleron in a $`D`$-dimensional space carries no conserved topological quantum numbers, since it can be continuously connected to the vacuum. In other words, the sphaleron can be unwrapped at the $`S^{D-1}`$ at infinity, and corresponds to the trivial element in $`\pi _{D-1}(\mathcal{V})`$. However, it is the non-contractible loop in the configuration space that is supported by the non-trivial element in $`\pi _D(\mathcal{V})`$. In this sense, there is a certain similarity between instantons and the Euclidean configuration representing the non-contractible loop, as it is the same quantum number that is responsible for both. However, even though the topology is similar, the energetics is different. In the case of an instanton, we impose a single condition of finite action in $`D+1`$ dimensions, while in the case of a loop in configuration space, we impose the finite-energy condition in $`D`$ dimensions for each value of the loop parameter $`t`$.
We now illustrate this general construction with a few simple examples:
(1) The original sphaleron was found in a simplified version of the standard model, given by the $`SU(2)`$ theory with a doublet Higgs in $`3+1`$ dimensions. The vacuum manifold is a three-sphere $`S^3`$. A non-contractible loop exists, and corresponds to a point-like topological defect in four Euclidean dimensions that wraps non-trivially around the $`S^3`$, i.e., to the generator of $`\pi _3(S^3)=\mathbf{Z}`$. The sphaleron is an unstable static solution in the vacuum sector.
(2) As an even simpler example consider the Abelian Higgs model,
$$S=\int d^2x\left\{\frac{1}{4}F_{\mu \nu }^2+|(\partial _\mu -iA_\mu )\varphi |^2+\frac{1}{4}\lambda (|\varphi |^2-1)^2\right\}.$$
The configuration space of this theory also has a non-contractible loop, given by a point-like vortex in two Euclidean dimensions, stable because the Higgs field at infinity corresponds to the generator of $`\pi _1(S^1)`$. The corresponding sphaleron is given by
$$\varphi =\mathrm{tanh}\left[\frac{1}{2}\sqrt{\lambda }(x-x_0)\right]e^{i\beta (x)},\qquad A_0=0,\quad A_1=\partial _x\beta (x),$$
where $`x_0`$ is arbitrary and the only condition on $`\beta `$ is
$$\beta (\infty )-\beta (-\infty )=\pi .$$
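One can verify symbolically that this is a solution: in the gauge $`\beta =0`$ (any admissible $`\beta `$ is restored by the gauge transformation encoded in $`A_1`$), the profile reduces to the real kink, which solves the static equation $`\varphi ^{\prime \prime }=\frac{1}{2}\lambda \varphi (\varphi ^2-1)`$ following from the potential above. (A quick check of ours, not from the paper:)

```python
import sympy as sp

x = sp.symbols('x', real=True)
lam = sp.symbols('lambda', positive=True)

# Kink profile with x0 = 0 and beta = 0
phi = sp.tanh(sp.sqrt(lam) * x / 2)

# Static equation of motion from V = (lambda/4)(|phi|^2 - 1)^2:
#   phi'' = (lambda/2) * phi * (phi^2 - 1)
eom = sp.diff(phi, x, 2) - sp.Rational(1, 2) * lam * phi * (phi**2 - 1)
assert sp.simplify(eom) == 0
```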
3.2. Unstable D0-brane as a D-sphaleron in Type IIB theory
Consider Type IIB string theory on $`\mathbf{R}^{10}`$. This theory has an unstable D-particle with a single real tachyon. The system of $`N`$ such coincident D-particles has a tachyon in the adjoint of $`U(N)`$ on the worldline. Upon orientifold projection to Type I, the unstable Type IIB D-particle becomes the $`\mathbf{Z}_2`$-charged stable non-BPS D0-brane of Type I string theory \[1,2,3\]. This is because only the antisymmetric part of the adjoint tachyon survives the $`\mathrm{\Omega }`$ projection, leaving an instability for $`N>1`$ but making the $`N=1`$ system stable. Here, however, we are interested in the unstable D0-brane of Type IIB theory in its own right.
The D0-brane of Type IIB theory can be viewed as a defect, represented by $`\mathrm{\Gamma }x`$ of (2.1), on sixteen D9-D$`\overline{9}`$ pairs, where $`\mathrm{\Gamma }_i`$ are the $`SO(9)`$ gamma matrices of the rotation group in the nine transverse dimensions $`x^i`$. This configuration is topologically unstable: the tachyon maps the 8-sphere at infinity to the vacuum manifold, but the relevant homotopy group $`\pi _8(U(16))=0`$ is trivial. The $`SO(9)`$ group has only one spinor representation $`\mathcal{S}`$, and the gamma matrices represent a map $`\mathcal{S}\to \mathcal{S}`$. Since the $`\mathrm{\Gamma }x`$ configuration carries no D-brane charge, it corresponds to the trivial element in K-theory. Thus, the Chan-Paton bundle supported by the D9-branes is isomorphic to the Chan-Paton bundle of the D$`\overline{9}`$-branes, and both are identified with $`\mathcal{S}`$ (extended to the whole spacetime manifold $`\mathbf{R}^{10}`$).
We now claim that the D0-brane is a D-sphaleron, i.e., it is a static solution of the equations of motion of Type IIB string theory that has one negative mode, and represents the top of the potential barrier along a non-contractible loop in the configuration space of Type IIB string theory on the non-compact space $`\mathbf{R}^9`$. We will prove this directly by constructing the corresponding non-contractible loop in the configuration space, i.e., a one-parameter set of configurations on $`\mathbf{R}^9`$ (parametrized by $`t^{\prime }\in [0,2\pi ]`$) which begins and ends in the supersymmetric vacuum, and at $`t^{\prime }=\pi `$ passes through the configuration describing the unstable D0-brane.
In our construction, we use the defect representation of the D0-brane, as $`\mathrm{\Gamma }x`$ on sixteen D9-D$`\overline{9}`$ pairs. A one-parameter family of configurations on $`\mathbf{R}^9`$ can be viewed as a Euclidean configuration on $`\mathbf{R}^9\times \mathbf{R}`$, parametrized by $`y^I=(x^i,t)`$. Using these coordinates, consider
$$T(y)=\mathrm{\Gamma }_Iy^I$$
(where $`\mathrm{\Gamma }_I`$ are now the $`16\times 16`$ gamma matrices thought of as maps between the two inequivalent irreducible spinor representations of the $`SO(10)`$ rotation group, $`\mathrm{\Gamma }_I:\mathcal{S}_+\to \mathcal{S}_{-}`$). This loop in the space of configurations indeed satisfies our requirements. It is topologically stable, because now the tachyon wraps the $`S^9`$ at infinity once around the non-contractible $`S^9`$ in the vacuum manifold $`U(16)`$ (recall again that $`\pi _9(U(16))=\mathbf{Z}`$). Thus, despite our ignorance about the overall normalization factor, the family (3.1) will indeed flow to a certain topologically non-trivial family of configurations. This family is asymptotic to the supersymmetric vacuum at $`t\to \pm \infty `$, and by construction passes through the D0-brane configuration at $`t=0`$.
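The $`16\times 16`$ chiral blocks can be modeled concretely: taking $`\mathrm{\Gamma }_I=(\gamma _i,i\mathrm{\hspace{0.17em}1}_{16})`$, with $`\gamma _i`$ Hermitian $`SO(9)`$ gamma matrices, gives maps from $`\mathcal{S}_+`$ to $`\mathcal{S}_{-}`$ whose symmetrized products reproduce the Clifford relations, so the family $`T(y)=\mathrm{\Gamma }_Iy^I`$ obeys $`TT^{\mathrm{\dagger }}=|y|^2\mathrm{\hspace{0.17em}1}`$ and $`T/|y|`$ is a unitary matrix on the $`S^9`$ at infinity, which is exactly how the loop wraps the generator of $`\pi _9(U(16))`$. A sketch (our construction and normalization, not the paper's):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
e2 = np.eye(2, dtype=complex)

def so9_gammas():
    """16x16 Hermitian SO(9) gamma matrices via Pauli tensor products."""
    def kron(mats):
        out = np.eye(1, dtype=complex)
        for m in mats:
            out = np.kron(out, m)
        return out
    g = []
    for a in range(4):
        pre, post = [s3] * a, [e2] * (3 - a)
        g.append(kron(pre + [s1] + post))
        g.append(kron(pre + [s2] + post))
    g.append(kron([s3] * 4))
    return g

I16 = np.eye(16, dtype=complex)
sigma = so9_gammas() + [1j * I16]   # ten chiral blocks Gamma_I: S+ -> S-

# Gamma_I Gamma_J^dag + Gamma_J Gamma_I^dag = 2 delta_IJ
for I in range(10):
    for J in range(10):
        s = sigma[I] @ sigma[J].conj().T + sigma[J] @ sigma[I].conj().T
        assert np.allclose(s, 2.0 * (I == J) * I16)

# The family T(y) = Gamma_I y^I satisfies T T^dag = |y|^2 * 1_16, so
# T / |y| is unitary on the S^9 at infinity in (x^i, t) space.
rng = np.random.default_rng(2)
y = rng.normal(size=10)             # y = (x^1, ..., x^9, t)
T = sum(y[I] * sigma[I] for I in range(10))
assert np.allclose(T @ T.conj().T, np.dot(y, y) * I16)
U = T / np.linalg.norm(y)
assert np.allclose(U @ U.conj().T, I16)
```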
In fact, the proper framework for understanding the non-contractible loop (3.1) in the configuration space is K-theory. Even though at each $`t`$ the Chan-Paton bundles of D9-branes and D$`\overline{9}`$-branes are isomorphic (and given by the $`SO(9)`$ spinor bundle $`\mathcal{S}`$), they wrap the extra dimension $`t`$ in a topologically nontrivial way, and span the non-isomorphic $`SO(10)`$ spinor bundles $`\mathcal{S}_+`$ and $`\mathcal{S}_{-}`$ (in accord with the fact that the $`16\times 16`$ gamma matrices of $`SO(10)`$ in (3.1) provide a map $`\mathcal{S}_+\to \mathcal{S}_{-}`$). Thus, the Chan-Paton bundles of the whole one-parameter family of D9-D$`\overline{9}`$ pairs represent a non-trivial element in the K-theory group of the extended manifold parametrized by $`(x^i,t)`$. As an element of K-theory, the topological charge that stabilizes (3.1) can be physically identified as one of the RR charges of Type IIB theory (namely, the D-instanton charge).
Hence, we conclude that
(1) the configuration space of Type IIB string theory has a non-contractible loop (supported by a topological charge that takes values in K-theory), and
(2) the D0-brane of Type IIB string represents the D-sphaleron at the top of the potential barrier traversed by the loop.
It should be pointed out that two important assumptions enter into this conclusion. First, we have not defined from first principles, such as string field theory, what we understand by the configuration space of Type II string theory. Instead, we are using the explicit construction of the D-sphalerons, in conjunction with the existence of RR charges as implied by K-theory, to deduce that the appropriately defined configuration space supports a non-contractible loop. This configuration space contains all perturbative string configurations, plus the configurations of all possible sets of D-brane configurations (and possibly more). A priori, we cannot rule out the possibility that there is some yet to be understood part of the configuration space which makes the above loop contractible. However, this possibility seems unlikely, since the existence of a non-contractible loop in the configuration space follows from a topological argument: the loop is non-contractible because it carries a non-trivial K-theory class (essentially, one unit of the D-instanton charge). As long as RR charges are conserved in the theory, it will not be possible to shrink the loop to a point.
Second, the presence of a non-contractible loop only implies the existence of a sphaleron solution if the configuration space is compact. Pure Yang-Mills theory has non-contractible loops, but the non-compactness of configuration space generated by scale transformations forbids the existence of finite size sphaleron solutions. We are assuming that in string theory, the string scale cuts off this source of noncompactness, and that the resulting object is the same as found by quantizing open strings with Dirichlet boundary conditions.
Our conclusions can be easily generalized to the configuration space of extended configurations that fall off at infinity in directions normal to an extended hypersurface in space. Just as in the case of the Type IIB D0-brane, one can interpret all the unstable Type IIB D$`2p`$-branes with $`p>0`$ as D-sphalerons, and deduce the existence of a non-contractible loop in the corresponding configuration spaces of extended configurations.
4. D-Sphalerons in Type IIA Theory
We now turn to a discussion of the interpretation of the Type IIA D-instanton.
Just as a non-contractible loop in the space of finite energy IIB configurations implied the existence of the D0-sphaleron, a non-contractible loop in the space of finite action IIA Euclidean histories gives rise to a D-instanton with a single negative mode. To exhibit the non-contractible loop, we proceed in parallel to the IIB discussion, now starting with 32 unstable D9-branes. Introducing the parameter $`t`$ and SO(10) gamma matrices $`\mathrm{\Gamma }_i`$, the non-contractible loop is given by
$$T=\sum _{i=1}^{10}\mathrm{\Gamma }_ix^i+\mathrm{\Gamma }_{11}t.$$
The loop gives a nontrivial element of $`\pi _{10}(U(32)/U(16)\times U(16))`$. We identify the halfway point of the loop at $`t=0`$ with the IIA D-instanton.
Thus, the reason for the existence of the IIA D-instanton is not instability of the vacuum; rather, it is required by the nontrivial topology of the space of histories in IIA string theory. The topological charge that makes (4.1) stable corresponds to a non-trivial element of K-theory, with a very interesting physical interpretation: in K-theory, this topological charge can be identified as the RR D$`(-2)`$-brane charge. Recall that in Type IIA string theory, there is a RR ten-form $`F_{10}`$ (related to the cosmological constant in massive Type IIA theory), which couples to the D$`8`$-brane; formally, the magnetic dual of the D$`8`$-brane should be a D$`(-2)`$-brane, a concept that is indeed very hard to understand in physical terms. Here we have found a natural physical role of the D$`(-2)`$-brane charge (if not the D$`(-2)`$-brane), as the topological charge responsible for the non-contractible loop in the space of Type IIA histories. In a formal sense, one can think of the D$`(-2)`$-brane as an "object" localized in the extra dimension of the one-parameter family of histories traversing this non-contractible loop.
So far, our discussion of Type IIA theory has been focused on interpreting the Type IIA D-instanton, and therefore we were looking at the space of Euclidean histories. Similar arguments can also be used to analyze the configuration space of Type IIA theory. Interpreting nine of the eleven dimensions in (4.1) as space dimensions, and the remaining two as extra parameters, we can view (4.1) as a two-parameter family of string configurations that corresponds to a non-contractible two-sphere in the configuration space of Type IIA string theory. The corresponding sphaleron at the far pole of this non-contractible $`S^2`$ is easy to find by setting the two parameters representing the $`S^2`$ in (4.1) equal to zero. The sphaleron configuration that we obtain,
$$T=\left(\begin{array}{cc}\stackrel{\rightarrow }{\mathrm{\Gamma }}\stackrel{\rightarrow }{x}& 0\\ & \\ 0& -\stackrel{\rightarrow }{\mathrm{\Gamma }}\stackrel{\rightarrow }{x}\end{array}\right)$$
(with $`\stackrel{\rightarrow }{x}`$ describing the nine space dimensions and $`\stackrel{\rightarrow }{\mathrm{\Gamma }}`$ the Gamma matrices of $`SO(9)`$), was already encountered in a different context in (2.1), and describes a coincident D0-D$`\overline{0}`$ pair.
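As a consistency check, the Clifford-algebra property underlying this configuration can be verified numerically. The sketch below is our own (a standard Kronecker-product recursion, not taken from the text): it builds the nine anticommuting Hermitian $`16\times 16`$ gamma matrices of $`SO(9)`$ and confirms that $`T=\stackrel{\rightarrow }{\mathrm{\Gamma }}\stackrel{\rightarrow }{x}`$ squares to $`|x|^2`$ times the identity, so that the block-diagonal kink/anti-kink sphaleron does as well.

```python
import numpy as np

# Pauli matrices: the k = 1 seed of the Clifford-algebra recursion
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def extend(gammas):
    """From 2k+1 anticommuting matrices of size 2^k, build 2k+3 of size 2^(k+1)."""
    n = gammas[0].shape[0]
    return ([np.kron(s1, g) for g in gammas]
            + [np.kron(s2, np.eye(n)), np.kron(s3, np.eye(n))])

gammas = [s1, s2, s3]
for _ in range(3):           # sizes 2 -> 4 -> 8 -> 16, counts 3 -> 5 -> 7 -> 9
    gammas = extend(gammas)  # nine 16 x 16 SO(9) gamma matrices

I16 = np.eye(16)
for a in range(9):           # Clifford algebra: {Gamma_a, Gamma_b} = 2 delta_ab
    for b in range(9):
        anti = gammas[a] @ gammas[b] + gammas[b] @ gammas[a]
        assert np.allclose(anti, 2 * I16 if a == b else np.zeros((16, 16)))

x = np.random.default_rng(0).normal(size=9)
Tk = sum(xi * g for xi, g in zip(x, gammas))       # T = Gamma . x
assert np.allclose(Tk @ Tk, np.dot(x, x) * I16)    # squares to |x|^2

# the sphaleron diag(Gamma.x, -Gamma.x) squares to |x|^2 as well
Tsph = np.block([[Tk, np.zeros((16, 16))], [np.zeros((16, 16)), -Tk]])
assert np.allclose(Tsph @ Tsph, np.dot(x, x) * np.eye(32))
```

A real symmetric basis for the $`SO(9)`$ gammas also exists; complex Hermitian matrices suffice for this check.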
The identification of the sphaleron in Type IIA configuration space as a D0-D$`\overline{0}`$ pair nicely agrees with the expected counting of negative modes. The D0-D$`\overline{0}`$ system has a complex tachyon, from the open string stretching between the D0 and the D$`\overline{0}`$-brane. This tachyon gives two real negative modes, precisely as expected from the sphaleron on the far pole of an $`S^2`$.
5. Topology of Configuration Space in String Theory
We have seen how to relate a single D-sphaleron to a non-contractible loop in configuration space. This loop is non-contractible because the corresponding one-parameter family of string configurations carries a topological charge in K-theory, even though each individual configuration carries zero charge. This structure clearly generalizes to multi-parameter families of string configurations. Looking back at (2.1) and (2.1) (or, more abstractly, invoking Bott periodicity in K-theory), we can generalize the construction of section 3, and demonstrate that the string configuration space of Type IIB (IIA) string theory contains non-contractible spheres $`S^k`$ of arbitrarily large odd (even) dimension $`k`$. In turn, each non-contractible $`S^k`$ implies the existence of a sphaleron solution (with exactly $`k`$ negative modes), at the pole of $`S^k`$ opposite to the vacuum. What is the physical interpretation of such higher sphaleron solutions?
In this section we show that these higher sphalerons do not represent novel solutions; rather, they can be interpreted as multiple coincident D0-sphalerons of the previous section. We will demonstrate explicitly that we recover the correct counting of negative modes on $`k`$ D-sphalerons.
5.1. Higher non-contractible spheres in the IIB configuration space
Fig. 4: Non-contractible $`(2n-1)`$-sphere in the configuration space and the corresponding $`n`$-sphaleron configuration.
We begin with a concrete example relating two coincident D0-branes in IIB to a non-contractible $`S^3`$ in the space of finite energy nine dimensional field configurations. We have seen that a single D0-brane can be represented as the point $`t=0`$ on the loop $`T=\mathrm{\Gamma }_ix^i+\mathrm{\Gamma }_{10}t`$, where $`i=1,\dots ,9`$, and $`\mathrm{\Gamma }_i`$ are $`16\times 16`$ $`SO(10)`$ gamma matrices. To represent two D0-branes, we introduce three parameters $`t_1,t_2,t_3`$, and define a non-contractible $`S^3`$ in terms of $`SO(12)`$ gamma matrices by
$$T=\stackrel{~}{\mathrm{\Gamma }}_ix^i+\stackrel{~}{\mathrm{\Gamma }}_{10}t_1+\stackrel{~}{\mathrm{\Gamma }}_{11}t_2+\stackrel{~}{\mathrm{\Gamma }}_{12}t_3.$$
Choosing a convenient representation for $`\stackrel{~}{\mathrm{\Gamma }}_i`$, this becomes
$$T=\left(\begin{array}{cc}\mathrm{\Gamma }_ix^i+\mathrm{\Gamma }_{10}t_1& (t_2-t_3)\mathrm{\hspace{0.17em}1}_{16}\\ & \\ (t_2+t_3)\mathrm{\hspace{0.17em}1}_{16}& -(\mathrm{\Gamma }_ix^i+\mathrm{\Gamma }_{10}t_1)\end{array}\right).$$
It is evident that the "far pole" of the $`S^3`$ at $`t_1=t_2=t_3=0`$, as depicted in fig. 4, represents two coincident D0-branes.
On the two D0-branes we expect to find $`2^2=4`$ negative modes. Three negative modes arise from motion on the $`S^3`$, i.e. $`\delta T=\stackrel{~}{\mathrm{\Gamma }}_{9+i}\delta t_i`$ $`(i=1,2,3)`$. The final negative mode arises from motion on the non-contractible $`S^1`$ as for a single D0-brane:
$$T+\delta T=\left(\begin{array}{cc}\mathrm{\Gamma }_ix^i+\mathrm{\Gamma }_{10}\delta t& 0\\ & \\ 0& -\mathrm{\Gamma }_ix^i+\mathrm{\Gamma }_{10}\delta t\end{array}\right).$$
So we indeed correctly reproduce the 4 negative modes known to exist from the quantization of open strings.
This procedure can be directly generalized to construct a non-contractible $`S^n`$ for all odd values of $`n`$, whose existence is suggested by Bott periodicity of the homotopy groups (2.1). The generalization involves an interesting subtlety, which is best illuminated as follows. To simplify the argument, consider the unstable D0-brane as a real kink on the worldsheet of a coincident D1-D$`\overline{1}`$ pair along its space-like dimension $`x`$. The non-contractible $`S^1`$ in the configuration space is described by the stable vortex on the two-manifold spanned by $`(x,t)`$, where $`t`$ is the parameter along the loop. At each fixed $`t`$, we have one D1-D$`\overline{1}`$ pair. Similarly, the non-contractible $`S^3`$ discussed above corresponds to a point-like defect on a four-manifold spanned by $`(x,t^1,t^2,t^3)`$; to construct such a defect, we need a family consisting of two D1-D$`\overline{1}`$ pairs at each $`t^i`$. This procedure can be iterated; in each step, as we add two more parameters $`t^{2k},t^{2k+1}`$, the $`\mathrm{\Gamma }\cdot y`$ representation of the non-contractible $`S^{2k+1}`$ requires doubling the number of D1-D$`\overline{1}`$ pairs. Thus, the non-contractible $`S^{2k+1}`$ requires a family of $`2^k`$ D1-D$`\overline{1}`$ pairs parametrized by $`t^1,\dots ,t^{2k+1}`$. Notice that the number of D1-D$`\overline{1}`$ pairs grows exponentially with growing $`k`$.
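The gap between this exponential growth and the minimal realization can be made concrete with a two-line count. The sketch below is ours; it takes as given the stability bound $`N\ge k+1`$ for $`\pi _{2k+1}(U(N))`$ quoted in the text.

```python
def gamma_pairs(k):
    """D1-Dbar1 pairs used by the Gamma.y construction of the S^(2k+1)."""
    return 2 ** k

def minimal_pairs(k):
    """Smallest N accommodating pi_{2k+1}(U(N)): the stability bound N >= k+1."""
    return k + 1

for k in range(1, 9):
    # the explicit construction always lies in the stable range...
    assert gamma_pairs(k) >= minimal_pairs(k)

# ...but overshoots badly: already at k = 7 it uses 128 pairs where 8 suffice
assert gamma_pairs(7) == 128 and minimal_pairs(7) == 8
```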
This construction certainly leads to a non-contractible $`S^{2k+1}`$ in the configuration space, and one might be tempted to identify the configuration at $`t^i=0`$, $`i=1,\dots ,2k+1`$, as the corresponding sphaleron. However, a small puzzle immediately appears. While it is easy to show that the configuration at $`t^i=0`$ is given by
$$T=x1_{2^k}$$
and consists therefore of $`2^k`$ coincident D0-sphalerons, it is also straightforward to see that for $`k>1`$ such a configuration has too many negative modes to represent the sphaleron at the far pole of $`S^{2k+1}`$, whose number of negative modes should grow linearly and not exponentially with $`k`$.
This puzzle is resolved by the following observation. One can certainly use the $`\mathrm{\Gamma }\cdot y`$ construction to conveniently construct the non-trivial element of $`\pi _{2k+1}(U(N))`$, but the number $`N=2^k`$ of D1-D$`\overline{1}`$ pairs needed in this construction is not the smallest one possible; in fact, it is deeply inside the stability regime. In order to identify the sphaleron, we have to minimize the energy of the configuration at the far pole of the $`S^{2k+1}`$, and for that we need to use the smallest possible number of D1-D$`\overline{1}`$ pairs allowed by the stability bound. This bound requires $`N\ge k+1`$ pairs to properly accommodate $`\pi _{2k+1}`$! On this minimal number $`k+1`$ of D1-D$`\overline{1}`$ pairs, the sphaleron at $`t^i=0`$ corresponds to $`k+1`$ coincident D0-branes.
Thus, we claim that the sphaleron on the far pole of the non-contractible $`S^{2n-1}`$ is given by $`n`$ coincident unstable D0-branes. It is now easy to see that the count of the number of negative modes indeed works as expected. The configuration of $`n`$ coincident D0-branes exhibits $`n^2`$ negative modes, corresponding to the open-string tachyon in the adjoint of $`U(n)`$. Just like in the case of $`n=2`$ discussed explicitly above, it is important to realize that the system of $`n`$ D0-branes contains subsystems of $`p<n`$ D0-branes that sit at the far pole of $`S^{2p-1}`$ for all $`p=1,\dots ,n-1`$. Motion on each $`S^{2p-1}`$ is associated with $`2p-1`$ negative modes. Thus, the total number of negative modes is
$$1+3+5+\dots +(2n-1)=n^2,$$
as expected.
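The telescoping count can be checked mechanically. A short sketch (ours) verifying that the sphere-by-sphere modes always sum to the $`n^2`$ components of the adjoint tachyon:

```python
# each subsystem of p <= n coincident D0-branes sits at the far pole of an
# S^(2p-1) and contributes 2p-1 negative modes; the total must match the n^2
# real components of the U(n)-adjoint open-string tachyon
for n in range(1, 50):
    assert sum(2 * p - 1 for p in range(1, n + 1)) == n * n
```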
An analogous counting of negative modes goes through for configurations of coincident D$`2p`$-branes, including configurations which include branes of different dimensionalities. It is a satisfying consistency check that in all these cases, we reproduce the same spectrum of negative modes as arises from the quantization of open strings on non-BPS D-branes.
We are therefore led to conclude that
(1) the configuration space of Type IIB string theory has a homotopy structure which is at least as complicated as that of the infinite unitary group $`U(N)`$, $`N\to \mathrm{\infty }`$: $`\pi _k`$ of the configuration space is non-trivial for all odd $`k`$;
(2) similarly, the configuration space of Type IIA string theory has a homotopy structure at least as complicated as that of an infinite Grassmannian, $`U(2N)/U(N)\times U(N)`$, with all $`\pi _{2k}`$ nontrivial.
5.2. Connection to K-theory
Our discussion so far has involved specific examples of D-brane sphalerons and non-trivial homotopy groups of the configuration space of Type II string theory in flat $`\mathbf{R}^{10}`$. It is perhaps worth stressing that the connection between D-sphalerons, K-theory, and the non-trivial homotopy groups of the string configuration space is quite universal, and our results naturally generalize to more complicated cases, including compactifications and orientifolds.
Consider any compactification of Type II or Type I theory. For simplicity, we will discuss the case of Type IIB theory compactification on $`X`$, but the generalization to other theories is straightforward. Stable D-branes on $`X`$ are classified by elements of the (reduced) K-theory group $`K(X)`$, which in turn can be identified as the group of equivalence classes of pairs of Chan-Paton bundles $`(E,F)`$ on a number of spacetime-filling D9-D$`\overline{9}`$ pairs wrapping $`X`$. The equivalence relation corresponds to creation and annihilation of pairs from/to the vacuum.
Imagine now an $`n`$-parameter family of D9-D$`\overline{9}`$ pairs, with Chan-Paton bundles $`(E(t),F(t))`$. In our discussion so far, the parameters $`t=(t^1,\dots ,t^n)`$ were coordinates on an $`S^n`$, but one can consider a general $`n`$-manifold $`Y`$ of parameters. For any fixed $`t`$, $`(E(t),F(t))`$ defines an element $`\alpha (t)`$ of $`K(X)`$, and the whole family defines an element of $`K(X\times Y)`$. Even if $`\alpha (t)`$ is trivial for each $`t`$, the element of $`K(X\times Y)`$ defined by the whole family can be non-trivial. When this is so, the family represents a non-contractible manifold $`Y`$ in the configuration space of the theory in the vacuum sector.
Thus, there is an intimate relation between the homotopy structure of the string configuration space on $`X`$ and the K-theory groups $`K(X\times Y)`$ for various $`Y`$. Since the latter are related to the spectrum of D-brane charges on $`X`$, the homotopy structure of the configuration space is closely related to the stable D-brane spectrum on $`X`$. Assuming that the configuration space is sufficiently compact, the non-trivial elements of the homotopy groups in turn imply the existence of corresponding D-sphalerons.
6. Tachyon Condensation and Massive Type IIA Vacua
As mentioned in section 2, we have been imposing certain restrictions in our study of tachyon configurations on unstable D9-branes in IIA. We chose to start with an even number $`2N`$ of D9-branes, and assumed that tachyon condensation Higgsed the gauge group according to $`U(2N)\to U(N)\times U(N)`$. This symmetry breaking pattern with an even number of unstable D9-branes arose in , where it was found to be directly related to K-theory and the classification of all D-brane charges in Type IIA theory. However, the role of other Higgs patterns, and configurations with an odd number of unstable D9-branes, was left somewhat mysterious in the analysis of .
In this section our conditions will be relaxed: we allow an arbitrary number $`N`$ of D9-branes, as well as the general Higgsing pattern $`U(N)\to U(k)\times U(N-k)`$. We will be led to an interpretation of these configurations in terms of vacua with non-vanishing flux for the RR 10-form $`F_{10}`$.
Let us recall some aspects of vacua with $`F_{10}`$ flux \[21,22\]. Including the non-propagating field $`F_{10}`$ in type IIA supergravity leads to so-called massive IIA supergravity . The field equations of this theory admit solutions with constant $`F_{10}`$ which preserve all $`32`$ supersymmetries. In string theory it has been argued \[22,24,25\] that such vacua exist only for a discrete set of fluxes, $`\nu ={}^{\ast }F_{10}=n\mu _8`$, where $`\mu _8`$ is the tension of a BPS D8-brane. D8-branes play the role of domain walls between distinct vacua, with $`\nu `$ jumping by $`\mu _8`$ upon crossing a D8-brane. We also remark that the massive IIA theory has a cosmological constant via $`S\sim \int d^{10}x\sqrt{g}\nu ^2`$, and that the theory cannot be obtained from the dimensional reduction of any known eleven-dimensional theory.
To connect the above facts to our discussion, we first examine the simple case of a single unstable D9-brane. On the worldvolume of the D9-brane there is a neutral tachyon $`T`$, whose potential $`V(T)`$ is assumed to be of the standard double-well form, with a local maximum at $`T=0`$ and minima at $`T=\pm T_0`$. As in , it is conjectured that a BPS D8-brane is represented by a kink configuration; i.e. $`T=f(x_9)x_9`$ describes a D8-brane at $`x_9=0`$, where $`f(x_9)`$ is a smooth function behaving as $`T_0/|x_9|`$ for large $`|x_9|`$. The kink carries the RR charge of a D8-brane given that on the D9-brane there exists a coupling to the RR 9-form potential $`C_9`$ of the form \[4,1,26\]
$$S=\frac{\mu _8}{2T_0}\int dTC_9.$$
There is no straightforward way to directly compute the coefficient of this term, since the presence of $`T_0`$ in the denominator shows that it depends on unknown details of the tachyon potential. We have chosen the coefficient so that the kink carries the charge of a single D8-brane as in .
Now consider the homogeneous tachyon configurations $`T=0`$ and $`T=\pm T_0`$, and imagine an adiabatic process in which the tachyon is taken from one such solution to another. The quadratic term for $`F_{10}`$, along with the coupling (6.1), yields the field equation
$$d\,{}^{\ast }F_{10}=\frac{\mu _8}{2T_0}dT.$$
Hence in taking the tachyon from one minimum, $`T=-T_0`$, to the other, $`T=+T_0`$, we find that $`F_{10}`$ changes by $`\mathrm{\Delta }\nu =\mu _8`$. Given the previous quantization condition for $`\nu `$ in the massive IIA vacua, it is natural to conclude that in the process of shifting the tachyon we have moved from one massive IIA vacuum to an adjacent one. In this interpretation, a D8-brane, described as a kink, indeed represents a domain wall between distinct massive IIA vacua. On the other hand, if we adiabatically take the tachyon from $`T=-T_0`$ to the unstable local maximum at $`T=0`$, we find $`\mathrm{\Delta }\nu =\mu _8/2`$ which, perhaps surprisingly, forces us to admit values of $`\nu `$ not included among the massive IIA vacua. That is, in order to respect the quantization of $`\nu `$ after tachyon condensation, a single D9-brane with vanishing $`T`$ can only exist in the presence of half odd integer units of flux: $`\nu =(n+1/2)\mu _8`$.
The foregoing analysis is easily generalized to the case of $`N`$ unstable D9-branes. We assume that $`V(T)`$ has minima of the form
$$T=T_0\left(\begin{array}{cc}1_k& 0\\ & \\ 0& -1_{N-k}\end{array}\right),$$
and that on the D9-branes there exists a coupling
$$S=\frac{\mu _8}{2T_0}\int \mathrm{Tr}(dT)C_9.$$
Adiabatic variation of $`T`$ then gives $`\mathrm{\Delta }\nu =\frac{1}{2}\frac{\mu _8}{T_0}\mathrm{\Delta }\mathrm{Tr}(T)`$. By moving between different minima, one can reach values for $`\nu `$ corresponding to any given massive IIA vacuum. For $`N`$ even, it is consistent to take $`\nu =0`$ at $`T=0`$, and also after tachyon condensation to the traceless configuration $`k=N/2`$. This is what has been assumed in the bulk of this paper. But for $`N`$ odd, consistency with the quantization condition requires one to include half odd integer units of flux at $`T=0`$.
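The bookkeeping in this paragraph is simple enough to tabulate. In the sketch below (ours; the units $`\mu _8=T_0=1`$ are our assumption), the shift $`\mathrm{\Delta }\nu =\frac{1}{2}\frac{\mu _8}{T_0}\mathrm{\Delta }\mathrm{Tr}(T)`$ is evaluated for the move from $`T=0`$ to a minimum with $`k`$ positive eigenvalues:

```python
from fractions import Fraction

def flux_shift(N, k):
    """Delta(nu)/mu_8 when moving from T = 0 to the minimum with k eigenvalues
    +T_0 and N-k eigenvalues -T_0, using Delta nu = (mu_8 / 2 T_0) Delta Tr T."""
    delta_tr = k - (N - k)          # Tr T / T_0 at the minimum
    return Fraction(delta_tr, 2)

# N even: the traceless minimum k = N/2 is reachable with zero (integer) flux
assert flux_shift(2, 1) == 0
assert flux_shift(4, 2) == 0

# N odd: every minimum differs from T = 0 by a half-odd-integer flux, so T = 0
# on an odd number of D9-branes requires nu = (n + 1/2) mu_8
assert flux_shift(1, 1) == Fraction(1, 2)
assert all(flux_shift(3, k) % 1 == Fraction(1, 2) for k in range(4))
```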
One might be suspicious of the need to introduce half odd integer units of flux, given what was said about the difficulty in computing the coefficient of the term (6.1). Perhaps the assumed coefficient is incorrect by a factor of two, so that a kink truly represents two D8-branes. To allay such suspicions, we will compute the spectrum of fermion zero modes on the kink, and see that we obtain a single $`8+1`$ dimensional Majorana fermion, modulo one assumption, as we should if the kink represents a single D8-brane.
The computation is closely related to one performed in , which yielded the fermion zero modes on a Type I D0-brane regarded as a kink on a D$`1`$-D$`\overline{1}`$ pair. On an unstable D9-brane there are two Majorana-Weyl fermions of opposite chiralities, $`\psi _\pm `$. We take these to couple to the tachyon at quadratic order through an action of the form
$$S=\int d^{10}x\left\{\frac{i}{2}f_1(T)[\psi _+^T\mathrm{\Gamma }^0\mathrm{\Gamma }^\mu \partial _\mu \psi _++\psi _{-}^T\mathrm{\Gamma }^0\mathrm{\Gamma }^\mu \partial _\mu \psi _{-}]+f_2(T)\psi _+^T\mathrm{\Gamma }^0\psi _{-}\right\}.$$
$`\mathrm{\Gamma }^\mu `$ are purely imaginary $`SO(9,1)`$ gamma matrices. $`f_{1,2}(T)`$ are functions of $`T`$ and its derivatives. The action is restricted by a $`\mathbf{Z}_2`$ symmetry which flips the sign of $`T`$ along with one of the fermions; the symmetry requires $`f_1`$ to be an even function of $`T`$, and $`f_2`$ to be an odd function of $`T`$. The couplings are also restricted by a non-linearly realized supersymmetry acting on the fermion fields as discussed in \[13,28\]. It is not clear whether this fact is compatible with the last term in (6.1), or with the analogous term in , although the equations of motion which follow from (6.1) appear to be compatible with those in to lowest order. This question deserves closer scrutiny; for now we will assume that (6.1) is correct and proceed.
For the tachyon background we take a kink located at $`x_9=0`$. As with the tachyon potential $`V(T)`$, there is no systematic way to calculate the functions $`f_{1,2}`$. Our main assumption is that for a kink background $`f_2/f_1`$ goes to a nonzero constant (which can be taken to be positive) for large negative $`x_9`$, and hence to a negative constant for large positive $`x_9`$, as the tachyon moves from one minimum of its potential to the other. Fermion zero modes are obtained from normalizable solutions to the Dirac equation which depend only on $`x_9`$. Defining the linear combinations
$$\chi _\pm =\psi _+\pm \psi _{},$$
the Dirac equation is found to be
$$\partial _9\chi _\pm =\left[-\frac{1}{2}\frac{\partial _9f_1}{f_1}\pm i\frac{f_2}{f_1}\mathrm{\Gamma }^9\right]\chi _\pm .$$
The solutions are
$$\chi _\pm =f_1^{-1/2}\mathrm{exp}\left[\pm i\int _0^{x_9}dx_9^{\prime }\frac{f_2}{f_1}\mathrm{\Gamma }^9\right]\chi _\pm ^{(0)},$$
where $`\chi _\pm ^{(0)}`$ are constant spinors. Given the assumed behavior of $`f_2/f_1`$, normalizability requires
$$\mathrm{\Gamma }^9\chi _+^{(0)}=-i\chi _+^{(0)},\mathrm{\Gamma }^9\chi _{-}^{(0)}=+i\chi _{-}^{(0)}.$$
With these projections, the spectrum of fermion zero modes is that of a single $`8+1`$ dimensional Majorana fermion. Thus we have verified that a kink on an unstable D9-brane represents a single BPS D8-brane, which in turn requires that an odd number of unstable D9-branes be accompanied by half odd integral units of 10-form flux.
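The normalizability argument can be illustrated with a toy profile. Assuming $`f_1=1`$ and $`f_2/f_1=-\mathrm{tanh}(x_9)`$ (one choice consistent with opposite-sign asymptotics at the two ends; the true $`f_{1,2}`$ are unknown), the zero-mode amplitude on the relevant $`\mathrm{\Gamma }^9`$ eigenspace reduces to $`\mathrm{exp}(I(x_9))`$, with $`I`$ the running integral of $`f_2/f_1`$ from the origin. For this profile the amplitude is $`1/\mathrm{cosh}(x_9)`$, which decays at both ends and is normalizable:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
ratio = -np.tanh(x)                     # toy f2/f1: positive at large negative x9

# I(x) = int_0^x (f2/f1) dx' by cumulative trapezoid, anchored so I(0) = 0
I = np.concatenate([[0.0], np.cumsum(0.5 * (ratio[1:] + ratio[:-1]) * dx)])
I -= I[len(x) // 2]

profile = np.exp(I)                     # zero-mode amplitude for f1 = 1

# closed form for this profile is sech(x), with squared norm 2
assert np.allclose(profile, 1.0 / np.cosh(x), atol=1e-3)
assert abs(np.sum(profile ** 2) * dx - 2.0) < 1e-2
```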
In closing this section, we point out that according to \[29,30\] an $`8+1`$ dimensional theory with an odd number of Majorana fermions potentially suffers from a global gravitational anomaly. In the present case, the $`8+1`$ dimensional theory on the kink was obtained by starting from an anomaly-free $`9+1`$ dimensional theory, which indicates that the anomaly should cancel through some global version of anomaly inflow. This anomaly problem has recently been addressed in .
7. Conclusions and Outlook
In this paper, we have established the existence of finite-energy sphalerons in perturbative string theory, and identified them with the previously studied unstable D-branes. Thus, the unstable D-branes are legitimate objects in string theory, tied to the existence of a complicated homotopy structure of the configuration space of the theory and the existence of RR charges (or, more generally, charges in K-theory) in the "right" dimensions. As mentioned earlier, it is clear from the connection to K-theory that the structure uncovered in this paper is very universal, and a much richer spectrum of D-sphalerons is to be expected upon compactification. It will be interesting to unravel the implications of such D-sphalerons in more complicated situations.
Our construction of D-sphalerons was perturbative in $`g_s`$. Unlike their RR-charged BPS counterparts, D-sphalerons do not carry any conserved quantum numbers, and there is no a priori reason to expect that they survive as pronounced objects beyond the regime of weak string coupling. Therefore, our conclusions about the structure of the configuration space are strictly valid at small $`g_s`$ only. Nonetheless, since the existence of D-sphalerons is protected by the existence of BPS RR-charges (and is therefore topological in nature, related to K-theory), it seems natural to expect that at least some aspects of the sphalerons will survive even at large $`g_s`$. In principle, one can ask whether the homotopy structure of the string configuration space can be recovered in a dual description of a given theory. It is amusing that infinite Grassmannians appeared previously in the string theory literature in early attempts to go beyond perturbation theory, where they played the role of the universal moduli space of all Riemann surfaces (including surfaces of infinite genus) .
Our construction sheds light on the existence of the elusive D$`(-2)`$-brane of Type IIA string theory, which couples to $`F_{10}`$ and is therefore important for issues that have to do with the cosmological constant. The D$`(-2)`$-brane charge was found to be responsible for the existence of a non-contractible loop in the space of Type IIA histories in $`\mathbf{R}^{10}`$.
Although the Type IIA D-instanton, being an example of a D-sphaleron, does not cause false vacuum decay of the supersymmetric vacuum of IIA theory, the closely related Euclidean Schwarzschild instanton will lead to false vacuum decay of $`M`$ theory on $`\mathbf{R}^{10}\times S^1`$ with the anti-periodic choice of spin structure on the $`S^1`$, following the analysis of . This process has interesting generalizations to other non-supersymmetric string compactifications .
Finally, we have not yet explored the physical implications of D-sphalerons in string theory. In field theory, sphalerons represent solutions at the top of a finite-energy barrier that can be classically overcome under favorable circumstances. In certain regimes they provide the leading semi-classical contribution to processes such as baryon number violation in the standard model .
At finite temperature, one can create field-theory sphalerons because they are soft and large objects, relatively easy to excite with a large number of soft quanta in the thermal ensemble. In high-energy scattering processes, on the other hand, it might be difficult to create a soft large sphaleron by scattering a few very energetic quanta, and it has been argued in field theory that such baryon-number-violating processes are not enhanced .
In string theory, D-sphalerons are objects that have a hard core underneath a stringy halo. Therefore, one can expect that, unlike in field theory, the stringy D-sphalerons could play an important role in high-energy scattering processes. On the other hand, their possible role at finite temperatures seems more obscure. At small values of the string coupling, the mass of the sphalerons is proportional to $`1/g_s\sqrt{\alpha ^{\prime }}`$, and before we reach that energy regime in the thermal ensemble, we encounter the Hagedorn transition.
We would like to thank Eric Gimon, Ruth Gregory, Chris Hull, Emil Martinec, Djordje Minic, Albert Schwarz, Steve Shenker, and Edward Witten for helpful conversations. The work of J.H. is supported in part by NSF Grant No. PHY 9901194. The work of P.H. is supported in part by a Sherman Fairchild Prize Fellowship, and by DOE Grant No. DE-FG03-92-ER 40701. The work of P.K. is supported in part by NSF Grant No. PHY 9901194 and by NSF Grant No. PHY94-07194.
References
\[1\] A. Sen, "Stable non-BPS bound states of BPS D-branes," JHEP 9808 (1998) 010, hep-th/9805019; "SO(32) spinors of type I and other solitons on brane-antibrane pair," JHEP 9809 (1998) 023, hep-th/9808141; "Type I D-particle and its interactions," JHEP 9810 (1998) 021, hep-th/9809111; "Non-BPS states and branes in string theory," hep-th/9904207, and references therein.
\[2\] O. Bergman and M.R. Gaberdiel, "Stable non-BPS D-particles," Phys. Lett. B441 (1998) 133, hep-th/9806155.
\[3\] E. Witten, "D-Branes and K-Theory," JHEP 9812 (1998) 019, hep-th/9810188.
\[4\] P. Hořava, "Type IIA D-Branes, K-Theory, and Matrix Theory," Adv. Theor. Math. Phys. 2 (1999) 1373, hep-th/9812135.
\[5\] R. Minasian and G. Moore, "K-Theory and Ramond-Ramond Charge," JHEP 9711 (1997) 002, hep-th/9710230.
\[6\] A. Sen, "BPS D-branes on non-supersymmetric cycles," JHEP 9812 (1998) 021, hep-th/9812031.
\[7\] J. Cardy, "Boundary Conditions, Fusion Rules, and the Verlinde Formula," Nucl. Phys. B324 (1989) 581; P. Hořava, "Strings on World-Sheet Orbifolds," Nucl. Phys. B327 (1989) 461; "Open Strings from Three Dimensions: Chern-Simons-Witten Theory on Orbifolds," (Prague, 1990), J. Geom. Phys. 21 (1996) 1, hep-th/9404101.
\[8\] S. Coleman, "The Uses of Instantons," in Aspects of Symmetry (Cambridge University Press, 1985).
\[9\] E. Witten, "Instability of the Kaluza-Klein Vacuum," Nucl. Phys. B195 (1982) 481.
\[10\] G.T. Horowitz and A. Strominger, "Black Strings and $`p`$-Branes," Nucl. Phys. B360 (1991) 197.
\[11\] P. Di Vecchia, M. Frau, I. Pesando, S. Sciuto, A. Lerda and R. Russo, "Classical p-Branes from Boundary State," Nucl. Phys. B507 (1997) 259.
\[12\] P. Yi, "Membranes from Five-Branes and Fundamental Strings from D$`p`$-Branes," Nucl. Phys. B550 (1999) 214, hep-th/9901159.
\[13\] A. Sen, "Supersymmetric World-volume Action for Non-BPS D-branes," hep-th/9909062.
\[14\] A. Sen and B. Zwiebach, "Tachyon Condensation in String Field Theory," hep-th/9912249.
\[15\] D. Brill and G.T. Horowitz, "Negative energy in string theory," Phys. Lett. B262 (1991) 437.
\[16\] J.A. Harvey, P. Hořava and P. Kraus, work in progress.
\[17\] D.J. Gross, M.J. Perry and L.G. Yaffe, "Instability of Flat Space at Finite Temperature," Phys. Rev. D25 (1982) 330.
\[18\] N.S. Manton, "Topology in the Weinberg-Salam Theory," Phys. Rev. D28 (1983) 2019; F.R. Klinkhamer and N.S. Manton, "A Saddle-Point Solution in the Weinberg-Salam Theory," Phys. Rev. D30 (1984) 2212.
\[19\] C.H. Taubes, "The Existence of a Nonminimal Solution to the $`SU(2)`$ Yang-Mills-Higgs Equations on $`R^3`$," Commun. Math. Phys. 86 (1982) 257; 86 (1982) 299.
\[20\] A.I. Bochkarev and M.E. Shaposhnikov, "Anomalous Fermion Number Nonconservation At High Temperatures: Two-Dimensional Example," Mod. Phys. Lett. A2 (1987) 991; D.Y. Grigorev and V.A. Rubakov, "Soliton Pair Creation At Finite Temperatures. Numerical Study In (1+1)-Dimensions," Nucl. Phys. B299 (1988) 67.
\[21\] J. Polchinski, "Dirichlet-Branes and Ramond-Ramond Charges," Phys. Rev. Lett. 75 (1995) 4724, hep-th/9510017.
\[22\] J. Polchinski and A. Strominger, "New Vacua for Type II String Theory," Phys. Lett. B388 (1996) 736, hep-th/9510227.
\[23\] L.J. Romans, "Massive N=2a Supergravity In Ten Dimensions," Phys. Lett. B169 (1986) 374.
\[24\] E. Bergshoeff, M. de Roo, M.B. Green, G. Papadopoulos and P.K. Townsend, "Duality of Type II 7-branes and 8-branes," Nucl. Phys. B470 (1996) 113, hep-th/9601150.
\[25\] M.B. Green, C.M. Hull and P.K. Townsend, "D-Brane Wess-Zumino Actions, T-Duality and the Cosmological Constant," Phys. Lett. B382 (1996) 65, hep-th/9604119.
\[26\] M. Billó, B. Craps and F. Roose, "Ramond-Ramond couplings of non-BPS D-branes," JHEP 9906 (1999) 033, hep-th/9905157.
\[27\] A. Sen, "SO(32) Spinors of Type I and Other Solitons on Brane-Antibrane Pair," JHEP 9809 (1998) 023, hep-th/9808141.
\[28\] T. Yoneya, "Spontaneously Broken Space-Time Supersymmetry in Open String Theory Without GSO Projection," hep-th/9912255.
\[29\] L. Alvarez-Gaumé and E. Witten, "Gravitational Anomalies," Nucl. Phys. B234 (1984) 269.
\[30\] E. Witten, "Global Gravitational Anomalies," Commun. Math. Phys. 100 (1985) 197.
\[31\] G. Moore and E. Witten, "Self-Duality, Ramond-Ramond Fields, and K-Theory," hep-th/9912279.
\[32\] G. Segal and G. Wilson, "Loop groups and equations of KdV type," IHES Publ. Math. 61 (1985) 5; D. Friedan and S.H. Shenker, "The integrable analytic geometry of quantum string," Phys. Lett. B175 (1986) 287; L. Alvarez-Gaumé, C. Gomez and C. Reina, "Loop groups, Grassmannians and String Theory," Phys. Lett. B190 (1987) 55; C. Vafa, "Operator formalism on Riemann surfaces," Phys. Lett. B190 (1987) 47; A. Schwarz, "Fermionic String and Universal Moduli Space," Nucl. Phys. B317 (1989) 323; "Grassmannian and String Theory," Commun. Math. Phys. 199 (1998) 1, hep-th/9610122.
\[33\] M. Dine, O. Lechtenfeld, B. Sakita, W. Fischler and J. Polchinski, "Baryon Number Violation at High Temperature in the Standard Model," Nucl. Phys. B342 (1990) 381.
Orientational pinning and transverse voltage: Simulations and experiments in square Josephson junction arrays
## I INTRODUCTION
The interaction between the periodicity of vortex lattices (VL) and periodic pinning potentials in superconductors has raised a great interest both in equilibrium systems and in driven non-equilibrium systems. Several techniques have been used to artificially fabricate periodic pinning in superconducting samples: thickness modulated films, wire networks, Josephson junction arrays, magnetic dot arrays, sub-micron hole lattices and pinning induced by Bitter decoration.
The ground states of these systems, which result from the competition between the vortex-vortex and the vortex-pinning interactions, can be either commensurate or incommensurate vortex structures depending on the vortex density. These commensurability effects in the ground state vortex configurations lead to enhanced critical currents and resistance minima for the "matching" and for the "fractional" (submatching) vortex densities where the VL is strongly pinned. At finite temperatures, it has been shown that there are both a depinning and a melting transition, which can occur either sequentially or simultaneously depending on the magnetic field.
Very recently, the physics of driven vortices under periodic pinning has been studied numerically both at zero temperature and at finite temperatures. At $`T=0`$ there is a complex variety of dynamic phases. At finite $`T`$ there are two dynamic transitions when increasing temperature at high drives: there is first a transverse depinning and second a melting transition of the moving vortex lattice.
Most of the effects of periodic pinning that have been studied are related to commensurability phenomena and the breaking of translational symmetry in these systems. Less studied is the effect of the breaking of rotational symmetry in periodic pinning potentials, in particular regarding transport properties. One question of interest is how the motion of vortices changes when the direction of the driving current is varied. If there is rotational symmetry, the vortex motion and voltage response should be insensitive to the choice of the direction of the current. However, it is clear that in a periodic pinning potential the dynamics may depend on the direction of the current. For example, in square Josephson junction arrays (JJA) it has been found that the existence of fractional giant Shapiro steps (FGSS) depends on the orientation of the current bias: the FGSS are absent for one bias direction, while they are very large for the other. Another example of more recent interest is the phenomenon of transverse critical current in superconductors with pinning. It has been found that for a VL driven with a high current there is a transverse critical current when an additional small bias is applied in the perpendicular direction. Furthermore, when the transverse bias is increased it is possible to have a rich behavior with a Devil's staircase in the transverse voltage.
In this paper we will study in detail the breaking of rotational invariance in square JJA. In this case, the discrete lattice of Josephson junctions induces a periodic egg-carton potential for the motion of vortices. We will study here how the voltage response depends on the angle of the current with respect to the lattice directions of the square JJA. We will show that there are preferred directions for vortex motion for which there is orientational pinning. This leads to an anomalous transverse voltage when vortices are driven in directions different from the symmetry directions. An analogous effect of a transverse voltage due to the guided motion of vortices has been observed in YBCO superconductors with twin boundaries. Another related case is the intrinsic breaking of rotational symmetry of d-wave superconductivity which causes an angle-dependent transverse voltage for large currents. Here we will show the differences and similarities of the square JJA with these problems.
The paper is organized as follows. In Sec. II we present the model equations for the dynamics of the JJA, which will be solved in the numerical simulations. In Sec. III we describe the experimental details of the JJA used in the measurements. In Sec. IV we will present our results for both the simulations and for the experiments. In particular, we will present experiments corresponding to the orientation for which the effect of a transverse voltage is maximum. Finally in Sec. V we will compare our results with other similar effects and discuss future directions of study.
## II MODEL
We study the dynamics of JJA using the resistively shunted junction (RSJ) model for the junctions of the square network. In this case, the current flowing in the junction between two superconducting islands in a JJA is modeled as the sum of the Josephson supercurrent and the normal current:
$$I_\mu (\mathbf{n})=I_0\mathrm{sin}\theta _\mu (\mathbf{n})+\frac{\mathrm{\Phi }_0}{2\pi cR_N}\frac{\partial \theta _\mu (\mathbf{n})}{\partial t}+\eta _\mu (\mathbf{n},t)$$
(1)
where $`I_0`$ is the critical current of the junction between the sites $`\mathbf{n}`$ and $`\mathbf{n}+\mu `$ in a square lattice \[$`\mathbf{n}=(n_x,n_y)`$, $`\mu =\widehat{\mathbf{x}},\widehat{\mathbf{y}}`$\], $`R_N`$ is the normal state resistance and
$$\theta _\mu (\mathbf{n})=\theta (\mathbf{n}+\mu )-\theta (\mathbf{n})-A_\mu (\mathbf{n})=\mathrm{\Delta }_\mu \theta (\mathbf{n})-A_\mu (\mathbf{n})$$
(2)
is the gauge invariant phase difference with
$$A_\mu (\mathbf{n})=\frac{2\pi }{\mathrm{\Phi }_0}\int _{\mathbf{n}a}^{(\mathbf{n}+\mu )a}\mathbf{A}\cdot d\mathbf{l}.$$
(3)
The thermal noise fluctuations $`\eta _\mu `$ have correlations
$$\langle \eta _\mu (\mathbf{n},t)\eta _{\mu ^{\prime }}(\mathbf{n}^{\prime },t^{\prime })\rangle =\frac{2kT}{R_N}\delta _{\mu ,\mu ^{\prime }}\delta _{\mathbf{n},\mathbf{n}^{\prime }}\delta (t-t^{\prime })$$
(4)
In the presence of an external magnetic field $`H`$ we have
$`\mathrm{\Delta }_\mu \times A_\mu (\mathbf{n})`$ $`=`$ $`A_x(\mathbf{n})-A_x(\mathbf{n}+\widehat{\mathbf{y}})+A_y(\mathbf{n}+\widehat{\mathbf{x}})-A_y(\mathbf{n})`$ (5)
$`=`$ $`2\pi f,`$ (6)
$`f=Ha^2/\mathrm{\Phi }_0`$ and $`a`$ is the array lattice spacing. We take periodic boundary conditions (p.b.c.) in both directions in the presence of an external current $`\mathbf{I}=(I_x,I_y)`$ in arrays with $`L\times L`$ junctions. The vector potential is taken as
$$A_\mu (\mathbf{n},t)=A_\mu ^0(\mathbf{n})-\alpha _\mu (t)$$
(7)
where in the Landau gauge $`A_x^0(\mathbf{n})=-2\pi fn_y`$, $`A_y^0(\mathbf{n})=0`$, and $`\alpha _\mu (t)`$ allows for total voltage fluctuations under periodic boundary conditions. In this gauge the p.b.c. for the phases are:
$`\theta (n_x+L,n_y)`$ $`=`$ $`\theta (n_x,n_y)`$ (8)
$`\theta (n_x,n_y+L)`$ $`=`$ $`\theta (n_x,n_y)-2\pi fLn_x.`$ (9)
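The Landau-gauge choice can be checked mechanically: summing the link variables around each plaquette must reproduce the uniform frustration of Eqs. (5)-(6). A small Python sketch of this check (all function and variable names are ours; the sign of $`A_x^0`$ is chosen so that the lattice curl comes out as $`+2\pi f`$):

```python
import numpy as np

def landau_gauge(L, f):
    """Landau-gauge link variables: A0x[nx, ny] = -2*pi*f*ny on x-links, A0y = 0."""
    A0x = -2.0 * np.pi * f * np.tile(np.arange(L), (L, 1))
    A0y = np.zeros((L, L))
    return A0x, A0y

def plaquette_curl(A0x, A0y):
    """Lattice curl A_x(n) - A_x(n+y) + A_y(n+x) - A_y(n), with periodic rolls."""
    return (A0x - np.roll(A0x, -1, axis=1)
            + np.roll(A0y, -1, axis=0) - A0y)

L = 32
f = 1.0 / L**2                       # a single vortex in the array
A0x, A0y = landau_gauge(L, f)
curl = plaquette_curl(A0x, A0y)
# every bulk plaquette carries a flux of 2*pi*f
assert np.allclose(curl[:, :-1], 2.0 * np.pi * f)
```

Only the bulk plaquettes are checked here: under plain periodic wrapping the seam row at $`n_y=L-1`$ picks up the compensating flux, which in the actual simulations is absorbed instead by the twisted boundary condition of Eq. (9).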
The condition of a total current flowing in the $`x`$ and $`y`$ directions:
$`I_x`$ $`=`$ $`{\displaystyle \frac{1}{L^2}}\left[{\displaystyle \sum _\mathbf{n}}I_0\mathrm{sin}\theta _x(\mathbf{n})+\eta _x(\mathbf{n},t)\right]+{\displaystyle \frac{\hbar }{2eR_N}}{\displaystyle \frac{d\alpha _x}{dt}},`$ (10)
$`I_y`$ $`=`$ $`{\displaystyle \frac{1}{L^2}}\left[{\displaystyle \sum _\mathbf{n}}I_0\mathrm{sin}\theta _y(\mathbf{n})+\eta _y(\mathbf{n},t)\right]+{\displaystyle \frac{\hbar }{2eR_N}}{\displaystyle \frac{d\alpha _y}{dt}},`$ (12)
determines the dynamics of $`\alpha _\mu (t)`$. We also consider local conservation of current,
$$\mathrm{\Delta }_\mu I_\mu (\mathbf{n})=\sum _\mu \left[I_\mu (\mathbf{n})-I_\mu (\mathbf{n}-\mu )\right]=0.$$
(13)
From Eqs. (1,8,9) we obtain the following set of dynamical equations for the phases,
$`\mathrm{\Delta }_\mu ^2{\displaystyle \frac{\partial \theta (\mathbf{n})}{\partial t}}`$ $`=`$ $`-\mathrm{\Delta }_\mu [S_\mu (\mathbf{n})+\eta _\mu (\mathbf{n},t)]`$ (14)
$`{\displaystyle \frac{d\alpha _\mu }{dt}}`$ $`=`$ $`I_\mu -{\displaystyle \frac{1}{L^2}}{\displaystyle \sum _\mathbf{n}}[S_\mu (\mathbf{n})+\eta _\mu (\mathbf{n},t)]`$ (15)
where
$$S_\mu (\mathbf{n})=\mathrm{sin}[\mathrm{\Delta }_\mu \theta (\mathbf{n})-A_\mu ^0(\mathbf{n})+\alpha _\mu ],$$
(16)
we have normalized currents by $`I_0`$, time by $`\tau _J=\mathrm{\Phi }_0/2\pi cR_NI_0`$, temperature by $`I_0\mathrm{\Phi }_0/2\pi k_B`$, and the discrete Laplacian is
$`\mathrm{\Delta }_\mu ^2\theta (\mathbf{n})`$ $`=`$ $`\theta (\mathbf{n}+\widehat{\mathbf{x}})+\theta (\mathbf{n}-\widehat{\mathbf{x}})+\theta (\mathbf{n}+\widehat{\mathbf{y}})+\theta (\mathbf{n}-\widehat{\mathbf{y}})`$ (18)
$`-4\theta (\mathbf{n}).`$
The Langevin dynamical equations (14-15) are solved with a second order Runge-Kutta-Helfand-Greenside algorithm with time step $`\mathrm{\Delta }t=0.1\tau _J`$ and integration time $`5000\tau _J`$ after a transient of $`2000\tau _J`$. The discrete Laplacian is inverted with a fast Fourier + tridiagonalization algorithm as in Ref. . We calculate the time average of the total voltage as
$`V_x`$ $`=`$ $`\langle v_x(t)\rangle =\langle d\alpha _x(t)/dt\rangle `$ (19)
$`V_y`$ $`=`$ $`\langle v_y(t)\rangle =\langle d\alpha _y(t)/dt\rangle `$ (20)
with voltages normalized by $`R_NI_0`$.
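The normalized dynamical equations can be condensed into a short time-stepping sketch. The Python fragment below (all names ours) implements one plain forward-Euler step; the paper instead uses a second-order Runge-Kutta-Helfand-Greenside scheme and a fast Fourier + tridiagonalization solver, and for simplicity the twisted boundary condition of Eq. (9) is not included here, so nonzero frustration is only illustrative. The supercurrent is written as $`S_\mu =\mathrm{sin}(\mathrm{\Delta }_\mu \theta -A_\mu ^0+\alpha _\mu )`$, one consistent sign convention for the gauge-invariant phase.

```python
import numpy as np

def rsj_step(theta, alpha, A0, I_ext, T, dt, rng):
    """One forward-Euler step of the normalized RSJ dynamics (sketch only).

    theta: (L, L) phases; alpha: (2,) global twist; A0: (2, L, L) static
    gauge field; I_ext: (2,) applied current; T: temperature; dt: time step.
    """
    L = theta.shape[0]
    # gauge-invariant supercurrents on x- and y-links
    dthx = np.roll(theta, -1, axis=0) - theta
    dthy = np.roll(theta, -1, axis=1) - theta
    S = np.stack([np.sin(dthx - A0[0] + alpha[0]),
                  np.sin(dthy - A0[1] + alpha[1])])
    # Langevin noise with variance 2T/dt per link after time discretization
    eta = rng.normal(0.0, np.sqrt(2.0 * T / dt), size=S.shape)
    J = S + eta
    # lattice divergence, Eq. (13)
    div = (J[0] - np.roll(J[0], 1, axis=0)) + (J[1] - np.roll(J[1], 1, axis=1))
    # invert the discrete Laplacian in Fourier space (zero mode removed)
    kx, ky = np.meshgrid(np.arange(L), np.arange(L), indexing='ij')
    lam = -4.0 + 2.0 * np.cos(2 * np.pi * kx / L) + 2.0 * np.cos(2 * np.pi * ky / L)
    lam[0, 0] = 1.0
    rhs = np.fft.fft2(-div)
    rhs[0, 0] = 0.0
    dtheta_dt = np.real(np.fft.ifft2(rhs / lam))
    # global twist dynamics: d(alpha_mu)/dt = I_mu - (1/L^2) sum_n J_mu(n)
    dalpha_dt = I_ext - J.sum(axis=(1, 2)) / L**2
    return theta + dt * dtheta_dt, alpha + dt * dalpha_dt, dalpha_dt
```

With zero frustration and zero temperature the array behaves as a single junction for the global twist: for $`I_x<1`$ the twist relaxes to $`\mathrm{sin}\alpha _x=I_x`$ and the time-averaged voltage $`\langle d\alpha _x/dt\rangle `$ vanishes, while for $`I_x>1`$ a finite voltage appears.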
We study the JJA under a magnetic field corresponding to a single vortex in the array, $`f=1/L^2`$, and system sizes of $`L\times L`$ junctions, with $`L=32,64`$. We apply a current $`I`$ at an angle $`\varphi `$ with respect to the $`[10]`$ lattice direction,
$`I_x`$ $`=`$ $`I\mathrm{cos}\varphi `$ (21)
$`I_y`$ $`=`$ $`I\mathrm{sin}\varphi .`$ (22)
We define the longitudinal voltage as the voltage in the direction of the applied current,
$$V_l=V_x\mathrm{cos}\varphi +V_y\mathrm{sin}\varphi ,$$
(23)
and the transverse voltage
$$V_t=-V_x\mathrm{sin}\varphi +V_y\mathrm{cos}\varphi .$$
(24)
From the voltage response, we define the transverse angle as
$$\mathrm{tan}\theta _t=V_t/V_l$$
(25)
and the voltage angle as
$$\mathrm{tan}\theta _v=V_y/V_x,$$
(26)
i.e. $`\theta _t=\theta _v-\varphi `$, see Fig. 1.
When the vortices move in the direction perpendicular to the current, there is no transverse voltage, therefore $`\theta _t=0`$ and $`\theta _v=\varphi `$.
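The angle definitions of Eqs. (23)-(26) translate directly into code. A minimal sketch (function name ours), checked against the limiting case just mentioned, where vortices move perpendicular to the current and the voltage is parallel to it:

```python
import numpy as np

def voltage_angles(Vx, Vy, phi):
    """Longitudinal/transverse voltages and the angles theta_t, theta_v
    for a drive at angle phi (radians) from the [10] axis, per Eqs. (23)-(26)."""
    Vl = Vx * np.cos(phi) + Vy * np.sin(phi)
    Vt = -Vx * np.sin(phi) + Vy * np.cos(phi)
    return Vl, Vt, np.arctan2(Vt, Vl), np.arctan2(Vy, Vx)

# vortices moving perpendicular to the current: voltage parallel to the drive,
# so theta_t = 0 and theta_v = phi
phi = np.deg2rad(30.0)
Vl, Vt, theta_t, theta_v = voltage_angles(np.cos(phi), np.sin(phi), phi)
assert np.isclose(Vt, 0.0) and np.isclose(theta_t, 0.0) and np.isclose(theta_v, phi)
```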
## III EXPERIMENTAL SETUP
We measured Current-Voltage (IV) characteristics of square proximity-effect Pb/Cu/Pb Josephson Arrays with the current applied in different directions.
The samples consist of 2500 Å-thick cross-shaped Pb islands on top of a continuous 2500 Å-thick copper film. Copper, and subsequently lead, were thermally evaporated onto a silicon substrate within the same evaporator. An array of $`1000\times 1000`$ lead islands was defined by photolithographic patterning followed by Ar ion etching. The cell parameter of the resulting array was 10 $`\mu `$m, with junctions 2 $`\mu `$m wide and a separation of 1 $`\mu `$m. A second photolithography step was used to define a $`1\times 10`$ mm<sup>2</sup> strip with current and voltage (longitudinal and transverse) contacts. This mask was manually aligned in the \[10\] and \[11\] directions for different samples.
The six-terminal measurements were made using a programmable dc current source and a two-channel nanovoltmeter (HP 34420A), with each channel measuring the longitudinal and the transverse voltage, respectively. The arrays were cooled down to 1.25 K in a pumped <sup>4</sup>He cryostat shielded by $`\mu `$-metal. A superconducting solenoid was used to null the remaining ambient magnetic field ($`\sim 15`$ mG) and to apply fields to the sample. The typical periodic-in-field response of the resistance was observed over a large number of periods, which was used to determine the frustration applied to the sample.
## IV NUMERICAL AND EXPERIMENTAL RESULTS
### A Breaking of rotational invariance.
The square lattice has two directions of maximum symmetry: the \[10\] and the \[11\] directions (and the ones obtained from them by $`\pi /2`$ rotations), which correspond to the directions of reflection symmetry. When the current bias is in the \[10\] direction, the angle of the current is $`\varphi =0`$, and we call it a "parallel" bias. When the current bias is in the \[11\] direction, the angle of the current is $`\varphi =\pi /4=45^o`$, and we call it a "diagonal" bias.
In the case of the parallel bias we find that the transverse voltage is zero (in agreement with the reflection symmetry). This corresponds to vortex motion in the direction perpendicular to the current ($`\theta _t=0`$). In the IV curve for the longitudinal voltage we find a critical current corresponding to the single vortex depinning, $`I_c^{[10]}=0.1`$, as shown in Fig. 2(a). Above $`I_c`$ it is possible to distinguish three regimes: (A) a single vortex regime for $`0.1<I<0.85`$, where the IV is quasi-linear and dissipation is caused by vortex motion; (B) an intermediate crossover regime for $`0.85<I<1.0`$; and (C) a resistive regime for $`I>1.0`$ where dissipation is caused by the Ohmic shunt resistance and the IV curve is linear. The vortex regime A can be described by the dynamics of a single collective degree of freedom: an overdamped particle moving in the periodic "egg-carton" potential, although with a non-linear viscosity. The resistive regime C is also very simple: it can be represented by the behavior of a single junction at large currents, $`\theta _\mu (\mathbf{n},t)\approx \frac{2eR_N}{\hbar }I_\mu t+\delta _\mu (\mathbf{n},t)`$. The crossover regime B is characterized by a complex collective dynamics with an interplay of the vortex degree of freedom with spin-wave excitations, and at finite temperatures there is also a steep increase of vortex-antivortex excitations in this regime.
In the case of the diagonal bias, we obtain similar results as in the parallel bias case. The transverse voltage is zero and therefore the vortex moves perpendicular to the current ($`\theta _t=0`$). The IV curve for $`V_l`$ has a critical current of $`I_c^{[11]}=\sqrt{2}I_c^{[10]}=0.1414`$. The onset of the resistive regime C is also multiplied by a factor of $`\sqrt{2}`$, while the crossover regime B starts at nearly the same current as in the parallel bias case, see Fig. 2(b).
For orientations different from the symmetry directions, we always find a finite transverse voltage. In order to see this, we study the voltage response when varying the orientation $`\varphi `$ of the drive while keeping fixed the amplitude $`I`$ of the current. In Fig. 3(a) we plot the transverse angle $`\theta _t=\mathrm{arctan}(V_t/V_l)`$ as a function of the angle of the current $`\varphi `$. We find that $`\theta _t`$ vanishes only in the maximum symmetry directions corresponding to angles $`\varphi =0,\pm 45^o,\pm 90^o,\mathrm{}`$, as discussed before. Furthermore, we see that for orientations near $`\varphi =0`$, the transverse angle basically follows the current angle. This is an indication that vortex motion is pinned in the lattice direction \[10\], since $`V_y\approx 0`$, meaning that the voltage angle is $`\theta _v\approx 0`$. Whenever the voltage response is insensitive to small changes in the orientation of the current, we will call this phenomenon orientational pinning. On the other hand, near $`\varphi =45^o`$ the transverse angle changes rapidly. We show in Fig. 3(b-c) the behavior of $`\theta _t`$ for different currents and temperatures. The $`\theta _t`$ vs. $`\varphi `$ curves become smoother around $`\varphi =45^o`$ for increasing current as well as for increasing $`T`$. At the same time, the magnitude of the transverse angle decreases when increasing the current for $`I1`$, or when increasing $`T`$.
A more direct evidence of the breaking of rotational symmetry can be seen in the parametric curves of $`V_y(\varphi )`$ vs. $`V_x(\varphi )`$. In Fig. 4 we plot the values of the voltages $`V_y`$ and $`V_x`$ when varying the orientational angle $`\varphi `$ for different values of the current amplitude $`I`$ and the temperature. In the case of rotational symmetry we should have a perfect circle. In the set of plots of Fig. 4(a-c), the current amplitude is fixed and the temperature is varied. In Fig. 4(a) we have $`I=0.2`$, near the onset of single vortex motion in the regime A. In this case most of the points are either on the axis $`V_x=0`$ or on the axis $`V_y=0`$, indicating strong orientational pinning in the lattice directions \[10\] or \[01\]. When increasing $`T`$ the orientational pinning decreases and the length of the "horns" on the $`x`$ and $`y`$ axes decreases. Fig. 4(b) corresponds to $`I=0.6`$, near the end of regime A when the vortex is moving fast. In this case the horns have disappeared and orientational pinning is lost. However, the breaking of rotational symmetry is still present in the star-shaped curves that we find at low $`T`$. The dips at $`45^o`$ in the stars arise because in this direction the voltages are minimum, since the critical current is maximum in this case, $`I_c^{[11]}=0.1\sqrt{2}`$. When increasing the temperature, the stars tend to the circular shape of rotational invariance. The parametric curves in the crossover regime B also have a star-shaped behavior at low $`T`$ which tends to circles when increasing $`T`$. Above the onset of the resistive regime C the "horned" curves reappear \[Fig. 4(c)\]. In this case the orientational pinning corresponds to the locking of ohmic dissipation in the junctions in one of the lattice directions, either \[10\] or \[01\]. Once again, when increasing $`T`$ the horned structure shrinks, and the curves evolve continuously from square shapes to circular shapes.
The variation with current of the rotational parametric curves for a fixed temperature is shown in Figs. 4(d-f). At a low temperature, $`T=0.05`$, we clearly see the horned structure of the curves for almost all the currents, and even for large currents the circular curves have "horns", see Fig. 4(d). At an intermediate temperature, $`T=0.2`$, there are still some signatures of the orientational pinning \[Fig. 4(e)\], while for $`T=1.2`$ all the curves are smooth and rounded with a slightly square shape \[Fig. 4(f)\].
### B Orientational pinning near the \[10\] direction.
The orientational pinning characterizes the breaking of rotational symmetry in square arrays at low temperatures, as we have seen in the previous Section. When the JJA is driven in any of the "parallel" directions, $`\varphi =0^o,90^o,180^o,270^o`$, both vortex motion and dissipation are pinned along these directions. When the current is rotated by a small angle, for example away from the \[10\] direction, the dissipation remains pinned along the \[10\] direction ($`V_x\ne 0`$, $`V_y=0`$). This effect causes a finite transverse voltage, measured with respect to the direction of the current, $`V_t=-V_x\mathrm{sin}\varphi `$, and a transverse angle $`\theta _t=-\varphi `$. The orientational pinning is lost at a given critical angle $`\varphi _c`$ which depends on temperature and current. As we can see in Fig. 3(a), for very low $`T`$ and for certain values of the current, the critical angle can reach values very close to $`45^o`$. This leads to a large transverse voltage in arrays driven near the \[11\] direction, as we will discuss later in Sec. IV.C.
Let us analyze now the behavior of the critical angle for orientational depinning, $`\varphi _c`$. This can be studied by looking at the angle of the voltage with respect to the \[10\] direction, $`\theta _v=\mathrm{arctan}(V_y/V_x)`$. For $`\varphi <\varphi _c`$ we have orientational pinning and therefore $`\theta _v=0`$, while for $`\varphi >\varphi _c`$ we have $`\theta _v\ne 0`$. Therefore, the onset of a finite $`\theta _v`$ defines the critical angle $`\varphi _c`$. In Fig. 5 we plot $`\theta _v`$ as a function of $`\varphi `$ for different currents and temperatures. There are two cases of interest: a current in the single vortex regime A \[Fig. 5(a)\] and a current in the resistive regime C \[Fig. 5(b)\]. We find that in both cases there is clearly a finite $`\varphi _c`$ which decreases with temperature. In the case of the single vortex regime A, we see that $`\varphi _c`$ tends to vanish at a crossover temperature corresponding to the energy scale for vortex depinning, $`T_{pin}\sim \mathrm{\Delta }E_{pin}\approx 0.2`$. The notion of "transverse critical current" $`I_{c,tr}`$ has been introduced recently by Giamarchi and Le Doussal for driven vortex lattices in random pinning. When a vortex structure is moving fast, it can still be pinned in the transverse direction: upon applying a current in the direction perpendicular to the drive, a finite transverse critical current may exist at $`T=0`$. In the case of periodic pinning, a finite $`I_{c,tr}`$ was also proposed due to commensurability effects. In particular, for the periodic "egg-carton" pinning of Josephson junction arrays it was found in Ref. that $`I_{c,tr}`$ is finite in a wide range of temperatures for a system driven in a lattice direction. Here we see that due to the orientational pinning, a transverse critical current is finite only when the JJA is driven either in the \[10\] or \[01\] directions. In any other case it will be zero.
Moreover, the critical angle in the regime A can be interpreted as corresponding to a transverse critical current $`I_{c,tr}=I\mathrm{sin}\varphi _c`$ for a vortex driven by a longitudinal current $`I_l=I\mathrm{cos}\varphi _c`$.
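This interpretation is trivial to encode. A sketch (function name ours) that decomposes the drive at the critical angle and also recovers the full drive amplitude from its components:

```python
import numpy as np

def drive_components(I, phi_c):
    """Longitudinal current and transverse critical current for a drive of
    amplitude I at the critical angle phi_c (radians)."""
    return I * np.cos(phi_c), I * np.sin(phi_c)

I, phi_c = 0.5, np.deg2rad(30.0)
I_l, I_ctr = drive_components(I, phi_c)
assert np.isclose(I_ctr, 0.25)              # I_{c,tr} = I sin(phi_c)
assert np.isclose(np.hypot(I_l, I_ctr), I)  # the components recover I
```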
In Fig. 5(b) we see that for a current in the resistive regime C there is also a critical angle for the onset of transverse ohmic dissipation. In this case $`\varphi _c`$ tends to vanish at a higher temperature, above the Kosterlitz-Thouless transition, $`T_{KT}\approx 0.9`$.
The other lattice symmetry direction of interest corresponds to the case of "diagonal" bias, i.e. the directions $`\varphi =45^o,135^o,225^o,315^o`$. In this case, we have shown previously that the transverse voltage also vanishes. In agreement with this, we see both in Fig. 5(a) and Fig. 5(b) that all the curves cross at the point $`(\varphi ,\theta _v)=(45^o,45^o)`$, which corresponds to $`\theta _t=\theta _v-\varphi =0`$. However, there is no orientational pinning in the \[11\] direction. On the contrary, in this direction any small deviation in the orientation of the current can cause a fast increase of the transverse voltage. In order to show this effect, we study the "diagonal voltage angle", which is measured with respect to $`45^o`$: $`\theta _v^{\prime }=\mathrm{arctan}[(V_y-V_x)/(V_y+V_x)]=\theta _v-\pi /4`$, as a function of the deviation from the \[11\] direction, $`\mathrm{\Delta }\varphi =\varphi -\pi /4`$, see Fig. 6. We find that, in contrast with the case of Fig. 5, a small deviation from the symmetry direction leads to a large change in the voltage angle for any current and temperature. This shows that the \[11\] direction is highly unstable against small changes in the orientation of the current, leading to an anomalously large transverse voltage.
### C Transverse voltage near the \[11\] direction.
In Fig. 7(a) we show our experimental voltage-current characteristics for an array of $`100\times 1000`$ junctions at a low temperature $`T=1.25`$ K and at a low magnetic field. The current is applied nominally in the \[11\] direction, but a small misalignment is possible in the setup of the electrical contacts, therefore $`\varphi =45^o\pm 5^o`$. We see that for low currents there is a very large value of the transverse voltage $`V_t`$, which is nearly of the same magnitude as the longitudinal voltage $`V_l`$. The transverse voltage is maximum at a characteristic current $`I_m`$. Above $`I_m`$, $`V_t`$ decreases with increasing current while $`V_l`$ increases. It is remarkable that these results are very different from the IV curve of Fig. 2, where $`V_t=0`$ at $`\varphi =45^o`$. However, if we assume a misalignment of a few degrees with respect to the \[11\] direction we can reproduce the experimental results. In Fig. 7(b) we show the IV curves obtained numerically for $`\varphi =40^o`$ and $`T=0.02`$. We see that for low currents $`V_t`$ is close to $`V_l`$: $`V_t\approx V_l`$, similar to the experiment, and later $`V_t`$ has a maximum at a current $`I_m\approx 1/\mathrm{cos}\varphi \approx 1.3`$. This corresponds to the current for which the junctions in the $`x`$-direction become critical ($`I_x=1`$). The range of currents we can measure experimentally is limited to the regimes B and C, since we cannot fully access the regime A of single vortex motion due to the small voltages involved in this case. Of course, in the simulations we can study the full range of currents, which is shown in Fig. 7(c). Here we see that near the vortex depinning current the transverse voltage is also very close to $`V_l`$ in a small range of currents; then, when increasing $`I`$, they separate first inside the regime A, and later in the regime B the transverse voltage approaches the longitudinal voltage again.
As we saw in Fig. 3, the highest transverse voltage can be obtained for orientations near $`\varphi =45^o`$. Therefore a slight misalignment of the array from the \[11\] direction is useful for studying, both experimentally and numerically, the behavior of the transverse voltage as a function of current and temperature.
In Fig. 8 we show the dependence of $`\theta _t`$ on current for different temperatures. The experimental results are shown in Fig. 8(a), where we find that $`\theta _t`$ first increases with current, reaches a maximum value $`\theta _t^{max}(T)<45^o`$, and then for large currents tends to zero. In Fig. 8(b) we show that the simulations with the RSJ model with $`\varphi =40^o`$ reproduce this behavior. Here we see that the maximum of the transverse angle is reached inside the regime B well before the onset of the resistive regime. We also find that, when going deep into the regime C, $`\theta _t`$ decreases with current: $`\theta _t\rightarrow 0`$ for $`I\gg I_m`$. In Fig. 8(c) we show that a similar behavior is obtained for other values of $`\varphi `$ close to $`45^o`$. We also observe here the full range of currents. We see that at the critical current the transverse angle $`\theta _t`$ first has a maximum, then it decreases rapidly in a small range of currents, after which, for most of the regime A, $`\theta _t`$ increases with $`I`$ before reaching a second maximum value in the crossover regime B. This shows that there are two regimes where the effect of anomalous transverse voltage is maximum: near the vortex depinning current, due to orientational pinning of vortex motion; and near the Josephson junction critical current, due to the orientational "pinning" of ohmic dissipation. Unfortunately, we cannot measure the small voltages of the low current regime, therefore we were not able to observe experimentally the first maximum of $`\theta _t`$.
In Fig. 9 we analyze the behavior of the transverse angle as a function of temperature. We plot the value of $`\theta _t`$ for a current near the maximum value of the transverse voltage at low $`T`$. We observe experimentally that $`\theta _t`$ decreases with temperature and in particular it has a sharp decrease at $`T\approx 1.5`$ K, as we show in Fig. 9(a). On the other hand, in the simulation results we see a smooth decrease of $`\theta _t`$ with temperature \[Fig. 9(b)\]. The transverse angle becomes small at the depinning crossover temperature $`T_{pin}\approx 0.2`$. The fact that our experimental results show a sharper decrease with temperature than the simulations is possibly due to vortex collective effects. The simulation results presented here are focused on the motion of a single vortex in the periodic pinning of a square JJA. The vortex collective effects, which have to be studied for fields $`f>1/L^2`$, will be discussed elsewhere.
## V DISCUSSION
In the egg-carton potential of a square JJA there are pinning barriers for vortex motion in all directions. The direction with the lowest pinning barrier is the \[10\] direction. Therefore the strong orientational pinning we find here is in the direction of the lowest pinning for motion, i.e. the direction of easy flow for vortices. The presence of a strong orientational pinning leads to a large transverse voltage when the system is driven away from the favorable direction, to the existence of a critical angle and to a transverse critical current. On the other hand, the \[11\] direction is the direction of the largest barrier for vortex motion in the egg-carton potential. In this case, the behavior is highly unstable against small variations in the angle of the drive, leading to a rapid change from zero transverse voltage to a large transverse voltage within a few degrees. This explains the transverse voltage observed in our experimental measurements in JJA driven near the diagonal orientation.
An analogous effect of orientational pinning has also been seen in experiments on YBCO superconductors with twin boundaries. In this case, due to the correlated nature of the disorder, the direction of easy flow is the direction of the twins. A similar effect of horns in the parametric voltage curves is therefore observed in the direction corresponding to the twins. Transport measurements where the sample is driven at an angle with respect to the twins also show a large transverse voltage.
It is interesting to compare with the angle-dependent transverse voltage calculated for d-wave superconductors. Also in this case, the transverse voltage vanishes only in the \[10\] and \[11\] directions. However, the $`\theta _t`$ vs. $`\varphi `$ curves are smooth in this case, since $`\mathrm{tan}\theta _t\propto \mathrm{sin}4\varphi `$. This is because there is no pinning and the transverse voltage is caused only by the intrinsic nature of the d-wave ground state. On the other hand, the breaking of rotational symmetry studied here is induced by the pinning potential, and it results in non-smooth responses like "horned" parametric voltage curves, critical angles, transverse critical currents, etc.
In superconductors with a square array of pinning centers, typically the pins are of circular shape and the size of the pins is much smaller than the distance between pinning sites. In this case, the pinning barriers that vortices find for motion are the same in many directions. Therefore it is possible to have orientational pinning in many of the square lattice symmetry directions. This explains the rich structure of a Devil's staircase observed recently in the simulations of Reichhardt and Nori, where each plateau corresponds to orientational pinning in one of the several possible pinning directions. This interesting behavior is not possible in JJA, however, since the egg-carton pinning potential corresponds to the situation of square-shaped pinning centers with the pin size equal to the interpin distance. In this case the only possible directions for orientational pinning are the \[10\] and \[01\], as we have seen here.
It is worth noting that many experiments in JJA in the past have been done in samples with a diagonal bias. For example, van Wees et al. have observed the existence of a transverse voltage in their measurements, which was unexplained. A small misalignment of the direction of their current drive could easily explain their results, since we have learned here that the \[11\] direction is unstable against small changes in the angle of the bias. Also Chen et al. have reported a transverse angle in measurements in JJA driven in the diagonal direction. In their case the effect has a strong component that is antisymmetric against a change in the direction of the magnetic field, which means that they have a Hall effect possibly due to quantum effects. However, they report that their transverse voltage also had a component which was even in the magnetic field (which was discounted in their computation of the Hall angle). This particular spurious contribution can also be attributed to a small misalignment of the direction of the bias. From this we conclude that in order to study the Hall effect in JJA the most convenient choice would be a current bias in the \[10\] direction, where the effect of transverse voltages at small deviations is minimum.
As this work was nearing completion, new studies of the effect of the orientation of the bias in driven square JJA have appeared. Fisher, Stroud and Janin have studied some of the effects of the direction of the current in a fully frustrated JJA ($`f=1/2`$) at $`T=0`$. In their case a transverse critical current and the dynamics as a function of $`I_x`$ and $`I_y`$ have been described. Their results are in part complementary to our work with a single vortex ($`f=1/L^2`$). Yoon, Choi and Kim find differences in the IV characteristics of JJA at $`f=0`$ when comparing the parallel current bias with the diagonal current bias. Their results are in agreement with the results of our Fig. 2.
In this paper we have considered the dynamics of a single vortex in a square JJA. We were able to characterize in detail the orientational pinning and breaking of rotational symmetry in this case. Furthermore, with the results of the RSJ numerical calculation we were able to reproduce and interpret most of our experimental measurements for a quasi-diagonal bias. It remains for the future to study the behavior of a driven vortex lattice (VL) when the current is rotated, since the VL has also its own periodicity and symmetry directions. As we saw recently in Ref. , a moving vortex lattice in a JJA shows different dynamical phases as a function of temperature and current. Therefore we expect that the characteristics of the breaking of rotational invariance, orientational pinning and transverse voltages will depend on the dynamical phase under consideration.
###### Acknowledgements.
We acknowledge financial support from CONICET, CNEA, ANPCyT, Secyt-Cuyo, and Fundación Antorchas. One of us (P. M.) thanks the Swiss National Science Foundation for support.
# Limits on the Spatial Extent of AGN Measured with the Fine Guidance Sensors of the HST<sup>1</sup>

<sup>1</sup>Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
## 1 Introduction
One of the most important properties of the Hubble Space Telescope (HST) is its ability to attain high spatial resolution limited by diffraction rather than the properties of the atmosphere. This resolution is normally exploited via direct imaging by HST's cameras, potentially enhanced by the application of restoration algorithms. The Fine Guidance Sensors (FGSs) of the HST offer an alternative method for studying structure on the finest scales. The FGSs fold one half of the telescope's pupil onto the other and the resulting interference fringes are recorded by photomultiplier tubes as the 5 arcsecond FGS field of view is scanned across a source. These fringes are the basis of the HST's high precision tracking but can also be used to study structure on scales down to 20 milliarcseconds or less. The FGSs are more sensitive than the cameras to the finest spatial structure but produce data which are more difficult to interpret.
We have carried out a project to investigate the usefulness of the HST FGSs for attaining ultra-high resolutions on extra-galactic objects, observing several bright AGN known to have radio structure on scales ranging from several milliarcsecs to several tens of milliarcsecs. The goal was to search for and study similar scale structure in the optical.
In this paper we present the results of these observations using the original HST FGSs and their analyses using a simple model for detecting and quantifying extended structure. We apply statistical tests to decide whether the hypothesis that the object is a point source on an extended background is consistent with the data and if so, at what confidence level. Simulations are used to derive limits on the detection thresholds for extensions to the central point-source. The systematic effects due to calibrations incorrectly matched in both time and color are investigated as are other systematic effects which limit the resolution we can attain. We find that all our datasets are consistent with point-like object intensity distributions. The high spatial resolution of the FGS allows us to derive interesting limits on the physical sizes of the optical emission regions, and on the scale and relative intensities of features which would have been seen. These results delineate the region of phase-space where the FGSs are the best observational tool for mapping at high resolution in the optical, especially given the improved characteristics of the newly available FGSs.
## 2 Data from the FGS
The HST uses three FGSs at the edge of the field of view of the telescope (see the FGS Instrument Handbook, Version 8, June 1999, STScI, for further discussion). During normal use of the telescope, two of the FGSs are used to "lock" onto guide stars to precisely point the telescope, while the third is available to perform astrometry, independently scanning objects within its field of view and measuring their positions. Each FGS contains a pair of orthogonal interferometers which, when scanned across a point source, produce two S-shaped "transfer functions." The zero-point crossings of the transfer functions determine the position of the source (these nulls are used by the pointing control of the telescope), while the morphology of the transfer functions depends on the brightness profile of the source. Extended or multiple objects produce lower amplitude and more complicated transfer functions compared to those from a point source. It may be shown that the S-curve is related to the monochromatic intensity distribution of the object on the sky in the following way
$$S(\theta )=\frac{I_A-I_B}{I_A+I_B}=\int _{-\mathrm{\Delta }\varphi /2}^{\mathrm{\Delta }\varphi /2}\frac{F(\varphi )}{F_{tot}}\frac{\mathrm{sin}^2(D\pi (\theta -\varphi )/\lambda )}{D\pi (\theta -\varphi )/\lambda }d\varphi $$
(1)
where $`S(\theta )`$ is the value of the observed S-curve expressed as a function of angle away from interferometric null, $`I_A`$ and $`I_B`$ are the intensities measured by the photomultipliers, $`D`$ is the aperture of the HST, $`\lambda `$ is the wavelength, $`F(\varphi )`$ is the intensity distribution of the object in the direction of the scan and $`F_{tot}`$ is the total integrated flux. The observed S-curve can be regarded as the convolution of the normalized intensity profile of the object on the sky and the S-curve resulting from a reference observation of a point-source on a negligible background. It follows from equation (1) that, in principle, the morphology of the source (in the direction of the scan) can be obtained by deconvolving the transfer function from the observed FGS interferometer output. A method for doing this has been described by Hershey (1992). Stronger backgrounds lead to lower amplitude S-curves and this effect becomes important for faint sources such as those discussed in detail later in this paper. To illustrate the general form of the FGS S-curves and how they change when the object has small extent, simulations were created and the results are shown in Figure 1.
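To illustrate eq. (1) numerically, the sketch below (not part of the paper; the aperture, wavelength, and source profiles are assumed values chosen for illustration) convolves the ideal point-source fringe with a near-point and an extended Gaussian profile:

```python
import numpy as np

D = 2.4            # HST aperture in meters (assumed)
lam = 550e-9       # observing wavelength in meters (assumed)
MAS = np.pi / (180.0 * 3600.0 * 1000.0)   # radians per milliarcsecond

def s_curve(theta, phi, profile):
    """Numerical version of eq. (1): weight the ideal point-source
    fringe sin^2(x)/x, x = D*pi*(theta - phi)/lam, by F(phi)/F_tot."""
    w = profile / profile.sum()
    out = np.zeros_like(theta)
    for p, wp in zip(phi, w):
        x = D * np.pi * (theta - p) / lam
        x = np.where(np.abs(x) < 1e-9, 1e-9, x)   # avoid division by zero
        out += wp * np.sin(x) ** 2 / x
    return out

theta = np.linspace(-100.0, 100.0, 2001) * MAS   # scan angle
phi = np.linspace(-60.0, 60.0, 241) * MAS        # source coordinate

point = np.exp(-phi**2 / (2.0 * (0.1 * MAS) ** 2))   # near-point source
ext = np.exp(-phi**2 / (2.0 * (15.0 * MAS) ** 2))    # sigma = 15 mas

s_point = s_curve(theta, phi, point)
s_ext = s_curve(theta, phi, ext)
```

The extended source produces a visibly lower-amplitude S-curve, which is the signature that the fits described below search for.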
The primary use of the FGS in transfer function mode is for observations of binary stars (see, e.g. Franz et al. 1992), where separations, position angles, and relative intensities of close binaries are measured. Lattanzi et al. (1997) have also successfully used the FGS to measure the diameters of the disks of Mira variables. The observations discussed in this paper are the first to attempt to study extended structures in extragalactic sources with the FGS.
## 3 The Observations and Data Reduction
A list of datasets used in the analysis described below is given in Table 1. All observations were made with FGS3 in the "TRANS-MODE" (cf. FGS Instrument Handbook) to obtain transfer functions. The initial observations in this survey were carried out with three different filters: "PUPIL" (a filter-free stopped-down aperture); "CLEAR" (a broad-band filter with FWHM of 2340Å centered at 5830Å); and "YELLOW" (a 750Å FWHM filter centered at 5500Å). The original goal was to attempt to derive some color information from the filters, while the pupil stop, which apodizes the outer $`1/3`$ of the telescope pupil, was used to restore fringe visibility in the presence of spherical aberration. Although, in principle, the interferometers should not be affected by the symmetric aberration in the primary mirror, small internal FGS misalignments interact with the spherical aberration to impair FGS performance. All later observations, forming the majority of the data presented here, used the pupil stop only.
Each observation consisted of multiple scans across the object. The two orthogonal interferometers of the FGS were oriented at 45 degrees to the scan direction. Each scan was typically 2.4″ long, centered on the object, with a nominal 0.3 mas step size.
Routine data reduction software developed by the FGS group at STScI, working with the HST Astrometry team, was used to unpack, inspect, smooth, and merge the data sets (see, e.g., HST Data Handbook, Part X, February, 1994, STScI). Individual scans which, on visual inspection, showed no obvious transfer function or displayed pathological (non-physical) characteristics due to spacecraft jitter were not considered further. On average, 75% of the scans were considered "good." The data from these scans were then co-added, separately for each filter and for each coordinate axis. The standard merging software was used, which first determines a "zero phase" for each scan by measuring the zero crossing of the transfer function (observed in the smoothed data), then calculates the relative offsets of the observed transfer functions from that of a reference scan (typically the first scan of the set), and then co-adds the raw (unsmoothed) data from the individual scans, using the calculated offsets. For the data from the faint objects discussed here each individual scan has significant photon noise. This limits the accuracy of the offset determination and degrades the resolution which can finally be attained. This source of systematic error is discussed below in section 5.
It was apparent that the pupil data were the "cleanest" in the sense that the amplitudes of the transfer functions were greatest and the shapes were smoothest and most closely resembled expected transfer function shapes. This is expected because the full aperture, which accepts a larger portion of the spherically aberrated optical beam, is more susceptible to internal FGS misalignments. In order to have a uniform set of datasets for many objects, which could be adequately calibrated with the limited set of calibration curves at our disposal, we decided to restrict the analyses presented here to those datasets obtained using the pupil stop. Unfortunately, use of the pupil did reduce the photon flux, and thus the number of sources we could study with this technique, as well as the angular resolution which could be achieved.
In order to analyze these curves, we require for reference the response of the system to a known point-source on a negligible background, effectively the S-curve "point spread function" (PSF). The FGS is routinely scheduled to observe standard stars of different colors, through different filters, to provide such references. We selected as reference S-curves for our analyses observations of stars observed through the pupil aperture as close as possible in time to our observations, as the FGS3 S-curves are known to change over the course of time. Ideally a reference star with color comparable to that of the object should be selected, but this can only be approximated for AGN. It is also not always possible to find reference curves taken less than about 200 days from a given observation. The systematic effects due to the inadequacies of the reference curves in color and time are discussed below in section 5. A list of all calibration curves used in this paper is given as Table 2.
## 4 Model-fitting the Observed S-curves
The merged transfer functions for several objects observed in our sample appeared clearly inconsistent with what would be expected from an ideal point source. In particular, the amplitudes of the S-curves were less (suggesting that the background was a significant fraction of the object intensity) and the widths appeared greater (suggesting extent) than expected for point sources. To understand the significance of these differences, we created model object intensity profiles, generated sets of simulated S-curves from them, and compared these to the observations. In this section we use the 3C279 dataset f0wj0602m to illustrate the methods used to assess the statistical significance of the structures seen in the S-curves. The other data sets were analyzed in the same way and the results are presented below.
The objects are modeled using normalized Gaussian intrinsic intensity profiles, superimposed on flat backgrounds. The two-parameter set of functions we used for the intensity distribution in each of "x" and "y" is thus of the form:
$$F=B+He^{-\theta ^2/2\sigma ^2}$$
(2)
where B and H are constrained so that the sum over all values of F is 1.0, $`\theta `$ is the angle with respect to the FGS interferometry null (normally expressed in milliarcseconds) and $`\sigma `$ is expressed in the same units. The normalization of this function is important and may be derived easily from the theory of the FGSs (see eq. (1)). This function $`F`$ is a model for the intensity profile of the object in one direction, summed over the 5 arcsec width of the FGS instantaneous field of view in the perpendicular direction. The model S-curve itself is then created by convolving the model source profile F with the appropriate calibration S-curve (the "PSF") as listed in Table 2.
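A minimal sketch of this model construction, assuming a toy damped-sine fringe in place of a measured calibration S-curve (the function names and numbers are illustrative, not from the paper's pipeline):

```python
import numpy as np

theta = np.linspace(-256.0, 255.0, 512)   # sample grid, roughly mas units
# toy stand-in for a measured point-source calibration S-curve
psf = np.sin(2.0 * np.pi * theta / 24.0) * np.exp(-(theta / 40.0) ** 2)

def model_profile(sigma, bg_frac):
    """Eq. (2): normalized Gaussian of width sigma on a flat background
    carrying a fraction bg_frac of the total flux; sums to 1."""
    g = np.exp(-theta**2 / (2.0 * sigma**2))
    g /= g.sum()
    flat = np.full(theta.size, 1.0 / theta.size)
    return bg_frac * flat + (1.0 - bg_frac) * g

def model_s_curve(sigma, bg_frac):
    """Model S-curve: object profile convolved with the reference S-curve."""
    return np.convolve(psf, model_profile(sigma, bg_frac), mode="same")

m_point = model_s_curve(0.3, 0.0)     # effectively a point source
m_ext = model_s_curve(15.0, 0.25)     # extended source on a background
```

Raising either the width or the background fraction lowers the amplitude of the model S-curve, which is why the two parameters are partially degenerate in the fits.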
The observation data sets were reduced using the standard FGS software referenced earlier. The noisy, un-smoothed data products rather than the smoothed versions were used (both are created by the standard FGS software), as we wished to retain the noise properties of the data. For the calibrator S-curves, the smoothed version was used, re-sampled to match the scale and range of the observed data. In both cases a 512 sample subset of the data around the central S-curve feature was used to make the convolutions more efficient. This reduced range does not affect the results, as there are no significant features further out in these data.
The observed data sets were compared with the models (convolutions of the intensity profiles with calibration S-curves) for a range of values of $`\sigma `$ and $`B`$, and the sum of squared residuals was computed. To estimate the statistical significance of the results, we measure the noise in the outer parts of the S-curve, where the mean is zero, and assume it to be approximately constant. The reduced $`\chi ^2`$ for the fits is then calculated in the normal way
$$\chi _{red}^2=\frac{\sum (v_{dat}-v_{model})^2/var}{n_{free}}$$
(3)
where $`v_{dat}`$ and $`v_{model}`$ are the values of the data and model at a given position, $`n_{free}`$ is the number of degrees of freedom (in this case the number of data points, 512, minus the two free fit parameters: the width of the Gaussian and the fraction of energy in the background), and $`var`$ is the measured variance of the noise, assumed to be constant throughout the S-curve. It is assumed that there is no correlation between the data points; this assumption will be discussed further below.
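The statistic can be sketched against synthetic data as follows (all shapes and noise levels are invented for illustration; the real analysis uses the measured scans and calibration curves):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(-256.0, 255.0, 512)
psf = np.sin(2.0 * np.pi * theta / 24.0) * np.exp(-(theta / 40.0) ** 2)

def model(sigma, bg_frac):
    g = np.exp(-theta**2 / (2.0 * sigma**2))
    g /= g.sum()
    f = bg_frac / theta.size + (1.0 - bg_frac) * g
    return np.convolve(psf, f, mode="same")

# synthetic "observation": an extended source on a background, plus noise
data = model(12.0, 0.25) + rng.normal(0.0, 0.04, theta.size)

# noise variance estimated from the outer parts of the scan,
# where the mean S-curve is essentially zero
var = np.var(np.concatenate([data[:80], data[-80:]]))

def chi2_red(m, n_params=2):
    """Eq. (3): reduced chi-squared of a model against the data."""
    return np.sum((data - m) ** 2) / var / (theta.size - n_params)

good = chi2_red(model(12.0, 0.25))   # the true parameters
bad = chi2_red(model(0.3, 0.0))      # a bare point source
```

A model close to the truth gives a reduced chi-squared near 1, while a badly wrong model is strongly rejected.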
Confidence levels are plotted in Figure 2 for the two-parameter fits to the 3C279 data sets (f0wj0602m). For a $`\chi ^2`$ distribution with 510 degrees of freedom a reduced $`\chi ^2`$ of 1.072 corresponds to the 25% confidence level, 1.123 to 5% and 1.161 to 1%. The reduced $`\chi ^2`$ values of the good fits are slightly less than $`1.0`$, implying that the noise as estimated from the outer parts of the S-curve is slightly too large, in agreement with visual inspection; the data are not "over-fitted," as the curve is smooth on large scales and cannot fit any but the largest features in the S-curve. Estimating the variance from the outer parts of the S-curve gives a value scattered around the true one, with the effect of shifting the confidence contours in or out. We use the $`F`$ test (below), based on ratios of $`\chi ^2`$ values, as it is not affected by small variations of the variance measure.
For statistical tests to be valid, we need to know the magnitude of the noise, its distribution, and whether there are correlations present which would reduce the effective number of degrees of freedom. To assess these properties, we analyzed a "noise dataset" produced from the difference between the fitted and observed S-curves for 3C279 (the "y" data were used but "x" would be equally good). Again, because the fitted curve is smooth on all but the largest scales, this will be a good guide to the fine structure of the noise. The histogram of this noise dataset appeared approximately Gaussian with zero mean, a width (i.e. the $`\sigma `$ of the noise) of 0.044, and no obvious asymmetry. Figure 3 shows the power spectrum of the noise, which appears to be approximately "white". It is plotted with the zero-frequency bin at the center and is hence symmetrical about this point. Finally, the correlations within the noise were assessed by block-averaging the data in 2, 4 and 8 sample bins and computing the standard deviation in each. If there are no correlations on these scales, the values should drop by $`\sqrt{2}`$ each time. The actual ratios found were 1.50, 1.47 and 1.36, confirming that the noise may be regarded as Gaussian and independent, with 512 degrees of freedom.
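The block-averaging check is easy to reproduce on synthetic white noise at the measured 0.044 level (a sketch, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(42)
noise = rng.normal(0.0, 0.044, 4096)   # white noise at the measured sigma

def block_std(x, n):
    """Standard deviation after averaging the data in blocks of n samples."""
    return x[: (x.size // n) * n].reshape(-1, n).mean(axis=1).std()

# for uncorrelated noise each doubling should shrink sigma by sqrt(2)
ratios = [block_std(noise, n) / block_std(noise, 2 * n) for n in (1, 2, 4)]
```

Each doubling of the bin size should reduce the standard deviation by about $`\sqrt{2}\approx 1.41`$ if the samples are uncorrelated, as found for the 3C279 residuals.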
As seen from Figure 2, there is a large region of acceptable parameter space. The best fit requires a small extension (FWHM about 15mas) with a background of about 24%. However, a point source with a background about 28% of the total intensity is also fully acceptable; the required background is consistent with the level expected from an object of this magnitude. The hypothesis that there is no background, and that the reduction in S-curve amplitude is due to a broad nucleus, is strongly rejected at a significance level of 99.9%.
We adopt two approaches to further assess the significance of the small extension indicated by the minimum $`\chi ^2`$ fit. First, we ask whether the improvement in the fit due to the addition of an extra parameter to the model (the extent of the object) is justified by the data. We compare the $`\chi ^2`$ of the null hypothesis "the object is a point" with that obtained by including a non-zero width as well. The ratio of these two $`\chi ^2`$ values should follow the $`F`$ distribution, allowing us to put a confidence limit on the extension. The use of the $`F`$ test also removes the uncertainty in the measurement of the variance, as we are comparing a ratio of reduced $`\chi ^2`$ values.
From the 3C279 S-curves we have
$$F=\frac{\chi _{extended}^2}{\chi _{point}^2}=0.9807/0.9682=1.013$$
(4)
for x and
$$F=\frac{\chi _{extended}^2}{\chi _{point}^2}=0.9892/0.9573=1.033$$
(5)
for y, where there are 511 degrees of freedom for the point-source fit and 510 for the two parameter fit. For an $`F`$ distribution with 510 and 511 degrees of freedom, a 10% confidence level is reached at $`F_{510,511,0.9}=1.12`$. In other words, such a high value is to be expected by chance in 10% of cases if the null hypothesis were true. Although this application of the $`F`$ statistic is not rigorously correct, as the fitted function is not linear in the width of the Gaussian, we think it is clear that we cannot reject the null hypothesis and hence cannot put any trust in an extension, when the derived $`F`$ values are so close to $`1.0`$.
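The comparison can be reproduced from the quoted reduced $`\chi ^2`$ values (a sketch using scipy for the $`F`$ quantile; the critical value agrees with the quoted $`F_{510,511,0.9}=1.12`$):

```python
from scipy import stats

# reduced chi^2 values quoted in the text for the 3C279 fits
chi2_ext_x, chi2_point_x = 0.9807, 0.9682
chi2_ext_y, chi2_point_y = 0.9892, 0.9573

F_x = chi2_ext_x / chi2_point_x
F_y = chi2_ext_y / chi2_point_y

# 10% upper-tail critical value for (510, 511) degrees of freedom
F_crit = stats.f.ppf(0.90, 510, 511)

cannot_reject_point = (F_x < F_crit) and (F_y < F_crit)
```

Both measured ratios fall well below the critical value, so the point-source hypothesis cannot be rejected.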
In a second approach, we generate Monte Carlo simulations using the same two parameter model, selecting the width and fractional background strength at random, convolving the models with the same calibrator S-curve, and adding Gaussian noise at levels comparable to that present in two data sets of interest. The results were analyzed in exactly the same way as above to measure the width and background. Figure 4 compares the measured widths with those input to the simulations for two noise levels. For the noisier simulations on the left, which have noise comparable to that seen in the 3C279 S-curves, there is reasonable agreement for extents ($`\sigma `$) greater than about $`10`$ mas. In a higher signal to noise case on the right, where the noise is comparable to that seen in the NGC4151 data set, extents down to about $`5`$ mas can be reliably detected. These plots provide a guide to whether or not an extension can be detected at a specified noise level when other sources of systematic error are negligible. We quantify the statistical significance of the extent determination by again looking at the $`F`$ statistic for each of these S-curves, as plotted in Figure 5. The pluses indicate fits which are non-point-like at the 99% confidence level; the stars mark fits whose $`F`$ values are insufficiently different from $`1.0`$ to allow rejection of the null hypothesis (i.e. point-source) at the 25% level. The triangles are intermediate cases.
It is clear from the left-hand plots that $`\sigma `$ values less than about 12mas (FWHM=28mas) will not be detectable at statistically significant levels in S-curves having noise levels comparable to the 3C279 data. Hence the extension of $`\sigma =6`$mas (FWHM=14mas) we obtained for the 3C279 "x" S-curve is not significant. Similarly the smaller extension measured for the "y" S-curve of the NGC4151 data described later in the paper is also not statistically significant. In this case, that of the best signal to noise of our sample, the smallest statistically significant extent ($`\sigma `$) detectable is of order $`5`$mas. For data of lower signal to noise ratio, including most of the other data sets studied here, the minimum extension which can be detected will clearly be larger than that for 3C279. Figures 4 and 5 also show that the minimum detectable extension falls as the fraction of energy in the background increases. This is to be expected as the amplitude of the S-curve, compared to the constant noise, is also falling in this case.
These simulations allow us to place limits on the spatial extents of the objects in our study which would be detected at different noise levels under the assumption that other systematic effects are negligible. Unfortunately the combined effects of the spherical aberration of the HST, small misalignments within FGS3 itself, variations in the fringe shapes with time and the impossibility of accurately coadding shifted fringes with very low signal restrict the resolution which can be obtained. These limits are discussed below.
## 5 Systematic Effects
The above analysis has assumed the calibration reference S-curves are an accurate representation of what would be seen if the observed object were a point on a negligible background. It is also assumed that the observed S-curves can be regarded as the convolution of the normalized object intensity distribution with the calibration S-curve. Unfortunately neither of these assumptions can be regarded as completely valid for S-curves of faint objects observed with the FGS3 interferometer. In practice, S-curves depend on the color of the object and are also known to vary in time. In addition, the accuracy of aligning and coadding the multiple scans across a faint object is limited by the photon noise of each scan. In this section we discuss how these systematic effects limit our results.
The effect of color on the S-curve follows simply from the definition of the transfer function: equation (1) contains the dependence of the S-curve on color. For a monochromatic S-curve, changing the wavelength has the effect of scaling the S-curve with respect to the $`\theta `$ axis. This is a simple effect which can be modeled for an object of known spectral-energy distribution by also convolving over wavelength. We generated two simulated S-curves based on two extremes: a very blue spectrum and a very red spectrum. We then used the red S-curve as a calibrator to analyze the blue using the procedures discussed above. The color difference, far larger than any realistic case, is found to introduce a ripple in the residuals of the best fit but no measurable extent.
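The geometric part of this wavelength scaling can be illustrated with the monochromatic fringe of eq. (1) (a toy calculation with an assumed 2.4 m aperture; the real FGS response also depends on the internal optics):

```python
import numpy as np

D = 2.4                                    # aperture in meters (assumed)
MAS = np.pi / (180.0 * 3600.0 * 1000.0)    # radians per milliarcsecond
theta = np.linspace(0.0, 100.0, 10001) * MAS

def fringe(theta, lam):
    """Monochromatic point-source fringe of eq. (1)."""
    x = D * np.pi * theta / lam
    x = np.where(np.abs(x) < 1e-9, 1e-9, x)
    return np.sin(x) ** 2 / x

# location (in mas) of the first fringe peak scales linearly with wavelength
peak_blue = theta[np.argmax(fringe(theta, 450e-9))] / MAS
peak_red = theta[np.argmax(fringe(theta, 700e-9))] / MAS
```

The peak position scales as $`\lambda `$, so a red fringe is simply a stretched copy of a blue one along the $`\theta `$ axis.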
We also took the spectrum of NGC4151 and multiplied by a simulated FGS spectral response and used this to generate a simulated S-curve corresponding to a point source having the appropriate SED. We then made another simulated S-curve with spectral weighting appropriate for the calibration star Upgren 69. To test the color effect on our results we analyzed this simulated NGC4151 observation using the simulated Upgren 69 S-curve as calibrator. The resultant fit showed low amplitude residuals but no detectable bias in the measurement of extent or background, both of which were zero at the resolution of the analysis.
The time variation of the S-curves is much less well understood than the color dependence. It presumably depends on time-dependent changes in the geometry of the fine guidance sensors. Experience with FGS3 data on double stars and the disks of Mira stars has shown that the variations, which are particularly marked for scans in "x", limit resolution to approximately 20mas. Unfortunately, there are at present insufficient data to allow predictions of this effect. This, and other sources of systematic effects, are discussed in the FGS Instrument Handbook.
To empirically estimate the effects of time-variant systematics on the analyses of our data sets as described above, we re-processed a high signal to noise case (f2m40301, 3C 273) using the set of different reference calibration S-curves listed in Table 2 which were taken after the first HST servicing mission. These cover a period from 280 days before the observation to 740 days afterwards. The last calibration set listed, below the line, is a bluer star. The resultant fits are given in Table 3.
There is significant variation of quality of fit when different calibration curves are used. As expected, the reference curve closest in time to the observation (f2vc0201m) does seem to give a better fit than those obtained a long period either before or after, but the differences are small. All fits give very similar numerical values for both the fitted parameters, on both axes, within the formal uncertainties as calculated before. In this case we find consistently that both the background level and the width are negligible.
A final important source of systematic error is introduced by the reduction of the multiple FGS scans. Each of these has very low signal-to-noise for the faint objects considered here and each has a random, unpredictable shift relative to the others. To coadd these scans, as described in section 3 above, it is necessary to measure their relative shifts. This measurement cannot be made accurately when the noise is significant, which inevitably degrades the attainable resolution.
The systematic effects described above combine to limit the resolution, and hence the minimal detectable object extent, to approximately 20mas, and are greater than the more fundamental limits due to the pure photon noise in the observed S-curves. This limit applies to the FGS3 interferometer when used in pupil mode. The newer FGS1R interferometer, installed during the second HST servicing mission, has been shown to have much more stable S-curves and does not require the use of the pupil stop. Hence it should have much improved performance for this kind of work.
## 6 Results for Individual Objects
All the objects listed in Table 1 were fitted to the two-parameter family of models described above and the minimum $`\chi ^2`$ fit results are shown in Table 4. The data are shown in Figures 6 to 13. In all figures, the best fit is shown along with the data in the upper plot and the residuals are plotted below, on the same scale. Only the 512 points used in the analysis are plotted. In each subsection below, we summarize the results for each object.
### 6.1 3C279 (Figures 6 & 7)
3C279 is an extended double radio source which includes a compact core and a jet which extends about 5 arcsec (de Pater & Perley 1983) as well as structure observed with VLBI on scales down to 0.1 mas (Bååth et al. 1992). The jet of 3C279 was one of the first known examples of superluminal motion. 3C279 is associated with a QSO at redshift 0.538 (Sandage & Wyndham 1965). It is a luminous X-ray and gamma-ray source, and is highly variable at all wavelengths. In the optical it is highly polarized, with V magnitude ranging from 15th to 17th magnitude during recent years.
There are two observations of this object; the earlier one was used in the above examples of modeling methods. The data are of reasonable quality and the fits are good. The data are consistent with a point-source on a moderate background, consistent with the object's brightness.
### 6.2 NGC1275 (Figure 8)
These data are of very low signal to noise and the amplitude is very low because of the relatively high background level from this bright galaxy. The fits are reasonable and consistent with a point-source.
### 6.3 3C345 (Figures 9 & 10)
Both datasets for this object have low signal to noise. The fits are good and are consistent with point-sources.
### 6.4 NGC4151 (Figure 11)
These S-curves have excellent signal to noise, the best of all those studied here. Unfortunately the fits are not very good, as shown by the $`\chi ^2`$ values. It seems most likely that this is due to an inadequately matched reference S-curve, caused by variations with time of the S-curve form. This dataset is considered further in Section 5 above.
### 6.5 3C273 (Figure 12)
These curves also have excellent signal to noise and the fits are reasonable although, like NGC4151, it appears that better calibrations would allow more information to be extracted. In terms of amplitude and width these curves are very similar to the calibration, indicating that the relative background is low and that the object is, to first approximation, a point. The X S-curve, which is less stable than in Y, shows some systematic differences. Possible explanations for such differences were discussed in section 5 above. The Y S-curve fit is closer but there is also a systematic difference.
### 6.6 M87 (Figure 13)
These S-curves have very small amplitudes because of the strong background from the underlying elliptical galaxy. The fits are consistent with a point source but the signal to noise ratio is very low.
## 7 Discussion
All the bright AGN which have been studied using the HST FGS in TRANS mode can be modeled well by assuming that the true intensity distribution of the object is a point source on a background. Statistical tests show that the very small measured widths are not significant. The levels of background measured are consistent with what is expected: bright point-like objects (e.g., 3C273) have negligible background and objects which are the cores of bright, large galaxies (e.g., M87 and NGC4151) have strong backgrounds. For faint objects the instrumental background is detected and appears at the expected strength (e.g., 3C279).
An assessment of the systematic effects limiting the resolution of the FGS3 interferometer implies that we would not expect to detect extents (FWHM) less than 20mas ($`\sigma =9`$ mas). These effects are dominated by the instability of the FGS3 interferometer in combination with the spherically aberrated telescope optics and the difficulty of accurately coadding individual scans of faint objects.
We may use our results to estimate the physical extents of the emitting regions and the luminosity densities which these imply. Table 5 tabulates these quantities for those objects with statistically significant upper limits. We assume a spherical, uniform, optically-thin emitting region, $`H_0=65\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, and that the upper limit to the radius of the sphere is the measured $`\sigma `$. The luminosities are calculated from the $`V`$ magnitudes of the objects and the Sun, with no consideration of color effects. These objects are variable and hence approximate mean values are used for the $`V`$ magnitude. For the two bright objects we use the limit of $`\sigma =9`$ mas as deduced from a study of the systematics. For the fainter 3C279 the noise of the data itself becomes more important and we use the limit $`\sigma =12`$ mas. For the brightest objects in our sample the FGS3 observations are limited by the systematic effects described in detail in section 5 above.
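The conversion from the angular limit to a physical size can be sketched as follows (the redshift of NGC 4151 is an assumed illustrative value, z ≈ 0.0033, and a simple low-redshift Hubble-law distance is used with the paper's $`H_0`$):

```python
import math

H0 = 65.0            # km/s/Mpc, as adopted in the text
c = 2.998e5          # speed of light, km/s
z = 0.0033           # NGC 4151 redshift (assumed value, for illustration)

dist_mpc = c * z / H0                          # low-redshift Hubble-law distance
MAS = math.pi / (180.0 * 3600.0 * 1000.0)      # radians per milliarcsecond

sigma_mas = 9.0                                # systematics-limited upper bound
radius_pc = sigma_mas * MAS * dist_mpc * 1e6   # Mpc -> pc
```

This reproduces the order-of-a-parsec scale quoted in the conclusions for the nearer AGNs.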
Because it is both the closest object in our study and has the S-curves with the highest signal to noise ratio, the NGC4151 data yield the smallest physical limit on the size of the nuclear emission region. This object is the closest and most studied Seyfert I galaxy and has been modeled in detail in the framework of the unified view of AGN by Cassidy and Raine (1997). The broad line region (BLR) of this object varies dramatically on time-scales of weeks and must have an angular extent below 1mas. The narrow line region (NLR) further out extends over hundreds of parsecs and has been studied in detail from the ground and by imaging with HST. Our FGS observations are not sensitive to these extended features on arcsecond scales; they are included in the measured background. We expect any structure we see on scales of 10mas to come from material around the BLR which is scattering the intense nuclear radiation. If such scattering were bright enough to effectively broaden the nuclear intensity profile in the optical we would detect such extent. Because our observations are consistent with a point source it appears likely that such scattering is overwhelmed by the direct view of the BLR or that such broadening occurs only closer to the BLR.
## 8 Conclusions
The original HST FGS3 interferometer has been shown to be capable of measuring extents down to about $`\sigma =9`$ mas, on the order of a parsec in nearer AGNs. This surpasses the capabilities of the HST PC camera by close to an order of magnitude. The upper limit is due to a combination of photon statistics and systematic effects not predictable in the pre-refurbishment FGS. If these objects were re-observed using the post-refurbishment FGS1R these systematics would be dramatically reduced and the attainable performance would become limited by the photon statistics of the data. This would lower the upper limits which could be placed on the extent of the AGN emitting regions to the values given in Table 6. The refurbished FGS1R, with S-curves which are more stable and closer to the theoretically perfect form, and which do not require the pupil stop which degrades both throughput and resolution, will allow detection of structure on scales of 5mas or smaller on the brighter AGNs.
We are grateful to many people who contributed to this project over many years. Doris Daou and Nicola Caon gave invaluable help with the data reduction. Mario Lattanzi, Pierre Bely, and Sherie Holfeltz provided much useful advice and encouragement in many discussions. We especially thank Ed Nelan for his simulations, his advice about the FGS systematics, and his careful reading of the manuscript. We would like to acknowledge support from STScI grants GO-02443.01-87A and GO-06578.01-95A.
# Note on Separability of the Werner states in arbitrary dimensions
## Abstract
Great progress has been made recently in establishing conditions for separability of a particular class of Werner densities on the tensor product space of $`n`$ $`d`$-level systems (qudits). In this brief note we complete the process of establishing necessary and sufficient conditions for separability of these Werner densities by proving the sufficient condition for general $`n`$ and $`d`$.
03.67.Lx, 03.67.Hk, 03.65.Ca
We consider the $`d^n`$-dimensional Hilbert space
$`H^{\left[d^n\right]}=H^{\left[d\right]}\otimes \mathrm{\cdots }\otimes H^{\left[d\right]}`$
composed of the direct product of $`n`$ $`d`$-dimensional Hilbert spaces. As in we let $`\stackrel{~}{j}`$ denote the $`n`$-tuple $`j\mathrm{\cdots }j`$ and define $`|\stackrel{~}{j}\rangle =|j\rangle \otimes \mathrm{\cdots }\otimes |j\rangle `$. The particular class of generalized Werner state, $`W^{\left[d^n\right]}\left(s\right),`$ considered here is defined as the convex combination of the completely random state $`\frac{1}{d^n}I^{(n)}`$ and an entangled pure state $`|\mathrm{\Psi }\rangle \langle \mathrm{\Psi }|`$,
$$W^{\left[d^n\right]}\left(s\right)=\left(1s\right)\frac{1}{d^n}I^{(n)}+s|\mathrm{\Psi }\mathrm{\Psi }|$$
(1)
where $`I^{(n)}`$ is the identity operator on the $`d^n`$-dimensional Hilbert space. To be specific, take
$`|\mathrm{\Psi }={\displaystyle \frac{1}{\sqrt{d}}}{\displaystyle \underset{j=0}{\overset{d1}{}}}|\stackrel{~}{j}.`$
The Werner state was originally defined in for two qubits. Its generalization has been applied to study Bell's inequalities and local reality, and it has served as a test case of separability arguments in a number of studies. The problem treated here is to determine necessary and sufficient conditions on the parameter $`s`$ so that $`W^{\left[d^n\right]}\left(s\right)`$ is fully separable. That is,
$$W^{[d^n]}(s)=\underset{a}{}p\left(a\right)\rho ^{\left(1\right)}\left(a\right)\mathrm{}\rho ^{\left(n\right)}\left(a\right),$$
(2)
where the $`\rho ^{\left(r\right)}\left(a\right)`$ are density matrices on the respective $`d`$-dimensional Hilbert spaces and the set of numbers $`\{p(a)\}`$ is a probability distribution. For references to many of the related studies and for the relevance of the Werner states to the study of entanglement the reader can consult .
As shown in , a necessary condition for separability for all $`d`$ and $`n`$ follows from the Peres partial transpose condition or from a weaker condition that can be proved via the Cauchy-Schwarz inequality. Specifically suppose $`j=j_1\mathrm{}j_n`$ and $`k=k_1\mathrm{}k_n`$ differ in each component: $`j_rk_r`$. Let $`u`$ and $`v`$ be indices with $`u_rv_r`$ and $`\{u_r,v_r\}=`$ $`\{j_r,k_r\}.`$ Then for fully separable states $`\rho `$
$$\left(\sqrt{\rho _{j,j}}\sqrt{\rho _{k,k}}\right)\left|\rho _{u,v}\right|,$$
(3)
where $`\rho `$ is written as a matrix in the computational basis defined by the tensor products of $`|j_ik_i|,1in`$. Choosing $`j`$ and $`k`$ appropriately in (3), we obtain the necessary condition
$$s\left(1+d^{n1}\right)^1,$$
(4)
and special cases of this condition were derived in , for example.
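For the smallest case, two qubits ($`d=2`$, $`n=2`$), the necessary condition gives the familiar threshold 1/3, and it can be checked numerically against the Peres partial-transpose criterion. The following sketch is an illustration added here (it is not part of the original argument): it builds the two-qubit Werner state with NumPy and computes the minimum eigenvalue of its partial transpose, which first becomes negative exactly at s = 1/3.

```python
import numpy as np

def werner_two_qubits(s):
    """W(s) = (1-s) I/4 + s |Psi><Psi| with |Psi> = (|00> + |11>)/sqrt(2)."""
    psi = np.zeros(4)
    psi[0] = psi[3] = 1 / np.sqrt(2)
    return (1 - s) * np.eye(4) / 4 + s * np.outer(psi, psi)

def partial_transpose(rho):
    """Transpose the second qubit: <ij|rho|kl> -> <il|rho|kj>."""
    r = rho.reshape(2, 2, 2, 2)              # indices (i, j, k, l)
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

def min_pt_eigenvalue(s):
    return np.linalg.eigvalsh(partial_transpose(werner_two_qubits(s))).min()

print(min_pt_eigenvalue(1 / 3))   # ~ 0: positivity is lost beyond s = 1/3
print(min_pt_eigenvalue(0.5))     # < 0: entangled by the Peres criterion
```

The minimum partial-transpose eigenvalue works out to (1 - 3s)/4, so it crosses zero precisely at the separability threshold.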
The remaining challenge has been to show that this necessary condition is also sufficient, and various partial results have been obtained in the papers just cited. In particular, (4) was shown in to be sufficient for all $`d`$ and $`n=2`$ and in sufficiency was established for $`d`$ prime and all $`n`$. In this note we complete the study of this aspect of the Werner states by proving the sufficiency part of the following result.
Theorem: The Werner density $`W^{\left[d^n\right]}\left(s\right)`$ is fully separable if and only if $`s\left(1+d^{n1}\right)^1`$.
The relevant technique combines a representation of fully separable states presented in with an induction argument presented in . Let $`s=\left(1+d^{n1}\right)^1`$. Then it is easy to show that
$$W^{\left[d^n\right]}\left(s\right)=\frac{1}{1+d^{n1}}\left(\frac{1}{d}\underset{j=0}{\overset{d1}{}}|\stackrel{~}{j}\stackrel{~}{j}|\right)+\frac{d^{n1}}{1+d^{n1}}\left(\frac{1}{d^n}\left(I^{(n)}+\underset{jk}{}|\stackrel{~}{j}\stackrel{~}{k}|\right)\right).$$
(5)
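The decomposition (5) is also easy to confirm numerically for small d and n. The sketch below is illustrative code: it builds |j-tilde> as an n-fold Kronecker product, as in the text, constructs both sides of (5) at the threshold value of s, and checks that they coincide.

```python
import numpy as np
from functools import reduce

def ket_jtilde(j, d, n):
    """|j~> = |j> x ... x |j> (n tensor factors)."""
    e = np.zeros(d)
    e[j] = 1.0
    return reduce(np.kron, [e] * n)

def werner(d, n, s):
    """Left-hand side: (1-s) I/d^n + s |Psi><Psi|."""
    psi = sum(ket_jtilde(j, d, n) for j in range(d)) / np.sqrt(d)
    return (1 - s) * np.eye(d**n) / d**n + s * np.outer(psi, psi)

def rhs_of_eq5(d, n):
    """Right-hand side of (5) at the threshold value of s."""
    s = 1.0 / (1 + d**(n - 1))
    diag = sum(np.outer(ket_jtilde(j, d, n), ket_jtilde(j, d, n))
               for j in range(d)) / d
    cross = sum(np.outer(ket_jtilde(j, d, n), ket_jtilde(k, d, n))
                for j in range(d) for k in range(d) if j != k)
    return s * diag + (1 - s) * (np.eye(d**n) + cross) / d**n

d, n = 3, 2
s_star = 1.0 / (1 + d**(n - 1))
print(np.allclose(werner(d, n, s_star), rhs_of_eq5(d, n)))  # True
```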
Since the first term in (5) is a sum of separable projections, the proof reduces to showing the separability of the second term. It is convenient in what follows to introduce a set of fixed phases and to show that
$$\rho ^{(n)}=\frac{1}{d^n}\left(I^{(n)}+\underset{jk}{}\zeta _j|\stackrel{~}{j}\stackrel{~}{k}|\zeta _k^{}\right)$$
(6)
where the phases satisfy $`|\zeta _r|=1`$, $`r=0,\mathrm{},d1`$, is separable.
We proceed by induction. When $`n=1`$, (6) becomes
$$\rho ^{(1)}=\frac{1}{d}\left(\underset{j=0}{\overset{d1}{}}\underset{k=0}{\overset{d1}{}}\zeta _j|jk|\zeta _k^{}\right)$$
(7)
which is obviously a projection for any choice of the parameters $`\zeta _r`$. Now assume that the density matrix of the form in (6) is fully separable for $`n`$; then we shall show that it is fully separable for $`n+1`$. Following the ideas in , for a fixed choice of parameters $`\zeta _r`$ define
$$\mathbf{w}^{(m)}=(\zeta _0z_0^{(m)},\mathrm{},\zeta _{d1}z_{d1}^{(m)})$$
(8)
where $`z_j^{(m)}\{\pm 1,\pm i\}`$ and $`\zeta _r`$ is independent of $`m`$. We have a total of $`4^d`$ different vectors. It is easy to check that if we sum over all $`m`$,
$`{\displaystyle \underset{m}{}}\mathbf{w}_r^{(m)}=\zeta _r{\displaystyle \underset{m}{}}z_r^{(m)}=0,{\displaystyle \underset{m}{}}\left(\mathbf{w}_r^{(m)}\right)^2=\zeta _r^2{\displaystyle \underset{m}{}}\left(z_r^{(m)}\right)^2=0\text{, and }{\displaystyle \underset{m}{}}\left|\mathbf{w}_r^{(m)}\right|^2=4^d.`$
For each $`\mathbf{w}^{(m)}`$ define the product state $`\rho (m)=\rho ^{\left(n\right)}\left(\mathbf{w}^{(m)}\right)\rho ^{\left(1\right)}\left(\mathbf{z}^{(m)}\right)`$ where $`\mathbf{z}^{(m)}`$ is equal to $`\mathbf{w}^{(m)}`$ with all the $`\zeta _r`$'s equal to $`1`$ and
$`\rho ^{\left(n\right)}\left(\mathbf{w}^{(m)}\right)`$ $`=`$ $`{\displaystyle \frac{1}{d^n}}\left(I^{(n)}+{\displaystyle \underset{jk}{}}\zeta _jz_j^{(m)}|\stackrel{~}{j}\stackrel{~}{k}|\zeta _k^{}z_k^{(m)}\right)`$
$`\rho ^{\left(1\right)}\left(\mathbf{z}^{(m)}\right)`$ $`=`$ $`{\displaystyle \frac{1}{d}}\left(I^{\left(1\right)}+{\displaystyle \underset{rs}{}}z_r^{(m)}|rs|z_s^{(m)}\right).`$
The state $`\rho (m)`$ is separable by the induction hypothesis, and it follows that the convex combination $`\rho ^{(n+1)}=\frac{1}{4^d}_m\rho (m)`$ is also separable. Now we multiply out the terms:
$`\rho ^{(n+1)}`$ $`=`$ $`{\displaystyle \frac{1}{d^{n+1}}}\left(I^{(n+1)}+I^{(n)}A^{(1)}+A^{(n)}I^{(1)}+B\right)`$
$`A^{(1)}`$ $`=`$ $`{\displaystyle \underset{rs}{}}|rs|{\displaystyle \frac{1}{4^d}}{\displaystyle \underset{m}{}}z_r^{(m)}z_s^{(m)}`$
$`A^{(n)}`$ $`=`$ $`{\displaystyle \underset{jk}{}}\zeta _j\zeta _k^{}|\stackrel{~}{j}\stackrel{~}{k}|{\displaystyle \frac{1}{4^d}}{\displaystyle \underset{m}{}}z_j^{(m)}z_k^{(m)}`$
$`B`$ $`=`$ $`{\displaystyle \underset{jk}{}}{\displaystyle \underset{rs}{}}\zeta _j\zeta _k^{}|\stackrel{~}{j}\stackrel{~}{k}||rs|{\displaystyle \frac{1}{4^d}}{\displaystyle \underset{m}{}}z_j^{(m)}z_k^{(m)}z_r^{(m)}z_s^{(m)}.`$
Since the components of $`\mathbf{w}^{(m)}`$ are chosen independently of one another, $`_mz_r^{(m)}z_s^{(m)}=0`$ for $`rs`$; consequently, $`A^{(n)}`$ and $`A^{(1)}`$ vanish. As noted in the choice of the $`\mathbf{w}^{(m)}`$ also simplifies the remaining term, since for $`rs`$ and $`jk`$
$`{\displaystyle \frac{1}{4^d}}{\displaystyle \underset{m}{}}z_j^{(m)}z_k^{(m)}z_r^{(m)}z_s^{(m)}=\delta (j,r)\delta (k,s),`$
where $`\delta (r,s)`$ is the Kronecker delta. Then
$`\rho ^{\left(n+1\right)}={\displaystyle \frac{1}{d^{n+1}}}\left(I^{\left(n+1\right)}+{\displaystyle \underset{jk}{}}\zeta _j|\stackrel{~}{j}j\stackrel{~}{k}k|\zeta _k^{}\right),`$
which is of the same form as (6) with $`nn+1,`$ completing the induction step and the proof of the theorem. . |
no-problem/0001/cond-mat0001060.html | ar5iv | text | # Lyapunov Exponent for Pure and Random Fibonacci Chains
## 1 Introduction
The experimental discovery of quasicrystals,<sup>1</sup> and also the building of artificial multilayer structures by molecular beam epitaxy,<sup>2</sup> have considerably stimulated the theoretical study of quasiperiodic systems.<sup>3-5</sup> Quasicrystals have a deterministic aperiodicity that characterizes them as intermediate structures between periodic crystals and disordered materials, and they are therefore expected to display new behavior. There has been, in particular, great discussion of the nature of the energy spectrum and eigenstates of electron and phonon excitations on quasicrystals. It is questioned whether the spectrum is absolutely continuous, pointlike or singular continuous, or, correspondingly, whether the states are extended, localized or critical.
The Fibonacci chain is the simplest quasicrystal, a one-dimensional system where the site or bond variables take one of the two values $`A`$ and $`B`$, and are arranged in a Fibonacci sequence. The Fibonacci chain can be constructed recursively by successive applications of a substitution rule, $`AAB`$ and $`BA`$, or alternatively, by successive applications of a concatenation rule, $`S_i=S_{i1}S_{i2}`$, $`S_i`$ being the Fibonacci sequence at iteration $`i`$. The quasiperiodicity of the Fibonacci chain is characterized by the golden mean $`\tau =\left(1+\sqrt{5}\right)/2`$, which gives the ratio of the number of $`A`$ and $`B`$ units. Tight-binding electron and phonon excitations have been studied on a Fibonacci chain, using mainly transfer-matrix<sup>6-12</sup> and real space renormalization group techniques.<sup>13-15</sup> It has been found that the energy spectrum for these excitations is a Cantor set with zero Lebesgue measure, this result having in addition been proven<sup>16</sup> for the case of electronic excitations on a site Fibonacci chain. The spectra of the periodic approximants to the Fibonacci chain exhibit self-similarity in the band structure, with a scaling index that for the electronic excitations is independent of the energy, while for the phonon excitations it is a function of the energy.<sup>9</sup> The integrated density of states for the various excitations presents rich scaling behavior, with indices varying from the edge to the center of the bands.<sup>10-12,14</sup> The characterization of the eigenstates on a Fibonacci chain is a more difficult task, and it has usually been restricted to a few special energies on the spectrum, for which the states are found to be self-similar or chaotic. More generally, evidence has been found that the states are neither extended nor localized in the usual sense.<sup>10-12</sup>
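A minimal sketch of the two constructions, confirming that they generate the same chains and that the ratio of A to B units approaches the golden mean:

```python
def substitution_chain(iterations):
    """Apply A -> AB, B -> A repeatedly, starting from B."""
    s = "B"
    for _ in range(iterations):
        s = "".join("AB" if c == "A" else "A" for c in s)
    return s

def concatenation_chain(iterations):
    """S_i = S_{i-1} S_{i-2} with S_0 = B, S_1 = A."""
    prev, cur = "B", "A"
    for _ in range(iterations - 1):
        prev, cur = cur, cur + prev
    return cur if iterations >= 1 else prev

# The two rules build the same Fibonacci chain at every iteration
for i in range(1, 10):
    assert substitution_chain(i) == concatenation_chain(i)

chain = substitution_chain(12)
ratio = chain.count("A") / chain.count("B")
print(len(chain), ratio)   # length 233 (a Fibonacci number); ratio ~ 1.618
```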
The localization properties of the states can be studied through the calculation of the Lyapunov exponent $`\gamma `$, which characterizes the evolution of a wavefunction along the chain.<sup>17-19</sup> The Lyapunov exponent is zero for an extended or critical state, but is positive for a localized state, then representing the inverse of the localization length. Delyon and Petritis<sup>20</sup> have proved that the Lyapunov exponent for a class of binary quasiperiodic tight-binding chains vanishes on the spectrum, which rules out the presence of localized states. The Fibonacci sequence does not, however, belong to this class of chains, and the characterization of the states in that case remains under discussion. A study of the localization lengths of tight-binding electrons on a pure Fibonacci chain has been presented by Capaz et al.,<sup>21</sup> who found no evidence for localization of the states.
Real systems have in general some disorder. Random quasicrystals, in the sense of a random tiling, have been considered<sup>22</sup> to explain the properties of quasicrystalline alloys. It is well known that disorder has pronounced effects on the transport properties of periodic systems, especially in one dimension, where all the states become localized whatever the amount of disorder. A striking property of quasicrystals is that they exhibit extremely high resistivities, which decrease with the amount of defects.<sup>23</sup> The effects of some types of disorder on the electronic spectra and wavefunctions of Fibonacci chains have been considered.<sup>24-26</sup>
In this work we study the Lyapunov exponent for electron and phonon excitations in pure and random Fibonacci quasicrystal chains. We consider electrons in a tight-binding model, with "diagonal"-site and "off-diagonal"-bond Fibonacci ordering, and phonons on a lattice with bond Fibonacci ordering. The disorder introduced is random tiling imposed on the substitution or concatenation rules for construction of the Fibonacci chains. We use a real space renormalization group method, which allows the calculation of a wavefunction along the chain, for any given energy, and therefore enables the determination of its Lyapunov exponent. This method provides a simple and very efficient way of numerically calculating the Lyapunov exponent as a function of the energy, for large Fibonacci chains. The method has great similarities with, but also important differences from, that used by Capaz et al.,<sup>21</sup> as will be discussed. The method is based on decimation, which is here applied either to substitution<sup>14</sup> or concatenation,<sup>25</sup> and implemented in the presence of disorder.
In order to calculate an eigenstate, one needs to specify an energy on the spectrum. Since the spectrum of a Fibonacci chain is a Cantor set with zero Lebesgue measure, the probability of numerically specifying an energy on the spectrum is essentially zero. Hence any chosen energy will almost certainly correspond to a gap, and the calculated Lyapunov exponents are then associated to gap states. It is shown that the Lyapunov exponent for the gap states of the various excitations has a fractal structure, and we study its scaling properties. From these properties we obtain information on the Lyapunov exponent for the states on the spectrum of the Fibonacci chain, and therefore on the localization properties of the excitations. We study the Lyapunov exponent for both tight-binding electrons and phonons, remarking that the Goldstone symmetry, present in the latter and absent in the former, may lead to important differences in the scaling properties of the two systems.
The outline of the paper is as follows. In Sec. II we describe the tight-binding electron and phonon systems that are studied, and present the renormalization method used to calculate the Lyapunov exponent. In Sec. III we present the Lyapunov exponent for the various excitations on a pure Fibonacci chain, study its scaling properties and discuss localization, and finally consider the effects of disorder on the Lyapunov exponent. In Sec. IV we present our conclusions.
## 2 Renormalization approach
The dynamics of tight-binding electron and phonon excitations on a Fibonacci quasicrystal chain, can be described by the generic equation
$$(\epsilon _nE)\mathrm{\Psi }_n=V_{n1}\mathrm{\Psi }_{n1}+V_n\mathrm{\Psi }_{n+1}.$$
(1)
For the electrons, $`\mathrm{\Psi }_n`$ denotes the amplitude of the wavefunction at site $`n`$, corresponding to energy $`E`$, $`\epsilon _n`$ is a site energy, and $`V_n`$ is the hopping amplitude between site $`n`$ and $`n+1`$. For phonons, $`\mathrm{\Psi }_n`$ represents the displacement from the equilibrium position of the atom at site $`n`$, $`E=m\omega ^2`$, $`\omega `$ being the phonon frequency and $`m`$ the atom mass, $`\epsilon _n=V_{n1}+V_n`$, and $`V_n`$ is the spring constant connecting sites $`n`$ and $`n+1`$. This latter model describes equally well spin waves on a Heisenberg ferromagnet at zero temperature, replacing the spring constants by exchange constants, and $`m\omega ^2`$ by the spin wave frequency $`\omega `$. We note the Goldstone symmetry present in the phonon system, which imposes a correlation between the site $`\epsilon _n`$ and the coupling $`V_n`$ parameters, which is not present in the electron system.
The various Fibonacci quasicrystal models are defined as follows. For electrons, the "diagonal" model is obtained from $`\left(1\right)`$ by setting $`V_n=1`$ and $`\epsilon _n=\epsilon _A`$ or $`\epsilon _B`$, according to the Fibonacci sequence, and the "off-diagonal" model is obtained from $`\left(1\right)`$ by setting $`\epsilon _n=0`$ and $`V_n=V_A`$ or $`V_B`$, according to the Fibonacci sequence. The model for phonons is obtained from $`\left(1\right)`$ with the couplings $`V_n=V_A`$ or $`V_B`$, arranged in the Fibonacci sequence.
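The Goldstone symmetry of the phonon model can be made concrete: since eps_n = V_{n-1} + V_n, a uniform displacement solves the equation of motion at E = m w^2 = 0. A minimal sketch, assuming free chain ends and illustrative coupling values V_A = 1, V_B = 2:

```python
import numpy as np

def fibonacci_bonds(iterations, V_A=1.0, V_B=2.0):
    """Spring constants arranged in a Fibonacci sequence of A and B bonds."""
    s = "B"
    for _ in range(iterations):
        s = "".join("AB" if c == "A" else "A" for c in s)
    return np.array([V_A if c == "A" else V_B for c in s])

def phonon_matrix(V):
    """Dynamical matrix of the free chain: eps_n = V_{n-1} + V_n on the
    diagonal, -V_n on the off-diagonals (end sites have one neighbour)."""
    N = len(V) + 1
    D = np.zeros((N, N))
    for n, v in enumerate(V):
        D[n, n] += v
        D[n + 1, n + 1] += v
        D[n, n + 1] = D[n + 1, n] = -v
    return D

D = phonon_matrix(fibonacci_bonds(10))
# Goldstone symmetry: a uniform displacement costs no energy (E = 0)
print(np.abs(D @ np.ones(D.shape[0])).max())   # 0 up to rounding
```

The same check fails for the electron models, where the site energies are not tied to the couplings.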
The disordered Fibonacci chains are built by introducing random tiling in the substitution rule for construction,
$`B`$ $``$ $`A,`$
$`A`$ $``$ $`AB,\mathrm{probability}p,`$ (2)
$`A`$ $``$ $`BA,\mathrm{probability}1p,`$
in each iteration $`i`$, starting with $`B`$, the two possibilities corresponding respectively to direct and inverse substitution, or they are built by introducing random tiling in the concatenation rule for construction,
$`S_i`$ $`=`$ $`S_{i1}S_{i2},\mathrm{probability}p,`$ (3)
$`S_i`$ $`=`$ $`S_{i2}S_{i1},\mathrm{probability}1p,`$
starting with $`S_0=B`$ and $`S_1=A`$, the two possibilities corresponding respectively to direct and inverse concatenation. Random tiling on substitution or concatenation generates, at each iteration, an identical set of disordered Fibonacci chains, though throug a different sequence of preceeding chains (e.g. $`AABABAABABA`$, by substitution, vs, $`ABAAABABABA`$, by concatenation).
The method that we use to calculate the Lyapunov exponent is based on the fact that the wavefunction $`\mathrm{\Psi }_n`$ at the Fibonacci sites $`n=n(i)=F_{i+1}`$, given by $`F_{i+1}=F_i+F_{i1}`$ with $`F_1=F_0=1`$, can be easily obtained via a real-space renormalization group transformation, which consists of eliminating appropriate sites on the chain, so that a chain similar to the original one is obtained, with a rescaled length and renormalized parameters. Under successive decimations one carries the system through larger length scales separating the sites. For the Fibonacci chain it is possible to deduce an exact renormalization transformation for the parameters $`\epsilon _n`$ and $`V_n`$, the rescaling factor under which the system is self-similar being equal to $`\tau `$. After $`i`$ iterations, the renormalization transformation takes, for example, $`V_A`$ to $`V_A^{(i)}`$, which represents the renormalized interaction between two sites that are a distance $`\tau ^i`$ apart, measured in units of the original lattice spacing. The Fibonacci sites $`n(i)`$ become the first neighbours of the end site $`n=0`$ at each iteration $`i`$. Now, writing $`\left(1\right)`$ as a recursion relation for the wavefunction, and fixing the "free-end" boundary condition $`V_1=0`$, one gets $`\mathrm{\Psi }_1=\left[\left(\epsilon _0E\right)/V_0\right]\mathrm{\Psi }_0`$. The wavefunction $`\mathrm{\Psi }_n`$ at the consecutive Fibonacci sites $`n(i)`$ can therefore be obtained in terms of the parameters under successive renormalization iterations $`i`$, through
$$\mathrm{\Psi }_{n(i)}=\left[\left(\epsilon _0^{(i)}E\right)/V_0^{(i)}\right]\mathrm{\Psi }_0,$$
(4)
fixing $`\mathrm{\Psi }_0`$ (e.g. $`\mathrm{\Psi }_0=1`$). The Lyapunov exponent $`\gamma `$ is then calculated from the wavefunction $`\left(4\right)`$, given that
$$\left|\mathrm{\Psi }_n\right|e^{\gamma x_n},(n\mathrm{}),$$
(5)
and $`x_n=\tau ^i`$ for $`n=n(i)`$. In the work of Capaz et al.<sup>21</sup> the localization of the wavefunction $`\mathrm{\Psi }`$ is studied by following the behavior of the coupling $`V`$ under successive renormalizations, and not through the evolution of the wavefunction $`\left(4\right)`$, which also involves the parameter $`\epsilon `$. Although the behavior of $`\mathrm{\Psi }`$ is mainly determined by $`V`$, the complete expression should be used. Furthermore, in that work a small imaginary part $`\eta `$ is added to the energy $`E`$, which produces an artificial decay of the coupling $`V`$ that alters the actual evolution of the wavefunction, and consequently interferes with the study of localization and the evaluation of the Lyapunov exponent.
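As a brute-force alternative to the renormalization scheme, Eq. (1) can also be iterated directly, Psi_{n+1} = [(eps_n - E) Psi_n - V_{n-1} Psi_{n-1}] / V_n with the free-end condition, and gamma read off from (5). The sketch below is illustrative: the rescaling device that prevents overflow and the couplings and probe energies are choices made here, with the uniform chain (band [-2, 2]) serving as a sanity check.

```python
import numpy as np

def lyapunov(eps, V, E):
    """Iterate (eps_n - E) psi_n = V_{n-1} psi_{n-1} + V_n psi_{n+1} with a
    free end (psi_1 = (eps_0 - E) psi_0 / V_0) and return the growth rate."""
    psi_prev, psi = 0.0, 1.0
    log_norm = 0.0                               # accumulated rescaling
    for n in range(len(V) - 1):
        left = V[n - 1] * psi_prev if n else 0.0
        psi_prev, psi = psi, ((eps[n] - E) * psi - left) / V[n]
        scale = max(abs(psi), abs(psi_prev), 1e-300)
        psi, psi_prev = psi / scale, psi_prev / scale
        log_norm += np.log(scale)
    return log_norm / (len(V) - 1)

N = 20000
# Sanity checks on the uniform chain (eps = 0, V = 1, band [-2, 2]):
assert lyapunov(np.zeros(N), np.ones(N), E=3.0) > 0.9        # outside: gamma > 0
assert abs(lyapunov(np.zeros(N), np.ones(N), E=0.5)) < 1e-2  # inside: gamma ~ 0

# "Off-diagonal" Fibonacci chain at an arbitrary energy (almost surely a gap)
s = "B"
while len(s) < N:
    s = "".join("AB" if c == "A" else "A" for c in s)
V_fib = np.array([1.0 if c == "A" else 2.0 for c in s[:N]])
print(lyapunov(np.zeros(N), V_fib, E=0.05))
```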
Now we present the decimation techniques used to obtain the renormalization transformation of the parameters $`\epsilon _0`$ and $`V_0`$, for chains constructed by substitution or concatenation.
A. Substitution chains
The renormalization transformation of the parameters is obtained by eliminating sites in such a way as to reverse the substitution procedure in $`\left(2\right)`$.<sup>14</sup> In order to build the transformation one needs to consider an expanded parameter space, for the various excitations, where the bonds $`V_n`$ assume two different values, $`V_A`$ and $`V_B`$, arranged in a Fibonacci sequence, and the site energies $`\epsilon _n`$ may assume three different values, depending on the local environment of $`n`$: $`\epsilon _\alpha `$ if $`V_{n1}=V_n=V_A`$, $`\epsilon _\beta `$ if $`V_{n1}=V_A`$ and $`V_n=V_B`$, $`\epsilon _\gamma `$ if $`V_{n1}=V_B`$ and $`V_n=V_A`$. A choice of the initial parameters $`V_A`$, $`V_B`$, $`\epsilon _\alpha `$, $`\epsilon _\beta `$, $`\epsilon _\gamma `$ casts the problem into the model for electron excitations, "diagonal" ($`V_A=V_B`$, $`\epsilon _\alpha =\epsilon _\gamma \epsilon _\beta `$) or "off-diagonal" ($`V_AV_B`$, $`\epsilon _\alpha =\epsilon _\beta =\epsilon _\gamma `$), or phonon excitations ($`V_AV_B`$, $`\epsilon _\alpha =2V_A`$, $`\epsilon _\beta =\epsilon _\gamma =V_A+V_B`$). The reversal of rule $`\left(2\right)`$ is achieved through the elimination of $`\beta `$-sites, corresponding to direct substitution, or $`\gamma `$-sites, corresponding to inverse substitution. The resulting renormalization equations are:
i) direct substitution,
$`\epsilon _\alpha ^{(i+1)}`$ $`=`$ $`\epsilon _\gamma ^{(i)}\left(V_A^{(i)2}+V_B^{(i)2}\right)/\left(\epsilon _\beta ^{(i)}E\right),`$
$`\epsilon _\beta ^{(i+1)}`$ $`=`$ $`\epsilon _\gamma ^{(i)}V_B^{(i)2}/\left(\epsilon _\beta ^{(i)}E\right),`$ (6)
$`\epsilon _\gamma ^{(i+1)}`$ $`=`$ $`\epsilon _\alpha ^{(i)}V_A^{(i)2}/\left(\epsilon _\beta ^{(i)}E\right),`$
$`V_A^{(i+1)}`$ $`=`$ $`V_A^{(i)}V_B^{(i)}/\left(\epsilon _\beta ^{(i)}E\right),V_B^{(i+1)}=V_A^{(i)},`$
and for the end site $`n=0`$,
$$\epsilon _0^{(i+1)}=\epsilon _0^{(i)}V_A^{(i)2}/\left(\epsilon _\beta ^{(i)}E\right),V_0^{(i+1)}=V_A^{(i+1)},$$
(7)
ii) inverse substitution,
$`\epsilon _\alpha ^{(i+1)}`$ $`=`$ $`\epsilon _\beta ^{(i)}\left(V_A^{(i)2}+V_B^{(i)2}\right)/\left(\epsilon _\gamma ^{(i)}E\right),`$
$`\epsilon _\beta ^{(i+1)}`$ $`=`$ $`\epsilon _\alpha ^{(i)}V_A^{(i)2}/\left(\epsilon _\gamma ^{(i)}E\right),`$ (8)
$`\epsilon _\gamma ^{(i+1)}`$ $`=`$ $`\epsilon _\beta ^{(i)}V_B^{(i)2}/\left(\epsilon _\gamma ^{(i)}E\right),`$
$`V_A^{(i+1)}`$ $`=`$ $`V_A^{(i)}V_B^{(i)}/\left(\epsilon _\gamma ^{(i)}E\right),V_B^{(i+1)}=V_A^{(i)},`$
and for the end site $`n=0`$,
$`\epsilon _0^{(i+1)}`$ $`=`$ $`\epsilon _0^{(i)}V_B^{(i)2}/\left(\epsilon _\gamma ^{(i)}E\right),V_0^{(i+1)}=V_A^{(i+1)},\mathrm{if}V_0^{(i)}=V_B^{(i)},`$
$`\epsilon _0^{(i+1)}`$ $`=`$ $`\epsilon _0^{(i)},V_0^{(i+1)}=V_B^{(i+1)},\mathrm{if}V_0^{(i)}=V_A^{(i)}.`$ (9)
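The elementary step underlying these transformations is ordinary one-site decimation of Eq. (1): eliminating psi_m shifts the neighboring site energies by a term V^2/(eps_m - E) and generates the effective coupling V_{m-1} V_m / (eps_m - E), which is exactly the structure of the equations above. A generic numerical check (illustrative, with random parameters, not tied to the Fibonacci bookkeeping): a solution of the full chain equation, restricted to the remaining sites, satisfies the decimated equation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, E, m = 12, 2.5, 5                      # sites 0..N-1; decimate site m
eps = rng.uniform(-1.0, 1.0, N)
V = rng.uniform(0.5, 1.5, N)              # V[n] couples sites n and n+1

# Generate a solution of (eps_n - E) psi_n = V_{n-1} psi_{n-1} + V_n psi_{n+1}
psi = np.empty(N)
psi[0], psi[1] = 1.0, (eps[0] - E) / V[0]
for n in range(1, N - 1):
    psi[n + 1] = ((eps[n] - E) * psi[n] - V[n - 1] * psi[n - 1]) / V[n]

# Decimate site m: renormalized parameters for its two neighbours
g = 1.0 / (eps[m] - E)
eps_l = eps[m - 1] - V[m - 1] ** 2 * g    # shifted site energies
eps_r = eps[m + 1] - V[m] ** 2 * g
V_eff = V[m - 1] * V[m] * g               # effective coupling (m-1) -- (m+1)

# The same psi must satisfy the decimated equations at sites m-1 and m+1
res_l = (eps_l - E) * psi[m - 1] - V[m - 2] * psi[m - 2] - V_eff * psi[m + 1]
res_r = (eps_r - E) * psi[m + 1] - V_eff * psi[m - 1] - V[m + 1] * psi[m + 2]
print(abs(res_l), abs(res_r))             # both vanish up to rounding
```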
B. Concatenation chains
The renormalization transformation of the parameters is obtained by eliminating the central site, after having performed concatenation, so as to reverse the concatenation procedure $`(3)`$.<sup>25</sup> This leads to the following renormalization equations, which are different for bond Fibonacci ordering, i.e. "off-diagonal" electrons and phonons, or site Fibonacci ordering, i.e. "diagonal" electrons.
For the bond problem:
i) direct concatenation,
$`\epsilon _0^{(i+1)}`$ $`=`$ $`\epsilon _0^{(i)}V_0^{(i)2}/\left(\epsilon _{cd}^{(i+1)}E\right),`$
$`\epsilon _{F_{i+1}}^{(i+1)}`$ $`=`$ $`\epsilon _{F_{i1}}^{(i1)}V_0^{(i1)2}/\left(\epsilon _{cd}^{(i+1)}E\right),`$ (10)
$`V_0^{(i+1)}`$ $`=`$ $`V_0^{(i)}V_0^{(i1)}/\left(\epsilon _{cd}^{(i+1)}E\right),`$
with $`\epsilon _{cd}^{(i+1)}=\epsilon _{F_i}^{(i)}+\epsilon _0^{(i1)}`$,
ii) inverse concatenation,
$`\epsilon _0^{(i+1)}`$ $`=`$ $`\epsilon _0^{(i1)}V_0^{(i1)2}/\left(\epsilon _{ci}^{(i+1)}E\right),`$
$`\epsilon _{F_{i+1}}^{(i+1)}`$ $`=`$ $`\epsilon _{F_i}^{(i)}V_0^{(i)2}/\left(\epsilon _{ci}^{(i+1)}E\right),`$ (11)
$`V_0^{(i+1)}`$ $`=`$ $`V_0^{(i)}V_0^{(i1)}/\left(\epsilon _{ci}^{(i+1)}E\right),`$
with $`\epsilon _{ci}^{(i+1)}=\epsilon _{F_{i1}}^{(i1)}+\epsilon _0^{(i)}`$, and the initial values $`V_0^{(0)}=V_B`$, $`V_0^{(1)}=V_A`$, $`\epsilon _0^{(0)}=\epsilon _1^{(0)}=\epsilon _0^{(1)}=\epsilon _1^{(1)}`$ for "off-diagonal" electrons, and $`\epsilon _0^{(0)}=\epsilon _1^{(0)}=V_0^{(0)}=V_B`$, $`\epsilon _0^{(1)}=\epsilon _1^{(1)}=V^{(1)}=V_A`$ for phonons.
For the site problem:
i) direct concatenation,
$`\epsilon _0^{(i+1)}`$ $`=`$ $`\epsilon _0^{(i)}V_0^{(i)2}/\left[\left(\epsilon _{F_i}^{(i)}E\right)T^2/\left(\epsilon _0^{(i1)}E\right)\right],`$
$`\epsilon _{F_{i+1}}^{(i+1)}`$ $`=`$ $`\epsilon _{F_{i1}}^{(i1)}V_0^{(i1)2}/\left[\left(\epsilon _0^{(i1)}E\right)T^2/\left(\epsilon _{F_i}^{(i)}E\right)\right],`$ (12)
$`V^{(i+1)}`$ $`=`$ $`TV_0^{(i)}V_0^{(i1)}/\left[\left(\epsilon _0^{(i1)}E\right)\left(\epsilon _{F_i}^{(i)}E\right)T^2\right],`$
ii) inverse concatenation,
$`\epsilon _0^{(i+1)}`$ $`=`$ $`\epsilon _0^{(i1)}V_0^{(i1)2}/\left[\left(\epsilon _{F_{i1}}^{(i1)}E\right)T^2/\left(\epsilon _0^{(i)}E\right)\right],`$
$`\epsilon _{F_{i+1}}^{(i+1)}`$ $`=`$ $`\epsilon _{F_i}^{(i)}V_0^{(i)2}/\left[\left(\epsilon _0^{(i)}E\right)T^2/\left(\epsilon _{F_{i1}}^{(i1)}E\right)\right],`$ (13)
$`V^{(i+1)}`$ $`=`$ $`TV_0^{(i)}V_0^{(i1)}/\left[\left(\epsilon _0^{(i)}E\right)\left(\epsilon _{F_{i1}}^{(i1)}E\right)T^2\right],`$
with the initial values, $`V_0^{(2)}=T`$, $`\epsilon _0^{(3)}=\epsilon _2^{(3)}=\epsilon _AT^2/\left(\epsilon _BE\right)`$, $`V_0^{(3)}=T^2/\left(\epsilon _BE\right)`$, and in i) $`\epsilon _0^{(2)}=\epsilon _A`$, $`\epsilon _1^{(2)}=\epsilon _B`$, while in ii) $`\epsilon _0^{(2)}=\epsilon _B`$, $`\epsilon _1^{(2)}=\epsilon _A`$.
Considering the general case of a random Fibonacci chain, for a given probability of disorder $`p`$, we start with a specific disordered configuration, generated by $`\left(2\right)`$ or $`(3)`$, respectively for substitution or concatenation chains, and then iterate $`\left(6\right)`$-$`(9)`$, $`(10)`$-$`(11)`$ or $`(12)`$-$`(13)`$, depending on the system studied, according to that configuration, in order to obtain the successive values for $`V_0^{(i)}`$ and $`\epsilon _0^{(i)}`$. This allows us to calculate the wavefunction $`\mathrm{\Psi }_n`$, provided by $`(4)`$, at the successive Fibonacci sites, for a given energy $`E`$. For each probability $`p`$, we average the obtained wavefunction for $`E`$ over many different disorder configurations. It is important to remark that when dealing with random chains, one should first calculate the wavefunction for a specific disordered configuration and then average over configurations, instead of averaging the parameters over disorder at each step of the renormalization and then calculating the wavefunction with the averaged parameters. This latter procedure<sup>25</sup> will wash out important correlations in the system, and leads to different results depending on how the average is performed. The first procedure describes the physics more accurately.
## 3 Lyapunov Exponent for Fibonacci Chains
We now present the results concerning the Lyapunov exponent, calculated as a function of the energy, for the tight-binding electron, "diagonal" and "off-diagonal", and phonon excitations on the pure and random Fibonacci chains. We consider first the case of pure chains, for which we study the scaling properties of the Lyapunov exponent and their implications for the localization of states on the spectrum, and afterwards analyze the effects of disorder, of the kind of random tiling, on the Lyapunov exponent.
As mentioned above, the wavefunctions that we numerically calculate correspond to gap states. Figure 1 shows the typical behavior of a wavefunction $`\mathrm{\Psi }_n`$, at any chosen energy $`E`$, either for the electron or the phonon excitations on a pure Fibonacci chain. One observes that the wavefunction first oscillates over a certain length, and then grows exponentially. This behavior has mixed characteristics of an extended (oscillating) state and a localized (exponential) state. The length over which a wavefunction oscillates is a "memory" length,<sup>10</sup> in the sense that beyond this length it loses memory of its initial phase. The exponential growth of the wavefunction is characterized by the Lyapunov exponent, which measures the inverse of a "localization length". We find that the "memory" length $`\xi `$ and the Lyapunov exponent $`\gamma `$ are simply related, $`\xi 1/\gamma `$. In figure 2 we present the Lyapunov exponent for the electron, "diagonal" and "off-diagonal", and phonon excitations on the pure Fibonacci chain, calculated as a function of the energy. The exponent exhibits a rather nontrivial dependence on the energy, which has a clear correspondence with the associated density of states obtained by Ashraff and Stinchcombe<sup>14,15</sup> for the various cases, the finite values of the Lyapunov exponent corresponding to gap states; the further a state lies inside a gap, the larger is its Lyapunov exponent. The Lyapunov exponent exhibits a fractal structure, i.e. under dilation the same structure is revealed on a smaller scale, as can be seen by comparing the Lyapunov plots in figure 2 with those in figure 3. This structure is observed even in the very low energy range of the magnetic excitations, where $`\gamma `$ takes particularly small values, most probably due to the Goldstone symmetry.
The scaling behavior of the Lyapunov exponent is studied through the variation of the maximum exponent in a gap, $`\gamma _{\mathrm{max}}`$, versus the gap width, $`\mathrm{\Delta }E_g`$.<sup>21</sup> We find that
$$\gamma _{\mathrm{max}}(\mathrm{\Delta }E_g)^\delta ,$$
(14)
where the scaling index $`\delta `$ is independent of the energy for the electron excitations, "diagonal" and "off-diagonal", as shown in figure 4, but depends on the energy for the phonon excitations, as figure 5 reveals, and is shown in figure 6. We also find that the scaling index for the electron excitations depends on the quasicrystal site ($`\epsilon _A`$, $`\epsilon _B`$) or bond ($`V_A`$,$`V_B`$) parameters, decreasing as the difference between the parameters increases, while the scaling index for the phonon excitations, varying with energy, also depends on the quasicrystal parameters ($`V_A`$,$`V_B`$). Our results for the electron excitations are in agreement with those obtained by Capaz et al.,<sup>21</sup> though their scaling indices differ from ours, probably due to the fact that they have calculated the Lyapunov exponent from the behavior of the coupling $`V`$ alone and not from the evolution of the wavefunction $`\mathrm{\Psi }`$, in $`(4)`$, and moreover have introduced an imaginary part in the energy which influences the Lyapunov exponent, as discussed earlier.
From the scaling expression $`\left(14\right)`$ one obtains, for the various excitations, that $`\gamma _{\mathrm{max}}`$ tends to zero as $`\mathrm{\Delta }E_g`$ tends to zero, implying that the Lyapunov exponent for wavefunctions on the spectrum vanishes. We therefore have that the electron, "diagonal" or "off-diagonal", and phonon excitations on a Fibonacci chain are nonlocalized.
Let us now study the effects of disorder on the Lyapunov exponent. Disorder has drastic effects on the wavefunctions of one-dimensional periodic systems, localizing all the states. Figure 7 illustrates this fact, showing the Lyapunov exponent for phonon excitations on a random periodic chain, with couplings $`V_A`$ and $`V_B`$, as a function of the probability $`p`$ of disorder, for various energies. One sees that the Lyapunov exponent increases with disorder, being also an increasing function of the energy.
For the random Fibonacci chains we considered disorder of the kind of random tiling, introduced in the substitution or concatenation rule for construction of the chains. The resulting disordered chains differ from the pure Fibonacci chain in having a varying number of phason flips, located at certain points on that chain. By a phason flip is meant a local rearrangement of tiles on the quasiperiodic structure, corresponding to a switch of the site, $`\epsilon _A`$ and $`\epsilon _B`$, or the bond, $`V_A`$ and $`V_B`$, parameters.<sup>26</sup> Using the cyclic property of the trace one can see that all those random tiling chains have the same spectrum, for the electron and the phonon excitations, as the pure Fibonacci chain. In the work of López et al.,<sup>25</sup> on the effects of that kind of random tiling on the electronic excitations of a Fibonacci chain, it has, however, been found that the disorder affects the spectrum of the excitations. We think that this result is a consequence of the averaging of the parameters over disorder taken at each step of the renormalization in that work, which, as already mentioned, loses important correlations in the system and introduces effects that depend on the averaging procedure used, corresponding in fact to different systems. On the other hand, Naumis and Aragón,<sup>26</sup> considering electronic excitations, have also noted that phason flips located at certain points on the Fibonacci chain do not alter the spectrum of the excitations.
We calculated the Lyapunov exponent for the electron and phonon excitations on Fibonacci chains with random tiling, as a function of the probability of disorder, for different values of the energy. The results obtained are illustrated in figure 8. We find that the disorder considered does not affect the Lyapunov exponent, either for the electron, “diagonal” or “off-diagonal”, or for the phonon excitations. The same result is obtained for disordered Fibonacci chains with random tiling in either the substitution or the concatenation rule used to construct the chains. The irrelevance of disorder found for the Lyapunov exponent of excitations on a Fibonacci chain is surprising, given the drastic effects that disorder has on the excitations on periodic chains. However, it should be noted that random tiling introduces a kind of bounded disorder, which is moreover correlated, and therefore might not be sufficient to produce localization of states. Furthermore, in contrast to the general case, it has been reported that there exist particular random potentials in one dimension that allow for extended states, described by an iterative construction procedure.<sup>27,28</sup> Liu and Riklund<sup>24</sup> have found that other types of disorder, different from the one considered by us, produce localization of electronic excitations on a Fibonacci chain.
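The transfer-matrix definition of the Lyapunov exponent used throughout can be illustrated in a few lines of code. The following is a minimal sketch (a plain two-term iteration with norm renormalization, not the real-space renormalization-group scheme used in the paper), with the bond values $`V_A=1`$, $`V_B=2`$ taken from the figure captions:

```python
import math

def fibonacci_bonds(n_min):
    """Fibonacci bond sequence via the substitution rule A -> AB, B -> A."""
    s = "A"
    while len(s) < n_min:
        s = "".join("AB" if c == "A" else "A" for c in s)
    return s

def lyapunov_phonon(E, n=20000, VA=1.0, VB=2.0):
    """Lyapunov exponent gamma(E) for phonons on a pure Fibonacci chain.

    With unit masses and E = omega^2, the equation of motion gives the
    two-term recursion V_n u_{n+1} = (V_n + V_{n-1} - E) u_n - V_{n-1} u_{n-1};
    gamma is the mean logarithmic growth rate of a generic solution.
    """
    V = [VA if c == "A" else VB for c in fibonacci_bonds(n + 1)]
    u_prev, u = 1.0, 1.0        # generic initial condition
    log_norm = 0.0
    for i in range(1, n + 1):
        u_next = ((V[i] + V[i - 1] - E) * u - V[i - 1] * u_prev) / V[i]
        u_prev, u = u, u_next
        scale = max(abs(u_prev), abs(u))
        if scale > 0.0:         # renormalize to avoid overflow/underflow
            u_prev /= scale
            u /= scale
            log_norm += math.log(scale)
    return log_norm / n

print(lyapunov_phonon(0.0))    # ≈ 0: E = 0 lies in the phonon spectrum
print(lyapunov_phonon(10.0))   # > 0: E lies in a gap above the band
```

Random tiling can be probed with the same routine by swapping $`V_A`$ and $`V_B`$ at randomly chosen bonds before the iteration.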
## 4 Conclusions
We have studied the Lyapunov exponent for tight-binding electron, “diagonal” and “off-diagonal”, and phonon excitations in pure and random Fibonacci quasicrystal chains, using a real-space renormalization group method. This method allows the calculation of a wavefunction along the chain, and the determination of the associated Lyapunov exponent as a function of the energy, in a very efficient way for very long chains. We have found that the Lyapunov exponent for the pure Fibonacci chain has a self-similar structure, characterized by a scaling index that is independent of the energy for the electronic excitations, but depends on the energy for the phonon excitations. The scaling properties of the Lyapunov exponent imply that it vanishes on the spectrum of the various excitations. It follows that the electronic and phonon excitations are not localized on the Fibonacci chain. Considering random Fibonacci chains, we calculated the Lyapunov exponent as a function of the probability of disorder, and found that the random-tiling disorder introduced does not affect the Lyapunov exponent, which takes the same value as for the pure Fibonacci chain whatever the degree of disorder, both for the electron and for the phonon excitations. The random tiling considered in fact generates chains that are locally isomorphic to the pure Fibonacci chain, and therefore our results imply that locally isomorphic chains, besides having the same energy spectrum,<sup>29</sup> also have the same Lyapunov exponent, and hence their eigenstates have the same nature as those of the pure Fibonacci chain. We are now investigating the effects of random tiling on the Lyapunov exponent of electron and phonon excitations on other aperiodic chains, such as the Thue-Morse, the period-doubling, and binary non-Pisot sequences.
Other types of disorder are also being considered on the Fibonacci chain, as well as on the other aperiodic chains mentioned, in order to understand the relevance/irrelevance of disorder on the Lyapunov exponent, and consequently on the localization properties of those systems. The results of this work will be reported elsewhere.
Acknowledgements
We would like to thank R. B. Stinchcombe and J. M. Luck for very helpful conversations.
References
iveta@alf1.cii.fc.ul.pt
<sup>1</sup>D. Shechtman, I. Blech, D. Gratias and J. W. Cahn, Phys. Rev. Lett. 53, 1951 (1984).
<sup>2</sup>R. Merlin, K. Bajema, R. Clarke, F. Y. Juang, and P. K. Bhattacharya, Phys. Rev. Lett. 55, 1768 (1985).
<sup>3</sup>P. J. Steinhardt and S. Ostlund, The Physics of Quasicrystals (World Scientific, Singapore, 1987).
<sup>4</sup>D. P. DiVincenzo and P. J. Steinhardt, Quasicrystals: The State of the Art (World Scientific, Singapore, 1991).
<sup>5</sup>C. Janot, Quasicrystals - a primer, 2nd. ed. (Clarendon Press, Oxford, 1994).
<sup>6</sup>M. Kohmoto, L. P. Kadanoff and C. Tang, Phys. Rev. Lett. 50, 1870 (1983).
<sup>7</sup>S. Ostlund and R. Pandit, Phys. Rev. B 29, 1394 (1984).
<sup>8</sup>M. Kohmoto and Y. Oono, Phys. Lett. 102A, 145 (1984).
<sup>9</sup>M. Kohmoto and J. Banavar, Phys. Rev. B 34, 563 (1986).
<sup>10</sup>J. M. Luck and D. Petritis, J. Stat. Phys. 42, 289 (1986).
<sup>11</sup>M. Kohmoto, B. Sutherland and C. Tang, Phys. Rev. B 35, 1020 (1987).
<sup>12</sup>B. Sutherland, Phys. Rev. B 35, 9529 (1987).
<sup>13</sup>Q. Niu and F. Nori, Phys. Rev. Lett. 57, 2057 (1986).
<sup>14</sup>J. A. Ashraff and R. B. Stinchcombe, Phys. Rev. B 37, 5723 (1988).
<sup>15</sup>J. A. Ashraff, D.Phil. Thesis, Oxford 1989.
<sup>16</sup>A. Sรผto, J. Stat. Phys. 56, 525 (1989).
<sup>17</sup>K. Ishii, Supp. Progr. Theor. Phys. 53, 77 (1973).
<sup>18</sup>J. M. Luck, Systรจmes Dรฉsordonnรฉs Unidimensionnels (Collection Alรฉa - Saclay, 1992).
<sup>19</sup>A. Crisanti, G. Paladin and A. Vulpiani, Products of Random Matrices, Vol. 104 of Springer Series in Solid-State Sciences (Springer, Berlin, 1993).
<sup>20</sup>F. Delyon and D. Petritis, Commun. Math. Phys. 103, 441 (1986).
<sup>21</sup>R. B. Capaz, B. Koiller and S. L. A. Queiroz, Phys. Rev. B 42, 6402 (1990).
<sup>22</sup>C. L. Henley, in Ref. 4, p. 429.
<sup>23</sup>K. Kimura and S. Takeuchi, in Ref. 4, p. 313.
<sup>24</sup>Y. Liu and R. Riklund, Phys. Rev. B 35, 6034 (1987).
<sup>25</sup>J. C. Lรณpez, G. Naumis and J. L. Aragรณn, Phys. Rev. B 48, 12459 (1993).
<sup>26</sup>G. G. Naumis and J. L. Aragรณn, Phys. Lett. A 244, 133 (1998).
<sup>27</sup>J. S. Denbigh and N. Rivier, J. Phys. C 12, L107 (1979).
<sup>28</sup>A. Crisanti, C. Falesia, A. Pasquarella, and A. Vulpiani, J. Phys.: Condens. Matter 1, 9509 (1989).
<sup>29</sup>F. Wijnands, J. Phys. A 22, 3267 (1989).
Figure captions
FIG. 1. Inverse of the phonon wavefunction $`\mathrm{\Psi }_n`$ at the Fibonacci sites $`n=F_{i+1}`$, for energy $`E=0.4`$, on a pure Fibonacci chain, and the associated Lyapunov exponent $`\gamma `$.
FIG. 2. Lyapunov exponent $`\gamma `$ for: a) electronic โdiagonalโ ($`\epsilon _\alpha =\epsilon _\beta =\epsilon _\gamma =1`$), b) electronic โoff-diagonalโ ($`V_A=1,V_B=2`$), and c) phonon ($`V_A=1,V_B=2`$) excitations on a pure Fibonacci chain.
FIG. 3. Self-similar structure of $`\gamma `$: a) electronic โdiagonalโ ($`\epsilon _\alpha =\epsilon _\beta =\epsilon _\gamma =1`$), b) electronic โoff-diagonalโ ($`V_A=1,V_B=2`$), and c) phonon ($`V_A=1,V_B=2`$) excitations, to compare with FIG. 2.
FIG. 4. Maximum $`\gamma `$ in a gap vs gap width $`\mathrm{\Delta }E_g`$, for: a) electronic โdiagonalโ, ($`\mathrm{}`$) ($`\epsilon _\alpha =\epsilon _\beta =\epsilon _\gamma =1`$, $`\delta =0.62`$ ), ($`\mathrm{}`$) ($`\epsilon _\alpha =\epsilon _\beta =\epsilon _\gamma =2`$, $`\delta =0.47`$); b) electronic โoff-diagonalโ, ($``$) ($`V_A=1`$, $`V_B=2`$, $`\delta =0.75`$), ($`\mathrm{}`$) ($`V_A=3`$, $`V_B=1`$, $`\delta =0.53`$), excitations on a pure Fibonacci chain.
FIG. 5. Maximum $`\gamma `$ in a gap vs gap width $`\mathrm{\Delta }E_g`$, for phonon excitations ($`V_A=1`$, $`V_B=2`$), on a pure Fibonacci chain.
FIG. 6. Power-law exponent $`\delta `$, of $`\gamma _{\mathrm{max}}`$ vs $`\mathrm{\Delta }E_g`$, for phonon excitations: a) $`V_A=1`$, $`V_B=2`$, b) $`V_A=2`$, $`V_B=1`$, on a pure Fibonacci chain.
FIG. 7. Lyapunov exponent $`\gamma `$ vs probability of disorder $`p`$, for phonon excitations, with energies: ($``$) $`E=1.2`$, ($`\mathrm{}`$) $`E=2.3`$, ($`\mathrm{}`$) $`E=3.4`$, on random periodic chains.
FIG. 8. Lyapunov exponent $`\gamma `$ vs probability of disorder $`p`$, for various energies $`E`$, of: a) electronic โdiagonalโ, ($``$) $`E=1.9`$, ($`\mathrm{}`$) $`E=0.15`$, ($`\mathrm{}`$) $`E=1.1`$; b) electronic โoff-diagonalโ, ($``$) $`E=0.5`$, ($`\mathrm{}`$) $`E=1.5`$, ($`\mathrm{}`$) $`E=2.05`$, c) phonon, ($``$) $`E=1.4`$, ($`\mathrm{}`$) $`E=3.1`$, ($`\mathrm{}`$) $`E=5.29`$, excitations on random Fibonacci chains. |
# The number of guards needed by a museum: a phase transition in vertex covering of random graphs
## Abstract
In this letter we study the NP-complete vertex cover problem on finite connectivity random graphs. When the allowed size of the cover set is decreased, a discontinuous transition in solvability and typical-case complexity occurs. This transition is characterized by means of exact numerical simulations as well as by analytical replica calculations. The replica symmetric phase diagram is in excellent agreement with numerical findings up to average connectivity $`e`$, where replica symmetry becomes locally unstable.
Keywords (PACS-codes): General studies of phase transitions (64.60.-i), Classical statistical mechanics (05.20.-y), Combinatorics (02.10.Eb)
Imagine you are the director of an open-air museum situated in a large park with numerous paths. You want to put guards on crossroads to observe every path, but in order to economize costs you have to use as few guards as possible. Let $`N`$ be the number of crossroads, $`X<N`$ the number of guards you are able to pay. Then there are $`\left(\genfrac{}{}{0pt}{}{N}{X}\right)`$ possibilities of placing the guards, but most of these “configurations” will leave some paths unobserved. Deciding whether any perfect solution exists, or finding one, can thus take a time growing exponentially with $`N`$. In fact, this problem is one of the six basic NP-complete problems, namely vertex cover (VC). It is widely believed that no algorithm can be found which solves our problem substantially faster than exhaustive search, for any configuration of the paths.
Similar combinatorial decision problems have been found to show interesting phase transition phenomena. These occur in their solvability and, even more surprisingly, in their typical-case algorithmic complexity, i.e. the dependence of the median solution time on the system size. E.g. in satisfiability (SAT) problems a number of Boolean variables has to simultaneously satisfy many logical clauses. When the number of these (randomly chosen) clauses exceeds a certain threshold, the solvability of the full problem undergoes a sharp transition from almost always satisfiable to almost always unsatisfiable. The instances which are hardest to solve are found in the vicinity of the transition point. Far away from this point the solution time is much smaller, as a formula is either easily fulfilled or hopelessly over-constrained. The typical solution times in the under-constrained phase are even found to depend only polynomially on the system size. Recently, insight coming from a statistical mechanics perspective on these problems has led to a fruitful cooperation with computer scientists, and has shed some light on the nature of this transition. Frequently, the methods of statistical mechanics allow one to obtain more insight than the classical tools of computer science or discrete mathematics.
This is also true for the above-mentioned VC problem. After introducing the VC model and reviewing some previously known rigorous results, we present numerical evidence for the existence of a phase transition in its solvability which is connected to an exponential peak in the typical-case complexity. Due to the much simpler geometrical structure, many features of this transition can be understood much more intuitively than for SAT. In addition, we will see that the replica-symmetric theory correctly describes the phase transition up to an average connectivity $`e`$. This is a fundamental difference from previously studied models with discontinuous transitions; see for the example of 3-satisfiability, where replica symmetry breaking is necessary to calculate the transition threshold.
Let us reformulate our problem in a mathematical way: Take any graph $`G=(V,E)`$ with $`N`$ vertices $`i\in V=\{1,2,\mathrm{},N\}`$ (the crossroads in the above example) and edges $`(i,j)\in E\subseteq V\times V`$ (the paths). We consider undirected graphs, so with $`(i,j)\in E`$ we also have $`(j,i)\in E`$. A vertex cover is a subset $`V_{vc}\subseteq V`$ of vertices such that for every edge $`(i,j)\in E`$ at least one of its endpoints $`i`$ or $`j`$ is in $`V_{vc}`$ (the path is observed). We call the vertices in $`V_{vc}`$ covered, whereas the vertices in its complement $`V\setminus V_{vc}`$ are called uncovered. Please note that the VC of a disconnected graph is consequently given by the union of the VCs of its connected components.
Also partial VCs $`U\subseteq V`$ are considered, where there are some uncovered edges $`(i,j)`$ with $`i\notin U`$ and $`j\notin U`$. Finding the minimum number of uncovered edges for a given graph $`G`$ and cardinality $`|U|=X`$ is an optimization problem. As already mentioned, the corresponding decision problem, whether a VC of fixed cardinality $`X`$ exists, belongs to the basic NP-complete problems.
In order to be able to speak of typical or average cases, we have to introduce some probability distribution over graphs. We investigate random graphs $`G_{N,c/N}`$ with $`N`$ vertices and edges $`(i,j)`$ (with $`i\ne j`$) which are drawn randomly and independently with probability $`c/N`$. Thus the expected edge number equals $`\left(\genfrac{}{}{0pt}{}{N}{2}\right)c/N=cN/2+O(1)`$. The average connectivity $`c`$ remains finite in the limit $`N\mathrm{}`$ of infinitely large graphs. For $`c<1`$, these graphs are known to decompose into $`O(N)`$ connected components of typical size $`O(1)`$, whereas for $`c>1`$ a giant component appears which unites $`O(N)`$ vertices . For a more recent and complete introduction see .
As an element of a VC $`V_{vc}`$ typically covers $`O(c)`$ edges, the minimal cover size $`X_{min}`$ is also expected to be of $`O(N)`$: $`X_{min}=x_{N,c}N`$. In fact, there are rigorous lower and upper bounds on $`x_{N,c}`$ which are valid for almost all random graphs. To our knowledge, the best bounds are given in ; see Fig. 3 for a comparison with our results. The exact asymptotics for large connectivities $`c\gg 1`$ is also known: Frieze proved that
$$x_{N,c}=1-\frac{2}{c}(\mathrm{log}c-\mathrm{log}\mathrm{log}c-\mathrm{log}2+1)+o(\frac{1}{c})$$
(1)
for almost all graphs $`G_{N,c/N},N\mathrm{}`$.
It is, however, not clear whether a sharp threshold $`x_c(c)=lim_N\mathrm{}\overline{x_{N,c}}`$ exists at finite $`c`$, with the over-bar denoting the average over the random-graph ensemble at fixed $`N`$ and $`c`$. In order to get some intuition on this point we started our work with exact numerical simulations. Analytic results are presented below.
Using an exact branch-and-bound algorithm, all optimal configurations at fixed $`X`$ are enumerated: As each vertex is either covered or uncovered, there are $`\left(\genfrac{}{}{0pt}{}{N}{X}\right)`$ possible configurations, which can be arranged as leaves of a binary (backtracking) tree. At each node of the tree, the two subtrees represent the subproblems in which the corresponding vertex is either covered or uncovered. A subtree is omitted if its leaves can be proven to contain fewer covered edges than the best of all previously considered configurations. The order of the vertices within the levels of the tree is given by their current connectivity, i.e. only neighbors are counted which are not yet included in the cover set. Thus, the first descent into the tree is equivalent to the greedy heuristic which iteratively covers vertices, always taking the vertex with the highest current connectivity.
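The greedy first descent just described can be sketched in a few lines; the graph constructor mirrors the $`G_{N,c/N}`$ ensemble defined above (a simplified sketch of the heuristic only, not of the full branch-and-bound code):

```python
import random

def random_graph(N, c, seed=0):
    """G_{N,c/N}: each pair (i,j), i<j, becomes an edge with probability c/N."""
    rng = random.Random(seed)
    return [(i, j) for i in range(N) for j in range(i + 1, N)
            if rng.random() < c / N]

def greedy_cover(edges):
    """Greedy heuristic: repeatedly cover the vertex with the highest
    current connectivity (number of incident, still-uncovered edges)."""
    uncovered = set(edges)
    cover = set()
    while uncovered:
        degree = {}
        for i, j in uncovered:
            degree[i] = degree.get(i, 0) + 1
            degree[j] = degree.get(j, 0) + 1
        v = max(degree, key=degree.get)      # highest current connectivity
        cover.add(v)
        uncovered = {e for e in uncovered if v not in e}
    return cover

N, c = 200, 2.0
edges = random_graph(N, c)
x_greedy = len(greedy_cover(edges)) / N      # an upper bound on x_c(c)
print(x_greedy)
```

Since the heuristic always produces a valid cover, the resulting fraction is an upper bound on the true threshold for the given sample.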
First results are exposed in Fig. 1: The probability of finding a vertex cover of size $`xN`$ in a random graph $`G_{N,c/N}`$ is displayed for $`c=2`$ and several values of $`N`$; analogous results have been obtained for other values of $`c`$. The drop of the probability from one for large cover sizes to zero for small cover sizes obviously sharpens with $`N`$, so that a jump at a well-defined $`x_c(c)`$ is to be expected in the large-$`N`$ limit: for $`x>x_c(c)`$ almost all random graphs of average connectivity $`c`$ are coverable with $`xN`$ vertices, below $`x_c(c)`$ almost no graphs have such a VC. Fig. 1 also shows the minimal fraction $`e`$ of uncovered edges as a function of $`x`$ for the partial covers. It vanishes for $`x>x_c(c)`$, whereas it remains positive for $`x<x_c(c)`$.
It is also interesting to measure the median computational effort, as given by the number of visited nodes in the backtracking tree, as a function of $`x`$ and $`N`$. The curves, which are given in Fig. 2, show a pronounced peak near the threshold value. Inside the coverable phase, $`x>x_c(c)`$, the computational cost grows only linearly with $`N`$, and in many cases the greedy heuristic is already able to cover all edges by covering $`xN`$ vertices. Below the threshold, $`x<x_c(c)`$, the computational effort is clearly exponential in $`N`$, but becomes smaller and smaller as we move away from the threshold. This easy-hard-easy scenario closely resembles the typical-case complexity pattern of 3SAT , and deserves some analytical investigation.
To achieve this, we use the strong similarity between combinatorial optimization problems and statistical mechanics. In the first case, a cost function depending on many discrete variables has to be minimized; e.g., the number of uncovered edges is such a cost function for vertex cover. This is equivalent to zero-temperature statistical mechanics, where the Gibbs weight is completely concentrated in the ground states of the Hamiltonian. As the local variables for VC are binary, a vertex being either covered or uncovered, we may give a canonical one-to-one mapping of the vertex cover problem to an Ising model: for any subset $`U\subseteq V`$ we set $`S_i=+1`$ if $`i\in U`$, and $`S_i=-1`$ if $`i\notin U`$. The edges are encoded in the adjacency matrix $`(J_{ij})`$: an entry equals 1 iff $`(i,j)\in E`$, and $`J_{ij}=0`$ otherwise. $`(J_{ij})`$ is thus a symmetric random matrix with independently and identically distributed entries in its lower triangle. The Hamiltonian, or cost function, of the system counts the number of edges which are not covered by the elements of $`U`$,
$$H=\underset{i<j}{\sum }J_{ij}\delta _{S_i,-1}\delta _{S_j,-1},$$
(2)
and has to be minimized under the constraint $`|U|=xN`$, which, in terms of Ising spins, reads
$$\frac{1}{N}\underset{i}{\sum }S_i=2x-1.$$
(3)
The resulting ground state energy $`e_{gs}(c,x)`$ equals zero iff the graph is coverable with $`xN`$ vertices.
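The mapping of Eqs. (2) and (3) is easy to check on a toy graph. The sketch below counts uncovered edges, with $`S_i=+1`$ for covered and $`S_i=-1`$ for uncovered vertices:

```python
def cost(edges, S):
    """Hamiltonian of Eq. (2): an edge (i,j) contributes one unit iff
    both of its endpoints are uncovered, i.e. S_i = S_j = -1."""
    return sum(1 for i, j in edges if S[i] == -1 and S[j] == -1)

# path graph 0-1-2: covering only the central vertex covers both edges
edges = [(0, 1), (1, 2)]
print(cost(edges, {0: -1, 1: +1, 2: -1}))   # 0: a minimal vertex cover
print(cost(edges, {0: -1, 1: -1, 2: -1}))   # 2: no vertex covered
```

The constraint (3) simply fixes the fraction of $`+1`$ spins, i.e. the cover size, to $`xN`$.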
We skip the details of the calculation, as they go beyond the scope of this letter. A detailed technical description will be presented elsewhere . We only mention the main steps:
(i) We introduce a positive formal temperature $`T`$ and calculate the canonical partition function $`Z={\sum }_{๐_x}\mathrm{exp}\{-H/T\}`$, where the sum runs over all configurations $`\{S_i\}_{i=1,\mathrm{},N}`$ which satisfy (3).
(ii) We are interested in the disorder-averaged free-energy density $`f(c,x)=-lim_N\mathrm{}TN^{-1}\overline{\mathrm{ln}Z}`$, which we calculate using the replica method, closely following the scheme proposed in . Within the replica-symmetric framework, this free energy depends self-consistently on the order parameter $`P(m)`$, the histogram of local magnetizations $`m_i=\langle S_i\rangle `$, where $`\langle \mathrm{}\rangle `$ denotes the thermodynamic average.
(iii) The ground states are recovered by sending $`T\to 0`$. In this limit, one has to take care of the scaling of the order parameter with $`T`$, which is different below and above $`x_c(c)`$. For a similar reasoning in the case of 3SAT see also .
(iv) Both equations, for $`x<x_c(c)`$ and $`x>x_c(c)`$, tend to the same limit for $`x\to x_c(c)`$. At the threshold, the resulting self-consistency equation can be solved analytically.
From this solution, many properties of the threshold VCs can be read off. The first is of course the value of the threshold itself:
$$x_c(c)=1-\frac{2W(c)+W(c)^2}{2c}$$
(4)
with the Lambert W function $`W`$ . The result for $`x_c(c)`$ is displayed in Fig. 3 along with numerical data obtained by a variant of the branch-and-bound algorithm. For relatively small connectivities $`c`$, perfect agreement is found. We have also compared (4) with rigorous bounds obtained by counting VCs of small connected components with up to 7 vertices, which are very precise for small $`c`$ (e.g. 0.999997$`N`$ vertices are taken into account for $`c=0.1`$). Here too, perfect coincidence was found.
For larger $`c`$, systematic deviations of (4) from numerical results occur; it even violates the asymptotic form (1). For $`c>e`$, the replica-symmetric solution becomes unstable, and we find the continuous appearance of a solution with broken replica symmetry; work is in progress on this point . We conjecture that the replica-symmetric result (4) is exact for $`c\le e`$, whereas it gives a lower bound for $`c>e`$ . Please note that this point is situated well beyond $`c=1`$, where the giant component appears. Neither analytically nor numerically have we found any influence of the giant component on the vertex covers. This is significantly different from Ising models on random graphs as studied in .
Besides the value of $`x_c(c)`$, the replica-symmetric solution also contains structural information. One important phenomenon is a partial freezing of degrees of freedom. For a given random graph, there typically exists an exponential number of minimal VCs, so the entropy density is finite. On the other hand, a fraction $`b_+(c)`$ of the vertices is covered in all minimal VCs, thus forming a covered backbone; other vertices are never covered and are collected in the uncovered backbone, which has size $`b_{-}(c)N`$:
$`b_{-}(c)`$ $`=`$ $`{\displaystyle \frac{W(c)}{c}}`$ (5)
$`b_{+}(c)`$ $`=`$ $`1-{\displaystyle \frac{W(c)+W(c)^2}{c}}`$ (6)
In Fig. 3 the total backbone size $`b_c(c)=b_{-}(c)+b_{+}(c)`$ is compared with numerical data; again, very good agreement is found within the range of validity of replica symmetry.
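Equations (4)-(6) involve only the Lambert W function, so the replica-symmetric predictions are straightforward to evaluate numerically. A minimal sketch follows (a hand-rolled Newton iteration is used here in place of a library `lambertw`):

```python
import math

def lambert_w(c, tol=1e-12):
    """Principal branch of W(c), the solution of W*exp(W) = c, via Newton."""
    w = math.log(1.0 + c)                    # starting guess
    for _ in range(100):
        f = w * math.exp(w) - c
        w_new = w - f / (math.exp(w) * (1.0 + w))
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

def x_c(c):
    """Replica-symmetric cover threshold, Eq. (4)."""
    W = lambert_w(c)
    return 1.0 - (2.0 * W + W * W) / (2.0 * c)

def backbones(c):
    """Uncovered and covered backbone fractions, Eqs. (5) and (6)."""
    W = lambert_w(c)
    return W / c, 1.0 - (W + W * W) / c

print(x_c(1.0))        # ≈ 0.272
print(backbones(1.0))  # ≈ (0.567, 0.111)
```

Within the conjectured range of validity $`c\le e`$, these values can be compared directly with the branch-and-bound data of Fig. 3.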
For small $`c`$, the uncovered backbone is large, which is mainly due to isolated vertices, which have to be uncovered in minimal VCs. The simplest structures showing a covered backbone are subgraphs with three vertices and two edges. In the minimal VC of such a subgraph, the central vertex is covered, thus belonging to the covered backbone, while the other two are uncovered, thus belonging to the uncovered backbone. The simplest non-backbone structures are components with only two vertices and one edge, because the two vertices have no unique covering state.
These backbones appear discontinuously at the threshold because inside the coverable phase the backbone is empty. The proof is simple ($`x>x_c(c)`$ fixed):
(i) Assume that there is a non-empty uncovered backbone, with $`i`$ being an element. Now take any minimal cover $`V_0`$. It can be extended by covering arbitrarily chosen $`(x-x_c(c))N`$ vertices out of $`V\setminus V_0`$, e.g. vertex $`i`$, which is a contradiction to our assumption.
(ii) Assume now a non-empty covered backbone, with $`i`$ being an element. Then $`i`$ has to be an element of $`V_0`$. As the connectivity of $`i`$ is almost surely smaller than or equal to $`O(\mathrm{log}N)`$, all uncovered neighbors of $`i`$ can be covered by some of the $`(x-x_c(c))N`$ covering marks (for $`N`$ sufficiently large), and $`i`$ can be uncovered without uncovering the graph. This is again a contradiction to our assumption.
To summarize, we have investigated the vertex cover problem on random graphs by means of exact numerical simulations and analytical replica calculations. A sharp transition from a coverable to an uncoverable phase is found upon decreasing the permitted size of the cover set. This transition coincides with a change of the typical-case complexity from linear to exponential growth in $`N`$ and with the discontinuous appearance of a frozen-in backbone. The complete RS solution was given for $`c<e`$; it is found to be in perfect agreement with numerical results. For $`c>e`$ the behavior is less clear, as replica symmetry breaking occurs.
Also the behavior inside the coverable and the uncoverable phases is of some interest. There the use of variational techniques similar to those proposed in could be of great help.
The authors are grateful to J.A. Berg for critically reading the manuscript. Financial support was provided by the DFG (Deutsche Forschungsgemeinschaft) under grant Zi209/6-1. |
# A Search for X-ray emission from Saturn, Uranus and Neptune
## 1 Introduction
X-ray emission from solar system objects has so far been detected from the Earth (Rugge et al. rug79 (1979), Fink et al. fink88 (1988)), from the Moon (Schmitt et al. schmitt91 (1991)), from several comets (e.g., Lisse et al. lisse96 (1996), Mumma et al. mumma97 (1997)) and from Jupiter (e.g., Metzger et al. metzger83 (1983)). The observed X-ray emission seems to have different physical origins in the different objects. The principal X-ray production mechanism for Moon and Earth is reflection of solar X-rays; auroral X-ray emission has been found from the Earth and from Jupiter, and similar emission from the outer planets is anticipated.
Aurorae on Earth and Jupiter are generated by charged particles precipitating into the atmosphere along the magnetic field lines. While at Earth the precipitating flux consists of solar wind electrons, there is strong evidence from the Einstein observations (e.g., Metzger et al. metzger83 (1983)) that the Jovian X-rays are caused by heavy-ion precipitation. Assuming energetic electron precipitation, an input power of $`10^{15}`$ to $`10^{16}`$ W was estimated, which seemed unreasonably large compared both with the auroral input power calculated on the basis of the Voyager observations of the UV aurora ($`10^{13}`$ to $`10^{14}`$ W) and with the power estimated to be available in the magnetosphere through mass loading in the torus or pitch-angle scattering induced by wave-particle interactions. From this, and from a direct observation of heavy ions in Jupiter's magnetosphere with the Voyager spacecraft, Metzger et al. (metzger83 (1983)) concluded that heavy-ion precipitation is a reasonable X-ray production process (for references, see Metzger et al. metzger83 (1983)). The Einstein Observatory Imaging Proportional Counter (IPC) pulse-height spectrum is consistent both with a continuous spectrum resulting from bremsstrahlung and with a characteristic line-emission spectrum from heavy ions, especially from oxygen and sulfur. Because of this inability of the IPC to distinguish between continuous emission and line emission, the possibility that the Jovian X-ray emission is due to bremsstrahlung could not be ruled out, but a comparison of ROSAT observations in the soft X-ray band with model-generated bremsstrahlung and line-emission spectra strengthened the case for heavy-ion precipitation (Waite et al. waite94 (1994)).
X-ray emission from the other outer planets, and especially from Saturn, is expected because of the discovery of magnetospheres by the Voyager spacecraft (e.g. Opp opp80 (1980); Sandel et al. sandel82 (1982)) and the observation of auroral ultraviolet emission from Saturn at high-latitude regions (Broadfoot et al. broad81 (1981)), from Uranus near the poles (Herbert & Sandel herb89 (1989)), and from Neptune (Broadfoot et al. broad89 (1989)). Broadfoot et al. (broad81 (1981)) conclude from their UV observations that magnetotail activity on Saturn is more Earth-like and quite different from the dominant Io plasma-torus mechanism on Jupiter. If energetic particles are responsible for the observed UV emission, associated X-ray emission is also expected.
On 1979 December 17, Saturn was observed with the Einstein Observatory IPC for 10,850 seconds, but no X-ray emission was detected, leading Gilman et al. (gilman86 (1986)) to conclude that bremsstrahlung was the more likely X-ray production mechanism for Saturn. Under this spectral assumption they calculated from the IPC observation a $`3\sigma `$ upper limit for the Saturnian X-ray flux at Earth of $`1.7\times 10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. This value has to be compared with an expected energy flux at Earth of $`8\times 10^{-16}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, obtained from a model calculation (Gilman et al. gilman86 (1986)) based on UV observations (Sandel et al. sandel82 (1982)) and the assumption of thick-target bremsstrahlung at high latitudes.
With the ROSAT position sensitive proportional counter (PSPC), a more sensitive soft X-ray observation of Saturn as well as the first X-ray observations of Uranus and Neptune have been carried out. The purpose of this paper is to present and analyze these data.
## 2 Observation and Analysis
The outer planets Saturn, Uranus and Neptune were observed with the ROSAT PSPC in the pointing mode. Details of the observations such as date, elapsed time, ROSAT sequence numbers, number of observation intervals (OBI), apparent angular size, distance from Earth and other relevant items are summarized in Tab. 1; for purposes of comparison we also analyzed and list a ROSAT PSPC observation of Jupiter, discussed in detail by Waite et al. (waite94 (1994)).
As can be seen from Tab. 1, the largest of the planets targeted in these observations, Saturn, had an apparent size of 17″ at the time of the observation. Since this is still small compared to the ROSAT PSPC point spread function (50% encircled energy is contained within a radius of 22″ for angles up to 10′ with respect to the optical axis), we treat the data as emission from point sources. As can further be seen from Tab. 1, the elapsed time of the PSPC observations was quite long, leading to significant motion of the planets during that period. The ROSAT standard data processing provides the position of each recorded photon with respect to a fixed reference frame. Since the PSPC is photon counting, we know the arrival time of each recorded photon. The planetary ephemerides are also known as a function of time, and therefore we can calculate the position shifts $`\mathrm{\Delta }\alpha `$ and $`\mathrm{\Delta }\delta `$ required to correct each photon for the planetary motion. This procedure combines all planetary photons into a point source, while photons from sources with fixed celestial positions will yield multiple sources reflecting the planetary motion. The thus transformed images were analyzed in the soft energy range, taking counts in the amplitude channel range from channel 10 to 60 (0.1–0.55 keV), and in the hard energy range, with channels from 61 to 160 (0.55–1.6 keV). From the count rate we calculated the energy flux using a conversion factor of $`6\times 10^{-12}`$ erg cm<sup>-2</sup> cts<sup>-1</sup> for the soft band and $`2\times 10^{-11}`$ erg cm<sup>-2</sup> cts<sup>-1</sup> for the hard band. The analysis was carried out in two different ways. The first method simply consists of placing a square box at the planet's position and comparing the source-box counts with the background counts determined from a much larger box placed in the vicinity of the source box but containing no sources.
For practical purposes we treat Saturn as a point source, since resolving 17″ would require an extremely high signal-to-noise ratio. In order to pick up as many source counts as possible while keeping the background low, we chose a box size of 1.5′ × 1.5′. For the soft X-ray range this source box contains 83.6% of all source photons. This was empirically determined from the supersoft white dwarf HZ43; the energy fluxes given in Tab. 1 for the soft energy range are corrected by this value. In the soft energy range we thus expect $`7.6\pm 0.1`$ source-box counts at the position of Saturn from background alone, but we find 22 counts, concentrated towards the center of the source box as expected for a point-like source. The probability of measuring 22 counts or more when only 7.6 counts are expected is $`1.7\times 10^{-5}`$, assuming Poisson statistics. The corresponding numbers in the hard energy range are $`2.4\pm 0.1`$ expected counts with 4 counts actually recorded. The probability of measuring at least four counts assuming no source is 22%. Clearly, the signal recorded in the hard band is consistent with a background fluctuation, while in the soft band a significant excess is seen. These numbers are recorded in Tab. 1 for all target planets (as well as Jupiter) for both the soft and hard energy bands.
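The quoted chance probabilities are simple Poisson tail probabilities of the background counts; a minimal sketch (small rounding differences with respect to the quoted values aside):

```python
import math

def poisson_tail(k, mu):
    """P(X >= k) for X ~ Poisson(mu): chance of observing at least
    k background counts when mu are expected."""
    term = math.exp(-mu)            # P(X = 0)
    cdf = 0.0
    for n in range(k):              # accumulate P(X = 0), ..., P(X = k-1)
        cdf += term
        term *= mu / (n + 1)
    return 1.0 - cdf

# soft band: 22 counts observed, 7.6 expected from background alone
print(poisson_tail(22, 7.6))   # ≈ 1.6e-5, of the order of the 1.7e-5 quoted
# hard band: 4 counts observed, 2.4 expected
print(poisson_tail(4, 2.4))    # ≈ 0.22, i.e. consistent with background
```
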
Our second approach consists of applying the maximum likelihood detection technique described by Cruddace et al. (crudd88 (1988)) to the transformed images. This procedure results in a source existence maximum likelihood of 3.1 at Saturn's position for the soft energy range and again no detection in the hard energy range. Clearly, the source existence likelihood is low. In judging the significance level one must, however, keep in mind that X-ray emission was searched for at only one position. A confirmation of this detection by another satellite measurement is thus highly desirable. Accepting for the time being the ROSAT detection of Saturn as real, we find a count rate of 2.7$`\times 10^{-3}`$ cts/s, which corresponds to an incident energy flux of $`1.9\times 10^{-14}`$ erg cm<sup>-2</sup> s<sup>-1</sup>.
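Numerically, the quoted flux follows from the count rate, the soft-band conversion factor, and the 83.6% source-box correction of Sect. 2 (how the correction enters is our reading of how the quoted numbers combine):

```python
# Saturn, soft band: maximum-likelihood count rate -> incident energy flux
rate = 2.7e-3          # cts/s
conv_soft = 6e-12      # erg cm^-2 cts^-1, soft-band conversion factor
psf_frac = 0.836       # fraction of source counts inside the source box (from HZ43)

flux = rate * conv_soft / psf_frac   # erg cm^-2 s^-1, about 1.9e-14
```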
It is obvious from Tab. 1 that no detection of either Uranus or Neptune has been obtained. From the computed 95% confidence count upper limits we calculated flux upper limits in the soft and hard energy bands, which are also listed in Tab. 1.
## 3 Discussion
Our analysis of the ROSAT PSPC data on trans-Jovian planets yields a marginal X-ray detection of Saturn, while only upper limits could be obtained for Uranus and Neptune. These upper limits are sensitive in the sense that X-ray emission at the level of Jupiter's from Uranus and Neptune would have been detected. However, they are consistent with intrinsic Saturn-like X-ray luminosities for Uranus and Neptune. Therefore, similar X-ray production mechanisms on all trans-Jovian planets certainly cannot be ruled out from the currently available observations. On the other hand, it also appears that Jupiter is rather unique with regard to its X-ray luminosity (Gilman et al. gilman86 (1986), Waite et al. waite94 (1994)). A possible explanation for the X-ray production on the trans-Jovian planets is thick-target bremsstrahlung caused by electron precipitation, i.e., the process which is also responsible for the generation of aurorae on the Earth. Gilman et al. (gilman86 (1986)) (cf. Tab. 2) expect an X-ray flux from Saturn of $`8\times 10^{-16}`$ erg cm<sup>-2</sup> s<sup>-1</sup> from thick-target bremsstrahlung, based on the observed UV flux and the assumption of a power-law electron distribution function. An even lower value is, however, obtained from the electron flux measured in Saturn's outer magnetosphere with the Voyager spacecraft, assuming, at high energies, an exponential distribution in electron speed (Barbosa bosa90 (1990)). Saturn's energy flux obtained from our PSPC observation exceeds these expected fluxes by more than one order of magnitude. This might be due either to an elevated electron flux at the time of the observation or to other X-ray production mechanisms operating in addition to bremsstrahlung. Since the Saturnian system does not contain a volcanically active moon like Io, for example, it is unclear how heavy ions could be efficiently inserted into Saturn's magnetosphere.
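The size of the excess over the bremsstrahlung prediction is a one-line check:

```python
flux_obs = 1.9e-14     # observed PSPC soft-band flux of Saturn [erg cm^-2 s^-1]
flux_pred = 8e-16      # thick-target bremsstrahlung prediction (Gilman et al. 1986)

excess = flux_obs / flux_pred   # roughly 24: more than an order of magnitude
```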
Support for the assumption of thick-target bremsstrahlung comes from the observed spectral signature of the X-ray detection of Saturn, which was detected only in the soft energy band (cf. Tab. 1), as well as from a comparison with aurorae on Earth observed with the very same instrument, i.e., the ROSAT PSPC, with which Saturn was observed, analyzed in the same soft energy band. Freyberg (frey94 (1994)) discusses a number of PSPC observations which show a strong enhancement in the diffuse background count rate and which can be traced back to auroral activity and/or geomagnetic storms. The relevant data are summarized in Tab. 3. A specific example is the ROSAT observation CA150057, which showed a significantly enhanced background (almost exclusively in the soft energy band) in one observation interval, when ROSAT traversed the region south of Greenland. This elevation in count rate is interpreted as auroral X-ray emission due to bremsstrahlung in the Earth's atmosphere near the northern radiation belt. An even more extreme case is the ROSAT PSPC observation WG700232, during which an intense geomagnetic storm took place. In the 27<sup>th</sup> observation interval of this data set the PSPC count rate rose to more than 2300 counts per second, at which point the PSPC was switched off and went into safe mode; in this case too the count rate was highly time variable and consisted of very soft photons. In both cases we can determine the observed PSPC intensity (in units of counts/sec/arcsec<sup>2</sup>) by subtracting the background observed in observation intervals unaffected by auroral emission; the results are listed in Tab. 3.
A comparison of the observed PSPC intensities of aurorae on Earth in the soft energy band with that of Saturn shows that the former reach values that can easily account for the observed emission from Saturn. Note in particular that the X-ray emission during a geomagnetic storm may be much higher. If, therefore, as appears likely, the X-ray emission from Saturn is restricted to its auroral belts, the resulting X-ray intensities may still be in the range of X-ray intensities observed during geomagnetic storms on Earth.
In summary, we can state that Jupiter's X-ray emission appears unique among the solar system planets in terms of total luminosity and possibly also in terms of spectral shape. No other planet has an intrinsic X-ray luminosity as high as Jupiter's, and furthermore, Waite et al. (waite94 (1994)) suggest that its X-ray emission is dominated by lines rather than continuum emission. The X-ray detection of Saturn obtained here appears to be consistent both in strength and in spectral shape with thick-target bremsstrahlung as occurring in auroral emission on Earth, but the observed luminosity implies rather high electron fluxes. It is clearly highly desirable to obtain a high angular resolution X-ray image of Saturn, first to confirm the X-ray detection obtained with the ROSAT PSPC and second to study the spatial distribution of the X-ray emission on Saturn's surface, which is expected to be concentrated in Saturn's auroral belts.
###### Acknowledgements.
J.-U.N. acknowledges financial support from Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR) under 50OR98010.
# SAX J1810.8-2609: A New Hard X-ray Bursting Transient
## 1 INTRODUCTION
During a long-term 2-28 keV monitoring campaign of the Galactic Bulge region with the Wide Field Cameras (WFC) on board the BeppoSAX satellite, the new X-ray transient SAX J1810.8-2609 was discovered on 1998 March 10 (Ubertini et al. 1998a ). The source showed a weak emission ($`\sim `$15 mCrab), corresponding to an X-ray flux of $`3.1\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup> in the 2-10 keV range, and was positioned in quasi real-time with the quick-look analysis (QLA) tools at $`\alpha =18^h10^m46^s`$ and $`\delta =-26^{\circ }09^{\prime }.1`$ (equinox 2000.0) with an error radius of $`3^{\prime }`$. During the on-going monitoring, on March 11 a strong type-I X-ray burst was observed, with a peak intensity of $`1.9`$ Crab, from a sky position consistent with that of the persistent emission (Cocchi et al. (1999), Ubertini et al. 1998a ). Two days after the source was discovered, a follow-up observation was performed with the BeppoSAX Narrow Field Instruments (NFI), showing that the 2-10 keV intensity had declined to $`7.5`$ mCrab (Ubertini et al. 1998b ). On 1998 March 24 the ROSAT High Resolution Imager (HRI) observed the error box of SAX J1810.8-2609 for 1153 s (Greiner et al. (1998)). A low energy source, named RX J1810.7-2609, was detected at a position consistent with the WFC error box but not with the one obtained by the QLA of the NFI observation (Ubertini et al. 1998b ; see, however, further details in Sect. 2.1). The 0.1-2.4 keV flux of RX J1810.7-2609 was $`1.5`$ mCrab, and ROSAT did not detect the source in previous observations of the same sky region on 1993 September 10 (0.1-2.4 keV), with a 3$`\sigma `$ upper limit of $`0.08`$ mCrab, nor in 1990 during the All-Sky Survey, thus confirming the transient nature of the source. Very recently, Greiner et al. (1999) have reported details of the ROSAT HRI target of opportunity (TOO) observation, and of optical to infrared follow-up observations of the $`20^{\prime \prime }`$ error box of the ROSAT HRI source.
They tentatively suggested as the counterpart of RX J1810.7-2609 a variable object showing $`R=19.5\pm 0.5`$ on March 13 and R $`>`$ 21.5 on August 27. The ROSAT HRI observation showed an unabsorbed flux of $`\sim 1.1\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. If one assumes a Crab-like spectrum, this extrapolates to $`\sim 3.5\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup> in the 2-10 keV range, which is a factor of 4 lower than the BeppoSAX NFI detection (Ubertini et al. 1998b , Greiner et al. (1999)). The variability during the ROSAT HRI observation was less than a factor of 3 in the 0.1-2.4 keV band, and no evidence of coherent or Quasi-Periodic Oscillations (QPO) in the range from 2 to 200 s was found, with a 3$`\sigma `$ upper limit on the pulsed fraction of 40% (Greiner et al., 1999).
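The "factor of 4" can be reproduced using the WFC calibration of Sect. 1 (15 mCrab corresponding to $`3.1\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup> in 2-10 keV) to convert the 7.5 mCrab NFI detection; the implied per-mCrab conversion is our own back-of-the-envelope step:

```python
mcrab = 3.1e-10 / 15        # 2-10 keV flux of 1 mCrab implied by the WFC numbers
flux_nfi = 7.5 * mcrab      # BeppoSAX NFI detection two days after discovery
flux_hri = 3.5e-11          # ROSAT HRI flux extrapolated to 2-10 keV

ratio = flux_nfi / flux_hri # close to the quoted factor of 4
```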
We here report on a detailed analysis of the WFC and NFI observations of SAX J1810.8-2609, and discuss the nature of the compact source in this X-ray transient.
## 2 OBSERVATIONS AND DATA ANALYSIS
The WFC (Jager et al. (1997)) on board BeppoSAX are designed for performing spatially resolved simultaneous measurements of X-ray sources in crowded fields enabling studies of spectral variability at high time resolution. The mCrab sensitivity in 2-28 keV over a $`40\times 40`$ square degrees field of view (FOV) and the near-to-continuous operation over a period of years offer the unique opportunity to measure continuum emission as well as bursting behaviour from many new X-ray transients and already known (weak) transient and persistent sources. For this reason the Galactic Bulge is being monitored over 1 to 2 months during each of the visibility periods since the beginning of the BeppoSAX operational life in July 1996. During those observations, that combine to a total of $`3`$ Ms net exposure time up to November 1999, more than 900 X-ray bursts and at least 45 sources have been detected (Cocchi, et al. (1998); Heise et al. (1999); Ubertini et al. (1999)). The data of the two cameras are systematically searched for bursts or flares by analysing the time profiles in the 2-10 keV energy range with a 1 s time resolution.
Follow-up observations with the more sensitive, broad band Narrow Field Instruments are often performed each time a new transient source is detected in the WFC field of view. The BeppoSAX NFI comprise an assembly of four imaging instruments: one low energy and three medium energy concentrator spectrometers, named LECS and MECS, with 37 and 56 arcmin circular FOVs and energy ranges 0.1-10 keV and 1.8-10 keV, respectively (Parmar et al., 1997 and Boella et al. 1997). The other two non-imaging co-aligned detectors are the High Pressure Gas Scintillation Proportional Counter (HPGSPC), operative in the range 4-120 keV (Manzo et al., 1997) and the Phoswich Detector System (PDS), operative in the range 15-200 keV (Frontera et al., 1997). On 1998 March 12.19 UT a BeppoSAX follow-up observation was performed with the NFI on the WFC error box of the newly discovered source (Ubertini et al., 1998b). The total observation lasted 85.1 ks, corresponding to a net exposure time of 14.4 ks for the LECS, 26.8 ks for the MECS, 20.0 ks for the HPGSPC and 30.4 ks for the PDS. SAX J1810.8-2609 was strongly detected in all instruments, including a high energy tail extending up to $`\sim `$ 200 keV, and was the only source present in the LECS and MECS images, at an updated position consistent with the WFC error box (see Sect. 2.1). Extraction radii of $`8^{\prime }`$ and $`4^{\prime }`$ have been used for source photons in the LECS and MECS images, respectively, encircling $`95`$ % of the power of the concentrators' point spread function. These data have been used for spectral analysis and light curve production. All spectra have been rebinned, oversampling the detector spectral resolution, to have at least 20 counts per channel. The bandpasses for spectral analysis were limited to 0.3-3.0 keV for the LECS, 1.6-10.5 keV for the MECS, 4-25 keV for the HPGSPC and 15-200 keV for the PDS to take advantage of accurate detector calibration.
The standard procedure of leaving the relative normalization parameters of the different instruments free within a narrow band was applied, to accommodate cross-calibration uncertainties.
### 2.1 The source position
On March 11.06633 UT a strong burst was observed from SAX J1810.8-2609; this is the only X-ray burst ever observed from the source in all the WFC data since 1996, which amounts to a total net exposure time of $`3`$ Ms. We have improved the WFC source position with respect to the one previously reported (Ubertini et al. 1998a ) to $`\alpha =18^h10^m45.6^s`$ and $`\delta =-26^{\circ }08^{\prime }48.5^{\prime \prime }`$ (1.1 arcmin error radius), by using the burst data, which have a much higher statistical quality than the non-burst data (see Table 1). This confirms the association with the ROSAT HRI source RX J1810.7-2609. We note that the original inconsistency between the BeppoSAX NFI and ROSAT HRI positions (Greiner et al. (1998), Ubertini et al. 1998b ) was due to an error in the aspect solution of BeppoSAX which resulted from an unusual attitude configuration. We have therefore refined the position of the source taking into account a new calibration (L. Piro, L.A. Antonelli, private communication). This results in $`\alpha =18^h10^m45.5^s`$ and $`\delta =-26^{\circ }08^{\prime }14^{\prime \prime }`$ (equinox 2000.0) with a conservative error radius of $`1.5^{\prime }`$, and is now consistent with the position determined by the ROSAT HRI. The various error circles are shown in Figure 1.
### 2.2 The single X-ray burst
A single, strong burst was detected from SAX J1810.8-2609 on 1998 March 11.06634. The event lasted 47 s with an e-folding time of $`12.5\pm 0.7`$ s and showed a peak intensity of $`1.9\pm 0.2`$ Crab in the 2-28 keV band (see also Cocchi et al. (1999)). The time profiles in two energy bands are shown in Figure 2: a clear double-peaked structure is present at high energy (10-28 keV), suggesting photospheric radius expansion (Lewin et al. (1995)). The spectrum of the burst, obtained by integrating the data over the whole burst duration, is well represented by blackbody emission with a temperature kT $`\sim `$ 2 keV. In order to study the time-resolved spectra we have integrated the burst data in time intervals as shown in the lower panel of Fig. 2, roughly corresponding to the peak structures observed in the high energy profile. Under standard assumptions (Lewin, van Paradijs, & Taam (1993)) the effective temperature $`T_{eff}`$ and the bolometric flux of a burst determine the ratio between the blackbody radius $`R_{bb}`$ (that is, the radius of the emitting sphere) and the distance d of the neutron star. Assuming $`d=10`$ kpc and taking the observed colour temperatures as $`T_{eff}`$, and not correcting for gravitational redshift, the data are consistent with a radius expansion by a factor of $`2`$ during the first $`10`$ s of the event. The average blackbody radius, excluding the radius expansion part, is $`12`$ km (see Table 2) at 10 kpc. Also evident is the typical spectral softening due to the cooling of the photosphere after the contraction of the emitting region. These results clearly indicate that the burst is of type-I, i.e. it is identified as a thermonuclear flash on a neutron star (NS). The total bolometric fluence of the burst, estimated by spectral analysis, is $`(1.45\pm 0.06)\times 10^{-6}`$ erg cm<sup>-2</sup>.
The observation of the near-Eddington profile provides a means of estimating the source distance. In fact, for a 1.4 $`M_{\odot }`$ NS and a corresponding Eddington bolometric luminosity of $`2\times 10^{38}`$ erg/s, we obtain d=($`4.9\pm 0.3`$) kpc, assuming standard burst parameters (here the error is purely statistical). For this distance the total energy emitted in the burst is $`4\times 10^{39}`$ erg and the observed blackbody radius scales to a value of $`6`$ km. This radius could be underestimated, owing to the uncertainties in the relationship between colour and effective temperature. If, as suggested by Ebisuzaki (1987), the colour temperature exceeds $`T_{eff}`$ by a factor $`\sim `$1.5, then the neutron star radius should be at least two times the measured blackbody radius. These values therefore support a neutron star nature of the compact object.
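With the fluence from Sect. 2.2, the distance-dependent quantities scale as follows (a simple check of the numbers quoted in the text):

```python
from math import pi

KPC = 3.086e21                        # cm per kiloparsec
d = 4.9 * KPC                         # distance from the Eddington-limited burst peak
fluence = 1.45e-6                     # bolometric burst fluence [erg cm^-2]

e_burst = 4.0 * pi * d**2 * fluence   # total emitted energy, about 4e39 erg
r_bb = 12.0 * 4.9 / 10.0              # blackbody radius rescaled from 10 kpc, ~6 km
```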
### 2.3 The wide band persistent emission
The light curve of SAX J1810.8-2609 measured with the BeppoSAX NFI is shown in Figure 3, in different energy ranges. There is a slight decrease of the flux in the lower energy range (E$`<`$10 keV) in the first $`60`$ ks of the observation, while there is no clear evidence for a decline in the final part of the observation. This picture is consistent with the overall flux trend of the source and with the e-folding decay time of $`7.5`$ days estimated from the WFC, ROSAT HRI and NFI observations.
The count rate spectrum shows substantial emission at high energies. In fact, the unfolded spectrum in the 15-200 keV energy range can be fitted by a single power law of spectral index $`\mathrm{\Gamma }=2.02\pm 0.07`$ ($`\chi _r^2`$=0.76 over 15 d.o.f.), with a flux in this range of $`2.2\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. Fitting the broad band spectral data with a single absorbed power law yields a photon spectral index of $`2.22\pm 0.02`$, with a reduced chi-square $`\chi _r^2`$=1.35 over 165 degrees of freedom and an average flux of $`4.2\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup> in the 0.1-200 keV band. Clearly, the absorbed power law model is not satisfactory when applied to the broad band emission.
The fit is significantly improved by using a thermal comptonization spectrum (comptt in XSPEC v.10) instead of the simple power law, resulting in $`\chi _r^2`$=1.12 for 163 d.o.f. (see Table 3), which corresponds to a null hypothesis probability of 0.147. In this model the hard X-ray tail is produced by the upscattering of soft seed photons by a hot, optically thin electron plasma (Sunyaev & Titarchuk, (1980)). The seed photon temperature for this fit is ($`0.36\pm 0.02`$) keV. The hard X-ray data, however, cannot constrain the parameters of the Compton emission region (temperature and optical depth), owing to the very high energy cutoff, which lies above $`150`$ keV.
The addition of a soft thermal component improves both the power law and the comptonization fits. The soft component can be modelled satisfactorily with blackbody or multicolor disk (MCD) blackbody emission (Mitsuda et al. (1984)). Using a single-temperature blackbody, the fits for the power law and comptonization models are both compatible with a temperature value kT $`\sim `$ 0.5 keV (see Table 3 for details), giving a $`\chi _r^2`$ of 0.97 and 0.99, respectively. The power law photon spectral index is $`\mathrm{\Gamma }`$=$`1.96\pm 0.04`$ and the temperature of the soft comptonized emission is ($`0.6\pm 0.4`$) keV. The estimated blackbody flux is between $`2.5`$ and $`\sim `$ 4 $`\times `$10<sup>-11</sup> erg cm<sup>-2</sup> s<sup>-1</sup>. At the quoted source distance of 4.9 kpc this indicates an emission radius between $`10`$ and $`40`$ km. Using an MCD model to describe the additional soft component, the thermal emission is characterized by $`kT_{in}`$=$`0.6\pm 0.1`$ keV (the temperature at the inner disk radius $`R_{in}`$). For this model the best fit gives $`\chi _r^2`$=0.99 for 161 d.o.f. The values of $`R_{in}`$ $`\sqrt{\mathrm{cos}\theta }`$ may range from $`1.5`$ to $`10`$ km (here $`\theta `$ is the disk viewing angle). Hence, if this soft component originates from an optically thick region of the accretion disk, that region is expected to lie not far from the NS, unless the disk is seen at very large inclination.
The broadband source spectrum, unfolded with the response of the four instruments, is shown in Fig. 4 along with the model spectrum obtained for the best fit with a blackbody component plus thermal comptonization. We note that the value of $`N_H`$ $`\sim `$ 3.5$`\times `$10<sup>21</sup> cm<sup>-2</sup> obtained for the fits which include comptonization matches very well the current estimate of the Galactic column density of $`3.7\times 10^{21}`$ cm<sup>-2</sup> for this region (Dickey & Lockman (1990)).
## 3 DISCUSSION
The deep and timely investigations carried out by means of repeated BeppoSAX observations show that SAX J1810.8-2609 is a transient type-I X-ray bursting source, most likely a low mass X-ray binary (LMXB) containing a weakly magnetized NS. The source is a weak transient, as supported by the fact that it was never detected in more than three years of BeppoSAX monitoring of the Galactic Bulge region (apart from the discovery and follow-up observations described here) and was also never seen by the RXTE All Sky Monitor (ASM), even during the March 1998 outburst. The ASM non-detection implies an upper limit on the 2-10 keV flux of $`7\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. It is noteworthy that a similar weak transient behaviour has also been observed in a number of recently discovered bursters, detected during dim X-ray outburst episodes with maximum intensities well below 100 mCrab and lasting $`1`$ to a few weeks (see e.g. Heise et al. (1999)).
The estimated distance of $`5`$ kpc, obtained from the observation of radius expansion during the burst (Lewin, van Paradijs, & Taam (1993); Lewin et al. (1995)), places SAX J1810.8-2609 on our side of the Galactic Bulge (see e.g. Christian & Swank (1997)). This is consistent with the tentative detection of the optical counterpart (Greiner et al. (1999)). We note that the presence of a neutron star in the system is also supported by the relatively small blackbody radius of $`6`$ km calculated for the derived distance.
The detection of a single X-ray burst during our monitoring observations is consistent with the observed combination of burst fluence and average persistent bolometric emission, which is $`5\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, i.e. $`0.01`$ $`L_{edd}`$ at 5 kpc. In fact, taking into account the total energy release of the burst and assuming that steady nuclear burning is negligible, we can estimate a typical value of $`5`$ days for the mean burst interval, which corresponds to the expected $`\alpha `$ parameter for helium burning (i.e., $`\alpha \sim 100`$, see Lewin, van Paradijs, & Taam (1993)) and is comparable with the e-folding decay time of 7.5 days estimated for the persistent emission. Conversely, if a significant part of the nuclear fuel is burnt steadily, the quoted value should be considered a lower limit.
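The estimate follows from energy bookkeeping: between bursts, the persistent flux must supply $`\alpha `$ times the nuclear energy released in a burst. A rough sketch with $`\alpha =100`$ (the helium-burning value quoted in the text; bolometric corrections shift the answer by a factor of order unity, toward the quoted $`5`$ days):

```python
fluence = 1.45e-6    # burst fluence [erg cm^-2]
f_pers = 5e-10       # persistent bolometric flux [erg cm^-2 s^-1]
alpha = 100          # accretion-to-nuclear energy ratio for He burning

t_rec_days = alpha * fluence / f_pers / 86400.0   # mean burst interval, ~3 days
```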
The broad band spectrum of SAX J1810.8-2609 shows a high energy power law tail which is remarkably hard ($`\mathrm{\Gamma }=2.06\pm 0.11`$ in the 15-200 keV band) and shows no cutoff. There is also an indication of a soft blackbody component with temperature $`kT\sim 0.5`$ keV and total flux $`F_{bb}`$ $`\sim `$ 3 $`\times `$10<sup>-11</sup> erg cm<sup>-2</sup> s<sup>-1</sup>. The ratio of the soft component luminosity to the total X-ray (0.1-10 keV) luminosity is estimated to be in the range $`\sim `$ 10-15%. This is consistent with upper limits obtained for X-ray bursters observed by ASCA in the low state (see e.g., Revnivtsev et al. (1999)) and also with the detection of similar soft components in the spectra of 4U 0614+091 (Piraino et al. (1999)), 1E 1724-3045 and SLX 1735-269 (Barret et al. (1999)), which were all observed in a hard state. A recent analysis of ROSAT spectra of LMXBs (Schulz 1999) also shows that a soft component is present in several low luminosity (mainly, Atoll type) X-ray bursters.
The luminosities in the soft and hard X-ray bands match quite well the correlation pattern found for neutron star binaries in the low state (Barret et al. (1999)), with values of $`\sim `$ $`7.5\times 10^{35}`$ erg s<sup>-1</sup> in the 1-20 keV band and $`\sim `$ $`5.0\times 10^{35}`$ erg s<sup>-1</sup> in the 20-200 keV band. Nevertheless, the absence of a cutoff below $`\sim `$200 keV is particularly striking, as in most cases X-ray bursters with hard tail spectra do show this feature, which is suggestive of comptonization with plasma temperatures below $`\sim `$ 50 keV (Guainazzi et al. (1998), in 't Zand et al. (1999)). The presence of such a cutoff was suggested as a possible criterion to distinguish NS from black hole (BH) spectra, the latter being characterized by much higher electron temperatures (Tavani & Barret, (1997)). The case of SAX J1810.8-2609 is not compatible with this kind of interpretation. Very recently, an analysis of BeppoSAX observations of the Atoll X-ray burster 4U 0614+091 has revealed a similar behaviour, i.e. a high energy power law tail with no visible cutoff (Piraino et al. (1999)). Whether the spectrum of SAX J1810.8-2609 could have a cutoff just above 200 keV (our observational upper energy limit) is difficult to say. Our broad band spectral analysis shows that only a comptonization fit is compatible with a low energy absorption matching the value of the Galactic column density. We conclude that, even if the data are not able to constrain the parameters of the scattering region, we still have a good indication that comptonization is the mechanism producing the hard X-ray tail.
We thank the Team Members of the BeppoSAX Science Operation Centre and Science Data Centre for continuous support and timely actions for the quasi real-time detection of new transient and bursting sources and the follow-up TOO observations. The BeppoSAX satellite is a joint Italian and Dutch programme. LN is grateful to D. Barret for useful discussions and especially thanks A. Santangelo and A. Segreto for suggestions and help on the HPGSPC data analysis.
# Freezing light via hot atoms
## Abstract
We prove that it is possible to freeze a light pulse (i.e., to bring it to a full stop) or even to make its group velocity negative in a coherently driven Doppler broadened atomic medium via electromagnetically induced transparency (EIT). This remarkable phenomenon of the ultra-slow EIT polariton is based on the spatial dispersion of the refraction index $`n(\omega ,k)`$, i.e., its wavenumber dependence, which is due to atomic motion and provides a negative contribution to the group velocity. This is related to, but qualitatively different from, the recently observed light slowing caused by large temporal (frequency) dispersion.
Slow group velocity in coherently driven media has been shown to provide new regimes of nonlinear interaction with highly increased efficiency even for very weak light fields, as well as high precision spectroscopy and magnetometry. It has been demonstrated that EIT is accompanied by large frequency dispersion, $`|\omega \partial n/\partial \omega |\gg 1`$, and can slow the group velocity down to 10 - 10<sup>2</sup> m/s.
In this paper we show that, using spatial dispersion due to atomic motion, it is possible to freeze the light, $`v_g=0`$, or even to make its group velocity opposite to the wavevector, $`v_g<0`$ (see Eq. (1)). We consider two different types of atomic media: (i) atomic beam or uniformly moving sample, and (ii) hot gas in a stationary cell.
Freezing of light in a stationary cell via a hot gas is especially intriguing (Fig. 1). The idea is to tune the driving field to resonance with the velocity group of atoms that moves in the direction opposite to the light pulse with velocity equal to the light group velocity that would be supported by this group of atoms if they were at rest.
The main result of the present paper is contained in Fig. 2 which shows that $`v_g`$ can be zero, for a pulse in a hot gas, when the drive detuning $`\mathrm{\Delta }\omega _d`$ is properly chosen.
As is well known, in a medium possessing both temporal and spatial dispersion of the refraction index, $`n(\omega ,k)=1+2\pi \chi (\omega ,k)`$, the group velocity of light contains two contributions,
$$v_g\equiv Re\frac{d\omega }{dk}=Re\frac{c-\omega {\displaystyle \frac{\partial n(\omega ,k)}{\partial k}}}{n(\omega ,k)+\omega {\displaystyle \frac{\partial n(\omega ,k)}{\partial \omega }}}=\stackrel{~}{v}_g-v_s.$$
(1)
Eq. (1) is an immediate result of differentiating the dispersion equation $`kc=\omega n(\omega ,k)`$, i.e., $`c=v_g(n+\omega \partial n/\partial \omega )+\omega \partial n/\partial k`$. The meaning of Eq. (1) becomes clear if one turns to the equation for the field amplitude $`\mathcal{E}`$
$$\left(c\frac{\partial }{\partial z}+\frac{\partial }{\partial t}\right)\mathcal{E}=2\pi i\omega \int dz^{\prime }dt^{\prime }\chi (t-t^{\prime },z-z^{\prime })\mathcal{E}(t^{\prime },z^{\prime }).$$
Using the convolution theorem to write the RHS as $`\int d\overline{k}d\overline{\omega }\chi (\overline{\omega },\overline{k})\mathcal{E}(\overline{\omega },\overline{k})\mathrm{exp}i(\overline{\omega }t-\overline{k}z)`$, expanding the susceptibility to the first order in $`\overline{k}`$, $`\overline{\omega }`$, noting that $`\overline{k}`$ and $`\overline{\omega }`$ under the integral may be written in terms of $`\partial /\partial z`$ and $`\partial /\partial t`$ acting on $`\mathcal{E}(t,z)`$, and rearranging terms, we have
$$\left(c-2\pi \omega \frac{\partial \chi }{\partial k}\right)\frac{\partial \mathcal{E}}{\partial z}+\left(1+2\pi \omega \frac{\partial \chi }{\partial \omega }\right)\frac{\partial \mathcal{E}}{\partial t}=2\pi i\omega \chi (\omega ,k)\mathcal{E}$$
which implies the field equation with $`v_g`$ given by Eq. (1),
$$\left(v_g\frac{\partial }{\partial z}+\frac{\partial }{\partial t}\right)\mathcal{E}=2\pi ikc\chi (\omega ,k)\left[1+2\pi \omega \frac{\partial \chi }{\partial \omega }\right]^{-1}\mathcal{E}.$$
The first term in Eq. (1), $`\stackrel{~}{v}_g=Re[c/(n+\omega \partial n/\partial \omega )]`$, is due to frequency dispersion and was discussed in recent papers. The second term, $`v_s=Re[(\omega \partial n/\partial k)/(n+\omega \partial n/\partial \omega )]`$, is due to the effect of spatial dispersion, i.e., the nonlocal response of the medium to a probe field. We study dilute systems where the susceptibility is small, $`|\chi (\omega ,k)|\ll 1`$, but $`v_g\ll c`$, as in all the EIT experiments carried out so far. As usual, we consider real-valued group velocities under the condition that the imaginary part of $`d\omega /dk`$ is negligible. Otherwise the group velocity loses its simple kinematic meaning and strong absorption governs or prevents propagation of the light pulse through the medium. The latter is the reason why the resonant interaction of light with a two-level medium never results in an ultra-slow polariton.
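Eq. (1) itself can be checked by implicit differentiation of the dispersion relation; a short SymPy sketch with a toy index $`n=1+a\omega +bk`$ (an illustrative choice of ours, chosen so that the algebra closes in closed form):

```python
import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
w, k = sp.symbols('omega k', positive=True)

# toy refractive index with temporal (a) and spatial (b) dispersion
n = 1 + a*w + b*k
F = k*c - w*n                  # dispersion relation k c = omega n(omega, k)

dwdk = sp.idiff(F, w, k)       # implicit group velocity d omega / d k
vg_eq1 = (c - w*sp.diff(n, k))/(n + w*sp.diff(n, w))   # Eq. (1)

assert sp.simplify(dwdk - vg_eq1) == 0
```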
A mono-velocity atomic beam or uniformly moving sample corresponds to the simple case of spatial dispersion, so-called drift dispersion. In the co-moving frame the atoms are at rest, there is no spatial dispersion, and the group velocity is given by the first term of Eq. (1) alone, $`\stackrel{~}{v}_g`$. The Galilean transformation to the laboratory frame, $`k=\stackrel{~}{k},\omega =\stackrel{~}{\omega }-\stackrel{~}{k}v`$, where $`v`$ is the atomic velocity, yields the group velocity $`v_g=Re(d\omega /dk)=\stackrel{~}{v}_g-v.`$
Eq. (1) yields the same result, since the susceptibility depends only on the combination $`\omega +kv`$,
$$\chi _v(\omega ,k)=\chi (\omega +kv)=\frac{i\mu _{ab}^2N}{\hbar }\frac{n_{ab}\mathrm{\Gamma }_{cb}+\mathrm{\Omega }^2n_{ca}/\mathrm{\Gamma }_{ac}^{\ast }}{\mathrm{\Gamma }_{ab}\mathrm{\Gamma }_{cb}+\mathrm{\Omega }^2}.$$
(2)
Here $`n_{ab}=\rho _{aa}-\rho _{bb}`$, $`n_{ca}=\rho _{cc}-\rho _{aa}`$, $`\rho _{ii}`$ is the population of the $`i`$th level, $`\gamma `$ and $`\gamma _{cb}`$ are the relaxation rates of the excited state and of the $`cb`$ coherence, respectively ($`\gamma \gg \gamma _{cb}`$); $`\omega _{ab}`$ and $`\omega _{cb}`$ are the frequencies of the optical and low frequency transitions ($`\omega _{ab}\gg \omega _{cb}`$); $`\omega _d`$, $`k_d`$ and $`\omega `$, $`k`$ are the frequency and wavenumber of the driving and probe fields, respectively; $`N`$ is the atomic density; $`\mathrm{\Omega }=|\mu _{ac}E_d|/2\hbar `$ is the Rabi frequency of the drive field $`(1/2)E_d\mathrm{exp}(i\omega _dt-ik_dz)+c.c.`$; $`\mu _{ac}`$ and $`\mu _{ab}`$ are the dipole moments of the $`ac`$ and $`ab`$ transitions, respectively; $`\mathrm{\Gamma }_{ac}=\gamma +i(\mathrm{\Delta }\omega _d+k_dv)`$, $`\mathrm{\Gamma }_{ab}=\gamma +i(\mathrm{\Delta }\omega +kv)`$, $`\mathrm{\Gamma }_{cb}=\gamma _{cb}+i(\mathrm{\Delta }\omega -\mathrm{\Delta }\omega _d+\mathrm{\Delta }kv)`$, $`\mathrm{\Delta }\omega _d=\omega _d-\omega _{ac}`$, $`\mathrm{\Delta }\omega =\omega -\omega _{ab}`$, $`k_d=\omega _d/c`$, $`k=k_d+\omega _{cb}/c+\mathrm{\Delta }k`$. We use a standard model with incoherent pump and loss rates ($`r_c=r_b=\gamma _{cb}/2`$), assuming time-of-flight broadening of the $`bc`$ transition (Fig. 1), so that in the absence of fields $`\rho _{cc}=\rho _{bb}=1/2`$.
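The fact that the susceptibility (2) depends on $`\omega `$ and $`k`$ only through the Doppler combination $`\omega +kv`$ is what produces the drift: in the strong-dispersion limit the spatial-dispersion term $`v_s`$ tends to the full atomic velocity $`v`$, so $`v_g`$ can reach zero or below. A SymPy sketch, linearizing the index about the EIT centre with a slope $`D`$ (an illustrative parametrization of ours, not a formula from the paper):

```python
import sympy as sp

c, v, D, x0 = sp.symbols('c v D x0', positive=True)
w, k = sp.symbols('omega k', positive=True)

# near the EIT centre: n = 1 + D*(omega + k*v - x0); the k-dependence enters
# only through omega + k*v, as in Eq. (2)
n = 1 + D*(w + k*v - x0)
F = k*c - w*n                      # dispersion relation k c = omega n
vg = sp.idiff(F, w, k)             # lab-frame group velocity

# Eq. (1) decomposition into frequency- and spatial-dispersion parts
denom = n + w*sp.diff(n, w)
vg_freq = c/denom
vs = w*sp.diff(n, k)/denom
assert sp.simplify(vg - (vg_freq - vs)) == 0

# at the EIT centre (x0 = omega + k*v) the drift term tends to the full
# atomic velocity v for steep dispersion D
vs_c = vs.subs(x0, w + k*v)
assert sp.limit(vs_c/v, D, sp.oo) == 1

# freezing: v_g = 0 when v = c/(omega*D), i.e. approximately v = v_g(tilde)
v_freeze = sp.solve(sp.Eq(vg.subs(x0, w + k*v), 0), v)[0]
assert sp.simplify(v_freeze - c/(w*D)) == 0
```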
An important question is how to couple the light pulse into the gas. There are several possibilities. One example uses a grid mirror whose stripes have a small area, so that atoms can fly freely through the mirror, and whose stripe spacing is small compared to the wavelength of light, providing efficient reflection, as in Fig. 1b. If the atoms were at rest, the light would propagate in the forward direction. However, if the atomic velocity is equal to (or larger than) $`\stackrel{~}{v}_g`$, one should see a frozen (or backward) pulse.
Depending on the mechanism of pulse input into the medium, one should solve the problem with initial (time), boundary (space), or mixed (time-space) conditions. In the case of the initial value problem, we solve the dispersion equation for $`\omega =\omega (k)`$, Fig. 3a. The Galilean transformation ensures the same EIT half width, $`\mathrm{\Delta }k_{EIT}=\mathrm{\Omega }^2/\gamma \stackrel{~}{v}_g`$, as for atoms at rest, since $`Im[\stackrel{~}{\omega }(\stackrel{~}{k})]=Im[\omega (k)]`$. In the case of the boundary value problem, we find $`k=k(\omega )`$. The result shows a narrowing of the EIT dip proportional to the kinematic factor $`\alpha =(\stackrel{~}{v}_g-v)/\stackrel{~}{v}_g`$. Indeed, in the accompanying frame the dispersion relation near the EIT resonance can be decomposed in the form of a quadratic polynomial, $`\mathrm{\Delta }\stackrel{~}{k}=\mathrm{\Delta }k_0-i[\kappa _0+\xi (\mathrm{\Delta }\stackrel{~}{\omega }-\mathrm{\Delta }\omega _d)^2]+(\mathrm{\Delta }\stackrel{~}{\omega }-\mathrm{\Delta }\omega _d)/\stackrel{~}{v}_g`$. Its Galilean transformation to the laboratory frame yields
$$\mathrm{\Delta }k=\mathrm{\Delta }k_0+\frac{1}{\alpha }\left[\frac{\delta \omega }{\stackrel{~}{v}_g}-i\kappa _0-i\xi \left(\frac{\delta \omega }{\alpha }\right)^2\right],$$
(3)
where $`\delta \omega =\mathrm{\Delta }\omega -\mathrm{\Delta }\omega _d`$. The coefficients in Eq. (3) can easily be deduced from Eq. (2). For example, in the case of one-photon resonance $`\mathrm{\Delta }\omega _d=0`$ at $`\mathrm{\Omega }^2\gg \gamma _{cb}\gamma `$, we have $`\stackrel{~}{v}_g=\hbar \mathrm{\Omega }^2/2\pi \mu _{ab}^2k_dN`$, $`\mathrm{\Delta }k_0=0`$, the residual absorption coefficient at the center of the EIT dip is $`\kappa _0=\gamma _{cb}/\stackrel{~}{v}_g`$, and the coefficient determining the parabolic absorption profile within the EIT dip is $`\xi =\gamma /\mathrm{\Omega }^2\stackrel{~}{v}_g`$. This approximation is valid if the residual absorption is small, $`\kappa _0\xi (1-v/\stackrel{~}{v}_g)^2/v^2`$. Absorption rises to twice its EIT minimum value at the detuning $`\delta \omega _{EIT}=|\stackrel{~}{v}_g-v|\mathrm{\Omega }\sqrt{\gamma _{cb}/\gamma }/\stackrel{~}{v}_g`$, which is much smaller than the EIT half width $`\mathrm{\Delta }\omega _{EIT}=\mathrm{\Delta }k_{EIT}|\stackrel{~}{v}_g-v|`$.
Eq. (3) shows that the absorption coefficient $`Imk`$ is increased and sharpened by the factor $`(\stackrel{~}{v}_g-v)/\stackrel{~}{v}_g`$ as compared to the co-moving frame. Since the spectrum of the pulse cannot change at the stationary boundary, only those spectral components that fall within the sharpened EIT dip penetrate deep into the medium. For drift velocities $`v>\stackrel{~}{v}_g`$, the backward EIT polariton can be excited from inside the cell (Fig. 1b).
In the case of an atomic beam with a moving boundary (or a moving sample), i.e., for the mixed boundary-initial value problem, the spectrum (inverse duration) of the pulse shrinks at the moving boundary in exactly the same way as the EIT width in Eq. (3), $`\mathrm{\Delta }\omega =\mathrm{\Delta }\stackrel{~}{\omega }(\stackrel{~}{v}_g-v)/\stackrel{~}{v}_g`$. This is not a coincidence, but is necessary for consistency when the same process is viewed from different frames. The pulse within the EIT dip decays in time at the same rate independently of whether it propagates through atoms at rest or through a beam, since this decay is pre-determined by the atomic relaxation $`\gamma _{cb},\gamma `$.
Atoms with a thermal velocity distribution. Let us consider a stationary cell of hot atoms. If the intensity of the drive is strong enough to provide EIT for the resonant group of atoms (see Fig. 4), but at the same time weak enough to avoid interaction with off-resonant atoms moving with "wrong" velocities, it is mainly this drifting beam that supports the ultra-slow EIT polariton with zero or even negative group velocity.
To prove this we calculate the dispersion law $`\omega (k)`$ for the EIT polariton in a hot gas in a cell at rest. The susceptibility is given by an average of the beam susceptibility over the velocity distribution $`F(v)`$ of atoms in a gas with thermal velocity $`v_T`$, $`\chi (\omega ,k)=\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}dvF(v)\chi _v(\omega ,k).`$ Instead of the Maxwellian thermal distribution we can use a Lorentzian, $`F(v)=v_T/[\pi (v_T^2+v^2)]`$, since the far-off-resonant tails are not important. This allows us to obtain simple analytical results because the integration over velocities reduces to a sum of residues at a few simple poles, $`v=v_j`$. Only those poles count that lie in the lower half of the complex $`v`$-plane in the formal limit of an infinitely large growth rate $`|Im\omega |\to \mathrm{\infty }`$. For a positive wavenumber detuning, $`\mathrm{\Delta }k>0`$, there are two such poles. One originates from the Lorentzian, $`v_1=-iv_T`$, and the other from the velocity-dependent populations, $`v_2=-(i\gamma G+\mathrm{\Delta }\omega _d)/k_d`$. Here $`\gamma G=\gamma \left(1+\mathrm{\Omega }^2/(\gamma _{cb}\gamma )\right)^{1/2}`$ determines the velocity width of an effective drifting beam of atoms that are driven by the external field into a coherent "dark" state and are hence responsible for the ultra-slow EIT polariton (see Fig. 4). For $`\mathrm{\Delta }k<0`$, there is an additional pole, $`v_3\propto 1/\mathrm{\Delta }k`$, originating from the resonance $`\mathrm{\Gamma }_{ab}\mathrm{\Gamma }_{cb}+\mathrm{\Omega }^2=0`$ in Eq. (2). However, near the EIT resonance, i.e., for small detuning $`\mathrm{\Delta }k`$, it enters the lower half plane from infinity, $`v_3\to -i\mathrm{\infty }`$, so that its contribution is negligible if $`Nk_d^3(\gamma _{cb}/\gamma )\sqrt{k_dv_T/\mathrm{\Omega }}`$.
Calculation of the residues at poles $`v_1`$ and $`v_2`$ yields
$$\chi (\omega ,k)=\frac{i\mu _{ab}^2N}{2\hbar }\left[\frac{\eta _1}{\mathrm{\Omega }^2+\mathrm{\Gamma }_{ab}^{(1)}\mathrm{\Gamma }_{cb}^{(1)}}+\frac{\eta _2}{\mathrm{\Omega }^2+\mathrm{\Gamma }_{ab}^{(2)}\mathrm{\Gamma }_{cb}^{(2)}}\right],$$
(4)
where $`\eta _1=[R_1\mathrm{\Gamma }_{ac}^{(1)}\mathrm{\Gamma }_{cb}^{(1)}(1+2\gamma R_1/\gamma _{cb})]/[1+\gamma ^2(G^21)R_1/\mathrm{\Omega }^2],`$ $`\eta _2=k_dv_TR_2[\mathrm{\Omega }^2/(G1)\mathrm{\Gamma }_{cb}^{(2)}\gamma ]/\gamma _{cb}\gamma G`$, $`\mathrm{\Gamma }_{ab}^{(1)}=\gamma +kv_T+i\mathrm{\Delta }\omega `$ , $`\mathrm{\Gamma }_{ab}^{(2)}=\gamma (1+Gk/k_d)+i(\mathrm{\Delta }\omega k\mathrm{\Delta }\omega _d/k_d)`$, $`\mathrm{\Gamma }_{ac}^{(1)}=\gamma +k_dv_T+i\mathrm{\Delta }\omega _d`$, $`\mathrm{\Gamma }_{cb}^{(1)}=\gamma _{cb}+|\mathrm{\Delta }k|v_T+i(\mathrm{\Delta }\omega \mathrm{\Delta }\omega _d)`$ , $`\mathrm{\Gamma }_{cb}^{(2)}=\gamma _{cb}+|\mathrm{\Delta }k|\gamma G/k_d+i(\mathrm{\Delta }\omega \mathrm{\Delta }\omega _d\mathrm{\Delta }k\mathrm{\Delta }\omega _d/k_d)`$, $`R_1=\mathrm{\Omega }^2/[\gamma ^2+(\mathrm{\Delta }\omega _dik_dv_T)^2]`$, $`R_2=\mathrm{\Omega }^2/[(k_dv_T)^2+(\mathrm{\Delta }\omega _d+i\gamma G)^2]`$.
The susceptibility (4) of a hot gas looks like that of a medium consisting of just two mono-velocity components: (i) a broad background with velocity $`v=0`$ and linewidth $`\gamma +k_dv_T`$, and (ii) a drifting beam with velocity $`v_d=-\mathrm{\Delta }\omega _d/k_d`$ and power-broadened linewidth $`\gamma (1+G)`$ (see Fig. 4). This interpretation becomes very accurate near the EIT dip, $`|\mathrm{\Delta }\omega -\mathrm{\Delta }\omega _d|\ll \gamma G`$, under the conditions necessary for the existence of the frozen ultra-slow EIT polariton: a) the low-frequency coherence decays much more slowly than the optical one ($`\gamma _{cb}\ll \gamma `$); b) the drifting-beam width is smaller than the Doppler broadening ($`\gamma G\ll k_dv_T`$); c) the detuning of the driving and probe fields from one-photon resonance is large enough ($`|\mathrm{\Delta }\omega _d|\gg \gamma G`$) while two-photon resonance is maintained. Then, for the ultra-slow EIT polariton, the susceptibility is approximated as
$$\chi =\frac{\mu _{ab}^2N^{\prime }}{\hbar \gamma G}\left[\frac{\mathrm{\Omega }^2}{\gamma (1+G)(\omega -\omega _k)}-i\right],$$
(5)
if we keep only the resonant $`\omega `$-dependence in the denominators, setting $`\mathrm{\Delta }\omega =\mathrm{\Delta }\omega _d`$ everywhere else. Here $`N^{\prime }=N\gamma Gk_dv_T/[(k_dv_T)^2+\mathrm{\Delta }\omega _d^2]\ll N`$ is the density of atoms in the drifting beam. The resonant denominator, where $`\omega _k=\omega _{ab}+\mathrm{\Delta }\omega _dk/k_d+i\gamma _k`$, $`\gamma _k=\gamma _{cb}+\mathrm{\Omega }^2/\gamma (1+G)+|\mathrm{\Delta }k|\gamma G/k_d`$, comes from the factor $`\mathrm{\Omega }^2+\mathrm{\Gamma }_{ab}^{(2)}\mathrm{\Gamma }_{cb}^{(2)}`$ in Eq. (4). Thus we explicitly find the frequency and the decay ($`\omega _k`$, $`\gamma _k`$) of the EIT exciton whose coupling to the probe field produces the ultra-slow polariton.
For the boundary value problem, Eq. (5) yields a dispersion similar to that of the mono-velocity beam, Eq. (3), with parameters $`v=v_d`$, $`\stackrel{~}{v}_g^{\prime }=[(k_dv_T)^2+\mathrm{\Delta }\omega _d^2]\mathrm{\Omega }^2\hbar /[\mu _{ab}^2N\gamma (1+G)k_d^2v_T]`$, $`\kappa _0=\gamma _{cb}/\stackrel{~}{v}_g^{\prime }`$, $`\xi =1/\gamma _k\stackrel{~}{v}_g^{\prime }`$.
For the initial value problem, from the dispersion equation $`kc=\omega (1+2\pi \chi )`$ and Eq. (5) we find the dispersion law
$$\mathrm{\Delta }\omega =\mathrm{\Delta }\omega _d-v_d\mathrm{\Delta }k+i\gamma _k+\frac{\mathrm{\Omega }^2}{\gamma (1+G)}\left[\frac{\hbar \gamma G\mathrm{\Delta }k}{2\pi \mu _{ab}^2k_dN^{\prime }}+i\right]^{-1}$$
(6)
shown in Fig. 3b. The EIT half width is $`\mathrm{\Delta }k_{EIT}^{\prime }=\gamma _k/\stackrel{~}{v}_g^{\prime }`$. For small detuning $`|\mathrm{\Delta }k|\ll \mathrm{\Delta }k_{EIT}^{\prime }`$, Eq. (6) yields a linear dispersion and a parabolic decay profile, $`\mathrm{\Delta }\omega =\mathrm{\Delta }\omega _d+\mathrm{\Delta }k(\stackrel{~}{v}_g^{\prime }-v_d)+i\gamma _{cb}+i\mathrm{\Delta }k^2\stackrel{~}{v}_g^{\prime 2}/\gamma _k`$. The decay increases to twice its EIT minimum value, $`Im\mathrm{\Delta }\omega =2\gamma _{cb}`$, at the very small detuning $`\delta k_{EIT}^{\prime }=\sqrt{\gamma _{cb}\gamma _k}/\stackrel{~}{v}_g^{\prime }\ll \mathrm{\Delta }k_{EIT}^{\prime }`$. The group velocity describes the pulse kinematics only if $`d\omega /dk`$ has a negligible imaginary part, i.e., near the center of the EIT dip where $`|\mathrm{\Delta }k|<|\stackrel{~}{v}_g^{\prime }-v_d|\gamma _k/\stackrel{~}{v}_g^{\prime 2}`$. The last inequality does not mean that the pulse cannot be stopped. It just means that when the pulse is frozen, $`v_g=\stackrel{~}{v}_g^{\prime }-v_d=0`$, its evolution is governed by the dispersion of absorption.
Fig. 3 clearly shows that the ultra-slow EIT polariton in a hot gas is similar to that in a mono-velocity beam, since the detuning of the driving field picks out a beam with velocity $`v_d=-\mathrm{\Delta }\omega _d/k_d`$. However, the effective density $`N^{\prime }`$ of atoms supporting the EIT polariton and the EIT width $`\mathrm{\Delta }k_{EIT}^{\prime }=\gamma _k/\stackrel{~}{v}_g^{\prime }`$ in a hot gas are different because of the factors $`\gamma G`$ and $`F(v)`$. As a result, the group velocity at the EIT resonance, according to Eq. (6), expressed in terms of a critical density, is
$$v_g=\frac{\beta N_{cr}}{NF(v_d)}-v_d,\qquad N_{cr}=\frac{\hbar \mathrm{\Omega }}{2\pi ^2\beta \mu _{ab}^2}\sqrt{\frac{\gamma _{cb}}{\gamma }},$$
(7)
where $`\beta =\mathrm{max}[v_dF(v_d)]`$. For the Lorentzian $`F(v_d)`$, we have $`\beta =1/2\pi `$, and $`v_g=(v_d-v_d^{(1)})(v_d-v_d^{(2)})N_{cr}/2Nv_T`$ is a quadratic polynomial in $`v_d`$, i.e., the group velocity vanishes at the drive detunings $`v_d^{(1,2)}=v_T[N/N_{cr}\pm \sqrt{(N/N_{cr})^2-1}]`$ and is negative between them for densities higher than the critical value, $`N>N_{cr}`$, as shown in Fig. 2. To achieve the minimal group velocity, $`\mathrm{min}v_g=-(v_TN/2N_{cr})[1-(N_{cr}/N)^2]`$, one has to tune to $`v_d=v_TN/N_{cr}`$. The condition to freeze or reverse the light ($`v_g\le 0`$) means that the group velocity supported by the drifting beam with density $`N^{\prime }=\pi NF(v_d)\gamma G/k_d`$ should be equal to or less than the velocity of the atoms in the beam, i.e., $`\stackrel{~}{v}_g^{\prime }=\stackrel{~}{v}_gN^{\prime }/N\le v_d`$. If we compare a mono-velocity beam with a hot gas at $`v_d=v`$ and take the same $`N^{\prime }`$ as the total density $`N`$ in the beam, so as to provide the same group velocity, $`\stackrel{~}{v}_g=\stackrel{~}{v}_g^{\prime }`$, we find that the EIT width and the residual decay in a hot gas are $`G\approx \mathrm{\Omega }/\sqrt{\gamma _{bc}\gamma }`$ times smaller than in a beam. To minimize $`N_{cr}`$ the drive intensity should be as low as possible, both to decrease $`\stackrel{~}{v}_g^{\prime }`$ by reducing power broadening and to avoid an EIT contribution from the atoms with "wrong" (positive) velocities. That is, the drive intensity should be just above the threshold of the EIT effect at resonance, $`\mathrm{\Omega }^2>\gamma _{cb}\gamma `$. For realistic parameters relevant to the experiments with <sup>87</sup>Rb vapor, chosen in Figs. 2-5, the critical density is $`N_{cr}\approx 10^{11}`$ cm<sup>-3</sup>.
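The internal consistency of these statements can be verified directly from Eq. (7). The following sketch (with $`v_T=1`$ and $`N/N_{cr}=2`$ as arbitrary test values) reproduces the quoted zeros $`v_d^{(1,2)}`$ and the quoted minimum of the group velocity:

```python
import math

v_T = 1.0      # thermal velocity (sets the units)
ratio = 2.0    # N / N_cr > 1, i.e. above the critical density

def F(v):
    # Lorentzian velocity distribution used in the text
    return v_T / (math.pi * (v_T**2 + v**2))

def v_g(v_d):
    # Eq. (7) with beta = 1/(2*pi); N_cr drops out when written via N/N_cr
    beta = 1.0 / (2.0 * math.pi)
    return beta / (ratio * F(v_d)) - v_d

# zeros and minimum quoted in the text
v1 = v_T * (ratio - math.sqrt(ratio**2 - 1.0))
v2 = v_T * (ratio + math.sqrt(ratio**2 - 1.0))
v_min = v_g(v_T * ratio)   # minimum at v_d = v_T * N / N_cr
v_min_quoted = -(v_T * ratio / 2.0) * (1.0 - 1.0 / ratio**2)
```

The group velocity is indeed a parabola in $`v_d`$ that dips below zero between the two roots whenever $`N>N_{cr}`$.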
Absorption or time variation of the drive field results in a spatial or temporal dependence of the group velocity in the cell. This allows us to control the input and the parameters of the pulse in the cell. According to geometrical optics, the parameters of the EIT polariton adiabatically follow the local properties of the driven atoms. Fig. 5 demonstrates how the ultra-slow pulse decelerates up to the point $`v_g=0`$, where it becomes frozen.
The important conclusion is that the drifting beam provides a drift spatial dispersion $`\partial n/\partial k`$ (see Eq. (1)) large enough to ensure $`v_g\le 0`$. Although the density of drifting atoms is small, $`N^{\prime }\ll N`$, their resonant contribution dominates. This allows us to make the group velocity zero or even negative. To observe frozen or backward light one can look, e.g., for scattering, luminescence, delay, or enhanced nonlinear mixing caused by the ultra-slow pulse.
We thank M. Fleishhauer, E. Fry, S. Harris, M. Lukin, and G. Welch for helpful discussions. This work was supported by the ONR, the NSF, the Welch Foundation, and the Texas Advanced Technology Program. |
# The mesoscopic proximity effect probed through superconducting tunneling contacts (cond-mat/0001453)
\[
## Abstract
We investigate the properties of complex mesoscopic superconducting-normal hybrid devices, Andreev interferometers, in the case where the current is probed through a superconducting tunneling contact whereas the proximity effect is generated by a transparent SN-interface. We show within the quasiclassical Green's functions technique how the fundamental SNIS-element of such structures can be mapped onto an effective $`\mathrm{S}^{\prime }`$IS-junction, where $`\mathrm{S}^{\prime }`$ is the proximised material with an effective energy gap $`E_g<\mathrm{\Delta }`$. The conductance through such a sample at $`T=0`$ vanishes if $`V<\mathrm{\Delta }+E_g`$, whereas at $`T>0`$ the conductance shows a peak at $`V=\mathrm{\Delta }-E_g`$. We propose an Andreev interferometer where $`E_g`$ can be tuned by an external phase $`\varphi `$ and displays maxima at 0 mod $`2\pi `$ and minima at $`\pi `$ mod $`2\pi `$. This leads to peculiar current-phase relations, which depart from a zero-phase maximum or minimum depending on the bias voltage and can even show intermediate extrema at $`V\approx \mathrm{\Delta }`$. We propose an experiment to verify our predictions and show how our results are consistent with recent, unexplained experimental results.
\]
The proximity effect, although known for many decades (see e.g. ), has recently attracted new scientific interest in the context of mesoscopic normal-superconducting hybrid structures, which are now experimentally accessible due to progress in nanofabrication and measurement technology. Departing from the properties of single junctions and the nonmonotonic diffusion conductance of SN-wires, interest turned to the possibility of tuning the conductance by an external phase or a loop in the normal part. On the other hand, if probed through tunneling contacts, the conductance is controlled by the DOS and the induced minigap, which can also be controlled by a phase and hence opens another channel for phase-controlled conductance of a different sign. If a system contains more than one superconducting terminal, a supercurrent can flow, which can be controlled and reversed externally. The situation becomes more difficult, and in particular time-dependent, if nonequilibrium is created by applying an external voltage parallel to the junction.
This latter situation is substantially simplified if one of the contacts is separated from the rest of the structure by a tunneling barrier. In that case, the voltage- and phase-drop is concentrated at the barrier and the problem is essentially split into two parts: the time-dependence of the phase at the contact, and the proximity effect, which determines the superconducting properties at the normal side of the contact, within the normal metal. In that case, the physics should be basically identical to that of an $`\mathrm{S}^{\prime }`$IS-junction, where the properties of the "superconductor" $`\mathrm{S}^{\prime }`$ are entirely controlled by the proximity effect, i.e. we expect a gap of size $`E_g<\mathrm{\Delta }`$ where, if the junction is long, $`d\gg \xi _0`$, $`E_g\sim E_{\mathrm{Th}}=D/d^2`$, the Thouless energy. Hence, we expect the known physics of such contacts: the onset of a tunneling current at $`V=\mathrm{\Delta }+E_g`$ at any $`T`$, plus the appearance of a current peak at $`V=\mathrm{\Delta }-E_g`$ if $`T>0`$. The origin of this peak is most easily understood within a semiconductor representation of the two superconductors, see e.g. .
Such a structure can in principle be manufactured in a controlled manner. To the best of our knowledge, this has not yet been realized in Andreev interferometers. Nevertheless, we are going to discuss the connection to two experiments: Kutchinsky et al. studied the conductance in a T-shaped interferometer with superconducting contacts in a semiconducting system, where unwanted barriers at the interfaces are likely to occur. Antonov et al. in turn studied a sample with normal tunneling contacts, which might eventually be connected to superconducting pieces.
Model and basic equations. Mesoscopic proximity systems are efficiently and quantitatively described by the quasiclassical Greenโs functions technique, described in and its references 4โ6, 49, and 50. In this approach, the microscopic Gorโkov equation is reduced to the more handy Usadel equation by various systematic approximations. At interfaces, this equation is supplemented by boundary conditions <sup>*</sup><sup>*</sup>*In our case, deviations from these conditions as discussed in are not likely to occur
$`p_{F1}^2l_1\widehat{G}_1{\displaystyle \frac{d}{dx}}\widehat{G}_1`$ $`=`$ $`p_{F2}^2l_2\widehat{G}_2{\displaystyle \frac{d}{dx}}\widehat{G}_2,`$ (1)
$`l_2\widehat{G}_2{\displaystyle \frac{d}{dx}}G_2`$ $`=`$ $`t[\widehat{G}_2,\widehat{G}_1].`$ (2)
These conditions guarantee current conservation. We want to apply them to the case of small transparencies $`t1`$. Here, they enforce that the drop of phase and voltage is concentrated at the insulating layer . The current can thus be expressed as an effective tunneling formula
$`J`$ $`=`$ $`\text{Re}J_p(V,T)\mathrm{sin}\varphi +\text{Im}J_p(V,T)\mathrm{cos}\varphi +\text{Im}J_q(V,T)`$ (3)
$`\varphi `$ $`=`$ $`2eVt+\varphi _0`$ (4)
for the current through the interface. Here, the quasiparticle tunneling current amplitude is
$`\text{Im}\left[J_q(V,T)\right]`$ $`=`$ $`{\displaystyle \frac{G_n}{2e}}{\displaystyle ๐E\text{Re}\left[G_N^R(E)\right]\text{Re}\left[G_{BCS}^R(E+V)\right]}`$ (6)
$`\left(\mathrm{tanh}\left({\displaystyle \frac{E+eV}{2T}}\right)-\mathrm{tanh}\left({\displaystyle \frac{E}{2T}}\right)\right)`$
and $`\text{Re}G^R`$ gives the quasiparticle DOS. This formula is the microscopic formulation of the usual Josephson tunneling formula .
We want to apply this result to the specific case of an SNIS-junction, Fig. 1. Eq. (4) allows us to identify this system with an effective $`\mathrm{S}^{\prime }`$IS-Josephson junction, where the "superconductor" $`\mathrm{S}^{\prime }`$ is the normal metal layer influenced by the proximity effect. We can characterize $`\mathrm{S}^{\prime }`$ by the Green's functions at the interface, calculated from the Usadel equation assuming, in order to be consistent with $`R_\mathrm{T}\gg R_\mathrm{N}`$, a highly resistive interface and consequently a vanishing phase drop over the $`N`$-part. The "superconductor" $`\mathrm{S}^{\prime }`$ has a gap of size $`E_\mathrm{G}\sim \text{min}(E_{\mathrm{Th}},\mathrm{\Delta })`$, see Fig. 2. Thus we expect from a semiconductor model that the system shows a DC supercurrent at $`V=0`$ and a DC quasiparticle current at $`V>\mathrm{\Delta }+E_\mathrm{G}`$. Moreover, at finite temperature a few empty states below $`E_\mathrm{F}`$ and a few quasiparticles above $`E_\mathrm{F}`$ are available, enabling transport already at $`V\approx \mathrm{\Delta }-E_\mathrm{G}`$ (see Eq. 6) and hence leading to a logarithmic quasiparticle current peak there. Unlike the situation in a massive superconductor, the induced DOS in $`\mathrm{S}^{\prime }`$ does not diverge at the gap edge but has a maximum slightly above $`E_G`$, see Fig. 2; thus we can conclude that the peak will also be smoothened and lie slightly above $`\mathrm{\Delta }-E_\mathrm{G}`$. Additionally, due to the BCS singularity in S, another structure is present in the DOS of $`\mathrm{S}^{\prime }`$ at $`E\approx \mathrm{\Delta }`$, which is weakened with increasing thickness $`d`$ (or decreasing Thouless energy).
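The semiconductor-model expectations can be illustrated with a minimal numerical sketch of the quasiparticle tunneling integral, Eq. (6). For simplicity the proximised layer is modeled by a BCS-like DOS with gap $`E_g`$ (ignoring the smooth gap edge of Fig. 2), and the occupation factor is written as the difference $`f(E)-f(E+eV)`$ via hyperbolic tangents; the gap ratio, temperature, and broadening are illustrative assumptions:

```python
import math, cmath

Delta, E_g = 1.0, 0.15   # BCS gap and induced gap (E_g is an assumed value)
T, eta = 0.05, 0.002     # temperature and small DOS broadening (assumptions)

def dos(E, gap):
    # BCS-like quasiparticle DOS Re[G^R], regularized by the broadening eta
    z = E + 1j * eta
    return abs((z / cmath.sqrt(z * z - gap * gap)).real)

def current(V, dE=1e-3, Emax=3.0):
    # quasiparticle tunneling integral of Eq. (6), up to the prefactor
    n = int(2 * Emax / dE)
    s = 0.0
    for i in range(n + 1):
        E = -Emax + i * dE
        occ = math.tanh((E + V) / (2 * T)) - math.tanh(E / (2 * T))
        s += dos(E, E_g) * dos(E + V, Delta) * occ
    return s * dE

I_peak = current(Delta - E_g)          # thermal peak at V = Delta - E_g
I_below = current(0.60)                # deep sub-gap voltage
I_above = current(Delta + E_g + 0.15)  # above the onset at Delta + E_g
```

The sketch reproduces the qualitative picture: a small thermally activated peak near $`V=\mathrm{\Delta }-E_g`$ and a large current onset above $`V=\mathrm{\Delta }+E_g`$.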
Numerical results. In order to obtain quantitative results from eqs. 4,6, the function $`\text{Im}\left[G^R(d)\right]`$ has to be calculated. It is given by the solution of the Usadel equation
$`D_x^2\alpha ^R=2iE\mathrm{sinh}\alpha ^R`$
with boundary conditions
$`\alpha ^R(x=0)=\alpha _S^R=\text{Atanh}\left|{\displaystyle \frac{\mathrm{\Delta }}{E}}\right|`$
at the superconductor and
$`_x\alpha ^R|_{x=d}=0`$
at the tunneling barrier, through $`G^R(d)=\mathrm{cosh}\alpha ^R`$. These nonlinear equations are in general not solvable analytically. Nevertheless, we find from a low-energy expansion that $`\text{Im}\left[\alpha ^R\right]=0`$ to all orders, which indicates the presence of a gap in the spectrum with a sharp edge (at the convergence radius of the low-energy expansion). At high energies, $`E\gg E_{\mathrm{Th}}`$, the system is decoupled from the boundary condition at the barrier and
$`\alpha (d)=4\text{Atanh}\left(\mathrm{tanh}(\alpha _S/4)\mathrm{exp}\left(-\sqrt{2iE/E_{Th}}\right)\right)`$
indicating that the deviation from the normal state value is exponentially cut off at those energies. This is consistent with our numerical result, Fig. 2.
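The exponential cutoff can be made explicit by evaluating the high-energy solution numerically. The sketch below takes the decaying branch $`\mathrm{exp}(-\sqrt{2iE/E_{Th}})`$ and holds the bulk value $`\alpha _S`$ fixed (an assumption made only to isolate the exponential factor):

```python
import cmath

E_Th = 1.0       # Thouless energy sets the scale
alpha_S = 1.2    # bulk pairing angle held fixed (illustrative assumption)

def alpha_d(E):
    # decaying branch of the high-energy solution evaluated at the barrier
    return 4 * cmath.atanh(cmath.tanh(alpha_S / 4)
                           * cmath.exp(-cmath.sqrt(2j * E / E_Th)))

a_low = abs(alpha_d(1.0))    # E ~ E_Th: sizeable proximity amplitude
a_high = abs(alpha_d(25.0))  # E >> E_Th: exponentially cut off
```

Since $`\sqrt{2iE}`$ has a positive real part, the deviation from the normal state is suppressed exponentially in $`\sqrt{E/E_{Th}}`$, in line with the statement above.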
Our qualitative predictions in the preceding section are confirmed by our numerical results, Fig. 3. As predicted, the peaks grow and smear out with increasing temperature, but stay visible up to temperatures far above $`E_{\mathrm{Th}}`$. Furthermore, the feature becomes more pronounced if $`E_G`$ is big, i.e. for a shorter junction.
SNIS Andreev interferometers. Even if this type of junction is not prepared on purpose, during the fabrication process an asymmetric barrier can easily show up accidentally, e.g. if the N-metal is a highly doped semiconductor and a Schottky-barrier is likely to occur or if the structure is prepared out of two layers within a two-step shadow evaporation technique .
As a particular example, we discuss a specific set of experiments .
Unfortunately, the interface resistance has not been systematically investigated there, but a Schottky barrier is likely to occur in this system. As a model, we consider the interferometer of Fig. 4, discussed already in , in the case when the tunneling barriers are strong and all four reservoirs are superconducting.
The phase difference allows one to control the strength of the proximity effect, manifested here in the size of the minigap $`E_\mathrm{G}(\varphi )`$, which varies between $`E_G^{\mathrm{max}}`$ at integer and $`0`$ at half-integer numbers of flux quanta. The influence of the phase difference in the interferometer is hence most pronounced for $`|\mathrm{\Delta }-E_\mathrm{G}^{\mathrm{max}}|\le V\le |\mathrm{\Delta }+E_\mathrm{G}^{\mathrm{max}}|`$. The I-V characteristic at fixed phase, Fig. 4, resembles the form already discussed in Fig. 3 but is slightly smoothened.
At fixed temperatures and voltages, the I-$`\varphi `$ relation shows many shapes, including zero-field minima and maxima as well as additional extrema at intermediate phases, as depicted in Fig. 5. This can be traced back to the motion of $`E_\mathrm{G}(\varphi )`$: at $`V<\mathrm{\Delta }-E_G^{\mathrm{max}}`$, a bigger gap slightly lowers the current (upper left panel of Fig. 5); at $`\mathrm{\Delta }-E_G^{\mathrm{max}}<V<\mathrm{\Delta }`$, we are in the vicinity of the induced peak, which only shows up due to $`E_g`$, so the current is rather suppressed by shifting the gap (upper right panel of Fig. 5). At $`\mathrm{\Delta }<V<\mathrm{\Delta }+E_G^{\mathrm{max}}`$, the situation is more subtle: the current is maximal if the gap edge satisfies $`\mathrm{\Delta }+E_G\approx V`$, which is achieved at intermediate $`\varphi `$. For symmetry reasons, this results not only in a phase shift but in an intermediate maximum. Comparing $`\varphi =0`$ and $`\varphi =\pi `$, one finds a competition that depends on the particular voltage: the sharper induced gap at $`\varphi =0`$ increases the current above the gap edge but decreases it below the gap edge; trading these off leads, e.g. in the lower left panel of Fig. 5, to a higher current at $`\varphi =0`$. At $`V>\mathrm{\Delta }+E_g^{\mathrm{max}}`$, both peaks in the DOS contribute to the current, which again leads to a zero-phase maximum (lower right panel).
A similar multitude of structures was observed in $`G(\varphi )`$ of the interferometer studied in the last section, e.g. in the experiments by Antonov et al., see . In that paper, the conductance of an Andreev interferometer probed through normal tunneling contacts was investigated. For technical reasons, small pieces of aluminum had to be deposited at the site of the barriers; these may become superconducting, rendering the structure a superconducting rather than a normal tunneling contact. As a result, oscillations with intermediate maxima have been observed under certain bias conditions, which is compatible with our predictions.
The oscillation amplitude (see Fig. 6) shows a remarkable peak structure. In the experiments, this effect will be washed out due to the 2D geometry; however, a pronounced splitting of the conductance peak around $`\mathrm{\Delta }`$ is observed. Remarkably, the oscillation amplitude in Fig. 6 depends only weakly on temperature, although we would have expected a strong T-dependence at least of the sub-gap peak. This observation is in agreement with the experiments and makes our mechanism a likely explanation of the observed peak splitting.
Our predictions can be studied in a more genuine setup like that in the inset of Fig. 4, which is also remarkable for another reason: the attached tunneling contacts cool the distribution function in the normal metal by removing quasiparticles. This should also influence the supercurrent between the other two superconducting reservoirs, in the opposite way. Whether or not this also leads to $`\pi `$-junction behavior requires more detailed knowledge of the efficiency of the cooling. The experimental detection of the $`\pi `$-junction along these lines requires detailed knowledge of the current-phase relations 3 and 4 (in that terminology, the control line), which is provided by our study.
Summary and Conclusions. We have discussed the physics of proximity systems probed through a superconducting tunneling contact. We showed how these can be understood as junctions between two different superconductors separated by a tunneling barrier. This leads to a peculiar current-voltage characteristic containing a large step preceded, at $`T>0`$, by a small peak. We discussed the phase dependence of that current in a typical Andreev interferometer and outlined connections to existing and future experiments.
We would like to acknowledge useful discussions with A.D. Zaikin, G. Schรถn, T.M. Klapwijk, J.J.A. Baselmans, H. Weber, T. Heikkilรค, O. Kuhn, and R. Taboryski. This work was supported by the DFG through SFB 195 and GK 284 and by the EU through the EU-TMR โSuperconducting Nanocircuitsโ. |
# Inelastic semiclassical Coulomb scattering (physics/0001017)
## 1 Introduction
Semiclassical scattering theory was formulated almost 40 years ago for potential scattering in terms of WKB phase shifts. Ten years later, a multidimensional formulation appeared, derived from the Feynman path integral. Based on a similar derivation, Miller developed at about the same time his "classical S-matrix", which extended Pechukas' multidimensional semiclassical S-matrix for potential scattering to inelastic scattering. These semiclassical concepts have mostly been applied to molecular problems and, in a parallel development by Balian and Bloch, to condensed-matter problems, i.e. to short-range interactions.
Only recently has scattering involving long-range (Coulomb) forces been studied using semiclassical S-matrix techniques, in particular potential scattering, ionization of atoms near the threshold, and chaotic scattering below the ionization threshold. The latter problem has also been studied purely classically and semiclassically within a periodic-orbit approach.
While there is a substantial body of work on classical collisions with Coulomb forces using the Classical Trajectory Monte Carlo (CTMC) method, almost no semiclassical studies exist. This fact, together with the remarkable success of CTMC methods, has motivated our semiclassical investigation of inelastic Coulomb scattering. To carry out an explorative study in the full (12-dimensional) phase space of three interacting particles is prohibitively expensive. Instead, we restrict ourselves to collinear scattering, i.e. all three particles are located on a line with the nucleus between the two electrons. This collision configuration was proven to contain the essential physics of ionization near the threshold, and it fits well into the context of classical mechanics since the collinear phase space is the consequence of a stable partial fixed point at the interelectronic angle $`\theta _{12}=180^{\circ }`$. Moreover, it is exactly the setting of Miller's approach to molecular reactive scattering.
For the theoretical development of scattering concepts another Hamiltonian of only two degrees of freedom has been established in the literature, the s-wave model. Formally, this model Hamiltonian is obtained by averaging over the angular degrees of freedom and retaining only the zeroth order of the respective multipole expansions. The resulting electron-electron interaction is restricted to the line $`r_1=r_2`$, where the $`r_i`$ are the electron-nucleus distances, and the potential is not differentiable along the line $`r_1=r_2`$. This is not very important for the quantum mechanical treatment; however, it affects the classical mechanics drastically. Indeed, it has been found that the s-wave Hamiltonian leads to a threshold law for ionization very different from the one resulting from the collinear and the full Hamiltonians (which both lead to the same threshold law). Since it is desirable for a comparison of semiclassical with quantum results that the underlying classical mechanics does not lead to qualitatively different physics, we have chosen to work with the collinear Hamiltonian. For this collisional system we will obtain and compare the classical, the quantum, and the primitive and uniformized semiclassical results. For the semiclassical calculations the collinear Hamiltonian was amended by the so-called Langer correction, introduced by Langer to overcome inconsistencies of semiclassical quantization in spherical (or, more generally, non-cartesian) coordinates.
As a side product of this study we give a rule for obtaining the correct Maslov indices for a two-dimensional collision system directly from the deflection function, without the stability matrix. Not only does this make the semiclassical calculation much more transparent, it also considerably reduces the numerical effort, since one can avoid computing the stability matrix and nevertheless obtain the full semiclassical result.
The plan of the paper is as follows: in section 2 we introduce the Hamiltonian and the basic semiclassical formulation of the S-matrix in terms of classical trajectories. We discuss a typical S-matrix $`S(E)`$ at fixed total energy $`E`$ and illustrate a simple way to determine the relevant (relative) Maslov phases. In section 3 semiclassical excitation and ionization probabilities are compared to quantum results for singlet and triplet symmetry. The spin averaged probabilities are also compared to the classical results. In section 4 we go one step further and uniformize the semiclassical S-matrix; the corresponding scattering probabilities are presented there as well. We conclude the paper with section 5, where we try to assess how useful semiclassical scattering theory is for Coulomb potentials.
## 2 Collinear electron-atom scattering
### 2.1 The Hamiltonian and the scattering probability
The collinear two-electron Hamiltonian with a proton as a nucleus reads (atomic units are used throughout the paper)
$$h=\frac{p_1^2}{2}+\frac{p_2^2}{2}-\frac{1}{r_1}-\frac{1}{r_2}+\frac{1}{r_1+r_2}.$$
(1)
The Langer-corrected Hamiltonian reads
$$H=h+\frac{1}{8r_1^2}+\frac{1}{8r_2^2}.$$
(2)
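In code, (1) and (2) read as follows (a sketch of ours in Python; the signs follow the collinear geometry, attraction $`-1/r_i`$ to the nucleus and repulsion $`+1/(r_1+r_2)`$ between the two electrons, which sit on opposite sides of the nucleus):

```python
def h_collinear(r1, r2, p1, p2):
    """Collinear two-electron Hamiltonian (1) in atomic units."""
    return 0.5 * (p1**2 + p2**2) - 1.0/r1 - 1.0/r2 + 1.0/(r1 + r2)

def H_langer(r1, r2, p1, p2):
    """Langer-corrected Hamiltonian (2)."""
    return h_collinear(r1, r2, p1, p2) + 1.0/(8*r1**2) + 1.0/(8*r2**2)

def v_langer(r):
    """Langer-corrected two-body potential seen by one electron when
    the other is far away; it is bounded from below, with minimum
    v(1/4) = -2 a.u., which limits the possible energy exchange."""
    return -1.0/r + 1.0/(8*r**2)
```

The finite minimum of `v_langer` at $`r=1/4`$ is what later forces the deflection function to have extrema.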
For collinear collisions we have only one "observable" after the collision, namely the state with quantum number $`n`$ to which the target electron was excited in the collision. If its initial quantum number before the collision was $`n^{}`$, we may write the probability at total energy $`E`$ as
$$P_{n,n^{}}(E)=|\langle n|S|n^{}\rangle |^2$$
(3)
with the S-matrix
$$S=\underset{\genfrac{}{}{0pt}{}{t\rightarrow \infty }{t^{}\rightarrow -\infty }}{lim}e^{iH_ft}e^{-iH(t-t^{})}e^{-iH_it^{}}.$$
(4)
Generally, we use the prime to distinguish initial from final state variables. The Hamiltonians $`H_i`$ and $`H_f`$ represent the scattering system before and after the interaction and do not need to be identical (e.g. in the case of a rearrangement collision). The initial energy of the projectile electron is given by
$$ฯต^{}=E-\stackrel{~}{ฯต}^{}$$
(5)
where $`\stackrel{~}{ฯต}^{}`$ is the energy of the bound electron and $`E`$ the total energy of the system. In the same way the final energy of the free electron is fixed. However, apart from excitation, ionization can also occur for $`E>0`$, in which case $`|n\rangle `$ is simply replaced by a free momentum state $`|p\rangle `$. This is possible since the complicated asymptotics of three free charged particles in the continuum is contained in the S-matrix.
### 2.2 The semiclassical expression for the S-matrix
Semiclassically, the S-matrix may be expressed as
$$S_{n,n^{}}(E)=\sum _j\sqrt{๐ซ_{n,n^{}}^{(j)}(E)}e^{i\mathrm{\Phi }_j-i\frac{\pi }{2}\nu _j},$$
(6)
where the sum is over all classical trajectories $`j`$ which connect the initial state $`n^{}`$ and the final "state" $`n`$ with a respective probability of $`๐ซ_{n,n^{}}^{(j)}(E)`$. The classical probability $`๐ซ_{n,n^{}}^{(j)}(E)`$ is given by
$$๐ซ_{n,n^{}}^{(j)}(E)=๐ซ_{ฯต,ฯต^{}}^{(j)}(E)\frac{\partial ฯต}{\partial n}=\frac{1}{N}\left|\frac{\partial ฯต(R^{})}{\partial R_j^{}}\right|^{-1}\frac{\partial ฯต}{\partial n},$$
(7)
see , where an expression for the normalization constant $`N`$ is also given. Note that, due to the relation (5), derivatives of $`ฯต`$ and $`\stackrel{~}{ฯต}`$ with respect to $`n`$ or $`R^{}`$ differ only by a sign. From now on we denote the coordinates of the initially free electron by capital letters and those of the initially bound electron by small letters. If the projectile is bound after the collision we will call this an "exchange process", otherwise we speak of "excitation" (the initially bound electron remains bound) or ionization (both electrons have positive energies). The deflection function $`ฯต(R^{})`$ has to be calculated numerically, as described in the next section. The phase $`\mathrm{\Phi }_j`$ is the collisional action given by
$$\mathrm{\Phi }_j(P,n;P^{},n^{})=-\int dt\left(q\dot{n}+R\dot{P}\right)$$
(8)
with the angle variable $`q`$. The Maslov index $`\nu _j`$ counts the number of caustics along each trajectory. "State" refers in the present context to integrable motion for asymptotic times $`t\rightarrow \pm \infty `$, characterized by constant actions, $`J^{}=2\pi \hbar (n^{}+1/2)`$. The (free) projectile represents trivially integrable motion and can be characterized by its momentum $`P^{}`$. In our case, each particle has only one degree of freedom. Hence, instead of the action $`J^{}`$ we may use the energy $`\stackrel{~}{ฯต}^{}`$ for a unique determination of the initial bound state. In the next sections we describe how we calculated the deflection function, the collisional action and the Maslov index.
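In practice the classical weight $`\frac{1}{N}|\partial ฯต/\partial R^{}|^{-1}`$ of (7) can be read off a sampled deflection function. A sketch (function names and the linear root search are ours; the normalization $`N`$ is omitted):

```python
import numpy as np

def classical_prob(Rp, eps, eps_final):
    """Sum |d eps / d Rp|^(-1) over all trajectories j whose final
    energy equals eps_final, cf. (7); Rp and eps sample the deflection
    function on a fine grid.  Returns the (unnormalized) probability
    and the initial positions of the contributing trajectories."""
    deriv = np.gradient(eps, Rp)
    total, roots = 0.0, []
    for i in range(len(Rp) - 1):
        e0, e1 = eps[i] - eps_final, eps[i + 1] - eps_final
        if e0 * e1 < 0.0:                      # branch crosses eps_final
            frac = e0 / (e0 - e1)              # linear interpolation
            roots.append(Rp[i] + frac * (Rp[i + 1] - Rp[i]))
            d = deriv[i] + frac * (deriv[i + 1] - deriv[i])
            total += 1.0 / abs(d)
    return total, roots
```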
#### 2.2.1 Scattering trajectories and the deflection function
The crucial object for the determination of (semi-)classical scattering probabilities is the deflection function $`ฯต(R^{})`$ where $`ฯต`$ is the final energy of the projectile electron as a function of its initial position $`R_0+R^{}`$. Each trajectory is started with the bound electron at an arbitrary but fixed phase space point on the degenerate Kepler ellipse with energy $`\stackrel{~}{ฯต}^{}=-1/2`$ a.u. The initial position of the projectile electron is changed according to $`R^{}`$, but always at asymptotic distances (we take $`R_0=1000`$ a.u.), and its momentum is fixed by energy conservation to $`P^{}=[2(E-\stackrel{~}{ฯต}^{})]^{1/2}`$. The trajectories are propagated as a function of time with a symplectic integrator and $`ฯต=ฯต(t\rightarrow \infty )`$ is in practice evaluated at a time $`t`$ when
$$d\mathrm{ln}|ฯต|/dt<\delta $$
(9)
where $`\delta `$ determines the desired accuracy of the result. Typical trajectories are shown in figure 1, their initial conditions are marked in the deflection function of figure 2.
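Concretely, such a propagation might look as follows (our sketch: velocity Verlet as the symplectic integrator, a reduced $`R_0=50`$ a.u. to keep the example fast, and a fixed propagation time in place of the stopping rule (9); the inner turning point of the two-body Langer orbit at energy $`-1/2`$ a.u. is $`1-\sqrt{3}/2`$):

```python
import numpy as np

def force(r1, r2):
    """F_i = -dH/dr_i for the Langer-corrected Hamiltonian (2)."""
    fee = 1.0 / (r1 + r2)**2     # e-e repulsion pushes both electrons out
    return (-1.0/r1**2 + 1.0/(4.0*r1**3) + fee,
            -1.0/r2**2 + 1.0/(4.0*r2**3) + fee)

def energy(r1, r2, p1, p2):
    return (0.5*(p1**2 + p2**2) - 1.0/r1 - 1.0/r2 + 1.0/(r1 + r2)
            + 1.0/(8.0*r1**2) + 1.0/(8.0*r2**2))

def propagate(E_total=0.1, R0=50.0, dt=1e-3, nsteps=4000):
    # Target electron at the inner turning point of the two-body Langer
    # orbit (the distant projectile perturbs this orbit only slightly);
    # projectile momentum fixed by energy conservation (5).
    r1, p1 = 1.0 - np.sqrt(3.0)/2.0, 0.0
    r2, p2 = R0, -np.sqrt(2.0*(E_total + 0.5))
    f1, f2 = force(r1, r2)
    for _ in range(nsteps):              # velocity Verlet (symplectic)
        p1 += 0.5*dt*f1; p2 += 0.5*dt*f2
        r1 += dt*p1;     r2 += dt*p2
        f1, f2 = force(r1, r2)
        p1 += 0.5*dt*f1; p2 += 0.5*dt*f2
    return r1, r2, p1, p2
```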
In the present (and generic) case of a two-body potential that is bounded from below, the deflection function must have maxima and minima, corresponding to the largest and smallest possible energy exchange, which is limited by the minimum of the two-body potential. The deflection function can only be monotonic if the two-body potential is unbounded from below, as in the case of the pure (homogeneous) Coulomb potential without Langer correction (compare, e.g., figure 1 of ). This qualitative difference implies another important consequence: for higher total energies $`E`$ the deflection function is pushed upwards. Although energetically allowed, for $`E>1`$ a.u. the exchange branch vanishes, as can be seen from figure 3. As we will see later this has a significant effect on semiclassical excitation and ionization probabilities.
#### 2.2.2 The form of the collisional action
The collisional action $`\mathrm{\Phi }_j`$ along the trajectory $`j`$ in (6) has some special properties which result from the form of the S-matrix (4). The asymptotically constant states are represented by a constant action $`J`$ or quantum number $`n`$ and a constant momentum $`P`$ for bound and free degrees of freedom respectively. Hence, in the asymptotic integrable situation with $`\dot{n}=\dot{P}=0`$ before and after the collision no action $`\mathrm{\Phi }_j`$ is accumulated and the collisional action has a well defined value irrespectively of the actual propagation time in the asymptotic regions. This is evident from (8) which is, however, not suitable for a numerical realization of the collision. The scattering process is much easier followed in coordinate space, and more specifically for our collinear case, in radial coordinates. In the following, we will describe how to extract the action according to (8) from such a calculation in radial coordinates (position $`r`$ and momentum $`p`$ for the target electron, $`R`$ and $`P`$ for the projectile electron). The discussion refers to excitation processes to keep the notation simple but the result (13) holds also for the other cases. The collisional action $`\mathrm{\Phi }`$ can be expressed through the action in coordinate space $`\stackrel{~}{\mathrm{\Phi }}`$ by
$$\mathrm{\Phi }(P,n;P^{},n^{})=\stackrel{~}{\mathrm{\Phi }}(P,r;P^{},r^{})+F_2(r^{},n^{})F_2(r,n),$$
(10)
where
$$\stackrel{~}{\mathrm{\Phi }}(P,r;P^{},r^{})=\underset{\genfrac{}{}{0pt}{}{t\rightarrow \infty }{t^{}\rightarrow -\infty }}{lim}\int _{t^{}}^{t}d\tau \left[-R\dot{P}+p\dot{r}\right]$$
(11)
is the action in coordinate space and $`F_2`$ is the generator for the classical canonical transformation from the phase space variables $`(r,p)`$ to $`(q,n)`$ given by
$$F_2(r,n)=\mathrm{sgn}(p)\int _{r_i}^{r}\left(2m\left[ฯต\left(n\right)-v\left(x\right)\right]\right)^{\frac{1}{2}}dx.$$
(12)
Here, $`r_i`$ denotes an inner turning point of an electron with energy $`ฯต(n)`$ in the potential $`v(x)`$. Clearly, $`F_2`$ will not contribute if the trajectory starts and ends at a turning point of the bound electron. Partial integration of (11) transforms to momentum space and yields a simple expression for the collisional action in terms of spatial coordinates:
$$\mathrm{\Phi }(P,n;P^{},n^{})=-\underset{\genfrac{}{}{0pt}{}{t_i\rightarrow \infty }{t_i^{}\rightarrow -\infty }}{lim}\int _{t_i^{}}^{t_i}d\tau \left[R\dot{P}+r\dot{p}\right].$$
(13)
Note that $`t_i^{}`$ and $`t_i`$ refer to times where the bound electron is at an inner turning point and the generator $`F_2`$ vanishes. Phases determined according to (13) may still differ for the same path, depending on its time of termination. However, the difference can only amount to integer multiples of the (quantized!) action
$$J=\oint p\,dr=2\pi \left(n+\frac{1}{2}\right)$$
(14)
of the bound electron with $`ฯต<0`$. Multiples of $`2\pi `$ for each revolution do not change the value of the S-matrix, and the remaining factor of $`\pi `$ per revolution (from the $`2\pi /2`$) is compensated by the Maslov index. In the case of an ionizing trajectory the action must be corrected for the logarithmic phase accumulated in Coulomb potentials .
Summarizing this analysis, we fix the (in principle arbitrary) starting point of the trajectory to be an inner turning point ($`r_i^{}|p^{}=0,\dot{p}^{}>0`$) which completes the initial condition for the propagation of trajectories described in section 2.2.1. In order to obtain the correct collisional action (8) in the form (13) we also terminate a trajectory at an inner turning point $`r_i`$ after the collision such that $`\mathrm{\Phi }`$ is a continuous function of the initial position $`R^{}`$. Although this is not necessary for the primitive semiclassical scattering probability which is only sensitive to phase differences up to multiples of $`J`$ as mentioned above, the absolute phase difference is needed for a uniformized semiclassical expression to be discussed later.
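The quantization (14) can be checked directly for the Langer-corrected two-body potential $`v(r)=-1/r+1/(8r^2)`$: requiring $`J=2\pi (n+1/2)`$ gives bound energies $`-1/[2(n+1)^2]`$, so $`n=0`$ reproduces the ground-state energy $`-1/2`$ a.u. A numerical sketch (ours):

```python
import numpy as np

def v_two_body(r):
    """Langer-corrected two-body potential."""
    return -1.0/r + 1.0/(8.0*r**2)

def action(eps, npts=400):
    """J = oint p dr = 2 * int_{r-}^{r+} sqrt(2(eps - v(r))) dr.
    A sin^2 substitution removes the square-root endpoint behaviour,
    after which Gauss-Legendre quadrature converges rapidly."""
    a = -2.0 * eps                       # turning points solve
    rm = (1.0 - np.sqrt(1.0 - a/4.0))/a  # 4 a r^2 - 8 r + 1 = 0
    rp = (1.0 + np.sqrt(1.0 - a/4.0))/a
    x, w = np.polynomial.legendre.leggauss(npts)
    theta = 0.25*np.pi*(x + 1.0)         # map [-1, 1] -> [0, pi/2]
    r = rm + (rp - rm)*np.sin(theta)**2
    dr = (rp - rm)*np.sin(2.0*theta)*0.25*np.pi   # (dr/dtheta) dtheta
    p = np.sqrt(np.maximum(2.0*(eps - v_two_body(r)), 0.0))
    return 2.0*np.sum(w*p*dr)
```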
### 2.3 Maslov indices
#### 2.3.1 Numerical procedure
In position space the determination of the Maslov index is rather simple for an ordinary Hamiltonian with kinetic energy as in (2). According to Morse's theorem the Maslov index is equal to the number of conjugate points along the trajectory. A conjugate point in coordinate space is defined by ($`f`$ degrees of freedom, $`(q_i,p_i)`$ a pair of conjugate variables)
$$det\left(M_{qp}\right)=det\left(\frac{\partial (q_1,\dots ,q_f)}{\partial (p_1^{},\dots ,p_f^{})}\right)=0.$$
(15)
The matrix $`M_{qp}`$ is the upper right part of the stability or monodromy matrix which is defined by
$$\left(\genfrac{}{}{0pt}{}{\delta \vec{q}(t)}{\delta \vec{p}(t)}\right)=M(t)\left(\genfrac{}{}{0pt}{}{\delta \vec{q}(0)}{\delta \vec{p}(0)}\right).$$
(16)
In general, the Maslov index $`\nu _j`$ in (6) must be computed in the same representation as the action. In our case this is the momentum representation in (13). However, the Maslov index in momentum space is not simply the number of conjugate points in momentum space where $`det\left(M_{pq}\right)=0`$. Morse's theorem relies on the fact that in position space the mass tensor $`B_{ij}=\partial ^2H/\partial p_i\partial p_j`$ is positive definite. This is not necessarily true for $`D_{ij}=\partial ^2H/\partial q_i\partial q_j`$, which is the equivalent of the mass tensor in momentum space. How to obtain the correct Maslov index from the number of zeros of $`det\left(M_{pq}\right)=0`$ is described in ; a review about the Maslov index and its geometrical interpretation is given in .
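As a toy illustration of this counting (ours, not from the paper), consider a one-dimensional harmonic oscillator, for which $`M_{qp}(t)=\mathrm{sin}(\omega t)/\omega `$ is known analytically; the required column of the monodromy matrix can be obtained by finite differences of two neighbouring trajectories:

```python
import numpy as np

def m_qp_series(q0, p0, tmax, dt=1e-3, dp=1e-6, omega=1.0):
    """Finite-difference M_qp(t) = dq(t)/dp(0) for a 1-D harmonic
    oscillator, propagated with velocity Verlet."""
    def run(p_init):
        q, p, f = q0, p_init, -omega**2 * q0
        out = []
        for _ in range(int(round(tmax / dt))):
            p += 0.5 * dt * f
            q += dt * p
            f = -omega**2 * q
            p += 0.5 * dt * f
            out.append(q)
        return np.array(out)
    return (run(p0 + dp) - run(p0 - dp)) / (2.0 * dp)

def count_conjugate_points(mqp):
    """Conjugate points are the zeros (sign changes) of det(M_qp);
    in one dimension det(M_qp) is the scalar M_qp itself."""
    s = np.sign(mqp)
    return int(np.sum(s[1:] * s[:-1] < 0))
```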
#### 2.3.2 Phenomenological approach for two degrees of freedom
For two degrees of freedom, one can extract the scattering probability directly from the deflection function without having to compute the stability matrix and its determinant explicitly . In view of this simplification it would be desirable to determine the Maslov indices also directly from the deflection function avoiding the complicated procedure described in the previous section. This is indeed possible since one needs only the correct difference of Maslov indices for a semiclassical scattering amplitude.
A little thought reveals that trajectories starting from branches of the deflection function in figure 2 that are separated by an extremum differ by one conjugate point. This implies that their respective Maslov indices differ by $`\mathrm{\Delta }\nu =1`$. For this reason it is convenient to divide the deflection function into different branches, separated by the extrema. Trajectories of one branch have the same Maslov index. Since there are two extrema we need only two Maslov indices, $`\nu _1=1`$ and $`\nu _2=2`$. The relevance of just two values of Maslov indices $`(1,2)`$ can be traced to the fact that almost all conjugate points are trivial in the sense that they belong to turning points of bound two-body motion.
We can assign the larger index $`\nu _2=2`$ to the trajectories which have passed one more conjugate point than the others. As is almost evident from their topology, these are the trajectories with $`dฯต/dR^{}>0`$ shown in the upper row of figure 1. (They also have a larger collisional action $`\mathrm{\Phi }_j`$.) The two non-trivial conjugate points for these trajectories, compared to the single conjugate point for orbits with initial conditions corresponding to $`dฯต/dR^{}<0`$, can be understood by looking at the ionizing trajectories (b) and (e) of each branch in figure 1. Trajectories from both branches have in common the turning point for the projectile electron ($`P=0`$). For trajectories of the lower row all other turning points belong to complete two-body revolutions of a bound electron and may be regarded as trivial conjugate points. However, for the trajectories from the upper row there is one additional turning point (see, e.g., figure 1(b)) which cannot be absorbed by a complete two-body revolution. It is the source of the additional Maslov phase.
We finally remark that $`dฯต/dR^{}>0`$ is equivalent to $`dn/d\overline{q}<0`$ of leading to the same result as our considerations illustrated above.
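For a sampled deflection function this bookkeeping amounts to a few lines (our implementation of the branch rule above):

```python
import numpy as np

def split_branches(Rp, eps):
    """Cut the sampled deflection function eps(Rp) at its extrema and
    assign one Maslov index per branch: nu=2 where eps increases
    (d eps/d Rp > 0), nu=1 where it decreases.  Returns a list of
    (first index, last index, nu) tuples."""
    s = np.sign(np.diff(eps))
    cuts = list(np.where(s[1:] * s[:-1] < 0)[0] + 1)   # extrema
    edges = [0] + cuts + [len(Rp) - 1]
    return [(a, b, 2 if eps[b] > eps[a] else 1)
            for a, b in zip(edges[:-1], edges[1:])]
```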
## 3 Semiclassical scattering probabilities
Taking into account the Pauli principle for the indistinguishable electrons leads to different excitation probabilities for singlet and triplet,
$`P_ฯต^+(E)`$ $`=`$ $`\left|S_{ฯต,ฯต^{}}(E)+S_{E-ฯต,ฯต^{}}(E)\right|^2`$
$`P_ฯต^{-}(E)`$ $`=`$ $`\left|S_{ฯต,ฯต^{}}(E)-S_{E-ฯต,ฯต^{}}(E)\right|^2,`$ (17)
where the probabilities are symmetrized a posteriori (see ). Here, $`S_{ฯต,ฯต^{}}`$ denotes the S-matrix for the excitation branch, calculated according to (6), while $`S_{E-ฯต,ฯต^{}}`$ represents the corresponding exchange process at the same fixed energy $`ฯต<0`$.
Ionization probabilities are obtained by integrating the differential probabilities over the relevant energy range, which due to the symmetrization (17) is reduced to $`E/2`$:
$$P_{ion}^\pm (E)=\int _0^{E/2}P_ฯต^\pm (E)\,dฯต.$$
(18)
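A direct transcription of (17) and (18) (a sketch; the trapezoidal rule here merely stands in for a suitable quadrature):

```python
import numpy as np

def symmetrized_probs(S_dir, S_exc):
    """Singlet (+) and triplet (-) probabilities (17) from the direct
    and exchange amplitudes, given as complex arrays over epsilon."""
    return np.abs(S_dir + S_exc)**2, np.abs(S_dir - S_exc)**2

def ionization_prob(eps, P, E):
    """Eq. (18): integrate the differential probability over [0, E/2]
    with the trapezoidal rule."""
    m = (eps >= 0.0) & (eps <= 0.5 * E)
    x, y = eps[m], P[m]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))
```

Note that $`P^++P^-=2(|S_{dir}|^2+|S_{exc}|^2)`$, so the spin average contains no interference terms, which is why it can be compared directly to a purely classical sum of probabilities.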
### 3.1 Ionization and excitation for singlet and triplet symmetry
We begin with the ionization probabilities since they most clearly illustrate the effect of the vanishing exchange branch at higher energies, shown in figure 3. The semiclassical result for the Langer Hamiltonian (2) shows this effect through merging $`P^\pm `$ probabilities at a finite energy $`E`$, in clear discrepancy with the quantum result, see figure 4. Moreover, the extrema in the deflection function lead to the sharp structures below $`E=1`$ a.u. The same is true for the excitation probabilities, where a discontinuity appears below $`E=1`$ a.u. (figure 5). Note also that, due to the violation of unitarity in the semiclassical approximation, probabilities can become larger than unity, as is the case for the $`n=1`$ channel.
Singlet and triplet excitation probabilities represent the most differential scattering information for the present collisional system. Hence, the strongest deviations of the semiclassical results from the quantum values can be expected. Most experiments do not resolve the spin states and measure a spin-averaged signal. In our model this can be simulated by averaging the singlet and triplet probabilities to
$$P_ฯต(E)=\frac{1}{2}(P_ฯต^+(E)+P_ฯต^{-}(E)).$$
(19)
The averaged semiclassical probabilities may also be compared to the classical result which is simply given by
$$P_ฯต^{CL}(E)=\sum _j(๐ซ_{ฯต,ฯต^{}}^{(j)}(E)+๐ซ_{ฯต,E-ฯต^{}}^{(j)}(E))$$
(20)
with $`๐ซ_{ฯต,ฯต^{}}^{(j)}(E)`$ from (7).
Figure 6 shows averaged ionization probabilities. They are very similar to each other, and indeed, the classical result is not much worse than the semiclassical result.
In figure 7 we present the averaged excitation probabilities. Again, one can see the discontinuity resulting from the extrema in the deflection function. As for ionization, the spin averaged semiclassical probabilities (figure 7b) are rather similar to the classical ones (figure 7a); in particular the discontinuity is of the same magnitude as in the classical case and considerably more localized in energy than in the non-averaged quantities of figure 5.
Clearly, the discontinuities are an artefact of the semiclassical approximation. More precisely, they are a result of the finite depth of the two-body potential in the Langer corrected Hamiltonian (2). Around the extrema of the deflection function the condition of isolated stationary points, necessary to apply the stationary phase approximation which leads to (6), is not fulfilled. Rather, one has to formulate a uniform approximation which can handle the coalescence of two stationary phase points.
## 4 Uniformized scattering probabilities
We follow an approach by Chester et al. . The explicit expression for the uniform S-matrix goes back to Connor and Marcus who obtained for two coalescing trajectories $`1`$ and $`2`$
$$S_{n,n^{}}(E)=\mathrm{Bi}^+\left(z\right)\sqrt{๐ซ_{n,n^{}}^{(1)}(E)}e^{i\mathrm{\Phi }_1+i\frac{\pi }{4}}+\mathrm{Bi}^{-}\left(z\right)\sqrt{๐ซ_{n,n^{}}^{(2)}(E)}e^{i\mathrm{\Phi }_2-i\frac{\pi }{4}}$$
(21)
where
$$\mathrm{Bi}^\pm \left(z\right)=\sqrt{\pi }\left[z^{\frac{1}{4}}\mathrm{Ai}\left(-z\right)\mp iz^{-\frac{1}{4}}\mathrm{Ai}^{\prime }\left(-z\right)\right]e^{\pm i\left(\frac{2}{3}z^{\frac{3}{2}}-\frac{\pi }{4}\right)}$$
(22)
The argument $`z=\left[\frac{3}{4}\left(\mathrm{\Phi }_2-\mathrm{\Phi }_1\right)\right]^{\frac{2}{3}}`$ of the Airy function contains the absolute phase difference. We assume that $`\mathrm{\Phi }_2>\mathrm{\Phi }_1`$, which implies for the difference of the Maslov indices that $`\nu _2-\nu _1=1`$ (compare (6) with (21) and (23)). Since the absolute phase difference enters (21), it is important to ensure that the action is a continuous function of $`R^{}`$, avoiding jumps of multiples of $`2\pi `$, as already mentioned in section 2.2.2. For large phase differences (6) is recovered since
$$\underset{z\rightarrow \infty }{lim}\mathrm{Bi}^\pm \left(z\right)=1.$$
(23)
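Equations (21) and (22) are straightforward to evaluate with the Airy function. The signs below are our reconstruction of (22); they are fixed by the requirement (23) that $`\mathrm{Bi}^\pm (z)`$ tend to 1 for large $`z`$, which can be verified numerically:

```python
import numpy as np
from scipy.special import airy

def bi_pm(z, sign):
    """Bi^{+}(z) for sign=+1, Bi^{-}(z) for sign=-1, cf. (22); the
    sign convention (ours) is chosen such that bi_pm -> 1 as z -> oo."""
    ai, aip, _, _ = airy(-z)        # airy returns (Ai, Ai', Bi, Bi')
    pre = np.sqrt(np.pi) * (z**0.25 * ai - sign * 1j * z**(-0.25) * aip)
    return pre * np.exp(sign * 1j * (2.0/3.0 * z**1.5 - 0.25 * np.pi))

def uniform_S(P1, Phi1, P2, Phi2):
    """Uniform S-matrix (21) for two coalescing trajectories, assuming
    Phi2 > Phi1 (i.e. nu2 - nu1 = 1)."""
    z = (0.75 * (Phi2 - Phi1))**(2.0/3.0)
    return (bi_pm(z, +1) * np.sqrt(P1) * np.exp(1j*Phi1 + 0.25j*np.pi)
            + bi_pm(z, -1) * np.sqrt(P2) * np.exp(1j*Phi2 - 0.25j*np.pi))
```

For a large phase difference `uniform_S` reduces to the primitive two-term sum, as demanded by (23).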
Our uniformized S-matrix has been calculated by applying (21) to the two branches for exchange and excitation separately and adding or subtracting the results according to a singlet or triplet probability. In the corresponding probabilities of figure 8 the discontinuities of the non-uniform results are indeed smoothed in comparison with figure 5. However, the overall agreement with the quantum probabilities is worse than in the non-uniform approximation. A possible explanation could lie in the construction of the uniform approximation. It works with an integral representation of the S-matrix, where the oscillating phase (the action) is mapped onto a cubic polynomial. As a result, the uniformization works best, if the deflection function can be described as a quadratic function around the extremum. Looking at figure 2 one sees that this is true only in a very small neighborhood of the extrema because the deflection function is strongly asymmetric around these points. We also applied a uniform approximation derived by Miller which gave almost identical results.
Finally, for the sake of completeness, the spin averaged uniform probabilities are shown in figure 9. As can be seen, the discontinuities have vanished almost completely. However, the general agreement with quantum mechanics is worse than for the standard semiclassical calculations, similarly as for the symmetrized probabilities.
## 5 Conclusion
In this paper we have described inelastic Coulomb scattering with a semiclassical S-matrix. To keep the problem tractable for this explorative study we have restricted the phase space to the collinear arrangement of the two electrons, reducing the degrees of freedom to one radial coordinate for each electron. To account for the spherical geometry we have applied the so called Langer correction, which yields the correct angular momentum quantization. Thereby, a lower bound to the two-body potential is introduced, which generates a generic situation for bound state dynamics since the (singular) Coulomb potential is replaced by a potential bounded from below. The finite depth of the two-body potential leads to singularities in the semiclassical scattering matrix (rainbow effect) which call for a uniformization.
Hence, we have carried out classical (where applicable), semiclassical, and uniformized semiclassical calculations for the singlet, triplet and spin-averaged ionization and excitation probabilities and compared them with each other. Two general trends may be summarized: Firstly, the simple semiclassical probabilities are overall in better agreement with the quantum results for the singlet/triplet observables than the uniformized results. The latter are only superior close to the singularities. Secondly, for the (experimentally most relevant) spin-averaged probabilities the classical (non-symmetrizable) result is almost as good as the semiclassical one compared to the exact quantum probability. This holds for excitation as well as for ionization. Hence, we conclude from our explorative study that a full semiclassical treatment for spin-averaged observables is probably not worthwhile since it does not produce better results than the much simpler classical approach. Clearly, this conclusion has to be taken with some caution since we have only explored a collinear, low dimensional phase space.
We would like to thank A. Isele for providing us with the quantum results for the collinear scattering reported here. This work has been supported by the DFG within the Gerhard Hess-Programm.
## References |
no-problem/0001/astro-ph0001122.html | ar5iv | text | # Derivation of a Sample of Gamma-Ray Bursts from BATSE DISCLA Data
## Introduction
We have been engaged for several years in an effort to derive a homogeneous sample of gamma-ray bursts (GRBs) from the continuous data stream transmitted by the Burst and Transient Source Experiment (BATSE). In particular, we have used the DISCLA data which provide the counts for each of the eight BATSE detectors in channels 1-4 on a time scale of 1024 msec. The resulting BD2 sample of 1391 GRBs covers a time period of 5.9 years schmidt99a ; schmidt99b . It includes 378 GRBs that are not in the BATSE catalog maintained on the BATSE web site. The BD2 sample has been used for a derivation of the characteristic luminosity and local space density of GRBs schmidt99b .
We will briefly review the procedures used to construct the BD2 sample, and analyze the differences between the BATSE catalog and the BD2 sample in some detail.
## Derivation of the BD2 Sample
For most of the time period covered by the BATSE catalog, the on-board trigger mechanism required that the counts in channels 2 and 3, covering the energy range 50-300 keV, exceed the background by 5.5$`\sigma `$ on a time scale of 64, 256 or 1024 msec in two of the eight detectors. In our search based on DISCLA data, we used channels 2$`+`$3, and required an excess of 5$`\sigma `$ above background on a time scale of 1024 msec in two detectors.
The BATSE on-board trigger employs a background averaged over 17.408 sec that is updated every 17.408 sec. We have taken advantage of the archival nature of the DISCLA data to use a background that is derived by linear interpolation from two averages taken over 17.408 sec, as shown in Figure 1. The interval of 20.48 sec between the first background average and the test time is intended to alleviate the problem of detecting slowly rising GRBs hig96 .
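Schematically, the interpolated background for a test bin can be computed as below; the exact placement of the two averaging windows is set by Figure 1, which we can only paraphrase here, so the offsets are an assumption on our part:

```python
import numpy as np

NAVG = 17   # 17 bins of 1.024 s = 17.408 s averaging window
GAP = 20    # 20 bins = 20.48 s between the later window and the test bin

def background_at(counts, i_test):
    """Background at DISCLA bin i_test, linearly inter-/extrapolated
    from two adjacent 17.408 s averages preceding the test time, the
    later one ending GAP bins before it (assumed window geometry)."""
    a1 = slice(i_test - GAP - NAVG, i_test - GAP)             # later
    a0 = slice(i_test - GAP - 2 * NAVG, i_test - GAP - NAVG)  # earlier
    b0, b1 = counts[a0].mean(), counts[a1].mean()
    t0 = 0.5 * (a0.start + a0.stop - 1)                 # window centres
    t1 = 0.5 * (a1.start + a1.stop - 1)
    return b1 + (b1 - b0) / (t1 - t0) * (i_test - t1)
```

For a linearly drifting background this recovers the true rate at the test bin exactly, which is the point of interpolating rather than using a single stale average.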
We searched for defects in the DISCLA data, since these could lead to false triggers or affect the background. For the time period of TJD 8365-10528, we found around 151,000 defects, ranging from checksum errors that affected only one 1024 msec bin to gaps caused by transmission problems, passage through the South Atlantic Anomaly, etc. Around each of these defects, we set up an exclusion time window such that the defect does not affect the background.
The initial search yielded 7536 triggers. The geographic coordinates of the satellite at the time of trigger showed strong concentrations over W. Australia and over Mexico and Texas schmidt99a . We then outlined geographic exclusion zones to avoid the trigger concentrations. With these exclusions in place, we were left with 4485 triggers. For each of these we derived celestial coordinates based on the relative response in all eight detectors and the orientation of the satellite, and other properties such as the duration, the hardness ratios, $`V/V_{max}`$, etc.
The equatorial coordinates of these triggers showed clear concentrations that were identified as Cyg X-1, Nova Persei 1992, and solar flares along the ecliptic schmidt99a . We excluded from consideration all triggers whose positions were within 23 deg of these sources while they were active. Most of the remaining triggers were either magnetospheric events or cosmic GRBs. For a more detailed description of the selection procedure, the reader is referred to schmidt99a . We ended up with a sample of 1422 GRBs schmidt99a which we call the BD1 sample.
Subsequently, we investigated carefully all our bursts that were either not in the BATSE catalog (for an example, see Fig. 3), or whose positions or times agreed poorly with the catalog data. We eventually rejected 31 of the sources in the BD1 sample (most of which were parts of long bursts or identified as the repeater SGR $`180620`$), resulting in the BD2 sample of 1391 GRBs schmidt99b . We show in Table 1 an updated accounting of the classification of all 4485 triggers.
The most important property among those derived for the GRBs in the BD2 sample is $`V/V_{max}`$. Since redshifts are not known for most GRBs, $`V/V_{max}`$ has to be evaluated in euclidean space. It has usually been assumed that $`V/V_{max}=(C_{max}/C_{min})^{-3/2}`$, where $`C_{min}`$ is the minimum detectable burst signal, and $`C_{max}`$ the maximum amplitude. Instead, we derive $`V/V_{max}`$ of each GRB by carrying out a simulation in which we move the source out in euclidean space, and at each step apply the detection algorithm to see whether the reduced burst is still detected schmidt99a . In the process of moving out the source, it may get detected later and later (depending on its time profile) and some burst signal may be included in the background. In some cases, the $`C_{max}`$ part of the profile is never detected before the source disappears as it is being removed. Using $`C_{max}`$ to derive $`V/V_{max}`$ therefore tends to lead to an underestimate of $`V/V_{max}`$.
Using the procedure outlined above, the BD2 sample produces a mean value $`<V/V_{max}>=0.334\pm 0.008`$ schmidt99b . The deviation from the value 0.5 expected for a uniform space distribution reflects to first order the effect of using euclidean space in its derivation rather than a relativistic cosmological model. Hence the euclidean $`<V/V_{max}>`$ is effectively a distance indicator, allowing derivation of the characteristic luminosity of GRBsschmidt99b .
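With only a bare threshold test, the moving-out procedure collapses to the $`(C_{max}/C_{min})^{-3/2}`$ formula; the stripped-down sketch below (ours) shows that limit. It is precisely the re-running of the full detection algorithm, with late triggering and burst signal leaking into the background, that makes the simulated $`V/V_{max}`$ larger:

```python
import numpy as np

def v_over_vmax(profile, sigma, r_step=1.0005):
    """Euclidean V/Vmax by 'moving the source out': scale the peak
    signal by 1/r^2 and re-test a plain 5-sigma threshold.  The
    paper's simulation instead re-runs the full detection algorithm
    at every step; this sketch omits that."""
    cmax = float(np.max(profile))
    r, r_max = 1.0, 0.0
    while cmax / r**2 >= 5.0 * sigma:    # still detected at distance r?
        r_max = r
        r *= r_step
    if r_max == 0.0:
        raise ValueError("burst not detected even at its true distance")
    return r_max**-3.0                   # V/Vmax = (r0/r_max)^3, r0 = 1
```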
## Comparison of the BATSE catalog and the BD2 sample
Given that the BD2 sample was derived independently of the BATSE catalog, it is of interest to compare the two data sets.
1) The BD2 sample produces an all-sky rate of 694 per year, the BATSE catalog yields $`\sim `$ 690 per year meegan98 . One might expect a larger rate in the BD2 sample since its limiting S/N is 5.0, while for the BATSE catalog it is 5.5. However, the BD2 sample is limited to detections at the time scale 1024 msec, while the BATSE catalog also includes time scales of 256 and 64 msec.
2) The BD2 sample has 378 GRBs not in the BATSE catalog, and the BATSE catalog has 130 GRBs, detected at a time scale of 1024 msec, that are not in the BD2 sample. These differences are related to the different depth of the two data sets (5$`\sigma `$ vs. 5.5$`\sigma `$), but they are also influenced by items (3)-(6) below.
3) The $`\sim `$ 151,000 time exclusion windows used in the derivation of the BD2 sample are independent of those used in the BATSE search.
4) Following a BATSE trigger, the detection mechanism was disabled until data could be transmitted to the ground. After a BD2 detection, we disabled the software trigger for 230 sec.
5) The trigger criteria for the BATSE database were changed a number of times, see meegan98 .
6) In general, if the background stretches used in two searches are different, then if a source is detected precisely at the limiting S/N in one of the searches, the probability that the other search finds the source is around 50%. The backgrounds used in the BATSE search and the BD2 sample are independent when the BATSE background stretch is separated by less than 3 sec from the time of detection and partly dependent when the separation is larger than 3 sec. Since we would find a substantial number of GRBs that are not in the BD2 by carrying out a search using different background stretches than those of Figure 1, we prefer to call the resulting collection of sources a sample. It should be stressed that the different samples so found are all statistically equivalent to each other, as long as one background choice is not to be preferred over the other. |
no-problem/0001/astro-ph0001330.html | ar5iv | text | # ROSAT Evidence for Intrinsic Oxygen Absorption in Cooling Flow Galaxies and Groups
## 1. Introduction
The evolution of the hot gas in the centers of massive elliptical galaxies, groups, and galaxy clusters has been most frequently interpreted in terms of the cooling flow paradigm (e.g., Fabian 1994). However, the characteristic radially increasing temperature profiles and centrally peaked X-ray surface brightness profiles usually attributed to cooling flows can be successfully described by only assuming a two-tier structure for the gravitational potential (e.g., Ikebe et al 1996; Xu et al 1998); i.e., the cooler gas sits in the shallower potential associated with the central galaxy whereas the hotter gas sits in the deeper potential of the surrounding group or cluster. The two-tier structure is typically incorporated into cooling flow models (e.g., Thomas, Fabian, & Nulsen 1987; Brighenti & Mathews 1998), but since cooling flows are not necessarily required to explain the temperatures and surface brightness profiles of the hot gas why should they still receive attention?
Unlike the empirical two-tier potential model, cooling flows attempt to offer a nearly complete theoretical description of the time evolution of the gas properties. A cooling flow describes the dynamical evolution and the X-ray emission of the hot gas by considering the following simple picture for the energy balance of a parcel of gas. Since the parcel of gas emits X-rays it loses energy. As the parcel radiates it sinks deeper (i.e., flows inward) into the potential well of the system where it encounters regions of higher density and therefore higher pressure. Consequently, the gas parcel contracts but is then heated as the result of PdV work. This balance between cooling and heating is expected to apply over much of the cooling flow. Within the central regions of highest density this balance is broken as cooling overwhelms the heating leading to the key prediction of the inhomogeneous cooling flow scenario: in massive elliptical galaxies, groups, and clusters large quantities of gas should have cooled and dropped out of the flow and be distributed at least over the central regions of the flow.
This prediction has inspired many searches in H i and CO for cold gas at the centers of cooling flows, and all such attempts have either detected small gas masses or placed upper limits which are in embarrassing disagreement with the large masses expected to have been deposited in a cooling flow (e.g., Bregman, Hogg, & Roberts 1992; O'Dea et al 1994). If instead the mass drop-out is in the form of dust then current constraints on the infrared emission in cluster cores are not inconsistent with cooling flow models (e.g., Voit & Donahue 1995; Allen et al 2000b). But cooling flows are not required to explain the infrared data in clusters (Lester et al, 1995) and individual elliptical galaxies (Tsai & Mathews, 1996).
The case for mass drop-out received a substantial boost with the discovery of intrinsic soft X-ray absorption in the Einstein spectral data of cooling flow clusters (White et al 1991; Johnstone et al 1992). The Einstein results have been verified with multitemperature models of the ASCA spectral data of clusters (Fabian et al 1994; Allen et al 2000b), elliptical galaxies, and groups (Buote & Fabian 1998; Buote 1999, 2000a; Allen et al 2000a). If the soft X-ray absorption is interpreted as cold gas then the large intrinsic column densities of cold H suggested by the Einstein and ASCA observations still suffer from the tremendous disagreement with the H i and CO observations noted above. The Einstein and ASCA results appear even more suspect when considering that in systems with low Galactic columns (where any intrinsic absorption should be easier to detect) no significant excess absorption from cold gas is ever found with the ROSAT PSPC which should be more sensitive to the absorption because of its softer bandpass, 0.1-2.4 keV (e.g., David et al 1994; Jones et al 1997; Briel & Henry 1996).
We have re-examined the ROSAT PSPC data of cooling flows to search for evidence of intrinsic soft X-ray absorption and in particular have allowed for the possibility that the absorber is not cold. Previously in Buote (2000c; hereafter PAPER2) we have presented temperature and metallicity profiles of the hot gas in 10 of the brightest cooling flow galaxies and groups inferred from deprojection analysis of the PSPC data. We refer the reader to that paper for details on the data reduction and deprojection procedure.
In this paper we present the absorption profiles of the 10 galaxies and groups, each of which has a low Galactic column density (see Table 1 of PAPER2). Partial results for two systems analyzed in the present paper (NGC 1399 and 5044) also appear with results for the cluster A1795 in Buote (2000b; hereafter PAPER1). In §2 we describe the models used to parameterize the soft X-ray absorption. We present the radial absorption profiles for a standard absorber model with solar abundances in §3.1. The effects of partial covering and the sensitivity of the results to the bandpass are discussed in §3.2 and §3.3. The results of modeling the absorption with an oxygen edge are presented in §3.4. In §3.6 we consider multiphase models such as cooling flows (§3.6.1). Evidence for emission from a warm gaseous component is described in §3.6.3. We demonstrate the consistency of the ROSAT and ASCA absorption measurements in §4. In §5 we discuss in detail the implications of our absorption measurements for the physical state of the absorber, the cooling flow scenario, observations at other wavelengths, and theoretical models. Finally, in §6 we present our conclusions and discuss prospects for verifying our prediction of warm ionized gas in cooling flows with future X-ray observations.
## 2. Models
### 2.1. Hot Plasma
As discussed in PAPER2 we use the MEKAL plasma code to represent the emission from a single temperature component of hot gas. Because of the limited energy resolution of ROSAT we initially focus on a "single phase" representation of the hot gas such that a single temperature component exists within each three-dimensional radial annulus. Multiphase models are examined in §3.6.
### 2.2. Absorber
It is standard practice to represent the soft X-ray absorption arising from the Milky Way by material with solar abundances distributed as a foreground screen at zero redshift. In this standard absorption model the X-ray flux is diminished according to $`A(E)=\mathrm{exp}(-N_\mathrm{H}\sigma (E))`$, where $`N_\mathrm{H}`$ is the hydrogen column density and $`\sigma (E)`$ is the energy-dependent photo-electric absorption cross section for an absorber with solar abundances. We allow $`N_\mathrm{H}`$ to be a free parameter in our fits to indicate any excess absorption intrinsic to a galaxy or group and also to allow for any errors in the assumed Galactic value for $`N_\mathrm{H}`$ and for any calibration uncertainties. Note that in this standard model $`N_\mathrm{H}`$ is measured as a function of two-dimensional radius, $`R`$, on the sky.
Intrinsic absorption is expressed more generally as,
$$A(E)=f\mathrm{exp}(-N_\mathrm{H}\sigma [E(1+z)])+(1-f),$$
where $`z`$ is the source redshift and $`f`$ is the covering factor. Since the redshifts are small for the objects in our sample any absorption in excess of the Galactic value indicated by the standard model is essentially that of an intrinsic absorber with $`f=1`$ placed in front of the source. We discuss the effects of $`f<1`$ in §3.2.
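The absorber model above can be sketched numerically as follows. This is an illustrative sketch only: the power-law cross section is a crude stand-in assumption, not the tabulated solar-abundance cross sections used in the actual fits.

```python
import math

def screen_transmission(energy_kev, n_h, f=1.0, z=0.0, sigma=None):
    """Transmission A(E) of a foreground absorbing screen with column
    density n_h (cm^-2), covering fraction f, at redshift z.  `sigma`
    maps energy (keV) to an effective photoelectric cross section per
    hydrogen atom (cm^2); the default E^-3 power law is a toy stand-in
    for the tabulated cross sections cited in the text."""
    if sigma is None:
        sigma = lambda e: 2.0e-22 * e ** -3.0  # toy normalisation near 1 keV
    tau = n_h * sigma(energy_kev * (1.0 + z))
    return f * math.exp(-tau) + (1.0 - f)
```

Note that a partially covered source ($`f=0.5`$) never dips below 50% transmission no matter how large the column, which is the basic reason such models permit somewhat larger intrinsic columns than $`f=1`$ models.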
As discussed in PAPER1 we consider oxygen absorption intrinsic to the galaxy or group which we represent by the simple parameterization of an edge, $`\mathrm{exp}\left[-\tau (E/E_0)^{-3}\right]`$ for $`E\geq E_0`$, where $`E_0`$ is the energy of the edge in the rest frame of the galaxy or group, and $`\tau `$ is the optical depth. To facilitate a consistent comparison to the standard absorber model we place this edge in front of the source, and thus $`\tau `$ is also measured as a function of two-dimensional radius, $`R`$, on the sky. Models with $`f<1`$ behave similarly to the solar-abundance absorber (§3.2).
The photo-electric absorption cross sections used in this paper are given by Balucińska-Church & McCammon (1992). Although Arabadjis & Bregman (1999) point out that the He cross section at 0.15 keV is in error by 13%, since we analyze $`E>0.2`$ keV we find that our fits do not change when using the Morrison & McCammon (1983) cross sections which have the correct He value. Further details on the spectral models used in the analysis are given in PAPER2.
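A minimal numerical sketch of the edge parameterization, assuming the conventional form of unity below the threshold and $`\mathrm{exp}[-\tau (E/E_0)^{-3}]`$ at and above it:

```python
import math

def edge_transmission(energy_kev, tau, e0_kev=0.532, z=0.0):
    """Transmission of a single absorption edge at rest-frame energy
    e0_kev (0.532 keV for the O I K edge used in the text): unity below
    the edge, exp[-tau * (E/E0)^-3] at and above it."""
    e_rest = energy_kev * (1.0 + z)
    if e_rest < e0_kev:
        return 1.0
    return math.exp(-tau * (e_rest / e0_kev) ** -3.0)
```

The $`(E/E_0)^{-3}`$ scaling means the edge depresses the spectrum most strongly just above threshold and recovers toward higher energies, which is what allows it to mimic excess cold-absorber columns fitted only above 0.5 keV without suppressing the observed flux at 0.2-0.3 keV.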
## 3. Radial Absorption Profiles
Following PAPER2 we plot in Figures 1-4 the radial profiles of the column density and oxygen edge optical depth obtained from the deprojection analysis according to the number of annuli for which useful constraints on the parameters were obtained. This categorizes the systems essentially according to the S/N of the data. We refer the reader to PAPER2 for the temperature and metallicity profiles corresponding to these models.
In several cases the column densities and optical depths could not be constrained in the outermost annuli. Owing to the nature of the deprojection method large errors in the outer annuli can significantly bias the results for nearby inner annuli. Hence, in some systems we fixed the column densities to their nominal Galactic values or the edge optical depths to zero in the relevant outer annuli.
### 3.1. Foreground Absorber with Solar Abundances
We begin by examining the spectral fits using the standard absorption model of a foreground screen $`(z=0)`$ with solar abundances. The left panels of Figures 1-4 show $`N_\mathrm{H}(R)`$ obtained from spectral fits over the energy range 0.2-2.2 keV. Our column density profiles are consistent with those presented in previous ROSAT studies (Forman et al 1993; David et al 1994; Trinchieri et al 1994; Kim & Fabbiano 1995; Jones et al 1997; Trinchieri et al 1997; Buote 1999) after accounting for the different plasma codes and solar abundances used.
The column densities are always within a factor of $`\sim 2`$ of the Galactic value ($`N_\mathrm{H}^{\mathrm{Gal}}`$), though in most cases $`N_\mathrm{H}`$ decreases as $`R`$ decreases such that $`N_\mathrm{H}<N_\mathrm{H}^{\mathrm{Gal}}`$ at small $`R`$. The quality of the fits for four of these systems is also formally poor $`(P<0.01)`$ in the central $`1^{\prime}`$ bin (Table 1), and for several objects in the sample the metallicities are very large and very inconsistent with all ASCA studies (see PAPER2).<sup>1</sup> (Here $`P`$ represents the $`\chi ^2`$ null hypothesis probability under the assumption of gaussian random errors. We discuss the suitability of this approximation for interpreting goodness of fit in section 3.4 of PAPER2.) Since $`N_\mathrm{H}\approx N_\mathrm{H}^{\mathrm{Gal}}`$ in the outer radii the observation that $`N_\mathrm{H}<N_\mathrm{H}^{\mathrm{Gal}}`$ at small $`R`$ indicates that, whatever the origin of the deficit, it must be intrinsic to the source. (We provide an explanation in §3.6.3.)
The approximately Galactic columns are wholly inconsistent with the large excess columns inferred from multitemperature models of the spatially integrated ASCA spectral data of these systems (Buote & Fabian 1998; Buote 1999, 2000a; Allen et al 2000a). We now address the origin of this inconsistency.
### 3.2. Effects of Partial Covering
It has been suggested that the reason why analyses with the PSPC do not infer large excess column densities for cluster cooling flows is that the standard foreground model used above systematically underestimates the true column intrinsic to the system (Allen & Fabian 1996; Sarazin, Wise, & Markevitch 1998). However, in PAPER1 we have tested this hypothesis for NGC 1399 and 5044 (and the cluster A1795) using our deprojection code. We find that the hot gas within the central $`r=1^{\prime}`$ (3D) cannot be absorbed very differently from the gas projected from larger radii because their spectral shapes for energies below $`\sim 0.5`$ keV are very similar. If we do assume an absorber with covering factor $`f=0.5`$ we obtain an excess column $`\mathrm{\Delta }N_\mathrm{H}=0`$ at best fit and $`\mathrm{\Delta }N_\mathrm{H}<N_\mathrm{H}^{\mathrm{Gal}}`$ at $`>90\%`$ confidence for NGC 1399 and 5044. (Note that the $`f=0.5`$ model implies a flat absorbing screen that bisects the source so that the 2D and 3D radii are equal, and thus the values of $`\mathrm{\Delta }N_\mathrm{H}`$ quoted do refer to quantities within the 3D radius $`r=1^{\prime}`$.)
Entirely analogous results are obtained for the other systems in our sample. We mention that partial covering models never improve the fits over the $`f=1`$ case. The only effect is that somewhat larger columns are generally allowed; e.g., for $`f=0.5`$ the implied upper limits for the excess columns are typically a factor of 20%-40% larger than for $`f=1`$.
Hence, as in PAPER1 we conclude that models with $`f<1`$ cannot account for (1) the large excess columns inferred from ASCA , (2) the sub-Galactic columns and poor fits obtained for several systems in the central $`1\mathrm{}`$, or as we now discuss (3) the sensitivity of $`N_\mathrm{H}`$ to the lower energy boundary of the bandpass.
### 3.3. Sensitivity of $`N_\mathrm{H}(R)`$ to Bandpass
Thus far we have shown (and confirmed previous results) that the ROSAT PSPC does not indicate the presence of excess absorbing material arising from cold gas intrinsic to the galaxy or group. In Figure 5 (Left) we display the ROSAT PSPC spectrum of NGC 1399 within the central arcminute along with the best-fitting plasma model modified by a standard absorber with $`N_\mathrm{H}=N_\mathrm{H}^{\mathrm{Gal}}`$. The simple model provides a good visual fit to these data as well as a formally acceptable fit ($`\chi ^2=107`$ for 97 dof and $`P=0.23`$).
However, the evidence for intrinsic soft X-ray absorption from cold gas in galaxies, groups, and clusters from the Einstein SSS and the ASCA SIS is obtained with data restricted to energies above $`0.5`$ keV because lower energies reside outside the bandpasses of those instruments. Let us examine what happens to the spectral fits of the PSPC data of NGC 1399 within the central arcminute if instead we raise the lower energy limit of the bandpass, $`E_{\mathrm{min}}`$, to a value near 0.5 keV comparable to ASCA and Einstein , and we allow $`N_\mathrm{H}`$ to be a free parameter.
We find that when $`E_{\mathrm{min}}\approx 0.2`$-$`0.3`$ keV we obtain fits essentially as indicated in Figure 5 (Left) with $`N_\mathrm{H}\approx N_\mathrm{H}^{\mathrm{Gal}}`$. We see a noticeable change near $`E_{\mathrm{min}}=0.4`$ keV when the fitted column density increases to $`N_\mathrm{H}\approx 2N_\mathrm{H}^{\mathrm{Gal}}`$. A dramatic increase occurs near $`E_{\mathrm{min}}=0.5`$ keV which we show in Figure 5 (Right). The best-fitting model gives $`N_\mathrm{H}\approx 15N_\mathrm{H}^{\mathrm{Gal}}`$ when data below 0.5 keV are excluded from the fits: the standard absorber model when $`E_{\mathrm{min}}=0.5`$ keV predicts that there should be negligible emission at lower energies in clear conflict with the data for $`E\approx 0.2`$-$`0.3`$ keV. The large column density obtained when $`E_{\mathrm{min}}=0.5`$ keV is similar to the large value inferred from ASCA with two-temperature or cooling flow models (see §5 in Buote 1999).
For larger values of $`E_{\mathrm{min}}`$ we find that $`N_\mathrm{H}`$ does not change significantly within the uncertainties. The statistical uncertainties on the fitted spectral parameters increase with increasing $`E_{\mathrm{min}}`$ since the degrees of freedom decrease as the lower energy data are ignored.
In the middle panels of Figures 1-4 we plot $`N_\mathrm{H}(R)`$ for $`E_{\mathrm{min}}=0.5`$ keV. The character of the $`N_\mathrm{H}`$ profiles for $`E_{\mathrm{min}}=0.5`$ keV is entirely different from the previous $`E_{\mathrm{min}}=0.2`$ keV case for half of the sample: NGC 507, 1399, 4472, 4649, and 5044. In these systems $`N_\mathrm{H}(R)`$ for $`E_{\mathrm{min}}=0.5`$ keV is consistent with the Galactic values in the outermost annuli and increases as $`R`$ decreases until (except for NGC 4472) it reaches a value consistent with a maximum for $`R\lesssim 1^{\prime}`$. For the other five galaxies the constraints are too poor to discern a trend, though within the large errors their profiles are consistent with increasing as $`R\rightarrow 0`$.
The excess column densities inferred when $`E_{\mathrm{min}}=0.5`$ keV are most significant for NGC 1399. The lower limits within $`R=1^{\prime}`$ are $`8\times 10^{20}`$ cm<sup>-2</sup> and $`3\times 10^{20}`$ cm<sup>-2</sup> at 95% and 99% confidence respectively which are factors of $`\sim 6`$ and 3 larger than the Galactic value. Also within $`R=1^{\prime}`$ for NGC 507 we obtain 95%/99% lower limits of $`8/6\times 10^{20}`$ cm<sup>-2</sup> compared to the adopted Galactic value of $`5.2\times 10^{20}`$ cm<sup>-2</sup>. The most significant measurement for NGC 5044 is within the $`R=1^{\prime}`$-$`2^{\prime}`$ annulus where we find that $`N_\mathrm{H}>N_\mathrm{H}^{\mathrm{Gal}}`$ at the 95% confidence level.
In order to obtain measurements of the intrinsic absorption at a higher significance level we must include the data below 0.5 keV. This requires a more appropriate model of the intrinsic absorption that does not conflict with the emission at lower energies which we now consider.
### 3.4. Intrinsic Oxygen Edge
Since we find that $`N_\mathrm{H}(E_{\mathrm{min}})\approx \mathrm{constant}`$ for $`E_{\mathrm{min}}\gtrsim 0.5`$ keV, the portion of the spectrum responsible for the excess absorption must be near 0.5 keV. Considering the PSPC resolution \[$`\mathrm{\Delta }E/E=0.43(E/0.93\mathrm{keV})^{-0.5}`$\] and effective area this translates to energies $`\sim 0.4`$-$`0.7`$ keV. The dominant spectral features in both absorption and emission over this energy range are due to oxygen, though ionized carbon and nitrogen can contribute as well (see §5.2).
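Plugging numbers into the quoted resolution scaling shows why the absorption signature smears over roughly this band. This is a simple consistency check, not part of the original analysis:

```python
def pspc_fwhm_kev(energy_kev):
    # FWHM from the resolution scaling quoted in the text:
    # Delta_E / E = 0.43 * (E / 0.93 keV)^-0.5
    return energy_kev * 0.43 * (energy_kev / 0.93) ** -0.5

fwhm = pspc_fwhm_kev(0.532)                       # resolution at the O I edge
band = (0.532 - fwhm / 2.0, 0.532 + fwhm / 2.0)   # roughly 0.4-0.7 keV
```

With the stated scaling, the FWHM at 0.532 keV is about 0.3 keV, so a feature at the O I edge is blended over approximately 0.38-0.68 keV, consistent with the 0.4-0.7 keV range given in the text.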
Since the PSPC data cannot distinguish between a single edge and multiple edges, we parameterize the intrinsic absorption with a single absorption edge at 0.532 keV (rest frame) corresponding to cold atomic oxygen (O i). In the right columns of Figures 1-4 we plot the optical depth profiles, $`\tau (R)`$, for the O i edge obtained from fits with $`E_{\mathrm{min}}=0.2`$ keV.
For every system we find that the shape of $`\tau (R)`$ is very similar to that of $`N_\mathrm{H}(R)`$ for $`E_{\mathrm{min}}=0.5`$ keV for the standard absorber. Moreover, for those systems where $`N_\mathrm{H}\gg N_\mathrm{H}^{\mathrm{Gal}}`$ for $`E_{\mathrm{min}}=0.5`$ keV we find that $`\tau (R)\lesssim 0.1`$ in the outermost annuli and increases to $`\tau (R)\sim 1`$ in the central bin. Therefore, the single oxygen edge reproduces all of the excess absorption indicated by $`N_\mathrm{H}(R)`$ for $`E_{\mathrm{min}}=0.5`$ keV for the standard absorber model.
The fits within the central radial bin are clearly improved for several systems when the single oxygen edge is added, even though in some cases the quality of the fit is already judged to be formally acceptable (null hypothesis $`P\gtrsim 0.1`$) without the edge. The improvement in the $`\chi ^2`$ fit is quantified by the F Test (e.g., Bevington 1969), and in Table 1 for each galaxy and group we give the F-Test probability, $`P(F)`$, which compares the fits with and without the edge; i.e., $`P(F)\ll 1`$ indicates the edge improves the fit significantly.

Concentrating on models with the standard absorber with $`N_\mathrm{H}=N_\mathrm{H}^{\mathrm{Gal}}`$ we see in Table 1 the largest improvements exist for NGC 5044 ($`P(F)=9.2\times 10^{10}`$), NGC 4472 ($`P(F)=8.8\times 10^8`$), and NGC 1399 ($`P(F)=1.3\times 10^6`$). A very significant improvement is also found for NGC 4649 ($`P(F)=3.2\times 10^3`$) while marginal improvements are found for NGC 507 ($`P(F)=2.7\times 10^2`$) and NGC 5846 ($`P(F)=7.6\times 10^2`$). Although only NGC 4472, 5044, and 5846 are indicated to have formally unacceptable fits in terms of the $`\chi ^2`$ null hypothesis probability for models with or without the edge, the fact that adding the edge lowers $`\chi ^2`$ by much more than 1 (i.e., $`P(F)\ll 1`$) in several cases indicates that the fits are indeed better with the edge in those systems.
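The F test for an additional term reduces to the following statistic; this is a sketch in the spirit of the Bevington-style test cited above, and converting $`F`$ into $`P(F)`$ additionally requires the survival function of the F distribution (e.g., `scipy.stats.f.sf`), which is omitted here:

```python
def f_statistic(chi2_without, dof_without, chi2_with, dof_with):
    """F statistic comparing fits without and with an extra model
    component: the chi^2 reduction per added free parameter, divided by
    the reduced chi^2 of the more complex model.  Large F (hence small
    P(F)) means the extra component improves the fit significantly."""
    extra_params = dof_without - dof_with
    return ((chi2_without - chi2_with) / extra_params) / (chi2_with / dof_with)

# Illustrative (made-up) numbers: dropping chi^2 from 110 to 90 while
# spending one degree of freedom out of ~100 gives F of order 20, far
# out on the tail of the F distribution, i.e. P(F) << 1.
```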
For every system where we found $`N_\mathrm{H}<N_\mathrm{H}^{\mathrm{Gal}}`$ for the standard absorber (§3.1), we find that $`N_\mathrm{H}`$ systematically increases when the oxygen edge is added. Although the addition of the edge results in $`N_\mathrm{H}N_\mathrm{H}^{\mathrm{Gal}}`$ for many of these systems we find that in some cases (most notably NGC 5044) $`N_\mathrm{H}`$ is still significantly less than $`N_\mathrm{H}^{\mathrm{Gal}}`$, and the fits in the central bin, though improved, are still formally unacceptable. Thus, adding the edge does significantly improve the models, but it apparently is not the only improvement required in some cases. We discuss another mechanism to improve the fits below in §3.6.3.
We obtain the best constraints on the edge optical depth for NGC 507, 1399, 4472, 4649, and 5044. Only for these systems is $`\tau >0`$ significant at the 95% confidence level in the central bin whether or not $`N_\mathrm{H}`$ is fixed to the Galactic value for the standard absorber model. In some cases $`\tau >0`$ is significant at the 99% level; we discuss individual cases below in §3.5.
We have also investigated whether these oxygen absorption profiles can be reproduced by a profile of decreasing oxygen abundance; i.e., the strong K$`\alpha `$ lines of O VII and O VIII lie between 0.5-0.65 keV. If instead we allow the oxygen abundance in the hot gas to be a free parameter in the fits we find that typically the best-fitting oxygen abundance is zero, and the quality of most of the fits is improved similarly to that found when the oxygen edge is added. This degeneracy is not surprising owing to the limited energy resolution of the PSPC. The notable exception is NGC 1399 where the fits for $`r=1^{\prime}`$ are only improved to $`\chi ^2=95.9`$ for zero oxygen abundance as opposed to 83.7 for the edge (both have variable $`N_\mathrm{H}`$). However, zero oxygen abundance in the centers is highly unlikely because of the expected enrichment from the stars in the central galaxy. And if instead we consider plausible O/Fe ratios to be at least 1/2 solar in NGC 507, 1399, 4472, 4649, and 5044, then the fits are not as much improved as when adding the edge.
As mentioned above (and in PAPER1) we find that for most radii the constraints on the edge energy are not very precise which is why we fixed the edge energy in our analysis. The best constraints are available for NGC 1399 and 5044 within the central bin for which we obtain $`0.51_{-0.05}^{+0.05}`$ keV and $`0.51_{-0.05}^{+0.09}`$ keV (90% confidence) for the edge energies for the standard absorber models with variable $`N_\mathrm{H}`$. (Models with fixed foreground Galactic columns give similar results; e.g., $`0.53_{-0.03}^{+0.05}`$ keV for NGC 1399.)
These constraints are consistent with the lower ionization states of oxygen but not edges from the highest states O vi-viii. Due to the limited resolution we can add additional edges to "share" the $`\tau `$ obtained for the O i edges, although when using a two-edge model a significant $`\tau `$ cannot be obtained for edge energies above $`\sim 0.65`$ keV corresponding to O vi.
### 3.5. Comments on Individual Systems
We elaborate further on the results for individual systems. When comparing $`N_\mathrm{H}`$ profiles to those obtained from previous ROSAT PSPC studies we implicitly account for any differences in the plasma codes and solar abundances used.
#### 3.5.1 Systems with 7 Annuli
In Figure 1 we display the results for the three systems where the spectral parameters are well determined in seven annuli. These observations thus generally correspond to the highest S/N data in our sample. The evidence for intrinsic oxygen absorption is strongest for these systems.
NGC 507: The oxygen edge optical depth, $`\tau (R)`$, falls gradually from the center and remains significantly non-zero out to $`R\approx 4^{\prime}`$; e.g., in the $`R=3^{\prime}`$-$`4.25^{\prime}`$ annulus the 95% confidence lower limits on $`\tau `$ are 0.39 and 0.24 respectively in models with fixed (Galactic) and variable $`N_\mathrm{H}`$ for the standard absorber. The values of $`N_\mathrm{H}`$ for the standard absorber are consistent with those obtained by Kim & Fabbiano (1995) with the PSPC data.
NGC 1399: This system exhibits the most centrally peaked $`\tau `$ profile in our sample. Within the central arcminute $`\tau >0.26`$ and 0.33 at 99% confidence respectively for the fixed and free $`N_\mathrm{H}`$ models. At larger radii the non-zero optical depths are also quite significant; e.g., for $`R=2.5^{\prime}`$-$`4^{\prime}`$ $`\tau >0.64`$ and 0.50 at 99% confidence for the fixed and free $`N_\mathrm{H}`$ models. We obtain values of $`N_\mathrm{H}`$ for the standard absorber consistent with the previous PSPC study by Jones et al (1997).
NGC 5044: The second radial bin ($`R=1^{\prime}`$-$`2^{\prime}`$) actually has a smaller uncertainty on the oxygen edge optical depth than the central bin; i.e., the 95% lower limits on $`\tau `$ are 0.87 and 0.79 respectively for the fixed and free $`N_\mathrm{H}`$ models. In fact, the corresponding 99% lower limits are 0.75 and 0.42 for $`R=1^{\prime}`$-$`2^{\prime}`$ which are larger than the values for the $`R=1^{\prime}`$ bin. The optical depth remains significantly non-zero out to the $`R=3^{\prime}`$-$`4.5^{\prime}`$ bin in which we obtain 95% lower limits on $`\tau `$ of 0.36 and 0.10 for the fixed and free $`N_\mathrm{H}`$ models. Our values of $`N_\mathrm{H}`$ are consistent with those obtained from the PSPC data by David et al (1994).
#### 3.5.2 Systems with 5-6 Annuli
In Figure 2 we display the results for the three systems where the spectral parameters are well determined in 5-6 annuli. Only for NGC 4472 is the intrinsic oxygen absorption clearly significant.
NGC 2563: The uncertainties on $`\tau (R)`$ are large and consistent with zero at the center. However, the shapes of the error regions, especially the large uncertainty at the center, are consistent with the same type of increasing $`\tau `$ profile with decreasing $`R`$ found for the systems with seven annuli.
NGC 4472: The optical depth is consistent with $`\tau \approx 2`$ for $`R\lesssim 5^{\prime}`$ and then decreases rapidly at larger radii. Unlike the other systems with evidence for intrinsic absorption $`\tau `$ is most significant away from the central bin; i.e., for $`R=3.25^{\prime}`$-$`5^{\prime}`$ the 95% confidence lower limits on $`\tau `$ are 1.62 and 1.75 respectively for the fixed and free $`N_\mathrm{H}`$ models. (The corresponding 99% lower limits are 1.27 and 1.46.) Interestingly, $`R\lesssim 5^{\prime}`$ corresponds to the region where Irwin & Sarazin (1996) identified holes in the X-ray emission from visual examination of the PSPC image, and thus the large oxygen edge optical depths could be related to these holes. The sub-Galactic columns obtained for the standard absorber with variable $`N_\mathrm{H}`$ are consistent with those obtained by Forman et al (1993).
NGC 5846: The uncertainties on $`\tau (R)`$ are large, and although a substantial amount of intrinsic absorption is allowed by the data, no excess absorption is required.
#### 3.5.3 Systems with 4 Annuli
In Figure 3 we display the results for the three systems where the spectral parameters are well determined in 4 annuli. The uncertainties on $`\tau (R)`$ are large for each of these systems and thus no intrinsic oxygen absorption is clearly required by the data.
#### 3.5.4 Systems with 3 Annuli
In Figure 4 we display the results for the one system where the spectral parameters are well determined in only 3 annuli, NGC 4649. In contrast to the systems with 4 annuli we find evidence for significant intrinsic oxygen absorption in the central bin for NGC 4649.
NGC 4649: In the central radial bin the 90% lower limits on $`\tau `$ are 0.55 and 0.20 respectively for the fixed and free $`N_\mathrm{H}`$ models, although only the 95% lower limit for the model with fixed $`N_\mathrm{H}`$ is significantly larger than zero (Table 1). Our values of $`N_\mathrm{H}`$ for the standard absorber are consistent with those obtained from the PSPC data by Trinchieri et al (1997).
### 3.6. Multiphase Models
#### 3.6.1 Simple Cooling Flow
In the inhomogeneous cooling flow scenario the hot gas is expected to emit over a continuous range of temperatures in regions where the cooling time is less than the age of the system. We consider a simple model of a cooling flow where the hot gas cools at constant pressure from some upper temperature, $`T_{\mathrm{max}}`$ (e.g., Johnstone et al 1992). The differential emission measure of the cooling gas is proportional to $`\dot{M}/\mathrm{\Lambda }(T)`$, where $`\dot{M}`$ is the mass deposition rate of gas cooling out of the flow, and $`\mathrm{\Lambda }(T)`$ is the cooling function of the gas (in our case, the MEKAL plasma code).
Since the gas is assumed to be cooling from some upper temperature $`T_{\mathrm{max}}`$, the cooling flow model requires that there be a reservoir of hot gas emitting at temperature $`T_{\mathrm{max}}`$ but is not cooling out of the flow. Consequently, our cooling flow model actually consists of two components, CF+1T, where "CF" is the emission from the cooling gas and "1T" is emission from the hot ambient gas. We set $`T_{\mathrm{max}}`$ of the CF component equal to the temperature of the 1T component, and both components are modified by the same photoelectric absorption. This simple model of a cooling flow is appropriate for the low energy resolution of the ROSAT PSPC and has the advantage of being well studied, relatively easy to compute, and a good fit to the ASCA data of many elliptical galaxies and groups (e.g., Buote & Fabian 1998; Buote 1999, 2000a).
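The constant-pressure emission-measure weighting (differential emission measure proportional to $`\dot{M}/\mathrm{\Lambda }(T)`$) can be discretised as below. The power-law cooling function is a deliberately crude stand-in assumption for the MEKAL cooling function used in the actual model:

```python
def cooling_flow_weights(t_max_kev, n_bins=20, lam=None):
    """Relative emission-measure weights for gas cooling at constant
    pressure from t_max_kev, binned in temperature.  Each bin's weight
    is dT / Lambda(T); the list is normalised to unity (the overall
    scale carries the Mdot dependence)."""
    if lam is None:
        lam = lambda t: t ** 0.5  # bremsstrahlung-like toy cooling function
    dt = t_max_kev / n_bins
    temps = [(i + 0.5) * dt for i in range(n_bins)]
    raw = [dt / lam(t) for t in temps]
    total = sum(raw)
    return temps, [w / total for w in raw]
```

With this weighting the cooler bins receive non-negligible emission measure, which is why the CF component carries stronger O VII and O VIII line emission than a single-temperature model at $`T_{\mathrm{max}}`$.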
When fitting this cooling flow model to the ROSAT spectra of the galaxies and groups in our sample we find that if only absorption from the standard cold absorber model with solar abundances is included then we obtain results identical to the single-phase models; i.e., the CF component is clearly suppressed by the fits. Since the CF model includes temperature components below $`0.5`$ keV it has stronger O VII and O VIII lines than the single-phase models which, as shown above, already predict too much emission from these lines. It is thus not surprising that only when an intrinsic oxygen absorption edge is included in the fits can we obtain a significant contribution from a CF component.
Even when adding the oxygen edge the cooling flow models do not improve the fits perceptibly in any case. And since the magnitude of $`\tau `$ for the oxygen edge is degenerate with $`\dot{M}`$ of the cooling flow component, the constraints on both parameters are quite uncertain. For NGC 5044, which has the best constraints, adding the CF component improves $`\chi ^2`$ from 156.5 to 154.1 for 116 dof within the central arcminute (i.e., marginal improvement). The oxygen edge optical depths are consistent with those obtained from the single-phase analysis. Only within $`r=2^{\prime}`$ is there an indication of significant cooling with $`\dot{M}\approx 12M_{\odot}\mathrm{yr}^{-1}`$ which is very consistent with the ROSAT results of David et al (1994) within the same radius. (Our mass deposition rates have large statistical uncertainties because of the inclusion of the intrinsic oxygen edge.)
Thus, simple cooling flow models give results that are entirely consistent with the single-phase models.
#### 3.6.2 Two Hot Phases
A two-temperature model (2T) is a more flexible multiphase emission model than the constant-pressure cooling flow and can very accurately mimic a cooling flow spectrum over typical X-ray energies (Buote, Canizares, & Fabian 1999). If we restrict the temperatures of the 2T model to lie between $`0.5`$-$`2`$ keV appropriate for hot gas near the virial temperatures of these galaxies and groups, then we obtain results equivalent to the cooling flow models above; e.g., (1) the extra temperature component is only allowed if the intrinsic oxygen edge is included, and (2) results are entirely consistent with the single-phase models. Since the temperatures of the 2T models are fitted separately, constraints on the 2T models are even poorer than the cooling flows because of the extra free parameter.
#### 3.6.3 Two-Phase Medium: Warm and Hot Gas
Recall that, when fitted with only a standard absorber with solar abundances, $`N_\mathrm{H}(R)`$ decreases as $`R`$ decreases such that $`N_\mathrm{H}`$ is less than $`N_\mathrm{H}^{\mathrm{Gal}}`$ near $`R=0`$ for most of the galaxies and groups (§3.1). This trend implies the existence of excess soft X-ray emission above that produced by the hot gas, the signature of which is most evident at the centers of these galaxies and groups. Although the sub-Galactic values of $`N_\mathrm{H}`$ are only marginally significant in many cases, they are highly significant for NGC 4472 and NGC 5044.
If the excess soft X-rays near the centers represent emission from coronal gas, then the temperatures must be $`\sim 0.1`$ keV (i.e., distinctly cooler than the hot gas phase). For NGC 5044, which has the most significant sub-Galactic column densities in the central bins, we find that adding an extra temperature component to the single-phase model modified only by the standard absorber with $`N_\mathrm{H}=N_\mathrm{H}^{\mathrm{Gal}}`$ (i.e., no intrinsic oxygen edge) improves the fits very similarly to allowing $`N_\mathrm{H}`$ of the single-phase model to take a value significantly less than $`N_\mathrm{H}^{\mathrm{Gal}}`$. For example, in the central bin the fit improves from $`\chi ^2=191`$ (118 dof) for the 1T model to $`\chi ^2=133`$ (116 dof) for the 2T model (both models with $`N_\mathrm{H}=N_\mathrm{H}^{\mathrm{Gal}}`$), very similar to the result obtained for the 1T model with $`N_\mathrm{H}<N_\mathrm{H}^{\mathrm{Gal}}`$ (i.e., $`\chi ^2=139`$ for 117 dof; "Free" in Table 1). Essentially the same large improvement is found for the second radial bin ($`R=1^{\prime}`$–$`2^{\prime}`$). Significant but smaller improvement in the fits ($`\mathrm{\Delta }\chi ^2\sim 10`$) is also seen in the third and fourth bins (i.e., $`R=2^{\prime}`$–$`4.5^{\prime}`$).
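As a rough gauge of how unlikely such an improvement is by chance, note that for two extra free parameters the $`\chi ^2`$ survival function reduces to a simple exponential. The sketch below is a heuristic back-of-the-envelope estimate only (adding a model component violates the formal conditions for such tests), using the fit statistics quoted above:

```python
import math

# Delta-chi^2 = 191 - 133 = 58 for 2 extra parameters (1T -> 2T,
# central bin of NGC 5044, values from the text).  For 2 degrees of
# freedom the chi^2 survival function is exp(-x/2), so the chance
# probability of this large an improvement is vanishingly small.
dchi2 = 191.0 - 133.0
p_chance = math.exp(-dchi2 / 2.0)   # chi^2 sf for exactly 2 dof
print(f"{p_chance:.1e}")            # -> 2.5e-13
```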
The inferred temperature of the soft component in the central bin is $`T=0.05_{-0.03}^{+0.03}`$ keV ($`6_{-4}^{+4}\times 10^5`$ K) at 90% confidence, consistent with values obtained for other radii; i.e., consistent with "warm" gas rather than "hot" gas near the virial temperatures of the galaxies and groups. Gas at these warm temperatures is not optically thin to photons with energies near 0.5 keV (e.g., Krolik & Kallman 1984), and thus this warm gas may be responsible for the intrinsic oxygen absorption inferred in §3.4. However, since this warm gas is apparently partially photoionized by the hot gas (and perhaps by itself), a proper calculation of the emission from the warm gas must consider radiative transfer effects, which is beyond the scope of this paper. (We shall continue to use the optically thin models in this paper.) We discuss the properties of this warm gas in more detail in §5.
The improvement obtained in the fit when adding a component of warm gas to the single-phase model ($`N_\mathrm{H}=N_\mathrm{H}^{\mathrm{Gal}}`$) is somewhat larger than that obtained when adding the single oxygen edge (i.e., $`\chi ^2=133`$ for 116 dof for the 2T model versus $`\chi ^2=156.5`$ for 117 dof for the edge; Table 1). Unfortunately, due to the limited energy resolution of the PSPC it is very difficult to obtain simultaneous constraints on both absorption and emission models for both the warm and hot gas.
The most reliable constraints on the warm gas component are possible for NGC 5044 because for 1T models we find that $`N_\mathrm{H}`$ of the standard absorber is well below $`N_\mathrm{H}^{\mathrm{Gal}}`$ at small $`R`$ whether or not an intrinsic oxygen edge is included (Figure 1). For a 2T model with an intrinsic oxygen edge we find that the temperature of the warm component in the central bin is $`T=0.06_{-0.04}^{+0.03}`$ keV ($`7_{-5}^{+4}\times 10^5`$ K) at 90% confidence, consistent with that obtained for the 2T model without the oxygen edge. The oxygen edge optical depth is not very well constrained, $`\tau =0.8_{-0.6}^{+0.4},1.2_{-0.7}^{+0.8}`$ (90% confidence) in the inner two bins respectively, which is about half the best-fitting value and near the 95% lower limit of $`\tau `$ obtained without including the emission from the warm gas (Table 1). The weakened constraint on the oxygen edge optical depth for the 2T model of NGC 5044 reflects the relatively small improvement in the fit ($`\chi ^2`$ of 127 for 115 dof) over the 2T model without an edge. For the other galaxies, most notably NGC 1399, the constraints on the oxygen edge are not weakened nearly as much.
Emission from such a warm gas component is consistent with, though not as clearly required by, the PSPC data for the other systems. Only for NGC 4472 is clearly significant improvement found when adding both the warm gas component and the oxygen edge. Temperatures ($`T\sim (5`$–$`10)\times 10^5`$ K) and the reduction in edge optical depth are similar to those found for NGC 5044. Future observations with better energy resolution are required to definitively confirm and measure the emission and absorption properties of the warm gas in all of these systems.
### 3.7. Caveats
(i) Calibration: The gain of the PSPC is well calibrated, and in particular no significant calibration problems near 0.5 keV have been reported.<sup>2</sup><sup>2</sup>2See http://heasarc.gsfc.nasa.gov/docs/rosat. The large values of $`\tau \sim 1`$ obtained in the central regions imply that the absorption significantly affects an energy range comparable to the energy resolution of the PSPC; e.g., the O i edge absorbs 25% of the flux at 0.8 keV for $`\tau =1`$. Thus possible residual calibration errors near 0.5 keV, where the effective area is changing rapidly (e.g., Figure 1 of Snowden et al 1994), cannot explain the need for absorption at higher energies. Moreover, the shape of $`\tau (R)`$ is not the same for each system, as would be expected if calibration were responsible for the intrinsic oxygen absorption found in half of the sample; e.g., $`\tau (R)`$ for NGC 1399 is much more centrally peaked than for NGC 4472 or the other systems. The agreement between the absorption inferred by the PSPC and ASCA mentioned below in §4 further argues against a systematic error intrinsic to the PSPC being responsible for the measured oxygen absorption.
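The 25% figure can be checked directly; a minimal numerical sketch, assuming the 0.532 keV O i threshold and the standard $`(E/E_0)^{-3}`$ falloff of the edge optical depth above threshold:

```python
import math

def edge_transmission(E_keV, tau0, E0_keV=0.532):
    """Transmitted fraction for a photoabsorption edge of optical
    depth tau0 at threshold E0, with the optical depth falling as
    (E/E0)^-3 above the edge (the standard edge parameterization)."""
    if E_keV < E0_keV:
        return 1.0   # no absorption below threshold
    return math.exp(-tau0 * (E_keV / E0_keV) ** -3)

# Fraction of 0.8 keV flux absorbed by a tau = 1 O I edge:
absorbed = 1.0 - edge_transmission(0.8, 1.0)
print(f"{absorbed:.0%}")   # -> 25%
```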
(ii) Galactic Columns: All of the objects in our sample reside at high Galactic latitude, and thus the Galactic absorption should be fairly uniform over the relevant $`5^{\prime}`$–$`10^{\prime}`$ scales. Errors in the assumed Galactic columns (Dickey & Lockman, 1990) should only affect the baseline value and not the variation with radius.
(iii) Background: Errors in the background level most seriously affect the lowest energies ($`\lesssim 0.4`$ keV), which are also most sensitive to the column density of the standard absorber model. Hence, measurements of $`N_\mathrm{H}`$ at the largest radii (which have the lowest S/N) are the most affected by background errors. The hydrogen column densities measured in the outer radii (see Figures 1-4) are very similar to, and usually consistent with, the assumed Galactic values within the estimated $`1\sigma `$ errors, which attests to the accuracy of our background estimates. The intrinsic oxygen optical depths in the central radial bins are very insensitive to the background level.
## 4. Comparison to ASCA
The intrinsic oxygen absorption indicated by the PSPC data in half of our sample is most significant within the central $`1^{\prime}`$–$`2^{\prime}`$, which is similar to the width of the ASCA PSF. In addition, since the ASCA SIS is limited to $`E>0.5`$ keV, and its efficiency near 0.5 keV is severely reduced by instrumental oxygen absorption, ASCA cannot be expected to distinguish an oxygen edge from a standard absorber with solar abundances. However, it is instructive to examine the consistency between results obtained from spatially resolved (PSPC) and single-aperture (ASCA) methods. As mentioned in PAPER1, the results obtained when adding an oxygen edge to the ASCA data of NGC 1399 and 5044 are consistent with those obtained with the PSPC. Since similar results are found for the other objects in our sample showing intrinsic absorption, we focus on NGC 1399 for illustration.
Previously we (Buote, 1999) have fitted two-temperature models to the accumulated ASCA SIS and GIS data within $`R\sim 5^{\prime}`$ of NGC 1399 and obtained $`N_\mathrm{H}^\mathrm{c}=49_{-9}^{+6}\times 10^{20}`$ cm<sup>-2</sup> (90% confidence) for the standard absorber on the cooler temperature component. Using the meteoritic solar abundances (see §4.1.3 of PAPER2) slightly modifies this result to $`N_\mathrm{H}^\mathrm{c}=40_{-7}^{+6}\times 10^{20}`$ cm<sup>-2</sup>, which is about 30 times larger than the Galactic value ($`N_\mathrm{H}^\mathrm{c}=1.3\times 10^{20}`$ cm<sup>-2</sup>). Comparing this result to those obtained from the PSPC for $`E_{\mathrm{min}}=0.5`$ keV (Fig 1), we see that the ASCA column density is qualitatively similar to the value of $`N_\mathrm{H}\sim 20\times 10^{20}`$ cm<sup>-2</sup> obtained within $`R=1^{\prime}`$ and is consistent with the total column within $`R\sim 5^{\prime}`$. If instead the columns of the standard absorber are fixed to Galactic on both components, and an intrinsic O i edge is added to the cooler component, then we obtain $`\tau =6.0_{-0.7}^{+0.7}`$ (90% confidence) for the ASCA data. Again this ASCA result is similar to $`\tau \sim 4`$ in the central PSPC bin and is very consistent with the value of $`\tau =5.7`$ obtained from adding up the best-fitting values obtained with the PSPC within $`R=5^{\prime}`$. Therefore, the oxygen edge provides a description of the excess absorption inferred from multitemperature models of ASCA data within the central few arcminutes that is as good as or better than a standard cold absorber with solar abundances, and it yields optical depths that are consistent with those obtained with the PSPC data.
Although a similar consistency is achieved for NGC 4472, the interpretation of the absorption in the two-component model of the single-aperture ASCA data is not as straightforward because the absorption is not obviously concentrated at the center (i.e., on the cooler temperature component). In fact, a spatially uniform absorber within $`R=5^{\prime}`$ is probably a better description of the PSPC data. If the column densities on both temperature components are tied together for the two-temperature model of the ASCA data, the quality of the fit is slightly worse ($`\mathrm{\Delta }\chi ^2=8`$ for 361 dof), but the resulting $`N_\mathrm{H}\sim 11\times 10^{20}`$ cm<sup>-2</sup> is a fair representation of the average $`N_\mathrm{H}`$ profile obtained from the PSPC for $`E_{\mathrm{min}}=0.5`$ keV (Fig 2). Similar agreement is found for the O i edge when applied to both temperature components.
## 5. Warm Ionized Gas in Cooling Flows
We consider now in some detail the properties of the intrinsic absorber and their implications. Initially we focus our attention on the physical state of the absorber.
### 5.1. Why Not Dust?
Models of dust grains indicate that dust can give rise to significant amounts of absorption between 0.1-1 keV (e.g., Laor & Draine 1993). In principle such grains could explain the intrinsic absorption we have inferred for energies above $`\sim 0.5`$ keV in the PSPC data of half of the galaxies and groups in our study, as well as the excess absorption detected for energies above $`\sim 0.5`$ keV in the ASCA data of bright galaxies, groups, and clusters.
However, dust should also heavily absorb X-rays with energies between $`0.1`$–$`0.4`$ keV (e.g., Laor & Draine 1993), which is inconsistent with the ROSAT data for the 10 galaxies and groups in our study and, e.g., the large sample of ROSAT clusters studied by Allen & Fabian (1997). To evade this constraint (i.e., no absorption below 0.4 keV) an unconventional model for the formation of dust grains has to be postulated; e.g., dust grains that condense out of a medium in which helium remains ionized (Arnaud & Mushotzky, 1998). Even if we consider the (rather unlikely) possibility that dust does not absorb X-rays below $`0.4`$ keV, dust still cannot account for the excess X-ray emission at those energies implied by the sub-Galactic column densities (§3.1 and 3.6.3).
Other strong arguments against the existence of large quantities of oxygen-absorbing dust in cooling flows have been made in the past. At the centers of cooling flows, where the gas density is largest and the soft X-ray absorption is most significant, dust should not be present in large quantities because the grains are rapidly destroyed by sputtering in the hot gas (e.g., Tsai & Mathews 1995; Voit & Donahue 1995). A different argument, due to Voit & Donahue (1995), considers that the transient heating of the grains by X-rays from the hot gas prevents CO from fully condensing onto dust grains. Consequently, significant amounts of oxygen would remain in molecular gas, which would produce CO emission from rotational lines that is inconsistent with the generally negligible CO detections in cooling flows (e.g., Bregman, Hogg, & Roberts 1992; O'Dea et al 1994).
### 5.2. Temperature of the Warm Ionized Gas
Unlike dust, a (primarily) collisionally ionized gas can account for all of the key features of the observed soft X-ray absorption and emission. The lack of intrinsic absorption observed for energies between $`0.2`$–$`0.4`$ keV in the ROSAT data requires that H and He be completely ionized, implying a gas temperature $`T\gtrsim 1.0\times 10^5`$ K (e.g., Sutherland & Dopita 1993). In order to have significant absorption at 0.5 keV the temperature cannot be larger than $`\sim 1\times 10^6`$ K (see Figure 2 of Krolik & Kallman 1984). This absorbing temperature range ($`T=10^5`$–$`10^6`$ K) is entirely consistent with that inferred from the gas emission to explain the sub-Galactic column densities, especially for NGC 5044 (§3.6.3). This consistency of the absorption and emission properties lends strong support to the idea that the absorber is warm ionized gas.
At these temperatures carbon and nitrogen are not completely ionized (e.g., Nahar & Pradhan 1997; Sutherland & Dopita 1993), and thus these elements will also contribute to the soft X-ray absorption. For nitrogen the states N iv–vi are significant, and their edge energies span 0.46-0.55 keV (rest frame), which is consistent with the edge energy range determined from the PSPC data. Since $`\mathrm{N}/\mathrm{O}=1/8.51`$ (assuming solar abundance ratios), and the threshold cross sections for absorption are similar for N and O, only $`\sim 12\%`$ of the optical depth we have measured in each system likely arises from ionized nitrogen.
The ionization fraction of carbon changes rapidly near $`T=10^5`$ K, with C v dominating for temperatures above this value and up to $`T\sim 10^6`$ K. The edge energy of C v is 0.39 keV, and since $`\mathrm{C}/\mathrm{O}\sim 0.5`$ (assuming solar abundance ratios) and the threshold cross sections of C and O are similar, the optical depth of C v is about half that expected from a dominant ionized state of oxygen. However, the strong instrumental carbon absorption leaves the PSPC with essentially no effective area over energies 0.28 keV to $`\sim 0.4`$ keV (e.g., see Figure 1 of Snowden et al 1994), and thus it is only possible to detect intrinsic absorption from the C v edge for energies above $`\sim 0.4`$ keV even considering the smearing due to the limited energy resolution. This is entirely consistent with the variation of $`N_\mathrm{H}`$ with $`E_{\mathrm{min}}`$ described in §3.3.
Considering the energy resolution and effective area curve of the PSPC, the 0.532 keV edge that we have used to parameterize the intrinsic absorption is likely an average of a $`\sim 40\%`$ contribution from the C v edge (0.39 keV) and N iv–vi edges (0.46-0.55 keV) with a $`\sim 60\%`$ contribution from ionized oxygen states. Although as discussed in §3.4 the PSPC data cannot distinguish between multi-edge models, when using a more realistic absorber model consisting of C v and N vi edges and an oxygen edge, we are able to obtain a significant optical depth for the oxygen edge for energies as high as $`\sim 0.75`$ keV, consistent with O vii (0.74 keV). However, to ensure that oxygen produces at least as much absorption as the C and N edges, the PSPC data also require a contribution from edges around $`0.6`$–$`0.65`$ keV corresponding to edges from O iv–vi.
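The quoted split follows from the abundance ratios alone; a rough sketch under the stated assumption of similar threshold cross sections for C, N, and O:

```python
# Fractional contributions to the total measured optical depth,
# assuming solar abundance ratios and comparable threshold cross
# sections for the C, N, and O edges (the text's approximation).
N_over_O = 1 / 8.51   # solar N/O
C_over_O = 0.5        # solar C/O (approximate)

total = 1.0 + C_over_O + N_over_O
frac_CN = (C_over_O + N_over_O) / total
frac_O = 1.0 / total
# Roughly the ~40% (C+N) / ~60% (O) split quoted above:
print(f"C+N: {frac_CN:.0%}, O: {frac_O:.0%}")
```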
Consideration of these maximum allowed ionization states for oxygen indicates that the maximum temperature of the warm gas is more like $`T\sim 5\times 10^5`$ K if the gas is isothermal and collisionally ionized (e.g., Nahar 1999; Sutherland & Dopita 1993). The absorption signature of this warm gas is not one dominant feature near 0.5 keV but rather a relatively broad trough over energies 0.4 to $`\sim 0.8`$ keV for total optical depths of unity (see Figure 1 of Krolik & Kallman 1984).
(We mention that the edge energies we have quoted are from Daltabuit & Cox (1972), though Gould & Jung (1991) argue that the edge energy for O i is $`\sim 10`$ eV higher. Such differences may be relevant for modeling future high-resolution spectra but are unimportant for our present discussion of the PSPC data.)
Hence, a proper absorber model needs to consider several edges from different ionization states of oxygen as well as edges from ions of C and N. Since the warm gas absorbs photons from the hot gas, the assumption of collisional ionization equilibrium is not strictly valid, nor is it clear that the warm gas is fully optically thin to its own radiation, as was assumed in §3.6.3 for convenience. Consequently, the single-edge oxygen absorber that we have used throughout this paper (and PAPER1) is intended primarily as a phenomenological tool to establish the existence and study the gross properties of the absorber, which is appropriate for the low spectral resolution afforded by the PSPC data. If our basic results are confirmed with the substantially higher quality data from Chandra and XMM, then it will be appropriate to expend the effort to construct rigorous models of the warm absorber accounting for many edges and possible radiative transfer effects that are not currently available in xspec.
### 5.3. Absorber Masses vs Mass Drop-Out
As suggested in PAPER1, this warm ionized absorber might be the gas that has dropped out of the putative cooling flow during the lifetime of the galaxy or group, and thus could confirm the inhomogeneous cooling flow scenario that has been suggested to operate in massive elliptical galaxies, groups, and clusters (e.g., Fabian 1994). To make this connection between our measurements of oxygen absorption and the cooling flow scenario, we first estimate the mass of absorbing material implied by the measured optical depths. This mass is then compared to the cooling flow mass deposition rate inferred from ASCA data.
Before computing the absorber masses some caveats must be discussed. First, although the simple constant-pressure cooling flow model discussed in §3.6.1 can describe the X-ray emission of the hot gas just as well as the single-phase model, it does not describe the excess 0.2-0.4 keV X-rays presumably arising from the emission of the warm absorbing gas (§3.6.3). Although at this time we cannot exclude the possibility that, with a more rigorous treatment of the absorption and emission properties of the warm gas (see the end of the previous section), the simple cooling flow model could be compatible with the PSPC data between 0.2-0.4 keV, it is quite possible that an important modification of the simple cooling flow model is required (see §5.5). As a convenient benchmark for comparison to most previous studies we shall consider here the mass deposition rates predicted by the constant-pressure cooling flow models.
Second, the oxygen edge optical depths predicted by the single-edge models without including the emission from the warm gas are certainly overestimates. Recall that when the warm gas emission is included for NGC 5044, $`\tau `$ is reduced to a value near the 95% lower limit of the result obtained without the warm gas emission (§3.6.3). Of equal importance, when additional edges are included the inferred optical depth of each edge decreases, and we expect several edges from different ionization states of oxygen, carbon, and nitrogen to contribute (§5.2). Since the absorption of an edge is not linear in the energy of the edge (i.e., $`A_0(E)=\mathrm{exp}[-\tau _0(E/E_0)^{-3}]`$ above threshold, and $`A_1(E)A_2(E)\ne A_{1+2}(E)`$ if the edge energies $`E_1\ne E_2`$), by spreading multiple edges over a large energy range one can produce the observed absorption with a smaller total optical depth than can be achieved with a single edge. Consequently, the single-edge optical depths obtained in §3.4 should be considered upper limits.
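The non-multiplicativity is easy to verify numerically; a toy sketch with two hypothetical edge energies, assuming the standard profile in which the optical depth falls as $`(E/E_0)^{-3}`$ above threshold:

```python
import math

def edge_abs(E, tau0, E0):
    """Transmission of one photoabsorption edge: optical depth tau0
    at threshold E0, falling as (E/E0)^-3 above the edge."""
    return 1.0 if E < E0 else math.exp(-tau0 * (E / E0) ** -3)

E1, E2 = 0.45, 0.60        # hypothetical edge energies (keV)
tau1, tau2 = 0.5, 0.5
E = 0.55                   # energy between the two thresholds

two_edges = edge_abs(E, tau1, E1) * edge_abs(E, tau2, E2)
one_edge = edge_abs(E, tau1 + tau2, E1)   # single edge, summed depth

# Between the thresholds only the lower-energy edge absorbs, so the
# two-edge transmission differs from the single summed-depth edge:
assert two_edges > one_edge
```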
Let us now estimate the amount of absorbing material implied by these absorption measurements with these caveats in mind, and in particular with the understanding that the inferred absorber masses are most likely over-estimates. Assuming the optical depths refer to the O i edge, the measured values of $`\tau `$ imply a hydrogen column density (assuming $`\sigma =5.5\times 10^{-19}`$ cm<sup>2</sup> at threshold) and thus a mass within a projected radius, $`R`$,
$`M_{\mathrm{abs}}(<R)=`$
$`(7.8\times 10^9)\,\tau \left({\displaystyle \frac{R}{10\mathrm{k}\mathrm{p}\mathrm{c}}}\right)^2\left({\displaystyle \frac{\mathrm{O}/\mathrm{H}}{8.51\times 10^{-4}}}\right)^{-1}M_{\odot },`$ (1)
where $`\tau `$ is the optical depth of the O i edge and O/H is the oxygen abundance of the absorber. Although the projected mass is larger than the mass within the 3D radius $`r=R`$, the value of $`\tau `$ in equation (1) slightly underestimates the 3D value, as discussed in §3.2; i.e., these projection effects approximately cancel. The metallicities of the hot gas in the central bins of the objects in our sample are larger than solar (PAPER2), and thus we expect the same for the absorber. Since the oxygen abundance is uncertain, we shall quote results assuming a solar O abundance and recognize that $`M_{\mathrm{abs}}`$ could be overestimated by a factor of 2-3 in the central bin. The expected contribution from carbon and nitrogen (§5) to $`\tau `$ also reduces $`M_{\mathrm{abs}}`$ by another $`\sim 40\%`$.
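Equation (1) is simple to evaluate; the sketch below also rederives the numerical coefficient from the threshold cross section and solar O/H quoted above, assuming (our assumption) a mean mass per hydrogen atom of $`\sim 1.4m_\mathrm{H}`$ to account for helium:

```python
import math

M_SUN = 1.989e33      # g
M_H = 1.6726e-24      # g, hydrogen mass
KPC = 3.0857e21       # cm

def absorbing_mass(tau, R_kpc, O_over_H=8.51e-4,
                   sigma=5.5e-19, mu_per_H=1.4):
    """Equation (1): mass within projected radius R implied by an
    O I edge of optical depth tau (threshold cross section sigma in
    cm^2), in solar masses; mu_per_H ~ 1.4 accounts for helium."""
    N_H = tau / (sigma * O_over_H)          # hydrogen column, cm^-2
    area = math.pi * (R_kpc * KPC) ** 2     # projected area, cm^2
    return N_H * mu_per_H * M_H * area / M_SUN

# tau = 1 within R = 10 kpc gives ~7.5e9 M_sun, close to the quoted
# 7.8e9 coefficient (the exact value depends on the adopted mu_per_H):
print(f"{absorbing_mass(1.0, 10.0):.2e}")
```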
In Table 2 we give $`M_{\mathrm{abs}}`$ for NGC 507, 1399, 4472, 4649, and 5044, both in the central bin $`(R=1^{\prime})`$ and as the total mass interior to the largest bin investigated; the edge optical depths used refer to the single-phase models since the cooling flow models give entirely consistent values (§3.6.1). The mass deposition rates, $`\dot{M}`$, listed in the second column are determined from the accumulated ASCA data within radii of $`r\sim 3^{\prime}`$–$`5^{\prime}`$. (The ASCA spectra place much tighter constraints on the total $`\dot{M}`$ than do the ROSAT spectral data.) The results for NGC 1399, 4472, and 5044 are taken from Buote (1999). For NGC 507 and 4649 we re-analyzed the data sets as prepared in Buote & Fabian (1998) and fitted cooling flow models analogously to Buote (1999). That is, the spectra were fitted with (1) a constant-pressure cooling flow component, (2) an isothermal component representing the ambient gas, and (3) for NGC 4649 an extra high-temperature bremsstrahlung component. Since the cooling flow model assumes constant pressure and neglects the gravitational work done on the cooling gas, the value of $`\dot{M}`$ is an upper limit. This over-estimate is typically $`\sim 30\%`$ (e.g., the agreement of different cooling flow models in Allen et al 2000b).
Let us focus on $`M_{\mathrm{abs}}`$ and the accumulation time, $`t_{\mathrm{acc}}=M_{\mathrm{abs}}/\dot{M}`$, within the central bin $`(R=1^{\prime})`$, where the measured optical depths are most significant and the fits are most clearly improved when the edge is added; i.e., we consider the results for the central bin to be the most reliable. Assuming an age of the universe of $`t_{\mathrm{age}}=1.3\times 10^{10}`$ yr, examination of Table 2 reveals that within the central bin $`t_{\mathrm{acc}}\sim (0.1`$–$`0.5)t_{\mathrm{age}}`$ using the best-fitting values, or a 95% upper limit of $`t_{\mathrm{acc}}\sim (0.3`$–$`0.6)t_{\mathrm{age}}`$; NGC 5044 actually has smaller values, but if the second bin is included (which has intrinsic absorption just as significant as the inner bin) then $`t_{\mathrm{acc}}`$ is consistent with the values quoted above. These accumulation timescales are a sizeable fraction of $`t_{\mathrm{age}}`$, and thus $`M_{\mathrm{abs}}`$ within the central bin(s) can account for most, if not all, of the mass deposited by the cooling flow over the lifetime of the flow; the exact value depends on precisely when the cooling flow begins and whether $`\dot{M}`$ varies with time.
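The accumulation-time comparison is then simple arithmetic; the numbers below are illustrative placeholders, not entries from Table 2:

```python
# Accumulation time of the warm absorber relative to the age of the
# universe, assuming the absorbing mass was deposited at a constant
# rate Mdot.  Input values here are illustrative only.
T_AGE = 1.3e10   # yr, adopted age of the universe (as in the text)

def accumulation_fraction(M_abs, Mdot):
    """t_acc / t_age for absorbing mass M_abs (M_sun) deposited at
    rate Mdot (M_sun per yr)."""
    return (M_abs / Mdot) / T_AGE

# e.g., ~5e9 M_sun accumulated at 2 M_sun/yr:
print(f"{accumulation_fraction(5e9, 2.0):.2f}")  # -> 0.19
```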
The total absorbing masses have large errors within the 95% confidence limits. Although the best-fitting values of $`t_{\mathrm{acc}}`$ are typically larger than $`t_{\mathrm{age}}`$, the 95% lower limits are $`\lesssim t_{\mathrm{age}}`$ for all but NGC 4472. We reiterate that we consider these values at the largest radius to be less secure than those in the central bin(s) because the fits do not clearly require the addition of the oxygen edge outside the inner 1-2 bins. Nevertheless, the result for NGC 4472 is striking and deserves comment. Clearly the approximation of a spherically symmetric, relaxed cooling flow is invalid for $`R\gtrsim 3^{\prime}`$ because the isophotal distortions suggest a strong interaction with the surrounding Virgo gas (Irwin & Sarazin, 1996), and thus the estimate of $`\dot{M}`$ unlikely applies at larger radii. If the large value of $`M_{\mathrm{abs}}`$ at large radius is confirmed, then another mechanism must have produced the warm gas in NGC 4472.
Hence, within the central 1-2 bins ($`R\sim 10`$–$`20`$ kpc), where the model constraints are most secure, we conclude that the absorbing mass inferred from the oxygen edges can explain most (and perhaps all) of the mass deposited by a cooling flow over the age of the system. If the edges also apply at larger radii (as we have assumed), then all of the deposited mass (except for NGC 4472) can be easily explained by the inferred absorbing mass. These qualitative conclusions still apply if the oxygen edge optical depths are really closer to their 95% lower limits, as discussed near the top of this section.
We mention that systems without strong cooling flows will not have had sufficient time to accumulate the $`\sim 10^9M_{\odot }`$ within $`R\sim 1^{\prime}`$ needed to produce detectable soft X-ray emission. Thus, the "Very Soft Components" found in galaxies with low ratios of X-ray to optical luminosity are unlikely to arise from warm gas deposited in a cooling flow and instead probably reflect the collective emission from X-ray binaries (e.g., Irwin & Bregman 1999).
### 5.4. Constraints from the Optical and FUV
Since collisionally ionized gas at temperatures of $`10^5`$–$`10^6`$ K emits many strong lines at optical and ultraviolet wavelengths (e.g., Pistinner & Sarazin 1994; Voit & Donahue 1995), we consider whether the large amounts of absorbing material implied by the intrinsic X-ray absorption (Table 2) violate published constraints on line emission in the optical and UV spectral regions. The best published constraints available in the optical are for H$`\alpha `$ from studies of extended ionized gas in the centers of elliptical galaxies (e.g., Trinchieri & di Serego Alighieri 1991; Goudfrooij et al 1994; Macchetto et al 1996). In most cases the emission-line gas is only detected within $`r\sim 20^{\prime \prime }`$, which is significantly smaller than the central $`1^{\prime}`$ used in our analysis.
The object in our sample where H$`\alpha `$ has been detected out to the largest angular radius is NGC 5044. Macchetto et al (1996) measure $`\mathrm{F}(\mathrm{H}\alpha )=1.4\times 10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup> within $`R=0.5^{\prime}`$. We can estimate the temperature at which the H$`\alpha `$ emission implied by $`M_{\mathrm{abs}}`$ (Table 2) within $`R=1.0^{\prime}`$ equals the observed flux. We take the predicted H$`\alpha `$ line intensity at the peak temperature from Pistinner & Sarazin and a temperature dependence of $`T^{-2.5}`$ from inspection of Figure 7 of Voit & Donahue. After accounting for the different region sizes we find that the required temperature is $`\gtrsim 1.5\times 10^5`$ K using the best-fitting $`M_{\mathrm{abs}}`$, although when using the 95% lower limit on $`M_{\mathrm{abs}}`$ we find that $`T\gtrsim 0.5\times 10^5`$ K.
Similar results hold for NGC 1399, 4472, and 4649, although the comparison is less certain because of the larger aperture corrections. If no aperture correction is made for NGC 5044, then the implied temperatures rise to $`T\gtrsim 2.5\times 10^5`$ K at best fit and $`T\gtrsim 1.7\times 10^5`$ K at the 95% lower limit. If we consider also that $`M_{\mathrm{abs}}`$ in Table 2 is over-estimated because the oxygen abundance is larger than solar ($`\gtrsim 1.5Z_{\odot }`$; see PAPER2) and carbon and nitrogen contribute $`\sim 40\%`$ to the measured optical depths (§5.3), we obtain $`T\gtrsim 1.8\times 10^5`$ K at best fit and $`T\gtrsim 1.2\times 10^5`$ K at the 95% lower limit (again without aperture correction). Therefore, the published constraints on H$`\alpha `$ are satisfied if $`T\gtrsim 2.0\times 10^5`$ K.
Stronger lines from warm gas are expected to appear in the UV, but Hopkins Ultraviolet Telescope (HUT) observations detected no significant emission lines in NGC 1399, 4472, and 4649 (Ferguson et al 1991; Brown, Ferguson, & Davidsen 1995). It is unfortunate that the strongest emission lines for temperatures $`T\sim (2`$–$`3)\times 10^5`$ K, O v (1218 Å), O vi (1034 Å), and N v (1240 Å), appear to be lost in the background geocoronal emission (e.g., Figure 1 of Ferguson et al 1991, though see below for O vi). However, the lines C iv (1549 Å), O iv (1401 Å), and Ne iv (1602 Å) are also strong and uncontaminated by geocoronal emission.
To determine the gas temperature at which, e.g., the O iv (1401 Å) flux would not violate the published UV constraints, we estimate that the O iv flux would have to be less than $`\sim 10\%`$ of the continuum considering the error bars on the spectrum of NGC 1399 (Figure 2 of Ferguson et al 1991). This limit corresponds to a flux of $`1.4\times 10^{-12}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. We take the predicted O iv line intensity at the peak temperature from Pistinner & Sarazin and assume the temperature dependence above the peak falls similarly to that displayed for C iv in Figure 7 of Voit & Donahue. Using the best-fitting $`M_{\mathrm{abs}}`$ (and accounting for the smaller HUT aperture) we find that a temperature of at least $`3\times 10^5`$ K is required, though the 95% lower limits on $`M_{\mathrm{abs}}`$ (which are probably more realistic; §3.6.3), coupled with the oxygen abundance and C/N corrections as above, indicate a more conservative limit of $`2\times 10^5`$ K. Similar limits are obtained for the other lines and for the HUT spectra of NGC 4472 and 4649 (Brown et al, 1995).
Our procedure of requiring the line fluxes to be less than 10% of the continuum may result in limits that are too restrictive. Dixon et al (1996) have estimated $`2\sigma `$ upper limits on the O vi (1034 Å) intensity from M87, which has a HUT spectrum very similar to that of NGC 1399. Their $`2\sigma `$ upper limit on the O vi flux within a $`1^{\prime}`$ circle is $`1\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. If the warm gas has a temperature of $`3.2\times 10^5`$ K, corresponding to the peak temperature for O vi, then the observed limit is comparable to the O vi emission expected from the warm gas of NGC 1399 when using the 95% lower limit on $`M_{\mathrm{abs}}`$. Hence, if the HUT results for M87 apply to NGC 1399 (as they appear to), then the predicted O vi emission agrees with the limits, especially if the temperature is not precisely at the peak temperature for O vi.
### 5.5. Theoretical Issues
Although the hypothesis of warm, mostly collisionally ionized, gas apparently can explain the X-ray observations and the matter deposited by the cooling flows, this model has serious theoretical difficulties which must be overcome before it can be considered a viable model:
1. $`T_{\mathrm{warm}}<T_{\mathrm{virial}}`$. The temperature of the warm gas ($`\sim 0.1`$ keV) is less than the virial temperatures of the halos ($`\sim 1`$ keV), implying the gas is not thermally supported. How does the warm gas support itself in the gravitational fields of these systems?
2. $`t_{\mathrm{cool}}^{\mathrm{warm}\mathrm{gas}}\ll t_{\mathrm{cool}}^{\mathrm{hot}\mathrm{gas}}`$. The cooling time of the warm gas is very short, even shorter than that of the hot gas in the central regions. How are large quantities of gas maintained at these temperatures?
Most likely these problems can only be solved with a substantial modification of the standard cooling flow scenario. Clearly an additional energy source and very possibly the important role of magnetic fields will have to be considered. A small number of models have been proposed throughout the years which consider these issues, though they are not well developed and do not at present make detailed predictions regarding the issues (1) and (2) above. These models have generally been designed to completely suppress cooling flows, and thus will have to be modified to allow some cooling of the hot gas. We now briefly review some of the candidate models.
Binney (1996) and Ciotti & Ostriker (2000) have proposed feedback from the central black hole as a promising means of inhibiting the cooling of hot gas in the central regions of galactic cooling flows. In this model whenever the black hole accretes a sufficient amount of gas to stimulate nuclear activity, the accompanying radiation stimulated by the accretion heats up the hot gas and prevents further cooling. This is supposed to be a cyclical process such that the AGN phase is sufficiently rare to be consistent with the lack of nuclear activity in most cooling flows. If the AGN feedback energy does not completely suppress cooling of the hot gas but instead merely prevents most of the gas from cooling down to temperatures below $`10^{5-6}`$ K, this might be able to explain issues (1) and (2). We also note that most of the cooling gas is not expected to reach the central black hole in standard models (Brighenti & Mathews, 1999).
Another energy source proposed to exist in the centers of cooling flows is that from the reconnection of tangled magnetic field lines. It has been known for some time that small seed magnetic fields can be amplified within a cooling flow to produce sizeable fields at the centers (e.g., Soker & Sarazin 1990; Lesch & Bender 1990; Moss & Shukurov 1996; Mathews & Brighenti 1997). Zoabi et al (1998) show that reconnection energy can significantly reduce the cooling rates and may account for significant amounts of warm ($`T=10^{5-6}`$ K) gas. Similarly, Norman & Meiksen (1996) propose a two-phase model to recycle warm and hot gas along magnetic flux loops also resulting in much lower rates of mass deposition.
Both the AGN and magnetic field reconnection models would explain the warm gas phase as a non-equilibrium configuration where the warm gas is continuously diluted by heating processes and replenished by cooling from the hot phase. If we consider the (unlikely) possibility that the warm gas is a long-lived equilibrium phase (see, e.g., Fabian 1996) then magnetic pressure is probably the most viable non-thermal process which can support the gas. Mathews & Brighenti (1999) describe a possible equilibrium model of the warm gas as the outer envelopes of low mass stars forming in a cooling flow.
The details of how the magnetic field would support the gas are uncertain. Daines et al (1994) suggest that the cool gas blobs would be anchored to the hot gas by the magnetic fields, and thus the pressure support would actually come from the hot gas. However, inspection of Table 2 reveals that $`M_{\mathrm{abs}}\sim (1-10)M_{\mathrm{hot}}`$ within the central bins indicating that the hot gas could not support the cool gas. (The 95% lower limits on $`M_{\mathrm{abs}}`$ are considered here.) At larger radii it may be possible for the hot gas to support the cool gas.
Whatever the details of the magnetic support the condition of hydrostatic equilibrium requires that $`B^2\gtrsim 6M_{\mathrm{abs}}GM_{\mathrm{grav}}r^{-4}`$. Using the values for $`M_{\mathrm{abs}}`$ within the central 1′ bin quoted in Table 2 and the gravitating masses obtained from previous ROSAT studies (David et al 1995; Kim & Fabbiano 1995; Rangarajan et al 1995; Irwin & Sarazin 1996) we find that $`B\sim 100\mu \mathrm{G}`$ is required within a 5 kpc radius.
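To see the scale of the required field, the support condition can be evaluated numerically. The sketch below is illustrative only: the function name and the placeholder masses are not taken from Table 2 or the cited ROSAT mass profiles, and the prefactor simply follows the relation quoted above.

```python
import math

G = 6.674e-8     # gravitational constant, cm^3 g^-1 s^-2
MSUN = 1.989e33  # solar mass, g
KPC = 3.086e21   # kiloparsec, cm

def b_min_for_support(m_abs_msun, m_grav_msun, r_kpc):
    """Minimum field (gauss) implied by B^2 >~ 6 G M_abs M_grav / r^4."""
    m_abs = m_abs_msun * MSUN
    m_grav = m_grav_msun * MSUN
    r = r_kpc * KPC
    return math.sqrt(6.0 * G * m_abs * m_grav) / r**2

# Placeholder masses (NOT the measured values): ~10^9 Msun of absorber
# inside a ~4e11 Msun gravitating mass within 5 kpc gives a field of
# order 100 microgauss, illustrating the scale of the estimate.
print(b_min_for_support(1e9, 4e11, 5.0))
```

The $`r^{-2}`$ scaling makes the requirement steeply concentrated toward the center, consistent with the fields being invoked only within the inner few kpc.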
Estimates of the magnetic field strengths from radio polarization analyses of these galaxies are lacking, although there exist estimates using minimum energy arguments for NGC 1399 (Killeen et al, 1988), NGC 4472 (Ekers & Kotanyi, 1977), and NGC 4649 (Stanger & Warwick, 1986) which give consistent results: $`B\sim 50-100\mu \mathrm{G}`$ at the centers and $`B\sim (5-10)\mu \mathrm{G}`$ at $`r\sim 0.5`$′. Assuming $`B\propto r^{-1.2}`$ (Mathews & Brighenti, 1997), these observations imply $`B\sim 5\mu \mathrm{G}`$ when averaged over a 1′ circle. Since the observations only set lower limits the expected $`B\sim 100\mu \mathrm{G}`$ fields are consistent with the observations.
Interestingly, the need for $`B\sim 100\mu \mathrm{G}`$ in cooling flows has been suggested by Brighenti & Mathews (1997) on entirely different grounds. In their analysis of the gravitating mass distributions of NGC 4472, 4636, and 4649 Brighenti & Mathews (1997) find in every case that the gravitating mass determined from the X-ray analysis falls below that estimated from stellar dynamics for $`r\lesssim (\mathrm{few})\mathrm{kpc}`$. If $`B\sim 100\mu \mathrm{G}`$ within the centers of these systems then the X-ray and stellar dynamical masses agree. (A similar result holds for NGC 1399 as well — W. Mathews 2000, private communication.)
Finally, we mention that it may be useful to consider a model for the warm gas that does not originate from cooling out of the hot phase. One such possibility is offered by the material continuously ejected by the stars. This material is injected into the ISM with low energy and is shock-heated up to the virial temperature of the halo. Although theoretical arguments suggest that the heating is very rapid (Mathews, 1990), more detailed calculations are required to rule out the possibility that in fact the transition is gradual and could give rise to an observable phase of warm gas consistent with the X-ray observations.
## 6. Conclusions
From deprojection analysis of the ROSAT PSPC data of 10 cooling flow galaxies and groups with low Galactic columns we have detected oxygen absorption at the $`2\sigma /3\sigma `$ level intrinsic to the central 1′ in half of the sample: NGC 507, 1399, 4472, 4649, and 5044. The data for the other systems are insufficient to place interesting constraints on the absorption profile but are consistent with substantial absorption. We modeled the oxygen absorption as a single edge (rest frame $`E=0.532`$ keV) which produces the necessary absorption in both the PSPC and ASCA data for $`E\gtrsim 0.5`$ keV without violating the PSPC constraints over $`0.2`$-$`0.4`$ keV for which no significant excess absorption is indicated. Assuming the absorber is collisionally ionized gas we infer a temperature of $`10^{5-6}`$ K from consideration of the possible edge energies consistent with the PSPC data.
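For reference, a single absorption edge of this kind has a simple closed form: unit transmission below the threshold energy and exp[−τ(E/E<sub>edge</sub>)<sup>−3</sup>] above it. This is the parameterization of the standard XSPEC `edge` model; whether the original fits used exactly this functional form is an assumption here, and the optical depth below is illustrative.

```python
import numpy as np

def edge_transmission(energy_kev, tau_edge, e_edge=0.532):
    """Transmission of a single photoabsorption edge: 1 below the
    threshold, exp[-tau*(E/E_edge)^-3] above it (XSPEC-style `edge`)."""
    e = np.asarray(energy_kev, dtype=float)
    tau = np.where(e >= e_edge, tau_edge * (e / e_edge) ** -3.0, 0.0)
    return np.exp(-tau)

# The expected signature: no absorption below 0.532 keV, strongest just
# above the edge, recovering toward unity at higher energies.
energies = np.array([0.3, 0.5, 0.6, 1.0, 2.0])
print(edge_transmission(energies, tau_edge=1.0))
```

The recovery toward unity at high energies is why the absorption appears as a broad 0.4-0.8 keV trough rather than a sharp feature.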
The intrinsic oxygen absorption reconciles the longstanding problem of why negligible column densities for a foreground absorber with solar abundances were inferred from ROSAT data whereas large columns were obtained from ASCA and other instruments with bandpasses above $`0.5`$ keV. Moreover, since the absorption is confined to energies above $`0.5`$ keV there is no need for large columns of cold H which are known to be very inconsistent with the negligible atomic and molecular H measured in galactic and cluster cooling flows (e.g., Bregman et al 1992; O'Dea et al 1994).
In most of the galaxies and groups we have found that single-phase and cooling flow models cannot explain the X-ray emission in the soft (0.2-0.4 keV) energy channels of the ROSAT PSPC data (§3.6.3). That is, when $`N_\mathrm{H}`$ of the standard absorber model is freely fitted it is found that $`N_\mathrm{H}<N_\mathrm{H}^{\mathrm{Gal}}`$ in the central bins of most systems, with NGC 4472 and NGC 5044 showing the most significant soft excesses. If we model this soft emission as coronal gas we obtain temperatures $`10^{5-6}`$ K in excellent agreement with those inferred from the energy ranges of the absorption edges.
Hence, the sub-Galactic column densities are consistent with a direct detection of the emission from the intrinsic absorbing gas. The agreement between the temperatures inferred from the emission and absorbing properties of the warm gas lends strong support to the ionized gas model. In contrast, dust cannot explain the excess soft X-ray emission. (Other problems exist with the dust hypothesis — see §5.1.)
Our simple estimates of the amount of absorbing matter implied by our single-edge absorption measurements are consistent with the total amount of matter expected to have been deposited by a cooling flow (§5.3). With the arrival of higher quality data from Chandra and XMM, more accurate estimates should be made which account for a range of absorbing edges and possible radiative transfer effects in the warm gas.
We have examined the theoretical difficulties associated with attributing the absorption to warm ionized gas and have discussed some candidate models that may be able to account for these problems. Future detailed calculations are required to assess the viability of these models (§5.5).
Fortunately on the observational front it will be very easy to verify the intrinsic oxygen absorption with new Chandra and XMM data. The XMM (EPIC) and Chandra (ACIS-S) CCDs both extend down to 0.1-0.2 keV and have substantially better energy resolution than the PSPC. Observations with these instruments can easily test our prediction for warm gas in both absorption and emission. The grating spectrometers of XMM and Chandra have even better energy resolution (but smaller effective area) and, in principle, might detect individual edges.
We emphasize that the absorption signature of the warm gas is expected to be a relatively broad trough over energies $`0.4`$-$`0.8`$ keV, and thus future Chandra and XMM observations will not see a single sharp feature. The most straightforward means to confirm our results will be to reproduce the sensitivity of $`N_\mathrm{H}`$ to $`E_{\mathrm{min}}`$ for a standard absorber model (§3.3). To obtain the properties of the absorber (e.g., temperature and abundances) a model for the soft X-ray opacity such as that described by Krolik & Kallman (1984) must be compared to the new data. In so doing the emission from the warm gas must also be accounted for (§3.6.3), and thus it is very important that the detector bandpass extend down to $`\sim 0.1`$ keV, which it does for the Chandra and XMM CCDs.
As discussed in §5.4, optical and FUV constraints imply a lower limit of $`T\gtrsim 2\times 10^5`$ K. It is possible that precise measurements of O iii (5007 Å) could refine this limit, but since this line peaks at $`T\sim 0.8\times 10^5`$ K its emissivity is already falling rapidly at $`T\sim 2\times 10^5`$ K. Future high-resolution FUV spectroscopy of the O vi (1034 Å) line (peak temperature $`3.2\times 10^5`$ K) with, e.g., FUSE may be able to place additional interesting constraints on the warm gas if its temperature does not exceed $`T\sim 5\times 10^5`$ K.
It should be remembered that to infer the properties of the warm gas from X-ray observations the absorption and emission spectrum arising from warm gas must be disentangled from Galactic absorption and the emission from hot plasma. Since (if confirmed) the warm gas almost certainly represents the mass deposited by an inhomogeneous cooling flow (§5.3), the hot gas at each radius should also emit over a range of temperatures. Hence, the thermodynamic state of the X-ray emitting plasma appears to be very complex in the central regions of the (X-ray) brightest galaxies and groups, and the analogous results for A1795 presented in PAPER1 suggest the same applies for galaxy clusters.
I thank W. Mathews for fruitful discussions and the anonymous referee for detailed comments. Support for this work was provided by NASA through Chandra Fellowship grant PF8-10001 awarded by the Chandra Science Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-39073.
# Remarks on the existence of Cartier divisors
## 1. Introduction
Let $`X`$ be a noetherian scheme and $`\mathcal{L}`$ an invertible $`\mathcal{O}_X`$-module; does there exist a Cartier divisor $`D\in \mathrm{Div}(X)`$ with $`\mathcal{L}\cong \mathcal{O}_X(D)`$? This is no problem if $`X`$ satisfies Serre's condition $`(S_1)`$, and the issue is to deal with embedded components. The goal of this short note is to provide an answer and to correct an erroneous counterexample in the literature.
The question was first posed by Nakai \[5, p. 300\], and later Grothendieck \[3, 21.3.4\] showed that the canonical map $`\mathrm{Div}(X)\to \mathrm{Pic}(X)`$ is surjective if the subset $`\mathrm{Ass}(\mathcal{O}_X)\subset X`$ allows an affine open neighborhood. On the other hand, it seemed to be well known from the beginning that in general obstructions might arise. Hartshorne proposed a construction (attributed to Kleiman) of a non-projective irreducible 3-fold $`X`$ with a single embedded component $`x\in X`$ for which it is claimed that $`\mathrm{Div}(X)\to \mathrm{Pic}(X)`$ is not surjective \[4, ex. 1.3, p. 9\]. Unfortunately, $`\mathrm{Ass}(\mathcal{O}_X)=\{x,\eta \}`$ is contained in every affine open neighborhood $`U\subset X`$ of $`x`$, and Grothendieck's criterion tells us that the proposed construction does not yield an invertible sheaf without Cartier divisor.
In the first part of this note we will discuss how the construction can be modified in order to obtain the desired counterexample. In the second part we will prove a positive result, which complements Grothendieck's criterion in the following way: Let $`T\subset X`$ be a finite subset containing $`\mathrm{Ass}(\mathcal{O}_X)`$; then there is a Cartier divisor $`D\in \mathrm{Div}(X)`$ with $`\mathcal{L}\cong \mathcal{O}_X(D)`$ and support $`\mathrm{Supp}(D)`$ disjoint from $`T`$ if and only if the restriction of $`\mathcal{L}`$ to $`T`$ is trivial. Here we view $`T`$ also as a ringed space, endowed with the subspace topology and sheaf of rings $`\mathcal{O}_T=i^{-1}(\mathcal{O}_X)`$, where $`i:T\to X`$ is the inclusion map.
## 2. Absence of Cartier divisors
In this section we construct two schemes $`X`$ for which $`\mathrm{Div}(X)\to \mathrm{Pic}(X)`$ is not surjective.
###### (2.1)
Let us recall Hartshorne's construction. We fix a ground field $`K`$; then there is a regular, integral, proper 3-fold $`Y`$ containing two irreducible curves $`A,B\subset Y`$ such that $`A+B`$ is numerically trivial. Such a scheme is obviously non-projective, and was constructed by Hironaka using local blow-ups; the construction is thoroughly discussed in \[6, p. 75\]. For each Cartier divisor $`D\in \mathrm{Div}(Y)`$ we have
$$A\cdot D>0\Longleftrightarrow B\cdot D<0,$$
and the complement of an affine open neighborhood $`UY`$ of the generic point of $`A`$ defines such a Cartier divisor. Choose a closed point $`aA`$ and consider the infinitesimal extension $`YX`$ with ideal $`=\kappa (a)`$. The outer groups in the exact sequence
$$H^1(Y,\mathcal{I})\to \mathrm{Pic}(X)\to \mathrm{Pic}(Y)\to H^2(Y,\mathcal{I})$$
vanish, hence there is an invertible $`\mathcal{O}_X`$-module $`\mathcal{L}`$ with $`B\cdot c_1(\mathcal{L})>0`$. Grothendieck's criterion tells us that $`\mathcal{L}`$ is representable by a Cartier divisor $`D\in \mathrm{Div}(X)`$; assume that it is even representable by an effective Cartier divisor $`D\subset X`$. But $`A\cdot D<0`$ implies $`A\subset D`$, hence $`a\in D`$; on the other hand, according to \[2, 3.1.9\], $`D`$ must be disjoint from $`\mathrm{Ass}(\mathcal{O}_X)`$, contradiction. In other words, the construction only yields a Cartier divisor $`D\in \mathrm{Div}(X)`$ not linearly equivalent to an effective one such that the restriction to $`X^{\mathrm{red}}=Y`$ is equivalent to an effective Cartier divisor.
In order to achieve the desired effect we have to introduce at least two embedded components. Choose closed points $`a\in A`$ and $`b\in B`$, and let $`Y\subset X`$ be the infinitesimal extension with ideal $`\mathcal{I}=\kappa (a)\oplus \kappa (b)`$. Again there is an invertible $`\mathcal{O}_X`$-module $`\mathcal{L}`$ with $`A\cdot c_1(\mathcal{L})<0`$ and $`B\cdot c_1(\mathcal{L})>0`$. We observe that $`\mathrm{Div}(X)\subset Z^1(X)`$ is the subgroup generated by all prime cycles disjoint from $`\{a,b\}`$. Assume that there is a Cartier divisor $`D\in \mathrm{Div}(X)`$ representing $`\mathcal{L}`$. Decomposing $`D=\sum n_iD_i`$ into prime cycles, we see that each summand is Cartier, hence $`A\cdot D_i\ne 0`$ and $`B\cdot D_i\ne 0`$ holds for some index $`i`$. Consequently we have $`A\cdot D_i<0`$ and $`a\in D_i`$, or $`B\cdot D_i<0`$ and $`b\in D_i`$; in both cases, $`\mathrm{Ass}(\mathcal{O}_X)=\{a,b,\eta \}`$ is not disjoint from $`D_i\subset X`$, contradiction. Hence it is impossible to represent $`\mathcal{L}`$ by a Cartier divisor.
###### (2.2)
Another counterexample features non-separated schemes. Let $`A`$ be a discrete valuation ring with field of fractions $`R`$. We can glue two copies $`U_1,U_2`$ of $`\mathrm{Spec}(A)`$ along $`\mathrm{Spec}(R)`$ and obtain an integral, regular curve $`Y`$, which is a non-separated scheme \[1, 8.8.5\]. The group $`\mathrm{Div}(Y)=Z^1(Y)`$ is isomorphic to $`\mathbb{Z}^2`$, and the exact sequence
$$1\to \mathrm{\Gamma }(Y,\mathcal{O}_Y)^\times \to \mathrm{\Gamma }(Y,\mathcal{M}_Y)^\times \to \mathrm{Div}(Y)\to \mathrm{Pic}(Y)\to 0$$
yields $`\mathrm{Pic}(Y)=\mathbb{Z}`$. Let $`Y\subset X`$ be the infinitesimal extension with the ideal $`\mathcal{I}=\kappa (y_1)\oplus \kappa (y_2)`$, where $`y_1,y_2\in Y`$ are the closed points. The restriction map $`\mathrm{Pic}(X)\to \mathrm{Pic}(Y)`$ is bijective, but the sheaf $`\mathcal{D}iv_X`$ is zero. Thus we have $`\mathrm{Div}(X)=0`$, and $`\mathcal{L}=\mathcal{O}_X`$ is the only invertible sheaf associated to a Cartier divisor.
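The computation of $`\mathrm{Pic}(Y)`$ from the displayed sequence can be made explicit. The following sketch fills in the steps; here $`v`$ denotes the valuation of $`A`$, and we use that the rational functions on $`Y`$ are just $`R`$:

```latex
% Sketch of the computation of Pic(Y) for the glued curve Y.
% v is the valuation of the DVR A; y_1, y_2 are the two closed points.
\begin{align*}
\operatorname{Div}(Y) &= \mathbb{Z}y_1 \oplus \mathbb{Z}y_2 \cong \mathbb{Z}^2
  && \text{(the closed points are the only prime divisors),}\\
\operatorname{div}(f) &= v(f)\,y_1 + v(f)\,y_2 \quad (f \in R^\times)
  && \text{(both charts $U_1,U_2$ induce the same valuation),}\\
\operatorname{Pic}(Y) &\cong \mathbb{Z}^2/\Delta,\qquad
  \Delta = \{(n,n) : n \in \mathbb{Z}\}
  && \text{so } \operatorname{Pic}(Y) \cong \mathbb{Z}.
\end{align*}
```

The point is that a rational function necessarily has the same order of vanishing at both closed points, so principal divisors form only the diagonal in $`\mathbb{Z}^2`$.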
## 3. Existence of Cartier divisors
In this section, $`X`$ is a noetherian scheme, and $`T\subset X`$ is a finite subset containing the finite subset $`\mathrm{Ass}(\mathcal{O}_X)\subset X`$.
###### (3.1)
Let $`\mathrm{Div}_T(X)\subset \mathrm{Div}(X)`$ be the subgroup of Cartier divisors $`D`$ with $`\mathrm{Supp}(D)\cap T=\mathrm{\varnothing }`$. Recall that the support $`\mathrm{Supp}(D)`$ is defined as the support of $`D_1+D_2`$, where $`\mathrm{cyc}(D)=D_1-D_2`$ is the decomposition into positive and negative parts of the associated Weil divisor.
This construction can be sheafified: Let $`\mathcal{S}_T\subset \mathcal{O}_X`$ be the subsheaf of sets whose stalk $`\mathcal{S}_{T,x}`$ consists of the stalks $`s_x\in \mathcal{O}_{X,x}`$ whose localizations $`s_y\in \mathcal{O}_{X,y}`$ are units for all $`y\in \mathrm{Spec}(\mathcal{O}_{X,x})\cap T`$. Let $`\mathcal{M}_{X,T}=\mathcal{S}_T^{-1}\mathcal{O}_X`$ be the localization in the category of sheaves of rings. We now define a sheaf of abelian groups $`\mathcal{D}iv_{X,T}`$, written additively, by the exact sequence
(3.1.1)
$$1\to \mathcal{O}_X^\times \to \mathcal{M}_{X,T}^\times \to \mathcal{D}iv_{X,T}\to 0,$$
and obtain $`\mathrm{Div}_T(X)=\mathrm{\Gamma }(X,\mathcal{D}iv_{X,T})`$. Now let $`i:T\to X`$ be the inclusion map, and set $`\mathcal{O}_T=i^{-1}(\mathcal{O}_X)`$. We observe the following
###### (3.2) Proposition.
The $`\mathcal{O}_X`$-algebras $`\mathcal{M}_{X,T}`$ and $`i_{*}(\mathcal{O}_T)`$ are canonically isomorphic.
First, assume that $`X`$ is the spectrum of a local ring $`A`$ with closed point $`x\in X`$. Let $`S\subset A`$ be the multiplicative subset of all $`a\in A`$ with $`a/1\in A_{\mathfrak{p}}^\times `$ for all primes $`\mathfrak{p}\subset A`$ corresponding to points $`t\in T`$. Clearly, $`i_{*}(\mathcal{O}_T)_x`$ and $`(\mathcal{M}_{X,T})_x`$ are canonically isomorphic to $`S^{-1}A`$. In the general case, consider the diagram
$$\begin{array}{ccc}i_{*}(\mathcal{O}_T)& \to & \prod _{x\in X}i_{*}(\mathcal{O}_T)_x\\ \downarrow & & \downarrow \\ \mathcal{M}_{X,T}& \to & \prod _{x\in X}(\mathcal{M}_{X,T})_x,\end{array}$$
where the horizontal maps are the canonical inclusions. Since the bijections $`i_{*}(\mathcal{O}_T)_x\to (\mathcal{M}_{X,T})_x`$ are compatible with localization, the vertical map induces the desired bijection $`i_{*}(\mathcal{O}_T)\to \mathcal{M}_{X,T}`$. QED.
It should be noted that these $`\mathcal{O}_X`$-algebras are in general not quasi-coherent. From the above fact we immediately obtain the following criterion:
###### (3.3) Theorem.
An invertible $`\mathcal{O}_X`$-module $`\mathcal{L}`$ is representable by a Cartier divisor $`D\in \mathrm{Div}(X)`$ with support disjoint from $`T`$ if and only if the restriction of $`\mathcal{L}`$ to $`T`$ is trivial in $`\mathrm{Pic}(T)`$.
Let $`i:T\to X`$ be the corresponding flat morphism of ringed spaces. The exact sequence (3.1.1) can be rewritten as
$$1\to \mathcal{O}_X^\times \to i_{*}i^{*}(\mathcal{O}_X^\times )\to \mathcal{D}iv_{X,T}\to 0,$$
and we obtain an exact sequence
$$\mathrm{Div}_T(X)\to \mathrm{Pic}(X)\to H^1(X,i_{*}i^{*}(\mathcal{O}_X^\times )).$$
The spectral sequence for the composition $`\mathrm{\Gamma }\circ i_{*}`$ gives an inclusion
$$0\to H^1(X,i_{*}(\mathcal{O}_T^\times ))\to H^1(T,\mathcal{O}_T^\times ),$$
and we end up with the exact sequence
$$\mathrm{Div}_T(X)\to \mathrm{Pic}(X)\to \mathrm{Pic}(T),$$
which is precisely our assertion. QED.
###### (3.4) Remark.
Grothendieck's criterion can be recovered from this: Assume that $`T\subset X`$ is contained in an affine open neighborhood $`U=\mathrm{Spec}(A)`$. If $`S\subset A`$ is the complement of the union of all primes $`\mathfrak{p}\subset A`$ corresponding to points $`x\in U\cap T`$, then $`T`$ is also contained in the semi-local scheme $`V=\mathrm{Spec}(S^{-1}A)`$, and $`\mathrm{Pic}(X)\to \mathrm{Pic}(T)`$ factorizes over $`\mathrm{Pic}(V)`$. Since the Picard group of a semi-local ring vanishes, each invertible $`\mathcal{O}_X`$-module is representable by a Cartier divisor $`D\in \mathrm{Div}_T(X)`$.
Author's address:
Mathematisches Institut
Ruhr-Universitรคt
44780 Bochum
Germany
E-mail s.schroeer@ruhr-uni-bochum.de
# Phase Lag and Coherence Function of X-ray emission from Black Hole Candidate XTE J1550-564
## 1 Introduction
Black hole candidates (BHCs) are characterized by rapid X-ray variability (see recent reviews by van der Klis 1995 and Cui 1999a). It is also common for BHCs that the variability at high energies lags behind that at low energies (Cui 1999a and references therein), which is often referred to as hard lag. The hard lag is often attributed to thermal inverse-Comptonization processes (e.g., Miyamoto et al. 1988; Hua & Titarchuk 1996; Kazanas et al. 1997; Böttcher & Liang 1998; Hua et al. 1999), which are generally thought to be responsible for producing the characteristic hard tail in the X-ray spectra of BHCs (Tanaka & Lewin 1995). In these models, the hard lag arises simply because a greater number of scatterings are required for seed photons to reach higher energies. Therefore, the lag is directly related to the diffusion timescale through the Comptonizing region, which scales logarithmically with photon energy (e.g., Payne 1980; Hua & Titarchuk 1996). The expected logarithmic energy-dependence of the hard lag is in rough agreement with the observations (Cui et al. 1997; Crary et al. 1998; Nowak et al. 1999). However, the measured lag is often large (e.g., a few tenths of a second) at low frequencies, which would require a very extended Comptonizing region (Kazanas et al. 1997; Böttcher & Liang 1998; Hua et al. 1999). It is not clear whether such a region can be physically maintained (Nowak et al. 1999; Böttcher & Liang 1999; Poutanen & Fabian 1999). Suggestions have also been made to link the hard lag either to the propagation or drift time scale of waves or blobs of matter through an increasingly hotter region toward the central black hole where hard X-rays are emitted (Miyamoto et al. 1988; Kato 1989; Böttcher & Liang 1999) or to the evolution time scale of magnetic flares (Poutanen & Fabian 1999).
Regardless of which scenario turns out to be close to reality, it is clear that the hard lag is an important property of BHCs which we can use to gain insight into the geometry and dynamics of accretion flows in these systems.
Recently, however, it was discovered that a strong QPO in GRS 1915+105, a well-known microquasar, had a rather complex pattern of phase lag (Cui 1999b): while the hard lag was measured for the odd harmonics of the signal, the even harmonics displayed soft lag. The pattern is puzzling because it does not fit naturally into any of the models suggested for BHCs. Speculation was made that the complicated QPO lag in this case might be caused by a change in the form of the wave that produced the QPO (Cui 1999b). It is, however, not clear what physical mechanisms could be responsible for such evolution of the wave form. Similar behavior was subsequently observed for some of the QPOs in XTE J1550-564 (Wijnands et al. 1999). Therefore, the phenomenon may actually be common for BHCs.
A related timing property to the phase lag is the coherence function between two different energy bands. Only recently, however, has enough attention been paid to the importance of this property (Vaughan & Nowak 1997), with efforts made to compute it along with the phase lag. Consequently, the results are very limited. It is nevertheless interesting to note that for BHCs the coherence function often appears to be around unity over a wide frequency range — the X-ray variabilities in different energy bands are almost perfectly linearly correlated on those timescales in the Fourier domain (Vaughan & Nowak 1997; Cui et al. 1997; Nowak et al. 1999). This puts additional constraints on the models for X-ray production mechanisms in BHCs. Lower coherence was observed for Cyg X-1 when the source was in the transitional periods between the two spectral states (Cui et al. 1997). This could be attributed to the variation of the Comptonizing region during those episodes on timescales less than an hour (Cui et al. 1997), in the context of Comptonization models (Hua et al. 1997). However, more data are required to verify such a scenario.
In this Letter, we present the results from measuring the phase lag and coherence function of X-ray variability for XTE J1550-564 during the initial rising phase of the 1998 outburst (Cui et al. 1999, Paper 1 hereafter). In addition to the intense aperiodic variability, a strong QPO was detected, along with its first and sometimes second harmonics, and the frequency of the QPO increased by almost 2 orders of magnitude during this period (Paper 1). We examine the timing properties of both the QPO and broad-band variability.
## 2 Data and Analyses
Paper 1 should be consulted for the details of the observations. Very briefly, there were 14 RXTE observations, covering the rising phase of the outburst. In the first observation, however, the overflow of the on-board data buffers (due to the inappropriate data modes adopted) produced gaps in the data. For this work, we chose to ignore this observation entirely. For the remaining 13 observations, we rebinned the data with $`2^{-7}`$ s time bins and combined the Event and Binned data into the six energy bands as defined in Paper 1.
We chose the 2-4.5 keV band as the reference band. A cross-power spectrum (CPS) was computed for each 256-second data segment between the reference band and each of the higher energy bands. The results from all segments were then properly weighted and averaged to obtain the CPSs for the observation. The phase of a CPS represents a phase shift of the light curve in a selected energy band with respect to that of the reference band. We followed the convention that a positive phase indicates that the X-ray variability in the high energy band lags behind that in the low energy band, i.e., a hard lag. The uncertainty of the phase lag was estimated from the standard deviations of the real and imaginary parts of the CPS. For the phase lag associated with a QPO, the magnitude was derived from fitting the CPS in a narrow frequency range around the QPO with a linear function (for the continuum) and two Lorentzian functions whose centroid frequencies and widths were fixed at those of the QPO (Paper 1). Acceptable fits (i.e., the reduced $`\chi ^2`$ around unity) were obtained for all cases. The corresponding errors were derived by varying the parameters until $`\mathrm{\Delta }\chi ^2=1`$ (i.e., representing roughly $`1\sigma `$ confidence intervals; Lampton et al. 1976).
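In outline, the procedure above (segmenting, averaging cross spectra, and reading off phase and coherence) can be sketched as follows. This is a simplification — the actual analysis includes per-segment weighting and noise corrections omitted here — with the sign convention chosen to match the text (positive phase = hard lag):

```python
import numpy as np

def cross_spectrum_stats(lc_ref, lc_band, seg_len, dt):
    """Segment-averaged cross spectrum between a soft reference light
    curve and a harder band; returns (freq, phase_lag, coherence).
    Positive phase lag means the hard band lags the reference band."""
    n_seg = len(lc_ref) // seg_len
    cross, p_ref, p_band = 0.0, 0.0, 0.0
    for i in range(n_seg):
        s = slice(i * seg_len, (i + 1) * seg_len)
        f_ref = np.fft.rfft(lc_ref[s] - np.mean(lc_ref[s]))
        f_band = np.fft.rfft(lc_band[s] - np.mean(lc_band[s]))
        # f_ref * conj(f_band): a delayed hard band gives a positive angle
        cross = cross + f_ref * np.conj(f_band)
        p_ref = p_ref + np.abs(f_ref) ** 2
        p_band = p_band + np.abs(f_band) ** 2
    freq = np.fft.rfftfreq(seg_len, d=dt)
    phase_lag = np.angle(cross)                                # radians
    coherence = np.abs(cross) ** 2 / (p_ref * p_band + 1e-30)  # raw, noise-uncorrected
    return freq[1:], phase_lag[1:], coherence[1:]

# demo: a 2 Hz signal whose hard-band copy lags the reference by 20 ms
t = np.arange(4000) * 0.01
ref = np.sin(2 * np.pi * 2.0 * t)
hard = np.sin(2 * np.pi * 2.0 * (t - 0.02))
freq, lag, coh = cross_spectrum_stats(ref, hard, seg_len=1000, dt=0.01)
```

Averaging the complex cross spectra before taking the phase is what makes the coherence informative: perfectly correlated bands give coherence near unity, while segment-to-segment changes in the transfer function drive it below one.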
## 3 Results
In all observations, significant hard lag was found to be associated with the broad-band variability. The phase lag shows significant evolution during the rising phase, which can be divided into two distinct periods. Fig. 1 shows, for each period, example Fourier spectra of power density, phase lag, and coherence function. At the early stage of the rising phase, the broad-band hard lag is significantly measured only in the middle range of frequencies. The lag increases with frequency first, peaks at some characteristic frequency, and then decreases. The peak frequency does not appear to correspond to any characteristic features in the power density spectrum (PDS). At the frequency of the QPO, soft lag is clearly detected, on top of the broad-band hard lag. However, there does not appear to be significant phase lag (hard or soft) associated with the first harmonic. At the later stage of the rising phase, while the hard lag is still apparent there also seems to be significant soft lag associated with the broad-band variability at frequencies around the QPO and its harmonics. The soft lag associated with the fundamental component of the QPO remains significant, but now hard lag is also measured for the first harmonic. As for the coherence function, it is high and nearly constant at low frequencies and drops off sharply almost right after the first harmonic of the QPO for both periods of the rising phase. It is interesting to notice that at the frequencies of the QPO and its harmonics the coherence function increases significantly.
Both the broad-band and QPO lags show strong energy dependence, as illustrated in Fig. 2. They become larger at higher energies (with respect to the reference band), which is typical of BHCs (Cui 1999a). The QPO lag also shows correlation with X-ray flux (and thus with the QPO frequencies; Paper 1) for the second period of the rising phase, as is apparent in Fig. 3. Both the soft and hard QPO lags increase as the source brightens, reaching as high as 0.1 and 0.3 radians, respectively. The coherence function is near unity at the beginning of the rising phase. Subsequently, it decreases rapidly and seems to level off in the end. To quantify the evolution, we averaged the observed values over a frequency range 0.004-0.1 Hz where the coherence function is roughly constant. As an example, the results for the 8.1-13.3 keV band are plotted in Fig. 4. The loss of coherence is apparent during the initial rising phase. Moreover, the coherence function also decreases as the separation between the two energy bands widens, as shown in Fig. 5.
## 4 Discussion
Although the broad-band hard lag is significantly detected only at intermediate frequencies, large uncertainties prevent us from drawing any definitive conclusions on the results at low frequencies (where the lag is expected to be small). It appears, from Fig. 2, that at the early stage of the rising phase the corresponding time lag ($`t_{lag}=p_{lag}/2\pi f`$) saturates below a characteristic frequency where the phase lag peaks. This is unusual because for other BHCs the time lag seems to monotonically increase toward low frequencies (e.g., Miyamoto et al. 1988; Cui et al. 1997; Grove et al. 1998; Nowak et al. 1999). In the context of Comptonization models, such a saturation in time lag might be the manifestation of the finiteness of the Comptonizing region (Hua et al. 1999). If so, the characteristic frequency (a few Hz) would provide a direct measure of the outer radius of the region (i.e., tens of lt-ms), which seems to be in rough agreement with that estimated from the measured time lag. It is interesting to notice that the characteristic frequency appears to increase as the source brightens, perhaps indicating that the size of the Comptonizing region decreases, as was suggested for Cyg X-1 when the source goes from the hard (or low) state to the soft (or high) state. However, it is puzzling why the lag spectrum appears so different at the later stage of the rising phase, or more specifically what causes the observed broad-band soft lag around the QPO and its harmonic (see Fig. 1).
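The order-of-magnitude link between the characteristic frequency and the size of the Comptonizing region can be made concrete. The $`1/2\pi f`$ scaling used below is an illustrative assumption for the saturated lag, not a model fit:

```python
import math

def saturation_size_lt_ms(f_char_hz):
    """Light-crossing size, in light-milliseconds, of a region whose lag
    saturates below f_char, taking t_lag ~ 1/(2*pi*f_char)."""
    return 1e3 / (2.0 * math.pi * f_char_hz)

# a characteristic frequency of a few Hz -> a region of a few tens of lt-ms
print(saturation_size_lt_ms(3.0))
```

For a characteristic frequency of a few Hz this gives a few tens of light-milliseconds, consistent with the scale quoted in the text.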
The complex pattern of the phase lag associated with the QPO bears remarkable resemblance to that observed in GRS 1915+105 (Cui 1999b). Perhaps the phenomenon is common for certain types of QPOs in BHCs. Combined with the published results (Wijnands et al. 1999), our results show that the phenomenon seems generic for XTE J1550-564, as opposed to being limited to certain spectral states. More importantly, we are now seeing the evolution of the phenomenon as the QPO evolves during the rising phase of the outburst. If we accept the Comptonization scenario for the broad-band lag, we would have to rule out this scenario for the QPO because the QPO time lag increases with X-ray flux during the rising phase (see Fig. 3). This would not be surprising, since, as discussed by Cui (1999b), it is problematic to attribute the observed soft and hard lags of the QPO (and its harmonic) entirely to inverse-Comptonization processes in the first place. At present, no models can naturally account for this type of phase lag phenomenon.
In the context of Comptonization models, the loss of coherence during the rising phase may be due to the variation in the physical conditions of the Comptonizing region (Hua et al. 1997), as was suggested for Cyg X-1 during spectral state transitions (Cui et al. 1997). This scenario might also explain why more widely separated energy bands are less coherent (see Fig. 5), since the difference in the number of scatterings that seed photons experience is greater. The observed Fourier spectra of the coherence function are, however, somewhat unusual. For a number of BHCs, the coherence function is typically close to unity over the entire frequency range where it can be reliably determined (e.g., Vaughan & Nowak 1997; Cui et al. 1997; Nowak et al. 1999). Here, the coherence function drops precipitously above a โbreak frequencyโ (which appears to be near the first harmonic of the QPO; see Fig. 1). We speculate that the break frequency might be indicative of the timescale on which the Comptonizing region varies. The break frequency appears to evolve in unison with the frequency of the QPO, which could imply that (1) the QPO originates in the Comptonizing region (instead of the accretion disk, as often thought) and (2) the Comptonizing region varies on an increasingly short timescale during the initial rising phase of the outburst. The former is also supported by the fact that the QPO becomes stronger at higher energies (Paper 1; also see discussion in Cui 1999b). For BHCs in general, such energy dependence of QPOs is observed in most cases (Cui 1999a). Associating the QPO with the Comptonizing region directly might also account for the intriguing increase of the coherence function at the frequencies of the QPO and its harmonics, if the QPO is more localized, given that the broad-band variability is probably a disk phenomenon and Compton upscattering of disk photons in an extended non-static โcoronaโ can cause the loss of coherence.
This work was supported in part by NASA through grants NAG5-7484 and NAG5-7990. We thank Markus Bรถttcher for useful comments. |
no-problem/0001/math0001070.html | ar5iv | text | # From random sets to continuous tensor products: answers to three questions of W. Arveson
## Introduction
โThe term *product system* is a less tortured contraction of the phrase *continuous tensor product system of Hilbert spaces*โ (Arveson \[3, p. 6\]). The theory of product systems, elaborated by W. Arveson in connection with $`E_0`$-semigroups and quantum fields (see , and refs therein), suffers from a lack of rich sources of examples. I propose such a source by combining A. Vershikโs idea of a *measure type factorization* \[9, Sect. 1c\], my own idea of a *spectral type of a noise* \[8, Sect. 2\], and J. Warrenโs idea (private communication, Nov. 1999) of constructing a measure type factorization from a given random set. The new rich source of examples leads to rather simple answers to three questions of Arveson; see Sections 2, 4, 5 for the questions, and Theorems 2.1, 4.2 and 5.4 for the answers.
It is interesting to compare measure type factorizations with so-called *noises* (a less tortured substitute for such phrases as *homogeneous continuous tensor product system of probability spaces* or *stationary probability measure factorization*), see , , and refs therein. The theory of noises is able to answer two out of the three questions of Arveson; however, the new approach makes it easier. I still do not know whether the third question (see Sect. 4) also has a noise-theoretic answer.
## 1 The construction
Consider the standard Brownian motion $`B()`$ in $``$, and the random set
$$Z_{t,a}=\{s\in[0,t]:B(s)=a\},$$
where $`a,t\in(0,\infty)`$ are parameters.<sup>1</sup><sup>1</sup>1When writing $`Z_{t,a}`$ I always assume that $`a,t\in(0,\infty)`$ unless otherwise stated; the reservation applies when I write, say, $`Z_{\infty,0}`$. The set $`Z_{t,a}`$ may be treated as a random variable taking on values in the space $`๐_t`$ of all closed subsets of $`[0,t]`$.<sup>2</sup><sup>2</sup>2Also the empty set $`\varnothing `$ belongs to $`๐_t`$. There is a natural Borel $`\sigma `$-field $`_t`$ on $`๐_t`$, and $`(๐_t,_t)`$ is a standard Borel space. Moreover, $`๐_t`$ is a compact metric space w.r.t. the Hausdorff metric $`\rho _t(C_1,C_2)=\mathrm{inf}\{\epsilon >0:C_1\subset (C_2)_{+\epsilon }\;\&\;C_2\subset (C_1)_{+\epsilon }\}`$ (here $`C_{+\epsilon }`$ means the $`\epsilon `$-neighborhood of $`C`$), and $`_t`$ is the Borel $`\sigma `$-field of the metric space $`(๐_t,\rho _t)`$. Let $`P_{t,a}`$ be the law of the $`๐_t`$-valued random variable $`Z_{t,a}`$; then $`(๐_t,_t,P_{t,a})`$ is a probability space.
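As an informal illustration (not part of the formal development), the Hausdorff metric $`\rho _t`$ above and a crude random-walk approximation of $`Z_{t,a}`$ can be sketched as follows; the function names and the discretization are ad hoc choices of this sketch.

```python
import random

def hausdorff(C1, C2):
    """Hausdorff distance between two finite nonempty subsets of [0, t]."""
    dist = lambda x, S: min(abs(x - s) for s in S)
    return max(max(dist(x, C2) for x in C1),
               max(dist(x, C1) for x in C2))

def level_set_walk(t=1.0, a=1.0, n=100000, seed=0):
    """Crude approximation of Z_{t,a}: times where a scaled simple random walk
    (a stand-in for Brownian motion) crosses the level a."""
    rng = random.Random(seed)
    dt = t / n
    b, hits = 0.0, []
    for k in range(n):
        b_new = b + rng.choice((-1.0, 1.0)) * dt ** 0.5
        if (b - a) * (b_new - a) <= 0.0:  # the walk crosses (or touches) level a
            hits.append(k * dt)
        b = b_new
    return hits
```

For example, `hausdorff({0.0, 0.5}, {0.5})` returns `0.5`; refining `n` makes the hit times cluster, hinting at the fractal structure of the true zero set.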
###### 1.1 Lemma.
$`P_{t,a_1}\sim P_{t,a_2}`$; that is, measures $`P_{t,a_1}`$ and $`P_{t,a_2}`$ are equivalent ($`=`$ mutually absolutely continuous) for all $`a_1,a_2\in(0,\infty)`$.
###### Proof.
Consider the random time $`T_a=\mathrm{min}\{t\in[0,\infty):B(t)=a\}`$. The shifted set $`Z_{\infty,a}-T_a`$ is independent of $`T_a`$ and distributed like $`Z_{\infty,0}`$. Thus, $`P_{\infty,a}`$ is a mix of shifted copies of $`P_{\infty,0}`$, weighted according to the law of $`T_a`$. However, the laws of $`T_{a_1},T_{a_2}`$ are equivalent measures, therefore $`P_{\infty,a_1}\sim P_{\infty,a_2}`$, which implies $`P_{t,a_1}\sim P_{t,a_2}`$. โ
Denote by $`๐ซ_t`$ the set of all probability measures on $`(๐_t,_t)`$ that are equivalent to $`P_{t,a}`$ for some (therefore, all) $`a\in(0,\infty)`$. The triple $`(๐_t,_t,๐ซ_t)`$ is an example of a structure called a *measure-type space.*
Denote by $`P_{s,a}\otimes P_{t,a}`$ the law of the random set $`C_1\cup (C_2+s)`$, where $`C_1\in๐_s`$ is distributed $`P_{s,a}`$, $`C_2\in๐_t`$ is distributed $`P_{t,a}`$, and $`C_1,C_2`$ are independent; of course, $`C_2+s\subset [s,s+t]`$ is the shifted $`C_2`$.
###### 1.2 Lemma.
$`P_{s,a}\otimes P_{t,a}\sim P_{s+t,a}`$ for all $`s,t,a\in(0,\infty)`$.
###### Proof.
The conditional distribution of the set $`(Z_{s+t,a}\cap [s,s+t])-s`$, given the set $`Z_{s,a}`$, is the mix (over $`x`$) of its conditional distributions, given $`Z_{s,a}`$ and $`B(s)=x`$. The latter conditional distribution, being equal to $`P_{t,|a-x|}`$, belongs to $`๐ซ_t`$ (except for $`x=a`$, which case may be neglected). Therefore the former conditional distribution also belongs to $`๐ซ_t`$. โ
We cannot identify the Cartesian product $`๐_s\times ๐_t`$ with $`๐_{s+t}`$, since the natural maps $`๐_{s+t}\to ๐_s\times ๐_t`$ and $`๐_s\times ๐_t\to ๐_{s+t}`$ are not mutually inverse (in fact, both are non-invertible). However, $`๐ซ_{s+t}\{C:s\in C\}=0`$;<sup>3</sup><sup>3</sup>3I mean, of course, that $`P\left(\{C\in๐_{s+t}:s\in C\}\right)=0`$ for some (therefore all) $`P\in๐ซ_{s+t}`$. neglecting some sets of probability $`0`$, we get
(1.3)
$$(๐_s,_s,๐ซ_s)\otimes (๐_t,_t,๐ซ_t)=(๐_{s+t},_{s+t},๐ซ_{s+t}),$$
or simply $`๐ซ_s\otimes ๐ซ_t=๐ซ_{s+t}`$ for $`s,t\in(0,\infty)`$.
In order to introduce Hilbert spaces $`L_2(๐_t,_t,๐ซ_t)`$ note that the Hilbert spaces $`L_2(๐_t,_t,P_1)`$ and $`L_2(๐_t,_t,P_2)`$ for $`P_1,P_2\in๐ซ_t`$ are in a natural unitary correspondence; namely, $`\psi _1\in L_2(๐_t,_t,P_1)`$ corresponds to $`\psi _2\in L_2(๐_t,_t,P_2)`$ if
$$\psi _2=\sqrt{\frac{P_1}{P_2}}\psi _1,$$
where $`\frac{P_1}{P_2}`$ is the Radon-Nikodym density. Define an element $`\psi `$ of $`L_2(๐_t,_t,๐ซ_t)`$ as a family $`\psi =(\psi _P)_{P\in๐ซ_t}`$ satisfying $`\psi _P\in L_2(๐_t,_t,P)`$ and
$$\psi _{P_2}=\sqrt{\frac{P_1}{P_2}}\psi _{P_1}\text{for all }P_1,P_2\in๐ซ_t.$$
Clearly, $`L_2(๐_t,_t,๐ซ_t)`$ is a separable Hilbert space, naturally isomorphic to every $`L_2(๐_t,_t,P)`$, $`P\in๐ซ_t`$.<sup>4</sup><sup>4</sup>4Intuitively we may think that $`\sqrt{P}\psi _P=\psi `$ for all $`P\in๐ซ_t`$. See also . Relation (1.3) gives
(1.4)
$$L_2(๐_s,_s,๐ซ_s)\otimes L_2(๐_t,_t,๐ซ_t)=L_2(๐_{s+t},_{s+t},๐ซ_{s+t})$$
in the sense that the two Hilbert spaces are *naturally* isomorphic.
However, (1.4) is only a part of requirements stipulated in the definition of a product system \[3, Def. 1.4\]. The point is that (1.4) holds for each $`(s,t)`$ individually; nothing was said till now about measurability in $`s,t`$. In order to get a product system, we need a *measurable* unitary correspondence between spaces $`L_2(๐_t,_t,๐ซ_t)`$ for different $`t`$, making the map implied by (1.4) jointly measurable. The correspondence need not be natural, but our case is especially nice, having a natural correspondence described below.
For every $`\lambda \in(0,\infty)`$ the random process $`t\mapsto \sqrt{\lambda }B(t/\lambda )`$ is a Brownian motion, again. Therefore the two random sets $`\{s:B(s)=a\}`$ and $`\{s:\sqrt{\lambda }B(s/\lambda )=a\}=\lambda \{s:B(s)=a/\sqrt{\lambda }\}`$ are identically distributed. It means that the โrescalingโ map $`R_\lambda :๐_1\to ๐_\lambda `$, defined by $`R_\lambda (C)=\lambda C`$, sends $`P_{1,a/\sqrt{\lambda }}`$ to $`P_{\lambda ,a}`$. Accordingly, it sends $`๐ซ_1`$ to $`๐ซ_\lambda `$. We define a unitary operator $`\stackrel{~}{R}_t:L_2(๐_1,_1,๐ซ_1)\to L_2(๐_t,_t,๐ซ_t)`$ by
$$(\stackrel{~}{R}_t\psi )_{R_t(P)}(R_t(C))=\psi _P(C)\text{for }P\text{-almost all }C\in๐_1,$$
for all $`\psi \in L_2(๐_1,_1,๐ซ_1)`$ and $`P\in๐ซ_1`$; of course, $`R_t(P)`$ is the $`R_t`$-image of $`P`$ (denoted also by $`PR_t^{-1}`$). The disjoint union $`E=\bigsqcup _{t\in(0,\infty)}L_2(๐_t,_t,๐ซ_t)`$ (not a Hilbert space, of course) is now parametrized by the Cartesian product $`(0,\infty)\times L_2(๐_1,_1,๐ซ_1)`$; namely, $`(t,\psi )\in(0,\infty)\times L_2(๐_1,_1,๐ซ_1)`$ parametrizes $`\stackrel{~}{R}_t(\psi )\in L_2(๐_t,_t,๐ซ_t)\subset E`$. We equip $`E`$ with the Borel structure that corresponds to the natural Borel structure on $`(0,\infty)\times L_2(๐_1,_1,๐ซ_1)`$. Linear operations and the scalar product are Borel measurable (on their domains) for trivial reasons. It remains to consider the multiplication $`E\times E\to E`$,
$$E\times E\supset H_s\times H_t\ni (\psi _1,\psi _2)\mapsto \psi _1\otimes \psi _2\in H_s\otimes H_t=H_{s+t}\subset E;$$
it must be Borel measurable.<sup>5</sup><sup>5</sup>5I do not distinguish between $`H_s\otimes H_t`$ and $`H_{s+t}`$ in the notation. A cautious reader may insert a notation for the natural unitary operator $`H_s\otimes H_t\to H_{s+t}`$. In other words, we consider $`\psi =\stackrel{~}{R}_{s+t}^{-1}\left(\stackrel{~}{R}_s(\psi _1)\otimes \stackrel{~}{R}_t(\psi _2)\right)`$ as an $`H_1`$-valued function of four arguments $`s,t\in(0,\infty)`$, $`\psi _1,\psi _2\in H_1`$; we have to check that the function is jointly Borel measurable. After substituting all relevant definitions it boils down to $`C=R_{s+t}^{-1}\left((R_sC_1)\cup (s+R_tC_2)\right)`$ treated as a $`๐_1`$-valued function of four arguments $`s,t\in(0,\infty)`$, $`C_1,C_2\in๐_1`$; the reader may check that the function is jointly Borel measurable. So, the Hilbert spaces
$$H_t=L_2(๐_t,_t,๐ซ_t)$$
form a product system.
## 2 Units
Every measure $`P\in๐ซ_t`$ has an atom, since $`\mathrm{Pr}\left(Z_{t,a}=\varnothing \right)>0`$; in fact, $`\{\varnothing \}`$ is the only atom of $`P`$.
For every $`t\in(0,\infty)`$ the space $`H_t=L_2(๐_t,_t,๐ซ_t)`$ contains a special element $`v_t`$ defined by
$$(v_t)_P(C)=\{\begin{array}{cc}\frac{1}{\sqrt{P(\{\varnothing \})}}\hfill & \text{if }C=\varnothing ,\hfill \\ 0\hfill & \text{otherwise}.\hfill \end{array}$$
Clearly, $`v_{s+t}=v_s\otimes v_t`$ for all $`s,t\in(0,\infty)`$. Also, $`\|v_t\|=1`$ for all $`t`$.
A unit of a product system $`(H_t)`$ is a family $`(u_t)_{t\in(0,\infty)}`$ such that $`u_t\in H_t`$ for all $`t\in(0,\infty)`$, and $`u_s\otimes u_t=u_{s+t}`$ for all $`s,t\in(0,\infty)`$, and the map $`t\mapsto u_t\in\bigsqcup _tH_t`$ is measurable, and $`u_t\ne 0`$ for some $`t`$ (which implies $`u_t\ne 0`$ for all $`t`$); see \[2, p. 10\], \[3, Sect. 4\].
The family $`(v_t)`$ is a unit, since $`\stackrel{~}{R}_t^{-1}(v_t)`$ is measurable in $`t`$; in fact, it is constant, $`\stackrel{~}{R}_t^{-1}(v_t)=v_1`$.
If $`(u_t)`$ is a unit (of a product system) then $`(e^{i\lambda t}u_t)`$ is also a unit for every $`\lambda \in\mathbb{R}`$. All these units may be called equivalent. Some product systems contain non-equivalent units. Some product systems contain no units at all. The trivial product system (consisting of one-dimensional Hilbert spaces) contains a unit, and all its units are equivalent. Arveson \[2, p. 12\] asked: is there a nontrivial product system that contains a unit but does not contain non-equivalent units? The product system constructed in Sect. 1 appears to be such an example; the question is answered by the following result. (Note however that the question is already answered by noise theory; I mean the system of \[9, Sect. 5\].)
###### 2.1 Theorem.
Every unit $`(u_t)`$ is of the form $`u_t=e^{i\lambda t}v_t`$.
###### Proof.
Every $`\psi \in H_t`$ determines a measure $`|\psi |^2`$ on $`(๐_t,_t)`$ by<sup>6</sup><sup>6</sup>6Do not confuse the *measure* $`|\psi |^2`$ with the *number* $`\|\psi \|^2`$, the squared norm; in fact, $`\|\psi \|^2=(|\psi |^2)(๐_t)`$, the total mass.
(2.2)
$$\frac{|\psi |^2}{P}=|\psi _P|^2\text{for some (therefore, all) }P\in๐ซ_t.$$
Note that $`|\psi _1\otimes \psi _2|^2=|\psi _1|^2\otimes |\psi _2|^2`$ whenever $`\psi _1\in H_s`$, $`\psi _2\in H_t`$. If $`(u_t)`$ is a unit, then $`|u_s|^2\otimes |u_t|^2=|u_{s+t}|^2`$. We may assume that $`\|u_t\|=1`$ for all $`t`$ (since $`(u_t/\|u_t\|)`$ is a unit equivalent to $`(u_t)`$, see \[3, Th. 4.1\]), then $`|u_t|^2`$ is a probability measure. Applying \[3, Th. 4.1\] again we get $`\langle u_t,v_t\rangle =e^{\gamma t}`$ for some $`\gamma \in\mathbb{C}`$. However, for every $`\psi \in H_t`$
$$\langle \psi ,v_t\rangle =\int \psi _P\overline{(v_t)_P}\,dP=\psi _P(\varnothing )\frac{1}{\sqrt{P(\{\varnothing \})}}P(\{\varnothing \}),$$
$$|\langle \psi ,v_t\rangle |^2=|\psi _P(\varnothing )|^2P(\{\varnothing \})=|\psi |^2(\{\varnothing \}).$$
Applying it to $`\psi =u_t`$ we get $`|u_t|^2(\{\varnothing \})=e^{2\mathrm{Re}\gamma t}`$. In combination with the property $`|u_s|^2\otimes |u_t|^2=|u_{s+t}|^2`$ it shows that $`|u_t|^2`$ is the law of the Poisson point process with intensity $`(-2\mathrm{Re}\gamma )`$ on $`[0,t]`$.<sup>7</sup><sup>7</sup>7A simple way to check it: divide $`(0,t)`$ into $`n`$ equal intervals; each of them is free of $`C`$ (distributed $`|u_t|^2`$) with probability $`e^{2\mathrm{Re}\gamma t/n}`$, independently of others. Consider $`n=2,4,8,16,\dots`$ Thus, $`|u_t|^2`$ is concentrated on finite sets $`C\in๐_t`$. On the other hand, being absolutely continuous w.r.t. $`๐ซ_t`$, the measure $`|u_t|^2`$ is concentrated on sets $`C\in๐_t`$ with no isolated points. Therefore $`|u_t|^2`$ is concentrated on $`C=\varnothing `$ only. It means that $`\mathrm{Re}\gamma =0`$, that is, $`\gamma =i\lambda `$, $`\lambda \in\mathbb{R}`$. So, $`\|u_t\|=1`$, $`\|v_t\|=1`$ and $`\langle u_t,v_t\rangle =e^{i\lambda t}`$; therefore $`u_t=e^{i\lambda t}v_t`$. โ
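The pivotal computation of the proof — the inner product with $`v_t`$ sees only the atom at the empty set — admits a finite toy check: take any probability space with a distinguished atom playing the role of $`\{\varnothing \}`$. The three-point model below (real scalars, ad hoc names) is ours, purely illustrative.

```python
import math

# Toy model: three "closed sets" -- the empty set and two others --
# with P giving mass 0.3 to the atom at the empty set.
P = {"empty": 0.3, "c1": 0.45, "c2": 0.25}

# v as in the text: 1/sqrt(P({empty})) at the empty set, zero elsewhere.
v = {C: (1.0 / math.sqrt(P["empty"]) if C == "empty" else 0.0) for C in P}

def inner(phi, psi):
    """<phi, psi> in L2(P), real case: sum of phi(C) * psi(C) * P(C)."""
    return sum(phi[C] * psi[C] * P[C] for C in P)

psi = {"empty": 2.0, "c1": -1.0, "c2": 0.5}

lhs = inner(psi, v) ** 2              # |<psi, v>|^2
rhs = psi["empty"] ** 2 * P["empty"]  # |psi(empty)|^2 * P({empty})
```

Both sides agree (here both equal 1.2 up to rounding), and `inner(v, v)` is 1, so `v` is indeed a unit vector.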
## 3 Using Bessel processes
Introduce a parameter $`\delta \in(0,2)`$ and consider the random set
$$Z_{t,a,\delta }=\{s\in[0,t]:\mathrm{BES}_{\delta ,a}(s)=0\},$$
and its law $`P_{t,a,\delta }`$; here $`\mathrm{BES}_{\delta ,a}(\cdot )`$ is the Bessel process of dimension $`\delta `$ started at $`a`$ (see \[6, Chap. XI, Defs 1.1 and 1.9\]). As before, $`t,a\in(0,\infty)`$. The law $`P_{t,a,1}`$ of $`Z_{t,a,1}`$ is equal to the law $`P_{t,a}`$ of $`Z_{t,a}`$ of Sect. 1, since $`\mathrm{BES}_{1,a}`$ is distributed like $`|B(\cdot )+a|`$. The structure of $`Z_{\infty,0,\delta }`$ was well-understood long ago;<sup>8</sup><sup>8</sup>8Namely, $`Z_{\infty,0,\delta }`$ is the closure of the range of a stable subordinator of index $`1-\delta /2`$ (see \[5, Example 6\]); it is of Hausdorff dimension $`1-\delta /2`$ near every point. especially, measures $`P_{t,0,\delta _1}`$ and $`P_{t,0,\delta _2}`$ for $`\delta _1\ne \delta _2`$ are mutually singular. Measures $`P_{t,a,\delta _1}`$ and $`P_{t,a,\delta _2}`$ (where $`a>0`$) are not singular because of a common atom ($`Z_{t,a,\delta }=\varnothing `$ with a positive probability).
Below, $`\mu \ll \nu `$ means that a measure $`\mu `$ is absolutely continuous w.r.t. a measure $`\nu `$; $`\mu \sim \nu `$ means $`\mu \ll \nu \;\&\;\nu \ll \mu `$.
###### 3.1 Lemma.
(a) $`P_{t,a_1,\delta }\sim P_{t,a_2,\delta }`$;
(b) if $`\delta _1\ne \delta _2`$, $`\mu \ll P_{t,a,\delta _1}`$ and $`\mu \ll P_{t,a,\delta _2}`$, then $`\mu `$ is concentrated on $`\{\varnothing \}`$.
###### Proof.
Similarly to the proof of Lemma 1.1, consider the random time $`T_a=\mathrm{min}\{s\in[0,\infty):\mathrm{BES}_{\delta ,a}(s)=0\}`$; $`T_a\in(0,\infty)`$ almost surely (since $`\delta <2`$). The shifted set $`Z_{\infty,a,\delta }-T_a`$ is independent of $`T_a`$ and distributed like $`Z_{\infty,0,\delta }`$. Statement (a) follows from the fact that the laws of $`T_{a_1},T_{a_2}`$ are equivalent measures. Statement (b): $`\mu `$ is concentrated on sets that must have two different Hausdorff dimensions near each point; the only such set is $`\varnothing `$. โ
###### 3.2 Lemma.
$`P_{s,a,\delta }\otimes P_{t,a,\delta }\sim P_{s+t,a,\delta }`$.
The proof is quite similar to the proof of Lemma 1.2.
The Bessel process has the same scaling property as the Brownian motion: the process $`t\mapsto \sqrt{\lambda }\mathrm{BES}_{\delta ,a/\sqrt{\lambda }}(t/\lambda )`$ is distributed like $`\mathrm{BES}_{\delta ,a}(\cdot )`$, hence its zero set on $`[0,t]`$ has the law $`P_{t,a,\delta }`$ irrespective of $`\lambda \in(0,\infty)`$.
So, all properties of the Brownian motion used in Sect. 1 hold for Bessel processes. Generalizing the construction of Sect. 1 we get a product system $`(H_{t,\delta })_{t\in(0,\infty)}`$ for every $`\delta \in(0,2)`$. The product system of Sect. 1 corresponds to $`\delta =1`$.
## 4 Continuum of non-isomorphic product systems
โAt this point, we are not even certain of the *cardinality* of $`\mathrm{\Sigma }`$! It is expected that $`\mathrm{\Sigma }`$ is uncountable, but this has not been proved.โ W. Arveson \[2, p. 12\].
An isomorphism between two product systems $`(H_t)`$, $`(H_t^{\prime})`$ is defined naturally as a family $`(\theta _t)_{t\in(0,\infty)}`$ of unitary operators $`\theta _t:H_t\to H_t^{\prime}`$ such that, first, $`\theta _{s+t}(\psi _1\otimes \psi _2)=\theta _s(\psi _1)\otimes \theta _t(\psi _2)`$ whenever $`\psi _1\in H_s`$, $`\psi _2\in H_t`$, and second, $`\theta _t(\psi )`$ is jointly measurable in $`t`$ and $`\psi `$; see \[3, p. 6\]. Are there uncountably many non-isomorphic product systems? This question, asked by Arveson \[2, p. 12\], will be answered here in the positive by showing that product systems $`(H_{t,\delta })`$ for different $`\delta `$ are non-isomorphic.
Consider the projection operator (the index $`\delta `$ is suppressed)
$$Q_t:H_t\to H_t,(Q_t\psi )_P(C)=\{\begin{array}{cc}\psi _P(C)\hfill & \text{if }C=\varnothing ,\hfill \\ 0\hfill & \text{otherwise},\hfill \end{array}$$
just the orthogonal projection onto the one-dimensional subspace corresponding to the atom of $`๐ซ_{t,\delta }`$. Given $`0<r<s<t`$, we introduce an operator $`Q_{t,(r,s)}=Q_r\otimes \mathrm{๐}_{s-r}\otimes Q_{t-s}`$ on the space $`H_t=H_r\otimes H_{s-r}\otimes H_{t-s}`$; of course, $`\mathrm{๐}_{s-r}`$ is the identical operator on $`H_{s-r}`$. Operators $`Q_{t,E}`$ are defined similarly for every elementary set (that is, a union of finitely many intervals) $`E\subset (0,t)`$.<sup>9</sup><sup>9</sup>9For example, $`Q_{t,(r,s)\cup (u,v)}=Q_r\otimes \mathrm{๐}_{s-r}\otimes Q_{u-s}\otimes \mathrm{๐}_{v-u}\otimes Q_{t-v}`$ for $`0<r<s<u<v<t`$; $`Q_{t,(0,s)}=\mathrm{๐}_s\otimes Q_{t-s}`$; $`Q_{t,(s,t)}=Q_s\otimes \mathrm{๐}_{t-s}`$; $`Q_{t,(0,t)}=\mathrm{๐}_t`$; $`Q_{t,\varnothing }=Q_t`$. Clearly,
$$(Q_{t,E}\psi )_P(C)=\{\begin{array}{cc}\psi _P(C)\hfill & \text{if }C\subset E,\hfill \\ 0\hfill & \text{otherwise}.\hfill \end{array}$$
Note a relation to measures $`|\psi |^2`$ defined by (2.2):
(4.1)
$$\langle Q_{t,E}\psi ,\psi \rangle =|\psi |^2(\{C\in๐_t:C\subset E\}).$$
###### 4.2 Theorem.
If $`\delta _1\ne \delta _2`$ then the product systems $`(H_{t,\delta _1})`$, $`(H_{t,\delta _2})`$ are non-isomorphic.
###### Proof.
Assume the contrary: operators $`\theta _t:H_{t,\delta _1}\to H_{t,\delta _2}`$ are an isomorphism of the product systems. The system $`(H_{t,\delta _1})`$ has a unit, and all its units are equivalent, which is Theorem 2.1 when $`\delta _1=1`$, and a (straightforward) generalization of Theorem 2.1 for arbitrary $`\delta _1`$. The same for the other product system $`(H_{t,\delta _2})`$. It follows that operators $`Q_t`$ are preserved by isomorphisms; $`Q_t\theta _t=\theta _tQ_t`$ (that is, $`Q_t^{(\delta _2)}\theta _t=\theta _tQ_t^{(\delta _1)}`$). Tensor products of these operators are also preserved:
$$Q_{t,E}\theta _t=\theta _tQ_{t,E}.$$
In combination with (4.1) it gives for $`\psi \in H_{t,\delta _1}`$
(4.3)
$$|\psi |^2(A)=|\theta _t\psi |^2(A)$$
for every $`A`$ of the form $`A=A_E=\{C\in๐_t:C\subset E\}`$ where $`E`$ is an elementary set. However, $`A_{E_1\cap E_2}=A_{E_1}\cap A_{E_2}`$, and the $`\sigma `$-field generated by the sets $`A_E`$ is the whole $`_t`$. It follows (by the Dynkin Class Theorem) that (4.3) holds for all $`A\in_t`$, that is,
$$|\psi |^2=|\theta _t\psi |^2\text{for all }\psi \in H_{t,\delta _1},$$
which contradicts Lemma 3.1(b). โ
## 5 Asymmetry via countable random sets
The law $`P_{t,a}`$ of the random set $`Z_{t,a}`$ of Sect. 1 is asymmetric in the sense that $`P_{t,a}`$ is not invariant under the time reversal
$$๐_t\ni C\mapsto t-C\in๐_t$$
(of course, $`t-C=\{t-s:s\in C\}`$). However, the measure type $`๐ซ_t`$ is symmetric; therefore the product system $`(H_t)`$ is symmetric, which means existence of unitary operators $`\theta _t:H_t\to H_t`$ such that, first, $`\theta _{s+t}(\psi _1\otimes \psi _2)=\theta _t(\psi _2)\otimes \theta _s(\psi _1)`$ whenever $`\psi _1\in H_s`$, $`\psi _2\in H_t`$, and second, $`\theta _t(\psi )`$ is jointly measurable in $`t`$ and $`\psi `$; see \[2, p. 12\], \[3, p. 6\]. It was noted by Arveson \[3, p. 6\] that we do not know whether an arbitrary product system is symmetric. Apparently, the first example of an asymmetric product system is โthe noise made by a Poisson snakeโ of J. Warren ; there, asymmetry emerges from a random countable closed set that has points of accumulation from the left, but never from the right. A different, probably simpler way from such sets to asymmetric product systems is presented here.
Our first step toward a suitable countable random set is choosing a (nonrandom) set $`S\subset [0,\infty)`$ and a function $`\lambda :S\times S\to [0,\infty)`$ such that
(a) $`S`$ is closed, countable, $`1`$-periodic (that is, $`s\in S\iff s+1\in S`$ for $`s\in[0,\infty)`$), totally ordered (that is, no strictly decreasing infinite sequences), $`0\in S`$, and $`S\cap (0,1)`$ is infinite;<sup>10</sup><sup>10</sup>10 An example: $`S=\{k-2^{-l}:k,l=1,2,3,\dots\}\cup \{0,1,2,\dots\}`$; another example: $`S=\{k-2^{-l}-2^{-l-m}:k,l,m=1,2,3,\dots\}\cup \{k-2^{-l}:k,l=1,2,3,\dots\}\cup \{0,1,2,\dots\}`$.
(b) $`\lambda (s_1,s_2)>0`$ whenever $`s_1,s_2\in S`$, $`s_1<s_2\le s_1+1`$; and $`\lambda (s_1,s_2)=0`$ whenever $`s_1,s_2\in S`$ do not satisfy $`s_1<s_2\le s_1+1`$;
(c) denoting by $`s_+`$ the least element of $`S\cap (s,\infty)`$ we have
$$\lambda (s,s_+)=\frac{1}{s_+-s},\underset{s^{\prime}\in S,\,s^{\prime}>s_+}{\sum }\lambda (s,s^{\prime})\le 1$$
for all $`s\in S`$.
On the second step we construct a Markov process $`\left(X(t)\right)_{t\in[0,\infty)}`$ that jumps, from one point of $`S`$ to another, according to the rate function $`\lambda (\cdot ,\cdot )`$. Initially, $`X(0)=0`$. We introduce independent random variables $`\tau _s`$ for $`s\in S\cap (0,1]`$ such that $`\mathrm{Pr}\left(\tau _s>t\right)=e^{-\lambda (0,s)t}`$ for all $`t\in[0,\infty)`$. We have $`\mathrm{inf}_s\tau _s>0`$, since $`\sum _s\lambda (0,s)<\infty `$. We let
$$X(t)=0\text{ for }t\in[0,T_1),X(T_1)=s_1,$$
where the random variables $`T_1\in(0,\infty)`$ and $`s_1\in S`$ are defined by
$$T_1=\underset{s}{inf}\tau _s=\tau _{s_1}.$$
The first transition of $`X(\cdot )`$ is constructed. Now we construct the second transition, $`X(T_2-)=s_1`$, $`X(T_2)=s_2`$, using rates $`\lambda (s_1,s)`$; and so on. It may happen (in fact, it happens almost always) that $`\mathrm{sup}_kT_k=T_{\infty}<\infty`$, and then (almost always) $`X(T_k)\to s_{\infty}\in S`$ (recall that $`S`$ is closed). We let $`X(T_{\infty})=s_{\infty}`$ and construct the next transition of $`X(\cdot )`$ using rates $`\lambda (s_{\infty},s)`$. And so on, by a transfinite recursion over countable ordinals, until exhausting the time domain $`[0,\infty)`$. Almost surely, $`X(t)\in S`$ is well-defined for all $`t\in[0,\infty)`$, and $`X(t)\to \infty `$ for $`t\to \infty `$.
The last step is simple. We define the random set $`Z_{\infty,0,S}`$ as the closure of the set of all instants when $`X(\cdot )`$ jumps. That is, $`Z_{\infty,0,S}`$ is the set of all $`t`$ such that $`X(t-\epsilon )<X(t+\epsilon )`$ for all $`\epsilon \in(0,t)`$. Instead of starting at $`0`$ we may start at another point $`a\in S`$, which leads to another process $`X_a(\cdot )`$ and random set $`Z_{\infty,a,S}`$; the law $`P_{t,a,S}`$ of $`Z_{t,a,S}=Z_{\infty,a,S}\cap [0,t]`$ is a probability measure on $`(๐_t,_t)`$.
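As an informal illustration, the accumulation of jump times can already be seen in a drastically simplified finite model: take the first example of $`S`$ truncated at dyadic level $`L`$ on $`[0,1]`$, and keep only the nearest-neighbour jumps $`s\mapsto s_+`$ with rate $`1/(s_+-s)`$ (ignoring the remaining allowed jumps, whose total rate is at most $`1`$). All names below are ad hoc. The mean holding times $`s_+-s`$ sum to $`1`$, so the jump instants pile up near a finite time of mean $`1`$ — a caricature of $`T_{\infty}<\infty`$.

```python
import random

def simulate(L=30, seed=1):
    """Nearest-neighbour jumps through the truncated set
    {0} + {1 - 2**-l : l = 1..L} + {1}; from s, the jump to s_+ has rate 1/(s_+ - s)."""
    rng = random.Random(seed)
    points = [0.0] + [1.0 - 2.0 ** -l for l in range(1, L + 1)] + [1.0]
    t, times = 0.0, []
    for s, s_plus in zip(points, points[1:]):
        t += rng.expovariate(1.0 / (s_plus - s))  # exponential holding time, mean s_+ - s
        times.append(t)
    return times
```

The $`L+1`$ jump instants are increasing and the last one has mean $`1`$; in the full (untruncated) construction one continues past the accumulation point by the transfinite recursion described above.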
###### 5.1 Lemma.
$`P_{t,a_1,S}\sim P_{t,a_2,S}`$ for all $`a_1,a_2\in S`$.
###### Proof.
(Similar to 1.1.) Consider the random time $`T_a=\mathrm{min}Z_{\infty,a,S}`$, just the instant of the first jump: $`X_a(T_a-)=a`$, $`X_a(T_a)>a`$. The conditional distribution of the shifted set (without the first point), $`(Z_{\infty,a,S}-T_a)\setminus \{0\}`$, given $`T_a`$ and $`X_a(T_a)`$, is $`P_{\infty,X_a(T_a),S}`$. Thus, $`P_{\infty,a,S}`$ is a mix of shifted copies of the laws of $`Z_{\infty,b,S}\cup \{0\}`$ for various $`b\in S\cap (a,a+1]`$. However, $`P_{\infty,b,S}=P_{\infty,b+1,S}`$ for all $`b\in S`$. It remains to note that the joint law of $`T_{a_1}`$ and $`\left(X_{a_1}(T_{a_1})\mathrm{mod}1\right)`$ is equivalent to the joint law of $`T_{a_2}`$ and $`\left(X_{a_2}(T_{a_2})\mathrm{mod}1\right)`$. โ
We denote by $`๐ซ_{t,S}`$ the set of all probability measures on $`(๐_t,_t)`$ that are equivalent to $`P_{t,a,S}`$ for some (therefore, all) $`a\in S`$.
###### 5.2 Lemma.
$`P_{s,a,S}\otimes P_{t,a,S}\sim P_{s+t,a,S}`$ for all $`s,t\in(0,\infty)`$, $`a\in S`$.
###### Proof.
(Similar to 1.2.) The conditional distribution of the set $`(Z_{s+t,a,S}\cap [s,s+t])-s`$, given the set $`Z_{s,a,S}`$, is the mix (over $`b`$) of its conditional distributions, given $`Z_{s,a,S}`$ and $`X_a(s)=b`$. The latter conditional distribution, being equal to $`P_{t,b,S}`$, belongs to $`๐ซ_{t,S}`$. Therefore the former conditional distribution also belongs to $`๐ซ_{t,S}`$. โ
Now we can construct the corresponding product system $`(H_{t,S})_{t\in(0,\infty)}`$ as before. Though, scaling invariance is absent; unlike Sect. 1, $`R_t`$ does not send $`๐ซ_{1,S}`$ to $`๐ซ_{t,S}`$. We have no *natural* correspondence between the spaces $`L_2(๐_t,_t,๐ซ_{t,S})`$, but still, *some* Borel-measurable correspondence exists; I do not dwell on this technical issue.
A more important point: in contrast to previous sections, the product system $`(H_{t,S})`$ contains non-equivalent units (since the law of a Poisson point process on $`(0,t)`$ is absolutely continuous w.r.t. $`๐ซ_{t,S}`$). Unlike Sect. 4, an isomorphism need not preserve projection operators $`Q_t`$ and measures $`|\psi |^2`$, which prevents us from deriving asymmetry of the product system $`(H_{t,S})`$ just from asymmetry of measure types $`๐ซ_{t,S}`$. Instead, weโll adapt some constructions of (see (2.15) and (3.4) there).
As before, $`Q_t:H_{t,S}\to H_{t,S}`$ is the one-dimensional projection operator corresponding to the atom $`\{\varnothing \}`$ of $`๐ซ_{t,S}`$ (you see, $`\mathrm{Pr}\left(Z_{t,a,S}=\varnothing \right)>0`$). Introduce operators
$$U_{t,p,n}=\left((1-p)Q_{t/n}+p\mathrm{๐}_{t/n}\right)^{\otimes n}$$
on $`H_t=H_{t/n}\otimes \dots\otimes H_{t/n}=H_{t/n}^{\otimes n}`$ (here $`p\in(0,1)`$ is a parameter).<sup>11</sup><sup>11</sup>11Of course, $`\mathrm{๐}_t`$ is the identical operator on $`H_{t,S}`$. It is just multiplication by a function of $`C\in๐_t`$; the function counts the intervals $`(\frac{k}{n}t,\frac{k+1}{n}t)`$ that contain points of $`C`$, and returns $`p^m`$ where $`m`$ is the number of such intervals. For $`n\to \infty `$, the operators $`U_{t,p,n}`$ converge (in the strong operator topology) to
$$U_{t,p}=\underset{n\to \infty }{lim}U_{t,p,n},(U_{t,p}\psi )_P(C)=p^{|C|}\psi _P(C),$$
just multiplication by $`p^{|C|}`$ where $`|C|`$ is the cardinality of $`C`$; naturally, $`p^{|C|}=0`$ for infinite sets $`C`$. (In fact, $`U_{t,p_1}U_{t,p_2}=U_{t,p_1p_2}`$.) The operator $`U_{t,1-}=lim_{p\to 1-}U_{t,p}`$ is especially interesting:
$$(U_{t,1-}\psi )_P(C)=\{\begin{array}{cc}\psi _P(C)\hfill & \text{if }C\text{ is finite},\hfill \\ 0\hfill & \text{otherwise}.\hfill \end{array}$$
(In fact, $`U_{t,1-}`$ is the projection onto the stable ($`=`$ linearizable) part of the product system \[7, (2.15)\], which is not used here.)
Operators $`U_{t,p}`$ correspond to a particular unit (or rather, equivalence class of units) of the product system $`(H_{t,S})`$. However, we may do the same for any given unit $`u=(u_t)`$. Namely,
$$Q_{t,u}\psi =\frac{\langle \psi ,u_t\rangle }{\langle u_t,u_t\rangle }u_t\text{for }\psi \in H_t;$$
$$U_{t,p,n,u}=\left((1-p)Q_{t/n,u}+p\mathrm{๐}_{t/n}\right)^{\otimes n};$$
$$U_{t,p,u}=\underset{n\to \infty }{lim}U_{t,p,n,u}.$$
Existence of the limit is an easy matter, since the operators $`U_{t,p,n,u}`$ for all $`n`$ belong to a single commutative subalgebra. Even simpler, we may take $`lim_{n\to \infty }U_{t,p,2^n,u}`$, the limit of a *decreasing* sequence of commuting operators.
###### 5.3 Lemma.
$`U_{t,1-,u}=U_{t,1-}`$ for all units $`u`$ of the product system $`(H_{t,S})`$.
###### Proof.
Let $`u=(u_t)`$ and $`v=(v_t)`$ be two units; weโll prove that $`U_{t,1-,u}=U_{t,1-,v}`$. Due to \[3, Th. 4.1\] we may assume that $`\|u_t\|=1`$, $`\|v_t\|=1`$ and $`\langle u_t,v_t\rangle =e^{-\gamma t}`$ for some $`\gamma \in[0,\infty)`$. An elementary calculation (on the plane spanned by $`u_t,v_t`$) gives<sup>12</sup><sup>12</sup>12It is not about product systems, just two vectors in a Hilbert space.
$$\|Q_{t,u}-Q_{t,v}\|=\sqrt{1-e^{-2\gamma t}}.$$
Opening the brackets in $`U_{t,p,n,u}=\left((1-p)Q_{t/n,u}+p\mathrm{๐}_{t/n}\right)^{\otimes n}`$ we get a sum of $`2^n`$ terms, each term being a tensor product of $`n`$ factors. After rearranging the factors (which changes the term, of course, but does not change its norm), a term becomes simply $`(1-p)^kp^{n-k}Q_{\frac{k}{n}t,u}\otimes \mathrm{๐}_{\frac{n-k}{n}t}`$. We see that
$$\|U_{t,p,n,u}-U_{t,p,n,v}\|\le ๐ผ\|Q_{\frac{k}{n}t,u}-Q_{\frac{k}{n}t,v}\|,$$
where the expectation is taken w.r.t. a random variable $`k`$ having the binomial distribution $`\mathrm{Bin}(n,1-p)`$. Using concavity of $`\sqrt{1-e^{-2\gamma t}}`$ in $`t`$,
$$๐ผ\|Q_{\frac{k}{n}t,u}-Q_{\frac{k}{n}t,v}\|=๐ผ\sqrt{1-e^{-2\gamma kt/n}}\le \sqrt{1-e^{-2\gamma ๐ผkt/n}}=\sqrt{1-e^{-2\gamma t(1-p)}},$$
therefore
$$\|U_{t,p,n,u}-U_{t,p,n,v}\|\le \sqrt{1-e^{-2\gamma t(1-p)}}\text{for all }n;$$
$$\|U_{t,p,u}-U_{t,p,v}\|\le \sqrt{1-e^{-2\gamma t(1-p)}};$$
so, $`\|U_{t,1-,u}-U_{t,1-,v}\|=0`$. โ
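The concavity step of the proof — Jensenโs inequality $`๐ผf(k)\le f(๐ผk)`$ for the concave $`f(t)=\sqrt{1-e^{-2\gamma t}}`$ and $`k\sim \mathrm{Bin}(n,1-p)`$ — is easy to check numerically; the parameter values below are arbitrary test values, not from the text.

```python
import math

def f(t, gamma):
    """The concave function t -> sqrt(1 - exp(-2*gamma*t))."""
    return math.sqrt(1.0 - math.exp(-2.0 * gamma * t))

def binom_expectation(n, p_succ, g):
    """E[g(k)] for k ~ Bin(n, p_succ), computed exactly from the pmf."""
    return sum(math.comb(n, k) * p_succ ** k * (1.0 - p_succ) ** (n - k) * g(k)
               for k in range(n + 1))

gamma, t, n, p = 0.7, 1.3, 20, 0.6        # k ~ Bin(n, 1 - p)
lhs = binom_expectation(n, 1.0 - p, lambda k: f(k * t / n, gamma))
rhs = f(t * (1.0 - p), gamma)             # sqrt(1 - e^{-2*gamma*t*(1-p)})
# Jensen for the concave f gives lhs <= rhs, matching the displayed estimate.
```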
Informally, the distinction between empty and non-empty sets $`C\in๐_t`$ is relative (to a special unit) and non-invariant (under isomorphisms of product systems), while the distinction between finite and infinite sets $`C\in๐_t`$ is absolute, invariant.
For any $`C\in๐_t`$ denote by $`C^{\prime}`$ the set of all accumulation points of $`C`$; clearly, $`C^{\prime}\in๐_t`$, and $`C^{\prime}=\varnothing `$ if and only if $`C`$ is finite. We proceed similarly to Sect. 4, but $`C^{\prime}`$ is used here instead of $`C`$. Given an elementary set $`E\subset (0,t)`$, we define operators $`Q_{t,E}^{\prime}`$ by
$$\left(Q_{t,E}^{\prime}\psi \right)_P(C)=\{\begin{array}{cc}\psi _P(C)\hfill & \text{if }C^{\prime}\subset E,\hfill \\ 0\hfill & \text{otherwise}.\hfill \end{array}$$
We do not worry about boundary points of $`E`$, since $`๐ซ_{t,S}`$-almost all $`C`$ avoid them. Operators $`Q_{t,E}^{\prime}`$ are tensor products of operators $`U_{s,1-}`$. (For example, if $`E=(r,s)`$, $`0<r<s<t`$, then $`Q_{t,E}^{\prime}=U_{r,1-}\otimes \mathrm{๐}_{s-r}\otimes U_{t-s,1-}`$.) By Lemma 5.3, every isomorphism preserves $`U_{s,1-}`$; therefore it preserves $`Q_{t,E}^{\prime}`$. Given $`\psi \in H_{t,S}`$, we define a measure $`|\psi |^{\prime 2}`$ on $`(๐_t,_t)`$ as the image of the measure $`|\psi |^2`$ (defined by (2.2)) under the map $`๐_t\ni C\mapsto C^{\prime}\in๐_t`$. Similarly to (4.3) we see that $`|\psi |^{\prime 2}`$ is preserved by isomorphisms (even though $`|\psi |^2`$ is not).
###### 5.4 Theorem.
If $`S^{\prime \prime}\ne \varnothing `$ then the product system $`(H_{t,S})`$ is asymmetric.<sup>13</sup><sup>13</sup>13Of course, $`S^{\prime \prime}`$ means $`(S^{\prime})^{\prime}`$; recall examples of $`S`$ on page 10.
###### Proof.
Assume the contrary: the product system is symmetric; $`\theta _t:H_{t,S}H_{t,S}`$, $`\theta _{s+t}(\psi _1\psi _2)=\theta _t(\psi _2)\theta _s(\psi _1)`$ for $`\psi _1H_{s,S}`$, $`\psi _2H_{t,S}`$. Then
$$\theta _tQ_{t,E}^{}=Q_{t,tE}^{}\theta _t.$$
It follows that
(5.5)
$$R_t\left(|\psi |_{}^{}{}_{}{}^{2}\right)=|\theta _t\psi |_{}^{}{}_{}{}^{2}\text{for }\psi H_{t,S};$$
here $`R_t(|\psi |_{}^{}{}_{}{}^{2})`$ is the image of the measure $`|\psi |_{}^{}{}_{}{}^{2}`$ under the time reversal $`R_t:๐_t๐_t`$, $`R_t(C)=tC`$. However, for $`๐ซ_{t,S}`$-almost all $`C๐_t`$, $`C`$ is totally ordered, therefore $`C^{}`$ is also totally ordered. Both measures, $`|\psi |_{}^{}{}_{}{}^{2}`$ and $`|\theta _t\psi |_{}^{}{}_{}{}^{2}`$, being absolutely continuous w.r.t. $`๐ซ_{t,S}`$, are concentrated on totally ordered sets. In combination with (5.5) it means that they are concentrated on finite sets. So, $`C^{\prime \prime }=\mathrm{}`$ for $`๐ซ_{t,S}`$-almost all $`C๐_t`$.
The Markov process $`X()`$ consists of "small jumps" $`X(t)=\left(X(t-)\right)_+`$ and "big jumps" $`X(t)>\left(X(t-)\right)_+`$.<sup>14</sup><sup>14</sup>14As before, $`s_+`$ is the least element of $`S\cap (s,\mathrm{\infty })`$. The rate of big jumps never exceeds $`1`$. The rate of small jumps results in the mean speed $`1`$ in the sense that $`X(t)-t`$ is a martingale between big jumps. There is a chance that $`X()`$ increases by $`1`$ (or more) by small jumps only (between big jumps). In such a case, $`S^{\prime \prime }\ne \mathrm{\varnothing }`$ implies $`Z_{t,a,S}^{\prime \prime }\ne \mathrm{\varnothing }`$. So, $`\{C\in ๐_t:C^{\prime \prime }\ne \mathrm{\varnothing }\}`$ is not $`๐ซ_{t,S}`$-negligible, in contradiction to the previous paragraph. ∎
School of Mathematics, Tel Aviv Univ., Tel Aviv 69978, Israel
tsirel@math.tau.ac.il
http://www.math.tau.ac.il/~tsirel/ |
no-problem/0001/cond-mat0001160.html | ar5iv | text | # Incommensurate and commensurate antiferromagnetic spin fluctuations in Cr and Cr-alloys from ab-initio dynamical spin susceptibility calculations
## Abstract
A scheme for making ab-initio calculations of the dynamic paramagnetic spin susceptibilities of solids at finite temperatures is described. It is based on Time-Dependent Density Functional Theory and employs an electronic multiple scattering formalism. Incommensurate and commensurate anti-ferromagnetic spin fluctuations in paramagnetic $`Cr`$ and compositionally disordered $`Cr_{95}V_5`$ and $`Cr_{95}Re_5`$ alloys are studied together with the connection with the nesting of their Fermi surfaces. We find that the spin fluctuations can be described rather simply in terms of an overdamped oscillator model. Good agreement with inelastic neutron scattering data is obtained.
Chromium is the archetypal itinerant anti-ferromagnet (AF) whose famous incommensurate spin density wave (SDW) ground state is determined by the nesting wave-vectors $`๐ช_{nest}`$ identified in the Fermi surface . Chromium alloys also have varied AF properties and their paramagnetic states have recently attracted attention owing, in part, to analogies drawn with the high temperature superconducting cuprates especially $`(La_cSr_{1-c})_2CuO_4`$. The incommensurate SDW fluctuations in these materials are rather similar to those seen in the paramagnetic phase of $`Cr`$ close to $`T_N`$. Moreover "parent" materials $`La_2CuO_4`$ in the one instance and $`Cr_{95}Mn_5`$ or $`Cr_{95}Re_5`$ in the other are simple commensurate AF materials which on lowering the electron concentration by suitable doping develop incommensurate spin fluctuations which may be promoted by imperfectly nested Fermi surfaces.
Here we examine the nature of damped diffusive spin fluctuations in chromium above the Néel temperature $`T_N`$ which are precursory to the SDW ground state. We also study dilute chromium alloys, $`Cr_{95}Re_5`$ and $`Cr_{95}V_5`$, and obtain good agreement with experimental data. For example, recent inelastic neutron scattering experiments have measured incommensurate AF "paramagnons", persisting up to high frequencies in the latter system. We explore the temperature dependence, variation with dopant concentration and the evolution of the spin fluctuations in these systems from incommensurability to commensurability with increasing frequency and provide the first ab-initio description of these effects. To this end we describe a new scheme for calculating the wave-vector and frequency dependent dynamic spin susceptibility of metals which is based on the Time Dependent Density Functional Theory (TDDFT) of Gross et al. and as such is an all electron theory. For the first time the temperature dependent dynamic spin susceptibility of metals and alloys is calculated from this basis. There have been several simple parameterised models to describe the magnetic properties of $`Cr`$ and its alloys . All of these have concentrated on the approximately nested electron "jack" and slightly larger octahedral hole pieces of the Fermi surface and, at best, have only included the effects of all the remaining electrons via an electron reservoir. We find similarities between our results and results from such models but show that a complete picture is obtained only when an electronic band-filling effect which favors a simple AF ordering at low temperature is also considered. We also find that the spin fluctuations are given an accurate description as overdamped diffusive simple harmonic oscillator modes which are at the heart of theories of the effects of spin fluctuations upon the properties of itinerant electron systems .
Over the past few years great progress has been made in establishing TDDFT . Analogs of the Hohenberg-Kohn theorem of the static density functional formalism have been proved and rigorous properties found. Here we consider a paramagnetic metal subjected to a small, time-dependent external magnetic field, $`๐(๐ซ,t)`$ which induces a magnetisation $`๐ฆ(๐ซ,t)`$ and use TDDFT to derive an expression for the dynamic paramagnetic spin susceptibility $`\chi (๐ช,w)`$ via a variational linear response approach . Accurate calculations of dynamic susceptibilities from this basis are scarce (e.g. ) because they are difficult and computationally demanding. Here we mitigate these problems by accessing $`\chi (๐ช,w)`$ via the corresponding temperature susceptibility $`\overline{\chi }(๐ช,w_n)`$ where $`w_n`$ denotes a bosonic Matsubara frequency . We outline this approach below.
The equilibrium state of a paramagnetic metal, described by standard DFT, has density $`\rho _0(๐ซ)`$ and its magnetic response function $`\chi (๐ซt;๐ซ^{}t^{})=(\delta m[b](๐ซ,t)/\delta b(๐ซ^{},t^{}))|_{b=0,\rho _0}`$ is given by the following Dyson-type equation.
$$\chi (๐ซt;๐ซ^{}t^{})=\chi _s(๐ซt;๐ซ^{}t^{})+\int ๐๐ซ_1๐t_1\int ๐๐ซ_2๐t_2\chi _s(๐ซt;๐ซ_1t_1)K_{xc}(๐ซ_1t_1;๐ซ_2t_2)\chi (๐ซ_2t_2,๐ซ^{}t^{})$$
(1)
$`\chi _s`$ is the magnetic response function of the Kohn-Sham non-interacting system with the same unperturbed density $`\rho _0`$ as the full interacting electron system, and $`K_{xc}(๐ซt;๐ซ^{}t^{})=(\delta b_{xc}(๐ซ,t)/\delta m(๐ซ^{},t^{}))|_{b=0,\rho _0}`$ is the functional derivative of the effective exchange-correlation magnetic field with respect to the induced magnetisation. As emphasised in ref. , eq. (1) is an exact representation of the linear magnetic response. The corresponding development for systems at finite temperature in thermal equilibrium has also been described . In practice approximations to $`K_{xc}`$ must be made and this work employs the adiabatic local approximation (ALDA) so that $`K_{xc}^{ALDA}(๐ซt;๐ซ^{}t^{})=(d^2b_{xc}^{LDA}(\rho (๐ซ,t),m(๐ซ,t))/dm^2(๐ซ,t))|_{\rho _0,m=0}\delta (๐ซ-๐ซ^{})\delta (t-t^{})=I_{xc}(๐ซ)\delta (๐ซ-๐ซ^{})\delta (t-t^{})`$. On taking the Fourier transform with respect to time we obtain the dynamic spin susceptibility $`\chi (๐ซ,๐ซ^{};w)`$.
For computational expediency we consider the corresponding temperature susceptibility $`\overline{\chi }(๐ซ,๐ซ^{};w_n)`$ which occurs in the Fourier representation of the temperature function $`\overline{\chi }(๐ซ\tau ;๐ซ^{}\tau ^{})`$ that depends on imaginary time variables $`\tau `$,$`\tau ^{}`$ and $`w_n`$ are the bosonic Matsubara frequencies $`w_n=2n\pi k_BT`$. Now $`\overline{\chi }(๐ซ,๐ซ^{};w_n)=\chi (๐ซ,๐ซ^{};iw_n)`$ and an analytical continuation to the upper side of the real $`w`$ axis produces the dynamic susceptibility $`\chi (๐ซ,๐ซ^{};w)`$. Using crystal symmetry and carrying out a lattice Fourier transform we obtain the following Dyson equation for the temperature susceptibility
$$\overline{\chi }(๐ฑ,๐ฑ^{},๐ช,w_n)=\overline{\chi }_s(๐ฑ,๐ฑ^{},๐ช,w_n)+\int ๐๐ฑ_1\overline{\chi }_s(๐ฑ,๐ฑ_1,๐ช,w_n)I_{xc}(๐ฑ_1)\overline{\chi }(๐ฑ_1,๐ฑ^{},๐ช,w_n)$$
(2)
with $`๐ฑ`$,$`๐ฑ^{}`$ and $`๐ฑ_1`$ measured relative to crystal lattice unit cells of volume $`V_{WS}`$.
In terms of the DFT Kohn-Sham Green function of the static unperturbed system
$$\overline{\chi }_s(๐ฑ,๐ฑ^{},๐ช,w_n)=\frac{1}{\beta N}Tr\underset{๐}{\sum }\underset{m}{\sum }G(๐ฑ,๐ฑ^{},๐,\mu +i\nu _m)G(๐ฑ^{},๐ฑ,๐,\mu +i(\nu _m+w_n))e^{i๐ช๐}$$
(3)
where $`๐`$ is a lattice vector between the cells from which $`๐ฑ`$ and $`๐ฑ^{}`$ are measured, $`\mu `$ the chemical potential and $`\nu _m`$ is a fermionic Matsubara frequency $`(2n+1)\pi k_BT`$. The Green function can be obtained within the framework of multiple scattering (KKR) theory . This makes this formalism applicable to disordered alloys as well as ordered compounds and elemental metals, the disorder being treated by the Coherent Potential Approximation (CPA) . Then the partially averaged Green function, $`G(๐ซ,๐ซ^{},z)_{๐ซ\alpha ,๐ซ^{}\beta }`$, where $`๐ซ`$,$`๐ซ^{}`$ lie within unit cells occupied by $`\alpha `$ and $`\beta `$ atoms respectively, can be evaluated in terms of deviations from the Green function of an electron propagating through a lattice of identical potentials determined by the CPA ansatz .
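Eq. (3) has the generic structure of a fermionic Matsubara sum over a product of two propagators. For a single pair of poles, $`G(i\nu )=(i\nu -ฯต)^{-1}`$, the sum is known in closed form, $`T\sum _m G(i\nu _m)G(i\nu _m+iw_n)=(f(ฯต_1)-f(ฯต_2))/(iw_n+ฯต_1-ฯต_2)`$ with $`f`$ the Fermi function. A minimal sketch (illustrative pole energies, not the actual multiple-scattering Green functions; $`k_B=1`$) checks a truncated sum against this closed form:

```python
import math

def fermi(e, T):
    return 1.0 / (math.exp(e / T) + 1.0)

def bubble_sum(e1, e2, wn, T, M=20000):
    """Truncated Matsubara sum T * sum_m G(i nu_m) G(i nu_m + i w_n)."""
    s = 0.0 + 0.0j
    for m in range(-M, M):
        nu = (2 * m + 1) * math.pi * T          # fermionic Matsubara frequency
        s += 1.0 / ((1j * nu - e1) * (1j * (nu + wn) - e2))
    return T * s

def bubble_exact(e1, e2, wn, T):
    """Closed-form result of the frequency sum for two simple poles."""
    return (fermi(e1, T) - fermi(e2, T)) / (1j * wn + e1 - e2)

T = 0.5
wn = 2.0 * math.pi * T                          # first bosonic frequency
num = bubble_sum(-0.3, 0.7, wn, T)
ref = bubble_exact(-0.3, 0.7, wn, T)
print(abs(num - ref))                           # truncation error only (tiny)
```

The terms fall off like $`1/\nu _m^2`$, so a symmetric truncation converges quickly; the full calculation of eq. (3) replaces the toy poles by the KKR Green functions and adds the lattice sum.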
To solve equation (2), we use the direct method of matrix inversion and local field effects are fully incorporated. $`\overline{\chi }(๐ช,๐ช;w_n)=(1/V_{WS})\int ๐๐ฑ\int ๐๐ฑ^{}e^{i๐ช(๐ฑ-๐ฑ^{})}\overline{\chi }(๐ฑ,๐ฑ^{},๐ช,w_n)`$ can then be constructed. The most computationally demanding parts of the calculation are the convolution integrals over the Brillouin Zone which result from the expression for $`\overline{\chi }_s`$, eq. (3). Since all electronic structure quantities are evaluated at complex energies, these convolution integrals have no sharp structure and can be evaluated straightforwardly by an application of adaptive quadrature .
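Once a spatial basis is chosen, eq. (2) becomes the finite linear system $`\overline{\chi }=\overline{\chi }_s+\overline{\chi }_sI_{xc}\overline{\chi }`$, and the direct method amounts to forming $`\overline{\chi }=(1-\overline{\chi }_sI_{xc})^{-1}\overline{\chi }_s`$. A toy sketch with small made-up real matrices standing in for the actual susceptibilities:

```python
def solve(A, B):
    """Solve A X = B by Gauss-Jordan elimination (A, B: lists of rows)."""
    n = len(A)
    M = [ra[:] + rb[:] for ra, rb in zip(A, B)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))   # partial pivoting
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [v / piv for v in M[c]]
        for r in range(n):
            if r != c:
                f = M[r][c]
                M[r] = [v - f * w for v, w in zip(M[r], M[c])]
    return [row[n:] for row in M]

# illustrative (made-up) non-interacting susceptibility and local ALDA kernel
chi_s = [[0.50, 0.10, 0.02],
         [0.10, 0.45, 0.08],
         [0.02, 0.08, 0.40]]
I_xc = [0.9, 1.1, 1.0]
n = len(chi_s)

# A = 1 - chi_s * I_xc  (I_xc is local, i.e. diagonal)
A = [[(1.0 if i == k else 0.0) - chi_s[i][k] * I_xc[k] for k in range(n)]
     for i in range(n)]
chi = solve(A, chi_s)

# residual of the Dyson equation chi = chi_s + chi_s * I_xc * chi
res = max(abs(chi[i][j] - chi_s[i][j]
              - sum(chi_s[i][k] * I_xc[k] * chi[k][j] for k in range(n)))
          for i in range(n) for j in range(n))
print(res, chi[0][0])    # residual ~ 0; chi is exchange-enhanced over chi_s
```

For a positive kernel the Neumann series shows the solved $`\chi `$ is Stoner-enhanced relative to $`\chi _s`$, which the sketch confirms.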
As discussed in ref. , for example, we can define the retarded response function $`\chi (๐ช,๐ช,z)`$ of a complex variable $`z`$. Since it can be shown formally that $`\mathrm{lim}_{z\to \mathrm{\infty }}\chi (z)\sim 1/z^2`$ and we can obtain $`\chi (iw_n)`$ from the above analysis it is possible to continue analytically to values of $`z`$ just above the real axis, i.e. $`z=w+i\eta `$. In order to achieve this we fit our data to a rational function $`\overline{\chi }(๐ช,๐ช,w_n)=\chi (๐ช)(1+\sum _{k=1}^{M-2}U_k(๐ช)w_n^k)/(1+\sum _{k=1}^{M}D_k(๐ช)w_n^k)`$ with the choice of coefficients $`U_k`$,$`D_k`$ ensuring that the sum rule involving the static susceptibility $`\chi (๐ช)`$ is satisfied, i.e. $`\chi (๐ช)=(2/\pi )\int _0^{\mathrm{\infty }}๐wIm\chi (๐ช,๐ช,w)/w`$. We find that very good fits are obtained with small $`M`$.
For chromium and its alloys, we find that $`M=2`$ is sufficient to provide excellent fits to the calculations of $`\overline{\chi }`$ over a wide range of $`w_n`$'s, i.e. $`\overline{\chi }^{-1}(๐ช,๐ช,w_n)=\chi ^{-1}(๐ช)(1+(w_n/\mathrm{\Gamma }(๐ช))+(w_n/\mathrm{\Omega }(๐ช))^2)`$ so that $`\chi ^{-1}(๐ช,๐ช,w)=\chi ^{-1}(๐ช)(1-i(w/\mathrm{\Gamma }(๐ช))-(w/\mathrm{\Omega }(๐ช))^2)`$ (standard error $`<`$ 3$`\%`$ of mean). For the systems studied here we find $`\mathrm{\Gamma }(๐ช)/\mathrm{\Omega }(๐ช)<0.151`$ and so the spin dynamics can be described in terms of a heavily overdamped oscillator model. Evidently $`t_{SF}(๐ช)=\mathrm{\hbar }/\mathrm{\Gamma }(๐ช)`$ represents a relaxation time for a damped diffusive spin fluctuation with wavevector $`๐ช`$. Moreover, the imaginary part of the dynamical susceptibility which, when multiplied by $`(1-\mathrm{exp}(-\beta w))^{-1}`$, is proportional to the scattering cross-sections measured in inelastic neutron scattering experiments, is written $`Im\chi (๐ช,๐ช,w)=\chi (๐ช)w\mathrm{\Gamma }^{-1}(๐ช)/((1-(w/\mathrm{\Omega }(๐ช))^2)^2+(w/\mathrm{\Gamma }(๐ช))^2)`$. We note that theories for the spin fluctuation effects upon itinerant electron properties, including quantum critical phenomena, also invoke such a model . The small $`๐`$, $`๐=(๐ช-๐ช_0)`$, dependence of $`\chi ^{-1}(๐ช_0+๐)`$ and $`\mathrm{\Gamma }(๐ช_0+๐)`$ is of particular importance. ($`๐ช_0`$ is where $`\chi ^{-1}(๐ช)`$ is smallest.)
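This lineshape satisfies the Kramers-Kronig sum rule quoted above, $`\chi (๐ช)=(2/\pi )\int _0^{\mathrm{\infty }}๐wIm\chi (๐ช,๐ช,w)/w`$, identically, and in the heavily damped regime (linear damping term dominant) its spectral weight peaks at $`w\sim \mathrm{\Gamma }`$. A numerical sketch with purely illustrative parameters ($`k_B=\mathrm{\hbar }=1`$, arbitrary units):

```python
import math

def im_chi(w, chi0, Gamma, Omega):
    """Im chi(q,q,w) of the damped-oscillator fit (illustrative units)."""
    return chi0 * (w / Gamma) / ((1.0 - (w / Omega) ** 2) ** 2 + (w / Gamma) ** 2)

chi0, Gamma, Omega = 1.0, 0.15, 1.0      # made-up values, damping term dominant

# (2/pi) int_0^inf Im chi / w dw on a geometric grid; should recover chi0
grid = [1e-8 * 1.01 ** k for k in range(2800)]
total = 0.0
for a, b in zip(grid, grid[1:]):
    fa = im_chi(a, chi0, Gamma, Omega) / a
    fb = im_chi(b, chi0, Gamma, Omega) / b
    total += 0.5 * (fa + fb) * (b - a)
sum_rule = 2.0 / math.pi * total
print(sum_rule)                          # ~ 1.0 = chi0

# the relaxational peak of Im chi sits near w ~ Gamma
w_peak = max(grid, key=lambda w: im_chi(w, chi0, Gamma, Omega))
print(w_peak)
```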
Finite-temperature calculations were carried out for the static susceptibilities of the three systems, using the experimental b.c.c. lattice spacing of $`Cr`$, 2.88 Å, and von Barth-Hedin local exchange and correlation . We find that (i) $`Cr`$ orders into an incommensurate AF state below 280K specified by $`๐ช_0=\{0,0,0.93\}`$, where experiment yields $`T_N=311K`$ and $`๐ช_0=\{0,0,0.95\}`$ ; (ii) $`Cr_{95}V_5`$ does not develop magnetic order at any temperature, as found in experiment ; and (iii) $`Cr_{95}Re_5`$ orders into a weakly incommensurate AF state below T=410K ($`๐ช_0=\{0,0,0.96\}`$), whereas experimentally it forms a commensurate AF state below $`T_N`$ of 570K .
Our calculations for $`Im\chi (๐ช,๐ช,w)`$ are shown in figures 1(a) and (b) for $`Cr_{95}V_5`$ and $`Cr_{95}Re_5`$ respectively. Our calculation of $`Im\chi (๐ช,๐ช,w)`$ for paramagnetic $`Cr`$ at $`T=300K`$ is broadly similar to that for paramagnetic $`Cr`$ at $`T=0K`$ by Savrasov so a figure is not presented. It shows incommensurate spin fluctuations for small frequencies which are signified by peaks in $`Im\chi (๐ช,๐ช,w)`$ at $`๐ช_0`$ which is equal to the Fermi surface nesting vector $`๐ช_{nest}`$. For increasing $`w`$ the peaks move to $`๐ช=\{0,0,1\}`$ i.e. the spin fluctuations become commensurate. The spin fluctuations at 300K shown in fig.1(a) for $`Cr_{95}V_5`$, on the other hand, remain incommensurate up to much higher frequencies maintaining intensity comparable to that at the peak at low $`w`$. This qualitative difference between the two systems has not been described before by a first-principles theory although found experimentally . For lower temperatures we find that $`Im\chi (๐ช,๐ช,w)`$ of $`Cr_{95}V_5`$ becomes a sharper function of $`w`$ and we can also infer that $`(1-\mathrm{exp}(-\beta w))^{-1}Im\chi (๐ช,๐ช,w)`$ should vanish for small $`w`$ when $`T\to 0`$K. These aspects have also been noted from experimental measurements .
It is striking that the alloy's Fermi surface is well defined despite impurity scattering although it is more poorly nested (the difference between the sizes of the electron and hole octahedra is larger) than that of $`Cr`$ owing to its fewer electrons. Once again the peaks in $`Im\chi (๐ช,๐ช,w)`$ occur at the nesting vectors $`๐ช_{nest}=\{0,0,0.9\}`$. The spin fluctuations in the paramagnetic phase of $`Cr_{95}Re_5`$ are shown in fig.1(b). Here adding electrons by doping with $`Re`$ has improved the Fermi surface nesting so that $`๐ช_{nest}=\{0,0,0.96\}`$ and $`Im\chi (๐ช,๐ช,w)`$ has weight spread from $`๐ช_{nest}`$ to $`\{0,0,1\}`$. The dominant spin fluctuations now rapidly become commensurate with increasing $`w`$. We obtain a rather similar picture from the calculations for $`Cr`$ by artificially raising the chemical potential by a small amount. Interestingly when we account for thermally induced electron-phonon scattering by adding a small shift ($`20`$ meV) to the Matsubara frequencies $`\nu _m`$ in eq. (3), we find a tendency for the dominant spin fluctuations to become commensurate at lower $`w`$ in both $`Cr`$ and $`Cr_{95}Re_5`$.
Some of these features also emerge qualitatively from the simple parameterised models based on that part of the band-structure near $`\mu `$ which leads to the nested electron and hole octahedra at the Fermi surface . Our โfirst-principlesโ calculations, being based on an all-electron theory, however, need some additional interpretation. As analysed by recent total energy calculations , b.c.c. $`Cr`$ with the experimentally measured lattice spacing tends to form a commensurate AF phase at low temperatures which is modulated by a spin density wave of appropriate wavelength. The overall AF instability of the paramagnetic phase is promoted by the approximate half-filling of the narrow 3d-bands which is further modified by a weak perturbation coming from the Fermi surface nesting. As dopants such as $`V`$ and $`Re`$ are added not only is the Fermi surface nesting altered but also the d-bands become either further from or closer to being half-filled.
The calculations can be summarised in terms of the damped oscillator model. $`\chi ^{-1}(๐ช_0+๐)\approx \chi ^{-1}(๐ช_0)+cQ^2`$ for small $`๐`$ for $`Cr`$ and $`Cr_{95}V_5`$ whereas for $`Cr_{95}Re_5`$, $`\chi ^{-1}(๐ช)`$ is nearly constant for a range of $`๐ช`$ between $`๐ช_{nest}`$ and $`\{0,0,1\}`$. We find the product $`\gamma (๐ช)`$ of $`\chi (๐ช)`$ with damping factor $`\mathrm{\Gamma }(๐ช)`$ to be only very weakly temperature dependent in these three systems and $`\gamma (๐ช_0+๐)\approx \gamma (๐ช_0)`$, a constant, for small $`๐`$, yielding a dynamical critical exponent of 2 typically assumed for antiferromagnetic itinerant electron systems. The nature of the spin fluctuations can be succinctly described via the variance $`<m^2(๐ช)>`$. From the fluctuation dissipation theorem, $`<m^2(๐ช)>=(1/\pi )\int _{-\mathrm{\infty }}^{\mathrm{\infty }}๐w(1-\mathrm{exp}(-\beta w))^{-1}Im\chi (๐ช,๐ช,w)`$. Fig.2 shows $`<m^2(๐ช)>`$ at several temperatures for $`Cr`$ where we have used a frequency cutoff of 500 meV and so have not included the faster of the quantum fluctuations. Near $`T_N`$ the magnetic fluctuations have their greatest weight around the $`๐ช_{nest}`$. At higher $`T`$ the peak diminishes and weight grows at $`๐ช`$'s nearer $`\{0,0,1\}`$ reflecting the shift in the peak in $`Im\chi (๐ช,๐ช,w)`$ from $`๐ช_{nest}`$ to commensurate $`๐ช`$'s with increase in frequency $`w`$. Similar plots to fig.2 for $`Cr_{95}V_5`$ and $`Cr_{95}Re_5`$ show respectively a smaller and greater tendency for the weight in $`<m^2(๐ช)>`$ to transfer in this way. If the frequency cutoff is reduced, $`<m^2(๐ช)>`$ near $`\{0,0,1\}`$ is sharply diminished so that the Brillouin zone integral of $`<m^2(๐ช)>`$, $`<m^2>`$, decreases with increasing temperature as inferred from neutron scattering data .
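Since $`Im\chi (-w)=-Im\chi (w)`$, the fluctuation-dissipation integral reduces to $`<m^2(๐ช)>=(1/\pi )\int _0^{w_c}\mathrm{coth}(w/2T)Im\chi (w)๐w`$ with cutoff $`w_c`$. A sketch with the same illustrative damped-oscillator lineshape (arbitrary units, $`k_B=\mathrm{\hbar }=1`$) showing the growth with temperature and the cutoff sensitivity discussed above:

```python
import math

def im_chi(w, chi0=1.0, Gamma=0.15, Omega=1.0):
    """Illustrative damped-oscillator spectral function (made-up parameters)."""
    return chi0 * (w / Gamma) / ((1.0 - (w / Omega) ** 2) ** 2 + (w / Gamma) ** 2)

def m2(T, cutoff, n=200000):
    """<m^2(q)> = (1/pi) int_0^cutoff coth(w/2T) Im chi(w) dw (midpoint rule)."""
    h = cutoff / n
    total = 0.0
    for k in range(n):
        w = (k + 0.5) * h                 # midpoint avoids w = 0
        total += im_chi(w) / math.tanh(w / (2.0 * T)) * h
    return total / math.pi

hot = m2(0.30, 50.0)
cold = m2(0.05, 50.0)
truncated = m2(0.30, 5.0)
print(hot, cold, truncated)   # hot > cold; a lower cutoff removes fast weight
```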
We have not included the effects of spin fluctuation interactions i.e. mode-mode coupling into our calculations and have determined $`T_N`$ and the static susceptibility by what is essentially an ab-initio Stoner theory. In weak itinerant ferromagnets, for example, mode-mode coupling causes a dramatic suppression of the Curie temperatures from those estimated from a Stoner theory. For the $`Cr`$ systems studied here, the fair agreement with experiment which we obtain for $`T_N`$ and the relatively large value of the damping factor $`\mathrm{\Gamma }(๐ช)`$ with respect to that in weakly itinerant ferromagnets, is suggestive that such spin fluctuation effects are small. Spin fluctuation calculations have, however, been carried out by Hasegawa and others for simple parameterised models of $`Cr`$ neglecting Stoner particle-hole excitations and Fermi surface nesting . Using a functional integral technique he made a high temperature (static) approximation so that all the thermally induced fluctuations were treated classically and found $`T_N`$ for commensurate AF order to be 370K and $`\sqrt{<m^2>}`$ to increase linearly with temperature above $`T_N`$. A quantitative calculation, however, in which the Stoner particle-hole excitations and spin fluctuations are treated within the same framework is needed to determine unequivocally whether or not a Stoner-like picture is adequate for these systems.
In summary, we have presented a first-principles framework for the calculation of dynamic paramagnetic spin susceptibilities of metals and their alloys at finite temperatures. At this point we add that the approach can also be applied to the study of magnetic excitations in magnetically ordered materials. The first applications on the AF spin fluctuations in $`Cr`$ and $`Cr_{95}Re_5`$ above $`T_N`$ and in paramagnetic $`Cr_{95}V_5`$ have found good agreement with available experimental data.
We are grateful to F.J.Pinski, S.Hayden and R.Doubble for useful discussions. |
no-problem/0001/hep-ph0001018.html | ar5iv | text | # Muon Collider Physics at Very High Energies<sup>1</sup><sup>1</sup>1To appear in the Proceedings of Studies on Colliders and Collider Physics at the Highest Energies: Muon Colliders at 10 TeV to 100 TeV, Montauk Yacht Club Resort, Montauk, New York, 27 September - 1 October, 1999.
## Introduction
The large mass of the muon compared to that of the electron results in a large suppression of bremsstrahlung radiation. Consequently it is possible to consider building circular colliders with energies in the multi-TeV regime [muon]. Muon colliders have been proposed as Higgs factories and more recently as neutrino factories, but the long-term goal of muon colliders should be to extend the energy frontier. It is not clear at the present time whether advances in accelerator technology will result in electron-positron machines achieving energies of several TeV. In this workshop first attempts were made to explore the feasibility of muon colliders with energies of at least 10 TeV.
It is hard to know what kind of physics might present itself in the 10-100 TeV mass range. After all, physicists have been arguing for a long time about the physics that will manifest itself at the Large Hadron Collider (LHC). The LHC, linear electron-positron colliders, and perhaps muon colliders should give us some clue as to what to expect at the following generation of machines. It is easy to imagine scenarios where a new collider might be necessary, but it is impossible to motivate a specific energy at this time. We can only speculate as to what physics might appear at the LHC or future linear colliders.
## Luminosity requirements
The figure of merit for physics searches at a muon collider is the QED cross section $`\mu ^+\mu ^{-}\to e^+e^{-}`$, which has the value
$`\sigma _{QED}={\displaystyle \frac{100\mathrm{fb}}{s(\mathrm{TeV}^2)}}`$ (1)
To arrive at a simple estimate of the integrated luminosity needed to study new physics, we assume
$`\left({\displaystyle \int \mathcal{L}๐t}\right)\sigma _{QED}\stackrel{>}{\sim }1000\mathrm{events}`$ (2)
Then the luminosity requirement for this number of events to be accumulated in one year's running is
$`\mathcal{L}\stackrel{>}{\sim }10^{33}s(\mathrm{cm})^{-2}(\mathrm{sec})^{-1}`$
For the colliders with the center-of-mass energies considered at this meeting:
* $`\sqrt{s}\sim 10`$ TeV, requiring
$`{\displaystyle \int \mathcal{L}๐t}\stackrel{>}{\sim }1(\mathrm{ab})^{-1},\mathcal{L}\stackrel{>}{\sim }10^{35}(\mathrm{cm})^{-2}(\mathrm{sec})^{-1}`$
* $`\sqrt{s}\sim 100`$ TeV, requiring
$`{\displaystyle \int \mathcal{L}๐t}\stackrel{>}{\sim }100(\mathrm{ab})^{-1},\mathcal{L}\stackrel{>}{\sim }10^{37}(\mathrm{cm})^{-2}(\mathrm{sec})^{-1}`$
These luminosities are extremely high, of course, and it is not clear if experiments can be performed in such an environment.
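Eqs. (1)-(2) fix these numbers up to the assumed operating time; taking a conventional $`10^7`$-second year (an assumption, but the one that reproduces the quoted instantaneous luminosities) gives a quick check:

```python
def sigma_qed_fb(sqrt_s_tev):
    """Benchmark cross-section of eq. (1): 100 fb / s[TeV^2]."""
    return 100.0 / sqrt_s_tev ** 2

FB = 1e-39          # 1 fb in cm^2
YEAR = 1e7          # assumed operating year in seconds

for sqrt_s in (10.0, 100.0):
    sigma = sigma_qed_fb(sqrt_s)
    lumi = 1000.0 / (sigma * FB * YEAR)   # luminosity for 1000 events per year
    print(sqrt_s, sigma, lumi)
# -> at 10 TeV:  sigma = 1 fb,    L ~ 1e35 cm^-2 s^-1
# -> at 100 TeV: sigma = 0.01 fb, L ~ 1e37 cm^-2 s^-1
```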
## Electroweak symmetry breaking
A 10 TeV muon collider might be very useful for exploring the physics responsible for electroweak symmetry breaking. If Higgs bosons with $`m_H<\mathcal{O}(800)`$ GeV do not exist, then interactions of longitudinally polarized weak bosons $`(W_L,Z_L)`$ become strong and can be probed by studying vector boson scattering as shown in the figure. Therefore, new physics must be present at the TeV energy scale. While one can study strong $`W_LW_L`$ scattering at the LHC, linear colliders, or $`\mu ^+\mu ^{-}`$ colliders with a few TeV center-of-mass energy, it might become necessary to go to higher energies to fully explore the multitude of resonances. Indeed we are still studying the analogous spectrum of QCD today.
## Fermion mass generation
The mechanism responsible for fermion masses and the mechanism breaking the electroweak symmetry are the same in the Standard Model. A Higgs scalar acquires a vacuum expectation value giving rise to massive gauge bosons and (through Yukawa couplings) masses for the fermions. However, it need not be the case that these mechanisms are the same, and technicolor models were the most prominent examples of theories where the fermion masses arise from a different sector from that responsible for the electroweak symmetry breaking. Hence one should keep an open mind about the origin of fermion masses. Very general constraints one can place on the physics of fermion mass generation are unitarity bounds. The relevant bound for fermions scattering into longitudinally polarized vector boson $`V_L`$,
$$f\overline{f}\to V_LV_L,$$
(3)
is the Appelquist-Chanowitz bound [ac], which states that unitarity is violated at the scale
$$\mathrm{\Lambda }_f<\frac{8\pi v^2}{\sqrt{3N_c}m_f},$$
(4)
where $`v=(\sqrt{2}G_F)^{-1/2}`$ is the electroweak vev and $`N_c`$ is the number of colors of the fermion. In the Standard Model this unitarity violation is cured by the inclusion of the $`s`$-channel Higgs exchange diagram. The strongest bound comes for the heaviest fermion, the top quark, for which $`\mathrm{\Lambda }_t\approx 3`$ TeV, indicating that some new physics must occur below this scale.
For a muon one gets $`\mathrm{\Lambda }_\mu \approx 8,000`$ TeV. So if the physics responsible for the muon mass saturates this bound, it is beyond the reach even of a 10-100 TeV muon collider. But one does not really expect that the bound is saturated, but rather that the fermion masses are all generated at a common scale with some masses suppressed by some approximate flavor symmetries. In light of the lower value of $`\mathrm{\Lambda }_t`$, one might expect a 10 TeV collider to provide important insight into fermion mass generation if Nature is not so kind as to provide an elementary scalar particle. In the typical case one expects the resonances to be broad. In some scenarios [be], one can have strongly interacting Higgs sectors with narrow resonances for which a small energy spread might be helpful.
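A quick evaluation of eq. (4) with $`v=(\sqrt{2}G_F)^{-1/2}\approx 0.246`$ TeV reproduces both scales quoted above (taking $`m_t\approx 175`$ GeV with $`N_c=3`$ and $`m_\mu \approx 105.7`$ MeV with $`N_c=1`$):

```python
import math

V_EW = 0.246                     # electroweak vev in TeV

def unitarity_scale_tev(m_f_tev, n_c):
    """Appelquist-Chanowitz scale of eq. (4)."""
    return 8.0 * math.pi * V_EW ** 2 / (math.sqrt(3.0 * n_c) * m_f_tev)

lam_top = unitarity_scale_tev(0.175, 3)       # top quark
lam_muon = unitarity_scale_tev(105.66e-6, 1)  # muon
print(lam_top, lam_muon)         # roughly 3 TeV and 8,000 TeV, as quoted
```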
One can also study the unitarity violation in the subprocess $`V_LV_L\to t\overline{t}`$, analogous to the case discussed in the previous section for electroweak symmetry breaking. This process could also be sensitive to new physics responsible for the fermion masses, and one would measure the cross sections for $`\mu ^+\mu ^{-}\to \nu \overline{\nu }t\overline{t}`$ and $`\mu ^+\mu ^{-}\to \mu ^+\mu ^{-}t\overline{t}`$ , and in scenarios where the unitarity is saturated, one might need the energy reach of a very high energy muon collider to probe these strong interactions.
## Gauge Bosons
A favorite target for new physics is the possibility of new gauge bosons beyond those found in the Standard Model. One might first reveal the existence of these particles via radiative return [rad] whereby a vector boson with mass less than the center-of-mass energy is produced in association with an energetic photon. Alternatively one could pinpoint the mass of the vector boson by doing precision measurements of the couplings and asymmetries at energies below the vector boson mass. In either case, one would ultimately want to build a collider with an energy equal to the mass of the vector boson and take advantage of the resonance cross section. An important consideration then is the beam energy spread of the muon collider. The width of the vector boson should scale linearly with its mass. The expectation for a 10 TeV collider is that the energy spread $`\sigma _E/E`$ should be something like $`10^{-4}`$ to $`10^{-3}`$ [bking], so the spread should be much smaller than the resonance peak in the typical case.
## Supersymmetry
It is possible that the LHC and linear colliders will uncover only part of the supersymmetric (SUSY) spectrum. In fact the lightest two generations of squarks and sleptons might appear at the multi-TeV scale. The absence of certain supersymmetric partners being produced below the TeV energy scale would certainly compel us to go to higher energies.
Beyond the discovery of all the superpartners to the Standard Model particles, another possible role for a very high energy muon collider would be to uncover an entirely new sector responsible for the dynamical breaking of supersymmetry. In gravitationally mediated SUSY breaking, the dynamical sector is hidden and couples only via gravitational couplings to the supersymmetric Standard Model particles. However other scenarios of SUSY breaking are possible, and these can be directly probed with sufficiently energetic collisions. In gauge mediated SUSY breaking scenarios, for example, just such a sector (known as the messenger sector) occurs at a scale beyond that which can be probed at the LHC. This messenger sector might perhaps be accessible at a very high energy muon collider. The LHC might indirectly provide clues about the source of SUSY breaking by measuring the spectrum of superpartners and perhaps seeing radiative decays in the case of gauge mediated SUSY breaking. In fact by measuring the location of displaced vertices (relative to the interaction point) from the radiative decay of the next-lightest supersymmetric particle one can put a constraint on the scale of the gauge mediation sector as first suggested in a Very Large Hadron Collider study [vlhc].
## Compton Backscattering
It seems at first peculiar to consider backscattering photons off of a muon beam. After all, the reason to employ muon beams rather than electron beams is to decrease electromagnetic radiation. Eventually however, even for muons, bremsstrahlung radiation would again become a problem at sufficiently high energies in a circular collider. At the energies contemplated here, one can reconsider employing Compton backscattering to produce photon beams of comparable energies.
$$\omega _{\mathrm{max}}=\frac{x}{1+x}E_{\mathrm{beam}},$$
(5)
where
$$x=\frac{4E_{\mathrm{beam}}\omega _{\mathrm{laser}}}{m_\mu ^2}.$$
(6)
Assuming an incident laser with energy $`1.17`$ eV<sup>1</sup><sup>1</sup>1For definiteness we take a neodymium glass laser with $`\omega _{\mathrm{laser}}=1.17`$ eV which is often considered for Compton scattering at a linear $`e^+e^{-}`$ collider. In any case, one expects the laser energy to be in the few eV range., one obtains maximum backscattered photon energies (shown in the figure) which are still much smaller than the incident muon beam energy. A more energetic photon source would be needed to fully realize the backscattered photon option even at the extremely high muon energies considered here.
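Evaluating eqs. (5)-(6) for a 1.17 eV laser shows how small $`x`$ remains even at these beam energies, so $`\omega _{\mathrm{max}}`$ stays far below $`E_{\mathrm{beam}}`$ (a sketch; the sample beam energies are chosen to match the collider options above):

```python
M_MU = 105.66e6                  # muon mass in eV

def omega_max(e_beam, omega_laser=1.17):
    """Maximum backscattered photon energy, eqs. (5)-(6); energies in eV."""
    x = 4.0 * e_beam * omega_laser / M_MU ** 2
    return x / (1.0 + x) * e_beam

for e_tev in (5.0, 50.0):
    e = e_tev * 1e12
    w = omega_max(e)
    print(e_tev, w / 1e9, w / e)   # beam [TeV], omega_max [GeV], fraction
# a 5 TeV beam yields only ~10 GeV photons; a 50 TeV beam ~1 TeV
```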
## Conclusions
It is difficult to motivate a very high energy muon collider without information that will be gleaned after years of operation of the LHC and linear colliders. However, if the past history of particle physics has taught us anything it is that the most important progress has occurred by going to higher and higher energies. It will be interesting in the coming years to learn whether multi-TeV muon colliders are realistic and economical.
## Acknowledgement
Work supported in part by the U.S. Department of Energy under Grant No. DE-FG02-95ER40661. |
no-problem/0001/hep-ph0001035.html | ar5iv | text | # ฮณ^(*)ฮณ^(*) reactions at high energies<sup>1</sup><sup>1</sup>1submitted to proceedings of the Durham Collider Workshop, Durham, UK, 22-26 September 1999
## 1 Introduction
Diffractive phenomena occur in each of untagged, single-tagged and double-tagged photon-photon reactions via the total hadronic $`\gamma \gamma `$ cross-section, $`\sigma _{\gamma \gamma }`$; the structure function of the real photon, $`F_2^\gamma `$ (or equivalently the $`\gamma ^{*}\gamma `$ cross-section); and the total hadronic $`\gamma ^{*}\gamma ^{*}`$ cross-section, $`\sigma _{\gamma ^{*}\gamma ^{*}}`$ respectively. Thus in principle it is possible to study diffraction continuously from the quasi-hadronic regime dominated by non-perturbative physics to the realm of perturbative QCD with either single or double hard scales.
## 2 $`\gamma \gamma `$ scattering
The total hadronic $`\gamma \gamma `$ cross-section was measured at LEP in the ranges $`5\le W\le 145`$ GeV and $`10\le W\le 110`$ GeV , where $`W`$ is the photon-photon centre-of-mass energy (Fig. 1).
Since the use of different Monte Carlo models (PYTHIA or PHOJET ) for the unfolding of detector effects leads to significant shifts of the normalisation, the results of the two experiments are compared using only PHOJET for these corrections<sup>2</sup><sup>2</sup>2The published OPAL data are given after averaging the PHOJET and the PYTHIA corrected results (Fig. 3)..
Both experiments have measured the high energy rise of the total cross-section typical for hadronic interactions. However a faster rise of the total $`\gamma \gamma `$ cross-section with $`W`$ compared to purely hadronic interactions has not been unambiguously observed. This faster rise is predicted by most models for $`\gamma \gamma `$ interactions.
To quantify this effect, both experiments have fitted a Donnachie-Landshoff parametrisation of the form $`\sigma _{\gamma \gamma }=Xs^{\epsilon }+Ys^{-\eta }`$ with $`\eta =0.34`$. The results are $`\epsilon =0.10\pm 0.02`$ (OPAL) and $`\epsilon =0.22\pm 0.02`$ (L3), where the L3 fit uses only the preliminary data at $`\sqrt{s}_{\mathrm{ee}}=183-189`$ GeV. The fitted curves are also shown in Fig. 1. The L3 result implies a significantly faster rise of the total $`\gamma \gamma `$ cross-section than in hadron-hadron scattering, whereas the OPAL result is consistent with a typical value of $`\epsilon \approx 0.08`$ for a soft Pomeron.
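For orientation, the Donnachie-Landshoff form can be evaluated directly. A minimal sketch in Python: only $`\epsilon `$ and $`\eta `$ are taken from the text, while the normalisations X and Y are invented placeholders (not the fitted LEP values). It shows the Reggeon-driven fall at low $`W`$ and the Pomeron-driven rise at high $`W`$:

```python
import math

def sigma_gg(W, X=156.0, eps=0.10, Y=320.0, eta=0.34):
    """Donnachie-Landshoff form sigma = X*s^eps + Y*s^(-eta), with s = W^2 in GeV^2.
    X and Y (in nb) are placeholder normalisations, not the fitted LEP values."""
    s = W * W
    return X * s ** eps + Y * s ** (-eta)

# The Reggeon term falls and the Pomeron term rises with W,
# so the sum has a shallow minimum in between.
low, mid, high = sigma_gg(5.0), sigma_gg(10.0), sigma_gg(100.0)
```

The strong anticorrelation between $`Y`$ and $`\epsilon `$ mentioned below is visible here as well: changing $`Y`$ at low $`W`$ can be compensated by changing $`\epsilon `$ at high $`W`$.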
The results are consistent in the kinematic region where the measurements of both experiments overlap. Some discrepancies seem to exist between the L3 measurements at different $`\sqrt{s}_{\mathrm{ee}}`$, both at low and at high $`W`$. The data in the range $`W<10`$ GeV also have a large influence on the fitted value of $`\epsilon `$ due to the large correlation between the Reggeon term $`Y`$ and $`\epsilon `$.
The main problems of this measurement are the resolution effects in the reconstruction of $`W`$ from the hadronic final state and the small acceptance for events coming from soft diffractive or quasi-elastic processes (e.g. $`\gamma \gamma \to \rho \rho `$), which lead to the large model dependence of the final results. The $`W`$ resolution makes it necessary to use unfolding models which introduce large bin-to-bin correlations. The acceptance for soft diffractive and quasi-elastic processes is only 5-15$`\%`$, depending on the $`W`$ range and on the MC model used . For $`W>20`$ GeV the average polar angle of the pions in $`\gamma \gamma \to \rho \rho `$ events is less than 100 mrad, well below the tracking coverage of the LEP detectors. L3 has therefore measured inclusive $`\rho `$ production in $`\gamma \gamma `$ events for $`3\le W\le 10`$ GeV. In this $`W`$ region large discrepancies between the Monte Carlo models and the data are observed (Fig. 2). At higher $`W`$ OPAL has studied the maximum rapidity gap $`\mathrm{\Delta }\eta _{\mathrm{max}}`$ between any two particles (tracks and calorimeter clusters) in an event. At high $`\mathrm{\Delta }\eta _{\mathrm{max}}`$, where diffractive events are expected to contribute, the data lie above the Monte Carlo models, and PHOJET is closer to the data than PYTHIA.
Soft processes like quasi-elastic scattering ($`\gamma \gamma \to VV`$, where $`V`$ is a vector meson), single-diffractive scattering ($`\gamma \gamma \to VX`$, where $`X`$ is a low mass hadronic system) or double-diffractive scattering ($`\gamma \gamma \to X_1X_2`$) are modelled by both generators. The cross-sections are obtained by fitting a Regge parametrisation to pp, $`\text{p}\overline{\text{p}}`$ and $`\gamma `$p data and by assuming Regge factorisation, i.e. universal couplings of the Pomeron to the hadronic fluctuations of the photon. In both generators the quasi-elastic cross-section is about $`5-6\%`$, the single-diffractive cross-section about $`8-12\%`$ and the double-diffractive cross-section about $`3-4\%`$ of $`\sigma _{\gamma \gamma }`$ for $`W>10`$ GeV. In the $`\gamma \gamma `$ data no clear diffractive signal has yet been observed, and it would be very useful to find experimental variables which could give a better discrimination between diffractive and non-diffractive events at LEP and which could be used to test the Monte Carlo models.
## 3 The hard Pomeron model and the dipole formalism
As the energies and virtualities available at LEP are comparatively moderate it is necessary to take into account diffractive and non-diffractive contributions. There are two main sources of the non-diffractive contributions: Reggeon exchange and the quark box diagram with pointlike couplings of the photon. In Regge language the latter gives rise to a fixed pole in the complex angular momentum plane, so it is not dual to Regge exchange and it is correct to add the two contributions. The box diagram is well defined . The Regge contribution to $`F_2^\gamma `$ can be estimated using the DGLAP evolved pion structure function and naive VMD. This can be extended to both the $`\gamma \gamma `$ and $`\gamma ^{}\gamma ^{}`$ cross-sections assuming factorization . In the dipole approach to the small-$`x`$ structure function of the proton it has become increasingly clear that the nominally perturbative regime still contains some non-perturbative contribution . A specific model has been proposed in terms of two Pomerons . This combines a hard Pomeron with an intercept of about 1.44 together with the soft Pomeron of hadronic physics with an intercept of about 1.08.
An analogous approach is that of in which the hard Pomeron is modelled within the BFKL framework. A similar conclusion is reached, that for diffractive reactions on a hadronic target the purely perturbative regime is not reached until rather large values of $`Q^2`$.
Combining the dipole formalism with the two-Pomeron approach allows predictions to be made for the $`\gamma ^{(*)}\gamma ^{(*)}`$ cross-sections . An appropriate model for soft Pomeron exchange is the eikonal approach to high energy scattering. It is particularly suited to incorporate the non-perturbative aspects of QCD which are treated in the Model of the Stochastic Vacuum , which approximates the infrared part of QCD by a Gaussian stochastic process in the colour field strength. The two-Pomeron approach of has been adapted to the MSV model in and successfully tested for the photo- and electroproduction of vector mesons, and for the proton structure function over a wide range of $`x`$ and $`Q^2`$.
With all parameters determined from hadronic scattering and deep inelastic scattering, in principle the model can then predict all the $`\gamma ^{(*)}\gamma ^{(*)}`$ cross-sections . The only caveat is that the Pomeron contribution to $`\sigma _{\gamma \gamma }`$ is rather sensitive to the effective light quark mass $`m_\mathrm{q}`$ entering the photon wave function, varying as $`1/m_\mathrm{q}^4`$. This is illustrated in Figs. 3a and b which show separately the L3 and OPAL data and the Pomeron model with $`m_\mathrm{q}=210`$ MeV and 200 MeV respectively, together with the other contributions to the total cross-section. These values of $`m_\mathrm{q}`$ are within the range previously determined, and the choice does not affect the predictions away from $`Q^2=0`$. Comparisons of the predictions with $`F_2^\gamma `$ at $`Q^2=1.9`$ and $`15`$ GeV<sup>2</sup> are shown in Figs. 4a and b respectively. They clearly provide a satisfactory description of the data. However comparison with $`\sigma _{\gamma ^{*}\gamma ^{*}}`$ is much less successful, as is evident in Figs. 5a and b, for which $`Q_1^2=Q_2^2=3.5`$ and 14 GeV<sup>2</sup> respectively.
The significance of these results is that a well-tried model of diffraction which successfully describes high-energy hadronic interactions, vector meson photo- and electroproduction, deep inelastic scattering at small $`x`$, the real $`\gamma \gamma `$ cross-section and the structure function of the real photon fails to predict correctly the $`\gamma ^{*}\gamma ^{*}`$ cross-section even at quite modest photon virtualities. This is clearly due to the fact that, uniquely among these various processes, the $`\gamma ^{*}\gamma ^{*}`$ interaction involves two small dipoles. It emphasizes the importance of the $`\gamma ^{*}\gamma ^{*}`$ cross-section as a probe of the dynamics of the perturbative hard Pomeron.
## 4 $`\gamma ^{\mathbf{*}}\gamma ^{\mathbf{*}}`$ scattering in the BFKL formalism
The application of the BFKL formalism to $`\gamma ^{*}\gamma ^{*}`$ scattering has been considered by \[21-26\]. In the BFKL formalism there is a problem at LLO in setting the two mass scales on which the cross-section depends: the mass $`\mu ^2`$ at which the strong coupling $`\alpha _s`$ is evaluated and the mass $`Q_s^2`$ which provides the scale for the high energy logarithms. The result is very sensitive to these parameters, and Brodsky et al showed that changing $`\mu ^2\to 4\mu ^2`$ or $`Q_s^2\to Q_s^2/4`$ alters the predicted cross-section by factors of $`1/4`$ or $`4`$ respectively in a typical LEP experiment. An additional uncertainty is due to the correct treatment of the production of massive charm quarks.
In an attempt to overcome the scale problem, Boonekamp et al take a phenomenological approach to estimate the NNLO effects, making use of a fit to the proton structure function using the QCD dipole picture of BFKL dynamics. This reduces both the size of the BFKL cross-section and its energy dependence. Fig. 6 shows the preliminary OPAL measurement of the double-tag e<sup>+</sup>e<sup>-</sup> cross-section in the range $`5\le Q_1^2,Q_2^2\le 25`$ GeV<sup>2</sup> compared to the LO BFKL calculation and to the HO model of Boonekamp et al. . The cross-section predicted by PHOJET is also shown. The L3 collaboration has extracted the $`\gamma ^{*}\gamma ^{*}`$ cross-section using the photon flux for transverse photons (Fig. 6). The QPM part (box diagram) has been subtracted (the unsubtracted cross-section is shown in Fig. 5). The L3 data are compared to a LO BFKL prediction, to the two-gluon exchange cross-section (here called one-gluon) based on Ref. , and to a fit of the hard Pomeron intercept. A calculation of subleading corrections to the BFKL equation shows that these are significant at LEP energies, and with the inclusion of the soft Pomeron a reasonable description of the L3 data is obtained.
Both experiments observe that the cross-section predicted by PHOJET, which does not contain BFKL effects, is consistent with the data within the large experimental errors, whereas LO BFKL predictions overestimate the $`\gamma ^{*}\gamma ^{*}`$ cross-section by a large factor. However, the large theoretical uncertainties discussed above need to be taken into account.
## 5 Conclusions
In the last year difficulties have emerged with the application of the Altarelli-Parisi equation to the evolution of the proton structure function at small $`x`$ and with the BFKL equation. These are summarised in . One of the questions is whether intrinsically non-perturbative contributions are involved, even at quite large $`Q^2`$, because of the intrinsically non-perturbative target. This complication is in principle avoided in $`\gamma ^{*}\gamma ^{*}`$ reactions, as both photons are dominated at large $`Q^2`$ by the perturbative part of the photon wave function. This may be happening at quite modest values of $`Q^2`$, providing LEP with an excellent opportunity to clarify this question.
no-problem/0001/nucl-th0001013.html | ar5iv | text | # A Calculation of Baryon Diffusion Constant in Hot and Dense Hadronic Matter Based on an Event Generator URASiMA
## 1 Introduction
Physics of high density and high temperature hadronic matter has attracted much attention in the context of both high energy nuclear collisions and cosmology, as well as for its theoretical interest. In recent ultra-relativistic nuclear collisions, though the main purpose is the confirmation of the Quark-Gluon Plasma (QGP) state, the physics of the hot and/or dense hadronic state dominates the system. Hence, thermodynamical properties and transport coefficients of hadronic matter are essentially important for the phenomenological description of the space-time evolution of the produced excited region. In cosmology, in addition to the global evolution of the early universe, baryon diffusion would play an important role in the nucleosynthesis problem.
Because of the highly non-perturbative nature of a hot and dense hadronic state, its thermodynamical properties and transport coefficients have hardly been investigated. Numerical simulation based on lattice gauge theory is a very powerful tool for the analysis of finite temperature QCD. Recently, transport coefficients of hot gluonic matter have been investigated . But even for a modern high-performance super-computer, the lattice QCD evaluation of the transport coefficients of hadronic matter is very difficult, especially below $`T_c`$. Furthermore, at finite density, the present numerical scheme of lattice QCD is almost useless, since inclusion of a chemical potential makes the lattice action complex, although several new approaches have been proposed. In this paper, we evaluate the transport coefficients by using statistical ensembles generated by the Ultra-Relativistic A-A collision simulator based on Multiple Scattering Algorithm (URASiMA). Originally, URASiMA is an event generator for nuclear collision experiments based on the Multi-Chain Model (MCM) of hadrons. Some of us (N. S. and O. M.) have already discussed thermodynamical properties of a hot-dense hadronic state based on molecular dynamical simulations of URASiMA with a periodic condition. Recently, some groups have performed similar calculations with use of a different type of event generator, UrQMD, where Hagedorn-type temperature saturation is reported. We improve URASiMA to recover detailed balance at temperatures below two hundred MeV. As a result, the Hagedorn-type behavior in the temperature disappears. This is the first calculation of the transport coefficients of a hot and dense hadronic matter based on an event generator.
In section 2, we review URASiMA and explain how to make ensembles with finite density and finite temperature. Section 3 is devoted to the calculation for nucleon diffusion constant through the first-kind fluctuation dissipation theorem. Section 4 is concluding remarks.
## 2 URASiMA for Statistical Ensembles
URASiMA is a relativistic event generator based on the hadronic multi-chain model, which aims at describing nucleus-nucleus collisions by the superposition of hadronic collisions. Hadronic 2-body interactions are the fundamental building blocks of interactions in the model, and all parameters are designed to reproduce experimental data of hadron-hadron collisions. Originally, URASiMA contains 2-body processes (2 incident particles and 2 out-going particles), decay processes (1 incident particle and 2 out-going particles), resonances (2 incident particles and 1 out-going particle) and production processes (2 incident particles and n ($``$ 3) out-going particles). The production process is very important for the description of multiple production at high energies. Re-absorption processes (n ($``$ 3) incident particles and 2 out-going particles) were thought to be unimportant in the collisions, since the system quickly expands, and they had not been included in the simulation. However, in the generation of statistical ensembles in equilibrium, detailed balance between processes is essentially important. Lack of re-absorption processes leads to a one-way conversion of energy into particle production rather than heat-up. As a result, artificial temperature saturation occurs.
Therefore, the role of re-absorption processes is very important and we should take it into account. However, exact inclusion of multi-particle re-absorption processes is very difficult. In order to treat them effectively, multi-particle productions and absorptions are treated as 2-body processes including resonances, with succeeding decays and/or preceding formations of the resonances. Here two-body decay and formation of resonances are assumed. For example, $`NN\to NN\pi `$ is described as $`NN\to NR`$ followed by the decay $`R\to N\pi `$, where $`R`$ denotes a resonance. The reverse process is then easily taken into account. In this approach, all the known inelastic cross-sections for baryon-baryon interactions up to $`\sqrt{s}<3`$ GeV are reproduced.
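The need for the reverse (absorption) channels can be illustrated with a toy rate equation, not part of URASiMA itself; the energy difference, temperature and rates below are invented for illustration. Only when the reverse process is present with the detailed-balance ratio do the occupations relax to the Boltzmann ratio instead of draining one way:

```python
import math

# One reversible channel A <-> B with an energy cost dE for A -> B.
# Detailed balance fixes rate(A->B)/rate(B->A) = exp(-dE/T).
dE, T = 0.3, 0.15                  # illustrative units (e.g. GeV)
k_ab = math.exp(-dE / T)           # suppressed "production" step
k_ba = 1.0                         # "re-absorption" step; setting k_ba = 0 would drain A

p_a, p_b, dt = 1.0, 0.0, 1e-3
for _ in range(200_000):           # integrate dp_a/dt = -k_ab*p_a + k_ba*p_b
    flow = k_ab * p_a - k_ba * p_b
    p_a -= dt * flow
    p_b += dt * flow

boltzmann = math.exp(-dE / T)      # stationary ratio p_b/p_a reproduces this factor
```

The same logic, applied channel by channel, is what makes the ensembles below genuinely thermal.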
For higher energies, $`\sqrt{s}>3`$ GeV, in order to give the appropriate total cross-section, we need to take the direct production process into account. Only at this point is detailed balance broken in our simulation; nevertheless, if the temperature is much smaller than 3 GeV, the influence is negligibly small. For example, if the temperature of the system is 100 MeV, the occurrence of such a process is suppressed by a factor of $`\text{exp}(-30)`$, and thus the time scale needed to detect a violation of detailed balance is very much longer than the hadronic scale.
In order to obtain an equilibrium state, we put the system in a box and impose a periodic condition on URASiMA as the space-like boundary condition. Initial distributions of particles are given by a uniform random distribution of baryons in phase space. Total energy and baryon number in the box are fixed at the initial time and conserved throughout the simulation. Though the initial particles are only baryons, many mesons are produced through interactions. After a thermalization period of about 100 fm/c, the system appears stationary. In order to confirm the achievement of equilibrium, we calculate energy distributions and particle numbers. The slope parameters of the energy distributions of all particle species become the same value within the statistical accuracy (Fig. 1). Thus, we may call this value the temperature of the system. The fact that the numbers of species saturate indicates the achievement of chemical equilibrium (Fig. 2). Running URASiMA many times with the same total energy and total baryon number in the box and taking the stationary configurations later than $`t=150`$ fm/c, we obtain statistical ensembles with fixed temperature and fixed baryon number (chemical potential).
[fig. 1]
[fig. 2]
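Extracting the temperature from the slope parameter of Fig. 1 amounts to a log-linear fit of a Boltzmann-like energy distribution. A minimal sketch with noise-free synthetic data and a hypothetical temperature (the value 150 MeV is an assumption for illustration only):

```python
import math

T_true = 0.150                                   # hypothetical temperature in GeV
energies = [0.1 * i for i in range(1, 21)]       # GeV bins
counts = [1.0e6 * math.exp(-E / T_true) for E in energies]

# ln N(E) = const - E/T, so the fitted slope gives -1/T
xs, ys = energies, [math.log(c) for c in counts]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
T_fit = -1.0 / slope
```

With real, noisy histograms the same fit would of course carry statistical errors; the point is only that a common slope across species defines a common temperature.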
By using the ensembles obtained through the above-mentioned procedure, we can evaluate thermodynamical quantities and the equation of state.
## 3 Diffusion Constant
According to Kubo's linear response theory, the correlation of the currents gives the admittance of the system (first fluctuation-dissipation theorem) and, equivalently, the random-force correlation gives the impedance (second fluctuation-dissipation theorem) . As the simplest example, we here focus our discussion on the diffusion constant. The first fluctuation-dissipation theorem tells us that the diffusion constant $`D`$ is given by the current (velocity) correlation,
$$D=\frac{1}{3}\int _0^{\mathrm{\infty }}<\mathbf{v}(t)\cdot \mathbf{v}(t+t^{\prime })>dt^{\prime }.$$
(1)
The average $`<\mathrm{\cdots }>`$ is given by,
$$<\mathrm{\cdots }>=\frac{1}{\text{number of ensembles}}\sum _{\text{ensemble}}\frac{1}{\text{number of particles}}\sum _{\text{particle}}\mathrm{\cdots }.$$
(2)
If the correlation decreases exponentially, i.e.,
$$<\mathbf{v}(t)\cdot \mathbf{v}(t+t^{\prime })>\propto \mathrm{exp}(-\frac{t^{\prime }}{\tau }),$$
(3)
with $`\tau `$ being the relaxation time, the diffusion constant can be rewritten in the simple form,
$$D=\frac{1}{3}<\mathbf{v}(t)\cdot \mathbf{v}(t)>\tau .$$
(4)
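Eqs. (1)-(4) can be checked numerically: for an exactly exponential correlation, the Green-Kubo integral reproduces the closed form $`<\mathbf{v}\cdot \mathbf{v}>\tau /3`$. A minimal sketch with illustrative (assumed) values of the equal-time correlation and relaxation time:

```python
import math

C0, tau = 0.09, 2.5            # <v.v> and relaxation time tau, illustrative values
dt, tmax = 1e-3, 50.0          # integration step and cutoff (tmax >> tau)

def C(tp):                     # exponentially damped velocity correlation, cf. eq. (3)
    return C0 * math.exp(-tp / tau)

n = int(tmax / dt)             # trapezoidal rule for the integral in eq. (1)
integral = 0.5 * dt * (C(0.0) + C(tmax)) + dt * sum(C(i * dt) for i in range(1, n))

D_numeric = integral / 3.0     # eq. (1)
D_closed = C0 * tau / 3.0      # eq. (4)
```

In the actual analysis $`C(t^{\prime })`$ is of course measured from the ensembles rather than assumed, and the exponential form is a fit to Fig. 3.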
Usually, the diffusion equation is given as,
$$\frac{\partial }{\partial t}f(t,\mathbf{x})=D\nabla ^2f(t,\mathbf{x}),$$
(5)
and the diffusion constant $`D`$ has dimension of $`[L^2/T]`$. Because of the relativistic nature of our system, we should use $`\mathbf{\beta }=\frac{\mathbf{v}}{c}=\frac{\mathbf{p}}{E}`$ instead of $`\mathbf{v}`$ in eq. (1), and $`D`$ is obtained by,
$`D`$ $`=`$ $`{\displaystyle \frac{1}{3}}{\displaystyle \int _0^{\mathrm{\infty }}}<\mathbf{\beta }(t)\cdot \mathbf{\beta }(t+t^{\prime })>dt^{\prime }c^2.`$ (6)
$`=`$ $`{\displaystyle \frac{1}{3}}<\mathbf{\beta }(t)\cdot \mathbf{\beta }(t)>c^2\tau .`$ (7)
$`=`$ $`{\displaystyle \frac{1}{3}}<\left({\displaystyle \frac{\mathbf{p}(t)}{E(t)}}\right)\cdot \left({\displaystyle \frac{\mathbf{p}(t)}{E(t)}}\right)>c^2\tau `$ (8)
with $`c`$ being the velocity of light. Figure 3 shows the correlation function of the velocity of baryons. The figure indicates that exponential damping is a very good approximation. Figure 4 displays our results for the baryon diffusion constant in a hot and dense hadronic matter.
[fig. 3]
[fig. 4]
Our results show a clear dependence on the baryon number density, while the dependence on the energy density is mild. This result indicates the importance of baryon-baryon collision processes for the random walk of the baryons, and thus a non-linear diffusion process of baryons occurs. In this sense, we can state that the baryon number density in our system is still high. In the inhomogeneous big-bang nucleosynthesis scenario, baryon diffusion plays an important role. The leading part in the scenario is played by the difference between proton diffusion and neutron diffusion. In our simulation, the strong interaction dominates the system and we assume charge independence of the strong interaction; hence, we cannot discuss the difference between proton and neutron. However, the diffusion constant of baryons obtained in our simulation can put some restrictions on the diffusion constants of both protons and neutrons.
From the diffusion constant, we can calculate the charge conductivity. Figure 5 shows the baryon number conductivity $`\sigma _\mathrm{B}`$,
$$\sigma _\mathrm{B}=\frac{n_\mathrm{B}}{k_\mathrm{B}T}D,$$
(9)
where $`n_\mathrm{B}`$ is the baryon number density, $`T`$ is the temperature and $`k_\mathrm{B}`$ is the Boltzmann constant (set to unity throughout this paper), respectively.
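In natural units ($`\mathrm{}=c=k_\mathrm{B}=1`$), eq. (9) is a one-line computation. A sketch with purely illustrative input values for $`n_\mathrm{B}`$, $`T`$ and $`D`$ (assumptions, not results of this paper):

```python
hbar_c = 0.1973        # GeV fm, conversion constant
n_B = 0.30             # baryon density in fm^-3 (illustrative)
T = 0.150              # temperature in GeV (illustrative)
D = 1.2                # diffusion constant in fm, i.e. fm^2 per fm/c (illustrative)

T_inv_fm = T / hbar_c              # temperature expressed in fm^-1
sigma_B = n_B * D / T_inv_fm       # eq. (9); units: fm^-3 * fm * fm = fm^-1
```

The only non-trivial step is converting $`T`$ from GeV to fm<sup>-1</sup> so that all quantities share one unit system.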
[fig. 5]
Therefore, if we wish, we can discuss Joule heat and entropy production in a baryonic circuit based on the above baryon number conductivity.
Because the fundamental systems in URASiMA are high energy hadronic collisions, we usually use relativistic notation. However, the diffusion equation (5) is not Lorentz covariant and is available only in a special frame, i.e., the local rest frame of the thermal medium. For a fully relativistic description of the space-time evolution of a hot and dense matter, we need to establish the relativistic Navier-Stokes equation. Taking correlations of the appropriate currents, we can easily evaluate viscosities and the heat conductivity in the same manner .
## 4 Concluding Remarks
Making use of statistical ensembles obtained by the event generator URASiMA, we evaluate diffusion constants of baryons in hot and dense hadronic matter. Our results show a strong dependence on baryon number density and a weak dependence on temperature. The temperature in our simulation is limited to a small range, i.e., from 100 MeV to 200 MeV, and this fact can be one of the reasons why the temperature dependence of the diffusion constant is not clear. The strong baryon number density dependence indicates that, for the baryon diffusion process, baryons play a more important role than light mesons. In this sense our simulation corresponds to the high density region, and a non-linear diffusion process occurs. Calculation of the diffusion constant is the simplest example of the first fluctuation-dissipation theorem. In principle, taking correlations of appropriate currents, i.e., energy flow, baryon number current, stress tensor, etc., we can evaluate any kind of transport coefficient. However, in relativistic transport theory there exist several delicate points; e.g., the relativistic nature makes the distinction between mass and energy meaningless and, as a result, the meanings of the "flow" of the fluid and of the "heat flow" become ambiguous. The choice of the current depends on the macroscopic phenomenological equations which contain the transport coefficients. Once we establish phenomenological equations for high temperature and high density hadronic matter, we can evaluate the appropriate transport coefficients in the same manner. A detailed discussion will be reported in our forthcoming paper.
Acknowledgment
The authors would like to thank Prof. T. Kunihiro for discussions. This work is supported by Grant-in-Aid for Scientific Research number 11440080 by Monbusho. Calculations have been done at the Institute for Nonlinear Sciences and Applied Mathematics, Hiroshima University.
no-problem/0001/cond-mat0001165.html | ar5iv | text | # On the universality classes of driven lattice gases
## Abstract
Motivated by some recent criticisms of our alternative Langevin equation for driven lattice gases (DLG) under an infinitely large driving field, we revisit the derivation of such an equation and test its validity. As a result, an additional term, coming from a careful consideration of entropic contributions, is added to the equation. This term heals all the recently reported generic infrared singularities. The emerging equation is then identical to that describing randomly driven diffusive systems. This fact confirms our claim that the infinite driving limit is singular, and that the main relevant ingredient determining the critical behavior of the DLG in this limit is the anisotropy and not the presence of a current. Different aspects of our picture are discussed, and it is concluded that it constitutes a very plausible scenario to rationalize the critical behavior of the DLG and variants of it.
PACS numbers: 64.60.-i, 05.70.Fh
The driven lattice gas (DLG) is a simple nontrivial extension of the kinetic Ising model, and constitutes certainly a main archetype of out-of-equilibrium system. Fully understanding the critical properties of the DLG would be a fundamental milestone on the way to rationalizing the fast developing field of nonequilibrium phase transitions. The DLG is defined as a half filled, d-dimensional kinetic Ising model with conserved dynamics, in which transitions in the direction (against the direction) of an external field, $`E`$, are favored (unfavored) . The external field induces two main nonequilibrium effects: the presence of a net current of particles in its direction, and anisotropic system configurations. At high temperatures the system is in a disordered phase, while below a certain critical point it orders by segregating into high and low density aligned-with-the-field stripes.
In order to analyze the DLG critical nature, and determine its degree of universality, a Langevin equation intended to capture the relevant physics at criticality was proposed and renormalized more than a decade ago . This elegant theory, the driven diffusive system (DDS) seems to capture the main symmetries and conservation laws of the discrete DLG (including a current term as the most relevant nonlinearity), and is therefore a suitable and very reasonable candidate to be the canonical continuous model, representative of the DLG universality class.
Unfortunately, the most emblematic prediction coming from the analysis of the DDS equation, namely, the mean field behavior of the order parameter critical exponent ($`\beta =1/2`$), has not been compellingly verified in any Monte Carlo simulation of the DLG in spite of the huge computational effort devoted to test it. In particular, systematic deviations from scaling are observed both in $`d=2`$ and in $`d=3`$ if data collapse is attempted using $`\beta =1/2`$ . On the other hand, different Monte Carlo numerical simulations (performed in different geometries and using different finite size scaling ansatzes) lead systematically to a value of $`\beta `$ around $`0.3`$ with error bars apparently excluding $`\beta =1/2`$ (we refer the interested reader to for a review of simulation analysis). This is a main indication that the DDS Langevin equation does not properly describe the DLG at criticality.
Moreover, there are some other hints suggesting strongly that the discrepancies between the predictions of the standard theory and Monte Carlo results are more fundamental than a simple numerical difference in $`\beta `$. In particular, the intuition developed from Monte Carlo simulations of the DLG and variants of it (performed under large external driving fields) suggests that, contrarily to what happens for the DDS equation, it is the anisotropy and not the presence of a current that is the most relevant ingredient for criticality. For instance, in a modification of the DLG in which anisotropy is included by means other than a current , the scaling behavior at criticality remains unaltered upon the switching on of an infinite driving (see the appendix and ). Other compelling evidence supporting this hypothesis can be found in .
In order to shed some light on this puzzling situation and reconcile theory with numerics, different possible scenarios have been explored; but so far no satisfactory clarification has been reached. Within this context, we have recently revisited the time-honored DDS equation and questioned its general validity . In particular, we have tackled the task of constructing a coarse-graining procedure in a more detailed way such that, starting from a master equation representing the DLG, it would give as output a continuous Langevin equation. This approach permits us to keep track of microscopic details that could eventually be overlooked when writing down a Langevin equation respecting naively the microscopic symmetries and conservation constraints . This approach has given rise to a rather unexpected and quite interesting output: The limit of infinitely large driving (i.e. the limit in which attempted jumps in the direction of the field are performed with probability one and jumps against $`E`$ are strictly forbidden) is singular . Let us stress that, in order to enhance nonequilibrium effects, most of the available computer studies are performed in this limit. The main results derived so far using our approach are:
* For vanishing values of the driving field it leads to the standard equilibrium model B , capturing the relevant physics of the kinetic Ising model with conserved dynamics.
* For nonvanishing, but finite driving fields we reproduce the standard DDS Langevin equation .
* In the limit of infinitely large driving, where the dependence of jumps in the direction of the field on energetics is replaced by a zero-one (all-or-nothing) condition, a different Langevin equation emerges. This new equation has the main property of not including any relevant term coupling $`E`$ to the density field $`\varphi `$ , and the presence of anisotropy is its main relevant ingredient.
The new Langevin equation for the DLG under infinitely large driving proposed in and renormalized in has some important virtues to be discussed afterwards, but seems also to exhibit some pathologies, as recently pointed out by Caracciolo et al. and also by Schmittmann et al. . In what follows we show that such anomalies can be healed in a rather natural way, and do not disprove at all the general validity of our new approach (as could be inferred from ).
Let us now present the Langevin equation derived in for the infinite driving limit, report on its deficiencies, and discuss the way to heal them. The equation reads:
$`\partial _t\varphi ={\displaystyle \frac{e_0}{2}}\left[-\mathrm{\Delta }_{\perp }\mathrm{\Delta }_{\parallel }\varphi -\mathrm{\Delta }_{\perp }^2\varphi +\tau \mathrm{\Delta }_{\perp }\varphi +{\displaystyle \frac{g}{3!}}\mathrm{\Delta }_{\perp }\varphi ^3\right]`$ (1)
$`+\sqrt{e_0}\nabla _{\parallel }\xi _{\parallel }+\sqrt{{\displaystyle \frac{e_0}{2}}}\nabla _{\perp }\xi _{\perp },`$ (2)
where $`\nabla _{\parallel }`$ ($`\nabla _{\perp }`$) is the gradient operator in the direction parallel (perpendicular) to the driving field, and $`\xi `$ is a conserved Gaussian white noise . This equation is analogous to a model B in the direction(s) perpendicular to the field, coupled to a simple random diffusion mechanism in the parallel direction. The origin of all the difficulties pointed out in can be traced back to the following property: Defining the total density for each value of $`r_{\parallel }`$, $`\mathrm{\Upsilon }(r_{\parallel },t)\equiv \int d^{d-1}r_{\perp }\varphi (r_{\parallel },\mathbf{r}_{\perp })`$, it is not difficult to see (after averaging over the noise) that $`\mathrm{\Upsilon }(r_{\parallel })`$ is a conserved quantity for all values of $`r_{\parallel }`$ . Observe also that $`\mathrm{\Upsilon }(r_{\parallel })`$ is nothing but the zero Fourier mode of the density at each column. These (spurious) conservation laws, absent in the DLG, are at the origin of the infrared singularities appearing in Eq. (1) .
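This spurious conservation law is easy to check numerically: in the deterministic part of Eq. (1) every term is a perpendicular Laplacian of something, so on a periodic grid each transverse-column total stays constant under the flow. A minimal sketch, with invented grid size, couplings and time step; the conclusion does not depend on these choices or on sign conventions:

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.normal(size=(16, 16))             # axis 0: parallel, axis 1: perpendicular

def lap(f, axis):
    """Discrete Laplacian with periodic boundaries along one axis."""
    return np.roll(f, 1, axis=axis) + np.roll(f, -1, axis=axis) - 2.0 * f

e0, tau, g, dt = 1.0, 0.5, 1.0, 1e-3        # illustrative couplings and step
Y0 = phi.sum(axis=1).copy()                 # column totals Upsilon(r_parallel)

for _ in range(2000):                       # Euler steps, deterministic part only
    inner = -lap(phi, 0) - lap(phi, 1) + tau * phi + (g / 6.0) * phi ** 3
    phi += dt * (e0 / 2.0) * lap(inner, axis=1)   # Delta_perp acting on everything

# each perpendicular sum of a periodic Laplacian vanishes identically,
# so every Upsilon(r_parallel) is frozen at its initial value
```

Switching the conserved parallel noise back on lifts this property for individual realizations but not for noise averages, which is exactly the pathology discussed in the text.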
In order to investigate the causes of this deficiency in our Langevin equation and eventually overcome the problem of the extra conservation laws and associated infrared divergences, we have re-analyzed our derivation of Eq. (1) in . One can easily see that the transition rates in the microscopic master equation in were written as depending on the variations of two additive contributions: the free energy functional (the usual Ginzburg-Landau free energy) and the external driving-field contribution. The transition rates, written in that way, saturate to zero or one in the field direction, in the limit of infinite driving. This saturation erases any further dependence on the free energy density (which includes both entropic and energetic contributions). On the contrary, in the DLG it is only the dependence on the Ising energetics that becomes negligible in the limit of large driving fields. In a coarse grained description we should therefore separate energetic from entropic terms. With this guiding idea, we have reconsidered our derivation of Eq. (1) and rewritten the transition rates in as the product of two contributions: one controlling the energetics and the other one the entropic part . By performing a calculation analogous to that in , but including the transition rates written in this modified way, it is a matter of algebra to see that a new term (missing in ) emerges: $`\rho \mathrm{\Delta }_{\parallel }\varphi (\mathbf{x},t)`$ .
It is straightforward to verify that apart from properly keeping track of entropic contributions, this extra (mass) term heals all of the aforementioned problems in Eq. (1): no spurious conservation laws are involved and generic infrared singularities disappear.
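The saturation of the transition rates described above can be illustrated with a Metropolis-like rate; the functional form below is an assumption made for the sake of the demonstration, not the rates of the original derivation. For jumps along an infinitely strong field the rate sticks at 0 or 1 and all dependence on the configurational energy change $`\mathrm{\Delta }H`$ disappears:

```python
from math import exp

def rate(dH, E, dx):
    """Metropolis-like rate for a particle jump with displacement dx
    along the field E and configurational energy change dH (illustrative)."""
    return min(1.0, exp(-(dH + E * dx)))

# jumps with the field (dx = -1): rate saturates at 1 whatever dH is
assert rate(-2.0, 50.0, -1) == rate(+5.0, 50.0, -1) == 1.0
# jumps against the field (dx = +1): rate is essentially 0 whatever dH is
assert rate(-2.0, 50.0, +1) < 1e-15 and rate(+5.0, 50.0, +1) < 1e-15
# transverse jumps (dx = 0) still feel the energetics
assert rate(-2.0, 50.0, 0) == 1.0 and rate(+2.0, 50.0, 0) < 1.0
```

Only the transverse moves retain any memory of the energetics, which is why the coarse-grained dynamics along the field reduces to plain diffusion in the infinite-driving limit.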
Let us now discuss how this new additional term affects the results presented in . Performing a naive scaling analysis, one sees that $`x_{\parallel }\sim x_{\perp }^2`$, and upon elimination of naively irrelevant terms and absorbing $`e_0`$ into the time scale, one obtains our final result: the critical Langevin theory under infinitely large driving
$$\partial _t\varphi =\rho \mathrm{\Delta }_{\parallel }\varphi -\mathrm{\Delta }_{\perp }^2\varphi +\tau \mathrm{\Delta }_{\perp }\varphi +\frac{g}{3!}\mathrm{\Delta }_{\perp }\varphi ^3+\frac{2}{\sqrt{e_0}}\nabla _{\perp }\cdot \boldsymbol{\xi }_{\perp },$$
(3)
that we call the anisotropic diffusive system (ADS). This turns out to be a well known Langevin equation: the continuous representation of the randomly driven DLG, i.e. a DLG in which the external field changes sign randomly in an unbiased fashion. The main difference between this theory and the DDS is that the ADS does not include an overall current. The current term $`E\partial _{\parallel }\varphi ^2`$ appearing in the DDS (and constituting its most relevant nonlinearity) is absent here. In the random DLG, such a term cannot appear for symmetry reasons, while in the infinite driving case discussed in this paper it is the saturation of the transition rates in the field direction that prevents such a current term from appearing.
The cubic operator and the Laplacian term in the parallel direction in Eq. (3) are both marginal at the critical dimension $`d=3`$. The results up to first order in an epsilon expansion of Eq. (3) around $`d=3`$ are: $`\nu _{\perp }=1/2+\epsilon /12`$ and $`\beta =1/2-\epsilon /6`$. Observe that in $`d=2`$ one obtains $`\beta =1/3`$ (slightly modified by two-loop corrections ) in remarkably good agreement with Monte Carlo results. For instance (see table and ): the best available Monte Carlo result for the random DLG is $`\beta \approx 0.33`$ ; for the infinitely driven DLG $`\beta \approx 0.30\pm 0.05`$ ; and $`\beta \approx 0.34`$ for the closely related model studied in , called ALGA, and argued to belong to the same universality class (see appendix).
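Taking the first-order expressions $`\nu _{\perp }=1/2+\epsilon /12`$ and $`\beta =1/2-\epsilon /6`$ at face value, with $`\epsilon =3-d`$, the quoted $`d=2`$ numbers follow from elementary arithmetic; a quick exact check:

```python
from fractions import Fraction

def nu_perp(eps):   # first-order epsilon expansion around d = 3 (as quoted above)
    return Fraction(1, 2) + Fraction(eps, 12)

def beta(eps):
    return Fraction(1, 2) - Fraction(eps, 6)

assert beta(0) == Fraction(1, 2)       # mean-field value at the critical dimension d = 3
assert beta(1) == Fraction(1, 3)       # d = 2, close to the Monte Carlo value beta ~ 0.33
assert nu_perp(1) == Fraction(7, 12)
```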
Some further comments on the validity of our approach follow: (i) Our complete theory (including constant and irrelevant terms) does have a net current, though it does not enter the final Langevin equation. (ii) An infinitely large field is, in practice, any for which transitions against the field never occur. Given that all commonly used transition rates depend on $`E`$ through exponential functions, field values much larger than unity can be considered infinite for all practical purposes in Monte Carlo experiments. For smaller fields, we expect crossover effects from the infinite-field regime (ruled by the ADS) to the finite-driving standard DDS behavior to occur. These crossovers could obscure the numerical observation of the DDS mean-field exponent $`\beta `$ for large but finite driving fields. (iii) The introduction of the new term in the parallel direction heals all the possible problems in relation to infrared singularities, extra conservation laws, and anomalies in the structure function. In particular, the structure function presents a discontinuity singularity, as happens in the DLG. (iv) Given the absence of any relevant current term in Eq. (3), the critical theory has "up-down" symmetry ($`\varphi \to -\varphi `$). This symmetry, in principle, is absent in the microscopic model, as the presence of nonvanishing three-point correlation functions seems to indicate. However, it is not clear whether such correlations are relevant at criticality or not. As an indication that they could in fact be irrelevant, we discuss here the problem of triangular anisotropies: in both the DLG and the DDS, droplets of the minority phase (if any) develop triangular shapes, closely related to the existence of nonvanishing three-point correlations. However, the triangles orient in opposite directions in the microscopic DLG and in the continuous DDS. This difference seems not to be universal, as shown by recent Monte Carlo studies, i.e.
it depends on microscopic details and can be modified by changing them, both in the DLG and in the DDS. This fact supports the idea that nonvanishing three-point correlation functions might not be a relevant ingredient for a description of the DLG at criticality. More significantly, simulations show that for large enough driving fields the triangular anisotropy is suppressed (see ), providing an indication that the up-down symmetry is restored in the infinite-driving limit. This constitutes, we believe, another strong backing of our picture.
In summary, we have discussed the plausibility of the alternative field theoretical approaches to driven lattice gases under the effect of an infinitely large external driving field. Some deficiencies recently pointed out are overcome by introducing an extra Laplacian term in the direction of the field in the Langevin equation first proposed in . This new term, coming from a proper consideration of entropic contributions, had been overlooked in previous papers. Our approach leads to the following global picture: (i) For $`E=0`$, model B reproduces the equilibrium critical properties of isotropic diffusive systems. (ii) For finite driving fields, the standard DDS Langevin equation, including a current term, should properly describe the long-wavelength properties around the critical point. (iii) The limit of infinitely fast driving is singular: the current is irrelevant, and the anisotropy becomes the main relevant property. The leading critical properties in this case are expected to be described by the ADS, Eq. (3). The reason is that in the presence of infinite driving the transition rates saturate to 1 (0) for allowed (forbidden) transitions in the driving direction, and no further trace of the coupling between $`E`$ and the density field survives in the resulting Langevin equation. We want to stress at this point that this property was not obvious a priori, but emerges as a natural output of our model-building strategy.
The proposed Langevin equation for large external field provides a quite plausible scenario shedding some light on a difficult problem. In particular, it justifies the observed lack of differences (for large fields) in simulations in systems with and without a current, and provides a likely justification of why the standard prediction $`\beta =1/2`$ is not confirmed in Monte Carlo simulations, and instead a value $`\beta 0.33`$ is observed.
In order to test numerically the picture presented in this paper, it would be highly desirable to perform extensive simulations for finite driving field ($`E1`$), and study whether differences with respect to the available Monte Carlo results for large fields appear. It would also be interesting to improve the finite size scaling analysis following the strategy used in .
Appendix.
As evidence aimed at conveying the intuition that, for infinitely large driving, the current is not relevant at criticality, let us briefly discuss in this appendix a rather compelling Monte Carlo observation. It corresponds to a variation of the DLG, named ALGA (anisotropic lattice gas automaton); see for a detailed definition. This model is by definition at the limit of infinite driving: jumps in the anisotropy direction are performed randomly, without attending to energetic considerations. Simulations are performed both in the presence of an overall current (case $`p\ne 1/2`$ in ) and in the absence of it ($`p=1/2`$); the curves for the order parameter versus the distance to the critical point are indistinguishable in the cases with and without a current (figure 3 in is particularly illuminating). It could be argued that the details of this modified model render it not completely equivalent to the original DLG. However, we do not think these microscopic differences have any relevance at a coarse-grained level. In fact, we expect this model to be represented by Eq. (3): in one direction particles tend to stay together, and it is natural to assume that their coarse-grained behavior is controlled by a model B in this direction. In the other direction, jumps occur regardless of energetics and, therefore, the dynamics becomes purely diffusive. With these two ingredients we recover the ADS, Eq. (3), as the Langevin equation for the ALGA. As further evidence supporting this hypothesis, let us mention that the measured $`\beta `$ exponent in the ALGA is $`\beta \approx 0.34`$ (again very close to the value $`\beta \approx 1/3`$) in both cases: with and without a current.
ACKNOWLEDGMENTS- It is a pleasure to acknowledge J. Marro and J. L. Lebowitz for useful discussions and encouragement. We thank with special gratitude S. Caracciolo and collaborators for sharing with us extremely useful and valuable unpublished results. This work has been partially supported by the European Network Contract ERBFMRXCT980183 and by the Ministerio de Educación under project DGESEIC, PB97-0842.
# Astrometric Resolution of Severely Degenerate Binary Microlensing Events
## 1 Introduction
Caustic-crossing binary microlensing events are potentially very useful, but their interpretation can be problematic. If there is good photometric coverage of a caustic crossing, one can measure the limb darkening of the source (Albrow et al. 1999b; Afonso et al. 2000; Albrow et al. 2000), and if this is combined with spectroscopy, one can resolve the sourceโs spectral features as a function of angular position (Gaudi & Gould 1999). If there is sufficiently good coverage of the event to obtain a unique binary solution, then one determines the binary mass ratio and in some cases other information about the binary (Albrow et al. 2000). If this information can be obtained for a number of events, then one can infer statistical properties about the binaries in the lens population as a whole (Gaudi & Sackett 2000).
In some cases, it is possible to measure the proper motion $`\mu `$ of the lens relative to the observer-source line of sight. This requires three pieces of information from the photometric light curve. First, one must measure the time it takes the source star to cross the caustic, $`\mathrm{\Delta }t`$, which can be done from photometry of the caustic crossing alone (e.g. Albrow et al. 1999a,c). Second, one must measure the angle $`\varphi `$ of this crossing, which requires a unique binary solution for the event as a whole. Third, one must determine the angular size $`\theta _{\ast }`$ of the source from its color and apparent magnitude using an empirically calibrated relation (van Belle 1999). The color is quite easily measured but the apparent magnitude again requires a unique binary solution. The proper motion is then $`\mu =\theta _{\ast }/(\mathrm{\Delta }t|\mathrm{sin}\varphi |)`$. Five groups (Afonso et al. 1998; Udalski et al. 1998; Alcock et al. 1999; Albrow et al. 1999; Rhie et al. 1999) combined observations from 8 observatories to measure the proper motion of MACHO 98-SMC-1 and so proved beyond reasonable doubt that the lens was in the SMC and not in the Galactic halo (Afonso et al. 2000).
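Once the three ingredients are in hand, the proper-motion estimate is elementary arithmetic; the numbers in the example below are purely illustrative, not the MACHO 98-SMC-1 values:

```python
import math

def proper_motion(theta_star_uas, dt_days, phi_deg):
    """mu = theta_* / (Delta_t |sin phi|), in micro-arcsec per day."""
    return theta_star_uas / (dt_days * abs(math.sin(math.radians(phi_deg))))

# e.g. a 1 uas source crossing the caustic in 0.2 d at a 30 degree angle:
mu = proper_motion(1.0, 0.2, 30.0)
assert abs(mu - 10.0) < 1e-9   # 1 / (0.2 * 0.5) = 10 uas/day
```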
As can be seen from this brief summary, many applications of binary lenses require that one obtain a unique binary-lens solution to the event. However, Dominik (1999a) presented multiple solutions to a number of previously published events and argued that degeneracies of this sort may be generic.
Han, Chun, & Chang (1999) therefore investigated whether such degeneracies can be broken astrometrically. In a binary lensing event there are three or five images, depending on whether the source is outside or inside the caustic. The combined light of these three or five images makes up the photometric light curve, which is the only effect that has been observed to date. The images are separated by of order the Einstein radius, $`\theta _\mathrm{E}`$, which is a few hundred $`\mu `$as for typical events. Hence, the images cannot be separately resolved with any existing or planned instrument. However, the image centroid deviates from the source position by a vector amount $`\delta \boldsymbol{\theta }_c=(\delta \theta _{c,x},\delta \theta _{c,y})`$, which is also of order $`\theta _\mathrm{E}`$. The Space Interferometry Mission (SIM) with its planned $`4\mu `$as precision will therefore have the capability to measure this deviation, and several ground-based interferometers may also achieve the necessary precision.
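For orientation, the "of order $`\theta _\mathrm{E}`$" claim can be checked against the single point-lens case (not the binary geometry discussed here), for which the centroid shift has the standard closed form $`\delta \theta _c=u\theta _\mathrm{E}/(u^2+2)`$, with $`u`$ the lens-source separation in Einstein radii; the numerical value of $`\theta _\mathrm{E}`$ below is an illustrative choice:

```python
import math

def centroid_shift(u, theta_E):
    """Image-centroid deviation from the source for a single point lens,
    delta_theta_c = u * theta_E / (u**2 + 2)  (standard point-lens result)."""
    return u * theta_E / (u * u + 2.0)

theta_E = 300.0  # micro-arcsec, a typical bulge value (illustrative)
peak = max(centroid_shift(0.01 * k, theta_E) for k in range(1, 500))
# the shift peaks at u = sqrt(2), with value theta_E / (2*sqrt(2)) ~ 0.354 theta_E
assert abs(peak - theta_E / (2.0 * math.sqrt(2.0))) < 0.1
```

So a $`300\mu `$as Einstein radius yields shifts of up to roughly $`106\mu `$as, comfortably above a $`4\mu `$as astrometric precision.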
Han et al. (1999) explicitly showed that the four solutions that Dominik (1999a) presented for OGLE-7 (Udalski et al. 1994), which all fit the observed light curve extremely well, had radically different astrometric trajectories. Hence, had there been astrometric data, this degeneracy could have easily been broken.
Subsequently, Albrow et al. (1999c) developed a general method for finding solutions in events with well-covered caustic crossings, and Afonso et al. (2000) applied this method to MACHO 98-SMC-1 and found two solutions that fit the full data set equally well. See Figures 1 and 2, below. In spite of Dominik's (1999a) work showing that degeneracies in earlier light curves were common, the degeneracy in MACHO 98-SMC-1 came as something of a surprise because the 5-collaboration data set was far superior to those of the events investigated by Dominik (1999a).
However, simultaneously with Afonso et al.'s (2000) empirical discovery of a severe degeneracy in MACHO 98-SMC-1, Dominik (1999b) found an entire class of severe degeneracies between "close" and "wide" binaries, i.e., binaries with projected angular separations small and large compared to $`\theta _\mathrm{E}`$. Indeed, MACHO 98-SMC-1 turns out to be a particular case of this class.
The disturbing thing about the Dominik (1999b) close/wide degeneracies, and what makes them so difficult to break, is that they derive from a degeneracy in the lens equation itself. That is, while some of the degeneracies found by Dominik (1999a) may be regarded as due to "accidental" similarities between different light curves (the sum of the magnification of the three or five images), the close/wide degeneracies are rooted in the similarities of the individual images. This immediately raises the question of whether it is possible to break these degeneracies at all, even using astrometric data as Han et al. (1999) showed could be done for the earlier (Dominik 1999a) degeneracies. We address that question here.
## 2 Astrometric Resolution
To investigate this question, we examine the astrometric behavior of the two solutions<sup>1</sup><sup>1</sup>1Afonso et al. (2000) actually found two wide solutions, a static and a rotating one. The static solution was completely consistent with the data in the neighborhood of the observable event, but was by chance ruled out by early data about 500 days before the event. For simplicity, and because we are trying to illustrate a general principle rather than specifically investigate MACHO 98-SMC-1, we will use the static wide solution and therefore will ignore the early data. This will allow us to compare two static systems, thus ensuring that differences between the astrometric trajectories are not due to the fact that one is rotating and the other is not. to MACHO 98-SMC-1 found by Afonso et al. (2000). The Einstein radii $`\theta _\mathrm{E}`$ are $`74\mu `$as and $`167\mu `$as, the Einstein crossing times $`t_\mathrm{E}`$ are 99 days and 165 days, the mass ratios $`M_2/M_1`$ are 0.50 and 4.17, and the separations are $`d\theta _\mathrm{E}`$ where $`d=0.54`$ and $`3.25`$. Here $`M_1`$ is the mass of the component that is closer to the caustic that the source passes through. The full solutions are described in Tables 1 and 2 of Afonso et al. (2000).
Figure 1 shows the trajectory of the source relative to the binary in the two solutions. Figure 2 is adapted from Figures 3 and 4 of Afonso et al. (2000) and shows the predicted light curves in $`I`$ band for these two solutions. Time is shown as HJD′ = HJD − 2450000. The data are binned in 1-day intervals except in the immediate neighborhood of the caustics where there are 0.1-day bins. Data taken in other bands are adjusted to the $`I`$ band system using the source and background fluxes from each observatory and each band as determined from the overall fit. See Afonso et al. (2000). The main conclusion from Figure 2 is that the two solutions are essentially identical from a photometric standpoint.
Figure 3 shows the astrometric deviation of the light centroid from the source position for the close and wide solutions, respectively. The dashed portions of the curve show the jumps at the times of the caustic crossings. These jumps would be discontinuous for a point source, but in fact take place by a (rapid) continuous motion for a finite source. Note that the two displacement curves look extremely similar. This similarity derives from the underlying degeneracy in the lens equation that was discovered by Dominik (1999b).
Although the pattern of centroid motion is extremely similar in the two cases, the two curves are actually displaced from one another by an offset
$$\mathrm{\Delta }\delta \boldsymbol{\theta }_c(t)=\delta \boldsymbol{\theta }_{c,\mathrm{close}}(t)-\delta \boldsymbol{\theta }_{c,\mathrm{wide}}(t),$$
(1)
which is about $`40\mu `$as, i.e., $`0.5\theta _\mathrm{E}`$ for the close binary or $`0.25\theta _\mathrm{E}`$ for the wide binary. Such an offset is not observable if the observations are restricted to times when $`\mathrm{\Delta }\delta \boldsymbol{\theta }_c`$ is approximately constant. However, at very early or very late times, $`\delta \boldsymbol{\theta }_c\to 0`$, since when the source is far from the lens, the image and source positions coincide. Because this occurs for both models, $`\mathrm{\Delta }\delta \boldsymbol{\theta }_c`$ must vanish at these times. How long, in practice, must one wait to tell the difference between the two models?
This question is addressed in Figure 4, where we plot the offset between the two models, $`\mathrm{\Delta }\delta \boldsymbol{\theta }_c(t)`$, as a function of time. The main figure shows the behavior of $`\mathrm{\Delta }\delta \boldsymbol{\theta }_c(t)`$ over the whole event, while the inset is restricted to times during and after the caustic crossing (when in practice astrometric observations might first be triggered). The offset shows some structure on scales of $`5\mu `$as during the time when the source is inside the caustic, but the main thing to notice is that over the next year it changes by $`20\mu `$as and thus would be noticed if the event were monitored with SIM-like precision. The full $`40\mu `$as change would take place only after about a decade. Note that to the extent that $`d\mathrm{\Delta }\delta \boldsymbol{\theta }_c/dt`$ can be approximated as a constant, $`\mathrm{\Delta }\delta \boldsymbol{\theta }_c`$ cannot be detected at all, because such uniform motion can be subsumed in the fit for the proper motion of the source. However, from Figure 4, $`\mathrm{\Delta }\delta \boldsymbol{\theta }_c(t)`$ slows down dramatically after about 1 year, so that after 2 years, its non-uniform motion could be unambiguously distinguished from uniform source motion.
It is also instructive to look at the behavior of Figure 4 at early times. Of course no astrometric measurements could have been taken then because there had been no signature of an event. However, the entire event could have just as well taken place in reverse. In this case post-caustic astrometric measurements would probe dramatic changes in the offset, much larger than the $`40\mu `$as changes in the event proceeding in its actual direction. The reason for this can be seen in Figure 1: the source passes relatively close to the companion binary member and this passage induces a large astrometric deviation. (Indeed, this passage is so close that it induces a noticeable deviation in the photometric light curve which is why the early data for this event ruled out the static wide solution.) In general, the source is not likely to pass close to both members, so that deviations of the type seen in Figure 4 are unlikely.
However, the $`40\mu `$as offset seen in Figure 3 at times when the source is inside the caustic is a generic feature of this caustic and does not depend in any way on the direction of the source trajectory through the caustic. Therefore, it is generically possible to break the close/wide (Dominik 1999b) degeneracy astrometrically, even when it is extremely difficult to do so photometrically.
## 3 Discussion
As a practical matter, SIM could not resolve the degeneracy in MACHO 98-SMC-1 because the source is only $`I\approx 22`$, far too faint for SIM to follow. Most events that SIM could monitor would be in the bulge, where there are far more events and where the sources are much brighter. For these events, the typical Einstein radius is probably $`\theta _\mathrm{E}\approx 300\mu `$as, so the astrometric deviations would be several times larger than for MACHO 98-SMC-1. Hence, it seems likely that for sources that could be monitored astrometrically at all, breaking the degeneracy would be well within SIM's capabilities.
Finally, we ask: what is the fundamental physical reason that the photometric degeneracy is reproduced as an astrometric degeneracy in the neighborhood of the caustic, but can be broken astrometrically at late times? This can most easily be seen by looking at Figure 1. In each model, the caustic that is crossed is associated with the mass at the right. The Einstein crossing times associated with these masses, $`t_\mathrm{E}^{\prime }=[M_1/(M_1+M_2)]^{1/2}t_\mathrm{E}`$, are about the same in the two cases, $`t_{\mathrm{E},\mathrm{close}}^{\prime }=81`$ days and $`t_{\mathrm{E},\mathrm{wide}}^{\prime }=72`$ days, respectively. We therefore show the size of mass $`M_1`$ to be $`t_{\mathrm{E},\mathrm{close}}^{\prime }/t_{\mathrm{E},\mathrm{wide}}^{\prime }=1.125`$ times larger in the close-binary panel than in the wide-binary panel. The lens equation is very similar in the neighborhood of this mass, which is the origin of both the photometric and astrometric degeneracy. However, for the wide-binary solution, the very large mass of the companion at the left displaces the entire image structure to the right by $`[M_2/(M_1+M_2)]^{1/2}\theta _\mathrm{E}/d\approx 57\mu `$as. This displacement only gradually returns to zero: even at time $`dt_\mathrm{E}\approx 540`$ days after the event, it has only fallen by half. By contrast, once the source has left the vicinity of $`M_1`$ of the close binary, there are no large and distant masses that could significantly displace the images relative to the source.
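The quoted single-mass crossing times follow directly from the fit parameters listed in Section 2 (mass ratios $`M_2/M_1`$ of 0.50 and 4.17, Einstein times of 99 and 165 days); a quick numerical check:

```python
import math

def tE_single(q, tE):
    """t_E' = sqrt(M1 / (M1 + M2)) * t_E, with q = M2 / M1."""
    return math.sqrt(1.0 / (1.0 + q)) * tE

# close solution: q = 0.50, tE = 99 d;  wide solution: q = 4.17, tE = 165 d
assert abs(tE_single(0.50, 99.0) - 81.0) < 1.0
assert abs(tE_single(4.17, 165.0) - 72.0) < 1.0
```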
Acknowledgements: We thank Scott Gaudi for stimulating discussions. Work by AG was supported in part by grant AST 97-27520 from the NSF. Work by CH was supported by grant KRF-99-041-D00442 of the Korea Research Foundation. CH thanks the Ohio State University Astronomy Department for its hospitality during a visit during which most of the work on this paper was completed.
Figure 1: Positions of the components of the binary lens MACHO 98-SMC-1 in the two models of Afonso et al. (2000). The size (area) of the dots indicates the relative masses of the components. The panels show angular position scaled by the proper motion $`\mu `$. The units are therefore time, which means that the source trajectory is shown as a function of $`\mathrm{\Delta }\theta _x/\mu +t_0=`$ HJD′, and is therefore the same in the two panels.
Figure 2: Light curves for close binary (solid) and wide binary (dashed) models for the caustic-crossing binary microlensing event MACHO 98-SMC-1, together with binned data from 5 microlensing collaborations. Both the curves and data are adapted from Figures 3 and 4 of Afonso et al. (2000). The non-$`I`$ points have been put on the $`I`$ band system using the solutions of Afonso et al. (2000) so that both curves could be shown on the same plot. The two models predict virtually identical photometric results.
Figure 3: Astrometric deviation $`\delta \boldsymbol{\theta }_c`$ of the image centroid from the source position for the same two models shown in Figure 2 and over the same time interval, $`960\le \mathrm{HJD}^{\prime }\le 995`$. The crosses show the progress of the event in 1 day intervals, and the arrows designate the direction of the centroid motion. The dashed lines show the "instantaneous jumps" that the image centroid of a point source would undergo at the caustic crossing. Finite source effects (not shown) would make these transitions continuous and would foreshorten them by about 3%. The arcs at the bottoms represent the image centroid positions when the source is inside the caustic. The pattern of motion in the two cases looks extremely similar, confirming the photometric degeneracy illustrated in Figure 2. However, the two trajectories are offset by $`40\mu `$as, meaning that they can be distinguished if the zero-point of astrometry is established by sufficiently late-time observations.
Figure 4: Difference $`\mathrm{\Delta }\delta \boldsymbol{\theta }_c`$ between the two astrometric deviations shown in Figure 3, with evaluations every 100 days shown by crosses. The full figure shows $`\mathrm{\Delta }\delta \boldsymbol{\theta }_c`$ over a period of 20 years, while the inset shows only the period during and after the caustic crossing (when astrometric measurements might reasonably have been triggered). During the year after the caustic crossing, $`\mathrm{\Delta }\delta \boldsymbol{\theta }_c`$ changes by $`20\mu `$as. To the extent that this is consistent with uniform motion, it could not be disentangled from the uniform proper motion of the source. However, during the subsequent year, $`\mathrm{\Delta }\delta \boldsymbol{\theta }_c`$ slows down substantially, so that its motion over 2 years could easily be distinguished from uniform motion. Thus, astrometric measurements would distinguish between the two solutions.
# Probing the width of the MACHO mass function
## Abstract
The simplest interpretation of the microlensing events observed towards the Large Magellanic Cloud is that approximately half of the mass of the Milky Way halo is in the form of MAssive Compact Halo Objects with $`M0.5M_{}`$. This poses severe problems for stellar MACHO candidates, and leads to the consideration of more exotic objects such as primordial black holes (PBHs). Constraining the MACHO mass function will shed light on their nature. Using the current data we find, for four halo models, the best fit delta-function, power law and PBH mass functions. The best fit PBH mass functions, despite having significant finite width, have likelihoods which are similar to, and for one particular halo model greater than, those of the best fit delta functions. We also find that if the correct halo model is known then $``$ 500 events will be sufficient to determine whether the MACHO mass function has significant width, and will also allow determination of the mass function parameters to $`5\%`$.
## 1. Introduction
The rotation curves of spiral galaxies are typically flat out to about $`30`$ kpc. This implies that the mass enclosed increases linearly with radius, with a halo of dark matter extending beyond the luminous matter. The nature of the dark matter is unknown, with possible candidates including MAssive Compact Halo Objects (MACHOs), such as brown dwarfs, Jupiters or black holes, and elementary particles, known as Weakly Interacting Massive Particles (WIMPs), such as axions and neutralinos.
MACHOs with mass in the range $`10^{-8}M_{\odot }`$ to $`10^{3}M_{\odot }`$ can be detected via the temporary amplification of background stars which occurs, due to gravitational microlensing, when a MACHO passes close to the line of sight to a background star. Since the early 1990s several collaborations have been monitoring millions of stars in the Large and Small Magellanic Clouds (LMC and SMC), and a number of candidate microlensing events have been observed.
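For a single point lens the temporary amplification has the standard Paczyński form $`A(u)=(u^2+2)/(u\sqrt{u^2+4})`$, where $`u`$ is the lens-source separation in Einstein radii; a minimal light-curve sketch, with all parameter values purely illustrative:

```python
import math

def magnification(u):
    # standard point-lens (Paczynski) magnification
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def u_of_t(t, t0, tE, u0):
    # rectilinear lens trajectory in units of the Einstein radius
    return math.hypot(u0, (t - t0) / tE)

# an impact parameter u0 = 0.5 event peaks at A ~ 2.18 and decays to ~1 far from t0
peak = magnification(u_of_t(0.0, 0.0, 40.0, 0.5))
assert abs(peak - 2.18) < 0.01
assert abs(magnification(u_of_t(400.0, 0.0, 40.0, 0.5)) - 1.0) < 0.01
```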
The interpretation of these microlensing events is a matter of much debate. Whilst the lenses responsible for these events may be located in the halo of our galaxy, it is possible that the contribution to the lensing rate due to other populations of objects has been underestimated. For the standard-halo model, a cored isothermal sphere, the most likely MACHO mass function (MF) is sharply peaked around $`0.5M_{\odot }`$, with about half of the total mass of the halo in MACHOs. This poses a problem for stellar MACHO candidates (chemical abundance arguments and direct searches place tight limits on their abundance), and leads to the consideration of more exotic MACHO candidates such as primordial black holes (PBHs).
PBHs with mass $`M\approx 0.5M_{\odot }`$ could be formed due to a spike in the primordial density perturbation spectrum at this scale or at the QCD phase transition, where the reduced pressure forces allow PBHs to form more easily. In both cases it is not possible to produce an arbitrarily narrow PBH MF, and the predicted MF is considerably wider than the sharply peaked MFs which have been fitted to the observed events to date.
## 2. Current data
In their analysis of their 2-year data the MACHO collaboration form a 6 event sub-sample which they argue is a conservative estimate of the events resulting from lenses located in the Milky Way halo. We find the maximum likelihood fit, to the 6 event "halo sub-sample", for delta-function (DF), power-law and PBH MFs for four sample halo models: the standard-halo, the standard-halo including the transverse velocity of the line of sight and 2 power-law halo models. For the standard-halo, both with and without the transverse velocity of the line of sight, and one of the power-law halo models, the DF MF has the largest maximum likelihood, whilst for the other power-law halo the PBH MF provides the best fit.
The differences in maximum likelihood between MF/halo model combinations are small and, unsurprisingly given the small number of events, it is not possible to differentiate between MFs using the current data, even if the halo model is fixed (for more details see ).
## 3. Monte Carlo simulations
We carried out 400 Monte Carlo simulations each for $`N=100,316,1000`$ and 3162 event samples, assuming a standard-halo and taking the MACHO MF to be the comparatively "broad" PBH MF. For each simulation we found the best fit PBH and delta-function (DF) MFs.
For each simulation we compared the theoretical event rate distributions produced by the best fit PBH and DF MFs with those "observed" using a modified form of the Kolmogorov-Smirnov (KS) test. The fraction of the simulations passing the KS test at a given confidence level, for both the PBH and DF MACHO MFs, is shown in Fig. 1 for each $`N`$.
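The precise modification of the KS test is not spelled out in this summary; as a point of reference, a plain one-sample KS distance between a simulated event sample and a model cumulative distribution can be computed as follows (the uniform model and sample sizes below are purely illustrative):

```python
import numpy as np

def ks_statistic(samples, model_cdf):
    """One-sample Kolmogorov-Smirnov distance between a sample and a model CDF."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    cdf = model_cdf(x)
    upper = np.abs(cdf - np.arange(1, n + 1) / n).max()  # deviation just after each point
    lower = np.abs(cdf - np.arange(0, n) / n).max()      # deviation just before each point
    return max(upper, lower)

def uniform_cdf(x):
    return np.clip(x, 0.0, 1.0)

rng = np.random.default_rng(1)
d_good = ks_statistic(rng.uniform(size=1000), uniform_cdf)       # sample drawn from the model
d_bad = ks_statistic(rng.uniform(size=1000) ** 2, uniform_cdf)   # sample from a different law
assert d_good < 0.1 and d_bad > 0.15
```

A small distance means the "observed" rate distribution is statistically compatible with the fitted model, which is the criterion tallied in Fig. 1.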
In Fig. 2 we plot 1 and 2 $`\sigma `$ contours (which contain 68% and 95% of the simulations respectively), of the mean MACHO mass and halo fraction, of the best fit PBH MFs. The values for the input MF are marked with a cross. We find that fitting a DF MF when the true MF is the PBH MF leads to a systematic underestimation of the mean MACHO mass by $`15\%`$. For further details see ref. .
## 4. Future prospects
Assuming that the lenses are located in the Milky Way halo, and that the correct halo model is known, approximately 500 events should be sufficient to ascertain whether the MACHO mass function has significant finite width, and also determine the parameters of the mass function to $`5\%`$. If the halo model is not known then the number of events necessary is likely to be increased by at least an order of magnitude, however the use of a satellite to make parallax measurements of microlensing events would allow simultaneous determination of the lens location and, if appropriate, the halo structure and mass function with of order 100s of events. If the MACHOs are PBHs then the gravitational waves emitted by PBH-PBH binaries will allow the MACHO mass distribution to be mapped by the Laser Interferometer Space Antenna.
## References
1. Alcock C. et al. 1997, ApJ, 486, 697
2. see for instance Carr B. J. in these proceedings
3. Evans N. W. 1993, MNRAS, 260, 191
4. Green A. M. 1999 preprint astro-ph/9912424
5. Jedamzik K., & Niemeyer J. C. 1999 Phys. Rev. D 59, 124014
6. Markovic D., & Sommer-Larsen J. 1997, MNRAS, 229, 929
7. Markovic D. 1998 MNRAS, 507 316
8. Nakamura T., Sasaki M., Tanaka T., & Thorne K. 1997, ApJ, 487, L139 and Ioka K. in these proceedings
9. Paczyński B. 1986, ApJ, 428, L5
no-problem/0001/cond-mat0001367.html | ar5iv | text | # Duality relations for $`M`$ coupled Potts models
(January 2000)
## Abstract
We establish explicit duality transformations for systems of $`M`$ $`q`$-state Potts models coupled through their local energy density, generalising known results for $`M=1,2,3`$. The $`M`$-dimensional space of coupling constants contains a selfdual sub-manifold of dimension $`D_M=[M/2]`$. For the case $`M=4`$, the variation of the effective central charge along the selfdual surface is investigated by numerical transfer matrix techniques. Evidence is given for the existence of a family of critical points, corresponding to conformal field theories with an extended $`S_M`$ symmetry algebra.
For several decades, the $`q`$-state Potts model has been used to model ferromagnetic materials , and an impressive number of results are known about it, especially in two dimensions . More recently, its random-bond counterpart has attracted considerable attention , primarily because it permits one to study how quenched randomness coupling to the local energy density can modify the nature of a phase transition.
But despite the remarkable successes of conformal invariance applied to pure two-dimensional systems, the amount of analytical results on the random-bond Potts model is rather scarce. Usually the disorder is dealt with by introducing $`M`$ replicas of the original model, with mutual energy-energy interactions, and taking the limit $`M\to 0`$. The price to be paid is however that the resulting system loses many of the properties (such as unitarity) that lie at the heart of conventional conformal field theory .
Very recently, an alternative approach was suggested by Dotsenko et al . These authors point out that the perturbative renormalisation group (effectively an expansion around the Ising model in the small parameter $`\epsilon =q-2`$) predicts the existence of a non-trivial infrared fixed point at interlayer coupling $`g_{\ast }\sim \epsilon /(M-2)+\mathcal{O}(\epsilon ^2)`$, so that the regions $`M<2`$ and $`M>2`$ are somehow dual upon changing the sign of the coupling constant<sup>1</sup><sup>1</sup>1The case $`M=2`$ is special: For $`q=2`$ (the Ashkin-Teller model) the coupling presents a marginal perturbation, giving rise to a halfline of critical points along which the critical exponents vary continuously . On the other hand, for $`q>2`$ where the perturbation is relevant, the model is still integrable, but now presents a mass generation leading to non-critical behaviour .. More interestingly, for $`M=3`$ they identify the exact lattice realisation of a critical theory with exponents consistent with those of the perturbative treatment, and they conjecture that this generalises to any integer $`M\ge 3`$. Their proposal is then to study this class of coupled models, which are now unitary by definition, and only take the limit $`M\to 0`$ once the exact expressions for the various critical exponents have been worked out. One could hope to attack this task by means of extended conformal field theory, thus combining the $`Z_q`$ symmetry of the spin variable with a non-abelian $`S_M`$ symmetry upon permuting the replicas.
Clearly, a first step in this direction is to identify the lattice models corresponding to this series of critical theories, parametrised by the integer $`M\ge 3`$. For $`M=3`$ this was achieved by working out the duality relations for $`M`$ coupled Potts models on the square lattice, within the $`M`$-dimensional space of coupling constants giving rise to $`S_M`$ symmetric interactions amongst the lattice energy operators of the replicas. Studying numerically the variation of the effective central charge along the resulting selfdual line, using a novel and very powerful transfer matrix technique, the critical point was unambiguously identified with one of the endpoints of that line.
Unfortunately it was hard to see how such duality relations could be extended to the case of general $`M`$. The calculations in Ref. relied on a particular version of the method of lattice Fourier transforms , already employed for $`M=2`$ two decades ago . Though perfectly adapted to the case of linear combinations of cosinoidal interactions within a single (vector) Potts model , this approach led to increasingly complicated algebra when several coupled models were considered. Moreover, it seemed impossible to recast the end results in a reasonably simple form for larger $`M`$.
In the present publication we wish to assess whether such a scenario of a unique critical point with an extended $`S_M`$ symmetry can indeed be expected to persist in the general case of $`M\ge 3`$ symmetrically coupled models. We explicitly work out the duality transformations for any $`M`$, and show that they can be stated in a very simple form \[Eq. (9)\] after redefining the coupling constants.
The lattice identification of the $`M=3`$ critical point in Ref. crucially relied on the existence of a one-parameter selfdual manifold, permitting only two possible directions of the initial flow away from the decoupling fixed point. We find in general a richer structure with an $`[M/2]`$-dimensional selfdual manifold. Nonetheless, from a numerical study of the case $`M=4`$ we end up concluding that the uniqueness of the non-trivial fixed point can be expected to persist, since the decoupling fixed point acts as a saddlepoint of the effective central charge.
Consider then a system of $`M`$ identical planar lattices, stacked on top of one another. On each lattice site $`i`$, and for each layer $`\mu =1,2,\ldots ,M`$, we define a Potts spin $`\sigma _i^{(\mu )}`$ that can be in any of $`q=2,3,\ldots `$ distinct states. The layers interact by means of the reduced hamiltonian
$$\mathcal{H}=\sum _{\langle ij\rangle }\mathcal{H}_{ij},$$
(1)
where $`\langle ij\rangle `$ denotes the set of lattice edges, and an $`S_M`$ symmetric nearest-neighbour interaction is defined as
$$\mathcal{H}_{ij}=\sum _{m=1}^{M}K_m{\sum _{\mu _1<\mu _2<\cdots <\mu _m}}^{\prime }\prod _{l=1}^{m}\delta (\sigma _i^{(\mu _l)},\sigma _j^{(\mu _l)}).$$
(2)
By definition the primed summation runs over the $`\left(\genfrac{}{}{0pt}{}{M}{m}\right)`$ terms for which the indices $`1\le \mu _l\le M`$ with $`l=1,2,\ldots ,m`$ are all different, and $`\delta (x,y)=1`$ if $`x=y`$ and zero otherwise.
For $`M=1`$ the model thus defined reduces to the conventional Potts model, whilst for $`M=2`$ it is identical to the Ashkin-Teller like model considered in Ref. , where the Potts models of either layer are coupled through their local energy density. For $`M>2`$, additional multi-energy interactions between several layers have been added, since such interactions are generated by the duality transformations, as we shall soon see. However, from the point of view of conformal field theory these supplementary interactions are irrelevant in the continuum limit. The case $`M=3`$ was discussed in Ref. .
By means of a generalised Kasteleyn-Fortuin transformation the local Boltzmann weights can be recast as
$$\mathrm{exp}(\mathcal{H}_{ij})=\prod _{m=1}^{M}{\prod _{\mu _1<\mu _2<\cdots <\mu _m}}^{\prime }\left[1+\left(\mathrm{e}^{K_m}-1\right)\prod _{l=1}^{m}\delta (\sigma _i^{(\mu _l)},\sigma _j^{(\mu _l)})\right].$$
(3)
In analogy with the case of $`M=1`$, the products can now be expanded so as to transform the original Potts model into its associated random cluster model. To this end we note that Eq. (3) can be rewritten in the form
$$\mathrm{exp}(\mathcal{H}_{ij})=b_0+\sum _{m=1}^{M}b_m{\sum _{\mu _1<\mu _2<\cdots <\mu _m}}^{\prime }\prod _{l=1}^{m}\delta (\sigma _i^{(\mu _l)},\sigma _j^{(\mu _l)}),$$
(4)
defining the coefficients $`\{b_m\}_{m=0}^M`$. The latter can be related to the physical coupling constants $`\{K_m\}_{m=1}^M`$ by evaluating Eqs. (3) and (4) in the situation where precisely $`m`$ out of the $`M`$ distinct Kronecker $`\delta `$-functions are non-zero. Clearly, in this case Eq. (3) is equal to $`\mathrm{e}^{J_m}`$, where
$$J_m=\sum _{k=1}^{m}\left(\genfrac{}{}{0pt}{}{m}{k}\right)K_k$$
(5)
for $`m\ge 1`$, and we set $`J_0=K_0=0`$. On the other hand, we find from Eq. (4) that this must be equated to $`\sum _{k=0}^m\left(\genfrac{}{}{0pt}{}{m}{k}\right)b_k`$. This set of $`M+1`$ equations can be solved for the $`b_k`$ by recursion, considering in turn the cases $`m=0,1,\ldots ,M`$. After some algebra, the edge weights $`b_k`$ (for $`k\ge 0`$) are then found as
$$b_k=\sum _{m=0}^{k}(-1)^{m+k}\left(\genfrac{}{}{0pt}{}{k}{m}\right)\mathrm{e}^{J_m}.$$
(6)
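As a quick numerical check (the values of $`M`$ and the couplings below are arbitrary test values, not from the paper), Eq. (6) can be verified to invert the linear system $`\mathrm{e}^{J_m}=\sum _{k=0}^m\left(\genfrac{}{}{0pt}{}{m}{k}\right)b_k`$ derived above:

```python
import math

# Arbitrary test couplings K_1..K_M
K = [0.3, -0.2, 0.1, 0.25, -0.15]
M = len(K)

# Eq. (5): J_m = sum_{k=1}^m C(m,k) K_k, with J_0 = 0
J = [sum(math.comb(m, k) * K[k - 1] for k in range(1, m + 1)) for m in range(M + 1)]

# Eq. (6): b_k = sum_{m=0}^k (-1)^{m+k} C(k,m) e^{J_m}
b = [sum((-1) ** (m + k) * math.comb(k, m) * math.exp(J[m]) for m in range(k + 1))
     for k in range(M + 1)]

# Verify the forward relations e^{J_m} = sum_{k=0}^m C(m,k) b_k
for m in range(M + 1):
    lhs = sum(math.comb(m, k) * b[k] for k in range(m + 1))
    assert abs(lhs - math.exp(J[m])) < 1e-9 * math.exp(J[m])
print(b[0])   # → 1.0  (b_0 = e^{J_0} = 1)
```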
The partition function in the spin representation
$$Z=\sum _{\{\sigma \}}\prod _{\langle ij\rangle }\mathrm{exp}(\mathcal{H}_{ij})$$
(7)
can now be transformed into the random cluster representation as follows. First, insert Eq. (4) on the right-hand side of the above equation, and imagine expanding the product over the lattice edges $`\langle ij\rangle `$. To each term in the resulting sum we associate an edge colouring $`\mathcal{G}`$ of the $`M`$-fold replicated lattice, where an edge $`(ij)`$ in layer $`m`$ is considered to be coloured (occupied) if the term contains the factor $`\delta (\sigma _i^{(m)},\sigma _j^{(m)})`$, and uncoloured (empty) if it does not. \[In this language, the couplings $`J_k`$ correspond to the local energy density summed over all possible permutations of precisely $`k`$ simultaneously coloured edges.\]
The summation over the spin variables $`\{\sigma \}`$ is now trivially performed, yielding a factor of $`q`$ for each connected component (cluster) in the colouring graph. Keeping track of the prefactors multiplying the $`\delta `$-functions, using Eq. (4), we conclude that
$$Z=\sum _{\mathcal{G}}\prod _{m=1}^{M}q^{C_m}b_m^{B_m},$$
(8)
where $`C_m`$ is the number of clusters in the $`m`$th layer, and $`B_m`$ is the number of occurrences in $`\mathcal{G}`$ of a situation where precisely $`m`$ ($`0\le m\le M`$) edges placed on top of one another have been simultaneously coloured.
It is worth noticing that the random cluster description of the model has the advantage that $`q`$ only enters as a parameter. By analytic continuation one can thus give meaning to a non-integer number of states. The price to be paid is that the $`C_m`$ are, a priori, non-local quantities.
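The equivalence of Eqs. (7) and (8) can be checked by brute force on a small graph. The sketch below uses an assumed test setup (not from the paper): a 3-site triangle, $`M=2`$ layers, $`q=3`$, arbitrary couplings. It enumerates all spin configurations on one side and all edge colourings on the other; in the spin sum each edge contributes $`\mathrm{e}^{J_m}`$, with $`m`$ the number of coincident layers on that edge.

```python
import itertools, math

# Assumed test setup: triangle graph, M = 2 layers, q = 3 states.
n_sites, edges = 3, [(0, 1), (1, 2), (0, 2)]
q, K = 3, [0.7, 0.2]
M = len(K)

J = [sum(math.comb(m, k) * K[k - 1] for k in range(1, m + 1))
     for m in range(M + 1)]                                   # Eq. (5)
b = [sum((-1) ** (m + k) * math.comb(k, m) * math.exp(J[m]) for m in range(k + 1))
     for k in range(M + 1)]                                   # Eq. (6)

def n_clusters(coloured):
    """Connected components of (n_sites, coloured edges); isolated sites count."""
    parent = list(range(n_sites))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in coloured:
        parent[find(i)] = find(j)
    return len({find(s) for s in range(n_sites)})

# Spin representation, Eq. (7): each edge weighs e^{J_m}, m = # coincident layers.
Z_spin = 0.0
for spins in itertools.product(range(q), repeat=M * n_sites):
    w = 1.0
    for i, j in edges:
        m = sum(spins[mu * n_sites + i] == spins[mu * n_sites + j]
                for mu in range(M))
        w *= math.exp(J[m])
    Z_spin += w

# Random-cluster representation, Eq. (8): each edge gets a subset of layers.
subsets = [s for r in range(M + 1) for s in itertools.combinations(range(M), r)]
Z_rc = 0.0
for choice in itertools.product(subsets, repeat=len(edges)):
    w = 1.0
    for s in choice:
        w *= b[len(s)]                    # edge weight b_{|subset|}
    for mu in range(M):                    # q^{clusters} per layer
        w *= q ** n_clusters([e for e, s in zip(edges, choice) if mu in s])
    Z_rc += w

print(abs(Z_spin - Z_rc) / Z_spin)   # agreement to machine precision
```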
In terms of the edge variables $`b_m`$ the duality transformation of the partition function is easily worked out. For simplicity we shall assume that the coupling constants $`\{K_m\}`$ are identical between all nearest-neighbour pairs of spins, the generalisation to an arbitrary inhomogeneous distribution of couplings being trivial. By analogy with the case $`M=1`$, a given colouring configuration $`\mathcal{G}`$ is taken to be dual to a colouring configuration $`\stackrel{~}{\mathcal{G}}`$ of the dual lattice obtained by applying the following duality rule: Each coloured edge intersects an uncoloured dual edge, and vice versa. In particular, the demand that the configuration $`\mathcal{G}_{\mathrm{full}}`$ with all lattice edges coloured be dual to the configuration $`\mathcal{G}_{\mathrm{empty}}`$ with no coloured (dual) edge fixes the constant entering the duality transformation. Indeed, from Eq. (8), we find that $`\mathcal{G}_{\mathrm{full}}`$ has weight $`q^Mb_M^E`$, where $`E`$ is the total number of lattice edges, and $`\mathcal{G}_{\mathrm{empty}}`$ is weighted by $`q^{MF}\stackrel{~}{b}_0^E`$, where $`F`$ is the number of faces, including the exterior one. We thus seek a duality transformation of the form $`q^{MF}\stackrel{~}{b}_0^EZ(\{b_m\})=q^Mb_M^E\stackrel{~}{Z}(\{\stackrel{~}{b}_m\})`$, where for any configuration $`\mathcal{G}`$ the edge weights must transform so as to keep the same relative weight between $`\mathcal{G}`$ and $`\mathcal{G}_{\mathrm{full}}`$ as between $`\stackrel{~}{\mathcal{G}}`$ and $`\mathcal{G}_{\mathrm{empty}}`$.
An arbitrary colouring configuration $`\mathcal{G}`$ entering Eq. (8) can be generated by applying a finite number of changes to $`\mathcal{G}_{\mathrm{full}}`$, in which an edge of weight $`b_M`$ is changed into an edge of weight $`b_m`$ for some $`m=0,1,\ldots ,M-1`$. By such a change, in general, a number $`k\le M-m`$ of pivotal bonds are removed from the colouring graph, thus creating $`k`$ new clusters, and the weight relative to that of $`\mathcal{G}_{\mathrm{full}}`$ will change by $`q^kb_m/b_M`$. On the other hand, in the dual configuration $`\stackrel{~}{\mathcal{G}}`$ a number $`M-m-k`$ of clusters will be lost, since each of the $`k`$ new clusters mentioned above will be accompanied by the formation of a loop in $`\stackrel{~}{\mathcal{G}}`$. The weight change relative to $`\mathcal{G}_{\mathrm{empty}}`$ therefore amounts to $`\stackrel{~}{b}_{M-m}/(\stackrel{~}{b}_0q^{M-m-k})`$. Comparing these two changes we see that the factors of $`q^k`$ cancel nicely, and after a change of variables $`m\to M-m`$ the duality transformation takes the simple form
$$\stackrel{~}{b}_m=\frac{q^mb_{M-m}}{b_M}\text{ for }m=0,1,\ldots ,M,$$
(9)
the relation with $`m=0`$ being trivial.
Selfdual solutions can be found by imposing $`\stackrel{~}{b}_m=b_m`$. However, this gives rise to only $`\left[\frac{M+1}{2}\right]`$ independent equations
$$b_{M-m}=q^{M/2-m}b_m\text{ for }m=0,1,\ldots ,\left[\frac{M-1}{2}\right],$$
(10)
and the $`M`$-dimensional parameter space $`\{b_m\}_{m=1}^M`$, or $`\{K_m\}_{m=1}^M`$, thus has a selfdual sub-manifold of dimension $`D_M=\left[\frac{M}{2}\right]`$. In particular, the ordinary Potts model ($`M=1`$) has a unique selfdual point, whilst for $`M=2`$ and $`M=3`$ one has a line of selfdual solutions.
Our main result is constituted by Eqs. (5) and (6) relating the physical coupling constants $`\{K_m\}`$ to the edge weights $`\{b_m\}`$, in conjunction with Eqs. (9) and (10) giving the explicit (self)duality relations in terms of the latter.
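A few consistency checks of these relations can be run with arbitrary test values for $`q`$, $`M`$ and the weights (recall that $`b_0=1`$ follows from $`J_0=0`$): the map of Eq. (9) is an involution, and the decoupled point $`b_m=q^{m/2}`$ (which reappears as Eq. (13) below) is selfdual.

```python
import math

# Duality map of Eq. (9): b~_m = q^m b_{M-m} / b_M
def dual(b, q):
    M = len(b) - 1
    return [q ** m * b[M - m] / b[M] for m in range(M + 1)]

q, M = 3.0, 4
b = [1.0, 0.8, 1.3, 0.5, 2.1]                 # arbitrary weights with b_0 = 1

# Applying the map twice returns the original weights (uses b_0 = 1).
assert all(abs(x - y) < 1e-12 for x, y in zip(b, dual(dual(b, q), q)))

# The decoupled point b_m = q^{m/2} satisfies b~_m = b_m, i.e. Eq. (10).
decoupled = [q ** (m / 2) for m in range(M + 1)]
assert all(abs(x - y) < 1e-12 for x, y in zip(decoupled, dual(decoupled, q)))
print("duality checks passed")
```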
Since the interaction energies entering Eq. (3) are invariant under a simultaneous shift of all Potts spins, an alternative way of establishing the duality transformations proceeds by Fourier transformation of the energy gaps . This method was used in Refs. and to work out the cases $`M=2`$ and $`M=3`$ respectively. However, as $`M`$ increases this procedure very quickly becomes quite involved. To better appreciate the ease of the present approach, let us briefly pause to see how the parametrisations of the selfdual lines for $`M=2,3`$, expressed in terms of the couplings $`\{K_m\}`$, can be reproduced in a most expedient manner.
For $`M=2`$, Eq. (10) gives $`b_2=q`$, where from Eqs. (5) and (6) $`b_2=\mathrm{e}^{2K_1+K_2}-2\mathrm{e}^{K_1}+1`$. Thus
$$\mathrm{e}^{K_2}=\frac{2\mathrm{e}^{K_1}+(q-1)}{\mathrm{e}^{2K_1}},$$
(11)
in accordance with Ref. . Similarly, for $`M=3`$ one has $`b_1=qb_2/b_3=b_2/\sqrt{q}`$ with $`b_1=\mathrm{e}^{K_1}1`$, $`b_2`$ as before, and $`b_3=\mathrm{e}^{3K_1+3K_2+K_3}3\mathrm{e}^{2K_1+K_2}+3\mathrm{e}^{K_1}1`$. This immediately leads to the result given in Ref. :
$`\mathrm{e}^{K_2}`$ $`=`$ $`{\displaystyle \frac{(2+\sqrt{q})\mathrm{e}^{K_1}-(1+\sqrt{q})}{\mathrm{e}^{2K_1}}},`$ (12)
$`\mathrm{e}^{K_3}`$ $`=`$ $`{\displaystyle \frac{3(\mathrm{e}^{K_1}-1)(1+\sqrt{q})+q^{3/2}+1}{\left[(2+\sqrt{q})\mathrm{e}^{K_1}-(1+\sqrt{q})\right]^3}}\mathrm{e}^{3K_1}.`$
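Eq. (12) can be checked numerically (with arbitrary test values of $`q`$ and $`K_1`$): computing $`K_2,K_3`$ from $`K_1`$ and then the edge weights via Eqs. (5)-(6) must reproduce the $`M=3`$ selfduality conditions $`b_3=q^{3/2}`$ and $`b_2=\sqrt{q}\,b_1`$ of Eq. (10).

```python
import math

# Arbitrary test values; K1 chosen so that e^{K_2}, e^{K_3} stay positive.
q, K1 = 3.0, 1.4
r = math.sqrt(q)

# Selfdual couplings from Eq. (12)
eK2 = ((2 + r) * math.exp(K1) - (1 + r)) / math.exp(2 * K1)
eK3 = ((3 * (math.exp(K1) - 1) * (1 + r) + q ** 1.5 + 1)
       / ((2 + r) * math.exp(K1) - (1 + r)) ** 3) * math.exp(3 * K1)
K2, K3 = math.log(eK2), math.log(eK3)

# Edge weights for M = 3 in terms of J_m = sum_k C(m,k) K_k
b1 = math.exp(K1) - 1
b2 = math.exp(2 * K1 + K2) - 2 * math.exp(K1) + 1
b3 = (math.exp(3 * K1 + 3 * K2 + K3)
      - 3 * math.exp(2 * K1 + K2) + 3 * math.exp(K1) - 1)

assert abs(b3 - q ** 1.5) < 1e-9        # Eq. (10), m = 0
assert abs(b2 - r * b1) < 1e-9          # Eq. (10), m = 1
print("M = 3 selfdual line verified")
```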
Returning now to the general case, we notice that the selfdual manifold always contains two special points for which the behaviour of the $`M`$ coupled models can be related to that of a single Potts model. At the first such point,
$$b_m=q^{m/2}\text{ for }m=0,1,\ldots ,\left[\frac{M}{2}\right],$$
(13)
one has $`K_1=\mathrm{log}(1+\sqrt{q})`$ and $`K_m=0`$ for $`m=2,3,\ldots ,M`$, whence the $`M`$ models simply decouple. The other point
$$b_m=\delta (m,0)\text{ for }m=0,1,\ldots ,\left[\frac{M}{2}\right]$$
(14)
corresponds to $`K_m=0`$ for $`m=1,2,\ldots ,M-1`$ and $`K_M=\mathrm{log}(1+q^{M/2})`$, whence the resulting model is equivalent to a single $`q^M`$-state Potts model. Evidently, for $`M=1`$ these two special points coincide.
Specialising now to the case of a regular two-dimensional lattice, it is well-known that at the two special points the model undergoes a phase transition, which is continuous if the effective number of states ($`q`$ or $`q^M`$ as the case may be) is $`\le 4`$ . In Ref. the question was raised whether one in general can identify further non-trivial critical theories on the selfdual manifolds. In particular it was argued that for $`M=3`$ there is indeed such a point, supposedly corresponding to a conformal field theory with an extended $`S_3`$ symmetry.
To get an indication whether such results can be expected to generalise also to higher values of $`M`$, we have numerically computed the effective central charge of $`M=4`$ coupled models along the two-dimensional selfdual surface. We were able to diagonalise the transfer matrix for strips of width $`L=4,6,8`$ lattice constants in the equivalent loop model. Technical details of the simulations have been reported in Ref. . Relating the specific free energy $`f_0(L)`$ to the leading eigenvalue of the transfer matrix in the standard way, two estimates of the effective central charge, $`c(4,6)`$ and $`c(6,8)`$, were then obtained by fitting data for two consecutive strip widths according to
$$f_0(L)=f_0(\infty )-\frac{\pi c}{6L^2}+\cdots .$$
(15)
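Eliminating the non-universal constant $`f_0(\infty )`$ from Eq. (15) at two strip widths $`L_1<L_2`$ gives the two-point estimates quoted above, $`c(L_1,L_2)=6[f_0(L_1)-f_0(L_2)]/\{\pi [1/L_2^2-1/L_1^2]\}`$. A sketch with synthetic data of the assumed form (the values of $`c`$ and $`f_0(\infty )`$ below are invented for illustration):

```python
import math

# Two-point central-charge estimator implied by Eq. (15):
#   c(L1, L2) = 6 (f0(L1) - f0(L2)) / ( pi (1/L2^2 - 1/L1^2) )
def c_two_point(L1, f1, L2, f2):
    return 6.0 * (f1 - f2) / (math.pi * (1.0 / L2 ** 2 - 1.0 / L1 ** 2))

# Synthetic free energies generated with an assumed c = 0.8 and f0(inf) = -2.
c_true, f_inf = 0.8, -2.0
f0 = {L: f_inf - math.pi * c_true / (6 * L ** 2) for L in (4, 6, 8)}

print(c_two_point(4, f0[4], 6, f0[6]))   # recovers c for the estimate c(4,6)
print(c_two_point(6, f0[6], 8, f0[8]))   # recovers c for the estimate c(6,8)
```

With real transfer-matrix data the two estimates differ because of the higher-order corrections hidden in the ellipsis of Eq. (15).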
A contour plot of $`c(6,8)`$, based on a grid of $`21\times 21`$ parameter values for $`(b_1,b_2)`$, is shown in Fig. 1. The data for $`c(4,6)`$ look qualitatively similar, but are less accurate due to finite-size effects. We should stress that even though the absolute values of $`c(6,8)`$ are some 4 % below what one would expect in the $`L\to \infty `$ limit, the variations in $`c`$ are supposed to be reproduced much more accurately . On the figure $`q=3`$, but other values of $`q`$ in the range $`2<q\le 4`$ lead to similar results.
According to Zamolodchikov's $`c`$-theorem , a system initially in the vicinity of the decoupled fixed point $`(b_1,b_2)=(\sqrt{q},q)`$, shown as an asterisk on the figure, will start flowing downhill in this central charge landscape. Fig. 1 very clearly indicates that the decoupled fixed point acts as a saddle point, and there are thus only two possibilities for the direction of the initial flow.
The first of these will take the system to the stable fixed point at the origin which trivially corresponds to one selfdual $`q^4`$-state Potts model. For $`q=3`$ this leads to the generation of a finite correlation length, consistent with $`c_{\mathrm{eff}}=0`$ in the limit of an infinitely large system. As expected, the flow starts out in the $`b_2`$ direction, meaning that it is the energy-energy coupling between layers ($`K_2`$) rather than the spin-spin coupling within each layer ($`K_1`$) that controls the initial flow.
More interestingly, if the system is started out in the opposite direction (i.e., with $`K_2`$ slightly positive) it will flow towards a third non-trivial fixed point, for which the edge weights tend to infinity in some definite ratios. \[Exactly what these ratios are is difficult to estimate, given that the asymptotic flow direction exhibits finite-size effects.\] Seemingly, at this point the central charge is only slightly lower than at the decoupled fixed point, as predicted by the perturbative renormalisation group . From the numerical data we would estimate the drop in the central charge as roughly $`\mathrm{\Delta }c=0.01`$–$`0.02`$, in good agreement with the perturbative treatment which predicts $`\mathrm{\Delta }c=0.0168+\mathcal{O}(\epsilon ^5)`$ .
All of these facts are in agreement with the conjectures put forward in Ref. , and in particular one would think that this third fixed point corresponds to a conformal field theory with a non-abelian extended $`S_4`$ symmetry.
Finally, the numerics for $`q=2`$ (four coupled Ising models) is less conclusive, and we cannot rule out the possibility of a more involved fixed point structure. In particular, a $`c=2`$ theory is not only obtainable by decoupling the four models, but also by a pairwise coupling into two mutually decoupled four-state Potts (or Ashkin-Teller) models. Indeed, a similar phenomenon has already been observed for the case of three coupled Ising models .
Acknowledgments
The author is indebted to M. Picco for some very useful discussions. |
no-problem/0001/nlin0001056.html | ar5iv | text | # Spatial solitons in a semiconductor microresonator
\[
## Abstract
We show experimentally the existence of bright and dark spatial solitons in a passive quantum-well-semiconductor resonator of large Fresnel number. For the wavelength of observation the nonlinearity is mixed absorptive/defocusing. Bright solitons appear more stable than dark ones.
\] The possibility of the existence of spatial solitons in semiconductor resonators has recently been investigated in some detail theoretically motivated, among others, by possible usefulness of such structures in new types of optical information processing. Bright and dark solitons have been predicted. The majority of the papers treated bright solitons . We have recently reported on experiments concerning the switching space-time dynamics of quantum-well semiconductor resonators . In these experiments we showed the existence of hexagonal patterns, some "robustness" of small switched domains and the possibility of switching of individual elements of ensembles of bright spots; thus giving first evidence for stable localized structures. In this letter we show directly the existence of bright and dark solitons.
The semiconductor resonator used for the measurements consists of flat Bragg mirrors of about 99.7 $`\%`$ reflectivity with 18 GaAs/Ga<sub>0.5</sub>Al<sub>0.5</sub>As quantum wells between them . The optical resonator length is approximately 3 $`\mu `$m, the area about 2 cm<sup>2</sup>. Across this area the resonator wavelength varies, so that one can work from the dispersive range (wavelength longer than the gap wavelength) to well within the absorption band. The resonator was optimized for dispersive optical bistability, for which reason the absorption of the semiconductor material is too high for absorptive bistability at the bandgap wavelength (the most stable solitons are predicted for absorptive bistability, i.e. for wavelengths near band gap).
Our previous experiments in the dispersive bistability region had shown linear structure formation in the form of somewhat irregular clusters of bright spots. These are simply the result of filtering of light scattered in the material, by the high Fresnel number, high finesse resonator . This structured background field prohibits the formation of pure unperturbed, independent solitons. We attempted therefore to work closer to the band gap wavelength, where the absorption of the nonlinear quantum-well material is larger and thus the resonator finesse smaller, in order to avoid the linear structuring of the field.
The experimental setup is conceptually simple: light is generated by a Ti:Al<sub>2</sub>O<sub>3</sub>-laser tunable to desired wavelengths. It irradiates the semiconductor sample in a spot of about 40 $`\mu `$m diameter, with intensities of order of kW/cm<sup>2</sup> as required for saturating the semiconductor material. As the substrate of the semiconductor resonator is GaAs, which is opaque in the wavelength range in question, the light reflected from the sample is observed.
Observations are done either by a CCD camera for recording 2D images or by a small detector which can record the time variation e.g. on a cross-section of the illuminated area ("streak-camera" images). In order to avoid as much as possible thermal nonlinearities, the measurements are done during a time of a few microseconds. For this purpose the light is admitted to the sample for about 5 $`\mu `$s, repeated every ms, using a mechanical chopper. For recording 2D images with good time resolution, in front of the CCD camera an electro-optical modulator is placed as a fast shutter, permitting recording with exposure times of down to 10 ns. The shutter is triggered after a variable delay with respect to the beginning of the sample illumination. Thus the evolution of the reflected light field can be followed in 2D when varying the trigger delay.
Optical "objects" (e.g. spatial intensity variations) in the light field move according to the field gradients in phase and intensity . Thus, as long as there are well-defined gradients in the field, the time evolution of the 2D field is completely reproducible in each successive illumination. Consequently, averaging over several illuminations is possible to increase signal/noise ratio. The CCD camera in the usual TV-format reads out a frame every 40 ms i.e. it averages 40 illuminations. Because of the finite extinction ratio of the electro-optical modulator (300) we used a shutter aperture time of 50 ns. The latter can be chosen by the length of a Blumlein line driving the electro-optical modulator.
Fig. 1 shows the structures observed at $`\lambda `$ = 860 nm, i.e. $`\approx `$ 10 nm in wavelength above the band gap and 5 nm above the exciton line center. Fig. 1b) shows a bright soliton (dark in reflection) on an unswitched background, 1c) shows a dark soliton (bright in reflection) on a switched background. For clarity 1a) shows a switched area without soliton.
This switched area is surrounded by a switching front which (when raising the intensity of the illumination with a Gaussian laser beam) initially travels outward from the center of the beam until it stops at an intensity contour given by the Maxwellian intensity .
The bright soliton exists on a background corresponding to the lower branch of the (plane-wave) bistability characteristic and the dark soliton on a background corresponding to the upper branch. The solitons Fig. 1b),c) develop above the illumination intensity which switches to the upper branch.
Figs 2,3,4 give โstreak-cameraโ recordings of the formation of the structures Fig. 1. Formation of the switched domain 1a) is shown in Fig. 2. At time t $``$ 2 $`\mu `$s the resonator switches in the center of the laser beam and a switching front travels rapidly outward. It stops, because it reaches the Maxwellian intensity contour and the domain retains its size until the reduction of illumination shrinks the domain size and finally switches the resonator back to the lower branch.
The development of the bright soliton Fig. 1b) is given in Fig. 3. The resonator switches at t $`\approx `$ 1.5 $`\mu `$s and the switched-up domain develops as in Fig. 2. The incident light intensity here reaches a higher value than in Fig. 2, which apparently makes the switched-up state modulationally unstable. The consequence is a subsequent shrinking of the switched-up domain to a small size which is stable.
We note several features of the stable structure which show its soliton properties:
1) Stable diameter in time;
2) Size of 10 $`\mu `$m as expected from model calculations ;
3) Surrounded by characteristic rings due to the "oscillating tails" of the switching front ;
4) Robustness: From t = 6.3 $`\mu `$s the incident light intensity drops, until the structure switches off at t = 6.8 $`\mu `$s. In spite of this change of illumination, the brightness of the structure remains constant. Such immunity against external parameter variation would seem characteristic for a nonlinearly stabilized structure;
5) We note the fast switch-off of the structure at t = 6.8 $`\mu `$s. The structure disappears abruptly at a certain intensity, in a manner suggesting a subcritical process. This allows to conclude that the nonlinearity is โfastโ (electronic). A slow (e.g. thermal) nonlinearity would not allow such abrupt disappearance of the structure.
Due to the properties 1) to 5) we can identify the structure with a bright soliton based on a fast nonlinearity.
The development of the dark soliton structure (Fig. 1c)) is shown in Fig. 4. The resonator switches at t $`\approx `$ 0.7 $`\mu `$s, after which the illumination intensity is further substantially increased. Again, after a long transient the bright structure forms slightly off the beam center (compare Fig. 1c)). Surprisingly the reflected light intensity of the structure is almost two times higher than the incident light intensity, indicating that this structure collects light from its surrounding (see the dark surrounding of the bright spot in Fig. 1c)), as one might expect for a nonlinear structure.
The reduction of the brightness of the structure apparent in Fig. 4a),b) in the time 3.5 $`\mu `$s to 6.5 $`\mu `$s does not show a damping or disappearance of the structure but is rather due to a motion of the soliton in a plane perpendicular to the paper plane. Fig. 5 shows three 2D snapshots within the time 3.5 $`\mu `$s to 6.5 $`\mu `$s clearly showing this motion. This bright structure on a switched background moves like a stable particle and is thus a dark soliton .
The long formation time of these solitons appears related to "critical slowing". We found that the formation time for the solitons is reduced by a factor of 10 upon increase of the illuminating intensity by only 10 $`\%`$.
Fig. 6 gives the field dynamics for fixed detuning $`\delta \lambda `$ of the laser frequency from the resonator resonance frequency and increasing light intensities. Fig. 6a) corresponds to the bright soliton Fig. 1b) for small intensity. Fig. 6b) shows an unstable (pulsing) dark soliton at medium intensity and 6c) gives the case of the stable dark soliton as in Fig. 1c) for high intensity. Note that the transient period between switching of the resonator (t = 1.3 $`\mu `$s for 6b), 6c)) and the onset of the dark soliton is here reduced compared to Figs 3,4 due to the higher intensity (critical slowing). The two minima of the reflected light before the onset of the soliton in Fig. 4 a) which are related to spatial structure (Fig. 4b)) also appear in Fig. 6b),c).
In the Figs 1-6 we have shown the cases where only one soliton exists. In general, at higher intensities / at later times in the illumination, several solitons can exist. Fig. 7 shows as an illustration 1, 2, 3 solitons existing simultaneously. We have observed up to 5 simultaneous solitons.
Concluding, we find at wavelengths corresponding to a wavelength detuning of about one exciton linewidth above the exciton line center and at high (negative) resonator detuning the existence of bright and dark resonator solitons as predicted in and respectively. The bright solitons appear stable, while the dark solitons are observed to move in space and to pulse, which makes them appear as less stable objects than the bright solitons.
Acknowledgements
This work was supported by ESPRIT LTR project PIANOS. |
no-problem/0001/quant-ph0001114.html | ar5iv | text | # Abstract
### Abstract
Consider an infinite collection of qubits arranged in a line, such that every pair of nearest neighbors is entangled: an "entangled chain." In this paper we consider entangled chains with translational invariance and ask how large one can make the nearest neighbor entanglement. We find that it is possible to achieve an entanglement of formation equal to 0.285 ebits between each pair of nearest neighbors, and that this is the best one can do under certain assumptions about the state of the chain.
PACS numbers: 03.67.-a, 03.65.Bz, 89.70.+c
## 1 Introduction: Example of an entangled chain
Quantum entanglement has been studied for decades, first because of its importance in the foundations of quantum mechanics, and more recently for its potential technological applications as exemplified by a quantum computer . The new focus has led to a quantitative theory of entanglement that, among other things, allows us to express analytically the degree of entanglement between simple systems . This development makes it possible to pose new quantitative questions about entanglement that could not have been raised before and that promise fresh perspectives on this remarkable phenomenon. In this paper I would like to raise and partially answer such a question, concerning the extent to which a collection of binary quantum objects (qubits) can be linked to each other by entanglement.
Imagine an infinite string of qubits, such as two-level atoms or the spins of spin-1/2 particles. Let us label the locations of the qubits with an integer $`j`$ that runs from negative infinity to positive infinity. I wish to consider special states of the string, satisfying the following two conditions: (i) each qubit is entangled with its nearest neighbors; (ii) the state is invariant under all translations, that is, under transformations that shift each qubit from its original position $`j`$ to position $`j+n`$ for some integer $`n`$. Let us call a string of qubits satisfying the first condition an entangled chain, and if it also satisfies the second condition, a uniform entangled chain. Note that each qubit need not be entangled with any qubits other than its two nearest neighbors. In this respect an entangled chain is like an ordinary chain, whose links are directly connected only to two neighboring links. By virtue of the translational invariance, the degree of entanglement between nearest neighbors in a uniform entangled chain must be constant throughout the chain. The main question I wish to pose is this: How large can the nearest-neighbor entanglement be in a uniform entangled chain?
This problem belongs to a more general line of inquiry about how entanglement can be shared among more than two objects. Some work on this subject has been done in the context of the cloning of entanglement \[6–11\], where one finds limits on the extent to which entanglement can be copied. In a different setting not particularly involving cloning, one finds an inequality bounding the amount of entanglement that a single qubit can have with each of two other qubits. One can imagine more general "laws of entanglement sharing" that apply to a broad range of configurations of quantum objects. The present work provides further data that might be used to discover and formulate such laws. The specific problem addressed in this paper could also prove relevant for analyzing models of quantum computers in which qubits are arranged along a line, as in an ion trap. The infinite chain can be thought of as an idealization of such a computer. Moreover, the analysis of our question turns out to be interesting in its own right, being related, as we will see, to a familiar problem in many-body physics.
To make the question precise we need a measure of entanglement between two qubits. We will use a reasonably simple and well-justified measure called the "concurrence," which is defined as follows.
Consider first the case of pure states. A general pure state of two qubits can be written as
$$|\psi =\alpha |00+\beta |01+\gamma |10+\delta |11.$$
(1)
One can verify that such a state is factorizable into single-qubit states, that is, it is unentangled, if and only if $`\alpha \delta =\beta \gamma `$. The quantity $`C=2|\alpha \delta -\beta \gamma |`$, which ranges from 0 to 1, is thus a plausible measure of the degree of entanglement. We take this expression as the definition of concurrence for a pure state of two qubits. For mixed states, we define the concurrence to be the greatest convex function on the set of density matrices that gives the correct values for pure states.
Though this statement defines concurrence, it does not tell us how to compute it for mixed states. Remarkably, there exists an explicit formula for the concurrence of an arbitrary mixed state of two qubits: Let $`\rho `$ be the density matrix of the mixed state, which we imagine expressed in the standard basis $`\{|00,|01,|10,|11\}`$. Let $`\stackrel{~}{\rho }`$, the "spin-flipped" density matrix, be $`(\sigma _y\otimes \sigma _y)\rho ^{*}(\sigma _y\otimes \sigma _y)`$, where the asterisk denotes complex conjugation in the standard basis and $`\sigma _y`$ is the matrix $`\left(\begin{array}{cc}0& -i\\ i& 0\end{array}\right)`$. Finally, let $`\lambda _1,\lambda _2,\lambda _3,\lambda _4`$ be the square roots of the eigenvalues of $`\rho \stackrel{~}{\rho }`$ in descending order (one can show that these eigenvalues are all real and non-negative). Then the concurrence of $`\rho `$ is given by the formula
$$C(\rho )=\mathrm{max}\{\lambda _1-\lambda _2-\lambda _3-\lambda _4,0\}.$$
(2)
The best justification for using concurrence as a measure of entanglement comes from a theorem showing that concurrence is a monotonically increasing function of the "entanglement of formation," which quantifies the non-local resources needed to create the given state.<sup>1</sup><sup>1</sup>1One can define the entanglement of formation as follows. Let $`\rho `$ be a mixed state of a pair of quantum objects, to be shared between two separated observers who can communicate with each other only via classical signals. The entanglement of formation of $`\rho `$ is the asymptotic number of singlet states the observers need, per pair, in order to create a large number of pairs in pure states whose average density matrix is $`\rho `$. (This is conceptually different from the regularized entanglement of formation, which measures the cost of creating many copies of the mixed state $`\rho `$. However, it is conceivable that the two quantities are identical.) Entanglement of formation is conventionally measured in "ebits," and for a pair of binary quantum objects it takes values ranging from 0 to 1 ebit. As mentioned above, the values of $`C`$ range from zero to one: an unentangled state has $`C=0`$, and a completely entangled state such as the singlet state $`\frac{1}{\sqrt{2}}(|01-|10)`$ has $`C=1`$. Our problem is to find the greatest possible nearest-neighbor concurrence of a uniform entangled chain. At the end of the calculation we can easily re-express our results in terms of entanglement of formation.
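As an aside not in the original paper, the spin-flip recipe above translates directly into a few lines of NumPy. The sketch below implements Eq. (2); the function name `concurrence` is ours:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix given in the
    standard basis {|00>, |01>, |10>, |11>} (Eq. 2)."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)                      # sigma_y (x) sigma_y
    rho_tilde = yy @ rho.conj() @ yy          # "spin-flipped" density matrix
    # Eigenvalues of rho * rho_tilde are real and non-negative in theory;
    # np.abs guards against tiny negative round-off before the square root.
    lam = np.sqrt(np.sort(np.abs(np.linalg.eigvals(rho @ rho_tilde)))[::-1])
    return max(lam[0] - lam[1] - lam[2] - lam[3], 0.0)
```

For a singlet pair this returns 1, and for any product state it returns 0.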
Another issue that needs to be addressed in formulating our question is the meaning of the word "state" as applied to an infinite string of qubits; in particular we need to discuss how such a state is to be normalized. Formally, we can define a state of our system as follows. A state $`w`$ of the infinite string is a function that assigns to every finite set $`S`$ of integers a normalized (i.e., trace one) density matrix $`w(S)`$, which we interpret to be the density matrix of the qubits specified by the set $`S`$; moreover the function $`w`$ must be such that if $`S_2`$ is a subset of $`S_1`$, then $`w(S_2)`$ is obtained from $`w(S_1)`$ by tracing over the qubits whose labels are not in $`S_2`$. This formal definition is perfectly sensible but somewhat bulky in practice. In what follows we will usually specify states of the string more informally when it is clear from the informal specification how to generate the density matrix of any finite subset of the string. We will also usually use the symbol $`\rho `$ instead of $`w(S)`$ to denote the density matrix of a pair of nearest neighbors.
It is not immediately obvious that there exists even a single example of an entangled chain. Note, for example, that the limit of a Schrödinger cat state (an equal superposition of an infinite string of zeros with an infinite string of ones) is not an entangled chain. In the cat state, the reduced density matrix of a pair of neighboring qubits is an incoherent mixture of $`|00`$ and $`|11`$, which exhibits a classical correlation but no entanglement. (Note, by the way, that our informal statement "an equal superposition of an infinite string of zeros with an infinite string of ones," specifies exactly the same state as if we had taken an incoherent mixture of these two infinite strings: no finite set of qubits contains information about the phase of the superposition.)
We can, however, construct a simple example of an entangled chain in the following way. Let $`w_0`$ be the state such that for each even integer $`j`$, the qubits at sites $`j`$ and $`j+1`$ are entangled with each other in a singlet state. We can write this state informally as<sup>2</sup><sup>2</sup>2Alternatively, we can characterize the state $`w_0`$ according to our formal definition by specifying the density matrix of each finite collection of qubits: Let $`S`$ define such a collection. Then for each even integer $`j`$ such that both $`j`$ and $`j+1`$ are in $`S`$, the corresponding pair of qubits is in the singlet state; all other qubits (i.e., the unpaired ones) are in the completely mixed state $`\left(\begin{array}{cc}\frac{1}{2}& 0\\ 0& \frac{1}{2}\end{array}\right)`$, and the full density matrix $`w(S)`$ is obtained by taking the tensor product of the pair states and single-qubit states.
$$\mathrm{}\left(\frac{|0_{-2}|1_{-1}-|1_{-2}|0_{-1}}{\sqrt{2}}\right)\left(\frac{|0_0|1_1-|1_0|0_1}{\sqrt{2}}\right)\left(\frac{|0_2|1_3-|1_2|0_3}{\sqrt{2}}\right)\mathrm{}.$$
(3)
The state $`w_0`$ is not an entangled chain because the qubits are not entangled with both of their nearest neighbors: qubits at even-numbered locations are not entangled with their neighbors on the left. However, if we let $`w_1`$ be the state obtained by translating $`w_0`$ one unit to the left (or to the right; the result is the same), and let $`w`$ be an equal mixture of $`w_0`$ and $`w_1`$, that is, $`w=(w_0+w_1)/2`$, then $`w`$ is a uniform entangled chain, as we now show.
That $`w`$ is translationally invariant follows from the fact that both $`w_0`$ and $`w_1`$ are invariant under even displacements and that they transform into each other under odd displacements. Thus we need only show that neighboring qubits are entangled. For definiteness let us consider the qubits in locations $`j=1`$ and $`j=2`$. In the state $`w_0`$, the density matrix for these two qubits is
$$\rho ^{(0)}=\left(\begin{array}{cccc}\frac{1}{4}& 0& 0& 0\\ 0& \frac{1}{4}& 0& 0\\ 0& 0& \frac{1}{4}& 0\\ 0& 0& 0& \frac{1}{4}\end{array}\right),$$
(4)
that is, the completely mixed state. (The two qubits are from distinct singlet pairs.) The density matrix of the same two qubits in the state $`w_1`$ is
$$\rho ^{(1)}=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& \frac{1}{2}& -\frac{1}{2}& 0\\ 0& -\frac{1}{2}& \frac{1}{2}& 0\\ 0& 0& 0& 0\end{array}\right),$$
(5)
that is, the singlet state. In the state $`w`$, the qubits are in an equal mixture of these two density matrices, which is
$$\rho =(\rho ^{(0)}+\rho ^{(1)})/2=\left(\begin{array}{cccc}\frac{1}{8}& 0& 0& 0\\ 0& \frac{3}{8}& -\frac{1}{4}& 0\\ 0& -\frac{1}{4}& \frac{3}{8}& 0\\ 0& 0& 0& \frac{1}{8}\end{array}\right).$$
(6)
It is easy to compute the concurrence of this density matrix, because $`\stackrel{~}{\rho }`$ is the same as $`\rho `$ itself. The values $`\lambda _i`$ in this case are the eigenvalues of $`\rho `$, which are $`\frac{5}{8},\frac{1}{8},\frac{1}{8},\frac{1}{8}`$. The concurrence is therefore $`C=\frac{5}{8}-\frac{1}{8}-\frac{1}{8}-\frac{1}{8}=\frac{1}{4}`$. This same value of the concurrence applies to any other pair of neighboring qubits in the string because of the translational invariance. The fact that the concurrence is non-zero implies that neighboring qubits are entangled, so that the state $`w`$ is indeed an entangled chain. For uniform entangled chains, we will call the common value of $`C`$ for neighboring qubits the concurrence of the chain. Thus in the above example the concurrence of the chain is $`\frac{1}{4}`$.
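As a quick numerical cross-check (ours, not part of the paper), one can build the mixture of the two pair density matrices and read off the eigenvalues; since the spin-flipped matrix equals $`\rho `$ here, the $`\lambda _i`$ are just the eigenvalues of $`\rho `$. We use the singlet form of $`\rho ^{(1)}`$, with off-diagonal entries $`-1/2`$:

```python
import numpy as np

rho0 = np.eye(4) / 4                          # Eq. (4): completely mixed pair
rho1 = np.zeros((4, 4))
rho1[1:3, 1:3] = [[0.5, -0.5], [-0.5, 0.5]]   # singlet pair
rho = (rho0 + rho1) / 2                       # Eq. (6)

lam = np.sort(np.linalg.eigvalsh(rho))[::-1]  # here rho_tilde = rho
C = max(lam[0] - lam[1] - lam[2] - lam[3], 0.0)
# lam = [0.625, 0.125, 0.125, 0.125], so C = 0.25
```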
As we will see, it is possible to find uniform entangled chains with greater concurrence. Let $`C_{\mathrm{max}}`$ be the least upper bound on the concurrences of all uniform entangled chains. We would like to find this number. We know that $`C_{\mathrm{max}}`$ is no larger than 1, since concurrence never exceeds 1. In fact we can quickly get a somewhat better upper bound, using the following fact: when a qubit is entangled with each of two other qubits, the sum of the squares of the two concurrences is less than or equal to one. In a uniform entangled chain, each qubit must be equally entangled with its two nearest neighbors; so the concurrence with each of them cannot exceed $`1/\sqrt{2}`$. Thus, so far what we know about $`C_{\mathrm{max}}`$ is this:
$$1/4\le C_{\mathrm{max}}\le 1/\sqrt{2}.$$
(7)
This is still a wide range. Most of the rest of this paper is devoted to getting a better fix on $`C_{\mathrm{max}}`$ by explicitly constructing entangled chains.
## 2 Building chains out of blocks
Using the above example as a model, we will use the following construction to generate other uniform entangled chains. (1) Break the string into blocks of $`n`$ qubits, and define a state $`w_0`$ in which each block is in the same $`n`$-qubit state $`|\xi `$; that is, $`w_0`$ is a tensor product of an infinite number of copies of $`|\xi `$. (In the above example $`n`$ had the value 2 and $`|\xi `$ was the singlet state.) (2) Define $`w_k`$, $`k=1,\mathrm{},n-1`$, to be the state obtained by shifting $`w_0`$ to the left by $`k`$ units. (3) Let the final state $`w`$ be the average $`(w_0+\mathrm{}+w_{n-1})/n`$. A state generated in this way will automatically be translationally invariant. In order that the chain have a large concurrence, we will need to choose the state $`|\xi `$ carefully. Finding an optimal $`|\xi `$ and proving that it is optimal may turn out to be a difficult problem. In this paper I will choose $`|\xi `$ according to a strategy that makes sense and may well be optimal but is not proven to be so.
In the final state $`w`$, each pair of neighboring qubits has the same density matrix because of the translational invariance. Our basic strategy for choosing $`|\xi `$, described below, is designed to give this neighboring-pair density matrix the following form:
$$\rho =\left(\begin{array}{cccc}\rho _{11}& 0& 0& 0\\ 0& \rho _{22}& \rho _{23}& 0\\ 0& \rho _{23}^{}& \rho _{33}& 0\\ 0& 0& 0& 0\end{array}\right).$$
(8)
(The ordering of the four basis states is the one given above: $`|00,|01,|10,|11`$.) One can show that the concurrence of such a density matrix is simply
$$C=2\left|\rho _{23}\right|.$$
(9)
Besides making the concurrence easy to compute, the form (8) seems a reasonable goal because it picks out a specific kind of entanglement, namely, a coherent superposition of $`|01`$ and $`|10`$, and limits the ways in which this entanglement can be contaminated or diluted by being mixed with other states. In particular, the form (8) does not allow contamination by an orthogonal entangled state of the form $`\alpha |00+\beta |11`$ (orthogonal entangled states when mixed together tend to cancel each other's entanglement) or by the combination of the two unentangled states $`|00`$ and $`|11`$. If the component $`\rho _{44}`$ were not equal to zero and the form were otherwise unchanged, the concurrence would be $`C=\mathrm{max}\{2(|\rho _{23}|-\sqrt{\rho _{11}\rho _{44}}),0\}`$; so it is good to make either $`\rho _{11}`$ or $`\rho _{44}`$ equal to zero if this can be done without significantly reducing $`\rho _{23}`$. We have chosen to make $`\rho _{44}`$ equal to zero.
As it happens, one can guarantee the form (8) for the density matrix of neighboring qubits by imposing the following three conditions on the $`n`$-qubit state $`|\xi `$: (i) $`|\xi `$ is an eigenstate of the operator that counts the number of qubits in the state $`|1`$. That is, each basis state represented in $`|\xi `$ must have the same number $`p`$ of qubits in the state $`|1`$. (ii) $`|\xi `$ has no component in which two neighboring qubits are both in the state $`|1`$. (iii) The $`n`$th qubit is in the state $`|0`$. (This last condition effectively extends condition (ii) to the boundary between successive blocks.) Condition (i) guarantees that the density matrix $`\rho `$ for a pair of nearest neighbors is block diagonal, each block corresponding to a fixed number of 1's in the pair. That is, there are two single-element blocks corresponding to $`|00`$ and $`|11`$, and a 2×2 block corresponding to $`|01`$ and $`|10`$. Conditions (ii) and (iii) guarantee that $`\rho _{44}`$ is zero. The conditions thus give us the form (8). We impose these three conditions because they seem likely to give the best results; we do not prove that they are optimal.
To illustrate the three conditions and how they can be used, let us consider in detail the case where the block size $`n`$ is 5 and the number $`p`$ of 1's in each block is 2. (Our strategy does not specify the value of either $`n`$ or $`p`$; these values will ultimately have to be determined by explicit maximization.) In this case, the only basis states our conditions allow in the construction of $`|\xi `$ are $`|10100`$, $`|10010`$, and $`|01010`$. Any other basis state either would have a different number of 1's or would violate one of conditions (ii) and (iii). Thus we write
$$|\xi =a_{13}|10100+a_{14}|10010+a_{24}|01010.$$
(10)
The subscripts in $`a_{ij}`$ indicate which qubits are in the state $`|1`$. The state $`w`$ of the infinite string is derived from $`|\xi `$ as described above. We now want to use Eq. (10) to write the density matrix $`\rho `$ of a pair of nearest neighbors when the infinite string is in the state $`w`$. For definiteness let us take the two qubits of interest to be in locations $`j=1`$ and $`j=2`$, and let us take the 5-qubit blocks in the state $`w_0`$ to be given by $`j=1,\mathrm{},5`$, $`j=6,\mathrm{},10`$, and so on. Our final density matrix $`\rho `$ will be an equal mixture of five density matrices, corresponding to the five different displacements of $`w_0`$ (including the null displacement).
For $`w_0`$ itself, the qubits at $`j=1`$ and $`j=2`$ are the first two qubits of $`|\xi `$. The density matrix for these two qubits, obtained by tracing out the other three qubits of the block, is
$$\rho ^{(0)}=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& |a_{24}|^2& a_{14}^{*}a_{24}& 0\\ 0& a_{14}a_{24}^{*}& |a_{13}|^2+|a_{14}|^2& 0\\ 0& 0& 0& 0\end{array}\right).$$
(11)
For $`w_1`$, the qubits at $`j=1`$ and $`j=2`$ are now the second and third qubits of the block, since the block has been shifted to the left. Thus we trace over the first, fourth, and fifth qubits to obtain
$$\rho ^{(1)}=\left(\begin{array}{cccc}|a_{14}|^2& 0& 0& 0\\ 0& |a_{13}|^2& 0& 0\\ 0& 0& |a_{24}|^2& 0\\ 0& 0& 0& 0\end{array}\right).$$
(12)
In a similar way one can find $`\rho ^{(2)}`$ and $`\rho ^{(3)}`$:
$$\rho ^{(2)}=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& |a_{14}|^2+|a_{24}|^2& a_{13}^{*}a_{14}& 0\\ 0& a_{13}a_{14}^{*}& |a_{13}|^2& 0\\ 0& 0& 0& 0\end{array}\right);\rho ^{(3)}=\left(\begin{array}{cccc}|a_{13}|^2& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& |a_{14}|^2+|a_{24}|^2& 0\\ 0& 0& 0& 0\end{array}\right).$$
The density matrix corresponding to $`w_4`$ is different in that the two relevant qubits now come from different blocks: the qubit at $`j=1`$ is the last qubit of one block and the qubit at $`j=2`$ is the first qubit of the next block. The corresponding density matrix is thus the tensor product of two single-qubit states:
$$\rho ^{(4)}=\left(\begin{array}{cc}1& 0\\ 0& 0\end{array}\right)\left(\begin{array}{cc}|a_{24}|^2& 0\\ 0& |a_{13}|^2+|a_{14}|^2\end{array}\right)=\left(\begin{array}{cccc}|a_{24}|^2& 0& 0& 0\\ 0& |a_{13}|^2+|a_{14}|^2& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right).$$
To get the neighboring-pair density matrix corresponding to our final state $`w`$, we average the above five density matrices, with the following simple result:
$$\rho =\frac{1}{5}\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 2& x& 0\\ 0& x^{}& 2& 0\\ 0& 0& 0& 0\end{array}\right),$$
(13)
where
$$x=a_{13}^{*}a_{14}+a_{14}^{*}a_{24}.$$
(14)
According to Eq. (9), the concurrence of the pair is
$$C=\frac{2}{5}\left|a_{13}^{*}a_{14}+a_{14}^{*}a_{24}\right|.$$
(15)
Continuing with this example ($`n=5`$ and $`p=2`$), let us find out what values we should choose for $`a_{13}`$, $`a_{14}`$, and $`a_{24}`$ in order to maximize $`C`$. First, it is clear that we cannot go wrong by taking each $`a_{ij}`$ to be real and non-negative (any complex phases could only reduce the absolute value in Eq. (15)), so let us restrict our attention to such values. To take into account the normalization condition, we use a Lagrange multiplier $`\gamma /2`$ and extremize the quantity
$$a_{13}a_{14}+a_{14}a_{24}-(\gamma /2)(a_{13}^2+a_{14}^2+a_{24}^2).$$
(16)
Differentiating, we arrive at three linear equations expressed by the matrix equation
$$\left(\begin{array}{ccc}0& 1& 0\\ 1& 0& 1\\ 0& 1& 0\end{array}\right)\left(\begin{array}{c}a_{13}\\ a_{14}\\ a_{24}\end{array}\right)=\gamma \left(\begin{array}{c}a_{13}\\ a_{14}\\ a_{24}\end{array}\right).$$
(17)
Of the three eigenvalues, only one allows an eigenvector with non-negative components, namely, $`\gamma =\sqrt{2}`$. The normalized eigenvector is
$$\left(\begin{array}{c}a_{13}\\ a_{14}\\ a_{24}\end{array}\right)=\left(\begin{array}{c}\frac{1}{2}\\ \frac{1}{\sqrt{2}}\\ \frac{1}{2}\end{array}\right),$$
(18)
which gives $`C=\sqrt{2}/5=0.283`$. This is greater than the value 0.25 that we obtained in our earlier example.
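This small eigenproblem is easy to verify numerically; the sketch below (ours, not the paper's) diagonalizes the matrix of Eq. (17) and evaluates Eq. (15) at the optimal eigenvector:

```python
import numpy as np

M = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])                 # matrix of Eq. (17)
vals, vecs = np.linalg.eigh(M)
a = np.abs(vecs[:, np.argmax(vals)])         # eigenvector for gamma = sqrt(2)
C = (2 / 5) * (a[0] * a[1] + a[1] * a[2])    # Eq. (15), real coefficients
# a = (1/2, 1/sqrt(2), 1/2) and C = sqrt(2)/5 ≈ 0.2828
```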
Before generalizing this calculation to arbitrary values of $`n`$ and $`p`$, we adopt some terminology that will simplify the discussion. Let us think of the qubits as "sites," and let us call the two states of each qubit "occupied" ($`|1`$) and "unoccupied" ($`|0`$). The states $`|\xi `$ that we are considering have a fixed number $`p`$ of occupied sites in a string of $`n`$ sites; so we can regard the system as a collection of $`p`$ "particles" in a one-dimensional lattice of length $`n`$. Condition (ii) requires that two particles never be in adjacent sites; it is as if each particle is an extended object, taking up two lattice sites, and two particles cannot overlap. Thus the number of particles is limited by the inequality $`2p\le n`$.
## 3 Generalization to blocks of arbitrary size
We now turn to the calculation of the optimal concurrence for general $`n`$ and $`p`$ assuming our conditions are satisfied. It will turn out that this calculation can be done exactly.
For any values of $`n`$ and $`p`$, the most general form of $`|\xi `$ consistent with condition (i) is
$$|\xi =\underset{j_1<\mathrm{}<j_p}{}a_{j_1,\mathrm{},j_p}|j_1,\mathrm{},j_p,$$
(19)
where $`|j_1,\mathrm{},j_p`$ is the state of $`n`$ sites $`j=1,\mathrm{},n`$ in which sites $`j_1,\mathrm{},j_p`$ are occupied and the rest are unoccupied. Because of conditions (ii) and (iii), $`a_{j_1,\mathrm{},j_p}`$ must be zero if two of the indices differ by 1 or if $`j_p`$ has the value $`n`$. The coefficients in Eq. (19) satisfy the normalization condition
$$\underset{j_1<\mathrm{}<j_p}{}|a_{j_1,\mathrm{},j_p}|^2=1.$$
(20)
Going through the same steps as in the above example, we find that in the state $`w`$ the density matrix of any pair of neighboring sites is
$$\rho =\frac{1}{n}\left(\begin{array}{cccc}n-2p& 0& 0& 0\\ 0& p& y& 0\\ 0& y^{*}& p& 0\\ 0& 0& 0& 0\end{array}\right),$$
(21)
where
$$y=\underset{q=1}{\overset{p}{}}\underset{j_1<\mathrm{}<j_p}{}\underset{j_1^{\prime }<\mathrm{}<j_p^{\prime }}{}\left[a_{j_1,\mathrm{},j_p}^{*}a_{j_1^{\prime },\mathrm{},j_p^{\prime }}\delta _{j_q^{\prime },j_q+1}\underset{r\ne q}{}\delta _{j_r^{\prime },j_r}\right].$$
(22)
Here $`\delta `$ is the Kronecker delta, and we define $`a_{j_1,\mathrm{},j_p}`$ to be zero if any two of the indices are equal. In words, $`y`$ is constructed as follows: Let two coefficients $`a_{j_1,\mathrm{},j_p}`$ and $`a_{j_1^{\prime },\mathrm{},j_p^{\prime }}`$ be called adjacent if they differ in only one index and if the difference in that index is exactly one; then $`y`$ is the sum of all products of adjacent pairs of coefficients, the coefficient with the smaller value of the special index being complex conjugated in each case. In the above example there were only two such products, $`a_{13}^{*}a_{14}`$ and $`a_{14}^{*}a_{24}`$; hence the form of Eq. (14).
As before, the concurrence of the chain is equal to $`2|\rho _{23}|`$; that is, $`C=(2/n)|y|`$. We want to maximize the concurrence over all possible values of the coefficients that are consistent with conditions (ii) and (iii). These conditions are somewhat awkward to enforce directly: one has to make sure that certain of the coefficients $`a_{j_1,\mathrm{},j_p}`$ are zero. However, this problem is easily circumvented by defining a new set of indices. Let $`k_1=j_1`$, $`k_2=j_2-1`$, $`k_3=j_3-2`$, and so on up to $`k_p=j_p-(p-1)`$, and let $`b_{k_1,\mathrm{},k_p}=a_{j_1,\mathrm{},j_p}`$. The constraints on the new indices $`k_r`$ are simply that $`0<k_1<k_2<\mathrm{}<k_p<n^{\prime }`$, where $`n^{\prime }=n-(p-1)`$. Finally, in place of $`|\xi `$, define a new vector $`|\zeta `$:
$$|\zeta =\underset{k_1<\mathrm{}<k_p}{}b_{k_1,\mathrm{},k_p}|k_1,\mathrm{},k_p,$$
(23)
where $`|k_1,\mathrm{},k_p`$ is the state of a lattice of length $`n^{\prime }-1`$ in which the sites $`k_1,\mathrm{},k_p`$ are occupied. In effect we have removed from the lattice the site lying to the right of each occupied site. Note that our earlier inequality $`2p\le n`$ becomes, in terms of $`n^{\prime }`$, simply $`p\le n^{\prime }-1`$, which reflects the fact that the new lattice has only $`n^{\prime }-1`$ sites. The concurrence is still given by $`C=(2/n)|y|`$, where
$$y=\underset{q=1}{\overset{p}{}}\underset{k_1<\mathrm{}<k_p}{}\underset{k_1^{\prime }<\mathrm{}<k_p^{\prime }}{}\left[b_{k_1,\mathrm{},k_p}^{*}b_{k_1^{\prime },\mathrm{},k_p^{\prime }}\delta _{k_q^{\prime },k_q+1}\underset{r\ne q}{}\delta _{k_r^{\prime },k_r}\right].$$
(24)
We can express $`y`$ more simply by introducing creation and annihilation operators for each site. We associate with site $`k`$ the operators
$$c_k=\left(\begin{array}{cc}0& 1\\ 0& 0\end{array}\right)\mathrm{and}c_k^{\dagger }=\left(\begin{array}{cc}0& 0\\ 1& 0\end{array}\right),$$
(25)
which are represented here in the basis $`\{|0,|1\}`$. In terms of these operators, we can write $`y`$ as
$$y=\zeta |\underset{k=1}{\overset{n^{\prime }-2}{}}c_k^{\dagger }c_{k+1}|\zeta .$$
(26)
Our problem is beginning to resemble the nearest-neighbor tight-binding model for electrons in a one-dimensional lattice. The Hamiltonian for the latter problem (assuming that the spins of the electrons are all in the same state and can therefore be ignored) can be written as<sup>3</sup><sup>3</sup>3In Eq. (27) the operators $`c`$ and $`c^{\dagger }`$ are fermionic, whereas those defined in Eq. (25) are not, because they do not anticommute when they are associated with different sites. We could, however, use our $`c`$'s to define genuinely fermionic operators in terms of which the extremization problem has exactly the same form.
$$H=-\underset{k=1}{\overset{n^{\prime }-2}{}}(c_k^{\dagger }c_{k+1}+c_{k+1}^{\dagger }c_k),$$
(27)
where we have taken the lattice length to be the same as in our problem, namely, $`n^{\prime }-1`$. From Eqs. (26) and (27) we see that $`\zeta |H|\zeta =-2\mathrm{Re}(y)`$. This expectation value is not quite what we need for the concurrence: the concurrence is proportional to the absolute value of $`y`$, not its real part. However, as in our earlier example, for the purpose of maximizing $`C`$ there is no advantage in straying from real, non-negative values of $`b_{k_1,\mathrm{},k_p}`$. If we restrict our attention to such values, then the absolute value of $`y`$ is the same as its real part, and we can write the concurrence as
$$C=-\frac{1}{n}\zeta |H|\zeta .$$
(28)
Thus, maximizing the concurrence amounts to minimizing the expectation value of $`H`$, that is, finding the ground state energy of the tight-binding model, as long as the ground state involves only real and non-negative values of $`b_{k_1,\mathrm{},k_p}`$.
The one-dimensional tight-binding model is in fact easy to solve. Its ground state is the discrete analogue of the ground state of a collection of $`p`$ non-interacting fermions in a one-dimensional box. In our case the "walls" of the box, where the wavefunction goes to zero, are at $`k=0`$ and $`k=n^{\prime }`$, and the ground state $`|\zeta _0`$ is given by the following antisymmetrized product of sine waves:
$$b_{k_1,\mathrm{},k_p}\propto \mathcal{A}\left[\mathrm{sin}\left(\frac{\pi k_p}{n^{\prime }}\right)\mathrm{sin}\left(\frac{2\pi k_{p-1}}{n^{\prime }}\right)\mathrm{}\mathrm{sin}\left(\frac{p\pi k_1}{n^{\prime }}\right)\right].$$
(29)
Here $`๐`$ indicates the operation of antisymmetrizing over the indices $`k_1,\mathrm{},k_p`$. In the range of values we are allowing for these indices, that is, $`0<k_1<k_2<\mathrm{}<k_p<n^{}`$, the coefficients $`b_{k_1,\mathrm{},k_p}`$ are indeed non-negative, so that Eq. (28) is valid.
The ground state energy, from which we can find the concurrence, is simply the sum of the first $`p`$ single-particle eigenvalues of $`H`$. There are exactly $`n^{\prime }-1`$ such eigenvalues, one for each dimension of the single-particle subspace; they are given by
$$E_m=-2\mathrm{cos}\left(\frac{m\pi }{n^{\prime }}\right),m=1,\mathrm{},n^{\prime }-1.$$
(30)
Thus the concurrence is
$$C=-\frac{1}{n}\zeta _0|H|\zeta _0=\frac{2}{n}\underset{m=1}{\overset{p}{}}\mathrm{cos}\left(\frac{m\pi }{n^{\prime }}\right).$$
(31)
Doing the sum is straightforward, with the following result:
$$C=\frac{1}{n}\left[\frac{\mathrm{cos}(p\pi /n^{\prime })-\mathrm{cos}((p+1)\pi /n^{\prime })+\mathrm{cos}(\pi /n^{\prime })-1}{1-\mathrm{cos}(\pi /n^{\prime })}\right].$$
(32)
Recall that $`n^{\prime }=n-p+1`$. Eq. (32) gives the largest value of $`C`$ consistent with our conditions, for fixed values of $`n`$ and $`p`$. Note, for example, that when $`n=5`$ and $`p=2`$, Eq. (32) gives $`C=\sqrt{2}/5`$, just as we found before for this case.
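As a consistency check (ours, not from the paper), the closed form of Eq. (32) can be compared against the direct cosine sum of Eq. (31) for a few values of $`n`$ and $`p`$:

```python
import numpy as np

def C_sum(n, p):
    """Eq. (31): direct sum over the p lowest single-particle modes."""
    n_prime = n - p + 1
    return (2 / n) * np.sum(np.cos(np.arange(1, p + 1) * np.pi / n_prime))

def C_closed(n, p):
    """Eq. (32): closed-form evaluation of the same sum."""
    n_prime = n - p + 1
    c1 = np.cos(np.pi / n_prime)
    num = np.cos(p * np.pi / n_prime) - np.cos((p + 1) * np.pi / n_prime) + c1 - 1
    return num / (n * (1 - c1))

# C_sum(5, 2) and C_closed(5, 2) both give sqrt(2)/5, the earlier example
```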
We still need to optimize over $`n`$ and $`p`$. It is best to make the block size $`n`$ very large (any state $`w`$ that is possible with block size $`n`$ is also allowed by block size $`2n`$), so we take the limit as $`n`$ goes to infinity. Let $`\alpha `$ be the density of occupied sites, that is, $`\alpha =p/n`$, and let $`n`$ approach infinity with $`\alpha `$ held fixed. In this limit, the concurrence becomes
$$C_{\mathrm{lim}}=\frac{2}{\pi }(1-\alpha )\mathrm{sin}\left(\frac{\alpha \pi }{1-\alpha }\right).$$
(33)
Taking the derivative, one finds that $`C_{\mathrm{lim}}`$ is maximized when
$$\mathrm{tan}\left(\frac{\alpha \pi }{1-\alpha }\right)=\frac{\pi }{1-\alpha },$$
(34)
which happens at $`\alpha =0.300844`$, where $`C_{\mathrm{lim}}=0.434467`$. This is the highest value of concurrence that is consistent with our method of constructing the state of the chain and with our three conditions on $`|\xi `$. Note that it is considerably larger than what we got in our first example, in which a string of singlets was mixed with a shifted version of the same string; one might call this earlier construction the "bicycle chain" state. Unlike the bicycle chain state, our best state breaks the symmetry between the basis states $`|0`$ and $`|1`$: the fraction of qubits in the state $`|1`$ is about 30% rather than 50%. Of course the entanglement would be just as large if the roles of $`|1`$ and $`|0`$ were reversed.
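The maximization over $`\alpha `$ is one-dimensional and can be checked numerically; the sketch below (our check, not the paper's code) scans a fine grid and reproduces the quoted optimum:

```python
import numpy as np

def C_lim(alpha):
    """Eq. (33): concurrence in the limit of infinite block size."""
    return (2 / np.pi) * (1 - alpha) * np.sin(alpha * np.pi / (1 - alpha))

alpha = np.linspace(0.01, 0.49, 480001)   # grid of occupation densities
c = C_lim(alpha)
i = np.argmax(c)
# alpha[i] ≈ 0.300844 and c[i] ≈ 0.434467, matching the text
```

At the maximum the grid point also satisfies the stationarity condition of Eq. (34), tan(απ/(1−α)) = π/(1−α).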
It is interesting to ask what value of entanglement of formation the above value of concurrence corresponds to. As a function of the concurrence, the entanglement of formation is given by
$$E_f=h\left(\frac{1+\sqrt{1-C^2}}{2}\right),$$
(35)
where $`h`$ is the binary entropy function $`h(x)=-[x\mathrm{log}_2x+(1-x)\mathrm{log}_2(1-x)]`$. For the above value of concurrence, one finds that the entanglement of formation is $`E_f=0.284934`$ ebits. (For the bicycle chain state, the entanglement of formation between neighboring pairs is only 0.118 ebits.)
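Numerically (our sketch), Eq. (35) converts concurrences to ebits:

```python
import numpy as np

def h(x):
    """Binary entropy in bits, for 0 < x < 1 (h(1/2) = 1)."""
    return -(x * np.log2(x) + (1 - x) * np.log2(1 - x))

def E_f(C):
    """Eq. (35): entanglement of formation (ebits) for concurrence 0 < C <= 1."""
    return h((1 + np.sqrt(1 - C**2)) / 2)

# E_f(0.434467) ≈ 0.2849 ebits; E_f(0.25) ≈ 0.1176 ebits (the bicycle chain)
```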
If one can prove that this value is optimal, then it can serve as a reference point for interpreting entanglement values obtained for real physical systems. A string of spin-1/2 particles interacting via the antiferromagnetic Heisenberg interaction, for example, has eigenstates that typically have some non-zero nearest-neighbor entanglement. It would be interesting to find out how the entanglements appearing in these states compare to the maximum possible entanglement for a string of qubits.<sup>4</sup><sup>4</sup>4Since the original version of this paper was written, the question about the antiferromagnetic Heisenberg chain has been answered for the ground state: though the nearest-neighbor concurrence of the ground state is high ($`C=0.386`$), it is not optimal.
Clearly the problem we have analyzed here can be generalized. One can consider a two- or three-dimensional lattice of qubits and ask how entangled the neighboring qubits can be. If we were to analyze these cases using assumptions similar to those we have made in the one-dimensional case, we would again find the problem reducing to a many-body problem, but with less tractable interactions. Assuming that pairwise entanglement tends to diminish as the total entanglement is shared among more particles, one expects the optimal values of $`C`$ and $`E_f`$ to shrink as the dimension of the lattice increases.
I would like to thank Kevin O'Connor for many valuable discussions on distributed entanglement.
# Conduction channels of superconducting quantum point contacts
## I Introduction
An atomic size contact between two metallic electrodes can accommodate only a small number of conduction channels. The contact is thus fully described by a set $`\left\{T_\mathrm{n}\right\}=\{T_1,T_2,\mathrm{\dots },T_\mathrm{N}\}`$ of transmission coefficients which depends both on the chemical properties of the atoms forming the contact and on their geometrical arrangement. Experimentally, contacts consisting of even a single atom have been obtained using both scanning tunneling microscope and break-junction techniques. The total transmission $`D=\sum _{n=1}^{N}T_\mathrm{n}`$ of a contact is deduced from its conductance $`G`$ measured in the normal state, using the Landauer formula $`G=G_0D`$, where $`G_0=2e^2/h`$ is the conductance quantum.
Experiments on a large ensemble of metallic contacts have demonstrated the statistical tendency of atomic-size contacts to adopt configurations leading to some preferred values of conductance. The actual preferred values depend on the metal and on the experimental conditions. However, for many metals, and in particular "simple" ones (like Na, Au, ...) which in bulk are good "free electrons" metals, the smallest contacts have a conductance $`G`$ close to $`G_0`$. Statistical examinations of Al point contacts at low temperatures yield preferred values of conductance at $`G=0.8G_0,1.9G_0,3.2G_0`$ and $`4.5G_0`$, indicating that single-atom contacts of Al have a typical conductance slightly below the conductance quantum. Does this mean that the single-atom contacts correspond to a single, highly transmitted channel $`(T=0.8)`$? This question cannot be answered solely by conductance measurements, which provide no information about the number or transmissions of the individual channels.
However, it has been shown that the full set $`\left\{T_\mathrm{n}\right\}`$ is amenable to measurement in the case of superconducting materials by quantitative comparison of the measured current-voltage ($`IV`$) characteristics with the theory of multiple Andreev reflection (MAR) for a single-channel BCS superconducting contact with arbitrary transmission $`T`$, developed by several groups for zero temperature $`\mathrm{\Theta }=0`$ and zero magnetic field $`H=0`$. Although the typical conductance of single-atom contacts of Al ($`G\approx 0.8G_0`$) is smaller than the maximum possible conductance for one channel, three channels with transmissions such that $`T_1+T_2+T_3\approx 0.8`$ have been found.
Moreover, there exist other physical properties which are not linear with respect to $`\{T_\mathrm{n}\}`$, e.g. shot noise, conductance fluctuations, and thermopower, which also give information about the $`\left\{T_\mathrm{n}\right\}`$ of a contact. Although it is not possible to determine the full set of transmission coefficients from these properties, certain moments of the distribution, and in particular the presence or absence of partially open channels, can be detected. Recent experiments have shown that normal atomic contacts of Al with conductance close to $`G_0`$ contain incompletely open channels, in agreement with the findings in the superconducting state.
In previous work we have shown how the conduction channels of metallic contacts can be constructed from the valence orbitals of the material under investigation. In the case of single-atom contacts the channels are determined by the valence orbitals of the central atom and its local environment. In particular, for Al the channels arise from the contributions of the $`s`$ and $`p`$ valence bands. To the best of our knowledge, it has never been observed in contacts of multivalent metals that a single channel reaches its saturation value of $`T=1`$ before at least a second one has opened. Single-atom contacts of the monovalent metal Au transmit one single channel with a transmission $`0<T\le 1`$ depending on the particular realization of the contact.
From the theoretical point of view no difference between the normal and superconducting states is expected, because ($`i`$) according to the BCS theory the electronic wave functions themselves are not altered when entering the superconducting state, only their occupation, and ($`ii`$) MAR preserves electron-hole symmetry and therefore does not mix channels.
Experimental evidence for the equivalence of the normal and superconducting channels can be gained by tracing the evolution of the $`IV`$ curves from the superconducting to the normal state in an external magnetic field and/or at higher temperatures, and comparing them to the recent calculations by Cuevas et al. of MAR in single-channel contacts at finite temperatures, including pair breaking due to magnetic impurities or a magnetic field.
We show here that the channel ensemble $`\left\{T_\mathrm{n}\right\}`$ of few atom contacts remains unchanged when suppressing the superconducting transport properties gradually by raising the temperature or the magnetic field up to temperatures and magnetic fields approaching the critical temperature and the critical field, respectively. Although it is not possible to measure the full channel ensemble above the critical temperature or field, respectively, no abrupt change is to be expected since the phase transition (as a function of temperature) is of second order. Because the determination of the channel ensemble relies on the quantitative agreement between the theory and the experimental $`IV`$s, we concentrate here on the case of Al point contacts since we expect for this material the BCS theory to fully apply.
## II Transport through a superconducting quantum point contact
The upper left inset of Fig. 1 shows the theoretical $`IV`$s of Cuevas et al. for zero temperature $`\mathrm{\Theta }=0`$ and zero field $`H=0`$. A precise determination of the channel content of any superconducting contact is possible by making use of the fact that the total current $`I(V)`$ results from the contributions of $`N`$ independent channels:
$$I(V)=\underset{n=1}{\overset{N}{}}i(V,T_n).$$
(1)
This equation is valid as long as the scattering matrix whose eigenvalues are given by the transmission coefficients is unitary, i.e. the scattering is time independent. The $`i(V,T)`$ curves present a series of sharp current steps at voltage values $`V=2\mathrm{\Delta }/me`$, where $`m`$ is a positive integer and $`\mathrm{\Delta }`$ is the superconducting gap. Each one of these steps corresponds to a different microscopic process of charge transfer setting in. For example, the well-known non-linearity at $`eV=2\mathrm{\Delta }`$ arises when one electronic charge $`(m=1)`$ is transferred, thus creating two quasiparticles. The energy $`eV`$ delivered by the voltage source must be larger than the energy $`2\mathrm{\Delta }`$ needed to create the two excitations. The common phenomenon behind the other steps is multiple Andreev reflection (MAR) of quasiparticles between the two superconducting reservoirs. The order $`m=2,3,\mathrm{\dots }`$ of a step corresponds to the number of electronic charges transferred in the underlying MAR process. Energy conservation imposes the threshold $`meV\ge 2\mathrm{\Delta }`$ for each process. For low transmission, the contribution to the current arising from the process of order $`m`$ scales as $`T^m`$. The contributions of all processes sum up to the so-called "excess current", the value of which can be determined by extrapolating the linear part of the $`IV`$s well above the gap ($`eV>5\mathrm{\Delta }`$) down to zero voltage. As the transmission of the channel rises from 0 to 1, the higher order processes grow stronger and the current increases progressively. The ensemble of steps is called the "subharmonic gap structure", which was in fact discovered experimentally and has been extensively studied in superconducting weak links and tunnel junctions with a very large number of channels.
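As a small illustration (our own numbers, for an assumed gap of $`\mathrm{\Delta }=0.18`$ meV, roughly the BCS value for Al; this figure is not taken from the paper), the subgap features at $`V=2\mathrm{\Delta }/me`$ crowd together rapidly at low voltage:

```python
# Positions of the subharmonic gap structure, V_m = 2*Delta/(m*e), for an
# assumed gap Delta = 0.18 meV (illustrative only, not a number from this paper).
Delta_meV = 0.18
for m in range(1, 6):
    V_mV = 2.0 * Delta_meV / m   # in mV, since Delta is given in meV
    print(f"m = {m}: V = {V_mV:.3f} mV")
```

The first few steps fall at 0.36, 0.18, 0.12, 0.09, and 0.072 mV, so resolving high-order processes requires both low temperature and low-noise voltage biasing.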
## III Experimental techniques
In order to infer $`\left\{T_\mathrm{n}\right\}`$ from the $`IV`$s, very stable atomic-size contacts are required. For this purpose we have used micro-fabricated mechanically controllable break-junctions. Our samples are 2 $`\mu `$m long, 200 nm thick suspended microbridges, with a $`100`$ $`\mathrm{nm}\times 100`$ $`\mathrm{nm}`$ constriction in the middle (cf. Fig. 2). The bridge is broken at the constriction by controlled bending of the elastic substrate mounted on a three-point bending mechanism. A differential screw (100 $`\mu `$m pitch) driven by a dc motor through a series of reduction gear boxes controls the motion of the pushing rod that bends the substrate (Fig. 2).
The geometry of the bending mechanism is such that a 1 $`\mu `$m displacement of the rod results in a relative motion of the two anchor points of the bridge of around 0.2 nm. This was verified using the exponential dependence of the conductance on the interelectrode distance in the tunnel regime. This very strong dependence was used to calibrate the distance axis to an accuracy of about 20 %. The bending mechanism is anchored to the mixing chamber of a dilution refrigerator within a metallic box shielding microwave frequencies. The bridges are broken at low temperature and under cryogenic vacuum to avoid contamination.
The $`IV`$ characteristics are measured by voltage biasing the sample with $`U=U_{\mathrm{dc}}`$ in series with a calibrated resistor $`R_\mathrm{s}=102.6\mathrm{k}\mathrm{\Omega }`$ and measuring the voltage drop across the sample (giving the $`V`$ signal) and the voltage drop $`V_\mathrm{S}=IR_\mathrm{s}`$ across $`R_\mathrm{s}`$ (giving the $`I`$ signal) via two low-noise differential preamplifiers. The differential conductance is measured by biasing with $`U=U_{\mathrm{dc}}+U_1\mathrm{cos}(2\pi ft)`$ using a lock-in technique at low frequency $`f<200\mathrm{Hz}`$. All lines connecting the sample to the room-temperature electronics are carefully filtered at microwave frequencies by a combination of lossy shielded cables and microfabricated cryogenic filters. The cryostat is equipped with a superconducting solenoid allowing control of the field $`\mu _0H`$ at the position of the sample to within 0.05 mT. After having applied a magnetic field and before taking new $`H=0`$ data we carefully demagnetize the solenoid. The temperature is monitored by a calibrated resistance thermometer thermally anchored to the shielding box. The absolute accuracy of the temperature measurement is about 5%.
## IV Determination of the channel transmissions
Pushing on the substrate leads to a controlled opening of the contact, while the sample is maintained at $`\mathrm{\Theta }<100`$ mK. As found in previous experiments at higher temperatures, the conductance $`G`$ decreases in steps of the order of $`G_0`$, their exact sequence changing from opening to opening (see right inset of Fig. 1). The last conductance value before the contact breaks is usually between 0.5 and 1.5 $`G_0`$.
Figure 1 shows four examples of $`IV`$s obtained at $`\mathrm{\Theta }<50`$ mK on the last plateaux of two different Al samples just before breaking the contact and entering the tunnel regime. The curves differ markedly even though they correspond to contacts having the same conductance $`G\approx 0.9G_0`$ within 10%. The existence of $`IV`$s with the same conductance but different subgap structure implies the presence of more than one channel without further analysis. In particular, the examples shown here demonstrate that although these contacts would correspond to the first maximum in the conductance histogram, they do not transmit a single channel but at least two, with a variety of transmissions.
In Fig. 1 we also show the best least-squares fits obtained using the numerical results of the $`\mathrm{\Theta }=0`$ theory of Cuevas et al. The fitting procedure decomposes the total current into the contributions of eight independent channels. Channels found with transmissions lower than 1% of the total transmission were neglected. When $`N\le 3`$, this fitting procedure allows the determination of each $`T_\mathrm{n}`$ with an accuracy of 1% of the total transmission $`D`$. For contacts containing more channels only the 2 or 3 dominant channels (depending on their absolute values) can be extracted with that accuracy. Details of the fitting procedure are published in Ref. .
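The decomposition behind this fit can be sketched in a few lines. The toy single-channel curve below is NOT the MAR theory of Cuevas et al. (which must be computed numerically); it is only a placeholder family that is nonlinear in $`T`$, which is the property that makes the set $`\{T_\mathrm{n}\}`$ recoverable from a single total $`IV`$ curve by least squares:

```python
import math

def i_single(V, T):
    # Placeholder single-channel I-V: a linear term plus a T^2-weighted
    # "subgap" feature. The real curves come from the MAR theory of
    # Cuevas et al.; only the nonlinearity in T matters for this sketch.
    return T * V + T * T / (1.0 + math.exp(-8.0 * (V - 1.0)))

def total_current(V, transmissions):
    # Eq. (1): independent channels contribute additively.
    return sum(i_single(V, T) for T in transmissions)

def fit_two_channels(voltages, measured):
    # Brute-force least squares over a transmission grid (step 0.02).
    grid = [k / 50.0 for k in range(51)]
    best = None
    for T1 in grid:
        for T2 in grid:
            if T2 > T1:
                continue  # {T1, T2} is an unordered set
            chi2 = sum((I - total_current(V, (T1, T2))) ** 2
                       for V, I in zip(voltages, measured))
            if best is None or chi2 < best[0]:
                best = (chi2, T1, T2)
    return best[1], best[2]

voltages = [0.1 + 0.06 * k for k in range(50)]
measured = [total_current(V, (0.90, 0.10)) for V in voltages]  # synthetic "data"
T1, T2 = fit_two_channels(voltages, measured)  # recovers T1 = 0.90, T2 = 0.10
```

With the real tabulated $`i(V,T)`$ curves in place of the placeholder, the same brute-force (or gradient-based) least-squares inversion yields the channel set; the paper quotes an accuracy of about 1% of $`D`$ when $`N\le 3`$.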
## V IVs of Al point contacts at higher temperatures
The right inset of Fig. 3 displays the evolution of the $`IV`$ of contact (a) from Fig. 1 for three different temperatures below the critical temperature $`\mathrm{\Theta }_\mathrm{c}=1.21`$ K (each trace is offset by $`0.5`$ for clarity). When the temperature is increased the subgap structure is slightly smeared out due to the thermal activation of quasiparticles, and the positions of the current steps are shifted to smaller voltages due to the reduction of the superconducting gap. Although the $`IV`$s are very smooth due to the dominance of an almost perfectly open channel, up to eight MAR processes are distinguishable in the $`\mathrm{d}I/\mathrm{d}V`$. The solid lines are calculated with the same set of transmissions $`\left\{T_1=0.900,T_2=0.108\right\}`$ for the temperatures indicated in the caption, using the BCS dependence of the superconducting gap and the Fermi function at the respective temperature. The quality of the fit does not vary with temperature.
In order to further interpret the data, we plot in Fig. 4 the theoretical zero-temperature single-channel $`\mathrm{d}I/\mathrm{d}V`$s for the same transmissions as in the inset of Fig. 1 in two different ways. In the left panel they are plotted as a function of $`eV/\mathrm{\Delta }`$, showing the different shapes and amplitudes of the individual MAR processes for varying transmission. A small $`T`$ gives rise to narrow conductance spikes, whereas a higher $`T`$ yields round maxima at voltages $`V<2\mathrm{\Delta }/me`$ and pronounced minima close to the sub-multiple values $`V=2\mathrm{\Delta }/me`$. This behavior is clearly visible in the right panel, where the differential conductance is plotted as a function of the generalized order of the MAR process, $`m^{\ast }=2\mathrm{\Delta }/eV`$. For small $`T`$ the onsets of the MAR processes are equidistant in $`m^{\ast }`$ and their amplitudes decrease very rapidly with $`m^{\ast }`$. For high transmission the positions of the maxima are progressively shifted to higher $`m^{\ast }`$ values, while the minima correspond approximately to integer values of $`m^{\ast }`$. The experimental data of Fig. 3 display a mixed character of high-$`T`$ and low-$`T`$ behavior, because of the presence of the two extreme channels with $`T_1=0.90`$ and $`T_2=0.108`$. The value of the temperature-dependent gap $`\mathrm{\Delta }(\mathrm{\Theta })`$ can be determined from the peak of the $`m=1`$ process, while the rest of the $`\mathrm{d}I/\mathrm{d}V`$ is dominated by the widely open channel $`T_1`$. In the left inset of Fig. 3 we plot the position of the $`m=1`$ maximum as a function of the temperature. Also shown are data taken on different contact configurations of the same sample. The evolution of the peak position follows the BCS gap function $`\mathrm{\Delta }_{\mathrm{BCS}}(\mathrm{\Theta })`$, which is plotted as a solid line in the same graph.
We have verified for contacts with different conductances ranging from the tunnel regime to several $`G_0`$ that the $`IV`$s can be described by the same channel distribution (with restricted accuracy due to less pronounced MAR features) up to the critical temperature. When exceeding $`\mathrm{\Theta }_\mathrm{c}`$ the $`IV`$ characteristics become linear with a slope corresponding to $`D`$ within 1%.
## VI IVs of Al point contacts with external magnetic field
Fig. 5 shows the evolution of the subgap structure with applied magnetic field. The traces are offset for clarity. In our experiment the field is applied perpendicular to the film plane. As the field is increased, the excess current is suppressed, the current steps are strongly rounded, and the peak positions are shifted to lower voltages. For fields larger than $`\approx `$ 5.0 mT no clear sub-multiple current steps are observable. When a field of $`\mu _0H_\mathrm{c}=10.2`$ mT, close to the bulk critical field of Al, $`\mu _0H_{\mathrm{c},\mathrm{bulk}}=9.9`$ mT, is reached, the $`IV`$ becomes again linear with a slope corresponding to the sum $`D`$ of the transmissions determined in zero field. When lowering the field again to $`H=0`$ we recover the same subgap structure as before. When reversing the field direction the same $`IV`$ is observed for the same absolute value of the field, which proves that there is no residual field along the field axis. Effects of the Earth's magnetic field or spurious fields in other directions cannot, however, be excluded.
An external magnetic field suppresses superconductivity because it acts as an effective pair-breaking mechanism. Since a magnetic field breaks the electron-hole symmetry, the $`IV`$s due to MAR are modified, as demonstrated by Zaitsev and Averin. Strictly speaking, the conduction channels could be different with and without magnetic field.
A quantitative description of the influence of the magnetic field is difficult because of the complicated shape of the samples. The point contact spectra are sensitive to the superconducting properties at the constriction. Since the pair-breaking parameter $`\mathrm{\Gamma }=\hbar /(2\tau _{\mathrm{pb}}\mathrm{\Delta })`$ ($`\tau _{\mathrm{pb}}`$ is the pair-breaking time) due to an external magnetic field is geometry dependent, a complete description needs to take into account the exact shape of the sample on the length scale of the coherence length $`\xi `$. We estimate for our Al films in the dirty limit $`\xi =(\hbar D_{\mathrm{el}}/2\mathrm{\Delta })^{1/2}=280`$ nm, where $`D_{\mathrm{el}}=v_\mathrm{F}l/3=0.042\mathrm{m}^2/\mathrm{s}`$ is the electronic diffusion constant.
Due to the finite elastic mean free path $`l\approx 65\mathrm{nm}`$ (determined by the residual resistivity ratio RRR $`=R(300\mathrm{K})/R(4.2\mathrm{K})\approx 4`$) of the evaporated thin film, the penetration depth is enhanced and is comparable to the sample thickness. The fact that all signatures of superconductivity are destroyed at the bulk critical field indicates that the geometry of the sample does not play a dominant role, but that $`\xi `$ is the most important length scale. We therefore describe the influence of the magnetic field along the lines of Skalski et al., using a homogeneous $`\mathrm{\Gamma }`$ given by the expression
$$\mathrm{\Gamma }=\frac{D_{\mathrm{e}l}e^2\mu _0^2H^2w^2}{6\mathrm{}\mathrm{\Delta }}$$
(2)
where the effective width of the film $`w=280\mathrm{nm}\approx \xi `$ is limited by the coherence length. Superconductivity is completely suppressed when $`\mathrm{\Gamma }=0.5`$. In order to obtain the $`IV`$ curves for one channel in an external magnetic field, the BCS density of states in the theory of Ref. is replaced by the corresponding expressions given in Refs. , which include the effect of a pair-breaking mechanism.
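The quoted numbers can be checked with elementary constants. The gap value below is our own assumption (the BCS weak-coupling estimate $`\mathrm{\Delta }=1.764k_B\mathrm{\Theta }_\mathrm{c}`$ with $`\mathrm{\Theta }_\mathrm{c}=1.21`$ K), not a value stated in the paper:

```python
import math

hbar = 1.054571817e-34    # J s
k_B = 1.380649e-23        # J/K
e = 1.602176634e-19       # C

Tc = 1.21                 # K, critical temperature of the Al film
Delta = 1.764 * k_B * Tc  # assumed BCS weak-coupling gap, ~0.18 meV
D_el = 0.042              # m^2/s, electronic diffusion constant (from the paper)

# Dirty-limit coherence length xi = (hbar * D_el / (2 * Delta))^(1/2)
xi = math.sqrt(hbar * D_el / (2.0 * Delta))
print(f"xi = {xi * 1e9:.0f} nm")   # ~274 nm, consistent with the quoted 280 nm

# Pair-breaking parameter of Eq. (2) at the measured critical field
mu0_H = 10.2e-3           # T
w = 280e-9                # m, effective width ~ xi
Gamma = D_el * e**2 * mu0_H**2 * w**2 / (6.0 * hbar * Delta)
print(f"Gamma = {Gamma:.2f}")      # ~0.47, close to the suppression value 0.5
```

Within the uncertainty of the assumed gap, Eq. (2) thus reaches $`\mathrm{\Gamma }\approx 0.5`$ right at the observed critical field, consistent with the homogeneous pair-breaking picture.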
Contrary to the influence of higher temperatures, the magnetic field rounds the density of states and the Andreev reflection amplitude. The rounding is a consequence of the fact that the pair amplitude $`\mathrm{\Delta }`$ and the spectral gap $`\mathrm{\Omega }`$ of the density of states (i.e. the energy up to which the density of states is zero) differ from each other when time reversal symmetry is lifted. The position of the $`m=1`$ maximum of the $`\mathrm{d}I/\mathrm{d}V`$ does not give an accurate estimation of $`\mathrm{\Gamma }`$ and it is necessary to fit the whole $`IV`$. In the left inset we display the evolution (as a function of $`\mathrm{\Gamma }`$) of $`\mathrm{\Delta }`$, $`\mathrm{\Omega }`$ and the position of the maximum conductance of the $`m=1`$ process of the contact whose $`IV`$s are shown in Fig. 5. The functional dependence of the latter is not universal but depends on the distribution of transmissions.
It turns out that the structure in the experimental data is more rounded than in the calculated curves, indicating the limitations of the model used here. Reasonable agreement between the experimental data and the model is found when determining $`\mathrm{\Gamma }`$ such that the excess current is correctly described. The solid lines in Fig. 5 are calculated with the transmission ensemble determined at zero field, for the values of $`\mathrm{\Gamma }`$ given in the inset. These values correspond nicely to the predicted quadratic behavior of Eq. 2. It was not possible to achieve better agreement between the measured and calculated $`IV`$s by altering the channel ensemble, supporting again that the conduction channels are affected neither by superconductivity nor by its suppression.
We have demonstrated here that it is possible to drive a particular contact reproducibly into the normal state and back into the superconducting state without changing $`\{T_\mathrm{n}\}`$. We stress that the high stability of the setup is essential for keeping a particular contact unchanged during the measurement series.
## VII Conclusions
We have reported measurements and the analysis of multiple Andreev reflection in superconducting atomic contacts, demonstrating that the conduction channel ensemble of the smallest point contacts between Al electrodes consists of at least two, more often three, channels. We have verified that the channel ensemble remains unchanged when suppressing the superconductivity gradually by increasing the temperature or applying a magnetic field. This result strongly supports the expected equivalence of conduction channels in the normal and in the superconducting state and agrees with the quantum chemical picture of conduction channels. The latter suggests that the conduction channels are determined by the band structure of the metal, and therefore their transmissions vary significantly only on the scale of several eV. Superconductivity, which opens a spectral gap for quasiparticles of the order of only $`\sim `$ meV, does not modify the channels and is therefore a useful tool to study them.
We thank C. Strunk and W. Belzig for valuable discussions. This work was supported in part by the Deutsche Forschungsgemeinschaft (DFG), the Bureau National de Métrologie (BNM), and the Spanish CICYT.
# Unusual $`T_c`$ variation with hole concentration in Bi<sub>2</sub>Sr<sub>2-x</sub>La<sub>x</sub>CuO<sub>6+δ</sub>
## Abstract
We have investigated the $`T_c`$ variation with the hole concentration $`p`$ in the La-doped Bi 2201 system, Bi<sub>2</sub>Sr<sub>2-x</sub>La<sub>x</sub>CuO<sub>6+δ</sub>. It is found that the Bi 2201 system does not follow the systematics in $`T_c`$ and $`p`$ observed in other high-$`T_c`$ cuprate superconductors (HTSCs). The $`T_c`$ vs $`p`$ characteristics are quite similar to those observed in Zn-doped HTSCs. An exceptionally large residual resistivity component in the in-plane resistivity indicates that strong potential scatterers of charge carriers reside in the CuO<sub>2</sub> planes and are responsible for the unusual $`T_c`$ variation with $`p`$, as in the Zn-doped systems. However, contrary to the Zn-doped HTSCs, the strong scatterer in the Bi 2201 system is possibly a vacancy in the Cu site.
PACS numbers: 74.72.Hs, 74.62.Dh, 74.62.Bf, 74.25.Dw
Many high-$`T_c`$ cuprate superconductors (HTSCs) display an approximately parabolic dependence of $`T_c`$ upon the hole concentration $`p`$, with the maximum $`T_c`$ at $`p\approx 0.16`$. ($`p`$ is defined as the hole concentration per Cu atom in the CuO<sub>2</sub> planes.) This behavior was observed first in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>. Then other HTSCs such as YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-y</sub>, Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub>, and TlSr<sub>2</sub>CaCu<sub>2</sub>O<sub>7+δ</sub> were also found to show approximately the same relation between $`T_c`$ and $`p`$, which scales only with the maximum $`T_c`$, $`T_{c,max}`$. Though not studied over the full range of $`p`$, several other HTSCs are also known to have $`T_{c,max}`$ at $`p\approx `$ 0.14 to 0.15. Therefore one might expect that there possibly exists a universal relation between $`T_c`$ and $`p`$ which all HTSCs satisfy.
The existence of a universal parabolic relation between $`T_c`$ and $`p`$ for all HTSCs, despite the different combinations of constituent atoms, the presence of various charge-carrier reservoir layers, and a variety of inter-plane coupling strengths, can hardly be accidental and is believed to be related to an essential feature of high-temperature superconductivity. It is therefore not strange that the recent observations in Zn-doped HTSCs of departures from the universal relation have drawn particular interest. Much attention has focused on the role of Zn. Within a HTSC, Zn substitutes for Cu in the CuO<sub>2</sub> plane and behaves as a nonmagnetic impurity without altering the carrier concentration. In this report, we show that a similar non-universal $`T_c`$-$`p`$ relation holds also for the La-doped Bi 2201 system, Bi<sub>2</sub>Sr<sub>2-x</sub>La<sub>x</sub>CuO<sub>6+δ</sub>, which contains strong disorder in the CuO<sub>2</sub> planes of a kind differing from impurities.
We have obtained the hole concentration $`p`$ of the samples from thermopower ($`S`$) measurements. The room-temperature thermopower $`S`$(290 K) of HTSCs was found to be a universal function of $`p`$ over the whole range of doping, which has since been used widely to determine the $`p`$ of HTSCs. The superconducting-transition temperature $`T_c`$ was determined at half the normal-state resistivity. The conventional solid-state reaction of stoichiometric oxides and carbonates was adopted in preparing polycrystalline samples of Bi<sub>2</sub>Sr<sub>2-x</sub>La<sub>x</sub>CuO<sub>6+δ</sub>. The x-ray diffraction (XRD) analysis shows all the samples to be single phase to the threshold of detection. The oxygen content in the sample of $`x`$ = 0.1 could be varied by annealing the same sample in vacuum for 6 h at different temperatures (400<sup>o</sup>C, 500<sup>o</sup>C, and then 600<sup>o</sup>C). $`S`$ was measured by employing the dc method described in Ref. 10. The resistivity $`\rho `$ was measured by the conventional low-frequency ac four-probe method.
Figure 1 shows the temperature dependences of $`S`$ and $`\rho `$ of Bi<sub>2</sub>Sr<sub>2-x</sub>La<sub>x</sub>CuO<sub>6+δ</sub> (BSLCO) with $`0.1\le x\le 0.8`$. The temperature and doping dependences of $`S`$ in Fig. 1(a) are typical of HTSCs. $`S`$(290 K) increases with doping $`x`$ from -15.5 $`\mu `$V/K to 60 $`\mu `$V/K. The corresponding $`p`$, determined from the relations between $`S`$(290 K) and $`p`$ in Ref. 3, varies from 0.286 to 0.073 with doping. The $`\rho `$ measurements in Fig. 1(b) display that the $`T_c`$ of BSLCO has its maximum at $`x\approx 0.5`$ or $`p\approx 0.22`$. The appearance of $`T_{c,max}`$ at $`x\approx 0.5`$ agrees with the previous measurements. $`T_c`$/$`T_{c,max}`$ against $`p`$ is plotted in Figure 2. The $`T_c`$ (= 21.5 K) of $`x`$ = 0.5 is used as $`T_{c,max}`$ for the solid circles. The dotted curve is the "universal" relation, $`T_c/T_{c,max}=1-82.6(p-0.16)^2`$, of Ref. 1. The relation has not yet been fully tested in the overdoped region of $`p>`$ 0.25. Figure 2 clearly displays that BSLCO does not follow the systematics. Superconductivity in the underdoped region is deeply suppressed, and the $`T_{c,max}`$ appears at an overdoped hole concentration $`p\approx 0.22`$ rather than 0.16. Besides, the $`T_{c,max}`$ of $`\approx `$ 21.5 K is also unusually low, which is only $`\frac{1}{4}`$ the $`T_c`$ of Tl<sub>2</sub>Ba<sub>2</sub>CuO<sub>6+δ</sub>, isostructural with BSLCO. Taking the maximum $`T_c`$ of Tl<sub>2</sub>Ba<sub>2</sub>CuO<sub>6+δ</sub> as $`T_{c,max}`$, BSLCO has much lower $`T_c`$/$`T_{c,max}`$ values, as represented by the open circles in Figure 2.
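To make the comparison concrete, the following is our own arithmetic on the quoted parabola, not numbers from the paper. The "universal" relation closes at $`p\approx 0.05`$ and $`p\approx 0.27`$, and at BSLCO's optimal $`p\approx 0.22`$ it would still predict about 70% of $`T_{c,max}`$:

```python
import math

# Universal relation: Tc/Tc,max = 1 - 82.6 * (p - 0.16)^2
def tc_ratio(p):
    return 1.0 - 82.6 * (p - 0.16) ** 2

# Edges of the superconducting dome, where the ratio crosses zero
half_width = math.sqrt(1.0 / 82.6)
p_low, p_high = 0.16 - half_width, 0.16 + half_width
print(f"dome edges: p = {p_low:.3f} .. {p_high:.3f}")   # ~0.050 .. 0.270

# Predicted ratio at the hole concentration where BSLCO actually peaks
print(f"ratio at p = 0.22: {tc_ratio(0.22):.2f}")       # ~0.70
```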
The unusual $`T_c`$ variation with $`p`$ is exposed more dramatically in the vacuum-annealed sample of $`x`$ = 0.1, which superconducts at $`T\approx 10`$ K when not vacuum-annealed. Vacuum annealing reduces the content of oxygen atoms interstitial between the Bi-O planes and consequently $`p`$ in the CuO<sub>2</sub> planes. Fig. 3(a) shows that successive vacuum annealings at 400<sup>o</sup>C, 500<sup>o</sup>C, and then 600<sup>o</sup>C enhance the $`S`$ of Bi<sub>2</sub>Sr<sub>1.9</sub>La<sub>0.1</sub>CuO<sub>6+δ</sub> from -15.5 $`\mu `$V/K to -9.3 $`\mu `$V/K. The corresponding variation of $`p`$ is from 0.286 to 0.240. We would expect from the observed $`T_c`$-$`p`$ relation of BSLCO in Figure 2 that the $`T_c`$ of the sample of $`x`$ = 0.1 rises with annealing from 10 K to 20 K. The $`\rho `$ measurements in Figure 3(b), however, show that the superconductivity observed in the as-grown sample disappears with annealing in vacuum. We observed similar behavior also in Bi<sub>2</sub>Sr<sub>2</sub>CuO<sub>6+δ</sub>, which had been prepared from the nominal composition Bi:Sr:Cu = 2:2:1.5. The semiconducting as-grown sample of Bi<sub>2</sub>Sr<sub>2</sub>CuO<sub>6+δ</sub> having $`p`$ = 0.282 exhibited a superconducting-transition onset at 11.5 K when vacuum-annealed at 400<sup>o</sup>C. And yet subsequent vacuum annealings at 500<sup>o</sup>C and 600<sup>o</sup>C put the sample back into the semiconducting state. The $`p`$ values of the Bi<sub>2</sub>Sr<sub>2</sub>CuO<sub>6+δ</sub> sample annealed at 400<sup>o</sup>C, 500<sup>o</sup>C, and 600<sup>o</sup>C were 0.256, 0.250 and 0.216, respectively, all of which are located in the superconducting region of Figure 2.
The $`T_c`$ vs $`p`$ characteristics of the as-grown samples represented by the open circles in Figure 2 resemble those of Zn-doped HTSCs in Refs. 6 and 7. It has been suggested that the primary effect of Zn impurities is to produce a large residual resistivity as a nonmagnetic potential scatterer in the unitary limit, and that the more rapid depression of $`T_c`$ in the underdoped region is related to the large residual resistivity reaching the universal two-dimensional resistance h/4e<sup>2</sup> $`\approx `$ 6.5 k$`\mathrm{\Omega }`$ per square per CuO<sub>2</sub> plane at the edge of the underdoped superconducting region. Unlike most HTSCs, the Bi 2201 superconductor is found to have an exceptionally large residual resistivity. The corresponding two-dimensional residual resistance per CuO<sub>2</sub> plane ranges from 0.3 k$`\mathrm{\Omega }`$ per square at an overdoped hole concentration to 10 k$`\mathrm{\Omega }`$ per square at an underdoped concentration, with 50% uncertainties. The large residual resistivity indicates that BSLCO contains strong scatterers of charge carriers in the planes. The strong scatterer in BSLCO is, however, not an impurity but most likely a vacancy in the Cu site, since none of Bi, Sr, and La can easily substitute for Cu, and disorder in the noncopper sites has little effect on superconducting properties other than changing the hole concentration. Nevertheless, a vacancy in the CuO<sub>2</sub> plane is expected to act as a nonmagnetic potential scatterer, just like a Zn impurity in the planes. Vacuum annealing may cause extra vacancies in the CuO<sub>2</sub> planes as well as expelling interstitial oxygen atoms. Thus the same argument in terms of disorder in the CuO<sub>2</sub> plane can be adopted to explain the deeper suppression of $`T_c`$ in vacuum-annealed samples.
Although the above discussion does not provide a full account of the origin of the non-universal $`T_c`$ vs $`p`$ characteristics, it may be concluded that the similarity between the Bi 2201 HTSC, with disorder differing from impurities, and other HTSCs with Zn impurities strengthens the argument that strong potential scattering in the planes and a large residual resistivity at underdoped hole concentrations are closely related to the strong suppression of high-temperature superconductivity and the more rapid $`T_c`$ depression in the underdoped region.
We wish to thank Y. Yun and I. Baek for their assistance with the XRD analysis.
# Neutrino Oscillation Appearance Experiment using Nuclear Emulsion and Magnetized Iron
## 1 Introduction
To observe all of the possible neutrino oscillation phenomena one would ideally ask for pure neutrino beams of different flavors, and a detector which could identify the three possible leptons in the final state of a charged current interaction. Traditional neutrino beams contain predominantly $`\nu _\mu `$ or $`\overline{\nu }_\mu `$ with a small admixture of other neutrino flavors. Muon storage rings offer the possibility of mixed $`\overline{\nu }_\mu `$/$`\nu _e`$ and $`\nu _\mu `$/$`\overline{\nu }_e`$ beams. These beams will have virtually no admixture of other neutrino flavors, but they will contain both neutrinos and antineutrinos. To exploit these beams fully and uniquely identify the oscillation mode, one needs to determine the electric charge of the outgoing lepton. This will uniquely distinguish neutrino from antineutrino interactions.
Several massive detectors exist which can identify the presence and charge of an outgoing muon, and some have been proposed to detect the presence of an outgoing tau or electron. However, measuring both the presence AND the charge of an outgoing tau, or electron, on an event-by-event basis remains a serious challenge in the field of neutrino oscillation experiments. Such a measurement is necessary to detect a $`\overline{\nu }_\mu \to \overline{\nu }_e`$ oscillation in the presence of a $`\nu _e`$ component of the beam and/or to distinguish $`\nu _\mu \to \nu _\tau `$ from $`\overline{\nu }_e\to \overline{\nu }_\tau `$ oscillations. Precise measurements of $`\nu _\mu \to \nu _\tau `$, $`\nu _\mu \to \nu _e`$ and $`\nu _e\to \nu _\tau `$ oscillation amplitudes would allow a test of unitarity of the neutrino oscillation mixing matrix, and an indirect search for sterile neutrinos. A measurement of the difference between $`\nu _\mu \to \nu _e`$ and $`\nu _e\to \nu _\mu `$ oscillation amplitudes could provide a direct test of T (or CP) violation, free of matter effects which affect a $`\nu _e\to \nu _\mu `$ and $`\overline{\nu }_e\to \overline{\nu }_\mu `$ comparison.
In this paper we describe a detector which combines emulsion technology with a magnetic field. High resolution tracking capabilities of nuclear emulsions permit unambiguous detection of produced $`\tau `$ leptons and electrons, whereas a measurement of the deflection in a magnetic field allows a determination of the sign of the electric charge.
## 2 Lepton Identification in the Emulsion Detector
The CHORUS experiment has demonstrated the capability to identify $`\tau `$ leptons through observation of their decay kinks in emulsion, and has set the most stringent limits to date on $`\nu _\mu \to \nu _\tau `$ oscillations at high $`\delta m^2`$ . CHORUS uses bulk nuclear emulsion as a detector; such a technique is prohibitively expensive for very large mass detectors.
A different geometry, which is being studied by both the MINOS and OPERA collaborations, involves interspacing emulsion plates, used as detectors, with thin lead plates, used as a target . This geometry was used successfully in studies of cosmic rays by the JACEE collaboration . Electronic tracking devices, sampling the emulsion detector with a frequency of the order of $`0.5\lambda _I`$, are used to trigger the detector and to localize the interaction point to a small volume. That small volume can be removed from the detector and analyzed in nearly real time, with the electronic tracking information serving as a guide to the analysis of the emulsion sheets.
An emulsion detector optimized for tau detection typically involves a thin lead plate serving as a target, followed by a gap with two emulsion layers separated by a low-Z-material spacer. Track segments and their spatial angles are measured in both emulsion layers. Tau decays inside the spacer material are characterized by a large angle (typically above $`50mrad`$) between the downstream and upstream track segments. A certain fraction of taus decaying in the target plate can be identified by the large impact parameter of the resulting tau daughter.
Electromagnetic showers are sampled in emulsion with a typical granularity of the order of $`0.2X_0`$. Thanks to the excellent spatial resolution of the emulsion and a high sampling frequency of the electromagnetic shower, individual acts of photon emission and conversion are easily detectable in the emulsion. Additional information is provided by the double ionization (measured via grain density along the track) of the electron-positron pair from a photon conversion.
An electron can thus be identified as a charged track with several conversion pairs within a small cone around it. An important feature of the emulsion detector is its ability to follow the initial electron track even inside the electromagnetic cascade.
## 3 Lepton Charge Determination
The electric charge of a particle can be determined by measuring its trajectory in the magnetic field. The simplest solution consists of replacing the lead target plates by iron ones and using an external coil to create a magnetic field in the iron in excess of $`1Tesla`$.
Charged particles traversing the iron plate will receive a $`p_t`$ kick of $`0.003xBGeV`$, where $`x`$ is the thickness of the steel plate in $`cm`$ and $`B`$ is the magnetic field in $`Tesla`$. At the same time, however, multiple scattering in the iron will generate a random $`p_t`$ of $`0.014GeV\sqrt{\frac{x}{X_0}}`$, where $`X_0=1.76cm`$ is the radiation length of steel. For a typical field strength of $`1Tesla`$, the multiple scattering effects dominate over the bending in the magnetic field. The situation improves when several iron plates are traversed: the $`p_t`$ kicks due to the magnetic field add linearly, whereas the multiple scattering induced $`p_t`$ grows like $`\sqrt{x}`$. After traversal of $`N`$ steel plates of thickness $`x`$ each, the significance of the charge determination is
$$\sigma =2\frac{0.003B(Tesla)x(cm)}{0.014\sqrt{\frac{x}{X_0}}}\sqrt{N}$$
(1)
The factor of $`2`$ accounts for the fact that we are not trying to determine the sign as such, but rather to distinguish between a positive and negative track hypothesis.
For a typical example of a detector with $`x=1mm`$ ($`0.05X_0`$) of steel in a $`1Tesla`$ field, it takes about $`100`$ planes, or $`10cm`$ of iron, to achieve a $`2\sigma `$ sign measurement. For muons, or even daughters of tau leptons, it should be possible to achieve a $`3`$ or $`4\sigma `$ sign determination. In the electron case, however, it may become impractical to follow the primary electron beyond some 5-10 $`X_0`$.
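The significance formula above is easy to check numerically; the short Python sketch below does so (the function name and argument conventions are ours, following Eq. (1) of this section):

```python
import math

def charge_significance(n_planes, b_tesla, x_cm, x0_cm=1.76):
    """Charge-sign significance after traversing n_planes iron plates of
    thickness x_cm (radiation length x0_cm) in a field b_tesla, Eq. (1)."""
    pt_bend = 0.003 * b_tesla * x_cm          # magnetic p_t kick per plate (GeV)
    pt_ms = 0.014 * math.sqrt(x_cm / x0_cm)   # multiple-scattering p_t per plate (GeV)
    return 2.0 * (pt_bend / pt_ms) * math.sqrt(n_planes)

# The worked example from the text: 1 mm (0.05 X0) plates in a 1 T field, 100 planes
sigma = charge_significance(100, 1.0, 0.1)    # close to 2 sigma
```

Note the sqrt(N) scaling: quadrupling the number of planes doubles the significance, which is why following a track through many plates pays off.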
In case a more precise charge determination is required for electrons, the solution could be to immerse the entire detector in a strong external magnetic field. In such a case a significant improvement of the measurement is achieved, as the space between the steel plates contributes to the magnetic bending as well, whereas its contribution to the multiple scattering is negligible in comparison with that of the iron plates.
As an example, consider a detector with $`x=1mm`$ steel plates and a $`2mm`$ gap between them, immersed in a $`4Tesla`$ magnetic field: it will require only $`45`$ emulsion plates to achieve a $`5\sigma `$ charge measurement.
Such a detector is not impractical: a 1 kton detector would have a total volume of $`375m^3`$ ($`5m\times 5m\times 15m`$). This is a rather modest volume in comparison, for example, with the ATLAS barrel toroidal magnet system , which generates a magnetic field of $`2.4`$-$`4.2Tesla`$ in a total volume of $`7600m^3`$. The ultimate size of a possible detector will thus be limited by the available resources rather than by technical factors. The cost of a hypothetical detector with 20 kton of active mass would be of the order of 1100 million SF: 100 MSF for the superconducting magnet and 1000 MSF for the emulsion detector.
## 4 Conclusion
We have presented a concept of a novel detector involving large volume emulsion detectors interspaced with thin layers of steel. Such a detector should be capable of identification of the final state lepton (muon, electron or tau) with good efficiency and very small background. The superb spatial resolution and granularity of nuclear emulsions makes it possible to determine the sign of the electric charge of the lepton, once the detector is immersed in the magnetic field. The magnetic field can be generated by the excitation of steel plates with an external coil, or by immersion of the entire detector inside a superconducting magnet, similar to the ones being built for the LHC experiments.
With such a detector one can take full advantage of the physics opportunities provided by intense neutrino beams produced at a muon storage ring. It will allow a precise determination of the neutrino mixing matrix elements and searches for matter effects and CP violation. A very large mass detector, of the order of 20 kton, can be constructed at a price of the order of 1 BSF. While this cost is probably comparable with the cost of construction of the neutrino factory itself, this detector would provide as complete a set of physics information as possible.
## 1 Obscured AGN and the X-ray background
The origin of the cosmic X-ray background (XRB) has been a puzzle for over 35 years, but there is now strong evidence that it will be explained by a large population of absorbed AGN. If these are to explain the bump in the XRB spectrum at 30 keV, however, the total energy output of this population must exceed that of broad-line AGN by at least a factor of $`5`$, with wide-ranging implications . This hidden population could also explain the apparent discrepancy between the black hole densities predicted by ordinary QSOs and recent observations of local galaxy bulges , .
Deep X-ray surveys with Chandra and XMM will soon test this obscured AGN hypothesis, although existing surveys with ROSAT, ASCA and BeppoSAX have already revealed what could be the "tip of the iceberg" of this population. Several unambiguous cases of obscured QSOs have been detected (e.g. , , , , ) while at the faintest X-ray fluxes there is growing evidence for a large population of X-ray luminous emission-line galaxies, many of which show clear evidence for AGN activity ,,, .
Arguably the most convincing evidence for the obscured AGN model came from the ultra-deep survey of Hasinger, Schmidt et al (, ). Using the Keck telescope to identify sources from the deepest X-ray observation ever taken they found that most of these X-ray galaxies could be classified as AGN. Exciting new work has also been undertaken at harder energies with ASCA and Beppo-SAX (, , ) resolving $`30\%`$ of the $`210`$ keV XRB. Increasing numbers of these sources have been identified with absorbed AGN.
## 2 Star-forming Galaxies and Submillimetre Surveys
Deep sub-mm observations offer the potential to revolutionise our understanding of the high redshift Universe. Beyond 100$`\mu `$m, both starburst galaxies and AGN show a very steep decline in their continuum emission, which leads to a large negative K-correction as objects are observed with increasing redshift. This effectively overcomes the โinverse square lawโ to pick out the most luminous objects in the Universe to very high redshift . Since the commissioning of the SCUBA array at the James Clerk Maxwell Telescope a number of groups have announced the results from deep sub-mm surveys, all of which find a high surface density of sources at $`850\mu `$m ( , , , , ). The implication is the existence of a large population of hitherto undetected dust enshrouded galaxies. In particular, the implied star-formation rate at high redshift ($`z>2`$) is significantly higher than that deduced from uncorrected optical-UV observations, roughly a factor of two higher than even the dust-corrected version of the optically derived star-formation history . The recent detections of the far-infrared/sub-mm background by the DIRBE and FIRAS experiments provide further constraints, representing the integrated far-infrared emission over the entire history of the Universe (, , ). Since most of this background has now been resolved into discrete sources by SCUBA, the implication is that most high redshift star-forming activity occurred in rare, exceptionally luminous systems.
In other words, at high redshift ULIRG-like starburst galaxies dominate the cosmic energy budget, in stark contrast to the situation today where (to quote Andy Lawrence) "ULIRGs are little more than a spectacular sideshow" .
## 3 The X-ray/sub-mm link
Considerable excitement has been generated recently by the possibility that many of these SCUBA sources could be AGN. Whether these AGN are actually heating the dust is another matter (see Section 5) but there are now several independent lines of argument which suggest that AGN are present in a significant fraction of these SCUBA sources. At the very least, the implication is that much of the star formation in the high redshift Universe occurred in galaxies containing active quasars. First we present the arguments predicting an AGN contribution, followed by recent observational evidence.
### 3.1 Arguments for AGN in deep sub-mm surveys
* The analogy with ULIRGs: In many ways a significant AGN fraction would not be a surprise. The SCUBA sources are exceptionally luminous systems, essentially the high redshift equivalents to local ULIRGs. At the luminosities of the SCUBA sources ($`10^{12}L_{\odot }`$) we note that at least $`30\%`$ of local ULIRGs show clear evidence for an AGN .
* Predictions based on AGN luminosity functions: One can estimate the AGN contribution by transforming the X-ray luminosity function to the sub-mm waveband using a template AGN SED. We estimate that 10-20% of the sources in recent SCUBA surveys could contain AGN, perhaps higher if one allows for Compton-thick objects . Major sources of uncertainty are in the extrapolation of local AGN SEDs to high redshift, the dust temperature and in assuming the same underlying luminosity function for obscured AGN. Note that an independent but very similar analysis by Manners et al (in preparation) predicts a lower fraction (5-10%).
* Re-radiating the absorbed energy: Another approach, also based on obscured AGN models for the XRB, does not rely on uncertain SEDs but instead on thermally re-radiating the absorbed energy directly in the far-infrared . This method also predicts a significant AGN fraction among the SCUBA sources (5-30%), although the exact prediction is strongly dependent on the assumed dust temperature.
* SCUBA observations of high-z quasars: Observations of the most luminous, very high redshift quasars ($`z>3`$) suggest that many are exceptionally luminous in the far-infrared/sub-mm, with sub-mm luminosities comparable to Arp 220 . Whether this emission is due to dust heated by the quasar or associated with starburst activity is unclear, but the lack of any correlation between the quasar power and the sub-mm luminosity would favour a starburst origin for the far-infrared emission. If one assumes that all high redshift quasars have similar sub-mm luminosities this would lead to a very large AGN fraction in the deep SCUBA surveys ($`50\%`$). In reality, however, some weak correlation between quasar power and the associated starburst is likely to exist, which would reduce this fraction significantly. Further sub-mm observations of high-z quasars are required to investigate these correlations.
### 3.2 Direct evidence for AGN in sub-mm surveys
* The SEDs of detected SCUBA sources: A recent analysis of the multi-wavelength spectral energy distributions (SEDs) of SCUBA sources has suggested that $`1/3`$ are likely to be AGN .
* Spectroscopic identification: Although the identification of many SCUBA deep survey sources remains elusive (perhaps indicating their exceptionally high redshift) there are growing indications that a significant fraction harbour AGN. The first clear-cut identification turned out to be an obscured QSO and since then various surveys have been able to place limits on the AGN fraction. From a study of $`14`$ sub-mm sources, Barger and collaborators have placed a lower limit of $`20\%`$ on the fraction showing evidence for AGN activity . Of seven sub-mm sources studied in detail by Ivison et al, at least $`3`$ show evidence for an AGN . These estimates are in good agreement with the predictions of the various models outlined above.
## 4 Probing the X-ray/sub-mm link with Chandra and XMM
Forthcoming deep X-ray surveys with Chandra and XMM will push significantly fainter than ever before. In the soft X-ray band, we expect to reach at least an order of magnitude fainter than the deepest ROSAT surveys, while in the hard X-ray band the improvement is even more dramatic (Figure 1). The puzzle of the XRB should therefore soon be solved, and we expect to detect large numbers of obscured AGN and study their properties in detail. In addition, we will be able to detect typical quasars to very high redshift ($`z8`$) and hence assess the importance of quasar activity during those early epochs.
These deep X-ray observations are potentially ideal for identifying AGN in sub-mm surveys. The source densities expected are very similar ($`1000`$ deg<sup>-2</sup>) and hence with the resolution of Chandra in particular it will be possible to pick out the AGN directly from their X-ray flux, with very little confusion.
The $`8`$mJy survey of the UK SCUBA consortium is ideal for such a study (Figure 2). This is the only wide-area SCUBA survey being conducted, covering $`500`$ square arcminutes in $`2`$ contiguous regions. One of these regions is in the Lockman Hole (which will be covered by Chandra in PV time). The other is concentrated on the N2 region of the ELAIS survey, where we have deep Chandra and XMM observations planned (PI: Almaini). We will be able to detect the hard X-ray emission from the hidden AGN and obtain a measurement of the absorbing column. If the equivalent width is high enough, in some cases XMM may even allow a redshift determination from an iron line detection.
## 5 Conclusions and Implications
Several independent arguments now point to the conclusion that a significant fraction (10-30%) of the luminous sub-mm sources detected by SCUBA will contain AGN. This points to a very important link between X-ray astronomy and the newly emerging sub-mm field, both of which provide probes of the obscured, high redshift Universe.
If a significant AGN fraction is confirmed with forthcoming Chandra/XMM surveys, considerable uncertainties will still remain. Is the dust heated by the AGN or by stellar processes? If AGN are responsible, and their contribution is large, the recent conclusions about star-formation at high redshift may require significant revision. On the other hand, the dust may be largely heated by stellar activity (see ) but with the interesting implication that much of the star formation at high redshift occurred in galaxies containing active quasars. It has recently been postulated that perhaps all quasars could go through an obscured phase during the growth of the black hole, a process which could be intimately linked with the formation of the galaxy bulge itself (see ). Future surveys combining X-ray and sub-mm observations will provide a powerful tool for disentangling these processes.
# 2-D Radiation Transfer Model of Non-Spherically Symmetric Dust Shell in Proto-Planetary Nebulae
## 1. Introduction
The detection of bipolar reflection nebulosities in PPN suggests that the dust envelopes in PPN are highly asymmetric (see Hrivnak, these proceedings). In order to model these objects, a 2-D radiation transfer model is necessary. However, a fully self-consistent determination of the source function in 2-D is impractical, not only because of the large computing time required, but also because of the lack of knowledge of the physical details of the scattering process. Since the morphology of the nebulae is primarily determined by the geometry and orientation of the envelope, we have developed an approximate solution to simultaneously fit the SED and images of a centrally-heated dust envelope. In this paper, we report models for 3 PPN, IRAS 17150-3224, IRAS 18095+2704 and IRAS 17441-2411, for which HST images are available.
## 2. The Model
The dust envelope is assumed to be axially symmetric, with a density distribution $`\rho (r,\theta )`$ that has radial cutoffs at $`r_{in}`$ and $`r_{out}`$. The plane perpendicular to $`z`$ ($`\theta =90^{}`$) is referred to as the "equatorial plane" and the directions along the $`z`$ axis ($`\theta =0^{}`$ and $`180^{}`$) are referred to as the two "poles". The density is assumed to decrease from the equator to the poles in the form of a power law ($`\rho \propto \theta ^\beta r^\gamma `$). In order to reproduce the searchlight beams observed in the PPN IRAS 17150-3224 (Kwok et al. 1998), cavities can be introduced in the density distribution by simply assuming an open-cone structure, where $`\rho `$ drops by a factor of $`\tau _{scale}`$ inside the cone. A disk can also be added to the density distribution in order to reproduce the dark lane observed in IRAS 17441-2411 (Su et al. 1998). The viewing angle $`i`$ is defined as $`0^{}`$ if the object is viewed along the pole (pole-on), and $`90^{}`$ if it is viewed along the equator (edge-on).
We first solve for the dust temperature distribution in the "polar" and "equatorial" directions using 1-D radiation transfer models. The values at other angles are then obtained by interpolation using a power law:
$$T(\theta )=T(\theta =0)+\left(T(\theta =\frac{\pi }{2})-T(\theta =0)\right)\left[\frac{2\theta }{\pi }\right]^N\mathrm{for}\theta \le \pi /2$$

$$T(\pi -\theta )=T(\theta )$$
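The interpolation above is simple to implement; a Python sketch (function and argument names are our own), assuming the two 1-D solutions $`T(\theta =0)`$ and $`T(\theta =\pi /2)`$ are already known:

```python
import math

def dust_temperature(theta, t_pole, t_equator, n_index):
    """Power-law interpolation of the dust temperature between the polar
    and equatorial 1-D solutions; theta in radians.  Angles past pi/2 are
    mirrored about the equatorial plane, as in the second equation above."""
    th = theta if theta <= math.pi / 2.0 else math.pi - theta
    return t_pole + (t_equator - t_pole) * (2.0 * th / math.pi) ** n_index
```

By construction the interpolation reproduces the two 1-D solutions exactly at the pole and the equator, and is symmetric about the equatorial plane.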
The three power indices ($`\beta ,\gamma ,N`$) are adjusted until best fits are obtained for both the SED and the images. Specifically, $`\gamma `$ can be constrained by the infrared color because the total amount of dust is fixed. $`\beta `$ determines the degree of asymmetry and is constrained by the observed width of the reflection nebula. $`N`$ plays essentially the same role as $`\beta `$, but can be adjusted in order to conserve the total energy. The viewing angle $`i`$ can be determined by comparing the flux ratio of the two reflection lobes in the simulated image to that in the observed image.
IRAS 17150-3224 and IRAS 18095+2704 are both O-rich objects, as evidenced by the presence of the 9.7$`\mu `$m silicate feature in the IRAS Low Resolution Spectra (LRS); therefore, we use silicate dust grains in the fitting. The strength of the silicate feature is also used to constrain the dust optical depth along the line of sight. Since IRAS 17441-2411 shows no feature in the IRAS LRS, we adopted amorphous carbon dust grains in the fitting.
## 3. Results
Table 1 lists all the fitting parameters used. Figure 1 shows model fits to the SED for these three objects. Figure 2 shows the comparison between the observed and model images. The model images not only reproduce the approximate shapes of the PPN, but also the absolute flux levels (as evidenced by the sizes of the outermost contours). In addition, the searchlight beams in IRAS 17150-3224 and the dark lane in IRAS 17441-2411 are successfully reproduced as well.
## References
Kwok, S., Su, K.Y.L., and Hrivnak, B.J. 1998, ApJ, 501, L117
Su, K.Y.L., Volk, K., Kwok, S. and Hrivnak, B.J. 1998, ApJ, 508, 744
QUANTITATIVE DESCRIPTION OF THERMODYNAMICS OF LAYERED MAGNETS IN A WIDE TEMPERATURE REGION
V.Yu. Irkhin<sup>1</sup><sup>1</sup>1Corresponding author. Fax: +7 (3432) 74 52 44; e-mail: Valentin.Irkhin@imp.uran.ru and A.A. Katanin
Institute of Metal Physics, 620219 Ekaterinburg, Russia
## Abstract
The thermodynamics of layered antiferro- and ferromagnets with a weak interlayer coupling and/or easy-axis anisotropy is investigated. A crossover from an isotropic 2D-like to a 3D Heisenberg (or 2D Ising) regime is discussed within a renormalization group (RG) analysis. Analytical results for the (sublattice) magnetization and the ordering temperature are obtained in different regimes. Numerical calculations on the basis of the equations obtained yield good agreement with experimental data on La<sub>2</sub>CuO<sub>4</sub> and layered perovskites. Corresponding results for the Kosterlitz-Thouless and Curie (Néel) temperatures in the case of easy-plane anisotropy are derived.
PACS: 75.10 Jm, 75.30.Gw, 75.70.Ak
Keywords: Layered magnetic systems, renormalization group, $`1/N`$ expansion.
The problem of layered magnetic systems is of interest from both theoretical and practical points of view . Here belong, e.g., quasi-two-dimensional (quasi-2D) perovskites, ferromagnetic monolayers and ultrathin films. We consider the Heisenberg model with small parameters of the interlayer coupling $`\alpha =J^{\prime }/J`$ and easy-axis anisotropy $`\eta =1-J^z/J^{x,y}`$ ($`J`$ is the in-plane exchange parameter). This case permits a regular consideration since the magnetic transition temperature is small, $`T_M\ll |J|S^2,`$ which is reminiscent of weak itinerant magnets.
At not too low temperatures $`T`$ the standard spin-wave theory (SWT) is insufficient to describe correctly the thermodynamics of such systems. Somewhat better results can be obtained within the self-consistent spin-wave theory (SSWT) , which takes into account the temperature renormalization of $`\alpha `$ and $`\eta `$. However, the values of the ordering temperature in SSWT are still too high in comparison with experimental ones, and the critical behavior is quite incorrect.
To improve SSWT radically, a summation of leading contributions in all orders of perturbation theory should be performed. To this end we use a renormalization group (RG) approach similar to that of Ref. . This approach is valid outside the critical region, which is very narrow for layered systems. The same results can be obtained by direct summation of RPA-type corrections to the spin-wave interaction vertex.
The result for the relative (sublattice) magnetization $`\overline{\sigma }_r\overline{S}/\overline{S}(T=0)`$ reads
$$\overline{\sigma }_r=1-\frac{T}{4\pi \rho _s}\left[\mathrm{ln}\frac{\mathrm{\Gamma }(T)}{\mathrm{\Delta }(f_T,\alpha _T)}+2\mathrm{ln}(1/\overline{\sigma }_r)+2(1-\overline{\sigma }_r)+\mathrm{\Phi }\left(\frac{T}{4\pi \rho _s\overline{\sigma }_r}\right)\right]$$
(1)
with $`\mathrm{\Delta }(f,\alpha )=f+\alpha +\sqrt{f^2+2\alpha f}`$ and $`f=4\eta `$; the other quantities are given in Table 1:
| | $`\mathrm{\Gamma }`$ | $`\rho _s`$ | $`f_r`$ | $`\alpha _r`$ |
| --- | --- | --- | --- | --- |
| AFM, quantum regime ($`T\ll |J|S`$) | $`T^2/c^2`$ | $`\gamma |J|S\overline{S}_0`$ | $`f\overline{S}_0^2/S^2`$ | $`\alpha \overline{S}_0/S`$ |
| FM, quantum regime ($`T\ll JS`$) | $`T/JS`$ | $`JS^2`$ | $`f`$ | $`\alpha `$ |
| FM,AFM, classical regime ($`T\gg |J|S`$) | $`32`$ | $`JS^2Z_{L1}`$ | $`fZ_{L2}`$ | $`\alpha Z_{L3}`$ |
where $`c=\sqrt{8}|J|\gamma S`$ is the spin-wave velocity, $`\gamma \simeq 1+0.078/S`$ is the renormalization parameter for the intralayer coupling, and $`Z_{L1}=Z_{L2}=Z_{L3}=1-T/8\pi |J|S^2`$. The inequality $`\mathrm{\Gamma }\gg \mathrm{\Delta }`$ should be satisfied for the validity of (1). Note that the classical regime is realized only for very large $`S`$. The quantities $`f_T`$ and $`\alpha _T`$ in (1) are the temperature-renormalized values of the interlayer coupling and anisotropy parameters, and for $`T\ll 4\pi \rho _s\overline{\sigma }_r`$ (i.e. beyond the critical region, see below)
$$f_T/f_r=(\alpha _T/\alpha _r)^2=\overline{\sigma }_r^2$$
(2)
Thus the anisotropy and interlayer coupling are strongly renormalized with temperature, which should be taken into account when treating experimental data. In the quantum regime, the parameters $`\alpha _r`$ and $`f_r`$ are the ground-state (quantum-renormalized) anisotropy and interlayer coupling (for the ferromagnetic case the ground-state renormalizations are absent). In the classical regime, $`f_r`$ and $`\alpha _r`$ are "lattice-renormalized" parameters of anisotropy and interlayer coupling, i.e. the corresponding parameters of the continuum model with the same thermodynamic properties as the original lattice model.
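Truncating Eq. (1) before the crossover function $`\mathrm{\Phi }`$ and closing it with the renormalization (2) gives a simple self-consistency problem for $`\overline{\sigma }_r`$ at a given $`T`$. A minimal numerical sketch in Python (the parameter values, function names and the damped iteration scheme are our own illustrative choices, not taken from the text):

```python
import math

def delta_gap(f, alpha):
    # Delta(f, alpha) = f + alpha + sqrt(f^2 + 2*alpha*f), as in Eq. (1)
    return f + alpha + math.sqrt(f * f + 2.0 * alpha * f)

def relative_magnetization(t, rho_s, big_gamma, f_r, alpha_r, tol=1e-12):
    """Fixed-point solution of Eq. (1) with Phi neglected (low and
    intermediate temperatures); f_T and alpha_T renormalized via Eq. (2)."""
    sigma = 1.0
    for _ in range(100000):
        rhs = 1.0 - t / (4.0 * math.pi * rho_s) * (
            math.log(big_gamma / delta_gap(f_r * sigma ** 2, alpha_r * sigma))
            + 2.0 * math.log(1.0 / sigma)
            + 2.0 * (1.0 - sigma))
        if rhs <= 0.0:
            return 0.0  # no positive solution: T at or above the ordering temperature
        if abs(rhs - sigma) < tol:
            return rhs
        sigma = 0.5 * (sigma + rhs)  # damped update for numerical stability
    return sigma
```

For instance, with the illustrative parameter set $`\rho _s=1`$, $`\mathrm{\Gamma }=32`$, $`f_r=0`$ and $`\alpha _r=0.01`$, the resulting $`\overline{\sigma }_r`$ decreases monotonically with temperature, as expected.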
Three temperature regimes can be distinguished.
(i) low temperatures, $`T\ll T_M\sim 2\pi |J|S^2/\mathrm{ln}(\mathrm{\Gamma }/\mathrm{\Delta }).`$ The analysis of the non-uniform susceptibility shows that the excitations in the whole Brillouin zone have spin-wave nature. Only the first term in the square brackets in (1) needs to be taken into account, and the magnetization also demonstrates spin-wave behavior.
(ii) intermediate temperatures, $`(\overline{S}/S)/\mathrm{ln}(\mathrm{\Gamma }/\mathrm{\Delta })\ll T/2\pi |J|S^2\ll \overline{S}/S`$ ($`T`$ is of the same order as $`T_M`$). Close to the center of the Brillouin zone the excitations still have spin-wave nature, while for large enough momenta they have non-spin-wave character. Only in-plane (two-dimensional) fluctuations are important in this regime. All the terms in (1), except for the last, are important, which leads to a significant modification of the dependence $`\overline{\sigma }_r(T)`$ in comparison with SWT.
(iii) critical region, $`T/(2\pi |J|S^2)\gg \overline{S}/S`$ ($`1-T/T_M\ll 1`$). In this regime spin-wave excitations are present only for momenta $`q^2\lesssim \mathrm{\Delta }`$ (hydrodynamic region), whereas at all other $`q`$ the excitations have non-spin-wave character. The thermodynamics in this case is determined by 3D (or Ising-like) fluctuations. The contribution of $`\mathrm{\Phi }_\text{a}`$ is of the same order as the other terms in the square brackets of (1), and the RG approach is unable to describe the thermodynamics in this regime. The values of the critical exponents can be corrected in comparison with SSWT with the use of the $`1/N`$ expansion. For the isotropic quasi-2D antiferromagnet we obtain
$$\overline{\sigma }_r^2=\left[\frac{T_N}{4\pi |J|S\overline{S}_0\gamma }\right]^{1\beta _3}\left[\frac{1}{1A_0}\left(1\frac{T}{T_N}\right)\right]^{2\beta _3}$$
(3)
with $`A_0\simeq 0.9635`$ and $`\beta _3=(1-8/\pi ^2N)/2\simeq 0.36.`$
Up to some (unknown) constant $`C(f/\alpha )`$ we have for the transition temperature the result
$$1=\frac{T_M}{4\pi \rho _s}\left[\mathrm{ln}\frac{2\mathrm{\Gamma }(T_M)}{\mathrm{\Delta }(f_t,\alpha _t)}+2\mathrm{ln}\frac{4\pi \rho _s}{T_M}+C(f/\alpha )\right].$$
(4)
Note that all the logarithmic terms are included in (4) and $`C`$ gives only a small contribution to this result. For quantum antiferromagnets the value of $`C(\mathrm{\infty })`$ can be calculated by the $`1/N`$ expansion , $`C(\mathrm{\infty })\simeq 0.0660.`$ The value of $`C(0)`$ for the same case can be deduced from experimental data on layered magnetic compounds , $`C(0)\simeq 0.7.`$
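Treating $`\mathrm{\Gamma }`$ and $`\mathrm{\Delta }(f_t,\alpha _t)`$ as fixed numbers, Eq. (4) becomes a single transcendental equation for $`T_M`$ that can be solved by bisection. A Python sketch (the bracketing interval and the constant-$`\mathrm{\Gamma }`$ simplification are our own assumptions):

```python
import math

def ordering_temperature(rho_s, big_gamma, delta, c_const):
    """Bisection solution of Eq. (4), treating Gamma and Delta(f_t, alpha_t)
    as fixed numbers (adequate, e.g., for the classical regime)."""
    def residual(t):
        return t / (4.0 * math.pi * rho_s) * (
            math.log(2.0 * big_gamma / delta)
            + 2.0 * math.log(4.0 * math.pi * rho_s / t)
            + c_const) - 1.0
    # residual < 0 at very low t, > 0 at t = 4*pi*rho_s for Gamma >> Delta
    lo, hi = 1e-9 * rho_s, 4.0 * math.pi * rho_s
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because $`T_M`$ enters only linearly and through logarithms, the residual is monotonic over the bracketing interval and the bisection converges to machine precision.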
For practical purposes, simple interpolation expressions for the functions $`\mathrm{\Phi }(x)`$, which make it possible to describe the crossover temperature region, are useful. We obtain at $`x<1`$:
$`\mathrm{\Phi }(x)|_{\alpha =0}`$ $`=`$ $`{\displaystyle \frac{x}{\sqrt{x^2+1}}}\left[C(\mathrm{\infty })-2+8\mathrm{ln}2\right],`$
$`\mathrm{\Phi }(x)|_{f=0}`$ $`=`$ $`{\displaystyle \frac{x}{\sqrt{x^2+1}}}\left[C(0)-1+3\mathrm{ln}3\right].`$ (5)
Numerical calculations with the use of the equations obtained yield a good agreement with experimental data on the layered perovskites (see Figs. 1-2 and the Table 2), and the Monte-Carlo results for the anisotropic classical systems.
| Compound | La<sub>2</sub>CuO<sub>4</sub> | K<sub>2</sub>NiF<sub>4</sub> | Rb<sub>2</sub>NiF<sub>4</sub> | K<sub>2</sub>MnF<sub>4</sub> | CrBr<sub>3</sub> |
| --- | --- | --- | --- | --- | --- |
| $`T_M^{\text{SSWT}},`$K | 527 | 125 | 118 | 52.1 | 51.2 |
| $`T_M^{\text{RG}},`$K | 343 | 97.0 | 95.0 | 42.7 | 39.0 |
| $`T_M^{\mathrm{exp}},`$K | 325 | 97.1 | 94.5 | 42.1 | 40.0 |
In the easy-plane case ($`\eta <0`$) a finite-temperature magnetic transition is absent at $`\alpha =0.`$ At the same time, a Kosterlitz-Thouless transition, where unbinding of topological excitations (vortex pairs) occurs, takes place. For small anisotropy values the temperature of this transition is low, and to leading logarithmic accuracy
$$T_{KT}^{(0)}=\frac{4\pi |J|S^2}{\mathrm{ln}|\pi ^2/\eta |}$$
(6)
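As a numeric illustration of Eq. (6), the leading-log expression can be evaluated directly; the exchange constant and spin below are hypothetical, chosen only to show how weakly $`T_{KT}^{(0)}`$ depends on the anisotropy $`\eta `$:

```python
import math

def t_kt_leading_log(J, S, eta):
    """Leading-log Kosterlitz-Thouless temperature of Eq. (6):
    T_KT^(0) = 4*pi*|J|*S**2 / ln|pi**2 / eta|."""
    return 4.0 * math.pi * abs(J) * S**2 / math.log(abs(math.pi**2 / eta))

# Hypothetical parameters (|J| in kelvin, S = 1/2): reducing eta by two
# orders of magnitude lowers T_KT only logarithmically.
print(t_kt_leading_log(J=10.0, S=0.5, eta=-1e-2))
print(t_kt_leading_log(J=10.0, S=0.5, eta=-1e-4))
```

The logarithmic denominator is why even a very small in-plane anisotropy yields a sizable $`T_{KT}^{(0)}`$, and also why the two-loop corrections discussed below matter.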
which is similar to the result for $`T_M`$ in the easy-axis case. As for magnets with small easy-axis anisotropy, topological excitations are important only close to $`T_{KT}.`$ Using the renormalization group approach, a result similar to (4) can be obtained
$$1=\frac{T_{KT}}{4\pi \rho _s}\left[\mathrm{ln}\frac{\mathrm{\Gamma }(T_{KT})}{|f_r|}+4\mathrm{ln}\frac{4\pi \rho _s}{T_{KT}}+C^{}\right]$$
(7)
For the magnetic ordering temperature in the presence of interlayer coupling we obtain
$$1=\frac{T_M}{4\pi \rho _s}\left[\mathrm{ln}\frac{\mathrm{\Gamma }(T_M)}{|f_r|}+4\mathrm{ln}\frac{4\pi \rho _s}{T_M}+C^{}-\frac{2A^2}{\mathrm{ln}^2(f/\alpha )}\right]$$
(8)
The comparison of our results with the experimental data is presented in Table 3 (the parameters $`C^{}\simeq 1.0`$ and $`A=3.5`$ were fitted to the first compound).
| Compound | K<sub>2</sub>CuF<sub>4</sub> | stage-2 NiCl<sub>2</sub> | BaNi<sub>2</sub>(PO<sub>4</sub>)<sub>2</sub> |
| --- | --- | --- | --- |
| $`T_{KT}^{\text{(0)}},`$K | 11.4 | 35.3 | 45.0 |
| $`T_{KT}^{\text{RG}};T_C^{\text{RG}},`$ K | 5.5; 6.25 | 17.4; 18.7 | 23.0; 24.3 |
| $`T_{KT}^{\mathrm{exp}};T_C^{\mathrm{exp}},`$ K | 5.5; 6.25 | 18$`รท`$20 | 23.0; 24.5 |
One can see that the two-loop corrections radically improve the result (6).
Figure captions
Fig.1. The theoretical temperature dependences of the relative sublattice magnetization $`\overline{\sigma }_r`$ from the spin-wave theory (SWT), SSWT, Tyablikov theory (TT), RG approach (1), and the experimental points for La<sub>2</sub>CuO<sub>4</sub> . The RG curve corresponds to inclusion of the function $`\mathrm{\Phi }_a^{\text{AF}}(t/\overline{\sigma })`$ given by (5). The $`1/N`$ curve is the critical behavior predicted by $`1/N`$-expansion (3). The result of $`1/N`$-expansion at intermediate temperatures practically coincides with RG$`^{}.`$
Fig.2. Temperature dependence of the relative sublattice magnetization $`\overline{\sigma }(T)`$ of K<sub>2</sub>NiF<sub>4</sub> in the SWT, SSWT, RG approaches and $`1/N`$-expansion for $`O(N)`$ model as compared with the experimental data (circles). The RG curve corresponds to inclusion of the function $`\mathrm{\Phi }_a^{\text{AF}}(t/\overline{\sigma })`$ given by (5). Short-dashed line is the extrapolation of the $`1/N`$-expansion result to the critical region. |
no-problem/0001/astro-ph0001454.html | ar5iv | text | # Near-Infrared-Spectroscopy with Extremely Large Telescopes: Integral-Field- versus Multi-Object-Instruments
## 1. Extremely Large Telescopes and Near-Infrared-Spectroscopy
### 1.1. Science with Extremely Large Telescopes
Extremely Large Telescopes of the next century will have diameters of up to 100 m. Compared to current state-of-the-art 10 m-class telescopes, the biggest of these telescopes will provide a collecting power roughly 100 times as big, and an angular resolution 10 times as good. The science drivers for such telescopes are threefold:
First, and most straightforward, we will be able to carry out spectroscopy of objects that we already know about from deep imaging, but which are too faint for spectroscopy with present-day telescopes. The most prominent and most frequently cited target of such observations is the Hubble Deep Field.
Second, we will image objects that we have never seen before, because they are too faint or too distant.
These two science drivers are the straightforward extension of what astronomers have done during the last century, and at first sight seem related only to the collecting area of the telescopes. But since such observations will be background limited, for seeing-limited observations of point sources we will only gain a factor of 10 (2.5 magnitudes) compared to the existing 10 m telescopes. Only adaptive-optics-assisted observations at the diffraction limit of the telescopes will boost the limiting magnitude by a factor of 100 (5 magnitudes) when enlarging the mirror size from 10 m to 100 m. A high angular resolution capability will therefore be mandatory.
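These two scalings follow from background-limited point-source photometry: the signal grows with the collecting area, while the noise grows with the square root of the background falling within the point-spread-function. A minimal sketch of this argument (an illustration, not taken from the paper):

```python
import math

def limiting_mag_gain(d_small, d_large, diffraction_limited):
    """Gain in background-limited point-source limiting magnitude when
    the aperture grows from d_small to d_large.

    Seeing-limited: PSF solid angle fixed     -> S/N ~ D    -> F_lim ~ 1/D
    Diffraction-limited: solid angle ~ 1/D**2 -> S/N ~ D**2 -> F_lim ~ 1/D**2
    """
    exponent = 2.0 if diffraction_limited else 1.0
    return 2.5 * math.log10((d_large / d_small) ** exponent)

print(limiting_mag_gain(10.0, 100.0, diffraction_limited=False))  # 2.5
print(limiting_mag_gain(10.0, 100.0, diffraction_limited=True))   # 5.0
```

With a fixed (seeing-limited) PSF the limiting flux improves only as 1/D, while shrinking the PSF to the diffraction limit contributes another factor 1/D, reproducing the 2.5 and 5 magnitude figures.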
And third, most exciting for us, is the prospect of exploring the universe at angular scales of a few milliarcseconds, the diffraction limit of such an Extremely Large Telescope. Like the Hubble Deep Field for the faint object science, the direct imaging and spectroscopy of planets can serve as the final goal for high angular resolution astronomy. While imaging at this angular resolution will also be possible with interferometric arrays like the VLT, only the collecting area of several 1000 m<sup>2</sup> will provide enough photons for spectroscopy.
### 1.2. The Need for Integral-Field- and Multi-Object-Instruments
Since an Extremely Large Telescope will cost of the order of 1 billion US$, the throughput of the instruments has the highest priority. Throughput in this context does not only mean imperfect transmission and detection of the light, but specifically multiplex-gain. For the faint-object-science, best throughput implies simultaneous spectroscopy of as many objects as possible. This is the standard domain of multi-object-spectroscopy. On the other hand, if objects are to be resolved, and if we are interested in their complex structure, integral-field-spectroscopy is the first choice. This technology is definitely required when observing with adaptive optics at the diffraction limit of a telescope, both to avoid imperfect slit-positioning on the object, and for post-observational correction of the imperfect point-spread-function by means of deconvolution.
### 1.3. Why Near-Infrared-Spectroscopy?
There are several reasons, both object-inherent and technical, to carry out a significant fraction of the observations with such a telescope at near-infrared wavelengths:
First, many of the faint objects we are looking for (like those in the Hubble Deep Field) are at high redshift. Therefore a lot of the well established "optical" spectral diagnostics are shifted beyond 1 micron.
Second, many of the interesting objects in the universe (like nuclei of galaxies, star- and planet-forming regions) are hidden behind dust. For example our Galactic Center is dimmed in the visible by about 30 magnitudes, while we suffer from only 3 magnitudes of extinction in K-Band (2.2 $`\mu `$m).
And third, high angular resolution through the earth's atmosphere is much more easily achieved at longer wavelengths. Even though there is no limit in principle to reaching the diffraction limit in the visible, the high complexity of an adaptive optics system for an extremely large telescope, with roughly $`10^5`$ actuators, suggests starting with the easier task of correcting in the near-infrared.
## 2. Concepts for Integral-Field- and Multi-Object-Spectroscopy
In this section we will present current developments and concepts for integral-field-spectroscopy and multi-object-spectroscopy (with emphasis on the technology developed at the Max-Planck-Institut fรผr extraterrestrische Physik), and outline the technology-challenge for their operation at cryogenic temperatures.
A number of instruments have been built or are going to be built for integral-field-spectroscopy and multi-object-spectroscopy. Even though most of them are designed for operation at visible wavelengths, their concepts are applicable for the near-infrared as well. In this section we will describe the basic idea behind the different approaches and compare their specific properties and feasibility at cryogenic temperatures.
### 2.1. Integral-Field-Spectroscopy
An integral-field-spectrograph obtains simultaneously the spectra for a two-dimensional field with a single exposure. It is thereby distinguished from several other ways of measuring spectra for a two-dimensional field, which all need multiple integrations. Well known and applied in astronomy for several decades are (1) Fabry-Perot imaging-spectroscopy, (2) Fourier-transform-spectroscopy, and (3) slit-scanning-spectroscopy. Why is integral-field-spectroscopy most appropriate for ground-based astronomy, especially at the highest angular resolution?
Fabry-Perot imaging-spectroscopy and Fourier-transform-spectroscopy require repetitive integrations to obtain full spectra. Ground-based observations therefore suffer strongly from varying atmospheric conditions. Both atmospheric absorption and emission must be measured in between two adjacent wavelength settings, and since the atmospheric properties vary on a time-scale of minutes at near-infrared wavelengths, long single exposures, and therefore high quality spectra of faint objects, are almost impossible to record with wavelength-scanning techniques. The difference between integral-field-spectroscopy and slit-scanning is in principle rather small, since both instruments provide roughly the same number of image points and the same spectral sampling. But because most astronomical targets are far from being slit-like, almost all observations can gain a lot from a square field of view.
Three basic techniques are used in todayโs instruments for integral-field spectroscopy: The Mirror-Slicer, the Fiber-Slicer and the Micro-Pupil-Array.
The basic idea of image-slicing with mirrors is rather simple: A stack of tilted plane mirrors is placed in the focal plane, and each mirror reflects the light from the image in a different direction. At a distance at which the rays from the different mirrors are clearly separated, a second set of mirrors realigns the rays to form the long-slit (figure 1) of a long-slit-spectrograph, which disperses the light along the rows of the detector. This concept was successfully applied in the 3D-instrument, a near-infrared integral-field-spectrometer developed and operated by MPE, and will be used for SPIFFI II, the adaptive-optics-assisted field-spectrometer for the VLT-instrument SINFONI. The disadvantage of this concept is that shadowing at the steps of the first stack of mirrors leads to unavoidable light losses. This shadowing effect increases with smaller mirrors and a larger field of view. To keep the light losses small, one would like to have large mirrors in the first stack. Because this increases the total slit-length, and therefore makes the collimator of the spectrograph-optics uncomfortably big, a compromise has to be found. For SPIFFI II with its approximately 1000 spatial pixels arranged in 32 slitlets, we chose the width of each mirror of the first stack to be 300 $`\mu `$m, which leads to a total slit-length of about 300 mm. For these parameters the shadowing-effect cuts out about 11% of the total light. The whole slicer for SPIFFI II will be fabricated from Zerodur using classical polishing techniques. Optical contacting of the individual mirrors will provide a monolithic structure that is insensitive to changes in temperature. With 3D we proved that the concept is feasible, and our recent results from cool-downs of an engineering slicer to the temperature of liquid nitrogen ensure operation in cryogenic instruments. However, the concept will find its limitation for much larger fields due to increased shadowing.
Other recent developments of mirror-slicers derive from the basic design with plane mirrors, and take advantage of curved mirrors. Such a concept avoids part of the shadowing-effects and provides a smaller "slit", thereby simplifying the design of the spectrograph-optics.
A completely different approach for integral-field-spectroscopy is based on optical fibers. In the image plane the two-dimensional field is sampled by a bundle of optical fibers, which are then rearranged into a "long-slit". As in the mirror-slicer concept, a normal long-slit spectrograph can be used to disperse the light. As simple and expandable as this concept seems, many little problems are inherent to such devices.
To achieve a high coupling-efficiency, an array of square or hexagonal lenslets with a filling factor of close to 100 % is used to couple the light into the fibers. However, for a high coupling efficiency the fibers have to be accurately positioned behind each lenslet and the image quality of the lenslets has to be very good. One way to loosen the constraints on the positioning accuracy and the optical quality of the lenslets is to use a larger $`A\mathrm{\Omega }`$ for the fiber, which in turn increases the f-number of the spectrograph-camera and finally limits the pixel-size, especially at extremely large telescopes. For a cryogenic instrument the positioning of the fibers behind the lenslets is another critical point, since differential thermal contraction both complicates gluing of the lenslets and fibers, and degrades the coupling efficiency through small displacements. For SPIFFI we therefore started the development of monolithic lens-fiber-units (figure 2), each consisting of a silica fiber that has been flared, with a spherical lens polished onto it. Even though individual fiber-lens-units could be produced with an overall transmission of more than 70 % (including coupling efficiency, reflection losses and intrinsic absorption), the technology is not yet optimized for producing several thousand fibers at reasonable cost. Despite all the technical problems with the image-slicer based on flared fibers, there are three main advantages of this concept over its competitors: First, its possible extension to any number of fibers. Second, this concept provides full flexibility in the arrangement of the fibers (see the section on multi-field-spectroscopy). Third, the flared fiber is insensitive to a change in temperature and can be used at cryogenic temperatures. The flared-fiber technology will be implemented in LUCIFER, the general-purpose near-infrared instrument for the LBT.
Like the fiber-solution, the third concept for integral-field-spectroscopy is based on a micro-lens-array in the image plane. But instead of reimaging the pupil of the individual lenses onto different fibers, the whole set of micro-pupils is now fed into a spectrograph. Since the micro-pupils fill only a small fraction of the total field, a slight tilt in the dispersion direction lets the spectra fill the unused detector area between the micro-pupils without overlap of the individual spectra. Compared to the fiber-based concept, there is no additional loss of light due to the coupling of light into a fiber. Also, the technology of producing micro-lens-arrays is now well established, and its application in cryogenic instruments seems straightforward. But while both mirror- and fiber-slicers can disperse the light all across the detector, the spectra of the micro-pupils need to be truncated before they overlap with the spectra of another micro-pupil. Therefore such instruments can provide high spectral resolution only for a very limited wavelength range.
### 2.2. Multi-Object-Spectroscopy
The need for multi-object-spectroscopy is obvious for faint object astronomy. Whenever good statistics is crucial for the scientific interpretation, we need to have information on as many objects as possible. And since many programs require integration times of several hours per object, simultaneous observations of a large number of objects are the only possibility to carry out the observations within a reasonable time.
There are two basic concepts to carry out simultaneous spectroscopy of multiple objects in a field: Using multiple slits and coupling the light into fibers.
In the multi-slit approach a mask with slits located at the object positions is placed in the image-plane. This slit-plate is normally fabricated "off-line" prior to the observation. Special care must be taken to avoid overlap of the spectra from different slits. Therefore usually several masks and observations are required for a complete set of spectra of the objects within a given field. The big advantage of such slit-mask-spectrographs is their high optical throughput, since no additional optical element is introduced. Examples of such instruments are the CFHT MOS and the two VIRMOS instruments for the VLT. One way to overcome the "off-line" production of the slit-masks might be micro-mirror arrays, which would allow electronically controlled object selection.
The second concept of multi-object-spectroscopy is based on fibers. While in previous instruments the fibers had to be placed by hand, nowadays robots arrange the fibers, as in the AAT 2dF. Depending on the image-scale and the f-number, the light is either coupled directly into the fiber, or a lenslet is used to reimage the telescope-pupil onto the fiber core. Like the fiber-based integral-field-unit, such multi-object-spectrographs can be expanded to almost any number of objects.
For the time being, no cryogenic multi-object-spectrograph for the infrared wavelength range has been put into operation. For the LUCIFER instrument for the LBT, however, possible realizations of the two concepts (multi-slit and fiber-based) in a cryogenic instrument are under study. In a multi-slit-spectrograph, the key technical problem is that the slit-masks have to be produced "off-line" and need to be inserted into the cryogenic system. One possibility would be an air-lock through which a set of slit-masks is fed into a juke-box and cooled down to the temperature of liquid nitrogen, before the masks are actually moved into the field. A fiber-based system, however, will require a fully cryogenic robot to position the fibers. But unlike their optical counterparts, present-day fibers for the infrared are either rather fragile (zirconium fluoride) or show significant extinction towards longer wavelengths (water-free silica). Therefore long fibers and big movements should be avoided, and a "Spaltspinne"-like mount of the fibers, with a long-slit-spectrograph located directly behind, seems most promising. While the fiber technology (e.g. based on the monolithic concept described for the integral-field-unit) is almost established, a reliable cryogenic robot is not yet in operation.
A problem common to both multi-object-concepts described above is the need for precise target positions. In addition, no (fiber-concept) or only very limited (multi-slit, since all slits are parallel) spatial information can be obtained.
### 2.3. Multi-Field-Spectroscopy
Both problems of multi-object-spectrographs (the need for precise target positions and the lack of spatial information) will be overcome by multi-field-spectroscopy: as in multi-object-spectrographs, multiple objects are observed simultaneously, but now each object is spatially sampled with an integral-field-unit.
In principle, each of the three basic concepts for integral-field-units (mirror-slicer, fiber-slicer, or micro-pupil-array) could be combined with the multi-object concept:
Small mirror-slicers or micro-pupil-arrays are the natural extension of the slit-mask (figure 3). Assuming the same pixel-scale, the same size of the individual fields and the same size of the detector, the source density decides which slicer-concept best matches the science-program. For micro-pupil-arrays, all the objects should lie within a field with a linear dimension equal to the number of fields times the size of each single field. For mirror-slicers, the objects should be arranged more loosely, separated by at least the square of the linear dimension of each subfield.
Most promising, however, is the combination of the fiber-based multi-object- and integral-field-concepts. The single fibers of the multi-object-spectrograph need only to be replaced by small fiber-slicers built from several lenslet-fiber-units (figure 4). Depending on the science-program, either small individual fields with about seven pixels each, or larger fields with about 100 spatial elements would be selected. The monolithic concept developed for the fiber-slicer as described above would fulfill all requirements for this kind of multi-field-spectroscopy.
However, like the fiber-based multi-object-spectrographs, all multi-field solutions for the infrared wavelength range require reliable cryogenic robots.
## 3. Comparison
### 3.1. General Considerations for Extremely Large Telescopes
Before comparing integral-field-, multi-object-, and multi-field-spectroscopy for extremely large telescopes, we discuss the noise regime and pixel-scales for such instruments.
What is the maximum pixel scale? To get a rough estimate, we assume a telescope with a diameter of 100 m. The physical size of a pixel of a present-day near-infrared detector is about 20 $`\mu `$m. We further know from present-day near-infrared instruments like SPIFFI that the f-number of any camera optics needs to be greater than or roughly equal to 1 to achieve acceptable image quality. From this limit, and the fact that $`A\mathrm{\Omega }`$ is preserved in imaging optics, one can derive a maximum pixel size of 60 mas. So whenever larger image elements are required, their flux must be spread over several pixels. The smallest pixel scale is determined by the diffraction limit of the telescope. For the H-band (1.65 $`\mu `$m) the appropriate pixel scale to Nyquist-sample the image is 3 mas.
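As a rough cross-check of these two numbers: the coarse limit follows from the Lagrange invariant (pixel angle on the sky roughly equals physical pixel size divided by camera f-number times telescope diameter) and the fine limit from $`\lambda /D`$. A sketch with the values quoted above; the text's 60 mas corresponds to slightly different assumptions, so only the order of magnitude is reproduced:

```python
import math

RAD_TO_MAS = 180.0 / math.pi * 3600.0 * 1000.0  # radians -> milliarcseconds

def max_pixel_scale_mas(pixel_m, f_number, d_tel_m):
    """Largest sky angle one detector pixel can subtend, from A*Omega
    conservation with the fastest feasible camera (f-number >= ~1)."""
    return pixel_m / (f_number * d_tel_m) * RAD_TO_MAS

def diffraction_limit_mas(wavelength_m, d_tel_m):
    """Diffraction limit lambda/D of the telescope."""
    return wavelength_m / d_tel_m * RAD_TO_MAS

coarse = max_pixel_scale_mas(20e-6, 1.0, 100.0)  # ~41 mas (text quotes ~60)
fine = diffraction_limit_mas(1.65e-6, 100.0)     # ~3.4 mas in H-band
print(round(coarse, 1), round(fine, 1))
```

The two limits bracket the usable range of pixel scales for a 100 m telescope: a few mas at the diffraction limit up to a few tens of mas for seeing-limited work.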
What is the noise regime we have to work with? Let us assume H-band observations again. Most of the sky-background in this wavelength range arises from about 70 OH lines, which sum up to a total surface brightness of about 14 mag / arcsec<sup>2</sup>. The flux between the OH-lines is roughly 18 mag / arcsec<sup>2</sup>. The first lesson we learn from these numbers is that even for present-day technology OH-suppression is crucial for deep observations. In order to lose only 1/10 of the H-band-spectrum to OH-contaminated pixels, roughly 1400 pixels are required for Nyquist-sampling, corresponding to a spectral resolution of approximately 3000 in the H-band. But even at this spectral resolution and with adaptive-optics pixel-scales, observations will be background-limited, assuming future detectors with a read-noise close to 1 electron and negligible dark-current, and integration times of the order of 1 hour.
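The figure of 1400 pixels can be reproduced with simple bookkeeping, assuming (an assumption of this sketch, not stated explicitly in the text) that each unresolved OH line contaminates about two Nyquist-sampled pixels, and taking an H-band width of about 0.3 micron:

```python
n_oh_lines = 70          # bright OH lines across the H band
pixels_per_line = 2      # an unresolved, Nyquist-sampled line spans ~2 pixels
max_lost_fraction = 0.1  # tolerate losing 1/10 of the spectrum

# Total spectral pixels needed so that contaminated pixels stay below 1/10:
n_pixels = n_oh_lines * pixels_per_line / max_lost_fraction

# Spectral resolution: 2 pixels per resolution element over an assumed
# H-band width of 0.30 micron centred at 1.65 micron.
band_center_um, band_width_um = 1.65, 0.30
resolution = band_center_um / (band_width_um / (n_pixels / 2))
print(int(n_pixels), int(resolution))  # 1400 pixels, R ~ 3850
```

The resulting resolution is of the same order as the "approximately 3000" quoted in the text; the exact value depends on the adopted band edges.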
### 3.2. What concept is best suited for extremely large telescopes?
Extremely large telescopes provide the unprecedented opportunity for spectroscopy of (a) extremely small and (b) extremely faint objects.
For us it is obvious that spectroscopy at the diffraction limit of an adaptive-optics-equipped telescope, with pixel scales of the order of milliarcseconds, requires integral-field-units. Of the three concepts described above (mirror-slicer, fiber-slicer, micro-pupil-array), the mirror-slicer provides the most efficient use of detector elements, because it is the only technology that actually uses almost all pixels. Not yet being at its limit in field coverage, this concept may be the choice for the next generation of instruments. In the more distant future, when detector size and availability will no longer limit our instrumentation plans, the micro-pupil-concept, and finally the most expandable fiber-concept, are most appropriate.
Since we will be sky-limited at near-infrared-wavelengths at any pixel-scale, the biggest gain in sensitivity (5 magnitudes) over 10m-class telescopes will be achieved by adaptive-optics-assisted observations of point-like sources. In order to make most efficient use of the telescope time, multi-object spectroscopy will be one of the most important operation modes for extremely large telescopes. But with the pixel-size of a few milliarcseconds, the problem of accurate slit-positioning will require the extension of the multi-object-technique towards the multi-field-approach. For this application the fiber-based-concept combined with a cryogenic robot seems most promising.
For spectroscopic surveys the object density will finally determine the most appropriate instrumentation. But one should keep in mind that, in contrast to smaller but equally sensitive future space-telescopes, the maximum pixel size for a 100 m telescope will be limited to about 50 mas. Assuming OH-suppressed observations in the H-band with roughly 1400 spectral pixels for each image point, even 16 detectors with 4k $`\times `$ 4k pixels could only cover a field 20 arcsec on a side. In order to make efficient use of an integral-field-unit for this application, the source density should be of the order of $`10^4`$ objects per square arcminute, as in extragalactic star-forming regions. For most applications, however, the source density will be much smaller, and the combination of deep imaging and multi-field-spectroscopy will match best.
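The quoted field size follows directly from the pixel budget; a sketch with the numbers given in the text (16 detectors of 4k $`\times `$ 4k pixels, 1400 spectral pixels per image point, 50 mas spatial pixels):

```python
import math

n_detectors = 16
pixels_per_detector = 4096 * 4096
spectral_pixels = 1400        # OH-suppressed H-band spectrum per image point
pixel_scale_arcsec = 0.050    # maximum pixel size for a 100 m telescope

spatial_pixels = n_detectors * pixels_per_detector / spectral_pixels
field_side_arcsec = math.sqrt(spatial_pixels * pixel_scale_arcsec**2)
print(round(field_side_arcsec, 1))  # ~22, i.e. "about 20 arcsec on a side"
```

About $`2\times 10^5`$ spatial pixels remain for the field, which at 50 mas per pixel is indeed only some 20 arcsec on a side.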
no-problem/0001/astro-ph0001015.html | ar5iv | text | # New Direct Observational Evidence for Kicks in SNe
## 1. Introduction
We present an updated list of strong, direct evidence in favour of kicks being imparted to newborn neutron stars. In particular we discuss the new cases of evidence resulting from recent observations of the X-ray binary Circinus X-1 and the newly discovered binary radio pulsar PSR J1141-6545. We conclude that the assumption that neutron stars receive a kick velocity at their formation is unavoidable (van den Heuvel & van Paradijs 1997).
This assumption explains a large variety of observations, ranging from directly observed properties of individual binary pulsars and Be/X-ray binaries to the observed birth rates and dynamical properties of the populations of LMXBs and binary recycled pulsars, as well as the motion and distribution of single pulsars. Below we give an updated list of the evidence in favour of kicks, based on the compilation given by van den Heuvel & van Paradijs (1997); see references therein for details.
## 2. List of evidence for asymmetric supernovae and resulting kicks
* High radial velocity of Circinus X-1 (Tauris et al. 1999), cf. Sect. 3
* Nonalignment of spin and orbit in PSR J0045-7319 (Kaspi et al. 1996)
* Geodetic precession and evolution of PSR 1913+16 (Wex et al. 1999)
* Low eccentricity of PSR J1141-6545 (Tauris & Sennels 2000), cf. Sect. 4
* High eccentricities of Be/X-ray binaries (Verbunt & van den Heuvel 1995)
* High velocities of certain BMSP/LMXBs (e.g. Tauris & Bailes 1996)
* Population synthesis/incidence arguments (e.g. Dewey & Cordes 1987)
The origin of the kick mechanism is still rather unclear; it may be related either to a neutrino-driven convection instability and an asymmetric outflow of neutrinos, or to mass outflow in an MHD jet during or following the core collapse.
## 3. Circinus X-1: survivor of a highly asymmetric SN
Based on the recently measured (Johnston, Fender & Wu 1999) radial velocity of +430 km s<sup>-1</sup>, Tauris et al. (1999) find that a minimum neutron star kick velocity of $`\sim `$500 km s<sup>-1</sup> is needed to account for such a high system radial velocity (they find that, on average, a kick of $`\sim `$740 km s<sup>-1</sup> is necessary). This is by far the largest kick needed to explain the motion of any observed binary system. It should be noted that this result is independent of the uncertainty in the exact mass of the companion star (most likely $`1<M_2/M_{}<2`$).
## 4. PSR J1141-6545: a young pulsar with an old degenerate companion
The very recently discovered non-recycled binary pulsar PSR J1141-6545 (Manchester et al. 1999) has a massive companion ($`M_2\simeq 1.0M_{}`$) in an eccentric 0.198 day orbit. The pulsar's high value of $`\dot{P}_{\mathrm{spin}}`$ ($`4.31\times 10^{-15}`$), in combination with its relatively slow rotation rate ($`P_{\mathrm{spin}}=394`$ ms) and non-circular orbit ($`e=0.17`$), identifies this pulsar as being young and the last-formed member of a double degenerate system. Given $`P_{\mathrm{orb}}=0.198`$ days, it is evident that a non-degenerate star cannot fit into the orbit without filling its Roche-lobe. Based on evolutionary considerations and population synthesis, Tauris & Sennels (2000) demonstrate that the companion is most likely to be an O-Ne-Mg white dwarf. In that case the system resembles PSR B2303+46: the first neutron star - white dwarf binary system observed in which the neutron star was born after the formation of the white dwarf (van Kerkwijk & Kulkarni 1999). The existence of a minimum eccentricity for systems undergoing a symmetric SN follows from celestial mechanics (e.g. Flannery & van den Heuvel 1975): $`e=(M_{2\mathrm{He}}-M_{\mathrm{NS}})/(M_{\mathrm{WD}}+M_{\mathrm{NS}})`$
If one assumes $`M_{2\mathrm{He}}>M_{\mathrm{He}}^{\mathrm{crit}}\simeq 2.5M_{}`$, $`M_{\mathrm{WD}}^{\mathrm{max}}<1.4M_{}`$ and $`M_{\mathrm{NS}}=1.3M_{}`$, it follows that $`e>0.45`$ (note that this result also remains valid if the companion star should turn out to be another neutron star with a similar mass). In order to reproduce the low observed eccentricity ($`e=0.17`$), Tauris & Sennels (2000) conclude that the neutron star must have received a natal kick velocity of $`>`$100 km s<sup>-1</sup>.
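The eccentricity bounds above follow directly from the symmetric-SN formula of Sect. 4; a small numeric check:

```python
def min_eccentricity(m_he, m_ns=1.3, m_wd=1.4):
    """Post-SN eccentricity for a symmetric explosion: mass lost in the SN
    (helium-star mass minus neutron-star mass) over the final total mass,
    e = (M_2He - M_NS) / (M_WD + M_NS)."""
    return (m_he - m_ns) / (m_wd + m_ns)

print(round(min_eccentricity(2.5), 2))  # 0.44, i.e. e > ~0.45 for M_He > 2.5
print(round(min_eccentricity(2.0), 2))  # 0.26 for the extreme M_He = 2.0
```

Both values clearly exceed the observed e = 0.17, which is why a kick is required even under very conservative assumptions on the helium-star mass.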
no-problem/0001/astro-ph0001380.html | ar5iv | text | # Design and expected performance of the ANTARES neutrino telescope.
## 1 Aims and principle.
ANTARES (Astronomy with a Neutrino Telescope and Abyss environmental RESearch) is an international collaboration which aims at the construction and operation of a large undersea telescope for the detection and study of high energy cosmic neutrinos. The collaboration is growing rapidly and is composed of particle physicists and astronomers, as well as of experts in sea science and technology.
The physics and astrophysics aims are described in more detail in the contribution by L. Moscoso to this conference. The basic idea of the detection of high energy cosmic neutrinos by large undersea or under-ice Cerenkov detectors is almost 40 years old: an array of optical modules (OMs) is used to detect the Cerenkov signal emitted by muons in water. The muons are induced by charged-current interactions of neutrinos at some distance from the detector. The target size is thus of the order of the muon range, i.e. much larger than the detector itself. The overwhelming background of downward-going muons produced in the high atmosphere is reduced by shielding the apparatus under some thousands of meters of water, and the remaining atmospheric muons are rejected by looking only at upward-going muons. The muon trajectory is accurately reconstructed from the Cerenkov photon arrival time information on each photomultiplier (PMT) contained in the optical modules. At high energy (above a few TeV) the neutrino direction is well preserved at the interaction, and the resulting pointing accuracy is better than a fraction of a degree, allowing accurate source identification. Given that the expected fluxes are very low, the ultimate goal is the realisation of a km<sup>3</sup> detector, which should record hundreds or thousands of cosmic neutrino events with energies above a few TeV; a 10 times smaller detector would nevertheless be able to reveal the first high energy cosmic neutrinos and maybe identify some point-like sources. At lower energies, contained or semi-contained events can be used and the neutrino energy can be inferred from the muon range measurement, giving access to neutrino oscillation physics using atmospheric neutrinos or to indirect neutralino searches in the core of the Earth, the Sun or the Galaxy.
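For orientation, the Cerenkov emission angle that underlies this track reconstruction follows from cos θ<sub>c</sub> = 1/(nβ); a minimal sketch, assuming a refractive index of about 1.35 for sea water at blue wavelengths (a standard value, not quoted in the text):

```python
import math

def cherenkov_angle_deg(n, beta=1.0):
    """Cerenkov emission angle: cos(theta_c) = 1 / (n * beta)."""
    return math.degrees(math.acos(1.0 / (n * beta)))

# n ~ 1.35 for sea water at blue wavelengths (assumed value); relativistic
# muons have beta ~ 1, so the emission angle is essentially fixed.
print(round(cherenkov_angle_deg(1.35), 1))  # ~42 degrees
```

Because this angle is fixed along the track, the photon arrival times on the OMs constrain the muon trajectory geometrically, which is what makes the sub-degree pointing accuracy possible.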
## 2 The R&D program.
Since 1996, the ANTARES collaboration has performed an active R&D program to show the feasibility of a large undersea detector. Indeed, past experience had shown that the realisation of such a detector is not trivial and that specific studies in sea technology were needed. The required studies were carried out by ANTARES and concerned: the mechanical structure of the elementary detector lines, the deployment and recovery techniques, the connection of the detector to the shore with an electro-optical cable for energy supply and data transfer, PMT front-end electronics, data readout and remote control, and the monitoring of the OM positions on the whole structure. Furthermore, many questions concerning the deep sea environmental parameters, water optical properties and long-term effects needed to be assessed. For that reason, ANTARES performed many in situ measurements to answer these questions.
Environmental studies:
Many measurements and long-term surveys of environmental parameters, such as current velocity and its variation, optical background, light attenuation and scattering in water, and fouling on the OMs, were performed. These measurements were obtained in situ by instrumented autonomous mooring lines deployed on a Mediterranean site located off-shore from Toulon (France) at a depth of 2400 m (hereafter called the ANTARES site).
The optical background is studied by recording the counting rate of the OMs as a function of time. For an 8" diameter PMT, it shows a continuous level of less than 50 kHz, due to Cerenkov emission from <sup>40</sup>K $`\beta `$-decays, and spikes with a typical duration of one second coming from bioluminescence activity.
Several measurements of the sea water optical properties have been performed by looking at the arrival time distribution on a small PMT placed 24 or 44 m away from a pulsed blue LED emitting 466 nm photons. A comparison of the relative proportions of direct and delayed photons leads to an absorption length of 55-65 m, accounting for seasonal variations, and to a scattering length greater than 200 m for large-angle (Rayleigh-like) scattering.
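The absorption-length extraction from such a two-distance comparison can be sketched numerically. The following is a minimal illustration; the function name and the assumption of pure absorption with a 1/d² geometric dilution are ours, not the actual ANTARES analysis, which also treats scattering:

```python
import math

def absorption_length(i_near, i_far, d_near=24.0, d_far=44.0):
    """Estimate the absorption length L (in m) from direct-photon
    intensities measured at two distances from a point source,
    assuming I(d) ~ exp(-d / L) / d**2 (pure absorption)."""
    # Divide out the geometric 1/d^2 dilution, leaving the exponential ratio.
    ratio = (i_far * d_far ** 2) / (i_near * d_near ** 2)
    # ratio = exp(-(d_far - d_near) / L)  ->  solve for L.
    return -(d_far - d_near) / math.log(ratio)
```

Intensities generated with a given L are recovered exactly by this inversion; in practice the 55-65 m seasonal spread quoted above comes from repeating such fits on real data taken at different times.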
The effect of sedimentation and bio-fouling on the transparency of the optical surface was monitored over long periods using a setup consisting of a continuous light source and PIN diodes at different positions on an optical module sphere. The optical attenuation was measured to be less than 1.5% after 8 months.
The ANTARES site is now well studied and the deep water and environmental parameters are found to be acceptable for a neutrino telescope.
Mastering the detector deployment:
It was soon understood that, owing to the critical nature of deploying any large mechanical structure and to the necessity of reaching long-term reliability, the detector structure should be as simple as possible: mere flexible string-like mooring lines anchored on the sea bed and held up by buoyancy, supporting the optical modules. The counterpart of this simplicity is a rather sparse horizontal detector density and the necessity to accurately monitor the position of every single element along the strings.
In order to learn about the complex deployment procedure as well as the mechanical behaviour of the detector during the deployment phase and when it rests at the bottom of the sea, a demonstrator line consisting of a 350 m high detector string was designed and built. This line is made of two vertical cables, separated by 2 m, supporting 16 frames each holding a pair of optical modules. The frames are placed every 15 m, starting 100 m above the sea bed. The line is fully equipped with cabling and electronics containers. It also contains all the sensors needed for the precise positioning of the detector elements and for the recording of the environmental parameters. Successful deployment tests of this line at a depth of 2400 m were performed in summer 1998 using a dynamic positioning ship, showing that the deployment and recovery procedures were well mastered. It was also proved that the bottom of a string could be set at its aimed position on the sea bed with an accuracy of the order of a meter. The string was then equipped with 8 large-dimension PMTs (8 and 10") as well as the electronics needed to transmit the signal to the shore via a 37 km long electro-optical cable. It was successfully connected to the shore station and deployed on November 26th, 1999 at a depth of 1100 m for long-term running. Raw background events and atmospheric muon data are currently being recorded and analysis is in progress.
In December 1998, we also performed successful tests of undersea electrical connections of a detector anchor at 2400 m depth using the IFREMER submarine vehicle (Le Nautile).
All this ensures the feasibility of the installation of an array of instrumented strings at the bottom of the sea.
## 3 A 0.1 km<sup>2</sup> detector.
In spring 1999, the ANTARES Collaboration started the second phase of the project, which is the design of a 0.1 km<sup>2</sup> undersea neutrino detector.
This detector will be equipped with a total of about 1000 OMs placed on 13 mooring lines, each 400 m high and spaced by 60 to 80 m. Each line will be connected to a junction box using a submarine vehicle, the junction box being connected to the shore station through a 50 km electro-optical cable. This 13-string detector is planned to be deployed on the ANTARES site by 2003. This second phase of the ANTARES project has already been approved by the French and Spanish scientific councils, and decisions are expected soon in the UK and the Netherlands.
This 0.1 km<sup>2</sup> detector is foreseen to be devoted to three main topics. The first one is neutrino astronomy, i.e. the study of cosmic neutrinos with an energy above 1 TeV, which may come from a diffuse signal or from point sources such as individual AGNs. The second subject is the search for neutrinos coming from dark matter particle annihilations in the centre of the Earth, in the Sun or in the Galactic centre. The third topic is the study of atmospheric neutrino oscillations in the 5-100 GeV range.
Extensive simulation studies have been carried out to understand the performance of such a detector for these different topics (see for example the ICRC proc. references in ). For high energy neutrino astronomy, the angular resolution is a crucial point. Simulations show that, because of the good optical quality of the sea water (low scattering) and taking into account the good timing capability of the detector, a point source will be reconstructed with half of the events contained within $`0.2^{\circ }`$ of the source direction. This result will permit the division of the sky map into 200 000 pixels. From the amount of light measured by the optical modules, the energy of the muon is estimated within a factor of 3 for muons in the 1-10 TeV range and within a factor of 2 above 10 TeV. These performances will allow the detection of a signal of cosmic neutrinos coming from sources such as AGNs above the atmospheric neutrino background: this analysis would be performed by imposing a threshold on the reconstructed neutrino energy, to reduce the atmospheric neutrino contribution to the diffuse flux, or by enriching a signal coming from point sources by overlaying the pixels of the 43 known AGNs detected by EGRET.
Studying upward going atmospheric neutrino events with an interaction point inside the detector and using the low energy (5-100 GeV) muon range as an estimator of the parent neutrino energy, one can explore the physics of atmospheric neutrino oscillations for mass differences $`\mathrm{\Delta }m^2`$ in the range between $`10^{-3}`$ and $`10^{-4}`$ eV<sup>2</sup>. Our analysis is based on the shape of the $`E/L`$ distribution and is basically independent of the poorly known absolute $`\nu _{atm}`$ flux. In case of a positive oscillation signal in the $`\mathrm{\Delta }m^2`$-$`\mathrm{sin}^22\theta `$ region allowed by the Super-Kamiokande experiment, the fit would lead to a precise determination of the oscillation parameters. On the contrary, in the absence of oscillations, the excluded region would well cover the region allowed by Super-Kamiokande.
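The E/L analysis described above rests on the standard two-flavour oscillation formula. A minimal sketch (function name and example values ours; the real analysis folds in detector response and reconstruction effects):

```python
import math

def survival_probability(e_gev, l_km, dm2_ev2, sin2_2theta):
    """Two-flavour muon-neutrino survival probability,
    P(nu_mu -> nu_mu) = 1 - sin^2(2 theta) * sin^2(1.27 dm^2 L / E),
    with L in km, E in GeV and dm^2 in eV^2."""
    phase = 1.27 * dm2_ev2 * l_km / e_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2
```

Plotting this probability against L/E for up-going events is what makes the shape of the measured E/L distribution sensitive to the oscillation parameters without requiring the absolute flux normalisation.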
## 4 Conclusions
The R&D program performed by the ANTARES Collaboration has demonstrated that the water and environmental properties of the chosen ANTARES site are well suited for the installation of the first stage of a large size neutrino telescope. It was also demonstrated that the necessary marine technologies, concerning aspects such as detector deployment, undersea connections, positioning, long-term reliability, etc., are well under control. ANTARES is starting the next step towards the km-scale neutrino telescope with the design, installation and running of a 0.1 km<sup>2</sup> detector off the Mediterranean coast of France by 2003. This detector will play a pioneering role in neutrino astronomy.
# X-ray Afterglows of Gamma-Ray Bursts
## 1. Introduction
Gamma-Ray Bursts (GRB) were discovered in 1969 (Klebesadel et al. 1973) by the Vela satellites, deployed by the USA to verify the compliance of the USSR with the nuclear test ban treaty. In the following 28 years thousands of events were observed by several satellites, leading to a good characterization of the global properties of this phenomenon. A big step in this area was achieved with BATSE (Fishman et al. 1994). The isotropic distribution of the events in the sky (Fishman & Meegan 1995) was suggestive of an extragalactic origin, but a direct measurement of the distance of a single object was not available. What was lacking was a fast AND precise position, where the Holy Grail of GRB scientists, i.e. the counterpart, could have been searched for at all wavelengths with more chances to catch it. This was achieved in 1996, with observations of GRBs by BeppoSAX.
## 2. Gamma-ray bursts in the Afterglow Era
### 2.1. The first afterglow: GRB970228
Before BeppoSAX (Piro et al. 1995, Boella et al. 1999) GRB astronomy proceeded on a statistical approach and the only information gathered was limited to the tens of seconds of the GRB: the subsequent evolution was completely unknown. The operations for a prompt follow-up of GRBs became operative in December 1996, after an off-line analysis of a GRB (GRB960720: Piro et al. 1998a) had demonstrated the designed capability of the mission. The first opportunity was on January 11, 1997: GRB970111. The field was pointed with the NFI 16 hours after the GRB. The possible association of one of the faint sources found in the error box with the GRB was under scrutiny (Feroci et al. 1998) when, on February 28, 1997, another event, GRB970228, was detected by the BeppoSAX GRBM and WFC. The NFI were pointed at the GRB location 8 hours after the burst. A previously unknown X-ray source was detected in the field of view of the LECS and MECS instruments, with a flux in the 2-10 keV energy range of $`3\times 10^{-12}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$. The new source appeared to be fading away during the observation. On March 3 we performed another observation that confirmed that the source was quickly decaying: at that time its flux was a factor of about 20 lower than in the first observation (Figure 1). This was the first detection of an “afterglow” of a GRB (Costa et al. 1997).
The flux of the source appeared to decrease following a power-law dependence on time ($`t^{-\alpha }`$) with index $`\alpha =1.3\pm 0.1`$. Further X-ray observations with the X-ray satellites ASCA and ROSAT detected the source about one week later with a flux consistent with the same law (Yoshida et al. 1997, Frontera et al. 1998a). This kind of temporal behaviour agrees with the general predictions of the fireball models for GRBs (e.g. Mészáros & Rees 1997, Vietri 2000). A backward extrapolation of this power-law decay (Fig. 2) is consistent with the X-ray flux measured during the burst, suggesting that the afterglow started soon after the GRB. Another important result came from the spectral analysis of the X-ray afterglow. It excluded a black body emission, therefore arguing against a model in which the radiation comes from the cooling of the surface of a neutron star (Frontera et al. 1998b).
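The quoted decline can be checked with a one-line model: with α = 1.3, going from the first NFI pointing at ~8 h after the burst to the second observation roughly 3 days (~80 h) later is one decade in time, i.e. a drop of 10^1.3 ≈ 20, consistent with the factor of about 20 quoted above. A minimal sketch (function name ours):

```python
def afterglow_flux(t, f_ref, t_ref, alpha=1.3):
    """Flux of a power-law afterglow, F(t) = f_ref * (t / t_ref)**(-alpha).
    Times t and t_ref in the same units; f_ref is the flux at t_ref."""
    return f_ref * (t / t_ref) ** (-alpha)
```

The same expression, run backwards in time, is what is meant by the "backward extrapolation" consistency check with the flux measured during the burst itself.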
While the X-ray monitoring of GRB970228 was going on, an observational campaign of the same object was simultaneously started with the most important optical telescopes. This campaign led to the discovery (van Paradijs et al. 1997) of an optical transient associated with the X-ray afterglow. As in the X-ray domain, the optical flux of the source showed a decrease well described by a power law with index -1.12 (e.g. Garcia et al. 1998), again in agreement with the general predictions of the fireball model. The images taken with the Hubble Space Telescope (HST) (Fruchter et al. 1997, Sahu et al. 1997) showed the presence of a nebulosity around the optical transient. However the nebulosity was very weak, and it was not possible to disentangle whether it was associated with the host galaxy (extragalactic origin) or with a transient diffuse emission representing the residual of the explosion (galactic origin).
### 2.2. The first measurement of redshift: GRB970508
On 8 May 1997 the second breakthrough arrived with GRB970508 (Piro et al. 1998b), detected just a few minutes before the satellite passed over the ground station in Malindi. This opportunity, and the experience gained from previous events, made it possible to point the BeppoSAX NFI at the source 5.7 hours after the burst, while optical observations started 4 hours after the burst.
The early detection of the optical transient (Bond 1997) and its relatively bright magnitude allowed a spectroscopic measurement of its optical spectrum with the Keck telescope (Metzger et al. 1997). The spectrum revealed the presence of FeII and MgII absorption lines at a redshift of $`z=0.835`$, attributed to the presence of a galaxy between us and the GRB, and therefore demonstrated that GRB970508 was at a cosmological distance.
### 2.3. GRB are long-lasting phenomena โฆ after all!
The BeppoSAX observation of GRB970508 has also changed our view of the GRB phenomenon. The old concept of a brief, sudden release of luminosity concentrated in a few seconds does not stand up to the new information provided by BeppoSAX. Indeed the name afterglow attributed to the X-ray emission observed after the event is somewhat misleading. This is clear when one considers the energy produced in the afterglow phase, which turns out to be comparable to that of the GRB. In order to compute the energy emitted in the afterglow phase it is necessary to integrate its luminosity in time, and it is then crucial to know when the afterglow starts. A detailed analysis of the data of the WFC of GRB970508 (Piro et al. 1998b) shows that the X-ray emission is present (Fig. 4) even when the signal of the light curve disappears into the noise at $`\sim 30`$ s (Fig. 3), and remains visible for at least 2000 seconds, when the flux goes below the sensitivity of the WFC (Fig. 5). The conclusion is then that the afterglow phase starts immediately after (or even before) the prompt emission settles down. The energy emitted in the afterglow in X-rays turns out to be a substantial fraction (40-50%) of the energy produced by the GRB. Furthermore the light curve in Fig. 5 shows a rebursting event starting 1 day after the initial burst, evidence that the source of the energy can re-ignite on long time scales (Piro et al. 1998b).
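The energy bookkeeping behind the 40-50% figure amounts to integrating a power-law light curve from its early start. A sketch of the analytic integral (function name and illustrative numbers ours):

```python
def powerlaw_fluence(f0, t0, t1, alpha=1.3):
    """Time integral of F(t) = f0 * (t / t0)**(-alpha) from t0 to t1,
    for alpha != 1.  For alpha > 1 the integral converges as t1 grows,
    approaching f0 * t0 / (alpha - 1): most of the energy is released
    shortly after the burst, which is why the starting time of the
    afterglow matters so much for the energy budget."""
    return f0 * t0 / (alpha - 1.0) * (1.0 - (t1 / t0) ** (1.0 - alpha))
```

Because the integral is dominated by early times, pushing the afterglow onset back from hours to tens of seconds (as the WFC data do) raises the inferred afterglow energy substantially.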
## 3. The largest and most distant explosions since the Big Bang
The GRB observed by BeppoSAX on December 14, 1997, on the one hand consolidated the extragalactic origin of GRBs and, on the other, underlined the problem of the energy budget and, ultimately, of the nature of the “central engine”.
The chain of steps leading to the identification of the counterpart of GRB971214 (Dal Fiume et al. 1999) and its distance was the same as in previous BeppoSAX observations. With a redshift z=3.42 (Kulkarni et al. 1998a), this GRB and its host galaxy are at a distance that corresponds to a look-back time of about 85% of the present age of the Universe. At this distance, the luminosity would be about $`3\times 10^{53}\,\mathrm{erg}\,\mathrm{s}^{-1}`$, were the emission isotropic. This was the highest luminosity ever observed from any celestial source. Initially the huge luminosity appeared not to be compatible with the energy available in the coalescence of neutron-star mergers (Kulkarni et al. 1998a), unless beaming were invoked. Other alternative energetic models are based on the death of extremely massive stars, leading to an explosion orders of magnitude more energetic than a supernova, hence named hypernova (Paczynski 1998, Woosley 1993). However it was shown (Mészáros & Rees 1998a) that all these progenitors, whether neutron star - neutron star mergers or hypernovae, eventually go through the formation of the same Black Hole/torus system, from which the energy is extracted to form the GRB. The radiation physics and energy of all mergers and hypernovae are then, to order of magnitude, the same, and still compatible with the luminosity observed in GRB971214.
## 4. To beam or not to beam…
The energy problem became much more severe with GRB990123, again one of the BeppoSAX GRBs (Heise et al. 1999, Piro et al. 1999a). It was one of the brightest GRBs ever observed, ranking in the top 0.3% of the BATSE flux distribution. Its distance (z=1.6, Kulkarni et al. 1999) would imply a total energy of $`1.6\times 10^{54}\,\mathrm{erg}`$, assuming isotropic emission. This corresponds to $`2M_{\odot }c^2`$, at the limit of all models of mergers (Mészáros & Rees 1998a).
This piece of evidence lends support to the idea that, at least in some cases, the emission is collimated. This would reduce the energy budget by $`\theta ^2/4\pi `$, where $`\theta `$ is the angle of the jet. A typical feature of a jet expansion (versus a spherical one) would be the presence of an achromatic (i.e. energy-independent) break in the light curve, which appears when the relativistic beaming angle $`1/\mathrm{\Gamma }`$ becomes comparable to $`\theta `$ (e.g. Rhoads 1997, Sari et al. 1999). The presence of such a break has been claimed in GRB990123 (Kulkarni et al. 1999) and in another more recent GRB, GRB990510 (Harrison et al. 1999). With an angle $`\theta \sim 10^{\circ }`$, the total energy would be reduced by a factor $`\sim 10^3`$, within the limits of current models. So far, evidence of an achromatic break is limited to the optical range, and an independent measurement confirming its presence in different regions of the spectrum is lacking or not conclusive (e.g. in X-rays, Kuulkers et al. 1999).
Very important indications on the geometry of the expansion can be derived by comparing the predictions of the standard scenario (i.e. fireball expansion with synchrotron emission) for the spectral and temporal behaviour of the afterglow with observations. In particular, the spectral and temporal slopes of the afterglow emission $`F\propto t^{-\delta }\nu ^{-\beta }`$ are linked together by a relationship that depends on the geometry of the fireball expansion (Sari et al. 1997, 1999). In the assumption of an adiabatic spherical expansion in a constant density medium we have $`\delta =3\beta /2`$ and $`\delta =3\beta /2-1/2`$ below and above the cooling frequency $`\nu _c`$ respectively. In the case of a jet expansion the relations are $`\delta =2\beta +1`$ ($`\nu <\nu _c`$) and $`\delta =2\beta `$ ($`\nu >\nu _c`$). These relationships are plotted in Fig. 6 along with the measured slopes we have derived for a first sample of X-ray afterglows. The data refer to a sample of 11 afterglows, among the brightest pointed by BeppoSAX up to May 1999, observed from a few hours to about 2 days after the GRB (Stratta et al. 1999, Piro et al. 2000). The average properties of the sample are fully consistent with a spherical expansion and deviate substantially from a jet expansion in the first two days. We stress that our sample is not biased against steep spectral slopes ($`\beta >1.5`$), because $`\gtrsim 90\%`$ of the GRBs detected by BeppoSAX and followed up with a fast observation do show an X-ray afterglow.
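The closure relations quoted above can be encoded compactly; the following sketch (function name ours) simply restates the four cases so that a measured spectral slope β can be turned into the predicted decay index δ for each geometry:

```python
def temporal_index(beta, jet=False, above_cooling=False):
    """Predicted temporal decay index delta in F ~ t**(-delta) * nu**(-beta)
    for an adiabatic fireball (Sari et al.), spherical or jet geometry,
    below or above the cooling frequency."""
    if jet:
        return 2.0 * beta if above_cooling else 2.0 * beta + 1.0
    return 1.5 * beta - 0.5 if above_cooling else 1.5 * beta
```

For instance, for an illustrative X-ray spectral slope β = 1, the spherical prediction is δ = 1.5 (or 1.0 above ν_c), while a jet would require the much steeper δ = 3 (or 2), which is the separation visible in a δ-β plane such as Fig. 6.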
The disagreement with the jet prediction does not imply that the geometry is spherical, because deviations from the emission pattern of a spherical expansion are expected only when the beaming angle of the relativistic emission $`\mathrm{\Gamma }^{-1}`$ becomes comparable to the opening angle of the jet $`\theta _0`$. This happens at a time $`t_{jet}\approx 6.2(E_{52}/n_1)^{1/3}(\theta _0/0.1)^{8/3}\,\mathrm{hr}`$, which should then be $`\gtrsim 48\,\mathrm{hr}`$. We then derive $`\theta _0\gtrsim 12^{\circ }(\mathrm{n}_1/\mathrm{E}_{52})^{1/8}`$. Hence collimation, if present, cannot be very high.
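The bound on the opening angle follows by inverting the jet-break time relation quoted above; a sketch (function names ours, fiducial E52 = n1 = 1):

```python
import math

def t_jet_hours(theta0_rad, e52=1.0, n1=1.0):
    """Jet-break time t_jet ~ 6.2 (E52/n1)**(1/3) (theta0/0.1)**(8/3) hours."""
    return 6.2 * (e52 / n1) ** (1.0 / 3.0) * (theta0_rad / 0.1) ** (8.0 / 3.0)

def min_opening_angle_deg(t_min_hours=48.0, e52=1.0, n1=1.0):
    """Smallest jet opening angle (degrees) compatible with no break
    before t_min_hours, obtained by inverting t_jet_hours; the
    (n1/E52)**(1/8) scaling of the text emerges from the inversion."""
    theta = 0.1 * (t_min_hours / 6.2 * (n1 / e52) ** (1.0 / 3.0)) ** (3.0 / 8.0)
    return math.degrees(theta)
```

With the fiducial parameters this gives a minimum angle slightly above 12 degrees, matching the bound in the text.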
## 5. The nature of the progenitor
Information on the nature of the progenitor can be drawn from the GRB environment. In the case of a hypernova the massive star should die young ($`\sim 10^6`$ years) and therefore GRBs should be preferentially hosted in regions near the centers of star-forming galaxies. On the contrary, NS-NS coalescence happens on much longer time scales, and the kick velocity given to the system by two consecutive supernova explosions should bring a substantial fraction of these systems away from the parent galaxy. So far the angular displacements of 5 optical counterparts indicate that GRBs are located within their host galaxies (Bloom et al. 1999a), favoring the association with star-forming regions. We note also that those events are not located in the very centers of their galaxies, which excludes an association of GRBs with AGN activity.
The other diagnostic of the progenitor is based on spectral measurements of broad and narrow features imprinted by the dusty, gas-rich environment expected in the hypernova scenario (e.g. Perna & Loeb 1998, Mészáros & Rees 1998b, Bottcher et al. 1998). The absence of an optical transient in about 50% of well-localized BeppoSAX GRBs (25 as of Dec. 99), in which instead an X-ray afterglow has been found in almost all cases, may be explained by heavy absorption by dust in the optical range, which leaves the X-rays almost unaffected (Owens et al. 1998).
An exciting possibility is opened by the possible detection of X-ray iron line features in two different GRBs, one by BeppoSAX (GRB970508: Piro et al. 1999b; Fig. 7) and the other by ASCA (GRB970828: Yoshida et al. 1999), associated with rebursting on time scales of the order of a day. It should be remarked that the presence of rebursting appears to be an uncommon feature of X-ray afterglows, whose temporal behaviour is very well described by power laws (Fig. 8), at least until 2-3 days, when the X-ray flux goes below the sensitivity of current X-ray instruments. Both the temporal and spectral features betray the presence of a dense ($`n\sim 10^{10}\,\mathrm{cm}^{-3}`$) medium of $`\sim 1M_{\odot }`$ near the site of the explosion ($`\sim 10^{16}\,\mathrm{cm}`$) (Piro et al. 1999b). Such a medium should have been pre-ejected before the GRB explosion, but the large value of the density excludes stellar winds. A possible, intriguing explanation is that the shell is the result of a SN explosion preceding the GRB (Piro et al. 1999b, Vietri et al. 1999, Vietri & Stella 1998).
Other evidence argues in favour of the GRB-SN association. In the BeppoSAX error box of GRB980425 (Pian et al. 1999) two groups (Kulkarni et al. 1998b, Galama et al. 1998) found a supernova (SN1998bw) that had exploded at about the same time as the GRB. The probability of a chance coincidence of the two events is $`\sim 10^{-4}`$. Since the majority of GRBs are not associated with SNe (e.g. Graziani et al. 1999), this event (if the association is true) should apparently represent an uncommon kind of GRB. However it is also possible that the two families are indeed associated: this scenario would require that GRBs are emitted by collimated jets. The majority of the GRBs and afterglows we see are beamed towards us, so that the contribution of the supernova to the total emission is negligible. The case of SN1998bw was then peculiar in that the jet producing the GRB was collimated away from our line of sight, allowing the detection of the (isotropic) SN emission at an early phase. This scenario also explains why GRB980425 was not particularly bright, notwithstanding its redshift (z=0.0085), much lower than the typical value of the other GRBs ($`z\sim 1`$). Since the afterglow decays as a power law, it is possible that at late times the emission of the SN becomes detectable. Evidence of such emission has been claimed in at least two cases (GRB990326: Bloom et al. 1999b; GRB970228: Reichart et al. 1999).
## 6. Conclusions
Several bits of evidence supporting the association of GRBs with star-forming regions have been gathered so far. The potential perspectives of this link are extremely exciting. GRBs being the most powerful and distant sources of ionizing photons, we can think of using them as probes of heavy elements and star/galaxy formation in the early Universe. A direct proof of this association is still missing, but the near future appears very promising in this respect. BeppoSAX is discovering and localizing GRBs and X-ray afterglows at a pace of 1 per month. Other satellites have also set up successful procedures for rapid GRB localization (BATSE, XTE, ASCA and IPN). The launch of HETE2, foreseen in early 2000, will increase substantially the number of well-localized GRBs. Furthermore, present and near-future large X-ray satellites, like Chandra, XMM and ASTRO-E, will allow detailed spectral studies of X-ray afterglows and provide (Chandra) arcsecond positions of X-ray counterparts and, possibly, a direct redshift determination.
## ACKNOWLEDGEMENTS
The BeppoSAX results presented here were obtained through the joint effort of all the components of the BeppoSAX Team. BeppoSAX is a major program of the Italian Space Agency (ASI) with participation of the Netherlands Agency for Aerospace Programs (NIVR).
## REFERENCES
Bloom J.S. et al. . 1999a, ApJ, in press
Bloom J. et al. 1999b, Nature, in press
Boella G. et al. 1997, A&AS, 122, 299.
Bond H., IAU Circular n. 6654, May 1997.
Bottcher M., Dermer C.D., Crider A. W. & Liang E. D. 1998 A&A, 343, 111
Costa E. et al., 1997 Nature, 387, 783.
Dal Fiume D. et al. 1999, A&A, in press
Feroci M. et al. 1998, A&A, 332, L29
Fishman J. et al. 1994 Astrophys. Journal Suppl. Ser., 92, 229
Fishman J. and C.A. Meegan 1995 , Annual Review Astron. Astrophys., 33, 415
Frontera F. et al., 1998a A&A, 334, L69
Frontera F. et al., 1998b ApJ 493, L67
Fruchter A. et al., IAU Circular n. 6747, September 1997.
Galama T. et al. 1998, Nature, 395, 670
Graziani C., Lamb D. & Marion G.H. 1999, A&AS, 138, 469, Proc. of Gamma-Ray Bursts in the Afterglow Era, F. Frontera & L. Piro ed.s.
Garcia M. et al., 1998 ApJ, 500, L105
Harrison F.A. et al. 1999, Apj 523, L21
Heise J. et al. 1999, submitted to Nature
Klebesadel R.W. et al., 1973 Astrophys. Journal Letters 182, L85.
Kulkarni S. R. et al. 1998a, Nature, 393, 35
Kulkarni S. R. et al. 1998b, Nature, 395, 663
Kulkarni S. et al. 1999, Nature 398, 389
Kuulkers E. et al. 1999, ApJ, submitted
Mészáros P. & Rees M. J. 1997, ApJ, 476, 319
Mészáros P. & Rees M. J. 1998a, New Astronomy, (astro-ph/9808106)
Mészáros P. & Rees M. J. 1998b, MNRAS, 299, L10.
Metzger M.R. et al. 1997, Nature, 387, 878.
Owens A. et al. 1998, A&A, 339, L37
Paczynski, B. 1998 ApJ, 494, L45
Perna R. & Loeb A., 1998, ApJ, 501, 467
Pian E. et al. 1999, A&A, in press
Piro L., Scarsi L. & Butler R.C. 1995, in X-Ray and EUV/FUV Spectroscopy and Polarimetry, (ed. S. Fineschi) SPIE 2517, 169-181
Piro L. et al., 1998a A&A, 329, 906
Piro L. et al. 1998b, A&A, 331, L41
Piro L. et al. 1999a , GCN 199,203;
Piro L. et al. 1999b, ApJ, 514, L73
Piro L. et al. 2000, in preparation
Reichart D. et al. 1999, ApJ, in press
Rhoads J.E. 1997, ApJ, 478, L1
Sahu K.C. et al. 1997, Nature, 387, 476
Sari R., Piran T. & Narayan R. 1997, ApJ, 497, L17
Sari R., Piran T. & Halpern J.P. 1999, ApJ, 519, L17
Stratta G., Piro L., Gandolfi G. et al. , 1999, Proc.s. of the 5th Huntsville Symposium on GRB
van Paradijs J. et al. 1997, Nature, 386, 686.
Vietri M., Perola G.C., Piro L. & Stella L. 1999, MNRAS, 308, P29
Vietri M. & Stella L. 1998, ApJ, 507, L45
Vietri M. 2000, this conference
Yoshida A. et al., IAU Circular n. 6593, 19 March 1997.
Yoshida A. et al. 1999, A&AS, 138, 433, Proc. of Gamma-Ray Bursts in the Afterglow Era, F. Frontera & L. Piro ed.s.
Woosley S. 1993, ApJ, 405, 273 |
# Theory of electron transport in normal metal/superconductor junctions
## I Introduction
One of the powerful methods for detecting the quasiparticle states in a superconductor is to measure the conductance of a junction made up of a normal metal and a superconductor (NS). Many theories have been developed to describe the electron-tunneling phenomenon. In the high interfacial potential-barrier limit, the linear-response theory is a well-known description.<sup>1</sup> But it is not valid for describing the electron transport in the low potential-barrier limit.
To calculate the conductance in more general cases, Blonder, Tinkham, and Klapwijk (BTK) have developed a theory by supposing that the system is in such a non-equilibrium state that only the incoming particles have equilibrium distributions.<sup>2</sup> This theory has been widely used for analyzing the tunneling phenomena in various NS junctions, and also has been extended for investigating electronic tunneling in Josephson junctions.<sup>3</sup>
When a finite voltage is applied to a junction, the electron transport in the junction is a non-equilibrium process. We consider the case in which the current passing through the junction is constant; the electron-transport process is then a steady state. Such a non-equilibrium problem can be solved by the Keldysh approach.<sup>4</sup> In fact, this approach has been applied by a number of investigators to study tunneling in junctions of normal metals <sup>5-6</sup> and electron transport under impurity scattering.<sup>7</sup>
In this paper, we present a tunneling theory along this direction. We start with a tunneling-Hamiltonian model defined on a square lattice. This model is appropriate for tight-binding systems. The tunneling current can be obtained exactly in terms of the equilibrium Green functions of the normal metal and the superconductor. In this way, all the effects of the external voltage on the tunneling current can be rigorously taken into account. Moreover, the approach can be extended to study tunneling in point-contact junctions, as in scanning-tunneling-microscope measurements.
## II Formalism
We consider a junction consisting of a normal metal on the left side and a superconductor (SC) on the right side. In the Nambu representation, the tunneling Hamiltonian describing the electron-transport processes in the junction is given by
$$H_T=\sum _{lr}(c_r^{\dagger }\widehat{T}_{rl}c_l+c_l^{\dagger }\widehat{T}_{lr}c_r)$$
$`(1)`$
where $`c_r^{\dagger }=(c_{r\uparrow }^{\dagger },c_{r\downarrow })`$ is the field operator (Nambu spinor) for particles in the right superconductor, and $`c_l^{\dagger }`$ is similarly defined for the left metal; $`\widehat{T}_{rl}=\widehat{T}_{lr}^{\dagger }=t_0(|y_r-y_l|)\sigma _3`$, where $`y_r`$ and $`y_l`$ are respectively the coordinates of the sites $`r`$ and $`l`$ along the interface. The $`r`$ and $`l`$ summations in Eq. (1) run over the edge (interface) sites on the two sides of the junction, respectively. The function $`t_0(|y_r-y_l|)`$ may be taken as real. For simplicity of description, we suppose that the lattice sites $`\{r\}`$ along the edge are equally spaced, as are $`\{l\}`$. When a voltage $`V`$ is applied across the junction, the total Hamiltonian of the system is given by
$$H=H^0+H_T\equiv H_l-eVN_l+H_r+H_T,$$
$`(2)`$
where $`H_l`$ and $`H_r`$ are the intrinsic Hamiltonians of the left metal and the right superconductor, respectively, and $`N_l`$ is the total electron number of the left metal. We adopt a tight-binding model for $`H_r`$, which contains a hopping term and an attraction term. For $`H_l`$, we keep only the hopping term.
To define the tunneling-current operator, we first consider the charge operator for the right SC. Apart from a constant, it can be written as
$$Q=e\sum _rc_r^{\dagger }\sigma _3c_r.$$
$`(3)`$
The operator of current through the junction from left to right is then obtained as
$$\widehat{I}=i[H,Q]=ie\sum _{lr}(c_r^{\dagger }\sigma _3\widehat{T}_{rl}c_l-c_l^{\dagger }\widehat{T}_{lr}\sigma _3c_r).$$
$`(4)`$
Now, let us choose the unperturbed state described by $`H^0`$ as our reference system. This reference system consists of the unperturbed normal metal and the SC on the two sides of the junction, each of them in its own equilibrium state. For the purpose of employing the grand canonical ensembles, we use $`K_l=H_l-(\mu _l+eV)N_l`$ and $`K_r=H_r-\mu _rN_r`$ to describe the normal metal and the SC, respectively. Here, $`\mu _l`$ and $`\mu _r`$ are respectively the chemical potentials of the normal metal and the SC, and $`N_r`$ is the total number of electrons in the SC. At the steady state, we have $`\mu _r=\mu _l+eV`$ in order to maintain charge neutrality in the bulk of each side. To calculate the statistical average of a physical quantity, we need to write the related operators in the interaction picture. An operator of a physical quantity, e.g., the current $`\widehat{I}(t)`$, in the interaction picture at time $`t`$ is defined as
$$\widehat{I}(t)=\mathrm{exp}(iH^0t)\widehat{I}\mathrm{exp}(-iH^0t).$$
This operator can be further rewritten in terms of the field operators,
$$\widehat{I}(t)=-2e\mathrm{Im}\underset{lr}{\sum }c_r^{}(t)\sigma _3\widehat{T}_{rl}(t)c_l(t),$$
$`(5)`$
where $`c_r^{}(t)=\mathrm{exp}(iK_rt)c_r^{}\mathrm{exp}(-iK_rt)`$ (with a similar definition for $`c_l(t)`$), and $`\widehat{T}_{rl}(t)=\widehat{T}_{lr}^{}(t)=\widehat{T}_{rl}\mathrm{exp}(ieVt\sigma _3)`$. The form of $`\widehat{I}(t)`$ given by Eq. (5) is convenient for the statistical average over the grand canonical ensembles. Similarly, the tunneling Hamiltonian can be written as
$$H_T(t)=\underset{lr}{\sum }[c_r^{}(t)\widehat{T}_{rl}(t)c_l(t)+c_l^{}(t)\widehat{T}_{lr}(t)c_r(t)].$$
$`(6)`$
To apply the Keldysh method, it is convenient to define the field operator,
$$\varphi _r^{}(t)=[c_r^{}(t_+),c_r^{}(t_{})]$$
$`(7)`$
where the subscripts $`+`$ and $``$ on the time $`t`$ denote operators defined on the time branches $`(-\infty ,\infty )`$ and $`(\infty ,-\infty )`$, respectively. Accordingly, we define a perturbation Hamiltonian,
$$H_c(t)=\underset{lr}{\sum }[\varphi _r^{}(t)T_{rl}^c(t)\varphi _l(t)+\varphi _l^{}(t)T_{lr}^c(t)\varphi _r(t)]$$
$`(8)`$
where
$$T_{rl}^c(t)=\left(\begin{array}{cc}\widehat{T}_{rl}(t)& 0\\ 0& -\widehat{T}_{rl}(t)\end{array}\right)\equiv \tau _z\widehat{T}_{rl}(t).$$
$`(9)`$
The matrix $`\tau _z`$ is the third Pauli matrix defined in the space corresponding to the two time branches. To distinguish it from that, we reserve $`\sigma _3`$ for the third Pauli matrix defined in the particle-hole space. The Green function is defined as
$$G_{ij}(t,t^{})=-i\langle \mathcal{T}[S_c\varphi _i(t)\varphi _j^{}(t^{})]\rangle $$
$$S_c=\mathcal{T}\mathrm{exp}[-i\int _{-\infty }^{\infty }dtH_c(t)]$$
where $`\mathcal{T}`$ is the Keldysh time-ordering operator.
With the above definitions, the current under the statistical average can be expressed as
$$I=e\underset{lr}{\sum }\mathrm{Re}\mathrm{Tr}\sigma _3\widehat{T}_{rl}(t)G_{lr}(t,t),$$
$`(10)`$
To calculate the current, we need to know the Green function $`G_{lr}(t,t)`$. It can be determined from the Dyson equations.
Let $`L`$ and $`R`$ denote the Green functions (as $`4\times 4`$ matrices) for the left metal and the right SC, respectively (with the superscript 0 for the unperturbed ones). By assuming that the system is uniform along the direction parallel to the interface, we can then work in the momentum space. Here, the momentum is parallel to the interface. The Dyson equations are
$$G_k(t,t^{})=\int dt_1L_k^0(t,t_1)T_k^c(t_1)R_k(t_1,t^{})$$
$`(11)`$
$$R_k(t,t^{})=R_k^0(t,t^{})+\int dt_1\int dt_2R_k^0(t,t_1)\mathrm{\Sigma }_k(t_1,t_2)R_k(t_2,t^{})$$
$`(12)`$
$$\mathrm{\Sigma }_k(t_1,t_2)=T_k^c(t_1)L_k^0(t_1,t_2)T_k^c(t_2)$$
$`(13)`$
where $`T_k^c(t)=\tau _z\widehat{T}_k\mathrm{exp}(ieVt\sigma _3)`$, $`\widehat{T}_k=t_0(k)\sigma _3`$, and the time integrals run from $`-\infty `$ to $`\infty `$. Note that the Green function $`L_k^0(t_1,t_2)=L_k^0(t_1-t_2)`$ consists of four diagonal matrices. The factors $`\mathrm{exp}(ieVt_1\sigma _3)`$ and $`\mathrm{exp}(ieVt_2\sigma _3)`$ commute with the matrix $`L_k^0(t_1,t_2)`$. The self-energy $`\mathrm{\Sigma }_k(t_1,t_2)=\mathrm{\Sigma }_k(t_1-t_2)`$, and thereby the Green function $`R_k(t,t^{})=R_k(t-t^{})`$, are functions of the time difference only. We can therefore take the Fourier transform of the Dyson equations. In frequency space, these equations have the usual forms except that
$$\mathrm{\Sigma }_k(\omega )=T_k^c(0)L_k^0(\omega +eV\sigma _3)T_k^c(0).$$
$`(14)`$
With the help of the Dyson equations, we can write the sum of the factors $`\widehat{T}_{rl}(t)G_{lr}(t,t)`$ in the expression for $`I`$ as
$$\underset{lr}{\sum }\widehat{T}_{rl}(t)G_{lr}(t,t)=\underset{k}{\sum }\int _{-\infty }^{\infty }\frac{d\omega }{2\pi }\tau _z\mathrm{\Sigma }_k(\omega )R_k(\omega ).$$
$`(15)`$
Inserting Eq. (15) into Eq. (10) and taking the trace of time-branch space, we have
$$I=e\underset{k}{\sum }\int _{-\infty }^{\infty }\frac{d\omega }{2\pi }t_0^2\mathrm{ReTr}\sigma _3M_+(L_fR_{}^0+L_+^0R_f)M_{}$$
$`(16)`$
with
$$M_\pm =[1-t_0^2L_\pm ^0R_\pm ^0]^{-1},$$
$$L_f=\mathrm{tanh}[(\omega +eV\sigma _3)/2k_BT](L_+^0-L_{}^0),$$
$$R_f=\mathrm{tanh}(\omega /2k_BT)(R_+^0-R_{}^0),$$
$$L_\pm ^0=L^0(k,\omega +eV\sigma _3\pm i0),$$
$$R_\pm ^0=R^0(k,\omega \pm i0).$$
Here $`L_+^0`$ and $`R_+^0`$ ($`L_{}^0`$ and $`R_{}^0`$) are the retarded (advanced) Green functions (as $`2\times 2`$ matrices in the Nambu space) of the equilibrium state, $`L_f`$ and $`R_f`$ are the Keldysh functions, $`t_0^2=|t_0(k)|^2`$, $`k_B`$ is the Boltzmann constant, and $`T`$ is the temperature of the system. By noting the relationships $`R_+(k,\omega )=\sigma _2R_{}(k,-\omega )\sigma _2`$ and $`L_+(k,\omega +eV\sigma _3)=\sigma _2L_{}(k,-\omega +eV\sigma _3)\sigma _2`$, it is enough to take the frequency integral in Eq. (16) over the range $`(0,\infty )`$ only. The prefactors in the Keldysh functions play part of the role of quasiparticle distribution functions. The additional term $`eV\sigma _3`$ reflects the chemical-potential shift of the quasiparticles in the left metal.
## III Green's functions of the equilibrium state
To calculate the tunneling current $`I`$, we need to know the Green functions $`L^0`$ and $`R^0`$. If we know the wave functions $`\psi _n`$ and energies $`E_n`$ of the quasiparticles, e.g., for the SC, we can obtain $`R^0`$ by
$$R^0(k,\omega )=\underset{n}{\sum }\frac{\psi _n\psi _n^{}}{\omega -E_n},$$
$`(17)`$
where $`\psi _n`$ takes the edge value. Since we have taken the Fourier transform of the dependence on the coordinates parallel to the interface, the wave function $`\psi _n(j)`$ depends on the $`x`$-coordinates (normal to the edge) of the lattice sites, $`j=1,2,\mathrm{\dots }`$; the edge value is $`\psi _n(1)`$.
For illustration, we here consider a $`d`$-wave SC and suppose that the order parameter is constant everywhere. The wave functions can be determined analytically by the BdG equation. As an example, we consider the tight-binding model defined in a semi-infinite square lattice with a {11} edge. The BdG equation reads <sup>8</sup>
$$\underset{j}{}H_{ij}\psi _n(j)=E_n\psi _n(i),$$
$`(18)`$
where $`H_{jj}=-\mu \sigma _3`$, $`H_{j,j-1}=-2t\mathrm{cos}k\sigma _3-i2\mathrm{\Delta }\mathrm{sin}k\sigma _1`$ for $`j\ge 2`$, $`H_{j,j+1}=-2t\mathrm{cos}k\sigma _3+i2\mathrm{\Delta }\mathrm{sin}k\sigma _1`$, and otherwise $`H_{ij}=0`$; $`t`$ is the hopping energy of electrons between nearest-neighbor sites, and $`\mathrm{\Delta }`$ is the order parameter. Here, we have used the unit $`\sqrt{2}/a`$ (with $`a`$ the lattice constant) for the momentum $`k`$, and $`k`$ is confined to the Brillouin zone $`(-\pi /2,\pi /2)`$. There are two kinds of solutions to Eq. (18): the continuum states and the surface bound states.
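As a quick check on this construction, one can diagonalize Eq. (18) directly for a finite chain. The sketch below is a minimal illustration with arbitrary parameter values, assuming one consistent sign convention for the blocks ($`H_{jj}=-\mu \sigma _3`$, $`H_{j,j\pm 1}=-2t\mathrm{cos}k\sigma _3\pm i2\mathrm{\Delta }\mathrm{sin}k\sigma _1`$); it builds the block-tridiagonal BdG matrix and confirms that for $`\mu =0`$ the spectrum contains near-zero-energy states well inside the continuum gap, one per edge of the chain:

```python
import numpy as np

# Pauli matrices in particle-hole (Nambu) space
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def bdg_chain(N, t, Delta, k, mu):
    """Block-tridiagonal BdG matrix of Eq. (18) for an N-site chain."""
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    hop = -2 * t * np.cos(k) * s3 + 2j * Delta * np.sin(k) * s1   # H_{j,j+1}
    for j in range(N):
        H[2*j:2*j+2, 2*j:2*j+2] = -mu * s3                        # H_{jj}
        if j + 1 < N:
            H[2*j:2*j+2, 2*j+2:2*j+4] = hop
            H[2*j+2:2*j+4, 2*j:2*j+2] = hop.conj().T              # H_{j+1,j}
    return H

E = np.linalg.eigvalsh(bdg_chain(N=60, t=1.0, Delta=0.3, k=0.7, mu=0.0))
```

The two smallest $`|E|`$ values are exponentially small in the chain length (the zero modes of the two edges hybridize only weakly), while the rest of the spectrum starts at the continuum gap.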
The continuum states are generally degenerate. To distinguish them, we can consider that each eigen wave function contains either a unique incoming wave component or a unique outgoing wave component. We then characterize the wave function by the incoming wave number $`q_\mu `$ or the outgoing wave number $`q_\alpha `$. For example, the wave function and energy of state $`q_\mu `$ can be written as
$$\psi _{k,\mu }(j)=[\psi _{k,\mu }^0(j)-\underset{\alpha }{\sum }a_{\mu \alpha }\psi _{k,\alpha }^0(j)]/\sqrt{2},$$
$`(19)`$
$$E_{k,\mu }=\pm E(q_\mu ,k)=\pm \sqrt{e^2(q_\mu ,k)+\mathrm{\Delta }^2(q_\mu ,k)},$$
$`(20)`$
where the $`\psi ^0`$'s are the plane-wave solutions for the infinite system, $`e(q,k)=-4t\mathrm{cos}q\mathrm{cos}k-\mu `$ (with $`\mu `$ the chemical potential), and $`\mathrm{\Delta }(q,k)=4\mathrm{\Delta }\mathrm{sin}q\mathrm{sin}k`$. The coefficients $`a_{\mu \alpha }`$ are determined by the boundary condition at $`j=1`$. The summation over $`\alpha `$ in Eq. (19) runs over all the outgoing components with $`E(q_\alpha ,k)=E(q_\mu ,k)`$. It is worth noticing that sometimes we may have complex $`q_\alpha `$'s; the summation should then be taken over those $`q_\alpha `$'s corresponding to decaying waves.
The number of bound states is determined by the Levinson theorem.<sup>9</sup> Under the assumption that the order parameter is constant, we only have the state with $`E_n=0`$ for each $`|k|<k_m`$ ($`k_m`$ is very close to the Fermi wave number).<sup>8,10</sup> For $`E_n=0`$, it can be shown that the two components $`u_k(j)`$ and $`v_k(j)`$ satisfy the relation,
$$v_k(j)=i\lambda u_k(j),\lambda =\pm 1.$$
$`(21)`$
Suppose $`u_k(j)=z^j`$, with $`z`$ ($`|z|<1`$) a complex quantity, for the general solution. Corresponding to $`z`$, we have a complex number $`q=-i\mathrm{log}(z)`$. The equation $`E(q,k)=0`$ determining the eigenvalue reduces to
$$t(z+z^{-1})\mathrm{cos}k+\lambda (z-z^{-1})\mathrm{\Delta }\mathrm{sin}k+\mu /2=0.$$
$`(22)`$
The solutions to Eq. (22) are
$$z_\pm =[-\mu \pm \sqrt{\mu ^2-(c_1^2-c_2^2)}]/(c_1+\lambda c_2),$$
$`(23)`$
where $`c_1=4t\mathrm{cos}k`$ and $`c_2=4\mathrm{\Delta }\mathrm{sin}k`$. Note that $`z_+z_{}=(c_1-\lambda c_2)/(c_1+\lambda c_2)`$; therefore $`\lambda =\mathrm{sgn}(k)`$, whereby $`|z_+z_{}|<1`$. The wave function is given by
$$u_k(j)=(z_+^j-z_{}^j)/N_k,$$
$`(24)`$
with $`N_k^2=2[(1-|z_+|^2)^{-1}+(1-|z_{}|^2)^{-1}-2\mathrm{Re}(1-z_+^{*}z_{})^{-1}]`$ the normalization constant. This wave function satisfies the boundary conditions at $`j=1`$ and $`j\to \infty `$ provided $`|z_\pm |<1`$. If $`\mu ^2<c_1^2-c_2^2`$, then $`z_+`$ and $`z_{}`$ are complex conjugates of each other, and $`|z_\pm |<1`$. On the other hand, if $`\mu ^2>c_1^2-c_2^2`$, both of them are real. In this case, there may be no bound state unless both $`|z_\pm |<1`$.
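The closed form for $`z_\pm `$ is easy to verify numerically. The snippet below (a sketch with illustrative parameter values) rewrites Eq. (22) as the quadratic $`(c_1+\lambda c_2)z^2+2\mu z+(c_1-\lambda c_2)=0`$, recovers $`z_\pm `$, and checks the product rule and the decay condition $`|z_\pm |<1`$ for $`\lambda =\mathrm{sgn}(k)`$:

```python
import numpy as np

def bound_state_roots(t, Delta, k, mu):
    """Roots z_+, z_- of Eq. (22); multiplying Eq. (22) by 4z gives the
    quadratic (c1 + lam*c2) z^2 + 2 mu z + (c1 - lam*c2) = 0, lam = sgn(k)."""
    c1, c2 = 4 * t * np.cos(k), 4 * Delta * np.sin(k)
    lam = np.sign(k)
    root = np.sqrt(complex(mu**2 - (c1**2 - c2**2)))
    zp = (-mu + root) / (c1 + lam * c2)
    zm = (-mu - root) / (c1 + lam * c2)
    return zp, zm, c1, c2, lam

def eq22(z, t, Delta, k, mu, lam):
    # left-hand side of Eq. (22)
    return t * (z + 1/z) * np.cos(k) + lam * (z - 1/z) * Delta * np.sin(k) + mu / 2
```

For $`\mu ^2<c_1^2-c_2^2`$ the two roots come out as complex conjugates with modulus below unity, exactly as stated above.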
With the knowledge of the wave functions, the Green function $`R^0`$ can be calculated from Eq. (17). As for $`L^0`$ of the normal metal, it contains only continuum states. The wave functions are obtained immediately from Eq. (18) by setting $`\mathrm{\Delta }=0`$. The resulting Green function is given by
$$L^0(k,\omega )=\frac{2}{\pi }\int _0^\pi dq\frac{\mathrm{sin}^2q}{\omega -e(q,k)\sigma _3}.$$
$`(25)`$
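Equation (25) can be checked against the known closed form for the surface Green function of a semi-infinite chain: with effective hopping $`\tau =2t\mathrm{cos}k`$ and $`w=\omega +\mu +i0`$, each diagonal component equals $`(w-\sqrt{w^2-4\tau ^2})/2\tau ^2`$, the branch being fixed by $`\mathrm{Im}L^0<0`$ inside the band. A numerical sketch of the $`\sigma _3=+1`$ component, assuming the dispersion $`e(q,k)=-4t\mathrm{cos}q\mathrm{cos}k-\mu `$ and illustrative parameter values (the small $`\eta `$ stands in for the $`+i0`$ prescription):

```python
import numpy as np

def L0_surface(omega, t=1.0, k=0.5, mu=0.0, eta=1e-3, nq=200001):
    """sigma_3 = +1 component of Eq. (25), integrated on a fine q grid."""
    q = np.linspace(0.0, np.pi, nq)
    e = -4 * t * np.cos(q) * np.cos(k) - mu          # assumed sign convention
    integrand = np.sin(q) ** 2 / (omega + 1j * eta - e)
    # plain Riemann sum; the integrand vanishes at both endpoints
    return (2 / np.pi) * np.sum(integrand) * (q[1] - q[0])

tau = 2 * 1.0 * np.cos(0.5)       # effective hopping 2 t cos(k)
g_in = L0_surface(0.0)            # band center: closed form gives -i/tau
g_out = L0_surface(5.0)           # outside the band: purely real
```

Inside the band the imaginary part is negative, as required of a retarded function; outside the band it vanishes and the real part matches the closed form.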
## IV Comparison with the BTK theory
Obviously, the present treatment is a non-perturbative theory: it takes into account all effects of the voltage within the model. At this point, it is instructive to compare our theory with the BTK theory. In the BTK model, only the incoming particles on each side of the junction are described by equilibrium distributions, with the chemical potential of the left metal shifted by the external voltage; the outgoing particles are not described by equilibrium distributions. The quasiparticle states of the whole system are determined by the Bogoliubov-de Gennes equation, which is independent of the external voltage. The tunneling current is calculated as the current carried by the particles incident from the left metal minus that carried by the particles incident from the right SC. In the present approach, in contrast, the particle distributions are referred to the reference system. Since the tunneling Hamiltonian depends on time in the interaction picture, there are no quasiparticle states of the whole system; each state on either side of the junction acquires a finite lifetime because of the non-equilibrium processes at the interface. From the Green function, the lifetime of a quasiparticle is determined by the inverse of the imaginary part of the self-energy. In this approach, the transport process is treated by the equivalent of time-dependent perturbation theory to all orders, which leads to finite lifetimes: the electron transport is the process of quasiparticle decay. In the BTK model, on the other hand, the transport process is treated by time-independent perturbation theory to all orders, which determines the quasiparticle states of the whole system, with infinite lifetimes for the continuum states. The mechanisms of electron transport through the junction in the two theories are therefore very different.
For numerical comparison, we need to formulate the BTK scheme in the lattice model. The basic task in the scheme is to solve the BdG equation for the wave functions of the quasiparticles in the whole system. An eigen wave function characterized by an incoming wave in the left metal can be written as the incoming wave plus all the reflected waves (including both the Andreev and the ordinary reflections), with the transmitted waves in the right SC comprising all the outgoing waves. One then needs only to consider the boundary condition at the interface barrier. Denoting the wave functions on the left and right sides by $`\psi _l(j)`$ and $`\psi _r(j)`$ with $`j=1,2,\mathrm{\dots }`$, respectively, the BdG equation at the interface barrier reads
$$H_{1,2}\psi _l(2)+H_{1,1}\psi _l(1)+\widehat{T}_k^{}\psi _r(1)=E\psi _l(1),$$
$`(26a)`$
$$\widehat{T}_k\psi _l(1)+H_{1,1}\psi _r(1)+H_{1,2}\psi _r(2)=E\psi _r(1).$$
$`(26b)`$
Eqs. (26a,b) are nothing but the boundary conditions. With the wave functions, one can immediately calculate the tunneling current according to the BTK theory.
To see the difference between the present and the BTK theories, we have carried out the numerical calculations of the tunneling conductance
$$G=\frac{dI}{dV}$$
$`(27)`$
for normal-metal/$`d`$-wave superconductor junctions with $`\{110\}`$ and $`\{100\}`$ interfaces at various barrier strengths. For presentation, we normalize $`G`$ by $`Ne^2/\pi `$ ($`\mathrm{\hslash }=1`$), with $`N`$ the total number of lattice sites on one side of the interface. The basic parameters for the SC are $`t=176`$ meV, hole concentration $`\delta =0.15`$, and attractive potential between nearest-neighbor sites $`v=124`$ meV. The transition temperature and the order parameter are obtained as $`T_c=90`$ K and $`\mathrm{\Delta }_0\equiv 4\mathrm{\Delta }|_{T=0}=16.7`$ meV, respectively. As stated before, the Hamiltonian of the left metal contains only the hopping term. We assume that the hopping energies on the two sides of the junction are the same. For simplicity, we choose the tunneling matrix element as $`t_0(k)=t_0`$.
The numerical result for the normalized conductance as a function of $`V`$ for an NS ($`d`$-wave) junction with {100} interface at $`T=0`$ is shown in Fig. 1. The tunneling parameter $`t_0/t=0.5`$ is used. Though the interfacial potential barrier for this parameter is not very high, the agreement between the BTK theory and the present one is very good. A small $`t_0`$ means a high interfacial potential barrier; in the high-barrier limit, both theories reproduce the linear-response result. However, at $`t_0/t=1`$, corresponding to a weak barrier, the discrepancy is clear, as shown in Fig. 2. At weak barrier and small voltage $`|eV|<\mathrm{\Delta }_0`$, the Andreev reflection is the predominant contribution to the conductance in the BTK theory. In the present approach, however, the transport is due to the decay of quasiparticles on both sides; such a decay process is more complex than the BTK picture. The difference between the two theories at small $`|eV|`$ is mainly due to the different treatment of the tunneling Hamiltonian (i.e., time-dependent vs. time-independent perturbation theory). The voltage effect in $`L^0`$ is important only at large $`|eV|`$, because the relevant dimensionless parameter is the ratio $`eV/E_F`$ (with $`E_F`$ the Fermi energy of the left metal). The voltage effect is more evident at $`V<0`$ than at $`V>0`$, because the parameter is more precisely $`|eV/(E_F+eV)|`$. At negative voltage, the chemical potential of the left metal shifts upward, and electrons just below the Fermi surface, within the energy range $`(E_F+eV,E_F)`$, transfer into the right SC. At positive voltage, the states in the energy range $`(E_F,E_F+eV)`$ of the left metal become available for electrons of the right SC to transfer into.
In Fig. 3, we show the results for the junctions with {110} interface at $`t_0/t=1`$. In this case, the results of both theories are in excellent agreement, and the agreement is even better at smaller $`t_0`$. At $`|eV|<\mathrm{\Delta }_0`$, the conductance $`G`$ is given by a broadened zero-bias peak. Actually, there are zero-energy bound states in the right SC near the interface, with a finite lifetime due to tunneling, and the tunneling current is predominantly conducted by these states. The width of the broadening is mainly determined by the tunneling parameter $`t_0`$ rather than by the external voltage. Because of the existence of these states, the particle transmission through the junction for $`|eV|<\mathrm{\Delta }_0`$ is a resonant process. These resonance states exist in the BTK model as well; at least at $`eV=0`$, both theories produce the same resonance states with the same energy broadening. We can therefore understand the excellent agreement near $`eV=0`$.
The discrepancy between the two theories is even clearer for normal-metal/conventional-superconductor junctions. Fig. 4 shows the result for an NS ($`s`$-wave) junction with $`\{100\}`$ interface at $`T=0`$ and $`t_0/t=1`$. The parameters for the SC are the chemical potential $`\mu =0.3t`$ and the on-site pairing parameter $`\mathrm{\Delta }_0=0.02t`$. The conductance predicted by the present theory is only about $`78\%`$ of that of BTK for $`|eV|/\mathrm{\Delta }_0<1`$, where the conductance is almost constant. Qualitatively, the electron transport in this junction is similar to that in the NS ($`d`$-wave) junction with {100} interface, and the explanation for Fig. 2 applies here.
## V An approximation scheme
When $`eV/E_F\ll 1`$, the dependence of $`L^0`$ on the external voltage is very weak. We can then drop $`eV`$ in $`L^0`$. With this approximation, the conductance $`G`$ is given by
$$G=2e^2\underset{k}{\sum }\int _{-\infty }^{\infty }\frac{d\omega }{2\pi }t_0^2\mathrm{TrIm}(R_{}^0M_{}\sigma _3M_+)\sigma _3\mathrm{Im}L_+^0g$$
$`(28)`$
where $`g=\mathrm{cosh}^{-2}[(\omega +\sigma _3eV)/2k_BT]/2k_BT`$ is the only factor that depends on $`eV`$. In Fig. 4, the result of Eq. (28) is also plotted. At small voltage, the approximation is in very good agreement with our main theory; at large voltage, it reproduces the BTK result. This clearly shows that the discrepancy between our main theory and the BTK theory at small voltage is not due to the voltage effect in $`L^0`$. In the case of $`eV/E_F\ll 1`$, Eq. (28) is a simple but good scheme for calculating the conductance.
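For each Nambu component $`\sigma _3=\pm 1`$, $`g`$ is the negative $`\omega `$-derivative of the corresponding tanh prefactor: a thermally broadened window of total weight 2 centered at $`\omega =-\sigma _3eV`$, which collapses to a delta function as $`T\to 0`$, so that the conductance probes the equilibrium Green functions at $`\omega =\mp eV`$. A minimal numerical sketch ($`k_B=1`$, illustrative values):

```python
import numpy as np

def g_factor(omega, eV, kT, s3):
    """Thermal factor of Eq. (28) for Nambu component s3 = +1 or -1 (kB = 1)."""
    return np.cosh((omega + s3 * eV) / (2 * kT)) ** -2 / (2 * kT)

w = np.linspace(-40.0, 40.0, 400001)
dw = w[1] - w[0]
ge = g_factor(w, eV=3.0, kT=0.5, s3=+1)   # electron window, peaked at omega = -eV
```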
## VI Summary
In summary, on the basis of the Keldysh approach, we have developed a theory of electron transport in normal-metal/superconductor junctions valid to all orders in the applied voltage and the barrier strength. In the present scheme, the tunneling current is given in terms of renormalized Green functions of a steady state, and it provides a reliable description of electron tunneling, including ballistic transport, in NS junctions. We have calculated the tunneling conductance for various NS junctions using the present formalism and compared it with the BTK theory. In most cases, the two theories agree with each other; however, for some junctions of low barrier strength, the discrepancy between them can be sizable.
###### Acknowledgements.
This work is supported by the Texas Higher Education Coordinating Board under the grant No. 1997-010366-029, and by the Texas Center for Superconductivity at the University of Houston. |
# "Quasi 2-D" Spin Distributions in II-VI Magnetic Semiconductor Heterostructures: Clustering and Dimensionality
## Abstract
Spin clustering in diluted magnetic semiconductors (DMS) arises from antiferromagnetic exchange between neighboring magnetic cations and is a strong function of reduced dimensionality. Epitaxially-grown single monolayers and abrupt interfaces of DMS are, however, never perfectly two-dimensional (2D) due to the unavoidable inter-monolayer mixing of atoms during growth. Thus the magnetization of DMS heterostructures, which is strongly modified by spin clustering, is intermediate between that of 2D and 3D spin distributions. We present an exact calculation of spin clustering applicable to arbitrary distributions of magnetic spins in the growth direction. The results reveal a surprising insensitivity of the magnetization to the form of the intermixing profile, and identify important limits on the maximum possible magnetization. High-field optical studies of heterostructures containing "quasi-2D" spin distributions are compared with calculation.
Spin clustering is ubiquitous in II-VI diluted magnetic semiconductors (DMS), resulting in reduced effective magnetizations at low magnetic fields and magnetization steps at high fields. Clear predictions can be made for the number and type of spin clusters in 3D systems (e.g., bulk Cd<sub>1-x</sub>Mn<sub>x</sub>Se), where the distribution of magnetic Mn<sup>2+</sup> cations is random and isotropic. With the advent of molecular beam epitaxy (MBE) and other techniques for monolayer-by-monolayer growth of DMS heterostructures, spin clustering in systems of reduced dimensionality has enjoyed much recent interest. It is well established that spin clustering (arising mainly from an antiferromagnetic exchange between neighboring magnetic cations) should be greatly reduced in two-dimensional systems such as abrupt interfaces or discrete monolayer planes, leading to enhanced paramagnetism. However, experiments show that perfect 2D interfaces and monolayers are never realized due to the inevitable inter-monolayer mixing of atoms during MBE growth, which smears the magnetic cations over several monolayers. Common mechanisms include segregation (mixing between the monolayer being grown and the underlying monolayer) which leads to roughly exponential magnetic profiles, and diffusion (which can arise from, e.g., high growth temperatures or annealing) which leads to gaussian profiles. Hence, real DMS heterostructures are more accurately said to contain "quasi-2D" distributions of spins, with a corresponding magnetization and degree of spin clustering somewhere between that of bulk (3D) and planar (2D) spin distributions.
The local, planar magnetic concentration in these quasi-2D spin distributions varies significantly from monolayer to monolayer, strongly affecting the probability of forming spin clusters (which themselves may span many monolayers). It is desirable to quantitatively predict the degree of this spin clustering in a given DMS heterostructure so that accurate comparisons can be made with real data. In this paper we present exact expressions for determining the number and type of spin clusters (singles, pairs, open- and closed triples) for arbitrary distributions of magnetic spins in the common (100) growth direction. The results reveal a rather surprising insensitivity of the computed magnetization to the form of the intermixing profile (exponential/gaussian), and highlight important limits on the maximum possible magnetization using MBE techniques. High-field photoluminescence (PL) and reflectivity studies of DMS superlattices and quantum wells containing quasi-2D magnetic planes are compared with the analytic results.
Spin clustering in DMS derives predominantly from the strong antiferromagnetic d-d exchange between nearest-neighbor (NN) magnetic cations ($`J_{NN}\approx 10`$ K). As outlined in the work of Shapira and others, single Mn<sup>2+</sup> cations with no magnetic NNs are $`S=\frac{5}{2}`$ paramagnets, with Brillouin-like magnetization. Two NN Mn<sup>2+</sup> cations form an antiferromagnetically-locked pair with zero spin at low magnetic fields, and step-like magnetization at high fields and low temperatures. Three Mn<sup>2+</sup> spins can form a closed or open triple with net spin $`S_T=\frac{1}{2}`$ and $`S_T=\frac{5}{2}`$ (respectively) at low fields, and a unique set of magnetization steps at high fields. Spins in higher order clusters are usually treated empirically and often exhibit a linear susceptibility at high magnetic fields.
The magnetization of monolayer planes of Mn<sup>2+</sup> spins is a significant challenge for conventional magnetometry. Alternatively, the magnetization from DMS heterostructures may be inferred from their giant magneto-optical properties. The $`J_{spd}`$ exchange interaction between electrons/holes and local Mn<sup>2+</sup> moments generates giant exciton spin-splittings that are proportional to the local magnetization within the exciton wavefunction. Using the giant spin-splitting of confined excitons to probe the magnetization within a quantum well, the studies of Gaj, Grieshaber, and of Ossau clearly established i) an enhanced paramagnetic response in very thin layers of magnetic semiconductor, and ii) that "ideal" magnetic-nonmagnetic semiconductor interfaces are smeared out due to segregation of Mn<sup>2+</sup> during growth. A clear example of both effects can be seen in the high-field PL data of Figure 1.
Here, we measure the giant energy shift ($`\propto `$ magnetization) of the band-edge exciton PL to 60 Tesla in three quantum wells, each containing the same total number of Mn<sup>2+</sup> spins, but in very different distributions. The samples are 120$`\AA `$ ZnSe/Zn<sub>.8</sub>Cd<sub>.2</sub>Se single quantum wells into which the magnetic semiconductor MnSe has been incorporated in the form of "digital" planes of 25%, 50%, and 100% monolayer coverage (12, 6, and 3 planes, respectively). The samples and the experimental method have been described elsewhere. Evidence of decreased spin clustering with decreasing planar concentration is clear in the low-field magnetization ($`H<8`$ T), which is largely due to isolated Mn<sup>2+</sup> spins and is fit to a modified Brillouin function, $`E(H)=E_{sat}B_{5/2}[5\mu _BH/k_B(T+T_0)]`$. $`E_{sat}`$ is the saturation splitting and $`T_0`$ is an empirical parameter that accounts for long-range Mn-Mn interactions. As shown, as the planar Mn<sup>2+</sup> density increases, $`E_{sat}`$ decreases while $`T_0`$ grows, consistent with the expectation that increasing the planar density results in fewer single (un-clustered) Mn<sup>2+</sup> spins and more long-range correlations between Mn<sup>2+</sup> spins. Further, with increasing planar density, the high-field susceptibility evolves from magnetization steps (from Mn-Mn pairs) to the linear susceptibility common in large, highly correlated spin clusters. The presence of low-field paramagnetism in the 100% MnSe planes suggests, however, the quasi-2D nature of these spin distributions: a full (perfect) 2D magnetic monolayer would behave as one infinite cluster and contain no paramagnetic spins whatsoever.
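The modified Brillouin fit quoted above is compact enough to sketch directly. Everything below is illustrative (the fit parameters $`E_{sat}`$ and $`T_0`$ are sample-dependent); the only physical constant used is $`\mu _B/k_B\approx 0.67`$ K/T:

```python
import math

def brillouin(S, x):
    """Brillouin function B_S(x); the small-x limit (S+1)x/3S is made explicit
    to avoid the coth singularity at x = 0."""
    if abs(x) < 1e-8:
        return (S + 1) / (3 * S) * x
    a, b = (2 * S + 1) / (2 * S), 1 / (2 * S)
    return a / math.tanh(a * x) - b / math.tanh(b * x)

def splitting(H, Esat, T, T0):
    """Modified Brillouin fit: Esat * B_{5/2}(5 mu_B H / kB (T + T0))."""
    x = 5 * 0.6717 * H / (T + T0)   # mu_B / kB = 0.6717 K/T
    return Esat * brillouin(2.5, x)
```

The fit saturates at $`E_{sat}`$ at high field, while the empirical $`T_0`$ simply flattens the low-field rise.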
The probability of forming a particular spin cluster is essentially determined by the number of (possibly magnetic) NNs bordering the cluster, and the number of ways the cluster can form. In the zincblende crystal structure we consider, cations form an fcc lattice, with twelve NNs per cation. Thus the probability of a Mn<sup>2+</sup> being single or paired in bulk crystals (3D) is $`P_s^{3D}=(1-x)^{12}`$ and $`P_p^{3D}=12x(1-x)^{18}`$ respectively, where $`x`$ is the Mn<sup>2+</sup> concentration. For perfect 2D monolayers grown in the (100) direction, the cations form a 2D square lattice with only four possible magnetic NNs per cation, so that $`P_s^{2D}=(1-x)^4`$ and $`P_p^{2D}=4x(1-x)^6`$. In a real system, the effects of diffusion and/or segregation intermix adjacent monolayers, so that a "perfect" 2D plane of DMS is smeared over several monolayers, with the $`n^{th}`$ (100) monolayer having a 2D magnetic concentration $`x_n`$ (assumed to be random within the plane). Clustering within these quasi-2D spin distributions can be modeled numerically or through analytic approximations, but an exact expression has been lacking. Figure 2 shows the exact probabilities of a Mn<sup>2+</sup> spin in the $`n^{th}`$ monolayer belonging to a single, pair, closed- and open triple. Diagrams show the clusters under consideration - e.g., there are three different types of Mn<sup>2+</sup> pairs (with the paired spin in the $`n-1^{th}`$, $`n^{th}`$, or $`n+1^{th}`$ monolayer), each four-fold degenerate. There are four types of closed triples (for a total of 24), and 126 total configurations for open triples. (We do not attempt the 1900 configurations of spin quartets that have been recently identified in the bulk, nor do we consider the much weaker distant-neighbor couplings between Mn<sup>2+</sup> moments.)
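The 3D and 2D limits quoted above are one-liners, and already reproduce a number used later in the text: at $`x=8\%`$, isolated spins occupy $`x(1-x)^{12}\approx 2.9\%`$ of all cation sites. A sketch:

```python
def P_single(x, dim=3):
    """Probability that a given Mn2+ has no magnetic nearest neighbor:
    12 NNs on the fcc cation lattice (3D), 4 on the (100) square lattice (2D)."""
    return (1 - x) ** (12 if dim == 3 else 4)

def P_pair(x, dim=3):
    """Probability that a given Mn2+ belongs to a NN pair."""
    return 12 * x * (1 - x) ** 18 if dim == 3 else 4 * x * (1 - x) ** 6
```

At equal concentration, the 2D values always exceed the 3D ones, which is the enhanced-paramagnetism argument in its simplest form.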
This algorithm allows for an exact calculation of spin clusters in a heterostructure with an arbitrary distribution of Mn<sup>2+</sup> in the (100) direction. An example of its utility is shown in Fig. 3, where we compute the number of Mn<sup>2+</sup> in singles, pairs, triples, and higher-order clusters in a 10 monolayer (10 ml) wide nonmagnetic quantum well with magnetic ($`x_{Mn}=30\%`$) barriers. We assume full segregation of atoms during growth, giving $`e^{-n/\lambda }`$ and $`1-e^{-n/\lambda }`$ Mn<sup>2+</sup> profiles ($`\lambda =1.44`$ ml) at the first and second interfaces (Fig 3a). Figs. 3b-e show the type and number of clusters in each monolayer. Although the Mn<sup>2+</sup> density is comparatively small near the center of the quantum well, the paramagnetic contribution from single spins to the optically-measured magnetization can be significant, as it depends on their overlap with the exciton wavefunction as shown. The density of triples and of pairs is clearly peaked in the quasi-2D interfaces. In the barriers, the spin distribution is bulk-like and the vast majority of spins are bound up in higher order clusters that contribute little to the low-field magnetization.
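For singles, the layered generalization is compact (a cation in monolayer $`n`$ has 4 NNs in plane $`n`$ and 4 in each of planes $`n\pm 1`$ on the fcc (100) stacking), giving $`P_s(n)=[(1-x_{n-1})(1-x_n)(1-x_{n+1})]^4`$; pairs and triples require the longer bookkeeping of Fig. 2. A sketch of the per-monolayer density of isolated spins for a segregated tail like that of Fig. 3a (parameter values illustrative):

```python
import numpy as np

def isolated_density(x):
    """x[n]: planar Mn fraction of monolayer n; returns x[n] * P_s(n)."""
    xp = np.pad(np.asarray(x, float), 1)   # empty monolayers beyond the profile
    P_s = ((1 - xp[:-2]) * (1 - xp[1:-1]) * (1 - xp[2:])) ** 4
    return xp[1:-1] * P_s                  # density of isolated spins per ml

lam, x_b = 1.44, 0.30                      # decay length (ml), barrier fraction
profile = x_b * np.exp(-np.arange(12) / lam)   # segregated interface tail
rho = isolated_density(profile)
```

In the uniform limit this collapses to the bulk $`x(1-x)^{12}`$, and for a single occupied plane to the 2D $`x(1-x)^4`$, so the same routine interpolates between the two regimes.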
Modeling quasi-2D spin distributions leads to some rather unexpected results. In particular, it is clear that magnetization studies alone will be of limited use in distinguishing the exact form of the intermixing profile. Fig. 4a shows the calculated magnetization for four different profiles of an initially 2D monolayer containing 20% Mn<sup>2+</sup>, where we include the magnetization from singles, pairs, triples, and higher order clusters following Refs. 1 and 6. Though unrealistic, the first two profiles (a perfect 2D plane with $`x_{Mn}=20\%`$ and two adjacent planes with $`x_{Mn}=10\%`$) illustrate an important point: clustering often "conspires" to equalize low-field magnetizations. Although the single monolayer contains 5% fewer single Mn<sup>2+</sup> spins, it contains over a third more open triples and higher-order clusters, which act to equalize the deficit.
Only at the first magnetization step are the profiles distinguishable, as the single monolayer contains fewer Mn-Mn pairs. The last two profiles represent the exponential and gaussian profiles roughly expected from segregation and diffusion, respectively, with decay length and half-width equal to 1 ml. Again, the calculated magnetizations are nearly identical (although larger than for the first two profiles). Thus, magnetization measurements alone cannot distinguish the form of the spin profile. However, assuming a particular form, the magnetization does depend sensitively on the segregation (or diffusion) length, which can then be used to fit an intermixing lengthscale as demonstrated below.
The model we present can also identify configurations for realizing the maximum possible magnetization per unit volume in MBE-grown structures. One motivation for growing "digital" alloys is to exploit the reduced clustering of 2D planes to achieve enhanced magnetizations beyond those possible with bulk, 3D distributions. In bulk DMS, the maximum paramagnetic response is obtained with $`x_{Mn}\approx 8\%`$, where isolated Mn<sup>2+</sup> spins comprise $`2.9\%`$ of all cation sites. In Fig. 4b we investigate whether it is then possible, with the same total number of Mn<sup>2+</sup> spins, to increase the number of isolated spins by redistributing the Mn<sup>2+</sup> in digital planes (solid dots). Bulk can be thought of as 2D planes of spins with $`x_{Mn}^{2D}`$=8%, spaced every monolayer. Next, we consider 2D planes with twice the density ($`x_{Mn}^{2D}`$ =16%), spaced every other monolayer, which results in a paramagnetic enhancement of over a third, as shown. However, spacing planes with $`x_{Mn}^{2D}`$=24% every third monolayer results in fewer free Mn<sup>2+</sup> spins per unit volume than in the case of bulk. Additional divisions continue to reduce the paramagnetic response. So, only by spacing magnetic planes every other monolayer is it possible to increase the density of free Mn<sup>2+</sup> beyond 3D spin distributions. However, any intermixing during growth couples the 2D planes and dramatically reduces the paramagnetic enhancement, as shown by the open dots for the case of full segregation. Of course, clever schemes for control of the spin distribution within the 2D plane could certainly result in reduced spin clustering, such as MBE growth in the (120) direction, where neighboring cation sites in the (120) plane are not nearest neighbors. Thus far, however, such efforts have been hampered by the inevitable inter-monolayer mixing of atoms during growth, leading to spin clusters.
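The plane-spacing argument is easy to reproduce with the perfect-2D singles probability (intermixing neglected, as for the solid dots of Fig. 4b): concentrating the same average Mn content, $`8\%`$ per monolayer, into planes every $`m`$ monolayers gives planar fraction $`x=8m\%`$; for $`m=1`$ all 12 NNs can be magnetic, while for $`m\ge 2`$ only the 4 in-plane NNs can. A sketch:

```python
def free_spin_density(m, x_avg=0.08):
    """Isolated-Mn density per monolayer for perfect 2D planes every m monolayers.
    m = 1 is the bulk limit; for m >= 2 the adjacent planes are empty."""
    x = m * x_avg                    # planar Mn fraction
    z = 12 if m == 1 else 4          # NN sites that can be magnetic
    return x * (1 - x) ** z / m      # averaged over the m-monolayer period

d1, d2, d3 = (free_spin_density(m) for m in (1, 2, 3))
```

Only the every-other-monolayer spacing beats the bulk value, in line with the text: the m = 2 arrangement gains over a third, while m = 3 already falls below bulk.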
We apply the model to measurements of superlattices and quantum wells containing "digital" planes of DMS. Fig. 5a shows the measured splitting between exciton spin states in two superlattices with nominally single monolayers of Zn<sub>.75</sub>Mn<sub>.25</sub>Se and Zn<sub>.50</sub>Mn<sub>.50</sub>Se (separated by 4ml of ZnSe). The dotted lines are Brillouin fits to the low-field magnetization ($`H<8T`$). Increased spin clustering in the Zn<sub>.50</sub>Mn<sub>.50</sub>Se monolayers is evident in the smaller paramagnetic saturation and more linear high-field susceptibility. With perfect 2D planes, however, it is impossible to account for the 15% larger paramagnetic saturation from the superlattice with Zn<sub>.75</sub>Mn<sub>.25</sub>Se planes. Assuming exponential, segregated Mn<sup>2+</sup> profiles $`e^{-n/\lambda }`$ for each of the Zn<sub>1-x</sub>Mn<sub>x</sub>Se planes (reasonable for the low growth temperature of 300 °C), the relative low-field saturations can be reproduced with a decay length $`\lambda `$=1.15ml, implying partial segregation during growth. As a final study (Fig. 5b), we attempt to account for the size of the magnetization steps observed in PL from the quantum well containing twelve 1/4ml planes of MnSe. Magnetization steps arise from the partial unlocking of antiferromagnetically-bound Mn-Mn pairs, resulting in a step height proportional to the number of pairs. The observed magnetization steps are never more than 5% of the low-field "saturation magnetization" $`M_{sat}`$, a ratio which is much smaller than predicted by any conceivable distribution profile of the Mn<sup>2+</sup> within the quantum well. The expected step heights for 3D, 2D, and segregated 2D spin distributions are shown for comparison. This puzzling anomaly is seen in all "digital" samples, and even quantum wells containing bulk ($`x_{Mn}^{3D}`$=8%) DMS show a similar deficit.
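The Brillouin fits mentioned above use the standard Brillouin function for S = 5/2 Mn<sup>2+</sup>. A hedged sketch (the fit parameters `M_sat` and the effective temperature `T_eff` are illustrative names, not the paper's; a real fit would also include an antiferromagnetic offset):

```python
import numpy as np

def brillouin(S, y):
    """Standard Brillouin function B_S(y)."""
    y = np.asarray(y, dtype=float)
    a = (2 * S + 1) / (2 * S)
    b = 1 / (2 * S)
    with np.errstate(divide="ignore", invalid="ignore"):
        out = a / np.tanh(a * y) - b / np.tanh(b * y)
    return np.where(np.abs(y) < 1e-8, 0.0, out)  # B_S(0) = 0

def magnetization(H, M_sat, T_eff, S=2.5, g=2.0):
    """Paramagnetic magnetization M(H) = M_sat * B_S(g mu_B S H / kB T_eff)."""
    mu_B_over_kB = 0.6717  # Bohr magneton over Boltzmann constant, K/T
    y = g * mu_B_over_kB * S * np.asarray(H) / T_eff
    return M_sat * brillouin(S, y)
```

Fitting `M_sat` and `T_eff` to the low-field data is one common way to extract the effective number of free spins, which is the quantity the cluster-counting model predicts.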
We postulate that this effect is due to the nature of the PL measurement itself, which is not a direct measure of magnetization but rather is proportional to the magnetization through the $`J_{spd}`$ exchange interaction and the Mn<sup>2+</sup>-exciton wavefunction overlap. It is anticipated that true magnetization studies will reveal the correct magnitude of the magnetization step.
In summary, we have presented a method for calculating the exact number of spin singles, pairs, triples, and higher-order clusters for an arbitrary magnetic concentration profile in the (100) growth direction, to model the magnetic properties of real, quasi-2D spin distributions in DMS heterostructures. Calculation of the magnetization for diffusion and segregation profiles reveals nearly identical values, so that fitting an intermixing length is likely possible only when the form (exponential, Gaussian, etc.) of the quasi-2D profile is assumed a priori, as was demonstrated for the case of ZnMnSe:ZnSe superlattices. The model also predicts a larger paramagnetism compared with bulk spin distributions only if digital planes are spaced every other monolayer, although the effects of intermixing will greatly reduce any enhancement. Lastly, the discrepancy between the magnitude of observed and predicted magnetization steps remains outstanding. The methods outlined in this paper will be of use in modeling future epitaxially-grown DMS heterostructures, where spin distributions can be engineered with nearly monolayer precision.
The authors gratefully acknowledge the assistance of J. Schillig and M. Gordon during operation of the 60T Long-Pulse magnet. Work supported by grants NSF DMR 97-01072 and 9701484.
# Insight into the Scalar Mesons from a Lattice Calculation
## I Introduction
The light scalar mesons have defied classification for decades . Some are narrow and have been firmly established since the 1960s. Others are so broad that their very existence is controversial. Scalar mesons are predicted to be chiral partners of the pseudoscalars like the pion, but their role in chiral dynamics remains obscure. Naive quark models interpret them as orbitally excited $`\overline{q}q`$ states. Others have suggested that they are $`\overline{q}^2q^2`$ or "molecular" states, strongly coupled to $`\pi \pi `$ and $`\overline{K}K`$ thresholds.
In this paper we propose a way to shed some light on the nature of the scalar mesons using lattice QCD. Previously scalar mesons have been treated like other mesons: their masses have been extracted from the large Euclidean time falloff of $`\overline{q}q\overline{q}q`$ correlation functions with the appropriate quantum numbers. Here we look for a $`0^{++}\overline{q}^2q^2`$ *bound state*. We construct $`\overline{q}^2q^2`$ sources, work in the quenched approximation, and discard $`\overline{q}q`$ annihilation diagrams so communication with $`\overline{q}q`$ and vacuum channels is forbidden. Also, we allow the quark masses to be large (hundreds of MeV), so the continuum threshold for the decay $`\overline{q}^2q^2(\overline{q}q)(\overline{q}q)`$ is artificially elevated. We then study the large Euclidean time falloff of a $`\overline{q}^2q^2\overline{q}^2q^2`$ correlator, looking for a falloff slower than $`2m_{\overline{q}q}`$, signalling a bound state. Such an object would have been missed by studies of $`\overline{q}q`$ correlators in the quenched approximation. We use shortcomings of lattice QCD to our advantage. By excluding processes that mix $`\overline{q}q`$ and $`\overline{q}^2q^2`$, we can unambiguously assign a quark content to a state. The heavy quark mass suppresses relativistic effects, which we believe complicate the interpretation of light quark states.
Our initial results are encouraging: within the limits of our computation we see signs of a bound state in the "non-exotic" $`\overline{q}^2q^2`$ channel, namely, the one with quantum numbers that could also characterize a $`\overline{q}q`$ state ($`I=0`$ for 2 flavors, the 1 and 8 for 3 flavors). In contrast, the "exotic" flavor $`\overline{q}^2q^2`$ channel ($`I=2`$ for 2 flavors, the 27 for 3 flavors) shows no bound state. Instead it shows a negative scattering length, characteristic of a repulsive interaction. To obtain a definitive result will require larger lattices and more computer time, but this is well within the scope of existing facilities.
In Sec. II we give an overview of the $`0^{++}`$ mesons. First we summarize the phenomenology. Then we summarize previous lattice calculations. We also review earlier studies of $`\overline{q}^2q^2`$ sources on the lattice. Because these earlier works looked only at one (relatively small) lattice size they were unable to examine the possibility of a bound state. In Sec. III we summarize our computation. First we briefly review $`\overline{q}^2q^2`$ operators and discuss lattice size and quark mass dependence. Next, we review the improved lattice action we use to enable us to study larger lattices. Finally in Sec. IV we present our results and discuss their implications. We explore some of the directions in which our computation could be improved.
A reader who wishes to skip the details can look immediately at Fig. 5 where we plot the dependence on lattice size of the binding energy associated with the exotic and non-exotic $`\overline{q}^2q^2`$ channels. The exotic channel shows a negative binding energy with the $`1/L^3`$ dependence expected from analysis of the $`(\overline{q}q)(\overline{q}q)`$ continuum. The coefficient of $`1/L^3`$ agrees roughly with Refs. and with the predictions of chiral perturbation theory. The non-exotic channel shows positive binding energy, but seems to depart from $`1/L^3`$, perhaps approaching a constant as $`L\mathrm{}`$, which would indicate the existence of a bound $`\overline{q}^2q^2`$ state. Confirmation of this result will require further calculations on larger lattices.
## II Overview of the light scalar mesons
In this section we establish the context for our work. First we give a very brief introduction to the phenomenology of the lightest $`0^{++}`$ mesons composed of light ($`u`$, $`d`$, and $`s`$) quarks. We give a sketch of the $`\overline{q}q`$ and $`\overline{q}^2q^2`$ models for $`0^{++}`$ states and contrast them. More information can be found in Refs. and references quoted therein. Next we summarize existing lattice calculations which relate to the $`0^{++}`$ channel. These fall into two classes: traditional searches for $`\overline{q}q`$ eigenstates and attempts to learn about low energy $`\pi \pi `$ scattering by studying $`\overline{q}^2q^2`$ sources.
### A Phenomenology
The known $`0^{++}`$ mesons divide into effects near and below 1 GeV, which are unusual, and effects in the 1.3-1.5 GeV region which may be more conventional. Here we focus on the states below 1 GeV. Altogether, the objects below $`1\mathrm{GeV}`$ form an $`SU(3)_\mathrm{f}`$ nonet: two isosinglets, an isotriplet and two strange isodoublets. The isotriplet and one isosinglet are narrow and well confirmed. The isodoublets and the other isosinglet are very broad and still controversial.
The well established $`0^{++}`$ mesons are the isosinglet $`f_0(980)`$ and the isotriplet $`a_0(980)`$. Both are relatively narrow: $`\mathrm{\Gamma }[f_0]\approx `$ 40 MeV, $`\mathrm{\Gamma }[a_0]\approx `$ 50 MeV (we use the observed peak width into $`\pi \pi `$ and $`\pi \eta `$ respectively, rather than some more model-dependent method for extracting a width), despite the presence of open channels ($`\pi \pi `$ for the $`f_0`$ and $`\pi \eta `$ for the $`a_0`$) for allowed s-wave decays. Both couple strongly to $`\overline{K}K`$ and lie so close to the $`\overline{K}K`$ threshold at 987 MeV that their shapes are strongly distorted by threshold effects. Interpretation of the $`f_0`$ and $`a_0`$ requires a coupled-channel scattering analysis. The relevant channels are $`\pi \pi `$ and $`\overline{K}K`$ for the $`f_0`$ and $`\pi \eta `$ and $`\overline{K}K`$ for the $`a_0`$. In both cases the results favor an intrinsically broad state, strongly coupled to $`\overline{K}K`$ and weakly coupled to the other channel. The physical object appears narrow because the $`\overline{K}K`$ channel is closed over a significant portion of the object's width. No summary this brief does justice to the wealth of work and opinion in this complex situation.
The other light scalar mesons are known as broad enhancements in very low energy s-wave meson-meson scattering. The enhancements are universally accepted, but their interpretation is more controversial. At the lowest energies only the $`\pi \pi `$ channel is open. The $`\pi \pi `$ s-wave can couple either to isospin zero or two. The $`I=2`$ (e.g. $`\pi ^+\pi ^+`$) channel shows a weak repulsion in rough agreement with the predictions of chiral low energy theorems. The $`I=0`$ channel shows a strong attraction: the phase shift rises steadily from threshold to approximately $`\pi /2`$ by $`800`$ MeV before effects associated with the $`f_0`$ complicate the picture. This low mass enhancement in the $`\pi \pi `$ s-wave is the $`\sigma `$ meson of nuclear physics and chiral dynamics. Recent studies support the existence of an S-matrix pole associated with this state at a mass around 600 MeV, which we will refer to as the $`\sigma (600)`$. The $`\pi K`$ s-wave is very similar to $`\pi \pi `$. The exotic $`I=3/2`$ (e.g. $`\pi ^+K^+`$) channel shows weak repulsion. The non-exotic $`I=1/2`$ channel shows relatively strong attraction. Black et al. identify the enhancement with an S-matrix pole at approximately 900 MeV, which is known as the $`\kappa `$(900). (The enhancement is not in doubt, but the interpretation is, if anything, more controversial than the $`\pi \pi `$ case; for example, the $`\kappa `$(900) is not mentioned in Ref. .) Other couplings of these objects (the $`\sigma `$ can couple to $`\overline{K}K`$ and the $`\kappa `$ can couple to $`\eta K`$) are unknown because the relevant thresholds lie above the states. The large widths of these states reflect their strong coupling to the open decay channels $`\pi \pi `$ and $`\pi K`$ respectively.
The conventional quark model assigns the $`0^{++}`$ mesons to the first orbitally excited multiplet of $`\overline{q}q`$ states. As in positronium, $`0^{++}`$ quantum numbers are made by coupling $`L=1`$ to $`S=1`$ to give total $`J=0`$. The $`0^{++}`$ states should be very similar to the $`1^{+\pm }`$ and $`2^{++}`$ $`\overline{q}q`$ states that lie in the same family. These are very well known and form conventional meson nonets (in $`SU(3)_\mathrm{f}`$). Since they have a unit of excitation (orbital angular momentum), they are expected to be quite a bit heavier than the pseudoscalar and vector mesons. Most models put the $`\overline{q}q`$ $`0^{++}`$ mesons along with their $`2^{++}`$ and $`1^{++}`$ brethren around 1.2-1.5 GeV.
An idealized $`\overline{q}q`$ meson nonet has a characteristic pattern of masses and decay couplings. The vector mesons are best known, but the pattern is equally apparent in the $`2^{++}`$ or $`1^{++}`$ nonets. The isotriplet and the isosinglet composed of non-strange quarks are lightest and are roughly degenerate (e.g. the $`\rho `$ and the $`\omega `$). The strange isodoublets are heavier because they contain a single strange quark (e.g. the $`K^{}`$). The final isosinglet is heaviest because it contains an $`\overline{s}s`$ pair (e.g. the $`\varphi `$). Decay patterns show selection rules which follow from this quark content. In particular, the lone isosinglet does not couple to non-strange mesons ($`\varphi \nrightarrow 3\pi `$). The mass pattern, quark content and natural decay couplings of a $`\overline{q}q`$ nonet are summarized in Fig. 1a. These patterns seem to bear little resemblance to the masses and couplings of the light $`0^{++}`$ mesons, a fact which led earlier workers to explore other interpretations.
Four quarks ($`\overline{q}^2q^2`$) can couple to $`0^{++}`$ without a unit of orbital excitation. Furthermore, the color and spin dependent interactions, which arise from one gluon exchange, favor states in which quarks and antiquarks are separately antisymmetrized in flavor. For quarks in 3-flavor QCD the antisymmetric state is the flavor $`\overline{\mathbf{3}}`$. Thus the energetically favored configuration for $`\overline{q}^2q^2`$ in flavor is $`(\overline{q}\overline{q})^{\mathbf{3}}(qq)^{\overline{\mathbf{3}}}`$, a flavor nonet. The lightest multiplet has spin 0. Explicit studies in the MIT Bag Model indicated that the color-spin interaction could drive the $`\overline{q}^2q^2`$ $`0^{++}`$ nonet down to very low energies: 600 to 1000 MeV depending on the strangeness content.
The most striking feature of a $`\overline{q}^2q^2`$ nonet in comparison with a $`\overline{q}q`$ nonet is an *inverted mass spectrum* (see Fig. 1b). The crucial ingredient is the presence of a hidden $`\overline{s}s`$ pair in several states. The flavor content of $`(qq)^{\overline{\mathbf{3}}}`$ is $`\{[ud],[us],[ds]\}`$, where the brackets denote antisymmetry. When combined with $`(\overline{q}\overline{q})^{\mathbf{3}}`$, four of the resulting states contain a hidden $`\overline{s}s`$ pair: the isotriplet and one of the isosinglets have quark content $`\{u\overline{d}s\overline{s},\frac{1}{\sqrt{2}}(u\overline{u}-d\overline{d})s\overline{s},d\overline{u}s\overline{s}\}`$ and $`\frac{1}{\sqrt{2}}(u\overline{u}+d\overline{d})s\overline{s}`$, and therefore lie at the top of the multiplet. The other isosinglet, $`u\overline{d}d\overline{u}`$, is the only state without strange quarks and therefore lies alone at the bottom of the multiplet. The strange isodoublets ($`u\overline{s}d\overline{d}`$, etc.) should lie in between. In summary, one expects a degenerate isosinglet and isotriplet at the top of the multiplet and strongly coupled to $`\overline{K}K`$, an isosinglet at the bottom, strongly coupled to $`\pi \pi `$, and a strange isodoublet coupled to $`K\pi `$ in between (Fig. 1b). The resemblance to the observed structure of the light $`0^{++}`$ states is considerable.
These qualitative considerations motivate a careful look at the classification of the scalar mesons. Models of QCD are not sophisticated enough to settle the question. For example, the $`\overline{q}^2q^2`$ picture does not distinguish between one extreme where the four quarks sit in the lowest orbital of some mean field, and the other, where the four quarks are correlated into two $`\overline{q}q`$ mesons which attract one another in the flavor $`(\overline{q}\overline{q})^{\mathbf{3}}(qq)^{\overline{\mathbf{3}}}`$ channel. For years, phenomenologists have attempted to analyse meson-meson scattering data in ways which might distinguish between $`\overline{q}q`$ and $`\overline{q}^2q^2`$ assignments. A recent quantitative study favors the $`\overline{q}^2q^2`$ assignment. However the $`\overline{q}q`$ assignment has strong advocates. We hope that a suitably constrained lattice calculation can aid in the eventual classification of these states.
### B Existing Lattice Studies
In this section we briefly summarize existing lattice calculations which bear on the classification of the $`0^{++}`$ mesons. There have been lattice studies of both the spectrum of $`0^{++}`$ states and the mixing of $`\overline{q}q`$ states with glueballs.
Unquenched spectroscopic calculations are just beginning to become available. In principle, they are of interest because they would couple to a $`\overline{q}^2q^2`$ configuration if it is energetically favorable. One unquenched calculation reports tentative evidence of a $`0^{++}`$ state at an energy much lower than that reported in quenched calculations . We return to this work briefly in our conclusions. Further insight from unquenched calculations will have to await more definitive studies.
For the rest of this section, we restrict ourselves to consideration of quenched calculations. We will not discuss the mixing of glueballs with $`\overline{q}q`$ states, because we are interested in distinguishing $`\overline{q}q`$ from $`\overline{q}^2q^2`$ components of mesons. First we consider quenched studies of $`\overline{q}q`$ spectra. Then we describe attempts to extract meson-meson scattering lengths from quenched studies of $`\overline{q}^2q^2`$ sources.
#### 1 $`\overline{q}q`$ Spectrum Calculations
The masses of the $`0^{++}`$ $`\overline{q}q`$ states have been calculated on the lattice in the quenched approximation by various groups . Some of their results are shown in Fig. 2. As well as the $`J^{PC}=0^{++}(a_0)`$, we have included data from the same groups on the other positive parity mesons $`J^{PC}=1^{++}(a_1)`$ and $`1^+(b_1)`$. Data for the $`2^{++}(a_2)`$ was not available. The spectra in Fig. 2 behave roughly as $`\overline{q}q`$ states with orbital angular momentum should. In the heavy quark limit, as the pseudoscalar mass $`m_P`$ approaches the vector mass $`m_V`$, their masses approach one another because spin and spin-orbit splittings decrease with $`m_q`$, and approach $`m_V`$ because orbital excitation energy decreases with $`m_q`$. To make this behavior manifest we plot $`m_{J^{PC}}/m_V`$ versus $`m_P/m_V`$, and note that $`m_{J^{PC}}/m_V`$ approaches unity as $`m_P/m_V`$ increases.
#### 2 Pseudoscalar Scattering Length Calculations
In the past, lattice studies of four-quark states have been undertaken in order to extract pseudoscalar-pseudoscalar ($`P`$-$`P`$) scattering lengths for comparison with the predictions of chiral dynamics. It is known that the energy shift $`\delta E`$ of a two-particle state with quantum numbers $`\alpha `$ in a cubic box of size $`L`$ is related to the threshold scattering amplitude,
$$\delta E_\alpha =E_\alpha -2m_P=\frac{T_\alpha }{L^3}\left(1+2.8373\frac{m_PT_\alpha }{4\pi L}+6.3752\left(\frac{m_PT_\alpha }{4\pi L}\right)^2+\cdots \right),$$
(1)
where $`m_P`$ is the mass of the scattering particles, and $`T_\alpha `$ is the scattering amplitude at threshold in the channel labelled by $`\alpha `$, which can be related to the scattering length,
$$T_\alpha =-\frac{4\pi a_\alpha }{m_P}.$$
(2)
For a more detailed discussion, see Ref. . In our case the channels of interest are exotic ($`I=2`$, for two flavors) and non-exotic ($`I=0`$, for two flavors). If the interaction is attractive enough to produce a bound state, then instead of eq. (1) one would find that $`\delta E`$ goes to a negative constant as $`L\mathrm{}`$.
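The expected finite-volume behavior of eqs. (1)-(2) is easy to evaluate numerically. A sketch in lattice-style units (the numerical values of `T` and `m_P` below are illustrative, not fitted values from the paper):

```python
import math

def delta_E(T, m_P, L):
    """Finite-volume energy shift of eq. (1) for threshold amplitude T,
    pseudoscalar mass m_P, and box size L (all in consistent units)."""
    x = m_P * T / (4 * math.pi * L)
    return (T / L**3) * (1 + 2.8373 * x + 6.3752 * x**2)

def amplitude_from_scattering_length(a, m_P):
    """Eq. (2): T = -4*pi*a/m_P."""
    return -4 * math.pi * a / m_P

# Illustrative numbers: with T > 0 (a negative scattering length in these
# conventions) the shift is positive and vanishes like 1/L^3 at large L.
T, m_P = 1.0, 0.8
for L in (2.0, 4.0, 8.0):
    print(L, delta_E(T, m_P, L))
```

A bound state would instead show up as a shift tending to a negative constant as L grows, which is the signature looked for below.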
In order to distinguish between a bound state and the continuum behavior described by eq. (1), it is necessary to perform calculations for several different lattice sizes. Calculations with $`\overline{q}^2q^2`$ sources have been performed by Gupta et al., who studied one lattice volume at one lattice spacing, and Fukugita et al., who, for the heavy quark masses we are interested in, also studied only one lattice volume at one lattice spacing. Their results were therefore not sufficient to check the lattice-size dependence of the energy of the two-pseudoscalar state and investigate the possibility of a bound state. Our method follows theirs, but we have studied a range of lattice sizes. Their results are plotted along with ours in Fig. 5. Where our calculations overlap, they agree.
## III A $`\overline{q}^2q^2`$ Exercise on the Lattice
### A Quark contractions and flavor dependence
For our purposes the salient categorization of $`\overline{q}^2q^2`$ correlators is into "exotic" channels (flavor states that are only possible for a $`\overline{q}^2q^2`$ state, $`I=2`$ for two flavors, the 27 for three flavors) and "non-exotic" channels (flavor states that could be $`\overline{q}^2q^2`$ or $`\overline{q}q`$, $`I=0`$ for two flavors, the 8 and 1 for three flavors). In the absence of quark annihilation diagrams, the 8 and 1 are identical. When annihilation is included, the 1, like the $`I=0`$ for two flavors, can mix with pure glue. As shown in Fig. 3, the $`\overline{q}^2q^2`$ $`0^{++}`$ correlation functions can be expressed in terms of a basis determined by the four ways of contracting the quark propagators: direct (D), crossed (C), single annihilation (A), complete annihilation into glue (G).
Since we are interested in $`\overline{q}^2q^2`$ states, we only study the D and C contributions. We will assume that all quarks are degenerate, so there is only one quark mass, and as far as color and spinor indices are concerned all quark propagators are the same. In our lattice calculation we will therefore build our $`\overline{q}^2q^2`$ correlators from color and spinor traces of contractions of four identical quark propagators, putting in the flavor properties by hand when we choose the relative weights of the different contractions.
In the case of two flavors, there are two possible channels for a spatially symmetric source: $`I=2`$ (exotic) and $`I=0`$ (non-exotic). Evaluation of the flavor dependence of the quark line contractions shows that the $`I=2`$ channel is $`D-C`$, and $`I=0`$ is $`D+\frac{1}{2}C`$ .
For three flavors, the possible channels are the symmetric parts of $`\mathbf{3}\times \mathbf{3}\times \overline{\mathbf{3}}\times \overline{\mathbf{3}}`$, namely $`\mathbf{1}+\mathbf{8}`$ (non-exotic) and $`\mathbf{27}`$ (exotic). As in the two-flavor case, the exotic channel is $`D-C`$. At sufficiently large Euclidean time separation, each contraction will behave as a sum of exponentials, corresponding to the states it overlaps with. Generically, all linear combinations will be dominated by the same state: the lightest. Only with correctly chosen relative weightings will the leading exponential cancel out, yielding a faster-dropping exponential corresponding to a more massive state. We will see in Sec. IV that the exotic ($`D-C`$) channel is the one where such a cancellation occurs, yielding a repulsive interaction between the pseudoscalars. For any other linear combination of $`D`$ and $`C`$ the correlator is therefore dominated by the lightest, attractive state. Without loss of generality, we can therefore study the following linear combinations:
$$\begin{array}{llll}\text{Exotic:}\hfill & J_\mathrm{E}=D-C\hfill & \text{2 flavor: }I=2\hfill & \text{3 flavor: }\mathbf{27}\hfill \\ \text{Non-exotic:}\hfill & J_\mathrm{N}=D+\frac{1}{2}C\hfill & \text{2 flavor: }I=0\hfill & \text{3 flavor: }\mathbf{1},\mathbf{8}\hfill \end{array}$$
(3)
We conclude that if, as our results suggest, there is a bound $`\overline{q}^2q^2`$ state in the non-exotic channel, then this means that with two flavors, the $`I=0`$ channel is bound, and with three flavors both the 1 and 8 are bound. Once quark loops and annihilation diagrams are included, the 1 and 8 will split apart. Unquenched lattice calculations will be needed to see if they remain bound.
### B Lattice action
In our lattice calculations, we work in the quenched (valence) approximation, and use Symanzik-improved glue and quark actions. This means that irrelevant terms ($`๐ช(a),๐ช(a^2)`$, where $`a`$ is the lattice spacing) have been added to the lattice action to compensate for discretization errors.
Improved actions are crucial to our ability to explore a range of physical volumes using limited computer resources. Because most of the finite-lattice-spacing errors have been removed, we can use coarse lattices, which have fewer sites and hence require much less computational effort: note that the number of floating-point operations required even for a quenched lattice QCD calculation rises faster than $`a^{-4}`$.
Improved actions have been studied extensively , and it has been found that even on fairly coarse lattices ($`a`$ up to $`0.4\mathrm{fm}`$) good results can be obtained for hadron masses by estimating the coefficients of the improvement terms using tadpole-improved perturbation theory. For the energy differences that we measure, we find that the improved action works very well. There are no signs of lattice-spacing dependence at $`a`$ up to $`0.4\mathrm{fm}`$, so as well as greatly reducing the computer resources required, it enables us to dispense with the extrapolation in $`a`$ that is usually needed to obtain continuum results.
Our lattice glue and quark action parameters are summarized in Table I and are described in detail in Ref. . For the glue we use a Lüscher-Weisz (plaquette and $`2\times 1`$ rectangle) action . We measured the lattice spacing by NRQCD calculations of the charmonium $`P`$-$`S`$ splitting, using the experimental spin-averaged value of $`458\mathrm{MeV}`$.
For the quarks we use a D234 action, which includes third and fourth derivative terms as well as an improved clover term. All the coefficients in the action are evaluated at tree level in tadpole-improved perturbation theory , using the mean link in Landau gauge to estimate the tadpole contribution. We work at a quark mass close to the physical strange quark: the pseudoscalar to vector meson mass ratio $`m_P/m_V`$ is 0.76.
We collected data at two lattice spacings, $`a=0.40`$ and $`0.25\mathrm{fm}`$. The scaling of the hadron masses is good but not perfect: the pseudoscalar weighed $`790\mathrm{MeV}`$ on our coarser lattice and $`840\mathrm{MeV}`$ on the finer one. For our fits to eq. (1) in Sect. IV we used the average.
### C Sources and fitting
To look for bound $`0^{++}\overline{q}^2q^2`$ states, we investigate states of two pseudoscalar mesons on lattices of various volumes, keeping only quark-line-connected diagrams. We calculate the binding energy $`\delta E_\mathrm{E}`$ in the exotic channel (flavor states that are only possible for a $`\overline{q}^2q^2`$ state, i.e. $`I=2`$ for two flavors, the 27 for three flavors) and the binding energy $`\delta E_\mathrm{N}`$ in the non-exotic channel (flavor states that could be $`\overline{q}^2q^2`$ or $`\overline{q}q`$, i.e. $`I=0`$ for two flavors, the 8 and 1 for three flavors). We could have used sources based on preconceptions about maximally attractive channels in QCD. For example, one-gluon exchange and instanton interactions are known to favor the color $`\overline{\mathbf{3}}`$ diquark channel, leading to interesting phenomenology at high density, but we verified that such sources have good overlap with the two-pseudoscalar-meson source that we used.
Since we are interested in $`\overline{q}^2q^2`$ states, we only study the D and C contributions. For each gauge field configuration we evaluate the pseudoscalar correlator $`P(t)`$ and the direct ("$`D`$") and crossed ("$`C`$") contributions to the two-pseudoscalar correlator. We use a wall source at $`t=0`$, and both point and smeared sinks.
$$\begin{array}{ccc}\hfill P(t)& =& \underset{\stackrel{}{x}}{\sum }\text{Tr}\left(G(t,\stackrel{}{x})G^{\dagger }(t,\stackrel{}{x})\right)\hfill \\ \hfill D(t)& =& \underset{\stackrel{}{x}}{\sum }\left[\text{Tr}\left(G(t,\stackrel{}{x})G^{\dagger }(t,\stackrel{}{x})\right)\right]^2\hfill \\ \hfill C(t)& =& \underset{\stackrel{}{x}}{\sum }\text{Tr}\left(G(t,\stackrel{}{x})G^{\dagger }(t,\stackrel{}{x})G(t,\stackrel{}{x})G^{\dagger }(t,\stackrel{}{x})\right)\hfill \end{array}$$
(4)
where the trace is over color and spinor indices, and $`G(t,\stackrel{}{x})`$ is the quark propagator in a given gauge background from a wall source at $`t=0`$ to the point $`\stackrel{}{x}`$ at time $`t`$. For smeared correlators, we performed covariant smearing at the sink. Note that the source for a pseudoscalar meson is $`\overline{\psi }\gamma _5\psi `$, and the propagator obeys $`\gamma _5`$-hermiticity, $`G^{\dagger }(t,\stackrel{}{x})=\gamma _5G(t,\stackrel{}{x})\gamma _5`$, so no factors of $`\gamma _5`$ appear in eq. (4).
We construct the exotic and non-exotic correlators
$$\begin{array}{ccc}\hfill J_\mathrm{N}(t)& =& \left\langle D(t)+\frac{1}{2}C(t)\right\rangle \hfill \\ \hfill J_\mathrm{E}(t)& =& \left\langle D(t)-C(t)\right\rangle ,\hfill \end{array}$$
(5)
where the angle brackets signify an average over the ensemble of gauge field configurations.
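The contractions of eq. (4) and the flavor combinations of eq. (5) amount to a few index traces per time slice. A sketch with a mock propagator (random numbers standing in for the inverted Dirac operator; the array shapes and names are illustrative, and the single-configuration combinations below omit the ensemble average):

```python
import numpy as np

rng = np.random.default_rng(0)
T_len, V, N = 8, 4, 12  # time slices, spatial sites, color*spin = 3*4
# Mock quark propagator G(t, x) from a wall source at t = 0.
G = rng.standard_normal((T_len, V, N, N)) + 1j * rng.standard_normal((T_len, V, N, N))

def correlators(G):
    """Eq. (4): pseudoscalar, direct, and crossed contractions per time slice."""
    GGd = np.einsum("txab,txcb->txac", G, G.conj())  # G(t,x) G^dagger(t,x)
    tr = np.einsum("txaa->tx", GGd)                  # Tr over color and spin
    P = tr.sum(axis=1).real
    D = (tr**2).sum(axis=1).real
    C = np.einsum("txab,txba->tx", GGd, GGd).sum(axis=1).real
    return P, D, C

P, D, C = correlators(G)
# Eq. (5): flavor combinations for this one configuration.
J_N = D + 0.5 * C
J_E = D - C
```

Since G G<sup>†</sup> is positive semi-definite, Tr(M²) ≤ (Tr M)² holds site by site, so J_E is non-negative here before any gauge averaging.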
To obtain the binding energy $`\delta E_\mathrm{N}`$ in the $`I=0`$ channel, and the binding energy $`\delta E_\mathrm{E}`$ in the $`I=2`$ channel, we construct ratios of correlators and fit them to an exponential
$$\begin{array}{c}R_\mathrm{N}(t)=\frac{J_\mathrm{N}(t)}{P(t)^2}\simeq A\mathrm{exp}(-\delta E_\mathrm{N}t),\\ R_\mathrm{E}(t)=\frac{J_\mathrm{E}(t)}{P(t)^2}\simeq B\mathrm{exp}(-\delta E_\mathrm{E}t).\end{array}$$
(6)
The ratios of correlators are expected to take the single exponential form only at large $`t`$, after contributions from excited states have died away. We followed the usual procedure of looking for a plateau, and checking that smeared and unsmeared correlators give consistent results. The results for a typical case are shown in Fig. 4. There is no difficulty in identifying the plateau and extracting $`\delta E_\mathrm{N}`$.
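The plateau fit of eq. (6) reduces to a log-linear fit over a window where excited-state contamination has died away. A sketch on synthetic data (the window choice and the numbers are illustrative, not the paper's values):

```python
import numpy as np

def fit_delta_E(R, t_min, t_max):
    """Extract dE from R(t) ~ A exp(-dE t) by a linear fit of log R
    over the plateau window [t_min, t_max]."""
    t = np.arange(t_min, t_max + 1)
    slope, logA = np.polyfit(t, np.log(R[t_min:t_max + 1]), 1)
    return -slope, np.exp(logA)

# Synthetic check: a ratio with dE = 0.05 plus an excited-state
# contamination that dies away before the fit window opens.
t = np.arange(32)
R = 0.9 * np.exp(-0.05 * t) * (1 + 0.5 * np.exp(-0.4 * t))
dE, A = fit_delta_E(R, 16, 28)
print(dE, A)  # dE close to 0.05
```

In practice one would also vary the window and compare smeared and unsmeared sinks, as described above, to confirm that the plateau is genuine.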
## IV Results and Discussion
### A Our Results
We measured $`\delta E_N`$ and $`\delta E_E`$ for several different lattice spacings and sizes. Our results are shown in Fig. 5 along with previous results from Refs. . The exotic and non-exotic channels appear to scale differently as a function of $`L`$. The exotic channel falls like $`1/L^3`$, which is the expected form for a scattering state, eq. (1). A fit is given in Table II, and shown in the figure. The non-exotic channel appears to depart from $`1/L^3`$ falloff. To be complete, however, we have fitted the non-exotic data also to the form expected for a scattering state. The results are given in Table II and in the figure.
The parameters of the lattice calculations are given in Table I. The lattice spacings were determined by quarkonium $`P`$-$`S`$ splittings, using either charmonium or upsilon experimental measurements to set the scale.
Although we only studied one quark mass, Gupta et al. repeated their calculation for a lower quark mass, corresponding to $`m_P=770\mathrm{MeV}`$, and found that $`\delta E_\mathrm{N}`$ and $`\delta E_\mathrm{E}`$ were unchanged to within statistical errors. This suggests that studies of $`\overline{q}^2q^2`$ operators near $`\overline{q}q`$-$`\overline{q}q`$ thresholds are not too sensitive to quark masses and leads us to combine the energy splittings from the different calculations in Table I on the same plot. For the $`\delta E_\mathrm{E}`$ data we display both Gupta et al. and Fukugita et al.'s results with our own. We enlarged the error bar on Fukugita et al.'s point to 14%, since Ref. quoted a $`14\%`$ uncertainty in measuring their lattice spacing, which arises from the discretization errors involved in using an unimproved action on a coarse lattice. These are also apparent from the fact that the $`m_P/m_V`$ of Ref. is close to ours, but its $`m_P`$ is significantly lower. For $`\delta E_\mathrm{N}`$ we do not use Fukugita et al.'s data, since they included the annihilation diagrams, which we specifically exclude in order to see a $`\overline{q}^2q^2`$ state.
Our results are consistent with those of Refs. , even though we use much coarser lattices. This supports our use of Symanzik-improved glue and quark actions with tadpole-improved coefficients. As a further check on the validity of the improved actions, we note that at $`L=2\mathrm{fm}`$, where we performed a calculation at two different lattice spacings for the same lattice volume, the results for the two lattice spacings agree very well. There is no evidence of any discretization errors.
For the exotic $`\overline{q}^2q^2`$ system, the fit to eq. (1) is quite good, and the fitted scattering amplitude is remarkably similar to the result expected in the chiral limit, $`4f_P^2T=1`$. (Since we did not calculate $`f_P`$ at our quark masses, we have used the value $`f_P=148`$ MeV, derived from Ref. , Table 1.) We conclude that there are no surprises in the exotic channel: the interaction near threshold appears repulsive, and the strength is close to that predicted by chiral perturbation theory.
The non-exotic $`\overline{q}^2q^2`$ system, however, does not fit the expected scaling law at large $`L`$. The fit to eq. (1) has a very large $`\chi ^2`$, and is so poor that the extracted amplitude $`T`$ is meaningless. Instead $`\delta E_\mathrm{N}`$ appears to be approaching a negative constant at large $`L`$. Instead of a scattering state, we appear to be seeing a *bound state* in the non-exotic channel. Although our data are suggestive, they are not conclusive. It would be very interesting to gather more data at $`L\gtrsim 4`$ fm, as well as at a range of quark masses, in order to verify the existence of this new state in the quenched hadron spectrum.
### B Interpretation and Discussion
We have found evidence for a $`\overline{q}^2q^2`$ bound state just below threshold in the non-exotic pseudoscalar-pseudoscalar $`s`$-wave. In 2-flavor QCD the bound state would correspond to an isosinglet meson coupling to $`\pi \pi `$. In 3-flavor QCD the non-exotic channel corresponds to an entire nonet including two non-strange isosinglets and an isotriplet, and two strange isodoublets (see Fig. 1b). We work with a large quark mass so our results are not directly applicable to $`\pi \pi `$ scattering, but they do resemble physical $`K\overline{K}`$ scattering (although we work in the $`SU(3)_\mathrm{f}`$ limit where all quark masses are equal). The known isosinglet $`f_0(980)`$ and isotriplet $`a_0(980)`$ mesons are obvious candidates to identify with the non-exotic $`\overline{q}^2q^2`$ bound states we seem to have found on the lattice.
We believe the quark mass dependence of the non-exotic $`\overline{q}^2q^2`$ state is quite different from a standard $`\overline{q}q`$ lattice state. In the quenched approximation the masses of $`\overline{q}q`$ states like those shown in Fig. 2 are roughly independent of $`m_P`$. Note especially that the masses are smooth as they cross the threshold, $`2m_P`$. In contrast, we believe that the $`\overline{q}^2q^2`$ state we may have identified is strongly correlated with the $`PP`$ threshold when the quark mass is large, and departs from it in a characteristic way as the quark mass is reduced. (Indirect support for this comes from Gupta et al.'s finding that their binding energy is independent of the pseudoscalar mass.) In particular, we believe that the bound state will move off into the meson-meson continuum as $`m_P`$ is reduced toward the physical pion mass.
To explore the $`m_P`$ dependence of our results, we have made a toy model based on a relativistic generalization of potential scattering. We write a Klein-Gordon equation for the $`s`$-wave relative meson-meson wavefunction, $`\varphi (r)`$,
$$-\varphi ^{\prime \prime }(r)+(2m_P-U(r))^2\varphi (r)=E^2\varphi (r),$$
(7)
with the boundary condition that $`\varphi (0)=0`$. For $`U(r)=0`$ the spectrum is a continuum beginning at $`E=2m_P`$ as required. In the non-relativistic limit $`m_P\gg |U|`$, eq. (7) reduces to the Schrödinger equation with an attractive potential $`U(r)`$ (for $`U(r)>0`$). For sufficient depth and range, this potential will have a bound state. However, as $`m_P\to 0`$, the potential term in eq. (7) turns repulsive and the bound state disappears. Thus, if one keeps the depth and range of $`U`$ fixed as one decreases $`m_P`$, the bound state moves out into the continuum and disappears. To be quantitative, we have taken a square well, $`U(r)=U_0`$ for $`r\le b`$, and $`U(r)=0`$ for $`r>b`$. We chose a range $`b=1/m_\pi \simeq 1.4`$ fm, and adjusted $`U_0`$ such that the bound state has binding energy of 10 MeV when $`m_P\simeq 800\mathrm{MeV}`$. The bound state does indeed move off into the continuum (first as a virtual state) when $`m_P`$ goes below $`330\mathrm{MeV}`$. The behavior of the bound state in this toy model is shown in Fig. 6. Note this toy model is not meant to be definitive (we could have chosen a different relativistic generalization of the Schrödinger equation which would have preserved the bound state as $`m_P\to 0`$; for example, we could have replaced $`(2m_P-U)^2`$ by $`2m_P^2-2m_PU_1-U_2^2`$, and fine-tuned $`U_1`$ and $`U_2`$ to provide binding at arbitrarily low $`m_P`$), but it illustrates the expected behavior of a $`P`$–$`P`$ bound state: tracking $`2m_P`$ with roughly constant binding energy as $`m_P`$ falls, then unbinding at some critical $`m_P`$.
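The square-well version of this toy model is straightforward to solve numerically. The sketch below (Python; our own illustration, not code from the paper) matches an interior $`\mathrm{sin}(kr)`$ solution to an exterior decaying exponential, tunes the depth $`U_0`$ so the binding energy is 10 MeV at $`m_P=800`$ MeV with $`b=1/m_\pi `$, and then locates the critical quark mass below which the well no longer binds, using the shallow-well condition $`k_{\mathrm{max}}b=\pi /2`$.

```python
import math

B_RANGE = 1.0 / 140.0  # well range b = 1/m_pi in MeV^-1 (hbar = c = 1)

def matching(E, mP, U0):
    # s-wave matching for -phi'' + (2*mP - U)^2 phi = E^2 phi with phi(0) = 0:
    # inside r < b: phi ~ sin(k r); outside: phi ~ exp(-kappa r).
    # A bound state is a zero of k*cos(k b) + kappa*sin(k b).
    k = math.sqrt(max(E * E - (2.0 * mP - U0) ** 2, 0.0))
    kappa = math.sqrt(max(4.0 * mP * mP - E * E, 0.0))
    return k * math.cos(k * B_RANGE) + kappa * math.sin(k * B_RANGE)

def binding_energy(mP, U0, grid=400):
    # scan E in (|2*mP - U0|, 2*mP) for the first sign change, then bisect
    lo, hi = abs(2.0 * mP - U0) + 1e-9, 2.0 * mP - 1e-9
    es = [lo + (hi - lo) * i / grid for i in range(grid + 1)]
    for e1, e2 in zip(es, es[1:]):
        if matching(e1, mP, U0) > 0.0 > matching(e2, mP, U0):
            for _ in range(80):
                mid = 0.5 * (e1 + e2)
                if matching(mid, mP, U0) > 0.0:
                    e1 = mid
                else:
                    e2 = mid
            return 2.0 * mP - 0.5 * (e1 + e2)  # binding energy in MeV
    return 0.0  # the well does not bind

def tune_depth(target=10.0, mP=800.0):
    # bisect the depth U0 so the binding energy at mP equals `target`
    lo, hi = 5.0, 150.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if binding_energy(mP, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

U0 = tune_depth()
# shallow-well threshold: binding requires sqrt(U0*(4*mP - U0))*b >= pi/2
mP_crit = ((math.pi / (2.0 * B_RANGE)) ** 2 + U0 ** 2) / (4.0 * U0)
```

For these inputs the tuned depth comes out near 38 MeV, and the critical mass lands close to the 330 MeV threshold quoted above.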
On the basis of our lattice computation and the $`m_P`$ dependence suggested by our toy model, we believe it is possible that *all* the phenomena associated with the light scalar mesons are linked to $`\overline{q}^2q^2`$ states. The narrow $`0^{++}`$ isosinglet $`f_0(980)`$ and isotriplet $`a_0(980)`$ mesons near $`K\overline{K}`$ threshold can be directly identified with $`\overline{q}^2q^2`$ lattice bound states (top line of Fig. 1b). The broad $`\kappa (900)`$ and $`\sigma (600)`$ (middle and bottom lines of Fig. 1b) couple to low mass ($`\pi \pi `$ or $`\pi K`$) channels. We speculate that they are to be identified as the continuum relics of the same objects which appear as bound states of heavy quarks.
Of course, a thorough examination of this question would require implementing flavor $`SU(3)`$ violation by giving the strange quark a larger mass. This would mix and split the isoscalars, shift the other multiplets (see Fig. 1b), and dramatically alter thresholds. For example, the $`I=1`$ $`\overline{q}^2q^2`$ state couples both to $`K\overline{K}`$ and $`\pi \eta `$ (through the $`\overline{s}s`$ component of the $`\eta `$) in the quenched approximation. The fact that the physical $`K\overline{K}`$ and $`\pi \eta `$ thresholds are significantly different would certainly affect the manifestation of bound states such as those we have been discussing in the $`SU(3)`$-flavor-symmetric limit.
### C Conclusions and Future Work
We have presented evidence for previously unknown pseudoscalar meson bound states in lattice QCD. Our results need confirmation. Calculations on larger lattices are needed, and variation with quark mass, lattice spacing, and discretization scheme should be explored.
In the real world a $`0^{++}\overline{q}^2q^2`$ state may, depending on its flavor quantum numbers, mix with $`0^{++}\overline{q}q`$ and glueball states. It seems natural to expect that for sufficiently heavy quarks a bound state will remain, but only full, unquenched lattice calculations can confirm this.
It is possible that an existing unquenched study of $`0^{++}\overline{q}q`$ operators may show some corroboration of our results. In Ref. the authors study $`\overline{q}q`$ sources with dynamical fermions. Although their interest was in exploring the mixing of $`0^{++}\overline{q}q`$ with glueballs, there is nothing to stop their $`\overline{q}q`$ source from mixing with $`\overline{q}^2q^2`$. So they should be sensitive to the $`\overline{q}^2q^2`$ bound state we have identified. It is therefore quite interesting that they report a $`0^{++}`$ state with an anomalously low mass $`\sim 800\mathrm{MeV}`$.
If light $`\overline{q}^2q^2`$ states are, in fact, a universal phenomenon, and if the $`\sigma (600)`$ is predominantly a $`\overline{q}^2q^2`$ object, then the chiral transformation properties of the $`\sigma `$ have to be re-examined. The $`\pi `$ and the $`\sigma (600)`$ are usually viewed as members of a (broken) chiral multiplet. In the naive $`\overline{q}q`$ model both $`\pi `$ and $`\sigma `$ are in the $`(\frac{1}{2},\frac{1}{2})\oplus (\frac{1}{2},\frac{1}{2})`$ representation of $`SU(2)_L\times SU(2)_R`$ before symmetry breaking. In a $`\overline{q}^2q^2`$ model, as in the real world, the chiral transformation properties of the $`\sigma `$ are not clear.
If the phenomena that we have discussed survive the introduction of differing quark masses, then they will also have implications for heavy quark physics. For example, there could be a $`0^{++}`$ bound state just below the decay threshold for two $`D`$ mesons in the charmonium spectrum.
Finally, we note that calculations similar to ours could be undertaken in the meson-baryon sector and in other $`J^{PC}`$ meson channels. It has long been speculated that the $`\mathrm{\Lambda }(1405)`$ is some sort of $`KN`$ bound state and $`\overline{q}^2q^2`$ states have been postulated in other meson-meson partial waves.
## V Acknowledgments
We would like to thank the members of the CTP Phenomenology Club for the stimulating environment in which this project was conceived, and Craig McNeile, Weonjong Lee, and Seyong Kim for discussions of their published work.
This work is supported in part by funds provided by the U.S. Department of Energy (D.O.E.) under cooperative research agreement #DF-FC02-94ER40818. The lattice QCD calculations were performed on the SP-2 at the Cornell Theory Center, which receives funding from Cornell University, New York State, federal agencies, and corporate partners. The code was written by P. Lepage, T. Klassen, and M. Alford. |
no-problem/0001/hep-th0001120.html | ar5iv | text | # Pair Production of charged vector bosons in supercritical magnetic fields at finite temperatures
## I Introduction
As is well known, the energy spectrum of a vector boson with mass $`m`$, charge $`e`$, spin $`S=1`$ and gyromagnetic ratio $`g=2`$ in a constant uniform magnetic field $`\mathbf{B}=(0,0,B)`$ is given by the formula (we set $`c=\hbar =1`$)
$`E_n(p)=\sqrt{m^2+(2n+1-2S)|eB|+p^2},S=-1,0,1`$ (1)
The integer $`n`$ ($`n=0,1,2,\mathrm{\dots }`$) labels the Landau level, and $`p`$ is the momentum along the direction of the field . For $`n=0`$, $`p=0`$, $`S=1`$, $`E_0`$ vanishes at $`B=B_{\mathrm{cr}}\equiv m^2/e`$. When $`B>B_{\mathrm{cr}}`$, $`E_0`$ becomes purely imaginary. Such behavior of the energy $`E`$ reflects a quantum instability of the electrically charged vector boson field in the presence of an external uniform magnetic field. The source of this instability is the interaction of the external field with the additional (anomalous) magnetic moment of the bosons, which, owing to the gyromagnetic ratio $`g=2`$ , appears already in the tree approximation.
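The onset of the instability can be checked by direct arithmetic on eq. (1). A minimal sketch (Python; units with $`m=1`$ so the critical field is $`eB=m^2`$; the function name is ours):

```python
def energy_squared(n, S, p, eB, m=1.0):
    # E^2 from eq. (1): m^2 + (2n + 1 - 2S)|eB| + p^2, with S in {-1, 0, 1}
    return m * m + (2 * n + 1 - 2 * S) * abs(eB) + p * p

subcritical = energy_squared(0, 1, 0.0, 0.5)    # eB < m^2: E^2 stays positive
supercritical = energy_squared(0, 1, 0.0, 1.5)  # eB > m^2: E^2 < 0, unstable mode
```

Only the $`n=0`$, $`S=1`$, $`p=0`$ mode turns unstable; all other modes keep a positive squared energy for any field strength.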
Charged spin-1 particles with the gyromagnetic ratio $`g=2`$ are not minimally coupled to an external electromagnetic field (if they were coupled in such a way, the above ratio would have to be $`g=1`$). However, the quantum theory of relativistic charged spin-1 bosons with $`g=2`$ in the presence of external electromagnetic fields is a linear approximation of gauge field theory in which the local $`SU(2)`$ symmetry is spontaneously broken down to $`U(1)`$ symmetry . So one anticipates that the perturbative vacuum of the Weinberg–Salam electroweak model in the linear approximation would exhibit instability in a homogeneous superstrong magnetic field.
When $`B`$ becomes equal to $`B_{\mathrm{cr}}`$ the lowest energy levels of charged spin-1 particle and antiparticle "collide" with each other at $`E_0=0`$. One finds similar behavior in the case of scalar particles in a deep potential well which acts as the external field. In this latter case, one usually interprets the behavior of energies as follows: when the binding energy of a state exceeds the threshold for particle creation, pairs of scalar particle-antiparticle may be spontaneously produced giving rise to the so-called condensate. The number of boson pairs produced by such a supercritical external field (here the depth of the well) may be limited if only the mutual interaction of the created particles is taken into account. In the framework of (second) quantized field theory the behavior of bosons in supercritical external fields was first considered in , which assumes a self-interaction of the $`\varphi ^4`$-type, with the conclusion that the vacuum is, in fact, stabilized by the extremely strong (mutual) vacuum polarization. For a thorough discussion on the problem of electron-positron and scalar boson pair production in external electromagnetic fields see .
The case of vector bosons was considered in by taking into account only the ground state of the charged spin-1 bosons in the superstrong external magnetic field and assuming a self-interaction for this state like the one of the $`W_\mu `$ vector boson field in the Weinberg-Salam electroweak gauge theory, namely the $`|W|^4`$ interaction. In this work the condensate energy of charged spin-1 boson pairs was found, and a scheme was presented for quantizing the $`W`$ field in the neighborhood of the new classical vacuum with $`W_{\mathrm{classical}}\ne 0`$ near the threshold for condensate production, $`B-B_{\mathrm{cr}}\ll B_{\mathrm{cr}}\equiv m_W^2/e`$ ($`m_W`$ is the mass of the $`W`$ boson).
Using the complete electroweak Lagrangian the authors of have managed to construct new "classical" static magnetic solutions for a $`W`$-condensate in the tree approximation. They also show that the instability of the $`W`$ field does not occur owing to the $`|W|^4`$ self-interaction term in the electroweak Lagrangian. Moreover, the electroweak gauge symmetry may be restored in the presence of superstrong magnetic field with $`B=m_H^2/e`$ ($`m_H`$ is the mass of the Higgs boson) if $`m_H>m_W`$.
In the one-loop approximation of the effective Lagrangian of the charged spin-1 boson field (without a self-interaction term), radiative corrections may induce, in the presence of a strong uniform magnetic field, the production of charged vector boson pairs in the lowest energy states, i.e. the condensate. It is of interest to see what happens with the vacuum when not only an external magnetic field is present but also when the temperature is finite.
In this paper we shall investigate the problem of pair production of charged vector bosons induced by the unstable mode in the presence of a supercritical magnetic field at finite temperature. To study the vacuum effects we need to compute the effective potential density, which is closely related to the thermodynamic potential. To this end we shall first treat the problem in the framework of standard quantum statistical physics for the case $`B<B_{\mathrm{cr}}`$, when quantum statistical quantities such as the thermodynamic potential are unambiguous and may be well defined. After deriving the thermodynamic potential in the region $`B<B_{\mathrm{cr}}`$ we shall perform an analytic continuation of this quantity into the supercritical region $`B>B_{\mathrm{cr}}`$. This will give us the imaginary part of the effective potential, from which we can derive the expression for the rate of pair production. The contribution in the thermal one-loop effective action from the gauge boson field in a constant homogeneous magnetic field was previously considered in in connection with the question of symmetry restoration, but in that work the contribution from the unstable modes was explicitly ignored.
## II Thermodynamic potential
The thermodynamic potential $`\mathrm{\Omega }`$ for a gas of real (not virtual) charged vector bosons as a function of the chemical potential $`\mu `$, the magnetic induction $`B`$ of the external field, and the gas temperature $`T\equiv 1/\beta `$ is defined by
$`\mathrm{\Omega }={\displaystyle \frac{eBV}{4\pi ^2\beta }}{\displaystyle \int dp\,\mathrm{ln}\left\{1-\mathrm{exp}\beta \left[\mu -\left(m^2-eB+p^2\right)^{1/2}\right]\right\}}`$ (2)
$`+{\displaystyle \frac{eBV}{4\pi ^2\beta }}{\displaystyle \underset{n=0}{\overset{\mathrm{\infty }}{\sum }}}g_n{\displaystyle \int dp\,\mathrm{ln}\left\{1-\mathrm{exp}\beta \left[\mu -\left(m^2+(2n+1)eB+p^2\right)^{1/2}\right]\right\}},`$ (3)
where $`V`$ is the volume of the gas, and $`g_n=3-\delta _{0n}`$ counts the degeneracy of the excited states.
By expanding the logarithms and integrating over $`p`$, one can recast $`\mathrm{\Omega }`$ into
$`\mathrm{\Omega }(\mu )=\mathrm{\Omega }_1+\mathrm{\Omega }_2+\mathrm{\Omega }_3`$ (4)
$`=-{\displaystyle \frac{VeB}{2\pi ^2\beta }}{\displaystyle \underset{k=1}{\overset{\mathrm{\infty }}{\sum }}}k^{-1}\mathrm{exp}(k\beta \mu )\left[M_{-}K_1(k\beta M_{-})-M_+K_1(k\beta M_+)\right]`$ (5)
$`-{\displaystyle \frac{3VeB}{2\pi ^2\beta }}{\displaystyle \underset{n=0}{\overset{\mathrm{\infty }}{\sum }}}\sqrt{M_+^2+2neB}{\displaystyle \underset{k=1}{\overset{\mathrm{\infty }}{\sum }}}k^{-1}\mathrm{exp}(k\beta \mu )K_1\left(k\beta \sqrt{M_+^2+2neB}\right),`$ (6)
where $`M_{\mp }\equiv \sqrt{m^2\mp eB}`$ and $`K_n(x)`$ is the modified Bessel function of order $`n`$. If both particles and antiparticles are present, the factor $`\mathrm{exp}(k\beta \mu )`$ in (6) has to be replaced by $`2\mathrm{cosh}(k\beta \mu )`$. The thermodynamic potential as a function of the chemical potential is real-valued for real values of $`\mu `$ that satisfy, for particles and antiparticles with mass $`M_{-}`$, the inequality $`|\mu |\le M_{-}`$. This condition comes from the physical requirement that the density (and the occupation numbers) of particles and antiparticles with the mass $`M_{-}`$ is positive for any real values of momenta $`p`$.
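The passage from (2) to the series (5)–(6) can be spot-checked numerically, expanding $`\mathrm{ln}(1-y)=-\sum _ky^k/k`$ and using $`\int dp\,e^{-k\beta \sqrt{M^2+p^2}}=2MK_1(k\beta M)`$ term by term. A sketch (Python; the values $`M=1`$, $`\beta =2`$, $`\mu =0.3`$ are our own illustrative choice, and $`K_1`$ is evaluated from its integral representation $`K_1(z)=\int _0^{\mathrm{\infty }}e^{-z\mathrm{cosh}t}\mathrm{cosh}t\,dt`$):

```python
import math

def k1(z, n=4000):
    # K_1(z) = int_0^inf exp(-z cosh t) cosh t dt, by Simpson's rule
    tmax = math.acosh(745.0 / z)  # beyond this the integrand underflows
    h = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        s += w * math.cosh(t) * math.exp(-z * math.cosh(t))
    return s * h / 3.0

M, beta, mu = 1.0, 2.0, 0.3  # illustrative values with |mu| < M

# left side: momentum integral of ln(1 - exp(beta*(mu - E))), E = sqrt(M^2 + p^2)
def integrand(p):
    return math.log(1.0 - math.exp(beta * (mu - math.sqrt(M * M + p * p))))

n, pmax = 4000, 20.0
h = pmax / n
lhs = 0.0
for i in range(n + 1):
    w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
    lhs += w * integrand(i * h)
lhs *= 2.0 * h / 3.0  # the integrand is even in p

# right side: term-by-term integration, -sum_k (2M/k) K_1(k beta M) exp(k beta mu)
rhs = -sum(2.0 * M * k1(k * beta * M) * math.exp(k * beta * mu) / k
           for k in range(1, 40))
```

Both sides agree to the accuracy of the quadrature, which confirms the expansion used to pass from (2)–(3) to (4)–(6).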
In a weak field $`B\ll m^2/e`$ and at temperatures $`eB/m\ll T<m`$, when the spacing between Landau levels is still considerably less than the thermal energy, one can approximate $`\mathrm{\Omega }`$ as follows. For $`\mathrm{\Omega }_1`$ and $`\mathrm{\Omega }_2`$ in (6), we set $`M_{-}\approx m(1-\chi /2)`$ with $`\chi \equiv eB/m^2`$, and use the following formula
$`K_1(k\beta m(1-\chi /2))=K_1(k\beta m)+\chi {\displaystyle \frac{dz}{d\chi }}{\displaystyle \frac{dK_1(z)}{dz}},`$ (7)
where $`z=k\beta m(1-\chi /2)`$. Evaluation of $`\mathrm{\Omega }_3`$ can be done by first replacing the summation over $`n`$ by an integral using the Euler formula
$`{\displaystyle \underset{n=0}{\overset{\mathrm{\infty }}{\sum }}}f(n+1/2)={\displaystyle \underset{0}{\overset{\mathrm{\infty }}{\int }}}f(x)dx+(1/24)f^{\prime }(0),`$ (8)
and then by using the formula
$`{\displaystyle \int _1^{\mathrm{\infty }}}dz\,z^2K_1(kmz\beta )={\displaystyle \frac{1}{km\beta }}K_2(km\beta ).`$ (9)
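Both auxiliary formulas can be verified numerically. The sketch below (Python; our own check) tests the midpoint Euler formula (8) on $`f(x)=e^{-x}`$, for which the sum is $`1/(2\mathrm{sinh}(1/2))`$ in closed form, and the Bessel integral (9) by direct quadrature, using the representation $`K_\nu (z)=\int _0^{\mathrm{\infty }}e^{-z\mathrm{cosh}t}\mathrm{cosh}(\nu t)\,dt`$:

```python
import math

def k_bessel(nu, z, n=400):
    # K_nu(z) = int_0^inf exp(-z cosh t) cosh(nu t) dt, by Simpson's rule
    tmax = math.acosh(745.0 / z)  # beyond this the integrand underflows
    h = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        s += w * math.cosh(nu * t) * math.exp(-z * math.cosh(t))
    return s * h / 3.0

# eq. (8) with f(x) = exp(-x): the sum is exp(-1/2)/(1 - exp(-1)) = 1/(2 sinh(1/2)),
# while the right-hand side is int_0^inf f + f'(0)/24 = 1 - 1/24
euler_lhs = 1.0 / (2.0 * math.sinh(0.5))
euler_rhs = 1.0 - 1.0 / 24.0

# eq. (9) with a = k*m*beta: int_1^inf z^2 K_1(a z) dz = K_2(a)/a
a = 2.0
n, zmax = 800, 25.0  # the integrand decays like exp(-a z); the tail is negligible
h = (zmax - 1.0) / n
quad = 0.0
for i in range(n + 1):
    z = 1.0 + i * h
    w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
    quad += w * z * z * k_bessel(1, a * z)
quad *= h / 3.0
closed = k_bessel(2, a) / a
```

The Euler formula reproduces the exact sum up to the neglected higher Euler–Maclaurin corrections, and the quadrature matches the closed form (9) to the accuracy of the integration.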
The thermodynamic potential and density of spin-1 bosons at equilibrium can then be obtained as
$`\mathrm{\Omega }`$ $`\approx `$ $`-{\displaystyle \frac{VT^{1/2}m^{3/2}}{(2\pi )^{3/2}}}\left[3T^2\mathrm{Li}_{5/2}(e^{\beta (\mu -m)})+{\displaystyle \frac{7(eB)^2}{8m^4}}e^{\beta (\mu -m)}\right],`$ (10)
$`\rho `$ $`\approx `$ $`3\left({\displaystyle \frac{Tm}{2\pi }}\right)^{3/2}\zeta (3/2),`$ (11)
where $`\mathrm{Li}_s(x)=\sum _{k=1}^{\mathrm{\infty }}x^k/k^s`$ is the polylogarithmic function of order $`s`$, and $`\mathrm{Li}_s(1)=\zeta (s)`$. The magnetization of the gas under the above conditions is a positive function of the magnetic field induction and temperature, because the paramagnetic (spin) contribution dominates (we take this opportunity to correct the expression for the magnetization in ):
$`M_z(B)=-{\displaystyle \frac{1}{V}}{\displaystyle \frac{\partial \mathrm{\Omega }}{\partial B}}={\displaystyle \frac{7e^2BT^{1/2}}{4(2\pi )^{3/2}m^{1/2}}}e^{\beta (\mu -m)}.`$ (12)
When $`B\to B_{\mathrm{cr}}`$ transitions of bosons from the level $`n=0`$ to any excited levels $`n\ge 1`$ will not be allowed if $`T<eB/m`$, and all available bosons in the quantum state with $`n=0`$ may be considered as a condensate in a two-dimensional "momentum" space in the plane perpendicular to the magnetic field, with values of "effective momenta" $`k<(eB)^{1/2}`$. A true condensate, however, will not actually be formed in three-dimensional momentum space, because the longitudinal momenta of the bosons may have values outside this region.
For low temperature $`T`$ such that $`\beta M_{-}\gg 1`$, contributions to the thermodynamic potential (6) from all the excited states of the vector bosons are exponentially small compared with that from the state with $`n=0`$ and $`S=1`$. Hence only the first term $`\mathrm{\Omega }_1`$ in (6) needs to be considered in this limit
$`\mathrm{\Omega }(\mu )=\mathrm{\Omega }_1(\mu )=-{\displaystyle \frac{VeBM_{-}}{2\pi ^2\beta }}{\displaystyle \underset{k=1}{\overset{\mathrm{\infty }}{\sum }}}k^{-1}\mathrm{exp}(k\beta \mu )K_1(k\beta M_{-}).`$ (13)
Subsequently, the boson density is
$`\rho _g={\displaystyle \frac{eBM_{-}}{2\pi ^2}}{\displaystyle \underset{k=1}{\overset{\mathrm{\infty }}{\sum }}}K_1(k\beta M_{-})\mathrm{exp}\left(k\beta \mu \right).`$ (14)
When $`M_{-}\gg T`$, $`M_{-}-\mu <T`$, we can get for the total density at equilibrium
$`\rho \approx {\displaystyle \frac{eB(TM_{-})^{1/2}}{(2\pi )^{3/2}}}\left[\left({\displaystyle \frac{\pi T}{M_{-}-\mu }}\right)^{1/2}-1.46\right].`$ (15)
The first (leading) term of (15) reduces exactly to the one obtained in . The total boson density (14) for relatively "high" temperature, for which $`T>M_{-}`$ but $`T\ll m`$, is
$`\rho \approx {\displaystyle \frac{eBT}{2\pi ^2}}e^{\mu /T},`$ (16)
for $`-\mu \gg T`$, and
$`\rho \approx {\displaystyle \frac{eBT}{2\pi ^2}}\mathrm{ln}(T/|M_{-}+\mu |)`$ (17)
for $`\mu \to -M_{-}`$.
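The low-temperature form (15) can be compared against a direct evaluation of the sum (14). A sketch (Python; the parameters $`m=1`$, $`eB=0.99`$, $`T=0.01`$, $`\mu =0.098`$ are our own choice, picked so that $`M_{-}=0.1`$ is large compared to $`T`$ while $`M_{-}-\mu <T`$; $`K_1`$ is evaluated from its integral representation):

```python
import math

def k1(z, n=2000):
    # K_1(z) = int_0^inf exp(-z cosh t) cosh t dt, by Simpson's rule
    if z > 700.0:
        return 0.0  # below double-precision resolution anyway
    tmax = math.acosh(745.0 / z)
    h = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        s += w * math.cosh(t) * math.exp(-z * math.cosh(t))
    return s * h / 3.0

m, eB, T, mu = 1.0, 0.99, 0.01, 0.098
beta = 1.0 / T
M_minus = math.sqrt(m * m - eB)  # M_- = 0.1

# direct evaluation of the sum (14), divided by eB
rho_exact = (M_minus / (2.0 * math.pi ** 2)) * sum(
    k1(k * beta * M_minus) * math.exp(k * beta * mu) for k in range(1, 71))

# the asymptotic form (15), divided by eB; the constant -1.46 is zeta(1/2)
rho_asym = ((T * M_minus) ** 0.5 / (2.0 * math.pi) ** 1.5) * (
    (math.pi * T / (M_minus - mu)) ** 0.5 - 1.46)
```

For these values the direct sum and the asymptotic formula agree to a few percent, the residual coming mainly from the first correction to the large-argument expansion of $`K_1`$.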
It follows from formulae (15) and (17) that a significant number of vector bosons persists in states with non-zero momentum projections on the magnetic field direction. These states should now be considered as excited ones. Hence, any density of bosons can be accommodated outside the ground state (with $`p=0`$) at any temperature. This means that there is no true Bose–Einstein condensation in the presence of finite magnetic fields. We mention here that it was R. Feynman who first showed that true BEC is impossible in a classical one-dimensional gas.
An exact expression for the magnetization of the vector boson gas in the lowest energy state in a strong magnetic field may be derived from (13) in the form
$`M_z(B)`$ $`=`$ $`{\displaystyle \frac{e}{2\pi ^2\beta }}{\displaystyle \underset{k=1}{\overset{\mathrm{\infty }}{\sum }}}\left[{\displaystyle \frac{M_{-}}{k}}K_1(k\beta M_{-})+{\displaystyle \frac{eB\beta }{2}}K_0(k\beta M_{-})\right]\mathrm{exp}(k\beta \mu ).`$ (18)
The magnetization is also a positive function of the magnetic field and temperature.
## III Pairs production of vector bosons
Let us now turn to discussing the problem of pair production of vector bosons in a supercritical magnetic field at finite temperature. There are two possible mechanisms for this process: 1) pairs may be produced as a result of thermal collisions of real charged bosons in the external field, 2) pairs may be spontaneously produced by a constant magnetic field when $`B>B_{\mathrm{cr}}`$ from the vacuum, just as electron-positron pairs are produced by an external electric field .
Before we proceed to the second mechanism, let us first give some estimates of the density of spin-1 boson pairs (in the lowest energy state only) that may be produced as a result of thermal collisions of real bosons in the external field with $`B\lesssim B_{\mathrm{cr}}`$. If the density of the created pairs is much greater than that of the bosons present initially, we may apply formula (14) with $`\mu =0`$ to find the density of the pairs produced by thermal collisions. For low ($`\beta M_{-}>1`$ but $`T\ll m`$) and "high" ($`\beta M_{-}<1`$, $`T<m`$) temperature, we obtain respectively
$`\rho _T\approx {\displaystyle \frac{eB(M_{-}T)^{1/2}}{(2\pi )^{3/2}}}\mathrm{exp}(-M_{-}/T),`$ (19)
$`\rho _T\approx {\displaystyle \frac{eBT}{2\pi ^2}}\mathrm{ln}(T/M_{-}).`$ (20)
Now we come to the second mechanism. As is known (see, for example ) the quantum electrodynamics vacuum in the presence of an external electromagnetic field can be described by the total transition amplitude from the vacuum state $`|0_{\mathrm{in}}\rangle `$ at time $`t\to -\mathrm{\infty }`$ to the vacuum state $`\langle 0_{\mathrm{out}}|`$ at time $`t\to \mathrm{\infty }`$ as follows:
$$C_\mathrm{v}=\langle 0_{\mathrm{out}}|0_{\mathrm{in}}\rangle =\mathrm{exp}(iW(\mathcal{E},B)),$$
(21)
where $`W`$ is the effective action for a given quantum field. $`W`$ defines the effective Lagrangian $`L_{\mathrm{eff}}`$ according to $`W=\int d^4xL_{\mathrm{eff}}`$. The effective action is a classical functional depending on the external electric ($`\mathcal{E}`$) and magnetic ($`B`$) fields. When the external electromagnetic field is homogeneous the effective action is equal to $`W(\mathcal{E},B)=-(E(\mathcal{E},B)-E(\mathcal{E}=0,B=0))V\mathrm{\Delta }t`$, where $`E(\mathcal{E},B)`$ is nothing but the density of vacuum energy in the presence of the external field, $`V`$ is the volume and $`\mathrm{\Delta }t`$ is the transition time. It is worthwhile to note that the effective action contains all divergencies of the theory, but they are in the real part of $`W(\mathcal{E},B)`$. $`C_\mathrm{v}`$ is the probability amplitude that the vacuum remains the vacuum while the external electromagnetic field is applied.
For external fields smoothly changing both in space and time one has
$$|C_\mathrm{v}|^2=\mathrm{exp}(-2\mathrm{Im}L_{\mathrm{eff}}(\mathcal{E},B)V\mathrm{\Delta }t).$$
(22)
The imaginary part of the effective Lagrangian density $`\mathrm{Im}L_{\mathrm{eff}}`$, or of the vacuum energy, is finite and describes production of particles by the external electromagnetic field. It also signals that an instability of the vacuum occurs. The imaginary part of the effective Lagrangian density reduces at $`T=0`$ to the imaginary part of the effective potential density. The latter (for the case under consideration) arises from the lowest energy of the charged massive vector boson being imaginary at $`B>B_{\mathrm{cr}}`$. At $`T=0`$ the imaginary part of the effective potential is :
$$\mathrm{Im}V_0=-\frac{eB(eB-m^2)}{16\pi }.$$
(23)
Here $`V_0`$ is the zero-point energy density of the vacuum at $`T=0`$. From (22) one concludes that the quantity $`\mathrm{\Gamma }(B)\equiv -2\mathrm{Im}V_0(B)`$ is the production rate per unit volume of vector boson pairs (or, equivalently, the decay rate of the vacuum) in the external magnetic field at $`T=0`$, which, in this case, is given by
$$\mathrm{\Gamma }(B)=\frac{eB(eB-m^2)}{8\pi }.$$
(24)
When $`T\ne 0`$ one must use the thermal one-loop effective potential density, which is related to the thermodynamic potential by (see, for example )
$$V(B,T)=V_0+\mathrm{\Omega }(\mu =0,B,T)/V.$$
(25)
The first term in (25) does not depend on temperature, while the second term coincides with the thermodynamic potential (per unit volume) of a noninteracting gas of bosons at $`\mu =0`$. We see that the thermodynamic potential at $`\mu =0`$ plays the role of the free energy of the vacuum of the quantum field system in the presence of the external field at finite temperature.
Since the unstable mode contributes to $`\mathrm{\Omega }_1`$, it follows from (13) that the temperature-dependent part of the effective potential also becomes complex when $`B>B_{\mathrm{cr}}`$. The point $`B=B_{\mathrm{cr}}`$ is the branch point of the effective potential considered as a function of the product $`eB`$. One could avoid the complex values of the effective potential by taking account of the "physical" part of the energy spectrum (1), which may be well defined only when $`B<B_{\mathrm{cr}}`$.
To find the effective potential in the region $`B>B_{\mathrm{cr}}`$ we must perform an analytic continuation of $`K_1(z)`$ as a function of the variable $`z=k\beta M_{-}`$ into the complex plane. Let us denote the argument of the function $`K_1(z)`$ in the region $`B>B_{\mathrm{cr}}`$ as $`z=\pm ik\beta \sqrt{eB-m^2}\equiv \pm ikt`$. Then, by the Schwarz symmetry principle, we have
$$K_1(\pm ikt)=K_1^{\ast }(\mp ikt)=-\frac{\pi }{2}\left[J_1(kt)\mp iY_1(kt)\right],$$
(26)
where $`J_1(kt)`$ and $`Y_1(kt)`$ are the first- and second-order Bessel functions, respectively. ยฟFrom (13) and (26) we get
$$\mathrm{\Omega }_1(B,T)=\frac{Vm^3}{4\pi \beta }\chi \sqrt{\chi -1}\underset{k=1}{\overset{\mathrm{\infty }}{\sum }}\frac{1}{k}\left[\pm iJ_1(k\beta m\sqrt{\chi -1})+Y_1(k\beta m\sqrt{\chi -1})\right].$$
(27)
Here $`\chi =eB/m^2`$ as defined before. For the imaginary part of $`\mathrm{\Omega }_1(B,T)`$ one can obtain another form using the following formula which is valid for small temperatures such that $`t>2\pi `$ :
$$\underset{k=1}{\overset{\mathrm{\infty }}{\sum }}\frac{J_1(kt)}{k}=1-\frac{t}{4}+\frac{2}{t}\underset{n=1}{\overset{l}{\sum }}\sqrt{t^2-(2\pi n)^2},$$
(28)
where $`l`$ is the integral part of $`t/2\pi `$.
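Formula (28) is a Poisson-summation identity for $`J_1(kt)/k`$ (the $`n`$-sum collects the frequencies $`2\pi n<t`$), and it can be confirmed numerically. In the sketch below (Python; our own check), $`J_1`$ is computed from its ascending series at small argument and from the standard two-term large-argument asymptotics otherwise:

```python
import math

def j1(x):
    # Bessel J_1: ascending series for x < 15, two-term asymptotics beyond
    if x < 15.0:
        term = total = 0.5 * x
        for j in range(1, 60):
            term *= -(x * x / 4.0) / (j * (j + 1))
            total += term
        return total
    w = x - 0.75 * math.pi
    return math.sqrt(2.0 / (math.pi * x)) * (
        (1.0 + 15.0 / (128.0 * x * x)) * math.cos(w)
        - (3.0 / (8.0 * x)) * math.sin(w))

t = 10.0  # any t > 2*pi away from a multiple of 2*pi
lhs = sum(j1(k * t) / k for k in range(1, 4001))  # truncated left-hand sum

l = int(t / (2.0 * math.pi))  # integral part of t/(2*pi); here l = 1
rhs = 1.0 - t / 4.0 + (2.0 / t) * sum(
    math.sqrt(t * t - (2.0 * math.pi * n) ** 2) for n in range(1, l + 1))
```

At $`t=10`$ the truncated sum and the closed form agree to better than one part in a thousand; the truncation error of the oscillatory sum is far smaller than that.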
The production rate at finite temperature and magnetic field is now defined to be $`\mathrm{\Gamma }(B,T)\equiv -2\mathrm{Im}V(B,T)`$ according to (22). Since $`\mathrm{\Gamma }(B,T)`$ must be positive at any finite temperature, $`\mathrm{Im}V(B,T)`$ must always be negative. This requirement allows one to perform the analytic continuation unambiguously. It follows from Eqs. (23), (25), (27) and (28) that we must take (27) with the lower sign in order to obtain a non-negative decay rate:
$$\mathrm{\Gamma }(B,T)=\frac{m^4}{8\pi }\chi \left(\chi -1\right)\left[1+\frac{4}{x}\underset{k=1}{\overset{\mathrm{\infty }}{\sum }}\frac{J_1(kx)}{k}\right],$$
(29)
where $`x\equiv \beta m\sqrt{\chi -1}`$. This is the total pair production rate of vector bosons induced by the unstable mode in a supercritical magnetic field at finite temperatures. We note here that the expression inside the square bracket of (29) depends on $`\beta ,m,e`$ and $`B`$ only through the combination $`x`$.
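Using (28), the square bracket in (29) has a closed form, which makes the low-temperature behavior transparent: the bracket tends to unity as $`x\to \mathrm{\infty }`$, so (29) smoothly approaches its $`T=0`$ limit, with small oscillations at finite $`x`$. A sketch (Python; our own illustration):

```python
import math

def gamma_bracket(x):
    # the bracket [1 + (4/x) * sum_k J_1(k x)/k] of (29), via the closed form (28):
    # sum_k J_1(k t)/k = 1 - t/4 + (2/t) * sum_{n=1..l} sqrt(t^2 - (2 pi n)^2)
    l = int(x / (2.0 * math.pi))
    s = sum(math.sqrt(x * x - (2.0 * math.pi * n) ** 2) for n in range(1, l + 1))
    return 1.0 + (4.0 / x) * (1.0 - x / 4.0 + (2.0 / x) * s)

# thermal rate relative to its zero-temperature limit, at decreasing temperature
ratios = {x: gamma_bracket(x) for x in (12.0, 50.0, 200.0, 1000.0)}
```

For example the bracket evaluates to about 0.99 at $`x=50`$ and approaches unity closely for larger $`x`$, consistent with the statement below eq. (31).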
For low temperature ($`\sqrt{eB-m^2}\gg T`$) formula (29) may be simplified. Applying the asymptotic expansion for $`J_\nu (z)`$ at $`z\to \mathrm{\infty }`$ in the form
$$J_\nu (z)=\sqrt{2/\pi z}\mathrm{cos}(z-\nu \pi /2-\pi /4),$$
replacing the summation over $`k`$ by the integration and then taking into account the following formula for the Fourier integral
$`{\displaystyle \underset{a}{\overset{\mathrm{\infty }}{\int }}}e^{ixt}f(t)dt=ie^{iax}f(a)/x+o(1/x)\quad (x\to \mathrm{\infty })`$ (30)
(by the Riemann-Lebesgue lemma the last formula is valid if the integral converges uniformly in $`(a,\mathrm{\infty })`$ at all large enough $`x`$), we obtain (up to the term $`o(1/x)`$)
$$\mathrm{\Gamma }(B,T)\approx \frac{m^4\chi (\chi -1)}{4\pi }\left[1+\left(\frac{2}{x\pi ^{1/5}}\right)^{5/2}\mathrm{cos}\left(x-\frac{\pi }{4}\right)\right].$$
(31)
As $`T\to 0`$ ($`x\to \mathrm{\infty }`$), (31) reduces to the zero temperature expression (24).
## IV Resume
Our calculations have been performed for the case of a constant (in time) magnetic field. When pairs are spontaneously produced exclusively by the constant magnetic field, the background magnetic field will itself change in time. One may suppose that the background magnetic field is likely to remain constant in time only during a characteristic time $`\delta t\sim 1/\sqrt{eB-m^2}`$.
We see that the number of boson pairs produced by such a supercritical external field increases with increasing magnetic field. This number may be limited if the mutual interactions of the created particles are taken into account. As mentioned in the Introduction, the vacuum may be stabilized by the appearance of a vector boson condensate (at $`T=0`$) in the tree approximation. Another possibility to stabilize the vacuum is via one-loop radiative corrections to the mass of the charged spin-1 boson field in the critical field region $`B\simeq B_{\mathrm{cr}}`$ .
## V Acknowledgments
This work was supported in part by the Republic of China through Grant No. NSC 88-2112-M-032-002. VRK would like to thank the Department of Physics of Tamkang University (R.O.C.) for kind hospitality and financial support. |
no-problem/0001/cond-mat0001298.html | ar5iv | text | # Continuum field description of crack propagation
\[
## Abstract
We develop a continuum field model for crack propagation in brittle amorphous solids. The model is represented by equations for elastic displacements combined with an order parameter equation which accounts for the dynamics of defects. This model captures all of the important phenomenology of crack propagation: crack initiation, propagation, dynamic fracture instability, sound emission, crack branching and fragmentation.
\]
The dynamics of cracks is a long-standing challenge in solid state physics and materials science . The phenomenology of crack propagation is well established by recent experimental studies : once the flux of energy to the crack tip passes a critical value, the crack becomes unstable: it begins to branch and emits sound. Although this rich phenomenology is consistent with the continuum theory, the theory fails to describe it, because the way a macroscopic object breaks depends crucially on the details of cohesion on the microscopic scale .
Significant progress in the understanding of fracture dynamics was made by large-scale (about $`10^7`$ atoms) molecular dynamics (MD) simulations . Although limited to sub-micron samples, these simulations were able to reproduce several key features of crack propagation, in particular the initial acceleration of cracks and the onset of the dynamic instability. However, a detailed understanding of the complex physics of crack propagation still remains a challenge .
The uniform motion of a crack is relatively well understood in the framework of the continuum theory . Most studies treat the crack as a front or interface separating broken/unbroken material, propagating under the forces arising from elastic stresses in the bulk of the material and additional cohesive stresses near the crack tip . Although these investigations revealed some features of the oscillatory crack-tip instability, they are based on built-in assumptions, e.g., on a specific dependence of the fracture toughness on velocity, the structure of the cohesive stress, etc. To date there is no continuum model capable of describing in the same unified framework the whole phenomenology of fracture, ranging from crack initiation to oscillations and branching.
In this Letter we present a continuum field theory of crack propagation. Our model consists of the wave equations for the elastic deformations combined with an equation for the order parameter, which is related to the concentration of material defects. The model captures all the important phenomenology: crack initiation by a small perturbation, quasi-stationary propagation, instability of fast cracks, sound emission, branching and fragmentation.
Model. Our model is a set of elasto-dynamic equations coupled to an equation for the order parameter $`\rho `$. We define the order parameter as the relative concentration of point defects in the amorphous material (e.g., micro-voids). Outside the crack (no defects) $`\rho =1`$, and $`\rho =0`$ inside the crack (all the atomic bonds are broken). At the crack surface $`\rho `$ changes from $`0`$ to $`1`$ on a scale much larger than the inter-atomic distance, justifying the continuum description of the crack .
We consider the two-dimensional geometry, focusing on the so-called type-I crack mode, see Fig. 1. The equations of motion for an elastic medium are:
$$\rho _0\ddot{u}_i=\eta \mathrm{\Delta }\dot{u}_i+\frac{\partial \sigma _{ij}}{\partial x_j},\quad i,j=1,2.$$
(1)
$`u_i`$ are the components of displacements, $`\eta \mathrm{\Delta }\dot{u}_i`$ accounts for viscous damping, $`\eta `$ is the viscosity coefficient , $`\rho _0`$ is the density of material. In the following we set $`\rho _0=1`$. The stress tensor $`\sigma _{ij}`$ is related to deformations via
$$\sigma _{ij}=\frac{E}{1+\sigma }\left(u_{ij}+\frac{\sigma }{1-\sigma }u_{ll}\delta _{ij}\right)+\nu \dot{\rho }\delta _{ij}$$
(2)
where $`u_{ij}`$ is the elastic strain tensor, $`E`$ is the Young's modulus and $`\sigma `$ is the Poisson's ratio. To take into account the effect of plastic deformations inside the material we introduce a dependence of $`E`$ upon $`\rho `$. For simplicity we consider the linear dependence $`E=E_0\rho `$, where $`E_0`$ is the regular Young's modulus. The term $`\nu \dot{\rho }`$ in Eq. (2) accounts for the hydrostatic pressure created by the generation of new defects; $`\nu `$ is a constant which can be obtained from comparison with experimental data. Although this term can be formally interpreted as the hydrostatic pressure due to thermal expansion, we stress that thermal expansion may not be the only mechanism contributing to its magnitude. There is experimental evidence that the temperature at the crack tip can be of the order of several hundred degrees . However, this would imply the concept of thermal equilibrium, which is unlikely to be achieved at the crack tip.
One can observe that Eqs. (1) are the linear elasticity equations for $`\rho =1`$, i.e., outside the crack, and have trivial dynamics for $`\rho =0`$ (there is no dynamics inside the crack).
We assume that the order parameter $`\rho `$ is governed by purely dissipative dynamics which can be derived from a "free-energy" type functional $`F`$, i.e., $`\dot{\rho }=-\delta F/\delta \rho `$. Following Landau's ideas on phase transitions , we adopt the simplest form for the "free energy", $`F=\int dxdy\left(D|\nabla \rho |^2+\varphi (\rho )\right)`$, where the "local potential energy" $`\varphi `$ has minima at $`\rho =0`$ and $`\rho =1`$. Choosing a polynomial form for $`\varphi (\rho )`$ we arrive at
$$\dot{\rho }=D\mathrm{\Delta }\rho -a\rho (1-\rho )F(\rho ,u_{ll})+f(\rho )\frac{\partial \rho }{\partial x_l}\dot{u}_l.$$
(3)
Coupling to the displacement field enters through the position of the unstable fixed point, defined by the function $`F(\rho ,u_{ll})`$, where $`u_{ll}`$ is the trace of the strain tensor. The constraint imposed on $`F(\rho ,u_{ll})`$ is that it must have one zero in the interval $`1>\rho >0`$: $`F(\rho _c,u_{ll})=0`$, $`1>\rho _c>0`$, and $`\partial _\rho F(\rho =\rho _c,u_{ll})<0`$. The simplest form of $`F`$ satisfying this constraint is $`F(\rho ,u_{ll})=1-(b-\mu u_{ll})\rho `$, although any monotonic function of $`\rho `$ would show qualitatively similar behavior. Here $`b`$ and $`\mu `$ are material constants related to such properties as crack toughness and strain to failure. The coefficients $`D`$ and $`a`$ can be set to $`1`$ by rescaling time and length, $`t\to at`$, $`x_i\to x_i/\nu `$, where $`\nu ^2=D/a`$.
The last term in Eq. (3) represents the coupling of the order parameter to the velocity field $`\dot{u}`$ and is responsible for the localized shrinkage of the crack due to material motion. Since the specific form of the function $`f(\rho )`$ is irrelevant, we take $`f(\rho )=c\rho (1-\rho )`$ to ensure that $`f`$ vanishes at $`\rho =0`$ (no material) and $`\rho =1`$ (no defects); here $`c`$ is a dimensionless material constant. From our simulations we have found that this term in Eq. (3) is crucial for maintaining the sharp form of the crack tip.
Static solutions. Eqs. (1)-(3) have a dip-like solution corresponding to the open gap far behind the crack tip (a "groove" along the $`x`$-axis for our geometry, see Fig. 1). The static one-dimensional equations read:
$$\frac{\partial (\rho u_{yy})}{\partial y}=0,\quad \frac{\partial ^2\rho }{\partial y^2}-\rho (1-\rho )\left(1-(b-\mu u_{yy})\rho \right)=0,$$
(4)
with the fixed-grips boundary conditions (BC): $`u_y(y=\pm L)=\pm L\delta `$ ($`\delta `$ is the relative displacement), $`\rho (y=\pm L)=1`$, and $`\partial _y\rho (y=0)=0`$. Eliminating $`u_{yy}`$ from Eqs. (4) yields
$$u_{yy}=C/\rho ,\quad \partial _\xi ^2\rho =\rho (1-\rho )(1-\beta \rho ),$$
(5)
where $`C`$ is a constant of integration ($`C\int _0^Ldy/\rho (y)=L\delta `$), $`\beta =b/(1+\mu C)`$, and $`\xi =y\sqrt{1+\mu C}`$. The solution to Eq. (4) satisfying the BC for $`L\to \mathrm{\infty }`$ is:
$$\rho =\frac{\sqrt{(\beta +1)(1-\beta /2)}\mathrm{cosh}(\xi \sqrt{\beta -1})+2-\beta }{\sqrt{(\beta +1)(1-\beta /2)}\mathrm{cosh}(\xi \sqrt{\beta -1})+2\beta -1},$$
(6)
This solution exists for $`1<\beta <2`$. A deep and wide crack opening is attainable if $`2-\beta =\epsilon \ll 1`$. In this case the BC for $`u_y`$ can be reduced to $`\delta L=C(L+\pi \sqrt{3/(\epsilon b)})`$, yielding an equation for $`\epsilon `$ since $`C=(b/2-1)/\mu `$. For $`\epsilon \ll 1`$ the width of the crack opening $`d`$, defined by $`\rho (d/2)=1/2`$, is $`d=\sqrt{2/b}\mathrm{ln}(24/\epsilon )`$. After the exclusion of $`\epsilon `$ one arrives at
$$d=\sqrt{\frac{8}{b}}\mathrm{ln}\left(\frac{\sqrt{8b}}{\pi }L\left(\frac{2\mu \delta }{b-2}-1\right)\right)$$
(7)
The solution to Eq. (7) exists only if $`\delta `$ exceeds some critical value $`\delta _c`$ given by $`\delta _c\simeq (b/2-1)/\mu `$, which establishes the relation between the strain to failure $`\delta _c`$ and the material parameters $`\mu `$ and $`b`$. The logarithmic, instead of linear, dependence of the crack opening on the system size $`L`$ in Eq. (7) is a shortcoming of the model, resulting from the oversimplified dependence of the function $`F`$ on $`u_{ll}`$.
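A quick numerical sanity check (not part of the original paper) is to substitute the closed-form dip profile of Eq. (6), with additive constants $`2-\beta `$ and $`2\beta -1`$, back into the rescaled ODE $`\partial _\xi ^2\rho =\rho (1-\rho )(1-\beta \rho )`$ of Eq. (5); a minimal sketch using finite differences:

```python
import numpy as np

def rho_profile(xi, beta):
    # Dip solution of rho'' = rho(1-rho)(1-beta*rho), valid for 1 < beta < 2
    A = np.sqrt((beta + 1.0) * (1.0 - beta / 2.0))
    c = A * np.cosh(xi * np.sqrt(beta - 1.0))
    return (c + 2.0 - beta) / (c + 2.0 * beta - 1.0)

beta = 1.7
xi = np.linspace(-8.0, 8.0, 2001)
rho = rho_profile(xi, beta)

h = xi[1] - xi[0]
d2 = (rho[2:] - 2.0 * rho[1:-1] + rho[:-2]) / h**2      # second derivative
rhs = rho[1:-1] * (1.0 - rho[1:-1]) * (1.0 - beta * rho[1:-1])
residual = np.max(np.abs(d2 - rhs))
print(residual)    # only O(h^2) discretization error: the ODE is satisfied
print(rho[1000])   # depth of the dip at xi = 0 (about 0.31 for beta = 1.7)
```

As $`\beta \to 2`$ the dip depth tends to $`\rho (0)\to (2-\beta )/3\to 0`$, reproducing the deep-crack limit $`2-\beta =\epsilon \ll 1`$ discussed above.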
To study the dynamics of cracks we performed numerical simulations of Eqs. (1)-(3). We use an explicit second-order numerical scheme with up to $`4000\times 800`$ grid points. Our model reproduces all the important phenomenology: crack arrest below the critical stress, crack propagation above the critical stress, oscillations of the crack velocity, crack branching and fragmentation. Selected results are presented in Figs. 2-5.
Quasi-stationary propagation. We considered crack propagation initiated from a long notch with a length of the order of $`100`$ units. At relatively small loadings $`\delta `$ we observed quasi-stationary propagation (no oscillations). The crack produces a stress concentration near the tip, and the stress is relaxed behind the tip, see Fig. 2b. The distribution of shear (Figs. 2c,3) is close to that expected from elasticity theory. The angular dependence $`\mathrm{\Sigma }_{xy}(\theta )`$ of the shear stress $`\sigma _{xy}`$ near the tip is close to the theoretical dependence $`\sigma _{xy}(r,\theta )\sim r^{-1/2}\mathrm{\Sigma }_{xy}^0(\theta )+\mathrm{}`$, where $`\mathrm{\Sigma }_{xy}^0(\theta )=\mathrm{sin}(\theta )\mathrm{cos}(3\theta /2)`$, obtained for an infinite stationary crack. The discrepancy can be attributed to finite-size effects, velocity corrections, etc. We computed the angle $`\theta _m`$ of the maximum shear stress vs the crack speed $`V`$ normalized by the Rayleigh speed $`V_R`$ (see Figs. 2c, 3). As one derives from linear elasticity, the angle increases with the speed of the crack , in agreement with our numerical results.
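For the quasistatic limit the angle of maximum shear can be read off directly from the quoted angular factor; an illustrative sketch (overall constants and the $`r^{-1/2}`$ prefactor drop out of the maximization):

```python
import numpy as np

# Quasistatic mode-I angular factor of the shear stress quoted in the text
theta = np.linspace(0.0, np.pi, 200001)
sigma = np.sin(theta) * np.cos(1.5 * theta)
theta_m = theta[np.argmax(sigma)]
print(np.degrees(theta_m))   # roughly 31-32 degrees for V -> 0
```

At finite $`V/V_R`$ the velocity-dependent angular functions of linear elastodynamics shift this maximum to larger angles, which is the trend reported above.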
The calculated dependence of the crack tip velocity on the effective fracture energy $`G\propto \delta ^2`$, shown in Fig. 4, demonstrates excellent agreement with the experimental data from Ref. . The instability of the crack occurs when the velocity becomes of the order of 55% of the Rayleigh speed for the parameters of Figs. 2a-e. For the parameters of Fig. 2f we found a lower value of the critical velocity, about 32% of the Rayleigh speed. In all cases the instability manifests itself as pronounced velocity oscillations, crack branching and sound emission from the crack tip.
Our calculations indicate the absence of a minimal crack velocity, the so-called velocity gap . The initial velocity jump, seen experimentally as well as in some of our simulations (see Fig. 4), is attributed to the fact that the initial crack (notch) is too short or too blunt.
Instability of crack propagation. Taking sufficiently large values of $`\nu `$ and $`c`$ and starting from short cracks with a large load, we observed consecutive crack branching. Since Eqs. (1)-(3) are homogeneous, these secondary crack branches typically retract after the stress at the tip of the shorter crack relaxes. Although this retraction may indeed take place, e.g., in vacuum small cracks may heal, the oxidation of the crack surface and lattice trapping would prevent cracks from healing. In order to model these effects, one can introduce an additional field representing the concentration of oxygen and then couple it to the order parameter. In some simulations, we multiplied the r.h.s. of Eq. (3) by a monotonic function $`w(x-x_{tip})`$: $`w(x>0)=1`$ and $`w(x\to -\mathrm{\infty })\to 0`$, where $`x_{tip}`$ is the crack tip position. Thus, we slowed down the evolution of $`\rho `$ behind the crack tip, which, in turn, prevents secondary cracks from healing. We succeeded in obtaining realistic crack forms, see Figs. 2d,e. For fast cracks the "freezing" is not a necessity, since the retraction is rather slow. Fig. 2f shows results without freezing: massive crack branching along with crack healing is present.
Far away from the crack tip we registered oscillations of the hydrostatic pressure (see Fig. 5, Inset), which is a clear indication of sound emission by the crack tip. Sound waves reflected from the boundaries may also induce velocity oscillations, but they do not provide a mechanism for branching . An increase in the applied displacement $`\delta `$ results in an increase of the amplitude and of the number of secondary branches (cf. Figs. 2d-f and 5).
Some estimates are in order. Our unit of length $`\lambda `$ is the width of the craze zone and is of the order of a micron in PMMA. The unit of time $`\tau `$ is obtained from $`\lambda \sim 1\mu `$m and the Rayleigh speed $`V_R\sim 10^3\text{m/s}`$, which gives $`\tau \sim \sqrt{E_0}\times 10^{-9}`$ $`s`$. In experiments the characteristic time of the velocity oscillations $`\tau _0`$ is of the order of 1 $`\mu s`$. Our model gives $`\tau _0\sim 10^2\tau \sim 0.1`$–$`1`$ $`\mu s`$ for $`E_0=10`$–$`100`$.
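The unit arithmetic can be spelled out explicitly; a short sketch repeating the numbers quoted above ($`\lambda `$, $`V_R`$ and the factor $`10^2`$ are taken from the text, and model speeds are assumed to scale as $`\sqrt{E_0}`$):

```python
import math

lam = 1.0e-6    # unit of length: craze-zone width, about a micron in PMMA
V_R = 1.0e3     # Rayleigh speed, m/s
# unit of time tau ~ sqrt(E0) * lambda / V_R; oscillation period tau0 ~ 100 tau
tau0 = {E0: 1.0e2 * math.sqrt(E0) * lam / V_R for E0 in (10.0, 100.0)}
print(tau0)     # about 3.2e-7 s and 1.0e-6 s, i.e. 0.1-1 microseconds
```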
Conclusion. We have developed a continuum field theory of crack propagation. The central element of our approach is the description of the crack by an order parameter. The proposed approach enables us to avoid the stress singularity at the crack tip and to derive the tip instability. Our model is complementary to MD simulations of cracks and allows for a description of fracture phenomena on large scales. The parameters of our model can be obtained from comparison with experiment. It will be interesting to derive the order parameter equation from discrete models of crack propagation .
We are grateful to M. Marder, H. Swinney, J. Fineberg, V. Steinberg, H. Levine, E. Bouchaud, A. Bishop, I. Daruka for stimulating discussions. This research is supported by US DOE, grant W-31-109-ENG-38. |
no-problem/0001/astro-ph0001344.html | ar5iv | text | # Stochastic Acceleration and Non-Thermal Radiation in Clusters of Galaxies
## 1 Introduction
The recent detection of hard X-rays (Fusco-Femiano et al. 1999; Kaastra et al. 1999) and ultraviolet (UV) radiation (e.g. Lieu et al. 1996a, 1996b) from the Coma cluster and other clusters of galaxies has raised important questions about our understanding of the cosmic ray (CR) energy content of these large-scale structures. In the case of the Coma cluster, the detection of both radio radiation and hard X-rays, together with the standard interpretation based on a synchrotron plus inverse Compton scattering (ICS) model, implies unusually large CR energy densities, comparable with the equipartition value (Lieu et al. 1999), and, at the same time, magnetic fields ($`B\sim 0.1\mu G`$) appreciably smaller than the ones inferred from Faraday rotation measurements (Kim et al. 1990; Feretti et al. 1995).
Although the condition of equipartition of CRs with the thermal energy in clusters is certainly appealing from many points of view, it also encounters several problems: first, as shown by Berezinsky, Blasi & Ptuskin (1997), ordinary sources of CRs in clusters (e.g. normal galaxies, radio galaxies and accretion shocks) can provide only a small fraction of the equipartition energy density. Second, gamma-ray observations put strong limits on the CR energy density $`\epsilon _{CR}`$: as shown by Blasi (1999), some models of CR injection that imply equipartition are already ruled out by the present EGRET gamma-ray observations. In the same paper it was also proposed that current and future experiments working in the $`10`$–$`1000`$ GeV energy range could impose considerably better limits on $`\epsilon _{CR}`$.
Since the requirement for CR equipartition descends directly from the assumption of ICS as the source of the hard X-ray emission, it is natural to look for alternative interpretations. Ensslin, Lieu & Biermann (1999) showed that, assuming a non-thermal tail in the thermal distribution of the intracluster electrons, X-ray observations above $`20`$ keV could be explained as bremsstrahlung emission (see also Sarazin (1999)). The natural mechanism responsible for this tail is stochastic acceleration of low energy electrons due to plasma waves. We carry out here a detailed calculation, solving numerically a Fokker-Planck equation to derive the global (thermal plus non-thermal) electron distribution, resulting from the superposition of thermalization processes (mainly Coulomb scattering), acceleration and radiative energy losses (synchrotron emission and ICS). The electron distribution is shown to be substantially changed by the systematic stochastic energy gain, so that hard X-rays can be produced by bremsstrahlung with a non-thermal spectrum.
The paper is organized as follows: in section 2 we discuss stochastic acceleration, while in section 3 we outline the mathematical formalism used to describe the acceleration and thermalization processes. In section 4 we apply the calculation to the case of the Coma cluster, and we give our conclusions in section 5.
## 2 Stochastic Acceleration
Waves excited in a magnetized plasma can carry an electric field both parallel and perpendicular to the direction of the magnetic field $`\stackrel{}{B}`$. This electric field affects the dynamics of the charged particles in the medium in several ways. In particular, particles can be resonantly accelerated by waves when the following resonance condition is fulfilled:
$$\omega -k_{\parallel }v_{\parallel }-\frac{l\mathrm{\Omega }}{\gamma }=0,$$
(1)
where $`\omega `$ is the frequency of the wave, $`k_{\parallel }`$ is the parallel wavenumber, $`v_{\parallel }`$ is the component of the velocity of the particle parallel to the direction of the magnetic field $`\stackrel{}{B}`$, $`\mathrm{\Omega }=|q|B/mc`$ is the non-relativistic gyrofrequency of the particle with electric charge $`q`$ and mass $`m`$, and finally $`\gamma `$ is the Lorentz factor of the particle. The quantum number $`l`$ is zero for resonance with the parallel electric field and can be $`\pm 1,\pm 2,\mathrm{}`$ for resonances with the perpendicular electric field.
In the present calculation we limit our attention to electrons as the accelerated particles (therefore $`\mathrm{\Omega }=\mathrm{\Omega }_e=|q|B/m_ec`$ in eq. (1)) and we consider fast modes as the accelerating waves. These modes populate the range of frequencies below the proton cyclotron frequency $`\mathrm{\Omega }_p=|q|B/m_pc`$, and far from this frequency their dispersion relation can be written as $`\omega =v_Ak_{\parallel }`$, where $`v_A`$ is the Alfvén speed. Using now this dispersion relation and $`l=1`$ in eq. (1), we easily obtain the minimum momentum of the particles that can resonate with the waves, $`p_{min}\simeq (m_p/m_e)\beta _A/\mu `$. Here $`p`$ is the electron momentum in units of $`m_ec`$, $`\beta _A=v_A/c`$, and $`\mu `$ is the cosine of the pitch angle. For parameters typical of clusters of galaxies ($`B\sim 1\mu G`$, average gas density $`n\sim 4\times 10^{-4}\mathrm{cm}^{-3}`$), we obtain $`p\simeq 0.5/\mu `$. Particles with smaller momenta can be accelerated by whistlers, which populate the region of frequencies between the proton and electron cyclotron frequencies.
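The quoted threshold can be reproduced with a short estimate (a sketch in Gaussian-cgs units; the field and density values are those given in the text, and $`\mu =1`$ is assumed):

```python
import math

m_p = 1.6726e-24   # proton mass, g
m_e = 9.1094e-28   # electron mass, g
c   = 2.9979e10    # speed of light, cm/s

B = 1.0e-6         # magnetic field, G
n = 4.0e-4         # mean gas density, cm^-3

v_A = B / math.sqrt(4.0 * math.pi * n * m_p)   # Alfven speed
beta_A = v_A / c
p_min = (m_p / m_e) * beta_A                   # minimum resonant momentum (mu = 1)
print(beta_A, p_min)   # beta_A of a few 1e-4, p_min of order 0.5-0.7
```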
The power spectrum of the waves in the magnetized plasma is very poorly known: we assume an a priori spectrum of the form $`W(k)=W_0(k/k_0)^{-q}`$. The physical meaning underlying this functional form is that energy is initially injected on a large-scale turbulence of size $`L_c\simeq 2\pi /k_0`$ and a cascade to smaller scales (larger values of $`k`$) is produced. The fraction $`\xi _A`$ of the total magnetic energy $`U_B`$ in the form of waves is given by $`\xi _A=\frac{2}{U_B}\int _{k_0}^{\mathrm{\infty }}dk_{\parallel }W(k_{\parallel })`$, where the factor $`2`$ comes from the condition $`W(k_{\parallel })=W(-k_{\parallel })`$.
The resonant interaction of particles with the electromagnetic field of the waves results in a random walk in momentum space, which can be described by a diffusion coefficient $`D(p)`$ containing the details of the particle-wave interaction. Assuming that the waves are isotropically distributed it is possible to average over the pitch angle distribution and calculate the pitch-angle-averaged diffusion coefficient (Roberts (1995) and Dermer, Miller and Li (1996)):
$$D(p)=\frac{\pi }{2}\left[\frac{q-1}{q(q+2)}\right]ck_0\beta _A^2\xi _A(r_Bk_0)^{q-2}p^q/\beta .$$
(2)
Here $`r_B=m_ec^2/eB`$ and $`\beta c`$ is the particle speed. The rate of systematic energy gain is
$$\frac{d\gamma }{dt}=\frac{1}{p^2}\frac{d}{dp}\left(\beta p^2D(p)\right),$$
(3)
and the time scale for acceleration is readily given by $`\tau _{acc}=(\gamma -1)/(d\gamma /dt)`$. In the presence of energy losses with a typical time scale $`\tau _{loss}`$, efficient acceleration occurs when $`\tau _{acc}`$ and $`\tau _{loss}`$ are comparable.
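An order-of-magnitude evaluation of eqs. (2)-(3) can be sketched with the cluster parameters adopted later in the paper ($`B=0.8\mu `$G, $`\xi _A=7\%`$, $`L_c=10`$ kpc, Kolmogorov $`q=5/3`$); the value $`\beta _A\simeq 3\times 10^{-4}`$ is an assumption of this sketch (it follows from $`B`$ and the quoted gas density), and the numbers are indicative only:

```python
import math

c   = 2.9979e10           # cm/s
kpc = 3.0857e21           # cm
yr  = 3.156e7             # s

q      = 5.0 / 3.0        # Kolmogorov index
xi_A   = 0.07             # fraction of magnetic energy in waves
B      = 0.8e-6           # G
beta_A = 2.9e-4           # v_A/c for this B and n = 4e-4 cm^-3
k0     = 2.0 * math.pi / (10.0 * kpc)
r_B    = 8.187e-7 / (4.803e-10 * B)    # m_e c^2 / (e B), cm

def D_p(p):                            # momentum-diffusion coefficient, eq. (2)
    beta = p / math.sqrt(1.0 + p * p)
    pref = 0.5 * math.pi * (q - 1.0) / (q * (q + 2.0))
    return pref * c * k0 * beta_A**2 * xi_A * (r_B * k0)**(q - 2.0) * p**q / beta

def g(p):                              # beta * p^2 * D(p), cf. eq. (3)
    return (p / math.sqrt(1.0 + p * p)) * p * p * D_p(p)

p, h = 1.0, 1e-4
dgamma_dt = (g(p + h) - g(p - h)) / (2.0 * h) / p**2
tau_acc = (math.sqrt(1.0 + p * p) - 1.0) / dgamma_dt
print(tau_acc / yr)    # a few 1e7 yr at p ~ 1, comparable to the Coulomb-loss time
```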
## 3 Thermalization and Acceleration
In clusters of galaxies no stationary equilibrium can be achieved, due to the confinement of particles up to very high energies (Berezinsky, Blasi & Ptuskin, 1997). Under these conditions, all the energy injected into the cluster in the form of plasma waves and eventually converted into accelerated electrons remains stored in the cluster, and is partially dissipated only for Lorentz factors around $`300`$–$`1000`$, due to ICS and synchrotron losses. As a consequence, a time-dependent self-consistent treatment of the acceleration and losses is required.
In the region of electron energies that we are interested in, the acceleration process must compete strongly with Coulomb scattering. We develop here a suitable framework for this analysis in the context of the Fokker-Planck equation in energy space, which allows us to take properly into account the acceleration processes, the processes responsible for the gas thermalization (mainly Coulomb scattering), and the radiative losses.
The Fokker-Planck equation in energy space reads
$$\frac{\partial f(E,t)}{\partial t}=\frac{1}{2}\frac{\partial ^2}{\partial E^2}\left[D(f(E,t),E)f(E,t)\right]-\frac{\partial }{\partial E}\left[A(f(E,t),E)f(E,t)\right],$$
(4)
where $`E`$ is the electron kinetic energy in units of $`m_ec^2`$, and $`f(E,t)`$ is the electron distribution. $`f`$ is defined such that $`\int _0^{\mathrm{\infty }}dEf(E,t)`$ represents the total number of electrons in the system, which remains constant in the absence of source terms. The coefficients $`A(f(E,t),E)`$ and $`D(f(E,t),E)`$ include all the processes of acceleration and losses and need to be properly defined. In general we can write $`D=D_{acc}+D_{loss}`$ and $`A=A_{acc}+A_{loss}`$, where $`D_{acc}`$ and $`A_{acc}`$ are the acceleration terms, while $`D_{loss}`$ and $`A_{loss}`$ account for the Coulomb energy exchange and the radiative energy losses. We discuss them separately below:
i) Stochastic Acceleration
The diffusion coefficient in energy space is related to the diffusion coefficient in momentum space \[eq. (2)\] by $`D_{acc}(E)=2\beta ^2D(p)`$. The coefficient $`A_{acc}`$ represents the rate of systematic energy gain and is given by eq. (3).
ii) Thermalization and Losses
The Fokker-Planck approach to the thermalization of an astrophysical plasma was considered in a general case by Nayakshin & Melia (1998).
The coefficient $`A_{loss}`$ for Coulomb scattering can be written as $`A_{loss}^{Coul}(f(E,t),E)=\int dE^{}a(E,E^{})f(E^{},t)`$, where
$$a(E,E^{})=\frac{2\pi r_e^2cln\mathrm{\Lambda }}{\beta ^{}E^2\beta E^2}(E-E^{})\chi (E,E^{})$$
(5)
and $`\chi (E,E^{})=\int _{E^{-}}^{E^{+}}dx\,x^2/\left[(x+1)(x-1)^3\right]^{1/2}`$. Here $`E^\pm =EE^{}(1\pm \beta \beta ^{})`$, $`r_e`$ is the classical electron radius, $`ln\mathrm{\Lambda }\simeq 30`$, and $`\beta ^{}`$ ($`\beta `$) is the dimensionless speed of an electron with energy $`E^{}`$ ($`E`$). The diffusion coefficient $`D_{loss}^{Coul}`$ associated with the Coulomb losses describes the dispersion around the average rate of energy transport given by $`A_{loss}^{Coul}`$ and is given by $`D_{loss}^{Coul}(f(E,t),E)=\int dE^{}d(E,E^{})f(E^{},t)`$, with
$$d(E,E^{})=\frac{2\pi r_e^2cln\mathrm{\Lambda }}{\beta \beta ^{}E^2E^2}\left[\zeta (E,E^{})-\frac{1}{2}(E-E^{})^2\chi (E,E^{})\right]$$
(6)
where
$$\zeta (E,E^{})=\int _{E^{-}}^{E^{+}}dx\frac{x^2}{(x^2-1)^{1/2}}\left[\frac{(E+E^{})^2}{2(1+x)}-1\right].$$
(In the previous expressions some typos in the paper of Nayakshin & Melia (1998) have been corrected).
Note that both $`A(f(E,t),E)`$ and $`D(f(E,t),E)`$ depend on the function $`f(E,t)`$, so that eq. (4) is now a non-linear partial differential equation, to be solved numerically.
When and if electrons are accelerated to high Lorentz factors the ICS and synchrotron energy losses become important. They are included in the Fokker-Planck equation through a coefficient $`A_{loss}^{syn+ICS}`$ given by the well known rate of energy losses (e.g. Ensslin et al. 1999), while the related diffusion coefficient is neglected, since we are not interested in the dispersion at these energies.
The Fokker-Planck equation, eq. (4), was numerically solved to give the electron distribution at different times. The details of the numerical technique used to solve the equation will be given in a forthcoming paper. We checked that, assuming no acceleration and starting from an arbitrary distribution of electrons, Coulomb scattering drives the system towards a Maxwell-Boltzmann distribution with the appropriate temperature.
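The thermalization check mentioned here can be illustrated with a toy version of eq. (4). The coefficients below are NOT the Coulomb kernel of eqs. (5)-(6); they are the simplest dissipative pair, $`D=2TE`$ and $`A=3T/2-E`$, whose stationary state is the Maxwell-Boltzmann distribution $`f\propto \sqrt{E}\mathrm{exp}(-E/T)`$. A conservative explicit scheme then relaxes an arbitrary initial distribution toward it:

```python
import numpy as np

T, N, Emax = 1.0, 120, 12.0
E = (np.arange(N) + 0.5) * (Emax / N)      # cell centers, energy in units of T
h = Emax / N
f = np.ones(N) / (N * h)                   # start far from equilibrium (flat)

D = 2.0 * T * E                            # toy diffusion coefficient
A = 1.5 * T - E                            # toy drift: <E> relaxes to 3T/2
dt = 0.2 * h * h / D.max()
Aface = 0.5 * (A[1:] + A[:-1])
for _ in range(int(6.0 / dt)):
    g = D * f
    fup = np.where(Aface > 0.0, f[:-1], f[1:])        # upwinded advection
    flux = 0.5 * (g[1:] - g[:-1]) / h - Aface * fup   # (1/2) d(Df)/dE - A f
    f[:-1] += dt / h * flux                           # zero flux through the
    f[1:]  -= dt / h * flux                           # boundaries: N conserved

print(np.sum(f) * h)        # total particle number stays 1
print(np.sum(E * f) * h)    # mean energy relaxes toward 3T/2
```

The conservative flux form guarantees particle-number conservation, which is the property the text relies on in the absence of source terms.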
## 4 Stochastic Acceleration in clusters of galaxies
Stochastic acceleration due to plasma waves is a natural way to produce non-thermal tails in otherwise thermal particle distributions (see for instance the applications to solar flares by Miller and Roberts 1995). In this section we apply the formalism described above in order to calculate the electron distribution in the intracluster medium and the related X-ray emission. For simplicity we assume that the intracluster medium is homogeneous with a mean density $`n_{gas}`$ and a mean magnetic field $`B`$.
We assume here specific values of the parameters that can explain the hard X-ray observations of the Coma cluster (Fusco-Femiano et al. 1998), but this choice is not unique and a thorough investigation of the parameter space will be carried out elsewhere. Assuming an emission volume with size $`1.5`$ Mpc, we take $`n_{gas}=4\times 10^{-4}\mathrm{cm}^{-3}`$ and $`B=0.8\mu G`$ (note that with this value of $`B`$ an ICS interpretation of the hard X-rays would not be possible without overproducing the radio flux).
The turbulent energy is assumed to be injected on a typical large scale, comparable with the size of a galaxy ($`L_c\sim 10`$ kpc), and to decay down to smaller scales according to a Kolmogorov spectrum ($`q=5/3`$). The fraction of magnetic energy in the form of waves is taken to be $`\xi _A\sim 7\%`$. Clearly, large values of $`B`$ and $`\xi _A`$ decrease the time scale for acceleration. However, a too efficient acceleration also means an efficient damping of the waves. In other words, fast acceleration can deplete the spectrum of the waves unless a constant injection of energy is provided, and this injection rate is in turn constrained by the physical processes that we believe are responsible for the production of the waves. For instance, during a merger event a total energy of the order of $`L_{mer}\sim 2\times 10^{47}`$ erg/s is made available during a time of order $`10^9`$ yrs. We shall consider as reasonable rates of injection of energy in the form of waves ($`L_W`$) values which are appreciably smaller than $`L_{mer}`$. It can be shown that with the values of the parameters reported above, $`L_W\sim 2\times 10^{46}`$ erg/s.
The comparison of the time scales for Coulomb losses and acceleration implies that the stochastic energy gain becomes efficient for $`p\gtrsim 0.5`$ (comparable with the minimum momentum at which waves can couple with electrons). We start by assuming that the initial ($`t=0`$) electron distribution is a Maxwell-Boltzmann distribution with temperature $`T\simeq 7.5`$ keV, and we evolve the system under the action of stochastic acceleration and energy losses. The results of the calculation are shown in Fig. 1 for $`t=0`$ (thin solid line), $`t=5\times 10^8`$ yrs (thick dashed line) and $`t=10^9`$ yrs (thick solid line). The thin dash-dotted line is a thermal distribution with temperature $`8.21`$ keV, estimated to give the best fit to the thermal X-ray emission from Coma (Hughes et al. 1993). It is clear that the energy injected in waves is partially reprocessed into thermal energy, because of the efficient Coulomb scattering. Therefore the average temperature of the gas increases, as is expected for instance in merger events. In addition, however, a pronounced non-thermal tail appears. For $`E\gtrsim 0.1`$ the non-thermal tail has an approximately power-law behaviour with power index $`\sim 2.5`$, up to $`E\sim 1000`$, where ICS and synchrotron losses cut off the spectrum. As a consequence, stochastically accelerated electrons do not have any effect on radio observations.
The rate of production of X-rays per unit volume due to bremsstrahlung emission can be calculated as
$$j_X(E_X,t)=n_{gas}c\int dpf(p,t)\beta \sigma _B(p,E_X)$$
(7)
where $`f(p,t)=\beta f(E,t)`$ and $`\sigma _B(p,E_X)`$ is the differential cross section for the production of a photon of energy $`E_X`$ by bremsstrahlung of an electron with momentum $`p`$ (Haug 1997). Since we are assuming a constant density in the cluster, the total X-ray flux from the fiducial volume $`V`$ is $`I_X(E_X)=Vj_X(E_X)/(4\pi d_L^2)`$, where $`d_L`$ is the distance to the cluster. We specialize these calculations to the case of the Coma cluster, assuming $`h=0.6`$ for the dimensionless Hubble constant. The results are plotted in Fig. 2, with the lines labelled as in Fig. 1. The data points are from BeppoSAX, while the upper limits are from OSSE (Rephaeli et al. 1994). In this calculation we assumed that the injection of waves is continuous. If at some point the injection is turned off, the system gradually thermalizes and the electrons approach a Maxwell-Boltzmann distribution in $`10^7`$–$`10^8`$ years. This implies that in clusters where an X-ray tail at suprathermal energies is observed, the process of wave production must still be at work, or must have been turned off no longer than $`10^7`$–$`10^8`$ years ago.
## 5 Discussion and Conclusions
We studied the thermalization process of the intracluster medium under the effect of stochastic acceleration induced by plasma waves. We demonstrated that the wave-particle interactions can heat up the intracluster gas, but the overall electron distribution is not exactly a Maxwell-Boltzmann distribution, being characterized by a prominent non-thermal tail starting at the energies where the acceleration time is short enough to prevent complete thermalization. The heating process is cumulative because of particle confinement in clusters of galaxies (Berezinsky, Blasi & Ptuskin (1997)). Although the model was applied here to the case of the Coma cluster, our results are very general, and the deviations from thermality are limited only by the rate of energy injection in clusters and by the specific cluster parameters (e.g. $`n_{gas}`$ and $`B`$). In particular, the gas density and temperature determine the fraction of the injected energy that is rapidly reprocessed by Coulomb scattering into thermal energy of the bulk of the electrons: high-density clusters (or high-density regions in clusters) are less likely to have non-thermal tails.
Using the modified electron distribution in the calculation of the X-ray emission from clusters of galaxies results in non-thermal tails similar to the ones recently detected by the Beppo SAX satellite. In the case of the Coma cluster, we showed that the observed spectrum can be fit reasonably well by our model, using average values of the magnetic field which are close to the ones obtained from Faraday rotation measurements.
There are some observational consequences of this model that can be used as diagnostic tools as more detailed observations are performed. First, the efficiency of the stochastic acceleration depends on the ability of the plasma to recycle the injected energy into thermal energy through Coulomb collisions. In the cluster's core, where the density is larger, the thermalization is more efficient, and it is correspondingly harder to form non-thermal tails compared with the more peripheral regions of the cluster. Thus one prediction of the model is that the non-thermal X-ray excess should become more prominent in the outer regions. This seems indeed to be the case for the cluster A2199 (Kaastra et al. (1999)), where it was possible to measure the hard X-ray excess as a function of the distance from the center of the cluster.
A second consequence of the model presented here is that the modified electron distribution should contribute in a peculiar way to the Sunyaev-Zeldovich (SZ) effect: the pressure in the tail increases the rate of up-scatter of the photons in the low frequency part of the photon distribution in comparison with the purely thermal case. This results in an enhanced temperature change and in a different null point with respect to the thermal case (Blasi, Olinto & Stebbins 1999). The SZ test is, in our opinion, the most effective in discriminating between an ICS origin and a pseudo-thermal origin for the hard X-ray tails in clusters of galaxies.
I am grateful to A. Olinto, R. Rosner, C. Litwin, A. Ferrari, A. Stebbins and A. Malagoli for many helpful discussions. I am also indebted to S. Nayakshin, F. Melia and C. Dermer for useful correspondence on the problem of thermalization of astrophysical plasmas. I am particularly grateful to R. Fusco-Femiano for kindly providing the BeppoSAX data of the X-ray emission from the Coma cluster. I am also grateful to the anonymous referee for several comments that helped to improve the present paper. This work was partially supported by the DOE and the NASA grant NAG 5-7092 at Fermilab, by NSF through grant AST 94-20759 and by the DOE through grant DE-FG0291 ER40606 at the University of Chicago and by I.N.F.N.
no-problem/0001/math-ph0001012.html | ar5iv | text | # Introduction
## Introduction
Let $`D\subset \mathbb{R}^3`$ be a bounded domain with a smooth boundary $`\mathrm{\Gamma }`$,
$$(\nabla ^2+k^2)u=0\text{ in }D^{}:=\mathbb{R}^3\setminus \overline{D},k=\text{const}>0;u=0\text{ on }\mathrm{\Gamma }$$
$`1`$
$$u=\mathrm{exp}(ik\alpha x)+A(\alpha ^{},\alpha ,k)r^{-1}\mathrm{exp}(ikr)+o(r^{-1}),r:=|x|\to \infty ,\alpha ^{}:=xr^{-1}.$$
$`2`$
Here $`\alpha \in S^2`$ is a given unit vector, $`S^2`$ is the unit sphere in $`\mathbb{R}^3`$, the function $`A(\alpha ^{},\alpha ,k)`$ is called the scattering amplitude (the radiation pattern). It is well known that problem (1)-(2) has a unique solution, the scattering solution, so that the map $`\mathrm{\Gamma }\mapsto A(\alpha ^{},\alpha ,k)`$ is well defined. We consider the inverse obstacle scattering problem (IOSP): given $`A(\alpha ^{},\alpha ):=A(\alpha ^{},\alpha ,k=1)`$ for all $`\alpha ^{},\alpha \in S^2`$ and a fixed $`k`$ (for example, take $`k=1`$ without loss of generality), find $`\mathrm{\Gamma }`$.
Let us assume that $`\mathrm{\Gamma }\in \gamma _\lambda `$, where $`\gamma _\lambda `$ is the set of star-shaped (with respect to a common point $`O`$) surfaces, which are located in the annulus $`0<a_0\le |x|\le a_1`$, and whose equations $`x_3=\varphi (x_1,x_2)`$ in the local coordinates (in which $`x_3`$ is directed along the normal to $`\mathrm{\Gamma }`$ at a point $`s\in \mathrm{\Gamma }`$), have the property
$$\|\varphi \|_{C^{2,\lambda }}\le c_0,$$
$`3`$
$`C^{2,\lambda }`$ is the space of twice differentiable functions whose second derivatives satisfy the Hölder condition of order $`\lambda `$, $`0<\lambda \le 1`$; $`\lambda `$ and $`c_0`$ are independent of $`\varphi `$ and $`\mathrm{\Gamma }`$.
Uniqueness of the solution to IOSP with fixed-frequency data was first proved in \[1, p. 85\]. We are interested here in the stability problem: suppose that $`\mathrm{\Gamma }_j\in \gamma _\lambda `$ generate $`A_j(\alpha ^{},\alpha )`$, $`j=1,2`$, and
$$\underset{\alpha ^{},\alpha \in S^2}{\mathrm{max}}|A_1(\alpha ^{},\alpha )-A_2(\alpha ^{},\alpha )|<\delta .$$
$`4`$
What can one say about the Hausdorff distance between $`D_1`$ and $`D_2`$: $`\rho :=sup_{x\in \mathrm{\Gamma }_1}inf_{y\in \mathrm{\Gamma }_2}|x-y|`$? Let $`\stackrel{~}{D}_1`$ denote a connected component of $`D_1\setminus D_2`$, $`D_{12}:=D_1\cup D_2`$, $`\mathrm{\Gamma }_{12}:=\partial D_{12}`$, $`D_{12}^{}:=\mathbb{R}^3\setminus \overline{D_{12}},`$ $`\stackrel{~}{\mathrm{\Gamma }}_1:=\partial \stackrel{~}{D}_1:=\mathrm{\Gamma }_1^{}\cup \stackrel{~}{\mathrm{\Gamma }}_2`$, $`\stackrel{~}{\mathrm{\Gamma }}_2\subset \mathrm{\Gamma }_2:=\partial D_2`$, $`\mathrm{\Gamma }_1^{}\subset \mathrm{\Gamma }_1:=\partial D_1`$. Let us assume, without loss of generality, that $`\rho =|x_0-y_0|`$, $`x_0\in \mathrm{\Gamma }_1^{}`$, $`y_0\in \stackrel{~}{\mathrm{\Gamma }}_2`$. Can one obtain a formula for calculating $`\mathrm{\Gamma }`$, given $`A(\alpha ^{},\alpha )`$ for all $`\alpha ^{},\alpha \in S^2`$, $`k=1`$ fixed? No such formula is known for IOSP. For the inverse potential scattering problem with fixed-energy data such a formula and stability estimates are obtained in , . These results are based on the works ,, -, -.
In section II we prove that $`\rho \le c_1\left(\frac{\mathrm{ln}|\mathrm{ln}\delta |}{|\mathrm{ln}\delta |}\right)^{c_2}`$ as $`\delta \to 0`$. We also prove an inversion formula, but it is an open problem to make an algorithm out of this formula. In Remark 3, we comment on some recent papers \[4-6\] in which attempts are made to study the stability problem, and point out a number of errors in these papers. Our result, formulated as Theorem 1 in section II, is stronger than the results announced in Theorem 1 in , Theorem 1 in and Theorem 2.10 in .
## II. Stability Result and a Reconstruction Formula
###### Theorem 1
Under the assumptions of section I, one has $`\rho (\delta )\le c_1\left(\frac{\mathrm{ln}|\mathrm{ln}\delta |}{|\mathrm{ln}\delta |}\right)^{c_2},`$ where $`c_1`$ and $`c_2`$ are positive constants independent of $`\delta `$.
###### Proposition 1
There exists a function $`\nu _\epsilon (\alpha ,\theta )\in L^2(S^2)`$ such that
$$4\pi \underset{\epsilon \to 0}{lim}\int _{S^2}A(\theta ^{},\alpha )\nu _\epsilon (\alpha ,\theta )d\alpha =\frac{\lambda ^2}{2}\stackrel{~}{\chi }_D(\lambda ).$$
$`5`$
Here $`\lambda \in \mathbb{R}^3`$ is an arbitrary fixed vector, $`\chi _D(x):=\{\begin{array}{cc}1,\hfill & x\in D\hfill \\ 0,\hfill & x\notin D\hfill \end{array}`$, $`\stackrel{~}{\chi }_D(\lambda ):=\int _{\mathbb{R}^3}\mathrm{exp}(-i\lambda x)\chi _D(x)dx`$, $`\theta ,\theta ^{}\in M:=\{\theta :\theta \in \mathbb{C}^3,\theta \cdot \theta =1\}`$, $`\theta ^{}-\theta =\lambda `$, and $`A(\theta ^{},\alpha )`$ is defined by the absolutely convergent series
$$A(\theta ^{},\alpha )=\sum _{\ell =0}^{\infty }A_\ell (\alpha )Y_\ell (\theta ^{}),\theta ^{}\in M,A_\ell (\alpha ):=\int _{S^2}A(\alpha ^{},\alpha )\overline{Y_\ell (\alpha ^{})}d\alpha ^{},$$
$`6`$
where $`Y_\ell (\alpha )`$ are the orthonormal in $`L^2(S^2)`$ spherical harmonics, $`Y_\ell (\theta ^{})`$ is the natural analytic continuation of $`Y_\ell (\alpha ^{})`$ from $`S^2`$ to $`M`$, and the series (6) converges absolutely and uniformly on compact subsets of $`S^2\times M`$.
###### Demonstration Remark 1
The stability result given in Theorem 1 is similar to the one in , p. 9, formula (2.42), for inverse potential scattering.
###### Demonstration Remark 2
Proposition 1 claims the existence of the inversion formula (5). An open problem is to construct the function $`\nu _\epsilon (\alpha ,\theta )`$ algorithmically, given the data $`A(\alpha ^{},\alpha )`$ for all $`\alpha ^{},\alpha \in S^2`$.
###### Demonstration Proof of Theorem 1
First, we prove that $`\rho (\delta )\to 0`$ as $`\delta \to 0`$. Then, we prove that $`|u_2|\le c\rho `$ in $`\stackrel{~}{D}_1`$. Next, we prove that $`|u_2(x)|\le c\epsilon ^{\rho ^{c^{}}}`$ $`(\ast )`$ if $`\text{dist}(x,\mathrm{\Gamma }_1^{})=O(\rho )`$, where $`|\mathrm{ln}\epsilon |=cN(\delta )`$, $`N(\delta ):=|\mathrm{ln}\delta |/\mathrm{ln}|\mathrm{ln}\delta |`$. From $`(\ast )`$ Theorem 1 follows. By $`c`$, $`c^{}`$, $`\stackrel{~}{c}`$, $`c_j`$ various positive constants, independent of $`\delta `$ and of $`\mathrm{\Gamma }\in \gamma _\lambda `$, are denoted.
Step 1. Proof of the relation $`\rho (\delta )\to 0`$ as $`\delta \to 0`$. Assume the contrary:
$$\rho _n:=\rho (\delta _n)\ge c>0\text{ for some sequence }\delta _n\to 0.$$
$`7`$
Let $`\mathrm{\Gamma }_{jn}`$, $`j=1,2`$, be the corresponding sequences of the boundaries, $`\mathrm{\Gamma }_{jn}\in \gamma _\lambda `$. Due to assumption (3), one can select a subsequence convergent in $`C^{2,\mu }`$, $`0<\mu <\lambda `$, which we denote $`\mathrm{\Gamma }_{jn}`$ again. Thus $`\mathrm{\Gamma }_{jn}\to \mathrm{\Gamma }_j`$ as $`n\to \infty `$. From (7) it follows that $`(\ast \ast )`$ $`\rho (D_1,D_2)\ge c>0`$, where $`D_j`$ is the obstacle with the boundary $`\mathrm{\Gamma }_j`$. By the known continuity of the map $`\mathrm{\Gamma }_j\mapsto A_j`$, $`\mathrm{\Gamma }_j\in \gamma _\mu `$, it follows that $`A_1(\alpha ^{},\alpha )-A_2(\alpha ^{},\alpha )=0`$.
By the uniqueness theorem \[1, p. 85\] it follows that $`\mathrm{\Gamma }_1=\mathrm{\Gamma }_2`$. Thus, $`\rho (D_1,D_2)=0`$, which contradicts $`(\ast \ast )`$. This contradiction proves that $`\rho (\delta )\to 0`$ as $`\delta \to 0`$.
Step 2. Proof of the estimate $`|u_2(x)|\le c\rho `$ for $`x\in \stackrel{~}{D}_1`$. It is known that $`\|u_2\|_{C^2(D_2^{})}\le c`$, where $`u_2=u_2(x,\alpha )`$ is the scattering solution corresponding to the obstacle $`D_2`$. Since $`u_2=0`$ on $`\stackrel{~}{\mathrm{\Gamma }}_2`$, one has $`|u_2(x)|\le (\mathrm{max}_{x\in \stackrel{~}{D}_1}|\nabla u_2|)\rho \le c\rho `$.
Step 3. Proof of the estimate $`|v(x)|\le c\epsilon ^{d^{c^{}}}`$, where $`v:=u_2-u_1`$ and $`d:=\text{dist}(x,\mathrm{\Gamma }_1^{})`$.
From \[3, p. 26, formulas (4.12), (4.17), (2.28)\], one has
$$|v(x)|\le \epsilon :=c\mathrm{exp}\{-\gamma N(\delta )\},|x|>a_2,N(\delta ):=\frac{|\mathrm{ln}\delta |}{\mathrm{ln}|\mathrm{ln}\delta |},\gamma :=\mathrm{ln}\frac{a_2}{a_1}>0,$$
$`8`$
$`a_2>a_1`$ is an arbitrary fixed number, $`a_2\le |x|\le a_2+1`$ (in it is assumed that $`a_2>a_1\sqrt{2}`$, but $`a_2>a_1`$ is sufficient). Let us derive from (8), from equation (1) for $`v(x)`$, from the radiation condition for $`v(x)`$, and from the estimate $`\|v\|_{C^2(D_{12}^{})}\le c`$, the estimate:
$$|v(x)|\le c\epsilon ^{d^{c^{}}},x\in D_{12}^{},c_3\rho \le d\le c_4\rho ,c_3>0,d=\text{dist}(x,\mathrm{\Gamma }_1^{}),$$
$`9`$
If (9) is proved, then Theorem 1 follows. Indeed, $`|v(x)|=|v(s)+\nabla v\cdot (x-s)|=O(\rho )\le c\epsilon ^{\rho ^{c^{}}}`$ if $`d`$ satisfies (9). Here we use: 1) $`v=u_2-u_1=u_2`$ on $`\mathrm{\Gamma }_1^{}`$, $`|u_2|=O(\rho )`$ on $`\mathrm{\Gamma }_1^{}`$, since $`u_2=0`$ on $`\stackrel{~}{\mathrm{\Gamma }}_2`$ and $`|\nabla u_2|\le c`$, 2) $`|x-s|=O(\rho )`$ if $`\text{dist}(x,\mathrm{\Gamma }_1^{})=O(\rho )`$, and 3) $`0<c\le |\nabla v|\le \stackrel{~}{c}`$ if $`d`$ satisfies (9). The last claim follows from the continuity of $`\nabla v(x)`$, the smallness of $`\rho `$, $`\rho (\delta )\to 0`$ as $`\delta \to 0`$, and the fact that $`|\nabla u_j|_{\mathrm{\Gamma }_j}\ne 0`$ almost everywhere (otherwise, by the uniqueness of the solution to the Cauchy problem for (1), one concludes that $`u_j=0`$ in $`D_j^{}`$, which contradicts (2), since, by (2), $`|u_j|\to 1`$ as $`|x|\to \infty `$). Thus $`\mathrm{ln}\rho \le c\rho ^{c^{}}\mathrm{ln}\epsilon `$, or $`(\ast \ast \ast )`$ $`\frac{\rho ^{c^{}}}{\mathrm{ln}(\rho ^{-1})}\le c/\mathrm{ln}(\epsilon ^{-1})`$, where $`\rho `$ and $`\epsilon `$ are small numbers, $`0<\rho ,\epsilon <1`$, $`c,c^{}>0`$, and $`c`$ stands for different constants. It follows from $`(\ast \ast \ast )`$ that $`\rho \le \{c/\mathrm{ln}(\epsilon ^{-1})\}^{\frac{1+\omega }{c^{}}}`$, where $`\omega \to 0`$ as $`\epsilon \to 0`$. From the definition (8) of $`\epsilon `$, one gets the estimate of Theorem 1. Thus, the proof of Theorem 1 is completed as soon as (9) is proved.
Our argument remains valid if $`|v|=O(\rho ^m)`$ with some $`m`$, $`0<m<\infty `$. Such an inequality always holds for a solution $`v`$ of the elliptic equation (1) unless $`v\equiv 0`$ (see \[26, p. 14\]).
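To get a quantitative feel for how weak the logarithmic stability of Theorem 1 is, one can evaluate the bound numerically; the constants $`c_1=c_2=1`$ below are purely illustrative and are not the constants produced by the proof:

```python
import math

def N(delta):
    # N(delta) = |ln delta| / ln|ln delta|, as in (8); needs |ln delta| > 1
    L = abs(math.log(delta))
    return L / math.log(L)

def rho_bound(delta, c1=1.0, c2=1.0):
    # Theorem 1: rho(delta) <= c1 * (ln|ln delta| / |ln delta|)**c2
    L = abs(math.log(delta))
    return c1 * (math.log(L) / L) ** c2

for delta in (1e-3, 1e-6, 1e-12, 1e-24):
    print(f"delta = {delta:.0e}   bound = {rho_bound(delta):.3f}")
```

Improving the scattering data from $`\delta =10^{-12}`$ to $`\delta =10^{-24}`$ shrinks the bound only from about 0.12 to about 0.07, which is the practical meaning of the $`\mathrm{ln}|\mathrm{ln}\delta |/|\mathrm{ln}\delta |`$ rate.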
###### Demonstration Proof of (9)
Since $`\|v\|_{C^{2,\mu }(D_{12}^{})}\le c`$, $`v(x)`$ vanishes at infinity, and $`v`$ solves (1), one can represent $`v(x)`$ in $`D_{12}^{}`$ by the volume potential: $`v(x)=\int _{D_{12}}g(x-y)f(y)dy`$, $`f\in C^\mu (D_{12})`$, $`g(x):=\frac{\mathrm{exp}(i|x|)}{4\pi |x|}`$. The function $`|x-y|=[r^2-2r|y|\mathrm{cos}\theta +|y|^2]^{1/2}:=R`$ admits analytic continuation in the complex plane $`z=r\mathrm{exp}(i\psi )`$ to the sector $`S_\varphi :|\mathrm{arg}z|<\varphi `$, if $`z^2-2z|y|\mathrm{cos}\theta +|y|^2\ne 0`$ for $`z`$ in this sector. We use the branch of $`R`$ for which $`ImR\ge 0`$, and $`ReR|_{Imz=0}\ge 0`$. The argument of $`R^2:=z^2-2z|y|\mathrm{cos}\theta +|y|^2`$ is defined so that it belongs to the interval $`[0,2\pi )`$, so that the analytic continuation of $`g(x-y)`$ to the sector $`S_\varphi `$ is bounded there. It is crucial to have at least boundedness of the norm $`\|v\|_{C^1(D_{12}^{})}`$ $`(\ast )`$. Indeed, $`(\ast )`$ implies that one can extend $`v`$ from $`D_{12}^{}`$ to $`D_{12}`$ as a $`C^1(\mathbb{R}^3)`$ function. This is true although the boundary $`\partial D_{12}`$ may be nonsmooth to the degree which prevents using the known extension theorems (Stein's theorem, for example). The way to go around this difficulty is to extend $`u_1`$ and $`u_2`$ separately to $`D_1`$ and $`D_2`$ respectively, and then take $`v=u_2-u_1`$ as the extension. If $`v\in C^1(\mathbb{R}^3)`$ satisfies the radiation condition and the Helmholtz equation, and is $`C^2`$ in the interior and in the exterior of $`D_{12}`$, then it is representable as a sum of the volume and single-layer potentials, and our argument, which uses analytic continuation, goes through. Without this assumption the argument is not valid and the conclusion fails, as the following example shows.
Example 1: Let $`D:=\{x:|x|\le 1,x\in \mathbb{R}^3\}`$, $`v=v_\ell :=\frac{h_\ell ^{(1)}(r)}{h_\ell ^{(1)}(1)}Y_\ell (x^0)`$, where $`h_\ell ^{(1)}(r)`$ is the spherical Hankel function, $`Y_\ell (x^0)`$ is the normalized in $`L^2(S^2)`$ spherical harmonic. It is well known that $`h_\ell ^{(1)}(r)\sim -i\sqrt{\frac{1}{(\ell +\frac{1}{2})r}}(\frac{2\ell +1}{er})^{\frac{2\ell +1}{2}}`$ as $`\ell \to \infty `$ uniformly in $`1\le r\le b`$, $`b<\infty `$ is arbitrary. Therefore $`v_\ell \sim r^{-(\ell +1)}Y_\ell (x^0)`$ as $`\ell \to \infty `$. In any annulus $`\mathcal{A}:=\{x:1<a_2\le r\le b\}`$, one has $`\|v_\ell \|_{L^2(\mathcal{A})}\le ca_2^{-(\ell +1)}\to 0`$ as $`\ell \to \infty `$. On the other hand $`\|v_\ell \|_{L^2(S^2)}=1`$ for all $`\ell `$. Thus, for sufficiently large $`\ell `$ the solution $`v_\ell `$ to the Helmholtz equation is as small as one wishes in the annulus $`\mathcal{A}`$, but it is not small at the boundary $`\partial D`$: for any $`\ell `$ its $`L^2(\partial D)`$ norm is one. The reason for the solution to fail to be small on $`\partial D`$ is that the $`C^1`$ norm of $`v_\ell `$ is unbounded, as $`\ell \to \infty `$, on $`\partial D`$.
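Example 1 is easy to check numerically. The sketch below evaluates $`h_\ell ^{(1)}`$ by the standard upward recurrence (which is stable here, since $`h_\ell ^{(1)}`$ is the solution that grows with $`\ell `$); the particular values $`\ell =20`$, $`r=1.5`$ are chosen only for illustration:

```python
import cmath

def h1(l, x):
    """Spherical Hankel function h_l^(1)(x) via upward recurrence
    f_{n+1}(x) = ((2n+1)/x) f_n(x) - f_{n-1}(x)."""
    h_prev = -1j * cmath.exp(1j * x) / x            # h_0^(1)(x)
    if l == 0:
        return h_prev
    h_curr = -cmath.exp(1j * x) * (x + 1j) / x**2   # h_1^(1)(x)
    for n in range(1, l):
        h_prev, h_curr = h_curr, (2 * n + 1) / x * h_curr - h_prev
    return h_curr

l, r = 20, 1.5
ratio = abs(h1(l, r) / h1(l, 1.0))   # |v_l| at radius r (boundary value is 1)
print(ratio, r ** -(l + 1))          # both are about 2e-4
```

Already at $`\ell =20`$ the solution has dropped by more than three orders of magnitude between $`r=1`$ and $`r=1.5`$, while its size on the boundary $`|x|=1`$ is, by construction, of order one.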
Let us continue the proof of (9). The function $`v(r,x^0,\alpha )`$, where $`\alpha `$ is the same as in (2), $`x^0:=x/r`$, and $`r=|x|`$, admits an analytic continuation to the sector $`S`$ in the complex plane $`z`$, $`S:=\{z:|\mathrm{arg}[z-r(x^0)]|<\varphi \}`$, $`\varphi >0`$, $`r=r(x^0)`$ is the equation of the surface $`\mathrm{\Gamma }_1`$ in the spherical coordinates with the origin at the point $`O`$, and $`v(z,x^0,\alpha )`$ is bounded in $`S`$. The angle $`\varphi `$ is chosen so that the cone $`K`$ with the vertex at $`r(x^0)`$, axis along the normal to $`\mathrm{\Gamma }_1^{}`$ at the point $`r(x^0)`$, and the opening angle $`2\varphi `$, belongs to $`D_{12}^{}`$. Such a cone does exist because of the assumed smoothness of $`\mathrm{\Gamma }_j`$. The analytic continuation of this type was used in . It follows from (8) that $`sup_{r\ge a_2}|v(r)|\le \epsilon `$, and $`sup_{z\in S}|v(z)|\le c`$, since $`Im[z^2-2z|y|\mathrm{cos}\theta +|y|^2]^{1/2}\ge 0`$ in $`S`$. From this and the classical theorem about two constants \[22, p. 296\], one gets $`|v(z)|\le c\epsilon ^{h(z)}`$, where $`h(z)=h(z,L,Q)`$ is the harmonic measure of the ray $`L`$ with respect to the domain $`Q:=S\setminus L`$ at the point $`z\in Q`$. Here $`L`$ is the ray $`[a_2,+\infty )`$, and $`\partial Q`$ is the union of the two rays which form the boundary of the sector $`S`$ and of the ray $`L`$. The proof is completed as soon as we demonstrate that $`h(z)\ge kd^{c^{}}`$ as $`z\to r(x^0)`$ along the real axis, $`d:=|z-r(x^0)|,`$ $`k=\text{const}>0`$, $`c^{}=\text{const}>0`$. This, however, is clear: let $`r(x^0)`$ be the origin, and denote $`z-r(x^0)`$ by $`z`$. If one maps conformally the sector $`S`$ onto the half-plane $`Rez\ge 0`$ using the map $`w=z^{c^{}}`$, $`c^{}=\frac{\pi }{2\varphi }`$, then the ray $`L`$ is mapped onto the ray $`L^{}:=[a_2^{c^{}},+\infty )`$, and (see \[22, p. 293\]) $`h(z,L,Q)=h(z^{c^{}},L^{},Q^{})`$, where $`Q^{}`$ is the image of $`Q`$ under the mapping $`z\mapsto z^{c^{}}=w`$. By the Hopf lemma \[23, p. 
34\], $`\frac{\partial h(0,L^{},Q^{})}{\partial w}>0`$, $`h(0,L^{},Q^{})=0`$, so $`h(w,L^{},Q^{})\ge kw=kz^{c^{}}`$ as $`z\to 0`$, and (9) is proved. Theorem 1 is proved. ∎
###### Demonstration Proof of Proposition 1
It is proved in \[2, p. 183\] that the set $`\{u_N(s,\alpha )\}_{\alpha \in S^2}`$ is complete in $`L^2(\mathrm{\Gamma })`$. This implies the existence of a function $`\nu _\epsilon (\alpha ,\theta )`$ such that
$$\left\|\int _{S^2}u_N(s,\alpha )\nu _\epsilon (\alpha ,\theta )d\alpha -\frac{\partial \mathrm{exp}(i\theta s)}{\partial N_s}\right\|_{L^2(\mathrm{\Gamma })}<\epsilon ,$$
$`10`$
where $`\epsilon >0`$ is an arbitrarily small fixed number, $`N_s`$ is the exterior normal to $`\mathrm{\Gamma }`$ at the point $`s`$, and $`\theta \in M`$ is an arbitrary fixed vector. It is well known \[1, p. 52\] that
$$-4\pi A(\theta ^{},\alpha )=\int _\mathrm{\Gamma }\mathrm{exp}(-i\theta ^{}s)u_N(s,\alpha )ds.$$
$`11`$
Multiply (11) by $`\nu _\epsilon (\alpha ,\theta )`$, integrate over $`S^2`$ and use (10), to get
$$-4\pi \underset{\epsilon \to 0}{lim}\int _{S^2}A(\theta ^{},\alpha )\nu _\epsilon (\alpha ,\theta )d\alpha =\int _\mathrm{\Gamma }\mathrm{exp}(-i\theta ^{}s)\frac{\partial \mathrm{exp}(i\theta s)}{\partial N_s}ds.$$
$`12`$
Note that
$`{\displaystyle \int _\mathrm{\Gamma }}\mathrm{exp}(-i\theta ^{}s){\displaystyle \frac{\partial \mathrm{exp}(i\theta s)}{\partial N_s}}ds`$ $`={\displaystyle \frac{1}{2}}{\displaystyle \int _\mathrm{\Gamma }}{\displaystyle \frac{\partial \mathrm{exp}[-i(\theta ^{}-\theta )s]}{\partial N_s}}ds`$ $`13`$
$`={\displaystyle \frac{1}{2}}{\displaystyle \int _D}\nabla ^2\mathrm{exp}(-i\lambda x)dx=-{\displaystyle \frac{\lambda ^2}{2}}\stackrel{~}{\chi }_D(\lambda )`$
where the first equality is obtained with the help of Green's formula. From (12) and (13) one obtains (5). Proposition 1 is proved. ∎
###### Demonstration Remark 3
In - attempts are made to obtain stability results for IOSP, but several errors invalidate the proofs in , and related to stability for IOSP. Let us point out some of the errors. Lemma 5, as stated in \[4, p. 83\] and repeated as Lemma 4 in , claims that if a solution to the homogeneous Helmholtz equation in the exterior of a bounded domain $`D`$ is small in the annulus $`R\le |x|\le R+1`$, $`|v|\le \epsilon `$ in the annulus, then $`|v|_{\partial D}\le c|\mathrm{log}\epsilon |^{-c_1}`$. This is incorrect, as Example 1 shows. Lemma 3 in is wrong (the factor $`\rho ^{2m}`$ is forgotten in the argument). In fact, stronger results have been published earlier , , . In Lemma 2 is intended as a correction of Lemma 3 in (without even mentioning ), but its proof is also wrong: the factor $`\rho ^{2m}`$ is not estimated. There are other mistakes in (e.g., the known asymptotics of the Hankel functions in \[5, p. 538\] is given incorrectly). In these mistakes are repeated (p. 600). There are claims in that: a) there is a gap in Schiffer's proof of the uniqueness theorem for IOSP with the data $`A(\alpha ^{},\alpha _0,k)`$ for all $`\alpha ^{}\in S^2`$ and all $`k>0`$, $`\alpha _0\in S^2`$ fixed \[6, p. 605\], b) Theorem 6 in is incorrect, and the proof of Lemma 5 in contains a flaw \[6, p. 588\]. These claims are wrong, and no justifications of the claims are given. The remark concerning Schiffer's proof in \[6, p. 605, line 1\] is irrelevant (see \[1, pp. 85-86\]). It should be noted that the arguments in - are based on the well known estimates of Landis for the stability of the solution to the Cauchy problem, but no references to the work of Landis are given. In it is not mentioned that the concept of completeness of the set of products of solutions to PDE (which is discussed in ) has been introduced and widely used for the proof of uniqueness theorems in inverse problems in the works , , - (see also references in , ). In and two theorems are announced which contradict each other (Theorem 1 in and Theorem 2 in ).
## Acknowledgements
The author thanks NSF for support and Prof. H.-D. Alber for useful discussions.
e-mail: ramm@math.ksu.edu
no-problem/0001/astro-ph0001074.html | ar5iv | text | # SUPERNOVA REMNANTS, PULSARS AND THE INTERSTELLAR MEDIUM - SUMMARY OF A WORKSHOP HELD AT U SYDNEY, MARCH 1999
## 1 Introduction
The study of Supernova Remnants (SNRs) and their interaction with the surrounding medium has made significant advances in the last decade or so, thanks in large part to detailed observations of SN 1987A and SN 1993J. The vast amounts of data obtained over several years of study have considerably improved our understanding of the evolution of young supernova remnants in general. The occurrence of SN 1998bw within the error circle of the gamma-ray burst GRB 980425, suggesting a relationship between the two objects and new avenues to advance our understanding of them, has added an exciting new dimension to our investigation of supernovae (SNe).
With a view to discussing the latest results on these and similar topics, the Special Research Centre for Theoretical Astrophysics at the University of Sydney organised a workshop on Supernova Remnants, Pulsars and the Interstellar Medium. The two-day workshop (March 18-19, 1999) brought together more than 65 observers and theorists from all over Australia (and even a few from overseas), providing a forum for frank discussion and vigorous interaction. The topic was broadly interpreted, and the agenda for the meeting was kept open to accommodate talks that would be interesting to the audience, even if they did not easily fall into one of the categories. Graduate students were especially encouraged to attend and present their work.
A discussion of supernovae naturally leads one to think of the stellar remnants that remain after the explosion. In recent years large-scale surveys have led to a large increase in the number of known pulsars. Thus pulsars and the nebulae around them formed an important part of the workshop, with two sessions being devoted to the study of pulsars and their properties, especially radio pulsars. There were also interesting reviews presented on contemporary topics such as Magnetars and Anomalous X-ray Pulsars.
Finally the last session was devoted to the study of masers in SNRs, a field that, after a period of dormancy, is enjoying a great resurgence nowadays. Intriguing new observations were revealed, with the promise of more to come.
The following summary captures the essence of the science that was discussed at the workshop. The various sections correspond to the sessions at the meeting. Further details, and abstracts of talks, are available at the meeting home page:
http://www.physics.usyd.edu.au/$``$vikram/snrwkshop/snmain.html
## 2 Supernova remnants and the Surrounding Medium I
The first session in this workshop dealt with supernovae (SNe) and their interaction with surrounding circumstellar material (CSM). In particular, papers were presented on the diversity of SNe in general, and on some detailed observations of two very young objects (SN 1987A and SN 1993J) which now show evidence for interaction between the expanding ejecta and the surrounding material.
It is clear that the evolutionary stage of the progenitor star determines the kind of SN that occurs. However, it is only rarely that we have comprehensive data on the progenitor. Typically, classification is made from the observation of the supernova event and its aftermath, and a great diversity is seen in these catastrophic explosions. Brian Schmidt (ANU) gave a comprehensive review of SN classification emphasising this diversity and the fact that many events do not fit the existing sub-type classifications, based on studies of light curves and optical spectra (Filippenko 1997). Because there are so many SNe which are atypical, it may be that the broader groupings of โthermonuclear explosionsโ (involving predominantly white dwarfs) and โcore collapse of massive starsโ might lead to better predictions of the progenitor star type. It is clear that variations in age, mass and metallicity can all affect the SN light curve and spectrum. In the core collapse scenario, there are five phases which produce different spectral characteristics. These phases relate to shock break-out, adiabatic cooling, transfer of energy and subsequent radioactive heating in the core, and the eventual transition to the nebula phase. However, it is unclear what are the primary causes of differences in observed events. Present models involving variation in the energy of the initial explosion, mass loss rates and the condition of the CSM do not seem to explain the observed diversity. In a further twist, it may be that some types of SNe (for example Type Ib/c) may be linked to the phenomenon of Gamma-Ray-Bursters (GRBs).
The first specific example selected to demonstrate CSM interaction was SN 1987A, in the Large Magellanic Cloud. Ray Stathakis (AAO) presented results from optical and infrared monitoring with several instruments mounted on the Anglo Australian Telescope (AAT). Hubble Space Telescope (HST) images show evidence for the SN ejecta interacting with the edge of the CSM, from H$`\alpha `$ and Ly-$`\alpha `$ observations. At the AAT the source is not well resolved. However, monitoring of optical CSM lines establishes valuable baseline levels against which future changes due to increasing interaction may be measured. Several spectral lines, both in the optical (eg. H$`\alpha `$, OI) and infrared (eg. FeII, Br$`\gamma `$) regimes are becoming strong enough to image. It is anticipated this program will continue.
Radio observations of SN 1987A, made with the Australia Telescope Compact Array (ATCA), were presented by Lister Staveley-Smith (ATNF). Evolution of both angular size and flux density is seen. Radio frequency observations have been an effective way of monitoring the expanding shock front (Gaensler et al. 1997). Finding a consistent model to explain the results is more problematic. The data show that the overall radio luminosity is increasing linearly and that the EW asymmetry in the brightness of the observed circular ring is also becoming more pronounced. From the change in image size over several years, it appears that the expansion velocity has slowed significantly. The morphology of the images suggests a thin spherical shell with an EW asymmetry, expanding and now very close to the ring of CSM. Evidence for the onset of interaction is seen in the HST H$`\alpha `$ and Ly-$`\alpha `$ observations. It is speculated that the emission is coming from the reverse shock, consistent with the low value for the expansion velocity. Two possible models which might explain the observed results both have some unsatisfactory features. The minimum energy solution is inconsistent with a low shock velocity and the model invoking a dense HII torus to account for the slow shock velocity would not predict the symmetric ring observed, nor the inferred spherical shock. Overall, it seems that SN 1987A was an atypical Type II SN. It is expected that the shock will heavily impact the CSM ring in about 2004. High resolution observations at 20 GHz are planned with the ATCA for the anticipated impressive display.
The second object selected to illustrate early interaction with the CSM is SN 1993J. Michael Rupen (NRAO) showed results from VLBI observations of this young SNR (Bartel et al. 1994; Rupen et al. 1998), which was the brightest optical SN seen in the northern hemisphere since 1937. Early observations classified this event as a core collapse SN (Type IIb) of a massive progenitor star, probably about 15 M. This object has been closely monitored since 30 days after the explosion over several wavelengths in the range 1โ20 cm. The SN occurred in M81, a galaxy 3.63 Mpc away. Distance estimates from the SN observations agree well with the independent estimate from Cepheid measurements. The object is now seen as a nearly-circular expanding shell with an asymmetric brightness distribution. There is some indication that the core may be located off-centre. However, there is clear evidence of source evolution and the shell is noticeably decelerating, even if the most extreme opacity effects are included.
From the review by Schmidt and the specific data on SN 1987A and SN 1993J, it is clear that even for very young remnants, the peculiarities of the individual SN explosion and the pre-existing CSM are far stronger influences than any underlying generic characteristics. This makes it hard to develop global theories and emphasises the need for continuing searches and subsequent long-term monitoring of SNe.
## 3 Supernova Remnants and the Surrounding Medium II
Miroslav Filipovic (U Western Sydney) presented evidence for a young, nearby SN remnant, RX J0852.0-4622, initially identified by its X-ray and $`\gamma `$-ray emission. He showed that the X-ray image obtained in the ROSAT all-sky survey shows a disk-like, partially limb-brightened emission region, which is the typical appearance of a shell-like SNR. The object's high temperature of $`>3\times 10^7`$ K indicates that RX J0852.0-4622 is a young object which must also be relatively nearby (because of its large angular diameter of 2°). Comparison with historical SNRs limits the age to about 1,500 years and the distance to $`<1`$ kpc. Miroslav showed that any doubt of the identification of RX J0852.0-4622 as a SNR should be erased by the detection of $`\gamma `$-ray line emission from <sup>44</sup>Ti, which is produced almost exclusively in supernovae. Using the mean lifetime of <sup>44</sup>Ti (90.4 yrs), the angular diameter, adopting a mean expansion velocity of 5000 km/s, and a <sup>44</sup>Ti yield of $`5\times 10^{-5}`$ M, Iyudin et al. (1998) derived an age of $`\sim 680`$ years and a distance of $`\sim 200`$ pc, which argues that RX J0852.0-4622 (GRO J0852-4642) is the closest supernova to Earth to have occurred during recent human history. However, these observations are in apparent conflict with historical records. Miroslav also reported a positive radio-continuum detection at 4.75 GHz (PMN) which shows similarities to the X-ray emission. Further studies of this SNR will compare a mosaic radio-continuum survey to observations at other wavelengths, such as the ROSAT and ASCA X-ray images and spectra (already observed) and UKST H$`\alpha `$, \[SII\] and \[OIII\] plates.
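The kinematic half of this estimate is easy to reproduce from the quoted numbers alone (a 5000 km/s expansion velocity, an age near 680 yr, and a 1° angular radius, i.e. half the 2° diameter); the sketch below is only a consistency check, not a re-derivation of the <sup>44</sup>Ti flux analysis:

```python
import math

KM_PER_PC = 3.0857e13   # kilometres per parsec
SEC_PER_YR = 3.156e7    # seconds per year

v_kms = 5000.0                    # mean expansion velocity (km/s)
age_s = 680.0 * SEC_PER_YR        # age in seconds
theta = math.radians(1.0)         # angular radius: half of the 2 deg diameter

radius_pc = v_kms * age_s / KM_PER_PC   # physical radius of the shell
distance_pc = radius_pc / theta         # small-angle distance
print(radius_pc, distance_pc)
```

With these inputs the distance comes out near 200 pc, matching the Iyudin et al. (1998) value quoted above.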
Vikram Dwarkadas (SRCfTA) presented work he, along with Roger Chevalier (UVa), is carrying out on SN-circumstellar interaction, motivated by the presence of a circumstellar bubble surrounding SN 1987A. The evolution of supernova remnants in circumstellar bubbles depends mainly on a single parameter - the ratio of the mass of the circumstellar shell to the mass of the ejecta (Franco et al. 1991). For low values, the supernova remnant, over many doubling times, eventually "forgets" about the existence of this shell, and the resulting density profile looks as it would have in the absence of the shell. Vikram showed that analytical approximations and numerical models indicate that the evolution becomes more rapid as this ratio increases, and that the amount of energy transmitted from the shock to the shell also increases. Unless the shell mass substantially exceeds the ejecta mass, reflected and transmitted shocks are formed when the SN shock hits the circumstellar shell. Vikram demonstrated that the shock-shell interface is hydrodynamically unstable. The reflected shock moves towards the center, and may rebound off the center. Eventually several shocks may be found criss-crossing the remnant, leading to a highly complicated interior structure, with more than one hydrodynamically unstable region possible (Dwarkadas 2000). A rise in X-ray emission accompanies each shock-shell collision. When applied to the observations of SN 1987A, the SN-circumstellar shell model, with appropriate modifications, confirms the prediction of the outgoing shock colliding with the circumstellar ring in about 2005 (Chevalier & Dwarkadas 1995).
Chris Wright (ADFA) presented work on ISO observations of the SN remnant RCW 103. This supernova remnant has been studied extensively in the past in the near-infrared (NIR) by Oliva et al. (1989, 1990 and 1999), who showed that the remnant blast wave is interacting with the interstellar medium and producing very bright emission in lines of \[FeII\] and H<sub>2</sub>. The \[FeII\] emission coincides with the optical, radio and x-ray emission, but the H<sub>2</sub> emission occurs 20-30 arcseconds outside (i.e. in front) of it. This poses a problem in that standard shock excitation of H<sub>2</sub> predicts that the H<sub>2</sub> would reside either behind or coincident with the optical emission. Extinction arguments cannot be applied since the extinction to all of the optical, \[FeII\] and H<sub>2</sub> emission is independently observed to be the same. Further, the H<sub>2</sub> spectrum "looks" thermal. Therefore, x-rays have been proposed as a possible excitation mechanism. Chris presented ISO observations which covered a large suite of pure rotational and ro-vibrational H<sub>2</sub> lines, out to 28 microns, as well as lines of H, \[NeII\], \[OIV\] and \[FeII\] and the x-ray sensitive molecules H<sub>3</sub><sup>+</sup> and HeH<sup>+</sup>. The latter two were not detected, and their upper limits may imply interesting constraints on the amount of x-ray heating. Many H<sub>2</sub> lines were detected, and the spectrum still appears to be shock (i.e. thermally) excited, although more modelling is required to determine the type of shock. However, there are several cases where the line appears to have a non-thermal component.
Amy Mioduszewski (SRCfTA) discussed simulating Radio Images from Numerical Hydrodynamic Models (Mioduszewski et al. 1997). She motivated her discussion by emphasising that while hydrodynamic simulations are widely used to understand objects such as supernovae or jets, the calculated pressure, density, and velocity must be linked to what is observed, the synchrotron radiation from the material. Assuming minimum energy, Amy demonstrated that the synchrotron emissivity and opacity can be related to the hydrodynamical pressure and the number density of the particles. Using these, she calculated the total synchrotron flux and created an "image" of the source. Amy also pointed out that in the case of relativistic jets it is important to consider light travel time effects, because they significantly influence the appearance of the jets. In addition she showed that the simulated total intensity light curves, even of non-evolving jets, are not easily related to the relatively simple and regular shock structure in the underlying flow.
John Patterson (U Adelaide) discussed the potential for using very high energy gamma rays to understand the high energy astrophysical processes which occur in objects such as supernova remnants, gamma ray pulsars and AGN (BL Lacs), as well as the many unidentified EGRET ($`\sim `$1 GeV) sources. See Ong (1998) for a review of the field. As a leading member of the joint Australian-Japanese CANGAROO Project at Woomera, John is pushing the frontier of this ground-based observational area of photons with energies around 100 GeV. These high energy photons are produced in a variety of places by relativistic processes such as the inverse Compton effect and shock acceleration. A new 10 m CANGAROO II telescope has been commissioned, and John warmly welcomes co-operation with other Australian facilities and universities.
## 4 Pulsars and the Interstellar Medium I
The first session on Thursday afternoon opened with Simon Johnston (SRCfTA) reviewing pulsar wind nebulae (PWN). Typically 1% of the spin-down luminosity of pulsars appears as pulsed emission, the remaining energy presumably coming off in the form of a relativistic particle wind, which is eventually stopped and shocked by the pressure of the surrounding interstellar medium (ISM), producing nonthermal radio emission. If the space velocity of the pulsar is low, the wind produces a bubble, or plerion, which can be imaged in the radio, optical or X-rays. On the other hand, if the pulsar has a high space velocity (and many do; see the next talk), the rapid motion through the ISM produces a bow shock, which can be seen in H$`\alpha `$ or the radio continuum. Clearly what will be seen in individual cases will vary depending on the properties of the pulsar, its space motion, and the nature of the surrounding ISM. A search for PWN associated with 35 pulsars at 8.4 GHz with the VLA turned up 14 examples. There appear to be two classes of pulsars - young systems with $`L_{\mathrm{radio}}/\dot{E}\sim 10^{-4}`$ and middle-aged pulsars where this ratio is below $`10^{-6}`$. Several hypotheses could account for this difference - perhaps the ratio of Poynting flux to particle flux decreases, or the energy spectrum steepens, or more pulse energy appears as gamma-rays. Of course, more observations are needed: a survey of 50 pulsars with the ATCA is underway at 1.4 GHz using pulsar gating to increase the sensitivity 200-fold. The survey to date has turned up one bow shock in the 5 pulsars examined - the Speedboat Nebula associated with PSR 0906-49.
Matthew Bailes (Swinburne) gave a talk on the distribution of pulsar velocities. The first part promoted the new supercomputer centre at Swinburne, extolling the processing power of the planned network of 64 linked workstations. This is impressive, but will be limited to problems that can be broken into many fairly-independent parallel modules. The second part discussed pulsar velocities, a topic that is important in understanding the Galaxy's pulsar population as a whole. He briefly reviewed the methods for measuring them. Generally the old millisecond pulsars have $`v\lesssim 300`$ km/s - they are likely bound to the Galaxy, as one would expect. However, younger pulsars overall have higher velocities, which indicates that their progenitor supernova explosions had a substantial asymmetry.
Lewis Ball (SRCfTA) spoke about inverse Compton scattering by relativistic pulsar winds. Electrons in the wind can upscatter ambient photons - starlight or the cosmic microwave background - to TeV energies for the expected Lorentz factors of $`\sim 10^6`$. Generally this effect is small except for pulsars embedded in strong radiation fields, such as those in close binary systems. The pulsar B1259-63, which is in an eccentric orbit about a Be star, is of particular interest: conversion of 0.1% of the pulsar's spin-down luminosity into 100 GeV photons would give a flux detectable by the CANGAROO II Cerenkov telescope. The scattering is a strong function of geometry and distance from the star - the pulsar must be rather close to the star for the effect to be significant. The distance of B1259-63 to the Be star ranges from 20 to 300 $`R_{\odot }`$, so the gamma-ray luminosity at periastron is large, with predictable variations around the well-determined orbit (Kirk, Ball & Skjæraasen 1999; Ball & Kirk 1999). Thus CANGAROO II observations will potentially be able to probe the properties of the pulsar wind, which is otherwise difficult to detect.
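A quick back-of-the-envelope check of the quoted energies, using the Thomson-limit inverse-Compton scaling E_out ~ γ²E_seed (an illustrative sketch, not a calculation from the talk; Klein-Nishina corrections reduce the yield at the highest energies):

```python
def ic_energy_eV(gamma, seed_photon_eV):
    # Thomson-limit inverse-Compton upscattering: E_out ~ gamma^2 * E_seed.
    # Klein-Nishina suppression sets in once gamma * E_seed approaches the
    # electron rest energy (~5.1e5 eV), so these are upper estimates.
    return gamma**2 * seed_photon_eV

print(ic_energy_eV(1e6, 1.0))    # starlight photon (~1 eV): ~1e12 eV, i.e. TeV
print(ic_energy_eV(1e6, 6e-4))   # CMB photon (~6e-4 eV): ~6e8 eV
```

For the quoted Lorentz factors of ~10^6, a ~1 eV starlight photon is indeed boosted into the TeV band.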
Kinwah Wu (SRCfTA) presented observations of optical and infrared lines in the spectrum of the X-ray binary Cir X-1. The emission lines are asymmetric, with a narrow component at +350 km/s and a broader blue-shifted component. Previously it had been suggested that the narrow component arises from rotation of an accretion disc, the corresponding blue-shifted component being absent because of a shadowing effect at that particular orbital phase. However, the new observations and archival data show that the profiles have varied systematically over the last 20 years: the narrow component always lies in the range 200-400 km/s, while the blue component varies somewhat both in shape and redshift (Johnston, Fender & Wu 1999). Kinwah offered a new model in which the narrow component is interpreted as arising from the heated surface of the 3-5 $`\mathrm{M}_{\odot }`$ companion star, and the broad component arises in an optically thick outflow driven by super-Eddington accretion onto the neutron star. The variability in the blue component reflects the eccentricity of the orbit: at periastron, the companion overfills its Roche lobe and dumps matter onto the neutron star, producing the outflow. The overflow shuts off after periastron; near apastron the remaining overflow material settles into a quasi-steady accretion disc. This model explains the variability of the blueshifted component of the spectrum and the X-ray behaviour. One implication of this picture is that the system has a radial velocity of +430 km/s, which makes Cir X-1 one of the fastest binaries known. Even so, a sufficiently asymmetric supernova explosion can impart the required kick without unbinding the system (Tauris et al. 1999).
## 5 Pulsars and the ISM II
Extreme Scattering Events (ESEs) from pulsars were the topic of Mark Walker's (SRCfTA) presentation. ESEs were first discovered in extra-galactic sources, the symptoms being a rapid change in flux density of the observed source. These flux density changes are attributed to ionised gas clouds in our own Galaxy. From the observational data, Mark Walker and Mark Wardle have determined the parameters of these clouds: they have a size of roughly 2 AU, an electron density of $`10^3`$ cm<sup>-3</sup> and a filling factor of about $`5\times 10^{-3}`$. They postulate that these clouds may solve the "missing mass" problem, at least in our Galaxy (Walker & Wardle 1998; 1999).
If a pulsar undergoes an ESE, one can in principle measure three different quantities. These are the deflection of the image (which can be measured by VLBI techniques), the delay of the signal (which can be obtained from pulsar timing) and the magnification of the image. Pulsars are exceedingly small, and this implies both a large peak magnification and a large coherent path length. Pulsars are also bright at low frequencies where the effects should be strongest. Previous work on ESEs on pulsars include the time delay and flux changes in the millisecond pulsar PSR B1937+21 and the fringe patterns in the dynamic spectrum of PSR B1237+25. However, there has been no systematic observational program carried out and this is needed as a matter of some urgency.
The nature of pulsars means that more information can be gleaned from ESEs than from, say, quasars. This in turn will lead to a better understanding of the structures in the interstellar medium which cause ESEs.
Jean-Pierre Macquart (USydney) continued the theme of scintillations with his presentation on scintillation and density fluctuations in the ISM. In scintillation theory it is thought that energy is deposited at very large scales (kpc or more), that it then "cascades" down to smaller scales before finally dissipating at some small scale. However, although this sounds good, the questions of what provides the energy, how exactly it cascades down and what the dissipation mechanism is are all unanswered! (see, for example, Cordes, Weisberg & Boriakoff 1985)
If supernovae are providing the energy at the large scales then perhaps one might expect to see more turbulence in the vicinity of supernova remnants. Also, one might expect the power-law index of the turbulence, $`\beta `$, to be $`\sim `$4 rather than the canonical (Kolmogorov) value of $`11/3`$. Is there any observational evidence for this? In or near the Vela supernova remnant there is some evidence for $`\beta =4`$. Two surveys of extra-galactic point sources located behind supernova remnants have been ambiguous, with no clear evidence for a higher power-law index, although one group does claim an enhancement behind the Cygnus Loop (Dennison et al. 1984). In summary, although supernova explosions are the popular choice for the energy input, there is no unambiguous evidence for this (Spangler et al. 1986).
Jianke Li (ANU) gave his talk on the topic of the spin-up mechanism for millisecond pulsars (MSPs). It is widely believed that MSPs are formed from low-mass X-ray binaries in which a neutron star accretes matter from its low mass companion. Along with the mass transfer, the neutron star "accretes" angular momentum, causing it to spin up. Typically, to end up with a 1 millisecond rotation rate requires accretion at $`10^{-10}M_{\odot }`$/yr over $`10^7`$ years.
Li argued that even a low magnetic field (say $`10^4`$ Tesla) is enough to truncate the inner edge of the accretion disk, and thus one has to have a magnetic boundary layer. This magnetic boundary may impede angular momentum accretion onto the star, so that the angular momentum accretion could be far less efficient than in the standard model. This casts doubt on whether a low-mass X-ray binary system such as J1808$`-`$369, with a binary period of only two hours, is indeed spun up by accretion.
## 6 Exotics I
Much of this session was devoted to a discussion of Gamma-Ray Bursts (GRBs) and their relationship to supernovae. Ron Ekers (ATNF) set the pace with an overview of GRB 980425 and its relationship to SN 1998bw. The 20 s outburst was detected by BATSE and localised using BeppoSAX. Information was quickly distributed on the GRB Coordinates Network (GCN). Like most well-localised bursts, follow-up radio observations were carried out using the VLA/ATCA telescopes. About 30% of GRBs have been detected in the radio, and about 50% in the optical. It is interesting that detection in the radio is always accompanied by the detection of the optical transient. The detection of a SN within the BeppoSAX error box, which has a chance probability of one in 10<sup>4</sup>, led to the suspicion that the GRB and SN were related. If so, the SN is the most luminous radio SN known, and the light curve is quite peculiar. The total energy radiated in $`\gamma `$-rays was about 10<sup>41</sup> J. Ron remarked that there had been a suggestion by Paczynski a few years ago postulating that GRBs could be the result of hypernovae, so this was one case where theory might have predicted the observation.
Ray Stathakis (AAO) presented the results of a cooperative spectral monitoring campaign of SN 1998bw, carried out on the AAT, UKST and MSSSO 2.3m, between 11 and 106 days after the Gamma-Ray Burst (Stathakis et al. 1999). The spectra showed no H, He or Si lines, thus making it a Type Ic SN. They consisted of broad emission and absorption features which slowly evolved over the period. SN 1998bw had entered the supernebular phase by day 106 with the appearance of nebular emission lines. In comparison to a typical Ic supernova, SN 1994ai, SN 1998bw was much bluer, and the features were broader and more distinct at early times. However, the transitions and spectral evolution seen appeared similar, confirming SN 1998bw as a peculiar Type Ic supernova. While the broader lines ($`\sim `$45% broader than classical supernovae at similar epochs) explain much of the peculiarities of the spectra of SN 1998bw, there is some indication that additional contributions from line species such as nitrogen, carbon or titanium may be needed to reproduce the observations.
Following this, GRB 990123 was reviewed by Brian Boyle (AAO). This GRB was first detected by BATSE, and the burst was of 90 s duration. Its brightness was in the top 0.3% of all BATSE sources. Optical observations were carried out within 22 seconds of the burst by the ROTSE telescope. Follow-up Keck spectra were featureless, apart from some absorption lines, arising perhaps from a foreground galaxy at $`z=1.6`$. The peak $`V`$ magnitude was 8.6, and the total estimated energy in $`\gamma `$-rays was about 3.4 $`\times `$ 10<sup>47</sup> J. The luminosity of the optical transient was about 3.3 $`\times `$ 10<sup>16</sup> L<sub>⊙</sub>. The host galaxy appears to be a blue star-forming galaxy, in common with many other GRB hosts. Brian also summarised briefly some of the theory of GRBs and the afterglows. The optical decay can be approximated by three different power laws, due perhaps to the reverse and forward shocks. The second break may be a signature of beaming effects.
GX 1+4, a low-mass X-ray pulsar toward the galactic centre, was observed by Duncan Galloway (UTas/SRCfTA) with the Rossi X-ray Timing Explorer (RXTE) satellite during July 1996, $`\sim `$10 days before a short-lived "torque reversal" event. Persistent pulsars such as GX 1+4 typically exhibit no correlation between luminosity (and hence mass accretion) and spin-up or spin-down rates, contrary to predictions of existing models. These sources are often found in "torque states", where the spin-up or spin-down rate is almost constant over time-scales of up to 10 years, with torque reversals occurring irregularly between states. Often the spin-up and spin-down torques are similar in magnitude. During the RXTE observation significant variations in the mean spectrum and the pulse profile were observed over time-scales of a few hours. Variations of this type have not previously been observed on such short time-scales, and it is suggested that these phenomena may be related to the (as yet unknown) mechanism causing the torque reversals (Galloway et al. 1999; Giles et al. 1999).
## 7 Exotics II
Dick Manchester (ATNF) and Don Melrose (SRCfTA) talked on the observations and theory of Anomalous X-ray Pulsars (AXPs). AXPs have periods of 6-12 seconds (cf. "normal" pulsar periods, which range from 0.025 s to several seconds), soft X-ray spectra, and relatively low X-ray luminosities of $`10^{28}`$-$`10^{29}`$ W, significantly below the Eddington limit of $`\sim 10^{31}`$ W. Their X-ray emission is relatively steady on time scales longer than the pulse period, much more so than for accretion-powered binary sources, and they exhibit no evidence that they are binary star systems.
The pulse periods of AXPs increase with time (Mereghetti, Israel & Stella 1998). If the associated loss of rotational energy is attributed solely to magnetic dipole radiation, the inferred surface field is $`B\simeq 3.2\times 10^{15}(P\dot{P})^{1/2}`$ Tesla, whence $`B\simeq 3\times 10^{10}`$ T for typical AXP parameters: $`P\sim 10`$ s and $`\dot{P}\sim 10^{-11}`$ (corresponding to about 0.3 ms/year). This is much stronger than the fields inferred for "normal" ($`B\sim 10^8`$ T) and millisecond pulsars ($`B\sim 10^5`$ T). The idea that their strong magnetic field may be the defining characteristic of AXPs has led to them being referred to as "magnetars" (Thompson & Duncan 1993).
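The dipole-field estimate is easy to reproduce numerically; a minimal sketch using the coefficient quoted above (the parameter values are fiducial, not from the talk):

```python
from math import sqrt

def dipole_field_tesla(P_s, Pdot):
    # Magnetic-dipole spin-down estimate of the surface field (Tesla):
    # B ~ 3.2e15 * sqrt(P * Pdot), with P in seconds and Pdot dimensionless.
    return 3.2e15 * sqrt(P_s * Pdot)

print(dipole_field_tesla(10.0, 1e-11))      # AXP-like: ~3.2e10 T
print(dipole_field_tesla(0.089, 1.25e-13))  # Vela-like: ~3.4e8 T
```

The same formula applied to ordinary young pulsars recovers the familiar $`\sim 10^8`$ T fields, illustrating how extreme the inferred AXP fields are.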
More specifically, a magnetar is a neutron star whose surface field exceeds the critical field strength $`B_\mathrm{c}=4.4\times 10^9`$ T, at which the photon energy corresponding to the electron cyclotron frequency $`\mathrm{\Omega }_\mathrm{e}`$ equals the electron rest energy: $`\mathrm{}\mathrm{\Omega }_\mathrm{e}=m_\mathrm{e}c^2`$. Electric fields with energy densities exceeding that of the critical field decay spontaneously via electron-positron pair creation. Magnetic fields which exceed $`B_\mathrm{c}`$ cannot decay in this way because of kinematic restrictions - the process of pair creation would violate momentum conservation.
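The critical field follows directly from the condition $`\mathrm{}(eB_\mathrm{c}/m_\mathrm{e})=m_\mathrm{e}c^2`$; as a quick check with SI constants (values rounded to four figures):

```python
# Critical (quantum) magnetic field: hbar * (e * B_c / m_e) = m_e * c^2,
# so B_c = m_e^2 c^2 / (e * hbar).  SI constants, four significant figures.
m_e  = 9.109e-31   # electron mass [kg]
c    = 2.998e8     # speed of light [m/s]
e    = 1.602e-19   # elementary charge [C]
hbar = 1.055e-34   # reduced Planck constant [J s]

B_c = m_e**2 * c**2 / (e * hbar)
print(B_c)   # ~4.4e9 T, matching the value quoted in the text
```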
The strong inferred fields of magnetars may arise in one of two ways. Usov (1992) has shown that if the strong magnetic fields associated with some white dwarf stars are frozen in when they collapse as Type Ia supernovae, then neutron star fields of $`10^7`$ T may result. Duncan & Thompson (1992) have shown that dynamo action could generate the inferred fields.
The energy loss rates, $`\dot{E}=4\pi ^2I\dot{P}/P^3`$ where $`I`$ is the moment of inertia, for normal and millisecond pulsars are much higher than the observed radiation luminosities, and these objects are thought to be rotation powered. In contrast, the spin-down luminosity of a neutron star with $`P\sim 10`$ s and $`\dot{P}\sim 10^{-11}`$ is $`\sim 4\times 10^{25}`$ W, much less than the observed X-ray luminosities of AXPs. It is therefore thought that AXP emission is not powered by rotation, but rather by the decay of their strong magnetic fields.
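The contrast can be verified with a small sketch of the rotational energy-loss rate $`\dot{E}=4\pi ^2I\dot{P}/P^3`$, using a fiducial neutron-star moment of inertia of $`10^{38}`$ kg m² (an assumed standard value, not a number from the talk):

```python
from math import pi

def spindown_luminosity_W(P_s, Pdot, I=1e38):
    # Rotational energy loss rate E_dot = 4 * pi^2 * I * Pdot / P^3,
    # with I the moment of inertia in kg m^2 (1e38 is a fiducial value).
    return 4 * pi**2 * I * Pdot / P_s**3

print(spindown_luminosity_W(10.0, 1e-11))      # AXP-like: ~4e25 W
print(spindown_luminosity_W(0.089, 1.25e-13))  # Vela-like: ~7e29 W
```

The slow AXP spin means the $`P^3`$ denominator suppresses the spin-down power by many orders of magnitude relative to a young pulsar, despite the similar $`\dot{P}`$.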
Some of the eight known AXPs are associated with supernova remnants and some with Soft Gamma-ray Repeaters (SGRs). There is some evidence that the AXPs associated with SGRs have the strongest inferred magnetic fields. The idea that a strong neutron star magnetic field suppresses radio emission has recently been placed on a firmer theoretical foundation by Baring and Harding (1998), invoking suppression of electron-positron pair formation due to increased photon splitting.
The best known SGR was the source of the 5 March 1979 event, which released an energy of about $`10^{37}`$ J and had a clear 8.1 s periodicity. It is believed to be associated with a supernova remnant, N49, in the Large Magellanic Cloud. A specific model for this object involves the release of magnetic energy through fractures of the neutron star crust (Thompson & Duncan 1995).
In a supercritical magnetic field the cross section for the scattering of radiation with frequencies well below the gyrofrequency is highly anisotropic. In particular, scattering of the extraordinary mode is strongly suppressed with respect to that of radiation in the ordinary mode. The consequences of this effect are subtle: it allows extraordinary mode emission to escape even from close to the neutron star, and it clearly affects the interpretation of the Eddington โlimitโ for accretion powered sources.
The Parkes Multibeam Pulsar Survey (Lyne et al. 1999), which has a flux sensitivity of 150 $`\mu `$Jy and is seven times more sensitive than any previous survey, may double the number of radio pulsars from the 750 known before it began. It has already discovered 362 new pulsars, including PSR J1814$`-`$17, which has a period of around 4 s and a high $`\dot{P}`$ which places it in the part of $`P`$-$`\dot{P}`$ space occupied by AXPs. The AXP 1904$`+`$09, which has $`P=5.16`$ s, $`\dot{P}=1.23\times 10^{-10}`$, and which is associated with SGR1900$`+`$14, has recently been claimed as a radio pulsar (Shitov 1999).
AXP/SGR/SNR associations, and the relationship between magnetic field strength and radio emission, may ultimately shed light on the apparent deficiency of radio pulsars that are associated with supernova remnants.
The collapse of a star and the resulting supernova explosion that produces a neutron star depends on neutrinos to revive the shock and eject the outer layers of the star (Bethe & Wilson 1985). Four neutrino flavours are necessary to explain all known neutrino anomalies, but only three ordinary neutrinos are allowed. Yvonne Wong (University of Melbourne) is investigating the possibility that the fourth flavour may arise through oscillations into "sterile" neutrinos, which do not participate in weak interactions as ordinary neutrinos do. The physics of such oscillations, in matter rather than in vacuo, has important implications for the understanding of supernova shocks (Nunokawa et al. 1997).
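For orientation only (the work described concerns matter-enhanced oscillations, which behave quite differently), the textbook two-flavour vacuum oscillation probability can be sketched as:

```python
from math import sin

def p_oscillation(sin2_2theta, dm2_eV2, L_km, E_GeV):
    # Two-flavour vacuum oscillation probability:
    # P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV]).
    # Matter (MSW) effects modify both the effective mixing and the phase.
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return sin2_2theta * sin(phase)**2

# Maximal mixing with the phase tuned near pi/2 gives full conversion.
print(p_oscillation(1.0, 1.0, 1.2368, 1.0))   # ~1.0
```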
Roberto Soria (ANU/SRCfTA) and Amy Mioduszewski (SRCfTA) discussed observations of the sources GRO J1655$`-`$40 and CI Cam which have answered some questions and raised others. Optical spectra of GRO J1655$`-`$40 display both broad lines in absorption and emission ($`>1000\mathrm{km}\mathrm{s}^{-1}`$), and emission lines which are narrower than the minimum allowed if they originate in an accretion disk. This can be explained if the system is a black hole binary, and the narrow lines originate in an extended envelope surrounding the disk. The nature of the source CI Cam remains a mystery. It has been classified as a symbiotic star and as a Herbig B object. It is a bright emission-line star which exhibited a single uncomplicated X-ray brightening on 1 April 1998, detected by RXTE and CGRO/BATSE, brightening from $`\sim `$0 to $`\sim `$2 Crab in less than 1 day and then slowly decaying. An associated optical brightening by 2 magnitudes was recorded (Frontera et al. 1998). A radio flare was detected with the VLBA on day 1 and then at intervals of a few days. The images, with a resolution of just a few AU, show a slowly-expanding synchrotron shell, with a speed of just $`\sim 200\mathrm{km}\mathrm{s}^{-1}`$, and no evidence of the jet-like collimated outflows seen in all other soft X-ray transient-related radio transients observed with sufficient resolution.
## 8 Masers associated with SNRs
An overview of the field was presented by Anne Green (USydney). The first observations revealing the likely association of 1720-MHz OH masers with SNRs were made 30 years ago, but the field then lay dormant for many years, since the detailed follow-up observations of high spatial resolution and high sensitivity were beyond the reach of the available instruments. Interest in the field was revived 5 years ago (Frail, Goss & Slysh 1994), with a three-pronged attack: first, high resolution observations of the stronger masers known from the pioneering work; second, a general single dish survey to see how widespread the phenomenon was in the light of the better catalogues of SNRs now available; third, a review of the theoretical explanation and implications. To date, about 75 per cent of the known SNRs have been searched (Frail et al. 1996; Green et al. 1997). Overall, a 10 per cent detection rate has been found, although the remnants containing masers are not uniformly distributed throughout the Galaxy; the detection rate is higher for SNRs located closer to the Galactic Centre, where there is a greatly increased density of molecular gas. Where high resolution observations have been made (with the VLA or with the ATCA), they reveal clusters of small diameter maser spots located predominantly at the periphery of the associated SNR. Zeeman splitting is often detectable, implying magnetic fields of typically $`10^{-7}`$ T or less. The masers tend to have only a small spread in radial velocity, irrespective of their location relative to the SNR boundary, and the inference has been drawn that they occur at points tangential to the shock front, and thus their velocity represents the systemic velocity of the remnant itself (but see later). If so, this provides a valuable distance indicator, and prompts the theoretical question of assessing the physical and chemical conditions needed, and whether the postulated shock provides them.
These theoretical aspects were expanded on by Mark Wardle (SRCfTA). The basic pumping scheme was suggested 20 years ago and has required only minor refinements. It satisfactorily accounts for the fact that the 1720-MHz transition is seen without any accompanying masers at other OH transitions, occurring at a density too low for their excitation. More puzzling is how the required OH abundance, densities and temperatures arise. Remarkably good progress has been achieved with a strong consensus that non-dissociative (C-type) shock waves are a key factor. More contentious is the question of whether the soft X-ray emission from the SNR is also vital to the process.
Although the framework of understanding is in place, it is largely based on the very small number of objects studied in detail. This is slowly being remedied with follow up observations of more remnants, and new results were presented by David Moffett (U Tasmania). He found it difficult in several cases to confirm at high resolution the preliminary detections made with single dishes. This may be because in these cases the emission is of a commonly found diffuse variety of very low gain maser, which lies in the direction of the SNR purely by chance. And even where the maser emission was confirmed, further puzzles arose. In the case of SNR 332.0+0.2, the velocity is large and if it represents the systemic velocity, then the implied distance is unexpectedly large. So perhaps the velocities can be significantly offset from the systemic velocity, a possibility that would reduce their value as distance indicators. Furthermore the location of the maser spots is slightly outside the shell as defined by the radio non-thermal emission; so how well does the radio shell delineate the outer shock front? These puzzles highlight the need to enlarge our sample of well studied objects, since generalisations drawn from only a few may be quite misleading.
Because the collisional excitation is believed to occur as a result of the SNR shock impinging on an adjacent or surrounding molecular cloud (Frail et al. 1996; Lockett et al. 1999), one expects to be able to explore this putative cloud by other means. For a few objects, this investigation has begun, and Jasmina Lazendic (SRCfTA) described her work to extend these investigations to more molecular species in a larger number of remnants, using mm radio observations. Studies of molecular hydrogen using IR observations can also be used in such studies, and work by Michael Burton (UNSW) with Jasmina Lazendic and others has revealed further unexpected phenomena. The Galactic object commonly known as the "snake" intersects a likely SNR shell almost at 90 degrees; at the point of intersection is a 1720-MHz maser, not in itself unexpected since the required shock conditions could well be fulfilled here. More surprising is the discovery of a molecular hydrogen outflow jet, apparently emanating from the intersection point. The difficulty of accounting for this perhaps indicates that this is a chance alignment without significance, and more study is clearly required.
Overall, the session was a lively reminder that this field is now making rapid and exciting progress after a 20 year dormant period while we waited for the appropriate investigative tools to be developed.
## Acknowledgements
The organisers would like to thank all the attendees for their enthusiastic participation in making the workshop such a great success. A special thanks goes to Samantha Mackinlay who was responsible for much of the behind-the-scenes work, to Noella D'Cruz for designing the meeting web page, and to Noella, Amy Mioduszewski and Michael Rupen for their help in the organisation. We also take this opportunity to thank the staff of the University Staff Club, and especially Mark Flusk, for helping things run as smoothly as they did.
## References
Ball, L., & Kirk, J. G. 1999, Astroparticle Phys., in press (astro-ph/9908201)
Baring, M. G., & Harding, A. K. 1998, ApJ, 507, L55
Bartel, N. et al. 1994, Nature, 368, 610
Bethe, H. A., & Wilson, J. R. 1985, ApJ, 295, 14
Chevalier, R. A., & Dwarkadas, V. V. 1995, ApJ, 452, L45
Cordes, J. M., Weisberg, J. M., & Boriakoff, V. 1985, ApJ, 288, 221
Dennison, B., et al. 1984, A&A, 135, 199
Duncan, R. C., & Thompson, C. 1992, ApJ, 392, L9
Dwarkadas, V. V. 2000, in preparation
Filippenko, A .V. 1997, ARA&A, 35, 309
Frontera, F., Orlandini, M., Amati, L., Dal Fiume, D., Masetti, N., Orr, A., Parmar, A. N., Brocato, E., Raimondo, G., Piersimoni, A., Tavani, M., & Remillard, R. A. 1998, A&A, 339, L69
Frail, D. A., Goss, W. M., & Slysh, V. I. 1994, ApJ, 424, L111
Frail, D. A., Goss, W. M., Reynoso, E. M., Giacani, E. B., Green, A. J., & Otrupcek, R. 1996, AJ, 111, 1651
Franco, J., Tenorio-Tagle, G., Bodenheimer, P., & Rozyczka, M. 1991, PASP, 103, 803
Gaensler, B. M., Manchester, R. N., Staveley-Smith, L., Tzioumis, A. K., Reynolds, J. E., & Kesteven, M. J. 1997, ApJ, 479, 845
Galloway, D. K., Giles, A. B., Greenhill, J. G., & Storey, M. C. 1999, MNRAS, in press
Giles, A. B., Galloway, D. K., Greenhill, J. G., Storey, M. C., & Wilson, C. A. 1999, ApJ, in press
Green, A. J., Frail, D. A., Goss, W. M. & Otrupcek, R. 1997, AJ, 114, 2058
Iyudin, A. F., Schönfelder, V., Bennett, K., et al. 1998, Nature, 396, 142
Johnston, H. M., Fender, R. P., & Wu, K. 1999, MNRAS, 308, 415
Kirk, J. G., Ball, L., & Skjæraasen, O. 1999, Astroparticle Phys., 10, 31
Lockett, P., Gauthier, E. & Elitzur, M. 1999, ApJ, 511,235
Lyne, A. G., Camilo, F., Manchester, R. N., Bell, J. F., Kaspi, V. M., D'Amico, N., McKay, N. P. F., Crawford, F., Morris, D. J., Sheppard, D. C., & Stairs, I. H. 1999, MNRAS, in press
Mereghetti, S., Israel, G. L., & Stella, L. 1998, MNRAS, 296, 689
Mioduszewski, A.J., Hughes, P.A. & Duncan, G.C. 1997, ApJ, 476, 649
Nunokawa, H., Peltoniemi, J. T., Rossi, A., & Valle, J. W. F. 1997, Phys. Rev D, 56, 1704
Oliva, E., Moorwood, A.F.M., Danziger, I.J. 1989, A&A 214, 307
Oliva, E., Moorwood, A.F.M., Danziger, I.J. 1990, A&A 240, 453
Oliva, E., Moorwood, A.F.M., Drapatz, S., Lutz, D., Sturm, E. 1999, A&A 343, 943
Ong, R. A. 1998, Phys. Rep., 305, 93
Rupen, M.P. et al. 1998, in Radio Emission from Galactic and Extragalactic Compact Sources, ASP Conference Series v. 144, IAU Colloquium 164, eds. J.A. Zensus, G.B. Taylor, & J.M. Wrobel, p. 355
Shitov, Y. P. 1999, IAU Circular 7110.
Spangler, S.R., et al. 1986, ApJ, 301, 312
Stathakis, R., et al. 1999, MNRAS, in preparation.
Tauris, T. M., Fender, R. P., van den Heuvel, E. P. J., Johnston, H. M., & Wu, K. 1999, MNRAS, in press
Thompson, C., & Duncan, R. C. 1993, ApJ, 408, 194
Thompson, C., & Duncan, R. C. 1995, MNRAS, 275, 255
Usov, V. V. 1992, Nature, 357, 472
Walker, M., & Wardle, M. 1998, ApJ, 498, L125
Walker, M., & Wardle, M. 1999, PASA, 16, 262 |
## 1. INTRODUCTION
Low surface brightness (LSB) galaxies have been extensively discussed within the context of the overall formation and evolution of galaxies as well as observational cosmology, and in particular for their possible contribution as a major fraction of the total galaxy population (see Impey & Bothun 1997 for a review). During deep imaging observations of the compact group of galaxies known as Seyfert's Sextet (Nishiura et al. 1999), we found an LSB galaxy candidate near the group. In the present paper, we report on the photometric properties of this LSB candidate. We adopt a Hubble constant of 100 $`h`$ km s<sup>-1</sup> Mpc<sup>-1</sup> throughout this paper.
## 2. OBSERVATIONS AND DATA REDUCTION
The observations were carried out at the University of Hawaii 2.2 m telescope using the 8192$`\times `$8192 (8k) CCD Mosaic camera (Luppino et al. 1996). The camera was attached at the f/10 Cassegrain focus and provided an $`18^{\prime }\times 18^{\prime }`$ field of view. The CCDs were read out in the $`2\times 2`$ pixel binning mode, which gave an image scale of 0$`\stackrel{\prime \prime }{\mathrm{.}}`$26 pixel<sup>-1</sup>. We obtained broad band images with the $`VR`$ and $`I`$ filters on 1999 May 20 and May 23 (UT), respectively. The integration time for each exposure was set to 8 minutes. Twenty-three exposures for the $`VR`$-band and 24 exposures for the $`I`$-band were taken; thus the total integration time amounted to 184 minutes in $`VR`$ and 192 minutes in $`I`$.
Data processing was done in a standard way using IRAF (the Image Reduction and Analysis Facility, which is distributed by the National Optical Astronomy Observatories, operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation). After bias and dark counts were subtracted, each frame was divided by the flatfield image, which was a median image of all of the object frames obtained during a night. The object frames were median-combined with their positions registered. Typical seeing, as estimated from the processed images, was $`\sim `$0$`\stackrel{\prime \prime }{\mathrm{.}}`$8 in both bands. Standard stars from Landolt (1992) were observed and used for calibration of absolute fluxes. Since the $`VR`$ filter is not a standard photometric band (see Jewitt, Luu, & Chen 1996), we adopted an AB magnitude scale for this bandpass. The absolute photometric errors were estimated to be $`\pm 0.05`$ mag for the $`VR`$-band and $`\pm 0.03`$ mag for the $`I`$-band. The limiting surface brightnesses are $`\mu _{VR}^{\mathrm{lim}}=28.7`$ mag arcsec<sup>-2</sup> and $`\mu _I^{\mathrm{lim}}=28.1`$ mag arcsec<sup>-2</sup>, corresponding to a 1$`\sigma `$ variation in the background.
## 3. RESULTS
In Figure 4 we show the $`VR`$\- and $`I`$-band images of Seyfert’s Sextet. A faint, extended object is located in both bands at 2$`\stackrel{\prime }{\mathrm{.}}`$3 southwest from the group center of Seyfert’s Sextet. Our estimate of the centroid position of this faint object is $`\alpha `$(B1950)=15<sup>h</sup> 56<sup>m</sup> 51$`\stackrel{\mathrm{s}}{\mathrm{.}}`$6, $`\delta `$(B1950)=+20° 52′ 42″. As shown in the lower panels of Figure 4, the shape of the object appears to be nearly spherical in both bands and very diffuse compared with the foreground/background galaxies around it. There is no evidence that the object moved during the $`VR`$ and $`I`$ observations (the $`VR`$ image was taken three days prior to the $`I`$-band image), thus the object is not likely to be within our solar system.
Using a wavelet package in ESO-MIDAS (the European Southern Observatory Munich Image Data Analysis System, developed and maintained by the European Southern Observatory), we applied a Wiener-like wavelet filter to the images in order to improve the signal-to-noise. The results are shown in Figure 4. In the central region of the processed $`VR`$ image there seem to be two intensity peaks lying along a northwest-southeast direction with a separation of $`2\stackrel{\prime \prime }{\mathrm{.}}5`$. On the other hand, the $`I`$-band image shows a single peak which is located between the two $`VR`$ peaks. This could be interpreted as being due to an inhomogeneous distribution of dust, or perhaps a strong emission line from ionized gas that appears in either band. Alternatively, the intensity peaks near the center may be background galaxies. At 19″ west of the center of the faint object there is another diffuse condensation which could perhaps be interpreted as a tidal structure. This companion structure appears to be present in the original images, but given that the strength of this feature is comparable to the noise uncertainty in the original images, we will not discuss it further in this paper.
Next we discuss the photometric properties of this LSB object. Since the object appears to be nearly spherical, we will adopt a circular aperture for computing the radial light distribution. Apparent foreground/background objects were first masked, and the surface brightness on the original images was then determined using 0$`\stackrel{\prime \prime }{\mathrm{.}}`$8 radial bins, where the binwidth was set to be approximately equal to the seeing. The center of the aperture was fixed at the position of the intensity peak in the noise reduced $`I`$-band image. Figure 4 shows the surface brightness profiles in both bands. These profiles cannot be simply fit with an exponential disk, as can be seen by the fact that an exponential profile would appear as a straight line on the $`\mu `$-$`r`$ plot (the upper panel of Figure 4). Although the observed radial profiles could be approximated by a straight line fit at radii $`4^{\prime \prime }\lesssim r\lesssim 10^{\prime \prime }`$, it is clear that the profiles flatten at $`r\lesssim 4^{\prime \prime }`$. Furthermore, an $`r^{1/4}`$-law profile, which is more centrally concentrated than an exponential profile, is also a poor approximation to the observed data points. We also note that since the flattened region of the profiles is large compared to the seeing size of $`0\stackrel{\prime \prime }{\mathrm{.}}8`$, the flat profiles at $`r\lesssim 4^{\prime \prime }`$ are a genuine property of this LSB object.
In order to more accurately approximate the observed radial surface brightness distribution, we decided to adopt an $`r^{1/n}`$-law fit with $`n<1`$:
$$\mu (r)=\mu _0+2.5(\mathrm{log}_{10}e)\left(\frac{r}{s}\right)^{1/n},$$
where $`\mu (r)`$ is the surface brightness at a radius of $`r`$ from the center, $`\mu _0`$ is the central surface brightness, and $`s`$ is the angular scale length. This profile is less concentrated than an exponential profile, i.e., it is shallower in the inner area and steeper in the outer area than an exponential profile. The solid curves shown in Figure 4 are the best fit for each band. We used only the points at $`1^{\prime \prime }<r<10^{\prime \prime }`$ (shown by the filled circles in Figure 4) to avoid the seeing effect at the center and the sky-noise limited area at large radii. Table 1 lists the fit parameters ($`n`$, $`\mu _0`$, and $`s`$) as well as other photometric properties which were subsequently derived. Our analysis shows that the LSB galaxy has an $`r^{1/n}`$ surface brightness profile with $`n\simeq 0.6`$.
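The fitting function above is straightforward to evaluate numerically. The sketch below uses placeholder parameter values, not the fitted values from Table 1 (which is not reproduced here); one generic property it illustrates is that at $`r=s`$ the profile is exactly $`2.5\mathrm{log}_{10}e1.086`$ mag fainter than $`\mu _0`$, for any $`n`$.

```python
import math

def sersic_mu(r, mu0, s, n):
    """Surface brightness mu(r) = mu0 + 2.5*log10(e)*(r/s)**(1/n),
    in mag arcsec^-2; r and s must be in the same angular units."""
    return mu0 + 2.5 * math.log10(math.e) * (r / s) ** (1.0 / n)

# Placeholder parameters (NOT the paper's fitted values):
mu0, s, n = 25.0, 3.0, 0.6

# At r = s the profile is 2.5*log10(e) ~ 1.086 mag fainter than mu0,
# independent of n; beyond that it steepens faster for smaller n.
print(sersic_mu(s, mu0, s, n))
```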
## 4. DISCUSSION
We first discuss the observed properties of our candidate LSB galaxy in terms of the known properties of LSB galaxies. The color of our LSB candidate is $`VR-I\simeq 0.81`$. Since the $`VR`$-band is inconvenient for comparison with standard photometry of galaxies, we first estimated that $`V-I\simeq 0.94`$ and $`R-I\simeq 0.49`$ by interpolating the observed fluxes in the $`VR`$-band and $`I`$-band. These colors are not peculiar for LSB dwarfs or LSB disk galaxies (e.g., Impey & Bothun 1997). One of the characteristics of our LSB object is the $`r^{1/n}`$ surface brightness profile with $`n\simeq 0.6`$; however, the majority of LSB galaxies exhibit an exponential profile, i.e., $`n=1`$ (O’Neil, Bothun, & Cornell 1997). On the other hand, many of the fainter dwarf ellipticals in the Fornax cluster have less concentrated profiles than the exponential (Caldwell & Bothun 1987). Davies et al. (1988) also showed that a significant number of LSB dwarf galaxies in the Fornax cluster show $`r^{1/n}`$-law profiles with $`n<1`$. Furthermore, O’Neil et al. (1997) showed that 17% of the LSB galaxies in their sample have less concentrated surface brightness profiles than an exponential disk, although they did not fit the profiles with the $`r^{1/n}`$ law but instead used a King model profile.
Caon, Capaccioli, & D’Onofrio (1993) found that the value of $`n`$ is well correlated with the effective radius ($`R_\mathrm{e}`$) for spheroidal galaxies ranging from the LSB dwarfs in the sample of Davies et al. (1988) to the giant ellipticals in the Virgo cluster. In this context, the observed exponent of $`n\simeq 0.6`$ for our LSB candidate implies that this object is indeed a good candidate for a LSB dwarf galaxy. As shown in Figure 5 of Caon et al. (1993), the LSB dwarfs with $`n<1`$ have $`R_\mathrm{e}`$ in a range between $`0.13`$ kpc and $`1.3`$ kpc. Given the angular effective radius of our LSB galaxy candidate of $`r_\mathrm{e}=4\stackrel{\prime \prime }{\mathrm{.}}8`$, this would imply a distance somewhere in the range of 5.4 Mpc to 54 Mpc, and a corresponding absolute $`I`$-band magnitude of between $`-10`$ mag and $`-15`$ mag, which is comparable to those of dwarf galaxies in the Local Group (Mateo 1998). This would imply that our LSB galaxy candidate would be at the faint end of the luminosity function of LSB galaxies (e.g. Impey & Bothun 1997).
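The distance range quoted above follows from the small-angle relation $`D=R_\mathrm{e}/\theta _\mathrm{e}`$. A rough check (a sketch assuming exactly $`R_\mathrm{e}=0.13`$–$`1.3`$ kpc and $`\theta _\mathrm{e}=4\stackrel{\prime \prime }{\mathrm{.}}8`$) roughly reproduces the quoted 5.4–54 Mpc:

```python
# Small-angle distance: D [kpc] = R_e [kpc] * 206265 / theta [arcsec].
ARCSEC_PER_RAD = 206265.0
theta_e = 4.8  # angular effective radius in arcsec (from the text)

d_near = 0.13 * ARCSEC_PER_RAD / theta_e  # kpc, if R_e = 0.13 kpc
d_far = 1.3 * ARCSEC_PER_RAD / theta_e    # kpc, if R_e = 1.3 kpc

# Roughly 5.6 and 56 Mpc, i.e. the quoted ~5.4-54 Mpc range.
print(d_near / 1e3, d_far / 1e3)
```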
If the LSB galaxy is located at the same distance as Seyfert’s Sextet (44 $`h^{-1}`$ Mpc), one might conclude that the LSB galaxy may have been formed through possible tidal interactions between the group galaxies. The projected separation of the LSB galaxy from the group center is $`\sim 30`$ $`h^{-1}`$ kpc. The LSB galaxy could travel this distance in $`\sim 2\times 10^8`$ years assuming a projected velocity equal to the radial velocity dispersion (138 km s<sup>-1</sup>) of the group.
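The travel-time estimate is simple kinematics (projected separation divided by the velocity dispersion); a sketch, ignoring the factor $`h^{-1}`$:

```python
# Crossing-time estimate: 30 kpc traversed at 138 km/s.
KM_PER_KPC = 3.086e16   # kilometres in one kiloparsec
SEC_PER_YR = 3.156e7    # seconds in one year

distance_km = 30.0 * KM_PER_KPC
t_years = distance_km / 138.0 / SEC_PER_YR

print(t_years)  # ~2.1e8 yr, consistent with the ~2e8 yr quoted
```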
At smaller distances than that of Seyfert’s Sextet, only one galaxy is known within 1° of the LSB candidate. It is also a LSB galaxy, F583-1 (= D584-04: Schombert & Bothun 1988; Schombert et al. 1992; Schombert, Pildis, & Eder 1997). The distance toward F583-1 corresponding to its redshift is 25 $`h^{-1}`$ Mpc. If our LSB galaxy is located at the same distance as F583-1, the apparent separation of 23′ between F583-1 and the LSB galaxy corresponds to 170 $`h^{-1}`$ kpc.
Another possibility is that the LSB object is located at a much smaller distance. This would imply that the LSB object may be a system more like Galactic globular clusters. In fact, a King model profile with a concentration parameter of $`c\simeq 0.7`$ and a core radius of $`4\stackrel{\prime \prime }{\mathrm{.}}0`$ is also a good fit to the present LSB object (note: we have not shown this fit since it is very similar to that shown in Fig. 3). O’Neil et al. (1997) noted the possibility that some LSB objects found in their survey that are well fitted with the King profile may be Galactic LSB globular clusters. Although the concentration parameter of $`c\simeq 0.7`$ of our LSB object is smaller than those of typical Galactic globular clusters (e.g., Chernoff & Djorgovski 1989), globular clusters in the outer halo of the Galaxy ($`\sim `$30–100 kpc from the Galactic center) have concentration parameters as small as that of our LSB object (see, for example, Djorgovski & Meylan 1994). The central surface brightness of the globular clusters in the Galactic halo can be as faint as $`\sim 24`$ mag arcsec<sup>-2</sup> in $`V`$, which is comparable to those of LSB galaxies. The clusters in the Galactic halo have larger core radii, $`\sim 20`$ pc, than globular clusters at smaller galactocentric radii because of the smaller tidal forces at larger distances from the Galactic center. If the present LSB object is such a distant globular cluster, it should have a core radius of $`\sim 20`$ pc. Thus, the apparent core radius of $`r_\mathrm{c}\simeq 4\stackrel{\prime \prime }{\mathrm{.}}0`$ leads to a distance toward the LSB object of $`\sim 1`$ Mpc. However, if this were the case, stars in the LSB object would be resolved spatially in our images. Therefore, this possibility can be rejected.
In summary, we have discovered a LSB object near the compact group of galaxies known as Seyfert’s Sextet. The LSB object is likely to be one of the following: 1) a field LSB dwarf galaxy at a distance of 5.4–54 Mpc, or 2) a LSB dwarf galaxy at the same distance as Seyfert’s Sextet (44 $`h^{-1}`$ Mpc). Measurement of the redshift, either by optical spectroscopy or by radio observations of H I gas, will be necessary to determine which of these descriptions applies.
The authors are very grateful to the staff of the UH 2.2 m telescope. In particular, we would like to thank Andrew Pickles for his technical support and assistance during the observations. We also thank Richard Wainscoat and Shinki Oyabu for their kind help on photometric calibration, Tadashi Okazaki for kindly providing us with his program for calculating King model profiles, and Daisuke Kawata for helpful comments. This work was financially supported in part by Grants-in-Aid for Scientific Research (Nos. 07055044, 10044052, and 10304013) from the Japanese Ministry of Education, Science, Sports, and Culture, and by the Foundation for Promotion of Astronomy, Japan. TM is thankful for support from a Research Fellowship from the Japan Society for the Promotion of Science for Young Scientists. This research has made use of the NASA/IPAC Extragalactic Database (NED) and the NASA Astrophysics Data System Abstract Service.
# Two dimensional Einstein-Weyl structures
## 1. Introduction
Einstein-Weyl geometry has received much attention in recent years, particularly in three dimensions, where Einstein-Weyl structures arise as symmetry reductions of the self-duality equations for four dimensional conformal structures. An Einstein-Weyl structure on an $`n`$-manifold $`M`$, with $`n\ge 3`$, consists of a conformal structure together with a compatible (i.e., conformal) torsion-free connection $`D`$ such that the symmetric trace-free part of the Ricci tensor of $`D`$ vanishes. When $`D`$ is the Levi-Civita connection of a compatible Riemannian metric then this metric is Einstein. As with Einstein metrics, the two dimensional story is somewhat exceptional. A conformal surface with compatible torsion-free connection $`D`$ is said to be Einstein-Weyl iff
$$D\mathrm{scal}^D-2\mathrm{div}^DF^D=0,$$
where $`\mathrm{div}^D=\mathrm{tr}D`$ is the divergence on $`2`$-forms, $`F^D`$ is the Faraday 2-form of $`D`$, which is the curvature of $`D`$ on a natural real line bundle $`L^1`$, and $`\mathrm{scal}^D`$ is the scalar curvature of $`D`$ viewed as a section of $`L^{-2}:=(L^1)^{*}\otimes (L^1)^{*}`$. If $`F^D=0`$, then $`D`$ is locally the Levi-Civita connection of a metric of constant scalar curvature.
The idea of studying the two dimensional case was first suggested in , in which Pedersen and Tod proposed the goal of classifying the compact examples. This classification was carried out in . Pedersen and Tod also claimed that the local solutions should depend on a single holomorphic function of one variable. The main aim of this paper is to show that this is true for the definition above and to obtain all the solutions explicitly in terms of this holomorphic function.
###### Theorem 1.1.
Let $`D`$ be an Einstein-Weyl structure in two dimensions. Then there is a local complex coordinate $`\zeta =x+iy`$ and a holomorphic function $`h`$ such that $`D=D^g+\omega `$, where $`g=dx^2+dy^2`$ is the flat metric and
$$\omega =\frac{1}{\overline{h}-\zeta }d\zeta +\frac{1}{h-\overline{\zeta }}d\overline{\zeta }.$$
The notation used in this paper follows . In particular $`L^w`$ is the real line bundle associated to the representation $`A\mapsto |\mathrm{det}A|^{w/2}`$ of $`GL_2(\mathbb{R})`$, so that $`L^{-2}`$ may be identified with $`\mathrm{\Lambda }^2T^{}M`$ once an orientation is chosen. A conformal structure on $`M`$ may be viewed as a metric on $`TM`$ with values in $`L^2`$. A Weyl derivative is a covariant derivative $`D`$ on $`L^1`$. Each choice of compatible metric $`g`$ trivialises $`L^1`$. If the corresponding trivial Weyl derivative is denoted $`D^g`$, then $`D=D^g+\omega `$ for some connection $`1`$-form $`\omega `$. It is well known that Weyl derivatives on a conformal manifold correspond bijectively to compatible torsion-free connections. For instance, $`D^g`$ corresponds to the Levi-Civita connection of $`g`$.
I prove Theorem 1.1 in section 2. In the following sections I discuss the extent to which the solutions are genuinely distinct Einstein-Weyl structures, explain the geometry behind the solutions and show how the compact examples arise when $`h`$ is a (possibly degenerate) Möbius transformation. I end the paper with a brief discussion of the “twistor theory” of Möbius structures.
## 2. Local solution of the two dimensional Einstein-Weyl equations
The two dimensional Einstein-Weyl condition is, a priori, nonlinear, but may in fact be linearised. In order to do this I shall make use of the relationship between Weyl structures and Möbius structures.
###### Definition 2.1.
A Möbius structure on a conformal manifold $`M`$ is a (smooth) second order linear differential operator $``$ from $`L^1`$ to $`S_0^2T^{}M\otimes L^1`$ such that for some Weyl derivative $`D`$, the difference between $``$ and $`\mathrm{sym}_0D^2`$ is zero order.
A Möbius structure is a possibly non-integrable and unoriented version of a complex projective structure. More precisely, a Möbius structure $``$ possesses a tensorial invariant $`C^{}\in \mathrm{C}^{\mathrm{\infty }}(M,L^{-2}\otimes T^{}M)`$ called the Cotton-York tensor of $``$, by analogy with the three dimensional case. The Möbius structure is integrable (i.e., given locally by the trace-free Hessian in a suitable chart) iff $`C^{}=0`$ (see ). In this case, if $`M`$ is oriented and $`\varphi `$ is a local orientation preserving conformal diffeomorphism then $`\varphi ^{}`$ can be identified with the Schwarzian derivative of $`\varphi `$, and so the Möbius structure defines a complex projective structure.
In general the Cotton-York tensor of $``$ may be computed using an arbitrary Weyl derivative $`D`$. The result is:
$$C^{}=\mathrm{div}^D\left(r_0^D-\frac{1}{4}\mathrm{scal}^D\mathrm{id}+\frac{1}{2}F^D\right),$$
where $`r_0^D=\mathrm{sym}_0D^2`$. From this, the following result is immediate.
###### Proposition 2.2.
A Weyl structure $`D`$ in two dimensions is Einstein-Weyl if and only if the trace-free Hessian $`\mathrm{sym}_0D^2`$ is locally the trace-free Hessian in some conformal chart.
Consequently, if $`D`$ is Einstein-Weyl, there is locally a flat metric $`g`$ such that $`\mathrm{sym}_0D^2=\mathrm{sym}_0(D^g)^2`$. If $`D=D^g+\omega `$, then $`\mathrm{sym}_0D^g\omega -\omega \otimes _0\omega =0`$. Solving this will give all local solutions of the Einstein-Weyl equation.
Although this equation is still nonlinear, its resemblance to the Riccati equation suggests a way of linearising it. To do this, let $`\zeta `$ be a local complex coordinate such that $`g=d\zeta d\overline{\zeta }`$ and write $`\omega =fd\zeta +\overline{f}d\overline{\zeta }`$ for some complex-valued function $`f`$. Then the equation for $`\omega `$ becomes $`f^{}=f^2`$, where $`f^{}`$ denotes the complex linear part of $`df`$. This is the Riccati equation if $`f`$ is holomorphic. Substituting $`f=-u^{-1}u^{}`$ (which is always possible locally) gives $`u^{\prime \prime }=0`$ and so $`u^{}=\overline{h}_0`$ for some holomorphic function $`h_0`$. Hence $`u=\overline{h}_0(\zeta -\overline{h})`$, where $`h`$ is also holomorphic, and so $`f=-u^{-1}u^{}=1/(\overline{h}-\zeta )`$.
This proves Theorem 1.1.
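The linearisation can be checked symbolically by the usual device of treating $`\zeta `$ and $`\overline{\zeta }`$ as independent variables, with $`\overline{h}`$ an arbitrary function of $`\overline{\zeta }`$ alone; this is only a verification sketch, not part of the proof.

```python
import sympy as sp

z, zb = sp.symbols('zeta zetabar')
hb = sp.Function('hbar')  # conjugate of holomorphic h: depends on zetabar only

f = 1 / (hb(zb) - z)

# f' is the complex-linear (zeta-)derivative; the reduced
# Einstein-Weyl equation is the Riccati-type equation f' = f^2.
assert sp.simplify(sp.diff(f, z) - f**2) == 0

# The substitution f = -u'/u with u = hbar0*(zeta - hbar) recovers this f.
hb0 = sp.symbols('hbar0')
u = hb0 * (z - hb(zb))
assert sp.simplify(-sp.diff(u, z) / u - f) == 0
```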
## 3. Gauge transformations
In order to show that the Einstein-Weyl solutions of Theorem 1.1 depend in an essential way on a single holomorphic function, it is necessary to ask to what extent the solutions are equivalent under a change of complex coordinate $`\zeta `$.
An initial observation is that the scalar curvature and Faraday curvature of $`D`$ are given by the real and imaginary parts of $`h^{}/(h-\overline{\zeta })^2`$. In particular, $`D`$ is flat if and only if $`h`$ is constant, and so most of the solutions are non-trivial.
More generally, note that the complex coordinate $`\zeta `$ has been partially fixed by requiring that the trace-free Hessian induced by this coordinate chart is the Mรถbius structure determined by $`D`$. Hence, the only remaining freedom in $`\zeta `$ is the freedom to apply Mรถbius transformations.
If $`\zeta =\varphi (z)=(az+b)/(cz+d)`$ with $`ad-bc\ne 0`$, then
$$d\zeta d\overline{\zeta }=\left|\frac{ad-bc}{(cz+d)^2}\right|^2dzd\overline{z}.$$
After rescaling the metric, the Einstein-Weyl structure is given by the new holomorphic function $`\stackrel{~}{h}=\overline{\varphi }^{-1}h\varphi `$, where $`\overline{\varphi }(z)=(\overline{a}z+\overline{b})/(\overline{c}z+\overline{d})`$. Thus the Einstein-Weyl structure determines $`\overline{h}`$ up to conjugation by a Möbius transformation.
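The conformal factor here is just $`|\varphi ^{}(z)|^2`$, and the derivative of a Möbius map can be confirmed symbolically:

```python
import sympy as sp

a, b, c, d, z = sp.symbols('a b c d z')
phi = (a*z + b) / (c*z + d)

# phi'(z) = (ad - bc)/(cz + d)^2, so dzeta dzetabar = |phi'(z)|^2 dz dzbar.
assert sp.simplify(sp.diff(phi, z) - (a*d - b*c)/(c*z + d)**2) == 0
```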
## 4. Geometry of Weyl connections
The transformation law for $`\overline{h}`$ may be traced back to the fact that it defines a Weyl derivative $`D`$. If $`J^1L^1`$ denotes the bundle of 1-jets of $`L^1`$, then $`D`$ is a section of the affine subbundle $`A(M)`$ of $`L^{-1}\otimes J^1L^1`$ given by the splittings of the 1-jet projection $`J^1L^1\to L^1`$. This affine bundle is modelled on $`T^{}M`$.
A Möbius structure on $`M`$, as a second order linear differential operator on $`L^1`$, defines a vector subbundle $`E(M)`$ of the $`2`$-jet bundle $`J^2L^1`$. Since this operator is given in coordinates by the trace-free Hessian plus a zero order term, the $`1`$-jet projection $`E(M)\to J^1L^1`$ is surjective, and its kernel, which is the intersection of $`E(M)`$ with $`S^2T^{}M\otimes L^1`$, is the line bundle $`L^{-1}`$, embedded as the trace-like tensors.
If $`\mu `$ is a nonvanishing section of $`L^1`$ and $`g`$ is the compatible metric corresponding to this trivialisation, then $`\mathrm{scal}^g\mu ^2`$ is a function whose value at $`x`$ depends quadratically on $`(j^2\mu )_x`$. This turns out to define a natural metric of signature $`(3,1)`$ on $`E(M)`$ such that the distinguished line $`L^{-1}`$ is null and is the only null line in the kernel of the projection from $`E(M)`$ to $`L^1`$ (see , and also for more details in the analogous higher dimensional case). Consequently, there is a natural sphere bundle $`S^2(M)`$ over $`M`$, namely the space of null lines in $`E(M)`$, and this sphere bundle has a distinguished section. The complement of this section is an affine bundle and this affine bundle is canonically isomorphic to $`A(M)`$ by projecting each null line into $`J^1L^1`$. Therefore a Weyl connection is a section of $`S^2(M)`$ which does not meet the distinguished section.
Now suppose that the Möbius structure is integrable. Then $`E(M)`$ also possesses a canonical flat connection compatible with the Lorentzian structure. This flat connection identifies $`S^2(M)`$ locally with $`M\times S^2`$, and the distinguished section gives the developing map from (open subsets of) $`M`$ to $`S^2`$. A complex coordinate $`\zeta `$ on $`M`$ compatible with the Möbius structure identifies this sphere of parallel sections with $`\mathbb{C}\cup \{\mathrm{\infty }\}`$, so that $`\zeta `$ itself corresponds to the distinguished section of $`S^2(M)`$. The function $`\overline{h}`$ arising in Theorem 1.1 is therefore the local coordinate representation of an antiholomorphic section of $`S^2(M)`$. The expression $`1/(\overline{h}-\zeta )`$ may be viewed as stereographic projection from $`S^2(M)`$ onto $`A(M)`$. It is well defined for $`\overline{h(\zeta )}\ne \zeta `$ and sends poles of $`h`$ to the origin of $`A(M)`$ determined by the Levi-Civita connection of $`g`$.
In fact, if the Weyl connection $`D`$ is viewed as a section of $`A(M)`$, its covariant derivative (as a section of $`T^{}M\otimes V(A(M))=T^{}M\otimes T^{}M`$) can be identified with $`r_0^D+\frac{1}{4}\mathrm{scal}^D\mathrm{id}-\frac{1}{2}F^D`$, where $`r_0^D=\mathrm{sym}_0D^2`$ (cf. ). Hence $`D`$ is holomorphic iff it is flat, and antiholomorphic (with respect to $``$) iff $`=\mathrm{sym}_0D^2`$. The apparent nonlinearity of the Einstein-Weyl condition arises from the fact that the flat connection on $`A(M)`$ is not affine. Nevertheless, it identifies $`A(M)`$ locally with an open subset of $`M\times S^2`$, and so the condition for a section to be antiholomorphic is in fact linear.
## 5. The compact examples
In , the local forms of the Einstein-Weyl structures on compact surfaces were found. In this section I will show that these solutions are obtained when $`h`$ is a (possibly degenerate) Möbius transformation.
The solutions are given explicitly in terms of a compatible metric and connection $`1`$-form as follows:
$`g`$ $`=P(v)^{-1}dv^2+v^2dt^2`$
$`\omega `$ $`=Av^2dt,`$
where $`P(v)`$ $`=-A^2v^4+Bv^2+C,`$
and $`A,B,C`$ are arbitrary constants, constrained only by the condition that $`P(v)`$ should be somewhere positive. In , I showed that these Einstein-Weyl structures are defined on $`S^2`$ (for $`C>0`$) or $`T^2`$ (for $`C<0`$) by writing $`v`$ as an elliptic function of $`x`$ so that $`v^{}(x)^2=P(v)`$. If instead one substitutes $`v^2=1/u`$ and rescales $`g`$ and $`t`$ by $`2`$, then the Einstein-Weyl structure becomes
$`g`$ $`={\displaystyle \frac{1}{u}}\left({\displaystyle \frac{du^2}{-A^2+Bu+Cu^2}}+dt^2\right)`$
$`\omega `$ $`={\displaystyle \frac{Adt}{2u}}.`$
Now for $`C>0`$ introduce a new coordinate $`r`$ by $`u^{}(r)^2=4(-A^2+Bu+Cu^2)/(Cr^2)`$. This is readily integrated to give
$$u(r)=\frac{(B^2+4A^2C)-2Br^2+r^4}{4Cr^2}.$$
Rescaling so that the metric is $`dr^2+r^2dt^2`$ leads to the solution of Theorem 1.1 given by $`h(\zeta )=(B-2iA\sqrt{C})/\zeta `$. Notice that $`\overline{h(\zeta )}=\zeta `$ iff $`\zeta \overline{\zeta }=B+2iA\sqrt{C}`$. Hence if $`A\sqrt{C}\ne 0`$, the solution is globally defined on $`S^2`$.
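The integration step can be verified directly, e.g. symbolically (a sketch, treating the constants as positive for simplicity):

```python
import sympy as sp

r, A, B, C = sp.symbols('r A B C', positive=True)

# Candidate solution of the coordinate-change ODE for the C > 0 case.
u = ((B**2 + 4*A**2*C) - 2*B*r**2 + r**4) / (4*C*r**2)

lhs = sp.diff(u, r)**2
rhs = 4*(-A**2 + B*u + C*u**2) / (C*r**2)

# The difference vanishes identically in r, A, B, C.
assert sp.simplify(lhs - rhs) == 0
```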
For $`C<0`$ introduce instead a coordinate $`\theta `$ by $`u^{}(\theta )^2=(-A^2+Bu+Cu^2)/(-C)`$. This integrates to give
$$u(\theta )=-\frac{B+\sqrt{B^2+4A^2C}\mathrm{sin}\theta }{2C}$$
and the Einstein-Weyl structure becomes
$`g`$ $`={\displaystyle \frac{1}{B+\sqrt{B^2+4A^2C}\mathrm{sin}\theta }}\left(dt^2+d\theta ^2\right)`$
$`\omega `$ $`={\displaystyle \frac{A\sqrt{-C}dt}{B+\sqrt{B^2+4A^2C}\mathrm{sin}\theta }},`$
which is globally defined on $`T^2`$ (for $`C<0`$ and $`B^2+4A^2C>0`$). After rescaling so that the metric is $`e^{2t}(dt^2+d\theta ^2)`$, the solution $`h(\zeta )=i(B+2A\sqrt{C})\zeta /\sqrt{B^2+4A^2C}`$ of Theorem 1.1 is obtained.
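The trigonometric substitution for $`C<0`$ can be checked in the same way, writing $`C=-\mathrm{\Gamma }`$ with $`\mathrm{\Gamma }>0`$ so that all symbols stay real:

```python
import sympy as sp

theta, A, B, G = sp.symbols('theta A B Gamma', positive=True)
C = -G  # the C < 0 case

S = sp.sqrt(B**2 + 4*A**2*C)  # real provided B^2 + 4 A^2 C > 0
u = -(B + S*sp.sin(theta)) / (2*C)

lhs = sp.diff(u, theta)**2
rhs = (-A**2 + B*u + C*u**2) / (-C)

# The difference reduces to a multiple of sin^2 + cos^2 - 1, hence 0.
assert sp.simplify(lhs - rhs) == 0
```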
More generally if $`\overline{h}`$ is an orientation reversing Möbius transformation, then the Weyl connection is well defined away from the fixed points of this transformation. Hence the elliptic elements, apart from the simple inversions (which have an invariant circle), give solutions globally defined on $`S^2`$ (equivalent to one of the solutions above). The hyperbolic elements, with two fixed points, correspond to the solutions on $`T^2`$ (they are periodic solutions on a cylinder). The remaining cases occur as limits. For instance the simple inversions, such as $`\zeta \mapsto 1/\overline{\zeta }`$, give the hyperbolic metric.
## 6. Twistor theory
The twistor space of a conformal $`2`$-manifold $`M`$ is its orientation double cover, viewed as a complex curve $`\mathrm{\Sigma }`$. This is a rather trivial two dimensional analogue of the four dimensional theory (see, for instance, ). Note that $`\mathrm{\Sigma }`$ has a real structure given by the nontrivial involution in each fibre and that $`M`$ may be recovered from $`\mathrm{\Sigma }`$ as the moduli space of real pairs of points. The full moduli space of (unordered, distinct) pairs of points in $`\mathrm{\Sigma }`$ is $`M^{}=\left(\mathrm{\Sigma }\times \mathrm{\Sigma }\setminus \mathrm{\Delta }(\mathrm{\Sigma })\right)/S_2`$. This complex surface has a natural conformal structure: a tangent vector to $`M^{}`$ at $`\{x_1,x_2\}`$ consists of a pair of tangent vectors to $`\mathrm{\Sigma }`$ (at $`x_1`$ and $`x_2`$), and it is null if one of these components vanishes. Hence $`\mathrm{\Sigma }`$ is (locally) the space of null geodesics in $`M^{}`$. Of course $`M^{}`$ is the natural space in which real analytic functions on $`M`$ may be written $`f=f(z,\overline{z})`$ with $`f`$ holomorphic in two variables.
Although this notion of twistor space has no real content, it does provide a formal way to distinguish an integrable Möbius structure in two dimensions from a one dimensional complex projective structure. The former is a trace-free Hessian $`L^1\to S_0^2T^{}M\otimes L^1`$, whereas the latter is a second order operator, on a line bundle $``$ with $`^2=T\mathrm{\Sigma }`$, with values in $`(T^{}\mathrm{\Sigma })^2`$, whose symbol is the identity. The two are easily related: $`(T^{}\mathrm{\Sigma })^2`$ is the pullback of $`S_0^2T^{}M`$, and since $`T\mathrm{\Sigma }\otimes \overline{T\mathrm{\Sigma }}`$ is (the pullback of) $`L^2`$, it follows that $`\overline{}`$ can be identified with $`L^1`$. The projectivisation of $`J^1`$ corresponds to $`S^2(M)`$, and the complex projective structure defines a connection $`J^1\to J^2\subset J^1(J^1)`$ which projectivises to the flat connection on $`S^2(M)`$ induced by the integrable Möbius structure.
A more satisfying twistorial description would encode the Möbius structure in pure holomorphic geometry. Nevertheless, I hope the naïve twistor theory given here at least provides some light entertainment.