# Random Walk Beyond Hartree-Fock
## 1 Introduction
We will give a brief discussion of the recently developed Constrained-Path Monte Carlo (CPMC) method. This ground-state ($T=0$) method is a quantum Monte Carlo (QMC) technique that eliminates the infamous fermion sign problem plaguing simulations of systems of interacting electrons. The fermion sign problem causes the variance of measured quantities to increase exponentially with increasing system size and decreasing temperature; it rapidly destroys one's ability to compute with acceptable accuracy. With the CPMC method, system sizes have been simulated that are not possible with the standard method. In particular, the lattice-size dependence of many-body superconducting pairing correlation functions was simulated for some of the largest lattice sizes studied to date.
In the CPMC method the elimination of the sign problem is accomplished by trading an exact procedure for an approximate one that has been demonstrated to give very accurate estimates of energies and various many-body correlation functions. The exact procedure determines the lowest eigenvalue and eigenvector of the Hamiltonian by projecting them from a trial state. This procedure is easily converted to a branching random walk. Because of the sign problem, the random walkers carry positive and negative weights, unfortunately in proportions such that the average weight (sign) becomes zero as the system size increases. The constrained-path method is a particular way to break the symmetry in the sign of the walkers and produce only positively weighted walkers, by eliminating those with a negative overlap with a certain constraining state. The procedure bears a similarity to the fixed-node Monte Carlo method that has been successfully used for several decades in simulations of interacting electron systems defined in the continuum of configuration space. The CPMC method, however, operates in the manifold of single-particle states (Slater determinants) defined in Fock space and hence represents quite a different and novel Monte Carlo algorithm.
In the next section we will give a brief discussion of the method. Then, in the following sections, we will discuss the various models to which the method has been applied, highlighting significant results. After this we will discuss several strategies for parallelizing the method. At first glance it would seem as if we could simply exploit the natural parallelization enjoyed by most Monte Carlo methods; in fact we need to do a bit more. In the closing section we make some speculations on future applications and extensions of the CPMC method and the potential changes in parallelization procedures.
## 2 The Constrained-Path Monte Carlo Method
Our numerical method is extensively described and benchmarked elsewhere. Here we only discuss its basic strategy and approximation. In the CPMC method, the ground-state wave function $|\Psi_0\rangle$ is projected from a known initial wave function $|\Psi_T\rangle$ by a branching random walk in an over-complete space of Slater determinants $|\varphi\rangle$. In such a space, we can write $|\Psi_0\rangle = \sum_\varphi \chi(\varphi)|\varphi\rangle$. The random walk produces an ensemble of $|\varphi\rangle$, called random walkers, which represent $|\Psi_0\rangle$ in the sense that their distribution is a Monte Carlo sampling of $\chi(\varphi)$, that is, a sampling of the ground-state wave function.
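To make the projection concrete, here is a minimal deterministic sketch (in Python) of the underlying power iteration: repeated application of $1-\tau(H-E_T)\approx e^{-\tau(H-E_T)}$ filters a trial vector down to the ground state. The $6\times 6$ random Hermitian matrix is invented purely for illustration and stands in for the real many-body Hamiltonian; the stochastic, branching version of this iteration is what the CPMC random walk implements.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 6))
H = 0.5 * (H + H.T)              # toy Hermitian "Hamiltonian" (illustrative)

psi = rng.normal(size=6)         # trial state |Psi_T>, overlapping |Psi_0>
tau = 0.05                       # imaginary-time step

for _ in range(2000):
    psi -= tau * (H @ psi)       # apply 1 - tau*H ~ exp(-tau*H)
    psi /= np.linalg.norm(psi)   # normalisation plays the role of E_T

E0 = psi @ H @ psi               # Rayleigh quotient
print(E0, np.linalg.eigvalsh(H)[0])  # matches the exact lowest eigenvalue
```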
To completely specify the ground-state wave function for a system of interacting electrons, only determinants satisfying $\langle\Psi_0|\varphi\rangle > 0$ are needed because $|\Psi_0\rangle$ resides in either of two degenerate halves of the Slater determinant space, separated by a nodal surface $\mathbf{N}$ that is defined by $\langle\Psi_0|\varphi\rangle = 0$. The degeneracy is a consequence of both $|\Psi_0\rangle$ and $-|\Psi_0\rangle$ satisfying Schrödinger's equation. The sign problem occurs because walkers can cross $\mathbf{N}$ as their orbitals evolve continuously in the random walk. Asymptotically they populate the two halves equally, leading to an ensemble that has zero overlap with $|\Psi_0\rangle$. If $\mathbf{N}$ were known, we would simply constrain the random walk to one half of the space and obtain an exact solution of Schrödinger's equation. In the constrained-path QMC method, without a priori knowledge of $\mathbf{N}$, we use a trial wave function $|\Psi_T\rangle$ and require $\langle\Psi_T|\varphi\rangle > 0$. This is what is called the constrained-path approximation.
The quality of the calculation clearly depends on the trial wave function $|\Psi_T\rangle$. Since the constraint only involves the overall sign of its overlap with any determinant $|\varphi\rangle$, it seems reasonable to expect the results to show some insensitivity to $|\Psi_T\rangle$. Through extensive benchmarking on the Hubbard model, it has been found that simple choices of this function can give very good results.
Besides serving as the starting point and as the condition constraining the random walk, we also use $|\Psi_T\rangle$ as an importance function. To reduce variance, we use $\langle\Psi_T|\varphi\rangle$ to bias the random walk into those parts of Slater determinant space that have a large overlap with the trial state. For all three uses of $|\Psi_T\rangle$, it clearly is advantageous to have $|\Psi_T\rangle$ approximate $|\Psi_0\rangle$ as closely as possible. Only in the constraining of the path does $|\Psi_T\rangle \ne |\Psi_0\rangle$ generate an approximation.
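The overlap $\langle\Psi_T|\varphi\rangle$ that enters all three roles has a simple closed form for determinants. The sketch below is a simplified, assumption-laden illustration rather than the production algorithm: it shows one importance-sampled, constrained step, where the one-body propagator `B` stands in for a single sampled auxiliary-field configuration (the Hubbard-Stratonovich sampling and the full weight bookkeeping are omitted).

```python
import numpy as np

def overlap(Phi_T, Phi):
    # <Psi_T|phi> for single determinants, each stored as an
    # N x N_e matrix of single-particle orbital coefficients
    return np.linalg.det(Phi_T.conj().T @ Phi)

def constrained_step(Phi, weight, Phi_T, B):
    # B: an N x N one-body propagator for one sampled auxiliary
    # field (assumed given; its sampling is omitted in this sketch)
    Phi_new = B @ Phi
    ratio = overlap(Phi_T, Phi_new) / overlap(Phi_T, Phi)
    if ratio <= 0.0:
        # constrained-path approximation: kill any walker whose
        # path crosses the surface <Psi_T|phi> = 0
        return None, 0.0
    return Phi_new, weight * ratio   # importance-sampled weight update
```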
Almost all the calculations reported here are done for square lattices with periodic boundary conditions. Mostly, we study closed-shell cases, for which the corresponding free-electron wave function is non-degenerate and translationally invariant. In these cases, the free-electron wave function, represented by a single Slater determinant, is used as the trial wave function $|\Psi_T\rangle$. The use of an unrestricted Hartree-Fock wave function as $|\Psi_T\rangle$ generally produces no significant improvement in the results.
We remark that the CPMC method has been extended to use generalized Hartree-Fock wave functions, of which the most famous example is the BCS wave function. The trick for doing this is described elsewhere.
One does simulations because exact solutions are generally unavailable, and approximate solutions, like those from Hartree-Fock approximations, are often poor. The objective of the CPMC method and other simulation methods is to go beyond the Hartree-Fock approximation. By expressing the wave function as a linear combination of Slater determinants, the CPMC method is a type of stochastic configuration interaction (CI) method. One difference from the classic CI method is that its basis functions, the Slater determinants, are over-complete. Another difference is that the set of basis states is selected via a constrained, importance-sampled random walk. There is not just one set of basis functions but many sets, and averaging over these sets is necessary to compute expectation values.
## 3 Applications
Except for a very recent application to a nuclear physics model, all applications of the constrained-path method have been to Hubbard-like models of interest to physical chemists and condensed-matter physicists. These models can be grouped as one-band, two-band, and three-band models, and each group usually has targeted classes of phenomena and materials. The Hubbard models represent considerable simplifications of the complex interactions found in actual materials but still pose complicated many-body problems for which exact solutions are rare and limited to one-dimensional systems. Under these circumstances Monte Carlo methods represent perhaps the only controlled means to study the properties of these models and to benchmark approximate theories for these properties.
Hubbard models are used to study a sweeping array of intrinsically quantum many-electron phenomena. Historically these models were heavily studied for possible magnetic phenomena (ferromagnetic, anti-ferromagnetic, paramagnetic, etc.). More recently, they have been intensively studied as possible representations of high-temperature superconducting materials. In between, they were studied as candidate models of heavy-fermion and mixed-valence materials.
The principal objects of interest are correlation functions, for example, spin-spin, charge-charge, and pair-pair correlation functions, because the scaling of these functions with system size gives a measure of the symmetry of the ground state. In one and two dimensions, long-range order at finite temperature is generally precluded by the Mermin-Wagner theorem. Such states can however exist at zero temperature, and their existence is signified by the behavior of correlation functions with increasing system size. Long-range anti-ferromagnetic order, for example, would be indicated by the $(\pi,\pi)$ peak of the Fourier transform of the spin-spin correlation function increasing with increasing system size and extrapolating to a non-zero value in the limit of infinite system size. Long-range superconducting order would be indicated by the long-range part of the pair-pair correlation function remaining a positive number as the system size is extrapolated to infinite volume. Since the sign problem worsens as the system size increases, a good approximate method, like the CPMC method, is an important tool for determining whether long-range order exists in these models.
### 3.1 One-Band Models
The classic one-band Hubbard model is given by the Hamiltonian
$$H=-t\sum_{\langle ij\rangle\sigma}\left(c^\dagger_{i\sigma}c_{j\sigma}+c^\dagger_{j\sigma}c_{i\sigma}\right)+U\sum_i n_{i\uparrow}n_{i\downarrow},\qquad(1)$$
where the double summation is restricted to nearest neighbors, the operators $c^\dagger_{i\sigma}$ and $c_{i\sigma}$ create and destroy an electron of spin $\sigma$ at lattice site $i$, and $n_{i\sigma}=c^\dagger_{i\sigma}c_{i\sigma}$ is the number operator at site $i$. Typically the lattices studied were square with periodic boundary conditions. When the interaction $U=0$, the model is exactly solvable and the non-interacting electrons are described by a single band. When the number of electrons equals the number of lattice sites, the model is said to be half-filled. At half-filling $n=\sum_\sigma \langle n_{i\sigma}\rangle=1$.
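As a quick illustration of the $U=0$ limit, the sketch below fills the single tight-binding band $\epsilon(\mathbf{k})=-2t(\cos k_x+\cos k_y)$ on a periodic square lattice; the lattice size and value of $t$ are arbitrary choices for the example.

```python
import numpy as np

t, L = 1.0, 12                               # illustrative parameters
k = 2.0 * np.pi * np.arange(L) / L
kx, ky = np.meshgrid(k, k)
eps = -2.0 * t * (np.cos(kx) + np.cos(ky))   # U = 0 single-band dispersion

levels = np.sort(np.repeat(eps.ravel(), 2))  # two spin states per k-point
N_sites = L * L
E = levels[:N_sites].sum()                   # fill N_e = N levels: half filling
print(E / N_sites)                           # free ground-state energy per site
```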
In recent years, this model was extensively studied for its magnetic and possible superconducting properties. The half-filled model has no sign problem for most QMC methods, and QMC studies have played a major role in establishing that the ground state of the two-dimensional model has long-range anti-ferromagnetic order. This state is consistent with the observed behavior of the parent (undoped) state of high-temperature superconductors. In these materials superconductivity appears when the parent state is doped away from half-filling. At dopings relevant to experiment, the sign problem is very bad. With the CPMC method it was possible to study the doped Hubbard model and to compute various correlation functions as a function of system size. The main focus has been on superconducting pairing correlation functions.
To be a bit more definite, the types of correlation functions computed were as follows. The spin density structure factor is
$$S(k_x,k_y)=S(\mathbf{k})=\frac{1}{N}\sum_j \exp(\mathrm{i}\,\mathbf{k}\cdot\mathbf{j})\,\langle s_0 s_j\rangle,\qquad(2)$$
where $s_j=n_{j\uparrow}-n_{j\downarrow}$ is the z-component of spin at site $j$. The charge density structure factor is similar to (2), with spin replaced by density, i.e., with the $-$ sign in $s_j$ replaced by a $+$ sign. The electron pairing correlation function is defined as
$$P_\alpha(j_x,j_y)=P_\alpha(\mathbf{j})=\langle\Delta^\dagger_\alpha(\mathbf{j})\,\Delta_\alpha(0)\rangle,\qquad(3)$$
where $\alpha$ indicates the symmetry of the pairing. The on-site $s$-wave pairing function has $\Delta_s(\mathbf{j})=c_{j\uparrow}c_{j\downarrow}$, while for $d$-wave pairing we used $\Delta_d(\mathbf{j})=c_{j\uparrow}\sum_\delta f(\delta)\,c_{j+\delta\,\downarrow}$, where $\delta$ is $(\pm 1,0)$ and $(0,\pm 1)$. For $\delta$ along the $x$-axis, $f(\delta)$ is $1$; otherwise it is $-1$.
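For reference, Eq. (2) is just a lattice Fourier transform, so the structure factor can be evaluated directly from measured real-space correlations. A minimal sketch, with an idealised staggered correlation array standing in for real Monte Carlo data:

```python
import numpy as np

def spin_structure_factor(corr):
    # corr[jx, jy] = <s_0 s_j> on an L x L periodic lattice (Eq. 2);
    # for a real, inversion-symmetric corr the FFT sign convention
    # does not affect the result
    return np.fft.fft2(corr).real / corr.size

L = 8
jx, jy = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
corr = 0.25 * (-1.0) ** (jx + jy)   # idealised staggered (Neel-like) data
S = spin_structure_factor(corr)
print(S[L // 2, L // 2])            # the (pi, pi) antiferromagnetic peak
```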
Simulations of the Hubbard model find that the s-wave pairing correlations are generally suppressed relative to the d-wave pairing correlations. For a given lattice size, increasing the value of $U$ suppresses the d-wave pairing, and for a given value of $U$, increasing the lattice size suppresses the d-wave pairing. These results are inconsistent with long-range order and are illustrated in Figs. 1 and 2, where the d-wave pairing correlation function is shown for a $12\times 12$ and a $16\times 16$ lattice as a function of distance $|j|$. In these figures we also report the “vertex contribution” to the correlation functions, defined as
$$V_\alpha(\mathbf{j})=P_\alpha(\mathbf{j})-\overline{P}_\alpha(\mathbf{j}),\qquad(4)$$
where $\overline{P}_\alpha(\mathbf{j})$ is the contribution of two dressed non-interacting propagators: for each term in $P_\alpha(\mathbf{j})$ of the form $\langle c^\dagger_\uparrow c_\uparrow c^\dagger_\downarrow c_\downarrow\rangle$, $\overline{P}_\alpha(\mathbf{j})$ has a term like $\langle c^\dagger_\uparrow c_\uparrow\rangle\langle c^\dagger_\downarrow c_\downarrow\rangle$. When combined, Figs. 1 and 2 indicate the likely absence of long-range superconducting order in the two-dimensional Hubbard model.
Most projector Monte Carlo ground-state calculations of Hubbard models have projected from trial states that were not superconducting. Using a trick, we projected from a d-wave BCS superconducting wave function with a BCS superconducting order parameter $\Delta=0.5$. (Again we use the trial state not only as the initial state, but also as the importance and constraining states.) Within statistical error, we found the same d-wave correlation function as we did when we projected from a free-electron wave function. One of our results is shown in Fig. 3. This similarity reinforces the results of Ref. that suggest the absence of off-diagonal long-range order (ODLRO) in the two-dimensional Hubbard model.
Adding a next-nearest-neighbor hopping term of strength $t'$ to the classic Hubbard model produces the $t$-$t'$-$U$ model. The CPMC method has been applied to this model, with the results not being materially different from those without the $t'$ term. Adding the $t'$ term, whose presence is suggested by band structure calculations, does not seem to change the magnetic properties at half-filling, and away from half-filling its addition does not seem to enhance superconductivity.
Adding a nearest-neighbor interaction, $V\sum_{\langle ij\rangle}n_i n_j$ with $n_i=\sum_\sigma n_{i\sigma}$, produces the $t$-$U$-$V$ model. The additional interaction generates additional competing effects that lead to a novel co-existence of states.
### 3.2 Two-Band Model
The two-band model is almost always called the periodic Anderson model. In this model a $d$ and an $f$ orbital sit on each lattice site. These two orbitals per unit cell lead to two bands. The Hamiltonian is
$$H=-t\sum_{\langle i,j\rangle\sigma}\left(d^\dagger_{i\sigma}d_{j\sigma}+d^\dagger_{j\sigma}d_{i\sigma}\right)+V\sum_{i\sigma}\left(d^\dagger_{i\sigma}f_{i\sigma}+f^\dagger_{i\sigma}d_{i\sigma}\right)+\epsilon_f\sum_{i\sigma}n^f_{i\sigma}+\frac{U}{2}\sum_{i\sigma}n^f_{i\sigma}n^f_{i,-\sigma},\qquad(5)$$
where the creation and destruction operators create and destroy d-electrons on sites of a square lattice and f-electrons on localized orbitals associated with these sites, and $n^f_{i\sigma}=f^\dagger_{i\sigma}f_{i\sigma}$ is the number operator for f-electrons. Hopping only occurs between neighboring lattice sites ($t$ term) and between a lattice site and its orbital ($V$ term).
Because of the two bands, the model is half-filled when there are two electrons per lattice site. At half-filling the model is said to be symmetric when $\epsilon_f=-U/2$. For the symmetric model there is no sign problem, and standard QMC methods suggest that the model is an insulating anti-ferromagnet if $U>U_c\approx 2$. With the CPMC method it became possible to study the magnetic properties of the doped model.
Upon doping with holes, the long-range anti-ferromagnetic order was rapidly destroyed. Around three-quarters filling of the lower band, a strong peak developed at the $(\pi,0)$ value of the spin structure factor $S_{\mathrm{ff}}(\mathbf{k})$ for the $f$-electrons. This peak is shown in Fig. 4, and its development is consistent with the resonance of two degenerate spin-density waves with wave vectors $(\pi,0)$ and $(0,\pi)$. Whether this novel state is one of long-range order remains unestablished.
### 3.3 Three-Band Model
The three-band model was constructed with the structure of high-temperature superconductors in mind. The common structural feature of these materials is the CuO$_2$ plane. In addition, these materials possess strong two-dimensional-like anisotropy. These properties have focused attention on the physics in these planes as the source of the superconductivity.
The three-band model studied by the CPMC method represents the Hamiltonian for the CuO$_2$ planes with only the most relevant Cu and O orbitals being kept:
$$\begin{aligned}H={}&\sum_{\langle j,k\rangle\sigma}t^{pp}_{jk}\left(p^\dagger_{j\sigma}p_{k\sigma}+p^\dagger_{k\sigma}p_{j\sigma}\right)+\epsilon_p\sum_{j\sigma}n^p_{j\sigma}+U_p\sum_j n^p_{j\uparrow}n^p_{j\downarrow}\\&+\epsilon_d\sum_{i\sigma}n^d_{i\sigma}+U_d\sum_i n^d_{i\uparrow}n^d_{i\downarrow}\\&+V^{pd}\sum_{\langle i,j\rangle}n^d_i n^p_j+\sum_{\langle i,j\rangle\sigma}t^{pd}_{ij}\left(d^\dagger_{i\sigma}p_{j\sigma}+p^\dagger_{j\sigma}d_{i\sigma}\right)\end{aligned}$$
In writing the Hamiltonian, we adopted the convention that the operator $d^\dagger_{i\sigma}$ creates a hole in a Cu $3d_{x^2-y^2}$ orbital and $p^\dagger_{j\sigma}$ creates a hole in an O $2p_x$ or $2p_y$ orbital. $U_d$ and $U_p$ are the Coulomb repulsions at the Cu and O sites respectively, $\epsilon_d$ and $\epsilon_p$ are the corresponding orbital energies, and $V^{pd}$ is the nearest-neighbor Coulomb repulsion. As written, the model has a Cu-O hybridization $t^{pd}_{ij}=\pm t_{pd}$, with the minus sign occurring for $j=i+\hat{x}/2$ and $j=i-\hat{y}/2$, and also a hybridization $t^{pp}_{jk}=\pm t_{pp}$ between oxygen sites, with the minus sign occurring for $k=j-\hat{x}/2-\hat{y}/2$ and $k=j+\hat{x}/2+\hat{y}/2$. The lattice is a planar CuO$_2$ structure which has three atoms, one Cu and two O, per unit cell. Hence the non-interacting problem has three bands. In a bit of a switch, the convention for this model has half-filling corresponding to half-filling of the anti-bonding (upper) band, which is the lowest band in the hole representation. When the Hamiltonian is expressed in a hole representation, half-filling then corresponds to one hole per unit cell.
The superconducting pairing correlation functions were studied with the CPMC method, and the findings were similar to those from the studies of the Hubbard model: d-wave correlations are stronger than the s-wave ones. For a given system size, increasing $U$ suppresses the pairing correlations. For a given $U$, increasing the system size suppresses the pairing correlations.
For the three-band model the binding energy between two holes was also calculated. This is a difficult calculation because it requires an accurate determination of the difference between nearly equal energies. It is a significant one because the binding of holes is a pre-requisite for superconductivity, phase separation, or stripe formation.
To calculate the binding energy for holes, we need to study the half-filled case and then the 1- and 2-hole doped cases. In the systems considered ($2\times 2$, $4\times 2$ and $6\times 4$ unit cells) the 2-hole doped case corresponds to a closed-shell case. The one-hole case is one hole away from a closed shell, and the corresponding free-electron wave function is doubly degenerate. In this one-hole case the accuracy of the energy is as good as in the closed-shell case, independently of the trial wavefunction used. However, for the half-filled case, which is two holes away from a closed shell, there are 4 degenerate free-electron states. If we used a trial state made by selecting arbitrarily any one of the degenerate states, or an arbitrary linear combination of these states, the calculated energies would not be accurate enough to compute the binding energy. Therefore, we used the following procedure for the half-filled case: we diagonalized the interacting part of the Hamiltonian in this degenerate subspace and obtained 2 states with energy proportional to $U_d$ and 2 states with zero energy. Of the 2 states with zero energy only one is a singlet. We used this state, which is represented by a linear combination of two Slater determinants: $(c^\dagger_{k_1\uparrow}c^\dagger_{k_1\downarrow}-c^\dagger_{k_2\uparrow}c^\dagger_{k_2\downarrow})|\mathrm{CS}\rangle/\sqrt{2}$. In Table 1, we compare the energies obtained using CPMC with the one- and two-Slater-determinant trial wave functions and energies obtained from exact diagonalization. We see that using two Slater determinants improves the accuracy by an order of magnitude or more; the accuracy becomes better than in the closed-shell case.
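Supporting a multi-determinant trial state requires only a small change to the overlap evaluation used by the constraint and the importance sampling: the overlap becomes the same linear combination of single-determinant overlaps. A hedged sketch (the matrix layout follows the single-determinant sketch in Section 2 and is an assumption):

```python
import numpy as np

def multi_det_overlap(coeffs, Phi_Ts, Phi):
    # <Psi_T|phi> for |Psi_T> = sum_m coeffs[m]|Phi_T[m]>, e.g. the
    # two-determinant singlet above with coeffs (+1, -1)/sqrt(2);
    # each Phi_T[m] and Phi is an N x N_e orbital-coefficient matrix
    return sum(c * np.linalg.det(P.conj().T @ Phi)
               for c, P in zip(coeffs, Phi_Ts))
```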
With this increased accuracy we found parameter ranges where holes bind, values of the parameters where binding is optimal, and an increase in binding energy with increasing system size. Since the appearance of hole binding seems decoupled from any enhancement of the superconducting correlations, an open question is the significance and consequence of this binding.
## 4 Parallelization
If we are considering a system of $N_{\mathrm{e}}$ electrons and $N$ lattice sites, the CPU time scales as $N_{\mathrm{w}}N_{\mathrm{e}}N^2$, where $N_{\mathrm{w}}$ is the number of walkers. (The number of walkers is typically of the order of 200 to 1000, with the larger number usually needed for the larger values of the interaction strength.) This number needs to be sufficiently large to ensure an adequate approximation to the ground state and is determined on the basis of experience. The factor $N_{\mathrm{e}}N^2$ comes from the scaling of the basic matrix operations that must be performed: matrix multiplication, inversion, re-orthogonalization, and rank-one updates of matrix inverses. The dominant operation is the matrix multiplication propagating a walker. The propagator is represented by an $N\times N$ matrix, and each walker by two $N\times N_\sigma$ matrices, i.e., one for each spin.
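The re-orthogonalization step deserves a note: repeated multiplication by propagators degrades the linear independence of a walker's orbitals, so the orbital matrix is periodically refactored. A minimal sketch using a QR factorization (absorbing the scale factor into the walker weight is one common convention, assumed here):

```python
import numpy as np

def reorthogonalize(Phi, weight):
    # Phi: N x N_e walker matrix whose columns have become nearly
    # linearly dependent after many propagation steps
    Q, R = np.linalg.qr(Phi)
    # Q spans the same determinant state; det(R) carries the scale.
    # (Production codes usually also fix the signs of R's diagonal.)
    return Q, weight * np.linalg.det(R)
```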
In general the method is CPU intensive as opposed to memory intensive. As a consequence, the basic code usually fits in the memory of one processor, and the parallelization of the simulation can follow one of several natural paths. The simplest path is to give each processor a copy of the code, have it read different independent input files, and compute independent runs. A less embarrassing parallelization, and one that reduces run time, is to share the walkers as equally as possible among as many processors as possible, propagate the walkers on each processor independently, and combine the results. The branching nature of the random walk, however, requires a slightly different procedure.
Each walker carries a weight, and as the walker propagates, its weight increases or decreases. Eventually the large-weight walkers dominate, but propagating one of them costs as much computing time as propagating a small-weight walker that has little bearing on the final results. Hence carrying the small-weight walkers becomes inefficient. In these types of random walks, a standard procedure is to periodically eliminate walkers with small weights and replace each large-weight walker by several medium-weight walkers whose total weight on average is the same as that of the single walker. There are several schemes to do this, and these procedures are called population control. Population control prevents a single walker from ultimately dominating the simulation and, when used properly, reduces the variance of the computed results.
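One common population-control scheme is systematic ('comb') resampling, in which walkers are duplicated in proportion to their weights and the survivors restart at the average weight. The sketch below is one such scheme among the several alluded to, not necessarily the one used in the original work.

```python
import numpy as np

def population_control(walkers, weights, n_target, rng):
    # Systematic (comb) resampling: draw n_target equally spaced
    # points along the cumulative weight; heavy walkers are copied,
    # light ones dropped, and total weight is preserved on average.
    w = np.asarray(weights, dtype=float)
    total = w.sum()
    combs = (rng.random() + np.arange(n_target)) * total / n_target
    idx = np.searchsorted(np.cumsum(w), combs)
    new_walkers = [walkers[i] for i in idx]
    new_weights = np.full(n_target, total / n_target)
    return new_walkers, new_weights
```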
After a population control step, the loads on the processors are unbalanced. The natural action then would be to redistribute the load. Unless this load represents a relatively large number of walkers per processor, there is a danger of introducing a bias into the simulation by performing population control on a population too small to be representative. Such a case is expected if several hundred walkers are distributed across a hundred or so processors.
Two other options for parallelization are easy to implement. One is to use relatively few well-populated processors and have them independently execute population control. (This is the procedure we used for small and intermediate lattice sizes.) The other, which is more effective in reducing run time for larger jobs when a large number of processors are available, is to do population control by moving all the walkers onto one processor, performing population control there, and then uniformly re-distributing the population. Because the amount of information passed between processors is small and the amount of time needed for population control is small, this procedure still achieves a nearly linear reduction in computation time with the number of processors and is simple to implement in any message-passing environment.
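In a message-passing setting the second option reduces to a gather, a root-side population-control call, and a scatter. A hedged mpi4py sketch, assuming each rank holds its own list of (walker, weight) pairs (`make_local_walkers` is a hypothetical setup routine) and reusing the `population_control` routine sketched above:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
rng = np.random.default_rng(rank)

local = make_local_walkers(rank)     # hypothetical: list of (walker, weight)

# Gather the whole population on the root process ...
gathered = comm.gather(local, root=0)
if rank == 0:
    pool = [pair for part in gathered for pair in part]
    walkers, weights = zip(*pool)
    walkers, weights = population_control(list(walkers), list(weights),
                                          n_target=len(pool), rng=rng)
    pairs = list(zip(walkers, weights))
    chunks = [pairs[i::size] for i in range(size)]  # even re-distribution
else:
    chunks = None

# ... then scatter the rebalanced population back to all processes.
local = comm.scatter(chunks, root=0)
```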
## 5 Concluding Remarks
The constrained-path method has provided simulators of interacting electron systems with a useful tool to study system sizes impossible by other means. With this method it has been possible to investigate the timely and important question of the nature of superconducting pairing correlations in candidate models for high-temperature superconductivity. The results obtained strongly suggest the absence of long-range order in these models, even though they do not provide a rigorous proof of this absence.
While appearing similar to the fixed-node method long used for continuum problems, the CPMC method is not a fixed-node method. Appearing at about the same time as the CPMC method was a lattice version of the fixed-node method. The CPMC method appears to give more accurate estimates of quantities like the energy and correlation functions, but the fixed-node method can more easily propagate some special trial states not expressible as a sum of single Slater determinants.
The concept of constrained random walks has led to two other quantum Monte Carlo methods. One is a constrained random walk at finite temperatures, for which the initial results are promising. The significance is that finite-temperature methods have lacked effective analogs even of the fixed-node method. The other advance is the constrained-phase method, which allows the propagators and Slater determinants to be complex valued: one has generalized the sign problem to a phase problem. The ground-state wave function of a system of electrons in a magnetic field must be complex-valued, and making the propagator complex-valued may be convenient if the system has long-range forces. Preliminary results on small systems in a magnetic field are also promising.
The $16\times 16$ lattice size is most likely the largest for which the simple parallelization scheme described above is convenient. Often $N_{\mathrm{e}}\propto N$, so the computation scales roughly as $N^3$. Thus simulating a $20\times 20$ system increases the run time by a factor of 4 relative to the $16\times 16$ system; this increases the simulation time from 2 weeks, for example, to 2 months. It is becoming clear that one might need to simulate larger lattice sizes to ensure finite-size effects are not influencing the results. Going to these larger systems will require distributing the matrix operations across many processors. Such operations are all fundamental, and procedures for doing this type of parallelization exist. We will be exploring their utilization in the near future.
# The Fornax Spectroscopic Survey
## 1 Introduction
The development of a new generation of multi-object spectrographs, exemplified by the ‘Two degree Field’, or 2dF, multi-fibre spectrograph on the Anglo-Australian Telescope (AAT), has opened up whole new areas of astronomical survey science. One particular area, which we discuss in this paper, is the opportunity to make a truly complete spectroscopic survey of a given area on the sky, down to well determined, faint limits, irrespective of image morphology or any other preselection of target type.
The Fornax Spectroscopic Survey, or FSS, seeks to exploit the huge multiplexing advantage of 2dF by surveying a region of 12 square degrees centred on the Fornax Cluster of galaxies. It will encompass both cluster galaxies, of a wide range of types and magnitudes, and background and foreground galaxies (over a similarly wide range of morphologies), as well as Galactic stars, QSOs and any unusual or rare objects.
Although many surveys of nearby clusters have been made over the past 20 years or more, these are all limited in several crucial aspects. Spectroscopic surveys exist, but typically only of the few dozen brightest cluster galaxies (and any background interlopers in the top few magnitudes of the cluster luminosity function). Photometric surveys, of course, go much deeper, but such studies must be of a statistical nature (e.g. subtracting off the expected background numbers; Smith et al. 1997), or rely on subjective judgements of likely cluster membership based on morphology, surface brightness or colour (e.g. Ferguson 1989). Of particular concern is the surface brightness; low surface brightness galaxies (LSBGs) seen towards a cluster are conventionally assumed to be members, while apparently faint, but high surface brightness galaxies (HSBGs) are presumed to be luminous objects in the background (e.g. Sandage, Binggeli, & Tammann 1985). The failure of either assumption, i.e. the existence of large background LSBGs (such as the serendipitously discovered Malin 1; Bothun et al. 1987) or of a population of high surface brightness (compact) dwarfs in the cluster (Drinkwater & Gregg 1998), can have a dramatic effect on our perception of the galaxy population as a whole. Furthermore, it is possible that a population of extremely compact galaxies (either in the cluster or beyond) could masquerade as stars and hence be missed altogether from galaxy samples. Examples have previously been found in, for example, QSO surveys, but again these are serendipitous discoveries and hard to quantify (see Drinkwater et al. 1999a = Paper II, and references therein).
Few previous attempts at all-object surveys have been made. The one most similar to ours is probably that of Morton and Tritton in the early 1980s. They obtained around 600 objective prism spectra and 100 slit spectra for objects in a 0.31 square degree region of background sky (i.e. no prominent cluster) over the course of a 5 year period (Morton, Krug & Tritton 1985). More recently Colless et al. (1993) obtained spectra of about 100 objects in a small area of sky and small magnitude range in order to investigate the completeness of faint galaxy redshift surveys.
Our overall survey will therefore represent a huge increase in the volume of data and, in addition, will give a uniquely complete picture of a cluster of galaxies. It is worth noting that the huge galaxy surveys planned with 2dF (Folkes et al. 1999; Colless 1999) or the Sloan Digital Sky Survey (Gunn 1995; Loveday & Pier 1998) will not address such problems, since their galaxy samples will be pre-selected from photometric surveys and will only include objects classified as galaxies and not of too low surface brightness, thus removing both ends of any potentially wide range of galaxy parameters.
In the present paper we discuss the design and aims of our all-object Fornax Spectroscopic Survey and present initial results on the velocity distributions. Section 2 gives a technical definition of the survey, describing the relevant features of the 2dF spectrograph, the selection of our target catalogue and the calibration of this input catalogue. In Section 3 we discuss the scientific aims of the survey and summarise the types and numbers of objects we expect to observe. In Section 4 we discuss the spectroscopic observations and observational strategy. We describe the technique we have developed to identify and classify objects automatically from the 2dF spectra and give some examples from our initial observations. Section 5 gives the initial redshift results and Section 6 summarises the survey work to date.
## 2 The Survey Design
In this Section we describe the basic parameters of the Fornax Spectroscopic Survey. We start with the relevant technical details of the 2dF spectrograph, and then discuss our selection of targets from the digitised photographic sky survey plates and the calibration of our input catalogues.
### 2.1 The 2dF Spectrograph
The 2dF facility (Taylor, Cannon & Parker 1998) is probably the most complex ground-based astronomical instrument built to date. Via a straightforward ‘top-end’ change, the capability of the 3.9-m Anglo-Australian Telescope is transformed into a unique wide-field multi-fibre spectroscopic survey instrument. Up to 400 fibres are available at any one time for rapid configuration over the full two-degree diameter focal surface via a highly accurate robotic positioner mounted in situ. Each 2 arcsec diameter fibre can be placed to an accuracy of 0.3 arcsec in less than 10 seconds. The input target positions must be accurate to 0.3 arcsec r.m.s. or better over the whole two-degree field to avoid vignetting of the fibre entrance apertures. This requirement is only for relative positions; the absolute accuracy of a complete set of targets and guide stars need only be 1–2 arcsec, as the guide stars will then centre all the targets accurately.
The wide field is provided by a highly sophisticated multi-component corrector with in-built atmospheric dispersion compensator. In a novel arrangement, 2dF can observe 400 target objects on the sky while a further 400 fibres are being configured using the robotic positioner on one of the two available ‘field plates’ (focal surfaces). Once observations and configurations have been completed (usually over the same timescale), a tumbling mechanism allows the newly configured field plate to point at the sky whilst the previously observed field is re-configured for the next target field. In this way rapid field interchange provides an extremely efficient observing environment. Each set of 400 fibres feeds two spectrographs which accept 200 fibres each. These are mounted on the 2dF top-end ring and can produce low to medium resolution spectra on the dedicated $1024\times 1024$ TEK CCDs.
The 2dF is now operating at close to the original specifications anticipated for this most complex of instruments (Lewis, Glazebrook & Taylor, 1998). Field configuration times of about 1 hour for 400 fibres permit rapid cycling of target fields and have enabled excellent progress to be made with our complete survey.
### 2.2 The Fornax Cluster Field
We chose the Fornax Cluster for this study because it is a well-studied, compact, nearby southern galaxy cluster suited to this type of survey. We and several other groups have made photometric or small-scale spectroscopic surveys of the region (e.g. Ferguson 1989; Davies et al. 1988; Drinkwater & Gregg 1998; Hilker et al. 1999). The published spectroscopic samples have either been very small or have concentrated on the brighter cluster galaxies (Jones & Jones 1980; Drinkwater et al. 2000a). A search of NED (the NASA/IPAC Extragalactic Database, operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration) in our central 2dF field (number 1 in Table 1) found only 42 objects brighter than $B=20$ with measured redshifts: 30 cluster galaxies, 6 background galaxies and 6 QSOs. With 2dF we can now measure the redshifts of some 700–900 galaxies and quasars in this same field.
The Fornax Cluster is concentrated within one United Kingdom Schmidt Telescope (UKST) Sky Survey plate, and our survey will comprise four separate 2dF fields, which are listed in Table 1. We show the distribution of our fields on the sky in Fig. 1, compared to the positions of galaxies classified as likely cluster members by Ferguson (1989). Our first field is centred on the large galaxy NGC 1399 at the centre of the cluster. In order both to cover a large number of targets and to go significantly deeper than previous spectroscopic surveys, we chose to limit our survey at a $b_j$ magnitude of 19.7. This is essentially the same depth as the large-scale 2dF Galaxy Redshift Survey (GRS) of Ellis, Colless and collaborators (e.g. Colless 1999; Folkes et al. 1999). This combination of survey area and magnitude limit will optimise our measurement of the cluster galaxies (see Section 3.1).
### 2.3 Target Selection
In common with the large 2dF Galaxy Redshift Survey (Colless 1999), we have chosen to select our targets from catalogues based on UKST Sky Survey plates digitised by the Automatic Plate Measuring facility (APM) at Cambridge. Unlike most galaxy surveys however, which only select resolved images for spectroscopic measurement, we avoid any morphological selection and include all objects, both resolved and unresolved (i.e. ‘stars’ and ‘galaxies’). This means that we can include galaxies with the greatest possible range in surface brightnesses (and with a range of scale sizes): our only selection criterion is a (blue) magnitude limit. Including objects normally classified as stars greatly increases the size of our sample but it is the only way to ensure completeness.
Our input catalogue for the FSS is a standard APM ‘catalogues’ file (see Irwin, Maddox & McMahon 1994) of field F358 from the UK Schmidt southern sky survey. The field is centred at 03h 37m 55.9s, −34° 50′ 14″ (J2000), and the region scanned for the catalogue file is $5.8\times 5.8$ deg. This field is approximately centred on the Fornax Cluster and is slightly smaller than the region surveyed by Ferguson (1989) shown in Fig. 1. The APM image catalogue lists image positions, magnitudes and morphological classifications (as ‘star’, ‘galaxy’, ‘noise’, or ‘merged’) measured from both the blue ($b_j$) and red survey plates. The ‘merged’ image classification indicates two overlapping images: at the magnitudes of interest for this project the merged objects nearly always consisted of a star overlapping a much fainter galaxy. All the positions are measured from the more recent red survey plate (epoch 1991 September 13, compared to 1976 November 18 for the blue plate) to minimise problems with proper motions. The APM catalogue magnitudes are calibrated for unresolved (stellar) objects only, so we supplemented these with total magnitudes for the galaxies measured by direct analysis of the plate data (see Section 2.4).
Our target selection consisted simply of taking all objects from the APM catalogue in each of our four 2dF fields with magnitudes in the range $16.5\le b_j\le 19.7$. We did not apply any morphological selection, although the APM image classifications from the blue survey plate were used to determine which photometry to use (see Section 2.4). The limits were chosen to avoid very bright objects, which could not be observed efficiently with 2dF and for which the photographic photometry would be unreliable, whilst, at the faint end, allowing us to measure a significant area of the cluster (12 deg$^2$) in a reasonable amount of time. With these limits our sample contains some 14,000 objects, i.e. around 3,500 in each 2dF area. Thus each region requires a total of ten 2dF set-ups (to allow for ‘sky’ and broken fibres).
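The selection logic itself is a single magnitude cut with no morphological filter. The sketch below illustrates it on a small synthetic structured array; the column names are invented for the example and are not the actual APM catalogue schema.

```python
import numpy as np

catalogue = np.zeros(5, dtype=[("bj", "f4"), ("class", "U6")])
catalogue["bj"] = [15.9, 16.8, 18.2, 19.5, 20.1]
catalogue["class"] = ["star", "galaxy", "merged", "star", "galaxy"]

# The only cut is on magnitude; 'class' is never used to exclude
# objects, only (later) to decide which photometric estimator to use.
sel = (catalogue["bj"] >= 16.5) & (catalogue["bj"] <= 19.7)
targets = catalogue[sel]
print(targets)   # keeps stars, galaxies and merged images alike
```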
The selection of our targets is illustrated in Fig. 2, a magnitude-surface brightness (SB) diagram of all objects in our central 2dF field (‘Field 1’) from the APM data. The APM points include stars, which follow a well-defined locus at the bottom right of the distribution. The selection limit on the input APM catalogue (an area of 16 pixels at the isophotal detection threshold) does not impinge on the area from which our spectroscopic targets are chosen, running above the area shown in Fig. 2, at SB $>25\,b_j$ mag arcsec$^{-2}$, except for a tiny intersection with the top right-hand corner of the plotted region at $b_j\approx 20$ (i.e. our objects are well within the completeness limit of the overall APM catalogue). We also show on the diagram the positions of the objects discussed above with published redshifts in this same field (very bright cluster galaxies with $b_j<13$ were not matched). Apart from a few QSOs, the previously observed galaxies occupy a very small part of the diagram, tending to bright magnitudes and ‘normal’ surface brightness. Our new survey sample, defined by the dashed lines, includes the full range of surface brightness detected in the Schmidt data in that magnitude range. The breakdown of the sample by image classification is given in Table 2; as expected, it is dominated by unresolved objects (stars).
Our final spectroscopic sample will inevitably suffer from incompleteness at the low surface brightness limit. It will not be possible to measure spectra for the faintest LSB galaxies in reasonable exposure times even though the multiplex advantage of 2dF will enable us to go much fainter than previous work.
### 2.4 Photometric Calibration
The photometric calibration of the input catalogues is complicated by the non-linear response of the photographic emulsion. The methods we use to estimate the true fluxes are described in this Section. We use different methods for the stars, which are often heavily saturated, and for the galaxies, which are mostly unsaturated. The choice of estimator is based on the automated APM classification of each object as ‘stellar’ or ‘resolved’, although we emphasise again that objects of all morphological types are observed. In all of the discussion below we use the photographic blue $b_j$ magnitude defined by the IIIaJ emulsion combined with a GG 395 filter. This is related to the standard Cousins $B$ magnitude by $b_j=B-0.28\times(B-V)$ (Blair & Gilmore 1982).
#### 2.4.1 Unresolved objects (‘stars’)
It is relatively easy to estimate the magnitudes of unresolved objects (probably stars) from photographic data because the images all have the same shape. This means that the total magnitudes can be reliably derived from the outer, unsaturated regions of the brighter objects. The object magnitudes in the APM catalogue data (Irwin et al. 1994) were measured this way using an internal self-calibration procedure to fit stellar profiles, correcting for the non-linear response of the photographic emulsion (see Bunclark & Irwin 1984). The same method was used for objects classified as ‘merged’, since, as discussed above, these are dominated by stars.
The default APM calibration uses a quadratic relation to convert from raw instrumental magnitudes to calibrated $b_j$ magnitudes. We checked this calibration with CCD data and made a small adjustment to the default values. We based our recalibration on 2 stars from the Guide Star Photometric Catalog (Lasker et al. 1988), 2 fainter stars from CCD frames taken for the same project (Lasker, private communication, 1997) and 68 stars from our own CCD observations with the Curtis Schmidt Telescope at the Cerro Tololo Interamerican Observatory (CTIO, operated by the Association of Universities for Research in Astronomy Inc. (AURA), under a cooperative agreement with the National Science Foundation as part of the National Optical Astronomy Observatories). In each case the photographic $b_j$ magnitudes were derived from the $B$ and $V$ calibration data. Our adjustment to the default APM calibration was equivalent to a shift of about 0.2 mag, in the sense that our recalibrated values are fainter. We plot the calibrated CCD magnitudes against our final adopted magnitudes in Fig. 3. Our fit is very good, with an r.m.s. scatter of 0.13 mag over a range of 8 magnitudes (see Table 3).
#### 2.4.2 Resolved objects (‘galaxies’)
For resolved objects (probably galaxies) we did not use the APM catalogue magnitudes, but instead used total magnitudes estimated by fitting exponential disk profiles to the APM image parameters (as in Davies et al. 1988 and Irwin et al. 1990). In this section we describe the calibration of these total galaxy magnitudes.
The absolute calibration was taken from the original calibration of the Fornax Schmidt survey plate by Cawson et al. (1987). They used CCD images of 18 cluster galaxies which they compared pixel by pixel with the APM machine scan data of the same galaxies. The CCD images were calibrated using standard stars and should correspond closely to the standard Johnson B band. Cawson et al. quote a calibration error of 0.1 mag.
The Cawson et al. calibration was subsequently used by Davies et al. (1988) for their sample of Fornax LSBGs and for the photometry of brighter galaxies by Disney et al. (1990). Ferguson (1989) carried out an independent calibration of galaxies in the Fornax region and where his sample overlaps with that of Davies et al. the mean difference between the two magnitude scales is 0.09 mag. The APM images data for objects classified as ‘galaxies’ were calibrated directly from the Davies et al. (1988) sample (see Morshidi-Esslinger et al. 1999).
At high surface brightness levels (brighter than 22.7 $b_j$ mag arcsec$^{-2}$; Cawson et al. 1987; Davies et al. 1988) the limited dynamic range of the APM machine affects the calculated APM magnitude, even for galaxies. To alleviate this problem, for each galaxy we have fitted an exponential to the surface brightness profile in the range between $\mu_B=22.7\,b_j$ mag arcsec$^{-2}$ and the limiting surface brightness (detection isophote) $\mu_L=25.7\,b_j$ mag arcsec$^{-2}$. This procedure largely overcomes the problems of saturation at the centre of an image, but of course will not allow for any central excess light such as a nucleus. We chose to use an exponential to fit each surface brightness profile because a large fraction of the galaxy population is well fitted by such a function (Davies et al. 1988). The exponential fit gives values for the extrapolated central surface brightness $\mu_x$ and the exponential scale length $\alpha$. From these the relation $m_{tot}=\mu_x-5\log(\alpha)-1.995$ can be used to derive the total apparent magnitude $m_{tot}$ under the fitted profile. The surface brightness profile data are supplied as part of the APM images file. (In fact, the image area at different surface brightnesses is supplied, and this is used to produce a surface brightness profile assuming circular isophotes; see Phillipps et al. 1987; Morshidi-Esslinger et al. 1999.)
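In practice the fit is a straight line in $(r,\mu)$ space, since an exponential disc has $\mu(r)=\mu_x+2.5\log_{10}(e)\,r/\alpha$. A minimal sketch, with an idealised profile standing in for the APM data (radii in arcsec are an assumed input format):

```python
import numpy as np

def total_magnitude(r, mu):
    # Fit mu(r) = mu_x + 1.0857 * r / alpha over the range between
    # the saturation and detection isophotes, then extrapolate to
    # the total magnitude m_tot = mu_x - 5 log10(alpha) - 1.995.
    use = (mu >= 22.7) & (mu <= 25.7)
    slope, mu_x = np.polyfit(r[use], mu[use], 1)
    alpha = 2.5 * np.log10(np.e) / slope   # scale length in arcsec
    return mu_x - 5.0 * np.log10(alpha) - 1.995

# Idealised check: a pure exponential with mu_x = 22.0 and alpha = 3".
r = np.linspace(0.0, 15.0, 40)
mu = 22.0 + 2.5 * np.log10(np.e) * r / 3.0
print(total_magnitude(r, mu))   # recovers 22.0 - 5*log10(3) - 1.995
```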
We compare our galaxy photometry with the CCD photometry of Caldwell & Bothun (1987) in Fig. 4. Here we have plotted total $b_j$ magnitudes from Caldwell & Bothun against our photographic galaxy magnitudes. Following Ferguson & Sandage (1988) we have assumed $B-V=0.72$ when converting to $b_j$ if the colour was not measured by Caldwell & Bothun. Over the whole range shown there is an average offset of about 0.2 mag, in the sense that our magnitudes are fainter. We found a similar offset when comparing our whole sample to that of Ferguson (1989), also measured from photographic plates (not plotted, but listed in Table 3). We also estimated total (Kron) galaxy magnitudes using our own CCD data for a larger sample of faint galaxies: these show a small average offset in the opposite sense (see Table 3). When plotted in Fig. 4 these data give the impression that the slope of the calibration is steeper than unity, but this is also consistent with a constant offset in the Caldwell & Bothun data points, as they are all at brighter magnitudes. We have retained the Davies et al. (1988) calibration without any further adjustment since we do not expect closer agreement from methods which use different data and fit different profiles to estimate total galaxy magnitudes.
### 2.5 Astrometric Calibration and Guide Stars
The input target lists (including guide stars) must have relative positions accurate to 0.3 arcsec or better over the full two degree field of the 2dF spectrograph as discussed above. This condition is not satisfied by all image catalogues based on UKST plates—see Drinkwater, Barnes & Ellison (1995) for a discussion of the problems. We therefore checked the accuracy of the APM positions by comparison with the Positions and Proper Motions star catalogue (PPM; Röser, Bastian & Kuzmin, 1994).
A total of 232 PPM stars matched stars in the APM catalogue file for the whole UK Schmidt field F358, with $b_j$ magnitudes of 8.9–12.1. We eliminated outliers (total position errors more than 1.5 arcsec) and selected a test sample of the faintest 60 remaining stars. The mean and r.m.s. errors were $(0.39\pm 0.30)$ arcsec in RA and $(0.12\pm 0.23)$ arcsec in Dec. These errors are for the whole 6-degree field of the UK Schmidt: in a single 2dF field the scatter is about 0.25 arcsec in each direction. We also calculated the radial errors as a function of radius from the plate centre, as in Drinkwater et al. (1995), to test for any overall scale errors. We found a very small error: the faint stars were slightly too far from the centre (0.089 arcsec/degree) and the bright stars were slightly too close to the centre ($-0.021$ arcsec/degree). This is in the same sense as we found for the COSMOS/UKST catalogue, but smaller in magnitude: there we measured 0.14 and $-0.29$ arcsec/degree respectively. This shift (the ‘magnitude effect’) between the bright and faint stars is caused by asymmetric halos around the brighter stars, which displace their image centroids away from the plate centre. The largest scale error we found would not be significant over a two-degree field, so we have established that APM catalogue positions are accurate enough for 2dF observing.
The guide stars used to align the telescope during 2dF observations must have positions to the same accuracy as the targets and—importantly—in the same reference frame. We chose stars from the same APM catalogue file as our targets, but took care to minimise errors that could arise from proper motion or the ‘magnitude effect’. Both these problems are reduced by selecting faint guide stars, but we can further reduce the chance of selecting stars with significant proper motions by choosing stars with blue colours and APM positions measured from the more recent red survey plates (see Drinkwater, Barot & Irwin 1996). We used faint blue stars with $15.5\le b_j\le 16.5$ and $B-R<1.8$. As a rule of thumb, this corresponds to stars with barely discernible diffraction spikes on the blue survey plates and no diffraction spikes on the red survey plates. We found that stars selected according to these criteria were easily detected by the 2dF guiding system and had consistent positions.
## 3 Scientific Rationale
As emphasised in the Introduction, the FSS will cover all possible types of objects visible within our target magnitude range. As such, it will be useful for a large number of individual survey projects. In this Section we summarise some of these, divided according to object type.
### 3.1 Cluster Galaxies
The prime reason for choosing the sky area we have was, of course, the presence of the Fornax Cluster of galaxies. The Fornax and Virgo galaxy clusters are the nearest reasonably rich clusters (Fornax is approximately Abell Richness Class 0 – it is supplementary cluster S0373 in Abell, Corwin & Olowin 1989). We take the distance to Fornax to be 15.4 Mpc (a distance modulus of 30.9 mag) as derived by Bureau, Mould & Staveley-Smith (1996), so our 2dF sample reaches absolute magnitudes around $M_B=-11$. The Fornax Cluster has been the subject of several previous spectroscopic (Jones & Jones 1980; Drinkwater & Gregg 1998; Hilker et al. 1999) and photometric studies (Phillipps et al. 1987; Davies et al. 1988; Ferguson 1989; Ferguson & Sandage 1988; Irwin et al. 1990; Morshidi-Esslinger et al. 1999).
The main motivation for an all-object survey is, of course, to determine cluster membership for a complete sample of objects (including, especially, dwarf galaxies), irrespective of morphology. This will allow us to test whether the usual assignment of LSBGs to the cluster and high (or even ‘normal’) surface brightness faint galaxies to the background is justified (see, for example, the contrasting views expressed in Ferguson & Sandage 1988, and Irwin et al. 1990) and enable us to determine the complete surface brightness distribution (and the joint surface brightness - magnitude distribution) for a cluster for the first time (see Phillipps et al. 1987; Phillipps 1997). We report elsewhere on the identification of a population of low luminosity, very compact HSBGs in the cluster (Drinkwater et al. 2000b = Paper IV).
### 3.2 Normal Field Galaxies
Our galaxy sample will, though, be dominated by very large numbers of background galaxies. These will be roughly 10 times more numerous than the cluster members (depending on the relative slopes of the field number counts and the faint end of the cluster luminosity function) and can obviously be used to determine a field luminosity function (LF), using the conventional approach which requires redshift data. Although containing many fewer galaxies than the major 2dF or Sloan surveys, an LF determined from the FSS sample will have the advantage of including all galaxies, irrespective of morphology, at both the high and low surface brightness ends (down, obviously, to the surface brightness limit of the 2dF observations). Surface brightness limitations on the determination of LFs have been discussed by Phillipps & Driver (1995) and Dalcanton (1998). Extending this, we will be able to determine the bivariate brightness distribution for field galaxies (i.e. the joint distribution in luminosity and surface brightness; Phillipps & Disney 1986, van der Kruit 1987, Boyce & Phillipps 1995, de Jong 1996).
### 3.3 Compact and LSB Field Galaxies
We have already detected a number of very compact field galaxies beyond the Fornax Cluster (see Paper II for details). These objects are so compact that they have been classified as ‘stars’ from the blue sky survey plates and therefore represent a class of galaxy missed in previous galaxy surveys based on photographic survey plates. We estimate that they represent $2.8\pm 1.6\%$ of all galaxies in the magnitude range $16.5\le b_j\le 19.7$. They are luminous (within a few magnitudes of $M_*$) and most have strong emission lines and small sizes typical of luminous H II galaxies and compact narrow emission line galaxies. Four of the thirteen have red colours and early-type spectra, so are of a type unlikely to have been detected in any previous surveys.
Similarly we have been able to obtain spectra for a number of LSBGs beyond Fornax. Some are likely to have been misclassified as cluster members on morphological grounds (Ferguson 1989) and can only be revealed as larger background LSBGs (potential ‘Malin 1 cousins’; Freeman 1999) via redshift measurements (Drinkwater et al. 1999b; Jones et al. 2000 = Paper III, in preparation).
### 3.4 Galactic Stars
Although initially motivated by extragalactic interests, the FSS can also make significant contributions to Galactic astronomy. The lion's share of the unresolved targets in the survey will be ordinary Galactic stars, making up about 70% of the objects in the overall survey. For instance, the final tabulation will include many thousand M dwarfs in the Galactic disk. While the FSS 2dF velocity precision (50 km s$^{-1}$) is low compared to that used in most kinematic studies of the Galaxy (e.g. Norris 1994), the sheer numbers of M dwarfs should allow a good determination of their scale height and velocity dispersion, for example. As Fornax is only 30° from the South Galactic Pole, many of the stars in the survey will belong to the halo. Although only a minor mass component of the Galaxy, the properties of the halo provide clues to the formation of the whole Milky Way. Blue horizontal branch stars from the metal-poor halo will make up perhaps 1% of our sample and are straightforward to recognise spectroscopically.
### 3.5 Radio Sources
The region of our Fornax Spectroscopic Survey is covered by two sensitive radio continuum surveys – the NRAO VLA Sky Survey (NVSS; Condon et al. 1998) and the Sydney University Molonglo Sky Survey (SUMSS; Bock, Large & Sadler 1999). These cover different frequencies (1.4 GHz for NVSS, 843 MHz for SUMSS), both with an angular resolution of about 45 arcsec. The faintest radio sources catalogued by these surveys are roughly 2.5 mJy for NVSS and 5 mJy for SUMSS. At these faint flux density levels we expect to detect three main kinds of radio sources: QSOs, active galaxies and star-forming galaxies (Kron, Koo & Windhorst 1985; Condon 1984; see also Condon 1992). The fraction of star-forming galaxies increases rapidly below 10 mJy, and below 1 mJy they become the dominant radio-source population. Our 2dF spectra should discriminate reliably between AGN and starburst galaxies.
### 3.6 Quasi-Stellar Objects
As well as the foregoing radio quasars, the survey will detect one of the largest ever completely unbiased samples of optical QSOs. All previous optical QSO surveys have relied on one or more specific selection criteria, such as UV-excess or variability, to pre-select ‘candidate’ QSOs for follow-up spectroscopy (e.g. Boyle, Jones & Shanks 1991). The FSS on the other hand will be limited only by the strength of the QSO’s emission lines. Preliminary results suggest that this technique detects some 10% more QSOs to the same magnitude limit as conventional multi-colour work (see Meyer et al. 2000 = Paper V).
## 4 Spectroscopic Observations
In this Section we describe our spectroscopic data from the 2dF. We give a brief summary of the observing setup and initial data reduction and then explain the semi-automated analysis we perform to classify all the spectra. Finally we present some example spectra from our initial observations.
### 4.1 Observing Setup
We observed all our targets with the same observing setup for 2dF: the 300B grating and a central wavelength setting of 5800Å giving a wavelength coverage of 3600–8010Å at a resolution of 9Å (a dispersion of 4.3Å per pixel). This is the same setup as for the 2dF galaxy (Folkes et al. 1999) and QSO (Boyle et al. 1997) redshift surveys. We did not attempt to flux-calibrate our spectra given the difficulty of flux-calibration in fibre-fed spectroscopy, and because our objective was to measure velocities for as many objects as possible in the available time.
In order to maximise our observing efficiency we grouped our targets by their central surface brightness, so that we could vary exposure times to obtain similar quality spectra over a large range of apparent surface brightness. The exposure times ranged from 30 minutes for bright stars to four hours for LSB galaxies. Our early runs in commissioning time were limited to minimum exposures of about 2 hours and also included a range of objects and surface brightnesses, and so produced some very high signal-to-noise stellar spectra. We discuss the quality of the spectra as a function of exposure time and surface brightness in Section 4.4.
### 4.2 Data Reduction
The 2dF facility includes its own data reduction package (2DFDR) which permits fast, semi-automatic reduction of data direct from the instrument. When we started the project 2DFDR was still under development, so we chose instead to reduce the data with the DOFIBERS package in IRAF (IRAF is distributed by the National Optical Astronomy Observatories, operated by the Association of Universities for Research in Astronomy, Inc. under cooperative agreement with the NSF). We are now reducing the data in parallel with 2DFDR to compare our results.
The data reduction with IRAF follows the standard procedures for multi-object fibre spectroscopy supplemented by several scripts used to reformat the image header data and tabulate the object identifications using the output from the 2dF configuration software and the 2dF fibre-spectrum lookup table.
Accurate sky-subtraction with fibre spectra is difficult and problematic (Barden et al. 1993; Watson, Offer & Lewis 1998) especially for the stronger night sky-lines and when the fibres are closely spaced with profiles only 2-3 pixels wide (as with 2dF). After extracting the spectra we removed residuals from the strong night sky lines at 5577 and 6300Å by interpolation across the lines. The spectra were then visually inspected and any strong features due to cosmic ray events removed: these were identified by having widths less than the instrumental resolution.
We also remove atmospheric absorption features from the spectra using a simple self-calibration method. We take all the galaxies observed with a given CCD on a given night and average them with no weighting. The galaxy features cancel out, as they are all at different redshifts, leaving a combination of the instrumental response and the main atmospheric absorption bands. We then fit a continuum to this average and normalise by it, and finally edit the resulting spectrum by hand to set it to unity in all regions except the main atmospheric bands at 6800–7600Å. Dividing all object spectra by this removes the bands but otherwise leaves the spectra unchanged, with the instrumental response intact. The same approach cannot be used when most of the spectra on a given CCD are stars, with many common features all at the same wavelengths; for these we found that a normalisation spectrum generated from galaxies observed the same night was more than adequate.
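The correction amounts to a few array operations. The sketch below is purely illustrative: the array names, the polynomial continuum fit and the hard-coded band edges are assumptions for demonstration, not our actual reduction scripts.

```python
import numpy as np

def atmospheric_correction(object_spectra, galaxy_spectra, wavelength):
    """Build a normalisation spectrum from the galaxies observed on one
    CCD on one night, and divide it out of every object spectrum."""
    # Unweighted average: galaxy features at different redshifts cancel,
    # leaving instrumental response times atmospheric absorption.
    mean_spec = galaxy_spectra.mean(axis=0)

    # Fit a smooth continuum and normalise by it.
    coeffs = np.polyfit(wavelength, mean_spec, deg=5)
    norm = mean_spec / np.polyval(coeffs, wavelength)

    # Keep only the main atmospheric bands; set everything else to unity
    # (a step done by hand in the actual reduction).
    in_band = (wavelength > 6800.0) & (wavelength < 7600.0)
    norm[~in_band] = 1.0

    # Dividing removes the bands but leaves the instrumental
    # response of each object spectrum intact.
    return object_spectra / norm
```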
### 4.3 Spectral Analysis
The aim of our spectral analysis is to determine a redshift and identification for all spectra, ranging from Galactic stars to high redshift QSOs. In keeping with our survey philosophy we analyse all the spectra in an identical fashion, irrespective of their image morphology. To do this successfully we have adapted the usual galaxy-survey procedure of cross-correlating against template galaxy spectra, using a set of stellar templates instead of the normal absorption-line galaxy templates. This approach was previously used on a small sample of unresolved objects by Colless et al. (1993).
The first stage of the identification is to calculate cross-correlations automatically using the IRAF add-on package RVSAO (Kurtz & Mink, 1998). For each spectrum-template combination this measures the Tonry & Davis (1979) $`R`$ coefficient, the redshift and its error. Emission lines are not removed before performing the correlations, but the spectra are Fourier-filtered. At this stage the full available wavelength range is used for the cross-correlations. The redshifts are measured as radial velocities in units of $`cz`$ and are subsequently converted to heliocentric values. By choosing the template giving the best $`R`$ coefficient we can determine not only the redshift, but a first estimate of the object type. We only accept identifications with $`R\ge 3`$ (corresponding, in principle, to a peak in the cross-correlation which is significant at the 3 sigma level) and in addition make a check by eye for misidentifications (see below). This is straightforward for $`R\ge 3`$, since in practice such spectra always have three or more identifiable features. Objects with redshifts of $`500\mathrm{km}\mathrm{s}^{-1}`$ or less are Galactic stars for which the best template indicates the stellar spectral type. At higher redshifts, external galaxies are separated into absorption-line types if they match one of the stellar spectra or emission-line types if they match the emission-line galaxy template best. We found that all the absorption-line galaxies that would have been detected by galaxy templates were easily measured using the stellar templates, so we do not need to use specific absorption-line galaxy templates.
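The principle of the measurement can be sketched in a few lines. This is an illustration only: RVSAO’s actual implementation includes apodization, Fourier filtering and the Tonry & Davis error analysis, all omitted here, and the function and variable names are placeholders.

```python
import numpy as np

C_KMS = 299792.458

def crosscorr_velocity(spectrum, template, dlog10lam):
    """Cross-correlate two continuum-subtracted spectra rebinned onto a
    common log10(wavelength) grid with step dlog10lam; return cz in km/s."""
    s = spectrum - spectrum.mean()
    t = template - template.mean()
    cc = np.correlate(s, t, mode="full")
    # A shift of `lag` pixels in log-wavelength is a multiplicative
    # wavelength shift, i.e. a Doppler factor (1 + z).
    lag = cc.argmax() - (len(t) - 1)
    z = 10.0 ** (lag * dlog10lam) - 1.0
    return C_KMS * z

# In the survey, the template giving the largest R coefficient fixes both
# the redshift and a first estimate of the object type; identifications
# are accepted only for R >= 3.
```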
The template spectra used are listed in Table 4 and plotted in Fig. 5: we use a set of nine stellar templates from the Jacoby, Hunter & Christian (1984) library and a synthetic emission-line galaxy spectrum provided with the RVSAO package. We constructed a second emission-line template similar to the first but limited to wavelengths less than 6000Å. This was needed to give reasonable fits to the high-redshift galaxies where the H$`\alpha `$ line was shifted out of the 2dF bandpass, but strong H$`\beta `$/OIII features were present. The stellar templates were chosen to give a reasonable range of spectral types, but not more than could be separated with our low-resolution unfluxed spectra. We also note that the Jacoby et al. spectra have not been shifted to zero redshift: we therefore estimated their redshifts using a combination of individual line measurements and cross-correlation with other standards. These were then entered in the image headers to give the correct results with RVSAO; they are also listed in Table 4.
In the second stage of the identification process we check each identification interactively using the RVSAO package to display the best cross-correlation and a plot of the object spectrum with common spectral features plotted at the corresponding redshift. When the redshift is obviously wrong (e.g. with the Calcium H & K lines clearly present but misidentified), it is flagged as being wrong or in some cases is recalculated. The recalculation most commonly involves repeating the template cross-correlation on a restricted wavelength range chosen to avoid the red end of the spectrum affected by poorly removed sky features. In extreme cases the object may be a QSO: these are distinguished by strong broad emission lines and are measured using a composite QSO spectrum (Francis et al. 1991). Objects still not identified at this stage are flagged to be reobserved.
A third, supplementary stage is used for any spectra measured with good signal (a signal-to-noise ratio $`>`$10 in each 4.3Å wide pixel) but no obvious features in their 2dF spectra: these are flagged as ‘strange’ and scheduled for detailed follow-up observations with conventional slit spectrographs.
Once the spectroscopic redshift measurements are complete, they are corrected to heliocentric values. We checked the accuracy of the redshift measurements by comparing the results for 66 objects with repeated measurements. The r.m.s. scatter of the velocity differences is 90$`\mathrm{km}\mathrm{s}^{-1}`$. This uncertainty is consistent with the combined error estimates for the same measurements produced by RVSAO: the mean predicted error was 92$`\mathrm{km}\mathrm{s}^{-1}`$. Note that this implies a measurement error for a single observation of $`90/\sqrt{2}\approx 64\mathrm{km}\mathrm{s}^{-1}`$. We also compared our results to redshifts of 44 galaxies found in a search of the literature using NED (most were from Hilker et al. 1999). The comparison gave a mean velocity difference of ($`7\pm 17`$)$`\mathrm{km}\mathrm{s}^{-1}`$ and an r.m.s. scatter of 111$`\mathrm{km}\mathrm{s}^{-1}`$, entirely consistent with our internal calibration.
The 2dF spectra, although of low resolution and unfluxed, are useful for more detailed analysis than simple redshift measurements and object classifications (cf. Tresse et al. 1999). We defer any detailed analysis of the spectra to later papers dealing with specific object classes, but note here that, even for the lowest luminosity galaxies, they can be used to measure emission line equivalent widths, and hence star formation rates, line widths (limited by the resolution of 900$`\mathrm{km}\mathrm{s}^{-1}`$), emission line ratios (e.g. OIII/H$`\beta `$ and NII/H$`\alpha `$), absorption line indices (e.g. CaIIH+H$`ϵ`$/CaIIK and H$`\delta `$/FeI4045) and even ages and metallicities from these Balmer and metal absorption lines (Paper II; Folkes et al. 1999).
### 4.4 Current status
Although this present paper is mainly concerned with the principles behind the FSS, we already have a considerable amount of 2dF data for the project from commissioning observations in 1996/1997, and scheduled time in December 1997/January 1998 and November 1998. We have nearly completed our observations of the first field, having observed 92% of all targets in the range $`b_j=16.5`$ to 19.7, and successfully obtained redshifts for 94% of those observed.
For resolved objects (galaxies) the success rate of our redshift measurements is a function of surface brightness. In Fig. 6 we plot the numbers of galaxies observed and identified as a function of central surface brightness. We have attempted to optimise the exposure times to the surface brightnesses of the objects, using exposures up to 3.75 hours for the lower surface brightness images. The identification rate runs at 78% or better to a limit of 23$`b_j\mathrm{mag}\mathrm{arcsec}^{-2}`$. Fainter than this limit (corresponding to a mean surface brightness inside the detection threshold of $`24.5b_j\mathrm{mag}\mathrm{arcsec}^{-2}`$), the identifications drop off rapidly. The unresolved objects at higher surface brightness (mostly stars) have an identification rate of 95% in our target magnitude range of $`b_j=16.5`$ to 19.7.
In Fig. 7 we show example spectra from our initial observations of the various types of object discussed above. The first two spectra are Galactic stars, an M-dwarf and a white dwarf. The next two spectra are of normal low-redshift galaxies, one with an absorption line spectrum and one with an emission line spectrum. The remaining four spectra are all of objects that were unresolved (i.e. classified as stars in the target catalogues), but have been identified as various types of galaxy. The first is a compact emission line galaxy (CELG; see Section 3.3), the second is a normal, optically selected QSO and the third is an X-ray source. The final spectrum is of a fainter radio-loud quasar.
## 5 Initial Scientific Results: velocity distributions from the first 2dF field
The number of galaxies observed in the Fornax Cluster itself is not yet large enough to allow a detailed study of the cluster population: this will require results from the remaining three 2dF spectrograph fields to achieve the statistical samples needed. However, we have ample data to delineate clearly the velocity structure in the direction of Fornax. In particular, we have determined accurately the velocity distribution of Galactic stars, as well as the galaxy distribution in redshift space behind the cluster.
### 5.1 Radial velocities of Galactic stars
The radial velocity distribution of Galactic stars is revealed in the existing FSS results, despite the modest resolution of the spectra compared to those conventionally used in kinematic surveys of the Galaxy. A total of 2467 objects in Field 1 of Table 1 have reliable radial velocities $`v_r<750\text{km}\text{s}^{-1}`$. The estimated standard errors in the velocities are typically $`\pm 25`$ to $`80\text{km}\text{s}^{-1}`$, sufficiently small to reveal the contributions of different Galactic components.
Figure 8 shows the distribution of all FSS Field 1 objects with reliable ($`R\ge 3`$) velocities less than 750 km s<sup>-1</sup>. The field has Galactic co-ordinates $`(l,b)=(237^\mathrm{o},-54^\mathrm{o})`$, so we are sampling a sight-line looking diagonally ‘down’ through the Galactic plane between the anti-centre and anti-rotation directions. The component of the motion of the local standard of rest in this direction is $`120`$ km s<sup>-1</sup>. For our chosen magnitude range, the survey will sample predominantly disc, thick disc and halo main sequence stars, with some contribution from halo giants and disc white dwarfs (Gilmore & Reid 1983). The results can be compared with dedicated spectroscopic studies of faint stars in high-latitude fields (e.g. Kuijken & Gilmore 1989; Croswell et al. 1991; Majewski 1992).
The contribution of the various Galactic components can be demonstrated by considering subsamples of the stars defined by colour. Basic colour information can be derived from the blue and red magnitudes given in the APM Catalogue. Figure 9 shows the distribution of these $`(b_j-r)_{\mathrm{APM}}`$ colours for the Field 1 objects with velocities $`v_r\le 750`$ km s<sup>-1</sup>. The form of the distribution is similar to that obtained in dedicated studies of the properties of faint stars (e.g. Reid & Majewski 1993). We divide the stars into three samples: relatively blue stars having $`(b_j-r)_{\mathrm{APM}}\le 0.6`$; moderately red stars having $`0.6<(b_j-r)_{\mathrm{APM}}\le 1.7`$; and very red stars having $`(b_j-r)_{\mathrm{APM}}>1.7`$. The sharp decline in numbers bluewards of $`(b_j-r)_{\mathrm{APM}}\approx 0.6`$ is the result of the main sequence cut-off for moderately old stellar populations; the blue sample extends to this limit. These limits at $`(b_j-r)=0.6`$ and 1.7 correspond to $`(\mathrm{B}-\mathrm{V})\approx 0.4`$ and 1.1.
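These cuts are trivial to apply in practice; a minimal bookkeeping sketch (the array name is a placeholder) is:

```python
import numpy as np

def colour_subsamples(bj_minus_r):
    """Split an array of APM (b_j - r) colours into the three samples
    defined above."""
    blue = bj_minus_r <= 0.6                                   # WDs, halo HB stars
    moderately_red = (bj_minus_r > 0.6) & (bj_minus_r <= 1.7)  # G/K dwarfs and giants
    very_red = bj_minus_r > 1.7                                # late K and M stars
    return blue, moderately_red, very_red
```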
The moderately red stars are expected to include G and K dwarfs in the thick disc and halo, G and K giants in the halo, and disc K dwarfs. The halo component, being dynamically pressure supported, has a broad radial velocity distribution which is displaced relative to the solar motion by the component of the solar rotation velocity towards Fornax (Freeman 1987; Gilmore, Wyse & Kuijken 1987; Majewski 1993). Disc and thick disc stars, being rotationally supported, have a zero or small asymmetric drift and a modest intrinsic velocity dispersion: their velocity distributions will be centred closer to zero heliocentric velocity. The moderately-red star sample is therefore expected to have a broad velocity distribution with the halo component contributing a high velocity tail, consistent with the velocity distributions shown in Figure 10. In contrast, the very red star sample will be rich in disc late K and M dwarfs and will include halo late K and M giants. It is therefore expected to have only a modest net drift with respect to the local standard of rest but with a tail to high velocity, as observed in Figure 10. The blue stars include local (disc) white dwarfs and halo horizontal branch stars. They are therefore expected to have a broad range of velocities, consistent with the results here.
Of particular interest is the high velocity tail at $`v_r\gtrsim 400`$ km s<sup>-1</sup>. If the extreme examples are confirmed by higher resolution spectroscopy they will provide useful constraints on the mass of the Galaxy (e.g. Carney, Latham & Laird 1988; Croswell et al. 1991; Majewski, Munn & Hawley 1996; Freeman 1999, private communication).
### 5.2 Galaxies in the foreground of the Fornax Cluster
A gap is present in the velocity distribution between the cut-off in Galactic stars at $`cz\approx 550`$ km s<sup>-1</sup> and the Fornax Cluster at $`cz\approx 900`$–$`2200\text{km}\text{s}^{-1}`$. No objects are found in this intermediate velocity range among the results from the first 2dF field. The low velocity limit of the cluster velocity distribution is therefore defined without ambiguity.
It is of interest to determine whether there are any galaxies in the foreground to the Fornax Cluster having heliocentric radial velocities $`cz<600\text{km}\text{s}^{-1}`$ which might be overlooked given the very large number of Galactic stars in this velocity range. The APM Catalogue (used as the input database for the survey) provides a classification for each image from the blue and red sky survey plates. Of the 2467 objects having $`cz<600\text{km}\text{s}^{-1}`$ and cross-correlation $`R`$ parameter $`\ge 3.0`$, 14 are classified as being galaxies in both blue and red. All 14 were inspected visually on the Digitised Sky Survey and again on a SuperCOSMOS measuring machine (Miller et al. 1992) scan of film OR17818 taken on Tech Pan emulsion with the UKST. The Tech Pan data provided higher resolution and greater depth than the Digitised Sky Survey (e.g. Phillipps & Parker 1993). All foreground galaxy candidates appeared to be compact images merged with another, fainter image. Most were unambiguously Galactic stars merged with either another star or with a background galaxy. None of the 14 candidates had the extended appearance expected of a nearby dwarf galaxy.
To extend the search, the visual inspection was repeated on the five images with reliable velocities $`\le 600\text{km}\text{s}^{-1}`$ having the largest APM $`\sigma `$ parameter on the blue sky survey plates. The $`\sigma `$ parameter measures the degree to which an image differs from a point-spread function and is a convenient indicator of a non-stellar light profile. Of the five images having a blue $`\sigma _\mathrm{B}>38.0`$, none had the appearance expected of a nearby dwarf galaxy: all were found to be compact (star-like) images merged with either another star or a faint galaxy.
A third and final search for foreground galaxies was performed using large exponential scale lengths as an indicator of extended images. The surface photometry described in Section 2.4.2 derived a scale length from the low surface brightness regions of each image. Only five images with reliable velocities $`\le 600\text{km}\text{s}^{-1}`$ had scale lengths $`\alpha >1.5\text{arcsec}`$. None had the appearance expected of a nearby dwarf galaxy on the Digitised Sky Survey or the scan of the Tech Pan film: all objects were again found to be merged images. We conclude that there are no foreground galaxies among the objects with star-like velocities.
We therefore have no galaxies with heliocentric $`cz<900\text{km}\text{s}^{-1}`$ in Field 1 of Table 1 within our magnitude range. Among the brighter galaxies ($`B<16`$) in the whole cluster region, Jones & Jones (1980) previously found a small number with such low velocities (NGC 1375, NGC 1386, NGC 1396 (= G75), and NGC 1437A), though the exact number depends on the accuracy of their redshift determinations. A search of the NASA Extragalactic Database (NED) identifies the same four galaxies. Of these, NGC 1375, NGC 1386 and NGC 1396 lie in our Field 1.
### 5.3 The Velocity Structure of the Fornax Cluster
Figure 12 shows the velocity distribution of Fornax Cluster galaxies from the FSS. The mean heliocentric radial velocity from the FSS data is $`1560\pm 80\text{km}\text{s}^{-1}`$ (26 galaxies). This compares with $`1540\pm 50\text{km}\text{s}^{-1}`$ from Jones & Jones. Recall that the Jones & Jones galaxies are much brighter than ours (roughly $`-21\le M_\mathrm{B}\le -15`$ as against $`-14\le M_\mathrm{B}\le -11`$) and are spread over a much larger area, 6 degrees or about 1.6 Mpc across compared to our 2 degrees or 0.5 Mpc. A velocity dispersion can be estimated fairly unambiguously as there are no galaxies with velocities less than 900 or between 2300 and $`3000\text{km}\text{s}^{-1}`$. Our 26 galaxies give an observed radial velocity dispersion of $`380\pm 50\text{km}\text{s}^{-1}`$, compared to the $`391\text{km}\text{s}^{-1}`$ of Jones & Jones.
The FSS velocity distribution can also be compared with the equivalent distribution compiled from all published redshift data. Figure 12 presents the velocity data from NED. These give a mean heliocentric radial velocity of $`1450\pm 70\text{km}\text{s}^{-1}`$ (32 galaxies), and a velocity dispersion of $`370\pm 50\text{km}\text{s}^{-1}`$, entirely consistent with the FSS results. The NED results generally, though not entirely, apply to the brighter cluster galaxies.
Fornax is an apparently well relaxed, regular cluster as judged by its central density concentration and low spiral content. It would require a very much larger sample of redshifts over the other fields in order to explore properly any dynamical differences between different galaxy populations. These initial results do not reveal any difference in the dynamics between the bright and faint (giant and dwarf) members of the cluster, although a wide-field study of brighter galaxies (Drinkwater et al. 2000a) does suggest such a difference.
### 5.4 The Velocity Structure behind the Fornax Cluster
Figure 13 shows our redshift distribution behind the cluster. Immediately beyond the cluster, as noted by Jones & Jones (1980) and Phillipps & Davies (1992), there is a large void, extending some 40 Mpc (from the cluster mean redshift to about $`5000\text{km}\text{s}^{-1}`$ assuming $`H_0=75\text{km}\text{s}^{-1}\text{Mpc}^{-1}`$). Beyond this “Fornax Void”, we see the ubiquitous ‘spiky’ distribution (Broadhurst et al. 1990) showing more distant walls and filaments.
Figure 14 shows the distribution of background galaxies taken from the NED. The difference in depth between the two data sets is immediately apparent: the FSS results probe to much greater distances on account of the fainter magnitudes of the galaxies. Nevertheless, the first two main features in our distribution clearly match the two peaks seen in the NED data (i.e. in the brighter galaxies).
A standard cone diagram is shown in Figure 15, illustrating the skeleton of the large scale 3-D structure beyond Fornax. The median redshift of the entire galaxy sample is 0.15. This compares with a mean of 0.11 in the preliminary data from the 2dF Galaxy Redshift Survey (Colless 1999). The data continue to map structure out to $`z\approx 0.30`$, where there are still significant numbers of galaxies. The cluster J1556.15BL identified by Couch et al. (1991) lies in Field 1 at $`z=0.457`$, but the density of FSS galaxies at this redshift is too small to show the cluster.
In addition to the general galaxy population, Figure 15 also shows (as large solid points) the compact galaxies discussed in Paper II. These objects have star-like images on Schmidt survey plates but the 2dF spectroscopy showed them to be compact star-forming galaxies at redshifts $`0.04`$–$`0.21`$. The figure also shows low surface brightness galaxies having intrinsic (cosmologically corrected) central surface brightnesses fainter than $`22.5b_j\text{mag}\text{arcsec}^{-2}`$, plotted as open circles. Despite their low surface brightnesses, these objects are sufficiently distant that they are too luminous to be dwarfs given the apparent magnitude limits of the survey (they have $`M_\mathrm{B}\approx -17`$ to $`-19`$ for $`H_0=75\text{km}\text{s}^{-1}\text{Mpc}^{-1}`$).
Many authors (e.g. Phillipps & Shanks 1987; Eder et al. 1989; Thuan et al. 1991; Loveday et al. 1995; Mo, McGaugh & Bothun 1994) have discussed whether or not low luminosity and/or low surface brightness galaxies follow the same structures as the brighter component. Although, as stated earlier, this sample is not yet complete – so we cannot use strictly objective measures such as the galaxy correlation function (Phillipps & Shanks 1987) – we do have enough information in our distribution to see that the low surface brightness galaxies (shown as open circles in Figures 16 and 15) do trace the same large scale structure and are not seen “filling the voids” (Dekel & Silk 1986). The present data extend this comparison of the distribution of LSBGs with that of normal galaxies to significantly lower surface brightnesses than most other studies (or, indeed, than will be possible with the standard SLOAN or GRS samples). Similarly, the compact (high surface brightness) galaxies, which are also likely to be missing from other surveys, again follow the same overall large scale structure in Figure 15 as the general galaxy population. This is unlike the suggestions from some previous emission line galaxy surveys (e.g. Salzer 1989) that such objects can appear in very low density regions.
## 6 Summary
In this paper we have presented an overview of the Fornax Spectroscopic Survey, the first complete, all-object spectroscopic survey to cover a large area of sky. This project has only been made possible by the advent of the 400-fibre Two-degree Field spectrograph on the Anglo-Australian Telescope. In total we hope to observe some 14,000 objects to a magnitude limit of $`b_j`$=19.7 — both ‘stars’ and ‘galaxies’ — in a 12$`\mathrm{deg}^2`$ area of sky centred on the Fornax Cluster.
The main technical challenges of the project concern the preparation of the target catalogue and the analysis of the resulting spectra. Our input catalogues are based on UK Schmidt Sky Survey plates digitised by the APM facility. We have demonstrated that the APM image catalogues provide sufficiently accurate target positions and photometry for the unresolved sources. For the resolved sources our photometry is derived by fitting exponential profiles to the image parameters measured by the APM. We have tested our calibration with new CCD observations. We use a semi-automated procedure to classify our spectra and measure radial velocities based on cross-correlation comparison with a set of stellar spectra, two emission-line galaxy spectra and one QSO spectrum. This procedure successfully identifies stars, galaxies and QSOs completely independently of their image morphology.
When the Fornax Spectroscopic Survey is complete we will have a unique, complete, sample of Galactic stars, Fornax Cluster galaxies, field galaxies and distant AGN. We have discussed some of the scientific questions that can be addressed with such a sample. The principal objective is to obtain an unbiased sample of cluster members, which includes compact galaxies and low surface brightness dwarfs, independent of a membership classification based on morphological appearance.
Redshift/velocity distributions are presented here based on spectroscopic results from the first of four 2dF fields. The velocity distribution of Galactic stars can be understood in terms of a conventional three-component model of the Galaxy. The Fornax Cluster dwarf galaxies in the first 2dF field have a mean heliocentric radial velocity of $`1560\pm 80\text{km}\text{s}^{-1}`$ and a radial velocity dispersion of $`380\pm 50\text{km}\text{s}^{-1}`$. The Fornax Cluster is well-defined dynamically, with a low density of galaxies in the foreground and immediate background. Beyond $`5000\text{km}\text{s}^{-1}`$, the large-scale structure behind the Fornax Cluster is clearly delineated out to a redshift $`z\approx 0.30`$. The compact galaxies found behind the cluster by Drinkwater et al. (1999a) are found to follow the structures delineated by the general galaxy population, as are background low surface brightness galaxies. Some more detailed initial results have already been presented elsewhere (Drinkwater et al. 1999a, 1999b).
###### Acknowledgements.
This project would not be possible without the superb 2dF facility provided by the AAO and the generous allocations of observing time we have received from PATT and ATAC. MJD is grateful for travel support from the University of Bristol and the International Astronomical Union. SP acknowledges the support of the Royal Society via a University Research Fellowship. JBJ and JHD are supported by the UK PPARC. SP, JBJ and RMS acknowledge the hospitality of the School of Physics, University of New South Wales. Part of this work was done at the Institute of Geophysics and Planetary Physics, under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. |
## 1 Introduction
There is now considerable observational evidence suggesting a causal connection between the binary history of neutron stars and the evolution of their magnetic field. In particular, recent observations and their statistical analyses suggest that (see Bhattacharya 1995, 1996, Bhattacharya & Srinivasan 1995, Ruderman 1995 for detailed reviews):
1. Isolated pulsars with high magnetic fields ($`10^{11}`$$`10^{13}`$ G) do not undergo any significant field decay during their lifetimes;
2. Binary pulsars as well as millisecond and globular cluster pulsars, almost always having a binary history, possess much lower field strengths, down to $`10^8`$ G;
3. Most of these low-field pulsars are also quite old (age $`10^9`$ yrs), which implies that their field must be stable over such long periods;
4. Binary pulsars with massive companions, like the Hulse-Taylor pulsar, have field strengths in excess of $`10^{10}`$ Gauss, whereas the low mass binary pulsars include both high-field pulsars and very low-field objects like the millisecond pulsars.
5. The evolutionary link between millisecond pulsars and low-mass X-ray binaries seems to be borne out both by binary evolution models and by the comparative study of the kinematics of these two populations.
In this thesis we have therefore tried to understand the mechanism of field evolution in neutron stars that are members of binary systems. To this end we have looked at four related problems, as described below:
* the effect of diamagnetic screening on the final field of a neutron star accreting material from its binary companion;
* evolution of magnetic flux located in the crust of an accreting neutron star;
* application of the above-mentioned model to real systems and a comparison with observations;
* an investigation into the consequences of magnetic flux being initially located in the core of the star and its observational implications.
## 2 Effect of Diamagnetic Screening
We investigate the effect of diamagnetic screening on the final field of an accreting neutron star; in particular, we try to answer the following questions:
1. Are diffusive time-scales, in the layers where the field is expected to be buried, long enough to allow screening to be an effective mechanism for long-term field reduction?
2. Is it at all possible to bury the field or create horizontal components at the expense of the vertical one against Rayleigh-Taylor overturn instability?
We find the following:
1. The density at which the flow occurs increases with the field strength: the larger the field, the deeper and denser the layer in which it gets buried. The flow density also has a mild positive dependence on the rate of mass accretion. But even for very large values of the surface field strength the flow does not occur at densities beyond $`10^9\mathrm{g}\mathrm{cm}^{-3}`$; the flow therefore always takes place within the liquid layer, and the earlier contention that the field is buried within the solid layer is not borne out.
2. As the screening time-scales are always much shorter than the diffusive time-scales, a condition of flux-freezing prevails and the material flow should indeed drag the field lines along.
3. But since the instability time-scale (the sum of the overturn and re-connection time-scales) is much shorter than the other two time-scales of the problem (fig. 1), any stretching of the field lines is quickly restored (over the instability time-scale) and the material effectively flows past the field lines without causing any change.
Therefore, it is not possible to screen the magnetic field of a neutron star by the accreting material in order to reduce the magnitude of the external dipole observed.
## 3 Evolution of Crustal Magnetic Field in an Accreting Neutron Star
A possible mechanism of field decay is that of rapid ohmic diffusion in the accretion heated crust. The effect of accretion on purely crustal fields, for which the current loops are completely confined within the solid crust, is two-fold. On one hand the heating reduces the electrical conductivity and consequently the ohmic decay time-scale, inducing a faster decay of the field. At the same time the material movement, caused by the deposition of matter on top of the crust, pushes the original current carrying layers into deeper and denser regions where the higher conductivity slows the decay down. The mass of the crust of a neutron star changes very little with a change in the total mass; accretion therefore implies assimilation of the original crust into the superconducting core. When the original current carrying regions undergo such assimilation, further decay is stopped altogether.
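The competition between these two effects can be made concrete with a toy calculation. The sketch below is purely illustrative: the power-law scaling of the ohmic time with density, the linear burial rate and all normalisations are hypothetical placeholders, not the microphysics of the actual calculation.

```python
import numpy as np

def surface_field(B0=1e12, tau0=1e6, T_fac=1.0, push=1e-7, n=2.0,
                  t_end=1e9, steps=200000):
    """Toy model: the current sheet sits at density rho(t) = rho0*(1 + push*t),
    with a local ohmic time tau = (tau0 / T_fac) * (rho/rho0)**n in years.
    A hotter crust (larger T_fac) shortens tau and deepens the decay; slower
    burial (smaller push) allows more total decay, hence a lower final field."""
    dt = t_end / steps
    B, x = B0, 1.0                     # x = rho / rho0
    history = np.empty(steps)
    for i in range(steps):
        tau = (tau0 / T_fac) * x**n    # ohmic decay time at the current depth
        B *= np.exp(-dt / tau)         # local ohmic decay of the field
        x += push * dt                 # burial by the accreted overburden
        history[i] = B
    return history                     # rapid initial decay, then freezing
```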
The combination of enhanced ohmic diffusion due to crustal heating and the transport of current-carrying layers to higher densities due to the accreted overburden, causes the surface field strength to exhibit the following behaviour (fig. 2):
1. An initial rapid decay (a power-law phase followed by an exponential phase), then a leveling off (freezing),
2. Faster onset of freezing at higher crustal temperatures and at a lower final value of the surface field,
3. Lower final fields for lower rates of accretion for the same net amount of accretion,
4. The longer the duration of the pre-accretion phase the less the amount of field decay during the accretion phase, and
5. The deeper the initial current loops, the higher the final surface field.
For details see Konar & Bhattacharya (1997).
## 4 Comparison with Observation
We have investigated the evolution of the magnetic field of neutron stars in its entirety – in case of the isolated pulsars as well as in different kinds of binary systems, assuming the field to be originally confined in the crust. We model the full evolution of a neutron star in a binary system through several stages of interaction. Initially there is no interaction between the components of the binary and the evolution of the neutron star is similar to that of an isolated one. It then interacts with the stellar wind of the companion and finally a phase of heavy mass transfer ensues through Roche-lobe overflow.
We find that the model can explain almost all the features that have been observed to date. Our conclusions can be summarized as follows:
* for this model to be consistent with the statistical analyses performed on the isolated pulsars, an impurity strength of at most 0.05 can be allowed;
* HMXBs produce high-field long period pulsars provided the duration of the wind accretion phase is short or the initial current distribution is located at higher densities;
* Relatively low-field ($`B\lesssim 10^{10}`$ Gauss) objects near the death-line (low-luminosity pulsars) are also predicted from HMXBs;
* LMXBs will produce both high-field long period pulsars as well as low-field short period pulsars, including millisecond pulsars in the latter variety (fig. 3); and
* a positive correlation between the rate of accretion and the final field strength is indicated, which is supported by observational evidence.
For details see Konar & Bhattacharya (1999a).
## 5 Spin-down induced Flux-expulsion and its Consequences
Finally, we look at the outcome of spin-down-induced expulsion of magnetic flux originally confined to the core, in which case the expelled flux undergoes ohmic decay. The general nature of the field evolution again fits the overall observational scenario, and is quite similar to that of the purely crustal model, though the details differ. Most significantly, this model requires large values of the impurity strength $`Q`$, in direct contrast to the crustal model. To summarize:
* The field in isolated neutron stars does not undergo any significant decay, over the active lifetime of the pulsar, conforming with the statistical analyses.
* The field values in the high mass X-ray binaries can remain fairly large for a moderate range of impurity strength.
* A reduction of three to four orders of magnitude in the field strength can be achieved in the low mass X-ray binaries provided the impurity strength is as large as 0.5 (fig. 4).
* If the wind accretion phase is absent, then an impurity strength in excess of unity is required to achieve millisecond pulsar field values.
For details see Konar & Bhattacharya (1999b).
## 6 Conclusions
In this thesis we have mainly investigated two models of field evolution: that of an initial crustal current supporting the field, and that of spin-down induced flux expulsion; we have also looked at the effect of diamagnetic screening in an accreting neutron star. Our conclusions regarding the nature of field evolution in the crust are as follows.
* Pure Ohmic Decay in Isolated Neutron Stars :
1. A slow cooling of the star gives rise to a fast decay and consequent low final field. The opposite happens in case of an accelerated cooling.
2. An initial crustal current distribution concentrated at lower densities again gives rise to a faster decay and a low final surface field, whereas if the current is located at higher densities the decay is slow, resulting in a higher final surface field.
3. A large value of impurity strength implies a rapid decay and low final field. If the crust behaves more like a pure crystal the decay slows down considerably.
* Accretion-Induced Field Decay in Accreting Neutron Stars :
1. In an accreting neutron star the field undergoes an initial rapid decay, followed by a slow-down and an eventual freezing.
2. A positive correlation between the rate of accretion and the final field strength is observed, giving rise to higher final saturation field strengths for higher rates of accretion.
3. An expected screening of the surface field by the diamagnetic accreting material is rendered ineffective by the interchange instabilities in the liquid surface layers of the star.
4. To produce millisecond pulsars in LMXBs within the spin-down induced flux-expulsion model, very large values of the impurity strength are required. This makes the surface field of isolated pulsars decay to very low values within $`10^9`$ years, in contrast to the purely crustal model.
## Acknowledgement
I would like to thank Dipankar Bhattacharya for his guidance and support. |
# Optical Realization of Quantum Gambling Machine
## Abstract
Quantum gambling, a secure remote two-party protocol which has no classical counterpart, is demonstrated through an optical approach. A photon is prepared by Alice in a superposition state of two potential paths. One path leads to Bob and is split into two parts. The security is confirmed by quantum interference between Alice’s path and one part of Bob’s path. It is shown that a practical quantum gambling machine is feasible in this way.
As a kind of game, gambling plays an important role in society and nature, which are full of conflict, competition and cooperation. Up to now, game theory has been investigated with mathematical methods and applied to economics, psychology, ecology, biology and many other fields.
One might wonder why games like gambling can have anything to do with quantum physics. After all, game theory is about numbers that entities act efficiently to maximize or minimize. However, if linear superpositions of the actions are permitted, games are generalized into the quantum domain. Quantizing games may be interesting in several fields, such as the foundations of game theory, games of survival and quantum communication. Moreover, quantum mechanics may assure fairness in remote gambling.
In this letter, we present a quantum gambling machine composed of optical elements.
We may first investigate the simplest classical gambling machine: one particle and two boxes $`A`$ and $`B`$. During a game, the casino (Alice) stores the particle in $`A`$ or $`B`$ randomly, then the player (Bob) guesses which box the particle is in. Since the two parties do not trust each other, or even a third party, a remote classical gambling is impossible. In the quantum domain, however, Alice may prepare the particle in a superposition state of $`|a`$ (the particle in $`A`$) and $`|b`$ (the particle in $`B`$). If she generates the equal superposition state
$$|\mathrm{\Psi }_0=\frac{1}{\sqrt{2}}\left(|a+|b\right)$$
(1)
and a prescribed box (e.g. $`B`$) is sent to Bob, a remote fair gambling may be carried out. For simplicity, the bet in a single game is taken to be one coin. If Bob finds the particle in box $`B`$ (state $`|b`$), he wins one coin, otherwise he loses the bet. Obviously, the probability for Bob to win is exactly $`50\%`$. Moreover, Bob cannot cheat by claiming that he found the particle when he did not, for Alice can verify by opening box $`A`$.
In order to decrease the probability of finding the particle in box $`B`$, Alice may prepare a biased superposition state (she gets no advantage from using an ancilla or other complex strategy)
$$|\mathrm{\Psi }_0^{}=\sqrt{\frac{1}{2}+ϵ}|a+\sqrt{\frac{1}{2}-ϵ}|b$$
(2)
instead of $`|\mathrm{\Psi }_0`$, where $`ϵ`$ is the preparation parameter, with $`0\le ϵ\le \frac{1}{2}`$. However, the quantum principle assures that Bob has a chance to find out the difference and win her $`R`$ coins, which is the punishment the two parties agree on before the game.
Bob’s strategy is to split out part of the state $`|b`$ and convert it to state $`|b^{}`$ by performing a unitary operation, i.e.,
$$|b\to \sqrt{1-\eta }|b+\sqrt{\eta }|b^{},$$
(3)
where $`|b^{}`$ is orthogonal to $`|a`$ and $`|b`$, and $`\eta `$ is the splitting parameter. After the splitting, if Bob does not find the particle in box $`B`$, Alice will send box $`A`$ to him for verification. In this case the state of the particle is reduced to $`|\varphi _a=\sqrt{\frac{1}{1+\eta }}\left(|a+\sqrt{\eta }|b^{}\right)`$ if Alice prepares the particle in the equal superposition state $`|\mathrm{\Psi }_0`$. Bob’s verification is therefore to measure the particle in the basis formed by $`|\varphi _a`$ and the orthogonal state $`|\varphi _b`$. If Alice prepares the biased superposition state $`|\mathrm{\Psi }_0^{}`$, he may find the particle in state $`|\varphi _b`$ with a certain probability and win $`R`$ coins.
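A short calculation from Eqs. (2) and (3), with Bob’s measurement basis kept fixed to the one appropriate for $`ϵ=0`$, gives the three detection probabilities:

$$P_1=\left(\frac{1}{2}-ϵ\right)(1-\eta ),\qquad P_3=\frac{\eta }{1+\eta }\left(1-2\sqrt{\frac{1}{4}-ϵ^2}\right),\qquad P_2=1-P_1-P_3,$$

where $`P_1`$ is the probability that the particle is found in box $`B`$, $`P_3`$ the probability of a successful verification in $`|\varphi _b`$, and $`P_2`$ the probability that Bob loses his bet. These sum to unity, and $`P_3`$ vanishes only for the unbiased preparation $`ϵ=0`$, so an honest Alice is never punished.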
There exists an equilibrium for the two parties in this protocol. Alice can ensure that her expected gain is no less than zero by preparing the equally distributed state $`|\mathrm{\Psi }_0`$. Bob can ensure that his expected gain is no less than a particular value, depending only on $`R`$, by selecting an optimal splitting parameter $`\eta =\stackrel{~}{\eta }\left(R\right)`$. In fact, this protocol is a zero-sum game, and the strategies of Alice and Bob are represented by different choices of $`ϵ`$ and $`\eta `$, respectively.
In the experiment, a linear-polarized photon is employed as the particle. Similar to the simulation of quantum logic, two potential paths of the photon serve as boxes $`A`$ and $`B`$. $`|b^{}`$ is distinguished from $`|a`$ and $`|b`$ by the polarization of the photon.
Figure 1
The setup of the optical quantum gambling machine is shown in Figure 1. A virtue of this machine is that all detections are carried out automatically, which helps to eliminate classical communication between the parties and to prevent cheating.
Initially, the photon is generated in a definite linear polarization state (such as vertical $`|V`$ or horizontal $`|H`$) by a polarizer. The state is then transformed into a superposition of $`|V`$ and $`|H`$ with half waveplate (HWP) $`a`$, according to the preparation parameter $`ϵ`$ chosen by Alice. The preparation is accomplished by swapping the location and polarization states of the photon with polarizing beamsplitter (PBS) $`1`$ and the fixed HWP $`\sigma _x`$. After the state swapping, the polarization is horizontal while the location is prepared in the required state $`|\mathrm{\Psi }_0^{}`$.
Bob’s splitting is realized by adjusting the HWP $`b_1`$ according to the parameter $`\eta `$ he selects. Then $`|b^{}`$ (split out by Bob) is separated from $`|b`$ via PBS $`2`$ and superposed with $`|a`$ via PBS $`3`$. The verification is implemented with HWP $`b_2`$ and PBS $`4`$: HWP $`b_2`$ is adjusted according to $`\eta `$ so as to ensure that $`|\varphi _a`$ and $`|\varphi _b`$ are transmitted and reflected by PBS $`4`$, respectively. In order to obtain the result of the gambling, three detectors $`D_1`$, $`D_2`$ and $`D_3`$ are adopted to detect the photon in the states $`|b`$, $`|\varphi _a`$ and $`|\varphi _b`$, respectively.
A single game of gambling with this machine proceeds as follows. After Bob puts in his bet (one coin), the machine informs Alice and Bob to select the parameters $`ϵ`$ (adjusting HWP $`a`$) and $`\eta `$ (adjusting HWPs $`b_1`$ and $`b_2`$ simultaneously). Then a photon is generated from the polarizer and distributed over the three paths. If detector $`D_1`$ or $`D_3`$ responds, Bob wins one or $`R`$ coins, respectively; if $`D_2`$ responds, Bob loses the bet (which is then retained for Alice automatically).
To demonstrate the performance of the optical gambling machine, a beam composed of independent identical photons is used instead of a single photon during the experiment; namely, a well-polarized He-Ne laser (3mW at 632.8nm) is utilized as the light source. The results are shown in Figure 2, where $`P_1`$ and $`P_3`$ denote the probabilities that Bob wins one and $`R`$ coins respectively, and $`P_2`$ denotes the probability that Bob loses the bet. The probabilities are determined by the relative light intensities measured by the three detectors.
Figure 2
In order to illustrate Bob’s strategies, we suppose that Alice and Bob agree on $`R=5`$ at the beginning of the gambling. The expected gains of Bob are shown in Figure 3. Obviously, there exists an optimal splitting parameter $`\stackrel{~}{\eta }\left(5\right)\approx 0.27`$ to ensure that his expected gains are no less than a particular value regardless of Alice’s choice.
Figure 3
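This equilibrium is easy to verify numerically from the detection probabilities given above. The sketch below is illustrative only (the grid resolutions are arbitrary, and the payoff bookkeeping of $`+1`$, $`+R`$ and $`-1`$ coins follows the rules stated above); it recovers $`\stackrel{~}{\eta }\left(5\right)\approx 0.27`$, with a guaranteed expected gain for Bob of about $`-0.46`$ coins per game in this simplified model.

```python
import numpy as np

def bob_gain(eps, eta, R=5.0):
    # Payoff: +1 if D1 fires, +R if D3 fires, -1 if D2 fires.
    p1 = (0.5 - eps) * (1.0 - eta)
    p3 = eta / (1.0 + eta) * (1.0 - 2.0 * np.sqrt(0.25 - eps**2))
    p2 = 1.0 - p1 - p3
    return p1 + R * p3 - p2

eps = np.linspace(0.0, 0.5, 2001)    # Alice's preparation parameter
etas = np.linspace(0.0, 1.0, 2001)   # Bob's splitting parameter
# Bob's guaranteed gain for a given eta is his minimum over Alice's eps;
# his optimal splitting maximises this guarantee (a max-min strategy).
guaranteed = np.array([bob_gain(eps, eta).min() for eta in etas])
print(etas[guaranteed.argmax()], guaranteed.max())   # ~0.27, ~-0.46
```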
The optical approach has many advantages. By making use of two different degrees of freedom of the photon (location and polarization), an optical quantum gambling machine may be realized conveniently with several HWPs, PBSes and detectors. In particular, the decoherence of an all-optical system is relatively low, while the protocol is very sensitive to the errors caused by the device and environment. As discussed by Goldenberg et al., for a successful realization of quantum gambling the error rate has to be lower than $`\sqrt{2/R^3}`$. Since the error rate in the experiment is only about $`\frac{1}{40}`$, a practical quantum gambling may be carried out with this optical machine under the condition $`R<14.4`$.
Our experiment has shown that quantum gambling and quantum games have real physical counterparts, and that a practical quantum gambling machine can be realized with simple optical devices. It can be expected that quantum mechanics may bring other interesting results in game theory.
This work was supported by the National Natural Science Foundation of China and the Doctoral Education Fund of the State Education Commission of China. |
# Spontaneous Breakdown of Translational Symmetry in Quantum Hall Systems: Crystalline Order in High Landau Levels
## Abstract
We report on results of systematic numerical studies of two-dimensional electron gas systems subject to a perpendicular magnetic field, with a high Landau level partially filled by electrons. Our results are strongly suggestive of a breakdown of translational symmetry and the presence of crystalline order in the ground state. This is in sharp contrast with the physics of the lowest and first excited Landau levels, and in good qualitative agreement with earlier Hartree-Fock studies. Experimental implications of our results are discussed.
Recently there has been considerable interest in the behavior of a two-dimensional (2D) electron gas subject to a perpendicular magnetic field, when a high Landau level (LL) (with LL index $`N\ge 2`$) is partially filled by electrons. This is largely inspired by the recent experimental discovery that the transport properties of the system are highly anisotropic and non-linear for LL filling fraction $`\nu =9/2,11/2,13/2,\mathrm{}`$. Previously, Hartree-Fock (HF) and variational studies suggested that, unlike the $`N=0`$ and $`N=1`$ LL’s (in which either incompressible fractional quantum Hall (FQH) or compressible Fermi-liquid like states are realized), in $`N\ge 2`$ LL’s the electrons form charge density waves (CDW). In particular, at half-integral filling CDW’s break translational symmetry only in one direction and form stripes. Anisotropic transport would indeed result from such a striped (or related) structure.
We neglect LL mixing, and consider the case where the LL with index $`N`$ has partial filling $`\stackrel{~}{\nu }`$, while LL’s with lower index are completely filled ($`\nu `$ = $`2N+\stackrel{~}{\nu }`$). By particle-hole symmetry of the partially-filled LL, this is equivalent to $`\nu `$ = $`2N+2-\stackrel{~}{\nu }`$. We also assume that the partially-filled LL is maximally spin-polarized at the $`\stackrel{~}{\nu }`$ we consider. Previously, we studied such $`N\ge 2`$ LL’s with $`\stackrel{~}{\nu }`$ = 1/2 by numerically diagonalizing the Hamiltonians of finite-size systems; those results strongly supported the existence of stripe order.
An outstanding issue is the nature of the ground state at high LL’s for fillings sufficiently far from the half-filled level. Koulakov, Fogler and Shklovskii (see also Moessner and Chalker) predicted a novel crystalline phase called the “bubble” phase, with more than one electron per unit cell, outside of the range $`\stackrel{~}{\nu }=0.4`$–$`0.6`$. The bubble crystal has lower energy than the Laughlin state for $`\nu =4+1/3`$. Experimentally, a re-entrant quantum Hall state is found near $`\nu =4+1/4`$ which is quantized as a $`\nu =4`$ LL plateau. Evidently the electrons in the top-most LL are frozen out of the transport. Pinning of a crystalline structure provides a natural explanation of the re-entrant phase and would further explain the observed threshold in conduction. However this is not entirely conclusive and other mechanisms for the conduction threshold are also possible.
In this paper, we report on new numerical results on systems away from half-filling using the unscreened Coulomb interactions. Remarkably, our results suggest that CDW’s are formed at all filling factors we have studied, including those that would support prominent FQH states or composite fermion Fermi-liquid states in the lowest or first excited Landau levels. These CDW’s, however, have 2D structures and are no longer stripes when the filling factors are sufficiently far away from $`1/2`$. They are not Wigner crystals either, unless $`\stackrel{~}{\nu }`$ is small (below 0.2). In the intermediate filling factor range, we find each unit cell of the CDW contains more than one electron. Our results are in good agreement with the predicted bubble phase and are the first exact finite-size calculations which exhibit a crystalline state in a system with continuous translational symmetry.
We restrict the states of the electrons to a given LL, and work with periodic boundary conditions (PBC, torus geometry) as in our previous paper. We also set the magnetic length to unity. To detect intrinsically preferred configurations we consider a rectangular PBC unit cell and vary its aspect ratio. The PBC plays a crucial role in removing continuous rotational symmetry, and selecting a discrete set of possible crystal orientations.
In Fig.1 we plot the energy levels of systems with $`N_e=8`$ electrons in the $`N=2`$ LL at filling factor $`\stackrel{~}{\nu }=1/4`$ as a function of the aspect ratio. We also show the levels of a system with a hexagonal PBC unit cell at the right side of Fig.1. A generic feature of the spectra is the existence of a large number of low-lying states whose energies are almost degenerate, which we call the ground state manifold. The momenta of these quasi-degenerate states for rectangular geometry with aspect ratio $`asp=0.77`$ and hexagonal geometry are shown in Fig.2; they form a 2D superlattice structure, which for the rectangular geometry has the super cell vectors $`𝐛_1=2a\widehat{e}_x-b\widehat{e}_y`$ and $`𝐛_2=2b\widehat{e}_y`$, where $`a=2\pi /L_1`$ and $`b=2\pi /L_2`$; $`L_1`$ and $`L_2`$ are the dimensions of the unit cell ($`L_1\times L_2=2\pi N_\mathrm{\Phi }`$, $`N_\mathrm{\Phi }`$ is the total flux quanta in the system). The area per wavevector in the Brillouin zone (BZ) is $`ab=(2\pi )^2/A`$, where $`A`$ is the (real space) area of the system.
There are similarities as well as important differences between these spectra and those of half-filled high LL’s with stripe order. As in the stripe case, the large quasi-degeneracy of the ground state manifold is an indication of broken translational symmetry. The difference here is that (i) the degeneracy is much larger and (ii) the momenta of the low-lying states form 2D instead of 1D arrays. These new features indicate that the translational symmetry is broken in both directions and the ground state is a 2D CDW. In the stripe state, on the other hand, the translational symmetry is only broken in the direction perpendicular to the stripes. Therefore the degeneracy is smaller and the momenta of the low-lying states form a 1D array.
The momenta of the states in the ground state manifold are the reciprocal lattice vectors of the bubble crystal. Transforming to the direct lattice vectors, we obtain $`𝐚_1=(\pi /a)\widehat{e}_x`$ and $`𝐚_2=(\pi /2a)\widehat{e}_x+(\pi /b)\widehat{e}_y`$. For the optimum system, with $`asp`$=0.77, we obtain $`a_1=8.08`$, $`a_2=7.42`$, and an angle between them of $`\varphi =57^\mathrm{o}`$. This is very close to a triangular lattice. In the case of the hexagonal PBC unit cell, both the reciprocal superlattice and its direct lattice are triangular.
The number $`N_D`$ of distinct quasi-degenerate ground states allows the number $`N_b`$ of bubbles in the system, and hence the number $`M`$ = $`N_e/N_b`$ of electrons per bubble, to be immediately obtained through the relation $`N_bN_D`$ = $`\overline{N}^2`$, where $`\overline{N}`$ is the highest common divisor of $`N_e`$ and $`N_\mathrm{\Phi }`$. In our case, $`\overline{N}`$ = $`N_e`$ = 8, and $`N_D=16`$, which gives $`N_b=4`$ and $`M=2`$. The Wigner crystal would correspond to $`N_b=N_e`$ and $`M=1`$. In general, there are $`\overline{N}^2`$ distinct values of the total momentum quantum number, which define a BZ of area $`(2\pi \overline{N})^2/A`$. If translational symmetry is broken, the area of the BZ of the superlattice is then $`(2\pi \overline{N})^2/AN_D`$, which must be $`(2\pi )^2/(A/N_b)`$, where $`A/N_b`$ is the area per bubble; hence $`N_bN_D`$ = $`\overline{N}^2`$.
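This bookkeeping, together with the lattice geometry quoted above, is simple to automate. The following sketch (assuming the aspect ratio is defined as $`L_2/L_1`$, which reproduces the quoted numbers) recovers the direct-lattice parameters and the bubble count for our $`N_e=8`$, $`\stackrel{~}{\nu }=1/4`$ system:

```python
import math

Ne, Nphi, ND, asp = 8, 32, 16, 0.77    # nu~ = Ne / Nphi = 1/4
Nbar = math.gcd(Ne, Nphi)              # highest common divisor
Nb = Nbar**2 // ND                     # bubbles, from Nb * ND = Nbar**2
M = Ne // Nb                           # electrons per bubble

A = 2.0 * math.pi * Nphi               # L1 * L2 = 2*pi*Nphi (magnetic length = 1)
L1 = math.sqrt(A / asp)
L2 = asp * L1
a, b = 2.0 * math.pi / L1, 2.0 * math.pi / L2
# Direct lattice dual to b1 = 2a ex - b ey, b2 = 2b ey:
a1 = (math.pi / a, 0.0)
a2 = (math.pi / (2.0 * a), math.pi / b)
phi = math.degrees(math.atan2(a2[1], a2[0]))
print(Nb, M)                           # -> 4 bubbles of 2 electrons each
print(a1[0], math.hypot(*a2), phi)     # -> ~8.08, ~7.42, ~57 degrees
```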
We next turn to the density response functions. In Fig.3 we show the projected ground state charge susceptibility $`\chi (𝐪)`$ of one of the optimum rectangular and hexagonal systems described above. The calculation takes into account the contributions from the two lowest energy states in each symmetry subspace; this is an excellent approximation in view of the fact that the response function is dominated by low energy states because of the energy denominator. We note that $`\chi (𝐪)`$ exhibits a strong response at the reciprocal lattice vectors (Bragg condition); the background at other wave vectors (shown by $`\times `$’s in Fig.3) is negligible compared to these responses. The origin of the strong response lies in the approximate degeneracy among the states forming the ground state manifold. The system responds very strongly to a potential modulation with a wave vector that connects the ground state to one of the low lying states (which must be a reciprocal lattice vector) because of the small energy denominator. This is also another reason why there must be one low-lying state for each reciprocal lattice vector. A second notable feature is the almost hexagonal symmetry of the response, despite the fact that the PBC geometry used in this case was rectangular. This indicates that the bubbles tend to form a triangular lattice, in agreement with the predictions of HF theory.
The tendency toward forming a triangular lattice is also seen in the “guiding center (GC) static structure factor” $`S_0(𝐪)`$, which we present as a 3D plot in Fig.4. Here we see sharp peaks with an approximate six-fold symmetry at the primary reciprocal lattice vectors, indicating the presence of strong density correlation at these wave vectors in the ground state.
In Fig.5 and Fig.6 we plot ground state “projected density” correlation functions in real space. These describe correlations relative to the GC (not the coordinate) of a particle. The first is the Fourier transform (FT) of $`S_0(𝐪)\mathrm{exp}(-q^2/2)`$, which is the electron density of an equivalent lowest-LL system, and gives information on the spatial distribution of GC’s. The second is the FT of $`S_0(𝐪)[L_N(q^2/2)]^2\mathrm{exp}(-q^2/2)`$ with $`N=2`$ ($`L_N`$ is a Laguerre polynomial): this (plus the uniform density of the filled LL’s) represents the actual electron density.
In Fig.5 the presence of four bubbles and the relative orientation of the bubbles can be clearly seen and there is strong crystalline order of the GC distribution. The central peak contains two electrons, one of which is the particle with the GC at the origin. For $`N>0`$, as in Fig.6, only weak order is displayed by the actual electron density, because of the averaging effect of the cyclotron motion around the GC’s. It is the guiding centers of the electrons that form bubbles as anticipated in reference 7 (Fig.1). The electrons themselves manage to stay apart to lower the Coulomb repulsion, in spite of the clustering of their GC’s.
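The suppression of crystalline order in the actual electron density can be made quantitative. The sketch below (our illustration, not from the original) evaluates the two weighting factors of the previous paragraphs at a wavevector of order the primary reciprocal lattice vector, $`q0.93`$ in magnetic-length units from the geometry worked out earlier.

```python
import numpy as np

def L2poly(x):                    # Laguerre polynomial L_2(x) = 1 - 2x + x^2/2
    return 1 - 2 * x + x**2 / 2

q = 0.93                          # ~ primary reciprocal lattice vector magnitude
x = q**2 / 2
print(np.exp(-x))                 # guiding-center weight,    ~0.65
print(L2poly(x)**2 * np.exp(-x))  # electron-density weight,  ~0.04
```

The order-of-magnitude reduction of the second weight relative to the first is the cyclotron-averaging effect referred to above.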
We have also explored other filling factors in the $`N=2`$ LL such as $`\stackrel{~}{\nu }=2/5`$ and $`\stackrel{~}{\nu }=1/3`$, where the system would condense into prominent FQH states if it were in the lowest LL; here, however, our studies suggest formation of CDW’s instead. For $`\stackrel{~}{\nu }=1/3`$ we obtain behavior similar to $`\stackrel{~}{\nu }=1/4`$: the energy spectrum as a function of the aspect ratio, shown in Fig.7, is very similar to Fig.1 and indicates formation of a 2D CDW. Using the degeneracy of the ground state manifold, we find the number of electrons per bubble is also two. The energies of the states in the ground state manifold, however, are not as nearly degenerate as in the $`\stackrel{~}{\nu }=1/4`$ case. This results in weaker peaks in $`\chi (𝐪)`$ at the reciprocal lattice vectors.
We interpret this to be an indication that, in this LL, $`\stackrel{~}{\nu }=1/4`$ is more favorable than $`\stackrel{~}{\nu }=1/3`$ for formation of a two-electron bubble phase. In real systems a crystal is always pinned by a disorder potential, and in a nonlinear transport measurement there should be a threshold depinning field at which there is a sharp feature in the $`IV`$ curve. A weaker crystal would result in a more diffuse conduction threshold as various portions of the crystal get depinned at different current values, while a stronger one, on the other hand, will have a sharp conduction threshold. This is consistent with the observation of Cooper et al that there is a sharp threshold region at about $`\stackrel{~}{\nu }`$ = 1/4, but more diffuse thresholds at both higher and lower $`\stackrel{~}{\nu }`$.
In contrast to $`\stackrel{~}{\nu }=1/4`$ and $`\stackrel{~}{\nu }=1/3`$, the spectrum for $`\stackrel{~}{\nu }=2/5`$ was found to be very similar to that at $`\stackrel{~}{\nu }=1/2`$. The momenta of the low-lying states belong to a 1D array, indicating formation of a 1D CDW or stripe phase; the weight of the HF state 11110000001111000000 ($`N_e=8`$), in a rectangular geometry with an aspect ratio of 0.80, is about 65%. We conclude that the transition from stripe to bubble phases occurs between $`\stackrel{~}{\nu }=1/3`$ and $`\stackrel{~}{\nu }=2/5`$, in qualitative agreement with HF predictions. We have also studied higher LL’s. The results are similar and will be reported elsewhere.
We have benefited from stimulating discussions with J. P. Eisenstein, K. B. Cooper and M. P. Lilly. We thank B. I. Shklovskii and M. M. Fogler for helpful comments. This work was supported by NSF DMR-9420560 and DMR-0086191 (E.H.R.), DMR-9809483 (F.D.M.H.), DMR-9971541, and the Sloan Foundation (K.Y.). E.H.R. acknowledges the hospitality of ITP Santa Barbara, supported by NSF-PHY94-07194, where part of the work was performed. |
# The Baxter Revolution
## 1 Introduction
At the beginning of the $`20^{th}`$ century statistical mechanics was conceived of as a microscopic way to understand the laws of thermodynamics and the kinetic theory of gases. In practice its scope was limited to the classical ideal gas, the perfect quantum gases and finally to a diagrammatic technique devised in the 1930’s for computing the low density properties of gases. At that time there was even debate as to whether the theory was in principle powerful enough to include phase transitions and dense liquids.
All of this changed in 1944 when Onsager demonstrated that exact solutions of strongly interacting problems were possible by computing the free energy of the Ising model. But, while of the greatest importance in principle, this discovery did not radically alter the field of statistical mechanics in practice. However, starting at the beginning of the 1970’s Rodney Baxter took up the cause of exactly solvable models in statistical mechanics and from that time on the field has been so totally transformed that it may truly be said that a revolution has occurred. In this paper I will examine how this revolution came about.
## 2 The Eight Vertex model
Onsager’s work of 1944 was monumental but cannot be said to be revolutionary because its consequences were so extremely limited. Kaufman and Onsager reduced the computations to a free fermi problem in 1949 and after Yang computed the spontaneous magnetization in 1952 there were no further developments. Indeed the reduction of the solution of the Ising model to a free fermi problem had the effect of suggesting that Onsager’s techniques were so specialized that there might in fact not be any other statistical mechanical models which could be exactly solved.
It was therefore very important when in 1967 Lieb introduced and solved (cases of) the six vertex model. This showed that other exactly solvable statistical mechanical problems did indeed exist. Lieb found that this statistical model had the very curious property that the eigenvectors of its transfer matrix were exactly the same as the eigenvectors of the quantum spin 1/2 anisotropic Heisenberg chain
$$H=\frac{1}{2}\sum_{j=1}^{L}(\sigma _j^x\sigma _{j+1}^x+\sigma _j^y\sigma _{j+1}^y+\mathrm{\Delta }\sigma _j^z\sigma _{j+1}^z)$$
(1)
which had been previously solved by methods that went back to the work of Bethe in 1931. This result is particularly striking because the six vertex model depends on one more parameter than does the $`XXZ`$ spin chain. That extra parameter (which I will refer to as $`v`$) appears in the eigenvalues of the transfer matrix but not in the eigenvectors. The reasons for this curious relation between the quantum spin chain in one dimension and the problem in classical statistical mechanics in two dimensions were totally obscure.
At that time the author was a post doctoral fellow and he and his thesis advisor in a completely obscure paper explained the relation between the quantum and classical system by demonstrating that the transfer matrix for the six vertex model $`T(v)`$ commutes for all $`v`$ with the Hamiltonian (1) of the XXZ model.
$$[T(v),H]=0.$$
(2)
This commutation relation guarantees that the eigenvectors of $`T(v)`$ are independent of $`v`$ and that they are equal to the eigenvectors of $`H`$ without having to explicitly compute the eigenvectors themselves.
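For readers who want to experiment, the commuting structure is easy to probe numerically for small chains. The sketch below builds the Hamiltonian (1) for a periodic four-site chain and checks two of the simplest operators that commute with it, total $`\sigma ^z`$ and the one-site translation; constructing $`T(v)`$ itself takes more work and is not attempted here.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, j, L):
    """Embed a single-site operator at site j of an L-site chain."""
    mats = [np.eye(2, dtype=complex)] * L
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

L, Delta = 4, 0.7
H = sum(0.5 * w * (site_op(s, j, L) @ site_op(s, (j + 1) % L, L))
        for j in range(L) for s, w in ((sx, 1.0), (sy, 1.0), (sz, Delta)))

Sz = sum(site_op(sz, j, L) for j in range(L))
print(np.linalg.norm(H @ Sz - Sz @ H))    # ~1e-15: [H, total S^z] = 0

# One-site translation, as a cyclic rotation of the L-bit basis labels.
dim = 2**L
P = np.zeros((dim, dim))
for i in range(dim):
    P[((i << 1) | (i >> (L - 1))) & (dim - 1), i] = 1.0
print(np.linalg.norm(H @ P - P @ H))      # ~1e-15: [H, translation] = 0
```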
The next year Sutherland found an identical commutation relation between the quantum Hamiltonian of the XYZ model
$$H_{\mathrm{XYZ}}=\sum_{j=1}^{L}(J^x\sigma _j^x\sigma _{j+1}^x+J^y\sigma _j^y\sigma _{j+1}^y+J^z\sigma _j^z\sigma _{j+1}^z)$$
(3)
and the transfer matrix of the eight-vertex model. But since neither the eight vertex model nor the $`XYZ`$ model had been solved this commutation relation merely related two equally intractable problems.
All mysteries were resolved when in 1971 Baxter solved both the eight vertex model and the $`XYZ`$ model at the same time, and moreover solved them by inventing methods of such power and generality that the course of research in statistical mechanics was permanently altered. This is the beginning of the Baxter revolution.
The first revolutionary advance made by Baxter was the generalization of
$$[T(v),H]=0\quad \mathrm{to}\quad [T(v),T(v^{\prime })]=0$$
(4)
and that as $`v\to 0`$
$$T(v)\sim T(0)(1+cv+vH_{XYZ})$$
(5)
This generalization is of great importance because it relates a model to itself and can be taken as a general criterion which selects out particular models of interest. Moreover, Baxter demonstrated the existence of this global commutation relation by means of a local relation between Boltzmann weights. Baxter called this local relation a star triangle equation because the first such relation had already been found by Onsager in the Ising model, and Onsager had referred to the relation as a star triangle equation. A related local equation had been known since the work of McGuire and Yang on the quantum delta function gases but its deep connection with the work of Onsager had not been understood. The search for solutions of the star triangle equation has been of major interest ever since and has led to the creation of the entirely new field of mathematics called “Quantum Groups”. The Baxter revolution of 1971 is directly responsible for this new field of mathematics.
The second revolutionary step in Baxter’s paper is that in addition to the commutation relation (4) he was able to obtain a functional equation for the eigenvalues of the transfer matrix and from this he could obtain equations which characterized the eigenvalues. In the limit where the eight vertex model becomes the six vertex model these equations reduce to Bethe’s equations previously found by Lieb. But Lieb found his equations by finding expressions for all of the eigenvectors of the problem whereas Baxter never considered eigenvectors at all. It is truly a revolutionary change in point of view to divorce the solution of the eigenvalue and eigenvector problems and to solve the former without knowing anything of the latter. This technique has proven to be of utmost generality and, indeed, for almost every solution which has been found to the star triangle equation a corresponding functional equation for eigenvalues has been found. On the other hand, the study of the eigenvectors, which was the heart of the solution of the six vertex and XXZ models, has almost been abandoned.
The final technique introduced by Baxter is the thoroughgoing use of elliptic functions. Elliptic functions, of course, have been used in physics since the days of the heavy symmetric top and are conspicuously used in Onsager’s solution of the Ising model. But even though elliptic functions appear in Onsager’s final expression for the free energy of the Ising model they play no role in either Onsager’s original algebraic solution or in Kaufman’s free fermi solution. On the other hand there are steps in Baxter’s solution where the elliptic functions are essential. It is quite fair to say that just as Onsager invented the loop group of $`sl_2`$ in his solution of the Ising model, so Baxter in his 1971 paper first introduced the essential use of elliptic and modular functions into 20th century physics.
## 3 The corner transfer matrix
It took Onsager 5 years from the computation of the Ising model free energy before he made public his conjecture for the order parameter. Baxter was much more prompt in the case of the eight vertex model and produced with Barber in 1973 a conjecture for the order parameter a mere two years after the free energy was computed. For the Ising model it took another three years to go from the conjecture to a proof. For the eight vertex model it also took Baxter three years to obtain a proof of the conjecture.
The details of Baxter’s proof are contained in two separate papers and form the subject of chapter 13 of his 1982 book Exactly Solved Models in Statistical Mechanics. It is even more revolutionary than the 1971 free energy computation. Baxter not only abandons the use of the eigenvectors of the row to row transfer matrix (which had been retained in his 1973 computation of the six vertex model order parameter) but he abandons the use of the row to row transfer matrix altogether. In its place he uses a completely new construct which had never been seen before and which had absolutely no precursors in the literature: the corner transfer matrix.
A transfer matrix builds up a large lattice one row at a time. In an $`L\times L`$ lattice of a 2 state per site model it has dimension $`2^L.`$ A corner transfer matrix builds up a lattice by adding one quadrant at a time and has dimension $`2^{L^2/4}.`$ The spin whose average is being computed lies at the corner common to all four quadrants. Order parameters are computed from the eigenvector of the ground state of the row to row transfer matrix. For the corner transfer matrix the order parameter is expressed in terms of the eigenvalues and the eigenvectors are not needed.
Thus far the philosophy of the order parameter computation has followed the spirit of the free energy computation in that all attention has been moved from eigenvectors to eigenvalues. But in order to make this a useful tool Baxter takes one more revolutionary step. He takes the thermodynamic limit before he obtains equations for the eigenvalues. This is exactly the opposite of what was done in the free energy computation, where the equations are obtained first and only in the end is the thermodynamic limit taken.
This early introduction of the thermodynamic limit has a very dramatic impact on the eigenvalues of the corner transfer matrix. To see this we note that the matrix elements of the corner transfer matrix are all quasi-periodic functions of the spectral variable $`v.`$ This is of course also true for the row to row transfer matrix. It is thus a natural argument to make to say that a matrix with quasi-periodic elements should have doubly periodic eigenvalues and this is in fact true for the row to row transfer matrix. But for the corner transfer matrix the taking of the thermodynamic limit has the astounding effect that the eigenvalues, instead of being elliptic functions, all become simple exponentials $`e^{\alpha _rv}.`$ Once these very simple exponential expressions for the eigenvalues are obtained it is a straightforward matter to obtain the final form for the spontaneous magnetization of the eight vertex model, but all along the way, it is fair to say, a great deal of magic has been worked.
## 4 The RSOS models
The next stage in the Baxter revolution is the discovery and solution of the RSOS model by Andrews, Baxter and Forrester in 1984 . As in the case of the eight vertex model revolution in 1971 there were several precursor papers, this time all by Baxter himself.
It has been stressed in the preceding sections that Baxter made a revolutionary shift of point of view by discovering that the eigenvalue problems could be solved without solving the eigenvector problems. Therefore for the six vertex and $`XXZ`$ models Baxter could obtain Bethe’s equations for the eigenvalues without recourse to Bethe’s form of the eigenvector
$$\psi (x_1,x_2,\mathrm{\dots },x_n)=\sum_P A(P)e^{i\sum_j x_jk_{Pj}}.$$
(6)
In the previous work on the six vertex and $`XXZ`$ models the restriction was made that all the $`k_j`$ were distinct. It was therefore quite a surprise when in 1973 Baxter discovered that, in the $`XXZ`$ chain (1), when
$$\mathrm{\Delta }=\frac{1}{2}(q+q^{-1})\quad \mathrm{and}\quad q^{2N}=1$$
(7)
there are in fact eigenvectors of the $`XXZ`$ chain for which the $`k_j`$ of (6) are equal. For these solutions the $`k_j`$ obey
$$\mathrm{\Delta }e^{ik_j}-1-e^{i(k_j+k_l)}=0$$
(8)
and this case had been tacitly excluded in all previous work.
Baxter then generalized the root of unity condition (7) of the six vertex model to the eight vertex model and he found an entire basis of eigenvectors which in a sense makes maximal use of the violation of the previously assumed condition $`k_j\ne k_l`$. Baxter is thus able to re-express these root of unity eight vertex models in terms of what he calls in his 1973 paper an “Ising–like model.”
Baxter’s next encounter with root of unity models was in 1981 when he solved the hard hexagon model . In this most remarkable paper Baxter uses his corner transfer matrices to compute the order parameter of the problem and in the course of the computation discovers the identities of Rogers and Ramanujan which were first found in 1894
$$\sum_{n=0}^{\infty }\frac{q^{n(n+a)}}{(q)_n}=\frac{1}{(q)_{\infty }}\sum_{n=-\infty }^{\infty }(q^{n(10n+1+2a)}-q^{(5n+2a)(2n+1)})$$
(9)
where $`(q)_n=\prod_{j=1}^n(1-q^j)`$ and $`a=0,1.`$ Baxter was clearly impressed that these classic identities appeared naturally in a statistical mechanics problem because he put the term “Rogers–Ramanujan” in the title of the paper. Because the right hand side of (9) is obviously written as the difference of two theta functions we once again see that modular functions appear naturally in statistical mechanics. But neither the 1973 nor the 1981 papers can be called genuinely revolutionary because neither of them was seen to have general applicability.
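These identities are easy to check numerically. The sketch below (ours) verifies the classical product form of the Rogers–Ramanujan identities, $`\sum_{n\ge 0}q^{n(n+a)}/(q)_n=\prod_{k\ge 0}[(1-q^{5k+1+a})(1-q^{5k+4-a})]^{-1}`$, which is equivalent to the theta-function form (9) via the Jacobi triple product; the truncation orders are our choice and are more than sufficient at $`q=0.1`$.

```python
q = 0.1
for a in (0, 1):
    s, qn = 0.0, 1.0                      # qn accumulates (q)_n
    for n in range(40):
        if n:
            qn *= 1 - q**n
        s += q**(n * (n + a)) / qn
    p = 1.0
    for k in range(400):
        p /= (1 - q**(5 * k + 1 + a)) * (1 - q**(5 * k + 4 - a))
    print(a, s, p)                        # the two columns agree
```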
The revolution that allowed the general applicability of Baxter’s techniques is carried out in the paper of 1984 with Andrews and Forrester and the companion paper by Forrester and Baxter in which it was shown that the hard hexagon model is obtained from a special case of the “Ising–like models” found in the root of unity eight vertex models in 1973. These Ising–like models are now called eight vertex solid–on–solid models and the restriction needed to obtain the hard hexagon model is in the general case called the restricted solid–on–solid model. Starting from this formulation of the RSOS models the order parameters are computed by a direct application of the corner transfer matrix method and at the step where in the hard hexagon model the identity (9) was obtained, the authors instead solve a path counting problem and find the general result
$$\frac{1}{(q)_{\infty }}\sum_{n=-\infty }^{\infty }(q^{n(npp^{\prime }+rp^{\prime }-sp)}-q^{(np^{\prime }+s)(np+r)})$$
(10)
where the relatively prime integers $`p`$ and $`p^{\prime }`$ effectively parameterize the root of unity condition (7). The sum in this result is obviously the difference of two Jacobi theta functions, and thus we see that all the RSOS models lead to theta functions. But most remarkably, the exact same expression (10) was discovered at the same time to arise in the characters of the minimal models $`M(p,p^{\prime })`$ of conformal field theory, and these models were soon thereafter obtained as cosets of the affine Lie algebra $`A_1^{(1)}.`$
It thus became clear that the statistical mechanics of RSOS models, conformal field theory, and affine Lie algebras are all part of the same subject and from this point forth the results of statistical mechanics appear in such apparently unrelated fields as string theory, number theory and knot theory. Baxter’s corner transfer matrix was seen to be intimately related to constructions in the theory of affine Lie algebras involving null vectors and the corner transfer matrix computations of Baxter’s statistical models were rapidly generalized from the affine Lie algebra $`A_1^{(1)}`$ to all affine Lie algebras. Solvable statistical mechanical models were now seen everywhere in physics and Baxter’s methods were subject to vast generalization.
## 5 The chiral Potts model
For a few years it was thought that the revolution was complete and that corner transfer matrix methods and group theory could solve all problems which started out from commuting transfer matrices. This was changed however when the chiral Potts model was discovered in 1987. This model does indeed satisfy the condition of commuting transfer matrices (4) and the Boltzmann weights do obey a star triangle equation but unlike all previously seen models the Boltzmann weights are not parameterized either by trigonometric or elliptic functions but rather are functions on some higher genus spectral curve. There is a modulus-like variable $`k`$ in the model and when $`N=3`$ the genus of the curve is 10 if $`k\ne 0,1`$, while if $`k=1`$ the curve is the very symmetric elliptic curve $`x^3+y^3=z^3`$. If $`N=4`$ and $`k=1`$ the curve is the fourth order Fermat curve $`x^4+y^4=z^4`$, which has genus three.
As would be expected Baxter rapidly became interested in this problem and soon Baxter, Perk and Au–Yang found that for arbitrary $`N`$ and $`k`$ the spectral curve has the very simple form
$$a^N+kb^N=k^{\prime }d^N\quad \mathrm{and}\quad ka^N+b^N=k^{\prime }c^N$$
(11)
with $`k^2+k^{\prime 2}=1`$. When $`N=2`$ this curve reduces to an elliptic curve and the chiral Potts model reduces to the Ising model. However, in general for $`k\ne 0,1`$ the curve has genus $`N^3-2N^2+1`$ and for $`k=1`$ the curve reduces to the $`N^{th}`$ order Fermat curve of genus $`(N-1)(N-2)/2.`$
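These genus statements are easy to tabulate and are consistent with the $`N=3,4`$ examples mentioned above; the two functions below simply encode the formulas quoted in the text.

```python
def genus_chiral_potts(N):     # generic modulus, k != 0, 1
    return N**3 - 2 * N**2 + 1

def genus_fermat(N):           # k = 1: the Fermat curve x^N + y^N = z^N
    return (N - 1) * (N - 2) // 2

print(genus_chiral_potts(2), genus_chiral_potts(3))  # 1 (elliptic/Ising), 10
print(genus_fermat(3), genus_fermat(4))              # 1 (elliptic cubic), 3
```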
The first thing to attempt after finding the Boltzmann weights for the chiral Potts model is to repeat what had been done so many times before and to obtain a functional equation for the eigenvalues. That was soon done, but the next step in Baxter’s program was not so easy, because the methods of solution of this functional equation, which relied on the properties of genus 1 elliptic functions, did not work. Solutions for the free energy which bypassed the elliptic functions were soon found, but the fact that new methods were needed indicated that the revolution was not yet complete.
The greatest puzzle was set up in 1989 when after generalizing earlier work on the $`N=3`$ state model it was conjectured on the basis of extensive series expansions that the order parameters of the $`N`$ state chiral Potts model are given by
$$M_n=(1-k^2)^{n(N-n)/2N^2}\quad \mathrm{for}\quad 1\le n\le N-1$$
(12)
This remarkably simple expression reduces to the result of Onsager and Yang for the Ising model when $`N=2`$ and is a great deal simpler than the order parameters for the RSOS models. The first expectation was that Baxter’s corner transfer matrix methods could be applied to prove the conjecture true, and the first attempt to do this was made by Baxter. In this paper Baxter gives a new and very transparent derivation of the corner transfer matrix methods and he reduces the computation of the order parameter to a problem of the evaluation of a path-ordered exponential of non-commuting operators over a Riemann surface. Such a formulation sounds as if methods of non-Abelian field theory could now be applied to solve the problem. Unfortunately, to quote Baxter in a subsequent paper, “Surprisingly the method completely fails for the chiral Potts model.”
The reason for the failure of the method is that the introduction of the higher genus curve into the problem has destroyed a property used by Baxter and all subsequent authors in the application of corner transfer matrix methods. This is the so-called difference property: the property, shared by the plane and the torus but by no curve of higher genus, of having an infinite automorphism group (the translations). It is this property which was used to reduce the eigenvalues to exponentials in the spectral variable and it is not present in the chiral Potts model.
## 6 Future Prospects
The discovery of the chiral Potts model has now made it clear that the Baxter revolution has met up with problems in algebraic geometry which have proven intractable for almost 150 years. Baxter has investigated these problems now for almost a decade and it is clear that the solution of these physics problems will represent a major advance in mathematics. But even with this evaluation of current problems the impact of Baxter’s revolution is clearly seen. Mathematics is no longer treated as a closed, finished subject by physicists. More than anyone else Baxter has taught us that physics guides mathematics and not the other way around. This is of course the way things were in the $`17^{th}`$ century when Newton and Leibniz invented calculus to study mechanics. Perhaps in the intervening centuries, in the name of being experimental scientists, we physicists drifted away from doing creative mathematics. The work of Rodney Baxter serves now and will serve in the future as a beacon of inspiration to all those who believe that there is a unity in physics and mathematics which provides inspiration that can be obtained in no other way.
Acknowledgments
This work was partially supported by NSF grant DMR97–03543. |
# High Rate Neutrino Detectors for Neutrino Factories
(Presented at the ICFA/ECFA Workshop ”Neutrino Factories based on Muon Storage Rings” (νFACT’99), Lyon, France, 5–9 July, 1999.)
## 1 Introduction
Muon colliders and other proposed high-current muon storage rings can be collectively referred to as neutrino factories. As well as the long baseline neutrino oscillation studies that are currently garnering much of the attention, neutrino factories also have considerable potential for wide-ranging studies involving the physics of neutrino interactions. Exciting and unique high rate (HR) neutrino physics could be performed using detectors placed as close as is practical to the storage ring in order to maximize the event rate and to subtend the neutrino beam with the narrowest possible target. Rather than studying the properties of the neutrinos themselves, such experiments would instead investigate their interactions with the quarks inside nucleons and with electrons. HR detectors needed for these studies form the topic of this paper.
The advantages of neutrino beams from stored muons over traditional neutrino beams are in some ways even more notable for HR experiments than for oscillation studies. In particular, the increased neutrino flux and the much smaller transverse extent close to production allows the collection of unprecedented event statistics even in compact fully-active tracking targets backed by high-rate, high-performance detectors.
The small transverse extent of the beam at the HR detectors derives from the production method in neutrino factories. Muon decays,
$`\mu ^{-}`$ $`\to `$ $`\nu _\mu +\overline{\nu _\mathrm{e}}+\mathrm{e}^{-},`$
$`\mu ^+`$ $`\to `$ $`\overline{\nu _\mu }+\nu _\mathrm{e}+\mathrm{e}^+,`$ (1)
in the production straight sections of the muon storage ring will produce pencil beams of neutrinos with unique two-component flavor compositions. The beams from $`\mu ^{}`$ and $`\mu ^+`$ decays will be denoted as $`(\nu _\mu \overline{\nu }_e)`$ and $`(\overline{\nu }_\mu \nu _e)`$ in the rest of this paper. From relativistic kinematics, the forward hemisphere in the muon rest frame is boosted into a narrow cone in the laboratory frame with a characteristic opening half-angle, $`\theta _\nu `$, given in obvious notation by
$$\theta _\nu \approx \mathrm{sin}\theta _\nu =1/\gamma _\mu =\frac{m_\mu c^2}{E_\mu }\approx \frac{0.106}{E_\mu [\mathrm{GeV}]}.$$
(2)
For example, the neutrino beams from 50 GeV muons will have an opening half-angle of approximately 2 mrad and a radius of only 20 cm at 100 meters downstream from the center of the production straight section. (This neglects corrections due to the non-zero width of the muon beam and the length of the production straight section.) As an additional advantage besides the increased beam intensity, the decay kinematics for equations 1 are precisely specified by electroweak theory. This enables precisely modeled and completely pure two-component neutrino spectra for HR physics at neutrino factories, which is a substantial advantage over conventional neutrino beams from pion decays, particularly for high-statistics precision measurements.
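The kinematics of equation 2 can be evaluated directly; the short sketch below (ours) reproduces the numbers just quoted and the corresponding values for a 500 GeV ring.

```python
m_mu = 0.1057  # muon mass in GeV

for E_mu in (50.0, 500.0):
    theta = m_mu / E_mu            # opening half-angle in radians, eq. (2)
    r_100m = 100.0 * theta         # beam radius 100 m downstream, in meters
    print(E_mu, 1e3 * theta, 1e2 * r_100m)   # -> ~2.1 mrad and ~21 cm at 50 GeV
```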
Analysis topics at these novel beams and detectors might extend well beyond traditional neutrino physics and should complement or improve upon many analyses in diverse areas of high energy and nuclear physics. Section 2 presents an example of a high performance general purpose detector whose excellent event reconstruction capabilities should address almost all such analyses, and gives a brief overview of the physics benefits of these analyses. There are two important physics topics that might be much better conducted using specialized detectors. Polarized targets for spin physics are discussed in section 3, and section 4 introduces the options for high mass detectors to study neutrino-electron scattering.
Table 1, which is reproduced from the references, displays parameters for examples of the three types of detectors discussed in the following sections. It also gives realistic but very approximate integrated luminosities and event sample sizes for two illustrative neutrino factory energies: 50 GeV and 500 GeV. A muon beam energy of about 50 GeV is a likely choice for a dedicated muon storage ring, with default specifications of $`10^{20}`$ muon decays per year in the production straight section. Five hundred GeV muons correspond to a 1 TeV center-of-mass muon collider such as, for example, that discussed in the references.
The event samples in table 1 are truly impressive. It is seen that high performance detectors with fully-active tracking neutrino targets might collect and precisely reconstruct data samples with of order billions of neutrino-nucleon DIS interactions – more than three orders of magnitude larger than any of the data samples collected using today’s much larger and cruder neutrino targets. Each of the three detector types in table 1 will now be discussed further in the following sections.
## 2 Example design for a neutrino detector to study DIS
Figure 1 shows a general purpose high rate neutrino detector that might be well matched to the intense neutrino pencil beams at neutrino factories. This specific example is reproduced from the references and it illustrates the design considerations that might be shared by other HR detector designs at neutrino factories. A brief overview of its capabilities will be given in this section; the reader is referred to the references for more in-depth presentations of its anticipated performance capabilities.
As the most striking feature of the detector, the neutrino pencil beam allows a compact, fully-active precision vertexing target in place of the kilotonne-scale coarse-sampling calorimetric targets often used for past and present high rate neutrino experiments. For example, a 2 meter long stack of equally-spaced CCD tracking planes with a radius to match the beam width could contain 1500 planes of 300 micron thick silicon CCD’s, corresponding to a mass per unit area of approximately 100 $`\mathrm{g}.\mathrm{cm}^2`$, which is about 5 radiation lengths or one interaction length. According to table 1, even such a modest detector volume might well correspond to unprecedented neutrino event samples of order a billion, or even 10 billion, interactions per year.
The relatively small interaction region of the CCD target is backed by a hermetic detector that is reminiscent of many collider detector designs and serves much the same functions. The enveloping time projection chamber (TPC) provides track-following, momentum measurements and particle identification for all charged tracks emanating from the interactions. Further particle ID might be provided by a mirror reflecting Cherenkov light to an instrumented back-plane directly upstream from the target. Downstream from these, electromagnetic and hadronic calorimeters use total absorbtion to measure the energies of individual particles and particle jets and, lastly, iron-core toroidal magnets will identify muons that have filtered through the calorimetry.
Rather than attempting to derive the performance of this detector for specific physics topics, the rest of this section will simply present plausibility arguments for its potentially wide-ranging physics capabilities at neutrino factories, then quote some more specific conclusions taken from reference .
The dominant interaction processes that provide the physics content are the charged current (CC) and neutral current (NC) deep inelastic scattering (DIS) of (anti-) neutrinos off nucleons ($`N`$, i.e. protons and neutrons) with the production of several hadrons ($`X`$):
$`\nu (\overline{\nu })+N`$ $`\to `$ $`\nu (\overline{\nu })+X\quad (\mathrm{NC})`$
$`\nu +N`$ $`\to `$ $`l^{-}+X\quad (\nu \mathrm{CC})`$
$`\overline{\nu }+N`$ $`\to `$ $`l^++X\quad (\overline{\nu }\mathrm{CC}),`$ (3)
where the charged lepton, $`l`$, is an electron if the neutrino is an electron neutrino and a muon for muon neutrinos. At the many-GeV energies of neutrino factories, these interactions are well described as the quasi-elastic (elastic) scattering of neutrinos off one of the many quarks (and anti-quarks), q, inside the nucleon through the exchange of a virtual W (Z) boson:
$`\nu (\overline{\nu })+q`$ $`\to `$ $`\nu (\overline{\nu })+q\quad (\mathrm{NC})`$ (4)
$`\nu +q^{(-)}`$ $`\to `$ $`l^{-}+q^{(+)}\quad (\nu \mathrm{CC})`$ (5)
$`\overline{\nu }+q^{(+)}`$ $`\to `$ $`l^++q^{(-)}\quad (\overline{\nu }\mathrm{CC}),`$ (6)
where all quarks, $`q`$, participate in the NC process but the CC interactions convert negatively charged quarks to positive ones for neutrinos and vice versa for anti-neutrinos, as denoted by $`q^{(-)}\equiv d,s,b,\overline{u},\overline{c}`$ and $`q^{(+)}\equiv u,c,\overline{d},\overline{s},\overline{b}`$.
It is clear from our experience with collider detectors that the detector of figure 1 could reconstruct DIS events with at least comparable accuracy and completeness to, for example, the reconstruction of Z or W decay events at an e+e- collider detector. The charged leptons from CC interactions would of course be well measured and, more crucially, the properties of the struck quark could be inferred from reconstruction of the hadronic jet it produces. In particular, the favorable geometry of closely spaced CCD’s in the neutrino target along with their $`3.5\mu `$m typical hit resolutions should provide vertexing of charm and beauty decays that would be superior to any current or planned collider detector.
The potential richness of neutrino interactions as a probe of both the nucleon and the weak interaction is apparent from the 3 experimentally distinguishable processes of equations 4 through 6, comprising 3 different weightings of the quark flavors probed through weak interactions involving both the W and Z. Consider, for comparison, that only a single and complementary weighting of quarks is probed by the photon exchange interactions of analogous charged lepton scattering experiments. Past and present neutrino experiments with the more diffuse neutrino beams from pion decays have suffered from either insufficient event statistics (e.g. bubble chambers) or inadequate detector performance (e.g. iron-scintillator sampling calorimeters) to exploit this rich physics potential. High rate experiments at neutrino factories will certainly not lack for statistics – as evidenced by table 1 – so high performance detectors should provide the final piece of the puzzle in realizing the considerable potential of HR neutrino physics.
Beyond the plausibility arguments given above, more detailed analyses suggest the following physics capabilities for general purpose high rate detectors at neutrino factories:
* the only realistic opportunity, in any physics process, to determine the detailed quark-by-quark breakdown of the internal structure of the nucleon
* some of the most precise measurements and tests of perturbative QCD
* some of the most precise tests of the electroweak theory through measurements of the electroweak mixing angle, $`\mathrm{sin}^2\theta _W`$, in neutrino-nucleon deep inelastic scattering, with uncertainties that might approach a 10 MeV equivalent uncertainty in the W mass – i.e. comparable with, and complementary to, the best measurements predicted for determinations at future colliders
* unique measurements of the elements of the CKM quark mixing matrix that will be interesting for lower energy neutrino factories ($`|\mathrm{V}_{\mathrm{cd}}|`$ and $`|\mathrm{V}_{\mathrm{cs}}|`$) and will become extremely important ($`|\mathrm{V}_{\mathrm{ub}}|`$ and $`|\mathrm{V}_{\mathrm{cb}}|`$) at muon beam energies of order 100 GeV and above
* a new realm to search for exotic physics processes
* as a bonus outside neutrino physics, a charm factory with unique capabilities.
## 3 Polarized Nucleon Targets
Neutrinos have intrinsic promise for polarization studies because they are 100% longitudinally polarized: neutrinos are always ”left-handed” or ”backward” polarized and anti-neutrinos are ”right-handed” or ”forward” polarized. Despite this, no past or present neutrino beam has yet been intense enough or collimated enough for polarized targets, so polarized neutrino-nucleon DIS appears to have even more to gain from the improved neutrino beams at neutrino factories than the non-polarized case presented in the preceding section.
Until now, the main tool for spin physics studies has been charged lepton scattering with either polarized electrons or muons. The capabilities of these experiments are limited by several factors:
1. the polarization state of the leptons is never 100%
2. the photon exchange interaction provides only a single probe of the nucleon, as was mentioned in the preceding section
3. beam heating of the cryogenic polarized targets places serious restrictions on their design.
Very little consideration has yet been given to the design of a polarized target for neutrino factories or, for that matter, to the design of the detector that would surround it. The simplest design solution is to copy the targets used in charged lepton spin experiments. For example, another contribution to this workshop discussed designs based on the butanol target used with polarized muons by the NMC collaboration. The problem with such a target is that most of its mass resides in unpolarized nuclei (carbon, in this case) rather than in the interesting hydrogen atoms so the effective polarization of the target is diluted by typically an order of magnitude. It is hoped that the absence of significant target heating from the beam will allow the use of polarized solid protium-deuterium (HD) targets such as have been used in experiments with low intensity neutron or photon beams. The preparation of such targets is a detailed craft involving doping the targets with ortho-hydrogen and holding them for long periods of time at very low temperatures and high magnetic fields, e.g. 30-40 days at 17 T and 15 mK. In order to avoid building an entire new detector around the target, an economical solution would be to place the polarized target immediately upstream from another detector, such as the general purpose detector described in the preceding section.
The fundamental task of such targets at neutrino factories will be to probe and quantify the quark and gluon contributions to the longitudinal spin component, $`S_z^N`$, of the nucleon. The overall spin component for forward polarized nucleons is, of course, 1/2 in fundamental units ($`\mathrm{}=1`$) and the potential component contributions are summarized in the helicity sum rule:
$$S_z^N=\frac{1}{2}=\frac{1}{2}(\mathrm{\Delta }u+\mathrm{\Delta }d+\mathrm{\Delta }s)+L_q+\mathrm{\Delta }G+L_G,$$
(7)
where the quark contribution is $`\mathrm{\Delta }\mathrm{\Sigma }=\mathrm{\Delta }u+\mathrm{\Delta }d+\mathrm{\Delta }s`$, $`\mathrm{\Delta }G`$ is the gluon spin and $`L_q`$ and $`L_G`$ are the possible angular momentum contributions from the quarks and gluons circulating in the nucleon. (In this notation, $`\mathrm{\Delta }q\equiv q^{\uparrow }-q^{\downarrow }`$ is the difference between quarks of type $`q`$ polarized parallel to the nucleon spin and those polarized anti-parallel, and similarly for gluons.) The motivation for measuring the individual terms in equation 7 has strengthened following the experimental observation in 1989 that only a small fraction of the nucleon spin is contributed by the quarks, $`\mathrm{\Delta }\mathrm{\Sigma }\ll 1/2`$, which has been considered counter-intuitive and is often referred to as the nucleon spin crisis.
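To illustrate the bookkeeping in equation 7: taking $`\mathrm{\Delta }\mathrm{\Sigma }0.3`$, a representative experimental value (our choice, made purely for illustration), the quark spins account for less than a third of the total.

```python
DeltaSigma = 0.3                 # assumed illustrative value, not from the text
quark_spin = 0.5 * DeltaSigma    # quark contribution to S_z^N  -> 0.15
remainder = 0.5 - quark_spin     # left for L_q + Delta G + L_G -> 0.35
print(quark_spin, remainder)
```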
Independent of the details of the polarized target, the experimental procedure for extracting the $`\mathrm{\Delta }q`$’s at neutrino factories will be rather analogous to the more familiar extraction of the quark 4-momentum distributions in conventional non-polarized targets that was alluded to in the preceding section. The spin “structure functions” $`g_1`$ and $`g_5`$ will be extracted from differences in the DIS CC differential cross-sections for the target spin aligned with, and then opposite to, the neutrino spin direction. These structure functions correspond to linear combinations of the quark spin contributions: the parity conserving structure function, $`g_1`$, is the sum of quark and anti-quark contributions (analogous to the 4-momentum contributions of quarks to the non-polarized structure function $`F_1`$) while the parity violating $`g_5`$ is the difference of quark and anti-quark contributions (analogous to the non-polarized structure function $`F_3`$).
As was the case in the preceding section, the extraction of the quark-by-quark contributions from the structure functions should benefit greatly from the richness of neutrino interactions, with 8 independent structure functions to be measured: $`g_1`$ and $`g_5`$ from both neutrinos and antineutrinos and for both protons and neutrons.
The relative advantage over polarized DIS experiments with charged leptons is particularly evident for the parity-violating spin structure functions, $`g_5`$, since these can only be measured in CC weak interactions. The only other future opportunity to measure $`g_5`$ that has been widely discussed is the possibility of eventually polarizing the proton beams in the HERA e-p collider. Because of kinematic constraints on reconstructing events, a polarized HERA would be able to make less precise measurements for protons in a complementary kinematic region that will not be accessible to neutrino factories. It would not provide measurements for neutrons, of course.
The above method for extracting the various quark spin distributions is called “inclusive” because it sums over all hadronic final states. Additionally, neutrino factories should also provide novel and extended capabilities for “semi-inclusive” measurements. In particular, the semi-muonic tagging of charm production is sensitive to the spin contribution from the strange quarks in the nucleon, $`\mathrm{\Delta }s`$. In some kinematic regions it may also provide sensitivity to the spin contribution of the gluon, $`\mathrm{\Delta }G`$. Such a capability, if realized, would be very valuable in helping to solve the spin crisis since $`\mathrm{\Delta }G`$ is extremely difficult to measure and yet it is the leading suspect for providing the bulk of the nucleon’s spin.
## 4 Neutrino detector for neutrino-electron elastic scattering
The other physics topic that would benefit from a specialized detector is the precise determination of the weak mixing angle, $`\mathrm{sin}^2\theta _W`$, from the measurement of the cross-section for neutrino-electron scattering:
$$\nu e^{-}\to \nu e^{-}.$$
(8)
This is an interaction between point elementary particles with a precise theoretical prediction for its cross section as a function of $`\mathrm{sin}^2\theta _W`$ so statistical and experimental uncertainties will always dominate over the theoretical uncertainty.
Determination of the absolute cross section for the process of equation 8 provides a less traditional way of measuring $`\mathrm{sin}^2\theta _W`$ with neutrinos than the neutrino-nucleon DIS scattering method mentioned in section 2 and is complementary since the two measurements have different sensitivities to exotic physics processes. The best measurement so far from neutrino-electron scattering was performed with a finely segmented sampling calorimetric target by the CHARM II collaboration at CERN, with the result:
$$\mathrm{sin}^2\theta _W=0.2324\pm 0.0058(\mathrm{stat})\pm 0.0059(\mathrm{syst}).$$
(9)
The systematic component of the 3.6% total uncertainty is due mainly to beam normalization and background uncertainties. Background uncertainties at this level may well be intrinsic to sampling calorimetric targets so the new approach of a tracking detector is probably required to obtain much more precise measurements at neutrino factories.
The experimental signature for the process in a tracking detector is a single electron track with a very low transverse momentum with respect to the beam direction, $`\mathrm{p}_\mathrm{t}<\sqrt{2\mathrm{m}_\mathrm{e}\mathrm{E}_\nu }`$ (a short numerical illustration follows the list below), and no other activity in the detector. The two experimental challenges that motivate a dedicated detector are:
1. the cross section for neutrino interactions with electrons is three orders of magnitude below the dominating process of DIS interactions with nucleons. Even at neutrino factories this would require a relatively massive detector – perhaps several tonnes – to obtain sufficiently large event samples.
2. the crucial measurement of the electron $`\mathrm{p}_\mathrm{t}`$’s must be obtained before the electron initiates an electromagnetic shower and this distance scale is characterized by the radiation length of the tracking medium. This effectively restricts the target to elements with particularly low atomic numbers (Z) because the radiation length scales inversely as $`\mathrm{Z}^2`$.
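A minimal evaluation of the kinematic bound quoted before the list, for a few representative neutrino energies:

```python
import math

m_e = 0.000511  # electron mass in GeV
for E_nu in (10.0, 25.0, 50.0):
    pt_max = math.sqrt(2 * m_e * E_nu)      # GeV
    print(E_nu, 1e3 * pt_max)               # ~101, 160, 226 MeV
```

Transverse momenta of at most a couple of hundred MeV must therefore be measured within the first radiation length, which is what drives the low-Z requirement.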
Other desirable characteristics for the detector are a fully-active tracking medium with good position resolution, a magnetic field to verify the negative charge of the electrons and a fast read-out to minimize pile-up from the DIS background events.
An attractive target/detector option is a cylindrical tank containing a low-Z liquid that can form tracks of ionization electrons and drift them to an electronic read-out. The choice of the liquid requires a more detailed survey of low-Z liquid properties than has so far been conducted. Working up from the lowest Z, liquid hydrogen (Z=1) is unfortunately ruled out because of insufficient electron mobility. Liquid helium (Z=2) also suffers from poor mobility and potentially difficult operation because it lacks the ability to self-quench.
Liquid methane and other saturated alkanes appear to be good candidates for the tracking medium as they contain only carbon (Z=6) and hydrogen (Z=1) and sufficiently pure samples are capable of transporting electrons over large distances. Experimental studies of electron transport in methane have been successful enough to suggest its use in TPC detectors of up to several kilotons. It is liquid at atmospheric pressure between –182.5 and –161.5 degrees centigrade and has a density of 0.717 g/cm<sup>3</sup> and a radiation length of 65 cm. Heavier alkanes that are liquid at room temperature, such as octane, would be superior for safety and convenience if they can be maintained at sufficient purity for good electron transport, and this deserves further study. Finally, liquid argon (Z=18) also deserves further consideration despite having a radiation length of only 14 cm since its suitability as a large-volume tracking medium has been convincingly demonstrated through prototyping for the multi-kilotonne ICANOE neutrino oscillation detector.
Example Monte Carlo-generated event pictures for neutrino-electron scattering in liquid methane are shown in figures 2 and 3. Essentially all of the $`\mathrm{p}_\mathrm{t}`$ and charge sign information on the electron is contained in the initial track at the upstream end (left hand side) of the display.
A time projection chamber (TPC), as used in ICANOE, is the best established readout option but may run into problems with event pile-up due to the large drift distances characteristic of this readout geometry. A faster read-out alternative that has been suggested by Rehak uses printed-circuit kapton strips to provide more channels and, hence, shorter drift distances.
The two other big experimental challenges for the measurement besides event pile-up are background rejection and benchmarking the signal event rate to precisely predictable flux normalization processes, which will now be discussed in turn.
The dominant DIS background events will usually be readily distinguishable from the signal due to their high track multiplicity at the primary vertex. Instead, the most difficult backgrounds will come from low-multiplicity neutrino-nucleon scattering events such as quasi-elastic neutrino-nucleon scattering:
$$\nu \mathrm{N}\to \mathrm{l}^\pm \mathrm{N}^{\prime },$$
(10)
where $`N^{\prime }`$ is an excited state of the nucleon $`N`$. A tracking detector with very good $`p_t`$ resolution is needed to resolve the signal peak from the much broader background distributions from such events.
Flux normalization should be less difficult for the $`(\nu _\mu \overline{\nu }_e)`$ beam due to the availability of two theoretically predictable normalization processes involving muon production off electrons:
$`\nu _\mu e^{-}\to \nu _e\mu ^{-}`$ (11)
$`\overline{\nu _e}e^{-}\to \overline{\nu _\mu }\mu ^{-}.`$ (12)
The $`(\overline{\nu }_\mu \nu _e)`$ beam is more problematic, probably requiring an additional stage of relative flux normalization back to the $`(\nu _\mu \overline{\nu }_e)`$ flux using the relative sizes of the event samples for the quasi-elastic neutrino-nucleon scattering process of equation 10. This requires the detector to also measure very low-$`\mathrm{p}_\mathrm{t}`$ muons from the processes of equations 12 and 10, which should in practice be less difficult than the signal process for the detectors under consideration.
Table 1 gives signal event sample sizes in the range of millions to tens-of-millions of events for an 11-tonne liquid methane detector. This corresponds to the impressive limiting statistical uncertainties of $`\mathrm{\Delta }\mathrm{sin}^2\theta _W=0.0003`$ and $`0.0001`$ for the $`(\nu _\mu \overline{\nu }_e)`$ and $`(\overline{\nu }_\mu \nu _e)`$ beams, respectively, at the 50 GeV neutrino factory, and to $`\mathrm{\Delta }\mathrm{sin}^2\theta _W=0.0001`$ and $`0.00003`$ for the 500 GeV neutrino factory. With negligible theoretical uncertainties for this process the experimental challenge in approaching these statistical limits rests largely on the design of a specialized detector that can minimize the experimental uncertainties.
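As a sanity check, the quoted limits are mutually consistent with pure $`1/\sqrt{N}`$ statistical scaling. The normalization below (an assumption on our part) pins $`\mathrm{\Delta }\mathrm{sin}^2\theta _W=0.0003`$ to a sample of $`10^6`$ events:

```python
import math

C = 0.0003 * math.sqrt(1e6)          # assumed anchor: 3e-4 at 10^6 events
for N in (1e6, 1e7, 1e8):
    print(f"{N:.0e}  {C / math.sqrt(N):.1e}")   # 3e-4, ~1e-4, 3e-5
```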
## 5 Conclusions
The prospects for short-baseline high rate neutrino physics at future neutrino factories is substantial and is tightly coupled to the development of novel high performance neutrino detectors that exploit the uniquely intense and collimated neutrino beams at these facilities.
Three types of high rate neutrino detectors have been discussed in this paper:
1. general purpose detectors featuring, for example, a fully active CCD vertexing and tracking target and with a backing detector of similar complexity and performance to some collider detectors. Such detectors would have wide-ranging potential for extending neutrino physics well beyond its traditional bounds.
2. polarized targets that might map out the quark-by-quark spin structure of the nucleon and, perhaps, also determine the gluon contribution to the nucleon’s spin. Cryogenic targets of solid hydrogen might have much superior performance to the conventional polarized targets used in charged lepton scattering if their considerable design challenges can be negotiated.
3. fully active tracking targets comprising several tonnes of low atomic number liquids hold promise for one of the most precise tests of the electroweak interaction, through the determination of the weak mixing angle, $`\mathrm{sin}^2\theta _W`$, from the total cross-section for neutrino-electron scattering.
The designs for all three detector types are both exciting and very challenging. Designs for general purpose detectors have been presented only at the conceptual level and those for the two specialized target types have not even proceeded that far. Given the levels of complexity and challenge, there is both an opportunity and a need to soon begin the design work towards realizing these detector options at the first neutrino factory facility. The expected experimental conditions at neutrino factories are so novel and impressive that there are sure to be many surprises along the way.
## 6 Acknowledgments
The author has benefitted from many discussions with his co-authors on reference . A discussion on spin physics with M. Velasco was also valuable. The organizers and secretariat of NUFACT99 are to be commended for a well-organized and stimulating workshop.
This work was performed under the auspices of the U.S. Department of Energy under contract no. DE-AC02-98CH10886. |
A Note on Singular Instantons
Paul Federbush
Department of Mathematics
University of Michigan
Ann Arbor, MI 48109-1109
(pfed@math.lsa.umich.edu)
Abstract
We point out the existence of some singular, radial, spin-0 instantons for curvature-quadratic gravity theories. They are complex.
———————————————–
We consider Euclidean gravity theories described by the Lagrangian density
$$\sqrt{g}\left(\alpha R_{ik}R^{ik}+\beta R^2\right).$$
(1)
For these theories there are solutions of the Euler-Lagrange equations of the form
$$g_{\mu \nu }(x)=\delta _{\mu \nu }\left(r^2\right)^\epsilon $$
(2)
for $`\epsilon =-2,-1\pm i/\sqrt{3}`$. Here
$$r^2=\sum_{i=1}^{4}x_i^2.$$
(3)
For $`\epsilon =-2`$ the instanton is also a (well-known, trivial) instanton of the Einstein action. For the two complex values of $`\epsilon `$ this is not true, and the values of the metric are complex. For $`\beta =-\frac{1}{3}\alpha `$, any $`g_{\mu \nu }`$ of the form
$$g_{\mu \nu }(x)=\delta _{\mu \nu }f(x)$$
(4)
satisfies the Euler-Lagrange equations; one is dealing with a conformal gravity theory. |
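The flatness of the $`\epsilon =-2`$ case can be checked symbolically using the standard four-dimensional formula $`R=-6\mathrm{\Omega }^{-3}\mathrm{\Delta }\mathrm{\Omega }`$ for the scalar curvature of a conformally flat metric $`g_{\mu \nu }=\mathrm{\Omega }^2\delta _{\mu \nu }`$ (with $`\mathrm{\Delta }`$ the flat Laplacian). The sketch below is ours and only checks the scalar curvature, not the full quadratic-curvature field equations.

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4', positive=True)
r2 = x1**2 + x2**2 + x3**2 + x4**2

for eps in (sp.Integer(0), sp.Integer(-2)):
    Omega = r2**(eps / 2)                  # g = Omega^2 delta, Omega = r^eps
    lap = sum(sp.diff(Omega, xi, 2) for xi in (x1, x2, x3, x4))
    R = sp.simplify(-6 * lap / Omega**3)   # scalar curvature, d = 4
    print(eps, R)                          # both print 0: the flat cases
```

For general $`\epsilon `$ the same computation gives $`R=-6\epsilon (\epsilon +2)r^{-2\epsilon -2}`$, vanishing only at $`\epsilon =0,-2`$, consistent with $`\epsilon =-2`$ being the trivial Einstein instanton.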
## 1. Introduction.
The orbifold construction is one of the most powerful techniques in string theory that allows one to obtain interesting new theories with reduced symmetries . This procedure can be applied whenever a string theory possesses a discrete symmetry group $`\mathrm{\Gamma }`$. The construction proceeds in two steps: first we restrict the space of states to those that are invariant under $`\mathrm{\Gamma }`$, and then we add suitably projected twisted sectors, as needed by modular invariance. The initial projection is equivalent to adding sectors to the partition function in which the boundary condition in the timelike world-sheet direction $`t`$ is twisted by generators of $`\mathrm{\Gamma }`$. One of the generators of the modular group of the torus, $`S:\tau \to -1/\tau `$, exchanges the timelike direction $`t`$ with the spacelike world-sheet direction $`\sigma `$. In order for the partition function to be modular invariant, the theory must therefore also contain sectors in which the boundary condition in $`\sigma `$ is twisted by the generators of $`\mathrm{\Gamma }`$; these are the twisted sectors. Finally, invariance under the other generator of the modular group, $`T:\tau \to \tau +1`$, requires us to add sectors in which both boundary conditions are twisted; in essence this amounts to projecting the twisted sectors by $`\mathrm{\Gamma }`$ as well.
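The closure argument can be made concrete with a small computation. The sketch below (ours; it assumes the standard labelling of abelian orbifold sectors by pairs of commuting twists) starts from the projection insertions $`(0,h)`$ and closes under the modular action $`S:(g,h)\to (h,-g)`$ and $`T:(g,h)\to (g,g+h)`$, generating every twisted sector.

```python
N = 4  # a Z_N orbifold

def S(sec):
    g, h = sec
    return (h, (-g) % N)

def T(sec):
    g, h = sec
    return (g, (g + h) % N)

sectors = {(0, h) for h in range(N)}      # untwisted sectors, projected by Gamma
frontier = set(sectors)
while frontier:
    new = {f(s) for s in frontier for f in (S, T)} - sectors
    sectors |= new
    frontier = new

print(sorted(sectors))
assert len(sectors) == N * N              # every (space, time) twist pair appears
```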
Whilst this algorithm determines the theory in principle, there are a number of ambiguities that are not fixed by modular invariance of the one-loop vacuum amplitude alone. For example, if we are dealing with a string theory in the RNS formalism, every sector of the theory must be GSO-projected. In each sector that possesses fermionic zero modes, and therefore a degenerate ground state, the GSO-projection acts as the chirality operator on the ground state. A priori, the definition of this operator is only fixed up to a sign, and this sign choice affects the spectrum of the theory significantly. On the other hand, the partition function is insensitive to this choice, and therefore the condition of one-loop vacuum modular invariance does not fix this ambiguity.
It is generally believed that these ambiguities are resolved by the conditions that come from non-vacuum and higher-loop amplitudes, and that the remaining freedom is described in terms of discrete torsion . In practice, it is however difficult to determine the consistent GSO-projections explicitly. In fact, that analysis describes the freedom in modifying the action of the orbifold (and GSO-projections) in the twisted sectors relative to a certain solution which is assumed to be consistent, namely the one in which all phases are taken to be $`+1`$. In particular, this requires that the above sign ambiguities have been chosen so as to give a consistent solution, but it is a priori not obvious how this should be done.
As we shall explain in this paper, there exists a simple non-perturbative consistency condition that fixes these ambiguities, at least for a certain class of theories, uniquely. This condition arises from a careful analysis of the D-brane spectrum of these theories. The D-brane spectrum can be determined using the boundary state formalism , in which D-branes are described by coherent states of the closed string sector. Once the various projection operators of the closed string theory have been fixed, the D-brane spectrum is uniquely determined. However, not every such spectrum is allowed, since there exist transitions between different D-brane configurations. Specifically, when a D-brane in the bulk collides with a fixed-plane of the orbifold it must be allowed to break into fractional D-branes. In order for this to occur, however, both types of branes must exist in the spectrum. As we shall see, this requirement fixes the ambiguities in the GSO-projection of the twisted sectors uniquely.
For orbifold projections that preserve supersymmetry, these ambiguities can also be fixed by requiring the twisted sectors to preserve the same supersymmetry. This is the case, for example, for Type IIA or IIB on $`T^4/𝖹𝖹_2`$, where the $`𝖹𝖹_2`$ generator acts by reflection of the four compact coordinates. In these cases, the supersymmetry considerations pick out precisely the theory that is also non-perturbatively consistent. For orbifolds that break supersymmetry completely, however, there does not appear to be an analogous perturbative criterion that selects the non-perturbatively consistent theory.
Our interest in this problem arose from recent work of Klebanov, Nekrasov and Shatashvili , who considered Type IIB on $`T^6/𝖹𝖹_4`$, where the $`𝖹𝖹_4`$ generator acts by the reflection of the six compact directions. This orbifold breaks all the supersymmetries of Type IIB, and is in fact equivalent to Type 0B on $`T^6/𝖹𝖹_2`$. As we shall discuss in detail below, their choice for the GSO-projection in the twisted sectors does not satisfy the above condition, and therefore does not define a (non-perturbatively) consistent theory. This resolves the question of which state is charged under the $`U(1)`$ gauge field that comes from the twisted R-R sector, since the consistent theory does not have such a gauge field.
## 2. D-branes on $`𝖹𝖹_2`$ orbifolds.
Let us begin by describing briefly the D-brane spectrum of a $`𝖹𝖹_2`$ orbifold, where the $`𝖹𝖹_2`$ generator acts by the inversion of $`n`$ coordinates. (More details can be found in ; see also .) For simplicity we shall consider the non-compact case $`\mathrm{I}\mathrm{R}^n/𝖹𝖹_2`$, which only has one fixed plane (of spatial dimension $`9-n`$) at the origin.<sup>1</sup><sup>1</sup>1If $`m`$ of the $`n`$ directions on which the $`𝖹𝖹_2`$ acts are compact there will be $`2^m`$ fixed planes. There exist three possible types of D-branes, which differ in their boundary state components. Bulk D-branes have components only in the untwisted sectors,
$$|Dp\rangle _b=\left(|Bp\rangle _{\text{NS-NS;U}}+|Bp\rangle _{\text{R-R;U}}\right),$$
(1)
fractional D-branes have components in all untwisted and twisted sectors,
$$|Dp\rangle _f=\frac{1}{2}\left(|Bp\rangle _{\text{NS-NS;U}}+|Bp\rangle _{\text{R-R;U}}+|Bp\rangle _{\text{NS-NS;T}}+|Bp\rangle _{\text{R-R;T}}\right),$$
(2)
and truncated D-branes only involve the untwisted NS-NS and twisted R-R sectors,
$$|Dp\rangle _t=\frac{1}{\sqrt{2}}\left(|Bp\rangle _{\text{NS-NS;U}}+|Bp\rangle _{\text{R-R;T}}\right),$$
(3)
and can only exist for values of $`p`$ for which a fractional (and bulk) D-brane does not exist. The above spectrum, including the latter restriction, follows from the usual condition that the cylinder amplitude between any two D-branes must correspond to an open string partition function.
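To illustrate where these normalisations come from, consider schematically the cylinder between two identical fractional branes: transforming to the open-string channel, the four closed-string exchange channels (untwisted NS-NS, untwisted R-R, twisted NS-NS and twisted R-R) correspond, respectively, to the insertions $`1`$, $`(-1)^F`$, $`g`$ and $`g(-1)^F`$ in the open-string trace, so that roughly
$${}_f\langle Dp|\,e^{-\ell H_c}\,|Dp\rangle _f\;\longrightarrow \;\mathrm{Tr}_{\text{NS}-\text{R}}\left[\frac{1+(-1)^F}{2}\;\frac{1+g}{2}\;e^{-2\pi tL_0}\right].$$
The prefactors $`\frac{1}{2}`$ and $`\frac{1}{\sqrt{2}}`$ in (2) and (3) are then fixed by demanding that every such cylinder amplitude have integer multiplicities when interpreted as an open string partition function.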
In a supersymmetric theory (1) and (2) are BPS states, whereas (3) are non-BPS, but nevertheless stable in certain regimes of the moduli space . To account for the different orientations of the D-branes relative to the action of the orbifold, we denote the number of world-volume directions parallel to the fixed-plane by $`r`$, and the number of world-volume directions transverse to the fixed-plane by $`s`$. We shall henceforth label the D$`p`$-branes (and boundary states) by $`(r,s)`$, where $`p=r+s`$. Fractional and truncated branes are either completely localised at the fixed-plane (if $`s=0`$), or extend in $`s`$ directions transverse to the fixed-plane and terminate at an $`r`$-dimensional hyperplane in it; this follows from the fact that the boundary states have components in the twisted sectors.<sup>2</sup><sup>2</sup>2In the compact case fractional and truncated branes would contain components in multiple twisted sectors, and would therefore be suspended between different fixed planes.
Physical closed string states must be invariant under the GSO-projection and the action of the orbifold group. The spectrum of physical D-branes, i.e. the allowed values of $`r`$ and $`s`$ for the three types of branes, is therefore determined by the action of the GSO and orbifold operators on the boundary states in the four different sectors. Since these boundary states are generically given by
$$|B(r,s)\rangle =e^{\sum \left(\pm \frac{1}{n}\alpha _{-n}^i\stackrel{~}{\alpha }_{-n}^i\pm \psi _{-r}^i\stackrel{~}{\psi }_{-r}^i\right)}|B(r,s)\rangle ^{(0)},$$
(4)
the crucial constraint comes from the action of the above operators on the ground states of the different sectors. The left- and right-moving GSO-operators are given as
$$(-1)^f=\pm (i^{|\mathcal{Z}|/2})\prod _{\mu \in \mathcal{Z}}(\sqrt{2}\psi _0^\mu ),\qquad (-1)^{\stackrel{~}{f}}=\pm (i^{|\mathcal{Z}|/2})\prod _{\mu \in \mathcal{Z}}(\sqrt{2}\stackrel{~}{\psi }_0^\mu ),$$
(5)
where $`\psi _0^\mu `$ and $`\stackrel{~}{\psi }_0^\mu `$ are the left- and right-moving fermionic zero modes, respectively, which satisfy the Clifford algebra
$$\{\psi _0^\mu ,\psi _0^\nu \}=\eta ^{\mu \nu },\qquad \{\psi _0^\mu ,\stackrel{~}{\psi }_0^\nu \}=0,\qquad \{\stackrel{~}{\psi }_0^\mu ,\stackrel{~}{\psi }_0^\nu \}=\eta ^{\mu \nu },$$
(6)
and $`\mathcal{Z}`$ denotes the set of coordinates in which the given sector has fermionic zero modes. The prefactor has been fixed (up to a sign) so that both operators square to the identity (we shall only consider the case where the number of elements in $`\mathcal{Z}`$, $`|\mathcal{Z}|`$, is even). In the untwisted NS-NS sector, for which $`|\mathcal{Z}|=0`$, it is conventional to take both signs to be $`-`$, so that the tachyonic ground state is odd under both operators. For definiteness we adopt this convention for all other sectors as well. Finally, the inversion operator of the orbifold acts as
$$g=\prod _{\mu \in \mathcal{I}}(\sqrt{2}\psi _0^\mu )\prod _{\mu \in \mathcal{I}}(\sqrt{2}\stackrel{~}{\psi }_0^\mu ),$$
(7)
where $`\mathcal{I}`$ denotes the set of coordinates on which the orbifold acts. Again, the definition of $`g`$ in the twisted sectors is a priori ambiguous up to a sign; for definiteness we have again fixed this to be $`+`$ in all sectors. The resulting actions on the boundary states<sup>3</sup><sup>3</sup>3One must actually combine boundary states with different spin structures to get GSO-eigenstates; the listed states are the linear combination for which $`(-1)^f`$ has eigenvalue $`+1`$. are shown in table 1 (see for details).
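As a quick check of the prefactor in (5) (our verification): for $`|\mathcal{Z}|=2m`$ spacelike zero-mode directions, anticommuting the $`2m`$ distinct zero modes past one another produces a sign $`(-1)^{m(2m-1)}=(-1)^m`$, while $`(\sqrt{2}\psi _0^\mu )^2=\eta ^{\mu \mu }=1`$ by (6), so that
$$\left((-1)^f\right)^2=i^{2m}\,(-1)^m\prod _{\mu \in \mathcal{Z}}(\sqrt{2}\psi _0^\mu )^2=(-1)^m(-1)^m=1,$$
as required.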
The definition of the GSO and orbifold projections in the various sectors determines which of the boundary states are physical, and therefore the D-brane spectrum of the theory. However, in order for this D-brane spectrum to make sense, it must satisfy an additional consistency condition: as a bulk D$`(r,s)`$-brane approaches the fixed plane, additional massless scalars usually appear in the world-volume gauge theory; these parametrise the Coulomb branch, and describe the moduli along which the bulk brane fractionates into two fractional branes. In order for this to be possible, the theory must therefore also have a fractional D$`(r,s)`$-brane. As we shall see below, this condition fixes uniquely the ambiguity in defining the GSO-projection in the twisted sectors.
We shall illustrate this condition by considering two examples. The first is Type II on $`\mathrm{I}\mathrm{R}^4/𝖹𝖹_2`$, which is supersymmetric, and the second is Type 0 on $`\mathrm{I}\mathrm{R}^6/𝖹𝖹_2`$ (or equivalently Type II on $`\mathrm{I}\mathrm{R}^6/𝖹𝖹_4`$), in which supersymmetry is broken.
## 3. Supersymmetric example.
Consider Type II strings on $`\mathrm{I}\mathrm{R}^{1,5}\times \mathrm{I}\mathrm{R}^4/𝖹𝖹_2`$, where the generator $`g`$ of $`𝖹𝖹_2`$ reflects the coordinates $`x^5,x^6,x^7,x^8`$. This breaks $`SO(1,9)`$ to $`SO(1,5)\times SO(4)_R`$ ($`SO(4)_S\times SO(4)_R`$ in light-cone gauge), where the second factor corresponds to a global R-symmetry from the six-dimensional point of view. For Type IIA the resulting six-dimensional theory has $`𝒩=(1,1)`$ supersymmetry, and for Type IIB it has $`𝒩=(2,0)`$ supersymmetry. The relevant properties of the different sectors are shown in table 2.
Modular invariance requires that we project all sectors onto states which are even under $`g`$,
$$P_{orbifold}=\frac{1}{2}(1+g).$$
(8)
In principle, modular invariance should also determine the correct GSO-projections in the twisted sectors, once that of the untwisted R-R sector has been fixed, i.e. once we have decided whether to start with Type IIA or Type IIB string theory. However, due to the sign ambiguity in the action of the GSO operators on the ground states of these sectors, it is actually difficult to determine the consistent projections. The most general GSO-projections are given as<sup>4</sup><sup>4</sup>4Only the relative phase of $`(-1)^f`$ and $`(-1)^{\stackrel{~}{f}}`$ is relevant for our purposes:
$$P_{GSO}=\{\begin{array}{cc}\frac{1}{4}(1+(-1)^f)(1+(-1)^{\stackrel{~}{f}})\hfill & \text{NS-NS;U}\hfill \\ \frac{1}{4}(1+(-1)^f)(1+ϵ(-1)^{\stackrel{~}{f}})\hfill & \text{R-R;U}\hfill \\ \frac{1}{4}(1+(-1)^f)(1+\eta (-1)^{\stackrel{~}{f}})\hfill & \text{NS-NS;T}\hfill \\ \frac{1}{4}(1+(-1)^f)(1+\delta (-1)^{\stackrel{~}{f}})\hfill & \text{R-R;T},\hfill \end{array}$$
(9)
where $`ϵ,\eta ,\delta =\pm 1`$. The phase $`ϵ`$ corresponds to whether the theory is Type IIA ($`ϵ=-1`$) or Type IIB ($`ϵ=+1`$), and the phases $`\eta ,\delta `$ correspond to the aforementioned ambiguity.
To fix the ambiguity in the choice for $`\eta `$ and $`\delta `$, let us now consider the D-brane spectrum of the theory. From the projections in the untwisted sectors it follows that the physical bulk D-branes have $`r`$ even and $`s`$ even in Type IIA, and $`r`$ odd and $`s`$ even in Type IIB. In order to satisfy the above fractionation condition, there must exist fractional D-branes for the same values of $`r`$ and $`s`$. In particular, this means that the corresponding boundary states must be physical in both the twisted NS-NS and the twisted R-R sectors. It follows from the results of Table 1 that this requires $`\eta =+1`$ and $`\delta =\eta ϵ=ϵ`$; the D-brane spectrum can then be summarised by
* IIA on $`\mathrm{I}\mathrm{R}^4/𝖹𝖹_2`$: Fractional and bulk D-branes exist for $`r`$ and $`s`$ both even.
Truncated D-branes exist for $`r`$ even and $`s`$ odd, e.g. the non-BPS D-string .
* IIB on $`\mathrm{I}\mathrm{R}^4/𝖹𝖹_2`$: Fractional and bulk D-branes exist for $`r`$ odd and $`s`$ even.
Truncated D-branes exist for both $`r`$ and $`s`$ odd.
This choice of $`\eta `$ and $`\delta `$ also leads to a supersymmetric spectrum in the twisted sectors. Indeed, the surviving components of the ground state in the twisted R-R sector transform as
$$(\mathbf{2},\mathbf{1})\otimes (\mathbf{2}^{\prime },\mathbf{1})=(\mathbf{4},\mathbf{1})$$
(10)
in the Type IIA orbifold, and as
$$(\mathbf{2},\mathbf{1})\otimes (\mathbf{2},\mathbf{1})=(\mathbf{3},\mathbf{1})\oplus (\mathbf{1},\mathbf{1})$$
(11)
in the Type IIB orbifold. The former corresponds to the vector component of an $`𝒩=(1,1)`$ vector multiplet, and the latter to the rank two antisymmetric tensor component and one of the scalar components of an $`𝒩=(2,0)`$ tensor multiplet. As these are precisely the unbroken supersymmetries associated with the respective theories, the above choice for the GSO-projection in the twisted sectors is consistent with supersymmetry as well.
It may also be worth mentioning that only this choice for the GSO-projection in the twisted sectors leads to an orbifold that can be blown up to a smooth ALE space (K3 in the fully compact case), since the latter is manifestly supersymmetric.
## 4. Non-supersymmetric example.
Now consider the case of Type 0 strings on $`\mathrm{I}\mathrm{R}^{1,3}\times \mathrm{I}\mathrm{R}^6/𝖹𝖹_2`$, where the generator $`g`$ of $`𝖹𝖹_2`$ reflects the six coordinates $`x^3,\mathrm{},x^8`$.<sup>5</sup><sup>5</sup>5This case is of particular interest since it is directly related to recent work of Klebanov, Nekrasov and Shatashvili , where this orbifold of Type 0B was discussed. Note that $`g^2=(-1)^F`$, but since Type 0 is purely bosonic this is equivalent to the identity. In Type II strings $`g`$ generates a $`𝖹𝖹_4`$ group, so the above theory is identical to Type II on $`\mathrm{I}\mathrm{R}^{1,3}\times \mathrm{I}\mathrm{R}^6/𝖹𝖹_4`$. The ten-dimensional Lorentz group is broken to $`SO(1,3)\times SO(6)`$ ($`SO(2)\times SO(6)`$ in light-cone gauge), and the theory is not supersymmetric (and in fact completely fermion-free). The different sectors of the theory are described in table 3.
Since we are using the Type 0 picture, the relevant GSO operator is actually the combination $`(-1)^{f+\stackrel{~}{f}}`$. But since the action of $`(-1)^f`$ is trivial on all boundary states, we can still refer to table 1 (with $`n=6`$ in this case) for the transformation properties of the boundary states.<sup>6</sup><sup>6</sup>6In fact, both $`(-1)^f`$ and $`(-1)^{\stackrel{~}{f}}`$ change the spin structure of the boundary states, and the Type 0 GSO operator $`(-1)^{f+\stackrel{~}{f}}`$ therefore preserves the spin structure. Thus it is not necessary to consider a linear combination of boundary states, and this gives rise to the famous doubling of the D-brane spectrum of Type 0 relative to Type II. The most general GSO-projections are now given by
$$P_{GSO}=\{\begin{array}{cc}\frac{1}{2}(1+(-1)^{f+\stackrel{~}{f}})\hfill & \text{NS-NS}\hfill \\ \frac{1}{2}(1+ϵ(-1)^{f+\stackrel{~}{f}})\hfill & \text{R-R}\hfill \\ \frac{1}{2}(1+\eta (-1)^{f+\stackrel{~}{f}})\hfill & \text{NS-NS;T}\hfill \\ \frac{1}{2}(1+\delta (-1)^{f+\stackrel{~}{f}})\hfill & \text{R-R;T},\hfill \end{array}$$
(12)
where $`ϵ`$ again determines whether the theory is Type A or B.
Referring again to table 1, it now follows that in order to satisfy the D-brane fractionation condition, we must have $`\eta =-1`$, and $`\delta =\eta ϵ=-ϵ`$. For this choice (and only for this choice) we obtain a spectrum of D-branes that is consistent with the above fractionation condition; this D-brane spectrum can be summarised by
* 0A on $`\mathrm{I}\mathrm{R}^6/𝖹𝖹_2`$: Fractional and bulk D-branes exist for $`r`$ and $`s`$ both even.
Truncated D-branes exist for $`r`$ even and $`s`$ odd.
* 0B on $`\mathrm{I}\mathrm{R}^6/𝖹𝖹_2`$: Fractional and bulk D-branes exist for $`r`$ odd and $`s`$ even.
Truncated D-branes exist for both $`r`$ and $`s`$ odd.
The above solutions for $`\eta `$ and $`\delta `$ are somewhat counterintuitive, in that the GSO-projection in the twisted sectors is opposite to that in the untwisted sectors. In particular, the projection in the twisted R-R sector is chiral in Type 0A and non-chiral in Type 0B, which is opposite to the convention in the untwisted R-R sector. The surviving massless fields in the twisted R-R sector therefore transform as
$$(\mathbf{1}_{\frac{1}{2}}\otimes \mathbf{1}_{\frac{1}{2}})\oplus (\mathbf{1}_{-\frac{1}{2}}\otimes \mathbf{1}_{-\frac{1}{2}})=\mathbf{1}_1\oplus \mathbf{1}_{-1}$$
(13)
in the Type 0A orbifold, and as
$$(\mathbf{1}_{\frac{1}{2}}\otimes \mathbf{1}_{-\frac{1}{2}})\oplus (\mathbf{1}_{-\frac{1}{2}}\otimes \mathbf{1}_{\frac{1}{2}})=2\times \mathbf{1}_0$$
(14)
in the Type 0B orbifold. The former corresponds to the two helicities of a massless vector in four dimensions, and the latter to two massless scalars. <sup>7</sup><sup>7</sup>7For the case of the Type 0B orbifold, this also agrees with the convention that was considered in .
This is the opposite of what was claimed in , namely that the twisted sector of the Type 0B orbifold contains a massless vector. In effect, the authors of chose $`\eta =\delta =+1`$, and therefore a chiral GSO-projection in the twisted R-R sector. It is clear from table 1 that in this case there are no physical fractional branes, and therefore that the fractionation condition is violated.
## 5. Conclusions.
The condition of modular invariance, which underlies all consistent closed string theories, is sometimes clouded by phase ambiguities. This is especially true for the GSO-projection in sectors containing fermionic zero modes. In this note we have explained that there exists a non-perturbative consistency condition for orbifold theories, related to the spectrum of D-branes, that determines the GSO-projection in the twisted sectors uniquely. For the supersymmetric cases, this condition reproduces the conventions that follow from supersymmetry, but it also applies to situations where supersymmetry is broken. As an example of the latter case, we considered Type 0 strings on $`\mathrm{I}\mathrm{R}^{1,3}\times \mathrm{I}\mathrm{R}^6/𝖹𝖹_2`$. In this case, requiring the spectrum of fractional D-branes to match up with that of the bulk D-branes fixes the GSO-projection in the twisted sectors, and thus the spectrum of the theory.
## Acknowledgments.
We thank Igor Klebanov for useful correspondences. M.R.G. thanks Peter Goddard for a helpful conversation. O.B. thanks Zurab Kakushadze. O.B. is supported in part by the DOE under grant no. DE-FG03-92-ER 40701. M.R.G. is grateful to the Royal Society for a University Research Fellowship.
# On the Internal Absorption of Galaxy Clusters
## 1 Introduction
A central consequence of the cooling flow model for galaxy clusters is that cool gas is deposited in the central 200 kpc region at a rate that is typically 30-300 $`M_{\odot }`$ y<sup>-1</sup> (White, Jones & Forman, 1997; Allen & Fabian, 1997). Although this model is consistent with a wealth of X-ray data, there has been considerable skepticism about the validity of this picture because of the difficulty in finding the end state of this cooled gas. The gas does not form stars with a normal initial mass function, so either star formation is heavily weighted to low mass stars, the material does not form stars but remains as cooled gas, or the cooling flow model is incorrect. Consequently, there was considerable excitement when X-ray observations claimed to discover large amounts of cooled gas in galaxy clusters with approximately the masses expected from a long-lived cooling flow (White et al., 1991) (hereafter WFJMA). They used Einstein SSS data for 21 clusters, corrected for a time-dependent ice build-up, and their spectral fits yielded an absorption column which they compared to the Galactic value obtained from the large-beam Bell Labs survey (Stark et al., 1992). About half of the clusters (12/21) had X-ray absorption columns in excess of the Galactic HI column by at least $`3\sigma `$, and the excess was correlated with the deduced rate of cooling gas. The mass of absorbing gas within the cluster was determined to be $`3\times 10^{11}`$–$`10^{12}`$ $`M_{\odot }`$, which is approximately the amount of cooled gas that would be produced by a cooling flow over its lifetime.
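The order of magnitude is simple to check (our arithmetic, assuming a steady flow): a deposition rate of $`\dot{M}\approx 100`$ $`M_{\odot }`$ y<sup>-1</sup> sustained over a cluster lifetime of several Gyr gives
$$M_{cool}\approx \dot{M}\,t\approx 100\,M_{\odot }\,\mathrm{yr}^{-1}\times (3\text{--}10)\times 10^9\,\mathrm{yr}\approx 3\times 10^{11}\text{--}10^{12}\,M_{\odot },$$
comparable to the absorbing masses quoted above.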
The WFJMA study led to searches at other wavelengths for cold gas in cooling flow clusters, since $`10^{11}`$–$`10^{12}`$ $`M_{\odot }`$ of HI or H<sub>2</sub> would be easily detected if its properties were similar to Galactic gas. Observational searches for HI usually yielded upper limits (Jaffe, 1987, 1991; Dwarakanath, van Gorkom & Owen, 1994; O’Dea, Gallimore & Baum, 1995), and when HI was detected, it was typically two orders of magnitude lower than the expected HI mass (Jaffe, 1990; McNamara, Bregman & O’Connell, 1990; Norgaard-Nielsen et al., 1993; Hansen, Jorgensen & Norgaard-Nielsen, 1995). One concern was that the HI might have a velocity dispersion similar to the cluster, making it difficult to detect in narrow bandwidth studies. However, a recent wide bandwidth search for HI rules out such emission, typically at a level of $`5\times 10^9`$ $`M_{\odot }`$ (O’Dea, Payne, & Kocevski, 1998).
Searches for molecular hydrogen have often focused on emission or absorption from CO millimeter lines, which have led to stringent upper limits (McNamara & Jaffe, 1994; O’Dea et al., 1994; Braine & Dupraz, 1994; Braine et al., 1995). Recently, searches have employed the H<sub>2</sub> infrared lines, usually the H<sub>2</sub> (1-0)S(1) line, and emission has been detected in a few cases (Jaffe & Bremer, 1997; Falcke et al., 1998). In their analysis of the detections, Jaffe & Bremer (1997) deduce masses that are about $`10^{10}`$ $`M_{\odot }`$, still inadequate by two orders of magnitude to be in agreement with the X-ray observations.
Given the limits on HI and H<sub>2</sub>, theoretical investigations have examined whether the gas could be hidden in a form that would be difficult to detect. The work of Daines, Fabian & Thomas (1994) and of Ferland, Fabian & Johnstone (1994) indicated that the gas might be difficult to detect, with the most likely form being very cold molecular gas (near 3K). However, Voit & Donahue (1995) argue that the material is unlikely to be this cold and that the X-ray absorbing material would not have evaded detection if it were in the form of HI or H<sub>2</sub>. This agrees with the modeling of O’Dea et al. (1994), and the detection of the infrared H<sub>2</sub> lines shows that some of the molecular gas must be warm (Jaffe & Bremer, 1997). The theoretical models suggest that it would be difficult to hide cold gas from detection, although perhaps not impossible.
This apparent conflict between the WFJMA result and data at other wavebands raises the concern that there might be a problem with the SSS X-ray observations. A different group (White et al., 1994) studied four of the same clusters as WFJMA using SSS data supplemented by GINGA data as part of a study of abundance gradients in clusters. White et al. (1994) found that the amount of X-ray absorbing material depended upon various assumptions about the spectra, such as including a cooling flow in the modeling. Also, increasing the ice parameter for the SSS data would lead to a decrease in the X-ray absorbing column. In most cases, these changes could reduce but not eliminate an X-ray absorbing column in excess of the Galactic $`N_{\mathrm{HI}}`$ value. A direct conflict with the WFJMA work was presented by Tsai (1994), who used data from several instruments on the Einstein Observatory and found that toward M87, no additional X-ray absorption was required beyond the Galactic $`N_{\mathrm{HI}}`$ column.
The ROSAT PSPC spectra should provide a strong test of this extra absorption since it has good sensitivity across the energy band where the absorption occurs. For the clusters where the Galactic $`N_{\mathrm{HI}}\lesssim 5\times 10^{20}`$ cm<sup>-2</sup> and that have claimed excess X-ray absorption, such as M87, the Virgo Cluster, Abell 1795, Abell 2029, Abell 2142, and Abell 2199, no excess absorption is required by the PSPC data (Briel & Henry, 1996; Henry & Briel, 1996; Lieu et al., 1996; Sarazin, Wise & Markevitch, 1998; Siddiqui, Stewart & Johnstone, 1998), in direct conflict with the work of WFJMA. Also, PSPC spectra of other cooling flow clusters, such as Abell 401 and Abell 2597, fail to show excess absorption (Henry & Briel, 1996; Sarazin & McNamara, 1997).
It is important to note that most of these spectral fits are for a single temperature within an annulus or region. Models with cooling flows can naturally accommodate considerable internal absorbing material because these models produce soft emission (from the production of cooling gas), which can be reduced through absorption in order to agree with the observed spectrum (e.g., Wise & Sarazin (1999)). A particularly clear illustration of that is given by Siddiqui, Stewart & Johnstone (1998), who show that no excess absorption is required for either single-temperature models or cooling flow models without reheating, but that excess absorption can occur in the center for a cooling flow model with a partial covering screen. A somewhat different approach is taken by Allen & Fabian (1997) who use PSPC color maps along with a deprojection technique to fit cooling flows plus internal absorption to nearly all of their galaxy clusters. They can achieve agreement with WFJMA when they adopt a partial covering model for the absorption. The evidence suggests to us that excess absorption can be accommodated but is not required for successful spectral fits of clusters along lines of sight where the Galactic $`N_{\mathrm{HI}}\lesssim 5\times 10^{20}`$ cm<sup>-2</sup>.
The situation is different along sight lines with higher Galactic column densities, where excess columns are reported even for isothermal fits to the data. Irwin & Sarazin (1995) observed 2A0335+096, which has a Galactic $`N_{\mathrm{HI}}=1.7\times 10^{21}`$ cm<sup>-2</sup> and found an excess of $`0.6`$–$`1.2\times 10^{21}`$ cm<sup>-2</sup>, depending upon the type of fit. A similar result is found by Allen et al. (1993), who observed Abell 478 and found an excess of $`0.7`$–$`1.7\times 10^{21}`$ cm<sup>-2</sup> compared to the Galactic $`N_{\mathrm{HI}}=1.4\times 10^{21}`$ cm<sup>-2</sup>. An important aspect of these studies is that the excess absorption occurs both inside and outside of the cooling flow core.
Of direct relevance to this discussion is our recent study where we used the non-central regions of bright clusters to measure absorption columns for comparison with Galactic $`N_{\mathrm{HI}}`$ and $`N_{\mathrm{HII}}`$ data (Arabadjis & Bregman (1999a), hereafter AB). The motivation was that the bright isothermal parts of galaxy clusters were ideal background light sources with particularly simple spectra, so absorption columns could be determined to high accuracy. We found that for X-ray absorption columns $`<5\times 10^{20}`$ cm<sup>-2</sup>, the only absorption necessary was due to Galactic $`N_{\mathrm{HI}}`$. However, for the seven clusters with higher Galactic column densities, excess absorption was detected in every case and we attribute this excess to H<sub>2</sub> in the Galaxy, a result that is consistent with Copernicus H<sub>2</sub> studies (Savage et al., 1977). As part of our investigation, we developed software to incorporate the most recent values of the He absorption cross section, to which the results are somewhat sensitive. Here we extend the techniques that we developed to study the centers of these 20 bright clusters with the goal of determining whether excess absorption is required, and whether it is statistically different than the absorption seen in the non-central parts of galaxy clusters.
## 2 Method and Sample Selection
For this investigation we use the cluster sample studied in AB (Table 1). These clusters were chosen to fulfill several criteria: they must be sufficiently bright such that there were enough photons in each archived observation to constrain the spectral models; they must be well-studied so that we minimize the number of free parameters in the models; they must lie out of the plane of the Galaxy so that opacity corrections in the corresponding HI columns are minimal. The data consisted of ROSAT PSPC observations taken from the archives at the HEASARC. Standard packages (i.e., the PCPICOR suite in FTOOLS) were used to correct for spatial and temporal gain fluctuations in the ROSAT detectors (PSPC B and C; see Briel et al. (1989)). Spectra were usually taken from $`3^{\prime }`$–$`6^{\prime }`$ and $`6^{\prime }`$–$`9^{\prime }`$ annuli centered on the emission center of each cluster (but well outside of any possible cooling flows), over the energy range 0.14-2.4 keV (avoiding the softest channels where the calibration may be unreliable – see Briel et al. (1989); Snowden, Turner, George & Yusaf (1995)), and modelled using both XSPEC (Arnaud, 1996) and PROS (Conroy et al., 1993). Background spectra with point sources removed were generally taken from annuli with widths between $`2^{\prime }`$ and $`4^{\prime }`$ and radii between $`15^{\prime }`$ and $`20^{\prime }`$, and events were binned to ensure a minimum of 20 photons for each channel used in the fitting process. Each resulting background-subtracted spectrum was modelled as a single-temperature thermal plasma (model MEKAL in XSPEC; Mewe, Gronenschild & van den Oord (1985); Mewe, Lemen & van den Oord (1986); Arnaud & Rothenflug (1996); Kaastra (1992)) at a fixed temperature and redshift (White, Jones & Forman, 1997) and metallicity (0.3 solar) and with variable Galactic absorption and spectral normalization. As mentioned above, we have replaced the neutral helium cross sections of Bałucińska-Church & McCammon (1992) in XSPEC with the more recent calculations of Yan, Sadeghpour & Dalgarno (1998), and set the helium abundance He/H = 0.10 (see discussion in AB and in Arabadjis & Bregman (1999b)).
In the present study our goal is to determine if we can model the emission from a $`3^{\prime }`$ disk at the emission center using the same Galactic absorption column as that derived from X-ray fits to the outer regions, rendering an absorption component local to the cluster unnecessary. For each cluster we try to fit a single-component thermal plasma at the same temperature, redshift, and metallicity ($`T`$, $`z`$, and $`Z`$) as the models of AB. This leaves only one free parameter, the spectral normalization.
Many galaxy clusters appear to exhibit a spatial metallicity gradient, especially those containing cooling flows (Fabian & Pringle, 1977; Ponman et al., 1990; Koyama, Takano & Tawara, 1991; Matsumoto et al., 1996; Ezawa et al., 1997; Hwang et al., 1997). Roughly speaking, the metallicity ranges from 0.3-0.5 in the outer regions to approximately solar at the cooling flow center (Edge & Stewart, 1991; Fukazawa et al., 1994; Mushotzky et al., 1996; Hwang et al., 1997; Allen & Fabian, 1997). Allowing the metallicity to vary in our models often produces implausible values, however, with $`Z`$ in the range 4–20. This seems to be the result of a competition between metallicity and absorption to reproduce the sharpness of the spectral peak at 1 keV; i.e., the feature can be sharpened either by increasing the Galactic column or increasing the metallicity. For our models the simplest solution is to use 0.3 for the thermal plasma metallicity, and if the fit obtained is unacceptable (e.g., one in which the reduced chi-squared $`\chi _r^2`$ of the fit exceeds 1.26 for 187 degrees of freedom, indicating a probability of less than 1%), we increase it to 0.5. For the cooling flows we adopt $`Z=1`$. Our choice of metallicity does not have a large effect upon our derived absorption columns, although it should be noted that the effect is somewhat greater for the resulting mass deposition rates, reducing them by 10-20% when $`Z`$ is increased to 0.5 from 0.3.
If increasing the metallicity to 0.5 fails to improve the fit, we add a second thermal plasma at the same redshift and metallicity. This adds two free parameters, the temperature and normalization of the second emission component. If this results in an unacceptable fit, we allow the absorption column to vary. The models used for each cluster are shown in Table 2.
In order to facilitate a comparison with the WFJMA results we also run cooling flow models (i.e., a thermal plasma plus emission from a cooling flow) for each cluster. We use the model of Mushotzky & Szymkowiak (1988) (i.e. the CFLOW routine in XSPEC) for the cooling flow component, as did WFJMA. The addition of the cooling flow adds a number of free parameters: the temperature range $`T_{lo}`$ and $`T_{up}`$ of the emitting material, the slope $`\alpha `$ of the power law emissivity function, and the cooling flow mass deposition rate $`\dot{M}`$, as well as the redshift and metallicity. In these models we set $`T_{up}`$ to the temperature of the thermal plasma component (as was done in the WFJMA study), leaving $`T_{lo}`$ a free parameter. We note that $`T_{lo}`$ could have been set to an arbitrarily low value (where the gas no longer contributes to emission in the soft band), but allowing it to vary produced slightly better fits in a few instances. In any case, the differences in the fits produced by the two methods are quite small. We assume an emission measure that is proportional to the inverse of the cooling time at the local flow temperature, corresponding to $`\alpha =0`$. The cooling rate $`\dot{M}`$ is left as a free parameter.
For each cluster we fit several cooling flow models which differ in their approach to the absorption. The first model holds the intervening column constant, at the Galactic value of AB. The second model allows the column to vary. It could be argued that any additional absorption seen in this model is not truly “local”, however, since it is manifest only as an increase in the Galactic column. Therefore we run a third model wherein the Galactic column is fixed (at a value determined in AB) and a separate, redshifted absorber covers only the central cooling flow. It should be noted, however, that such an approach does not allow for the expected small-scale structure in the Galactic interstellar medium ($`\sim 7\%`$ on these scales; AB), nor is the poor spectral resolution of ROSAT data capable of distinguishing between absorbers with differing (low) redshifts.
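In schematic form (our notation, not XSPEC syntax; $`M_T`$ denotes the MEKAL component, $`C_{\dot{M}}`$ the CFLOW component, and $`\sigma (E)`$ the effective photoelectric cross section per hydrogen atom), the three variants are
$$\begin{array}{ll}\text{(i)}& e^{-\sigma (E)N_{\mathrm{Gal}}}\left[M_T(E)+C_{\dot{M}}(E)\right],\quad N_{\mathrm{Gal}}\text{ frozen at the AB value;}\\ \text{(ii)}& e^{-\sigma (E)N_\mathrm{H}}\left[M_T(E)+C_{\dot{M}}(E)\right],\quad N_\mathrm{H}\text{ free;}\\ \text{(iii)}& e^{-\sigma (E)N_{\mathrm{Gal}}}\left[M_T(E)+e^{-\sigma (E[1+z])N_z}\,C_{\dot{M}}(E)\right],\quad N_z\text{ free.}\end{array}$$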
## 3 Results
Most of the clusters in our sample do not require an extra absorption component to be modelled successfully. Model fits for each cluster are shown in Table 2. Of the 20 clusters in the sample, 12 can be fit with a one- or two-component model with the intervening column set to the Galactic $`N_\mathrm{x}`$ value, and thus require no extra absorption component. Figure 1 shows an acceptable model spectrum, convolved with the PSPC instrument profile, for Coma (Abell 1656), a cluster in the direction of low Galactic absorption. The model used here consists of one emission component at a temperature of 8.0 keV, with a Galactic column set to $`0.60\times 10^{20}`$ cm<sup>-2</sup>, a value determined from fits to the X-ray emission more than $`6^{\prime }`$ from the emission center. (A nearby region was determined by AB to have a column of $`0.78\times 10^{20}`$ cm<sup>-2</sup>. Both of these values deviate from the 21 cm column of Hartmann & Burton (1997) by more than the expected 5-7% – see AB for a discussion.) Figure 2 shows the fit for Abell 2657, which lies in a direction of relatively large Galactic column ($`N_\mathrm{x}=1.13\times 10^{21}`$ cm<sup>-2</sup>). This model also uses a single emission component ($`T=3.4`$ keV), with the column set to the value derived for an annulus $`3^{\prime }`$–$`6^{\prime }`$.
For the 8 clusters that cannot be fit adequately using $`N_\mathrm{x}`$ from AB, we allow the Galactic column to vary in order to ascertain whether extra absorption is required. In no case do we achieve an acceptable fit (i.e., $`\chi _r^2<1.26`$) by allowing the column density to deviate from the value obtained using the outer parts of each cluster. In three of these clusters, however, the fits are only marginally unacceptable. Abell 85 ($`\chi _r^2=1.307`$; Figure 3) requires absorption about 6% higher than the Galactic $`N_\mathrm{x}`$ value at a significance of about 1.5$`\sigma `$, rather weak evidence for an absorption component local to the cluster. The best fit for Abell 496 ($`\chi _r^2=1.273`$; Figure 4) requires a Galactic column which is about 8% lower than the nominal $`N_\mathrm{x}`$ value at about the $`2.6\sigma `$ level. The brighter of the two emission peaks in Abell 2256 can be fit equally well using either the Galactic column from AB or by allowing $`N_\mathrm{x}`$ to vary. In the latter case the resulting column is lower, but by less than 3% (less than $`1\sigma `$ significance).
The difference between the $`N_\mathrm{x}`$ fit in the center and in the outer parts of each cluster is expected from normal fluctuations of Galactic $`N_\mathrm{H}`$ on these angular scales, which are typically at the 5-7% level (Crovisier & Dickey, 1983; Schlegel, Finkbeiner & Davis, 1998; Arabadjis & Bregman, 1999a). Alternatively, they may be the result of small calibration errors in the PSPC response matrices (Prieto, Hasinger & Snowden, 1994). Neither Abell 85 nor Abell 496 shows a systematic fluctuation in its residuals, which would undermine confidence in the choice of models used, but the nominal uncertainty in each channel is perhaps too small, artificially inflating the $`\chi ^2`$ value of the fit. Such calibration errors probably dominate the $`\chi ^2`$ of the best-fit model for Abell 1795. The fit is unacceptable ($`\chi _r^2=3.85`$), but the residual pattern in Figure 5 demonstrates the effect of a probable gain offset below 0.5 keV (Prieto, Hasinger & Snowden, 1994) coupled with small statistical errors derived from the large number of counts ($`6\times 10^5`$). The cooling flow model fit of A1795 is of equally poor quality, but the resulting excess column is closer to the Galactic value (+19% for the cooling flow versus +29% for the two-thermal component model).
Two-component model fits to the remaining 5 clusters are poor, but if they are physically significant they show the same behavior as the rest of the sample. Allowing each of their Galactic columns to vary does reduce the $`\chi ^2`$ of the fit, yielding a model with a higher column, although the significance of this is difficult to ascertain due to the poor significance of the resulting model. If we assume that these fits are physically significant, the columns exceed their Galactic values by up to $`38`$% (48% for the cooling flow models), which is typically an order of magnitude smaller than the excesses found by WFJMA (see Table 3). For example, clusters displaying absorption above the Galactic value in both studies (Abell 85, 1795, 2029, and 2199), but which otherwise do not seem unusual in $`N_\mathrm{H}`$, show an excess that is 40 times greater in WFJMA than in the present work.
The models of WFJMA all contain cooling flows, so for completeness we ran cooling flow models with variable absorption (both Galactic and proximal to the cooling flow) for the entire sample. For those models which contain only a Galactic absorption component (as a free parameter), in no case was a substantial excess absorption required to model the emission. We cannot rule out the presence of a significant quantity of cool gas at the center of cooling flows, but we stress that a significant excess absorption is not a required feature of these spectra. Of the internally absorbed cooling flow clusters common to both WFJMA and this study, only one quarter show significant excess absorption. The fact that any show significant absorption is not surprising, since absorption can be invoked to obscure any amount of cooling flow emission; that only a quarter actually display this behavior suggests that excess internal absorption is probably not a ubiquitous feature of these systems.
It is difficult to compare these results with those of Allen & Fabian (1997) since the methods differ significantly; however, one point is worth mentioning. The “color profile” approach that they adopted used data from 0.4 keV through 2 keV. In low Galactic column clusters most of the absorption is manifest from 0.2 to 0.4 keV, where our technique is quite sensitive. For example, they compute an excess $`N_\mathrm{H}`$ of almost 600% for A2029, whereas our two-component model is only 11% larger than the Galactic value (and lower still for our externally absorbed cooling flow model; see Table 3).
Figures 6 and 7 show a direct comparison between our cooling flow models of 2A0335+096 and A0085, respectively, and those of the WFJMA study. The first spectrum shown in each figure with its residuals is the application of the WFJMA model to the ROSAT data. Each of the model’s two absorption components (the Galactic column and an absorber in proximity to the cooling flow), plasma temperature, and cooling flow mass deposition rate are taken from WFJMA, while the plasma normalization is left as a free parameter. The emission and absorption physics used in WFJMA, Raymond & Smith (1977) and Morrison & McCammon (1983), respectively, is also used here. The second spectrum plotted in each figure is the single absorption component (i.e. a variable Galactic column) cooling flow model of this study. In both cases our fit is significantly better than WFJMA (2A0335: $`\chi _r^2=1.11`$ vs. 2.05; A0085: $`\chi _r^2=1.15`$ vs. 8.70). In the case of 2A0335, we find an excess column approximately half that of WFJMA. For A0085, however, it is more than an order of magnitude lower.
## 4 Summary and Conclusions
We have examined the centers of 20 X-ray bright galaxy clusters for evidence of internal absorption by cool gas. 12 of the 20 clusters can be adequately fit by a one- or two-component model using the Galactic column density determined through X-ray absorption to the outer regions of each cluster. None of the best-fit models of the 8 remaining clusters becomes an acceptable fit by allowing the absorption to vary, although three of them are borderline cases (i.e., their reduced chi-squared values are close to the cut-off of 1.26). Their columns each deviate from the Galactic absorption to the outer parts of the clusters by 3-8%, much less than the large deviations found by WFJMA, and two of these three have a lower value. This is consistent with emission contrasts due to small-scale structure in the Galactic interstellar medium, therefore no changes in $`N_\mathrm{x}`$ beyond those expected are seen. The remaining cluster centers are not fit successfully by either the one-component or two-component models used here, and although allowing their columns to vary does reduce their $`\chi ^2`$ values, they never reach acceptable levels. However, if we assume that these best fits yield valid information about $`N_\mathrm{H}`$, the resulting column density increases are only 11-38%, more than an order of magnitude below those seen by WFJMA. At least 3/4 of this sample require no absorption beyond that expected from the Galaxy. Cooling flow models wherein the sole (Galactic) absorption component is left as a free parameter show excess absorption at least an order of magnitude lower than those seen in WFJMA.
We suggest that the discrepancy between our work and that of WFJMA is probably due to the Einstein SSS calibration. The WFJMA results depend upon the values chosen for the SSS ice buildup parameters, and although they used the best available values, there could be significant uncertainties. The time-dependent thickness of the ice buildup varied with position on the solid state detector, producing an extra absorption component (equivalent to absorption of between $`10^{20}`$ and $`10^{21}`$ cm<sup>-2</sup>) that is significantly larger than many of the columns being measured. The standard model for the behavior of the ice buildup attempts to correct for the extra absorption, and is valid to a low energy cut-off near the oxygen edge at 0.5 keV (Madejski et al., 1991). Unfortunately, low and intermediate Galactic columns ($`N_\mathrm{G}\lesssim 5\times 10^{20}`$ cm<sup>-2</sup>) are most readily measured in the 0.14-0.5 keV band (AB), limiting confidence in these measurements. Although the data no longer require extra $`N_\mathrm{x}`$, it may be possible to accommodate extra absorbing material in certain models (Siddiqui, Stewart & Johnstone, 1998; Wise & Sarazin, 1999).
We would like to acknowledge financial support from NASA grant NAG5-3247. We would also like to thank J. Irwin and M. Sulkanen for many useful discussions.
# The Energy Output of the Universe
## 1. Introduction
The X-ray Background (XRB) is the integrated emission from all X-ray sources. Its hard spectrum has proved difficult to explain since, in the 2–10 keV band, it is flat with a power-law of energy index 0.4. This is flatter than the spectrum of any known common population of objects. For the last decade the most popular explanation has been that the XRB intensity is dominated by many absorbed sources (Setti & Woltjer 1989), with ranges of absorbing column density and redshift causing the observed spectrum to be a power-law. The absorption model has been extensively studied by Madau et al (1994), Matt & Fabian (1994), Comastri et al (1995), Celotti et al (1995), and Wilman & Fabian (1999). The most complete studies include Compton down-scattering in the estimation of the observed spectrum of the Compton-thick sources.
The absorption model is adopted here and is used in a simple way to show that black holes grow by radiatively efficient accretion and to determine a) the local mean density of black holes, b) the fraction of accretion power which has been absorbed, and c) constraints on the fraction of power in the Universe due to accretion (see also Fabian & Iwasawa 1999). After some discussion of how so much obscuring material can surround most sources, and how the nuclei might be fuelled, I then outline a model of obscuration in a forming, isothermal galaxy spheroid (Fabian 1999). The XRB is shown to be a key diagnostic of the accretion power of the Universe.
## 2. Accretion and the XRB
I assume that the underlying active galactic nuclei (AGN) which power the XRB have a quasar-like spectrum with an energy photon index of one. The spectrum is then constant in a $`\nu F_\nu `$ sense (Fig. 1). The action of photoelectric absorption by increasing amounts of material, characterised by a column density $`N_H`$, is (Fig. 2) to cut out the lower energy emission from the observed spectrum up to an (approximate) energy $`E\approx 10N_H^{3/8}`$ keV, where $`N_H`$ is in units of $`10^{24}`$ cm<sup>-2</sup>. As the column density exceeds about $`1.5\times 10^{24}`$ cm<sup>-2</sup> so the absorber becomes Compton thick and Compton (electron) scattering causes the residual spectrum above this cutoff to decrease in intensity. This means that the intensity observed above about 30 keV is close to the intrinsic unattenuated intensity from Compton-thin sources, and is a lower limit for Compton-thick ones. Therefore the intensity of the XRB at 30 keV equals the normalization of the XRB after correction for absorption by Compton-thin sources. This normalization can be increased by a factor of $`f^{-1}`$ if only a fraction $`f`$ of all the power emerges from sources which are Compton thin. $`f`$ is at most 3/4 (Maiolino et al 1998) and could be less than one half.
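This scaling follows from setting the photoelectric optical depth to unity: with an effective cross section falling roughly as $`\sigma _{ph}(E)\propto E^{-8/3}`$ and the normalisation fixed so that a column of $`10^{24}`$ cm<sup>-2</sup> cuts off near 10 keV,
$$N_H\,\sigma _{ph}(E_{cut})\approx 1\quad \Rightarrow \quad E_{cut}\approx 10\,N_H^{3/8}\ \mathrm{keV}\qquad (N_H\ \text{in units of}\ 10^{24}\ \mathrm{cm}^{-2}),$$
so, for example, $`N_H=10^{23}`$ cm<sup>-2</sup> gives $`E_{cut}\approx 4`$ keV.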
The absorption-corrected XRB spectrum can then be extended into the ultraviolet band assuming the mean quasar spectral energy distribution of Elvis et al (1994). This shows that about 3 per cent of the power from a typical quasar is emitted in the 2–10 keV band. The total absorption-corrected AGN background can now be converted into an energy density $`ϵ_{AGN}`$ and thence, through the use of $`E=Mc^2`$ or rather $`ϵ(1+\overline{z})=\eta \rho c^2`$ with an accretion efficiency factor $`\eta `$ and a mean redshift $`\overline{z}`$ (since photons lose energy in the expansion of the Universe but mass does not), we have the mean density in black holes $`\rho _{bh}`$.
The resulting value of $`\rho _{bh}=6\times 10^5`$ $`M_{\odot }`$ Mpc<sup>-3</sup> is about half the value found by Magorrian et al (1998) from a study of ground-based optical data of the cores of nearby galaxies, and in rough agreement with an HST photometric study made by van der Marel (1999). Similar agreement has been obtained by Salucci et al (1999) from detailed considerations of source counts etc. This close agreement emphasises that most of the mass in black holes has been accreted by a radiatively efficient (but obscured) process, and not by some inefficient process such as an advective flow. The correction required for absorption is extensive and requires that most, about 85%, of the accretion power has been absorbed.
## 3. AGN, the FIR Background and the energy from stars
The absorbed power is assumed to be emitted in the Far Infrared (FIR) bands, and when redshifted it should contribute to the sub-mm background. The total predicted is about 3 nW m<sup>-2</sup> sr<sup>-1</sup>, which is several tens of per cent of the total sub-mm background (Fixsen et al 1998; see also Almaini et al 1999 for estimates of the AGN contribution to the sub-mm background). This suggests that to within a factor of two the total integrated power (i.e. the total energy released) from accretion onto black holes is about one quarter of that from stars (mostly starlight but including supernovae), i.e.
$$E_{AGN}/E_{\ast }\approx 0.25.$$
The details of any comparison depend upon the history of the starlight and of the accretion. No estimate of the contribution to the NIR and optical backgrounds, which could lower the above value, has been made here.
A simple check on this is obtained from an argument due to G. Hasinger (see Fabian & Iwasawa 1999). Magorrian et al (1998) find the following relation between the black hole mass $`M_{bh}`$ and spheroid mass $`M_{sph}`$ of a galaxy:
$$M_{bh}\approx 0.005M_{sph},$$
so if the total energy radiated
$$E_{AGN}\approx 0.1M_{bh}c^2$$
then
$$E_{AGN}\approx 0.1\times 0.005M_{sph}c^2.$$
But the total energy radiated by stars
$$E_{\ast }\approx 0.1\times 0.005a^{-1}M_{sph}c^2,$$
where the first term is the fraction of a star which undergoes nuclear fusion and the second is the efficiency (in an $`E=mc^2`$ sense) of that fusion. $`a`$ is the ratio of the present mass of the spheroid to its original mass (many of the stars have evolved) and for a Salpeter mass function is about 20 per cent. Therefore
$$E_{AGN}/E_{\ast }\approx a\approx 0.2.$$
## 4. Uncertainties
The above estimate reduces to 0.1 if the scaling relation of van der Marel (1999), which agrees better with the XRB intensity, is used, but can increase towards unity if stellar mass loss is recycled into new stars, so that $`a\to 1`$. A mass-to-energy efficiency of $`0.1`$ has been used but it can be $`0.06`$ if the black hole is not spinning, or $`0.42`$ if it becomes a maximally spinning, Kerr, black hole.
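For reference, these values are the standard accretion efficiencies set by the binding energy per unit rest mass at the innermost stable circular orbit,
$$\eta _{Schw}=1-\sqrt{8/9}\simeq 0.06,\qquad \eta _{Kerr}=1-1/\sqrt{3}\simeq 0.42,$$
for a non-rotating and a maximally rotating hole respectively.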
An even more extreme possibility which defines an upper limit on the efficiency relative to the final (dead) black hole mass is to assume that the black hole was maximally spinning during the accretion phase and then spun down by, say, the Blandford-Znajek (1977) mechanism. The total energy released relative to the final black hole mass allows for an order of magnitude uncertainty in $`\eta `$ and thus $`E_{AGN}`$. Of course a high value here, which maximises $`E_{AGN}/E_{}`$, overpredicts the XRB intensity unless most of the growing phase of black holes is Compton thick. It is also possible that a significant fraction of the power from an AGN is in the form of a wind and not directly in radiation. As discussed later, growing black holes may be both Compton thick and powering winds. If this is correct, then $`E_{AGN}/E_{}`$ may be significantly higher than the estimate in the last section.
## 5. Obscuration, metallicity and fuelling
As outlined above, at least 85 per cent of accretion power is absorbed. Since about ten per cent is in quasars which show very little absorption, this means that most lines of sight out of the remaining objects are highly absorbed. This is difficult for the standard obscuring torus model, which could absorb perhaps one half to two thirds of all sight lines. Even then it is unclear what inflates the torus, which is supposed to be cold and molecular. Dissipation in a system of orbiting clouds should cause it to flatten into a disc, with low covering factor.
Energy must be continuously injected into any cold absorbing cloud system to keep it inflated and so sky covering. One plausible solution is that a gas-rich star cluster surrounds the black hole and it is the massive stars (winds and supernovae) which supply the energy (Fabian et al 1998). The surrounding starburst can thereby obscure the active nucleus.
The starburst should enhance the metallicity of the absorbing gas. This makes a given mass of gas more efficient at absorbing X-rays and indeed increases the effect of absorption before Compton down-scattering comes into play. This is important in opening up the parameter space for model-fitting of the XRB spectrum (Wilman & Fabian 1999).
Fuelling of the nucleus is an old problem (see e.g. Shlosman et al 1990). Although there may be lots of gas around the nucleus, angular momentum may prevent it from rapidly accreting to the centre. In this respect, a hot phase in the surrounding medium may be important, with Bondi accretion from this phase being the dominant mechanism (see e.g. Nulsen & Fabian 1999). Angular momentum may be transported outward by turbulence within such a hot phase, so allowing rapid accretion to proceed.
## 6. The mean luminosity of the distant AGN dominating the XRB intensity
Since the mass of the black hole in nearby galaxies appears to be proportional to the spheroid mass, the mass function of black holes must be similar in shape to the spheroid mass function. The mean black hole mass is therefore that appropriate to an $`L^{\ast }`$ galaxy, or about $`3\times 10^8`$ $`M_{\odot }`$. The Eddington limit of such a black hole is about $`3\times 10^{46}`$ erg s<sup>-1</sup> and its mass doubling (Salpeter) time is about $`3\times 10^7`$ yr. If the typical mass black hole has therefore grown from say a million solar mass one in $`3\times 10^9`$ yr (i.e. by $`z\sim 2`$), then we probably need $`L>0.05L_{Edd}\approx 10^{45}`$ erg s<sup>-1</sup>. This means that the typical growing black hole was powerful and of quasar-like luminosity (indeed housing a quasar at the centre).
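These numbers follow from the Eddington luminosity and the associated $`e`$-folding (Salpeter) timescale at efficiency $`\eta \approx 0.1`$ (our arithmetic):
$$L_{Edd}=\frac{4\pi GMm_pc}{\sigma _T}\simeq 1.3\times 10^{38}\left(\frac{M}{M_{\odot }}\right)\mathrm{erg}\,\mathrm{s}^{-1},\qquad t_{Salp}=\frac{\eta Mc^2}{L_{Edd}}=\frac{\eta \sigma _Tc}{4\pi Gm_p}\simeq 4.5\times 10^7\left(\frac{\eta }{0.1}\right)\,\mathrm{yr},$$
so that $`M=3\times 10^8`$ $`M_{\odot }`$ gives $`L_{Edd}\approx 4\times 10^{46}`$ erg s<sup>-1</sup>, and the mass-doubling time is $`t_{Salp}\mathrm{ln}2\approx 3\times 10^7`$ yr.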
Such an obscured powerful object would locally be classified as a ULIRG (see Sanders & Mirabel 1996), although the distant ones need not be the same as the local ones, which are perhaps mainly fuelled by mergers.
Of course it is possible that massive black holes grew inside galaxies which themselves were merging back at $`z\sim 2`$. Nevertheless, unless they were all assembled from smaller holes just before accretion switched off, it is probable that they emit for a reasonable fraction of the last doubling time as a single object.
## 7. Obscuration in a growing, isothermal galaxy spheroid
Consider an isothermal galaxy in which a significant fraction $`f_c`$ of cooled gas remains as cold dusty clouds instead of rapidly forming stars. At the centre a black hole grows by accretion from the surrounding cold (and hot) gas. Assume that the nucleus also blows a wind of velocity $`v_w`$ which has a power $`L_w=\alpha L_{Edd}`$. Eventually the wind becomes powerful enough to blow away the surrounding gas and so shut off the accretion and further growth of the black hole and spheroid. The Magorrian et al (1998) black-hole – spheroid mass relation can then be obtained (Silk & Rees 1998; Fabian 1999; Blandford 1999).
The kinetic power of a wind at which it ejects cold gas of column density $`N_H`$ from a spheroid is given by
$$L_w\approx 2\pi GM_{sph}m_pN_Hv_w$$
or
$$L_w\approx f_c\sigma ^4v_wG^{-1},$$
where $`\sigma `$ is the velocity dispersion within the spheroid. (I have used a force argument here, see Fabian 1999; Silk & Rees 1998 use an energy argument to obtain a limit of $`\sigma ^5/G`$, which is a factor $`\sigma /v_w`$ smaller than the above $`L_w`$.) Ejection occurs when
$$M_{bh}\approx \frac{\sigma ^4\sigma _T}{4\pi G^2m_p}\,\frac{v_w}{c}\,\frac{f_c}{\alpha }.$$
Using the Faber-Jackson relation for spheroids ($`M_{sph}\propto \sigma ^4`$) then yields, if $`\frac{v_w}{c}\frac{f_c}{\alpha }\approx 1`$
$$M_{bh}\approx 0.005M_{sph},$$
close to the Magorrian et al (1998) relation.
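The intermediate steps can be sketched as follows (our reconstruction, for a singular isothermal sphere with cold-gas fraction $`f_c`$; order-unity factors are not tracked). The gas density, the column seen looking outward from radius $`r`$, and the enclosed mass are
$$\rho _g(r)=\frac{f_c\sigma ^2}{2\pi Gr^2},\qquad N_H(r)=\int _r^{\mathrm{\infty }}\frac{\rho _g}{m_p}\,dr^{\prime }=\frac{f_c\sigma ^2}{2\pi Gm_pr},\qquad M(r)=\frac{2\sigma ^2r}{G},$$
so that $`L_w\approx 2\pi GM(r)m_pN_H(r)v_w\approx 2f_c\sigma ^4v_w/G`$, independent of radius; equating this to $`\alpha L_{Edd}=4\pi \alpha GM_{bh}m_pc/\sigma _T`$ reproduces the ejection condition above.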
At that point the column density into the accretion radius is $`N_H\approx N_T=\sigma _T^{-1}`$, so the growth is (just) Compton thick. The growth of massive black holes is radiatively efficient, highly obscured and gives rise to much of the XRB. It is also intimately linked with the growth of galaxy spheroids, the main evolution of which is terminated by a quasar wind. X-ray observations probe best the underlying obscured nucleus at (rest frame) energies of about 30 keV. Indeed X-rays are the best diagnostic of the black hole accretion history of the Universe.
The optically bright quasar phase (from an outside observer’s point of view) follows over the next few million years as the accretion disc around the black hole empties. The early phase as the wind clears the gas away can be identified with BAL quasars. The central engine is only revived after the quasar phase if a merger or other event brings in sufficient low angular momentum gas to fuel it.
The prospects of testing the above scenario and absorption models of the XRB are close at hand, with Chandra and XMM. They should detect large numbers of faint, but powerful absorbed sources in the 3–10 keV band, due to the negative K correction involved (see Fig. 3) and identify them with luminous FIR/sub-mm–emitting young galaxy spheroids.
## ACKNOWLEDGEMENTS
I am grateful to Kazushi Iwasawa, Paul Nulsen and Richard Wilman for continued collaboration and the organisers for an interesting conference. The Royal Society is thanked for support.
## REFERENCES
Almaini O Lawrence A Boyle B 1998 MNRAS 305 59
Blandford RD 1999 astro-ph 9906025
Blandford RD Znajek RL 1977 MNRAS 179 433
Celotti A Fabian AC Ghisellini G Madau P 1995 MNRAS 277 1169
Comastri A Setti G Zamorani G Hasinger G 1995 A&A 296 1
Elvis M 1994 ApJS 95 1
Fabian AC 1999 MNRAS 308 L39
Fabian AC Barcons X Almaini O Iwasawa K 1998 MNRAS 297 L11
Fabian AC Iwasawa K 1999 MNRAS 303 L34
Fixsen D Dwek E Mather JC Bennett CL Shafer RA 1998 ApJ 508 123
Madau P Ghisellini G Fabian AC 1994 MNRAS 270 L17
Magorrian J et al 1998 AJ 115 2285
Maiolino R et al 1998 A&A 338 781
Matt G Fabian AC 1994 MNRAS 267 187
Nulsen PEJ Fabian AC 1999 MNRAS in press
Salucci P Szuszkiewicz E Monaco P Danese L 1999 MNRAS
Sanders DB Mirabel IF 1996 ARAA 34 749
Silk J Rees MJ 1998 A&A 331 L1
Setti G Woltjer L 1989 A&A 224 L21
Shlosman I Begelman MC Frank J 1990 Nature 345 679
van der Marel RP 1999 AJ 117 744
Wilman RJ Fabian AC 1999 MNRAS 309 862
# ON THE DETERMINATION OF STAR FORMATION RATES IN EVOLVING GALAXY POPULATIONS
## 1 INTRODUCTION
The stellar content and hence the spectral energy distribution (SED) of a galaxy depends on many factors. Accurate predictions of galaxy SEDs require sound theories of stellar evolution and stellar atmospheres, including transient and extreme phases that remain difficult to model. In addition, the evolution of a galaxy SED depends on (i) the initial mass function (IMF) of stars and (ii) the history of the star formation rate (SFR). Parts of the SED, such as the UV continuum and the Balmer lines, are sensitive to the recent IMF and SFR, while other parts, such as the IR continuum, reflect long-term averages. This opens the possibility of using observations of SEDs to determine the current and past SFR in a particular galaxy, and, by observing the SEDs of populations of galaxies over a range of redshifts, the history of star formation in the universe. To do this, one needs a galaxy spectral synthesis model connecting SEDs with SFRs and IMFs. Early models of this kind were reviewed by Tinsley & Danly (1980), and more recent work is summarised by Leitherer et al. (1996) and Schaerer (1999).
The past decade or so has seen the acquisition of a rapidly growing body of data on the distribution, over redshift, absolute luminosity, and galaxy type, of those spectral properties of galaxies sensitive to star formation rates. Particularly notable have been new data on rest-frame UV luminosities to redshifts $`z\sim 4`$ (Lilly et al., 1996; Madau et al., 1996; Cowie et al., 1997; Connolly et al., 1997; Madau, Pozzetti & Dickinson, 1998; Pascarelle, Lanzetta & Fernández-Soto, 1998; Treyer et al., 1998; Cowie, Songaila & Barger, 1999; Sullivan et al., 1999), emission line luminosities in H$`\alpha `$ and \[Oii\] $`\lambda 372`$ nm to $`z\sim 1`$ (Gallego et al., 1995; Cowie et al., 1997; Tresse & Maddox, 1998; Glazebrook et al., 1998), and photometry (with redshift estimates to $`z\sim 1`$) in the far-IR, sub-mm and radio spectral bands (Rowan-Robinson et al., 1997; Flores et al., 1999; Hughes et al., 1998; Blain et al., 1999; Cram et al., 1998). The data have been interpreted by several of these authors using galaxy spectral synthesis models, to yield estimates of the star formation rate as a function of redshift.
Although it is widely recognised that there are numerous sources of uncertainty in the process of inferring star formation rates from the observable diagnostics, there have been few systematic, internally consistent investigations of these uncertainties (see also Schaerer, 1999). In an attempt to bridge this gap, this paper uses a galaxy spectral evolution code to test the self-consistency of common diagnostic procedures. We do this by comparing the known star formation history of a model universe containing specified galaxy populations with the star formation history that would be inferred by applying commonly adopted diagnostic procedures. Two questions are addressed: (1) are star formation rates derived from H$`\alpha `$ and UV luminosities consistent with each other? and (2) are star formation rates inferred from luminosity densities consistent with the true star formation rate in the model? It is important to stress that we do not aim to explore the validity of any particular model of cosmic star formation history: we are concerned here only with checking the internal consistency of diagnostic procedures.
## 2 THE MODEL AND ITS CALIBRATION
We use the galaxy spectral evolution model pegase (Fioc & Rocca-Volmerange, 1997) and the galaxy population evolution model of Pozzetti, Bruzual & Zamorani (1996) to predict the evolution of the H$`\alpha `$ and UV luminosity densities in a “model universe” with a known star formation history. The key steps in our approach are (1) calculate the evolution of the actual SFR density defined by the parameters given in the model; (2) combine pegase and the model universe to predict the evolving luminosity densities; (3) use pegase to calibrate the SFR in terms of luminosity density using the methods commonly applied to observations, and (4) combine the calibration and the predicted luminosity density evolution to deduce the SFR history for comparison with step (1).
Pozzetti et al. (1996) explored pure luminosity evolution (PLE) models based on a mix of four galaxy types, E/S0, Sab-Sbc, Scd-Sdm and very Blue (vB). The different types are denoted hereinafter by the parameter $`k`$. The local luminosity function $`\mathrm{\Phi }_k(L)`$ of each type in each selected waveband is parametrised by the local space density $`\mathrm{\Phi }_k^*`$, characteristic luminosity $`L_k^*`$, and faint-end slope $`\alpha _k`$. Each type also has a characteristic IMF $`\mathrm{\Psi }_k(M)`$ and star formation rate history, $`\dot{\rho }_k(t)`$. A Scalo-type IMF is used for the E/S0 and Sab-Sbc types, hereinafter called “early”, while a Salpeter-type IMF is used for the Scd-Sdm and vB types, hereinafter called “late”. For the E/S0 galaxies, Pozzetti et al. consider two models distinguished by different e-folding times ($`\tau _1,\tau _2`$) in their SFR. We adopt the $`\tau _2`$ model.
Pozzetti et al. (1996) constructed their model universe to match a number of observational constraints, including the source count distribution in several optical and IR photometric bands, the distribution of colours as a function of apparent magnitude, and the distribution of redshifts as a function of magnitude. Pozzetti et al. (1996) exhibit a PLE model which, in an $`\mathrm{\Omega }=0`$ Friedmann cosmology, leads to acceptable agreement with almost all of these constraints. They also deduce that PLE models in a flat ($`\mathrm{\Omega }=1`$) cosmology cannot reproduce several aspects of the data, and therefore we consider only the $`\mathrm{\Omega }=0`$ and $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> model.
One constraint not used by Pozzetti et al. is the observed redshift dependence of the luminosity density in certain wavebands. Figure 1 compares the UV luminosity density ($`\mathcal{L}^{200}`$ – see below) of the model of Pozzetti et al. with the observations of Cowie, Songaila & Barger (1999). While there remain significant uncertainties in the measured UV luminosity density (cf. Lilly et al., 1996; Cowie, Songaila & Barger, 1999; Sullivan et al., 1999), there is satisfactory agreement between the prediction and recent measurements. The significance of this will be amplified below.
Our application of pegase takes place in two steps. First, for galaxies of type $`k`$ we compute the time-dependent spectral emission which follows the instantaneous formation of 1 M<sub>⊙</sub> of stars. We use the evolutionary tracks of Bressan et al. (1993) supplemented to later evolutionary phases and to lower masses as indicated in Fioc & Rocca-Volmerange (1997). We use the spectral stellar library described by Fioc & Rocca-Volmerange (1997). We ignore extinction in the prediction of the UV luminosity, and assume the number of ionizing photons to be 70% of the Lyman continuum photons. To ensure agreement with the evolutionary tracks, pegase has an upper limit of 120 M<sub>⊙</sub> for the chosen IMFs. This leads to a minor inconsistency with the upper limit of 125 M<sub>⊙</sub> used by Pozzetti et al., but this has no effect on our conclusions.
In the second step we convolve the time-dependent spectral emission with the star formation rate history $`\dot{\rho }_k(t)`$, to determine the evolution of $`L_k^*`$. Inserted into the luminosity function in the model of Pozzetti et al., this yields a prediction of the luminosity density $`\mathcal{L}_k^p(t)`$ produced in the (rest-frame) waveband $`p`$ at time $`t`$ by the population $`k`$ undergoing star formation with a rate density of $`\dot{\rho }_k(t)`$. The total SFR density is then clearly
$$\dot{\rho }(t)=\sum _k\dot{\rho }_k(t),$$
and the total luminosity density in waveband $`p`$ is
$$\mathcal{L}^p(t)=\sum _k\mathcal{L}_k^p(t).$$
For illustration it is convenient to define the ratio
$$R_k^p(t)=\dot{\rho }_k(t)/\mathcal{L}_k^p(t).$$
It is also convenient to define a global ratio for waveband $`p`$ as
$$R^p(t)=\dot{\rho }(t)/\mathcal{L}^p(t).$$
Our notation recognises that $`R`$ may be time dependent, and may depend on the galaxy type $`k`$ and waveband $`p`$.
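Since $`R^p`$ is the luminosity-weighted mean of the type-specific $`R_k^p`$, no single calibration constant can serve all types unless the $`R_k^p`$ coincide. A toy numerical sketch of this point (all numbers below are invented purely for illustration, not model values):

```python
import numpy as np

# Invented illustrative values for the four types E/S0, Sab-Sbc,
# Scd-Sdm, vB: SFR densities and luminosity densities in one band p.
sfr_k = np.array([0.002, 0.010, 0.008, 0.004])   # Msun / yr / Mpc^3
lum_k = np.array([0.10, 0.90, 0.80, 0.50])       # arbitrary units

R_k = sfr_k / lum_k                    # type-specific ratios R_k^p
R_global = sfr_k.sum() / lum_k.sum()   # global ratio R^p

# R^p equals the luminosity-weighted mean of the R_k^p, so a single
# calibration constant cannot be exact unless all R_k^p coincide:
assert np.isclose(R_global, np.average(R_k, weights=lum_k))
print(R_k, R_global)
```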
Figures 2(a) and (b) exhibit the evolution with time of $`R_k^p(t)`$ for each galaxy type and of the global ratio $`R^p(t)`$, respectively for the 200 nm continuum and for H$`\alpha `$. As previously stressed by Kennicutt (1983), Schaerer (1999) and others, the ratio for H$`\alpha `$ in each galaxy type rapidly settles to a steady value, reflecting the fact that H$`\alpha `$ emission is completely controlled by the short-lived, massive component of the IMF. The difference of $`\sim 0.5`$ dex between the asymptotic ratios for the early and late-type galaxies is due to the adoption of Scalo and Salpeter IMFs, respectively, for the two types.
The evolution of $`R_k^p(t)`$ for the 200 nm continuum is quite complex. For the E/S0 type, in which most star formation takes place in the first 2 Gyr, the ratio displays a steady decline over $`\sim 1`$ Gyr, a plateau to $`\sim 8`$ Gyr, and a subsequent decline to the present epoch. The initial decline arises from the rapid evolution of the initial burst, while the later decline reflects the late stages of evolution of relatively low mass stars at a time when few new stars are being born. The extended plateaux in the Sab-Sbc and Scd-Sdm types reflect the slower change in the star formation rate in these populations, while the difference in asymptotic value is due to the different IMFs. By definition, the vB component has a ratio equal to that of the Scd-Sdm type at an age of 100 Myr.
To emulate observational procedures, we require calibration constants $`C^p`$ that do not depend on time or galaxy type. These have been estimated by previous workers using galaxy spectral synthesis models over a range of star-forming histories and ages, and selecting a “typical” value (e.g. Kennicutt, Tamblyn & Congdon, 1994; Schaerer, 1999). We have conducted a similar study using pegase. As with previous derivations of calibration factors, we find a significant sensitivity to the IMF, but this is not the focus of our study. Accordingly, we explore calibrations based on both the Salpeter (“late”) and Scalo (“early”) models adopted by Pozzetti et al. The calibration constants are listed in Table 1. We acknowledge that there is inevitable uncertainty in the precise numerical values of the calibration factors in our study, as there is in other studies, but stress that this uncertainty has no effect on our conclusions.
## 3 THE INFERRED STAR FORMATION RATE AND ITS EVOLUTION
Figure 3 exhibits the time dependent star formation rate density of each galaxy type, and for the totality of the populations. Early-type galaxies dominate star formation for $`z>1`$, while all types except E/S0 contribute for $`z\sim 0`$. Figure 4 shows the global star formation history that would be inferred from the luminosity densities using each of the calibration factors listed in Table 1. Despite the self-consistency in our approach, in no case does the inferred value match the actual star formation history shown as the solid line in Figures 3 and 4.
There are two reasons for the discrepancies seen in Figure 4 at $`z=0`$. First, the fact that the early and late-type populations have different IMFs implies that neither a Salpeter nor a Scalo calibration factor applied to the global luminosity density will yield the true star formation rate. Secondly, even in the absence of this difference, the 200 nm and H$`\alpha `$ calibrations are not consistent because they refer to different averages over the recent star formation history. This can be seen clearly in Figure 2, where the vB and Scd-Sdm components have identical values of $`R_k^{\mathrm{H}\alpha }(t)`$ after the initial transient phase of $`\sim 10`$ Myr, but have different values of $`R_k^{200}(t)`$ except at 100 Myr.
Another way to view the discrepancy is to contrast the relative contribution of each galaxy type to the luminosity density with its contribution to the star formation rate. Table 2 shows, for example, that at $`z=0`$ the vB galaxy type contributes 32% to $`\mathcal{L}^{\mathrm{H}\alpha }`$ and 27% to $`\mathcal{L}^{200}`$, while its contribution to the star formation rate itself is 24%.
At high redshift yet another factor comes into play: the relative mix of galaxy types changes as each undergoes luminosity evolution with its specified star formation history. The systematic errors seen at $`z=0`$ therefore change with redshift. Not surprisingly, we see that for each waveband the Scalo IMF calibration is poorer than the Salpeter at $`z=0`$, and that the situation reverses at $`z=2`$, since the early-type galaxies become dominant at higher redshifts. There are, however, always systematic errors in the inferred star formation rates.
## 4 DISCUSSION AND PROSPECTS
The qualitative trends in our results could have been anticipated on the basis of previous studies of the influence of the IMF and the star formation history on the calibration of luminosity densities in terms of star formation rates (e.g. Schaerer, 1999). Our study shows quantitatively that attempts to infer the SFR density locally and over a range of redshifts can be compromised by the presence of different galaxy types, whose mix evolves differentially. Such differential evolution is an almost inevitable consequence of models based on pure luminosity evolution or, indeed, other descriptions of cosmic evolution. Insofar as the model of Pozzetti et al. (1996) is typical in respect of its mixture of galaxy types, systematic errors can be anticipated of the order of a factor of at least 2 in both the absolute value of SFR and in its relative evolution in $`0<z<2`$. Some of the intrinsic problems that arise from the adoption of fixed calibration factors for the relation between SFR density and luminosity density can be partially addressed by computing the luminosity density explicitly from a model universe for comparison with observations (cf. Figure 1).
At first sight, the inconsistency between the SFR inferred from 200 nm and H$`\alpha `$ calibrations could be regarded as a serious problem. However, the difference between the two values contains potentially useful diagnostic power regarding the star formation history. Observations of a variety of different diagnostics of the star formation rate in a sample of galaxies could provide a more robustly constrained star formation history, by allowing the determination of other important factors (such as type-specific IMFs). However, the number of star formation diagnostics accessible to observation is not large, and there are many parameters to be constrained. Clearly, the possible existence of type-specific IMFs and star-formation histories presents a significant challenge to any systematic investigation of the cosmic evolution of star formation.
We wish to thank Michael Rowan-Robinson for helpful comments and suggestions. JA gratefully acknowledges support in the form of a scholarship from the Science and Technology Foundation (FCT, Portugal) through Program Praxis XXI.
# Non-Gaussianity and the recovery of the mass power spectrum from the Ly$`\alpha `$ forest
## 1 Introduction
Since high resolution QSO spectra became available, the transmitted flux in QSO spectra, or the Ly$`\alpha `$ forest, has offered an unprecedented opportunity to study the large-scale structure of the universe and its evolution at redshifts beyond the reach of galaxy redshift catalogs (e.g. Bi & Davidsen 1997 and references therein). A basic goal of this study is to reconstruct the initial mass field. Assuming that these objects trace the underlying matter field in some way, it seems possible to trace the evolution of the mass field back in time. Because the initial mass field is expected to be Gaussian in many models of the origin of fluctuations, reconstructing the initial mass fluctuations is synonymous with recovering the mass power spectrum (e.g. Croft et al. 1999).
The recovery of power spectrum from the Ly$`\alpha `$ forests relies upon two theoretical conjectures. The first is to assume that the transmitted flux of a QSO Ly$`\alpha `$ absorption spectrum is a point-to-point tracer of the underlying dark matter distribution. The Ly$`\alpha `$ forest has been successfully modeled by the absorption of the ionized intergalactic gas, of which the distribution is continuous, and locally determined by the underlying dark matter distribution (Bi 1993; Fang et al. 1993; Bi, Ge & Fang 1995; Hernquist et al 1996; Bi & Davidsen 1997; Hui, Gnedin & Zhang 1997). Thus, the transmitted flux of a QSO absorption spectrum at a given redshift depends only on the mass density of dark matter at the position corresponding to the redshift.
The second assumption is that the initial mass field can be recovered from the flux of a QSO spectrum by the Gaussianization algorithm (Weinberg 1992; Croft et al. 1998). With this method, the shape of the 1-D initial Gaussian density field with an arbitrary normalization can be recovered approximately from the observed flux by a point-to-point Gaussian mapping if the relation between flux and mass density is monotonic, i.e. the higher the underlying mass density, the stronger the Ly$`\alpha `$ absorption. Monotonicity should be a good approximation in the weakly nonlinear, or quasilinear, regime of cosmic clustering.
This paper studies the influence of the non-Gaussianity of the Ly$`\alpha `$ forest on the recovery of the initial power spectrum. It is motivated by the recent systematic detections of non-Gaussianity in the Ly$`\alpha `$ forests. Although it is well known that the two-point correlation function of the Ly$`\alpha `$ absorption lines is quite weak, the distribution of these lines does show non-Gaussian behavior. For instance, it has been known for about ten years that the distribution of the nearest neighbor Ly$`\alpha `$ line intervals is different from a Poisson process (Duncan, Ostriker, & Bajtlik 1989; Liu and Jones 1990; Fang, 1991). Recently, the detection of the spectrum of higher order cumulants (Pando & Fang 1998a) and the scale-scale correlations (Pando et al. 1998) of the Ly$`\alpha `$ forests implies systematic non-Gaussianity on scales as large as about 10 h<sup>-1</sup> Mpc. The abundance of the Ly$`\alpha `$ line “clusters” identified with respect to the richness is also found to be significantly different from a Gaussian process (Pando & Fang 1996).
According to the philosophy of the Gaussianization reconstruction, none of the non-Gaussian features of the Ly$`\alpha `$ forests is primordial. They should be removed by the Gaussianization of the flux of QSO spectra, and the recovered mass field should be Gaussian. The algorithm of Gaussian mapping is designed to remove the non-Gaussianities of the flux and to recover a Gaussian mass field.
The idea of Gaussianization is elegant. However, we will show that even though the current algorithm of Gaussian mapping does map the distribution of the flux values into a Gaussian probability distribution function (PDF), the above-mentioned non-Gaussianities of the Ly$`\alpha `$ forests still largely remain in the Gaussianized flux. Namely, the recovered field is not linear and Gaussian, but contaminated by the non-Gaussian behavior in the Ly$`\alpha `$ forest.
It has been recognized that the estimation of the power spectrum is significantly affected by non-Gaussian behavior, such as the correlation between band-averaged power spectra, which is essentially the scale-scale correlation (e.g. Meiksin & White 1999). Therefore, with the current algorithm of Gaussianization, the recovered power spectrum is distorted by the non-Gaussianity of the Ly$`\alpha `$ forests. The question is then raised: how can the algorithm of Gaussianization be improved in order to recover a mass field free from the non-Gaussianity of the Ly$`\alpha `$ forest? Or how can the effect of the non-Gaussian contamination be suppressed in the estimation of the power spectrum? We investigate these problems in this paper.
This paper is organized as follows. In §2, using popular cold dark matter models, we present the non-Gaussian features in the transmitted flux of the Ly$`\alpha `$ forests. In §3, we demonstrate that the non-Gaussianity of mass field recovered by the conventional Gaussianization algorithm is about the same order as the original non-Gaussianity. Two alternatives which yield better Gaussianization are then proposed. In §4, the distortion of power spectrum by the non-Gaussianity is shown, and a possible way of suppressing the non-Gaussian effect on the power spectrum detection, i.e. properly choosing representation of the power spectrum, is suggested. We conclude this paper with a discussion of our findings in §5.
## 2 The Non-Gaussian features of the Ly$`\alpha `$ forests
### 2.1 Samples of the Ly$`\alpha `$ forests
To investigate the recovery method of the mass power spectrum, we generate simulation samples of the Ly$`\alpha `$ forest in the semi-analytic model of the intergalactic medium (IGM) developed by H.G. Bi et al. (Bi 1993; Fang et al. 1993; Bi, Ge & Fang 1995; Bi & Davidsen 1997). This model can approximately fit most observed features of the Ly$`\alpha `$ forest, including the column density distribution and the number density of the Ly$`\alpha `$ forest lines; the distribution of the equivalent widths and their redshift dependence; the clustering and the Gunn-Peterson effect. Moreover, in this model, the relations among the dark matter field, the flux of the Ly$`\alpha `$ absorption and the power spectrum of reconstructed initial mass fields are under control, which makes it very useful for revealing the problems of the reconstruction.
The model was described in detail in the above listed references. We now give a brief account of, especially, the fundamental physics underlying this model. The basic assumption of the model is that the density distribution of the baryonic diffuse matter $`n(𝐱)`$ in the universe is determined by the underlying dark matter density distribution via a lognormal relation as
$$n(𝐱)=n_0\mathrm{exp}\left[\delta _0(𝐱)-\frac{\langle \delta _0^2\rangle }{2}\right],$$
(1)
where $`n_0`$ is the mean number density, and $`\delta _0(𝐱)`$ is a Gaussian random field derived from the density contrast $`\delta _{DM}`$ of the dark matter by:
$$\delta _0(𝐱)=\frac{1}{4\pi x_b^2}\int \frac{\delta _{DM}(𝐱_1)}{|𝐱-𝐱_1|}e^{-\frac{|𝐱-𝐱_1|}{x_b}}𝑑𝐱_1$$
(2)
in the comoving space, or
$$\delta _0(𝐤)=\frac{\delta _{DM}(𝐤)}{1+x_b^2k^2}$$
(3)
in the Fourier space. To take into account the effect of redshift distortion, the peculiar velocity field along the line of sight is also calculated by the simulation model (Bi 1993; Fang et al. 1993; Bi& Davidsen 1997.)
The Gaussian field $`\delta _{DM}`$ is produced in a cold dark matter model. To account for the baryonic effect on the transfer function, we adopt the fitting formula for the power spectrum presented by Eisenstein & Hu (1999). Because the goal of this paper is mainly to examine the recovery method of the power spectrum, we will not take into account the variants of the CDM family, but only the “standard” one, i.e. the flat model ($`\mathrm{\Omega }_0=1.0`$) normalized by the 4-year COBE data, with the shape parameter $`\mathrm{\Gamma }=\mathrm{\Omega }_0h`$ taken to be 0.3, where $`h`$ denotes the normalized Hubble parameter, and $`\mathrm{\Omega }_0`$ is the cosmological density parameter of total mass. This model is compatible with the galaxy correlation observed on scales of $`\sim 10`$ h<sup>-1</sup> Mpc (Efstathiou et al. 1992). The baryonic fraction in the total mass was fixed by the constraint from primordial nucleosynthesis, $`\mathrm{\Omega }_b=0.0125`$ h<sup>-2</sup> (Walker et al. 1991).
The factor $`x_b`$ in Eq. (2) is the Jeans length of IGM given by
$$x_b\simeq \frac{1}{2\pi H_0}\left[\frac{2\gamma kT_m}{3\mu m_p\mathrm{\Omega }(1+z)}\right]^{\frac{1}{2}},$$
(4)
where $`T_m`$ and $`\mu `$ are the density-averaged temperature and molecular weight of the IGM respectively, and $`\gamma `$ is the ratio of specific heats. The thermal equation of state of the IGM is assumed to be polytropic, $`T\propto n^{\gamma -1}`$ with $`\gamma =4/3`$.
The lognormal relation, Eq. (1), has the following properties: (1) When fluctuations are small, i.e. $`(n/n_0-1)\simeq \delta _0`$, Eq. (1) is just the expected linear evolution of the IGM; (2) On small scales, $`|𝐱-𝐱_1|\ll x_b`$, Eq. (1) becomes the well-known isothermal hydrostatic solution, which describes highly clumped structures such as intracluster gas, $`n\propto \mathrm{exp}(-\mu m_p\psi _{DM}/\gamma kT)`$, where $`\psi _{DM}`$ is the dark matter potential (Sarazin & Bahcall 1977).
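A minimal 1-D sketch of Eqs. (1) and (3) is given below; the input power-law spectrum, the Jeans length and the field variance are placeholder assumptions, not the model parameters of this section:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 2**14, 189.84          # pixels and comoving size (h^-1 Mpc)
x_b = 0.1                     # Jeans length x_b (h^-1 Mpc), assumed
k = 2 * np.pi * np.fft.rfftfreq(N, d=L / N)

# Toy Gaussian dark-matter field with an assumed power-law spectrum
P = np.zeros_like(k)
P[1:] = k[1:] ** -1.5
delta_dm_k = np.sqrt(P / 2) * (rng.standard_normal(k.size)
                               + 1j * rng.standard_normal(k.size))

delta0_k = delta_dm_k / (1 + x_b**2 * k**2)   # eq. (3): Jeans smoothing
delta0 = np.fft.irfft(delta0_k, n=N)
delta0 *= 0.8 / delta0.std()                  # toy normalization

n = np.exp(delta0 - 0.5 * delta0.var())       # eq. (1), in units of n_0
print(n.mean())                               # ~1 by construction
```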
The absorption optical depth at observed wavelength $`\lambda `$ is
$$\tau (\lambda )=\int _{t_{qso}}^{t_0}\sigma \left(\frac{c}{\lambda _\alpha }\frac{1+z}{1+z_0}\right)n_{HI}(t)𝑑t,$$
(5)
where $`z_0=(\lambda /\lambda _\alpha )-1`$, $`t_0`$ denotes the present time, $`t_{qso}`$ is the time corresponding to the redshift $`z_{qso}`$ of the QSO, and similarly for the relation between $`t`$ and $`z`$; $`\sigma `$ is the absorption cross section at the Ly$`\alpha `$ transition, and $`\lambda _\alpha =1216\AA `$ represents the Ly$`\alpha `$ wavelength. The density of the neutral hydrogen atoms, $`n_{HI}`$, can be found from $`n`$ by the cosmic abundance of hydrogen, and photoionization equilibrium (Bi, Ge & Fang 1995).
Obviously in this model, the relation between the transmitted flux, $`F(\lambda )=e^{-\tau }`$, and $`n(𝐱)`$ or $`\delta _0(𝐱)`$ is basically local. The non-locality is only caused by the width of the absorption cross section $`\sigma `$, and the peculiar velocity of the neutral hydrogen. Therefore, $`F`$ is approximately a point-to-point tracer of the mass fluctuation $`\delta _0(𝐱)`$. Moreover, $`F`$ is monotonically related to $`\delta _0`$, and then, to the density contrast $`\delta _{DM}`$ on scales larger than $`x_b`$.
In this paper, we produce the simulation samples in the redshift range $`z=2.066`$–$`2.436`$ with $`2^{14}`$ pixels. The corresponding simulation size in the CDM model is 189.84 h<sup>-1</sup> Mpc in comoving space, which is long enough to incorporate most of the fluctuation power. The selection of this redshift range allows a comparison of the simulation with the Keck spectrum of HS1700+64. The spectrum of HS1700+64 ranges from 3723.012 Å to 5523.554 Å with a resolution of $`\sim 3`$ km s<sup>-1</sup>, or totally 55882 pixels, of which the first $`2^{14}`$ pixels are chosen here. These data have been used for testing the model considered in Bi & Davidsen (1997).
### 2.2 The skewness and kurtosis spectra of the transmitted flux
If appropriate parameters of the intergalactic UV background are adopted, the lognormal IGM model described in §2.1 can successfully explain many observed properties of the Ly$`\alpha `$ forest and their evolution from redshift 2 to 4. Now we show that it also holds up against tests of the non-Gaussian features.
We use the wavelet transform to analyze the non-Gaussian behavior of the transmitted flux $`F`$. As a 1-D field, the flux $`F(\lambda )`$ in the wavelength range of $`L=\lambda _{max}\lambda _{min}`$ is subject to a discrete wavelet transform (DWT) as
$$F=\overline{F}+\sum _{j=0}^{\infty }\sum _{l=0}^{2^j-1}\stackrel{~}{ϵ}_{j,l}\psi _{j,l}(\lambda )$$
(6)
where $`\psi _{j,l}(x)`$, $`j=0,1,\ldots `$, $`l=0,\ldots ,2^j-1`$, is an orthogonal and complete set of the DWT basis (for details of the DWT, see e.g. Fang & Thews 1998). The wavelet basis $`\psi _{j,l}(x)`$ is localized both in the physical space and the Fourier (scale) space. The function $`\psi _{j,l}(x)`$ is centered at position $`lL/2^j`$ of the physical space, and at wavenumber $`2\pi \times 2^j/L`$ of the Fourier space. Therefore, the wavelet function coefficients (WFCs), $`\stackrel{~}{ϵ}_{j,l}`$, have two subscripts $`j`$ and $`l`$. They describe the fluctuation of the flux on scale $`L/2^j`$ at position $`lL/2^j`$. To be more specific, we will use the Daubechies 4 wavelet in this paper, although all conclusions are not affected by this particular choice as long as a compactly supported wavelet basis is used.
The WFC, $`\stackrel{~}{ϵ}_{j,l}`$, is computed by the inner product of
$$\stackrel{~}{ϵ}_{j,l}=\langle F,\psi _{j,l}\rangle .$$
(7)
Since the DWT bases are complete, the WFCs contain all information of the flux.
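In practice the WFCs can be obtained with a standard wavelet library. A minimal sketch using PyWavelets on a placeholder flux array (note that the paper's four-tap Daubechies 4 wavelet is called 'db2' in PyWavelets' naming scheme):

```python
import numpy as np
import pywt  # PyWavelets

F = np.random.rand(2**14)          # placeholder for a transmitted flux

level = pywt.dwt_max_level(len(F), 'db2')
coeffs = pywt.wavedec(F, 'db2', mode='periodization', level=level)
smooth, details = coeffs[0], coeffs[1:]

# `details` runs from the coarsest to the finest scale; each finer
# scale holds twice as many coefficients eps_{j,l} as the one before.
for m, d in enumerate(details):
    print(m, d.size)
```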
Note that the $`\psi _{j,l}(x)`$ are orthogonal with respect to the position index $`l`$, and therefore, for an ergodic field, the $`2^j`$ WFCs at a given $`j`$, i.e. $`\stackrel{~}{ϵ}_{j,l}`$, $`l=0,1,\ldots ,2^j-1`$, can be treated as independent measures of the flux field. The $`2^j`$ WFCs, $`\stackrel{~}{ϵ}_{j,l}`$, from one realization of $`F(\lambda )`$ can be employed as a statistical ensemble. In other words, when the fair sample hypothesis holds (Peebles 1980), an ensemble average can be estimated equivalently by averaging over $`l`$, i.e. $`\langle \stackrel{~}{ϵ}_{j,l}\rangle \simeq (1/2^j)\sum _{l=0}^{2^j-1}\stackrel{~}{ϵ}_{j,l}`$, where $`\langle \cdots \rangle `$ denotes the ensemble average. The distribution of $`\stackrel{~}{ϵ}_{j,l}`$ represents approximately the one-point distribution of the WFCs at a given scale $`j`$.
The non-Gaussianity of the flux $`F(\lambda )`$ can be directly measured by the deviation of the one-point distribution from a Gaussian distribution. For this purpose, we calculate the cumulant moments defined by
$$I_j^2=M_j^2,$$
(8)
$$I_j^3=M_j^3,$$
(9)
$$I_j^4=M_j^4-3M_j^2M_j^2,$$
(10)
$$I_j^5=M_j^5-10M_j^3M_j^2,$$
(11)
where
$$M_j^n\equiv \frac{1}{2^j}\sum _{l=0}^{2^j-1}(\stackrel{~}{ϵ}_{j,l}-\overline{\stackrel{~}{ϵ}_{j,l}})^n.$$
(12)
The second order cumulant moment gives the DWT power spectrum (§4) (Pando & Fang 1998). For Gaussian fields all the cumulant moments higher than order 2 are zero. Thus one can measure the non-Gaussianity by $`I_j^n`$ with $`n>2`$. We call $`I_j^n`$ the DWT spectrum of $`n`$-th cumulant. The cumulant measures $`I_j^3`$ and $`I_j^4`$ are related to the well known skewness and kurtosis, respectively, defined by
$$S_j\equiv \frac{1}{(I_j^2)^{3/2}}I_j^3,$$
(13)
$$K_j\equiv \frac{1}{(I_j^2)^2}I_j^4.$$
(14)
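A direct transcription of Eqs. (8)-(14) is straightforward once the WFCs are available, e.g. as a list of per-scale arrays like the `details` list in the PyWavelets sketch above:

```python
import numpy as np

def cumulant_spectra(wfcs):
    """Skewness S_j and kurtosis K_j per scale from a list of WFC arrays."""
    S, K = [], []
    for eps in wfcs:
        d = eps - eps.mean()
        I2 = np.mean(d**2)                 # eq. (8)
        I3 = np.mean(d**3)                 # eq. (9)
        I4 = np.mean(d**4) - 3 * I2**2     # eq. (10)
        S.append(I3 / I2**1.5)             # eq. (13)
        K.append(I4 / I2**2)               # eq. (14)
    return np.array(S), np.array(K)

# For Gaussian noise both spectra scatter around zero:
wfcs = [np.random.normal(size=2**j) for j in range(4, 14)]
S_j, K_j = cumulant_spectra(wfcs)
print(S_j.round(2), K_j.round(2))
```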
Using the skewness and kurtosis spectra as statistical indicators, significant non-Gaussian behavior has been found in the distribution of Ly$`\alpha `$ forest lines (Pando & Fang 1998a). The skewness and kurtosis spectra of the transmitted flux in 100 simulated samples are shown in Figs. 1 and 2, respectively. To assess the statistical significance, the 95% confidence range from 100 realizations of Gaussian noise is also displayed in these figures. Clearly, the kurtosis spectrum of the simulated $`F`$ shows a difference from the Gaussian noise spectra on the scales $`j\geq 8`$ (or $`\lesssim 1.5`$ h<sup>-1</sup> Mpc) with 95% confidence. The skewness spectrum does not show a significant difference from the Gaussian noise till $`j=11`$ ($`\sim 100`$ h<sup>-1</sup> kpc). These results are qualitatively consistent with those for the observed forest line distributions. In addition, the skewness and kurtosis spectra of the flux of HS1700+64 are also presented in Figs. 1 and 2. Obviously, the CDM model is in excellent agreement with the observation.
### 2.3 The scale-scale correlations of the transmitted flux
The scale-scale correlations measure the correlations between the fluctuations on different scales (Pando et al. 1998, Pando, Valls–Gabaud, & Fang 1998, Feng, Deng & Fang 2000). This non-Gaussianity is independent of the higher order cumulants (§2.2), which depend only on $`j`$. A simplest measure of the scale-scale correlation is given by
$$C_j^{p,p}=\frac{2^{-(j+1)}\sum _{l=0}^{2^{j+1}-1}\stackrel{~}{ϵ}_{j;[l/2]}^p\stackrel{~}{ϵ}_{j+1;l}^p}{\langle \stackrel{~}{ϵ}_{j,[l/2]}^p\rangle \langle \stackrel{~}{ϵ}_{j+1;l}^p\rangle }$$
(15)
where $`p`$ is an even integer, and \[ \]’s denote the integer part of the quantity. Because $`Ll/2^j=L(2l)/2^{j+1}`$, the position $`l`$ at scale $`j`$ is the same as the positions $`2l`$ and $`2l+1`$ at scale $`j+1`$. Therefore, $`C_j^{p,p}`$ measures the correlation between fluctuations on scale $`j`$ and $`j+1`$ at the same physical point. For Gaussian fields, $`C_j^{p,p}=1`$. $`C_j^{p,p}>1`$ corresponds to the positive scale-scale correlation, and $`C_j^{p,p}<1`$ to the negative case. One variant of the above definition is
$$C_{j,\mathrm{\Delta }l}^{p,p}=\frac{2^{-(j+1)}\sum _{l=0}^{2^{j+1}-1}\stackrel{~}{ϵ}_{j;[l/2]+\mathrm{\Delta }l}^p\stackrel{~}{ϵ}_{j+1;l}^p}{\langle \stackrel{~}{ϵ}_{j,[l/2]}^p\rangle \langle \stackrel{~}{ϵ}_{j+1;l}^p\rangle }.$$
(16)
This statistic is for measuring the correlations between fluctuations on scales $`j`$ and $`j+1`$, but at different positions, i.e. the fluctuation at scale $`j`$ is displaced from the $`j+1`$ fluctuation by a distance $`\mathrm{\Delta }l\cdot L/2^j`$.
The scale-scale correlations $`C_j^{2,2}`$ calculated from the simulated transmitted flux and from HS1700+64 are shown in Fig. 3. Clearly, the values of $`C_j^{2,2}`$ are significantly larger than unity and well above the Gaussian noise spectra on all the scales $`j\geq 7`$. This result is also qualitatively in agreement with the scale-scale correlation of the Ly$`\alpha `$ forests (Pando et al. 1998). Figure 3 also indicates that the model of §2.1 is still in good shape for fitting the observed non-Gaussian correlation.
Similar to Eq. (15) for the correlation between scales $`|j-j^{\prime }|=1`$, one may define, in principle, the correlation between two arbitrary scales with $`|j-j^{\prime }|>1`$. However, for hierarchical clustering the scale-scale correlation is quantified mainly by $`|j-j^{\prime }|=1`$. Therefore, we will not calculate the scale-scale correlations for $`|j-j^{\prime }|>1`$.
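Operationally, Eq. (15) amounts to repeating each scale-$`j`$ coefficient twice so that it aligns with its two children at scale $`j+1`$. A minimal sketch:

```python
import numpy as np

def scale_scale_corr(eps_j, eps_j1, p=2):
    """C_j^{p,p} of eq. (15); eps_j has half as many entries as eps_j1."""
    parent = np.repeat(eps_j, 2)          # eps_{j,[l/2]}, l = 0..2^{j+1}-1
    num = np.mean(parent**p * eps_j1**p)
    den = np.mean(eps_j**p) * np.mean(eps_j1**p)
    return num / den

# Gaussian white noise gives C ~ 1 within sampling scatter:
rng = np.random.default_rng(1)
print(scale_scale_corr(rng.standard_normal(2**9), rng.standard_normal(2**10)))

# The displaced statistic of eq. (16) follows by rolling eps_j by
# Delta_l before repeating it.
```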
## 3 The Non-Gaussian features of the Gaussian-recovered mass fields
### 3.1 Non-Gaussianity after Gaussianization
The cosmological reconstruction aims to extract the power spectrum of the initial linear mass fluctuations from the observed distribution of various tracers of the evolved density field. The algorithm of Gaussianization was designed for recovering the primordial density fluctuations from an observed galaxy distribution (Weinberg 1992). This method has recently been applied to recovering the linear density field and its power spectrum from the observed transmitted flux $`F`$ of QSO absorption spectra (Croft et al 1998, 1999).
The key step of the Gaussianization algorithm is a pixel-to-pixel mapping from an observed flux $`F`$ into the density contrast $`\delta `$. The probability distribution function (PDF) of the observed transmitted flux $`F`$ is generally non-Gaussian, while the PDF of the initial density contrast $`\delta \equiv (n/n_0)-1`$ is assumed to be Gaussian in a large variety of galaxy formation models. The relation between $`F=\mathrm{exp}(-\tau )`$ and $`\delta `$ is monotonic, i.e. high initial density $`\delta `$ pixels evolved into high $`\tau `$ pixels, low initial density pixels into low $`\tau `$ pixels. Thus, using the observed $`F`$, one can sort the total N pixels by the optical depth $`\tau =-\mathrm{ln}F`$ in ascending order: the pixel with the lowest $`\tau `$ (highest $`F`$) is labeled 1st, the next higher $`\tau `$ pixel is labeled 2nd, and so on. For the n-th pixel, we then assign the density contrast $`\delta `$ given by the solution of the equation $`(2\pi )^{-1/2}\int _{-\infty }^\delta \mathrm{exp}(-x^2/2)𝑑x=n/N`$. Thus, the Gaussian mapping produces a mass field whose rank order follows that of the optical depth (the inverse of the flux rank order) but whose PDF of $`\delta (𝐱)`$ is Gaussian. The overall amplitude of the recovered power spectrum should be determined by a separate procedure. For instance, we may set up the initial condition by using the recovered spectrum, evolve the simulation to the observed redshift and then normalize the spectrum by requiring that the simulation reproduces the observed power spectrum of the transmitted QSO flux. This amplitude normalization is model-dependent.
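The Gaussian mapping itself is only a few lines of code. In the sketch below the rank $`n`$ is passed through the inverse Gaussian CDF using the common $`(n-1/2)/N`$ midpoint convention, an implementation detail not fixed by the text:

```python
import numpy as np
from scipy.stats import norm

def gaussianize(x):
    """Rank-order Gaussian mapping: rank n -> Phi^{-1}((n - 0.5)/N)."""
    ranks = np.argsort(np.argsort(x))       # 0 .. N-1, same ordering as x
    return norm.ppf((ranks + 0.5) / len(x))

# Toy non-Gaussian "optical depth" field; the mapped field has a
# Gaussian one-point PDF but keeps the rank order of tau:
tau = np.random.lognormal(size=10000)
delta = gaussianize(tau)
print(delta.mean().round(3), delta.std().round(3))   # ~0, ~1
```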
We apply the Gaussianization to 100 simulation samples of the QSO transmitted flux, and measure the skewness and kurtosis spectra as well as the scale-scale correlation. The results are displayed in Figs. 4 - 6. For comparison, the non-Gaussian spectra of the flux in Figs. 1 - 3 are also plotted correspondingly. Figs. 4 - 6 show that the Gaussianized flux still largely exhibits non-Gaussian features. Especially, the scale-scale correlations of the Gaussianized field are as strong as those of the pre-Gaussianized flux on scales $`j\geq 10`$. That is, the recovered density field is seriously contaminated by the non-Gaussianities in the original flux.
### 3.2 The efficiency of the conventional Gaussianization
The reason for the lower efficiency of the conventional Gaussian mapping (§3.1) is simple. The initial Gaussian random mass field is assumed to be a superposition of independent modes, of which the PDFs are Gaussian. For instance, in the Fourier representation, all Fourier modes of a Gaussian mass field are Gaussian, i.e. they have Gaussian PDF of the amplitudes and randomized phases. The conventional algorithm considered only the Gaussianization of one variable, $`\delta `$. It does not guarantee the Gaussianization of the amplitudes and phases of all relevant modes. In other words, the Gaussian mapping algorithm will work perfectly for a system with one stochastic variable, but not so for a field.
Alternatively, this problem can also be seen via the DWT representation. In analogy with Eq. (6), any 1-D mass field given by the density contrast $`\delta (x)`$ ($`\overline{\delta }=0`$) can be decomposed with respect to a DWT basis as
$$\delta (x)=\sum _{j=0}^{\infty }\sum _{l=0}^{2^j-1}\stackrel{~}{ϵ}_{j,l}^M\psi _{j,l}(x),$$
(17)
where the superscript $`M`$ means mass. Equation (17) represents a linear superposition of modes $`\psi _{j,l}`$. As has been pointed out in §2.2, for a given $`j`$, the 2<sup>j</sup> WFCs $`\stackrel{~}{ϵ}_{j,l}^M`$ form a statistical ensemble. The distribution of the 2<sup>j</sup> WFCs gives the one-point distribution of the amplitude of the mode at scale $`j`$. For the initial Gaussian mass field, these one-point distributions should be Gaussian. Obviously, the Gaussian PDF of $`\delta `$ does not imply that the one-point distributions of the WFCs for all $`j`$ are Gaussian (the central limit theorem). The amplitude $`\delta `$ can only play the role of a single variable of the field.
Moreover, even when the one-point distributions of the 2<sup>j</sup> WFCs at all $`j`$ are Gaussianized, the mass field could still be non-Gaussian. For instance, suppose the one-point distribution of the 2<sup>j</sup> WFCs, $`\stackrel{~}{ϵ}_{j,l}^M`$, on scale $`j`$, is Gaussian. If the WFCs on scale $`j+1`$ are given by
$$\stackrel{~}{ϵ}_{j+1;2l}^M=a\stackrel{~}{ϵ}_{j,l}^M,\qquad \stackrel{~}{ϵ}_{j+1;2l+1}^M=b\stackrel{~}{ϵ}_{j,l}^M,$$
(18)
where $`a`$ and $`b`$ are arbitrary constants, the one-point distribution of the 2<sup>j+1</sup> WFCs $`\stackrel{~}{ϵ}_{j+1,l}^M`$ is also Gaussian. However, Eq. (18) leads to a strong correlation between $`\stackrel{~}{ϵ}_{j+1,l}^M`$ and $`\stackrel{~}{ϵ}_{j,l}^M`$. This is an example of the scale-scale correlation, i.e. the scale $`j+1`$ fluctuations are always proportional to those on the scale $`j`$ at the same position. Moreover, this correlation cannot be eliminated by the Gaussianization of $`\stackrel{~}{ϵ}_{j;l}^M`$. The Gaussian mapping changes all the WFCs at a given position (pixel) by the same amplifying or reducing factor, and therefore, the local relations of Eq. (18) remain.
The scale-scale correlations depend only upon the statistical behavior of the fluctuation distribution with respect to the index $`j`$. Therefore, a Gaussian field requires that the distributions of WFCs with different $`j`$ be uncorrelated. This uncorrelation corresponds to decorrelating the band-averaged Fourier modes, which will be discussed in detail in §4.
### 3.3 Algorithms of scale-by-scale Gaussianization
Based on the considerations in the last section, we may design an algorithm which is capable of reducing the contamination of the non-Gaussianity, and of producing fields with less non-Gaussianity.
The new method is based on the scale-by-scale decomposition of flux and mass field. From Eq.(6), we have
$$F=F^j+\sum _{j^{\prime }=j}^{\infty }\sum _{l=0}^{2^{j^{\prime }}-1}\stackrel{~}{ϵ}_{j^{\prime },l}\psi _{j^{\prime },l},$$
(19)
and
$$F^j\equiv \overline{F}+\sum _{j^{\prime }=0}^{j-1}\sum _{l=0}^{2^{j^{\prime }}-1}\stackrel{~}{ϵ}_{j^{\prime },l}\psi _{j^{\prime },l}.$$
(20)
$`F^j`$ is $`F`$ smoothed by a filter on the scale $`j`$. There is a recursion relation for $`F^j`$ given by
$$F^{j+1}=F^j+\sum _{l=0}^{2^j-1}\stackrel{~}{ϵ}_{j,l}\psi _{j,l}.$$
(21)
Namely, flux $`F^{j+1}`$ can be reconstructed from flux $`F^j`$ and 2<sup>j</sup> WFCs $`\stackrel{~}{ϵ}_{j,l}`$ at the scale $`j`$. Similarly, for a mass distribution, we have
$$\delta =\delta ^j+\sum _{j^{\prime }=j}^{\infty }\sum _{l=0}^{2^{j^{\prime }}-1}\stackrel{~}{ϵ}_{j^{\prime },l}^M\psi _{j^{\prime },l},$$
(22)
$$\delta ^j\equiv \sum _{j^{\prime }=0}^{j-1}\sum _{l=0}^{2^{j^{\prime }}-1}\stackrel{~}{ϵ}_{j^{\prime },l}^M\psi _{j^{\prime },l},$$
(23)
and
$$\delta ^{j+1}=\delta ^j+\sum _{l=0}^{2^j-1}\stackrel{~}{ϵ}_{j,l}^M\psi _{j,l}.$$
(24)
Since the relations between $`F`$ and $`\delta `$ are local and monotonic, the smoothed flux $`F^{j+1}`$ depends only on the smoothed mass field $`\delta ^{j+1}`$, and one can perform a local and monotonic mapping between $`F^{j+1}`$ and $`\delta ^{j+1}`$. Thus, we can implement the reconstruction of the mass field $`\delta ^{j+1}`$ from $`F^{j+1}`$ by a scale-by-scale Gaussianization algorithm (hereafter referred to as algorithm I):
1. Supposing the reconstruction down to the scale $`j`$ has been done, i.e. $`\delta ^j`$ is already known;
2. Calculating the WFCs of the flux $`F`$ on the scale $`j`$;
3. Making the Gaussian mapping of the 2<sup>j</sup> WFCs $`\stackrel{~}{ϵ}_{j,l}`$, and assigning the Gaussianized result, $`\epsilon _{j,l}`$, to the 2<sup>j</sup> pixels according to the rank order. The distribution of $`\epsilon _{j,l}`$ is Gaussian with zero mean and variance one.
4. Finding the 2<sup>j</sup> WFCs of the mass field by
$$\stackrel{~}{ϵ}_{j,l}^M=\nu \epsilon _{j,l}.$$
(25)
where the parameter $`\nu `$ is a normalization factor to be determined. The one-point distribution of the WFCs of the mass field at the scale $`j`$, $`\stackrel{~}{ϵ}_{j,l}^M`$, is then Gaussianized.
5. Reconstructing the mass field $`\delta ^{j+1}`$ on scale $`j+1`$ by the recursion relation Eq. (24);
6. To determine the parameter $`\nu `$, we require that the DWT power spectrum of the flux $`F^{j+1}`$ simulated from $`\delta ^{j+1}`$ reproduces that of the observed flux $`F^{j+1}`$. We then have $`\delta ^{j+1}`$, and the reconstruction of the mass field on the scale $`j+1`$ is done.
Repeating the steps 1 to 6, one can reconstruct the mass field on scales from large to small until the scale of the resolution of the flux $`F`$, or the scale on which the relation between $`F`$ and $`\delta `$ is no longer local.
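A compressed sketch of steps 2-5 follows; the per-scale normalization factors $`\nu `$ of step 6, which in the full method are fixed by re-simulating the flux, are simply supplied as inputs here, and the wavelet choice is an assumption:

```python
import numpy as np
import pywt
from scipy.stats import norm

def gaussianize(x):                          # rank-order mapping, as above
    return norm.ppf((np.argsort(np.argsort(x)) + 0.5) / len(x))

def scale_by_scale_gaussianize(F, nu):
    """Algorithm I, steps 2-5; nu[m] is the normalization for scale m."""
    coeffs = pywt.wavedec(F, 'db2', mode='periodization')
    out = [coeffs[0]]                        # keep the smooth component
    for m, eps in enumerate(coeffs[1:]):
        out.append(nu[m] * gaussianize(eps)) # eq. (25), scale by scale
    return pywt.waverec(out, 'db2', mode='periodization')

F = np.random.rand(2**14)                    # placeholder flux
delta = scale_by_scale_gaussianize(F, nu=np.ones(20))
```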
Figure 7 illustrates the transmitted flux, the initial density field and the recovered density field by algorithm I. The recovered 1D density field is in excellent agreement with the original density field scale-by-scale. The non-Gaussianities of the recovered fields by algorithm I are shown in Figs. 4 - 6. The skewness and kurtosis spectra exhibit almost nothing but Gaussianity. The scale-scale correlation is also significantly reduced.
The Gaussianization algorithm I is conceptually clear. However, it requires determining the normalization factor $`\nu `$ at each scale, which makes the numerical calculation rather cumbersome. Moreover, there is still some residual scale-scale correlation in the recovered mass field. In fact, algorithm I does ensure the Gaussian PDF of $`\stackrel{~}{ϵ}_{j,l}^M`$, but it is unable to remove all the correlation between different modes, just as the simple example \[Eq. (18)\] of §3.2 demonstrates.
To avoid the multiple normalizations and keep the virtues of scale-by-scale Gaussianization, we design an alternative algorithm as follows (hereafter referred to as algorithm II):
1. Using the conventional Gaussianization (§3.1) to reconstruct the mass field, i.e. to perform the Gaussian mapping of the density contrast $`\delta `$ and normalize the mass field by requiring that the evolved simulations reproduce the power spectrum of the observed flux.
2. Calculating the WFCs $`\stackrel{~}{ϵ}_{j,l}`$ of the recovered mass field $`\delta ^M`$ on each scale $`j`$.
3. Similar to the step 3 of algorithm I, making the Gaussianization of $`\stackrel{~}{ϵ}_{j,l}`$ for each scale $`j`$ to produce unnormalized WFCs $`\epsilon _{j,l}^M`$.
4. Normalizing the WFCs $`\epsilon _{j,l}^M`$ on scale $`j`$ by requiring that the variance of $`\epsilon _{j,l}^M`$, i.e., the 2nd cumulant moment $`I_j^2`$ \[Eq. (8)\], is the same as that for the WFCs $`\stackrel{~}{ϵ}_{j,l}`$.
5. For each scale $`j`$, randomizing the spatial sequence of the Gaussianized WFCs $`\epsilon _{j,l}^M`$, i.e., making a random permutation among the index $`l`$.
6. Using these WFCs $`\epsilon _{j,l}^M`$, one can reconstruct the mass density field by Eq.(24) till the scale given by the resolution of the flux.
Algorithm II is still scale-by-scale in nature. However, the normalization is done only once, for the recovered $`\delta `$. Step 4 ensures that the normalization is unchanged after step 3, which eliminates the non-Gaussianities of the skewness and kurtosis spectra. Step 5 is for eliminating the residual scale-scale correlations by a randomization of the spatial index $`l`$ of $`\stackrel{~}{ϵ}_{j,l}`$. Namely, it changes only the positions of the $`\stackrel{~}{ϵ}_{j,l}`$, but not their values. Therefore, it is similar to a randomization of the phases of the Fourier modes, and will not change the normalization of the amplitude and power spectrum of the fields.
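Steps 2-6 can be sketched compactly as below; the wavelet choice and the guard against scales with too few coefficients are implementation assumptions:

```python
import numpy as np
import pywt
from scipy.stats import norm

rng = np.random.default_rng()

def gaussianize(x):
    return norm.ppf((np.argsort(np.argsort(x)) + 0.5) / len(x))

def algorithm_II(delta):
    """Re-Gaussianize a conventionally Gaussianized field scale by scale."""
    coeffs = pywt.wavedec(delta, 'db2', mode='periodization')
    out = [coeffs[0]]
    for eps in coeffs[1:]:
        if eps.size < 4:                   # too few WFCs to rank-map
            out.append(eps)
            continue
        g = gaussianize(eps)
        g *= eps.std() / g.std()           # step 4: preserve I_j^2
        out.append(rng.permutation(g))     # step 5: randomize the index l
    return pywt.waverec(out, 'db2', mode='periodization')

delta2 = algorithm_II(np.random.normal(size=2**14))
```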
Figs. 4 - 6 show that the Gaussianized field given by algorithm II contains almost none of the non-Gaussian features considered. However, it should be pointed out that the field given by algorithm II is no longer a point-to-point reconstruction, due to the randomization of $`l`$. Namely, the recovered field will not be point-to-point the same as the field shown in Fig. 7. Nonetheless, since the purpose of the Gaussianization is to recover the power spectrum of the primordial density fluctuations, algorithm II is a valuable approach. As will be shown in the next section, algorithm II gives a less biased estimation of the power spectrum by the standard FFT technique.
In order to illustrate the effect of the peculiar velocities on the Gaussianization, each of Figs. 4 - 6 contains two panels: one employs the simulation samples including the effects of peculiar velocities, and the other does not. All the figures show that for algorithm I, the effect of peculiar velocities is significant only on small scales $`j>9`$, or $`k>10`$ h Mpc<sup>-1</sup>; while for algorithm II, the effect of peculiar velocities appears on even smaller scales. Therefore, our proposed scale-by-scale Gaussianization methods would not be affected by the peculiar velocities at least up to the scale $`j=9`$.
## 4 Recovery of mass power spectrum from the transmitted QSO flux
### 4.1 The power spectrum in different representations
As a preparation for measuring the non-Gaussian effects on power spectrum recovery, we first discuss the representation of the power spectrum. In principle, a random field can be described by any complete orthonormal basis (representation). Although by default the power spectrum is defined on the Fourier basis, one can define the power spectrum with respect to a different representation. This is due to the fact that Parseval’s theorem holds for any complete and orthonormal basis decomposition.
In the Fourier representation, the power spectrum of a 1-D density field $`\delta (x)`$ is given by
$$P(n)=\langle |\widehat{\delta }_n|^2\rangle .$$
(26)
where $`\widehat{\delta }_n`$ is the Fourier transform of $`\delta (x)`$. $`|\widehat{\delta }_n|^2`$ measures the power of mode $`n`$ because of Parseval’s theorem
$$\frac{1}{L}\int _0^L\delta ^2(x)𝑑x=\sum _{n=-\infty }^{\infty }|\widehat{\delta }_n|^2.$$
(27)
Similarly, we have Parseval’s theorem for the DWT transform given by (Fang & Thews 1998, Pando & Fang, 1998b)
$$\frac{1}{L}\int _0^L\delta ^2(x)𝑑x=\sum _{j=0}^{\infty }\frac{1}{L}\sum _{l=0}^{2^j-1}\stackrel{~}{ϵ}_{j,l}^2.$$
(28)
(For simplicity, we ignore the superscript $`M`$ on $`\stackrel{~}{ϵ}_{j,l}`$). Therefore, the term $`\stackrel{~}{ϵ}_{j,l}^2`$ describes the power of mode $`(j,l)`$, and the total power on the scale $`j`$ is
$$P_j=\frac{1}{L}\sum _{l=0}^{2^j-1}|\stackrel{~}{ϵ}_{j,l}|^2,$$
(29)
which defines the DWT power spectra $`P_j`$.
Generally, the second order correlation functions of $`\widehat{\delta }_n`$ or $`\stackrel{~}{ϵ}_{j,l}`$ can be converted from each other by
$$\langle \stackrel{~}{ϵ}_{j,l}\stackrel{~}{ϵ}_{j^{\prime },l^{\prime }}\rangle =\sum _{n,n^{\prime }=-\infty }^{+\infty }\langle \widehat{\delta }_n\widehat{\delta }_{n^{\prime }}^{*}\rangle \widehat{\psi }_{j^{\prime },l^{\prime }}(n^{\prime })\widehat{\psi }_{j,l}^{*}(n)$$
(30)
$$\langle \widehat{\delta }_n\widehat{\delta }_{n^{\prime }}^{*}\rangle =\sum _{j,j^{\prime }=0}^{+\infty }\sum _{l=0}^{2^j-1}\sum _{l^{\prime }=0}^{2^{j^{\prime }}-1}\langle \stackrel{~}{ϵ}_{j,l}\stackrel{~}{ϵ}_{j^{\prime },l^{\prime }}\rangle \widehat{\psi }_{j,l}(n)\widehat{\psi }_{j^{\prime },l^{\prime }}^{*}(n^{\prime })$$
(31)
where $`\widehat{\psi }_{j,l}(n)`$ is the Fourier transform of $`\psi _{j,l}(x)`$. For a homogeneous random field, $`\langle \widehat{\delta }_n\widehat{\delta }_{n^{\prime }}^{*}\rangle =\langle |\widehat{\delta }_n|^2\rangle \delta _{n,n^{\prime }}`$, and we have then
$$\langle \stackrel{~}{ϵ}_{j,l}^2\rangle =\sum _{n=-\infty }^{+\infty }\langle |\widehat{\delta }_n|^2\rangle |\widehat{\psi }_{j,l}(n)|^2$$
(32)
or
$$P_j=\sum _{n=-\infty }^{+\infty }P(n)|\widehat{\psi }(n/2^j)|^2$$
(33)
where $`\widehat{\psi }(n/2^j)`$ is the Fourier transform of the generating wavelet $`\psi (x)`$ (Pando & Fang 1998b). In Eq. (33) the function $`|\widehat{\psi }(n/2^j)|^2`$ plays the role of a window function in the wavenumber $`n`$ space. The function $`\widehat{\psi }(n)`$ is localized in $`n`$-space. For the Daubechies 4 wavelet, $`|\widehat{\psi }(n)|`$ is peaked at $`n=\pm n_p`$ with a width of $`\mathrm{\Delta }n_p`$. Therefore, the DWT spectrum $`P_j`$ gives an estimator of the “band averaged” Fourier power spectrum within the band centered at
$$\mathrm{log}n=(\mathrm{log}2)j+\mathrm{log}n_p,$$
(34)
with the band width,
$$\mathrm{\Delta }\mathrm{log}n=\mathrm{\Delta }n_p/n_p.$$
(35)
As the mean of the WFCs $`\stackrel{~}{ϵ}_{j,l}`$ over $`l`$ is zero, the second cumulant moment $`I_j^2`$ is related to the DWT spectrum $`P_j`$ by $`I_j^2=(L/2^j)P_j`$; we will therefore use the variance $`I_j^2`$ as the estimator of the DWT power spectrum instead of $`P_j`$.
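Both estimators are then one line each once the WFCs are in hand; a sketch, with the wavelet and boundary mode again assumed as in the earlier sketches:

```python
import numpy as np
import pywt

def dwt_spectra(delta, L_box):
    """P_j of eq. (29) and the variance estimator I_j^2 per scale."""
    coeffs = pywt.wavedec(delta, 'db2', mode='periodization')
    P_j = np.array([np.sum(d**2) / L_box for d in coeffs[1:]])
    I2_j = np.array([np.mean(d**2) for d in coeffs[1:]])  # mean WFC ~ 0
    return P_j, I2_j

P_j, I2_j = dwt_spectra(np.random.normal(size=2**14), L_box=189.84)
print(P_j.round(3), I2_j.round(3))
```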
For a Gaussian field, the statistical behavior is completely determined by the second order statistics of the Gaussian variables $`\widehat{\delta }_n`$ or $`\stackrel{~}{ϵ}_{j,l}`$. Theoretically, the power spectrum estimators $`P(n)`$ and $`P_j`$ provide equivalent descriptions. However, as will be shown below, once non-Gaussianity appears, these estimators are no longer equivalent.
### 4.2 Effect of non-Gaussianity on the recovery of mass power spectrum
Using the 100 realizations of the mass density fields recovered by the conventional algorithm and by algorithms I and II of the Gaussianization, we calculated the power spectra by the standard FFT technique. To reveal the effect of non-Gaussianity on the power spectrum estimation, we do not include the effects of instrumental noise and continuum fitting in the synthetic spectra. The dominant sources of error in the estimation of the power spectrum are then the cosmic variance and the non-Gaussian effects.
Figure 8 compares the power spectra obtained by the different Gaussianization methods. The 1-D linear power spectrum of Eq. (3) is also shown by the solid line. These power spectra are normalized to the present. In general, the recovered power spectrum can match the shape of the linear theory over a wide range of wavelengths, especially at larger wavelengths. Yet, the recovered spectra show a somewhat systematic departure from the initial mass power spectrum with increasing wavenumber. For the conventional Gaussianization, the recovered power spectrum falls below the initial power spectrum on scales $`j\geq 8`$, or $`\lesssim 1.5`$ h<sup>-1</sup> Mpc. The power spectrum recovered by algorithm I is better than the conventional Gaussianization, and the recovery by algorithm II gives the best one, which is almost the same as the initial power spectrum on all scales.
Comparing Fig. 8 with Figs. 4 - 6, we can see that the scales on which the depression of the recovered power spectrum appears are always the same as those on which the scale-scale correlations become significant. Moreover, the less the scale-scale correlation (Fig. 6), the less the depression. This indicates that the recovered spectrum is substantially affected by the non-Gaussianities, especially the scale-scale correlations. Actually, this effect has already been recognized by Meiksin & White (1999) in analyzing N-body simulation samples. Namely, the goodness of a power spectrum estimation depends significantly on the correlation between the Fourier power spectra averaged in different scale bands.
Going back to the definition of the scale-scale correlation, Eq. (15), and recalling that the average over an ensemble is equivalent to the spatial average taken over one realization, Eq. (15) can be rewritten as
$$C_j^{2,2}=\frac{\langle \widehat{P}_j\widehat{P}_{j+1}\rangle }{\langle \widehat{P}_j\rangle \langle \widehat{P}_{j+1}\rangle }.$$
(36)
Hence, the scale-scale correlation is actually a measure of the correlation between the Fourier power spectra averaged in different scale bands. This can also be seen from Eq. (31): the Fourier power spectrum around $`n`$ depends on the fluctuations on different scales $`j`$, and therefore on their non-Gaussian correlations.
Because algorithm II is the most effective in eliminating the scale-scale correlations, the resulting power spectrum shows the best recovery of the linear model.
### 4.3 Suppression of non-Gaussian correlations by representation
In the DWT representation, the power spectrum (29) does not depend on modes at scales other than $`j`$, and therefore the scale-scale correlation will not affect the estimation of $`P_j`$. One can expect that the DWT spectrum estimator, $`P_j`$, will give a better recovery of the initial power spectrum.
Fig. 9 displays the DWT power spectrum $`P_j`$ for the mass fields given by the different Gaussianization methods. The DWT power spectrum in the linear CDM model is also shown by the solid line, which is calculated from the Fourier linear power spectrum by Eq. (33) in the continuous limit of $`n`$. This figure indicates that even for the mass field recovered from the conventional Gaussianization, the DWT power spectrum is in good agreement with the initial DWT mass power spectrum up to the scale $`j=9`$. This is already much better than its counterpart in the Fourier representation, for which the power spectrum shows a significant difference from the linear spectrum on scales $`j\geq 8`$. For algorithms I and II, the DWT power spectrum also gives good results. In addition, the errors due to the cosmic variance and normalization in the DWT spectrum are manifestly smaller than those of the Fourier spectrum.
The DWT power spectrum $`P_j`$ (29) is given by the summation of $`|\stackrel{~}{ϵ}_{j,l}|^2`$ over $`l`$ at a given scale. Therefore, the non-Gaussian effect on the estimation of $`P_j`$ arises mainly from the correlation between the squared WFCs $`\stackrel{~}{ϵ}_{j,l}^2`$ at different $`l`$, which can be measured by
$$Q_{j,\mathrm{\Delta }l}^{2,2}=\frac{2^{-j}\sum _{l=0}^{2^j-1}\stackrel{~}{ϵ}_{j,l}^2\stackrel{~}{ϵ}_{j,l+\mathrm{\Delta }l}^2}{\langle \stackrel{~}{ϵ}_{j,l}^2\rangle \langle \stackrel{~}{ϵ}_{j,l+\mathrm{\Delta }l}^2\rangle }.$$
(37)
$`Q_{j,\mathrm{\Delta }l}^{2,2}`$ gives the correlation between the density fluctuations on the same scale $`j`$ at different places $`l`$ and $`l+\mathrm{\Delta }l`$.
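Both the DWT band power $`P_j`$ and $`Q_{j,\mathrm{\Delta }l}^{2,2}`$ of Eq. (37) are easy to compute with PyWavelets. In the sketch below, the Daubechies-4 basis is a stand-in for the wavelet actually used, the mapping of the wavedec output index onto the scale index $`j`$ is an assumption, and a cyclic shift approximates the translation from $`l`$ to $`l+\mathrm{\Delta }l`$.

```python
import numpy as np
import pywt

def dwt_pj_and_q(x, j, dl=1, wavelet='db4'):
    """DWT band power P_j and the correlation Q_{j,dl} of Eq. (37).
    In the wavedec output, coeffs[1] is the coarsest detail band, so a
    larger list index means a finer scale; identifying this index with
    the paper's scale index j is an assumption of this sketch."""
    lev = pywt.dwt_max_level(len(x), wavelet)
    coeffs = pywt.wavedec(x, wavelet, level=lev)
    e2 = coeffs[j] ** 2                    # squared WFCs at one scale
    p_j = e2.mean()                        # P_j ~ 2^{-j} sum_l eps_{j,l}^2
    shifted = np.roll(e2, -dl)             # cyclic stand-in for l -> l + dl
    return p_j, (e2 * shifted).mean() / (e2.mean() * shifted.mean())

rng = np.random.default_rng(1)
x = rng.standard_normal(2**14)
print(dwt_pj_and_q(x, 5))                  # Q ~ 1 for a Gaussian field
```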
Fig. 10 displays the correlations $`Q_{j,\mathrm{\Delta }l}^{2,2}`$ with $`\mathrm{\Delta }l=1`$. It shows that this non-Gaussianity can be ignored up to $`j=10`$. On the other hand, the scale-scale correlation $`C_j^{2,2}`$ is already significant at $`j=8`$ (Fig. 3). As a result, the Fourier power spectrum is contaminated by the non-Gaussianity on scales $`j\ge 8`$, while the DWT power spectrum is less biased up to $`j=9`$.
In short, the non-Gaussian correlations are effectively suppressed in the DWT representation, and the DWT spectrum estimator gives a better recovery of the initial power spectrum.
## 5 Conclusions
In the cosmological reconstruction of the initial Gaussian mass power spectrum, a serious obstacle is the non-Gaussianity of the evolved field. The quality of the recovered power spectrum is affected by the non-Gaussian correlations, and the precision to which the mass power spectrum can be measured relies on how the non-Gaussianity of the evolved mass field is treated.
In the quasi-nonlinear regime of cosmic gravitational clustering (like that traced by the Ly$`\alpha `$ forests), the dynamical evolution is characterized by power transfer from large scale perturbations to small ones (Suto & Sasaki 1991). This mode-mode coupling produces the scale-scale correlations. Using perturbation theory in the DWT representation, one can further show that the mode-mode coupling at the same position (local coupling) is much stronger than the coupling between modes at different positions (non-local coupling) (Pando, Feng & Fang 1999). On the other hand, the power spectrum in the quasi-nonlinear regime does not differ significantly from that in the linear regime. Therefore, an algorithm for recovering the initial mass power spectrum from the Ly$`\alpha `$ forests should be designed to eliminate the local scale-scale correlations of the evolved mass field.
Using simulations of a semi-analytical model of the Ly$`\alpha `$ forests, we showed that the conventional Gaussianization algorithm is not sufficient to recover a Gaussian field: the local scale-scale correlations of the Ly$`\alpha `$ forests are still retained in the Gaussianized mass field. Based on the DWT scale-space decomposition, we proposed two Gaussianization algorithms which are effective in eliminating these non-Gaussian features.
We showed that the choice of representation is important for the recovery of the power spectrum. A representation which can effectively suppress the contamination by local scale-scale correlations is good for extracting the initial linear spectrum. We compared the Fourier and DWT representations for the estimation of the power spectrum and demonstrated that, at least in the quasi-nonlinear regime, the DWT power spectrum estimator is better, because it avoids the major contamination, the local scale-scale correlations. We also showed that the peculiar velocities of the gas do not affect the DWT power spectrum recovery up to, at least, the scale $`j=9`$.
We thank Dr. D. Tytler for kindly providing the data of the Keck spectrum of HS1700+64. We also thank Drs. Wei Zheng, Hongguang Bi and Wolung Lee for useful discussions. LLF acknowledges support from the National Science Foundation of China (NSFC) and a World Laboratory scholarship. This project was done during LLF's visit to the Department of Physics, University of Arizona. This work was supported in part by the LWL foundation.
Large Mixing Angle MSW Solution
in $`S_3`$ Flavor Symmetry
Morimitsu TANIMOTO <sup>1</sup><sup>1</sup>1E-mail address: tanimoto@edserv.ed.ehime-u.ac.jp
Science Education Laboratory, Ehime University, 790-8577 Matsuyama, JAPAN
ABSTRACT
We have investigated phenomenological implications of the neutrino flavor mixings in the $`S_{3L}\times S_{3R}`$ symmetric mass matrices including symmetry breaking terms. We have shown how to obtain the large mixing angle MSW solution, $`\mathrm{sin}^22\theta _{\odot }=0.65`$–$`0.97`$ and $`\mathrm{\Delta }m_{\odot }^2=10^{-5}`$–$`10^{-4}\mathrm{eV}^2`$, in this model. It is found that the structure of the lepton mass matrix in our model is stable against radiative corrections although the model leads to nearly degenerate neutrinos.
Recent Super-Kamiokande data on atmospheric neutrinos have provided more solid evidence for neutrino oscillation, which corresponds to nearly maximal neutrino flavor mixing. The observed solar neutrino deficit is also an indication of a different sort of neutrino oscillation. For the solar neutrino problem, four solutions are still allowed: the large mixing angle (LMA) MSW, small mixing angle (SMA) MSW, vacuum oscillation (VO) and low $`\mathrm{\Delta }m^2`$ (LOW) solutions.
These data constrain the structure of the lepton mass matrices in the three family model, which may suggest some flavor symmetry. There is a typical texture of the lepton mass matrix with nearly maximal flavor mixing, which is derived from the symmetry of lepton flavor democracy, or from the $`S_{3L}\times S_{3R}`$ symmetry of the left-handed Majorana neutrino mass matrix. This texture has given the prediction $`\mathrm{sin}^22\theta _{\mathrm{atm}}=8/9`$ for the neutrino mixing. The mixing for the solar neutrino depends on the symmetry breaking pattern of the flavor, giving $`\mathrm{sin}^22\theta _{\odot }=1`$ or $`\ll 1`$. However, the LMA-MSW solution, $`\mathrm{sin}^22\theta _{\odot }=0.65`$–$`0.97`$ and $`\mathrm{\Delta }m_{\odot }^2=10^{-5}`$–$`10^{-4}\mathrm{eV}^2`$, has not been obtained in the previous works.
In this paper, we study how to obtain the LMA-MSW solution in the $`S_{3L}\times S_{3R}`$ symmetric mass matrices including symmetry breaking terms. Furthermore, we discuss the stability of the neutrino mass matrix against radiative corrections, since the model predicts nearly degenerate neutrinos.
We assume that oscillations need only account for the solar and atmospheric neutrino data. Since the result of LSND awaits confirmation by the KARMEN experiment, we do not take the LSND data into consideration in this paper. Our starting point for the neutrino mixing is the large $`\nu _\mu \rightarrow \nu _\tau `$ oscillation of atmospheric neutrinos with $`\mathrm{\Delta }m_{\mathrm{atm}}^2=(2–6)\times 10^{-3}\mathrm{eV}^2`$ and $`\mathrm{sin}^22\theta _{\mathrm{atm}}\ge 0.84`$, which is derived from the recent Super-Kamiokande data on the atmospheric neutrino deficit. The mass difference scales of the solar neutrinos are $`\mathrm{\Delta }m_{\odot }^2=10^{-10}`$–$`10^{-4}\mathrm{eV}^2`$ depending on the four solutions.
The following texture of the charged lepton mass matrix was presented based on the $`S_{3L}\times S_{3R}`$ symmetry:
$$M_{\mathrm{\ell }}=\frac{c_{\mathrm{\ell }}}{3}\left(\begin{array}{ccc}1& 1& 1\\ 1& 1& 1\\ 1& 1& 1\end{array}\right)+M_{\mathrm{\ell }}^{(c)},$$
(1)
where the second matrix is the flavor symmetry breaking one. The unitary matrix $`V_{\mathrm{\ell }}`$, which diagonalizes the mass matrix $`M_{\mathrm{\ell }}`$, is given as $`V_{\mathrm{\ell }}=FL`$, where
$$F=\left(\begin{array}{ccc}1/\sqrt{2}& 1/\sqrt{6}& 1/\sqrt{3}\\ -1/\sqrt{2}& 1/\sqrt{6}& 1/\sqrt{3}\\ 0& -2/\sqrt{6}& 1/\sqrt{3}\end{array}\right)$$
(2)
diagonalizes the democratic matrix and $`L`$ depends on the mass correction term $`M_{\mathrm{\ell }}^{(c)}`$.
Let us turn to the neutrino sector. The neutrino mass matrix is different from the democratic one if they are Majorana particles. The $`S_{3L}`$ symmetric mass term is given as follows:
$$c_\nu \left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right)+c_\nu r\left(\begin{array}{ccc}1& 1& 1\\ 1& 1& 1\\ 1& 1& 1\end{array}\right),$$
(3)
where $`c_\nu `$ and $`r`$ are arbitrary parameters. The eigenvalues of this matrix are easily obtained by using the orthogonal matrix $`F`$ of eq. (2): they are $`c_\nu (1,1,1+3r)`$, which means that there are at least two degenerate masses in the $`S_{3L}`$ symmetric Majorana mass matrix.
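This statement about the eigenvalues can be verified numerically in a few lines; the values of $`c_\nu `$ and $`r`$ below are arbitrary illustrations.

```python
import numpy as np

r, c_nu = 0.05, 1.0                        # illustrative values
F = np.array([[1/np.sqrt(2),  1/np.sqrt(6), 1/np.sqrt(3)],
              [-1/np.sqrt(2), 1/np.sqrt(6), 1/np.sqrt(3)],
              [0.0,          -2/np.sqrt(6), 1/np.sqrt(3)]])   # Eq. (2)
M = c_nu * (np.eye(3) + r * np.ones((3, 3)))                   # Eq. (3)
print(np.round(F.T @ M @ F, 12))           # -> diag(1, 1, 1 + 3r)
```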
The simplest breaking terms of the $`S_{3L}`$ symmetry are added in the (3,3) and (2,2) entries. Therefore, the neutrino mass matrix is written as
$$M_\nu =c_\nu \left(\begin{array}{ccc}1+r& r& r\\ r& 1+r+ϵ& r\\ r& r& 1+r+\delta \end{array}\right),$$
(4)
in terms of the small breaking parameters $`ϵ`$ and $`\delta `$. In order to explain both solar and atmospheric neutrinos with this mass matrix, $`r\ll 1`$ should be satisfied. In other words, the three neutrinos should be nearly degenerate.<sup>2</sup><sup>2</sup>2$`r=-2/3`$ also gives nearly degenerate neutrinos . However, solar and atmospheric neutrinos are not explained by the simple breaking terms in eq. (4) in that case. However, there is no reason why $`r`$ should be very small in this framework. In order to answer this question, we need a higher flavor symmetry such as the $`O_{3L}\times O_{3R}`$ model . We do not address this problem in this paper.
We start by discussing the simple case of $`ϵ=0`$ and $`\delta \gg r`$, in which the $`S_{2L}`$ symmetry is preserved but the $`S_{3L}`$ symmetry is broken. The mass eigenvalues are given as
$$m_1=1,\qquad m_2\simeq 1+2r,\qquad m_3\simeq 1+r+\delta ,$$
(5)
in units of $`c_\nu `$. We easily obtain $`\mathrm{\Delta }m_{\mathrm{atm}}^2=\mathrm{\Delta }m_{32}^2\simeq 2c_\nu ^2\delta `$ and $`\mathrm{\Delta }m_{\odot }^2=\mathrm{\Delta }m_{21}^2\simeq 4c_\nu ^2r`$. The neutrino mass matrix is diagonalized by the orthogonal matrix $`U_\nu `$ as $`U_\nu ^TM_\nu U_\nu `$, where
$$U_\nu \simeq \left(\begin{array}{ccc}\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}& \frac{r}{\delta }\\ -\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}& \frac{r}{\delta }\\ 0& -\sqrt{2}\frac{r}{\delta }& 1\end{array}\right),$$
(6)
in which the first and second families mix maximally due to the $`S_{2L}`$ symmetry. This maximal mixing is completely canceled by the charged lepton sector in the neutrino mixing matrix (MNS mixing matrix) $`U_{\alpha i}`$, which is determined by the product of $`V_{\mathrm{\ell }}^{\dagger }`$ and $`U_\nu `$ as follows:
$$U=V_{\mathrm{\ell }}^{\dagger }U_\nu =L^{\dagger }F^TU_\nu \simeq \left(\begin{array}{ccc}1& \frac{1}{\sqrt{3}}L_{21}& \sqrt{\frac{2}{3}}L_{21}\\ L_{12}& \frac{1}{\sqrt{3}}(1+2\frac{r}{\delta }+\sqrt{2}L_{32})& \sqrt{\frac{2}{3}}(1-\frac{r}{\delta }+\frac{1}{\sqrt{2}}L_{32})\\ L_{13}& \sqrt{\frac{2}{3}}(1-\frac{r}{\delta }+\frac{1}{\sqrt{2}}L_{23})& \frac{1}{\sqrt{3}}(1+2\frac{r}{\delta }-\sqrt{2}L_{23})\end{array}\right),$$
(7)
where $`L_{ij}`$ are components of the correction matrix $`L`$ in the charged lepton sector. We take $`L_{ii}\simeq 1`$ $`(i=1,2,3)`$ and $`L_{31}\ll L_{21}\ll 1`$, like the mixings in the quark sector. The CP violating phase is also neglected. This case corresponds to the SMA-MSW solution for the solar neutrino. In this MNS mixing matrix, we have:
$$U_{e3}\simeq \sqrt{2}U_{e2},$$
(8)
which means that $`U_{e3}`$ is predicted once the solar neutrino data are confirmed in the future. The long baseline (LBL) experiments provide an important test of the model, since the oscillation $`\nu _\mu \rightarrow \nu _e`$ is predicted as follows:
$$P(\nu _\mu \rightarrow \nu _e)\simeq \frac{4}{3}\mathrm{sin}^22\theta _{\odot }\mathrm{sin}^2\frac{\mathrm{\Delta }m_{31}^2L}{4E}.$$
(9)
Putting in $`\mathrm{sin}^22\theta _{\odot }`$ of the SMA-MSW solution, we obtain $`P(\nu _\mu \rightarrow \nu _e)=10^{-3}`$–$`10^{-2}`$ in the relevant LBL experiments. These results with the SMA-MSW solution of the solar neutrino are maintained as long as $`ϵ\ll r`$.
Let us consider the case of $`ϵ\ne 0`$ with $`\delta \gg ϵ\gg r`$, in which the $`S_{3L}`$ symmetry is completely broken. The neutrino mass eigenvalues are then given as
$$m_1\simeq 1+\frac{1}{2}ϵ+r-\frac{1}{2}\sqrt{ϵ^2+4r^2},\qquad m_2\simeq 1+\frac{1}{2}ϵ+r+\frac{1}{2}\sqrt{ϵ^2+4r^2},\qquad m_3\simeq 1+r+\delta ,$$
(10)
in units of $`c_\nu `$. Then we have
$$\mathrm{\Delta }m_{32}^2\simeq 2c_\nu ^2\delta ,\qquad \mathrm{\Delta }m_{21}^2\simeq 2c_\nu ^2\sqrt{ϵ^2+4r^2}.$$
(11)
The orthogonal matrix $`U_\nu `$ is given as
$$U_\nu \simeq \left(\begin{array}{ccc}t& \sqrt{1-t^2}& \frac{r}{\delta }\\ -\sqrt{1-t^2}& t& \frac{r}{\delta -ϵ}\\ \frac{r}{\delta }(\sqrt{1-t^2}-t)& -\frac{r}{\delta -ϵ}(t+\sqrt{1-t^2})& 1\end{array}\right),$$
(12)
where
$$t^2=\frac{1}{2}+\frac{1}{2}\frac{ϵ}{\sqrt{ϵ^2+4r^2}}.$$
(13)
In order to find the structure of the MNS matrix $`U_{\alpha i}`$, we show $`F^TU_\nu `$ as follows:
$$F^TU_\nu \simeq \left(\begin{array}{ccc}\frac{1}{\sqrt{2}}(t+\sqrt{1-t^2})& \frac{1}{\sqrt{2}}(\sqrt{1-t^2}-t)& -\frac{1}{\sqrt{2}}\frac{ϵr}{\delta (\delta -ϵ)}\\ \frac{1}{\sqrt{6}}(t-\sqrt{1-t^2})(1+\frac{2r}{\delta })& \frac{1}{\sqrt{6}}(t+\sqrt{1-t^2})(1+\frac{2r}{\delta -ϵ})& -\frac{2}{\sqrt{6}}(1-\frac{r}{\delta })\\ \frac{1}{\sqrt{3}}(t-\sqrt{1-t^2})(1-\frac{r}{\delta })& \frac{1}{\sqrt{3}}(t+\sqrt{1-t^2})(1-\frac{r}{\delta -ϵ})& \frac{1}{\sqrt{3}}(1+\frac{2r}{\delta })\end{array}\right).$$
(14)
The mixing angle between the first and second flavors depends on $`t`$, which is determined by $`r/ϵ`$. It becomes maximal in the case of $`t=1`$ ($`r/ϵ=0`$) and minimal in the case of $`t=1/\sqrt{2}`$ ($`ϵ/r=0`$). It is emphasized that a suitable value of $`r/ϵ`$ easily leads to $`\mathrm{sin}^22\theta _{\odot }=0.65`$–$`0.97`$, which corresponds to the LMA-MSW solution. The case of $`t=1/\sqrt{2}`$ may correspond rather to the VO solution.
In order to get the MNS mixing matrix $`U_{\alpha i}`$, the correction matrix $`L^{\dagger }`$ of the charged lepton sector should be multiplied in, as $`L^{\dagger }F^TU_\nu `$. Then we obtain:
$`U_{e1}\simeq {\displaystyle \frac{1}{\sqrt{2}}}(t+\sqrt{1-t^2})+{\displaystyle \frac{1}{\sqrt{6}}}(t-\sqrt{1-t^2})L_{21},`$
$`U_{e2}\simeq {\displaystyle \frac{1}{\sqrt{2}}}(\sqrt{1-t^2}-t)+{\displaystyle \frac{1}{\sqrt{6}}}(t+\sqrt{1-t^2})L_{21},`$
$`U_{e3}\simeq -{\displaystyle \frac{2}{\sqrt{6}}}(1-{\displaystyle \frac{r}{\delta }})L_{21},`$
$`U_{\mu 1}\simeq {\displaystyle \frac{1}{\sqrt{6}}}(t-\sqrt{1-t^2})(1+{\displaystyle \frac{2r}{\delta }})+{\displaystyle \frac{1}{\sqrt{2}}}(t+\sqrt{1-t^2})L_{12},`$
$`U_{\mu 2}\simeq {\displaystyle \frac{1}{\sqrt{6}}}(t+\sqrt{1-t^2})(1+{\displaystyle \frac{2r}{\delta }})+{\displaystyle \frac{1}{\sqrt{2}}}(\sqrt{1-t^2}-t)L_{12},`$
$`U_{\mu 3}\simeq -{\displaystyle \frac{1}{\sqrt{6}}}(2-{\displaystyle \frac{2r}{\delta }}-\sqrt{2}L_{32}),`$ (15)
$`U_{\tau 1}\simeq {\displaystyle \frac{1}{\sqrt{3}}}(t-\sqrt{1-t^2})(1-{\displaystyle \frac{r}{\delta }}+{\displaystyle \frac{1}{\sqrt{2}}}L_{23}),`$
$`U_{\tau 2}\simeq {\displaystyle \frac{1}{\sqrt{3}}}(t+\sqrt{1-t^2})(1-{\displaystyle \frac{r}{\delta }}+{\displaystyle \frac{1}{\sqrt{2}}}L_{23}),`$
$`U_{\tau 3}\simeq {\displaystyle \frac{1}{\sqrt{3}}}(1+{\displaystyle \frac{2r}{\delta }}-\sqrt{2}L_{23}),`$
where $`L_{ii}\simeq 1`$ $`(i=1,2,3)`$ are taken and $`L_{31},L_{13}`$ are neglected. The CP violating phase is also neglected. $`U_{e3}`$ depends on $`L_{21}`$, which is determined by $`M_{\mathrm{\ell }}^{(c)}`$ in eq. (1). The MNS mixings in eqs. (15) agree with the numerical ones (obtained without any approximations) to within a few percent.
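The comparison with the exact numerical diagonalization can be sketched as follows. Building the correction matrix $`L`$ from $`L_{21}`$ (=$`L_{12}`$) and $`L_{32}`$ (=$`L_{23}`$) alone, and the two-flavor projections used to define the mixings from the matrix elements, are my reading of the text rather than its exact prescription.

```python
import numpy as np

def mns_mixings(r, eps, delta,
                L21=np.sqrt(0.511 / 105.7), L32=105.7 / 1777.0):
    """Diagonalize Eq. (4) numerically and form U = L^T F^T U_nu.
    The projections sin^2(2theta_sun) ~ 4 Ue1^2 Ue2^2 and
    sin^2(2theta_atm) ~ 4 Umu3^2 (1 - Umu3^2) are assumptions here."""
    Mnu = np.eye(3) + r * np.ones((3, 3)) + np.diag([0.0, eps, delta])
    w, Unu = np.linalg.eigh(Mnu)           # columns ordered m1 < m2 < m3
    F = np.array([[1/np.sqrt(2),  1/np.sqrt(6), 1/np.sqrt(3)],
                  [-1/np.sqrt(2), 1/np.sqrt(6), 1/np.sqrt(3)],
                  [0.0,          -2/np.sqrt(6), 1/np.sqrt(3)]])
    L = np.eye(3)
    L[0, 1] = L[1, 0] = L21                # L12 = L21 = sqrt(m_e/m_mu)
    L[1, 2] = L[2, 1] = L32                # L23 = L32 = m_mu/m_tau
    U = L.T @ F.T @ Unu
    s2_sun = 4.0 * U[0, 0]**2 * U[0, 1]**2
    s2_atm = 4.0 * U[1, 2]**2 * (1.0 - U[1, 2]**2)
    return s2_sun, s2_atm

print(mns_mixings(r=0.0006, eps=0.002, delta=0.05))  # LMA-like solar mixing
```

Scanning $`r`$ at fixed $`ϵ`$ and $`\delta `$ with this function reproduces the qualitative trend of fig. 1 discussed below.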
We should carefully discuss the stability of our results against radiative corrections, since the model predicts nearly degenerate neutrinos. When the texture of the mass matrix is given at the $`S_{3L}\times S_{3R}`$ symmetry energy scale, radiative corrections are not negligible at the electroweak (EW) scale. The running of the neutrino masses and mixings has been studied by using the renormalization group equations (RGE's).
Let us consider the basis in which the mass matrix of the charged leptons is diagonal. The neutrino mass matrix in eq. (4) is transformed into $`V_{\mathrm{\ell }}^TM_\nu V_{\mathrm{\ell }}`$. Taking $`V_{\mathrm{\ell }}\simeq F`$ because $`L`$ is close to the unit matrix, we obtain the mass matrix at the high energy scale:
$$F^TM_\nu F=\overline{M}_\nu =c_\nu \left(\begin{array}{ccc}1+\frac{ϵ}{2}& -\frac{ϵ}{2\sqrt{3}}& -\frac{1}{\sqrt{6}}ϵ\\ -\frac{ϵ}{2\sqrt{3}}& 1+\frac{1}{6}ϵ+\frac{2}{3}\delta & \frac{\sqrt{2}}{6}ϵ-\frac{\sqrt{2}}{3}\delta \\ -\frac{1}{\sqrt{6}}ϵ& \frac{\sqrt{2}}{6}ϵ-\frac{\sqrt{2}}{3}\delta & 1+\frac{1}{3}ϵ+\frac{1}{3}\delta +3r\end{array}\right).$$
(16)
The radiatively corrected mass matrix in the MSSM at the EW scale is given as $`R_G\overline{M}_\nu R_G`$, where $`R_G`$ is given by RGE’s as
$$R_G\left(\begin{array}{ccc}1+\eta _e& 0& 0\\ 0& 1+\eta _\mu & 0\\ 0& 0& 1\end{array}\right),$$
(17)
where $`\eta _e`$ and $`\eta _\mu `$ are
$$\eta _i=1-\sqrt{\frac{I_i}{I_\tau }}\qquad (i=e,\mu ),$$
(18)
with
$$I_i=\mathrm{exp}\left(-\frac{1}{8\pi ^2}\int _{\mathrm{ln}(M_Z)}^{\mathrm{ln}(M_R)}y_i^2𝑑t\right).$$
(19)
Here $`y_i`$ $`(i=e,\mu )`$ are the Yukawa couplings and the $`M_R`$ scale is taken as the $`S_{3L}\times S_{3R}`$ symmetry energy scale. We transform this neutrino mass matrix $`R_G\overline{M}_\nu R_G`$ back into the basis where the charged lepton mass matrix is the democratic one at the EW scale:
$$FR_G\overline{M}_\nu R_GF^T\simeq c_\nu \left(\begin{array}{ccc}1+\overline{r}& \overline{r}& \overline{r}\\ \overline{r}& 1+ϵ+\overline{r}& \overline{r}\\ \overline{r}& \overline{r}& 1+\delta +\overline{r}\end{array}\right)+2\eta _Rc_\nu \left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right),$$
(20)
where
$$\overline{r}=r-\frac{2}{3}\eta _R.$$
(21)
Here we take $`\eta _R\equiv \eta _e\simeq \eta _\mu `$, which is a good approximation . Its numerical value depends on $`\mathrm{tan}\beta `$: it is $`10^{-2}`$, $`10^{-3}`$ and $`10^{-4}`$ for $`\mathrm{tan}\beta =60,10,`$ and $`1`$, respectively. As seen in eq. (4) and eq. (20), the radiative corrections are absorbed into the original parameters $`r`$, $`ϵ`$ and $`\delta `$ at leading order. Thus the structure of the mass matrix is stable against radiative corrections although our model leads to nearly degenerate neutrinos.
Let us present numerical results. We take $`L_{12}=L_{21}=\sqrt{m_e/m_\mu }`$ and $`L_{23}=L_{32}=m_\mu /m_\tau `$ in eqs. (15), as suggested by the quark sector. We show the result for $`\delta =0.05`$ as a typical case.<sup>3</sup><sup>3</sup>3The parameters $`r`$, $`ϵ`$ and $`\delta `$ are assumed to be real. If they are taken to be complex, CP violation can be predicted as in ref. . Putting $`\mathrm{\Delta }m_{\mathrm{atm}}^2=\mathrm{\Delta }m_{32}^2=3\times 10^{-3}\mathrm{eV}^2`$ in eq. (11), we get $`c_\nu =0.18\mathrm{eV}`$, which is consistent with the double beta decay experiment .<sup>4</sup><sup>4</sup>4 The result is consistent with the constraint from the double beta decay experiment as long as $`\delta \ge 0.04`$. Taking $`ϵ=0.002`$ as a typical value, predictions of $`\mathrm{sin}^22\theta _{\mathrm{atm}}`$ and $`\mathrm{sin}^22\theta _{\odot }`$ are shown versus $`r`$ in fig. 1. It is found that the predicted solar neutrino mixing lies in the region of the LMA-MSW solution if $`r/ϵ=0.1`$–$`0.5`$ is taken, while the mixing of the atmospheric neutrino changes slowly. In this parameter region, $`\mathrm{\Delta }m_{\odot }^2=(1–2)\times 10^{-4}\mathrm{eV}^2`$ is predicted. As long as $`\delta =\lambda ^2`$–$`\lambda `$ and $`ϵ=\lambda ^4`$–$`\lambda ^3`$, where $`\lambda \simeq 0.22`$, the results are similar to those in fig. 1. Thus the LMA-MSW solution with $`\mathrm{sin}^22\theta _{\mathrm{atm}}\simeq 0.9`$ is easily realized by taking a suitable $`r/ϵ`$ in this model.
We have investigated phenomenological implications of the neutrino flavor mixings in the $`S_{3L}\times S_{3R}`$ symmetric mass matrices including symmetry breaking terms. We have shown how to obtain the LMA-MSW solution in this model. A non-zero value of the symmetric parameter $`r`$ is essential in order to get $`\mathrm{sin}^22\theta _{\odot }=0.65`$–$`0.97`$. However, there is no reason why $`r`$ should be very small in the $`S_{3L}\times S_{3R}`$ symmetry, and so we need an extension, for example, the $`O_{3L}\times O_{3R}`$ model , which naturally leads to the small $`r`$ and a unique prediction of the LMA-MSW solution. It is found that the radiative corrections are absorbed into the original parameters $`r`$, $`ϵ`$ and $`\delta `$. Therefore, the structure of the mass matrix is stable against radiative corrections although it leads to nearly degenerate neutrinos. Furthermore, the neutrino mass matrix can be modified by introducing the CP violating phase . We await results from the KamLAND experiment as well as new solar neutrino data.
This research is supported by the Grant-in-Aid for Science Research, Ministry of Education, Science and Culture, Japan (No. 10640274).
Fig. 1: The $`r`$ dependence of $`\mathrm{sin}^22\theta _{\mathrm{atm}}`$ and $`\mathrm{sin}^22\theta _{\odot }`$. $`c_\nu =0.18\mathrm{eV}`$, $`\delta =0.05`$ and $`ϵ=0.002`$ are taken.
# CHANDRA OBSERVATION OF ABELL 2142: SURVIVAL OF DENSE SUBCLUSTER CORES IN A MERGER
## 1. INTRODUCTION
Clusters of galaxies grow through gravitational infall and merger of smaller groups and clusters. During a merger, a significant fraction of the enormous ($`10^{63}`$–$`10^{64}`$ ergs) kinetic energy of the colliding subclusters dissipates in the intracluster gas through shock heating, giving rise to strong, but transient, spatial variations of gas temperature and entropy. These variations contain information on the stage, geometry and velocity of the merger. They can also shed light on physical processes and phenomena occurring in the intracluster medium, including gas bulk flows, destruction of cooling flows, turbulence, and thermal conduction. Given the wealth of information contained in these merger temperature maps, they have in the past few years been a subject of intensive study, both experimental (using ROSAT PSPC and ASCA data, e.g., Henry & Briel 1996; Markevitch, Sarazin, & Vikhlinin 1999, and references in those works) and theoretical, using hydrodynamic simulations (e.g., Schindler & Muller 1993; Roettiger, Burns, & Stone 1999 and references therein). The measurements reported so far, while revealing, were limited by ROSAT's narrow energy coverage and ASCA's moderate angular resolution. Two new X-ray observatories, Chandra and XMM, will overcome these difficulties and provide much more accurate spatially resolved temperature data, adequate for studying the above phenomena.
In this paper, we analyze the first Chandra observation of a merging cluster, A2142 ($`z=0.089`$). This hot ($`T_e\simeq 9`$ keV), X-ray-luminous cluster has two bright elliptical galaxies near the center, aligned in the general direction of the X-ray brightness elongation. The line-of-sight velocities of these galaxies differ by $`1840`$ km s<sup>-1</sup> (Oegerle, Hill, & Fitchett 1995), suggesting that the cluster is not in a dynamically relaxed state. The X-ray image of the cluster has a peak indicating a cooling flow. From the ROSAT HRI image, Peres et al. (1998) deduced a cooling flow rate of $`72_{-19}^{+14}h^{-2}`$ $`M_{\odot }`$ yr<sup>-1</sup>. From the ROSAT PSPC image, Buote & Tsai (1996) argued that this cluster is at a late merger stage. Henry & Briel (1996) used ROSAT PSPC data to derive a rough gas temperature map of A2142. Since this cluster is too hot for the PSPC to derive accurate temperatures, they adjusted the PSPC gain to make the average temperature equal to that from Ginga and looked for spatial hardness variations. Their temperature map showed azimuthally asymmetric temperature variations, which are also an indication of a merger. A derivation of an ASCA temperature map for this relatively distant cluster was hindered by the presence of a central brightness peak associated with a cooling flow.
Examination of the ROSAT PSPC and HRI images reveals two striking X-ray brightness edges within a few arcminutes northwest and south of the brightness peak, which were not reported in the earlier studies of A2142. The new Chandra data show these intriguing cluster gas features more clearly and allow us to study them in detail, including spectroscopically. Chandra also provides a high-resolution temperature map of the central cluster region. These results are presented below. We use $`H_0=100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0.5`$; confidence intervals are one-parameter 90%.
## 2. DATA REDUCTION
A2142 was observed by Chandra during the calibration phase on 1999 August 20 with the ACIS-S detector<sup>10</sup><sup>10</sup>10Chandra Observatory Guide http://asc.harvard.edu/udocs/docs/docs.html, section “Observatory Guide”, “ACIS”. Two similar, consecutive observations (OBSID 1196 and 1228) are combined here. The data were telemetered in Faint mode. Known hot pixels, bad columns, chip node boundaries, and events with ASCA grades 1, 5, and 7 are excluded from the analysis, along with several short time intervals with incorrect aspect reconstruction. The cluster was centered on the backside-illuminated chip S3, which is susceptible to particle background flares<sup>11</sup><sup>11</sup>11Chandra memo http://asc.harvard.edu/cal/Links/Acis/acis/WWWacis\_cal.html, section “Particle Background”. For our study of the low surface brightness regions of the cluster, it is critical to exclude any periods with anomalous background. To this end, we made a light curve for a region covering 1/5 of the S3 chip, far from the cluster peak, where the relative background contribution to the flux is largest, using screened events in the 0.3–10 keV energy band (Fig. 1). The light curve shows that most of the time the background is quiescent (approximately half of the flux during these periods is due to the cluster emission in this region of the detector), but there are several flares. We excluded all time intervals when the flux was significantly, by more than $`3\sigma `$, above or below the quiescent rate (the flux may fall below normal, for example, due to data dropouts). The excluded intervals are shaded in Fig. 1. This screening resulted in a total clean exposure of 16.4 ks for the S3 chip (out of a total of 24 ks). The same flare intervals can be identified from the light curve of another backside-illuminated chip, S1, which was also active during the exposure but has a much smaller cluster contribution. A similar screening of the frontside-illuminated chips, less affected by the flares, resulted in a total clean exposure of 21.3 ks for those chips. In this paper, we limit our imaging analysis to chips S2 and S3 and spectral analysis to chip S3.
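The quiescent-rate screening described above amounts to sigma-clipping a binned light curve. A minimal sketch follows, in which the bin width and the two-pass clipping are assumptions, not the actual screening parameters used:

```python
import numpy as np

def screen_flares(times, bin_width=200.0, nsigma=3.0):
    """Flag time bins whose count rate deviates from the quiescent mean
    by more than nsigma (either sign), iterating so that flares do not
    bias the mean. Returns a boolean good-bin mask and the bin edges."""
    edges = np.arange(times.min(), times.max() + bin_width, bin_width)
    counts, _ = np.histogram(times, bins=edges)
    rate = counts / bin_width
    good = np.ones(rate.size, dtype=bool)
    for _ in range(2):                       # two passes are usually enough
        mu, sig = rate[good].mean(), rate[good].std()
        good = np.abs(rate - mu) <= nsigma * sig
    return good, edges
```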
During the quiescent periods, the particle background is rather constant in time but is non-uniform over the chip (varying by $`\sim 30`$% on scales of a few arcmin). To take this nonuniformity into account in our spectral and imaging analysis, we used a background dataset composed of several other observations of relatively empty fields with bright sources removed. Those observations were screened in exactly the same manner as the cluster data. The total exposure of that dataset is about 70 ks. To be able to extract the background spectra and images in sky coordinates corrected for the observatory dither, the chip coordinates of the events from the background dataset were converted to the sky coordinate frame of the observation being analyzed. This was done by assigning randomly generated time tags to the background events and applying the corresponding aspect correction. The background spectra or images were then normalized by the ratio of the respective exposures. This procedure yields a background which is accurate to $`\sim 10`$% based on comparison to other fields; this uncertainty will be taken into account in our results.
For generating a temperature map (§3.3), we corrected the images for the effect of source smearing during the periods of CCD frame transfer. While the frame transfer duration (41 ms) is small compared to the useful exposure (3.2 s) of each read-out cycle, the contamination may be significant for the outer, low surface brightness regions of the cluster that share chip $`x`$ coordinates with the cluster's sharp brightness peak. To first approximation, this effect can be corrected by convolving the ACIS image with the readout trajectory (a line parallel to the chip $`y`$ axis), multiplying by the ratio of the frame transfer and useful exposures, and subtracting the result from the uncorrected image. This assumes that the image is not affected by the pileup effect, which is true for most cluster data, including ours.
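In array form this first-order correction is compact, since smearing a column uniformly along the readout direction is equivalent to replacing it by its column mean. This is a sketch of the described procedure, not the calibration code actually used:

```python
import numpy as np

def deblur_frame_transfer(img, t_transfer=0.041, t_frame=3.2):
    """Subtract the out-of-time-events contribution: smear each column
    uniformly along chip y (the readout direction), scale by the ratio
    of transfer to useful exposure, and subtract from the image."""
    ny = img.shape[0]
    streak = img.mean(axis=0, keepdims=True)   # uniform smear of a column
    return img - (t_transfer / t_frame) * np.repeat(streak, ny, axis=0)
```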
## 3. RESULTS
### 3.1. Image
An ACIS image of the cluster using the 0.3-10 keV events from chips S2 and S3 is shown in Fig. 2 (the cluster peak is in S3). An overlay of the X-ray contours on the DSS optical plate in Fig. 2b shows that the cluster brightness peak is slightly offset from the central galaxy (galaxy 201 in the Oegerle et al. notation; we will call it G1), and that the second bright galaxy, hereafter G2 (or galaxy 219 from Oegerle et al.), does not have any comparable gas halo around it. North of G2, there is an X-ray point source coincident with a narrow-tail radio galaxy (Harris, Bahcall, & Strom 1977).
The image in Fig. 2a shows a very regular, elliptical brightness distribution and two striking, elliptical-shaped edges, or abrupt drops, in the surface brightness, one 3<sup>′</sup> northwest of the cluster center and another 1<sup>′</sup> south of the center. We derive gas density and temperature profiles across these interesting structures in §§3.4-3.5.
### 3.2. Average cluster spectrum
Before proceeding to the spatially resolved spectroscopy, we fit the overall cluster spectrum to check consistency with previous studies. For this, we use a spectrum from the entire S3 chip, excluding point sources. This approximately corresponds to an integration radius of 5<sup>′</sup>. At present, the soft spectral response of the S3 chip is uncertain and we observe significant residual deviations below $`E\simeq 0.7`$ keV for any reasonable spectral models. Therefore, we have chosen to restrict all spectral analysis to energies 1–10 keV. The cluster is hot and this choice does not limit the accuracy of our main results. The spectra were extracted in PI (pulse height-invariant) channels that correct for the gain difference between the different regions of the CCD. The spectra from both pointings were grouped to have a minimum of 100 counts per bin and fitted simultaneously using the XSPEC package (Arnaud 1996). Model spectra were multiplied by the vignetting factor (auxiliary response) calculated by weighting the position-dependent effective area with the X-ray brightness over the corresponding image region. Fitting results for an absorbed single-temperature thin plasma model (Raymond & Smith 1977, 1992 revision) and a model with an additional cooling flow component are given in Table 1, where the iron abundance is relative to that of Anders & Grevesse (1989). Our single-temperature fit is in reasonable agreement with values from Ginga ($`9.0\pm 0.3`$ keV for $`N_H=5\times 10^{20}`$ cm<sup>-2</sup>; White et al. 1994) and ASCA ($`8.8\pm 0.6`$ keV for $`N_H=4.2\times 10^{20}`$ cm<sup>-2</sup>; Markevitch et al. 1998). At this stage of the Chandra calibration, and for our qualitative study, the apparent small discrepancy is not a matter of concern; also, the above values correspond to different integration regions for this highly non-isothermal cluster. If we allow for a cooling flow component (see Table 1), our temperature is consistent with a similarly derived ASCA value, $`9.3_{-0.7}^{+1.3}`$ keV (Allen & Fabian 1998), and the cooling rate with the one derived from the ROSAT images (Peres et al. 1998), although the presence of a cooling flow is not strongly required by the overall spectrum in our energy band. The table also shows that the absorbing column is weakly constrained (due to our energy cut) but is in good agreement with the Galactic value of $`4.2\times 10^{20}`$ cm<sup>-2</sup> (Dickey & Lockman 1990). We therefore fix $`N_H`$ at its Galactic value in the analysis below.
TABLE 1
Overall Spectrum Fits
| Model | $`T_e`$, | $`N_H`$, | Abund. | $`\dot{M}`$, | $`\chi ^2`$/d.o.f |
| --- | --- | --- | --- | --- | --- |
| | keV | $`10^{20}`$ cm<sup>-2</sup> | | $`h^2`$$`M_{}`$yr<sup>-1</sup> | |
| single-$`T`$ | $`8.1\pm 0.4`$ | $`3.8\pm 1.5`$ | $`0.27\pm 0.04`$ | | 517.2 / 493 |
| cooling flow | $`8.8_{-0.9}^{+1.2}`$ | $`5.9\pm 2.8`$ | $`0.28\pm 0.04`$ | $`69_{\mathrm{}}^{+70}`$ | 515.1 / 492 |
### 3.3. Temperature map
Using the Chandra data, it is possible to derive a two-dimensional temperature map within 3<sup>′</sup>–4<sup>′</sup> of the cluster peak. The Chandra angular resolution is more than sufficient to allow us to ignore any energy-dependent PSF effects and, for example, simply convert an X-ray hardness ratio at each cluster position into temperature. Taking advantage of this simplicity, we also tried to use as much spectral information as possible without dividing the cluster into any regions for full spectral fitting. To do this, we extracted images in five energy (or PI) bands 1.0–1.5–2.0–3.0–5.5–10 keV, smoothed them, and for each $`2^{\prime \prime }\times 2^{\prime \prime }`$ pixel fitted a spectrum consisting of the flux values in each band, properly weighted by their statistical errors. The corresponding background images were created as described in §2 and subtracted from each image. The background-subtracted images were approximately corrected for the frame transfer smearing effect following the description in §2 and divided by the vignetting factor relative to the on-axis position (within each energy band, the vignetting factor for different energies was weighted using a 10 keV plasma spectrum). The images were then smoothed by a variable-width Gaussian (the same for all bands) whose $`\sigma `$ varied from 10<sup>′′</sup> at the cluster peak to 30<sup>′′</sup> near the edges of the map. Bright point sources were masked prior to smoothing. We used a one-temperature plasma model with the absorption column fixed at the Galactic value and the iron abundance at the cluster average, multiplying the model by the on-axis values of the telescope effective area (since the images were vignetting-corrected). The instrument spectral response matrix was properly binned for our chosen energy bands.
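The per-pixel fit reduces to a one-parameter chi-square minimization once a grid of model band fluxes has been precomputed. A sketch follows, in which the model grid (an absorbed plasma model folded through the instrument response) is assumed to be supplied:

```python
import numpy as np

def fit_temperature(band_fluxes, band_errors, kT_grid, model_fluxes):
    """Weighted chi^2 fit of one pixel's five band fluxes against a
    precomputed one-parameter family of model band fluxes, where
    model_fluxes[i] corresponds to temperature kT_grid[i]. The model
    normalization is profiled out analytically at each grid point."""
    f, e = np.asarray(band_fluxes), np.asarray(band_errors)
    chi2 = []
    for m in model_fluxes:
        norm = np.sum(f * m / e**2) / np.sum(m**2 / e**2)  # best amplitude
        chi2.append(np.sum(((f - norm * m) / e) ** 2))
    return kT_grid[int(np.argmin(chi2))]
```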
The resulting temperature map is shown in Fig. 3. The useful exposure of our observations is relatively short so the statistical accuracy is limited. The map shows that the cluster brightness peak is cool and that this cool dense gas is displaced to the SE from the main galaxy G1. There is also a cool filament extending from the peak in the general direction of the second galaxy G2, or along the southern brightness edge. The G2 galaxy itself is not associated with any features in the temperature map.
At larger scales, the map shows that the hottest cluster gas lies immediately outside the NW brightness edge and to the south of the southern edge. In the relatively small region of the cluster covered by our analysis, our temperature map is in general agreement with the coarser ROSAT/Ginga map of Henry & Briel (1996). Both maps show that the center of the cluster is cool (probably has a cooling flow) and the hot gas lies outside, mostly to the north and west. The maps differ in details; for example, our map indicates an increase of the temperature southeast of the center where the ROSAT map suggests a decrease. An important conclusion from our map is that the brightness edges separate regions of cool and hot gas. These edges are studied in more detail in sections below.
There is also some marginal evidence in Fig. 3 for a faint cool filament running across the whole map through the cluster brightness peak and coincident with the chip quadrant boundary. It is within the statistical uncertainties and most probably results from some presently unknown detector effect. This feature does not affect our arguments.
### 3.4. Temperature profiles across the edges
To derive the temperature profiles across the edges, we divide the cluster into elliptical sectors as shown in Fig. 4a, chosen so that the cluster edges lie exactly at the boundaries of certain sectors, and so that the sectors cover the azimuthal angles where the edges are most prominent. Figure 4b shows the best-fit temperature values in each region, for both observations fitted together or separately (for a consistency check). The fitting was performed as described in §3.2. The temperatures shown in the figure correspond to the iron abundance fixed at the cluster’s average and a fixed Galactic absorption; when fit as a free parameter, the absorption column was consistent with the Galactic value in all regions. For both edges, as we move from the inside of the edge to the outer, less dense region, the temperature increases abruptly and significantly. The profiles also show a decrease of the temperature in the very center of the cluster, which is also seen in the temperature map in Fig. 3.
We must note here that our spectral results in the outer, low surface brightness regions of the cluster depend significantly on the background subtraction. To quantify the corresponding uncertainty, we varied the background normalization by $`\pm 10`$% (synchronously for the two observations), re-fitted the temperatures in all sectors and added the resulting difference in quadrature to the 90% statistical uncertainties. While the values for the brighter cluster regions are practically unaffected, for the regions on the outer side of the NW edge, these differences are comparable to the statistical uncertainty. The 10% estimate is rather arbitrary and appears to overestimate the observed variation of the ACIS quiescent particle background with time. A possible incomplete screening of background flares is another source of uncertainty that is difficult to quantify. Experimenting with different screening criteria shows that it can significantly affect the results. An approximate estimate of this uncertainty is made by comparing separate fits to the two observations (dotted crosses in Fig. 4b); their mutual consistency shows that for the conservative data screening that we used, this uncertainty is probably not greater than the already included error components.
### 3.5. Density and pressure profiles
Figure 4c shows X-ray surface brightness profiles across the two edges, derived using narrow elliptical sectors parallel to those used above for the temperatures. The energy band for these profiles is restricted to 0.5–3 keV to minimize the dependence of X-ray emissivity on temperature and to maximize the signal-to-noise ratio. Both profiles clearly show the sharp edges; the radial derivative of the surface brightness is discontinuous on a scale smaller than $`5^{\prime \prime }`$–$`10^{\prime \prime }`$ (or about 5–10$`h^{-1}`$ kpc, limited mostly by the accuracy with which our regions can be made parallel to the edges). The brightness edges have a very characteristic shape that indicates a discontinuity in the gas density profile. To quantify these discontinuities, we fitted the brightness profiles with a simple radial density model with two power laws separated by a jump. The curvature of the edge surfaces along the line of sight is unknown; therefore, for simplicity, we projected the density model under the assumption of spherical symmetry with the average radius as the single radial coordinate, even though the profiles are derived in elliptical regions. The accuracy of such modeling is sufficient for our purposes. We also restrict the fitting range to the immediate vicinity of the brightness edges (see Fig. 4c) and ignore the gas temperature variations since they are unimportant for the energy band we use. The free parameters are the two power-law slopes and the position and amplitude of the density jump. The best-fit density models are shown in Fig. 4d and the corresponding brightness profiles are overlaid as histograms on the data points in Fig. 4c. The best-fit amplitudes of the density jumps are given by factors of $`1.85\pm 0.10`$ and $`2.0\pm 0.1`$ for the S and NW edges, respectively. As Figure 4c shows, the fits are very good, with respective $`\chi ^2=26.5/25`$ d.o.f. and $`18.9/22`$ d.o.f. The goodness of the fits suggests that the curvature of the edges along the line of sight is indeed fairly close to that in the plane of the sky. To estimate how model-dependent the derived amplitudes of the jumps are, we tried to add a constant density background (positive or negative) as another fitting component representing possible deviations of the profile from the power law at large radii. The resulting changes of the best-fit jump amplitudes were comparable to the above small uncertainties. Thus, our evaluation of the density discontinuities appears robust, barring strong projection effects that can reduce the apparent density jump at the edge.
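The forward model behind these fits, a spherical two power-law density profile with a jump, projected as emissivity $`n^2`$ along the line of sight, can be sketched as follows; the slopes, truncation radius and integration grid here are illustrative choices, not the fitted values:

```python
import numpy as np

def projected_brightness(r2d, r_jump, amp=2.0, slope_in=0.8, slope_out=1.4):
    """Surface brightness of a broken power-law density model with a
    density jump 'amp' at r_jump, integrating emissivity ~ n(r)^2 along
    the line of sight under spherical symmetry (temperature dependence
    is ignored, as appropriate in the soft band)."""
    def n(r):
        inner = (r / r_jump) ** (-slope_in)
        outer = (r / r_jump) ** (-slope_out) / amp
        return np.where(r < r_jump, inner, outer)

    z = np.linspace(0.0, 20.0 * r_jump, 4001)      # truncation radius assumed
    r3d = np.sqrt(np.atleast_1d(r2d)[:, None] ** 2 + z[None, :] ** 2)
    em = n(r3d) ** 2
    return 2.0 * (0.5 * (em[:, 1:] + em[:, :-1]) * np.diff(z)).sum(axis=1)

radii = np.linspace(0.2, 2.0, 10)                  # in units of r_jump
print(projected_brightness(radii, r_jump=1.0))
```

A fit would then minimize chi-square over the two slopes, the jump position and its amplitude, as described in the text.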
From the density and temperature distributions in the vicinity of the brightness edges, we can calculate the pressure profiles. Note that even though the measured temperatures correspond to emission-weighted projections along the line of sight, they are reasonably close to the true three-dimensional temperatures at any given radius, because the X-ray brightness declines steeply with radius. Figure 4e shows pressure profiles calculated by multiplying the measured temperature values and the model density values in each region (the density is taken at the emission-weighted radius for each region). Remarkably, while the temperature and density profiles both exhibit clear discontinuities at the edges, the pressure profiles are consistent with no discontinuity within the uncertainties. Thus the gas is close to local pressure equilibrium at the density edges. It is also noteworthy that the denser gas inside the edges has lower specific entropy, therefore the edges are convectively stable.
## 4. DISCUSSION
Shock fronts would seem the most natural interpretation for the density discontinuities seen in the X-ray image of A2142. Such an interpretation was proposed for a similar brightness edge seen in the ROSAT image of another merging cluster, A3667 (Markevitch, Sarazin, & Vikhlinin 1999), even though the ASCA temperature map did not entirely support this explanation. However, if these edges in A2142 were shocks, they would be accompanied by a temperature change across the edge in the direction opposite to that observed. Indeed, applying the Rankine–Hugoniot shock jump conditions for a factor of $`2`$ density jump and taking the post-shock temperature to be $`7.5`$ keV (the inner regions of the NW edge), one would expect to find $`T\simeq 4`$ keV gas in front of the shock (i.e. on the side of the edge away from the cluster center). This is inconsistent with the observed clear increase of the temperature across both edges and the equivalent increase of the specific entropy. This appears to exclude the shock interpretation. An alternative is proposed below.
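This expectation follows directly from the Rankine–Hugoniot conditions: a compression factor of 2 in a $`\gamma =5/3`$ gas corresponds to a Mach number of $`\sqrt{3}`$ and a temperature jump of 1.75, so a 7.5 keV post-shock region implies roughly 4.3 keV pre-shock gas. A few lines of Python verify the arithmetic:

```python
g = 5.0 / 3.0                          # adiabatic index of the gas
comp = 2.0                             # observed density jump
M2 = 2.0 * comp / ((g + 1.0) - comp * (g - 1.0))      # Mach^2 from compression
t_jump = ((2.0 * g * M2 - (g - 1.0)) * ((g - 1.0) * M2 + 2.0)
          / ((g + 1.0) ** 2 * M2))                    # T2/T1 across the shock
print(M2 ** 0.5, t_jump, 7.5 / t_jump)  # M ~ 1.7, T2/T1 = 1.75, ~4.3 keV
```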
### 4.1. Stripping of cool cores by shocked gas
The smooth, comet-like shape and sharpness of the edges alone (especially of the NW edge) may hint that we are observing a body of dense gas moving through and being stripped by a less dense surrounding gas. This dense body may be the surviving core of one of the merged subclusters that has not been penetrated by the merger shocks due to its high initial pressure. The edge observed in the X-ray image could then be the surface where the pressure in the dense core gas is in balance with the thermal plus ram pressure of the surrounding gas; all core gas at higher radii that initially had a lower pressure has been stripped and left behind (possibly creating a tail seen as a general elongation to the SE). The hotter, rarefied gas beyond the NW edge can be the result of shock heating of the outer atmospheres of the two colliding subclusters, as schematically shown in Fig. 5. In this scenario, the outer subcluster gas has been stopped by the collision shock, while the dense cores (or, more precisely, regions of the subclusters where the pressure exceeded that of the shocked gas in front of them, which prevented the shock from penetrating them) continued to move ahead through the shocked gas. The southern edge may delineate the remnant of the second core (core B in Fig. 5) that was more dense and compact and still retains a cooling flow. The two cores should have already passed the point of minimum separation and be moving apart at present. It is unlikely that the less dense core A could survive a head-on passage of the denser core (in that case we probably would not see the NW edge). This suggests a nonzero impact parameter; for example, the cores could have been separated along the line of sight during the passage, with core B either grazing or being projected onto core A at present.
Although the thermal pressure profiles in Fig. 4e do not suggest any abrupt decline across the edges that could be due to a ram pressure component, at the present accuracy they do not strongly exclude it. To estimate what bulk velocity, $`\upsilon `$, is consistent with the data on the NW edge, we can apply the pressure equilibrium condition to the edge surface, $`p_1=p_2+\rho _2\upsilon ^2`$, where indices 1 and 2 correspond to quantities inside and outside the edge, respectively. The density jump by a factor of 2 and the 90% lower limit on the temperature in the nearest outer bin, $`T_2>10`$ keV, corresponds to an average bulk velocity of the gas in that region of $`\upsilon <900`$ km s<sup>-1</sup>. This is consistent with subcluster velocities of order 1000 km s<sup>-1</sup> expected in a merger such as A2142. Note, however, that this is a very rough estimate because, if our interpretation is correct, the gas velocity would be continuous across the edge and there must be a velocity gradient, as well as a compression with a corresponding temperature increase, immediately outside the edge. Also, if the core moves at an angle to the plane of the sky, the maximum velocity may be higher, since one of its components would be tangential to the contact surface that we can see. In addition, as noted above, projection effects can dilute the density jump, leaving smaller apparent room for ram pressure. A similar estimate for the southern edge is $`\upsilon <400`$ km s<sup>-1</sup>, but it is probably even less firm because of the likely projection of core B onto core A.
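The quoted velocity limit follows from the stated pressure balance; a few lines reproduce it (the mean molecular weight is an assumed typical value for intracluster gas):

```python
import math

kT1, kT2 = 7.5, 10.0            # keV: inside and outside the NW edge
jump = 2.0                      # n1 / n2 density jump at the edge
mu, m_p = 0.6, 1.6726e-24       # mean molecular weight (assumed), proton mass [g]
keV = 1.6022e-9                 # erg per keV
# p1 = p2 + rho2 v^2  =>  v^2 = (n1 kT1 - n2 kT2) / (mu m_p n2)
v = math.sqrt((jump * kT1 - kT2) * keV / (mu * m_p))
print(v / 1e5)                  # ~890 km/s, matching the ~900 km/s in the text
```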
Depending on the velocities of the cores relative to the surrounding (previously shocked) gas, they may or may not create additional bow shocks at some distance in front of the edges (shown as dashed lines in Fig. 5). The above upper limit on the velocity of core A is lower than the sound velocity in a $`T>10`$ keV gas ($`\upsilon _s>1600`$ km s<sup>-1</sup>) and is therefore consistent with no shock, although it does not exclude it due to the possible projection and orientation effects mentioned above. The available X-ray image and temperature map do not show any obvious corresponding features, but deeper exposures might reveal such shocks.
Comparison of the X-ray and optical images (Fig. 2b) offers an attractive possibility that core B is centered on galaxy G1 and core A on galaxy G2. However, this scenario has certain problems. Velocity data, while scarce ($`\sim 10`$ galaxy velocities in the central region; Oegerle et al.), show that G2 is separated from most other cluster members by a line-of-sight velocity of $`\sim 1800`$ km s<sup>-1</sup>, except for the radio galaxy north of G2 that has a similar velocity. It therefore appears unlikely that G2 can be the center of a relatively big, $`T\simeq 7`$ keV subcluster, unless a deeper spectroscopic study reveals a concentration of nearby galaxies with similar velocities. It is possible that this galaxy is completely unrelated to core A; we recall that it does not display any strong X-ray brightness enhancement. Another problem is a displacement, in the wrong direction, of the cool density peak from the G1 galaxy. If G1 is at the peak of the gravitational potential of the smaller core B, one would expect the gas to lag behind the galaxy as the core moves to the south or southeast (as the edge suggests). The observed displacement might be explained if at present core B is moving mostly along the line of sight on a circular orbit and the central galaxy is already starting its turnaround toward G2, perhaps leaving behind a trail of cool gas seen as a cool filament. The observed southern edge would then be a surface where the relative gas motion is mostly tangential, which is also in better agreement with the low allowed ram pressure.
Below we propose a slightly different scenario for the merger, motivated by comparison of the observed structure with hydrodynamic simulations. It invokes the same physical mechanism for the observed density edges.
### 4.2. Late stage unequal mass merger
As noted above, our temperature map is substantially in agreement with the coarser map derived by Henry & Briel (1996) using the ROSAT PSPC. The ROSAT map covers a greater area than the Chandra data and shows a hot sector extending to large radii in the NW. If this is correct, then a comparison of the X-ray structure with some hydrodynamic simulations (e.g. Roettiger, Loken & Burns 1997, hereafter RLB97) suggests that A2142 is the result of an unequal merger, viewed at a time at least 1–2 Gyr after the initial core crossing. The late phase is required by the largely smooth and symmetrical structure of the X-ray emission, and the lack of obvious shocks. In the simulations, shock-heated gas at the location of the initial impact of the smaller system can still be seen at late times, similar to the hot sector seen in the Henry & Briel map far to the NW. Hence, in this model, the low mass system has impacted from the NW.
The undisrupted cool core which we see in A2142 differs from what is seen in the work of RLB97 and many others. However, these simulations all involved clusters with low core gas densities ($`n<10^{-3}`$ cm<sup>-3</sup>). Under these circumstances, the shock runs straight through the core of the main cluster, raising its temperature. In contrast, it appears that the collision shock has failed to penetrate the core of A2142, in which gas densities reach $`10^{-2}`$ cm<sup>-3</sup>, and has instead propagated around the outside, heating the gas to the north and southwest of the cluster core.
In this model, galaxy G1 is identified with the center of the main cluster (whose core includes the whole elliptical central region of A2142), and there is less difficulty in accepting G2 (which lies essentially along the collision axis, at least in projection) as being the former central galaxy of the smaller subcluster. Having lost its gas halo on entering the cluster from the NW, G2 has already crossed the center of the main cluster twice, and is now either returning to the NW, or falling back towards the center for a third time. The latter option derives some support from the fact that the radio galaxy, presumably (from its similar line-of-sight velocity) accompanying G2, has a narrow-angle radio tail which points to the west, away from the center of the main cluster (Bliton et al. 1998). The idea that G2 has already crossed the cluster core also helps to explain the elongated morphology of the central cooling flow, apparent in Fig. 3.
Simulations show that as the subcluster recrosses the cluster core, gas which has been pulled out to the SE should fall in behind it, forming an extended inflowing plume (see, e.g. RLB97 Fig. 8f). This is consistent with the shallow X-ray surface brightness gradient seen to the SE in Fig. 2b. In the case of A2142, this gas, flowing in from the SE, will run into the dense cool core surrounding G1 at subsonic velocity, and this could give rise to the SE density step through a physical mechanism similar to that discussed in the previous section, involving gas shear and stripping at the interface.
In this scenario, the NW edge may be the fossilized remains of the initial subcluster impact that took place here. Shock heating from this impact has raised the entropy of the gas outside the core to the NW. The shock has propagated into the core until the radius where the pressure in the core matched the pressure driving the shock. Subsequently, the flow of the shocked gas towards the SE has swept away the outer layer of the core where the shock decayed, leaving the high entropy shocked gas in direct contact with the low entropy unshocked core. Once the gas returns to a hydrostatic configuration, this entropy step manifests itself as a jump in temperature and density of the form seen, while the gas pressure would be continuous across the edge. In contrast to the model from the previous section, little relative motion of the gas to either side of the NW edge is expected at this late merger stage, so there is no current stripping. Simulations are required to investigate how long a sharp edge of the kind observed can persist under these conditions; this depends in part on poorly understood factors such as the thermal conductivity of the gas.
## 5. SUMMARY
We have presented the results of a short Chandra observation of the merging cluster A2142, which include a temperature map of its central region and the temperature and density profiles across the two remarkable surface brightness edges. The data indicate that these edges cannot be shock fronts — the dense gas inside the edges is cooler than the gas outside. It is likely that the edges delineate the dense subcluster core(s) that survived merger and shock heating of their surrounding, less dense atmospheres. We propose that the edges themselves are surfaces where these cores are presently being ram pressure-stripped by the surrounding hot gas, or fossilized remains of such stripping which took place earlier in the merger. More accurate temperature and pressure profiles for the edge regions would help to determine whether the gas stripping is continuing at present, and may also provide information on the gas thermal conductivity. A comprehensive galaxy velocity survey of the cluster, and large-scale temperature maps such as will be available from XMM, will help to construct a definitive model for this interesting system.
An accurate quantitative interpretation of the available optical and X-ray data on A2142 requires hydrodynamic simulations of the merger of clusters with realistically dense cores and radiative cooling. We also hope that the results presented here will encourage an improvement in linear resolution of the simulations necessary for modeling the sharp cluster features such as those Chandra can now reveal.
The results presented here are made possible by the successful effort of the entire Chandra team to build, launch and operate the observatory. Support for this study was provided by NASA contract NAS8-39073 and by Smithsonian Institution. TJP, PEJN and PM thank CfA for hospitality during the course of this study. |
# Nonlinear denoising of transient signals with application to event related potentials
## 1 Introduction
The electroencephalogram (EEG) reflects brain electrical activity owing to both intrinsic dynamics and responses to external stimuli. To examine pathways and time courses of information processing under specific conditions, several experiments have been developed that control sensory inputs. Usually, well defined stimuli are repeatedly presented during experimental sessions (e.g., simple tones, flashes, smells, or touches). Each stimulus is assumed to induce synchronized neural activity in specific regions of the brain, appearing as potential changes in the EEG. These evoked potentials (EPs) often exhibit multiphasic peak amplitudes within the first hundred milliseconds after stimulus onset. They are specific for different stages of information processing, thus giving access to both temporal and spatial aspects of neural processes. Other classes of experimental setups are used to investigate higher cognitive functions. For example, subjects are requested to remember words, or they are asked to respond to specific target stimuli, e.g., by pressing a button upon their occurrence. The neural activity induced by this kind of stimulation also leads to potential changes in the EEG. These event related potentials (ERPs) can extend over a few seconds, exhibiting peak amplitudes mostly later than those of EPs. Deviations of amplitudes and/or moments of occurrence (latencies) from those of normal EPs/ERPs are often associated with dysfunction of the central nervous system and are thus of high relevance for diagnostic purposes.
As compared to the ongoing EEG, EPs and ERPs possess very low peak amplitudes which, in most cases, are not recognizable by visual inspection. Thus, to improve their low signal-to-noise ratio, EPs/ERPs are commonly averaged (Figure 1), assuming synchronous, time-locked responses that are not correlated with the ongoing EEG. In practice, however, these assumptions may be inaccurate and, as a result of averaging, variations of EP/ERP latencies and amplitudes are not accessed. In particular, short lasting alterations which may provide relevant information about cognitive functions are probably smoothed or even masked by the averaging process. Therefore, investigators are interested in single trial analysis, which allows extraction of reliable signal characteristics from single EP/ERP sequences . In ref. , autoregressive (AR) models are fitted to EEG sequences recorded prior to stimulation in order to subtract uncorrelated neural activity from ERPs. However, it is an empirical fact that external stimuli lead to event-related desynchronization of the ongoing EEG. Thus, the estimated AR model might be incorrect. The authors of ref. applied autoregressive moving average (ARMA) models to time sequences which were a concatenation of several EP/ERP sequences. In the case of short signal sequences, this led to better spectral estimates than are commonly achieved by periodograms. The main restriction is, however, that the investigated signals must be linear and stationary, which cannot strictly be presumed for the EEG. In particular, the high model order in comparison to the signal length shows that AR and ARMA models are often inadequate for EP/ERP analysis. Other methods have been developed to deal with the nonstationary and transient character of EPs/ERPs. Woody introduced an iterative method for EP/ERP latency estimation based on common averages. He determined the time instant of best correlation between a template (the EP/ERP average) and single trials by shifting the latter in time. This method corrects for a possible latency variability of EPs/ERPs, but its performance depends strongly on the initial choice of the template. The Wiener filter , on the other hand, uses spectral estimation to reduce uncorrelated noise. This technique, however, is less accurate for EPs/ERPs, because the time course of transient signals is lost in the Fourier domain. Thus, DeWeerd introduced a time adaptive Wiener filter, allowing better adjustment to signal components of short duration. The paradigm of orthogonal wave packets (the wavelet transform<sup>1</sup><sup>1</sup>1Continuous wavelet transform: $`w_{a,b}(\mathrm{\Psi },x(t))=\frac{1}{\sqrt{\left|a\right|}}\int _{-\infty }^{+\infty }x(t)\mathrm{\Psi }(\frac{t-b}{a})𝑑t`$$`w`$: wavelet coefficient, $`a`$: scaling parameter, $`b`$: translation parameter, $`x(t)`$: time series, $`\mathrm{\Psi }`$: mother wavelet function) also follows this concept of adapted time-frequency decomposition. In addition, the wavelet transform provides several useful properties which make it preferable even for the analysis of transient signals :
* Wavelets can represent smooth functions as well as singularities.
* The basis functions are local, which makes most coefficient-based algorithms naturally adapted to inhomogeneities in the function.
* They have the unconditional basis property for representing a variety of functions, implying that the wavelet basis is usually a reasonable choice even if very little is known about the signal.
* The fast wavelet transform is computationally inexpensive, of order $`O(N)`$, where $`N`$ denotes the number of sample points. In contrast, the fast Fourier transform (FFT) requires $`O(N\mathrm{log}N)`$.
* Nonlinear thresholding is nearly optimal for signal recovery.
For these reasons, wavelets became a popular tool for the analysis of brain electrical activity , especially for denoising and classification of single-trial EPs/ERPs. Donoho et al. introduced a simple thresholding algorithm to reduce noise in the wavelet domain, requiring no assumptions about the time course of signals. Nevertheless, high signal amplitudes are needed to distinguish between noise- and signal-related wavelet coefficients in single trials. Bertrand et al. modified the original a posteriori Wiener filter to find accurate filter settings. The authors emphasized better adaptation to transient signal components than can be achieved by corresponding techniques in the frequency domain. However, due to the averaging process, this technique runs the risk of choosing inadequate filter settings in the case of high latency variability. The same restriction holds for discriminant techniques applied e.g. by Bartink et al. . Nevertheless, wavelet-based methods enable a more adequate treatment of transient signals than techniques applied in the frequency domain. The question of accurate filter settings, however, is still an unresolved problem.
To circumvent this problem, we introduce a new method for single-trial analysis of ERPs that assumes neither fully synchronized nor stationary ERP sequences. The method is related to techniques already developed for the paradigm of deterministic chaotic systems, using time-delay embeddings of signals for state space reconstruction and denoising . Schreiber and Kaplan demonstrated the accuracy of these methods in reducing measurement noise in the human electrocardiogram (ECG). Heart beats are also of transient character and exhibit relevant signal components in a frequency range that compares to ERPs. Unfortunately, ERPs are of shorter duration than the ECG. Thus, in the case of high-dimensional time-delay embedding (on the order of the signal length), we cannot create a sufficient number of delay vectors for ERP sequences. To circumvent this problem we reconstruct ERPs in state space using circular embeddings, which have turned out to be appropriate even for signal sequences of short duration. In contrast to the nonlinear projection scheme described in , we do not use singular value decomposition (SVD) to determine clean signals in state space. The reason for this is threefold. First, estimating relevant signal components using the inflexion of ordered eigenvalues is not always applicable to EEG, because eigenvalues may decay almost linearly. In this case, an a priori restriction to a fixed embedding dimension is needed, running the risk either of discarding important signal components or of retaining noise of considerable amplitude if only little is known about the signal. Second, SVD stresses the directions of highest variance, so that transient signal components may be smoothed by the projection. Third, the number of signal-related directions in state space may alter locally, which is also not accounted for by SVD. Instead, we calculate wavelet transforms of delay vectors and determine signal-related components by estimating variances separately for each state space direction. Scaling properties of wavelet bases allow very fast calculation as well as focusing on specific frequency bands. To confirm the accuracy of our method, we apply it to ERP-like test signals contaminated with different types of noise. Afterwards, we give an example of reconstructed mesial temporal lobe P300 potentials, which were recorded from within the hippocampal formation of a patient with focal epilepsy.
## 2 Outline of the Method
A time series may be contaminated by random noise, so that one measures $`y_n=x_n+ϵ_n`$. If the underlying signal is purely deterministic, it is restricted to a low-dimensional hyper-surface in state space. For the transient signals we are concerned with here, we assume this still to be valid. We hope to identify this subspace and to correct $`y_n`$ by simply projecting it onto the subspace spanned by the clean data .
Technically, we realize projections onto noise-free subspaces as follows. Let $`Y=(y_1,y_2,\ldots ,y_N)`$ denote an observed time sequence. Time-delay embedding of this sequence in an $`m`$-dimensional state space leads to state space vectors $`𝐲_n=(y_n,\ldots ,y_{n-(m-1)\tau })`$, where $`\tau `$ is an appropriate time delay. In the embedding space of dimension $`m`$ we compute the discrete wavelet transform of all delay vectors in a small neighborhood of a vector $`𝐲_n`$ we want to correct. Let $`r_{n,j}`$ with $`j=0,\ldots ,k`$ denote the indices of the $`k`$ nearest neighbors of $`𝐲_n`$ and of $`𝐲_n`$ itself, i.e. $`r_{n,0}=n`$ for $`j=0`$. Thus, the first neighbor distances from $`𝐲_n`$ in increasing order are $`d(Y)_n^{(1)}\equiv \|𝐲_n-𝐲_{r_{n,1}}\|=\mathrm{min}_{r^{\prime }}\|𝐲_n-𝐲_{r^{\prime }}\|`$, $`d(Y)_n^{(2)}\equiv \|𝐲_n-𝐲_{r_{n,2}}\|=\mathrm{min}_{r^{\prime }\ne r_{n,1}}\|𝐲_n-𝐲_{r^{\prime }}\|`$, etc., where $`\|𝐲-𝐲^{\prime }\|`$ is the Euclidean distance in state space. Now the important assumption is that the clean signal lies within a subspace of dimension $`d\le m`$, and that this subspace is spanned by only a few basis functions in the wavelet domain. Let $`𝐰_{r_{n,j}}`$ denote the fast wavelet transform of $`𝐲_{r_{n,j}}`$. Furthermore, let $`C_i^{(k)}(𝐰_{r_n})=\langle 𝐰_{r_{n,j}}\rangle _i`$ denote the $`i`$th component of the centre of mass of the $`𝐰_{r_{n,j}}`$ (the average over $`j=0,\ldots ,k`$), and $`\sigma _{n,i}^2`$ the corresponding variance. In the case of neighbors owing to the signal (true neighbors), we can expect the ratio $`C_i^{(k)}(𝐰_{r_n})/\sigma _{n,i}^2`$ to be higher in signal-related than in noise-related directions. Thus, a discrimination between noisy and noise-free components in state space is possible. Let
$$\stackrel{~}{w}_{n,i}=\{\begin{array}{cc}\hfill w_{n,i}:& |C_i^{(k)}(𝐰_{r_n})|\ge 2\lambda \frac{\sigma _{n,i}}{\sqrt{k+1}}\hfill \\ \hfill 0:& \text{else}\hfill \end{array}$$
(1)
define a shrinking condition to carry out the projection onto a noise-free manifold . The parameter $`\lambda `$ denotes a thresholding coefficient that depends on specific qualities of signal and noise. The inverse fast wavelet transform of $`\stackrel{~}{𝐰}_n`$ provides a corrected vector in state space; applying our projection scheme to all remaining delay vectors thus yields a set of corrected vectors, out of which the clean signal can be reconstructed.
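As a concrete illustration of this projection step, the following sketch applies the shrinking condition (1) to a neighborhood of delay vectors. It is not taken from the original work: it assumes an orthonormal Haar basis, $`\tau =1`$, an embedding dimension that is a power of 2, and the function names are ours.

```python
import numpy as np

def haar(v):
    """Orthonormal Haar transform of a vector whose length is a power of 2."""
    w = np.asarray(v, dtype=float).copy()
    n = len(w)
    while n > 1:
        a = (w[0:n:2] + w[1:n:2]) / np.sqrt(2.0)   # averages (coarse scale)
        d = (w[0:n:2] - w[1:n:2]) / np.sqrt(2.0)   # differences (details)
        w[:n // 2], w[n // 2:n] = a, d
        n //= 2
    return w

def ihaar(w):
    """Inverse of haar()."""
    v = np.asarray(w, dtype=float).copy()
    n = 1
    while n < len(v):
        a, d = v[:n].copy(), v[n:2 * n].copy()
        v[0:2 * n:2] = (a + d) / np.sqrt(2.0)
        v[1:2 * n:2] = (a - d) / np.sqrt(2.0)
        n *= 2
    return v

def shrink(neighbors, lam):
    """Correct neighbors[0] via Eq. (1); neighbors has shape (k+1, m),
    with the vector to be corrected stored in row 0."""
    W = np.array([haar(y) for y in neighbors])   # transform of each delay vector
    C = W.mean(axis=0)                           # centre of mass, component-wise
    sigma = W.std(axis=0)                        # corresponding std deviations
    keep = np.abs(C) >= 2.0 * lam * sigma / np.sqrt(len(neighbors))
    return ihaar(W[0] * keep)                    # zero noise-like components, invert
```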
### 2.1 Extension to multiple signals of short length
Let $`Y_l=(y_{l,1},y_{l,2},\mathrm{},y_{l,N})`$ denote a short signal sequence that is repeatedly recorded during an experiment, where $`l=1,\mathrm{},L`$ orders the number of repetitions. A typical example is ERP recordings, where each $`Y_l`$ represents an EEG sequence following a well-defined stimulus. Time-delay embeddings of these sequences can be written as $`𝐲_{l,n}=(y_{l,n},\ldots ,y_{l,n-(m-1)\tau })`$. To achieve a sufficient number of delay vectors even for high embedding dimensions, we define circular embeddings by
$$𝐲_{l,n}=(y_{l,n},\ldots ,y_{l,1},y_{l,N},\ldots ,y_{l,N-(m-n)}),\qquad n<m,$$
(2)
so that all delay vectors with indices $`1\le n\le N`$ can be formed. Circular embeddings are introduced as the most attractive choice for handling the ends of sequences. Alternatives are (i) losing neighbors, (ii) zero-padding, and (iii) shrinking the embedding dimension towards the ends. However, discontinuities may occur at the edges, requiring some smoothing. For each $`Y_l`$ we define the smoothed sequence as
$$𝐲_{l,n,i}^s=\{\begin{array}{cc}y_{l,n,i}e^{-(\frac{q-i}{p})^2}:\hfill & i<q\hfill \\ y_{l,n,i}:\hfill & q\le i\le N-q\hfill \\ y_{l,n,i}e^{-(\frac{i-(N-q)}{p})^2}:\hfill & i>N-q\hfill \end{array}$$
(3)
where $`q`$ defines the window width in sample points, $`p`$ the steepness of exponential damping, and $`i`$ the time index. Time-delay embedding of several short sequences leads to a filling of the state space, so that a sufficient number of nearest neighbors can be found for each point.
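A minimal sketch of the circular embedding (2) and the edge smoothing (3) might look as follows (0-based indexing; the function names and the use of NumPy are our assumptions, not part of the original method description):

```python
import numpy as np

def smooth_edges(y, q, p):
    """Damp the first and last q samples of a sequence, as in Eq. (3)."""
    ys = np.asarray(y, dtype=float).copy()
    i = np.arange(len(ys))
    left, right = i < q, i > len(ys) - q
    ys[left] *= np.exp(-((q - i[left]) / p) ** 2)
    ys[right] *= np.exp(-((i[right] - (len(ys) - q)) / p) ** 2)
    return ys

def circular_delay_vectors(y, m, tau=1):
    """All m-dimensional delay vectors of y, wrapping around the ends (Eq. (2))."""
    n = len(y)
    idx = (np.arange(n)[:, None] - tau * np.arange(m)[None, :]) % n
    return np.asarray(y)[idx]          # shape (n, m); row j is the vector y_j
```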
### 2.2 Parameter Selection
Appropriate choice of parameters, in particular embedding dimension $`m`$, time delay $`\tau `$, thresholding coefficient $`\lambda `$, as well as the number of neighbors $`k`$ is important for accurate signal reconstruction in state space. Several methods have been developed to estimate “optimal” parameters, depending on specific aspects of the given data (e.g., noise level, type of noise, stationarity, etc.). These assume that the clean signal is indeed low dimensional, an assumption we are not ready to make in the case of ERPs. Thus, we approached the problem of “optimal” parameters empirically.
Parameters $`\tau `$ and $`m`$ are not independent of each other. In particular, high embedding dimensions allow small time delays and vice versa. We estimated ”optimal” embedding dimensions and thresholding coefficients on simulated data by varying $`m`$ and $`\lambda `$ for a fixed $`\tau =1`$. To allow a fast wavelet transform, we chose $`m`$ to be a power of 2.
Repeated measurements, as in the case of EPs/ERPs, have a maximum number of true neighbors which is given by $`k_{max}=L`$. In the case of identical signals this is the best choice imaginable. However, real EPs/ERPs may alter during experiments, and it seems more appropriate to use a maximum distance to which true neighbors are assumed to be restricted. We define this distance by
$$d(𝐲)_{max}=\frac{\sqrt{2}}{LN}\underset{l=1,n=1}{\overset{L,N}{\sum }}d(𝐲)_{n,l}^{(L)}$$
(4)
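For small data sets, the distance criterion (4) can be computed directly from the pooled delay vectors. The brute-force sketch below is our illustration (with an $`O((LN)^2)`$ memory cost, which is acceptable for short ERP segments):

```python
import numpy as np

def max_true_neighbor_distance(embedded):
    """Eq. (4): sqrt(2)/(L*N) times the sum, over all L*N delay vectors,
    of the distance to each vector's L-th nearest neighbor.
    `embedded` has shape (L, N, m)."""
    L = embedded.shape[0]
    X = embedded.reshape(-1, embedded.shape[-1])   # pool all trials: (L*N, m)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    D.sort(axis=1)                                 # column 0 is the self-distance (0)
    return np.sqrt(2.0) * D[:, L].mean()           # L-th neighbor, averaged
```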
## 3 Model Data
### 3.1 Generating test signals and noise
To demonstrate the effectiveness of our denoising technique and to estimate accurate values for $`m`$, $`\lambda `$, and $`L`$, we applied it to EP/ERP-like test signals contaminated with white noise and in-band noise. The latter was generated using phase-randomized surrogates of the original signal . Test signals consisted of 256 sample points and were a concatenation of several Gaussian functions with different standard deviations and amplitudes. To simulate EPs/ERPs not fully synchronized with stimulus onset, test signals were shifted randomly in time (normally distributed, std. dev.: $`20`$ sample points, max. shift: $`40`$ sample points). Since even fast components of the test signal extended over several sample points, a minimum embedding dimension of $`m=16`$ was required to cover any significant fraction of the signal. The highest embedding dimension was bounded by the length of the signal sequences and the number of embedded trials, thus allowing a maximum of $`m=256`$. However, if the embedding dimension is $`m=N`$, the neighborhood is no longer defined by local characteristics, and we can expect denoised signals to be smoothed in the case of multiple time-varying components.
### 3.2 Denoising of test signals
Let $`X_l=(x_{l,1},x_{l,2},\mathrm{},x_{l,N})`$ denote the $`l^{th}`$ signal sequence of a repeated measurement, $`Y_l=(y_{l,1},y_{l,2},\mathrm{},y_{l,N})`$ the noise contaminated sequence, and $`\stackrel{~}{Y}_l=(\stackrel{~}{y}_{l,1},\stackrel{~}{y}_{l,2},\mathrm{},\stackrel{~}{y}_{l,N})`$ the corresponding result of denoising. Then
$$𝐫=\frac{1}{L}\sum _{l=1}^{L}\sqrt{\frac{(Y_l-X_l)^2}{(\stackrel{~}{Y}_l-X_l)^2}}$$
(5)
defines the noise reduction factor, which quantifies the signal improvement owing to the filtering process.
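Eq. (5) translates directly into code; the following sketch (ours, assuming clean, noisy and denoised sequences are stored as the rows of three arrays) computes $`𝐫`$ for a set of trials:

```python
import numpy as np

def noise_reduction_factor(X, Y, Y_tilde):
    """Eq. (5): mean over trials of the RMS error ratio before/after filtering.
    X, Y, Y_tilde have shape (L, N)."""
    before = np.sum((Y - X) ** 2, axis=1)        # residual power of the raw trials
    after = np.sum((Y_tilde - X) ** 2, axis=1)   # residual power after denoising
    return np.mean(np.sqrt(before / after))      # r > 1 means the filter helped
```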
We determined $`𝐫`$ for test signals contaminated with white noise, using noise amplitudes ranging from 25% to 150% and embedding dimensions ranging from 16 to 128 (Figure 2a, Figure 3). Five repetitions for each parameter configuration were calculated using 5 embedded trials each. In the case of $`\lambda \le 2`$, the noise reduction factor was quite stable against changes of noise levels but depended on the embedding dimension $`m`$ and thresholding coefficient $`\lambda `$. Best performance was achieved for $`1.0\le \lambda \le 2.0`$ ($`𝐫_{max}^{m=128,\lambda =2.0}=4.7`$). In the case of $`\lambda >4.0`$, most signal components were rejected, and as a result, the noise reduction factor $`𝐫`$ increased linearly with noise levels, as expected. Figure 2b and Figure 4 depict the effects of denoising 5 test signals contaminated with in-band noise. In comparison to white noise the performance decreased but nevertheless enabled satisfactory denoising for $`0.5\le \lambda \le 1.0`$ ($`𝐫_{max}^{m=128,\lambda =1.0}=1.6`$). Within this range, the noise reduction factor $`𝐫`$ depended only weakly on noise levels. Note that the embedding dimension must be sufficiently high ($`m=128`$) to find true neighbors.
In order to simulate EPs/ERPs with several time-varying components, we used 5 test signals which were again a concatenation of different Gaussian functions, each, however, randomly shifted in time (Figure 2c and Figure 5). In contrast to test signals with time-fixed components, the ”optimal” embedding dimension depended on the thresholding coefficient $`\lambda `$. Higher values of $`\lambda `$ required lower embedding dimensions and vice versa. Best results were achieved for $`0.5\le \lambda \le 2.0`$ ($`𝐫_{max}^{m=128,\lambda =1.0}=3.2`$).
Even for high noise levels, the proposed denoising scheme preserved the finer structures of the original test signals in all simulations. Moreover, the reconstructed sequences were closer to the test signals than the corresponding averages, especially for time-varying signals. Power spectra showed that denoising took place in all frequency bands and was quite different from common low- or band-pass filtering. Simulation indicated that ”optimal” values of the thresholding coefficient were in the range $`0.5\le \lambda \le 2.0`$. The best embedding dimension was found to be $`m=128`$, which is the relevant case since the ongoing background EEG can be assumed to be in-band with ERPs. The filter performance was quite stable against the number of embedded sequences, at least for $`L=5,10,20`$.
## 4 Real Data
### 4.1 Data Acquisition
We analyzed event related potentials recorded intracerebrally in patients with pharmacoresistant focal epilepsy . Electroencephalographic signals were recorded from bilateral electrodes implanted along the longitudinal axis of the hippocampus. Each electrode carried 10 cylindrical contacts of a nickel-chromium alloy with a length of 2.5 mm and an intercontact distance of 4 mm. Signals were referenced to linked mastoids, amplified with a bandpass filter setting of 0.05 - 85.00 Hz (12 dB/oct.) and, after 12 bit A/D conversion, continuously written to a hard disk using a sampling interval of $`5760\mu `$s. Stimulus-related epochs spanning 1480 ms (256 sample points) including a 200 ms pre-stimulus baseline were extracted from the recorded data. The mean of the pre-stimulus baseline was used to correct possible amplitude shifts of the following ERP epoch.
In a visual odd-ball paradigm, 60 rare (letter $`<x>`$, targets) and 240 frequent stimuli (letter $`<o>`$, distractors) were randomly presented on a computer monitor once every $`1200\pm 200`$ ms (duration: 100 ms, probability of occurrence: 1 ($`<x>`$) : 5 ($`<o>`$)). Patients were asked to press a button upon each rare target stimulus. This pseudo-random presentation of rare stimuli in combination with the required response is known to elicit the mesial temporal lobe (MTL) P300 potential in recordings from within the hippocampal formation (cf. Figure 1).
### 4.2 Results
By simulation, we estimated a range in which ”optimal” parameters of the filter can be expected. However, the quality of denoising ERP sequences could not be estimated, because the clean signal was not known a priori. A rough estimation of filter performance was only possible by a comparison to ERP averages. Taking into account results of simulation as well as ERP averages, we estimated $`\lambda =0.6`$ and $`m=128`$ to be the best configuration.
Based on the empirical fact that specific ERP components exhibit peak amplitudes within a narrow time range related to stimulus onset, we defined a maximum allowed time jitter of $`\pm 20`$ sample points ($`116`$ ms) to which true neighbors are assumed to be restricted. This accelerated the calculation and avoided false nearest neighbors. Figure 6 depicts several ERPs recorded from different electrode contacts within the hippocampal formation. The number of embedded sequences was chosen as $`L=8`$. Comparison with the averages suggests that the filter extracted the most relevant MTL-P300 components. Even for low-amplitude signals reconstruction was possible, exhibiting higher amplitudes in single-trial data than in the averages. As the corresponding power spectra show, the 50 Hz power line artifact was reduced but not eliminated after filtering. Especially low-amplitude signals showed artifacts based on the 50 Hz power line.
## 5 Conclusion
In this study, we introduced a new wavelet-based method for nonlinear noise reduction of single-trial EPs/ERPs. We employed advantages of methods developed for the paradigm of deterministic chaotic systems, which allow denoising of short and time-variant EP/ERP sequences without assuming fully synchronized or stationary EEG.
Denoising via wavelet shrinkage does not require a priori assumptions about constrained dimensions, as is usually required for other techniques (e.g., singular value decomposition). Besides, using thresholds that depend on means and variances is more straightforward than making initial assumptions about constrained embedding dimensions. Moreover, the local calculation of thresholds in state space enables focusing on specific frequency scales, which may be advantageous in order to extract signal components located within both narrow frequency bands and narrow time windows.
Extension of our denoising scheme to other types of signals seems to be possible but demands further investigation, since ”optimal” filter parameters highly depend on signal characteristics. In addition, the noise reduction factor $`𝐫`$ does not capture all imaginable features of signals investigators may be interested in, so that other measures may be more advantageous in specific cases.
So far, we have not considered the effects of smoothing the edges of signal sequences. But since delay vectors as well as the corresponding wavelet coefficients hold information locally, we can assume such artifacts to be confined to the edges, which were not of interest here.
In conclusion, the proposed denoising scheme represents a powerful noise reduction technique for transient signals of short duration, like ERPs.
Acknowledgements
This work is supported by the Deutsche Forschungsgemeinschaft (grant no. EL 122/4-2).
We thank G. Widman, W. Burr, K. Sternickel, and C. Rieke for fruitful discussions.
Figure captions:
Fig. 1: Examples of averaged ERPs recorded along the longitudinal axis of the hippocampal formation in a patient with epilepsy. Randomized presentation of target and standard stimuli is known to elicit the mesial temporal lobe P300, a negative deflection peaking at about 500 ms after stimulus onset (cf. Sect. 4.1 for more details). Letters (a), (b), and (c) indicate recordings used for single trial analysis (cf. Figure 6).
Fig. 2: Results of denoising test signals. Parts a) and b): contamination with white noise and in-band noise. Part c): time varying signal components and white noise contamination (see text for more details). Five calculations for each parameter configuration have been executed to determine standard deviations.
Fig. 3: Nonlinear denoising applied to white noise contaminated test signals (5 sequences embedded, each 256 sample points, randomly shifted in time (std. dev.: 20 sample points, max. shift: 40 sample points), noise amplitude 75%, $`m=128`$, $`\tau =1`$, $`\lambda =1.5`$). Power spectra in arbitrary units. For state space plots we used a time delay of 25 sample points.
Fig. 4: Same as Figure 3 but for in-band noise and $`\lambda =0.75`$.
Fig. 5: Same as Figure 3 but for Gaussian functions each randomly shifted in time and $`\lambda =0.75`$.
Fig. 6: Examples of denoised MTL-P300 potentials (cf. Figure 1). Power spectra in arbitrary units. For state space plots we used a time delay of 25 sample points. |
# Further Effects of Varying G
## 1 Introduction
In a recent communication we saw that it is possible to account for the precession of the perihelion of Mercury, for example, solely in terms of a time-varying universal constant of gravitation G. It may be mentioned that Dirac had argued that a time-varying G could be reconciled with General Relativity and the perihelion precession by considering a suitable redefinition of units. We will now show that it is also possible to account for the bending of light on the one hand, and on the other for the flat galactic rotation curves without invoking dark matter, with the same time variation of G.
## 2 Bending of Light
It may also be mentioned that some varying G cosmologies have been reviewed by Narlikar and Barrow, while a fluctuational cosmology with the above G variation has been considered by the author and .
We start by observing that, as is well known, the bending of light can be deduced in Newtonian theory also, though the amount of bending is half of that predicted by General Relativity. In this case the equations for the orbit of a particle of mass $`m`$ are used in the limit $`m\to 0`$ with due justification. A quick way of obtaining the result is to observe that we have the well-known orbital equation
$$\frac{1}{r}=\frac{GM}{L^2}(1+ecos\mathrm{\Theta })$$
(1)
where $`M`$ is the mass of the central object, $`L`$ is the angular momentum per unit mass, which in our case is $`bc`$, $`b`$ being the impact parameter or minimum approach distance of light to the object, and $`e`$, the eccentricity of the trajectory, is given by
$$e^2=1+\frac{c^2L^2}{G^2M^2}$$
(2)
For the bending of light, if we substitute $`r=\pm \mathrm{\infty }`$ in (1) and then use (2), we get
$$\alpha =\frac{2GM}{bc^2}$$
(3)
$`\alpha `$ being the deflection or bending of the light. This is half the General Relativistic value.
We also note that the effect of the time variation is given by (cf. ref. )
$$G=G_0\left(1-\frac{t}{t_0}\right),r=r_0\left(1-\frac{t}{t_0}\right)$$
(4)
where $`t_0`$ is the present age of the universe and $`t`$ is the time elapsed from the present epoch.
Using (4), the well-known equation for the trajectory is given by (cf. refs. ,,)
$$u^{\prime \prime }+u=\frac{GM}{L^2}+u\frac{t}{t_0}+O\left(\frac{t}{t_0}\right)^2$$
(5)
where $`u=\frac{1}{r}`$ and primes denote differentiation with respect to $`\mathrm{\Theta }`$.
The first term on the right hand side represents the Newtonian contribution while the remaining terms are the contributions due to (4). The solution of (5) is given by
$$u=\frac{GM}{L^2}\left[1+e\mathrm{cos}\left\{\left(1-\frac{t}{2t_0}\right)\mathrm{\Theta }+\omega \right\}\right]$$
(6)
where $`\omega `$ is a constant of integration. Corresponding to $`-\mathrm{\infty }<r<\mathrm{\infty }`$ in the Newtonian case, we have in the present case $`-t_0<t<t_0`$, where $`t_0`$ is large and effectively infinite for practical purposes. Accordingly, the analogue of the reception of light by the observer, viz., $`r=+\mathrm{\infty }`$ in the Newtonian case, is obtained by taking $`t=t_0`$ in (6), which gives
$$u=\frac{GM}{L^2}+ecos\left(\frac{\mathrm{\Theta }}{2}+\omega \right)$$
(7)
Comparison of (7) with the Newtonian solution, obtained by neglecting terms in $`t/t_0`$ in equations (4), (5) and (6), shows that the Newtonian $`\mathrm{\Theta }`$ is replaced by $`\frac{\mathrm{\Theta }}{2}`$, whence the deflection, obtained by equating the left side of (6) or (7) to zero, is given by
$$\mathrm{cos}\left[\mathrm{\Theta }\left(1-\frac{t}{2t_0}\right)\right]=-\frac{1}{e}$$
(8)
where $`e`$ is given by (2). The value of the deflection from (8) is twice the Newtonian deflection given by (3). That is, the deflection $`\alpha `$ is now given not by (3) but by
$$\alpha =\frac{4GM}{bc^2},$$
which is the correct General Relativistic Formula.
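As a quick numerical check (added for illustration; the solar values below are standard inputs, not taken from the text), the two formulas give the familiar 0.875 and 1.75 arcsecond deflections for light grazing the Sun:

```python
# Deflection of light grazing the Sun: Newtonian value (3) vs. the doubled value.
G = 6.674e-8        # cm^3 g^-1 s^-2
M = 1.989e33        # g, solar mass
c = 2.998e10        # cm/s
b = 6.957e10        # cm, solar radius used as the impact parameter

alpha_newton = 2 * G * M / (b * c**2)   # Eq. (3)
alpha_gr     = 4 * G * M / (b * c**2)   # twice the Newtonian deflection
# ~0.875 and ~1.75 arcseconds respectively (206265 arcsec per radian)
print(206265 * alpha_newton, 206265 * alpha_gr)
```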
## 3 Galactic Rotation
The problem of galactic rotation curves is well known (cf. ref. ). We would expect, on the basis of straightforward dynamics, that the rotational velocities at the edges of galaxies would fall off according to
$$v^2\approx \frac{GM}{r}$$
(9)
whereas it is found that the velocities tend to a constant value,
$$v\approx 300km/sec$$
(10)
This has led to the hypothesis of as yet undetected dark matter, namely that the galaxies are more massive than their visible material content indicates.
We observe that from (4) it can be easily deduced that
$$a\equiv (\ddot{r}_o-\ddot{r})\approx \frac{1}{t_o}(t\ddot{r_o}+2\dot{r}_o)\approx 2\frac{r_o}{t_o^2}$$
(11)
as we are considering infinitesimal intervals $`t`$ and nearly circular orbits. Equation (11) shows (cf. ref. also) that there is an anomalous inward acceleration, as if there were an extra attractive force, or an additional central mass.
So,
$$\frac{GMm}{r^2}+\frac{2mr}{t_o^2}\approx \frac{mv^2}{r}$$
(12)
From (12) it follows that
$$v\approx \left(\frac{2r^2}{t_o^2}+\frac{GM}{r}\right)^{1/2}$$
(13)
From (13) it is easily seen that at distances within the edge of a typical galaxy, that is $`r<10^{23}cm`$, equation (9) holds, but as we reach the edge and beyond, that is for $`r\sim 10^{24}cm`$, we have $`v\sim 10^7cm`$ per second, in agreement with (10).
Thus the time variation of G given in equation (4) explains observation without taking recourse to dark matter. |
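The crossover described here is easy to verify numerically. The sketch below is illustrative only, with an assumed galactic mass of $`10^{11}`$ solar masses and $`t_o`$ of order $`10^{17}`$ s:

```python
import numpy as np

# Rotation velocity from Eq. (13) in cgs units (all inputs are assumptions).
G, M, t0 = 6.674e-8, 2e44, 1e17             # gravitational constant, mass, age
r = np.logspace(22, 25, 4)                  # radii in cm
v = np.sqrt(2 * r**2 / t0**2 + G * M / r)   # cm/s
# For r ~ 10^22-10^23 cm the GM/r term dominates (Keplerian fall-off);
# beyond r ~ 10^24 cm the 2r^2/t0^2 term takes over, giving v ~ 10^7 cm/s.
print(v)
```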
# CP Violating Phases and the Dark Matter Problem
MADPH-00-1152, January 2000. Presented at COSMO99: 3rd International Conference on Particle Physics and the Early Universe, Trieste, Italy.
## 1 CP Violation in Supersymmetry
This last year has seen a lot of work on CP violating phases in supersymmetry: how to constrain the phases, how to measure them, how to avoid the same constraints, and the extent to which CP violation can spoil predictions appropriate in the absence of CP violation in the SUSY parameters. Today I will discuss some of the cosmological consequences of CP violating phases in the MSSM and, in particular, their effect on the abundance and detection of SUSY dark matter.
The Supersymmetric Standard Model contains many new potential sources of CP violation beyond that of the standard model. In particular, the supersymmetric Higgs mixing mass $`\mu `$, the gaugino masses $`M_i`$, the scalar trilinear couplings $`A_i`$ and the SUSY breaking scalar Higgs mixing parameter $`B\mu `$ can all in principle be complex. However, not all the phases are physical, and depending on the model, some or most can be removed by field redefinitions. The remaining sources for CP violation are experimentally constrained, primarily due to their contributions to the Electric Dipole Moments (EDMs) of the electron and neutron, and in particular, the EDM of the mercury atom <sup>199</sup>Hg.
## 2 mSUGRA Constraints
In minimal Supergravity (mSUGRA), the large number of relations between the SUSY parameters reduces the set of CP violating phases to just two: $`\theta _\mu `$, associated with the Higgs mixing mass $`\mu `$, and $`\theta _A`$, a common trilinear parameter phase. These phases then appear in the low energy Lagrangian in the neutralino and chargino mass matrices (in the case of $`\theta _\mu `$) and in the left-right sfermion mixing terms (both $`\theta _\mu `$ and $`\theta _A`$). The new sources for CP violation then contribute to the EDMs of standard model fermions, and the tight experimental constraints on the EDMs of the electron, neutron and mercury atom place severe limits on the sizes of $`\theta _\mu `$ and $`\theta _A`$ .
The EDMs generated by $`\theta _\mu `$ and $`\theta _A`$ are sufficiently small if either 1) the phases are very small ($`<10^{-2}`$), or 2) the SUSY masses are very large ($`𝒪`$(a few TeV)), or 3) there are large cancellations between different contributions to the EDMs. In mSUGRA, option 2) is forbidden by the relic density constraints, as we'll show next. Condition 3), large cancellations, does naturally occur in mSUGRA models over significant regions of parameter space, including in the body of the cosmologically allowed region with $`m_{1/2}=𝒪(100-400\mathrm{GeV})`$. These cancellations relax the constraints on the phases, but the limit on $`\theta _\mu `$ remains small, $`\theta _\mu <\pi /10`$.
To see why option 2) is cosmologically forbidden, recall that the SUSY phases contribute to the electron EDM, for example, via one-loop processes in which selectrons and sneutrinos appear in the loop. These contributions diminish as the sfermion masses are increased, but this also shuts off neutralino annihilation in the early universe, which is dominated by sfermion exchange as in Fig. 1, and hence increases the neutralino relic abundance. In Fig. 2a we denote by light shading the region of the $`m_{1/2}`$-$`m_0`$ parameter space with a relic neutralino abundance in the preferred range $`0.1\le \mathrm{\Omega }_{\stackrel{~}{\chi }}h^2\le 0.3`$. The upper bound on $`\mathrm{\Omega }_{\stackrel{~}{\chi }}h^2`$, coming from a lower limit of 12 Gyr on the age of the universe, then limits the extent to which one can turn off the electron EDMs by raising the sfermion masses. The combination of cosmological with EDM constraints in the MSSM and mSUGRA is discussed in detail in .
To demonstrate the combined limits on $`\theta _\mu `$ and $`\theta _A`$ in mSUGRA, we plot in the $`\{\theta _\mu ,\theta _A\}`$ plane the minimum value of $`m_{1/2}`$ required to bring the EDMs of both the electron and the mercury atom <sup>199</sup>Hg below their respective experimental constraints (Fig. 2b). These experiments currently provide the tightest bounds on the SUSY phases<sup>1</sup><sup>1</sup>1The extraction of the neutron EDM from the SUSY parameter space is plagued by significant hadronic uncertainties , so that the inclusion of the neutron EDM constraint does not improve the limits when the uncertainties in the calculated neutron EDM are taken into account. Here we've fixed $`\mathrm{tan}\beta =2`$, $`A_0=300`$ GeV and $`m_0=100`$ GeV and scanned upwards in $`m_{1/2}`$ until the experimental constraints are satisfied. Due to cancellations, the EDMs are not monotonic in $`m_{1/2}`$; however, there is still a minimum value of $`m_{1/2}`$ which is allowed. In the absence of coannihilations, there is an upper bound on $`m_{1/2}`$ of about 450 GeV (though slightly smaller for this $`m_0`$); an analogous figure to Fig. 2a for $`\mathrm{tan}\beta =2`$ shows that coannihilations increase the bound to about 600 GeV. Comparing with Fig. 2a, we see that zone V is cosmologically forbidden, and that the effect of including coannihilations is to allow zone IV, which was formerly excluded.
The bowing to the right of the contours in Fig. 2 is a result of cancellations between different contributions to the EDMs , and we can see that the effect is to relax the upper bound on $`\theta _\mu `$ by a factor of a few. As we increase $`A_0`$, the extent of the bowing increases, and larger values of $`\theta _\mu `$ can be accessed. This loophole to larger $`\theta _\mu `$ is limited by the diminishing size of the regions in which there are sufficient cancellations to satisfy the EDM constraints. In general, the regions of cancellation for the electron EDM are different than those for the <sup>199</sup>Hg EDM, and the two regions do not always overlap. As $`\theta _\mu `$ is increased, the sizes of the regions of sufficient cancellations decrease; in Fig. 2, the width in $`m_{1/2}`$ of the combined allowed region near the $`\theta _\mu `$ upper bound is 40-80 GeV, which on a scale of 200-300 GeV is reasonably broad. Larger $`A_0`$ permits larger $`\theta _\mu `$, but the region of cancellations shrinks so that a careful adjustment of $`m_{1/2}`$ becomes required to access the largest $`\theta _\mu `$. At the end of the day, values of $`\theta _\mu `$ much greater than about $`\pi /10`$ cannot satisfy the EDM constraints without significant fine-tuning of the mass parameters. At larger values of $`\mathrm{tan}\beta `$, the upper bound decreases roughly as $`1/\mathrm{tan}\beta `$. See for more details on the status of EDM and cosmological constraints on CP violating phases in mSUGRA.
## 3 Large Phases
Much of the work on SUSY CP violation in the last year has been inspired by the hope of having large ($`𝒪`$(1)) phases, and there have been several suggestions as to how this might be achieved while satisfying the stringent constraints from the EDMs. First, the presence of cancellations between different contributions to the fermion EDMs (which has recently been dubbed the “cancellation mechanism” ) has been used to motivate interest in large phases, although as we have seen in the last section, in mSUGRA, cosmological considerations limit the extent to which cancellations can free the phases. If gaugino mass unification is broken, then there are two additional phases in the model, namely the relative phases between $`M_1`$, $`M_2`$ and $`M_3`$. More phases then lead to more opportunities for cancellations . There may be hints that string theory can provide (small) regions with large phases in models without gaugino mass unification .
Alternatively, note that the one-loop diagrams contributing to the fermion EDMs only contain first generation sfermions. Hence in models in which the first (or first two) generation sfermions are extremely heavy , the EDMs are suppressed even with $`𝒪`$(1) phases. Of course if the LSP neutralino is gaugino-like, the third generation sfermions must be quite light in order to satisfy the relic density constraints. The phases in these models are still not completely unconstrained however. The third generation sfermions can contribute to the EDMs at two loops , and further, phases in the stop sector (i.e. $`\theta _\mu ,\theta _{A_t}`$) enter radiatively into the Higgs potential and induce a phase misalignment between the Higgs vevs, and this can potentially introduce visible effects. Both the latter effects are particularly important at large $`\mathrm{tan}\beta `$.
## 4 Neutralino Relic Density
As originally shown in , CP violating phases can have a large effect on the relic density of the Lightest Supersymmetric Particle (LSP) in SUSY models. This occurs because the dominant annihilation channel for a gaugino-like neutralino (Fig. 1) exhibits “p-wave suppression”. That is, if one expands the thermally averaged annihilation cross-section at freeze-out in powers of $`(T/m_{\stackrel{~}{\chi }})`$,
$$\langle \sigma _{\stackrel{~}{\chi }\stackrel{~}{\chi }}v\rangle =a+b(T/m_{\stackrel{~}{\chi }})+𝒪(T/m_{\stackrel{~}{\chi }})^2$$
(1)
the zeroth order term $`a`$ is suppressed by $`m_f^2`$. This suppresses the annihilation rate in the early universe, and enhances the $`\chi `$ relic abundance, by more than an order of magnitude. However, in the presence of left-right sfermion mixing and CP violating phases, the zeroth order term has a piece
$$a\approx \frac{g_1^4}{32\pi }\frac{m_{\stackrel{~}{\chi }}^2}{(m_{\stackrel{~}{f}}^2+m_{\stackrel{~}{\chi }}^2-m_f^2)^2}Y_L^2Y_R^2\mathrm{sin}^22\theta _f\mathrm{sin}^2\gamma _f+𝒪(m_f/m_{\stackrel{~}{\chi }})$$
(2)
where $`\theta _f`$ is the sfermion mixing angle and $`\gamma _f=\mathrm{Arg}(A_f^{*}+\mu \mathrm{tan}\beta )`$. For significant $`\theta _f`$ and $`\gamma _f`$, this results in a dramatic increase in the annihilation rate and decrease in the $`\chi `$ relic density, and it weakens the cosmological upper bound on $`m_{\stackrel{~}{\chi }}`$ (in the absence of coannihilations) from $`250`$ GeV to $`650`$ GeV .
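To make the size of this effect concrete, the short sketch below evaluates the term (2) at one illustrative parameter point; all inputs are assumptions chosen for illustration, not values used in the figures:

```python
import numpy as np

# Illustrative evaluation of the s-wave piece (2) of the annihilation
# cross-section; every numerical input below is an assumed example value.
g1 = 0.36                                  # U(1)_Y gauge coupling
YL, YR = 0.5, 1.0                          # slepton hypercharges
m_chi, m_sf, m_f = 150.0, 200.0, 0.0       # masses in GeV, fermion mass neglected
theta_f, gamma_f = np.pi / 4, np.pi / 2    # maximal mixing and CP phase

a = (g1**4 / (32 * np.pi)) * m_chi**2 / (m_sf**2 + m_chi**2 - m_f**2)**2 \
    * YL**2 * YR**2 * np.sin(2 * theta_f)**2 * np.sin(gamma_f)**2
print(a, "GeV^-2")   # nonzero only when both theta_f and gamma_f are nonzero
```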
Phases also potentially affect masses and couplings of the neutralino and charginos, stop and Higgs particles, as well as providing mixing between the scalar and pseudoscalar Higgs, and these can effect the neutralino relic density, particularly for a Higgsino-type neutralino.
## 5 Neutralino Direct Detection
Phases can also affect the direct detection of relic neutralinos. In direct detection schemes, relic neutralinos elastically scatter off nuclei in a target material, depositing a detectable amount of energy. The low-energy effective four-Fermi Lagrangian for neutralino-quark interactions takes the form
$$ℒ=\overline{\chi }\gamma ^\mu \gamma ^5\chi \overline{q_i}\gamma _\mu (\alpha _{1i}+\alpha _{2i}\gamma ^5)q_i+\alpha _{3i}\overline{\chi }\chi \overline{q_i}q_i+\alpha _{4i}\overline{\chi }\gamma ^5\chi \overline{q_i}\gamma ^5q_i+\alpha _{5i}\overline{\chi }\chi \overline{q_i}\gamma ^5q_i+\alpha _{6i}\overline{\chi }\gamma ^5\chi \overline{q_i}q_i,$$
(3)
The coefficients $`\alpha _2`$ and $`\alpha _3`$ contribute to spin-dependent and spin-independent neutralino-nucleon scattering, respectively. The phases enter into the coefficients $`\alpha _i`$ .
Due to cancellations in the scattering rates, phases can produce a significant effect on the detection rate. In Fig. 3, we show the neutralino-nucleus elastic scattering cross-section as a function of $`\theta _\mu `$, displaying separately the spin-dependent and spin-independent contributions, for two target nuclei, <sup>19</sup>F and <sup>73</sup>Ge. Dramatic reductions in the spin-independent cross-section occur near $`\theta _\mu =0.6\pi `$ in both cases, and near $`\theta _\mu =0.25\pi `$ for the spin-dependent rate for <sup>19</sup>F. Of course these large values of $`\theta _\mu `$ are excluded in mSUGRA, as we have seen above, although in more general models one can tune the other parameters (e.g. the trilinear parameters $`A_i`$) in order to produce the cancellations necessary to satisfy the EDM constraints, as we have done in Fig. 3.
In Fig. 4 we perform a scan over the MSSM parameters $`M_2,\mu ,A`$ and $`m_0`$ for fixed phases $`\theta _\mu `$ and $`\theta _A`$ and $`\mathrm{tan}\beta =3`$, and compute the ratio of total scattering cross-sections with and without phases, for scattering off <sup>19</sup>F. We haven't chosen the phases to lie in the dips of Fig. 3, so Fig. 4 isn't intended to indicate the maximum possible effect of the phases. Rather, we hope to display a more typical result and to demonstrate the variation of the effect of the phases as a function of the parameters $`\mu `$ and $`M_2`$. We see that reductions in the rate of up to 50% and enhancements of up to 10% occur over much of the $`M_2`$-$`\mu `$ parameter space. The plotted points all satisfy the EDM constraints. There is one caveat to bear in mind: because we have done a scan over parameters, we have found the (small) regions of parameter space satisfying the EDM bounds for these large values of the phases. These regions are not generic, and in fact are uncomfortably tuned. Thus we simply take these plots as an existence proof that CP-violating phases can have a significant effect on the direct detection of neutralino dark matter.
## 6 Summary
New sources of CP violation are present in the MSSM which are not present in the standard model. In mSUGRA, cosmological bounds on $`m_{1/2},m_0`$ and $`m_{\stackrel{~}{\chi }}`$ combine with limits on the Electric Dipole Moments of the electron and <sup>199</sup>Hg to constrain $`\theta _\mu <\pi /10`$, while $`\theta _A`$ remains essentially unconstrained. In general models, phases can affect neutralino annihilation, so that the cosmological upper bound on $`m_{\stackrel{~}{\chi }}`$ increases from 250 to 650 GeV. Phases can also affect neutralino direct detection rates, typically reducing them by a factor of $`2`$, but by orders of magnitude in parts of the parameter space.
## Acknowledgments
The work of T.F. was supported in part by DOE grant DE–FG02–95ER–40896, and in part by the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation. |
COMMENT ON ”QUANTUM PHASE SHIFT CAUSED BY SPATIAL CONFINEMENT”
Murray Peshkin
Argonne National Laboratory
Physics Division-203
Argonne, IL 60439-4843
peshkin@anl.gov
Received 3 September 1999; revised 22 November 1999
The analysis of phase shifts in executed and proposed interferometry experiments on photons and neutrons neglected forces exerted at the boundaries of spatial constrictions. When those forces are included it is seen that the observed phenomena are not in fact geometric in nature. A new proposal in the reply to this comment avoids that pitfall.
Key Words: interferometry, phase shifts, force-free effects.
1. INTRODUCTION
Allman et al. have proposed a neutron interference experiment wherein the neutrons in one arm of an interferometer pass through a channel in an otherwise reflecting barrier and the resulting phase shift is to be measured. Following an earlier discussion , they calculate that the phase shift expected to be induced by the neutrons’ passage through the channel in the barrier will be given for an appropriate range of the parameters by
$$\mathrm{\Delta }\mathrm{\Phi }\approx \frac{\pi }{4}\frac{\lambda \ell }{a^2}=\frac{\pi ^2\hbar \ell }{2a^2\sqrt{2mE}},$$
(1)
where $`\ell `$ and $`a`$ are the length and width of the channel and $`\lambda `$, $`m`$, and $`E`$ are the wavelength, mass, and energy of the neutrons. They assert that no force is exerted on the neutrons and from that they conclude that the proposed experiment will demonstrate a new, purely geometrical, force-free effect of the Aharonov-Bohm type.
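For scale, Eq. (1) is easy to evaluate numerically; the following sketch uses assumed channel dimensions and neutron wavelength (illustrative values only, not those of ref. 1):

```python
import numpy as np

# Order-of-magnitude evaluation of Eq. (1) for cold neutrons;
# the channel geometry below is assumed for illustration only.
lam = 20e-10         # m, neutron wavelength
ell = 1e-2           # m, channel length
a = 20e-6            # m, channel width

dphi = (np.pi / 4) * lam * ell / a**2
print(dphi, "rad")   # ~0.04 rad for these numbers
```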
That conclusion is not correct. The phase shifts induced by force-free interactions are necessarily independent of the energy of the neutrons , contrary to Eq. (1). The neutrons in the proposed experiment are in fact acted on by forces having non-vanishing components in the direction of the beam.
2. FORCES ON THE NEUTRON
In reality, any forces on the neutrons are exerted in the neutrons’ exchange of momentum with atoms in the barrier. Allman et al. substitute a boundary condition for the interaction of the neutrons with the atoms in the barrier. That approximation gives an adequate wave function and leads to the correct phase shift. No potential gradient appears in the Schroedinger equation, but Allman et al., in saying that no force is exerted, neglect the force exerted on the neutrons at the boundary.
To focus on the principle involved, consider an idealized situation in which a single neutron, initially in a state represented by some wave packet $`\psi (t)`$, is aimed at a long channel so that there is a time interval when the neutron’s wave packet is for practical purposes entirely within the channel. In the best case, the wave packet enters the channel with only minimal reflection, proceeds through the channel at reduced speed as described by Ref. 1, then exits the channel, again with minimal reflection, and continues on with its initial speed. Although the reflection is minimal, the wave function unavoidably spreads into some diffraction region around the ends of the channel. There the boundary exerts a force $`F_b`$ in the beam direction whose expectation is given by
$$\langle F_b\rangle _t=\frac{d}{dt}\langle p_b\rangle =-i\hbar \int d^3x\left(\frac{\partial \psi ^{*}}{\partial t}\partial _b\psi -\frac{\partial \psi }{\partial t}\partial _b\psi ^{*}\right)$$
(2)
$$=\mp \frac{\hbar ^2}{2m}\int d^2x\left|\partial _b\psi \right|^2,$$
(3)
where the two-dimensional integral is carried over the boundary segments normal to the beam direction, $`i.e.`$ over the surfaces of the barrier outside of the channel. In Eq. (2), the minus sign applies to the first face of the barrier encountered by the neutron and the plus sign to the second face, corresponding to the neutron’s losing momentum when it enters the channel and regaining that momentum when it leaves. Ehrenfest’s theorem guarantees that the force given in Eq. (2) accounts correctly for the reduced momentum $`p^{\prime }`$ in the channel, correctly given by Allman et al. as
$$p^{\prime }=p-\mathrm{\Delta }p\approx p-\frac{\pi ^2\hbar ^2}{2a^2p},$$
(4)
where $`p`$ is the free-space momentum before and after the passage through the channel. In the case of a neutron that misses the channel and is reflected from the barrier, the time integral of the force given in Eq. (2) agrees with a net momentum transfer equal to $`2p`$, as it must.
That the momentum shift is brought about by the force exerted on the neutron at the boundary is especially clear when the wave packet exits the channel, because there the negligible momentum carried in the small reflected wave is directed oppositely to the momentum transferred to the neutron by the force at the boundary, so the reflected wave cannot carry off the transferred momentum.
The same physics appears in a more physical model where the barrier is represented by a finite repulsive potential instead of by a boundary condition. If the potential is taken to be rounded at the edge of the barrier, a finite force appears in the Schroedinger equation in the form of the gradient of the potential. If the potential ends with a step, the force becomes a delta function. Although the details of the force vary with those of the model, the momentum the force imparts to the neutron is in all cases the same as that imparted by the boundary in the boundary-condition model.
3. CONCLUSIONS
This phenomenon stands in contrast to genuinely force-free interference phenomena , in which the phase shift is independent of the energy. It is also very different from the Aharonov-Bohm magnetic scattering effect, in which electrons are scattered from a solenoid containing a magnetic flux. There too the interaction of the beam particles with the atoms of the solenoid can adequately be represented by a boundary condition and the force at the boundary accounts correctly for the momentum change when the electrons are scattered . The interest in that version of the Aharonov-Bohm effect arises from the fact that the scattering, and with it the momentum transfer, depends upon the magnitude of the magnetic flux in the solenoid even though there is no force and no momentum transfer between the electrons and the local, present magnetic field. As expected, the magnetic-flux-dependent part of the phase shift is independent of the electron’s energy. No analogous consideration arises when neutrons pass through a channel in a barrier.
These considerations apply equally to the optical interference experiments in Ref. as to the proposed neutron experiment, but they do not apply to the temporally modulated constriction in a new experiment proposed by Allman et al. , which involves no force in the direction of motion of the wave packet. That proposed experiment falls into the force-free class that includes the Aharonov-Casher effect, the Scalar Aharonov-Bohm effect, and the force-free nuclear phase shifter but not the electric and magnetic Aharonov-Bohm effects .
Acknowledgements. This work is supported by the U.S. Department of Energy, Nuclear Physics Division, under contract W-31-109-ENG-38. I thank Brendan Allman for informing me about the new proposed experiment.
REFERENCES
1. B. E. Allman, A. Cimmino, S. L. Griffin, and A. G. Klein, Found. Phys. 29, 325 (1999).
2. J.-M. Levy-Leblond, Phys. Lett. A 125, 441 (1987).
3. A. Zeilinger, in Fundamental Aspects of Quantum Theory, V. Gorini and A. Frigerio, eds., (NATO ASI Series B, Vol. 144), (Plenum, NY 1986), p. 311.
4. M. Peshkin, Found. Phys. 29, 481 (1999) and quant-ph/9806055
5. P. Pfeifer, Phys. Rev. Lett. 72, 305 (1994).
6. M. Peshkin and A. Tonomura, The Aharonov-Bohm Effect, (Lecture Notes in Physics 340), (Springer-Verlag, NY 1989), p. 31.
7. B. E. Allman, A. Cimmino, and A. G. Klein, Found. Phys. Lett. (to be published). Also see D. M. Greenberger, Physica B 151, 374 (1988).
8. M. Peshkin and H. J. Lipkin, Phys. Rev. Lett. 74, 2847 (1995). |
# On Peakon Solutions of the Shallow Water Equation
Keywords: solitons, peakons, billiards, shallow water equation, Hamiltonian systems
## 1 Introduction
Camassa and Holm described classes of $`n`$-soliton peaked weak solutions, or “peakons,” for an integrable (SW) equation
$$U_t+3UU_x=U_{xxt}+2U_xU_{xx}+UU_{xxx}-2\kappa U_x,$$
(1.1)
arising in the context of shallow water theory. Of particular interest is their description of peakon dynamics in terms of a system of completely integrable Hamiltonian equations for the locations of the “peaks” of the solution, the points at which its spatial derivative changes sign. (Peakons have discontinuities in the $`x`$-derivative but both one-sided derivatives exist and differ only by a sign. This makes peakons different from cuspons considered earlier in the literature.) In other words, each peakon solution can be associated with a mechanical system of moving particles. Calogero and Calogero and Francoise further extended the class of mechanical systems of this type.
For the KdV equation, the spectral parameter $`\lambda `$ appears linearly in the potential of the corresponding Schrödinger equation: $`V=u-\lambda `$, in the context of the inverse scattering transform (IST) method (see Ablowitz and Segur ). In contrast, equation (1.1), as well as $`N`$-component systems in general, was shown to be connected to energy-dependent Schrödinger operators with potentials containing poles in the spectral parameter.
Alber et al. showed that the presence of a pole in the potential is essential in a special limiting procedure that allows for the formation of “billiard solutions”. By using algebraic-geometric methods, one finds that these billiard solutions are related to finite dimensional integrable dynamical systems with reflections. This provides a short-cut to the study of quasi-periodic and solitonic billiard solutions of nonlinear PDE’s. This method can be used for a number of equations including the shallow water equation (1.1), the Dym type equation, as well as $`N`$-component systems with poles and the equations in their hierarchies . More information on algebraic-geometric methods for integrable systems can be found in and on billiards in .
In this paper we consider singular limits of quasi-periodic solutions when the spectral curve becomes singular and its arithmetic genus drops to zero. The solutions are then expressed in terms of purely exponential $`\tau `$-functions and they describe the finite-time interaction of 2 solitary peakons of the shallow water equation (1.1). Namely, we invert the equations obtained by using a new parameterization. First, a profile of the 2-peakon solution is described by considering different parameterizations of the associated Jacobi inversion problem on three subintervals of the $`X`$-axis and by gluing these pieces of the profile together. The dynamics of such solutions is then described by combining these profiles with the dynamics of the peaks of the solution in the form developed earlier in Alber et al. . This completes, in the context of the algebraic-geometric approach, a derivation of the $`n`$-peakon ansatz which was used in the initial papers for obtaining Hamiltonian systems for the peaks. More recently, $`n`$-peakon waves were studied in and .
The problem of describing complex traveling wave and quasi-periodic solutions of the equation (1.1) can be reduced to solving finite-dimensional Hamiltonian systems on symmetric products of hyperelliptic curves. Namely, according to Alber et al , such solutions can be represented in the case of two-phase quasi-periodic solutions in the following form
$$U(x,t)=\mu _1+\mu _2-M,$$
(1.2)
where $`M`$ is a constant and the evolution of the variables $`\mu _1`$ and $`\mu _2`$ is given by the equations
$$\underset{i=1}{\overset{2}{\sum }}\frac{\mu _i^k\mathrm{d}\mu _i}{\pm \sqrt{R(\mu _i)}}=\{\begin{array}{cc}\mathrm{d}t\hfill & k=1,\hfill \\ \mathrm{d}x\hfill & k=2.\hfill \end{array}$$
(1.3)
Here $`R(\mu )`$ is a polynomial of degree 6 of the form $`R(\mu )=\mu \prod _{i=1}^5(\mu -m_i)`$. The constant from (1.2) takes the form $`M=\frac{1}{2}\sum m_i`$. Notice that (1.3) describes quasi-periodic motion on tori of genus 2. In the limit $`m_1\to 0`$, the solution develops peaks. (For details see Alber and Fedorov .)
### Interaction of Two Peakons.
In the limit when $`m_2,m_3\to a_1`$ and $`m_4,m_5\to a_2`$, we have 2 solitary peakons interacting with each other. For this 2-peakon case, we derive the general form of a profile for a fixed $`t`$ ($`t=t_0`$, $`dt=0`$) and then see how this profile changes with time, knowing how the peaks evolve. Notice that the limit depends on the choice of the branches of the square roots present in (1.3), i.e., on choosing a particular sign $`l_j`$ in front of each root. Applying the above limits to (1.3), the problem of finding the profile reduces to
$$l_1\frac{d\mu _1}{\mu _1(\mu _1-a_1)}+l_2\frac{d\mu _2}{\mu _2(\mu _2-a_1)}=a_2\frac{dX}{\mu _1\mu _2}=a_2dY$$
(1.4)
$$l_1\frac{d\mu _1}{\mu _1(\mu _1-a_2)}+l_2\frac{d\mu _2}{\mu _2(\mu _2-a_2)}=a_1\frac{dX}{\mu _1\mu _2}=a_1dY$$
(1.5)
where $`Y`$ is a new variable. This is a new parameterization of the Jacobi inversion problem (1.3) which makes the existence of three different branches of the solution obvious. In general, we consider three different cases: $`(l_1=1,l_2=1)`$, $`(l_1=-1,l_2=1)`$ and $`(l_1=1,l_2=-1)`$.
The new parameterization $`dX=\mu _1\mu _2dY`$ plays an important role in our approach. In what follows each $`\mu _i(Y)`$ will be defined on the whole real $`Y`$ line. However, the transformation from $`Y`$ back to $`X`$ is not surjective so that $`\mu _i(X)`$ is only defined on a segment of the real axis. This is why different branches are needed to construct a solution on the entire real $`X`$ line.
In the case ($`l_1=l_2=1`$), if we assume that there is always one $`\mu `$ variable between $`a_1`$ and $`a_2`$ and one between 0 and $`a_1`$ and that initial conditions are chosen so that $`0<\mu _1^0<a_1<\mu _2^0<a_2`$, then we find that $`\mu _1+\mu _2=a_1+a_2-(m_1+n_1)a_1a_2e^X.`$ This solution is valid on the domain
$$X<\mathrm{log}(a_1n_1+a_2m_1)=X_1^{-},$$
where $`n_1,m_1`$ are constants depending on $`\mu _1^0,\mu _2^0`$. At the point $`X_1^{-}`$,
$$\mu _1(X_1^{-})=0,\mu _2(X_1^{-})=\frac{a_2^2m_1+a_1^2n_1}{a_2m_1+a_1n_1}.$$
Now we consider ($`l_1=-1,l_2=1`$). Here we find the following expression for the symmetric polynomial:
$$\mu _1+\mu _2=a_1+a_2-\frac{(a_2-a_1)e^X+m_2n_2(a_2-a_1)e^{-X}}{m_2+n_2},$$
which is only defined on the interval
$$\mathrm{log}\frac{n_2a_1+m_2a_2}{m_2n_2(a_2-a_1)}>X>\mathrm{log}\frac{a_2-a_1}{m_2a_1+n_2a_2}=X_1^+.$$
$`m_2,n_2`$ are constants which must be chosen so that both $`\mu _1`$ and $`\mu _2`$ are continuous at $`X_1^{-}`$ and so that the ends of the branches match up, that is, $`X_1^{-}=X_1^+`$. These conditions are satisfied if
$$m_2=\frac{a_2}{a_1}(a_2-a_1)m_1,$$
(1.6)
$$n_2=\frac{a_1}{a_2}(a_2-a_1)n_1.$$
(1.7)
Continuing in this fashion, we arrive at the final three-branched profile for a fixed $`t`$:
$$U=(a_1M+a_2N)e^X\quad \mathrm{if}\quad X<\mathrm{log}(N+M)$$
(1.8)
$$U=\frac{a_1a_2e^X+MNe^{-X}(a_2-a_1)^2}{a_2M+a_1N}$$
(1.9)
$$\mathrm{if}\quad \mathrm{log}(N+M)<X<\mathrm{log}\frac{a_2^2M+a_1^2N}{(a_2-a_1)^2MN}$$
(1.10)
$$U=e^{-X}\frac{a_2^3M+a_1^3N}{MN(a_2-a_1)^2}\quad \mathrm{if}\quad X>\mathrm{log}\frac{a_2^2M+a_1^2N}{(a_2-a_1)^2MN},$$
(1.11)
where we have made the substitution $`M=a_2m_1`$ and $`N=a_1n_1`$ and used the trace formula (1.2).
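The three-branch profile is straightforward to evaluate numerically. The sketch below is our illustration, not code from the paper; it assumes positive parameters $`M`$, $`N`$ and the sign conventions reconstructed above, which are not unambiguous in the source:

```python
import numpy as np

def profile(X, a1, a2, M, N):
    """Evaluate the three-branch profile (1.8)-(1.11); the branch
    boundaries X1 < X2 are the locations of the two peaks."""
    X1 = np.log(N + M)
    X2 = np.log((a2**2 * M + a1**2 * N) / ((a2 - a1)**2 * M * N))
    U = np.empty_like(X)
    left, right = X < X1, X > X2
    mid = ~(left | right)
    U[left] = (a1 * M + a2 * N) * np.exp(X[left])
    U[mid] = (a1 * a2 * np.exp(X[mid])
              + M * N * np.exp(-X[mid]) * (a2 - a1)**2) / (a2 * M + a1 * N)
    U[right] = np.exp(-X[right]) * (a2**3 * M + a1**3 * N) / (M * N * (a2 - a1)**2)
    return U

X = np.linspace(-10.0, 10.0, 2001)
U = profile(X, a1=1.0, a2=2.0, M=1.0, N=1.0)   # two peaks, at X1 and X2
```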
Please place the first figure near here.
### Time evolution.
So far only a profile has been derived. Now we will include the time evolution of the peaks to find the general solution for the two-peakon case. To do this we use the functions $`q_i(t)`$ for $`i=1,2`$ introduced in Alber et al. :
$$\mu _i(x=q_i(t),t)=0,$$
for all $`t`$ and $`i=1,2`$, which describe the evolution of the peaks. All peaks belong to a zero level set: $`\mu _i=0`$. Here the $`\mu `$-coordinates, generalized elliptic coordinates, are used to describe the positions of the peaks. This yields a connection between $`x`$ and $`t`$ along the trajectories of the peaks, resulting in a system of equations for the $`q_i(t)`$. The solutions of this system are given by
$`q_1(t)`$ $`=`$ $`q_1^0a_2t\mathrm{log}|1C_1e^{(a_1a_2)t}|+\mathrm{log}(1C_1)`$ (1.12)
$`q_2(t)`$ $`=`$ $`q_2^0a_2t+\mathrm{log}|1C_2e^{(a_2a_1)t}|\mathrm{log}(1C_2),`$ (1.13)
where $`C_i=(q_i^{}(0)a_1)/(q_i^{}(0)a_2)`$.
The solution defined in (1.8) has the peaks given in terms of the parameters $`N`$ and $`M`$. So to obtain the solution in terms of both $`x`$ and $`t`$, these parameters must be considered as functions of time. The complete solution now has the form
$`U`$ $`=`$ $`(a_1M(t)+a_2N(t))e^X\mathrm{if}X<\mathrm{log}(N(t)+M(t))`$ (1.14)
$`U`$ $`=`$ $`{\displaystyle \frac{a_1a_2e^X+M(t)N(t)e^X(a_2a_1)^2}{a_2M(t)+a_1N(t)}}`$
$`\mathrm{if}`$ $`\mathrm{log}(N(t)+M(t))<X<\mathrm{log}{\displaystyle \frac{a_2^2M(t)+a_1^2N(t)}{(a_2a_1)^2M(t)N(t)}}`$ (1.15)
$`U`$ $`=`$ $`e^X{\displaystyle \frac{a_2^3M(t)+a_1^3N(t)}{M(t)N(t)(a_2a_1)^2}}\mathrm{if}X>\mathrm{log}{\displaystyle \frac{a_2^2M(t)+a_1^2N(t)}{(a_2a_1)^2M(t)N(t)}}.`$ (1.16)
where the functions $`M(t),N(t)`$ are determined by the relations
$`N(t)+M(t)`$ $`=`$ $`e^{q_1(t)}={\displaystyle \frac{e^{q_1^0}|e^{a_2t}C_1e^{a_1t}|}{1C_1}}`$ (1.17)
$`{\displaystyle \frac{a_2^2M(t)+a_1^2N(t)}{M(t)N(t)}}`$ $`=`$ $`(a_2a_1)^2e^{q_2(t)}={\displaystyle \frac{(a_2a_1)^2e^{q_2^0}|e^{a_2t}C_2e^{a_1t}|}{(1C_2)}},`$ (1.18)
where $`q_1(t),q_2(t)`$ are taken from (1.12)-(1.13). This system can be solved to find that
$`M(t)`$ $`=`$ $`{\displaystyle \frac{a_1^2a_2^2+A(t)B(t)\pm \sqrt{(a_1^2a_2^2)^22A(t)B(t)(a_1^2+a_2^2)+A(t)^2B(t)^2}}{2B(t)}}`$ (1.19)
$`N(t)`$ $`=`$ $`A(t)M(t),`$ (1.20)
where $`A(t)=e^{q_1(t)}`$ and $`B(t)=(a_2a_1)^2e^{q_2(t)}`$. These functions contain 4 parameters, but in fact these can be reduced to two parameters by using the following relations
$`q_1(0)`$ $`=`$ $`\mathrm{log}(M(0)+N(0))q_1^{}(0)={\displaystyle \frac{a_2M(0)+a_1N(0)}{M(0)+N(0)}}`$ (1.21)
$`q_2(0)`$ $`=`$ $`\mathrm{log}{\displaystyle \frac{a_2^2M(0)+a_1^2N(0)}{(a_2a_1)^2M(0)N(0)}}q_2^{}(0)={\displaystyle \frac{a_1a_2(a_2M(0)+a_1N(0))}{a_2^2M(0)+a_1^2N(0)}}.`$ (1.22)
Some care must be used in choosing the sign in (1.19). It is clear that for large negative $`t`$, $`\mu _1(q_1(t),t)`$ refers to the path of one peakon while for large positive $`t`$ it refers to the other. If this were not the case, simple asymptotic analysis of (1.12) would show that the peakons change speed which is not the case. Therefore $`q_1(t)`$ represents the path of one of the peakons until some time $`t^{}`$ and the other one after this time. The opposite is true for $`q_2(t)`$. At the time $`t^{}`$ we say that a change of identity has taken place. $`t^{}`$ can be found explicitly by using the fact that at this time, the two peaks must have the same height. But the peaks have the same height exactly when
$$a_2M(t^{})=a_1N(t^{}).$$
(1.23)
Without loss of generality we can rescale time such that $`t^{}=0`$. In this case (1.23), due to the original definitions of $`m_1,n_1`$ given in terms of $`\mu _1^0`$ $`\mu _2^0`$, corresponds to a restriction on the choice of $`\mu _1^0`$ and $`\mu _2^0`$, namely
$$a_2^2\frac{\mu _1^0a_1}{\mu _1^0a_2}=a_1^2\frac{\mu _2^0a_2}{\mu _2^0a_1}.$$
(1.24)
This condition is satisfied for example when $`\mu _1^0={\displaystyle \frac{a_1a_2}{a_1+a_2}}`$ and $`\mu _2^0={\displaystyle \frac{a_1+a_2}{2}}`$. Also notice that under this rescaling, the phase shift is simply $`q_1(0)q_2(0)`$.
Please place the second figure near here
So we now have a procedure to make the change of identity occur at $`t=0`$, i.e. $`\mu _1`$ goes from representing the first peakon to the second one at $`t=0`$. This change is represented by the change in the sign of the plus/minus in (1.19). That is, the sign is chosen as positive for $`t<0`$ and negative for $`t>0`$. However, $`M`$ remains continuous despite this sign change since the change of identity occurs precisely when the term under the square root is zero. Therefore (1.14)-(1.16) and (1.19) together describe the solution $`U(X,t)`$ of the SW equation as a function of $`x`$ and $`t`$ depending on two parameters $`M(0)`$, $`N(0)`$.
By using the approach of this paper weak billiard solutions can be obtained for the whole class of $`n`$-peakon solutions of $`N`$-component systems.
## Bibliography.
R. Camassa and D. Holm, An integrable shallow water equation with peaked solitons, Phys. Rev. Lett. 71 1661-1664 (1993).
F. Calogero, An integrable Hamiltonian system, Phys. Lett. A. 201 306-310 (1995).
F. Calogero and J. Francoise, Solvable quantum version of an integrable Hamiltonian system, J. Math. Phys. 37 (6) 2863-2871 (1996).
M. Ablowitz and H. Segur, Solitons and the Inverse Scattering Transform, SIAM, Philadelphia (1981).
M. Alber, R. Camassa, D. Holm and J. Marsden, The geometry of peaked solitons and billiard solutions of a class of integrable PDE’s, Lett. Math. Phys. 32 137-151 (1994).
M. Alber, R. Camassa, D. Holm, and J. Marsden, On the link between umbilic geodesics and soliton solutions of nonlinear PDE’s, Proc. Roy. Soc 450 677-692 (1995).
M. Alber and Y. Fedorov, Wave Solutions of Evolution Equations and Hamiltonian Flows on Nonlinear Subvarieties of Generalized Jacobians, (subm.) (1999).
E. Belokolos, A. Bobenko, V. Enol’sii, A. Its, and V. Matveev, Algebro-Geometric Approach to Nonlinear Integrable Equations., Springer-Verlag, Berlin;New York (1994).
M. Alber, R. Camassa, Y. Fedorov, D. Holm, and J. Marsden, The geometry of new classes of weak billiard solutions of nonlinear PDE’s. (subm.) (1999).
M. Alber, R. Camassa, Y. Fedorov, D. Holm and J. Marsden, On Billiard Solutions of Nonlinear PDE’s, Phys. Lett. A (to appear) (1999).
Y. Fedorov, Classical integrable systems and billiards related to generalized Jacobians, Acta Appl. Math., 55 (3) 151–201 (1999).
R. Camassa, D. Holm, and J. Hyman, A new integrable shallow water equation, Adv. Appl. Mech., 31 1–33 (1994).
R. Beals, D. Sattinger, J. Szmigielski, Multipeakons and a theorem of Stieltjes, Inverse Problems, 15 L1–L4 (1999).
Y. Li and P. Olver, Convergence of solitary-wave solutions in a perturbed bi-Hamiltonian dynamical system, Discrete and continuous dynamical systems, 4, 159–191 (1998). |
no-problem/0001/astro-ph0001392.html | ar5iv | text | # HST Observations of M Subdwarfs
## 1 Introduction
The old Population II stellar halo is a fossil relic of the formation of the Galaxy. Most observational studies to date have concentrated on F and G subdwarfs near the turnoff of the halo main sequence. The lowest-mass, metal-poor stars, M subdwarfs, are intrinsically less luminous, but have a higher local number density. Observations of these stars allow us to probe both the stellar mass function and the binary formation frequency favoured by star formation in the early Universe.
There is currently little information on the binary frequency for the lowest mass stars in the halo. The formation mechanism(s) of binaries remains uncertain, with no clear predictions as to whether conditions in the young Galaxy would favor the production of more or fewer very-low-mass binaries relative to the present-day Galactic disk. If the binary fraction is high, then the measurements of the field halo luminosity function (Gould, Flynn & Bahcall, 1998; Gizis & Reid, 1999) will be in error. Current transformations of Population II stellar luminosity functions into mass functions depend entirely upon theory, since there are no M subdwarfs with empirical mass measurements.
In order to search for M subdwarf binary systems, particularly systems suitable for mass determinations, we have obtained Hubble Space Telescope Planetary Camera (HST PC) images of spectroscopically classified metal-poor M subdwarfs (Gizis, 1997) with well-determined trigonometric parallaxes. The targets were scheduled in Snapshot mode, which allowed only a fraction of the allocated targets to be observed. As it turned out, PC images were obtained of nine targets known to be within 50 parsecs of the Sun. An additional observation was obtained of VB12 (LHS 541), a more distant star of special interest since it is the lowest mass member of a metal-poor triple (Van Briesbroeck, 1961; Gizis & Reid, 1997b).
We report the results of our search in this paper. In Section 2, we discuss WFPC2 photometry for these stars. In Section 3, we discuss the results of our search for binaries. In Appendix A, we report on two Hyades systems observed by HST since the publication of Reid & Gizis (1997b).
## 2 Photometry
Our targets were observed using both the F555W and F850LP filters. Two F850LP images were obtained to allow cosmic ray removal. The standard HST pipeline processing was used. Photometry for the targets was measured on the HST flight system (Holtzman et al., 1995) including a correction for CTE effects:
$$mag=2.5\mathrm{log}\left(\mathrm{DN}/t_{exp}\right)+\mathrm{ZP}+2.5\mathrm{log}(\mathrm{GF})2.5\mathrm{log}\left(1+\left(0.04\times Y/800\right)\right)$$
where in our case $`\mathrm{GF}=1.987`$ for the PC chip, $`\mathrm{ZP}=21.725`$ for F555W, $`\mathrm{ZP}=19.140`$ for F850LP, and the counts are measured in a 0.5″aperture. The uncertainties due to Poisson statistics are less than 0.01 magnitude, but the uncertainties discussed by Holtzman et al. (1995) imply there may be effects that approach $`0.020.03`$ magnitude. The photometry is listed in Table A, along with photometry and parallaxes compiled by Gizis (1997). Note that the original classification of LHS 407 as sdM5 was based upon a noisy Palomar 60-in spectrum. We have obtained a better spectrum using the Palomar 200-in. and found that the spectrscopic indices are TiO5$`=0.64`$, CaH1$`=0.60`$, CaH2$`=0.32`$, and CaH3=$`0.53`$, leading to a classification of sdM5.0 but placing it near the (arbitrary) esdM/sdM border.
In Figures 1 and 2, we compare the ground V and I<sub>C</sub> photometry compiled by Gizis (1997) to our new HST photometry. Leggett (1992) estimated that a similar compilation of VI photometry had uncertainties of 0.05 magnitude. Also shown in each Figure are the polynomial fits based on modelling determined by Holtzman et al. (1995). In the case of the F555W filter, Baraffe et al. (1997) have calculated both V and F555W magnitudes based on stellar models, and we also show those calculations in Figure 1. The figures suggest that the Holtzman et al. (1995) relation is reliable in transforming F850LP to I<sub>C</sub>, while both Baraffe et al. (1997) and Holtzman et al. (1995) are consistent with the F555W-V data. The observed scatter is consistent with the probable uncertainties in the ground-based VI photometry. This result suggests that transformations used to study WFPC2 globular cluster color-magnitude diagrams are reasonable.
## 3 Binarity
Our observing technique and analysis is essentially identical to our previous surveys of Hyades (Gizis & Reid, 1995; Reid & Gizis, 1997b) and field (Reid & Gizis, 1997a) M dwarfs. Companions with $`\delta m_{850}`$ of 0, 1, 3 and 5 magnitudes respectively can be resolved at 0.09, 0.14, 0.23, and 0.32 arcseconds respectively. However, we do not detect any companions to our target stars.
We estimate that our observations are sensitive to stars at the bottom of the halo main sequence for separations of $`>10`$ A.U.. The Baraffe et al. (1997) models predict that end of the metal-poor main sequence lies at $`M_I14`$. The last 1.7 magnitudes correspond to masses between 0.083 and 0.090 $`M_{}`$, and are predicted to have very red colors but lie in a regime where the models are very uncertain. These subdwarfs are presumably rare, and they have not yet been detected. Using the Holtzman et al. (1995) transformations, we predict that these dwarfs have $`M_{850}12.5`$, as illustrated in Figure 3, but at present there is no empirical verification of the validity of the color transformations for subdwarfs of such extreme colors. For Figure 3, we have not allowed the I<sub>C</sub> to F850LP correction to exceed 1.5 magnitudes. We compare these values to coolest sdM (LHS 377, observed $`M_{850}=11.43`$) and esdM (LHS 1742, tranformed $`M_{850}=11.1`$) with parallaxes. Gizis & Reid (1997a) and Schweitzer et al. (1999) have found extreme M subdwarfs that are slightly cooler than LHS 1742a, but no parallaxes are yet available. A multi-epoch HST study of NGC 6397 found no detected cluster members by $`M_I12.212.7`$ (King et al., 1998), which would correspond to $`M_{850}11.512.0`$. While the hydrogen burning limit may be at or fainter than this point, it seems clear that the probability of detecting a subdwarf in this very small mass range is very low. Most of our targets are classified sdM, and therefore are more metal-rich than NGC 6397, which may result in a slightly redder, fainter hydrogen burning limit. We are sensitive to $`M_{850}=12`$ companions as close as 4-10 A.U. for all of the primary targets except LHS 174 (for which the limit is 15 A.U.), and at distances of $`12`$ A.U. we are typically sensitive to companions as faint as $`M_{850}=1416`$.
Since we detect no companions but have only nine nearby targets, the significance of our result is limited. In both the nearby Hyades cluster (Gizis & Reid, 1995; Reid & Gizis, 1997b) and field (Reid & Gizis, 1997a), we found that 20% of our HST targets (which have similar mass but near-solar metallicity) were resolved into doubles, corresponding to an overall companion rate of 35%. Thus we expect to observe 1.8 M subdwarf companions rather than the none actually seen. Most of our targets are actually closer than the typical objects in our previous programs, and we reach the hydrogen burning limit, so if anything the fraction of observable companions should be slightly higher.<sup>1</sup><sup>1</sup>1We do not expect that the sample is biased against binaries, which would be overluminous in HR diagrams and therefore closer to the disk sequence, because the sample of high velocity parallax stars observed by Gizis (1997) include many spectroscopic non-subdwarfs near the disk main sequence. It is unclear whether overluminous binaries would be preferentially included or excluded by parallax and spectroscopic studies based on proper motion surveys. Our failure to detect any companions suggests that the binary fraction of M subdwarfs is less than or equal to that of Galactic disk M dwarfs. We note also the contrast between our result and the success found by Koerner et al. (1999), who found that three of ten L brown dwarf targets were resolved into near-equal-luminosity systems with separations of 5-10 A.U. – we would have detected equal-luminosity systems in that range of separations.
The possibility that the high-velocity, metal-poor stars have a low binary fraction dates back to Oort (1926), who found that high velocity stars were deficient in visual binaries. More recently, Abt & Wilmarth (1987) argued that both visual and spectroscopic binary fraction of Population II stars is only 40% that of Population I stars. If so, this is a signature of the formation of the Galactic halo — Stryker et al. (1985) have shown that disruption of halo binaries is insignificant. Stryker et al. (1985) and others, however, have argued that the halo binary fraction is in fact comparable to that of the disk. Carney et al. (1994) have found that at least 15% of their sample of halo FG subdwarfs are spectroscopic binaries with periods less than 3000 days. This is identical to the 14% spectroscopic binary fraction of local disk G dwarfs (Duquennoy & Mayor, 1991; Mazeh et al., 1992) in the same period range. Given our small number statistics, our data are consistent with either scenario. Our data and the G dwarf data taken together strongly suggest that the binary fraction is not greater than that in the disk. On the basis of imaging of wide binaries in IC 348, Duchêne, Bouvier & Simon (1999) argue that loose associations exhibit an excess of binaries with respect to both denser open clusters (IC 348, Trapezium, the Pleiades) and the solar neighborhood. They suggest that the disk binary frequency depends on stellar density within the cluster (or perhaps some other parameter which also controls the density). If this scenario is correct, and if it applies to the halo, it suggests that most halo stars form in clusters of densities comparable to or greater than the typical disk star formation region.
Our results suggest that further study of the binary fraction of halo stars of all masses will be profitable. A few more M subdwarfs are available for study within 50 parsecs; significant improvement in the sample size requires extending the sample out to 100 parsecs – increasing numbers of such subdwarfs are being identified. In addition to imaging, a radial velocity monitoring campaign is needed to search for closer objects. Comparison of such data to higher mass halo stars (Carney et al., 1994) and data for disk stars in a variety of enviroments may provide an important constraint on the formation and subsequent evolution of the Galactic halo. The importance of density on the star formation process and the subsequent evolution of halo binary systems also needs to be investigated.
## 4 Summary
We have found that color transformations of WFPC2 F555W and F850LP to ground V,I photometry are consistent with predictions. This supports analysis of WFPC2 color-magnitude diagrams of globular clusters. We find no companions to a sample of nine isolated nearby M subdwarfs and one tertiary M subdwarf. A binary fraction as high as that for similar mass disk M dwarf cannot be ruled out on the basis of such a small sample, but the lack of observed binaries suggests that the binary fraction is not high enough to seriously bias the luminosity function. Unfortunately, we have not yet found a system suitable for mass determinations.
This research was supported by NASA HST Grant GO-07385.01-96A.
## Appendix A Hyades Stars
Since our last analysis of our Hyades HST observations (Reid & Gizis, 1997b), two more snapshots have been obtained. RHy 164 and RHy 326 have no detected companions. There are now nine resolved binaries out of 55, or 16%; including the three marginally resolved systems pushes the fraction up to 22%. The effect on our analysis is insignificant. |
no-problem/0001/hep-th0001017.html | ar5iv | text | # The Interaction of Two Hopf Solitons
## 1 Introduction
The O(3) nonlinear sigma model in 3+1 dimensions involves a unit vector field $`\stackrel{}{\varphi }=(\varphi ^1,\varphi ^2,\varphi ^3)`$ which is a function of the space-time coordinates $`x^\mu =(t,x,y,z)`$. Since $`\stackrel{}{\varphi }`$ takes values on the unit sphere $`S^2`$ and the homotopy group $`\pi _3(S^2)`$ is non-trivial, the system admits topological solitons (textures) with non-zero Hopf number $`N\pi _3(S^2)`$. The model arises naturally in condensed-matter physics and in cosmology, and its dynamics in these contexts is governed by the Lagrangian density $`(_\mu \stackrel{}{\varphi })^2`$. With such a Lagrangian, there are no stable soliton solutions: the usual scaling argument shows that textures are unstable to shrinking in size. (In practice, the decay of these textures is more complicated, and can, for example, involve decay into monopole-antimonopole pairs.)
Another context in which the nonlinear sigma model arises is as an approximation to a gauge theory; ie one gets an effective action for the gauge field in terms of the scalar fields $`\varphi ^a`$. In this case, there are higher-order terms in the Lagrangian, which can have the effect of stabilizing solitons. This idea is familiar in the Skyrme model, where the target space is a Lie group; it also occurs for the case relevant here, where the target space is $`S^2`$ , , .
The simplest such modification involves adding a fourth-order (Skyrme-like) term to the Lagrangian. This leads to a sigma model which admits stable, static, localized solitons — they resemble closed strings, which can be linked or knotted. The system has been written about at least since 1975 , ; but recently interest in it has increased, stimulated by numerical work , , , , , ; see also , , .
In this Letter, we shall deal only with static configurations, so $`\stackrel{}{\varphi }`$ is a function of the spatial coordinates $`x^j=(x,y,z)`$. The boundary condition is $`\varphi ^a(0,0,1)`$ as $`r\mathrm{}`$, where $`r^2=x^2+y^2+z^2`$. So we may think of $`\stackrel{}{\varphi }`$ as a smooth function from $`S^3`$ to $`S^2`$; and hence it defines a Hopf number $`N`$ (an integer). This $`N`$ may be thought of as a linking number: the inverse images of two generic points on the target space, for example $`(0,0,1)`$ and $`(0,0,1)`$, are curves in space, and the first curve links $`N`$ times around the other.
The energy of a static field $`\stackrel{}{\varphi }(x^j)`$ is taken to be
$$E=\frac{1}{32\pi ^2}\left[(_j\varphi ^a)(_j\varphi ^a)+F_{jk}F_{jk}\right]d^3x,$$
(1)
where $`F_{jk}=\epsilon _{abc}\varphi ^a(_j\varphi ^b)(_k\varphi ^c)/2`$. The ratio of the coefficients of the two terms in (1) sets the length scale — in this case, one expects the solitons to have a size of order unity (note that other authors use slightly different coefficients). The factor of $`1/32\pi ^2`$ is justified in : there is a lower bound on the energy which is proportional to $`N^{3/4}`$, and if space is allowed to be a three-sphere, then there is an $`N=1`$ solution with $`E=1`$. So one expects, with the normalization (1), to have the lower bound $`EN^{3/4}`$.
## 2 The One-Soliton
The minimum-energy configuration in the $`N=1`$ sector is an axially-symmetric, ring-like structure. It was studied numerically in (with no quantitative results), in (which gave an energy value of $`E=1.25`$, although without any statement of numerical errors), and in , (where the field was placed in a finite-volume box, so its energy was not evaluated accurately). For the results described in this letter, a numerical scheme has been set up which
* includes the whole of space $`R^3`$ (by making coordinate transformations that bring spatial infinity in to a finite range); and
* using a lattice expression for the energy in which the truncation error is of order $`h^4`$, where $`h`$ is the lattice spacing.
Using this shows that the energy of the one-soliton is $`E=1.22`$ (accurate to the two decimal places).
Let us choose the axis of symmetry to be the $`z`$-axis, and the soliton to be concentrated in the $`xy`$-plane. In terms of the complex field
$$W=\frac{\varphi ^1+i\varphi ^2}{1+\varphi ^3}$$
(2)
(the stereographic projection of $`\stackrel{}{\varphi }`$), the 1-soliton solution is closely approximated by the expression
$$W=\frac{x+iy}{zif(r)},$$
(3)
where $`f(r)`$ is a cubic polynomial. We may minimize the energy of the configuration (3) with respect to the coefficients of $`f`$: this gives an energy $`E=1.23`$ (less than 1% above the true minimum), for
$$f(r)=0.453(r0.878)(r^2+0.705r+1.415).$$
(4)
Note that $`W0`$ as $`r\mathrm{}`$ (the boundary condition); that $`W=0`$ on the $`z`$-axis; and that $`W=\mathrm{}`$ on a ring (of radius 0.878) in the $`xy`$-plane. The ring where $`W=\mathrm{}`$ links once around the “ring” (the $`z`$-axis plus a point at infinity) where $`W=0`$: hence the linking number $`N`$ equals 1. The field looks like a ring (or possibly a disc, depending on what one plots) in the $`xy`$-plane.
To leading order as $`r\mathrm{}`$, $`\varphi ^1`$ and $`\varphi ^2`$ have to be solutions of the Laplace equation ($`\varphi ^1`$ and $`\varphi ^2`$ are the analogues of the massless pion felds in the Skyrme model). From (3) one sees that
$$\varphi ^1+i\varphi ^22W\frac{4i(x+iy)}{r^3}\mathrm{for}\mathrm{large}r.$$
(5)
So $`\varphi ^1`$ and $`\varphi ^2`$ resemble, asymptotically, a pair $`(\stackrel{}{P},\stackrel{}{Q})`$ of dipoles, orthogonal to each other and to the axis of symmetry.
It is useful to note the effect on the 1-soliton field of rotations by $`\pi `$ about each of the three coordinate axes. These, together with the identity, form the dihedral group $`D_2`$. Let $`\pm I`$ and $`\pm C`$ denote the maps
$$\pm I:W\pm W,\pm C:W\pm \overline{W}.$$
(6)
Then it is clear from (3) that the four elements of $`D_2`$ induce the four maps $`\{I,I,C,C\}`$ on $`W`$.
The single soliton depends on six parameters: three for location in space, two for the direction of the $`z`$-axis, and one for a phase (the phase of $`W`$). The phase is unobservable, in the sense that it does not appear in the energy density, and can be removed by a rotation of the target space $`S^2`$; but the energy of a two-soliton system depends on the relative phase of the two solitons, as we shall see in the next section.
## 3 Two Solitons Far Apart
Suppose we have two solitons, located far apart. Let $`(\stackrel{}{P}_+,\stackrel{}{Q}_+)`$ denote the dipole pair of one of them, $`(\stackrel{}{P}_{},\stackrel{}{Q}_{})`$ the dipole pair of the other, and $`\stackrel{}{R}`$ the separation vector between them. There will, in general, be a force between the solitons, which depends (to leading order) on the distance $`R`$ between them, and on their mutual orientation. One can predict this force by considering the forces between the dipoles. Since the fields are space-time scalars, like charges attract; so the force between two dipoles is maximally attractive if they are parallel, and maximally repulsive if they are anti-parallel. There are three obvious “attractive channels” (mutual orientations for which the two solitons attract), which will be referred to as channels $`A`$, $`B`$ and $`C`$. We now discuss each of these.
### Channel A.
The only axisymmetric configuration involving two separated solitons is one where each dipole pair is orthogonal to the separation vector: in fact, where $`\stackrel{}{P}_+\times \stackrel{}{Q}_+`$, $`\stackrel{}{P}_{}\times \stackrel{}{Q}_{}`$ and $`\stackrel{}{R}`$ are all parallel. The configuration is illustrated in Fig 1.
The two circles are where $`W=\mathrm{}`$, while the line linking them is where $`W=0`$. The arrows on the curves serve partly to distinguish solitons from anti-solitons: the convention here is that solitons obey the right-hand rule, whereas anti-solitons would obey the left-hand rule.
Let $`\theta `$ denote the angle between $`\stackrel{}{P}_+`$ and $`\stackrel{}{P}_{}`$ (ie the pair $`(\stackrel{}{P}_+,\stackrel{}{Q}_+)`$ is rotated by $`\theta `$ about the line joining the two solitons). Let $`E_1`$ denote the energy of a single soliton, and $`E_2(R,\theta )`$ the energy of the two-soliton system, as a function of the separation $`R`$ and the relative phase $`\theta `$. Considering the potential energy of the interacting dipoles suggests that
$$E_2(R,\theta )=2E_12kR^3\mathrm{cos}\theta $$
(7)
for some constant $`k`$. Clearly $`E_2(R,\theta )`$ is minimized, for a given $`R`$, when $`\theta =0`$ (ie when the two solitons are in phase): this is channel $`A`$. The formula (7) was tested numerically, by computing the energy of the configurations obtained by combining translated and rotated versions of the approximate one-soliton (3). The combination anstaz was simply that of addition ($`W=W_++W_{}`$), which is a plausible approximation for large $`R`$ (bearing in mind that the $`W`$-field tends to zero away from each soliton). For $`R`$ in the range $`6<R<16`$, the form (7) is indeed found to hold, with $`k1`$. (In view of the crudity of the “sum” ansatz, the accuracy is not claimed to be better than 10% or so; but the $`R^3\mathrm{cos}\theta `$ dependence is very clear.) The behaviour under the discrete symmetries $`D_2`$ is the same as for the 1-soliton, namely $`\{I,I,C,C\}`$.
### Channel B.
This channel is one in which both dipole pairs are co-planar with $`\stackrel{}{R}`$, with $`\stackrel{}{P}_+\times \stackrel{}{Q}_+`$ and $`\stackrel{}{P}_{}\times \stackrel{}{Q}_{}`$ being parallel (and orthogonal to $`\stackrel{}{R}`$). This configuration is depicted in Fig 2(a).
In this case, the effect of the discrete symmetries $`D_2`$ on the configuration is $`\{I,I,C,C\}`$. Consideration of the forces between the dipoles suggests that the energy behaves like
$$E_2(R,\theta )=2E_1+kR^3\mathrm{cos}(\theta _+\theta _{}),$$
(8)
where $`\theta _\pm `$ is the angle that $`\stackrel{}{P}_\pm `$ makes with $`\stackrel{}{R}`$. The expression (8) has a minimum when $`\theta _+\theta _{}=\pi `$, and this is attractive channel $`B`$.
### Channel C.
Here the dipole pairs are again co-planar with $`\stackrel{}{R}`$, but now $`\stackrel{}{P}_+\times \stackrel{}{Q}_+`$ and $`\stackrel{}{P}_{}\times \stackrel{}{Q}_{}`$ are anti-parallel. This is depicted in Fig 3(a).
In this case, one expects that
$$E_2(R,\theta )=2E_1+3kR^3\mathrm{cos}(\theta _++\theta _{}),$$
(9)
where, as before, $`\theta _\pm `$ is the angle that $`\stackrel{}{P}_\pm `$ makes with $`\stackrel{}{R}`$. So the attractive force is maximal when $`\theta _++\theta _{}=\pi `$. The dependence (9) was confirmed numerically, as before, with $`k1`$. This ‘maximally-attractive’ channel is referred to as channel $`C`$. The effect of the discrete symmetries $`D_2`$ is $`\{I,I,C,C\}`$, as for channel $`B`$.
## 4 Relaxing in Channel A
In this section, we see what happens when we begin with two solitons far apart (in the first of the attractive channels described above), and minimize the energy. This was done numerically, using a conjugate-gradient procedure.
Suppose, then, that we start in the (axisymmetric) channel $`A`$, and minimize energy without breaking the axial symmetry. Then the two solitons approach each other along the line joining them, and the minimum is reached when they are a nonzero distance apart. The resulting configuration therefore is a static solution of the field equation; it has energy $`E=2.26`$ (this is to be compared with $`2E_1=2.45`$), and it resembles two rings around the $`z`$-axis, separated by a distance $`R=1.3`$. In other words, $`W=0`$ consists of the $`z`$-axis plus infinity, as for the 1-soliton, while $`W=\mathrm{}`$ consists of two disjoint rings around it; so the linking number $`N`$ does indeed equal 2. The picture is as in Fig 1.
There is an approximate configuration analogous to (3), namely
$$W=\frac{2(x+iy)[z(1+\beta r)ih(r)]}{[z_+(1+\alpha r_+)if(r_+)][z_{}(1+\alpha r_{})if(r_{})]},$$
(10)
where $`\alpha `$ and $`\beta `$ are parameters, $`f`$ and $`h`$ are cubic polynomials, $`z_\pm =z\pm 0.65`$ and $`r_\pm ^2=x^2+y^2+z_\pm ^2`$. The two-ring structure is evident from (10). Minimizing the energy of (10) with respect to the ten parameters ($`\alpha `$, $`\beta `$, and the coefficients of $`f`$ and $`h`$) gives an energy $`E=2.29`$, it ie $`1.3\%`$ above that of the solution. The corresponding configuration $`\stackrel{}{\varphi }`$ is very close to the actual solution.
While this is a solution, it is not the global minimum of the energy in the $`N=2`$ sector; in particular, channel $`B`$ produces a solution with lower energy. So the question arises as to whether the channel $`A`$ minimum is stable to (non-axisymmetric) perturbations (ie whether it is a local minimum of the energy, as opposed to a saddle-point). The linking behaviour of the channel $`B`$ minimum is that of a single ring around a double axis (as we shall see in the next section), as opposed to a double ring around a single axis; there is a continuous path in configuration space from the one configuration to the other, but the contortions involved in this suggest that there is an energy barrier (in other words, that the channel $`A`$ solution is a local minimum). Numerical experiments, involving random perturbations of this solution, provide strong support for this; but more study is needed.
## 5 Relaxing in Channel B
Next, we start in channel $`B`$ and once again flow down the energy gradient. As depicted in Fig 2, the two rings (where $`W=\mathrm{}`$) merge into one, and then the two lines where $`W=0`$ merge as well. We end up with a solution which has been described previously , , , and which is believed to be the global minimum in the $`N=2`$ sector. It is axially-symmetric, and resembles a single ring; but this time the ring winds around a double copy of the $`z`$-axis, and hence it has a linking number of $`N=2`$. The energy of the solution is $`E=2.00`$, which agrees with the figure given in .
As before, we can write down an explicit configuration which is very close to the solution. One such expression is
$$W=\frac{(x+iy)^2}{azrif(r)},$$
(11)
where $`a`$ is a constant and $`f(r)`$ is a quintic polynomial. Minimizing the energy with respect to the six coefficients contained in (11) gives $`E=2.03`$ (ie $`1.5\%`$ above the true minimum), for
$$a=1.55,f(r)=0.23(r1.27)(r+0.44)(r+0.16)(r^22.15r+5.09).$$
(12)
Since $`f`$ has only one positive root, $`W=\mathrm{}`$ is a ring (of radius $`1.27`$) in the $`xy`$-plane; whereas $`W=0`$ is the $`z`$-axis, with multiplicity two. The components of $`\stackrel{}{\varphi }`$ derived from (11) are very close to those of the actual solution.
## 6 Relaxing in Channel C
If one begins with the configuration depicted in Fig 3(a) and moves in the direction of the energy gradient, the two solitons approach each other. If the two $`W=\mathrm{}`$ loops touch, one has a figure-eight curve, with the $`W=0`$ lines linking through it in opposite directions: Fig 3(b). This configuration is certainly not stable: preliminary numerical work indicates that the two ‘halves’ of the configuration rotate by $`\pi /2`$ (in opposite directions) about the axis joining them. So the figure-eight untwists to become a simple loop, and the two $`W=0`$ curves end up pointing in the same direction, exactly as in Fig 2(b) and (c). Hence the minimum in channel $`C`$ is the same as that in channel $`B`$. Between this mimimum and the channel-$`A`$ one, there should be saddle-point solutions; but what these look like is not yet clear.
## 7 Concluding Remarks
There has already been some study of two-soliton dynamics, using a “direct” numerical approach (see, for example, ); this is computationally very intensive. The results reported in this Letter could be viewed as the first step towards a somewhat different approach, namely that of constructing a collective-coordinate manifold for the two-soliton system. The analogous structure for the Skyrme model has been investigated in some detail , ; in particular, it has the advantage that one can introduce quantum corrections by quantizing the dynamics on the collective-coordinate manifold . Since each Hopf soliton depends on six parameters, the two-soliton manifold $`M_2`$ should have dimension (at least) twelve; each point of $`M_2`$ corresponds to a relevant $`N=2`$ configuration, and the expressions (3) and (11) are examples of such configurations.
But clearly much more work remains to be done towards understanding the energy functional on the $`N=2`$ configuration space. The suggestion of this Letter is that the global minimum (which is, of course, degenerate: it depends on six moduli) is as in Fig 2(c); there is a local minimum as in Fig 1; and between the two are saddle-point solutions which may be related to the figure-eight configuration Fig 3(b). |
no-problem/0001/nucl-th0001055.html | ar5iv | text | # Hadrons in Dense Resonance-Matter: A Chiral 𝑆𝑈(3) Approach
## I Introduction
The investigation of the equation of state of strongly interacting matter is one of the most challenging problems in nuclear and heavy ion physics. Dense nuclear matter exists in the interior of neutron stars, and its behaviour plays a crucial role for the structure and properties of these stellar objects. The behaviour of hadronic matter at high densities and temperatures strongly influences the observables in relativistic heavy ion collisions (e.g. flow, particle production,…). The latter depend on the bulk and nonequilibrium properties of the produced matter (e.g. pressure, density, temperature, viscosity,…) and the properties of the constituents (effective masses, decay widths, dispersion relations,…). So far it is not possible to determine the equation of state of hadronic matter at high densities (and temperatures) from first principles. QCD is not solvable in the regime of low momentum transfers and finite baryon densities. Therefore one has to pursue alternative ways to describe the hadrons in dense matter. Effective models, where only the relevant degrees of freedom for the problem are considered are solvable and can contain the essential characteristics of the full theory. For the case of strongly interacting matter this means that one considers hadrons rather than quarks and gluons as the relevant degrees of freedom. Several such models like the RMF model(QHD) and its extensions (QHD II, nonlinear Walecka model) successfully describe nuclear matter and finite nuclei . Although these models are effective relativistic quantum field theories of baryons and mesons, they do not consider essential features of QCD, namely broken scale invariance and approximate chiral symmetry. Including SU(2) chiral symmetry in these models by adding repulsive vector mesons to the $`SU(2)`$-linear $`\sigma `$-model does neither lead to a reasonable description of nuclear matter ground state properties nor of finite nuclei . Either one must use a nonlinear realization of chiral symmetry or include a dilaton field and a logarithmic potential motivated by broken scale invariance in order to obtain a satisfactory description of nuclear matter. Extending these approaches to the strangeness sector leads to a number of new, undetermined coupling constants due to the additional strange hadrons. Both to overcome this problem and to put restrictions on the coupling constants in the non-strange sector the inclusion of SU(3) and chiral SU(3) has been investigated in the last years. Recently it was shown that an extended $`SU(3)\times SU(3)`$ chiral $`\sigma \omega `$ model can describe nuclear matter ground state properties, vacuum properties and finite nuclei simultaneously. This model includes the lowest lying SU(3) multiplets of the baryons (octet), the spin-0 and the spin-1 mesons (nonets) as physical degrees of freedom. The present paper will discuss the predictions of this model for high density nuclear matter, including the spin $`\frac{3}{2}`$ baryon resonances (decuplet). This is necessary, because the increasing nucleonic fermi levels make the production of resonances energetically favorable at high densities. The paper is structured as follows: Section II summarizes the nonlinear chiral $`SU(3)\times SU(3)`$-model. Section III gives the baryon meson interaction, with main focus on the baryon meson-decuplet interaction and the constraints on the additional coupling constants. In section IV the resulting equations of motions and thermodynamic observables in the mean field approximation are discussed. 
Section V contains the results for dense hadronic matter, followed by the conclusions.
## II Lagrangian of the nonlinear chiral SU(3) model
We use a relativistic field theoretical model of baryons and mesons based on chiral symmetry and scale invariance to describe strongly interacting nuclear matter. In earlier work the Lagrangian including the baryon octet, the spin-0 and spin-1 mesons has been developed . Here the additional inclusion of the spin-$`\frac{3}{2}`$ baryon decuplet for infinite nuclear matter will be discussed. The general form of the Lagrangian then looks as follows:
$$=_{\mathrm{kin}}+\underset{W=X,Y,V,𝒜,u}{}_{\mathrm{BW}}+_{\mathrm{VP}}+_{\mathrm{vec}}+_0+_{\mathrm{SB}}.$$
(1)
$`_{\mathrm{kin}}`$ is the kinetic energy term, $`_{\mathrm{BW}}`$ includes the interaction terms of the different baryons with the various spin-0 and spin-1 mesons. $`_{\mathrm{VP}}`$ contains the interaction terms of vector mesons with pseudoscalar mesons. $`_{\mathrm{vec}}`$ generates the masses of the spin-1 mesons through interactions with spin-0 mesons, and $`_0`$ gives the meson-meson interaction terms which induce the spontaneous breaking of chiral symmetry. It also includes the scale breaking logarithmic potential. Finally, $`_{\mathrm{SB}}`$ introduces an explicit symmetry breaking of the U(1)<sub>A</sub> symmetry, the SU(3)<sub>V</sub> symmetry, and the chiral symmetry. These terms have been discussed in detail in and this shall not be repeated here. We will concentrate on the new terms in $`_{\mathrm{BW}}`$, which are due to adding the baryon resonances.
## III Baryon meson interaction
$`_{BW}`$ consists of the interaction terms of the included baryons (octet and decuplet) and the mesons (spin-0 and spin-1). For the spin-$`\frac{1}{2}`$ baryons the $`SU(3)`$ structure of the couplings to all mesons are the same, except for the difference in Lorentz space. For a general meson field $`W`$ they read
$$_{\text{OW}}=\sqrt{2}g_{O8}^W\left(\alpha _{OW}[\overline{B}𝒪BW]_F+(1\alpha _{OW})[\overline{B}𝒪BW]_D\right)g_{O1}^W\frac{1}{\sqrt{3}}\mathrm{Tr}(\overline{B}𝒪B)\mathrm{Tr}W,$$
(2)
with $`[\overline{B}𝒪BW]_F:=\mathrm{Tr}(\overline{B}𝒪WB\overline{B}𝒪BW)`$ and $`[\overline{B}𝒪BW]_D:=\mathrm{Tr}(\overline{B}𝒪WB+\overline{B}𝒪BW)\frac{2}{3}\mathrm{Tr}(\overline{B}𝒪B)\mathrm{Tr}W`$. The different terms to be considered are those for the interaction of spin-$`\frac{1}{2}`$ baryons ($`B`$), with scalar mesons ($`W=X,𝒪=1`$), with vector mesons ($`W=V_\mu ,𝒪=\gamma _\mu `$), with axial vector mesons ($`W=𝒜_\mu ,𝒪=\gamma _\mu \gamma _5`$) and with pseudoscalar mesons ($`W=u_\mu ,𝒪=\gamma _\mu \gamma _5`$), respectively. For the spin-$`\frac{3}{2}`$ baryons ($`D^\mu `$) one can construct a coupling term similar to (2)
$$_{\text{DW}}=\sqrt{2}g_{D8}^W[\overline{D^\mu }𝒪D_\mu W]g_{D1}^W[\overline{D^\mu }𝒪D_\mu ]TrW,$$
(3)
where $`[\overline{D^\mu }𝒪D_\mu W]`$ and $`[\overline{D^\mu }𝒪D_\mu ]`$ are obtained from coupling $`[\overline{10}]\times [10]\times [8]=[1]+[8]+[27]+[64]`$ and $`[\overline{10}]\times [10]\times [1]`$ to an SU(3) singlet, respectively. In the following we focus on the couplings of the baryons to the scalar mesons which dynamically generate the hadron masses and vector mesons which effectively describe the short-range repulsion. For the pseudoscalar mesons only a pseudovector coupling is possible, since in the nonlinear realization of chiral symmetry they only appear in derivative terms. Pseudoscalar and axial mesons have a vanishing expectation value at the mean field level, so that their coupling terms will not be discussed in detail here.
### Scalar Mesons
The baryons and the scalar mesons transform equally in the left and right subspace. Therefore, in contrast to the linear realization of chiral symmetry, an $`f`$-type coupling is allowed for the baryon-octet-meson interaction. In addition, it is possible to construct mass terms for baryons and to couple them to chiral singlets. Since the current quark masses in QCD are small compared to the hadron masses, we will use baryonic mass terms only as small corrections to the dynamically generated masses. Furthermore a coupling of the baryons to the dilaton field $`\chi `$ is also possible, but this will be discussed in a later publication. After insertion of the vacuum matrix $`X`$, (Eq.A4), one obtains the baryon masses as generated by the vacuum expectation value (VEV) of the two meson fields:
$`m_N`$ $`=`$ $`m_0{\displaystyle \frac{1}{3}}g_{O8}^S(4\alpha _{OS}1)(\sqrt{2}\zeta \sigma )`$ (4)
$`m_\mathrm{\Lambda }`$ $`=`$ $`m_0{\displaystyle \frac{2}{3}}g_{O8}^S(\alpha _{OS}1)(\sqrt{2}\zeta \sigma )`$ (5)
$`m_\mathrm{\Sigma }`$ $`=`$ $`m_0+{\displaystyle \frac{2}{3}}g_{O8}^S(\alpha _{OS}1)(\sqrt{2}\zeta \sigma )`$ (6)
$`m_\mathrm{\Xi }`$ $`=`$ $`m_0+{\displaystyle \frac{1}{3}}g_{O8}^S(2\alpha _{OS}+1)(\sqrt{2}\zeta \sigma )`$ (7)
with $`m_0=g_{O1}^S(\sqrt{2}\sigma +\zeta )/\sqrt{3}`$. The parameters $`g_{O1}^S`$, $`g_{O8}^S`$ and $`\alpha _{OS}`$ can be used to fit the baryon-octet masses to their experimental values. Besides the current quark mass terms discussed in , no additional explicit symmetry breaking term is needed. Note that the nucleon mass depends on the strange condensate $`\zeta `$! For $`\zeta =\sigma /\sqrt{2}`$ (i.e. $`f_\pi =f_K`$), the masses are degenerate, and the vacuum is SU(3)<sub>V</sub>-invariant. For the spin-$`\frac{3}{2}`$ baryons the procedure is similar. If the vacuum matrix for the scalar condensates is inserted one obtains the dynamically generated vacuum masses of the baryon decuplet
$`m_\mathrm{\Delta }`$ $`=`$ $`g_D^S\left[(3\alpha _{DS})\sigma +\alpha _{DS}\sqrt{2}\zeta \right]`$ (8)
$`m_\mathrm{\Sigma }^{}`$ $`=`$ $`g_D^S\left[2\sigma +\sqrt{2}\zeta \right]`$ (9)
$`m_\mathrm{\Xi }^{}`$ $`=`$ $`g_D^S\left[(1+\alpha _{DS})\sigma +(2\alpha _{DS})\sqrt{2}\zeta \right]`$ (10)
$`m_\mathrm{\Omega }`$ $`=`$ $`g_D^S\left[2\alpha _{DS}\sigma +(3\alpha _{DS})\sqrt{2}\zeta \right]`$ (11)
The new parameters are connected to the parameters in (3) by $`g_{D8}^W=\sqrt{120}(1\alpha _{DS})g_D^S`$ and $`g_{D1}^W=\sqrt{90}g_D^S`$. $`g_D^S`$ and $`\alpha _{DS}`$ can now be fixed to reproduce the masses of the baryon decuplet. As in the case of the nucleon, the coupling of the $`\mathrm{\Delta }`$ to the strange condensate is nonzero.
It is desirable to have an alternative way of baryon mass generation, where the nucleon and the $`\mathrm{\Delta }`$ mass depend only on $`\sigma `$. For the nucleon this can be accomplished for example by taking the limit $`\alpha _{OS}=1`$ and $`g_{O1}^S=\sqrt{6}g_{O8}^S`$. Then, the coupling constants between the baryon octet and the two scalar condensates are related to the additive quark model. This leaves only one coupling constant to adjust for the correct nucleon mass. For a fine-tuning of the remaining masses, it is necessary to introduce an explicit symmetry breaking term, that breaks the SU(3)-symmetry along the hypercharge direction. A possible term already discussed in , which respects the Gell-Mann-Okubo mass relation, is
$$_{\mathrm{\Delta }m}=m_1\mathrm{Tr}(\overline{B}B\overline{B}BS)m_2\mathrm{Tr}(\overline{B}SB),$$
(12)
where $`S_b^a=\frac{1}{3}[\sqrt{3}(\lambda _8)_b^a\delta _b^a]`$. As in the first case, only three coupling constants, $`g_{N\sigma }3g_{O8}^S`$, $`m_1`$ and $`m_2`$, are sufficient to reproduce the experimentally known baryon masses. Explicitly, the baryon masses have the values
$`m_N`$ $`=`$ $`g_{N\sigma }\sigma `$ (13)
$`m_\mathrm{\Xi }`$ $`=`$ $`{\displaystyle \frac{1}{3}}g_{N\sigma }\sigma {\displaystyle \frac{2}{3}}g_{N\sigma }\sqrt{2}\zeta +m_1+m_2`$ (14)
$`m_\mathrm{\Lambda }`$ $`=`$ $`{\displaystyle \frac{2}{3}}g_{N\sigma }\sigma {\displaystyle \frac{1}{3}}g_{N\sigma }\sqrt{2}\zeta +{\displaystyle \frac{m_1+2m_2}{3}}`$ (15)
$`m_\mathrm{\Sigma }`$ $`=`$ $`{\displaystyle \frac{2}{3}}g_{N\sigma }\sigma {\displaystyle \frac{1}{3}}g_{N\sigma }\sqrt{2}\zeta +m_1,`$ (16)
For the baryon decuplet the choice $`\alpha _{DS}=0`$ yields coupling constants related to the additive quark model. We introduce an explicit symmetry breaking proportional to the number of strange quarks for a given baryon species. Here we need only one additional parameter $`m_{Ds}`$ to obtain the masses of the baryon decuplet:
$`m_\mathrm{\Delta }`$ $`=`$ $`g_{\mathrm{\Delta }\sigma }\left[3\sigma \right]`$ (17)
$`m_\mathrm{\Sigma }^{}`$ $`=`$ $`g_{\mathrm{\Delta }\sigma }\left[2\sigma +\sqrt{2}\zeta \right]+m_{Ds}`$ (18)
$`m_\mathrm{\Xi }^{}`$ $`=`$ $`g_{\mathrm{\Delta }\sigma }\left[1\sigma +2\sqrt{2}\zeta \right]+2m_{Ds}`$ (19)
$`m_\mathrm{\Omega }`$ $`=`$ $`g_{\mathrm{\Delta }\sigma }\left[0\sigma +3\sqrt{2}\zeta \right]+3m_{Ds}`$ (20)
For both versions of the baryon-meson interaction the parameters are fixed to yield the baryon masses of the octet and the decuplet. The corresponding parameter set $`C_2`$, has been discussed in detail in .
### Vector mesons
For the spin-$`\frac{1}{2}`$ baryons two independent interaction terms with spin-1 mesons can be constructed, in analogy to the interaction of the baryon octet with the scalar mesons. They correspond to the antisymmetric ($`f`$-type) and symmetric ($`d`$-type) couplings, respectively. From the universality principle and the vector meson dominance model one may conclude that the $`d`$-type coupling should be small. Here $`\alpha _V=1`$, i.e. pure $`f`$-type coupling, is used. It was shown in , that a small admixture of d-type coupling allows for some fine-tuning of the single-particle energy levels of nucleons in nuclei. As in the case of scalar mesons, for $`g_{O1}^V=\sqrt{6}g_{O8}^V`$, the strange vector field $`\varphi _\mu \overline{s}\gamma _\mu s`$ does not couple to the nucleon. The remaining couplings to the strange baryons are then determined by symmetry relations: $`g_{N\omega }`$ $`=`$ $`(4\alpha _V1)g_{O8}^V`$ $`g_{\mathrm{\Lambda }\omega }`$ $`=`$ $`{\displaystyle \frac{2}{3}}(5\alpha _V2)g_{O8}^V`$ $`g_{\mathrm{\Sigma }\omega }`$ $`=`$ $`2\alpha _Vg_{O8}^V`$ $`g_{\mathrm{\Xi }\omega }`$ $`=`$ $`(2\alpha _V1)g_{O8}^V`$ $`g_{\mathrm{\Lambda }\varphi }`$ $`=`$ $`{\displaystyle \frac{\sqrt{2}}{3}}(2\alpha _V+1)g_{O8}^V`$ $`g_{\mathrm{\Sigma }\varphi }`$ $`=`$ $`\sqrt{2}(2\alpha _V1)g_{O8}^V`$ $`g_{\mathrm{\Xi }\varphi }`$ $`=`$ $`2\sqrt{2}\alpha _Vg_{O8}^V.`$ In the limit $`\alpha _V=1`$, the relative values of the coupling constants are related to the additive quark model via:
$$g_{\mathrm{\Lambda }\omega }=g_{\mathrm{\Sigma }\omega }=2g_{\mathrm{\Xi }\omega }=\frac{2}{3}g_{N\omega }=2g_{O8}^Vg_{\mathrm{\Lambda }\varphi }=g_{\mathrm{\Sigma }\varphi }=\frac{g_{\mathrm{\Xi }\varphi }}{2}=\frac{\sqrt{2}}{3}g_{N\omega }.$$
(23)
Note that all coupling constants are fixed once e.g. $`g_{N\omega }`$ is specified. For the coupling of the baryon resonances to the vector mesons we obtain the same Clebsch-Gordan coefficients as for the coupling to the scalar mesons. This leads to the following relations between the coupling constants:
$`g_{\mathrm{\Delta }\omega }`$ $`=`$ $`(3\alpha _{DV})g_{DV}`$ $`g_{\mathrm{\Sigma }^{}\omega }`$ $`=`$ $`2g_{DV}`$ $`g_{\mathrm{\Xi }^{}\omega }`$ $`=`$ $`(1+\alpha _{DV})g_{DV}`$ $`g_{\mathrm{\Omega }\omega }`$ $`=`$ $`\alpha _{DV}g_{DV}`$ $`g_{\mathrm{\Delta }\varphi }`$ $`=`$ $`\sqrt{2}\alpha _{DV}g_{DV}`$ $`g_{\mathrm{\Sigma }^{}\varphi }`$ $`=`$ $`\sqrt{2}g_{DV}`$ $`g_{\mathrm{\Xi }^{}\varphi }`$ $`=`$ $`\sqrt{2}(2\alpha _{DV})g_{DV}`$ $`g_{\mathrm{\Omega }\varphi }`$ $`=`$ $`\sqrt{2}(3\alpha _{DV})g_{DV}.`$
In analogy to the octet case we set $`\alpha _{DV}=0`$, so that the strange vector meson $`\varphi `$ does not couple to the $`\mathrm{\Delta }`$-baryon. The resulting coupling constants again obey the additive quark model constraints:
$`g_{\mathrm{\Delta }\omega }`$ $`=`$ $`{\displaystyle \frac{3}{2}}g_{\mathrm{\Sigma }^{}\omega }=3g_{\mathrm{\Xi }^{}\omega }=3g_{DV}g_{\mathrm{\Omega }\omega }=0`$ (25)
$`g_{\mathrm{\Omega }\varphi }`$ $`=`$ $`{\displaystyle \frac{3}{2}}g_{\mathrm{\Xi }^{}\varphi }=3g_{\mathrm{\Sigma }^{}\varphi }=\sqrt{2}g_{\mathrm{\Delta }\omega }g_{\mathrm{\Delta }\varphi }=0`$ (26)
Hence all coupling constants of the baryon decuplet are again fixed if one overall coupling $`g_{DV}`$ is specified. Since there is no vacuum restriction on the $`\mathrm{\Delta }`$-$`\omega `$ coupling, like in the case of the scalar mesons, we have to consider different constraints. This will be discussed in section V.
## IV Mean-field approximation
The terms discussed so far involve the full quantum field operators. They cannot be treated exactly. Hence, to investigate hadronic matter properties at finite baryon density we adopt the mean-field approximation. This nonperturbative relativistic method is applied to solve approximately the nuclear many body problem by replacing the quantum field operators by their classical expectation values (for a recent review see ), i.e. the fluctuations around the vacuum expectation values of the field operators are neglected:
$`\sigma (x)`$ $`=`$ $`\sigma +\delta \sigma \sigma \sigma ;\zeta (x)=\zeta +\delta \zeta \zeta \zeta `$ (27)
$`\omega _\mu (x)`$ $`=`$ $`\omega \delta _{0\mu }+\delta \omega _\mu \omega _0\omega ;\varphi _\mu (x)=\varphi \delta _{0\mu }+\delta \varphi _\mu \varphi _0\varphi .`$ (28)
The fermions are treated as quantum mechanical single-particle operators. The derivative terms can be neglected and only the time-like component of the vector mesons $`\omega \omega _0`$ and $`\varphi \varphi _0`$ survive if we assume homogeneous and isotropic infinite baryonic matter. Additionally, due to parity conservation we have $`\pi _i=0`$. The baryon resonances are treated as spin-$`\frac{1}{2}`$ particles with spin-$`\frac{3}{2}`$ degeneracy. After these approximations the Lagrangian (1) reads
$`_{BM}+_{BV}`$ $`=`$ $`{\displaystyle \underset{i}{}}\overline{\psi _i}[g_{i\omega }\gamma _0\omega ^0+g_{i\varphi }\gamma _0\varphi ^0+m_i^{}]\psi _i`$
$`_{vec}`$ $`=`$ $`{\displaystyle \frac{1}{2}}m_\omega ^2{\displaystyle \frac{\chi ^2}{\chi _0^2}}\omega ^2+{\displaystyle \frac{1}{2}}m_\varphi ^2{\displaystyle \frac{\chi ^2}{\chi _0^2}}\varphi ^2+g_4^4(\omega ^4+2\varphi ^4)`$
$`𝒱_0`$ $`=`$ $`{\displaystyle \frac{1}{2}}k_0\chi ^2(\sigma ^2+\zeta ^2)k_1(\sigma ^2+\zeta ^2)^2k_2({\displaystyle \frac{\sigma ^4}{2}}+\zeta ^4)k_3\chi \sigma ^2\zeta `$
$`+`$ $`k_4\chi ^4+{\displaystyle \frac{1}{4}}\chi ^4\mathrm{ln}{\displaystyle \frac{\chi ^4}{\chi _0^4}}{\displaystyle \frac{\delta }{3}}\mathrm{ln}{\displaystyle \frac{\sigma ^2\zeta }{\sigma _0^2\zeta _0}}`$
$`𝒱_{SB}`$ $`=`$ $`\left({\displaystyle \frac{\chi }{\chi _0}}\right)^2\left[m_\pi ^2f_\pi \sigma +(\sqrt{2}m_K^2f_K{\displaystyle \frac{1}{\sqrt{2}}}m_\pi ^2f_\pi )\zeta \right],`$
with the effective mass $`m_i^{}`$ of the baryon $`i`$, which is defined according to section III for $`i=N,\mathrm{\Lambda },\mathrm{\Sigma },\mathrm{\Xi },\mathrm{\Delta },\mathrm{\Sigma }^{},\mathrm{\Xi }^{},\mathrm{\Omega }`$.
Now it is straightforward to write down the expression for the thermodynamical potential of the grand canonical ensemble, $`\mathrm{\Omega }`$, per volume $`V`$ at a given chemical potential $`\mu `$ and at zero temperature:
$$\frac{\mathrm{\Omega }}{V}=_{vec}_0_{SB}𝒱_{vac}\underset{i}{}\frac{\gamma _i}{(2\pi )^3}d^3k[E_i^{}(k)\mu _i^{}]$$
(29)
The vacuum energy $`𝒱_{vac}`$ (the potential at $`\rho =0`$) has been subtracted in order to get a vanishing vacuum energy. The $`\gamma _i`$ denote the fermionic spin-isospin degeneracy factors. The single particle energies are $`E_i^{}(k)=\sqrt{k_i^2+m_{i}^{}{}_{}{}^{2}}`$ and the effective chemical potentials read $`\mu _i^{}=\mu _ig_{\omega i}\omega g_{\varphi i}\varphi `$.
The mesonic fields are determined by extremizing $`\frac{\mathrm{\Omega }}{V}(\mu ,T=0)`$:
$`{\displaystyle \frac{(\mathrm{\Omega }/V)}{\chi }}`$ $`=`$ $`\omega ^2m_\omega ^2{\displaystyle \frac{\chi }{\chi _0^2}}+k_0\chi (\sigma ^2+\zeta ^2)k_3\sigma ^2\zeta +\left(4k_4+1+4\mathrm{ln}{\displaystyle \frac{\chi }{\chi _0}}4{\displaystyle \frac{\delta }{3}}\mathrm{ln}{\displaystyle \frac{\sigma ^2\zeta }{\sigma _0^2\zeta _0}}\right)\chi ^3+`$ (30)
$`+`$ $`2{\displaystyle \frac{\chi }{\chi _0^2}}\left[m_\pi ^2f_\pi \sigma +(\sqrt{2}m_K^2f_K{\displaystyle \frac{1}{\sqrt{2}}}m_\pi ^2f_\pi )\zeta \right]=0`$ (31)
$`{\displaystyle \frac{(\mathrm{\Omega }/V)}{\sigma }}`$ $`=`$ $`k_0\chi ^2\sigma 4k_1(\sigma ^2+\zeta ^2)\sigma 2k_2\sigma ^32k_3\chi \sigma \zeta 2{\displaystyle \frac{\delta \chi ^4}{3\sigma }}+`$ (32)
$`+`$ $`\left({\displaystyle \frac{\chi }{\chi _0}}\right)^2m_\pi ^2f_\pi +{\displaystyle \underset{i}{}}{\displaystyle \frac{m_i^{}}{\sigma }}\rho _i^s=0`$ (33)
$`{\displaystyle \frac{(\mathrm{\Omega }/V)}{\zeta }}`$ $`=`$ $`k_0\chi ^2\zeta 4k_1(\sigma ^2+\zeta ^2)\zeta 4k_2\zeta ^3k_3\chi \sigma ^2{\displaystyle \frac{\delta \chi ^4}{3\zeta }}+`$ (34)
$`+`$ $`\left({\displaystyle \frac{\chi }{\chi _0}}\right)^2\left[\sqrt{2}m_K^2f_K{\displaystyle \frac{1}{\sqrt{2}}}m_\pi ^2f_\pi \right]+{\displaystyle \underset{i}{}}{\displaystyle \frac{m_i^{}}{\zeta }}\rho _i^s=0`$ (35)
$`{\displaystyle \frac{(\mathrm{\Omega }/V)}{\omega }}`$ $`=`$ $`\left({\displaystyle \frac{\chi }{\chi }}_0\right)m_\omega ^2\omega 4g_4^4\omega ^3+{\displaystyle \underset{i}{}}{\displaystyle \frac{g_{i\omega }}{\rho _i}}=0`$ (36)
$`{\displaystyle \frac{(\mathrm{\Omega }/V)}{\varphi }}`$ $`=`$ $`\left({\displaystyle \frac{\chi }{\chi }}_0\right)m_\varphi ^2\varphi 8g_4^4\varphi ^3+{\displaystyle \underset{i}{}}{\displaystyle \frac{g_{i\varphi }}{\rho _i}}=0`$ (37)
The scalar densities $`\rho _i^s`$ and the vector densities $`\rho _i`$ can be calculated analytically for the case $`T=0`$, yielding
$`\rho _i^s`$ $`=`$ $`\gamma _i{\displaystyle \frac{d^3k}{(2\pi )^3}\frac{m_i^{}}{E_i^{}}}={\displaystyle \frac{\gamma _im_i^{}}{4\pi ^2}}\left[k_{Fi}E_{Fi}^{}m_i^2\mathrm{ln}\left({\displaystyle \frac{k_{Fi}+E_{Fi}^{}}{m_i^{}}}\right)\right]`$ (38)
$`\rho _i`$ $`=`$ $`\gamma _i{\displaystyle _0^{k_{Fi}}}{\displaystyle \frac{d^3k}{(2\pi )^3}}={\displaystyle \frac{\gamma _ik_{Fi}^3}{6\pi ^2}}.`$ (39)
The energy density and the pressure follow from the Gibbs–Duhem relation, $`ϵ=\mathrm{\Omega }/V+_i\mu _i\rho ^i`$ and $`p=\mathrm{\Omega }/V`$. The Hugenholtz–van Hove theorem yields the Fermi surfaces as $`E^{}(k_{Fi})=\sqrt{k_{Fi}^2+m_i^2}=\mu _i^{}`$ .
## V Results for dense nuclear matter
### A Parameters
Fixing of the parameters to vacuum and nuclear matter ground state properties was discussed in detail in . It has been shown that the obtained parameter sets describe the nuclear matter saturation point, hadronic vacuum masses and properties of finite nuclei reasonably well. The additional parameters here are the couplings of the baryon resonances to the scalar and vector mesons. For the scalar mesons this is done by a fit to the vacuum masses of the spin-$`\frac{3}{2}`$ baryons. The coupling of the baryon resonances to the spin-1 mesons will be discussed later. These new parameters will not influence the results for normal nuclear matter and finite nuclei.
### B Extrapolation to high densities
Once the parameters have been fixed to nuclear matter at $`\rho _0`$, the condensates and hadron masses at high baryon densities can be investigated, assuming that the change of the parameters of the effective theory with density is small. The behaviour of the fields and the masses of the baryon octet have been investigated in . It is found that the gluon condensate $`\chi `$ stays nearly constant when the density increases. This implies that the approximation of a frozen glueball is reasonable. In these calculations the strange condensate $`\zeta `$ is only reduced by about 10 percent from its vacuum expectation value. This is not surprising since there are only nucleons in the system and the nucleon–$`\zeta `$ coupling is fairly weak. The main effect occurs for the non–strange condensate $`\sigma `$: this field drops to 30 percent of its vacuum expectation value at 4 times normal nuclear density; at even higher densities the $`\sigma `$ field saturates. The behaviour of the condensates is also reflected in the behaviour of the baryon masses: the change of the scalar fields causes a change of the baryon masses in the dense medium. Furthermore, the change of the baryon masses depends on the strange quark content of the baryon. This is due to the different coupling of the baryons to the non-strange and strange condensates. The masses of the vector mesons are shown in Fig. 3. The corresponding terms in the Lagrangian are discussed in . These masses stay nearly constant when the density is increased.
Now we discuss the inclusion of baryonic spin-$`\frac{3}{2}`$ resonances. How do they affect the behaviour of dense hadronic matter? We consider the two parameter sets $`C_1`$ and $`C_2`$, which satisfactorily describe finite nuclei . As stated above, the main difference between the two parameter sets is the coupling of the strange condensate to the nucleon and to the $`\mathrm{\Delta }`$. In $`C_2`$ this coupling is set to zero, while the nucleon and the $`\mathrm{\Delta }`$ couple to the $`\zeta `$ field in the case of $`C_1`$. Fig. 1 shows how the strength of the coupling of the strange condensate to the nucleon and the $`\mathrm{\Delta }`$ depends on the vacuum expectation value of the strange condensate $`\zeta _0`$. $`\zeta _0`$ in turn is a function of the kaon decay constant ($`\zeta _0=\frac{1}{\sqrt{2}}(f_\pi -f_K)`$). The results are obtained by changing the value of $`f_K`$, starting from parameter set $`C_1`$. $`f_K`$ is expected to be in the range of 105 to 125 MeV . For infinite nuclear matter one obtains good fits for the whole range of expected values. But when these parameter sets are used to describe finite nuclei, satisfactory results are only obtained for a small range of values of $`f_K`$, as can be seen from the proton single particle levels in Fig. 2: with decreasing $`f_K`$ the gap between the single-particle levels $`1h_{\frac{9}{2}}`$ and $`3s_{\frac{1}{2}}`$ in $`{}^{208}\mathrm{Pb}`$ decreases, such that e.g. for $`f_K=112`$ MeV the experimentally observed shell closure cannot be reproduced in the calculation. This result is not very surprising, because the smaller value of $`f_K`$ leads to a stronger coupling of the nucleon to the strange field, which has a mass of $`m_\zeta \approx 1`$ GeV. But it has been shown that for a reasonable description of finite nuclei the nucleon must mainly couple to a scalar field with $`m\approx 500`$–$`600`$ MeV. The equation of state of dense hadronic matter for vanishing strangeness is shown in Fig. 4. Here two $`C_1`$ fits are compared, one with $`f_K=122`$ MeV, which corresponds to the fit that has been tested to describe finite nuclei satisfactorily in , and a $`C_1`$-type fit with $`f_K=116`$ MeV, the minimum acceptable value extracted from Fig. 2. The resulting values of the coupling constants to the nucleon are $`g_{N\zeta }\approx 0.49`$ for $`f_K=122`$ MeV and $`g_{N\zeta }\approx 1.72`$ for $`f_K=116`$ MeV. For the $`\mathrm{\Delta }`$-baryon one has $`g_{\mathrm{\Delta }\zeta }\approx 2.2`$ and $`g_{\mathrm{\Delta }\zeta }\approx 0.59`$, respectively. If these values are compared to the couplings to the non-strange condensate (which is around $`10`$ for the nucleon and the $`\mathrm{\Delta }`$ in both cases) one observes that the mass difference between nucleon and $`\mathrm{\Delta }`$ is due to the different coupling to the strange condensate.
Furthermore, the resulting equation of state for parameter set $`C_2`$ is plotted. Here the nucleon and $`\mathrm{\Delta }`$ masses do not depend on the strange condensate. Fig. 4 shows two main results: the resulting EOS does not change significantly if $`f_K`$ in the $`C_1`$-fits is varied within the reasonable range discussed above. In the following we refer to the $`C_1`$-fit with $`f_K=122`$ MeV.
However, the different ways of nucleon and $`\mathrm{\Delta }`$ mass generation lead to drastic differences in the resulting equations of state:
A pure $`\sigma `$-dependence of the masses of the nonstrange baryons ($`C_2`$) leads to an equation of state which is strongly influenced by the production of resonances at high densities. This is not the case when both masses are partially generated by the strange condensate ($`C_1`$), Fig. 4. In both fits the coupling of the $`\mathrm{\Delta }`$ to the $`\omega `$-meson ($`g_{\mathrm{\Delta }\omega }`$) has been set equal to $`g_{N\omega }`$. The very different behaviour of the EOS can be understood from the ratio of the effective $`\mathrm{\Delta }`$-mass to the effective nucleon-mass, Fig. 5. If the coupling of the nucleon to the $`\zeta `$ field is set to zero ($`C_2`$), the mass ratio stays at the constant value $`\frac{m_\mathrm{\Delta }^{*}}{m_N^{*}}=\frac{g_{\mathrm{\Delta }\sigma }}{g_{N\sigma }}\approx 1.31`$. However, if the nucleon couples to the strange condensate ($`C_1`$), the mass ratio $`\frac{m_\mathrm{\Delta }^{*}}{m_N^{*}}`$ increases with density, due to the different coupling of the nucleon and the $`\mathrm{\Delta }`$ to the strange condensate $`\zeta `$. The $`\mathrm{\Delta }`$ does not feel less scalar attraction, since the coupling to the $`\sigma `$ field is the same for the nonstrange baryons. However, the mass of the $`\mathrm{\Delta }`$ does not drop as fast as in the case of pure $`\sigma `$-coupling, and hence the production of baryon resonances is less favorable at high densities, Fig. 6.
Both coupling constants of the $`\mathrm{\Delta }`$-baryon are freely adjustable in the RMF models . In the chiral model, which incorporates dynamical mass generation, the scalar couplings are fixed by the corresponding vacuum masses. If explicit symmetry breaking for the baryon mass generation is neglected, then the scalar couplings are fixed by the vacuum alone. To investigate the influence of the coupling to the strange condensate $`\zeta `$, small explicit symmetry breaking terms $`m_1,m_2`$ are used. This model behaves similarly to the RMF models with $`r=\frac{g_{\mathrm{\Delta }\sigma }}{g_{N\sigma }}=\frac{m_\mathrm{\Delta }}{m_N}`$.
The remaining problem is the coupling of the resonances to the vector mesons. The coupling constants can be restricted by the requirement that resonances are absent in the ground state of normal nuclear matter. Furthermore, possible secondary minima in the nuclear equation of state should lie above the saturation energy of normal nuclear matter.
QCD sum-rule calculations suggest that the net attraction for $`\mathrm{\Delta }`$‘s in nuclear matter is larger than that of the nucleon. From these constraints a ’window’ of possible parameter sets $`g_{\mathrm{\Delta }\sigma },g_{\mathrm{\Delta }\omega }`$ has been extracted . In the chiral model one then obtains for each type of mass generation only a small region of possible values for $`g_{\mathrm{\Delta }\omega }`$. The $`\mathrm{\Delta }\omega `$ coupling in Fig. 7 is in this range. Pure $`\sigma `$-coupling ($`C_2`$) of the non-strange baryons yields a range of coupling constants $`r_v=\frac{g_{\mathrm{\Delta }\omega }}{g_{N\omega }}`$ between $`0.91<r_v<1`$. For a non-vanishing $`\zeta `$-coupling one obtains $`0.68<r_v<1`$. A smaller value of the ratio $`r_v`$ (less repulsion) leads to higher $`\mathrm{\Delta }`$-probabilities and to softer equations of state. Due to this freedom in the coupling of the resonances to the vector mesons, the equation of state cannot be predicted unambiguously from the chiral model. Here additional input from experiments is necessary to pin down the equation of state.
Finally we address the question whether at very high densities the anti-nucleon potentials become overcritical. That means the potential for anti-nucleons may become larger than $`2m_Nc^2`$ and nucleon–anti-nucleon pairs may be spontaneously emitted . The nucleon and anti-nucleon potentials in the chiral model are shown as a function of density (Fig. 8) for parameter set $`C_1`$ with and without quartic vector self-interaction. The latter is needed to obtain a reasonable compressibility in the chiral model and is in agreement with the principle of naturalness stated in . From that the anti-nucleon potentials are predicted not to turn overcritical at densities below $`12\rho _0`$ in the chiral model (Fig. 8 left). Earlier calculations in RMF-models did not include the higher order vector self-interactions. Then spontaneous anti-nucleon production occurs around $`4`$–$`6\rho _0`$. This also happens in the chiral model if the quartic terms are neglected (Fig. 8 right). The critical density shifts to even higher values if the equation of state is softened by the baryon resonances, as can be seen in Fig. 9. Hence, the chiral mean field model does not predict overcriticality for reasonable densities.
## VI Conclusion
Spin-$`\frac{3}{2}`$-baryon resonances can be included consistently in the nonlinear chiral SU(3)-model. The coupling constants of the baryon resonances to the scalar mesons are fixed by the vacuum masses. Two different ways of mass generation were investigated. It is found that they lead to very different predictions for the resulting equation of state of non-strange nuclear matter. The coupling of the baryon resonances to the vector mesons cannot be fixed. The allowed range of this coupling constant is restricted by requiring that possible density isomers are not absolutely stable, that there are no $`\mathrm{\Delta }`$’s in the nuclear matter ground state, and by the QCD sum-rule induced assumption that the net attraction of $`\mathrm{\Delta }`$’s in nuclear matter is larger than that for nucleons. Nevertheless, the behaviour of non-strange nuclear matter cannot be predicted unambiguously within the chiral $`SU(3)`$-model, so that further experimental input on $`\mathrm{\Delta }`$-production in high density systems and theoretical investigations on how the resonance production influences the observables in these systems (neutron stars, heavy ion collisions) are needed. For both cases calculations are under way .
###### Acknowledgements.
The authors are grateful to C. Beckmann, L. Gerland, I. Mishustin, L. Neise, and S. Pal for fruitful discussions. This work is supported by Deutsche Forschungsgemeinschaft (DFG), Gesellschaft für Schwerionenforschung (GSI), Bundesministerium für Bildung und Forschung (BMBF) and Graduiertenkolleg Theoretische und experimentelle Schwerionenphysik.
## A
The SU(3) matrices of the hadrons are (suppressing the Lorentz indices)
$$X=\frac{1}{\sqrt{2}}\sigma ^a\lambda _a=\left(\begin{array}{ccc}(a_0^0+\sigma )/\sqrt{2}& a_0^+& \kappa ^+\\ a_0^-& (-a_0^0+\sigma )/\sqrt{2}& \kappa ^0\\ \kappa ^-& \overline{\kappa ^0}& \zeta \end{array}\right)$$
$$P=\frac{1}{\sqrt{2}}\pi _a\lambda ^a=\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}\left(\pi ^0+\frac{\eta ^8}{\sqrt{1+2w^2}}\right)& \pi ^+& 2\frac{K^+}{w+1}\\ \pi ^-& \frac{1}{\sqrt{2}}\left(-\pi ^0+\frac{\eta ^8}{\sqrt{1+2w^2}}\right)& 2\frac{K^0}{w+1}\\ 2\frac{K^-}{w+1}& 2\frac{\overline{K}^0}{w+1}& -\frac{\eta ^8\sqrt{2}}{\sqrt{1+2w^2}}\end{array}\right)$$
(A1)
$$V=\frac{1}{\sqrt{2}}v^a\lambda _a=\left(\begin{array}{ccc}(\rho _0^0+\omega )/\sqrt{2}& \rho _0^+& K^+\\ \rho _0^-& (-\rho _0^0+\omega )/\sqrt{2}& K^0\\ K^-& \overline{K^0}& \varphi \end{array}\right)$$
(A2)
$$B=\frac{1}{\sqrt{2}}b^a\lambda _a=\left(\begin{array}{ccc}\frac{\mathrm{\Sigma }^0}{\sqrt{2}}+\frac{\mathrm{\Lambda }^0}{\sqrt{6}}& \mathrm{\Sigma }^+& p\\ \mathrm{\Sigma }^-& -\frac{\mathrm{\Sigma }^0}{\sqrt{2}}+\frac{\mathrm{\Lambda }^0}{\sqrt{6}}& n\\ \mathrm{\Xi }^-& \mathrm{\Xi }^0& -2\frac{\mathrm{\Lambda }^0}{\sqrt{6}}\end{array}\right)$$
(A3)
for the scalar ($`X`$), pseudoscalar ($`P`$), vector ($`V`$) and baryon ($`B`$) fields, and similarly for the axial vector meson fields. A pseudoscalar chiral singlet $`Y=\sqrt{2/3}\,\eta _0\,\mathbb{1}`$ can be added separately, since only an octet is allowed to enter the exponential.
The notation follows the convention of the Particle Data Group (PDG) , though we are aware of the difficulties in directly identifying the scalar mesons with the physical particles . However, note that there is increasing evidence that supports the existence of a low-mass, broad scalar resonance, the $`\sigma (560)`$-meson, as well as a light strange scalar meson, the $`\kappa (900)`$ (see and references therein).
The masses of the various hadrons are generated through their couplings to the scalar condensates, which are produced via spontaneous symmetry breaking in the sector of the scalar fields. Of the 9 scalar mesons in the matrix $`X`$ only the vacuum expectation values of the components proportional to $`\lambda _0`$ and to the hypercharge $`Y\sim \lambda _8`$ are non-vanishing, and the vacuum expectation value of $`X`$ reduces to:
$$X=\frac{1}{\sqrt{2}}(\sigma ^0\lambda _0+\sigma ^8\lambda _8)\equiv \text{diag }\left(\frac{\sigma }{\sqrt{2}},\frac{\sigma }{\sqrt{2}},\zeta \right),$$
(A4)
in order to preserve parity invariance and assuming, for simplicity, $`SU(2)`$ symmetry of the vacuum<sup>*</sup><sup>*</sup>*This implies that isospin breaking effects will not occur, i.e., all hadrons of the same isospin multiplet will have identical masses. The electromagnetic mass breaking is neglected.
# Exactly solvable statistical model for two-way traffic
## I Introduction
The non-equilibrium properties of one-dimensional lattice gases have been studied intensively over the last years . With lattice gases, one can model not only physical situations such as transport in solid ionic conductors or growth processes , but also the traffic flow on roads . Moreover, they can be used to study general features of phase transitions in non-equilibrium systems . For the traffic problem, the simplest model is the completely asymmetric exclusion process (ASEP), where classical hard-core particles hop stochastically, with unit rate, in one direction only . On a ring, one then finds a steady state of product form where all configurations are equally likely. In terms of the density $`\rho `$ of particles, the flux is then given by $`j=\rho (1-\rho )`$ and shows already the qualitative features found also in more sophisticated models, i.e. it vanishes for $`\rho =0,1`$ and has a maximum in between.
An essentially new description of traffic flow was proposed recently by Brankov et al. . In this work, non-intersecting domain-wall lines on a square lattice were interpreted as space-time trajectories of cars. The weight of a trajectory is then obtained from the fugacities for horizontal and vertical moves. The single step, however, has no stochastic interpretation. The problem can be formulated in terms of a five-vertex model which generates these lines and which is exactly solvable since it satisfies the so-called free-fermion condition. The result for the flux $`j`$ is physically reasonable and very similar to that for a variant of the (stochastic) Nagel-Schreckenberg model . In this paper, we show that one can generalize this model to the case of two-way traffic where cars on different lanes interact with each other. The specific effect which we are treating is a tendency to slow down when another car is approaching. In the two-dimensional formulation, this is described by a modification of the fugacities whenever trajectories of oppositely-moving cars cross. One is then led to consider two five-vertex models with a certain coupling between them. It turns out, however, that this coupling only renormalizes the parameters in each subsystem, so that the problem remains solvable as before. One finds that, in this model, the effect of an obstacle, i.e. of a car in the other lane, is relatively weak. While in stochastic models already a certain finite reduction of the hopping rate at one position usually leads to a traffic-jam phenomenon with a region of high density appearing in front of the bottleneck , this happens here only if the fugacity is reduced to zero for a large system. As will be explained, this feature is related to the different weighting of the trajectories in both cases.
In the following, we first describe the model in section 2 and then explain its solution in section 3. Finally, in section 4, we discuss the results and add some further remarks.
## II Model
We first recall the formulation of the original one-way traffic model in . For a square lattice with periodic boundary conditions, the horizontal direction is interpreted as space, the vertical one as time (increasing downwards). Non-intersecting lines running towards the lower right are then drawn on the lattice and viewed as trajectories of right-moving cars. They do not end, so that the number $`N_1`$ of cars is conserved. A horizontal step, representing a move, is given fugacity (weight) $`x_1`$, a vertical one fugacity $`t_1`$. Statistical averages are then obtained from the partition function
$$Z(N_1,x_1,t_1)=\underset{C}{\sum }x_1^{N_x(C)}t_1^{N_t(C)}$$
(1)
where $`N_x(C)`$ and $`N_t(C)`$ are the total numbers of steps in the two directions for a certain configuration $`C`$ of trajectories. These trajectories are generated with their correct weights if at each lattice site the vertices shown in Figure 1 are possible.
Since crossings (vertex 1) are forbidden, one is effectively dealing with a five-vertex model which can be solved exactly via the Bethe ansatz, even for more general weights $`w_5`$, $`w_6`$ . In the present case, free-fermion techniques can be used to obtain the partition function .
For the two-way traffic model, we introduce a second lattice where trajectories run towards the lower left, corresponding to the cars in the other lane. The fugacities are taken to be $`x_2`$, $`t_2`$ and the trajectories are now generated by the vertices in Figure 2.
To formulate the interaction between cars in the two lanes, the indices at the vertices are specified in the following way
The variables $`\alpha ,\beta ,\mathrm{}`$ take the value 1 if a car is present (thick line) and zero otherwise. For all vertices, the so-called ice rule
$$\alpha +\beta =\alpha ^{\prime }+\beta ^{\prime }\quad \text{and}\quad \gamma +\delta =\gamma ^{\prime }+\delta ^{\prime }$$
(2)
holds, which ensures the conservation law for the number of cars, separately for both lanes.
We now imagine that the two lattices are placed above each other and attribute an additional Boltzmann weight
$$v=\mathrm{exp}(-ϵ)=\mathrm{exp}\left(-\frac{h}{2}\left(\alpha \delta +\alpha ^{\prime }\delta ^{\prime }+\beta \gamma ^{\prime }+\beta ^{\prime }\gamma \right)\right)$$
(3)
to adjacent vertices in the two layers. Then each crossing of two trajectories will be weighed with the factor
$$0<r=\mathrm{exp}(-h)<1$$
(4)
To see this, one first notes that $`ϵ=0,v=1`$ if one of the vertices is of type 2, i.e. if there is no car present. The values of $`ϵ`$ in the remaining cases are given in Table 1. It then follows that simple crossings, which involve a pair of vertices of type 3 and 4, lead directly to a factor $`r`$, see Table 1.
Table 1. Interaction $`ϵ`$ between two adjacent vertices in the two layers.
| $`vertex`$ | $`3`$ | $`4`$ | $`5`$ | $`6`$ |
| --- | --- | --- | --- | --- |
| $`3`$ | $`0`$ | $`h`$ | $`h/2`$ | $`h/2`$ |
| $`4`$ | $`h`$ | $`0`$ | $`h/2`$ | $`h/2`$ |
| $`5`$ | $`h/2`$ | $`h/2`$ | $`h/2`$ | $`h/2`$ |
| $`6`$ | $`h/2`$ | $`h/2`$ | $`h/2`$ | $`h/2`$ |
If trajectories meet and run (anti)parallel before they separate again, each of the two branch points contributes a factor $`\sqrt{r}`$. Some examples illustrating such crossings are shown in Figure 3.
One should mention that the choice (3) for the interaction is not unique. The more general form for $`ϵ`$
$$ϵ=A(\alpha \delta ^{\prime }+\beta \gamma )+B(\alpha \delta +\beta ^{\prime }\gamma )+C(\alpha ^{\prime }\delta ^{\prime }+\beta \gamma ^{\prime })+D(\alpha ^{\prime }\delta +\beta ^{\prime }\gamma ^{\prime })$$
(5)
with $`A+B+C+D=h`$, still leads to the same factor $`r=\mathrm{exp}(-h)`$. The individual terms listed in Table 1, however, become more complicated.
In the model defined in this way, one still has the freedom to choose the particle numbers and the fugacities. Thus, by setting $`x_2=0`$, one can immobilize the cars in the second lane and treat in particular the case of one fixed obstacle, which is of special interest.
## III Solution
We now show that the two-lane model can be solved by reducing it to the one-lane problem. The proof follows Ref. where a similar problem was treated. It is based on the ice rule (2) which relates horizontal and vertical bond variables. Suppose that the lattices have $`N`$ columns and $`M`$ rows, and let $`\alpha _{n,m}(\gamma _{n,m})`$ and $`\beta _{n,m}(\delta _{n,m})`$ be the variables to the right and below the vertex $`(n,m)`$, respectively, in the two layers. Then the total interaction is
$$E=\frac{h}{2}\sum _{m=1}^{M}\sum _{n=1}^{N}(\alpha _{n-1,m}\delta _{n,m-1}+\alpha _{n,m}\delta _{n,m}+\beta _{n,m-1}\gamma _{n-1,m}+\beta _{n,m}\gamma _{n,m})$$
With the help of (2), this can be rewritten as
$$E=\frac{h}{2}\left(\sum _{m=1}^{M}(\alpha _{0,m}+\alpha _{N,m})N_2+\sum _{m=1}^{M}(\gamma _{0,m}+\gamma _{N,m})N_1+U(0)-U(M)\right)$$
(6)
where
$$U(m)=\sum _{n=1}^{N-1}\sum _{k=1}^{n}(\beta _{k,m}\delta _{n+1,m}-\beta _{n+1,m}\delta _{k,m})$$
(7)
contains only vertical bonds, while the other two terms in (6) contain only horizontal bonds. Due to the periodic boundary conditions, the difference $`U(0)-U(M)`$ vanishes and one obtains
$$E=hN_2\sum _{m=1}^{M}\alpha _{N,m}+hN_1\sum _{m=1}^{M}\gamma _{N,m}$$
(8)
This can be compared with the effect of a rescaling $`x_1\to x_1e^{\eta _1}`$, $`x_2\to x_2e^{\eta _2}`$ which leads to an extra factor $`\mathrm{exp}(\eta _1(\alpha +\alpha ^{\prime })/2+\eta _2(\gamma +\gamma ^{\prime })/2)`$ for each pair of adjacent vertices. Summed over all sites, this gives
$$E^{\prime }=\sum _{m=1}^{M}\sum _{n=1}^{N}\left(\frac{\eta _1}{2}(\alpha _{n-1,m}+\alpha _{n,m})+\frac{\eta _2}{2}(\gamma _{n,m}+\gamma _{n-1,m})\right)$$
which can be expressed as
$$E^{\prime }=\sum _{m=1}^{M}\frac{\eta _1}{2}N(\alpha _{0,m}+\alpha _{N,m})+V(0)-V(M)+\sum _{m=1}^{M}\frac{\eta _2}{2}N(\gamma _{0,m}+\gamma _{N,m})+\stackrel{~}{V}(0)-\stackrel{~}{V}(M)$$
where
$$V(m)=\sum _{n=1}^{N-1}\sum _{k=1}^{n}\beta _{k,m}-\sum _{n=2}^{N}\sum _{k=n}^{N}\beta _{k,m}$$
(9)
and $`\stackrel{~}{V}(m)`$ is defined analogously with $`\beta \to \delta `$. Using again the periodic boundary conditions, one finds
$$E^{\prime }=\eta _1N\sum _{m=1}^{M}\alpha _{N,m}+\eta _2N\sum _{m=1}^{M}\gamma _{N,m}$$
(10)
which has the same form as $`E`$ in (8). Therefore the interaction has the same effect as a change in the horizontal fugacities if one chooses $`\eta _1=-hN_2/N=-h\rho _2`$ and $`\eta _2=-hN_1/N=-h\rho _1`$ where $`\rho _1`$ and $`\rho _2`$ are the densities of cars in the two lanes. The partition function is then
$$Z(N_1,N_2,x_1,x_2,t_1,t_2,r)=Z(N_1,x_1r^{\rho _2},t_1)Z(N_2,x_2r^{\rho _1},t_2)$$
(11)
This exact formula looks like the result of a mean-field treatment since only the densities in the other layer enter the expressions. One should point out that it also holds for more general choices of the vertex weights in the layers. Then, also the weight $`w_1`$ of vertex 1 has to be renormalized with the same exponential factor.
## IV Results and discussion
One can now make use of the results for the single-lane case . For one lane, the flux per site is equal to the average number of horizontal steps and given by
$$j(\rho ,x)=\frac{N_x}{NM}=\frac{1}{2}\left[\frac{1}{\pi }arccos\left(\frac{c-2x+cx^2}{1-2xc+x^2}\right)-\rho \right]$$
(12)
where $`c=cos(\pi \rho )`$ and $`x<1`$ has been assumed. This is the physical region since the average speed of one car is $`v=x/(1-x)`$. As a function of $`\rho `$, the flux has a maximum at $`\rho =(1/\pi )arccos(x)`$, which shifts from $`\rho =1/2`$ to $`\rho =0`$ as $`x`$ increases.
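A short numerical sketch of Eq. (12), checking the stated position of the maximum (the value $`x=0.4`$ is an arbitrary illustration):

```python
import numpy as np

def flux(rho, x):
    # Eq. (12); rho is the car density, x < 1 the horizontal fugacity
    c = np.cos(np.pi * rho)
    arg = (c - 2.0 * x + c * x**2) / (1.0 - 2.0 * x * c + x**2)
    return 0.5 * (np.arccos(arg) / np.pi - rho)

x = 0.4
rho = np.linspace(1e-3, 1.0 - 1e-3, 2001)
print(f"numerical maximum at rho = {rho[np.argmax(flux(rho, x))]:.3f}")
print(f"arccos(x)/pi            = {np.arccos(x) / np.pi:.3f}")
```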
By inserting $`x_1r^{\rho _2}`$ and $`x_2r^{\rho _1}`$ into (12), one then obtains the fluxes $`j_1`$ and $`j_2`$ in the two-lane case. These do not depend on the motion in the other lane, but only on the density there. Since $`j`$ increases with $`x`$, the interaction factor $`r^{\rho _\alpha }`$ always reduces the flux, as expected. This reduction, however, becomes smaller as the density in the second lane decreases. For the case of only one car one has
$$j_1=j(\rho _1,x_1r^{1/N})$$
(13)
and this approaches the value $`j(\rho _1,x_1)`$ without interaction for large $`N`$. In order to slow down the traffic appreciably, one would need $`h\propto N`$, i.e. an interaction increasing with the system size, so that $`r`$ vanishes exponentially. In other words, a transition occurs only at $`r=0`$.
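The weakness of the blocking effect is easy to see numerically; in the following sketch (with arbitrary values of $`\rho _1`$, $`x_1`$ and $`r`$) the flux with a single parked car in the second lane approaches the unperturbed value already for moderate $`N`$:

```python
import numpy as np

def flux(rho, x):
    # Eq. (12)
    c = np.cos(np.pi * rho)
    arg = (c - 2.0 * x + c * x**2) / (1.0 - 2.0 * x * c + x**2)
    return 0.5 * (np.arccos(arg) / np.pi - rho)

rho1, x1, r = 0.3, 0.4, 0.1                 # arbitrary illustration
for N in (10, 100, 1000):
    j1 = flux(rho1, x1 * r ** (1.0 / N))    # Eq. (13)
    print(f"N = {N:4d}: j1 = {j1:.4f}")
print(f"no obstacle: j  = {flux(rho1, x1):.4f}")
```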
As mentioned, the situation is different for stochastic models. There $`j`$ shows a sudden decrease as soon as the corresponding quantity $`r`$ (describing the reduced crossing probability at a defect) falls below a certain finite value $`r_c`$. This is connected with the appearance of a jam at the defect. In terms of trajectories, the effect can be described as follows. Consider a stochastic model as in  where a particle can move an arbitrary distance horizontally, at each step continuing with probability $`p`$ and stopping with probability $`q=1-p`$. At the defect, the quantities are $`p^{\prime }<p`$ and $`q^{\prime }>q`$. A particle some steps away from the defect will typically move to the bottleneck and then stay there for some time. Due to $`q^{\prime }>q`$, such a trajectory has a higher weight than any other one where it makes stops before and then crosses the defect immediately. The same holds for another particle following it, since this has an effective $`q=1`$ once it has reached the site next to the first one. In this way, the jam builds up as a region of vertical trajectories to the left of the defect.
In the present model, the picture is different. There is no advantage in staying at the blockage; the crossing factor $`r`$ and the weights $`x^kt^l`$ are the same as for paths which approach the defect gradually. Nor is there an advantage for following particles to move next to the preceding one. Therefore no jam builds up. One could say that the model mimics the anticipation of disturbances by producing less densely packed trajectories. But, in shifting $`r_c`$ to zero, it overestimates the effect.
It is also interesting to compare the two models at the operator level. According to , the transfer matrix $`T`$ of the (one-layer) five-vertex model commutes with the operator
$$\mathcal{H}=\sum _n\left(\sigma _n^x\sigma _{n+1}^x+\sigma _n^y\sigma _{n+1}^y+2H\sigma _n^z\right)$$
(14)
where $`H=(1+x^2-t^2)/2x`$, and it is easy to see that the ground state of $`\mathcal{H}`$ gives the maximal eigenvalue of $`T`$. This operator shows very clearly the free-fermion character of the model and also its non-stochastic nature, since the necessary $`\sigma ^z\sigma ^z`$-terms (which are related to the loss processes in the master equation) are missing.
If one uses more general vertex weights $`w_5`$ and $`w_6`$, the operator
$$\mathcal{H}=\sum _n\left(\sigma _n^-\sigma _{n+1}^++\mathrm{\Delta }\,\sigma _n^z\sigma _{n+1}^z\right)$$
(15)
commutes with $`T`$, where $`\mathrm{\Delta }=(w_3w_4-w_5w_6)/(w_2w_4)`$ . Although this contains such terms and has the form of the time-evolution operator for fully asymmetric hopping , the fact that $`\mathrm{\Delta }`$ is not equal to one still makes it different. On the other hand, this model is interesting, because it contains, in the $`x`$–$`t`$ plane, a frozen phase with density $`\rho =1/2`$ , where the trajectories have the form of stairs with steps of unit length in both directions. This corresponds to synchronized traffic with always one empty site between the cars. As this phase gives the highest possible throughput of vehicles and persists for a wide range of parameters $`x,t`$, it represents the analogue of the maximal current phase in stochastic hopping models . In the $`j`$–$`\rho `$ relation, one then finds a cusp at $`\rho =1/2`$. As mentioned above, this model too can be treated in the two-way case. However, apart from half-filling, the blocking properties will be similar to those described above.
## Acknowledgement
V.P. would like to thank the Alexander von Humboldt foundation for financial support.
# ISO observations of the BL Lac object PKS 2155–304

Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) with the participation of ISAS and NASA.
## 1 Introduction
BL Lacertae objects are characterized by an intense and variable non–thermal continuum, that extends from the radio to the gamma–ray band. This is commonly attributed to synchrotron and inverse Compton radiation from a relativistic jet pointing toward the observer (see Ulrich et al. (1997) for a review). In a $`\nu F_\nu `$ representation, their overall spectrum has two broad peaks, one at low energies (IR–X) due to synchrotron radiation and one at higher energies (X–$`\gamma `$), plausibly due to inverse Compton scattering.
PKS 2155–304 is one of the brightest BL Lacs from the optical to the X–ray band with the synchrotron peak in the UV–soft X–ray range, corresponding to the definition of High frequency peak BL Lac objects (HBL) (Padovani & Giommi (1995)), which have the synchrotron peak at the highest frequencies, low luminosity and a small ratio between the $`\gamma `$–ray and the synchrotron peak luminosities. The gamma-ray spectrum is flat ($`\alpha _\gamma \simeq 0.7`$<sup>1</sup><sup>1</sup>1$`\alpha `$ is defined as $`F(\nu )\propto \nu ^{-\alpha }`$. in the 0.1–10 GeV energy range), indicating that the Compton peak is beyond $`10`$ GeV. Recently it has been detected in the TeV band (Chadwick et al. (1999)). Due to these characteristics, PKS 2155–304 has been the target of numerous multiwavelength campaigns (e.g. Edelson et al. (1995) for November 1991, Urry et al. (1997) for May 1994). The study of the simultaneous behavior of the source at different frequencies is important in order to understand the emission mechanisms and to constrain the physical properties of the emitting region.
In 1996 May–June, an intense multiwavelength monitoring was carried out involving optical telescopes, UV, X–ray and $`\gamma `$–ray satellites. Thanks to the Infrared Space Observatory (ISO), for the first time we had simultaneous infrared observations. These are the first observations of this object in the mid– and far–infrared since IRAS. PKS 2155–304 was detected by IRAS in 1983 at 12, 25 and 60 microns with a flux of about 100 mJy in all three bands (Impey & Neugebauer (1988)). In this object the IR emission is at frequencies lower than the synchrotron peak, and the spectral shape in this band can reveal whether there are relevant thermal contributions (e.g. by the host galaxy or by a dusty torus around the nucleus) or whether the emission can be entirely attributed to synchrotron radiation.
Here we present the ISO observations of PKS 2155–304, carried out during the campaign in 1996 May–June, covering a wavelength range from 2.8 to 200 $`\mu `$m. This is complemented by some simultaneous BVR observations from the Dutch 0.9 m ESO telescope. Results from ISO observations of 1996 November and 1997 May are also reported.
The paper is organized as follows: a brief description of the ISO instruments and of the observations is given in section 2 and the results are reported in section 3. In section 4 we present the optical data and in section 5 we compare our results with the theoretical models. PKS 2155–304 is a weak IR source for ISO. Therefore considerable care was taken in data reduction and background subtraction. Details are given in Appendix A.
## 2 ISO observations
PKS 2155–304 was observed with ISO between 1996 May 7 and June 8. Two additional observations were performed on 1996 November 23 and 1997 May 15.
The ISO satellite (Kessler et al. (1996)) is equipped with a 60 cm Ritchey–Chrétien telescope and has four scientific instruments on board. For the PKS 2155–304 observations both the camera ISOCAM and the photometer ISOPHOT were used.
The 32x32 pixel imaging camera ISOCAM (Césarsky et al. (1996)) has two detectors: an InSb CID (Charge Injection Device) for short wavelengths (SW detector; 2.5 – 5.5 $`\mu `$m) and a Si:Ga photoconductor array for longer wavelengths (LW detector; 4 – 17 $`\mu `$m). It is equipped with a set of 21 broad–band filters and a circular variable filter with a higher spectral resolution. The spatial resolution ranges from 1.5″ to 12″ per pixel.
The photometer ISOPHOT (Lemke et al. (1996)) has three subsystems: a photo–polarimeter (PHT–P) (3 – 120 $`\mu `$m), which has 3 detectors, sensitive at different wavelengths, 14 broad-band filters and different apertures, from 5″ to 180″; an imaging photometric camera (PHT–C) (50 – 240 $`\mu `$m), with 3x3 and 2x2 pixel detectors, fields of view of 43.5″x43.5″ and 89.4″x89.4″ per pixel, respectively, and 11 broad–band filters; two low–resolution grating spectrometers (PHT–S) (2.5 – 5 $`\mu `$m and 6 – 12 $`\mu `$m).
In order to determine the variability characteristics in the infrared band, 15 identical observations were performed in the period between 1996 May 7 and June 8, at 4.0, 14.3, 60, 90 and 170 $`\mu `$m (see Tab. 1 for the filter characteristics). From May 13 to May 27 ISO observed PKS 2155–304 almost each day. The observing modes (AOTs, Astronomical Observation Templates) were CAM01 (ISOCAM Observer’s Manual (1994)), in single pointing mode, and PHT22 (ISOPHOT Observer’s Manual (1994)), in rectangular chopped mode (see Appendix A.2).
On 1996 May 27 the source was observed in a large wavelength range (from 2.8 to 200 $`\mu `$m) with 17 different filters in order to determine the infrared spectrum. The same AOTs as before were used, except the observation with the P2\_25 filter, for which the PHT03, still in rectangular chopped mode, was used.
On 1997 May 15 two 3x3 raster scans, centered on PKS 2155–304 (R.A. 21h 58m 52s, Dec –30° 13′ 32″), were performed with the photometric camera PHT–C, at 60 $`\mu `$m and at 180 $`\mu `$m; the distance between two adjacent raster positions was 180″, in order to have an almost complete sky coverage of an area 9′ on a side. This mapping was performed to search for any structure in the cirrus clouds; a non–flat background could compromise a reliable photometry of the source. In this observation the AOT PHT22 was used in staring mode.
The ISOPHOT observation of 1996 May 25 failed because of problems during the instrument activation.
The complete log of the observations is shown in Tabs. 2 and 3.
## 3 ISO results
### 3.1 The light curves
The data and the corresponding light curves at 4.0 (SW5 filter), 14.3 (LW3), 60 (C1\_60) and 90 $`\mu `$m (C1\_90) are reported in Tabs. 4 and 5 and shown in Figs. 1 and 2. The discussion on the data analysis and error evaluation is given in Appendix A. At 170 $`\mu `$m (C2\_160), the source is not detected: the three sigma upper limit at this wavelength is 1235 mJy (see Fig. 3).
When the purpose is to verify whether the flux is variable, the contribution of the pixel responsivity to the absolute error can be neglected and a smaller uncertainty can be associated with the relative flux values of the light curves. However, this can be done only for the two light curves of the photometer (see Tab. 5), due to the way the photometric error was determined.
The relative errors on the flux are, in any case, quite large, about 10 – 12% for the camera observations and from 20 to more than 50% for the photometer (see Appendix A). Within these uncertainties the light curves show no evidence of variability. To quantify this statement, we fitted the light curves with a constant term and the reduced chi–square values were computed in order to test the goodness of the fits. We first fitted the values of the best sampled period, from 1996 May 13 to May 27. The results are $`54.8\pm 1.8`$ mJy at 4.0 $`\mu `$m, $`90.8\pm 3.2`$ mJy at 14.3 $`\mu `$m, $`315\pm 27`$ mJy at 60 $`\mu `$m and $`250\pm 34`$ mJy at 90 $`\mu `$m. To fit the data at 4.0 $`\mu `$m the lower limits were neglected. We then repeated the fits, taking the mean of the above–mentioned period and adding the other data, to look for possible longer–term variability. All the fits are acceptable within a confidence level of 95%. This means that PKS 2155–304 showed no evidence of variability at these wavelengths in the observed period.
However, the large uncertainty on the flux can hide smaller variations. We calculated the mean relative error and obtained 3 sigma limits for the lowest detectable variations of 32%, 36%, 76% and 132% at 4.0, 14.3, 60 and 90 $`\mu `$m, respectively.
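For reference, the constant fit used above is a standard error-weighted mean; a minimal sketch follows (the numbers are invented stand-ins, not the tabulated fluxes of Tab. 5):

```python
import numpy as np

def constant_fit(flux, err):
    # error-weighted mean and reduced chi-square for a constant model
    w = 1.0 / err**2
    mean = np.sum(w * flux) / np.sum(w)
    chi2_r = np.sum(((flux - mean) / err) ** 2) / (flux.size - 1)
    return mean, chi2_r

flux = np.array([310.0, 330.0, 290.0, 335.0, 300.0])   # invented values, mJy
err = np.array([65.0, 70.0, 60.0, 70.0, 65.0])
mean, chi2_r = constant_fit(flux, err)
print(f"mean = {mean:.0f} mJy, reduced chi-square = {chi2_r:.2f}")
```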
### 3.2 The infrared spectrum
The infrared spectral shape of PKS 2155–304 was sampled, using 16 filters, from 2.8 to 170 $`\mu `$m. The photometer filter C2\_200 was not considered reliable enough and its observation was discarded. The flux values are given in Tab. 6 and the spectrum is shown in Fig. 3, in a $`\mathrm{log}\nu \mathrm{log}\nu F(\nu )`$ representation.
Fig. 3 also shows the result of a power law fit, which gives an energy spectral index of $`\alpha =0.40\pm 0.06`$. The lower and upper limits were not considered in the fit; the reduced chi–square is $`\chi _r^2=1.31`$, with 9 d.o.f., which gives a confidence level of 77.4%.
From each simultaneous pair of flux values of the SW5 and LW3 light curves, we obtained the spectral indices between 4.0 and 14.3 $`\mu `$m as $`\alpha _i=-\mathrm{log}(f_{SW5,i}/f_{LW3,i})/\mathrm{log}(\nu _{SW5}/\nu _{LW3})`$. The mean value is $`\alpha =0.403\pm 0.017`$, which is fully consistent with the index derived using 11 filters on a larger IR band.
The fit with a constant term of the spectral indices $`\alpha _i`$ vs. time has a reduced chi–square of 0.26, with 13 d.o.f., which corresponds to a confidence level of less than 1%. This indicates that the source showed no spectral variability in the 4.0 – 14.3 $`\mu `$m range, during the observed period.
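A minimal sketch of this two-point estimate, with $`F(\nu )\propto \nu ^{-\alpha }`$ and the mean SW5 and LW3 fluxes quoted in section 3.1:

```python
import numpy as np

def spectral_index(f1, f2, lam1, lam2):
    # alpha from two fluxes, with F(nu) ~ nu^(-alpha) and nu = c / lambda
    return -np.log(f1 / f2) / np.log(lam2 / lam1)

# mean SW5 (4.0 micron) and LW3 (14.3 micron) fluxes from section 3.1, in mJy
print(f"alpha = {spectral_index(54.8, 90.8, 4.0, 14.3):.3f}")
```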
## 4 Optical observations
### 4.1 Observations and data reduction
The optical data were obtained using the Dutch 0.9 m ESO telescope at La Silla, Chile, between May 17 and 27 1996. The telescope was equipped with a TEK CCD 512x512 pixels detector and Bessel BVR filters were used for the observations. The pixel size is 27 $`\mu `$m and the projected pixel size in the plane of the sky is 0.442 arcsec, providing a field of view of 3.77 x 3.77.
The original frames were flat fielded and bias corrected using MIDAS package and photometry was performed using the Robin procedure, developed at the Torino Observatory, Italy, by L. Lanteri. This procedure fits the PSF with a circular gaussian and evaluates the background level by fitting it with a 1st order polynomial surface. The magnitude of the object and the error are derived by comparison with reference stars in the same field of view. The typical photometric error is $`0.02`$ mag in all bands.
### 4.2 Results
The light curves (Tab. 7 and Fig. 4) show an increase of luminosity of about 20% ($`0.2`$–$`0.25`$ mag) between the starting low level of May 17–18 and the maximum of May 24. The flux then decreases during the last two days. The behavior is very similar in all three filters.
Assuming that the optical spectrum is described by a power law, we calculated the mean spectral indices using the simultaneous data pairs of the light curves. The results are $`\alpha _{RV}=0.62\pm 0.02`$ and $`\alpha _{VB}=0.60\pm 0.02`$ and indicate that the optical spectrum is steeper than the IR one.
## 5 Discussion
### 5.1 IR flux and spectral variability
The ISO light curves of May–June 1996 show that the time variability of PKS 2155–304 in the mid– and far–infrared bands is very low or even absent. The flux did not vary significantly in 1996 November and in 1997 May, one year later, and is quite similar to the 1983 IRAS state (Impey & Neugebauer (1988)) (Fig. 3), except at 60 $`\mu `$m, where the IRAS flux seems significantly lower. This agreement could support the idea that the infrared flux level of this source is rather stable. We have to wait for future satellite missions to test this statement.
The infrared spectrum from 2.8 to 100 $`\mu `$m is well fitted by a single power law. This is a typical signature of synchrotron radiation, which can explain the whole emission in this wavelength range, excluding important contributions of thermal sources.
The variability in the optical bands is small too, while the simultaneous RXTE light curve (Urry et al. (1998), Sambruna et al. (1999)) shows, on the contrary, strong and fast variability at energies of 2–20 keV: the flux varied by a factor 2 on a timescale shorter than a day. This seems to be a common behavior in blazars, for which there is a more pronounced variability at frequencies above the synchrotron peak (Ulrich et al. (1997)).
### 5.2 Contribution of the host galaxy to the IR flux
The absence of variability could also be explained by the contribution, in the IR, of a steady component, such as the host galaxy. The host galaxy of PKS 2155–304 is a large elliptical which is well resolved in near infrared images (Kotilainen et al. (1998)), but the pixel field of view of the ISOCAM camera (3″ or 6″) is too big to resolve it and its contribution is integrated in the flux of the active nucleus.
The magnitude of the host galaxy in the $`H`$ band is $`m_H=12.4`$ (Kotilainen et al. (1998)). The color of a typical elliptical at $`z=0.11`$ is $`B-H`$=4.6 (Buzzoni (1995)), from which we get $`m_B=17.0`$, which corresponds to a flux $`f_B`$ = 0.7 mJy. Mazzei & De Zotti (1994) calculated the flux ratio between the IRAS and the $`B`$ bands for a sample of 47 elliptical galaxies: their results are $`\mathrm{log}f_{12}/f_B=0.01\pm 0.05`$, $`\mathrm{log}f_{25}/f_B=-0.70\pm 0.32`$, $`\mathrm{log}f_{60}/f_B=-0.22\pm 0.155`$, $`\mathrm{log}f_{100}/f_B=0.25\pm 0.10`$. From these relations we can estimate the host galaxy fluxes in the far–IR at 12, 25, 60 and 100 $`\mu `$m: we have $`f_{12}=0.7`$ mJy, $`f_{25}=0.1`$ mJy, $`f_{60}=0.4`$ mJy, $`f_{100}=1.2`$ mJy. If we compare these values with those of Tab. 6, we see that they are less than 1% of the active nucleus flux, and much less than the uncertainties. We thus conclude that the contribution of the host galaxy to the ISO far–IR flux is negligible.
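This estimate can be reproduced with a few lines of arithmetic; the B-band zero point of 4260 Jy used to convert $`m_B`$ into a flux is an assumption of this sketch (a Bessell-type calibration), not a value taken from the text:

```python
# hypothetical B-band zero point: ~4260 Jy for m_B = 0 (Bessell-type)
f_B = 4260.0e3 * 10.0 ** (-17.0 / 2.5)            # mJy, for m_B = 17.0
print(f"f_B    = {f_B:.2f} mJy")
for lam, logratio in ((12, 0.01), (25, -0.70), (60, -0.22), (100, 0.25)):
    print(f"f_{lam:<4d}= {f_B * 10.0**logratio:.2f} mJy")
```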
This fact can be also inferred from the spectral energy distribution (SED), built with the simultaneous data of May 1996 (Fig. 5), that shows that the ISO data lie on the interpolation between radio and optical spectra.
### 5.3 Synchrotron self–absorption
The observed IR spectrum is rather flat, and one can wonder if this is due to a partially opaque emission, i.e. if we have, in the IR, the superposition of components with different self–absorption frequencies, as for the flat radio spectra.
To show that this is $`not`$ the case, we calculate the self–absorption frequency assuming that the IR radiation originates in the same compact region responsible for most of the emission, including the strongly variable X–ray flux. This is a conservative assumption, since the more compact the region, the larger the self–absorption frequency. In the case of an isotropic population of relativistic electrons with a power–law distribution $`N(\gamma )=K\gamma ^{-p}`$, the self–absorption frequency is given by (e.g. Krolik 1999)
$$\nu _t=\frac{\delta \nu _B}{1+z}\left[\frac{3^{\frac{p}{2}}\pi \sqrt{3\pi }}{4}\frac{\mathrm{\Gamma }(\frac{3p+22}{12})\mathrm{\Gamma }(\frac{3p+2}{12})\mathrm{\Gamma }(\frac{p+6}{4})}{\mathrm{\Gamma }(\frac{p+8}{4})}\frac{e\tau }{B\sigma _T}\right]^{\frac{2}{p+4}},$$
where $`\mathrm{\Gamma }`$ is the gamma function, $`\nu _B`$ is the cyclotron frequency, $`\delta `$ is the beaming factor, $`R`$ is the size of the source, $`\tau \equiv \sigma _TKR`$, and $`p`$ is the slope of the electron distribution appropriate for those electrons radiating at the self–absorption energy. In the homogeneous synchrotron self–Compton model, the optical depth $`\tau `$ is approximately the ratio of the Compton and synchrotron flux at the same frequency. This ratio can be estimated from the SED (Fig. 5), where the Compton flux is obtained by extending at low frequencies the Compton spectrum with the same spectral index of the synchrotron curve. The upper limit for the $`\gamma `$–ray emission in 1996 May corresponds to an upper limit for the value of the optical depth of $`\tau \lesssim 10^{-5}`$. From the ISO spectrum, we have $`p=2\alpha +1=1.8`$. Although we cannot a priori determine the other two parameters, namely $`B`$ and $`\delta `$, a reasonable estimate can be derived through the broad band model fitting. In particular if we adopt the values derived by Tavecchio et al. (1998), $`B=0.25`$ G and $`\delta \approx 30`$, we get $`\nu _t\approx 1.4\times 10^{11}`$ Hz. For less extreme values of $`\delta `$, $`\nu _t`$ becomes smaller, while much larger values of the magnetic field (making $`\nu _t`$ increase) are implausible, if the significant $`\gamma `$–ray emission is due to the self–Compton process, which requires the source not to be strongly magnetically dominated. The frequency of self–absorption is thus significantly lower than the IR frequencies, implying that the IR emission is completely thin.
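A numerical sketch of this estimate, in Gaussian cgs units; the form $`\nu _B=eB/(2\pi m_ec)`$ of the cyclotron frequency and the redshift $`z=0.116`$ are assumptions of the sketch:

```python
import numpy as np
from scipy.special import gamma as G

e, m_e, c, sigma_T = 4.803e-10, 9.109e-28, 2.998e10, 6.652e-25  # cgs

def nu_t(p, tau, B, delta, z):
    nu_B = e * B / (2.0 * np.pi * m_e * c)        # assumed cyclotron frequency
    const = (3.0 ** (p / 2.0) * np.pi * np.sqrt(3.0 * np.pi) / 4.0
             * G((3.0 * p + 22.0) / 12.0) * G((3.0 * p + 2.0) / 12.0)
             * G((p + 6.0) / 4.0) / G((p + 8.0) / 4.0))
    return (delta * nu_B / (1.0 + z)
            * (const * e * tau / (B * sigma_T)) ** (2.0 / (p + 4.0)))

# p = 2 alpha + 1 = 1.8, tau ~ 1e-5, B = 0.25 G, delta ~ 30
print(f"nu_t ~ {nu_t(1.8, 1.0e-5, 0.25, 30.0, 0.116):.1e} Hz")
```

Within the uncertainties of the adopted parameters, this reproduces the order of magnitude quoted above.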
### 5.4 Spectral energy distribution
In Fig. 5 we show the SED of PKS 2155–304 during our multiwavelength campaign, from the far IR to the $`\gamma `$–ray band. We also collected other, not simultaneous, data from the literature, especially in the X–ray band, to compare our overall spectrum with previous observations. As can be seen, our IR data fill a hole in the SED and, together with our optical results, contribute to a precise definition of the shape of the synchrotron peak. It is remarkable that although the X–ray state during our campaign was very high (one of the highest ever seen), the optical emission was not particularly bright. Also the upper limit in the $`\gamma `$–ray band testifies that the source was not bright in this band.
All this can be explained assuming that the X–ray flux is due to the steep tail of an electron population distributed in energy as a broken power law. The first part of this distribution is flat and steadier than the high energy, steeper part. In this case, without changing significantly the bolometric luminosity, large flux variations are possible above the synchrotron (and the Compton) peak. An electron distribution with these characteristics can be obtained by continuous injection and rapid cooling (see e.g. Ghisellini et al. (1998)). In fact, if the electrons are injected at a rate $`Q(\gamma )\propto \gamma ^{-s}`$ between $`\gamma _1`$ and $`\gamma _2`$, the steady particle distribution will be $`N(\gamma )\propto \gamma ^{-(s+1)}`$ above $`\gamma _1`$, and $`\propto \gamma ^{-2}`$ below, until radiation losses dominate the particle escape or other cooling terms (e.g. adiabatic expansion). Electrons with energy $`\gamma _1m_ec^2`$ are the ones responsible for the emission at the synchrotron and Compton peaks (as long as the scattering process is in the Thomson limit). Since it is possible to change $`s`$ without changing the total injected power, large flux variations above the peak are compatible with only minor changes below. This model also predicts that the spectrum below the peak has a slope $`\alpha =0.5`$, which is not far from what we have observed in the far IR.
###### Acknowledgements.
We would like to thank the ISOCAM team and, in particular, Marc Sauvage for his help with CIA, the ISOCAM data reduction procedure and with the installation of the software at OAB. We also thank Giuseppe Massone e Roberto Casalegno, who made the optical observations at La Silla.
## Appendix A Data reduction
### A.1 ISOCAM
The observations were processed with CIA<sup>2</sup><sup>2</sup>2ISOCAM Interactive Analysis, CIA, is a joint development by the ESA Astrophysics Division and the ISOCAM Consortium led by the ISOCAM PI, C. Césarsky, Direction des Sciences de la Matière, C.E.A., France. v2.0.
Each observation consisted of a sequence of frames, which had an elementary integration time of about 2 s. In this way the temporal behaviour of each pixel was known.
First, the dark current was subtracted from each raw frame, using the dark images present in the software library, flagging the bad pixels of SW and LW detectors.
The impact of charged particles (glitches) on the detectors creates spikes in the pixel signal curves. To remove these spurious signals, we first used the Multiresolution Median Transform method (Starck et al. (1996)), then every frame was inspected to make sure that the number of suppressed noise signals was negligible and finally a manual deglitching operation was done to detect the remaining glitches and flag them. Some glitches caused a change in the pixel sensitivity: in this case we flagged the pixel in all readouts after the glitch.
The library dark images were not good enough to remove all the effects of the dark current: the signals in rows and columns showed a sawtooth structure, which was eliminated using the Fast Fourier Transform technique (Starck & Pantin (1996)).
The response of the detector pixels to a change in the incident flux is not immediate and the signal reaches stabilization only after some time. This time interval depends on the initial and final flux values and on the number of readouts (ISOCAM Observer’s Manual (1994)). Therefore, the time sequence of a pixel signal shows, after a change in the incident flux, an upward or downward transient behaviour. At the beginning of every observation, after a certain number of frames, the signal should reach the stable value. As this ideal situation could not always be achieved, CIA provides different routines to overcome this problem and apply the transient correction. These routines use different models to fit the signal curves, in order to identify the stable value.
In the SW5 observations, the photons coming from PKS 2155–304 fell mainly on one or two pixels, whose signals showed an upward transient behaviour that never reached stabilization. On the contrary, the background, being very low, was stabilized. No transient correction routine was able to adequately fit the source signals, either underestimating or overestimating the stable flux. Observing the signal curves, we noticed that the behaviour of the first part of the curves was far from the expected converging trends that are used in the models of the correction routines, while the remaining part of the curves seemed to be well described by a converging exponential trend. So, after having discarded the starting readouts, we fitted the signal with a simple exponential model $`s_{fit}=s_{\mathrm{\infty }}+ce^{-t/\tau }`$, where the optimized parameters are $`c`$, $`\tau `$ and $`s_{\mathrm{\infty }}`$, which represents the stable signal. We chose the fit which showed a reasonable result and optimized the determination coefficient $`R^2=1-\frac{\sum _i(s_i-s_{fit})^2}{\sum _i(s_i-\overline{s})^2}`$, where $`s_i`$ are the measured signals and $`\overline{s}`$ is the mean of the part we considered. In three cases, the results were not acceptable and we could define only lower limits, as the upward transients had not reached stabilization.
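As an illustration of this fitting step, a minimal sketch with synthetic readouts (the model and the definition of $`R^2`$ are those given above; all numbers are invented for the example):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, s_inf, c, tau):
    # s_fit(t) = s_inf + c exp(-t / tau); c < 0 for an upward transient
    return s_inf + c * np.exp(-t / tau)

rng = np.random.default_rng(0)
t = np.arange(0.0, 200.0, 2.0)                # ~2 s elementary readouts
s = model(t, 10.0, -4.0, 60.0) + rng.normal(0.0, 0.2, t.size)

popt, _ = curve_fit(model, t, s, p0=(s[-1], s[0] - s[-1], 50.0))
resid = s - model(t, *popt)
r2 = 1.0 - np.sum(resid**2) / np.sum((s - s.mean())**2)
print(f"s_inf = {popt[0]:.2f}  R^2 = {r2:.3f}")
```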
For the transient correction of the LW3 observations, the model developed at the Institut d’Astrophysique Spatiale (IAS Model) (Abergel et al. (1996)) has been used. As the corrected curves attained stable values in the second half only, we did not use the first half of the frames.
In the spectral observation of May 27, the uninterrupted sequence of filters used created either upward or downward transients and the stabilization of the source signal was reached just in few cases. The five observations made with the SW channel were corrected using the same method as the SW5 ones, except for the SW11 filter, in which the stabilization was reached for all pixels. In this case, we just discarded the first half of the 162 frames. In the SW2 filter data, at the end of the observation, the source signal was so far from stabilization that we could define only a lower limit. The five observations made with the LW channel were corrected using the IAS model. As this model takes into account all the past illumination history, we fitted a unique curve that was built linking together all the LW filters data. This method worked fine for two filters only (LW8 and LW9), while for the other three filters again we defined lower limits.
We averaged all the frames neglecting the flagged signal values and then the images were flat fielded, using the library flat fields of CIA.
The total signal of the source was computed integrating the values of the signal in a box centered on the source and subtracting the normalized background obtained in a ring of 1 pixel width around the box. The boxes had dimensions ranging from 3x3 to 7x7 pixels, depending on the filter and on the pixel field of view (pfov). The results were colour corrected and divided by the point spread function (PSF) fraction falling in the box. This fraction also depends on filters and pfov. To compute it, we extracted from the library, for each combination of filter and pfov, the nine PSF images centered more or less on the same pixels as PKS 2155–304. For calibration requirements, in each PSF image the centroid of the source was placed in a slightly different position inside the same pixel. As we do not know with enough accuracy the position of the centroid of PKS 2155–304 in the ISOCAM images, the nine PSFs were averaged and the result was normalized. The PSF correction was calculated by summing the signal of the pixels in a box of the same dimension as that in which we extracted the source signal. For the LW detector, a further correction factor was applied to take into account the flux of the point–like source that falls outside the detector (Okumura (1997)). For the SW channel, we adopted for all filters the SW5 PSF, because, along with SW1, it was the only one present in the calibration library; however, the error we introduced can only be of a few percent.
Finally, the source signal was converted to flux density using the coefficients in Blommaert (1997).
To compute the photometric error we divided the uncertainty sources into two parts: the first one took into account the dark current subtraction, deglitching, flat fielding operations and signal to flux conversion, while the second one considered the transient correction. The first group of error sources is derived from the Automatic Analysis Results (AAR; OLP v7.0 for the light curve data, OLP v6.3.2 for the spectrum data). The source flux values $`f_{AAR}`$ given by the AAR are not reliable because the transient correction is not performed, but the AAR absolute flux errors $`\sigma _{AAR}`$ are a good estimate of the first group of errors (the AAR fluxes are given in Tabs. 4 and 6). We assumed that the fluxes $`f_{src}`$ that we derived have the same relative error $`\sigma _{rel}=\sigma _{AAR}/f_{AAR}`$. Thus, for our fluxes this part of the error is $`\sigma _f=\sigma _{rel}f`$, which accounts for all the uncertainty sources but the transient correction. We estimated that the error due to the transient correction is of the order of 10%, which is the rounded maximum error on the stable signal $`s_{\mathrm{\infty }}`$, obtaining a total error of $`\sigma =\sqrt{\sigma _f^2+\sigma _{tr}^2}`$. We assumed then a $`\sigma _{tr}`$ of 10% for all our measurements (20% for SW4 and SW10 filters).
### A.2 ISOPHOT
The observations were done in rectangular chopped mode: the observed field of view switches alternately between the source and a 180″ distant off–source position, which is necessary in order to measure the background level. The chopping direction was along the satellite Z-axis, which was slowly rotating by about one degree per day. Thus the background was sampled each time in a different field of the sky, and a raster map was performed to check the stability of the background all around the source. The standard deviation of the background flux measured in the central pixel of the C100 detector, in the eight off-source positions of the scan, is 37 mJy. This value is much smaller than the error of the source flux (see Tab. 5). Such a small background fluctuation would merely increase the scatter of the source flux; in any case, our results are compatible with the absence of variability (see section 3).
Each observation of an astronomical target was immediately followed by a Fine Calibration Source (FCS) measurement, using internal calibration sources. These measurements were made in order to determine the detector responsivity, which is necessary to compute the target flux.
Each observation consisted of a series of integration ramps, each made up of the sequence of voltage readouts between two destructive readouts.
The observations were processed with PIA v7.0 (Gabriel et al. (1997)). (ISOPHOT Interactive Analysis (PIA) is a joint development by the ESA Astrophysics Division and the ISOPHOT Consortium. The ISOPHOT Consortium is led by the Max–Planck–Institute for Astronomy, Heidelberg.)
PIA separates the operations to be performed on the data into different levels: at each level PIA creates a data structure on which it operates, and this structure is named according to the properties of the data. The first part of the data analysis was common to all the observations; the procedures then diverged according to the characteristics of the observation (whether or not it was chopped, and whether the detector was receiving photons from the astronomical target or from the FCS).
At the beginning, PIA automatically converted the digital data from telemetry into meaningful physical units and created the data structure called Edited Raw Data (ERD). At the ERD level, some initial readouts and the last readout of each ramp were discarded, because they are disturbed by the voltage resetting; we also manually discarded the part of a ramp before or after a glitch (which causes a sudden jump of the readout value) in the cases where most of the ramp was unaffected and the glitch did not modify the detector responsivity. A correction for the non–linear responsivity of the detector was applied, using special calibration files. Then, each ramp was fitted by a first-order polynomial model. A signal (in V s<sup>-1</sup>) was obtained from the slope of every ramp: the slope is proportional to the incident power. At the Signal per Ramp Data (SRD) level, the first half of the signals per chopper plateau were discarded, because of stabilization problems. As the signal value depends on the integration time, a correction factor was applied and the signal was normalized to an integration time of 1/4 s. The dark current was subtracted using the PIA calibration files, which take into account the satellite position in the orbit. An algorithm was applied to detect and discard signals that were anomalously high because of glitches; then, the signals of each chopper plateau were averaged. At the Signal per Chopper Plateau (SCP) level, the responsivity of each detector pixel was computed by taking the median of the FCS2 signals of the calibration measurements; then, the vignetting correction was performed on the target observations. In the chopped measurements, the background, calculated at the off–source position, was subtracted to obtain the source signal.
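As an illustration of the ramp-fitting step (a sketch, not PIA code; the synthetic ramp below is an assumption for demonstration only):

```python
import numpy as np

def ramp_slope(times, readouts):
    """Fit one integration ramp with a first-order polynomial; the
    slope (V/s) is proportional to the incident power."""
    slope, _intercept = np.polyfit(times, readouts, deg=1)
    return slope

t = np.linspace(0.0, 0.25, 32)                 # one 1/4 s ramp
v = 0.8 * t + 0.001 * np.random.randn(t.size)  # noisy linear readouts
print(f"signal = {ramp_slope(t, v):.3f} V/s")
```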
As for the camera, the response of the photometer detectors shows some delay after a change in the incident flux. This effect causes losses in the signal values measured in the chopped measurements, so a correction factor was applied. The signal was finally converted into power, using the responsivity obtained from the FCS measurement.
In the observations performed with the 3x3 pixel C100 detector, only the central pixel was used to compute the source flux density: since most of the Airy disk of a point-like source centered on a pixel falls within that same pixel (69% for C1\_60 and 61% for C1\_90), using the outer pixels adds more noise than signal. The source flux density is defined as $`F_\lambda =P_{src}/(C1f_{psf})`$, where $`P_{src}`$ is the incident power, $`C1`$ is a conversion factor for each filter (as given in the PIA calibration file pfluxconv.fits) and $`f_{psf}`$ is the fraction of the PSF that falls on the pixel considered when the source is located at its centre (ISOPHOT Observer’s Manual (1994), Tabs. 2 and 4).
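A hedged sketch of this conversion (the numerical values of $`C1`$ and $`f_{psf}`$ below are placeholders, not the calibration numbers from pfluxconv.fits or the Observer’s Manual):

```python
def flux_density(p_src, c1, f_psf):
    """F_lambda = P_src / (C1 * f_psf), as defined above."""
    return p_src / (c1 * f_psf)

# Illustrative call for a C100 central-pixel measurement:
print(flux_density(p_src=3.2e-15, c1=4.0e-14, f_psf=0.69))
```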
The absolute photometric error was computed by PIA during the data reduction process, and took into account the uncertainty in the determination of the slope of the ramp and the errors associated with the other correction operations performed.
# Quantum cryptography with 3-state systems
## Abstract
We consider quantum cryptographic schemes where the carriers of information are 3-state particles. One protocol uses four mutually unbiased bases and appears to provide better security than obtainable with 2-state carriers. Another possible method allows quantum states to belong to more than one basis. The security is not better, but many curious features arise.
When Samuel Morse invented the telegraph, he devised for it an alphabet consisting of three symbols: dash, dot, and space. More modern communication methods use binary signals, conventionally called 0 and 1. Information theory, whose initial goal was to improve communication efficiency, naturally introduced binary digits (bits) as its accounting units. However, the theory can easily be reformulated in terms of ternary digits (trits) 0, 1, 2, or larger sets of symbols. For example, instead of bytes (sets of 8 bits) representing 256 ordinary characters, we would have “trytes” (5 trits) for 243 characters. An ordinary text would thus be encoded into a string of trits. If we wish to encrypt the latter, this can be done by adding to it (modulo 3) a random string, called the key, known only to legitimate users. Decrypting is then performed by subtracting that key (modulo 3).
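To make this classical layer concrete, here is a minimal sketch of trit encryption and decryption; the function names and the sample strings are illustrative, not part of any standard library.

```python
import secrets

# One-time-pad-style trit encryption: add the key modulo 3 to encrypt,
# subtract it modulo 3 to decrypt.
def encrypt(trits, key):
    return [(m + k) % 3 for m, k in zip(trits, key)]

def decrypt(cipher, key):
    return [(c - k) % 3 for c, k in zip(cipher, key)]

message = [0, 2, 1, 1, 0]                      # a string of trits
key = [secrets.randbelow(3) for _ in message]  # random secret key
assert decrypt(encrypt(message, key), key) == message
```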
The aim of quantum cryptography is to generate a secret key by using quantum carriers for the initial communication between distant parties (conventionally called Alice and Bob). The simplest methods use 2-state systems, such as polarized photons. Orthogonal states represent bit values 0 and 1. One may use either two or three orthogonal bases, chosen in such a way that any basis vectors $`|e_j\rangle `$ and $`|e_\mu \rangle `$ belonging to different bases satisfy $`|\langle e_j|e_\mu \rangle |^2=1/2`$. Such bases are called mutually unbiased. As a consequence, if an eavesdropper (Eve) uses the wrong basis, she gets no information at all and causes maximal disturbance (error rate 1/2) to the transmission, thereby revealing her presence.
In this Letter, we consider 3-state systems as the quantum carriers for cryptographic key distribution. For example, one may use “biphotons”, namely photon pairs in symmetric Fock states $`|0,2\rangle `$, $`|2,0\rangle `$, and $`|1,1\rangle `$. Biphotons can easily be produced with present technology, and detecting arbitrary linear combinations of them will probably be possible soon. (Another possibility would be to use four states of a pair of photons, but here we consider only 3-state systems.)
Following the method of refs. , we introduce four mutually unbiased bases. Let $`|\alpha \rangle `$, $`|\beta \rangle `$, and $`|\gamma \rangle `$ be the unit vectors of one of the bases. Another basis is obtained by a discrete Fourier transform,
$$\begin{array}{c}|\alpha ^{\prime }\rangle =(|\alpha \rangle +|\beta \rangle +|\gamma \rangle )/\sqrt{3},\hfill \\ |\beta ^{\prime }\rangle =(|\alpha \rangle +e^{2\pi i/3}|\beta \rangle +e^{-2\pi i/3}|\gamma \rangle )/\sqrt{3},\hfill \\ |\gamma ^{\prime }\rangle =(|\alpha \rangle +e^{-2\pi i/3}|\beta \rangle +e^{2\pi i/3}|\gamma \rangle )/\sqrt{3}.\hfill \end{array}$$
(1)
The two other bases can be taken as
$$(e^{2\pi i/3}|\alpha \rangle +|\beta \rangle +|\gamma \rangle )/\sqrt{3}\quad \text{and cyclic perm.},$$
(2)
and
$$(e^{-2\pi i/3}|\alpha \rangle +|\beta \rangle +|\gamma \rangle )/\sqrt{3}\quad \text{and cyclic perm.}.$$
(3)
Any basis vectors $`|e_j\rangle `$ and $`|e_\mu \rangle `$ belonging to different bases now satisfy $`|\langle e_j|e_\mu \rangle |^2=1/3`$.
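The unbiasedness condition can be checked numerically; the following sketch assumes the sign convention $`e^{\pm 2\pi i/3}`$ adopted in Eqs. (1)–(3) and is an illustration, not code from the original work.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
I3 = np.eye(3, dtype=complex)                      # |alpha>, |beta>, |gamma>
F = np.array([[1, 1, 1],
              [1, w, w.conjugate()],
              [1, w.conjugate(), w]]) / np.sqrt(3)  # Eq. (1)
B3 = np.array([[w, 1, 1], [1, w, 1], [1, 1, w]]) / np.sqrt(3)  # Eq. (2)
B4 = B3.conjugate()                                             # Eq. (3)

# Rows are the basis vectors; check |<e_j|e_mu>|^2 = 1/3 across bases.
bases = [I3, F, B3, B4]
for a in range(4):
    for b in range(a + 1, 4):
        overlaps = np.abs(bases[a].conjugate() @ bases[b].T) ** 2
        assert np.allclose(overlaps, 1 / 3)
print("the four bases are mutually unbiased")
```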
The protocol for establishing a secret key is the usual one. Alice randomly chooses one of the 12 vectors and sends to Bob a signal whose quantum state is represented by that vector. Bob randomly chooses one of the four bases and “measures” the signal (that is, Bob tests whether the signal is one of the basis vectors). Having done that, Bob publicly reveals which basis he chose, but not the result he obtained. Alice then reveals whether her vector belongs to that basis. If it does, Alice and Bob share the knowledge of one trit. If it does not, that transmission was useless. This procedure is repeated until Alice and Bob have obtained a long enough key. They will then have to sacrifice some of the trits for error correction and privacy amplification (we shall not discuss these points, which are the same as in all cryptographic protocols, except that we have to use trits instead of bits, and therefore parity checks become triality checks, that is, sums modulo 3).
Consider the simplest eavesdropping strategy: Eve intercepts a particle, measures it, and resends to Bob the state that she found. In 3/4 of the cases, she uses a wrong basis, gets no information, and causes maximal disturbance to the transmission: Bob’s error rate (that is, the probability of a wrong identification of the trit value) is 2/3. On average, over all transmissions, Eve gets $`I_E=1/4`$ of a trit and Bob’s error rate is $`E_B=1/2`$. (It is natural to measure Eve’s information in trits, since Bob gets one trit for each successful transmission.) These results may be compared to those obtained by using 2-state systems. With only two bases and with the same simple eavesdropping strategy, Eve learns on average 1/2 of a bit for each transmitted bit, and Bob’s error rate is 1/4. If we use three bases, these numbers become 1/3. Thus, with the present method, Eve learns a smaller fraction of the information and causes a larger disturbance. It is likely that this is also true in the presence of more sophisticated eavesdropping strategies, such as using an ancilla to gently probe the transmission without completely disrupting it. When Eve’s “optimal” eavesdropping strategy is sought, the criterion usually is the maximal value of $`I_E/E_B`$.
Do the above results mean that using 3-state systems improves the cryptographic security? The answer depends on which aim we seek to achieve. If Alice and Bob simply wish to be warned that an eavesdropper is active, in which case they will use another communication channel, then obviously the highest possible ratio $`E_B/I_E`$ is desirable. Eve can at most conceal her presence by intercepting only a small fraction $`x`$ of the transmissions, such that $`xE_B`$ is less than the natural error rate, but then $`I_E`$ is reduced by the same factor, and Eve’s illicit information can be eliminated by classical privacy amplification.
However, it may be that Alice and Bob have no alternative channel to use, and privacy amplification is their only means of fighting the eavesdropper. In that case, it is known that secure communication can in principle be achieved if Bob’s mutual information with Alice, $`I_B`$, is larger than Eve’s $`I_E`$. Note that even if Bob and Eve have the same error rate, as in one of the above examples, $`I_E>I_B`$. The reason is that Eve knows whether Alice and Bob used the same basis, and therefore which of her data are correct and which are worthless. On the other hand, Bob can only compare a subset of data with Alice, so as to measure his mean error rate $`E_B`$, and from the latter deduce the Shannon entropy of his string. For 2-state systems, assuming all bit values equally probable, he obtains
$$I_B=1+(1-E_B)\mathrm{log}_2(1-E_B)+E_B\mathrm{log}_2E_B,$$
(4)
and likewise for 3-state systems,
$$I_B=1+(1-E_B)\mathrm{log}_3(1-E_B)+E_B\mathrm{log}_3(E_B/2).$$
(5)
Numerical results, in bits and trits respectively, will be given in Table II at the end of this Letter, together with those for two other cryptographic protocols, discussed below.
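As a quick numerical sketch (not from the original work), Eqs. (4) and (5) can be evaluated directly; the function below treats both as the mutual information of a $`d`$-ary symmetric channel.

```python
import numpy as np

def i_bob(e_b, d):
    """Eq. (4) for d = 2 (bits) and Eq. (5) for d = 3 (trits)."""
    return 1 + ((1 - e_b) * np.log(1 - e_b)
                + e_b * np.log(e_b / (d - 1))) / np.log(d)

print(i_bob(0.25, 2))  # 0.188722..., two bases, first row of Table II
print(i_bob(0.50, 3))  # 0.053605..., four mutually unbiased bases
```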
New types of cryptographic protocols may indeed be devised if the Hilbert space has more than two dimensions. The reason is that a basis vector may now belong to several bases. In that case, it is natural to assume that each vector represents a definite trit (0, 1, or 2), which is the same in all the bases to which that vector belongs. An example is given in the table below, where vectors are labelled green, red, and blue, for later convenience.
> TABLE I. Components of 21 unnormalized vectors. The symbols 1̄ and 2̄
> stand for $`-1`$ and $`-2`$, respectively. Orthogonal vectors have different colors.
| green | | 001 | 101 | 01̄1 | 11̄1 | | 11̄2 | 112 | 21̄1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| red | | 100 | 110 | 101̄ | 111̄ | | 211̄ | 211 | 121̄ |
| blue | | 010 | 011 | 1̄10 | 1̄11 | | 1̄21 | 121 | 1̄12 |
Although this new algorithm does not improve transmission security (as shown below), it has many fascinating aspects and leads to new insights into quantum information theory. The 12 vectors in the first four columns of Table I are shown in Fig. 1, as dots on the faces of a cube, in a way similar to the graphical representation of a Kochen-Specker uncolorable set. In the present case, the tricolor analogue of the Kochen-Specker theorem requires only 13 rays for its proof, because ray (111) is orthogonal to all the rays in the third column, which have three different colors. These 12 vectors form 13 bases, but only four bases are complete. The nine other bases have only two vectors each and have to be completed by nine new vectors, listed in the last three columns of the table. To display these nine vectors in Fig. 1, their integer components should be divided by 2. The corresponding dots are then located at the centers of various squares on the faces of the cube.
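The combinatorial structure just described is easy to verify by brute force; the sketch below rebuilds Table I (with 1̄ = −1, 2̄ = −2) and is an illustration, not code from the original work.

```python
import itertools
import numpy as np

green = [(0,0,1), (1,0,1), (0,-1,1), (1,-1,1), (1,-1,2), (1,1,2), (2,-1,1)]
red   = [(1,0,0), (1,1,0), (1,0,-1), (1,1,-1), (2,1,-1), (2,1,1), (1,2,-1)]
blue  = [(0,1,0), (0,1,1), (-1,1,0), (-1,1,1), (-1,2,1), (1,2,1), (-1,1,2)]
vectors = green + red + blue
color = {v: c for c, vs in (("g", green), ("r", red), ("b", blue)) for v in vs}

# Enumerate all mutually orthogonal triples (the bases).
bases = [t for t in itertools.combinations(vectors, 3)
         if all(np.dot(u, v) == 0 for u, v in itertools.combinations(t, 2))]
assert len(bases) == 13                                     # 13 bases in total
assert all(len({color[v] for v in t}) == 3 for t in bases)  # all tricolored
first12 = set(green[:4] + red[:4] + blue[:4])
assert sum(set(t) <= first12 for t in bases) == 4           # four complete bases
```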
The cryptographic protocol is the same as before, but now Alice has 21 vectors to choose from, and Bob has a choice of 13 bases. The essential difference is that these bases are not mutually unbiased, so that if Eve chooses a different basis (which happens 12/13 of the time), she still gets at least probabilistic information on Alice’s vector. It may also happen that Eve’s basis is different from Bob’s, but both bases contain the vector found by Eve. In that case, when Bob announces his basis and Alice confirms it, Eve can infer that she got the correct state and caused no error.
Let us analyze what happens for each successful transmission, that is, when Alice’s vector $`|e_j\rangle `$ is one of those in the basis announced by Bob. Suppose that in her eavesdropping attempt, Eve obtains a state $`|e_\mu \rangle `$. This happens with probability $`P_{\mu j}=|\langle e_j|e_\mu \rangle |^2`$. This is also the probability that Bob gets the correct $`|e_j\rangle `$ when Eve resends him $`|e_\mu \rangle `$. On average over all Alice’s $`|e_j\rangle `$ and all Eve’s choices of a basis, the probability that Bob gets a correct result is
$$C=\sum _{j=1}^{21}\sum _{\mu =1}^{21}M_\mu (P_{\mu j})^2/(21\times 13),$$
(6)
where $`M_\mu `$ is the number of bases to which $`|e_\mu \rangle `$ belongs (namely $`M_\mu =2`$ for the vectors in the first and third columns of Table I, $`M_\mu =3`$ for those of the second and fourth columns, and $`M_\mu =1`$ for the rest). Bob’s mean error probability is $`E_B=1-C=0.385022`$.
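Equation (6) lends itself to direct evaluation; the following sketch (an illustration, not the authors’ code) normalizes the Table I vectors, recomputes $`M_\mu `$ from the orthogonality structure, and should reproduce the quoted error rate.

```python
import itertools
import numpy as np

raw = [(0,0,1), (1,0,1), (0,-1,1), (1,-1,1), (1,-1,2), (1,1,2), (2,-1,1),
       (1,0,0), (1,1,0), (1,0,-1), (1,1,-1), (2,1,-1), (2,1,1), (1,2,-1),
       (0,1,0), (0,1,1), (-1,1,0), (-1,1,1), (-1,2,1), (1,2,1), (-1,1,2)]
e = np.array(raw, dtype=float)
e /= np.linalg.norm(e, axis=1, keepdims=True)

P = (e @ e.T) ** 2                     # P[mu, j] = |<e_j|e_mu>|^2
bases = [t for t in itertools.combinations(range(21), 3)
         if all(np.isclose(P[i, j], 0) for i, j in itertools.combinations(t, 2))]
M = np.array([sum(i in t for t in bases) for i in range(21)])

C = (M[:, None] * P ** 2).sum() / (21 * 13)  # Eq. (6)
print(1 - C)                                 # E_B, quoted as 0.385022
```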
To evaluate Eve’s gain of information $`I_E`$, we note that when Alice confirms the basis chosen by Bob, Eve is left with a choice of three vectors having equal prior probabilities, $`p_j=1/3`$. The initial Shannon entropy is $`H_i=1`$ trit, and the prior probability for Eve’s result $`\mu `$ is
$$q_\mu =\sum _{j=0}^{2}P_{\mu j}p_j=1/3.$$
(7)
It then follows from Bayes’s theorem that the likelihood (posterior probability) of signal $`j`$ is (see ref. , page 282)
$$Q_{j\mu }=P_{\mu j}p_j/q_\mu =P_{\mu j}.$$
(8)
The new Shannon entropy, following result $`\mu `$, is
$$H_f=-\sum _{j=0}^{2}Q_{j\mu }\mathrm{log}_3Q_{j\mu }.$$
(9)
Eve’s information gain is obtained by averaging $`H_f`$ over all results $`\mu `$, all Eve’s bases, and all Bob’s bases. The final result is
$`I_E`$ $`=`$ $`H_i-\overline{H_f},`$ (10)
$`=`$ $`1+{\displaystyle \sum _{j=1}^{21}}{\displaystyle \sum _{\mu =1}^{21}}M_jM_\mu P_{\mu j}\mathrm{log}_3P_{\mu j}/(3\times 13^2).`$ (11)
Table II lists the relevant data for intercept-and-resend eavesdropping (IRE) on all the above cryptographic protocols.
> TABLE II. Result of IRE on various cryptographic protocols: Eve’s information; Bob’s information and error rate for a single IRE event; and fraction of eavesdropped transmissions needed to make both informations equal to each other.
| units | | bases | | vectors | $`I_E`$ | | $`I_B`$ | | $`E_B`$ | | $`x`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| bits | | 2 | | 4 | 0.500000 | | 0.188722 | | 0.250000 | | 0.68214 |
| bits | | 3 | | 6 | 0.333333 | | 0.081710 | | 0.333333 | | 0.68128 |
| trits | | 4 | | 12 | 0.250000 | | 0.053605 | | 0.500000 | | 0.71770 |
| trits | | 13 | | 12 | 0.575142 | | 0.143418 | | 0.391738 | | 0.51007 |
| trits | | 13 | | 21 | 0.442765 | | 0.150431 | | 0.385022 | | 0.68994 |
We also investigated the possibility that Alice uses only the 12 vectors in the first four columns of Table I (those represented by the dots in Fig. 1). The IRE results are also listed in Table II. However, it is interesting that in this case, Eve can get some information without performing any active eavesdropping and without causing any error, just by passively listening and waiting for Alice to confirm Bob’s choice of an incomplete basis. Eve then learns that one of the three trit values is eliminated. On the average, she gets information
$$I_E=(9/13)(1-\mathrm{log}_32)=0.255510\text{ trit}.$$
(12)
Finally, let us investigate what happens if Eve eavesdrops on only a fraction $`x`$ of the particles sent by Alice. In that case, both $`I_E`$ and $`E_B`$ are multiplied by $`x`$, and $`I_B`$ is still given by Eqs. (4) and (5), with $`E_B`$ replaced by $`xE_B`$ on the right hand side. The results are displayed in Fig. 2, which also shows the security domain $`I_B\ge I_E`$, assuming standard error correction and privacy amplification. The values of $`x`$ for which $`I_B=I_E`$ are listed in the last column of Table II. We see that the use of four mutually unbiased bases for 3-state particles requires the highest value of $`x`$ to breach the security. Moreover, for any given value of $`I_B`$, this protocol is the one that gives the lowest value of $`I_E`$. It thus appears that this method is the one giving the best results against IRE attacks. It is likely to also be the best against more sophisticated eavesdropping strategies, but this problem lies beyond the scope of the present Letter.
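The threshold values of $`x`$ in the last column of Table II follow from solving $`I_B(xE_B)=xI_E`$ numerically; the sketch below (assuming SciPy is available) illustrates the computation for two of the protocols.

```python
import numpy as np
from scipy.optimize import brentq

def i_bob(e, d):   # Eqs. (4)/(5), evaluated at the reduced error rate
    return 1 + ((1 - e) * np.log(1 - e) + e * np.log(e / (d - 1))) / np.log(d)

def threshold(i_eve, e_b, d):
    return brentq(lambda x: i_bob(x * e_b, d) - x * i_eve, 1e-9, 1 - 1e-9)

print(threshold(0.25, 0.5, 3))   # ~0.71770: four mutually unbiased bases
print(threshold(0.5, 0.25, 2))   # ~0.68214: two bases, 2-state carriers
```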
We thank Nicolas Gisin for helpful comments on cryptographic security, and Daniel Terno for bringing ref. to our attention. H.B.-P. was supported by the Danish National Science Research Council (grant no. 9601645) and also acknowledges support from the European IST project EQUIP. A.P. was supported by the Gerard Swope Fund and the Fund for Encouragement of Research.
FIG. 1. Twelve vectors are obtained by connecting the center of the cube to the various dots on its faces (diametrically opposite dots represent the same vector). The four dots at the vertices of the squares labelled G, R, and B, are green, red and blue, respectively. The truncated vertex corresponds to the uncolorable ray (111).
FIG. 2. Mutual informations for the various protocols listed in Table II, when the fraction of intercepted particles is $`0<x<1`$. For the case of 13 bases and 12 vectors, it is assumed that in the remaining fraction $`(1-x)`$, Eve performs passive eavesdropping on incomplete bases. The data are given in bits for 2-dimensional systems, and trits for 3-dimensional ones.
# Stripe ordering and two-gap model for underdoped cuprates
## 1 INTRODUCTION
The opening of a pseudogap below a strongly doping $`(\delta )`$-dependent crossover temperature $`T^{*}(\delta )`$, already above the superconducting critical temperature $`T_c(\delta )`$, is an intriguing feature of the underdoped cuprates. One possibility is that the pseudogap arises from strong scattering processes in the particle-hole channel. In this framework, spin fluctuations have been considered. A similar outcome would arise from scattering by charge fluctuations near a charge instability towards stripe formation. It was indeed suggested that the tendency to spatial charge order (which evolves into a mixed spin-charge stripe phase with decreasing doping) gives rise to an instability line ending in a Quantum Critical Point (QCP) at $`T=0`$ near optimal doping. On approaching the instability line, the quasi-critical stripe fluctuations affect the states on the FS in a quite anisotropic way. The fermionic states around the M points \[i.e. $`(\pm \pi ,0),(0,\pm \pi )`$\] interact strongly (these are the so-called “hot spots”), while the states around the $`\mathrm{\Gamma }`$-$`X`$ or $`\mathrm{\Gamma }`$-$`Y`$ diagonals are weakly interacting (“cold spots”). Well below the instability line, deep in the underdoped phase, a local (quasi-static) stripe order takes place, which can strongly affect the spectral distribution of the quasiparticles. This mechanism for pseudogap formation will be discussed in the next section. A second possibility is that the pseudogap arises from pairing in the particle-particle channel, with $`T^{*}`$ being a mean-field-like temperature at which electrons start to form local pairs without phase coherence. Lowering the temperature, phase coherence and hence superconductivity are established at the critical temperature $`T_c`$. In this context it is still debated whether the superconducting transition is intermediate between a BCS transition and a Bose-Einstein condensation of preformed pairs, or is due to a more intricate interplay between preformed bosonic pairs and fermions. Within the Stripe-Quantum-Critical-Point (Stripe-QCP) scenario a two-gap model can be considered, where strongly paired fermionic states coexist and interplay with weakly coupled pairs in different regions of the Fermi surface (FS). This is a natural description of the cuprates near the instability line, where quasiparticles have very different dispersions and effective interactions in different regions of the Fermi surface. In this case the strong, momentum-dependent singular scattering arising in the proximity of the stripe instability acts in the particle-particle channel and leads to tightly bound (strongly phase-fluctuating) pairs near the hot spots (close to the M points) coexisting with weakly interacting quasiparticles near the $`\mathrm{\Gamma }`$-$`X`$ or $`\mathrm{\Gamma }`$-$`Y`$ directions. The pseudogap then finds a natural interpretation in terms of incoherent Cooper-pair formation around the M points. We will discuss this model in Section 3.
## 2 CHARGE ORDERING IN THE UNDERDOPED CUPRATES
Within the Stripe-QCP scenario the underdoped region of the cuprates corresponds to the nearly ordered stripe phase, where a local (quasi-static) charge ordering takes place. In this framework, a mean-field analysis will likely capture the main qualitative features of the (locally) charge-ordered phase. In particular, in Ref. the spectral properties of an incommensurate charge-density-wave system were investigated within a standard Hartree-Fock approach. Both a purely one-dimensional ordering with an order parameter $`<\rho _q>^{1D}=\sum _n<\rho _q>\delta _{q,nq_c^{x,y}}`$ and a two-dimensional “eggbox” structure with $`<\rho _q>^{egg}=\sum _n<\rho _q>[\delta _{q,nq_c^x}+\delta _{q,nq_c^y}]`$ were considered. The effective density-density interaction and the critical wavevectors $`𝐪_c`$ were derived from microscopic calculations based on the frustrated phase-separation mechanism for a Hubbard-Holstein model with long-range Coulomb interaction. A first generic outcome was that charge ordering tends to substantially enhance the van Hove singularities near the M points. Pseudogaps are also formed. However, a charge ordering modulated along one single direction, $`x`$ or $`y`$, naturally produces pseudogaps along one direction only, and the superposition of two one-dimensional CDWs modulated along perpendicular directions simply fills up the pseudogap in all four M points. In the eggbox case, on the other hand, a leading-edge gap arises near these points, leaving finite arcs of the Fermi surface gapless. This latter non-trivial feature might account for recent ARPES results, but does not seem to be robust under disordering of the eggbox modulation. Therefore, although particle-hole scattering definitely seems to affect the electronic spectra, particularly around the M points, additional mechanisms, likely related to the particle-particle pairing discussed in the next section, seem to be needed to fully account for the observed pseudogap. On the contrary, in the specific cases where commensuration effects couple the stripes to the underlying lattice structure, charge ordering becomes particularly strong and can alone open a full insulating gap. This is the case for various hole-doped compounds at doping $`\delta =1/8`$, where stripes along the (1,0) or (0,1) directions with half a hole in excess per site (half-filled vertical stripes, HFVS) are observed. In this regard it was recently established, within a joint (slave-boson)–(unrestricted Hartree-Fock) approach, that both a proper treatment of the strong local hole-hole interaction and the presence of a sizable long-range Coulomb repulsion are needed to obtain a ground state with HFVS. The same conclusion was reached within a realistic three-band extended Hubbard model, where the electron-doped case was also investigated. For electron doping the most stable configuration was predicted to have the stripes along the (1,1) or (1,-1) directions with one additional electron per site (filled diagonal stripes).
## 3 THE TWO-GAP MODEL
As discussed above, the formation of a pseudogap in the metallic underdoped phase is likely related to the strongly anisotropic attractive potential arising in the proximity of the instability. In order to capture the relevant physical effects of the anisotropy of both the pairing interaction and the Fermi velocity, we introduce a simplified two-band model for the cuprates.
We describe the quasiparticle arcs of the FS about the nodal points by a free-electron band (labelled below by the index 1) with a large Fermi velocity $`v_{F1}=k_{F1}/m_1`$, and the hot states about the M points by a second free-electron band, displaced in momentum and by an energy $`\epsilon _0`$ from the first, with a small $`v_{F2}=k_{F2}/m_2`$. The energy $`\epsilon _0`$ is introduced to allow the chemical potential to cross both bands: $`E_{F1}=\epsilon _0+E_{F2}`$. Moreover, since our main interest is the interplay between strongly and weakly coupled pairs irrespective of their symmetry, for simplicity we assume an s-wave pairing interaction.
The model Hamiltonian for pairing in the two-band system is then
$$H_{pair}=\sum _{kk^{\prime }p\alpha \beta }g_{\alpha ,\beta }c_{k^{\prime }+p\beta }^+c_{-k^{\prime }\beta }^+c_{-k\alpha }c_{k+p\alpha }$$
(1)
where $`\alpha `$ and $`\beta `$ run over the band indices 1 and 2. We also introduce a BCS-like energy cutoff $`\omega _0`$ to regularize the pairing interaction. The $`2\times 2`$ scattering matrix $`\widehat{g}`$ accounts for the strongly $`q`$-dependent effective interaction in the p-p channel of the original single-band system. Its elements $`g_{ij}`$ are the different coupling constants which couple the electrons in the p-p channel within the same band ($`g_{11}`$ and $`g_{22}`$) and between different bands ($`g_{12}=g_{21}`$). The ladder equation for the superconducting fluctuation propagator is given by $`\widehat{L}=\widehat{g}+\widehat{g}\widehat{\mathrm{\Pi }}\widehat{L}`$, where the particle-particle bubble operator for the two-band spectrum has a diagonal $`2\times 2`$ matrix form with elements $`\mathrm{\Pi }_{11}(𝐪)`$ and $`\mathrm{\Pi }_{22}(𝐪)`$. The resulting fluctuation propagator is given by
$$\widehat{L}(𝐪)=\left(\begin{array}{cc}\stackrel{~}{g}_{11}-\mathrm{\Pi }_{11}(𝐪)& \stackrel{~}{g}_{12}\\ \stackrel{~}{g}_{12}& \stackrel{~}{g}_{22}-\mathrm{\Pi }_{22}(𝐪)\end{array}\right)^{-1}$$
(2)
where we have defined $`\stackrel{~}{g}_{ij}\equiv (\widehat{g}^{-1})_{ij}`$. It turns out useful to define the temperatures $`T_{c1}^0`$ and $`T_{c2}^0`$ through $`\stackrel{~}{g}_{11}-\mathrm{\Pi }_{11}(0,T)\equiv \rho _1\mathrm{ln}\frac{T}{T_{c1}^0}`$, $`\stackrel{~}{g}_{22}-\mathrm{\Pi }_{22}(0,T)\equiv \rho _2\mathrm{ln}\frac{T}{T_{c2}^0}`$, where $`\rho _i=m_i/(2\pi )`$ $`(i=1,2)`$ is the density of states of the $`i`$-th band. To emulate the hot and cold points we assume $`g_{22}\gg g_{11},g_{12}`$. In this limit $`T_{c1}^0`$ and $`T_{c2}^0`$ (with $`T_{c2}^0\gg T_{c1}^0`$) give the two BCS critical temperatures for the two decoupled bands (i.e. for $`g_{12}=0`$). For the coupled system ($`g_{12}\ne 0`$) the mean-field BCS superconducting critical temperature $`T_c^0`$ is defined by the equation $`\text{det}\widehat{L}^{-1}(𝐪=0,T_c^0)=0`$. We then obtain $`T_c^0>T_{c2}^0`$, given by
$$T_c^0=\sqrt{T_{c1}^0T_{c2}^0}\mathrm{exp}\left[\frac{1}{2}\sqrt{\mathrm{ln}^2\left(\frac{T_{c2}^0}{T_{c1}^0}\right)+\frac{4\stackrel{~}{g}_{12}^2}{\rho _1\rho _2}}\right].$$
(3)
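For orientation, Eq. (3) is easily evaluated numerically; the parameter values in this sketch are illustrative assumptions, not fits to any cuprate.

```python
import numpy as np

def tc0(tc1, tc2, g12t, rho1, rho2):
    """Mean-field critical temperature of the coupled two-band system, Eq. (3)."""
    root = np.sqrt(np.log(tc2 / tc1) ** 2 + 4 * g12t ** 2 / (rho1 * rho2))
    return np.sqrt(tc1 * tc2) * np.exp(0.5 * root)

print(tc0(1.0, 30.0, 0.0, 1.0, 1.0))  # g12 = 0 recovers Tc2 = 30
print(tc0(1.0, 30.0, 0.5, 1.0, 1.0))  # interband coupling raises Tc0 above Tc2
```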
The role of fluctuations can be investigated within a standard Ginzburg-Landau (GL) scheme, when both $`g_{22}<E_{F2}`$ and $`\omega _0<E_{F2}`$. We will assume that the chemical potential is not affected significantly by pairing and that fluctuations from the BCS result are not too strong. The relevance of the spatial fluctuations of the order parameter is assessed by the gradient-term coefficient $`\eta `$, which provides the momentum dependence of the propagator. This calculation requires the expansion of the fluctuation propagator in Eq.(2) in terms of $`q`$. In particular the expansion of the particle-particle bubbles reads $`\mathrm{\Pi }_{11}(q)\simeq \mathrm{\Pi }_{11}(0)-\rho _1\eta _1q^2`$ and $`\mathrm{\Pi }_{22}(q)\simeq \mathrm{\Pi }_{22}(0)-\rho _2\eta _2q^2`$. Here $`\eta _i`$ $`(i=1,2)`$ is given by $`\eta _i=(7\zeta (3)/32\pi ^2)v_{Fi}^2/T^2`$, with $`\eta _1\gg \eta _2`$. We obtain $`\eta =\alpha _1\eta _1+\alpha _2\eta _2`$ with
$$\frac{\alpha _1}{\alpha _2}=\frac{\stackrel{~}{g}_{12}^2}{\rho _1\rho _2\mathrm{ln}^2(T_c^0/T_{c1}^0)},$$
(4)
and $`\alpha _1+\alpha _2=1`$. The presence of a fraction of electrons with a large $`\eta _1`$ increases the stiffness $`\eta `$ of the whole electronic system with respect to $`\eta _2`$. However, when the mean-field critical temperature $`T_c^0`$ is much larger than $`T_{c1}^0`$, the correction to $`\eta _2`$ due to the interband coupling is small. At the same time the Ginzburg number is large, implying a sizable mass correction $`\delta ϵ(T)`$ due to fluctuations to the “mass” $`ϵ(T)`$ of the bare propagator $`\widehat{L}(𝐪)`$. The renormalized critical temperature $`T_c^r`$, given by the equation $`ϵ(T_c^r)+\delta ϵ(T_c^r)=0`$, is lower than $`T_c^0`$. We find that the renormalized gradient-term coefficient $`\eta ^r`$ in the presence of the mass correction is still given by Eq. (4) with $`T_c^0`$ replaced by $`T_c^r`$. Therefore, while mass renormalizations of the fluctuation propagator tend to lower $`T_c`$, at the same time this increases the gradient-term coefficient $`\eta `$ by increasing the coupling to $`\eta _1`$. As a consequence the effective Ginzburg number is reduced and the system is stabilized with respect to fluctuations, allowing for a coherent superconducting phase even in the extreme limit $`\eta _2=0`$. Within the GL approach we associate the temperature $`T_c^0\simeq T_{c2}^0`$ with the crossover temperature $`T^{*}`$ and $`T_c^r`$ with the superconducting critical temperature $`T_c`$ of the whole system.
Within the Stripe-QCP scenario the coupling $`g_{22}`$ is related to the singular part of the effective interaction mediated by the stripe fluctuations. $`g_{22}`$ is the most strongly doping-dependent coupling and attains its largest values in the underdoped regime. $`g_{11}`$ and $`g_{12}`$ are instead less affected by doping. In the region of validity of the GL approach, explicit calculations show that $`r(\delta )\equiv \frac{T^{*}-T_c}{T^{*}}\simeq \frac{T_c^0-T_c^r}{T_c^0}`$ increases with increasing $`g_{22}`$, i.e., with decreasing doping. For small values of $`r`$ we find that both $`T^{*}`$ and $`T_c`$ increase. This regime corresponds to the overdoped and optimally doped region. For $`r\simeq 0.25÷0.5`$, $`T_c`$ instead decreases while $`T^{*}`$ keeps increasing with decreasing doping. The large values of $`r`$ attained in the underdoped region show that we are reaching the limit of validity of our GL approach. We think, however, that the behavior of the bifurcation between $`T^{*}`$ and $`T_c`$ represents the physics of the pseudogap phase correctly, at least qualitatively, while a quantitative description would require a more sophisticated approach such as an RG analysis.
In the very-low-doping regime, where $`T^{*}`$ has increased strongly, the value of $`g_{22}`$ can be so large as to drive the system into a strong-coupling regime for the fermions in band 2 ($`g_{22}>E_{F2}`$). In this case the chemical potential is pulled below the bottom of band 2. The GL scheme must be abandoned and, in the limit of tightly bound 2-2 pairs, the propagator $`L_{22}(𝐪)`$ assumes the form of a single pole for a bosonic particle (similarly to the single-band strong-coupling problem). The critical temperature of the system is again obtained from the vanishing of $`det\widehat{L}^{-1}`$ at $`q=0`$, where, however, the chemical potential is now evaluated self-consistently, including the self-energy corrections to the Green function in band 2 and the fermions left in band 1. One gets
$$\frac{\stackrel{~}{g}_{12}^2}{\rho _1\mathrm{ln}(T_c^0/T_{c1}^0)}=\frac{\rho _2\omega _0(|\mu _2|-|\mu _B|)}{|\mu _2||\mu _B|}$$
(5)
where $`\mu _2=\mu _2(T_c^0)`$ is the chemical potential measured with respect to the bottom of band 2 and $`\mu _B=\rho _2\omega _0g_{22}`$ represents the bound-state energy. In the present case most of the fluctuation effect has been taken into account by the formation of the bound state, which occurs at a very high $`T^{*}\sim g_{22}`$. In this new physical situation $`\eta \simeq \eta _1`$ stays sizable and the fluctuations will not strongly reduce $`T_c^r`$ further with respect to $`T_c^0`$: $`T_c\simeq T_c^r\simeq T_c^0`$. In this low-doping regime $`\frac{T^{*}-T_c}{T^{*}}`$ approaches its largest values before $`T_c`$ vanishes.
The strong-coupling limit of our model shares some similarities, as well as some important differences, with phenomenological models of interacting fermions and bosons. In particular, we believe that the model considered here is more suitable to describe the crossover to the optimally doped and overdoped regime, where no preformed bound states are present and the superconducting transition is quite similar to a standard BCS transition.
# Search for Binary Protostars
## 1. Introduction
A major gap in our understanding of low-mass star formation concerns the origins of binary stars. About 30 to 50% of low-mass main-sequence stars have companions, and the frequency of young T Tauri binary systems in nearby star-forming regions is nearly twice as high. Binary systems have been observed in all pre-main-sequence stages of evolution and there is growing evidence for proto-binary systems, although the numbers are still very small (e.g., Fuller et al. 1996; Looney et al. 1997). Both theory and observations support the hypothesis that binary systems form during the gravitational collapse of molecular cloud cores. Most scenarios propose bar formation and fragmentation in rotating and accreting protostellar cloud cores or circumstellar disks as a formation mechanism (e.g., Burkert & Bodenheimer 1996; Boss & Myhill 1995; Bonnell et al. 1991; Boss 1999). To understand the formation process of binary stars, high angular resolution studies of the earliest stages of star formation are required. We have, therefore, started a program to search for multiplicity among low- and intermediate-mass protostars (Class 0 and I) using the Owens Valley Radio Observatory (OVRO) Millimeter Array.
## 2. Observations
Our program aims at sub-arcsecond resolution, corresponding to linear resolutions of 150 to 450 AU. Later, with ALMA, we aim for 0.1 arcsec resolution, or 15-45 AU, close to the peak of the pre-main-sequence binary separation distribution. The mm continuum emission is used to trace the optically thin thermal dust emission. The molecular gas is traced by the C<sup>18</sup>O(1$``$0) and N<sub>2</sub>H<sup>+</sup>(1$``$0) lines at 110 and at 93 GHz, respectively. N<sub>2</sub>H<sup>+</sup>(1$``$0) comprises seven hyperfine components and, compared to other molecules, depletes later and more slowly onto grains (Bergin & Langer 1997). It is, thus, a very reliable gas tracer of the morphology of protostellar cores. Initial results presented here are based on observations conducted at OVRO in September and October 1999. The 1 mm and 3 mm continuum maps have 1 $`\sigma `$ rms sensitivities of 4 mJy/beam for HPBW 2.0<sup>′′</sup>$`\times `$1.5<sup>′′</sup> and 0.7 mJy/beam for HPBW 5.2<sup>′′</sup>$`\times `$4.2<sup>′′</sup>, respectively. The N<sub>2</sub>H<sup>+</sup> images were obtained at low resolution only and have a velocity resolution of 0.2 km/s and a 1 $`\sigma `$ sensitivity of 110 mJy/beam for HPBW 13<sup>′′</sup>$`\times `$9.4<sup>′′</sup>.
The NIR, submm, and 1.2 mm continuum observations were performed at the 3.5 m Calar Alto telescope (MAGIC), the 15 m JCMT (SCUBA), and the IRAM 30 m telescope (19-channel bolometer array). We wish to thank Th. Henning, R. Zylka, R. Lenzen, D. Ward-Thompson, and J. Kirk who are involved in these programs.
## 3. CB 230 (L 1177)
CB 230 is a Bok globule located at a distance of $`\sim `$450 pc. It contains a strong submm/mm continuum source (Launhardt & Henning 1997; Launhardt et al. 1998, 2000) and a dense CS core which shows spectroscopic signatures of mass infall (Launhardt et al. 1997). The dense core is associated with two NIR reflection nebulae separated by $`\sim `$10<sup>′′</sup> (Yun 1996; Launhardt 1996). The western nebula is bipolar, with a bright northern lobe perfectly aligned with the blue lobe of a well-collimated CO outflow (Yun & Clemens 1994). The much fainter southern (red) part of this bipolar nebula seems heavily obscured, possibly by the infalling envelope. No star is visible, and the NIR morphology can be interpreted as light emerging from a deeply embedded YSO and scattered outward through the outflow cone directed towards us. The eastern NIR nebula is much fainter and redder and displays no bipolar structure.
Previous single-dish mm continuum and molecular line observations did not resolve the central part of the dense core, but they demonstrated that the mm emission has a core-envelope structure and peaks at the origin of the western bipolar NIR nebula (Fig. 1, top row). The slight extension of the continuum emission to the south east suggests that the faint eastern NIR source is also associated with circumstellar material.
The new OVRO continuum maps at 1 mm and 3 mm (Fig. 1, bottom row) show only one unresolved component, clearly associated with the origin of the western bipolar nebula. The compactness and location of the observed 1 mm continuum source ($`<`$ 400 AU E-W extent), together with the bipolar structure of the NIR nebula, suggest the presence of a circumstellar disk. The compact source contains $`\sim `$10% of the total 1 mm continuum flux in the IRAM map. A significant contribution by free-free emission can be ruled out, since the bolometric luminosity of the entire cloud core of 11 $`L_{\odot }`$ points to a low-mass protostar with no capability to ionize its environment (Launhardt et al. 1997). The eastern source may be too faint to detect ($`<2`$ mJy at 3 mm and $`<10`$ mJy at 1 mm), or no compact disk is associated with it.
In contrast to the dust continuum emission, all seven hyperfine structure components of the N<sub>2</sub>H<sup>+</sup>(1–0) line are detected at both NIR positions. The N<sub>2</sub>H<sup>+</sup> data resolve the molecular cloud core into two separate components, each of which is spatially coincident with one of the two NIR nebulae (Fig. 2). The projected separation of the two sources is $`\sim `$5000 AU. The double core seems to rotate around an axis perpendicular to the connecting line and approximately parallel (in projection) to the outflow of the western source. A comparison of the kinetic, gravitational, and rotational energies of the double-core system shows that the two cores are gravitationally bound. This is consistent with the assumption that the double core formed by rotational fragmentation from a single cloud core, and with the orientation of the assumed circumstellar accretion disk around the western protostar. The angular resolution is not yet high enough to derive the rotation curves of the individual cores, but planned observations should improve the resolution considerably. The projected separation of $`\sim `$5000 AU is at the upper end of the pre-main-sequence binary separation distribution. Nevertheless, these preliminary results suggest that the Bok globule CB 230 contains a “true” wide binary protostar system.
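For a rough feeling of the numbers involved in such a binding comparison, consider the sketch below; the masses and relative velocity are illustrative assumptions, since the paper quotes neither.

```python
G = 6.674e-11                    # m^3 kg^-1 s^-2
M_SUN, AU = 1.989e30, 1.496e11   # kg, m

def gravitationally_bound(m1_msun, m2_msun, sep_au, dv_kms):
    """Compare the relative kinetic energy with the gravitational energy."""
    m1, m2, r = m1_msun * M_SUN, m2_msun * M_SUN, sep_au * AU
    e_kin = 0.5 * (m1 * m2 / (m1 + m2)) * (dv_kms * 1e3) ** 2
    e_grav = G * m1 * m2 / r
    return e_kin < e_grav

print(gravitationally_bound(2.0, 1.0, 5000.0, 0.3))  # True for these values
```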
## References
Bergin, E.A., & Langer, W.D. 1997, ApJ, 486, 316
Bonnell, I., Martel, H., Bastien, P., et al. 1991, ApJ, 377, 553
Boss, A.P., & Myhill, E.A. 1995, ApJ, 451, 218
Boss, A.P. 1999, ApJ, 520, 744
Burkert, A., & Bodenheimer, P. 1996, MNRAS, 280, 1190
Fuller, G.A., Ladd, E.F., & Hodapp, K.-W. 1996, ApJ, 463, L97
Launhardt, R. 1996, PhD thesis, University of Jena
Launhardt, R., & Henning, Th. 1997, A&A, 326, 329
Launhardt, R., Evans II, N.J., Wang J., et al. 1997, ApJS, 119, 59
Launhardt, R., Ward-Thompson, D., & Henning, Th. 1998, MNRAS, 288, L45
Launhardt, R., Henning, Th., & Zylka, R. 2000, in preparation
Looney, L.W., Mundy, L.G., & Welch, W.J. 1997, ApJ, 484, L157
Yun, J.L. & Clemens, D.P. 1994, ApJS, 92, 145
Yun, J.L. 1996, AJ, 111, 930
## 1 Introduction
Since its inception by Bohm and its popularization by Bell, the pilot wave theory, or causal interpretation of quantum mechanics – now often called Bohmian mechanics – has been regarded by a number of people as an in some respects bizarre, but otherwise viable, ontology for quantum mechanics. Books and proceedings have appeared that discuss the features of the theory in detail (cf. Holland, Bohm & Hiley, Cushing et al.), good introductory surveys are available (cf. Berndl et al., Dürr et al.), and accounts for the lay reader exist (cf. Albert, Goldstein).
On the other hand, Bohmian mechanics has remained a minority view, since, from its beginnings, it has been viewed critically by most of the influential quantum physicists. The main early arguments against it are stated in Holland \[20, Sections 1.5.3 and 6.5.3\]; they are usually argued away by some mathematical analysis accompanied by statements such as “classical prejudice” (Bell \[3, Chapter 14\]), “to our knowledge no serious technical objections have ever been raised against” it (Holland \[20, Section 1.5.3\]), or “Bohmian mechanics accounts for all of the phenomena governed by nonrelativistic quantum mechanics” (Dürr et al.). The arguments on both sides usually rest on one’s unwillingness or readiness to accept counterintuitive consequences of the Bohmian picture, since none of the phenomena in question are observable.
More recent counterintuitive implications of Bohmian mechanics (Englert et al., Griffiths) met with similar responses (Dürr et al., Dewdney et al.). In particular, Dürr et al. write, “an open-minded advocate of quantum orthodoxy would presumably have preferred the clearer and stronger claim that BM is incompatible with the predictions of quantum theory, so that, despite its virtues, it would not in fact provide an explanation of quantum phenomena. The authors are, however, aware that such a strong claim would be false.”
The purpose of this paper is to demonstrate – independently of the arguments in – that such a strong claim is indeed valid. Specifically, Bohmian mechanics contradicts the predictions of quantum mechanics at the level of time correlations. Since time correlations can be observed experimentally via linear response theory (see, e.g., Reichl \[27, Chapter 15.H\]), Bohmian mechanics and quantum mechanics cannot both be valid.
Concerning discrepancies between Bohmian mechanics and quantum mechanics involving multiple times, see also Redington et al. for Bohmian hydrogen atoms, and Ghose for histories of indistinguishable particles.
There are similar problems with multiple times in Nelson’s stochastic quantum mechanics; however, there they can be overcome by a specific procedure for state reduction under measurement, see Blanchard et al. Bohmian mechanics does not seem to have such an option to rescue their case, since in the orthodox Bohmian interpretation state reduction is a purely dynamical phenomenon.
Acknowledgments. I’d like to thank Philippe Blanchard, Sheldon Goldstein, Arkadiusz Jadczyk and Jack Sarfatti for their comments on an earlier version of this paper.
## 2 Background
Quantum mechanics. A one-dimensional quantum particle without spin in an external potential $`V(q)`$ is described by the Hamiltonian
$$H(p,q)=\frac{p^2}{2m}+V(q)$$
(1)
(see, e.g., Messiah \[24, (2.20)\]), where the position operator $`q`$ and the momentum operator $`p`$ satisfy the canonical commutation relations
$$[q,p]=i\hbar $$
(2)
\[24, (5.53)\]. In the Schrödinger picture, observables are associated with Hermitian operators $`A`$. The dynamics of a quantity $`A`$ is given in the Heisenberg picture by one-parameter families of operators $`A(t)`$ satisfying
$$i\hbar \dot{A}(t)=[A(t),H(p(t),q(t))]$$
(3)
\[24, (8.40)\]; the identification with the Schrödinger picture is obtained by specifying the initial condition $`A(0)=A`$ at some reference time $`t=0`$.
In the position representation, pure ensemble states are given by wave functions $`\psi _0(x)`$ satisfying $`\int |\psi _0(x)|^2dx=1`$, on which $`q`$ acts as multiplication by $`x`$ and $`p`$ acts as the differential operator $`\frac{\hbar }{i}\frac{\partial }{\partial x}`$. The expectation of a Heisenberg operator family $`A(t)`$ in a pure ensemble is defined by
$$\langle A(t)\rangle _Q=\int \psi _0^{*}(x)(A(t)\psi _0)(x)\,dx$$
(4)
\[24, (4.22)\]. If one defines a time-dependent wave function $`\psi (x,t)`$ as the solution of the initial-value problem
$$i\hbar \frac{\partial }{\partial t}\psi (x,t)=H\psi (x,t),\qquad \psi (x,0)=\psi _0(x)$$
(5)
\[24, (2.29)\], one can rewrite the expectation in the equivalent Schrödinger picture as
$$\langle A(t)\rangle _Q=\int \psi ^{*}(x,t)(A\psi )(x,t)\,dx$$
(6)
\[24, (4.22)\]. In particular, the expectation of a function of position is
$$\langle f(q(t))\rangle _Q=\int f(x)|\psi (x,t)|^2\,dx$$
(7)
\[24, (4.13)\], so that
$$P(x,t)=|\psi (x,t)|^2$$
(8)
\[24, (4.2)\] behaves as a probability density. For Hamiltonians of the form (1), the probability density satisfies an equation of continuity,
$$\frac{\partial }{\partial t}P+\mathrm{div}J=0$$
(9)
\[24, (4.11)\], with the probability current
$$J(x,t)=Re\,\psi ^{*}(x,t)\frac{\hbar }{im}\frac{\partial }{\partial x}\psi (x,t)$$
(10)
\[24, (4.9)\]. Thus an ensemble behaves like a flow of noninteracting particles.
Bohmian mechanics. Bohmian mechanics tries to give reality to this picture of an ensemble as a flow of particles with classical-like properties.
Following Holland \[20, Section 3.1\], ensembles are interpreted in Bohmian mechanics as classical ensembles of particles characterized by a solution $`\psi (x,t)`$ of Schrödinger’s wave equation (5) and a trajectory $`x(t)`$ obtained by solving the initial value problem
$$\dot{x}(t)=\frac{1}{m}\frac{\partial S(x,t)}{\partial x}\Big |_{x=x(t)},$$
(11)
where the phase $`S(x,t)`$ of $`\psi `$ is defined by
$$\psi (x,t)=e^{iS(x,t)/\hbar }|\psi (x,t)|.$$
(12)
The probability that a particle in the ensemble lies between the points $`x`$ and $`x+dx`$ at time $`t`$ is given by $`|\psi (x,t)|^2dx`$. (Holland discusses the 3-dimensional case and hence has a volume element in place of $`dx`$. It would be trivial to rewrite the present discussion in three dimensions without changing the conclusion. Similarly, as in many expositions of Bohmian mechanics, spin is ignored, but incorporating it would not change anything essential.)
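For a concrete (if elementary) illustration of Eqs. (11)–(12), the sketch below integrates a Bohm trajectory for a free Gaussian packet with $`\hbar =m=1`$; the analytic velocity field and all parameter values are assumptions made for this example.

```python
import numpy as np

s0 = 1.0   # initial packet width

def velocity(x, t):
    # For psi(x,t) ~ exp(-x^2 / (2 s0^2 (1 + i t / s0^2))), one finds
    # (1/m) dS/dx = Im[psi'/psi] = x t / (s0^4 + t^2).
    return x * t / (s0 ** 4 + t ** 2)

def trajectory(xi, t_max=5.0, n=5000):
    dt, x = t_max / n, xi
    for k in range(n):          # simple Euler integration of Eq. (11)
        x += velocity(x, k * dt) * dt
    return x

xi, t_max = 0.7, 5.0
print(trajectory(xi, t_max))                   # numerical x_xi(t_max)
print(xi * np.sqrt(1 + t_max ** 2 / s0 ** 4))  # exact x_xi(t) = xi * s(t)/s0
```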
To indicate the flow of individual particles in an ensemble described by a fixed solution $`\psi (x,t)`$ of the Schrödinger equation, we refine the notation and write $`x_\xi (t)`$ for the position of a particle that is in position $`\xi `$ at time $`t=0`$, so that $`x_\xi (0)=\xi `$. The associated probability measure is then $`d\mu (\xi )=|\psi _0(\xi )|^2d\xi `$. Ensemble expectations of some real property $`A_\xi `$ that a particle – characterized by its wave function $`\psi _0`$ (assumed fixed) and its position $`\xi `$ at time $`t=0`$ – has are therefore given by averaging the values of $`A`$ over the ensemble,
$$\langle A\rangle _B=\int A_\xi |\psi _0(\xi )|^2\,d\xi .$$
(13)
Since
$$J(x(t),t)=P(x(t),t)\dot{x}(t)$$
(14)
\[20, (3.2.29)\], the continuity equation (9) implies that expectations of functions $`A(x(t),t)`$ are invariant under a shift of the reference time $`t=0`$. (Note that other authors use the equation
$$\dot{x}(t)=J(x(t),t)/P(x(t),t)$$
(15)
in place of (11) to define the trajectories; because of (14), this is indeed equivalent and has the advantage of being directly motivated by time shift invariance.)
Local expectation values. To calculate expectation values of quantum mechanical operators, Holland \[20, (3.5.4)\] defines the local expectation value of a Hermitian operator $`A`$ in the Schrödinger picture as the real number
$$A(x,t)=Re\frac{(A\psi )(x,t)}{\psi (x,t)}.$$
(16)
The local expectation values evaluated along a trajectory,
$$A_\xi (t)=A(x_\xi (t),t),$$
(17)
are considered to be the real properties of a particle. Indeed, Holland mentions in \[20, Section 3.7.2\] that the local expectation value “might, following the common parlance, be termed the ‘hidden variable’ associated with the corresponding physical variable”. With this definition of real properties, Bohmian mechanics achieves agreement with simple quantum mechanical predictions since, as is easily checked,
$$\langle A\rangle _B=\langle A\rangle _Q$$
(18)
(Holland \[20, (3.8.8/9)\]). To appreciate what the local expectation values are in specific cases, Holland calculates explicitly the case of position, momentum, total energy, and total orbital angular momentum. In particular, the particle positions (local expectation values of $`A=q`$) and particle momenta (local expectation values of $`A=p`$) at arbitrary times $`t`$ are
$$q_\xi (t)=x_\xi (t),\qquad p_\xi (t)=\frac{\partial S}{\partial x}(x_\xi (t),t)$$
(19)
\[20, (3.2.18)\]. More generally, if $`A=f(q)`$ then $`A(x,t)=f(x)`$; thus functions of position at a fixed time behave classically. But for other operators, this is not the case; e.g., while $`p_\xi (t)=m\dot{x}_\xi (t)`$, the kinetic energy $`K=p^2/2m`$ satisfies
$$K_\xi (t)=\frac{m}{2}\dot{x}_\xi (t)^2+Q(x_\xi (t),t)$$
with an additional ‘quantum potential’ $`Q(x,t)`$.
## 3 Time correlations in Bohmian mechanics
Particles in the ground state. For any Hamiltonian with a nondegenerate ground state $`\psi _0`$ (satisfying $`H\psi _0=E_0\psi _0`$), this ground state can always be taken to be real. Indeed, since the complex conjugate $`\psi _0^{*}`$ also satisfies $`H\psi _0^{*}=E_0\psi _0^{*}`$ and the ground state is nondegenerate, $`\psi _0^{*}`$ must be a multiple of $`\psi _0`$, and scaling with the square root of the multiplier leaves a real eigenfunction.
The solution $`\psi `$ of the Schrödinger equation (5) corresponding to the ground state is
$$\psi (x,t)=e^{-itE_0/\hbar }\psi _0(x).$$
If a particle can be in position $`x`$ at time $`t`$ then $`|\psi (x,t)|^2>0`$, hence $`\psi _0(x)\ne 0`$. A comparison with (12) therefore shows that particles in a nondegenerate ground state have a phase $`S(x,t)=\pm tE_0`$, independent of $`x`$. Thus (11) implies that $`x(t)`$ is constant, $`x_\xi (t)=\xi `$ for all $`t`$: each particle in the ensemble stands still.
This observation is puzzling and led Einstein to reject the Bohmian interpretation; see Holland \[20, Section 6.5.3\] for a discussion and a defense.
The harmonic oscillator. A one-dimensional harmonic oscillator of mass $`m`$, period $`T`$ and angular frequency $`\omega =2\pi /T`$ is described by the Hamiltonian
$$H(p,q)=\frac{p^2}{2m}+\frac{\omega ^2m}{2}q^2.$$
(20)
The canonical commutation relations (2) imply that, for the Hamiltonian (20), the Heisenberg dynamics (3) of position and momentum are given by
$$\frac{dq(t)}{dt}=\frac{p(t)}{m},\qquad \frac{dp(t)}{dt}=-\omega ^2mq(t),$$
just as in the classical case. In particular, we can solve the dynamics explicitly in terms of the position operator $`q`$ and the momentum operator $`p`$ at time $`t=0`$ as
$$q(t)=q\mathrm{cos}\omega t+\frac{p}{\omega m}\mathrm{sin}\omega t,$$
$$p(t)=p\mathrm{cos}\omega t-q\omega m\mathrm{sin}\omega t,$$
again as in the classical case. In particular, $`q(t+T/2)=-q(t)`$, so that quantum mechanics predicts the time correlation
$$\langle q(t+T/2)q(t)\rangle _Q=-\langle q(t)^2\rangle _Q<0$$
(21)
for an ensemble in an arbitrary pure (or even mixed) state. ($`\langle q(t)^2\rangle _Q=0`$ would be possible only in an eigenstate of $`q(t)`$ with eigenvalue zero, but there is no such normalized state.)
On the other hand, interpreting the time correlations in a Bohmian sense, one finds from (19) and (13) that
$$\langle q(t+T/2)q(t)\rangle _B=\int q_\xi (t+T/2)q_\xi (t)|\psi _0(\xi )|^2\,d\xi .$$
For particles in the ground state (which for the harmonic oscillator is nondegenerate), the discussion above shows that the right hand side is constant,
$$\langle q(t+T/2)q(t)\rangle _B=\langle q(t)^2\rangle _B=\langle q(t)^2\rangle _Q>0.$$
(22)
Comparing (21) and (22), we see that the quantum mechanical time correlation and the Bohmian time correlation have opposite signs.
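The sign of the quantum mechanical correlation (21) is easy to confirm numerically in a truncated ladder-operator representation (a sketch with $`\hbar =m=\omega =1`$, so $`T=2\pi `$); the Bohmian value (22) is, by contrast, the constant $`+\langle q(t)^2\rangle _Q`$.

```python
import numpy as np

n = 40
a = np.diag(np.sqrt(np.arange(1, n)), k=1)    # annihilation operator
q = (a + a.T) / np.sqrt(2)
p = (a - a.T) / (1j * np.sqrt(2))

def q_t(t):                                   # Heisenberg position operator
    return q * np.cos(t) + p * np.sin(t)

ground = np.zeros(n); ground[0] = 1.0         # oscillator ground state
t, T = 0.3, 2 * np.pi
corr = ground @ (q_t(t + T / 2) @ q_t(t)) @ ground
print(corr)   # ~ -0.5 = -<q(t)^2>_Q, negative as in Eq. (21)
```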
Measuring time correlations. The fact that, in general, $`q(s)q(t)`$ is not Hermitian and hence cannot be measured in individual events does not mean that the expectation on the left hand side of (21) is meaningless and has no relation to experiment. Indeed, one may define the expectation of an arbitrary quantity $`f`$ in orthodox quantum mechanics (where all self-adjoint operators = observables can be measured, cf. Dirac \[12, p.37\]) in terms of the observables $`Ref=\frac{1}{2}(f+f^{*})`$ and $`Imf=\frac{1}{2i}(f-f^{*})`$ by
$$\langle f\rangle _Q:=\langle Ref\rangle _Q+i\langle Imf\rangle _Q.$$
(23)
This gives unambiguous values to all expectations, and is fully consistent with orthodox quantum mechanics. Of course, it may not be easy to measure $`Ref`$ and $`Imf`$, but an operational procedure for measuring arbitrary Hermitian functions of $`p`$ and $`q`$ by a suitable experimental arrangement can be found, e.g., in Lamb. And quantum optics routinely deals with expectations and measurements of coherent states, which are eigenstates of non-Hermitian annihilation operators; see, e.g., Leonhardt.
While the example of the harmonic oscillator is somewhat artificial, it has the advantage that all calculations can be done explicitly. Significant physical applications of time correlations are, however, made in statistical mechanics, where integrals over time correlations in thermodynamic equilibrium states are naturally linked to linear response functions, and hence are measurable as susceptibilities. See, e.g., Reichl \[27, (15.161) and (15.172)\]. Time correlations also arise in the calculation of optical spectra (Carmichael \[8, Lecture 3.3\]) and in the context of quantum Markov processes (Gardiner \[16, Section 10.5\]). Thus, at least in principle, it is possible to test the validity of the recipe (23) by experiment, by measuring susceptibilities or spectra directly, and by comparing the result to that obtained by applying (23) to measurements of $`Ref`$ and $`Imf`$.
As Arkadiusz Jadczyk (personal communication) pointed out, (23) implies that, due to noncommutativity, the quantum mechanical time correlations $`\langle q(s)q(t)\rangle _Q`$ are complex in most states at most times, while time correlations computed from Bohm trajectories are always real. Thus an agreement would be a coincidence.
On the other hand, it is possible to avoid non-Hermitian operators completely. Indeed, the contradiction persists in the following consequence of (21) and (22):
$$\langle q(t+T/2)q(t)+q(t)q(t+T/2)\rangle _Q=-2\langle q(t)^2\rangle _Q<0,$$
(24)
$$\langle q(t+T/2)q(t)+q(t)q(t+T/2)\rangle _B=2\langle q(t)^2\rangle _Q>0.$$
(25)
Note that $`q(t+T/2)q(t)+q(t)q(t+T/2)`$ is Hermitian, and (24) has the correct classical time correlation as limit when $`\hbar \to 0`$. Symmetrized time correlations are discussed in the context of linear response theory in Kubo et al. \[21, pp. 167-169\].
In discussions with proponents of Bohmian mechanics, it is claimed that my interpretation of the Bohmian formalism is erroneous, in that I am not making the proper distinction between the ontological ‘beable’ and the epistemological ‘observable’, and that I compare the statistics of unobserved Bohm trajectories with those for quantum observations.
However, quantum mechanics can be used in practice without reference to the (still ill-defined) measurement mechanism, while Bohmian mechanics resorts to the latter to justify any discrepancy. This should not be the case if the ‘beables’ were the real entities that Bohmian mechanics claims them to be. And indeed, the whole purpose of the local expectation values is to show the equivalence of expectations in Bohmian mechanics with those in quantum mechanics, without having to refer to measurement.
What else could the meaning of (18) be? The whole discussion in Holland \[20, Section 3.5–3.8\] becomes meaningless unless it is accepted that (18) is the real link between quantum mechanics and Bohmian mechanics, independent of any measurement questions. The probabilities – which Holland discusses independently of expectations – follow the rule (18) when $`A`$ is an orthogonal projector onto the associated subspace, and if the expectation rule fails then the associated probabilities fail as well.
Thus, one wonders why Bohmian mechanics, which can do calculations of single-time probabilities without reference to measurement questions, suddenly needs the measurement process to calculate probabilities of pair events occurring at two different times.
It may be noted that there are similar problems with multiple times in Nelson’s stochastic quantum mechanics; Blanchard et al. show how these problems can be overcome by a specific procedure for state reduction under measurement.
However, Bohmian mechanics does not seem to have such an option to rescue its interpretation, since in the orthodox Bohmian interpretation state reduction is a purely dynamical phenomenon. The suggestion to explain equivalence to quantum mechanical predictions by invoking the measurement process leads at best to an approximate equivalence since Bohmian theory discusses measurement only in an approximate way (Holland \[20, Chapter 8\], Bohm & Hiley \[7, Chapter 6\]). And even then, specific efforts would be needed to show that the time correlations come out in the right way.
And the explanation by measurement fails completely if we consider the universe as a whole which, if supposed to behave deterministically according to the laws of Bohmian mechanics, has no meaningful way of defining time correlations apart from $`\langle q(s)q(t)\rangle _B`$.
The ambiguity of local expectation values. To gain a better understanding of the problems of Bohmian mechanics from a slightly different point of view, we look more closely at the local expectation values that are supposed to define the real properties of particles, and that lie at the heart of the claim of Bohmian mechanics that all its predictions agree with those of quantum mechanics.
We first note that the recipe for calculating local expectation values is linear without restriction; in particular, for a particle in the ground state, where $`q_\xi (t)=\xi `$ and $`p_\xi (t)=0`$, we have $`A_\xi (t)=\alpha \xi `$ for any operator $`A=\alpha q+\beta p`$. We use this to calculate the local expectation value of the Heisenberg position operator $`A=q(s)`$ at time $`s`$ in the ground state of the harmonic oscillator, and find the remarkable formula
$$A_\xi (t)=\xi \mathrm{cos}\omega s.$$
Thus the objective value of $`A=q(s)`$ at any time $`t`$ is $`\xi \mathrm{cos}\omega s`$, corresponding to our intuition if we regard $`s`$ as the physical time. It seems that, at least in the Bohmian picture of the harmonic oscillator, the Heisenberg time $`s`$ is the real time while the Schrödinger time $`t`$ only plays a formal and counterintuitive role.
This gives weight to what is called ‘operator realism’ in Daumer et al. , against the Bohmian program advocated there. And it makes the interpretation of local expectation values as real properties of the system highly dubious since these values depend on the choice of the Heisenberg time $`s`$.
In particular, for multi-time expectations, which are meaningful only in the Heisenberg picture, there is no distinguished single Heisenberg time, and hence no natural Bohmian interpretation.
Thus Bohmian mechanics can at best be said to reproduce a subset of quantum mechanics. It contradicts the quantum mechanical predictions about time correlations if one proceeds in the straightforward way that generalizes the basic formula (18) that accounts for agreement of single-time expectations and single-time probabilities.
And Bohmian mechanics does not say anything at all about time correlations if the connection to quantum mechanics is kept more vague and left hidden behind a measurement process that is inherently approximate in Bohmian mechanics. Should this be the real link between quantum mechanics and Bohmian mechanics, one could claim the predictions of Bohmian mechanics to be only approximately equal to those of quantum mechanics, against the explicit assertions of many supporters of Bohmian mechanics.
## 4 Conclusion
In contrast to the claim by Dürr et al. , Bohmian mechanics does not account for all of the phenomena governed by nonrelativistic quantum mechanics. Indeed, it was shown that for a harmonic oscillator in the ground state, Bohmian mechanics and quantum mechanics predict values of opposite sign for certain time correlations. Bohmian mechanics therefore contradicts quantum mechanics at the level of time correlations. Since time correlations can be observed experimentally via linear response theory, Bohmian mechanics and quantum mechanics cannot both describe experimental reality.
Due to the complicated form of the Bohmian dynamics, it seems difficult to compute time correlations for realistic scenarios where a comparison with linear response theory and hence with experiment would become possible. But perhaps numerical simulations are feasible. On the other hand, it is unlikely that, if the predictions of quantum mechanics and Bohmian mechanics differ in such a simple case, they would agree in more realistic situations.
The time correlations used in statistical mechanics are those from quantum mechanics and not those from Bohm trajectories. Moreover, they can be calculated and used without reference to any theory about the measurement process. If an elaborate theory of quantum observation is needed to reinterpret Bohmian mechanics – so that it matches quantum mechanics and thus restores the connection to statistical mechanics – then Bohmian mechanics is at best approximately equivalent to quantum mechanics and, I believe, irrelevant to practice.
It is therefore likely that Bohmian mechanics is ruled out as a possible foundation of physics. |
# A GPS-based method to model the plasma effects in VLBI observations
## 1 Introduction
Very-long-baseline interferometry (VLBI) provides unprecedented angular resolution through observations of celestial bodies with radio telescopes spread over the Earth’s surface. Each observing station records data on magnetic tapes. The local-oscillator signals and the time-tagging of the data are governed by hydrogen-maser frequency standards. The tapes are processed in special-purpose correlators to determine the so-called VLBI observables: group and phase delays and phase-delay rates.
A main problem in determining the sky positions of celestial radio sources from these VLBI observables is the effect of the Earth’s ionosphere on them. The use of GPS satellite data to remove this effect forms the thrust of this paper.
The ionosphere is characterized by its content of free electrons and ions. The F<sub>2</sub> layer of the ionosphere has the largest density of charged particles, with values up to $`3\times 10^{12}`$ m<sup>-3</sup>. The total electron content per square meter along a line of sight is the number of electrons in a column of one square meter cross section along the ray path:
$$\mathrm{TEC}=\int _0^{h_0}N\,dh,$$
(1)
where $`N`$ is the spatial density of electrons, $`h`$ is the coordinate of propagation of the wave, and $`h_0`$ corresponds to the effective end of the ionosphere. TEC is highly variable and depends on several factors, such as local time, geographical location, season, and solar activity. TEC can have values between 1 TECU (or TEC unit, defined as 10<sup>16</sup> m<sup>-2</sup>) and 10<sup>3</sup> TECU. Epochs of greater solar activity cause higher values of the TEC.
The ionosphere affects the phase and group delays oppositely (to first order, see, e.g., Thompson et al. tho86 (1986)):
$$\mathrm{\Delta }\tau =\pm \frac{\kappa }{c\nu ^2}\mathrm{TEC},$$
(2)
where $`\kappa \simeq 40.3`$ m<sup>3</sup>s<sup>-2</sup>, $`c`$ is the speed of light (m s<sup>-1</sup>), and $`\nu `$ the frequency (Hz), and where we neglect magnetic field effects and assume $`\nu `$ is large compared with the local plasma frequency (for an extreme case, the plasma frequency is $`\sim `$15 MHz). The negative sign applies for phase delays and the positive sign for group delays. In standard astrometric VLBI experiments observations are made simultaneously in two well-separated frequency bands in order to estimate the ionospheric effect. A nearly vacuum equivalent delay can be obtained from the following expression:
$$\tau _{\mathrm{free}}=\frac{(\nu _1/\nu _2)^2\tau _1-\tau _2}{(\nu _1/\nu _2)^2-1},$$
(3)
where $`\tau _i`$ is the delay –either group or phase– at frequency $`\nu _i`$ ($`i=1,2`$, $`\nu _1>\nu _2`$).
Thus, with dual-frequency observations, the ionospheric effect can largely be removed from the VLBI data. Such removal can also be made for single-frequency observations via Eq. (2), if estimates of the TEC along the lines of sight of the radio telescopes are available from other observations.
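For illustration, Eqs. (2) and (3) translate into a few lines of Python (a sketch only; the frequencies are the S/X bands used later in this paper, and the TEC value is a made-up example):

```python
KAPPA = 40.3e16   # m^3 s^-2 per TECU (40.3 times 1e16 electrons/m^2)
C = 2.99792458e8  # speed of light, m/s

def iono_delay(tec_tecu, freq_hz, group=True):
    """Ionospheric delay (s) from Eq. (2); sign + for group, - for phase."""
    sign = 1.0 if group else -1.0
    return sign * KAPPA * tec_tecu / (C * freq_hz**2)

def iono_free_delay(tau1, tau2, f1, f2):
    """Nearly vacuum-equivalent delay from dual-band delays, Eq. (3)."""
    r2 = (f1 / f2) ** 2
    return (r2 * tau1 - tau2) / (r2 - 1.0)

# Example: 30 TECU of slant TEC at the S and X bands.
f_x, f_s = 8.4e9, 2.3e9
tau_x = iono_delay(30.0, f_x)         # ~0.57 ns at 8.4 GHz
tau_s = iono_delay(30.0, f_s)         # ~7.6 ns at 2.3 GHz
print(tau_x * 1e9, tau_s * 1e9, iono_free_delay(tau_x, tau_s, f_x, f_s))
```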
Guirado et al. (gui95 (1995)) showed that it is possible to estimate the ionospheric effect with accuracy useful for astrometric purposes from Faraday-rotation measurements. In this work, the authors used a “clipped” sinusoidal function to model the diurnal behavior of the TEC. In this model, the night component is constant and equal to the minimum TEC value, and the day component is expressed as the positive part of a sinusoid, with its maximum some hours after noon. Their observations were obtained in late 1985, a time of minimum solar activity.
In the method presented here we used GPS measurements that provide TEC values as a function of time. Such GPS-based TEC determinations were first successfully applied to geodetic VLBI by Sardón et al. (sar92 (1992)).
## 2 The Global Positioning System and the TEC.
An introduction to the Global Positioning System (GPS) can be found in Hofmann-Wellenhof et al. (hof97 (1997)). A main use of the GPS system is to determine the position $`(x,y,z,t)`$ of a GPS receiver on Earth’s surface. The system consists of a constellation of 24 satellites broadcasting electromagnetic signals in two narrow frequency bands, a set of monitoring ground sites, the Master Control Station, and GPS receivers. The 24 satellites orbit the Earth in near-circular orbits with a 12 hr period, at a height of about 20,200 km, and an inclination of 55°. The spacecraft are in six orbital planes with four satellites nearly equally spaced along the orbit in each plane. At any moment, from any point on Earth, it is possible to detect signals simultaneously from 7 to 9 of these satellites. Each satellite broadcasts a block of data every 30 seconds, consisting of a description of its orbit and of GPS time, as well as a pseudo-random code every millisecond (coarse-acquisition C/A, for civilian use) and a precision code (P, 266 d period, for military use), usable to determine more accurately the position of the ground receiver.
The oscillators of the satellites generate a fundamental frequency $`\nu _0=10.23`$ MHz, which is the P-code frequency. The C/A-code frequency is $`\nu _0/10`$. The GPS signal is emitted at two frequencies, 154 $`\nu _0`$ and 120 $`\nu _0`$ (L1 and L2, respectively, 1,575.42 MHz and 1,227.60 MHz, or $`\lambda \lambda `$19 and 24.4 cm). L2 carries only the P signal, and L1 both P and C/A signals.
### 2.1 The GPS observables.
The main GPS observable is $`dT`$, the time of transit of the signal from the satellite to the ground receiver. The value of $`dT`$ can be determined, in effect, by comparing the time of broadcasting of the codes P or C/A with the time of receipt of these codes. An observable that corresponds to $`dT`$ can be obtained by cross correlation of the signal received from the satellite at each band with a reference signal generated at the receiver and “tied” to GPS time. In this case, $`dT=\varphi /2\pi \nu `$, where $`\varphi `$ is the total phase change of the signal during propagation from satellite to receiver and $`\nu `$ is the center frequency of this transmitted signal. This observable is the phase delay of the carrier signal.
The GPS observable is affected by the following: the ionosphere, the troposphere, $`2\pi `$ phase ambiguities (equivalent to multiples of 634.75 ps and 814.60 ps, respectively, for L1 and L2), multipath (e.g., from signals that are reflected or scattered into the receiver antennas from nearby objects), different effective location of receivers for L1 and L2, instrumental delays (different for L1 and L2), degraded coding of the signals, and clock errors. The contributions to the observables of the largest of these effects can be sharply reduced by application of suitable techniques (see, e.g., Blewitt ble90 (1990)).
### 2.2 Obtaining the TEC from GPS data.
The GPS observable can be modeled as a function of distance from satellite to receiver, ionospheric delays, tropospheric delays, clock errors, and instrumental phase- and group-delay biases (Sardón et al. sar94 (1994)). The ionospheric term can be estimated by a combination of the L1 and L2 observables. We can denote the TEC for any observation direction as $`I_k^i(t)`$. This TEC is defined along the line of sight from the radio telescope $`k`$ to the radio source $`i`$ and can be expressed approximately as a function $`𝒱`$ (vertical value of TEC) of time $`t`$ and the intersection point $`P_k^i`$ (“ionospheric point”) of the line of sight from $`k`$ to $`i`$ with the (average) F<sub>2</sub> layer of the ionosphere (at an altitude of $`h_{\mathrm{F}_2}=350`$ km), times the obliquity or slant function $`S(e_k^i)`$, defined as the secant of the zenith angle at the ionospheric point (see below for geometry clarification), which is a function of the elevation angle $`e_k^i`$ of the observation:
$$I_k^i(t)=S(e_k^i)𝒱(P_k^i,t).$$
(4)
The positions of the involved sites and the ionospheric points can be expressed in a geocentric coordinate system $`(X,Y,Z)`$ or in a geocentric-solar one $`(\mathrm{\Psi },\chi ,Z^{\prime })`$ with the $`Z^{\prime }`$ axis directed toward the Sun from the Earth center, $`\mathrm{\Psi }`$ the angle in the $`XY`$-plane (measured counterclockwise from $`X`$) and $`\chi `$ the angle with apex at the Earth’s center, measured from the direction to the Sun ($`Z^{\prime }`$) to the direction to $`P_j^i`$. This latter coordinate system is useful since the ionosphere is roughly time-independent in this reference frame. Following Sardón et al. (sar94 (1994)), we replace $`𝒱`$ for a GPS site $`j`$ and satellites $`l`$ by its locally linear approximation in $`P_j^l`$ (ionospheric point towards satellite $`l`$) using the $`(\mathrm{\Psi },\chi ,Z^{\prime })`$-coordinate system:
$$I_j^l(t)=S(e_j^l)[A_j(t)+B_j(t)d\mathrm{\Psi }_j^l(t)+C_j(t)d\chi _j^l]+K^l+K_j.$$
(5)
Here, the coefficients for each site are $`A`$ (0<sup>th</sup> order, vertical), $`B`$ (1<sup>st</sup> order, $`\mathrm{\Psi }`$-direction), $`C`$ (1<sup>st</sup> order, $`\chi `$-direction). We have also introduced the instrumental GPS satellite $`K^l`$ and receiver $`K_j`$ biases. $`A_j(t)`$ is the vertical TEC at site $`j`$, $`d\mathrm{\Psi }_j^l(t)=\mathrm{\Psi }_j^l-\mathrm{\Psi }_j`$, and $`d\chi _j^l=\chi _j^l-\chi _j`$ are the coordinates $`\mathrm{\Psi }`$ and $`\chi `$ of the ionospheric point of the line of sight from $`j`$ towards the satellite $`l`$ minus the corresponding coordinates of the GPS site $`j`$. The coefficients $`A`$, $`B`$, $`C`$, can be determined, and the $`K`$-biases largely removed, using a Kalman filtering method (Herring et al. her90 (1990)) with the data from different satellites $`l`$ (about 8 at a time) to obtain an estimate of the TEC from GPS data. The $`K`$ biases can be due to such effects as errors in the estimation of phase ambiguities and multipath (see Sardón et al. sar94 (1994)). In sum, we produce estimates of the total electron content of the ionosphere using data from a network of GPS stations and a multiplicity of satellites.
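The structure of the fit can be illustrated with an ordinary least-squares stand-in for the Kalman filter (a sketch with synthetic numbers; here the satellite and receiver biases of Eq. (5) are lumped into a single constant $`K`$):

```python
import numpy as np

def fit_tec_model(I_slant, S, dpsi, dchi):
    """Least-squares estimate of (A, B, C, K) in Eq. (5) from simultaneous
    slant-TEC observations of several GPS satellites at one site.
    I_slant : observed slant TEC (TECU);  S : slant factors;
    dpsi, dchi : ionospheric-point offsets from the site (radians)."""
    M = np.column_stack([S, S * dpsi, S * dchi, np.ones_like(S)])
    coeffs, *_ = np.linalg.lstsq(M, I_slant, rcond=None)
    return coeffs  # A (vertical TEC), B, C, combined bias K

# Synthetic example: 8 satellites, true A=20 TECU, B=30, C=-15, K=2.
rng = np.random.default_rng(0)
S = 1.0 / np.cos(rng.uniform(0.0, 1.1, 8))       # slant factors
dpsi, dchi = rng.normal(0, 0.1, 8), rng.normal(0, 0.1, 8)
I = S * (20.0 + 30.0 * dpsi - 15.0 * dchi) + 2.0
print(fit_tec_model(I, S, dpsi, dchi))           # ~ [20, 30, -15, 2]
```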
## 3 Vertical TEC from GPS and TEC for VLBI.
Here we describe the formulas we used to estimate the TEC along the paths of a VLBI observation from the values of the vertical TEC (evaluated by the method described in Sect. 2.2) at a GPS site near each one of the VLBI sites.
Consider a GPS site $`j`$ at latitude $`\zeta _j`$ and longitude $`\lambda _j`$, and a VLBI site $`k`$ at position $`(\zeta _k,\lambda _k)`$ (see Fig. 1). The coordinates of $`P_k^i`$, denoted by $`(\zeta _{P_k^i},\lambda _{P_k^i})`$ at the epoch the radio source is at elevation $`e_k^i`$ and azimuth $`a_k^i`$, are (Klobuchar klo87 (1987)):
$`\zeta _{P_k^i}`$ $`=`$ $`\mathrm{arcsin}(\mathrm{sin}\zeta _k\mathrm{cos}\mathrm{\Delta }_k^i+\mathrm{cos}\zeta _k\mathrm{sin}\mathrm{\Delta }_k^i\mathrm{cos}a_k^i)`$ (6)
$`\lambda _{P_k^i}`$ $`=`$ $`\lambda _k+\mathrm{arcsin}\left({\displaystyle \frac{\mathrm{sin}\mathrm{\Delta }_k^i\mathrm{sin}a_k^i}{\mathrm{cos}\zeta _{P_k^i}}}\right),`$ (7)
where $`(-\pi /2\le \zeta _k\le \pi /2)`$ and the $`\mathrm{arcsin}`$-function in $`\lambda `$ holds for values in the interval $`(-\pi /2,\pi /2)`$, appropriate for Eq. (7) since GPS and VLBI sites are nearly collocated. $`\mathrm{\Delta }_k^i`$ is the angle, measured from the center of the Earth between the line to site $`k`$ and the line to the ionospheric point for radio source $`i`$: $`\mathrm{\Delta }_k^i=\pi /2-e_k^i-\mathrm{arcsin}(\mathrm{\Xi }\mathrm{cos}e_k^i)`$ (see Fig. 2, where $`\mathrm{\Xi }=R_{\oplus }/(R_{\oplus }+h_{\mathrm{F}_2})`$, $`R_{\oplus }`$ is the Earth’s radius, and $`\mathrm{\Xi }\simeq 0.948`$ for $`h_{\mathrm{F}_2}=350`$ km). The local time at the ionospheric point is $`t=(\lambda _{P_k^i}/15)+`$UT hr ($`\lambda `$ in degrees, with $`\lambda `$ positive to the East).
We assume that the vertical TEC $`A_{P_k^i}(t)`$ at the ionospheric point is given in terms of $`A_j(t)`$ by:
$$A_{P_k^i}(t)=A_j\left(t+\frac{\lambda _{P_k^i}-\lambda _j}{15}\right).$$
(8)
This relation effects a longitude correction but ignores any latitude dependence of the TEC values. This approach is reasonable for mid-latitude stations and sources at high declination since GPS sites can be collocated with VLBI sites, or at least placed relatively near to them, and since TEC changes more rapidly with longitude than with latitude.
The slant factor was defined above as the secant of the zenith angle, which is $`\mathrm{arcsin}(\mathrm{\Xi }\mathrm{cos}e_k^i)`$ (see Fig. 2, and Klobuchar klo87 (1987)). Thus, the TEC at the ionospheric point $`P_k^i`$, mapped by the slant factor (Eq. 4) gives
$$I_k^i(t)=\mathrm{sec}\{\mathrm{arcsin}(\mathrm{\Xi }\mathrm{cos}e_k^i)\}A_{P_k^i}(t),$$
(9)
which, with Eq. (2), yields our estimate of the ionospheric delays at the indicated VLBI site. The overall ionospheric effect on a VLBI observable is a simple linear combination of the effects from each of the two sites involved in the observation.
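Eqs. (6)–(9) translate directly into code. A schematic Python version (all angles in radians; the site coordinates and TEC value in the example are arbitrary, and the longitude correction of Eq. (8) is left to the caller):

```python
import numpy as np

XI = 6378.0 / (6378.0 + 350.0)   # R_earth / (R_earth + h_F2), ~0.948

def ionospheric_point(zeta_k, lam_k, elev, azim):
    """Latitude/longitude of the ionospheric point, Eqs. (6)-(7)."""
    delta = np.pi / 2 - elev - np.arcsin(XI * np.cos(elev))
    zeta_p = np.arcsin(np.sin(zeta_k) * np.cos(delta)
                       + np.cos(zeta_k) * np.sin(delta) * np.cos(azim))
    lam_p = lam_k + np.arcsin(np.sin(delta) * np.sin(azim) / np.cos(zeta_p))
    return zeta_p, lam_p

def slant_tec(vertical_tec, elev):
    """Map vertical TEC at the ionospheric point to slant TEC, Eq. (9)."""
    return vertical_tec / np.cos(np.arcsin(XI * np.cos(elev)))

zp, lp = ionospheric_point(np.radians(34.0), np.radians(-108.0),
                           np.radians(30.0), np.radians(45.0))
print(np.degrees(zp), np.degrees(lp), slant_tec(20.0, np.radians(30.0)))
```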
## 4 A case to test the method: VLBI observations of the Draco triangle.
Progress in high precision phase-delay difference astrometry has been made by Ros et al. (ros99 (1999)) through VLBI observations of the triangle formed by the radio sources BL 1803+784, QSO 1928+738 and BL 2007+777, in the Northern constellation of Draco (the Dragon). The observations were made simultaneously at the frequencies of 2.3 and 8.4 GHz at epoch 1991.89 with an intercontinental interferometric array. The angular separations among these radio sources were determined with submilliarcsecond accuracy from a weighted-least-squares analysis of the differenced and undifferenced phase delays. The modeling of these astrometric VLBI observations was sufficiently accurate to estimate reliably the “$`2\pi `$ ambiguities” in the differenced phase delays for source separations of almost 7° on the sky. For such angular distances, this accurate “phase connection” at 8.4 GHz, yielding the phase delays with standard errors well within one phase cycle over the entire session of observations, was demonstrated at an epoch of solar maximum. As in earlier works (e.g., Guirado et al. gui95 (1995), Lara et al. lar96 (1996)), after phase connection the effects of the extended structure of the radio sources were largely removed from the phase-delay observables. The effects of the ionosphere were also mostly removed, via the GPS-based method described in this paper.
In 1991 the number of available GPS sites was small, and only data from Goldstone and Pinyon Flats in the US, and from Herstmonceux and Wettzell in Europe (see Table 1 for details) were available to be used for our experiment. GPS data from the two US sites were used for the VLBI sites at Fort Davis (TX), Pie Town (NM), Kitt Peak (AZ), and Los Alamos (NM) to estimate the TEC for observations at these sites. Similarly, GPS data from both European sites were used for Effelsberg (Germany). The VLBI observations were carried out from 14 hr UT on 20 November to 4 hr UT on 21 November 1991. In Fig. 3 we show the corresponding vertical TEC values for one of the GPS sites on each continent. We see the dusk and night part of the data for Wettzell, and the daylight data for Pinyon Flats.
From the GPS-based estimates of the TEC along the lines of sight for each VLBI site, we calculated the ionospheric contribution for each baseline and epoch of observation. These ionospheric contributions were removed from the VLBI observables. For the phase delays, these contributions ranged in magnitude from 0 to 1.2 ns at 8.4 GHz for intercontinental baselines, and were less than 0.1 ns for continental baselines. The intercontinental baseline lengths range from $`\sim `$7800 to $`\sim `$8300 km, and the differences in local time are $`\sim `$7.5 hr (see Fig. 3). Since the ionospheric effect is the combination of effects for both antennas, the ionospheric delay is quite important for our intercontinental baselines. By contrast, our US-continental baseline lengths range from $`\sim `$200 to $`\sim `$750 km, which corresponds at most to about 30 minutes difference in local time and a much smaller ionospheric effect on the VLBI observables.
Eq. (3) provides the usual way to largely remove the effect of the ionosphere on dual-band VLBI observations. The phase delays from our 2.3 GHz observations could not be freed from $`2\pi `$ ambiguities due to the large scatter in these data. However, unambiguous but less precise group delays were available at both 2.3 and 8.4 GHz. The ionospheric contributions were estimated from these group-delay data. In Fig. 4 we show the comparison of GPS-based and dual-band VLBI-based ionospheric delay estimates at 8.4 GHz. We show four of the ten available baselines as representative examples: two intercontinental and two continental-US ones. Apart from the larger dispersion in the group-delay than in the GPS-based data, this comparison provides a good independent confirmation of the reliability of the GPS-based method for the correction of VLBI data. The error bars for the group delays shown in the figure are the appropriate combination of the statistical standard errors for the data at these frequencies (see Eq. 3). The statistical standard errors for the GPS estimates are each about 0.2 TECU. We assume a much larger standard error – 1.5 TECU – to try to account for possible inaccuracies not estimated with the Kalman filtering, such as incorrect values for $`h_{\mathrm{F}_2}`$, the consequent changes in the mapping function and in the eventual position of the ionospheric point, and other unmodeled effects. Thus, we infer a corresponding contribution of $`\sim `$30 ps to the standard errors of the phase delays at 8.4 GHz. For the data presented in Fig. 4, the root-mean-square of the differences between the results from the two methods is below 0.15 ns for intercontinental baselines and 0.10 ns for continental ones.
## 5 Conclusions
Assuming that the ionosphere can be modeled usefully as a thin shell surrounding the Earth at a height of 350 km, we estimate the TEC for the line of sight from the GPS site to the satellite using ground reception of GPS signals and knowledge of the orbital parameters of the GPS satellites. Having a GPS site near a VLBI site, we can reliably “transfer” this estimate to the TEC for the line of sight from the radio telescope to the radio source. These TEC values allow us to correct the VLBI observables for the effects of the ionosphere for any frequency. We made such corrections for our phase-delay data at 8.4 GHz (Ros et al. ros99 (1999)) for an epoch at which there was a paucity of relevant GPS data. The estimates of the ionospheric delays provided by the VLBI measurements of group delay at 2.3 and 8.4 GHz differ from the corresponding delays obtained from GPS data to within root-mean-square values below 0.15 ns for intercontinental baselines and 0.10 ns for continental ones. Thus, we have shown, in particular, that the GPS determination of TEC can be successfully used in the astrometric analysis of VLBI observations.
The density of the network of GPS sites has increased dramatically since 1991 and the accuracy of the TEC deduced from GPS data has improved significantly. Given such progress, the approximations used in this paper are no longer necessary. Now GPS estimates for the vertical TEC are available from virtually every land location all of the time and thus for every VLBI observation.
The advantages of GPS compared with geostationary beacons, which used Faraday rotation to determine TECs, are notable: global land coverage for GPS is available from geodetic networks (e.g., the International GPS Geodynamics Service); the TEC estimates do not depend on assumptions about the Earth’s magnetic field; L1 and L2 GPS data are available over the internet in standard formats (e.g., RINEX: Receiver INdependent EXchange). We also note that the ionosphere cannot always be represented by a thin shell model with good accuracy; moreover, more accurate models can be devised and suitably parameterized given the tomography-like sampling of the ionosphere provided by the GPS. The biases in the GPS observables have to be properly corrected in the estimates of TECs. Davies & Hartmann (dav97 (1997)) set an upper limit of 3 TECU for the present agreement of GPS TEC with the results from other methods, but of only a few 0.01 TECU for the relative errors of TEC estimates over the course of some hours for any given site. The latter represents relative changes of delay at 8.4 GHz of 0.2 ps, nearly two orders of magnitude smaller than the delay equivalent of a $`2\pi `$ phase ambiguity. With such accuracies and the present network of GPS sites, the removal of most of the effect of the ionosphere from VLBI observations should not be difficult, although much of the smaller-scale ionospheric activity cannot be adequately sampled by GPS. From dual-band difference VLBI astrometry and GPS data, the optical depth of the emission in radio sources can be better studied by comparing brightness distributions obtained independently at each frequency band with respect to the same coordinate system. In sum, the introduction of GPS techniques should greatly improve the scientific results obtained from VLBI observations.
###### Acknowledgements.
E.R. acknowledges a F.P.I. fellowship of the Generalitat Valenciana. We acknowledge the referee, Dr. R.M. Campbell for his very helpful suggestions and remarks. We are grateful to the SOPAC/IGPP, at SIO, University of California, San Diego (US), for kindly providing the GPS data from their GARNER archives, and to Prof. R.T. Schilizzi for encouragement. This work has been partially supported by the Spanish DGICYT grants PB 89-0009, PB 93-0030, and PB 96-0782, and by the U.S. National Science Foundation Grant No. AST 89-02087. |
# Why Occam’s Razor
## 1 INTRODUCTION
Wigner once remarked on “the unreasonable effectiveness of mathematics”, encapsulating in one phrase the mystery of why the scientific enterprise is so successful. There is an aesthetic principle at large, whereby scientific theories are chosen according to their beauty, or simplicity. These then must be tested by experiment — the surprising thing is that the aesthetic quality of a theory is often a good predictor of that theory’s explanatory and predictive power. This situation is summed up in William of Ockham’s maxim, “Entities should not be multiplied unnecessarily”, known as Occam’s Razor.
We start our search into an explanation of this mystery with the anthropic principle. This is normally cast into either a weak form (that physical reality must be consistent with our existence as conscious, self-aware entities) or a strong form (that physical reality is the way it is because of our existence as conscious, self-aware entities). The anthropic principle is remarkable in that it generates significant constraints on the form of the universe. The two main explanations for this are the Divine Creator explanation (the universe was created deliberately by God to have properties sufficient to support intelligent life), or the Ensemble explanation (that there is a set, or ensemble, of different universes, differing in details such as physical parameters, constants and even laws, however, we are only aware of such universes that are consistent with our existence). In the Ensemble explanation, the strong and weak formulations of the anthropic principle are equivalent.
Tegmark introduces an ensemble theory based on the idea that every self-consistent mathematical structure be accorded the ontological status of physical existence. He then goes on to categorize mathematical structures that have been discovered thus far (by humans), and argues that this set should be largely universal, in that all self-aware entities should be able to uncover at least the most basic of these mathematical structures, and that it is unlikely we have overlooked any equally basic mathematical structures.
An alternative ensemble approach is that of Schmidhuber’s — the “Great Programmer”. This states that all possible halting programs of a universal Turing machine have physical existence. Some of these programs’ outputs will contain self-aware substructures — these are the programs deemed interesting by the anthropic principle. Note that there is no need for the UTM to actually exist, nor is there any need to specify which UTM is to be used — a program that is meaningful on UTM<sub>1</sub> can be executed on UTM<sub>2</sub> by prepending it with another program that describes UTM<sub>1</sub> in terms of UTM<sub>2</sub>’s instructions, then executing the individual program. Since the set of halting programs (finite length bitstrings) is isomorphic to the set of whole numbers $`\mathbb{N}`$, an enumeration of $`\mathbb{N}`$ is sufficient to generate the ensemble that contains our universe. In a later paper, Schmidhuber extends his ensemble to non-halting programs, and considers the consequences of assuming that this ensemble is generated by a machine with bounded resources.
Each self-consistent mathematical structure (member of the Tegmark ensemble) is completely described by a finite set of symbols, and a countable set of axioms encoded in those symbols, and a set of rules (logic) describing how one mathematical statement may be converted into another.<sup>1</sup><sup>1</sup>1Strictly speaking, these systems are called recursively enumerable formal systems, and are only a subset of the totality of mathematics, however this seems in keeping with the spirit of Tegmark’s suggestion. These axioms may be encoded as a bitstring, and the rules encoded as a program of a UTM that enumerates all possible theorems derived from the axioms, so each member of the Tegmark ensemble may be mapped onto a Schmidhuber one.<sup>2</sup><sup>2</sup>2In the case of an infinite number of axioms, the theorems must be enumerated using a dovetailer algorithm. The dovetailer algorithm is a means of walking an infinite level tree, such that each level is visited in finite time. An example is that for an $`n`$-ary tree, the nodes on the $`i`$th level are visited between steps $`n^i`$ and $`n^{i+1}-1`$. The Tegmark ensemble must be contained within the Schmidhuber one.
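The dovetailer of the footnote is easy to exhibit concretely; here is a minimal Python generator (an illustration only, not tied to any particular UTM) that visits every node of the infinite binary tree of finite bitstrings:

```python
from itertools import count, product

def dovetail_bitstrings():
    """Enumerate all finite bitstrings level by level: for a binary tree,
    the 2^i nodes on level i are visited between steps 2^i and 2^(i+1)-1."""
    yield ""                                  # the root (empty prefix)
    for level in count(1):
        for bits in product("01", repeat=level):
            yield "".join(bits)

gen = dovetail_bitstrings()
print([next(gen) for _ in range(15)])  # '', '0', '1', '00', '01', ...
```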
An alternative connection between the two ensembles is that the Schmidhuber ensemble is a self-consistent mathematical structure, and is therefore an element of the Tegmark one. However, all this implies is that one element of the ensemble may in fact generate the complete ensemble again, a point made by Schmidhuber in that the “Great Programmer” exists many times, over and over in a recursive manner within his ensemble. This is now clearly true also of the Tegmark ensemble.
## 2 UNIVERSAL PRIOR
In this paper, I adopt a Schmidhuber ensemble consisting of all infinite length bitstrings, denoted $`\{0,1\}^{\infty }`$. I call these infinite length strings descriptions. By contrast to Schmidhuber, I assume a uniform measure over these descriptions — no particular string is more likely than any other. It can be shown that the cardinality of $`\{0,1\}^{\infty }`$ is the same as the cardinality of the reals, $`c`$. This set cannot be enumerated by a dovetailer algorithm; rather, the dovetailer algorithm enumerates all finite length prefixes of these descriptions. Whereas in Schmidhuber’s 1997 paper, the existence of the dovetailer algorithm explains the ease with which the “Great Programmer” can generate the ensemble of universes, I merely assume the pre-existence of all possible descriptions. The information content of this complete set is precisely zero, as no bits are specified. It is ontologically equivalent to Nothing. This has been called the “zero information principle”.
Since some of these descriptions describe self-aware substructures, we can ask the question of what these observers observe. An observer attaches sequences of meanings to sequences of prefixes of one of these strings. A meaning belongs to a countable set, which may be enumerated by the whole numbers. Thus the act of observation may be formalised as a map $`O:\{0,1\}^{\infty }\to \mathbb{N}`$. If $`O(x)`$ is a computable (also known as a recursive) function, then $`O(x)`$ is equivalent to a Turing machine, for which every input halts. It is important to note that observers must be able to evaluate $`O(x)`$ within a finite amount of subjective time, or the observer simply ceases to be. The restriction to computable $`O(x)`$ connects this viewpoint with the original viewpoint of Schmidhuber.
Another interpretation of this scenario is a state machine, possibly finite, consuming bits of an infinite length string. As each bit is consumed, the current state of the machine is the meaning attached to the prefix read so far.
Under the mapping $`O(x)`$, some descriptions encode the same meaning as other descriptions, so one should equivalence class the descriptions. In particular, strings where the bits after some bit number $`n`$ are “don’t care” bits, are in fact equivalence classes of all strings that share the first $`n`$ bits in common. One can see that the size of the equivalence class drops off exponentially with the amount of information encoded by the string. Under $`O(x)`$, the amount of information is not necessarily equal to the length of the string, as some of the bits may be redundant. The sum
$$P_O(s)=\underset{p:O(p)=s}{\sum }2^{-|p|},$$
(1)
where $`|p|`$ means the number of bits of $`p`$ consumed by $`O`$ in returning $`s`$, gives the size of the equivalence class of all descriptions having meaning $`s`$. This measure distribution is known as a universal prior, or alternatively a Solomonoff-Levin distribution, in the case where $`O(x)`$ is a universal prefix Turing machine.
The quantity
$$𝒞_O(x)=-\mathrm{log}_2P_O(O(x))$$
(2)
is a measure of the information content, or complexity of a description $`x`$. If only the first $`n`$ bits of the string are significant, with no redundancy, then it is easy to see $`𝒞_O(x)=n`$. Moreover, if $`O`$ is a universal prefix Turing machine, then the coding theorem assures that $`𝒞(x)\simeq K(x)`$, where $`K(x)`$ is the usual Kolmogorov complexity, up to a constant independent of the length of $`x`$.
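These definitions can be made concrete with a toy observer map (an invented example, not the $`O`$ of any particular universe): read bits up to and including the first 1, and take the number of leading 0s as the meaning. Each meaning $`s`$ then has a unique minimal prefix of length $`s+1`$, so Eq. (1) gives $`P_O(s)=2^{-(s+1)}`$ and $`𝒞_O(s)=s+1`$. A short Python sketch tallies this directly:

```python
import math
from itertools import product

def observe(bits):
    """Toy observer O: scan until the first '1'; the meaning is the number
    of leading '0's.  Returns (meaning, bits consumed), or None if O has
    not yet halted on this prefix."""
    for i, b in enumerate(bits):
        if b == "1":
            return i, i + 1
    return None

def universal_prior(max_len=16):
    """Tally Eq. (1): P_O(s) = sum over minimal prefixes p of 2^-|p|."""
    P = {}
    for n in range(1, max_len + 1):
        for p in product("01", repeat=n):
            r = observe(p)
            if r is not None and r[1] == n:   # O consumes exactly n bits
                s = r[0]
                P[s] = P.get(s, 0.0) + 2.0 ** (-n)
    return P

P = universal_prior()
for s in range(4):
    # P_O(s) = 2^-(s+1), so C_O(s) = -log2 P_O(s) = s + 1 bits
    print(s, P[s], -math.log2(P[s]))
```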
If we assume the self-sampling assumption (essentially, that we expect to find ourselves in one of the universes with greatest measure, subject to the constraints of the anthropic principle), then we should find ourselves in one of the simplest (in terms of $`𝒞_O`$) possible universes capable of supporting self-aware substructures (SASes). This is the origin of physical law — why we live in a mathematical, as opposed to a magical, universe. This is why aesthetic principles, and Occam’s razor in particular, are so successful at predicting good scientific theories. This might also be called the “minimum information principle”.
A final comment to highlight the distinction between this approach and Schmidhuber’s. Schmidhuber assumes that there is a given universal Turing machine $`U`$ which generates the ensemble we find ourselves in. He even uses the term “Great Programmer” to underscore this. Ontologically, this is no more difficult than assuming there is an ultimate theory of everything — i.e., a final set of equations from which all of physics can be derived. Occam’s razor is a consequence of the resource constraints of $`U`$. In my approach, there are no given laws or global interpreter. By considering just the resource constraints of the observer, even in the case of the ensemble having a uniform measure, Occam’s razor still applies.
## 3 THE WHITE RABBIT PARADOX
An important criticism leveled at ensemble theories is what John Leslie calls the failure of induction \[9, §4.69\]. If all possible universes exist, then what is to say that our orderly, well-behaved universe won’t suddenly start to behave in a disordered fashion, such that most inductive predictions would fail in it. This problem has also been called the White Rabbit paradox, presumably in a literary reference to Lewis Carroll.
This sort of issue is addressed by consideration of measure. We should not worry about the universe running off the rails, provided it is extremely unlikely to do so. Note that Leslie uses the term range to mean what we mean by measure. At first consideration, it would appear that there are vastly more ways for a universe to act strangely than for it to stay on the straight and narrow, hence the paradox.
Evolution has taught us to be efficient classifiers of patterns, and to be robust in the presence of errors. It is important to know the difference between a lion and a lion-shaped rock, and to establish that difference in real time. Only a finite number of the description’s bits are processed by the classifier, the remainder being “don’t care” bits. Around each compact description is a cloud of completely random descriptions considered equivalent by the observer. The size of this cloud decreases exponentially with the complexity of the description.
This requirement imposes a significant condition on $`O(x)`$. Formally, each connected component of the preimage $`O^{-1}(s)`$ must be dense, i.e., have nonzero measure, in the space of descriptions.
Turing machines in general do not have this property of robustness against errors. Single bit errors in the input typically lead to wildly different outcomes. However, an artificial neural network, which is a computational model inspired by the brain, does exhibit this robustness — leading to applications such as classifying images in the presence of noisy or extraneous data.
So what are the chances of the laws of physics breaking down, and of us finding ourselves in one of Lewis Carroll’s creations? Such a universe will have a very complex description — for instance the coalescing of air molecules to form a fire breathing dragon would involve the complete specification of the states of some $`10^{30}`$ molecules, an absolutely stupendous amount of information, compared with the simple specification of the big bang and the laws of physics that gave rise to life as we know it. The chance of this happening is equally remote, via Eq. (1).
## 4 QUANTUM MECHANICS
In the previous sections, I demonstrated that formal mathematical systems are the most compressible, and have the highest measure amongst all members of the Schmidhuber ensemble. In this work, I explicitly assume the validity of the Anthropic Principle, namely that we live in a description that is compatible with our own existence. This is by no means a trivial assumption — it is entirely possible that we are inhabiting a virtual reality where the laws of the observed world needn’t be compatible with our existence. However, to date, the Anthropic Principle has been found to be valid.
In order to derive consequences of the Anthropic Principle, one needs to have a model of consciousness, or at very least some necessary properties that conscious observer must exhibit. I will explore the consequences of just two such properties of consciousness.
The first assumption to be made is that observers will find themselves embedded in a temporal dimension. A Turing machine requires time to separate the sequence of states it occupies as it performs a computation. Universal Turing machines are models of how humans compute things, so it is possible that all conscious observers are capable of universal computation. Yet for our present purposes, it is not necessary to assume observers are capable of universal computation, merely that observers are embedded in time.
The second assumption, which is related to Marchal’s computational indeterminism, is that the simple mathematical description selected from the Schmidhuber ensemble describes the evolution of an ensemble of possible experiences. The actual world experienced by the observer is selected randomly from this ensemble. More accurately, for each possible experience, an observer exists to observe that possibility. Since it is impossible to distinguish between these observers, the internal experience of that observer is as though it is chosen randomly from the ensemble of possibilities. This I call the Projection Postulate.
The reason for this assumption is that it allows for very complex experiences to be generated from a very simple process. It is a very generalised form of Darwinian evolution, which exhibits extreme simplicity over ex nihilo creation explanations of life on Earth. Whilst by no means certain, it does seem that a minimum level of complexity of the experienced world is needed to support conscious experience of that world according to the anthropic principle.
This ensemble of possibilities at time $`t`$ we can denote $`\psi (t)`$. Ludwig \[12, D1.1\] introduces a rather similar concept of ensemble, which he equivalently calls state to make contact with conventional terminology. At this point, nothing has been said of the mathematical properties of $`\psi `$. I shall now endeavour to show that $`\psi `$ is indeed an element of a complex Hilbert space, a fact normally assumed as an axiom in conventional treatments of Quantum Mechanics.
The projection postulate can be modeled by a partitioning map $`A:\psi \to \{\psi _a,\mu _a\}`$, where $`a`$ indexes the allowable range of potential observable values corresponding to $`A`$, $`\psi _a`$ is the subensemble satisfying outcome $`a`$ and $`\mu _a`$ is the measure associated with $`\psi _a`$ ($`\sum _a\mu _a=1`$).
Finally, we assume that the generally accepted axioms of set theory and probability theory hold. Whilst the properties of sets are well known, and needn’t be repeated here, the Kolmogorov probability axioms are:
(A1) If $`A`$ and $`B`$ are events, then so is the intersection $`A\cap B`$, the union $`A\cup B`$ and the difference $`A-B`$.
(A2) The sample space $`S`$ is an event, called the certain event, and the empty set $`\emptyset `$ is an event, called the impossible event.
(A3) To each event $`E`$, $`P(E)\in [0,1]`$ denotes the probability of that event.
(A4) $`P(S)=1`$.
(A5) If $`A\cap B=\emptyset `$, then $`P(A\cup B)=P(A)+P(B)`$.
(A6) For a decreasing sequence $`A_1\supset A_2\supset \cdots \supset A_n\supset \cdots `$ of events with $`\bigcap _nA_n=\emptyset `$, we have $`lim_{n\to \infty }P(A_n)=0`$.
Consider now the projection operator $`𝒫_{\{a\}}:V\to V`$, acting on an ensemble $`\psi \in V`$, $`V`$ being the set of all such ensembles, to produce $`\psi _a=𝒫_{\{a\}}\psi `$, where $`a\in S`$ is an outcome of an observation. We have not at this stage assumed that $`𝒫_{\{a\}}`$ is linear. Define addition for two distinct outcomes $`a`$ and $`b`$ as follows:
$$𝒫_{\{a\}}+𝒫_{\{b\}}=𝒫_{\{a,b\}},$$
(3)
from which it follows that
$`𝒫_{A\subset S}`$ $`=`$ $`{\underset{a\in A}{\sum }}𝒫_{\{a\}}`$ (4)
$`𝒫_{A\cup B}`$ $`=`$ $`𝒫_A+𝒫_B-𝒫_{A\cap B}`$ (5)
$`𝒫_{A\cap B}`$ $`=`$ $`𝒫_A𝒫_B=𝒫_B𝒫_A.`$ (6)
These results extend to continuous sets by replacing the discrete sums by integration over the sets with uniform measure. Here, as elsewhere, we use $`\mathrm{\Sigma }`$ to denote sum or integral respectively as the index variable $`a`$ is discrete or continuous.
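For a finite outcome set these identities can be checked mechanically; a small numerical sketch (diagonal 0/1 matrices standing in for the projectors; illustrative only, not part of the derivation):

```python
import numpy as np

S = range(8)                      # discrete outcome set
def proj(A):
    """Projector P_A onto the outcomes in A, as a diagonal 0/1 matrix."""
    return np.diag([1.0 if a in A else 0.0 for a in S])

A, B = {0, 1, 2, 4}, {2, 4, 5}
assert np.allclose(proj(A | B), proj(A) + proj(B) - proj(A & B))   # Eq. (5)
assert np.allclose(proj(A & B), proj(A) @ proj(B))                 # Eq. (6)
assert np.allclose(proj(A) @ proj(B), proj(B) @ proj(A))           # Eq. (6)
assert np.allclose(proj(A), sum(proj({a}) for a in A))             # Eq. (4)
print("projector identities (4)-(6) verified")
```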
Let the ensemble $`\psi \in V\supseteq \{𝒫_A\psi |A\subset S\}`$ be a “reference state”, corresponding to the certain event. It encodes information about the whole ensemble. Denote the probability of a set of outcomes $`A\subset S`$ by $`P_\psi (𝒫_A\psi )`$. Clearly
$$P_\psi (𝒫_S\psi )=P_\psi (\psi )=1$$
(7)
by virtue of (A4). Also, by virtue of Eq. (5) and (A5),
$$P_\psi ((𝒫_A+𝒫_B)\psi )=P_\psi (𝒫_A\psi )+P_\psi (𝒫_B\psi )\quad \mathrm{if}A\cap B=\emptyset .$$
(8)
Assume that Eq. (8) also holds for $`A\cap B\ne \emptyset `$ and consider the possibility that $`A`$ and $`B`$ can be identical. Eq. (8) may be written:
$$P_\psi ((a𝒫_A+b𝒫_B)\psi )=aP_\psi (𝒫_A\psi )+bP_\psi (𝒫_B\psi ),\forall a,b\in \mathbb{N}.$$
(9)
Thus, the set $`V`$ naturally extends by means of the addition operator defined by Eq. (3) to include all linear combinations of observed states, at minimum over the natural numbers. If $`A\cap B\ne \emptyset `$, then $`P_\psi ((𝒫_A+𝒫_B)\psi )`$ may exceed unity, so clearly $`(𝒫_A+𝒫_B)\psi `$ is not necessarily a possible observed outcome. How should we interpret these new “nonphysical” states?
At each moment that an observation is possible, an observer faces a choice about what observation to make. In the Multiverse, the observer differentiates into multiple distinct observers, each with its own measurement basis. In this view, there is no preferred basis.
The expression $`P_\psi ((a𝒫_A+b𝒫_B)\psi )`$ must be the measure associated with $`a`$ observers choosing to partition the ensemble into $`\{A,\overline{A}\}`$ and observing an outcome in $`A`$ and $`b`$ observers choosing to partition the ensemble into $`\{B,\overline{B}\}`$ and seeing outcome $`B`$. The coefficients $`a`$ and $`b`$ must be drawn from a measure distribution over the possible choices of measurement. The most general measure distributions are complex, therefore the coefficients, in general are complex. We can comprehend easily what a positive measure means, but what about complex measures? What does it mean to have an observer with measure $`-1`$? It turns out that these non-positive measures correspond to observers who chose to examine observables that do not commute with our current observable $`A`$. For example if $`A`$ were the observation of an electron’s spin along the $`z`$ axis, then the states $`|+\rangle +|-\rangle `$ and $`|+\rangle -|-\rangle `$ give identical outcomes as far as $`A`$ is concerned. However, for another observer choosing to observe the spin along the $`x`$ axis, the two states have opposite outcomes. This is the most general way of partitioning the Multiverse amongst observers, and we expect to observe the most general mathematical structures compatible with our existence.
The probability function $`P`$ can be used to define an inner product as follows. Our reference state $`\psi `$ can be expressed as a sum over the projected states $`\psi =\sum _{a\in S}𝒫_{\{a\}}\psi \equiv \sum _{a\in S}\psi _a`$. Let $`V^{\prime }=\mathrm{span}(\psi _a)`$ be the linear span of this basis set. Then, $`\forall \varphi ,\xi \in V`$, such that $`\varphi =\sum _{a\in S}\varphi _a\psi _a`$ and $`\xi =\sum _{a\in S}\xi _a\psi _a`$, the inner product $`\langle \varphi ,\xi \rangle `$ is defined by
$$\langle \varphi ,\xi \rangle =\underset{a\in S}{\sum }\varphi _a^{\ast }\xi _aP_\psi (\psi _a).$$
(10)
It is straightforward to show that this definition has the usual properties of an inner product, and that $`\psi `$ is normalized ($`\langle \psi ,\psi \rangle =1`$). The measures $`\mu _a`$ are given by
$`\mu _a=P_\psi (\psi _a)`$ $`=`$ $`\langle \psi _a,\psi _a\rangle `$
$`=`$ $`\langle \psi ,𝒫_a\psi \rangle `$
$`=`$ $`|\langle \psi ,\widehat{\psi }_a\rangle |^2,`$
where $`\widehat{\psi }_a=\psi _a/\sqrt{P_\psi (\psi _a)}`$ is normalised.
Until now, we haven’t used axiom (A6). Consider a sequence of sets of outcomes $`A_0\supset A_1\supset \cdots `$, and denote by $`A\subset A_n\forall n`$ the unique maximal subset (possibly empty), such that $`\overline{A}\cap \bigcap _nA_n=\emptyset `$. Then the difference $`𝒫_{A_i}-𝒫_A`$ is well defined, and so
$`\langle (𝒫_{A_i}-𝒫_A)\psi ,(𝒫_{A_i}-𝒫_A)\psi \rangle `$ $`=`$ $`P_\psi ((𝒫_{A_i}-𝒫_A)\psi )`$
$`=`$ $`P_\psi ((𝒫_{A_i}+𝒫_{\overline{A}}-𝒫_S)\psi )`$
$`=`$ $`P_\psi (𝒫_{A_i\cap \overline{A}}\psi ).`$
By axiom (A6),
$$\underset{n\to \infty }{lim}\langle (𝒫_{A_n}-𝒫_A)\psi ,(𝒫_{A_n}-𝒫_A)\psi \rangle =0,$$
(13)
so $`𝒫_{A_n}\psi `$ is a Cauchy sequence that converges to $`𝒫_A\psi \in V`$. Hence $`V`$ is complete under the inner product (10). It follows that $`V^{\prime }`$ is complete also, and is therefore a Hilbert space.
The most general form of evolution of $`\psi `$ in continuous time is given by:
$$\frac{\mathrm{d}\psi }{\mathrm{d}t}=ℋ(\psi ).$$
(14)
Some people may think that discreteness of the world’s description (i.e., of the Schmidhuber bitstring) must imply a corresponding discreteness in the dimensions of the world. This is not true. Between any two points on a continuum, there are an infinite number of points that can be described by a finite string — the set of rational numbers being an obvious, but by no means exhaustive example. Continuous systems may be made to operate in a discrete way, electronic logic circuits being an obvious example. For the sake of connection with conventional quantum mechanics, we will assume that time is continuous. A discrete time formulation can also be derived, in which case we need a difference equation instead of Eq. (14). Other possibilities also exist, such as the rational numbers example mentioned before. The theory of time scales could provide a means of developing these other possibilities.
Axiom (A3) constrains the form of the evolution operator $`ℋ`$. Since we suppose that $`\psi _a`$ is also a solution of Eq. (14) (i.e. that the act of observation does not change the physics of the system), $`ℋ`$ must be linear. The certain event must have probability of 1 at all times, so
$`0`$ $`=`$ $`{\displaystyle \frac{\mathrm{d}P_{\psi (t)}(\psi (t))}{\mathrm{d}t}}`$
$`=`$ $`\mathrm{d}/\mathrm{d}t\langle \psi ,\psi \rangle `$
$`=`$ $`\langle ℋ\psi ,\psi \rangle +\langle \psi ,ℋ\psi \rangle `$
$`ℋ^{\dagger }`$ $`=`$ $`-ℋ,`$ (15)
i.e. $`ℋ`$ is $`i`$ times a Hermitian operator.
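Numerically, (15) says the generator is anti-Hermitian, which is exactly the condition for norm conservation; a schematic check (with $`ℋ=iH`$ for a randomly generated Hermitian $`H`$):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
M = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
Hherm = (M + M.conj().T) / 2   # Hermitian H, so L = i*H satisfies L^dag = -L
L = 1j * Hherm

psi = rng.normal(size=6) + 1j * rng.normal(size=6)
psi /= np.linalg.norm(psi)

for t in (0.0, 0.5, 2.0, 10.0):
    norm = np.linalg.norm(expm(L * t) @ psi)
    print(t, norm)             # stays 1 to machine precision
```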
## 5 Discussion
A conventional treatment of quantum mechanics (see e.g. Shankar) introduces a set of 4-5 postulates that appear mysterious. In this paper, I introduce a model of observation based on the idea of selecting actual observations from an ensemble of possible observations, and can derive the usual postulates of quantum mechanics aside from the Correspondence Principle.<sup>3</sup><sup>3</sup>3The Correspondence Principle states that classical state variables are represented in the quantum formulation by replacing appropriately $`x\to X`$ and $`p\to -i\hbar d/dx`$. Stenger has developed a theory based on fundamental symmetries that explains the Correspondence Principle. Even the property of linearity is needed to allow disjoint observations to take place simultaneously in the universe. Weinberg experimented with a possible non-linear generalisation of quantum mechanics, but found great difficulty in producing a theory that satisfied causality. This is probably due to the nonlinear terms mixing up the partitioning $`\{\psi _a,\mu _a\}`$ over time. It is usually supposed that causality, at least to a certain level of approximation, is a requirement for a self-aware substructure to exist. It is therefore interesting that relatively mild assumptions about the nature of SASes, as well as the usual interpretations of probability and measure theory, lead to a linear theory with the properties we know of as quantum mechanics. Thus we have a reversal of the usual ontological status between Quantum Mechanics and the Many Worlds Interpretation.
## ACKNOWLEDGMENTS
I would like to thank the following people from the “Everything” email discussion list for many varied and illuminating discussions on this and related topics: Wei Dai, Hal Finney, Gilles Henri, James Higgo, George Levy, Alastair Malcolm, Christopher Maloney, Jaques Mallah, Bruno Marchal and Jürgen Schmidhuber.
In particular, the solution presented here to the White Rabbit paradox was developed during an email exchange between myself and Alastair Malcolm during July 1999, archived on the everything list (http://www.escribe.com/science/theory). Alastair’s version of this solution may be found on his web site at http://www.physica.freeserve.co.uk/p101.htm.
I would also like to thank the anonymous reviewer for suggesting Ludwig’s book. Whilst the intuitive justification in that book is very different, there is a remarkable congruence between the set of axioms chosen there and the ones presented in this paper.
# Gamma-Ray Bursts via Pair Plasma Fireballs from Heated Neutron Stars
## Introduction
It has been speculated for some time that inspiraling neutron stars could provide a power source for cosmological gamma-ray bursts jwg:mr92 ; jwg:piran98 . However, previous Newtonian and post-Newtonian studies jwg:jr96 of the final merger of two neutron stars have found that the neutrino emission time scales are so short that it would be difficult to drive a gamma-ray burst from this source. It is clear that a mechanism is required for extending the duration of energetic neutrino emission. A number of possibilities could be envisioned, for example, neutrino emission powered by accretion shocks, MHD or tidal interactions between the neutron stars, etc. The present study, however, has been primarily motivated by numerical studies of the strong field relativistic hydrodynamics of close neutron star binaries (NSBs) in three spatial dimensions. These studies jwg:wm95 ; jwg:wmm96 ; jwg:mw97 ; jwg:mmw98a suggest that neutron stars in a close binary can experience relativistic compression and heating over a period of seconds. During the compression phase released gravitational binding energy can be converted into internal energy. Subsequently, up to $`10^{53}`$ ergs in thermally produced neutrinos can be emitted before the stars collapse jwg:mw97 . Here we briefly summarize the physical basis of this model and numerically explore its consequences for the development of an $`e^+e^{-}`$ plasma and associated GRB.
In jwg:mw97 properties of equal-mass neutron-star binaries were computed as a function of mass and EOS (Equation of State). From these studies it was deduced that compression, heating and collapse could occur a few seconds before binary merger. Our calculation of the rates of released binding energy and neutron star cooling suggests that interior temperatures as hot as 70 MeV are achieved. This leads to a high neutrino luminosity which peaks at $`L_\nu \sim 10^{53}`$ ergs sec<sup>-1</sup>. This much neutrino luminosity would partially convert to an $`e^+e^{-}`$ pair plasma above the stars as is also observed above the nascent neutron star in supernova simulations jwg:wm93 .
## Neutrino Annihilation and Pair Creation
Having outlined a mechanism by which neutrino luminosities of 10<sup>52</sup> to 10<sup>53</sup> ergs/sec may arise from binary neutron stars approaching their final orbits, we must calculate the efficiency of conversion of neutrino pairs into an electron pair plasma via $`\nu \overline{\nu }\to e^+e^{-}`$. Here we argue that the efficiency for converting these neutrinos into pair plasma is probably quite high. Neutrinos emerging from the stars will deposit energy outside the stars predominantly by $`\nu \overline{\nu }`$ annihilation to form electron pairs. A secondary mechanism for energy deposition is the scattering of neutrinos from the $`e^+e^{-}`$ pairs. Strong gravitational fields near the stars will bend the neutrino trajectories. This greatly enhances the annihilation and scattering rates jwg:sw99 . For our employed neutron-star equations of state the radius to mass ratio is typically between $`R/M\sim 3`$ and 4 just before stellar collapse (in units $`G=c=1`$). In jwg:sw99 it is shown that $`\nu \overline{\nu }`$ annihilation rates will be enhanced by a factor of 8 to 28, depending on $`R/M`$, due to relativistic effects. From Eq. 24 of jwg:sw99 we obtain,
$$\frac{\dot{Q}}{L_\nu }\approx 0.03F(R/M)L_{53}^{5/4}.$$
(1)
Thus, the efficiency of annihilation ranges from ∼0.1 to $`0.84\times L_{53}^{5/4}`$. For the upper range of luminosity the efficiency is quite large. Also, using the supernova code of Wilson and Mayle jwg:wm93 we calculate the entropy per baryon of the plasma to be as high as $`10^6`$; the resulting pair plasma will therefore have low baryon loading.
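As an illustration only, the short Python sketch below evaluates Eq. (1) over the ranges quoted above; the helper name and the choice to treat $`F`$ and $`L_{53}`$ as independent inputs are ours, not part of the original calculation.

```python
# Sketch: evaluate the annihilation efficiency of Eq. (1),
# Q_dot / L_nu ~ 0.03 F(R/M) L_53^(5/4), treating the relativistic
# enhancement factor F as a free input in the quoted range 8-28.
# L_53 is the neutrino luminosity in units of 10^53 erg/s.

def annihilation_efficiency(F: float, L_53: float) -> float:
    """Fraction of the neutrino luminosity deposited as e+e- pairs."""
    return 0.03 * F * L_53 ** 1.25

if __name__ == "__main__":
    for F in (8.0, 28.0):            # enhancement range quoted above
        for L_53 in (0.1, 1.0):      # L_nu = 10^52 and 10^53 erg/s
            print(f"F={F:.0f}, L_53={L_53:.1f} -> "
                  f"efficiency ~ {annihilation_efficiency(F, L_53):.3f}")
```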
## Pair Plasma Expansion and Shock with the ISM
Having determined the initial conditions of the hot $`e^+e^{-}`$ pair plasma near the surface of a neutron star, we wish to follow its evolution and characterize the observable gamma-ray emission. To study this we have developed a spherically symmetric, general relativistic hydrodynamic computer code to track the flow of baryons, $`e^+e^{-}`$ pairs, and photons. For the present discussion we consider the plasma deposited at the surface of a $`1.45M_{\odot }`$ neutron star with a radius of 10 km. Discussion of this code can be found in jwg:swm97 ; jwg:swm00 . In those papers the emission from an expanding fireball was studied. In Figure 1 it is shown that the resulting emission spectrum and $`\gamma `$-ray emission efficiency $`E_\gamma /E_{tot}`$ depend strongly upon the entropy per baryon of the plasma deposited near the surface of the neutron stars; entropies of $`\lesssim 10^6`$ resulted in weak emission, with most of the original energy manifesting itself as kinetic energy of the baryons. Thus, for the low entropy per baryon fireballs ($`s\sim 10^5`$–$`10^6`$) produced by NSBs it is necessary to examine the emission due to the interaction of the relativistically expanding baryon wind with the interstellar medium (ISM). We find that these baryon winds typically have a Lorentz factor $`\gamma \sim 300`$ and a total energy $`\sim 10^{52}`$ ergs.
After becoming optically thin and decoupling from the photons, the matter component of the fireball continues to expand and interact with the ISM via collisionless shocks. As the ISM is swept up, the matter decelerates. We model this process as an inelastic collision between the expanding fireball and the ISM, as in, for example, jwg:piran98 . We assume that the absorbed internal energy is immediately radiated away. From this we construct a simple picture of the emission due to the matter component of the fireball “snowplowing” into the ISM of baryon number density $`n`$.
We have constructed an analytic formula for the luminosity as a function of time jwg:swm00 of the fireball plowing into the ISM. We show a plot of this function in Figure 2 for a range of ISM densities. Defining $`t_{max}`$ as the time of maximum luminosity,
$$L(t)\propto \{\begin{array}{cc}t^2\hfill & \text{free expansion phase }(t<t_{max})\hfill \\ t^{-10/7}\hfill & \text{deceleration phase }(t>t_{max})\text{.}\hfill \end{array}$$
(2)
This luminosity curve has the so-called “FRED” (Fast Rise, Exponential Decay) profile which is characteristic of real bursts.
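For concreteness, the following sketch evaluates the broken power law of Eq. (2); only the slopes come from the text, while the normalization and break time are arbitrary placeholders.

```python
import numpy as np

# Sketch of the broken power-law light curve of Eq. (2), assuming only the
# slopes quoted there; the normalization L_max and break time t_max are free
# parameters here, not values from the text.

def fireball_luminosity(t, t_max=1.0, L_max=1.0):
    """FRED-like profile: L ~ t^2 rise, L ~ t^(-10/7) decay."""
    t = np.asarray(t, dtype=float)
    rise = L_max * (t / t_max) ** 2
    decay = L_max * (t / t_max) ** (-10.0 / 7.0)
    return np.where(t < t_max, rise, decay)

t = np.logspace(-2, 2, 9)          # times in units of t_max
print(fireball_luminosity(t))
```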
### Synchrotron Shock Spectrum
Using the theory of synchrotron shocks (e.g. jwg:spn98 ) we can construct a spectrum as shown in Figure 3. To model the synchrotron spectrum there are three free parameters: $`ϵ_B`$ and $`ϵ_e`$, the fractions of baryonic kinetic energy deposited into the magnetic field and the electrons respectively, and $`n`$, the number density of baryons in the ISM. In these calculations we assume $`ϵ_B=ϵ_e=1/4`$. As shown in Figure 3, a reasonable ISM density of 1 baryon/cm<sup>3</sup> gives a peak in the $`\nu L_\nu `$ spectrum at $`\sim 100`$ keV, in agreement with observations. Calculations of the efficiency show that 75% of the energy is emitted at photon energies of 10 keV and above jwg:swm00 .
## Conclusions
In this contribution we have argued that heated neutron stars (perhaps heated by stellar compression in close neutron-star binaries) are viable candidates for the production of large, high entropy per baryon, $`e^+e^{-}`$ pair plasma fireballs, and thus for the creation of gamma-ray bursts. We find that fireballs of total energies $`E\sim 10^{51}`$ to $`3\times 10^{52}`$ ergs and entropies per baryon $`s>10^5`$ are possible. Also, this model gives a power-law spectrum that peaks at hundreds of keV and has an overall efficiency of 10–20%.
Work performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under contract W-7405-ENG-48. J.R.W. was partly supported by NSF grant PHY-9401636. Work at University of Notre Dame supported in part by DOE grant DE-FG02-95ER40934, NSF grant PHY-97-22086, and by NASA CGRO grant NAG5-3818.
# The Stellar Population Histories of Local Early-Type Galaxies. I. Population Parameters
## 1. Introduction
This paper is the first in a series on the stellar populations of local field and group elliptical galaxies based on the high-quality spectral data of González (1993; G93). The present paper concentrates on deriving improved stellar population parameters by correcting existing population models for the effects of *non-solar abundance ratios*. The major roadblock to population synthesis models of elliptical galaxies is the fact that the effects of age and metallicity are *nearly degenerate* in the spectra of old stellar populations (Faber (1972), 1973; O’Connell (1980); Rose (1985); Renzini (1986)). However, it was early noted that certain spectral features are more sensitive to age than metallicity (e.g., the Balmer lines \[O’Connell (1980); Rabin (1982); Burstein et al. (1984); Rose (1985)\], and Sr II $`\lambda `$4077 \[Rose (1985)\]), and hope grew that such features might be able to break the degeneracy if accurately calibrated. (At about the same time, several workers were also using Balmer lines to discover strong bursts of star formation in so-called “E+A” or “post-starburst” galaxies \[Dressler and Gunn (1983); Couch & Sharples (1987); Schweizer et al. (1990)\], but these applications always implicitly assumed solar metallicity.)
Our ability to decouple age and metallicity in integrated spectra has greatly improved over the last decade, due to three developments. In the late 1980’s, interior models of super-solar-metallicity stellar evolution became available (e.g., VandenBerg (1985), VandenBerg & Bell (1985), VandenBerg & Laskarides (1987); Bertelli et al. (1994)). Next, the Lick/IDS stellar absorption-line survey provided empirical polynomial fitting functions for a set of standardized absorption-line indices as a function of stellar temperature, gravity, and metallicity (Gorgas et al. (1993); Worthey et al. (1994)). Finally, an extensive grid of theoretical model atmospheres and stellar flux distributions was provided by Kurucz (1992) for stars over a wide range of temperatures and metallicities. With these three ingredients, it finally became possible to compute absorption-line strengths from first principles for single-burst stellar populations (SSPs) of a given age and metallicity (Worthey (1992), 1994).
Using such models, Worthey showed that the age-metallicity degeneracy was actually worse than suspected: a factor of *two* uncertainty in the metallicity of a galaxy mimics a factor of *three* uncertainty in its age at fixed color or metal-line strength, the so-called “3/2 law”. The law implies that such commonly used “age” indicators as colors and metal-line strengths are by themselves useless (although they still are widely used). At the same time, the Worthey models also provided a quantitative tool to break the degeneracy (see also Worthey & Ottaviani (1997)). A Balmer index plotted versus a metal line (or color) yields a two-dimensional theoretical grid; the equivalent single-burst age and metallicity for a population can be read off from its location in this grid. Tests of the method on composite stellar populations will be demonstrated in Trager et al. (1999; Paper II), where it is shown that these single-stellar-population (SSP) equivalent parameters correspond approximately to the *luminosity-weighted* vector addition of populations in the index diagrams. A galaxy’s age determined from its integrated spectrum is thus quite sensitive to *recent* star formation, and hence to the epoch and strength of its last major dissipative merger or accretion event.
While Worthey models validated use of the Balmer lines, they also showed that extremely accurate Balmer data would be needed. To our knowledge, the line-strength data of González (1993) are still the only published data on a *diversified* sample of local E galaxies that are adequate for this purpose. Applying early Worthey models to his data, González found that blue, weak-lined ellipticals in his sample tended to have young ages, while red, strong-lined ellipticals had older ages. In contrast, the metallicity spread was fairly small, less than a few tenths of a dex. This result seemed to imply (G93, Faber et al. (1995)) that age was the major cause of the well known color/line-strength relation in the G93 sample, not metallicity as in the classic picture (Baum (1959); McClure & van den Bergh (1968); Spinrad & Taylor (1971); Faber (1972), 1973). The large age spread in G93 galaxies was later confirmed by Trager (1997) and by Tantalo, Chiosi & Bressan (1998a; TCB98) using later stellar population models.
Excellent line strengths have also been measured for E and S0 galaxies in the Fornax cluster by Kuntschner & Davies (1998) and Kuntschner (1998). Fornax turns out to be the reverse of the G93 sample in showing a larger spread in metallicity than age; the dense cluster environment of Fornax may be the key difference. A goal of the present series of papers is to explore the relative importance of age versus metallicity in driving the color and line strength relations of ellipticals in different environments (see Paper II).
Although the data of G93 shed hope on solving the age vs. metallicity problem, they brought another simmering problem to the fore, namely, non-solar abundance ratios. Enhancement of Mg relative to Fe had been suggested by O’Connell (1976) and Peletier (1989) and shown to be widespread in the Lick/IDS ellipticals by Worthey, Faber & González (1992). However, the high-quality data of G93 offered a great improvement for hard-to-measure weak Fe lines, and, using them, Trager (1997) showed that metallicities deduced from Mg were indeed considerably higher than those deduced from Fe. Other elements such as Na, C, N, and possibly O are also probably enhanced in giant ellipticals (Worthey (1998)). Because the Worthey models are not designed for non-solar abundance ratios, applying them to different metal line features in elliptical spectra gives inconsistent ages and, especially, abundances. The progress promised by the G93 data thus suddenly came to a full stop.
The present paper addresses the problem of non-solar abundance ratios in a rough but hopefully satisfactory way. On the one hand, the general effects of non-solar ratios on evolutionary isochrones are now beginning to be understood (Salaris, Chieffi & Straniero (1993); Weiss, Peletier & Matteucci (1995); Salaris & Weiss (1998); Bressan, priv. comm.; see Tantalo, Chiosi & Bressan 1998a ). Second, the responses of nearly all the Lick/IDS indices to non-solar element ratios have been modeled by Tripicco and Bell (1995; TB95). The latter prove crucial, and it is really these responses that open the way forward. Using both inputs, reasonable corrections to the W94 indices for non-solar ratios can be estimated for the first time. The corrected models are used here to derive three SSP-equivalent population parameters for each galaxy—age, mean metallicity, and mean element “enhancement ratio.” Future papers will use these parameters to study stellar populations as a function of galaxy type, determine correlations among age, metallicity, enhancement, and other variables, and measure radial population gradients.
Other groups (Weiss, Peletier & Matteucci (1995); Greggio (1997); Trager (1997)) have also attempted to interpret G93 data in terms of non-solar abundance ratios, but their approaches were more ad hoc. Inferred metallicities and enhancements both tend to be larger than what we find here. The most similar analysis so far is by Tantalo, Chiosi & Bressan (1998a; TCB98), building on previous work by that group (Bressan, Chiosi, & Tantalo (1996)). However, these authors use different response functions from ours (and in fact do not correct the Fe index at all for non-solar ratios). Their results consequently differ, and a section is devoted to comparing our results to their work (Sec. 6.1).
We note briefly that Balmer-line equivalent widths might be spuriously contaminated by light from blue horizontal-branch (BHB) stars or blue straggler stars (BSS) (e.g., Burstein et al. (1984); Lee (1994); Faber et al. (1995); Trager (1997)). These possibilities are discussed in Section 5.1. To anticipate the conclusions, we believe that current data do not support the existence of large numbers of BHB and BSS stars in giant elliptical galaxies, and we thus conclude that the SSP-equivalent ages derived here for both young and old ellipticals must be substantially correct. Likewise, reduction of Balmer indices by emission fill-in, though present, cannot change the derived ages very much. Thus, despite efforts, we have been unable to find any explanation for the wide range of Balmer line strengths in the G93 galaxies other than a *wide range of SSP-equivalent ages.* This is our principal conclusion.
The outline of this paper is as follows: Section 2 presents absorption-line data for the G93 galaxies. Section 3 presents a brief description of the Worthey (1994) models; their extension to non-solar abundance ratios using the results of TB95; the final choice of elements for inclusion in the enhanced element group; the method for determining the stellar population parameters from the models; and the final population parameters for the G93 sample. Section 4 briefly presents the parameters for the G93, both central and global, and their distributions. Section 5 discusses the assumptions, in particular the use of $`\mathrm{H}\beta `$ as an age indicator, and examines all known uncertainties in the age, metallicity, and abundance-ratio scales and zeropoints. Section 6 presents evidence from other absorption-line strength studies for the presence of intermediate-age stellar populations in elliptical galaxies; it also compares in detail our results to those of TCB98. Two appendices discuss the effect of changing isochrones in the models and the effect of using different prescriptions for emission and velocity dispersion corrections to $`\mathrm{H}\beta `$.
## 2. Data
### 2.1. The galaxy sample
The G93 galaxy sample was not selected according to quantitative criteria but was rather chosen with the aim of covering *relatively uniformly* the full range of color, line strength, and velocity dispersion shown by local elliptical galaxies. As such, it contains more dim, blue, weak-lined, low-dispersion galaxies than would be found in a magnitude-limited sample. In that sense the G93 sample may more closely resemble a *volume*-limited sample, but this has not been established quantitatively.
The original sample in G93 consisted of 41 galaxies, of which 40 are included here. NGC 4278 has been discarded because of its strong emission. Table 1 presents morphologies, positions, and heliocentric redshifts. All galaxies are classified as elliptical (or compact elliptical) in the RC3 (de Vaucouleurs et al. (1991)), the RSA (Sandage & Tammann (1987)), or the Carnegie Atlas (Sandage & Bedke (1994)) except for NGC 507 and NGC 6703, both classified as SA0 in the RC3 but not cataloged in the RSA or the Carnegie Atlas. NGC 224 (the bulge of M 31) is also included.
The environmental distribution of the G93 sample bears comment. Group assignments and approximate group richnesses may be found for nearly all galaxies in Faber et al. (1989). Most of the G93 galaxies are in poor groups, a few are quite isolated (there are no other galaxies in the RC3 within 1 Mpc projected distance and $`\pm 2000\mathrm{km}\mathrm{s}^{-1}`$ of NGC 6702, for example \[Colbert, Mulchaey & Zabludoff, in prep.\]), and six are members of the Virgo cluster. Only one is in a rich cluster (NGC 547, in Abell 194). We therefore refer to the galaxies in this sample as local “field” ellipticals, given the low-density environments of most of them. Environmental effects are discussed in more detail in Paper II.
### 2.2. G93 indices: calibrations and corrections
The Lick/IDS indices were introduced by Burstein et al. (1984) to measure prominent absorption features in the spectra of old stellar populations in the 4100–6300 Å region. A large and homogeneous database of stellar and galaxy spectra was assembled (Worthey et al. (1994); Trager et al. (1998), hereafter TWFBG98) with the Image Dissector Scanner at Lick Observatory (IDS; Robinson & Wampler (1972)). A description of the Lick/IDS system and its application to stellar and galaxy spectra is given in those papers.
González (1993) measured Lick/IDS indices with a different spectrograph setup, at higher dispersion, and over a restricted spectral range (4700–5500 Å). The four best indices in his wavelength interval are $`\mathrm{H}\beta `$, $`\mathrm{Mg}b`$, Fe5270, and Fe5335, which we use in this paper. The bandpasses of these four indices are given in Table 2, and the precise index definitions are given in G93, Worthey et al. (1994), and TWFBG98.
We use a combined “iron” index, $`\mathrm{Fe}`$, in this work, which has smaller errors than either Fe index separately and is defined as follows:
$$\mathrm{Fe}\frac{\mathrm{Fe5270}+\mathrm{Fe5335}}{2}.$$
(1)
It has the convenient property of being sensitive primarily to \[Fe/H\] (see Sec. 3.1.2). Although Mg<sub>2</sub> has also become a standard “metallicity” indicator for the integrated spectra of galaxies, we do not use it to determine stellar population parameters. G93 was unable to transform his observations of this broad index (or of Mg<sub>1</sub>) accurately onto the Lick/IDS system due to chromatic focus variations in his spectrograph, coupled with the steep light gradient in the central regions of most ellipticals (Fisher et al. 1995 avoided Mg<sub>2</sub> for the same reason). We prefer to use the narrower index $`\mathrm{Mg}b`$, which is not affected by this problem.
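As a minimal illustration of the $`\mathrm{Fe}`$ definition of Eq. (1), with simple quadrature error propagation assuming independent errors on the two Fe indices (the numerical values below are invented), one might compute:

```python
import math

# Sketch of the <Fe> index of Eq. (1), with quadrature error propagation;
# index values and errors here are made-up illustrative numbers (Angstroms).

def mean_fe(fe5270, fe5335, err5270=0.0, err5335=0.0):
    value = 0.5 * (fe5270 + fe5335)
    # Averaging two independent measurements halves the combined error,
    # which is why <Fe> has smaller errors than either index separately.
    error = 0.5 * math.hypot(err5270, err5335)
    return value, error

print(mean_fe(2.80, 2.45, 0.06, 0.07))
```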
#### 2.2.1 Velocity-dispersion corrections
The observed spectrum of a galaxy is a convolution of the integrated spectrum of its stellar population with the line-of-sight velocity distribution function of its stars. Indices measured for broad-line galaxies are therefore too weak compared to unbroadened standard stars. TWFBG98 statistically corrected the Lick/IDS indices for this effect in the following way: individual stellar spectra of a variety of spectral types (plus M 32) were convolved with Gaussian broadening functions of increasing widths and their indices were remeasured. A smooth multiplicative correction as a function of velocity dispersion was determined separately for each index and applied to the galaxy data.
G93 used a more sophisticated technique, taking advantage of the higher resolution and signal-to-noise of his data. His stellar library was used to synthesize a summed stellar template representing a best fit to the spectrum of each galaxy. Indices were measured from the unbroadened template and again from the broadened template, generating a velocity dispersion correction for each galaxy that was tuned to its spectral type. For $`\mathrm{Mg}b`$, Fe5270, and Fe5335, the mean multiplicative corrections of G93 are very similar to those of TWFBG98 (compare his Figure 4.1 with Figure 3 of TWFBG98). However, for $`\mathrm{H}\beta `$, the correction of G93 is flat or even *negative*, whereas the correction of TWFBG98 is always positive and reaches the value 1.07 at $`\sigma =300\mathrm{km}\mathrm{s}^{-1}`$. Use of the TWFBG98 correction increases $`\mathrm{H}\beta `$ over G93 and leads to slightly younger ages. In what follows, we use the G93 correction to remain consistent with his published data but explore the effects of the TWFBG98 correction in Appendix B. The data marginally appear to favor TWFBG98, but the differences are not large.
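A toy version of the broaden-and-remeasure procedure might look as follows; the spectrum, band limits, and index routine below are deliberately simplified stand-ins (real Lick indices are measured against flanking pseudo-continua, which drive much of the correction), so only the broaden, remeasure, and take-the-ratio logic is meant to carry over.

```python
import numpy as np

# Sketch: broaden an unbroadened (template) spectrum by a Gaussian of width
# sigma, remeasure a toy index, and form the multiplicative correction
# C(sigma) = I(0) / I(sigma) to be applied to observed galaxy indices.

def gaussian_broaden(wave, flux, sigma_kms):
    c = 2.998e5                                          # km/s
    dv = c * np.median(np.diff(wave)) / np.median(wave)  # km/s per pixel
    sig_pix = sigma_kms / dv
    half = 4 * int(sig_pix + 1)
    x = np.arange(-half, half + 1)
    kern = np.exp(-0.5 * (x / sig_pix) ** 2)
    return np.convolve(flux, kern / kern.sum(), mode="same")

def toy_index(wave, flux, band=(4847.0, 4877.0)):
    """Toy equivalent width (A) over a central band against a unit continuum."""
    m = (wave > band[0]) & (wave < band[1])
    dw = wave[1] - wave[0]
    return float(np.sum(1.0 - flux[m]) * dw)

wave = np.linspace(4800.0, 4920.0, 1200)
template = 1.0 - 0.6 * np.exp(-0.5 * ((wave - 4861.3) / 2.0) ** 2)  # fake line
i0 = toy_index(wave, template)
for sigma in (100.0, 200.0, 300.0):
    i_broad = toy_index(wave, gaussian_broaden(wave, template, sigma))
    print(f"sigma={sigma:.0f} km/s: correction factor C = {i0 / i_broad:.3f}")
```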
#### 2.2.2 Emission corrections
G93 noted that \[O III\] $`\lambda \lambda 4959,5007`$ are clearly detectable in about half of the nuclei in his sample and that most of these galaxies also have detectable $`\mathrm{H}\beta `$ emission (see his Figure 4.10). For galaxies in his sample with strong emission, $`\mathrm{H}\beta `$ is fairly tightly correlated with \[O III\] such that EW($`\mathrm{H}\beta _{\mathrm{em}}`$)/EW(\[O III\]$`\lambda 5007`$) $`\approx 0.7`$. A statistical correction of
$$\mathrm{\Delta }\mathrm{H}\beta =0.7\mathrm{EW}([\mathrm{O}\mathrm{III}]\lambda 5007)$$
(2)
was therefore added to $`\mathrm{H}\beta `$ to correct for this residual emission.
We have examined the accuracy of this correction by studying $`\mathrm{H}\beta `$/\[O III\] among the G93 galaxies, supplemented by additional early-type galaxies from the emission-line catalog of Ho, Filipenko & Sargent (1997). The sample was restricted to include only normal, non-AGN Hubble types E through S0$``$, and to well measured objects with $`\mathrm{EW}(\mathrm{H}\alpha )>1.0`$ Å. For 27 galaxies meeting these criteria, $`\mathrm{H}\beta `$/\[O III\] varies from 0.33 to 1.25, with a median value of 0.60. This suggests that a better correction coefficient in Equation 2 might be 0.6 rather than 0.7, and thus that the average galaxy in G93 is slightly overcorrected. For a median \[O III\] strength through the G93 $`r_e/8`$ aperture of $`0.17`$ Å, the error would be about $`0.02`$ Å, or 3% in age. This systematic error for a typical galaxy is negligible compared to other sources of error in the ages (see Table 7). Random errors due to scatter in the ratio are about three times larger but are still small.
Carrasco et al. (1996) report no correlation between $`\mathrm{H}\beta `$ and \[O III\] emission in their sample of early-type galaxies, but give no data. Their claim is explored in Appendix B, which repeats our calculations but with no $`\mathrm{H}\beta `$ correction. The ages of a few strong-\[O III\] galaxies are increased, as expected, but the broad conclusions of this work are unaffected.
No correction for \[N II\] emission has been made to $`\mathrm{Mg}b`$, although this has been suggested as a sometimes significant contributor to this index (by increasing the flux in the red sideband; Goudfrooij & Emsellem (1996)). Only NGC 315 and NGC 1453 would be affected (see G93).
Table 3a presents final corrected index strengths, velocity dispersion corrections, and emission corrections for measurements through a central $`r_e/8`$ aperture; Table 3b presents similar data for a global $`r_e/2`$ aperture. All values are taken directly from G93. The aperture index strengths are weighted averages of the major and minor axis profile data, computed so as to mimic what would be observed through the indicated circular aperture (see G93 for details).
## 3. SSP-equivalent stellar population parameters
### 3.1. Method
#### 3.1.1 Solar-abundance ratio models of Worthey (1994)
SSP-equivalent population parameters have been derived by matching observed line strengths of $`\mathrm{Mg}b`$, $`\mathrm{Fe}`$, and $`\mathrm{H}\beta `$ to updated single-burst stellar population (SSP) models of W94 (available at http://astro.sau.edu/~worthey/; “Padova” isochrones by Bertelli et al. (1994) are explored in Appendix A). The models of W94 depend on two adjustable parameters—metallicity and single-burst age—and one fixed parameter, the initial mass function (IMF) exponent, here chosen to have the Salpeter value. For reasons stated below, we believe that the basic models of W94 have essentially solar abundance ratios; we will presently adjust these models to allow for non-solar ratios and, in the process, derive a third adjustable parameter, the non-solar enhancement ratio, $`[\mathrm{E}/\mathrm{Fe}]`$. The W94 models are reviewed briefly here, and the reader is referred to Worthey (1994) for more details.
The models incorporate three ingredients: stellar evolutionary isochrones, a stellar SED library, and absorption-line strengths. From the bottom of the main sequence to the base of the red-giant branch (RGB), the models use the isochrones of VandenBerg and collaborators (VandenBerg (1985); VandenBerg & Bell (1985); VandenBerg & Laskarides (1987)). These are mated to red giant branches from the Revised Yale Isochrones (Green, Demarque & King (1987)) by shifting the latter in $`\mathrm{\Delta }\mathrm{log}L`$ and $`\mathrm{\Delta }\mathrm{log}T_e`$ to match at the base of the RGB. Extrapolations are made to cover a wide range of ($`Z`$, $`Y`$, age) assuming that $`Z_{\odot }=0.0169`$ and $`Y=0.228+2.7Z`$.
The SED library was constructed using the model atmospheres and SEDs of Kurucz (1992) for stars hotter than 3750 K, and model SEDs of Bessell et al. (1989, 1991) and observed SEDs from Gunn & Stryker (1983) for cooler M giants.<sup>1</sup>There is a systematic color offset in the Kurucz (1992) models when compared with the empirical colors of Johnson (1966), the Kurucz (1992) models being too red by $`0.06`$ mag in $`(B-V)`$ (but not in other colors; W94). All model $`(B-V)`$ colors in this series are corrected for this offset.
Polynomial fitting functions from Worthey et al. (1994) for the Lick/IDS indices are used as the basis of the model absorption-line strengths. Metal-rich stars in the Lick/IDS library are a random sample of metal-rich stars in the solar neighborhood; since evidence suggests that such stars have essentially solar ratios of O, Mg, Na, and other key elements relative to Fe (Edvardsson et al. (1993)), we assume that the line-strengths produced by the metal-rich models of W94 reflect *solar-abundance ratios*.
To construct a model of a given age and metallicity, the appropriate stellar isochrone is first selected. Each star on the isochrone is assigned an SED from the flux library and a set of absorption-line strengths from the Lick/IDS fitting functions. Final model outputs are the integrated fluxed SED (from which colors and magnitudes can be derived) and absorption-line strengths on the Lick/IDS system.
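Schematically, and only to first order for weak, narrow features, the integrated index is then a continuum-flux-weighted mean over the isochrone; the three-star “isochrone” and all numbers below are illustrative stand-ins, not W94 values.

```python
import numpy as np

# Schematic of the model construction described above: each isochrone "star"
# carries a continuum flux (from the SED library) and an index value (from the
# fitting functions), and the integrated index is approximately the
# continuum-flux-weighted mean of the stellar indices. This linear weighting
# is an approximation valid only for weak, narrow features.

star_flux  = np.array([0.5, 1.0, 3.0])   # continuum flux near the feature
star_index = np.array([2.6, 2.2, 1.4])   # e.g. H-beta (A): dwarf/turnoff/giant

ssp_index = np.sum(star_flux * star_index) / np.sum(star_flux)
print(f"integrated index ~ {ssp_index:.2f} A")
```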
The ability of the W94 models to break the age-metallicity degeneracy is illustrated in Figures 1 and 2, which plot $`\mathrm{Mg}b`$ and $`\mathrm{Fe}`$ versus $`\mathrm{H}\beta `$ for the G93 galaxies. Model grids from W94 are overplotted. Both line strength pairs break the degeneracy, but $`\mathrm{Fe}`$-$`\mathrm{H}\beta `$ does so more than $`\mathrm{Mg}b`$-$`\mathrm{H}\beta `$ because $`\mathrm{Fe}`$ is less temperature sensitive than $`\mathrm{Mg}b`$. Metallicities inferred from $`\mathrm{Mg}b`$ are clearly higher than from $`\mathrm{Fe}`$, reflecting the probable element enhancement in \[Mg/Fe\]. Low-$`\mathrm{H}\beta `$ galaxies tend to fall off the grid to high ages in the $`\mathrm{Fe}`$-$`\mathrm{H}\beta `$ diagram, especially through the $`r_e/2`$ aperture (Figure 2b). Most of this effect is removed when $`\mathrm{Fe}`$ is corrected for depressed \[Fe/H\] (see below), and any small remainder can be attributed to use of the G93 $`\mathrm{H}\beta `$ velocity corrections (instead of TWFBG98) and, in a few galaxies, to possible residual, uncorrected $`\mathrm{H}\beta `$ emission (see Appendix B).
#### 3.1.2 Non-solar abundance ratio models
Adjusting the W94 models for non-solar ratios involves two steps. First, one must compute new evolutionary tracks in a fully self-consistent manner using new interior opacities, reaction rates, and atmospheric boundary conditions that faithfully reflect the altered compositions. Second, one must compute new absorption-line indices. Part one is less developed in the literature but also proves to be less important; we discuss it below. Part two, the indices, could in principle be handled by observing populations of stars with known non-solar ratios and deriving empirical fitting functions for them. For example, Borges et al. (1995) derived a fitting function for Mg<sub>2</sub> versus \[Mg/Fe\] using local dwarf and subgiant stars (this was the function adopted by TCB98 for their population models); and Weiss, Peletier & Matteucci (1995) attempted to correct Mg<sub>2</sub> and $`\mathrm{Fe}`$ using Galactic Bulge stars studied by Rich (1988).
However, it is hard to identify groups of stars with exactly the same (known) enhancements, and it is even more difficult to *vary* the pattern of element abundance enhancements in a controlled way using real stars. For these reasons, a theoretical approach is recommended, and we have chosen to utilize the computations of Tripicco & Bell (1995), who re-computed all of the Lick/IDS spectral indices from a grid of theoretical stellar SEDs and atmospheres with varying abundance ratios. For three sample locations on an old stellar isochrone, TB95 tabulate the response of each Lick/IDS index to separate enhancements $`[\mathrm{X}/\mathrm{H}]=+0.3`$ dex for the elements $`\mathrm{X}=\mathrm{C},\mathrm{N},\mathrm{O},\mathrm{Mg},\mathrm{Fe},\mathrm{Ca},\mathrm{Na},\mathrm{Si},\mathrm{Cr}`$ and $`\mathrm{Ti}`$. These response functions are the basis for our corrections to the indices for non-solar abundance ratios. Note that, because we use the response functions *differentially*, we are insensitive to any zeropoint uncertainties that the TB95 indices may have (which are in any case known to be small, as TB95 showed by comparing to real stars).
Following previous practice, we adopt the convention that a certain group of elements is “enhanced” in elliptical galaxies (more is said on this below). Precisely which elements are enhanced, and by how much, is poorly known. From an intercomparison of absorption-line strengths in the Lick/IDS galaxy sample (TWFBG98), Worthey (1998) suggested that Mg, Na, and N are enhanced in giant ellipticals but that Ca tracks Fe (cf. O’Connell (1976); Vazdekis et al. (1996)). Comparing to additional galaxy data from TWFBG98 below, we suggest that C also belongs to the enhanced group. Unfortunately, the Lick/IDS system has no indices that are capable of directly probing oxygen in elliptical galaxies (Worthey (1998)). Oxygen is important because it dominates $`[\mathrm{Z}/\mathrm{H}]`$ on account of its high mass fraction.
Because O (and perhaps C) are uncertain, we have considered four models for the enhancement pattern in elliptical galaxies, as described in Table 4. In each model there are three groups of elements: enhanced, depressed, and fixed. The assignment of elements to the three groups is always the same except for C and O, whose assignments vary. Elements in the fixed group have their solar (photospheric) abundances (Grevesse, Noels & Sauval (1996)), while elements in each of the enhanced and depressed groups are all scaled up or down by the same factor. After the amount of enhancement is chosen and C and O are assigned to their proper groups, the depression of the depressed elements is calculated so as to preserve *constant* $`[\mathrm{Z}/\mathrm{H}]`$.
In the present work, we generally take the enhanced group to include the abundant elements that are nucleosynthetically related to Mg, several of which are actually seen to be overabundant in giant elliptical galaxies (Worthey (1998)). Elements placed in the enhanced group include N, Ne, Na, Mg, Si, and S (plus sometimes C and/or O).<sup>2</sup>In retrospect it would have made more sense to group N with C since they are nucleosynthetically related (Woosley & Weaver (1995)), but making this change would negligibly affect the conclusions. The iron-peak elements Cr, Mn, Fe, Co, Ni, Cu and Zn constitute the depressed group. All other elements (including those heavier than Zn) are in the fixed group, with the exception of Ca (in the depressed group), and C and O (which vary).
As noted, the four models differ in their treatment of C and O: model 1 has C fixed, O up; model 2 has C fixed, O fixed; model 3 has C down, O up; and model 4 has C up, O up. Because O is produced in massive stars like Mg, it is probable that it, too, is routinely enhanced in giant ellipticals; hence model 2 is unlikely on nucleosynthetic grounds. Model 3, with C down and O up, is similar to the models of Weiss, Peletier & Matteucci (1995), TCB98, and Salaris & Weiss (1998). We show below that depression of C does not match the Lick/IDS indices and that this model is also therefore unlikely. Model 4, with C and O both enhanced, is our preferred model based on McWilliam & Rich (1990) and Rich & McWilliam (priv. comm.), who find that O and C are enhanced in lockstep with Mg in stars in the Galactic Bulge. However model 1 (with C fixed) is very hard to distinguish observationally from model 4 (see below).
Because the enhanced elements are not exactly the same as the $`\alpha `$-elements (e.g., Ca is nominally an $`\alpha `$-element but apparently tracks Fe in elliptical spectra; Worthey 1998; TWFBG98), we use the notation $`[\mathrm{E}/\mathrm{Fe}]`$, where “E” refers to the mass fraction of elements that are specifically enhanced in each model, in preference to the more common notation $`[\alpha /\mathrm{Fe}]`$ used by previous authors. Following TCB98, we write
$$[\mathrm{Fe}/\mathrm{H}]=[\mathrm{Z}/\mathrm{H}]-A[\mathrm{E}/\mathrm{Fe}],$$
(3)
or
$$\mathrm{\Delta }[\mathrm{Fe}/\mathrm{H}]=-A\mathrm{\Delta }[\mathrm{E}/\mathrm{Fe}]=-\frac{A}{1-A}\mathrm{\Delta }[\mathrm{E}/\mathrm{H}]$$
(4)
at constant $`[\mathrm{Z}/\mathrm{H}]`$, where their very small second-order term in $`[\mathrm{E}/\mathrm{Fe}]`$ has been ignored. Table 4 gives values of $`A`$ and illustrative heavy-element fractions (C, O, E-group, Fe-peak) for the four models, all at $`\mathrm{\Delta }[\mathrm{Fe}/\mathrm{H}]=-0.3`$ dex and solar $`[\mathrm{Z}/\mathrm{H}]`$; values of $`[\mathrm{E}/\mathrm{Fe}]`$ and $`[\mathrm{E}/\mathrm{H}]`$ for other values of $`\mathrm{\Delta }[\mathrm{Fe}/\mathrm{H}]`$ can be calculated using Eqs. 3 and 4. For reference, TCB98’s model has $`A_{\mathrm{TCB98}}=0.8`$.
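The bookkeeping of Eqs. 3 and 4 is easily mechanized; in the sketch below the value $`A=0.93`$ is an illustrative placeholder (Table 4 gives the model-dependent values), and the second print statement makes the point, expanded on next, that the “enhanced” group barely moves at fixed $`[\mathrm{Z}/\mathrm{H}]`$.

```python
# Sketch of Eqs. (3)-(4): at fixed total metallicity, depressing the Fe-peak
# lowers [Fe/H] by A*[E/Fe]. A = 0.93 is an illustrative placeholder value.

def fe_h(z_h: float, e_fe: float, A: float = 0.93) -> float:
    """[Fe/H] at fixed [Z/H], Eq. (3)."""
    return z_h - A * e_fe

def e_h(z_h: float, e_fe: float, A: float = 0.93) -> float:
    """[E/H] = [E/Fe] + [Fe/H], so Delta[E/H] = (1 - A) Delta[E/Fe]."""
    return e_fe + fe_h(z_h, e_fe, A)

print(fe_h(0.0, 0.3))   # solar [Z/H], [E/Fe] = +0.3 -> [Fe/H] ~ -0.28
print(e_h(0.0, 0.3))    # the "enhanced" group barely moves: [E/H] ~ +0.02
```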
Table 4 reveals an important fact—because the Fe-peak contribution to $`[\mathrm{Z}/\mathrm{H}]`$ is so small (only 8% for solar abundance), reducing it by even 0.3 dex frees up only minimal room for the so-called “enhanced” elements. Hence, what really happens in enhanced models is that the enhanced (and fixed) elements remain *nearly at their solar values, whereas Fe (and related elements) are depressed*. In short, we should think of giant ellipticals as *failing* to make Fe-peak elements rather than making too much of certain other elements. Likewise, the quantity $`[\mathrm{E}/\mathrm{Fe}]`$ is not really an enhancement of the E-elements but rather a *depression of Fe*.
Other authors, including ourselves (e.g., Worthey, Faber & González (1992); Weiss, Peletier & Matteucci (1995); Greggio (1997); Vazdekis et al. (1997)) have said this, but the contradictory notion nevertheless persists that strong Mg indices are due to an “overabundance” of Mg—this is not mathematically possible if Mg, O, and the $`\alpha `$-elements track one another closely, as these elements together dominate $`[\mathrm{Z}/\mathrm{H}]`$ by mass. We show below that the TB95 response functions provide an alternative means of strengthening $`\mathrm{Mg}b`$ and Mg<sub>2</sub>, namely, via *weak* Fe-peak elements (see below). This unanticipated *anti*-correlation between $`\mathrm{Mg}b`$ and the Fe-peak elements is one of the major new features of our treatment and the cause of our relatively small derived values of $`[\mathrm{E}/\mathrm{Fe}]`$ (compared with previous authors; see Sec. 4).
We return next to the problem of the stellar evolutionary isochrones. Since a full library of isochrones is not available for all abundance ratios, we follow the lead of TCB98, who suggest from examining their unpublished isochrones that models with varying $`[\mathrm{E}/\mathrm{Fe}]`$ are “virtually indistinguishable in the CMD” from models at the same $`[\mathrm{Z}/\mathrm{H}]`$ with $`[\mathrm{E}/\mathrm{Fe}]=0`$. Earlier, Salaris et al. (1993) had shown (at sub-solar metallicities) that $`\alpha `$-enhanced isochrones are identical to scaled-solar abundance isochrones at the same $`Z`$ provided that the quantity
$$\left[\frac{X_{\mathrm{HPE}}}{X_{\mathrm{LPE}}}\right]\equiv \left[\frac{X_\mathrm{C}+X_\mathrm{N}+X_\mathrm{O}+X_{\mathrm{Ne}}}{X_{\mathrm{Mg}}+X_{\mathrm{Si}}+X_\mathrm{S}+X_{\mathrm{Ca}}+X_{\mathrm{Fe}}}\right]$$
(5)
remains constant at the solar value ($`=0`$). Here $`X_i`$ is the mass fraction in element $`i`$, and brackets indicate the usual logarithm relative to solar. The elements in $`X_{\mathrm{HPE}}`$ have high ionization potentials and their opacity governs the mean turnoff temperature; the elements in $`X_{\mathrm{LPE}}`$ have low ionization potentials and their opacity governs the temperature of the giant branch. Preserving the ratio \[$`X_{\mathrm{HPE}}/X_{\mathrm{LPE}}`$\] thus preserves the *shape* of the track, they say, and the new track is found to fit neatly into the old sequence at the same value of $`Z`$. Values of \[$`X_{\mathrm{HPE}}/X_{\mathrm{LPE}}`$\] are given in Table 4 for our four models. Models 2 and 3 are nearly solar, while models 1 and 4 are about 15% overabundant in HPE elements. These small deviations prove to be relatively unimportant, as shown in Section 5.4.
More recently, Salaris & Weiss (1998) have suggested that, at higher metallicities near solar, track constancy may break down and that increasing $`[\alpha /\mathrm{Fe}]`$ both shifts the track to the blue and changes its shape. It is not clear whether these effects are due to high $`[\mathrm{E}/\mathrm{Fe}]`$, to $`[X_{\mathrm{HPE}}/X_{\mathrm{LPE}}]0`$, or both. However, the motions are small, and we show in Section 5.4 that their impact on the indices is probably slight.
If isochrones do not shift (at fixed metallicity), we can assume that $`\mathrm{log}g`$, $`\mathrm{log}T_e`$, $`\mathrm{log}L`$, and the SED of each star on the track are also constant. Hence, it is necessary only to calculate the changes in each spectral feature using the index response functions of TB95, by perturbing each element up or down according to the model. TB95 tabulate fractional index changes for three typical stars, one on the lower main sequence, one at the turnoff, and one on the RGB, at solar metallicity. We assume the same fractional changes at all metallicities and combine these responses by weighting by the fractional light contributions of each type of star at each index.<sup>3</sup>We have ignored the dependence of the line strength indices on Ti, as TB95 make contradictory statements about its inclusion in their model atmospheres. Although their tables include the effects of varying Ti, they clearly state that they have not included TiO lines in their line lists. This will affect the line strengths in the coolest giants. However, $`\mathrm{H}\beta `$, $`\mathrm{Mg}b`$, and $`\mathrm{Fe}`$ are little affected by Ti in their models; see their Tables 4–6. Details are given in the notes to Table 5.
Note that the TB95 response functions are for enhancement values corresponding to $`[\mathrm{X}/\mathrm{H}]=+0.3`$ dex. Response functions for arbitrary values of $`[\mathrm{E}/\mathrm{Fe}]`$ are calculated via Eq. 4 to get $`\mathrm{\Delta }[\mathrm{Fe}/\mathrm{H}]`$ and $`\mathrm{\Delta }[\mathrm{E}/\mathrm{Fe}]`$ and then by exponentially scaling the response functions in Tables 4–6 of TB95 by the appropriate element abundance. The fractional response of index $`I`$ is therefore
$$\frac{\mathrm{\Delta }I}{I_0}=\left\{\underset{i}{\prod }[1+R_{0.3}(X_i)]^{([\mathrm{X}_\mathrm{i}/\mathrm{H}]/0.3)}\right\}-1,$$
(6)
where $`R_{0.3}(X_i)`$ is the TB95 response function for element $`i`$ at $`[\mathrm{X}_i/\mathrm{H}]=+0.3`$ dex.<sup>4</sup>This equation assumes that the percentage index change is constant for each step of 0.3 dex in abundance. This assures that index values approach zero gracefully at low abundances but predicts infinite indices at high abundances, which is impossible. The scaling law should therefore probably not be applied at levels much above $`[\mathrm{X}_\mathrm{i}/\mathrm{H}]=+0.6`$.
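A sketch of this scaling is given below; the $`R_{0.3}`$ values are placeholders rather than TB95 entries, and the example input (Fe-peak depressed by 0.3 dex, all else solar) illustrates how a *deficit* of Fe can *strengthen* an index such as $`\mathrm{Mg}b`$.

```python
# Sketch of the index response scaling of Eq. (6): each element's tabulated
# fractional response R_0.3 (for a +0.3 dex change) is raised to the power
# [X/H]/0.3 and the factors multiplied. The R_0.3 values below are
# placeholders, not numbers from TB95.

def index_response(abundances, responses_03):
    """Fractional index change Delta I / I_0 given [X/H] (dex) per element."""
    factor = 1.0
    for elem, x_h in abundances.items():
        factor *= (1.0 + responses_03[elem]) ** (x_h / 0.3)
    return factor - 1.0

R03 = {"Mg": 0.10, "Fe": -0.05, "C": 0.02}           # placeholder responses
# Depressing Fe by 0.3 dex alone *raises* this toy index by ~5%:
print(index_response({"Mg": 0.0, "Fe": -0.3, "C": 0.0}, R03))
```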
Table 5 shows changes in the indices corresponding to the four models in Table 4, all of which have $`\mathrm{\Delta }[\mathrm{E}/\mathrm{Fe}]\approx 0.3`$. $`\mathrm{H}\beta `$ is virtually unaffected by non-solar abundance ratios, even at substantial $`[\mathrm{E}/\mathrm{Fe}]`$; all changes are less than 3%, which translates to less than $`8\%`$ in age. Changes in $`\mathrm{Fe}`$ are roughly the same in all models and amount to a decrease of about 20%, driven mostly by the decrease in $`[\mathrm{Fe}/\mathrm{H}]`$ (of 0.3 dex). However, C<sub>2</sub>4668, Mg<sub>1</sub>, Mg<sub>2</sub>, and $`\mathrm{Mg}b`$ are all different, owing to the presence (or not) of C<sub>2</sub> bands in the passband or sidebands of these indices; C<sub>2</sub>4668 and Mg<sub>1</sub> increase greatly with increasing C, $`\mathrm{Mg}b`$ declines with increasing C, while Mg<sub>2</sub> stays about the same independent of C. These changes all reflect the different abundance of C in the models since the abundance of Mg (and other elements in the E group) is always about constant (cf. Table 4).
Finally, we note that $`\mathrm{Mg}b`$ increases in all models, in apparent contradiction to the near constancy of $`[\mathrm{E}/\mathrm{Fe}]`$. This increase is due mostly to the decrease in Fe and Cr (see TB95), which has the effect of increasing $`\mathrm{Mg}b`$. In fact, changes in all the Mg indices are driven more by the Fe-peak deficit than by any actual increase in Mg, proving once again that a more correct way of looking at elliptical galaxies is to regard them as Fe-poor rather than $`\alpha `$-enhanced.
### 3.2. Ages, metallicities, and abundance ratios
SSP-equivalent parameters are derived for each G93 galaxy by choosing, for each model 1–4, the best-fitting age $`t`$, metallicity $`[\mathrm{Z}/\mathrm{H}]`$, and enhancement ratio $`[\mathrm{E}/\mathrm{Fe}]`$. Solving for three free parameters requires three indices, for which we use $`\mathrm{H}\beta `$, $`\mathrm{Fe}`$, and $`\mathrm{Mg}b`$. First, an expanded model grid of line strengths as a function of $`t`$, $`[\mathrm{Z}/\mathrm{H}]`$, and (now) $`[\mathrm{E}/\mathrm{Fe}]`$ is generated by applying the TB95 response functions to the base W94 models at each ($`t`$, $`[\mathrm{Z}/\mathrm{H}]`$). These new grids (one for each model 1–4) are created by interpolating the W94 models at intervals of $`\mathrm{\Delta }t=0.1`$ Gyr and $`\mathrm{\Delta }[\mathrm{Z}/\mathrm{H}]=0.01`$ and then interpolating the TB95 results at intervals of $`\mathrm{\Delta }[\mathrm{E}/\mathrm{Fe}]=0.01`$ at each $`(t,[\mathrm{Z}/\mathrm{H}])`$. The process is then inverted to derive $`(t,[\mathrm{Z}/\mathrm{H}],[\mathrm{E}/\mathrm{Fe}])`$ for each galaxy by searching in the grid to find that point with minimum distance from the observed parameters $`(\mathrm{H}\beta ,\mathrm{Mg}b,\mathrm{Fe})`$. It was necessary to linearly extrapolate the W94 models to slightly higher ages and to both lower and higher $`[\mathrm{Z}/\mathrm{H}]`$ values to cover the full range of $`(\mathrm{H}\beta ,\mathrm{Mg}b,\mathrm{Fe})`$-space populated by the observations. The range of $`(t,[\mathrm{Z}/\mathrm{H}],[\mathrm{E}/\mathrm{Fe}])`$ space covered by the final grids is
$$\begin{array}{cc}1\le t(\mathrm{Gyr})\le 22,\hfill & -0.5\le [\mathrm{Z}/\mathrm{H}]\le 1.25,\hfill \\ & -0.3\le [\mathrm{E}/\mathrm{Fe}]\le 0.75\hfill \\ & \\ 22<t(\mathrm{Gyr})\le 30,\hfill & -0.5\le [\mathrm{Z}/\mathrm{H}]\le 0.5,\hfill \\ & -0.3\le [\mathrm{E}/\mathrm{Fe}]\le 0.75.\hfill \end{array}$$
Tables 6a and 6b give derived $`(t,[\mathrm{Z}/\mathrm{H}],[\mathrm{E}/\mathrm{Fe}])`$ values and associated uncertainties in the $`r_e/8`$ and $`r_e/2`$ apertures, respectively. Errors were derived by searching the grid at $`(\mathrm{H}\beta \pm \sigma _{\mathrm{H}\beta },\mathrm{Mg}b,\mathrm{Fe})`$, $`(\mathrm{H}\beta ,\mathrm{Mg}b\pm \sigma _{\mathrm{Mg}b},\mathrm{Fe})`$, and $`(\mathrm{H}\beta ,\mathrm{Mg}b,\mathrm{Fe}\pm \sigma _{\mathrm{Fe}})`$ and taking the maximum deviations $`\mathrm{max}(\mathrm{\Delta }t)`$, $`\mathrm{max}(\mathrm{\Delta }[\mathrm{Z}/\mathrm{H}])`$, and $`\mathrm{max}(\mathrm{\Delta }[\mathrm{E}/\mathrm{Fe}])`$ as the associated uncertainties.<sup>5</sup>These errors faithfully reflect the magnitude of the uncertainties but not their correlations. Correlated errors in $`[\mathrm{Z}/\mathrm{H}]`$ and $`t`$ can be important, driven jointly by observational errors in $`\mathrm{H}\beta `$ (Trager (1997)). Fortunately the G93 errors are so small that observationally driven correlations in the output parameters are not important.
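In outline, the inversion is a nearest-point search over the expanded grid, as sketched below; `toy_predict` is a made-up stand-in with only qualitatively sensible index behavior, not the corrected W94+TB95 models, and the grid spacing here is coarser than that quoted above.

```python
import numpy as np

# Sketch of the grid inversion: build a (t, [Z/H], [E/Fe]) grid of predicted
# (H-beta, Mgb, <Fe>) values and return the grid point closest to the
# observed indices.

def fit_ssp(obs, grid_params, predict_indices):
    """obs: (Hbeta, Mgb, <Fe>); grid_params: iterable of (t, Z, E) tuples."""
    best, best_d2 = None, np.inf
    for params in grid_params:
        model = predict_indices(*params)              # -> (Hbeta, Mgb, <Fe>)
        d2 = sum((m - o) ** 2 for m, o in zip(model, obs))
        if d2 < best_d2:
            best, best_d2 = params, d2
    return best

def toy_predict(t, z, e):
    """Made-up stand-in with qualitatively sensible index behavior."""
    hbeta = 3.3 - 1.4 * np.log10(t) - 0.7 * z
    mgb   = 2.2 + 1.3 * np.log10(t) + 3.0 * z + 2.0 * e
    fe    = 1.6 + 0.8 * np.log10(t) + 1.5 * z - 1.0 * e
    return hbeta, mgb, fe

grid = [(t, z, e) for t in np.arange(1.0, 20.1, 0.5)
                  for z in np.arange(-0.5, 0.76, 0.05)
                  for e in np.arange(-0.3, 0.46, 0.05)]
print(fit_ssp((1.90, 4.37, 2.42), grid, toy_predict))  # ~ (8.0, 0.2, 0.2)

# Errors can then be estimated as in the text: refit with each index perturbed
# by +/- its 1-sigma error and take the maximum excursion in each parameter.
```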
The derived SSP-equivalent parameters should be treated with caution for the extrapolated solutions ($`t>18`$ Gyr at all metallicities, $`[\mathrm{Z}/\mathrm{H}]>0.5`$ at all ages, and $`t<8`$ Gyr at $`[\mathrm{Z}/\mathrm{H}]<-0.225`$). However, in the $`r_e/8`$ aperture, which we concentrate on in this and the following paper, only one galaxy (NGC 5813) has $`t>18`$ Gyr, and only a few more have $`[\mathrm{Z}/\mathrm{H}]>0.5`$ for any enhancement model. The extrapolations are more significant for the stellar population parameters in the $`r_e/2`$ aperture (Table 6b). However, many of these would also lessen or disappear if TWFBG98 velocity corrections to $`\mathrm{H}\beta `$ were substituted for those of G93, or if small $`\mathrm{H}\beta `$ emission fill-in errors were corrected (see Appendix B).
We have checked our fitting procedure by correcting the observed line strengths back to solar abundance ratios using the TB95 response functions for the solved-for values of $`[\mathrm{Z}/\mathrm{H}]`$ and $`[\mathrm{E}/\mathrm{Fe}]`$. The resulting corrected line strengths are presented in Figures 3 (for $`r_e/8`$) and 4 (for $`r_e/2`$) with the W94 models overplotted. These are the predicted line strengths that would be seen if the populations had the same $`t`$ and $`[\mathrm{Z}/\mathrm{H}]`$ but $`[\mathrm{E}/\mathrm{Fe}]`$ = 0. Metallicities and ages inferred from $`\mathrm{Mg}b`$ and $`\mathrm{Fe}`$ now agree, suggesting that our method for finding for the best-fitting parameters by searching in the three-dimensional grid is working correctly. These corrected points show graphically our final values of $`t`$ and $`[\mathrm{Z}/\mathrm{H}]`$.
Derived stellar parameters from the four enhancement models are compared in Figure 5. The most notable difference is between model 3 (C down, O up) versus all other models: galaxies are older, more metal-poor, and less enhanced in model 3 than in the others. These differences are driven entirely by the low C abundance in model 3; reducing C increases $`\mathrm{Mg}b`$ but has little effect on $`\mathrm{Fe}`$. Models with low C (like model 3) therefore result in lower overall metallicities, smaller $`[\mathrm{E}/\mathrm{Fe}]`$, and older ages, as may be seen by following through the consequences of a higher $`\mathrm{Mg}b`$ response function in Figure 1.
Is model 3 in fact compatible with observed galaxy line strengths? To test this, we augment the G93 indices with data on the C-sensitive feature C<sub>2</sub>4668 from the Lick/IDS sample of TWFBG98. For each population model, we use the response functions of TB95 to compute predicted line strengths for three new features—C<sub>2</sub>4668, Mg<sub>1</sub>, and Mg<sub>2</sub>—none of which were used in the original fits. Observed versus predicted indices are shown in Figure 6. Enhancement model 3, in which C is depressed, clearly fails systematically to reproduce the strengths of the new indices, especially C<sub>2</sub>4668. Models 1, 2, and 4 are nearly indistinguishable, as expected since the C abundance hardly varies among them (cf. Table 4). Model 4 is marginally the best ($`1\sigma `$) on account of its slightly higher C abundance, a further slight boost for our preferred model. Although model 4 fits best, it still fails systematically to reproduce the highest values of C<sub>2</sub>4668, Mg<sub>1</sub> and, especially, Mg<sub>2</sub>. This may indicate that C (and perhaps Mg) are actually *over*-enhanced compared to the E-group generally and may signal a breakdown in our assumption that all E-group elements scale in lockstep. Specific element abundance ratios will be explored using the full set of Lick indices in future papers.
## 4. SSP-equivalent parameters for the G93 sample
This section presents a brief overview of the resultant SSP-equivalent population parameters for the G93 galaxies; detailed discussion is reserved to Papers II and III. Our focus here is on the preferred model 4 (C and O both up), but results from models 1 and 2 are similar (model 3 being ruled out).
Figure 7 presents histograms of $`t`$, $`[\mathrm{Z}/\mathrm{H}]`$, and $`[\mathrm{E}/\mathrm{Fe}]`$ for the G93 sample through the $`r_e/8`$ aperture. The original conclusions of G93 are confirmed using this more rigorous analysis: the central stellar populations of galaxies in this sample span a large range of SSP-equivalent ages, $`1.5\le t(\mathrm{Gyr})\le 18`$ (more than 1 dex), but a relatively narrow range in $`[\mathrm{Z}/\mathrm{H}]`$, $`-0.1\le [\mathrm{Z}/\mathrm{H}]\le +0.6`$, and an even smaller spread in $`[\mathrm{E}/\mathrm{Fe}]`$. The metallicity distribution has a peak at $`[\mathrm{Z}/\mathrm{H}]=+0.24`$ and a dispersion of $`\sigma ([\mathrm{Z}/\mathrm{H}])=0.14`$, while the enhancement distribution peaks strongly at $`[\mathrm{E}/\mathrm{Fe}]=+0.20`$ with a dispersion $`\sigma `$($`[\mathrm{E}/\mathrm{Fe}]`$) of only 0.05 (these values vary slightly with the model).
A striking fact to emerge from Figure 7 is how *mild* the mean metallicities and enhancements of ellipticals really are. Matching the high Mg index values of ellipticals has been problematic in the past (e.g., Matteucci (1994); Greggio (1997)), and previous authors have typically invoked rather large enhancements in the range $`[\mathrm{E}/\mathrm{Fe}]=+0.3`$–0.5 (Weiss, Peletier & Matteucci (1995); Trager (1997); Greggio (1997)). With the TB95 response functions, however, the average $`[\mathrm{Z}/\mathrm{H}]`$ is only a factor of two higher than solar, and the average $`[\mathrm{E}/\mathrm{Fe}]`$ is only $`+0.2`$. The latter is small compared to the maximum value of $`[\alpha /\mathrm{Fe}]\approx +0.5`$ found in metal-poor Galactic stars (Wheeler, Sneden & Truran (1989); Edvardsson et al. (1993)), which is widely regarded as an empirical upper limit to the amount of depression in Fe that can result from total suppression of SNae Ia. The depression of the Fe-peak in ellipticals appears to be much less than this and should be easier to accommodate with reasonable galacto-nucleosynthesis models.
Figure 8 presents similar histograms of $`t`$, $`[\mathrm{Z}/\mathrm{H}]`$, and $`[\mathrm{E}/\mathrm{Fe}]`$ for the $`r_e/2`$ aperture. The global stellar populations span a slightly larger range of ages, from 1.5 to 25 Gyr, and a slightly wider range of metallicities, $`-0.3\le [\mathrm{Z}/\mathrm{H}]\le +0.7`$ (with NGC 720 at $`[\mathrm{Z}/\mathrm{H}]=1.05`$), although some of this larger scatter is surely due to the larger uncertainties in the $`r_e/2`$ line strengths. Otherwise, the shapes of the distributions are similar. Comparing $`r_e/8`$ with $`r_e/2`$ shows that mean $`[\mathrm{Z}/\mathrm{H}]`$ is down by 0.18 dex in the outer parts, indicating that the outer regions are slightly more metal-poor than the centers. The outer mean enhancement $`[\mathrm{E}/\mathrm{Fe}]`$ is lower by only 0.03 dex, however, confirming the conclusion of Worthey et al. (1992), Davies, Sadler & Peletier (1993), and G93 that enhancement gradients *within* galaxies are weak. Ages increase slightly outwards, the outer parts being on average roughly 25% older. Overall, the differences *among* galaxies are much more striking than the differences *within* galaxies, at least in the G93 sample, through these apertures.
## 5. Uncertainties and systematic errors
This section assesses both zeropoint and scale errors in $`t`$, $`[\mathrm{Z}/\mathrm{H}]`$, and $`[\mathrm{E}/\mathrm{Fe}]`$. We begin by examining our basic assumption that the ages, metallicities, and enhancement ratios we have derived above represent true light-weighted ages and abundances of elliptical galaxies. In particular, we first ask whether the apparent large age spread among the G93 galaxies could be due to spurious effects.
### 5.1. $`\mathrm{H}\beta `$ as an age indicator
The assumption that we are measuring real ages of stellar populations rests on the further assumption that $`\mathrm{H}\beta `$ light is coming purely from main-sequence and red giant-branch stars. We now discuss three scenarios whereby $`\mathrm{H}\beta `$ might be contaminated by light from other sources.
(1) *Fill-in by emission* (see Section 2.2.2). The extreme form of this hypothesis says that *all* ellipticals are actually young and that the apparent large age spread is due entirely to variable amounts of infill by emission. This extreme view is strictly ruled out by numerous observational studies of emission in elliptical galaxies. For example, G93’s plot of precision continuum-subtracted spectra (G93, Figure 4.10) shows that emission is nearly always less than a few tenths of an Å, not nearly large enough to create the observed age spread. In the same vein, Carrasco et al. (1996) went so far as to suggest that no emission corrections should be applied *at all* to most ellipticals, implying that any emission can at most be small. The final point is that $`\mathrm{H}\beta `$ correlates strongly both with Mg<sub>2</sub> and $`\sigma `$ (G93; Jørgensen (1997)), inconsistent with emission fill-in, which varies unpredictably from galaxy to galaxy.
A more reasonable hypothesis is that *errors* in the emission correction contribute noticeably to the age spread. Such errors were investigated in Section 2.2.2, where we noted that scatter in the $`\mathrm{H}\beta `$/\[O III\] ratio would induce age errors of only $`\pm 9`$% for typical galaxies. An even more drastic test is presented in Appendix B, which shows that neglecting the emission correction altogether affects a few strong-\[O III\] galaxies but makes at most small changes in the broad age distribution.
(2) *Contamination by blue horizontal branch stars (BHBs)*. BHB stars are not present in the standard Worthey (1994) models, which assume red clumps for old metal-rich populations. BHB stars might come from an anomalous BHB population associated with the metal-rich stars, or from contamination by a normal BHB associated with a subordinate metal-poor population. By “BHB,” we mean blue horizontal branches similar to M 92, which would contribute significantly to the light at 4000–5500 Å, not the extremely hot horizontal branches identified in populations like NGC 6791 (Liebert, Saffer & Green (1994)) which contribute primarily to 1500 Å flux and the “UV upturn” (e.g., Lee (1994), Yi et al. (1999)).
The galaxy M 32 can be used to rule out the hypothesis that BHBs alone are responsible for the large $`\mathrm{H}\beta `$ excesses seen in *high*-$`\mathrm{H}\beta `$ ellipticals. It can be shown that nearly the *entire* red clump in M 32 would have to be moved to a BHB at approximately spectral type mid-F to explain its high $`\mathrm{H}\beta `$ index (Burstein et al. (1984)); this is strictly ruled out by blue spectral indices (Rose (1985), 1994). Moreover, the HB has actually been detected in the outer part of M 32 by HST (Grillmair et al. (1996)) and is seen to be mostly red.<sup>6</sup>To be precise, the Grillmair et al. (1996) data are not deep enough to rule out a *small* number of BHB stars (Grillmair et al. (1996); C. Gallart, priv. comm.), but the lack of point sources in archival F300W images suggests that any BHB must indeed be weak. Extrapolating the G93 indices outward to this field and matching to W94 models there yields an excellent fit to both the integrated colors and the color of the RGB at this point (Grillmair et al. (1996)), supporting the assumption that the HB is indeed red.
The existence of a dominant BHB population in metal-rich ellipticals is not expected on astrophysical grounds. If ellipticals were *very* old (i.e., $`>18`$ Gyr, as Lee 1994 has suggested), then BHBs could conceivably be significant components, but such large ages violate current constraints on the age of the Universe (see, e.g., Gratton et al. (1997)). No *solar metallicity* cluster populations in the Milky Way have BHBs (Worthey (1994)), although we note that Rich et al. (1997) have discovered significant M3-like BHB populations in two metal-rich Galactic bulge globular clusters, NGC 6388 and NGC 6441 ($`[\mathrm{Fe}/\mathrm{H}]\approx -0.5`$). However, these two globulars are the densest known in the Galactic globular cluster system; the fact that BHB stars occur precisely there (Sosin et al. (1997)) suggests that dynamical interactions are the cause. We conclude that the occurrence of BHB stars in low-density systems like giant elliptical galaxies is unlikely, but a deeper understanding of their presence in these globulars is obviously necessary.
Contamination by BHBs from a subordinate metal-poor population also does not seem probable. As noted, such contamination by a trace blue BHB component cannot materially affect the indices of *high*-$`\mathrm{H}\beta `$ galaxies, but perturbations in *weak*-$`\mathrm{H}\beta `$ galaxies should be considered. For example, 5% of the $`V`$-band light in metal-poor BHB stars would decrease the inferred age of a galaxy from 13 Gyr to 8 Gyr at solar metallicity. However, Rose (1985, 1994), using a set of high-resolution spectral indices in the 4000 Å region, has shown in M 32 and eight strong-lined ellipticals that no more than $`\sim 5\%`$ of the light in the blue region (and less than 2% in the $`V`$-band) can come from very hot stars (F0 and earlier). This falls short by a factor of two. Moreover, 5% of $`V`$-band light in BHB stars would imply that altogether $`\sim 25\%`$ of the *total* light would have to come from metal-poor stars. This is twenty-five times more than the amount of metal-poor ($`[\mathrm{Z}/\mathrm{H}]<-1.5`$) $`V`$-band light actually found in the outer part of M 32 by Grillmair et al. (1996). That a much larger quantity of metal-poor stars could be found near the centers of *more* metal-rich elliptical galaxies seems implausible.
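To make the light-fraction bookkeeping explicit, the following minimal sketch reproduces the jump from 5% of $`V`$-band light in BHB stars to $`\sim 25\%`$ of the total light in metal-poor stars; the assumed HB share of an old metal-poor population’s $`V`$ light (20%) is our illustrative assumption, not a number quoted above.

```python
# A minimal sketch of the light-fraction bookkeeping (the 20% HB share of an
# old metal-poor population's V-band light is our illustrative assumption).
f_bhb = 0.05       # V-band light fraction in BHB stars needed to mimic 13 -> 8 Gyr
hb_share = 0.20    # assumed fraction of a metal-poor population's V light in its HB
f_metal_poor = f_bhb / hb_share
print(f"implied total metal-poor V-band light: {f_metal_poor:.0%}")  # -> 25%
```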
We conclude that contamination by metal-poor populations is a negligible perturbation to the *central* ages of the G93 galaxies and could cause at most a $`\sim 10\%`$ reduction near $`r_e`$, if that.
(3) *Contamination by blue straggler stars (BSSs).* A typical BSS has $`M_V\sim 3`$ mag, $`B-V\sim 0.2`$ (Bailyn (1995)), and spectral type A8–F0. From Worthey et al. (1994), dwarf A8–F0 stars have $`\mathrm{H}\beta \sim 5.8`$–$`7`$ Å. To explain the high $`\mathrm{H}\beta `$ strength of M 32 ($`\mathrm{H}\beta =2.4`$ Å) as arising from a population of blue stragglers superimposed on an old (15 Gyr), solar-metallicity population ($`\mathrm{H}\beta =1.5`$ Å) would require that $`\sim 15\%`$ of the $`V`$-band light come from BSSs. This implies a BSS specific frequency of $`\sim 275`$ per $`10^4L_{\odot }`$, which is a factor of 8 higher than seen in the most BSS-rich Galactic globular cluster (Palomar 5) and a factor of 28 higher than seen in the average Galactic globular cluster (Ferraro, Fusi Pecci & Bellazzini (1995)). We again conclude that *high*-$`\mathrm{H}\beta `$ galaxies like M 32 are immune to perturbations by spurious hot components such as BSS stars.
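The quoted specific frequency follows from straightforward luminosity bookkeeping; a minimal sketch (the solar absolute magnitude $`M_V=4.83`$ is a standard value we supply, not taken from the text):

```python
# Verify the BSS specific frequency implied by a 15% V-band light fraction.
M_V_BSS = 3.0                              # typical BSS absolute magnitude (text)
M_V_SUN = 4.83                             # solar V-band absolute magnitude (standard value)
L_BSS = 10 ** (0.4 * (M_V_SUN - M_V_BSS))  # luminosity per BSS, ~5.4 L_sun
N_BSS = 0.15 * 1.0e4 / L_BSS               # BSSs needed per 10^4 L_sun of V light
print(f"{N_BSS:.0f} BSSs per 1e4 L_sun")   # ~278, matching the ~275 quoted above
```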
Consider next a trace contamination by BSSs in low-$`\mathrm{H}\beta `$ galaxies. For example, to decrease the age of an elliptical from 13 Gyr to 8 Gyr at solar metallicity, BSSs would again need to contribute $`\sim 5\%`$ of the $`V`$-band light. For NGC 3379, this implies a specific frequency of $`\sim 120`$ per $`10^4L_{\odot }`$. This specific frequency is a little more than a factor of 3 higher than that seen in the most BSS-rich Galactic globular cluster (Ferraro, Fusi Pecci & Bellazzini (1995)). In the absence of a complete theory of BSS formation, a factor of three increase might not be impossible. On the other hand, as noted, Rose (1985, 1994) has shown that no more than $`\sim 2\%`$ of $`V`$-band light can come from hot stars F0 and earlier in M 32 and eight strong-lined ellipticals. This is less than half the light required and would perturb the age from 13 Gyr to only 11 Gyr, a reduction of only 15%.
To summarize, the leading hot-star contaminants, BHBs and BSSs, both peak at temperatures at or hotter than F0, whereas the blue line-strength data of Rose (1985, 1994) imply that the great bulk of Balmer absorption must be coming from cooler F and G stars, at or at most only slightly hotter than the derived turnoff temperatures. Barring some as-yet-undiscovered contaminating population of cooler stars, the Rose limits imply that contamination of $`\mathrm{H}\beta `$ by non-main sequence stars can reduce the ages of even the oldest ellipticals by at most 10–15%.
### 5.2. Errors due to theoretical model uncertainties
The next three sections assess additional sources of systematic errors; results are collected in Table 7. This section discusses theoretical model uncertainties caused by errors in the stellar isochrones and line-strength response functions of TB95. The major uncertainty in the interior models is the *age scale*, which is continually being refined. With the recent release of parallaxes from the HIPPARCOS satellite, much effort has been spent recalibrating the ages of Galactic globular clusters using both the new parallaxes and up-to-date models of stellar evolution (e.g., Reid (1997), 1998; Gratton et al. (1997); Chaboyer et al. (1998); Grundahl, Vandenberg & Andersen (1998); Pont et al. (1998); Salaris & Weiss (1998)). This effort has brought the ages of the oldest globular clusters down from $`\sim 14`$–15 Gyr to $`\sim 12`$ Gyr, a reduction of $`\sim 15\%`$. At least half of this reduction is due to corrections in the metallicity scale of globular clusters and to the use of more up-to-date stellar evolutionary models (Gratton et al. (1997)).
These age redeterminations have so far been restricted to clusters with metallicities $`[\mathrm{Fe}/\mathrm{H}]\lesssim -0.7`$. At the metallicities typical of elliptical galaxies, the effect of the age recalibrations is not yet known but could be as much as $`\sim 20\%`$, just by using isochrones from the most modern stellar evolutionary models. This agrees with Charlot, Worthey & Bressan (1996), who found that absolute ages are uncertain at the 25% level in stellar populations with ages $`>10`$ Gyr, resulting almost entirely from the choice of different stellar models. Below and in Appendix A, we explore the effect of substituting “Padova” isochrones by Bertelli et al. (1994) for those of W94 and find that young ages differ by 35% but that old ages change by only 4%. As a rough rule of thumb, we assume that both the age zeropoint and age scale of the models are uncertain at the $`20`$–$`25\%`$ level.
The effect of errors in the theoretical response functions of TB95 is illustrated in Figure 9, which is a schematic repeat of Figure 1(b) showing $`\mathrm{H}\beta `$ versus $`\langle \mathrm{Fe}\rangle `$. Figure 9 shows a galaxy plotted two ways, one using raw $`\langle \mathrm{Fe}\rangle `$, the other using the value of $`\langle \mathrm{Fe}\rangle `$ inferred from $`\mathrm{Mg}b`$ by assuming solar abundance ratios (call this $`\langle \mathrm{Fe}\rangle (\mathrm{Mg})`$). $`\langle \mathrm{Fe}\rangle `$ lies to the left of $`\langle \mathrm{Fe}\rangle (\mathrm{Mg})`$, indicating Fe depression. Applying the TB95 corrections for non-solar $`[\mathrm{E}/\mathrm{Fe}]`$ moves $`\langle \mathrm{Fe}\rangle `$ to the right and $`\langle \mathrm{Fe}\rangle (\mathrm{Mg})`$ to the left, as shown by the arrows (the correction to $`\mathrm{H}\beta `$ is small and is ignored). When the correct value of $`[\mathrm{E}/\mathrm{Fe}]`$ is reached, the two points coincide, giving final $`t`$, $`Z`$, and $`[\mathrm{E}/\mathrm{Fe}]`$ (right hand panel).
Where the solution lands is evidently governed by the *relative lengths* of the two correction vectors; for model 4, this ratio is $`\mathrm{\Delta }\mathrm{log}\langle \mathrm{Fe}\rangle (\mathrm{Mg})/\mathrm{\Delta }\mathrm{log}\langle \mathrm{Fe}\rangle `$ = 1.25. The systematic errors of the final point depend mainly on the error of this ratio. Assuming that the two response functions of TB95 are individually uncertain by as much as 30% and that the errors of their three stellar types add in quadrature, the resultant zeropoint uncertainties are 3% in age, 0.10 dex in $`[\mathrm{Z}/\mathrm{H}]`$, and 0.04 dex in $`[\mathrm{E}/\mathrm{Fe}]`$ for highly enhanced galaxies. Since this last error drops to zero for galaxies with $`[\mathrm{E}/\mathrm{Fe}]=0`$, we derive an overall scale uncertainty in $`[\mathrm{E}/\mathrm{Fe}]`$ of $`\sim 20\%`$.
We note that fundamental uncertainties in stellar models, for example the use of a single-parameter mixing length theory for convection or the detailed effects of rotation and diffusion, may induce additional, unknown systematic errors in our absolute age estimates. Such uncertainties also affect the globular cluster age scale. At present, our estimated uncertainties in the absolute ages of galaxies should therefore be regarded as relative to the globular cluster age scale.
### 5.3. Errors due to empirical model uncertainties
Errors in this category include errors in the metallicity and temperature scales of the Lick/IDS fitting functions and errors in the fitting accuracy of the functions themselves.
We have checked the metallicity scale of the Lick/IDS system by comparing our assumed stellar $`[\mathrm{Z}/\mathrm{H}]`$ values (summarized by Worthey et al. (1994)) with the compilation of published spectroscopically-determined values by Cayrel de Strobel et al. (1997). For stars with $`\mathrm{log}g\gtrsim 4`$ (mostly dwarfs), the Lick/IDS metallicity scale is in excellent agreement with the published values at all $`[\mathrm{Fe}/\mathrm{H}]`$. For giants with $`[\mathrm{Fe}/\mathrm{H}]\lesssim 0`$, the Lick/IDS metallicity scale is within $`0.05`$–$`0.1`$ dex of the Cayrel de Strobel scale (systematically slightly high). However, for giants with $`[\mathrm{Fe}/\mathrm{H}]>0`$ (“SMR” stars), the Lick/IDS metallicity scale deviates strongly from the Cayrel de Strobel scale, such that the Lick/IDS giants appear to be more metal rich. The Lick/IDS metallicity scale for giants is based on the narrow-band photometric metallicity scales of Hansen & Kjærgaard (1971) and Gottlieb & Bell (1971), and on the high-resolution spectral study of Gustafsson, Kjærgaard & Anderson (1974) (Faber et al. (1985)). In contrast, the Cayrel de Strobel et al. (1997) catalog is populated in the SMR giant regime by older spectroscopic abundance determinations based on stellar atmospheres that typically (1) have a too-low solar iron abundance (McWilliam (1997)) and (2) do not properly account for molecule formation in SMR giants (Castro et al. (1996)). Correcting the abundances of SMR giants for these two effects suggests that the Lick/IDS scale may actually be very close (possibly $`0.05`$–$`0.1`$ dex too high) to the modern spectroscopic metallicity scale, even at $`[\mathrm{Fe}/\mathrm{H}]\approx +0.4`$ (Castro et al. (1996); McWilliam, priv. comm.).
The next question is whether the fitting functions are in fact good fits to the stellar line strengths. By inspecting the residual diagrams in Gorgas et al. (1993) and Worthey et al. (1994), we estimate that any systematic errors in the metal-line fits are less than 3%, which translates to zeropoint uncertainties of 0.05 in $`[\mathrm{Z}/\mathrm{H}]`$ and 0.10 in $`[\mathrm{E}/\mathrm{Fe}]`$ (the former averages Fe and Mg while the latter differences them, accounting for its larger error). The crucial function for age is the fit to $`\mathrm{H}\beta `$ versus $`V-K`$ for main sequence A–F stars. Again, we estimate that the basic line-strength calibration level is accurate to better than 3% in this interval, which translates to about 10% in age. Finally, the temperature scale ($`V-K`$ vs. $`T_e`$) of main sequence stars is needed to attach $`\mathrm{H}\beta `$ strengths to the theoretical isochrones. An error of 100 K (Worthey et al. (1994)) again translates to about 10% in age. Note that all these errors in the fitting functions affect only the absolute zeropoints of age, metallicity, and $`[\mathrm{E}/\mathrm{Fe}]`$ but not their differential values.
### 5.4. Errors due to unknown element enhancements
Our treatment of element enhancements is crude—we simply group all elements into three categories (enhanced, depressed, and fixed) and assume that differences within each group are nil. The group assignment of certain elements is also uncertain. Unknown element abundance ratios introduce errors in the predicted index response functions and, to a smaller extent, in the theoretical stellar evolutionary tracks.
According to TB95, the elements that significantly influence the indices used here are Fe, Cr, C, and Mg. Fe and Cr are produced in both Type Ia and in intermediate-mass progenitor Type II SNae (Woosley & Weaver (1995)). They should vary closely together by virtue of similar nucleosynthesis; i.e., their relative uncertainty should be small. Breaking the link between Fe and Cr, for example by decreasing \[Cr/Fe\], would have the effect of altering the $`\mathrm{Mg}b`$ index strength without significantly affecting other indices (TB95). In our own galaxy, however, \[Cr/Fe\] is solidly at the solar value until $`[\mathrm{Fe}/\mathrm{H}]\approx -2`$ (McWilliam (1997)), much lower than the metallicities of interest in elliptical galaxies. We will discuss possible element-to-element variations in a future paper. Likewise we have tested the sensitivity of the indices to C explicitly in models 1–4 and found that low-C models (like model 3) are ruled out. With this eliminated, remaining uncertainties due to C abundance variations are limited to 10% in age, 0.05 dex in $`[\mathrm{Z}/\mathrm{H}]`$, and 0.01 dex in $`[\mathrm{E}/\mathrm{Fe}]`$ (Figure 5).
A larger source of uncertainty arises from uncertain ratios *within* the Type II SNae group. The metallicity $`[\mathrm{Z}/\mathrm{H}]`$ is controlled by O, which has little spectroscopic signature (TB95), while a major spectral impact comes from Mg. Our inferred values of $`[\mathrm{Z}/\mathrm{H}]`$ thus depend critically on the assumption that Mg and O track one another. Breaking this link, e.g., by enhancing Mg over O, could reduce our inferred $`[\mathrm{Z}/\mathrm{H}]`$’s substantially. For example, suppose that \[O/H\] is always solar regardless of Mg (this would place O in the depressed group in metal-rich galaxies). Since O contributes half the mass in $`Z`$ (see Table 4), our values of $`[\mathrm{Z}/\mathrm{H}]`$ would be overestimated by a factor of two. Correcting for this would reduce the typical $`[\mathrm{Z}/\mathrm{H}]`$ from 0.26 to 0.13, and in so doing would increase ages by about 20%; enhancements $`[\mathrm{E}/\mathrm{Fe}]`$ would remain unchanged. We are thus relying quite heavily on the notion that decoupling O and Mg is astrophysically unreasonable.
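The factor-of-two statement can be checked with a one-line mixing estimate; a minimal sketch, where the exact solar mass share of O in $`Z`$ (0.55 here) is our assumption standing in for the “about half” of Table 4:

```python
import math

zh = 0.26     # typical inferred [Z/H]
f_O = 0.55    # assumed solar mass share of O in Z ("about half", cf. Table 4)
Z = 10 ** zh  # total Z relative to solar under the standard (O tracks Mg) assumption
# Hold O at its solar value while the rest of Z stays enhanced:
Z_fixed_O = f_O * 1.0 + (1.0 - f_O) * Z
print(f"[Z/H] with solar O: {math.log10(Z_fixed_O):+.2f}")  # -> +0.14, close to the 0.13 quoted above
```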
Finally, our analysis assumes that isochrone shape and location are unaffected by the exact value of $`[\mathrm{E}/\mathrm{Fe}]`$ or by the detailed pattern of element enhancements within $`[\mathrm{E}/\mathrm{Fe}]`$. Existing models suggest that this assumption might be tolerable. Salaris & Weiss (1998) have calculated an isochrone for an old population model with $`[\mathrm{Z}/\mathrm{H}]=0.3`$, $`[\mathrm{E}/\mathrm{Fe}]=+0.4`$, and non-solar $`[X_{\mathrm{HPE}}/X_{\mathrm{LPE}}]=0.12`$. Log $`T_e`$ at the turnoff shifts to the blue by 0.0044, while log $`T_e`$ on the RGB shifts to the blue by 0.011 relative to a scaled solar model. The blueward shifts should scale in proportion to both $`[\mathrm{Z}/\mathrm{H}]`$ and $`[\mathrm{E}/\mathrm{Fe}]`$, while the shape change may also depend on $`[X_{\mathrm{HPE}}/X_{\mathrm{LPE}}]`$ (Salaris, Chieffi & Straniero (1993); Salaris & Weiss (1998)). A typical G93 galaxy is four times more metal-rich than their model but smaller by a factor of two in $`[\mathrm{E}/\mathrm{Fe}]`$. The quantity $`[X_{\mathrm{HPE}}/X_{\mathrm{LPE}}]`$ is also likely to be smaller, being +0.07 in model 4 (Table 4) versus +0.12 in their model. On balance, the net shifts and shape changes in the elliptical isochrones are plausibly no more than twice those in their model.
The effects of such motions would be small. A shift of log $`T_e=0.0044`$ at the turnoff causes a change of only 0.016 in log $`\mathrm{H}\beta `$, for a change in age of 6%. A shift of log $`T_e=0.011`$ on the RGB causes a change in metal lines of the same amount, for a change in $`[\mathrm{Z}/\mathrm{H}]`$ of about 0.05 and no change in $`[\mathrm{E}/\mathrm{Fe}]`$. Even if multiplied by two, as estimated above, these effects would still be small compared to other errors. On the other hand, it should be stressed that the effect of non-solar ratios on isochrone location is accelerating at high metallicity, and the above models were calculated for metallicities considerably smaller than what we require. A failure of O to track Mg (as mentioned above) could also introduce further shape changes that have not yet been modeled in detail. In sum, our assumption that isochrone shape and location are unaffected by the value of $`[\mathrm{E}/\mathrm{Fe}]`$ or by the pattern of non-solar enhancements within $`[\mathrm{E}/\mathrm{Fe}]`$ looks promising but is in need of further validation.
### 5.5. Error summary
The results of the preceding sections, plus some additional experiments in Appendices A and B, are summarized in Table 7. Age errors are significant—several terms amount individually to 10–25% and their addition is uncertain. Some age errors are also larger for weak-$`\mathrm{H}\beta `$ objects and therefore tend to stretch or compress the age scale. However, most of the errors, including those in age, are simple zeropoint shifts. Future applications will take advantage of the relative robustness and use the data differentially.
The galaxy M 32 offers a final check on the zeropoints of both $`[\mathrm{Z}/\mathrm{H}]`$ and $`[\mathrm{E}/\mathrm{Fe}]`$. The integrated spectrum of M 32 has been modeled by many authors, and the upper CM diagram of the outer parts has been measured (Grillmair et al. (1996)). All spectrum modelers concur that a mix of moderately young stars of near-solar metallicity matches every known feature of the spectrum. The mean turnoff spectral type within the $`r_e/2`$ aperture is accurately known to be F7–8 (Faber (1972); O’Connell (1980); Rose (1994)), while the light-weighted metallicity in the Grillmair field is $`[\mathrm{Fe}/\mathrm{H}]=-0.25`$. The enhancement ratio $`[\mathrm{E}/\mathrm{Fe}]`$ is also known to be small based on the excellent spectral fits using solar-neighborhood-abundance stars (e.g., Faber (1972)).
These independently measured parameters agree well with the SSP-equivalent parameters. G93 indices for the $`r_e/2`$ aperture yield an SSP-equivalent age of 5 Gyr, a mean $`[\mathrm{Z}/\mathrm{H}]`$ of $`-0.07`$, and an $`[\mathrm{E}/\mathrm{Fe}]`$ of $`-0.07`$. These parameters imply a turnoff spectral type of exactly F7–8, as the modelers have concluded, and the near-solar $`[\mathrm{Z}/\mathrm{H}]`$ and $`[\mathrm{E}/\mathrm{Fe}]`$ also agree with their results. Extrapolating the G93 indices outward, Grillmair et al. (1996) found an SSP-equivalent age in their field of 8 Gyr, a mean $`[\mathrm{Z}/\mathrm{H}]`$ of $`-0.25`$, and $`[\mathrm{E}/\mathrm{Fe}]`$ of $`-0.05`$. This metallicity coincides precisely with the metallicity distribution that they inferred from the color locus of the RGB for that assumed age. Putting this information together, we conclude that the actual absolute uncertainties in both $`[\mathrm{Z}/\mathrm{H}]`$ and $`[\mathrm{E}/\mathrm{Fe}]`$ are $`\sim 0.05`$, at least for galaxies close to solar composition like M 32.
## 6. Comparisons with previous studies: Evidence for intermediate-age populations
Many previous studies have examined the line strengths and colors of elliptical galaxies to determine their stellar content. A complete review of all previous models is beyond the scope of this paper (see Charlot, Worthey & Bressan 1996, Vazdekis et al. 1996, and Arimoto 1996 for comparisons of some modern stellar population synthesis models, and Worthey 1998 for a historical review of the metallicities and abundance ratios of early-type galaxies). We concentrate here on previous investigations that derived SSP ages and models by using the Balmer lines. We begin with the results of TCB98 and then turn to those of other workers. Consideration of other methods, in particular those using colors, is deferred to future papers.
### 6.1. Model dependence of derived stellar population parameters: comparison with TCB98
In a recent paper, TCB98 have analyzed line strengths of the G93 galaxies in the context of their own stellar population models. These models are based on isochrones by Bertelli et al. (1994) (which, like ours, neglect the effects of $`[\mathrm{E}/\mathrm{Fe}]\ne 0`$), the original fitting functions for $`\mathrm{H}\beta `$ and $`\langle \mathrm{Fe}\rangle `$ by Worthey et al. (1994) (which also neglect $`[\mathrm{E}/\mathrm{Fe}]\ne 0`$), and a fitting function for Mg<sub>2</sub> by Borges et al. (1995), who claim to take into account the effects of $`[\mathrm{E}/\mathrm{Fe}]\ne 0`$. Like us, TCB98 assume that their isochrones depend only on bulk metallicity $`[\mathrm{Z}/\mathrm{H}]`$ but not on $`[\mathrm{E}/\mathrm{Fe}]`$ (which they call $`[\alpha /\mathrm{Fe}]`$). To determine the stellar population parameters $`\mathrm{log}t`$, $`[\mathrm{Z}/\mathrm{H}]`$, and $`[\alpha /\mathrm{Fe}]`$, TCB98 compute averaged derivatives (from their models) of (the logarithms of) Mg<sub>2</sub>, $`\langle \mathrm{Fe}\rangle `$, and $`\mathrm{H}\beta `$ versus population parameters. These derivatives are then inverted to derive a series of linear equations that yield relative values of $`\mathrm{log}t`$, $`[\mathrm{Z}/\mathrm{H}]`$, and $`[\alpha /\mathrm{Fe}]`$ as functions of Mg<sub>2</sub>, $`\langle \mathrm{Fe}\rangle `$, and $`\mathrm{H}\beta `$. This solution method is equivalent to assuming that all line strengths depend linearly on all parameters, which is marginally inconsistent with the curved shapes of the actual grids (cf. Figure 1).
Figure 10 (top row) shows the stellar population parameters derived by TCB98 as a function of our derived parameters. For this comparison we have used our enhancement model 3 with C depressed and O enhanced, which is closest to their model. While the two studies roughly agree in the distribution of ages of the G93 galaxies, they are discordant in both $`[\mathrm{Z}/\mathrm{H}]`$ and $`[\mathrm{E}/\mathrm{Fe}]`$, for which the slopes of their values versus ours deviate strongly from unity. These different inferred $`[\mathrm{Z}/\mathrm{H}]`$ and $`[\mathrm{E}/\mathrm{Fe}]`$ values imply different interpretations of the star formation histories of these galaxies (particularly in the $`\mathrm{log}t`$$`[\mathrm{Z}/\mathrm{H}]`$ relation; see below).
To isolate the source of the differences, we constructed new models (“Padova”) by substituting the Bertelli et al. (1994) isochrones for the RYI/VandenBerg isochrones used by W94 (Appendix A). The results are presented in the bottom row of Figure 10. The match between the new Padova models and the standard W94 models is quite good, with only slight slope changes and mild zeropoint offsets; $`\mathrm{log}t`$ decreases by 10% to 30%, $`[\mathrm{Z}/\mathrm{H}]`$ increases by less than $`+0.08`$ dex, and $`[\mathrm{E}/\mathrm{Fe}]`$ increases by no more than $`+0.02`$ dex in the Padova models.
The differences between the present results and those of TCB98 in the top row are therefore not caused by differences in the isochrones, but must rather stem from one of the following other differences: (1) use of different response functions for $`[\mathrm{E}/\mathrm{Fe}]\ne 0`$ for Mg<sub>2</sub>, Fe5270, and Fe5335; (2) use of Mg<sub>2</sub> instead of $`\mathrm{Mg}b`$; and/or (3) use of a linearized solution method for deriving ages, metallicities, and enhancement ratios from observed line strengths. Inspection shows that differences (2) and (3) are most likely minor; in particular, the G93 Mg<sub>2</sub> strengths are less reliable than the $`\mathrm{Mg}b`$ strengths, but broadly the two agree fairly well. Likewise, the linearized method deviates at large distances from the middles of the grids owing to grid curvature, but these differences are not large enough to cause the global slope differences seen in Figure 10.
Difference (1), the use of different response functions, dominates the differences in $`[\mathrm{Z}/\mathrm{H}]`$ and $`[\mathrm{E}/\mathrm{Fe}]`$. TCB98 use the Borges et al. (1995) fitting function for Mg<sub>2</sub>, which nominally takes $`[\mathrm{E}/\mathrm{Fe}]\ne 0`$ into account but is derived from only a small set of calibration stars.<sup>7</sup><sup>7</sup>7The stellar metallicities are most likely on a different metallicity scale, as they are drawn directly from Cayrel de Strobel et al. (1997), which may be unreliable at $`[\mathrm{Fe}/\mathrm{H}]>0`$; see Section 5.3. The Borges et al. sample is also deficient in calibrating stars on the RGB. Furthermore, by using the original Worthey et al. (1994) fitting functions for Fe5270 and Fe5335 without correction for $`[\mathrm{E}/\mathrm{Fe}]\ne 0`$, they implicitly assume that *metallicities can be determined from $`\langle \mathrm{Fe}\rangle `$ alone*. In other words, the ages and metallicities of the G93 galaxies are effectively defined by the $`\langle \mathrm{Fe}\rangle `$–$`\mathrm{H}\beta `$ line-strength diagram alone in the TCB98 scheme, and the enhancement ratios $`[\mathrm{E}/\mathrm{Fe}]`$ are defined by the offset of the galaxies in the Mg<sub>2</sub>–$`\mathrm{H}\beta `$ line-strength diagram (scaled by some factor from the Borges et al. (1995) fitting function for Mg<sub>2</sub>). By not correcting $`\langle \mathrm{Fe}\rangle `$ upwards for Fe-deficiency, TCB98 *underestimate* the metallicities $`[\mathrm{Z}/\mathrm{H}]`$ and *overestimate* the ages $`t`$ and enhancement ratios $`[\mathrm{E}/\mathrm{Fe}]`$. This matches the behavior of residuals seen in Figure 10.
These systematic effects cause TCB98 to find a much narrower spread in $`[\mathrm{Z}/\mathrm{H}]`$ in the centers of the G93 galaxies than we do, and also a much wider spread in $`[\mathrm{E}/\mathrm{Fe}]`$. The narrow spread in $`[\mathrm{Z}/\mathrm{H}]`$ prevents them from finding any age–metallicity relation, which is a major focus of our Paper II; conversely, the broad spread in $`[\mathrm{E}/\mathrm{Fe}]`$ causes them to find a strong age–enhancement ratio relation, which we do not find in Paper II. Further discussion of these trends is reserved to future papers. However, it is clear that the adopted response functions can have far-reaching consequences for parameter correlation studies.
### 6.2. Other authors
Few other authors have fitted stellar population parameters to Balmer line data. Kuntschner (1998; Kuntschner & Davies (1998)) has studied the line strengths of a complete, magnitude-limited ($`M_B<-17`$) sample of early-type galaxies in the Fornax cluster, split evenly between ellipticals and lenticulars. He derives stellar population parameters but does not correct his line strengths for non-solar abundance ratios. In general, Kuntschner finds old ages for ellipticals but a wide spread in the ages of S0s. His data are of excellent quality, and we lump them together with the G93 sample and analyze them in parallel in Paper II.
Using moderate-$`S/N`$ ($`\sim 30`$) long-slit and fiber spectroscopy, Jørgensen (1999) has studied the line strengths of 115 early-type galaxies in Coma. Of these galaxies, 71 have measured $`\mathrm{Mg}b`$, $`\langle \mathrm{Fe}\rangle `$, and $`\mathrm{H}\beta `$ (the last with typical errors of $`\sim 0.22`$ Å). However, Trager (1997) has shown that errors in $`\mathrm{H}\beta `$ of this magnitude (typical of the Lick/IDS galaxy sample; TWFBG98) seriously compromise the determination of stellar population parameters through correlated errors in age, metallicity, and enhancement ratio; errors of $`\sim 0.1`$ Å in $`\mathrm{H}\beta `$ are required to determine ages to 10% or better and to reduce the correlated errors to insignificant levels. Further consideration of Jørgensen’s work is reserved to Paper II.
Vazdekis et al. (1997) fit their own SSP-equivalent models to three early-type galaxies, including NGC 3379 and NGC 4472 studied here. Their derived ages are about 50% larger than ours, for a variety of reasons. Although $`\mathrm{H}\beta `$ is included in the suite of data fitted, it is only one among many features used. The resultant models significantly under-predict their own $`\mathrm{H}\beta `$ strengths, and matching them would yield ages as young as or younger than we find. Their high ages (and low metallicities) seem to be driven by the very red near-IR colors of their models at high $`[\mathrm{Z}/\mathrm{H}]`$, which in turn may stem from the cool giant-branch tips of the Padova isochrones used. Since giant-branch temperatures are still in flux, we prefer the Balmer lines, which are less sensitive to stellar evolution uncertainties.
Fisher, Franx & Illingworth (1995) studied the line strengths of nearby field ellipticals and brightest cluster galaxies (BCGs). All of the nearby ellipticals (seven galaxies) were drawn from G93; the data are consistent, and we have therefore not added them to this series of papers. The BCG data are of slightly lower quality as the galaxies are more distant. Fisher et al. compare their line strengths to the W94 models—ignoring $`[\mathrm{E}/\mathrm{Fe}]`$ variations—and generally find old ($`t\gtrsim 10`$ Gyr) mean stellar populations in the centers. However, two of nine BCGs have $`\mathrm{H}\beta `$ strengths indicative of intermediate-age populations, NGC 2329 (Abell 569) and NGC 7720 (Abell 2634).
Jones & Worthey (1995) developed a novel H$`\gamma `$ index that has lower sensitivity to metallicity, and therefore in principle better age discrimination. Using W94 models, they applied this index to the center of M 32 and determined an SSP age of $`t\approx 5`$–7 Gyr. Our age for this object is only $`t=3.0\pm 0.6`$ Gyr (with $`[\mathrm{Z}/\mathrm{H}]=0.00\pm 0.05`$ dex; formal errors only). Jones and Worthey also fitted other Balmer indices (including different versions of H$`\gamma `$), which gave similarly low ages. They were not able to identify a reason for the discrepancy. This disagreement among Balmer indices is an outstanding issue.
Finally, we mention the results of Rose (1985, 1994) for a sample of 10 normal elliptical nuclei, 6 of which overlap with our sample, including M 32. Rose’s spectra were taken around the 4000 Å break, and he developed a large number of stellar population indicators in this wavelength region, including the Balmer index Ca II H$`+`$H$`ϵ`$/Ca II K, a sensitive measure of the presence of hot A and B stars, and Sr II/Fe I, a measure of the total dwarf-to-giant light ratio. By balancing these and other indices, Rose found that there must be a substantial intermediate-temperature component of dwarf light in all of these galaxies, but that no more than 2% of 4000 Å light could come from stars hotter than F0. He concluded that all 10 galaxies contained a significant component of intermediate-age main sequence stars. Our SSP ages range from 3 to 10 Gyr for the 6 galaxies in common, consistent with these conclusions.
## 7. Summary
We have presented central ($`r_e/8`$) and global ($`r_e/2`$) line strengths for the González (1993) local elliptical galaxy sample. A method for deriving SSP-equivalent stellar population parameters is presented using the models of Worthey (1994), supplemented by model-atmosphere line-strength response functions for non-solar element abundance ratios by Tripicco & Bell (1995). The resultant stellar population parameters broadly confirm the findings of G93 in showing a wide range of ages but a fairly narrow range of metallicities and enhancement ratios. Differences among galaxies in the sample are larger than radial differences within them.
Four different models are considered with different patterns of element enhancement. The best-fitting model (model 4) has all elements enhanced or normal except for the Fe-peak (and Ca), which are depressed. The actual atomic abundance ratios of the so-called “enhanced” elements are in fact virtually solar—it is really the Fe-peak elements that are depressed. Indeed, the TB95 response functions imply that the observed strengthening of $`\mathrm{Mg}b`$ is not due to an overabundance of Mg but to an *under*abundance of Fe (and Cr). It is shown that C must also belong to the enhanced group (i.e., it does not follow Fe, as sometimes assumed). Hence, a more accurate description of elliptical galaxies is that they failed to make Fe-peak elements rather than that they made an overabundance of $`\alpha `$-elements. The element enhancement pattern of ellipticals will be considered in more detail in a future paper.
Sources of error in the population parameters are considered. Contamination of $`\mathrm{H}\beta `$ by hot stars such as horizontal branch stars and blue stragglers can cause small reductions in the measured ages of the oldest galaxies but cannot noticeably affect the strong $`\mathrm{H}\beta `$ lines, and thus the deduced low ages, of young ellipticals (as also found by Rose (1985), 1994 and Greggio (1997)). Emission fill-in may increase the measured ages of a few, largely old galaxies, but the broad age distribution is unaffected by whether any emission corrections are made or not. Uncertainties in the theoretical tracks, index response functions, element enhancement patterns, and the Lick/IDS metallicity scale all affect the absolute zero points of age, $`[\mathrm{Z}/\mathrm{H}]`$, and $`[\mathrm{E}/\mathrm{Fe}]`$ at the level of a few tens of percent or tenths of a dex—but not the relative age rankings among galaxies.
Finally, we have compared our population parameters to those derived by TCB98, who apply a different modeling technique to the G93 sample. Our values of $`[\mathrm{Z}/\mathrm{H}]`$ and $`[\mathrm{E}/\mathrm{Fe}]`$ correlate with theirs, but the slopes differ significantly from unity. This appears to stem from the use of different response functions; in particular, TCB98 do not correct $`\mathrm{Fe}`$ for the underabundance of Fe. When this is allowed for, the two studies are consistent.
Future papers will discuss the central stellar populations of the G93 sample in detail, correlations between stellar populations and structural parameters, scaling relations of these local ellipticals in the context of stellar populations, and stellar population gradients in elliptical galaxies.
We thank Drs. M. Bolte, A. Bressan, D. Burstein, J. Dalcanton, G. Illingworth, D. Kelson, I. King, A. McWilliam, A. Renzini, M. Rich, M. Salaris, and A. Zabludoff for stimulating discussions. We thank especially the referee, Dr. J. Rose, for a careful and thorough reading of the manuscript which helped improve the final presentation. We are indebted to Drs. M. Tripicco and R. Bell for calculating response functions for the Lick/IDS indices, without which this work would not have been possible, and to Dr. Tripicco for sending electronic versions of their tables. We also thank Dr. Salaris for sending us his and Dr. Weiss’s solar-metallicity, $`\alpha `$-enhanced isochrones in advance of publication.
## Appendix A Stellar population parameters using Padova isochrones
This section provides more details on the “Padova” models discussed in Section 6.1. The Padova models are identical to the W94 models except that the isochrones (and opacities) are replaced by the isochrone library of Bertelli et al. (1994). This isochrone library is based on the stellar evolutionary tracks developed by the Padova group (see Bertelli et al. 1994 and Charlot, Worthey, & Bressan 1996 for more details) using the Iglesias, Rogers & Wilson (1992) radiative opacities. These isochrones include all phases of stellar evolution from the ZAMS to the remnant stage, for all stellar masses, in the age range $`0.004\le t\le 16`$ Gyr and metallicity range $`0.0004\le Z\le 0.1`$ ($`Z_{\odot }=0.02`$). The models include convective overshooting in stars more massive than $`1M_{\odot }`$ and an analytic prescription for the TP-AGB regime.
Figure 1 presents the inferred $`\mathrm{H}\beta `$, $`\mathrm{Mg}b`$, and $`\langle \mathrm{Fe}\rangle `$ line strengths for the W94 models using the Padova isochrones. Comparing this with Figure 1 of the main text shows small differences: the Padova models have a higher metallicity at a given $`\mathrm{Mg}b`$ or $`\langle \mathrm{Fe}\rangle `$ strength and a younger age at a given $`\mathrm{H}\beta `$ strength.
Figure 10 (bottom row) shows the results of applying these new models to the G93 central ($`r_e/8`$) line strengths. The derived ages, metallicities, and enhancement ratios agree quite well with the W94 models, apart from slight slope changes and zeropoint offsets:
$`\mathrm{log}t_{\mathrm{Padova}}`$ $`=`$ $`1.02\mathrm{log}t_{\mathrm{W94}}-0.10,`$ (A1)
$`[\mathrm{Z}/\mathrm{H}]_{\mathrm{Padova}}`$ $`=`$ $`1.09[\mathrm{Z}/\mathrm{H}]_{\mathrm{W94}}+0.03,`$ (A2)
$`[\mathrm{E}/\mathrm{Fe}]_{\mathrm{Padova}}`$ $`=`$ $`0.97[\mathrm{E}/\mathrm{Fe}]_{\mathrm{W94}}+0.02.`$ (A3)
The above are linear least-squares fits using enhancement model 4 and rejecting 3-$`\sigma `$ outliers. The fit for $`\mathrm{log}t`$ is in accordance with the results of Charlot, Worthey & Bressan (1996): changing isochrones can alter the ages inferred from line strengths at young ages by as much as $`\sim 25\%`$; agreement at old ages is within 10%. On average, the inferred metallicities $`[\mathrm{Z}/\mathrm{H}]`$ are increased by $`\sim 10\%`$ in the Padova models, as expected at fixed line strengths from the 3/2 rule ($`\mathrm{\Delta }\mathrm{log}t/\mathrm{\Delta }[\mathrm{Z}/\mathrm{H}]\approx -1.4`$ between the two sets of models).
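For convenience, the fitted conversions (A1)–(A3) can be packaged as a small function; a minimal sketch (the example input values are ours):

```python
import math

def w94_to_padova(log_t, zh, efe):
    """Apply the linear fits (A1)-(A3): W94-based parameters -> Padova-based ones."""
    return 1.02 * log_t - 0.10, 1.09 * zh + 0.03, 0.97 * efe + 0.02

# Example: an old, solar-composition population in the W94 system.
log_t_p, zh_p, efe_p = w94_to_padova(math.log10(13.0), 0.00, 0.00)
print(f"Padova: t = {10**log_t_p:.1f} Gyr, [Z/H] = {zh_p:+.2f}, [E/Fe] = {efe_p:+.2f}")
# -> t ~ 10.9 Gyr, [Z/H] = +0.03, [E/Fe] = +0.02
```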
## Appendix B Corrections to $`\mathrm{H}\beta `$
### B.1. Emission corrections
This section presents stellar population parameters derived by omitting the emission fill-in correction to $`\mathrm{H}\beta `$ discussed in Section 2.2.2. This follows the suggestion by Carrasco et al. (1996) that no correction to $`\mathrm{H}\beta `$ should be made for residual $`\mathrm{H}\beta `$ emission based on \[O III\], as they find no such correlation in their own sample of early-type galaxies.
Figure 1 presents the Balmer–metal–line diagrams for the G93 galaxies through the $`r_e/8`$ aperture with the $`\mathrm{H}\beta `$ correction omitted. As expected from the additive nature of the correction, galaxies now appear lower in the grid, and therefore older and more metal-poor than in Figure 1 of the main text. Neglecting emission corrections forces some galaxies to have unreasonably old ages: for example, without corrections, NGC 1453, NGC 2778, NGC 4261, NGC 4374, NGC 5813, NGC 5846, and NGC 7052 have ages $`\gtrsim `$ 20 Gyr. Since all of these galaxies have clear $`\mathrm{H}\beta `$ emission (see Figs. 3.11 and 4.10 of G93), omitting the corrections makes no sense. Furthermore, careful checking reveals that a few galaxies (e.g., NGC 4552, NGC 4649, NGC 5813, NGC 5846, NGC 7052) actually appear to have $`\mathrm{H}\beta `$ emission a little *stronger* than the standard ratio and are therefore probably undercorrected in our standard treatment. Fixing them would move them up by a few hundredths in log $`\mathrm{H}\beta `$ and decrease their ages by $`\sim `$20%. Since some of these are also objects that lie low in the grid, this correction would improve their positions.
Figure 1 without $`\mathrm{H}\beta `$ corrections looks essentially like the original one—the large age spread and relative parameter rankings of the galaxies are unchanged. This point is reinforced in the histograms of Figure 2, which are nearly identical to those in Figure 7 of the main text. We conclude that emission corrections are needed to derive the best age estimates for early-type galaxies, but that their exact magnitude does not affect our broad conclusions.
### B.2. Velocity dispersion corrections
This section derives a third set of population parameters using the Lick/IDS velocity dispersion corrections of TWFBG98 for $`\mathrm{H}\beta `$ rather than the template-based corrections of G93. As G93 does not provide raw $`\mathrm{H}\beta `$ strengths, we use his Figure 4.1 to estimate his velocity corrections and use them to “uncorrect” the fully corrected line strengths back to raw strengths (after removing the emission correction discussed in Section 2.2.2). For most galaxies, G93’s velocity corrections are insignificant, but for high-$`\sigma `$ galaxies they tend to be *negative*. After the G93 corrections are removed, we apply the *positive* corrections presented in TWFBG98 (their Figure 3) and then reapply the emission corrections discussed in Section 2.2.2. The newly corrected line strengths are plotted in Figure 3, and the resulting stellar population parameter histograms are presented in Figure 4.
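Schematically, the order of operations is as follows; this is a sketch only, assuming for illustration that all corrections can be expressed additively in Å, with the inputs standing in for values read off G93 Figure 4.1, TWFBG98 Figure 3, and Section 2.2.2:

```python
def recorrect_hbeta(hbeta_final_g93, emis_corr, g93_vel_corr, twfbg98_vel_corr):
    """Sketch of the correction chain, assuming additive corrections in Angstroms.

    The correction values themselves are placeholders; they must be estimated
    from G93 Fig. 4.1 (velocity), TWFBG98 Fig. 3 (velocity), and Sec. 2.2.2 (emission).
    """
    raw = hbeta_final_g93 - emis_corr - g93_vel_corr  # strip emission corr., then G93 velocity corr.
    return raw + twfbg98_vel_corr + emis_corr         # apply TWFBG98 corr., then re-apply emission corr.
```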
The new corrections move only a few high-$`\sigma `$ galaxies, and therefore the *relative* age ranking of the sample is unaffected. The affected galaxies again tend to lie at the bottom of the grid and are again moved *up* by the new corrections (by $`\sim `$0.04 in log $`\mathrm{H}\beta `$) so that galaxies that formerly lay below the grid at high ages now tend to lie on it. This correction, like the refined emission corrections of the previous section, thus improves the ages of the oldest objects.
The G93 $`\mathrm{H}\beta `$ velocity corrections were based on a very high-S/N stellar template fit to each galaxy, whereas the TWFBG98 corrections are based on a statistical average over stellar spectral types whose correction curves scatter widely. Nevertheless, it is possible that the TWFBG98 corrections are actually more accurate. The G93 stellar templates are a superb match to most of the spectrum *except* at $`\mathrm{H}\beta `$, where emission corrupted the data. Hence the template match (and correction) at $`\mathrm{H}\beta `$ in particular may be poor. The TWFBG98 corrections were selected to match a large number of stars with about the same $`\mathrm{H}\beta `$ strength as typical galaxies and could therefore be better on average.
# Dynamics of aeolian sand ripples
## 1 Introduction
Perhaps the most ancient and fascinating out-of-equilibrium example of spontaneous pattern formation known in nature is that exhibited by a sand bed subjected to wind. If the wind is strong enough (but not so strong as to cause excessive erosion), of the order of a few m/s, sand grains enter into a perpetual motion, ultimately causing the sand bed to become unstable to ripple formation; the resulting patterns are commonly referred to as aeolian sand ripples. The typical wavelength is of the order of a few cm (in some deserts, in Libya for example, ripples continue to coarsen, leading to much larger wavelengths – several m – and are then usually called ridges). Geologists, in particular, have been intrigued that such an apparently simple system as sand turns from an initially structureless state into a rather organized structure in a quite robust and reproducible fashion, despite the turbulent air flow that causes the ripple formation. Following the seminal work of Bagnold, many researchers have contributed significantly to the understanding of ripple formation, both experimentally and theoretically. This field of research has more recently seen an upsurge of interest as part of the broader puzzle of granular media. Despite the fact that sand is a very familiar material, the understanding of its static and dynamical properties still poses a formidable challenge to theoretical modelling. Unlike elastic and viscoelastic materials, and Newtonian fluids, there is as yet no universal continuum theory (such as that leading to the Lamé or Navier-Stokes equations) to describe in an effective manner the behaviour of granular media. A major difficulty, in our opinion, lies in the broad spectrum of length and time scales. Despite this situation, various tools have been used to describe granular media in a more or less ad hoc way. With regard to ripple formation, we may cite (i) molecular dynamics introducing empirical laws of collision, (ii) Monte Carlo simulations, trying to mimic plausible rules of collision and rearrangement, and (iii) hydrodynamical theories inspired by Bagnold’s view. Concerning the birth of ripples, the view of Bagnold is largely adopted, and it will be reviewed briefly here.
An important preliminary question is in order. Indeed, even without evoking the possibility of writing basic sand equations, we may ask a fundamental question about locality versus nonlocality in aeolian sand ripple formation. More precisely, does the dynamics of a given region (small in comparison to the ripple wavelength) depend on that of a distant region located at a distance which is significantly larger than the ripple wavelength? If so, we can say that the sand surface dynamics must be nonlocal. This question is still controversial, and it seems to us very important to settle it from the very beginning. The argument in favour of nonlocality rests on the following fact: because the grains that make a high fly (the saltating grains – Fig. 1) possess a saltation length $`l`$ which is much larger than the ripple wavelength $`\lambda `$, a rather distant region on the sand bed receives these saltating particles. As information is being passed between two quite distant points, we would a priori think of the importance of nonlocality. In reality, the moving grains in the ripple formation process can be divided into two main categories: the saltating ones, which have a high kinetic energy (and make long jumps), and the low-energy splashed grains (dislodged by the impact of saltating grains – see Fig. 1), which in turn travel in a hopping manner on a scale $`a`$ which is several (typically 6–10) times smaller than the ripple wavelength. If nonlocality is adopted, we must answer these two experimental facts which clearly contradict it: (i) as noted by Bagnold, the saltating grain population arrives on the sand bed at almost the same angle everywhere along the bed – as if a rain of particles were sent from a very high altitude at a fixed incident angle (which is of order $`10^{\circ }`$); so when they impact on a region there is no way to distinguish between, say, two grains that originate from two different regions of the sand surface. (ii) Additionally, once a grain has been extracted from the bed, becoming thereby a saltating one, it is transported by a turbulent flow where, at such a high Reynolds number, the coherence length is so small that during their flight the grains lose, so to speak, the memory of where they come from. Given these two facts, it is hard to believe that saltating grains provide any effective interaction between the topographies of two distant regions on the surface. Thus it seems difficult to argue in favour of nonlocality, albeit saltating grains make, beyond any doubt, long jumps. Their high jumps simply imply that their energy is such that they can dislodge some grains (the reptating grains) and make them jump on a length scale $`a`$. If the saltating grains had a higher energy, then $`a`$ would be increased, which in turn (as also seen experimentally) would increase the ripple wavelength, keeping $`\lambda /a`$ at a typical value of order $`6`$–$`10`$. In other words, and as again noted by Bagnold, and reported by Anderson et al., the saltating grains serve merely to bring energy into the system; the saltating population exchanges almost no grains with the reptating population. The ripple formation depends basically on the local topography of the surface, and the information is propagated only by reptating grains. We believe that we can even conceive the following experiment: we use an air gun designed to throw beads, inclined at some angle with respect to an initially flat bead surface. Then we move the air gun over the bed in an erratic fashion, back and forth, and send beads onto the bed.
After a sufficiently large number of collisions the bead surface should develop a ripple structure. By this way we completely eliminate any notion of saltation; the air gun is simply injecting energy into the system.
The initiation of sand ripples as imagined by Bagnold is appealing (see later). However, a question of major importance has remained open until recently: once the instability takes place, what is the subsequent evolution of the instability? Would the instability lead to an ordered or disordered structure? Would the wavelength be that corresponding to the linearly fastest growing mode? The initiation of the instability is based on a linear analysis, while the subsequent behaviour requires a non-linear treatment. In the absence of any continuum theory of sand, we have recently briefly discussed the derivation of a non-linear evolution equation by evoking conservation laws and symmetry arguments . If the local character is admitted, on the basis of many obvious experimental facts described above, our equation should generically be of the form given in . Any more or less microscopic theory of sand should, in the continuum limit, be compatible with that equation. It has been shown indeed that using a hydrodynamical model for sand flow the derived equation is that inferred from symmetry and conservation. The aim of this paper is three-folds. (i) We give an extensive discussion on the derivation of the non-linear evolution equation both from symmetry and conservation. We shall also revisit the hydrodynamical model and show that altering the basic model leads to a modified equation precisely as dictated by symmetry and conservation. (ii) We analyse in details the properties of the continuum equation. It will be shown that this equation leads to coarsening – at the initial stage the linearly unstable mode prevails, while at a subsequent times the structure coarsens. We shall analyse the coarsening process and quantify the exponent for the wavelength increase in the course of time. This task will be dealt with both analytically and numerically. We shall discuss some variants of the originally proposed equation, and the contribution of higher order terms. Though the quantitative feature may change, we shall see that the overall qualitative behaviour remains the same. (iii) Since there is a sparse information on sand ripples, and that various models have been suggested in the literature, we have felt it worthwhile to devote a review to previous works, and first to summarize the basic features and order of magnitude of the underlying physical phenomena of interest. Thus the first part of the paper must be regarded rather as a short review paper.
This paper is organized as follows. In Section 2 we outline the main physical ingredients in the formation of aeolian sand ripples. Section 3 is devoted to a short review of different models. Section 4 reconsiders the hydrodynamical model from which we can extract a non-linear evolution equation. We discuss in particular different variants and its impact on the form of the evolution equation. Section 5 uses symmetry and conservation arguments to write down a generic evolution equation and its variants. We shall then pay a special attention to the analysis of the equation and its far reaching consequences. Section 6 contains a summary and discussion.
## 2 Outlines of aeolian sand transport
According to Bagnold’s vision, the aeolian sand transport can be described in terms of a cloud of grains leaping along the sand surface, the grains regaining from the wind the energy lost when rebounding. His perception of the process, although refinements and modifications have been developed in recent years (see for a review of recent progress), still holds in the main lines.
When the wind blowing over a stationary sand bed becomes sufficiently strong, some particularly exposed grains are set in motion. Some grains are lifted by the pressure difference between the top and the bottom. Once lifted free of the bed, the grains are much more easily accelerated by the wind. Therefore as they return to the bed, some of the grains will have gained enough energy so that on impact they rebound and eject other grains.
Saltation is usually defined as the transport mode of a grain capable of splashing up other grains. One can think of the saltating grains as the high-energy population of grains in motion. In the initial period after the wind set in, the number of ejected grains resulting from one impact is on average larger than one. These ejected grains are generally sufficiently energetic to enter in saltation. Therefore the number of saltating grains increases at an exponential rate. As the transport rate increases, the vertical wind profile is modified due to the presence of the curtain of saltating grains. The wind speed drops so that the saltating grains are accelerated less and impact at lower speeds. As a result, the number of grains ejected per impact decreases and when it falls to one, an equilibrium is reached. This equilibrium state is only stationary in a statistical sense. The number of saltating grains may fluctuate around the equilibrium value. One should note however that it is possible that in the equilibrium state a few grains are still dislodged into saltation by fluid lift. The number of grains ejected per impact would then be slightly smaller than one.
The cloud of grains transported in equilibrium state does not consist of saltating grains only. On impact the saltating grains splash up a number of grains most of which do not saltate, i.e., their energy is so low that, as they return to the bed, they can not rebound or eject other grains. The motion of these low-energy ejectas is usually called reptation.
In summary, in equilibrium state of transport, two populations of grains can be distinguished: (i) the high-energy saltating grains which travel by successive jumps over long distances and (ii) the low-energy reptating grains generated upon impacts of saltating grains which move over much shorter distances.
The purpose of this section is to present an overview of the current knowledge of aeolian transport. We will first recall the characteristics of the wind profile over a flat sand surface and report the modification induced by the presence of a saltation cloud. Then we will present the mechanisms of the initiation of sand motion. Finally, we will expose the main features of the saltation and reptation motion.
### 2.1 Wind profile
When a flow of air is blowing over a flat rough surface, the wind profile is defined by the standard form for a turbulent boundary layer <sup>1</sup><sup>1</sup>1 One can recover the expression for the velocity field of a turbulent flow near a wall by using simple physical arguments. In the region near the wall, the flow is completely characterized by the following three parameters: the shear velocity $`U^{*}`$, the distance from the wall $`z`$, and the kinematic viscosity $`\nu `$. However, the viscosity is important only very close to the wall. One can therefore say that the mean velocity gradient depends only on $`U^{*}`$ and $`z`$. Using dimensional analysis, one finally obtains the expected result.
$$\frac{du(z)}{dz}=\frac{U^{*}}{kz},$$
(1)
where $`u(z)`$ is the average horizontal component of the velocity, $`k`$ is the von Kármán constant and $`U^{*}`$ is related to the shear stress on the ground
$$\tau =\rho _aU^{*2},$$
(2)
$`\rho _a`$ being the density of air. The wind profile can thus be written as
$$u(z)=u_0+\frac{U^{*}}{k}\mathrm{ln}\frac{z}{z_0},$$
(3)
where $`u_0`$ is the velocity at the reference height $`z_0`$ which can be chosen at convenience. In the absence of a saltation cloud, $`z_0`$ is often chosen to be the roughness height of the bed surface. A good estimation of $`z_0`$ for a flat sand surface is given by $`z_0\approx d/30`$, where $`d`$ is the grain diameter. At this height, the wind velocity is zero (i.e., $`u_0=0`$). In a log-log plot, the height-velocity lines (associated to different wind strengths) all converge to a focal point located at the roughness height $`z_0`$ where the velocity is zero. The presence of saltation alters significantly the nature of the velocity profile. For saltating flows, as experimentally shown by Bagnold, the height-velocity lines are also straight but converge at a different focus at some greater height $`z_0^{\prime }`$ ($`\sim 5d`$) and non-zero velocity $`u_0^{\prime }`$.
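As an illustration, the log-law (3) is straightforward to evaluate; a minimal sketch with illustrative parameter values of our choosing:

```python
import numpy as np

def wind_profile(z, u_star, z0, u0=0.0, k=0.4):
    """Logarithmic wind profile of eq. (3): u(z) = u0 + (U*/k) ln(z/z0)."""
    return u0 + (u_star / k) * np.log(z / z0)

d = 2.5e-4                                   # grain diameter [m], illustrative
z0 = d / 30.0                                # roughness height of a flat sand bed
heights = np.array([0.001, 0.01, 0.1, 1.0])  # heights above the bed [m]
print(wind_profile(heights, u_star=0.4, z0=z0))  # wind speeds [m/s], growing logarithmically
```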
Concerning the influence of a wavy sand bed on the wind profile, little is known; the literature is quite poor on this topic. It would, however, be interesting to have reliable data about the modification of the wind profile by the presence of a ripple field.
### 2.2 Initiation of sand transport
The initiation process requires grains to be entrained by wind forces. This occurs when the wind strength rises to the so-called threshold fluid velocity to be defined below.
The initiation of grain motion can be understood by examining the forces acting on individual grains. Wind blowing over a sand surface exerts two types of forces: (i) a drag force acting horizontally in the direction of the flow, and (ii) a lift force acting vertically upwards. (iii) Opposing these aerodynamic forces are inertial forces, the most important of which is the grain’s weight.
* The drag force is composed of the friction drag and the pressure drag. The latter results from increased pressure on the upwind face of the grain and decreased pressure on its downwind side. The friction drag is the viscous stress acting tangentially to the grain. The total drag <sup>2</sup><sup>2</sup>2It is worth noting that the expression for the drag force, together with that for the lift force discussed below, can be established by means of a simple dimensional analysis. Indeed, if one wants to write the drag force on a grain of size $`d`$ taking into account the fluid velocity $`U`$, its viscosity $`\nu `$ and its density $`\rho _a`$, the only possibility is $`F=f(R)\rho _ad^2U^2`$, where $`f`$ is a function of the Reynolds number $`R=Ud/\nu `$. At high Reynolds number (which is the case we are dealing with), the force is expected to be independent of the fluid viscosity (at the scale of the grain the effective Reynolds number is too large), so that $`f`$ must be independent of $`R`$. On the contrary, at low Reynolds number, the force is viscosity-dependent and inertia must scale out of the equation, so $`f`$ must scale as $`1/R`$. We thus recover the Stokes law. acting on the grain is given by
$$F_d=\beta \rho _ad^2U^{*2}.$$
(4)
$`d`$ is the grain diameter and $`\beta `$ is a parameter depending on the Reynolds number $`R^{*}=dU^{*}/\nu `$.
* The lift force arises from the Bernoulli effect, because of the strong wind-velocity gradient near the bed. The flow velocity on the underside of a grain at rest on the bed is zero, while on its upper side the flow velocity is positive. By Bernoulli's law this leads to an underpressure on top of the grain, causing a lift. The average lift force can be expressed as
$$F_l=C_l\rho _ad^2u^2,$$
(5)
where $`u`$ is the fluid velocity evaluated at the top of the grain and $`C_l`$ is a lift coefficient which depends on the Reynolds number $`R=du/\nu `$. Note that $`u`$ can easily be related to the shear velocity $`U^{*}`$ via eq. (3).
* Finally, the effective weight of a grain immersed in a fluid is given by
$$P=\rho _g^{\prime }gd^3,$$
(6)
with $`\rho _g^{\prime }=\rho _g-\rho _a`$, $`\rho _g`$ being the density of the grain. The fact that $`\rho _g-\rho _a`$ enters the weight, and not $`\rho _g`$, is due to the Archimedes (buoyancy) force.
One can now examine the balance between the different forces acting on individual grains. Consider a flat surface covered by loose sand of uniform size. Grains in the top layer of the bed are free to move upward but their horizontal movement is constrained by adjacent grains. The point of contact between neighbouring grains acts as a pivot around which rotational movement takes place when the lift and drag forces exceed the inertial force. The threshold at which the grains detach from the ground is then reached when the moments of the three forces about the pivot balance each other:
$$(d/2)(F_l+F_d)=(d/2)P$$
(7)
This corresponds to a threshold shear velocity $`U_t^{*}`$ determined by
$$U_t^{*}=A^{*}\sqrt{\frac{\rho _g-\rho _a}{\rho _a}gd},$$
(8)
where $`A^{*}`$ is a coefficient which depends essentially on the Reynolds number $`R^{*}`$. $`A^{*}`$ turns out to be fairly constant when the Reynolds number $`R^{*}`$ is large compared to $`1`$. The threshold value of $`U^{*}`$ for fine dune sand with a diameter of $`0.02`$ cm is about $`0.2`$ m/s, and the corresponding value of $`R^{*}`$ is of order unity (see for more details). For this and for all sands of larger grain size, $`A^{*}`$ is found to be constant (in air $`A^{*}\approx 0.1`$).
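The quoted threshold is easily reproduced. A minimal numerical check, assuming quartz grains ($`\rho _g\approx 2650`$ kg/m³, a standard value not given in the text) in air:

```python
import numpy as np

rho_a, rho_g = 1.2, 2650.0        # densities of air and quartz sand [kg/m^3]
g, d, A = 9.81, 2.0e-4, 0.1       # gravity, d = 0.02 cm, A* ~ 0.1 in air

U_t = A*np.sqrt((rho_g - rho_a)/rho_a*g*d)   # Eq. (8)
print(U_t)                        # ~0.21 m/s, as quoted for fine dune sand
```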
### 2.3 Saltation motion
The first stage of grain motion is liftoff, discussed above. After being lifted, grains are transported by the wind and start to make successive long jumps (the saltation motion). An equilibrium then establishes itself between the saltating grains and the wind profile. According to Bagnold, the motion of the saltating grains can be described by an average trajectory. In particular, one can define an average height and length of the trajectory of the saltating grains. These quantities can be estimated by considering that a moving grain experiences the gravity force $`\rho _g^{\prime }d^3𝐠`$, the air friction $`-C_d\rho _ad^2V_r𝐕_r`$ (where $`𝐕_r`$ is the velocity of the grain relative to the air) and the lift force $`C_l\rho _ad^2V_r^2𝐧`$ (where $`𝐧`$ is a unit vector perpendicular to $`𝐕_r`$). The equation of motion of a grain therefore reads , after projection on the horizontal and vertical axes,
$`\rho _gd^3{\displaystyle \frac{du_p}{dt}}=C_d\rho _ad^2V_r(u-u_p)+C_l\rho _ad^2V_rv_p,`$ (9)
$`\rho _gd^3{\displaystyle \frac{dv_p}{dt}}=-\rho _g^{\prime }d^3g-C_d\rho _ad^2V_rv_p+C_l\rho _ad^2V_r(u-u_p),`$ (10)
where $`u`$ is the horizontal velocity of the air, and $`u_p`$ and $`v_p`$ are the horizontal and vertical components of the grain velocity. The relative velocity is given by $`V_r=\sqrt{(u-u_p)^2+v_p^2}`$. These are two coupled non-linear equations for which no analytic solution seems possible. However, with some simplifications, it is possible to extract the basic features of the grain trajectory. We will therefore assume that (i) the lift force is negligible<sup>3</sup> (<sup>3</sup>The lift force is expected to play a role only in the region near the bed.), (ii) the wind velocity is uniform along the vertical axis $`z`$ and equal to $`\overline{u}`$, and (iii) the horizontal particle velocity rapidly equilibrates with the wind flow (i.e., $`u_p\approx \overline{u}`$).
Making use of these assumptions, it is straightforward to calculate the height $`h_s`$, the length $`l`$ and the incidence angle $`\alpha `$ of the saltating grain trajectory:
$`h_s`$ $`=`$ $`{\displaystyle \frac{v_{\mathrm{\infty }}^2}{2g}}\mathrm{ln}(1+{\displaystyle \frac{v_0^2}{v_{\mathrm{\infty }}^2}}),`$ (11)
$`l`$ $`=`$ $`\overline{u}t_f,`$ (12)
$`\mathrm{tan}\alpha `$ $`=`$ $`{\displaystyle \frac{v_{\mathrm{\infty }}}{\overline{u}\sqrt{1+\frac{v_{\mathrm{\infty }}^2}{v_0^2}}}},`$ (13)
where $`t_f`$ is the time of flight
$$t_f=\frac{v_{\mathrm{\infty }}}{g}\left[\mathrm{arctan}(\frac{v_0}{v_{\mathrm{\infty }}})+\mathrm{ln}(\sqrt{1+\frac{v_0^2}{v_{\mathrm{\infty }}^2}}+\frac{v_0}{v_{\mathrm{\infty }}})\right].$$
(14)
Note that $`v_0`$ is the liftoff velocity and $`v_{\mathrm{\infty }}`$, the terminal velocity of a grain falling in air, is given by $`v_{\mathrm{\infty }}=\sqrt{\rho _ggd/(C_d\rho _a)}`$. For typical fine sand grains with a diameter of $`0.02`$ cm, the terminal velocity is about $`1`$ m/s. One should point out at this stage that the liftoff velocity $`v_0`$ corresponds to the speed of saltating grains just after their rebound on the granular bed. In the equilibrium state of saltation, this velocity is quite large (of the order of a few m/s), since the saltating grains have been able to extract energy from the wind during their flight.
In the limit where $`v_0`$ is appreciably larger than $`v_{\mathrm{\infty }}`$, we get the following simple results:
$`h_s`$ $`=`$ $`{\displaystyle \frac{v_0^2}{2g}}{\displaystyle \frac{\mathrm{ln}(v_0^2/v_{\mathrm{\infty }}^2)}{v_0^2/v_{\mathrm{\infty }}^2}},`$ (15)
$`l`$ $`=`$ $`{\displaystyle \frac{2\overline{u}v_0}{g}}{\displaystyle \frac{\mathrm{ln}(2v_0/v_{\mathrm{\infty }})}{2v_0/v_{\mathrm{\infty }}}},`$ (16)
$`\mathrm{tan}\alpha `$ $`=`$ $`{\displaystyle \frac{v_{\mathrm{\infty }}}{\overline{u}}}.`$ (17)
If we take $`v_0=2.5`$ m/s and $`\overline{u}=5`$ m/s, we find $`h_s\approx 10`$ cm, $`l\approx 80`$ cm and $`\alpha \approx 12^{\circ }`$ for a grain size of $`0.02`$ cm. The following remarks are in order. (i) The values of the hop length and height are smaller than those expected in the absence of vertical drag. (ii) The hop height relative to $`v_0^2/2g`$ decreases with the liftoff velocity. (iii) The hop length increases both with the wind strength and with the liftoff velocity. (iv) Finally, the impact angle is independent of the liftoff velocity. These results, although derived in a crude way, give the same trends as those found from the full numerical calculation.
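These figures are easy to verify. The sketch below evaluates the simplified limits (15)-(17), from which the numbers quoted above follow, and also the full expressions (11)-(14); since $`v_0/v_{\mathrm{\infty }}=2.5`$ is only moderately large, the full formulas give a noticeably longer hop:

```python
import numpy as np

g, ubar, v0, vinf = 9.81, 5.0, 2.5, 1.0   # values quoted in the text [SI]
r = v0/vinf

# full expressions, Eqs. (11)-(14)
hs = vinf**2/(2*g)*np.log(1 + r**2)
tf = vinf/g*(np.arctan(r) + np.log(np.sqrt(1 + r**2) + r))
l = ubar*tf
alpha = np.degrees(np.arctan(vinf/(ubar*np.sqrt(1 + 1/r**2))))
print(hs, l, alpha)               # ~0.10 m, ~1.4 m, ~10.5 deg

# simplified v0 >> vinf limits, Eqs. (15)-(17)
hs_a = v0**2/(2*g)*np.log(r**2)/r**2
l_a = 2*ubar*v0/g*np.log(2*r)/(2*r)
alpha_a = np.degrees(np.arctan(vinf/ubar))
print(hs_a, l_a, alpha_a)         # ~0.09 m, ~0.8 m, ~11.3 deg
```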
One should point out that we have omitted here the Magnus lift force, which is present when grains are rotating. This additional force can significantly enhance the height and length of the saltating trajectory . However, the general features found above remain qualitatively correct (see for example ).
### 2.4 Collision process
The collision between the saltating grains and the bed surface is a crucial process in aeolian sand transport. Although aeolian transport is initiated by aerodynamic forces, as seen above, its maintenance relies essentially on impacts. In other words, the dislodgement of grains from the sand bed is essentially induced by the impacts of saltating grains. The collision process between the saltating grains and the sand bed is of great importance both in saltation and in reptation motion. In particular, the rebound angle and the liftoff velocity of the saltating grain, as well as the dynamical properties of the ejected grains (i.e., reptating grains), can only be determined by a thorough knowledge of the collision process.
Recent experiments and numerical simulations have focused on saltation impacts. It is worth recalling the main results. The impact of a single saltating grain typically results in one energetic rebounding grain and a large number of emergent grains (i.e., low-energy ejecta, or reptating grains). The rebounding grain leaves the surface with roughly two-thirds of the impact velocity, while the emergent grains have a mean ejection speed of less than $`10\%`$ of the impact speed and therefore have a short trajectory in the air. Changes of impact angle and speed are found to affect the outcome of a collision in several ways.
First, with decreasing angle of incidence the ratio of the rebound to impact vertical speed increases. This ratio is greater than one for low incident angles ($`\approx 10^{\circ }`$), which correspond precisely to the impact angles observed in the equilibrium state of saltation transport. It means that the saltating grain can reach a height as great as that from which it fell, provided that the amplification of the vertical speed is sufficient to overcome drag losses as it rises. If this condition is achieved, saltation is able to maintain itself through particle impacts.
Second, the properties of the emergent grains (speed and take-off angle) are practically unaffected by a change of the incident angle or impact speed. The only noticeable effect is that an increase of the impact speed results in an increase in the number of ejecta.
Although the understanding of the collision process has improved greatly in the last decades, there remains an important set of open problems. (i) Both theoretical and experimental works assume that the bed is composed of a single grain size, which is obviously not the case in the real world. (ii) In most studies, the sand bed is taken as simple as possible: flat and uniformly packed. However, it seems clear that the local bed topography (at the ripple scale) as well as variations of the bed packing fraction may significantly alter the nature of the collision process. (iii) Finally, there is an important limitation in almost all studies: the third dimension is ignored. A three-dimensional knowledge of the collision process would, however, be necessary for understanding the complex spatial evolution of a 3D bed over which saltation occurs.
## 3 Short review of analytical models
Having clarified and summarized the main physical aspects and orders of magnitude in the process by which ripples form, we are in a position to tackle the ripple problem itself. We present here a survey of analytical models of aeolian ripple formation proposed in the literature. We shall also suggest how these models could be modified in order to include a more realistic dynamics, both in the linear and in the non-linear regimes. This extension will serve as a preliminary analysis before we tackle the full non-linear analysis in the subsequent sections, which is the main topic.
All these models explain ripple formation as the result of a dynamical instability of a flat sand bed. However, one should point out that two fundamentally different explanations for the ripple instability have been proposed in the literature. The first one is based on the fact that the reptation flux varies according to the local slope of the bed profile; it was developed by Bagnold and later on by Anderson. Another explanation has been proposed by Nishimori and Ouchi , based on the variation of the saltation length according to the height of the sand bed. This second theory, although appealing, is not really supported by experimental evidence.
### 3.1 Anderson Model
As will emerge, the key ingredient of the flat-bed instability is geometrical in nature: an inclined surface is subjected to more abundant collisions than a flat one. That is to say, the mass current is an increasing function of the slope. This is a situation where the consequence acts in favour of the cause, that is, against the Le Chatelier principle, leading inevitably to an instability, as seen below.
In the Anderson model the saltating grains are not directly responsible for the ripple instability. Instead, the saltating grains are just considered as an external reservoir which brings energy into the system. They are assumed to be sufficiently energetic that they can travel over large distances without being incorporated into the sand bed. Furthermore, at each impact they can eject a certain number of low-energy grains (i.e., reptating grains), which make single small hops. The ripple instability is driven by the flux of the reptating grains.
The surface is assumed to be subject to a homogeneous rain of saltating grains impacting with a uniform incident angle. On impact, the saltating grains eject low-energy grains which hop over a characteristic reptation length $`a`$. The local sand height $`h(x,t)`$ changes in the course of time simply because a horizontal flux of particles exists. Mass conservation requires
$$\frac{\partial h}{\partial t}=-\frac{1}{\rho _g}\frac{\partial Q}{\partial x},$$
(18)
where $`Q`$ is the horizontal mass flux of moving grains (i.e., the mass of grains per unit time and unit width of flow). The horizontal flux can be split into two contributions (i.e, the flux of saltating grains and reptating grains):
$$Q(x)=Q_s+Q_{rep}(x),$$
(19)
with
$$Q_{rep}(x)=m_p\int _{x-a}^xN_{ej}(x^{\prime })\,dx^{\prime },$$
(20)
where $`m_p`$ is the mass of a grain, and $`N_{ej}(x)`$ the number of ejected grains at $`x`$ per unit time and surface to be specified below. Taking advantage of the expression of the flux, the governing equation for the bed profile \[eq. (18)\] can be rewritten as
$$\partial _th=-d^3[N_{ej}(x)-N_{ej}(x-a)].$$
(21)
We recall that $`d`$ is the grain size.
The rate of ejected grains can directly be related to the number $`N_{imp}`$ of impacting (i.e., saltating) grains:
$$N_{ej}(x)=n_0N_{imp}(x),$$
(22)
where $`n_0`$ is the number of grains ejected per impact. In a first approach, $`n_0`$ can be taken as constant whereas $`N_{imp}(x)`$ clearly depends on the impact angle of the saltating grains with respect to the local bed slope at the position $`x`$. If $`\alpha `$ measures the impact angle of the saltating grains with respect to the horizontal and $`\theta `$ the angle of the local bed slope (see Fig. 2), the rate of impacting grains reads
$`N_{imp}`$ $`=`$ $`N_0\mathrm{cos}\theta (1+{\displaystyle \frac{\mathrm{tan}\theta }{\mathrm{tan}\alpha }})`$ (23)
$`=`$ $`N_0{\displaystyle \frac{(1+\mathrm{cot}\alpha \,\partial _xh)}{[1+(\partial _xh)^2]^{1/2}}}.`$
$`N_0`$ is the number of saltating grains arriving on a flat horizontal bed per unit time and unit surface.
Eq. (21), together with (22) and (23), completely describes the evolution of a sand-bed surface subject to saltation. A flat profile is obviously a solution of this equation, but the question of interest is whether it is stable against small fluctuations. First we extract the term of leading order in $`h`$ from $`N_{imp}`$ and inject it into the equation for $`h`$. Then, seeking solutions of the form $`h\propto e^{iqx+\omega t}`$ (where $`q`$ is the wave number), we obtain
$$\omega =-\mu _0\mathrm{cot}\alpha \,iq\left[1-e^{-iqa}\right],$$
(24)
where $`\mu _0=n_0N_0d^3`$. One can see (cf. Fig. 3) that there is an infinite number of bands of unstable modes: the flat bed surface is unstable. One can also note that each band exhibits a maximum at $`q\approx (4n+1)\pi /2a`$, and the growth rate of these maxima increases without bound at large wavenumber, which is physically not acceptable.
Anderson refined his model to circumvent this problem by introducing a dispersion in the reptation length. If we call $`p(a)da`$ the probability that the reptation length is comprised between $`a`$ and $`a+da`$, the governing equation (21) for the bed surface becomes
$$\partial _th=-d^3\int _{-\mathrm{\infty }}^{\mathrm{\infty }}p(a)[N_{ej}(x)-N_{ej}(x-a)]\,da.$$
(25)
In that case, the linear stability analysis yields
$$\omega =-\mu _0\mathrm{cot}\alpha \,iq\left[1-\widehat{p}(q)\right],$$
(26)
where $`\widehat{p}(q)`$ is the Fourier transform of $`p`$. If we assume that $`p(a)\propto e^{-(a-\overline{a})^2/2\sigma ^2}`$ (where $`\overline{a}`$ is the mean reptation length and $`\sigma `$ the width of the distribution), we get
$$\omega =-\mu _0\mathrm{cot}\alpha \,iq\left[1-e^{-iq\overline{a}}e^{-\sigma ^2q^2/2}\right].$$
(27)
The flat surface is again unstable (see Fig. 3) but, contrary to the previous case, the most dangerous mode has a finite wavenumber. When the dispersion is large enough (i.e., $`\sigma >\overline{a}`$), the most dangerous mode is the first peak at $`q=\pi /(2\overline{a})`$, which corresponds to a wavelength $`\lambda =4\overline{a}`$. This mode is expected to dominate the subsequent development of the instability and therefore to give the order of magnitude of the ripple wavelength. One can conclude that the dispersion in the reptation length damps the growth of the large-wavenumber modes.
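The effect of the dispersion is easily visualized numerically. In the sketch below (illustrative values, in units where $`\overline{a}=1`$), the growth rate $`\mathrm{Re}\,\omega =\mu _0\mathrm{cot}\alpha \,q\,\mathrm{sin}(q\overline{a})\,e^{-\sigma ^2q^2/2}`$, i.e. the real part of eq. (27), is scanned in $`q`$; without dispersion ($`\sigma =0`$, eq. 24) the band maxima grow without bound, while a broad enough $`p(a)`$ selects a finite mode in the first band:

```python
import numpy as np

mu0_cot = 1.0                     # mu0*cot(alpha), an illustrative prefactor
abar = 1.0                        # mean reptation length (unit of length)
q = np.linspace(1e-3, 12.0, 6000)

for sigma in [0.0, 0.5, 1.2]:
    growth = mu0_cot*q*np.sin(q*abar)*np.exp(-sigma**2*q**2/2)
    qmax = q[np.argmax(growth)]
    print(sigma, qmax, 2*np.pi/qmax)
# sigma = 0: the maximum sits at the edge of the scanned band (unbounded growth);
# sigma ~ abar: the fastest mode lies in the first band, near pi/(2*abar)
```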
A few comments about the Anderson model are in order. It is interesting to rewrite the Anderson equations in the limit where the reptation length is small compared with the scale of the bed deformation (i.e., the ripple wavelength). This limit corresponds to the usual situation encountered for aeolian sand ripples. In other words, the process of ripple formation can be considered as a local one. In this limit, one can perform a Taylor expansion of $`N_{ej}(x-a)`$ about the position $`x`$, and the governing equation (21) can be approximated by
$$\partial _th\approx -d^3\left[a\partial _x-\frac{a^2}{2}\partial _x^2+\frac{a^3}{6}\partial _x^3-\dots \right]N_{ej}(x).$$
(28)
Using the expression of $`N_{ej}`$ and keeping only the linear terms, one gets
$$\partial _th=f_1\partial _{xx}h+f_2\partial _{xxx}h+f_3\partial _{xxxx}h,$$
(29)
with $`f_1=-a\mu _0\mathrm{cot}\alpha `$, $`f_2=(a^2/2)\mu _0\mathrm{cot}\alpha `$, and $`f_3=-(a^3/6)\mu _0\mathrm{cot}\alpha `$. Note that the first term on the right-hand side (whose coefficient is negative) is directly responsible for the ripple instability. The third-derivative term gives rise to the drift of the ripple structure, whereas the last one stabilizes the structure at large wavenumber. The growth rate of a mode $`q`$ is given by $`\omega =a\mu _0\mathrm{cot}\alpha [q^2-i(a/2)q^3-(a^2/6)q^4]`$, which is nothing but the long-wavelength limit of expression (24). The wavelength of the most dangerous mode is here of the same order as that found previously: $`\lambda =2\pi a/\sqrt{3}\approx 4a`$.
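The quoted most dangerous mode follows from a one-line maximisation, checked symbolically below (a SymPy sketch; the positive prefactor $`a\mu _0\mathrm{cot}\alpha `$ is dropped since it does not affect the maximiser):

```python
import sympy as sp

q, a = sp.symbols('q a', positive=True)
growth = q**2 - a**2*q**4/6                 # Re(omega) up to a positive prefactor
sols = [s for s in sp.solve(sp.diff(growth, q), q) if s != 0]
lam = sp.simplify(2*sp.pi/sols[0])
print(sols[0], lam, float(lam.subs(a, 1)))  # sqrt(3)/a and 2*pi*a/sqrt(3) ~ 3.63 a
```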
The Anderson model gives a good description of the initiation of the ripple instability, but it is not intended to predict the subsequent non-linear dynamics of the sand-bed profile, which we shall attempt to consider here.
### 3.2 Hoyle-Woods Model
The Hoyle-Woods model is an extension of the Anderson model. Hoyle and Woods have taken into account the rolling and avalanching of grains under the influence of gravity, as well as the shadowing effect. We shall, however, ignore avalanching here, since it is generally absent from the process of aeolian ripple formation (slip faces are observed solely on the lee slope of dunes).
The rolling effect can be important on the lee slope of a ripple. Indeed, the reptating grains can roll down a slope under the influence of gravity. This can be modeled by an additional horizontal flux $`Q_{rol}`$, directed downslope, of magnitude
$$Q_{rol}=m_pN_ru_r\mathrm{cos}\theta .$$
(30)
$`N_r`$ is the number of rolling grains per unit surface and $`u_r`$ is the speed of the rolling grains along the slope. The authors assumed that $`N_r`$ is constant and $`u_r`$ is a function of the gravitational force
$$u_r=\frac{\sqrt{gd}}{r}\mathrm{sin}\theta ,$$
(31)
where $`r`$ is a function of the grain packing and grain size. Taking this additional flux into account, the governing equation for the bed height is given by
$$\frac{\partial h}{\partial t}=-\frac{1}{\rho _g}\left[\frac{\partial Q_{rep}}{\partial x}+\frac{\partial Q_{rol}}{\partial x}\right].$$
(32)
In the long-wavelength limit (where the wavelength of the ripple structure is much larger than the reptation length), the bed growth due to reptation motion can be approximated by $`[\partial h/\partial t]_{rep}\approx -d^3a\,\partial _xN_{ej}`$, retaining only the leading-order term (see eq. 28). In this limit, the evolution equation thus takes the following form
$$\frac{\partial h}{\partial t}=-\partial _x\left[a\mu _0\frac{(1+\mathrm{cot}\alpha \,\partial _xh)}{[1+(\partial _xh)^2]^{1/2}}-\nu _0\frac{\partial _xh}{[1+(\partial _xh)^2]}\right],$$
(33)
where $`\mu _0=n_0N_0d^3`$ and $`\nu _0=N_r(\sqrt{gd}/r)d^3`$. An expansion in powers of $`\partial _xh`$ yields, to leading order,
$$\frac{\partial h}{\partial t}=f_1\partial _{xx}h,$$
(34)
with $`f_1=(\nu _0-\mu _0\mathrm{cot}\alpha )a`$. One clearly sees that the rolling effect introduces a threshold for the ripple instability: the flat bed surface is unstable only if $`\mu _0\mathrm{cot}\alpha >\nu _0`$. We recall that $`\mu _0`$ represents the flow rate of reptating grains for a flat sand surface ($`\mu _0\mathrm{cot}\alpha `$ is nothing but the excess flow rate when the sand bed is tilted), whereas $`\nu _0`$ corresponds to the flow rate of rolling grains for a tilted surface. The instability therefore results from a competition between reptation and rolling motion. As $`\nu _0`$ is assumed to be constant in this model, it follows that a high saltation flux (i.e., a high value of $`\mu _0`$) or a low impact angle (i.e., small $`\alpha `$) favours the destabilization of the bed surface. In summary, the rolling effect tends to smooth out surface irregularities, and therefore the ripple instability can occur only above a certain threshold.
In this model, the ripple structure resulting from the instability has no characteristic length (contrary to the Anderson model), since the most dangerous mode occurs at infinite wavenumber. To circumvent this problem, Hoyle and Woods have taken into account the shadowing effect: on the lee slope of the ripple, they considered that there is a region beyond the ripple crest which is shielded from the saltation flux. This is called the shadow zone, and no hopping occurs there; in the shadow zone the ripple evolves solely by rolling. The role of this shadowing effect has been investigated numerically by Hoyle and Woods. They found stable ripple structures whose wavelength is governed by the length of the shadow zone, as expected from simple geometrical considerations.
### 3.3 Nishimori-Ouchi Model
The Nishimori-Ouchi model is based on the hypothesis that the saltation flux is not homogeneous when the sand surface is deformed. They assume that the hopping length $`l`$ of the saltating grains depends on the location where they take off:
$$l(x)=l_0+bh(x),$$
(35)
where $`x`$ is the location of takeoff. Furthermore, the saltating grains are assumed to be incorporated into the sand surface when they hit it. According to these hypotheses, the flux of saltating grains can be written here as
$$Q_s(x)=m_p\int _\zeta ^xN_s(x^{\prime })\,dx^{\prime },$$
(36)
where $`N_s(x^{\prime })`$ is the number of saltating grains which take off at $`x^{\prime }`$ and $`\zeta `$ is the location of the takeoff point of the grains which land at $`x`$. The authors also consider a reptation motion (or creep) induced by gravity:
$$Q_{rep}(x)=-\rho _gD_r\frac{\partial h}{\partial x},$$
(37)
$`D_r`$ is a constant coefficient which stands for the relaxation rate. The dynamics of the sand bed is thus given by
$`{\displaystyle \frac{\partial h}{\partial t}}`$ $`=-{\displaystyle \frac{1}{\rho _g}}{\displaystyle \frac{\partial }{\partial x}}(Q_s+Q_{rep})`$ (39)
$`=-d^3(N_s(x)-{\displaystyle \frac{\partial \zeta }{\partial x}}N_s(\zeta ))+D_r{\displaystyle \frac{\partial ^2h}{\partial x^2}}.`$
In the limit of small deformations of the bed surface, one gets to leading order (assuming that $`N_s=constant`$)
$$\frac{\partial h}{\partial t}|_x=-bd^3N_s\frac{\partial h}{\partial x}|_{x-l_0}+D_r\frac{\partial ^2h}{\partial x^2}|_x.$$
(40)
The growth rate of a mode of wavenumber $`q`$ is then given by
$$\omega =-bd^3N_s\,iqe^{-iql_0}-D_rq^2.$$
(41)
One can easily show that the sand surface is unstable only if $`l_0`$ is greater than a critical value given by $`l_c=3\pi D_r/2bd^3N_s`$ (see Figure 4). Furthermore, near the instability threshold the most dangerous mode is $`k_{max}\approx 3\pi /2l_0`$, which corresponds to a wavelength $`\lambda _{max}\approx 4l_0/3`$. In this model, the order of magnitude of the ripple wavelength is set by the saltation length, whereas in the Anderson model the pertinent length is the reptation one. Since the ripple wavelength is of the same order of magnitude as the saltation length, the problem cannot be treated in the long-wavelength limit: here the process of ripple formation cannot be considered as a local one.
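The threshold is confirmed by a direct scan of eq. (41) (a Python sketch in illustrative units where $`B=bd^3N_s=1`$ and $`D_r=1`$):

```python
import numpy as np

B, Dr = 1.0, 1.0
lc = 3*np.pi*Dr/(2*B)                        # predicted critical length l_c

for l0 in [0.8*lc, 1.0*lc, 1.2*lc]:
    q = np.linspace(1e-3, 4.0, 40000)
    growth = -B*q*np.sin(q*l0) - Dr*q**2     # Re(omega), Eq. (41)
    print(round(l0/lc, 2), growth.max(), q[np.argmax(growth)]*2*l0/(3*np.pi))
# growth.max() changes sign at l0 = lc; above threshold the fastest mode
# sits near q = 3*pi/(2*l0) (last column ~ 1); below it the maximum is at q -> 0
```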
The Nishimori-Ouchi model gives interesting results; however, the way the saltating grains are modelled can be seriously questioned. First, the mechanism of ejection due to impacts of saltating grains on the sand (whose importance was evidenced by Bagnold ) is not taken into account; second, the variation of the saltation length with the takeoff point has never been clearly established, either from wind-tunnel experiments or from field observations. Furthermore, as discussed in the introduction, it is hard to reconcile this picture with the evidence in favour of locality.
## 4 Hydrodynamic model
We expose here a phenomenological model inspired by the ”BCRE” model developed in the context of avalanches in granular flows. This model was first adapted to the ripple formation process by Bouchaud and his coworkers and later on by Valance et al. . It is based on a continuous description in which the dynamics of the two pertinent grain species (the moving grains and the grains at rest) are considered. One of the advantages of the model is that it treats the erosion and deposition processes separately.
This model has been presented in . We find it worth, however, recalling its main lines. This will allow us to discuss the model more critically and to show how the final equation is sensitive to the starting physical ingredients. It will also clear up the question of why the equation derived in the next section (based on symmetry) contains additional nonlinearities not present in , and show how this can be cured.
We shall denote the density of moving grains by $`R(x,t)`$, where $`x`$ is the coordinate in the direction of the wind and $`t`$ the time. The grains at rest are measured in terms of the local height $`h(x,t)`$ of the static bed. In the thermodynamic limit, the dynamical equations for $`h`$ and $`R`$ read
$`\partial _tR=-V\partial _xR+\mathrm{\Gamma }[R,h],`$ (42)
$`\partial _th=-\mathrm{\Gamma }[R,h],`$ (43)
where $`V`$ is the mean velocity of the moving grains and $`\mathrm{\Gamma }`$ describes the exchange rate between the moving grains and the grains at rest:
$$\mathrm{\Gamma }=\mathrm{\Gamma }_{dep}+\mathrm{\Gamma }_{ej},$$
(44)
the first term describing the deposition process of the reptating grains and the other modelling the ejection of grains from the bed surface. $`\mathrm{\Gamma }`$ depends a priori on $`h`$ and $`R`$.
The expression of $`\mathrm{\Gamma }`$ can be determined using phenomenological physical arguments. We have seen that the saltating grains are never caught by the bed surface but act as a reservoir of energy. They induce reptation motion which is directly responsible for the ripple formation. Among the moving grains, we are therefore interested only in those in reptation. $`\mathrm{\Gamma }`$ should describe the exchange rate between the grains at rest and the reptating grains.
As seen above, the ejection rate of reptating grains is essentially driven by the flux of the saltating grains hitting the surface. To a smaller extent, one can expect that a small part of the reptating population is set in motion by the wind directly. One shall therefore consider two ejection mechanisms, one due to impacts of the saltating grains and the other driven by the wind.
(i) The ejection rate due to impacts can be modeled following Anderson's approach, where the ejection rate is given by $`\mathrm{\Gamma }_{ej}^{imp}\propto n_0N_{imp}`$ (we recall that $`N_{imp}`$ is the rate of saltating grains impacting the granular bed and $`n_0`$ is the number of ejected grains per impact). Anderson assumed $`n_0`$ to be constant. We will consider here that the efficiency of the ejection can depend on the bed topography, and especially on the bed curvature. Indeed, it is natural to think that it is easier to dislodge a grain at the top of a bump than in a trough. The number $`n`$ of ejected grains per impact can thus be modelled by
$$n=n_0(1-c\kappa ),$$
(45)
where $`c`$ is a constant parameter and $`\kappa `$ the bed curvature. The rate of ejection reads therefore
$`\mathrm{\Gamma }_{ej}^{imp}`$ $`=d^3n_0(1-c\kappa )N_{imp}`$ (47)
$`=d^3n_0N_0\left(1-c{\displaystyle \frac{h_{xx}}{(1+h_x^2)^{3/2}}}\right){\displaystyle \frac{(1+\mathrm{cot}\alpha \,h_x)}{(1+h_x^2)^{1/2}}}.`$
In the limit where $`h_x\ll 1`$, an expansion in powers of $`h_x`$ can be performed. Retaining the terms up to quadratic order, one gets
$$\mathrm{\Gamma }_{ej}^{imp}=\alpha _0(1+\alpha _1\partial _xh-\alpha _2\partial _x^2h)-\alpha _0\left[\alpha _3h_x^2+\alpha _4\partial _x(h_x^2)\right]+O(h_x^3).$$
(48)
$`\alpha _0=d^3n_0N_0`$, $`\alpha _1=\mathrm{cot}\alpha `$, $`\alpha _2=c`$, $`\alpha _3=1/2`$ and $`\alpha _4=c\,\mathrm{cot}\alpha /2`$. $`\alpha _0`$ is directly related to the number $`N_0`$ of saltating grains hitting a flat surface per unit time and unit surface. Let us recall the meaning of the different terms. The term proportional to $`\partial _xh`$ expresses the fact that the rate of ejection is greater when the local slope is facing the wind (the flux of saltating grains being larger on the stoss side, as seen previously). The last linear term takes into account the curvature effect: it is harder to dislodge grains in troughs than at the top of a crest. There are two nonlinear terms. The first nonlinearity comes from the contribution of the slope effect to the ejection rate, and the second one corresponds to the coupling between slope and curvature effects. It is worth noting that these nonlinear terms have been neglected in previous works but turn out to be important in the nonlinear development of the ripple instability under certain circumstances, to be specified below.
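The expansion (48) can be checked symbolically. In the SymPy sketch below, $`s`$ and $`k`$ stand for $`h_x`$ and $`h_{xx}`$ and are treated as first-order quantities; the output reproduces the linear terms and the two quadratic ones, and, using $`\partial _x(h_x^2)=2h_xh_{xx}`$, it fixes $`\alpha _3=1/2`$ and $`\alpha _4=c\,\mathrm{cot}\alpha /2`$ as quoted above:

```python
import sympy as sp

s, k, c, ca, eps = sp.symbols('s k c c_alpha epsilon')  # ca stands for cot(alpha)
expr = (1 - c*k/(1 + s**2)**sp.Rational(3, 2))*(1 + ca*s)/sp.sqrt(1 + s**2)
series = sp.series(expr.subs({s: eps*s, k: eps*k}), eps, 0, 3).removeO()
print(sp.expand(series.subs(eps, 1)))
# -> 1 + c_alpha*s - c*k - s**2/2 - c*c_alpha*k*s
```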
(ii) The ejection rate due to wind entrainment is in principle extremely weak because the wind is screened by the saltating grains. Indeed, it has been found from numerical simulation that the fluid entrainment is unimportant for a flat surface. However, one may think that the direct dislodgement by the wind of a grain located on the top of a crest can be significant. One can thus assume that the ejection rate due to wind entrainment is driven by curvature effect
$$\mathrm{\Gamma }_{ej}^{wind}=-\beta _2\partial _{xx}h+O(h_x^3).$$
(49)
(iii) The rate of deposition is assumed to be proportional to the number $`R`$ of reptating grains so that we can write
$`\mathrm{\Gamma }_{dep}`$ $`=-R\gamma `$ (50)
$`=-R\gamma _0(1\pm \gamma _1\partial _xh+\gamma _2\partial _x^2h).`$
$`\gamma ^{-1}`$ represents the typical time during which the reptating grains are moving before being incorporated into the sand bed. This lifetime can be interpreted in terms of a characteristic reptation length $`l`$ defined by $`l=V/\gamma `$, where $`V`$ is the mean speed of the reptating grains. The first contribution in (50) represents the deposition rate for a flat bed surface; $`\gamma _0^{-1}`$ thus corresponds to the typical lifetime of a reptating grain on a flat surface. The other contributions mimic the slope and curvature effects. The effect of slope on the deposition process can differ according to the importance of the wind drag on the reptating grains. If one assumes that the wind drag is negligible near the surface, one may think that deposition is enhanced on a stoss slope (positive sign in front of $`\gamma _1`$); indeed, the reptation length is expected to be smaller on a stoss slope due to gravity (cf. the Hoyle-Woods model). On the other hand, if the wind drag near the bed surface is significant, deposition should be weakened on slopes facing the wind (negative sign in front of $`\gamma _1`$), since a reptating grain on a stoss slope can gain additional energy from the wind and therefore travel over a longer distance. Finally, the plus sign in front of the term modelling the curvature effect clearly indicates that deposition is favoured in troughs in comparison with crests.
Equations (42)–(43), together with (44), (48), (49) and (50), completely describe our system. There exists a trivial solution corresponding to a flat bed surface; in this case, the density of reptating grains is simply given by $`R_0=\alpha _0/\gamma _0`$. The next step is to investigate the stability of the flat surface and the subsequent nonlinear dynamics.
We have seen just above that two different situations may be distinguished according to the presence (or not) of direct erosion by the wind. We will treat both situations and show that they lead to slightly different dynamics. We deal first with the case where direct erosion by the wind is present, because it is the situation which has been treated in .
* Presence of direct wind erosion
This is the situation when the wind is not too strong. The saltation curtain is not very dense and the wind near the bed is strong enough to lift off some grains from the bed. In this case, the exchange rate $`\mathrm{\Gamma }`$ reads
$$\mathrm{\Gamma }=\mathrm{\Gamma }_{ej}^{imp}+\mathrm{\Gamma }_{ej}^{wind}+\mathrm{\Gamma }_{dep}.$$
(51)
In that situation, the flat surface is found to be always unstable. As soon as the wind is strong enough to maintain saltation and therefore induce reptation motion (i.e., $`\alpha _0\ne 0`$), the surface is intrinsically unstable. In the situation where $`\alpha _0/V`$ is smaller than unity (which is expected for low saltation flux; cf ), the dispersion relation in the long-wavelength limit is given by
$$\omega =\gamma _0\left[(\alpha _1+\gamma _1)(\alpha _0/V)l_0^2q^2-l_0^3l_cq^4\right]+i\gamma _0l_0^2l_cq^3,$$
(52)
where $`l_0=V/\gamma _0`$ and $`l_c=\beta _2/V`$. $`l_0`$ is the reptation length for a flat bed, while $`l_c`$ (which has the dimension of a length) plays the role of a cut-off length preventing arbitrarily small-wavelength deformations of the surface. One clearly notes that the flat interface destabilizes as soon as $`\alpha _0`$ is non-zero. The most dangerous mode is given by $`\lambda _{max}=2\pi \sqrt{l_0l_c}/\sqrt{\epsilon (\alpha _1+\gamma _1)}`$ (where we set $`\epsilon =\alpha _0/V`$). One can note that the most dangerous mode does not vary linearly with the reptation length (as in the Anderson model) but is given by the geometric mean of the reptation length $`l_0`$ and $`l_c`$. This is a slight difference from the Anderson model. Unfortunately, field observations and data from wind-tunnel experiments do not allow us to discriminate between these two descriptions.
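A numerical maximisation of the real part of eq. (52) (with illustrative parameter values of our choosing) recovers this scaling; note that the exact maximiser carries an extra factor of $`\sqrt{2}`$ with respect to the order-of-magnitude estimate quoted above:

```python
import numpy as np

gamma0, l0, lc = 1.0, 1.0, 0.2
alpha1, gamma1, eps = 5.0, 1.0, 0.05        # eps = alpha0/V << 1

q = np.linspace(1e-4, 2.0, 200000)
growth = gamma0*((alpha1 + gamma1)*eps*l0**2*q**2 - l0**3*lc*q**4)
print(2*np.pi/q[np.argmax(growth)])                       # numerical lambda_max
print(2*np.pi*np.sqrt(2*l0*lc/(eps*(alpha1 + gamma1))))   # exact maximisation
```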
In order to investigate the subsequent development of the instability, the non-linear terms neglected in the linear analysis should be taken into account. By means of a multi-scale analysis, it is possible to perform a weakly non-linear expansion in the vicinity of the instability threshold (i.e., for $`\epsilon =\alpha _0/V\ll 1`$). We will not expose the strategy of this analysis here (a detailed presentation can be found in ) but just give the final outcome. After some algebra, the non-linear analysis yields an evolution equation for the bed profile which reads
$$\frac{\partial h}{\partial t}=f_1\partial _{xx}h+f_2\partial _{xxx}h+f_3\partial _{xxxx}h+f_{12}\partial _{xx}(h_x^2).$$
(53)
$`f_1=-l_0^2(\alpha _1+\gamma _1)\epsilon `$, $`f_2=-l_0^2l_c`$, $`f_3=-l_0^3l_c`$ and $`f_{12}=l_0^2l_c\gamma _1`$. We note that the leading non-linear term is of the form $`\partial _{xx}(h_x^2)`$. Using arguments based on symmetries and conservation laws (as will be seen in section 5), we would have expected a non-linearity of the form $`\partial _x(h_x^2)`$; this term does not appear here. We may thus wonder whether this is fortuitous or not. It turns out to be an accident here: in the present situation this term is of higher order, since it is multiplied by $`\alpha _0`$ (see eq. 48), which scales here as $`\epsilon `$. We will see that in the next case, where there is no direct wind erosion, $`\partial _x(h_x^2)`$ is the leading-order non-linear term.
* Absence of direct wind erosion
This situation occurs when the wind is relatively strong, such that the saltation curtain is dense enough to screen the wind near the bed. This question was not discussed previously and constitutes an interesting point for the comparison with the symmetry arguments developed later. In this case, the erosion rate due to wind entrainment is neglected, so that the exchange rate $`\mathrm{\Gamma }`$ is given by:
$$\mathrm{\Gamma }=\mathrm{\Gamma }_{ej}^{imp}+\mathrm{\Gamma }_{dep}$$
(54)
A linear stability analysis teaches us that the flat bed surface is unstable above a certain threshold, defined by $`\epsilon =(\alpha _1-\gamma _1)=0`$. Indeed, the growth rate of a perturbation of the form $`e^{iqx+\omega t}`$ is given by
$$\omega \approx \frac{\gamma _0\alpha _0}{V}\left[\epsilon l_0^2q^2-il_cl_0^2q^3-l_cl_0^3q^4\right],$$
(55)
where $`l_c`$ is now defined by $`l_c=\alpha _2+\gamma _2`$. The surface is unstable for $`\epsilon >0`$ (i.e., $`\alpha _1>\gamma _1`$). Since $`\alpha _1=\mathrm{cot}\alpha `$ (we recall that $`\alpha `$ is the incident angle of the saltating grains), there exists a critical incident angle $`\alpha _c`$ below which the flat bed is unstable. In other words, grazing impact angles favour the ripple instability. The most dangerous mode $`q_{max}`$, which is expected to give the order of magnitude of the ripple wavelength, is easily estimated: $`\lambda _{max}=2\pi /q_{max}=2\pi \sqrt{l_0l_c}/\sqrt{\epsilon }`$. Here again it is the geometric mean of the reptation length $`l_0`$ and $`l_c`$.
A weakly non-linear analysis in the vicinity of the instability threshold (i.e., for $`\epsilon =\alpha _1-\gamma _1\ll 1`$) can be performed along the same lines as in the previous case. The calculation yields
$$\frac{\partial h}{\partial t}=f_1\partial _{xx}h+f_2\partial _{xxx}h+f_3\partial _{xxxx}h+f_{11}\partial _x(h_x^2)+f_{12}\partial _{xx}(h_x^2),$$
(56)
where $`f_1=-l_0^2\epsilon `$, $`f_2=l_0(\alpha _2+\gamma _2)\epsilon ^{1/2}`$, $`f_3=-l_0(\alpha _2+\gamma _2)\epsilon ^{1/2}`$, $`f_{11}=l_0\alpha _3`$ and $`f_{12}=(\alpha _4-\alpha _3l_0-\gamma _1l_c)\epsilon ^{1/2}`$. The leading non-linear term is $`\partial _x(h_x^2)`$, that is, the non-linear term expected from the symmetries, as will be seen below. The non-linear term coming at next order is $`\partial _{xx}(h_x^2)`$, and we will see that this second non-linearity is crucial to stabilize the linear growth of the structure.
## 5 Non-linear ripple dynamics
In the previous sections we have seen how a mathematical model of ripple formation can be constructed. It is natural to ask whether there is a simple explanation of why Eq. (56) is the governing equation of ripple formation. It turns out that, invoking only geometrical and conservation considerations, it is possible to predict the form of the equation including the leading non-linear terms. The power of this approach, as used recently in a more general context , is that it is model-independent, and that it can provide very general ingredients for the appearance of a nonlinearity, as we shall comment below. In particular, it will appear, for example, that although the nonlinearity $`\partial _{xx}(h_x^2)`$ is compatible with symmetries and conservation, it would not be present if the system were not anisotropic (here the anisotropy is due to the wind).
### 5.1 General approach
To start with, let us consider an arbitrary (non-self-intersecting) curved front in one dimension. It represents the sand–air front, parametrized by an intrinsic variable $`\alpha `$ ($`0<\alpha <1`$). In a coordinate-system-independent representation, the front can be characterized (up to a rotation and a displacement) by its curvature $`\kappa `$ as a function of the arclength $`s`$. It is conceptually important to make a clear distinction between $`\alpha `$ and $`s`$. For example, $`\alpha =1`$ always corresponds to the end of the curve, while the arclength coordinate of the end (i.e., the total length of the curve) can change. It is therefore not equivalent to work at constant $`\alpha `$ or at constant $`s`$.
We are interested in deriving a general form of evolution equation for the front. More precisely we are seeking the equation of evolution of the curvature. From geometrical considerations we obtain the following equation
$$\kappa _t|_s=\left[\frac{\partial ^2}{\partial s^2}+\kappa ^2\right]v_\mathrm{n}+\frac{\partial \kappa }{\partial s}\int _0^s\,ds^{\prime }\,\kappa v_\mathrm{n},$$
(57)
where $`v_\mathrm{n}`$ denotes the normal component of the local velocity of the surface. This is a general equation which holds for any front.
The normal velocity $`v_\mathrm{n}`$ contains the physics of the evolution process of the surface. Since $`v_\mathrm{n}`$ is a coordinate-system-independent quantity (i.e., a scalar), it must be a function of the curvature and its derivatives with respect to the arclength (which are also scalars). The knowledge of $`v_\mathrm{n}(\kappa )`$ allows one to obtain the dynamics of the front from Eq. (57). In the general case, however, this is possible only by numerical integration of the equation. Note also that the above equation is very appropriate for numerical treatment in an intrinsic representation.
In our particular case we restrict ourselves to slightly curved fronts, which will allow us to derive the evolution equation in a closed form. There is a privileged direction in the ripple formation process, as the $`x\to -x`$ symmetry is broken by the wind. Therefore $`v_\mathrm{n}`$ may contain an explicit dependence on the local slope $`\theta `$ of the surface
$$v_\mathrm{n}=v_\mathrm{n}(\theta ,\kappa ,\kappa _s,\mathrm{}).$$
(58)
Since $`\kappa =\theta _s`$, we can reformulate Eq. (58) as
$$v_\mathrm{n}=v_\mathrm{n}(\theta ,\theta _s,\theta _{ss},\mathrm{}).$$
(59)
The concrete choice of this dependence is restricted by the mass conservation law for the sand
$$\oint v_\mathrm{n}\,ds=0.$$
(60)
This condition eliminates, for example, a choice like $`v_\mathrm{n}\propto \kappa ^2`$. If the evolution process can be considered as local, as in the case when the reptation length is much smaller than the ripple wavelength, we can write Eq. (59) as
$$v_\mathrm{n}=\frac{\partial }{\partial s}F(\theta ,\theta _s,\theta _{ss},\dots ),$$
(61)
where $`F`$ is an arbitrary (but smooth) function of its arguments. Expanding $`F`$ around $`\theta =0`$ (straight front) we obtain
$$v_\mathrm{n}=\frac{\partial }{\partial s}\left(f_1\theta +f_2\theta _s+\dots +\frac{1}{2}f_{11}\theta ^2+f_{12}\theta \theta _s+\frac{1}{2}f_{22}\theta _s^2+\dots +\frac{1}{6}f_{111}\theta ^3+\dots \right)$$
(62)
The assumption of a slightly curved front (the height $`H`$ of the ripples is always much smaller than their wavelength $`\lambda `$ in the experiments, $`\partial h/\partial x\sim H/\lambda \ll 1`$) allows us to describe the front by a more natural parameter: its height $`h(x,t)`$ with respect to the initial state (Fig. 2). The inclination $`\theta `$ and the curvature $`\kappa `$ can be expressed in terms of $`h`$ at leading order as $`h_x`$ and $`h_{xx}`$, respectively. Substituting Eq. (62) into Eq. (57) and keeping only the lowest-order linear terms results in
$$h_t=f_1h_{xx}+f_2h_{xxx}+f_3h_{xxxx}.$$
(63)
The first term on the r.h.s corresponds to a sum of diffusion in the fluidized upper layer driven by gravity (i.e., rolling in the model of Hoyle ) and the effect of erosion by the wind. The third term represents surface diffusion that comes from an effective surface tension (also related to the property of the fluidized layer) so its prefactor $`f_3`$ is considered to be always negative (stabilizing). If the prefactor of the first term $`f_1>0`$ then the flat interface ($`h=0`$) is stable. If, on the other hand, $`f_1<0`$ then the flat interface becomes unstable against ripple formation. The second term in Eq. (63) is a propagative term that is responsible for the drift of the emerging pattern. In fact, a term proportional to $`h_x`$ is also acceptable in Eq. (63) but it can be easily eliminated by a Galilean transformation.
### 5.2 Wavelength selection at short times
In the previous section we have seen that analyzing the physical processes during sand ripple evolution one finds the prefactor of $`h_{xx}`$ can change its sign and become negative in case of sufficiently strong winds. This leads to the appearance of a range of linearly unstable modes for $`f_1<0`$. The linear dispersion relation of fluctuations around $`h=0`$ is
$$\omega (q)=-f_1q^2-if_2q^3+f_3q^4,$$
(64)
where $`q`$ is the wavenumber ($`h\propto e^{\omega t+iqx}`$). The growth rate of fluctuations is determined by the real part of the dispersion relation ($`-f_1q^2+f_3q^4`$), while the imaginary part describes the drift properties. The wavenumber of the linearly most unstable mode is $`q_\mathrm{c}=\sqrt{f_1/(2f_3)}`$ (note that $`f_1,f_3<0`$), which gives the typical ripple wavelength ($`\lambda _\mathrm{c}=2\pi /q_\mathrm{c}`$) observed shortly after their appearance.
To proceed, we have to identify which non-linear terms have the most important contribution in Eq. (63). Consider a ripple structure of wavelength $`\lambda `$. Its amplitude $`H`$ will be infinitesimal at $`t=0`$ and then grow exponentially due to the linear instability. We can write the typical height as $`H\sim \lambda ^a`$, where $`a=a(t)<0`$ is an increasing function of time. The orders of magnitude of the terms in Eq. (62) are easily evaluated:
| $`\theta _{ss}`$ | $`\sim `$ | $`h_{xxxx}`$ | $`\sim `$ | $`\lambda ^{a-3}`$ |
| --- | --- | --- | --- | --- |
| $`\theta ^2`$ | $`\sim `$ | $`(h_x^2)_x`$ | $`\sim `$ | $`\lambda ^{2a-2}`$ |
| $`\theta \theta _s`$ | $`\sim `$ | $`(h_x^2)_{xx}`$ | $`\sim `$ | $`\lambda ^{2a-3}`$ |
| $`\theta ^3`$ | $`\sim `$ | $`(h_x^3)_x`$ | $`\sim `$ | $`\lambda ^{3a-3}`$ |
The dominant terms are the ones with the largest exponent. Thus, when $`a`$ is a large negative number ($`t\to 0`$), all the non-linear terms are negligible compared to the linear terms, as expected. The first non-linearity that becomes significant is $`(h_x^2)_x`$. In fact, this term contains an odd number of spatial derivatives and therefore contributes only to the imaginary part of $`\omega (q)`$. Consequently, it cannot lead to the development of a finite-height structure, although it modifies the drift properties. We disregard for the moment the term $`(h_x^2)_x`$, to which we will come back later. The next lowest-order term is $`(h_x^2)_{xx}`$, which does lead to saturation. We have three physical scales in the problem (time, length, and height), and Eq. (63) extended with the aforementioned non-linear term contains four coefficients. Thus by appropriately rescaling the variables<sup>4</sup> (<sup>4</sup>With an equation having 5 terms, we have 4 independent coefficients. Rescaling space, time and the height, we can absorb 3 of them, so that the equation can be reduced to a one-parameter one.) the full equation can be reduced to the following single-parameter non-linear evolution equation for the ripple height
$$h_t=-h_{xx}+\nu h_{xxx}-h_{xxxx}+(h_x^2)_{xx}.$$
(65)
The sign of the non-linear term is taken to be positive; choosing the negative sign would simply be equivalent to an $`h\to -h`$ transformation of the original. Since Eq. (65) has no up–down symmetry, simply by inspecting the form of the ripples one can decide whether it is the positive or the negative sign that corresponds to the physical situation (Fig. 2). Apparently, it is the positive sign that is appropriate for both aeolian and underwater ripples.
### 5.3 Amplitude expansion
To analyze the properties of Eq. (65) let us consider the stationary ($`h_t=0`$) solutions of Eq. (65) with spatial period $`L`$ and for the moment with $`\nu =0`$.
The first remark to be made is that the instability of the planar solution can manifest itself only if the lateral size $`L`$ of the system is larger than $`\lambda _{\mathrm{cut}\mathrm{off}}=2\pi `$. This feature is due to the fact that the smallest wavenumber available in the system is given by $`2\pi /L`$. Thus, if the system is too small, then all possible Fourier modes will be stable (Fig. 5). In order to find the amplitude of the developed pattern we re-write Eq. (65) in Fourier space
$$\dot{A}_n=\omega (nq)A_n+q^4n^2\sum _{m=-\mathrm{\infty }}^{\mathrm{\infty }}m(n-m)A_mA_{n-m},$$
(66)
where $`A_n`$ is the amplitude of the Fourier mode with wave number $`nq`$, i.e., $`h(x)=\sum _{n=-\mathrm{\infty }}^{\mathrm{\infty }}A_ne^{inqx}`$. The amplitudes are subject to the restriction $`A_{-n}=\overline{A}_n`$ (since $`h(x)`$ is a purely real function) and $`A_0\equiv 0`$ (since we impose $`\int h(x)\,dx=0`$).
If $`L`$ is only slightly above $`\lambda _{\mathrm{cut}\mathrm{off}}`$, then only the longest-wavelength mode ($`A_1`$) is active (i.e., unstable). The first harmonic ($`A_2`$) is inherently stable, but since it is coupled to the leading mode through the non-linear term, it will be non-zero. The higher harmonics can be safely neglected, as their amplitudes will be exponentially small compared to $`A_2`$. We take into account only the first two modes; in addition we can assume that $`A_2`$ varies much faster than $`A_1`$, and hence $`A_2`$ is adiabatically slaved to $`A_1`$. Solving the resulting set of two equations we find for the leading amplitude
$$A_1^2=-\frac{\omega (q)\omega (2q)}{16q^8},$$
(67)
where $`q=2\pi /L`$. In order to have a solution for $`A_1`$, the r.h.s. of Eq. (67) has to evaluate to a non-negative real number. Therefore two conditions have to be satisfied: (i) $`\omega (q)\omega (2q)<0`$ and (ii) $`\mathrm{Im}(\omega (q)\omega (2q))=0`$. Since $`\omega (q)>0`$ and $`\omega (2q)<0`$, and both are real, the two conditions are met. It is convenient to choose $`A_1`$ to be real (Eq. (67) fixes only the magnitude of $`A_1`$), and then it scales as $`A_1\sim (q_{\mathrm{cut}\mathrm{off}}-q)^{1/2}`$, where $`q_{\mathrm{cut}\mathrm{off}}=1`$. The approximation that only one mode is active breaks down far from the threshold ($`L\gg \lambda _{\mathrm{cut}\mathrm{off}}`$ or $`q\ll q_{\mathrm{cut}\mathrm{off}}`$). Indeed, for $`q=\frac{1}{2}q_{\mathrm{cut}\mathrm{off}}`$ Eq. (67) gives a zero amplitude, which is obviously not correct.
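Near threshold, eq. (67) is easy to tabulate (a Python sketch for $`\nu =0`$, where $`\omega (q)=q^2-q^4`$ is real and condition (ii) is automatic); the printed amplitude vanishes as $`q\to q_{\mathrm{cut}\mathrm{off}}=1`$, consistent with the square-root scaling:

```python
import numpy as np

def omega(q):
    return q**2 - q**4           # real dispersion of the rescaled equation (nu = 0)

for L in [6.45, 6.8, 7.5, 9.0]:  # lambda_cutoff = 2*pi ~ 6.28
    q = 2*np.pi/L
    A1 = np.sqrt(-omega(q)*omega(2*q))/(4*q**4)   # Eq. (67)
    print(L, q, A1)
```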
Far from the threshold we consider that the wavelength of the pattern is very large (i.e., $`q\ll q_{\mathrm{cut}\mathrm{off}}`$) and thus take approximately $`\omega (nq)\approx n^2q^2`$. To remove the dependence on $`q`$ from Eq. (66) we look for a solution of the form $`A_n\propto q^{-2}`$. After some algebra the amplitude of the $`n`$th mode is found to be
$$A_n=\frac{1}{2n^2q^2}.$$
(68)
This relation is expected to be valid only if $`nq\ll 1`$. Figure 7 shows that Eq. (68) reproduces very well the direct numerical solution of Eq. (66).
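In fact, the coefficients $`A_n=1/(2n^2q^2)`$ are exactly the Fourier coefficients of a periodic train of parabolic arcs (cf. the parabolic solution discussed in section 5.5). A quick numerical check:

```python
import numpy as np

L = 40.0                         # pattern wavelength, q = 2*pi/L << 1
q = 2*np.pi/L
N = 4096
x = np.arange(N)*L/N

h = (x - L/2)**2/4               # parabolic arcs with cusps at x = 0, L
h -= h.mean()                    # impose zero mean

c = np.fft.rfft(h)/N             # exponential Fourier coefficients
n = np.arange(1, 8)
print(np.round(c[1:8].real, 6))  # numerical amplitudes (imaginary parts ~ 0)
print(np.round(1/(2*n**2*q**2), 6))   # prediction of Eq. (68)
```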
### 5.4 Coarsening
There is a subtle issue worth emphasizing. A stationary solution $`h_L(x)`$ with period $`L`$ will also be a stationary solution in a box of size $`2L`$. That is, if $`L`$ is large enough, there can be multiple solutions with periods that are divisors of $`L`$. Which one of these is stable? We find by numerical stability analysis that the solution with the longest period is the stable one. This means that during the temporal evolution from a planar front, first the fastest-growing mode appears and then the structure gradually coarsens to reach the final state of one huge ridge.
The width $`w^2=\langle h^2\rangle `$ of the pattern evolves in time as
$`{\displaystyle \frac{1}{2}}\partial _tw^2`$ $`=`$ $`\langle h_th\rangle =\langle h\left(-h_{xx}-h_{xxxx}+(h_x^2)_{xx}\right)\rangle `$ (69)
$`=`$ $`\langle h_x^2\rangle -\langle h_{xx}^2\rangle +\langle h_{xx}h_x^2\rangle .`$
The last term is zero since it is a full derivative with respect to $`x`$. The growth of the width is due to the first term, the second one being negligible at later times. The typical wavelength and slope scale with time as $`\lambda \sim t^{1/z}`$ and $`\theta \sim t^\alpha `$, respectively. Using Eq. (68) we find that the amplitude $`H`$ of the structure behaves as
$$H\sim A_1\sim \lambda ^2.$$
(70)
On the other hand $`H`$ can be approximated as
$$H\sim \lambda \theta .$$
(71)
Combining these two relations with the scaling of $`\lambda `$ and $`\theta `$ gives the exponent relation
$$\alpha =1/z.$$
(72)
The width of the interface scales as $`w\sim H`$ and thus the order of the terms of Eq. (69) is written as
$$O(t^{4/z-1})=O(t^{2/z})-O(1).$$
(73)
We see that the growth is dominated by the first term on the r.h.s that corresponds to the unstable linear term. By equating the exponents on the two sides of Eq. (73) we obtain for the coarsening exponent
$$1/z=1/2,$$
(74)
in accord with the results of numerical simulation (Fig. 8).
For $`\nu >0`$ the pattern loses its $`x\to -x`$ symmetry (Fig. 6) and drifts sideways. We have measured the drift velocity numerically (Fig. 9); close to the threshold it compares well with the result of a calculation around the threshold,
$$v=\nu -3\nu (\lambda /\lambda _{\mathrm{cut}\mathrm{off}}-1)+O((\lambda /\lambda _{\mathrm{cut}\mathrm{off}}-1)^2).$$
(75)
This equation results from the requirement (see Eq. (67)) that $`\mathrm{Im}(\omega (q)\omega (2q))=0`$. The imaginary contribution originating from the $`\nu h_{xxx}`$ term can be compensated by a purely propagative term $`vh_x`$, which fixes the value of the drift $`v`$. We find that the coarsening law for drifting patterns does not change: the scaling $`t^{1/2}`$ is observed (Fig. 8). This is not surprising, since including an $`h_{xxx}`$ term in Eq. (69) gives a zero contribution, as $`\langle hh_{xxx}\rangle =-\langle h_xh_{xx}\rangle =0`$.
### 5.5 Higher order non-linearities
As can easily be seen from Eq. (67), the amplitude of the basic Fourier mode ($`A_1`$) for the solutions of Eq. (65) grows as $`L^2`$. That is, the typical slope of the structure ($`A_1/L`$) increases indefinitely as the wavelength increases during coarsening. Another way to view this feature is to note that Eq. (65) possesses, for $`\nu =0`$, a parabolic particular solution of the form $`h(x)=h_0+\frac{1}{4}x^2`$. Introducing the next-order non-linear term ($`(h_x^3)_x`$) limits the growth of the amplitude to be of the order of $`L`$ and thus imposes a finite slope. In fact, in the derivation of Eq. (65) we supposed that the slope is small, and therefore that equation is only valid at the birth of the ripple structure. At later times higher-order non-linearities start to play an important role, and the evolution equation changes to
$$h_t=-h_{xx}+\nu h_{xxx}-h_{xxxx}+\mu (h_x^2)_{xx}+\eta (h_x^3)_x.$$
(76)
Here we have introduced the parameters $`\mu `$ and $`\eta `$ to control the relative importance of the two non-linear terms.
If we set $`\mu =0`$ then the $`h\to -h`$ symmetry of the equation is restored. As a consequence, at later stages of ripple development, when the second non-linearity becomes dominant, the shape of the ripples becomes more triangular. If in addition $`\nu =0`$, then the $`x\to -x`$ symmetry is also restored and the system becomes variational. In this limit Eq. (76) reduces to the noiseless conserved Cahn-Hilliard equation . The coarsening then takes place very slowly; the typical length scale grows as
$$\lambda \sim \mathrm{ln}\,t.$$
(77)
By setting non-zero $`\nu `$ and $`\mu `$, that is, imposing a drift ($`h_{xxx}`$ term) and re-introducing the leading non-linearity ($`(h_x^2)_{xx}`$ term), we observe an effective scaling of the wavelength over almost one decade. The exponent is found to be close to $`1/4`$. But the coarsening process stops after some time, meaning that the surface regains its stability. This effect can be attributed to the stabilizing nature of the $`h_{xxx}`$ term: it introduces a wavelength-dependent drift velocity for the perturbations and thus diminishes their coherence, leading to an effective stabilization. The non-linear term $`(h_x^2)_{xx}`$, on the other hand, acts in the direction of destabilizing the surface and accelerates the coarsening. Far from the threshold, however, the importance of this term becomes negligible and the second non-linearity dominates the dynamics. Since in the case $`\nu =0`$ considered above the coarsening was logarithmic, so in a sense marginal, introducing a stabilizing term can lead to an eventual stopping of the coarsening process. This was not the case with only the leading non-linearity, as we have seen above: there the scaling is not affected by the extra linear term.
The numerical results presented in the figures have been obtained by integrating Eq. (65) and Eq. (76) using a pseudo-spectral method with $`\mathrm{\Delta }x=0.1`$ and $`\mathrm{\Delta }t=10^{-5}`$ in a system of size $`L\approx 30\lambda _{\mathrm{cut}\mathrm{off}}`$. Figure 10 shows the temporal evolution of the structure corresponding to the case $`\nu =\mu =\eta =1`$. The simulation was started from a small-amplitude random initial condition. Soon a rather regular pattern appears, with wavenumber corresponding to the linearly most unstable mode. The structure contains defects that trigger the coarsening process: by the end of the simulation the typical wavelength has doubled. The slowing down of the drift with growing wavelength predicted by Eq. (75) is also clearly observable.
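For readers who wish to reproduce the qualitative behaviour, the following self-contained Python sketch integrates Eq. (76) in the same pseudo-spectral spirit. It is not the original code: the time step is coarser than the value quoted above (to keep the run short), the stiff linear part is advanced with its exact propagator, the nonlinear part with a first-order explicit step, and no dealiasing is applied.

```python
import numpy as np

# h_t = -h_xx + nu*h_xxx - h_xxxx + mu*(h_x^2)_xx + eta*(h_x^3)_x
nu, mu, eta = 1.0, 1.0, 1.0
L, N, dt = 30*2*np.pi, 1024, 1e-3

q = 2*np.pi*np.fft.rfftfreq(N, d=L/N)     # angular wavenumbers
omega = q**2 - q**4 - 1j*nu*q**3          # linear dispersion relation
E = np.exp(omega*dt)                      # exact linear propagator over dt

rng = np.random.default_rng(1)
hk = np.fft.rfft(1e-3*rng.standard_normal(N))   # small random initial condition

def nlin(hk):
    hx = np.fft.irfft(1j*q*hk, n=N)              # h_x in real space
    return -mu*q**2*np.fft.rfft(hx**2) + 1j*eta*q*np.fft.rfft(hx**3)

for step in range(1, 200001):             # integrate up to t = 200
    hk = E*(hk + dt*nlin(hk))             # first-order exponential Euler step
    if step % 40000 == 0:
        lam = 2*np.pi/q[1 + np.argmax(np.abs(hk[1:]))]   # spectral-peak wavelength
        print(step*dt, lam)
```

Setting $`\eta =0`$ should recover the unbounded coarsening of Eq. (65) with the $`t^{1/2}`$ law, while with all three nonlinear terms present the wavelength roughly doubles and then saturates, as described above.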
## 6 Conclusion and discussion
Based on general arguments and observations we have adopted the notion of locality for sand ripple formation. We have then presented general symmetry and conservation considerations to show how a model-independent nonlinear equation for sand ripples can be derived. That equation takes the form
$$h_t=-h_{xx}+\nu h_{xxx}-h_{xxxx}+(h_x^2)_x+\mu (h_x^2)_{xx}+\eta (h_x^3)_x.$$
(78)
The first nonlinearity $`(h_x^2)_x`$ contributes to the drift and is not able to saturate the linear growth, so the first efficient nonlinearity is $`\mu (h_x^2)_{xx}`$. At short times an ordered ripple structure emerges with a wavelength close to that of the linearly fastest growing mode. Later a coarsening process takes place. With $`\eta =0`$ coarsening continues indefinitely, until one huge dune is reached. The coarsening is quantified in terms of a dynamical exponent defined through the increase of the mean wavelength, $`\lambda \sim t^{1/z}`$. We find both analytically and numerically $`z=2`$. We have also shown that in that case the slope increases without bound as the system size increases. Thus higher nonlinear terms must become decisive. We have included the next nonlinear term $`(h_x^3)_x`$, which leads to a saturation of the slope (both the height and the wavelength scale in the same manner as a function of the system size). Though at short times this nonlinear term is irrelevant, it dominates the dynamics at longer times. Here again we observe a coarsening, but with a smaller exponent, $`1/z=1/4`$. The coarsening seems to stop after a certain stage, typically when the wavelength is about twice that of the fastest growing mode. We have found that as the ripple structure forms, it drifts sideways. The drift occurs with dispersion (the drift velocity depends on the wavelength). The first term responsible for the drift is the one proportional to $`h_{xxx}`$ (note that there is another linear term, $`h_x`$, which provides a phase velocity that can be absorbed in $`h_t`$ via a Galilean transformation). The nonlinear term $`(h_x^2)_x`$, though it does not saturate the linear growth, contributes significantly to the drift. In particular, it can also lead to a drift opposite to the wind. This happens in particular when all three nonlinearities of Eq. (78) are present but $`\nu =0`$.
The hydrodynamical model captures the full nonlinear equation written above and provides an encouraging physical basis for the derivation of the ripple equation from physical ingredients. Unlike symmetry and conservation laws, the explicit physical model relates the coefficients to the underlying (phenomenological) physical parameters and provides the physical explanation for the initiation of the instability. That instability is also present, in a very transparent and general form, in Anderson's model.
These two views (explicit phenomenological model and symmetries) have helped to identify a general picture of the form of the ripple evolution equation. A numerical study, though not exhaustive, has allowed the extraction of some general results. It will be of great importance in future studies to quantify experimentally the coarsening process and to identify whether the coarsening stops or would rather continue without bound as long as energy is injected. This step will be vital to guide further theoretical development.
# Gravitational Waves from Low-Mass X-ray Binaries: a Status Report

Much theoretical progress has been made in understanding GW emission from LMXBs in the six months since the Amaldi conference in July 1999. Rather than just transcribe the talk given by one of us, we review the situation as of December 1999. Because of space limitations, this review is far from complete.
## I Spins of Accreting Neutron Stars
Gravitational wave emission from rapidly rotating neutron stars (NS) has attracted considerable interest in the past several years. In addition to radiation from the spindown of newborn NSs (see the review by B. Owen in this volume), it has long been suspected Wagoner84 ; Thorne87:300yrs that rapidly accreting NSs, such as Sco X-1, may be a promising class of gravitational wave (GW) emitters. However, firm observational evidence of fast spins of these neutron stars had been missing until recently.
NSs in low-mass X-ray binaries (LMXBs) have long been thought to be the progenitors of millisecond pulsars bhattacharya95 . However, directly measuring their periods has proved elusive, probably because of their rather low magnetic fields. With the launch of the Rossi X-ray Timing Explorer, precision timing of accreting NSs has opened new threads of inquiry into the behavior and lives of these objects. RXTE observations klis99:\_millis have finally provided conclusive evidence of millisecond spin periods of NSs in about one-third of known Galactic LMXBs. These measurements are summarized in Fig. 1a. Altogether, there are seven such NSs with firmly established spin periods, by either pulsations in the persistent emission (discovered by Wijnands & van der Klis in the millisecond X-ray pulsar SAX J1808.4-3658; wijnands98 ) or oscillations during type I X-ray bursts (burst QPOs, first discovered in 4U 1728–34 by Strohmayer et al. strohmayer96:\_millis\_x\_ray\_variab\_accret ). There are an additional thirteen sources with twin kHz QPOs for which the spin may be approximately equal to the frequency difference klis99:\_millis . A striking feature of all these neutron stars is that their spin frequencies lie within a narrow range, $`260\mathrm{Hz}<\nu _s<589\mathrm{Hz}`$. The frequency range might be even narrower if the burst QPOs seen in KS 1731–260, MXB 1743–29, and Aql X-1 are at the first harmonic of the spin frequency, as is the case with the $`581\mathrm{Hz}`$ burst oscillations in 4U 1636–536 miller99:\_eviden\_antip\_hot\_spots\_durin . These NSs accrete at diverse rates, from $`10^{-11}M_{\odot }\mathrm{yr}^{-1}`$ to the Eddington limit, $`\dot{M}_{\mathrm{Edd}}=2\times 10^{-8}M_{\odot }\mathrm{yr}^{-1}`$. Since disk accretion exerts a substantial torque on the NS and these systems are very old vanParadijs95:LMXB\_distrib , it is remarkable that their spin frequencies are so similar, and that none of them are near the breakup frequency of $`1.5\mathrm{kHz}`$.
One possible explanation, proposed by White & Zhang white97 , is that these stars have reached the magnetic spin equilibrium (where the spin frequency matches the Keplerian frequency at the magnetosphere) at nearly identical frequencies. This requires that the NS dipolar $`B`$ field correlate very well with $`\dot{M}`$ white97 ; miller98 . However, there are no direct $`B`$ field measurements for LMXBs, and in the strongly magnetic binaries, where the $`B`$ field has been measured directly, such a correlation is not observed. More importantly, for 19 out of 20 systems, there must be a way of hiding persistent pulses typically seen from magnetic accretors. These difficulties led Bildsten Bildsten98:GWs to resurrect the conjecture originally due to Papaloizou & Pringle Papaloizou78:gravity\_waves and Wagoner Wagoner84 that gravitational radiation can balance the torque due to accretion. The detailed mechanisms will be discussed in the following sections.
Regardless of the detailed mechanism for GW emission, if gravitational radiation balances the accretion torque, then it is easy to estimate the GW strength. As noted by Wagoner Wagoner84 , in equilibrium the luminosities in GWs and in X-rays are both proportional to the mass accretion rate $`\dot{M}`$, so the characteristic strain amplitude $`h_c`$ depends on the X-ray flux $`F_x`$ at Earth and the spin frequency
$$h_c=4\times 10^{-27}\frac{R_6^{3/4}}{M_{1.4}^{1/4}}\left(\frac{300\mathrm{Hz}}{\nu _s}\right)^{1/2}\left(\frac{F_x}{10^{-8}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}}\right)^{1/2}.$$
(1)
In Fig. 1b we show $`h_c`$ for Sco X-1 (marked with a star), a few other bright LMXBs with the spin inferred from kHz QPO separation (thick dots) and burst QPO frequency (triangles), and the millisecond X-ray pulsar SAX J1808.4-3658 (open diamond). The dotted line shows LIGO-II sensitivity $`h_{3/\mathrm{yr}}`$ (i.e., $`h_c`$ detectable with 99% confidence in $`10^7`$ s, provided the frequency and the phase of the signal are known in advance bradycreighton99 ) in the broadband configuration, while the solid line shows $`h_{3/\mathrm{yr}}`$ for the narrowband configuration. However, the frequency and the phase are known precisely only for the SAX J1808.4-3658 millisecond X-ray pulsar chakrabarty98b . For other sources, Brady & Creighton bradycreighton99 showed that the number of trials needed to guess the poorly known orbital parameters or to account for the torque noise due to $`\dot{M}`$ variations lowers the effective sensitivity by roughly a factor of two.
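As a quick numerical check of Fig. 1b, Eq. (1) can be evaluated directly; the sketch below is illustrative, and the flux used in the example is a placeholder rather than a measured value.

```python
def h_c(F_x, nu_s, R6=1.0, M14=1.0):
    """Characteristic strain at torque balance, Eq. (1).

    F_x  : X-ray flux at Earth in erg cm^-2 s^-1
    nu_s : spin frequency in Hz
    """
    return (4e-27 * R6**0.75 / M14**0.25
            * (300.0 / nu_s)**0.5 * (F_x / 1e-8)**0.5)

# e.g. a bright accretor with F_x = 2e-7 erg cm^-2 s^-1 spinning at 300 Hz:
print(h_c(2e-7, 300.0))     # ~1.8e-26
```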
While the average $`\dot{M}`$ certainly correlates with the X-ray brightness, current observations unfortunately do not let us robustly infer the instantaneous torque klis99:\_millis . Even though $`\dot{M}`$ varies on a timescale of days, torque noise leads to frequency drift only on a timescale of weeks. The accretion torque is $`N_a=\dot{M}(GMR)^{1/2}`$, and the total time-averaged torque is zero due to equilibrium with GW emission. Assume that $`N_a`$ flips sign randomly on a timescale $`t_s`$ of a few days. The spin frequency $`\mathrm{\Omega }`$ will experience a random walk with step size $`\delta \mathrm{\Omega }=(N_a/I)t_s`$, where $`I`$ is the NS moment of inertia. After an observation time $`t_{\mathrm{obs}}`$, the drift is $`\mathrm{\Delta }\mathrm{\Omega }=(t_{\mathrm{obs}}/t_s)^{1/2}\delta \mathrm{\Omega }`$. This exceeds a Fourier frequency bin width, i.e., $`\mathrm{\Delta }\mathrm{\Omega }>2\pi /t_{\mathrm{obs}}`$, only after
$$t_{\mathrm{obs}}=\frac{21\mathrm{days}}{M_{1.4}^{1/3}R_6^{1/3}}\left(\frac{1\mathrm{day}}{t_s}\right)^{1/3}\left(\frac{10^{-8}M_{\odot }\mathrm{yr}^{-1}}{\dot{M}}\right)^{2/3}.$$
(2)
Hence, on a timescale of tens of days, the intrinsic GW signal is coherent. The dashed line in Fig. 1b shows the LIGO-II sensitivity for a two-week integration in a narrowband configuration. This suggests that the way to detect GWs from LMXBs may be short integrations bradycreighton99 .
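Eq. (2) is likewise simple to evaluate; a small helper (with the same fiducial scalings, purely as an illustration):

```python
def t_obs_days(t_s=1.0, Mdot=1e-8, M14=1.0, R6=1.0):
    """Observation time (days) after which the random-walk frequency
    drift exceeds one Fourier bin, Eq. (2).
    t_s in days, Mdot in solar masses per year."""
    return (21.0 / (M14**(1.0/3) * R6**(1.0/3))
            * (1.0 / t_s)**(1.0/3) * (1e-8 / Mdot)**(2.0/3))

# a near-Eddington accretor whose torque flips sign daily:
print(t_obs_days(t_s=1.0, Mdot=2e-8))   # ~13 days
```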
Currently, there are two classes of theories for GW emission from NSs in LMXBs. The presence of a large-scale temperature asymmetry in the deep crust will cause it to deform Bildsten98:GWs . The resulting “mountains” will give the rotating star a time-dependent mass quadrupole moment. Alternatively, unstable r-mode pulsations (see a review by B. Owen in this volume) of a suitable amplitude in the NS liquid core can emit enough gravitational radiation to balance the accretion torque Bildsten98:GWs ; andersson99:accreting\_rmode .
## II Deformations of Accreting NS Crusts
The crust is a $`1`$ km layer of crystalline “ordinary” (albeit neutron-rich) matter that overlies the liquid core composed of free neutrons, protons, and electrons. The crust’s composition varies with depth in a rather abrupt manner. As an accreted nucleus gets buried under an increasingly thick layer of more recently accreted material, it undergoes a series of $`e^{-}`$ captures, neutron emissions, and pycnonuclear reactions Sato79 ; HZ90 ; Blaes90 , resulting in a layered composition. In Fig. 2a, we show schematically two such compositional layers (light and dark shading) sandwiched between the liquid core and the ocean. Since an appreciable fraction of the pressure is supplied by degenerate electrons, $`e^{-}`$ captures induce abrupt density increases. In the outer crust, these density jumps are as large as $`10\%`$, while in the inner crust the density contrast is $`<1\%`$. At $`T=0`$, the $`e^{-}`$ captures occur when the electron Fermi energy $`E_\mathrm{F}`$ is greater than the mass difference between the $`e^{-}`$ capturer and the product of the reaction. In the absence of other effects, this depth is the same everywhere, and such an axisymmetric capture boundary (the dashed line in Fig. 2a) does not create a mass quadrupole moment.
However, in accreting NSs the crustal temperatures are high enough (in excess of $`2\times 10^8\mathrm{K}`$) that $`e^{-}`$ capture rates become temperature-sensitive BC98 . Bildsten Bildsten98:GWs pointed out that if there is a lateral temperature gradient in the crust (the arrow in Fig. 2a), then regions of the crust that are hotter undergo captures at a lower density than the colder regions. The capture boundary becomes “wavy” (the solid line in Fig. 2a), with captures proceeding a height $`\mathrm{\Delta }z_\mathrm{d}`$ higher on the hot side of the star, and $`\mathrm{\Delta }z_\mathrm{d}`$ lower on the cold side. Such a temperature gradient, if misaligned from the spin axis, will give rise to a nonaxisymmetric density variation and a nonzero quadrupole moment $`Q_{22}`$ Bildsten98:GWs .
The required quadrupole moment $`Q_{\mathrm{eq}}`$ such that GW emission is in equilibrium with the accretion torque is
$$Q_{\mathrm{eq}}=3.5\times 10^{37}\mathrm{g}\mathrm{cm}^2M_{1.4}^{1/4}R_6^{1/4}\left(\frac{\dot{M}}{10^{-9}M_{\odot }\mathrm{yr}^{-1}}\right)^{1/2}\left(\frac{300\mathrm{Hz}}{\nu _s}\right)^{5/2}.$$
(3)
The range of $`\dot{M}`$’s in LMXBs is $`10^{-11}`$–$`2\times 10^{-8}M_{\odot }\mathrm{yr}^{-1}`$, requiring $`Q_{22}\sim 10^{37}`$–$`10^{38}\mathrm{g}\mathrm{cm}^2`$ for $`\nu _s=300\mathrm{Hz}`$ Bildsten98:GWs . Can temperature-sensitive $`e^{-}`$ captures sustain a quadrupole moment this large? The quadrupole moment generated by a temperature-sensitive capture boundary is $`Q_{22}\sim Q_{\mathrm{fid}}\equiv \mathrm{\Delta }\rho \mathrm{\Delta }z_\mathrm{d}R^4`$, where $`\mathrm{\Delta }\rho `$ is the density jump at the electron capture interface. $`Q_{\mathrm{fid}}`$ is the quadrupole moment that would result if the crust did not elastically adjust (or just moved horizontally) in response to the lateral pressure gradient due to wavy $`e^{-}`$ captures. Using this estimate, Bildsten Bildsten98:GWs argued that a single wavy capture boundary in the thin outer crust could generate a $`Q_{22}`$ sufficient to buffer the spinup due to accretion (Eq. ), provided that temperature variations of $`20\%`$ are present in the crust.
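For orientation, Eq. (3) can be tabulated across the observed range of accretion rates; the short sketch below does this for a fiducial 300 Hz spin.

```python
def Q_eq(Mdot, nu_s=300.0, M14=1.0, R6=1.0):
    """Equilibrium quadrupole moment (g cm^2), Eq. (3); Mdot in Msun/yr."""
    return (3.5e37 * M14**0.25 * R6**0.25
            * (Mdot / 1e-9)**0.5 * (300.0 / nu_s)**2.5)

for Mdot in (1e-11, 1e-9, 2e-8):
    print(f"Mdot = {Mdot:.0e} Msun/yr -> Q_eq = {Q_eq(Mdot):.1e} g cm^2")
# spans ~3.5e36 to ~1.6e38 g cm^2, i.e. roughly the 10^37-10^38 range quoted above
```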
However, an important piece of physics is missing from Bildsten’s estimate: the shear modulus $`\mu `$. If $`\mu =0`$, the crust becomes a liquid and cannot support a non-zero $`Q_{22}`$. Ushomirsky, Cutler, & Bildsten UCB99 recently calculated the elastic response of the crust to the wavy $`e^{-}`$ captures. They found that the predominant response of the crust to a lateral density perturbation is to sink, rather than move sideways. For this reason, the $`Q_{22}`$ generated in the outer crust Bildsten98:GWs is much too small to buffer the accretion torque. However, a single $`e^{-}`$ capture boundary in the deep inner crust can easily generate an adequate $`Q_{22}`$. Because of the much larger mass involved in captures in the inner crust, the temperature contrasts required are $`<5\%`$, or only $`10^6`$–$`10^7`$ K, not $`10^8`$ K as originally postulated Bildsten98:GWs .
What causes the lateral temperature asymmetry, and can it persist despite the strong thermal contact with the almost perfectly conducting core? In LMXBs, the crusts are composed of the compressed products of nuclear burning of the accreted material. The exact composition depends on the local accretion rate, which could have a significant non-axisymmetric piece due to, e.g., the presence of a weak $`B`$ field. Moreover, except in the highest accretion rate LMXBs, nearly all of the nuclear burning occurs in type I X-ray bursts. Burst QPOs (see Sec. I) provide conclusive evidence that bursts themselves are not axisymmetric. Until the origin of this symmetry breaking is clearly understood, it is plausible to postulate that these burst asymmetries get imprinted into the crustal composition.
Ushomirsky et al. UCB99 showed that such a non-uniform composition leads directly to lateral temperature variations $`\delta T`$. Horizontal variations in the charge-to-mass ratio $`Z^2/A`$ (which determines the crustal conductivity) and/or nuclear energy release modulate the radial heat flux in the crust and set up a nonaxisymmetric $`\delta T`$. The $`\delta T`$’s required to induce a $`Q_{22}\sim Q_{\mathrm{eq}}`$ can easily be maintained if there is a $`10\%`$ asymmetry in the nuclear heating or $`Z^2/A`$. So long as accretion continues, these $`\delta T`$’s persist despite the strong thermal contact with the isothermal NS core.
The $`e^{-}`$ capture $`Q_{22}`$ calculations UCB99 are summarized in Fig. 2b. If the size of the heating or $`Z^2/A`$ asymmetry is a fixed fraction, then for $`\dot{M}<0.5\dot{M}_{\mathrm{Edd}}`$ the scaling of $`Q_{22}(\dot{M})`$ is just that needed for all of these NSs to have the same spin frequency (the normalization is proportional to the magnitude of the asymmetry, but the scaling is fixed by the microphysics). For $`\dot{M}>0.5\dot{M}_{\mathrm{Edd}}`$, in order to explain the spin clustering at exactly 300 Hz, this mechanism requires that the crustal asymmetry correlate with $`\dot{M}`$. Alternatively, if the asymmetry is the same as in the low-$`\dot{M}`$ systems, then one would expect the bright LMXBs to have higher spins, a possibility that cannot be ruled out by current observations (Sec. I).
So long as crustal deformations are due to shear forces only, the crustal $`Q_{22}`$ is limited by the yield strain $`\overline{\sigma }_{\mathrm{max}}`$ to be less than UCB99
$$Q_{\mathrm{max}}\approx 10^{38}\mathrm{g}\mathrm{cm}^2\left(\frac{\overline{\sigma }_{\mathrm{max}}}{10^{-2}}\right)\frac{R_6^{6.26}}{M_{1.4}^{1.2}}.$$
(4)
The $`Q_{22}`$’s needed to buffer the accretion torque require strains $`\overline{\sigma }\sim 10^{-3}`$–$`10^{-2}`$ at $`300`$ Hz, with $`\overline{\sigma }>10^{-2}`$ in near-Eddington accretors. Estimates for the yield strain of the neutron star crust range anywhere from $`10^{-1}`$ for perfect one-component crystals to $`10^{-5}`$. Hence $`\overline{\sigma }>10^{-2}`$ is probably higher than the yield strain, though this conclusion is based on extrapolating experimental results for terrestrial materials by $`>10`$ orders of magnitude. Such high strains are perhaps the biggest problem with the crustal $`Q_{22}`$ mechanism. At high pressures (large compared to the shear modulus) terrestrial materials tend to deform plastically rather than crack, and so the crusts of accreting NSs may be in a state of continual plastic flow. If accretion continually drives the crust to $`\overline{\sigma }_{\mathrm{max}}`$, this leads to a natural explanation for the spin similarities near $`\dot{M}_{\mathrm{Edd}}`$.
However, many fundamental issues remain unanswered. First, the calculation UCB99 is only good up to an overall prefactor set by the density of capture layers in the deep crust. We thus need an exploratory calculation of both the composition of the products of nuclear burning in the upper atmosphere over the entire range of $`\dot{M}`$ in LMXBs, and their detailed nuclear evolution under compression in the crust. Knowledge of the composition is also necessary for a robust calculation of the shear modulus, which is clearly the crucial number to know when computing the elastic response of the crust. Recent results PethickRavenhall95 ; pethick98:liquid\_crystal indicate that inner crusts of NSs are composed of highly nonspherical nuclei and may be more like liquid crystals (solids that provide no elastic restoring force for certain kinds of distortions) rather than simple Coulomb solids. Such improved calculations have implications far beyond the problem of the crustal quadrupole moment. The shear modulus of the crust affects the maximum elastic energy that can be stored in the crust, and hence the energetics of pulsar glitches and starquakes, as well as the models of magnetic field evolution that depend on crustal “plate tectonics” (see Ruderman91 ). It even has bearing on the stability of r-modes in neutron stars (Sec. III). In addition, much work needs to be done on understanding what sets the shear strength $`\overline{\sigma }_{\mathrm{max}}`$ of multicomponent crystals, likely with defects and highly nonspherical nuclei, or what happens when $`\overline{\sigma }_{\mathrm{max}}`$ is exceeded and viscoelastic flow ensues.
## III R-modes in Accreting NS Cores
Bildsten Bildsten98:GWs and Andersson, Kokkotas, & Stergioulas andersson99:accreting\_rmode pointed out that the r-mode instability (see the review by B. Owen in this volume for the introduction and notation) may also explain the spins of NSs in LMXBs, and, if so, produce a GW signal detectable by LIGO-II. An accreting NS is spun up (along a line in the $`(\nu _s,T)`$ plane marked with an arrow in Fig. 3) until it reaches the r-mode instability line (the solid line in Fig. 3). At that point (marked by a thick dot in Fig. 3) the r-mode amplitude needed to balance the accretion torque is rather small. The NS can then hover at the instability line, with $`1/\tau _G+1/\tau _V=0`$, and the r-mode amplitude such that it balances the accretion torque. However, at $`T=`$ few $`\times 10^8`$ K, the r-mode–accretion equilibrium spin frequency would be $`\sim 150`$ Hz, rather than $`300`$ Hz, resulting in an apparent disagreement with the observed spins of LMXBs. Bildsten Bildsten98:GWs and Andersson et al. andersson99:accreting\_rmode speculated that including other sources of viscosity, e.g., superfluid mutual friction, is likely to raise the instability curve, resulting in equilibrium frequencies closer to the canonical $`300`$ Hz. Finally, the narrow range of the observed spin frequencies would presumably arise because of the similar core temperatures of the accreting NSs (shown by the shaded box in Fig. 3).
Recent theoretical work brought up several challenges to this scenario. Levin Levin99 and Spruit spruit99:\_gamma\_x showed that a steady-state equilibrium between accretion and r-modes is thermally unstable for normal fluid cores. In a normal fluid (i.e., not superfluid), the shear viscosity scales as $`T^{-2}`$, so the increase in the core temperature due to viscous heating decreases the shear viscosity. The smaller shear viscosity increases the growth rate of the r-mode, leading to an unstable runaway. Using a phenomenological model of nonlinear r-mode evolution owen98:\_gravit , Levin Levin99 showed that in this case, instead of just hovering near the instability line, the r-mode grows rapidly until saturation, heats up the star, and spins it down and out of the instability region in less than 1 yr. Therefore, if NSs in LMXBs have normal fluid cores, we would not expect to see any of them with $`\sim 300`$ Hz spins.
The unstable regime for r-modes in normal fluid NSs (above the solid line in Fig. 3) encompasses much of the parameter space occupied by NSs in LMXBs and newborn NSs. Because of the large torques exerted by the unstable r-modes, we would not expect to see any NSs in this region. In addition, the existence of two 1.6 ms radio pulsars (the spins and upper limits on core temperatures of which are shown by arrows in Fig. 3) means that rapidly rotating NSs are formed in spite of the r-mode instability andersson99:accreting\_rmode . While it is not clear whether their current core temperatures place these pulsars within the r-mode instability region, normal-fluid r-mode theory says that they were certainly unstable during spinup.
Superfluid r-mode calculations have been eagerly awaited, as they could resolve these conflicts. However, Lindblom & Mendell lindblom99:superfluid showed that, for most values of the neutron-proton entrainment parameter, the superfluid dissipation is not competitive with gravitational radiation. Only over about $`3\%`$ of the possible entrainment parameter values is mutual friction strong enough to compete with gravitational radiation. The r-mode instability line in this case is an approximately horizontal line (see Fig. 8 of lindblom99:superfluid ) separating the unstable spin frequencies ($`\nu _s>\nu _{\mathrm{crit}}`$) from the stable ones ($`\nu _s<\nu _{\mathrm{crit}}`$). If the superfluid entrainment parameter has a value such that $`\nu _{\mathrm{crit}}\approx 300`$ Hz, then the LMXB spin frequencies could still be understood in terms of the r-mode instability and the special nature of the NS superfluid.
Before learning about these results, Brown & Ushomirsky bu99:rmodes ruled out such a simple superfluid equilibrium observationally for a subset of LMXBs. In steady state, the shear in the r-mode deposits $`\sim 10`$ MeV of heat per accreted baryon into the NS core. When the core is superfluid, Urca neutrino emission from it is suppressed, and this heat must flow to the NS surface and be radiated thermally. In steadily accreting systems (such as Sco X-1) this thermal emission is dwarfed by the accretion luminosity of $`GM\dot{M}/R`$, i.e., $`\sim 200`$ MeV per accreted baryon. However, in transiently accreting systems, such as Aql X-1, when accretion ceases, the r-mode heating should be directly detectable as enhanced X-ray luminosity from the NS surface. For Aql X-1 and other NS transients, Brown & Ushomirsky bu99:rmodes showed that, if the superfluid r-mode equilibrium prevails, then the quiescent luminosity should be about 5–10 times greater than is actually observed.
A possible resolution of this conundrum has been recently proposed by Bildsten & Ushomirsky BU99 . All but the hottest ($`>10^{10}`$ K) NSs have solid crusts. The r-mode motions are mostly transverse, and reach their maximum amplitude near the crust-core boundary. The fluid therefore rubs against the crust, which creates a thin (few cm) boundary layer. Because of the short length scale, the dissipation in this boundary layer is very large. The damping time due to rubbing is BU99
$$\tau _{\mathrm{rub}}\approx 100\mathrm{s}T_8\frac{M_{1.4}}{R_6^2}\left(\frac{1\mathrm{kHz}}{\nu _s}\right)^{1/2},$$
(5)
substantially shorter than the viscous damping times due to the shear and bulk viscosities in the stellar interior, as well as the mutual friction damping time for most values of the superfluid entrainment parameter.
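Numerically, Eq. (5) gives damping times of minutes for the spin frequencies of interest; a trivial evaluator (fiducial stellar parameters, for illustration):

```python
def tau_rub(nu_s, T8=1.0, M14=1.0, R6=1.0):
    """Crust-core rubbing damping time in seconds, Eq. (5)."""
    return 100.0 * T8 * M14 / R6**2 * (1000.0 / nu_s)**0.5

print(tau_rub(300.0))   # ~183 s at 300 Hz
```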
The critical frequency for the r-mode instability in NSs with crusts is shown by the dashed line in Fig. 3 for the case where all nucleons are normal, and the dark shading around it represents the range of frequencies when either neutrons or all nucleons are superfluid. The crust-core rubbing raises the minimum frequency for the r-mode instability in NSs with crusts to $`>500`$ Hz for $`T\lesssim 10^{10}`$ K, nearly a factor of five higher than previous estimates. This substantially reduces the parameter space for the instability to operate, especially for older, colder NSs, such as those accreting in binaries and millisecond pulsars. In particular, the smallest unstable frequency for the temperatures characteristic of LMXBs is $`>700`$ Hz, safely above all measured spin frequencies. This work resolves the discrepancy between the theoretical understanding of the r-mode instability and the observations of millisecond pulsars and LMXBs, and, along with observational inferences bu99:rmodes , likely rules out r-modes as the explanation for the clustering of spin frequencies of neutron stars in LMXBs around 300 Hz.
To summarize, a significant role of steady-state r-modes in LMXBs has probably been ruled out, both on theoretical grounds BU99 (unless crust-core coupling is much stronger than was estimated), and observationally bu99:rmodes (for Aql X-1 in particular). However, stochastically excited r-modes that decay rapidly may still play a significant role in accreting systems, as even a very small amplitude ($`\alpha <10^{-5}`$) can balance the accretion torque at $`\dot{M}_{\mathrm{Edd}}`$. In addition, the issues of crustal shear modulus and the structure of the crust-core boundary, highlighted in Sec. II, are of paramount importance for r-modes as well. Crustal quadrupoles Bildsten98:GWs ; UCB99 can explain the spins of LMXBs and remain a viable source of continuous GWs, but the strains at $`\dot{M}_{\mathrm{Edd}}`$ are rather high. The crustal breaking strain $`\overline{\sigma }_{\mathrm{max}}`$ is not likely to be understood theoretically any time soon, and detection of GWs from LMXBs with LIGO-II type instruments will surely teach us many new things about NSs.
###### Acknowledgements.
GU acknowledges support from the Fannie and John Hertz Foundation. |
## 1 Introduction
Two–dimensional (2d) $`\mathrm{CP}^{N-1}`$ models play an important role in quantum field theory since they represent a convenient theoretical laboratory to investigate analytical and numerical methods to be eventually exported to QCD. This is believed to be possible since these models have several important properties in common with QCD, for instance asymptotic freedom, spontaneous mass generation, confinement and topological structure of the vacuum . In particular, the $`\mathrm{CP}^{N-1}`$ vacuum, like the QCD one, admits instanton classical solutions. The non–trivial structure of the vacuum for the $`\mathrm{CP}^{N-1}`$ models, as well as for QCD, can be probed by defining suitable operators, like the topological charge density and its correlators, and by applying the prescriptions of quantum field theory. Yet most of the observables relevant for the investigation of the vacuum topological structure of a theory are beyond the reach of perturbation theory. Therefore some approximation is needed which works also in the non–perturbative regime. So far, the most effective and comprehensive tool is represented by Monte Carlo simulations on a space–time lattice. The lattice approach for the study of the vacuum structure brings about a very delicate task, that of separating the quantum fluctuations at the level of the lattice spacing from the relevant long–distance topological ones in any given configuration of the thermal equilibrium ensemble. Indeed, trying to assign a topological charge to a thermal equilibrium configuration, treating on the same footing short–distance (quantum) fluctuations and long–distance (topological) ones, would even prevent the reaching of the continuum limit. One powerful technique, frequently adopted in the literature to study the topological structure of the vacuum, is the so–called cooling method . The idea behind the cooling method is that the quantum fluctuations at the level of the lattice spacing of a given configuration can be smoothed out by a sequence of local minimizations of the lattice action, thus leading to a configuration where only the long–distance, topologically relevant fluctuations survive. When this procedure is performed on a set of well–decorrelated equilibrium configurations, it yields a “cooled” equilibrium ensemble on which the expectation values of topological quantities can be determined. The precise definition of a cooling procedure is however arbitrary to a large extent: the lattice action to be locally minimized can be chosen differently from the one used in the thermalization, the number of cooling steps is a free variable, driving or control parameters of the cooling can be introduced, and so on. Comparing the behavior of different cooling methods is an interesting issue, especially when a new cooling strategy is proposed.
The aim of the present work is to get insight into a new cooling method first adopted in Ref. for the SU(2) gauge theory and to compare it with two other methods, already known in the literature, namely the “standard” cooling and its controlled version adopted by the Pisa group (see for instance Ref. ). The new cooling method, as it will be pointed out in the next Section, follows a different strategy with respect to the other two and could possess various interesting features, inaccessible to the other two cooling methods, like for instance preserving instanton–anti-instanton (I–A) pairs with interaction energy smaller than a fixable value . The test–field for this comparison will be provided by the $`\mathrm{CP}^{N-1}`$ models, which have all the interesting topological properties of the SU($`N`$) gauge theories, but are much easier to simulate on a lattice.
The paper is organized as follows: in Section 2 we introduce the lattice action and the lattice discretization of the relevant topological quantities. In Section 3 we describe in detail the three cooling methods. In Sections 4 and 5 we discuss their performance on 1–instanton and I–A classical lattice configurations respectively. In Section 6 we consider their behavior on thermal equilibrium configurations and find a correspondence between the three cooling techniques. In Section 7 we determine the physical value of the topological susceptibility of $`\mathrm{CP}^9`$ using the new cooling method and compare the results to those from an alternative approach not based on cooling. Finally in Section 8 we draw our conclusions.
## 2 Lattice definition of action and observables
The $`\mathrm{CP}^{N-1}`$ models describe the physics of a gauge–invariant theory of interacting classical spins. The continuum action for the 2d $`\mathrm{CP}^{N-1}`$ model is
$$S=\frac{1}{g}\int d^2x\,\overline{D_\mu z}\cdot D_\mu z,\qquad D_\mu =\partial _\mu +iA_\mu ,$$
(1)
where $`g`$ is the coupling constant and $`z(x)`$ is an $`N`$–component complex scalar field which obeys the constraint $`\overline{z}(x)\cdot z(x)=1`$. The bar over a complex quantity means complex conjugation and a central dot between vectors implies the scalar product: $`\overline{z}(x)\cdot z(x)\equiv \sum _{i=1}^N\overline{z}_i(x)z_i(x)`$.
We regularize the theory on a square lattice using the standard discretization:
$$S^L=-N\beta \sum _x\sum _{\mu =1,2}\left(\overline{z}(x+\widehat{\mu })\cdot z(x)\lambda _\mu (x)+\overline{z}(x)\cdot z(x+\widehat{\mu })\overline{\lambda }_\mu (x)-2\right),$$
(2)
where $`x`$ indicates a generic site of the lattice; $`\lambda _\mu (x)`$ is a U(1) gauge field ($`\overline{\lambda }_\mu (x)\lambda _\mu (x)=1`$) and $`\beta \equiv 1/(Ng)`$ where $`g`$ is the bare lattice coupling constant. We used the standard action both to generate thermal equilibrium configurations in Monte Carlo simulations and during the cooling procedure.
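For concreteness, Eq. (2) can be transcribed directly into code. The sketch below (NumPy) adopts an illustrative array layout of ours, with the spin field `z` of shape (L, L, N) and the gauge field `lam` of shape (L, L, 2); it is not the implementation used for the simulations discussed here.

```python
import numpy as np

def lattice_action(z, lam, beta):
    """Standard CP^{N-1} lattice action of Eq. (2) on a periodic L x L lattice.

    z   : complex array (L, L, N), unit-norm at every site
    lam : complex array (L, L, 2), U(1) link variables with |lam| = 1
    """
    N = z.shape[-1]
    S = 0.0
    for mu in (0, 1):
        z_fwd = np.roll(z, -1, axis=mu)                    # z(x + mu_hat)
        hop = np.sum(np.conj(z_fwd) * z, axis=-1) * lam[..., mu]
        S += np.sum(hop + np.conj(hop) - 2.0).real         # both terms, minus 2
    return -N * beta * S
```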
The continuum topological charge is defined as
$$Q\equiv \int d^2x\,q(x)\equiv \frac{i}{2\pi }\int d^2x\,ϵ_{\mu \nu }\overline{D_\mu z}\cdot D_\nu z=\frac{1}{2\pi }\oint dx_\mu A_\mu ,$$
(3)
where $`ϵ_{\mu \nu }`$ is the antisymmetric tensor with $`ϵ_{12}=1`$ and $`q(x)`$ is the topological charge density, which can be written as the divergence of a topological current $`K_\mu (x)\equiv \frac{1}{2\pi }ϵ_{\mu \nu }A_\nu `$, $`q(x)=\partial _\mu K_\mu (x)`$ . The last integral in Eq. (3) is calculated on the 2d plane along the circle of infinite radius.
The topological susceptibility $`\chi `$ is a renormalization group invariant quantity which gives a measure of the amount of topological excitations in the vacuum. It is defined as the two–point zero–momentum correlation of the topological charge density operator $`q(x)`$,
$$\chi \equiv \int d^2x\,\langle 0|T[q(x)q(0)]|0\rangle .$$
(4)
This equation needs a prescription for the contact term coming from the product of operators at the same point. We use the prescription
$$\chi =\int d^2x\,\partial _\mu \langle 0|T[K_\mu (x)q(0)]|0\rangle ,$$
(5)
which corresponds to the physical requirement that $`\chi =0`$ in the sector with trivial topology (see for instance Ref. ).
On the lattice any discretization of the topological charge density with the correct naïve continuum limit can be used. A standard definition is
$$q^L(x)\equiv \frac{i}{2\pi }\sum _{\mu \nu }ϵ_{\mu \nu }\mathrm{Tr}\left[P(x)\mathrm{\Delta }_\mu P(x)\mathrm{\Delta }_\nu P(x)\right],$$
(6)
with
$$\mathrm{\Delta }_\mu P(x)\equiv \frac{P(x+\widehat{\mu })-P(x-\widehat{\mu })}{2},\qquad P_{ij}(x)\equiv \overline{z}_i(x)z_j(x).$$
(7)
The lattice topological susceptibility is defined as
$$\chi ^L\equiv \sum _x\langle q^L(x)q^L(0)\rangle =\frac{1}{L^2}\langle \left(Q^L\right)^2\rangle ,$$
(8)
where $`Q^L\equiv \sum _xq^L(x)`$ and $`L`$ is the lattice size in lattice units. Unless otherwise stated, hereafter the brackets $`\langle \mathrm{}\rangle `$ mean an average over decorrelated equilibrium configurations.
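Eqs. (6)–(8) are equally direct to implement; the following sketch (same illustrative layout for `z` as above) computes $`Q^L`$ for a single configuration.

```python
import numpy as np

def topological_charge(z):
    """Lattice topological charge Q^L = sum_x q^L(x), from Eqs. (6)-(7)."""
    P = np.conj(z)[..., :, None] * z[..., None, :]         # P_ij(x) = zbar_i z_j
    dP = [(np.roll(P, -1, axis=mu) - np.roll(P, 1, axis=mu)) / 2.0
          for mu in (0, 1)]                                # Delta_mu P(x)
    # eps_{mu nu} Tr[P dP_mu dP_nu] reduces to the commutator [dP_1, dP_2]
    comm = dP[0] @ dP[1] - dP[1] @ dP[0]
    q = (1j / (2 * np.pi)) * np.einsum('xyij,xyji->xy', P, comm)
    return q.real.sum()
```

On an ensemble, averaging $`(Q^L)^2/L^2`$ over configurations then gives the $`\chi ^L`$ of Eq. (8).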
In the $`\mathrm{CP}^{N-1}`$ model $`q(x)`$ is a renormalization group invariant operator. Imposing that $`q(x)`$ does not renormalize in the continuum generally implies a finite multiplicative renormalization $`Z`$ on the lattice
$$q^L(x)=a^2Z(\beta )q(x)+O(a^4),$$
(9)
$`a`$ being the lattice spacing.
Moreover, the lattice topological susceptibility $`\chi ^L`$ in general does not meet the continuum prescription given in Eq. (5), thus leading to an additive renormalization which consists in mixings of $`\chi ^L`$ with operators which have the same quantum numbers as $`\chi `$,
$$\chi ^L(\beta )=a^2Z(\beta )^2\chi +M(\beta ),$$
(10)
where $`M(\beta )`$ indicates such mixings. The extraction of $`\chi `$ from the lattice thus requires the determination of both the multiplicative and additive renormalizations, $`Z(\beta )`$ and $`M(\beta )`$. This is a feasible task, as will be shown in Section 7, and in this way one obtains the so–called “field theoretical” determination of the topological susceptibility.
Alternatively one can use cooling to determine the topological susceptibility. This method will be described extensively in the next Section. The idea is that, by an iterative process of local minimization of the action, the quantum fluctuations at the scale of the ultraviolet cut–off, which are responsible for the renormalizations, are suppressed, thus implying $`Z\to 1`$ and $`M\to 0`$ in Eq. (10) as the cooling iteration goes on, so that $`a^2\chi `$ can be extracted directly from the measurement of $`\chi ^L`$ after cooling, as will be shown in Section 7.
## 3 The cooling method
A cooling step applied to a lattice configuration consists in assigning to every field variable a new value chosen in order to locally minimize the lattice action. The iteration of this procedure converges to a configuration which should represent a solution of the Euclidean classical equations of motion. In the continuum 2d $`\mathrm{CP}^{N-1}`$ models, as well as in the case of 4d SU($`N`$) gauge theories, there are classical solutions that belong to a definite topological charge sector, identified by an integer value of the topological charge $`Q`$ and by a finite action ($`S=2\pi |Q|/g`$ in the case of $`\mathrm{CP}^{N-1}`$). The classical solutions with $`Q\ne 0`$ are called instantons or anti-instantons (depending on the sign of $`Q`$). On the lattice, owing to the artifacts introduced by the discretization of the theory, configurations with non–trivial topology can represent only approximate solutions of the lattice equations of motion. The common belief is that a moderately long cooling sequence can wash out the quantum fluctuations of a given lattice configuration, so revealing the underlying metastable state with possible non–trivial topology, and that this state is the lattice counterpart of a continuum classical one. Of course, if the cooling is further protracted, the metastable state will eventually “decay” into the trivial vacuum. In other words, once the quantum fluctuations have been eliminated and the topological bumps of a configuration (instantons or anti-instantons) have been revealed, if the cooling is further protracted then the isolated instantons or anti-instantons shrink, the I–A pairs annihilate and the configuration falls into the trivial vacuum of the lattice theory. In consideration of the above arguments, it is clear that great care is needed in determining when a cooling iteration has to be stopped. Nevertheless, there is no way to prevent the loss of topological signal at lattice scales of the order of the lattice spacing, since topology at these scales cannot be distinguished from quantum fluctuations. It must be pointed out, however, that any loss of topological signal due to cooling which occurs at fixed scales in lattice units becomes irrelevant in physical units as the coupling is tuned towards its critical value, i.e. in the continuum limit, when the lattice spacing $`a`$ tends to zero. This is not the case only when the instanton size distribution is singular in the short–distance regime, as in the 2d $`\mathrm{CP}^1`$ model .
In the case of the 2d $`\mathrm{CP}^{N-1}`$ models, the cooling algorithm amounts to replacing at each lattice site $`x`$ the field variables $`z(x)`$ and $`\lambda _\mu (x)`$ by new ones $`z^{\mathrm{new}}(x)`$ and $`\lambda _\mu ^{\mathrm{new}}(x)`$, chosen in order to locally minimize the lattice action, while the other field variables are left unchanged. A cooling step consists in a sweep through the entire lattice volume in order to apply this local minimization sequentially at every site $`x`$. The terms in the action density which depend on $`z(x)`$ and $`\lambda _\mu (x)`$ for a given lattice site $`x`$ have the form
$$s_z=-2\beta N\mathrm{Re}\{\overline{z}(x)\cdot F_z(x)\},\qquad F_z(x)\equiv \sum _{\mu =1,2}\left(z(x-\widehat{\mu })\lambda _\mu (x-\widehat{\mu })+z(x+\widehat{\mu })\overline{\lambda }_\mu (x)\right),$$
(11)
and
$$s_\lambda =-2\beta N\mathrm{Re}\{\overline{\lambda }_\mu (x)F_\lambda (x,\mu )\},\qquad F_\lambda (x,\mu )\equiv \overline{z}(x)\cdot z(x+\widehat{\mu }),$$
(12)
where $`F_z(x)`$ and $`F_\lambda (x,\mu )`$ are the respective forces. It is easy to see that the local minimum of the lattice action is obtained when the new field variables coincide with the corresponding normalized forces:
$$z^{\mathrm{new}}(x)=\frac{F_z(x)}{\|F_z(x)\|},\qquad \lambda _\mu ^{\mathrm{new}}(x)=\frac{F_\lambda (x,\mu )}{\|F_\lambda (x,\mu )\|},\qquad \mu =1,2,$$
(13)
with $`\|u\|\equiv \sqrt{\overline{u}\cdot u}`$ for a generic complex number or $`N`$–component vector $`u`$ ($`z(x)`$, $`F_z(x)`$, $`\lambda _\mu (x)`$, $`F_\lambda (x,\mu )`$, etc.). According to the supplementary conditions under which the above replacements are performed, we have different cooling procedures. The first cooling procedure, which we call “standard”, is the one where the replacements $`z(x)\to z^{\mathrm{new}}(x)`$ and $`\lambda _\mu (x)\to \lambda _\mu ^{\mathrm{new}}(x)`$ are unconstrained. The second one, which we call “Pisa cooling”, is the one where the replacement of $`z(x)`$ and $`\lambda _\mu (x)`$ is performed if the distance between the old and the new fields is smaller than a previously fixed parameter $`\delta _{\mathrm{Pisa}}`$: $`\|\lambda _\mu (x)-\lambda _\mu ^{\mathrm{new}}(x)\|\le \delta _{\mathrm{Pisa}}`$ and analogously for the $`z`$ fields; otherwise the new field is chosen to lie in the $`(z(x),z^{\mathrm{new}}(x))`$ ($`(\lambda _\mu (x),\lambda _\mu ^{\mathrm{new}}(x))`$) plane, at a distance from $`z(x)`$ ($`\lambda _\mu (x)`$) exactly equal to $`\delta _{\mathrm{Pisa}}`$ . The third cooling procedure, called “new cooling” in the following, was introduced in Ref. . It consists in making the replacement only if the distance between the old and new fields is larger than a fixed parameter $`\delta `$; otherwise the field is left unchanged. We observe that the three methods follow different strategies. In particular, while the Pisa cooling acts first on the smoother fluctuations and the number of cooling iterations to be performed has to be fixed a priori, the new cooling performs local minimizations only if these fluctuations are larger than a given threshold, and the iteration stops automatically when all the surviving fluctuations are smoother than the (lattice) threshold determined by the dimensionless parameter $`\delta `$. It is easy to see that the new field variables differ from the old ones by terms which, in the continuum limit, are of order $`a^2`$, where $`a`$ is the lattice spacing. Therefore the parameter $`\delta `$ is expected to scale in the continuum limit like a quantity of dimension two.
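To make the three update rules concrete, here is a schematic single-site update of the $`z`$ field in the spirit of Eq. (13). The array conventions are ours, the link update with $`F_\lambda `$ is entirely analogous, and the Pisa branch only approximates the exact in-plane step of length $`\delta _{\mathrm{Pisa}}`$.

```python
import numpy as np

def cool_site_z(z, lam, x, y, method="standard", delta=0.007, delta_pisa=0.2):
    """One local minimization of z at site (x, y) on a periodic (L, L, N) lattice."""
    L = z.shape[0]
    xp, xm, yp, ym = (x + 1) % L, (x - 1) % L, (y + 1) % L, (y - 1) % L
    # Force F_z(x) of Eq. (11): backward links multiply, forward links conjugated
    F = (z[xm, y] * lam[xm, y, 0] + z[xp, y] * np.conj(lam[x, y, 0])
         + z[x, ym] * lam[x, ym, 1] + z[x, yp] * np.conj(lam[x, y, 1]))
    z_new = F / np.linalg.norm(F)
    dist = np.linalg.norm(z[x, y] - z_new)
    if method == "standard":
        z[x, y] = z_new                       # unconstrained replacement
    elif method == "new":
        if dist > delta:                      # act only on "rough" fluctuations
            z[x, y] = z_new
    elif method == "pisa":
        if dist <= delta_pisa:
            z[x, y] = z_new
        else:                                 # limited step toward z_new,
            step = z[x, y] + delta_pisa * (z_new - z[x, y]) / dist
            z[x, y] = step / np.linalg.norm(step)   # then renormalized (approximate)
```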
In the following we will compare these three cooling methods in several contexts, namely on “artificial” 1–instanton configurations, on I–A pairs, on thermal equilibrium configurations and in the determination of the continuum topological susceptibility. Moreover, we will establish a relation between the number of cooling iterations in the standard and Pisa coolings and the parameter $`\delta `$ of the new cooling, and will discuss the validity of the picture of cooling as a diffusion process.
## 4 Cooling 1–instanton configurations
In the continuum $`\mathrm{CP}^{N-1}`$ models, in the infinite 2d space–time, a 1–instanton configuration is a classical solution of the equations of motion characterized by topological charge one and a finite action ($`S=2\pi /g`$). The explicit form of a continuum 1–instanton configuration is
$$z(x)=\frac{w(x)}{\|w(x)\|},\qquad A_\mu =i\overline{z}(x)\cdot \partial _\mu z(x),$$
(14)
with
$$w(x)=u+\frac{(x_1-c_1)-i(x_2-c_2)}{\rho }v,$$
(15)
where $`u`$ and $`v`$ are complex $`N`$–vectors which satisfy $`\overline{u}\cdot u=\overline{v}\cdot v=1`$ and $`\overline{u}\cdot v=0`$. The space–time coordinates $`(c_1,c_2)`$ are the center of the instanton and the real parameter $`\rho `$ represents a measure of its size. An “artificial” 1–instanton lattice configuration can be represented by the discretization of the above continuum configuration on a 2d finite volume lattice with periodic boundary conditions. In particular, we used the following expressions for the fields $`z`$ and $`\lambda `$
$$z_1(x)=\frac{\rho }{\sqrt{\rho ^2+(x_1-c_1)^2+(x_2-c_2)^2}},\qquad z_2(x)=\frac{(x_1-c_1)-i(x_2-c_2)}{\sqrt{\rho ^2+(x_1-c_1)^2+(x_2-c_2)^2}},$$
$`z_i(x)`$ $`=`$ $`0,i=3,\mathrm{},N,`$
$`\lambda _\mu (x)`$ $`=`$ $`{\displaystyle \frac{\overline{z}(x+\widehat{\mu })\cdot z(x)}{|\overline{z}(x+\widehat{\mu })\cdot z(x)|}},`$ (16)
which corresponds to choosing $`u=(1,0,0,\mathrm{})`$ and $`v=(0,1,0,0,\mathrm{})`$. Actually 1–instanton configurations cannot exist on a 2d torus even in the continuum, and they represent only approximate solutions of the classical equations of motion. This approximation becomes better and better as the ratio $`\rho /(La)`$ decreases. On the lattice, this problem would lead to metastability even in the case of an ideally perfect discretization. However, the above–defined artificial 1–instanton lattice configurations are enough for our purposes, since they will be used only as test configurations for cooling methods and not to extract physical information.
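A direct transcription of Eq. (16) might look as follows (again with our illustrative array layout):

```python
import numpy as np

def one_instanton(L, N, rho, c1=None, c2=None):
    """Discretized 1-instanton of Eq. (16) on an L x L periodic lattice."""
    c1 = L / 2 if c1 is None else c1
    c2 = L / 2 if c2 is None else c2
    x1, x2 = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    r2 = rho**2 + (x1 - c1)**2 + (x2 - c2)**2
    z = np.zeros((L, L, N), dtype=complex)
    z[..., 0] = rho / np.sqrt(r2)
    z[..., 1] = ((x1 - c1) - 1j * (x2 - c2)) / np.sqrt(r2)
    lam = np.empty((L, L, 2), dtype=complex)
    for mu in (0, 1):
        w = np.sum(np.conj(np.roll(z, -1, axis=mu)) * z, axis=-1)
        lam[..., mu] = w / np.abs(w)      # U(1) links aligned with the spins
    return z, lam
```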
In Fig. 1 we show the distribution of the angles $`\theta _z(x)`$ and $`\theta _\lambda (x,1)`$ for an artificial 1–instanton with size $`\rho =10a`$ located in the middle of a $`60^2`$ lattice. These angles are defined as
$$\|\lambda _\mu (x)-\lambda _\mu ^{\mathrm{new}}(x)\|\equiv 2\mathrm{sin}\frac{\theta _\lambda (x,\mu )}{2},\qquad \|z(x)-z^{\mathrm{new}}(x)\|\equiv 2\mathrm{sin}\frac{\theta _z(x)}{2}.$$
(17)
It is clear that, while the standard cooling acts indistinctly on each lattice site, the Pisa cooling and the new cooling deform 1–instanton lattice configurations starting from different regions of the lattice. The new cooling will act first on the region around the center of the instanton, the Pisa cooling on the border regions. Therefore it is not obvious whether under the three coolings the configurations will look the same sweep after sweep. We therefore did the following check: we performed a long ($`O(1000)`$ steps) iteration of the standard and Pisa coolings (for the Pisa cooling we always use $`\delta _{\mathrm{Pisa}}=0.2`$, following Ref. ) on an artificial 1–instanton configuration with size $`\rho =10a`$, and after each sweep we extracted the topological charge $`Q^L`$, the action $`S^L`$ and the size $`\rho /a`$. This size was determined by a very local procedure, namely by looking for the maximum of the lattice action density $`s_{\mathrm{max}}^L`$ and using the relation $`s_{\mathrm{max}}^L=2a^2/\rho ^2`$, valid for a 1–instanton configuration. Then, starting from the same configuration, we applied the new cooling for $`\delta `$ as small as $`5\times 10^{-4}`$ and, for each configuration obtained during the cooling iteration, we determined again $`Q^L`$, $`S^L`$ and $`\rho /a`$. Finally, we plotted $`Q^L`$ and $`S^L`$ as functions of $`\rho /a`$. The result is shown in Fig. 2: the values of $`Q^L`$ and $`S^L`$ during the three cooling methods lie on the same curve. The same plot was obtained by using a different procedure to determine the size of the cooled configurations, namely a global fit of the lattice action density to the continuum expression for the action density of a 1–instanton configuration, Eq. (14). We interpret these results as an indication that the three coolings deformed the starting configuration in the same way. If this conjecture is correct, it should be possible to put the $`\delta `$ parameter of the new cooling into correspondence with the number of steps of the standard or Pisa coolings. This will be done in Section 6.
## 5 Cooling I–A pairs
An I–A configuration in the continuum 2d $`\mathrm{CP}^{N-1}`$ models can be taken in the form given in Refs. ,
$$z(x)=\frac{w(x)}{\|w(x)\|},\qquad A_\mu =i\overline{z}(x)\cdot \partial _\mu z(x),$$
(18)
with
$`w_1(x_1,x_2)`$ $`=`$ $`(x_1+ix_2-a_1)(x_1-ix_2-\overline{b}_1),`$
$`w_2(x_1,x_2)`$ $`=`$ $`(x_1+ix_2-a_2)(x_1-ix_2-\overline{b}_2),`$
$`w_i`$ $`=`$ $`0,i=3,\mathrm{},N.`$ (19)
In this expression, the complex numbers $`a_{1,2}`$ and $`b_{1,2}`$ are related to the position of the center and the size of the instanton and anti-instanton through the following relations:
$$a_1=c_1^{(I)}+ic_2^{(I)}-\rho ^{(I)},\qquad a_2=c_1^{(I)}+ic_2^{(I)}+\rho ^{(I)},$$
$$b_1=c_1^{(A)}+ic_2^{(A)}-\rho ^{(A)},\qquad b_2=c_1^{(A)}+ic_2^{(A)}+\rho ^{(A)},$$
(20)
where $`c_i^{(I,A)}`$ is the $`i`$–th coordinate of the center of the instanton or anti-instanton and $`\rho ^{(I,A)}`$ are their sizes. As an “artificial” I–A lattice configuration we took the discretization of the above continuum configuration on a 2d finite volume lattice with periodic boundary conditions. For simplicity, we used $`\rho ^{(I)}=\rho ^{(A)}\equiv \rho `$.
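Eqs. (18)–(20) translate just as directly; a sketch (with the links again built from the $`z`$ field as in Eq. (16)):

```python
import numpy as np

def ia_pair(L, N, rho, cI, cA):
    """I-A configuration of Eqs. (18)-(20) with rho^(I) = rho^(A) = rho;
    cI, cA are (c1, c2) tuples for the two centers."""
    a1, a2 = complex(*cI) - rho, complex(*cI) + rho
    b1, b2 = complex(*cA) - rho, complex(*cA) + rho
    x1, x2 = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    u, ubar = x1 + 1j * x2, x1 - 1j * x2
    w = np.zeros((L, L, N), dtype=complex)
    w[..., 0] = (u - a1) * (ubar - np.conj(b1))
    w[..., 1] = (u - a2) * (ubar - np.conj(b2))
    z = w / np.linalg.norm(w, axis=-1, keepdims=True)
    lam = np.empty((L, L, 2), dtype=complex)
    for mu in (0, 1):
        s = np.sum(np.conj(np.roll(z, -1, axis=mu)) * z, axis=-1)
        lam[..., mu] = s / np.abs(s)
    return z, lam
```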
Starting from several I–A pairs with different values of the distance between the centers $`c^{(I)}`$ and $`c^{(A)}`$ and of the size $`\rho `$, we performed long cooling sequences with the three methods, namely $`O(1000)`$ iterations for the standard and Pisa coolings and $`\delta `$ as small as $`5\times 10^{-4}`$ for the new cooling. In order to have an indication of how the cooling process works, we examined by eye the distribution of the topological charge and action densities during the different cooling iterations. The evolution of these densities looked the same, no matter which cooling procedure was adopted. Specifically, the distribution of the action density shows at the beginning two equal instanton bumps, which for $`d\gg \rho `$ ($`d`$ is the distance between the two centers $`c^{(I)}`$ and $`c^{(A)}`$) are well–separated; then, as the cooling goes on, these bumps lower in height and merge together, up to complete annihilation. In order to make quantitative the statement that the three coolings behave essentially in the same way, we studied the shell correlation function of the topological charge density. This function is defined as $`G(r)\equiv (1/N_r)\sum _{x,y}q^L(x)q^L(y)`$, where the sum is extended over all pairs of sites $`x`$, $`y`$ which satisfy $`r\le |x-y|<r+\mathrm{\Delta }r`$ and $`N_r`$ is the number of those pairs for a given value of $`r`$ (we have chosen $`\mathrm{\Delta }r=0.6a`$). We determined $`G(r)`$ on every configuration resulting after each cooling iteration. (From reflection positivity it follows that the two–point correlation function of the topological charge density, and consequently also $`G(r)`$, is negative at distances $`r>0`$. However, reflection positivity is lost during cooling, since cooling affects the quantum structure of the theory, leaving only the semiclassical background intact. After cooling one in general obtains positive values for $`G(r)`$, apart from those large separations which the cooling procedure has not yet affected.) We compared the results which presented the same value of the internal energy $`E`$, which is defined by
$$E\equiv -\frac{1}{2L^2}\sum _x\sum _{\mu =1,2}\left(\overline{z}(x+\widehat{\mu })\cdot z(x)\lambda _\mu (x)+\overline{z}(x)\cdot z(x+\widehat{\mu })\overline{\lambda }_\mu (x)-2\right),$$
(21)
and is proportional to the lattice action. In Fig. 3 we show the behavior of the shell correlation versus $`E`$ during the three (protracted) coolings for the case of $`r`$ equal to the distance $`d`$ between the centers of the instanton and of the anti-instanton (left) and to the instanton (or anti-instanton) size $`\rho `$ (right). The three curves fall on top of each other, apart from a small deviation between the Pisa cooling and the other two at the very first (1–2) cooling steps, due to (unphysical) finite size effects. Since the shell correlation of the topological charge density is related to the size of the instantons, the conclusion is that the three cooling methods perform equivalently in deforming the size of both the instanton and the anti-instanton in the I–A pair and in modifying the distance between the two peaks.
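The shell correlation itself can be computed efficiently with FFTs rather than an explicit double sum over sites; the following is one possible (illustrative) implementation for a real charge-density field on a periodic lattice.

```python
import numpy as np

def shell_correlation(q, r_max, dr=0.6):
    """G(r) = (1/N_r) * sum over pairs with r <= |x-y| < r+dr of q(x) q(y),
    for a real field q on a periodic L x L lattice (distances in lattice units)."""
    L = q.shape[0]
    d = np.minimum(np.arange(L), L - np.arange(L))       # periodic 1d distances
    dist = np.sqrt(d[:, None]**2 + d[None, :]**2)        # |offset| for every offset
    qk = np.fft.fft2(q)
    corr = np.fft.ifft2(qk * np.conj(qk)).real           # corr[o] = sum_x q(x) q(x+o)
    rs = np.arange(0.0, r_max, dr)
    G = np.zeros_like(rs)
    for i, r in enumerate(rs):
        shell = (dist >= r) & (dist < r + dr)
        if shell.any():
            # each offset in the shell accounts for L*L ordered pairs
            G[i] = corr[shell].sum() / (L * L * shell.sum())
    return rs, G
```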
## 6 Cooling on the equilibrium ensemble
Using artificial configurations with non–trivial topology provides interesting suggestions about a cooling procedure, but in real life the cooling must be applied to thermal equilibrium configurations. Among them there are configurations which contain several instanton and anti-instanton bumps, merged in a sea of quantum fluctuations. These configurations can represent good test–fields for the different cooling procedures as well. Their evolution under cooling can be visualized through the distribution of the action density or of the topological charge density. In principle we could expect that the three cooling methods described in Section 3 make the starting thermalized configuration evolve along different directions. In particular, for a not too protracted cooling, one expects that all three methods have been able to erase the quantum fluctuations, but the number, type and location of the surviving topological bumps in the cooled configuration can be rather different. In order to check this expectation, it is necessary to find a criterion to establish a correspondence between the number of iterations in the standard or Pisa coolings and a $`\delta `$ parameter for the new cooling. We defined an effective temperature for configurations obtained during the cooling iteration in such a way that the comparison could be made only between configurations at the same temperature. The most natural “thermometer” is the internal energy $`E`$, Eq. (21), since this is the quantity which is minimized during the cooling procedure.
We put the above considerations into practice in the case of the $`\mathrm{CP}^3`$ model. We generated by Monte Carlo a sample of equilibrium configurations on a $`76^2`$ lattice at $`\beta =1.05`$. As in Ref. , the simulation algorithm is, for every updating step, a mixture of 4 microcanonical updates and 1 over heat–bath . We measured the expectation value of $`E`$ on the cooled ensembles obtained by the three cooling methods after several numbers of iterations (standard and Pisa coolings) and for several values of $`\delta `$ (new cooling). By comparing the results and imposing that $`E`$ be the same, we obtained a correspondence table between the number of cooling iterations for the standard and Pisa coolings and a value of the $`\delta `$ parameter in the new cooling. We found for instance that, for the $`\mathrm{CP}^3`$ model at the above values of $`\beta `$ and lattice size, 30 iterations of the standard cooling correspond (in the sense of the average value of $`E`$) to $`\delta =0.007`$ for the new cooling and to approximately 33 iterations of the Pisa cooling.
In general the matching between the number of standard and Pisa cooling iterations is not as good as the one obtained between $`\delta `$ in the new cooling and a fixed number of iterations in the standard (or Pisa) cooling. The reason is that in the second case one can tune a continuous parameter, while in the first only integer jumps of the parameter controlling the amount of cooling are allowed. For this reason we have mainly investigated the relation existing between the standard and new coolings. In Table 1 we show the values of $`\delta `$ (new cooling) found in correspondence to several numbers of iterations of the standard cooling for the $`\mathrm{CP}^3`$ model at $`\beta =1.05`$ on a $`76^2`$ lattice. Notice that a different correspondence table may be found for other values of $`N`$ and $`\beta `$.
Using the correspondence table we can directly compare the different coolings. As a first step, we have computed the average values of some physical quantities on samples obtained after applying equivalent amounts of cooling on a set of equilibrium configurations. In Fig. 4 we show the results for the lattice topological susceptibility with the standard and new coolings. By using the relation given in Table 1, we report both determinations in terms of the number of standard cooling iterations. It clearly appears that, as regards the determination of the topological susceptibility, the two cooling techniques are completely equivalent.
Analogously in Fig. 5 we compare the determinations of the magnetic susceptibility $`\chi _\mathrm{m}`$ and of the correlation length $`\xi _G`$ defined by
$`\chi _\mathrm{m}`$ $`\equiv `$ $`{\displaystyle \frac{1}{L^2}}{\displaystyle \sum _{x,y}}\langle \text{Tr}P(x)P(y)\rangle _{\mathrm{conn}},`$
$`\left({\displaystyle \frac{\xi _G}{a}}\right)^2`$ $`\equiv `$ $`{\displaystyle \frac{1}{4\mathrm{sin}^2(\pi /L)}}\left[{\displaystyle \frac{\stackrel{~}{G}_P(0,0)}{\stackrel{~}{G}_P(0,1)}}-1\right],`$ (22)
where
$$\stackrel{~}{G}_P(k)\equiv \frac{1}{L^2}\sum _{x,y}\langle \mathrm{Tr}P(x)P(y)\rangle _{\mathrm{conn}}\mathrm{exp}\left[i\frac{2\pi }{L}(x-y)k\right],$$ (23)
is the lattice Fourier transform of the correlation of two local gauge–invariant composite operators $`P_{ij}(x)`$, which have been defined in Eq. (7) (the subscript “conn” means connected Green function). The results for $`\chi _\mathrm{m}`$ and $`\xi _G/a`$ in Fig. 5 indicate that in this case the new cooling and the standard cooling are in good agreement, apart from a small discrepancy at large numbers of iterations.
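For a single configuration, Eqs. (22)–(23) can be evaluated with fast Fourier transforms, using $`\mathrm{Tr}P(x)P(y)=\sum _{ij}P_{ij}(x)P_{ji}(y)`$ and the hermiticity of $`P`$. The sketch below assumes a field layout of our own choosing (sites $`z(x)`$ stored as an (L, L, N) complex array); the connected-part subtraction uses $`\langle P_{ij}\rangle =\delta _{ij}/N`$ and is exact only after ensemble averaging.

```python
import numpy as np

def gP_tilde(z):
    """Momentum-space correlator G~_P(k) of P_ij(x) = zbar_i(x) z_j(x)
    for one configuration z of shape (L, L, N) with |z(x)| = 1."""
    L, N = z.shape[0], z.shape[-1]
    P = np.einsum('xyi,xyj->xyij', z.conj(), z)       # projector at each site
    Pk = np.fft.fft2(P, axes=(0, 1))                  # FFT over the site index
    G = np.sum(np.abs(Pk) ** 2, axis=(2, 3)) / L**2   # (1/L^2) sum_ij |P~_ij(k)|^2
    G[0, 0] -= L**2 / N     # subtract the disconnected piece, <P_ij> = delta_ij/N
    return G

def chi_m_and_xiG2(z):
    """Magnetic susceptibility and squared second-moment correlation length, Eq. (22)."""
    L = z.shape[0]
    G = gP_tilde(z)
    chi_m = G[0, 0]                                   # k = 0 mode
    xiG2 = (G[0, 0] / G[0, 1] - 1.0) / (4.0 * np.sin(np.pi / L) ** 2)
    return chi_m, xiG2

# exercise the code on a random (unphysical) configuration; on such a
# configuration xiG2 may come out near zero within noise
rng = np.random.default_rng(0)
z = rng.normal(size=(16, 16, 4)) + 1j * rng.normal(size=(16, 16, 4))
z /= np.linalg.norm(z, axis=-1, keepdims=True)
print(chi_m_and_xiG2(z))
```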
We have also studied the shell correlation of two lattice topological charge density operators, $`G(r)`$, for $`r/a`$ going from 0 to 20, after equivalent amounts of the three coolings. In Fig. 6 we compare the standard cooling (30 iterations) to the new cooling ($`\delta =0.007`$) and the Pisa cooling (33 iterations, $`\delta _{\mathrm{Pisa}}=0.2`$) (see footnote 2 in Section 5 about the positivity of $`G(r)`$ after cooling). The results are in good agreement, although a small deviation can be observed at small distances. This outcome suggests that the three coolings affect the instanton size distribution in the small–distance region in slightly different ways; the effect is, however, modest.
Next we compared the three coolings on a subset of the above–generated thermal equilibrium ensemble for the CP<sup>3</sup> model. To this aim, we have chosen from the thermal ensemble several configurations showing non–zero topological charge after 30 iterations of the standard cooling and compared their action density and topological charge density distributions by eye to those of the same configurations cooled by 33 iterations of the Pisa cooling and by the new cooling with $`\delta =0.007`$. For all the configurations considered, we observed that the bumps corresponding to instantons and anti-instantons had the same shape, number and location (see Fig. 7 for an example).
These considerations provide evidence that the three cooling methods defined in Section 3 behave essentially in an equivalent way, not only on average, but also configuration by configuration, and that they can be related by a simple correspondence between number of iterations (standard and Pisa coolings) and $`\delta `$ (new cooling).
We close this Section with some considerations about the usual picture of cooling as a diffusion process. As we pointed out in Section 3, the parameter $`\delta `$ in the new cooling behaves, in the continuum limit, like a physical quantity of dimension two. Therefore, denoting by $`r_0`$ the physical scale up to which cooling affects the quantum fluctuations and following Ref. , we infer that $`r_0\propto \delta ^{-1/2}`$ in the continuum limit. On the other hand, the standard cooling is usually believed to act as a diffusion process, so that, if $`n`$ is the number of iterations, it should affect fluctuations up to a scale $`r_0\propto n^{1/2}`$. Using the relation found between $`n`$ and $`\delta `$, we can check both these predictions: indeed we expect that $`\delta \propto 1/n`$. In Fig. 8 we have plotted the function $`\delta (n)`$ given in Table 1, together with a best fit to a function proportional to $`1/n`$. For the lattice parameters used in the simulation we see that $`\delta (n)`$ behaves as expected for small values of $`n`$, while the relation $`\delta \propto 1/n`$ breaks down for $`n\gtrsim 30`$–40, indicating that the picture of the standard cooling as a diffusion process may work well only for moderately small values of $`n`$.
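A minimal numerical version of this diffusion test (with placeholder $`\delta (n)`$ values standing in for Table 1): fit $`a/n`$ on the small-$`n`$ points and watch where the product $`n\delta (n)`$ stops being constant.

```python
import numpy as np

n = np.array([5, 10, 15, 20, 30, 40, 60, 80])
delta = np.array([0.046, 0.022, 0.0145, 0.0108, 0.0070, 0.0051, 0.0033, 0.0024])

mask = n <= 30                          # expected diffusion regime, small n
a = np.sum(delta[mask] / n[mask]) / np.sum(1.0 / n[mask] ** 2)  # least squares for a/n

for ni, di in zip(n, delta):
    print(f"n = {ni:3d}   delta = {di:.4f}   a/n = {a/ni:.4f}   n*delta/a = {di*ni/a:.2f}")
```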
## 7 The topological susceptibility
In this Section we apply the new cooling method to the determination of the physical topological susceptibility and compare the results to those from an alternative approach, the field theoretical method combined with smearing . As for the comparison of the new cooling with the standard and Pisa coolings, we have already shown in the previous Section that they give equivalent results for $`\chi ^L`$.
The lattice discretization of the topological charge density and of the topological susceptibility has already been discussed in Section 2. In the relation existing between the lattice topological susceptibility $`\chi ^L`$ and the continuum one, Eq. (10), $`M(\beta )`$ indicates the mixing of $`\chi ^L`$ with operators having the same quantum numbers. More precisely, one can write
$$M(\beta )=A(\beta )a^2T_{\mathrm{np}}+P(\beta )I+O(a^4).$$ (24)
The first and second terms on the r.h.s. are the mixings with the trace of the energy–momentum tensor and with the unit operator, respectively (“np” denotes the purely non–perturbative part of $`T`$).
The mixing coefficients $`A(\beta )`$ and $`P(\beta )`$, as well as the multiplicative renormalization $`Z(\beta )`$, can be calculated in perturbation theory. The perturbative series for $`A(\beta )`$ starts from the order $`1/\beta ^3`$, and arguments can be given which justify that $`A(\beta )a^2T_{\mathrm{np}}`$ is safely negligible in the scaling window of the simulation. In Ref. the perturbative series for $`Z(\beta )`$ has been calculated up to the order $`1/\beta ^2`$, and the perturbative series of $`P(\beta )`$ at the first two non–zero orders, $`1/\beta ^4`$ and $`1/\beta ^5`$. On the other hand, there are no available perturbative estimates of the $`O(a^4)`$ terms in Eq. (24).
However, a more powerful, purely numerical technique can be used to get a non–perturbative determination of $`Z(\beta )`$ and of the whole additive renormalization $`M(\beta )`$. This technique is the so–called “heating method” .
The idea of the heating method is to determine the average values of topological quantities on samples of configurations obtained by thermalizing the short–range fluctuations, which are responsible for the renormalizations, on configurations of well defined topological background. For instance, if one applies several thermalization steps at a given value of $`\beta `$ on a configuration containing one discretized instanton of charge $`Q=1`$ and measures the average value of $`Q^L`$, then the renormalization constant $`Z(\beta )`$ can be determined as $`Z(\beta )=\langle Q^L\rangle /Q`$ (here $`\langle \mathrm{}\rangle `$ indicates an average in the background of a fixed charge $`Q`$). Analogously, by thermalizing a trivial configuration (for example a configuration where for all lattice sites $`x`$, the respective fields are $`z(x)=(1,0,0,\mathrm{})`$ and $`\lambda _\mu (x)=1`$) at a given $`\beta `$ and measuring the average value of $`\chi ^L`$, one gets a determination of $`M(\beta )`$. In the first case one obtains a non–perturbative estimate of $`Z(\beta )`$ as long as the $`\overline{\text{MS}}`$ value of the background topological charge $`Q`$ is chosen (see Section 2). In the second case the method amounts to imposing the requirement that the physical susceptibility $`\chi `$ vanishes in the absence of instantons.
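Once the heated ensembles are available, the estimators are elementary. A sketch of the bookkeeping (toy numbers, and autocorrelations in the heated samples ignored):

```python
import numpy as np

def heating_estimates(QL_instanton, chiL_trivial, Q_background=1):
    """Z from <Q^L>/Q on a one-instanton background and M from <chi^L>
    on the trivial background, with naive standard errors."""
    QL = np.asarray(QL_instanton, float)
    chiL = np.asarray(chiL_trivial, float)
    Z = QL.mean() / Q_background
    Z_err = QL.std(ddof=1) / np.sqrt(len(QL)) / abs(Q_background)
    M = chiL.mean()          # imposes chi = 0 in the absence of instantons
    M_err = chiL.std(ddof=1) / np.sqrt(len(chiL))
    return (Z, Z_err), (M, M_err)

rng = np.random.default_rng(1)   # toy samples, only to show the interface
print(heating_estimates(rng.normal(0.35, 0.03, 200), rng.normal(2.1e-4, 2e-5, 200)))
```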
The heating method has been extensively applied in the following. Once all the renormalization constants are known, the extraction of $`\chi `$ from Eq. (10) is possible and the result is called the field theoretical determination.
Since the continuum $`\chi `$ is extracted from the lattice $`\chi ^L`$ by subtracting the renormalization effects, it happens that, if $`M(\beta )`$ is a large part of the whole lattice signal and $`Z(\beta )`$ is small, the physical signal for $`\chi `$ is extracted with very large errors. This is the case when one uses the lattice discretization of Eq. (6). However, since both $`Z(\beta )`$ and $`M(\beta )`$ depend on the discretization $`q^L(x)`$, one can exploit the arbitrariness in the lattice definition and use an improved operator. To this aim, following the idea of Ref. , already used for the determination of $`\chi `$ in Yang–Mills theories , we have used a smeared topological charge density operator, which is built from the original operator $`q^L(x)`$, defined in Eq. (6), by replacing the fields $`z(x)`$ and $`\lambda _\mu (x)`$ with
$$z^{\mathrm{smear}}(x)=𝒩_z\left[(1-c)z(x)+\frac{c}{4}\sum _{\mu =1,2}\left(z(x-\widehat{\mu })\lambda _\mu (x-\widehat{\mu })+z(x+\widehat{\mu })\overline{\lambda }_\mu (x)\right)\right],$$

$$\lambda _\mu ^{\mathrm{smear}}(x)=𝒩_\lambda \left[(1-c)\lambda _\mu (x)+c\frac{\overline{z}(x)z(x+\widehat{\mu })}{|\overline{z}(x)z(x+\widehat{\mu })|}\right],$$ (25)
where $`𝒩_z`$, $`𝒩_\lambda `$ are normalization constants ensuring that $`|z^{\mathrm{smear}}(x)|=|\lambda _\mu ^{\mathrm{smear}}(x)|=1`$, and $`c`$ has been chosen equal to 0.65 (footnote: this choice is based on the fact that i) it is convenient to take $`c`$ as large as possible and ii) for $`c>2/3`$ the smearing procedure behaves in a radically different way, leading, when iterated, to a completely disordered field configuration instead of a smoother one; a similar behaviour has been observed in Yang–Mills theories ). The smearing can be iterated at will by defining the $`n`$–th level of smeared fields from the $`(n-1)`$–th level in a fashion analogous to Eq. (25). For each level of smearing a relation like Eq. (10) holds, although $`Z(\beta )`$ and $`M(\beta )`$ change. Using smeared operators, these renormalization constants get closer to 1 and 0, respectively, as the smearing level increases, thus allowing a much more precise determination of $`\chi `$.
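As an illustration, one iteration of Eq. (25) can be coded as below; the array layout (site fields $`z`$ of shape (L, L, N) with unit norm, U(1) links $`\lambda `$ of shape (2, L, L)) is an assumption of the sketch, not a prescription from the text.

```python
import numpy as np

def smear_step(z, lam, c=0.65):
    """One smearing iteration, Eq. (25), for 2d CP^{N-1} fields."""
    z_new = (1.0 - c) * z
    for mu in range(2):
        z_fwd = np.roll(z, -1, axis=mu)              # z(x + mu_hat)
        z_bwd = np.roll(z, +1, axis=mu)              # z(x - mu_hat)
        lam_bwd = np.roll(lam[mu], +1, axis=mu)      # lambda_mu(x - mu_hat)
        z_new = z_new + (c / 4.0) * (z_bwd * lam_bwd[..., None]
                                     + z_fwd * lam[mu].conj()[..., None])
    z_new /= np.linalg.norm(z_new, axis=-1, keepdims=True)       # N_z

    lam_new = np.empty_like(lam)
    for mu in range(2):
        z_fwd = np.roll(z, -1, axis=mu)
        hop = np.einsum('xyi,xyi->xy', z.conj(), z_fwd)          # zbar(x).z(x+mu)
        lam_new[mu] = (1.0 - c) * lam[mu] + c * hop / np.abs(hop)
        lam_new[mu] /= np.abs(lam_new[mu])                       # N_lambda
    return z_new, lam_new

# quick sanity check on random fields: normalizations are preserved
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 8, 4)) + 1j * rng.normal(size=(8, 8, 4))
z /= np.linalg.norm(z, axis=-1, keepdims=True)
lam = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(2, 8, 8)))
z2, lam2 = smear_step(z, lam)
print(np.allclose(np.linalg.norm(z2, axis=-1), 1.0), np.allclose(np.abs(lam2), 1.0))
```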
We performed a numerical simulation for the CP<sup>9</sup> model at several values of $`\beta `$ and $`L`$. We used the same updating algorithm as in the previous Section. For most simulations we collected 100K equilibrium configurations after discarding 10K configurations to allow thermalization. Successive equilibrium configurations were decorrelated by 10 updating steps. By averaging over the thermal equilibrium ensemble we determined the unsmeared lattice topological susceptibility $`\chi ^L`$ and the smeared ones, $`\chi _{\mathrm{smear}}^L`$, for 1 to 10 smearing levels. These results, together with $`Z(\beta )`$ and $`M(\beta )`$ coming from the heating method, allow us to extract $`a^2\chi `$ from Eq. (10).
Every 50 updating steps we have also applied the new cooling method at several values of the $`\delta `$ parameter, ranging from 0.3 to $`5\times 10^{-4}`$. Since cooling eliminates the short–distance fluctuations, the value of $`\chi ^L`$ after cooling, hereafter called $`\chi _{\mathrm{cool}}^L`$, can be written as in Eq. (10) with $`Z\simeq 1`$ and $`M\simeq 0`$ if the cooling has been protracted enough. Hence it should directly provide $`a^2\chi `$. Consistency requires that the cooling and the field theoretical methods provide the same result.
The summary of the performed simulations is presented in Table 2. In Fig. 9 we show the results for $`\chi _{\mathrm{cool}}^L`$ from the new cooling at several values of the $`\delta `$ parameter at $`\beta =0.70`$ on a $`24^2`$ lattice. We interpret the behaviour under the new cooling in the following way: as $`\delta `$ decreases, the cooling erases an ever larger amount of quantum fluctuations, thus bringing $`\chi _{\mathrm{cool}}^L`$ to a maximum. Afterwards, for too small values of $`\delta `$, the cooling begins to affect also the topological fluctuations and part of the topological signal gets lost. These considerations suggest that a convenient choice of the $`\delta `$ parameter may be around the value for which $`\chi _{\mathrm{cool}}^L`$ reaches the maximum. At this value of $`\delta `$ we then assume that $`Z\simeq 1`$ and $`M\simeq 0`$. The choice of a smaller value for $`\delta `$ would not be dramatically dangerous, but would push the scaling region towards larger values of $`\beta `$.
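Operationally the choice reduces to locating the maximum of $`\chi _{\mathrm{cool}}^L(\delta )`$; a minimal sketch with placeholder numbers in the spirit of Fig. 9:

```python
import numpy as np

delta = np.array([0.30, 0.10, 0.030, 0.010, 0.0050, 0.0025, 0.0010, 0.0005])
chi_cool = np.array([1.1e-4, 1.9e-4, 2.6e-4, 3.0e-4, 3.1e-4, 3.0e-4, 2.7e-4, 2.3e-4])

i = int(np.argmax(chi_cool))
print(f"maximum at delta = {delta[i]:.4g}; adopt a^2*chi = {chi_cool[i]:.3g} "
      "assuming Z ~ 1 and M ~ 0 there")
```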
In Table 3 we summarize the results for $`\chi ^L`$ and for $`\xi _G^2\chi `$ obtained for CP<sup>9</sup> by the field theoretical method combined with smearing (0, 5 and 10 levels). In Table 4 the results for $`\chi _{\mathrm{cool}}^L`$ and $`\xi _G^2\chi `$ obtained by the new cooling method for $`\delta `$ around the peak value (see Fig. 9 for $`\beta =0.70`$) are shown. Fig. 10 displays the scaling of $`\xi _G^2\chi `$ for the cases of 0, 5 and 10 smearing levels with the field theoretical method (left) and with the new cooling method for $`\delta `$=0.0100, 0.0050, 0.0025 (right). From this figure we see that there is practically no dependence on the smearing level, although the error bars strongly decrease with the number of smearing levels. As for the cooling method, we see that there is a satisfying consistency among the results for the different values of $`\delta `$, and also between these results and those from the field theoretical method.
## 8 Summary and conclusions
The study of the topological properties of the vacuum of a field theory simulated on a lattice is made difficult by the presence of quantum fluctuations which hinder the extraction of the relevant physical information. Among other methods, cooling has been proposed as a technique to wash out such fluctuations and reveal the topological background of any single field configuration. In the present paper we have made a comparison among three different cooling methods using as test–field the 2d CP<sup>N-1</sup> model defined in Section 2.
The three cooling methods under study have been defined in Section 3. They are: the standard cooling (a local minimization of the Euclidean lattice action), the Pisa cooling (the same but with local minimizations constrained to be smaller than a given bound) and a new cooling (recently introduced in Ref. ) where the local minimizations are accepted only if they are larger than a given bound.
Firstly, we have studied the performance of the three coolings on classical non–equilibrium configurations representing 1–instanton solutions and instanton–anti-instanton pairs. We have placed one such object on the lattice and have performed a series of cooling iterations in order to measure several physical (topological and non–topological) observables. In all situations the three coolings have yielded the same result —see Figs. 2 and 3.
On the basis of these results, we have assumed that the three cooling methods are equivalent and that a correspondence can be established between them, in the sense that a given number of iterations of the Pisa or the standard coolings corresponds to a precise value of the bound in the new cooling if the value of the energy $`E`$, Eq. (21), is the same after applying the coolings. This correspondence has been used in the subsequent investigation and, for the CP<sup>3</sup> model at $`\beta =1.05`$ on a $`76^2`$ lattice, it is shown in Table 1.
We have studied the performance of the cooling methods on a set of equilibrium configurations obtained after a Monte Carlo simulation. On these thermalized configurations we have first extracted several physical quantities after applying cooling: the lattice topological susceptibility is the same when equivalent amounts of cooling, in the sense of Table 1, are applied —see Fig. 4. The same happens for the magnetic susceptibility and for the correlation length as long as the cooling is not protracted too much —see Fig. 5. The same agreement is seen for the shell correlation function $`G(r)`$ if $`r/a`$ is large enough —see Fig. 6. Then we have inspected by eye the action density and the topological charge density distributions obtained with the application of equivalent amounts of cooling, in the sense of Table 1. In all cases we have again obtained completely analogous distributions —see Fig. 7 for an example. These results altogether strongly suggest that the three cooling methods, although different in procedure, behave equivalently.
By using the correspondence given in Table 1 we have tested the picture of cooling as a diffusion process. For the lattice parameters used in this analysis, this picture works well only for a moderately small number of iterations ($`n\lesssim 40`$).
Finally we have compared the results obtained for the topological susceptibility in the CP<sup>9</sup> model by the new cooling method with those extracted from a well–tested method: the so–called field theoretical method improved with smearing. In Tables 3 and 4 we give the results for the quantity $`\xi _G^2\chi `$. They agree fairly well with one another, see Fig. 10, and also with the large $`N`$ estimate of $`\xi _G^2\chi `$, which is known up to order $`O(1/N^2)`$.
We used the standard action both to generate thermal equilibrium configurations in the Monte Carlo simulations and during the cooling procedure. We expect that the above results concerning the equivalence between the different cooling techniques should not depend strongly on the action used during cooling. However, the comparison of the three cooling techniques using different lattice actions is worth pursuing in future work.
## Acknowledgement
We would like to thank P. Cea, Ph. de Forcrand, A. Di Giacomo, P. Rossi, I.O. Stamatescu and E. Vicari for useful discussions.
# Charged Excitons in a Dilute 2D Electron Gas in a High Magnetic Field
## I Introduction
The magneto-optical properties of quasi-two-dimensional (2D) electron systems have been intensively investigated experimentally and theoretically. For a dilute electron gas, the photoluminescence (PL) spectrum is determined by a charged-exciton complex $`X^{-}`$ and its interaction with the remaining electrons. The $`X^{-}`$ consists of two electrons and a valence hole and is similar to the hydrogen ion H<sup>-</sup>. Its existence in bulk semiconductors was first predicted by Lampert, but due to its small binding energy it has not been observed experimentally. Stebe and Ainane showed that the binding of the second electron to the exciton $`X`$ should be enhanced in 2D systems. Indeed, the $`X^{-}`$ has been observed in semiconductor quantum wells (QW) by Kheng et al. and in many related experiments.
The experimental observation stimulated a number of theoretical works. It is now well established that the only bound $`X^{-}`$ state at zero magnetic field is the singlet state ($`X_s^{-}`$) with the total electron spin $`J_e=0`$. Accordingly, the PL spectrum shows only two peaks, due to the $`X`$ and $`X_s^{-}`$ recombination, split by the $`X_s^{-}`$ binding energy $`\mathrm{\Delta }_s`$. The situation is much more complicated in a magnetic field. In very high fields, MacDonald and Rezayi showed that optically active magneto-excitons do not bind a second electron. They are effectively decoupled from the excess electrons due to the “hidden symmetry,” and the PL spectrum is that of a single exciton, irrespective of the number of electrons present.
It was therefore surprising when a bound $`X^{-}`$ complex was discovered via numerical experiments in the lowest Landau level (LL). The bound complex was a triplet ($`X_t^{-}`$) with finite total angular momentum and a macroscopic degeneracy. It was later shown by Palacios et al. that an isolated $`X_t^{-}`$ in the lowest LL has infinite radiative time $`\tau _t`$. Two independent symmetries must be broken to allow for the $`X_t^{-}`$ recombination: the “hidden symmetry” due to an equal strength of $`e`$–$`e`$ and $`e`$–$`h`$ interactions, and the 2D geometrical (translational) symmetry resulting in the conservation of two angular momentum quantum numbers. The “hidden symmetry” can be broken by mixing of LL’s, valence band mixing effects, and asymmetry of the QW. The translational symmetry can be broken by disorder. Therefore, the $`X_t^{-}`$ recombination probability is determined by disorder and scattering by additional electrons, and is expected to disappear with increasing magnetic field. Also, crossing of the $`X_t^{-}`$ and $`X_s^{-}`$ PL peaks must occur at some value of the magnetic field, when $`X_t^{-}`$ becomes the $`X^{-}`$ ground state. This hypothetical long-lived $`X_t^{-}`$ ground state in high magnetic fields has recently received a lot of attention. Because the $`X_t^{-}`$ complexes carry a net charge and form LL’s, they are expected to form (together with remaining electrons) the multi-component incompressible fluid states with Laughlin–Halperin (LH) correlations. Since an experimental realization of such states requires reaching the “hidden symmetry” regime (long-lived $`X_t^{-}`$ ground state), an estimate of required magnetic fields is needed.
While variational calculations of the hydrogen-like $`X_s^{-}`$ appear satisfactory, an accurate description of $`X_t^{-}`$ at finite magnetic fields is extremely difficult. Although Whittaker and Shields (WS) predicted a transition to the $`X_t^{-}`$ ground state in a GaAs/AlGaAs QW of width $`w=10`$ nm at a magnetic field of $`B\approx 30`$ T, the experimental data for $`B\lesssim 10`$ T that was available at the time could not verify their result. A negative answer came recently from Hayne et al., whose PL measurements in magnetic fields up to $`50`$ T seemingly precluded such a transition. In their spectra, $`X_s^{-}`$ remained the ground state up to 50 T, and an extrapolation to higher fields ruled out the singlet-triplet crossing at any nearby values. Moreover, in clear disagreement with Ref. , strong $`X_t^{-}`$ PL was detected, whose intensity increased with increasing magnetic field, and which at 13.5 T exceeded that of the $`X_s^{-}`$. The results of Hayne et al. not only disagreed with the model of WS, but also suggested that a picture of long-lived $`X_t^{-}`$’s forming the low energy states of an $`e`$–$`h`$ plasma, worked out for a strictly 2D system ($`w=0`$) in the lowest LL, might be totally inadequate for realistic GaAs systems. This suspicion was further reinforced by the unexplained lack of sensitivity of PL to the filling factor of the electron gas. The source of the disagreement might lie either in the description of the bound $`X^{-}`$ states or in the description of their interaction with excess electrons.
In this paper we address both issues. We report on detailed numerical calculations of the energy and PL spectra of $`e`$–$`h`$ systems at high magnetic fields. Using Lanczos-based methods we were able to include in our model the effects of Coulomb interaction, LL mixing, finite QW width, and realistic Zeeman and cyclotron splittings. Our calculations predict the existence of a new, optically active bound state $`X_{tb}^{-}`$ of the triplet charged exciton. The identification of this new state as the triplet $`X^{-}`$ state observed in PL explains the puzzling qualitative disagreement between earlier theory and experiments. The “bright” $`X_{tb}^{-}`$ state is distinguished from the “dark” state $`X_{td}^{-}`$ found in earlier calculations, which is the lowest-energy triplet $`X^{-}`$ state at high magnetic field but remains undetected in PL experiments (however, see also Ref. ). Energies and oscillator strengths of all bound complexes: $`X`$, $`X_s^{-}`$, $`X_{tb}^{-}`$, and $`X_{td}^{-}`$, are calculated as a function of the magnetic field and QW width. The transition to the $`X_{td}^{-}`$ ground state at $`B\approx 30`$ T is confirmed.
The interaction of $`X^{-}`$’s with additional electrons is also studied. Because this interaction has short range, it effectively isolates the bound $`X^{-}`$ states from remaining electrons and only weakly affects PL from dilute systems, as observed by Priest et al. In particular, collisions of $`X_{td}^{-}`$ with surrounding electron gas at filling factors $`\nu <1/5`$ do not significantly enhance its oscillator strength. This explains why this state is not observed in PL.
## II Model
In order to preserve the 2D translational symmetry of an infinite QW in a finite-size calculation, electrons and holes are put on a surface of the Haldane sphere of radius $`R`$. The reason to choose the spherical geometry for calculations is strictly technical and of no physical consequence for our results. Because of the finite LL degeneracy, the numerical calculations on a sphere can be done without cutting off the Hilbert space and thus without breaking the 2D translational symmetry. This allows exact resolution of the two quantum numbers conserved due to this symmetry: the total ($`ℳ_{\mathrm{tot}}`$) and center-of-mass ($`ℳ_{\mathrm{cm}}`$) angular momenta. Let us note that in earlier calculations, WS and Chapman et al. used planar geometry and hence could not resolve the $`ℳ_{\mathrm{cm}}`$ quantum number, which is essential to correctly identify the bound $`X^{-}`$ states and to further accurately calculate their energy and PL. The exact mapping between the $`ℳ_{\mathrm{tot}}`$ and $`ℳ_{\mathrm{cm}}`$ quantum numbers on a plane and the 2D algebra of the total angular momentum on a sphere (and between the respective Hilbert eigensubspaces) allows conversion of the results from one geometry to the other. The price paid for closing the Hilbert space without breaking symmetries is the surface curvature which modifies interactions. However, if the correlations to be modeled have short range that can be described by a small characteristic length $`\delta `$, the effects of curvature are scaled by a small parameter $`\delta /R`$ and can be eliminated by extrapolating the results to $`R\to \mathrm{\infty }`$. Therefore, despite all differences, the spherical geometry is equally well suited to modeling bound complexes as to the fractional quantum Hall systems (as originally used by Haldane).
The detailed description of the Haldane sphere model can be found e.g. in Refs. and (since it is not essential for our results) it will not be repeated here. The magnetic field $`B`$ perpendicular to the surface of the sphere is due to a magnetic monopole placed in the center. The monopole strength $`2S`$ is defined in the units of elementary flux $`\varphi _0=hc/e`$, so that $`4\pi R^2B=2S\varphi _0`$ and the magnetic length is $`\lambda =R/\sqrt{S}`$. The single-particle states are the eigenstates of angular momentum $`l`$ and its projection $`m`$ and are called monopole harmonics. The energies $`\epsilon `$ fall into $`(2l+1)`$-fold degenerate angular momentum shells separated by the cyclotron energy $`\mathrm{}\omega _c`$. The $`n`$-th ($`n\ge 0`$) shell (LL) has $`l=S+n`$ and thus $`2S`$ is a measure of the system size through the LL degeneracy. Due to the spin degeneracy, each shell is further split by the Zeeman gap.
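A minimal enumeration of this shell structure is handy for sizing the many-body basis; the counting $`l=S+n`$ with degeneracy $`2l+1`$ is from the text, while the code itself is only illustrative.

```python
def shells(two_S, n_max):
    """Angular-momentum shells (Landau levels) on the Haldane sphere."""
    out = []
    for n in range(n_max + 1):
        l = two_S / 2 + n                 # shell angular momentum l = S + n
        out.append((n, l, int(2 * l + 1)))
    return out

for n, l, deg in shells(two_S=20, n_max=4):
    print(f"LL n={n}:  l={l:.1f},  2l+1={deg} orbitals (x2 for spin)")
```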
Our model applies to the narrow and symmetric QW’s, and the calculations have been carried out for the GaAs/AlGaAs structures with the Al concentration of $`x=0.33`$ and the widths of $`w=10`$ nm, 11.5 nm, and 13 nm. For such systems, only the lowest QW subband need be included and the cyclotron motion of both electrons and holes can be well described in the effective-mass approximation. For the holes, only the heavy-hole states are included, with the inter-subband coupling partially taken into account through the realistic dependence $`\mathrm{}\omega _{ch}(B)`$, i.e. through the dependence of the effective in-plane (cyclotron) mass $`m_h^{*}`$ on $`B`$ (after Cole et al.).
Using a composite index $`i=[nm\sigma ]`$ ($`\sigma `$ is the spin projection), the $`e`$$`h`$ Hamiltonian can be written as
$$H=\sum _{i,\alpha }c_{i\alpha }^{\dagger }c_{i\alpha }\epsilon _{i\alpha }+\sum _{ijkl,\alpha \beta }c_{i\alpha }^{\dagger }c_{j\beta }^{\dagger }c_{k\beta }c_{l\alpha }V_{ijkl}^{\alpha \beta },$$ (1)
where $`c_{i\alpha }^{\dagger }`$ and $`c_{i\alpha }`$ create and annihilate particle $`\alpha `$ ($`e`$ or $`h`$) in state $`i`$, and $`V_{ijkl}^{\alpha \beta }`$ are the Coulomb matrix elements.
At high magnetic fields, $`w`$ significantly exceeds $`\lambda `$ and it is essential to properly include the effects due to the finite QW width. Merely scaling all matrix elements $`V_{ijkl}^{\alpha \beta }`$ by a constant factor $`\xi (w/\lambda )`$ is not enough. Ideally, the $`V_{ijkl}^{\alpha \beta }`$ should be calculated for the actual 3D electron and hole wavefunctions. The “rod” geometry used by Chapman et al. might be a reasonable approximation (for the lowest QW subband), although using the same effective rod length for electrons and holes and its arbitrary scaling with $`B`$ leads to an incorrect $`B`$-dependence of obtained results. In this work we insist on using numerically correct values of $`V_{ijkl}^{\alpha \beta }`$ and calculate them in the following way. The actual density profile across the QW can be approximated by $`\varrho (z)\propto \mathrm{cos}^2(\pi z/w^{\prime})`$, i.e. by replacing the actual QW by a wider one, with an infinite potential step at the interface. This defines the effective widths of electron and hole layers, $`w_e^{\prime}`$ and $`w_h^{\prime}`$. For $`w\approx 10`$ nm, we obtain $`w^{\prime}\equiv (w_e^{\prime}+w_h^{\prime})/2=w+2.5`$ nm. We have checked that the effective 2D interaction in a quasi-2D layer,
$$V(r)=\int 𝑑z\int 𝑑z^{\prime}\frac{\varrho (z)\varrho (z^{\prime})}{\sqrt{r^2+(z-z^{\prime})^2}},$$ (2)
can be well approximated by $`V_d(r)=1/\sqrt{r^2+d^2}`$ if an effective separation across the QW is taken as $`d=w^{\prime}/5`$. For a given $`d/\lambda `$, matrix elements of $`V_d(r)`$ have been calculated analytically and used as $`V_{ijkl}^{\alpha \beta }`$ in Eq. (1). A small difference between $`w_e^{\prime}`$ and $`w_h^{\prime}`$ is included by additional rescaling, $`V_{\alpha \beta }(r)=\xi _{\alpha \beta }V(r)`$, with $`\xi _{\alpha \beta }^2=\langle z_{eh}^2\rangle /\langle z_{\alpha \beta }^2\rangle `$. For $`w\approx 10`$ nm, we obtain $`\xi _{ee}=0.94`$ and $`\xi _{hh}=1.08`$, and for wider QW’s, the difference between $`w_e^{\prime}`$ and $`w_h^{\prime}`$ is even smaller. Note that our treatment of interactions in a quasi-2D layer is different from the “biplanar” geometry (electrons and holes confined in two parallel infinitely thin layers) tested by Chapman et al.
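The quality of the $`V_d(r)=1/\sqrt{r^2+d^2}`$ fit with $`d=w^{\prime}/5`$ is easy to check by direct quadrature of Eq. (2). The sketch below does this for a $`\mathrm{cos}^2`$ profile; widths and units are placeholder choices, with $`e^2/ϵ`$ set to 1.

```python
import numpy as np
from scipy.integrate import dblquad

def V_exact(r, w_eff):
    """Eq. (2) for the normalized cos^2 density profile of width w_eff."""
    rho = lambda z: (2.0 / w_eff) * np.cos(np.pi * z / w_eff) ** 2
    f = lambda zp, z: rho(z) * rho(zp) / np.sqrt(r**2 + (z - zp)**2)
    val, _ = dblquad(f, -w_eff / 2, w_eff / 2, -w_eff / 2, w_eff / 2)
    return val

w_eff = 14.0                 # nm, e.g. w' = w + 2.5 nm for an 11.5 nm QW
d = w_eff / 5.0
for r in (2.0, 5.0, 10.0, 20.0):
    exact = V_exact(r, w_eff)
    model = 1.0 / np.sqrt(r**2 + d**2)
    print(f"r = {r:5.1f} nm:  V_exact = {exact:.5f}   1/sqrt(r^2+d^2) = {model:.5f}")
```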
The Hamiltonian $`H`$ is diagonalized numerically in the basis including up to five LL’s ($`n\le 4`$) for both electrons and holes (note that since $`l=S+n`$, the inter-LL excitations of only one particle have non-zero angular momentum and, e.g., do not contribute to the $`X`$ ground state). Energies obtained for different values of $`2S\le 20`$ are extrapolated to $`2S\to \mathrm{\infty }`$, i.e. to an infinite QW. The eigenstates are labeled by total angular momentum $`L`$ and its projection $`M`$, which are related to the good quantum numbers on the plane: $`ℳ_{\mathrm{tot}}`$, $`ℳ_{\mathrm{cm}}`$, and $`ℳ_{\mathrm{rel}}\equiv ℳ_{\mathrm{tot}}-ℳ_{\mathrm{cm}}`$. The total electron and hole spins ($`J_e`$ and $`J_h`$) and projections ($`J_{ze}`$ and $`J_{zh}`$) are also resolved.
## III Bound $`X^{}`$ States
The $`2e`$$`1h`$ energy spectra calculated for $`2S=20`$ and five included electron and hole LL’s ($`n4`$) are shown in Fig. 1.
The parameters used in the calculation ($`w_e^{\prime}`$, $`w_h^{\prime}`$, and the dependence of $`\mathrm{}\omega _{ch}`$ on $`B`$) correspond to the 11.5 nm GaAs QW. The energy is measured from the exciton energy $`E_X`$, so that for the bound states (the states below the lines) it equals minus the binding energy $`\mathrm{\Delta }`$ (the lowest LL energy is set to zero). Open and full symbols denote singlet and triplet electron spin configurations, respectively, and only the state with the lowest Zeeman energy is marked for each triplet. Similarly, each state with $`L>0`$ represents a degenerate multiplet with $`|M|\le L`$. The angular momentum $`L`$ calculated in the spherical geometry translates into angular momenta on a plane in such a way that the $`L=S`$ multiplet corresponds to $`ℳ_{\mathrm{rel}}=0`$ and $`ℳ_{\mathrm{tot}}=ℳ_{\mathrm{cm}}=0`$, 1, …, and the $`L=S-1`$ multiplet corresponds to $`ℳ_{\mathrm{rel}}=1`$ and $`ℳ_{\mathrm{tot}}=ℳ_{\mathrm{cm}}+1=1`$, 2, ….
Due to the conservation of $`L`$ in the PL process, only states from the $`L=S`$ channel are radiative. This is because an annihilated $`e`$–$`h`$ pair has $`L_X=0`$, and the final-state electron left in the lowest LL has $`l_e=S`$. Recombination of other, non-radiative ($`L\ne S`$) states requires breaking rotational symmetry, e.g., by interaction with electrons, other charged complexes, or impurities. This result is independent of the chosen spherical geometry and holds also for the planar QW’s, where the 2D translational symmetry leads to the conservation of $`ℳ_{\mathrm{tot}}`$ and $`ℳ_{\mathrm{cm}}`$, and the corresponding PL selection rule is $`ℳ_{\mathrm{rel}}=0`$ (this simple result can also be expressed in terms of magnetic translations ).
Three states marked in Fig. 1(a,b,c) ($`B=13`$, 30, and 68 T) are of particular importance. $`X_s^{-}`$ and $`X_{tb}^{-}`$, the lowest singlet and triplet states at $`L=S`$, are the only well bound radiative states, while $`X_{td}^{-}`$ has by far the lowest energy of all non-radiative ($`L\ne S`$) states. The transition from the $`X_s^{-}`$ to the $`X_{td}^{-}`$ ground state is found at $`B\approx 30`$ T, which confirms the calculation of WS. Our slightly larger binding energies for $`w=10`$ nm are due to a larger basis used for diagonalization and including the magnetic-field dependence of the effective hole cyclotron mass (for $`w\approx 10`$ nm, $`m_h^{*}`$ increases from 0.28 at 10 T to 0.40 at 50 T). A new result is the identification of the $`X_{tb}^{-}`$ state, which remains an excited radiative bound state in all frames (a)–(c).
For comparison, the spectrum of an ideal, strictly 2D system in the lowest LL is shown in Fig. 1(d). The $`X_{td}^{-}`$ is the only bound state. As a result of the hidden symmetry, the only radiative states are the pair of “multiplicative” states at $`L=S`$ and $`E=E_X`$, in which an optically active $`X`$ with $`L_X=0`$ is decoupled from a free electron with $`l_e=S`$.
We have performed similar calculations for systems larger than $`2e`$–$`1h`$. The results confirm that $`X`$ and $`X^{-}`$ are the only well bound $`e`$–$`h`$ complexes at $`B\gtrsim 10`$ T. For example, the charge-neutral singlet biexciton $`X_2`$ (with $`J_e=J_h=L_{X_2}=0`$) unbinds at $`B\approx 20`$ T even in the absence of the Zeeman splitting, and its Coulomb binding energy between 10 and 20 T is less than 0.1 meV.
To illustrate the finite size and surface curvature effects on the results obtained in the spherical geometry, in Fig. 2(a) we plot the Coulomb binding energies (without the Zeeman energy) of all three $`X^{-}`$ states marked in Fig. 1(b) ($`B=30`$ T) as a function of $`S^{-1}=(\lambda /R)^2`$. The very regular dependence of the binding energies on the system size allows accurate extrapolation of the values obtained for $`8\le 2S\le 20`$ to $`2S\to \mathrm{\infty }`$, i.e. to an extended planar system ($`\lambda /R=0`$ and infinite LL degeneracy).
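Such an extrapolation is a simple polynomial fit in $`1/S`$; a sketch with placeholder energies:

```python
import numpy as np

two_S = np.array([8, 10, 12, 14, 16, 18, 20])
Delta = np.array([1.92, 1.84, 1.79, 1.75, 1.72, 1.70, 1.68])   # meV, placeholders

x = 2.0 / two_S                         # 1/S = (lambda/R)^2
coef = np.polyfit(x, Delta, deg=2)      # quadratic in 1/S for safety
print(f"Delta(2S -> infinity) ~ {np.polyval(coef, 0.0):.3f} meV")
```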
The effect of LL mixing is demonstrated in Fig. 2(b), where we plot the extrapolated binding energies ($`\lambda /R=0`$), calculated including between two and five electron and hole LL’s, as a function of $`B`$.
The following observations can be made. Although inclusion of one excited ($`n=1`$) LL already leads to a significant $`X_s^{-}`$ binding, at least the $`n=2`$ level must be added for quantitatively meaningful results. Because the singlet state has more weight in the excited LL’s than the triplet states, the ground-state transition shifts to higher $`B`$ when more LL’s are included. The $`X_s^{-}`$ binding energy $`\mathrm{\Delta }_s`$ weakly depends on $`B`$ and saturates at $`B\approx 20`$ T, while $`\mathrm{\Delta }_{td}\propto e^2/\lambda \propto \sqrt{B}`$. Finally, the $`X_{tb}^{-}`$ energy lies at a roughly constant separation of 1.5 meV above $`X_s^{-}`$, and never crosses either $`X_s^{-}`$ or $`X_{td}^{-}`$.
To illustrate the dependence on the QW width, in Fig. 3(a,c,d) we compare the $`X^{-}`$ binding energies obtained for $`w=10`$ nm, 11.5 nm, and 13 nm.
The thick dotted lines for $`X_s^{-}`$ include the Zeeman energy needed to flip one electron’s spin and form a bound spin-unpolarized state in an initially spin-polarized electron gas. The Zeeman energy $`E_Z=g^{*}\mu _BB`$ is roughly a linear function of energy, through both the cyclotron energy $`\mathrm{}\omega _c\propto B`$ and the confinement energy $`\propto 1/w^2`$. After Snelling et al., for $`w\approx 10`$ nm at $`B=0`$, we have $`(g_e^{*}+0.29)w^2=9.4`$ nm<sup>2</sup>, and after Seck, Potemski, and Wyder we find $`dg_e^{*}/dB=0.0052`$ T<sup>-1</sup> (for very high fields see also Ref. ). In all frames, $`E_Z`$ changes sign at $`B\approx 40`$ T, resulting in cusps in the $`X_s^{-}`$ binding energy.
To explicitly show the magnitude of $`E_Z`$, with thin lines we also plot the $`X_s^{-}`$ energy without $`E_Z`$. While the $`\mathrm{\Delta }_s`$ including $`E_Z`$ governs the $`X_s^{-}`$ relaxation and dependence of the $`X_s^{-}`$ PL intensity on temperature, the $`\mathrm{\Delta }_s`$ without $`E_Z`$ is the difference between the $`X`$ and $`X_s^{-}`$ PL energies (neglecting the difference between $`g_h^{*}`$ in the two complexes). It is clear from Fig. 3(a,c,d) that $`E_Z`$ is almost negligible for $`B<50`$ T and that the binding energies are similar for all three widths.
Since only three $`e`$–$`h`$ complexes: $`X`$, $`X_s^{-}`$, and $`X_{tb}^{-}`$, have significant binding energy and at the same time belong to the radiative $`L=S`$ channel, only three peaks are observed in the PL spectra of dilute systems (not counting the Zeeman splittings). The total oscillator strength $`\tau _\psi ^{-1}`$ of a given state $`\psi `$ can be expressed as
$$\tau _\psi ^{-1}=\langle \psi |𝒫^{\dagger }𝒫|\psi \rangle ,$$ (3)
where $`𝒫^{\dagger }=\sum _i(-1)^mc_{ie}^{\dagger }c_{ih}^{\dagger }`$ and $`𝒫=\sum _i(-1)^mc_{ie}c_{ih}`$ are the optical operators coherently creating and annihilating an $`e`$–$`h`$ pair with $`L=0`$ (optically active $`X`$). In Fig. 3(b), we plot $`\tau ^{-1}`$ of $`X`$, $`X_s^{-}`$, and $`X_{tb}^{-}`$ as a function of $`B`$ for the 11.5 nm QW. The units of $`\tau ^{-1}`$ follow from Eq. (3). We assume here that both electrons and holes are completely spin-polarized ($`J_z=J`$). Typically, all electron spins and only a fraction of hole spins $`\chi _h`$ (depending on temperature and the Zeeman energy) are aligned with the field. As a result, the $`X_{tb}^{-}`$ PL has definite circular polarization ($`\sigma _+`$) and its intensity is reduced by $`\chi _h`$, while the $`X_s^{-}`$ PL peak splits into a $`\sigma _\pm `$ doublet (separated by the appropriate Zeeman energy) with the intensity of the two transitions weighted by $`\chi _h`$ and $`1-\chi _h`$.
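Given an eigenvector and the matrix of $`𝒫`$ in the same many-body basis, Eq. (3) is a single matrix–vector product. A sketch on a toy 3-dimensional basis (the matrix below is purely illustrative, not a physical operator):

```python
import numpy as np

def oscillator_strength(psi, P):
    """tau^{-1} = <psi|P^dagger P|psi> = |P psi|^2, Eq. (3)."""
    phi = P @ psi
    return float(np.vdot(phi, phi).real)

P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, np.sqrt(2.0)],
              [0.0, 0.0, 0.0]])
psi = np.array([0.0, 0.6, 0.8])
print(oscillator_strength(psi, P))
```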
In a system obeying the hidden symmetry ($`w_e^{\prime}=w_h^{\prime}`$, no LL mixing, and no QW subband mixing), the total oscillator strength of one $`X`$ is equally shared by a pair of multiplicative $`e`$–$`X`$ states. In Fig. 3(b), it is distributed over a number of radiative ($`L=S`$) states, and, although most is inherited by the two nearly multiplicative states at $`E\approx E_X`$, a fraction also goes to the well bound $`X_s^{-}`$ and $`X_{tb}^{-}`$ states, with the ratio $`\tau _{tb}^{-1}\approx 2\tau _s^{-1}`$ almost independent of $`B`$. The resulting three PL peaks ($`X`$, $`X_s^{-}`$, and $`X_{tb}^{-}`$) are precisely the ones observed in experiments.
The actual relative intensity of the PL peaks will depend not only on the oscillator strengths but also on the relative population of the respective initial states (i.e., efficiency of the relaxation processes, which in turn depends on the excitation energies and temperature) and their spin polarization. An increase of $`\chi _h`$ from $`\frac{1}{2}`$ to 1 with increasing $`B`$ can explain an increase of the $`X_{tb}^{-}`$ PL intensity by up to a factor of two, while the $`X_s^{-}`$ PL intensity remains roughly constant.
Let us stress that the results presented in Figs. 1–3 are appropriate for narrow and symmetrically (or remotely) doped QW’s. The agreement of the calculated binding energies and their dependence on $`B`$ with the experimental data for such systems is good. In much wider QW’s ($`w\gtrsim 30`$ nm), the subband mixing becomes significant (and favors the $`X_s^{-}`$ ground state), while in strongly asymmetric QW’s or heterojunctions the Coulomb matrix elements $`V_{ijkl}^{\alpha \beta }`$ are quite different. In the latter case, the significant difference between electron and hole QW confinements ($`w_h^{\prime}\ll w_e^{\prime}`$) increases the $`e`$–$`h`$ attraction compared to the $`e`$–$`e`$ repulsion within an $`X^{-}`$. Roughly, the binding energies of all three $`X^{-}`$ states increase (compared to the values calculated here) by an uncompensated $`e`$–$`h`$ attraction which scales as $`e^2/\lambda \propto \sqrt{B}`$. This most likely explains the origin of an (equal) increase of $`\mathrm{\Delta }_s`$ and $`\mathrm{\Delta }_{tb}`$ as a function of $`B`$ found in Ref. .
While a quantitative model adequate to asymmetric QW’s or heterojunctions must use correct (sample-dependent) electron and hole charge density profiles $`\varrho (z)`$, our most important result remains valid for all structures: The triplet $`X^{-}`$ state seen in PL is the “bright” excited triplet state at $`L=S`$ ($`ℳ_{\mathrm{rel}}=0`$), while the lowest triplet state at $`L=S-1`$ ($`ℳ_{\mathrm{rel}}=1`$) so far remains undetected.
It might be useful to realize which of the experimentally controlled factors generally shift the singlet-triplet $`X^{-}`$ transition to lower magnetic fields. The hidden symmetry which in Fig. 1(d) prevents binding of any other states than $`X_{td}^{-}`$ is the exact overlap of electron and hole orbitals. The experimentally observed binding of $`X_s^{-}`$ is due to the confinement of the hole charge in a smaller volume (through asymmetric LL mixing, $`\mathrm{}\omega _{ce}>\mathrm{}\omega _{ch}`$, and asymmetric QW confinement, $`w_e^{\prime}>w_h^{\prime}`$), which enhances the $`e`$–$`h`$ attraction compared to the $`e`$–$`e`$ repulsion. Therefore, any factors should be avoided which break the $`e`$–$`h`$ orbital symmetry, such as (i) large $`w`$ leading to the QW subband mixing, (ii) well/barrier material combinations yielding $`w_h^{\prime}\ne w_e^{\prime}`$, (iii) large in-plane effective masses and small dielectric constants (large $`[e^2/ϵ\lambda ]/[\mathrm{}\omega _c]`$) leading to the strong LL mixing. On the other hand, reducing $`w`$ strengthens Coulomb interaction and thus LL mixing, while too weak interactions (scaled e.g. by $`ϵ`$) might decrease $`X^{-}`$ binding energies below the experimental resolution. The giant electron Zeeman splitting ($`|g_e^{*}|\gtrsim 1`$) in CdTe or ZnSe structures might certainly help to stabilize the $`X_{td}^{-}`$ ground state at low $`B`$. Also, appropriate asymmetric doping producing an electric field across the QW and slightly separating electron and hole layers can help to restore balance between the $`e`$–$`e`$ and $`e`$–$`h`$ interactions.
## IV Effects of $`X^{}`$ Interactions
Even in dilute systems, recombination of bound $`e`$–$`h`$ complexes can in principle be affected by their interaction with one another or with excess electrons. The short-range part of the $`e`$–$`X^{-}`$ and $`X^{-}`$–$`X^{-}`$ interaction potentials is weakened due to the $`X^{-}`$ charge polarization, and it is not at all obvious if even in the narrow QW’s the resulting $`e`$–$`X^{-}`$ and $`X^{-}`$–$`X^{-}`$ correlations will be similar to the Laughlin correlations in an electron gas. Instead, the long-range part of the effective potentials could lead to some kind of $`e`$–$`X^{-}`$ or $`X^{-}`$–$`X^{-}`$ pairing (in analogy to the electron or composite Fermion pairing in the fractionally filled excited LL’s). It has been shown that the repulsion has short range and results in Laughlin correlations, if its pseudopotential $`V(L)`$, defined as the pair interaction energy $`V`$ as a function of the pair angular momentum $`L`$ (on a sphere, larger $`L`$ means smaller separation) increases more quickly than $`L(L+1)`$. Therefore, the correlations in an infinite system (QW) are determined by the form of the relevant pseudopotentials, which can be obtained from studies of relatively small systems.
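This criterion is easy to test on tabulated pseudopotential values: $`V(L)`$ increases faster than $`L(L+1)`$ exactly when $`V`$ is convex as a function of $`L(L+1)`$. A sketch with placeholder numbers:

```python
import numpy as np

def is_short_range(L, V):
    """True if V grows faster than L(L+1), i.e. V is convex in t = L(L+1)."""
    t = L * (L + 1.0)
    slopes = np.diff(V) / np.diff(t)    # dV/dt between consecutive points
    return bool(np.all(np.diff(slopes) > 0)), slopes

L = np.array([3.0, 5.0, 7.0, 9.0])      # placeholder pair momenta
V = np.array([0.21, 0.27, 0.36, 0.52])  # placeholder pseudopotential values
ok, slopes = is_short_range(L, V)
print("Laughlin-type short range:", ok, "  dV/d[L(L+1)] =", slopes)
```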
In Fig. 4 we plot the energy spectra of a $`3e`$–$`1h`$ system (the simplest system in which an $`X^{-}`$ interacts with another charge), calculated for $`2S=20`$ and three electron and hole LL’s included ($`n\le 2`$).
The open and filled circles mark the states with total electron spins $`J_e=\frac{1}{2}`$ and $`\frac{3}{2}`$, respectively, and only the lowest energy states are shown for each spin multiplet. In the low energy states, bound $`X^{-}`$ complexes interact with an electron through the effective pseudopotentials $`V(L)`$, and the total energy of an interacting pair is the sum of $`V(L)`$ and $`E_{X^{-}}`$. For each pair, the allowed $`L`$ are obtained by adding $`l_e=S`$ of an electron and $`L_{X^{-}}`$ of an appropriate $`X^{-}`$. This yields $`L\ge 0`$ for $`X_s^{-}`$ and $`X_{tb}^{-}`$, and $`L\ge 1`$ for $`X_{td}^{-}`$. However, maximum $`L`$ are smaller than $`L_{X^{-}}+S`$ due to the finite size of the $`X^{-}`$ (hard core). The allowed total electron spins $`J_e`$ are obtained by adding $`\frac{1}{2}`$ of an electron to 0 or 1 of an $`X^{-}`$, so that the $`e`$–$`X_s^{-}`$ pair states have $`J_e=\frac{1}{2}`$, while the $`e`$–$`X_{tb}^{-}`$ and $`e`$–$`X_{td}^{-}`$ pair states can have either $`J_e=\frac{1}{2}`$ or $`\frac{3}{2}`$.
At low $`L`$ (i.e., at low $`e`$–$`X^{-}`$ interaction energy compared to the $`X^{-}`$ binding energy), the $`e`$–$`X^{-}`$ scattering is decoupled from internal $`X^{-}`$ dynamics, and all $`e`$–$`X^{-}`$ pseudopotentials marked with lines in Fig. 4 are rather well approximated by those of two distinguishable point charges (electrons) with appropriate $`l`$’s. Their relative position in different $`e`$–$`X^{-}`$ bands depends on the involved $`\mathrm{\Delta }`$ and $`E_Z`$, and the $`e`$–$`X_{td}^{-}`$ states form the lowest energy band at sufficiently large $`B`$; see Fig. 4(c). Such regular behavior of the (two-charge) $`3e`$–$`1h`$ system implies that the lowest states of an infinite $`e`$–$`h`$ plasma are formed by bound $`X^{-}`$’s interacting with one another and with excess electrons through the Coulomb-like pseudopotentials. Depending on $`B`$, either $`X_s^{-}`$’s or $`X_{td}^{-}`$’s form the ground state, while other bound complexes occur at higher energies, with the excitation gap given by the appropriate difference in $`\mathrm{\Delta }`$ and $`E_Z`$.
Less obviously, because of the short-range character of $`V(L)`$, the low-lying states have Laughlin–Halperin $`e`$–$`X^{-}`$ correlations described by a Jastrow prefactor $`\prod (x_i-y_j)^\mu `$, where $`x`$ and $`y`$ are complex coordinates of electrons and $`X^{-}`$’s, respectively, and $`\mu `$ is an integer. At fractional LL fillings $`\nu =\nu _e-\nu _h`$, $`X^{-}`$’s avoid as much as possible the $`e`$–$`X^{-}`$ pair states with largest values of $`L`$. At $`\nu =1/\mu `$, the ground state is the Laughlin-like incompressible fluid state with $`L\le L_{X^{-}}+S-\mu `$, with quasiparticle-like excitations described by a generalized composite Fermion model. Even though formation of an equilibrium $`X^{-}`$ Laughlin state requires long $`X^{-}`$ lifetime and hence is only likely for $`X_{td}^{-}`$, all $`X^{-}`$’s will stay as far as possible from other charges, and the distance to the nearest one corresponds to the $`L=L_{X^{-}}+S-\mu `$ pair state. This result depends on our assumption of the small QW width $`w`$. He et al. showed that the Laughlin $`e`$–$`e`$ correlations are destroyed in a thick GaAs QW when $`w/\lambda >6`$. At $`B=40`$ T, this corresponds to $`w>24`$ nm, but the critical width for the $`e`$–$`X^{-}`$ correlations will be even smaller because of the above mentioned $`X^{-}`$ charge polarization.
The connection between $`\nu `$ and the minimum allowed $`e`$–$`X^{-}`$ separation (or $`L`$) allows calculation of the effect of the $`e`$–$`X^{-}`$ interaction on the $`X^{-}`$ recombination as a function of $`\nu `$. In Fig. 5 we plot the PL oscillator strength $`\tau ^{-1}`$ and energy $`E`$ (measured from the exciton energy $`E_X`$) for some of the $`3e`$–$`1h`$ states marked in Fig. 4(a,b,c).
We assume that the Zeeman energy will polarize all electron spins prior to recombination, except for the two in the $`X_s^{-}`$, and concentrate on the following three initial configurations: $`e`$–$`X_s^{-}`$ with $`J_{ze}=J_e=\frac{1}{2}`$ and $`e`$–$`X_{tb}^{-}`$ and $`e`$–$`X_{td}^{-}`$ with $`J_{ze}=J_e=\frac{3}{2}`$. For each of the three configurations, $`\tau ^{-1}`$ and $`E`$ are plotted as a function of $`L`$ (i.e. of $`\nu `$).
The quantities conserved in an emission process are the total angular momentum $`L`$ and its projection $`M`$ (on a plane, $`ℳ_{\mathrm{tot}}`$ and $`ℳ_{\mathrm{cm}}`$), and the total electron and hole spins and their projections change by $`\pm \frac{1}{2}`$. For $`X_{tb}^{-}`$ and $`X_{td}^{-}`$, only an $`e`$–$`h`$ pair can be annihilated, and an emitted photon has a definite circular polarization $`\sigma _+`$. Two indistinguishable electrons left in the final state have the total spin $`J_e=1`$, so their pair angular momentum $`L`$ must be odd ($`2l_e`$ minus an odd integer). For $`X_s^{-}`$, both $`\sigma _+`$ and $`\sigma _{-}`$ PL are possible, with the energy of the latter transition shifted by the total electron and hole Zeeman energy. For $`\sigma _+`$, the two electrons in the final state can have either $`J_e=0`$ and $`L`$ even, or $`J_e=1`$ and $`L`$ odd; while for $`\sigma _{-}`$ they can only have $`J_e=1`$ and $`L`$ must be odd. Note that $`g_e^{*}`$ changes sign at $`B\approx 42`$ T, and the polarizations in Fig. 5(e) are reversed. As expected, for $`L\to 0`$ the oscillator strengths converge to those of appropriate single $`X^{-}`$’s in Fig. 3(b) (multiplied by two if only one parity of $`L`$ is allowed). On the right-hand side of Fig. 5, the $`\sigma _+`$ PL energies are shown. For only partial polarization of hole spins, an unmarked $`\sigma _{-}`$ peak of an $`X_s^{-}`$ will appear at the energy higher by the $`X^{-}`$ (not electron) Zeeman splitting.
There is no significant effect of the $`e`$–$`X^{-}`$ interactions on the $`X^{-}`$ oscillator strength and energy at small $`L`$. Moreover, the decrease of the PL energy of an $`X_s^{-}`$ at larger $`L`$ is due to its induced charge polarization (dipole moment). This effect is greatly reduced for an $`X^{-}`$ surrounded by an isotropic electron gas, although slight residual variation of the PL energy at $`\nu \approx \frac{1}{3}`$ might broaden the $`X_s^{-}`$ peak. The insensitivity of the $`X^{-}`$ recombination to the $`e`$–$`X^{-}`$ interactions at small $`L`$ justifies a simple picture of PL in dilute $`e`$–$`h`$ plasmas. In this picture, recombination occurs from a single isolated bound complex and hence is virtually insensitive to $`\nu `$. Quite surprisingly, the LH correlations prevent increase of the $`X_{td}^{-}`$ oscillator strength through interaction with other charges. $`\tau _{td}^{-1}`$ decreases exponentially (see insets) with decreasing $`\nu `$, and $`\tau _{td}`$ remains ten times longer than $`\tau _s`$ even at $`\nu =\frac{1}{3}`$. This explains the absence of an $`X_{td}^{-}`$ peak even in the PL spectra showing strong recombination of a higher-energy triplet state $`X_{tb}^{-}`$ (however, see also Ref. ).
## V Conclusion
We have studied photoluminescence (PL) from a dilute 2D electron gas in narrow and symmetric quantum wells (QW’s) as a function of the magnetic field $`B`$ and the QW width. The puzzling qualitative discrepancy between experiments and earlier theories is resolved by identifying the radiative ($`X_{tb}^{-}`$) and non-radiative ($`X_{td}^{-}`$) bound states of a triplet charged exciton. Even in high magnetic fields, when it has lower energy than the radiative states, the $`X_{td}^{-}`$ remains invisible in PL experiments due to its negligible oscillator strength. The short range of the $`e`$–$`X^{-}`$ interaction pseudopotentials results in the Laughlin–Halperin correlations in a dilute $`e`$–$`h`$ plasma, and effectively isolates the bound $`X^{-}`$ states from the remaining electrons. This explains the observed insensitivity of the PL spectra to the filling factor and the persistence of the small $`X_{td}^{-}`$ oscillator strength in an interacting system. The idea of the Laughlin incompressible-fluid states of long-lived $`X_{td}^{-}`$’s is supported. The “dark” $`X_{td}^{-}`$ state could be identified either in time-resolved PL or in transport experiments.
## Acknowledgment
The authors wish to thank M. Potemski (HMFL Grenoble) and C. H. Perry and F. Munteanu (LANL Los Alamos) for helpful discussions. AW and JJQ acknowledge partial support by the Materials Research Program of Basic Energy Sciences, US Department of Energy. |
## 1 Introduction
In many problems of statistical physics one encounters the so–called crossover phenomena, when a physical quantity qualitatively changes its behaviour in different domains of its variable. To be more precise, we may specify a crossover as follows. Let a function $`f(x)`$ represent a physical quantity of interest, with a variable running through the interval $`x_1\le x\le x_2`$. And let the behaviour of this function, describing some physical process, be essentially different near the boundary points $`x_1`$ and $`x_2`$. Assume that the function varies continuously from $`f(x_1)`$ to $`f(x_2)`$, as $`x`$ changes from $`x_1`$ to $`x_2`$. Then we may say that the function in the interval $`[x_1,x_2]`$ undergoes a crossover between $`f(x_1)`$ and $`f(x_2)`$.
Crossover behaviour of different physical quantities is so ubiquitous in nature that one could list plenty of examples. For instance, a number of physical quantities essentially change their behaviour when passing from the weak–coupling to the strong–coupling limit . In the theoretical description of crossover there exists a problem which is common to practically all physical applications. Real physical systems are usually so complicated that the equations describing them can almost never be solved exactly. However, it is often possible to find asymptotic expansions of solutions in the vicinity of boundary points. The problem that naturally arises is how to construct an accurate approximation for the sought function, valid on the whole domain of its variable, knowing only its asymptotic behaviour near the boundaries. This problem is aggravated by the fact that only a few terms of the asymptotic expansions are usually available. In such a case the problem looks unsolvable.
The best–known method of treating the interpolation problem is by using the so–called two–point Padé approximants \[2-4\] or, equivalently, the Thron continued fractions . In many cases the two–point Padé approximation yields quite reasonable interpolation formulas. However, the usage of this method has not become widespread because of the following shortcomings of the Padé approximants:
(i) When constructing these approximants, one often obtains spurious poles yielding unphysical singularities \[2-4\]. A sequence of Padé approximants may even have infinitely many poles .
(ii) A number of examples are known when Padé approximants are not able to sum perturbation series even for small values of an expansion parameter .
(iii) In the majority of cases, except some trivial examples, to reach a reasonable accuracy one needs to have tens of terms in perturbative expansions , while, as emphasized above, in physically interesting problems one often has only a few terms.
(iv) Defining the two–point Padé approximants one always confronts an ambiguity in distributing the coefficients, that is, in deciding which of the coefficients must reproduce the left–side expansion and which the right–side series \[1-6\]. Such an ambiguity increases with increasing approximant order, making it difficult to compose two–point Padé tables. And in the case of a few terms, this ambiguity makes the two–point Padé approximants practically inapplicable. A nice analysis of the latter difficulty was done in Ref. , where it was shown that, for the same problem, one may construct different two–point Padé approximants all having correct left–side and right–side limits, but differing from each other in the intermediate region by a factor of $`40`$, which gives $`1000\%`$ of uncertainty. This clearly demonstrates that in the case of short series the two–point Padé approximation not only cannot provide a reasonable quantitative approach but does not even permit one to get a qualitative description. The latter concerns the general situation, although there can occur some trivial cases in which two–point Padé approximants make sense even when built from a few perturbative terms. However, their application to such few–term cases is, in general, absolutely unreliable.
(v) The two–point Padé approximants can be used for interpolating between two different expansions not always, but only when these two expansions have compatible variables \[2-4,9\]. When these expansions have incompatible variables, the two–point Padé approximants cannot be defined in principle.
(vi) When interpolating between two points, one of which is finite and another is at infinity, one is able to describe at infinity only rational powers \[2-4\]. The impossibility to deal with nonrational powers limits the applicability of the two–point Padé approximation.
(vii) The problem of approximating the functions increasing at infinity is especially difficult. A two–point Padé approximant can treat only a power–law increase \[1-5\] and is not able to describe other types of behaviour. But in physical problems the functions of interest often exhibit at infinity quite different behaviour, for example, growing exponentially or following other more complicated ways. In such cases the two–point Padé approximants are useless.
The difficulties listed above are well known and discussed in literature. We have cited here only some important references \[1-10\]. More details on mathematical problems in Padé approximation and its applications can be found in several volumes of papers, e.g. in Ref. .
As follows from the above discussion, the two–point Padé approximation in many cases is not applicable. It is evident that there is a necessity of developing a more general approach which could overcome the discussed difficulties and would be applicable to a larger variety of problems, including those for which the two–point Padé approximants cannot be used. It is important that such an approach would provide relatively simple analytical formulas for the physical quantities of interest. The advantage of having analytical expressions, as compared to just numbers that could be obtained from a numerical procedure, is in the convenience of analysing such expressions with respect to various parameters entering into them. Therefore, we keep in mind an analytical, rather than numerical, method that would combine relatively simple representations for physical quantities with their good accuracy.
It is worth emphasizing that to derive a new physical formula, valid in the whole range of physical variables, is not merely a mathematical challenge but this provides new physics, since in the majority of cases realistic physical problems correspond neither to weak coupling regime nor to strong coupling limit, but to the intermediate range of parameters. Therefore, it is of great importance for physics to possess a general mathematical tool permitting to derive explicit crossover formulas for arbitrary physical phenomena.
In the present paper we suggest an approach for treating this problem. Our approach is based on the self–similar approximation theory \[12-22\] permitting an accurate reconstruction of functions from a few terms of perturbative expansions. The effectiveness of the self–similar approximation theory is due to the usage of powerful techniques of dynamical theory and optimal control theory. Fast convergence of renormalized sequences is achieved by means of control functions. In the algebraic self-similar renormalization \[20-22\], we required the algebraic invariance of renormalization-group flow. Then, control functions are introduced as powers of a multiplicative algebraic transformation. These control functions are defined by the stability and fixed-point conditions for a dynamical system called the approximation cascade. In general, the evolution equations for a dynamical system can be completed by additional constraints whose existence imposes restrictions on the definition of control functions.
Crossover problem presents an example when additional constraints appear absolutely naturally. Really, assume that we have a $`k`$-order expansion $`p_k(x)`$ approximating the sought function $`f(x)`$ in the asymptotic vicinity of the left boundary $`x=x_1`$. And suppose that we are given an asymptotic behavior of this function near the right boundary $`x=x_2`$. For a moment, take for simplicity that we are given the value $`f(x_2)`$ at the right boundary point $`x=x_2`$. When constructing a self-similar approximation $`f_k^{}(x)`$ by renormalizing the left boundary expansion $`p_k(x)`$, we have as an additional constraint the right boundary condition $`f_k^{}(x_2)=f(x_2)`$ .
We show below that the algebraic self-similar renormalization provides a very convenient tool for treating the crossover problem. This approach permits us to find, having just a minimal information about the asymptotic behavior of a function near boundary points, a quite accurate approximation for the whole region of the variable. In the majority of cases the maximal error of a self–similar approximation is a few percent and in many cases not more than one percent. In addition to being quite accurate, this approximation is usually given by very simple expressions that are easy to analyze. We illustrate the approach by several examples from different branches of statistical physics. The variety of considered cases emphasizes the generality of the approach and proves that it is a very effective tool for treating arbitrary crossover phenomena.
Recently, we have applied such an interpolation approach to several quantum–mechanical problems . However, what makes the latter principally different from the problems of statistical physics is that in quantum mechanics one usually possesses quite a number of terms of perturbative expansions, while in statistical physics this luxury is rather rare, so that in the majority of cases one is able to derive just a few perturbative terms. In the present paper we aim at showing that our interpolation method does work for those complicated problems of statistical physics where only a few terms of asymptotic expansions are available and other methods are not applicable. Nevertheless, the self–similar interpolation makes it possible to treat even such complicated crossover problems, obtaining simple and accurate formulae.
## 2 General approach
In this section, we formulate the general scheme of the approach not specifying the physical nature of a considered function. Let us be interested in a function $`f(x)`$ of a real variable $`x`$. Assume that in the vicinity of some point $`x=x_0`$ there exist asymptotic expansions $`p_k(x,x_0)`$, with $`k=0,1,2,\mathrm{}`$, corresponding to this function,
$$f(x)p_k(x,x_0),xx_0.$$
(1)
Following the algebraic self–similar renormalization procedure \[20-22\], we define the algebraic transform
$$P_k(x,s,x_0)=x^sp_k(x,x_0),$$
(2)
where $`s`$ is yet unknown, and later will play the role of a control function. The transform inverse to that in Eq.(2) is
$$p_k(x,x_0)=x^sP_k(x,s,x_0).$$
(3)
Then we have to construct an approximation cascade with a trajectory bijective to the approximation sequence $`\left\{P_k\right\}`$. This procedure with all necessary mathematical foundations and details has been described in Refs. \[13-19\]. So, we sketch here only the main steps needed for grasping the idea of the method and we concentrate on those formulas that permit us to apply the method for crossover phenomena.
Define an expansion function $`x=x(\phi ,s,x_0)`$ by the equation
$$P_0(x,s,x_0)=\phi ,x=x(\phi ,s,x_0),$$
(4)
where $`P_0`$ is the first available term from the sequence $`\{P_k\}`$. Introduce a function
$$y_k(\phi ,s,x_0)=P_k(x(\phi ,s,x_0),s,x_0).$$
(5)
The transformation inverse to Eq. (5) reads
$$P_k(x,s,x_0)=y_k(P_0(x,s,x_0),s,x_0).$$
(6)
The family of endomorphisms, $`\{y_k\}`$, forms a cascade with the velocity field
$$v_k(\phi ,s,x_0)=y_{k+1}(\phi ,s,x_0)y_k(\phi ,s,x_0).$$
(7)
The trajectory of the cascade $`\{y_k\}`$ is, by definitions (5) and (6), bijective to the approximation sequence $`\{P_k\}`$. Embedding the approximation cascade into an approximation flow \[16-19\] and integrating the corresponding evolution equation, we come to the evolution integral
$$_{P_k}^{P_{k+1}^{}}\frac{d\phi }{v_k(\phi ,s,x_0)}=\tau ,$$
(8)
in which $`P_k=P_k(x,s,x_0)`$ is any given term from the approximation sequence $`\{P_k\};P_{k+1}^{}=P_{k+1}^{}(x,s,\tau ,x_0)`$ is a self–similar approximation representing a fixed point of the approximation cascade; and $`\tau `$ is an effective minimal time necessary for reaching the fixed point.
Recall that we started with a sequence $`\{p_k\}`$ of asymptotic expansions for the considered function $`f(x)`$. Then we passed to the sequence $`\{P_k\}`$ by means of the algebraic transformation (2). Now we have to return back employing the inverse transformation (3). To this end, we set
$$F_k^{}(x,s,\tau ,x_0)=x^sP_k^{}(x,s,\tau ,x_0).$$
(9)
The quantities $`s`$ and $`\tau `$ are the control functions guarantying the stability of the method, that is, the convergence of the procedure. These functions are to be defined by the stability conditions, such as the minimum of multiplier moduli, together with additional constraints, like, e.g., boundary conditions. Let us find from such conditions $`s=s_k`$ and $`\tau =\tau _k`$. Substituting these into Eq.(9), we obtain the self–similar approximation
$$f_k^{}(x,x_0)=F_k^{}(x,s_k,\tau _k,x_0)$$
(10)
for the function $`f(x)`$. We retain here the notation for the point $`x_0`$ in order to show that the approximation (10) has been obtained by renormalizing $`p_k(x,x_0)`$ which, according to Eq. (1), is an asymptotic expansion of $`f(x)`$ in the vicinity of the point $`x=x_0`$.
Now assume that the variable $`x`$ changes in the interval $`x_1xx_2`$ and that the asymptotic behavior of a function $`f(x)`$ is known near the boundaries of this interval. The latter means that in Eq. (1) we have to put, instead of $`x_0,`$ either $`x_1`$ or $`x_2`$. Let us take, for concreteness, the boundary points $`x_1=0`$ and $`x_2\mathrm{}`$. Then we have two types of expansions, $`p_k(x,0)`$ and $`p_k(x,\mathrm{})`$. Following the procedure described above, we can construct, in the place of Eq. (9), two quantities, $`F_k^{}(x,s,\tau ,0)`$ and $`F_k^{}(x,s,\tau ,\mathrm{})`$.
As is discussed above, the control functions $`s`$ and $`\tau `$ are to be defined from stability conditions plus additional constraints. The natural such constraints for the crossover problem can be formulated as follows. Suppose we have constructed the renormalized expression $`F_k^{}(x,s,\tau ,0)`$ starting from the left asymptotic expansion $`p_k(x,0)`$. By this construction, the function $`F_k^{}`$ has correct asymptotic behavior near the left boundary. But in order to correctly represent the sought function in the whole interval of $`x[0,\mathrm{})`$, the renormalized expression $`F_k^{}`$ must have the correct asymptotic behavior when approaching the right limit. This implies the validity of the condition
$$\underset{x\mathrm{}}{lim}\left|F_k^{}(x,s,\tau ,0)p_i(x,\mathrm{})\right|=0,$$
(11)
imposing constraints on $`s=s_k`$ and $`\tau =\tau _k`$. We shall call Eq. (11) providing the correct crossover behavior from the left to the right boundary the left crossover condition. The quantity $`s_k`$ can be called the left crossover index, and $`\tau _k`$, the left crossover time. For the self–similar approximation (10) we get, in this way,
$$f_k^{}(x,0)=F_k^{}(x,s_k,\tau _k,0),$$
(12)
which may be named the left self–similar approximation, or the left crossover approximation.
The analogous reasoning works, as is clear, when we are considering the crossover from the right to left. Then we obtain the right crossover condition
$$\underset{x0}{lim}\left|F_k^{}(x,s,\tau ,\mathrm{})p_j(x,0)\right|=0,$$
(13)
imposing constraints on $`s=s_k`$ and $`\tau =\tau _k`$, thus defining the right crossover index $`s_k`$ and the right crossover time $`\tau _k`$. As a result, we come to the right self–similar approximation, or the right crossover approximation
$$f_k^{}(x,\mathrm{})=F_k^{}(x,s_k,\tau _k,\mathrm{}).$$
(14)
In general, from the left and the right approximations, (12) and, respectively, (14), we can compose the average self–similar approximation, or average crossover approximation
$$f_k^{}(x)=\frac{1}{2}\left[f_k^{}(x,0)+f_k^{}(x,\mathrm{})\right].$$
(15)
The suggested general approach to reconstructing crossover functions can be employed for any crossover phenomena. In particular applications, it can happen that we possess a reliable asymptotic expansion only from one side of the crossover domain, and from another side just one term is available. In this case, as is clear, we are not able to construct both left and right self-similar approximations, but only one of them. Nevertheless, such one–side approximations are usually quite accurate, as we show by examples in the following sections. The possibility of constructing accurate approximations, when we have a perturbative series only from one side of the crossover region and a sole asymptotic term from another side, is very important since this situation constantly occurs in realistic physical problems. We shall demonstrate in what follows how it is possible to improve the accuracy of such one–side approximations by combining the terms of a given one–side series and defining the crossover indices so that to satisfy the asymptotic limit from another side, in accordance with the crossover conditions (11) or (13).
In order to emphasize that the suggested approach does work even for the cases with a very scarce information about the sought function, let us consider a simple example. Suppose that we know the asymptotic behavior of a function near the left boundary, where $`x0`$, only in the linear approximation
$$p_1(x,0)a_0+a_1x,a_0,a_10.$$
(16)
And assume that only one asymptotic term is known from the right side,
$$p_1(x,\mathrm{})Ax^n,A,n0,$$
(17)
as $`x\mathrm{}`$. In such an extreme case of minimal information, it looks like there is no a regular way of recovering the function for the whole axis $`0x<\mathrm{}`$. However, our approach, based on the idea of self–similarity, permits us to recover the sought function.
Following the procedure described above, in the place of Eq.(9), starting from expansion (16), we obtain
$$F_1^{}(x,s,\tau ,0)=a_0\left(1\frac{a_1\tau }{a_0s}x\right)^s.$$
(18)
With Eqs.(17) and (18), the left crossover condition (11) reads
$$\underset{x\mathrm{}}{lim}\left|a_0\left(1\frac{a_1\tau }{a_0s}x\right)^sAx^n\right|=0,$$
From here the left crossover index $`s_1`$ and the left crossover time $`\tau _1`$ are uniquely defined as follows:
$$s_1=n,\tau _1=n\frac{a_0}{a_1}\left(\frac{A}{a_0}\right)^{1/n}.$$
(19)
Substituting these values into Eq. (18), as is prescribed by Eq. (12), we recover the left crossover approximation
$$f_1^{}(x,0)=a_0\left[1+\left(\frac{A}{a_0}\right)^{1/n}x\right]^n.$$
(20)
At large $`x\mathrm{}`$, expression (20) reduces to the limit (17). When $`x0`$, then we have the linear behavior
$$f_1^{}(x,0)a_0+a_1^{}x,$$
where $`a_1^{}=na_0\left(A/a_0\right)^{1/n}`$ is the renormalized coefficient. Such a renormalization is typical of renormalization group techniques, as is discussed in Refs. \[20-22\].
Thus, even having so scanty information about the asymptotic properties of a function, as in the above example, our approach allows us to reconstruct, in a systematic way, the function for the whole domain of its variable. This reconstruction becomes possible owing to the idea of self–similarity which our approach is based on and due to the convenient introduction of control functions through the algebraic transformation. The idea of self–similarity, complimented by the property of algebraic invariance, eliminates the umbiguity typical of divergent series in the standard perturbative approaches. In the sections that follow, it will be shown that the accuracy of so constructed self-similar approximations is rather good.
Note that achieving good accuracy with a limited number of terms of an asymptotic expansion should not be treated as surprising. Asymptotic series are known to provide reasonable accuracy when up to some optimal number of terms are taken , the subsequent terms only spoil the picture being, in this sense, excessive. Whether there are such excessive terms or not is decided, in our approach, by stability and crossover conditions. As soon as these are satisfied, there are no excessive terms. And if adding more terms does not allow us to satisfy these conditions, the added terms are to be considered excessive. Fortunately, the real life and realistic physical problems are so complicated that we practically never have excessive terms, but vice versa, have to deal with very short expansions containing only a few terms.
## 3 Zero–dimensional model
For illustrative purpose, we start with a simple model example. Consider the partition function of a zero-dimensional anharmonic model, represented by the integral
$$J(g)=\frac{1}{\sqrt{\pi }}_{\mathrm{}}^{\mathrm{}}\mathrm{exp}\left(x^2gx^4\right)𝑑x,$$
(21)
with the integrand possessing a single ”vacuum” state, located at the point $`x=0`$. The weak–coupling expansion of this integral in powers of the coupling parameter $`g`$, around the vacuum state, leads to divergent series,
$$J(g)a+bg+cg^2+\mathrm{},(g0),$$
(22)
where
$$a=1,b=\frac{3}{4},c=\frac{105}{32}.$$
The so-called strong-coupling expansion, in inverse powers of the coupling constant, can be written down as well:
$$J(g)Ag^{1/4}+Bg^{3/4}+Cg^{5/4}+\mathrm{},(g\mathrm{}),$$
(23)
with
$$A=\frac{1.813}{\sqrt{\pi }},B=\frac{0.612}{\sqrt{\pi }},C=\frac{0.227}{\sqrt{\pi }}.$$
Following the approach of Section 2, one can derive the right crossover approximation,
$$J^{}(g,\mathrm{})=aA\left(A^2+a^2g^{1/2}\right)^{1/2},$$
(24)
with the right crossover index $`s=1/2`$ and crossover time $`\tau =A^3/(2a^2B)=1.55`$. At $`g=1`$, the percentage error of formula (24), is equal to $`7.38\%`$, while the maximal error is reached at $`g=0.35`$ and equals $`7.96\%`$.
The left crossover approximation is given as follows:
$$J^{}(g,0)=aA\left(A^4+a^4g\right)^{1/4},$$
(25)
with the left crossover index $`s=1/4`$, and crossover time $`\tau =a^5/(4A^4b)=0.304`$. At $`g=1`$, the percentage error of Eq. (25) is $`10.13\%`$, while the maximal error at $`g=2.5`$, is equal to $`10.53\%`$. We conclude, that the crossover approximations (24) and (25) may be viewed, correspondingly, as the lower and upper bounds for the integral (21). The average, defined by Eq. (15),
$$J^{}(g)=\frac{J^{}(g,0)+J^{}(g,\mathrm{})}{2},$$
possesses the correct leading asymptotes and approximates the exact result at $`g=1`$ with the percentage error of $`1.37\%`$. And the maximal error, at $`g=3`$, is $`2.21\%`$.
## 4 Lattice gauge model
The vacuum energy density $`f_0`$ of the (3+1)–dimensional SU(2) lattice gauge model in its weak–coupling, asymptotically free regime, may be presented in the form of an expansion in powers of the parameter $`x=4/g^4`$, where $`g`$ stands for the coupling :
$$f_0Ax+B\sqrt{x}+\mathrm{},\left(x\mathrm{}\right),$$
(26)
where
$$A=6,B=7.1628.$$
In its strong–coupling limit, $`f_0`$ can be presented as follows :
$$f_0ax^2+bx^4+\mathrm{},\left(x0\right),$$
(27)
with
$$a=1,b=0.03525.$$
Because of the interfering roughening transition, the quality of the high–order terms in the strong–coupling expansion is doubtful , so we use only its leading terms. The left crossover approximation can be readily written down as
$$f^{}(x,0)=ax^2\left[1+\left(\frac{a}{A}\right)^2x^2\right]^{1/2},$$
(28)
where we have used
$$s=\frac{1}{2},\tau =\frac{a^3}{2bA^2}.$$
The numbers generated by formula (28), practically coincide in the region $`x[0.1,1.1]`$ with estimates obtained in from the strong–coupling approximants. The right crossover approximation can be obtained as well, but its accuracy is worse than that of Eq. (28).
## 5 One–dimensional Bose system
The ground–state energy of the one–dimensional Bose system with the $`\delta `$–functional repulsive interaction potential is known in a numerical form from the Lieb–Liniger exact solution . It is desirable, nevertheless, to have a compact analytical expression for the ground–state energy $`e(g)`$ as a function of the $`\delta `$–function strength $`g`$, valid for arbitrary $`g`$. In the weak–coupling limit an exact analytical result is known:
$$e(g)g,\left(g0\right),$$
(29)
obtained in Ref. , while in the strong–coupling limit another exact result, obtained by Girardeau , is available:
$$e(g)\frac{\pi ^2}{3},\left(g\mathrm{}\right).$$
(30)
The higher–order terms in these expansions were derived by approximate methods, the next term in the weak–coupling limit being $`bg^{3/2}`$, and in the strong–coupling limit $`Bg^1`$. We shall not use the approximate values for the coefficients, $`b`$ and $`B`$ (see e.g. and references therein), writing instead trial expansions and determining the coefficients by matching the two asymptotic forms for the ground state energy. Following the standard approach of Section 2, we obtain the right crossover approximation:
$$e^{}(g,\mathrm{})=\frac{\pi ^2}{3}g\left(g+\frac{\pi ^2}{3}\right)^1,$$
(31)
with the right crossover index $`s=1`$ and $`B=(\pi ^2/3)^2`$. Although, Eq. (31) can be further simplified, we leave it in present form in order to stress the origin of its different parts. Simple expression (31) works with surprising accuracy of about $`12\%`$, up to $`g10`$, till there are numerical data available for comparison . The left crossover–type expression can be written as well, following the standard procedure, but its accuracy is inferior to that of Eq. (31).
## 6 One–dimensional ferromagnet
Low–dimensional magnetic systems give a plenty of examples of the crossover phenomena, when only the asymptotic behavior with respect to different parameters, such as spin, temperature etc, is known and the intermediate region, in most of the cases, could be reached only numerically. The crossover self–similar approximations offer simple analytical expressions for the intermediate region. We put below, for simplicity, the exchange integral $`J=1`$.
### 6.1 Zero–field thermodynamics
The free energy $`F`$ and magnetic susceptibility $`\chi `$ of the one–dimensional Heisenberg ferromagnet of spin $`S`$, within the framework of the spin-wave approximation, valid at temperatures $`T0`$, has the form of an expansion in powers of $`T`$ :
$$Fa(S)T^{3/2}+b(S)T^2+\mathrm{},(T0),$$
(32)
in which
$$a(S)=\frac{\zeta (3/2)}{(2\pi )^{1/2}}\left(\frac{1}{2S}\right)^{1/2},b(S)=\frac{1}{4S^2},$$
and
$$\chi A(S)T^2+B(S)T^{3/2},(T0),$$
(33)
where
$$A(S)=\frac{8}{3}S^4,B(S)=A(S)\frac{3\zeta (1/2)}{\sqrt{2\pi }S}.$$
As $`T\mathrm{}`$, a different asymptotic behavior happens :
$$FT\mathrm{ln}(1+2S)(T\mathrm{})$$
(34)
and
$$\chi \frac{4S(S+1)}{3T}(T\mathrm{}).$$
(35)
Applying the standard approach of Section 2, we obtain for the free energy the following left crossover approximation, corresponding to the left crossover index $`s=1`$,
$$F^{}(T,S)=a(S)\frac{T^{3/2}}{1[b(S)/a(S)]\tau T^{1/2}},\tau =\frac{a^2(S)}{b(S)\mathrm{ln}(1+2S)},$$
(36)
and the expression for specific heat $`C^{}=Td^2F^{}(T,S)/dT^2`$ as
$$C^{}(T,S)=\frac{1}{4}T^{1/2}a^3(S)\frac{3a(S)+b(S)\tau T^{1/2}}{[a(S)+b(S)\tau T^{1/2}]^3}.$$
(37)
The position, height and spin–dependence of the maximum occurring in the expression for $`C^{}(T,S)`$ are in qualitative agreement with numerical results for finite chains .
The left crossover approximation for the renormalized susceptibility is
$$\chi ^{}=\frac{A(S)}{T^2}\left[1+\frac{B(S)}{2A(S)}\tau T^{1/2}\right]^2,\tau =\frac{A(S)}{B(S)}\left[\frac{16S(1+S)}{3A(S)}\right]^{1/2},$$
(38)
with the left crossover index $`s=2`$. The expressions (36) and (38) are very accurate for $`S=1/2`$, where they practically coincide with the results of a numerical solution of the thermodynamic Bethe–ansatz equations .
### 6.2 Spin waves at finite temperatures
Variational theory, as applied at low temperatures , gives the temperature–dependent expression for the spin–wave energy $`\omega _k`$ for $`S=1/2`$ in the form
$$\omega _k=2Z(T)\left|\mathrm{sin}(k)\right|,Z(T)\frac{1}{2}\pi \left[1\frac{2}{3}\left(\frac{T}{2}\right)^2\right](T0),$$
(39)
being at $`T=0`$ completely in agreement with the exact results . In order to find the behavior of $`Z(T)`$ at arbitrary $`T`$, we continue it from the region of $`T0`$ self–similarly, along the most stable trajectory, with the crossover index $`s`$, determined by the condition of the minimum of the multiplier \[20-22\]
$$m(T,s)=1\frac{1}{6}T^2\frac{1+s}{s},$$
from where
$$s(T)=\frac{1}{6}T^2\left(1\frac{T^2}{6}\right)^1,T<\sqrt{6},$$
$$s\mathrm{},T\sqrt{6}.$$
This gives the left crossover approximation
$$Z^{}(T)=\frac{1}{2}\pi \left(\frac{s(T)}{s(T)+T^2/6}\right)^{s(T)},T<\sqrt{6},$$
(40)
$$Z^{}(T)=\frac{1}{2}\pi \mathrm{exp}(T^2/6),T\sqrt{6}.$$
(41)
Formulae (40) and (41) suggest that the spin waves should survive at least up to $`T\sqrt{6}`$, and become exponentially ”soft” above this temperature. Note that in this particular case the left self–similar approximation plausibly reconstructed the function for arbitrary temperatures, even not knowing beforehand the asymptotic behavior at $`T\mathrm{}`$.
### 6.3 Field–dependent part of free energy
It is believed that the magnetic field–dependent part of the free energy of the one–dimensional Heisenberg ferromagnet is independent on spin and scales as $`\rho =h/T^2`$ ($`h`$ denotes the magnetic field), with the scaling function independent on the value of spin . For the classical ferromagnet, both low and high field behavior of the field dependent part of the free energy $`\delta F(\rho )`$ are known \[32-35\] in the simple form:
$$T^2\delta F(\rho )a\rho ^2+b\rho ^4,a=\frac{1}{3},b=\frac{11}{135}(\rho 1),$$
(42)
$$T^2\delta F(\rho )A\rho +B\rho ^{1/2},A=1,B=1(\rho 1).$$
(43)
The left crossover approximation is controlled by the crossover index $`s=1/2`$ and crossover time $`\tau =a^3/(2A^2B)=0.227`$, yielding:
$$T^2\delta F^{}(\rho ,0)=a\rho ^2\left[1+\left(\frac{a}{A}\right)^2\rho ^2\right]^{1/2},$$
(44)
while the right crossover approximation is given by
$$T^2\delta F^{}(\rho ,\mathrm{})=A\rho ^2\left[\rho ^{1/2}+\left(\frac{A}{a}\right)^{1/2}\right]^2,$$
(45)
with
$$s=2,\tau =2\frac{A}{B}\left(\frac{A}{a}\right)^{1/2}=3.464.$$
Both expressions (44) and (45) are in good agreement with the known results \[32-35\].
## 7 Flexible polymer coil
The calculation of the so–called expansion function $`\alpha ^2(z)`$ of a flexible polymer coil is of long standing interest in polymer science \[36-38\]. This quantity defines the ratio of the mean square end–to–end distance $`<R^2>`$ of the chain to its unperturbed value $`<R^2>_0Nl^2`$, where $`N`$ is the number of segments with the length $`l`$ each, so that $`Nl`$ is the contour length of the chain,
$$\alpha ^2(z)\frac{<R^2>}{<R^2>_0},$$
(46)
as a function of a dimensionless interaction parameter $`z`$. The latter is
$$z\frac{BN}{\pi l^2}(D=2)$$
(47)
for the two–dimensional case and
$$z=\left(\frac{3}{2\pi }\right)^{3/2}\frac{B\sqrt{N}}{l^3}(D=3)$$
(48)
for the three–dimensional coil, where $`B`$ is the effective binary cluster integral for a pair of segments.
When the excluded volume interaction is very weak, a perturbation theory leads to an asymptotic series
$$\alpha ^2(z)1+\underset{n=1}{}a_nz^n(z0),$$
(49)
in which the coefficients for the two–dimensional case are
$$a_1=\frac{1}{2},a_2=0.121545,a_3=0.026631,a_4=0.132236(D=2),$$
and for the three–dimensional coil they take the values
$$a_1=\frac{4}{3},a_2=2.075385,a_3=6.296880,a_4=25.057251,$$
$$a_5=116.134785,a_6=594.71663(D=3).$$
The asymptotic result for the strong coupling limit is
$$\alpha ^2(z)A_1z^\beta +A_2z^\gamma (z\mathrm{}).$$
(50)
Using our method of self–similar interpolation, we obtain from (49) and (50)
$$\alpha _{}^2(z)=\left(1+A_1^{1/\beta }z\right)^\beta $$
(51)
in the first approximation. The second approximation gives
$$\alpha _{}^2(z)=\left[\left(1+C_1z\right)^{2\beta +\gamma }+C_2z^2\right]^{\beta /2},$$
(52)
where
$$C_1=\left(\frac{2A_2}{\beta A_1^{12/\beta }}\right)^{1/(2\beta +\gamma )},C_2=A_1^{2/\beta }.$$
These formulae can serve for both the two– as well as for three–dimensional coils. We shall concentrate on the latter case for which accurate numerical data for $`\alpha ^2(z)`$ are available in the whole range of $`z[0,\mathrm{})`$. Then in the strong coupling limit (50), one has
$$A_1=1.5310,A_2=0.1843,\beta =0.3544,\gamma =0.5756(D=3).$$
(53)
The coefficients in (52) become
$$C_1=6.5866,C_2=11.0631,2\beta +\gamma =1.07.$$
In this way, from (52) we obtain
$$\alpha _{}^2(z)=\left[\left(1+6.5866z\right)^{1.07}+11.0631z^2\right]^{0.1772}.$$
(54)
The self–similar approximation (54) is accurate, within $`0.4\%`$ of error, in the full range $`z0`$, as compared to numerical calculations . This formula (54) practically coincides with the phenomenological extrapolation expression
$$\alpha _{MN}^2(z)=\left(1+7.524z+11.06z^2\right)^{0.1772},$$
(55)
obtained by Muthukumar and Nickel by means of a fit to numerical data.
In conclusion, we have developed the method of self–similar interpolation for deriving explicit interpolation formulae for difficult crossover problems of statistical mechanics. This method, as is illustrated by several examples, is general, simple, and accurate.
Acknowledgement
We appreciate useful discussions with E.P. Yukalova. |
no-problem/0001/cond-mat0001138.html | ar5iv | text | # Multifractality of Entangled Random Walks and Nonuniform Hyperbolic Spaces
## I Introduction
The phenomenon of multifractality consists in a scale dependence of critical exponents. It has been widely discussed in the literature for a wide range of issues, such as statistics of strange sets , diffusion limited aggregation , wavelet transforms , conformal invariance or statistical properties of critical wave functions of massless Dirac fermions in a random magnetic field .
The aim of our work is not only to describe a new model possessing multiscaling dependence, but also to show that the phenomenon of multifractality is related to local nonuniformity of the exponentially growing (”hyperbolic”) underlying ”target” phase space, through an example of entangled random walks distribution in homotopy classes. Indeed, to our knowledge, almost all examples of multifractal behavior for physical or more abstract systems share one common feature—all target phase spaces have a noncommutative structure and are locally nonuniform.
We believe that multiscaling is a much more generic physical phenomenon compared with uniform scaling, appearing when the phase space of a system possesses a hyperbolic structure with local symmetry breaking. Such perturbation of local symmetry could be either regular or random—from our point of view the details of the origin of local nonuniformity play a less significant role.
We discuss below the basic features of multifractality in a locally nonuniform regular hyperbolic phase space. We show in particular that a multifractal behavior is encountered in statistical topology in the case of entangled (or knotted) random walks distribution in topological classes.
The paper is organized as follows. In Section II we consider a 2D $`N`$–step random walk in a nonsymmetric array of topological obstacles and investigate the multiscaling properties of the “target” phase space for a set of specific topological invariants—the “primitive paths”. The renormalization group computations of mean length of the primitive path, as well as return probabilities to the unentangled topological state are developed in Section III. Section IV is devoted to the application of conformal methods to a geometrical analysis of multifractality in locally nonuniform hyperbolic spaces.
## II Multifractality of topological invariants for random entanglements in a lattice of obstacles
The concept of multifractality has been formulated and clearly explained in the paper . We begin by recalling the basic definitions of Rényi spectrum, which will be used in the following.
Let $`\nu (C_i)`$ be an abstract invariant distribution characterizing the probability of a dynamical system to stay in a basin of attraction of some stable configuration $`C_i`$ $`(i=1,2,\mathrm{},𝒩)`$. Taking a uniform grid parameterized by ”balls” of size $`l`$, we define the family of fractal dimensions $`D_q`$:
$$D_q=\frac{1}{q1}\underset{l0}{lim}\frac{\mathrm{ln}{\displaystyle \underset{i=1}{\overset{𝒩}{}}}\nu ^q(C_i)}{\mathrm{ln}l}$$
(1)
As $`q`$ is varied, different subsets of $`\nu ^q`$ associated with different values of $`q`$ become dominant.
Let us define the scaling exponent $`\alpha `$ as follows
$$\nu ^q(C_i)l^{\alpha q}$$
where $`\alpha `$ can take different values corresponding to different regions of the measure which become dominant in Eq.(1). In particular, it is natural to suggest that $`_{i=1}^𝒩\nu ^q(C_i)`$ can be rewritten as follows:
$$\underset{i=1}{\overset{𝒩}{}}\nu ^q(C_i)=\left[𝑑\alpha ^{}\rho (\alpha ^{})l^{f(\alpha ^{})}l^{\alpha ^{}q}\right]|_{l0}$$
where $`\rho (\alpha )`$ is the probability to have the value $`\alpha `$ lying in a small ”window” $`[\alpha ^{},\alpha ^{}+\mathrm{\Delta }\alpha ^{}]`$ and $`f(\alpha )`$ is a continuous function which has sense of fractal dimension of the subset characterized by the value $`\alpha `$.
Supposing $`\rho (\alpha )>0`$ one can approximately evaluate the last expression via the saddle–point method. Thus, one gets (see, for example, ):
$$\begin{array}{c}\frac{d}{d\alpha }f(\alpha )=q\hfill \\ \frac{d^2}{d\alpha ^2}f(\alpha )<0\hfill \end{array}$$
what together with (1) leads to the following equations
$$\begin{array}{c}\tau (q)=q\alpha (q)f[\alpha (q)]\hfill \\ \alpha (q)=\frac{d}{dq}\tau (q)\hfill \end{array}$$
(2)
where $`\tau (q)=(q1)D_q`$. Hence, the exponents $`\tau (q)`$ and $`f[\alpha (q)]`$ are related via Legendre transform. For further details and more advanced mathematical analysis, the reader is refered to .
### A 2D topological systems and their relation to hyperbolic geometry
Topological constraints essentially modify physical properties of the broad class of statistical systems composed of chain–like objects. It should be stressed that topological problems are widely investigated in connection with quantum field and string theories, 2D gravitation, statistics of vortices in superconductors, quantum Hall effect, thermodynamic properties of entangled polymers etc. Modern methods of theoretical physics allow us to describe rather comprehensively the effects of nonabelian statistics on the physical behavior of some systems. However the following question remains still obscure: what are the fractal (and as it is shown below, multifractal) properties of the distribution function of topological invariants, characterizing the homotopy states of a statistical system with topological constraints? We investigate this problem in the framework of the model ”Random Walk in an Array of Obstacles” (RWAO).
The RWAO–model can be regarded as physically clear and as a very representative image for systems of fluctuating chain–like objects with a full range of nonabelian topological properties. This model is formulated as follows: suppose that a random walk of $`N`$ steps of length $`a`$ takes place on a plane between obstacles which form a simple 2D rectangular lattice with unit cell of size $`c_x\times c_y`$. We assume that the random walk cannot cross (”pass through”) any obstacles.
It is convenient to begin with the lattice realization of the RWAO–model. In this case the random path can be represented as a $`N`$–step random walk in a square lattice of size $`a\times a`$ ($`ac_yc_x`$)—see fig.1.
It had been shown previously (see, for example ) that for $`a=c_x=c_y`$ a lattice random walk in the presence of a regular array of obstacles (punctures) on the dual lattice $`\mathrm{𝖹𝖹}^2`$ is topologicaly equivalent to a free random walk on a graph—a Cayley tree with branching number $`z=4`$ (see Fig.2). An outline of the derivation of this result is as follows. The different topological states of our problem coincide with the elements of the homotopy group of the multi-punctured plane, which is the free group $`\mathrm{\Gamma }_{\mathrm{}}`$ generated by a countable set of elements. The translational invariance allows to consider a local basis and therefore to study the factored group $`\mathrm{\Gamma }_{\mathrm{}}/\mathrm{𝖹𝖹}^2=\mathrm{\Gamma }_{z/2}`$, where $`\mathrm{\Gamma }_{z/2}`$ is a free group with $`z/2`$ generators whose Cayley graph is precisely a $`z`$–branching tree.
The relation between Cayley trees and hyperbolic geometry is discussed in details in Section III. Intuitively such a relation could be understood as follows. The Cayley tree can be isometrically embedded in the hyperbolic plane $``$ (surface of constant negative curvature). The group $`\mathrm{\Gamma }_{z/2}`$ is one of the discrete subgroups of the group of motion of the hyperbolic plane $`=SL(2,\mathrm{𝖨𝖱})/SO(2)`$, therefore the Cayley tree can be considered as a particular discrete realization of the hyperbolic plane.
Returning to the RWAO–model, we conclude that each trajectory in the lattice of obstacles can be lifted to a path in the “universal covering space” i.e to a path on the $`z`$–branching Cayley tree. The geodesic on the Cayley graph, i.e the shortest trajectory along the graph which connects ends of the path, plays the role of a complete topological invariant for the original trajectory in the lattice of obstacles. For example, the random walk in the lattice of obstacles is closed and contractible to a point (i.e. is not entangled with the array of obstacles) if and only if the geodesic length between the ends of the trajectory on the Cayley graph is zero. Hence, this geodesic length can be regarded as a topological invariant, which preserves the main nonabelian features of the considered problem.
We would like to stress two facts concerning our model: (i) The exact configuration of a geodesic is a complete topological invariant, while its length $`k`$ is only a partial topological invariant (except the case $`k=0`$); (ii) Geodesics have a clear geometrical interpretation, having sense of a bar (or ”primitive”) path which remains after deleting all even times folded parts of a random trajectory in the lattice of obstacles. The concept of ”primitive path” has been repeatedly used in statistical physics of polymers, leading to a successful classification of the topological states of chain–like molecules in various topological problems .
Even if many aspects of statistics of random walks in fixed lattices of obstacles have been well understood (see, for example and references therein), the set of problems dealing with the investigation of fractal properties of the distribution of topological invariants in the RWAO–model are practically out of discussion. Thus we devote the next Section to the study of fractal and multifractal structures of the measure on the set of primitive paths in the RWAO–model for $`ac_y<c_x`$.
### B Multifractality of the measure on the set of primitive paths on a nonsymmetric Cayley tree
The classification of different topological states of a $`N`$–step random walk in a rectangular lattice of obstacles in the case $`ac_y<c_x`$ turns out to be a more difficult and more rich problem than in the case $`a=c_y=c_x`$ discussed above. However, after a proper rescaling, the mapping of a random walk in the rectangular array of obstacles to a random walk on a Cayley tree can be explored again. To proceed we should solve two auxiliary problems. First of all we consider a random walk inside the elementary rectangular cell of the lattice of obstacles. Let us compute:
(i) The ”waiting time”, i.e the average number of steps $`t`$ which a $`t`$–step random walk spends within the rectangle of size $`c_x\times c_y`$;
(ii) The ratio of the ”escape probabilities” $`p_x`$ and $`p_y`$ through the corresponding sides $`c_x`$ and $`c_y`$ for a random walk staying till time $`t`$ within the elementary cell.
The desired quantities can be easily computed from the distribution function $`P(x_0,y_0,x,y,t)`$ which gives the probability to find the $`t`$–step random walk with initial $`(x_0,y_0)`$ and final $`(x,y)`$ points within the rectangle of size $`c_x\times c_y`$. The function $`P(x,y,t)`$ in the continuous approximation ($`a0;t\mathrm{};at=\mathrm{const}`$) is the solution of the following boundary problem
$$\{\begin{array}{c}\frac{}{t}P(x,y,t)=\frac{a^2}{4}\left(\frac{^2}{x^2}+\frac{^2}{y^2}\right)P(x,y,t)\hfill \\ P(0,y,t)=P(c_x,y,t)=P(x,0,t)=P(x,c_y,t)=0\hfill \\ P(x,y,0)=\delta (x_0,y_0)\hfill \end{array}$$
(3)
where $`a`$ is the length of the effective step of the random walk and the value $`\frac{a^2}{4}`$ has sense of a diffusion constant.
The solution of Eqs.(3) reads
$$P(x_0,y_0,x,y,t)=\frac{4}{c_xc_y}\underset{m_x=1}{\overset{\mathrm{}}{}}\underset{m_y=1}{\overset{\mathrm{}}{}}e^{\frac{\pi ^2a^2}{4}\left(\frac{m_x^2}{c_x^2}+\frac{m_y^2}{c_y^2}\right)t}\mathrm{sin}\frac{\pi m_xx_0}{c_x}\mathrm{sin}\frac{\pi m_yy_0}{c_y}\mathrm{sin}\frac{\pi m_xx}{c_x}\mathrm{sin}\frac{\pi m_yy}{c_y}$$
(4)
The ”waiting time” $`t`$ can be written now as follows
$$t=\frac{1}{c_xc_y}_0^{c_x}𝑑x_0_0^{c_y}𝑑y_0_0^{c_x}𝑑x_0^{c_y}𝑑y_0^{\mathrm{}}𝑑tP(x_0,y_0,x,y,t)$$
(5)
while the ratio $`p_x/p_y`$ can be computed straightforwardly via the relation:
$$\frac{p_x}{p_y}=\frac{{\displaystyle _0^{c_x}}𝑑x_0{\displaystyle _0^{c_y}}𝑑y_0{\displaystyle _0^{c_x}}𝑑xP(x_0,y_0,x,y,t)|_{y=\{a,c_ya\}}}{{\displaystyle _0^{c_x}}𝑑x_0{\displaystyle _0^{c_y}}𝑑y_0{\displaystyle _0^{c_y}}𝑑yP(x_0,y_0,x,y,t)|_{x=\{a,c_xa\}}}$$
(6)
In the ”ground state dominance” approximation we truncate the sum (4) at $`m_x=m_y=1`$ and get the following approximate expressions:
$$t=\frac{4^4c_x^2c_y^2}{\pi ^6a^2(c_x^2+c_y^2)};\frac{p_x}{p_y}=\frac{c_x^2}{c_y^2}$$
(7)
In the symmetric case ($`c_x=c_yc`$) Eq.(7) gives $`t=\frac{2^7c^2}{\pi ^6a^2}`$ and $`p_x/p_y=1`$, as it should be for the square lattice of obstacles.
Now the distribution function of the primitive paths for the RWAO model can be obtained via lifting this topological problem to the problem of directed random walks<sup>*</sup><sup>*</sup>*Recall that by definition the primitive path is the geodesic distance and therefore cannot have two successive opposite steps. on the 4–branching Cayley tree, where the random walk on the Cayley tree is defined as follows:
(a) The total number of steps $`\stackrel{~}{N}`$ on the Cayley tree is
$$\stackrel{~}{N}=\frac{N}{t}=\frac{\pi ^6}{4^4}\frac{Na^2(c_x^2+c_y^2)}{c_x^2c_y^2}$$
(the value $`t`$ has been computed in (7)).
(b) The distance (or ”level” $`k`$) on the Cayley tree is defined as the number of steps of the shortest path between two points on the tree. Each vertex of the Cayley tree has 4 branches; the steps along two of them carry a Boltzmann weight $`1`$, while the steps along the two remaining ones carry a Boltzmann weight $`\beta `$ as it is shown in Fig.3. The value of $`\beta `$ is fixed by Eq.(7), which yields
$$\beta =\frac{p_x}{p_y}=\frac{c_x^2}{c_y^2}$$
(8)
The ultrametric structure of the topological phase space, i.e. of the Cayley tree $`\gamma (\beta )`$, allows us to use the results of paper for investigating multicritical properties of the measure of all primitive (directed) paths of $`k`$ steps along the graph $`\gamma (\beta )`$ with nonsymmetric weights $`1`$ and $`\beta `$ (see Fig.3). A rigorous mathematical description of such weighted paths on trees (called cascades) can be found in , where the authors derive multifractal spectra, but for different distributions of weights.
We construct the partition function $`\mathrm{\Omega }(\beta ,k)`$ which counts properly the weighted number of all $`4\times 3^{k1}`$ different $`k`$–step primitive paths on the graph $`\gamma (\beta )`$.
Define two partition functions $`a_k`$ and $`b_k`$ of $`k`$–step paths, whose last steps carry the weights $`1`$ and $`\beta `$ correspondingly. These functions satisfy the recursion relations for $`k1`$:
$$\{\begin{array}{c}a_{k+1}=a_k+2b_k\hfill \\ b_{k+1}=2\beta a_k+\beta b_k\hfill \end{array}(k1)$$
(9)
with the following initial conditions at $`k=1`$:
$$\{\begin{array}{c}a_1=2\hfill \\ b_1=2\beta \hfill \end{array}$$
(10)
Combining (9) and (10) we arrive at the following 2–step recursion relation for the function $`a_k`$:
$$\{\begin{array}{cc}a_{k+2}=(1+\beta )a_{k+1}+3\beta a_k\hfill & (k1)\hfill \\ a_1=2\hfill & (k=1)\hfill \\ a_2=2+4\beta \hfill & (k=2)\hfill \end{array}$$
(11)
whose solution is
$$a_k=\frac{a_2a_1\lambda _2}{\lambda _1\lambda _2}\lambda _1^{k1}+\frac{a_1\lambda _1a_2}{\lambda _1\lambda _2}\lambda _2^{k1}$$
(12)
where
$$\lambda _{1,2}=\frac{1}{2}\left(1+\beta \pm \sqrt{(1+\beta )^2+12\beta }\right)$$
(13)
Taking into account that $`b_k`$ is given by the same recursion relation as $`a_k`$ but with the initial values $`b_1=2\beta `$ and $`b_2=2\beta ^2+4\beta `$, we get the following expression for the partition function $`\mathrm{\Omega }(\beta ,k)=a_k+b_k`$:
$$\mathrm{\Omega }(\beta ,k)=\frac{2(1+4\beta +\beta ^2)2(1+\beta )\lambda _2}{\lambda _1\lambda _2}\lambda _1^{k1}+\frac{2(1+\beta )\lambda _12(1+4\beta +\beta ^2)}{\lambda _1\lambda _2}\lambda _2^{k1}$$
(14)
The partition function $`\mathrm{\Omega }(\beta ,k)`$ contains all necessary information about the multifractal behavior. Following Eqs.(1)–(2), we associate the set of stable configurations $`\{C_i\}`$ with the set of $`𝒩(k)=4\times 3^{k1}`$ vertices of level $`k`$. Hence, we define
$$\underset{i=1}{\overset{𝒩}{}}\nu ^q(C_i)=\frac{\mathrm{\Omega }(\beta ^q,k)}{\mathrm{\Omega }^q(\beta ,k)}$$
(15)
Taking into account that the uniform grid has resolution $`l(k)=1/𝒩(k)`$ for $`k1`$ and using Eq.(1), we obtain
$$\tau (q)=\underset{k\mathrm{}}{lim}\frac{\mathrm{ln}\mathrm{\Omega }(\beta ^q,k)q\mathrm{ln}\mathrm{\Omega }(\beta ,k)}{\mathrm{ln}l(k)}$$
(16)
which allows to determine the generalized Hausdorff dimension $`D_q`$ via the relation
$$D_q=\tau (q)/(q1)$$
(17)
The corresponding plots of the functions $`D_q(q)`$ for different values of $`\beta =\{0.001;\mathrm{\hspace{0.17em}0.01};\mathrm{\hspace{0.17em}0.1};\mathrm{\hspace{0.17em}0.5}\}`$ are shown in Fig.4 (the numerical computations of Eqs.(16)–(17) are carried out for $`k=\mathrm{100\hspace{0.17em}000}`$). The fact that $`D_q(q)`$ depends on $`q`$ clearly demonstrate the multifractal behavior.
## III Random walk on a nonsymmetric Cayley tree
### A Master equation
Consider a random walk on a 4-branching Cayley tree and investigate the distribution $`P(k,\stackrel{~}{N})`$ giving the probability for a $`\stackrel{~}{N}`$–step random walk starting at the origin of the tree to have a primitive (shortest) path between ends of length $`k`$. The random walk is defined as follows: at each vertex of the Cayley tree the probability of a step along two of the branches is $`p_x`$, and is $`p_y`$ along the two others; $`p_x`$ and $`p_y`$ satisfy the conservation condition $`2p_x+2p_y=1`$. Using Eq.(8), the following expressions hold:
$$\{\begin{array}{c}p_x=\frac{\beta }{2(1+\beta )}\hfill \\ p_y=\frac{1}{2(1+\beta )}\hfill \end{array}$$
(18)
The symmetric case $`\beta =1`$ (which gives $`p_x=p_y=1/4`$) has already been studied and an exact expression for $`P(k,\stackrel{~}{N})`$ has been derived in . The rigorous mathematical description of random walks on graphs can be found in . Importance of spherical symmetry (i-e the fact that all vertices of a given level are strictly equivalent) is discussed in . Another example of nonsymmetric model on a tree (case of randomly distributed transition probabilities, the so-called RWRE model) is described in . To our knowledge, the solution for the nonsymmetric random walk which we defined above is known only for $`k`$ fixed and $`\stackrel{~}{N}1`$ . Here we consider the case $`k1,\stackrel{~}{N}1`$, and in particular we study the distribution in the neighborhood of the maximum. Breaking the symmetry by taking $`\beta 1`$ affects strongly the structure of the problem, since then the phase space becomes locally nonuniform: we have now vertices of two different kinds, $`x`$ and $`y`$, depending on whether the step toward the root of the Cayley tree occurs with probability $`p_x`$ or $`p_y`$. In order to obtain a master equation for $`P(k,\stackrel{~}{N})`$, we introduce the new variables $`L_x(k,\stackrel{~}{N})`$ and $`L_y(k,\stackrel{~}{N})`$, which define the probabilities to be at the level $`k`$ in a vertex $`x`$ or $`y`$ after $`\stackrel{~}{N}`$ steps. We recursively define the same way the probabilities $`L_{a_1\mathrm{}a_n}(k,\stackrel{~}{N})`$, ($`a_i=\{x,y\}`$) to be at level $`k`$ in a vertex such that the sequence of vertices toward the root of the tree is $`a_1\mathrm{}a_n`$. One can see that the recursion depends on the total ”history” till the root point, what makes the problem nonlocal. The master equation for the distribution function $`P(k,\stackrel{~}{N})`$
$$\begin{array}{ccc}P(k,\stackrel{~}{N}+1)\hfill & =\hfill & (2p_x+p_y)L_y(k1,\stackrel{~}{N})+(2p_y+p_x)L_x(k1,\stackrel{~}{N})+\hfill \\ & & p_yL_y(k+1,\stackrel{~}{N})+p_xL_x(k+1,\stackrel{~}{N})\hfill \end{array}$$
(19)
is coupled to the hierarchical set of functions $`\{L_x,L_y;L_{xx},L_{xy},L_{yx},L_{yy};\mathrm{};L_{a_1\mathrm{}a_n}\}`$ which satisfy the following recursion relation
$$\begin{array}{ccc}L_{a_1\mathrm{}a_n}(k,\stackrel{~}{N}+1)\hfill & =\hfill & (2\delta _{a_1,a_2})p_{a_1}L_{a_2\mathrm{}a_n}(k1,\stackrel{~}{N})+\hfill \\ & & p_xL_{xa_1\mathrm{}a_n}(k+1,\stackrel{~}{N})+p_yL_{ya_1\mathrm{}a_n}(k+1,\stackrel{~}{N})\hfill \end{array}$$
(20)
where $`a_1\mathrm{}a_n`$ cover all sequences of any lengths ($`k`$) in $`x`$ and $`y`$. In order to close this infinite system at an arbitrary order $`n_0`$ we make the following assumption: for any $`nn_0`$ we have
$$\frac{L_{a_1\mathrm{}a_n}(k,\stackrel{~}{N})}{P(k,\stackrel{~}{N})}|_{\genfrac{}{}{0pt}{}{kn_0}{\stackrel{~}{N}n_0}}\alpha _{a_1\mathrm{}a_n}$$
(21)
with $`\alpha _{a_1\mathrm{}a_n}`$ constant.
Using the approximation (21) we rewrite (19)–(20) for large $`k`$ and $`\stackrel{~}{N}`$ in terms of the function $`P(k,\stackrel{~}{N})`$ and constants $`\alpha _{a_1\mathrm{}a_n}`$ ($`0<nn_0`$). Taking into account that
$$L_{a_1\mathrm{}a_nx}+L_{a_1\mathrm{}a_ny}=L_{a_1\mathrm{}a_n}$$
we arrive at $`2^{n_01}`$ independent recursion relations for one and the same function $`P(k,\stackrel{~}{N})`$, with $`2^{n_0}1`$ independent unknown constants $`\alpha _{a_1\mathrm{}a_{n_0}}`$. In order to make this system self–consistent, one has to identify coefficients entering in different equations, what yields $`2^{n_0}2`$ compatibility relations for the constants $`\alpha _{a_1\mathrm{}a_{n_0}}`$, and the system is still open. This fact means that all scales are involved and the evolution of $`L_{a_1\mathrm{}a_n}`$ depends on $`L_{a_1\mathrm{}a_{n+1}}`$, the evolution of $`L_{a_1\mathrm{}a_{n+1}}`$ depends on $`L_{a_1\mathrm{}a_{n+2}}`$ and so on. At each scale we need informations about larger scales. This kind of scaling problem naturally suggests to use a renormalization group approach, which is developed in the next Section.
To begin with the renormalization procedure, we need to estimate the values of the constants $`\alpha _{a_1\mathrm{}a_{n_0}}`$ for the first (i.e. the smallest) scale. Let us denote
$$\{\begin{array}{c}\alpha _x=\alpha \hfill \\ \alpha _y=1\alpha \hfill \end{array}$$
and define $`\alpha _{xx},\alpha _{xy},\alpha _{yy},\alpha _{yx}`$ as follows:
$$\{\begin{array}{c}\alpha _{xx}=v_x\alpha \hfill \\ \alpha _{yy}=v_y(1\alpha )\hfill \\ \alpha _{xy}=(1v_x)\alpha \hfill \\ \alpha _{yx}=(1v_y)(1\alpha )\hfill \end{array}$$
Now we set
$$p_x\alpha _{xa_1\mathrm{}a_n}+p_y\alpha _{ya_1\mathrm{}a_n}=\left(p_x\alpha +p_y(1\alpha )\right)\alpha _{a_1\mathrm{}a_n}$$
(22)
what means that we neglect the correlations between the constants $`\alpha _{a_1\mathrm{}a_n}`$ and $`\alpha _{a_2\mathrm{}a_n}`$ at different scales. As it is shown in the next Section, the renormalization group approach allows us to get rid of the approximation (22).
With (22) one can obtain the following generic master equation
$$P(k,\stackrel{~}{N}+1)=\frac{p_{a_1}\alpha _{a_2\mathrm{}a_n}}{\alpha _{a_1\mathrm{}a_n}}(2\delta _{a_1,a_2})P(k1,\stackrel{~}{N})+\left(\alpha p_x+(1\alpha )p_y\right)P(k+1,\stackrel{~}{N})$$
(23)
where $`a_1\mathrm{}a_n`$ again cover all possible sequences in $`x`$ and $`y`$. We have now $`2^{n_0}1`$ unknown quantities with $`2^{n_0}1`$ compatibility relations (23), what makes the system (23) closed.
For illustration, we derive the solution for $`n_0=2`$:
$$\{\begin{array}{c}P(k,\stackrel{~}{N}+1)=\frac{p_x}{v_x}P(k1,\stackrel{~}{N})+\left(\alpha p_x+(1\alpha )p_y\right)P(k+1,\stackrel{~}{N})\hfill \\ P(k,\stackrel{~}{N}+1)=\frac{2p_x(1\alpha )}{\alpha (1v_x)}P(k1,\stackrel{~}{N})+\left(\alpha p_x+(1\alpha )p_y\right)P(k+1,\stackrel{~}{N})\hfill \\ P(k,\stackrel{~}{N}+1)=\frac{p_y}{v_y}P(k1,\stackrel{~}{N})+\left(\alpha p_x+(1\alpha )p_y\right)P(k+1,\stackrel{~}{N})\hfill \\ P(k,\stackrel{~}{N}+1)=\frac{2p_y\alpha }{(1\alpha )(1v_y)}P(k1,\stackrel{~}{N})+\left(\alpha p_x+(1\alpha )p_y\right)P(k+1,\stackrel{~}{N})\hfill \end{array}$$
(24)
Note that (24) displays clearly a $`\mathrm{𝖹𝖹}_2`$ symmetry: $`p_xp_y,\alpha \alpha ,v_xv_y`$. Compatibility conditions for system (24) read:
$$\frac{p_x}{v_x}=\frac{p_y}{v_y}=\frac{2p_x(1\alpha )}{\alpha (1v_x)}=\frac{2p_y\alpha }{(1\alpha )(1v_y)}$$
(25)
which finally gives
$$\{\begin{array}{c}\alpha =\frac{13\beta +\sqrt{1+14\beta +\beta ^2}}{2(1\beta )}\hfill \\ v_x=\frac{\alpha }{2\alpha }\hfill \\ v_y=\frac{1\alpha }{1+\alpha }\hfill \end{array}$$
(26)
As it has been said above, without (22) the system (19)–(20) is open, giving a single equation for the unknown function $`P(k,\stackrel{~}{N})`$ depending on the unknown parameter $`\alpha `$:
$$\begin{array}{ccc}P(k,\stackrel{~}{N}+1)\hfill & =\hfill & \left((2p_x+p_y)(1\alpha )+(2p_y+p_x)\alpha \right)P(k1,\stackrel{~}{N})+\hfill \\ & & \left(p_y(1\alpha )+p_x\alpha \right)P(k+1,\stackrel{~}{N})\hfill \end{array}$$
(27)
Eq.(27) describes a 1D diffusion process with a drift
$$\frac{k}{\stackrel{~}{N}}\overline{k}=2\alpha p_y+2(1\alpha )p_x$$
(28)
and a dispersion
$$\delta =\frac{kk^2}{\stackrel{~}{N}}=14\left(\alpha p_y+(1\alpha )p_x\right)^2$$
(29)
which provides for $`k1`$ and $`\stackrel{~}{N}1`$ the usual Gaussian distribution with nonzero mean (see ). The value of $`\alpha `$ obtained in (26) using the approximation (22) gives a fair estimate of the drift compared with the numerical simulations, as it is shown in Fig.5.
### B Real space renormalization
In order to improve the results obtained above, we recover the information lost in the approximation (22) and take into account “interactions” between different scales. Namely, we follow the renormalization flow of the parameter $`\alpha (l)`$ at a scale $`l`$ supposing that a new effective step is a composition of $`2^l`$ initial lattice steps. Let us define:
* the probability $`f_a(l)`$ of going forth (with respect to the location of the root point of the Cayley tree) from a vertex of kind $`a`$;
* the probability $`b_a(l)`$ of going back (towards the root point of the Cayley tree) from a vertex of kind $`a`$;
* the probability $`\alpha (l)`$ of being at a vertex of kind $`x`$;
* the conditional probability $`w_a(l)`$ to reach a vertex of kind $`a`$ starting from a vertex of kind $`a`$ under the condition that the step is forth;
* the conditional probability $`v_a(l)`$ to reach a vertex of kind $`a`$ starting from a vertex of kind $`a`$ under the condition that the step is back;
* the effective length $`d(l)`$ of a composite step.
Then the drift $`\overline{k}(l)`$ at scale $`l`$ is given by (compare with 28):
$$\overline{k}(l)=d(l)\left[\alpha (l)\left(f_x(l)b_x(l)\right)+\left(1\alpha (l)\right)\left(f_y(l)b_y(l)\right)\right]$$
(30)
We say that the problem is scale–independent if the flow $`\overline{k}(l)`$ is invariant under the decimation procedure, i.e. with respect to the renormalization group. We compute the flow counting the appropriate combinations of two steps, depending on the variable considered:
$$\begin{array}{ccc}w_a(l+1)\hfill & =\hfill & \left(1w_a(l)\right)\left(1w_{\overline{a}}(l)\right)+w_a^2(l)\hfill \\ v_a(l+1)\hfill & =\hfill & \left(1v_a(l)\right)\left(1v_{\overline{a}}(l)\right)+v_a^2(l)\hfill \\ f_a(l+1)\hfill & =\hfill & \frac{f_a(l)\left[w_a(l)f_a(l)+\left(1w_a(l)\right)f_{\overline{a}}(l)\right]}{c_a(l)}\hfill \\ b_a(l+1)\hfill & =\hfill & \frac{b_a(l)\left[v_a(l)b_a(l)+\left(1v_a(l)\right)b_{\overline{a}}(l)\right]}{c_a(l)}\hfill \\ d(l+1)\hfill & =\hfill & d(l)\left[\alpha (l)c_x(l)+\left(1\alpha (l)\right)c_y(l)\right]\hfill \\ \alpha (l+1)\hfill & =\hfill & \overline{k}(l)\left[\alpha (l)w_x(l)+\left(1\alpha (l)\right)\left(1w_y(l)\right)\right]+\hfill \\ & & \left(1\overline{k}(l)\right)\left[\alpha (l)v_x(l)+\left(1\alpha (l)\right)\left(1v_y(l)\right)\right]\hfill \end{array}$$
(31)
where $`\overline{a}=x`$ when $`a=y`$ (and $`\overline{a}=y`$ when $`a=x`$) and the value $`c_a(l)`$ ensures the conservation condition $`f_a(l+1)+b_a(l+1)=1`$ because we do not consider the combinations of two successive steps in opposite directions.
The transformation of $`\alpha `$ in (31) needs some explanations. We consider the drift $`\overline{k}(l)`$ as a probability to make a (composite) step forward. The equation for $`\alpha `$ is given by counting the different ways of getting to a vertex of kind $`x`$. One can check that $`\overline{k}(l)`$ given by (30) remains invariant under such transformation, what is considered as a verification of the scale independence (i.e. of renormalizability).
Following the standard procedure, we find the fixed points for the flow of $`\alpha (l)`$. First of all we realize that the recursion equations for $`w_a(l)`$ and $`v_a(l)`$ can be solved independently, providing a continuous set of fixed points: $`w_x^0=1w_y^0`$ and $`v_x^0=1v_y^0`$. Using the initial conditions (26) for $`v_a(l)`$ and deriving straightforwardly the absent initial conditions for $`w_a(l)`$, we get
$$\{\begin{array}{c}v_x(1)=v_x\hfill \\ v_y(1)=v_y\hfill \\ w_x(1)=w_x=\frac{p_x}{p_x+2p_y}\hfill \\ w_y(1)=w_y=\frac{p_y}{p_y+2p_x}\hfill \end{array}$$
(32)
(we recall that these values are obtained by taking into account the elementary correlations for two successive steps).
With the initial conditions (32) we find the following renormalized values $`v^0`$ and $`w^0`$ at the fixed point
$$\{\begin{array}{c}v^0=v^0(\beta )=\underset{l\to \infty }{lim}v_x(l)=1-\underset{l\to \infty }{lim}v_y(l)=\frac{1}{2}\left[(v_x-v_y)\underset{n=0}{\overset{\infty }{\prod }}f^{(n)}(v_x+v_y)+1\right]\hfill \\ w^0=w^0(\beta )=\underset{l\to \infty }{lim}w_x(l)=1-\underset{l\to \infty }{lim}w_y(l)=\frac{1}{2}\left[(w_x-w_y)\underset{n=0}{\overset{\infty }{\prod }}f^{(n)}(w_x+w_y)+1\right]\hfill \end{array}$$
(33)
where $`f^{(n)}(x)`$ is the $`n`$-th iterate (with $`f^{(0)}(x)=x`$) of the function
$$f(x)=x^2-2x+2$$
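The origin of $`f`$ can be read off directly from (31): writing $`s(l)=v_x(l)+v_y(l)`$ and $`d(l)=v_x(l)-v_y(l)`$, the recursion for $`v_a`$ gives
$$s(l+1)=s^2(l)-2s(l)+2=f\left(s(l)\right),\hspace{1em}d(l+1)=d(l)s(l),$$
so that $`s(l)`$ flows to the attractive fixed point $`s=1`$ of $`f`$ (the origin of the term $`+1`$ in (33)), while $`d(l)`$ converges to $`(v_x-v_y)`$ times the convergent infinite product of the iterates of $`f`$. The same algebra applies verbatim to $`w_a(l)`$.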
We then obtain successively all renormalized values at the fixed point
$$\{\begin{array}{c}f_a^0=1\hfill \\ b_a^0=0\hfill \\ d^0=\overline{k^0}=\frac{\alpha ^0+\beta (1-\alpha ^0)}{1+\beta }\hfill \\ \alpha ^0=\frac{v^0+\beta w^0}{1+\beta +(1-\beta )(v^0-w^0)}\hfill \end{array}$$
(34)
where the invariance of the drift $`\overline{k}`$ is taken into account:
$$\overline{k^0}=\overline{k}(1)=2p_y\alpha ^0+2p_x(1-\alpha ^0)=\frac{\alpha ^0+\beta (1-\alpha ^0)}{1+\beta }$$
In Fig. 5 we compare the theoretical results with numerical simulations. It is worth stressing the efficiency of the renormalization group method, which yields a solution in very good agreement with numerical simulations over a broad interval of values of $`\beta `$.
In addition we compare our results with the exact expression obtained by P. Gerl and W. Woess in for the probability $`P(0,\stackrel{~}{N})`$ of returning to the origin after $`\stackrel{~}{N}`$ random steps on the nonsymmetric Cayley tree. This distribution function $`P(0,\stackrel{~}{N})`$ reads
$$P(0,\stackrel{~}{N})\sim \mu ^{\stackrel{~}{N}}\stackrel{~}{N}^{-3/2}$$
(35)
with
$$\mu \mu (\beta )=\mathrm{min}\left\{t+\sqrt{t^2+4p_x^2}+\sqrt{t^2+4p_y^2}|t>0\right\}$$
(36)
Let us now assert, without justification, that Eq. (27) (which is actually written for $`k\gg 1`$ and $`\stackrel{~}{N}\gg 1`$) is valid for any values of $`k`$ and $`\stackrel{~}{N}`$. The initial conditions for the recursion relation (27) are as follows
$$\{\begin{array}{c}P(0,\stackrel{~}{N}+1)=\left(2\alpha p_x+2(1-\alpha )p_y\right)P(1,\stackrel{~}{N})\hfill \\ P(k,0)=\delta _{k,0}\hfill \end{array}$$
(37)
One can notice that Eq.(27) completed with the conditions (37) can be viewed as a master equation for a symmetric random walk on a Cayley tree with effective branching $`z`$ continuously depending on $`\beta `$:
$$z(\beta )=\frac{2}{\alpha p_x+(1-\alpha )p_y}$$
(38)
Hence, we conclude that our problem becomes equivalent to a symmetric random walk on a $`z(\beta )`$-branching tree. For $`k=0`$ the solution given in reads
$$P(0,\stackrel{~}{N})\sim \left[\frac{2\sqrt{z(\beta )-1}}{z(\beta )}\right]^{\stackrel{~}{N}}\stackrel{~}{N}^{-3/2}$$
(39)
This provides the same form as the exact solution (35). It has been checked numerically that for all $`\beta >0`$ the discrepancy between (35) and (39) is as follows
$$\frac{1}{\mu (\beta )}\left|\frac{2\sqrt{z(\beta )-1}}{z(\beta )}-\mu (\beta )\right|<0.02$$
Thus, we believe that our self–consistent RG–approach to statistics of random walks on nonsymmetric trees can be extended with sufficient accuracy to all values of $`k`$.
## IV Multifractality and locally nonuniform curvature of Riemann surfaces
We have claimed in Section III that local nonuniformity and the exponentially growing structure of the phase space of statistical systems generate a multiscaling behavior of the corresponding partition functions. The aim of the present Section is to bring geometric arguments to support our claim by introducing a different approach to the RWAO model. The differences between the approach considered in this Section and the one discussed in Section II are as follows:
* We consider a continuous model of random walk topologically entangled with either a symmetric or a nonsymmetric triangular lattice of obstacles on the plane.
* Our goal is to construct explicitly the metric structure of the topological phase space via conformal methods, and to relate directly the nonuniform fractal relief of the topological phase space to the multifractal properties of the distribution function of topological invariants for the given model.
Consider a random walk in a regular array of topological obstacles on the plane. As in the discrete case we can split the distribution function of all $`N`$–step paths with fixed positions of end points into different topological (homotopy) classes. We characterize each topological class by a topological invariant similar to the “primitive path” defined in Section II. Introducing complex coordinates $`z=x+iy`$ on the plane, we use conformal methods, which provide an efficient tool for investigating multifractal properties of the distribution function of random trajectories in homotopy classes.
Let us stress that explicit expressions have been constructed so far only for triangular lattices of obstacles. That is why we replace the investigation of the rectangular lattices discussed in Section III by the consideration of triangular ones. Moreover, for triangular lattices a continuous symmetry parameter (such as $`\beta =c_x^2/c_y^2`$ in the case of rectangular lattices) does not exist, and only the triangles with angles $`(\pi /3,\pi /3,\pi /3)`$, $`(\pi /2,\pi /4,\pi /4)`$, $`(\pi /2,\pi /6,\pi /3)`$ are available—only such triangles tessellate the whole plane $`z`$. In spite of these restrictions, the study of these cases enables us to trace the origin of multifractality in the metric structure of the topological phase space.
Suppose that the topological obstacles form a periodic lattice in the $`z`$–plane. Let the fundamental domain of this lattice be the triangle $`ABC`$ with angles either $`(\pi /3,\pi /3,\pi /3)`$ (symmetric case) or $`(\pi /2,\pi /6,\pi /3)`$ (nonsymmetric case). The conformal mapping $`z(\zeta )`$ establishes a one-to-one correspondence between a given fundamental domain $`ABC`$ of the lattice of obstacles in the $`z`$–plane and a zero–angled triangle $`𝒜ℬ𝒞`$ lying in the upper half–plane $`\eta >0`$ of the plane $`\zeta =\xi +i\eta `$ and having corners on the real axis $`\eta =0`$. To avoid possible misunderstandings let us point out that such a transform is conformal everywhere except at the corner (branching) points—see, for example . Consider now the tessellation of the $`z`$–plane by means of consecutive reflections of the domain $`ABC`$ with respect to its sides, and the corresponding reflections (inversions) of the domain $`𝒜ℬ𝒞`$ in the $`\zeta `$–plane. The first few generations are shown in Fig. 6. The resulting upper half–plane $`\mathrm{Im}\zeta >0`$ has a “lacunary” structure and represents the topological phase space of the trajectories entangled with the lattice of obstacles. The details of this construction, as well as a discussion of the topological features of the conformal mapping $`z(\zeta )`$ in the symmetric case, can be found in . We recall the basic properties of the transform $`z(\zeta )`$ relevant to our investigation of multifractality.
The topological state of a trajectory $`C`$ in the lattice of obstacles can be characterized as follows.
* Perform the conformal mappings $`z_\mathrm{s}(\zeta )`$ (or $`z_{\mathrm{ns}}(\zeta )`$) of the plane $`z`$ with symmetric (or nonsymmetric) triangular lattice of obstacles to the upper half-plane $`\mathrm{Im}\zeta >0`$, playing the role of the topological phase space of the given model.
* Connect the centers of neighboring curvilinear triangles in the upper half–plane $`\mathrm{Im}\zeta >0`$ by links, thus constructing a graph $`\gamma _\mathrm{s}`$ (or $`\gamma _{\mathrm{ns}}`$) (which is, as shown below, an isometric Cayley tree embedded in the Poincaré plane).
* Find the image of the path $`C`$ in the “covering space” $`\mathrm{Im}\zeta >0`$ and define the shortest (primitive) path connecting the centers of the curvilinear triangles where the ends of the path $`C`$ are located. The configuration of this primitive path projected onto the Cayley tree $`\gamma _\mathrm{s}`$ (or $`\gamma _{\mathrm{ns}}`$) plays the role of the topological invariant for the model under consideration.
The Cayley trees $`\gamma _{\mathrm{s},\mathrm{ns}}`$ have the same topological content as the one described in Section II, but here we determine the Boltzmann weights $`\beta _1,\beta _2,\beta _3`$ associated with the passages between neighboring vertices (see Fig. 7) directly from the metric properties of the topological phase space obtained via the conformal mappings $`z_{\mathrm{s},\mathrm{ns}}(\zeta )`$.
It is well known that random walks are conformally invariant; in other words the diffusion equation on the plane $`z`$ preserves its structure under a conformal transform, but the diffusion coefficient can become space–dependent . Namely, under the conformal transform $`z(\zeta )`$ the Laplace operator $`\mathrm{\Delta }_z=\frac{d^2}{dzd\overline{z}}`$ is transformed in the following way
$$\frac{d^2}{dzd\overline{z}}=\frac{1}{|z^{\prime }(\zeta )|^2}\frac{d^2}{d\zeta d\overline{\zeta }}$$
(40)
Before discussing the properties of the Jacobians $`|z^{\prime }(\zeta )|^2`$ for the symmetric and nonsymmetric transforms, it is convenient to set up the following geometrical context. The connection between Cayley trees and surfaces of constant negative curvature has already been pointed out , mostly through volume growth considerations. It is therefore natural to regard the upper half–plane $`\mathrm{Im}\zeta >0`$ as the standard realization of the hyperbolic 2–space (the surface of constant negative curvature $`R`$, where we arbitrarily set $`R=2`$), that is, to consider the following metric:
$$ds^2=\frac{2}{R\eta ^2}(d\xi ^2+d\eta ^2)$$
(41)
Let us rewrite the Laplace operator (40) in the form
$$\frac{d^2}{dzd\overline{z}}=D(\xi ,\eta )\eta ^2\left(\frac{d^2}{d\xi ^2}+\frac{d^2}{d\eta ^2}\right)$$
(42)
where the value $`D(\xi ,\eta )\equiv D(\zeta )`$ can be interpreted as the normalized space–dependent diffusion coefficient on the Poincaré upper half–plane:
$$D(\zeta )=\frac{1}{\eta ^2|z^{\prime }(\zeta )|^2}$$
(43)
The methods providing the conformal transform $`z_\mathrm{s}(\zeta )`$ for the symmetric triangle with angles $`(\pi /3,\pi /3,\pi /3)`$ have been discussed in detail in . The generalization of these results to the conformal transform $`z_{\mathrm{ns}}(\zeta )`$ for the nonsymmetric triangle with angles $`(\pi /2,\pi /6,\pi /3)`$ is straightforward. We quote the Jacobians of these conformal mappings without derivation:
$$\begin{array}{c}\left|z_\mathrm{s}^{\prime }(\zeta )\right|^2=\frac{1}{\pi ^{2/3}B^2(\frac{1}{3},\frac{1}{3})}\left|\theta _1^{\prime }(0,e^{i\pi \zeta })\right|^{8/3}\hfill \\ \left|z_{\mathrm{ns}}^{\prime }(\zeta )\right|^2=\frac{\pi ^2}{B^2(\frac{1}{2},\frac{1}{3})}\left|\theta _0(0,e^{i\pi \zeta })\right|^{8/3}\left|\theta _2(0,e^{i\pi \zeta })\right|^4\left|\theta _3(0,e^{i\pi \zeta })\right|^{4/3}\hfill \end{array}$$
(44)
where $`\theta _1^{\prime }(\chi ,q)=\frac{d}{d\chi }\theta _1(\chi ,q)`$ and $`\theta _i(0,q)`$ $`(i=0,\dots ,3)`$ are the Jacobi theta functions in their standard definition, with nome $`q=e^{i\pi \zeta }`$ .
Combining (43) and (44) we define the effective inverse diffusion coefficients in the symmetric ($`D_\mathrm{s}^{-1}`$) and in the nonsymmetric ($`D_{\mathrm{ns}}^{-1}`$) cases:
$$\begin{array}{c}D_\mathrm{s}^{-1}(\zeta )=\eta ^2\left|z_\mathrm{s}^{\prime }(\zeta )\right|^2\hfill \\ D_{\mathrm{ns}}^{-1}(\zeta )=\eta ^2\left|z_{\mathrm{ns}}^{\prime }(\zeta )\right|^2\hfill \end{array}$$
(45)
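For completeness, the reliefs (45) can be evaluated with a few lines of Python using mpmath. Two conventions in the sketch below are our assumptions rather than statements of the derivation: $`\theta _0`$ is identified with what mpmath calls jtheta(4, …) (i.e. $`\theta _4`$), and the second theta argument in (44) is treated as the nome $`q=e^{i\pi \zeta }`$.

```python
# Sketch evaluating the inverse diffusion coefficients (45) on the upper
# half-plane Im(zeta) > 0.  Assumed conventions: theta_0 == mpmath's
# jtheta(4, ...), and the second theta argument in (44) is the nome
# q = exp(i*pi*zeta); B(a, b) is Euler's beta function.
import mpmath as mp

def D_inv(zeta, sym=True):
    q = mp.exp(1j * mp.pi * zeta)          # nome, |q| < 1 for Im(zeta) > 0
    eta = mp.im(zeta)
    if sym:                                # |z'_s|^2, first line of (44)
        jac = (abs(mp.jtheta(1, 0, q, 1)) ** (mp.mpf(8) / 3)
               / (mp.pi ** (mp.mpf(2) / 3)
                  * mp.beta(mp.mpf(1) / 3, mp.mpf(1) / 3) ** 2))
    else:                                  # |z'_ns|^2, second line of (44)
        jac = (mp.pi ** 2 / mp.beta(mp.mpf(1) / 2, mp.mpf(1) / 3) ** 2
               * abs(mp.jtheta(4, 0, q)) ** (mp.mpf(8) / 3)
               * abs(mp.jtheta(2, 0, q)) ** 4
               * abs(mp.jtheta(3, 0, q)) ** (mp.mpf(4) / 3))
    return eta ** 2 * jac                  # Eq. (45): D^{-1} = eta^2 |z'|^2

zeta = mp.mpc(0.5, 0.6)
print(D_inv(zeta, sym=True), D_inv(zeta, sym=False))
```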
The corresponding 3D plots of the reliefs $`D_\mathrm{s}^{-1}(\xi ,\eta )`$ and $`D_{\mathrm{ns}}^{-1}(\xi ,\eta )`$ are shown in Fig. 8.
The functions $`D_\mathrm{s}^{-1}(\zeta )`$ and $`D_{\mathrm{ns}}^{-1}(\zeta )`$ are considered as quantitative indicators of the topological structure of the phase spaces; in particular, a Cayley tree can be isometrically embedded in the surface $`D_\mathrm{s}^{-1}(\zeta )`$. It can be shown that the images of the centers of the triangles of the symmetric lattice in the $`z`$–plane correspond to the local maxima of the surface $`D_\mathrm{s}^{-1}(\zeta )`$ in the $`\zeta `$–plane. We define the vertices of the embedded tree as those maxima. The links connecting neighboring vertices are defined in the next paragraph.
Let us define the horocycles, which correspond to repeating sequences of weights in Fig. 7 with minimal periods. There are only three such sequences: $`\beta _1\beta _2\beta _1\beta _2\mathrm{\dots }`$, $`\beta _1\beta _3\beta _1\beta _3\mathrm{\dots }`$ and $`\beta _2\beta _3\beta _2\beta _3\mathrm{\dots }`$. The horocycles are images (analytically known) of certain circles of the $`z`$–plane. They prove to be a convenient tool for a constructive description of the trajectories in the $`z`$–plane starting from the trajectories in the covering space $`\zeta `$.
The first generation of horocycles (closest to the root point of the Cayley tree) is shown in Fig. 8. Let us consider the symmetric case. Following a given horocycle we follow a ridge of the surface, and we pass through certain maxima of this surface (that is, through certain vertices of the tree). We therefore define locally the links of the tree as the set of ridges connecting neighboring maxima of $`D_\mathrm{s}^{-1}(\xi ,\eta )`$. We recall that a ridge of the surface can be defined as the set of points where the gradient of the function $`D_\mathrm{s}^{-1}(\xi ,\eta )`$ is minimal along its isoline. Even though this gives a proper definition of the tree, extracting a direct parametrization is difficult, which is why we henceforth approximate the tree by arcs of horocycles.
To give a quantitative formulation of the local definition of the embedded Cayley tree, we consider the path integral formulation of the problem on the $`\zeta `$–plane. Define the Lagrangian $`D_\mathrm{s}^{-1}(\zeta )\dot{\zeta }^2`$ of a free particle moving with the diffusion coefficient $`D_\mathrm{s}(\zeta )`$ in the space $`\zeta `$. Following the canonical procedure and minimizing the corresponding action , we get the equations of motion in the effective potential $`U=\mathrm{ln}(\eta ^2D_\mathrm{s})`$:
$$\ddot{q}_i=(\dot{q}_j\partial _jU)\dot{q}_i-\frac{1}{2}\dot{q}_j\dot{q}_j\partial _iU$$
(46)
where $`q_1=\xi `$ and $`q_2=\eta `$. Although Eq. (46) is nonlinear and contains a friction term, one can show that the trajectory of extremal action between the centers of two neighboring triangles follows the ridge of the surface $`D_\mathrm{s}^{-1}(\zeta )`$.
It is noteworthy that obtaining an analytical support of Cayley graphs is of great importance, since those graphs clearly display ultrametric properties and have connections to $`p`$–adic surfaces . The detailed study of the metric properties of the functions $`D_\mathrm{s}^{-1}(\zeta )`$ and $`D_{\mathrm{ns}}^{-1}(\zeta )`$ is left for a separate publication.
While the self–similar properties of the Jacobians of these conformal mappings appear clearly in Fig. 8, one could wonder how the local symmetry breaking affects the continuous problem. We can see that whereas $`D_\mathrm{s}^{-1}(\zeta )`$ takes a single value along the embedded tree, $`D_{\mathrm{ns}}^{-1}(\zeta )`$ does vary, which makes the tree locally nonuniform and leads to a multifractal behavior. In other words, different paths of the same length along the tree have the same weights in the symmetric case, but different ones in the nonsymmetric case. The probability of a random path $`C`$ of length $`L`$ can be written in terms of a path integral with a Wiener measure
$$p_C=\int 𝒟\{s\}\mathrm{exp}\left\{-\int _0^L\frac{1}{D[s(t)]}\left(\frac{ds}{dt}\right)^2𝑑t\right\}$$
(47)
where $`s(t)`$ is a parametric representation of the path $`C`$.
The first horocycles in Fig.8 can be parameterized as follows
$$\{\begin{array}{c}\xi =\frac{1}{2}\pm (\frac{1}{2}-\frac{\sqrt{3}}{3}\mathrm{sin}\theta )\hfill \\ \eta =\frac{\sqrt{3}}{3}(1-\mathrm{cos}\theta )\hfill \end{array}$$
(48)
with $`\theta `$ running in the interval $`[0,\pi /2]`$. The condition ensuring a constant velocity $`\dot{s}\equiv \frac{ds}{dt}`$ along the horocycles gives, together with (41),
$$\frac{1}{\eta }\frac{d\theta }{dt}=\mathrm{const}$$
hence
$$\theta (t)=2\mathrm{arctan}\left(\frac{1}{t}\right)$$
(49)
with a proper choice of the time unit. This parameterization is used to check that the embedded tree is isometric. Indeed, the horocycles shown in Fig. 8 correspond to a periodic sequence of steps like $`\beta _1\beta _2\beta _1\beta _2\mathrm{\dots }`$, $`\beta _1\beta _3\beta _1\beta _3\mathrm{\dots }`$ or $`\beta _2\beta _3\beta _2\beta _3\mathrm{\dots }`$. It is natural to assert that a step carries a Boltzmann weight characterized by the corresponding local values of $`D_{\mathrm{ns}}^{-1}`$. Therefore the period of the plot shown in Fig. 9 is directly linked to the spacing of the tree embedded in the profile $`D_{\mathrm{ns}}^{-1}`$.
Coming back to the probability of different paths covered at constant velocity, one can write
$$\mathrm{log}p_C\propto -\int _{t_1}^{t_2}\frac{dt}{D[s(t)]}$$
(50)
Figure 10 shows the value of $`\mathrm{log}p_C`$ in the symmetric and nonsymmetric cases for different paths starting at $`t_1=0^+`$ and ending at $`t`$. In the symmetric case all plots are the same (solid line), whereas in the nonsymmetric case they differ: the dashed and dot–dashed curves display the corresponding plots for the sequences $`\beta _2\beta _3\beta _2\beta _3\mathrm{\dots }`$ and $`\beta _1\beta _3\beta _1\beta _3\mathrm{\dots }`$.
Following the outline of construction of the fractal dimensions $`D_q`$ in Section II, we can describe multifractality in the continuous case by
$$D_q=\frac{1}{1-q}\underset{L\to \infty }{lim}\frac{1}{\mathrm{ln}𝒩(L)}\mathrm{ln}\frac{{\displaystyle \int 𝒟\{s\}\mathrm{exp}\left\{-q\int _0^LD_{\mathrm{ns}}^{-1}(s(t))𝑑t\right\}}}{\left[{\displaystyle \int 𝒟\{s\}\mathrm{exp}\left\{-\int _0^LD_{\mathrm{ns}}^{-1}(s(t))𝑑t\right\}}\right]^q}$$
(51)
where $`𝒩(L)`$ is the area of the surface covered by the trajectories of length $`L`$. This form is consistent with the definitions (1) and (16). Indeed, if instead of the usual Wiener measure one chooses a discrete measure $`d\chi _T`$, which is nonzero only for trajectories along the Cayley tree, we recover the following description.
Define the distribution function $`\mathrm{\Theta }(\beta _1,\beta _2,\beta _3,k)\equiv \mathrm{\Theta }(\frac{\beta _1}{\beta _3},\frac{\beta _2}{\beta _3},k)`$, which has the meaning of the weighted number of directed paths of $`k`$ steps on the nonsymmetric 3–branching Cayley tree shown in Fig. 7. The values of the effective Boltzmann weights $`\frac{\beta _1}{\beta _3}`$ and $`\frac{\beta _2}{\beta _3}`$ are defined in terms of the local heights of the surface $`D_{\mathrm{ns}}^{-1}`$ along the corresponding branches of the embedded tree. We set
$$\begin{array}{c}\frac{\beta _1}{\beta _3}=\mathrm{exp}\left[\int _{t_1}^{t_2}\frac{dt}{D_{\mathrm{ns}}[s_r(t)]}-\int _{t_2}^{t_3}\frac{dt}{D_{\mathrm{ns}}[s_r(t)]}\right]\simeq 1.07;\hfill \\ \frac{\beta _2}{\beta _3}=\mathrm{exp}\left[\int _{t_1}^{t_2}\frac{dt}{D_{\mathrm{ns}}[s_l(t)]}-\int _{t_2}^{t_3}\frac{dt}{D_{\mathrm{ns}}[s_l(t)]}\right]\simeq 1.19\hfill \end{array}$$
(52)
where $`t_1,t_2,t_3`$ are adjusted so that $`s_r(t)`$ represents a step weighted with $`\beta _3`$ for $`t_1<t<t_2`$ and a step weighted with $`\beta _1`$ for $`t_2<t<t_3`$ for right–hand–side horocycles while $`s_l(t)`$ represents a step weighted with $`\beta _3`$ for $`t_1<t<t_2`$ and a step weighted with $`\beta _2`$ for $`t_2<t<t_3`$ for left–hand–side horocycles.
The partition function $`\mathrm{\Theta }(\frac{\beta _1}{\beta _3},\frac{\beta _2}{\beta _3},k)`$ can be computed via straightforward generalization of Eq.(9); it can be written in the form:
$$\mathrm{\Theta }(\frac{\beta _1}{\beta _3},\frac{\beta _2}{\beta _3},k)=A_0\lambda _1^{k-1}+B_0\lambda _2^{k-1}+C_0\lambda _3^{k-1}\hspace{1em}(k\ge 1)$$
(53)
where $`\lambda _1`$, $`\lambda _2`$ and $`\lambda _3`$ are the roots of the cubic equation
$$\lambda ^3-\lambda \left(1+\frac{\beta _2^2}{\beta _3^2}+\frac{\beta _1\beta _2}{\beta _3^2}\right)-\left(\frac{\beta _1\beta _2}{\beta _3^2}+\frac{\beta _2^2}{\beta _3^2}\right)=0$$
and $`A_0`$,$`B_0`$ and $`C_0`$ are the solutions of the following system of linear equations
$$\{\begin{array}{c}A_0+B_0+C_0=1+\frac{\beta _1}{\beta _3}+\frac{\beta _2}{\beta _3}\hfill \\ A_0\lambda _1+B_0\lambda _2+C_0\lambda _3=2\frac{\beta _1}{\beta _3}+2\frac{\beta _2}{\beta _3}+2\frac{\beta _1\beta _2}{\beta _3^2}\hfill \\ A_0\lambda _1^2+B_0\lambda _2^2+C_0\lambda _3^2=\frac{\beta _1}{\beta _3}+\frac{\beta _2}{\beta _3}+\frac{\beta _2^2}{\beta _3^2}+\frac{\beta _1^2}{\beta _3^2}+6\frac{\beta _1\beta _2}{\beta _3^2}+\frac{\beta _1^2\beta _2}{\beta _3^3}+\frac{\beta _1\beta _2^2}{\beta _3^3}\hfill \end{array}$$
Knowing the distribution function $`\mathrm{\Theta }(\frac{\beta _1}{\beta _3},\frac{\beta _2}{\beta _3},k)`$, Eq.(51) with the discrete measure $`d\chi _T`$ reads now (compare to (16)–(17))
$$D_q=\frac{1}{1-q}\underset{k\to \infty }{lim}\frac{\mathrm{ln}\mathrm{\Theta }(\left[\frac{\beta _1}{\beta _3}\right]^q,\left[\frac{\beta _2}{\beta _3}\right]^q,k)-q\mathrm{ln}\mathrm{\Theta }(\frac{\beta _1}{\beta _3},\frac{\beta _2}{\beta _3},k)}{\mathrm{ln}(3\times 2^{k-1})}$$
(54)
The plot of the function $`D_q(q)`$ is shown in Fig.11 (the plot is drawn for $`k=\mathrm{100\hspace{0.17em}000}`$).
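The curve in Fig. 11 can be reproduced in a few lines: for each $`q`$ one raises the weights (52) to the power $`q`$, extracts the dominant root $`\lambda _1`$ of the cubic, and uses $`\mathrm{ln}\mathrm{\Theta }\approx (k-1)\mathrm{ln}\lambda _1+\mathrm{ln}A_0`$ for large $`k`$. A minimal Python (numpy) sketch:

```python
# Sketch of the generalized dimensions D_q, Eqs. (52)-(54).  For large k
# the partition function (53) is dominated by the largest root lambda_1.
import numpy as np

b1, b2 = 1.07, 1.19               # beta_1/beta_3, beta_2/beta_3, Eq. (52)
k = 100_000                       # same k as in Fig. 11

def log_theta(r1, r2):
    """ln Theta(r1, r2, k) from (53), keeping the dominant-root term."""
    roots = np.roots([1.0, 0.0, -(1.0 + r2**2 + r1 * r2), -(r1 * r2 + r2**2)])
    rhs = np.array([1 + r1 + r2,
                    2 * r1 + 2 * r2 + 2 * r1 * r2,
                    r1 + r2 + r2**2 + r1**2 + 6 * r1 * r2
                    + r1**2 * r2 + r1 * r2**2])
    coef = np.linalg.solve(np.vander(roots, 3, increasing=True).T, rhs)
    i = np.argmax(roots.real)     # the real positive (Perron) root
    return (k - 1) * np.log(roots[i].real) + np.log(abs(coef[i]))

ln_paths = np.log(3.0) + (k - 1) * np.log(2.0)      # ln(3 * 2**(k-1))
for q in (0.25, 0.5, 2.0, 4.0):   # q away from 0 and 1 (degenerate cases)
    Dq = (log_theta(b1**q, b2**q) - q * log_theta(b1, b2)) / ((1 - q) * ln_paths)
    print(f"q = {q}:  D_q = {Dq:.4f}")
```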
## V Discussion
We summarize here the results presented in Sections II–IV; we also underline several still unsolved problems related to our work, and raise the issue of their possible applications to real physical systems.
1. The basic concepts of multifractality have been clearly formulated, mainly for abstract systems, in . In the present work, we have tried to remain as close as possible to these classical formulations, while adding to the abstract models of Ref. the new physical content of topological properties of random walks entangled with an array of obstacles. Our results point out two conditions which generate multifractality for any physical system: (i) an exponentially growing number of states, i.e. “hyperbolicity” of the phase space, and (ii) the breaking of a local symmetry of the phase space (while on large scales the phase space could remain isotropic).
In Section II we have considered the topological properties of the discrete “random walk in a rectangular lattice of obstacles” model. Generalizing an approach developed earlier (see for example and references therein) we have shown that the topological phase space of the model is a Cayley tree whose associated transition probabilities are nonsymmetric. The transition probabilities have been computed from the basic characteristics of a free random walk within the elementary cell of the lattice of obstacles. The family of generalized Hausdorff dimensions $`D_q(q)`$ for the partition function $`\mathrm{\Omega }(\beta ^q,k)`$ (where $`k`$ is the distance on the Cayley graph which parameterizes the topological state of the trajectory) exhibits a nontrivial dependence on $`q`$, which means that different moments of the partition function $`\mathrm{\Omega }(\beta ^q,k)`$ scale in different ways, i.e. that $`\mathrm{\Omega }(\beta ^q,k)`$ is multifractal.
The main topologically–probabilistic issues concerning the distribution of random walks in a rectangular lattice of obstacles have been considered in Section III. In particular we have computed the average “degree of entanglement” of an $`\stackrel{~}{N}`$–step random walk and the probability for an $`\stackrel{~}{N}`$–step random walk to be closed and unentangled. The results have been achieved through a renormalization group technique on a nonsymmetric Cayley tree. The renormalization procedure has allowed us to overcome one major difficulty: in spite of a locally broken spherical symmetry, we have mapped our problem onto a symmetric random walk on a tree of effective branching number $`z`$ depending on the lattice parameters. To validate our procedure, we have compared the return probabilities obtained via our RG–approach with the exact result of P. Gerl and W. Woess and found a very good numerical agreement.
The problem tackled in Section IV is closely related to the one discussed in Section II. We believe that the approach developed in Section IV could be very important and informative, as it explicitly shows that multifractality is not attached to particular properties of a statistical system (like random walks in our case) but deals directly with the metric properties of the topological phase space. As we have already pointed out, the required conformal transforms are known only for triangular lattices, which restricts our study. However, we have explicitly shown that the transform $`z_{\mathrm{ns}}(\zeta )`$ maps the multi-punctured complex plane $`z`$ onto the so-called “topological phase space”, which is the complex plane $`\zeta `$ free of topological obstacles (all obstacles are mapped onto the real axis). We have connected multifractality to the multi–valley structure of the properly normalized Jacobian $`D_{\mathrm{ns}}(\xi ,\eta )`$ of the nonsymmetric conformal mapping $`z_{\mathrm{ns}}(\zeta )`$. The conformal mapping obtained has deep relations with number theory, which we are going to discuss in a forthcoming publication.
2. The “Random Walk in an Array of Obstacles” model can be considered as the basis of a mean–field–like approach to the problem of entropy calculations in sets of strongly entangled fluctuating polymer chains. Namely, we choose a test chain, specify its topological state and assume that the lattice of obstacles models the effect of entanglements with the surrounding chains (the “background”). Changing $`c_x`$ and $`c_y`$ one can mimic an affine deformation of the background. Investigating the free energy of the test chain entangled with the deformed medium is an important step towards understanding the high elasticity of polymer rubbers .
Neglecting the fluctuations of the background, as well as the topological constraints which the test chain produces by itself, leads to a loss of information about the correlations between the test chain and the background. Yet, even in this simplest case we obtain nontrivial statistical results concerning the test chain topologically interacting with the background.
The first attempts to go beyond the mean–field approximation of the RWAO–model and to develop a microscopic approach to the statistics of mutually entangled chain–like objects have been undertaken recently in . We believe that investigating the multifractality of such systems deserves attention.
Acknowledgments
The authors are grateful to A. Comtet for valuable discussions and helpful comments, and would like to thank the referees for drawing their attention to references .
## 1 Introduction
Intensity variations of pulsar radio emission have several different time-scales, starting from a few nanoseconds (nanostructure). Single pulses consist of subpulses (there are usually about $`3÷5`$ of them in a single pulse window) which often undergo drift, i.e. a gradual shift in phase. The nature of subpulses and most of their various features (such as circular polarisation, drifting subpulses, mode switching, nullings and the phase memory phenomenon) were explained by the Plasma model for pulsar emission developed in a series of papers (see, e.g. Kazbegi et al. 1987, 1991a, 1991b, 1991c, 1996). In this paper we present an explanation of the pulsar microstructure in the frame of the Plasma model.
Let us first summarise the main observational features of microstructure. Micropulses are ultrashort intensity variations within individual pulses with the following properties. (1) The characteristic time-scale is $`20÷30`$ $`\mu `$s (time-scales generally range from 1 $`\mu `$s to 1 ms; Cordes 1979), with an upper limit of the time width distribution of $`0.1÷1`$ ms (Boriakoff 1996). (2) Individual micropulse widths are constant with frequency. (3) Micropulse widths are about a factor of two smaller than the separation between micropulses. (4) The time separation (phase) of any micropulse with respect to the fiducial point is constant with frequency, i.e. it is simultaneous at all frequencies at the source (let us note here that the position of subpulses typically varies with frequency). (5) In about 25 per cent of the cases micropulses are observed in quasi-periodic sequences (trains) which can be detected simultaneously at widely separated frequencies, e.g. at 430 MHz and 1.4 GHz. (6) The typical life-time of a micropulse does not exceed one second. (7) The modulation depth depends on the frequency as $`\nu ^{-0.5}`$.
Lange et al. (1998) reported on observations of microstructure at 4.85 and 1.41 GHz for a few pulsars. It appeared that for all observed pulsars a large fraction of the single pulses (varying between 30 and 70 per cent) show microstructure. For the pulsars where also low frequency results are available (Popov et al. 1988), there is no significant difference between microstructure properties at high and low frequencies. These results confirmed that microstructure is a common property of the pulsar emission and not only an additional feature of a few strong pulsars. Thus, the microstructure represents perhaps one of the fundamental features of pulsar pulses. Each self-consistent theory of pulsar radio emission should be able to explain existence of this phenomenon.
Two general types of models were presented for pulsar microstructure, often regarded as beaming models and temporal models, respectively (Chian & Kennel 1983). In beaming models the observer’s line of sight sweeps across a non-uniform pulsar beam, which results in rapid intensity fluctuation of the observed radiation (Benford 1977; Ferguson 1981). In the angular beaming model proposed by Cordes (1979) and Gil (1982, 1986) both subpulses and micropulses are generated by the same narrow-band emission mechanism. Here micropulses correspond to thin plasma columns flowing along dipolar magnetic field lines, and the width of micropulses is determined by relativistic beaming. The model explains the frequency stability of micropulses, in contrast to subpulses, on the basis of geometrical considerations. The temporal models assume that the pulsar radiation is modulated while propagating through the magnetospheric plasma (see, e.g. Harding & Tademaru 1981; Chian & Kennel 1983).
Based on the similarity of the microstructure time-scales over the broad frequency range, Lange et al. (1998) claimed that the micropulse duration is determined by the size of the emitting structure. Below we argue that the microstructure is caused by the alteration of the radio wave generation region by nearly transverse drift waves propagating across the magnetic field and encircling the open field lines region of the pulsar magnetosphere. Our mechanism naturally explains the important features of microstructure.
## 2 Emission model
It is generally assumed that the pulsar magnetosphere is filled by a dense relativistic electron-positron plasma flowing along the open magnetic field lines, which is generated as a consequence of the avalanche process first described by Goldreich & Julian (1969) and developed by Sturrock (1971). This plasma is multicomponent, with a one-dimensional distribution function (see Fig. 1 in Arons 1981), containing: (i) electrons and positrons of the bulk of the plasma with mean Lorentz factor $`\gamma _p`$ and density $`n_p`$; (ii) particles of the high-energy ‘tail’ of the distribution function with $`\gamma _t`$ and $`n_t`$, stretched in the direction of positive momenta; (iii) the ultrarelativistic ($`\gamma _b\approx 10^6`$) primary beam with the so-called ‘Goldreich-Julian’ density $`n_b\approx 7\times 10^2B_0P^{-1}(R_0/r)^3[\mathrm{cm}^{-3}]`$ (where $`P`$ is the pulsar period, $`R_0`$ is the neutron star radius, $`B_0`$ is the magnetic field value at the stellar surface and $`r`$ is the distance from the neutron star’s centre), which is much less than $`n_p`$ ($`\kappa \equiv n_p/n_b\approx 10^{4÷6}`$). Such a distribution function should generate various wave-modes under certain conditions. These waves then propagate in the pair plasma of the pulsar magnetosphere, transform into vacuum electromagnetic waves as the plasma density drops, enter the interstellar medium, and reach an observer as the pulsar radio emission.
An important feature of the pulsar radio emission is that the circular polarisation is observed in the vicinity of the maximum of a pulse (see, e.g. Taylor & Stinebring 1986). At the same time, the exact maximum of the pulse power corresponds to the zero circular polarisation (as the latter changes its sign). This means that the waves leave the magnetosphere propagating at relatively small angles to the pulsar magnetic field (Kazbegi et al. 1991b).
An extensive analysis has been conducted (Volokitin et al. 1985; Arons & Barnard 1986; Lominadze et al. 1986) in order to study the dispersion properties of the waves propagating through the highly magnetised relativistic electron-positron plasma of pulsar magnetospheres. In the general case of oblique propagation with respect to the magnetic field three different wavemodes can be distinguished; these are: (i) the purely electromagnetic X-mode (also called $`t`$-wave), (ii) the subluminous Alfvén mode ($`lt`$-wave) and (iii) the superluminous O-mode ($`L`$-wave). The last two modes are of mixed electrostatic-electromagnetic nature. The electric field vectors $`𝑬^\mathrm{O}`$ and $`𝑬^\mathrm{A}`$ of the O and A-modes lie in the $`\left(𝒌𝑩\right)`$ plane, while the electric field of the X-mode $`𝑬^\mathrm{X}`$ is directed perpendicularly to this plane. Here $`𝒌`$ is the wave-vector and $`𝑩`$ is the local magnetic field.
Particles moving along the curved magnetic field undergo drift transversely to the plane of field curvature, with the velocity
$$u=\frac{c\gamma v_\phi }{\omega __BR_c},$$
(1)
where $`\omega __B=(eB/mc)`$, $`R_c`$ is the curvature radius of the dipolar magnetic field line, $`\gamma `$ is the relativistic Lorentz factor of a particle, and $`v_\phi `$ is the particle velocity along the magnetic field line. Here and below the cylindrical coordinate system $`(x,r,\phi )`$ is chosen, with the $`x`$-axis directed transversely to the plane of the magnetic field curvature, while $`r`$ and $`\phi `$ are the radial and azimuthal coordinates, respectively.
Generation of the X and A-modes becomes possible once one of the following resonance conditions is satisfied:
$$\omega -k_\phi v_\phi -k_xu=-\frac{\omega __B}{\gamma _{\mathrm{res}}},\mathrm{Cyclotron}\mathrm{instability},\mathrm{and}/\mathrm{or}$$
(2)
$$\omega -k_\phi v_\phi -k_xu=0,\mathrm{Cherenkov}\mathrm{instability}.$$
(3)
These conditions are very sensitive to the parameters of the magnetospheric plasma, particularly to the value of the drift velocity (equation 1), hence to the curvature of magnetic field lines (Kazbegi et al. 1996).
Let us note that the Plasma model is in accordance with the observed systematic increase of component separation and profile widths with decreasing frequency, often called radius-to-frequency mapping (RFM). The RFM implies that the generation region is restricted to a narrow range of altitudes. Moreover, lower-frequency waves are generated at relatively higher altitudes within this region. Namely, provided that the magnetic field is dipolar far from the stellar surface, the wave frequency scales with the distance as
$$\omega _0\propto \left(\frac{R_0}{R}\right)^6$$
(4)
in the case of the cyclotron (equation 2) mechanism, and as
$$\omega _0\propto \left(\frac{R_0}{R}\right)^{3.5}$$
(5)
in the case of the Cherenkov (equation 3) mechanism (Kazbegi et al. 1991a). It is obvious that both mechanisms satisfy RFM, hence the emission region is localised to distinct altitudes.
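As a quick illustration, the relative emission altitudes of two observing frequencies follow immediately from the scalings (4) and (5); the frequencies below are illustrative.

```python
# Ratio of emission altitudes R(nu1)/R(nu2) implied by the RFM scalings
# (4) and (5): omega ~ R**(-p)  =>  R ~ omega**(-1/p).
nu1, nu2 = 0.43e9, 1.4e9     # Hz, two illustrative observing frequencies
for label, p in (("cyclotron, Eq. (4)", 6.0), ("Cherenkov, Eq. (5)", 3.5)):
    print(label, (nu1 / nu2) ** (-1.0 / p))
```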
## 3 Generation of drift waves
It was shown (Kazbegi et al. 1991c, 1996) that in addition to the X and A-modes (whose characteristic frequencies fall into the radio band) propagating at small angles to the magnetic field lines, very low-frequency, nearly transverse drift waves can be excited. They propagate across the magnetic field, so that the angle $`\theta `$ between $`𝒌`$ and $`𝑩`$ is close to $`\pi /2`$. In other words, $`k_{\perp }/k_\phi \gg 1`$, where $`k_{\perp }=(k_r^2+k_x^2)^{1/2}`$.
$$\frac{k_x^2c^2}{\omega ^2k_\phi ^2c^2}=1+\underset{\alpha }{}\frac{\omega _{p\alpha }^2}{\omega }\frac{v_{\phi /c}}{\omega k_\phi v_\phi k_xu_\alpha }\frac{f}{\gamma }𝑑\gamma ,$$
(6)
where $`\alpha `$ denotes the sort of particles and $`\omega _{p\alpha }^2=4\pi n_{p\alpha }e^2/m`$.
Let us assume that
$$\omega =k_xu_b+k_\phi v_\phi +a,$$
(7)
where $`u_b`$ is a drift velocity of the beam particles (equation 1). Integration in parts and summation over the sorts of particles in equation (6), and use of equations (7) yields
$$1\frac{3}{2}\frac{1}{\gamma _p^3}\frac{\omega _p^2}{\omega ^2}\frac{1}{2}\frac{\omega _p^2}{\omega ^3}\frac{k_xu_p}{\gamma _p}\frac{\omega _b^2}{\omega a^2}\frac{k_xu_b}{\gamma _b}=\frac{k_{}^2c^2}{\omega ^2},$$
(8)
where the subscripts ‘$`p`$’ and ‘$`b`$’ denote the values of the quantities corresponding to the bulk of the plasma and to the beam, respectively. Note that a small term $`(\omega _b^2/\gamma _b^3\omega a)`$ has been neglected in equation (8). Thus, the frequency of the drift wave $`\omega _0\equiv \mathrm{Re}\omega `$ reads
$$\omega _0=k_xu_b+k_\phi v_\phi ,$$
(9)
where $`k_\phi v_\phi \ll k_xu_b`$.
Neglecting the second and the third terms in equation (8), and solving this equation for the imaginary part of the complex frequency $`|a|\equiv \mathrm{\Gamma }\equiv \mathrm{Im}\omega `$, we find that the growth rate of the drift waves is maximal when
$$k_{\perp }^2\simeq \frac{\omega _p^2}{\gamma _p^3c^2}.$$
(10)
In this approximation the growth rate reads
$$\mathrm{\Gamma }\simeq \left(\frac{n_b}{n_p}\right)^{1/2}\left(\frac{\gamma _p^3}{\gamma _b}\right)^{1/2}k_xu_b.$$
(11)
The drift waves propagate across the magnetic field and encircle the region of the open field lines of the pulsar magnetosphere. They draw energy from the longitudinal motion of the beam particles, as in the case of the ordinary Cherenkov wave-particle interaction. However, they are excited only if $`k_xu_b\ne 0`$, i.e. in the presence of the drift motion of the beam particles. Note that these low-frequency waves are nearly transverse, with the electric vector directed almost along the local magnetic field.
## 4 The model for microstructure
Let us assume that a drift wave with the dispersion defined by equation (9) is excited at some place in the pulsar magnetosphere. It follows from the Maxwell equations that $`B_r=E_\phi \left(k_xc/\omega _0\right)`$, hence $`B_r\gg E_\phi `$ for such a wave. Therefore, excitation of a drift wave causes a pronounced growth of the $`r`$-component of the local magnetic field.
The field line curvature $`\rho _c\equiv 1/R_c`$ is defined in a Cartesian frame of coordinates as
$$\rho _c=\left[1+\left(\frac{dy}{dx}\right)^2\right]^{-3/2}\frac{d^2y}{dx^2},$$
(12)
where $`dy/dx=B_y/B_x`$. Using $`\nabla \cdot 𝑩=0`$ and rewriting equation (12) in cylindrical coordinates we get
$$\rho _c=\frac{1}{r}\frac{B_\phi }{B}-\frac{1}{r}\frac{1}{B}\frac{B_\phi ^2}{B^2}\frac{\partial B_r}{\partial \phi }.$$
(13)
Here $`B=\left(B_\phi ^2+B_r^2\right)^{1/2}\approx B_\phi \left[1+(B_r^2/2B_\phi ^2)\right].`$ Assuming that $`k_\phi r\gg 1`$ we obtain from equation (13)
$$\rho _c=\frac{1}{r}\left(1-k_\phi r\frac{B_r}{B_\phi }\right).$$
(14)
From equation (14) it follows that even a small variation of $`B_r`$ causes a significant change of $`\rho _c`$ (since $`k_\phi r\gg 1`$).
Thus, the drift wave affects significantly the curvature of the magnetic field lines, which in turn alters the drift velocity (equation 1) of the particles. On the other hand, the resonance conditions (2) and (3), which in the Plasma model are responsible for the radio wave generation in pulsar magnetospheres, are very sensitive to the parameters of the magnetospheric plasma, particularly to the value of the drift velocity $`u`$. Therefore, any variation of the magnetic field line curvature strongly affects the process of wave excitation. It follows that, within the generation region, the resonance conditions can be fulfilled only at definite places (‘emitting spots’), corresponding to definite (‘favorable’) phases of the drift wave. The characteristic linear size $`\lambda _m`$ of a single emitting spot should be of the order of the half-wavelength of the drift wave, $`\lambda _m\simeq \lambda _{\perp }/2=\pi /k_{\perp }`$.
Assuming that the generation region corotates with the pulsar and introducing the altitude $`R_{\mathrm{em}}`$ of this region from the stellar surface, we can estimate the characteristic time during which the observer's line of sight sweeps a single emitting spot
$$\tau _m=\frac{\lambda _m}{\mathrm{\Omega }R_{\mathrm{em}}}\simeq \frac{1}{2}\frac{P}{k_{\perp }R_{\mathrm{em}}}\simeq 10^{-10}\left(k_{\perp }\mathcal{R}_{\mathrm{em}}\right)^{-1}[\mathrm{s}],$$
(15)
where $`\mathrm{\Omega }=2\pi /P`$ is the angular velocity of a pulsar and $`_{\mathrm{em}}R_{\mathrm{em}}/R_{\mathrm{LC}}`$ is a dimensionless altitude of the generation region from the stellar surface measured in the units of the light cylinder radius $`R_{\mathrm{LC}}5\times 10^9P[\mathrm{cm}]`$. Let us calculate the drift wave-number in the generation region of a ’typical’ pulsar (i.e., the pulsar with $`P=0.5`$ s and $`\dot{P}=10^{15}`$), for the magnetospheric parameters used in the Plasma model ($`\gamma _p3`$ and $`\kappa 10^6`$). Note that according to this model radio emission originates at $`_{\mathrm{em}}=0.5\pm 0.2`$. Thus, from equation (10) we obtain that
$$k_{\perp }\simeq 3\times 10^{-3}\mathrm{cm}^{-1}.$$
(16)
Speculating that $`k_{}10^6÷10^5`$ $`\mathrm{cm}^1`$, in agreement with equation (16), we obtain from equation (15) the characteristic times $`\tau _m10^5÷10^4`$ s, which agree well with the typical time-scales of the microstructure. Note that the case $`k_{}10^5\mathrm{cm}^1`$ at $`_{\mathrm{em}}0.5`$ corresponds to the typical observed micropulse width of about $`20\mu \mathrm{s}`$, whereas drift waves with the longer wavelengths $`k_{}10^6\mathrm{cm}^1`$ account for the time-scales corresponding to the upper limit of the microstructure widths distribution. From the model presented above naturally follow all the other features of the microstructure.
An observer distinguishes between micropulses by their width, which in turn is determined by the wavelength of the drift wave. On the other hand, the position of the emitting spots depends on the latitude, as follows from equation (14). At the same time, the generation region, altered by the drift wave, is rather localised for both mechanisms (see equations 2 and 3), and a broad band of radio waves with different frequencies is excited there. So the observer detects different frequencies coming from almost the same place in the magnetosphere, i.e. from the same altitude and latitude. This explains the stability of the micropulse phase with respect to the pulse fiducial point at different frequencies. The quasi-periodic trains of micropulses observed at widely separated frequencies thus correspond to the favorable phases of a drift wave.
From equation (11) it follows that the growth rate of drift waves increases with the altitude from the stellar surface. Hence, the alteration of the generation region by these waves is stronger at higher altitudes where the lower frequencies are excited (equations 2 and 3). This explains why the probability of microstructure detection, as well as its modulation depth, decrease at higher frequencies.
Let us note that although $`k_\phi \ll k_x`$ for the drift waves, there still exists some non-zero $`k_\phi `$. From equation (9) it follows that the phase velocity of the drift wave has a longitudinal component $`v_\phi ^{\mathrm{ph}}/c\approx k_\phi /k_x`$ which gradually draws the wave away from the resonance region. This implies that the corresponding micropulse should disappear after a period of time
$$\mathrm{\Delta }t_m=\frac{\mathrm{\Delta }R_{\mathrm{em}}}{v_\phi ^{\mathrm{ph}}}=\frac{\mathrm{\Delta }\mathcal{R}_{\mathrm{em}}}{2\pi }\frac{k_x}{k_\phi }P,$$
(17)
where $`\mathrm{\Delta }R_{\mathrm{em}}`$ is the longitudinal size of the generation region and $`\mathrm{\Delta }\mathcal{R}_{\mathrm{em}}`$ is the same quantity in units of the light cylinder radius. Assuming $`\mathrm{\Delta }\mathcal{R}_{\mathrm{em}}\simeq 0.4`$ we obtain for the ‘typical’ pulsar (with $`P=0.5`$ s) that $`\mathrm{\Delta }t_m\simeq 1`$ s if $`k_\phi /k_x\simeq 0.031`$. This corresponds to the observed upper bound of the typical micropulse life-time.
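The number above follows immediately from (17):

```python
# Micropulse life-time, Eq. (17): dt_m = (dR_em / (2*pi)) * (k_x/k_phi) * P
import math
dR_em, P, kphi_over_kx = 0.4, 0.5, 0.031     # values quoted in the text
print(dR_em / (2 * math.pi) / kphi_over_kx * P, "s")   # ~1 s
```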
## 5 Summary
In this paper we attempt to explain the existence and various properties of the microstructure in the pulsar radio emission on the basis of the Plasma model. According to the suggested mechanism, the radio wave generation region is strongly altered by so-called drift waves. These nearly transverse waves propagate across the local magnetic field, encircling the whole bundle of the open magnetic field lines, and cause a significant change of the field line curvature. This alters the curvature drift velocity (equation 1), thereby influencing the fulfillment of the resonance conditions (equations 2 and 3) necessary for the excitation of radio waves. Thus, radio waves are generated only in definite regions (emitting spots) placed at approximately the same altitude but different latitudes, corresponding to definite phases of the drift wave. The time-scale of the micropulses is then determined by the characteristic time needed for the observer’s line of sight to sweep an emitting spot, whose characteristic transverse size is about half a wavelength of the drift wave (equation 15). The characteristic times estimated from our model are in good accordance with the typical observed time-scales of the microstructure.
Most of the important features of the microstructure are successfully explained by the model presented in this paper. However, we would like to stress that the results of the model are qualitative, and it does not claim to provide a strict quantitative explanation of all the existing observational data.
## 6 Acknowledgments
We thank T. Smirnova and V. Boriakoff for stimulating discussion. G. Ma. is thankful to J. Gil for his hospitality during the visit to J. Kepler Astronomical Centre, where the paper was completed. The work is supported in part by the INTAS Grant 96-0154. G. Me. and D. K. acknowledge also support from the KBN Grants 2 P03D 015 12 and 2 P03D 003 15. |
## I INTRODUCTION
The observed atmospheric and solar neutrino anomalies from the terrestrial experiments seem to provide a strong evidence in favor of neutrino flavor conversions, implying that neutrinos are massive and they mix among themselves. While the details are fuzzy at this stage, it is clear that the atmospheric neutrino data require a large mixing of $`\nu _\mu \nu _\tau `$ or $`\nu _\mu \nu _s`$, although the latter possibility is beginning to look less and less likely . Regarding the solar neutrino anomaly, the mixing angle (between $`\nu _e`$ and $`\nu _{\mu ,\tau }`$ or $`\nu _s`$) could be small, as in the small angle MSW scenario, or large, as in the large angle MSW or vacuum oscillation scenarios .
Since large mixing angles are involved in the possible solutions of both the anomalies, a great deal of theoretical work has gone in the understanding of the maximal mixing . There are two complementary approaches: (i) searching for scenarios and symmetries beyond the standard model, and (ii) establishing general model independent criteria which guarantee the stability at the weak scale of the masses and mixing pattern that emerge at the high scale. The second approach has the advantage that it may not only narrow down the search for new physics scenarios to a manageable level, but it may also throw light on the parameters of the theory at the high scale, and on the value of the high scale itself. In this paper, we present some model independent criteria for such theories, first focussing on the two flavor case and subsequently on the three flavor models.
If neutrinos contribute even a small fraction of the dark matter of the universe, the oscillation observations imply a situation where at least two neutrinos (and possibly even three) are quasi-degenerate in mass. If the neutrino mass hierarchy is inverted ($`m_3<m_1,m_2`$ where $`|\mathrm{\Delta }m_{32}^2||\mathrm{\Delta }m_{21}^2|`$), the neutrinos $`\nu _1`$ and $`\nu _2`$ are necessarily quasi-degenerate. The study of scenarios where two or even three neutrinos are nearly degenerate is therefore of crucial importance.
The two flavor quasi-degenerate neutrino scenarios fall in two classes: the neutrino flavors in the degenerate limit can be (i) in the same $`CP`$ eigenstate or (ii) in opposite $`CP`$ eigenstates. It turns out that not only does one need to invoke different kinds of symmetries to understand the two cases, but the radiative corrections to the tree level degeneracy at the high scale can have very different implications for the two cases. For instance, it has been noted in that in the case (ii), the radiative corrections (such as those through the RGE evolution from the seesaw to the weak scale) do not substantially affect the maximal mixing and quasi-degeneracy predicted by the theory at a high scale.
In this analysis, we point out that in the case (i), an arbitrary mixing at the high scale can get “magnified” to a large mixing, and even possibly maximal mixing, at the low scale. We find this interesting because (a) it enables a model builder to avoid any fine-tuning for the values of mixing angles at the high scale, and hence relaxes the constraints on the parameters of the high scale physics, (b) it brings a certain unity in the understanding of the quark and lepton mixings. This is arrived at by relating the radiative corrections and the degree of mass degeneracy ($`\frac{\delta m}{m}`$), regardless of the mixing pattern at the high scale. In the context of specific models, this also leads to relationships between the degree of degeneracy, the value of the high scale and the model parameters (e.g. $`\mathrm{tan}\beta `$ for the MSSM). We further extend the results to the three generation scenario and find that the constraint on $`U_{e3}`$ from the CHOOZ experiment indicates that the $`CP`$ parity of one of the neutrinos must be opposite to that of the others for our scheme to be implementable.
Our paper is organized in the following form: in the next section, we derive our main result for the two flavor mixing. In section III, we present implications of the two flavor result for the case of the standard model and the MSSM. In section IV, we consider the extension to three generation case.
## II BASIC FORMALISM FOR TWO FLAVORS
Consider the mixing of two neutrinos. The $`2\times 2`$ Majorana matrix in the mass basis is of the form
$$\mathcal{M}_D=\left(\begin{array}{cc}m_1& 0\\ 0& m_2\end{array}\right).$$
(1)
The unitary matrix which takes $`\mathcal{M}_D`$ to the flavor basis can be written as
$$U=\left(\begin{array}{cc}C_\theta & S_\theta \\ -S_\theta & C_\theta \end{array}\right)\left(\begin{array}{cc}1& 0\\ 0& e^{i\varphi /2}\end{array}\right),$$
(2)
where $`\theta `$ is the mixing angle and $`\varphi `$ is the $`CP`$ phase. All the quantities are defined at the high scale $`\mathrm{\Lambda }`$. The two neutrino flavors are related to the mass eigenstates in the conventional form:
$$\nu _f=U_{fi}\nu _i,f=\alpha ,\beta ;i=1,2.$$
(3)
We define the convention for the “numbering” of $`\nu _1`$ and $`\nu _2`$ as follows. Let $`\nu _\alpha `$ and $`\nu _\beta `$ be the $`SU(2)_L`$ partners of the charged leptons $`l_\alpha `$ and $`l_\beta `$ respectively, such that $`m_{l_\alpha }<m_{l_\beta }`$. Then we define $`\nu _1`$ ($`\nu _2`$) as the state with a larger component of the flavor $`\alpha `$ ($`\beta `$) at the high scale. With this convention, $`0\le \theta \le \pi /4`$.
The mass matrix in the flavor basis can be written as
$$\mathcal{M}_F=U^{*}\mathcal{M}_DU^{\dagger }=\left(\begin{array}{cc}C_\theta & S_\theta \\ -S_\theta & C_\theta \end{array}\right)\left(\begin{array}{cc}m_1& 0\\ 0& m_2e^{-i\varphi }\end{array}\right)\left(\begin{array}{cc}C_\theta & -S_\theta \\ S_\theta & C_\theta \end{array}\right).$$
(4)
Let us examine the situation when $`\varphi =0`$, which corresponds to the case when the neutrinos $`\nu _1`$ and $`\nu _2`$ are in the same $`CP`$ eigenstate. Due to the presence of radiative corrections to $`m_1`$ and $`m_2`$, the matrix $`\mathcal{M}_F`$ gets modified to
$$\mathcal{M}_F\to \left(\begin{array}{cc}1+\delta _\alpha & 0\\ 0& 1+\delta _\beta \end{array}\right)\mathcal{M}_F\left(\begin{array}{cc}1+\delta _\alpha & 0\\ 0& 1+\delta _\beta \end{array}\right).$$
(12)
In the above, $`\delta _\alpha `$ and $`\delta _\beta `$ denote the corrections to the masses in the flavor basis. The above general structure for $`\mathcal{M}_F`$ has been motivated by the RGE structure of radiative corrections . We define
$$ϵ\equiv 2(\delta _\beta -\delta _\alpha ),$$
(13)
which is the net difference in the radiative corrections for the masses of the two neutrino flavors.
The mixing angle $`\overline{\theta }`$ that now diagonalizes the matrix $`\mathcal{M}_F`$ at the low scale $`\mu `$ (after radiative corrections) can be related to the old mixing angle $`\theta `$ through the following expression:
$$\mathrm{tan}2\overline{\theta }=\mathrm{tan}2\theta (1+\delta _\alpha +\delta _\beta )\frac{1}{\lambda },$$
(14)
where
$$\lambda \equiv \frac{(m_2-m_1)C_{2\theta }+2\delta _\beta (m_1S_\theta ^2+m_2C_\theta ^2)-2\delta _\alpha (m_1C_\theta ^2+m_2S_\theta ^2)}{(m_2-m_1)C_{2\theta }}.$$
(15)
In the case of near degeneracy, $`m_1\approx m_2\approx m`$, we have
$$\lambda =\frac{mϵ}{(m_2-m_1)C_{2\theta }}+1,$$
(16)
where $`m`$ is the common mass scale of the neutrinos.
If
$$|mϵ|\gg |(m_2-m_1)C_{2\theta }|,$$
(17)
then $`\lambda \to \infty `$ and we have $`\mathrm{tan}2\overline{\theta }\to 0`$. Under this condition, any mixing angle tends to zero after radiative corrections, i.e. a large mixing is unstable under radiative corrections. Note that this is true only for the two neutrinos with the same $`CP`$ parity. If they had different $`CP`$ parities, i.e. $`\varphi =\pi `$, quasi-degeneracy would imply $`|m_1|\approx |m_2|\approx m`$, however $`|m_1-m_2|\approx 2m`$. Then the radiative corrections (which are small) cannot give the inequality (17). In this case, $`|mϵ|\ll |(m_2-m_1)C_{2\theta }|`$, so that $`\lambda \approx 1`$ and the mixing angle does not change much. The mixing at the high scale then remains stable. This reproduces the observations made in regarding the stability of the Maki-Nakagawa-Sakata (MNS) mixing matrix , when the mixing angle is close to $`\pi /4`$. In addition, our analysis shows that the same conclusions remain valid for any arbitrary nonzero $`\theta `$ of the MNS matrix.
If
$$(m_1-m_2)C_{2\theta }=2\delta _\beta (m_1S_\theta ^2+m_2C_\theta ^2)-2\delta _\alpha (m_1C_\theta ^2+m_2S_\theta ^2),$$
(18)
then $`\lambda =0`$, or equivalently $`\overline{\theta }=\pi /4`$, i.e. maximal mixing. Given the mass hierarchy of the charged leptons, $`m_{l_\alpha }\ll m_{l_\beta }`$, we expect $`|\delta _\alpha |\ll |\delta _\beta |`$, which reduces (18) to a simpler form:
$$ϵ=\frac{\delta mC_{2\theta }}{(m_1S_\theta ^2+m_2C_\theta ^2)},$$
(19)
where $`\delta m\equiv m_1-m_2`$. In the quasi-degenerate case,
$$ϵ\approx \frac{\delta m}{m}C_{2\theta }.$$
(20)
The above expression can be translated in terms of the mass-squared difference (which is the quantity measured in the oscillation experiments) as
$$ϵ\approx \frac{\mathrm{\Delta }m^2(\mathrm{\Lambda })}{2m^2}C_{2\theta },$$
(21)
where $`\mathrm{\Delta }m^2(\mathrm{\Lambda })=m_1^2(\mathrm{\Lambda })-m_2^2(\mathrm{\Lambda })`$. If the condition (21) is satisfied, the mixing at the scale $`\mu `$ tends to become maximal regardless of the value of the mixing at the scale $`\mathrm{\Lambda }`$.
Several points are worth emphasizing here.
1. The above relation between $`ϵ`$ and the neutrino parameters $`\theta `$, $`m_1`$ and $`m_2`$ is a model independent result and has profound implications for model building. For instance, it will relax the domain of parameters of the high scale theory compared to what was believed earlier for $`\varphi =0`$.
2. From (21), the sign of $`ϵ`$ must be the same as that of $`\mathrm{\Delta }m^2(\mathrm{\Lambda })`$ for getting maximal mixing at the low scale. This preference is of a phenomenological importance since the sign of $`\mathrm{\Delta }m^2`$ at low scales is measured by experiments: if the solar neutrino solution is MSW, the identity of the heavier neutrino is known, and the heavy / light nature of the third neutrino may be determined through the long baseline experiments or the observations of a galactic supernova . The model needs to be able to reproduce this sign from the values of the masses at the high scale through the RGE. The results of this paper can thus be used to discriminate between various models for a large flavor mixing.
3. The condition (21) is not to be mistaken for a fine-tuning. Though maximal mixing at the low scale requires the exact equality (21), the condition can be slackened if we only need a large mixing. Indeed, the SK data indicate $`|\mathrm{tan}2\overline{\theta }|>2`$ at 90% C.L. In Fig. 1, we show the range of $`ϵ`$ that allows a large mixing at the scale $`\mu `$ as a function of the mixing angle at the scale $`\mathrm{\Lambda }`$. The region enclosed within the “leaf” gives the range of $`ϵ`$ which generates a large mixing ($`|\mathrm{tan}2\overline{\theta }|>2`$). The value of the degree of degeneracy $`|\frac{\delta m}{m}|`$ chosen for the figure is 0.1; from (20), changing this value would simply rescale $`ϵ`$ by a factor proportional to $`|\frac{\delta m}{m}|`$. The figure shows that a large mixing at the scale $`\mu `$ is indeed possible for a large range of neutrino parameters. The condition on the signs of $`ϵ`$ and $`\mathrm{\Delta }m^2`$ is also relaxed if the mixing at the high scale is already large.
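To make the magnification mechanism concrete, the following sketch (our own illustration; all numerical inputs are hypothetical, only the structure of eqs. (14)-(16) and (20) is taken from the text, and the factor $`(1+\delta _\alpha +\delta _\beta )\approx 1`$ is dropped) evaluates the low-scale mixing for values of $`ϵ`$ around the critical value:

```python
import math

# Hypothetical illustration of eqs. (14)-(16): for eps near the critical
# value of eq. (20), lambda -> 0 and tan(2 theta_bar) is magnified even
# though the high-scale angle theta is small.
m = 1.0                     # common neutrino mass scale (eV), assumed
theta = math.radians(10.0)  # small mixing angle at the high scale
dm_over_m = 0.1             # degree of degeneracy, as in Fig. 1
m1, m2 = m * (1 + dm_over_m / 2), m * (1 - dm_over_m / 2)

eps_crit = (m1 - m2) / m * math.cos(2 * theta)     # eq. (20)
for eps in (0.5 * eps_crit, 0.9 * eps_crit, 0.99 * eps_crit, 1.1 * eps_crit):
    lam = m * eps / ((m2 - m1) * math.cos(2 * theta)) + 1.0   # eq. (16)
    print(f"eps/eps_crit = {eps / eps_crit:4.2f}  ->  "
          f"tan(2 theta_bar) = {math.tan(2 * theta) / lam:+9.2f}")
```

At $`ϵ=ϵ_{crit}`$ exactly, $`\lambda =0`$ and the low-scale mixing is maximal; the printout shows $`|\mathrm{tan}2\overline{\theta }|>2`$ over a finite window around the critical value, which is the “leaf” of Fig. 1.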
## III APPLICATIONS TO THE STANDARD MODEL AND THE MSSM
In this section, we analyze the implications of (21) for the case of the standard model (SM) and MSSM to see whether it is satisfied for acceptable values of the model parameters. In the case of the SM, the value of $`ϵ_{SM}`$ from the RGE evolution is
$$ϵ_{SM}\simeq \frac{h_\beta ^2}{32\pi ^2}ln(\frac{\mathrm{\Lambda }}{M_Z}),$$
(22)
where $`h_\beta `$ corresponds to the Yukawa coupling of the heavier charged lepton. Eq. (20) and the sign of $`ϵ_{SM}`$ in (22) imply that for large flavor mixing to be generated in the SM through radiative corrections, we require $`m_1>m_2`$. In addition, from (21) and (22), the strength of $`h_\beta `$ needs to be
$$h_\beta (SM)\simeq \sqrt{\frac{16\pi ^2|\mathrm{\Delta }m^2(\mathrm{\Lambda })|C_{2\theta }}{ln(\frac{\mathrm{\Lambda }}{M_Z})m^2}}.$$
(23)
This is a relation between $`\mathrm{\Delta }m^2(\mathrm{\Lambda })`$ and the scale $`\mathrm{\Lambda }`$ that needs to be obeyed. As an illustration, taking $`\beta =\mu `$ (for $`\nu _e`$–$`\nu _\mu `$ mixing, for example), with $`h_\mu \approx 6\times 10^{-4}`$ and the high scale $`\mathrm{\Lambda }\approx 10^{12}`$ GeV, for a degenerate neutrino mass of $`m\approx 1`$ eV we get $`|\mathrm{\Delta }m^2(\mathrm{\Lambda })|\approx 10^{-7}`$ eV<sup>2</sup>.
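For the reader who wants to reproduce this estimate, here is a one-line check (our own sketch; $`M_Z\approx 91`$ GeV and $`C_{2\theta }\approx 1`$ are assumed inputs):

```python
import math

# Back-of-the-envelope check of eq. (23), solved for |Delta m^2(Lambda)|,
# with the quoted SM inputs (assumed: M_Z = 91.2 GeV, C_2theta ~ 1).
h_mu, Lam, M_Z, m = 6e-4, 1e12, 91.2, 1.0      # Yukawa, GeV, GeV, eV
dm2 = h_mu**2 * math.log(Lam / M_Z) * m**2 / (16 * math.pi**2)
print(f"|Delta m^2(Lambda)| ~ {dm2:.1e} eV^2")  # ~5e-8, i.e. of order 1e-7
```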
In the case of MSSM, we have
$$ϵ_{MSSM}\simeq -\frac{h_\beta ^2}{16\pi ^2}ln(\frac{\mathrm{\Lambda }}{\mu }).$$
(24)
Eq. (20) and the sign of $`ϵ_{MSSM}`$ in (24) imply that we need $`m_1<m_2`$ for large flavor mixing to be generated through radiative corrections in the MSSM. In addition, from (21) and (24), the strength of $`h_\beta `$ has to be of the order of
$$h_\beta (MSSM)\simeq \sqrt{\frac{8\pi ^2|\mathrm{\Delta }m^2(\mathrm{\Lambda })|C_{2\theta }}{ln(\frac{\mathrm{\Lambda }}{\mu })m^2}}.$$
(25)
Taking $`\beta =\tau `$ (for $`\nu _\mu `$–$`\nu _\tau `$ mixing, for example), and using
$$h_\tau =\frac{m_\tau }{v\mathrm{cos}\beta },$$
(26)
we get a relation between $`\mathrm{\Lambda }`$, $`\mathrm{tan}\beta `$ and $`m`$, the common mass scale of the neutrinos. In Fig. 2 we show the variation of $`ϵ`$ with $`\mathrm{\Lambda }`$ in the MSSM for different values of $`h_\tau `$. From (25) and (26), for given $`m`$ and $`\mathrm{\Lambda }`$, we can infer the required value of $`h_\tau `$ and hence of $`\mathrm{tan}\beta `$. For example, for $`m\approx 1`$ eV and $`\mathrm{\Lambda }\approx 10^{12}`$ GeV, taking $`|\mathrm{\Delta }m^2(\mathrm{\Lambda })|\approx 10^{-3}`$ eV<sup>2</sup>, we get $`\mathrm{tan}\beta \approx 5`$.
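The $`\mathrm{tan}\beta `$ estimate can be checked in the same way (our own sketch; $`v=174`$ GeV, $`m_\tau =1.777`$ GeV, $`\mu \approx M_Z`$ and $`C_{2\theta }\approx 1`$ are assumed inputs):

```python
import math

# Inverting eqs. (25)-(26) for tan(beta) with the quoted MSSM inputs.
m, Lam, mu, dm2 = 1.0, 1e12, 91.2, 1e-3   # eV, GeV, GeV, eV^2
m_tau, v = 1.777, 174.0                   # GeV
h_tau = math.sqrt(8 * math.pi**2 * dm2 / (math.log(Lam / mu) * m**2))
cos_b = m_tau / (v * h_tau)               # from h_tau = m_tau / (v cos beta)
print(f"h_tau = {h_tau:.3f},  tan(beta) = {math.sqrt(1 / cos_b**2 - 1):.1f}")
```

The printed value is $`\mathrm{tan}\beta \approx 5.6`$, consistent with the estimate in the text.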
## IV Extension to three generations
Let us now make a few comments on the possible extension to the case of three quasi-degenerate Majorana neutrinos. If $`m_{\alpha \beta }`$ are the elements of the neutrino mass matrix in the flavor basis, then in the approximation of the decoupling of the third flavor, a large mixing between flavors $`\alpha `$ and $`\beta `$ is guaranteed at the low scale if
$$ϵ_{\alpha \beta }\equiv 2(\delta _\beta -\delta _\alpha )\approx \frac{2(m_{\alpha \alpha }-m_{\beta \beta })(1+\delta _\alpha +\delta _\beta )}{(m_{\alpha \alpha }+m_{\beta \beta })},$$
(27)
where no summation over repeated indices is implied. Assuming that the $`\mathrm{\Delta }m^2`$ hierarchy observed at the low scale is true at the high scale also (small radiative corrections), we have $`|m_1-m_2|\ll |m_2-m_3|`$. The condition for $`U_{\mu 3}`$ to be maximal is then
$$ϵ_{\mu \tau }\approx \frac{(m_3-m_2)(|U_{\tau 3}|^2-|U_{\tau 2}|^2)}{m}.$$
(28)
In all the models in which $`h_\tau `$ dominates over $`h_e`$ and $`h_\mu `$, we have $`ϵ_{e\tau }\approx ϵ_{\mu \tau }`$. Then, the condition for the enhancement of $`U_{\mu 3}`$ (28) is similar to the condition for the enhancement of $`U_{e3}`$ (with the replacement $`\mu \to e`$, $`2\to 1`$) if all the neutrinos have the same $`CP`$ parity, assuming that both $`U_{\tau 1},U_{\tau 2}\ll U_{\tau 3}`$. That would imply that when $`U_{\mu 3}`$ is magnified due to radiative corrections, so is $`U_{e3}`$. Then one cannot naturally get a small value of $`U_{e3}`$ at the low scale, as is suggested by the CHOOZ data . Thus, in the three generation case with quasi-degenerate Majorana neutrinos, we need the $`CP`$ phase of one neutrino to be opposite to that of the other two in order for our condition to be implementable. It should be noted that satisfying this condition still does not guarantee the stability of small $`U_{e3}`$.
In conclusion, we have derived a model independent condition that guarantees a large mixing at the low scale irrespective of the mixing angle at the high scale, for two quasi-degenerate Majorana neutrinos with the same $`CP`$ parity. The condition relates the masses at the high scale to the radiative corrections. In the case of SM and MSSM, this predicts the sign of the mass difference between the neutrinos and also gives a range for its magnitude. In MSSM, it translates into a relation between the value of the high scale $`\mathrm{\Lambda }`$, $`\mathrm{tan}\beta `$, and the common mass of the neutrinos. Extending the argument to three quasi-degenerate Majorana neutrinos, we again show in a model independent way that the CP parity of one of the neutrinos should be opposite to that of the others for our conditions to be implementable at the phenomenological level.
Acknowledgements
We thank WHEPP-6, Chennai, India, where a part of the work was completed. The work of RNM is supported by the NSF Grant no. PHY-9802551. The work of MKP is supported by the project No. 98/37/9/BRNS-cell/731 of the Govt. of India. |
no-problem/0001/quant-ph0001084.html | ar5iv | text | # Distillation of GHZ states by selective information manipulation
## Abstract
Methods for distilling maximally entangled tripartite (GHZ) states from arbitrary entangled tripartite pure states are described. These techniques work for virtually any input state. Each technique has two stages which we call primary and secondary distillation. Primary distillation produces a GHZ state with some probability, so that when applied to an ensemble of systems, a certain percentage is discarded. Secondary distillation produces further GHZs from the discarded systems. These protocols are developed with the help of an approach to quantum information theory based on absolutely selective information, which has other potential applications.
In the rapidly developing field of quantum information, it is possible to identify two main lines of investigation. On the one hand, it addresses basic questions on the fundamental nature of information, how it is embodied in quantum systems, how it can be quantified, and the extent to which physical properties can be reduced to informational ones . On the other hand are specific operational issues; for example, how quantum information can be manipulated for applications such as quantum computation and teleportation .
In this Letter we try to bring together these two strands by proposing a new approach to the analysis of quantum information at a fundamental level, which leads directly to an operational technique for distilling maximally entangled tripartite (GHZ) states , using local operations and classical communication. The three qubits of the system are assumed to be physically separated and held by Alice, Bob, and Cara, respectively. (Throughout this Letter we use the term “maximally entangled” to refer to N-partite states that are N-orthogonal: i.e., if the subsystems are two-dimensional such a state would be $`\sqrt{1/2}(|00\mathrm{\cdots }0\rangle -|11\mathrm{\cdots }1\rangle )`$.)
Central to our approach is the notion of absolutely selective information, which has a straightforward interpretation in terms of classical information but can be seen as a basic distinguishing feature between quantum systems and their classical counterparts. We apply our approach to the specific problem of distilling GHZ states from arbitrary entangled tripartite pure states. We show that it is possible to distill, with a certain probability, a GHZ state from virtually any entangled tripartite pure state while retaining all three subsystems of the input state. As far as we know this is the first protocol of this type to be suggested. Our initial yield of GHZ states is then supplemented by an additional yield which involves sacrificing some subsystems. In this Letter we outline our approach and summarize our results. A more detailed exposition of the underlying analysis will be presented elsewhere .
The distinction between “selective” and “structural” information was addressed by Mackay in the early days of classical information theory. Whereas structural information measures are based on an analysis of the form of possible events, selective information refers to new information gained from the occurrence of a specific event. For example, a signal might transmit a bit as one of two different waveforms; the selective information would be one bit, while the structural information, sufficient to describe all possible waveform measurements, would be considerably more. Absolutely selective information signifies data that are irreducibly unpredictable, and hence genuinely new, in the sense that their unpredictability cannot be explained by the observer’s ignorance. This type of information can arise only in a theory that is fundamentally stochastic; hence it is commonplace in quantum physics, but absent from classical physics. For a pure state, the minimum local absolutely selective information (the minimum information generated by measuring one of the subsystems with a free choice of measurement basis) is exactly the same as the local entropy.
When considered as a quantitative measure, selective information is closely related to fundamental measures in quantum information theory. For example, the standard measure of entanglement for bipartite pure states is numerically equal to the minimum local absolutely selective information. In a similar way, minimizing the absolutely selective information can be used to develop measures of nonorthogonality for quantum states . In this Letter we show that absolutely selective information can be manipulated by an appropriate measurement procedure, and apply this to an operational problem.
The problem we address is to transform a state $`|\psi _{123}\rangle `$
$`|\psi _{123}\rangle `$ $`=`$ $`a|000\rangle +b|001\rangle +c|010\rangle +d|011\rangle `$ (2)
$`+e|100\rangle +f|101\rangle +g|110\rangle +h|111\rangle `$
into the state $`|\psi _{\mathrm{GHZ}}\rangle =\sqrt{1/2}(|000\rangle -|111\rangle )`$, with some probability. Let us first consider the minimum absolutely selective information $`A_i`$ associated with each of the three qubits in $`|\psi _{123}\rangle `$:
$$A_i=-\left(p_i\mathrm{log}_2p_i+\left(1-p_i\right)\mathrm{log}_2\left(1-p_i\right)\right),\qquad i=1,2,3,$$
(3)
where $`p_i`$ and $`1-p_i`$ are the eigenvalues of the reduced density operator $`\rho _i`$ that describes system $`i`$ when the other two subsystems are traced out,
$$\rho _i=\mathrm{Tr}_{jk}\left(|\psi _{ijk}\rangle \langle \psi _{ijk}|\right).$$
(4)
(Without loss of generality we adopt the convention that $`p_i\ge 1/2`$.)
It can be shown that the maximal value of $`\sum _{i=1}^3A_i`$ for any tripartite pure state occurs uniquely for the GHZ state (or any local unitary transform of it), for which each of the $`p_i`$’s is equal to 1/2 and $`\sum _{i=1}^3A_i=3`$. Our distillation procedure makes use of this fact by decreasing each of the $`p_i`$’s in turn (with some probability), applying the procedure repeatedly, until all of the $`p_i`$’s are within some tolerance of 1/2, at which point a GHZ state will necessarily have been distilled (up to a local unitary transformation).
The technique for decreasing the $`p_i`$’s is similar to the “Procrustes” method of . We perform a positive operator valued (POV) measurement consecutively on each subsystem. To see how this works, let us carry out the procedure on subsystem 1. If we consider subsystems 2 and 3 as a single composite system, we can write the tripartite state as a Schmidt decomposition with respect to subsystem 1 and the composite 2-3 system:
$$|\psi _{123}\rangle =\sqrt{p_1}|+\rangle _1|\varphi ^+\rangle _{23}+\sqrt{1-p_1}|-\rangle _1|\varphi ^-\rangle _{23}.$$
(5)
The minimum absolutely selective information for subsystem 1 is then given by
$$A_1=-\left(p_1\mathrm{log}_2p_1+\left(1-p_1\right)\mathrm{log}_2\left(1-p_1\right)\right).$$
(6)
By carrying out an appropriate POV measurement on this subsystem, we can with some probability bring $`A_1`$ to its maximal value of 1. We introduce an ancilla qubit “$`a`$”, which interacts unitarily with subsystem 1:
$`|+\rangle _1|0\rangle _a`$ $`\to `$ $`\alpha |+\rangle _1|0\rangle _a+\sqrt{1-\alpha ^2}|+\rangle _1|1\rangle _a,`$ (7)
$`|-\rangle _1|0\rangle _a`$ $`\to `$ $`|-\rangle _1|0\rangle _a,`$ (8)
$`|+\rangle _1|1\rangle _a`$ $`\to `$ $`\sqrt{1-\alpha ^2}|+\rangle _1|0\rangle _a-\alpha |+\rangle _1|1\rangle _a,`$ (9)
$`|-\rangle _1|1\rangle _a`$ $`\to `$ $`|-\rangle _1|1\rangle _a.`$ (10)
We then measure the state of the ancilla. If we set $`\alpha =\sqrt{(1-p_1)/p_1}`$ and the starting state of the ancilla to be $`|0\rangle _a`$, then with probability $`2\left(1-p_1\right)`$ we will find the ancilla in state $`|0\rangle _a`$, which projects the system into the state $`\sqrt{1/2}\left(|+\rangle _1|\varphi ^+\rangle _{23}+|-\rangle _1|\varphi ^-\rangle _{23}\right)`$, for which $`A_1=1`$. With probability $`2p_1-1`$ we will measure state $`|1\rangle _a`$, in which case the procedure fails.
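This step can be simulated directly. The sketch below (our own numpy illustration, not code from this work) builds the success Kraus operator implied by the transformation (7)-(10), applies it to a random three-qubit state, and verifies both the success probability $`2(1-p_1)`$ and the fact that the new $`p_1`$ equals 1/2:

```python
import numpy as np

rng = np.random.default_rng(0)

def rho1(psi, qubit):
    """Reduced density matrix of one qubit of a 3-qubit pure state."""
    t = np.moveaxis(psi.reshape(2, 2, 2), qubit, 0).reshape(2, 4)
    return t @ t.conj().T

psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)

w, U = np.linalg.eigh(rho1(psi, 0))   # eigenvalues ascending: (1-p1, p1)
p1 = w[1]
alpha = np.sqrt((1 - p1) / p1)
# Success Kraus operator: acts as alpha on |+> (weight p1), as 1 on |->.
K0 = U @ np.diag([1.0, alpha]) @ U.conj().T
out = np.kron(K0, np.eye(4)) @ psi

print("P(success) =", np.linalg.norm(out)**2, " 2(1-p1) =", 2 * (1 - p1))
out /= np.linalg.norm(out)
print("new p1     =", np.linalg.eigh(rho1(out, 0))[0].max())   # -> 0.5
```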
In the Procrustes technique, this one step plus a local unitary transformation suffices to distill EPR pairs from arbitrary entangled bipartite states . In the tripartite case, we then repeat the procedure on subsystems 2 and 3, which projects the system into states for which $`p_2=1/2,A_2=1`$ and $`p_3=1/2,A_3=1`$, respectively. However, if we simply carry out a single POV measurement of this type on each of the three subsystems in turn, the resulting tripartite state will not in general be a GHZ state. Each step of the process is nonunitary, since the tripartite system can be discarded at each stage if the wrong result for the POV measurement is obtained. Whilst the $`A_i`$ are conserved by unitary operations on the local subsystems, they are not conserved in general for nonunitary operations. Hence, when we carry out a POV measurement on bit $`i`$ to project the system into a state for which $`p_i=1/2`$ and $`A_i=1`$, this will disrupt the values of $`p`$ and $`A`$ for the other two qubits.
Nevertheless, it transpires that for most tripartite states, repeated application of this type of POV measurement will steadily move the input state towards a GHZ state until it gets arbitrarily close to it. (Exceptions will be identified later). There are a number of plausible ways to measure “closeness” to a GHZ state. Three such measures are
$`D_p`$ $`\equiv `$ $`{\displaystyle \underset{i=1}{\overset{3}{\sum }}}p_i-3/2,`$ (12)
$`D_S`$ $`\equiv `$ $`3-{\displaystyle \underset{i=1}{\overset{3}{\sum }}}A_i,`$ (13)
$`D_2`$ $`\equiv `$ $`3/4-{\displaystyle \underset{i=1}{\overset{3}{\sum }}}p_i\left(1-p_i\right).`$ (14)
We introduce this last quantity because it is more tractable analytically than $`D_p`$ and $`D_S`$, being a simple function of the coefficients of $`|\psi _{123}`$.
Numerical analysis of this process on randomly chosen starting states shows that for a large fraction of these states, $`D_p`$ approaches zero to an accuracy of $`10^{-3}`$ after just two complete iterations (i.e. two POV measurements performed on each of the three subsystems), while virtually all do so within four iterations. Interestingly, we find that in every case examined (aside from the exceptions given below), $`D_p`$ decreases monotonically toward zero with each step of the procedure, whereas $`D_S`$ and $`D_2`$ can fluctuate, though of course their general trend is downward.
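The iteration itself is easy to reproduce. The following self-contained sketch (again our own illustration, using the same Kraus construction as above and post-selecting on success at every step) repeats the POV step of Eqs. (7)-(10) on qubits 1, 2, 3 in turn and prints $`D_p`$ after each sweep; for generic random states it decays rapidly, as described in the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def reduced(psi, q):
    t = np.moveaxis(psi.reshape(2, 2, 2), q, 0).reshape(2, 4)
    return t @ t.conj().T

def pov_step(psi, q):
    """One post-selected POV step on qubit q (success branch only)."""
    w, U = np.linalg.eigh(reduced(psi, q))   # w ascending: (1-p, p)
    K = U @ np.diag([1.0, np.sqrt(w[0] / w[1])]) @ U.conj().T
    ops = [np.eye(2)] * 3
    ops[q] = K
    out = np.kron(np.kron(ops[0], ops[1]), ops[2]) @ psi
    return out / np.linalg.norm(out)

psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
for sweep in range(1, 5):
    for q in range(3):
        psi = pov_step(psi, q)
    D_p = sum(np.linalg.eigvalsh(reduced(psi, q)).max() for q in range(3)) - 1.5
    print(f"sweep {sweep}: D_p = {D_p:.2e}")
```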
The results presented so far are supported only by numerical analysis. However, there is a closely related procedure for which we have derived an analytical proof of efficacy for virtually any input state. In this second method, instead of reducing each probability $`p_i`$ to $`1/2`$ in turn using POV measurements, the probabilities are reduced by a small amount $`ϵ`$, so that with each step the state changes infinitesimally in the limit $`ϵ\to 0`$. The proof follows fairly straightforwardly by deriving the changes in the state coefficients from the procedure, then using these to derive the change in $`D_2`$. By changing to the Schmidt basis for all three bits, one can, with some effort, show that $`p_i(1-p_i)`$ can never decrease for $`i=1,2,3`$, and will only remain unchanged for certain very special initial states detailed below. This monotonicity implies monotonicity for $`D_2`$, $`D_p`$, and $`D_S`$, so all of these quantities diminish steadily as the state approaches a GHZ state. This infinitesimal method would be quite challenging experimentally, but it is analytically interesting due to its relative tractability.
The protocols described so far correspond to what we call “primary” distillation. They will give a specific yield (i.e., surviving percentage) of GHZ states if a collection of systems is supplied in a given input state. This yield can be straightforwardly calculated for the large-step procedure; after each POV measurement on the $`i`$th subsystem a proportion $`2(1-p_i)`$ of the systems are retained. The yield of GHZ states for the primary distillation process averaged approximately $`9.2\%`$ for the evenly-distributed sample of input states we analyzed, but this will clearly depend strongly on the initial distribution.
Average yields for the infinitesimal procedure were $`9.7\%`$. The chance of the procedure failing on any given step is quite small, but over many steps the number of failures mounts. The difference between the infinitesimal and big-step procedure is interesting when contrasted with the bipartite Procrustes technique. In the bipartite case, there is no advantage to using small steps over a single large step; the yields are the same in both cases. Clearly in the more elaborate tripartite procedure there is a difference.
This yield can be greatly enhanced by a process of secondary distillation, which makes use of those systems discarded during primary distillation. When we carry out the initial POV measurement on subsystem 1 for the input state $`|\psi _{123}\rangle `$ given by eq. (2), with probability $`2p_1-1`$ we will fail to obtain the desired result. However, this failure will leave the discarded system in the state $`|+\rangle _1|\varphi ^+\rangle _{23}`$, where $`|\varphi ^+\rangle _{23}`$ is in general an entangled bipartite state of subsystems 2 and 3. Similarly, failures at later steps of the primary distillation process can yield entangled bipartite states of subsystems 1 and 2 and of subsystems 1 and 3. Thus, when the primary distillation procedure has been completed on a collection of systems in a given input state, we will have an additional residue of entangled bipartite states of subsystems 1 and 2, 1 and 3, and 2 and 3. These entangled pairs can be distilled to EPR pairs by standard techniques , and the resulting EPR pairs can be used to prepare further GHZ triplets.
This is quite similar to the method of , where GHZ states were produced by first distilling EPR pairs between Alice, Bob, and Cara, and then using two pairs to produce a GHZ triplet. (For example, if Alice shares one EPR pair with Bob and another with Cara, she can distribute a GHZ state by preparing it locally and then teleporting the states of two of the subsystems to Bob and Cara with the help of the two EPR pairs.) If, when primary distillation is completed, we produce $`N_{23}`$ EPR pairs of subsystems 2 and 3, $`N_{31}`$ EPR pairs of subsystems 3 and 1, and $`N_{12}`$ EPR pairs of subsystems 1 and 2 from the discarded systems, we will be able to distill a further $`(N_{23}+N_{31}+N_{12})/2`$ GHZ triplets (in the case where none of the $`N`$s is greater than the sum of the other two), or $`(N_{jk}+N_{ki})`$ GHZ triplets (if $`N_{ij}>N_{jk}+N_{ki}`$). Numerical analysis indicates that the average yield for secondary distillation of GHZ states, for the random sample considered, is approximately $`27.5\%`$ giving a total yield of about $`36.7\%`$. The infinitesimal technique does even better, giving a secondary yield of $`29.3\%`$ for a total yield of $`39\%`$.
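For bookkeeping, the pair-combination rule just quoted can be written as a small helper (a sketch under the stated pairing assumptions, not code from this work):

```python
def ghz_from_pairs(n23, n31, n12):
    """GHZ triplets distillable from EPR pairs on the three links:
    total//2 when no single link exceeds the other two combined,
    otherwise the sum of the two smaller link counts."""
    total, largest = n23 + n31 + n12, max(n23, n31, n12)
    return total - largest if largest > total - largest else total // 2

print(ghz_from_pairs(10, 7, 5))   # 11  (= 22 // 2)
print(ghz_from_pairs(20, 3, 2))   # 5   (= 3 + 2)
```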
Since the bulk of this yield comes from the production of EPR pairs, one might reasonably ask how these methods compare to simply producing EPR pairs (with no primary distillation), and then using these pairs to produce GHZ triplets directly . EPR pairs are produced by measuring one of the subsystems in such a way as to maximize the pair-wise entanglement between the other two bits, and then distilling perfect EPR pairs from the resulting states. For the same random sample of states, this technique produces an average yield of $`31.5\%`$, lower than either of the other two techniques, and not much higher than the secondary yield alone. This does not, of course, prove that it is worse for every initial state. However, the closer the initial state is to a GHZ (using any of our distance measures (12)-(14)), the better these distillation procedures perform, while producing GHZs from EPR pairs has a maximum yield of $`50\%`$. (It may be possible to increase this yield asymptotically; but if so, the secondary yield in our protocols will also be increased, so we do not expect this to change our conclusions substantially.)
For some special cases these protocols will not work as described. If the original input state is not three-party entangled, the protocol will fail completely; that is, if the original state can be written as $`|\chi \rangle _i|\zeta \rangle _{jk}`$, no three-party entanglement will be distillable by either primary or secondary distillation. There is another set of states for which primary distillation fails, but which can still produce GHZ states by secondary distillation. This set consists of tripartite input states with just three components, where each component is biorthogonal (but not triorthogonal) to the other two, and local unitary transforms of such states. For example, the state $`|\psi _{\mathrm{tr}}\rangle =b|001\rangle +c|010\rangle +e|100\rangle `$ is of this type. We call such states “triple” states; all have $`D_p\ge 1/2`$, though a substantial subset has $`D_p=1/2`$ exactly. Both forms of the primary distillation process take triple states to triple states, so that the GHZ state will never be produced. The large step procedure causes all triple states to converge to the state $`|\psi _{GM}\rangle =\sqrt{1/2}|001\rangle +\sqrt{(\sqrt{5}-1)/4}|010\rangle +\sqrt{(3-\sqrt{5})/4}|100\rangle `$, at which point any further steps will simply result in a cyclic shuffling of the component amplitudes. We call this attractor state the “golden mean” state because of the appearance of the golden mean in the amplitudes. The infinitesimal procedure leaves all triple states with $`D_p=1/2`$ unchanged. This procedure can also cause certain other states with $`D_p>1/2`$ to converge to triple states rather than GHZ states, though most do not. The large step procedure may also take some states to triple states. Although triple states do not yield any GHZ states by primary distillation, they can of course produce them by secondary distillation.
What is more, it is possible to move off of a triple state (with some probability) by performing a POV measurement in a basis other than the Schmidt basis. That is, instead of using the basis $`|+\rangle _1`$ and $`|-\rangle _1`$ in the transformation (10), one uses a different basis, such as $`(|+\rangle _1\pm |-\rangle _1)/\sqrt{2}`$. Setting $`\alpha `$ to a reasonable value (such as $`\alpha =\sqrt{1/2}`$) will then take triple states to non-triple states. In principle, the state might then re-converge to a triple state, but this is quite unlikely.
We have shown that manipulation of absolutely selective information can be used to distill maximally entangled tripartite states from arbitrary tripartite entangled pure states. This method will not work for systems with four or more subsystems, since for those systems $`p_i=1/2`$ does not uniquely determine the maximally entangled state. It may be that related techniques might succeed, however, if it is possible to manipulate other locally unitarily invariant parameters by local POVs and classical communication. Even in the 3 qubit case, however, the procedure we have described is surely not optimal. The optimal distillation technique is not known, but would almost certainly make use of joint manipulations on many copies of the input state. Nor is an asymptotically reversible distillation technique known for GHZ states . It would be interesting to compare the yields of these two hypothetical techniques. In the bipartite case they are the same, but this need not be so in the tripartite case. Indeed, if the reversible GHZ distillation technique produced any extra two-party entanglement, one would generally expect to be able to produce further GHZ states by an irreversible secondary distillation stage. This suggests that the algorithm giving the optimal yield of GHZs will probably not be reversible. It would also be interesting to compare our yield of GHZ states to some standard measure of tripartite entanglement. Lacking such a measure, however, the best that can be done is to compare different distillation techniques to each other.
There may be a number of other problems in quantum information theory which are amenable to an approach focusing on the absolutely selective information content of quantum systems. For example, work in progress suggests than such an approach can be useful in the analysis of nonorthogonality. Since selective information is a classical concept, this approach also provides a valuable link between classical and quantum information.
We are grateful to Bob Griffiths and Chris Fuchs for helpful discussions. This research was supported by NSF Grant No. PHY-9900755. |
no-problem/0001/astro-ph0001194.html | ar5iv | text | # Discovery of a Brown Dwarf Companion to Gliese 570ABC: A 2MASS T Dwarf Significantly Cooler than Gliese 229B
## 1 Introduction
Direct detection techniques, like those that discovered the prototype T dwarf Gl 229B (Nakajima et al., 1995; Oppenheimer, 1999), have been used for the last 15 years to search for brown dwarfs around nearby stars (for a review of these companion searches see Oppenheimer 1999). Despite the large samples involved, only two bona fide brown dwarf companions have been directly detected, Gl 229B and the young L-type brown dwarf G 196-3B (Rebolo et al., 1998); the companion object GD 165B (Becklin & Zuckerman, 1989) may also be a brown dwarf, although its status is questionable (Kirkpatrick et al., 1999b). Since most of these searches have been confined to a narrow field of view around the primary (typically 10-60″), widely separated companions (we adopt an observational definition of “widely separated” as angular separation greater than 100″; see Fischer & Marcy 1992) may be missed. Indeed, both G 196-3B and Gl 229B are less than 20″ from their primary. Field surveys, such as the Two Micron All Sky Survey (Skrutskie et al., 1997, hereafter 2MASS), the DEep Near Infrared Survey (Epchtein et al., 1997, hereafter DENIS), and the Sloan Digital Sky Survey (York et al., 1999, hereafter SDSS), overcome this limitation. Indeed, Kirkpatrick et al. (2000) have recently identified two L-type brown dwarf companions at wide separation.
We are currently searching the 2MASS catalogs for field T dwarfs (Burgasser et al., 1998), brown dwarfs spectrally identified by CH<sub>4</sub> absorption bands at 1.6 and 2.2 $`\mathrm{\mu m}`$ (Kirkpatrick et al., 1999a). One of our discoveries, 2MASSW J1457150-212148 (hereafter Gl 570D), has been confirmed as a widely separated, common proper motion companion to the Gl 570ABC system. This system is comprised of a K4V primary and a M1.5V-M3V close binary (Duquennoy & Mayor, 1988; Mariotti et al., 1990; Forveille et al., 1999) at a distance of 5.91$`\pm `$0.06 pc (Perryman et al., 1997). In $`\mathrm{\S }`$2 we describe the selection of this object from the 2MASS database, review subsequent observations, and establish its common proper motion with Gl 570ABC. In $`\mathrm{\S }`$3 we estimate L and T<sub>eff</sub> of Gl 570D based on its distance and brightness, and make T<sub>eff</sub> and mass estimates using the evolutionary models of Burrows et al. (1997).
## 2 Identification of Gl 570D
### 2.1 Selection and Confirmation of Gl 570D
Gl 570D was initially selected as a T dwarf candidate from the 2MASS Point Source Catalog. T dwarf candidates were required to have J- and H-band detections with J $`<`$ 16 (the 2MASS signal-to-noise ratio $`\approx `$ 10 limit), J-H $`<`$ 0.3 and H-K<sub>s</sub> $`<`$ 0.3 (limit or detection), $`|b|>15^o`$ (to eliminate source confusion in the Galactic plane), and no optical counterpart in the USNO-A catalog (Monet et al., 1998) within 10″. Close optical doubles not identified by USNO-A and proper motion stars were eliminated by examination of Digitized Sky Survey (DSS) images of the SERC-J and AAO SES (Morgan et al., 1992) surveys. Our search criteria are also sensitive to minor planets, due to their intrinsically blue near-infrared colors (Veeder et al., 1995; Sykes et al., 1999), lack of an optical counterpart at an earlier epoch, and point-like appearance due to the short 2MASS exposure time (7.8 seconds). Follow-up near-infrared imaging to eliminate these objects from our candidate pool was carried out with the Cerro Tololo InfraRed IMager (CIRIM) on the Cerro Tololo Interamerican Observatory (CTIO) Ritchey-Chretien 1.5m during 1999 July 23-25 (UT). Gl 570D was one of only 11 candidates detected in these observations (the remaining candidates were likely asteroids). Optical images of the Gl 570D field from the SERC-J and AAO SES surveys, as well as 2MASS J- and K<sub>s</sub>-band images, are shown in Figure 1; the Gl 570ABC triple can be seen in the lower left corner. No optical counterpart is seen at either the current or projected (proper motion) positions of Gl 570D, indicating very red optical-infrared colors. Table 1 lists 2MASS J, H, and K<sub>s</sub> magnitudes (rows -) and colors (rows -) for Gl 570D, as well as measurements for G 196-3B and Gl 229B taken from the literature (Matthews et al., 1996) and from 2MASS data. Note that Gl 570D has blue near-infrared colors, similar to Gl 229B.
### 2.2 Spectral Data
The 1.6 and 2.2 $`\mathrm{\mu m}`$ fundamental overtone CH<sub>4</sub> bands were identified in Gl 570D from near-infrared spectral data taken with the Ohio State InfraRed Imager/Spectrometer (Depoy et al., 1993, hereafter OSIRIS) on the CTIO Blanco 4m on 1999 July 27 (UT). Using OSIRIS’s cross-dispersion mode, we obtained continuous 1.2-2.3 $`\mathrm{\mu m}`$ spectra with $`\lambda /\mathrm{\Delta }\lambda \approx `$ 1200. The slit width was fixed at 1.2″ for all observations. The object was placed on the slit by direct image centroiding, and then stepped across the slit in seven positions at 3″ intervals (to offset fringing and detector artifacts) with 120-second integrations at each position. A total of 3360 seconds of integration time was acquired. Spectra were then extracted using standard IRAF reduction packages. Raw data were flat-fielded using observations of the 4m illuminated dome spot and software generously supplied by R. Blum at CTIO. Object spectra were extracted using a template from the A1V standard star HR 5696 (Hoffleit & Jaschek, 1982). Wavelength calibration was computed from OH sky lines. Finally, telluric corrections and relative flux calibration were done using the extracted standard spectrum.
The near-infrared spectrum of Gl 570D is shown in Figure 2, along with data for the SDSS T dwarf SDSSp J162414.37+002915.6 (Strauss et al., 1999, hereafter SDSS1624+00) obtained on the same night. Both spectra are normalized at 1.55 $`\mathrm{\mu m}`$, with SDSS1624+00 offset vertically by a constant. Gl 229B spectral data from Geballe et al. (1996), also normalized at 1.55 $`\mathrm{\mu m}`$, are overlain on both for comparison (dark line). The 1.6 and 2.2 $`\mathrm{\mu m}`$ CH<sub>4</sub> bands are present in all three brown dwarfs, as well as combined H<sub>2</sub>O and CH<sub>4</sub> absorption from 1.3 to 1.5 $`\mathrm{\mu m}`$. Suppression of flux around 2.1 $`\mathrm{\mu m}`$ is likely due to increased H<sub>2</sub> absorption in the low temperature atmospheres (Lenzuni, Chernoff, & Salpeter, 1991).
There is a striking similarity in the spectral morphology of these objects; however, the overlaid spectrum of Gl 229B may indicate some subtle differences. There appears to be a slight enhancement in flux (relative to Gl 229B) in SDSS1624+00 at the blue edge of the 1.3 $`\mathrm{\mu m}`$ absorption feature and at the base of the 1.6 $`\mathrm{\mu m}`$ CH<sub>4</sub> absorption band. Conversely, the spectrum of Gl 570D does not show these features and in fact appears slightly deficient at the 1.5 $`\mathrm{\mu m}`$ H<sub>2</sub>O-CH<sub>4</sub> wing and the 2.1 $`\mathrm{\mu m}`$ flux peak. We might expect such variations if SDSS1624+00 is warmer than Gl 229B and Gl 570D cooler, as CH<sub>4</sub> bands at 1.4 and 1.6 $`\mathrm{\mu m}`$ should deepen as the observed temperature decreases, since the conversion of CO to CH<sub>4</sub> will occur at greater optical depth (Burrows & Sharp, 1999). Similarly, there should be increased H<sub>2</sub> absorption in the K-band toward lower temperatures (Burgasser et al., 1999). While metallicity and mixing effects may complicate these simple arguments, the warmer temperature of SDSS 1624+00 is supported by recent detections of FeH and CrH bands in its optical spectrum (Liebert et al., 2000) which are disappearing in the latest L dwarfs (Kirkpatrick et al., 1999a), as well as shallower H<sub>2</sub>O and CH<sub>4</sub> bands in the near-infrared as compared to Gl 229B (Nakajima et al., 2000). The coolness of Gl 570D, based on its association with Gl 570ABC, is discussed below.
### 2.3 Association with Gl 570ABC
The proximity of the bright Gl 570ABC triple led us to suspect possible association for this 2MASS object. Fortunately, the system has a relatively high proper motion of 2.012″$`\pm `$0.002″ yr<sup>-1</sup> (Perryman et al., 1997). In addition, multiple sampling and the 2MASS position reconstruction strategy (Cutri, R. M., et al., Explanatory Supplement to the 2MASS Spring 1999 Incremental Data Release: http://www.ipac.caltech.edu/2mass/releases/spr99/doc/explsup.html) result in a higher astrometric accuracy ($`\approx `$ 0.3″) than the raw pixel scale of the 2MASS detectors (2″), sufficient to measure the motion of this system on a one-year timescale. The original 2MASS scan of the Gl 570D field was taken on 1998 May 16 (UT); a second scan was obtained on 1999 July 29 (UT). Table 2 summarizes the resulting astrometric data, indicating that all components have a common sky motion of 2.3″$`\pm `$0.4″ at position angle 155$`\pm `$8°. The mean motion of all other correlated sources in the same 2MASS scan as Gl 570D with J $`<`$ 15.8 ($`\approx `$ 2000 sources) is 0.0″$`\pm `$0.2″ in right ascension and 0.2″$`\pm `$0.1″ in declination. This statistically significant common proper motion confirms companionship. Gl 570D lies 258.3″$`\pm `$0.4″ from the K4V primary, a projected physical separation of 1525$`\pm `$15 AU. Note that this is an order of magnitude larger than the A-BC separation (24.7″$`\pm `$0.4″) and over three orders of magnitude larger than the B-C separation of 0.1507″$`\pm `$0.0007″ (Forveille et al., 1999). The separation of Gl 570D is compared to those of G 196-3B and Gl 229B in Table 1 (rows -).
The dynamic stability of this system can be addressed using the results of Eggleton & Kiseleva (1995) with the separations listed in Table 2 (we assume face-on projection and negligible eccentricity in this order-of-magnitude analysis) and masses of 0.7 M<sub>☉</sub> for Gl 570A (estimated from the measured mass of the M0Ve eclipsing binary YY Gem; Bopp 1974), 1.0 M<sub>☉</sub> for the combined Gl 570BC (directly measured by Forveille et al. 1999), and 0.05 M<sub>☉</sub> for Gl 570D (estimated, as discussed below). We find that the system is dynamically stable for eccentricities less than about 0.6. A more rigorous analysis using measured orbital parameters is precluded by the roughly 40,000-year period of Gl 570D around the Gl 570ABC barycenter.
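For orientation, the quoted period follows from Kepler's third law with these masses (a minimal sketch, assuming a face-on circular orbit as in the text):

```python
import math

a_AU = 1525.0               # projected A-D separation (AU)
M_sun = 0.7 + 1.0 + 0.05    # total system mass (solar masses)
P_yr = math.sqrt(a_AU**3 / M_sun)
print(f"P ~ {P_yr:.0f} yr")  # ~ 45,000 yr, i.e. roughly 40,000 years
```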
## 3 Estimates of the Physical Properties of Gl 570D
Distance moduli and absolute J magnitudes for the three brown dwarf companions G 196-3B, Gl 229B, and Gl 570D, based on the distances to their respective primaries, are listed in Table 1 (rows -). Gl 570D is nearly a magnitude fainter than Gl 229B in all three near-infrared bands. If we assume a Gl 229B J-band bolometric correction of 2.19$`\pm `$0.10 (Leggett, Geballe, & Brown, 1999) and a radius of (7.0$`\pm `$0.5)x10<sup>9</sup> cm $`\approx `$ 1 Jupiter radius (Burrows & Liebert, 1993), we then derive L = (2.8$`\pm `$0.3)x10<sup>-6</sup> L<sub>☉</sub> and T<sub>eff</sub> = 750$`\pm `$50 K, roughly 200 K cooler than Gl 229B, making Gl 570D the least luminous and coolest brown dwarf thus far detected. More accurate determinations of the effective temperature and mass of Gl 570D can be made using brown dwarf evolutionary models, but only if we can constrain its age ($`\tau `$). The proximity of Gl 570ABC has permitted detailed studies of kinematic properties, activity, and high energy emission (UV and X-ray), leading to various age estimates for the system (Leggett, 1992; Poveda et al., 1993; Fleming, Schmitt, & Giampapa, 1995). There is a general consensus among these authors that this system is older than 2 Gyr, which is supported by the lack of activity in the close BC binary (Reid, Hawley, & Gizis, 1995). The solar-like metallicity of Gl 570ABC (Forveille et al., 1999) and the system’s total space motion of $`\approx `$ 60 km s<sup>-1</sup> (Leggett, 1992) constrain formation to the Galactic disk, which establishes a rough upper limit of about 10 Gyr. Using the evolutionary models of Burrows et al. (1997) and adopting log (L/L<sub>☉</sub>) = -5.56$`\pm `$0.05 and $`\tau `$ = 6$`\pm `$4 Gyr, we derive values of T<sub>eff</sub> = 790$`\pm `$40 K and M = 50$`\pm `$20 M<sub>Jup</sub> (1 M<sub>Jup</sub> = 1.9x10<sup>33</sup> grams = 0.0095 M<sub>☉</sub>) (Table 1, rows -). The effective temperature is consistent with the brightness estimate above, and is significantly lower than those of G 196-3B and Gl 229B. Perhaps most interesting is that, despite having the lowest T<sub>eff</sub>, Gl 570D could possibly be the most massive of these three brown dwarfs. This accentuates the difficulty of basing comparisons of brown dwarfs on brightness and/or temperature alone, and the importance of age determinations in deriving the physical properties of cool brown dwarfs. More accurate estimates of this object’s properties require spectral modeling and additional broad-band photometry, and will be addressed in a future paper.
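These numbers can be reproduced from the stated inputs (our own sketch; the absolute J magnitude M<sub>J</sub> ≈ 16.5 is read off Table 1, which is not reproduced here, and M<sub>bol,☉</sub> = 4.74 is an assumed input):

```python
import math

M_J, BC_J = 16.5, 2.19             # absolute J mag (assumed), bol. correction
M_bol_sun, L_sun = 4.74, 3.85e26   # solar bolometric magnitude; L_sun in W
R, sigma = 7.0e7, 5.67e-8          # radius in m (= 7.0e9 cm); W m^-2 K^-4

L = L_sun * 10 ** (-0.4 * (M_J + BC_J - M_bol_sun))
T_eff = (L / (4 * math.pi * R**2 * sigma)) ** 0.25
print(f"L ~ {L / L_sun:.1e} L_sun,  T_eff ~ {T_eff:.0f} K")
# ~2.6e-6 L_sun and ~730 K: consistent with the quoted values.
```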
A. J. B. acknowledges Robert Blum and Ron Probst for their guidance at the telescope and in the reduction process, the capable assistance of CTIO telescope operators Mauricio Fernandez and Alberto Zúniga, useful discussions with Mark Marley, helpful comments from the anonymous referee, and Daniel Durand for dealing with high volumes of image requests on the CADC DSS server. We thank the 2MASS staff and scientists for their efforts in creating a truly incredible astronomical resource. DSS images were obtained from the Canadian Astronomy Data Centre, which is operated by the Herzberg Institute of Astrophysics, National Research Council of Canada. A. J. B., J. D. K., I. N. R., and J. L. acknowledge funding through a NASA/JPL grant to 2MASS Core Project science. A. J. B., J. D. K., R. M. C., and C. A. B. acknowledge the support of the Jet Propulsion Laboratory, California Institute of Technology, which is operated under contract with the National Aeronautics and Space Administration. This publication makes use of data from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center, funded by the National Aeronautics and Space Administration and the National Science Foundation. |
no-problem/0001/hep-th0001080.html | ar5iv | text | # Three dimensional gravity from ISO(2,1) coset models
## I Introduction
The observational evidence for the existence of black holes in nature is now very strong. The data support the existence of both supermassive black holes at the centers of galaxies and smaller (a few solar masses up to a few tens of solar masses) black holes in binary systems . The best candidate for a unified theory of all the physical phenomena observed so far, including black holes, is string theory and, indeed, several black $`p`$-brane solutions have been found in various space-time dimensions in the low energy limit of this theory (for a review see, e.g., Ref. ). However, only one black hole is known to exist in the three-dimensional low energy limit of string theory and it coincides with the only known black hole in three-dimensional Einstein gravity , to wit the (BTZ) black hole of Bañados, Teitelboim and Zanelli (see also Ref. ).
Although the BTZ black hole is not useful as a global description of real black holes (for example, the curvature of the BTZ black hole is constant and there are no gravitational waves in three dimensions), it does provide a manageable model of string propagation on a black background in which an infinite number of propagating modes is present. The Green’s function for this black hole can be constructed, and the quantum stress tensor can be calculated from it . This system has also been used to study such problems as the quantization of a string on a black hole background (see and Refs. therein).
Recently, the theoretical interest in the BTZ black hole has also been raised by the conjectured AdS/CFT correspondence , according to which all the relevant quantities of the gravitational field theory in the bulk of the anti-de Sitter (AdS) space-time (or any space-time with a time-like boundary) can be described in terms of a conformal field theory (CFT) on the boundary. Thus, by applying this conjecture to the black $`p`$-branes there is some hope of describing the complete evolution of a black hole, from its formation to the evaporation , and solve the riddle of its final fate (see Ref. for a list of still unanswered questions). However, it is not clear whether the AdS/CFT correspondence extends beyond perturbation theory on a given background manifold as the solution of the black hole problem would require in order to compute the backreaction of the evaporation radiation on the geometry .
Because of the usefulness of three-dimensional black holes as prototypes for four-dimensional black holes in string theory, a search for a second exact, three-dimensional black hole in string theory would seem to be a worthwhile pursuit, especially if one could be found which has non-negative curvature. This work describes our attempt to obtain such a solution, starting from a Wess-Zumino-Witten-Novikov (WZWN) model in the Poincaré group $`ISO(2,1)`$ . Our procedure for obtaining a three-dimensional metric is to promote the six parameters of the $`ISO(2,1)`$ group to space-time variables and then to reduce space-time to three dimensions by various compactifications. After each compactification we investigate the symmetries of the resulting model.
A partial result has been obtained, since we can now show that the string theory we start from can be compactified in such a way as to yield either a (linear dilaton) vacuum or AdS<sub>3</sub> (and hence the BTZ black hole). We also obtain other solutions which contain a non-trivial dilaton field and, thus, might be of interest for studying the evaporation.
In Section II we review the WZWN Poincaré action in three dimensions and, in Section III, its coset descendants $`ISO(2,1)/\text{I R}^n`$ . In Section IV we specialize to the case when I R is the translation in the time direction and, in Section V, we further compactify to three-dimensional space-time in which we recover the AdS (and BTZ) manifold. In Section VI we describe other solutions and their T-duals, and finally comment on our results in Section VII. For the metric and other geometrical quantities we follow the convention of Ref. .
## II ISO(2,1) WZWN Models
The WZWN construction starts with the $`\sigma `$-model action at level $`\kappa `$
$`S_\sigma ={\displaystyle \frac{\kappa }{4}}{\displaystyle \int _{\partial \mathcal{M}}}d^2\sigma \mathrm{Tr}\left(g^{-1}\partial _+g\,g^{-1}\partial _{-}g\right)-{\displaystyle \frac{\kappa }{4}}{\displaystyle \int _{\mathcal{M}}}d^3\zeta \mathrm{Tr}\left(g^{-1}\partial g\,g^{-1}\partial g\,g^{-1}\partial g\right),`$ (1)
where in the present case $`g`$ is an element of the Poincaré group $`ISO(2,1)`$ and $`\sigma ^\pm =\tau \pm \sigma `$ are light-cone coordinates on the boundary $`\partial \mathcal{M}`$ of the three-dimensional manifold $`\mathcal{M}`$.
The elements of $`ISO(2,1)`$ can be written using the notation $`g=(\mathrm{\Lambda },v)`$, where $`\mathrm{\Lambda }\in SO(2,1)`$ and $`v\in \text{I R}^3`$. Given the map $`g:\mathcal{M}=D^2\times \text{I R}\to ISO(2,1)`$ from the two-dimensional disc$`\times `$time to $`ISO(2,1)`$, the action can be written entirely on the boundary $`\text{I R}\times S^1`$ and it describes a closed bosonized spinning string moving in 2+1 Minkowski space-time with coordinates $`v^i`$ ,
$`S={\displaystyle \frac{\kappa }{4}}{\displaystyle \int _{\partial \mathcal{M}}}d^2\sigma ϵ^{ijk}\left(\partial _+\mathrm{\Lambda }\mathrm{\Lambda }^{-1}\right)_{ij}\partial _{-}v_k,`$ (2)
where $`ϵ^{ijk}`$ is the Levi-Civita symbol in three dimensions and the metric tensor is $`\eta _{ij}=\mathrm{diag}[-1,+1,+1]`$ ($`i,j,\mathrm{\dots }=0,1,2`$).
The basic property of the action $`S`$ is that it is invariant under
$`g\to g__L(\sigma ^+)\,g\,g__R^{-1}(\sigma ^{-}),`$ (3)
where $`g_{_{L/R}}\in ISO(2,1)`$, and also under the left and right action of the group of diffeomorphisms of the world-sheet . Starting from this observation, the canonical structure of the model can be computed by reverting to the “chiral” version of Eq. (2), which is obtained by formally replacing $`\sigma ^+\to \tau \in \text{I R}`$ and $`\sigma ^{-}\to \sigma \in (0,2\pi )`$ . One then finds two sets of conserved current densities, the first of which is given by
$`P^i(\sigma )={\displaystyle \frac{\kappa }{2}}ϵ^{ijk}\left(\mathrm{\Lambda }^{-1}\partial _\sigma \mathrm{\Lambda }\right)_{jk}`$ (4)
$`J^i(\sigma )=\kappa \left(\mathrm{\Lambda }^{-1}\partial _\sigma v\right)^i,`$ (5)
with Poisson brackets
$`\{P^i(\sigma ),P^j(\sigma ^{\prime })\}=0`$ (6)
$`\{J^i(\sigma ),J^j(\sigma ^{\prime })\}=ϵ_{ijk}J_k(\sigma )\delta (\sigma -\sigma ^{\prime })`$ (7)
$`\{J^i(\sigma ),P^j(\sigma ^{\prime })\}=ϵ^{ijk}P_k(\sigma )\delta (\sigma -\sigma ^{\prime })+\kappa \eta ^{ij}{\displaystyle \frac{\partial }{\partial \sigma }}\delta (\sigma -\sigma ^{\prime }),`$ (8)
and generate $`L^{}ISO(2,1)`$, the Poincaré loop group with the central extension given by the last term in Eq. (8). This is the algebra of the right transformations in Eq. (3), since in the chiral picture $`g__R(\sigma ^{-})\to g__R(\sigma )`$ has become a space-dependent transformation of the field $`g`$ on the world-sheet. The (time dependent) left chiral transformation, $`g__L(\sigma ^+)\to g__L(\tau )`$, in Eq. (3) is now an $`ISO(2,1)`$ invariance generated by the zero Fourier modes of the second set of (weakly vanishing) current densities
$`\overline{P}^i={\displaystyle \frac{\kappa }{2}}ϵ^{ijk}\left(\partial _\sigma \mathrm{\Lambda }\mathrm{\Lambda }^{-1}\right)_{jk}`$ (9)
$`\overline{J}^i=\kappa \left[\mathrm{\Lambda }\partial _\sigma (\mathrm{\Lambda }^{-1}v)\right]^i.`$ (10)
The latter commute with $`P^i`$ and $`J^i`$ and have Poisson brackets among themselves given by Eqs. (6), (7) and (8) with a central extension opposite in sign . One then concludes that (half) the (classical) gauge invariant phase space of the model is $`L^{}ISO(2,1)/ISO(2,1)`$.
As usual, one expects the Fourier modes of $`P_i`$ and $`J_i`$ (the Kac-Moody generators) in turn yield a Virasoro algebra (for each chiral sector) whose generators $`L_n`$ are obtained via the Sugawara construction. However, there is a potential problem in the quantum theory since the standard highest weight construction , which would give a central charge $`c=\mathrm{dim}ISO(2,1)=6`$, fails to deliver unitary representations because not all negative norm states are suppressed by the conditions $`\widehat{L}_n|\mathrm{phys}\rangle =0`$ for all $`n\ge 0`$. Spaces of positive norm states can instead be obtained by employing the method of induced representations which yields the Virasoro generators as Fourier modes of
$`L(\sigma )={\displaystyle \frac{1}{\kappa }}J^i(\sigma )P_i(\sigma ),`$ (11)
and a central charge $`c=0`$ for each chiral sector . In either case, the total central charge of the model, after adding the ghost contribution , is $`c_T=c-26`$ and one must eventually add $`26-c`$ bosonic degrees of freedom in order to have a quantum model which is free of anomaly.
The action (2) is also one of the two exceptional cases described in Ref. , where it was shown that, if one considers all parameters of the six-dimensional Poincaré group as space-time coordinates, then $`S`$ describes a spinless string moving on a curved background with six-dimensional metric. It was also proved that this action is in some sense unique, since no generalization of the kind studied in Refs. exists for the Poincaré group in three dimensions.
## III Coset Models
The action (2) is not invariant under the local action of any subgroup $`H`$ of $`ISO(2,1)`$ given by
$`h\circ g:g\to h__L(\sigma ^{-},\sigma ^+)\,g\,h__R^{-1}(\sigma ^{-},\sigma ^+),`$ (12)
where now $`h_{_{L/R}}=(\theta _{_{L/R}},y_{_{L/R}})\in H`$, due to the dependence of $`h__L`$ on $`\sigma ^{-}`$ and of $`h__R`$ on $`\sigma ^+`$. However, $`H`$ can in general be promoted to a gauge symmetry of the action by introducing suitable gauge fields $`A_\pm =(\omega _\pm ,\xi _\pm )`$ belonging to the Lie algebra of $`H`$, and the corresponding covariant derivatives $`D_\pm =\partial _\pm +A_\pm `$.
In order that $`ISO(2,1)/H`$ be a coset, $`H`$ must be normal, $`Hg=gH`$, under the action defined in Eq. (12). This means that, for all $`g\in ISO(2,1)`$ and $`h\in H`$, there must exist an $`\overline{h}\in H`$ such that $`hgh^{-1}=\overline{h}^{-1}g\overline{h}`$, and we thus find that the only possible choices are subgroups of the translation group $`\text{I R}^3`$, that is $`h_{_{L/R}}=(\text{1 I},y_{_{L/R}}^{\overline{n}})`$, where $`\overline{n}`$ runs in a subset of $`\{0,1,2\}`$ and 1 I is the identity in $`SO(2,1)`$. In this case, by inspecting the action (2) one argues that $`\omega _\pm =\xi _+=0`$, and $`\xi _{-}^i=0`$ iff the translation in the $`i`$ direction is not included in $`H`$. The gauged action finally reads
$`S_g={\displaystyle \frac{\kappa }{4}}{\displaystyle \int d^2\sigma ϵ^{ijk}\left(\partial _+\mathrm{\Lambda }\mathrm{\Lambda }^{-1}\right)_{ij}\left(\partial _{-}v+\xi _{-}\right)_k}.`$ (13)
For the ungauged action in Eq. (2) the equations of motion $`\delta _vS=0`$, which follow from the variation $`v\to v+\delta v`$ with $`\delta v`$ an infinitesimal 2+1 vector, lead to the conservation of the six momentum currents on the light cone of the string world-sheet,
$`\partial _{-}P_+^i=\partial _+P_{-}^i=0,`$ (14)
where $`P_+^i`$ is given by $`\overline{P}^i`$ in Eq. (9) with $`\sigma \to \sigma ^+`$ and $`P_{-}^i`$ by $`P^i`$ in Eq. (4) with $`\sigma \to \sigma ^{-}`$. In the gauged case this variation must be supplemented by the condition that the gauge field varies under an infinitesimal $`H`$ transformation according to
$`\xi _{-}^{\overline{n}}\to \xi _{-}^{\overline{n}}-\partial _{-}(\delta v^{\overline{n}}),`$ (15)
and from $`\delta _vS_g=0`$ one obtains
$`\partial _{-}P_+^{i\ne \overline{n}}=0,`$ (16)
so that only the currents $`P_+^{i\ne \overline{n}}`$ are still conserved.
Similarly, from the variation $`\mathrm{\Lambda }\to \mathrm{\Lambda }+\delta \mathrm{\Lambda }`$, $`\delta \mathrm{\Lambda }=\mathrm{\Lambda }ϵ`$ and $`\delta v=ϵv`$ with $`ϵ_{ij}=-ϵ_{ji}`$ an infinitesimal skewsymmetric matrix, the equations $`\delta _ϵS=0`$ lead to the conservation of the six angular momentum currents $`J_{-}^i=J^i`$ in Eq. (5) with $`\sigma \to \sigma ^{-}`$ and $`J_+^i=\overline{J}^i`$ in Eq. (10) with $`\sigma \to \sigma ^+`$. When interpreted as components of the string angular momentum in the target space-time, these currents are shown to include a contribution of intrinsic (non orbital) spin . In the gauged case, by making use of Eqs. (15) and (16), one obtains
$`\partial _+J_{-}^i=\kappa \partial _+(\mathrm{\Lambda }\xi _{-})^i,`$ (17)
so that the currents $`J_{-}^i`$ couple to the gauge field.
Since the gauge field is not dynamical, we are now free to choose $`\mathrm{dim}H`$ gauge conditions to be satisfied by the elements of $`ISO(2,1)/H`$. A natural choice is
$`\xi _{-}^{\overline{n}}=-\partial _{-}v^{\overline{n}},`$ (18)
so that the previous equations of motion become the same as $`\delta _vS_{eff}^{(\overline{n})}=\delta _ϵS_{eff}^{(\overline{n})}=0`$ obtained by varying the effective action
$`S_{eff}^{(\overline{n})}={\displaystyle \int d^2\sigma \underset{k\ne \overline{n}}{\sum }P_+^k\partial _{-}v_k},`$ (19)
where the sum runs over only the indices corresponding to the translations not included in $`H`$.
An explicit form for the effective action (19) can be obtained by writing an $`SO(2,1)`$ matrix as a product of two rotations (of angles $`\alpha `$ and $`\gamma `$) and a boost ($`\beta `$) ,
$`\mathrm{\Lambda }_j^i`$ $`=`$ $`\left[\begin{array}{ccc}1& 0& 0\\ 0& \mathrm{cos}\alpha & \mathrm{sin}\alpha \\ 0& -\mathrm{sin}\alpha & \mathrm{cos}\alpha \end{array}\right]\left[\begin{array}{ccc}\mathrm{cosh}\beta & 0& \mathrm{sinh}\beta \\ 0& 1& 0\\ \mathrm{sinh}\beta & 0& \mathrm{cosh}\beta \end{array}\right]\left[\begin{array}{ccc}1& 0& 0\\ 0& \mathrm{cos}\gamma & \mathrm{sin}\gamma \\ 0& -\mathrm{sin}\gamma & \mathrm{cos}\gamma \end{array}\right],`$ (29)
which yields
$`\begin{array}{c}P_+^0={\displaystyle \frac{\kappa }{2}}\left(\partial _+\alpha +\mathrm{cosh}\beta \partial _+\gamma \right)\hfill \\ \\ P_+^1={\displaystyle \frac{\kappa }{2}}\left(\mathrm{cos}\alpha \partial _+\beta +\mathrm{sin}\alpha \mathrm{sinh}\beta \partial _+\gamma \right)\hfill \\ \\ P_+^2={\displaystyle \frac{\kappa }{2}}\left(\mathrm{sin}\alpha \partial _+\beta -\mathrm{cos}\alpha \mathrm{sinh}\beta \partial _+\gamma \right).\hfill \end{array}`$ (35)
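As a quick consistency check of the parametrization (29) (our own numerical sketch, with the sign convention for the rotations as written above), one can verify that the product of the two rotations and the boost preserves the metric $`\eta =\mathrm{diag}[-1,+1,+1]`$:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0])

def Lam(alpha, beta, gamma):
    """Rotation(alpha) * boost(beta) * rotation(gamma) of eq. (29)."""
    def rot(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])
    ch, sh = np.cosh(beta), np.sinh(beta)
    boost = np.array([[ch, 0, sh], [0, 1, 0], [sh, 0, ch]])
    return rot(alpha) @ boost @ rot(gamma)

L = Lam(0.3, 1.2, -0.7)
print(np.allclose(L.T @ eta @ L, eta))   # True: Lambda is in SO(2,1)
```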
In the following we shall gauge a specific one-dimensional subgroup which allows the number of degrees of freedom to be reduced from six to four.
## IV Gauging the Time Translations
We gauge the subgroup $`H=\{(\text{1 I},y^0)\}`$ of the translations in the time direction. This choice is peculiar, since no derivative of $`\alpha `$ occurs in $`P_+^1`$ and $`P_+^2`$, and we can then rotate the variables $`v^1`$ and $`v^2`$ by an angle $`\alpha `$ ,
$`\left[\begin{array}{c}\partial _{-}\stackrel{~}{v}^1\\ \partial _{-}\stackrel{~}{v}^2\end{array}\right]\equiv \left[\begin{array}{cc}\mathrm{cos}\alpha & \mathrm{sin}\alpha \\ -\mathrm{sin}\alpha & \mathrm{cos}\alpha \end{array}\right]\left[\begin{array}{c}\partial _{-}v^1\\ \partial _{-}v^2\end{array}\right].`$ (42)
This can be considered as an internal symmetry of the effective theory which is used to further simplify the action in Eq. (19) with $`\overline{n}=0`$ to the form
$`S_{eff}^{(0)}={\displaystyle \frac{\kappa }{2}}{\displaystyle \int d^2\sigma \left[\partial _+\beta \partial _{-}\stackrel{~}{v}^1-\mathrm{sinh}\beta \partial _+\gamma \partial _{-}\stackrel{~}{v}^2\right]}.`$ (43)
In the following we shall find it more convenient to regard $`\beta `$, $`\gamma `$, $`\stackrel{~}{v}^2`$ and $`\stackrel{~}{v}^1`$ as canonical (field) variables by foliating the closed string world-sheet with circles of constant time $`\tau `$ . Their conjugate momenta are then given by
$`\begin{array}{c}P^1\equiv {\displaystyle \frac{\delta S_{eff}^{(0)}}{\delta \partial _\tau \beta }}={\displaystyle \frac{\kappa }{2}}\partial _{-}\stackrel{~}{v}^1\hfill \\ \\ P^2\equiv {\displaystyle \frac{\delta S_{eff}^{(0)}}{\delta \partial _\tau \gamma }}=-{\displaystyle \frac{\kappa }{2}}\mathrm{sinh}\beta \partial _{-}\stackrel{~}{v}^2\hfill \\ \\ P^3\equiv {\displaystyle \frac{\delta S_{eff}^{(0)}}{\delta \partial _\tau \stackrel{~}{v}_2}}=-{\displaystyle \frac{\kappa }{2}}\mathrm{sinh}\beta \partial _+\gamma =P_+^2(\alpha =0)\hfill \\ \\ P^4\equiv {\displaystyle \frac{\delta S_{eff}^{(0)}}{\delta \partial _\tau \stackrel{~}{v}_1}}={\displaystyle \frac{\kappa }{2}}\partial _+\beta =P_+^1(\alpha =0).\hfill \end{array}`$ (51)
The above relations can be inverted to express the velocities in terms of the momenta. Therefore, the action (43) does not contain any constraint and its canonical structure can be analyzed straightforwardly. The absence of constraints also signals the fact that all (explicit) symmetries of the original model have been “gauge fixed” and $`\stackrel{~}{v}^1`$, $`\stackrel{~}{v}^2`$, $`\beta `$ and $`\gamma `$ are physical degrees of freedom. We are then allowed to consider them all as target space-time coordinates for the compactified string.
From the point of view of the target space-time, with dimensionless coordinates $`X^1=\beta `$, $`X^2=\gamma `$, $`X^3=\stackrel{~}{v}^2`$ and $`X^4=\stackrel{~}{v}^1`$, the action (43) can be written as
$`S_{eff}^{(0)}`$ $`=`$ $`{\displaystyle \frac{\kappa }{2}}{\displaystyle d^2\sigma \left(\eta ^{ab}+ϵ^{ab}\right)\left[_aX^1_bX^4F(X^1)_aX^2_bX^3\right]}`$ (52)
$`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle d^2\sigma \left[\eta ^{ab}G_{\mu \nu }^{(4)}_aX^\mu _bX^\nu +ϵ^{ab}B_{\mu \nu }^{(4)}_aX^\mu _bX^\nu \right]},`$ (53)
where $`F\mathrm{sinh}X^1`$, $`\eta ^{ab}`$ and $`ϵ^{ab}`$ are, respectively, the Minkowski tensor and the Levi-Civita symbol in two dimensions and $`\mu ,\nu ,\mathrm{}=1,\mathrm{},4`$. By interpreting $`\kappa =\mathrm{}_s^2`$ as the square of the fundamental (string) length, the symmetric tensor $`𝑮^{(4)}`$ in the chosen reference frame has components
$`G_{\mu \nu }^{(4)}=\mathrm{}_s^2\left[\begin{array}{cccc}0& 0& 0& 1\\ 0& 0& F& 0\\ 0& F& 0& 0\\ 1& 0& 0& 0\end{array}\right]`$ (58)
and is the space-time metric with signature $`2+2`$ and $`G^{(4)}det𝑮^{(4)}=\mathrm{}_s^8F^2`$. The antisymmetric tensor $`𝑩^{(4)}`$ has components
$`B_{\mu \nu }^{(4)}=\mathrm{}_s^2\left[\begin{array}{cccc}0& 0& 0& 1\\ 0& 0& F& 0\\ 0& F& 0& 0\\ 1& 0& 0& 0\end{array}\right]`$ (63)
and is the axion potential.
The Euler-Lagrange equations of motion can be written as
$`\begin{array}{c}\delta _\beta S_{eff}^{(0)}={\displaystyle \frac{\kappa }{2}}\left[_+_{}X^4+\sqrt{1F^2}_+X^2_{}X^3\right]=0\hfill \\ \\ \delta _\gamma S_{eff}^{(0)}={\displaystyle \frac{\kappa }{2}}_+\left(F_{}X^3\right)=_+P^2=0\hfill \\ \\ \delta _{\stackrel{~}{v}^2}S_{eff}^{(0)}={\displaystyle \frac{\kappa }{2}}_{}\left(F_+X^2\right)=_{}P^3=0\hfill \\ \\ \delta _{\stackrel{~}{v}^1}S_{eff}^{(0)}={\displaystyle \frac{\kappa }{2}}_{}_+X^1=_{}P^4=0,\hfill \end{array}`$ (71)
from which one sees that three of the canonical momenta ($`P^2`$, $`P^3`$ and $`P^4`$) are conserved along (one of the two) null directions of the world-sheet. We also note that $`X^1`$ is a “flat” target space direction, since the fourth of Eqs. (71) is the free wave equation whose general solution is given by
$`X^1=X_L^1+X_R^1,`$ (72)
where the arbitrary functions $`X_L^\mu =X_L^\mu (\sigma ^+)`$ stand for left-moving and $`X_R^\mu =X_R^\mu (\sigma ^{})`$ for right-moving waves.
The system of Eqs. (71) considerably simplifies for zero canonical momentum modes along $`X^2`$ ($`X^3=X_L^3`$) or $`X^3`$ ($`X^2=X_R^2`$), in which cases $`X^4=X_L^4+X_R^4`$. When both $`P^2`$ and $`P^3`$ vanish one then has the simple solution
$`\{\begin{array}{c}X^1=X_L^1+X_R^1\hfill \\ \\ X^2=X_R^2\hfill \\ \\ X^3=X_L^3\hfill \\ \\ X^4=X_L^4+X_R^4,\hfill \end{array}`$ (80)
which describes free wave modes in all of the four space-time directions. More general solutions would instead describe wave modes which propagate along a direction but couple with modes propagating in (some of) the other directions.
## V The final compactification
Upon using the fact that $`X^1=\beta `$ is a flat direction for the propagating string, we impose a further compactification condition in order to eliminate $`X^4=\stackrel{~}{v}^1`$,
$`e^{2\lambda }_\pm x^1=_\pm X^4,`$ (81)
where $`\lambda `$ is, at present, an arbitrary function of $`x^1X^1`$. We also define the two coordinates $`x^0`$ and $`x^2`$ according to
$`\begin{array}{c}_+X^2=e^\rho \left(c_1_+x^0+c_2^1_+x^2\right)\hfill \\ \\ _+X^3=e^\rho \left(c_1^1_{}x^0c_2_{}x^2\right),\hfill \end{array}`$ (85)
with $`\rho =\rho (x^1)`$ and $`c_1`$ and $`c_2`$ are non-zero real constants. This reduces the action (53) to
$`S_3`$ $`=`$ $`{\displaystyle \frac{\kappa }{2}}{\displaystyle d^2\sigma \left[e^{2\lambda }_+x^1_{}x^1e^{2\rho }F\left(_+x^0_{}x^0_+x^2_{}x^2\right)+c_1c_2e^{2\rho }F\left(_+x^0_{}x^2_+x^2_{}x^0\right)\right]}`$ (86)
$`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle d^2\sigma \left[\eta ^{ab}G_{ij}^{(3)}_ax^i_bx^j+ϵ^{ab}B_{ij}^{(3)}_ax^i_bx^j\right]},`$ (87)
where now the three-metric $`𝑮^{(3)}`$ has components
$`G_{ij}^{(3)}=\mathrm{}_s^2\mathrm{diag}[e^{2\rho }F,e^{2\lambda },e^{2\rho }F],`$ (88)
and signature $`2+1`$. Further, the only non-vanishing independent component of the axion potential $`𝑩^{(3)}`$,
$`B_{02}^{(3)}=\mathrm{}_s^2c_1c_2e^{2\rho }F,`$ (89)
does not depend on $`\lambda `$.
We then observe that the axion field strength in three dimensions must be proportional to the Levi-Civita (pseudo)tensor,
$`H_{ijk}=\sqrt{G^{(3)}}ϵ_{ijk},`$ (90)
where $`=(x^i)`$ is a function of the space-time coordinates to be determined from the field equations and $`\sqrt{G^{(3)}}\sqrt{det𝑮^{(3)}}`$ is the volume element. In the present case we have
$`H_{012}=_1B_{20}^{(3)}=\mathrm{}_s^2c_1c_2e^{2\rho }\left(\sqrt{1F^2}+2F\rho ^{}\right),`$ (91)
which yields
$`={\displaystyle \frac{c_1c_2}{\mathrm{}_s}}e^\lambda \left({\displaystyle \frac{\sqrt{1F^2}}{F}}+2\rho ^{}\right).`$ (92)
With respect to the particular solution (80), we note that the condition (81) can be safely imposed only if $`X_L^4=0`$, and then the compactification condition becomes
$`e^{2\lambda (x_R^1)}_{}x_R^1=_{}X_R^4,`$ (93)
or $`X_R^4=0`$ and
$`e^{2\lambda (x_L^1)}_+x_L^1=_+X_L^4.`$ (94)
The functions $`\rho `$, $`\lambda `$ and the constants $`c_1`$, $`c_2`$ can then be determined by noting that the low energy string action in three dimensions is given by (we set $`\mathrm{}_s=1`$ henceforth)
$`S_{low}={\displaystyle d^3x\sqrt{G^{(3)}}e^{2\varphi }\left[R+\frac{4}{k}+4_k\varphi ^k\varphi \frac{1}{12}H_{ijk}H^{ijk}\right]},`$ (95)
where $`4/k`$ is a cosmological constant, $`R`$ the scalar curvature, $``$ the covariant derivative with respect to the metric $`𝑮^{(3)}`$ and $`\varphi `$ the dilaton. On varying $`S_{low}`$ one obtains the field equations
$`R_{ij}+2_i_j\varphi {\displaystyle \frac{1}{4}}H_{ikl}H_j^{kl}=0`$ (96)
$`_k\left(e^{2\varphi }H_{ij}^k\right)=0`$ (97)
$`4_k^k\varphi 4_k\varphi ^k\varphi +{\displaystyle \frac{4}{k}}+R{\displaystyle \frac{1}{12}}H_{ijk}H^{ijk}=0,`$ (98)
which must be satisfied by the metric (88) and the axion obtained from the potential (89).
### A Linear dilaton vacuum
First we observe that for
$`e^{2\rho }=\pm F,`$ (99)
the metric (88) becomes the flat Minkowski metric
$`ds^2=\left(dx^0\right)^2+\left(dz_\pm \right)^2\pm \left(dx^2\right)^2,`$ (100)
where the upper signs correspond to $`x^1>0`$ ($`F>0`$) and the lower signs to $`x^1<0`$ ($`F<0`$) and the new coordinate $`z`$ is determined by
$`dz_\pm =e^\lambda dx^1={\displaystyle \frac{dx^1}{\sqrt{\pm F}}}.`$ (101)
Correspondingly $`𝑩^{(3)}`$ is constant and the axion vanishes, thus the field equation (97) is trivially satisfied.
The remaining Eqs. (96) and (98) yield the following expression for the dilaton field
$`\varphi =a+{\displaystyle \frac{x^0}{b}}+{\displaystyle \frac{z_\pm }{c}}+{\displaystyle \frac{x^2}{d}},`$ (102)
where the constant $`a`$ is arbitrary and the constants $`b`$, $`c`$ and $`d`$ must satisfy
$`\pm {\displaystyle \frac{1}{b^2}}{\displaystyle \frac{1}{c^2}}{\displaystyle \frac{1}{d^2}}={\displaystyle \frac{1}{k}}.`$ (103)
This solution represents a linear dilaton vacuum. When $`k\mathrm{}`$ one of course obtains the trivial form for such a vacuum with $`\varphi =a`$.
We finally observe that along $`x^1=0`$ there occurs a signature flip, so that the roles of $`x^0`$ and $`x^2`$ as, respectively, a time coordinate and a spatial coordinate are exchanged. We shall find the same feature again in the following.
### B Recovering AdS<sub>3</sub> and BTZ
If we assume
$`={\displaystyle \frac{2}{\mathrm{}}},`$ (104)
where $`\mathrm{}`$ is a constant, then the field equations (96)-(98) are satisfied by choosing $`\rho =0`$, $`c_1c_2=1`$ and
$`e^{2\lambda }={\displaystyle \frac{\mathrm{}^2}{4}}\mathrm{coth}^2x^1,`$ (105)
which yields
$`X^4={\displaystyle \frac{\mathrm{}}{2}}\left(x^1\mathrm{coth}x^1\right)+X_0^4,`$ (106)
with $`X_0^4`$ an integration constant. It then follows that the compactification we are employing is indeed singular, since $`X^41/x^1`$ for $`x^10^\pm `$, which means that we are mapping vanishing boosts along $`v^1`$ (parameterized by $`\beta `$) into infinite translations along $`\stackrel{~}{v}^1`$. For this reason we tentatively consider the range of $`x^1=\beta `$ as divided into the two (disjoint) half lines $`x^1>0`$ and $`x^1<0`$.
It is indeed possible to show that this partition of the range of $`\beta `$ has a natural interpretation in terms of the space-time manifold. In fact, the choice (105) reduces Eq. (87) to the action for a string propagating in the three-dimensional AdS space-time. This can be seen, e.g., by defining new (dimensionless) coordinates $`r_\pm \text{I R}`$ such that
$`\begin{array}{ccc}r_+=\mathrm{ln}(+\mathrm{sinh}x^1)\hfill & \hfill \mathrm{for}& x^1>0\hfill \\ & & \\ r_{}=\mathrm{ln}(\mathrm{sinh}x^1)\hfill & \hfill \mathrm{for}& x^1<0.\hfill \end{array}`$ (110)
The metric
$`ds^2=\mathrm{sinh}x^1\left[\left(dx^2\right)^2\left(dx^0\right)^2\right]+{\displaystyle \frac{\mathrm{}^2}{4}}\mathrm{coth}^2x^1\left(dx^1\right)^2`$ (111)
then becomes
$`ds^2=\pm e^{r_\pm }\left[\left(dx^2\right)^2\left(dx^0\right)^2\right]+{\displaystyle \frac{\mathrm{}^2}{4}}\left(dr_\pm \right)^2,`$ (112)
where the equality holds with the plus sign for $`x^1>0`$ and with the minus sign for $`x^1<0`$. The expression in Eq. (112) is one of the standard forms for AdS<sub>3</sub> with $`x^0`$ (or $`x^2`$) playing the role of time and $`r_+`$ and $`x^2`$ (or $`r_{}`$ and $`x^0`$) of spatial coordinates. This can perhaps be more easily recognized if one defines a coordinate
$`z=\mathrm{exp}\left({\displaystyle \frac{r_\pm }{2}}\right),`$ (113)
and obtains
$`ds^2={\displaystyle \frac{\pm \left[\left(dx^2\right)^2\left(dx^0\right)^2\right]+\mathrm{}^2\left(dz\right)^2}{z^2}},`$ (114)
which describes the half of (one of the two) AdS<sub>3</sub> with $`z>0`$ (the half $`z<0`$ being given by a definition of $`z`$ with the opposite sign). The BTZ black hole is then obtained by the usual periodicity condition imposed on the coordinates .
It then follows that the metric (112) we have found simultaneously describes two copies of (half of) AdS<sub>3</sub> and $`x^1=0`$ again plays the role of a boundary at $`r_+=r_{}=\mathrm{}`$ across which the signature of the metric flips. In fact this is the standard AdS horizon at $`|z|=+\mathrm{}`$, while the time-like infinity is at $`z=0`$, and the scalar curvature,
$`R={\displaystyle \frac{6}{\mathrm{}^2}},`$ (115)
is a negative (regular) constant everywhere. Of course, the metrics (112) and (114) solve the field equations (96)-(98) provided $`\varphi `$ is also a constant and $`k=\mathrm{}^2`$ .
We conclude this part by noting that the solution (80) places further restrictions on the propagating modes, since $`F`$ can then be a function of either $`x_L^1`$ or $`x_R^1`$, but not of both \[see Eqs. (93 and (94)\]. This selects out a subclass of solutions with only left- (or right-) movers along $`x^1`$ and both kinds of waves along $`x^0`$ and $`x^3`$.
## VI Other solutions
Various other solutions to the field equations (96)-(98) can be found for a non-constant dilaton.
### A First example
Let us consider the metric
$`G_{ij}^{(3)}=\mathrm{diag}[F,e^{2\lambda },F],`$ (116)
where now $`\lambda =\lambda (x^2)`$ and similarly for the dilaton $`\varphi =\varphi (x^2)`$. This metric can be obtained from the action (53) by applying again a nonlinear transformation of the form (81)-(85),
$`_{}X^4`$ $`=`$ $`e^{2\lambda }_{}x^1`$ (117)
$`_+X^2`$ $`=`$ $`e^\rho \left[c_1_+x^0+c_2^1_+x^2\right]`$ (118)
$`_{}X^3`$ $`=`$ $`e^\rho \left[c_1^1_{}x^0c_2_{}x^2\right],`$ (119)
where $`c_1`$ and $`c_2`$ are constants and $`\rho =\rho (x^1)`$. If we choose $`\rho `$ such that
$`e^{2\rho }=F,`$ (120)
the $`\mathrm{sinh}(x^1)`$ term in Eq. (43) is cancelled, and the axion potential $`𝑩^{(3)}`$ is constant in our model, so the axion field strength $``$ is zero. Substituting this form for $`𝑮^{(3)}`$ into the field equations (with $`k=1`$) allows us to determine $`\lambda `$ and leads to the invariant line element
$`ds^2=(dx^0)^2+\mathrm{coth}^2(x^2)(dx^1)^2+(dx^2)^2.`$ (121)
The dilaton in this case is
$`\varphi =C_\varphi \mathrm{ln}\left(\mathrm{sinh}(x^2)\right),`$ (122)
where $`C_\varphi `$ is an integration constant and the domain of validity of the solution is $`x^2>0`$ (which we call region II). This metric is “asymptotically flat”, in the sense that it converges to the Minkowski metric for $`x^2\mathrm{}`$. The Ricci scalar is negative and diverges at the origin (i.e., for $`x^2=0`$),
$`R={\displaystyle \frac{4}{\mathrm{sinh}^2(x^2)}}.`$ (123)
The change of variables
$`t=x^0,r=\mathrm{coth}(x^2),\theta =x^1,`$ (124)
brings the invariant line element into the form
$`ds^2=dt^2+{\displaystyle \frac{dr^2}{(r^21)^2}}+r^2d\theta ^2,`$ (125)
which shows explicitly the cylindrical symmetry. The radial coordinate is understood to be $`r>1`$, according to the above definition of region II, and the dilaton is written as
$`\varphi _{II}=C_\varphi +{\displaystyle \frac{1}{2}}\mathrm{ln}(r^21).`$ (126)
In the form (125), the metric can also be extended to the region $`0r<1`$ (region I), where the Ricci scalar,
$`R=4\left(1r^2\right),`$ (127)
is positive and regular everywhere and the dilaton becomes
$`\varphi _I=C_\varphi +{\displaystyle \frac{1}{2}}\mathrm{ln}(1r^2).`$ (128)
A plot of an angular sector of the “lifted surface” , adapted so as to include the origin of region I, is shown in Fig. 1. With this choice a new singularity appears at $`r=1`$, where the surface has diverging slope and would extend to unlimited height (the plot is of course truncated along the vertical axis). This simply represents the fact that the asymptotically flat region ($`r1`$) is at an infinite proper distance ($`\mathrm{ln}|r1|`$) both from the origin $`r=0`$ of region I and from the singularity $`r\mathrm{}`$ ($`x^2=0`$) of region II. In fact, the Ricci scalar actually vanishes along $`r=1`$ and the only real singularity is at $`r\mathrm{}`$.
The real singularity in region II is not accessible from region I. In particular, by solving the equation of radial null geodesics,
$`{\displaystyle \frac{d^2r}{d\tau ^2}}{\displaystyle \frac{2r}{r^21}}\left({\displaystyle \frac{dr}{d\tau }}\right)^2=0,`$ (129)
where $`\tau `$ is an affine parameter, one finds (near $`r=1`$ and with $`\tau 0`$)
$`r1\pm e^\tau ,`$ (130)
where the minus (plus) sign is for geodesics starting in region I (II). Such trajectories define the light cones in regions I and II and therefore show that the two regions are causally disconnected.
### B Second example
Let us now consider the metric
$`G_{ij}^{(3)}=\mathrm{diag}[e^{2\rho },e^{2\rho },e^{2\rho }],`$ (131)
where now $`\rho =\rho (x^1)`$ and $`\varphi =\varphi (x^1)`$. This metric results from the nonlinear coordinate transformation
$`_{}X^4`$ $`=`$ $`e^{2\rho }_{}x^1`$ (132)
$`_+X^2`$ $`=`$ $`F^{1/2}\left[c_1e^\rho _+x^0+c_2^1e^\rho _+x^2\right]`$ (133)
$`_{}X^3`$ $`=`$ $`F^{1/2}\left[c_1^1e^\rho _{}x^0c_2e^\rho _{}x^2\right].`$ (134)
The transformation of coordinates used to obtain this form for $`𝑮^{(3)}`$ in our model insures that the axion potential is again constant. Eliminating $`x^1`$ in favor of $`\rho `$ and solving the field equations again with $`k=1`$, we find for the invariant line element
$`ds^2=e^{2\rho }(dx^0)^2+{\displaystyle \frac{4d\rho ^2}{\mathrm{sinh}^2(\sqrt{2}\rho )}}+e^{2\rho }(dx^2)^2,`$ (135)
and the dilaton is
$`\varphi =C_\varphi {\displaystyle \frac{1}{2}}\mathrm{ln}\left(\mathrm{sinh}(\sqrt{2}\rho )\right),`$ (136)
with $`C_\varphi `$ the usual integration constant.
The change of variables
$`t=x^0,r=e^\rho ,\theta =x^2,`$ (137)
which is well defined for $`r>0`$, gives the invariant line element the manifestly cylindrically symmetric form
$`ds^2={\displaystyle \frac{dt^2}{r^2}}+{\displaystyle \frac{dr^2}{r^2(r^\sqrt{2}r^\sqrt{2})^2}}+r^2d\theta ^2,`$ (138)
and the dilaton can now be written as
$`\varphi =C_\varphi {\displaystyle \frac{1}{2}}\mathrm{ln}\left(r^\sqrt{2}r^\sqrt{2}\right).`$ (139)
The Ricci scalar is everywhere negative,
$`R=\left(r^\sqrt{2}r^\sqrt{2}\right)^2,`$ (140)
has essential singularities at both $`r=0`$ and $`r\mathrm{}`$ and vanishes along the circle $`r=1`$. This implies a similarity with the previous metric (125), namely one can define a region I for $`0<r<1`$ and a region II for $`r>1`$. The main difference is then that region I also contains a real singularity at $`r=0`$.
### C T-dual solutions
We conclude this Section by noting the in both solutions above, there are two isometric coordinates, to wit $`t`$ and $`\theta `$. Therefore, one can generate new solutions by employing T-duality . In particular, we shall T-dualize with respect to one coordinate at a time, and denote the fields of the dual solutions with a tilde. We also denote by $`B`$ the only non-vanishing component of the axion potential, $`B_{02}^{(3)}`$, which is constant in all solutions considered.
For the solution in Section VI A, the non-vanishing component of the axion potential in the coordinate system $`(t,r,\theta )`$ is given by
$`B_{tr}^{(3)}={\displaystyle \frac{B}{1r^2}}.`$ (141)
On dualizing the metric (125) with respect to $`t`$ then yields the non-diagonal line element
$`\stackrel{~}{ds}^2=dt^2+{\displaystyle \frac{1B^2}{(r^21)^2}}dr^2+{\displaystyle \frac{Bdtdr}{r^21}}+r^2d\theta ^2,`$ (142)
which solves the field equations with a vanishing axion potential, $`\stackrel{~}{𝑩}^{(3)}=0`$, and an unchanged dilaton field, $`\stackrel{~}{\varphi }=\varphi `$ .
Dualizing with respect to $`\theta `$ instead leaves the metric unaffected, as can be seen by switching to the new radial coordinate $`R=r^1`$ after applying the dual relations , but yields $`\stackrel{~}{𝑩}^{(3)}=0`$ and a shifted dilaton field, $`\stackrel{~}{\varphi }=\varphi +\mathrm{ln}(R)`$.
The duals of the metric (138) and of the axion potential $`B_{t\theta }^{(3)}=B`$ of Section VI B with respect to $`t`$ are given by
$`\stackrel{~}{ds}^2=r^2dt^2+{\displaystyle \frac{dr^2}{r^2(r^\sqrt{2}r^\sqrt{2})^2}}Br^2dtd\theta +r^2(1B^2)d\theta ^2`$ (143)
and $`\stackrel{~}{𝑩}^{(3)}=0`$, with the dilaton $`\stackrel{~}{\varphi }=\varphi +\mathrm{ln}(r)`$. The metric above represent a rotating space-time, since the off-diagonal term $`\stackrel{~}{G}_{t\theta }^{(3)}0`$ (for $`B0`$).
Dualizing with respect to $`\theta `$ and defining $`R=r^1`$ gives
$`\stackrel{~}{ds}^2=R^2(1B^2)^2dt^2+{\displaystyle \frac{dR^2}{R^2(R^\sqrt{2}R^\sqrt{2})^2}}BR^2dtd\theta +R^2d\theta ^2,`$ (144)
$`\stackrel{~}{𝑩}^{(3)}=0`$ and $`\stackrel{~}{\varphi }=\varphi +\mathrm{ln}(R)`$. Again this represents a rotating space-time.
In three out of four cases above the presence of a non-vanishing axion potential, although it corresponds to zero field strength, affects the metric field in a non-trivial manner. The axion potential is in fact always absorbed into the dual metric and dilaton fields and sometimes generates off-diagonal terms and rotation.
## VII Conclusions
Starting from the six parameter group $`ISO(2,1)`$ we have, by using various types of compactification (gauge fixing, internal symmetries and coordinate identification), reduced the original action, which describes a spinless string moving on a curved six-dimensional background, to a string propagating on either a flat (Minkowski) background with a linear dilaton or on AdS space-time with a constant dilaton field. If the fields satisfying the equations obtained from the low energy effective string action are restricted to be functions of a single variable (in our case one of the boost parameters from the original Poincaré group), the fields are so tightly constrained that there are apparently only two possible solutions with a trivial dilaton.
The original goal of this work was to find a three-dimensional black hole other than the BTZ black hole by starting from a model of string propagation on a group manifold different from the $`SL(2,\text{I R})`$ manifold. This goal has not been realized, but the tactic has resulted in a relatively simple form for the compactified Lagrangian, allowing us to recover the space-time of AdS<sub>3</sub> (and BTZ) and to obtain solutions of the field equations we might not otherwise have been able to attain.
The fact that AdS<sub>3</sub> can be related to the (non-semisimple) three-dimensional Poincaré group might be surprising at first sight. However, one can consider the following general argument: The natural group of symmetry of AdS<sub>3</sub>, that is the semisimple group $`SL(2,\text{I R})`$, is contained within $`SL(2,C)`$ which, in turn, is isomorphic to $`SO(3,1)`$. Moreover, the Lie algebra of the group $`ISO(2,1)`$ can be reached from the Lie algebra of the (semisimple) group $`SO(3,1)`$ by means of a transformation called contraction (see, e.g., Ref. ). One can therefore conclude that the sequence of operations we have performed reproduces the (local) effect of an expansion (roughly, the opposite of a contraction ) from the coset $`ISO(2,1)/\text{I R}`$ to $`SL(2,\text{I R})`$.
Other such formal constructions can be envisoned, and might turn out to be useful in the search for new solutions, as we have shown in Section VI. Whether or not the BTZ black hole is the unique one in three-dimensional space-time remains an open question, and so, therefore, is the question of whether or not another exact three-dimensional black hole solution to string theory exists.
###### Acknowledgements.
This work was supported in part by the U.S. Department of Energy under Grant No. DE-FG02-96ER40967 and by the NATO grant No. CRG 973052. |
no-problem/0001/astro-ph0001427.html | ar5iv | text | # Note on the Origin of the Highest Energy Cosmic Rays.
## 1 Focussing properties of the proposed magnetic wind
As far as ultra-high energies (over $`100`$ EeV) are concerned, the bending effects are dominated by the long range behaviour of the galactic field. In the model proposed in , the asymptotic field, in spherical coordinates, is purely azimuthal and reads :
$$B_\phi =B_{}r_{}\frac{\mathrm{sin}\theta }{r}$$
where the normalization constant $`B_{}r_{}=70\mu `$G.kpc is defined from the local value of the field in the Solar system. The bounds of the region where the galactic wind extends are not well defined; the authors use 1.5 Mpc in their numerical simulations.
The most important feature of such a field (in the absence of a cutoff on $`r`$) is that the bending integral $`B_\phi 𝑑r`$ is divergent in any radial direction except the polar axis. As a result, whatever the energy, a charged particle can never escape to infinity in a direction other than a pole. In practice, this strong focussing effect is limited by the cutoff, which therefore plays a crucial role. Using the field and radial limits given above the bending integral is about $`500\times \mathrm{sin}\theta `$ EeV along a radial trajectory. In other words particles of $`100`$ EeV will only escape within a cone of less than about 10 degrees around the polar axis.
When considering particles of a given sign the orientation of the radial component of the Lorentz force depends on the polar projection of the velocity, therefore the “positive” pole (as defined by the orientation of curl $`\stackrel{}{B}`$) is focussing, while the “negative” pole is antifocussing.
These features are intrinsic to the field model (especially its slow decrease with the distance) and we can suspect that the convergence of the trajectories found in is therefore not related to any specific property of the observed set of highest energy cosmic rays.
## 2 Numerical simulations with random data
To confirm this hypothesis, we have numerically back-traced, in the field model of Ref. , a random set of cosmic rays drawn from a uniform distibution on the (Earth) sky. As most of the observered high energy events are actually concentrated around $`10^{20}`$ eV, and since the authors of have further assumed that the two highest energy events could be Helium nuclei the range of magnetic rigidity of the observed data sample is quite narrow. For a fair comparison we have therefore generated a sample of protons with a flat energy distribution ranging from 100 to 160 EeV.
As expected the trajectories drawn on Fig.1 clearly show the strong focussing effect of the field. Most of the focussing take place over the first few 100 kpc as mentioned in . We have drawn separately the trajectory for particle reaching the Earth’s northern hemisphere and reaching the southern one. The focussing effect is stronger and faster for rays originating from the north<sup>2</sup><sup>2</sup>2The 13 events used in were all observed in the northern hemisphere, which is where the largest detectors are installed. which is expected given the configuration of the Earth’s rotation axis with respect to the galactic center. Using a random population we obtain the same behavior as in showing that the focussing is a property of the field and not of the events.
## 3 Discussion and conclusion
First the validity of the model may be questioned: if there are reversals of the azimuthal component of $`B`$ in the galactic disk (as acknowledged in ), how can the wind remain consistent with a “coherent” parametrization $`\mathrm{sin}\theta /r`$ at long distances (several 100 kpc, much more than the distance between regions with reversed fields) ? Normally one would expect some destructive interferences between the wind contributions coming from differents parts of the disk, hence an intensity decreasing faster than $`1/r`$; the argument that most cosmic rays are observed in the direction opposite to the galactic center (where no reversal occurs) is not valid, because their bending depends mainly on the long range behaviour of the field.
If however the model of Ref. is true, then the accumulation of events at the pole is not at all an evidence for a pointlike source of the observed rays. This model only demonstrates that our sensitivity on extragalactic charged particles might be limited to a small solid angle (decreasing with energy) around the galactic polar direction, whatever their initial origin could be.
One should note that even if the extragalactic flux is isotropic the integrated luminosity at Earth would be the same with or without this galactic wind. Despite the strong dispersion in the original directions the restriction of the angular acceptance due to the focussing effect is compensated by the collecting area.
Addressing the question of the active galaxy M87 (Virgo A) as a possible source of UHECR, the only possible affirmative conclusion is: if the field model of is valid, and if the sources are known pointlike objects, a possible candidate is M87. However, as acknowledged in , this scenario implies a regular transverse magnetic field from here to Virgo, i.e. over a length of about 20 Mpc, of about 2 nanogauss. Therefore the overall system would behave like a spectrum analyzer : a magnetic spectrometer (the transverse field) followed by a collimator (the Galactic field), strongly selecting the initial momentum of the cosmic rays.
### Acknowledgements
We thank P. Astier, X. Bertou, M. Boratav and M. Lemoine for their usefull comments and fruitful discussions. |
no-problem/0001/cond-mat0001097.html | ar5iv | text | # Electron Transfer in Porphyrin Complexes in Different Solvents
## I Introduction
Electron transfer (ET) is a very important process in biology, chemistry and physics . The most well known ET theory is the one of Marcus . Of special interest is the ET in configurations where a bridge (B) between donor (D) and acceptor (A) mediates the transfer. On this kind of ET we will focus in this paper. The primary step of ET in bacterial photosynthetic reaction centers is of this type and a lot of work in this direction was done after the structure of the protein-pigment complex of the photosynthetic reaction centers of purple bacteria was clarified in 1984 . Many artificial systems especially self-organized porphyrin complexes have been developed to model this bacterial photosynthetic reaction center .
Bridge-mediated ET reactions can occur via different mechanisms : incoherent sequential transfer in which the bridge level is populated or coherent superexchange in which the mediating bridge level is not populated but nevertheless necessary for the transfer. Changing a building block of the complex or changing the environment can modify which mechanism is mainly at work. Actually, there is still a discussion in literature whether sequential transfer and superexchange are limiting cases of one process or whether they are two processes which can coexist . To clarify which mechanism is present in an artificial system one can systematically vary the energetics of the complex. In experiments this is done by substituting parts of the complexes or by changing the polarity of the solvent . Also the geometry and size of the bridge block can be varied, and in this way the length of the subsystem through which the electron has to be transfered can be changed.
Superexchange occurs due to coherent mixing of the three or more states of the system . The ET rate in this channel depends algebraically on the differences between the energy levels and decreases exponentially with increasing bridge length . When incoherent effects such as dephasing dominate the transfer is mainly sequential , i. e., the levels are occupied mainly in sequential order . The dependence on the differences between the energy levels is exponential . An increase of the bridge length induces only a small reduction in the ET rate . This is why sequential transfer is the desired process in molecular wires .
In the superexchange case the dynamics is mainly Hamiltonian and can be described on the basis of the Schrödinger equation. The physically important results can be obtained by perturbation theory and, most successfully, by the semiclassical Marcus theory . The complete system dynamics can be directly extracted by numerical diagonalization of the Hamiltonian . In case of sequential transfer the influence of an environment has to be taken into account. There are quite a few different ways how to include an environment modeled by a heat bath. The simplest phenomenological descriptions are based on the Einstein coefficients or on the imaginary terms in the Hamiltonian , as well as on the Fokker-Planck or Langevin equations . The most accurate but also numerically most expensive way is the path integral method . This has been applied to bridge-mediated ET especially in the case of bacterial photosynthesis . Bridge-mediated ET has also been investigated using Redfield theory , by propagating a density matrix (DM) in Liouville space and other methods (e. g. ).
The purpose of the present investigation is to present a simple, analytically solvable model based on the DM formalism and apply it to a porphyrin-quinone complex which is taken as a model system for the bacterial photosynthetic reaction center. The master equation which governs the DM evolution as well as the appropriate relaxation coefficients can be derived from such basic informations as system-environment coupling strength and spectral density of the environment . In the present model relaxation is introduced in a way similar to Redfield theory but in site representation not in eigenstate representation. A discussion of advantages and disadvantages these representations has been given elsewhere . The equations for the DM are the same as in the generalized stochastic Liouville equation (GSLE) model for exciton transfer which is an extension of the Haken-Strobl-Reineker (HSR) model to a model with a quantum bath. Here we give an analytic solution to these equations. The present equations for the DM obtained are also similar to those of Ref. where relaxation is introduced in a phenomenological fashion but only a steady-state solution is found in contrast to the model introduced here. In addition the present model is applied to a concrete system. A comparison of the ET time with the bath correlation time allows us to regard three time intervals of system dynamics: the interval of memory effects, the dynamical interval, and the kinetic, long-time interval . In the framework of DM theory one can describe the ET dynamics in all three time intervals. However, often it is enough to find the solution in the kinetic interval for the explanation of experiments within the time resolution of most experimental setups, as has been done in Ref. . The master equation is analytically solvable only for simple models, for example . Most investigations are based on the numerical solution of this equation . Here we perform numerical as well as approximate analytical calculations for a simple model. Since the solution can be easily obtained, the influence of all parameters on the ET can be examined.
The paper is organized as follows. In the next section we introduce the model of a supermolecule which we use to describe ET processes. The properties of an isolated supermolecule are modeled in subsection II A, as well as the static influence of the environment. The dynamical influence of bath fluctuations is discussed and modeled by a heat bath of harmonic oscillators in section II B. The reduced DM equation of motion (RDMEM) describing the excited state dynamics is presented in subsection II C. In subsection II D the system parameter dependence on the solvent dielectric constant is discussed for different models of solute-solvent interaction. In subsection II E system parameters are determined. The methods and results of the numerical and analytical solutions of the RDMEM are presented in section III. The dependencies of the ET rate and final acceptor population on the system parameters are given for the numerical and analytical solutions in subsection IV A. The analysis of the physical processes in the system is also performed there. In subsection IV B we discuss the dependence of the ET rate on the solvent dielectric constant for different models of solute-solvent interaction and compare the calculated ET rates with the experimentally measured ones. The advantages and disadvantages of the presented method in comparison with the GSLE model and the method of Davis et al. are analyzed in subsection IV C. In the conclusions the achievements and possible extensions of this work are discussed.
## II Model
### A System Part of the Hamiltonian
The photoinduced ET in supermolecules consisting of three sequentially connected molecular blocks, i. e., donor, bridge, and acceptor, ($`M=1,2,3`$) is analyzed. The donor is not able to transfer its charge directly to acceptor because of their spatial separation. Donor and acceptor can exchange their charges only through B. In the present investigation the supermolecule consists of free-base tetraphenylporphyrin ($`\mathrm{H}_2\mathrm{P}`$) as donor, zinc substituted tetraphenylporphyrin ($`\mathrm{ZnP}`$) as bridge, and p-benzoquinone as acceptor . In each of those molecular blocks we consider only two molecular orbitals ($`m=0,1`$), the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) . Each of these orbitals can be occupied by an electron or not, denoted by $`|1`$ or $`|0`$, respectively. This model allows us to describe four states of each molecular block, the neutral ground state $`|1_{\mathrm{HOMO}}|0_{\mathrm{LUMO}}`$, the neutral excited state $`|0_{\mathrm{HOMO}}|1_{\mathrm{LUMO}}`$, the positively charged ionic state $`|0_{\mathrm{HOMO}}|0_{\mathrm{LUMO}}`$, and the negatively charged ionic state $`|1_{\mathrm{HOMO}}|1_{\mathrm{LUMO}}`$. $`c_{Mm}^+=|1_{Mm}0|_{Mm}`$, $`c_{Mm}=|0_{Mm}1|_{Mm}`$, and $`\widehat{n}_{Mm}=c_{Mm}^+c_{Mm}`$ describe the creation, annihilation, and number of electrons in orbital $`Mm`$, respectively, while $`\widehat{n}_M=_m\widehat{n}_{Mm}`$ gives the number of electrons in a molecular block. The number of particles in the whole supermolecule is conserved, $`_M\widehat{n}_M=const`$.
Each of the electronic states has its own vibrational substructure. As a rule for the porphyrin-containing systems the time of vibrational relaxation is two orders of magnitude faster than the characteristic ET time . Because of this we assume that only the ground vibrational states play a role and we do not include the vibrational substructure. A comparison of the models with and without vibrational substructure has been given elsewhere .
Below we consider the evolution of single charge-transfer exciton states in the system. For the full description of the system one also should include photon modes to describe for example the fluorescence from the LUMO to the HOMO in each molecular block transferring an excitation to the electro-magnetic field. But the rates of fluorescence and recombination are small in comparison to other processes for porphyrin-type systems . When fluorescence does not have to be taken into account, all states except $`|\mathrm{D}^{}\mathrm{BA}`$ ($`M=1`$), $`|\mathrm{D}^+\mathrm{B}^{}\mathrm{A}`$ ($`M=2`$), and $`|\mathrm{D}^+\mathrm{BA}^{}`$ ($`M=3`$) remain essentially unoccupied, while those three take part in the intermolecular transport process. In this case the number of states coincides with the number of sites in the system and we label the states $`\mu =1,2,3`$ instead of $`\{M,m\}`$.
For the description of the ET and other dynamical processes in the system placed in a dissipative environment we introduce the Hamiltonian
$`\widehat{H}=\widehat{H}_\mathrm{S}+\widehat{H}_\mathrm{B}+\widehat{H}_{\mathrm{SB}},`$ (1)
where $`\widehat{H}_\mathrm{S}`$ describes the supermolecule, $`\widehat{H}_\mathrm{B}`$ the dissipative bath, and $`\widehat{H}_{\mathrm{SB}}`$ their interaction. We are interested in the kinetic limit of the excited state dynamics here. For this limit we assume that the relaxation of the solvent takes only a very short time compared to the system times of interest.
The effect of the solvent is twofold. On one hand the system dynamics is perturbed by the solvent state fluctuations, independent of the system states. $`\widehat{H}_{\mathrm{SB}}`$ shall only reflect the dynamical influence of the fluctuations leading to dissipative processes as discussed in the next subsection. On the other hand the system states are shifted in energy ,
$$\widehat{H}_\mathrm{S}=\widehat{H}_0+\widehat{H}_{\mathrm{es}}+\widehat{V},$$
(2)
due to the static influence of the solvent which is determined by the relaxed value of the solvent polarization and in general also includes the non-electrostatic contributions such as van-der-Waals attraction, short-range repulsion, and hydrogen bonding . In Eq. (2) the energy of free and noninteracting blocks $`\widehat{H}_0=_{Mm}E_{Mm}\widehat{n}_{Mm}`$, is given by the energies $`E_{Mm}`$ of orbitals $`Mm`$ in the independent electron approximation . The $`E_{Mm}`$ are chosen to reproduce the ground-state–excited-state transitions e. g. $`\mathrm{D}\mathrm{D}^{}`$, which change only a little for different solvents and are assumed to be constants here. To determine $`E_{Mm}`$ one starts from fully ionized double bonds in each molecular block , calculates the one-particle states and fills these orbitals with two electrons each starting from the lowest energy. By exciting, removing, adding the last electron to the model system one obtains the energy of the excited, oxidized, reduced molecular block in the independent particle approximation.
The inter-block hopping term
$$\widehat{V}=\underset{\mu \nu }{}v_{\mu \nu }(\widehat{V}_{\mu \nu }^++\widehat{V}_{\mu \nu })\left[(\widehat{n}_\mu 1)^2+(\widehat{n}_\nu 1)^2\right]$$
in Eq. (2) includes the hopping operators $`\widehat{V}_{\mu \nu }=c_{N1}^+c_{M1},`$ and the coherent couplings $`v_{\mu \nu }`$. We assume $`v_{13}=0`$ because there is no direct connection between donor and acceptor. The scaling of $`v_{\mu \nu }`$ for different solvents is discussed in subsection II D.
The electrostatic interaction $`\widehat{H}_{\mathrm{es}}`$ scales like energies of a system of charges in a single or multiple cavity surrounded by a medium with static dielectric constant $`ϵ_\mathrm{s}`$ according to the classical reaction field theory . Here we consider two models of scaling. In the first model each molecular block is in an individual cavity in the dielectric. For this case the electrostatic energy reads
$$\widehat{H}_{\mathrm{es}}=S^H(ϵ_\mathrm{s})\left(\widehat{H}_{\mathrm{el}}+\widehat{H}_{\mathrm{ion}}\right).$$
(3)
$$\widehat{H}_{\mathrm{el}}=\underset{\mu }{}|\widehat{n}_\mu 1|e^2\left(4\pi ϵ_0r_\mu \right)^1$$
takes the electron interaction into account while bringing an additional charge onto the block $`\mu `$ and thus describes the energy to create an isolated ion. This term depends on the characteristic radius $`r_\mu `$ of the molecular block. The interaction between the ions
$$\widehat{H}_{\mathrm{ion}}=\underset{\mu }{}\underset{\nu }{}(\widehat{n}_\mu 1)(\widehat{n}_\nu 1)e^2\left(4\pi ϵ_0r_{\mu \nu }\right)^1$$
depends on the distance between the molecular blocks $`r_{\mu \nu }`$. Both distances $`r_\mu `$ and $`r_{\mu \nu }`$ are also used in Marcus theory . The term $`H_{\mathrm{el}}+H_{\mathrm{ion}}`$ reflects the interaction of charges inside the supermolecule which is weakend by the reaction field according to the Born formula
$$S^H=1+\frac{1ϵ_\mathrm{s}}{2ϵ_\mathrm{s}}.$$
(4)
In the second model, considering the supermolecule as one object placed in a single cavity of constant radius one has to use the Onsager term . This term is state selective, it gives a contribution only for the states with nonzero dipole moment, i.e., charge separation. Defining the static dipole moment operator as $`\widehat{\stackrel{}{p}}=\underset{\mu \nu }{}(\widehat{n}_\mu 1)(\widehat{n}_\nu 1)\stackrel{}{r}_{\mu \nu }e`$ we obtain $`\widehat{H}_{\mathrm{es}}=S^H\widehat{\stackrel{}{p}}^2/r_{13}`$, with Onsager scaling
$`S^H`$ $`=`$ $`{\displaystyle \frac{1ϵ_\mathrm{s}}{2ϵ_\mathrm{s}+1}}.`$ (5)
### B Microscopic Motivation of the System-Bath Interaction and the Thermal Bath
One can express the dynamic part of the system-bath interaction as
$`\widehat{H}_{\mathrm{SB}}={\displaystyle d^3\stackrel{}{r}\underset{\mu \nu }{}\widehat{\stackrel{}{D}}_{\mu \nu }(\stackrel{}{r})\mathrm{\Delta }\widehat{\stackrel{}{P}}(\stackrel{}{r})}.`$ (6)
Here $`\widehat{\stackrel{}{D}}_{\mu \nu }(\stackrel{}{r})`$ denotes the field of the electrostatic displacement at point $`\stackrel{}{r}`$ induced by the system transition dipole moment $`\widehat{\stackrel{}{p}}_{\mu \nu }=\stackrel{}{p}_{\mu \nu }(\widehat{V}_{\mu \nu }^++\widehat{V}_{\mu \nu })`$ . The field of the environmental polarization is denoted as $`\widehat{\stackrel{}{P}}(\stackrel{}{r})=_n\delta (\stackrel{}{r}\stackrel{}{r}_n)\widehat{\stackrel{}{d}}_n`$, where $`\widehat{\stackrel{}{d}}_n`$ is the $`n`$th dipole of the environment and $`\stackrel{}{r}_n`$ its position. Only fluctuations of the environment polarization $`\mathrm{\Delta }\widehat{\stackrel{}{P}}(\stackrel{}{r})`$ influence the system dynamics. Averaged over the angular dependence the interaction reads
$`\widehat{H}_{\mathrm{SB}}={\displaystyle \underset{\mu \nu n}{}}{\displaystyle \frac{1}{4\pi ϵ_0}}\left({\displaystyle \frac{2}{3}}\right)^{\frac{1}{2}}{\displaystyle \frac{|\widehat{\stackrel{}{p}}_{\mu \nu }|\mathrm{\Delta }|\widehat{\stackrel{}{d}_n}|}{|\stackrel{}{r}_n|^3}}.`$ (7)
The dynamical influence of the solvent is described with a thermal bath model. The deviation $`\mathrm{\Delta }\left|\widehat{\stackrel{}{d}}_n\right|`$ of $`d_n`$ from its mean value is determined by temperature induced fluctuations. For unpolar solvents described by a set of harmonic oscillators the diagonalization of their interaction yields a bath of harmonic oscillators with different frequencies $`\omega _\lambda `$ and effective masses $`m_\lambda `$. In the case of a polar solvent the dipoles are interacting rotators as, e.g. used to describe magnetic phenomena . The elementary excitation of each frequency can again be characterized by an appropriate harmonic oscillator. So we use generalized coordinates of solvent harmonic oscillator modes $`\widehat{Q}_\lambda =\sqrt{\mathrm{}\left(2m_\lambda \omega _\lambda \right)^1}(\widehat{a}_\lambda +\widehat{a}_\lambda ^+)`$ for polar as well as unpolar solvents. The occupation of the $`i`$th state of the $`\lambda `$th oscillator is defined by the equilibrium DM $`\rho _{\lambda ,ij}=\mathrm{exp}\left[\mathrm{}\omega _\lambda i/(k_\mathrm{B}T)\right]\delta _{ij}`$.
All mutual orientations and distances of solvent molecules have equal probability. An average over all spatial configurations is performed. The interaction Hamiltonian (7) is written in a form which is bilinear in system and bath operators:
$`\widehat{H}_{\mathrm{SB}}=\left[{\displaystyle \underset{\mu \nu }{}}p_{\mu \nu }(\widehat{V}_{\mu \nu }+\widehat{V}_{\mu \nu }^+)\right]\left[{\displaystyle \underset{\lambda }{}}K_\lambda (\widehat{a}_\lambda ^++\widehat{a}_\lambda )\right]S_{\mathrm{SB}}`$ (8)
$`p_{\mu \nu }K_\lambda `$ denotes the interaction intensity between the bath mode $`a_\lambda `$ of frequency $`\omega _\lambda `$ and the quantum transition between the LUMOs of molecules $`\mu `$ and $`\nu `$ with frequency $`\omega _{\mu \nu }=\left(E_\mu E_\nu \right)/\mathrm{}`$. The scaling function $`S_{\mathrm{SB}}`$ reflects the properties of the solvent. Explicit expressions for the solvent influence are still under discussion in the literature .
### C Reduced Density Matrix Approach
The interaction of the system with the bath of harmonic oscillators describes the irradiative energy transfer from the system to the solvent as modeled by Eq. (8). For the description of the dynamics we use the reduced DM which can be obtained from the full DM $`\rho `$ by tracing over the environmental degrees of freedom $`\sigma =\mathrm{Tr}_\mathrm{B}\rho `$ with the evolution operator technique , restricting ourselves to the second order cumulant expansion . Here we apply the Markov approximation, i.e., we restrict ourselves to the limit of long times. Furthermore, we replace the discrete set of bath modes with a continuous one. To do so one has to introduce the spectral density of bath modes $`J(\omega )=\pi _\lambda K_\lambda ^2\delta (\omega \omega _\lambda )`$. Finally one obtains the following master equation
$`\dot{\sigma }_{\kappa \lambda }=`$ $`{\displaystyle \frac{i}{\mathrm{}}}\left([\widehat{H}_\mathrm{S},\sigma ]\right)_{\kappa \lambda }+2\delta _{\kappa \lambda }{\displaystyle \underset{\mu }{}}\left\{\mathrm{\Gamma }_{\mu \kappa }\left[n(\omega _{\mu \kappa })+1\right]+\mathrm{\Gamma }_{\kappa \mu }n(\omega _{\kappa \mu })\right\}\sigma _{\mu \mu }`$ (11)
$`{\displaystyle \underset{\mu }{}}\left\{\mathrm{\Gamma }_{\mu \kappa }\left[n(\omega _{\mu \kappa })+1\right]+\mathrm{\Gamma }_{\kappa \mu }n(\omega _{\kappa \mu })+\mathrm{\Gamma }_{\mu \lambda }\left[n(\omega _{\mu \lambda })+1\right]+\mathrm{\Gamma }_{\lambda \mu }n(\omega _{\lambda \mu })\right\}\sigma _{\kappa \lambda }`$
$`+\left\{\mathrm{\Gamma }_{\lambda \kappa }\left[2n(\omega _{\lambda \kappa })+1\right]+\mathrm{\Gamma }_{\kappa \lambda }\left[2n(\omega _{\kappa \lambda })+1\right]\right\}\sigma _{\lambda \kappa },`$
where $`n(\omega )=[\mathrm{exp}(\mathrm{}\omega /k_BT)1]^1`$ denotes the Bose-Einstein distribution. The damping constant
$$\mathrm{\Gamma }_{\mu \nu }=S_{\mathrm{SB}}^2\mathrm{}^2J(\omega _{\mu \nu })p_{\mu \nu }^2$$
(12)
reflects the coupling of the transition $`|\mu |\nu `$ to a bath mode of the same frequency. It depends on the density of bath modes $`J`$ at the transition frequency $`\omega _{\mu \nu }`$ and on the transition dipole moments $`p_{\mu \nu }`$. A RDMEM of similar structure was used for the description of exciton transfer in the Haken, Strobl, and Reineker (HSR) model and the generalized stochastic Liouville equation (GSLE) model . The HSR method originating from the stochastic bath model, is valid only in the high temperature limit . The GSLE method appeals to the quantum bath model with system-bath coupling of the form $`\widehat{H}_{\mathrm{SB}}\widehat{V}^+\widehat{V}\left(\widehat{a}_\lambda ^++\widehat{a}_\lambda \right)`$, which modulates the system transition frequency. In Ref. the equations for exciton motion are derived using the projection operator technique. Taking the different system-bath coupling we have derived the RDMEM which coincides with GSLE . Both GSLE and our RDMEM are able to describe correctly finite temperatures. Below we neglect the last term of Eq. (11) corresponding to the $`\overline{\gamma }`$ term in the HSR and GSLE models because the rotating wave approximation (RWA) is applied.
For the sake of convenience of analytical and numerical calculations we replace $`\mathrm{\Gamma }_{\mu \nu }`$ and the population of the corresponding bath mode $`n(\omega _{\mu \nu })`$ with the dissipative transitions $`d_{\mu \nu }=\mathrm{\Gamma }_{\mu \nu }|n(\omega _{\mu \nu })|`$ and the corresponding dephasings $`\gamma _{\mu \nu }=_\kappa \left(d_{\mu \kappa }+d_{\kappa \nu }\right)/2.`$ With this one can express the RDMEM (11) in the form
$`\dot{\sigma }_{\mu \mu }`$ $`=`$ $`i/\mathrm{}{\displaystyle \underset{\nu }{}}(v_{\mu \nu }\sigma _{\nu \mu }\sigma _{\mu \nu }v_{\nu \mu }){\displaystyle \underset{\nu }{}}d_{\mu \nu }\sigma _{\mu \mu }+{\displaystyle \underset{\nu }{}}d_{\nu \mu }\sigma _{\nu \nu },`$ (13)
$`\dot{\sigma }_{\mu \nu }`$ $`=`$ $`\left(i\omega _{\mu \nu }\gamma _{\mu \nu }\right)\sigma _{\mu \nu }i/\mathrm{}v_{\mu \nu }(\sigma _{\nu \nu }\sigma _{\mu \mu }).`$ (14)
The parameters controlling the transitions between the selected states are discussed in subsection II E.
### D Scaling of the Damping Constants
The relaxation coefficients Eq. (12) include the second power of the scaling function $`S_{\mathrm{SB}}`$ because one constructs the relaxation term of Eq. (11) with the second power of the interaction Hamiltonian. The physical meaning of $`H_{\mathrm{SB}}`$ is similar to the interaction of the system dipole with a surrounding media. That is why it is reasonable to use the Onsager expression (5) for $`S_{\mathrm{SB}}`$. In the work of Mataga, Kaifu, and Koizumi the interaction energy between the system dipole and the media scales in leading order as
$$S_{\mathrm{SB}}=\left[\frac{2(ϵ_\mathrm{s}1)}{2ϵ_\mathrm{s}+1}\frac{2(ϵ_{\mathrm{}}1)}{2ϵ_{\mathrm{}}+1}\right],$$
(15)
where $`ϵ_{\mathrm{}}`$ denotes the optical dielectric constant. From a recent paper of Georgievskii, Hsu, and Marcus we extract $`\mathrm{\Gamma }\frac{1}{ϵ_\mathrm{s}}\frac{1}{ϵ_{\mathrm{}}}`$ for the multiple cavities model assuming $`ϵ_\omega =ϵ_{\mathrm{}}`$. In terms of a scaling function it can be expressed as
$`S_{\mathrm{SB}}=\left(1/ϵ_\mathrm{s}1/ϵ_{\mathrm{}}\right)^{\frac{1}{2}}.`$ (16)
As we have already argued in the coherent coupling $`v_{\mu \nu }`$ between two electronic states scales with $`ϵ_\mathrm{s}`$ and $`ϵ_{\mathrm{}}`$ too, because a coherent transition in the system is accompanied by a transition of the environment state which is larger for solvents with larger polarity. As discussed above we neglect the vibrational substructure of each electronic state because the vibrational relaxation is about two orders of magnitude faster than the characteristic ET time. But in contrast to the model with vibrational substructure the present model does not involve any reaction coordinate. To reproduce the results of the more elaborate model with vibrational substructure one has to scale the electronic couplings $`v_{\mu \nu }`$ with the Franck-Condon overlap elements $`F_{\mathrm{FC}}(\mu ,0,\nu ,0)`$ between the vibrational ground states of each pair of electronic surfaces
$$v_{\mu \nu }=v_{\mu \nu }^0F_{\mathrm{FC}}(\mu ,0,\nu ,0),$$
(17)
where $`v_{\mu \nu }^0`$ is the coupling of electronic states of the isolated molecule. For the calculation of the Franck-Condon factors one has to introduce the leading (mean) environment oscillator frequency $`\omega _{\mathrm{vib}}`$. Here $`\omega _{\mathrm{vib}}=1500\mathrm{cm}^1`$ is used which is similar to the frequency of the C-C stretching mode. With this scaling one implicitly introduces a reaction coordinate into the model.
### E Model Parameters
The dynamics of the system is controlled by the following parameters: energies of system states $`E_\mu `$, coherent couplings $`v_{\mu \nu }`$, and damping constants $`\mathrm{\Gamma }_{\mu \nu }`$.
On the basis of the spectral data and taking reference energy $`E_{\mathrm{DBA}}=0`$ we determine $`E_{\mathrm{D}^{}\mathrm{BA}}=1.82`$ eV (in $`\mathrm{CH}_2\mathrm{Cl}_2`$). We take the energy of the state with ET to $`\mathrm{Q}`$ from reference : $`E_{\mathrm{D}^+\mathrm{BA}^{}}=1.42\mathrm{eV}`$ . Further Rempel et al. estimate the coupling of initially excited and charged bridge states $`\mathrm{D}^{}\mathrm{BA}|H|\mathrm{D}^+\mathrm{B}^{}\mathrm{A}=v_{12}^0=65\mathrm{meV}=9.8\times 10^{13}\mathrm{s}^1`$ and the coupling of the two states with charge separation $`\mathrm{D}^+\mathrm{B}^{}\mathrm{A}|H|\mathrm{D}^+\mathrm{BA}^{}=v_{23}^0=2.2\mathrm{meV}=3.3\times 10^{12}\mathrm{s}^1`$. The values of the couplings are essentially lower than the energy differences between the relevant system states
$$\mathrm{}\omega _{ij}v_{ij}^0.$$
(18)
This is the reason to remain in site representation instead of eigenstate representation . The damping constants are found with help of the analytical solution derived at the end of the next section to be $`\mathrm{\Gamma }_{21}=\mathrm{\Gamma }_{23}=2.25\times 10^{12}\mathrm{s}^1`$. The typical radius of the porphyrin ring is about $`r_\mu =5\pm 1\mathrm{\AA }`$ , while the distance $`r_{\mu \nu }`$ between the blocks of $`\mathrm{H}_2\mathrm{P}\mathrm{ZnP}\mathrm{Q}`$ reaches $`r_{12}=12.5\pm 1\mathrm{\AA }`$ , $`r_{23}=7\pm 1\mathrm{\AA }`$, $`r_{13}=14\pm 1\mathrm{\AA }`$. The main parameter which controls ET in a triad is the energy of the state $`E_{\mathrm{D}^+\mathrm{B}^{}\mathrm{A}}`$. This state has a big dipole moment because of its charge separation and is therefore strongly influenced by the solvent. Because of the special importance of this value we calculate it for the different solvents as a matrix element of the system Hamiltonian (2). The calculated values of the energies of the $`\mathrm{D}^+\mathrm{B}^{}\mathrm{A}`$ state for some solutions are shown in Table I.
## III Results
The time evolution of the ET in the supermolecule is described by solving numerically and analytically Eqs. (13)-(14) with the initial condition of donor excitation with a $`\pi `$ pulse of appropriate frequency, i.e., the donor population is set to one.
For the numerical simulation we express the system of Eqs. (13)-(14) in the form $`\dot{\overline{\sigma }}=A\overline{\sigma },`$ where $`\overline{\sigma }`$ is a vector of dimension $`3^2`$ for the model with $`3`$ system states and the super-operator $`A`$ is a matrix of dimension $`3^2\times 3^2`$. We find an exponential growth of the acceptor population
$$P_3(t)=P_3(\mathrm{})\left[1\mathrm{exp}(k_{\mathrm{ET}}t)\right],$$
(19)
where for the solvent MTHF $`k_{\mathrm{ET}}3.59\times 10^8\mathrm{s}^1`$ and $`P_3(\mathrm{})0.9994`$. The population $`P_2`$ which corresponds to charge localization on the bridge does not exceed $`0.005`$. This means that in this case the superexchange mechanism dominates over the sequential transfer mechanism. Besides it ensures the validity of characterizing the system dynamics with $`P_3(\mathrm{})`$ and
$`k_{\mathrm{ET}}=P_3(\mathrm{})\left\{{\displaystyle _0^{\mathrm{}}}\left[1P_3(t)\right]𝑑t\right\}^1.`$ (20)
The alternative analytical approach is performed in the kinetic limit
$$t1/\mathrm{min}(\gamma _{\mu \nu }).$$
(21)
In Laplace space the inequality (21) reads $`s\mathrm{min}(\gamma _{\mu \nu })`$, where $`s`$ denotes the Laplace variable. It is equivalent to replacing the factor $`1/(i\omega _{\mu \nu }+\gamma _{\mu \nu }+s)`$ in the Laplace transform of Eqs. (13)-(14) with $`1/(i\omega _{\mu \nu }+\gamma _{\mu \nu })`$. This trick allows to substitute the expressions (14) for non-diagonal elements of the DM into Eq. (13). After this elimination we describe the coherent transitions to which the non-diagonal elements contribute by redefinition of the diagonal RDMEM (13)
$$\dot{\sigma }_{\mu \mu }=-\underset{\nu }{\sum }g_{\mu \nu }\sigma _{\mu \mu }+\underset{\nu }{\sum }g_{\nu \mu }\sigma _{\nu \nu }.$$
(22)
The transition coefficients $`g_{\mu \nu }`$ contain dissipative and coherent contributions
$$g_{\mu \nu }=d_{\mu \nu }+v_{\mu \nu }v_{\nu \mu }\gamma _{\mu \nu }\left[\hbar ^2\left(\omega _{\mu \nu }^2+\gamma _{\mu \nu }^2\right)\right]^{-1}.$$
(23)
Now it is assumed that the bridge is not populated. This allows us to find the acceptor population in the form of Eq. (19), where
$`k_{\mathrm{ET}}=g_{32}+g_{23}(g_{12}-g_{32})(g_{21}+g_{23})^{-1},`$ (24)
$`P_3(\mathrm{\infty })=g_{12}g_{23}\left[\left(g_{21}+g_{23}\right)k_{\mathrm{ET}}\right]^{-1}.`$ (25)
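A minimal sketch of these kinetic-limit formulas may make their use more concrete. The code below is ours, with purely illustrative inputs in angular-frequency units ($`\mathrm{\hbar }=1`$): it evaluates the transfer coefficients of Eq. (23) and combines them into $`k_{\mathrm{ET}}`$ and $`P_3(\mathrm{\infty })`$ via Eqs. (24)-(25). The dissipative parts $`d_{\mu \nu }`$ are placeholders, not values derived from a spectral density.

```python
def g(v, gamma, omega, d=0.0):
    """Kinetic-limit transfer coefficient, Eq. (23), in angular-frequency
    units (hbar = 1): dissipative part d plus the coherent contribution."""
    return d + v * v * gamma / (omega**2 + gamma**2)

# Illustrative inputs (s^-1); NOT the fitted parameters of the paper.
v12, v23 = 9.8e13, 3.3e12
w21, w23 = 2.1e15, 2.7e15
gam = 2.3e12

g12 = g(v12, gam, w21)            # D*BA  -> D+B-A (no thermal activation, d12 = 0)
g21 = g(v12, gam, w21, d=1.0e6)   # D+B-A -> D*BA  (placeholder depopulation d21)
g23 = g(v23, gam, w23, d=1.0e6)   # D+B-A -> D+BA- (placeholder depopulation d23)
g32 = g(v23, gam, w23)            # D+BA- -> D+B-A (no thermal activation, d32 = 0)

k_et = g32 + g23 * (g12 - g32) / (g21 + g23)   # Eq. (24)
p3_inf = g12 * g23 / ((g21 + g23) * k_et)      # Eq. (25)
print(f"k_ET = {k_et:.2e} s^-1, P3(inf) = {p3_inf:.3f}")
```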
The value of $`\mathrm{\Gamma }_{\mu \nu }=S_{\mathrm{SB}}^2\hbar ^2J(\omega _{\mu \nu })p_{\mu \nu }^2`$ can be found by comparing the experimentally determined ET rate with Eq. (24). To calculate $`J(\omega _{\mu \nu })`$ would require a microscopic model. To avoid a microscopic consideration we simply take the same $`\mathrm{\Gamma }_{\mu \nu }`$ for all transitions between excited states. The ET rate for $`\mathrm{H}_2\mathrm{P}\mathrm{ZnP}\mathrm{Q}`$ in MTHF is found by Rempel et al. to be $`k_{\mathrm{ET}}=(3.6\pm 0.5)\times 10^8\mathrm{s}^{-1}`$. If the bridge state has a rather high energy one can neglect thermally activated processes. Furthermore, $`v_{23}`$ is negligibly small with respect to $`v_{12}`$. In this case our result (24) reads
$`k_{\mathrm{ET}}=v_{12}^2\mathrm{\Gamma }_{21}\mathrm{\Gamma }_{23}\left(\hbar ^2\omega _{21}^2+\mathrm{\Gamma }_{21}^2\right)^{-1}\left(\mathrm{\Gamma }_{21}+\mathrm{\Gamma }_{23}\right)^{-1}.`$ (26)
With the relation $`\mathrm{\Gamma }_{21}=\mathrm{\Gamma }_{23}`$ and the experimental $`k_{\mathrm{ET}}`$ one obtains $`\mathrm{\Gamma }_{21}=\mathrm{\Gamma }_{23}\approx 2.25\times 10^{12}\mathrm{s}^{-1}`$. The fit of the numerical solution of Eqs. (13)-(14) to the experimental $`k_{\mathrm{ET}}`$ in MTHF gives the same value. The damping constants are thus fixed for a specific solvent; for other solvents they are calculated with the scaling functions. With this method the ET was found to be dominated by the superexchange mechanism, with rates $`4.6\times 10^6\mathrm{s}^{-1}`$ for CYCLO and $`3.3\times 10^8\mathrm{s}^{-1}`$ for $`\mathrm{CH}_2\mathrm{Cl}_2`$.
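The extraction of the damping constant from the measured rate amounts to inverting Eq. (26). With $`\mathrm{\Gamma }_{21}=\mathrm{\Gamma }_{23}=\mathrm{\Gamma }`$ and in angular-frequency units ($`\mathrm{\hbar }=1`$), Eq. (26) reduces to $`k_{\mathrm{ET}}=v_{12}^2\mathrm{\Gamma }/[2(\omega _{21}^2+\mathrm{\Gamma }^2)]`$, a quadratic equation in $`\mathrm{\Gamma }`$. A sketch of ours of this inversion follows; since the paper's Eq. (26) carries explicit $`\mathrm{\hbar }`$ factors and solvent-dependent parameters, the numbers here are illustrative only and are not meant to reproduce the quoted value.

```python
import numpy as np

def gamma_from_rate(k_et, v, omega):
    """Invert Eq. (26) for Gamma, assuming Gamma_21 = Gamma_23 = Gamma and
    working in angular-frequency units (hbar = 1):
        k_et = v**2 * Gamma / (2 * (omega**2 + Gamma**2)),
    i.e. the quadratic 2*k_et*Gamma**2 - v**2*Gamma + 2*k_et*omega**2 = 0."""
    disc = v**4 - 16.0 * k_et**2 * omega**2
    if disc < 0:
        raise ValueError("no real solution: k_et too large for the given v, omega")
    # The minus-sign root is the physical one when Gamma << omega.
    return (v**2 - np.sqrt(disc)) / (4.0 * k_et)

# Purely illustrative numbers (s^-1):
print(f"Gamma ~ {gamma_from_rate(k_et=3.6e8, v=9.9e13, omega=2.1e15):.2e} s^-1")
```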
## IV Discussion
### A Sequential Versus Superexchange
To discuss how the transfer mechanism depends on the change of parameters we calculate the system dynamics varying one parameter at a time. The dependencies of $`k_{\mathrm{ET}}`$ and $`P_3(\mathrm{})`$ on $`v_{12}`$, $`v_{23}`$ and $`\mathrm{\Gamma }_{21}`$, $`\mathrm{\Gamma }_{23}`$ are shown in Figs. 2 and 3. The change of each parameter influences the transfer in a different way.
In particular, $`k_{\mathrm{ET}}`$ depends quadratically on $`v_{12}`$ between $`10^{12}\mathrm{s}^{-1}`$ and $`10^{15}\mathrm{s}^{-1}`$ in Fig. 2. Below this range it saturates at the lower bound $`k_{\mathrm{ET}}\approx 3\times 10^5\mathrm{s}^{-1}`$. This corresponds to a crossover of the ET mechanism from superexchange to sequential transfer. However, due to the large energy difference between the donor and bridge states, the sequential transfer efficiency is extremely low. This is displayed by $`P_3(\mathrm{\infty })\approx 0`$. In the region $`v_{12}\sim v_{13}`$ both mechanisms contribute to $`k_{\mathrm{ET}}`$. The decrease of $`P_3(\mathrm{\infty })`$ in this region corresponds to coherent back transfer. The ET rate depends on $`v_{23}`$ in a similar way. At rather high values of $`v_{12}`$, $`v_{23}\gtrsim 10^{15}\mathrm{s}^{-1}`$ the relation (18) is no longer valid. For this regime one has to use the eigenstate instead of the site representation because the wavefunctions are no longer localized .
The variation of $`\mathrm{\Gamma }_{21}`$, $`\mathrm{\Gamma }_{23}`$ near the experimental values shows similar behavior of $`k_{\mathrm{ET}}(\mathrm{\Gamma }_{21})`$ and $`k_{\mathrm{ET}}(\mathrm{\Gamma }_{23})`$ (see Fig. 3). Here we vary $`\mathrm{\Gamma }_{21}`$ and $`\mathrm{\Gamma }_{23}`$ independently. Both $`k_{\mathrm{ET}}(\mathrm{\Gamma }_{21})`$ and $`k_{\mathrm{ET}}(\mathrm{\Gamma }_{23})`$ increase linearly until the saturation value $`7\times 10^8\mathrm{s}^{-1}`$ is reached at $`\mathrm{\Gamma }>10^{12}\mathrm{s}^{-1}`$. There is qualitative agreement between the numerical and analytical values. In Eq. (20) infinite time is approximated by $`10^{-5}\mathrm{s}`$, so ET rates lower than the inverse of this time cannot be obtained.
The physical meaning of the ET rate dependence on $`\mathrm{\Gamma }`$ seems to be transparent. At small values of $`\mathrm{\Gamma }`$ part of the population coherently oscillates back and forth between the states. The increase of the dephasing $`\gamma _{\mu \nu }`$ quenches the coherence and makes the transfer irreversible, so the transfer becomes faster, up to a maximal value. For the whole range of $`\mathrm{\Gamma }`$ the depopulations $`d_{21}`$, $`d_{23}`$ and thermally activated transitions $`d_{12}`$, $`d_{32}`$ always remain smaller than the coherent couplings; therefore they do not play an essential role.
Next, the similarity of the dependencies on $`\mathrm{\Gamma }_{21}`$ and $`\mathrm{\Gamma }_{23}`$ will be discussed on the basis of Eq. (24). In the limit $`k_BT/\hbar \omega _{\mu \nu }\rightarrow 0`$ thermally activated processes with $`\omega _{\mu \nu }<0`$ vanish, and so $`|n(\omega _{\mu \nu })|=0`$, while depopulations with $`\omega _{\mu \nu }>0`$ remain constant, $`|n(\omega _{\mu \nu })|=1`$. The condition $`\omega _{\mu \nu }\gg \gamma _{\mu \nu }`$ allows us to neglect $`\gamma _{\mu \nu }^2`$ in comparison with $`\omega _{\mu \nu }^2`$. With these simplifications Eq. (24) becomes
$`k_{\mathrm{ET}}\approx \mathrm{\Gamma }_{21}\mathrm{\Gamma }_{23}\left(\mathrm{\Gamma }_{21}+\mathrm{\Gamma }_{23}\right)^{-1}\left(v_{12}^2/\omega _{21}^2+v_{23}^2/\omega _{23}^2\right),`$ (27)
i.e. symmetric with respect to $`\mathrm{\Gamma }_{21}`$ and $`\mathrm{\Gamma }_{23}`$.
The transfer mechanism depends most strongly on the bridge energy $`E_{\mathrm{D}^{+}\mathrm{B}^{-}\mathrm{A}}`$, as presented in Fig. 4. In different regions one observes different types of dynamics. For large bridge energy $`E_{21}=E_{\mathrm{D}^{+}\mathrm{B}^{-}\mathrm{A}}-E_{\mathrm{D}^{*}\mathrm{BA}}\gg 0`$ the numerical and analytical solutions do not differ from each other. The transfer occurs via the superexchange mechanism. The ET rate reaches a maximal value of $`10^{11}\mathrm{s}^{-1}`$ for low bridge energies.
As the bridge energy approaches the donor energy the sequential transfer starts to contribute to the ET process. The traditional scheme of sequential transfer is obtained when the donor, bridge, and acceptor levels are arranged in a cascade. In this region the analytical solution need not coincide with the numerical solution because the approximations used are no longer valid. For equal bridge and acceptor energies $`k_{\mathrm{ET}}`$ displays a small resonance peak in Fig. 4(a). When the bridge energy is lower than the acceptor energy the population gets trapped at the bridge. The finite $`k_{\mathrm{ET}}`$ for $`E_{21}<E_{31}`$ does not mean ET because $`P_3(\mathrm{\infty })\approx 0`$. For the dynamic time interval $`t<\gamma _{\mu \nu }^{-1}`$ part of the population tunnels back and forth to the acceptor with $`k_{\mathrm{ET}}`$. The analytical solution (24) gives a constant rate for the regime $`E_{21}<E_{31}`$, while the numerical solution of Eqs. (13)-(14) is unstable, because such coherent oscillations of population cannot be described by Eq. (19) and $`k_{\mathrm{ET}}`$ cannot be fitted with Eq. (20). In Fig. 4 the regime $`E_{21}<E_{31}`$ occurs for small $`E_{21}`$ while $`E_{31}`$ is kept constant, and for large $`E_{31}`$ while $`E_{21}`$ remains constant.
The energy dependence of the final population has a transparent physical meaning over the whole range of energy. A large bridge energy ensures the transition of the whole population to the acceptor. In the intermediate case, when the bridge has the same energy as the acceptor, the final population spreads itself over these two states, $`P_3(\mathrm{\infty })=0.5`$. Lowering the bridge even further, the whole population remains on the bridge as the lowest state of the system. The ET rate as a function of the acceptor energy $`E_{31}=E_{\mathrm{D}^{+}\mathrm{BA}^{-}}-E_{\mathrm{D}^{*}\mathrm{BA}}`$ in Fig. 4 remains constant while the acceptor energy lies below the bridge energy. Increasing $`E_{31}`$ up to $`E_{21}=1.36\mathrm{eV}`$ gives the maximal $`k_{\mathrm{ET}}\sim \mathrm{\Gamma }_{21}`$. When $`E_{31}`$ increases further the acceptor becomes the highest level in the system and therefore the population cannot remain on it.
### B Different Solvents
For the application of the results to various solvents and comparison with experiment one should use the scalings for the energies, coherent couplings, and damping constants as discussed above. The combinations of the energy scaling of subsection II A and the damping-constant scalings of subsection II D are represented in Fig. 5. An increase in $`ϵ_\mathrm{s}`$ from $`2`$ to $`4`$ leads to an increase of the ET rate, no matter which scaling is used. Further increase of $`ϵ_\mathrm{s}`$ induces saturation for the Onsager-Mataga scaling and even a small decrease for the Born-Marcus scaling. Within the applied approximations an increase in the solvent polarizability and, consequently, of its dielectric constant lowers the bridge and acceptor energies and increases the system-bath interaction and, consequently, the relaxation coefficients. It induces a smooth rise of the ET rate for the Onsager-Mataga scaling. On the other hand, a large $`ϵ_\mathrm{s}`$ leads to essentially different polarization states of the environment for the supermolecule states with different dipole moments. This reduces the coherent couplings, see Eq. (17), leading for the Born-Marcus scaling to a small decrease of $`k_{\mathrm{ET}}`$ for large $`ϵ_\mathrm{s}`$. The ET rate with this scaling comes closer to the experimental value $`k_{\mathrm{ET}}(ϵ_\mathrm{s}^{\mathrm{CH}_2\mathrm{Cl}_2})`$. This gives a hint that the model of individual cavities for each molecular block is closer to reality than the model with a single cavity for the whole supermolecule.
Below we consider the Born scaling, Eq. (4), for the system energies and the Marcus scaling, Eq. (16), for the damping constants, in order to compare the calculated ET rates with the measured ones. For the solvents CYCLO, MTHF, and $`\mathrm{CH}_2\mathrm{Cl}_2`$ one obtains the relative bridge energies $`E_{21}=1.77\mathrm{eV}`$, $`1.36\mathrm{eV}`$, and $`1.30\mathrm{eV}`$, respectively.
The calculated ET rate coincides with the experimental value for $`\mathrm{H}_2\mathrm{P}\mathrm{ZnP}\mathrm{Q}`$ in CYCLO, see Table I. For $`\mathrm{CH}_2\mathrm{Cl}_2`$ the numerical ET rate is approximately 30% faster than the experimental value. It has to be noted that a value for the damping rates can be chosen such that the calculated curve almost passes through all three experimental error bars. On the other hand, an error in the present calculation could be due to (i) the absence of vibrational substructure of the electronic states in the present model; (ii) an incorrect dependence of the system state energies on the solvent properties; (iii) the opening of additional transfer channels not included in the scheme shown in Fig. 1. Each of these possibilities deserves some comment.
ad (i): The incorporation of the vibrational substructure would complicate the model, giving a more intricate dependence of the ET rate on the energies of the electronic states and on the dielectric constant. It should yield the maximal ET rate for nonequal energies of the electronic states, namely for the activationless case when the energy difference equals the reorganization energy. For a comparison of the models with and without vibrational substructure see Ref. .
ad (ii): Effects such as the solvation shell would require a molecular dynamics simulation. The total influence of the solvent is probably reflected in an energy shift between the spectroscopically observable states $`E_{\mathrm{D}^{*}\mathrm{BA}}`$ and $`E_{\mathrm{DB}^{*}\mathrm{A}}`$ .
ad (iii): A solvent with large $`ϵ_\mathrm{s}`$ can bring high-lying system states closer to the ones included in Fig. 1. E.g., because of its larger dipole moment, $`|\mathrm{D}^{-}\mathrm{B}^{+}\mathrm{A}\rangle `$ is strongly influenced by the solvent.
### C Comparison with similar theories
As discussed above, the RDMEM are very similar to those of the GSLE model. The latter is an extension of the HSR theory in which a classical bath is used and for which analytical solutions are available . We are not aware of any analytical solution of the GSLE model as presented here. Nor has this model been applied to similar ET processes.
The numerical steady-state method used by Davis et al. is attractive due to its simplicity, but unlike our method it is not able to give information about the time evolution of the system. We use a similar approach derived within a Redfield-like theory, but we consider dephasing and depopulation between each pair of levels. In contrast, Davis et al. incorporate relaxation phenomenologically only for selected levels: dephasing $`\gamma `$ occurs between excited levels, while depopulation $`k`$ takes place only for the sink from the acceptor to the ground state. The advantage of the approach of Davis et al. is the possibility to investigate the ET rate dependence for a bridge consisting of more than one molecular block. This was not the goal of the present work, but it can be extended in this direction. We are interested in the ET in a concrete molecular complex with realistic parameter values and realistic possibilities to modify those parameters. Our results, as well as the results of Davis et al., show that ET can occur as a coherent process (with the superexchange mechanism) or a dissipative one (with the sequential transfer mechanism).
## V Conclusions
We have performed a study of the ET in the supermolecular complex $`\mathrm{H}_2\mathrm{P}\mathrm{ZnP}\mathrm{Q}`$ within the DM formalism. The determined analytical and numerical ET rates are in reasonable correspondence with the experimental data. The superexchange mechanism of ET dominates over the sequential transfer. We have investigated the stability of the model by varying one parameter at a time. The qualitative character of the transfer is stable with respect to a local change of the system parameters. The crossover between the two transfer mechanisms can be induced by lowering the bridge energy. The relation of the theory presented here to other theoretical approaches to ET has been discussed.
The calculations performed in the framework of the present formalism can be extended in the following directions: (i) considerations beyond the kinetic limit - the vibrational substructure has to be included in the model, as well as solvent dynamics and, probably, non-Markovian RDMEM; (ii) enlargement of the number of molecular blocks in the complex; (iii) initial excitation of states with rather high energy, which should open additional transfer channels.
###### Acknowledgements.
D. K. thanks U. Rempel and E. Zenkevich for stimulating discussions. Financial support of the DFG is gratefully acknowledged. |
# Condensed Matter Physics - Biology Resonance
## I Introduction
‘Condensed Matter Physics’ (CMP) is a clever name for the study of any form of matter that is condensed - liquids, solids, gels, cells, superfluid He4, the quantum Hall liquid, etc. The folklore is that the name ‘Condensed Matter Theory’ was coined in Cambridge in the 60’s in the solid state group that involved people like V. Heine and P.W. Anderson. Of course, before this field was christened it existed in its own right as solid state physics and related fields, with very many significant developments to its credit - the new name gave it an added identity and perhaps a new purpose.
Physics, a part of Natural Science, is an experimental science. It gains its strength from experiments, observations, theorizing, and impact on technology and society. CMP has a special place in physics because of its closeness to a multitude of feasible and often novel experiments. This feasibility is intimately tied to the wealth of matter and associated phenomena around us, as well as to developments in the field of material science and in turn technology - both low and high tech. Elegant concepts from quantum physics, statistical mechanics and mathematics come alive in combination, so that the field continues to produce surprises, new phenomena and new concepts. This field is also a source of innovative new experimental methods that have extended human ‘senses’ to the atomic scale - modern x-ray crystallography, NMR, neutron scattering, spectroscopy, the scanning tunneling microscope and so on.
The aim of the present article is to offer the point of view that this field has grown partly with an aim to address deeper issues in the field of living condensed matter, namely biology, and that a century of effort is really a warm up exercise towards this difficult goal. The point of view I am providing is perhaps obvious - my main message is that a true resonance between the two fields is something natural and so is likely to happen, or has already begun.
## II Nature of Condensed Matter Physics
CMP is diverse and complex. It addresses issues such as why silicon has a diamond like structure using quantum mechanical considerations, or the growth dynamics of snow flakes, or the electrical conduction in carbon nanotubes. There is CMP in the field effect transistor, modern computer chips and the sensitive SQUID magnetometer that detects the feeble electrical activity that goes on in our restless brain.
The field is messy but rewarding. Quantization of the Hall conductance, which won 2 Nobel prizes, occurs amidst disorder and interaction. While the field is diverse, there are powerful unifying notions and ideas: spontaneous symmetry breaking, order parameter, renormalization group, complex collective behavior, quantum coherence, chaos etc. The idea of the renormalization group is an example that has grown out of the study of condensed matter systems such as the liquid-gas phase transition and Kondo problems - it has far reaching application potential, from the possibility of understanding some hierarchical structures in biological systems to turbulence in classical fluids.
The field of CMP possesses a deep working knowledge of quantum mechanics, both in theory and experiments. This gives it a unique strength and also makes its relation to biology special. The stability of atoms, the origin of chemical bonds, electron transfer, proton tunneling etc. in biology are truly quantum mechanical. However, it is fair to say that some mysterious leaking of quantum effects into unexpected aspects and domains of biology (such as the origin of consciousness), apart from the above obvious ones, remains a distinct, perhaps remote, possibility. CM physicists will not accept such suggestions uncritically, but will have a natural edge in unraveling those which turn out to be meaningful.
Physics gains its predictive power and becomes a quantitative science because of the powerful use of mathematics - analysis and approximations intertwined with physical insights, order of magnitude estimates, dimensional analysis and many modern mathematical ideas such as homotopy theory, group theory, algebraic geometry, functions of many complex variables etc. In view of the remarkable developments in computers, computational CMP is becoming very popular and powerful. Often one can do a computer simulation or experiment and create situations that are hard to create in the laboratory or to study analytically.
I alluded to the complexity of the study of condensed matter. This gives it a remarkable ability to suggest new paradigms through its emergent character, paradigms that could be helpful elsewhere. The last several decades have seen some of them: i) spin glass and neural network, and ii) self organized criticality, power laws etc. These notions may not have solved the real problems of biology - but they are some new windows for physicists to look at this totally new world of biology. The wealth of phenomena in condensed matter is sure to provide seeds of new paradigms provided we look for them and develop a sensitivity to abstract them.
## III Nature of Biology as a science
Like physics, biology is truly an experimental science. Most of the problems in biology are far too complex, at the moment, to be analyzed threadbare, the way we do in physics with some problems, using our existing knowledge and concepts of physics, chemistry and mathematics. However, after the revolutionary beginning of the field of molecular biology in the middle of this century, biology has taken a new shape, and looks comfortable even for a physicist to look at from a distance. Very general principles like Darwin’s natural selection, down to very specific structure and function relations in DNA, proteins, etc., dominate the field currently. There are also many dogmas, hard-earned hypotheses and working principles that pervade this truly diverse field - protein structure, signal transduction, brain function to name a few. Thanks to experimental tools like x-ray crystallography, electron microscopy, NMR imaging, AFM, STM, optical tweezers and so on, which actually came from physics, the field is undergoing revolutionary development.
The urgent problem facing a hard core biologist is often very different from what a physicist, genuinely interested in biology, is capable of solving in a short time period. This is the reason many biologists sincerely feel that physicists cannot solve the mysteries of biology. On the other hand, physicists like Schroedinger, Max Delbrück, Crick, Hopfield and others have made truly original contributions and opened up new directions. It is becoming clear that physics is not just providing experimental tools to other fields such as biology; it is evolving capabilities and insights to understand the spirit of biology.
## IV Some Examples
Having made several general remarks let me indicate some examples, based on my one decade of a distant admiration for biology - it is so distant that biology does not know that I am dreaming of her !
Brain
A brain would naturally like to think about how it thinks; why grey matter - a large piece of condensed matter - possesses consciousness, self awareness, the mind’s eye, the ‘I’, etc. Physicists have no clues as to how the laws of quantum mechanics, thermodynamics or even quantum gravity for that matter, lead to these profound properties in a living state of condensed matter. After listening to an illuminating talk by John Hopfield on neural networks and associative memory in the fall of 1987 at Princeton, I thought, in a moment of weakness, that I understood the physics of the mind! - soon to realize that it was far from it.
It is becoming increasingly clear that all our understanding of spin glass physics, which came partly from the study of a dilute concentration of magnetic impurities in otherwise pure gold, has only landed us somewhere on the plains of the Himalayan range; we have to scale Mount Everest. The concepts of network, basin of attraction, and possible hierarchical or ultrametric organization of attractors are all but words in constructing a long poem. This became clear to me when I participated in an Institute of Theoretical Physics, Santa Barbara workshop on ‘Neurobiology for physicists’ in the fall of 1987, where neurobiologists humbled us with mind-boggling biological and clinical facts about the brain. But, at the same time, our successful understanding of the neural network of the sea slug, an organism that has a few hundred neurons, gives us hope that perhaps one day we can understand the mammalian brain, with all its complexities and hierarchies. It is clear that all our efforts so far have been a warm up exercise.
Gene
The next example is from DNA. I was surprised, like my colleagues, by the finding of some physicists that the DNA of various living organisms possesses some kind of long range or power law correlation in its nucleotide sequence - that is, the probability that two nucleotides separated by n bases in a strand of DNA will both be adenine, for example, goes as $`\frac{1}{n^\alpha }`$. It was claimed that the exponent $`\alpha `$ is species dependent - a one-parameter characterization of a species at the level of the nucleotide sequence in DNA!
Soon it became clear that things are much more complicated - there are introns, the non-coding genes, tandem repeats and so on that make this long range correlation not very meaningful or insightful. However, it gave an opportunity to many physicists like me to get a glimpse of the world of biology, with problems that are even more challenging than the power law correlation in the nucleotide sequence. Genetics is full of many surprises - gene replication, gene repair, translation, gene regulation etc. All my initial enthusiasm to model DNA as a one-dimensional 4-state Potts model with long range interaction and frustration soon gave way to other glamours of biology.
Gene regulation is a fascinating subject. This is what controls the shape of a blue whale or a butterfly, in its growth, by a profound regulation of the production of various proteins, in various cells at different times, starting from the first zygote cell (formed by the union of a sperm and an egg). It is a network that is very different from the neural network or immune network or the spin glass or glass. At the same time it is a network that should possess some general characteristics of any large network - this is what prompted Kauffman, for example, to invent a Boolean net to model gene regulation.
Physicists have come across some networks and learned some general principles - the network of dislocations that controls the mechanical properties of solids under stress, shape memory alloys, glasses and spin glasses. Thanks to experiments that guide the theoretical developments hand in hand, we have gained some insight and useful notions have been developed. Erstwhile condensed matter physicist and my ex-collaborator Shoudan Liang is deep into genetic nets, and Stan Leibler and Naama Barkai are deep into biochemical nets, apart from other involvements. But all our insights from CMP are truly warm up exercises at the base camp.
Electron and exciton transport in biological systems
Szent-Gyorgyi, an eminent biochemist, speculated on the importance of electron transport in biological systems including DNA. He, along with others, speculated that it could hold some of the secrets of carcinogenesis. The possible connection to cancer apparently got a lot of (unjustifiable?) funds - that is a different story. The point is that the lightest of charged particles in biology, namely the electron, is involved in very many vital activities. Within proteins, electron transport has been well studied in biology for the last many decades. There are reaction centers, typically a prosthetic group such as a porphyrin complexing a metal ion, embedded in the protein. On absorbing a light quantum the reaction center releases an electron that tunnels over a distance of a few tens of angstroms through the folded protein before it is absorbed by another special complex, just to trigger another reaction; the process then continues, sometimes ending up in an ion transfer across the membrane, if it is a membrane-bound protein.
Electron transfer in biological systems, even though it takes place at room temperature, is clearly a quantum mechanical phenomenon. The theory of Marcus and its generalizations have been used for quantitative estimates of the reaction rates. Our experience with electronic conduction in semiconductors, metals or, in general, crystalline materials, where Bloch’s theory applies, is only a warm up exercise for handling this special disordered system. Condensed matter systems such as amorphous materials, where the Anderson theory of localization applies, look too simple and less structured compared to the mesoscopic biological proteins. We have a disordered peptide bond skeleton along with the amino acid side groups that have a considerable number of $`\pi `$ electrons, where the electron correlations are important. That is, there is more structure, including some significant vibronic couplings and electron correlation effects. Most of the present theoretical efforts I have seen are of the one-electron type and are computer intensive. Are we missing some subtle effects, including the correlation effect in the $`\pi `$ electron pool of the porphyrin rings? I think only experiments can give an answer to these questions, through possible anomalies. It is interesting that even the diamagnetic response of a pool of $`\pi `$ electrons of planar aromatic ring compounds shows interesting surprises through correlation effects, a subject that worried people like Pauling, London, K S Krishnan and others in recent times.
One hears of new experiments in which electron transfer along the DNA double helix has been seen indirectly. Its possible relevance to biological functions is an obvious next question. Physicists, with their warm up exercise and training in condensed matter, can hope to scale the mountain after learning many biological details and with help from future experiments.
Structure and function are catch words in biology, often used at the level of DNA or enzyme functions. In a different context, in bacterial photosynthesis, certain geometrical arrangements of porphyrin complexes have given new insights into the mechanism of energy transport by excitons. The structure of the basic unit of the so called light harvesting complex was deciphered a couple of years ago. There are two types of ring complexes: one contains a single concentric circle of 18 porphyrin molecules that are stacked in a circle like slides in a circular slide box. The other contains two concentric rings with the reaction center complex at the center. These ring complexes are organized on the surface of the cell in some quasi periodic fashion. The incident photon is absorbed by the porphyrin to create an exciton which propagates to end up in the reaction center, activating an electron transfer reaction. To this complex geometrical arrangement one can apply, as Ramakrishna and I attempted among others, our knowledge of exciton transport in molecular crystals. Already there are many surprises - one always felt that exciton transport is an incoherent hopping process at physiological temperatures. But within the ring complex the exciton transport has been shown to be coherent, experimentally and theoretically. Our feeling is that between the ring complexes also, through the Forster mechanism, there is some coherence and the possibility of new physics.
What is remarkable is that the photosynthetic apparatus of the purple bacteria is probably the simplest of the lot. When we come to even simple algae and plant leaves, the photosynthetic apparatus is much more complex and structured, with light guides and so on. In many of these cases we do not know in detail the basic structural units and their organizations - apart from circular complexes there are cylindrical complexes and light guides. What we have learned in CMP as exciton transport in semiconductors or molecular crystals is truly a warm up exercise when we come to this very complex photosynthetic apparatus.
Regulated self assembly
Periodic structures are very dear to condensed matter physicists. We study how these structures change when we heat a solid, or how a beautiful sugar crystal grows from a tiny nucleus in a concentrated sugar solution. There is plenty of physics and statistical mechanics. Sometimes there are even quantum effects, as in the case of Helium solids or solids of light elements such as Li.
In biology very rarely do we come across periodic structures. Since the structures of proteins and DNA imply important functions, evolution has not chosen structures that are manifestly periodic. However, there are remarkably regular, sometimes symmetric, hierarchies of structures. For example, if we look at the T4 virus, there are a few types of basic proteins that make up the so called prohead - a complex of proteins with icosahedral symmetry that encapsulates the viral DNA. Then there is a neck, again made up of protein complexes, and a body (that looks like a bit of microtubule) - a cylinder made up of proteins - and the legs made of proteins. This tiny little ‘robot’ is different from a periodic crystal. However there is some regularity in its making.
A condensed matter physicist is tempted to wonder about the assembly of this complicated macromolecular robot. The physics is not exactly that of the growth of a sugar crystal. It is a self assembly that is regulated. It is non-equilibrium statistical mechanics that is embedded in a signaling network.
Regulated self assembly is a new notion that is unique to and ubiquitous in biology. The above is only an example of the many hierarchical structural organizations that one comes across in biology - morphogenesis, microtubules, the myosin complex, collagen fibers, fibrils etc.
In fact I learned about this notion of regulated self assembly during my sabbatical at Princeton in 1996, in a Cell Biology course organized by Stan Leibler and Frank Wilczek. It became clear in that course, which had distinguished attendees like Curtis Callan, Stephen Adler and others, that there are many challenging and profound problems. We physicists returned spellbound at the end of every class on learning new wonders in cell biology, and felt the need for serious investigations by many physicists.
Finally, a word about some macromolecular structural changes in biology. Structural rearrangements in biology are in plenty. A heme protein, as soon as it gets an oxygen, undergoes a conformational change so that it can bind the second oxygen more easily, and so on. An allosteric protein like the motor protein, once it gets an ATP, undergoes a massive conformational or structural change that is like an elementary step in walking. Our well known notions such as soft modes or anharmonic interactions, which we are used to in structural changes in simple solids, are far from sufficient to understand even the simplest macromolecular structural change - we have to think afresh. People like Frauenfelder, Austin, Stein, Wolynes and others have made a start at this.
## V Conclusion
The trend of many condensed matter physicists taking a serious look at biology has been visible for a long time. One also hears of new institutes and ventures like the Santa Fe Institute, which catalyses new kinds of activities and exchanges. In a special section devoted to ‘Complexity Science’, a recent issue of Science enumerates about a dozen universities and labs in the United States trying to set up new cross-disciplinary ventures involving physics and biology departments, just at the turn of this century. This has started happening in a natural fashion in developed nations like the US or Europe. The developing countries will do well to recognize this and to participate and contribute to this resonance and redefinition among disciplines in science.
When the condensed matter physics - biology resonance touches the spirit of biology, the nature of progress will be substantial.
Acknowledgment
I thank P.W. Anderson for his critical reading of the manuscript and comments and S. Arunachalam for correcting the manuscript. |
# Study of Ce intermetallic compounds: an LDA classification and hybridization effects
## I Introduction
In the last decades, $`Ce`$ intermetallic compounds have received a lot of attention both in theory and experiment. On one side, these systems can exhibit a large variety of behaviors, such as superconductivity, huge magnetoresistance, itinerant magnetism, etc. On the other side, the nature of the $`4f`$ states is controversial since it has not been established yet which systems can be treated using a bandlike picture and which ones using a localized one. In the first picture, the effective electronic correlation is weak and the $`4f`$ states form a narrow band, while in the second one, the $`4f`$ electrons are highly correlated and interact weakly with the conduction band.
The main difference between $`Ce`$ and other rare earth systems is that, in the case of Ce, the $`4f`$ state is energetically very close to the Fermi level so that its occupation number and the strength of hybridization with the conduction electron bands strongly depend on the chemical and geometrical environment. In that sense, we can understand the complexity of the phase diagram of pure metallic $`Ce`$ which, depending on temperature and pressure, can be magnetic, paramagnetic or superconducting. Due to this fact, different types of magnetism have been observed within the intermetallic compounds: intermediate valence behavior, the Kondo effect and magnetically ordered structures (FM, AF) are among them. Intermediate valence behavior appears in systems having the states with $`n`$ and $`n-1`$ $`4f`$ electrons almost degenerate. The nearly degenerate condition is a consequence of hybridization of the $`4f`$ states and results in a configuration with a non-integer electronic occupation number $`n_{4f}`$. In general, intermediate valence materials show anomalous values for some physical properties such as lattice parameter, bulk modulus, magnetic susceptibility, etc., as compared to systems with an integer $`4f`$ occupation number. The intermediate valence behavior in $`Ce`$ compounds is characterized by a very small or zero magnetic moment.
The different physical situations posed by these systems have traditionally been described by many-body Hamiltonians which consider strong electron-electron interactions and treat hybridization as a small perturbation. Kondo systems, for which correlations are dominant, have to be treated with these model Hamiltonians. In $`Ce`$ intermetallics, particularly, it is not at all clear what the relative strengths of hybridization and correlations are. There are systems whose electronic ground state can be well described by an itinerant picture, namely those in which $`4f`$ hybridization plays a relevant role, giving rise to a decrease in correlation effects.
In this work our aim is twofold. First, we want to use the band picture to establish a criterion for characterizing the ground state of $`Ce`$ intermetallic compounds by analysing their LDA spin contribution to the magnetic moment. As a second goal, and using the established criterion as a tool, we also want to analyse the dependence of the magnetism of $`Ce`$ on the geometrical and chemical environment. For this we study a variety of $`Ce`$ compounds whose magnetic properties are experimentally well known. We claim that it is possible to determine to which regime the system under study belongs - that is, whether we are dealing with strongly hybridized or strongly localized systems, going through the intermediate situations - by analysing the itinerant magnetic contribution that results from spin polarized LDA calculations. This analysis is certainly done in the knowledge that only those $`Ce`$ compounds whose $`4f`$ states are essentially itinerant can be well described within the LDA band theory frame. Actually, a proper treatment of the $`4f`$ electrons would be to include the self-interaction correction (SIC) in the calculations, but this is not necessary for the kind of description undertaken in this work.
Once we have established our characterization tool, we discuss the relative importance of the chemical and crystalline environment for hybridization, and thereby for the determination of the magnetic state, in intermetallic systems of the type $`CeX_n`$ with $`n`$ equal to or larger than one.
This paper is organized as follows. In Sec. II we report the results of calculations done for several real systems and present a discussion of how the LDA approximation can be used to classify them into magnetic, intermediate valence or paramagnetic compounds. In the second part of this section we analyse the influence of chemical and structural environment on the magnetic ground state of $`Ce`$ compounds. We finally present the conclusions in Sec. III.
## II Electronic structure calculations
In this work we perform ab-initio calculations using the Full Potential Linearized Augmented Plane Waves (FP-LAPW) method in the Local Density Approximation (LDA). We use the exchange and correlation potential of J.P. Perdew and Y. Wang. We perform both paramagnetic and spin polarized calculations. The sampling of the Brillouin zone used to calculate the electronic ground state depends on the size and symmetry of each system. In general, from 800 to 1400 k-points in the first Brillouin zone are enough for convergence. The muffin-tin radii considered, $`R_{mt}`$, are equal to 2.8 au for Ce, 2.4 au for $`4d`$ transition metals (TM), 2.0 au for $`3d`$ transition metals (TM), 1.8 au for S and 1.6 au for N. The cutoff parameter which gives the number of plane waves in the interstitial region is taken as $`R_{mt}K_{max}=8`$, where $`K_{max}`$ is the maximum value of the reciprocal lattice vector used in the expansion of plane waves in that zone. The total energy is converged up to $`10^{-4}Ry`$.
### A Characterization of $`Ce`$ compounds through their LDA magnetic moments
It is well known that $`Ce`$ has one $`4f`$ electron in the solid. As we have mentioned before, the energy of this $`4f`$ state is very close to the Fermi level so that it is, in principle, very sensitive to chemical and geometrical environment. In some compounds $`Ce`$ keeps its magnetic moment equal to 1 $`\mu _B`$ and in others it can decrease even going to zero. In this Section we establish a way of characterizing $`Ce`$ compounds which allows us to classify them into magnetic, intermediate valence or paramagnetic by doing LDA spin polarized calculations. To achieve this we study compounds whose magnetic properties have already been reported and are listed in Table I.
$`CeNi`$, $`CeRh`$ and $`CePd_3`$ are accepted to be intermediate valence compounds while $`CeN`$ and $`CeRh_3`$ are well described by an itinerant picture, being paramagnetic. $`CeS`$, $`CePd`$, $`CeAg`$ and $`CeCd`$ are magnetically ordered systems. Table I contains the structural data and the experimental ordering temperatures, $`T_C`$ and $`T_N`$, which correspond to the Curie and Néel temperatures depending on whether the compounds are ferromagnetic or antiferromagnetic. No magnetic order has been experimentally observed for the cases where no ordering temperature is shown.
In our calculations three types of magnetic configurations are possible: paramagnetic (P), ferromagnetic (F) and antiferromagnetic(AF). In Table I we also list the more stable configuration and the obtained LDA magnetic moment of $`Ce`$ for the different compounds. The calculations are performed at the experimental volumes shown in the same table.
The obtained magnetic moments for the compounds $`CeNi`$, $`CeRh`$ and $`CePd_3`$ are far from being 1$`\mu _B`$, but they are clearly not zero, so we cannot consider them paramagnetic. In that sense it is not surprising that these are precisely the compounds widely accepted to be intermediate valence systems. An intermediate valence system is usually defined in the literature as one having, on average, a non-integer number of $`4f`$ electrons.
It is very important to keep in mind that LDA calculations cannot account for the strong correlation effects that may occur in some rare earth compounds. However, using LDA we can infer the correct ground state for the systems studied here from their itinerant contribution.
Based on the comparison of experimental data and our LDA results, we propose that, depending on the LDA magnetic moment, a $`Ce`$ compound can be considered as
Itinerant if $`\mu _{Ce}=0`$,
Intermediate valence if $`0<\mu _{Ce}<0.5\mu _B`$
or
Magnetic if $`\mu _{Ce}>0.5\mu _B`$.
The systems that are accepted to be itinerant, such as $`CeN`$ and $`CeRh_3`$, are the ones whose magnetic moment is exactly equal to zero. That is, when in a given $`Ce`$ system the magnetic moment is zero the $`4f`$ electrons have a strong itinerant character.
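Stated as a rule, the criterion is simple enough to encode directly; the sketch below is ours, and the example moments are placeholders (the calculated values are those listed in Table I).

```python
def classify_ce_compound(mu_ce: float) -> str:
    """Classify a Ce compound from the LDA spin contribution mu_ce to its
    magnetic moment (in Bohr magnetons), following the proposed criterion."""
    if mu_ce == 0.0:
        return "itinerant (paramagnetic)"
    if mu_ce < 0.5:
        return "intermediate valence"
    return "magnetically ordered"

# Illustrative moments only -- see Table I for the calculated values.
for name, mu in [("CeRh3", 0.0), ("CeRh", 0.2), ("CePd", 0.9)]:
    print(f"{name}: mu = {mu} mu_B -> {classify_ce_compound(mu)}")
```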
In $`Ce`$ compounds the degree of localization is closely related to the strength of $`Ce`$ hybridization and consequently to the magnetic state. Let us take two extreme situations as examples, $`CeRh_3`$ and $`CeAg`$. In Figure 1 we show an electronic charge density plot corresponding to one of the Kohn-Sham orbitals with energy close to the Fermi level and with $`90\%`$ of $`4f`$ character. In the $`CeRh_3`$ case there is a mixing between $`Rh`$ and $`Ce`$ states near $`E_F`$, while this is not the case for $`CeAg`$, as can be clearly seen in Figure 1.
We can also analyse the degree of hybridization by comparing the $`4f`$ and $`4d`$ partial densities of states (Figure 2). The most striking difference between the $`CeRh_3`$ and $`CeAg`$ systems is that in the latter the $`4d`$ and $`4f`$ bands are approximately 4 eV apart, both of them being very narrow (of the order of 1 eV), while in $`CeRh_3`$ the $`4d`$ band is more extended in energy (about 4 eV), leading to an energy region around the Fermi level where hybridization is important. Thus, taking into account the calculated magnetic moments and using the established criterion, we say that in the compound where the $`4f`$ band is more hybridized, namely $`CeRh_3`$, the $`4f`$ band becomes more delocalized, leading to a nonmagnetic Ce. This should hold in general.
### B Dependence of $`Ce`$ magnetism on crystalline and chemical environment
$`Ce`$ can completely or partially lose its magnetic moment depending on chemical and crystalline environment. In this section we study how $`4f`$ hybridization affects the $`Ce`$ magnetic moment depending on the local symmetry and the chemistry of the ligand. The $`4f`$ band hybridizes not only with $`4f`$ electrons of neighboring $`Ce`$ sites but also with orbitals of the ligand ($`4d`$, $`3d`$ or $`sp`$ bands). Both types of hybridization produce a decrease in the magnetic moment of Ce.
For the symmetry considerations, we compare systems of the type $`CeX`$ (X=Ni, Rh, Pd) and analyse them in the CrB and CsCl crystal structures. These structures are taken as prototypes of different symmetries within the same relative composition. Actually, both structures appear in nature associated with $`CeX`$ compounds. In general, when X is a late TM the observed structure is CrB, and when X belongs to the 1B, 2B or the 3A column of the periodic table the corresponding structure is the CsCl one.
The calculation of a given $`CeX`$ system in both the CrB and the CsCl structures helps us to understand how local symmetry affects $`4f`$ hybridization and consequently the magnetization of Ce. The calculations within the CrB structure are performed at the experimental equilibrium volume at room temperature and, since the CsCl structures are hypothetical ones for $`CeNi`$, $`CeRh`$ and $`CePd`$, we take the same volume per atom as in CrB in order to be able to compare the resulting magnetic properties.
From Table II we see that the CsCl structure favours magnetism, even if slightly, as compared to the CrB one. This is a consequence of CsCl having higher symmetry. Crystal field effects lead, in the CrB case, to a lifting of almost all of the $`4f`$ degeneracies, while this is not the case in the CsCl structure. Within a ’quasi’-Stoner picture, CsCl favours magnetism due to a higher density of states at the Fermi level. This is consistent with the fact that only some of the CrB compounds are magnetic while all of the CsCl ones are magnetic.
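The ’quasi’-Stoner argument can be made explicit: in Stoner theory a paramagnetic state is unstable toward ferromagnetism when $`IN(E_F)>1`$, where $`I`$ is the Stoner exchange parameter and $`N(E_F)`$ the density of states at the Fermi level. The sketch below is ours and uses purely hypothetical numbers, chosen only to illustrate why the higher $`N(E_F)`$ of the more symmetric CsCl structure favours magnetism.

```python
def stoner_unstable(i_ev: float, dos_ef: float) -> bool:
    """Stoner criterion: ferromagnetic instability when I * N(E_F) > 1,
    with I in eV and N(E_F) in states/(eV spin atom)."""
    return i_ev * dos_ef > 1.0

# Hypothetical values: the less symmetric CrB structure lifts more 4f
# degeneracies, lowering N(E_F) relative to CsCl.
I = 0.7                                  # placeholder exchange parameter (eV)
for structure, n_ef in [("CrB", 1.2), ("CsCl", 1.8)]:
    print(structure, "-> magnetic instability:", stoner_unstable(I, n_ef))
```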
Figure 3 shows charge density plots with contributions stemming from the energy range $`0.7E_F<E<E_F`$, where $`E_F`$ is the Fermi energy. In this range the $`4f`$ band is the most important one. The plots show, for the $`CeRh`$ system, the charge densities projected onto the (010) plane for CrB and onto the (110) one for CsCl. In the CrB structure the $`4f`$-$`4f`$ hybridization between neighboring $`Ce`$ sites is more important than the $`4f`$-$`4d`$ or $`4f`$-$`3d`$ hybridization between $`Ce`$ and TM atoms. This can be inferred from the fact that there is more charge between $`Ce`$ atoms than between $`Ce`$ and TM, namely a weight of 0.02 in CrB as compared to 0.007 in CsCl. On the other hand, the $`4f`$-$`d`$ hybridization is stronger in CsCl than in CrB. The amount of total interstitial charge is practically the same in both structures: the CrB and CsCl structures have 4.4 and 4.3 interstitial electrons per formula unit, respectively. The difference between them comes from the spatial distribution of charge.
For the systems under study which crystallize in the CsCl structure, symmetry makes a contribution to the magnetic moment of $`Ce`$, but it is actually not the crystalline environment that is the determining factor for the magnetic behavior. It is rather the chemical nature of the ligands that is responsible for this magnetic result. Considering, for instance, the $`CeAg`$ or $`CeCd`$ system from the previous section, which crystallize in the CsCl structure, the $`4d`$ band lies very low in energy and therefore there is nearly no hybridization between the $`4f`$ and $`4d`$ bands, leading to the magnetization of Ce. In the hypothetical situation that one could force these systems to crystallize in the CrB structure, they would also be magnetic.
On the other hand, if we compare the magnetization of $`CeNi`$, $`CeRh`$ and $`CePd`$ focusing on the CrB structure, which is the real one for these compounds, we can get insight into the effect that the type of ligand has on the magnetic moment of $`Ce`$. Using our criterion, $`CePd`$ is a magnetically ordered system, while $`CeRh`$ and $`CeNi`$ are not when considered in their natural CrB structure. In Figure 4 we show the densities of states for the three compounds. The different magnetic solutions within the same crystalline structure can be explained by the fact that in $`CeRh`$ the $`4d`$ band is closer in energy to the Fermi level than in the $`Pd`$ compound and consequently the $`4f`$-$`4d`$ hybridization is stronger. The more the $`4f`$ band hybridizes with the $`4d`$ band, the smaller the $`Ce`$ magnetic moment. On the other hand, in the $`CeNi`$ case there is an interplay between two types of hybridization, namely $`Ce`$-$`Ce`$ and $`Ce`$-$`Ni`$; both induce a decrease of the magnetism of Ce as compared to $`CePd`$. Actually, in $`CeNi`$ there is a reduction in volume and the $`Ce`$ atoms are nearer to each other than in $`CePd`$. Consequently, as can be seen from the densities of states in Figure 4, the $`4f`$ band is 0.5 eV wider in $`CeNi`$ than in $`CePd`$ due to an increase in $`Ce`$-$`Ce`$ mixing. On the other side, the $`3d`$ band lies nearer to $`E_F`$ than the $`4d`$ one, giving rise to $`4f`$-$`3d`$ mixing. Due to this interplay $`CeNi`$ is the least magnetic compound among the CrB systems.
## III Conclusions
In this contribution, using an itinerant picture, we show that it is possible to characterize the magnetic ground state of $`Ce`$ intermetallic compounds by doing ab-initio calculations within the LDA approximation. This criterion allows us to classify them into magnetically ordered, intermediate valence or paramagnetic systems through their calculated spin contributions to the $`Ce`$ magnetic moment. It is based on the band theory frame in which, in general, correlated $`Ce`$ systems are not well described. However, we find that it is a useful tool to obtain qualitative information about the electronic ground state of a wide variety of $`Ce`$ compounds, including those with strong electronic correlations.
We study the importance of $`4f`$ hybridization in determining the magnetic state of $`Ce`$ by analysing both symmetry and chemical effects. We first study the influence of the symmetry environment on $`4f`$ hybridization in $`CeNi`$, $`CeRh`$ and $`CePd`$ and take CsCl and CrB as prototypes of high and low symmetry structures. We see that CsCl slightly favours magnetism as compared to CrB. This fact can be understood with the following argument: CrB's local environment has fewer symmetry operations than CsCl (8 vs. 48) and consequently there is a lifting of $`4f`$ degeneracies giving rise to a smaller density of states at the Fermi energy. Within a Stoner picture there is a stronger instability towards magnetism in CsCl than in CrB. In this sense we can understand the fact that all the systems with the formula unit CeX are magnetic when growing in the CsCl structure (with X belonging to the 1B, 2B and 3A columns of the periodic table) but not when growing in the CrB one, in which only $`CePd`$ and $`CePt`$ are magnetically ordered.
Noteworthy is the fact that, in the cases studied, local symmetry is not a determining factor for the magnetic behavior. It is actually the type of ligand that is the crucial factor in determining the magnetic state. Along this line, we focus our study on the CrB structure to analyse the effect of chemical environment. We conclude that both $`4f`$-$`d`$ and $`4f`$-$`4f`$ types of hybridization can lead to a decrease of the $`Ce`$ magnetic moment in the systems $`CeRh`$ and $`CeNi`$ with respect to $`CePd`$. In the first case it is mainly the mixing between the $`4d`$ band of $`Rh`$ and the $`4f`$ one of $`Ce`$ that produces a strong decrease in the magnetic moment. In the second case, $`CeNi`$, both types of hybridization occur, resulting in an even lower value of the magnetic moment of $`Ce`$.
## IV Acknowledgments
We would like to thank Dr. J. G. Sereni for having encouraged the study of these systems. We also thank Dr. M. Weissmann for helpful and fruitful discussions. We acknowledge Consejo Nacional de Investigaciones Científicas for supporting this work. This work was funded by ANPCyT Project No. PICT 03-00105-02043.
# The Ulysses Supplement to the GRANAT/WATCH Catalog of Cosmic Gamma-Ray Bursts
## 1 Introduction
The multi-wavelength counterparts to numerous gamma-ray bursts (GRBs) have now been identified using the rapid, precise localizations available from the BeppoSAX spacecraft (e.g. Costa et al. 1997; van Paradijs et al. 1997), as well as from the Rossi X-Ray Timing Explorer and the IPN. However, there is still a need for less precise GRB localizations of older bursts, for several reasons. For example, the discovery of bright optical emission coincident with one burst (Akerlof et al. 1999) indicates that searches through archival optical data may reveal other examples of this interesting phenomenon. Also, the possible association of one GRB with a nearby supernova (Galama et al. 1998), if valid, means that other such associations may exist in the historical records. Because the current rate of rapid, precise localizations remains low ($``$ 8 events/year), it is important to add as many bursts as possible to the existing database. The GRANAT/WATCH GRB catalog contains data on 95 bursts observed between 1989 and 1994 (Sazonov et al. 1998); of the 95, 47 bursts were localized to error circles with radii between 0.2 and 1.6 $`\mathrm{°}`$. The 3rd interplanetary network (IPN) began operations in 1990 with the launch of the Ulysses spacecraft. By combining WATCH data with IPN data, it is possible to reduce the sizes of these error circles by as much as a factor of 800, making them more useful for archival studies. This is the 7th in a series of catalogs of IPN localizations. The supplements to the BATSE 3B and 4Br catalogs appeared in Hurley et al. (1999a,b; 218 and 147 bursts, respectively). Localizations involving the Mars Observer (MO) and Pioneer Venus Orbiter (PVO) spacecraft have been presented in Laros et al. (1997, 1998; 9 and 37 bursts, respectively). Fifteen Ulysses , PVO, SIGMA, WATCH, and PHEBUS burst localizations were published in Hurley et al. (2000a). Ulysses /BeppoSAX bursts may be found in Hurley et al. (2000b; 16 bursts). Localization data for the bursts in all these catalogs may also be found on the IPN website <sup>1</sup><sup>1</sup>1ssl.berkeley.edu/ipn3/index.html.
## 2 Instrumentation
The gamma-ray bursts in this paper were observed by at least two instruments. One was the omnidirectional GRB detector aboard the Ulysses spacecraft, consisting of two 3 mm thick hemispherical CsI scintillators with a projected area of $``$ 20 cm<sup>2</sup> in any direction. The instrument observes bursts in the 25 - 150 keV energy range in either a triggered mode, in which the time resolution is as high as 8 ms, or, for the weaker bursts, in a real-time mode, in which the time resolution is between 0.25 and 2 s. A more complete description of the experiment may be found in Hurley et al. (1992).
The second was the WATCH experiment aboard the GRANAT spacecraft. WATCH employs a unique rotating modulation collimator technique to determine the positions of bursts to $`\sim `$ 1 $`\mathrm{°}`$ accuracy. The detector is a scintillator operating in the 8 - 60 keV range with a field of view of 74 $`\mathrm{°}`$ and a maximum effective area of 47 cm<sup>2</sup>. Four independent modules were deployed aboard the GRANAT spacecraft, and $`\sim `$80% of the sky was monitored with them. See Sazonov et al. (1998) for a more detailed description.
To localize the GRBs in this supplement, use was sometimes made of the data from other experiments, too. These are noted in the following section.
## 3 Technique
The methodology employed here is similar or identical to that used for the Ulysses supplement to the BATSE 3B and 4B catalogs (Hurley et al. 1999a,b). Each WATCH burst was searched for in the Ulysses data. One or more annuli of possible arrival directions was derived by triangulation for each burst identified using the data from Ulysses and at least one other instrument. The bursts in this catalog thus fall into one of the following categories.
1. Event observed by Ulysses and WATCH only. In this case, the triangulation annulus was obtained utilizing the data of these two instruments.
2. Event observed by Ulysses , WATCH, and PHEBUS. PHEBUS was also aboard the GRANAT spacecraft (Barat et al. 1988; Terekhov et al. 1991). It consisted of six 12 cm long by 7.8 cm diameter BGO detectors oriented along the axes of a Cartesian coordinate system, operating in the 100 keV - 100 MeV energy range, with 1/128 s to 1/32 s time resolution. In this case, the triangulation was done using Ulysses and the instrument which resulted in the most precise triangulation annulus. The WATCH data have the advantage of being taken in an energy range which corresponds more closely to that of Ulysses , but the time resolution of the WATCH data was sometimes rather coarse ($`\sim `$ 10 s or more). On the other hand, the PHEBUS data, although taken in an energy range higher than that of Ulysses , have the advantages of good time resolution and in some cases better statistics. The more accurate of the two possible triangulation annuli is quoted here.
3. Event observed by Ulysses , WATCH, and BATSE. BATSE consists of eight detector modules aboard the Compton Gamma-Ray Observatory (GRO). Each module has an area $`\sim `$ 2025 cm<sup>2</sup>. The DISCSC data type was used, which gives 0.064 s resolution data for the 25-100 keV energy range. BATSE is described in Meegan et al. (1996). For the purposes of triangulation, the GRANAT and GRO spacecraft were close enough to one another ($`<`$ 250 light-ms) that the accuracy of the triangulation could not be improved by including the data from both spacecraft. (For comparison, the Ulysses -Earth distance was as great as several thousand light-seconds.) In this case, the Ulysses \- BATSE annulus was used, since the BATSE energy range corresponds closely to that of Ulysses , the time resolution is good, and the statistics are always better. These annuli have appeared in Hurley et al. (1999a), but their intersections with the WATCH error circles are presented here for the first time. The BATSE error circles may be found in Meegan et al. (1996). In those cases where WATCH did not localize the burst, the Ulysses \- BATSE localization information consists of the intersection of the IPN annulus with the BATSE error circle. Because the error circle is large, the curvature of the annulus does not allow a simple description of the error box, and no localization information appears in table 2; it may be found in Hurley et al. (1999a).
4. Event observed by Ulysses , WATCH, and one or more of the following experiments: BATSE, COMPTEL (Kippen et al. 1998), SIGMA (Claret et al. 1994), PVO (Laros et al. 1997), or MO (Laros et al. 1998). Here, triangulation using PVO or MO data, and/or the independent localization capabilities of COMPTEL or SIGMA, have been utilized. These special cases are noted in table 2, and the previously published error box coordinates have been included in the table for convenience. In most cases the error box is fully contained within the WATCH error circle. If no figure has been previously published showing the WATCH error circle and the IPN triangulation result, one appears in this paper.
## 4 The data
In table 1 the WATCH bursts also detected by Ulysses are listed. Column 1 gives the date, column 2 gives the detection time at WATCH, and column 3 indicates the Ulysses data mode (RI for rate increase, observed in the low time resolution real-time mode, trigger for the high time resolution triggered mode). Column 4 indicates whether BATSE observed the burst. Here N/O means not observable (GRO had not been launched yet), and a number, if present, is the BATSE trigger number. Column 5 indicates whether the burst was localized by WATCH, and column 6 indicates whether PHEBUS observed the event.
Table 2 gives the localization information for the events in table 1. Columns 1 and 2 give the date and the time. For those bursts localized by WATCH, columns 3 and 4 give the right ascension and declination of the center of the WATCH error circle (J2000), and column 5 gives the WATCH 3 $`\sigma `$ error circle radius. These data are taken directly from Sazonov et al. (1998). Columns 6 and 7 give the right ascension and declination $`\alpha ,\delta `$ of the center of the IPN annulus (J2000); columns 8 and 9 give the radius R of the center line of the annulus, and the 3 $`\sigma `$ half-width of the annulus $`\delta `$R. That is, the annulus is described by two small circles on the celestial sphere both centered at $`\alpha ,\delta `$, with radii R-$`\delta `$R and R+$`\delta `$R. For those cases where there is a WATCH error circle and the annulus intersects it, additional data are given in columns 10 and 11. (The possible exceptions are first, cases where the IPN annulus is wider than the error circle diameter and therefore does not intersect it, and second, cases where an actual error box has been obtained and published elsewhere.) Column 10 gives the right ascensions and declinations (J2000) of the IPN error box and column 11 gives the error box area. Note that, strictly speaking, it is not possible to define a true error box with straight line segments between the four intersection points of a WATCH error circle with an IPN annulus due to the curvatures of both the annulus and the error circle. However, for many purposes, this may be negligible.
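For archival searches it is convenient to test candidate source positions against these localizations programmatically. The sketch below is not part of the catalog; the function names are ours, the membership test treats the annulus and error circle exactly as described above, and all angles are assumed to be in degrees (J2000).

```python
import numpy as np

def ang_sep(ra1, dec1, ra2, dec2):
    """Angular separation in degrees between two sky positions (degrees)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cosd = (np.sin(dec1) * np.sin(dec2)
            + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cosd, -1.0, 1.0)))

def in_annulus(ra, dec, ra_c, dec_c, R, dR):
    """True if (ra, dec) lies within the annulus of centre-line radius R
    and 3-sigma half-width dR, centred on (ra_c, dec_c)."""
    return abs(ang_sep(ra, dec, ra_c, dec_c) - R) <= dR

def in_error_box(ra, dec, watch_centre, watch_radius, annulus):
    """Position consistent with both the WATCH error circle and the
    IPN annulus, i.e. inside their intersection."""
    ra_w, dec_w = watch_centre
    return (ang_sep(ra, dec, ra_w, dec_w) <= watch_radius
            and in_annulus(ra, dec, *annulus))

# Purely hypothetical numbers, for illustration only:
print(in_error_box(123.4, -12.3, (123.0, -12.0), 0.7,
                   (200.0, 30.0, 80.1, 0.05)))
```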
## 5 Discussion and Conclusions
There is good agreement between the IPN annuli and the WATCH error circles in all cases. We call attention to some of the more precise error boxes:
1. 940703. The error box area is 16 sq. arcmin., a reduction in area from the 0.24$`\mathrm{°}`$ WATCH error circle of a factor of $`\sim `$41.
2. 921022. The error box area is 22 sq. arcmin., a reduction in area from the 0.72$`\mathrm{°}`$ WATCH error circle of a factor of $`\sim `$265.
3. 921013. The error box area is 32 sq. arcmin., a reduction in area from the 1.51$`\mathrm{°}`$ WATCH error circle of a factor of $`\sim `$800.
The localizations in table 2 are presented in figures 1-25. (Figures for the WATCH bursts involving SIGMA, which have already appeared in Hurley et al. (2000a), have been omitted.) As these figures show, the combination of WATCH and the IPN results in very precise location information for these bursts. Another version of the WATCH experiment was flown aboard the EURECA spacecraft. Analysis of these events is currently underway.
KH is grateful to JPL for Ulysses support under Contract 958056, and to NASA for Compton Gamma-Ray Observatory support under grant NAG 5-3811.
# Symmetry Does Not Allow Canting of Spins in La<sub>1.4</sub>Sr<sub>1.6</sub>Mn<sub>2</sub>O<sub>7</sub>
## Abstract
We analyze the symmetry of all possible magnetic structures of bilayered manganites La<sub>2-2x</sub>Sr<sub>1+2x</sub>Mn<sub>2</sub>O<sub>7</sub> with doping $`0.3\le x<0.5`$ and formulate a corresponding Landau theory of the phase transitions involved. It is shown that canting of spins is not allowed at $`x=0.3`$ though it is at $`x=0.4`$. The observed magnetic reflections from the sample with $`x=0.3`$ may be described as arising from two spatially distributed phases with close transition temperatures but different easy axes and ranges of stability. Experimental results are revisited on the basis of the theoretical findings.
PACS number(s): 75.25.+z, 75.30.-m, 75.40.Cx, 75.30.Kz
Recent extensive investigation of the so-called colossal magnetoresistance (CMR) in doped perovskite manganites has stimulated considerable interest in the related bilayered compound La<sub>2-2x</sub>Sr<sub>1+2x</sub>Mn<sub>2</sub>O<sub>7</sub> in an attempt to understand and to improve the sensitivity of the magnetoresistive response . The material of interest is comprised of perovskite (La, Sr)MnO<sub>3</sub> bilayers separated by (La, Sr)O blocking layers, namely, the $`n=2`$ member of the Ruddlesden-Popper series of manganites (La, Sr)O\[(La, Sr)MnO<sub>3</sub>\]<sub>n</sub>. This quasi two-dimensional nature promotes fluctuations that lower the critical temperature $`T_c`$ of the magnetic transition and hence the relevant scale of a magnetic field for the huge magnetoresistance. As the tetragonal $`I4/mmm`$ symmetry of the material a priori lifts the degeneracy of the $`e_g`$ orbitals of the Mn<sup>3+</sup> ions, the Jahn-Teller distortion of which was argued to be responsible for the CMR of the perovskite manganites , observation of antiferromagnetic (AFM) correlations above $`T_c`$ of a para- (PM) to ferromagnetic (FM) transition in La<sub>1.2</sub>Sr<sub>1.8</sub>Mn<sub>2</sub>O<sub>7</sub> was suggestive as an alternative origin to assist localization of carriers above $`T_c`$ . Importance of the AFM superexchange interaction shows up at the same doping level as canting of the ordered moments in neighboring layers within each bilayer, as inferred from the sign reversal of the Mn-O bond compressibility below $`T_c`$ . Further neutron scattering investigation of PM correlations provided evidence for the strong canting of the spins with an average angle that depends on both the magnetic field and the temperature above $`T_c`$, owing to the weaker FM correlation within the bilayers . The canting angle, in particular, changes from $`86^{\mathrm{°}}`$ at zero field to $`74^{\mathrm{°}}`$ at an external magnetic field of 1 Tesla to $`53^{\mathrm{°}}`$ at 2 Teslas at 125K. Comprehensive neutron-diffraction studies on the other hand found that the canting angle increases from $`6.3^{\mathrm{°}}`$ at $`x=0.4`$ to $`180^{\mathrm{°}}`$ (A-type AFM) at $`x=0.48`$ at 10K, while $`T_c`$ decreases from 120 K to 0 K correspondingly. Moreover, the AFM correlations above $`T_c`$ were identified as an intermediate phase whose order parameter decreases in an anomalous exponential manner upon increasing temperature to about 200K . Accordingly, the AFM correlations and more generally the magnetic structure seem to play an important role in the bilayered manganites.
For $`0.32\le x\le 0.4`$, the bilayered manganites exhibit a FM order below $`T_c`$ with an easy axis in the layer. The magnetic structure at $`x=0.3`$, however, is somewhat complicated, and so there exists no consensus. Perring et al. proposed an AFM order of an intra-bilayer FM and inter-bilayer AFM structure (denoted as AFM-B) with the easy axis along $`z`$ below about 90K from magnetic neutron diffraction. However, a substantial component within the layers rises up and then falls down between 60 and 90K or so. Argyriou et al. by neutron diffraction and Heffner et al. by muon spin rotation measurements reported, on the other hand, that their sample with the same doping involves two structurally similar phases: The major phase (hole poor) arranges itself in a similar AFM-B structure with a substantial canting in the plane as well as out of it. The minor phase (hole rich but $`x<0.32`$) differs from the major one only by its FM arrangement along the $`z`$ axis and its lower ordering temperature. However, as they pointed out, the assignment of the in-plane component is not so unambiguous. Also, their in-plane AFM reflections become vanishingly small below about 60K as well. Still another scenario at the 30 percent doping is this: The magnetic structure changes from PM to AFM-B at about 100K and then to FM at 70K or so. The easy axis rotates correspondingly from in-plane in the AFM-B to the $`z`$ direction in the FM state . From these experiments, whether there exists canting of spins at $`x=0.3`$ is still ambiguous. So, noticing the importance of the magnetic structure in the $`x\approx 0.4`$ doping, clarification of the magnetic structure of the $`x=0.3`$ doping is a key to understanding its characteristic transport behavior . In this Letter, we show that there is a qualitative difference between doping at $`x=0.3`$ and $`x=0.4`$ by analyzing the symmetry of the magnetic structures. It is found that the symmetry of the magnetic order parameters cannot allow canting at $`x=0.3`$, in contrast to $`x=0.4`$. This result sheds new light on the mechanism of the CMR behavior.
First we identify the order parameters and their symmetry responsible for the possible magnetic structures.

TABLE I.: Components of the magnetic vectors that form a basis of the IR’s of $`I4/mmm`$ at $`𝐤_\mathrm{\Gamma }`$ and $`𝐤_M`$.

| IR | Bases |
| --- | --- |
| $`\tau ^2`$ | $`L_z;L_{Az}`$ |
| $`\tau ^3`$ | $`M_z;L_{Bz}`$ |
| $`\tau ^9`$ | $`(M_x,M_y);(L_{Bx},L_{By})`$ |
| $`\tau ^{10}`$ | $`(L_x,L_y);(L_{Ax},L_{Ay})`$ |

The Mn ions with magnetic moments $`𝝁`$<sub>i</sub> in the $`I4/mmm`$ structure occupy four positions at $`i=1(0,0,z)`$, $`2(0,0,1-z)`$ ($`z\approx 0.1`$) and their translation by $`𝐭_0=(\frac{1}{2},\frac{1}{2},\frac{1}{2})`$, i.e., $`(\frac{1}{2},\frac{1}{2},\frac{1}{2}\pm z)`$ (see Fig. 1). Following the representation analysis of magnetic structures , we define two magnetic vectors
$`𝐌`$ $`=`$ $`𝝁_1+𝝁_2`$ (1)
$`𝐋`$ $`=`$ $`𝝁_1-𝝁_2.`$ (2)
Then a FM state corresponds to M propagating with a wave vector $`𝐤_\mathrm{\Gamma }=(000)`$, a bilayered-type AFM-B and an A-type AFM (intra-bilayer AFM but inter-bilayer FM) state to M and L, respectively, with $`𝐤_M=(00\frac{1}{2})`$ of the first Brillouin zone. Denoting the latter two order parameters as $`𝐋_B`$ and $`𝐋_A`$ respectively, and noticing that k<sub>Γ</sub> and k<sub>M</sub> share the same irreducible representations (IR’s) of the $`I4/mmm`$ group , one can find the components of the four vectors that form bases of the IR’s shown in Table I. Note that the IR’s $`\tau ^9`$ and $`\tau ^{10}`$ are both two-dimensional, and so $`M_x`$ and $`M_y`$ together form a basis vector of $`\tau ^9`$, as do $`L_{Bx}`$ and $`L_{By}`$. From Table I and the possible experimental magnetic structures , we identify L<sub>B</sub> with the order parameter for the major phase, $`M_z`$ and ($`L_{Bx},L_{By}`$) for the minor phase of $`x=0.3`$, $`(M_x,M_y)`$ with the order parameter for $`0.3<x\le 0.38`$, $`(M_x,M_y)`$ and $`(L_{Ax},L_{Ay})`$ for $`0.38<x<0.48`$, and $`(L_{Ax},L_{Ay})`$ for $`0.48\le x<0.5`$, which is A-type AFM.
From Table I, the relevant lowest order magnetic part of the Landau free-energy can be written as
$`F`$ $`=`$ $`{\displaystyle \frac{c}{2}}𝐌^2+{\displaystyle \underset{w}{}}{\displaystyle \frac{a_w}{2}}𝐋_w^2+{\displaystyle \underset{w}{}}{\displaystyle \frac{b_w}{4}}𝐋_w^4+{\displaystyle \frac{d}{4}}𝐌^4`$ (5)
$`+{\displaystyle \frac{1}{2}}\beta _zM_z^2+{\displaystyle \frac{1}{2}}\beta _{xy}(M_x^2+M_y^2)`$
$`+{\displaystyle \underset{w}{}}\left[{\displaystyle \frac{1}{2}}\alpha _{wxy}(L_{wx}^2+L_{wy}^2)+{\displaystyle \frac{1}{2}}\alpha _{wz}L_{wz}^2\right],`$
where $`w`$ represents the summation over $`𝐋`$, $`𝐋_A`$, and $`𝐋_B`$. Note that the latter two vectors will carry a factor $`\mathrm{exp}\{i𝐤_M\cdot 𝐭_0\}=-1`$ when they are translated by $`𝐭_0`$, and so cannot appear in odd powers. In Eq. (5), we have separated the exchange contributions (first four terms), which depend only on the relative orientation of the spins, from the magnetic anisotropic energies (remaining terms), which depend on the relative direction of the magnetic moments to the lattice and arise from the relativistic spin-spin and spin-orbit interactions and so are effects of the order of $`O(v_0^2/c_0^2)`$, ordinarily about $`10^{-2}`$ to $`10^{-5}`$, where $`v_0`$ is the speed of electrons in the crystal and $`c_0`$ that of light, since the magnetic moments themselves contain a factor $`v_0/c_0`$ . Hence $`\alpha `$ and $`\beta `$ are small constants due to their relativistic origin. $`b_w`$ and $`d`$ are positive for stability.
We now focus on the $`x=0.3`$ doping. The relevant magnetic vectors in this case are $`𝐋_B`$ and $`𝐌`$. Minimizing Eq. (5) with respect to the components of these vectors, one obtains five solutions
$`𝐌=𝐋_B=\mathrm{𝟎},`$ (7)
$`𝐌=\mathrm{𝟎},L_{Bx}=L_{By}=0,L_{Bz}^2=-{\displaystyle \frac{a_B+\alpha _{Bz}}{b}},`$ (8)
$`𝐌=\mathrm{𝟎},L_{Bz}=0,L_{Bx}^2+L_{By}^2=-{\displaystyle \frac{a_B+\alpha _{Bxy}}{b}},`$ (9)
$`𝐋_B=\mathrm{𝟎},M_x=M_y=0,M_z^2=-{\displaystyle \frac{c+\beta _z}{d}},`$ (10)
$`𝐋_B=\mathrm{𝟎},M_z=0,M_x^2+M_y^2=-{\displaystyle \frac{c+\beta _{xy}}{d}}.`$ (11)
Since anisotropic terms like $`M_x^2M_y^2`$ have not been included, the direction in the $`xy`$-plane cannot yet be determined. Note that the exchange term of $`(𝐋_B\cdot 𝐌)^2`$ type is irrelevant, since $`𝐌\cdot 𝐋_B=0`$ due to the incompatibility of $`𝐌`$ and $`𝐋_B`$ along a single direction. Eq. (7) represents the PM phase, Eqs. (8) and (9) pure AFM-B phases with the moments directing respectively along the $`z`$-axis and the $`xy`$-plane, and Eqs. (10) and (11) pure FM phases. A remarkable feature of Eqs. (1) is that there is no mixed order such as $`L_{Bz}`$ with $`L_{Bx}`$ or $`L_{By}`$, $`M_z`$ with $`M_x`$ or $`M_y`$ and $`𝐋_B`$ with $`𝐌`$. In other words, no canting state exists. The reason is that there is no symmetry relation between $`\alpha _{Bz}`$ ($`\beta _z`$) and $`\alpha _{Bxy}`$ ($`\beta _{xy}`$), so that both $`L_{Bz}`$ ($`M_z`$) and $`L_{Bx}`$ ($`M_x`$) or $`L_{By}`$ ($`M_y`$) cannot simultaneously acquire nonzero values in general. This can also be seen from Table I: the $`z`$ and the $`xy`$ components transform according to different IR’s.
In order to determine the range of stability of the phases, one substitutes the solutions Eqs. (1) into the free energy and obtains, respectively, to first order,
$`F_0`$ $`=`$ $`0,`$ (13)
$`F_{L_z}`$ $`\approx `$ $`-{\displaystyle \frac{a_B^2}{4b_B}}-{\displaystyle \frac{a_B\alpha _{Bz}}{2b_B}},`$ (14)
$`F_{L_{xy}}`$ $`\approx `$ $`-{\displaystyle \frac{a_B^2}{4b_B}}-{\displaystyle \frac{a_B\alpha _{Bxy}}{2b_B}},`$ (15)
$`F_{M_z}`$ $`\approx `$ $`-{\displaystyle \frac{c^2}{4d}}-{\displaystyle \frac{c\beta _z}{2d}},`$ (16)
$`F_{M_{xy}}`$ $`\approx `$ $`-{\displaystyle \frac{c^2}{4d}}-{\displaystyle \frac{c\beta _{xy}}{2d}}.`$ (17)
Accordingly, if $`0<\beta _z<\beta _{xy}`$, for instance, then $`F_{M_z}<F_{M_{xy}}`$ and so the moments will point along the $`z`$-axis, whereas, if $`\beta _z>\beta _{xy}>0`$, they will lie in the $`xy`$-plane. This may be the case for the change of the FM magnetization direction with increasing doping observed experimentally . Similarly, when $`\alpha _{Bz}`$ becomes bigger than $`\alpha _{Bxy}`$ (both are assumed to be positive without loss of generality), the system changes from the phase $`L_{Bz}`$ \[Eq. (8)\] to $`L_{Bxy}`$ \[Eq. (9)\]. The two phases have respectively crystallographic space groups $`P4/mnc`$ and $`Cmca`$, which cannot be related by an active IR, and so the transition between them is necessarily discontinuous . Another reason is that the two directions are not connected continuously. In practice, the two phases may appear almost simultaneously within a single sample at different places where there is, for example, a small variation of doping or inhomogeneity, since the two phases differ in their transition points \[$`a_B+\alpha _B=0`$, Eqs. (1)\] and free energies by only values of the order of $`O(v_0^2/c_0^2)`$, and so which phase appears depends rather sensitively on detailed conditions. This same reason also implies that the separation might be mesoscopic. Moreover, the two phases may have different temperature windows of stability due to different variations of $`\alpha _{Bz}`$ and $`\alpha _{Bxy}`$ with the temperature. Occurrence of AFM-B or FM order relies on the other hand on whether $`a_B`$ or $`c`$ becomes negative first, respectively.
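The absence of a mixed $`L_{Bz}`$/$`L_{Bxy}`$ minimum can also be checked numerically. The sketch below is our own illustration, with hypothetical parameter values: it minimizes the $`𝐋_B`$ part of the free energy (5) from many random starting points and finds that, for $`\alpha _{Bz}\ne \alpha _{Bxy}`$, every minimum is a pure phase.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical parameters, for illustration only: a_B < 0 puts the
# system in the ordered phase; the alphas are small anisotropies.
a_B, b_B = -1.0, 1.0
alpha_xy, alpha_z = 0.02, 0.05

def free_energy(L):
    """L_B part of Eq. (5): L = (L_Bxy, L_Bz)."""
    Lxy, Lz = L
    L2 = Lxy**2 + Lz**2
    return (0.5 * a_B * L2 + 0.25 * b_B * L2**2
            + 0.5 * alpha_xy * Lxy**2 + 0.5 * alpha_z * Lz**2)

rng = np.random.default_rng(0)
minima = set()
for _ in range(50):
    res = minimize(free_energy, rng.uniform(-2.0, 2.0, size=2))
    minima.add(tuple(np.round(np.abs(res.x), 3)))

print(minima)
# Only the pure in-plane solution appears, |L_Bxy|^2 = -(a_B+alpha_xy)/b_B;
# a mixed state with both components nonzero is never a minimum, and the
# pure L_Bz stationary point is a saddle for alpha_z > alpha_xy.
```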
There exists possible mixing of $`L_{Bz}`$ and its $`xy`$-plane counterparts at higher order terms, but it cannot produce canting either. As the transition points of the two phases differ by only small quantities of order of $`O(v_0^2/c_0^2)`$, we use the expansion in $`𝐋_B`$ itself. Thus, besides those pure $`𝐋_B`$ terms in the free energy Eq. (5), we add terms
$`{\displaystyle \frac{1}{4}}\lambda _1L_{Bz}^4,{\displaystyle \frac{1}{4}}\lambda _2(L_{Bx}^2+L_{By}^2)^2,`$ (18)
$`{\displaystyle \frac{1}{2}}\lambda _3L_{Bz}^2(L_{Bx}^2+L_{By}^2),{\displaystyle \frac{1}{2}}\lambda _4L_{Bx}^2L_{By}^2,`$ (19)
with the coefficients $`\lambda `$’s of order $`O(v_0^4/c_0^4)`$ relative to the exchange ones . Then one can obtain new solutions that determine the direction of the moments in the $`xy`$-plane to be either along the $`x`$ or $`y`$ axis or along its diagonal depending respectively on whether $`\lambda _4`$ is positive or negative. In addition, there appear solutions such as
$`L_{Bx}`$ $`=`$ $`0,`$ (21)
$`L_{By}^2`$ $`\approx `$ $`-{\displaystyle \frac{a_B(\lambda _1-\lambda _4)+b_B(\alpha _1-\alpha _2)}{b_B(2\lambda _1-\lambda _3-\lambda _4)}},`$ (22)
$`L_{Bz}^2`$ $`\approx `$ $`-{\displaystyle \frac{a_B(\lambda _1-\lambda _3)-b_B(\alpha _1-\alpha _2)}{b_B(2\lambda _1-\lambda _3-\lambda _4)}},`$ (23)
and a similar one in the diagonal plane perpendicular to the $`xy`$-plane, where we have kept terms of order $`\lambda `$ in both the numerators and denominators. However, it is readily seen that the right-hand sides of Eqs. (22) and (23) possess just opposite signs in general, so that only one of them can have a real solution. A similar result can also be proved by expanding the free energy in the unit vector along $`𝐋_B`$, valid at low temperatures. Further, there is no external or demagnetizing field to tilt the moments. Therefore, canting is not allowed for the bilayered-type AFM order of the major phase with $`x=0.3`$ doping. The observation of both the $`z`$ and the $`xy`$ components of the AFM-B order should thus arise from the two phases, each with one kind of the AFM-B components.
Nevertheless, mixing of different magnetic vectors is still possible by coupling of the type $`𝐌^2𝐋_B^2`$ for instance. This can exist due to either an exchange or a relativistic origin. Adding such a term with a coefficient $`\delta /2`$ for the coupling of, say, $`M_z`$ and $`L_{Bx}`$ and $`L_{By}`$ for the minor phase of $`x=0.3`$, one obtains, besides Eqs. (10) and (9), a new phase with mixing
$`M_z^2`$ $`\approx `$ $`{\displaystyle \frac{\delta a_B-cb_B}{db_B-\delta ^2}},`$ (24)
$`L_{Bx}^2+L_{By}^2`$ $`\approx `$ $`{\displaystyle \frac{\delta c-da_B}{db_B-\delta ^2}},`$ (25)
where we have neglected $`\alpha _B`$ and $`\beta `$. A system with such a coupling may exhibit several scenarios depending on the strength of the coupling and the nature of the pure phases . It may appear in a pure phase, which may transform continuously or discontinuously to the mixed phase, or discontinuously to another pure phase at lower temperatures; the latter can only take place for strong coupling, $`\delta ^2>db_B`$. It may even change directly to the mixed phase when the transition temperatures of the two pure phases get identical. Reentrant phase transitions from a pure phase to a mixed one and then back to the pure phase are also possible.
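Expressions (24) and (25) follow from the stationarity conditions of the coupled free energy, which can be verified symbolically. The sketch below is a consistency check rather than part of the original analysis: it treats $`m=M_z^2`$ and $`l=L_{Bx}^2+L_{By}^2`$ as independent variables and neglects the small anisotropies, as in the text.

```python
import sympy as sp

c, d, aB, bB, delta, m, l = sp.symbols('c d a_B b_B delta m l')
# m = M_z^2, l = L_Bx^2 + L_By^2; anisotropies alpha_B, beta dropped.
F = (sp.Rational(1, 2)*c*m + sp.Rational(1, 4)*d*m**2
     + sp.Rational(1, 2)*aB*l + sp.Rational(1, 4)*bB*l**2
     + sp.Rational(1, 2)*delta*m*l)

sol = sp.solve([sp.diff(F, m), sp.diff(F, l)], [m, l], dict=True)
print(sp.simplify(sol[0][m]))   # (delta*a_B - b_B*c)/(b_B*d - delta**2)
print(sp.simplify(sol[0][l]))   # (delta*c - a_B*d)/(b_B*d - delta**2)
```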
We now compare our results with experiments. The experimental assignment of both a canting major phase and a canting minor phase is based on the result that if canting is exclusively associated with one phase, the resultant total magnetic moment is too large at 80K, near the peak temperature of the plane AFM reflections . This excludes the possibility of a canting minor phase and a pure $`L_{Bz}`$ phase and appears to suggest instead that the plane AFM reflections arise at least partly from an independent $`L_{Bxy}`$ phase. The fact that the reflections from $`L_{Bz}`$ and $`L_{Bxy}`$ start appearing at almost the same temperature seems to support the theoretical results that both phases emerge almost simultaneously at different places where there is a small variation of doping or inhomogeneity, which balances the small quantities $`\alpha _{Bz}`$ and $`\alpha _{Bxy}`$ in their transition temperatures. With the two phases rather than a single canting major phase, the excessively large magnetic moment may be remedied. The peak structure of the reflection intensities from the $`L_{Bxy}`$ phase may then arise from the different temperature dependence of $`\alpha _{Bxy}`$ and $`\alpha _{Bz}`$ in such a way that below about 60K, $`\alpha _{Bxy}>\alpha _{Bz}`$, and so the $`L_{Bxy}`$ phase transforms to the $`L_{Bz}`$ phase by a reorientation transition. The small remaining reflections may originate from the remnant $`L_{Bxy}`$ phase due to possible inhomogeneity or supercooling.
For higher doping, noting that the reflections from the $`M_z`$ component emerge separately and are accompanied by the decline of the $`L_{Bxy}`$ reflections , it seems that the minor phase may be a pure FM phase with the $`z`$-axis as its easy orientation. Its $`T_c`$ of about 80K, significantly lower than those at slightly higher doping, might result from its competition with the $`L_{Bxy}`$ phase, which suppresses its occurrence via a positive $`\delta `$. Nevertheless, a canting minor phase may still be possible, but its lower $`T_c`$ and the peak feature of the $`L_{Bxy}`$ reflections should be properly accounted for. When doping increases, $`T_c`$ increases but $`\beta _{xy}`$ becomes smaller than $`\beta _z`$, and so the moment aligns ferromagnetically in the $`xy`$-plane. At high doping near 0.5, the A-type AFM is the most stable state. In between these two cases, the two types of states compete with each other via mixing terms similar to Eqs. (25), leading possibly to the lowering of their respective transition temperatures and a $`(M_x,M_y)`$ and $`(L_{Ax},L_{Ay})`$ tilt as observed experimentally . The exponential-like growth of the A-type AFM with cooling might be due to two-dimensional FM fluctuations.
In conclusion, noticing the importance of magnetic correlations to the magnetoresistive response, we have analyzed the symmetry of all possible magnetic structures of bilayered manganites with doping $`0.3\le x<0.5`$ on the basis of experimental results. A corresponding Landau theory of the phase transitions involved is formulated. A prominent result is that the ordered magnetic moments of the $`x=0.3`$ doping (the major phase ) cannot be canting though $`x=0.4`$ can, since the former is characterized by a single magnetic vector $`𝐋_B`$ whereas the latter by two different magnetic vectors, which may be mixed by an exchange or relativistic mechanism. Such a result indicates that the magnetic structure of the $`x=0.3`$ doping is far more complex than what has been proposed and demands further experimental clarification. Instead of a canting major phase, there exist two spatially distributed phases with close transition temperatures but different easy axes and ranges of stability, to which the observed magnetic reflections from the $`x=0.3`$ sample may be attributed. Such a picture can account for the peak of the plane AFM reflections. Furthermore, it seems to accord with the two-step variation of lattice parameters with temperature through an assumption that the $`d_{3z^2-r^2}`$ and $`d_{x^2-y^2}`$ orbital states correspond to magnetic orientations along $`z`$ and $`xy`$ respectively, namely, an increase in the $`L_{Bxy}`$ phase elongates the in-plane scale but shortens the $`z`$ scale, and then a decrease gives rise to a reverse effect . As both the $`z`$ and the $`xy`$ components possess a bilayered-type AFM structure, the material should be expected to display an insulating behavior in the whole temperature range. So the metal-insulator transition should mostly be attributed to the percolation of the minor FM phase, whose transition temperature, however, seems to be too low . Further work is desirable to clarify this.
This work was supported by a URC fund at HKU.
# Eccentric stellar discs with strong density cusps and separable potentials
## 1 INTRODUCTION
High resolution observations based on Hubble Space Telescope photometry of nearby galaxies have increased our understanding of the central regions of elliptical and spiral galaxies. It was found that in most galaxies the density diverges toward the centre in a power-law cusp. In the presence of a cusp, regular box orbits are destroyed and replaced by chaotic orbits (Gerhard & Binney 1985). Through a fast mixing phenomenon, stochastic orbits cause the orbital structure to become axisymmetric at least near the centre (Merritt & Valluri 1996). These results are confirmed by the findings of Zhao et al. (1999, hereafter Z99). Their study reveals that highly non-axisymmetric, scale-free mass models cannot be constructed self-consistently. Among the models studied for self-consistency, one can refer to the integrable, cuspy models of Sridhar & Touma (1997, hereafter ST97). Without a nuclear black hole (BH), centrophobic bananas are the only family of orbits present in ST97 discs. Although such orbits elongate in the same direction as the density profile, the orbital angular momentum takes a local minimum somewhere other than the major axis, where the surface density has a maximum. This is the main obstacle to building self-consistent equilibria by regular bananas (Syer & Zhao 1998; Z99). A similar situation occurs for anti-aligned tube and high resonance orbits, for which one is not able to fit the curvatures of orbits and surface density distribution near the major axis (Z99). According to the results of Miralda-Escudé & Schwarzschild (1989), it is only possible to construct self-consistent models by certain families of fish orbits.
The orbital structure of stellar systems is enhanced by central BHs in a different manner. Although nuclear BHs destroy box orbits, they enforce some degree of regularity in both centred and eccentric discs (Sridhar & Touma 1999, hereafter ST99). In systems with analytical cores and central BHs, a family of long-axis tube orbits can help the host galaxy to maintain its non-axisymmetric structure within the BH sphere of influence (Jalali 1999).
In this paper, we present a class of non-scale-free, lopsided discs, which display a collection of properties expected in self-consistent non-axisymmetric cuspy systems. Our models are of Stäckel form in elliptic coordinates (e.g., Binney & Tremaine 1987) for which the Hamilton-Jacobi equation separates and stellar orbits are regular. In central regions where the effect of the cusp dominates, the potential functions of our distributed mass models are proportional to $`r^{-1}`$ as $`r\to 0`$. So, we attain an axisymmetric structure near the centre which is consistent with the predicted nature of density cusps. The slope of the potential function changes sign as we depart from the centre, and our model galaxies become considerably non-axisymmetric. Non-axisymmetric structure is supported by a family of eccentric loop orbits, which are aligned with the lopsidedness. Our potential functions have a local minimum, around which a family of eccentric butterfly orbits emerges. Close to the centre, loop orbits break down and give birth to a new family of orbits, horseshoe orbits. Stars moving in horseshoes lose their kinetic energy as they approach the centre and contribute a large amount of mass to form a cusp. Our models can be applied to the study of dynamics in systems with double nuclei such as M31 (Tremaine 1995, hereafter T95) and NGC4486B (Lauer et al. 1996).
## 2 THE MODEL
Consider the Hamiltonian function
$$\mathcal{H}=\frac{1}{2}(p_x^2+p_y^2)+\mathrm{\Phi }(x,y),$$
(1)
which is described in cartesian coordinates, $`(x,y)`$. The variables $`p_x`$ and $`p_y`$ denote the momenta conjugate to $`x`$ and $`y`$, respectively. $`\mathrm{\Phi }`$ is the potential due to the self-gravity of the disc. Let us express $`\mathcal{H}`$ in elliptic coordinates, $`(u,v)`$, through the following transformations
$`x`$ $`=`$ $`a(1+\mathrm{cosh}u\mathrm{cos}v),`$ (2)
$`y`$ $`=`$ $`a\mathrm{sinh}u\mathrm{sin}v,`$ (3)
$`u`$ $`\ge `$ $`0,\qquad 0\le v\le 2\pi ,`$
where $`a`$ is constant and $`2a`$ is the distance between the foci of confocal ellipses and hyperbolas defined by the curves of constant $`u`$ and $`v`$, respectively. In the new coordinates, the Hamiltonian function becomes
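As a quick illustration of this coordinate system (the sketch below is ours and is not part of the paper; the value of $`a`$ is arbitrary), the map (2)-(3) can be implemented directly, and one can confirm that curves of constant $`u`$ are ellipses with foci at $`(0,0)`$ and $`(2a,0)`$, since the sum of the distances to the two foci stays constant along them.

```python
import numpy as np

a = 1.0  # half the focal separation; illustrative value

def to_cartesian(u, v, a=a):
    """Eqs. (2)-(3): elliptic (u, v) -> cartesian (x, y)."""
    return a * (1.0 + np.cosh(u) * np.cos(v)), a * np.sinh(u) * np.sin(v)

u0 = 0.8
for v in np.linspace(0.0, 2.0 * np.pi, 7):
    x, y = to_cartesian(u0, v)
    r = np.hypot(x, y)            # distance to the focus at the origin
    s = np.hypot(x - 2.0 * a, y)  # distance to the focus at (2a, 0)
    print(r + s)                  # constant, equal to 2a*cosh(u0)
```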
$$\mathcal{H}=\frac{1}{2a^2(\mathrm{sinh}^2u+\mathrm{sin}^2v)}(p_u^2+p_v^2)+\mathrm{\Phi }(u,v),$$
(4)
with $`p_u`$ and $`p_v`$ being the new canonical momenta. We consider those potentials which take a Stäckel form in elliptic coordinates. The most general potential of Stäckel form is
$$\mathrm{\Phi }(u,v)=\frac{F(u)+G(v)}{2a^2(\mathrm{sinh}^2u+\mathrm{sin}^2v)},$$
(5)
where $`F`$ and $`G`$ are arbitrary functions of their arguments. By this assumption, the Hamilton-Jacobi equation separates and results in the second integral of motion, $`I_2`$. We get
$$I_2=p_u^2-2a^2E\mathrm{sinh}^2u+F(u),$$
(6)
or equivalently
$$-I_2=p_v^2-2a^2E\mathrm{sin}^2v+G(v),$$
(7)
where $`E`$ is the total energy of the system, $`E=\mathcal{H}`$.
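Since the signs in Eqs. (6) and (7) are easy to garble, it is worth verifying that the two expressions are mutually consistent with the Hamiltonian (4) and the Stäckel form (5). The following symbolic check is our own verification device and assumes nothing beyond those equations.

```python
import sympy as sp

u, v, a, E, I2 = sp.symbols('u v a E I_2', positive=True)
F = sp.Function('F')(u)
G = sp.Function('G')(v)

# Momenta squared implied by Eqs. (6) and (7):
pu2 = I2 + 2*a**2*E*sp.sinh(u)**2 - F
pv2 = -I2 + 2*a**2*E*sp.sin(v)**2 - G

# Substituting them into the Hamiltonian (4) with the potential (5)
# must return the energy E identically:
S = sp.sinh(u)**2 + sp.sin(v)**2
H = (pu2 + pv2) / (2*a**2*S) + (F + G) / (2*a**2*S)
print(sp.simplify(H - E))   # -> 0
```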
We now introduce a class of potentials with
$`F(u)`$ $`=`$ $`C(\mathrm{cosh}u)^\gamma ,`$
$`G(v)`$ $`=`$ $`-C\mathrm{cos}v|\mathrm{cos}v|^{\gamma -1},`$ (8)
where $`C>0`$ and $`\gamma `$ are constant parameters. One can readily verify that
$$\mathrm{cosh}u=\frac{1}{2a}(r+s),\mathrm{cos}v=\frac{1}{2a}(r-s),$$
(9)
where
$$r^2=x^2+y^2,s^2=(x-2a)^2+y^2.$$
(10)
We substitute from (10) into (8) and express $`\mathrm{\Phi }`$ in the $`(x,y)`$ coordinates:
$`\mathrm{\Phi }`$ $`=`$ $`K{\displaystyle \frac{(r+s)^\gamma -(r-s)|r-s|^{\gamma -1}}{2rs}},`$ (11)
$`K`$ $`=`$ $`C(2a)^{-\gamma }.`$
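Equation (11) is straightforward to implement numerically, which is useful both for plotting and for verifying the limiting forms derived next (Eqs. (15) and (16)). The sketch below is ours, with arbitrary illustrative values of $`K`$, $`a`$ and $`\gamma `$.

```python
import numpy as np

def potential(x, y, K=1.0, a=1.0, gamma=2.8):
    """Eq. (11) in cartesian coordinates."""
    r = np.hypot(x, y)
    s = np.hypot(x - 2.0 * a, y)
    d = r - s
    return K * ((r + s)**gamma - d * np.abs(d)**(gamma - 1.0)) / (2.0 * r * s)

# Ratios that tend to unity in the small- and large-radius limits,
# Eqs. (15) and (16) respectively:
K, a, gamma = 1.0, 1.0, 2.8
r0, r1 = 1e-4, 1e4
print(potential(r0, 0.0) * r0 / (K * (2.0 * a)**(gamma - 1.0)))
print(potential(r1, 0.0) / (K * 2.0**(gamma - 1.0) * r1**(gamma - 2.0)))
```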
The surface density distribution, associated with $`\mathrm{\Phi }`$, is determined as (see Binney & Tremaine 1987):
$$\mathrm{\Sigma }(x^{\prime },y^{\prime })=\frac{1}{4\pi ^2G}\int \int \frac{(\nabla ^2\mathrm{\Phi })dxdy}{\sqrt{(x^{\prime }-x)^2+(y^{\prime }-y)^2}}.$$
(12)
We examine the characteristics of the potential and surface density functions for small and large radii. Very close to the centre, we have $`r\ll s`$, which simplifies (11) as follows
$$\mathrm{\Phi }=\frac{Ks^{\gamma -1}}{2}\frac{(1+\frac{r}{s})^\gamma +(1-\frac{r}{s})^\gamma }{r}.$$
(13)
One can expand $`(1+\frac{r}{s})^\gamma `$ and $`(1-\frac{r}{s})^\gamma `$ in terms of $`r/s`$ to obtain
$$\mathrm{\Phi }=\frac{Ks^{\gamma -1}}{r}\left[1+\sum _{n=1}^{\infty }\frac{\mathrm{\Gamma }(\gamma +1)}{(2n)!\mathrm{\Gamma }(\gamma -2n+1)}\left(\frac{r}{s}\right)^{2n}\right],$$
(14)
where $`\mathrm{\Gamma }`$ is the well known Gamma function. As $`r`$ tends to zero, $`s`$ is approximated by $`2a`$ and $`r/s\to 0`$. Therefore, Equation (14) reads
$$\mathrm{\Phi }\approx \frac{K(2a)^{\gamma -1}}{r}.$$
(15)
Dimensional considerations show that the surface density $`\mathrm{\Sigma }`$ will approximately be proportional to $`r^{-2}`$. Thus, sufficiently close to the centre, we obtain a strong density cusp with spherical symmetry. When $`r`$ tends to infinity, the potential $`\mathrm{\Phi }`$ is approximated as
$$\mathrm{\Phi }\approx K2^{\gamma -1}r^{\gamma -2}.$$
(16)
So, we find
$$\mathrm{\Sigma }\propto r^{\gamma -3}.$$
(17)
We have to select those values of $`\gamma `$ for which the surface density distribution is plausible and orbits are bounded. According to (17), the surface density decays outward ($`r\to \infty `$) for $`\gamma <3`$. Moreover, Equation (16) shows that orbits will be escaping if $`\gamma \le 2`$. To verify this, consider the force exerted on a star, which is equal to $`-\nabla \mathrm{\Phi }`$. This force will always be directed outward for $`\gamma \le 2`$ and results in escaping motions. Therefore, we are confined to $`2<\gamma <3`$.
We have used Equations (11) and (12) to compute $`\mathrm{\Phi }`$ (Figure 1) and $`\mathrm{\Sigma }`$ (Figure 2) for $`\gamma =2.8`$. Due to the complexity of $`\nabla ^2\mathrm{\Phi }`$, we have utilized a numerical scheme to evaluate the double integral of (12). The potential and surface density functions are symmetric with respect to the $`x`$-axis and are cuspy at $`(x=0,y=0)`$. The potential $`\mathrm{\Phi }`$ has a local minimum at $`(x=a,y=0)`$ that plays an important role in the evolution of orbits. This minimum point has no image in the plane of the surface density isocontours. The surface density monotonically decreases outward from the centre. As is evident from Figure 2, a non-axisymmetric, lopsided structure is present at moderate distances from the centre.
## 3 ORBITS
To this end, we classify orbit families. Having the two isolating integrals $`E`$ and $`I_2`$, one can find the possible regions of motion by employing the positiveness of $`p_u^2`$ and $`p_v^2`$ in (6) and (7). We define the following functions:
$`f(u)`$ $`=`$ $`-2a^2E\mathrm{sinh}^2u+F(u),`$ (18)
$`g(v)`$ $`=`$ $`-2a^2E\mathrm{sin}^2v+G(v),`$ (19)
where $`F(u)`$ and $`G(v)`$ are given as (8). By virtue of $`p_u^2\ge 0`$ and $`p_v^2\ge 0`$ one can write
$`I_2-f(u)`$ $`\ge `$ $`0,`$ (20)
$`-I_2-g(v)`$ $`\ge `$ $`0.`$ (21)
Due to the nature of $`\mathrm{\Phi }`$, no motion exists for negative energies. Hence, $`E`$ can only take positive values, $`E>0`$. Our classification is based on the behavior of $`f(u)`$ and $`g(v)`$. The most general form of $`f(u)`$ is attained for $`\gamma C<4a^2E`$. In such a circumstance, $`f(u)`$ takes a local maximum at $`u=0`$, $`f_\mathrm{M}=f(0)=C`$, and a global minimum at $`u=u_\mathrm{m}`$, $`f_\mathrm{m}=f(u_\mathrm{m})`$, where
$$\mathrm{cosh}u_\mathrm{m}=\left(\frac{4a^2E}{C\gamma }\right)^{\frac{1}{\gamma -2}},$$
(22)
and
$$f_\mathrm{m}=-2a^2E\mathrm{sinh}^2u_\mathrm{m}+C(\mathrm{cosh}u_\mathrm{m})^\gamma .$$
(23)
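Equations (22) and (23) are easy to evaluate, and a direct grid search confirms that $`u_\mathrm{m}`$ is the global minimum of $`f(u)`$. The sketch below is ours; the parameter values are purely illustrative, chosen so that $`\gamma C<4a^2E`$.

```python
import numpy as np

C, a, E, gamma = 1.0, 1.0, 3.0, 2.8   # illustrative; gamma*C < 4a^2E

def f(u):
    """Eq. (18) with F(u) from Eq. (8)."""
    return -2.0 * a**2 * E * np.sinh(u)**2 + C * np.cosh(u)**gamma

u_m = np.arccosh((4.0 * a**2 * E / (C * gamma))**(1.0 / (gamma - 2.0)))  # (22)
f_m = f(u_m)                                                             # (23)

u = np.linspace(0.0, 2.0 * u_m, 200001)
print(u[np.argmin(f(u))], u_m)   # the grid minimum agrees with u_m
print(f_m)                        # global minimum of f; strongly negative here
```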
According to (20) we obtain
$$I_2\ge f_\mathrm{m}.$$
(24)
On the other hand, $`g(v)`$ has a global maximum at $`v=\pi `$, $`g_\mathrm{M}=g(\pi )=C`$, and two global minima at $`v=\pi /2`$ and $`v=3\pi /2`$, $`g_\mathrm{m}`$=$`g(\pi /2)`$=$`g(3\pi /2)`$=$`-2a^2E`$. Therefore, Inequality (21) implies
$$I_2\le 2a^2E.$$
(25)
By combining (24) and (25) one achieves
$$f_\mathrm{m}\le I_2\le 2a^2E.$$
(26)
It should be noted that $`2a^2E>C`$. This is because of $`2<\gamma <3`$. $`f_\mathrm{m}`$ and in consequence $`I_2`$, can take both positive and negative values. Depending on the value of $`I_2`$, three general types of orbits are generated:
(i) Eccentric Butterflies. For $`C<I_2<2a^2E`$, the allowed values for $`u`$ and $`v`$ are
$$u\le u_0,v_{b,1}\le v\le v_{b,2},v_{b,3}\le v\le v_{b,4},$$
(27)
where $`u_0`$ and $`v_{b,i}`$ ($`i=1,2,3,4`$) are the roots of $`f(u)=I_2`$ and $`g(v)=-I_2`$, respectively. As Figure 3a shows, the horizontal line that indicates the level of $`I_2`$ intersects the graph of $`f(u)`$ at one point, which specifies the value of $`u_0`$. The line corresponding to the level of $`-I_2`$ intersects $`g(v)`$ at four points that give the values of $`v_{b,i}`$s (Figure 3b). In this case the motion takes place in a region bounded by the coordinate curves $`u=u_0`$ and $`v=v_{b,i}`$. The orbits fill the shaded region of Figure 4a. These are butterfly orbits (de Zeeuw 1985) displaced from the centre. We call them eccentric butterfly orbits.
(ii) Aligned Loops. We now let $`I_2`$ be negative so that $`f_\mathrm{m}<I_2<-C`$. In this case the equation $`f(u)=I_2`$ has two roots, $`u_{l,1}`$ and $`u_{l,2}`$, which can be identified by the intersections of $`f(u)`$ and the level line of $`I_2`$ (see Figure 3c). The equation $`g(v)=-I_2`$ has no real roots and Inequality (21) is always satisfied (Figure 3d). The allowed ranges of $`u`$ and $`v`$ will be
$$u_{l,1}\le u\le u_{l,2},0\le v\le 2\pi .$$
(28)
The orbits fill a tubular region as shown in Figure 4b. These orbits are bound to the curves of $`u=u_{l,1}`$ and $`u=u_{l,2}`$ and elongate in the same direction as lopsidedness. Following ST99, they are called aligned loops.
(iii) Horseshoes. For $`-C<I_2<C`$, we have a different story. In this case, both of the equations $`f(u)=I_2`$ and $`g(v)=-I_2`$ have two roots. We denote these roots by $`u=u_{h,i}`$ and $`v=v_{h,i}`$ ($`i=1,2`$). In other words, the level lines of $`\pm I_2`$ intersect the graphs of $`f(u)`$ and $`g(v)`$ at two points as shown in Figures 3e and 3f. The orbits fill the shaded region of Figure 4c, which looks like a horseshoe. We call these horseshoe orbits. The orbital angular momentum of stars moving in horseshoes ($`G=xp_y-yp_x`$) flips sign when stars arrive at one of the coordinate curves $`v=v_{h,1}`$ or $`v=v_{h,2}`$.
For $`\gamma C>4a^2E`$, $`f(u)`$ is a monotonically increasing function of $`u`$ and eccentric butterflies are the only existing family of orbits. There are three transitional cases corresponding to $`I_2=C`$, $`I_2=2a^2E`$ and $`I_2=f_\mathrm{m}`$. For $`I_2=C`$, eccentric butterflies extend to a lens orbit as shown in Figure 4d. For $`I_2=2a^2E`$, stars undergo a rectilinear motion on the line $`x=a`$ with the amplitude of $`\pm a\mathrm{sinh}u_0`$ in the $`y`$-direction. For $`I_2=f_\mathrm{m}`$, loop orbits are squeezed to an elliptical orbit defined by $`u=u_\mathrm{m}`$.
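The ranges of $`I_2`$ derived above translate into a compact classification rule. The routine below is our own summary device, with illustrative parameters; the transitional values are glossed over.

```python
def classify_orbit(I2, E, C=1.0, a=1.0, gamma=2.8):
    """Orbit family as a function of the second integral I_2."""
    if gamma * C >= 4.0 * a**2 * E:
        return 'eccentric butterfly'        # f(u) is monotonic in this case
    cosh_um = (4.0 * a**2 * E / (C * gamma))**(1.0 / (gamma - 2.0))
    f_m = -2.0 * a**2 * E * (cosh_um**2 - 1.0) + C * cosh_um**gamma
    if not f_m <= I2 <= 2.0 * a**2 * E:
        return 'forbidden'                  # violates Eq. (26)
    if I2 > C:
        return 'eccentric butterfly'
    if I2 > -C:
        return 'horseshoe'
    return 'aligned loop'

for I2 in (-50.0, -0.5, 2.0):
    print(I2, classify_orbit(I2, E=3.0))
# -> aligned loop, horseshoe, eccentric butterfly
```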
## 4 DISCUSSIONS
In this work we explore a credible model based on the self-gravity of stellar discs to explain how an eccentric disc, with a strong density cusp, can be in equilibrium. Our mass models exhibit most features of eccentric stellar systems, especially double-nucleus ones such as M31 and NGC4486B.
All of the orbits of our model discs are non-chaotic. Below, we clarify how the existing families of orbits help the eccentric disc to maintain the assumed structure.
The force exerted on a star is equal to $`-\nabla \mathrm{\Phi }`$. The motion under the influence of this force can be tracked on the potential hill of Figure 1b. This helps us to better visualize the motion trajectories.
As Figure 1b shows, the potential function is concave. A test particle released from distant regions with $`x>0`$ and “small” initial velocity, slides down on the potential hill and moves toward the local minimum at ($`x=a,y=0`$). After passing through the neighborhood of this point (there are some trajectories that exactly visit the minimum point), the test particle climbs on the potential hill until its potential energy becomes maximum. Then, the particle begins to slip down again. This process is repeated and the trajectory of the particle fills an eccentric butterfly orbit. Stars moving in eccentric butterflies form a local group in the vicinity of ($`x=a,y=0`$). The accumulation of stars around this local minimum of $`\mathrm{\Phi }`$ can create a second nucleus like P2 in M31 (see T95). The predicted second nucleus will approximately be located at the “centre” of loop orbits while the brighter nucleus (P1) is at the location of the cusp.
Aligned loop orbits occur when the orbital angular momentum is high enough to prevent the test particle from sliding down the potential hill. The boundaries of loop orbits are defined by the ellipses $`u=u_{l,1}`$ and $`u=u_{l,2}`$. The central cusp is located at one of the foci of these ellipses. Aligned loops have the same orientation as the surface density isocontours (compare Figures 2 and 4b). Thus, according to the results of Z99, it is possible to construct a self-consistent model using aligned loop orbits.
Similarly, we can describe the behavior of horseshoe orbits. Stars that start their motion sufficiently close to the centre are repelled from the centre because the force vector is not directed inward in this region. As they move outward, their orbits are bent and cross the $`x`$-axis with non-zero angular momentum. These stars lose a considerable fraction of their kinetic energy as they approach the centre (this is equivalent to their climb on the cuspy region of the potential hill). Meanwhile, the orbital angular momentum takes a minimum and switches sign somewhere on the boundary of the horseshoe orbit. This boundary is defined by $`v=v_{h,1}`$ (or $`v=v_{h,2}`$) and can be chosen arbitrarily close to the centre. These stars spend much time near the centre and deposit a large amount of mass, which generates a cusp. Therefore, horseshoe orbits can be used to construct a self-consistent strong cusp. The method of Z99 is no longer applicable to horseshoes because such orbits don’t cross the long axis (here the $`x`$-axis) near the centre. In fact, horseshoe orbits are a special class of boxlets that appropriately bend toward the centre. The lack of such a property in banana orbits causes the ST97 discs to be non-self-consistent.
In the case of M31 and NGC4486B, if we suppose that loop and high-energy butterfly orbits control the overall shape of outer regions, horseshoe orbits together with low-energy butterflies (small-amplitude librations around the local minimum of $`\mathrm{\Phi }`$) can support the existence and stability of a double nucleus. The parameter $`a`$ will indicate the distance between P1 and P2.
There remains an important question: what happens to a star just at the centre? The centre of the model, where the cusp has been located, is inherently unstable. With a small disturbance, stars located at ($`x=0,y=0`$) are repelled from the centre. But the time that they spend near the centre will be much longer than in distant regions when they move in horseshoes. We remark that the stars of central regions live in horseshoe orbits. Although one can place a point mass (black hole) at the centre without altering the Stäckel nature of the potential, such a point mass will not remain in equilibrium and leaves the centre. Based on the results of this paper, we conjecture that there may not be any mass concentration just at the centre of cuspy galaxies. However, a very dense region exists arbitrarily close to the centre! This may be an explanation of dark objects at the centre of cuspy galaxies. The centre of our model galaxies is unreachable. Our next goal is to apply the method of Schwarzschild (1979,1993) for the investigation of self-consistency.
# Extinction transition in bacterial colonies under forced convection
## Abstract
We report the spatio-temporal response of Bacillus subtilis growing on a nutrient-rich layer of agar to ultra-violet (UV) radiation. Below a crossover temperature, the bacteria are confined to regions that are shielded from UV radiation. A forced convection of the population is effected by rotating a UV radiation shield relative to the petri dish. The extinction speed at which the bacterial colony lags behind the shield is found to be qualitatively similar to the front velocity of the colony growing in the absence of the hostile environment as predicted by the model of Dahmen, Nelson and Shnerb. A quantitative comparison is not possible without considering the slow dynamics and the time-dependent interaction of the population with the hostile environment.
Bacterial colonies growing on a nutrient rich substrate have served as model systems for studying pattern formation and population dynamics in biological systems. Studies with strains of Bacillus subtilis and Escherichia coli have reported a wide variety of complex patterns depending on nutrient conditions . The patterns have been modeled using reaction-diffusion equations . These experimental and theoretical studies have considered an essentially uniform environment where the changes are due only to the depletion of nutrients with time. However, living organisms often are forced to migrate due to changes in the environment.
The population dynamics of bacterial colonies subject to changes in the environment has recently been modeled by Shnerb, Nelson, and Dahmen . Their theoretical model incorporates the effect of a forced convection on the growth of a bacterial colony by considering the convective-diffusion equation given by:
$$\frac{\partial c(\text{x},t)}{\partial t}=D\nabla ^2c(\text{x},t)-\text{v}\cdot \nabla c(\text{x},t)+U(\text{x})c(\text{x},t)-bc^2(\text{x},t),$$
(1)
where $`c(\text{x},t)`$ is the bacteria number density, $`D`$ is the diffusion constant of the bacteria, $`U(\text{x})`$ is the spatially varying growth potential, v is an externally imposed convection velocity, and $`b`$ is a parameter that limits the population number density to a maximum saturation value. If $`\text{v}=0`$ and $`U(\text{x})`$ is constant, Eq. (1) corresponds to the Fisher wave equation which has a solution with a limiting constant value of the front speed $`v_F`$. Wakita et al. have studied a colony of Bacillus subtilis in a high nutrient and low agar medium growing in such a Fisher mode.
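The localization and extinction behaviour contained in Eq. (1) can be seen already in a minimal one-dimensional integration. The sketch below is our own illustration, not from Ref. : it uses an explicit finite-difference scheme written in the frame of the dish, where the bacteria do not convect ($`\text{v}=0`$) but the growth potential $`U`$ moves with the shield, which is equivalent to the moving-frame formulation; all parameter values are in arbitrary units and are not fitted to the experiment.

```python
import numpy as np

L, N = 100.0, 1000
dx = L / N
D, b = 1.0, 1.0
U0, w, v_shield = 1.0, 20.0, 0.5     # Fisher speed v_F = 2*sqrt(D*U0) = 2
dt = 0.2 * dx**2 / D                 # explicit diffusion stability limit

x = np.linspace(0.0, L, N, endpoint=False)
c = np.where(np.abs(x - 30.0) < 2.0, 1.0, 0.0)   # inoculation stripe

def U(t):
    """Growth potential: +U0 under the moving shield, -U0 outside."""
    centre = 30.0 + v_shield * t
    return np.where(np.abs(x - centre) < w / 2.0, U0, -U0)

t = 0.0
for _ in range(50000):
    lap = (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx**2  # periodic box
    c = np.clip(c + dt * (D * lap + U(t) * c - b * c**2), 0.0, None)
    t += dt

# Total population: it stays finite for v_shield < v_F and decays
# towards extinction once v_shield exceeds v_F.
print(c.sum() * dx)
```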
The two new features of the forced convection model given by Eq. (1) are the introduction of a growth potential $`U(\text{x})`$, corresponding to exposing photosensitive bacteria to a light source for example, and the convection of the bacteria due to the motion v of the light source. By considering a colony confined to a rectangular region, the resulting steady-state number density of the bacteria (the time independent solution of Eq. (1)) was obtained in Ref. as a function of v. They concluded that the total number of bacteria in the rectangular region decreases linearly to zero as $`v`$ approaches $`v_F`$ from below. The steady-state spatial density distribution was obtained by solving for the time-independent solutions of Eq. (1) numerically. Because the linearized version of Eq. (1) allows a mapping to non-Hermitian quantum mechanics, additional predictions of the properties of bacterial colonies in terms of localization-delocalization transitions in quantum systems can be made .
We report the first experimental study of a Bacillus subtilis colony forced to migrate by environmental changes due to a moving ultra-violet (UV) source. UV radiation is shone on a petri dish containing nutrient rich agar except in a rectangular region which is shielded. Although UV radiation is supposed to kill these bacteria , we find more subtle behaviors. For example, the colony is confined to the shielded region only when the temperature is below a "crossover" value of approximately $`22^{\mathrm{°}}`$C. When the UV radiation is turned off, the front of the colony which was near the boundary between the hostile and favorable regions initially grows slowly, but recovers to the Fisher front speed $`v_F`$ in about 25 hours (h). This slow recovery near the boundary suggests the presence of signalling between the bacteria, a feature which is absent in Eq. (1).
To study the effect of a changing environment, we rotate the rectangular shield with a constant angular velocity relative to the petri dish. The bacteria are inoculated along a line inside the rectangular shield region. The rotation results in the colony being forced to convect with velocities that increase linearly from zero at the axis of rotation to a maximum value at the edge of the plate. We find that the bacteria colony cannot keep up with the shielded region if the shield moves with velocities much greater than $`v_F`$, thus showing an extinction transition in qualitative agreement with the theoretical model . The spatial number density $`n(𝐱,v)`$ of the bacteria as a function of the speed $`v`$ was measured and found to be time-dependent, even after three days of forced convection. These experimental results illustrate the relevance of Eq. (1) and also indicate that the time-dependent response is of experimental relevance because of the long time scales of biological systems.
We now describe our experimental procedure and observations in more detail. The wild-type strain of Bacillus subtilis was obtained from Presque Isle Cultures and freeze dried at $`-70^{\mathrm{°}}`$C. All experiments were performed from this initial sample by incubating a portion of the sample for 8 h at $`30^{\mathrm{°}}`$C in nutrient rich broth. A drop of this broth representing a total of at least $`10^7`$ bacteria is used to inoculate the nutrient rich agar. The experiments were performed in 15 cm diameter plexiglass petri dishes containing a thin layer of nutrient agar (7 grams/liter of bacto-peptone and 3 grams/liter agar). These conditions are similar to those used in previous observations of the Fisher wave mode . When inoculated as a single point source (diameter $`\sim `$ 3 mm), the growth of the colony was observed to have a uniform disk shape with a front velocity that increases slowly for the first 8 h and eventually reaches a constant front speed $`v_F`$ consistent with previous work . Experiments were performed over a range of temperature ($`21^{\mathrm{°}}`$-$`40^{\mathrm{°}}`$C) and it was found that $`v_F`$ is an increasing function of temperature within this range with $`v_F=1.7\mu \mathrm{m}/\mathrm{s}`$ at $`40^{\mathrm{°}}`$C and $`v_F=0.19\mu \mathrm{m}/\mathrm{s}`$ at $`21^{\mathrm{°}}`$C.
Next we describe our experiments in which we shine UV radiation on the petri dish using two 8 W long wavelength UV-lamps placed 5 cm above the dish. An aluminum sheet is used to shield a rectangular region of the petri dish from the radiation (see Fig. 1). The density of the bacteria is obtained by imaging the light scattered by the bacteria with a CCD camera. Calibration experiments show that the light intensity is proportional to the bacteria density. The colony at time $`t=23.15`$ h after a point inoculation at the center of the petri dish is shown in Fig. 1a for $`21^{\mathrm{°}}`$C. The shielded region is within the dashed lines and has a width $`w=5`$ cm. We observe that the front of the colony is circular and its diameter is smaller than $`w`$. As the colony grows further outward, the edge of the shielded region is reached, and the shape of the colony is no longer circular as shown in Fig. 1b. The width of the colony along the axes parallel ($`x`$) and perpendicular ($`y`$) to the shield is plotted in Fig. 1c. The error bars correspond to the range of fluctuations due to slightly different initial conditions in different runs. The diameter $`d`$ of the bacterial colony growing at the same temperature in a petri dish which is completely shielded from UV radiation is also shown. We observe that the colony under the shield grows with a speed comparable to $`v_F`$ at that temperature. As the colony approaches the edge of the shield, the front speed slows down because the bacteria are confined.
To further demonstrate that the confinement effect is due to the presence of UV radiation, the UV radiation was turned off after 72 h. We observe no change in the velocity of the front along the $`x`$-direction as expected, because the bacteria are deep inside the shield. However, we would expect to see a change in the rate of growth along the $`y`$-direction because the radiation has been removed. We observe that the front velocity recovers to $`v_F`$, but only after 25 h. This behavior is not modeled by Eq. (1), but is important in our discussion of the convection experiments as discussed below.
We performed experiments at higher temperatures and observed that for temperatures greater than approximately $`22^{\mathrm{°}}`$C, the bacteria are able to grow into irradiated regions, but with a front speed that decreases with time. (At $`26^{\mathrm{°}}`$C, the speed was reduced by 41% after 12 h.) Hence, in the presence of radiation we can vary the growth rate by changing the temperature and obtain a transition from a localized colony to one which is delocalized. A detailed study of this phenomenon would be an interesting avenue for further research. In this paper we will consider a simple case in which the bacteria are confined at a temperature of $`22\pm 1^{\mathrm{°}}`$C to investigate the extinction transition in the presence of convection.
The convection experiments were performed by inoculating the bacteria along a diameter of the petri dish. The petri dish is then kept under a radiation shield of width $`w=4.3\mathrm{cm}`$ and placed on a rotating platform, similar to the experiments described earlier. As the platform rotates, the region shielded from the UV radiation advances at a speed which increases linearly from the axis of rotation outward. The colony was initially allowed to grow for 14 h before the platform was rotated. During this time the bacteria covered the shielded region. The time $`t=0`$ corresponds to the time at which the platform was rotated. The results of the colony growth under these conditions are shown in Fig. 2. The position of the shielded region and the axis of rotation are indicated. The bacterial population is clearly seen to migrate and follow the shielded region at low velocities near the axis of rotation and lag behind at higher velocities.
These observations are consistent with the theory of Ref. where a phase diagram for the growth and the extinction of a colony as a function of the growth potential $`U`$ and the convection speed $`v`$ was obtained using Eq. (1). In particular, it was predicted that the bacteria will be localized to the favorable region. Furthermore, the total bacterial population in favorable regions decreases linearly to zero as a function of $`v`$ as $`v`$ approaches $`v_F`$ from below. (In this theory the critical extinction speed $`v_c`$ is the same as $`v_F`$.) To make quantitative comparisons, we have extracted the positions of the fronts corresponding to the three images shown in Fig. 2. These positions are plotted in Fig. 3a; the origin corresponds to the axis of rotation and the initial line of inoculation is along the horizontal axis. The dashed arc in Fig. 3a corresponds to the distance where the velocity of the shield is the same as $`v_F`$. We observe that very far from the axis of rotation, the front does not change during the time $`t=46.56`$ h to $`t=73.73`$ h, indicating that bacteria which could not cope with the speed of the shield were left behind in the hostile irradiated region and did not grow.
Dividing the displacement of the bacteria front by the time difference between images, we extracted the approximate velocity of the front as a function of the radial distance $`r`$. Such an analysis ignores the diffusion of the bacteria along the radial direction. The data for the average velocity of the front, $`v_b(r)`$, are plotted in Fig. 3b. The velocity of the shield $`v(r)`$ also is plotted to provide a reference for the front velocities. The bacteria are confined to the shielded region, and $`v_b(r)`$ is observed to increase with $`r`$, but is always less than $`v(r)`$, the corresponding speed of the shield. This lag might be due to the slow recovery of the bacteria after the UV irradiated region moves ahead, as discussed earlier in reference to Fig. 1c. We also observe that $`v_b(r)`$ increases linearly up to a velocity of $`0.2\mu \mathrm{m}/\mathrm{s}`$, which corresponds to $`r\approx 45`$ mm. For greater $`r`$, $`v_b(r)`$ decreases and the bacteria increasingly lag behind the shield and stop growing for $`r>80`$ mm. The maximum value of $`v_b`$ corresponds to the value of $`v_F`$ of the bacteria colony at $`22^{\circ}`$C in the absence of convection and UV radiation.
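As a concrete illustration of this finite-difference extraction, the minimal sketch below computes $`v_b(r)`$ from the front positions of two successive images; the image times are those quoted above, while the radial grid and front positions are hypothetical placeholders rather than the measured data.

```python
import numpy as np

# Front displacement between two images gives the mean front speed,
# v_b = (d2 - d1) / (t2 - t1), ignoring radial diffusion as in the text.
t1_h, t2_h = 46.56, 73.73                   # image times (h), cf. Fig. 3a
r_mm = np.array([20.0, 40.0, 60.0, 80.0])   # hypothetical radial positions
d1_mm = np.array([5.0, 9.0, 11.0, 11.5])    # hypothetical front positions at t1
d2_mm = np.array([12.0, 18.0, 14.0, 11.5])  # hypothetical front positions at t2

dt_s = (t2_h - t1_h) * 3600.0               # hours -> seconds
v_b = (d2_mm - d1_mm) * 1e3 / dt_s          # mm -> micrometres, per second
for r, v in zip(r_mm, v_b):
    print(f"r = {r:5.1f} mm  ->  v_b = {v:.3f} um/s")
```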
To explain the velocity data for $`r>50`$ mm, we note the following. In the interval of time corresponding to the images shown in Figs. 2b and 2c, the point where the bacteria completely lag behind the shield decreases from $`r=75`$ mm to $`r=59`$ mm. During this time, the bacteria are exposed to UV radiation for part of the time interval, and this exposed fraction increases for larger $`r`$. Hence, because the bacteria grow for only a portion of the total time, the mean front speed $`v_b(r)`$ decreases. The front speed is zero when the bacteria are always in the UV radiation, corresponding to $`r>80`$ mm in Fig. 3b.
We also note that because of the slow rate of growth of the colony, the relatively slow speed of the shield $`v`$, and the finite width of the shield $`w`$, a long transient time of the order of $`w/(v-v_c)`$ is required for the shield to leave the colony, which grows with a speed close to the critical extinction speed $`v_c`$ and thus slower than the shield speed $`v`$. This transient time diverges as $`v`$ approaches $`v_c`$. Hence, for an experiment which is conducted over a finite duration, the value of $`r`$ where the bacteria completely lag behind the shield is larger than the value corresponding to the critical extinction speed $`v_c`$. However, $`v_c`$ can be indirectly calculated from the above relation for the transient time. We obtain the estimate $`v_c\approx 0.23\mu \mathrm{m}/\mathrm{s}`$ which is similar to the value of $`v_F\approx 0.26\mu \mathrm{m}/\mathrm{s}`$ at $`22^{\circ}`$C. This estimate was obtained from the image in Fig. 3 using $`v=0.4\mu \mathrm{m}/\mathrm{s}`$ at $`r=59`$ mm where the bacteria have completely lagged behind the shield at time $`t=73.73\mathrm{h}`$ of rotation. Therefore, we find a critical extinction speed consistent with the Fisher wave velocity as predicted in Ref. .
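To make the arithmetic behind this estimate explicit, here is a minimal consistency check that solves the transient-time relation $`t=w/(v-v_c)`$ for $`v_c`$ with the numbers quoted above; it is an illustration, not part of the original analysis.

```python
# Critical extinction speed from the transient-time relation
#   t ~ w / (v - v_c)   =>   v_c ~ v - w / t,  with numbers from the text.
w_um = 4.3e4              # shield width: 4.3 cm in micrometres
t_s = 73.73 * 3600.0      # rotation time (s) at which the bacteria fully lag
v_shield = 0.4            # shield speed at r = 59 mm (um/s)

v_c = v_shield - w_um / t_s
print(f"v_c ~ {v_c:.2f} um/s")  # ~0.24 um/s, in line with the quoted v_c ~ 0.23
```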
A more direct comparison of our experimental results to theory can perhaps be made by considering the time-dependent response of the model considered in Eq. (1). Additional considerations such as the time-dependent response of the front speed of the bacteria may have to be incorporated. To encourage future comparisons of experimental data with time-dependent models, we plot the number density $`n(x,v)`$ of the bacteria colony at different distances from the shield in Fig. 4 corresponding to different convection velocities $`v`$. The shielded region normalized by the width $`w`$ corresponds to $`-0.5`$ to 0.5. This data corresponds to the image shown in Fig. 2c. These density distributions are still time dependent except at $`v=0.41\mu \mathrm{m}/\mathrm{s}`$, which corresponds to distances where the bacteria are immobile because they have been in the UV irradiated regions for a long time. We observe that the front of the colony in the direction of the convection velocity always lags behind the edge of the strip. This characteristic of the bacteria distribution is similar to that predicted in Ref. , but a direct comparison is not possible because the distribution is still time-dependent after $`t=73`$ h of rotation. From Fig. 4 we further observe that the total bacterial population given by the area under the curve decreases for increasing velocity. We have found it impractical to conduct the experiments for a longer time, which is a significant limitation in making a more direct comparison with time-independent predictions.
The fact that the extinction transition occurs near $`v_F`$ is an interesting result for real biological systems because of the relatively simple model considered in Refs. . Our experiments are an important first step in investigating the usefulness of convection-diffusion models in studying convection in biological systems. The question remains whether the observed evolution of the front can be captured by the time-dependence in Eq. (1) with the same initial conditions, or whether additional terms which include the time-dependent interactions between the bacteria and the hostile environment are necessary.
We thank Karin Dahmen, Nadav Shnerb, and David Nelson for many useful discussions. We thank Jeremy Newburg-Rinn for help in acquiring data, and Anna Delprato and Nancy King for helping us with technical aspects of culturing Bacillus subtilis.
# Compaction of Rods: Relaxation and Ordering in Vibrated, Anisotropic Granular Material
## I Introduction
The packing of identical objects inside a given volume, from atoms to large molecules and polymers to macroscopic particles, is an important problem in many areas of science and technology. For thermal systems, there are well-defined optimal packing configurations, corresponding to thermodynamic equilibrium. In particular, at high packing densities, these equilibrium configurations correspond to ordered, often crystalline, states. However, in many systems, such as glasses, competing interactions between the individual constituents prevent a thermodynamic equilibrium from being reached over experimentally accessible time scales. Furthermore, there are large classes of nonthermal systems, including foams and granular materials such as sand, rice, or pharmaceutical pills, for which ordinary temperature is irrelevant and the usual concept of a corresponding, thermodynamic equilibrium does not exist. In these systems, local temperature-driven fluctuations do not couple to particle motion in an effective manner and do not allow for a full exploration of configuration space. At high packing densities, the most stable, ordered configurations are therefore rarely reached and the packing becomes trapped in metastable, disordered states. Important questions for these strongly non-equilibrium systems remain only partially answered, such as how the metastable configurations respond to perturbations, and how the packing fraction (i.e., the fraction of volume occupied by the granular material) evolves over long times.
Macroscopic granular materials provide a model system for the exploration of these issues. In a three-dimensional (3-D) packing of monodisperse, rigid spheres held together by gravity and frictional forces, there is a myriad of metastable states, each corresponding to a different packing configuration that satisfies mechanical equilibrium, yet with a packing fraction $`\rho `$ far below that of the most stable, crystalline configuration, for which $`\rho \approx 0.74`$. Configuration space can be explored conveniently through the application of external mechanical excitations, such as shaking or vibrating. Starting from an initial, low-packing-fraction configuration, the packing evolves over time toward an asymptotic, higher-packing-fraction state with a more compact particle arrangement. For disordered 3-D packings of equal-sized spheres, computer simulations, as well as experiments on steel balls, have found that the maximum final packing fraction is set by the random close packing limit, $`\rho _{\mathrm{rcp}}\approx 0.64`$. Physically, this limit corresponds to amorphous packing configurations that are fully frustrated by geometrical constraints and unable to compact further. We note in this context that a (topological) effective temperature of the packing may be defined in terms of the available free volume or compactivity. Increases in volume fraction then correspond to decreases in this temperature. High densities around $`\rho _{\mathrm{rcp}}`$ are only reachable via careful cycling of the excitation intensity, similar to thermal cycling during annealing procedures. For fixed excitation intensity, the packing will evolve toward final configurations reflecting a balance between defect creation and annihilation. This suggests a second (dynamic) type of effective temperature associated with the strength of the applied forcing. Recent experiments showed that the corresponding final densities $`\rho `$ are approached logarithmically slowly in time. In a number of theoretical models this slow relaxation was explained as arising from geometrical frustration due to excluded volume, and analogies to glassy behavior were drawn. Furthermore, it was found that, at short times and low excitation levels, the system can only explore a limited region of configuration space, resulting in highly irreversible behavior and memory effects. Only after sufficiently long times and large excitation levels does one reach a reversible, steady-state response in the sense that the two effective temperatures track each other, i.e., the packing fraction becomes a monotonic function of applied forcing.
Actual materials, and certainly macroscopic granulates, typically are far from perfectly spherical and often elongated. Under thermal conditions, particle anisotropy is known to give rise to ordering in a variety of systems, such as liquid crystals, rod-like colloidal virus particles, and certain polymers. For example, a fluid of long, rigid rods at high packing fraction will undergo a transition to an ordered nematic phase in which the rod axes align along a common direction. In nonthermal systems, on the other hand, almost all work to date has focused on spherical particles and the effect of particle anisotropy on the stability and evolution of packing configurations has been largely unexplored. Recent theoretical work by Baulin and Khokhlov on sedimenting solutions of long rigid rods investigated the limit in which external (gravitational) forces far outweigh thermal fluctuations and predicts an isotropic to nematic transition for increasing packing fraction. The nature of the transition into the nematic state was studied by Mounfield and Edwards for a model of granular rods. They concluded that an externally imposed (flow) field was required to stabilize a discontinuous, first-order-type phase transition into a highly ordered nematic state; otherwise there would merely be a cross-over during which the ordering increases continuously with decreasing compactivity (or increasing packing fraction). Buchalter and Bradley used Monte Carlo simulations to study packings of rigid, prolate or oblate ellipsoids. They found that these systems form amorphous, glassy packings with long-range orientational order. Some limited experimental data on the compaction of non-spherical, prolate granular materials under vibrations has been published by Neuman, but, to the best of our knowledge, no information on the degree of ordering or on the asymptotically reached, 3-D packing configurations is available. This lack of systematic investigations is surprising, given the enormous importance of prolate granular materials in a wide range of geological and industrial processes.
Here we study 3-D packings comprised of prolate granular material: millimeter-sized rigid cylinders (“rods”). Applying discrete mechanical excitations, or “taps”, we let the system evolve from an initial, low-packing-fraction to a final, high-packing-fraction state. During this relaxation process, we monitor the local packing fraction non-invasively and correlate it with direct visual images of the packing configurations. We are specifically interested in the question of how two competing factors affect the packings’ evolution as the packing fraction increases: on the one hand, the tendency of randomly arranged rods to lock up in a disordered state because of steric hindrance and friction, and on the other hand the possibility, provided by both gravity and container walls, to align and form ordered, nematic-type configurations. Our results show that there are characteristic stages to the evolution, corresponding to either process. We find that, depending on the tapping history and intensity, highly ordered final states are achievable, in contrast to sphere packings under the same experimental conditions.
This paper is organized as follows. In Section II we describe the experimental set-up and procedure. Results on the relaxation and alignment behavior for a range of excitation intensities are shown in Section III and discussed in Section IV. Section V contains a summary of the findings and conclusions.
## II Experimental Set-Up and Procedure
All experiments were performed on monodisperse nylon 6/6 rods of specific density ($`1.145\pm 0.005`$) g/$`\mathrm{cm}^3`$, each 1.8 mm in diameter and 7.0 mm in length. Approximately 7200 of these rods were filled into a 1 m tall glass tube (1.90 cm inner diameter) mounted vertically on a Bruel and Kjaer 4808 electromagnetic vibration exciter (Fig. 1a). As in previous experiments on spherical particles, vertical vibrations were applied in the form of individual shaking events (“taps”) by driving the exciter with a single cycle of a 30 Hz sine wave. Successive taps were spaced by time intervals sufficiently long (typically 0.5 s) to allow complete relaxation of the system. The vibration intensity was monitored using an accelerometer. In the following we parameterize the tapping intensity by $`\mathrm{\Gamma }`$, the ratio of the measured peak acceleration to the Earth’s acceleration $`\mathrm{g}=9.81\mathrm{m}/\mathrm{s}^2`$ (Fig. 1c). The bottom of the tube contained an entry way for dry nitrogen, which was used only in the preparation of the initial, low packing fraction state of the sample. Through the top of the glass tube the system was placed under vacuum during runs. A control tube similar to the tube described above, filled with the same material but not vibrated, was used to measure electronic drift.
The evolution of the packing fraction between taps was monitored both globally by recording the total filling height of material inside the tube, and locally using a capacitive technique. Mounted along the outside of the tube were four capacitors made from pairs of copper strips, each 1.25 cm wide and 17 cm in length. Each of the capacitors was sensitive to a measurement volume inside the tube defined by a slab of cross-sectional area as indicated in Fig. 1b (about 70% of the total cross-sectional area of the tube). We ascertained that there was little sensitivity to material placed outside this active volume. The capacitance was read by a capacitance bridge with 1 fF resolution. The relation between packing fraction, $`\rho `$, and capacitance, $`C`$, was found to be linear, $`\rho =-3.47\times 10^{-1}+2.35\times 10^{-3}C`$, where $`C`$ is measured in fF (Fig. 2). The figure contains data from all capacitors, with readings for the empty ($`\rho =0`$) tube and for a solid nylon rod occupying its total volume ($`\rho =1`$), as well as data for intermediate packing fractions. For the latter, the two methods employed, a) inserting a solid rod partially into a capacitor or b) compacting a test sample of cylinders with a known number of particles inside the measurement volume, both yielded the same results, indicating that particle shape effects do not significantly influence the packing fraction measurements.
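A quick numerical check of this calibration (with the stripped minus signs of the fitted coefficients restored as above) can be written in a few lines; the bridge readings below are made-up illustrations.

```python
# Linear capacitance-to-packing-fraction calibration from the text,
# with C in fF: rho = -3.47e-1 + 2.35e-3 * C.
def packing_fraction(C_fF: float) -> float:
    return -3.47e-1 + 2.35e-3 * C_fF

for C in (147.7, 400.0, 573.2):   # hypothetical bridge readings (fF)
    print(f"C = {C:6.1f} fF  ->  rho = {packing_fraction(C):.3f}")
# The endpoints bracket the calibration: rho ~ 0 (empty tube), rho ~ 1 (solid rod).
```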
Because visual tracking of particles through the tube side walls was limited to areas between capacitors, we used a separate set-up with a tube of the same diameter but shorter (21 cm) and without capacitors to explore qualitatively the evolution of particle orientations. This was done by video-taping the material through the outside walls and also from above. In addition, careful layer-by-layer vacuuming allowed us to map out the depth dependence.
For each run, the 1 m tall tube was filled with 143.1 g of material to a height of ($`82.3\pm 0.3`$) cm. The material was then fluffed with nitrogen to an initial filling height of ($`90.6\pm 0.5`$) cm, corresponding to a packing fraction $`\rho =0.49`$. We found this to represent the least dense, reproducible initial packing state attainable. Both measurement and control tube were then placed under vacuum to isolate them from environmental changes in the room during a run.
Here we describe two sets of experiments. In the first set, the material was tapped 70,000 times at fixed acceleration and capacitance readings were made after certain, fixed tap intervals. Simultaneously, the average packing fraction of the column was recorded by measuring the overall filling height with a ruler. Prior to each experimental run, the system was returned to the same loose-packed initial state by removing all material, refilling and fluffing with nitrogen. This was done to remove all traces of ordering from previous runs. In the second set of experiments, we explored the effect of tapping history on the packing fraction evolution. A fixed number, $`\mathrm{\Delta }t`$, of taps were applied to the system and the final packing fraction was recorded. Without refilling or fluffing the system, the acceleration was adjusted by $`\mathrm{\Delta }\mathrm{\Gamma }`$ and the measurement process was continued, ramping $`\mathrm{\Gamma }`$ from 1.5 to 7.5 and back several times. This is similar to what would be done in a cyclic heating and cooling process, with $`\mathrm{\Delta }\mathrm{\Gamma }/\mathrm{\Delta }t`$ playing the role of an effective heating or cooling rate.
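The cycling protocol of this second set of experiments can be summarized compactly in code; the sketch below merely enumerates a $`\mathrm{\Gamma }`$ schedule of this kind, with $`\mathrm{\Delta }t`$ taps understood to be applied at each step (the step size and number of cycles shown are illustrative).

```python
# Hypothetical sketch of the acceleration-ramping schedule: Gamma is stepped
# between g_min and g_max in increments dgamma, up and down, several times.
def ramp_schedule(g_min=1.5, g_max=7.5, dgamma=1.0, n_cycles=3):
    up = [g_min + i * dgamma for i in range(int((g_max - g_min) / dgamma) + 1)]
    down = up[-2::-1]                  # back down, without repeating g_max
    schedule = up[:]                   # first (irreversible) ramp up
    for _ in range(n_cycles):
        schedule += down + up[1:]      # subsequent down/up cycles
    return schedule

print(ramp_schedule())  # [1.5, 2.5, ..., 7.5, 6.5, ..., 1.5, 2.5, ...]
```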
## III Results
Figure 3 shows the evolution of the packing fraction, $`\rho (t)`$, for different capacitor regions as a function of tap number, $`t`$. Data for four different accelerations are shown. Each curve is an average of five independent runs and the error bars, for the sake of clarity given only for capacitor 2 in the $`\mathrm{\Gamma }=4.5`$ graph, represent the general rms variation. In order to display the initial packing fraction, $`\rho (0)`$, the tap number (or time) axis on all plots has been incremented by one tap.
The evolution shown in Figure 3 exhibits three distinct stages which we call the initial relaxation stage (up to about $`10^3`$ taps), the vertical ordering stage (between roughly $`10^3`$ and $`10^4`$ taps), and the final, steady-state regime.
During the initial relaxation stage there is a quick increase in $`\rho `$ during the first decade and then a leveling off to a plateau near 0.55. This saturation is highly pronounced for $`\mathrm{\Gamma }=4.5`$, 5.5 and 6.5, but much less so for $`\mathrm{\Gamma }=7.5`$. The second stage is identified by an abrupt increase in packing fraction, which becomes less steep and smaller as $`\mathrm{\Gamma }`$ increases. This increase in packing fraction coincides with a nematic ordering of the material, during which the material aligns vertically parallel to the tube walls. Figure 4 shows snapshots of particle configurations during this alignment process, taken midway down the height of the tube for $`\mathrm{\Gamma }=4.5`$: (a) and (b) are side views, as seen through the tube wall, of the initial, randomly packed state, and the highly aligned arrangement of the outer layer at some point toward the end of the ordering stage, respectively.
Images (c)-(f) show top views of the packing interior, obtained after careful vacuuming out the material in the upper half of the column. The initial state (c) shows the material as poured (but not fluffed). After 2000 taps (d), the particles have lined up along the edge of the tube, while its interior remains disordered (start of ramping domain). After 6000 taps (e), the particles in the interior also have begun to orient vertically. The final, steady-state regime is a dense, highly aligned configuration (f). This image sequence demonstrates that the vertical alignment starts at the tube walls and proceeds inward. We find that the start of the ordering stage varies slightly with height along the tube, for $`\mathrm{\Gamma }<5`$ happening somewhat earlier at the top of the tube and moving downward (see Fig. 3a). At higher accelerations, $`\mathrm{\Gamma }=6.5`$ and 7.5, the vertical alignment at the end of the ordering stage is less perfect and at the highest accelerations explored, $`\mathrm{\Gamma }=7.5`$, the ordering stage becomes less distinct, particularly for the lower regions of the container. Final, steady-state configurations at these accelerations are very similar to those in Figure 4e, where only the outside is well aligned while the inside still exhibits numerous defects.
In the final steady-state regime both the initial relaxation and subsequent vertical ordering have saturated and the packing fraction is found to fluctuate around a constant, asymptotic value (Fig. 3). This final packing fraction typically increases with depth below the free surface (in our data for $`\mathrm{\Gamma }=4.5`$ the differences may be too small to be significant). For accelerations below $`\mathrm{\Gamma }=4.5`$ we found the dynamics to be exceedingly slow, preventing an asymptotic state from being reached by $`t=10^5`$, the longest tapping interval explored in these experiments.
In Fig. 5 we show the average overall packing fraction as determined by the total height of the material in the tube, $`\rho _h(t)`$. Clearly visible is the significant increase of final packing fraction for $`\mathrm{\Gamma }=5.5`$ and below. In general, $`\rho _h(t)`$ mimics the three stages seen in $`\rho (t)`$. However, the onset of the vertical ordering regime is not as abrupt for $`\rho _h(t)`$ as it is for $`\rho (t)`$, nor is it preceded by the slight dip in volume fraction which ends the first domain in Figure 3. A direct comparison of $`\rho _h(t)`$ with the height-averaged packing fraction, $`\langle \rho (t)\rangle _h`$, obtained from the capacitor data in Fig. 3, is shown in Fig. 6. As we will discuss below, the differences between the two types of measurement reflect the fact that the active volume responsible for $`\rho (t)`$ includes only a fraction of the material near the tube wall. Also included in Figs. 5 and 6 is a trace for $`\mathrm{\Gamma }=4.5`$ obtained under different initial conditions: instead of initial fluffing, the material was merely dropped into the tube and then tapped. We note that the memory of the preparation conditions persists only up to about 20 taps, after which traces for different initial conditions coincide.
Results from the second type of experiment are given in Fig. 7 for $`\mathrm{\Delta }t=100`$ (a) and 1,000 (b), using $`\mathrm{\Delta }\mathrm{\Gamma }=1.0`$ in both cases. The ramp rates are fast enough that the material does not have time to reach the steady-state during the first pass. Consequently, the packing fraction initially does not depend on acceleration alone, but also strongly on the vibration history: As $`\mathrm{\Gamma }`$ is ramped first up, and then down and up again several times, $`\rho _h`$ slowly cycles toward a reversible regime in which $`\rho _h(\mathrm{\Gamma })`$ becomes monotonic. For fast rates, as in Fig. 7a, a large number of cycles may be required; for slower ramp rates the reversible regime may be approached much earlier (Fig. 7b).
## IV Discussion
Two central observations from recent, systematic compaction experiments with spheres were a) at fixed acceleration the logarithmically slow approach in time toward the final steady-state packing fraction, and b) the existence of memory effects in which the final state can depend on the initial sample preparation and on the acceleration history. In particular, from analysis of the packing fraction fluctuation spectra it was found that there is an intrinsic, broad range of relaxation time scales, many of which, however, are effectively “frozen out” if the acceleration stays below certain values.
Much of this behavior qualitatively carries over to the cylinder-shape particles investigated here. The data in Figs. 3 and 6 indicate a non-exponential relaxation of voids in the packing during the first stage of the packing fraction evolution. At the highest acceleration, when effects due to particle alignment are weakest (Figs. 3d and 6d), we find that $`\rho (t)`$ increases approximately logarithmically before it eventually saturates. Such logarithmic relaxation arises naturally from free volume considerations, in which the ability of particles to find and occupy any of the remaining free space decreases exponentially in time. In its simplest form, this scenario is independent of particle shape and, furthermore, only based on void annihilation. Without the additional possibility of defect or void creation, however, the system eventually has to jam at a final packing fraction that can only be a monotonically increasing function of applied acceleration (since hindrance effects are more effectively overcome at larger $`\mathrm{\Gamma }`$). Instead we find that, over the range in $`\mathrm{\Gamma }`$ explored, the overall final packing fraction decreases with increasing acceleration (Figs. 5, 6). As for spheres, this clearly indicates defect production in response to tapping. Nevertheless, as Fig. 7 shows, memory of the tapping history is not easily destroyed and can persist over tens of thousands of taps: if during this time the tapping intensity is changed, the system irreversibly moves into a new state of higher packing fraction, independent of whether the tapping intensity is increased or decreased. Only at sufficiently high acceleration and after sufficiently many taps is the reversible regime reached where the packing fraction depends monotonically on acceleration. This situation is similar to super-heating or super-cooling in thermal systems.
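A convenient fitting form for such slow, roughly logarithmic relaxation, used for the sphere-compaction experiments cited above, is $`\rho (t)=\rho _f-\mathrm{\Delta }\rho /[1+B\mathrm{ln}(1+t/\tau )]`$; the short sketch below evaluates it with illustrative parameters that are not fits to the rod data.

```python
import math

# Inverse-logarithmic relaxation law for vibrated packings:
#   rho(t) = rho_f - d_rho / (1 + B * ln(1 + t / tau))
# All parameter values below are illustrative only.
def rho_log(t, rho_f=0.64, d_rho=0.09, B=0.8, tau=15.0):
    return rho_f - d_rho / (1.0 + B * math.log(1.0 + t / tau))

for t in (1, 10, 100, 1000, 10000):   # tap number
    print(f"t = {t:5d} taps  ->  rho = {rho_log(t):.3f}")
```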
The prolate, anisotropic particle shape amplifies the ability of 3-D cylinder packings to sustain large voids. This is demonstrated by the low packing fraction ($`\rho (0)=0.49`$ in Fig. 3 compared to 0.59 for spheres in the same experimental set-up) for the loosest, mechanically stable configuration, by the large overall compaction range ($`0.49<\rho <0.72`$ in Fig. 3a compared to $`0.59<\rho <0.65`$ for spheres), and by the large fluctuations in $`\rho (t)`$ in the steady-state.
As Fig. 7b shows, packing defects can be annealed out, first by either “heating” or “cooling” along an irreversible branch and, once the reversible branch is reached, by “cooling” to final packing fractions as high as 0.73. We note that this value is roughly 10% smaller than the random close packing density in two dimensions, $`\rho _{\mathrm{rcp}}^{2\mathrm{D}}=0.82`$, defined as the maximum packing fraction beyond which a transition to an ordered triangular array would be necessary. Along the reversible branch, memory of the acceleration history is erased as long as “heating” or “cooling” steps are taken at sufficiently slow rates $`\mathrm{\Delta }\mathrm{\Gamma }/\mathrm{\Delta }t`$. However, the maximum allowable rate itself depends on how far the system has evolved: With $`\mathrm{\Delta }\mathrm{\Gamma }/\mathrm{\Delta }t=10^{-2}`$, in Fig. 7a, the system clearly was cycled too fast (“superheated” as well as “supercooled”) and stayed in the irreversible regime even at the highest $`\mathrm{\Gamma }`$ until it had time to relax. But, once the steady-state is reached, the same fast rate produces very little, if any, “supercooling”.
Qualitatively, the response to acceleration cycling seen in Fig. 7 is similar to that obtained for spherical particles. Presumably, therefore, concepts based on frustration due to volume exclusion alone may model the observed behavior. However, the shape of $`\rho (t)`$ differs dramatically from the sphere case because of the vertical ordering regime. Presently available models for $`\rho (t)`$ do not account for the orientational degree of freedom and thus are not applicable without modification.
The most crucial difference between sphere and cylinder packings comes from the tendency of cylinders to align along their long axis, both with each other and with the container walls. Vertical alignment along the direction of the tube walls becomes noticeable after initial void relaxation has started to saturate (typically after $`\sim 100`$ taps in Figs. 3 and 6). After the void relaxation process saturates, vertical particle ordering becomes the dominant mechanism for compaction. Note that, over the acceleration range explored, the material once vertically aligned is highly stable against reorientation.
This ordering process bears resemblance to the transition from isotropic to nematic phases in liquid crystals and for hard rods in thermal systems. As in a nematic, particle motion in the aligned granular state is found to occur mainly by translation parallel to the cylinder axis. In cases where the aligned system was particularly carefully cooled, we also observed vertical stacking (seen for the outermost particle layer in Fig. 4b) akin to smectic-type phases in liquid crystals.
As seen from Fig. 4, at any given height inside the tube, the first particles to vertically align are those in contact with the tube walls. Alignment then progresses horizontally inward, until the whole tube cross-section is ordered. This progression is also detected non-invasively from the difference in the packing fraction values obtained by filling height ($`\rho _h(t)`$) and by capacitance ($`\rho (t)`$, $`\langle \rho (t)\rangle _h`$). The fraction of particles near the tube walls covered by the active measurement volume of the capacitors is less than 50% (Fig. 1b), making the capacitive measurements more indicative of the packing fraction in the central tube region, along its axis. For this reason, $`\langle \rho (t)\rangle _h`$ is less than $`\rho _h(t)`$ during the vertical ordering stage until the ordering front has reached the tube center and the final steady state is obtained (Fig. 6).
In the simplest possible picture, the increase in packing fraction during the ordering stage is solely due to conversion of disordered three-dimensional particle arrangements with large void space to a dense, highly aligned configuration of essentially two-dimensional character. A model of this process can be constructed as follows. We assume that cylinders in the disordered state can align vertically only in the presence of previously aligned neighbors that act as nucleation sites (this assumption of an essentially step-like front separating the isotropic from the nematic regions is supported by the calculations of Baulin and Khokhlov). Then the ordering proceeds at a rate $`dN/dt=\mathrm{p}n`$, where $`N`$ is the total number of already lined-up cylinders behind the ordering front, $`n`$ is the number of open nucleation sites and p a fixed probability for alignment. Through the two-dimensional packing fraction for discs, $`N`$ and $`n`$ are related to the size of the ordered area and its inner perimeter, respectively. For a cylindrical tube we find $`n\propto \sqrt{N_f-N}`$, where $`N_f`$ is the final number of aligned particles at $`t=t_f`$, the end of the ordering stage. The square root dependence reflects the fact that the number of nucleation sites shrinks as the ordering front advances radially from the tube wall toward the tube center (the tube wall acts as a ring of nucleation sites at $`t=t_i`$ when $`N(t_i)=0`$). As a result, the fraction of ordered cylinders increases with tap number as $`\frac{N(\tau )}{N_f}=2\tau -\tau ^2`$ for $`0\le \tau \le 1`$, where $`\tau =\frac{t-t_i}{t_f-t_i}`$ is the normalized tap number. The measured increase in packing fraction is then directly proportional to $`N(\tau )/N_f`$. If we take as the start of the ordering stage ($`t=t_i`$ in the above expression) the point in time at which $`\langle \rho (t)\rangle _h`$ first falls below $`\rho _h`$, we find that the above functional form for $`N(\tau )`$ fits the data quite well (inset to Fig. 5).
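For completeness, the quoted profile follows from this rate equation in a few lines; here is a minimal derivation, with the proportionality constant absorbed into an effective rate $`\stackrel{~}{p}`$. Integrating $`dN/dt=\stackrel{~}{p}\sqrt{N_f-N}`$ with $`N(t_i)=0`$ gives

$$2\sqrt{N_f}-2\sqrt{N_f-N}=\stackrel{~}{p}(t-t_i),$$

so that $`N(t)=N_f-(\sqrt{N_f}-\stackrel{~}{p}(t-t_i)/2)^2`$. Defining $`t_f`$ by $`N(t_f)=N_f`$, i.e. $`\stackrel{~}{p}(t_f-t_i)=2\sqrt{N_f}`$, and substituting $`\tau =(t-t_i)/(t_f-t_i)`$ yields $`N(\tau )/N_f=1-(1-\tau )^2=2\tau -\tau ^2`$, the form quoted above.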
Radial packing fraction gradients, detected by differences between $`\langle \rho (t)\rangle _h`$ and $`\rho _h(t)`$, are also found in the initial relaxation stage and in the final steady-state regime. During void relaxation, the capacitive data in Fig. 6 typically lie above those from the height measurements, indicating a slightly higher packing fraction in the central region, away from the walls. This is a consequence of two effects. First, the tube wall can provide stable pinning, preventing low-packing-fraction configurations from collapse (initial void collapse is accelerated if the material is merely dropped into the column, rather than carefully fluffed, as seen in Fig. 6a). Second, next to the tube wall, the packing fraction naturally is reduced due to excluded volume (unless the packing configuration is commensurable with the tube volume). Of course, once alignment along the tube wall has begun, the outer regions become denser than the tube interior and the traces for $`\langle \rho (t)\rangle _h`$ and $`\rho _h(t)`$ cross. The slight dip present in $`\rho (t)`$ or $`\langle \rho (t)\rangle _h`$ at the end of void relaxation and the beginning of the ordering stages (for $`\mathrm{\Gamma }\le 6.5`$) indicates that the packing fraction temporarily decreases in the central tube region. We speculate that this local dilation might be required to allow particles to rotate and align with the tube wall.
At high accelerations, $`\mathrm{\Gamma }\ge 5.5`$, a radial gradient in the packing fraction remains even in the steady-state regime (Fig. 6). At these accelerations, particles near the wall remain highly aligned while the interior exhibits packing defects (similar to Figs. 4d, e). Consequently, the capacitively measured steady-state packing fraction values fall below those from the height measurements (Fig. 6b-d). Interestingly, for $`\mathrm{\Gamma }=7.5`$ this gradient develops only at large times. Conversely, for $`\mathrm{\Gamma }=4.5`$ the fact that $`\langle \rho (t)\rangle _h`$ and $`\rho _h(t)`$ coincide at large times indicates a highly uniform packing fraction profile across the tube once the asymptotic state is reached (cf. Fig. 4e).
As far as particle configurations are concerned, the simulations of prolate ellipsoid packings by Buchalter and Bradley predict that the pouring and packing process by itself should lead to a certain degree of nematic ordering (“nematic glass”). In an (infinite) system without vertical side walls this would be a consequence of minimizing the gravitational potential energy during particle deposition: the first rods hitting the container bottom would tend to lay flat and thus induce horizontal alignment of subsequent layers. Such a state contains fewer large voids than a completely isotropic packing configuration and is thus denser. In our experiments, this might be reflected in the difference between the as-poured and fluffed traces for $`\mathrm{\Gamma }=4.5`$ in Figs. 5 and 6a. Indeed, as the side and top views in Figs. 4a and c show for the poured initial state, there is a preference for cylinders to orient toward the horizontal. As the eventual merging of the $`\rho (t)`$ curves for different initial conditions indicates, the same vertically “flattened” packing state is also reached from the more isotropic, fluffed initial condition, namely as most large voids have collapsed and $`\rho (t)`$ starts to level off (Fig. 6a). At least for accelerations $`\mathrm{\Gamma }\le 6.5`$ this packing state can be identified with a packing fraction $`\rho \approx 0.55`$.
Our present system is too small (in terms of lateral extent) to cleanly test whether the packing configuration at the end of the void relaxation stage corresponds to a horizontally orientated “nematic glass” state. According to Ref. shaking is expected to eventually break up this orientational order and our data are certainly compatible with this for long times and/or high accelerations. However, vertical shaking in the presence of the tube sidewalls, rather than merely reducing the degree of horizontal ordering, provides a strong incentive for rods to line up vertically, similar to the situation in a thermal system of rods where the loss in rotational entropy is compensated by a gain in free volume accessible to translations. (As we mentioned above, this gain may be the cause of the small dip in the packing fraction that we pick up by the capacitive measurements in Figs. 3a-c before the vertical ordering changes the overall particle packing fraction.) In principle, this argument might also lead to some initial vertical ordering along the tube walls from the pouring process. From Figs. 4a and c, and also from the fact that the initial densities $`\langle \rho (0)\rangle _h`$ and $`\rho _h(0)`$ coincide in Fig. 6, we find, however, no evidence of such alignment, demonstrating that gravitational potential energy far outweighs rotational entropy unless it is mitigated by the applied acceleration during tapping.
## V Conclusions
We have extended our investigations of the relaxation behavior of nonthermal, granular material to highly anisotropic, cylindrical particle shapes. Using a combination of non-invasive, capacitive probes and video imaging, we have traced both the local and global packing fraction $`\rho `$ as well as the evolution of the packing configurations (Fig. 4) from an initial, low-packing-fraction to a final, high-packing-fraction state under applied mechanical excitations.
We observe many qualitative features also seen in the relaxation of sphere packings and find them amplified by the particles’ anisotropy. This includes an even wider range of metastable configurations and thus a larger span of accessible packing fractions (Fig. 3), as well as memory effects in the irreversible branch of $`\rho (\mathrm{\Gamma })`$ that are much more pronounced (Fig. 7).
In contrast to sphere packings, which tend to end up in disordered configurations even after prolonged tapping and cycling, we find clear evidence that particle anisotropy can drive ordering. This is most strikingly observed in $`\rho (t)`$ where we can distinguish three characteristic stages (Figs. 3 and 6): First, a void relaxation stage takes the initially isotropic arrangement to an intermediate packing fraction near $`\rho =0.55`$. This is a state of vertically collapsed and thus predominantly horizontally oriented particle configurations which may correspond to the nematic glass state seen in computer simulations. This state, however, turns out to be unstable to continued vertical excitations. During a second stage, the vertical ordering regime, particles re-align their long axes vertically. This ordering process starts from the container walls, moves inward and eventually leads to an ordered, nematic-type configuration. We have shown that a simple model can account for the change in packing fraction during this ordering process. The imposed boundary conditions at the wall stabilize the nematic state, effectively playing a role similar to strong flow fields or packing fraction gradients. The third, and final, stage is a steady-state with large fluctuations around an average packing fraction set by competing defect annihilation and creation within the nematic-type particle arrangement.
These results provide a first experimental step toward the full exploration of the effect of anisotropy on ordering in strongly non-equilibrium, nonthermal systems. Our experiments used a fixed aspect ratio of close to four. Larger aspect ratios as well as less rigid particles are expected to hinder the transition from isotropic to nematic configurations, and there is also evidence that oblate particles should order differently from the prolate ones investigated here. Another intriguing extension concerns mixtures of spheres and rods. For a thermal system of this type, novel micro-phase separated patterns, such as lamellae of spheres and rods, have recently been found experimentally, while theoretical models for the granular equivalent would suggest macroscopic phase separation. Finally, for none of the non-spherical systems has the spectrum of fluctuations around the steady-state been explored yet.
## Acknowledgments
We wish to thank Damien Dawson, Allan Smith, Tom Witten and, in particular, Sidney Nagel for many helpful and illuminating discussions. This work was supported by the NSF under Award CTS-9710991 and by the MRSEC Program of the NSF under Award DMR-9808595.
# The physics of the stripe quantum critical point in the superconducting cuprates
## 1 THE STRIPE QUANTUM CRITICAL POINT SCENARIO
The non-Fermi-liquid behavior of the normal state of the cuprates has two major features depending on the doping $`(\delta )`$ regimes. Specifically, (i) near optimal doping no energy scales seem to be present besides the temperature (e.g. the in-plane resistivity stays linear in $`T`$ from just above the critical temperature $`T_c`$), while (ii) in the underdoped regime, new energy scales appear in the form of pseudogaps, which persist well above $`T_c`$ up to a doping-dependent crossover temperature $`T^{*}`$. Starting in the deeply underdoped phase, $`T_c`$ increases with increasing doping and $`T^{*}`$ decreases from high values of several hundreds of kelvins until it merges with $`T_c`$ near optimal doping. On the other hand, a Fermi-liquid-like behavior is observed in the overdoped materials. Correspondingly, many different physical quantities display qualitatively different behaviors in going from the under- to the optimally and to the over-doped regimes. As schematically described in Fig. 1, the subdivision of the phase diagram in three regions naturally arises from the occurrence of an instability line starting at high temperature in the deeply underdoped phase and ending at zero temperature in a quantum critical point (QCP) located near optimal doping. In this scheme the optimally doped and overdoped regimes would be related to the quantum critical (QC) and to the quantum disordered (QD) region of the QCP respectively. The two regions are separated by a crossover line $`\stackrel{~}{T}(\delta )`$. The underdoped regime corresponds to the (quasi)-ordered region below the instability line. However, precursor effects of the ordering could extend up to a higher temperature $`T_0(\delta )`$.
It was shown that in strongly correlated systems (e.g., in the large-U Hubbard model with an electron-phonon interaction and long-range Coulomb forces) an incommensurate charge-density-wave instability occurs when the doping is reduced below a critical value $`\delta _c`$. This tendency to order the charge arises as a compromise between the local tendency towards phase separation and the electrostatic cost to segregate charged carriers. For reasonable values of the parameters this instability line starts near optimal doping at zero temperature. In the underdoped regime this charge ordering tendency occurs below a doping-dependent $`T_{CDW}(\delta )`$ instability line. The charge ordering strongly mixes with spin degrees of freedom and gives rise to the so-called stripe phase.
As shown in Ref. , a crucial consequence of the stripe formation is the occurrence, near the instability, of a singular effective interaction, strongly dependent on momentum, doping, and temperature:
$$\mathrm{\Gamma }(𝐪,\omega )\simeq \stackrel{~}{U}-\frac{V}{\kappa ^2+|𝐪-𝐪_c|^2-i\gamma \omega }$$
(1)
where $`\stackrel{~}{U}`$ is the residual repulsive interaction between the quasiparticles, $`\gamma `$ is a damping parameter, and $`𝐪_c`$ is the wavevector of the CDW instability. The crucial parameter $`\kappa ^2=\xi _c^{-2}`$ is the inverse square of the correlation length of charge order and provides a measure of the distance from criticality. At $`T=0`$, in the overdoped regime, $`\kappa ^2`$ is linear in the doping deviation from the critical concentration, $`\kappa ^2=a(\delta -\delta _c)`$. In Ref. the instability was found at $`\delta _c\approx 0.2`$, with $`q_c\approx 1`$. On the other hand, in the QC region above $`\delta _c`$, $`\kappa ^2\propto T`$, according to the behavior of a Gaussian QCP. In the underdoped regime $`\kappa ^2`$ vanishes approaching the instability line $`T_{CDW}(\delta )`$.
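To visualize how $`\kappa ^2`$ controls the strength of this scattering, here is a minimal numerical sketch of the static ($`\omega =0`$) interaction of Eq. (1) along a one-dimensional cut through $`𝐪_c`$; all parameter values are illustrative rather than fitted.

```python
import numpy as np

# Static effective interaction of Eq. (1): Gamma(q) = U - V / (kappa^2 + (q - qc)^2).
# Units and magnitudes are illustrative only.
U, V, qc = 1.0, 0.5, 1.0
q = np.linspace(0.0, 2.0, 9)

for kappa2 in (0.5, 0.1, 0.02):   # approaching criticality: kappa^2 -> 0
    gamma = U - V / (kappa2 + (q - qc) ** 2)
    print(f"kappa^2 = {kappa2:4.2f}: min Gamma = {gamma.min():7.2f} at q = {q[gamma.argmin()]:.2f}")
# The attraction at q = qc deepens as kappa^2 -> 0, i.e. near the instability.
```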
The occurrence of singular interactions near the QCP and near the instability line determines the physical properties of the cuprates. In particular, the non-Fermi-liquid behavior characteristic of the optimally doped materials is a signature of the QCP. In the next section we report on some spectroscopic consequences of the strong scattering mediated by charge fluctuations near optimal doping. On the other hand in the overdoped region the term $`\kappa ^2=a(\delta -\delta _c)`$ reduces the scattering and determines a region of Fermi-liquid behavior. In the underdoped compounds, when the instability line $`T_{CDW}`$ is approached, a singular scattering between the quasiparticles is again mediated by the charge fluctuations at wavevectors $`q\approx q_c`$. Thus the region near $`T_{CDW}(\delta )`$ is characterized by a strong effective interaction both in the particle-particle (p-p) and the particle-hole (p-h) channels, with a new doping-dependent energy scale. In both cases a pseudogap is an expected outcome, as it will be discussed in Section 3.
## 2 SPECTRAL PROPERTIES NEAR OPTIMAL DOPING
The charge fluctuations couple with spin degrees of freedom since in the hole-poor regions the system is locally closer to half-filling where antiferromagnetic correlations are more pronounced. Both charge and spin fluctuations then mediate a nearly singular scattering between the quasiparticles, strongly affecting the spectral properties. In order to compare the outcomes of this scattering with the ARPES experiments, mostly performed on optimally doped Bi2212, we assumed a tight-binding model with the band parameters commonly accepted for this material. The exchange of QC charge fluctuations at wavevectors $`𝐪_c=\pm (0.4\pi ,0.4\pi )`$, and QC antiferromagnetic spin fluctuations at $`𝐪_s=(\pi ,\pi )`$ was then considered within a perturbative approach. Since in general the critical wavevector is model and doping dependent, the present choice of $`𝐪_c`$ was suggested to match the experiments. The resulting single-particle spectra are characterized by (i) a transfer of spectral weight from the quasiparticle peak to the incoherent shadow peaks; (ii) a redistribution of the low-energy spectral weight with a modification of the FS; (iii) a strong anisotropic suppression of spectral weight around the M points $`(\pm \pi ,0)`$ and $`(0,\pm \pi )`$. All these features have a counterpart in the experiments.
In this framework one can also investigate the bilayer structure of Bi2212 and explain the puzzling absence of bonding-antibonding band splitting. According to the band calculations, we introduced a k-dependent interplane hopping $`t_{\perp }(𝐤)=t_{\perp }|\gamma _𝐤|`$ with $`\gamma _𝐤=\frac{1}{2}(\mathrm{cos}k_x-\mathrm{cos}k_y)`$. $`t_{\perp }(𝐤)`$ is large near the M points only. The intraplane scattering, however, mostly reduces the quasiparticle spectral weight near the M points. Thus, within our scenario, the absence of detectable band splitting in ARPES spectra follows directly from the in-plane spectral properties.
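The k-space structure responsible for this suppression is easy to check numerically; a minimal sketch, with an illustrative $`t_{\perp }`$ magnitude, evaluating $`t_{\perp }(𝐤)`$ at an M point and on the zone diagonal:

```python
import math

# Bilayer hopping t_perp(k) = t_perp * |(cos kx - cos ky) / 2|: maximal at the
# M points, vanishing along the zone diagonals kx = +/- ky.
t_perp = 0.1   # illustrative magnitude (eV)

points = {"M point (pi, 0)": (math.pi, 0.0),
          "diagonal (pi/2, pi/2)": (math.pi / 2, math.pi / 2)}
for label, (kx, ky) in points.items():
    gamma_k = 0.5 * (math.cos(kx) - math.cos(ky))
    print(f"{label:22s}: t_perp(k) = {t_perp * abs(gamma_k):.3f} eV")
```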
The interaction mediated by the quasicritical fluctuations also provides an effective pairing mechanism. In this regard, approaching the superconducting region from the overdoped regime, the doping and the temperature dependences of the $`\kappa ^2`$ term in the QD and QC regimes give rise to a non-trivial increase of $`T_c`$ followed by a saturation around optimal doping. The more involved case of the underdoped regime, with pseudogap formation, will be discussed in the next section.
## 3 PARTICLE-PARTICLE AND PARTICLE-HOLE PSEUDOGAP
The effect of the stripe instability can be more dramatic in underdoped materials, when the system approaches the instability line at temperatures $`T\gtrsim T_{CDW}(\delta )`$. In this case, near $`T_{CDW}(\delta )`$ the critical fluctuations at wavevectors near $`𝐪_c`$ can mediate a large effective interaction between the quasiparticle states $`𝐤`$ and $`𝐤^{\prime }`$ such that $`𝐤-𝐤^{\prime }\approx 𝐪_c`$. The generic outcome is that states of the Fermi surface near the M points are strongly interacting, while quasiparticles around the diagonals ($`\mathrm{\Gamma }X`$ and $`\mathrm{\Gamma }Y`$ directions) are less affected. This finds a correspondence in ARPES experiments, where at $`T^{*}`$ the Leading Edge shift ($`LE`$) starts to develop near the M points. Indeed the strong interaction near the M points can give rise to pairing and gaps both in the p-p and p-h channels. Although it is quite natural that both channels contribute to the formation of the pseudogap below the crossover temperature $`T^{*}`$, the two limiting cases, when a single channel (either p-p or p-h) dominates the pseudogap formation, are simpler to analyze and each one of them shows relevant aspects of the physics of the cuprates.
In the first mechanism we propose, the pseudogap opens due to incoherent pairing in the p-p channel. The strong momentum dependence of the effective interaction (1) plays in this regard a crucial role in selecting the quasiparticle states which are most strongly paired. This leads to non-trivial fluctuation effects, because strongly paired states near the M points coexist with weakly interacting quasiparticles along the diagonals. This situation is quite different from the case described by a single superconducting order parameter $`\mathrm{\Delta }(𝐤)=\mathrm{\Delta }_sg(𝐤)`$. In the present case, indeed, the momentum dependence of the effective pairing interaction not only produces the k structure of $`g(𝐤)`$, but also confers different fluctuation properties in k-space to the Cooper pairs depending on their strongly or weakly paired character. This physical situation has recently been described within a two-gap model. In this latter framework, incoherent tightly bound Cooper pairs around the M points are formed at $`T^{*}`$, while phase coherence is established at a lower temperature $`T_c`$ by coupling to the stiffness of the weakly bound pairs near the diagonal directions.
The scattering in the p-h channel can provide an additional mechanism for the pseudogap formation. If this happens, the issue arises of the interplay between the preformed p-h pseudogap and an additional BCS pairing for the weakly interacting quasiparticles. While this issue was discussed in Ref. for a simple isotropic pseudogap, in Ref. a specific band structure is considered, which includes a preformed k-dependent gap. The whole complication of the strong scattering around the M points is schematized by this preformed p-h gap $`\mathrm{\Delta }_0(\delta ,T)\gamma _𝐤`$, which separates the conduction and the valence band, and vanishes at the points $`(\pm \pi /2,\pm \pi /2)`$. Each band has a width $`4t\approx 1`$ eV, $`t`$ being the nearest-neighbor hopping. We assume $`T^{*}(\delta )`$ as the critical line for the preformed gap formation and take $`\mathrm{\Delta }_0(\delta ,T)=cT^{*}(\delta )g(T/T^{*}(\delta ))`$, where $`c`$ is a fitting parameter, $`g(0)=1`$, $`g(1)=0`$, and $`g(x)`$ interpolates smoothly between these two limits. A suitable weak pairing $`V`$ in the Cooper channel promotes a d-wave superconducting gap $`\mathrm{\Delta }_s(\delta ,T)\gamma _𝐤`$ in the low valence band of the hole doped system. The mean-field BCS critical temperature $`T_c`$ vanishes at $`\delta =0`$, increases with increasing doping, and reaches a maximum at $`\delta _c`$ when the chemical potential crosses the peak which individuates the pseudogap region in the density of states, and then decreases. Therefore $`T^{*}`$ and $`T_c`$ merge near optimum doping and the $`T_c(\delta )`$ curve has the characteristic bell-shaped form in reasonable agreement with the experiments (see Fig. 2).
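As an illustration of how such a preformed gap closes at $`T^{*}`$, the sketch below evaluates $`\mathrm{\Delta }_0(\delta ,T)=cT^{*}(\delta )g(T/T^{*}(\delta ))`$ with the hypothetical smooth interpolation $`g(x)=1-x^2`$ (the text only constrains $`g(0)=1`$ and $`g(1)=0`$); the values of $`c`$ and $`T^{*}`$ are illustrative.

```python
# Preformed pseudogap Delta_0 = c * T_star * g(T / T_star), with g(0) = 1 and
# g(1) = 0 as in the text; g(x) = 1 - x**2 and the parameters are hypothetical.
def delta0(T, T_star=300.0, c=2.5):   # temperatures in kelvin
    x = min(T / T_star, 1.0)
    return c * T_star * (1.0 - x * x)

for T in (0.0, 100.0, 200.0, 300.0):
    print(f"T = {T:5.1f} K  ->  Delta_0 = {delta0(T):6.1f} K")
# Delta_0 closes smoothly as T -> T_star, where the pseudogap crossover occurs.
```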
The quasiparticle spectra are characterized by a $`LE`$, i.e. a finite minimum distance of the quasiparticle peak from the Fermi level, which persists in the normal state and is largest at the M points, where $`LE\approx \mathrm{\Delta }_0-|\mu |`$. In the underdoped SC regime, the $`LE`$ is controlled by two parameters, as seen in experiments. The M points are dominated by the normal-state pseudogap, whereas the nodal regions are controlled by $`\mathrm{\Delta }_s`$, which scales as $`T_c`$. In the overdoped regime $`LE\approx \mathrm{\Delta }_s`$.
The above preformed gap accounts for most of the non-mean-field effects by the input of a normal-state pseudogap $`\mathrm{\Delta }_0(\delta ,T)`$. In particular, the model yields a phase diagram in good qualitative agreement with the experiments.
On the other hand, the bifurcation between $`T^{*}`$ and $`T_c`$ near optimum doping is also an outcome of the two-gap model. Clearly, the two models assign a different relevance to the effect of the strong QCP effective interaction in the p-p and p-h channels, and select just one of the two channels as the most affected one. It is quite plausible that the stripe fluctuations will indeed produce non-Fermi-liquid- and non-mean-field-like effects in both channels. However, whether the results discussed above within each of the two models should cooperate to produce a better quantitative description of the cuprates, is still an open problem under investigation.
# On the Interpretation of the Optical Spectra of L-type Dwarfs

Based on observations made with the William Herschel Telescope (WHT) operated on the island of La Palma by the Isaac Newton Group at the Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias; and on observations obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. This observatory was made possible by the generous financial support of the W. M. Keck Foundation.
## 1 Introduction
The optical spectra of the recently discovered very cool dwarfs present new challenges to theoretical interpretation. Their spectral characteristics are drastically different from those of the well known M-dwarfs and this has prompted the use of a new spectral classification, the so-called L-dwarfs (Martín et al. 1997a, 1999; Kirkpatrick et al. 1999a). In principle, the study of the optical spectral energy distribution may allow a better understanding of their physical properties, effective temperatures, gravities, atmospheric composition, etc. The main molecular absorbers at optical wavelengths in early- to mid-M dwarfs are significantly depleted at the lower temperatures present in L dwarfs because of the incorporation of their atoms into dust grains. This process should start in the latest M-dwarf atmospheres (Lunine et al. 1989; Tsuji, Ohnaka & Aoki 1996a; Tsuji et al. 1996b; Jones & Tsuji 1997; Allard et al. 1997), considerably reducing the strength of TiO, VO and other molecular bands, and producing significant changes in the overall properties of the optical spectra. Naturally, the appearance of dust will modify the temperature structure of the atmosphere, significantly affecting the formation of the emerging spectrum (Allard et al. 1997).
In this paper we follow a semi-empirical approach to understand the relevance of different processes on the resulting spectral energy distributions in the optical for L dwarfs and Gl 229B. We have obtained far-red optical spectra of several of these cool dwarfs and compared them to synthetic spectra generated using the latest models by Tsuji (2000) and Allard (1999). These models have been successfully used in the interpretation of the near-infrared spectra of these objects (Tsuji, Ohnaka & Aoki 1999; Kirkpatrick et al. 1999b). We have taken into account the depletion of some relevant molecules associated with the formation of dust and we have investigated the effects of dust scattering and/or absorption on the formation of the optical spectra. Remarkably strong alkali lines are present in the spectra and dominate their shape in the 600–900 nm region, providing major constraints to the theoretical modelling. We present the observations in section 2, models and synthetic spectra in section 3, and the role of alkalis, molecular bands and dust scattering and/or absorption is considered in section 4. In section 5 we discuss the implications of our study on the determination of effective temperatures and gravities as well as the formation of the lithium resonance line which is a key discriminator of substellar nature in these objects. Finally, conclusions are presented in Section 6.
## 2 Observations and data reduction
We have collected intermediate-resolution spectra in the 640–930 nm range for Kelu 1 (Ruiz, Leggett & Allard ruiz97 (1997)), Denis-P J1228–1547 and Denis-P J0205–1159 (Delfosse et al. delfosse97 (1997)) using the 4.2 m William Herschel Telescope (WHT; Observatorio del Roque de los Muchachos, La Palma). The coolest L dwarf in our sample, Denis-P J0205–1159, has also been observed with the KeckII telescope (Mauna Kea Observatory, Hawaii). Table 1 summarizes the log of the observations. The instrumentation used was the ISIS double-arm spectrograph at the WHT (only the red arm) with the grating R158R and a TEK (1024$`\times `$1024 pix<sup>2</sup>) CCD detector, and the Low Resolution Imaging Spectrograph (LRIS, Oke et al. oke95 (1995)) with the 600 groove mm<sup>-1</sup> grating and the TEK (1024$`\times `$1024 pix<sup>2</sup>) CCD detector at the KeckII telescope. The nominal dispersion and the wavelength coverage provided by each instrumental setup were similar and are listed in Table 1. Slit projections were typically 2–3 pix, giving spectral resolutions of 6–8 Å. Spectra were reduced by a standard procedure using IRAF<sup>1</sup><sup>1</sup>1IRAF is distributed by National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation., which included debiasing, flat-fielding, optimal extraction, and wavelength calibration using the sky lines appearing in each individual spectrum (Osterbrock et al. osterbrock96 (1996)). Finally, the observed WHT spectra were corrected for the instrumental response making use of the spectrophotometric standard stars BD +26 2606 and HD 19445, which have absolute flux data available in the IRAF environment. No flux standards were observed at the KeckII telescope, and thus the instrumental signature of the KeckII spectrum of Denis-P J0205–1159 was removed by matching it with the WHT spectrum. Our observations are displayed in Fig. 1 together with spectra of the other two objects in our sample: BRI 0021–0214 (M9.5, Irwin, McMahon, & Reid irwin91 (1991)), and Gl 229B (a brown dwarf with methane absorptions, Nakajima et al. nakajima95 (1995); Oppenheimer et al. oppenheimer95 (1995)). Our sample covers a wide range of spectral types, from the transition objects between M and L types down to the latest types.
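For reference, the matching of the KeckII spectrum to the WHT spectrum essentially fits a smooth sensitivity curve to the ratio of the two spectra on a common wavelength grid. The sketch below is only illustrative: the function, the array names and the polynomial order are our own assumptions, not a description of the actual reduction.

```python
import numpy as np

def match_response(wave, flux_uncal, flux_cal, order=5):
    """Remove the instrumental signature of an uncalibrated spectrum by
    matching it to a flux-calibrated spectrum of the same object on a
    common wavelength grid: fit a smooth curve to the flux ratio and
    multiply it back in."""
    ratio = flux_cal / flux_uncal
    good = np.isfinite(ratio) & (ratio > 0)
    # fitting log(ratio) with a low-order polynomial keeps the
    # sensitivity correction smooth and strictly positive
    coeff = np.polyfit(wave[good], np.log(ratio[good]), order)
    sensitivity = np.exp(np.polyval(coeff, wave))
    return flux_uncal * sensitivity
```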
## 3 Model atmospheres, spectral synthesis code, chemical equilibrium and opacities
We carried out the computations using the LTE spectral synthesis program WITA5, which is a modified version of the program used by Pavlenko et al. (pav95 (1995)) for the study of the formation of lithium lines in cool dwarfs. The modifications were aimed at incorporating “dusty effects” which can affect the chemical equilibrium and radiative transfer processes in very cool atmospheres. We have used the set of Tsuji’s (tsuji00 (2000)) “dusty” (C-type) LTE model atmospheres. These models were computed for the case of segregation of dust–gas phases, i.e. for conditions $`r_{dust}>r_{crit}`$, where $`r_{dust}`$ is the size of the dust particles and $`r_{crit}`$ is the critical size corresponding to the gas–dust detailed equilibrium state. In a previous study (Pavlenko, Zapatero Osorio & Rebolo pav+oso+reb99 (2000)) we had used Tsuji’s (tsuji00 (2000)) B-type models, which are computed for the case of $`r_{dust}=r_{crit}`$. In this paper we have also used a grid of the NextGen “dusty” model atmospheres computed recently by Allard (allard99 (1999)). The temperature-pressure stratification of Allard’s models lies between those of the C-type and B-type models of Tsuji (tsuji00 (2000)), as can be seen in Fig. 2.
Chemical equilibrium was computed for a mix of $``$100 molecular species. The formation of alkali-bearing species was considered in detail because of the important role of the neutral alkali atoms in the formation of the spectra. Constants for the chemical equilibrium computations were taken mainly from Tsuji (tsuji73 (1973)).
In the high pressure conditions of the atmospheres of L-dwarfs some molecules can be oversaturated (Tsuji et al. 1996a ); in this case such molecules should undergo condensation. To take this effect into account, we reduced the abundances of those molecular species down to the equilibrium values (Pavlenko pav98 (1998)). The constants for the computation of saturation densities were taken from Gurwitz et al. (gurvitz82 (1982)).
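Operationally, this reduction is a clamp of each number density to its saturation value at every depth point. A minimal sketch, assuming a schematic two-coefficient form for the saturation law (the actual constants are those tabulated by Gurwitz et al.):

```python
import numpy as np

def clamp_to_saturation(n_mol, T, log_a, log_b):
    """Reduce oversaturated molecular number densities down to the
    saturation (equilibrium) density at each depth point.
    n_mol        : number densities over the depth grid [cm^-3]
    T            : temperatures over the depth grid [K]
    log_a, log_b : coefficients of a schematic saturation law,
                   log10 n_sat = log_a - log_b / T  (placeholder form)
    """
    n_sat = 10.0 ** (log_a - log_b / T)
    return np.minimum(n_mol, n_sat)
```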
We used the set of continuum opacity sources listed in Table 2, where we also give the original sources for the opacity computation codes. That opacity grid allows us to obtain reasonable fits to a variety of stars (see e.g. Martín et al. 1997b ; Israelian, García López & Rebolo israelian98 (1998); Yakovina & Pavlenko yakovina98 (1998)). Opacities due to molecular band absorption were treated using the Just Overlapping Line Approximation (JOLA). Synthetic spectra of late M-dwarfs using both the continuum opacities listed in Table 2 and the JOLA treatment for molecular band absorptions have already been discussed in Pavlenko (pav97 (1997)).
The alkali line data were taken from the VALD database (Piskunov et al. piskunov95 (1995), see their Table 3). At the low temperatures of our objects we deal with saturated absorption lines of alkalis. Their profiles are pressure broadened. At every depth in the model atmosphere the profile of the absorption lines is described by a Voigt function $`H(a,v)`$, where the damping constants $`a`$ were computed as in Pavlenko et al. (pav95 (1995)).
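The Voigt function itself is conveniently evaluated through the Faddeeva (complex error) function, $`H(a,v)=\mathrm{Re}\,w(v+ia)`$; a minimal numerical sketch:

```python
import numpy as np
from scipy.special import wofz

def voigt_H(a, v):
    """Voigt function H(a, v) = Re[w(v + i a)], with a the damping
    parameter and v the frequency offset in units of the Doppler
    width."""
    return wofz(v + 1j * a).real

v = np.linspace(-50.0, 50.0, 1001)
profile_top = voigt_H(1e-3, v)    # nearly pure Doppler core, top layers
profile_deep = voigt_H(3.0, v)    # strongly pressure broadened, deep layers
```

The two example values of $`a`$ bracket the range quoted in section 4.2 for the atmospheres considered here.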
### 3.1 Molecular opacities
Detailed molecular opacities have been computed for all the molecules listed in Table 2 of Pavlenko et al. (pav95 (1995)) as well as for CrH and CaH. To compute the opacity due to absorption in the VO and TiO bands we followed the scheme presented in Pavlenko et al. (pav95 (1995)) and Pavlenko (pav97 (1997)). However, for the $`B^4\mathrm{\Pi }_{(r)}`$–$`X^4\mathrm{\Sigma }^{-}`$ band system of VO we used a more complete matrix of Franck-Condon factors computed by the FRANK program (Cymbal cymbal77 (1977)), taking into account the rotational-vibrational interaction in the Morse-Pekeris approximation modified by Schumaker (schumaker69 (1969), see Pavlenko 1999b for more details). Furthermore, for this band system we used the oscillator strength $`f_e`$ from Allard & Hauschildt (allard95 (1995)). For the $`ϵ`$ band system of TiO the parameters of Schwenke (schwenke98 (1998)) were used.
The CrH molecular band opacity was also considered, the data required for the computations of the electronic transition $`A^6\mathrm{\Sigma }^{(+)}`$–$`X^6\mathrm{\Sigma }^{(+)}`$ being taken from Huber & Herzberg (huber79 (1979)). Franck-Condon factors were computed by the FRANK program (Cymbal cymbal77 (1977)). For this band system we used $`f_e`$ = 0.001, determined by Pavlenko (1999b ). Molecular bands of the $`A^6\mathrm{\Sigma }^{(+)}`$–$`X^6\mathrm{\Sigma }^{(+)}`$ system of CrH lie in the wide wavelength region 500–1200 nm, and a head of the strong (0,1) band lies near the core of the K i atomic line. A head of the (0,0) band of the $`A^6\mathrm{\Sigma }^{(+)}`$–$`X^6\mathrm{\Sigma }^{(+)}`$ system of CrH at 860 nm is blended with the heads of the (1,0), (3,2), (2,1) bands of the $`B^4\mathrm{\Pi }_{(r)}`$–$`X^4\mathrm{\Sigma }^{-}`$ system of VO. For the spectral region 650–710 nm we took into account the absorption due to the $`B^2\mathrm{\Sigma }`$–$`X^2\mathrm{\Sigma }`$ band system of CaH, for which we adopted $`f_e`$ = 0.05. Franck-Condon factors were computed with the FRANK program. Although FeH is an important absorber at wavelengths around 870 nm and 990 nm for the coolest dwarfs, it is not yet included in our calculations due to the lack of appropriate laboratory data. In the wavelength region presented in this paper, the blue band of FeH is blended with CrH and VO.
## 4 Analysis and interpretation
### 4.1 Atomic features
Our spectral synthesis for model atmospheres in the temperature range 2200–1000 K confirms the relevant role of atomic features due to the Na i and K i resonance doublets in the spectral range 640–930 nm. The strength of these doublets increases dramatically with decreasing effective temperature ($`T_{\mathrm{eff}}`$). In Fig. 3 we display synthetic spectra for different $`T_{\mathrm{eff}}`$ and gravity values. These computations do not include any molecular or grain opacity in order to show how the alkali absorptions change with these parameters. In addition, we can see in the figure how the overall shape of the coolest L-dwarf spectra is governed by the resonance absorptions of Na i and K i. Even the less abundant alkalis (i.e. Li, Rb, Cs) produce lines of remarkable strength. Note the increase in intensity of the K i and Na i lines with decreasing $`T_{\mathrm{eff}}`$ and with increasing atmospheric gravity. Although this different behaviour can, in principle, make it difficult to disentangle these parameters from the optical spectra, simple physical considerations give an upper limit to gravity of log $`g`$ = 5.5 for brown dwarfs with lithium, and therefore the large broadening of the K i lines seen in some of our objects cannot be attributed to unphysically high values of gravity.
Our computations provide a qualitative explanation of the far-red spectral energy distributions presented in Fig. 1. The equivalent widths (EWs) of the K i and Na i resonance doublets may reach several thousand Angstroms, becoming the strongest lines so far seen in the spectra of astronomical objects. This is mainly caused by the high pressure broadening in the atmospheres of the coolest dwarfs, where the damping constants of the absorption lines vary from 0.001 in the uppermost layer of the atmosphere to 2–4 in the deepest regions. The subordinate lines of Na i at 819.5 nm, clearly seen in all the L-dwarfs in our sample, become weaker as $`T_{\mathrm{eff}}`$ decreases. These computations also show that the subordinate Li i line at 812.6 nm may be detected in early/mid L-dwarfs with equivalent widths not exceeding 1 Å, and that the triplet at 610.3 nm appears completely embedded in the wings of the K i and Na i resonance lines.
### 4.2 Molecular features
In the optical spectra of very cool dwarfs one expects the presence of bands of VO and TiO, and indeed our synthetic spectra show that these bands play an important role in the cases of BRI 0021–0214 and Kelu 1. In Fig. 4 (upper panel) we compare the observed spectrum of Kelu 1 with a synthetic spectrum obtained using the Tsuji C-type model for $`T_{\mathrm{eff}}`$ = 2000 K and log $`g`$ = 5. The natural depletion of molecules resulting from the chemical equilibrium is not sufficient to get a reasonable fit to the data. The discrepancies with respect to the observed spectrum can be notably reduced (Fig. 4, lower panel) if we impose a depletion of the CaH, CrH, TiO and VO molecules which should account for the condensation of Ca, Cr, Ti and V atoms into dust grains. Since we do not have an appropriate theoretical description of the processes of dust formation at present, we use the simple approach of modifying the chemical equilibrium for all molecules. We implemented this “extra” depletion simply by introducing a factor $`R`$ which describes the reduction of the molecular densities of the relevant species over the whole atmosphere. From the comparison between observed and computed spectra we find that the reduction factor for TiO ranges from 0 (complete depletion) to 1 (i.e. no depletion). In Fig. 5 we can see the comparison of similar synthetic spectra (Tsuji’s C-type model atmosphere for $`T_{\mathrm{eff}}`$ = 1600 K and log $`g`$ = 5) with the spectrum of DenisP J1228–1547. Total or almost total depletion of Ti and V into the dust grains is required to explain the spectra of the mid-type L-dwarfs DenisP J1228–1547 and DenisP J0205–1159.
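The fit for the reduction factor can be pictured as a one-dimensional grid search; in the sketch below `synth_spectrum` stands in for the WITA5 calculation (which is not publicly available), and the grid spacing is an arbitrary choice.

```python
import numpy as np

def best_depletion(wave, obs_flux, synth_spectrum, species="TiO"):
    """Grid search for the depletion factor R in [0, 1] (0 = complete
    depletion, 1 = no depletion) minimising chi^2 between observed and
    synthetic spectra.  synth_spectrum(wave, R, species) is assumed to
    rescale the densities of the given molecule by R over the whole
    atmosphere and return the emergent flux."""
    grid = np.linspace(0.0, 1.0, 21)
    chi2 = [np.sum((obs_flux - synth_spectrum(wave, R, species)) ** 2)
            for R in grid]
    return grid[int(np.argmin(chi2))]
```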
### 4.3 The need for additional opacity (AdO)
Our first attempts to model the spectra including atomic and molecular features showed in Figs. 4 and 5 were only modestly successful. Although we could reproduce reasonably well the red wing of the K i line, the theoretical fluxes in the blue part of our synthetic spectra (640–750 nm) are too large. In fact, we cannot fit the observations by taking into account only the opacity provided by the Na i and K i resonance doublets and the continuum opacity sources listed in Table 2.
Since the formation of dust in these cool atmospheres can produce additional opacity (AdO) which may affect the synthetic spectra, we decided to investigate whether a simple description of it could help us to improve the comparison between observed and computed spectra. For the AdO we adopted the law $`a_\nu =a_0(\nu /\nu _0)^N`$. For $`N`$ = 0 to 4 this law corresponds to the case of radiation scattering produced by particles of different sizes, with $`N`$ = 4 being the case of pure Rayleigh scattering and $`N`$ = 0 corresponding to the case of white scattering. However, at present we cannot distinguish whether this AdO is due to absorption or scattering processes. The parameters $`N`$ and $`a_0`$ would be determined from the comparison with observations, but in all cases we try to get the best fit for $`N`$ = 4, which is the simplest case from the physical point of view. We adopted as $`\nu _0`$ the frequency of the K i resonance line at 769.9 nm. Our model of AdO is depth independent and therefore in this approach we cannot model the inhomogeneities (e.g. dust clouds) which may exist in L-dwarf atmospheres.
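Because the AdO is depth independent, its implementation reduces to adding one power-law term to the continuum opacity at each frequency; a minimal sketch (the value of $`a_0`$ in the example is arbitrary):

```python
import numpy as np

C_NM = 2.99792458e17           # speed of light [nm/s]
NU_0 = C_NM / 769.9            # frequency of the K i 769.9 nm line [Hz]

def additional_opacity(nu, a0, N=4):
    """AdO law a_nu = a0 (nu/nu_0)^N, added to the continuum opacity at
    every depth point; N = 4 mimics Rayleigh scattering, N = 0 white
    scattering (absorption and scattering are indistinguishable here)."""
    return a0 * (nu / NU_0) ** N

# extra opacity at the Na i resonance doublet relative to that at K i
nu_na = C_NM / 589.0
print(additional_opacity(nu_na, a0=0.05) / 0.05)   # = (769.9/589.0)^4
```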
We have investigated how our simple approach to the modelling of the AdO may lead to better comparisons between predicted and observed spectra. The implementation of our law of AdO depresses the fluxes in the blue wing of the K i doublet, considerably improving the reproduction of the observed data of Kelu 1 (Fig. 6), DenisP J1228–1547 (Fig. 7, upper panel) and DenisP J0205–1159 (Fig. 7, lower panel). For each object, synthetic spectra were computed for a range of $`T_{\mathrm{eff}}`$, gravities, $`a_0`$ and depletion factors $`R`$ for TiO, VO, CaH and CrH. The previous figures display those syntheses which best reproduce the observed spectra. From our computations we infer that VO is less efficiently depleted than TiO at a given $`T_{\mathrm{eff}}`$. The depletion of these oxides appears to increase very rapidly as we go from BRI 0021–0214 and Kelu 1 to the lower temperature objects DenisP J1228–1547 and DenisP J0205–1159. Note the good fit to the observed spectra at 860 nm provided by the (0,0) band of the $`A^6\mathrm{\Sigma }^{(+)}`$–$`X^6\mathrm{\Sigma }^{(+)}`$ system of CrH. From the study of this band we also find that the depletion factor of CrH increases from the warmer to the coolest objects in the sample. Finally, we also find that the atomic lines become weaker and narrower when increasing the amount of AdO in the atmospheres.
We have also studied whether we can explain the optical spectrum of Gl 229B using the following model of Tsuji: C-type, $`T_{\mathrm{eff}}`$ = 1000 K, and log $`g`$ = 5.0, and the AdO law of index $`N`$ = 4 used above. In Fig. 8 we show several spectral syntheses reproducing the optical spectrum of this object, and showing the effect of different values of the $`a_0`$ parameter, which is related to the amount of dust in the atmosphere. In Gl 229B we need the highest value of $`a_0`$, which is interpreted as evidence for the most “dusty” atmosphere in our sample. We have not attempted to reproduce the Cs i lines in Gl 229B. According to our hypothesis, the inclusion of AdO suppresses the contribution of high pressure regions to the formation of these lines. The Cs i line at 894.3 nm is very affected by dust opacity.
## 5 Discussion
### 5.1 $`T_{\mathrm{eff}}`$ for L-type dwarfs
Our computations provide a reasonable description of the far-red optical spectra of L-dwarfs and provide a physical basis for a progressively decreasing $`T_{\mathrm{eff}}`$ along the proposed spectral classifications (Martín et al. martin99 (1999); Kirkpatrick et al. 1999a ). Effective temperatures for a few field L-dwarfs and for Gl 229B have been derived from spectra at IR wavelengths (Allard et al. allard96 (1996); Marley et al. marley96 (1996); Matthews et al. matthews96 (1996); Tsuji et al. tsuji99 (1999); Jones et al. jones96 (1996); Kirkpatrick et al. 1999b ). Here we study to what extent we can use the broad spectral energy distribution in the optical to infer the $`T_{\mathrm{eff}}`$ of the cool dwarfs in our sample. Our best estimates using Tsuji’s models (see Table 4) are in good agreement with those found from IR data for objects of similar spectral types and for Gl 229B. Using Allard’s models and the simple approach of AdO described in section 4.3 we are also able to reproduce the observed spectra, although we require $`T_{\mathrm{eff}}`$ values lower by up to several hundred degrees for the coolest L-dwarfs. This is mainly due to the hotter stratification of Allard’s model atmospheres in the layers where the potassium and sodium lines form (see Fig. 2). In contrast to IR-based temperature determinations, the estimate from optical spectra is very sensitive to the input physics, like dust formation, molecular equilibrium, sources of opacity and atmospheric models, which limits the accuracy of the estimates to $``$200 K. On the other hand, the optical spectra provide an opportunity to test the reliability of the physical description of the atmospheres.
Cs i lines in the optical spectra have recently been used to infer the $`T_{\mathrm{eff}}`$ of some L-dwarfs (Basri et al. basri99 (1999)). In spite of the sensitivity of these lines to many input parameters in the models (i.e. to the amount of opacity, chemical equilibrium, etc.) we find in general a good agreement with their estimated temperatures for the earlier L-dwarfs (see Table 4). However, for the latest L-type objects in our sample, DenisP J1228–1547 and DenisP J0205–1159, we find temperatures up to 400 K lower. We may attribute this discrepancy to the effects that dust opacity has on the formation of the optical lines of alkalis. This produces a kind of “veiling” of the atomic lines, reducing their intensities. Basri et al. (basri99 (1999)) did not consider this effect, and therefore they required hotter models in order to explain the strength of the observed Cs lines. Another possible reason for the discrepancy is that the temperatures in Table 4 have been obtained for gravity log $`g`$ = 5; if we increased gravity by 0.5 dex we would have to increase the temperatures of our models by about 200 K. In this case the spectral syntheses do not reproduce the observations so well, but they are still acceptable.
### 5.2 Alkali lines: the case of lithium
Recently, several new Gl 229B-like objects have been discovered by the SDSS survey (Strauss et al. strauss99 (1999)) and the 2MASS collaboration (Burgasser et al. burgasser99 (1999)). Based on their near-IR spectra, these authors suggest that these new cool brown dwarfs may be warmer than Gl 229B. In order to estimate the optical properties of these objects we have computed synthetic spectra using a Tsuji C-type model of $`T_{\mathrm{eff}}`$ = 1200 K and log $`g`$ = 5.0, and we have adopted the basic prescriptions that were followed in section 4, i.e. total depletion of VO and TiO and the AdO law with $`N`$ = 4. In Fig. 9 we plot the resulting spectra considering different amounts of dust opacity. As expected, the overall shape of the spectrum is intermediate between that of DenisP J0205–1159 and Gl 229B. In the absence of dust absorption ($`a_0`$ = 0.0) the alkali lines are clearly seen (including the lithium resonance doublet), and the spectrum is governed by the sodium and potassium lines. If we consider dust opacities comparable to those in Gl 229B, the alkali lines of Cs and Rb become weaker, but are still detectable with intermediate resolution spectroscopy. Remarkably, the subordinate Na i doublet at 819.5 nm is very sensitive to the incorporation of dust opacity due to the larger depths of its formation as compared with the resonance lines. For very high dust opacities these lines may become undetectable.
The effects of additional dust opacity on the formation of the Li i lines (resonance and subordinate ones) also deserve detailed consideration since they play a major role as a discriminator of substellar nature for brown dwarf candidates (see Rebolo, Martín, & Magazzù rebolo92 (1992); Magazzù, Martín, & Rebolo magazzu93 (1993)). Most of the known brown dwarfs are actually recognized by the detection of the Li i resonance doublet in their spectra (Rebolo et al. rebolo96 (1996); Martín et al. 1997a ; Rebolo et al. rebolo98 (1998); Tinney tinney98 (1998); Kirkpatrick et al. 1999a ). The chemical equilibrium of lithium-bearing molecules has been considered in all our syntheses (Fig. 10 depicts the density profiles for lithium species for two C-type models by Tsuji tsuji00 (2000)). Our computations show that both the resonance line at 670.8 nm and the subordinate lines at 601.3 nm and 812.6 nm are very sensitive to the AdO that we need to incorporate in the spectral synthesis if we want to explain the observed broad spectral energy distribution. Among the subordinate lines the doublet at 812.6 nm is more easily detectable, but the predicted EWs, assuming fully preserved lithium, are rather small, ranging from EW = 0.4Å to 0.04Å for $`T_{\mathrm{eff}}`$ values in the range 2000 K down to 1200 K. These EWs are considerably reduced by the inclusion of the AdO described in the previous section, which makes their detection rather difficult. In Table 5 we give the predicted EWs of the Li i resonance doublet at 670.8 nm for several of the coolest model atmospheres (2000–1000 K) considered in this work. First, we note that in the absence of any AdO (second column in the table), we would expect rather strong neutral Li resonance lines in the spectra of objects as cool as DenisP J0205–1159 and Gl 229B. The chemical equilibrium of Li-bearing species still allows a sufficient number of Li atoms to produce a rather strong resonance feature; one reason for this is that Cl and O atoms are also bound into other molecules (e.g. NaCl, KCl, H<sub>2</sub>O, etc.). Our computations indicate that objects like DenisP J0205–1159 and cooler objects with moderate dust opacities should show the Li i resonance doublet if they had preserved this element from nuclear burning, and consequently the lithium test can still be applied. Furthermore, even in very dusty cool atmospheres like that of Gl 229B for which we have inferred a high value of the opacity parameter of $`a_0`$ = 0.1 (fourth column in Table 5), the lithium resonance line could be detected with an EW of several hundred mÅ (high S/N data would be required).
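The EWs quoted here and in Table 5 follow from the usual integral of the line depth over the continuum-normalised profile; for completeness, a minimal sketch with a toy Gaussian line:

```python
import numpy as np

def equivalent_width(wave, flux, continuum):
    """EW = integral of (1 - F_lambda/F_cont) over wavelength, returned
    in the units of the wavelength axis (here Angstroms)."""
    return np.trapz(1.0 - flux / continuum, wave)

# toy Li i 6708 A line: Gaussian of central depth 0.6 and sigma = 2 A
wave = np.linspace(6658.0, 6758.0, 2001)
flux = 1.0 - 0.6 * np.exp(-0.5 * ((wave - 6708.0) / 2.0) ** 2)
print(equivalent_width(wave, flux, np.ones_like(wave)))   # ~3 A
```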
Another effect that we shall consider is whether small changes in the AdO (which could originate as a consequence of some “meteorological” phenomena occurring in these cool atmospheres) can lead to detectable variations in the EWs of the lithium lines. In particular, weak lithium lines do not necessarily imply a depletion of this element. The observed Li i variability in Kelu 1 (with changes in EW by a factor 5) could be an indication of meteorological changes in the atmosphere of this rapidly rotating cool object. In Fig. 11 we present several spectral syntheses showing the sensitivity of the lithium line to the AdO in the atmosphere. The AdO parameters which give the best fit for the lithium line in Kelu 1 (EW = 6.5$`\pm `$1.0 Å) coincide with those also providing the best fit to the whole optical spectrum (see Fig. 6). In any case, the derived lithium abundance is consistent with complete preservation of this element.
## 6 Conclusions
In this paper we have attempted to model the far-red spectra (640–930 nm) of several L-dwarfs suitably selected to cover this new spectral class. We have used model atmospheres from Tsuji (tsuji00 (2000)) and Allard (allard99 (1999)), as well as an LTE spectral synthesis code (Pavlenko et al. pav95 (1995)) which takes into account chemical equilibrium for more than 100 molecular species, and detailed opacities for the most relevant bands. We have arrived at the following conclusions:
1) Alkali lines play a major role governing the far-red spectra of L-dwarfs. At early types, this role is shared with the TiO and VO bands, which dominate this spectral region in late M-dwarfs. As we move to later spectral types we need to incorporate progressively higher depletions of these oxides and of the hydrides CrH and CaH, consistent with the expectation that Ti and V atoms are depleted into grains; and we also require additional opacity to reproduce the overall shape of the spectra. This additional opacity could be either due to molecular/dust absorption or to dust scattering.
2) We have shown that a simple law for this additional opacity of the form $`a_0(\nu /\nu _0)^N`$, with $`N`$ = 4, gives a sufficiently good fit to the observed spectra of L-dwarfs and Gl 229B. For this last object we require the highest value of $`a_0`$, consistent with a very dusty atmosphere. The strength of the alkali lines is highly affected by this opacity.
3) From the best fits to our spectra, we derive the most likely $`T_{\mathrm{eff}}`$ values for our sample of L-dwarfs. For the warmer objects, our values are consistent with those obtained by other authors; however, we find $`T_{\mathrm{eff}}`$ values lower by several hundred degrees for the coolest L-dwarfs. Because the optical spectra are very much affected by the input physics, a more reliable $`T_{\mathrm{eff}}`$ scale should be obtained by fitting the IR data of these cool objects.
4) After detailed consideration of chemical equilibrium, we find that the lithium resonance doublet at 670.8 nm can be detected over the whole temperature range considered (down to 1000 K). In the coolest L-dwarfs the strength of the resonance line is more affected by the amount of additional opacity needed to explain the spectra than by the depletion of neutral lithium atoms into molecular species. In those atmospheres where the additional opacity required is low, the lithium test can provide a useful discrimination of substellar nature. Changes in the physical conditions governing dust formation in L-dwarfs will cause variability of the lithium resonance doublet. Taking into account the need for additional opacity in Kelu 1, we find that the lithium abundance can be as high as log $`N`$(Li) = 3.0, i.e. consistent with complete preservation.
###### Acknowledgements.
We thank T. Tsuji, F. Allard, D. Schwenke, G. Schultz and B. Oppenheimer for providing us with model atmospheres, updated TiO molecular data and the optical spectrum of Gl 229B, respectively. We are also indebted to R. García López, Gibor Basri and Eduardo L. Martín for their assistance with the observations. Partial financial support was provided by the Spanish DGES project no. PB95-1132-C02-01.
# Modeling Galaxy Lenses
## 1 Introduction
For a long while, (eg Refsdal 1964), gravitational lenses have promised unique and compelling cosmographical measurements. Despite considerable observational progress and a developing theoretical sophistication, the lens community has not yet delivered on this promise. The largest obstacle to further progress is the modeling of the lenses. Two novel approaches to improving our understanding of lens models are now described.
## 2 B1608+656
The well-studied quad B1608+656 has four variable radio components arranged around an Einstein ring and labeled A, B, C, D (Fig. 1a). The scalar magnifications relative to B of A, C, D, at the same emission time, are 2, 1, 0.35 and the associated delays are 26, 33, 73 d, respectively. The lens and source redshifts are known to be 0.63 and 1.394, respectively (Fassnacht, these proceedings and references therein). The lens comprises two interacting galaxies G1, G2. Models have been presented in Myers et al (1995), Blandford & Kundić (1997), Koopmans & Fassnacht (1999) and Fassnacht (these proceedings).
The conventional approach to modeling galaxy lenses is to adopt a small library of potentials or mass distributions and adopt parameters that provide the best fit to the observed image properties by minimizing a suitably defined $`\chi ^2`$. As more data has been acquired, more parameters have become necessary and the accuracy of the derived value of the Hubble constant has deteriorated (eg Barkana et al 1999). The fundamental problem is that when the image data is limited to a few isolated points there is no unique interpolation between them. This can be demonstrated for B1608+656 by exhibiting two different mass models that fit the four radio image positions and magnifications with reasonable accuracy as well as the ratios of the reported time delays and yet which yield Hubble constants of $`60,100`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, respectively, (Surpi & Blandford, these proceedings)
This degeneracy may be broken when there is extended emission from an Einstein ring as it is then possible to match points with similar surface brightness. This approach has already been attempted at radio wavelengths, where it is convenient to work in Fourier space (eg Wallington, Kochanek & Narayan 1996 and references therein). However, the method has been limited to fitting simple and arbitrary models of the mass distribution. We now discuss a somewhat different approach in which an attempt is made to solve directly for the surface potential from the brightness distribution and which is specialized to address the peculiar difficulties posed by optical data. A quite different method with a similar goal has been presented here by Sahu (and references therein).
### 2.1 Intensity Reconstruction
We use the $`V`$ and $`I`$ band images from Schechter et al (in preparation) and the $`H`$ band image from Fassnacht et al (in preparation). These have effective wavelengths of 372, 499, 982 nm in the lens frame, respectively. In order to form a faithful image of the multiply-imaged source, we must deconvolve, de-contaminate and de-redden the observed image. We do this by convolving the $`V`$ image with the $`I`$ PSF and vice versa. We then use these images to derive color maps of $`V/I`$ and $`I/H`$. Next we use the observed radio magnifications and take the brightest $`2N,N,N,0.35N`$ (with $`N=20`$) pixels from images $`A,B,C,D`$ respectively and plot them on a two color diagram (Fig. 2a). (The pixel numbers are in proportion to the radio magnifications, which we suppose to be unaffected by “milli-lensing”; eg Koopmans, these proceedings.)
We observe that the pixels around $`A`$, $`B`$ have similar colors and are presumably subject to little reddening, whereas those from $`C`$, $`D`$ have different colors that are displaced by a vector similar to that associated with Galactic dust (after correcting for the lens redshift) with $`A_V=0.4,0.5`$ respectively. (Note that we do not assume that the Galactic reddening law operates but can draw this conclusion from the data.) The presence of a constant reddening applied to the whole image will not affect our lens model; it will affect the photometric properties of the source and lens galaxies.
Next we take the brightest points around the two lens galaxy nuclei, $`G1`$ and $`G2`$, and plot these on a two color diagram; we find that they lie along two lines also parallel to the Galactic reddening line, suggesting strongly that there are extinction gradients across the two lens galaxy nuclei. By inspection, we deduce that most of the reddening is due to $`G2`$, which appears to lie in front of $`G1`$. If we assume that {A, B, C, D}, G1 and G2 have three separate but uniform intrinsic colors, then it is possible to solve for the reddening over most of the image. The result of de-reddening the images A, B, C, D is shown in Fig 2b. Note that their surface brightnesses are now all similar, within the errors, as required.
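With three fixed intrinsic colors, the solution for the reddening is a per-pixel projection of the color offset onto the (lens-frame) reddening vector. The sketch below illustrates this; the extinction coefficients are placeholders, not the values actually adopted.

```python
import numpy as np

# assumed A(band)/A_V ratios at the lens-frame effective wavelengths
# of the V, I and H images -- placeholder values only
K_V, K_I, K_H = 1.6, 1.1, 0.3

def solve_av(VI, IH, VI0, IH0):
    """Least-squares A_V per pixel from the displacement of the
    observed colors (VI, IH, in mag) from the assumed intrinsic
    colors (VI0, IH0) along the reddening direction."""
    rv = np.array([K_V - K_I, K_I - K_H])         # reddening vector
    d = np.stack([VI - VI0, IH - IH0], axis=-1)   # color offsets
    return d @ rv / (rv @ rv)                     # A_V map [mag]

def deredden(flux_V, A_V):
    """Undo the V-band extinction implied by the A_V map."""
    return flux_V * 10.0 ** (0.4 * K_V * A_V)
```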
After we de-redden the lens galaxies, we observe that the surface brightness of G1, measured at points that are well-removed from G2 and the Einstein ring, has a distribution with radius that matches the de Vaucouleurs profile commonly used to describe elliptical galaxies, (cf Kochanek, these proceedings) (Fig. 3). We assume this profile for the light and iteratively remove G1 to leave G2, which is too distorted for a simple profile to be appropriate. In this way we can separate the light of both galaxies from that of the Einstein ring and, by adopting mass to light ratios that are matched to the observed size of the Einstein ring, we can model the luminous mass associated with the two lens galaxies.
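The de Vaucouleurs fit to G1 is a two-parameter problem in the effective radius and effective surface brightness; a minimal sketch (the radial bins and noise level are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def de_vaucouleurs(r, mu_e, r_e):
    """r^(1/4)-law surface brightness in mag/arcsec^2:
    mu(r) = mu_e + 8.3268 [(r/r_e)^(1/4) - 1]."""
    return mu_e + 8.3268 * ((r / r_e) ** 0.25 - 1.0)

# surface photometry of G1 measured well away from G2 and the ring
# (placeholder data points)
r = np.array([0.2, 0.4, 0.8, 1.2, 1.8])                # arcsec
mu = de_vaucouleurs(r, 20.0, 0.6) + 0.02 * np.random.randn(r.size)
(mu_e, r_e), cov = curve_fit(de_vaucouleurs, r, mu, p0=(20.0, 0.5))
```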
The final step is to return to the original V, I images and de-convolve the Einstein ring using the measured PSFs. We then subtract off the reddened galaxy light and deredden the remaining ring image according to the extinction map deduced above. After iterating and some further refinements, we end up with a ring image like that shown in Fig. 1c.
### 2.2 Potential reconstruction
Before we show how to use this image to solve for the surface potential we note some general features of quad images arranged around an Einstein ring. We assume that the source intensity map contains a single maximum with nested, concave isophotes. This seems to be true for B1608+656. The challenge is to find a lens model that gives a four-to-one mapping of isophotes in the Einstein ring onto isophotes of similar intensity in the source plane. (We ignore the fifth image near the nucleus of G1.) Now mapping curves onto curves is not a unique operation. (It could be made into one, if we possessed two sets of distinct isophotes, but we don’t.) Nevertheless, there are strong constraints. Firstly, observe that the “crossing isophotes” (Fig. 1c) take the form of three, nested “lemniscates” inside a “limaçon”. The critical curve of the lens potential reconstruction must pass through all four saddles in the intensity, with each crossing isophote corresponding to a simple nested isophote in the source plane that is tangent to the caustic. Furthermore, if we construct the “outer limit” and the “inner limit” curves - the loci of the two isolated image points associated with pairs of images merging on the critical curve - then these must be tangent to the crossing isophotes, as shown. These constraints point to deficiencies in existing models.
In order to construct a surface potential, we start with a simple lens model that distributes mass density in proportion to the derived surface brightness in the two lensing galaxies, using a separate mass to light ratio for each of them and extrapolating using a de Vaucouleurs law to large radius. It is then adjusted to locate the four images A, B, C, D accurately.
This model does not yet map isophotes onto isophotes and we must correct it. This we do by constructing a trial source from the average on the source plane of the image intensities. We then map this trial source back onto the image plane and compare the resulting isophotes $`I_1(\stackrel{}{\beta })`$ with the observed isophotes $`I_0(\stackrel{}{\theta })`$. We use the linearized equation,
$$I_0I_1=\frac{I_0}{\stackrel{}{\theta }}\mu \frac{\delta \psi }{\stackrel{}{\theta }}=\frac{I_1}{\stackrel{}{\beta }}\frac{\delta \psi }{\stackrel{}{\theta }}$$
(1)
to lowest order, where $`\delta \psi (\stackrel{}{\theta })`$ is the correction to the normalized surface potential and $`\mu `$ is the magnification tensor of the original model. We can solve for the correction to the potential by integrating down a sequence of curves of steepest descent in the source plane for each of four image zones around A, B, C, D. The potential and its gradient must match on the critical curve. This matching can be accomplished, iteratively, by adjusting the intensity distribution in the source plane. A few iterations ought to suffice to render the model consistent with the observed brightness within the errors associated with the intensity reconstruction. In principle, we can connect A, B, C, D without making any assumptions about the distribution of dark matter.
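Evaluating the residual of eq. (1) for a trial potential correction is the elementary operation in this iteration; a schematic finite-difference sketch on a regular image grid (the grid layout and component conventions are ours):

```python
import numpy as np

def eq1_residual(I0, I1, dI1_dbeta, dpsi, pix):
    """Residual of the linearised matching condition of eq. (1),
    (I0 - I1) - (dI1/dbeta) . grad(dpsi), on a regular image grid.
    I0, I1     : observed and model ring intensities (2-D arrays)
    dI1_dbeta  : source-plane intensity gradient mapped back to the
                 image plane, components (x, y), shape (2, ny, nx)
    dpsi       : trial correction to the normalised surface potential
    pix        : pixel scale used in the finite differences"""
    gy, gx = np.gradient(dpsi, pix)           # gradients along y and x
    return (I0 - I1) - (dI1_dbeta[0] * gx + dI1_dbeta[1] * gy)
```

Driving this residual to zero zone by zone, subject to matching on the critical curve, is the analogue of the steepest-descent integration described above.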
The practical application and uniqueness of this approach is currently under study. If it is successful, we can take the Laplacian of the derived potential to give the corrected mass distribution and subtract off the potential associated with this mass to leave the potential associated with matter not covered by the Einstein ring. Of course, this procedure conveys no information, apart from the boundary conditions at the inner and outer curves, concerning the potential outside the Einstein ring and where the image intensity map is unreliable.
### 2.3 Hubble constant
The procedure that we have outlined can provide just the potential information that we need to convert the measured arrival times to a value of the Hubble constant, subject to the usual concerns associated with the influence of intervening mass distribution and the overall world model. The results can only be as good as the image intensity model and the corrections made to it. It does, however, include one more internal consistency check. The three ratios of the arrival times are determined independently by the potential model and can be compared with the observed values. More generally, this potential reconstruction technique may be applicable to other extended gravitational lenses, like those associated with rich clusters.
## 3 Strong Lensing on the HDF(N)
### 3.1 The lens deficit
Over 15,000 radio sources have been scrutinized in the JVAS/CLASS radio surveys (Browne, these proceedings) and they include roughly 25 confirmed gravitational lenses. Allowing for incompleteness etc, it appears that the probability of a distant radio source being multiply imaged by an intervening gravitational lens is roughly 0.003. (The probability for bright quasars is somewhat larger due to magnification bias, which is less important for radio sources and faint galaxies.) Now turn to the HDF(N). There are roughly 3000 discernible galaxy images on the roughly 5 sq. arcmin of sky covered by the WFPC2 (giving $`\sim 10^{11}`$ over the sky) and an expectation of $`\sim 10`$ detectable cases of multiple imaging (Hogg et al 1996). There have been quite a few follow up observations, but there are still no convincing examples of multiple imaging (Zepf, Moustakas & Davis 1997, Blandford 1998). (One compelling case has, however, been reported on the HDF(S), cf Barkana, Blandford & Hogg 1999.)
There are two immediate rationalizations of this large difference between the radio and optical lensing rates. The first is that the faint optical galaxies are all at very low redshift and therefore not likely to be multiply-imaged. The second is that the HDF(N) is too small to comprise a fair sample of the lensing sky. In order to explore this matter further, we have carried out a more detailed analysis of the strong lensing probability (cf Blandford, 1999).
### 3.2 Cross sections and multi-image probability
Images of all of the (roughly 150) galaxies on the HDF(N) with spectroscopic redshifts were prepared and their rest B surface brightness converted into surface density using mass-to-light ratios $`hM/L_B=5,10`$ for disk and elliptical galaxies respectively, together with a simple prescription for passive evolution (eg Vogt 1996). The galaxies were assumed to be isolated with dark matter in their individual halos whose density declines with radius faster than $`r^2`$. This ensures that the surface density, which fixes the size of the Einstein ring, is determined by the central, luminous mass. The lack of strong color gradients in the galaxies of most interest suggests that reddening is not a concern.
Given these assumptions, it is possible to compute lensing cross sections for each of these putative lens galaxies assuming that the background sources are all at redshift $`z_s=3`$. The surface potentials were computed from the surface densities using a Fourier method and then the total angular cross section for multiple imaging of a point source was computed and back-projected onto the source plane. The cross sections were all combined and the aggregate for the three WFPC2 chips was $`1`$ sq. arcsec. It was dominated by four $`z1`$ elliptical galaxies. None of the spirals contributed significantly to the cross section (cf Kochanek these proceedings). If the HDF(N) is typical, then the multiple imaging probability per bright, distant source should be at most $`10^4`$, over an order of magnitude smaller than suggested by the radio surveys.
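The Fourier step rests on the two-dimensional Poisson equation $`\nabla ^2\psi =2\kappa `$, which becomes an algebraic division in frequency space; a minimal periodic-grid sketch (the boundary treatment in the actual computation may differ):

```python
import numpy as np

def potential_from_kappa(kappa, pix):
    """Solve grad^2 psi = 2 kappa on a periodic grid by FFT:
    psi_hat = -2 kappa_hat / k^2, with the zero mode set to zero
    (the zero point of the potential is unconstrained)."""
    ny, nx = kappa.shape
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pix)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pix)
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    k2[0, 0] = 1.0                        # avoid division by zero
    psi_hat = -2.0 * np.fft.fft2(kappa) / k2
    psi_hat[0, 0] = 0.0
    return np.fft.ifft2(psi_hat).real
```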
One worry about this result is that the multiple images might actually be hidden in the cores of the lensing galaxies and have not been recognized as such. This was checked observationally by imaging actual faint galaxies taken from the HDF(N) through the principal lens galaxies. It turned out that the magnified source population was generally recognizable. (It is much easier to see a multiply imaged faint galaxy through an elliptical lens than through a spiral.) Actually as a result of this investigation, it was found that one of the elliptical galaxies contributing significantly to the total cross section did contain a faint, arc-like feature in its nucleus, possibly a merger, but conceivably a lens. Either way it does not change the conclusion that the total cross section for strong lensing over the area of sky covered by the HDF(N) is ten times smaller than average.
### 3.3 Lensing by Groups
When bona fide radio lenses are examined in detail, it is found that several of them have companion galaxies that are almost certainly contributing to the imaging. Furthermore, upon spectroscopic examination, several of these companion galaxies have similar redshifts to the nominal lens galaxy (eg Kundić et al 1997ab, Lubin et al 2000 in press). This suggests that a good fraction of the radio lens galaxies are ellipticals belonging to compact groups. This inference is consistent with the conclusion of a pencil beam redshift survey of $`z0.51`$ field galaxies which shows that most of the “absorption” line galaxies are in compact redshift groupings Cohen et al (1999).
Groups probably form within substantial dark matter perturbations. Although the surface density of the dark matter alone may not exceed the critical value, it may well be sufficient to enhance the cross section and the size of the Einstein ring in those elliptical galaxies that are located near the centers of the richest and most compact groups, (eg Zabludoff & Mulchaey 1998, Mulchaey & Zabludoff 1998).
Let us make a simple model of a giant elliptical galaxy located at the center of a compact group. We suppose that the dark matter in the group is centered on the galaxy and has a profile, $`\rho =\rho _{\mathrm{gp0}}(1+r^2/s_{\mathrm{gp}}^2)^{-3/2}`$. The galaxy is taken to have a density profile $`\rho =\rho _{\mathrm{gal0}}(1+r^2/s_{\mathrm{gal}}^2)^{-1}`$, that is to say it is isothermal in its outer parts which extend to a tidal radius $`r_{\mathrm{tid}}\approx 0.5s_{\mathrm{gp}}\sigma _{\mathrm{gal}}/\sigma _{\mathrm{gp}}`$ where its density matches that of the group. (We assume that the group velocity dispersion $`\sigma _{\mathrm{gp}}`$ is larger than that in the outer parts of the galaxy $`\sigma _{\mathrm{gal}}`$.)
The cross section to multiple imaging can be approximated by the solid angle subtended by the Einstein and a straightforward calculation furnishes the estimate
$$\pi \theta _E^2=\frac{\pi s_{\mathrm{gal}}^2}{D_d^2}\beta (\beta -2)$$
(2)
where
$$\beta =\frac{4\pi \sigma _{\mathrm{gal}}^2D_dD_{ds}}{(1-A)s_{\mathrm{gal}}D_s}$$
(3)
(with $`A=0`$), is a measure of the lensing strength of the galaxy and
$$A=\frac{18\sigma _{\mathrm{gp}}^2D_dD_{ds}}{s_{\mathrm{gp}}D_s}$$
(4)
measures the extra magnification associated with the dark matter in the group. Numerically, and very roughly, for $`z_d\sim 0.5`$, $`z_s\sim 2`$, $`\sigma _{\mathrm{gal}}\sim 200`$ km s<sup>-1</sup>, $`s_{\mathrm{gal}}\sim 2h_{60}^{-1}`$ kpc, say, the cross section for an isolated elliptical galaxy like one of those observed on the HDF(N) is $`\sim 0.5`$ sq arcsec and $`\beta =3`$. Now, if $`\sigma _{\mathrm{gp}}\sim `$ 500 km s<sup>-1</sup> and $`s_{\mathrm{gp}}\sim 100h_{60}^{-1}`$ kpc, then $`A\sim 0.5`$, $`\beta `$ doubles, $`\beta (\beta -2)`$ increases from 3 to 24, and the cross section rises to $`\sim 7`$ sq arcsec, comparable with that observed in group lenses. Although this example is very simple-minded, it does illustrate a general point, namely that the cross section of an elliptical galaxy can be very sensitive to the presence of dark matter in a surrounding group.
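The quoted enhancement follows directly from eqs. (2)-(4); the numerical sketch below reproduces the estimate, with placeholder distances chosen so that the isolated case gives $`\beta \approx 3`$ (they are not a cosmological calculation):

```python
import numpy as np

C = 2.99792458e5                          # speed of light [km/s]

def beta(sig_gal, s_gal, D_d, Dds_over_Ds, A=0.0):
    # eq. (3): dimensionless lensing strength of the galaxy
    return 4 * np.pi * (sig_gal / C) ** 2 * (D_d / s_gal) \
        * Dds_over_Ds / (1.0 - A)

def A_group(sig_gp, s_gp, D_d, Dds_over_Ds):
    # eq. (4): extra magnification from the group dark matter
    return 18 * (sig_gp / C) ** 2 * (D_d / s_gp) * Dds_over_Ds

def cross_section(b, s_gal, D_d):
    # eq. (2): pi theta_E^2 (solid angle in radians^2)
    return np.pi * (s_gal / D_d) ** 2 * b * (b - 2) if b > 2 else 0.0

D_d, R = 1.6e6, 0.7                       # kpc; rough D_ds/D_s placeholder
b0 = beta(200.0, 2.0, D_d, R)                          # isolated galaxy
b1 = beta(200.0, 2.0, D_d, R, A=A_group(500.0, 100.0, D_d, R))
print(b0, b1, cross_section(b1, 2.0, D_d) / cross_section(b0, 2.0, D_d))
```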
Returning to the HDF(N), it appears that it contains (perhaps through selection) no massive elliptical–group combinations with the most propitious redshifts for lensing, $`z\sim 0.5`$. The probability that a particular elliptical–group combination be aligned with a suitable source and produce a prominent optical ring is, in any case, typically less than $`0.3`$, even at intermediate redshift. It is therefore not unreasonable that no lenses have been seen. A larger area of the sky must be imaged to the depth of the HDF(N) to have a fair sample. We do not yet know the redshift distribution of the faint source galaxy population (though this can be ascertained by weak galaxy-galaxy lensing and strong cluster lensing) but it is not required that most faint galaxies are local.
There are three consequences of this interpretation. Firstly, as already reported by Keeton, Kochanek & Falco (1997), lens galaxies should exhibit larger than average mass-to-light ratios. Secondly, the dark matter groups that we postulate to enhance the cross section should be detectable locally as X-ray sources. The matter density in this form has a cosmological density which we estimate to be $`0.03`$, roughly ten percent of the total. Thirdly, accurate modeling of the lenses, as we have attempted for B1608+656, should actually require the addition of asymmetric dark matter and this may account for the unusually high proportion of quads. Of course not all galaxy lenses are located in groups or are associated with elliptical galaxies, but it is our contention that a significant fraction will turn out to be so.
A fuller treatment of these ideas will be presented elsewhere.
###### Acknowledgements.
This research owes much to the careful radio, infrared and optical observations of B1608+656 led by Chris Fassnacht, Tony Readhead and Paul Schechter, respectively. Tereasa Brainerd, Judith Cohen, David Hogg and Lori Lubin are thanked for collaboration on parts of the HDF analysis. Support under NSF grant AST is gratefully acknowledged. RB thanks the Institutes for Advanced Study, of Astronomy and of Theoretical Physics for hospitality and NSF (through grant AST99-00866) for support, respectively. |
## 1 Introduction
The search for supersymmetry (SUSY) is one of the most important goals of a future $`e^+e^{-}`$ linear collider (LC) in the energy range between 500 GeV and 1000 GeV. In addition to the $`e^+e^{-}`$ option the $`e^{-}\gamma `$ mode is also technically realizable, with high luminosity polarized photon beams obtained by backscattering of intense laser pulses off the electron beam. Associated production of selectrons with the lightest neutralino $`\stackrel{~}{\chi }_1^0`$ (assumed to be the LSP) in $`e^{-}\gamma `$ collisions allows one to probe heavy selectrons beyond the kinematical limit of selectron pair production in $`e^+e^{-}`$ annihilation. Further, associated production of selectrons and gaugino-like neutralinos provides us with the possibility to study the electron-selectron-neutralino couplings complementary to $`e^+e^{-}`$ annihilation.
In the present paper we study the associated production $`e^{-}\gamma \to \stackrel{~}{\chi }_1^0\stackrel{~}{e}_{L/R}^{-}`$ with polarized beams and the subsequent direct leptonic decay $`\stackrel{~}{e}_{L/R}^{-}\to \stackrel{~}{\chi }_1^0e^{-}`$. The beam polarization is chosen suitably to optimize cross sections and polarization asymmetries. The signal is a single electron with high transverse momentum $`p_T`$. We do not consider cascade decays of heavy selectrons, which may yield a similar single electron signal with, however, a less pronounced $`p_T`$. We also refrain from a discussion of the background.
The calculations are done in the Minimal Supersymmetric Standard Model (MSSM). The masses and couplings of the neutralinos depend on the gaugino mass parameters $`M_1`$ and $`M_2`$, the higgsino mass parameter $`\mu `$ and the ratio $`\mathrm{tan}\beta `$ of the two Higgs vacuum expectation values. The parameters $`M_2`$, $`\mu `$ and $`\mathrm{tan}\beta `$ can in principle be determined by chargino production alone . For the gaugino mass parameters usually the GUT relation $`M_1=M_2\frac{5}{3}\mathrm{tan}^2\theta _W`$ is assumed. A precise determination of $`M_1`$ is, however, only possible in the neutralino sector .
In the present paper we investigate whether associated production of selectrons and the LSP $`\stackrel{~}{\chi }_1^0`$ is suitable as a test of this relation. We therefore study the influence of the gaugino mass parameter $`M_1`$ on the total cross section and on polarization asymmetries for different selectron masses.
## 2 Cross Sections and Polarization Asymmetries
The production cross section $`\sigma _P^{L/R}\left(s_{e\gamma }\right)`$ for the process $`e^{-}\gamma \to \stackrel{~}{\chi }_1^0\stackrel{~}{e}_{L/R}^{-}`$ proceeds via electron exchange in the s-channel and selectron exchange in the t-channel. The electron-selectron-LSP couplings
$$f_{e1}^L=\sqrt{2}\left[\frac{1}{\mathrm{cos}\theta _W}\left(-\frac{1}{2}+\mathrm{sin}^2\theta _W\right)N_{12}-\mathrm{sin}\theta _WN_{11}\right],$$
(1)
$$f_{e1}^R=\sqrt{2}\mathrm{sin}\theta _W\left[\mathrm{tan}\theta _WN_{12}^{*}-N_{11}^{*}\right]$$
(2)
for left and right selectrons with masses $`m_{\stackrel{~}{e}_L}`$ and $`m_{\stackrel{~}{e}_R}`$ depend on the photino component $`N_{11}`$ and the zino component $`N_{12}`$ of the LSP. For an electron beam with longitudinal polarization $`P_e`$ the cross sections $`\sigma _P^L`$ and $`\sigma _P^R`$ are proportional to $`\left(1-P_e\right)`$ and $`\left(1+P_e\right)`$, respectively. For special cases the cross sections are given in earlier work; the complete analytical expressions for the differential and the total cross section for polarized beams will be given in a forthcoming paper.
In the narrow width approximation one obtains the total cross section $`\sigma _{e\gamma }^{L/R}`$ for the combined process of $`\stackrel{~}{e}_{L/R}^{-}`$$`\stackrel{~}{\chi }_1^0`$ production and the subsequent leptonic decay $`\stackrel{~}{e}_{L/R}^{-}\to e^{-}\stackrel{~}{\chi }_1^0`$ by multiplying the production cross section with the leptonic branching ratio:
$$\sigma _{e\gamma }^{L/R}\left(s_{e\gamma }\right)=\sigma _P^{L/R}\left(s_{e\gamma }\right)\mathrm{Br}\left(\stackrel{~}{e}_{L/R}^{-}\to e^{-}\stackrel{~}{\chi }_1^0\right).$$
(3)
The LSP-selectron-electron coupling $`f_{e1}^{L/R}`$ appears in the production amplitudes as well as in the decay amplitude, so that the total cross section $`\sigma _{e\gamma }^{L/R}\left(s_{e\gamma }\right)`$ is proportional to $`\left(f_{e1}^{L/R}\right)^4`$.
The photon beam is assumed to be produced by Compton backscattering of circularly polarized laser photons (polarization $`\lambda _L`$) off longitudinally polarized electrons (polarization $`\lambda _e`$). The energy spectrum $`P\left(y\right)`$ and the mean helicity $`\lambda \left(y\right)`$ of the high energy photons are given in the literature. The ratio $`y=E_\gamma /E_e`$ of the photon energy $`E_\gamma `$ and the energy of the converted electron beam $`E_e`$ is confined to $`y\stackrel{<}{\sim }0.83`$. For $`y>0.83`$ $`e^+e^{-}`$ pairs can be produced via scattering of laser photons and backscattered photons, so that the flux of high-energy photons drops considerably. To obtain the total cross section $`\sigma _{ee}^{L/R}(s_{ee},P_e,\lambda _e,\lambda _L)`$ for the combined process in the laboratory frame ($`e^+e^{-}`$ CMS) one has to convolute the total cross section $`\sigma _{e\gamma }^{L/R}\left(s_{e\gamma }\right)`$ in the $`e\gamma `$ CMS with the energy distribution $`P\left(y\right)`$ and the mean helicity $`\lambda \left(y\right)`$ of the backscattered photon beam:
$$\sigma _{ee}^{L/R}=\int dy\,P\left(y\right)\widehat{\sigma }_{e\gamma }^{L/R}\left(s_{e\gamma }=ys_{ee}\right),$$
(4)
$`\widehat{\sigma }_{e\gamma }^{L/R}`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(1+\lambda \left(y\right)\right)\left(\sigma _{e\gamma }^{L/R}\right)^++{\displaystyle \frac{1}{2}}\left(1-\lambda \left(y\right)\right)\left(\sigma _{e\gamma }^{L/R}\right)^{-}`$ (5)
$`=`$ $`\sigma _{e\gamma }^{L/R}\left(1+\lambda \left(y\right)A_c^{L/R}\right).`$
In eq. (5) $`\left(\sigma _{e\gamma }^{L/R}\right)^{+/-}`$ are the total cross sections for a completely right (left) circularly polarized photon beam whereas $`\sigma _{e\gamma }^{L/R}`$ is the cross section for unpolarized photons.
$$A_c^{L/R}=\frac{\left(\sigma _{e\gamma }^{L/R}\right)^+-\left(\sigma _{e\gamma }^{L/R}\right)^{-}}{\left(\sigma _{e\gamma }^{L/R}\right)^++\left(\sigma _{e\gamma }^{L/R}\right)^{-}}$$
(6)
is the polarization asymmetry for circularly polarized photons.
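Numerically, the convolution of eqs. (4)-(6) is a one-dimensional quadrature over the backscattered-photon spectrum; a sketch, with $`P(y)`$, $`\lambda (y)`$ and the two fully polarized cross sections left as user-supplied functions (their closed forms are given in the Compton-backscattering literature):

```python
import numpy as np

Y_MAX = 0.83    # kinematic limit of the backscattered photon spectrum

def sigma_lab(sqrt_s_ee, sig_plus, sig_minus, P, lam, n=400):
    """Laboratory-frame cross section, eqs. (4)-(6): sig_plus/sig_minus
    are the e-gamma cross sections for fully right/left circularly
    polarized photons as functions of s_eg; P(y) and lam(y) are the
    photon energy spectrum and mean helicity."""
    s_ee = sqrt_s_ee ** 2
    y = np.linspace(1e-4, Y_MAX, n)
    s_eg = y * s_ee
    sig_hat = 0.5 * (1 + lam(y)) * sig_plus(s_eg) \
            + 0.5 * (1 - lam(y)) * sig_minus(s_eg)
    return np.trapz(P(y) * sig_hat, y)
```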
Since the production and decay of right and left selectrons lead to the same final state we add both cross sections and obtain
$$\sigma _{ee}=\sigma _{ee}^L+\sigma _{ee}^R.$$
(7)
We consider two types of polarization asymmetries of the convoluted cross section. For the first one we flip the electron polarization $`P_e`$ and fix the polarization $`\lambda _L`$ of the laser beam and the polarization $`\lambda _e`$ of the converted electron beam:
$$A_{P_e}=\frac{\sigma _{ee}(s_{ee},P_e,\lambda _e,\lambda _L)-\sigma _{ee}(s_{ee},-P_e,\lambda _e,\lambda _L)}{\sigma _{ee}(s_{ee},P_e,\lambda _e,\lambda _L)+\sigma _{ee}(s_{ee},-P_e,\lambda _e,\lambda _L)}.$$
(8)
If we split off from $`\sigma _{ee}^{L/R}`$ the dependence on the beam polarization, $`\left(1\mp P_e\right)`$,
$$\sigma _{ee}(s_{ee},P_e,\lambda _e,\lambda _L)=\left(1-P_e\right)\stackrel{~}{\sigma }_{ee}^L+\left(1+P_e\right)\stackrel{~}{\sigma }_{ee}^R,$$
(9)
we obtain
$$A_{P_e}=P_e\frac{\stackrel{~}{\sigma }_{ee}^R-\stackrel{~}{\sigma }_{ee}^L}{\stackrel{~}{\sigma }_{ee}^R+\stackrel{~}{\sigma }_{ee}^L}.$$
(10)
Here $`\stackrel{~}{\sigma }_{ee}^R`$ ($`\stackrel{~}{\sigma }_{ee}^L`$) is the cross section for production of right (left) selectrons with an unpolarized electron beam ($`P_e=0`$) and their subsequent leptonic decay.
As a second asymmetry we discuss that with respect to the polarization $`\lambda _L`$ of the laser beam:
$$A_{\lambda _L}=\frac{\sigma _{ee}(s_{ee},P_e,\lambda _e,\lambda _L)-\sigma _{ee}(s_{ee},P_e,\lambda _e,-\lambda _L)}{\sigma _{ee}(s_{ee},P_e,\lambda _e,\lambda _L)+\sigma _{ee}(s_{ee},P_e,\lambda _e,-\lambda _L)}.$$
(11)
## 3 Numerical Results
In the following numerical analysis we study the total cross section $`\sigma _{ee}^{(L/R)}`$ and the polarization asymmetries $`A_{P_e}`$ and $`A_{\lambda _L}`$ for $`\sqrt{s_{ee}}=500`$ GeV. For the MSSM parameters we choose $`M_2=152`$ GeV, $`\mu =316`$ GeV, $`\mathrm{tan}\beta =3`$ with $`M_1`$ varying between $`M_1=40`$ GeV and $`M_1=300`$ GeV. The region $`M_1<40`$ GeV is excluded by assuming a lower limit of 35 GeV for the LSP mass $`m_{\stackrel{~}{\chi }_1^0}`$. In the figures the excluded region is shaded. For $`M_1=78.7`$ GeV this corresponds to the DESY/ECFA reference scenario for the Linear Collider , which implies the GUT relation $`M_1=M_2\frac{5}{3}\mathrm{tan}^2\theta _W`$.
For this set of parameters one has $`35\text{ GeV}<m_{\stackrel{~}{\chi }_1^0}<m_{\stackrel{~}{\chi }_1^\pm }<128`$ GeV. Fig. 1a shows that in the region $`40\text{ GeV}<M_1<150`$ GeV the LSP mass depends very strongly on $`M_1`$, varying between $`m_{\stackrel{~}{\chi }_1^0}=35`$ GeV for $`M_1=40`$ GeV and $`m_{\stackrel{~}{\chi }_1^0}=121`$ GeV for $`M_1=150`$ GeV whereas for $`M_1>150`$ GeV the mass of the LSP is practically independent of $`M_1`$. In the whole $`M_1`$ region the LSP is gaugino-like (fig. 1b). At $`M_1=M_2`$ the photino component $`N_{11}`$ changes its sign which leads to completely different strength of the couplings $`f_{e1}^{L/R}`$ in the regions $`M_1>150`$ GeV and $`M_1<150`$ GeV (fig. 1c). For the selectron masses we choose two examples: $`m_{\stackrel{~}{e}_L}=179.3`$ GeV, $`m_{\stackrel{~}{e}_R}=137.7`$ GeV corresponding to the value $`m_0=110`$ GeV of the common scalar mass at the GUT scale and $`m_{\stackrel{~}{e}_L}=350.0`$ GeV, $`m_{\stackrel{~}{e}_R}=330.5`$ GeV corresponding to $`m_0=320`$ GeV. In the second case selectron pair production at an $`e^+e^{}`$ collider with $`\sqrt{s_{ee}}=500`$ GeV is kinematically forbidden.
For the integrated luminosity of the $`e\gamma `$ machine we assume $`=100`$ fb<sup>-1</sup> so that cross sections of a few fb should be measurable.
Fig. 1c shows that in our scenario the electron-selectron-LSP couplings also depend strongly on $`M_1`$. For $`M_1<150`$ GeV the coupling of the right selectron $`f_{e1}^R`$ dominates, whereas for $`M_1>150`$ GeV that of the left selectron $`f_{e1}^L`$ is the stronger one. Similarly, the total cross sections $`\sigma _{ee}^{L/R}`$ depicted in fig. 2a for a CMS energy $`\sqrt{s_{ee}}=500`$ GeV and for unpolarized beams ($`P_e=\lambda _L=\lambda _e=0`$) have a pronounced $`M_1`$-dependence. Comparing fig. 2a for the cross sections with fig. 1c for the couplings $`f_{e1}^{L/R}`$, one can see that even in the region $`40\text{ GeV}<M_1<150`$ GeV the influence of the additional $`M_1`$-dependence of the LSP mass (fig. 1a) is weak, so that the total cross sections reflect essentially the $`M_1`$-dependence of the couplings.
As a consequence of its somewhat higher mass, the cross section for production and decay of $`\stackrel{~}{e}_L`$ is additionally suppressed compared to that for $`\stackrel{~}{e}_R`$. Therefore in fig. 2a the crossing of the cross sections occurs at a somewhat higher value, $`M_1\approx 175`$ GeV, than that of the couplings at $`M_1\approx 150`$ GeV in fig. 1c. For $`M_1<175`$ GeV the production of $`\stackrel{~}{e}_R`$ dominates, whereas for $`M_1>175`$ GeV that of $`\stackrel{~}{e}_L`$ dominates with, however, much smaller cross sections. Fig. 2a shows the strong variation of the cross section $`\sigma _{ee}^R`$ with $`M_1`$. If we assume that a cross section $`\sigma _{ee}^R=100`$ fb has been measured with an error of $`\pm 5\%`$, this is compatible with $`M_1`$ between 122 GeV and 126 GeV.
For an unpolarized electron beam ($`P_e=0`$), polarization of the laser beam and of the converted electrons essentially changes only the magnitude of the cross sections, by a factor between 0.7 and 1.3 at most. As we have checked numerically, the $`M_1`$ dependence is very similar to that given in fig. 2a.
Figs. 2b - 2d exhibit the energy dependence of the total cross section for three different values of $`M_1`$: the GUT value $`M_1=78.7`$ GeV (fig. 2b) and two higher values, $`M_1=170`$ GeV (fig. 2c) and $`M_1=250`$ GeV (fig. 2d). For a polarization of the electron beam $`P_e=+0.9`$ ($`P_e=-0.9`$) the cross section for production and decay of left (right) selectrons is reduced and that for right (left) selectrons is enhanced.
In fig. 3a the asymmetry $`A_{P_e}`$ defined in eq. (10) is shown for unpolarized converted electrons ($`\lambda _e=0`$), unpolarized laser photons ($`\lambda _L=0`$) and electron polarization $`P_e=\pm 0.9`$. In our scenario the dependence of $`A_{P_e}`$ on $`\lambda _L`$ and on $`\lambda _e`$ turns out to be negligible. The $`M_1`$-dependence of $`A_{P_e}`$ is as expected from that of the cross sections (fig. 2). Since for $`M_1<175`$ GeV ($`M_1>175`$ GeV) the production of $`\stackrel{~}{e}_R`$ ($`\stackrel{~}{e}_L`$) dominates, we obtain large positive asymmetries (large negative asymmetries) for $`M_1<175`$ GeV ($`M_1>175`$ GeV). For $`40\text{ GeV}<M_1<142`$ GeV the asymmetry $`A_{P_e}`$ is larger than 0.85 and nearly independent of $`M_1`$. In this region, however, the LSP mass (fig. 1a) and the total cross section (fig. 2) depend strongly on $`M_1`$. For $`M_1>205`$ GeV the magnitude of the asymmetry increases again, reaching large negative values between $`A_{P_e}=-0.5`$ for $`M_1=205`$ GeV and $`A_{P_e}=-0.82`$ for $`M_1=300`$ GeV, with, however, rather small cross sections $`<38`$ fb. For $`142\text{ GeV}<M_1<205`$ GeV the asymmetry $`A_{P_e}`$ shows a strong variation with $`M_1`$. If we assume that, for instance, an asymmetry $`A_{P_e}=0.5\pm 5\%`$ has been measured, this is compatible with $`M_1`$ in the narrow region between 158 GeV and 160 GeV.
Additional information on the value of $`M_1`$ can be obtained if the laser beam and the converted electrons are polarized. In fig. 3b we show the $`M_1`$-dependence of the total cross section $`\sigma _{ee}`$ for $`P_e=0.9`$ and $`\lambda _e=+1`$. For $`\lambda _L=-1`$ ambiguities exist in the region $`40\text{ GeV}<M_1<120`$ GeV, and for $`M_1>180`$ GeV the dependence on $`M_1`$ is rather weak. For $`120\text{ GeV}<M_1<180`$ GeV, however, this cross section shows a strong variation with $`M_1`$. For $`\lambda _L=+1`$ the cross section again shows ambiguities in the region $`40\text{ GeV}<M_1<108`$ GeV and is nearly independent of $`M_1`$ for $`M_1>180`$ GeV. The interval $`108\text{ GeV}<M_1<180`$ GeV, where the cross section is sensitive to $`M_1`$, is however larger than for $`\lambda _L=-1`$. If we assume that a cross section $`\sigma _{ee}=250\text{ fb}\pm 5\%`$ has been measured, this is compatible with $`M_1`$ between 122 GeV and 127 GeV. In the region $`60\text{ GeV}<M_1<300`$ GeV the asymmetry $`A_{\lambda _L}`$ (eq. (11)) depicted in fig. 3c for $`P_e=0.9`$ and $`\lambda _e=+1`$ depends nearly linearly on $`M_1`$, so that it should be possible to determine $`M_1`$ uniquely in the region $`60\text{ GeV}<M_1<190`$ GeV. An asymmetry $`A_{\lambda _L}=0.25\pm 5\%`$ would be compatible with $`M_1`$ between 116 GeV and 132 GeV according to fig. 3c. In the region $`M_1>190`$ GeV the cross sections are smaller than 16 fb.
The cross section $`\sigma _{ee}`$ and the asymmetry $`A_{\lambda _L}`$ are depicted in fig. 3d, e for the polarization configuration $`P_e=0.9`$ and $`\lambda _e=-1`$. For $`\lambda _L=-1`$ the total cross section has ambiguities in the region $`40\text{ GeV}<M_1<167`$ GeV, and for $`\lambda _L=+1`$ in the region $`40\text{ GeV}<M_1<173`$ GeV. For $`M_1>173`$ GeV one notices a strong variation of the cross section for $`\lambda _L=\pm 1`$. As can be seen from fig. 3d, with $`\lambda _L=+1`$ a cross section $`\sigma _{ee}=35\text{ fb}\pm 5\%`$ is compatible with $`M_1`$ between 193 GeV and 209 GeV. For this polarization configuration the asymmetry $`A_{\lambda _L}`$ (fig. 3e) grows practically linearly between $`M_1=40`$ GeV and $`M_1=126`$ GeV and is very sensitive to $`M_1`$, but shows ambiguities between $`M_1=40`$ GeV and $`M_1=150`$ GeV. If we assume that an asymmetry $`A_{\lambda _L}=0.15\pm 5\%`$ has been measured, this is compatible with $`M_1`$ between 89 GeV and 94 GeV or between 138 GeV and 140 GeV according to fig. 3e. One can distinguish between these two regions via the cross section for $`\lambda _L=+1`$ depicted in fig. 3d, because one expects 18-19 fb for $`M_1`$ between 89 GeV and 94 GeV and 7-8 fb for $`M_1`$ between 138 GeV and 140 GeV. For $`M_1>170`$ GeV the asymmetry is nearly constant, $`A_{\lambda _L}\approx 0.07`$.
To sum up: for unpolarized laser beams ($`\lambda _L=0`$) and converted electrons ($`\lambda _e=0`$) the polarization asymmetry $`A_{P_e}`$ exhibits a pronounced $`M_1`$ dependence in the region $`142\text{ GeV}<M_1<205`$ GeV. For the polarization configuration $`P_e=0.9`$, $`\lambda _e=+1`$ and $`\lambda _L=\pm 1`$ the cross sections $`\sigma _{ee}`$ and the polarization asymmetry $`A_{\lambda _L}`$ are sensitive to $`M_1`$ in the region $`60\text{ GeV}<M_1<190`$ GeV. Finally, for $`P_e=0.9`$, $`\lambda _e=-1`$ and $`\lambda _L=\pm 1`$ these observables show a strong $`M_1`$ dependence in the region $`40\text{ GeV}<M_1<300`$ GeV.
We choose as a second example higher selectron masses, $`m_{\stackrel{~}{e}_L}=350.0`$ GeV and $`m_{\stackrel{~}{e}_R}=330.5`$ GeV, corresponding to $`m_0=320`$ GeV. Then for $`\sqrt{s_{ee}}=500`$ GeV selectron pair production in $`e^+e^{-}`$ annihilation is forbidden, whereas single selectron production in $`e^{-}\gamma \to \stackrel{~}{\chi }_1^0\stackrel{~}{e}_{L/R}^{-}`$ is still possible, provided that $`\sqrt{s_{e\gamma }}>m_{\stackrel{~}{e}_{L/R}}+m_{\stackrel{~}{\chi }_1^0}`$, where $`\sqrt{s_{e\gamma }}\simeq 0.91\sqrt{s_{ee}}`$ corresponds to the energy of the hardest photon obtained by Compton backscattering. Now the kinematically accessible $`M_1`$ region is confined to $`M_1<184`$ GeV ($`m_{\stackrel{~}{\chi }_1^0}<124.6`$ GeV). In fig. 4a,b we show the total cross section and the asymmetry $`A_{\lambda _L}`$ for $`P_e=0.9`$, $`\lambda _e=+1`$ and $`\lambda _L=\pm 1`$. For $`\lambda _L=+1`$ the cross section depends nearly linearly on $`M_1`$ in the region $`40\text{ GeV}<M_1<115`$ GeV. For $`M_1>115`$ GeV the cross section is smaller than 2 fb. The cross section for $`\lambda _L=-1`$ is higher and more sensitive to $`M_1`$ in the region $`40\text{ GeV}<M_1<135`$ GeV. If we assume for example that a cross section $`\sigma _{ee}=45\text{ fb}\pm 5\%`$ has been measured, this is compatible with $`M_1`$ between 80 GeV and 88 GeV. Also the polarization asymmetry $`A_{\lambda _L}`$ strongly depends on $`M_1`$ in the whole region. According to fig. 4b, an asymmetry $`A_{\lambda _L}=0.7\pm 5\%`$ would be compatible with $`M_1`$ between 99 GeV and 109 GeV. The polarization asymmetry $`A_{P_e}`$ for this scenario is between 0.85 and 0.9 and depends only weakly on $`M_1`$. The polarization configuration $`P_e=0.9`$, $`\lambda _e=-1`$ and $`\lambda _L=\pm 1`$ is not shown because the cross sections are smaller than 2 fb. Thus, for the case of high selectron masses and the polarization configuration $`P_e=0.9`$, $`\lambda _e=+1`$ and $`\lambda _L=\pm 1`$, both the cross section and the asymmetry $`A_{\lambda _L}`$ can be helpful for determining $`M_1`$ in the greatest part ($`40\text{ GeV}<M_1<135`$ GeV) of the kinematically accessible region $`M_1<184`$ GeV.
## 4 Conclusion
We have demonstrated that associated selectron - LSP production with subsequent leptonic decay of the selectron, $`e^{-}\gamma \to \stackrel{~}{\chi }_1^0\stackrel{~}{e}_{L/R}^{-}\to e^{-}\stackrel{~}{\chi }_1^0\stackrel{~}{\chi }_1^0`$, at a $`\sqrt{s_{ee}}=500`$ GeV linear collider in the $`e\gamma `$ mode should allow one to test, for a gaugino-like LSP, the GUT relation $`M_1=M_2\frac{5}{3}\mathrm{tan}^2\theta _W`$ between the MSSM gaugino mass parameters. The polarization $`P_e`$ of the electron beam helps to enlarge the production cross section for left or right selectrons. For suitably polarized electron beams and laser photons the total cross section $`\sigma _{ee}`$ and the polarization asymmetries $`A_{P_e}`$ and $`A_{\lambda _L}`$ are very sensitive to the gaugino mass parameter $`M_1`$ in the whole investigated region between 40 GeV and 300 GeV. For high selectron masses $`m_{\stackrel{~}{e}_{L/R}}`$ the accessible $`M_1`$ region is kinematically constrained. The optimal polarization configuration depends on the values of the selectron masses. For realistic predictions a complete MC study including background processes and experimental cuts would be indispensable.
## 5 Acknowledgements
We are grateful to Gudrid Moortgat-Pick and Stefan Hesselbach for valuable discussions. This work was supported by the Deutsche Forschungsgemeinschaft under contract no. FR 1064/4-1 and the Bundesministerium für Bildung und Forschung (BMBF) under contract number 05 HT9WWA 9.
## I Introduction
The nonperturbative QCD vacuum is a very complicated medium, and its dynamical and topological complexity \[1-3\] means that its structure can be organized at various levels (classical, quantum) and that it can contain many different components and ingredients which contribute to the vacuum energy density (VED), one of the main characteristics of the QCD ground state. Many models of the QCD vacuum involve some extra classical color field configurations such as randomly oriented domains of constant color magnetic fields, background gauge fields, averaged over spin and color, stochastic colored background fields, etc. (see Refs. and references therein). The most elaborated classical models are the random and interacting instanton liquid models (RILM and IILM, respectively) of the QCD vacuum . These models are based on the existence of the topologically nontrivial instanton-type fluctuations of gluon fields, which are nonperturbative solutions to the classical equations of motion in Euclidean space (see Ref. and references therein).
Here we are going to discuss the quantum part of the VED, which is determined by the effective potential approach for composite operators (see also Ref. ). It allows us to investigate the nonperturbative QCD vacuum, in particular the Yang-Mills (YM) one, by substituting some physically well-justified Ansatz for the full gluon propagator, since the exact solutions are not known. In the absence of external sources the effective potential is nothing but the VED, which is given in the form of the loop expansion, where the number of the vacuum loops (consisting in general of the confining quarks and nonperturbative gluons) is equal to the power of the Planck constant, $`\hbar `$.
Let us remind the reader that the full dynamical information of any quantum gauge field theory such as QCD is contained in the corresponding quantum equations of motion, the so-called Schwinger-Dyson (SD) equations for lower (propagators) and higher (vertices and kernels) Green's functions . These equations should also be complemented by the corresponding Slavnov-Taylor (ST) identities \[9-12\], which in general relate the above mentioned lower and higher Green's functions to each other. These identities are consequences of the exact gauge invariance and therefore "are exact constraints on any solution to QCD" . Precisely this system of equations can serve as an adequate and effective tool for the nonperturbative approach to QCD. Among the above-mentioned Green's functions, the two-point Green's function describing the full gluon propagator (see section II below) occupies a central place \[9-13\]. In particular, the solutions to the above-mentioned SD equation for the full gluon propagator are supposed to reflect the quantum structure of the QCD ground state. It is a highly nonlinear integral equation containing many different propagators, vertices and kernels \[9-13\]. For this reason it may have many different exact solutions with different asymptotics in the deep infrared (IR) limit (the ultraviolet (UV) asymptotics, because of asymptotic freedom, are apparently uniquely determined), describing thus many different types of quantum excitations of gluon field configurations in the QCD vacuum. Evidently, there is no hope for exact solutions, and not all of them can reflect the real structure of the QCD vacuum. Let us emphasize now that any deviation in the behavior of the full gluon propagator in the IR domain from the free one automatically assumes its dependence on a scale parameter (at least one) responsible for nonperturbative dynamics in the quantum model under consideration, say, $`\mathrm{\Lambda }_{NP}`$. This is very similar to asymptotic freedom, which requires an asymptotic scale parameter associated with the nontrivial perturbative dynamics (scale violation). However, to calculate the truly nonperturbative VED we need not the IR part of the decomposition of the full gluon propagator, but rather its truly nonperturbative part, which vanishes when the above-mentioned nonperturbative scale parameter goes to zero, i.e., when only the perturbative phase survives in the corresponding decomposition of the full gluon propagator (see next section below).
It is well known, however, that the VED is badly divergent in quantum field theory, in particular QCD (see, for example, the discussion given by Shifman in Ref. ). The main problem thus is how to extract the truly nonperturbative VED which is relevant for the QCD vacuum quantum model under consideration. It should be finite, negative, and it should have no imaginary part (stable vacuum). Why is it so important to calculate it from first principles? As was emphasized above, this quantity is important in its own right, being nothing but the bag constant (the so-called bag pressure) apart from the sign, by definition . Through the trace anomaly relation it assists in correctly estimating such an important phenomenological nonperturbative parameter as the gluon condensate, introduced in the QCD sum rules approach to resonance physics . Furthermore, it assists in the resolution of the $`U(1)`$ problem via the Witten-Veneziano (WV) formula for the mass of the $`\eta ^{}`$ meson . The problem is that the topological susceptibility needed for this purpose \[16-19\] is determined by the two-point correlation function from which the perturbative contribution is already subtracted by definition \[18-22\]. The same is valid for the above-mentioned bag constant, which is a much more general quantity than the string tension since it is relevant for light quarks as well. Thus to correctly calculate the truly nonperturbative VED means to correctly understand the structure of the QCD vacuum in different models.
We have already formulated a method for calculating the truly nonperturbative YM VED in covariant gauge QCD . The main purpose of this paper (section II) is to formulate precisely a general method for correctly calculating the truly nonperturbative quantum part of the YM VED in axial gauge QCD. In sections III and IV we illustrate it by considering the Abelian Higgs model of the dual QCD ground state. We will show explicitly that the vacuum of this model without string contributions is unstable against quantum corrections. In section V we summarize our results.
## II The truly nonperturbative vacuum energy density
In this section we are going to analytically formulate a general method of calculation of the quantum part of the truly nonperturbative YM VED in the axial gauge QCD. Let us start from the nonperturbative gluon part of the VED, which to leading order (log-loop level, order $`\hbar `$)<sup>*</sup><sup>*</sup>*Next-to-leading and higher terms (two and more vacuum loops) are suppressed by at least one order of magnitude in powers of $`\hbar `$ and are left for consideration elsewhere. is given by the effective potential for composite operators as follows
$$V(D)=-\frac{i}{2}\int \frac{d^nq}{(2\pi )^n}Tr\{\mathrm{ln}(D_0^{-1}D)-(D_0^{-1}D)+1\},$$
(1)
where $`D(q)`$ is the full gluon propagator (see below) and $`D_0(q)`$ is its free (perturbative) counterpart. Here and below the traces over space-time and color group indices are understood. The effective potential is normalized as $`V(D_0)=0`$, i.e., the free perturbative vacuum is normalized to zero.
A general parametrization of the gauge boson propagator in the axial gauge of dual QCD is \[24-26\] (here and below we use notations and definitions of Refs. )
$$D_{\mu \nu }(q,n)=\frac{1}{(qn)^2}T_{\mu \nu }(n)G(q^2)+L_{\mu \nu }(q,n)F(q^2),$$
(2)
where
$`T_{\mu \nu }(n)`$ $`=`$ $`\delta _{\mu \nu }-n_\mu n_\nu ,`$ (3)
$`L_{\mu \nu }(q,n)`$ $`=`$ $`\delta _{\mu \nu }-{\displaystyle \frac{q_\mu n_\nu +q_\nu n_\mu }{(qn)}}+{\displaystyle \frac{q_\mu q_\nu }{(qn)^2}}`$ (4)
with an arbitrary constant unit vector $`n_\mu `$, $`n_\mu ^2=1`$. The exact coefficient functions $`G(q^2)`$ and $`F(q^2)`$ characterize the vacuum of the theory under consideration. Their free perturbative counterparts are
$$F^{PT}(q^2)=\frac{1}{(-q^2)},\qquad G^{PT}(q^2)=0.$$
(5)
Thus the free perturbative gluon propagator is
$$D_{\mu \nu }^0(q,n)=\frac{1}{(-q^2)}L_{\mu \nu }(q,n)$$
(6)
while its inverse is
$$[D_{\mu \nu }^0]^{-1}(q)=(-q^2)\left(\delta _{\mu \nu }-\frac{q_\mu q_\nu }{q^2}\right).$$
(7)
Using further Eqs. (2.2) and (2.6), one obtains
$$[D_{\mu \nu }^0]^{-1}(q)D_{\mu \nu }(q,n)=(-q^2)F(q^2)+G(q^2).$$
(8)
In order to evaluate the effective potential (2.1) we use the well-known expression,
$$Tr\mathrm{ln}(D_0^{-1}D)=8\times \mathrm{ln}det(D_0^{-1}D)=8\times 4\mathrm{ln}\left[(-q^2)F(q^2)+G(q^2)\right].$$
(9)
It becomes zero (in accordance with the above mentioned normalization condition) when the full gluon form factors are replaced by their free counterparts (see Eqs. (2.4)). Going over to four ($`n=4`$) dimensional Euclidean space in Eq. (2.1), on account of (2.8), and evaluating some numerical factors, one obtains ($`ϵ_g=V(D)`$)
$$ϵ_g=\frac{1}{\pi ^2}\int dq^2\,q^2\left[\mathrm{ln}\left(q^2F(q^2)+G(q^2)\right)-\left(q^2F(q^2)+G(q^2)\right)+1\right].$$
(10)
Let us now introduce the following decomposition of the exact coefficient functions $`G(q^2)`$ and $`F(q^2)`$ (Euclidean metrics)
$`F(q^2)`$ $`=`$ $`F^{NP}(q^2)+F^{PT}(q^2),`$ (11)
$`G(q^2)`$ $`=`$ $`G^{NP}(q^2)+G^{PT}(q^2),`$ (12)
where the truly nonperturbative quantities $`F^{NP}(q^2)`$ and $`G^{NP}(q^2)`$ are defined as follows:
$`F^{NP}(q^2,\mathrm{\Lambda }_{NP})`$ $`=`$ $`F(q^2,\mathrm{\Lambda }_{NP})-F(q^2,\mathrm{\Lambda }_{NP}=0),`$ (13)
$`G^{NP}(q^2,\mathrm{\Lambda }_{NP})`$ $`=`$ $`G(q^2,\mathrm{\Lambda }_{NP})-G(q^2,\mathrm{\Lambda }_{NP}=0),`$ (14)
which explains the difference between the truly nonperturbative parts and the full gluon form factors, which are nonperturbative themselves. Let us note that the perturbative parts $`F^{PT}(q^2)`$ and $`G^{PT}(q^2)`$ may, in general, contain renormalization group log improvements due to asymptotic freedom. Without these improvements their free perturbative counterparts are given in Eqs. (2.4). Substituting these relations into Eq. (2.9) and doing some trivial rearrangement, one obtains
$$ϵ_g=\frac{1}{\pi ^2}\int dq^2\,q^2\left[\mathrm{ln}\left(1+q^2F^{NP}(q^2)+G^{NP}(q^2)\right)-\left(q^2F^{NP}(q^2)+G^{NP}(q^2)\right)\right]+I_{PT},$$
(15)
where we introduce the following notation
$$I_{PT}=\frac{1}{\pi ^2}\int dq^2\,q^2\left[\mathrm{ln}\left(1-\frac{1-q^2F^{PT}(q^2)-G^{PT}(q^2)}{1+q^2F^{NP}(q^2)+G^{NP}(q^2)}\right)+\left(1-q^2F^{PT}(q^2)-G^{PT}(q^2)\right)\right],$$
(16)
as containing the contribution which is mainly determined by the perturbative part. However, this is not the whole story yet. We must now introduce the soft cutoff in order to separate the deep IR region, where the truly nonperturbative contributions become dominant (obviously they cannot be valid in the whole energy-momentum range). So the expression (2.12) becomes
$$ϵ_g=\frac{1}{\pi ^2}\int _0^{q_0^2}dq^2\,q^2\left[\mathrm{ln}\left(1+q^2F^{NP}(q^2)+G^{NP}(q^2)\right)-\left(q^2F^{NP}(q^2)+G^{NP}(q^2)\right)\right]+I_{PT}+\stackrel{~}{I}_{PT},$$
(17)
where the explicit formula for $`\stackrel{~}{I}_{PT}`$ (which is obvious) is not important. The contributions from the perturbative region, $`\stackrel{~}{I}_{PT}`$ as well as $`I_{PT}`$, should be subtracted by introducing the corresponding counterterms into the effective potential, which is equivalent to defining the truly nonperturbative VED as $`ϵ_g^{np}=ϵ_g-I_{PT}-\stackrel{~}{I}_{PT}`$. Thus one finally obtains
$$ϵ_g^{np}=\frac{1}{\pi ^2}\int _0^{q_0^2}dq^2\,q^2\left[\mathrm{ln}\left(1+q^2F^{NP}(q^2)+G^{NP}(q^2)\right)-\left(q^2F^{NP}(q^2)+G^{NP}(q^2)\right)\right].$$
(18)
This is a general formula which can be applied to any model of the axial gauge QCD ground state based on the corresponding Ansatz for the full gluon propagator. Thus Eq. (2.15) is our definition of the truly nonperturbative VED: the truly nonperturbative part of the full gluon propagator integrated out over the deep IR (soft momentum) region, $`0\le q^2\le q_0^2`$. How is $`q_0^2`$ to be determined? By the corresponding minimization procedure, of course (see below).
### A
From this point on it is convenient to factorize the scale dependence of the nonperturbative VED (2.15). As was already emphasized above, the full gluon form factors always contain at least one scale parameter responsible for the nonperturbative dynamics in the model under consideration, $`\mathrm{\Lambda }_{NP}`$. Within our general method we treat it as a free, i.e., ”running”, parameter (when it formally goes to zero, only the perturbative phase survives in the model under consideration), and its numerical value (if any) will be used only at the final stage in order to evaluate the corresponding truly nonperturbative VED (if any) numerically. We can introduce dimensionless variables and parameters by using a completely independent scale (which is always fixed in comparison with $`\mathrm{\Lambda }_{NP}`$), for example the flavorless QCD asymptotic scale parameter $`\mathrm{\Lambda }_{YM}`$, as follows:
$$z=\frac{q^2}{\mathrm{\Lambda }_{YM}^2},z_0=\frac{q_0^2}{\mathrm{\Lambda }_{YM}^2},b=\frac{\mathrm{\Lambda }_{NP}^2}{\mathrm{\Lambda }_{YM}^2}.$$
(19)
Here $`z_0`$ is the corresponding dimensionless soft cutoff, while the parameter $`b`$ has a very clear physical meaning. It measures the ratio between the nonperturbative dynamics, symbolized by $`\mathrm{\Lambda }_{NP}^2`$, and the nontrivial perturbative dynamics (violation of scale, asymptotic freedom), symbolized by $`\mathrm{\Lambda }_{YM}^2`$. When it is zero, only the perturbative phase remains in the quantum model under consideration. In this case, the gluon form factors obviously become functions of $`z`$ and $`b`$, i.e., $`F^{NP}(q^2)=F^{NP}(z,b)`$ and $`G^{NP}(q^2)=G^{NP}(z,b)`$, so the truly nonperturbative VED (2.15) is ($`ϵ_g^{np}\equiv ϵ_g^{np}(z_0,b)`$)
$$\mathrm{\Omega }_g(z_0,b)=-\frac{1}{\mathrm{\Lambda }_{YM}^4}ϵ_g^{np}(z_0,b),$$
(20)
where for further purposes we introduce the gluon effective potential at a fixed scale $`\mathrm{\Lambda }_{YM}`$,
$$\mathrm{\Omega }_g\equiv \mathrm{\Omega }_g(z_0,b)=\frac{1}{\pi ^2}\int _0^{z_0}dz\,z\left[\left(zF^{NP}(z,b)+G^{NP}(z,b)\right)-\mathrm{ln}\left(1+zF^{NP}(z,b)+G^{NP}(z,b)\right)\right].$$
(21)
Precisely this expression allows us to investigate the dynamical structure of the YM vacuum free of scale-dependence complications, as these have already been factorized in Eq. (2.17). It depends only on $`z_0`$ and $`b`$, and the minimization procedure can now be done with respect to $`b`$, $`\partial \mathrm{\Omega }_g(z_0,b)/\partial b=0`$ (usually after the integration in Eq. (2.18) has been carried out), in order to find a self-consistent relation between $`z_0`$ and $`b`$, i.e., to find $`q_0`$ as a function of $`\mathrm{\Lambda }_{NP}`$. Let us note in advance that all final numerical results will always depend only on $`\mathrm{\Lambda }_{NP}`$, as it should be for the nonperturbative part of the VED. Obviously, the minimization with respect to $`z_0`$ leads to the trivial zero. In principle, through the relation $`\mathrm{\Lambda }_{YM}^4=q_0^4z_0^{-2}`$, it is possible to fix the soft cutoff $`q_0`$ itself, but this is not the case indeed, since then $`z_0`$ could not be varied.
### B
On the other hand, the scale dependence can be factorized as follows:
$$z=\frac{q^2}{\mathrm{\Lambda }_{NP}^2},z_0=\frac{q_0^2}{\mathrm{\Lambda }_{NP}^2},$$
(22)
i.e., $`b=1`$. For simplicity (but without losing generality) we use the same notation for the dimensionless set of variables and parameters as in Eq. (2.16). In this case, the gluon form factors obviously become functions of $`z`$ only, and the truly nonperturbative VED (2.15) becomes
$$ϵ_g^{np}(z_0)=-\frac{1}{\pi ^2}q_0^4z_0^{-2}\int _0^{z_0}dz\,z\left[\left(zF^{NP}(z)+G^{NP}(z)\right)-\mathrm{ln}\left(1+zF^{NP}(z)+G^{NP}(z)\right)\right].$$
(23)
Evidently, the scale can now be fixed in two different ways. In principle, we can fix $`\mathrm{\Lambda }_{NP}`$ itself, i.e., introduce
$$\stackrel{~}{\mathrm{\Omega }}_g(z_0)=-\frac{1}{\mathrm{\Lambda }_{NP}^4}ϵ_g^{np}(z_0)=\frac{1}{\pi ^2}\int _0^{z_0}dz\,z\left[\left(zF^{NP}(z)+G^{NP}(z)\right)-\mathrm{ln}\left(1+zF^{NP}(z)+G^{NP}(z)\right)\right].$$
(24)
However, the minimization procedure again leads to the trivial zero, which shows that this scale cannot be fixed.
In contrast to the previous case, let us fix the soft cutoff itself, i.e., set
$$\overline{\mathrm{\Omega }}_g(z_0)=-\frac{1}{q_0^4}ϵ_g^{np}(z_0)=\frac{1}{\pi ^2}z_0^{-2}\int _0^{z_0}dz\,z\left[\left(zF^{NP}(z)+G^{NP}(z)\right)-\mathrm{ln}\left(1+zF^{NP}(z)+G^{NP}(z)\right)\right].$$
(25)
The minimization procedure with respect to $`z_0`$ is now nontrivial. Indeed, $`\partial \overline{\mathrm{\Omega }}_g(z_0)/\partial z_0=0`$ yields the following ”stationary” condition
$`{\displaystyle \int _0^{z_0}}dz\,z\left[\left(zF^{NP}(z)+G^{NP}(z)\right)-\mathrm{ln}\left(1+zF^{NP}(z)+G^{NP}(z)\right)\right]`$ (26)
$`={\displaystyle \frac{1}{2}}z_0^2\left[\left(z_0F^{NP}(z_0)+G^{NP}(z_0)\right)-\mathrm{ln}\left(1+z_0F^{NP}(z_0)+G^{NP}(z_0)\right)\right],`$ (27)
whose solutions (if any) allow one to find $`q_0`$ as a function of $`\mathrm{\Lambda }_{NP}`$. On account of this ”stationary” condition, the effective potential (2.22) itself becomes simpler for numerical calculations, namely
$$\overline{\mathrm{\Omega }}_g(z_0^{st})=\frac{1}{2\pi ^2}\left[\left(z_0^{st}F^{NP}(z_0^{st})+G^{NP}(z_0^{st})\right)-\mathrm{ln}\left(1+z_0^{st}F^{NP}(z_0^{st})+G^{NP}(z_0^{st})\right)\right],$$
(28)
where $`z_0^{st}`$ is a solution (if any) of the ”stationary” condition (2.23) and corresponds to a minimum (if any) of the effective potential (2.22). In the next sections we will illustrate how this method works.
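As a numerical illustration of this recipe (a sketch only, using SciPy and a hypothetical Ansatz for the truly nonperturbative combination rather than any of the models discussed below), Eqs. (2.22) and (2.23) can be evaluated as follows.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def x_np(z):
    # hypothetical truly nonperturbative combination x(z) = z F^NP(z) + G^NP(z);
    # it vanishes for z -> infinity, where only the PT phase survives
    return z * np.exp(-z)

def bracket(z):
    # integrand of Eq. (2.22): x - ln(1 + x)
    return x_np(z) - np.log1p(x_np(z))

def omega_bar(z0):
    # Eq. (2.22): Omega_bar(z0) = (1/pi^2) z0^{-2} Int_0^{z0} dz z [x - ln(1+x)]
    integral, _ = quad(lambda z: z * bracket(z), 0.0, z0)
    return integral / (np.pi ** 2 * z0 ** 2)

def stationary(z0):
    # Eq. (2.23) rearranged: Int_0^{z0} dz z [..] - (1/2) z0^2 [..](z0) = 0
    integral, _ = quad(lambda z: z * bracket(z), 0.0, z0)
    return integral - 0.5 * z0 ** 2 * bracket(z0)

grid = np.linspace(0.05, 20.0, 400)
g = np.array([stationary(z) for z in grid])
idx = np.where(np.sign(g[:-1]) != np.sign(g[1:]))[0]
if idx.size:
    z0_st = brentq(stationary, grid[idx[0]], grid[idx[0] + 1])
    print("z0_st =", z0_st, " Omega_bar(z0_st) =", omega_bar(z0_st))
else:
    print("only the trivial solution z0 = 0, as in the model of Sec. IV")
```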
## III Abelian Higgs model
Let us now consider a special model of the dual QCD ground state. In the dual Abelian Higgs theory, which confines electric charges, the coefficient functions $`F(q^2)`$ and $`G(q^2)`$ are (Euclidean metric)
$`F(q^2)`$ $`=`$ $`{\displaystyle \frac{1}{q^2+M_B^2}}\left(1+{\displaystyle \frac{M_B^4D^\mathrm{\Sigma }(q^2)}{q^2+M_B^2}}\right),`$ (29)
$`G(q^2)`$ $`=`$ $`-{\displaystyle \frac{M_B^2}{q^2+M_B^2}}\left(1-M_B^2{\displaystyle \frac{q^2D^\mathrm{\Sigma }(q^2)}{q^2+M_B^2}}\right),`$ (30)
where $`M_B`$ is the mass of the dual gauge boson $`B_\mu `$ and $`D^\mathrm{\Sigma }(q^2)`$ represents the string contribution to the gauge boson propagator. The mass scale parameter $`M_B`$ is the scale responsible for the nonperturbative dynamics in this model (in our notation $`\mathrm{\Lambda }_{NP}=M_B`$). When it formally goes to zero, one indeed recovers the free perturbative expressions (2.4). Removing the string contributions from these relations, we get
$$F^{nostr.}(q^2)=\frac{1}{q^2+M_B^2},\qquad G^{nostr.}(q^2)=-\frac{M_B^2}{q^2+M_B^2},$$
(31)
i.e., even in this case these quantities remain nonperturbative. The truly nonperturbative expressions (2.11) now become
$`F^{NP}(q^2)`$ $`=`$ $`-{\displaystyle \frac{M_B^2}{q^2(q^2+M_B^2)}}\left(1-{\displaystyle \frac{M_B^2q^2D^\mathrm{\Sigma }(q^2)}{q^2+M_B^2}}\right),`$ (32)
$`G^{NP}(q^2)`$ $`=`$ $`-{\displaystyle \frac{M_B^2}{q^2+M_B^2}}\left(1-{\displaystyle \frac{M_B^2q^2D^\mathrm{\Sigma }(q^2)}{q^2+M_B^2}}\right),`$ (33)
while with no-string contributions they are
$$F_{NP}^{nostr.}(q^2)=-\frac{M_B^2}{q^2(q^2+M_B^2)},\qquad G_{NP}^{nostr.}(q^2)=-\frac{M_B^2}{q^2+M_B^2}.$$
(34)
Both expressions (3.3) and (3.4) are indeed truly nonperturbative, since they vanish in the perturbative limit ($`M_B\to 0`$), when only the perturbative phase remains. From these relations it also follows that
$`G^{NP}(q^2)`$ $`=`$ $`q^2F^{NP}(q^2)=-{\displaystyle \frac{M_B^2}{q^2+M_B^2}}\left(1-{\displaystyle \frac{M_B^2q^2D^\mathrm{\Sigma }(q^2)}{q^2+M_B^2}}\right),`$ (35)
$`G_{NP}^{nostr.}(q^2)`$ $`=`$ $`q^2F_{NP}^{nostr.}(q^2)=-{\displaystyle \frac{M_B^2}{q^2+M_B^2}},`$ (36)
so the truly nonperturbative vacuum energy density (2.15) will depend only on one function, say, $`G^{NP}(q^2)`$ (see next section).
Although the expression (2.2), on account of (3.1), for the gluon propagator is exact, it nevertheless contains an unknown function $`D^\mathrm{\Sigma }(q^2)`$, which is the intermediate string state contribution to the gauge boson propagator . It can be considered as a glueball state with the photon quantum numbers $`1^{--}`$. The behavior of this function $`D^\mathrm{\Sigma }(q^2)`$ in the IR region ($`q^2\to 0`$) can be estimated as follows :
$$D^\mathrm{\Sigma }(q^2)=\frac{C}{q^2+M_{gl}^2}+\cdots ,$$
(37)
where $`C`$ is a dimensionless parameter and $`M_{gl}`$ is the mass of the lowest $`1^{--}`$ glueball state. The dots denote the contributions of heavier states. Thus, according to Eqs. (3.1) and (3.6), the coefficient functions in the IR limit behave like
$$F(q^2)=\frac{1}{M_B^2}+\frac{C}{M_{gl}^2}+O(q^2),\qquad G(q^2)=-1+O(q^2),\qquad q^2\to 0.$$
(38)
At the same time, according to Eqs. (3.3), (3.5) and (3.6), their truly nonperturbative counterparts behave like $`G^{NP}(q^2)=q^2F^{NP}(q^2)=-1+O(q^2),q^2\to 0`$, i.e., in the same way as $`G(q^2)`$ in Eq. (3.7).
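The IR limits (3.7) can be cross-checked symbolically; the sketch below inserts the leading term (3.6) into the coefficient functions (3.1) as reconstructed above (so it inherits our sign conventions).

```python
import sympy as sp

q2, MB2, Mgl2, C = sp.symbols('q2 MB2 Mgl2 C', positive=True)
Dsig = C / (q2 + Mgl2)                                  # Eq. (3.6), heavier states dropped
F = (1 + MB2**2 * Dsig / (q2 + MB2)) / (q2 + MB2)       # F(q^2) of Eq. (3.1)
G = -(MB2 / (q2 + MB2)) * (1 - MB2 * q2 * Dsig / (q2 + MB2))  # G(q^2) of Eq. (3.1)

print(sp.series(F, q2, 0, 1))   # -> 1/MB2 + C/Mgl2 + O(q2)
print(sp.series(G, q2, 0, 1))   # -> -1 + O(q2)
```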
## IV Vacuum structure in the Abelian Higgs model
Let us calculate the truly nonperturbative VED in the Abelian Higgs model described in the preceding section. It is instructive to start from the case when there are no string contributions to the structure functions $`F(q^2)`$ and $`G(q^2)`$. Then their truly nonperturbative parts are given in Eqs. (3.4). It is convenient to factorize the scale dependence of the VED by introducing dimensionless variables and parameters in accordance with the B-scheme (2.19), with $`\mathrm{\Lambda }_{NP}=M_B`$. In this case, the gluon form factors (structure functions) obviously become functions of $`z`$ only, $`q^2F^{NP}(q^2)=G^{NP}(q^2)=-1/(1+z)`$. The truly nonperturbative VED (2.22) becomes
$$\overline{\mathrm{\Omega }}_g(z_0)=-\frac{1}{q_0^4}ϵ_g^{np}(z_0)=\frac{1}{\pi ^2}z_0^{-2}\int _0^{z_0}dz\,z\left[-\frac{2}{1+z}+\mathrm{ln}\left(\frac{1+z}{z-1}\right)\right].$$
(39)
Easily integrating Eq. (4.1), one obtains
$$\overline{\mathrm{\Omega }}_g(z_0)=\frac{1}{2\pi ^2}z_0^{-2}\left[-2z_0+4\mathrm{ln}(1+z_0)-\mathrm{ln}\left(\frac{1+z_0}{1-z_0}\right)+z_0^2\mathrm{ln}\left(\frac{1+z_0}{z_0-1}\right)\right].$$
(40)
From this expression it is almost obvious that the effective potential has an imaginary part at any finite value of the soft cutoff, which is a direct manifestation of the vacuum instability. The asymptotics of the effective potential (4.2) to leading order are
$`\overline{\mathrm{\Omega }}_g(z_0)_{z_0\to 0}`$ $`\simeq `$ $`-{\displaystyle \frac{1}{2\pi ^2}}\mathrm{ln}(-1),`$ (41)
$`\overline{\mathrm{\Omega }}_g(z_0)_{z_0\to \infty }`$ $`\simeq `$ $`{\displaystyle \frac{2}{\pi ^2}}z_0^{-2}\mathrm{ln}z_0.`$ (42)
Let us recall that $`z_0\to \infty `$ is the perturbative limit ($`M_B\to 0`$) when the soft cutoff $`q_0`$ is fixed. The ”stationary” condition (2.23) now is
$$4\mathrm{ln}(1+z_0)-\mathrm{ln}\left(\frac{1+z_0}{1-z_0}\right)=\frac{2z_0}{1+z_0}.$$
(43)
It has only the trivial solution $`z_0=0`$, so the ”stationary” state does not exist in this model.
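A direct numerical evaluation of Eq. (4.1) (as reconstructed above, with the principal branch of the complex logarithm, so that $`\mathrm{ln}(-1)=i\pi `$) makes the instability explicit: the imaginary part is nonzero for any finite $`z_0`$ and dies out only in the perturbative limit $`z_0\to \infty `$.

```python
import numpy as np

# Midpoint quadrature of Eq. (4.1) with x(z) = -2/(1+z); for z < 1 the
# argument 1 + x = (z-1)/(z+1) is negative, so the log picks up i*pi.
def omega_bar(z0, n=200000):
    z = (np.arange(n) + 0.5) * (z0 / n)
    x = -2.0 / (1.0 + z)
    integrand = z * (x - np.log((1.0 + x) + 0j))   # principal branch
    return integrand.sum() * (z0 / n) / (np.pi ** 2 * z0 ** 2)

for z0 in (0.5, 1.0, 5.0, 50.0):
    w = omega_bar(z0)
    print(f"z0 = {z0:6.1f}  Re = {w.real:+.4f}  Im = {w.imag:+.4f}")
# For z0 <= 1 one finds Im = -1/(2*pi), matching the z0 -> 0 asymptotic (4.3).
```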
## V Conclusions
In summary, we have formulated a general method for calculating the truly nonperturbative VED in axial gauge QCD quantum models of its ground state, using the effective potential approach for composite operators. It is defined by integrating the truly nonperturbative part of the full gluon propagator over the deep IR region (soft momentum region). The nontrivial minimization procedure, which can be done in only two different ways (leading, however, to the same numerical value of the VED, if any), makes it possible to determine the value of the soft cutoff in terms of the corresponding nonperturbative scale parameter, which is inevitably present in any nonperturbative model for the full gluon propagator. If the chosen Ansatz for the full gluon propagator is a realistic one, then our method uniquely determines the truly nonperturbative VED, which is always finite, automatically negative and has no imaginary part (see, for example, our previous publications ). Here we illustrate this by considering the Abelian Higgs model of the dual QCD ground state. The quantum part of the VED (4.2) always contains an imaginary part. Thus, the vacuum of the Abelian Higgs model without string contributions is indeed unstable. Whether the string contributions can cure this fundamental problem or not is beyond the scope of this letter and is left for consideration elsewhere. The vacuum instability of the Abelian Higgs model without string contributions would, of course, be recovered within the A-scheme (2.16) as well. Nothing depends on how one introduces the scale dependence by choosing different scale parameters.
Comparing Eqs. (2.9) and (2.15), a prescription to obtain the relevant expression for the truly nonperturbative VED can be derived. Indeed, for this purpose in Eq. (2.9) the replacement $`q^2F(q^2)+G(q^2)\to 1+q^2F^{NP}(q^2)+G^{NP}(q^2)`$ should be made. Also the soft cutoff $`q_0^2`$ should be introduced on the upper limit. It now looks like the UV cutoff, but nevertheless let us underline once more that it separates the deep IR region from the perturbative one, which includes the intermediate momentum (IM) region as well. It cannot be arbitrarily large, as the UV cutoff is, by definition. Once an Ansatz for the full gluon propagator is chosen, the separation ”NP versus PT” in Eq. (2.10) is exact because of the definition (2.11). The separation ”soft versus hard” momenta is also exact because of the above-mentioned minimization procedure. Thus the proposed determination of the truly nonperturbative VED is uniquely defined. It is possible to minimize the effective potential at a fixed scale (2.17) with respect to the physically meaningful parameter. When it is zero, only the perturbative phase survives in all quantum models of the QCD ground state. Equivalently, we can minimize the auxiliary effective potential (2.22) as a function of the soft cutoff itself. As was underlined above, both methods lead to the same numerical value for the truly nonperturbative VED.
No general method can be formulated for calculating the confining quark quantum contribution to the total VED, since this contribution depends heavily on the particular solutions to the quark SD equation. If it is correctly calculated, then it is of opposite sign to the nonperturbative gluon part and is one order of magnitude smaller as well (see, for example, our recent papers ). Concluding, let us note that the generalization of our method to different noncovariant gauges is straightforward. Let us underline that our method is not a solution to the above-mentioned fundamental problem of the badly divergent VED. However, it is a general one and can be applied to any nontrivial model of the QCD quantum vacuum in order to extract the finite part of the truly nonperturbative VED in a self-consistent way. In particular, it can serve as a test of different axial gauge QCD quantum as well as classical vacuum models, since our method provides an exact criterion for the separation ”stable versus unstable vacua”.
###### Acknowledgements.
One of the authors (V.G.) is grateful to M. Polikarpov, M. Chernodub, A. Ivanov and especially to V.I. Zakharov for useful and critical discussions which finally led the authors to the formulation of the general method presented here. He also would like to thank H. Toki for many discussions on this subject during his stay at RCNP, Osaka University. This work was supported by the OTKA grants No: T016743 and T029440.
## 1. INTRODUCTION
Luminosity $`L`$, radius $`R`$ and rotation velocity $`V`$ are basic parameters for spiral galaxies. We have known the correlations between each two of them: the log$`L`$-log$`V`$ (Tully & Fisher 1977; TF), log$`V`$-log$`R`$ (also Tully & Fisher 1977) and log$`R`$-log$`L`$ (Freeman 1970) correlations. These scaling relations provide an observational benefit to measure galaxy distances (e.g., Strauss & Willick 1995; Giovanelli et al. 1997), and also provide theoretical benchmarks to understand the structure, formation and evolution of spiral galaxies (e.g., Dalcanton, Spergel & Summers 1997; Silk 1997; Mo, Mao & White 1998).
There have been many efforts to search tighter correlations than these three. In order to improve the accuracy of distance estimation, a third-parameter effect on the TF relation, i.e. a correlation between TF residuals and a third parameter, have been sought by many authors (e.g., Willick et al. 1997; Courteau & Rix 1999). Most of them have concluded that the third-parameter effect may not be crucial, while Willick (1999) has found a slight dependence of TF residuals on surface brightness. On the other hand, principal component analyses have suggested that two parameters are necessary and sufficient to describe spiral galaxies (see Djorgovski 1992, for a review), in contrast to stars which are described by one parameter (mass). Kodaira (1989) has found that the correlation among all the three parameters, log$`L`$, log$`R`$ and log$`V`$, is much tighter than the correlations between each two of them. Koda & Sofue (2000) have recently found that spiral galaxies are distributed on a surfboard-shaped plane in the 3-D space (log$`L`$, log$`R`$, log$`V`$). The 2-D scaling relations ($`L`$-$`V`$, $`V`$-$`R`$, $`R`$-$`L`$) can be understood uniformly as oblique projections of this surfboard-shaped plane. Koda & Sofue (2000) also argued that this unified scaling relation would be produced through galaxy formation which is affected by galactic mass and angular momentum.
Theoretically the importance of mass and angular momentum in the structure of spiral galaxies has, of course, been discussed by many authors (e.g., Fall & Efstathiou 1980; Kashlinsky 1982). Recently, the 2-D scaling relations ($`L`$-$`V`$, $`V`$-$`R`$, $`R`$-$`L`$) have been discussed as the products of galaxy formation which is controlled by mass and angular momentum (Dalcanton, Spergel & Summers 1997; Mo, Mao & White 1998; Koda, Sofue & Wada 2000). In this Letter, we discuss whether the unified scaling relation (plane) in the 3-D space can also be a product of mass and angular momentum. We take the $`N`$-body/SPH approach which includes cooling, star formation and stellar feedback (see Tissera, Lambas & Abadi 1997; Weil, Eke & Efstathiou 1998; Steinmetz & Navarro 1999; Elizondo et al. 1999; Koda, Sofue & Wada 2000), and consider the formation of 14 galaxies with different masses and angular momenta. The simulated galaxies show internal structures as observed in spiral galaxies, e.g., the exponential density profile, flat rotation curve, and distributions of stellar age and metallicity. Using these simulated “spiral galaxies”, we try to confirm the origin of the unified scaling relation.
## 2. OBSERVATIONAL FACT
We briefly introduce the unified scaling relation in spiral galaxies. Throughout this Letter, we use the data set presented by Han (1992), which consists of member galaxies in 16 clusters. All the sample galaxies in each cluster are assumed to be at the same distance indicated by the systemic recession velocities of the host cluster, which are measured in the CMB reference frame (Willick et al. 1995). We assume $`h=0.5`$, where $`h`$ is the present Hubble constant in units of $`100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. In order to select exact members of a cluster, we reject galaxies whose recession velocities deviate more than $`1,000\mathrm{km}\mathrm{s}^{-1}`$ from the mean velocity of the cluster. We use total $`I`$-band magnitude $`M_\mathrm{I}(\mathrm{mag})`$, HI velocity width $`W_{20}(\mathrm{km}\mathrm{s}^{-1})`$ and face-on $`I`$-band isophotal radius $`R_{23.5}(\mathrm{kpc})`$. Final samples consist of 177 spiral galaxies.
When we consider the 3-D space of luminosity $`L`$, radius $`R`$ and rotation velocity $`V`$, observed spiral galaxies are (i) distributed on a plane as $`L\propto (VR)^{1.3}`$ and (ii) distributed in a surfboard-shaped region on the plane (Koda & Sofue 2000). Figure 1 schematically illustrates the situation with parameters of radius $`\mathrm{log}R`$, velocity $`\mathrm{log}W`$ and absolute magnitude $`M_I`$. Since the well-known 2-D scaling relations ($`LV`$, $`VR`$, $`RL`$) can be understood uniformly as oblique projections of this surfboard-shaped plane (Figure 1), we hereafter call the plane the scaling plane. The upper panels of Figure 2 show the Tully-Fisher projection (left) and the edge-on projection (right) of the scaling plane. The edge-on projection has a tighter correlation than the Tully-Fisher projection. The same plane can be found in the data sets of Mathewson et al. (1992) and Courteau (1999) as well. Note the $`LV`$, $`VR`$ and $`RL`$ relations themselves may also be found as the projections of a prolate (not thin plane) distribution in a 3-D space. However, the plane distribution unifies the scatters of these three 2-D correlations as well.
In the 3-D space, observed galaxies are spread over a range of about two orders of magnitude in $`L`$, and over factors of several in $`R`$ and $`V`$. Hence the scaling plane has exactly the elongated (surfboard) shape. The primary and secondary axes are schematically illustrated in Figure 1. We hypothesize (a) that the 2-D distribution implies the existence of two dominant physical factors in spiral galaxy formation, and (b) that one of them is more dominant than the other because of the surfboard shape.
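The surfboard-shaped plane can be illustrated with a short principal component analysis on synthetic data; the exponent 1.3 comes from the text, while the spreads and scatter below are a hypothetical toy realization, not the Han (1992) sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 177
logV = rng.normal(2.4, 0.15, n)            # primary (mass-like) spread
logR = rng.normal(1.1, 0.08, n)            # secondary (spin-like) spread
logL = 1.3 * (logV + logR) + rng.normal(0.0, 0.05, n)   # plane + small scatter

X = np.column_stack([logL, logV, logR])
X -= X.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(X.T))
print("variances along principal axes:", np.sort(eigval)[::-1])
# two eigenvalues dominate and the third is small:
# a thin, elongated (surfboard-shaped) plane
```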
## 3. NUMERICAL EXPERIMENT
### 3.1. Numerical Methods
We simulate formation and evolution of spiral galaxies by an $`N`$-body/SPH method similar to Katz (1992) and Steinmetz & Müller (1994, 1995). We use a GRAPE-SPH code (Steinmetz 1996), a hybrid scheme combining smoothed particle hydrodynamics with the $`N`$-body integration hardware GRAPE-3 (Sugimoto et al. 1990). This code can treat the gravitational and hydrodynamical forces, radiative cooling, star formation and stellar feedback (see Koda, Sofue & Wada 2000 for details).
We take a phenomenological model of star formation. If a region is locally contracting and Jeans-unstable, stars are formed at a rate $`\dot{\rho }_{*}=c_{*}\rho _{\mathrm{gas}}/\mathrm{max}(\tau _{\mathrm{dyn}},\tau _{\mathrm{cool}})`$, where $`\rho _{*}`$, $`\rho _{\mathrm{gas}}`$, $`\tau _{\mathrm{dyn}}`$ and $`\tau _{\mathrm{cool}}`$ are the local densities of stars and the gas, dynamical and cooling timescales, respectively. We set $`c_{*}=0.05`$. We assume that massive stars with mass $`m\ge 8M_{\odot }`$ explode as type II supernovae and release energy ($`10^{51}\mathrm{erg}`$), mass ($`m-1.4M_{\odot }`$) and metals ($`16\%`$ of total released mass on an average; see Nomoto et al. 1997a) into the surrounding gas at a constant rate in the first $`4\times 10^7\mathrm{yr}`$ from their birth. They leave white dwarfs with mass $`1.4M_{\odot }`$ after the explosion. And 15 % of the white dwarfs are assumed to result in type Ia supernovae (Tujimoto et al. 1995), which release energy ($`10^{51}\mathrm{erg}`$), mass ($`1.4M_{\odot }`$) and metals ($`100\%`$ of total released mass; see Nomoto et al. 1997b) into the surrounding gas. The number of the massive stars is counted with the initial mass function (IMF) of Salpeter (1955) and we set the lower $`m_l`$ and upper mass $`m_u`$ of stars to $`(m_l,m_u)=(0.1M_{\odot },60M_{\odot })`$. The energy is released into the surrounding gas as thermal energy.
### 3.2. Initial Conditions
We consider 14 homogeneous spheres which are rigidly rotating, isolated, and overdense above the background field by $`\delta \rho /\rho =0.25`$. The spheres follow the reduced Hubble expansion at $`z=25`$ in the CDM cosmology ($`\mathrm{\Omega }_0=1`$, $`h=0.5`$ and the rms fluctuation in $`8h^{-1}\mathrm{Mpc}`$ spheres $`\sigma _8=0.63`$). Small scale CDM fluctuations are superimposed on the considered spheres. We use the same realization (random numbers) of the fluctuations for all the 14 galaxies. Two free parameters, total mass $`M`$ and spin parameter $`\lambda `$, are $`M=8\times 10^{11}M_{\odot }`$ ($`\lambda =0.10,0.08,0.06`$), $`4\times 10^{11}M_{\odot }`$ ($`0.08,0.06,0.04`$), $`2\times 10^{11}M_{\odot }`$ ($`0.10,0.08,0.06,0.04`$), and $`1\times 10^{11}M_{\odot }`$ ($`0.10,0.08,0.06,0.04`$). Since we consider collapses of isolated spheres, there is no infall of clumps at low redshift which causes an extreme transfer of angular momentum from baryons to dark matter (Navarro, Frenk & White 1995; Steinmetz & Navarro 1999).
The gas and dark matter are represented by the same number of particles, and their mass ratio is set to $`1/9`$ (Steinmetz & Müller 1995). The mass of a gas particle varies between $`2.4\times 10^6`$ and $`1.9\times 10^7M_{\odot }`$ according to the system mass considered. The mass of a dark matter particle is between $`2.1\times 10^7`$ and $`1.7\times 10^8M_{\odot }`$. Low resolution may cause artificial heating due to two-body relaxation; however, this range of particle mass is small enough to exclude the artificial heating effect (Steinmetz & White 1997). The gravitational softenings are taken to be $`1.5\mathrm{kpc}`$ for gas and star particles, and $`3\mathrm{kpc}`$ for dark matter.
## 4. RESULTS
We compute the absolute magnitude $`M_I`$ of each “spiral galaxy” at $`z=0`$ with the simple stellar population synthesis models of Kodama & Arimoto (1997), and take the isophotal radius $`R_{23.5}`$ $`(\mathrm{kpc})`$ at the level $`23.5\mathrm{mag}\mathrm{arcsec}^{-2}`$ in $`I`$-band. The line-width $`W_{20}`$ $`(\mathrm{km}\mathrm{s}^{-1})`$ is derived in a way similar to observation, by constructing a line-profile of the gas and measuring the width at the $`20\%`$ level of the peak flux. All the simulated galaxies have the exponential-light profile and the flat rotation curve (see Koda, Sofue & Wada 2000).
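The line-width measurement can be sketched as follows; the double-horned profile is a hypothetical stand-in for the simulated gas line profiles.

```python
import numpy as np

def w20(velocity, flux):
    """Full width of the profile at 20% of the peak flux."""
    level = 0.2 * flux.max()
    above = np.where(flux >= level)[0]          # channels above the 20% level
    return velocity[above[-1]] - velocity[above[0]]

v = np.linspace(-400.0, 400.0, 2001)                 # km/s
flux = np.exp(-((np.abs(v) - 180.0) / 40.0) ** 2)    # two horns at +/-180 km/s
print("W20 =", w20(v, flux), "km/s")                 # ~ 2*(180 + 40*sqrt(ln 5))
```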
### 4.1. The Scaling Plane of Simulated Galaxies
In Figure 2, we compare the observed (upper panels) and simulated (lower panels) distributions of spiral galaxies in the TF projection (right panels) and edge-on (left panels) projection of the scaling plane. In the lower panels, the dotted lines represent the observed correlations (as do the solid lines in the upper panels), and we shift the zero-point of the solid lines to fit the simulations. The ranges of the figures are shifted between the upper and lower panels because of the systemic shift of simulated galaxies. The lengths of the axes, however, are exactly the same and we can compare the slope and scatter between the upper and lower panels.
In this figure, we find the following three points: (i) The slope and scatter of both correlations are well reproduced in the simulation. Note that the slope and scatter of $`LR`$ and $`RV`$ are also consistent with the observations. (ii) The edge-on projection of the simulated scaling plane shows a much better correlation than the simulated TF projection, similar to the observations. The simulations reproduce the slope and scatter of the scaling plane well. (iii) However, the distribution of simulated galaxies is systematically shifted from that of observed galaxies.
The systemic shift of the simulated distribution from the observed one amounts to $`(\mathrm{\Delta }M_I,\mathrm{\Delta }\mathrm{log}R_{23.5})=(1.5,-0.3)`$ in the 3-D space. This shift would result mainly from the adopted cosmology ($`h=0.5`$, $`\mathrm{\Omega }_0=1`$), which could contribute to the shift in two ways: (a) The $`h`$ shifts the observed galaxies through distance estimation. If we change $`h`$ from 0.5 to 1, observed galaxies are shifted by $`(\mathrm{\Delta }M_I,\mathrm{\Delta }\mathrm{log}R_{23.5})=(1.5,-0.3)`$, which is sufficient to explain the above shift. (b) A lower $`\mathrm{\Omega }_0`$ would increase the ratio of baryon to dark matter, and then decrease the mass-to-light ratio. If we decrease $`\mathrm{\Omega }_0`$, simulated galaxies would be shifted in the direction of $`\mathrm{\Delta }M_I<0`$ and $`\mathrm{\Delta }\mathrm{log}R_{23.5}>0`$. \[Note, on the contrary, if we assume a lower baryon fraction in galaxies than the one adopted here, the simulated galaxies would be shifted in the opposite direction.\]
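The size of the distance-scale contribution (a) follows from simple arithmetic: doubling $`h`$ halves all distances, so

```python
import numpy as np
# h: 0.5 -> 1 halves distances: magnitudes grow by 5*log10(2),
# log-radii shrink by log10(2), i.e. roughly (+1.5, -0.3) as quoted above.
print(5.0 * np.log10(2.0), -np.log10(2.0))
```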
In our simulations, the procedure of galaxy formation and evolution is not affected so much by changing the cosmology since we consider initial conditions of nearly monolithic collapse. Hence the comparison only in simulated galaxies would be possible even though the zero-point is shifted.
### 4.2. Origin of The Scaling Plane
As discussed in Section 2, the scaling plane has the primary and secondary axes. Here we show that these two axes of the simulated scaling plane correspond to galactic mass and angular momentum, respectively. In order for these two parameters to correspond to the primary and secondary axes exactly, they must satisfy the following three conditions: (a) The axes along these two parameters are on the scaling plane. (b) These axes are not parallel each other. (c) The axis along mass (angular momentum) is parallel to the primary (secondary) axis. In Figure 2, the lower-right panel shows the edge-on projection. All the simulated galaxies, which have different mass and angular momentum, lie on the same scaling plane. The condition (a) is satisfied. The axes along mass and angular momentum are illustrated in the lower-left panel of Figure 2 (see also Koda, Sofue & Wada 2000). In this TF projection, the axes along mass and angular momentum (spin parameter) are not parallel each other, which satisfies the condition (b). It is clear that the projections of the primary and secondary axes onto the TF plot are along the directions of mass and angular momentum, respectively, satisfying the condition (c). We conclude that the scaling plane is spread by the difference of primarily galactic mass and secondarily angular momentum.
The backbone of the galactic scaling relations is the virial theorem. Most parameters would be determined predominantly by the galactic mass. However, if mass were the only parameter which determines galactic properties, galaxies would be distributed on a line in the 3-D space. The secondary factor, the spin parameter, causes a slight spread in the properties of disk galaxies. Then, spiral galaxies are distributed on a particularly elongated (surfboard-shaped) plane in the 3-D space.
In fact, the spin parameter (angular momentum) affects galactic properties in the following three ways: (i) The spin parameter changes the central concentration of disks in dark matter halos. A lower spin parameter produces relatively concentrated disks and leads to higher rotation velocities. (ii) The spin parameter changes the radius of spiral galaxies. A higher spin parameter produces galaxies with larger radii. (iii) Therefore a higher spin parameter produces galaxies with lower surface densities, and hence leads to slower star formation. This results in brighter galaxies at $`z=0`$ because of the relatively younger age of their stellar component. These three effects produce the scatters of the three scaling relations ($`LV`$, $`VR`$, $`RL`$).
## 5. DISCUSSION
We have introduced the scaling plane (unified scaling relations) of observed spiral galaxies in the 3-D space of luminosity, radius and rotation velocity, and investigated a possible origin of the scaling plane. We have shown that mass primarily determines the galaxy position in the 3-D space, and angular momentum (spin parameter) produces a slight spread on the scaling plane. The scaling plane originates in the galaxy formation process, controlled by these two factors, mass and angular momentum. In order to clarify the uniqueness of the origin, one could further consider (1) other cosmological models (Mo, Mao & White 1998), (2) different ratios of baryon to dark matter, (3) different mass aggregation histories (Avila-Reese et al. 1998), and (4) other modelings of star formation and feedback (Silk 1997).
Many studies have concluded that there is no correlation of TF residuals with radius or any other parameter. These results appear to be against the existence of the scaling plane. We should note, however, that the existence of the scaling plane does not imply a clear correlation between TF residuals and radius, when the plane contains any kind of scatter, e.g., observational errors or intrinsic one. The apparent discrepancy comes from a confusion of two facts, that is, spiral galaxies are distributed (i) on a plane, and (ii) in a surfboard-shaped region on it (see Section 2). The definition of TF residuals is affected by the property (ii). If the surfboard-shaped region rotates on the same plane, the TF relation (projected relation) will be changed in its slope, zero-point and the definition of residuals as well (cf. Figure 1). Hence the correlation of TF residuals with radius is strongly affected by the property (ii), i.e., how galaxies are distributed on the plane, and if the plane contains any kind of scatter such as errors in observation, the combination of the property (ii) and the scatter could hide the property (i), i.e., the existence of the scaling plane.
Still, the scaling plane implies correlations of each scaling relation ($`LV`$, $`VR`$, $`RL`$) with surface brightness, at least in normal galaxies. It is interesting to investigate whether low surface brightness (LSB) galaxies are also distributed on the scaling plane. Zwaan et al. (1995) argued that LSB galaxies lie on the same TF relation as normal galaxies, while O'Neil, Bothun & Schombert (1999) have concluded that their sample of LSB galaxies does not produce the TF relation. So, the question is still under debate, and further research would be necessary to discuss LSB galaxies in analyses of the scaling plane. There have been studies which concluded that Freeman's law would be an artifact due to observational selection effects, because LSB galaxies deviate from the luminosity-radius relation of normal galaxies (recently, de Jong 1996; Scorza & van den Bosch 1998). The scaling plane is so tight that the plane itself would not be an artifact due to selection effects. However, the galaxy distribution on the plane may change if selection effects affect the sample. LSB galaxies may provide a clue to understanding such selection effects, if they follow the same sequence as normal galaxies.
Numerical computations were carried out on the GRAPE system at the Astronomical Data Analysis Center of the National Astronomical Observatory, Japan. We would like to thank Dr. N. Arimoto for providing us with the tables of the stellar population synthesis. We are grateful to the anonymous referee for his/her fruitful comments. J.K. thanks the Hayakawa Fund of the Astronomical Society of Japan. J.K. also thanks Mrs. M. Redmond for reading the manuscript.
# Observation of the Decay $`K_L\to \mu ^+\mu ^{-}\gamma \gamma `$.
## Abstract
We have observed the decay $`K_L\mu ^+\mu ^{}\gamma \gamma `$ at the KTeV experiment at Fermilab. This decay presents a formidable background to the search for new physics in $`K_L\pi ^0\mu ^+\mu ^{}`$. The 1997 data yielded a sample of 4 signal events, with an expected background of 0.155 $`\pm `$ 0.081 events. The branching ratio is $`ℬ`$($`K_L\mu ^+\mu ^{}\gamma \gamma `$) $`=(10.4_{-5.9}^{+7.5}(\mathrm{stat})\pm 0.7(\mathrm{sys}))\times 10^{-9}`$ with $`m_{\gamma \gamma }>1\mathrm{MeV}/\mathrm{c}^2`$, consistent with a QED calculation which predicts $`(9.1\pm 0.8)\times 10^{-9}`$.
In this paper we present the first measurement of the branching ratio for $`K_L\mu ^+\mu ^{}\gamma \gamma `$. This decay is expected to proceed mainly via the Dalitz decay $`K_L\mu ^+\mu ^{}\gamma `$ with an internal bremsstrahlung photon. This decay is one of a family of radiative decays ($`K_L\mu ^+\mu ^{}\gamma `$, $`K_L\mu ^+\mu ^{}\gamma \gamma `$, $`K_Le^+e^{}\gamma `$, $`K_Le^+e^{}\gamma \gamma `$) which are under study at KTeV and elsewhere . The decay $`K_L\mu ^+\mu ^{}\gamma \gamma `$ presents a formidable background to the search for direct CP violation and new physics in $`K_L\pi ^0\mu ^+\mu ^{}`$ decays .
The measurement presented here was performed as part of the KTeV experiment, which has been described elsewhere . The experiment used two nearly parallel $`K_L`$ beams created by 800 GeV protons incident on a BeO target. The decays used in our studies were collected in a region approximately 65 meters long, situated 94 meters from the production target. The fiducial volume was surrounded by a photon veto system used to reject events in which photons missed the calorimeter. The charged particles were detected by four drift chambers, each consisting of one horizontal and one vertical pair of planes, with typical resolution of 70 $`\mu m`$ per plane pair. Two drift chambers were situated on either side of an analysis magnet which imparted 205 MeV/c of transverse momentum to charged particles. The drift chambers were followed by a trigger hodoscope bank, and a 3100 element pure CsI calorimeter with electromagnetic energy resolution of $`\sigma (E)/E=0.45\%\oplus 2.0\%/\sqrt{E(\mathrm{GeV})}`$. The calorimeter was followed by a muon filter composed of a 10 cm thick lead wall and three steel walls totalling 511 cm. Two planes of scintillators situated after the third steel wall served to identify muons. The planes had 15 cm segmentation, one horizontal, the other vertical.
The trigger for the signal events required hits in the upstream drift chambers consistent with two tracks, as well as two hits in the trigger hodoscopes. The calorimeter was required to have at least one cluster with over 1 GeV in energy, within a narrow (20 ns) time gate. The muon counters were required to have at least two hits. In addition, preliminary online identification of these decays required reconstruction of two track candidates originating from a loosely-defined vertex, and each of those track candidates was required to point to a cluster in the calorimeter with energy less than $`5\mathrm{GeV}`$. A separate trigger was used to collect $`K_L\pi ^+\pi ^{}\pi ^0`$ decays which were used for normalization. This trigger was similar to the signal trigger but had no requirements on hits in the muon hodoscopes or clusters in the calorimeter. The preliminary online identification was performed on the normalization sample as well, but no energy requirements were made on clusters pointed to by the tracks. The normalization mode trigger was prescaled by a factor of 500:1.
The main background to $`K_L\mu ^+\mu ^{}\gamma \gamma `$ was the Dalitz decay $`K_L\mu ^+\mu ^{}\gamma `$ with an additional cluster in the calorimeter coincident with but not due to the decay. Such an “accidental” cluster could appear as a photon. Additional backgrounds were $`K_L\pi ^+\pi ^{}\pi ^0`$ decays with the charged pions misidentified as muons due to pion decay or pion punch-through of the filter steel, and $`K_L\pi ^\pm \mu ^{\mp }\nu `$ decays ($`K_{\mu 3}`$) with both charged-pion misidentification and accidental cluster contributions. Other contributions, such as $`K_L\pi ^+\pi ^{}`$ decays and $`K_L\pi ^+\pi ^{}\gamma `$ decays, were negligible.
Offline analysis of the signal required the full reconstruction of exactly two tracks. The vertex reconstructed from the two tracks was required to fall between 100 meters and 158 meters from the target. In order to reduce backgrounds due to pion decay in flight, we required that the track segments upstream and downstream of the analysis magnet matched to within 1 mm at the magnet bend plane. Further, we required the $`\chi ^2`$ calculated from the reconstructed two-track vertex be less than 10 for 1 degree of freedom. Tracks were required to have momenta equal to or greater than 10 GeV/c to put them above threshold for passing through the filter steel but below 100 GeV/c to ensure well measured track momenta. Since muons typically deposit $`\sim `$ 400 MeV in the calorimeter, we required the energy deposited by each track be 1 GeV or less. In addition, we required two non-adjacent hits in both the vertically and horizontally segmented muon counters.
Figure 1 shows the expected distribution of cluster energy due to photons from $`K_L\mu ^+\mu ^{}\gamma `$ events and those from accidental sources. Accidental clusters in the calorimeter were typically of low energy. Events were required to have two calorimeter clusters consistent with photons with no tracks pointing to them. One of these clusters was required to have greater than 10 GeV of energy, thus reducing backgrounds due to accidental clusters.
In order to reject backgrounds from decays that contained a $`\pi ^0`$, the invariant mass of the two photons, $`m_{\gamma \gamma }`$, was required to be less than 130 MeV/$`c^2`$. Approximately 8% of the $`K_L\pi ^+\pi ^{}\pi ^0`$ decays in which the charged pions decay to muons survived the $`m_{\gamma \gamma }`$ cut because the mismeasurement of the charged vertex smeared the $`m_{\gamma \gamma }`$ distribution. In order to remove these events, we constructed a variable ($`\mathrm{R}_{\perp }^{\pi \pi }`$) defined as
$$\mathrm{R}_{\perp }^{\pi \pi }=\frac{(m_K^2-m_{\pi \pi }^2-m_{\pi ^0}^2)^2-4m_{\pi \pi }^2m_{\pi ^0}^2-4m_K^2p_{\perp \pi \pi }^2}{p_{\perp \pi \pi }^2+m_{\pi \pi }^2}$$
(1)
where $`m_K`$ is the kaon mass, $`m_{\pi \pi }`$ is the invariant mass of the two tracks assuming they are due to charged pions, $`p_{\perp \pi \pi }^2`$ is the square of the transverse momentum of the two pions with respect to a line connecting the target to the two-track vertex, and $`m_{\pi ^0}`$ is the mass of the $`\pi ^0`$. This quantity is proportional to the square of the longitudinal momentum of the $`\pi ^0`$ in a frame along the $`K_L`$ flight direction where the $`\pi ^+\pi ^{}`$ pair has no longitudinal momentum.
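For concreteness, the sketch below transcribes Eq. (1) directly into code; the GeV units and the PDG mass values are assumptions added here for illustration, not quantities quoted in the paper.

```python
M_K, M_PI0 = 0.497611, 0.134977   # kaon and pi0 masses in GeV/c^2 (PDG values)

def r_perp_pipi(m_pipi, p_perp2):
    """Eq. (1): m_pipi is the two-track mass under the pion hypothesis
    (GeV/c^2) and p_perp2 the squared transverse momentum of the pair in
    (GeV/c)^2; the cut quoted below keeps events with small values."""
    num = ((M_K**2 - m_pipi**2 - M_PI0**2) ** 2
           - 4.0 * m_pipi**2 * M_PI0**2
           - 4.0 * M_K**2 * p_perp2)
    return num / (p_perp2 + m_pipi**2)
```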
Figure 2 shows the expected distribution of $`\mathrm{R}_{\perp }^{\pi \pi }`$ for the signal (using an $`𝒪(\alpha )`$ QED matrix element), and the $`K_L\pi ^+\pi ^{}\pi ^0`$ background. By requiring $`\mathrm{R}_{\perp }^{\pi \pi }`$ to be -0.06 or less, 92.7% of the remaining $`K_L\pi ^+\pi ^{}\pi ^0`$ background was eliminated.
The invariant mass of the two tracks assuming muons, $`m_{\mu \mu }`$, provided a way to reduce backgrounds due to $`K_{\mu 3}`$ decays. Figure 3 shows the expected distribution of $`m_{\mu \mu }`$ for the signal and background. We required $`m_{\mu \mu }`$ to be less than 340 MeV/$`c^2`$. This cut eliminated 92.9% of the $`K_{\mu 3}`$ events.
The cosine of the angle between the two photons in the kaon rest frame, $`\mathrm{cos}\theta _{\gamma \gamma }`$, was also used to reject $`K_{\mu 3}`$ decays. The distribution of $`\mathrm{cos}\theta _{\gamma \gamma }`$ for the signal peaks at -1 corresponding to anti-collinear emission of the two photons. The $`K_{\mu 3}`$ background, which has two accidental clusters identified as photons, displays no such correlation. Figure 4 shows the expected $`\mathrm{cos}\theta _{\gamma \gamma }`$ distribution for signal and $`K_{\mu 3}`$ background. We required $`\mathrm{cos}\theta _{\gamma \gamma }`$ to be -0.3 or less. This cut rejected 85.3% of the remaining $`K_{\mu 3}`$ events.
We also required the transverse shower shape for the photon clusters to be consistent with that expected from an electromagnetic process. The $`\chi ^2`$ of the spatial distribution of energy deposited in the calorimeter was used to identify clusters as photons. This cut reduced the remaining backgrounds due to accidental energy by a factor of 4.5 while retaining 98.8% of the signal events.
In order to estimate the amount of background in the signal region, we simulated all the leading sources of background. Our simulation incorporated both charged pion decay in flight and punch-through of the filter steel. The punch-through rate was a function of $`\pi ^\pm `$ momentum, determined by a $`K_L\pi e\nu `$ control sample. The effect of accidental activity was simulated by overlaying Monte Carlo events with data from a random trigger that had a rate proportional to the beam intensity. The estimated background level is detailed in Table I. A total of $`0.155\pm 0.081`$ background events are expected within the signal region. This region is defined by the invariant mass of the $`\mu ^+\mu ^{}\gamma \gamma `$ system ($`m`$) and the square of the transverse momentum of the four final-state particles with respect to a line connecting the target to the decay vertex ($`P_{\perp }^2`$): $`492\mathrm{MeV}/\mathrm{c}^2<m<504\mathrm{MeV}/\mathrm{c}^2`$ and $`P_{\perp }^2<100(\mathrm{MeV}/\mathrm{c})^2`$. After all the cuts we observed four events in the signal region. Figure 5 shows $`m`$ vs. $`P_{\perp }^2`$ for events passing all but these two cuts. A linear extrapolation of the high-$`P_{\perp }^2`$ data in this figure yields a background estimate of 0.25 $`\pm `$ 0.10 events, consistent with the expectation from Monte Carlo studies. To further test the background estimate with higher statistics we removed the cluster shape $`\chi ^2`$ cut and verified that the data matched the prediction in $`m`$ and $`P_{\perp }^2`$ side bands. The probability of observing four events in the signal region due to a fluctuation of the background is $`2.1\times 10^{-5}`$, corresponding to a 4.2 $`\sigma `$ fluctuation of the estimated background.

The branching ratio for $`K_L\mu ^+\mu ^{}\gamma \gamma `$ was calculated by normalizing the four signal events to a sample of $`K_L\pi ^+\pi ^{}\pi ^0`$ events, collected with the prescaled normalization trigger. For the normalization events $`m_{\gamma \gamma }`$ was required to be within 3 MeV/$`c^2`$ of $`m_{\pi ^0}`$, and the $`\mathrm{R}_{\perp }^{\pi \pi }`$ and muon counter hit requirements were not enforced. The acceptance of these events was calculated to be 8.1% via Monte Carlo. We determined that $`(2.68\pm 0.04)\times 10^{11}`$ $`K_L`$ within an energy range of 20 to 220 GeV decayed between 90 and 160 meters from the target. The acceptance of the signal was (0.14 $`\pm `$ 0.01)%, so $`ℬ`$($`K_L\mu ^+\mu ^{}\gamma \gamma `$) $`=(10.4_{-5.9}^{+7.5}(\mathrm{stat}))\times 10^{-9}`$ with $`m_{\gamma \gamma }>1\mathrm{MeV}/\mathrm{c}^2`$, which was the cutoff we used in generating the Monte Carlo events.
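As a rough cross-check of the quoted significance, one can compute the naive Poisson probability of the background fluctuating up to four or more events; the sketch below ignores the $`\pm 0.081`$ uncertainty on the expectation, which the quoted $`2.1\times 10^{-5}`$ presumably folds in, so it is only expected to agree at the order-of-magnitude level.

```python
import math

mu, n_obs = 0.155, 4   # expected background and observed signal events

# P(N >= 4) for a Poisson background with fixed mean mu.
p = 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n_obs))
print(f"P(N >= {n_obs} | mu = {mu}) = {p:.1e}")   # ~2e-5
```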
We have calculated the branching ratio for this $`K_L`$ Dalitz decay by performing a numerical integration of the tree-level ($`𝒪(\alpha )`$) $`K_L\mu \mu \gamma \gamma `$ matrix element with an $`m_{\gamma \gamma }>1\mathrm{MeV}/c^2`$ cutoff. We performed a similar integration of the $`K_L\mu \mu \gamma `$ matrix element, which included contributions due to virtual photon loops and emission of soft bremsstrahlung photons. Both integrations assumed unit form factors. The ratio of partial widths is 2.789%. Multiplying this ratio by the measured value $`ℬ(K_L\mu \mu \gamma )=(3.26\pm 0.28)\times 10^{-7}`$ yields $`ℬ(K_L\mu \mu \gamma \gamma )=(9.1\pm 0.8)\times 10^{-9}`$.
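The quoted prediction is just the computed ratio of partial widths multiplied by the measured $`K_L\mu \mu \gamma `$ branching ratio, as the following arithmetic check shows:

```python
ratio = 0.02789                     # Gamma(mu mu gamma gamma) / Gamma(mu mu gamma)
br_mmg, err_mmg = 3.26e-7, 0.28e-7  # measured B(K_L -> mu mu gamma)

# Propagating only the measurement error on B(K_L -> mu mu gamma).
print(f"{ratio * br_mmg:.2e} +- {ratio * err_mmg:.1e}")  # ~9.1e-9 +- 0.8e-9
```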
The four-body phase space for $`K_L\mu ^+\mu ^{}\gamma \gamma `$ can be parametrized by five variables, as in reference .
Figure 6 shows the distribution of the energy asymmetry of the photon pair ($`y_\gamma `$), the angle between the normals to the planes containing the $`\mu ^+\mu ^{}`$ and $`\gamma \gamma `$ in the center of mass ($`\varphi `$), and the minimum angle from any muon to any photon ($`\mathrm{\Theta }_{\mathrm{MIN}}`$). The distribution of these kinematic variables for the four signal events is consistent with expectations.
We examined several possible sources of systematic uncertainty in the measurement. The largest effects were due to a possible miscalibration of the calorimeter, resulting in a mismeasurement of the photon energies, and to particle identification. If we conservatively assume a 0.7% miscalibration of the calorimeter we obtain a 5.11% systematic error. The uncertainty due to muon identification was determined to be 4.2% by comparing the $`K_L`$ flux with that obtained by using $`K_{\mu 3}`$ decays. The uncertainty in the $`K_L\pi ^+\pi ^{}\pi ^0`$ branching ratio is 1.59%. Adding these and other smaller contributions detailed in Table II in quadrature, we assigned a total systematic uncertainty of 6.95% to the branching ratio measurement.
In summary we have determined the branching ratio to be $`ℬ`$($`K_L\mu ^+\mu ^{}\gamma \gamma `$) $`=(10.4_{-5.9}^{+7.5}(\mathrm{stat})\pm 0.7(\mathrm{sys}))\times 10^{-9}`$ with $`m_{\gamma \gamma }>1\mathrm{MeV}/\mathrm{c}^2`$. Defining the acceptance with a 10 MeV infrared cutoff for photon energies in the kaon frame ($`E_\gamma ^{\ast }`$), our result is $`ℬ`$$`(K_L\mu ^+\mu ^{}\gamma \gamma ;E_\gamma ^{\ast }>10\mathrm{MeV})=(1.42_{-0.8}^{+1.0}(\mathrm{stat})\pm 0.10(\mathrm{sys}))\times 10^{-9}`$. This is the first observation of this decay and is consistent with theoretical predictions.
We gratefully acknowledge the support and effort of the Fermilab staff and the technical staffs of the participating institutions for their vital contributions. This work was supported in part by the U.S. DOE, The National Science Foundation and The Ministry of Education and Science of Japan. In addition, A.R.B., E.B. and S.V.S. acknowledge support from the NYI program of the NSF; A.R.B. and E.B. from the Alfred P. Sloan Foundation; E.B. from the OJI program of the DOE; K.H., T.N., K.S., and M.S. from the Japan Society for the Promotion of Science.
submitted for publication (January 2000)
# Non-gaussian electrical fluctuations in a quasi-2d packing of metallic beads
N. Vandewalle (corresponding author, e-mail: nvandewalle@ulg.ac.be), C. Lenaerts and S. Dorbolo,
GRASP, Institut de Physique B5, Université de Liège, B-4000 Liège, Belgium.
PACS: 45.70.-n — 05.40.+j — 81.05.Rm
keywords: granular media, electrical resistance, mechanical stresses
Abstract
The electrical properties of a two-dimensional packing of metallic beads are studied. Small mechanical perturbations of the packing lead to giant electrical fluctuations. The fluctuations are found to be non-gaussian and seem to belong to Lévy stable distributions. Anticorrelations have also been found for the sign of these fluctuations.
The granular state of matter exhibits fascinating physical properties such as memory effects , phase segregation , heterogeneous mechanical stresses , etc. Although many experiments have been performed to study mechanical and geometrical aspects of granular matter , only a few reports can be found in the scientific literature describing the electrical properties of such systems. In 1890, Branly reported that the electrical resistance of packed metallic grains is affected by an incident electromagnetic wave. Branly’s coherer was the first reported antenna. He also discovered that a small vibration of the packing affects the electrical resistance. More recently, Ammi et al. measured a non-Hertzian behavior of the electrical conductivity when a packing is submitted to high uniaxial pressures. In another experiment, Vandembrouck et al. visualized the electrical paths in a packing of metallic beads for different injected current densities.
In the present letter, we report electrical measurements on a 2d packing of metallic beads submitted to small perturbations. The electrical fluctuations are statistically analyzed. A physical interpretation is given within the framework of Hertzian contacts. Comparisons with recent experiments and models for stress fluctuations are also emphasized.
Our experimental set-up is illustrated in Figure 1. The main part of the system is a tilted epoxy plane of $`25\times 30`$ $`cm^2`$ on which $`PbO_2`$ beads have been placed. A small tilt angle (typically $`\theta =10^o`$) can be adjusted by two screws. This plane has been made using a printed-circuit technique, which allows the configuration of the electrical contacts to be designed easily. In our experiments, three contacts (A, B, C) have been used, as described in Figure 1. The contacts are placed on specific beads. A plexiglass protection plane has been placed just above the epoxy plane to ensure the quasi-2d character of the system. The mean diameter of the beads is 2.35 mm and a polydispersity of 2 % has been measured. About 3000 beads have been placed on the epoxy plane. Above the protection plane, a wooden hammer driven by the parallel port of the computer can hit the epoxy plane at the bottom of the packing.
The experiments have been realized in a constant-current regime. A current source injects a current $`i`$ through A and C. The voltage $`U`$ between points B and C is then sent to the computer by a nanovoltmeter. This method is the so-called “three-point measurement”. In each series of measurements, the following procedure has been repeated a large number $`n`$ of times: (i) the beads are placed in between the epoxy and plexiglass planes, (ii) the hammer hits the bottom of the packing every 10 seconds, (iii) the computer waits 3 seconds after each shock and then stores the voltage between B and C at a sampling interval of 0.2 seconds during the next 7 seconds.
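The timing of this cycle can be summarized in pseudocode; `hammer` and `voltmeter` below are hypothetical stand-ins for the actual parallel-port and nanovoltmeter interfaces, which the letter does not specify.

```python
import time

def run_series(hammer, voltmeter, n_shocks=2000):
    """Schematic acquisition loop: one shock every 10 s, a 3 s settling
    delay, then 7 s of sampling at 0.2 s intervals (35 readings)."""
    record = []
    for n in range(n_shocks):
        hammer.hit()              # shock the bottom of the packing
        time.sleep(3.0)           # wait 3 s after the shock
        shot = []
        for _ in range(35):       # sample for 7 s at 0.2 s intervals
            shot.append(voltmeter.read())
            time.sleep(0.2)
        record.append(shot)
    return record
```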
Figure 2 presents a typical evolution of the voltage $`U`$ as a function of the shock number $`n`$. The inset illustrates the voltage at the scale of a few seconds: shock events are denoted by arrows at that scale. When a shock occurs, $`U`$ is seen to be drastically affected and thereafter sticks at the new value. One also observes that there is apparently no systematic behavior at each shock: (i) the voltage fluctuation is either positive or negative, and (ii) the jumps/drops of $`U`$ seem to be characterized by a broad distribution of amplitudes. A compaction of the packing is observed at the early stages of the experiment, during which the voltage $`U`$ globally decreases since more and more beads come into contact. After $`n=2000`$ shocks, we have observed no macroscopic drift of the electrical voltage in our measurements! After $`n=10000`$ shocks, the packing is still disordered. Indeed, numerous departures (defects) from an ordered structure are observed.
Fluctuations $`\mathrm{\Delta }U_n`$ are defined as the difference between the electrical voltage before ($`U_n`$) and after ($`U_{n+1}`$) a shock. Since $`i`$ is constant during the experiment, the fluctuations $`\mathrm{\Delta }U`$ are mainly due to changes of the conductivity of the packing, i.e.
$$\mathrm{\Delta }U=i\mathrm{\Delta }R\propto i\frac{\mathrm{\Delta }\sigma }{\sigma ^2}$$
(1)
from Ohm’s law $`U=Ri`$. From Eq.(1), one expects the mean size of the fluctuations $`\mathrm{\Delta }U`$ to be proportional to the injected current $`i`$. In order to compare our different data series, we normalized $`\mathrm{\Delta }U`$ by $`i`$. Figure 3 presents the distribution of fluctuations $`P(\mathrm{\Delta }R)`$ in a semi-log plot for different values of the injected current $`i`$. So-called “fat tails” for rare events are observed in Figure 3: the occurrence of large fluctuations is several orders of magnitude more frequent than it would be for gaussian tails. The fluctuation histograms are however well described by a Lévy stable distribution, which has the following behaviors:
$$P(|\mathrm{\Delta }R|)\sim \mathrm{exp}\left(-\gamma |\mathrm{\Delta }R|^\alpha \right)\quad \mathrm{for}\quad |\mathrm{\Delta }R|\ll 1/\gamma $$
(2)
and
$$P(|\mathrm{\Delta }R|)\sim |\mathrm{\Delta }R|^{-(\alpha +1)}\quad \mathrm{for}\quad |\mathrm{\Delta }R|\gg 1/\gamma $$
(3)
where $`0<\alpha \le 2`$ is the so-called Lévy index and $`\gamma >0`$ is a quantity related to the width of the distribution (for $`\alpha >1`$). When $`\alpha `$ reaches 2, the distribution becomes a Gaussian distribution. When $`\alpha =1`$, the Lévy distribution reduces to a Cauchy distribution. When $`\alpha <1`$, the second moment of the distribution diverges. The continuous curves in Figure 3 are fits with Eq.(2) giving a value $`\alpha =0.9\pm 0.1`$ in both cases. Figure 4 presents in a log-log plot the fat tail of the distribution $`P(|\mathrm{\Delta }R|)`$. Power-law behaviors (Eq.(3)) are observed. We have found $`\alpha =0.95\pm 0.10`$, in good agreement with the $`\alpha `$ value of the first fits using Eq.(2). Both fits in Figures 3 and 4 confirm the Lévy scenario for the electrical fluctuations. This value is quite low (lower than that of a Cauchy distribution) and expresses that the voltage evolves erratically with frequent large drop and burst excursions. It is also worthwhile to point out that the Lévy index $`\alpha `$ seems to be independent of the injected current $`i`$, which ranged from 3 $`mA`$ up to 100 $`mA`$ in our different series of measurements.
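A tail fit of the kind shown in Figure 4 can be sketched as follows; the Cauchy sample stands in for the measured $`\mathrm{\Delta }R`$ values (it is a Lévy stable law with $`\alpha =1`$), since the actual data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
dR = rng.standard_cauchy(20000)   # stand-in for measured fluctuations, alpha = 1

# Empirical survival function P(|dR| > x); for a Levy tail it falls as
# x^{-alpha}, i.e. the density falls as x^{-(alpha+1)} as in Eq. (3).
x = np.sort(np.abs(dR))
surv = 1.0 - np.arange(1, x.size + 1) / x.size
tail = (x > 10.0) & (surv > 0)

slope, _ = np.polyfit(np.log(x[tail]), np.log(surv[tail]), 1)
print(f"estimated Levy index alpha = {-slope:.2f}")   # close to 1
```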
Let us also investigate the temporal correlations between two successive fluctuations. Figure 5 presents the phase space $`(\mathrm{\Delta }U_n,\mathrm{\Delta }U_{n+1})`$ for a current $`i`$=50$`mA`$. The data points are mainly dispersed in an ellipsoidal cloud which is tilted with respect to the horizontal and vertical axes. Moreover, branches are observed along the vertical, horizontal and diagonal directions. The latter branches indicate the existence of strong correlations for large fluctuations. The continuous line is a linear fit through the entire cloud of data points. The slope $`a=-0.45\pm 0.01`$ of the linear regression $`\mathrm{\Delta }U_{n+1}=a\mathrm{\Delta }U_n+b`$ is significantly negative, and expresses that two successive fluctuations are highly anticorrelated. The slope $`a`$ is found to depend only weakly on $`i`$.
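The anticorrelation can be quantified exactly as in Figure 5 by a least-squares fit of $`\mathrm{\Delta }U_{n+1}`$ against $`\mathrm{\Delta }U_n`$; in the sketch below a synthetic record with a built-in anticorrelation replaces the measured voltages.

```python
import numpy as np

# Toy record: each shock partially undoes the previous one, which produces
# a negative regression slope between successive fluctuations.
rng = np.random.default_rng(1)
eps = rng.normal(size=5000)
dU = eps.copy()
dU[1:] -= 0.63 * eps[:-1]

a, b = np.polyfit(dU[:-1], dU[1:], 1)   # fit dU_{n+1} = a*dU_n + b
print(f"slope a = {a:.2f}")             # clearly negative (about -0.45)
```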
Finally, it should be noted that $`Ni`$ beads, which are more conducting than $`PbO_2`$ beads, lead to similar results: Lévy-like distributions and anticorrelations. We also checked that the results are not sensitive to the tilt angle $`\theta `$.
Since the properties of the fluctuations characterizing the phenomenon are found to obey non-trivial laws, they should certainly arise from fundamental phenomena. A physical interpretation of these giant and non-Gaussian fluctuations should be sought in the geometry of the electrical paths. Indeed, the existence of multiple tortuous conducting paths has been reported in . We conjecture that the fluctuations observed herein are most probably related to the fluctuations of stress chains in the packing. Also, the strong anticorrelations that we put into evidence should be related to memory effects in the packing.
As recently underlined by deGennes , both the macroscopic electrical conductivity $`\sigma `$ and the macroscopic elastic modulus $`\mu `$ of the packing are closely related if Hertzian contacts are assumed. This analogy between mechanical strain and electrical conductance implies that the distribution of the fluctuations measured herein could be related to the stress fluctuations in the packing. Giant stress fluctuations have recently been reported in compression and photoelastic experiments . Cooperative rearrangements such as the nucleation of cracks and the propagation of these cracks in the packing can lead to stress fluctuations of high amplitude. It has been put into evidence that these cracks have a transient life and heal after the displacement of blocks or single beads . From the simulation point of view, Coppersmith et al. introduced the so-called $`q`$-model, deducing a probability distribution of vertical forces varying as $`f\mathrm{exp}(-f/\langle f\rangle )`$. However, the $`q`$-model and more elaborate variants lead to Gaussian distributions of stress fluctuations. To our knowledge, the only model which is able to produce rearrangements of force paths at all scales is the so-called Scalar Arching Model (SAM) introduced by Claudin et al. . The SAM gives rise to a broad distribution of the apparent weight variations $`\mathrm{\Delta }W_a`$ measured at the bottom of a silo. Claudin has found a universal power-law behavior $`P(|\mathrm{\Delta }W_a|)\sim |\mathrm{\Delta }W_a|^{-0.94}`$ whatever the values of the solid friction coefficients. This power-law behavior is consistent with our results, but the relationship between our exponent $`\alpha +1`$ and Claudin’s exponent 0.94 is not obvious.
In summary, the careful measurements of the electrical properties of a quasi-bidimensional packing of $`PbO_2`$ beads have put into evidence the non-gaussian character of conductivity fluctuations. Those non-Gaussian electrical fluctuations are related to stress fluctuations in the packing. Our measurements confirm some predictions of the Scalar Arching Model.
Acknowledgements
NV thanks the FNRS (Brussels, Belgium). SD is financially supported by FRIA (Brussels, Belgium). Valuable discussions with M.Ausloos, E.Clement, R.Cloots, J.Rajchenbach, S.Galam, P.Harmeling and Ph.Vanderbemden are acknowledged.
Figure Captions
Figure 1 – A schematic illustration of the experimental setup. Contacts A, B and C are placed on beads. The epoxy (E) and plexiglass (P) planes are separated by the bead diameter and are tilted by an angle $`\theta `$.
Figure 2 – The electrical fluctuations of $`U(n)`$ in the quasi-2d packing of $`PbO_2`$ beads. The inset presents the evolution of $`U`$ at the scale of a few seconds.
Figure 3 – Semi-log plot of fluctuation distributions $`P(\mathrm{\Delta }R)`$ for two different current intensities $`i`$. The continuous curves are fitted Lévy-like distributions using Eq.(2).
Figure 4 – Log-log plot for the tail of $`P(|\mathrm{\Delta }R|)`$. The lines are power law fits using Eq.(3).
Figure 5 – Phase space $`(\mathrm{\Delta }U_n,\mathrm{\Delta }U_{n+1})`$. The continuous line is a linear regression $`\mathrm{\Delta }U_{n+1}=a\mathrm{\Delta }U_n+b`$.
## 1 Introduction
A recent paper of David Krebes poses the following interesting question: given a tangle $`𝒯`$, and a link $`ℒ`$, when can $`𝒯`$ sit inside $`ℒ`$? A tangle is a $`1`$-manifold with 4 boundary components, properly embedded in a 3-ball; this is somewhat more general than the usual definition. An embedding of $`𝒯`$ in $`ℒ`$ is determined by a ball in $`S^3`$, whose intersection with $`ℒ`$ is the given tangle; we will indicate an embedding by $`𝒯\subset ℒ`$. Krebes gives the following simple criterion for when $`𝒯`$ embeds in $`ℒ`$. Complete the tangle to a link in either of the two obvious ways, giving rise to the numerator $`n(𝒯)`$ and denominator $`d(𝒯)`$. For any oriented link $`ℒ`$, its determinant $`det(ℒ)`$ is defined to be $`det(V+V^T)`$, where $`V`$ is a Seifert matrix for $`ℒ`$. Krebes shows
###### Theorem 1
If $`𝒯\subset ℒ`$, then
$$\mathrm{gcd}(det(n(𝒯)),det(d(𝒯)))\ \big |\ det(ℒ)$$
(0.1)
The proof of Theorem 1 given in is essentially combinatorial, and uses an interpretation of the determinant in terms of link diagrams and the Kauffman bracket. On the other hand, the determinant has a homological interpretation; it is essentially the order of the homology of the $`2`$-fold branched cover of $`S^3`$, branched along the link. In this paper, we will prove a simple fact about the homology of certain $`3`$-manifolds, which readily implies Theorem 1. In essence, we replace the divisibility condition above with the conclusion that the homology of the $`2`$-fold branched cover of the link must contain a subgroup of a certain size. Our approach has the advantage that it gives some stronger results on the embedding problem, which do not seem approachable via the combinatorial route. A slightly different argument yields a similar conclusion about other branched coverings as well. Some examples, and further remarks on embeddings, are given in the final section.
To state the result, let us use the notation $`|M|`$ for the order of the first homology group of $`M`$, where by convention the order is defined to be $`0`$ if the homology is infinite. Also, if $`\partial M`$ is a torus $`T^2`$, and $`\alpha \subset T`$ is a simple closed curve which does not bound a disc, then let $`M(\alpha )`$ denote the result of Dehn filling with slope $`\alpha `$. (In other words, glue $`S^1\times D^2`$ to $`M`$ so that $`\partial D^2`$ is glued to $`\alpha `$.)
###### Theorem 2
Suppose that $`M`$ is an orientable $`3`$-manifold, and that $`\alpha ,\beta `$ are simple closed curves on $`T=\partial M`$ which generate all of $`H_1(T)`$. Suppose that $`M\subset N`$, where $`N`$ is a closed, orientable $`3`$-manifold. Then
$$\mathrm{gcd}(|M(\alpha )|,|M(\beta )|)\ \big |\ |N|$$
It is worth remarking that as a consequence of Theorem 2, the quantity
$$f(M)=\mathrm{gcd}(|M(\alpha )|,|M(\beta )|)$$
is independent of the choice of pair $`\alpha ,\beta `$, and hence defines an invariant of $`M`$. For, given another such pair, say $`\alpha ^{\prime },\beta ^{\prime }`$, the theorem says that $`\mathrm{gcd}(|M(\alpha )|,|M(\beta )|)`$ divides both $`|M(\alpha ^{\prime })|`$ and $`|M(\beta ^{\prime })|`$, and so $`\mathrm{gcd}(|M(\alpha )|,|M(\beta )|)\ \big |\ \mathrm{gcd}(|M(\alpha ^{\prime })|,|M(\beta ^{\prime })|)`$. The remark follows by reversing the roles of $`\alpha ,\beta `$ and $`\alpha ^{\prime },\beta ^{\prime }`$.
Some further results on the embedding problem, using invariant derived from the Kauffman bracket, can be found in the recent preprint .
## 2 Proof of Theorem 2
Before beginning the proof of the theorem, we remark that unless $`H_1(N)`$ is torsion, then the theorem has no content. So we can assume that $`N`$ is a rational homology sphere for the remainder of this section. Writing
$$N=M\cup _TX$$
it follows from a standard Poincaré duality argument that both $`M`$ and $`X`$ have the rational homology of a circle. Hence we can write (non-canonically)
$$H_1(M)=𝐙\oplus 𝐙/q_1\oplus \cdots \oplus 𝐙/q_s.$$
In particular, the torsion subgroup $`T_1(M)\subset H_1(M)`$ has order $`q_1\cdots q_s`$.
Under the map $`j_{\ast }:H_1(T)\to H_1(M)`$, the classes $`\alpha `$ and $`\beta `$ go to
$$j_{\ast }(\alpha )=(a,a_1,\ldots ,a_s)\quad \mathrm{and}\quad j_{\ast }(\beta )=(b,b_1,\ldots ,b_s)$$
respectively. (The coefficients are with respect to generators of the summands of $`H_1(M)`$ in the splitting given above.)
Claim: $`|M(\alpha )|=a|T_1(M)|.`$
To see this, note that $`H_1(M(\alpha ))\cong H_1(M)/<\alpha >`$. The given splitting of $`H_1(M)`$ corresponds to a presentation of that group by the $`s\times (s+1)`$ matrix
$$\left(\begin{array}{ccccc}0& q_1& 0& \mathrm{}& 0\\ 0& 0& q_2& \mathrm{}& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& 0& 0& \mathrm{}& q_s\end{array}\right)$$
(0.2)
Killing $`\alpha `$ adds an additional row, to get the following presentation matrix for $`H_1(M(\alpha ))`$:
$$\left(\begin{array}{ccccc}a& a_1& a_2& \mathrm{}& a_s\\ 0& q_1& 0& \mathrm{}& 0\\ 0& 0& q_2& \mathrm{}& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& 0& 0& \mathrm{}& q_s\end{array}\right)$$
(0.3)
which has determinant $`aq_1\cdots q_s=a|T_1(M)|.`$ But the order of $`H_1(M(\alpha ))`$ is the same as the determinant of any (square) presentation matrix for it.
By the same argument applied to $`\beta `$, we get that
$$\mathrm{gcd}(|M(\alpha )|,|M(\beta )|)=\mathrm{gcd}(a|T_1(M)|,b|T_1(M)|)=\mathrm{gcd}(a,b)|T_1(M)|$$
(0.4)
Now we turn to the situation at hand, and consider the homology of $`N`$, which we place into the long exact sequence of the pair $`(N,M)`$.
$$\begin{array}{ccccccccccc}0& \to & H_2(N,M)& \stackrel{\partial }{\to }& H_1(M)& \to & H_1(N)& \to & H_1(N,M)& \to & 0\\ & & \cong \uparrow & & \uparrow j_{\ast }& & & & & & \\ & & H_2(X,T)& \stackrel{\partial }{\to }& H_1(T)& & & & & & \end{array}$$
By exactness, $`H_1(M)/\partial (H_2(N,M))`$ injects into $`H_1(N)`$, and so the theorem will follow if we can show that $`\mathrm{gcd}(a,b)|T_1(M)|`$ divides the order of $`H_1(M)/\partial (H_2(N,M))`$. Again by duality, $`H_2(X,T)\cong H^1(X)\cong 𝐙`$ is generated by a relative 2-cycle $`C`$ with boundary $`\gamma `$ lying in $`T`$. By the commutativity of the diagram (the isomorphism is an excision) it follows that $`H_1(M)/\partial (H_2(N,M))`$ is just $`H_1(M)/<\gamma >`$. As before, the image of $`\gamma `$ in $`H_1(M)`$ may be written $`(c,c_1,\ldots ,c_s)`$, so that $`|H_1(M)/<\gamma >|=c|T_1(M)|`$. Since $`\alpha `$ and $`\beta `$ generate $`H_1(T)`$, we can write $`\gamma =m\alpha +n\beta `$, which implies that $`c=ma+nb`$. This means that $`\mathrm{gcd}(a,b)\ \big |\ c`$, or in other words
$$\mathrm{gcd}(a,b)|T_1(M)|\ \big |\ c|T_1(M)|.$$
Since $`c|T_1(M)|\ \big |\ |H_1(N)|`$ the theorem follows.
## 3 Complements and Examples
First, let us observe that Theorem 2 implies Theorem 1. The basic point is that the $`2`$-fold cover of the ball, branched along a trivial tangle, is $`S^1\times D^2`$. The different ways of completing a tangle $`𝒯`$ to a link give rise to different Dehn fillings of $`M`$ = the $`2`$-fold cover of the ball, branched along $`𝒯`$. It is easy to see that the meridians of the solid tori corresponding to the numerator $`n(𝒯)`$ and denominator $`d(𝒯)`$ have intersection number $`\pm 1`$ in $`T=\partial M`$ and thus generate $`H_1(T)`$. Now if $`𝒯\subset ℒ`$ as in Theorem 1, there is an embedding $`M\subset N`$, where $`N`$ is the $`2`$-fold cover of the $`3`$-sphere branched along $`ℒ`$. Hence the hypotheses of Theorem 2 are satisfied, and we get that
$$\mathrm{gcd}(|n(𝒯)|,|d(𝒯)|)\ \big |\ |det(ℒ)|$$
Second, the proof gives a somewhat stronger condition, involving the homology groups themselves, rather than their orders. We give the statement in terms of the homology of $`3`$-manifolds, with the understanding that passing to the 2-fold cover of a link gives rise to a restriction on embeddings of tangles in links.
###### Corollary 1
Suppose that $`H_1(N)`$ is torsion, and that $`M\subset N`$. Then there is an injection of $`𝐙/c\oplus T_1(M)`$ into $`H_1(N)`$, where $`c`$ has the same significance as in the proof of Theorem 2.
In applying these results to specific tangles, the most useful part of the conclusion is the fact that the inclusion map of $`M`$ into $`N`$ induces an injection on $`T_1(M)`$. This is true in a more general setting, by an argument which is perhaps more conceptual than the calculation proving Theorem 2.
###### Theorem 3
Suppose $`M`$ is an orientable $`3`$-manifold with connected boundary, and $`i:M\hookrightarrow N`$, where $`N`$ is an orientable $`3`$-manifold with $`H_1(N)`$ torsion. Then the inclusion map $`i_{\ast }`$ induces an injection of $`T_1(M)`$ into $`H_1(N)`$.
Proof of Theorem 3: The linking pairing $`\lambda :T_1(M)\times T_1(M,\partial M)\to 𝐐/𝐙`$ is non-degenerate, by Poincaré duality. So if $`x\in T_1(M)`$ is non-zero, there is an element $`y\in T_1(M,\partial M)`$ with $`\lambda (x,y)`$ non-zero in $`𝐐/𝐙`$; represent each of these by (absolute or relative) cycles with the same names. Since $`\partial M`$ is connected, $`H_1(M)\to H_1(M,\partial M)`$ is surjective, so $`y`$ lifts to $`\overline{y}\in H_1(M)`$. There is no good reason that $`\overline{y}`$ represents a torsion class in $`M`$, but by hypothesis it is a torsion class in $`N`$. Now $`\lambda (x,i_{\ast }\overline{y})`$ (as calculated in $`N`$) may be calculated as the intersection number of $`\overline{y}`$ with a $`2`$-chain $`C`$ bounding $`nx`$, and $`C`$ can be chosen to lie in $`M`$ since $`x`$ is a torsion class in $`M`$. Hence $`\lambda (x,i_{\ast }\overline{y})=\lambda (x,y)\ne 0`$ in $`𝐐/𝐙`$, and therefore $`x`$ is nontrivial in $`H_1(N)`$.
Remark: It is not possible to deduce Theorem 2 (and hence Theorem 1) from Theorem 3. To do so would amount to proving that (in the notation of Theorem 2) $`\mathrm{gcd}(|M(\alpha )|,|M(\beta )|)=|T_1(M)|`$. However, this is not the case, as the following example indicates; the example has $`T_1(M)=𝐙/3`$ but $`\mathrm{gcd}(|M(\alpha )|,|M(\beta )|)=9.`$
Consider an oriented solid torus $`M_0`$ with a basis $`\alpha ,\beta `$ for $`H_1(T)`$ chosen so that $`\beta `$ generates $`H_1(M_0)`$ and $`\alpha `$ bounds a disk. Let $`K\subset M_0`$ be an oriented knot, representing $`3`$ times $`\beta `$ in $`H_1(M_0)`$. The meridian $`\mu `$ of $`K`$ is determined by the orientation, and we choose a longitude $`\lambda `$ by requiring that $`\lambda `$ be homologous to $`3\beta `$ in $`H_1(M_0-K)`$. (If $`M_0`$ were embedded in $`S^3`$ in a standard way, so that $`\beta `$ bounds a disk in $`S^3-\mathrm{int}(M_0)`$, then $`\lambda `$ would be the longitude of $`K`$ in $`S^3`$.) Now let $`M`$ be the result of Dehn surgery on $`M_0`$, with coefficient $`9/1`$. In other words, remove a neighborhood of $`K`$, and glue in a solid torus killing $`9\mu +\lambda `$. The homology of $`M_0-\nu (K)`$ is generated by $`\mu `$ and $`\beta `$, and the surgery kills $`9\mu +3\beta `$, so $`H_1(M)=𝐙+𝐙/3`$.
On the other hand, the homology of $`M(\alpha )`$ is gotten by killing $`\alpha `$, which is homologous to $`3\mu `$ and so $`H_1(M(\alpha ))`$ is presented by the matrix
$$\left(\begin{array}{cc}9& 3\\ 3& 0\end{array}\right)$$
(0.5)
yielding $`H_1(M(\alpha ))=𝐙/3\oplus 𝐙/3`$. Likewise, $`H_1(M(\beta ))`$ is presented by the matrix
$$\left(\begin{array}{cc}9& 3\\ 0& 1\end{array}\right)$$
(0.6)
yielding $`H_1(M(\beta ))=𝐙/9`$.
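Both cyclic decompositions can be checked mechanically from the presentation matrices (0.5) and (0.6), for instance with sympy’s Smith normal form (a convenience assumed here; any computer algebra system would do):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# The diagonal entries of the Smith normal form give the cyclic summands of H_1.
print(smith_normal_form(Matrix([[9, 3], [3, 0]]), domain=ZZ))  # diag(3, 3) -> Z/3 + Z/3
print(smith_normal_form(Matrix([[9, 3], [0, 1]]), domain=ZZ))  # diag(1, 9) -> Z/9
```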
This same example shows that the hypothesis in Theorem 3, that $`H_1(N)`$ be torsion, is necessary. For if $`N`$ is obtained by filling $`M`$ with slope $`\alpha +\beta `$, then $`H_1(N)\cong 𝐙`$, and so $`T_1(M)`$ doesn’t inject.
Theorem 3 may be applied to branched covers of tangles, of degrees other than $`2`$. In applying this remark, one must take care, because for $`k>2`$ the $`k`$-fold covers of the ball, branched along $`𝒯`$ are not uniquely specified by the branch locus. These different covers may well have differing homology groups, so in practice, one might have to calculate the homology groups for all of the different possibilities.
###### Corollary 2
Suppose that $`𝒯\subset ℒ`$, and that $`N`$ is a k-fold cover of $`S^3`$ branched along $`ℒ`$. Let $`M`$ be the induced cover of $`B^3`$, branched along $`𝒯`$. If $`N`$ is a rational homology sphere, then $`T_1(M)`$ is a subgroup of $`T_1(N)`$.
Most of the results so far have concerned only the torsion part of the homology, but there are some things which can be said about the torsion-free part of the homology. One simple result is the following.
###### Theorem 4
Suppose that $`M`$ has boundary of genus $`g`$, and that $`i:M\hookrightarrow N`$. Then $`dim(H_1(N;𝐐))\ge dim(H_1(M;𝐐))-g`$.
The proof is straightforward; if the quantity $`dim(H_1(M;𝐐))-g`$ is greater than zero, then there is a subspace of that dimension in $`H_1(M;𝐐)`$ which pairs non-trivially with a subspace of $`H_2(M;𝐐)`$ of the same dimension. Hence both of these inject into the homology of $`N`$.
###### Example 1
It is not hard to give examples where the homological approach gives more information than can be deduced from the determinants alone. The simplest I can think of is the following. Let $`𝒯`$ be the tangle sum of three $`\pm 1/3`$-tangles,
$$(T_{-1/3})+(T_{-1/3})+(T_{1/3})$$
pictured below in Figure 1.
According to , the numerator and denominator of this tangle may be calculated as the numerator and denominator of the fraction obtained by the grade school addition of fractions, without canceling common factors:
$$\frac{-1}{3}+\frac{-1}{3}+\frac{1}{3}=\frac{-6}{9}+\frac{1}{3}=\frac{-18+9}{27}=\frac{-9}{27}$$
and so $`\mathrm{gcd}(|n(𝒯)|,|d(𝒯)|)=9`$. This would allow, in principle, that $`𝒯`$ might embed in a $`2`$-bridge knot (or link) corresponding to the fraction $`9p/q`$, for the determinant of such a knot is $`9p`$. But we calculate below that for $`M=`$ the $`2`$-fold cover of $`B^3`$ branched along $`𝒯`$, we have $`H_1(M)=𝐙\oplus 𝐙/3\oplus 𝐙/3`$, whereas the $`2`$-fold cover of a $`2`$-bridge knot or link has cyclic homology.
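The “grade school” addition used above is easy to mechanize; the sketch below represents a rational tangle by its (numerator, denominator) pair and never cancels common factors.

```python
from math import gcd

def tangle_sum(f1, f2):
    """Fraction of a tangle sum, (p1/q1) + (p2/q2), without cancelling."""
    (p1, q1), (p2, q2) = f1, f2
    return (p1 * q2 + p2 * q1, q1 * q2)

n, d = tangle_sum(tangle_sum((-1, 3), (-1, 3)), (1, 3))
print(n, d, gcd(abs(n), abs(d)))   # -9 27 9
```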
The calculation of $`H_1(M)`$ proceeds as in the usual calculation of $`2`$-fold covers branched along knots, as described in §6 of . If surgery is done on the middle crossings in each of the three $`\pm 1/3`$ tangles which make up $`𝒯`$, then $`𝒯`$ becomes trivial. Straightening it out (for the purpose of drawing the branched cover, it’s legal to move the endpoints around) gives the surgery picture in Figure 2.
Passing to the double branched cover gives a picture of $`M`$ as surgery on a $`6`$-component link in $`S^1\times D^2`$, whose homology may be readily calculated to give the result quoted above.
## 4 An obstruction to embedding in a trivial link
An obstruction to embedding tangles in the trivial link, of a somewhat different sort, may be derived from the invariants $`I^n(𝒯)`$ defined in . To explain this, we recall that for a $`2`$-component oriented link $`ℒ=(L_x,L_y)`$ of linking number $`\lambda =0`$, Cochran defined ‘higher linking numbers’ $`\beta _x^n(ℒ)`$ and $`\beta _y^n(ℒ)`$. For $`n=1`$ these are both equal to the Sato-Levine invariant while for $`n>1`$ they depend on the ordering of the components. For a tangle $`𝒯`$ it is possible to choose a tangle sum with a trivial tangle (a closure of $`𝒯`$ in the terminology of ) to get a link $`ℒ`$ with $`\lambda =\beta ^1=0`$. There is no canonical choice for $`ℒ`$, but we showed that
$$|I^n(𝒯)|=|\beta _x^n(ℒ)-\beta _y^n(ℒ)|$$
independent of $`ℒ`$ (and of the order of the components, because of the absolute value signs).
###### Theorem 5
Suppose that $`𝒯`$ is a tangle with no loops such that $`𝒯\subset 𝒥=`$ the trivial $`2`$-component link. Then $`I^n(𝒯)=0`$ for all $`n\ge 2`$.
Proof: Consider first a $`2`$-string tangle $`𝒦`$ with no loops, and the 4-punctured sphere $`\mathrm{\Sigma }`$ in the boundary of its exterior. Then $`\mathrm{\Sigma }`$ is compressible if and only if $`𝒦`$ is split, where $`𝒦`$ is split if the ball can be split into two sub-balls by a properly embedded disk with the two strings of $`𝒦`$ lying in different sub-balls. This follows directly from the loop theorem and Dehn’s lemma. The structure of a split tangle is very simple: it is a trivial tangle possibly with knots tied in each string. The tangle is trivial if and only if there are no knots. In particular, a split tangle has $`I^n=0`$ for all $`n`$, because it has a completion which is a split link, which in turn has all of its $`\beta ^n=0`$.
Now consider a tangle $`𝒯`$ with $`I^n\ne 0`$, and suppose that $`𝒯\subset 𝒥`$, or in other words that $`𝒥`$ splits as a sum $`𝒯+𝒯^{\prime }`$. From the preceding paragraph, $`I^n(𝒯)\ne 0`$ implies that the surface $`\mathrm{\Sigma }`$ is incompressible in the exterior of $`𝒯`$. But then $`\mathrm{\Sigma }`$ must be compressible in the exterior of $`𝒯^{\prime }`$. For if it weren’t, then $`\mathrm{\Sigma }`$ would be an incompressible $`4`$-punctured sphere in the exterior of the unlink. Applying the preceding paragraph once more, it follows that $`𝒯^{\prime }`$ is split. If there is a knot in one of the strands of $`𝒯^{\prime }`$, then that would give a knot in the corresponding component of the unlink, which cannot be. It follows from all of this that $`𝒯^{\prime }`$ must in fact be a trivial tangle, so that the unlink $`𝒯+𝒯^{\prime }`$ may be used to calculate $`I^n(𝒯)`$ and show that it is zero. This contradicts our assumption that $`I^n(𝒯)\ne 0`$.
The tangles cited in give rise to examples of tangles which cannot be embedded in a trivial link.
## 5 Generalizations of Tangles
We close with a few remarks on some of the questions raised at the end of . One question concerned the existence of a family of completions of any tangle with $`2t`$ strands, which would play the role of the numerator and denominator of a $`2`$-string tangle. Following the proof of Theorems 2 and 3 we see how to construct such a family. Note that the $`2`$-fold branched cover of a trivial $`2t`$-tangle is a handlebody of genus $`2t-1`$. By twisting the strings around, one can vary the attaching of this complementary handlebody so as to kill off the homology of $`M`$ in various fillings. The gcd of the orders of the homology of all these fillings would then have to divide the determinant of any link in which the tangle was embedded.
Another generalization of a tangle, pointed out in §14 of , is an embedded arc (or more generally a $`1`$-manifold) in a more complicated submanifold of $`S^3`$, such as a solid torus. In particular, Krebes asks whether a particular arc $`𝒜`$ in $`S^1\times D^2`$ (Figure 17 of ) can sit inside an unknot. Our approach, especially Corollary 2, gives (in principle) an obstruction, if there were torsion in the homology of some cover of $`S^1\times D^2`$ branched along $`𝒜`$. Unfortunately, it seems that the homology of all of the cyclic covers of the solid torus, branched along this arc, is torsion-free. Hence we cannot apply Theorem 3 to deduce anything about embeddings of this pair in a link. Likewise, it does not seem possible to use Theorem 4, because the rational homology of each of the cyclic branched covers is the same as for a trivial arc.
## 6 Acknowledgements
The author was partially supported by NSF grant DMS 9917802.
# FIRST-ORDER PHASE TRANSITIONS IN AN EARLY-UNIVERSE ENVIRONMENT
## 1 Introduction
First-order phase transitions proceed by bubble nucleation and expansion. When three or more bubbles collide a phase winding of $`2\pi n`$ may be generated, forming a cosmic string in the region between them. In order to assess the cosmological significance of cosmic strings, it is important to be able to forecast their initial density. This depends on the behaviour of the phase of the Higgs field after bubbles collide and merge – in particular if the phase difference between two bubbles can equilibrate before the arrival of the crucial third bubble, there may be a strong suppression of the initial string density.
At any phase transition where particles acquire mass, those particles outside the bubble without enough energy to become massive inside bounce off the bubble wall, retarding its progress through the plasma. The faster the bubble is moving, the greater the momentum transfer in each collision, and hence the stronger the retarding force. Thus a force proportional to the bubble-wall velocity appears in the effective equations of motion. Impeded by its interaction with the hot plasma, the bubble wall reaches a terminal velocity $`v<c`$ – for the (Standard Model) electroweak phase transition, the value $`v\sim 0.1c`$ was predicted . In this paper, we investigate the consequences of slow-moving bubble walls on phase equilibration in global- and local-symmetry models.
## 2 Phase Equilibration
Writing the Higgs field $`\mathrm{\Phi }=\rho e^{i\theta }`$ the equations of motion for the Abelian Higgs ($`U(1)`$ gauge symmetry) model (which we consider, for simplicity) are
$`\ddot{\rho }-\rho ^{\prime \prime }-(\partial _\mu \theta -eA_\mu )^2\rho `$ $`=`$ $`-{\displaystyle \frac{\partial V}{\partial \rho }}`$ (1)
$`\partial ^\mu \left[\rho ^2(\partial _\mu \theta -eA_\mu )\right]`$ $`=`$ $`0`$ (2)
$`\ddot{A_\nu }-A_{\nu }^{\prime \prime }-\partial _\nu \left(\partial \cdot A\right)`$ $`=`$ $`-2e\rho ^2\partial _\nu \theta .`$ (3)
Taking, after Kibble and Vilenkin , our gauge-invariant phase difference between two points to be
$$\mathrm{\Delta }\theta =\int _A^B𝑑x^i\left(\partial _i\theta -eA_i\right),$$
(4)
it is possible to derive an analytic expression for the phase difference after time $`t`$ between the centres of two bubbles nucleated at time zero with radius $`R`$ and initial phase difference $`2\theta _0`$
$$\mathrm{\Delta }\theta =\frac{2R}{t}\theta _0\left(\mathrm{cos}e\eta \left(t-R\right)+\frac{1}{e\eta R}\mathrm{sin}e\eta \left(t-R\right)\right),$$
(5)
that is, decaying phase oscillations take place – see for numerical verification.
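Eq. (5) is easy to evaluate numerically; the following sketch shows the decaying oscillations, with all parameter values ($`R`$, $`\theta _0`$ and the product $`e\eta `$) chosen purely for illustration rather than taken from the paper.

```python
import numpy as np

def delta_theta(t, R=5.0, theta0=0.5, e_eta=1.0):
    """Eq. (5): phase difference between two bubble centres at time t."""
    x = e_eta * (t - R)
    return (2.0 * R / t) * theta0 * (np.cos(x) + np.sin(x) / (e_eta * R))

t = np.linspace(5.0, 100.0, 20)
print(np.round(delta_theta(t), 3))   # oscillates inside a 1/t decaying envelope
```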
In order to model the interaction of the bubble wall with the plasma, we add a term $`\mathrm{\Gamma }\dot{\rho }`$ to the equation of motion for the modulus of the Higgs field, Eq. (1), as motivated in §1. If there are no gauge fields, this leads to a different kind of decaying phase oscillations . What happens in theories with a gauge symmetry, where the bubbles move with speeds less than that of light?
Eq. (5) was obtained by imposing $`SO(1,2)`$ Lorentz symmetry on the field equations for the two-bubble problem. If the bubbles do not move at the speed of light, no such assumption is possible. This is because whilst the modulus $`\rho `$ of the Higgs field is constrained to propagate at a speed $`v`$, there is no such restriction on the phase $`\theta `$ or the gauge fields. The problem must then be approached via numerical simulations.
## 3 Results
For the sake of clarity, we have chosen to present our results in terms of the evolution with time of the gauge-invariant phase difference $`\mathrm{\Delta }\theta `$ between the centres of the two bubbles, though the qualitative behaviour was found not to change when calculated between different points.
Figure 2 (a) shows the behaviour of the gauge-invariant phase difference for bubbles moving at the speed of light – the decaying oscillations calculated by Kibble and Vilenkin in the local case. In the global case, $`e=0`$, we find that the phase does equilibrate, but on a much longer time-scale. Thus we would expect that for fast-moving bubbles, fewer defects are formed in local theories than in global ones, since in order to form a defect a phase difference inside the two merged bubbles must still be present when a third bubble collides.
In Figure 2 (b) we plot $`\mathrm{\Delta }\theta `$ for slower-moving bubbles. For $`e=0`$, we confirm in $`3+1`$ dimensions the decaying phase oscillations described by Ferrera and Melfo and observed by them in $`2+1`$ dimensions. These oscillations are killed by adding in gauge fields – for a fixed bubble-wall velocity, the stronger the gauge coupling, the shorter the time for which the gauge-invariant phase difference remains non-zero, and hence the less likely it is that a third collision will occur in time for a defect to form. Thus we would expect a lower defect-formation rate in local theories with slower-moving bubble walls.
Figure 2 illustrates our findings – it shows a cross-section through a non-simultaneous three-bubble collision, after all three bubbles have merged. In each case, the bubbles of initial radius $`R=5`$, centred at $`(\pm 8,0,10)`$ and $`(0,0,-10)`$, were given phases $`\theta =\pi /2,0`$ and $`-2\pi /3`$. For identical initial conditions, we see that in the fast-moving case a vortex is formed, but when the bubbles are slowed down, the phase difference between the first two bubbles has equilibrated by the time the third bubble collides, and no defect is formed.
For a fuller discussion of these and other results, including the effect of taking into account the finite conductivity of the plasma, and the magnetic fields formed at collisions of fast- and slow-moving bubbles, see .
## Acknowledgments
This work was done in collaboration with A.-C. Davis. We would like to thank O. Törnkvist for helpful discussions. Financial support was provided by PPARC and Fitzwilliam College, Cambridge. Computer facilities were provided by the UK National Cosmology Supercomputing Centre in cooperation with Silicon Graphics/Cray Research, supported by HEFCE and PPARC.
# X-ray sources as tracers of the large-scale structure in the Universe
## 1. Introduction
In the current cosmological picture, galaxies, clusters and large-scale structures have grown from small initial perturbations in the density of the Universe via gravitational collapse. Cosmological models are required to meet two basic observational constraints: on the one hand the Universe at $`z\sim 1500`$ was very smooth, as the cosmic microwave background (CMB) is seen to have anisotropies of amplitude $`10^{-5}`$; on the other hand local mass inhomogeneities measured through the distribution of galaxies exhibit fluctuations of order $`1`$ on scales $`\sim 10\mathrm{Mpc}`$. Different cosmologies, however, predict highly discrepant ways in which structures on different scales grow up to the current state from the CMB initial conditions. The largest discrepancies occur at redshifts $`z\sim 1-5`$, which is when galaxies began to collapse and to form stars. Accessing these intermediate redshifts will provide crucial tests for the cosmological models.
The isotropy of the cosmic X-ray background (XRB) on large angular scales ($`\frac{\mathrm{\Delta }I}{I}`$ less than a few % on scales of degrees and larger) suggests that most of the X-ray photons we receive from the Universe must have originated in the distant Universe. Surveys at different depths carried out with $`ROSAT`$ have revealed that 50-70% of the (soft) XRB is resolved into point sources, mostly Active Galactic Nuclei (AGN) of different classes. Although there are still some discrepancies in the determination of the X-ray luminosity function and its redshift evolution, there is no doubt that most of the XRB originates at redshift $`z>1`$. Boyle et al (1994) and Page et al (1996), who find their samples of X-ray selected AGN consistent with pure luminosity evolution models, predict a peak in the X-ray volume emissivity around $`z\sim 1.5-2`$. Miyaji et al (1998) instead find better consistency with luminosity-dependent density evolution, in which case the X-ray volume emissivity in AGN more luminous than $`10^{44.5}\mathrm{erg}\mathrm{s}^{-1}`$ (which for the broken power-law shape of the luminosity function account for most of the X-rays emitted by AGN) rises steeply from $`z=0`$ to $`z=1-2`$, with no evidence for a decline at higher redshifts. In both cases it is clear that soft X-ray emission from the extragalactic sky comes mostly from redshifts $`z=1-2`$ or larger, in a situation very similar to that of star formation in the Universe (Madau et al 1996, Boyle & Terlevich 1998). Studying the X-ray Universe is then likely to provide a major handle on the evolution of the Universe at intermediate redshifts, and it is therefore an issue of prime cosmological relevance.
There are other reasons to prefer X-rays for carrying out cosmological studies. On the one hand, the high-latitude X-ray sky is ‘clean’: at least at photon energies above 2 keV, galactic absorption has negligible effects and the contribution of the Galaxy to the XRB is less than a few % (Iwan et al 1982). A further reason is the small stellar content of high galactic latitude surveys, ranging from 25% at bright fluxes down to probably less than 10% at the faintest fluxes.
In this paper we review the current status of studies of the large-scale structure of the Universe, which up to now have produced relevant but certainly not spectacular results. The two main questions that we address are:
* Do X-ray sources (and the XRB) trace mass in the Universe and what is their bias parameter?
* What are the best observational approaches to obtain information on the large-scale structure of the Universe at intermediate redshifts with X-rays?
Except when otherwise stated we use $`H_0=100h\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, $`q_0=0.5`$ and $`\mathrm{\Lambda }=0`$.
## 2. The X-ray sky on the largest scales
The distribution of the XRB fluctuations on the largest scales, and their link to inhomogeneities in the distribution of matter, has been an active field of research for many years. The observational resources have been mostly limited to the HEAO-1 A2 experiment, which scanned the sky with a resolution of $`3^{\circ }\times 1.5^{\circ }`$ at photon energies 2-60 keV.
### 2.1. The dipole of X-ray sources
Since the Galaxy is moving, with respect to the frame in which the CMB would be isotropic, towards $`l=264^{\circ },b=48^{\circ }`$, there must be an overdensity of sources which are pulling us towards that direction. The distribution of X-ray sources in the sky should therefore exhibit an approximately dipolar large-scale distribution pointing towards the same direction.
Using the AGNs in the Piccinotti et al (1982) flux-limited sample of X-ray sources (2-10 keV flux limit $`3\times 10^{-11}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$), Miyaji & Boldt (1990) and Miyaji (1994) found the dipole of these sources to point towards $`l=318^{\circ },b=38^{\circ }`$ with a large error circle ($`30^{\circ }`$ radius). The dipole appears to saturate at $`50-100h^{-1}\mathrm{Mpc}`$ and is roughly aligned with the CMB dipole. Within the framework of linear theory, this allows the bias parameter of the X-ray selected AGN to be estimated, giving a somewhat large value ($`b_X\mathrm{\Omega }_0^{-0.6}\sim 3-6`$). Uncertainties come primarily from the poor determination of the redshift at which the dipole saturates.
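Schematically, the dipole of such a flux-limited catalogue is the flux-weighted vector sum of the source directions; the sketch below shows the estimator (with the catalogue arrays as assumed inputs), after which linear theory relates the dipole amplitude to the combination $`\mathrm{\Omega }_0^{0.6}/b_X`$.

```python
import numpy as np

def flux_dipole(flux, n_hat):
    """Flux-weighted dipole D = sum_i f_i n_i of a source catalogue;
    flux is an (N,) array and n_hat an (N, 3) array of unit vectors
    towards the sources."""
    D = (flux[:, None] * n_hat).sum(axis=0)
    return D / np.linalg.norm(D)    # unit vector towards the dipole apex
```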
Plionis & Kolokotronis (1998) and Kolokotronis et al (1998) have measured the dipole of an X-ray flux-limited sample of galaxy clusters. This is again in rough alignment with the CMB dipole, but it appears to saturate at $`160h^{-1}\,\mathrm{Mpc}`$. As expected in all popular scenarios where clusters arise in extreme peaks of the underlying dark-matter distribution, they exhibit a large bias parameter ($`b_X\sim 4`$, see Table 4).
The fact that the dipoles of the two most numerous classes of extragalactic X-ray sources (AGNs and clusters) are roughly aligned with the CMB dipole is encouraging. We note, however, that all-sky deeper samples of these objects (particularly X-ray selected AGN) would enormously help in defining the distance at which the contribution to the dipole saturates and therefore in measuring the bias parameter.
### 2.2. The dipole of the X-ray background
There are two reasons why the XRB should show a dipole signal: our motion relative to the CMB rest frame (the so-called Compton-Getting effect) and the excess contribution of the sources that cause this motion in the same direction. The XRB dipole is expected to be aligned with the CMB dipole, but the amplitude should be larger than the Compton-Getting effect, allowing for the excess emissivity.
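To fix the order of magnitude of the Compton-Getting term, a minimal sketch: the XRB energy spectral index $`\alpha \approx 0.4`$ (2-10 keV) and the $`\sim `$370 km/s CMB-frame velocity used below are illustrative assumptions of ours, not values quoted in the text.

```python
# For a power-law intensity I_E ~ E^-alpha, the boosted intensity scales as
# (1 + (v/c) cos(theta))**(3 + alpha), i.e. a dipole of amplitude (3+alpha)*v/c.
v_over_c = 370.0 / 2.998e5          # assumed CMB-frame velocity, km/s over c
alpha = 0.4                         # assumed XRB energy spectral index
print("Compton-Getting amplitude ~ %.1e" % ((3.0 + alpha) * v_over_c))
# -> ~4e-3, i.e. well below 1%, which is why the few-% galactic residual
#    discussed below is so dangerous for this measurement.
```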
There are two basic problems in measuring the XRB dipole: one is the contribution of the Galaxy and the other is the integrated nature of the XRB, whereby confusion noise dominates on all angular scales. Warwick, Pye & Fabian (1980) realized that even at photon energies $`>2`$ keV and galactic latitudes $`b>20^{\circ }`$ a residual galactic contribution of $`2-7\%`$ is present. Iwan et al (1982) modelled this galactic component in terms of a finite-radius disk with a thermal spectrum at $`T\sim 9`$ keV. To emphasize how difficult it is to obtain the extragalactic signal: the galactic contribution amounts to a few % at the galactic poles, while the effect being looked for is less than 1%.
Attempts to look for singular enhancements of the XRB surface brightness include those by Warwick et al (1980), the Jahoda & Mushotzky (1989) search for emissivity from the great attractor, the Mushotzky & Jahoda (1992) search for XRB negative fluctuations towards the most prominent voids, and the unsuccessful search for X-ray emission from superclusters by Persic et al (1990).
By modelling out the Galaxy, Shafer (1983) and Shafer & Fabian (1983) found a dipole signal significant at the $`2\sigma `$ level in the HEAO-1 A2 map. Most of the subsequent dipole refinements have used the same data with increasingly finer corrections for detector drifts and other unwanted effects. The latest one is by Scharf et al (1999), who excluded the galactic plane, the Magellanic clouds and also regions around the Piccinotti et al (1982) sources, which leaves less than 50% of the sky for the dipole analysis. Various methods are used to deal with the masked regions (including spherical harmonic reconstruction) and the results are shown in Table 1. The dipole signal is very clearly detected and its intensity appears larger than the Compton-Getting effect. The direction of the extra large-scale-structure dipole caused by the fluctuations in the source density is only roughly aligned with the direction of our motion, and its amplitude is similar to that of the Compton-Getting effect, as predicted by theory (Lahav, Piran & Treyer 1997).
In an analysis of the $`ROSAT`$ all-sky data (0.9-2.4 keV), Plionis & Georgantopoulos (1999) also find a dipole component. The Galaxy is modelled and removed following Iwan et al (1982), and they further exclude other regions associated with the Galaxy. The direction of the resulting dipole is in better agreement with the CMB dipole, but the amplitude is almost a factor of 10 larger than the Compton-Getting effect.
There are various reasons for the discrepancy between these measurements. First, an extra residual contribution from the Galaxy is likely to contaminate the $`ROSAT`$ data more strongly than the HEAO-1 A2 data. This would explain why the $`ROSAT`$ dipole points closer to the galactic plane and why its amplitude is larger. A second reason for the discrepancy is the fact that Scharf et al (1999) have excluded regions around the galaxy clusters present in the Piccinotti et al (1982) sample (which are known to have a very large bias parameter and represent 50% of the extragalactic sources in that sample) but Plionis & Georgantopoulos (1999) have not. In fact, these authors note that the contribution from the Virgo cluster alone is of the order of 20% of the detected dipole. A good exercise, which could give some insight into the level of the galactic contamination in the $`ROSAT`$ data, would be to exclude the clusters in the $`ROSAT`$ analysis and not to excise the Piccinotti et al (1982) sources from the analysis of the A2 data.
### 2.3. Higher order multipoles of the X-ray background
Lahav, Piran & Treyer (1997) proposed the use of a multipole expansion of the angular variations of the XRB in order to measure the large-scale structure of the Universe. Under fairly general assumptions, the coefficients $`a_{lm}`$ of the harmonic expansion would be the sum of a large-scale structure term $`a_{lm}^{(\mathrm{LSS})}\propto l^{-0.4}`$ and a confusion noise term which is a function of the flux $`S_{cut}`$ down to which sources have been excised from the maps for the multipole analysis, $`a_{lm}^{(\mathrm{Noise})}\propto S_{cut}^{\gamma -1}`$, where $`\gamma `$ is the slope of the integral source counts in the energy band used ($`N(>S)\propto S^{-\gamma }`$).
Treyer et al (1998) performed this analysis on the HEAO-1 A2 all-sky data by removing regions around the Piccinotti et al (1982) sample and the galactic plane. They find evidence for a growth of the spherical harmonic coefficients towards low values of $`l`$, in a manner roughly consistent with the predictions. The significance of the signal is difficult to assess, as the harmonic coefficients are not independent due to cross-talk between different orders introduced by the masking. Assuming a redshift-dependent bias parameter for the X-ray sources parametrized as $`b_X(z)=b_X(0)+z[b_X(0)-1]`$ (which assumes that all galaxies formed at some past epoch, Fry 1996), they estimate a rather modest bias parameter ($`1.0<b_X(0)<1.6`$). In their diagrams it is also seen that the dipole ($`l=1`$) has an unusually large amplitude compared to higher harmonics.
The way to go is indeed to have precise measurements of the XRB intensity on large angular scales, but with the possibility of excluding sources down to the faintest possible levels. Treyer et al (1998) suggest that an all-sky map with XRB intensities measured with a 1% precision and with sources excised down to $`3\times 10^{-13}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$ (i.e., 100 times fainter than the Piccinotti et al catalogue) would be ideal for a spherical harmonic analysis.
## 3. Cross-correlations of galaxy catalogues with XRB intensities
An alternative way that has been devised to look for structure in the X-ray sky is to cross-correlate the unresolved XRB intensity with catalogues of galaxies. The amplitude of the cross-correlation function (CCF) between the X-ray intensity $`I_{XRB}`$ and the galaxy surface density $`N_g`$, $`W_{Xg}(\theta )=\langle I_{XRB}N_g\rangle _\theta /(\langle I_{XRB}\rangle \langle N_g\rangle )-1`$, at zero lag ($`\theta =0`$) provides an approximate measurement of the fraction of the XRB arising either in the catalogued galaxies or in sources clustered with them within a scale of the beam with which the X-ray observations have been obtained (Lahav et al 1993).
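At the pixel level the zero-lag estimator takes a very simple form; the sketch below is our own schematic and ignores masking, beam smearing and detector systematics.

```python
import numpy as np

# Schematic zero-lag cross-correlation between an XRB intensity map and a
# galaxy surface-density map, both given as flat arrays over unmasked pixels.
def w_xg_zero_lag(I_xrb, N_g):
    """W_Xg(0) = <I N> / (<I><N>) - 1 over the pixels."""
    I, N = np.asarray(I_xrb, float), np.asarray(N_g, float)
    return np.mean(I * N) / (np.mean(I) * np.mean(N)) - 1.0
```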
Positive signals have been found for $`W_{Xg}`$, typically of the order of 1% when the galaxies are optically or infrared selected, and up to $`>10`$% when active galaxies are selected (see Table 2). The interpretation of this signal requires modelling the clustering of X-ray sources around the catalogued galaxies, which is in turn modulated by the bias parameter $`b_X`$. Using $`b_X=1`$, it is found that the local volume emissivity of optically selected galaxies amounts to $`10^{39}h\,\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-3}`$ (Lahav et al 1993; Miyaji et al 1994; Carrera et al 1995), most of which is contributed by Seyfert galaxies and QSOs (Barcons et al 1995).
When this volume emissivity is extrapolated to higher redshifts, the fraction of the XRB intensity due to the precursors of the catalogued galaxies can be predicted (Lahav et al 1993). Carrera et al (1995) find that 10-30% of the hard X-ray background might be produced by optically selected galaxies without exceeding the upper limits on the autocorrelation function of the XRB. This value is similar to the result of cross-correlation analyses of deep $`ROSAT`$ X-ray images with deep optical images in the same fields (Almaini et al 1997).
A constraint on the bias parameter of X-ray sources can be derived from the CCF results by taking into account that a fraction $`f\approx 2/3`$ of the CCF signal arises from sources clustered with the catalogued galaxies. As that contribution scales linearly with $`b_X`$, the total signal is proportional to $`j[(1-f)+fb_X]=j(1+2b_X)/3`$ for a volume emissivity $`j`$, so the inferred local volume emissivity scales as $`3/(1+2b_X)`$. Since the AGN-only local emissivity is also $`10^{39}h\,\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-3}`$, we can safely derive that $`b_X<2`$, as otherwise the total volume emissivity would be significantly less than the AGN emissivity.
## 4. Clustering of X-ray selected sources
In recent years, large complete samples of X-ray selected AGN have been built. This has allowed a direct measurement of the 3D spatial correlation function $`\xi (r)`$ for these objects and its comparison with the spatial correlation function of galaxies selected in other wavebands.
Carrera et al (1998) used two complete samples of X-ray selected AGN in pencil-beam survey regions, spanning a wide redshift range ($`0<z<2`$), to search for clustering signals and to derive their amplitude and redshift evolution. Clustering is found to be significant at the 99% level at $`z<1`$. When the spatial correlation function is fitted to a standard power-law form $`\xi (r)=(1+z)^{-p}(\frac{r}{r_0})^{-1.8}`$ (for comoving $`r`$), it is seen that comoving or slower clustering evolution is excluded, and that even for stable or linear growth the values of $`r_0`$ permitted by the data are of the same order as the ones derived from clustering of $`IRAS`$ galaxies. Carrera et al (1998) conclude that X-ray selected AGN are not significantly biased ($`0.7<b_X<2`$).
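For orientation, the mapping between the exponent $`p`$ above and the conventional $`\epsilon `$ parametrization of clustering evolution is $`p=3+\epsilon -\gamma `$ (with $`\gamma =1.8`$); the following is our own illustration of the standard cases.

```python
# Standard evolution models in the comoving parametrization
# xi(r, z) = (1+z)**(-p) * (r/r0)**(-1.8), with p = 3 + eps - gamma.
gamma = 1.8
for name, eps in [("comoving", -1.2), ("stable", 0.0), ("linear (EdS)", 0.8)]:
    p = 3.0 + eps - gamma
    print("%-12s p = %.1f   xi(z=1)/xi(z=0) = %.2f" % (name, p, 2.0 ** -p))
# Comoving evolution corresponds to p = 0; the Carrera et al (1998) fits
# exclude p <= 0, i.e. comoving or slower evolution.
```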
Akylas, Plionis & Georgantopoulos (1999) have used the $`ROSAT`$ All Sky Survey sources to derive a local angular correlation function, from which they estimate a somewhat higher correlation length ($`r_0\sim 7-9\,\mathrm{Mpc}`$), consistent with optically selected QSO clustering (La Franca et al 1998) and comoving clustering evolution. The obvious weakness of this method is that it is based on 2D rather than 3D data.
## 5. Fluctuations and Anisotropies in the XRB
The method of auto-correlating the XRB intensity at various separations has been extensively used in an effort to detect small-scale structure in the XRB attributable to source clustering (Barcons & Fabian 1989, De Zotti et al 1990, Jahoda & Mushotzky 1991, Carrera et al 1991, Carrera & Barcons 1992, Carrera et al 1993, Chen et al 1994). These works produced a set of upper limits for the auto-correlation function of the XRB, $`W_{XX}(\theta )=\langle I_{XRB}I_{XRB}\rangle _\theta /\langle I_{XRB}\rangle ^2-1`$, on different angular scales (except for the Jahoda & Mushotzky 1991 work, which claimed a detection at separations $`\sim 10^{\circ }`$), of the order of $`10^{-3}-10^{-4}`$, which constrained the clustering properties of the underlying source population (see, e.g., Fabian & Barcons 1992).
Under the assumption of comoving clustering evolution, the sources of the XRB cannot be more strongly clustered than optically selected galaxies (see Carrera & Barcons 1992), in which case $`b_X\lesssim 1`$. However, as explained above, Carrera et al (1998) found marginal evidence for faster clustering evolution in samples of X-ray selected AGN. This means that $`b_X`$ could be higher without violating the upper limits on the autocorrelation function, as the sources that produce the bulk of the XRB at high redshift could be very weakly clustered.
A further method employing the XRB angular variations has been to search for fluctuations in the XRB intensity distribution in excess of the ones expected from confusion noise produced by unresolved sources. These excess fluctuations should then be attributed to source clustering, provided all remaining noise (counting noise, systematics, etc.) can be removed. Studies of this kind have invariably led to upper limits, summarized in Table 3. What actually limits the sensitivity of this method is statistics: the sensitivity scales as $`N_{obs}^{-1/2}`$, where $`N_{obs}`$ is the number of independent measurements of the XRB intensity used to derive the excess fluctuations.
Excess fluctuations are related to the power spectrum of the density field of the Universe, weighted with the X-ray volume emissivity as a function of redshift (Barcons, Fabian & Carrera 1998). The method is potentially very powerful as it reflects the clustering properties of the sources that produce the bulk of the XRB at redshifts $`z>1`$.
## 6. The bias parameter of X-ray sources
In the preceding sections we have discussed various approaches to detecting and measuring the clustering properties of X-ray sources. Table 4 summarizes the bias parameter $`b_X`$ inferred from these studies. The measurements are carried out with a variety of methods, correspond to different objects, and are sensitive to different redshifts and also to different scales. Besides that, all dynamical estimates actually measure the combination $`b_X\mathrm{\Omega }_0^{-0.6}`$.
Measurements of the correlation function are also affected by the cosmological parameters in the computation of the distances at significant redshifts, beyond the obvious linear dependence on $`H_0`$. If we live in an accelerating Universe, the Carrera et al (1998) correlation length would have to be scaled up by 30-50%, resulting in a subsequent increase of almost a factor of 2 in the bias parameter. Given the uncertainties in the values of $`q_0`$ and $`\mathrm{\Lambda }`$ (even for a flat Universe), the Carrera et al (1998) and Akylas et al (1999) results cannot be considered inconsistent.
As expected, clusters are a largely biased population ($`b_X\sim 4`$) compared to AGN ($`b_X\sim 1-2`$). The multipoles of the XRB are expected to be dominated by AGN, as these objects are the main sources of the XRB. The bias parameter derived from the XRB multipoles is consistently in agreement with the bias parameter derived from AGN clustering ($`b_X\sim 1-2`$). The exception to this is the XRB dipole, which implies a larger value of $`b_X`$. This could be partly due to a larger cluster contribution, as the lowest-order multipoles are most sensitive to the nearest (and brightest) sources, where the cluster contribution to the source counts ($`\sim 10\%`$ on average in the deep extragalactic surveys) is $`\sim 50\%`$ for the Piccinotti et al (1982) sample.
## 7. Future Prospects
X-ray astronomy is now in a position to address cosmological studies. X-ray selected AGN, which produce most of the X-rays in the Universe, appear to trace mass with a moderate bias parameter $`b_X\sim 1-2`$, but this has to be better defined as a function of scale and redshift. $`Chandra`$ and XMM will carry out several deep ‘pencil beam’ surveys which, after subsequent identification of the serendipitous sources discovered, will define the redshift evolution of the AGN X-ray luminosity function at photon energies $`>2\,\mathrm{keV}`$ and therefore the X-ray volume emissivity as a function of redshift. However, these surveys will not map the sufficiently large areas of the sky that are necessary to trace the large-scale structure of the Universe at the redshifts where the XRB was produced.
The obvious way to go would be to survey very large areas of the sky (the whole sky would be even better) for X-ray sources, in order to have the most complete picture. Unless hard X-rays are produced at significantly lower redshifts than soft X-rays (which is doubtful in view of the $`ASCA`$ and BeppoSAX surveys), to reach $`z\sim 1`$, where a significant fraction of the X-ray emissivity in the Universe resides, these surveys will have to go down to at least $`10^{-14}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$.
There is an alternative, which is to perform high-sensitivity observations of the XRB with a beam corresponding to the linear scale to be probed (Barcons, Fabian & Carrera 1998). As the peak of the power spectrum of the density field of the Universe occurs at comoving wavenumbers $`0.01-0.1h\,\mathrm{Mpc}^{-1}`$, for a standard geometry a $`1^{\circ }`$ resolution is well matched to this at $`z\sim 1-3`$. All-sky measurements of the XRB intensity on that angular scale with a precision of a few % could then be used to detect the excess fluctuations due to source clustering, which are expected to be just below 1% in amplitude. Controlling all other possible sources of excess fluctuations well below that level requires a stable large-area detector (to reduce photon counting noise) and probably an X-ray monitor which simultaneously images the brightest sources in the field.
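A quick consistency check of the quoted angular scale, using the $`H_0=100h\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$, $`q_0=0.5`$ conventions of § 1 (i.e., an Einstein-de Sitter comoving distance); the sketch is our own.

```python
import numpy as np

# Angle subtended by a comoving wavelength 2*pi/k at redshift z, for an
# Einstein-de Sitter geometry with chi(z) = (2c/H0) * (1 - (1+z)**-0.5).
c_over_H0 = 2998.0                                   # h^-1 Mpc
def theta_deg(k, z):                                 # k in h Mpc^-1
    chi = 2.0 * c_over_H0 * (1.0 - 1.0 / np.sqrt(1.0 + z))
    return np.degrees(2.0 * np.pi / k / chi)

for z in (1.0, 2.0, 3.0):
    print(z, round(theta_deg(0.1, z), 1))            # ~2.0, 1.4, 1.2 deg at k = 0.1
```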
## ACKNOWLEDGEMENTS
Partial financial support for this work was provided by the DGESIC under project PB95-0122.
## REFERENCES
Akylas, T., Plionis, M., Georgantopoulos, I., 1999, in: MPA/ESO conference on Evolution of Large Scale Structure, in press
Almaini, O. et al 1997, MNRAS, 291, 372
Barcons, X., Fabian, A.C., 1989, MNRAS, 237, 119
Barcons, X., Franceschini, A., De Zotti, G., Danese, L., Miyaji, T., 1995, ApJ, 455, 480
Barcons, X., Fabian, A.C., Carrera, F.J., 1998, MNRAS, 293, 60
Boyle, B.J. et al, 1994, MNRAS, 271, 639
Boyle, B.J., Terlevich, R.J., 1998, MNRAS, 293, L49
Butcher, J.A. et al, 1997, MNRAS, 291, 437
Carrera, F.J. et al 1991, MNRAS, 249, 698
Carrera, F.J., Barcons, X., 1992, MNRAS, 257, 507
Carrera, F.J. et al 1993, MNRAS, 260, 376
Carrera, F.J. et al 1995, MNRAS, 275, 22
Carrera, F.J., Fabian, A.C., Barcons, X., 1997, MNRAS, 285, 820
Carrera, F.J. et al 1998, MNRAS, 299, 229
Chen, L.-W. et al, 1994, MNRAS, 266, 846
De Zotti, G., et al 1990, ApJ, 351, 22
Fixsen, D.J. et al, 1996, ApJ, 473, 576
Fabian, A.C., Barcons, X., 1992, ARAA, 30, 429
Fry, J.N., 1996, ApJ, 461, L65
Iwan, D. et al, 1982, ApJ, 260, 111
Jahoda, K., Mushotzky, R.F., 1989, ApJ, 346, 638
Kolokotronis, V., Plionis, M., Coles, P., Borgani, S., 1998, MNRAS, 295, 19
La Franca, F., Andreani, P., Cristiani, S., 1998, ApJ, 497, 529
Lahav, O. et al 1993, Nat, 364, 693
Lahav, O., Piran, T., Treyer, M.A., 1997, MNRAS, 284, 499
Madau, P. et al 1996, MNRAS, 283, 1388
Miyaji, T., Boldt, E., 1990, ApJ, 353, L3
Miyaji, T., 1994, PhD Thesis, Univ of Maryland
Miyaji, T., Hasinger, G., Schmidt, M., 1999, A&A, in press
Mushotzky, R., Jahoda, K., 1992, in: The X-ray background, Barcons, X., Fabian, A.C. eds, CUP
Page, M.J. et al 1996, MNRAS, 281, 597
Persic, M. et al, 1990, ApJ, 364, 1
Piccinotti, G. et al, 1982, ApJ, 253, 485
Plionis, M., Kolokotronis, V., 1998, ApJ, 500, 1
Plionis, M., Georgantopoulos, I., 1999, MNRAS, in press
Scharf, C., Jahoda, K., Treyer, M., Lahav, O., Boldt, E., Piran, T., 1999, ApJ, submitted
Shafer, R.A., 1983, PhD thesis, Univ of Maryland
Shafer, R.A., Fabian, A.C., 1983, in: IAU Symposium 104, Early evolution of the Universe and its present structure, Reidel, p. 333
Treyer, M., Scharf, C., Lahav, O., Jahoda, K., Boldt, E., Piran, T., 1998, ApJ, 509, 531
Warwick, R.S., Pye, J.P., Fabian, A.C., 1980, MNRAS, 190, 243 |
no-problem/0001/hep-ph0001294.html | ar5iv | text | # Non-equilibrium dynamics of hot Abelian Higgs model
## 1 Hard thermal loops
Depending on its details, the electroweak phase transition may explain the observed baryon asymmetry in the Universe. Although static equilibrium properties such as the phase structure of the theory have been determined to high accuracy with numerical lattice simulations, much less is known about non-equilibrium dynamics, or even real-time correlators in equilibrium. The reason is that to calculate any real-time correlator, one needs to evaluate a Minkowskian path integral, which cannot be done using Monte Carlo methods, and perturbation theory breaks down because of infrared problems.
Assuming that the fields are initially in thermal equilibrium, the occupation number of the soft modes (with momentum $`k<gT`$) is large, and they can be approximated by classical fields. The hard modes ($`k>T`$) have a small occupation number and must be treated as quantum fields, but perturbation theory works well for them, and they can be integrated out perturbatively. This is done by choosing a lattice cutoff $`\delta x=1/\mathrm{\Lambda }`$ with $`gT\ll \mathrm{\Lambda }\ll T`$, calculating one-loop correlators both in the full theory and in the lattice theory and determining the effective Lagrangian by matching the results. Because $`\mathrm{\Lambda }`$ acts as an infrared cutoff, this construction is free from infrared problems, and the expansion parameter is $`g^2`$, not $`g^2T/m`$ as is usually the case in perturbative calculations. In practice, since the full lattice result is very cumbersome and therefore unsuitable for numerical simulations, it is easier to match only static correlators, which gives only a small error in the results.
In this way, an effective theory can be constructed for the soft modes, and the dynamics of this effective theory is well described by classical equations of motion. This approach has previously been used to measure the sphaleron rate of hot SU(2) gauge theory. I will argue that it can also be used for simulating non-equilibrium dynamics of phase transitions.
In the case of the Abelian Higgs model, the effective Lagrangian is
$`\mathcal{L}_{\mathrm{HTL}}`$ $`=`$ $`-{\displaystyle \frac{1}{4}}F_{\mu \nu }F^{\mu \nu }-{\displaystyle \frac{1}{4}}m_D^2{\displaystyle \int \frac{d\mathrm{\Omega }}{4\pi }F^{\mu \alpha }\frac{v_\alpha v^\beta }{(v\cdot \partial )^2}F_{\mu \beta }}`$ (1)
$`+(D_\mu \varphi )^{*}D^\mu \varphi -m_T^2\varphi ^{*}\varphi -\lambda (\varphi ^{*}\varphi )^2,`$
where $`m_D^2=\frac{1}{3}e^2T^2-\delta m_D^2`$ and $`m_T^2=m^2+(e^2/4+\lambda /3)T^2-\delta m_T^2`$. The mass counterterms are $`\delta m_D^2=e^2\mathrm{\Sigma }T/(4\pi \delta x)`$ and $`\delta m_T^2=(3e^2+4\lambda )\mathrm{\Sigma }T/(4\pi \delta x)`$. The integration is taken over the unit sphere of velocities $`v=(1,\stackrel{}{v})`$, $`\stackrel{}{v}^2=1`$. The extra terms in the Lagrangian (1) depend only on the couplings and the temperature, as long as the high-temperature approximation $`T\gg m`$ is valid.
The equations of motion can be derived from Eq. (1), and are
$`\partial _\mu F^{\mu \nu }`$ $`=`$ $`m_D^2{\displaystyle \int \frac{d\mathrm{\Omega }}{4\pi }\frac{v^\nu v^i}{v\cdot \partial }E^i}-2e\mathrm{Im}\,\varphi ^{*}D^\nu \varphi ,`$
$`D_\mu D^\mu \varphi `$ $`=`$ $`-m_T^2\varphi -2\lambda (\varphi ^{*}\varphi )\varphi .`$ (2)
Because of the derivative in the denominator, the gauge field equation of motion is non-local.
## 2 Local formulation
To make numerical simulations feasible, one needs a local formulation of the theory. The most straightforward approaches involve describing the hard modes by a large number of charged point particles, or by the phase-space distribution of these particles. In practice, both formulations lead to a 5+1-dimensional field theory. However, in the Abelian case, one can integrate out one of the dimensions, thus obtaining a 4+1-dimensional theory that is completely equivalent to the others. It consists of two extra fields $`\stackrel{}{f}(t,\stackrel{}{x},z)`$ and $`\theta (t,\stackrel{}{x},z)`$, where $`z\in [0,1]`$. In the temporal gauge, they satisfy the equations of motion
$`\partial _0^2\stackrel{}{f}(z)`$ $`=`$ $`z^2\nabla ^2\stackrel{}{f}+m_Dz\sqrt{\frac{1-z^2}{2}}\,\nabla \times \stackrel{}{A},`$
$`\partial _0^2\theta (z)`$ $`=`$ $`z^2\nabla \cdot \left(\nabla \theta -m_D\stackrel{}{A}\right),`$
$`\partial _0^2\stackrel{}{A}`$ $`=`$ $`-\nabla \times \nabla \times \stackrel{}{A}-2e\mathrm{Im}\,\varphi ^{*}\stackrel{}{D}\varphi `$ (3)
$`+m_D{\displaystyle \int _0^1}dz\,z^2\left(\nabla \theta -m_D\stackrel{}{A}+\sqrt{\frac{1-z^2}{2z^2}}\,\nabla \times \stackrel{}{f}\right).`$
With these equations, it is possible to calculate any real-time correlator at finite temperature. One simply draws a large number of initial configurations from the thermal ensemble with probability distribution $`\mathrm{exp}(-\beta H)`$, using the Hamiltonian corresponding to the equations of motion (3), and evolves each configuration in time, measuring the correlator of interest. The average over the initial configurations gives the ensemble average of the correlator.
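Schematically, the procedure looks as follows; `sample` and `evolve` stand for the model-specific thermalization and classical time-evolution routines and are placeholders of ours, not implemented here.

```python
import numpy as np

# Monte Carlo estimate of a real-time autocorrelator <O(t) O(0)>:
# average over initial configurations drawn from exp(-beta*H), each evolved
# with the classical equations of motion.
def real_time_correlator(sample, evolve, observable, n_config, n_steps, dt):
    corr = np.zeros(n_steps + 1)
    for _ in range(n_config):
        config = sample()                      # draw from exp(-beta*H)
        O0 = observable(config)
        corr[0] += O0 * O0
        for i in range(1, n_steps + 1):
            config = evolve(config, dt)        # classical time evolution
            corr[i] += O0 * observable(config)
    return corr / n_config
```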
## 3 Simulations
In addition to the standard lattice discretization, the dependence on the new coordinate $`z`$ needs to be discretized as well. A convenient way is to define the canonical momenta $`\stackrel{}{F}=_0\stackrel{}{f}`$ and $`\mathrm{\Pi }=_0\theta `$ and to expand both the fields and the momenta in terms of Legendre polynomials:
$`\stackrel{}{f}^{(n)}`$ $`=`$ $`{\displaystyle \int _0^1}dz\,z\sqrt{\frac{2}{1-z^2}}P_{2n}(z)\stackrel{}{f}(z),\theta ^{(n)}={\displaystyle \int _0^1}dz\,P_{2n}(z)\theta (z),`$
$`\stackrel{}{F}^{(n)}`$ $`=`$ $`{\displaystyle \int _0^1}\frac{dz}{z}\sqrt{\frac{2}{1-z^2}}P_{2n}(z)\stackrel{}{F}(z),\mathrm{\Pi }^{(n)}={\displaystyle \int _0^1}dz\,P_{2n}(z)\mathrm{\Pi }(z).`$ (4)
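These projections can be evaluated by ordinary quadrature; the sketch below is our own illustration for the $`\theta `$-modes (the $`\stackrel{}{f}`$-projections carry the extra weight $`z\sqrt{2/(1-z^2)}`$, whose endpoint singularity at $`z=1`$ is integrable and can be treated the same way).

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# theta^(n) = int_0^1 dz P_2n(z) theta(z), by Gauss-Legendre quadrature
# mapped from [-1, 1] to [0, 1].
def theta_mode(theta_of_z, n, order=64):
    x, w = leggauss(order)
    z = 0.5 * (x + 1.0)
    return 0.5 * np.sum(w * Legendre.basis(2 * n)(z) * theta_of_z(z))

# Even Legendre polynomials are orthogonal on [0, 1]:
print(theta_mode(Legendre.basis(2), 1))   # = 1/(4n+1) = 0.2 for theta(z) = P_2(z)
```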
The Hamiltonian can be written in terms of these Legendre modes, and the ensemble of initial configurations can be generated using standard Monte Carlo techniques. This can be done in two steps:
* (i) The Hamiltonian is Gaussian in the hard fields $`\stackrel{}{f}`$, $`\stackrel{}{F}`$, $`\theta `$ and $`\mathrm{\Pi }`$, and they can therefore be integrated out analytically. This leads to the Hamiltonian of the ordinary classical Abelian Higgs model, with an extra Debye screening term for the electric field. This classical Hamiltonian can be used to generate the initial configuration for the soft modes.
* (ii) Given the soft configuration from step (i), the hard field configuration can be generated very efficiently, since the hard Hamiltonian is Gaussian (see the sketch below).
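Step (ii) amounts to drawing independent Gaussians; in schematic form (the mode stiffnesses `M_k` and the soft-field-dependent means `xbar_k` below are model-dependent placeholders of ours):

```python
import numpy as np

# For a Gaussian hard-sector Hamiltonian H_hard = sum_k (1/2) M_k (x_k - xbar_k)^2,
# each mode is drawn independently from the thermal distribution exp(-beta*H_hard).
def sample_hard_modes(M_k, xbar_k, beta, rng=np.random.default_rng()):
    sigma = 1.0 / np.sqrt(beta * np.asarray(M_k, float))
    return np.asarray(xbar_k, float) + sigma * rng.standard_normal(len(M_k))
```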
After the initial configurations have been generated, any real-time correlator can be measured as was explained in the end of Sec. 2.
## 4 Non-equilibrium dynamics
In addition to equilibrium real-time correlators, the formulation presented here can also be used to study non-equilibrium dynamics, provided that the hard modes remain close enough to the equilibrium. This is plausible in a phase transition, since the phase of a system is a property of the long-wavelength modes only. Near the phase transition, in both phases, all the masses are suppressed by powers of the coupling constant $`g`$ relative to the temperature, and the high-temperature approximation can be used. In this approximation, the phase of the system does not enter the results of the one-loop diagrams, i.e. the distribution of the hard modes is indeed the same in both phases, within the accuracy of our approach. Thus, there is no reason for the hard modes to fall out of equilibrium during the transition.
A natural way of studying a phase transition would be to start from thermal equilibrium in the Coulomb phase, and decrease the temperature so that the system undergoes a transition to the Higgs phase. This requires a mechanism for changing the temperature, and in practice, it is easier to keep the temperature constant and change the parameters, such as the mass of the Higgs field, instead. In fact, when $`\lambda e^2`$, even that is not necessary. The transition is of first order, and if one thermalizes the system initially to the metastable Coulomb phase below $`T_c`$, bubbles of the Higgs phase nucleate during the time evolution, and the phase transition takes place. Assuming that the latent heat is small enough, the temperature does not change significantly. In this way, many non-equilibrium properties of the phase transition can be studied non-perturbatively.
## Acknowledgments
I would like to thank M. Hindmarsh for collaboration on this topic. |
no-problem/0001/nlin0001018.html | ar5iv | text | # Generic occurrence of rings in rotating systems
## Abstract
In rotating scattering systems, the generic saddle-center scenario leads to stable islands in phase space. Non-interacting particles whose initial conditions are defined in such islands will be trapped and form rotating rings. This result is generic and also holds for systems quite different from planetary rings.
Light particles interacting with massive rotating systems are frequently encountered in diverse fields of physics. Electrons in rotating molecules and particular versions of the restricted three-body problem in celestial mechanics are the most well-known examples, but some nuclear models fall into the same category. Situations with large angular momenta will be of particular interest. Increasing amounts of information about narrow planetary rings suggest that such rings are often associated with the so-called shepherd satellites , and may exist due to mechanisms somewhat more complicated than the well known broad rings . The implications of such mechanisms for the above mentioned systems could be far reaching.
The purpose of this paper is to show that there exists in rotating systems a generic mechanism to obtain narrow rings with structure, that does not depend on Kepler orbits. The genericity of the mechanism guarantees that the stable orbits supporting the ring structure are rather insensitive to small perturbations and thus may play a role in different situations of the type mentioned above.
The principal result of the paper is a very general argument to support the occurrence of such rings in terms of one of the two generic scenarios for the formation of bounded trajectories in a scattering problem as a function of some external parameter. We exemplify this argument using the simple model of discs rotating around a center outside of these discs, which we proposed earlier . We shall find a large number of sometimes complicated rings. The introduction of a second disc on a smaller orbit may limit these to a small number of narrow rings. Our line of argumentation does not require that both discs move at the same angular velocity to maintain a ring structure. This implies that we leave the framework of a rotating system and with it the conservation of the Jacobi integral; the generic properties play an important role in the transference of the results obtained to situations for which the Jacobi integral is not conserved.
If we transform free motion to a synodic frame (rotating frame), the Jacobi integral becomes the Hamiltonian and the motion becomes a two-armed spiral: one arm for the incoming motion up to the point of closest approach to the center of rotation, and another for the outgoing motion after this point. We next consider a convex repulsive potential rotating at some distance from the center. It is clear that this potential can throw a particle from the outgoing arm of one spiral to the incoming arm of another. If the radial motion of the particle is sufficiently slow that the particle can hit the same repulsive potential again on its new way out, the trajectory may be confined. This will occur if the absolute value of the Jacobi integral is sufficiently small; otherwise the particle will always escape after at most one collision. Thus by gradually reducing (or increasing, for negative values) the Jacobi integral we will find bounded orbits at some value. There are two generic ways for this to occur: first, a saddle-center bifurcation, which will always produce a stable and an unstable periodic orbit (elliptic and hyperbolic fixed points) ; second, the scenario where a fully hyperbolic structure appears abruptly in phase space ; the latter is typically associated with maxima in the potential and what is known as orbiting scattering.
We shall concentrate on the first scenario, for which the occurrence of stable periodic orbits is generic. In the synodic frame non-interacting particles dispersed along the corresponding stable island will usually form an eccentric ring obtained from the spiral orbit deformed by the potential such as to form a closed path. In the sidereal (space-fixed) frame this ring will undergo a precession corresponding to the frequency of the rotation. As we shall see below, the ring obtained in the sidereal frame may be quite different from the actual trajectories of the individual ring particles.
To illustrate the general argument we shall start from a toy model, which has been studied before . The repulsive potentials in this case are (rotating) hard-wall scatterers. This model has the additional advantage of excluding the second generic scenario mentioned above.
We consider first one disc rotating around a center which lies outside this disc (Fig. 1); the motion will be restricted to a plane. We shall denote by $`R`$ the radial position of the center of the disc with respect to the center of rotation, its radius by $`D`$ ($`R>D`$) and the angular velocity by $`\mathrm{\Omega }`$. In the sidereal frame, particles move on straight lines with constant velocity until they collide with the disc; otherwise they escape. A collision with the disc will typically change the direction and the magnitude of the velocity of the particle. Only if the collision is radial, i.e., at one of the points where the line joining the center of rotation (origin) and the center of the disc intersects the disc, is the magnitude of the velocity unchanged and the incoming collision angle equal to the outgoing one. For these orbits, we can choose the outgoing angle $`\alpha `$ and the velocity of the particle so as to obtain identical and consecutive radial collisions. In the synodic frame, these orbits are periodic and symmetric. Such orbits provide the backbone of the horseshoe construction , as was shown for this model in Ref. . In terms of the Jacobi integral, these orbits are given by
$$J_n=2\mathrm{\Omega }^2(R-D)^2\frac{\mathrm{cos}^2\alpha -[(2n+1)\frac{\pi }{2}-\alpha ]\mathrm{sin}2\alpha }{[(2n+1)\pi -2\alpha ]^2}.$$
(1)
Here, $`\alpha `$ is measured with respect to the normal at the radial collision point, and $`n`$ is the number of complete rotations the disc performs between consecutive collisions.
In Fig. 2 we show for small $`n`$ the characteristic curves $`J_n`$ for the symmetric periodic orbits. As mentioned above, there is a connected interval of the Jacobi integral where the action of the repulsive potential can build periodic orbits. In fact, by reducing the absolute value of the Jacobi integral, a saddle-center bifurcation creates a pair of periodic orbits every time a maximum or minimum is crossed. One of these is stable and the other unstable . In the neighborhood of every stable periodic orbit there exist small regions of stability, which allow for the appearance of rings as described above.
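A short numerical sketch (our own, in units $`\mathrm{\Omega }=R-D=1`$) reproduces these curves and locates their extrema, i.e., the bifurcation points:

```python
import numpy as np

# Characteristic curves of Eq. (1); sign changes of dJ_n/d(alpha) locate
# the saddle-center bifurcations.
def J(alpha, n):
    A = (2 * n + 1) * np.pi / 2.0 - alpha    # so (2A)^2 = [(2n+1)pi - 2*alpha]^2
    return 2.0 * (np.cos(alpha)**2 - A * np.sin(2.0 * alpha)) / (2.0 * A)**2

alpha = np.linspace(-0.49 * np.pi, 0.49 * np.pi, 40001)
for n in range(3):
    dJ = np.gradient(J(alpha, n), alpha)
    flips = alpha[1:][np.sign(dJ[1:]) != np.sign(dJ[:-1])]
    print(n, np.degrees(flips))              # bifurcation angles, cf. Fig. 2 and Eq. (2)
```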
As the stability of the symmetric periodic orbits is known , we can actually determine at what angle $`\alpha `$ stable orbits, periodic in the synodic frame, can occur. Indeed, for each $`n`$ there is exactly one prograde ($`\alpha >0`$) and one retrograde orbit ($`\alpha <0`$), that will be stable over some interval of decreasing $`|J|`$ and then undergo a period doubling bifurcation sequence. The only exception is the prograde $`n=0`$ solution which is marginally stable for a single $`J`$ at $`\alpha =\pi /2`$. The angles $`\alpha _n^\pm `$ for which each saddle-center bifurcation occurs are given by
$$\mathrm{tan}\alpha _n^\pm =\frac{2}{(2n+1)\pi -2\alpha _n^\pm }\pm 1.$$
(2)
Here, the upper index identifies the sign of the solution of Eq. (2), that is, whether the solution corresponds to a prograde or a retrograde trajectory. The absolute values of the solutions of this equation are shown in Fig. 3. The stable periodic orbits are found at absolute values of the angle slightly larger than these .
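Equation (2) is transcendental but poses no numerical difficulty; the following sketch (our own) finds the roots by bracketing, starting the prograde loop at $`n=1`$ because of the marginal $`n=0`$ case noted above:

```python
import numpy as np
from scipy.optimize import brentq

# Roots of Eq. (2): prograde roots lie in (0, pi/2), retrograde in (-pi/2, 0).
def alpha_root(n, sign):
    f = lambda a: np.tan(a) - 2.0 / ((2 * n + 1) * np.pi - 2.0 * a) - sign
    lo, hi = (1e-4, np.pi / 2 - 1e-4) if sign > 0 else (-np.pi / 2 + 1e-4, -1e-4)
    return np.degrees(brentq(f, lo, hi))

print([round(alpha_root(n, +1), 1) for n in (1, 2, 3)])   # ~[51.6, 48.8, 47.7]
print([round(alpha_root(n, -1), 1) for n in (0, 1, 2)])   # ~[-26.9, -39.2, -41.5]
# Both families approach |alpha| = 45 deg as n grows, cf. Fig. 3.
```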
Ring structures are obtained if we distribute initial conditions randomly in the interaction region and observe those that remain after a long time (Fig. 4). Here, we have assumed that ring particles do not interact among themselves. If we consider all these stability regions together, we find a rather large area of rings that can coexist if there is no restriction on the initial conditions. This would lead to a wide ring with a very complicated structure. Each stable region contributes a narrow ring that may have several loops, giving rise to different strands.
One way to obtain well defined narrow rings with only their intrinsic structure would be to limit initial conditions to the surroundings of one of the stable islands (as we often do for numerical purposes in our Monte Carlo calculations). Yet we do not want to argue such a selection, as this would imply information about the formation of rings, which is by no means the subject of this Letter. Based on the known presence of two shepherds near some narrow rings , we introduce a second disc moving on a circular trajectory with respect to the same center that lies inside the one we have considered above (see Fig. 1). We shall proceed to show that a second disc indeed provides such a selection mechanism.
The main effect of a second disc on a smaller orbit will be to sweep many of the possible stable orbits. The corresponding elliptic regions and therefore the associated rings will disappear. Clearly, new ones will be created inside the inner edge of the inner disc, but those are again of the type just discussed and we shall disregard them. Also, new periodic orbits involving collisions with both discs will show up, but these tend to be very unstable. We shall thus concentrate on the periodic orbits of the outer disc that are not affected by the inner one.
The simplest case is, in some sense, the one where the two discs move with incommensurable frequencies, although this implies that the Jacobi integral is no longer conserved. We proceed to evaluate which periodic orbits will not be affected by the inner disc, whose center is at distance $`r`$ from the rotation point and whose radius is $`d`$. We assume that the orbits of the discs are non-overlapping, i.e., $`r+d<R-D`$. From the geometrical arrangement, it is clear that all the orbits that cross the outer edge of the inner disc will be affected by this disc. In terms of the angle $`\alpha `$, we find that the orbits that will not be affected satisfy
$$|\alpha |\geq \alpha _{\mathrm{max}}=\mathrm{arcsin}\frac{r+d}{R-D}.$$
(3)
Note that for commensurable and in particular equal frequencies, resonant conditions may preserve some ring components for special configurations, while for other configurations these components may be wiped out. Yet, for the survival of rings fulfilling Eq. (3), no resonance condition is necessary.
Equation (3) permits us to predict which components will be unaffected by the inner disc. In this sense, Eq. (3) and the precise positions of $`\alpha _n^\pm `$ define selection rules. For instance, if the geometry is such that the condition $`\alpha _2^+<\alpha _{\mathrm{max}}<\alpha _1^+`$ holds, the system will only display one ring, corresponding to the $`\alpha _1^+`$ stable region (see Fig. 3). For $`\alpha _{\mathrm{max}}`$ sufficiently larger than $`\alpha _1^+`$, rings will not occur in the system, while for $`\alpha _{\mathrm{max}}<|\alpha _0^{}|`$ all possible strands will show up. We conclude that ring components corresponding to prograde orbits are more likely to be found than those associated with retrograde orbits.
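In numbers, with the bifurcation angles obtained from Eq. (2) and a made-up inner-disc size (our own toy geometry, $`R-D=1`$):

```python
import numpy as np

# Selection rule of Eq. (3): only families with |alpha_n| >= alpha_max survive.
alpha_max = np.degrees(np.arcsin(0.76))          # r + d = 0.76  ->  ~49.5 deg
prograde = {1: 51.6, 2: 48.8, 3: 47.7}           # alpha_n^+ from Eq. (2)
print({n: a for n, a in prograde.items() if a >= alpha_max})
# -> {1: 51.6}: a single narrow ring survives; all retrograde families
#    (|alpha| < 45 deg) are swept away, as stated above.
```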
In Fig. 4a we present, in the sidereal frame, examples of the ring structures found when only the $`\alpha _1^+`$ component survives, and in Fig. 4b the case when the $`\alpha _2^+`$ ring is also present. Figure 4c shows the finite width of the rings in a magnified region. As a function of time, these rings rotate with the frequency of the outer disc. While these figures show rings at fixed time in the sidereal frame, the individual particles follow polygonal orbits corresponding to reflection angles near $`\alpha _n^+`$, as shown in Fig. 5. These polygons will typically not close. Note that the corresponding periodic orbits in the synodic frame will have a shape similar to the rings in the sidereal frame.
If we were to use a mountain-like potential rather than a hard disc, the second type of scenario, an abrupt bifurcation, may occur in association with the hilltop . This scenario implies the sudden appearance of a hyperbolic structure, which is structurally stable. But in this case pruning would typically set in after a finite change of the Jacobi integral, and we revert to the other scenario if the mountain has a steep slope.
For attractive potentials we can have other periodic orbits, but typically we would expect that the ones we consider still exist, and follow a similar scenario. The case of attractive gravitational potentials is of particular interest: the fact that the central potential will produce elliptic or hyperbolic trajectories in the sidereal system can easily be included in the argument. On the other hand, the existence of the weaker $`1/r`$ potentials from the shepherd moons seems to fit only loosely in the picture discussed above. Yet, we may recall that Hénon found that the periodic orbits of consecutive collision in the restricted three-body problem for zero mass parameter (the Kepler problem with a non-attractive rotating singularity) determine the structure of the periodic orbits for finite small masses. This obviously carries over to the four-body problem. Therefore, we can expect the generic hard-disc results to have some qualitative similarity to the shepherd situation, except that the roles of the inner and outer shepherds may be interchanged . This argument is further supported by some numerical investigations in the restricted three-body problem for small mass ratio , as well as by the recent rigorous proof, for the same problem, of the existence of a chaotic subshift near collision orbits . Further research in this direction is on the way.
Finally, we would like to briefly touch upon the structure of the narrow rings we found. As mentioned above and illustrated in Fig. 4, stable periodic orbits belonging to different stability regions lead to independent ring components, each of which may display several loops ($`n+1`$ for retrograde and $`n`$ for prograde orbits). Particles in different rings will clearly have different speeds. There is a second mechanism that does not imply such a difference of speeds. A single strand will generically become structured when the elliptic fixed point undergoes the period-doubling cascade. Just after the period doubling, the ring component will have first two, then four, etc., entangled strands associated with each region of stability. This mechanism can produce a narrow braided ring. In this case ring particles in the different strands move almost synchronously. Indeed, the relative motion of ring particles and system rotation is a result of the particular potential and may be near zero for the first stable prograde orbit.
In conclusion, we have established the existence of a generic scenario for the occurrence of stable rings of non-interacting particles in rotating systems.
The example we discussed above is particular in several senses: First, we exclude the abrupt bifurcation scenario. If instead of hard discs we consider smooth potential hills of a Gaussian or similar shape, this scenario will occur at the hill tops and give rise to interesting phenomena. Yet in their steep flanks the usual saddle-center bifurcations will still take place. Second, we violate the invariance of the Jacobi integral only in a subspace of phase space that we are not interested in. If we think of semi-classical applications, the border of the invariant subspace will be smeared out and the symmetry breaking will become ubiquitous, though it may be weak in some parts. It is here that the genericity of saddle-center bifurcations and their consequent insensitivity becomes very important, as it guarantees that the structures survive with minor changes. Third, the rings, as found here, display complicated structure, both because two or several rings with different particle speeds may coexist and interfere in complicated ways in semi-classics, and because a single ring may have strands of similar particle speed as the central orbit undergoes a period-doubling cascade.
In semi-classics we can proceed to perform the calculation in the rotating system, where we will use standard techniques both for the stable and unstable orbits; thereafter, we can transform the resulting wave function to the space-fixed system. The errors of the semi-classical approximation will be modified, but the result in the space-fixed frame will be correct to the same order in $`\hbar `$. Thus we can expect to see such structures whenever the motion of the light particle is too slow for the Born-Oppenheimer approximation to be valid, but semi-classical reasoning is adequate. This will, e.g., be the case for very large angular momenta both in molecules and in nuclei.
In this letter we emphasize the generic character of the result obtained. The possible applications have to be studied within each particular field. In most cases the fact, that we expect the same behavior for attractive forces is very important. In nuclei, for example, the rotating mean potential may well result in resonances of surface nucleons that spend most of the time outside this potential. In molecules high angular momenta for the core are still quite inaccessible to most experiments, but it will be interesting to see what happens at the verge of the formation of Rydberg states in such cases. Finally we should not discard the possibility that such effects are relevant to argue that narrow rings with shepherds may live longer than the broad rings due to the generic stability of such structures .
We are thankful to François Leyvraz and Christof Jung for many useful discussions. We also acknowledge one of the referees for bringing to our attention Ref. , which gives a stronger support to our ideas. This work was partially supported by the DGAPA (UNAM) project IN-102597 and the CONACYT grant 25192-E. |
no-problem/0001/astro-ph0001544.html | ar5iv | text | # Oscillation Waveforms and Amplitudes from Hot Spots on Neutron Stars
## 1 INTRODUCTION
The study of neutron stars is attractive in part because of the fundamental issues of physics that can be addressed. These include the behavior of spacetime in strong gravity, the equation of state of matter at supranuclear densities, and the propagation of thermonuclear burning in degenerate matter, an issue which has relevance to many astrophysical events including classical novae and Type Ia supernovae.
The discovery with the Rossi X-ray Timing Explorer (RXTE) of highly coherent brightness oscillations during type I (thermonuclear) X-ray bursts from six low mass X-ray binaries (LMXB) (for reviews see, e.g., Strohmayer, Zhang & Swank 1997b; Smith, Morgan & Bradt 1997; Zhang et al. 1996; and Strohmayer et al. 1997c) has provided a potentially sensitive new tool to understand these fundamental issues. The burst oscillations are thought to be produced by spin modulation of one or two localized thermonuclear hot spots that are brighter than the surrounding surface. The existence of the oscillations, as well as some of the reported behavior of their amplitudes (see, e.g., Strohmayer et al. 1997b) seems to confirm the pre-existing theoretical expectation that X-ray bursts on neutron stars are caused by ignition at a point followed by thermonuclear propagation around the surface (e.g., Fryxell & Woosley 1982; Nozakura et al. 1984; Bildsten 1995). The observed waveforms of these oscillations, and their dependence on time and photon energy, can in principle be used to constrain the mass and radius of the star and the velocity and type of thermonuclear propagation. Such information can only be extracted by detailed comparison of theoretical waveforms with the data.
Here we conduct the most complete existing survey of the properties of the light curves and resultant oscillation amplitudes for one or two expanding hot spots. We calculate light curves and oscillation amplitudes as a function of stellar compactness, rotational velocity at the stellar surface, spot size and location, orientation of the line of sight, angular dependence of the specific intensity, and spot asymmetries. Our calculations follow a procedure similar to that of Pechenick, Ftaclas, & Cohen (1983), Strohmayer (1992), and Miller & Lamb (1998), but our survey is more comprehensive than these previous treatments in that we fully investigate the effects of an expanding spot size on the light curves and oscillation amplitudes, while also exploring the effects of gravity, stellar rotation, viewing geometries, and anisotropic emission. In addition, we present the first calculations of the effects of having two non-antipodal spots as well as the effects of asymmetries in spot brightness.
In § 2 we describe our assumptions and the calculational method. In § 3 we present our results. We show that for small spot sizes the oscillation amplitude has only a weak dependence on spot size, but that as the spot grows the dependence becomes very strong. We also show that stellar rotation, beaming functions and spot asymmetries all tend to increase the observed oscillation amplitudes whereas greater compactness and larger spot sizes tend to decrease the amplitudes. In § 4 we exhibit applications of these results to data on the amplitudes of two harmonics in 4U 1636–536 and on the phase lags versus energy for SAX J1808–3658. We discuss our results and present our conclusions in § 5.
## 2 CALCULATIONAL METHOD
We make the following assumptions in our calculations:
1. The observed radiation comes from one or two emitting spots on the surface. The sources with strong bursts tend to have persistent accretion rates a factor of $`\sim `$10–100 less than the Eddington critical luminosity $`L_E`$ at which the radial radiation force balances gravity, whereas the peak luminosity of the bursts is typically close to $`L_E`$. The flux from the burning regions therefore greatly exceeds the flux from the quiescent regions, so for much of the burst this is a plausible approximation.
2. The radiation is homogeneous and emitted isotropically unless noted otherwise. This assumption is made for simplicity, as presently there is no physical evidence that suggests whether or not the photon emission from the hot spots is isotropic and homogeneous.
3. If there are two spots, they are identical and both grow at the same velocity unless noted otherwise. This assumption is also made for simplicity. Although the geometry of the two magnetic poles is unlikely to be identical, not enough is known about their structure to realistically model non-identical spots.
4. The exterior spacetime of the neutron star is the Schwarzschild spacetime. We neglect the effect of frame dragging due to stellar rotation because it only generates small second order effects for the rotation rates of interest (see Lamb & Miller 1995 and Miller & Lamb 1996).
We compute the waveform of the oscillation as seen at infinity using the procedure of Pechenick et al. (1983). Figure 1 shows our coordinate system and angle definitions. The photons emitted from the star travel along null geodesics which, for a Schwarzschild geometry, satisfy the equation (Misner, Thorne, and Wheeler 1973, p. 673)
$$\left(\frac{1}{r^2}\frac{dr}{d\varphi }\right)^2+\left(\frac{1-2M/r}{r^2}\right)=\frac{1}{b^2}$$
(1)
where $`r`$ and $`\varphi `$ are spherical coordinates, $`M`$ is the gravitational mass of the star, and $`b`$ is the impact parameter of the photon orbit. In both the above equation and throughout, we use geometrized units in which $`G=c=1`$. If the photon is initially at a global azimuthal angle $`\varphi =0`$, then the global azimuthal angle at infinity follows from equation (1) and is (Pechenick et al. 1983, eq. \[2.12\])
$$\varphi _{\mathrm{obs}}=\int _0^{M/R}\left[u_b^2-\left(1-2u\right)u^2\right]^{-1/2}du$$
(2)
where $`u_b=M/b`$. Note that not all of this angle is due to light deflection: for example, a photon emitted tangent to the radial vector in flat spacetime will have an angle $`\varphi _{\mathrm{obs}}=\pi /2`$ at infinity. The maximum angle occurs when $`b=b_{max}=R(1-2M/R)^{-1/2}`$ and is given by (Pechenick et al. 1983, eq. \[2.13\])
$$\varphi _{max}=\int _0^{M/R}\left[\left(1-\frac{2M}{R}\right)\left(\frac{M}{R}\right)^2-\left(1-2u\right)u^2\right]^{-1/2}du$$
(3)
The observer at infinity cannot see the spot if the observer’s azimuthal angle exceeds $`\varphi _{\mathrm{max}}`$.
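The deflection integrals are straightforward to evaluate numerically; the sketch below is our own implementation in units $`G=c=1`$ (for $`b=b_{max}`$ the integrand has an integrable inverse-square-root singularity at the upper limit, which an adaptive quadrature handles):

```python
import numpy as np
from scipy.integrate import quad

# Deflection integral of Eq. (2); Eq. (3) is the special case b = b_max.
def phi_obs(b, M, R):
    ub = M / b
    f = lambda u: 1.0 / np.sqrt(ub**2 - (1.0 - 2.0 * u) * u**2)
    return quad(f, 0.0, M / R, limit=200)[0]

M, R = 1.0, 5.0                             # R/M = 5
b_max = R / np.sqrt(1.0 - 2.0 * M / R)
print(np.degrees(phi_obs(b_max, M, R)))     # phi_max ~ 128 deg, cf. Sec. 3.3
```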
For each phase of rotation we compute the projected area of many small elements of a given finite-size spot. We then build up the light curve of the entire spot by superposing the light curves of all the small elements. We chose a grid resolution such that the effect of having a finite number of small elements produces a fractional error $`<10^{-4}`$ in the computed oscillation amplitudes. For isotropic emission the intensity of radiation at a given rotational phase as seen by an observer at infinity is directly proportional to the projected area of the spot. To investigate the effect of anisotropic emission we include a flux distribution function in the intensity, $`f(\delta )`$, where $`\delta `$ is the angle between the surface normal and the photon propagation direction. The intensity is then proportional to the product of the projected area of the spot (which is proportional to $`\mathrm{cos}\delta `$) and $`f(\delta )`$. We consider two types of anisotropic emission: cosine (“pencil”) beaming, in which $`f(\delta )=\mathrm{cos}\delta `$, and sine (“fan”) beaming, in which $`f(\delta )=\mathrm{sin}\delta `$.
The intensity distribution of an emitting spot is aberrated by the rotation of the star, and the photon frequency is Doppler shifted by the factor $`1/[\gamma (1-v\mathrm{cos}\zeta )]`$. Here $`v`$ is the velocity at the stellar equator, $`\gamma =(1-v^2)^{-1/2}`$, and $`\zeta `$ is the angle between the direction of photon propagation and the local direction of rotation. The inferred spin frequencies of these neutron stars are $`\sim `$300 Hz, implying surface velocities $`v\sim 0.1c`$ for stellar radii $`R\sim 10`$ km.
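For reference, the Doppler factor takes one line; the $`\eta ^4`$ boost of the bolometric intensity quoted in the comment follows from the invariance of $`I_\nu /\nu ^3`$ and is an assumption of this illustrative sketch rather than a full description of our numerical treatment.

```python
import numpy as np

# Doppler factor for a surface element (v in units of c).  Under I_nu/nu^3
# invariance, the bolometric intensity of the element is boosted by eta**4.
def eta(v, zeta):
    gamma = 1.0 / np.sqrt(1.0 - v * v)
    return 1.0 / (gamma * (1.0 - v * np.cos(zeta)))
```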
After computing the oscillation waveform using the above approach, we Fourier-analyze the resulting light curve to determine the oscillation amplitudes and phases as a function of photon energy at different harmonics.
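The Fourier step can be sketched compactly; the normalization below (our own convention) returns the fractional rms amplitude at each harmonic.

```python
import numpy as np

# Fractional rms amplitudes at the first harmonics of a light curve sampled
# uniformly in rotational phase.
def rms_amplitudes(I, n_harmonics=2):
    c = np.fft.rfft(np.asarray(I, float)) / len(I)
    # |c_k| is half the sinusoid amplitude for k >= 1; rms = amplitude/sqrt(2)
    return [np.sqrt(2.0) * np.abs(c[k]) / c[0].real
            for k in range(1, n_harmonics + 1)]

phase = np.linspace(0.0, 1.0, 256, endpoint=False)
I = 1.0 + 0.3 * np.cos(2 * np.pi * phase)    # toy waveform
print(rms_amplitudes(I))                     # -> [~0.212, ~0]
```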
## 3 RESULTS
As discussed in the introduction, the basic quantities of interest include the mass and radius of the neutron stars in bursting sources and the nature and speed of thermonuclear propagation on the stellar surface. We therefore need to relate these fundamental quantities to the observables, such as the oscillation waveform as a function of time and photon energy. We do this by computing theoretical waveforms using different assumptions about the compactness of the star, the angular size of the burning region, the angular location of the observer and magnetic pole relative to the stellar rotation axis, the surface rotation velocity of the star, and the angular distribution of the specific intensity at the surface. In this section we consider each of these effects separately, to isolate the effect they have on the waveform and facilitate interpretation of the data. Here we always quote the fractional rms amplitude of brightness oscillations. We also quote only bolometric amplitudes in this section; as shown by Miller & Lamb (1998), oscillations in the energy spectrum of the source may yield substantially higher amplitudes in the countrate spectrum measured by bandpass-limited instruments such as RXTE.
### 3.1 Waveforms
The decrease in oscillation amplitude as the bursts in some sources progress (Strohmayer et al. 1997b) may suggest an initially localized emission spot that expands via relatively slow ($`\sim 10^6\,\mathrm{cm}\,\mathrm{s}^{-1}`$) thermonuclear propagation. If so, we would expect that the waveforms from burst oscillations would reflect a variety of spot sizes. We therefore consider spots that range from pointlike to those with an angular radius of $`180^{\circ }`$. Also, physical conditions existing in the region of emitting spots may alter photon emission, as in the case of some radio pulsars. Accordingly, we consider the effects of including cosine and sine beaming functions in the calculations of the waveforms.
Figure 2 shows the waveforms from a single emitting spot (left-hand column) and two emitting spots (right-hand column) for various spot sizes. As expected, the amplitude of the intensity oscillations decreases as the spot size increases. Furthermore, in the case of a single emitting spot there is a critical spot size ($`\alpha \approx 50^{\circ }`$ for the case of $`R/M=5.0`$) at which the spot is never completely out of view and hence the intensity remains greater than zero for the entire rotational phase. As the waveforms illustrate, the cosine beaming function, which enhances emission along the magnetic field axis, tends to narrow the width of the waveform peaks. The sine beaming function enhances emission near the tangential plane and will produce a four-peaked waveform for the case of a small single emitting spot (see Pechenick et al. 1983).
### 3.2 Effects of Spot Size and Light Deflection
We are also interested in the effect of the compactness of the star on the observed amplitudes. Figure 3a shows the fractional rms amplitudes at the first two harmonics as a function of spot size and stellar compactness for one emitting spot centered at $`\beta =\gamma =90^{\circ }`$ (i.e., for an observer and spot both in the rotational equator). The curve for the first harmonic illustrates the general shape of most of the first harmonic curves. Initially, the amplitude depends only weakly on spot size. However, once the spot grows to $`\sim 40^{\circ }`$ there is a steep decline in the oscillation amplitude which flattens out only near the tail of the expansion. Figure 3b shows the fractional rms amplitude at the second harmonic under the same assumptions but for two identical, antipodal emitting spots. The range in spot size here is $`0^{\circ }`$$`90^{\circ }`$ since two antipodal spots of $`90^{\circ }`$ radii cover the entire stellar surface. Note that in this situation, there is no first harmonic.
These curves illustrate two interesting features of the two spot configuration. First, the strength of the strongest oscillation amplitude in the two spot case is $`\sim 90\%`$ weaker than the strength of the strongest oscillation amplitude in the one spot case considered above. Furthermore, the curve of the second harmonic does not exhibit the same sharp falloff seen in the first harmonic curve. Thus, the detection of a particularly large fractional rms amplitude with a steep amplitude decline can verify that what is being observed is a first harmonic (i.e., power generated at the stellar spin frequency) rather than any higher harmonics (see Miller & Lamb 1998). The second interesting feature is that the curve of the second harmonic in Figure 3b is nearly identical in both magnitude and shape to the first $`90^{\circ }`$ of the curve of the second harmonic for the case of one spot shown in Figure 3a. Thus, for this geometry, the introduction of a second emitting spot antipodal to the first tends to destroy the first harmonic while leaving the second harmonic unaffected. This result obtains whenever: (1) the physical assumptions (e.g., compactness, rotational velocity, flux distribution function) made for both the one and two spot configurations are the same, and (2) the viewing geometry for both configurations is $`\beta =\gamma =90^{\circ }`$.
In this figure we also display the effect gravity has on the oscillation amplitudes. From equation (3) we know that more compact stars have a larger $`\varphi _{max}`$, and hence a larger fraction of their surface is visible to observers. As a result, oscillation amplitudes for more compact neutron stars are smaller. An exception occurs at the second harmonic of very compact stars ($`R/M<4.0`$), in which case gravitational light deflection focuses the emitted radiation enough to raise the oscillation amplitude (see Pechenick et al. 1983 and Miller & Lamb 1998). Note that the stellar compactness affects the amplitude at the second harmonic far more than the amplitude at the first harmonic.
### 3.3 Effects of Viewing Angle and Magnetic Inclination
Figure 4a shows the oscillation amplitude as a function of $`\beta =\gamma =x`$ (i.e., for the observer and the center of the spot at the same rotational latitude) for a single emitting spot with $`\alpha =15^{\circ }`$ and $`R/M=5.0`$. As $`x`$ increases, the width of the peaks in the light curve decreases (see Pechenick et al. 1983) and hence the oscillation amplitudes increase. The interesting feature here is that the second harmonic has a significant amplitude only for $`x>60^{\circ }`$. Since 50% of the time $`x`$ will be between $`60^{\circ }`$ and $`90^{\circ }`$ (assuming randomly distributed observers), only half of all observers will detect a second harmonic during a typical burst involving one spot. In Figure 4b we make the same assumptions as in Figure 4a but for two emitting spots rather than one. If we had assumed flat space-time and an infinitesimal spot size then the second emitting spot would become visible only for $`2x=180^{\circ }-\varphi _{max}=180^{\circ }-90^{\circ }=90^{\circ }`$. Therefore, for $`x<45^{\circ }`$ only one spot would be observable. For $`R/M=5.0`$, $`\varphi _{max}=128^{\circ }`$, and therefore a second, infinitesimal, spot would begin to be visible at $`x=26^{\circ }`$. Since in Figure 4 the calculation was done with $`\alpha =15^{\circ }`$, the spot begins to become visible at $`x=26^{\circ }-(15/2)^{\circ }\approx 20^{\circ }`$, explaining the appearance of the second harmonic at this $`x`$ value. Note that in the two spot case the first harmonic generates significant power for a wide range of $`x`$. This occurs because for $`x\ne 90^{\circ }`$ one spot is more directly aligned with the observer’s line of sight, and as a result the intensity maxima of the two spots are unequal. In general, whenever an asymmetry exists between the two emitting spots such that the intensity maxima of the two spots are unequal, power is generated at the first harmonic.
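The visibility arithmetic above reduces to a single line; a small sketch (with the values of $`\varphi _{max}`$ taken as given) reproduces the numbers quoted in the text:

```python
# x at which an antipodal second spot of angular radius alpha_deg starts to
# come into view, given the maximum light-deflection angle phi_max_deg
def x_second_spot_visible(phi_max_deg, alpha_deg=0.0):
    return (180.0 - phi_max_deg) / 2.0 - alpha_deg / 2.0

print(x_second_spot_visible(90.0))           # flat spacetime, point spot: 45 deg
print(x_second_spot_visible(128.0))          # R/M = 5.0, point spot: 26 deg
print(x_second_spot_visible(128.0, 15.0))    # alpha = 15 deg: 18.5 (about 20) deg
```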
### 3.4 Effects of Anisotropies from Doppler Shifts and Beaming
In Figure 5 we include the effects of Doppler shifts and aberrations on the oscillation amplitudes. We assume a surface rotation velocity of $`v=0.1`$c, which corresponds to a neutron star with radius $`R=10`$ km and spin frequency $`\nu \sim 400`$ Hz. As can be seen, the amplitude of the second harmonic is increased significantly more than the amplitude of the first harmonic as a result of rotation. The tendency to generate more power at the higher harmonics than at the spin frequency is a general property of the rotation (see Miller & Lamb 1998 for a discussion of this effect).
Physical conditions in the region of emitting spots might cause anisotropic emission of radiation. The results of including a cosine beaming function and a sine beaming function for the case of one spot are shown in Figure 6a and for two antipodal spots in Figure 6b. As is apparent from Figure 2, the enhanced emission along the magnetic axis for the cosine beaming tends to narrow peaks in the light curves (see Pechenick et al. 1983 for a discussion of the light curves for beamed emission) and hence raise the oscillation amplitudes. For the sine beaming the peaks in the light curve are broadened, tending to lower the amplitude at the first harmonic. Both beaming functions do, however, generate substantial additional power at the higher harmonics.
## 4 APPLICATION TO X-RAY BURST SOURCES
### 4.1 Relative Amplitudes of Harmonics in 4U 1636–536
Recent work by Miller (1999) gives evidence for the presence of power at the stellar spin frequency for a source (4U 1636–536) consisting of two emitting spots. Earlier we saw that one possible mechanism for generating significant power at the stellar spin frequency for the case of two emitting spots is to vary the viewing geometry. Another possible mechanism is to have the spots be non-antipodal. This can occur, for instance, if the star’s dipolar magnetic field has its axis slightly displaced from its center. In the left panel of Figure 7 we show the oscillation amplitude as a function of spot separation for the case of two emitting spots with $`\alpha =30^{\circ }`$, $`\beta =\gamma =90^{\circ }`$, and $`R/M=5.0`$. The spots are perfectly antipodal at a spot separation of $`180^{\circ }`$. As the figure shows, the oscillation amplitude at the second harmonic is relatively constant while the oscillation amplitude at the first harmonic is a linear function of spot separation. At a spot separation of $`170^{\circ }`$ the fractional rms amplitudes of the first and second harmonic are equal. Another way to produce power at the spin frequency is to have differences in brightness between the two spots. Such an asymmetry can occur, for example, if the strength of the magnetic field at the location of the two spots is different, thereby funneling different amounts of nuclear fuel onto the hot spot regions. In the right panel of Figure 7 we show the oscillation amplitude as a function of the percent difference between the brightness of the two spots. As in the case of the non-antipodal spots, the amplitude at the second harmonic is essentially constant while the amplitude at the first harmonic increases linearly with increasing percent difference in spot brightness.
These figures reinforce the conclusion, also evident from Figure 3, that only with two spots can the oscillation at the first overtone be stronger than the oscillation at the fundamental. Therefore, within the general theoretical model explored in this paper, 4U 1636–536 has two nearly antipodal hot spots.
### 4.2 Phase Lags in SAX J1808–3658
Doppler model of phase lags.—The hard X-ray spectrum of low-mass X-ray binaries is well-fit by a Comptonization model, in which the central neutron star is surrounded by a hot corona of electrons and the photons injected into this corona are relatively soft. It was therefore expected that the observed hard photons, having scattered more often than the soft photons and thus having a longer path length before escape, would lag the soft photons. Instead, in several sources a hard lead was discovered. One explanation for this lead was suggested by Ford (1999). He proposed that Doppler shifting of photons emitted from rotating hot spots, as in thermonuclear burst oscillations, would tend to produce a hard lead because the approaching edge of the spot would precede the trailing edge. He compared a simplified calculation of this effect with burst data for Aql X-1 and showed that an adequate fit could be obtained (Ford 1999).
The millisecond X-ray pulsar SAX J1808–3658 provides a stronger test of this hypothesis. This source has strong oscillations ($`\sim 5\%`$ rms) at $`\sim 401`$ Hz, which as usual are attributed to rotational modulation of a hot spot on the surface. Cui, Morgan, & Titarchuk (1998) obtained precise measurements of the oscillation phase as a function of energy, and found that in this source as well there is a hard lead.
Figure 8 shows sample calculations of the time lag as a function of energy. In the left panel we focus on the dependence of the lag on mass, and in the right panel we concentrate on the effect of changing the surface temperature. In both cases the surface emission pattern is the pattern for a gray atmosphere, and we assume $`R/M=5.1`$ and a stellar spin frequency of 401 Hz, which is the spin frequency of SAX J1808–3658. In panel (a) we assume a surface effective temperature of $`kT=0.7`$ keV as measured at infinity. In panel (b) we assume a stellar gravitational mass of 1.6 $`M_{\odot }`$, which gives a surface equatorial rotation velocity of 0.1 c as measured at infinity. From this figure it is clear that the effect of increasing the mass is to increase the phase lead, whereas the effect of increasing the temperature is to increase the energy at which the curve starts to flatten.
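A stripped-down version of the Doppler mechanism behind these curves can be coded directly. The sketch below uses flat spacetime, a pointlike equatorial spot viewed from the equator, and pure blackbody emission (no light bending, gravitational redshift, or Comptonization), so the numbers are only indicative; the temperature, velocity, and energy bands are assumed for illustration.

```python
import numpy as np

KT = 1.0      # surface temperature in keV (assumed)
V = 0.1       # equatorial velocity in units of c (assumed)
NU = 401.0    # spin frequency in Hz, as for SAX J1808-3658

def band_lightcurve(e_kev, n_phase=4096):
    phase = 2.0 * np.pi * np.arange(n_phase) / n_phase
    cosd = np.cos(phase)                 # projection factor toward the observer
    vis = cosd > 0.0
    gam = 1.0 / np.sqrt(1.0 - V**2)
    # cos(zeta) = -sin(phase) for this rotation sense: the spot approaches
    # the observer just before it crosses the line of sight
    eta = 1.0 / (gam * (1.0 + V * np.sin(phase)))       # Doppler factor
    # Boosted blackbody: I_E(obs) = eta^3 I_{E/eta}(em) = E^3/[exp(E/eta kT)-1]
    flux = np.zeros(n_phase)
    flux[vis] = cosd[vis] * e_kev**3 / (np.exp(e_kev / (eta[vis] * KT)) - 1.0)
    return flux

def fundamental_phase(flux):
    return np.angle(np.fft.rfft(flux)[1])

dphi = fundamental_phase(band_lightcurve(10.0)) - fundamental_phase(band_lightcurve(2.5))
print(f"hard (10 keV) leads soft (2.5 keV) by {dphi / (2*np.pi*NU) * 1e6:.0f} microseconds")
```

Because the exponential tail of the blackbody amplifies a given frequency shift more strongly at higher photon energies, the hard-band pulse peaks earlier in phase, which is the hard lead discussed in the text.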
Comparison with data.—Comparing these models with the data for SAX J1808–3658 introduces additional complications. In order to improve statistics, Cui et al. (1998) averaged the phase lags over the period from 11 April 1998 to 29 April 1998. The calculation of the phase leads by Cui et al. (1998) also involves averaging the phase over energy bins several keV in width. Examination of Figure 9 shows that the phase changes rapidly over such an energy range, implying that the measured phase lead depends sensitively on the input spectrum. The effective area of RXTE also decreases rapidly below 4 keV, which strongly affects the observed average phase in the 2–3 keV reference bin. Finally, given that the observed spectrum is not a blackbody, but is instead approximately a power law of index 1.86 from $`\sim 3`$ keV to $`\sim 30`$ keV (Heindl & Smith 1998), Compton reprocessing has taken place and the observed phase lags are the result of a convolution between the unscattered phase lags and the Compton redistribution function.
Figure 9 plots the data along with a simplified model of the phase lags taking some of these complications into account. We ignore the changing effective area of RXTE and assume a constant response with energy. Based on the power-law nature of the spectrum, we approximate the process of Comptonization by assuming that the energy of the injected photons is much less than the observed photon energies or the temperature of the electrons. We also assume an isothermal atmosphere, in contrast to the gray atmosphere we used for Figure 8, which gives too low a hard lead. The best fit has $`kT=1.1`$ keV as observed at infinity, $`R=10`$ km, and $`M=2.2M_{\odot }`$. The total $`\chi ^2`$ of the fit is 38.6 for 6 degrees of freedom. The dominant contribution to this $`\chi ^2`$ comes from the underprediction of the hard lead at low energies. This is as expected, because we assumed an instrumental effective area that is constant with energy, whereas in reality the effective area rises rapidly with increasing photon energy at low energies. This changing effective area gives greater weight to the larger leads at higher energies, which is in better agreement with the data. Therefore, given the simplifications of the model, our fit to the data is encouragingly good and supports the Doppler interpretation of the observed hard lead.
## 5 DISCUSSION
Relative amplitudes of harmonics.—We have presented calculations of the waveforms and amplitudes at different harmonics of the spin frequency for one or two hot spots and many realistic combinations of stellar compactness, spot size and emission pattern, observation angle, and magnetic inclination. These calculations show that typically either the fundamental or the first overtone has an amplitude much larger than the amplitude of any other harmonic. This corresponds well to the observations of the six sources with burst brightness oscillations, in which there is a strong oscillation at only one frequency. We also find that if the first overtone is the dominant harmonic, there must be two similar and nearly antipodal bright spots, because a single spot always produces a much stronger oscillation at the fundamental than at any overtone. In contrast, if the fundamental is much stronger than the overtone, this is consistent with but does not require a single spot: if there are two bright spots that are sufficiently dissimilar or far away from antipodal, or if our line of sight is such that one of the spots is hidden, then the oscillation at the fundamental will dominate. This implies that the three sources with detectable oscillations near $`\sim 300`$ Hz (4U 1728–34 \[Strohmayer et al. 1996\], 4U 1702–43 \[Markwardt, Strohmayer, & Swank 1999\], and 4U 1636–536 \[Miller 1999; this source has a strong oscillation at $`\sim 580`$ Hz but a detectable oscillation at $`\sim 290`$ Hz\]) have spin frequencies of $`\sim 300`$ Hz, whereas the three sources with detectable oscillations only at $`\sim 500`$–600 Hz (Aql X-1 \[Zhang et al. 1998\], MXB 1743–29 \[Strohmayer et al. 1997a\], and KS 1731–260 \[Smith et al. 1997\]) could have spin frequencies at either this frequency or half of it. Therefore, all six burst oscillation sources are consistent with having spin frequencies $`\sim 300`$ Hz.
Information content of waveforms.—Our results also show clearly that power density spectra, which contain information only on the relative amplitudes of different harmonics, are much less informative than the waveforms themselves. Figure 10 shows three different waveforms that all have an amplitude at the first overtone that is 2.3 times the amplitude at the fundamental, which is the ratio found by Miller (1999) for 4U 1636–536. In all three cases there are two bright spots. The solid line shows the waveform for two identical pointlike spots that are $`175^{\circ }`$ apart, the dotted line shows the waveform for two antipodal spots with brightnesses differing by 10%, and the dashed line shows the waveform for two identical and antipodal spots that are $`75^{\circ }`$ from the rotational pole and observed from a line of sight that is also $`75^{\circ }`$ from the rotational pole. Although the amplitude ratio is the same in each case, the waveforms are quite different from each other, and the physical implications are also different. This underscores the importance of calculating waveforms and not just power density spectra, both observationally and theoretically.
Searches for weak higher harmonics.—The amplitudes and phases of higher harmonics potentially contain important clues about the propagation of nuclear burning and about the compactness of the star, but as yet there are no sources in which a higher harmonic of a strong oscillation has been observed. Our plots of amplitude versus spot size suggest that it is best to look for weaker higher harmonics when the dominant oscillation is strong. The reason is that, in general, the ratio of the second to first harmonic drops with increasing spot size, and therefore with decreasing amplitude at the fundamental. Hence, a search only of data showing strong oscillations may more sensitively reveal the presence of higher harmonics.
Shape of amplitude decrease as a function of spot size.—In our calculations, as the spot size increases the amplitude decreases slowly until the angular radius of the spot is $`\sim 40^{\circ }`$ but quickly thereafter. This apparently conflicts with the observations of 4U 1728–34 reported in Strohmayer et al. (1997), in which the error bars are large but it appears that the decrease in amplitude is fast from the start and then slows down. Further quantification of this result is important, but if confirmed it could be caused by a number of effects. For example, the spot size might never be small: if ignition were nearly simultaneous over a large area, further spreading would already be in the large-spot regime, and hence the amplitude would decrease quickly. If the spreading velocity were initially high but then decreased, this would have a similar effect on the amplitudes. Alternately, if there is a corona with a non-negligible scattering optical depth around the star and the optical depth increases as the burst approaches its peak flux, this would also decrease the amplitude faster than expected when the optical depth is zero.
Phase lags as a probe of surface rotation velocity.—We find that the hard lead observed in SAX J1808–3658 is fit reasonably well by a model (see Ford 1999) in which rotational Doppler shifts cause higher-energy X-rays to lead lower-energy X-rays. This fit lends support to the model, and suggests that with better fitting and more data (especially from a future high-area timing mission) it may be possible to use phase lag versus energy data to help constrain the mass $`M`$ or the compactness $`R/M`$ of the star.
We thank Wei Cui for providing time lag data for SAX J1808-3658. This work was supported in part by NASA ATP grant number NRA-98-03-ATP-028, NASA contract NASW-4690, and DOE contract DOE B341495. |
Surface Collective Excitations in Ultrafast Pump–Probe Spectroscopy of Metal Nanoparticles
## I Introduction
Surface collective excitations play an important role in the absorption of light by metal nanoparticles. In large particles with sizes comparable to the wave–length of light $`\lambda `$ (but smaller than the bulk mean free path), the lineshape of the surface plasmon (SP) resonance is determined by the electromagnetic effects. On the other hand, in small nanoparticles with radii $`R\ll \lambda `$, the absorption spectrum is governed by quantum confinement effects. For example, the momentum non–conservation due to the confining potential leads to the Landau damping of the SP and to a resonance linewidth inversely proportional to the nanoparticle size. Confinement also changes non–linear optical properties of nanoparticles: a size–dependent enhancement of the third–order susceptibilities, caused by the elastic surface scattering of single–particle excitations, has been reported.
Recently, extensive experimental studies of the electron relaxation in nanoparticles have been performed using ultrafast pump–probe spectroscopy. Unlike in semiconductors, the dephasing processes in metals are very fast, and nonequilibrium populations of optically excited electrons and holes are formed within several femtoseconds. These thermalize into a hot Fermi–Dirac distribution within several hundreds of femtoseconds, mainly due to e–e and h–h scattering. Since the electron heat capacity is much smaller than that of the lattice, a high electron temperature can be reached in less than 1 ps, i.e., before any significant energy transfer to the phonon bath occurs. Note that in nanometer-sized metal particles and nanoshells, the electron–phonon coupling is weaker than in the bulk. During this stage, the SP resonance has been observed to undergo a time–dependent spectral broadening. Subsequently, the electron and phonon baths equilibrate through the electron–phonon interactions over time intervals of a few picoseconds. During this incoherent stage, the hot electron distribution can be characterized by a time–dependent temperature.
The many–body correlations play an important role in the transient changes of the absorption spectrum. For example, in undoped materials, four–particle correlations and exciton–exciton interactions were shown to play a dominant role in the optical response for specific sequences of the optical pulses. Correlation effects also play an important role in doped systems. In nanoparticles, it has been shown that in order to explain the differential absorption lineshape, it is essential to take into account the energy–dependent e–e scattering of the optically–excited carriers near the Fermi surface. Furthermore, despite the similarities to the bulk–like behavior, observed, e.g., in metal films, certain aspects of the optical dynamics in nanoparticles are significantly different. For example, experimental studies of small Cu nanoparticles revealed that the relaxation times of the pump–probe signal depend strongly on the probe frequency: the relaxation was considerably slower at the SP resonance. This and other observations suggest that collective surface excitations play an important role in the electron dynamics in small metal particles.
In this paper we address the role of surface collective excitations in the electron relaxation in small metal particles. We show that the dynamically screened e–e interaction contains a correction originating from the surface collective modes excited by an electron in a nanoparticle. This opens up new quasiparticle scattering channels mediated by surface collective modes. We derive the corresponding scattering rates, which depend strongly on the nanoparticle size. The scattering rate of a conduction electron increases with energy, in contrast to the bulk–plasmon mediated scattering. In noble metal particles, we study the SP–mediated scattering of a $`d`$–hole into the conduction band. The scattering rate of this process depends strongly on temperature, and exhibits a peak as a function of energy due to the restricted phase space available for interband scattering. We show that this effect manifests itself in the ultrafast nonlinear optical dynamics of nanometer–sized particles. We perform self–consistent calculations of the temporal evolution of the absorption spectrum in the presence of the pump pulse. For large sizes, the absorption peak exhibits a red shift at short time delays. We show that with decreasing size, the SP resonance develops a blue shift due to the SP–mediated interband scattering of the $`d`$–hole. We also find that the relaxation times of the pump–probe signal depend strongly on the probe frequency, in agreement with recent experiments.
The paper is organized as follows. In Section II we review the relevant basic relations of the linear response theory for nanoparticles. In Section III the dynamically screened Coulomb potential in a nanoparticle is derived. In Section IV we calculate the SP–mediated interband scattering rate of a d–band hole. In Section V we incorporate this effect in the calculation of the absorption coefficient. In Section VI we present our numerical results and discuss their experimental implications on the size and frequency dependence of the time–resolved pump–probe signal. In Section VII we study the quasiparticle scattering in the conduction band mediated by surface collective modes. Section VIII concludes the paper.
## II Basic relations
In this section we recall the basic relations regarding the linear absorption by metal nanoparticles embedded in a medium with dielectric constant $`ϵ_m`$. We will focus primarily on noble metal particles containing several hundreds of atoms; in this case, the confinement affects the extended electronic states after the bulk lattice structure has been established. If the particle radii are small, $`R\ll \lambda `$, only dipole surface modes can be optically excited and non–local effects can be neglected. In this case the optical properties of this system are determined by the dielectric function
$$ϵ_{\mathrm{col}}(\omega )=ϵ_m+3pϵ_m\frac{ϵ(\omega )-ϵ_m}{ϵ(\omega )+2ϵ_m},$$
(1)
where $`ϵ(\omega )=ϵ^{}(\omega )+iϵ^{\prime \prime }(\omega )`$ is the dielectric function of a metal particle and $`p\ll 1`$ is the volume fraction occupied by nanoparticles in the colloid. Since the $`d`$–electrons play an important role in the optical properties of noble metals, the dielectric function $`ϵ(\omega )`$ includes the interband contribution $`ϵ_d(\omega )`$. For $`p\ll 1`$, the absorption coefficient of such a system is proportional to that of a single particle and is given by
$$\alpha (\omega )=-9pϵ_m^{3/2}\frac{\omega }{c}\text{Im}\frac{1}{ϵ_s(\omega )},$$
(2)
where
$$ϵ_s(\omega )=ϵ_d(\omega )-\omega _p^2/\omega (\omega +i\gamma _s)+2ϵ_m,$$
(3)
plays the role of an effective dielectric function of a particle in the medium. Its zero, $`ϵ_s^{}(\omega _s)=0`$, determines the frequency of the SP, $`\omega _s`$. In Eq. (3), $`\omega _p`$ is the bulk plasmon frequency of the conduction electrons, and the width $`\gamma _s`$ characterizes the SP damping. The semiclassical result Eqs. (2) and (3) applies to nanoparticles with radii $`R\gg q_{_{TF}}^{-1}`$, where $`q_{_{TF}}`$ is the Thomas–Fermi screening wave–vector ($`q_{_{TF}}^{-1}\sim 1`$ Å in noble metals). In this case, the electron density deviates from its classical shape only within a surface layer occupying a small fraction of the total volume. Quantum mechanical corrections, arising from the discrete energy spectrum, lead to a width $`\gamma _s\sim v__F/R`$, where $`v__F=k__F/m`$ is the Fermi velocity. Even though $`\gamma _s/\omega _s\sim (q_{_{TF}}R)^{-1}\ll 1`$, this damping mechanism dominates over others, e.g., due to phonons, for sizes $`R\lesssim 10`$ nm. In small clusters, containing several dozens of atoms, the semiclassical approximation breaks down and density functional or ab initio methods should be used.
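For orientation, Eqs. (2) and (3) with a Drude conduction-electron term, a constant interband contribution, and the size-dependent width $`\gamma _s\sim v__F/R`$ can be evaluated in a few lines. The sketch below uses rough noble-metal numbers, assumed for illustration rather than fitted values:

```python
import numpy as np

HBAR_VF = 0.72   # hbar * v_F in eV nm (v_F ~ 1.1e8 cm/s, assumed)
OMEGA_P = 8.9    # bulk plasmon energy in eV (assumed)
EPS_D = 5.0      # constant real interband dielectric function (assumed)
EPS_M = 2.2      # host medium (assumed)

def eps_s(omega, R_nm):
    gamma_s = HBAR_VF / R_nm     # size-dependent Landau-damping width, eV
    return EPS_D - OMEGA_P**2 / (omega * (omega + 1j * gamma_s)) + 2.0 * EPS_M

def absorption(omega, R_nm):
    # Eq. (2): alpha ~ -omega Im[1/eps_s], up to size-independent prefactors
    return -omega * np.imag(1.0 / eps_s(omega, R_nm))

omega = np.linspace(1.5, 3.5, 2001)
for R in (2.5, 5.0, 10.0):
    a = absorption(omega, R)
    peak = omega[np.argmax(a)]
    above = omega[a > 0.5 * a.max()]
    print(f"R = {R:4.1f} nm: SP peak at {peak:.3f} eV, FWHM ~ {above[-1]-above[0]:.3f} eV")
```

With these numbers the resonance sits at $`\omega _s=\omega _p/\sqrt{ϵ_d+2ϵ_m}\approx 2.9`$ eV, and the printed widths scale as $`1/R`$, illustrating the size dependence discussed above.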
It should be noted that, in contrast to surface collective excitations, the e–e scattering is not sensitive to the nanoparticle size as long as the condition $`q_{_{TF}}R\gg 1`$ holds. Indeed, for such sizes, the static screening is essentially bulk–like. At the same time, the energy dependence of the bulk e–e scattering rate, $`\gamma _e\propto (E-E_F)^2`$, with $`E_F`$ being the Fermi energy, comes from the phase–space restriction due to the momentum conservation, and involves the exchange of typical momenta $`q\sim q_{_{TF}}`$. If the size–induced momentum uncertainty $`\delta q\sim R^{-1}`$ is much smaller than $`q_{_{TF}}`$, the e–e scattering rate in a nanoparticle is not significantly affected by the confinement.
## III Plasmon–pole approximation in small metal particles
In this section, we study the effect of the surface collective excitations on the e–e interactions in a spherical metal particle. To find the dynamically screened Coulomb potential, we generalize the method previously developed for calculations of local field corrections to the optical fields. The potential $`U(\omega ;𝐫,𝐫^{})`$ at point $`𝐫`$ arising from an electron at point $`𝐫^{}`$ is determined by the equation
$`U(\omega ;𝐫,𝐫^{})=u(𝐫-𝐫^{})+{\displaystyle \int 𝑑𝐫_1𝑑𝐫_2u(𝐫-𝐫_1)\mathrm{\Pi }(\omega ;𝐫_1,𝐫_2)U(\omega ;𝐫_2,𝐫^{})},`$ (4)
where $`u(𝐫-𝐫^{})=e^2|𝐫-𝐫^{}|^{-1}`$ is the unscreened Coulomb potential and $`\mathrm{\Pi }(\omega ;𝐫_1,𝐫_2)`$ is the polarization operator. There are three contributions to $`\mathrm{\Pi }`$, arising from the polarization of the conduction electrons, the $`d`$–electrons, and the medium surrounding the nanoparticles: $`\mathrm{\Pi }=\mathrm{\Pi }_c+\mathrm{\Pi }_d+\mathrm{\Pi }_m`$. It is useful to rewrite Eq. (4) in the “classical” form
$$\nabla \cdot (𝐄+4\pi 𝐏)=4\pi e^2\delta (𝐫-𝐫^{}),$$
(5)
where $`𝐄(\omega ;𝐫,𝐫^{})=-\nabla U(\omega ;𝐫,𝐫^{})`$ is the screened Coulomb field and $`𝐏=𝐏_c+𝐏_d+𝐏_m`$ is the electric polarization vector, related to the potential $`U`$ as
$$\nabla \cdot 𝐏(\omega ;𝐫,𝐫^{})=-e^2\int 𝑑𝐫_1\mathrm{\Pi }(\omega ;𝐫,𝐫_1)U(\omega ;𝐫_1,𝐫^{}).$$
(6)
In the random phase approximation, the intraband polarization operator is given by
$`\mathrm{\Pi }_c(\omega ;𝐫,𝐫^{})={\displaystyle \underset{\alpha \alpha ^{}}{\sum }}{\displaystyle \frac{f(E_\alpha ^c)-f(E_\alpha ^{}^c)}{E_\alpha ^c-E_\alpha ^{}^c+\omega +i0}}\psi _\alpha ^{c\ast }(𝐫)\psi _\alpha ^{}^c(𝐫)\psi _\alpha ^{}^{c\ast }(𝐫^{})\psi _\alpha ^c(𝐫^{}),`$ (7)
where $`E_\alpha ^c`$ and $`\psi _\alpha ^c`$ are the single–electron eigenenergies and eigenfunctions in the nanoparticle, and $`f(E)`$ is the Fermi–Dirac distribution (we set $`\hbar =1`$). Since we are interested in frequencies much larger than the single–particle level spacing, $`\mathrm{\Pi }_c(\omega )`$ can be expanded in terms of $`1/\omega `$. For the real part, $`\mathrm{\Pi }_c^{}(\omega )`$, we obtain in the leading order
$$\mathrm{\Pi }_c^{}(\omega ;𝐫,𝐫_1)=-\frac{1}{m\omega ^2}\nabla \cdot [n_c(𝐫)\nabla \delta (𝐫-𝐫_1)],$$
(8)
where $`n_c(𝐫)`$ is the conduction electron density. In the following we assume, for simplicity, a step density profile, $`n_c(𝐫)=\overline{n}_c\theta (R-r)`$, where $`\overline{n}_c`$ is the average density. The leading contribution to the imaginary part, $`\mathrm{\Pi }_c^{\prime \prime }(\omega )`$, is proportional to $`\omega ^{-3}`$, so that $`\mathrm{\Pi }_c^{\prime \prime }(\omega )\ll \mathrm{\Pi }_c^{}(\omega )`$.
By using Eqs. (8) and (6), one obtains a familiar expression for $`𝐏_c`$ at high frequencies,
$$𝐏_c(\omega ;𝐫,𝐫^{})=\frac{e^2n_c(𝐫)}{m\omega ^2}\nabla U(\omega ;𝐫,𝐫^{})=\theta (R-r)\chi _c(\omega )𝐄(\omega ;𝐫,𝐫^{}),$$
(9)
where $`\chi _c(\omega )=-e^2\overline{n}_c/m\omega ^2`$ is the conduction electron susceptibility. Note that, for a step density profile, $`𝐏_c`$ vanishes outside the particle. The $`d`$–band and dielectric medium contributions to $`𝐏`$ are also given by similar relations,
$`𝐏_d(\omega ;𝐫,𝐫^{})=\theta (R-r)\chi _d(\omega )𝐄(\omega ;𝐫,𝐫^{}),`$ (10)
$`𝐏_m(\omega ;𝐫,𝐫^{})=\theta (r-R)\chi _m𝐄(\omega ;𝐫,𝐫^{}),`$ (11)
where $`\chi _i=(ϵ_i-1)/4\pi `$, $`i=d,m`$ are the corresponding susceptibilities and the step functions account for the boundary conditions. Using Eqs. (9)–(11), one can write a closed equation for $`U(\omega ;𝐫,𝐫^{})`$. Using Eq. (6), the second term of Eq. (4) can be presented as $`-e^{-2}\int 𝑑𝐫_1u(𝐫-𝐫_1)\nabla _1\cdot 𝐏(\omega ;𝐫_1,𝐫^{}).`$ Substituting the above expressions for $`𝐏`$, we then obtain after integrating by parts
$`ϵ(\omega )U(\omega ;𝐫,𝐫^{})={\displaystyle \frac{e^2}{|𝐫-𝐫^{}|}}`$ $`+{\displaystyle \int 𝑑𝐫_1\nabla _1\frac{1}{|𝐫-𝐫_1|}\cdot \nabla _1\left[\theta (R-r_1)\chi (\omega )+\theta (r_1-R)\chi _m\right]U(\omega ;𝐫_1,𝐫^{})}`$ (13)
$`+i{\displaystyle \int 𝑑𝐫_1𝑑𝐫_2\frac{e^2}{|𝐫-𝐫_1|}\mathrm{\Pi }_c^{\prime \prime }(\omega ;𝐫_1,𝐫_2)U(\omega ;𝐫_2,𝐫^{})},`$
with
$$ϵ(\omega )\equiv 1+4\pi \chi (\omega )=ϵ_d(\omega )-\omega _p^2/\omega ^2,$$
(14)
$`\omega _p^2=4\pi e^2\overline{n}_c/m`$ being the plasmon frequency in the conduction band. The last term in the rhs of Eq. (13), proportional to $`\mathrm{\Pi }_c^{\prime \prime }(\omega )`$, can be regarded as a small correction. To solve Eq. (13), we first eliminate the angular dependence by expanding $`U(\omega ;𝐫,𝐫^{})`$ in spherical harmonics, $`Y_{LM}(\widehat{𝐫})`$, with coefficients $`U_{LM}(\omega ;r,r^{})`$. Using the corresponding expansion of $`|𝐫-𝐫^{}|^{-1}`$ with coefficients $`Q_{LM}(r,r^{})=\frac{4\pi }{2L+1}\frac{(r^{})^L}{r^{L+1}}`$ (for $`r>r^{}`$), we get the following equation for $`U_{LM}(\omega ;r,r^{})`$:
$`ϵ(\omega )U_{LM}(\omega ;r,r^{})=`$ $`Q_{LM}(r,r^{})+4\pi \left[\chi (\omega )-\chi _m\right]{\displaystyle \frac{L+1}{2L+1}}\left({\displaystyle \frac{r}{R}}\right)^LU_{LM}(\omega ;R,r^{})`$ (16)
$`+ie^2{\displaystyle \underset{L^{}M^{}}{\sum }}{\displaystyle \int 𝑑r_1𝑑r_2r_1^2r_2^2Q_{LM}(r,r_1)\mathrm{\Pi }_{LM,L^{}M^{}}^{\prime \prime }(\omega ;r_1,r_2)U_{L^{}M^{}}(\omega ;r_2,r^{})},`$
where
$`\mathrm{\Pi }_{LM,L^{}M^{}}^{\prime \prime }(\omega ;r_1,r_2)={\displaystyle \int 𝑑\widehat{𝐫}_1𝑑\widehat{𝐫}_2Y_{LM}^{\ast }(\widehat{𝐫}_1)\mathrm{\Pi }_c^{\prime \prime }(\omega ;𝐫_1,𝐫_2)Y_{L^{}M^{}}(\widehat{𝐫}_2)},`$ (17)
are the coefficients of the multipole expansion of $`\mathrm{\Pi }_c^{\prime \prime }(\omega ;𝐫_1,𝐫_2)`$. For $`\mathrm{\Pi }_c^{\prime \prime }=0`$, the solution of Eq. (16) can be presented in the form
$`U_{LM}(\omega ;r,r^{})=a(\omega )e^2Q_{LM}(r,r^{})+b(\omega ){\displaystyle \frac{4\pi e^2}{2L+1}}{\displaystyle \frac{(rr^{})^L}{R^{2L+1}}},`$ (18)
with frequency–dependent coefficients $`a`$ and $`b`$. Since $`\mathrm{\Pi }_c^{\prime \prime }(\omega )\ll \mathrm{\Pi }_c^{}(\omega )`$ for relevant frequencies, the solution of Eq. (16) in the presence of the last term can be written in the same form as Eq. (18), but with modified $`a(\omega )`$ and $`b(\omega )`$. Substituting Eq. (18) into Eq. (16), we obtain after lengthy algebra in the lowest order in $`\mathrm{\Pi }_c^{\prime \prime }`$
$$a(\omega )=ϵ^{-1}(\omega ),b(\omega )=ϵ_L^{-1}(\omega )-ϵ^{-1}(\omega ),$$
(19)
where
$$ϵ_L(\omega )=\frac{L}{2L+1}ϵ(\omega )+\frac{L+1}{2L+1}ϵ_m+iϵ_{cL}^{\prime \prime }(\omega ),$$
(20)
is the effective dielectric function, whose zero, $`ϵ_L^{}(\omega _L)=0`$, determines the frequency of the collective surface excitation with angular momentum $`L`$,
$$\omega _L^2=\frac{L\omega _p^2}{Lϵ_d^{}(\omega _L)+(L+1)ϵ_m}.$$
(21)
In Eq. (20), $`ϵ_{cL}^{\prime \prime }(\omega )`$ characterizes the damping of the $`L`$–pole collective mode by single–particle excitations, and is given by
$$ϵ_{cL}^{\prime \prime }(\omega )=\frac{4\pi ^2e^2}{(2L+1)R^{2L+1}}\underset{\alpha \alpha ^{}}{\sum }|M_{\alpha \alpha ^{}}^{LM}|^2[f(E_\alpha ^c)-f(E_\alpha ^{}^c)]\delta (E_\alpha ^c-E_\alpha ^{}^c+\omega ),$$
(22)
where $`M_{\alpha \alpha ^{}}^{LM}`$ are the matrix elements of $`r^LY_{LM}(\widehat{𝐫})`$. Due to the momentum nonconservation in a nanoparticle, the matrix elements are finite, which leads to the size–dependent width of the $`L`$–pole mode:
$$\gamma _L=\frac{2L+1}{L}\frac{\omega ^3}{\omega _p^2}ϵ_{cL}^{\prime \prime }(\omega ).$$
(23)
For $`\omega \simeq \omega _L`$, one can show that the width, $`\gamma _L\sim v_F/R`$, is independent of $`\omega `$. Note that, in noble metal particles, there is an additional d–electron contribution to the imaginary part of $`ϵ_L(\omega )`$ at frequencies above the onset $`\mathrm{\Delta }`$ of the interband transitions.
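For a quick numerical orientation, the $`v_F/R`$ scaling of the $`L`$–pole width can be tabulated directly; $`\mathrm{}v_F\approx 0.72`$ eV nm is an assumed noble-metal value, so these are order-of-magnitude numbers only:

```python
# gamma_L ~ v_F / R for a few nanoparticle radii
HBAR_VF = 0.72   # hbar * v_F in eV nm (assumed)
for R in (1.0, 2.5, 5.0, 10.0):   # radius in nm
    print(f"R = {R:5.1f} nm: gamma_L ~ {HBAR_VF / R:.2f} eV")
```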
Putting everything together, we arrive at the following expression for the dynamically–screened interaction potential in a nanoparticle:
$`U(\omega ;𝐫,𝐫^{})={\displaystyle \frac{u(𝐫-𝐫^{})}{ϵ(\omega )}}+{\displaystyle \frac{e^2}{R}}{\displaystyle \underset{LM}{\sum }}{\displaystyle \frac{4\pi }{2L+1}}{\displaystyle \frac{1}{\stackrel{~}{ϵ}_L(\omega )}}\left({\displaystyle \frac{rr^{}}{R^2}}\right)^LY_{LM}(\widehat{𝐫})Y_{LM}^{\ast }(\widehat{𝐫}^{}),`$ (24)
with $`\stackrel{~}{ϵ}_L^{-1}(\omega )=ϵ_L^{-1}(\omega )-ϵ^{-1}(\omega )`$. Equation (24), which is the main result of this section, represents a generalization of the plasmon pole approximation to spherical particles. The two terms in the rhs describe two distinct contributions. The first comes from the usual bulk-like screening of the Coulomb potential. The second contribution describes a new effective e–e interaction induced by the surface: the potential of an electron inside the nanoparticle excites high–frequency surface collective modes, which in turn act as image charges that interact with the second electron. It should be emphasized that, unlike in the case of the optical fields, the surface–induced dynamical screening of the Coulomb potential is size–dependent.
Note that the excitation energies of the surface collective modes are lower than the bulk plasmon energy, also given by Eq. (21) but with $`ϵ_m=0`$. This opens up new channels of quasiparticle scattering, considered in the next section.
## IV interband scattering mediated by surface plasmons
We now turn to the interband processes in noble metal particles and consider the scattering of a $`d`$–hole into the conduction band. We restrict ourselves to the scattering via the dipole channel, mediated by the SP. The corresponding surface–induced potential, given by the $`L=1`$ term in Eq. (24), has the form
$`U_s(\omega ;𝐫,𝐫^{})={\displaystyle \frac{3e^2}{R}}{\displaystyle \frac{𝐫\cdot 𝐫^{}}{R^2}}{\displaystyle \frac{1}{ϵ_s(\omega )}}.`$ (25)
With this potential, the $`d`$–hole Matsubara self–energy is given by
$`\mathrm{\Sigma }_\alpha ^d(i\omega )={\displaystyle \frac{3e^2}{R^3}}{\displaystyle \underset{\alpha ^{}}{\sum }}|𝐝_{\alpha \alpha ^{}}|^2{\displaystyle \frac{1}{\beta }}{\displaystyle \underset{i\omega ^{}}{\sum }}{\displaystyle \frac{G_\alpha ^{}^c(i\omega ^{}+i\omega )}{ϵ_s(i\omega ^{})}},`$ (26)
where $`𝐝_{\alpha \alpha ^{}}=\langle c,\alpha |𝐫|d,\alpha ^{}\rangle =\langle c,\alpha |𝐩|d,\alpha ^{}\rangle /im(E_\alpha ^c-E_\alpha ^{}^d)`$ is the interband transition matrix element. Since the final state energies in the conduction band are high (in the case of interest here, they are close to the Fermi level), the matrix element can be approximated by the bulk–like expression $`\langle c,\alpha |𝐩|d,\alpha ^{}\rangle =\delta _{\alpha \alpha ^{}}\langle c|𝐩|d\rangle \equiv \delta _{\alpha \alpha ^{}}\mu `$, the corrections due to surface scattering being suppressed by a factor of $`(k__FR)^{-1}\ll 1`$. After performing the frequency summation, we obtain for Im$`\mathrm{\Sigma }_\alpha ^d`$
$`\mathrm{Im}\mathrm{\Sigma }_\alpha ^d(\omega )=-{\displaystyle \frac{9e^2\mu ^2}{m^2(E_\alpha ^{cd})^2R^3}}\text{Im}{\displaystyle \frac{N(E_\alpha ^c-\omega )+f(E_\alpha ^c)}{ϵ_s(E_\alpha ^c-\omega )}},`$ (27)
with $`E_\alpha ^{cd}=E_\alpha ^c-E_\alpha ^d`$. We see that the scattering rate of a $`d`$-hole with energy $`E_\alpha ^d`$, $`\gamma _h^s(E_\alpha ^d)=\text{Im}\mathrm{\Sigma }_\alpha ^d(E_\alpha ^d)`$, has a strong $`R^{-3}`$ dependence on the nanoparticle size.
An important feature of the interband SP–mediated scattering rate is its energy dependence. Since the surface–induced potential, Eq. (25), allows for only vertical (dipole) interband single–particle excitations, the phase space for the scattering of a $`d`$–hole with energy $`E_\alpha ^d`$ is restricted to a single final state in the conduction band with energy $`E_\alpha ^c`$. As a result, the $`d`$–hole scattering rate, $`\gamma _h^s(E_\alpha ^d)`$, exhibits a peak as the difference between the energies of final and initial states, $`E_\alpha ^{cd}=E_\alpha ^c-E_\alpha ^d`$, approaches the SP frequency $`\omega _s`$ \[see Eq. (27)\].
As we show in the next section, the fact that the scattering rate of a $`d`$–hole is dominated by the SP resonance affects strongly the nonlinear optical dynamics in small nanoparticles. This is the case, in particular, when the SP frequency, $`\omega _s`$, is close to the onset of interband transitions, $`\mathrm{\Delta }`$, as, e.g., in Cu and Au nanoparticles. Consider an e–h pair with excitation energy $`\omega `$ close to $`\mathrm{\Delta }`$. As we discussed, the $`d`$–hole can scatter into the conduction band by emitting a SP. According to Eq. (27), for $`\omega \simeq \omega _s`$, this process will be resonantly enhanced. At the same time, the electron can scatter in the conduction band via the usual two–quasiparticle process. For $`\omega \simeq \mathrm{\Delta }`$, the electron energy is close to $`E_F`$, and its scattering rate is estimated as $`\gamma _e\sim 10^{-2}`$ eV. Using the bulk value of $`\mu `$, $`2\mu ^2/m\simeq 1`$ eV near the L-point, we find that $`\gamma _h^s`$ exceeds $`\gamma _e`$ for $`R\lesssim 2.5`$ nm. In fact, one would expect that, in nanoparticles, $`\mu `$ is larger than in the bulk due to the localization of the conduction electron wave–functions.
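This size estimate can be checked with a few lines of arithmetic. We use $`2\mu ^2/m=1`$ eV and $`\gamma _e\sim 10^{-2}`$ eV from the text, take $`\mathrm{\Delta }\simeq \omega _s\simeq 2.2`$ eV, and assume a resonance value of $`\text{Im}[-1/ϵ_s(\omega _s)]`$ of order unity; only orders of magnitude matter here, so the precise SP width and $`ϵ_d`$ are not needed.

```python
# Order-of-magnitude evaluation of Eq. (27) at resonance:
# gamma_h^s ~ (9 e^2 mu^2 / m^2 Delta^2 R^3) * Im[-1/eps_s(omega_s)]
E2 = 1.44                    # e^2 in eV nm (Gaussian units)
INV_2M = 0.0381              # hbar^2/(2 m_e) in eV nm^2
MU2_OVER_M2 = 1.0 * INV_2M   # (2 mu^2/m = 1 eV) x hbar^2/(2m) -> mu^2/m^2 in eV^2 nm^2
DELTA = 2.2                  # eV, onset of interband transitions ~ omega_s
IM_INV_EPS = 1.0             # Im[-1/eps_s] at resonance, order unity (assumed)
GAMMA_EE = 1e-2              # eV, e-e rate for electrons near E_F (from the text)

for R in (1.5, 2.0, 2.5, 3.0, 5.0):   # nanoparticle radius in nm
    g = 9.0 * E2 * MU2_OVER_M2 / (DELTA**2 * R**3) * IM_INV_EPS
    flag = ">" if g > GAMMA_EE else "<"
    print(f"R = {R:.1f} nm: gamma_h^s ~ {g*1e3:5.1f} meV  {flag}  gamma_e ~ 10 meV")
```

The crossover indeed falls at a radius of a couple of nanometers, and the steep $`R^{-3}`$ scaling makes this conclusion insensitive to the order-unity factors assumed above.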
## V Surface plasmon optical dynamics
In this section, we study the effect of the SP–mediated interband scattering on the nonlinear optical dynamics in noble metal nanoparticles. When the hot electron distribution has already thermalized and the electron gas is cooling to the lattice, the transient response of a nanoparticle can be described by the time–dependent absorption coefficient $`\alpha (\omega ,t)`$, given by Eq. (2) with time–dependent temperature. In noble–metal particles, the temperature dependence of $`\alpha `$ originates from two different sources. First is the phonon–induced correction to $`\gamma _s`$, which is proportional to the lattice temperature $`T_l(t)`$. As mentioned in the Introduction, for small nanoparticles this effect is relatively weak. Second, near the onset of the interband transitions, $`\mathrm{\Delta }`$, the absorption coefficient depends on the electron temperature $`T(t)`$ via the interband dielectric function $`ϵ_d(\omega )`$ \[see Eqs. (2) and (3)\]. In fact, in Cu or Au nanoparticles, $`\omega _s`$ can be tuned close to $`\mathrm{\Delta }`$, so the SP damping by interband e–h excitations leads to an additional broadening of the absorption peak. In this case, the temperature dependence of $`ϵ_d(\omega )`$ dominates the pump–probe dynamics. Below we show that, near the SP resonance, both the temperature and frequency dependence of $`ϵ_d(\omega )=1+4\pi \chi _d(\omega )`$ are strongly affected by the SP–mediated interband scattering.
For non-interacting electrons, the interband susceptibility, $`\chi _d(i\omega )=\stackrel{~}{\chi }_d(i\omega )+\stackrel{~}{\chi }_d(-i\omega )`$, has the standard form
$$\stackrel{~}{\chi }_d(i\omega )=\underset{\alpha }{\sum }\frac{e^2\mu ^2}{m^2(E_\alpha ^{cd})^2}\frac{1}{\beta }\underset{i\omega ^{}}{\sum }G_\alpha ^d(i\omega ^{})G_\alpha ^c(i\omega ^{}+i\omega ),$$
(28)
where $`G_\alpha ^d(i\omega ^{})`$ is the Green function of a $`d`$–electron. Since the $`d`$-band is fully occupied, the only allowed SP–mediated interband scattering is that of the $`d`$–hole. We assume here, for simplicity, a dispersionless $`d`$–band with energy $`E^d`$. Substituting $`G_\alpha ^d(i\omega ^{})=[i\omega ^{}-E^d+E_F-\mathrm{\Sigma }_\alpha ^d(i\omega ^{})]^{-1}`$, with $`\mathrm{\Sigma }_\alpha ^d(i\omega )`$ given by Eq. (26), and performing the frequency summation, we obtain
$$\stackrel{~}{\chi }_d(\omega )=\frac{e^2\mu ^2}{m^2}\int \frac{dE^cg(E^c)}{(E^{cd})^2}\frac{f(E^c)-1}{\omega -E^{cd}+i\gamma _h^s(\omega ,E^c)},$$
(29)
where $`g(E^c)`$ is the density of states of conduction electrons. Here $`\gamma _h^s(\omega ,E^c)=\mathrm{Im}\mathrm{\Sigma }^d(E^c-\omega )`$ is the scattering rate of a $`d`$-hole with energy $`E^c-\omega `$, for which we obtain from Eq. (27),
$$\gamma _h^s(\omega ,E^c)=-\frac{9e^2\mu ^2}{m^2(E^{cd})^2R^3}f(E^c)\text{Im}\frac{1}{ϵ_s(\omega )},$$
(30)
where we neglected $`N(\omega )`$ for frequencies $`\omega \simeq \omega _s\gg k_BT`$. Remarkably, $`\gamma _h^s(\omega ,E^c)`$ exhibits a sharp peak as a function of the frequency of the probe optical field. The reason for this is that the scattering rate of a $`d`$–hole with energy $`E`$ depends explicitly on the difference between the final and initial states, $`E^c-E`$, as discussed in the previous section: therefore, for a $`d`$–hole with energy $`E=E^c-\omega `$, the dependence on the final state energy, $`E^c`$, cancels out in $`ϵ_s(E^c-E)`$ \[see Eq. (27)\]. This implies that the optically–excited $`d`$–hole experiences a resonant scattering into the conduction band as the probe frequency $`\omega `$ approaches the SP frequency. It is important to note that $`\gamma _h^s(\omega ,E^c)`$ is, in fact, proportional to the absorption coefficient $`\alpha (\omega )`$ \[see Eq. (2)\]. Therefore, the calculation of the absorption spectrum is a self–consistent problem defined by Eqs. (2), (3), (29), and (30).
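The self-consistency loop can be organized as a simple fixed-point iteration: a guess for $`\alpha (\omega )`$ fixes $`\gamma _h^s`$ via Eq. (30), Eq. (29) then gives $`\chi _d`$, and Eqs. (2)–(3) return an updated $`\alpha (\omega )`$. The toy implementation below keeps only the resonant term of $`\chi _d`$, uses a flat density of states, and employs made-up couplings and band parameters chosen solely so that the loop runs; it is a sketch of the algorithm, not of our actual calculation.

```python
import numpy as np

omega = np.linspace(1.8, 2.8, 400)     # probe photon energy, eV
Ec = np.linspace(-1.0, 1.0, 600)       # E^c - E_F grid, eV
dE = Ec[1] - Ec[0]
DELTA, OMEGA_P, EPS_M, GAMMA_S = 2.18, 5.2, 2.2, 0.2   # eV (toy values)
KT, A_CHI, A_GAM, GAMMA_0 = 0.07, 0.05, 0.4, 0.02      # toy couplings (eV)

f = 1.0 / (np.exp(Ec / KT) + 1.0)      # hot-electron Fermi function
Ecd = Ec + DELTA                       # E^{cd} for a dispersionless d band

alpha = np.ones_like(omega)            # initial guess for alpha(omega)
for it in range(50):
    chi_d = np.empty(len(omega), dtype=complex)
    for i, w in enumerate(omega):
        # Eq. (30): gamma_h^s(omega, E^c) proportional to alpha(omega) f(E^c)
        g = GAMMA_0 + A_GAM * alpha[i] * f
        # Resonant term of Eq. (29)
        chi_d[i] = A_CHI * np.sum((f - 1.0) / ((w - Ecd + 1j * g) * Ecd**2)) * dE
    eps_s = 1.0 + 4*np.pi*chi_d - OMEGA_P**2/(omega*(omega + 1j*GAMMA_S)) + 2*EPS_M
    new_alpha = -omega * np.imag(1.0 / eps_s)     # Eq. (2), up to constants
    new_alpha /= new_alpha.max()                  # overall scale absorbed into A_GAM
    if np.max(np.abs(new_alpha - alpha)) < 1e-8:
        break
    alpha = new_alpha

print(f"stopped after {it+1} iterations; SP peak at {omega[np.argmax(alpha)]:.3f} eV")
```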
It should be emphasized that the effect of $`\gamma _h^s`$ on $`ϵ_d^{\prime \prime }(\omega )`$ increases with temperature. Indeed, the Fermi function in the rhs of Eq. (30) implies that $`\gamma _h^s`$ is small unless $`E^c-E_F\lesssim k_BT`$. Since the main contribution to $`\stackrel{~}{\chi }_d^{\prime \prime }(\omega )`$ comes from energies $`E^c-E_F\simeq \omega -\mathrm{\Delta }`$, the $`d`$–hole scattering becomes efficient for electron temperatures $`k_BT\gtrsim \omega _s-\mathrm{\Delta }`$. As a result, near the SP resonance, the time evolution of the differential absorption, governed by the temperature dependence of $`\alpha `$, becomes strongly size–dependent, as we show in the next section.
## VI Numerical results
In the numerical calculations below, we adopt the parameters of the experiment of Ref. , which was performed on $`R\simeq 2.5`$ nm Cu nanoparticles with SP frequency, $`\omega _s\simeq 2.22`$ eV, slightly above the onset of the interband transitions, $`\mathrm{\Delta }\simeq 2.18`$ eV. In order to describe the time–evolution of the differential absorption spectra, we first need to determine the time–dependence of the electron temperature, $`T(t)`$, due to the relaxation of the electron gas to the lattice. For this, we employ a simple two–temperature model, defined by heat equations for $`T(t)`$ and the lattice temperature $`T_l(t)`$:
$`C(T){\displaystyle \frac{\partial T}{\partial t}}`$ $`=`$ $`-G(T-T_l),`$ (31)
$`C_l{\displaystyle \frac{\partial T_l}{\partial t}}`$ $`=`$ $`G(T-T_l),`$ (32)
where $`C(T)=\mathrm{\Gamma }T`$ and $`C_l`$ are the electron and lattice heat capacities, respectively, and $`G`$ is the electron–phonon coupling. The parameter values used here were $`G=3.5\times 10^{16}`$ Wm<sup>-3</sup>K<sup>-1</sup>, $`\mathrm{\Gamma }=70`$ Jm<sup>-3</sup>K<sup>-2</sup>, and $`C_l=3.5`$ Jm<sup>-3</sup>K<sup>-1</sup>. The values of $`\gamma _s`$ and $`\mu `$ were extracted from the fit to the linear absorption spectrum, and the initial condition for Eq. (31) was taken as $`T_0=800`$ K, the estimated pump–induced hot electron temperature. We then self–consistently calculated the time–dependent absorption coefficient $`\alpha (\omega ,t)`$, which describes the spectrum in the presence of the pump field. The differential transmission is then proportional to $`\alpha _r(\omega )-\alpha (\omega ,t)`$, where $`\alpha _r(\omega )`$ is the absorption coefficient at room temperature.
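The two-temperature stage is easy to integrate directly; a forward-Euler sketch with the parameters above is given below. The quoted lattice heat capacity is presumably per cm<sup>3</sup>, so we assume $`C_l=3.5\times 10^6`$ Jm<sup>-3</sup>K<sup>-1</sup>, a typical bulk value.

```python
G = 3.5e16        # W m^-3 K^-1, electron-phonon coupling
GAMMA_E = 70.0    # J m^-3 K^-2, C(T) = GAMMA_E * T
C_L = 3.5e6       # J m^-3 K^-1 (assumed; see note above)
T, T_l = 800.0, 300.0    # initial electron and lattice temperatures, K
dt = 1e-15               # time step, s

samples = {}
for n in range(5000):                     # integrate over 5 ps
    flow = G * (T - T_l) * dt             # heat transferred per step, J m^-3
    T -= flow / (GAMMA_E * T)             # Eq. (31)
    T_l += flow / C_L                     # Eq. (32)
    t_ps = (n + 1) * dt * 1e12
    for mark in (0.5, 1.0, 2.0, 4.0):
        if abs(t_ps - mark) < 0.5e-3:
            samples[mark] = T

for t_ps, temp in sorted(samples.items()):
    print(f"t = {t_ps:3.1f} ps: T ~ {temp:5.0f} K")
```

The electron gas cools on a picosecond scale, set by $`\mathrm{\Gamma }T/G`$, which is the time dependence that enters $`\alpha (\omega ,t)`$.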
In Fig. 1 we show the calculated absorption spectra for different nanoparticle sizes. Fig. 1(a) shows the spectra at several time delays for $`R=5.0`$ nm; for this size, the SP–mediated d–hole scattering has no effect. With decreasing nanoparticle size, the linear absorption spectra are not significantly altered, as can be seen in Figs. 1(b) and (c). However, the change becomes pronounced at short time delays corresponding to higher temperatures \[see Figs. 1(b) and (c)\]. This effect is clearly seen in the differential transmission spectra, shown in Fig. 2, which undergo a qualitative transformation with decreasing size.
Note that it is necessary to include the intraband e–e scattering in order to reproduce the differential transmission lineshape observed in the experiment. For optically excited electron energy close to $`E_F`$, this can be achieved by adding the e–e scattering rate $`\gamma _e(E^c)\propto [1-f(E^c)][(E^c-E_F)^2+(\pi k_BT)^2]`$ to $`\gamma _h^s`$ in Eq. (29). The difference in $`\gamma _e(E^c)`$ for $`E^c`$ below and above $`E_F`$ leads to a lineshape similar to that expected from the combination of red–shift and broadening \[see Fig. 2(a)\].
In Figs. 2(b) and (c) we show the differential transmission spectra with decreasing nanoparticle size. For $`R=2.5`$ nm, the apparent red–shift is reduced \[see Fig. 2(b)\]. This change can be explained as follows. Since here $`\omega _s\gtrsim \mathrm{\Delta }`$, the SP is damped by the interband excitations. This broadens the spectra for $`\omega >\omega _s`$, so that the absorption peak is asymmetric. The $`d`$–hole scattering with the SP enhances the damping; since the $`\omega `$–dependence of $`\gamma _h^s`$ follows that of $`\alpha `$, this effect is larger above the resonance. On the other hand, the efficiency of the scattering increases with temperature, as discussed above. Therefore, for short time delays, the relative increase in the absorption is larger for $`\omega >\omega _s`$. With decreasing size, the strength of this effect increases further, leading to an apparent blue–shift \[see Fig. 2(c)\]. Such a strong change in the absorption dynamics originates from the $`R^{-3}`$ dependence of the $`d`$–hole scattering rate; reducing the size by a factor of two results in an enhancement of $`\gamma _h^s`$ by an order of magnitude.
In Fig. 3 we show the time evolution of the differential transmission at several frequencies close to $`\omega _s`$. It can be seen that the relaxation is slowest at the SP resonance; this characterizes the robustness of the collective mode, which determines the peak position, versus the single–particle excitations, which determine the resonance width. For larger sizes, at which $`\gamma _h^s`$ is small, the change in the differential transmission decay rate with frequency is smoother above the resonance \[see Fig. 3(a)\]. This stems from the asymmetric lineshape of the absorption peak, mentioned above: the absorption is larger for $`\omega >\omega _s`$, so that its relative change with temperature is weaker. For smaller nanoparticle size, the decay rates become similar above and below $`\omega _s`$ \[see Fig. 3(b)\]. This change in the frequency dependence is related to the stronger SP damping for $`\omega >\omega _s`$ due to the $`d`$–hole scattering, as discussed above. Since this additional damping is reduced with decreasing temperature, the relaxation is faster above the resonance. This rather “nonlinear” relation between the time–evolution of the pump–probe signal and that of the temperature becomes even stronger for smaller sizes \[see Fig. 3(c)\]. In this case, the frequency dependence of the differential transmission decay below and above $`\omega _s`$ is reversed. Note that a frequency dependence consistent with our calculations presented in Fig. 3(b) was, in fact, observed in the experiment of Ref., shown in Fig. 3(d).
## VII quasiparticle scattering via surface collective modes
Let us now turn to the electron scattering in the conduction band accompanied by the emission of surface collective modes. In the first order in the surface–induced potential, given by the second term in the rhs of Eq. (24), the corresponding scattering rate can be obtained from the Matsubara self–energy
$`\mathrm{\Sigma }_\alpha ^c(i\omega )={\displaystyle \frac{1}{\beta }}{\displaystyle \underset{i\omega ^{}}{\sum }}{\displaystyle \underset{LM}{\sum }}{\displaystyle \underset{\alpha ^{}}{\sum }}{\displaystyle \frac{4\pi e^2}{(2L+1)R^{2L+1}}}{\displaystyle \frac{|M_{\alpha \alpha ^{}}^{LM}|^2}{\stackrel{~}{ϵ}_L(i\omega ^{})}}G_\alpha ^{}^c(i\omega ^{}+i\omega ),`$ (33)
where $`G_\alpha ^c=(i\omega -E_\alpha ^c)^{-1}`$ is the non-interacting Green function of the conduction electron. Here the matrix elements $`M_{\alpha \alpha ^{}}^{LM}`$ are calculated with the one–electron wave functions $`\psi _\alpha ^c(𝐫)=R_{nl}(r)Y_{lm}(\widehat{𝐫})`$. Since $`|\alpha \rangle `$ and $`|\alpha ^{}\rangle `$ are the initial and final states of the scattered electron, the main contribution to the $`L`$th term of the angular momentum sum in Eq. (33) will come from electron states with energy difference $`E_\alpha -E_\alpha ^{}\simeq \omega _L`$. Therefore, $`M_{\alpha \alpha ^{}}^{LM}`$ can be expanded in terms of the small parameter $`E_0/|E_\alpha ^c-E_\alpha ^{}^c|\simeq E_0/\omega _L`$, where $`E_0=(2mR^2)^{-1}`$ is the characteristic confinement energy. The leading term can be obtained by using the following procedure. We present $`M_{\alpha \alpha ^{}}^{LM}`$ as
$$M_{\alpha \alpha ^{}}^{LM}=\langle c,\alpha |r^LY_{LM}(\widehat{𝐫})|c,\alpha ^{}\rangle =\frac{\langle c,\alpha |[H,[H,r^LY_{LM}(\widehat{𝐫})]]|c,\alpha ^{}\rangle }{(E_\alpha ^c-E_\alpha ^{}^c)^2},$$
(34)
where $`H=H_0+V(r)`$ is the Hamiltonian of an electron in a nanoparticle with confining potential $`V(r)=V_0\theta (r-R)`$. Since $`[H,r^LY_{LM}(\widehat{𝐫})]=-\frac{1}{m}\nabla [r^LY_{LM}(\widehat{𝐫})]\cdot \nabla `$, the numerator in Eq. (34) contains a term proportional to the gradient of the confining potential, which peaks sharply at the surface. The corresponding contribution to the matrix element describes the surface scattering of an electron making the $`L`$–pole transition between the states $`|c,\alpha \rangle `$ and $`|c,\alpha ^{}\rangle `$, and gives the dominant term of the expansion. Thus, in the leading order in $`|E_\alpha ^c-E_\alpha ^{}^c|^{-1}`$, we obtain
$$M_{\alpha \alpha ^{}}^{LM}=\frac{\langle c,\alpha |\nabla [r^LY_{LM}(\widehat{𝐫})]\cdot \nabla V(r)|c,\alpha ^{}\rangle }{m(E_\alpha ^c-E_\alpha ^{}^c)^2}=\frac{LR^{L+1}}{m(E_\alpha ^c-E_\alpha ^{}^c)^2}V_0R_{nl}(R)R_{n^{}l^{}}(R)\phi _{lm,l^{}m^{}}^{LM},$$
(35)
with $`\phi _{lm,l^{}m^{}}^{LM}=\int 𝑑\widehat{𝐫}Y_{lm}^{\ast }(\widehat{𝐫})Y_{LM}(\widehat{𝐫})Y_{l^{}m^{}}(\widehat{𝐫})`$. Note that, for $`L=1`$, Eq. (35) becomes exact. For electron energies close to the Fermi level, $`E_{nl}^c\simeq E_F`$, the radial quantum numbers are large, and the product $`V_0R_{nl}(R)R_{n^{}l^{}}(R)`$ can be evaluated by using semiclassical wave–functions. In the limit $`V_0\to \mathrm{\infty }`$, this product is given by $`2\sqrt{E_{nl}^cE_{n^{}l^{}}^c}/R^3`$, where $`E_{nl}^c=\pi ^2(n+l/2)^2E_0`$ is the electron eigenenergy for large $`n`$. Substituting this expression into Eq. (35) and then into Eq. (33), we obtain
$`\mathrm{\Sigma }_\alpha ^c(i\omega )={\displaystyle \frac{1}{\beta }}{\displaystyle \underset{i\omega ^{}}{\sum }}{\displaystyle \underset{L}{\sum }}{\displaystyle \underset{n^{}l^{}}{\sum }}C_{ll^{}}^L{\displaystyle \frac{4\pi e^2}{(2L+1)R}}{\displaystyle \frac{E_{nl}^cE_{n^{}l^{}}^c}{(E_{nl}^c-E_{n^{}l^{}}^c)^4}}{\displaystyle \frac{(4LE_0)^2}{\stackrel{~}{ϵ}_L(i\omega ^{})}}G_\alpha ^{}^c(i\omega ^{}+i\omega ),`$ (36)
with
$$C_{ll^{}}^L=\underset{M,m^{}}{\sum }|\phi _{lm,l^{}m^{}}^{LM}|^2=\frac{(2L+1)(2l^{}+1)}{8\pi }\int _{-1}^1𝑑xP_l(x)P_L(x)P_{l^{}}(x),$$
(37)
where $`P_l(x)`$ are Legendre polynomials; we used properties of the spherical harmonics in the derivation of Eq. (37). For $`E_{nl}^c\simeq E_F`$, the typical angular momenta are large, $`l\sim k__FR\gg 1`$, and one can use the large–$`l`$ asymptotics of $`P_l`$; for the low multipoles of interest, $`L\ll l`$, the integral in Eq. (37) can be approximated by $`\frac{2}{2l^{}+1}\delta _{ll^{}}`$. After performing the Matsubara summation, we obtain for the imaginary part of the self–energy that determines the electron scattering rate
$$\text{Im}\mathrm{\Sigma }_\alpha ^c(\omega )=-\frac{16e^2}{R}E_0^2\underset{L}{\sum }L^2\int 𝑑Eg_l(E)\frac{EE_\alpha ^c}{(E_\alpha ^c-E)^4}\text{Im}\frac{N(E-\omega )+f(E)}{\stackrel{~}{ϵ}_L(E-\omega )},$$
(38)
where $`N(E)`$ is the Bose distribution and $`g_l(E)`$ is the density of states of a conduction electron with angular momentum $`l`$,
$$g_l(E)=2\underset{n}{\sum }\delta (E_{nl}^c-E)\simeq \frac{R}{\pi }\sqrt{\frac{2m}{E}},$$
(39)
where we replaced the sum over $`n`$ by an integral (the factor of 2 accounts for spin).
Each term in the sum in the rhs of Eq. (38) represents a channel of electron scattering mediated by a collective surface mode with angular momentum $`L`$. For low $`L`$, the difference between the energies of modes with successive values of $`L`$ is larger than their widths, so that the different channels are well separated. Note that since all $`\omega _L`$ are smaller than the frequency of the (undamped) bulk plasmon, one can replace $`\stackrel{~}{ϵ}_L(\omega )`$ by $`ϵ_L(\omega )`$ in the integrand of Eq. (38) for frequencies $`\omega \simeq \omega _L`$.
Consider now the $`L=1`$ term in Eq. (38), which describes the SP–mediated scattering channel. The main contribution to the integral comes from the SP pole in $`ϵ_1^{-1}(\omega )=3ϵ_s^{-1}(\omega )`$, where $`ϵ_s(\omega )`$ is the same as in Eq. (3). To estimate the scattering rate, we approximate $`-\text{Im}ϵ_s^{-1}(\omega )`$ by a Lorentzian,
$$-\text{Im}ϵ_s^{-1}(\omega )=\frac{\gamma _s\omega _p^2/\omega ^3+ϵ_d^{\prime \prime }(\omega )}{[ϵ^{}(\omega )+2ϵ_m]^2+[\gamma _s\omega _p^2/\omega ^3+ϵ_d^{\prime \prime }(\omega )]^2}\simeq \frac{\omega _s^2}{ϵ_d^{}(\omega _s)+2ϵ_m}\frac{\omega _s\gamma }{(\omega ^2-\omega _s^2)^2+\omega _s^2\gamma ^2},$$
(40)
where $`\omega _s\equiv \omega _1=\omega _p/\sqrt{ϵ_d^{}(\omega _s)+2ϵ_m}`$ and $`\gamma =\gamma _s+\omega _sϵ_d^{\prime \prime }(\omega _s)/[ϵ_d^{}(\omega _s)+2ϵ_m]`$ are the SP frequency and width, respectively. For typical widths $`\gamma \ll \omega _s`$, the integral in Eq. (38) can be easily evaluated, yielding
$$\text{Im}\mathrm{\Sigma }_\alpha ^c(\omega )=\frac{24e^2\omega _sE_0^2}{ϵ_d^{}(\omega _s)+2ϵ_m}\frac{E_\alpha ^c\sqrt{2m(\omega -\omega _s)}}{(\omega -E_\alpha ^c-\omega _s)^4}[1-f(\omega -\omega _s)].$$
(41)
Finally, using the relation $`e^2k_F[ϵ_d^{}(\omega _s)+2ϵ_m]^{-1}=3\pi \omega _s^2/8E_F`$, the SP–mediated scattering rate, $`\gamma _e^s(E_\alpha ^c)=\text{Im}\mathrm{\Sigma }_\alpha ^c(E_\alpha ^c)`$, takes the form
$$\gamma _e^s(E)=9\pi \frac{E_0^2}{\omega _s}\frac{E}{E_F}\left(\frac{E-\omega _s}{E_F}\right)^{1/2}[1-f(E-\omega _s)].$$
(42)
Recalling that $`E_0=(2mR^2)^{-1}`$, we see that the scattering rate of a conduction electron is size–dependent: $`\gamma _e^s\propto R^{-4}`$. At $`E=E_F+\omega _s`$, the scattering rate jumps to the value $`9\pi (1+\omega _s/E_F)E_0^2/\omega _s`$, and then increases with energy as $`E^{3/2}`$ (for $`\omega _s\ll E_F`$). This should be contrasted with the usual (bulk) plasmon–mediated scattering, originating from the first term in Eq. (24), with the rate decreasing as $`E^{-1/2}`$ above the onset. To estimate the size at which $`\gamma _e^s`$ becomes important, we should compare it with the Fermi liquid e–e scattering rate, $`\gamma _e(E)=\frac{\pi ^2q_{TF}}{16k_F}\frac{(E-E_F)^2}{E_F}`$. For energies $`E\sim E_F+\omega _s`$, the two rates become comparable for
$$(k_FR)^2\simeq 12\frac{E_F}{\omega _s}\left(1+\frac{E_F}{\omega _s}\right)^{1/2}\left(\frac{k_F}{\pi q_{TF}}\right)^{1/2}.$$
(43)
In the case of a Cu nanoparticle with $`\omega _s\simeq 2.2`$ eV, we obtain $`k_FR\simeq 8`$, which corresponds to the radius $`R\simeq 3`$ nm. At the same time, in this energy range, the width $`\gamma _e^s`$ exceeds the mean level spacing $`\delta `$, so that the energy spectrum is still continuous. The strong size dependence of $`\gamma _e^s`$ indicates that, although $`\gamma _e^s`$ increases with energy more slowly than $`\gamma _e`$, the SP–mediated scattering should dominate for nanometer–sized particles. Note that the size and energy dependences of scattering in different channels are similar. Therefore, the total scattering rate as a function of energy will represent a series of steps at the collective excitation energies $`E=\omega _L<\omega _p`$ on top of a smooth energy increase. We expect that this effect could be observed experimentally in time–resolved two–photon photoemission measurements of size–selected cluster beams.
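As a numerical illustration of Eq. (43), the sketch below evaluates the crossover size. The Cu inputs $`E_F\approx 7`$ eV, $`k_F\approx 1.36`$ Å⁻¹ and $`q_{TF}\approx 1.8`$ Å⁻¹ are textbook values that we assume for illustration (only $`\omega _s\approx 2.2`$ eV is quoted in the text), so the output agrees with the quoted $`k_FR\sim 8`$ at the order-of-magnitude level only:

```python
import numpy as np

def crossover_kFR(E_F, omega_s, kF_over_qTF):
    """k_F R at which the SP-mediated rate matches the Fermi-liquid rate, Eq. (43)."""
    x = E_F / omega_s
    kFR_sq = 12.0 * x * np.sqrt(1.0 + x) * np.sqrt(kF_over_qTF / np.pi)
    return np.sqrt(kFR_sq)

# Assumed Cu-like inputs (not quoted in the text): E_F ~ 7 eV, k_F ~ 1.36 1/A, q_TF ~ 1.8 1/A.
print(crossover_kFR(7.0, 2.2, 1.36 / 1.8))  # ~6, the same order as the k_F R ~ 8 quoted above
```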
## VIII Conclusions
To summarize, we have examined theoretically the role of size–dependent correlations in the electron relaxation in small metal particles. We identified a new mechanism of quasiparticle scattering, mediated by collective surface excitations, which originates from the surface–induced dynamical screening of the e–e interactions. The behavior of the corresponding scattering rates with varying energy and temperature differs substantially from that in the bulk metal. In particular, in noble metal particles, the energy dependence of the $`d`$–hole scattering rate was found to be similar to that of the absorption coefficient. This led us to a self–consistent scheme for the calculation of the absorption spectrum near the surface plasmon resonance.
An important aspect of the SP–mediated scattering is its strong dependence on size. Our estimates show that it becomes comparable to the usual Fermi–liquid scattering in nanometer–sized particles. This size regime is, in fact, intermediate between “classical” particles with sizes larger than 10 nm, where the bulk–like behavior dominates, and very small clusters with only dozens of atoms, where the metallic properties are completely lost. Although the static properties of nanometer–sized particles are also size–dependent, the deviations from their bulk values do not change the qualitative features of the electron dynamics. In contrast, the size–dependent many–body effects, studied here, do affect the dynamics in a significant way on time scales comparable to the relaxation times. As we have shown, the SP–mediated interband scattering reveals itself in the transient pump–probe spectra. In particular, as the nanoparticle size decreases, the calculated time–resolved differential absorption develops a characteristic lineshape corresponding to a resonance blue–shift. At the same time, near the SP resonance, the scattering leads to a significant change in the frequency dependence of the relaxation time of the pump–probe signal, consistent with recent experiments. These results indicate the need for systematic experimental studies of the size–dependence of the transient nonlinear optical response, as we approach the transition from boundary–constrained nanoparticles to molecular clusters.
This work was supported by NSF CAREER award ECS-9703453, and, in part, by ONR Grant N00014-96-1-1042 and by Hitachi Ltd.
# An Exact Monte Carlo Method for Continuum Fermion Systems
## Abstract
We offer a new proposal for the Monte Carlo treatment of many-fermion systems in continuous space. It is based upon Diffusion Monte Carlo with significant modifications: correlated pairs of random walkers that carry opposite signs; different functions “guide” walkers of different signs; the Gaussians used for members of a pair are correlated; walkers can cancel so as to conserve their expected future contributions. We report results for free-fermion systems and a fermion fluid with 14 ³He atoms, where it proves stable and correct. Its computational complexity grows with particle number, but slowly enough to make interesting physics within reach of contemporary computers.
Monte Carlo methods have provided powerful numerical tools for quantum many-body physics. They include methods such as Green’s function Monte Carlo (GFMC), Diffusion Monte Carlo (DMC), or Path Integral Monte Carlo (PIMC) that are capable of giving, at least for moderate size bosonic systems, answers with no uncontrolled approximations. Such accurate treatment of fermionic systems has been made vastly more difficult by a “sign problem.” Progress in the application of Quantum Monte Carlo methods to condensed matter physics, to electronic structure, and to nuclear structure physics has been impeded for years by the lack of exact and efficient methods for dealing with fermions.
This paper offers a new proposal for solving many-fermion systems by an extension of DMC. In the systems we have studied, the signal-to-noise ratio of the Monte Carlo estimates is constant at long imaginary times, by contrast to the behavior of ordinary DMC where it decays exponentially. Except for the use of a short-time Green’s function, no approximations (physical, mathematical, or numerical) are made. The effect of a finite interval of imaginary time is easily controlled, or may be eliminated entirely.
It is no surprise that Monte Carlo methods can solve the many-body Schrödinger equation in imaginary time for bosonic systems. Let $`\vec{R}`$ denote all coordinates of an $`N`$-body system, and $`V(\vec{R})`$ be the potential at $`\vec{R}`$.
$$\left[-\frac{\hbar ^2}{2m}\nabla ^2+V(\vec{R})\right]\psi (\vec{R},\tau )+\hbar \frac{\partial \psi (\vec{R},\tau )}{\partial \tau }=0$$
(1)
This equation also describes the diffusion of an object (a “random walker”) in a $`3N`$-dimensional space in which the potential $`V(\vec{R})`$ serves as a generalized absorption rate. Because the potential in physical problems can be unbounded from above and below, a direct simulation of that diffusion, although straightforward, will be inefficient. Some form of importance sampling transformation has been found to be highly useful. In the standard DMC, this is a technical device for accelerating the convergence; in our new method it becomes an essential feature.
DMC uses an “importance” or “guiding” function $`\psi _G(\vec{R})`$ and a trial eigenvalue $`E_T`$ to construct a random walk. A simple version is as follows: Using a fixed step in imaginary time, $`\delta \tau `$, a walker at $`\vec{R}`$ is (a) moved to $`\vec{R}+\delta \tau \vec{\nabla }\mathrm{ln}\psi _G(\vec{R})`$; (b) then each coordinate is incremented by an element of a random vector $`\vec{U}`$, a Gaussian with mean zero and variance $`\delta \tau `$; finally, (c) each walker is turned into $`M`$ walkers with $`\langle M\rangle =\mathrm{exp}\left\{\delta \tau \left[E_T-\widehat{H}\psi _G(\vec{R})/\psi _G(\vec{R})\right]\right\}`$, where $`\widehat{H}`$ is the Hamiltonian.
The resulting random walk has expected density
$$f(\vec{R},\tau )=\psi _G(\vec{R})\underset{k}{\sum }a_k\mathrm{exp}[(E_T-E_k)\tau ]\varphi _k(\vec{R})$$
(2)
where $`\varphi _k(\stackrel{}{R})`$ are eigenfunctions of $`\widehat{H}`$ with eigenvalues $`E_k`$, and $`a_k`$ are expansion coefficients. The limit of $`f(\stackrel{}{R},\tau )`$ for large $`\tau `$ is dominated by the eigenfunction $`\varphi _0`$ having the lowest eigenvalue $`E_0`$.
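A minimal Python sketch of steps (a)-(c) is given below. The callables `grad_log_psi_g` and `local_energy` (the latter returning $`\widehat{H}\psi _G/\psi _G`$) are user-supplied, and the stochastic rounding used to realize $`\langle M\rangle `$ is one common convention, assumed here rather than taken from the text:

```python
import numpy as np

def dmc_step(walkers, grad_log_psi_g, local_energy, E_T, dtau, rng):
    """One DMC step on an ensemble of walkers with shape (n_walkers, 3N)."""
    r = walkers + dtau * grad_log_psi_g(walkers)            # (a) drift
    r = r + rng.normal(scale=np.sqrt(dtau), size=r.shape)   # (b) Gaussian diffusion, variance dtau
    m_mean = np.exp(dtau * (E_T - local_energy(r)))         # (c) expected multiplicity <M>
    m = np.floor(m_mean + rng.random(len(r))).astype(int)   # stochastic rounding, so <m> = m_mean
    return np.repeat(r, m, axis=0)                          # branch: m copies of each walker
```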
We alter the structure of DMC in the following ways: (1) In order to represent an antisymmetric wave function that is both positive and negative, we introduce walkers, $`\{\vec{R}_m^+,\vec{R}_m^{-}\}`$, that respectively add or subtract their contributions to statistical expectations. The computation now involves ensembles of pairs of walkers carrying opposite signs. (2) Two distinct functions, $`\psi _G^\pm (\vec{R})`$, are used to guide the $`\pm `$ walkers. (3) The Gaussians $`\vec{U}^\pm `$ for the paired walkers are not independent; rather $`\vec{U}^{-}`$ is obtained by reflecting $`\vec{U}^+`$ in the perpendicular bisector of the vector $`\vec{R}^+-\vec{R}^{-}`$. Finally (4) the overlapping distributions that determine the next values of $`\vec{R}^\pm `$ are added algebraically so as to allow positive and negative walkers to cancel, while preserving the correct expected values.
In a Monte Carlo calculation of this kind, we “project” quantities of interest by calculating integrals weighted with some trial function, say $`\psi _T(\stackrel{}{R})`$. In DMC the energy eigenvalue, $`E_0`$, can be determined from:
$$E_0=\frac{\int \widehat{H}\psi _T(\vec{R})\varphi _0(\vec{R})d\vec{R}}{\int \psi _T(\vec{R})\varphi _0(\vec{R})d\vec{R}}=\frac{\underset{m}{\sum }\frac{\widehat{H}\psi _T(\vec{R}_m)}{\psi _G(\vec{R}_m)}}{\underset{m}{\sum }\frac{\psi _T(\vec{R}_m)}{\psi _G(\vec{R}_m)}}$$
(3)
replacing integrals by sums over positions of the random walk.
In our modified dynamics, Eq.(3) now takes the form
$$E_0=\frac{\underset{m}{\sum }\left[\frac{\widehat{H}\psi _T(\vec{R}_m^+)}{\psi _G^+(\vec{R}_m^+)}-\frac{\widehat{H}\psi _T(\vec{R}_m^{-})}{\psi _G^{-}(\vec{R}_m^{-})}\right]}{\underset{m}{\sum }\left[\frac{\psi _T(\vec{R}_m^+)}{\psi _G^+(\vec{R}_m^+)}-\frac{\psi _T(\vec{R}_m^{-})}{\psi _G^{-}(\vec{R}_m^{-})}\right]}.$$
(4)
If $`\{\stackrel{}{R}_m^+\}`$ and $`\{\stackrel{}{R}_m^{}\}`$ follow the same dynamics using the same $`\psi _G`$, then the expected values of numerator and denominator of Eq. (4) decay exponentially at large $`\tau `$. Some correlation among walkers is essential. This observation is reinforced by noting that the Pauli principle, which demands that fermion wave functions be antisymmetric, is a global condition, and cannot be satisfied by independent walkers. Put another way, it will be necessary to have dynamics that distinguish between walkers that carry different signs. These motivations underlie aspects (2) and (3) of our method outlined above.
A second aspect of the difficulty in treating fermion systems is that the density that one obtains naturally from a random walk is the symmetric ground state. In order for Eq. (4) to have an asymptotically bounded signal-to-noise ratio, walkers of opposite signs must be able efficiently to cancel. This underlies the need for modification (4) given above. The need for some degree of cancellation has been a theme of previous research starting with the work of Arnow et al. . The need for distinct dynamics for positive and negative walkers was stressed in . That these two aims could be accomplished by appropriate correlation among walkers was pointed out by Liu, Zhang, and Kalos . The use of distinct guiding functions is new and serves as the connection among the different algorithmic ideas that enables the treatment of general potentials.
Stable results can be obtained using correlated pairs only. Correct results are ensured when the dynamics have the property that the random walk for either member of the pair is the same as that of a single free walker, except when they cancel, a condition satisfied here. The expectations of Eqs.(3) and (4) are linear in the walker densities and are unchanged by correlations. Furthermore, we have devised a method of canceling opposite walkers that also preserves these expectations.
Let $`\phi _A(\stackrel{}{R})`$ be a trial function for the fermionic state. Let $`\phi _S(\stackrel{}{R})`$ be some approximation to the symmetric ground state wave function of the same Hamiltonian. Define:
$$\psi _G^\pm (\stackrel{}{R})=\sqrt{\phi _S^2(\stackrel{}{R})+c^2\phi _A^2(\stackrel{}{R})}\pm c\phi _A(\stackrel{}{R})$$
(5)
The following properties of these two functions are significant: (a) they are positive; (b) when $`c`$ is small, they are dominated by $`\phi _S`$, so that opposite walkers behave similarly; (c) $`\psi _G^+`$ transforms under an odd permutation $`𝒫`$ as follows:
$$\psi _G^+(𝒫\stackrel{}{R})=\psi _G^{}(\stackrel{}{R}).$$
(6)
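In code, the guiding pair of Eq. (5) and the symmetry property of Eq. (6) look as follows. This is a sketch with names of our choosing; since $`\phi _S`$ is symmetric and $`\phi _A`$ antisymmetric, an odd permutation amounts to flipping the sign of $`\phi _A`$:

```python
import numpy as np

def guiding_pair(phi_s, phi_a, c):
    """psi_G^+ and psi_G^- of Eq. (5), from symmetric/antisymmetric trial values at R."""
    root = np.sqrt(phi_s**2 + (c * phi_a)**2)
    return root + c * phi_a, root - c * phi_a

# Property (c): an odd permutation P flips phi_a while leaving phi_s unchanged,
# so psi_G^+(P R) = psi_G^-(R).
gp, gm = guiding_pair(1.3, 0.2, 0.1)
gp_perm, _ = guiding_pair(1.3, -0.2, 0.1)
assert np.isclose(gp_perm, gm)
```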
As mentioned above we modify simple DMC in several ways. The “drift” is applied in the usual way to walkers assumed to be at $`\stackrel{}{R}_0^\pm `$, using the two guiding functions:
$$\vec{R}^+=\vec{R}_0^++\delta \tau \vec{\nabla }\mathrm{ln}\psi _G^+(\vec{R}^+),\qquad \vec{R}^{-}=\vec{R}_0^{-}+\delta \tau \vec{\nabla }\mathrm{ln}\psi _G^{-}(\vec{R}^{-}).$$ (10)
Diffusion of the walkers, however, is carried out in a correlated way: let $`\stackrel{}{U}^+`$ be a vector of $`3N`$ Gaussian random variables each of mean zero and variance $`\delta \tau `$. New trial positions $`\stackrel{}{R}_n^\pm `$ are now given by
$$\stackrel{}{R}_n^+=\stackrel{}{R}^++\stackrel{}{U}^+;\stackrel{}{R}_n^{}=\stackrel{}{R}^{}+\stackrel{}{U}^{},$$
(11)
where the random vector $`\vec{U}^{-}`$ is obtained by reflection in the perpendicular bisector of the vector $`\vec{R}^+-\vec{R}^{-}`$ as described in (4) above.
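The reflection can be written as a Householder reflection across the hyperplane normal to $`\vec{R}^+-\vec{R}^{-}`$. Because the map is orthogonal, each walker's displacement is still an isotropic Gaussian, so single-walker dynamics are unchanged. A sketch, with hypothetical names:

```python
import numpy as np

def reflected_gaussian(u_plus, r_plus, r_minus):
    """Mirror U^+ in the perpendicular bisector of R^+ - R^-, giving the
    correlated displacement U^- of the partner walker."""
    d = (r_plus - r_minus).ravel()
    d_hat = d / np.linalg.norm(d)
    u = u_plus.ravel()
    u_minus = u - 2.0 * np.dot(u, d_hat) * d_hat   # Householder reflection: orthogonal,
    return u_minus.reshape(u_plus.shape)           # so U^- keeps the same Gaussian law
```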
Walker densities can be subtracted by computing their chances of arrival at a common point, but because they have different guiding functions, they do not exactly cancel. The analysis of “forward walking” allows one to determine the expected future contribution of a walker to any projected quantity. Thus, we can compute the change in expectations when a pair meets. For this change to be zero, a positive walker at $`\vec{R}_n^+`$ must survive to the next time step with probability
$$P^+(\vec{R}_n^+;\vec{R}^+,\vec{R}^{-})=\mathrm{max}\left[0,1-\frac{B^{-}\left(\vec{R}_n^+|\vec{R}^{-}\right)G\left(\vec{R}_n^+-\vec{R}^{-}\right)\psi _G^+\left(\vec{R}_n^+\right)}{B^+\left(\vec{R}_n^+|\vec{R}^+\right)G\left(\vec{R}_n^+-\vec{R}^+\right)\psi _G^{-}\left(\vec{R}_n^+\right)}\right]$$ (15)
where
$$G(\vec{R}^{\prime }-\vec{R})=\frac{\mathrm{exp}[-(\vec{R}^{\prime }-\vec{R})^2/(2\delta \tau )]}{(2\pi \delta \tau )^{3N/2}}$$
(16)
is the Gaussian density used in DMC. The branching factors, $`B^+(\stackrel{}{R}|\stackrel{}{R}^+)`$ and $`B^{}(\stackrel{}{R}|\stackrel{}{R}^{})`$ are
$$B^\pm (\vec{R})=\mathrm{exp}\left\{\delta \tau \left[E_T-\frac{H\psi _G^\pm (\vec{R})}{\psi _G^\pm (\vec{R})}\right]\right\}$$
(17)
An analogous expression is used for negative walkers.
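A direct transcription of Eqs. (15)-(16) in Python is sketched below; the branching factors of Eq. (17) are passed in precomputed, and all names are ours:

```python
import numpy as np

def gaussian_kernel(a, b, dtau):
    """Free-diffusion kernel G of Eq. (16); a.size = 3N."""
    d2 = np.sum((a - b) ** 2)
    return np.exp(-d2 / (2.0 * dtau)) / (2.0 * np.pi * dtau) ** (a.size / 2.0)

def survival_prob_plus(r_n, r_plus, r_minus, B_plus, B_minus, psi_gp, psi_gm, dtau):
    """P^+ of Eq. (15): survival probability of a + walker arriving at r_n."""
    ratio = (B_minus * gaussian_kernel(r_n, r_minus, dtau) * psi_gp(r_n)) / \
            (B_plus * gaussian_kernel(r_n, r_plus, dtau) * psi_gm(r_n))
    return max(0.0, 1.0 - ratio)
```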
An isolated walker may appear as a result of different branching factors at $`\{\stackrel{}{R}_m^+\}`$ and $`\{\stackrel{}{R}_m^{}\}`$; if, with probability one half, one generates a walker of opposite sign by interchanging the coordinates of two like-spin particles, then a pair is reconstituted that preserves future expectations.
To determine the energy, we use the estimator of Eq. (4). A sharp indication of the stability of the calculation is the behavior of its denominator
$$𝒟=\underset{m}{\sum }\left[\frac{\psi _T(\vec{R}_m^+)}{\psi _G^+(\vec{R}_m^+)}-\frac{\psi _T(\vec{R}_m^{-})}{\psi _G^{-}(\vec{R}_m^{-})}\right]$$
(18)
In a naive calculation, $`𝒟`$ decays to zero in an imaginary time of order $`\tau _c=1/(E_A-E_S)`$ where $`E_A`$ and $`E_S`$ are the fermion and boson energies. A stable method will show $`𝒟`$ asymptotically constant.
Although a system of free fermions in a periodic box is analytically trivial, it presents an exigent test of this method. For this system, the lowest symmetric state is constant, and the exact fermionic wave function is a determinant of plane waves. We use $`\rho =0.5`$ and set
$$\psi _G^\pm (\stackrel{}{R})=\sqrt{1+c^2\phi _A^2(\stackrel{}{R})}\pm c\phi _A(\stackrel{}{R})$$
(19)
where $`\phi _A`$ is a Slater determinant of one body orbitals $`\chi _{\stackrel{}{r}_i}^\stackrel{}{k}`$ of the following form:
$$\chi _{\vec{r}_i}^{\vec{k}}=\mathrm{exp}\left[i\vec{k}\cdot \left(\vec{r}_i+\lambda _B\underset{j\ne i}{\sum }\eta (r_{ij})\vec{r}_{ij}\right)\right]$$
(20)
The parameter $`\lambda _B`$ controls the departure of the nodal structure of this function from the exact shape. The fact that these functions are modulated only a little from a constant by $`\phi _A`$ means that the polarization of the population of plus and minus walkers is small.
In table I we report the results obtained for periodic systems of 7, 19, and 27 free fermions. The results agree with the analytic eigenvalues within the Monte Carlo estimates of the standard error. It has been conjectured that the computational complexity of Fermion Monte Carlo calculations will grow as $`N!`$, where $`N`$ is the number of particles in the system. Since (27!/7!) = 2.16 $`\times 10^{24}`$, a calculation with 27 or even 19 bodies would be impossible were that conjecture to be true.
We have also applied this algorithm to a system of 14 ³He atoms in a periodic box at equilibrium density, $`\rho =0.0216`$ Å⁻³. Energies are expressed in Kelvins, and lengths in Å.
With interatomic potentials that have a hard core, we may use the same function $`\phi _A`$ as for free fermions, but also need a Jastrow product. With
$$\phi _S=\phi _S(\vec{R})=\underset{i<j}{\prod }\mathrm{exp}[-(b/r_{ij})^5],$$
(21)
the guiding functions now have the form:
$$\psi _G^\pm (\stackrel{}{R})=\phi _S(\stackrel{}{R})\left[\sqrt{1+c^2\phi _A^2(\stackrel{}{R})}\pm c\phi _A(\stackrel{}{R})\right]$$
(22)
In Fig. 1 we plot the cumulative denominator as a function of imaginary time for a typical run. As can be seen, the fundamental stability of the method is well demonstrated. Fig. 2 shows the decay of the same denominator, when the method is made unstable by setting $`c=0`$.
Table II exhibits the eigenvalues of various runs with our method applied to the periodic system with 14 ³He atoms. They are all consistent, and yield a weighted average of -2.2558(39). The run marked (b) is a continuation of the run labeled (a) separated by a long run with a longer time step. As a whole, including such continuations, the longest aggregate sequence comprises a total imaginary time of 1830 $`K^{-1}`$. Using a total system energy difference of 20 $`K`$ (as we have measured), that corresponds to $`3.6\times 10^4`$ fermion decay times. An alternative measure of the length of the run, suggested by David Ceperley, is the ratio of the rms diffusion length of a particle to the mean spacing between particles. For this sequence of runs, that ratio is 19. Thus the observation of stable values of the sums in Eq. (4) is significant.
Space limitations preclude a complete description of the other checks that we have made that the results for ³He are correct: they include a fixed node calculation of exactly the same model problem, which yielded an eigenvalue of -2.08(1) $`K`$. A transient estimate (cf. Fig. 3), relaxing from the fixed node, is consistent with our result (shown as the dashed line.) Analysis of the results in Fig. 2 leads to a fermion-boson energy difference of 1.434(35) $`K`$ per particle. This agrees well with a direct calculation of the energy of a 14-body mass-3 boson system that gave -3.68(1) $`K`$.
By construction, the method proposed here introduces no approximations other than that of the short imaginary-time Green’s function. In other words, if the results are stable, then they are correct. Although we have not yet proved the stability of the method (i.e. that the long-term average of the denominator of the eigenvalue quotient is not zero), we believe that we have convincingly demonstrated the stability. Perhaps the most important conclusion that we may draw is that the “sign problem” of Fermion Monte Carlo for continuous systems is not intractable; the search for elegant computational methods in this and related applications is justified.
One of the authors (FP) has been supported, in part, by the National Science Foundation under grant ASC 9626329. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract No. W-7405-Eng.-48. We are happy to acknowledge helpful conversations with B.J. Alder, J. Carlson, D.M. Ceperley, G.V. Chester, R.Q. Hood, S.B. Libby, K.E. Schmidt, C.J. Umrigar, and S. Zhang. We thank particularly D.E. Post for his help and strong support. |
# GRB Spectral Hardness and Afterglow Properties
## Introduction
Early observations of gamma-ray bursts (GRBs) with afterglows led us to hypothesize that GRBs with afterglows are spectrally harder than those without. However, the heterogeneous nature of GRB observations, coupled with the multi-wavelength nature of afterglow observations, has led us to a number of concerns:
1. Afterglow observations are heterogeneous: observational biases for detecting radio afterglows are different than those for detecting optical and/or x-ray afterglows.
2. A GRB might have no intrinsic afterglow, or inadequate search conditions might result in no afterglow being detected.
3. Spectral hardness measures are biased by instrument performance: One instrumental dataset should be used.
Our solutions to these concerns are as follows:
1. We require GRBs to either have detected radio afterglows or moderately-complete radio afterglow searches. This condition satisfies the first and second concerns.
2. We require that GRBs be observed by BATSE. This satisfies the third concern.
Our resulting database is shown in Table 1.
GRB 980425 may or may not have a radio afterglow, depending on its association with supernova SN 1998bw. If there is an association with SN 1998bw, then GRB 980425 has x-ray, optical, and radio afterglows. If there is no association with SN 1998bw, then this GRB has only an x-ray afterglow, with no optical or radio afterglow. The SAX team (Pian et al. 1999) lists two possible x-ray afterglow sources for GRB 980425; the first one is consistent with SN 1998bw, the second one is not.
The SAX Team indicates that the second afterglow source might have “rebursted”, making its classification as a GRB counterpart questionable (this is not standard GRB behavior). However, it should be noted that the flux measurement of the second observation of this source represents less than a $`3\sigma `$ detection, placing the rebursting claim in doubt.
If the first afterglow source is associated with SN 1998bw, then GRB 980425 is significantly less luminous than typical GRBs. As discussed elsewhere in the literature (Schmidt 1999), very few BATSE GRBs can have luminosities this small in typical spatial distribution models.
It appears to us that the afterglow source of GRB 980425 is still in question. Because of these doubts, we consider independently the cases where GRB 980425 has and does not have a radio afterglow.
## Are GRBs With Radio Afterglows Different from Those Without?
Figure 1 demonstrates that GRBs with radio afterglows appear to be harder than those without. The hardness ratio HR(43/21) (100-1000 keV energy fluence divided by 25-100 keV energy fluence) has been used in this analysis, because it has the largest signal-to-noise ratio available in BATSE 4-channel data and spans the largest spectral range. However, similar results can be obtained from other hardness ratios.
In Table 2 we summarize Student’s t-test probabilities that the $`\mathrm{log}`$[HR(43/21)] distributions of bursts without radio afterglows and bursts with radio afterglows have different means. The significance of a correlation depends strongly on the status of GRB 980425 due to small number statistics. If GRB 980425 is not associated with SN 1998bw, then the correlation between spectral hardness and radio afterglow is more likely.
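The test itself is a standard two-sample comparison of the log hardness ratios. A sketch using SciPy is below; the arrays hold placeholder values (not the paper's data, which come from the BATSE 4-channel fluences of the bursts in Table 1), and the Welch unequal-variance variant is our assumption:

```python
import numpy as np
from scipy import stats

# Hypothetical placeholder log10[HR(43/21)] values -- NOT the paper's measurements.
log_hr_radio    = np.log10([4.5, 3.8, 5.1, 4.2, 4.9])       # bursts with radio afterglows
log_hr_no_radio = np.log10([2.7, 3.2, 2.5, 3.0, 2.8, 3.4])  # bursts without

t_stat, p_two_sided = stats.ttest_ind(log_hr_radio, log_hr_no_radio, equal_var=False)
print(t_stat, p_two_sided)  # a small p-value indicates the two means differ
```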
## Discussion
To determine why this difference in hardness might exist, we checked other hardness ratios in the four-channel data. Hardness ratios involving channels 3 and 4 indicate similar results as obtained in Figure 1. Thus, any spectral differences dependent on radio afterglow type appear to result from the distribution of high energy photons. This is supported by Figure 2, which indicates no correlation of radio afterglow type with hardness ratio HR21.
This is also supported by Figure 3, which compares the GRB function spectral parameters E<sub>break</sub> and $`\beta `$ for the bursts in question (GRBs 980425 and 980519 are not plotted due to large $`\beta `$ errors). Large values of E<sub>break</sub>, large values of $`\beta `$, or both produce conditions indicating many high-energy photons. GRBs with radio afterglows tend to occupy a different diagram region than GRBs without radio afterglows. Since E<sub>break</sub> and $`\beta `$ are obtained from time-averaged spectra, we suspect that the diagram regions might be even more distinct if signal-to-noise were better for faint BATSE GRBs.
The possible correlation between GRB spectral hardness and afterglow type can be clarified with additional observations in the future. Resolution of the status of GRB 980425 would also help clarify this issue.
If GRB spectral hardness is an indicator of radio afterglow type, then a direct link between the central engine and the delayed emission is established. Such a link could be very important to the understanding of GRB physics. The Lorentz factor of the expanding external shock could be constrained by this correlation. For this reason, it is as important to determine upper flux limits on afterglow non-detections as it is to provide information on detections.
## Conclusions
There is evidence that GRBs with radio afterglows have harder gamma-ray burst emission than those without. Due to small number statistics, the significance of this correlation depends at present time on whether or not GRB 980425 is associated with supernova SN 1998bw. There is evidence of a similar correlation between bursts with optical afterglows, but this is more difficult to document because the literature is less clear on conditions under which an optical search failed to yield an afterglow.
If we assume that a relationship exists between spectral hardness and radio afterglow type, then GRBs with radio afterglows appear to have more high-energy photons (E $`\gtrsim `$ 100 keV) than those without radio afterglows, as determined from the spectral parameters E<sub>break</sub> and $`\beta `$. Also, roughly $`2/3`$ of BATSE-detected GRBs should produce radio afterglows based on the overall distributions of E<sub>break</sub> and $`\beta `$. It should be noted that all GRBs producing afterglows of any type belong to the long, bright, soft GRB class.
## Acknowledgements
We thank Dale Frail, Chip Meegan, Geoff Pendleton, Ralph Wijers, David Haglin, and Chryssa Kouveliotou for valuable discussions. Jon Hakkila acknowledges 1999 NASA/ASEE Summer Faculty Fellowship Program support. |
# SIN and SIS tunneling in cuprates.
## Abstract
We calculate the SIN and SIS tunneling conductances for the spin-fermion model. We argue that at strong spin-fermion coupling, relevant to cuprates, both conductances have dip features near the threshold frequencies at which a tunneling electron begins emitting propagating spin excitations. We argue that the resonance spin frequency measured in neutron scattering can be inferred from the tunneling data by analyzing the derivatives of SIN and SIS conductances.
The electron tunneling experiments are powerful tools to study the spectroscopy of superconductors. These experiments measure the dynamical conductance $`dI/dV`$ through a junction as a function of applied voltage $`V`$ and temperature. For superconductor-insulator-normal metal (SIN) junctions, the measured dynamical conductance is proportional to the electron density of states (DOS) in a superconductor $`N(\omega )=-(1/\pi )\int dk\,\text{Im}G(k,\omega )`$ at $`\omega =eV`$ [3]. For superconductor-insulator-superconductor (SIS) junctions, the conductance $`dI/dV\propto G(\omega =eV)`$, where $`G(\omega )=\int _0^\omega d\mathrm{\Omega }\,N(\omega -\mathrm{\Omega })\,\partial _\mathrm{\Omega }N(\mathrm{\Omega })`$ is proportional to the derivative over voltage of the convolution of the two DOS.
For conventional superconductors, the tunneling experiments have long been considered one of the most relevant probes for the verification of the phononic mechanism of superconductivity. In this communication we discuss to what extent the tunneling experiments on cuprates may provide information about the pairing mechanism in high-$`T_c`$ superconductors. More specifically, we discuss the implications of the spin-fluctuation mechanism of high-temperature superconductivity for the forms of SIN and SIS dynamical conductances.
The spin-fluctuation mechanism implies that the pairing between electrons is mediated by the exchange of their collective spin excitations peaked at or near the antiferromagnetic momentum $`Q`$. This mechanism yields a $`d`$wave superconductivity , and explains a number of measured features in superconducting cuprates, including the peak/dip/hump features in the ARPES data near $`(0,\pi )`$ , and the resonance peak below $`2\mathrm{\Delta }`$ in the inelastic neutron scattering data . Moreover, in the spin-fluctuation scenario, the ARPES and neutron features are related: the peak-dip distance in ARPES equals the resonance frequency in the dynamical spin susceptibility . This relation has been experimentally verified in optimally doped and underdoped $`YBCO`$ and optimally doped $`Bi2212`$ materials . Here we argue that the resonance spin frequency can also be inferred from the tunneling data by analyzing the derivatives of SIN and SIS conductances.
The SIN and SIS tunneling experiments have been performed on $`YBCO`$ and $`Bi2212`$ materials . At low/moderate frequencies, both SIN and SIS conductances display a behavior which is generally expected in a $`d`$wave superconductor: SIN conductance is linear in voltage for small voltages, and has a peak at $`eV=\mathrm{\Delta }`$ where $`\mathrm{\Delta }`$ is the maximum value of the $`d`$wave gap , while SIS conductance is quadratic in voltage for small voltages, and has a near discontinuity at $`eV=2\mathrm{\Delta }`$ . These features have been explained by a weak-coupling theory, without specifying the nature of the pairing interaction . However, above the peaks, both SIN and SIS conductances have extra dip/hump features which become visible at around optimal doping, and grow with underdoping . We argue that these features are sensitive to the type of the pairing interaction and can be explained in the spin-fluctuation theory.
As a warm-up for the strong coupling analysis, consider first SIN and SIS tunneling in a $`d`$wave superconductor in the weak coupling limit. In this limit, the fermionic self-energy is neglected, and the superconducting gap does not depend on frequency. For simplicity, we consider a circular Fermi surface for which $`\mathrm{\Delta }_k=\mathrm{\Delta }\mathrm{cos}2\varphi `$.
We begin with the SIN tunneling. Integrating $`G(k,\omega )=(\omega +ϵ_k)/(\omega ^2-ϵ_k^2-\mathrm{\Delta }_k^2)`$ over $`ϵ_k=v_F(k-k_F)`$ we obtain
$`N(\omega )=Re\frac{\omega }{2\pi }\int _0^{2\pi }\frac{d\varphi }{\sqrt{\omega ^2-\mathrm{\Delta }^2\mathrm{cos}^2(2\varphi )}}`$ (1)
$`=\frac{2}{\pi }\{\begin{array}{cc}K(\mathrm{\Delta }/\omega )\hfill & \text{for }\omega >\mathrm{\Delta }\hfill \\ (\omega /\mathrm{\Delta })K(\omega /\mathrm{\Delta })\hfill & \text{for }\omega <\mathrm{\Delta }\hfill \end{array},`$ (4)
where $`K(x)`$ is the elliptic integral. We see that $`N(\omega )\propto \omega `$ for $`\omega \ll \mathrm{\Delta }`$ and diverges logarithmically as $`(1/\pi )\mathrm{ln}(8\mathrm{\Delta }/|\mathrm{\Delta }-\omega |)`$ for $`\omega \to \mathrm{\Delta }`$. At larger frequencies, $`N(\omega )`$ gradually decreases to a frequency independent, normal state value of the DOS, which we normalized to $`1`$. The plot of $`N(\omega )`$ is presented in Fig 1a.
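Eq. (4) is straightforward to evaluate numerically; a sketch using SciPy is below (note that SciPy's `ellipk(m)` takes the parameter $`m=k^2`$, not the modulus $`k`$, and all function names here are ours):

```python
import numpy as np
from scipy.special import ellipk  # complete elliptic integral K(m), with m = k^2

def dos_dwave(w, delta):
    """Weak-coupling d-wave DOS of Eq. (4); the normal-state value is normalized to 1."""
    w = np.asarray(w, dtype=float)
    above = w > delta
    k = np.where(above, delta / np.maximum(w, 1e-300), w / delta)  # modulus of K
    pref = np.where(above, 1.0, w / delta)
    return (2.0 / np.pi) * pref * ellipk(k ** 2)

w = np.linspace(0.01, 3.0, 6)
print(dos_dwave(w, delta=1.0))  # linear at small w, log-divergent as w -> delta
```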
We now turn to the SIS tunneling. Substituting the results for the DOS into $`G(\omega )`$ and integrating over $`\mathrm{\Omega }`$, we obtain the result presented in Fig 1b. At small $`\omega `$, $`G(\omega )`$ is quadratic in frequency, which is an obvious consequence of the fact that the DOS is linear in $`\omega `$. At $`\omega =2\mathrm{\Delta }`$, $`G(\omega )`$ undergoes a finite jump. This discontinuity is related to the fact that near $`2\mathrm{\Delta }`$, the integral over the two DOS includes the region $`\mathrm{\Omega }\approx \mathrm{\Delta }`$ where both $`N(\mathrm{\Omega })`$ and $`N(\omega -\mathrm{\Omega })`$ are logarithmically singular, and $`\partial _\mathrm{\Omega }N(\mathrm{\Omega })`$ diverges as $`1/(\mathrm{\Omega }-\mathrm{\Delta })`$. The singular contribution to $`G(\omega )`$ from this region can be evaluated analytically and yields
$$G(\omega )=\frac{1}{\pi ^2}P\int _{-\infty }^{\infty }\frac{dx\,\mathrm{ln}|x|}{x+\omega -2\mathrm{\Delta }}=-\frac{1}{2}\text{sign}(\omega -2\mathrm{\Delta })$$
(5)
We see that the amount of jump in the SIS conductance is a universal number which does not depend on $`\mathrm{\Delta }`$.
The results for the SIN and SIS conductances in a $`d`$wave gas agree with earlier studies . In previous studies, however, SIS conductance was computed numerically, and the universality of the amount of the jump at $`2\mathrm{\Delta }`$ was not discussed, although it is clearly seen in the numerical data.
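The jump at $`2\mathrm{\Delta }`$ can also be seen by brute force. Below is a crude numerical sketch that reuses `dos_dwave` from the previous block and differentiates the $`T=0`$ SIS current $`I(V)\propto \int _0^{eV}N(E)N(eV-E)dE`$; the uniform grid resolves the logarithmic singularities only qualitatively:

```python
import numpy as np

def sis_conductance(volts, delta, n_grid=20000):
    """dI/dV of a d-wave SIS junction at T=0 by numerical quadrature + differentiation."""
    current = []
    for v in volts:
        E = np.linspace(1e-4 * delta, v - 1e-4 * delta, n_grid)
        current.append(np.trapz(dos_dwave(E, delta) * dos_dwave(v - E, delta), E))
    return np.gradient(np.array(current), volts)

v = np.linspace(0.1, 4.0, 400)
g = sis_conductance(v, delta=1.0)  # quadratic rise at small V, near-discontinuity at 2*delta
```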
We now turn to the main subject of the paper and discuss the forms of SIN and SIS conductances for strong spin-fermion interaction.
We first show that the features observed in a gas are in fact quite general and are present in an arbitrary Fermi liquid as long as the impurity scattering is weak. Indeed, in an arbitrary $`d`$wave superconductor,
$$N(\omega )\propto \text{Im}\int d\varphi \frac{\mathrm{\Sigma }(\varphi ,\omega )}{(F^2(\varphi ,\omega )-\mathrm{\Sigma }^2(\varphi ,\omega ))^{1/2}},$$
(6)
where $`\varphi `$ is the angle along the Fermi surface, and $`F(\varphi ,\omega )`$ and $`\mathrm{\Sigma }(\varphi ,\omega )`$ are the retarded anomalous pairing vertex and retarded fermionic self-energy at the Fermi surface (the latter includes a bare $`\omega `$ term in the fermionic propagator). The measured superconducting gap $`\mathrm{\Delta }(\varphi )`$ is a solution of $`F(\varphi ,\mathrm{\Delta }(\varphi ))=\mathrm{\Sigma }(\varphi ,\mathrm{\Delta }(\varphi ))`$.
In the absence of impurity scattering, $`Im\mathrm{\Sigma }`$ and $`ImF`$ in a superconductor both vanish at $`T=0`$ up to a frequency which for arbitrary strong interaction exceeds $`\mathrm{\Delta }`$. The Kramers-Kronig relation then yields at low frequencies $`Re\mathrm{\Sigma }(\varphi ,\omega )\propto \omega `$, $`ReF(\varphi ,\omega )\propto (\varphi -\varphi _{node})`$ where $`\varphi _{node}`$ is a position of the node of the $`d`$–wave gap. Substituting these forms into (6) and integrating over $`\varphi `$ we obtain $`N(\omega )\propto \omega `$ although the prefactor is different from that in a gas. The linear behavior of the DOS in turn gives rise to the quadratic behavior of the SIS conductance.
Similarly, expanding $`\mathrm{\Sigma }^2-F^2`$ near each of the maxima of the gap we obtain $`\mathrm{\Sigma }^2(\varphi ,\omega )-F^2(\varphi ,\omega )\propto (\omega -\mathrm{\Delta })+B(\varphi -\varphi _{max})^2`$, where $`B>0`$. Then
$$N(\omega )\propto Re\int \frac{d\stackrel{~}{\varphi }}{\sqrt{B\stackrel{~}{\varphi }^2+(\mathrm{\Omega }-\mathrm{\Delta })}}\sim -\frac{\mathrm{ln}|\mathrm{\Omega }-\mathrm{\Delta }|}{\sqrt{B}}$$
(7)
This result implies that the SIN conductance in an arbitrary Fermi liquid still has a logarithmic singularity at $`eV=\mathrm{\Delta }`$, although its residue depends on the strength of the interaction. The logarithmical divergence of the DOS causes the discontinuity in the SIS conductance for the same reasons as in a Fermi gas.
In the presence of impurities, the logarithmical singularity is smeared out, and the DOS acquires a nonzero value at zero frequency (at least, in the self-consistent $`T`$matrix approximation ). However, for small concentration of impurities, this affects the conductances only in narrow frequency regions near singularities while away from these regions the behavior is the same as in the absence of impurities.
We now show that a strong spin-fermion interaction gives rise to extra features in the SIS and SIN conductances, not present in a gas. The qualitative explanation of these features is the following. At strong spin-fermion coupling, a $`d`$-wave superconductor possesses propagating, spin-wave type collective spin excitations near antiferromagnetic momentum $`Q`$ and at frequencies below $`2\mathrm{\Delta }`$. These excitations give rise to a sharp peak in the dynamical spin susceptibility at a frequency $`\mathrm{\Omega }_{res}<2\mathrm{\Delta }`$ , and also contribute to the damping of fermions near hot spots (points at the Fermi surface separated by $`Q`$), where the spin-mediated $`d`$–wave superconducting gap is at maximum. If the voltage for SIN tunneling is such that $`eV=\mathrm{\Omega }_{res}+\mathrm{\Delta }`$, then an electron which tunnels from the normal metal can emit a spin excitation and fall to the bottom of the band (see Fig. 2a), losing its group velocity. This obviously leads to a sharp reduction of the current and produces a drop in $`dI/dV`$.
A similar effect holds for SIS tunneling. Here, however, one has to first break an electron pair, which costs the energy $`2\mathrm{\Delta }`$. After a pair is broken, one of the electrons becomes a quasiparticle in a superconductor and takes an energy $`\mathrm{\Delta }`$, while the other tunnels. If $`eV=2\mathrm{\Delta }+\mathrm{\Omega }_{res}`$, the electron which tunnels through a barrier has energy $`\mathrm{\Delta }+\mathrm{\Omega }_{res}`$, and can emit a spin excitation and fall to the bottom of the band. This again produces a sharp drop in $`dI/dV`$ (see Fig. 2b).
In the rest of the paper we consider this effect in more detail and make quantitative predictions for the experiments. Our goal is to compute $`dI/dV`$ for SIN and SIS tunneling for strong spin-fermion interaction.
The point of departure for our analysis is the set of two Eliashberg-type equations for the fermionic self-energy $`\mathrm{\Sigma }_\omega `$, and the spin polarization operator $`\mathrm{\Pi }_\mathrm{\Omega }`$. The latter is related to the dynamical spin susceptibility at the antiferromagnetic momentum by $`\chi ^{-1}(Q,\mathrm{\Omega })\propto 1-\mathrm{\Pi }_\mathrm{\Omega }`$. The same set was used in our earlier analysis of the relation between ARPES and neutron data . In Matsubara frequencies these equations read ($`\stackrel{~}{\mathrm{\Sigma }}_{\omega _m}=i\mathrm{\Sigma }(\omega _m)`$)
$`\stackrel{~}{\mathrm{\Sigma }}_{\omega _m}=\omega _m+\frac{3R}{8\pi ^2}\int \frac{\stackrel{~}{\mathrm{\Sigma }}_{\omega _m+\mathrm{\Omega }_m}}{q_x^2+\stackrel{~}{\mathrm{\Sigma }}_{\omega _m+\mathrm{\Omega }_m}^2+F^2}\frac{dq_x\,d\mathrm{\Omega }_m}{\sqrt{q_x^2+1-\mathrm{\Pi }_\mathrm{\Omega }}}`$ (8)
$`\mathrm{\Pi }_\mathrm{\Omega }=\frac{1}{2}\int \frac{d\omega _m}{\omega _{sf}}\left(\frac{\stackrel{~}{\mathrm{\Sigma }}_{\mathrm{\Omega }_m+\omega _m}\stackrel{~}{\mathrm{\Sigma }}_{\omega _m}+F^2}{\sqrt{\stackrel{~}{\mathrm{\Sigma }}_{\mathrm{\Omega }_m+\omega _m}^2+F^2}\sqrt{\stackrel{~}{\mathrm{\Sigma }}_{\omega _m}^2+F^2}}-1\right).`$ (9)
This set is a simplification of the full set of Eliashberg equations that includes also the equation for the anomalous vertex $`F(\omega )`$ . As in we assume that near optimal doping, the frequency dependence of $`F(\omega )`$ is weak at $`\omega \sim \mathrm{\Delta }`$ relevant to our analysis, and replace $`F(\omega )`$ by a frequency independent input parameter $`F`$. Other input parameters in (9) are the dimensionless coupling constant $`R=\overline{g}/(v_F\xi ^{-1})`$ and a typical spin fluctuation frequency $`\omega _{sf}=(\pi /4)(v_F\xi ^{-1})^2/\overline{g}`$. They are expressed in terms of the effective spin-fermion coupling constant $`\overline{g}`$, the Fermi velocity at a hot spot $`v_F`$, and the magnetic correlation length $`\xi `$. By all accounts, at and below optimal doping, $`R\gtrsim 1`$ , i.e., the system behavior falls into the strong coupling regime.
Strictly speaking, the set (9) is valid near hot spots where $`\varphi \approx \varphi _{max}`$. Away from hot spots $`F(\varphi )`$ is reduced compared to $`F`$. We, however, will demonstrate that the new features due to spin-fermion interaction are produced solely by fermions from hot regions.
As in , we consider the solution of (9) for the experimentally relevant case $`F\gg R\omega _{sf}`$ when the measured superconducting gap $`\mathrm{\Delta }\sim F^2/(R^2\omega _{sf})\gg \omega _{sf}`$. In this situation, at frequencies $`\sim \mathrm{\Delta }`$, fermionic excitations in the normal state are overdamped due to strong spin-fermion interaction. In a superconducting state, the form of the spin propagator is modified at low frequencies because of the gap opening, and this gives rise to a strong feedback from superconductivity on the electron DOS.
More specifically, we argued in that in a superconductor, $`\mathrm{\Pi }_\mathrm{\Omega }`$ at low frequencies $`\mathrm{\Omega }\ll 2\mathrm{\Delta }`$ behaves as $`\mathrm{\Omega }^2/(\mathrm{\Delta }\omega _{sf})`$, i.e., collective spin excitations are undamped, propagating spin waves. This behavior is peculiar to a superconductor – in the normal state, the spin excitations are completely overdamped. The propagating excitations give rise to the resonance in $`\chi (Q,\mathrm{\Omega })`$ at $`\mathrm{\Omega }_{res}\sim (\mathrm{\Delta }\omega _{sf})^{1/2}\gg \omega _{sf}`$ where $`Re\mathrm{\Pi }(\mathrm{\Omega }_{res})=1`$ . This resonance accounts for the peak in neutron scattering .
The presence of a new magnetic propagating mode changes the electronic self-energy for electrons near hot spots. In the absence of a propagating mode, an electron can decay only if its energy exceeds $`3\mathrm{\Delta }`$. Due to resonance, an electron at a hot spot can emit a spin wave already when its energy exceeds $`\mathrm{\Delta }+\mathrm{\Omega }_{res}`$. It is essential that contrary to a conventional electron-electron scattering, this process gives rise to a discontinuity in $`Im\mathrm{\Sigma }(\omega )`$ at the threshold. Indeed, using the spectral representation to transform from Matsubara to real frequencies in the first equation in (9), integrating over momentum and neglecting for simplicity unessential $`q_x^2`$ in the spin susceptibility, we obtain for $`\omega \gtrsim \omega _{th}=\mathrm{\Delta }+\mathrm{\Omega }_{res}`$
$`Im\mathrm{\Sigma }(\omega )\propto \int _{\mathrm{\Omega }_{res}}^{\omega -\mathrm{\Delta }}d\mathrm{\Omega }\frac{1}{\sqrt{\omega -\mathrm{\Omega }-\mathrm{\Delta }}}\frac{1}{\sqrt{\mathrm{\Omega }-\mathrm{\Omega }_{res}}}`$
$`\propto \int _0^{(\omega -\omega _{th})^{1/2}}dx\frac{1}{\sqrt{\omega -\omega _{th}-x^2}}=\frac{\pi }{2},`$ (11)
We see that $`Im\mathrm{\Sigma }(\omega )`$ jumps to a finite value at the threshold. This discontinuity is peculiar to two dimensions. By Kramers-Kronig relation, the discontinuity in $`Im\mathrm{\Sigma }`$ gives rise to a logarithmical divergence of $`Re\mathrm{\Sigma }`$ at $`\omega =\omega _{th}`$. This in turn gives rise to a vanishing spectral function near hot spots, and accounts for a sharp dip in the ARPES data .
We now show that the singularity in $`Re\mathrm{\Sigma }(\omega )`$ causes the singularity in the derivatives over voltages of both SIN and SIS conductances $`d^2I/dV^2`$. Indeed, near a hot spot, $`F(\varphi )=F(1-\lambda \stackrel{~}{\varphi }^2)`$ where $`\stackrel{~}{\varphi }=\varphi -\varphi _{max}`$, and $`\lambda >0`$. Then, quite generally, $`Re\mathrm{\Sigma }(\varphi ,\omega )\propto \mathrm{ln}|\omega -\omega _{th}(\varphi )|`$ where $`\omega _{th}(\varphi )=\omega _{th}+C\stackrel{~}{\varphi }^2`$, and $`C>0`$. Substituting this expression into the DOS and differentiating over frequency, we obtain after a simple algebra
$`\frac{\partial N(\omega )}{\partial \omega }\propto \int \frac{F^2(\varphi )}{\mathrm{\Sigma }^3(\varphi ,\omega )}\partial _\omega \mathrm{\Sigma }(\varphi ,\omega )d\varphi `$
$`\propto \frac{1}{\mathrm{ln}^3|\omega -\omega _{th}|}\frac{\mathrm{\Theta }(\omega _{th}-\omega )}{\sqrt{\omega _{th}-\omega }},`$ (13)
where $`\mathrm{\Theta }(x)`$ is a step function. We see that $`N(\omega )/\omega `$ has a one-sided, square-root singularity at $`\omega =\omega _{th}`$. Physically, this implies that the conductance drops when propagating electrons start emitting spin excitations. Note that the typical $`\varphi `$ which contribute to the singularity are small (of order $`|\omega _{th}\omega |^{1/2}`$), which justifies our assertion that the singularity is confined to hot spots.
The singularity in $`\partial N(\omega )/\partial \omega `$ is likely to give rise to a dip in $`N(\omega )`$ at $`\omega \gtrsim \omega _{th}`$. The argument here is based on the fact that if the angular dependence of $`\omega _{th}(\varphi )`$ is weak (i.e., $`C`$ is small), then $`\mathrm{\Sigma }(\omega _{th})\gg F(\omega _{th})`$, and $`N(\omega _{th})`$ reaches its normal state value with infinite negative derivative. Obviously then, at $`\omega >\omega _{th}`$, $`N(\omega )`$ goes below its value in the normal state and should therefore have a minimum at some $`\omega \gtrsim \omega _{th}`$. Furthermore, at larger frequencies, we solved (9) perturbatively in $`F(\omega )`$ and found that $`N(\omega )`$ approaches a normal state value from above. This implies that besides a dip, $`N(\omega )`$ should also display a hump somewhere above $`\omega _{th}`$. The behavior of the SIN conductance is schematically shown in Fig. 3a.
Similar results hold for SIS tunneling. The derivative of the SIS current, $`d^2I/dV^2\propto \partial G(\omega )/\partial \omega `$, is given by
$$\frac{\partial G(\omega )}{\partial \omega }=\int _0^\omega \partial _\omega N(\omega -\mathrm{\Omega })\,\partial _\mathrm{\Omega }N(\mathrm{\Omega })\,d\mathrm{\Omega }$$
(14)
Evaluating the integral in the same way as for SIN tunneling, we find a square-root singularity at $`\omega =\omega _{th}^{\prime }=2\mathrm{\Delta }+\mathrm{\Omega }_{res}`$.
$`\frac{d^2I}{dV^2}\propto P\int _0^\omega \frac{d\mathrm{\Omega }}{\omega -\mathrm{\Omega }-\mathrm{\Delta }}\frac{1}{\mathrm{ln}^3|\omega _{th}-\mathrm{\Omega }|}\frac{\mathrm{\Theta }(\omega _{th}-\mathrm{\Omega })}{\sqrt{\omega _{th}-\mathrm{\Omega }}}`$ (15)
$`\propto \frac{1}{\mathrm{ln}^3|\omega _{th}^{\prime }-\omega |}\frac{\mathrm{\Theta }(\omega _{th}^{\prime }-\omega )}{\sqrt{\omega _{th}^{\prime }-\omega }}`$ (16)
The singularity comes from the region where $`\mathrm{\Omega }\approx \omega _{th}`$ and $`\omega -\mathrm{\Omega }\approx \mathrm{\Delta }`$, and both $`\partial _\omega N(\omega -\mathrm{\Omega })`$ and $`\partial _\mathrm{\Omega }N(\mathrm{\Omega })`$ are singular.
Again, it is very plausible that the singularity of the derivative causes a dip at a frequency $`\omega \gtrsim \omega _{th}^{\prime }`$, and a hump at even larger frequency. We stress, however, that at exactly $`\omega _{th}^{\prime }`$, the SIS conductance has an infinite derivative, while the dip occurs at a frequency which is somewhat larger than $`\omega _{th}^{\prime }`$. The behavior of the SIS conductance is presented in Fig 3.
Qualitatively, the forms of conductances presented in Fig 3 agree with the SIN and SIS data for YBCO and Bi2212 materials . Moreover, recent SIS tunneling data for $`Bi2212`$ indicate that the relative distance between the peak and the dip ($`\mathrm{\Omega }_{res}/(2\mathrm{\Delta })`$ in our theory) decreases with underdoping. More data analysis is however necessary to quantitatively compare tunneling and neutron data.
To summarize, in this paper we considered the forms of SIN and SIS conductances both for noninteracting fermions, and for fermions which strongly interact with their own collective spin degrees of freedom. We argue that for strong spin-fermion interaction, the resonance spin frequency $`\mathrm{\Omega }_{res}`$ measured in neutron scattering can be inferred from the tunneling data by analyzing the derivatives of SIN and SIS conductances. We found that the derivative of the SIN conductance diverges at $`eV=\mathrm{\Delta }+\mathrm{\Omega }_{res}`$ while the derivative of the SIS conductance diverges at $`eV=2\mathrm{\Delta }+\mathrm{\Omega }_{res}`$, where $`\mathrm{\Delta }`$ is the maximum value of the $`d`$wave gap.
It is our pleasure to thank G. Blumberg, A. Finkel’stein and particularly J. Zasadzinski for useful conversations. The research was supported by NSF DMR-9979749. |
# Magnetic field and unstable accretion during AM Herculis low states. Based on data collected at SAO (Russia), CAO (Crimea), BAO (Bulgaria) and OHP (France)
## 1 Introduction
AM Herculis (4U1814+49) is the well-known prototype of the “polar” systems, a subclass of cataclysmic variables in which a highly magnetized ($`10^7`$ G) white dwarf in a close binary system accretes matter from a low-mass companion (see Cropper 1990, Chanmugam 1992 for a review). From the long term optical monitoring collected and kindly made available to us by the AAVSO (J. Mattei, private communication), it is now well known to oscillate irregularly between two different states of optical brightness, a high state at V $`\sim `$ 12.5 corresponding to high accretion rate and a low state (V $`\sim `$ 15) during which the accretion luminosity is significantly reduced so that photospheric emission of the two stars becomes visible in the infrared for the companion and in the UV and the optical for the white dwarf. The low state reveals in particular a complicated optical spectrum where strong absorption features due to the Zeeman splitting of the Balmer lines produced in the high surface magnetic field of the white dwarf are clearly visible, allowing the direct measurement of the field (Schmidt et al. 1981, Latham et al. 1981, Young et al. 1981). During low states, the UV emission is also found consistent with white dwarf atmosphere models with T = (2-2.5)$`\times 10^4`$ K and typical size (6-8)$`\times 10^6`$ m (Heise & Verbunt 1988, Gänsicke et al. 1995, Silber et al. 1996).
The question of whether or not the accretion ceases during low states is still open. UV data indicate that the polar caps are still substantially heated and the few low states observed in the X-rays reveal the presence of residual accretion. A weak X-ray modulation due to the occultation of the main accreting pole is detected (Fabbiano 1982, de Martino et al. 1998). At such low rates, the accretion onto the white dwarf is highly unstable and eventually switches off, as recently observed by the BeppoSAX satellite (de Martino et al. 1998). Strong (30%) quasi-periodic optical oscillations with periods near 5 minutes have been observed during a decline to a low state and interpreted as an accretion instability arising close to the capture radius (Bonnet-Bidaud et al. 1991). A spectacular sharp-rising ($`\sim `$1 hr) flare of $`\sim `$2 mag was also detected during a 1992 low state, tentatively associated with a stellar flare from the red dwarf companion (Shakhovskoy et al. 1993). We present here data obtained during different low states of AM Her in 1990, 1991 and 1997 which show that the source presents very different characteristics and variability. The 1997 data were obtained to supplement contemporaneous BeppoSAX X-ray observations (de Martino et al. 1998).
## 2 Observations
### 2.1 Photometry and polarimetry
Photometric and polarimetric observations were conducted at the 1.25m AZT-11 telescope of the Crimean Astrophysical Observatory (CAO), equipped with the double-beam chopping polarimeter of the Helsinki University (Korhonen et al. 1984). On 1991 Sept 4 and 5 and 1997 July 1 and 30, UBVRI data were collected during $`\sim `$3-4 h intervals overlapping spectroscopic observations described below. The UBVRI photometric data were recorded automatically with a 23.1s resolution and the polarimeter was used in circular polarization mode with a resolution of 3 min in the same bands. However, a statistically significant signal was only received in circular polarization in the R and I bands, and only the corresponding data are discussed here. The full description of the polarimeter and the method of the observations is presented in Berdyugin & Shakhovskoy (1993). On 1997 July 1, data were also obtained at the CAO 2.6m Shajn telescope, using a one-channel polarimeter with a fast rotating achromatic quarter-wave plate, in the wide R-band (0.50-0.75 µm) with a 4s integration time.
On 1991 Sept 4 and 5, photometric data were acquired at the 6-m BTA telescope of the Special Astrophysical Observatory (SAO) (Nizhnij Arkhyz, Russia). The observations were carried out at the Nasmyth focus of the telescope, simultaneously with the spectroscopy, using 50% of the incoming flux split by a dichroic plate to the NEF photometer (Vikuliev et al. 1991). A light curve through a Johnson B-filter with a 12 arcsec aperture was recorded with a 0.1s resolution during the observations. UBVR measurements were also performed at the beginning and end of the observations to calibrate the brightness level of the source. On 1997 July 3 and 4, complementary photometric data were also obtained at the Belogradchik Astronomical Observatory BAO (Bulgaria), using an ST-8 CCD camera attached to a 60cm telescope.
All UBVR magnitudes were obtained from differential measurements, using the star D ($`m_V=13.1`$) in the field as a comparison (Liller 1977, Andronov & Korotin 1982).
### 2.2 Spectroscopy
Spectroscopic data were collected during AM Her low states in 1990, 1991 and 1997 using the SP-124 spectrograph of the 6-m BTA telescope (Ioannisiani et al. 1982). A television scanner with two lines of 1024 channels is used to record the sky and source spectra simultaneously in a photon-counting mode (see Somova et al. 1982, for a detailed description of the instrumentation). A 2.5 arcsec aperture was set, adapted to the seeing, and the spectrograph was equipped with a grating yielding a resolution of 4Å, 2Å and 2Å, respectively, in 1990, 1991 and 1997. Data were reduced using the SIPRAN software developed at SAO (Somov 1986). Part of the 1990 spectroscopic data was already preliminarily discussed in Bonnet-Bidaud et al. (1992).
In 1997, spectroscopic observations were also obtained at the 1.93m telescope of the Haute-Provence Observatory (OHP), equipped with the Carelec spectrograph (Lemaitre et al. 1990).
The log of the observations is presented in Table 1 with the Heliocentric Julian Dates (HJD) corresponding to the start of the exposures. In the following, the orbital/rotational phases have been computed according to the ephemeris derived by S. Tapia and quoted in Heise & Verbunt (1988), where HJD($`\varphi =0`$)= 2443014.76614(4)+0.128927041(5)E, $`\varphi =0`$ corresponding to the maximum of linear polarization.
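For convenience, the phase of any of the observation times in Table 1 follows directly from the quoted ephemeris; a minimal sketch (the function name and the example HJD are ours):

```python
def am_her_phase(hjd):
    """Orbital/rotational phase from the Tapia ephemeris quoted above:
    HJD(phi = 0) = 2443014.76614 + 0.128927041 * E, phi = 0 at maximum linear polarization."""
    t0, period = 2443014.76614, 0.128927041  # HJD of phase zero, period in days
    return ((hjd - t0) / period) % 1.0

print(am_her_phase(2448504.0))  # an arbitrary 1991-epoch HJD; returns a phase in [0, 1)
```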
## 3 Analysis and results
### 3.1 Photometry and polarimetry
Figures 1 and 2 show the UBVRI light curves covering more than one orbital cycle in the different observations. The source is seen at a mean V magnitude of $`\sim `$15.1 and $`\sim `$15.2, respectively in 1991 and 1997, consistent with the level previously reported during low states of the star (Szkody et al. 1982, Bailey et al. 1988). Table 2 gives the mean magnitude and dispersion in the different bands and at the different epochs.
In 1991, the shape of the modulation is typical of a low state with a broad hump in U and B around phase 0.6, antiphased with a minimum in R and I. A secondary minimum in R and I is also visible around phase 0.2. The V light curve is flat but with a greater dispersion. Superimposed on the smooth modulation, clear rapid (5-10 min) flux enhancements are also seen, particularly visible in R and I bands. Characteristics of these flares (marked with numbers on Figure 1) are analysed below. In 1997, the source is less active, though at a similar brightness level with no evidence of flares. For all observing nights, the modulation is nearly absent in all bands and no strong flux variations are seen, except for a dip around phase 0.1 in I on July 1 and a $`\sim `$0.3 mag broad increase in the R-band near phase 0 on July 30. BAO light curves obtained on July 3 and 4 also show no modulation.
Figures 3a and b show the corresponding circular polarization in the R and I bands. A significant polarization is seen in 1991, with a mean level in the R / I bands of -3.9% /-2.6% and -4.5% /-4.0% on Sept 4 and 5 respectively, indicating a significant residual accretion. Negative polarization reaches a maximum of up to -12% around phase 0.4 and 0.7 and a minimum near 0% around phase 0.1, comparable to what is usually observed during normal high states of the source (Bailey & Axon 1981). The polarization, attributed to cyclotron radiation, is usually restricted to the infrared bands during low state observations and shifted to higher frequencies only during high states (Bailey et al. 1988). It is usually absent in the R and I bands during low states (Shakhovskoy et al. 1993, Silber et al. 1996), though occasional detections have been made (Latham et al. 1981, Shakhovskoy et al. 1992). In 1997, at a comparable brightness level, no significant polarization is detected in the same R and I bands, with mean values of (0.8$`\pm `$1.9)% and (0.7$`\pm `$1.8)% for July 1 and (1.9$`\pm `$4.2)% and (0.5$`\pm `$2.3)% for July 30, respectively, indicating that the cyclotron emission has become negligible (Shakhovskoy et al. 1992).
The peculiar flaring variability of the source seen in 1991 has been investigated by computing the characteristics of the flares marked in Figure 1. Figure 4a shows the colour-colour diagram of the AM Her system during the quiescent state (outside flares) and at the peak of the flares (with the size of the symbols indicative of the flare intensity). For comparison, the colours of the very large blue flare observed by Shakhovskoy et al. (1993) during a 1992 low state are also shown. The AM Her colours during quiescence are remarkably similar at the different epochs. The flares are clearly distributed into three different categories: moderate flares (F3, F5, F6), during which the system stays approximately the same colour as in the quiescent state or becomes slightly bluer; intense flares (F1, F2, F4), during which the colours become strongly red; and the very intense 1992 flare, which clearly peaked in the blue.
The energy distribution of the flares has been evaluated by computing, in the different bands, the difference between the magnitude at peak and the local quiescent magnitude, estimated from polynomial fits through the light curve excluding flares. The magnitude excess due to the flares is reported in Table 3 and the corresponding spectra of selected flares are given in Figure 4b. The slope of the best linear fit to the (log F<sub>ν</sub>-log $`\nu `$) distribution is also given in Table 3. As indicated by the slope, the strongest flares appear clearly red, peaking around the R/I bands with a slope $`\nu ^{-(2-3)}`$, while the less intense ones are bluer ($`\nu ^{-1}`$). We note that the polarization during the largest flares is significant in both the R and I bands (see open symbols in Fig. 3a and b). The properties of the flares are further discussed in Section 4.
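For illustration, the conversion from magnitude excesses to a spectral slope can be sketched as follows; all numerical inputs below (band wavelengths, quiescent fluxes, magnitude excesses) are placeholders rather than the measured values of Table 3:

```python
import numpy as np

lam = np.array([3600., 4400., 5500., 7000., 9000.])   # UBVRI wavelengths (A)
nu = 3e18 / lam                                       # frequencies (Hz)

f_quiet = np.array([1.0, 1.5, 2.0, 2.5, 2.8])         # placeholder quiescent F_nu
dm = np.array([0.10, 0.15, 0.25, 0.45, 0.40])         # placeholder mag excesses

# Flux excess of the flare alone: F_peak - F_quiet = F_quiet*(10**(0.4*dm) - 1)
f_flare = f_quiet * (10**(0.4 * dm) - 1.0)

alpha = np.polyfit(np.log10(nu), np.log10(f_flare), 1)[0]
print(f"spectral slope alpha (F_nu ~ nu^alpha): {alpha:.2f}")
```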
Quasi-periodic oscillations (QPO) have been searched for using the 0.1 s resolution photometric data obtained in 1991. While no significant QPOs were found on 1991 September 4, QPOs are clearly detected on 1991 September 5, with an amplitude of 8.6% and a period of 6.6 min (397 s) during 30 minutes covering the (0.97-1.14) phase interval. No QPOs were observed in 1997.
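A minimal sketch of such a period search, using a plain FFT power spectrum of an evenly sampled light curve (the synthetic signal below only mimics the detected QPO and is not our data):

```python
import numpy as np

dt = 0.1                                       # 0.1 s sampling, as in 1991
t = np.arange(0.0, 1800.0, dt)                 # a 30-minute stretch
flux = 1.0 + 0.086 * np.sin(2*np.pi*t/397.0)   # toy 8.6% QPO at 397 s
flux += np.random.normal(0.0, 0.05, t.size)    # placeholder photon noise

power = np.abs(np.fft.rfft(flux - flux.mean()))**2
freq = np.fft.rfftfreq(t.size, dt)
print("peak period: %.0f s" % (1.0/freq[1:][np.argmax(power[1:])]))
```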
### 3.2 Spectroscopy
The spectra obtained during the 1990, 1991 and 1997 low states have been averaged to produce a mean normalized spectrum representative of each epoch (Figure 5). The spectra were reduced by standard procedures using the MIDAS-ESO package, and fluxes have been normalized by dividing by a continuum fitted through selected points free of line features. All spectra show clear evidence of the Balmer lines in emission with their associated Zeeman components in absorption, similar to what was previously observed (Latham et al. 1981, Schmidt et al. 1981, Young et al. 1981, Silber et al. 1996). The equivalent widths of the emission lines appear to vary in the different spectra (see Table 4). The HeII 4686 line is only detectable in 1991. At this epoch, the spectrum changes drastically from one day to the next, with a sudden appearance of this high excitation line and an equivalent width changing from $`(0.4\pm 0.5)`$Å to $`(3.6\pm 0.5)`$Å. On 1991 Sept. 5, though the system is at the same mean low level (m<sub>V</sub> = 15.1) and the Zeeman absorption lines, typical of the low state, are still clearly present, the emission lines are strong, with H<sub>β</sub> and H<sub>γ</sub> EWs of 19Å and 14Å respectively, within a factor 1.5-2 of the high state values. This suggests a significant residual accretion.
The location and depth of the Zeeman absorption features are best shown in Figure 6, where the means of all low and medium resolution spectra are displayed on an extended scale, with the emission components cut out. The two sets of data appear very similar with, at higher resolution, the evidence of clear and rather sharply defined absorptions around 4080, 4300, 4650 and 4820Å. To identify these features, synthetic idealized spectra have been constructed using the Zeeman wavelengths and oscillator strengths tabulated by Kemic (1974) for different magnetic fields in the range of 10-30MG, as expected for AM Her. The intensities of the lines were taken to be proportional to the oscillator strengths in the Milne-Eddington approximation (see Latter et al. 1987) and the spectra were interpolated with respect to the field strength to provide a complete grid of comparison spectra. The computed “theoretical” spectra were further convolved with an instrumental response corresponding to a spectral resolution of 2Å.
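The last step, the instrumental broadening, is a convolution with a Gaussian of the appropriate FWHM; the following sketch illustrates the operation, with line positions and strengths as rough placeholders for the Kemic (1974) tabulation:

```python
import numpy as np

wl = np.arange(4000.0, 5000.0, 0.5)            # wavelength grid (Angstrom)
lines = [(4080., 0.3), (4300., 0.5), (4650., 0.4), (4820., 0.6)]  # placeholders

spec = np.ones_like(wl)                        # normalized continuum
for lam0, strength in lines:                   # idealized absorption components
    spec -= strength * np.exp(-0.5*((wl - lam0)/0.5)**2)

sigma = 2.0 / 2.355                            # 2 A FWHM -> Gaussian sigma
x = np.arange(-10.0, 10.0 + 0.5, 0.5)
kern = np.exp(-0.5*(x/sigma)**2)
spec_obs = np.convolve(spec - 1.0, kern/kern.sum(), mode="same") + 1.0
```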
The inspection of the synthetic spectra reveals that the most significant features of Fig. 6 correspond to the $`\sigma `$ and $`\pi `$ components of the H<sub>β</sub> and H<sub>γ</sub> hydrogen lines. In the range of B fields considered, these lines are clearly divided into “stationary” lines, only weakly variable in position, such as H<sub>β</sub> $`\sigma ^+`$ and $`\pi `$, and “non-stationary” lines, strongly or moderately variable in position, such as H<sub>β</sub> $`\sigma ^{}`$ and H<sub>γ</sub> $`\pi `$ (Angel 1978). The non-stationary lines allow a precise determination of the magnetic field strength. Fig. 6 shows the B=12.5MG synthetic spectrum, which best describes the data. Most features are accurately reproduced, such as the split components of H<sub>β</sub> and H<sub>γ</sub>. The accuracy of the B determination is typically $`\pm 0.5`$MG, based on a precision better than $``$ 10Å in the position of the fast varying H<sub>β</sub> $`\sigma ^{}`$ feature.
More careful inspection reveals two additional features at $``$ 4400Å and $``$ 4775Å that are not reproduced by the synthetic spectrum. Interestingly enough, the $``$ 4775Å feature, which is clearly visible in Fig. 6 as a left shoulder of the H<sub>β</sub> $`\pi `$ feature, can be reproduced by an H<sub>β</sub> $`\pi `$ component from a significantly higher field ($``$ 17MG). The corresponding H<sub>β</sub> $`\sigma ^{}`$ component would then be shifted to $``$ 4560Å, where a small feature is indeed present in the spectrum, so we cannot exclude the possible superposition of this higher field. A similar additional higher field component has already been reported by Latham et al. (1981). We also investigated the possible contributions from helium lines. For fields in the polar range, Zeeman helium stationary components are expected from the HeI 4471Å line at 4320, 4420 and 4530Å, not in accordance with the observed features.
We looked for possible variations of the main Zeeman features with orbital phase using our longest set of data, from 1997 July 29 at low resolution. The spectrum appears remarkably stable in phase, with only possible minor variations in intensity around the H<sub>β</sub> $`\sigma ^{}`$ line.
## 4 Discussion
### 4.1 The AM Her magnetic field
The best magnetic field derived from the Zeeman absorption components in the AM Her optical spectrum during the different episodes of low state in 1990, 1991 and 1997 is B=(12.5$`\pm `$0.5)MG, in accordance with what was previously reported but with better accuracy (Latham et al. 1981, Schmidt et al. 1981, Young et al. 1981, Silber et al. 1996). The Zeeman split absorption components in polar low states are usually assumed to originate in the hot photosphere of the magnetic white dwarf, whose contribution becomes dominant when the accretion ceases.
The relatively good accuracy in the magnetic field determination is made possible by the stability of “non-stationary” components such as H<sub>β</sub> $`\sigma ^{}`$, which are very sensitive to the field strength. Although the spectra are averaged over different orbital phase intervals and different epochs, the presence of such a stable feature is an indication of a rather homogeneous field. This situation is surprising since, in a simple dipole model, a range of 2 is expected between the polar and equatorial field, and a spread of B-values is therefore expected for a hemisphere seen at a given inclination (Saffer et al. 1989). The observed restricted range would therefore imply a nearly equator-on view with an inclination close to 90° and an observed B field peaking at B<sub>polar</sub>/2.
### 4.2 Unstable accretion during the low states
The AM Her optical low states reported in this paper, though all showing a similar optical brightness with m<sub>v</sub>=(15.1-15.4), display different overall characteristics. Of particular interest is the behaviour of the source in 1991, when both a significant polarization and flaring activity are seen, contrary to 1990 and 1997. Within the brightness history of the source (Mattei J., AAVSO, private communication), it may be significant that the 1991 observation is located toward the end of a rather slow decline from high to low state, while the 1990 and 1997 observations are both included in a prolonged stable low state, which had already lasted for 5 and 2 months respectively.
The major flares observed in 1991 are predominantly red when compared to the quiescent state. This and the presence of a significant polarization point toward residual unstable accretion at that time. The presence of low-level temporary QPOs ($``$5 min), usually observed during intermediate high states (Bonnet-Bidaud et al. 1992), and the reappearance of high excitation lines on Sept 5, 1991 further strengthen this conclusion.
The comparison of this flaring activity with the major flare observed in 1992, also during a low state, is interesting. It has been proposed that the sharp 1992 event is due to a stellar flare originating from a magnetically active secondary (Shakhovskoy et al. 1993). The shape as well as the colour changes of the flare were found consistent with what is observed during strong flares from red dwarfs (Beskrovnaya et al. 1996). The total energy in the flare, though at the very extreme upper end of what is usually observed in nearby UV Ceti systems, is comparable to the more energetic flares detected in open clusters (Shakhovskaya 1989). However, flares of this type are usually fairly repetitive, while such an event has been observed only once in AM Her although the source has been intensively monitored. We note however that a flare of this amplitude may be lost during high states. We find it significant that the maximum magnitude of the flare (m<sub>v</sub>$``$12.5) is of the order of the high state level, so that this episode may alternatively be interpreted as an unstable accretion event. The very peculiar colours of this flare (see Fig. 4a) may result in this case from inhomogeneous accretion, with blobs buried in the white dwarf atmosphere if the density is high enough.
Following Frank et al. (1988), we estimate the critical density for buried shocks by equating the accreted gas ram pressure with the atmosphere pressure at an optical depth sufficient for efficient reprocessing, giving
$`\rho _{cr}=1.42M_{wd}^2\left[R_{wd}^3T_{wd}\mathrm{ln}(97R_{wd}\rho _{cr})\right]^{-1}`$
where $`\rho _{cr}`$ is the critical density in units of 10<sup>-6</sup> g cm<sup>-3</sup> and $`M_{wd}`$, R<sub>wd</sub> and T<sub>wd</sub> are the mass, radius and temperature of the white dwarf in units of M<sub>⊙</sub>, 10<sup>9</sup>cm and 10<sup>5</sup>K respectively. Assuming T<sub>wd</sub>$``$ 0.2 and $`M_{wd}`$=0.6 (see Gänsicke et al. 1995), with a corresponding R<sub>wd</sub>=0.9 (Nauenberg 1972), gives a critical density of $`\rho _{cr}`$ = 0.85 10<sup>-6</sup>g cm<sup>-3</sup>. This has to be compared with the mean accreted density during steady high states, $`\rho _{high}`$ = $`\dot{M}`$/(v<sub>ff</sub>A), with a typical accretion rate $`\dot{M}\sim 10^{16}`$ g s<sup>-1</sup>, v<sub>ff</sub> the free-fall velocity $`\sim 10^8`$ cm s<sup>-1</sup> and A the accreting area $`\sim 10^{16}`$ cm<sup>2</sup>, which yields $`\rho _{high}`$ = 10<sup>-8</sup>g cm<sup>-3</sup>.
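Since $`\rho _{cr}`$ appears on both sides, the equation has to be solved iteratively; a minimal sketch reproducing the numbers above (small differences reflect the rounding of the input parameters):

```python
import math

M_wd, R_wd, T_wd = 0.6, 0.9, 0.2     # mass, radius, temperature (units as above)

rho = 1.0                            # initial guess, units of 1e-6 g/cm3
for _ in range(50):                  # fixed-point iteration converges quickly
    rho = 1.42 * M_wd**2 / (R_wd**3 * T_wd * math.log(97.0 * R_wd * rho))
print(f"rho_cr ~ {rho:.2f}e-6 g/cm3")          # ~0.8e-6 g/cm3

Mdot, v_ff, A = 1e16, 1e8, 1e16      # high-state accretion parameters
print(f"rho_high ~ {Mdot/(v_ff*A):.0e} g/cm3") # ~1e-8 g/cm3
```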
To achieve blobby accretion, the density in the flare then has to be $``$10-100 times that of the steady accretion. This may easily be achieved if, for instance, the accretion during the temporary event takes place onto an accretion spot whose radius is reduced by a factor (3-10), due to particular unstable capture conditions by the magnetic field at that time. The emerging radiation from such a blobby accretion has not yet been computed accurately, but the radiation is expected to be thermalized inside the atmosphere and to be radiated at a temperature closer to the white dwarf blackbody (Kuijpers & Pringle 1982). We then expect the optical spectrum of the flare to follow roughly a ($`\nu ^{+(1-2)}`$) dependency, in accordance with the observed colours. Such flares should be mostly visible in soft X-rays.
Conversely, the smaller flares observed during low states may correspond to small accretion events during which the accretion rate only temporarily increased, leading to the transient presence of a standard shock above the white dwarf and an associated cyclotron emission. Such small scale flares, like those observed in 1991, indeed show the polarization and red colours ($`\nu ^{-(2-3)}`$) expected from the optically thin part of the cyclotron emission. During typical low states, outside flares, the low density accretion flow may lead to the absence of a shock and heat the upper atmosphere by Coulomb collisions, leading to the so-called bombardment solution (Woelk & Beuermann 1992). In this last case, the bluer colours of the optical radiation during quiescence can easily be explained by the decreased (1991) or negligible (1997) cyclotron emission. This picture, derived from the optical variability study of AM Her, is in accordance with the conclusions drawn from the low state X-ray characteristics of the source (Ramsay et al. 1995). The reason for the unstable accretion during low states, leading to both large accretion events and/or small accretion instabilities, is still unresolved, as is the exact mechanism responsible for the low states.
## 5 Conclusion
The study of the temporal and spectral characteristics of AM Her during low states at different periods allowed several conclusions to be drawn.
The Zeeman spectral features shown by the source are surprisingly stable, although obtained through different parts of the orbital cycle and therefore with different orientations with respect to the suspected dipole magnetic field. The magnetic field strength derived from the position of “non-stationary” lines is (12.5$`\pm `$0.5) MG, which could represent an averaged field seen over the white dwarf surface. Additional features are seen which may originate from a higher field region.
The temporal optical variability of AM Her during low states is very rich, displaying occasional large blue flare events as well as repetitive smaller amplitude red flares. It is shown that the characteristics of all these flares can be explained by accretion events of different amplitudes. The large and unique event observed in 1992, though consistent with red dwarf flares, can also be tentatively explained by a large increase of the accretion rate coupled with a reduced accretion area, which can increase the density sufficiently to produce a buried-shock unstable accretion event. The more frequent smaller amplitude flares are interpreted instead as small variable increases of the accretion rate.
# On the peculiar red clump morphology in the open clusters NGC 752 and NGC 7789
## 1 Introduction
The clump of red giants is a remarkable feature in the colour-magnitude diagrams (CMD) of intermediate-age and old open clusters (Cannon rdc (1970)). It is defined by stars in the stage of core helium burning (CHeB).
In the clusters for which the non-member field stars and binaries have been identified and excluded, the red clump may occupy a very small region of the CMD. A good example of this case is given by the 4-Gyr old cluster M 67: its 6 clump stars differ in colour by less than 0.01 mag in $`B-V`$, and 0.1 mag in $`V`$ (see e.g. Montgomery et al. mmj (1993)). This small spread of the clump can be easily understood as the result of having low-mass core He-burning stars of very similar masses in this cluster.
However, some clusters clearly present a more complex clump structure. One of the best examples is given by NGC 752: Mermilliod et al. (mmlm (1998)) recently pointed out that it presents a kind of dual clump. This is shown in detail in Fig. 5 of Mermilliod et al. (mmlm (1998)): there we can notice the presence of the main clump, centered at $`B-V=1.01`$, $`V=9.0`$ and composed of 8 member stars, and a distribution of 3 or 4 fainter stars, extending down by about 0.5 mag relative to this main clump. Importantly, all stars plotted are members of the cluster with probability $`P>93`$%, and photometric errors are lower than 0.015 in $`V`$ and 0.013 in the colours. Therefore, the structure seen in NGC 752 is real, and not an artefact of observational uncertainties.
On the other hand, recent works suggest that clumps with a faint extension may be a common feature in the field of nearby galaxies. In a few words, population synthesis models of galaxy fields predict that a secondary red clump may be formed at about $`0.3-0.4`$ mag below the main one, containing the CHeB stars which are just massive enough to start burning helium in non-degenerate conditions. Girardi et al. ggws (1998) first suggested the presence of this feature in the CMD derived from the Hipparcos data-base (Perryman et al. macp (1997); ESA esa (1997)). The subject was later extensively discussed by Girardi (lg99 (1999)). Bica et al. (bgdc (1998)) and Piatti et al. (pgbc (1999)) recently presented clear evidence showing that this feature is present in the LMC.
Could this feature provide an explanation also for the peculiar morphology of the clump in NGC 752? Is a similar clump morphology observed in other clusters as well? These are the main questions we address in this paper.
## 2 The clusters
First of all, we examined the available data for galactic open clusters, in order to identify whether other clusters have clumps with the same general appearance as NGC 752. We used the extended database of accurate photometry and radial velocity data compiled by Mermilliod and collaborators. These data allow us to select the member stars and avoid the binaries in each cluster. In this way, clean CMDs can be produced. Indeed, Mermilliod & Mayor (mm89 (1989), mm90 (1990)) already noticed the presence of a number of stars below the clump in some clusters. The case of NGC 752, however, is remarkable for the high fraction of stars located below the “main clump” level, which causes its apparent bi-modality.
We searched for clusters with ages similar to that of NGC 752 ($`9.1<\mathrm{log}t<9.3`$). Several candidates were found. NGC 3680 and IC 4651 (Mermilliod et al. manm (1995)) do not present the same characteristics as NGC 752 does. The red giant clump in NGC 3680 is rather concentrated, with little scatter in magnitude and colours, while the morphology of the IC 4651 clump is more complex. NGC 2158 is a rich and very interesting cluster, probably also showing a complex structure of the clump region. However, most data are photographic and there is presently no kinematical membership determination to identify the true members. A CCD study paying attention to the red giants would be worthwhile. We shall therefore refrain from using this cluster. The fourth cluster is NGC 7789, for which BV CCD data have been published by Martinez Roger et al. (mrpc (1994)) and Jahn et al. (jkr (1995)). Gim et al. (1998a ) have published an extensive radial-velocity study which permits us to reject the non-member stars and identify the binaries. Membership probabilities from proper motions for NGC 7789 have been published by McNamara & Solomon (mcns (1981)), so that the membership of the red giants is rather well defined. Most other clusters containing numerous red giants and for which good photometric data are available are either younger (about 1 Gyr) and have a large clump with a few stars below, or are older and have a more or less well developed giant branch.
As can be seen in Fig. 1, the red clump in NGC 7789 presents a tail of faint stars extending down to 0.4 mag below the main concentration of clump stars. Again, the fainter clump stars are observed to be slightly bluer than the main clump concentration.
A comparison between NGC 752 and NGC 7789 clearly shows that both clusters have similar ages: suffice it to notice that the main sequence termination (TAMS) and the red clump are observed at the same colours in both clusters ($`(B-V)_o=0.5`$ and $`(B-V)_o=1.0`$, respectively), and that their magnitude difference is also very similar (of about $`\delta V=0.5`$ mag) in both clusters. Also, we recall that both clusters have metallicities very close to each other: according to Friel & Janes (fj83 (1983)), $`\text{[Fe/H]}=-0.16\pm 0.05`$ for NGC 752 and $`\text{[Fe/H]}=-0.26\pm 0.06`$ for NGC 7789.
In Table 1, we list a limited number of age estimates for both clusters, in which the age-dating was based on models with overshooting.
It turns out that both NGC 752 and NGC 7789 should have an age of about 1.5 Gyr. These are indicative ages, which will be useful in the analysis of the following sections.
## 3 The models
In order to describe the evolution of clump stars as a function of cluster parameters, we make use of the stellar evolutionary tracks and isochrones from Girardi et al. (gbbc (2000)). This data-base of stellar models contains CHeB stars computed for a large number of initial masses, providing a detailed description of the position of these stars in both HR and colour-magnitude diagrams. Figure 2 shows the location of the models for solar metallicity ($`Z=0.019`$) in the $`M_V`$ versus $`B-V`$ diagram. It can be noticed that CHeB models of varying mass distribute along a well defined sequence in this plot. This sequence is relatively narrow if we consider the initial fraction of 70% of the CHeB lifetime, where most of the CHeB stars are expected to be found. The remaining 30% fraction, instead, occupies a broad distribution in the diagram. Importantly, the sequence of CHeB models presents a well-defined lower boundary, which is also drawn in the plot. The same boundary line is shown for lower values of metallicity ($`Z=0.008`$ and $`Z=0.004`$), thus showing how the sequences of CHeB models get bluer as the metallicity decreases.
When clusters become older, we expect to find CHeB stars of lower and lower masses. Therefore, the position of clump stars in an ageing cluster roughly follows the sequence shown in Fig. 2, going from the upper left to the bottom right of the diagram. Along this sequence, however, the clump luminosity passes through a temporary minimum when the turn-off mass is of about 2 $`M_{\odot }`$. This effect is illustrated in Fig. 3, in which we simulate clusters with $`Z=0.019`$ (i.e. solar metallicity) and ages between 1 and 2 Gyr. This age interval encompasses the probable ages of NGC 752 and NGC 7789.
In the sequence of simulations, the clump of CHeB stars decreases in luminosity as the cluster ages, up to about 1.26 Gyr. Then, up to an age of 1.56 Gyr, this luminosity increases by as much as 0.4 mag, remaining almost constant afterwards. This increase in luminosity in a relatively short timescale corresponds to the age (and stellar initial mass) at which the CHeB switches from quiescent ignition, to a mildly explosive ignition (the He-flash) inside an electron degenerate core. The increase in luminosity is mostly due to the large increase in the core mass required to ignite helium in a degenerate core. The basic theory of this transition is detailed in the classical work by Sweigart et al. (sgr (1990)).
Girardi et al. (ggws (1998)) and Girardi (lg99 (1999)) already explored the consequences of this transition in the CMDs of galactic fields. The most impressive effect they found is that the stars with ages of $`\sim 1`$ Gyr may define a “secondary clump” feature extending below the main clump of red giants. Of course, the suggestion that the same feature may be present in star clusters like NGC 752 and NGC 7789 is immediate.
In fact, the age at which the transition occurs in the models is clearly in agreement with what is observed in the clusters: at 1.5 Gyr, the main sequence and red clump are observed at $`(B-V)_o=0.5`$ and $`(B-V)_o=1.0`$, respectively, and their magnitude difference is of $`\delta V=0.5`$ mag. These numbers are indistinguishable from those observed in the CMDs of NGC 752 and NGC 7789.
However, it is also clear that the models of single-age, single-metallicity stellar populations shown in Fig. 3 produce neither “dual clumps” nor clumps with fainter tails, as in the case of the galaxy models. What the clump models indicate is that the clump has an intrinsically elongated structure for ages lower than 1.2 Gyr, getting more concentrated at later ages, when the clump gets brighter. Therefore, they do not provide an obvious explanation for the clump morphologies observed in NGC 7789 and NGC 752.
## 4 Observed red giant clumps
The theoretical ZAHB and individual evolutionary tracks have been plotted over the observed red giants in NGC 7789 (Fig. 4a). Due to the shape of the red giant locus, the position of the ZAHB is rather obvious. The diagram can be interpreted as follows: a number of stars, with masses between 1.95 and 1.75 $`M_{\odot }`$, are close to or on the ZAHB, while other stars have already evolved further from the ZAHB. The bulk of the red giants has masses between 1.7 and 1.8 $`M_{\odot }`$. Open circles are known binaries. Some do show the effects of the secondaries because their colours are bluer, while several are right in the middle of the “clump”. The theoretical shape of the ZAHB gives a very good representation of the observed morphology of the red giant clump.
The resulting picture is that in the core helium-burning phase there is a spread in masses and ages among the red giants. It is evident that stars do not arrive at the same time in the He-burning phase, and the observed morphology is clearly not compatible with the “classical” paradigm of evolution of single-star, single-mass isochrones. We observe, as is well known from the evolution on the main sequence, stars on the ZAHB and stars leaving this phase toward the asymptotic giant branch. What is surprising is the size of the observed dispersion in mass on the ZAHB.
The individual evolutionary tracks make it possible to understand the vertical dispersion and to assign it to the evolution away from the ZAHB. However, the solid part of the individual evolutionary tracks, covering 70% of the core helium-burning lifetime, is mostly limited to the very beginning of the tracks and seems to be a little too short with respect to the observed distribution of the stars.
The stars with $`12.1<V<12.5`$ and $`B-V\sim 1.4`$ form a bump at the exact position predicted by the models (see Fig. 1a). It corresponds to the phase when the H-burning shell, moving outward, encounters the H-discontinuity resulting from the first dredge-up. These stars are therefore not in an advanced core He-burning stage, but mark a pause in the ascent of the red giant branch.
The case of NGC 752 (Fig. 4b) can be understood in the same context. Even if there are fewer red giants, the explanation seems quite convincing. The “classical” clump is well marked, and the fainter stars, which define the so-called second clump, are pretty well aligned along the ZAHB. Two models have been plotted, for solar metallicity ($`Z=0.019`$) and half solar ($`Z=0.008`$). If the track is fitted to the base of the clump, both curves reproduce the positions of the points equally well.
We have looked for other clusters to extend the interpretation of the clump morphology to further objects. One beautiful example has been found with NGC 2660, with the CCD photometry of Sandrelli et al. (sbtm (1999)). The striking shape of the red giant locus is fairly well reproduced by the ZAHB for $`Z=0.004`$ (see Fig. 5a). The distribution of the points leads to masses between 2.2 and $`1.9`$ $`M_{\odot }`$. IC 1311 also presents a clump with a vertical sequence of stars, but in the absence of a membership criterion, it is difficult to separate the cluster and field stars.
On the oldest side of the age range, NGC 2204 shows a well defined and compact clump which contains stars on the ZAHB and stars slightly evolved (Fig. 5b).
This small sample of representative clusters shows that the clump morphology changes with age and that the shape of the ZAHB predicted by the models, at various chemical compositions, reproduces the observed patterns well. Still younger open clusters, with ages lower than 1 Gyr, have more massive red giants which do not evolve through the helium flash, and the morphology is again different. It seems that a single isochrone is also not able to reproduce the complexity of the clump structure.
## 5 Interpretation
After suggesting an interpretation of the red clump morphology in these clusters, it is convenient to further discuss details of the models, in order to clarify whether we are really facing strange evolutionary behaviours.
First of all, it is interesting to consider the natural dispersion of mass in clump stars of different ages. Fig. 6 presents the locus of stars at both the beginning and end of the CHeB stage, on the age-initial mass diagram. These two lines delimit the region allowed for clump stars. Singling out a single age for a cluster (i.e. a horizontal line), we immediately identify the mass range of its clump stars. It is interesting to notice that, at ages lower than about 1 Gyr, this mass range is about 0.2 $`M_{\odot }`$ wide. When we get to a certain age value (about 1.4 Gyr), it gets suddenly narrower, to about 0.1 $`M_{\odot }`$. This effect is the simple result from the sudden reduction of the CHeB lifetime, by a factor of about 2.5, which follows the onset of electron-degenerate cores: this lifetime is of about $`10^8`$ yr for low mass stars, gets to a maximum of about $`2.5\times 10^8`$ yr at the transition mass $`M_{\mathrm{Hef}}`$, and then decreases monotonically for stars of higher mass (see Girardi & Bertelli gb98 (1998); Girardi lg99 (1999)). This particular behaviour simply reflects the different core masses necessary for igniting helium in stars of different masses.
Fig. 6 then helps to understand the origin of the elongated clumps noticed in the first panels of Fig. 3: they result from the higher dispersion of clump masses found in clusters before the transition.
Of course, there is also an age range in which we find both CHeB stars which ignited helium in degenerate conditions and those which have done it quiescently. This is detailed in the lower-left diagram of Fig. 6. One can notice that, whereas the main sequence lifetime increases monotonically as we pass from $`M_{\mathrm{Hef}}+\delta M`$ to $`M_{\mathrm{Hef}}-\delta M`$, the CHeB (clump) lifetime roughly halves as we pass from $`M_{\mathrm{Hef}}+\delta M`$ to $`M_{\mathrm{Hef}}-\delta M`$. At a given age close to $`t(M_{\mathrm{Hef}})`$, we can then have the simultaneous presence of clump stars with $`M_{\mathrm{Hef}}+\delta M`$ at the end of their CHeB evolution, and stars with $`M_{\mathrm{Hef}}-\delta M`$ at the beginning of the same phase. The interest of this situation is that these two kinds of stars (with $`M_{\mathrm{Hef}}+\delta M`$ and $`M_{\mathrm{Hef}}-\delta M`$) have CHeB initial luminosities differing by up to 0.4 mag, thus providing a good hint of the origin of dual clumps. The coexistence of both kinds of CHeB stars lasts for a maximum period of 0.2 Gyr. Interestingly, in some cases it may happen that stars in both regimes (degenerate and non-degenerate He ignition) are, for very short age intervals, distributed over non-contiguous age intervals. In the Girardi et al. (gbbc (2000)) models, this happens for the $`Z=0.008`$ tracks, and for those with $`Z=0.019`$ computed without overshooting (see Fig. 6), for which the mass vs. age relation for stars at the late stages of CHeB is non-monotonic in the vicinity of $`M_{\mathrm{Hef}}`$. In the context of the present investigation, this is of course the most interesting situation, because it could alone generate a dual clump in a single isochrone, without any artificial assumption about the dispersion of age and mass of clump stars.
By means of simulations like those shown in Fig. 3, however, we have verified that dual clumps do not appear due to this effect of mass dispersion, because the clump stars of higher mass are found to be always more evolved than those of lower mass, and hence already departed from the ZAHB to much higher luminosities. Thus, at the age they are observed simultaneously with the brighter clump of the lower-mass stars, they no longer represent clump stars of lower luminosity.
On the other hand, we should notice that the possibility of having dual clumps in single isochrones may depend essentially on the rate at which the clump gets brighter and shorter-lived with stellar mass, or, equivalently, on the rate at which the core mass at the He-flash increases across the transition mass $`M_{\mathrm{Hef}}`$. The present models (Girardi et al. gbbc (2000)) may not present the level of detail necessary to explore this possibility further, due to their limited mass resolution of about 0.05 $`M_{\odot }`$. This means that, at timescales faster than 0.06 Gyr, our models reflect the result of interpolating between contiguous evolutionary tracks, rather than the real evolutionary behaviour of the stars. An improvement by a factor of 2 in the model resolution would be desirable.
Other subtle effects can also be invoked to generate the dual clump features in the models.
One of them is the presence of a small dispersion of ages among the cluster stars. It would be reflected in a larger range of masses for clump stars. Such an effect is simulated in Fig. 7, in which we present synthetic CMDs computed by assuming constant star formation in the age intervals $`1.12-1.26`$, $`1.26-1.41`$, and $`1.12-1.41`$ Gyr. These age intervals were chosen so that we have the presence of stars in the transition region between degenerate and non-degenerate helium ignition. It can be noticed that the clump is only slightly broadened in Fig. 7, when compared to the simulations of Fig. 3. Moreover, notice that the assumption of an age dispersion implies also that the turn-off region of the CMD is slightly broadened, by at most 0.10 mag in colour, if compared to the single-age simulations of Fig. 3.
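The qualitative effect of such an age spread on the clump can be mimicked with a toy two-level sampling; this is only a sketch: the transition age of 1.30 Gyr, the 0.4 mag offset quoted in Sect. 3, and the noise level are illustrative stand-ins for the full interpolation in the track grid:

```python
import numpy as np

rng = np.random.default_rng(1)
ages = rng.uniform(1.12, 1.41, 5000)   # Gyr, constant star formation

# Toy two-level clump: younger stars ignite He non-degenerately and sit
# ~0.4 mag fainter; the transition age of 1.30 Gyr is purely illustrative.
t_tr = 1.30
dM_V = np.where(ages < t_tr, 0.4, 0.0) + rng.normal(0.0, 0.05, ages.size)

print("fraction on the faint clump level:", round((ages < t_tr).mean(), 2))
```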
Similar effects, without however any broadening of the MS, can be obtained by assuming differential mass-loss by evolved stars.
It is worth remarking that an age spread of $`\sim 0.1`$ Gyr, as assumed in Fig. 7, would represent an extreme case. A value of 0.01 Gyr would be a better upper limit to the age spread in a cluster, according to estimates based on the pre-main sequences of Orion and other very young open clusters (Prosser et al. pshs (1994); Hillenbrand lah97 (1997)).
## 6 Conclusions
In this paper, we suggest that the peculiar CMD morphology of the red clump in the open clusters NGC 752 and NGC 7789 may indicate the presence of stars which ignited helium under both degenerate and non-degenerate conditions. This interpretation is suggested by the coincidence between the ages of the clusters (as derived from the main CMD features) and the ages at which evolutionary models undergo this main evolutionary transition. We remark that the event we are referring to is equivalent to the so-called “RGB phase transition” mentioned in Renzini & Buzzoni (rb86 (1986)) and Sweigart et al. (sgr (1990)).
This situation, however, cannot be reproduced by models which assume a single isochrone for the clusters, because the mass dispersion of clump stars in any simple model can hardly be larger than 0.2 $`M_{\odot }`$. Moreover, the tendency found in the simple cluster models is that clump stars of higher mass are more evolved away from the ZAHB, and hence invariably more luminous than stars of lower masses. This happens even though, in the vicinity of the transition mass $`M_{\mathrm{Hef}}`$, more massive clump stars start to burn helium at luminosities up to 0.4 mag fainter than the less massive ones.
Neither can this situation be simply reproduced by assuming an age dispersion for the clump stars. The age dispersion required (more than 0.1 Gyr) is too large compared to present observational estimates. Moreover, such a high age dispersion would also cause a non-negligible, and so far not observed, spread in the main sequence region of the CMD.
We are left, then, with a couple of other possibilities. First, mass-loss on the RGB may cause a significant dispersion of clump masses at a single age. Different amounts of mass-loss can be triggered, for instance, in stars with different rotational velocities. If this is the case, by studying clusters like NGC 7789 we may be able to put constraints on the differential mass loss for stars in this approximate mass range, just as we actually do for stars in globular clusters (cf. Renzini & Fusi Pecci rfp (1988)). Second, the mass and age at which the transition occurs are somewhat dependent on the efficiency adopted for overshooting in stellar cores during the main-sequence phase. Any dispersion in this efficiency (caused, e.g., by different rotational velocities) should also be reflected in a dispersion of the H-exhausted core masses at a given age. Such a dispersion would be most evident exactly in the mass range of the transition, because it is the mass interval in which the core mass–initial mass relation changes the most. For clusters older than 4 Gyr, for instance, the core mass at He ignition is practically constant and much less sensitive to the extension of the convective cores.
If any of these interpretations is correct, we face some interesting possibilities. First, with more data for open clusters in the relevant age range, we may be able to constrain observationally the rate at which the transition from non-degenerate to degenerate helium ignition occurs. Second, once this age interval is better documented, we may be able to attach independent observables (like the main sequence termination colour and magnitude) to this transition, with which theoretical models should comply. The present data for NGC 752 and NGC 7789 suggest that $`(B-V)_0^{\mathrm{TO}}\approx 0.5`$ at the transition mass $`M_{\mathrm{Hef}}`$, for near-solar metallicities. Alternative data for LMC clusters by Corsi et al. (cbfp (1994)) indicate $`(B-V)_0^{\mathrm{TO}}\approx 0.25`$ for the same transition mass at a metallicity of about half solar. These numbers seem to be reasonably well reproduced by the present models.
###### Acknowledgements.
We are grateful to Dr A. Bragaglia for providing a copy of the CCD data in NGC 2660 in advance of publication. The work by L.G. and G.C. is funded by the Italian MURST. L.G. acknowledges the hospitality from the Université de Lausanne during a visit. The work of J.-C.M. has been supported by grants of the Swiss National Funds (FNRS). |
# Triplet interactions in star polymer solutions
## I Introduction
Star polymers , i.e., structures of $`f`$ linear polymer chains that are chemically linked with one end to a common core, have found recent interest as very soft colloidal particles . As the number $`f`$ of chains increases, they interpolate between linear polymers and polymeric micelles . For large $`f`$, the effective repulsion between the cores of different polymer stars becomes strong enough to allow for crystalline ordering in a concentrated star polymer solution. While such a behavior was already predicted by early scaling arguments, only recently have corresponding experiments become feasible with sufficiently dense star solutions. The crystallization transition occurs roughly at the overlap concentration $`c^{}`$, which is the number density of stars at which their coronae start to touch and experience the mutual repulsion. It is defined as $`c^{}=1/(2R_g)^3`$ where $`R_g`$, the radius of gyration, is the root mean square distance of the monomers from the center of mass of a single star. In addition, theory and computer simulation have refined the original estimate for the number of chains $`f`$ necessary for a freezing transition from $`f\approx 100`$ to $`f\approx 34`$ and predicted a rich phase diagram including stable anisotropic and diamond solid structures at high densities and high arm numbers. These results were derived using an effective pair potential between stars with a logarithmic short distance behavior derived from scaling theory.
In general, while the pair interactions are the central focus and the typical input of any many-body theory, much less is known about triplet and higher-order many body interactions. For rare gases, the Axilrod-Teller triplet interaction has been found to become relevant in order to describe high-precision measurements of the structure factor . For charged colloids, the effective triplet forces are generated by nonlinear counterion screening. This was investigated recently by theory and simulations . For star polymer solutions in a good solvent such studies are missing. In all three cases, the effective triplet forces originate from formally integrating out microscopic degrees of freedom. For rare gases, these are the fluctuations of the outer-shell electrons while for charged colloids the classical counterions play the role of additional microscopic degrees of freedom. For star polymers, on the other hand, one is interested in an effective interaction between the star centers by integrating out the monomer degrees of freedom . Usually one starts from an effective pair potential which is valid for large particle separation. The range of this effective pair potential involves a certain length scale $`\mathrm{}`$ which is the decay length of the van-der-Waals attraction, the Debye-Hückel screening length or the diameter of gyration $`2R_g`$, for rare gases, charged colloids, and star polymers, respectively. Triplet forces, i.e. three star forces, not forces between monomers, become relevant with respect to the pairwise forces if the typical separations between the particles are smaller than this typical length scale $`\mathrm{}`$. This implies a triple overlap of particle coronae drawn as spheres of diameter $`\mathrm{}`$ around the particle centers. The triple overlap volume is an estimate for the magnitude of the triplet forces. Hence a three-particle configuration on an equilateral triangle is the configuration where triplet effects should be most pronounced.
The aim of the present paper is to quantify the influence of triplet interactions for star polymer solutions in a good solvent using both analytical theory and computer simulation. In doing so, we consider a set-up of three star polymers whose centers are on an equilateral triangle. We found that the triplet part is attractive but its relative contribution is small (11%) with respect to the repulsive pairwise part. This relative correction is universal, i.e., it is independent of the particle separation and of the arm number. It even persists for a collinear configuration of three star polymers where the absolute correction is smaller than in the triangular situation for the same star-star distance. Consequently, the validity of the effective pair potential model is justified even at densities above the overlap concentration. In particular, our result gives evidence that the anisotropic and diamond solids predicted by the pair theory are indeed realizable in actual samples of concentrated star polymer solutions.
Our paper is organized as follows: in section II we apply scaling theory to extract the triplet forces both for small and for large arm numbers. In section III we briefly describe our Molecular Dynamics (MD) simulation scheme and present results in section IV. Comparing these to the theoretical predictions, we find good agreement. Section V is devoted to concluding remarks and to an outlook.
## II Scaling theory of triplet forces between star polymers
### A Scaling of single stars
The scaling theory of polymers was significantly advanced by de Gennes’ observation that the $`n`$-component spin model of magnetic systems is applicable to polymers in the formal $`n=0`$ limit . This opened the way to apply renormalization group (RG) theory to explain the scaling properties of polymer solutions that have been the subject of experimental and theoretical investigations since the pioneering works in this field . Many details of the behavior of polymer solutions may be derived using the RG analysis . Here, we use only the more basic results of power law scaling: the radius of gyration $`R_g(N)`$ of a polymer chain and the partition function $`𝒵(N)`$ are found to obey the power laws:
$$R_g(N)\sim N^\nu \text{ and }𝒵(N)\sim z^NN^{\gamma -1}.$$
(1)
The fugacity $`z`$ measures the mean number of possibilities to add one monomer to the chain. It is microscopic in nature and will depend on the details of the model or experimental system. The two exponents $`\nu `$ and $`\gamma `$ on the contrary are the $`n=0`$ limits of the correlation length exponent $`\nu (n)`$ and the susceptibility exponent $`\gamma (n)`$ of the $`n`$ component model and are universal to all polymer systems in a good solvent, i.e., excluding high concentration of polymers or systems in which the polymers are collapsed or are near the collapse transition. For any such system the exponents of any other power law for linear polymers may be expressed by these two exponents in terms of scaling relations.
It has been shown that the $`n`$ component spin model may be extended by insertions of so called composite spin operators that allow to describe polymer networks and in particular star polymers in the $`n=0`$ limit . A family of additional exponents $`\gamma _f`$ governs the scaling of the partition function $`𝒵_f(N)`$ of a polymer star of $`f`$ chains each with $`N`$ monomers:
$$𝒵_f(N)\sim z^NN^{\gamma _f-1}.$$
(2)
Again the exponents of any other power law for more general polymer networks are given by scaling relations in terms of $`\gamma _f`$ and $`\nu `$. Here, we substitute another family of exponents $`\eta _f`$ for the $`\gamma _f`$, via $`\gamma _f-1=\nu (\eta _f-f\eta _2)`$. The first two members $`\eta _1=0`$ and $`\eta _2=(1-\gamma )/\nu `$ are defined by the requirement that the $`f=1`$ star and the $`f=2`$ star are just linear chains with scaling exponents $`\gamma _1=\gamma _2=\gamma `$. The values of these exponents are known from renormalization group analysis (RG) and Monte Carlo (MC) simulations . Several equivalent approaches have been elaborated to evaluate the renormalized perturbation theory. Early first order perturbative RG results were given in ref. . Here, we explicitly present the result of an expansion in the parameter $`\epsilon =4-d`$ where $`d`$ is the space dimension. The $`\epsilon `$-expansion for the $`\eta _f`$ reads
$`\eta _f=-{\displaystyle \frac{\epsilon }{8}}f(f-1)\left\{1-{\displaystyle \frac{\epsilon }{32}}(8f-25)+{\displaystyle \frac{\epsilon ^2}{64}}\left[(28f-89)\zeta (3)+8f^2-49f+{\displaystyle \frac{577}{8}}\right]\right\}+𝒪(\epsilon ^4)`$ (3)
with the Riemann $`\zeta `$-function. Note that this series is asymptotic in nature and to evaluate it for $`\epsilon =1`$ it is necessary to apply resummation. An alternative expansion for the star exponents makes use of an RG approach at fixed dimension $`d=3`$ proposed by Parisi . This expansion has been worked out in refs. . The corresponding expressions are lengthy and not presented here. In Table 1, in the first two lines we have calculated the resummation for the series in Eq. (3) as well as for the expansion at fixed dimension. The resummation procedure that we apply combines a Borel transform with a conformal mapping using all information on the asymptotic behavior of the perturbation expansion of the corresponding spin model . Results for $`f9`$ have been given before in refs. whereas we have added here the calculation of values for $`f=10,12,15`$. The deviation between the two approaches measures the error of the method. For large $`f`$ the leading coefficient of the $`k`$th order term $`\epsilon ^k`$ in Eq. (3) is multiplied by $`f^{k+1}`$. This is due to combinatorial reasons and occurs also for the alternative approach. It limits the use of the series to low values of $`f`$.
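For illustration, the truncated series can be evaluated directly; naively setting $`\epsilon =1`$ without the resummation step gives numbers that deviate from those of Table 1 and degrade quickly with growing $`f`$ (a sketch):

```python
ZETA3 = 1.2020569031595943   # Riemann zeta(3)

def eta_f_naive(f, eps=1.0):
    """Plain evaluation of the truncated third-order series, no resummation."""
    bracket = (1.0
               - eps/32.0 * (8*f - 25)
               + eps**2/64.0 * ((28*f - 89)*ZETA3 + 8*f**2 - 49*f + 577.0/8))
    return -eps/8.0 * f * (f - 1) * bracket

for f in (2, 3, 4, 5):
    print(f, round(eta_f_naive(f), 3))
```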
Another possibility to estimate the values of the star polymer scaling exponents $`\gamma _f`$ is to consider the limiting case of many arm star polymers. For large $`f`$ each chain of the star is restricted approximately to a cone of solid angle $`\mathrm{\Omega }_f=4\pi /f`$. In this cone approximation one finds for large $`f`$
$$\gamma _f\sim -f^{3/2}.$$
(4)
### B Two star polymers
Let us now turn to the effective interaction between the cores of two star polymers at distances $`r`$ that are small on the scale of the size $`R_g`$ of the stars. Let us for the moment consider the more general case of two star polymers with $`f_1`$ and $`f_2`$ arms, respectively. The cores of the two stars are at a distance $`r`$ from each other. We assume all chains involved to be of the same length. The power law for the partition sum $`𝒵_{f_1f_2}^{(2)}(r)`$ of two star polymers may then be derived from a short distance expansion. This expansion was originally established in the field theoretic formulation of the $`n`$ component spin model. While we do not intend to give any details of these considerations here, applications to polymer theory may be found in refs. . The relevant result, on the other hand, is simple enough: the partition sum of the two stars $`𝒵_{f_1f_2}^{(2)}(N,r)`$ at small distance $`r`$ factorizes into a function $`C_{f_1f_2}(r)`$ of $`r`$ alone and the partition function $`𝒵_{f_1+f_2}`$(N) of the star with $`f_1+f_2`$ arms that is formed when the cores of the two stars coincide.
$$𝒵_{f_1f_2}^{(2)}(N,r)\approx C_{f_1f_2}(r)𝒵_{f_1+f_2}(N)$$
(5)
For the function $`C_{f_1f_2}(r)`$ one may show that power law scaling for small $`r`$ holds in the form
$$C_{f_1f_2}(r)\sim r^{\mathrm{\Theta }_{f_1f_2}^{(2)}}.$$
(6)
with the contact exponent $`\mathrm{\Theta }_{f_1f_2}^{(2)}`$. To find the scaling relation for this power law, we change the length scale in (5) in an invariant way by $`r\to \lambda r`$ and $`N\to \lambda ^{1/\nu }N`$. The scaling of the partition function $`𝒵_{f_1f_2}^{(2)}`$ may be shown to factorize into the contributions of the two stars. This transforms (5) to
$$\lambda ^{(\gamma _{f_1}-1)/\nu }\lambda ^{(\gamma _{f_2}-1)/\nu }𝒵_{f_1f_2}^{(2)}(N,r)\approx 𝒵_{f_1f_2}^{(2)}(\lambda ^{1/\nu }N,\lambda r)\approx \lambda ^{\mathrm{\Theta }_{f_1f_2}^{(2)}}C_{f_1f_2}(r)\lambda ^{(\gamma _{f_1+f_2}-1)/\nu }𝒵_{f_1+f_2}(N).$$
(7)
Collecting powers of $`\lambda `$ provides the scaling relation
$`\nu \mathrm{\Theta }_{f_1f_2}^{(2)}`$ $`=`$ $`(\gamma _{f_1}-1)+(\gamma _{f_2}-1)-(\gamma _{f_1+f_2}-1),`$ (8)
$`\mathrm{\Theta }_{f_1f_2}^{(2)}`$ $`=`$ $`\eta _{f_1}+\eta _{f_2}-\eta _{f_1+f_2}.`$ (9)
We now specialize our consideration to the interaction between two stars with equal numbers of arms, $`f_1=f_2=f`$. The mean force $`F_{ff}^{(2)}(r)`$ between the two star polymers at short distance $`r`$ is then easily derived from the effective potential $`V^{\mathrm{eff}}(r)=-k_\mathrm{B}T\mathrm{log}[𝒵_{ff}^{(2)}(r)/(𝒵_f)^2]`$ with $`k_\mathrm{B}T`$ denoting the thermal energy. For the force this results in
$$\frac{1}{k_\mathrm{B}T}F_{ff}^{(2)}(r)=\frac{\mathrm{\Theta }_{ff}^{(2)}}{r}.$$
(10)
The cone approximation for the contact exponents $`\mathrm{\Theta }_{ff}^{(2)}`$ may be matched to the known values for $`f=1,2`$ (see table 1), fixing the otherwise unknown prefactor. Assuming that the behavior of the $`\mathrm{\Theta }_{ff}^{(2)}`$ may be described by the cone approximation for all $`f`$ one finds:
$$F_{ff}^{(2)}(r)\approx \frac{5}{18}f^{3/2}\frac{k_\mathrm{B}T}{r}.$$
(11)
This matching in turn suggests an approximate value for the $`\eta _f`$ exponents,
$$\eta _f\approx -\frac{5}{18}\left(2^{3/2}-2\right)^{-1}f^{3/2}.$$
(12)
Note on the other hand that this approximation is inconsistent with the exact result $`\eta _1=0`$. However, the approximation works well for $`\mathrm{\Theta }_{ff}^{(2)}`$ in the range $`f=1,\mathrm{},6`$, where we have calculated the corresponding values from the perturbation theory results as well as according to the cone approximation. Our results, displayed in the second part of table 1, show good correspondence of the cone approximation with the resummation values.
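A quick numerical check of this matching for $`f=1`$, where $`\mathrm{\Theta }_{11}^{(2)}=-\eta _2=(\gamma -1)/\nu `$ is known exactly in terms of the linear-chain exponents (the values $`\gamma \approx 1.16`$ and $`\nu \approx 0.588`$ used below are the commonly quoted three-dimensional ones):

```python
gamma, nu = 1.16, 0.588                 # linear-chain exponents in d=3

def theta2_cone(f):
    """Cone-approximation pair contact exponent, Eq. (11)."""
    return 5.0/18.0 * f**1.5

theta2_exact = (gamma - 1.0) / nu       # Theta_11 = 2*eta_1 - eta_2 = -eta_2
print(round(theta2_exact, 3), round(theta2_cone(1), 3))  # 0.272 vs 0.278
```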
### C Three stars
We now use the idea of the short distance expansion once more to derive the triplet interaction of three star polymers at close distance. We consider a symmetric situation in which the three cores of the polymer stars are located on the corners of an equilateral triangle (see Fig. 1). The distance between the cores is $`r`$ while their distance to the center of the triangle is $`R`$. We assume that the radius of gyration $`R_g`$ of the star polymers is much larger than their mutual distance, $`R_g\gg r`$.
To make the argument more transparent we first consider the slightly more general case of three stars with $`f_1`$, $`f_2`$ and $`f_3`$ arms respectively. Shrinking the outer radius $`R`$ of the triangle on which the cores are located, the partition function of this configuration of three stars will scale with $`R`$ according to
$`𝒵_{f_1f_2f_3}(R)`$ $`\sim `$ $`R^{\mathrm{\Theta }_{f_1f_2f_3}^{(3)}}`$ (13)
$`\mathrm{\Theta }_{f_1f_2f_3}^{(3)}`$ $`=`$ $`\eta _{f_1}+\eta _{f_2}+\eta _{f_3}-\eta _{f_1+f_2+f_3}.`$ (14)
Now, the scaling exponent $`\eta _{f_1+f_2+f_3}`$ of the star that results by collapsing the cores of the three stars at one point has to be taken into account as follows from an argument analogous to the above consideration for two stars.
Let us specify the result for the symmetric situation of three equivalent stars $`f_1=f_2=f_3=f`$. Furthermore we assume that the large $`f`$ approximation (12) is valid for the exponents $`\eta _f`$. Then the three star contact exponent may be written as
$$\mathrm{\Theta }_{fff}^{(3)}=\frac{3^{3/2}-3}{2^{3/2}-2}\times \frac{5}{18}f^{3/2}.$$
(15)
An effective potential of the system of the three stars at small distance $`R`$ from the center may then be defined by
$$V_{fff}^{(3)\mathrm{eff}}(R)=-k_\mathrm{B}T\mathrm{\Theta }_{fff}^{(3)}\mathrm{ln}(R/R_g).$$
(16)
We now derive the corresponding three body force underlying this effective potential. Note that the absolute value of the force is the same for all three stars. The relation of the potential to the force on the core of one star is then
$$V_{fff}^{(3)\mathrm{eff}}(R+dR)-V_{fff}^{(3)\mathrm{eff}}(R)=-\underset{i=1}{\overset{3}{\sum }}\vec{F}_i\cdot d\vec{R}_i=-3F_{fff}^{(3)}(R)dR.$$
(17)
The final result for the total force on each of the stars that includes any three body forces is therefore
$$F_{fff}^{(3)}(R)=k_\mathrm{B}T\mathrm{\Theta }_{fff}^{(3)}/(3R).$$
(18)
If one starts instead from a sum of two body forces, then one star experiences the sum of the two forces calculated for the star-star interaction. With the given geometry of the equilateral triangle this is easily calculated to give
$$F_{fff}^{(2)}(r)=k_\mathrm{B}T\left|\widehat{r}_{12}\mathrm{\Theta }_{ff}^{(2)}/r_{12}+\widehat{r}_{13}\mathrm{\Theta }_{ff}^{(2)}/r_{13}\right|=k_\mathrm{B}T\mathrm{\Theta }_{ff}^{(2)}/R.$$
(19)
Here, $`r=r_{12}=r_{13}=R\sqrt{3}`$ denotes the distance between two of the stars, while the $`\widehat{r}_{ij}`$ are the unit vectors along the edges of the triangle (see Fig. 1).
$$\frac{\mathrm{\Delta }F}{F_{fff}^{(2)}}=\frac{F_{fff}^{(3)}(r)-F_{fff}^{(2)}(r)}{F_{fff}^{(2)}(r)}=\frac{\mathrm{\Theta }_{fff}^{(3)}-3\mathrm{\Theta }_{ff}^{(2)}}{3\mathrm{\Theta }_{ff}^{(2)}}.$$
(20)
Using the cone approximation for the contact exponent we finally obtain for the relative deviation caused by triplet forces alone
$$\frac{\mathrm{\Delta }F}{F_{fff}^{(2)}}=\frac{3^{3/2}-3}{3\left(2^{3/2}-2\right)}-1\approx -0.11.$$
(21)
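This number is easily checked (a minimal Python sketch; the common factor $`af^{3/2}`$ cancels in the ratio, so only the combinatorial constants enter):
theta2 = 2**1.5 - 2.0                  # Theta^(2) in units of a*f**1.5, cone approximation
theta3 = 3**1.5 - 3.0                  # Theta^(3) in the same units
print(theta3 / (3.0 * theta2) - 1.0)   # -> -0.1163..., i.e. about -11%
One can also confirm the geometric factor in Eq. (19): the two pair forces act along the triangle edges, which meet at an angle of 60 degrees, so the resultant is $`2\mathrm{cos}(30^{})=\sqrt{3}`$ times a single pair force, and with $`r=\sqrt{3}R`$ this gives $`k_\mathrm{B}T\mathrm{\Theta }_{ff}^{(2)}/R`$.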
This result is independent of the number of arms and valid in the full region that is described by the logarithmic potential. In table 1 we have calculated the exponents as derived from the perturbation expansion of polymer field theory, checking the relation eq. (21). Taking into account the error that may be estimated from the difference of the results obtained by the two complementary approaches, the results are in good agreement with the cone approximation even for low $`f`$ values. This close agreement is rather surprising, as additional numerical errors might be introduced by the calculation of the contact exponents from the original star exponents. It confirms our estimate that the relative deviation caused by triplet forces is of the order of not more than $`11\%`$ in all analytic approaches we have followed here. Let us note that the analogous calculation for a symmetric linear configuration of three stars yields the same relative deviation eq. (21). The absolute triplet forces for the linear configuration are smaller by a factor $`\sqrt{3}/2`$ than for the triangular configuration with the same star-star distance.
## III Computer simulation method
Molecular dynamics (MD) simulations were performed using exactly the model that three of the present authors used to test the effective pair potential and that had originally been proposed to study single star polymers . In this model the configuration of star polymer $`i=1,2,3`$ is given by the coordinates $`\stackrel{}{r}_m^{(i,j)}`$ of the $`N`$ monomers $`m=1,\dots ,N`$ of the $`f`$ chains $`j=1,\dots ,f`$ and the position of its core $`\stackrel{}{r}_0^{(i)}`$. The main features of this model are the following: (1) A purely repulsive truncated Lennard-Jones-like potential acts between all monomers $`m=0,\dots ,N`$ on all chains. (2) An attractive FENE potential preserves the chain connectivity and acts only between consecutive monomers $`m,m+1`$ along each chain. (3) These potentials have to be slightly modified for the interaction between the first monomer $`m=1`$ and the core $`m=0`$ of the star to allow the core to have a radius $`R_d`$ that is sufficiently large to place $`f`$ monomers in its vicinity.
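For orientation, the Python sketch below illustrates interactions of this type; the specific parameter values ($`k=30ϵ/\sigma ^2`$, $`R_0=1.5\sigma `$) are common literature choices for such bead-spring models and are assumptions here, not necessarily those used in the simulation:
import math
EPS, SIG = 1.0, 1.0                          # Lennard-Jones units (illustrative)
K, R0 = 30.0 * EPS / SIG**2, 1.5 * SIG       # FENE parameters (assumed values)
RCUT = 2.0**(1.0 / 6.0) * SIG                # truncation at the LJ minimum
def u_lj_truncated(r):
    # purely repulsive truncated and shifted LJ part between all monomers
    if r >= RCUT:
        return 0.0
    sr6 = (SIG / r)**6
    return 4.0 * EPS * (sr6**2 - sr6) + EPS  # shifted so that u(RCUT) = 0
def u_fene(r):
    # attractive FENE bond between consecutive monomers m, m+1 of one chain
    if r >= R0:
        return float("inf")                  # the bond cannot stretch beyond R0
    return -0.5 * K * R0**2 * math.log(1.0 - (r / R0)**2)
print(u_lj_truncated(0.97 * SIG) + u_fene(0.97 * SIG))   # total bonded energy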
The three cores of the stars were placed at the corners of an equilateral triangle, see again Figure 1, where the core radius $`R_d`$ is also shown. A typical snapshot of the three star simulation is displayed in Figure 2 for a functionality of $`f=5`$ and $`N=100`$ monomers per chain. The force on the star core was averaged during the MD simulation for a number of edge lengths $`r`$ of the triangle varying in the range between the diameter of the two cores $`2R_d`$ and the diameter of gyration $`2R_g`$ of a single star polymer. We have produced data for $`f=3,5,10,18,30`$. For the smaller functionalities ($`f=3,5,10`$) the number of monomers per chain was $`N=100`$ while for $`f=10,18,30`$ a number $`N=50`$ was chosen. Note that the total system comprises between $`900`$ and $`4500`$ mutually interacting particles. As equilibration is slow and the statistical average converges slowly, the simulation becomes increasingly time-consuming beyond such system sizes. For reference, we have also produced data for a two-star configuration according to the calculations in Ref. .
## IV Results
Results of the computer simulation are compared to the theory in Figures 3a and 3b. The reduced averaged force on a single star is shown versus the reduced triangle edge length for different arm numbers. As a reference case, the corresponding results in a pair potential picture are also shown, both within theory and simulation. For technical reasons we kept a small core radius $`R_d`$ in the simulation, which is roughly $`10\%`$ of the radius of gyration of the whole star. In the theory, on the other hand, the core size was zero. Hence, to compare properly , a shift $`r\to r-2R_d`$ has to be performed.
As expected, in both theory and simulation, the triplet forces become relevant only within the coronae. A comparison with pure pairwise forces leads to the first important observation that the triplet force is smaller, i.e. the pure triplet contribution is attractive. (Note that one has to multiply the pure two-star force by a factor of $`\sqrt{3}`$ for simple geometrical reasons.) The relative magnitude of the triplet term, however, is small. A quantitative comparison between theory and simulation shows good overall agreement. The triplet contribution itself, however, is subject to larger statistical errors in the simulation. Hence we resorted to a different strategy to check the theory by plotting the inverse force versus distance. If the theory is correct the simulation data should fall on a straight line both for the pure pairwise and the full triplet case. The slope should then give the theoretical prefactor of the logarithmic potential. The advantage of this procedure is that the slope bears a smaller statistical error as more data points are included. Such a comparison is shown in Figure 4 for $`f=10`$. The first consequence is that the simulation data indeed fall on a straight line, confirming the theory. In fact this is true for all other parameter combinations considered in the simulations. The slope is higher for the triplet and lower for the pair case, both in theory and simulation. The actual values in Figure 4 are of the same order of magnitude but differ somewhat.
In order to check this in more detail, we have extracted the slope for all simulation data. The result is summarized in Figure 5 where the relative differences of the slopes between the pair and triplet cases are plotted versus the arm number $`f`$. The theory predicts a constant value of $`0.11`$, see Eq. (21). The simulation data scatter considerably in the range between $`0.05`$ and $`0.15`$ due to the large statistical error, but the theoretical value falls reasonably within the data. Consequently, the triplet contributions are found to be attractive and small even for nearly touching cores where the triplet overlap of the coronae is substantial.
## V Conclusions
In conclusion, we have calculated, by theory and computer simulations, the triplet interaction between star polymer centers in a good solvent positioned on the corners of an equilateral triangle. The triplet part was found to be attractive but only about $`11\%`$ of the pairwise repulsion. Our calculations justify earlier investigations where the pair potential framework was used even slightly above the star overlap concentration.
We finish with a couple of remarks: First, the scaling theory can also be performed for any triplet configuration beyond the equilateral triangle studied in this paper. Second, arbitrary higher-order many-body forces can be investigated assuming a cluster of $`M`$ stars. Such a calculation is given in Appendix A. As a result, the deviations from the pair potential picture increase with the number $`M`$ and even diverge for $`M\to \infty `$. This implies that the pair potential picture breaks down for very high concentrations. This is expected, as for high concentration a star polymer solution is mainly a semi-dilute solution of linear chains for which it is irrelevant to which center they are attached . As far as further simulational work is concerned, there are many open problems left. Apart from the investigation of arbitrary triplet configurations and their extension to an arbitrary number of stars, the most challenging problem is a full “ab initio” simulation of many stars including many-body forces from the very beginning. This is in analogy to Car-Parrinello simulations, which were also applied to colloidal suspensions . A first attempt has been made , but certainly more work is needed here. Another (somewhat less demanding) task is to study stars on a periodic solid lattice with periodic boundary conditions and extract the many-body interactions from there.
It would be interesting to study the relevance of triplet forces for star polymers in a poor solvent near the $`\mathrm{\Theta }`$-point . It can, however, be expected that the triplet forces are even less important here than for a good solvent, as the effective interaction becomes stiffer in a poor solvent. Furthermore, the effect of polydispersity in the arm number, which was briefly touched upon in our scaling theory treatment, should be studied further, since this is important to describe real experimental samples.
## Acknowledgments
We are grateful to the DFG for financial support within the SFB 237.
## A Higher Order forces between star polymers
Here we derive the effective $`M`$th-order force for the general case of $`M`$ simultaneously interacting star polymers with $`f`$ arms each. Generalizing the equilateral triangle geometry, we study the situation where the $`M`$ cores of the stars are evenly distributed on a sphere with radius $`R`$. In particular, the cores of the stars may be located at the corners of a regular polyhedron. Then the non-radial forces on each star polymer cancel. The latter condition may be fulfilled approximately also for large numbers $`M`$ for which a regular polyhedron does not exist.
We first calculate the force on one star as the sum of the $`M-1`$ pairwise forces exerted by the other stars. For the pairwise force (10), which acts according to a $`1/r`$-law, it is easy to verify that the radial component of the force between any two points on the sphere is $`\mathrm{\Theta }_{ff}^{(2)}/(2R)`$, independent of their relative position. With this simplification the total (radial) force on one star is
$$\frac{1}{k_\mathrm{B}T}F_{M,f}^{(2)}=\frac{M-1}{2}\frac{\mathrm{\Theta }_{ff}^{(2)}}{R}.$$
(A.1)
Here, $`F_{M,f}^{(2)}`$ denotes the sum of pairwise forces on one of the $`M`$ stars each with $`f`$ arms. In the case $`M=3`$ this is the result of eq. (19).
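The statement about the radial component can be verified directly. For two cores $`\vec{R}_1`$ and $`\vec{R}_2`$ on the sphere separated by an angle $`\theta `$, the chord length obeys $`r^2=2R^2(1-\mathrm{cos}\theta )`$ and the repulsive force on star 1 points along $`(\vec{R}_1-\vec{R}_2)/r`$, so its radial component is
$$F_r=\frac{k_\mathrm{B}T\mathrm{\Theta }_{ff}^{(2)}}{r}\frac{\vec{R}_1\cdot (\vec{R}_1-\vec{R}_2)}{Rr}=\frac{k_\mathrm{B}T\mathrm{\Theta }_{ff}^{(2)}R(1-\mathrm{cos}\theta )}{r^2}=\frac{k_\mathrm{B}T\mathrm{\Theta }_{ff}^{(2)}}{2R},$$
independent of $`\theta `$, as claimed.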
The total $`M`$th order force $`F_{M,f}^{(M)}`$ between $`M`$ star polymers with $`f`$ arms brought close together may again be derived from a short distance expansion resulting in the scaling relation
$$\mathrm{\Theta }_{M,f}^{(M)}=M\eta _f-\eta _{Mf}.$$
(A.2)
The force on one star is then found in the same way as for three stars as
$$\frac{1}{k_\mathrm{B}T}F_{M,f}^{(M)}=\frac{\mathrm{\Theta }_{M,f}^{(M)}}{MR}.$$
(A.3)
The leading contributions for large numbers of stars $`M`$ in the two cases differ even in the power of $`M`$. While the first is linear in $`M`$ the latter grows only with the square root of $`M`$. In the large-$`f`$ and large-$`M`$ approximations this reads :
$`{\displaystyle \frac{1}{k_\mathrm{B}T}}F_{M,f}^{(2)}`$ $`\approx `$ $`{\displaystyle \frac{5}{18}}{\displaystyle \frac{f^{3/2}}{2R}}M`$ (A.4)
$`{\displaystyle \frac{1}{k_\mathrm{B}T}}F_{M,f}^{(M)}`$ $`\approx `$ $`{\displaystyle \frac{5}{18}}\left(2^{3/2}-2\right)^{-1}{\displaystyle \frac{f^{3/2}}{R}}M^{1/2}.`$ (A.5)
Note that for large $`M`$ the factors $`M`$ and $`M^{1/2}`$ in these two approaches are not a result of the large $`f`$ approximation but are of combinatorial and geometrical origin. This shows that for large $`M`$ the sum of pairwise forces largely overestimates the force on one star.
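The crossover between the two scalings is easily illustrated numerically (a minimal Python sketch; units with $`k_\mathrm{B}T=R=1`$, and the common factor $`af^{3/2}`$ with $`a=(5/18)/(2^{3/2}-2)`$ cancels in the ratio):
for M in (3, 10, 100, 1000):
    theta_M = M**1.5 - M                   # Theta^(M) in units of a*f**1.5
    f_full = theta_M / M                   # full M-body force, eq. (A.3)
    f_pair = 0.5 * (M - 1) * (2**1.5 - 2)  # sum of pair forces, eq. (A.1)
    print(M, f_full / f_pair)              # -> 0.88, 0.58, 0.22, 0.07
For $`M=3`$ this reproduces the $`11\%`$ deviation of eq. (21), and the ratio decays like $`M^{-1/2}`$ for large $`M`$.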
## Figure Captions
FIG. 1. Three star polymers at mutual distance $`r`$. The cores of the stars (with radius $`R_d`$) are located at the corners of an equilateral triangle. The distance from the center is $`R`$. The mean radius of gyration of a single star is $`R_g`$.
FIG. 2. Snapshot of the simulation of three stars with $`f=5`$ arms each with $`N=100`$ monomers. The cores are located at the corners of the equilateral triangle that is depicted in the center. The monomers that belong to the same star are represented by balls of the same color: either black, dark gray, or light gray.
FIG. 3a. Comparison of the force $`F`$ measured in the three star MD with that calculated from a corresponding two star MD simulation for $`f=3`$ and $`f=10`$ with $`N=100`$. Also the results predicted by the theory are plotted as a continuous line (only pair forces) and a broken line (including triplet forces).
FIG. 3b. Same as Fig. 3a but for $`f=18`$ and $`f=30`$ with $`N=50`$.
FIG. 4. Comparison of the inverse force $`1/F`$ measured in the three star MD with that calculated from a corresponding two star MD simulation for $`f=10`$ with $`N=50`$. The linear fits for the pair forces (small dashed line) and the full three body force (dash-dotted line) are shown together with the respective results predicted by the theory which are depicted by a continuous line (only pair forces) and a broken line (including triplet forces).
FIG. 5. The slopes of the linear fits to the data as shown in Fig. 4 were extracted from the simulation data for $`f=3,5,10,18,30`$ and $`N=50,100`$ to calculate the relative deviation $`\mathrm{\Delta }F/F_{fff}^{(2)}`$ induced by the triplet forces. The line at $`0.11`$ corresponds to the analytic result. |
hep-th/0001200 CALT-68-2260 CITUSC/00-008
D-branes on Orbifolds with Discrete Torsion And Topological Obstruction
Jaume Gomis
Department of Physics
California Institute of Technology
Pasadena, CA 91125
and
Caltech-USC Center for Theoretical Physics
University of Southern California
Los Angeles, CA 90089
We find the orbifold analog of the topological relation recently found by Freed and Witten which restricts the allowed D-brane configurations of Type II vacua with a topologically non-trivial flat $`B`$-field. The result relies on Douglas’ proposal – which we derive from worldsheet consistency conditions – of embedding projective representations on open string Chan-Paton factors when considering orbifolds with discrete torsion. The orbifold action on open strings gives a natural definition of the algebraic K-theory group – using twisted cross products – responsible for measuring Ramond-Ramond charges in orbifolds with discrete torsion. We show that the correspondence between fractional branes and Ramond-Ramond fields follows in an interesting fashion from the way that discrete torsion is implemented on open and closed strings.
January 2000
1. Introduction, Results and Conclusions
Orbifolds in string theory provide a tractable arena where CFT can be used to describe perturbative vacua. In the more conventional geometric compactifications, geometry and topology provide powerful techniques in describing the long wavelength approximation to string theory. In a sense, geometric compactifications and orbifolds provide a sort of dual description. In the latter, CFT techniques are available but topology is less manifest. On the other hand, conventional Calabi-Yau compactification lacks an exact CFT formulation but a rich mathematical apparatus aids the analysis of the corresponding supergravity approximation.
Perhaps surprisingly, orbifold CFT seems to be able to realize topological relations satisfied by geometric compactifications from worldsheet consistency conditions. A nice example of this phenomenon is the interpretation of a restriction imposed by modular invariance as the analog of the topological constraint requiring the space-time manifold to have a vanishing second Stiefel-Whitney class.
The original motivation of this work was to find the analog of the topological formula recently found by Freed and Witten<sup>1</sup> Several aspects of this topological relation had already been considered by Witten in . in the context of orbifolds. Let us briefly explain their results in a language that will be convenient for what follows. Given a space-time manifold $`X`$ and a submanifold $`Y\subset X`$, their formula constrains the configurations of D-branes allowed to wrap $`Y`$. A careful analysis of string worldsheet global anomalies in the presence of D-branes and a flat Neveu-Schwarz B-field – so that the curvature $`H=dB`$ is zero – imposes, for a class of backgrounds, the following topological relation<sup>2</sup> This formula can have an additional term that depends on the topology of $`Y`$. We will ignore this correction since it vanishes when the second Stiefel-Whitney class of $`Y`$ is trivial and the orbifold model satisfies a relation which can be identified with the vanishing of this class.
$$i^{*}[H]=0,$$
where $`[H]\in H^3(X,Z)`$ determines the topological class of the B-field and $`i^{*}[H]`$ is the restriction of $`[H]`$ to the D-brane worldvolume $`Y\subset X`$. Since $`i^{*}[H]\in H^3(Y,Z)`$ is a torsion class in cohomology, so that there is a smallest non-zero integer $`m`$ such that $`m\,i^{*}[H]=0`$, the anomaly relation (1.1) can only be satisfied whenever the number of D-branes wrapping $`Y\subset X`$ is a multiple of $`m`$. Therefore, in a background with a topologically non-trivial flat B-field such that $`m\,i^{*}[H]=0`$, the charge of the minimal D-brane configuration wrapping $`Y`$ is $`m`$ times bigger than in a background with trivial B-field. The anomaly relation (1.1) expresses in topological terms that D-branes in string theory are not pure geometric constructs, but that the allowed configurations may depend on discrete choices – like the choice of $`[H]`$ – of the string background. In some cases, this fact is realized by the K-theory classification of Ramond-Ramond charges<sup>3</sup> The work of Sen describing D-branes via tachyon condensation of unstable systems – see for example – was crucial in making the identification between D-branes and K-theory. See for a Type IIA discussion.. For example, whenever $`[H]`$ is not trivial, Type IIB D-brane charges are classified by the twisted topological K-theory group $`\text{K}_{[H]}(X)`$ instead of the more conventional group $`\text{K}(X)`$ used whenever $`[H]`$ is trivial.
In this note we find the analogous phenomenon for orbifold models. Given a compact orbifold $`T^6/\mathrm{\Gamma }`$ with discrete torsion<sup>4</sup> The simplest supersymmetric orbifold model with discrete torsion appears when the orbifold is three complex dimensional. For concreteness, we will study in section $`3`$ the case $`\mathrm{\Gamma }=Z_n\times Z_n^{'}`$. In , the values of $`n`$ and $`n^{'}`$ for which $`\mathrm{\Gamma }`$ acts crystallographically were found. , we show that the charge of the minimal D6-brane configuration wrapping the orbifold is larger by an integer factor than the minimal charge when one considers conventional orbifolds (without discrete torsion). This result parallels the consequences that stem from (1.1). Roughly speaking, turning on discrete torsion in the orbifold corresponds to turning on a flat topologically non-trivial $`B`$-field in a geometric compactification, and the conventional orbifold corresponds to the case where the $`B`$-field is trivial. This suggests that discrete torsion in string theory is intimately related to torsion in homology [15–19]<sup>5</sup> In many examples the relation is not direct and the correspondence between torsion in homology and discrete torsion is only visible after an irrelevant perturbation, as in for example . .
A crucial ingredient in deriving this result is a careful treatment of open strings in orbifolds with discrete torsion. Douglas has proposed that discrete torsion should be implemented on open strings by embedding a projective representation of the orbifold group on Chan-Paton factors. Whether $`\mathrm{\Gamma }`$ admits discrete torsion and projective representations depends on its cohomology via $`H^2(\mathrm{\Gamma },U(1))`$. This alone suggests the correlation between discrete torsion and projective representations. We show that worldsheet consistency conditions uniquely determine the action of the orbifold group on Chan-Paton factors once a closed string orbifold model is specified. This result can be derived by demanding that open and closed strings interact properly in the orbifold, so that the orbifold group $`\mathrm{\Gamma }`$ is conserved by their interactions. A careful account of discrete torsion is essential in this derivation, and we shall present its description in section $`2`$.
The effects of discrete torsion can be incorporated to define a K-theory group which measures Ramond-Ramond charges in orbifolds with discrete torsion. In the algebraic<sup>6</sup> See for prior use of the algebraic approach to K-theory in string theory. approach to equivariant K-theory via cross products , one incorporates discrete torsion by twisting the cross product by a cocycle<sup>7</sup> The multiplication law of the cross product $`C(X)\rtimes \mathrm{\Gamma }`$, where $`C(X)`$ is the algebra of continuous functions on $`X`$, is twisted by a cocycle $`c\in H^2(\mathrm{\Gamma },U(1))`$ and defines an associative group ring generated by the elements of $`\mathrm{\Gamma }`$ with $`C(X)`$ coefficients. The Grothendieck group of the twisted cross product yields the K-theory group $`\text{K}_\mathrm{\Gamma }^{[c]}(X)`$. See for a more complete discussion. $`c\in H^2(\mathrm{\Gamma },U(1))`$ corresponding to the choice of discrete torsion and projective representation. It turns out that whenever the action of a finite group $`\mathrm{\Gamma }`$ on $`X`$ is free, the algebraic K-theory of twisted cross products $`\text{K}_\mathrm{\Gamma }^{[c]}(X)`$ is isomorphic to the twisted topological K-theory group $`\text{K}_{[H]}(X/\mathrm{\Gamma })`$ which classifies D-branes in a background with topologically non-trivial flat B-field. The use of projective representations – and therefore of cocycles – provides a definition of K-theory which is the orbifold generalization of $`\text{K}_{[H]}(X)`$.
We show that the minimal D-brane charge for six-branes wrapping $`T^6/\mathrm{\Gamma }`$ is larger for orbifolds with discrete torsion than for conventional orbifolds by explicitly computing the D-brane charge. The D6-brane charge can be extracted from a disk amplitude with an insertion of the corresponding untwisted Ramond-Ramond vertex operator. The same result can be obtained as in using the boundary state formalism . As shown in , the properly normalized untwisted Ramond-Ramond six-brane charge is given by
$$Q=\frac{d_R}{|\mathrm{\Gamma }|},$$
where $`d_R`$ is the dimension of the $`R`$-representation<sup>8</sup> In the more conventional setup of branes transverse to a non-compact orbifold, a bulk brane is described by the regular representation, so that $`Q=1`$, and a brane stuck at the singularity by an irreducible representation, which carries fractional untwisted charge . of $`\mathrm{\Gamma }`$ acting on the Chan-Paton factors and $`|\mathrm{\Gamma }|`$ is the order of the group. The minimal charge is therefore obtained by taking the smallest irreducible representation of $`\mathrm{\Gamma }`$. For open strings in conventional orbifolds one must use standard (vectorial) representations of $`\mathrm{\Gamma }`$. For any discrete group $`\mathrm{\Gamma }`$, the smallest irreducible vectorial representation is always one-dimensional<sup>9</sup> One always has the trivial representation where each element $`g_i\in \mathrm{\Gamma }`$ is represented by $`1`$.. As shown in section $`2`$, particular projective representations must be used when dealing with orbifolds with discrete torsion. A simple and important property of projective representations is that there are no non-trivial<sup>10</sup> Any group $`\mathrm{\Gamma }`$ can have projective representations. The important issue is whether a given projective representation can be redefined to become a vectorial one. This will become more clear in section $`2`$. one-dimensional projective representations. Therefore, given a model with orbifold group $`\mathrm{\Gamma }`$ and non-trivial $`H^2(\mathrm{\Gamma },U(1))`$, the charge of the minimal D6-brane configuration when discrete torsion is turned on is given in terms of the charge of the minimal D6-brane configuration when discrete torsion is turned off by
$$Q_{dis.tors.}=d_R^{proj}Q_{convent.}.$$
$`d_R^{proj}>1`$ is the dimension of the smallest irreducible projective representation of $`\mathrm{\Gamma }`$. Thus, the charge of a D6-brane wrapping the entire orbifold is always larger when one considers orbifolds with discrete torsion than when one considers conventional orbifolds.
The correlation between discrete torsion in the closed string sector and the use of projective representations on open strings provides a natural description of fractional branes in these models. For simplicity, let’s consider D0-branes sitting at a point in a conventional non-compact orbifold $`C^3/\mathrm{\Gamma }`$. The charge vector of any zero-brane state lies in a charge lattice generated by a basis of charge vectors. Each irreducible representation of $`\mathrm{\Gamma }`$ is associated with a basis vector of the charge lattice. This result can be shown both from CFT and the K-theory approach to D-brane charges using equivariant K-theory . The closed string spectrum yields a massless Ramond-Ramond one-form potential for each twisted sector, but there are as many twisted sectors as irreducible representations of $`\mathrm{\Gamma }`$, so that indeed one can associate a generator of the charge lattice with each irreducible representation. A particular zero-brane state is uniquely specified by the choice of representation of $`\mathrm{\Gamma }`$ on its Chan-Paton factors, but any representation of $`\mathrm{\Gamma }`$ can be uniquely decomposed into a particular sum of its irreducible ones. Therefore, the states associated with the irreducible representations can be used as a basis of zero-brane states. This is realized by equivariant K-theory since $`\text{K}_\mathrm{\Gamma }(C^3)\simeq R(\mathrm{\Gamma })`$, where $`R(\mathrm{\Gamma })`$ is the representation ring of $`\mathrm{\Gamma }`$. The CFT argument goes through in the presence of discrete torsion. That is, any zero-brane state has a unique decomposition in terms of the states associated with the irreducible projective representations. The question is whether there are as many massless Ramond-Ramond one-form fields as irreducible projective representations, so that one can associate a generator of the charge lattice to each irreducible representation. This a priori seems non-trivial since the projection in the twisted sector is different<sup>11</sup> See section $`2`$ for more details. in the presence of discrete torsion and generically the projection removes these massless fields. The matching between massless Ramond-Ramond fields and irreducible projective representations follows in an interesting way from the algebraic properties of discrete torsion. It turns out that the number of irreducible projective representations of $`\mathrm{\Gamma }`$ equals the number of $`c`$-regular<sup>12</sup> A group element $`g_i\in \mathrm{\Gamma }`$ is $`c`$-regular if $`c(g_i,g_j)=c(g_j,g_i)`$ for all $`g_j\in \mathrm{\Gamma }`$ (we take $`\mathrm{\Gamma }`$ abelian for simplicity), where $`c`$ is a cocycle determining the projective representations $`\gamma (g_i)\gamma (g_j)=c(g_i,g_j)\gamma (g_ig_j)`$ and the discrete torsion phase $`ϵ(g_i,g_j)=c(g_i,g_j)/c(g_j,g_i)`$. $`ϵ=1`$ for $`c`$-regular elements. See sections $`2`$ and $`3`$ for more details. elements of $`\mathrm{\Gamma }`$ . Moreover, the closed string spectrum in the twisted sectors associated with $`c`$-regular elements is identical<sup>10</sup> to the corresponding twisted sector spectrum in the conventional orbifold (without discrete torsion), which does have massless Ramond-Ramond fields. Thus, each irreducible projective representation is associated with a generator in the charge lattice even when there is non-trivial discrete torsion. This intuitive result, which follows from algebraic properties of cocycles, ties together in a nice way the effects of discrete torsion on open and closed strings.
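A simple illustration is $`\mathrm{\Gamma }=Z_2\times Z_2`$ with its single non-trivial cocycle class, for which $`ϵ(g,h)=(-1)^{ab^{'}-a^{'}b}`$ with $`g=g_1^ag_2^b`$ and $`h=g_1^{a^{'}}g_2^{b^{'}}`$. Only the identity is $`c`$-regular, and correspondingly there is a single irreducible projective representation, the two-dimensional one generated by the Pauli matrices $`\gamma (g_1)=\sigma _1`$ and $`\gamma (g_2)=\sigma _3`$ (indeed $`\sigma _1\sigma _3=-\sigma _3\sigma _1`$, so $`ϵ(g_1,g_2)=-1`$). The zero-brane charge lattice is then one-dimensional, matching the single $`c`$-regular sector.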
From this CFT result, one is naturally led to conjecture that the K-theory of twisted cross products $`K_\mathrm{\Gamma }^{[c]}(C^3)\simeq R_{[c]}(\mathrm{\Gamma })`$, where now $`R_{[c]}(\mathrm{\Gamma })`$ denotes the module of projective representations of $`\mathrm{\Gamma }`$ with cocycle $`c`$.
The organization of the rest of the paper is the following. In section $`2`$ we explain the inclusion of discrete torsion in closed string orbifolds and relate its properties to the topology of the orbifold group $`\mathrm{\Gamma }`$. We analyze D-branes in these orbifolds and derive from worldsheet consistency conditions the necessity to use projective representations when analyzing open strings in orbifolds with discrete torsion. In section $`3`$ we find the orbifold analog of the result by Freed and Witten , present several examples and describe the charges of fractional branes in orbifolds with discrete torsion.
2. Open and Closed strings in Orbifolds with Discrete Torsion
The dynamics of a D-brane at an orbifold singularity provides a simple example of how the geometry of space-time is encoded in the D-brane worldvolume theory (for a partial list of references see [31–41] and for interesting applications to AdS/CFT see ). The low energy gauge theory on the brane is found by quantizing both open and closed strings on the orbifold . Closed string modes appear as parameters in the gauge theory, such as in Fayet-Iliopoulos terms and in the superpotential. Open string modes provide gauge fields and scalars which describe the fluctuations of the brane. In this section we will show that consistency of interactions between open and closed strings requires embedding an appropriate projective representation of the orbifold group on the open string Chan-Paton factors when studying orbifolds with discrete torsion<sup>13</sup> Recently, have considered the gauge theory on branes on an orbifold with discrete torsion.. We will start by briefly explaining the essentials of discrete torsion and describing the closed string spectrum of these models. This will be crucial in determining the appropriate projection on open strings.
The spectrum of closed strings on an orbifold $`X/\mathrm{\Gamma }`$ – with abelian $`\mathrm{\Gamma }`$ – is found by quantizing strings that are closed up to the action of $`\mathrm{\Gamma }`$ and projecting onto $`\mathrm{\Gamma }`$ invariant states. When $`\mathrm{\Gamma }`$ is an abelian group<sup>14</sup> In this paper we shall consider $`\mathrm{\Gamma }`$ abelian only. It is straightforward to generalize to non-abelian groups., one must quantize and project $`|\mathrm{\Gamma }|`$ closed strings. This is reflected in the partition function of the orbifold, which has $`|\mathrm{\Gamma }|^2`$ terms corresponding to all the possible twists along the $`\sigma `$ and $`\tau `$ directions of the worldsheet. One loop modular invariance allows each term in the partition function to be multiplied by a phase
$$Z=\sum _{g_i,g_j\in \mathrm{\Gamma }}ϵ(g_i,g_j)Z_{(g_i,g_j)},$$
such that $`ϵ(g_i,g_j)`$ is invariant under an $`SL(2,Z)`$ transformation<sup>15</sup> That is $`ϵ(g_i,g_j)=ϵ(g_i^ag_j^b,g_i^cg_j^d)`$ where $`\left(\begin{array}{cc}a& b\\ c& d\end{array}\right)\in SL(2,Z)`$. and $`Z_{(g_i,g_j)}`$ is the partition function of a string closed up to the action of $`g_i\in \mathrm{\Gamma }`$ with an insertion of the action of $`g_j\in \mathrm{\Gamma }`$ in the trace.
As first noted by Vafa , modular invariance on higher genus Riemann surfaces together with factorization of loop amplitudes imposes very severe restrictions on the allowed phases. Orbifold models admitting these non-trivial phases are usually referred to as orbifolds with discrete torsion. As we shall briefly explain in a moment, whether a particular orbifold model admits such a generalization depends on the topology of the discrete group $`\mathrm{\Gamma }`$.
In , Vafa showed that $`ϵ`$ must furnish a one dimensional representation<sup>16</sup> For non-abelian $`\mathrm{\Gamma }`$, $`ϵ(g_i,g_j)`$ must be a one dimensional representation of the stabilizer subgroup $`N_{g_i}\subset \mathrm{\Gamma }`$, where $`N_{g_i}=\{g_j\in \mathrm{\Gamma }:g_ig_j=g_jg_i\}`$. of $`\mathrm{\Gamma }`$
$$ϵ(g_i,g_jg_k)=ϵ(g_i,g_j)ϵ(g_i,g_k)$$
for each $`g_i\in \mathrm{\Gamma }`$. This provides a natural way to take $`ϵ`$ into account when computing the closed string spectrum. In orbifolds with discrete torsion, the spectrum in the $`g_i`$ twisted sector is obtained by keeping those states $`|s>_i`$ in the single string Hilbert space that satisfy
$$g_j|s>_i=ϵ(g_i,g_j)|s>_i\quad \forall g_j\in \mathrm{\Gamma }$$
States satisfying (2.1) transform in a one dimensional representation of $`\mathrm{\Gamma }`$. In this language, the spectrum of conventional orbifolds transforms in the trivial one-dimensional representation of $`\mathrm{\Gamma }`$ where $`ϵ\equiv 1`$.
Discrete torsion is intimately connected with the topology of $`\mathrm{\Gamma }`$ via
$$ϵ(g_i,g_j)=\frac{c(g_i,g_j)}{c(g_j,g_i)}.$$
Here $`c\in U(1)`$ is a two-cocycle, which is a collection of $`|\mathrm{\Gamma }|^2`$ phases satisfying the following $`|\mathrm{\Gamma }|^3`$ relations
$$c(g_i,g_jg_k)c(g_j,g_k)=c(g_ig_j,g_k)c(g_i,g_j),g_i,g_j,g_k\mathrm{\Gamma }.$$
The set of cocycles can be split into conjugacy classes via the following equivalence relation compatible with (2.1)
$$c^{}(g_i,g_j)=\frac{c_ic_j}{c_{ij}}c(g_i,g_j).$$
One can show from the definition of discrete torsion in (2.1) that indeed $`ϵ`$ is a one dimensional representation<sup>17</sup> One can also show that $`ϵ(g_i,g_i)=1`$ and $`ϵ(g_i,g_j)ϵ(g_j,g_i)=1`$. of $`\mathrm{\Gamma }`$. Moreover, the discrete torsion phase (2.1) is the same for cocycles in the same conjugacy class. Therefore, the number of different orbifold models that one can construct is given by the number of conjugacy classes of cocycles. Topologically, equivalence classes of cocycles of $`\mathrm{\Gamma }`$ are determined by its second cohomology <sup>18</sup> The map $`c:\mathrm{\Gamma }\times \mathrm{\Gamma }\times \mathrm{}\times \mathrm{\Gamma }\to U(1)`$ ($`n`$ factors of $`\mathrm{\Gamma }`$) is an $`n`$-cochain. The set of all $`n`$-cochains forms an abelian group $`C^n(\mathrm{\Gamma },U(1))`$ under multiplication. One can construct a coboundary operator $`d_{n+1}:C^n(\mathrm{\Gamma },U(1))\to C^{n+1}(\mathrm{\Gamma },U(1))`$ such that $`d_{n+1}d_n=0`$ and write down a corresponding complex. One can also define the group $`Z^n(\mathrm{\Gamma },U(1))=\text{Ker}d_{n+1}`$ of $`n`$-cocycles and $`B^n(\mathrm{\Gamma },U(1))=\text{Im}d_n`$ of $`n`$-coboundaries. The $`n`$-th cohomology group is defined as usual as $`H^n(\mathrm{\Gamma },U(1))=\text{Ker}d_{n+1}/\text{Im}d_n`$. For $`n=2`$, a $`2`$-cochain satisfying (2.1) maps to the identity under $`d_3`$, so it is a 2-cocycle. Moreover, $`d_2c(g_i,g_j)=c_ic_j/c_{ij}`$, so the equivalence classes of cocycles with (2.1) as an equivalence relation are given by $`H^2(\mathrm{\Gamma },U(1))`$. Cocycles in the same conjugacy class are therefore cohomologous. group $`H^2(\mathrm{\Gamma },U(1))`$. Summarizing, given a discrete group $`\mathrm{\Gamma }`$ there are as many possibly different orbifold models that one can construct as there are elements in $`H^2(\mathrm{\Gamma },U(1))`$.
Placing D-branes in these vacua requires analyzing both open and closed strings in the orbifold. The closed string spectrum was summarized in the last few paragraphs in a language that will be convenient when considering open strings. The most general action on an open string is obtained by letting $`\mathrm{\Gamma }`$ act both on the interior on the string (the oscillators) and its end-points (the Chan-Paton factors). The open string spectrum is found by keeping all those states invariant under the combined action of $`\mathrm{\Gamma }`$
$$|s,ab>=\gamma (g_i)_{aa^{'}}^{-1}|g_is,a^{'}b^{'}>\gamma (g_i)_{b^{'}b},$$
where $`s`$ is an oscillator state and $`ab`$ is a Chan-Paton state. Consistent action on the open string state and completeness of Chan-Paton wavefunctions demand $`\mathrm{\Gamma }`$ to be embedded on Chan-Paton factors by matrices that represent $`\mathrm{\Gamma }`$ up to a phase
$$\gamma (g_i)\gamma (g_j)\propto \gamma (g_ig_j).$$
This seems to leave some arbitrariness since $`\mathrm{\Gamma }`$ may have several classes of representations. As we will show shortly, the arbitrariness is removed once closed strings are also taken into account. In particular $`\mathrm{\Gamma }`$ may have several classes of projective representations where group multiplication is realized only up to a phase. The most general such representation is given by
$$\gamma (g_i)\gamma (g_j)=c(g_i,g_j)\gamma (g_ig_j),$$
where $`c\in U(1)`$. Associativity of matrix multiplication forces $`c`$ to satisfy the cocycle condition (2.1). Moreover, if $`c`$ satisfies (2.1), so does $`c^{'}`$ defined in (2.1). The corresponding representation is trivially found to be $`\gamma ^{'}(g_i)=c_i\gamma (g_i)`$. Therefore, the different classes of projective representations of $`\mathrm{\Gamma }`$ are also measured by $`H^2(\mathrm{\Gamma },U(1))`$. Moreover, the invariant open string spectrum (2.1) of the orbifold model only depends on the cohomology class of the cocycle and not on the particular representative one chooses. This is in complete analogy with the closed string discussion, indicating that projective representations should be used when describing orbifolds with discrete torsion.
We will now show that once we make a particular choice of discrete torsion $`ϵ`$ in (2.1) for the closed strings, the action on the open string Chan-Paton factors is uniquely determined to be a projective representation (2.1) with cocycle $`c`$. This follows from a worldsheet CFT condition demanding $`\mathrm{\Gamma }`$ to be a symmetry of the OPE. The action of $`\mathrm{\Gamma }`$ on open and closed strings is consistent only if $`\mathrm{\Gamma }`$ is conserved by interactions. We already know that this is the case for interactions involving only closed strings. One must also demand consistency of open-closed string interactions, that is, $`\mathrm{\Gamma }`$ has to be conserved by an open-closed string amplitude<sup>19</sup> A similar restriction was imposed by Polchinski in orientifold models.. Let us consider for concreteness the transition between a Ramond-Ramond closed string state in the $`g_i`$-th twisted sector and a photon arising from the open string ending on a D-brane transverse to the orbifold. To lowest order in the string coupling this amplitude arises on the disk. The closed string vertex operator is built out of a twist field which creates a cut from its location inside the disk to the boundary of the disk. Fields jump across the cut by the orbifold action $`g_i`$, which includes the action of $`\gamma (g_i)`$ on the Chan-Paton matrix $`\lambda `$ of the open string gauge field. This amplitude is completely determined by Lorentz invariance
$$\text{tr}(\gamma (g_i)\lambda )<V_\alpha ^i(0)\stackrel{~}{V_\beta ^i(0)}V^\mu (1)>,$$
where $`V_\alpha ^i,\stackrel{~}{V_\beta ^i}`$ are the right and left moving parts of the $`g_i`$-th twisted Ramond-Ramond vertex operator and $`V^\mu `$ is the vertex operator for the photon. Consistency requires invariance of this amplitude under the action of $`\mathrm{\Gamma }`$. As mentioned earlier, the model is not specified until we choose a particular discrete torsion $`ϵ`$ on the closed string. Therefore, taking into account how $`\mathrm{\Gamma }`$ acts on closed string states (2.1) and the usual adjoint action (2.1) on open strings Chan-Paton factors, the amplitude (2.1) transforms under the action of $`g_j`$ as
$$\text{tr}(\gamma (g_i)\gamma (g_j)^{-1}\lambda \gamma (g_j))ϵ(g_i,g_j)<V_\alpha ^i(0)\stackrel{~}{V_\beta ^i(0)}V^\mu (1)>.$$
Invariance under $`\mathrm{\Gamma }`$ requires setting (2.1) and (2.1) equal, which, after writing the discrete torsion phase in terms of cocycles, gives the following constraint
$$\gamma (g_i)\gamma (g_j)c(g_j,g_i)=c(g_i,g_j)\gamma (g_j)\gamma (g_i).$$
This constraint is satisfied by choosing a projective representation (2.1). Summarizing, we have shown from simple worldsheet principles that, given an orbifold with discrete torsion (2.1), the action of the orbifold group on open strings is determined by the corresponding cocycle<sup>20</sup> There seems to be some arbitrariness in the projective representation one chooses. The worldsheet consistency condition only determines the cohomology class of the cocycle but does not pick a particular representative. This freedom, however, does not affect the spectrum of the orbifold model..
It is interesting to note that constraints on $`ϵ`$ arise from a two-loop effect on closed strings but that consistent open-closed string interactions at tree level determine the action on the open strings.
3. Examples and D-brane Charges
In this section we will work out a general class of examples and develop some of the relevant properties of projective representations of discrete groups that are needed to show the results anticipated in sections $`1`$ and $`2`$. As mentioned in section $`2`$, a discrete group $`\mathrm{\Gamma }`$ admits projective representations if $`H^2(\mathrm{\Gamma },U(1))`$ is non-trivial. The simplest abelian group admitting non-trivial projective representations – or equivalently, giving rise to discrete torsion in orbifolds – is $`\mathrm{\Gamma }=Z_n\times Z_n^{'}`$. The allowed classes of representations are labeled by $`H^2(\mathrm{\Gamma },U(1))\simeq Z_d`$, where $`d=gcd(n,n^{'})`$ is the greatest common divisor of $`n`$ and $`n^{'}`$. Thus, a priori, there are $`d`$ different orbifold models one can define.
A basic definition in the theory of projective representations is that of a $`c`$-regular element. The number of irreducible projective representations of $`\mathrm{\Gamma }`$ with cocycle $`c`$ equals the number of $`c`$-regular elements of $`\mathrm{\Gamma }`$ <sup>21</sup> This is very different to the case of vector representations, for which there are as many irreducible representations as there are conjugacy classes in the discrete group.. A group element $`g_i\mathrm{\Gamma }`$ – for abelian $`\mathrm{\Gamma }`$ – is $`c`$-regular if
$$c(g_i,g_j)=c(g_j,g_i)\quad \forall g_j\in \mathrm{\Gamma }.$$
This definition is independent of the representative of the cocycle class. Thus, the number $`N_c`$ of irreducible projective representations with cocycle class $`c`$ is given by the following formula<sup>22</sup> We use the fact that any non-trivial one-dimensional representation yields zero when one sums over all group elements.
$$N_c=\frac{1}{|\mathrm{\Gamma }|}\sum _{g_i,g_j\in \mathrm{\Gamma }}\frac{c(g_i,g_j)}{c(g_j,g_i)}=\frac{1}{|\mathrm{\Gamma }|}\sum _{g_i,g_j\in \mathrm{\Gamma }}ϵ(g_i,g_j).$$
We have used (2.1) to write the above formula in terms of the discrete torsion phases. It is clear then that the closed string spectrum for the sectors twisted by $`c`$-regular elements is the same as for conventional orbifolds, so that we have as many irreducible representations as massless Ramond-Ramond fields of a given rank. As explained in the section $`1`$, this prediction should follow from the algebraic K-theory group $`\text{K}_\mathrm{\Gamma }^{[c]}(X)`$ of twisted cross products.
Let’s consider in some detail the example $`Z_n\times Z_n^{'}`$. The discrete torsion phases appearing in the closed string partition function correspond to one dimensional representations of $`Z_n\times Z_n^{'}`$. If we let $`g_1`$ be the generator of $`Z_n`$ and $`g_2`$ the generator of $`Z_n^{'}`$, a general group element can be written as $`g_1^ag_2^b`$, where $`a`$ and $`b`$ are integers. Then, the allowed discrete torsion phases are
$$ϵ(ab,a^{'}b^{'})=\alpha ^{m(ab^{'}-a^{'}b)},\qquad m=0,\dots ,d-1,$$
where $`\alpha =\mathrm{exp}(2\pi i/d)`$ and $`d=gcd(n,n^{'})`$. As expected, there are $`d`$ different phases one can associate to the closed string partition function (2.1).
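A representative cocycle in the class labelled by $`m`$ is $`c(g_1^ag_2^b,g_1^{a^{'}}g_2^{b^{'}})=\alpha ^{mab^{'}}`$ (this particular representative is an assumption; any cohomologous choice gives the same phases). The short Python sketch below verifies the cocycle condition and that this representative reproduces the discrete torsion phases above:
import cmath
from math import gcd
n, npr, m = 2, 4, 1                        # example: Z_2 x Z_4 with torsion label m
d = gcd(n, npr)
alpha = cmath.exp(2j * cmath.pi / d)
G = [(a, b) for a in range(n) for b in range(npr)]
def mul(g, h):                             # group law of Z_n x Z_n'
    return ((g[0] + h[0]) % n, (g[1] + h[1]) % npr)
def c(g, h):                               # representative cocycle (assumed form)
    return alpha ** (m * g[0] * h[1])
cocycle = all(abs(c(g, mul(h, k)) * c(h, k) - c(mul(g, h), k) * c(g, h)) < 1e-12
              for g in G for h in G for k in G)
torsion = all(abs(c(g, h) / c(h, g) - alpha ** (m * (g[0] * h[1] - h[0] * g[1]))) < 1e-12
              for g in G for h in G)
print(cocycle, torsion)                    # -> True True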
The study of D-branes in these backgrounds requires analyzing the representation theory<sup>23</sup> This example has also been considered recently by . of $`Z_n\times Z_n^{'}`$. For our purposes, we only need to find the number of irreducible representations in a given cohomology class and their dimensionality. We can use (3.1) and (3.1) to find how many of them there are. Let $`p`$ be the smallest non-zero integer such that
$$\mathrm{exp}(\frac{2\pi imp}{d})=1,$$
then the sum (3.1) can be split into sums of blocks of $`p`$ elements. Usual vector representations correspond to $`p=1`$. If we perform the sum over, say, $`a`$ and $`b`$ for each block we get $`0`$ except when $`a^{'}`$ and $`b^{'}`$ are multiples of $`p`$, for which the sum over $`a`$ and $`b`$ over a block of $`p`$ elements just gives $`p^2`$. Since there are $`\frac{n}{p}`$ and $`\frac{n^{'}}{p}`$ blocks of $`p`$ elements for the sums over $`a`$ and $`b`$ and over $`a^{'}`$ and $`b^{'}`$ respectively, the total sum yields
$$N_c=\frac{1}{nn^{'}}\left(\frac{nn^{'}}{p^2}\right)^2p^2=\frac{nn^{'}}{p^2}.$$
Therefore, there are $`N_c=nn^{'}/p^2`$ irreducible projective representations with cocycle $`c`$ for $`Z_n\times Z_n^{'}`$. The dimensionality of each irreducible representation can be obtained from the fact that the regular representation can be decomposed in terms of irreducible ones
$$|\mathrm{\Gamma }|=\sum _{a=1}^{N_c}d_{R_a}^2,$$
where $`R_a`$ labels the different irreducible representations. Thus, each representation is $`p`$-dimensional. Usual vector representations ($`c\equiv 1`$) are one-dimensional and all irreducible projective ones are larger.
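For $`Z_n\times Z_n^{'}`$ these $`p`$-dimensional representations can be realized explicitly with clock and shift matrices. The numpy sketch below (an illustration; this realization is not unique) checks the projective algebra, the counting $`N_c=nn^{'}/p^2`$, and the minimal untwisted charge $`Q=d_R/|\mathrm{\Gamma }|`$ for one choice of parameters:
import numpy as np
from math import gcd
n, npr, m = 4, 4, 2                      # example: Z_4 x Z_4 with torsion label m
d = gcd(n, npr)
p = d // gcd(m, d)                       # smallest p with exp(2*pi*i*m*p/d) = 1
w = np.exp(2j * np.pi * m / d)           # a primitive p-th root of unity
Q = np.diag([w**j for j in range(p)])    # "clock" matrix, gamma(g_1)
P = np.roll(np.eye(p), 1, axis=0)        # "shift" matrix, gamma(g_2)
print(np.allclose(Q @ P, w * (P @ Q)))   # True: gamma(g_1)gamma(g_2) = eps(g_1,g_2) gamma(g_2)gamma(g_1)
print(np.allclose(np.linalg.matrix_power(Q, n), np.eye(p)),
      np.allclose(np.linalg.matrix_power(P, npr), np.eye(p)))   # group law holds up to phases
print("p =", p, " N_c =", n * npr // p**2, " minimal charge =", p, "/", n * npr)
For $`n=n^{'}=4`$ and $`m=2`$ this gives $`p=2`$ and $`N_c=4`$, i.e. four two-dimensional irreducible projective representations, and a minimal six-brane charge $`2/16`$ instead of the $`1/16`$ of the conventional orbifold.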
The conclusion that the minimal charge of a D-brane configuration wrapping the entire compact orbifold is an integer multiple larger for orbifolds with discrete torsion can be verified by a simple disk amplitude. We want to compute the charge under the untwisted sector Ramond-Ramond field corresponding to a wrapped D6-brane. This can be computed by inserting the untwisted six-brane Ramond-Ramond vertex operator on the disk. We will sketch the computation and refer to for more details. The vertex operator has to be in the $`(-3/2,-1/2)`$ picture to soak up the background superghost charge on the disk. In this picture the Ramond-Ramond potential $`C_{\mu _0\dots \mu _6}`$ appears in the vertex operator. The amplitude is multiplied by the trace of the representation acting on Chan-Paton factors for the identity element (to compute the charge under the $`g_i`$ Ramond-Ramond field one multiplies by the trace of the representation for $`g_i`$). The amplitude can easily be computed by conformally mapping the disk onto the upper half plane and imposing the appropriate boundary conditions. The final result is
$$Q=\frac{d_R}{|\mathrm{\Gamma }|},$$
where $`d_R`$ is the dimension of the representation considered. This formula applies both for conventional orbifolds and for orbifolds with discrete torsion. In the first case one must use vector representations and in the second projective representations. Since the smallest irreducible vector representation of $`\mathrm{\Gamma }`$ is one-dimensional but the smallest irreducible projective representation is larger, this shows that indeed the minimal D-brane charge allowed for orbifolds with discrete torsion is bigger than for conventional orbifolds.
Acknowledgments
I would like to express my gratitude to D.E. Diaconescu, M. Douglas, E. Gimon, S. Gukov and E. Witten for enlightening discussions. J.G. is supported in part by the DOE under grant no. DE-FG03-92-ER 40701.
References
\[1\] L. Dixon, J. Harvey, C. Vafa and E. Witten, “Strings on Orbifolds I and II”, Nucl. Phys. B261 (1985) 678 and Nucl. Phys. B274 (1986) 285.
\[2\] P. Candelas, G.T. Horowitz, A. Strominger and E. Witten, “Vacuum Configurations for Superstrings”, Nucl. Phys. B258 (1985) 46.
\[3\] L. Dixon, D. Friedan, E. Martinec and S.H. Shenker, “The Conformal Field Theory of Orbifolds”, Nucl. Phys. B282 (1987) 13.
\[4\] C. Vafa, “Modular Invariance and Discrete Torsion on Orbifolds”, Nucl. Phys. B273 (1986) 592.
\[5\] E. Witten, “Baryons And Branes In Anti-de Sitter Space”, JHEP 9807 (1998) 006, hep-th/9805112.
\[6\] E. Witten, “D-Branes And K-Theory”, JHEP 9812 (1998) 025, hep-th/9810188.
\[7\] D.S. Freed and E. Witten, “Anomalies in String Theory with D-branes”, hep-th/9907189.
\[8\] R. Minasian and G. Moore, “K-theory and Ramond-Ramond charge”, JHEP 9711 (1997) 002.
\[9\] A. Sen, “Stable Non-BPS States in String Theory”, JHEP 9806 (1998) 007; “Stable Non-BPS Bound States of BPS D-branes”, JHEP 9808 (1998) 010; “Tachyon Condensation on the Brane Antibrane System”, JHEP 9808 (1998) 012; “$`SO(32)`$ Spinors of Type I and Other Solitons on Brane-Antibrane Pair”, JHEP 9809 (1998) 023.
\[10\] O. Bergman and M.R. Gaberdiel, “Stable non-BPS D-particles”, Phys. Lett. B441 (1998) 133, hep-th/9806155.
\[11\] M. Frau, L. Gallot, A. Lerda and P. Strigazzi, “Stable non-BPS D-branes in Type I string theory”, hep-th/9903123.
\[12\] P. Hořava, “Type IIA D-Branes, K-Theory, and Matrix Theory”, Adv. Theor. Math. Phys. 2 (1999) 1373, hep-th/9812135.
\[13\] A. Kapustin, “D-branes in a topologically nontrivial B-field”, hep-th/9909089.
\[14\] A. Font, L.E. Ibáñez and F. Quevedo, “$`Z_n\times Z_n^{'}`$ Orbifolds and Discrete Torsion”, Phys. Lett. B217 (1989) 272.
\[15\] C. Vafa and E. Witten, “On Orbifolds with Discrete Torsion”, J. Geom. Phys. 15 (1995) 189, hep-th/9409188.
\[16\] P.S. Aspinwall and D.R. Morrison, “Chiral Rings Do Not Suffice: $`N=(2,2)`$ Theories with Nonzero Fundamental Group”, Phys. Lett. B334 (1994) 79, hep-th/9406032.
\[17\] P.S. Aspinwall and D.R. Morrison, “Stable Singularities in String Theory”, Commun. Math. Phys. 178 (1996) 115, hep-th/9503208.
\[18\] E.R. Sharpe, “Discrete Torsion and Gerbes I and II”, hep-th/9909108 and hep-th/9909120.
\[19\] D. Berenstein and R.G. Leigh, “Discrete Torsion, AdS/CFT and duality”, hep-th/0001055.
\[20\] M.R. Douglas, “D-branes and Discrete Torsion”, hep-th/9807235.
\[21\] S. Gukov and V. Periwal, “Dbrane Phase Transitions and Monodromy in K-theory”, hep-th/9908166.
\[22\] B. Blackadar, “K-Theory for Operator Algebras”, Springer-Verlag.
\[23\] I. Raeburn and D.P. Williams, “Pull-backs of $`C^{}`$-Algebras and Crossed Products by Diagonal Actions”, Trans. AMS 287 (1985) 755.
\[24\] D.E. Diaconescu and J. Gomis, “Fractional Branes and Boundary States in Orbifold Theories”, hep-th/9906242.
\[25\] J. Polchinski and Y. Cai, “Consistency of Open Superstring Theories”, Nucl. Phys. B296 (1988) 91; C. Callan, C. Lovelace, C. Nappi and S. Yost, “Loop Corrections to Superstring Equations of Motion”, Nucl. Phys. B308 (1988) 221; T. Onogi and N. Ishibashi, “Conformal Field Theories On Surfaces With Boundaries And Crosscaps”, Mod. Phys. Lett. A4 (1989) 161; N. Ishibashi, “The Boundary And Crosscap States In Conformal Field Theories”, Mod. Phys. Lett. A4 (1989) 251.
\[26\] D.E. Diaconescu, M.R. Douglas and J. Gomis, “Fractional Branes and Wrapped Branes”, JHEP 9802 (1998) 013, hep-th/9712230.
\[27\] M.R. Douglas, “Enhanced Gauge Symmetry in M(atrix) Theory”, JHEP 9707 (1997) 004, hep-th/9612126.
\[28\] H. Garcia-Compean, “D-branes in Orbifold Singularities and Equivariant K-Theory”, Nucl. Phys. B557 (1999) 480, hep-th/9812226.
\[29\] S. Gukov, “K-Theory, Reality, and Orientifolds”, hep-th/9901042.
\[30\] G. Karpilowsky, “Projective Representations of Finite Groups”, M. Dekker, 1985.
\[31\] M.R. Douglas and G. Moore, “D-branes, Quivers, and ALE Instantons”, hep-th/9603167.
\[32\] J. Polchinski, “Tensors from K3 Orientifolds”, Phys. Rev. D55 (1997) 6423, hep-th/9606165.
\[33\] M.R. Douglas, D. Kabat, P. Pouliot and S.H. Shenker, “D-branes and Short Distances in String Theory”, Nucl. Phys. B485 (1997) 85, hep-th/9608024.
\[34\] C.V. Johnson and R.C. Myers, “Aspects of Type IIB Theory on ALE Spaces”, Phys. Rev. D55 (1997) 6382, hep-th/9610140.
\[35\] M.R. Douglas, B.R. Greene and D.R. Morrison, “Orbifold Resolution by D-Branes”, Nucl. Phys. B506 (1997) 84, hep-th/9704151.
\[36\] K. Mohri, “D-Branes and Quotient Singularities of Calabi-Yau Fourfolds”, Nucl. Phys. B521 (1998) 161, hep-th/9707012.
\[37\] D.E. Diaconescu and J. Gomis, “Duality in Matrix Theory and Three Dimensional Mirror Symmetry”, Nucl. Phys. B517 (1998) 53, hep-th/9707019.
\[38\] K. Mohri, “Kähler Moduli Space of a D-Brane at Orbifold Singularities”, Commun. Math. Phys. 202 (1999) 669, hep-th/9806052.
\[39\] B.R. Greene, “D-Brane Topology Changing Transitions”, Nucl. Phys. B525 (1998) 284, hep-th/9711124.
\[40\] S. Mukhopadhyay and K. Ray, “Conifolds From D-branes”, Phys. Lett. B423 (1998) 247, hep-th/9711131.
\[41\] M.R. Douglas, “Topics in D-geometry”, hep-th/9910170.
\[42\] S. Kachru and E. Silverstein, “4d Conformal Field Theories and Strings on Orbifolds”, Phys. Rev. Lett. 80 (1998) 4855, hep-th/9802183.
\[43\] A. Lawrence, N. Nekrasov and C. Vafa, “On Conformal Theories in Four Dimensions”, Nucl. Phys. B533 (1998) 199, hep-th/9803015.
\[44\] M.R. Douglas and B. Fiol, “D-branes and Discrete Torsion II”, hep-th/9903031.
\[45\] S. Mukhopadhyay and K. Ray, “D-branes on Fourfolds with Discrete Torsion”, hep-th/9909107.
\[46\] E.G. Gimon and J. Polchinski, “Consistency Conditions for Orientifolds and D-Manifolds”, Phys. Rev. D54 (1996) 1667, hep-th/9601038.
String breaking in zero-temperature lattice QCD
## I Introduction
The breaking of a long flux tube between two static quarks into a quark-antiquark pair is one of the most fundamental phenomena in QCD. Because of its highly non-perturbative nature it has defied analytical calculation, while its large scale, e.g. when compared to the sizes of composite particles in the theory, has caused difficulties in standard nonperturbative methods. Thus string breaking has remained a widely publicized feature of the strong interaction that has never, apart from rough models, been reproduced from the theory.
String breaking can occur in hadronic decays of $`Q\overline{Q}`$ mesons and is especially relevant when this meson is lying close to a meson-antimeson ($`Q\overline{q}\overline{Q}q`$) threshold. For the heaviest quarks involved in these decays applying heavy quark effective theory is a reasonable approximation.
Due to recent advances in both computational hardware and algorithms, much interest in lattice QCD has been devoted to attempts to observe string breaking. The direct approach of trying to see the flattening in the static $`Q\overline{Q}`$ potential at large separation has been successful only at temperatures close to the critical one . The failure of this Wilson loop method at zero temperature seems to be mainly due to the poor overlap of the operator(s) with the $`Q\overline{q}\overline{Q}q`$ state . In much more easily calculable adjoint $`SU(2)`$ and $`SU(2)`$+Higgs models without fermions a variational approach with explicit inclusion of both the Wilson loop and scalar bound state operators has worked well . In three-dimensional SU(2) with staggered fermions an improved action approach has been claimed to be successful with just Wilson loops . In QCD with fermions effective operators for $`Q\overline{q}\overline{Q}q`$ systems are, however, hard to implement; part of the problem is the exhausting computational effort required to get sufficient statistics for light quark propagators with conventional techniques for fermion matrix inversion.
A new technique of calculating estimates of light quark propagators using Monte Carlo techniques on pseudo-fermionic field configurations with maximal variance reduction has been found to be very useful for systems including heavy quarks taken as static, such as single heavy-light mesons and baryons and also two heavy-light mesons . The application of this method to the string breaking problem seems natural.
Our previous work has concentrated on bound states of two heavy-light mesons and mechanisms of their attraction for various values of light quark isospin $`I_q`$ and spin $`S_q`$. Here we continue by studying the $`Q\overline{q}\overline{Q}q`$ system for $`I_q,S_q=(0,1)`$ together with the $`Q\overline{Q}`$ system at distances around the string breaking point $`r_b\approx 1.2`$ fm, where the energies of the two systems are equal if there is no mixing.
## II Quantum numbers and hybrid mesons
When a quark and an antiquark are created from the vacuum they should have the $`0^{++}`$ quantum numbers of the vacuum. A quark-antiquark pair with $`J^{PC}=0^{++}`$ has lowest orbital angular momentum $`L=1`$ and is in a spin triplet. This so-called Quark Pair Creation or $`{}^{3}P_{0}`$ model was combined with a harmonic oscillator flux tube model by Isgur et al. to describe local breaking (formation) of a flux tube. In our calculation the symmetries of the static approximation for the heavy quark automatically lead to only the light quark spin triplet being nonzero, which can be seen from the Dirac spin structure of the heavy-light diagram in Fig. 1.
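This counting can be made explicit: for a fermion-antifermion pair the standard relations are $`P=(-1)^{L+1}`$ and $`C=(-1)^{L+S}`$, so the vacuum quantum numbers $`0^{++}`$ require both $`L+1`$ and $`L+S`$ to be even. The lowest possibility is
$$L=1,\;S=1,\;J=0:\qquad P=(-1)^{2}=+1,\quad C=(-1)^{2}=+1,$$
i.e. the $`{}^{3}P_{0}`$ configuration.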
String breaking can also occur for hybrid mesons where the gluon field between two quarks is in an excited state. Table I presents the couplings of some low-lying gluonic excitations to the quantum numbers of the resulting meson-antimeson in the static limit for the heavy quarks. In this limit CP and $`J_z`$, where $`z`$ is the interquark axis, are conserved. The table lists the representations of these symmetries (with $`\mathrm{\Sigma },\mathrm{\Pi },\mathrm{\Delta }`$ corresponding to $`J_z=0,1,2`$ respectively and $`g,u`$ corresponding to CP=$`\pm 1`$). For the lowest-lying excitation, which has $`\mathrm{\Pi }_u`$ symmetry with $`J_z=1`$, only non-zero angular momenta $`L+L^{}`$ for the resulting mesons $`B_L^{}`$, $`B_L`$ would be allowed, as $`S_q`$ has to be zero to generate a negative $`CP`$.
Here, as for the other symmetries, $`I_q=0`$ as we need the same flavour for the light quarks. The $`I_q=1`$ cases do not correspond to spontaneous breaking of a string in vacuum but a correlation of a $`Q\overline{Q}`$+$`q\overline{q}`$ system with a $`Q\overline{q}`$+$`\overline{Q}q`$ system at different times, i.e. the breaking of a string in the presence of a meson. When the table is extended to nonzero $`L`$ for the heavy quark plus antiquark (as opposed to $`L>0`$ for a single meson) the $`P,C`$ values get multiplied with $`(-1)^L`$. The energy levels also change; these retardation effects have been found to be relatively small .
Previously string breaking in hybrid mesons has been discussed from a phenomenological point of view using an extension of the approach of Isgur et al., i.e. a nonrelativistic flux-tube model with decay operators from the strong coupling limit of lattice gauge theory and the heavy quark expansion of QCD in Coulomb gauge . From this model two selection rules were given, the first one agreeing with the $`\mathrm{\Pi }_u`$ case in Table I; low-lying hybrids do not decay into identical mesons, the predominant channel being one $`s`$-wave and one $`p`$-wave meson. The second rule prohibits decay of spin singlet states into only spin singlets, which is not relevant to our calculation as our heavy quark spin decouples.
An important question for hybrid meson phenomenology is the nature of the lowest state for a given set of quantum numbers at a particular heavy quark separation; a hybrid $`Q\overline{Q}`$ meson, a ground-state $`Q\overline{Q}`$ meson with a $`q\overline{q}`$ meson or a system of two heavy-light mesons. It is also useful to know the strength of mixing between these states. This information can be obtained, in principle, from lattice calculations and used to decide what sort of bound states are most likely to exist and what their decays will be (see also Ref. ).
## III Lattice calculation
We use SU(3) lattice QCD on a $`16^3\times 24`$ lattice with the Wilson gauge action and the Sheikholeslami-Wohlert quark action with a nonperturbative “clover coefficient” $`c_{SW}=1.76`$ and $`\beta =5.2`$ with two degenerate flavours of both valence and sea quarks. The measurements were performed on 20 gauge configurations. The gauge configurations are the same as in Ref. for $`\kappa =0.1395`$. With these parameters we get a lattice spacing $`a\simeq 0.14`$ fm and a meson mass ratio $`M_{PS}/M_V=0.72`$.
Estimators of propagators of quarks from point $`n`$ to point $`m`$ can be obtained from pseudofermion fields $`\varphi `$. For each gauge configuration a sample of the pseudofermion fields is generated, and the propagators are then obtained by Monte Carlo integration . Thus there is one Monte Carlo averaging for the gauge samples, and another one for the pseudofermion samples for each gauge sample. In order to reduce the statistical variance of the propagators a variance reduction method similar to multi-hit can be used ; such a reduction is essential in practice. Our variance reduction involves division of the lattice into two regions, whose boundary is kept fixed while the $`\varphi `$-fields inside are replaced by their multi-hit averages. We use 24 pseudofermionic configurations for each gauge configuration.
Figure 1 shows the diagrams involved in the calculation with the time axis in the horizontal direction. The solid lines are heavy quark propagators, which in the static approximation are just products of gauge field variables. The wiggly lines are light quark propagators, obtained essentially as a product of pseudofermionic variables from each end which have to be in different variance reduced regions .
In the large $`T`$ limit both the quark-antiquark and two-meson correlators should in principle approach $`e^{-E_0(R)T}`$ with $`E_0(R)`$ being the ground state energy of the system. In practice the Wilson loop has a very small overlap with the two-meson state, which leads to great practical difficulties in observing the flattening $`E_0(R)\to 2M_{Q\overline{q}}`$ from it at large $`R`$. The heavy-light term $`U`$ is necessary to obtain the correct ground state by explicitly including both quark-antiquark and two-meson states, and allows us to measure their overlap, which is crucial for string breaking to happen.
To estimate the ground (and excited) state energy of our observables we always use a variational basis formed from different degrees of spatial fuzzing of the operators. This allows the use of moderate values of $`T`$ instead of the infinite time limit to reduce excited state contributions. The resulting correlation matrix $`C(R,T)`$ is then diagonalised to get the eigenenergies.
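In practice this diagonalisation is a small generalized eigenvalue problem. The sketch below is our illustration, not the analysis code of this work, and the toy matrix is synthetic: it solves $`C(T)v=\lambda (T)C(T_0)v`$ and converts the eigenvalues into effective energies.

```python
import numpy as np
from scipy.linalg import eigh

def effective_energies(C, t0=1):
    """Variational (GEVP) energies from a correlation matrix C[t, i, j]:
    solve C(t) v = lam(t) C(t0) v, then aE_n(t) = log(lam_n(t)/lam_n(t+1))."""
    lam = np.array([np.sort(eigh(C[t], C[t0], eigvals_only=True))[::-1]
                    for t in range(C.shape[0])])
    return np.array([np.log(lam[t] / lam[t + 1])
                     for t in range(t0, C.shape[0] - 1)])

# Synthetic 2x2 example: two states (energies 0.5 and 0.9 in lattice units)
# with operator mixing, loosely mimicking a fuzzed QQbar / QqbarQbarq basis.
E = np.array([0.5, 0.9])
v = np.array([[1.0, 0.4], [0.3, 1.0]])
C = np.array([v @ np.diag(np.exp(-E * t)) @ v.T for t in range(1, 9)])
print(effective_energies(C)[-1])   # -> [0.5, 0.9]
```

For an exact two-state correlation matrix the log-ratios of consecutive eigenvalues reproduce the input energies exactly; with noisy lattice data one instead looks for plateaus in $`T`$.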
Due to the variance reduction method dividing the lattice into two halves in the time direction, the box diagram and the heavy-light correlator in Fig. 1 have to be turned “sideways” on the lattice; i.e., the time axis in the diagrams is taken to be one of the spatial axes to keep the light quark propagators going from one variance reduced volume to another. This induces technical complications that greatly increase the memory and CPU demands of the measurement program.
For two flavours the $`I_q=0`$ wavefunction is of the form $`(u\overline{u}+d\overline{d})/\sqrt{2}`$, which gives factors of $`1,\sqrt{2},2`$ for $`D,U,B`$ respectively. For light quark spin we get the triplet states as in Ref. .
In this first study we concentrate on the ground state breaking, i.e. the first row of Table I. Investigation of the hybrid meson breaking requires diagrams not included in Fig. 1, which involve the hybrid $`Q\overline{Q}`$ and $`Q\overline{Q}+\overline{q}q`$ operators. We estimate that for the $`\mathrm{\Pi }_u`$ excited state the excited string breaking happens in the same distance range as for the ground state due to the non-zero momenta of the resulting mesons (masses taken from Ref. ), which makes it harder to obtain sufficient accuracy as the spatial operators for excitations involve subtractions rather than sums of lattice paths.
## IV Results
### A Variational approach
A full variational matrix involving the Wilson loop, heavy-light correlator and the $`Q\overline{q}\overline{Q}q`$ correlators gives the ground and excited state energies and corresponding operator overlaps as a function of heavy quark separation, in analogy with the approach of Refs. for the adjoint string breaking and Refs. for the SU(2)+Higgs model. We use a local light quark creation (annihilation) operator and an extended version where a fuzzed path of link variables with length two separates the operator from the heavy quark line. For the link variables involved in the $`Q\overline{Q}`$ operators we have two fuzzing levels. The two $`Q\overline{Q}`$ and three $`Q\overline{q}\overline{Q}q`$ basis states then give a $`5\times 5`$ correlation matrix $`C(R,T)`$ that can be diagonalised. However, for our present statistics the full matrix gives a reasonable signal only for $`r<r_b`$.
In Figure 2 the results from a calculation using just the most fuzzed basis states for both $`Q\overline{Q}`$ and $`Q\overline{q}\overline{Q}q`$ (a $`2\times 2`$ matrix) are shown. At $`r_b`$ we would expect the ground and excited state energies to be separated by twice the mixing coefficient $`x`$ (see below). We observe a larger separation which is presumably due to our statistics not being sufficient to give accurate plateaus for the energies.
Although this full variational approach is in principle the most direct way to study string breaking, we find that it is possible to focus on string breaking explicitly, as we now discuss.
### B Mixing matrix element
In full QCD, there is mixing of energy levels between states coupling to Wilson lines (flux tube) and $`Q\overline{q}\overline{Q}q`$ states. To get the mixing matrix element the correlation between a Wilson line and a $`Q\overline{q}\overline{Q}q`$ operator has to be considered. In order to study the operator mixing from this heavy-light correlator one needs to use results (energies and couplings) from both diagonal operators separately: thus from the Wilson loop (with ground state contribution given by $`W(T)=w^2\mathrm{exp}[-V(R)T]`$) and the unconnected $`Q\overline{q}\overline{Q}q`$ correlator (e.g. $`D(T)=d^2\mathrm{exp}[-M(R)T]`$ from the ground state) where we use a variational basis to suppress excited states.
The ground state contribution to the heavy-light correlator can then be written as
$$U(T)=x(R)\sum _{t=0}^{T}w\,e^{-V(R)t}\,e^{-M(R)(T-t)}\,d+O(x^3)$$
(1)
In the quenched case the contributions from fermion loops inside the correlator are absent, removing the $`O(x^3)`$ terms in Eq. 1. The box term is expressed in the same manner as
$`B(T)`$ $`=`$ $`x^2(R){\displaystyle \sum _{t_1=0}^{T}}{\displaystyle \sum _{t_2\ge t_1}^{T}}d\,e^{-M(R)t_1}\,e^{-V(R)(t_2-t_1)}`$ (3)
$`\times e^{-M(R)(T-t_2)}\,d+O(x^4)`$
The operator mixing coefficient $`x`$ for the $`Q\overline{Q}`$ and $`Q\overline{q}\overline{Q}q`$ states can be extracted from these expressions. Near the string breaking point (where $`V(R)=M(R)`$), in the infinite time limit, only the ground state contributions survive. We use
$`x`$ $`=`$ $`{\displaystyle \frac{U(T)}{\sqrt{W(T)D(T)}}}\,{\displaystyle \frac{f^{T/2}}{1+\cdots +f^T}}+O(x^3)`$ (4)
$`=`$ $`\sqrt{{\displaystyle \frac{B(T)}{D(T)}}}\,{\displaystyle \frac{f^{T/2}}{\sqrt{1+\cdots +(T+1)f^T}}}+O(x^2)`$ (5)
The factors of $`f\equiv \mathrm{exp}(V(R)-M(R))`$ account for departures from the string breaking point.
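For illustration, the two estimators (4) and (5) can be coded directly; the following sketch (ours, with synthetic couplings and energies standing in for real lattice data) checks that both recover the input mixing $`x`$:

```python
import numpy as np

def x_from_U(W, D, U, V, M, T):
    """Eq. (4): x = U/sqrt(W*D) * f**(T/2) / (1 + f + ... + f**T)."""
    f = np.exp(V - M)
    return U[T] / np.sqrt(W[T] * D[T]) * f**(T / 2) \
        / sum(f**t for t in range(T + 1))

def x_from_B(D, B, V, M, T):
    """Eq. (5): x = sqrt(B/D) * f**(T/2) / sqrt(1 + 2f + ... + (T+1)f**T)."""
    f = np.exp(V - M)
    return np.sqrt(B[T] / D[T]) * f**(T / 2) \
        / np.sqrt(sum((t + 1) * f**t for t in range(T + 1)))

# Synthetic ground-state-dominated correlators from assumed couplings/energies:
w, d, x, V, M = 1.0, 1.0, 0.03, 0.61, 0.60
W = np.array([w * w * np.exp(-V * T) for T in range(12)])
D = np.array([d * d * np.exp(-M * T) for T in range(12)])
U = np.array([x * sum(w * np.exp(-V * t - M * (T - t)) * d
                      for t in range(T + 1)) for T in range(12)])
B = np.array([x * x * sum(d * np.exp(-M * t1 - V * (t2 - t1) - M * (T - t2)) * d
                          for t1 in range(T + 1) for t2 in range(t1, T + 1))
              for T in range(12)])
for T in (4, 8):   # consistency over several T values tests the assumptions
    print(x_from_U(W, D, U, V, M, T), x_from_B(D, B, V, M, T))  # both -> 0.03
```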
In the quenched case there is no mixing between the energy levels of the quark-antiquark and two-meson systems, and $`x`$ can be extracted using Eqs. 4,5. As $`x\ll 1`$ the non-leading terms in the expressions for $`x`$ are small and we may also use these formulas with our unquenched data - with a resulting decrease in errors compared to the full variational study of the preceding subsection.
Our assumption about neglecting excited state contributions can be tested by obtaining consistent results for $`x`$ from both relations for several $`T`$ values. To improve further on our estimate of $`x`$ we diagonalise separately $`W,D`$ and $`B`$ to enhance the ground state contributions and use the first two diagonalisations to extract the ground state of $`U`$. Our results with bootstrap errors can be seen in Table II. Assuming a constant $`x`$ for $`0.99\,\mathrm{fm}\le r\le 1.31`$ fm gives us a best estimate of $`x=0.033(6)/a=46(8)`$ MeV. This is about half of the value of $`x=100`$ MeV obtained using a strong coupling mixing model and the experimental $`\mathrm{\Upsilon }(4S)`$ decay rate .
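The bootstrap itself is generic; a minimal sketch, assuming per-configuration estimates of $`x`$ are available (the numbers below are invented stand-ins for the actual 20-configuration ensemble):

```python
import numpy as np

def bootstrap(samples, estimator, n_boot=1000, seed=0):
    """Resample configurations with replacement and re-run the estimator."""
    rng = np.random.default_rng(seed)
    n = len(samples)
    vals = [estimator(samples[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.mean(vals), np.std(vals)

# Stand-in: per-configuration estimates of x at one separation r.
x_per_config = np.random.default_rng(1).normal(0.033, 0.02, size=20)
mean, err = bootstrap(x_per_config, np.mean)
print(f"x = {mean:.3f} +/- {err:.3f} (lattice units)")
```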
Our analysis shows that the string breaking matrix element is small but non-zero. We are able to estimate it in two independent ways (using all four diagrams in Fig. 1), obtaining $`x=46(8)`$ MeV with light quarks that are around the strange quark mass. This is the first non-perturbative determination of the string breaking matrix element from QCD. Because of its small value, direct observation of string breaking from the spectrum is difficult to achieve.
Acknowledgement
We thank A.M. Green and K. Rummukainen for discussions. Some of the calculations, consuming 70 GB of disk and $`2\times 10^{16}`$ FLOPs, were performed with the excellent resources provided by the CSC in Espoo, Finland.
# Charmonium Production from the Secondary Collisions at LHC Energy
## 1 Secondary charmonium production
The initial energy density in ultrarelativistic heavy ion collisions at LHC energy exceeds by a few orders of magnitude the critical value required for quark-gluon plasma formation. Thus, according to Matsui and Satz , one expects the formation of charmonium bound states to be severely suppressed due to Debye screening. The $`c\overline{c}`$ pairs initially produced in hard parton scattering, however, due to charm conservation, will survive in the deconfined medium until the system reaches the critical temperature where the charm quarks hadronize, forming predominantly $`D`$ and $`\overline{D}`$ mesons. An appreciable fraction of $`c\overline{c}`$ pairs and consequently $`D`$,$`\overline{D}`$ mesons produced in Pb-Pb collisions at LHC energy can lead to an additional production of charmonium bound states due to reactions such as: $`D\overline{D^{*}}+D^{*}\overline{D}+D^{*}\overline{D^{*}}\to \psi +\pi `$ and $`D^{*}\overline{D^{*}}+D\overline{D}\to \psi +\rho `$, as first indicated in . In this work we present a quantitative description of the secondary $`J/\psi `$ and $`\psi ^{\prime }`$ production due to the above processes from the thermal hadronic medium created in Pb-Pb collisions at LHC energy.
## 2 Thermal production kinetics
The charmonium production cross section $`\sigma _{D\overline{D}\psi h}`$ can be related to the hadronic absorption of charmonium $`\sigma _{\psi hD\overline{D}}`$, through the detailed balance relation
$$\sigma _{D\overline{D}\to \psi h}=d_{D\overline{D}}\left(\frac{k_{\psi h}}{k_{D\overline{D}}}\right)^2\sigma _{\psi h\to D\overline{D}},$$
(1)
where $`k_{ab}^2=[s-(m_a+m_b)^2][s-(m_a-m_b)^2]/4s`$ denotes the square of the center of mass momentum of the corresponding reaction and $`h`$ stands for $`\rho `$ or $`\pi `$, whereas $`\psi `$ stands for $`J/\psi `$ or $`\psi ^{\prime }`$ mesons. The degeneracy factor $`d`$ takes the values: $`d_{D\overline{D^{*}}}=d_{D^{*}\overline{D}}=3/4`$, $`d_{D^{*}\overline{D^{*}}}=1/4`$ with pions in the final state and $`d_{D\overline{D^{*}}}=d_{D^{*}\overline{D}}=9/4`$, $`d_{D\overline{D}}=27/4`$, $`d_{D^{*}\overline{D^{*}}}=3/4`$ for the processes with $`\rho `$ mesons.
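A hedged sketch of eq. (1) in code (masses in GeV; the 2 mb absorption cross section is made up for the example, and only the kinematics and degeneracy bookkeeping follow the text):

```python
import numpy as np

M_D, M_DSTAR, M_PI, M_JPSI = 1.87, 2.01, 0.14, 3.10   # GeV

def k2(sqrt_s, ma, mb):
    """Squared c.m. momentum: [s-(ma+mb)^2][s-(ma-mb)^2]/(4s)."""
    s = sqrt_s**2
    return (s - (ma + mb)**2) * (s - (ma - mb)**2) / (4.0 * s)

def sigma_production(sqrt_s, sigma_abs, d, m1, m2, m3, m4):
    """Detailed balance, eq. (1): (m1, m2) -> (m3, m4)."""
    return d * k2(sqrt_s, m3, m4) / k2(sqrt_s, m1, m2) * sigma_abs

# Example: D Dbar* -> J/psi pi, with an assumed 2 mb absorption cross section.
print(sigma_production(3.9, 2.0, 3.0 / 4.0, M_D, M_DSTAR, M_JPSI, M_PI))
```

Just above the $`D\overline{D^{*}}`$ threshold the momentum ratio is large, so the production cross section is strongly enhanced relative to the absorption one.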
The magnitude of the charmonium absorption cross section on hadrons is, however, still not well under theoretical control. Four models have been proposed to estimate the $`J/\psi `$ dissociation cross section on pions: (1) the constituent quark model , (2) the comover model , (3) an effective hadronic Lagrangian and (4) a short distance QCD approach .
In the framework of the constituent quark model the $`J/\psi `$ absorption on a pion is viewed as a quark exchange process in which the charm quark trades places with a light quark. The cross section for the $`\pi +J/\psi \to D^{*}\overline{D}+D\overline{D^{*}}+D^{*}\overline{D^{*}}`$ process, in this approach, was calculated in the non-relativistic quark model in terms of the first Born approximation . The energy dependence of the above cross section for the final state $`D^{*}\overline{D}+D\overline{D^{*}}`$ is shown in fig. 1. One sees that the cross section abruptly increases just above threshold and reaches its peak value of about 6 mb. The large value of the cross section is mostly due to the particular modelling of the confining interaction between the quarks which is taken as attractive, independent of the colour quantum number of the affected quark pair.
The large charm quark mass, $`m_c\simeq 1.2-1.8`$ GeV, and $`J/\psi `$ binding energy, $`2M_D-M_{J/\psi }\simeq 640`$ MeV, in comparison to $`\mathrm{\Lambda }_{QCD}`$, as well as the small size of the $`J/\psi `$, $`r_{J/\psi }\simeq 0.2`$ fm $`\ll \mathrm{\Lambda }_{QCD}^{-1}`$, have been used as an argument to calculate the charmonium dissociation on light hadrons in terms of a QCD approach . The $`J/\psi h`$ cross section in this approach is expressed in terms of hadronic gluon distribution functions, making use of the short distance QCD method based on sum rules derived from the operator product expansion. The resulting energy dependence of the $`J/\psi `$ absorption cross section on pions calculated within QCD is shown in fig. 1. The hadronic gluon distribution functions are strongly suppressed at high gluon momenta. Thus, since the dissociation of $`J/\psi `$ requires hard gluons, the absorption cross section $`J/\psi \pi \to D\overline{D}`$ becomes very small just above threshold as seen in fig. 1. Recent analysis of $`J/\psi `$ photoproduction data confirms the relation between the charmonium-hadron interactions and the hadronic gluon structure indicating a small value of the $`J/\psi `$ absorption cross section on hadrons.
The scattering of $`J/\psi `$ on $`\pi `$ or $`\rho `$ meson can be also described as the exchange of open charm mesons between the charmonium and the incident particle. Near the kinematical threshold this exchange was modeled in terms of an effective meson Lagrangian . In fig. 1 we show the energy dependence of the cross section for $`J/\psi `$ dissociation in this approach. The result for the cross section should be considered here as an upper limit since in the model the hadronic form-factor was taken to be unity.
In phenomenological models describing charmonium production in pA and light nucleus-nucleus collisions the dissociation cross section of charmonium on light mesons was assumed to be energy independent . Using a value of $`\sigma _{\psi N}\simeq 4.8`$ mb, which was shown to be consistent with the pA data, and the quark structure of hadrons one could fix $`\sigma _{\psi (\pi ,\rho )}\simeq 2\sigma _{\psi N}/3\simeq 3`$ mb.
The differences between these various models for the energy dependence of the $`J/\psi `$-$`\pi `$ cross section are particularly large near threshold. The theoretical uncertainties of the cross section seen in fig. 1 will naturally influence the yield of the secondary charmonium production. In the following, we shall calculate and compare the yield of the secondary $`J/\psi `$ with all these cross sections.
The $`\psi ^{\prime }`$ absorption cross section has not been evaluated in the quark exchange and effective Lagrangian models. The application of short distance QCD is not possible in this case due to the very low binding energy of the $`\psi ^{\prime }`$ (of the order of 60 MeV) and the correspondingly large size of the $`\psi ^{\prime }`$.
For a first estimate of the $`\psi ^{\prime }`$ dissociation on light mesons we assume that the absorption cross section is energy independent and attains its geometric value of about 10 mb very near the threshold. This assumption was shown to provide quite a good description of the $`\psi ^{\prime }`$ suppression observed in S-U collisions .
### 2.1 Rate equation for the charmonium production in a hadron gas
In a thermal hadronic medium the rate of charmonium production from $`D\overline{D}`$ annihilation is determined by the thermally averaged cross section and the densities of incoming and outgoing particles. The thermal average of the charmonium production cross section $`<\sigma _{D\overline{D}\to \psi h}v_{D\overline{D}}>`$ is given by the following expression :
$$<\sigma _{D\overline{D}}v_{D\overline{D}}>=\frac{\beta }{8}\frac{\int _{t_0}^{\infty }dt\,\sigma _{D\overline{D}}(t)[t^2-(m_{D\overline{D}}^{+})^2][t^2-(m_{D\overline{D}}^{-})^2]K_1(\beta t)}{m_D^2m_{\overline{D}}^2K_2(\beta m_D)K_2(\beta m_{\overline{D}})},$$
(2)
where $`K_1`$, $`K_2`$ are modified Bessel functions of the second kind, $`m_{D\overline{D}}^{+}\equiv m_D+m_{\overline{D}}`$ and $`m_{D\overline{D}}^{-}\equiv m_D-m_{\overline{D}}`$, $`t\equiv \sqrt{s}`$ is the center-of-mass energy, $`\beta `$ the inverse temperature, $`v_{ab}`$ is the relative velocity of the incoming mesons and the integration limit is taken to be: $`t_0=max[(m_D+m_{\overline{D}}),(m_\psi +m_h)]`$.
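Numerically, eq. (2) reduces to a one-dimensional integral. A sketch using scipy (the constant 1 mb test cross section and the integration cutoff are our choices, not the paper's):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

def thermal_average(sigma, ma, mb, T, m_psi=3.10, m_h=0.14):
    """<sigma v> of eq. (2); sigma(t) in mb, masses and T in GeV."""
    beta = 1.0 / T
    t0 = max(ma + mb, m_psi + m_h)
    num, _ = quad(lambda t: sigma(t) * (t**2 - (ma + mb)**2)
                  * (t**2 - (ma - mb)**2) * kn(1, beta * t),
                  t0, t0 + 30.0 * T)          # K_1 cuts the integrand off
    den = ma**2 * mb**2 * kn(2, beta * ma) * kn(2, beta * mb)
    return beta / 8.0 * num / den

# Constant 1 mb test cross section for D Dbar* at T = 170 MeV:
print(thermal_average(lambda t: 1.0, 1.87, 2.01, 0.17))
```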
The thermal cross sections for charmonium production and absorption, convoluted with the densities of the corresponding incoming particles, determine the rate equation for charmonium production in a hadron gas per unit of rapidity:
$$\frac{dR}{d\tau }=\sum _{i,j}<\sigma _{D_i\overline{D}_j\to \psi h}v_{D_i\overline{D_j}}>n_{D_i}n_{\overline{D_j}}-\sum _{i,j}<\sigma _{\psi h\to D_i\overline{D_j}}v_{\psi h}>n_\psi n_h,$$
(3)
where $`i,j\in \{D,D^{*}\}`$, $`h`$ denotes a $`\pi `$ or $`\rho `$ meson and $`\psi `$ stands for $`\psi ^{\prime }`$ or $`J/\psi `$.
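Schematically, eq. (3) becomes a small ODE once the volume and densities are specified; in the sketch below every input function is a placeholder (the real calculation uses the expansion model of the next subsection):

```python
from scipy.integrate import solve_ivp

def make_rhs(sv_prod, sv_abs, n_D, n_h, V_h):
    """dN_psi/dtau from eq. (3), one effective D Dbar channel.
    sv_*: <sigma v> (fm^3/fm); n_*: densities (fm^-3); V_h: volume (fm^3)."""
    def rhs(tau, N_psi):
        gain = sv_prod(tau) * n_D(tau)**2 * V_h(tau)
        loss = sv_abs(tau) * n_h(tau) * N_psi      # n_psi * V_h = N_psi
        return gain - loss
    return rhs

# Placeholder inputs loosely mimicking the mixed phase (tau in fm):
rhs = make_rhs(sv_prod=lambda t: 0.3, sv_abs=lambda t: 1.0,
               n_D=lambda t: 50.0 / (3000.0 + 200.0 * t),
               n_h=lambda t: 0.3, V_h=lambda t: 200.0 * (t - 14.0))
sol = solve_ivp(rhs, (14.1, 20.0), [0.0])
print(sol.y[0, -1])   # N_psi at the end of the mixed phase
```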
In the hadronic medium we include the following secondary charmonium production processes:
$$D^{*}\overline{D}+D\overline{D^{*}}+D^{*}\overline{D^{*}}\to \pi +\psi $$
(4)
$$D^{*}\overline{D^{*}}+D\overline{D}\to \rho +\psi $$
(5)
where $`\psi `$ stands for $`J/\psi `$ or $`\psi ^{\prime }`$.
The solution of the rate equation requires additional assumptions on the space-time evolution of the hadronic medium and the initial number of $`D`$ and $`\overline{D}`$ mesons.
### 2.2 Model for expansion dynamics
We have adopted a model for the expansion dynamics assuming isentropic cylindrical expansion of the thermal fireball. To account for transverse expansion we assume a linear increase of the radius of a cylinder with the proper time. Thus, at a given proper time $`\tau `$ the volume of the system is parameterized by:
$$V(\tau )=\pi R^2(\tau )\,\tau \qquad \mathrm{with}\qquad R(\tau )=R_A+0.15\,(\tau -\tau _0)$$
(6)
where $`\tau _0`$ is the initial proper time at which the system is created as a thermally equilibrated quark-gluon plasma and $`R_A`$ is the initial radius, determined by the atomic number of the colliding nuclei. For central Pb-Pb collisions $`R_A\simeq 6.7`$ fm and we fixed $`\tau _0\simeq 0.1`$ fm.
The initially produced quark-gluon plasma cools during the expansion until it reaches the critical temperature $`T_c`$ at the time $`\tau _q^c`$ where it starts to hadronize. The system stays in the mixed phase until the time $`\tau _h^c`$, where the quark gluon plasma is totally converted to a hadron gas. The purely hadronic gas can still expand until it reaches the chemical freeze-out temperature $`T_f`$ at the time $`\tau _f`$ where all inelastic particle scattering ceases and the particle production stops.
The most recent results of Lattice Gauge Theory (LGT) give an upper limit for the critical temperature in QCD of $`T_c\simeq 0.17`$ GeV . The analysis of presently available data for different particle multiplicities and their ratios measured in heavy ion collisions at SPS energy suggests that the chemical freezeout temperature at LHC should be in the range $`0.16<T_f<0.18`$ GeV , that is, very close to $`T_c`$.
To make a quantitative description of the space-time evolution of a thermal medium one still needs to specify the equation of state. The quark gluon plasma is considered as an ideal gas of quarks and gluons whereas the hadron gas is described as an ideal gas of hadrons and resonances. We have included the contributions of all baryonic and mesonic resonances with a mass of up to 1.6 GeV into the partition function. To take into account approximately the repulsive interactions between hadrons at short distances we apply excluded volume corrections. Here we use the thermodynamically consistent model proposed in where the thermodynamical observables for extended particles are obtained from the formulas for pointlike objects but with the shift of the chemical potential. In particular the pressure is described by:
$$P^{extended}(T,\mu )=P^{pointlike}(T,\overline{\mu }),\qquad \mathrm{with}\qquad \overline{\mu }=\mu -v_{eigen}P^{extended}(T,\mu ),$$
(7)
where the particle eigenvolume $`v_{eigen}=4\cdot \frac{4}{3}\pi r^3`$ simulates repulsive interactions between hadrons. Following the detailed analysis in we have assigned the value $`r=0.3`$ fm for all mesons and baryons.
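The prescription (7) is an implicit equation for the pressure, conveniently solved by fixed-point iteration. A one-species Boltzmann sketch (our simplification; the full calculation sums all hadrons and resonances up to 1.6 GeV):

```python
import numpy as np
from scipy.special import kn

HBARC = 0.1973   # GeV fm

def p_point(T, mu, m=0.14, g=3.0):
    """Pointlike Boltzmann pressure of one species, in GeV/fm^3."""
    return (g / (2.0 * np.pi**2)) * m**2 * T**2 * kn(2, m / T) \
        * np.exp(mu / T) / HBARC**3

def p_excluded(T, mu, r=0.3, tol=1e-12):
    """Solve P = p_point(T, mu - v_eigen * P) by fixed-point iteration."""
    v = 4.0 * (4.0 / 3.0) * np.pi * r**3      # eigenvolume, fm^3
    P = p_point(T, mu)
    for _ in range(200):
        P_new = p_point(T, mu - v * P)        # v*P carries units of GeV
        if abs(P_new - P) < tol:
            break
        P = P_new
    return P

print(p_point(0.17, 0.0), p_excluded(0.17, 0.0))   # excluded < pointlike
```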
In fig.2 we show the time evolution of the temperature for Pb-Pb collisions at LHC energy for two different values of $`T_c`$. The initial temperature of the chemically equilibrated plasma was fixed requiring the entropy conservation:
$$s_q(T_0)V_0\simeq 3.6\,N_\pi \qquad \mathrm{with}\qquad N_\pi \equiv (dN_\pi /dy)_{y=0},$$
(8)
where $`N_\pi `$ denotes the final number of pions at midrapidity, $`V_0`$ is the initial volume and $`s_q(T_0)`$ the initial entropy density of the thermalized quark-gluon plasma. For $`N_\pi =12000`$ and $`\tau _0`$=0.1 fm we get the initial thermalization temperature $`T_0\simeq 1.07`$ GeV for Pb-Pb collisions at LHC.
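This estimate is easy to reproduce under the stated assumptions; the sketch below uses the ideal two-flavour QGP entropy density (37 effective degrees of freedom, our choice) and the entropy-per-pion factor of 3.6 from eq. (8):

```python
import numpy as np
from scipy.optimize import brentq

HBARC = 0.1973                       # GeV fm
G_STAR = 16.0 + (7.0 / 8.0) * 24.0   # ideal 2-flavour QGP: 37 dof

def s_qgp(T):
    """Ideal QGP entropy density in fm^-3 (T in GeV)."""
    return (2.0 * np.pi**2 / 45.0) * G_STAR * (T / HBARC)**3

N_pi, tau0, R_A = 12000.0, 0.1, 6.7
V0 = np.pi * R_A**2 * tau0           # fm^3 per unit rapidity
T0 = brentq(lambda T: s_qgp(T) * V0 - 3.6 * N_pi, 0.2, 3.0)
print(T0)   # ~1.1 GeV for the ideal gas, near the 1.07 GeV quoted above
```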
The result in fig.2 shows that for $`T_c\simeq 0.17`$ GeV the QGP starts to hadronize after 14 fm and stays in the mixed phase until about 20 fm. Taking the lowest expected freezeout temperature $`T_f\simeq 0.16`$ GeV, the system exists only for a very short time of a few fm in a purely hadronic phase. (We do not count the isentropic expansion phase towards thermal freeze-out, since no appreciable particle production takes place there.) Thus, the contribution of the purely hadronic phase to the overall secondary charmonium multiplicity can be neglected. The secondary charmonium bound states are produced almost entirely during the mixed phase. This is in contrast to . In fig. 3 we show the time evolution of the total volume $`V`$ as well as the fraction of the volume $`V_h`$ occupied by hadrons during the mixed phase. The total volume is calculated following eq.6, whereas $`V_h`$ can be obtained from the condition of entropy conservation in the following form:
$$V_h(\tau )=\frac{V(\tau )-V_q^c}{1-\frac{s_h^c}{s_q^c}},\qquad V_q^c=\frac{3.6\,N_\pi }{s_q^c},\qquad V_h^c=\frac{3.6\,N_\pi }{s_h^c},$$
(9)
where $`s_q^c`$ and $`s_h^c`$ are the entropy densities in the quark gluon plasma and the hadron gas at the critical temperature. For the equation of state described above and $`T_c=0.17`$ GeV we have: $`s_q^c\simeq 11.9/\mathrm{fm}^3`$ and $`s_h^c/s_q^c\simeq 0.22`$.
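With these numbers eq. (9) gives the hadronic volume directly; a short sketch (parameters from eqs. (6) and (10)):

```python
import numpy as np

def V_total(tau, R_A=6.7, tau0=0.1):                      # eq. (6), fm^3
    return np.pi * (R_A + 0.15 * (tau - tau0))**2 * tau

def V_hadron(tau, N_pi=12000.0, s_q_c=11.9, ratio=0.22):  # eq. (9)
    V_q_c = 3.6 * N_pi / s_q_c    # total volume when hadronization starts
    return min(max((V_total(tau) - V_q_c) / (1.0 - ratio), 0.0), V_total(tau))

for tau in (15.0, 17.0, 19.0):
    print(tau, V_hadron(tau) / V_total(tau))   # growing hadronic fraction
```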
The time evolution of the hadronic medium during the mixed phase is totally determined by the number of pions in the final state and the entropy density of the quark-gluon plasma and the hadron gas at the critical temperature. Our default parameters for the expansion dynamics are as follows:
$$T_c=0.17\,\mathrm{GeV},\quad T_f\simeq T_c,\quad \tau _0\simeq 0.1\,\mathrm{fm},\quad s_q^c\simeq 12/\mathrm{fm}^3,\quad s_h^c/s_q^c\simeq 0.22.$$
(10)
We will study, however, how deviations from the above values influence the final multiplicity of the secondary charmonium produced during the evolution of hadronic medium.
### 2.3 Charm density
The initial number of open charm mesons at the critical temperature $`N_{D\overline{D}}`$ can be related to the number of primary $`c,\overline{c}`$ quarks $`N_{c\overline{c}}`$ produced in A-A collisions. Neglecting the possible absorption and production of $`c\overline{c}`$ pairs during the evolution and hadronization of the quark-gluon plasma we can put $`N_{D\overline{D}}\simeq N_{c\overline{c}}`$. Due to charm conservation the number of $`D`$'s should be equal to the number of $`\overline{D}`$ mesons. Charm conservation also implies that during the mixed phase the number of $`D\overline{D}`$ mesons in the hadronic phase $`N_{D\overline{D}}^m`$ is given by the fraction of the volume occupied by the hadrons, that is $`N_{D\overline{D}}^m=N_{c\overline{c}}V_h/V`$, where $`V`$ and $`V_h`$ are described by eqs. 6 and 9. We further assume that during hadronization of the quark gluon plasma the open charm mesons are in local thermal equilibrium with all other hadrons; however, the yield of $`D`$ and $`\overline{D}`$ exceeds its chemical equilibrium value. The ratio of the number of $`D`$ to $`D^{*}`$ mesons at the temperature $`T`$ is obtained from the relative chemical equilibrium condition:
$$\frac{N_D}{N_{D^{*}}}=\frac{1}{3}\,\frac{m_D^2K_2(\beta m_D)}{m_{D^{*}}^2K_2(\beta m_{D^{*}})}.$$
(11)
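In code (masses in GeV are illustrative PDG-like values; the factor 1/3 is the spin degeneracy ratio $`g_D/g_{D^{*}}`$):

```python
import numpy as np
from scipy.special import kn

def n_D_over_n_Dstar(T, m_D=1.87, m_Dstar=2.01):
    """Relative chemical equilibrium ratio, eq. (11); T and masses in GeV."""
    beta = 1.0 / T
    return (1.0 / 3.0) * (m_D**2 * kn(2, beta * m_D)) \
        / (m_Dstar**2 * kn(2, beta * m_Dstar))

print(n_D_over_n_Dstar(0.17))   # ~0.7: each D* spin state is Boltzmann
                                # suppressed by the D*-D mass difference
```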
The initial number of $`c\overline{c}`$ pairs in Pb-Pb collisions at LHC energy was obtained by scaling the p-p result with the geometrically allowed number of nucleon-nucleon collisions. The cross section for charm production at LHC in p-p collisions was calculated in leading order perturbative QCD using the code PYTHIA with $`<k_t^2>=1`$ GeV<sup>2</sup> . Typical values of 50 $`c\overline{c}`$ pairs and $`0.5`$ $`J/\psi `$ were obtained in central Pb-Pb collisions at midrapidity.
The $`c\overline{c}`$ pairs can be also produced and absorbed during the evolution of thermally equilibrated quark-gluon plasma. In lowest order in $`\alpha _s`$ the charm quark pairs are produced by gluon and quark pair fusion. Solving a similar rate equation as in eq.3 but with the appropriate QCD cross sections we calculated in fig.4 the time evolution of the number of thermal $`c\overline{c}`$ pairs. The thermal production is seen in fig.4 not to be negligible as compared with the preequilibrium production. Dependent on the thermalization time there are an additional 5-10 $`c\overline{c}`$ pairs produced during the evolution of the thermally and chemically equilibrated plasma. This number, however, depends crucially on the value of the charm quark mass $`m_c`$ and the strong coupling constant $`\alpha _s`$. In fig.5 we compare the thermal $`c\overline{c}`$ multiplicity calculated with $`m_c=1.5`$ GeV and $`m_c=1.2`$ GeV with $`\alpha _s=0.3`$.
Dynamical models for parton production and evolution like HIJING or SSPC confirmed that, indeed, after a very short time of the order of 0.1-0.5 fm the partonic medium can reach thermal equilibrium. However, both quarks and gluons may appear far below their chemical saturation values. Thus, discussing thermal $`c\overline{c}`$ production one should take into account deviations of the gluon and light quark yields from their equilibrium values. These deviations can be parameterized by fugacity parameters $`\lambda _q`$ and $`\lambda _g`$ modifying the Boltzmann distribution functions of quarks and gluons .
In fig.6 we calculated the time evolution of $`c\overline{c}`$ pairs within the kinetic model derived in for a non-equilibrium quark-gluon plasma. The following initial conditions for the thermalization time $`\tau _0`$, the initial temperature $`T_0`$ and the values for quark $`\lambda _q`$ and gluon $`\lambda _g`$ fugacities were fixed for Pb-Pb collisions at LHC from SSPC and HIJING : (1) $`\tau _0=0.25fm`$, $`T_0=1.02GeV`$, $`\lambda _g^0=0.43`$ $`\lambda _q^0=0.082`$ ; (2) $`\tau _{in}=0.5fm`$, $`T_{in}=0.82GeV`$, $`\lambda _g^{in}=0.496`$ $`\lambda _q^{in}=0.08`$ ; (3) $`\tau _{in}=0.5fm`$, $`T_{in}=0.72GeV`$, $`\lambda _g^{in}=0.761`$ $`\lambda _q^{in}=0.118`$ . In a non-equilibrium plasma, dependent on the initial conditions, there are 3-5 $`c\overline{c}`$ pairs produced during the evolution of plasma as seen in fig.6. One should also note that at the time when temperature reaches the critical value of $`0.17`$GeV the quark and gluon fugacities are very close to unity which indicates the chemical equilibration of an ideal quark-gluon plasma.
From the results presented in figs. 4-6 one concludes that, dependent on the initial conditions as well as on the degree of chemical equilibration of the plasma, one expects 5-10 $`D\overline{D}`$ pairs at $`T_c`$ in addition to the 50 $`D\overline{D}`$ from the hadronization of the initially produced $`c\overline{c}`$ pairs.
## 3 Time evolution of the charmonium abundances
The number of produced charmonium bound states from $`D\overline{D}`$ scattering is obtained by multiplying the rate equation eq.3 by the volume of the hadron gas eq.9 and then performing the time integration. In fig.7 we show the time evolution of the abundance of $`J/\psi `$ from Pb-Pb collisions at LHC energy obtained by solving the rate equation. The calculations were done with the four different models for the $`J/\psi \pi `$ absorption cross section described in fig.1 and assuming initially 50 $`D\overline{D}`$ pairs and 12000 pions at midrapidity. The results in fig.7 show a very strong sensitivity of $`J/\psi `$ production to the absorption cross section. The largest number of secondary $`J/\psi `$ is obtained with the cross section predicted by the quark exchange model. In this case the number of $`J/\psi `$ is only a factor of two smaller than the primary value. If, however, the absorption cross section is described by short distance QCD, the secondary $`J/\psi `$ production is lower by more than two orders of magnitude.
Recent analysis of $`J/\psi `$ photoproduction data confirms the relation between the energy dependence of the $`J/\psi `$ cross section and the Feynman x-dependence of the gluon distribution function of the nucleon . This could be taken as an indication that the short distance QCD approach is a consistent way to calculate the $`J/\psi `$ cross section. Thus, the secondary production of $`J/\psi `$ from the hadronic gas can be entirely neglected in this case. The situation can, however, change when discussing the production of $`\psi ^{\prime }`$.
The time evolution of the abundance of $`\psi ^{\prime }`$ is presented in fig.8 with the same basic parameters as used for $`J/\psi `$ in fig.7. We show in fig.8 the separate contributions of the production processes for the secondary $`\psi ^{\prime }`$ with $`\pi `$ and $`\rho `$ mesons in the final state. Following the arguments of the previous section the absorption cross sections for $`\psi ^{\prime }`$ on $`\pi `$ and $`\rho `$ mesons were taken to be energy independent and equal to the geometric value of 10 mb. The $`\psi ^{\prime }`$ produced during the mixed phase from $`D\overline{D}`$ annihilation is seen in fig.8 to saturate at the large value of 0.08, which is almost 1/5 of the primary number of $`J/\psi `$ expected for Pb-Pb collisions at LHC energy.
The yield of the secondary charmonium summarized in figs. 7-8 depends on the parameters used in the calculations. In the following we study the modification of the results on the secondary $`\psi ^{\prime }`$ by changing the initial number of $`D\overline{D}`$ mesons as well as the values of the relevant thermal parameters.
In fig.9 we discuss the yield of $`\psi ^{\prime }`$ for different multiplicities of $`D\overline{D}`$ mesons by including the contribution of thermal $`c\overline{c}`$ pairs. From the rate equation eq.3 it is clear that this dependence is quadratic: an increase of $`D\overline{D}`$ by 10-20$`\%`$ implies a 20-40$`\%`$ increase of the $`\psi ^{\prime }`$ yield.
In the space-time evolution model all parameters were fixed assuming $`N_\pi =12000`$. This number, however, is still not well established, and deviations of $`N_\pi `$ in the range $`5000<N_\pi <12000`$ are not excluded. In fig.10 we show the sensitivity of the results for $`\psi ^{\prime }`$ production to $`N_\pi `$. Keeping the same number of $`D\overline{D}`$ mesons and decreasing $`N_\pi `$, the number of secondary $`\psi ^{\prime }`$ increases, as seen in fig.10. This is mostly because decreasing $`N_\pi `$ increases the initial density $`n_{D\overline{D}}`$ of $`D\overline{D}`$ mesons, leading to a larger production of charmonium bound states.
Modelling the equation of state we have fixed the critical entropy density $`s_q^{c,ideal}`$ in the quark gluon plasma by the ideal gas equation of state. From Lattice Gauge Theory we know, however, that due to interactions the entropy density $`s_q^c`$ could be smaller by a significant factor. Deviation of the momentum distribution of quarks and gluons from chemical saturation also reduces the critical entropy density of the quark-gluon plasma. To establish the influence of a decreased critical entropy density we calculated the total number of $`\psi ^{\prime }`$ as a function of the ratio $`s_q^c/s_q^{c,ideal}`$ in fig.11. Decreasing the entropy density of the quark-gluon plasma by a factor of two reduces the yield of $`\psi ^{\prime }`$ by 70$`\%`$. This is mostly because the time the system spends in the mixed phase is shorter. However, even with this reduction of the entropy density the secondary production of $`\psi ^{\prime }`$ is not negligible. The influence on the result of the critical entropy density in the hadron gas $`s_h^c`$ is less important: increasing $`s_h^c`$ by a factor of two reduces the total number of $`\psi ^{\prime }`$ by only 10$`\%`$.
The secondary $`\psi ^{\prime }`$ production in the mixed phase is also sensitive to the parameterization of the time evolution of the system size in the transverse direction. Increasing the transverse expansion parameter from 0.15 to $`0.45`$ in eq.6 decreases the total number of $`\psi ^{\prime }`$ produced during the mixed phase by 40 $`\%`$. Taking a quadratic dependence of $`R(\tau )`$ on proper time as proposed in decreases the yield of $`\psi ^{\prime }`$ from 0.08 to 0.036, which is still comparable with the number of primary $`\psi ^{\prime }`$.
## 4 Conclusions
We have considered the possibility of secondary charmonium production in ultrarelativistic heavy ion collisions at LHC energy. Assuming thermalization of the partonic medium created in a collision and a subsequent first order phase transition to hadronic matter, we have shown that secondary charmonium production occurs almost entirely during the mixed phase. The yield of secondarily produced $`\psi `$ mesons is very sensitive to the hadronic absorption cross section. Within the context of the short distance QCD approach this leads to negligible values for J/$`\psi `$ regeneration. The $`\psi ^{\prime }`$ production, however, can be large and may even exceed the initial yield from primary hard scattering. Thus it is conceivable that at LHC energy the $`\psi ^{\prime }`$ charmonium state can be seen in the final state whereas $`J/\psi `$ production can be entirely suppressed. The appearance of the $`\psi ^{\prime }`$ in the final state could thus be considered as an indication of charmonium production from secondary hadronic rescattering.
Acknowledgments
We acknowledge stimulating discussions with H. Satz and J. Stachel. One of us (K.R.) acknowledges partial support of the Gesellschaft für Schwerionenforschung (GSI) and the Committee of Research Development (KBN).
# A Remark on the Equivalence of Isokinetic and Isoenergetic Thermostats in the Thermodynamic Limit
by David Ruelle (IHES, 91440 Bures sur Yvette, France; ruelle@ihes.fr)
Abstract. The Gaussian isokinetic and isoenergetic thermostats of Hoover and Evans are formally equivalent as remarked by Gallavotti, Rondoni and Cohen. But outside of equilibrium the fluctuations are uncontrolled and might break the equivalence. We show that equivalence is ensured if we consider an infinite system assumed to be ergodic under space translations.
Keywords: statistical mechanics, nonequilibrium, ensembles, thermodynamic limit, Gaussian thermostats.
1. Introduction. In the study of nonequilibrium statistical mechanics, if nonhamiltonian forces are used to achieve nonequilibrium, a thermostat is needed to cool the system. The Gaussian thermostats introduced by W. Hoover and D. Evans have the great interest of respecting the deterministic character of the equations of motion (see for instance Evans and Morriss ). Starting with an evolution equation $`\dot{x}=F(x)`$ in phase space, a Gaussian thermostat constrains the evolution to a prescribed hypersurface $`\mathrm{\Sigma }`$ by projecting $`F(x)`$, for $`x\in \mathrm{\Sigma }`$, to the tangent plane to $`\mathrm{\Sigma }`$ at $`x`$. In the present note we follow Cohen-Rondoni and Gallavotti, comparing an isokinetic and an isoenergetic thermostat, and showing that they give the same result in the limit of a large system (thermodynamic limit). In equilibrium statistical mechanics one can show rigorously that fixing the kinetic energy is equivalent to fixing the total energy, asymptotically for large systems (see ). It is therefore natural to hope that something similar is true for nonequilibrium, as advocated by Gallavotti (many references, see , ) and by Cohen and Rondoni . However, the entropy considerations which are available in equilibrium statistical mechanics fail utterly outside of equilibrium, i.e., fluctuations of energy at fixed kinetic energy are uncontrolled, and the situation appears rather hopeless. We shall show however that the argument of Cohen and Rondoni can be modified to apply, at least formally, to the dynamics of actually infinite systems. (In a different context – at equilibrium – Sinai has also shown the interest of considering the dynamics of infinite systems). Our approach will remain formal at the level of infinite system evolution equations: technical problems arise there, which do not seem directly related to the problem at hand, and are better discussed separately. We shall consider a system of particles in $`d`$ dimensions which is infinitely extended in $`\nu `$ dimensions, with $`1\le \nu \le d`$, and we shall discuss states of infinitely many particles which are invariant under translations in $`𝐑^\nu `$. The assumption that the infinite system dynamics is well defined, and $`𝐑^\nu `$-ergodicity, will be sufficient to establish the equivalence of isokinetic and isoenergetic nonequilibrium steady states.

2. IK and IE dynamics. We recall now the definition of the Gaussian isokinetic (IK) thermostat. We take for our configuration space $`M`$ a compact subset of $`𝐑^u\times 𝐓^v`$ where $`𝐓^v`$ is the $`v`$-torus, and momentum space is identified with $`𝐑^{u+v}`$. We assume that a force field on $`M`$ is given, written as $`-\mathrm{grad}V+\xi `$, where $`V:M\to 𝐑`$ is a potential, and $`\xi `$ is a nongradient vector field. (Note that a change in $`V`$ can be compensated by a corresponding change in $`\xi `$: the splitting of the force into two terms is arbitrary for the IK time evolution.) Consider now the equations of motion
$$\begin{array}{c}\dot{p}=-\partial _qV+\xi -\alpha p\\ \dot{q}=p/m\end{array}$$
$`(1)`$
completed by elastic reflection at the boundary of $`M`$. Without the term $`\xi -\alpha p`$ this time evolution would be Hamiltonian. The term $`\xi `$ maintains the system outside of equilibrium. The term $`-\alpha p`$ is the thermostat. We obtain the Gaussian isokinetic thermostat by choosing $`\alpha `$ such that the kinetic energy is constant:
$$0=\frac{d}{dt}\frac{p^2}{2m}=\frac{p}{m}\cdot (-\partial _qV+\xi -\alpha p)$$
i.e.,
$$\alpha =(-\partial _qV+\xi )\cdot p/p^2$$
$`(2)`$
Note that if $`\xi `$ is locally a gradient (corresponding to a multivalued potential function on $`M`$), the Dettmann-Morriss pairing theorem asserts that (except for one value $`=0`$) the spectrum of Lyapunov exponents of an ergodic measure is symmetric with respect to some constant $`c`$ which is in general nonzero. (We shall however not make use of this result.) We consider now the Gaussian isoenergetic (IE) thermostat associated again with the force $`-\mathrm{grad}V+\xi `$, but where we want to keep fixed the energy function
$$H=p^2/2m+V(q)$$
$`(3)`$
The equations of motion are again of the form (1) and using (3) the isoenergetic condition is
$$0=\dot{H}=\frac{p}{m}\cdot (-\partial _qV+\xi -\alpha p)+\partial _qV\cdot \frac{p}{m}$$
i.e.,
$$\alpha =\xi \cdot p/p^2$$
$`(4)`$
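The only difference between the two thermostats is which part of the force enters $`\alpha `$. A toy sketch for a single particle on a 2-torus (the potential, driving field and Euler integrator below are illustrative stand-ins for the many-particle dynamics):

```python
import numpy as np

def alpha_ik(p, grad_V, xi):
    """Isokinetic multiplier, eq. (2)."""
    return np.dot(-grad_V + xi, p) / np.dot(p, p)

def alpha_ie(p, xi):
    """Isoenergetic multiplier, eq. (4)."""
    return np.dot(xi, p) / np.dot(p, p)

def step(q, p, xi, m=1.0, dt=1e-3, kind="IK"):
    grad_V = np.array([np.cos(q[0]), 0.0])     # toy potential V(q) = sin(q_1)
    a = alpha_ik(p, grad_V, xi) if kind == "IK" else alpha_ie(p, xi)
    p = p + dt * (-grad_V + xi - a * p)        # eq. (1)
    q = (q + dt * p / m) % (2.0 * np.pi)       # motion on the 2-torus
    return q, p

q, p, xi = np.array([0.1, 0.2]), np.array([1.0, 0.5]), np.array([0.3, 0.0])
for _ in range(10000):
    q, p = step(q, p, xi, kind="IK")
print(np.dot(p, p) / 2.0)   # kinetic energy is conserved up to O(dt) drift
```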
With the Gaussian isoenergetic (IE) thermostat the time evolution is thus defined by (1), (4). We consider now the IK and the IE time evolution in the infinite system limit. We want to study the time evolution of a state $`\rho `$ ergodic under $`𝐑^\nu `$-space translations. We shall ignore existence and uniqueness problems for these evolution equations, and our discussion will thus remain formal in this respect. (In fact, the one-dimensional situation may be relatively accessible to rigorous study, but the $`n`$-dimensional case with $`n\ge 2`$ appears much more difficult).

Physically we may think of a system of particles in a region $`D`$ invariant under $`𝐑^\nu `$, where $`1\le \nu \le \text{dim}D`$ but possibly $`\nu <\text{dim}D`$. For example we may consider a shear flow between two moving plates, but we do not take the limit where these two plates are infinitely far apart, as this would introduce unwanted hydrodynamic instabilities. Another example would be a system of particles in $`[0,L]\times 𝐑^\nu `$. In the $`x`$-direction we put an electric field and we assume a suitable boundary condition (see ).

The expressions $`p^2=p\cdot p`$ and $`\xi \cdot p`$ diverge for an infinite system, but behave additively with respect to volume, and we can (under mild conditions on $`\rho `$) define the average per unit volume with respect to $`\rho `$, noted $`\langle p^2\rangle _\rho `$ or $`\langle \xi \cdot p\rangle _\rho `$. Since $`\rho `$ is ergodic, it is carried by points (in infinite phase space) for which the large volume average of $`p^2`$ or $`\xi \cdot p`$ is well defined and constant, equal to $`\langle p^2\rangle _\rho `$ or $`\langle \xi \cdot p\rangle _\rho `$. The expressions $`V`$, $`\partial _qV\cdot p`$ behave almost additively with respect to volume and, again under mild conditions, we can define the large volume averages $`\langle V\rangle _\rho `$, $`\langle \partial _qV\cdot p\rangle _\rho `$. Again, $`\rho `$ is carried by points (in infinite phase space) for which the large volume average of $`V`$ or $`\partial _qV\cdot p`$ is well defined and constant, equal to $`\langle V\rangle _\rho `$ or $`\langle \partial _qV\cdot p\rangle _\rho `$.

In our formal treatment of the infinite system IK or IE evolution we consider the time evolution of an infinite phase space point, generic with respect to the space ergodic measure $`\rho `$, replacing the expressions (2), (4) for $`\alpha `$ by their large volume limits
$$\alpha =\langle (-\partial _qV+\xi )\cdot p\rangle _\rho /\langle p^2\rangle _\rho $$
$`(2^{\prime })`$
or
$$\alpha =\langle \xi \cdot p\rangle _\rho /\langle p^2\rangle _\rho $$
$`(4^{\prime })`$
In general $`\rho `$ depends on time, and so does $`\alpha `$ given by $`(2^{\prime })`$ or $`(4^{\prime })`$. Suppose now that $`\rho `$ is invariant under the IK or IE time evolution; then $`\alpha `$ and also $`\langle V\rangle _\rho `$ are time independent, so that
$$0=\frac{d}{dt}\langle V\rangle _\rho =\langle \partial _qV\cdot \dot{q}\rangle _\rho =\frac{1}{m}\langle \partial _qV\cdot p\rangle _\rho $$
But then $`(2^{\prime })`$ and $`(4^{\prime })`$ coincide: the infinite system IK and IE evolutions have the same time invariant space ergodic states $`\rho `$. (Apart from the use of space ergodicity for an actually infinite system, this is the remark of Cohen and Rondoni ). Note that, if we replace in (3) $`m`$ by $`\stackrel{~}{m}`$ and $`V`$ by $`\stackrel{~}{V}`$, imposing $`\dot{\stackrel{~}{H}}=0`$ yields
$$\alpha =(\frac{\stackrel{~}{m}}{m}\partial _q\stackrel{~}{V}-\partial _qV+\xi )\cdot \frac{p}{p^2}$$
and in the infinite system limit we have again equivalence with the isokinetic ensemble. On the other hand, if $`H`$ is not of the form $`p^2/2\stackrel{~}{m}+\stackrel{~}{V}(q)`$, the Gaussian thermostat doesn’t give a term of the form $`-\alpha p`$ in (1) and we do not have equivalence with the isokinetic ensemble in the infinite system limit.

For the purposes of nonequilibrium statistical mechanics one should presumably restrict $`\rho `$ to be an infinite system SRB state (defined so that the time entropy per unit volume is equal to the sum of the positive Lyapunov exponents per unit volume). Hopefully, the space ergodic SRB states form a 2-parameter family parametrized by the average number of particles and the energy (or the kinetic energy) per unit volume. But the delicate question of identifying the natural nonequilibrium steady states is here bypassed by the remark that they are the same for the infinite system IK and IE evolutions.

In equilibrium statistical mechanics the proof of equivalence of ensembles is somewhat subtle, and uses in particular the concavity properties of the entropy (see ). One might think that the corresponding problem in nonequilibrium statistical mechanics would be even more difficult, and the above findings about the equivalence of IK and IE appear thus surprisingly cheap. What we have shown is however only that the IK and IE evolutions coincide (formally) in the infinite system limit; the detailed study of the natural nonequilibrium states remains to be made.

3. The constant $`\alpha `$ case. It is of interest to consider the equations (1) with $`\alpha `$ = constant. For this situation one obtains the following result.

Proposition. Consider the evolution equations
$$\begin{array}{c}\dot{p}=-\partial _qV+\xi -\alpha p\\ \dot{q}=p/m\end{array}$$
in $`TM`$, where $`M\subset 𝐑^u\times 𝐓^v`$ and we impose elastic reflection on the boundary of $`M`$. We assume that $`\alpha `$, $`m`$ are constants $`>0`$, and that $`V`$, $`\xi `$ are bounded. Then
$$\limsup _{t\to \infty }\left(\frac{p^2}{2m}+V\right)\le \frac{\mathrm{max}\,\xi ^2}{2m\alpha ^2}+\mathrm{max}\,V$$
$`(5)`$
$$\limsup _{t\to \infty }p^2\le \frac{\mathrm{max}\,\xi ^2}{\alpha ^2}+2m(\mathrm{max}\,V-\mathrm{min}\,V)$$
$`(6)`$
Furthermore, if the bounded measure $`\rho `$ is invariant under the time evolution, and $`\mathrm{\Phi }`$ is any continuous function, we have
$$\int \rho (dp\,dq)\,\mathrm{\Phi }(\frac{p^2}{2m}+V)\cdot (\xi \cdot p-\alpha p^2)=0$$
$`(7)`$
From the evolution equations we obtain
$$\frac{d}{dt}(\frac{p^2}{2m}+V)=\frac{p}{m}\cdot \dot{p}+\partial _qV\cdot \dot{q}=\frac{p}{m}\cdot (-\partial _qV+\xi -\alpha p)+\partial _qV\cdot \frac{p}{m}=\frac{p}{m}\cdot (\xi -\alpha p)$$
$`(8)`$
Let now $`ϵ>0`$ and suppose that
$$\frac{p^2}{2m}+V\ge \frac{\mathrm{max}\,\xi ^2}{2m\alpha ^2}+\mathrm{max}\,V+ϵ$$
$`(9)`$
then
$$p^2\ge \mathrm{max}\,\xi ^2/\alpha ^2+2mϵ$$
or
$$\alpha |p|\ge \mathrm{max}|\xi |+ϵ^{\prime }$$
with $`ϵ^{\prime }>0`$ and thus, in view of (8),
$$\frac{d}{dt}(\frac{p^2}{2m}+V)\le \frac{1}{m}(|p||\xi |-\alpha |p|^2)\le -\frac{|p|}{m}ϵ^{\prime }$$
Therefore, as long as (9) holds, we have
$$\frac{d}{dt}(\frac{p^2}{2m}+V)\le -\delta $$
for some $`\delta >0`$, proving (5). From (5) we obtain immediately (6). Let $`\mathrm{\Psi }^{\prime }=\mathrm{\Phi }`$; then, by the ergodic theorem,
$$\int \rho (dp\,dq)\,\mathrm{\Phi }(\frac{p^2}{2m}+V)\cdot (\xi \cdot p-\alpha p^2)=\lim _{T\to \infty }\frac{1}{T}\int _0^Tdt\,\mathrm{\Phi }(\frac{p^2}{2m}+V)\cdot (\xi \cdot p-\alpha p^2)$$
$$=\lim _{T\to \infty }\frac{m}{T}\int _0^Tdt\,\mathrm{\Psi }^{\prime }(\frac{p^2}{2m}+V)\cdot \frac{d}{dt}(\frac{p^2}{2m}+V)=\lim _{T\to \infty }\frac{m}{T}\int _0^Tdt\,\frac{d}{dt}\mathrm{\Psi }(\frac{p^2}{2m}+V)$$
$$=\lim _{T\to \infty }\frac{m}{T}[\mathrm{\Psi }(\frac{p^2}{2m}+V)]_0^T=0$$
because $`\mathrm{\Psi }`$ is bounded in view of (5). This proves (7).

Acknowledgements. I am indebted to Eddie Cohen, Giovanni Gallavotti, and Oscar Lanford for enlightening discussions concerning this note.

References.

N.I. Chernov, G.L. Eyink, J.L. Lebowitz, and Ya.G. Sinai. “Derivation of Ohm’s law in a deterministic mechanical model.” Phys. Rev. Letters 70, 2209-2212(1993).
E.G.D. Cohen and L. Rondoni. “Note on phase space contraction and entropy production in thermostatted Hamiltonian systems.” Chaos 8,357-365(1998).
D.J. Evans and G.P. Morriss. Statistical mechanics of nonequilibrium fluids. Academic Press, New York, 1990.
G. Gallavotti. “Dynamical ensembles equivalence in fluid mechanics.” Physica D 105,163-184(1997).
G. Gallavotti. “Chaotic dynamics, fluctuations, non-equilibrium ensembles.”Chaos 8,384-392(1998).
D. Ruelle. “Correlation functionals.” J. Math. Phys. 6,201-220(1965).
Ya.G. Sinai. “A remark concerning the thermodynamic limit of the Lyapunov spectrum.” Int. J. Bifurc. Chaos 6,1137-1142(1996).
# Do Statistically Significant Correlations Exist between the Homestake Solar Neutrino Data and Sunspots?
## 1 Introduction
The chlorine solar-neutrino detector, located in the Homestake gold mine in Lead, South Dakota, was the first to successfully detect neutrinos from the Sun and is the longest running solar neutrino experiment to date. The rate of neutrino captures was soon observed (Rowley, Cleveland, & Davis 1984) to be much lower than that predicted by the Standard Solar Model (SSM)(Bahcall 1989). SSM calculations (Turck-Chiéze & Lopes 1993, Bahcall, Basu, & Pinsonneault 1998) predict capture rates between 1.4 and 1.7 atoms/d for the chlorine experiment. The most recent experimental value (Cleveland et al. 1998) is only $`0.479\pm 0.043`$ (combined statistical and systematic errors) atoms/d, or 27-34% of the expected signal. This deficit, generally referred to as the “solar neutrino problem,” has now been observed by other experiments (Hirata et al. 1998, Fukuda et al. 1998, Abdurashitov et al. 1999, Hampel et al. 1999). A number of explanations have been considered. Some of these involve modifications of the SSM; others propose new physics beyond the Standard Model of particle physics, see the summary by Bahcall (1989). With regard to the Homestake data, it has been suggested that the solar neutrino signal varies with time, and further, that the signal is anticorrelated with the well-known sunspot cycle (Rowley et al. 1984, Davis 1986, Davis, Cleveland, & Rowley 1987) or with other indicators of solar activity ( Massetti, Storini, & Iucci 1995).
To explain this anticorrelation, it has been proposed that the neutrino has a magnetic moment. However, such a claim rests on the significance of the anticorrelation. Various authors have examined the Homestake data, some finding little or no evidence for any anticorrelation, and others finding a significantly large one. Of those who find significant anticorrelations, details of the analysis are often not given. In some cases, the data are smoothed by taking running averages. However, Walther (1997, 1999) has argued that the method of running averages may mistakenly lead to significant anticorrelations. To illustrate this, Walther (1997) generated a set of x-values by randomly selecting them from a normal distribution. He did the same to generate a set of y-values but from a different normal distribution. These points are inherently uncorrelated. He then took a ten-point running average of this data set and found the apparent significance of the correlation to increase when compared with the non-smoothed data. In a subsequent paper, Walther (1999) employed a statistical method known as parametric subsampling, a procedure to evaluate data when the points are not independent, to assess the significance of the Homestake data when smoothed with running averages. He found no significant anticorrelation between the Homestake data and sunspots.
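Walther's demonstration is simple to reproduce; the sketch below (our code, with arbitrary seeds and distributions) correlates two independent Gaussian series before and after a ten-point running average:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0, 200)       # two inherently uncorrelated series
y = rng.normal(5.0, 2.0, 200)

def running_mean(a, k=10):
    return np.convolve(a, np.ones(k) / k, mode="valid")

r_raw, p_raw = pearsonr(x, y)
r_smooth, p_smooth = pearsonr(running_mean(x), running_mean(y))
print(f"raw:      r = {r_raw:+.3f}, naive p = {p_raw:.3f}")
print(f"smoothed: r = {r_smooth:+.3f}, naive p = {p_smooth:.3f}")
# The smoothed series typically shows a larger |r| and a misleadingly small
# p-value, because the naive test still treats the ~190 overlapping averages
# as independent points.
```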
Following this lead, we undertook a detailed reexamination of the Homestake and sunspot data. In Sec.$`\mathrm{\hspace{0.25em}2}`$, below, some details of the Homestake experiment are reviewed and its basic results, $`^{37}`$Ar production rates as a function of time, are examined. We did not find significant variations anticorrelated with sunspot numbers. The method of weighted running averages is applied in Sec.$`\mathrm{\hspace{0.25em}3}`$ to smooth the Homestake and sunspot data. An apparent anticorrelation emerges whose significance increases with the number of points used to make the averages. Our statistical analysis of the Homestake data when smoothed with running averages differs from Walther’s but arrives at the same conclusion of no significant anticorrelation; our correlation coefficients have significances similar to the unsmoothed data. This suggests that the noted increase in significance is an artifact related to the failure to consider the reduction in the number of independent points when running averages are taken. We arrive at the same conclusion when, like Walther, we apply parametric subsampling to the analysis of the Homestake data. We conclude that, when analyzed properly, no significant anticorrelation exists between the Homestake solar neutrino data and sunspot numbers.
## 2 The Homestake Experiment: Basic Results
The Homestake detector contains 615 tons of perchlorethylene (C<sub>2</sub>Cl<sub>4</sub>) located 1478 m underground in a gold mine. It utilizes a radiochemical procedure based on the inverse $`\beta `$-decay reaction
$$\nu _e+^{37}\mathrm{Cl}\to ^{37}\mathrm{Ar}+e^{-}.$$
$`(1)`$
The $`^{37}`$Ar produced by neutrino capture on stable <sup>37</sup>Cl decays by electron capture back to <sup>37</sup>Cl with a half life of 35.0 d. Reaction (1) has a threshold of 814 keV, hence Homestake is sensitive only to electron neutrinos from the pep reaction and those emitted by the decay of <sup>7</sup>Be and <sup>8</sup>B in the Sun. It is transparent to other neutrino flavors.
At the start of a “run,” stable Ar carrier gas is added to the tank. After $`\sim 3`$ months, the carrier Ar and any $`^{37}`$Ar produced during the exposure are removed from the detector by sweeping with He gas. The recovered Ar (yields $`\simeq 95`$%) is purified by gas chromatography and transferred into a proportional counter. The sample from each run is assayed for approximately twelve $`^{37}`$Ar half-lives. A maximum likelihood fit (Cleveland 1983) is used to resolve the $`^{37}`$Ar decay from the counter background.
The best-fit $`^{37}`$Ar production rates listed by Cleveland et al. (1998) are plotted as a function of time in Fig. 1(a). The measurements were nearly continuous from 1970.281 to 1994.388 with the exception of a $`\sim 1.4`$-y gap (crosshatched in the figure) due to the failure of two circulation pumps. Representative $`68\%`$ confidence ranges from the published work are shown as error bars in the figure. This range is not, in general, symmetrically positioned about the data point. We conservatively adopt the larger of the upper or lower errors in the calculations which follow. With this weighting, the mean $`^{37}`$Ar production rate for the 108 runs is found to be $`0.354\pm 0.028`$ atoms/day. This is appreciably lower than the $`0.479\pm 0.030`$ (statistical only) atoms/day reported by Cleveland et al. (1998). Their value was calculated via a maximum likelihood method which combined all the runs into a single data set—essentially one $`^{37}`$Ar decay curve—whereas for the purposes of our correlation analysis, we must keep each run discrete. The difference between weighted mean and maximum likelihood value can be traced to the fact that the lower production rates tend to have smaller absolute errors than the higher ones, hence are given more weight in the averaging process. The unweighted mean is $`0.485\pm 0.031`$.
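The weighting convention can be made explicit in a few lines; a sketch (the three rates below are invented stand-ins for the 108 published runs):

```python
import numpy as np

def weighted_mean(rates, err_lo, err_hi):
    """Weighted mean adopting the larger of the asymmetric 68% errors."""
    sigma = np.maximum(err_lo, err_hi)
    w = 1.0 / sigma**2
    return np.sum(w * rates) / np.sum(w), np.sqrt(1.0 / np.sum(w))

rates = np.array([0.10, 0.45, 0.90])   # atoms/day, illustrative only
lo = np.array([0.08, 0.20, 0.35])      # lower 68% errors
hi = np.array([0.12, 0.25, 0.40])      # upper 68% errors
print(weighted_mean(rates, lo, hi))    # low-rate runs carry the most weight
```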
The mean sunspot number associated with each $`^{37}`$Ar measurement is plotted as a function of time in Fig. 1(b). These are averages of daily sunspot numbers (NOAA 1999) over the duration of the corresponding Homestake run. The periodic behavior of the sunspots is apparent. The $`^{37}`$Ar measurements commence at a time of decreasing solar activity. They encompass two solar minima and maxima, and they end near a third minimum. With this range, the data afford the possibility of exploring correlations between the solar neutrino signal and sunspot number.
The dependence of $`^{37}`$Ar production rate on sunspot number is shown in the form of a scatter plot in Fig. 1(c). There is no obvious correlation or anticorrelation: high and low rates are associated with both high and low sunspot numbers. To quantify this, we define temporal regions in Fig. 1(a) corresponding to three solar states: (1) when the Sun is quiet, at or near the sunspot minima; (2) when the Sun is active, at or near the sunspot maxima; and (3) when the Sun is in transition between (1) and (2). Weighted mean $`^{37}`$Ar production rates for each solar state are listed in Table 1 together with the mean sunspot number for that state. Were an anticorrelation to exist, one would expect the solar neutrino signal to be significantly larger when the Sun is quiet rather than when it is active. Although the weighted mean production rate is slightly lower for the active Sun than for quiet conditions, the $`(10\pm 19)\%`$ reduction associated with a ten-fold increase in mean sunspot number is clearly marginal. This conclusion is insensitive to the choice of weighting: the corresponding change in unweighted means is $`(16\pm 15)\%`$. Mean production rates for these three solar states are the same at the one-$`\sigma `$ level as that for all 108 measurements.
The correlation between $`^{37}`$Ar production and sunspot number was further quantified by calculating (Press et al. 1992) Pearson’s product-moment coefficient, $`r_p`$, and Spearman’s rank-order coefficient, $`r_s`$. As can be seen in Table 2, values of $`r_p`$ and $`r_s`$ for the full 108 data points are comparable. The significance of the weak anticorrelation, $`r\approx -0.1`$, is normally considered in terms of the probability that the null hypothesis is true, i.e., that an observed $`r`$ represents a statistical fluctuation of otherwise uncorrelated data. The distribution of either $`r_p`$ or $`r_s`$ for $`N`$ independent but uncorrelated samples, the “null distribution,” is expected (Press et al. 1992) to be approximately normal with a mean of zero and a standard deviation $`\sigma =1/\sqrt{N-2}`$. For $`N=108`$, $`\sigma =0.097`$. The one-sided probability that an observed $`r`$ represents a statistical fluctuation, the “null probability” $`P(r)`$ in Table 2, is obtained by integrating the null distribution from $`-1`$ to $`r`$. There is a substantial probability, $`16\%`$, that the observed anticorrelation is insignificant. (By convention, a correlation is considered significant when the probability is less than $`1\%`$.) There is an equal probability for an accidental positive correlation.
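For concreteness, a minimal Python sketch of this test follows; the arrays here are stand-ins for the 108 run-by-run production rates and mean sunspot numbers, which are not reproduced in this text:

```python
import numpy as np
from scipy import stats

def null_probability(r, n):
    """One-sided null probability: the integral of a normal null
    distribution with sigma = 1/sqrt(n - 2) from -1 up to r."""
    return stats.norm.cdf(r, loc=0.0, scale=1.0 / np.sqrt(n - 2))

rates = np.random.normal(0.5, 0.2, 108)     # stand-in for the 37Ar rates
spots = np.random.uniform(0.0, 200.0, 108)  # stand-in for sunspot numbers

r_p, _ = stats.pearsonr(rates, spots)
r_s, _ = stats.spearmanr(rates, spots)
# For N = 108, sigma = 1/sqrt(106) = 0.097, so an observed r of about -0.1
# gives a null probability near 15-16%, far above the 1% convention.
print(r_p, r_s, null_probability(-0.1, 108))
```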
The interpretation of Pearson’s correlation coefficient is somewhat dependent on the assumption that the distribution of the quantities of interest is bivariate normal (Press et al. 1992). The distribution of the Homestake $`^{37}`$Ar production rates is shown as a histogram in Fig. 2(a). The peak at or near zero reflects the fact that production rates and lower $`68\%`$ confidence limits reported by Cleveland et al. (1998) were constrained to be $`\geq 0`$. Since the total number of events ($`^{37}`$Ar plus background) recorded in an individual run is limited, statistical fluctuations in their temporal distribution might be expected to result in occasional negative values. The remaining data are consistent with the normal distribution shown as a smooth curve in Fig. 2(a). It peaks at $`0.526\pm 0.042`$, somewhat above the global maximum likelihood value for the entire data set. Sampling of a periodic function such as the sunspots does not yield a normal distribution as is seen in Fig. 2(b). We use Spearman’s correlation coefficients in the following discussions as they are less dependent on the assumption of a bivariate normal distribution.
Values of $`r_s`$ and $`P(r_s)`$ for conventional 2- and 4-point averages of the Homestake and sunspot data are also included in Table 2. Such averaging procedures do not appear to improve the significance of the anticorrelation. In fact, broadening of the null distribution with decreasing number of points, to $`\sigma =0.200`$ for $`N=27`$, leads to a reduced significance with a $`27\%`$ chance that the observed $`r_s=-0.125`$ is due to a statistical fluctuation. While standard averaging procedures may aid in the display of data, they entail an intrinsic loss of information.
## 3 Running Averages
The method of running averages has been used by a number of authors (Massetti et al. 1995, McNutt 1997) to smooth the Homestake data, and claims have been made that $`^{37}`$Ar production rates are anticorrelated with sunspot numbers or other indicators of solar activity. While the present discussion is limited to sunspot numbers, it has more general implications for the use of running average procedures.
The weighted running average, $`X_i`$, for run $`i`$ is defined as
$$X_i=\sum _{j=i-n}^{i+n}w_jx_j/\sum _{j=i-n}^{i+n}w_j,$$
$`(2)`$
and its weight, $`W_i`$, as
$$W_i=\sum _{j=i-n}^{i+n}w_j=\sum _{j=i-n}^{i+n}(1/\sigma _j^2).$$
$`(3)`$
Here $`x_j`$, $`\sigma _j`$, and $`w_j`$ are the value, standard deviation, and weight of the $`j^{th}`$ observed point. The length of the running average is $`l_a=2n+1`$. As defined above, $`X_i`$ corresponds to the midpoint of the $`l_a`$ points used in forming the average.
Equations 2 and 3 are not applicable in the vicinity of an “endpoint” of which there are four in the Homestake data; one at the beginning of the experiment, a second at the start of the interruption due to the pump failure, a third at the end of that failure, and the last at the end of the measurements. That is, we did not average across the gap due to the pump failure. Following the method of Davis (1999), we define those points directly adjacent to an endpoint as the unaveraged value and weight, e.g., $`X_1=x_1`$ and $`W_1=w_1`$. Those one removed from an endpoint are defined as the three-point running averages, for example, $`X_2`$ is the weighted average of $`x_1`$, $`x_2`$, and $`x_3`$, and so on.
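A minimal Python sketch of Eqs. 2 and 3 with this endpoint treatment, for a single uninterrupted segment (the Homestake series would be split at the pump-failure gap and each piece averaged separately):

```python
import numpy as np

def weighted_running_average(x, sigma, n):
    """Weighted running averages X_i and weights W_i (l_a = 2n + 1).
    Near an endpoint the half-window shrinks, so the endpoint itself
    keeps its unaveraged value, the next point uses a 3-point average,
    and so on, as described in the text."""
    x, w = np.asarray(x, float), 1.0 / np.asarray(sigma, float) ** 2
    N = len(x)
    X, W = np.empty(N), np.empty(N)
    for i in range(N):
        m = min(n, i, N - 1 - i)          # shrunken half-window
        lo, hi = i - m, i + m + 1
        W[i] = w[lo:hi].sum()
        X[i] = (w[lo:hi] * x[lo:hi]).sum() / W[i]
    return X, W
```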
Three-, five-, and seven-point weighted running averages of the 108 points that comprise the Homestake data set are plotted together with the unaveraged ($`l_a=1`$) values in Fig. 3. The period of data interruption due to the pump failure is again crosshatched. Some time dependent structure appears to emerge as the length of the average increases. There is a hint of maxima (minima) for the years 1977 (1980) and 1987 (1993), years that correspond to minima (maxima) in the sunspot cycle (see Fig. 1(b)). Particularly apparent is the minimum in $`^{37}`$Ar production at 1980, a time of maximum solar activity. In their analysis of the first 61 Homestake runs, Bahcall, Field & Press (1987) “conclude that the suggestive correlation \[Rowley et al. (1984)\] between neutrino capture rate and sunspot number depends almost entirely upon the four low points near the beginning of 1980.” It remains an important feature of the 108-point data set now available. Note also that the high points on either side of the pump failure, which persist as the length of the running average increases, occur at a solar minimum.
Spearman’s correlation coefficients presented in Table 3 suggest that the use of running averages may reveal otherwise hidden correlations. The value of $`r_s`$ decreases from $`-0.105`$ for the unaveraged data to $`-0.240`$ for $`l_a=7,`$ and the apparent probability, $`P(r_s)`$, that the null hypothesis is true $`decreases`$ from $`14\%`$ to $`0.6\%`$. We ask, is this valid evidence for an anticorrelation between the solar neutrino signal and sunspot number?
In calculating $`P(r_s)`$ it is assumed that the standard deviation of the null distribution is determined by the original number of data points, $`N=108,`$ such that $`\sigma =1/\sqrt{106}`$. While standard averaging reduces the number of points, running averaging seems to maintain the full number. No information appears to be lost as the procedure can be reversed to recover the original values and weights. However, the running-averaged points are not independent. If we assume equal weights, it can be seen from Eq. 2 that the running average for the $`i^{th}`$ run, $`X_i`$, is $`\frac{(l_a-1)}{l_a}`$ dependent on the neighboring points. Alternatively, a measured value, $`x_i`$, contributes to $`l_a`$ averaged points. The number of independent points is then on the order of $`N/l_a`$. A better approximation would include the special treatment of the four endpoints. For example, the number of independent points for $`l_a=7`$ can be estimated to be $`\frac{4}{1}+\frac{4}{3}+\frac{4}{5}+\frac{(108-12)}{7}=19.8`$. The predicted standard deviation of the null distribution is then $`\sigma =1/\sqrt{19.8-2}=0.237`$, as compared to $`\sigma =0.097`$ for the 108 independent data points. Values of $`\sigma `$ from this simple estimation procedure are shown as a function of $`l_a`$ by the smooth curve in Fig. 4.
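This estimate generalizes to any $`l_a`$; a small helper, assuming equal weights and the endpoint treatment above:

```python
import numpy as np

def n_independent(N, l_a):
    """Rough number of independent points after an l_a-point running
    average: four points at each of the shrunken window lengths
    1, 3, ..., l_a - 2 near the endpoints, the rest at full length."""
    n = (l_a - 1) // 2
    return sum(4.0 / (2 * k + 1) for k in range(n)) + (N - 4 * n) / l_a

# n_independent(108, 7) -> 19.8, hence sigma = 1/sqrt(19.8 - 2) = 0.237.
print(1.0 / np.sqrt(n_independent(108, 7) - 2))
```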
A null distribution can be generated by a Monte Carlo procedure which first randomly shuffles the unaveraged $`^{37}`$Ar data. This breaks any correlation with sunspots. One then performs the appropriate running average on the shuffled data and calculates Spearman’s correlation coefficient between the averaged $`^{37}`$Ar values and averaged sunspot numbers. Distributions were obtained from $`10^6`$ shuffles for $`l_a=1,3,5,`$ and $`7`$. The calculated distribution for $`l_a=1`$, the unaveraged data, is plotted as a histogram in Fig. 5. This is well fitted by a Gaussian, the smooth curve in the figure. The standard deviation, $`\sigma =0.097`$, agrees with the value expected for $`N=108`$ independent points. The distribution for the 7-point running average in Fig. 5 is substantially broader. The $`\sigma =0.230`$ is that expected for 20.9 independent points. There are visible deviations from the Gaussian fit.
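The shuffling procedure is straightforward to reproduce in outline; the sketch below reuses weighted_running_average() from above (averaging the sunspot series unweighted is our simplifying assumption here, and the inputs are NumPy arrays):

```python
import numpy as np
from scipy import stats

def shuffled_null_sigma(ar, sigma, spots, n, trials=10000, seed=0):
    """Standard deviation of the null distribution of r_s: shuffle the
    unaveraged 37Ar values (destroying any correlation), running-average
    both series, and correlate. The paper used 10^6 shuffles."""
    rng = np.random.default_rng(seed)
    spots_avg, _ = weighted_running_average(spots, np.ones_like(spots), n)
    rs = np.empty(trials)
    for t in range(trials):
        idx = rng.permutation(len(ar))
        ar_avg, _ = weighted_running_average(ar[idx], sigma[idx], n)
        rs[t] = stats.spearmanr(ar_avg, spots_avg).correlation
    return rs.std()

# For n = 3 (l_a = 7) this broadens sigma from about 0.097 to about 0.23,
# as expected for roughly 20 independent points.
```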
Values of standard deviations obtained by this Monte Carlo procedure for $`l_a=1,3,5`$, and $`7`$ are shown as points in Fig. 4. While those for $`l_a=3,5`$, and $`7`$ fall somewhat below the curve described above, the large broadening introduced by the running averaging process is confirmed. This broadening is not dependent on the weighting of the average, e.g., the standard deviation of the null distribution for the 7-point running average is 0.236 when weighting is applied compared with 0.231 when it is not. Note that the broadening would not have been observed if the $`^{37}`$Ar data had been shuffled after, rather than before, the averaging process.
The “true” null probabilities for each $`r_s`$ value in Table 3 were obtained by integration of the null distributions. The apparent increase in significance of the anticorrelation vanishes. The reason for this can be seen in Fig. 5. The vertical line at $`r_s=-0.240`$ indicates the correlation obtained for the 7-point running average (Table 3). The area to the left of that line is many times larger for the true null distribution for $`l_a=7`$ than for that assuming 108 independent points.
We have reached the same conclusion as to lack of significance by applying Walther’s parametric subsampling procedure; the resulting probabilities are listed in column 5 of Table 3. The reader is referred to Walther (1999) for details of the calculation.
## 4 Conclusions
Claims have been made by some authors that the $`^{37}`$Ar production rates measured in the Homestake solar neutrino experiment are anticorrelated with sunspot number. Some of these rest on the use of running averages to smooth the data. It has been suggested by Walther (1997, 1999) that such a procedure may lead to a substantial overestimation of the significance of the anticorrelations. We have critically reexamined the Homestake data, how the running averages have been applied, and how the results have been interpreted. We reach the following conclusions:
1) Significant anticorrelations (at greater than approximately the one-$`\sigma `$ level) with the sunspot cycle are not found in the original Homestake data.
2) When the data are smoothed by taking running averages, a significant anticorrelation does seem to emerge. Its apparent significance appears to increase with the number of points used to form the averages.
3) An analysis in terms of the true null distributions, as calculated by Monte Carlo procedures, shows that the apparently high significance of the anticorrelations for the 5- and 7-point running averages is an artifact arising from the failure to consider the loss of independence between neighboring points introduced by that averaging process. This conclusion agrees with that inferred by the parametric subsampling procedure.
4) While the Homestake experiment has had a major impact on physics by being the first to detect solar neutrinos and to identify the solar neutrino problem, its present precision is insufficient to substantiate the existence of a significant temporal correlation with the sunspot cycle.
This work was supported by the Office of High Energy and Nuclear Physics of the U.S. Department of Energy under contract No. DE-AC02-98CH10886. We wish to thank R. Davis and J. Weneser for helpful discussions.
FIGURE CAPTIONS
Fig. 1. (a) $`^{37}`$Ar production rates (atoms/day) measured in the Homestake experiment (Cleveland et al. 1998) as a function of time. Error bars indicate $`68\%`$ confidence levels for some representative points. (b) Mean sunspot numbers associated with each $`^{37}`$Ar measurement, also as a function of time. Crosshatched areas indicate the data interruption due to pump failure. (c) $`^{37}`$Ar production rate as a function of the associated sunspot number.
Fig. 2. (a) The distribution of $`^{37}`$Ar production rates. The smooth curve suggests an approximately normal distribution (see text). (b) The distribution of associated mean sunspot numbers.
Fig. 3. Growth of apparent temporal structure on applying weighted running averages to the measured $`^{37}`$Ar production rates. (a) The original data. (b) Three point running averages. (c) Five point running averages. (d) Seven point running averages. Crosshatched areas indicate the data interruption due to pump failure.
Fig. 4. Standard deviation of the null distribution for Spearman’s correlation coefficient as a function of the length of the running average. The curve shows the broadening suggested by a simple estimation procedure; see text. The points are from the null distributions generated by the Monte Carlo random shuffling of the original $`^{37}`$Ar data.
Fig. 5. Null distributions of Spearman’s correlation coefficient between $`^{37}`$Ar production rates and sunspot numbers for the original Homestake data and for those data subjected to seven-point running averaging. Histograms show the results generated by a random shuffling procedure; see text. A Gaussian fit to each histogram is shown as a smooth curve. The vertical line indicates the $`r_s=-0.24`$ obtained for the unshuffled 7-point running average of the data.
## 1 Introduction
Spontaneous gauge symmetry breaking in the Standard Model (SM) is realized by introducing a single CP-even Higgs boson, $`h_{\mathrm{SM}}`$. The “standard” Higgs hunting strategies at an $`e^+e^{-}`$ collider rely on the Higgs-strahlung process, $`e^+e^{-}\to Zh_{\mathrm{SM}}`$, and (for higher energies and heavier Higgs bosons) on the $`WW`$ fusion process, $`e^+e^{-}\to \nu \overline{\nu }h_{\mathrm{SM}}`$ ($`ZZ`$ fusion is smaller by an order of magnitude) . However, even the simplest two-Higgs-doublet model (2HDM) extension of the SM exhibits a rich Higgs sector structure. Moreover, it allows for spontaneous and/or explicit CP violation in the scalar sector . CP violation, which in the SM is achieved only by the Yukawa couplings of the Higgs boson to quarks being explicitly complex , could equally well be partially or wholly due to new physics beyond the SM. The possibility that an extended Higgs sector is responsible for CP violation is particularly appealing, especially as a means for obtaining an adequate level of baryogenesis in the early universe .
The CP-conserving (CPC) 2HDM predicts <sup>♯1</sup><sup>♯1</sup>♯1 The same menagerie of pure-CP Higgs bosons is found at the tree level in the minimal supersymmetric model (MSSM) . However, with $`CP`$-violating phases of soft-supersymmetry breaking terms, the $`h^0`$, $`H^0`$ and $`A^0`$ will mix beyond the Born approximation . the existence of two neutral CP-even Higgs bosons ($`h^0`$ and $`H^0`$, with $`m_{h^0}\leq m_{H^0}`$ by convention), one neutral CP-odd Higgs ($`A^0`$) and a charged Higgs pair ($`H^\pm `$). The situation is more complex in the 2HDM with CP-violation (CPV) in the scalar sector. There, the physical mass eigenstates, $`h_i`$ ($`i=1,2,3`$), are mixtures (specified by three mixing angles, $`\alpha _i`$, $`i=1,2,3`$) of the real and imaginary components of the original neutral Higgs doublet fields; as a result, the $`h_i`$ have undefined CP properties.
The absence of any $`e^+e^{-}\to Zh_{\mathrm{SM}}`$ signal in LEP2 data translates into a lower limit on $`m_{h_{\mathrm{SM}}}`$: the latest analysis of four LEP experiments at $`\sqrt{s}`$ up to 196 GeV implies $`m_{h_{\mathrm{SM}}}`$ greater than 102.6 GeV . More generally, in $`e^+e^{-}`$ collisions, if $`m_{h_{\mathrm{SM}}}<\sqrt{s}-m_Z`$ the $`h_{\mathrm{SM}}`$ will be discovered, assuming sufficient integrated luminosity. In this note, we wish to address the extent to which the neutral Higgs bosons of an extended Higgs sector are guaranteed to be discovered if they are sufficiently light. The possibility of such a guarantee rests on considering not only $`Z`$+Higgs production but also Higgs pair production, $`b\overline{b}`$+Higgs production and $`t\overline{t}`$+Higgs production and on the existence of sum rules for the Higgs boson couplings controlling the rates for these processes. Our analysis will be performed for a type-II 2HDM, wherein the neutral component of one of the Higgs doublet fields couples only to down-type quarks and leptons and the neutral component of the other couples only to up-type quarks.
We first remind the reader of the 2HDM (CPV or CPC) result that if there are two light Higgs bosons, $`h_1`$ and $`h_2`$, then at least one will be observable in $`Zh_1`$ or $`Zh_2`$ production or both in $`h_1h_2`$ pair production. This is because of the sum rule for the Higgs boson couplings $`C_1^2+C_2^2+C_{12}^2=1`$, where $`g_{ZZh_i}\equiv \frac{gm_Z}{c_W}C_i`$ and $`g_{Zh_ih_j}\equiv \frac{g}{2c_W}C_{ij}`$ \[$`c_W=\mathrm{cos}\theta _W`$, $`g`$ is the SU(2) gauge coupling constant\]. If none of the three processes are observed, we know that at least one of the two Higgs masses must lie beyond the kinematic limits defined by $`\sqrt{s}<m_Z+m_{h_1},m_Z+m_{h_2},m_{h_1}+m_{h_2}`$. A recent analysis of LEP data shows that the 95% confidence level exclusion region in the $`(m_{h_1},m_{h_2})`$ plane that results from the sum rule is quite significant .
Here, we focus on the question of whether a single neutral Higgs boson will be observed in $`e^+e^{-}`$ collisions if it is sufficiently light, regardless of the masses and couplings of the other Higgs bosons. In general, such a guarantee cannot be established if only the Higgs-strahlung and Higgs-pair production processes are considered. First, there is a “nightmare” scenario in which Higgs-strahlung is inadequate for detection of the lightest Higgs boson $`h_1`$ while all other Higgs bosons are too heavy to be kinematically accessible. This is easily arranged by choosing model parameters such that the $`ZZh_1`$ coupling is too weak for its detection in Higgs-strahlung production while maintaining consistency with precision electroweak constraints despite the other Higgs bosons being heavy. Of course, if we were to demand that the 2HDM remains perturbative up to energy scales of order $`10^{16}\mathrm{GeV}`$, then the sum rule of Ref. guarantees that $`\sum _iC_i^2m_{h_i^0}^2<m_B^2`$, where $`m_B\simeq 200\mathrm{GeV}`$, in which case this scenario could not be realized assuming that $`\sqrt{s}`$ is substantially larger than $`m_B`$. Second, it could happen that there are two light Higgs bosons, $`h_1`$ and $`h_2`$, but one of them, e.g. the $`h_2`$, has full strength $`ZZh_2`$ coupling. Then, the sum rule $`C_1^2+C_2^2+C_{12}^2=1`$ implies that the $`ZZh_1`$ and $`Zh_1h_2`$ couplings must both be zero at tree-level. Consequently, the $`h_2`$ will be seen in $`e^+e^{-}\to Z^{*}\to Zh_2`$ production but the $`h_1`$ will not be discovered in $`Zh_1`$ or $`h_1h_2`$ production, even when these processes are kinematically accessible. Note that this scenario is completely consistent with the above-noted GUT-scale-perturbativity sum rule. We stress that the above cases can arise regardless of the mixing structure, CPC or CPV, of the neutral Higgs boson sector.
In we derived new sum rules relating the Yukawa $`g_{f\overline{f}h_i}`$ and Higgs–$`Z`$ couplings of the 2HDM \[see Eq. (14)\] which guarantee that any $`h_i`$ that has suppressed $`ZZh_i`$ coupling must have substantial $`t\overline{t}h_i`$ and/or $`b\overline{b}h_i`$ coupling. This result implies that if the $`h_i`$ is sufficiently light for $`t\overline{t}h_i`$ to be kinematically allowed and if the luminosity is sufficiently large, then the $`h_i`$ will be observed in at least one of the Yukawa processes $`e^+e^{-}\to f\overline{f}h_i`$ ($`f=t`$, $`b`$ and possibly $`\tau `$), dominated by Higgs radiation from the final state fermions . Therefore, the complete Higgs hunting strategy at $`e^+e^{-}`$ colliders, and at hadron colliders as well, must include not only the Higgs-strahlung process and Higgs-pair production but also the Yukawa processes. <sup>♯2</sup><sup>♯2</sup>♯2In the context of a CP conserving 2HDM, the relevance of the Yukawa processes when $`\mathrm{tan}\beta `$ is large has been stressed already several times . However, our earlier work left open a detailed analysis of just how much integrated luminosity was required.
In this paper, we consider in more detail the 2HDM in the context of future $`e^+e^{-}`$ linear colliders ($`\sqrt{s}\simeq 500\text{–}800\mathrm{GeV}`$) with integrated luminosity $`L\simeq 500\text{–}1000`$ fb<sup>-1</sup>, as planned in one-to-two years of running at TESLA. Focusing on the case of a light Higgs boson that cannot be observed in Higgs-strahlung or Higgs-pair production, we determine the $`L`$ required so that either $`b\overline{b}h_1`$ or $`t\overline{t}h_1`$ production will allow $`h_1`$ detection. For the worst choices of Higgs mixing angles $`\alpha _i`$, the required $`L`$ is quite large.
The outline of the paper is as follows. In the next section, we discuss the sum rules for Higgs boson couplings in the CPV 2HDM. Then, we present numerical results for $`Zh_1h_2`$, $`b\overline{b}h_1`$ and $`t\overline{t}h_1`$ cross sections at $`e^+e^{-}`$ linear colliders running with $`\sqrt{s}=500`$ and 800 GeV and address the question of measurability of Yukawa couplings. In the next section, we determine the portions of parameter space such that unrealistically large integrated luminosity could be required for discovery of a light $`h_1`$. In the conclusions, we summarize the main points of the paper and briefly discuss implications of the sum rules for Higgs searches at hadronic accelerators.
## 2 Higgs boson couplings and sum rules
In the type-II two-Higgs-doublet model, the neutral component of the $`\mathrm{\Phi }_1`$ doublet field couples only to down-type quarks and leptons and the neutral component of $`\mathrm{\Phi }_2`$ couples only to up-type quarks. As usual, we define $`\mathrm{tan}\beta \equiv v_2/v_1`$, where $`|\mathrm{\Phi }_{1,2}^0|=v_{1,2}/\sqrt{2}`$. As a result of the mixing (for details see ) between real and imaginary parts of neutral Higgs fields, the Yukawa interactions of the $`h_i`$ mass-eigenstates are not invariant under CP. They are given by:
$$\mathcal{L}=h_i\overline{f}(S_i^f+iP_i^f\gamma _5)f$$
(1)
where the scalar ($`S_i^f`$) and pseudoscalar ($`P_i^f`$) couplings are functions of the mixing angles. For up-type and down-type quarks we have
$`S_i^u={\displaystyle \frac{m_u}{vs_\beta }}R_{i2},P_i^u={\displaystyle \frac{m_u}{vs_\beta }}c_\beta R_{i3},`$ (2)
$`S_i^d={\displaystyle \frac{m_d}{vc_\beta }}R_{i1},P_i^d={\displaystyle \frac{m_d}{vc_\beta }}s_\beta R_{i3},`$ (3)
and similarly for charged leptons. <sup>♯3</sup><sup>♯3</sup>♯3 $`s_\beta =\mathrm{sin}\beta `$, $`c_\beta =\mathrm{cos}\beta `$, and in our normalization $`v\equiv \sqrt{v_1^2+v_2^2}=2m_W/g=246\text{GeV}`$. For the 2HDM, the $`R_{ij}`$ are elements of the orthogonal rotation matrix
$`h=R\phi =\left(\begin{array}{ccc}c_1& s_1c_2& s_1s_2\\ -s_1c_3& c_1c_2c_3-s_2s_3& c_1s_2c_3+c_2s_3\\ s_1s_3& -c_1c_2s_3-s_2c_3& -c_1s_2s_3+c_2c_3\end{array}\right)\left(\begin{array}{c}\phi _1\\ \phi _2\\ \phi _3\end{array}\right),`$ (10)
\[$`s_i\equiv \mathrm{sin}\alpha _i`$ and $`c_i\equiv \mathrm{cos}\alpha _i`$\] which relates the original neutral degrees of freedom <sup>♯4</sup><sup>♯4</sup>♯4The remaining degree of freedom, $`\sqrt{2}(c_\beta \text{Im}\varphi _1^0+s_\beta \text{Im}\varphi _2^0)`$, becomes a would-be Goldstone boson which is absorbed in giving mass to the $`Z`$ gauge boson.
$$(\phi _1,\phi _2,\phi _3)\equiv \sqrt{2}(\text{Re}\varphi _1^0,\text{Re}\varphi _2^0,s_\beta \text{Im}\varphi _1^0-c_\beta \text{Im}\varphi _2^0)$$
(11)
of the two Higgs doublets $`\mathrm{\Phi }_1=(\varphi _1^+,\varphi _1^0)`$ and $`\mathrm{\Phi }_2=(\varphi _2^+,\varphi _2^0)`$ to the physical mass eigenstates $`h_i`$ ($`i=1,2,3`$). Without loss of generality, we assume $`m_{h_1}\leq m_{h_2}\leq m_{h_3}`$.
Using the above notation, the couplings of neutral Higgs and $`Z`$ bosons are given by
$`C_i`$ $`=`$ $`s_\beta R_{i2}+c_\beta R_{i1}`$ (12)
$`C_{ij}`$ $`=`$ $`w_iR_{j3}-w_jR_{i3}`$ (13)
where $`w_i=s_\beta R_{i1}-c_\beta R_{i2}`$.
The conventional CP-conserving limit can be obtained as a special case: $`\alpha _2=\alpha _3=0`$. Then, if we take $`\alpha _1=\pi /2-\alpha `$, $`\alpha `$ is the conventional mixing angle that diagonalizes the mass-squared matrix for $`\sqrt{2}\text{Re}\varphi _1^0`$ and $`\sqrt{2}\text{Re}\varphi _2^0`$. The resulting mass eigenstates are $`h_1=h^0`$, $`h_2=H^0`$ and $`\sqrt{2}(s_\beta \text{Im}\varphi _1^0-c_\beta \text{Im}\varphi _2^0)=A^0`$, where $`h^0`$, $`H^0`$ ($`A^0`$) are the CP-even (CP-odd) Higgs bosons defined earlier for the CPC 2HDM. Of course, there are other CP-conserving limits. For instance, by choosing $`\alpha _1=\alpha _2=\pi /2`$, $`h_1`$ becomes pure $`\phi _3=A^0`$, while it is $`h_2`$ and $`h_3`$ that are CP-even.
The crucial sum rules that potentially guarantee discovery (assuming sufficient luminosity) of any neutral Higgs boson that is light enough to be kinematically accessible in Higgs-strahlung and $`b\overline{b}`$+Higgs and $`t\overline{t}`$+Higgs are an automatic result of the orthogonality of the $`R`$ matrix. These sum rules involve a combination of the Yukawa and $`ZZ`$ couplings of any one Higgs boson and require that at least one of these couplings has to be sizable. In particular, if $`C_i\simeq 0`$ (the focus of our paper) then orthogonality of $`R`$ yields
$$(\widehat{S}_i^t)^2+(\widehat{P}_i^t)^2=\left(\frac{\mathrm{cos}\beta }{\mathrm{sin}\beta }\right)^2,(\widehat{S}_i^b)^2+(\widehat{P}_i^b)^2=\left(\frac{\mathrm{sin}\beta }{\mathrm{cos}\beta }\right)^2$$
(14)
where for convenience we introduce rescaled couplings
$`\widehat{S}_i^f\equiv {\displaystyle \frac{S_i^fv}{m_f}},\widehat{P}_i^f\equiv {\displaystyle \frac{P_i^fv}{m_f}},`$ (15)
$`f=t,b`$<sup>♯5</sup><sup>♯5</sup>♯5For obvious reasons we consider the third generation of quarks. Similar expressions hold for lighter generations. Eq. (14) implies that either the $`t\overline{t}`$ or the $`b\overline{b}`$ coupling of $`h_i`$ must be large in the $`C_i\simeq 0`$ limit; both cannot be small. Even in the other extreme of $`C_i\simeq \pm 1`$, i.e. full strength $`ZZh_i`$ coupling, one finds that $`(\widehat{S}_i)^2+(\widehat{P}_i)^2\simeq 1`$, for both the top and the bottom quark couplings, in the limit of either very large or very small $`\mathrm{tan}\beta `$. A completely general result following from orthogonality of $`R`$, that is independent of $`C_i`$, is
$`\mathrm{sin}^2\beta [(\widehat{S}_i^t)^2+(\widehat{P}_i^t)^2]+\mathrm{cos}^2\beta [(\widehat{S}_i^b)^2+(\widehat{P}_i^b)^2]=1,`$ (16)
again implying that the Yukawa couplings to top and bottom quarks cannot be simultaneously suppressed. As a result, if an $`h_i`$ is sufficiently light, its detection in association with $`b\overline{b}`$ or $`t\overline{t}`$ should, in principle, be possible, irrespective of the neutral Higgs sector mixing and regardless of whether or not it is seen in $`e^+e^{}Zh_i`$ or Higgs pair production. However, this leaves open the question of just how much luminosity is required to guarantee detection.
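Because these sum rules follow purely from the orthogonality of $`R`$, they are easy to verify numerically. The short Python check below uses the rotation matrix of Eq. (10) as reconstructed above (the overall sign conventions of rows 2 and 3 are not unique, but only squares of couplings enter here) together with the coupling definitions of Eqs. (2)–(3) and (12):

```python
import numpy as np

def R(a1, a2, a3):
    c1, s1 = np.cos(a1), np.sin(a1)
    c2, s2 = np.cos(a2), np.sin(a2)
    c3, s3 = np.cos(a3), np.sin(a3)
    return np.array([
        [c1, s1 * c2, s1 * s2],
        [-s1 * c3, c1 * c2 * c3 - s2 * s3, c1 * s2 * c3 + c2 * s3],
        [s1 * s3, -c1 * c2 * s3 - s2 * c3, -c1 * s2 * s3 + c2 * c3]])

def check_sum_rules(a1, a2, a3, tanb):
    b = np.arctan(tanb)
    sb, cb = np.sin(b), np.cos(b)
    r = R(a1, a2, a3)
    for i in range(3):
        St, Pt = r[i, 1] / sb, cb * r[i, 2] / sb   # rescaled top couplings
        Sd, Pd = r[i, 0] / cb, sb * r[i, 2] / cb   # rescaled bottom couplings
        Ci = sb * r[i, 1] + cb * r[i, 0]
        # Eq. (16): holds for every h_i, whatever the value of C_i.
        assert np.isclose(sb**2 * (St**2 + Pt**2) + cb**2 * (Sd**2 + Pd**2), 1.0)
        # Eq. (14): when C_i vanishes, the Yukawa strengths are fixed.
        if np.isclose(Ci, 0.0):
            assert np.isclose(St**2 + Pt**2, (cb / sb) ** 2)
            assert np.isclose(Sd**2 + Pd**2, (sb / cb) ** 2)
    C = sb * r[:, 1] + cb * r[:, 0]
    assert np.isclose(np.sum(C**2), 1.0)           # summed ZZh_i strength

check_sum_rules(np.pi / 2, np.pi / 2, 0.4, 5.0)    # h_1 pure phi_3: C_1 = 0
```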
## 3 Higgs boson production in $`𝒆^\mathbf{+}𝒆^{\mathbf{-}}`$ colliders
To treat the three processes: (i) bremsstrahlung off the $`Z`$ boson ($`e^+e^{-}\to Zh_i`$), (ii) Higgs pair production ($`e^+e^{-}\to h_ih_j`$), and (iii) the Yukawa processes with Higgs radiation off a heavy fermion line in the final state ($`e^+e^{-}\to f\overline{f}h_i`$) on the same footing, we discuss the production of $`h_1`$ in association with heavy fermions:
$$e^+e^{-}\to f\overline{f}h_1.$$
(17)
Processes (i) and (ii) contribute to this final state when $`Z\to f\overline{f}`$ and $`h_2\to f\overline{f}`$, respectively. If $`|C_1|`$ is not too near 1, Eqs. (2,3) imply that radiation processes (iii) are enhanced when the Higgs boson is radiated off top quarks for small $`\mathrm{tan}\beta `$ and off bottom quarks or $`\tau `$ leptons for large values of $`\mathrm{tan}\beta `$. Since all fermion and Higgs boson masses in the final state must be kept nonzero, the formulae for the cross section are quite involved. The tree level expressions can be found in Ref. .
Before turning to the case of a single Higgs boson that is unobservable in Higgs-strahlung or Higgs pair production, we briefly review and extend to higher energy our earlier results regarding the detection of at least one of two light Higgs bosons when $`m_{h_1}+m_{h_2},m_{h_1}+m_Z,m_{h_2}+m_Z<\sqrt{s}`$. In particular, suppose that neither is observable in Higgs-strahlung. More precisely, as we scan over the $`\alpha _i`$’s we require that the number of $`e^+e^{-}\to Zh_1`$ and (separately) the number of $`Zh_2`$ events both be less than 50 for an integrated luminosity of $`500\mathrm{fb}^{-1}`$. This will mean that $`|C_1|,|C_2|\ll 1`$, which in turn implies that Higgs-pair production is at full strength, $`|C_{12}|\simeq 1`$. In Fig. 1, we show contour plots for the minimum value of the pair production cross section, min\[$`\sigma (e^+e^{-}\to h_1h_2)`$\], as a function of Higgs boson masses at $`\sqrt{s}=`$ 500 and 800 GeV obtained by scanning over mixing angles $`\alpha _i`$. With integrated luminosity of $`500\text{–}1000\mathrm{fb}^{-1}`$, a large number of events (large enough to allow for selection cuts and experimental efficiencies) is predicted for the above energies over a broad range of Higgs boson masses. If 50 $`h_1h_2`$ events before cuts and efficiencies prove adequate, one can probe reasonably close to the kinematic boundary defined above. Thus, we will only need the Yukawa processes for Higgs discovery if (a) there is only one light Higgs boson or (b) there are two light Higgs bosons but one cannot be seen in Higgs-strahlung or Higgs pair production because the other has full SM strength $`ZZ`$ coupling.
So now let us turn to the case of a light Higgs boson that cannot be seen in $`Zh_1`$ production ($`|C_1|\ll 1`$) or Higgs pair production ($`|C_{1i}|\ll 1`$, $`i=2,3`$ and/or $`m_{h_2},m_{h_3}>\sqrt{s}-m_{h_1}`$). The question is whether the sum rules (14) imply that Yukawa couplings are sufficiently large to allow detection of the $`h_1`$ in $`t\overline{t}h_1`$ and/or $`b\overline{b}h_1`$ production (assuming both are kinematically allowed). In Fig. 2, we plot the minimum and maximum values of $`\sigma (e^+e^{-}\to f\overline{f}h_1)`$ for $`f=t,b`$ as a function of the Higgs boson mass, where we scan over the mixing angles $`\alpha _1`$ and $`\alpha _2`$ <sup>♯6</sup><sup>♯6</sup>♯6If only the $`h_1`$ is light, we only need to scan over $`\alpha _1`$ and $`\alpha _2`$ since all the couplings of the $`h_1`$ depend only upon these two mixing angles. at a given $`\mathrm{tan}\beta `$ while requiring fewer than $`50`$ $`Zh_1`$ events for $`L=500\mathrm{fb}^{-1}`$.<sup>♯7</sup><sup>♯7</sup>♯7We note that, if $`C_1\simeq 0`$, then the minimal and maximal $`b\overline{b}h_1`$ cross sections are almost equal. We see that, if $`m_{h_1}`$ is not large and $`\mathrm{tan}\beta `$ is either very small or very large, we are guaranteed that there will be sufficient events in either the $`b\overline{b}h_1`$ or the $`t\overline{t}h_1`$ channel to allow $`h_1`$ discovery. However, if $`\mathrm{tan}\beta `$ is of moderate size, the reach in $`m_{h_1}`$ is quite limited if the $`\alpha _i`$’s are such that $`\sigma (t\overline{t}h_1)`$ is minimal. For example, at $`\sqrt{s}=500`$ GeV let us take 50 events (before cuts and efficiencies) as the observability criterion.<sup>♯8</sup><sup>♯8</sup>♯8For $`\mathrm{tan}\beta \ll 1`$ and a light $`h_1`$, requiring 50 $`t\overline{t}h_1`$ events might not be sufficient since the $`h_1`$ will decay predominantly into $`c\overline{c}`$, and the resulting $`t\overline{t}c\overline{c}`$ final states will have a large background from ordinary $`t\overline{t}+`$multijet events. On the other hand, the $`t\overline{t}h_1`$ cross section is substantially enhanced when $`\mathrm{tan}\beta \ll 1`$ and, unless there is severe phase space suppression, we will have substantially more than 50 events. For $`L=500\mathrm{fb}^{-1}`$, $`50`$ events then requires $`\sigma \gtrsim 0.1\mathrm{fb}`$. From Fig. 2, we see the following.
* At $`\mathrm{tan}\beta =1`$, $`\sigma (b\overline{b}h_1)\lesssim 0.1\mathrm{fb}`$ for all $`m_{h_1}`$, while $`\sigma _{\mathrm{min}}(t\overline{t}h_1)`$ falls below $`0.1\mathrm{fb}`$ for $`m_{h_1}>70\mathrm{GeV}`$. Thus, all but quite light $`h_1`$’s would elude discovery.
* At $`\mathrm{tan}\beta =10`$, $`\sigma _{\mathrm{min}}(t\overline{t}h_1)\lesssim 0.1\mathrm{fb}`$ and $`\sigma _{\mathrm{min}}(b\overline{b}h_1)\simeq \sigma _{\mathrm{max}}(b\overline{b}h_1)`$ falls below $`0.1\mathrm{fb}`$ for $`m_{h_1}>80\mathrm{GeV}`$.
A $`\sqrt{s}=800`$ GeV machine considerably extends the mass reach for $`\mathrm{tan}\beta =1`$: the $`h_1`$ will be observable for $`m_{h_1}<230\mathrm{GeV}`$ (requiring $`\sigma _{\mathrm{min}}(t\overline{t}h_1)\gtrsim 0.1\mathrm{fb}`$). However, for $`\mathrm{tan}\beta =10`$, $`\sigma _{\mathrm{min}}(t\overline{t}h_1)`$ is again very small while $`\sigma (b\overline{b}h_1)`$ actually declines faster, falling below $`0.1\mathrm{fb}`$ already at $`m_{h_1}\simeq 50\mathrm{GeV}`$. Obviously, for $`\mathrm{tan}\beta `$ somewhat less than 10, only a very light $`h_1`$ is guaranteed to be observable with only $`L=500\mathrm{fb}^{-1}`$ of integrated luminosity.
For the most part, the minimum cross sections obtained above when the number of $`Zh_1`$ events is small and $`\mathrm{tan}\beta `$ is moderate in size correspond to $`\alpha _i`$ choices such that $`C_1=0`$ exactly. Further, when $`C_1=0`$, the minimum cross sections are achieved for a purely CP-odd $`h_1`$ ($`\alpha _1=\alpha _2=\pi /2`$ and variants thereof), even though $`C_1=s_\beta s_1c_2+c_\beta c_1`$ is exactly zero for any choices of $`\alpha _1`$ and $`\alpha _2`$ such that $`c_2=-\mathrm{cot}\beta c_1/s_1`$ (see discussion in ).
For more extreme $`\mathrm{tan}\beta `$ values than those illustrated in Fig. 2, there is, however, an alternative — we can actually zero one of the Yukawa processes, namely the one that is already suppressed, while keeping the $`Zh_1`$ cross section small. For example, if $`\mathrm{tan}\beta `$ is large, implying small $`c_\beta `$, $`C_1`$ can be small enough to satisfy a finite experimental limit on the number of $`Zh_1`$ events if $`\alpha _1=0`$ ($`s_1=0`$, $`c_1=1`$), so that the first term in $`C_1=s_\beta s_1c_2+c_\beta c_1`$ is 0. In this extreme, the $`t\overline{t}h_1`$ cross section will be zero (irrespective of $`\alpha _2`$) since $`S_1^t\propto s_1c_2/s_\beta `$ and $`P_1^t\propto s_1s_2\mathrm{cot}\beta `$ are both 0. In this limit, $`h_1`$ is purely $`\phi _1`$, the neutral Higgs component that couples only to bottom quarks. If $`\mathrm{tan}\beta `$ is small ($`s_\beta `$ small), the converse situation arises. $`C_1`$ can be kept small by taking $`c_1=0`$ ($`\alpha _1=\pi /2`$) irrespective of the value of $`\alpha _2`$. One can then zero the $`b\overline{b}h_1`$ cross section by choosing $`\alpha _2=0`$. This is the limit in which $`h_1`$ is purely $`\phi _2`$, the neutral Higgs component that couples to top quarks.
To illustrate, consider $`\sqrt{s}=800\mathrm{GeV}`$ and large $`\mathrm{tan}\beta `$. If we require that $`\sigma (Zh_1)<0.1\mathrm{fb}`$ (corresponding to fewer than 50 events for $`L=500\mathrm{fb}^{-1}`$), then for $`\mathrm{tan}\beta =10,15,20`$ we can choose $`\alpha _1=0`$, i.e. $`\sigma (t\overline{t}h_1)=0`$, for $`m_{h_1}\gtrsim 410,90,0\mathrm{GeV}`$, respectively. Note that $`\mathrm{tan}\beta =10`$ is just on the border for which this extreme of zeroing $`t\overline{t}h_1`$ becomes relevant. In fact, the $`\mathrm{tan}\beta =10`$ minimum cross section curve at $`\sqrt{s}=800\mathrm{GeV}`$ in Fig. 2 lies below that which would be obtained for a purely CP-odd $`h_1`$ and corresponds to a slight compromise between exactly zeroing the $`t\overline{t}h_1`$ cross section and the requirement of keeping $`\sigma (Zh_1)<0.1\mathrm{fb}`$.
Measuring the Yukawa couplings:
From the Yukawa sum rules and Fig. 2, it is clear that the value of $`C_i`$ that makes it easiest to measure at least one of the $`h_i`$ Yukawa couplings is very $`\mathrm{tan}\beta `$ dependent. If $`\mathrm{tan}\beta `$ is either $`\lesssim 1`$ or $`\gtrsim 10`$, then $`C_i=0`$ seems to be the most optimistic. This is because the largest of the minimum cross section values (whether $`t\overline{t}h_i`$ for $`\mathrm{tan}\beta \lesssim 1`$ or $`b\overline{b}h_i`$ for $`\mathrm{tan}\beta \gtrsim 1`$) is typically substantially enhanced if $`|C_i|\simeq 0`$, whereas if $`|C_i|`$ is not small the sum rules imply that less enhancement is possible. In particular, if $`|C_i|\simeq 1`$ (as would be known if the $`Zh_i`$ rate is full strength), then, as outlined earlier, both $`(\widehat{S}_i^t)^2+(\widehat{P}_i^t)^2`$ and $`(\widehat{S}_i^b)^2+(\widehat{P}_i^b)^2`$ will be of order unity, approaching 1 exactly if $`\mathrm{tan}\beta `$ is either very large or very small. This implies minimum cross section values close to those found for $`\mathrm{tan}\beta =1`$. From the $`\mathrm{tan}\beta =1`$ minimum cross section curves of Fig. 2 for $`\sqrt{s}=500\mathrm{GeV}`$ ($`\sqrt{s}=800\mathrm{GeV}`$), one finds that $`L=500\mathrm{fb}^{-1}`$ would not be sufficient for a measurement of the $`b\overline{b}h_i`$ coupling via $`b\overline{b}h_i`$ production, and, if $`m_{h_i}`$ is significantly above $`70\mathrm{GeV}`$ ($`230\mathrm{GeV}`$), it would also be difficult to measure the $`t\overline{t}h_i`$ Yukawa coupling.
The situation is quite different if $`\mathrm{tan}\beta `$ is moderate in size. In this regime of $`\mathrm{tan}\beta `$, Fig. 2 shows that a relatively light $`h_i`$ may not even be observable for $`L=500\mathrm{fb}^{-1}`$ at $`\sqrt{s}=800\mathrm{GeV}`$ when $`|C_i|\ll 1`$. However, it might very well be observable in one of the Yukawa processes if $`|C_i|=1`$. For example, consider $`m_{h_1}=200\mathrm{GeV}`$ in Fig. 2. If $`\mathrm{tan}\beta =10`$, $`\sigma _{\mathrm{min}}(b\overline{b}h_1)`$ and $`\sigma _{\mathrm{min}}(t\overline{t}h_1)`$ of Fig. 2 are both below $`0.1\mathrm{fb}`$ if $`|C_1|\ll 1`$, whereas if $`|C_1|=1`$ then the $`\mathrm{tan}\beta =1`$ curves of Fig. 2 become relevant, from which we see that $`\sigma _{\mathrm{min}}(t\overline{t}h_1)\simeq 0.2\mathrm{fb}`$, implying that one could obtain a reasonably good measurement of the $`t\overline{t}h_1`$ coupling.
## 4 Worst case scenarios
The discussion of the previous section raises the interesting question of just how much luminosity is required as a function of $`\sqrt{s}`$ and $`m_{h_1}`$ in order to absolutely guarantee discovery of a light $`h_1`$ in at least one of the three modes, $`Zh_1`$, $`t\overline{t}h_1`$ or $`b\overline{b}h_1`$. We consider only $`h_1`$ masses such that both Yukawa modes are kinematically allowed. In our first plot, Fig. 3, we impose the requirements (I) that there be $`\leq 50`$ $`Zh_1`$ events for $`L=500\mathrm{fb}^{-1}`$ and (II) that LEP/LEPII upper bounds on the $`ZZh_1`$ coupling be satisfied. For each $`m_{h_1}`$, we scan over $`\mathrm{tan}\beta `$, to determine the $`\mathrm{tan}\beta `$ at which $`\mathrm{min}_{(\alpha _1,\alpha _2)}\left\{\mathrm{max}[\sigma (b\overline{b}h_1),\sigma (t\overline{t}h_1)]\right\}`$ is smallest. Here, $`\mathrm{max}[\sigma (b\overline{b}h_1),\sigma (t\overline{t}h_1)]`$ is the larger of $`\sigma (b\overline{b}h_1)`$ and $`\sigma (t\overline{t}h_1)`$ for any given $`(\alpha _1,\alpha _2)`$ choice, and $`\mathrm{min}_{(\alpha _1,\alpha _2)}`$ refers to the minimum value of this maximum after scanning over all $`(\alpha _1,\alpha _2)`$ values (see footnote 6) satisfying the constraints (I) and (II). We then look for the $`\mathrm{tan}\beta `$ value at which this minimum is smallest. For any other $`\mathrm{tan}\beta `$ choice, one or the other cross section will be larger than this minimum for all choices of $`(\alpha _1,\alpha _2)`$ and the corresponding mode easier to observe. This defines the ‘worst case’ $`\mathrm{tan}\beta `$ choice for which a light Higgs boson that is unobservable in the $`Zh_1`$ mode will be most difficult to see by virtue of neither the $`b\overline{b}h_1`$ nor the $`t\overline{t}h_1`$ cross section being enhanced relative to the other. In Fig. 3, we plot the worst case choice of $`\mathrm{tan}\beta `$ and the corresponding value of $`\sigma _{\mathrm{min}}\equiv \mathrm{min}_{\mathrm{tan}\beta }\left(\mathrm{min}_{(\alpha _1,\alpha _2)}\left\{\mathrm{max}[\sigma (b\overline{b}h_1),\sigma (t\overline{t}h_1)]\right\}\right)`$. Results are presented for both $`\sqrt{s}=500\mathrm{GeV}`$ and $`\sqrt{s}=800\mathrm{GeV}`$.
We observe that the integrated luminosity required for the worst case cross section to yield 50 events in each of the Yukawa modes is always greater than $`L=500\mathrm{fb}^{-1}`$. Even for small $`m_{h_1}\sim 10\mathrm{GeV}`$, $`\sigma _{\mathrm{min}}\simeq 4\text{–}5\times 10^{-2}\mathrm{fb}`$ at these energies, implying that $`L>1000\mathrm{fb}^{-1}`$ would be required for just $`40\text{–}50`$ events in each. As we increase $`m_{h_1}`$, the worst case cross section for $`\sqrt{s}=500\mathrm{GeV}`$ falls dramatically and detection of the $`h_1`$ would not be possible for any reasonable $`L`$. However, at $`\sqrt{s}=800\mathrm{GeV}`$ the worst case cross section has only fallen to about $`1.7\times 10^{-2}\mathrm{fb}`$ at $`m_{h_1}=100\mathrm{GeV}`$ for which $`L=3000\mathrm{fb}^{-1}`$ would yield about 50 events in the $`b\overline{b}h_1`$ and $`t\overline{t}h_1`$ modes, each (while still not guaranteeing as many as 50 $`Zh_1`$ events). Possibly, such a large $`L`$ could be achieved after several years of running.
To illustrate all of this more completely, we have determined, as a function of $`m_{h_1}`$, the $`\mathrm{tan}\beta `$ range for which constraint (II) above is satisfied while the $`Zh_1`$, the $`b\overline{b}h_1`$ and the $`t\overline{t}h_1`$ cross section each yield fewer than 50 events for at least one $`(\alpha _1,\alpha _2)`$ choice assuming (a) $`L\leq 1000\mathrm{fb}^{-1}`$ or (b) $`L\leq 2500\mathrm{fb}^{-1}`$ and $`\sqrt{s}=500\mathrm{GeV}`$ or, separately, $`\sqrt{s}=800\mathrm{GeV}`$. These $`\mathrm{tan}\beta `$ ranges are represented in Fig. 4 by the wedge of $`\mathrm{tan}\beta `$ between the solid ($`\sqrt{s}=800\mathrm{GeV}`$) or dashed ($`\sqrt{s}=500\mathrm{GeV}`$) lines. For $`\mathrm{tan}\beta `$ values above (below) the upper (lower) line, $`b\overline{b}h_1`$ ($`t\overline{t}h_1`$) will be observable for all $`(\alpha _1,\alpha _2)`$ choices. We see that, even after combining $`\sqrt{s}=500\mathrm{GeV}`$ and $`\sqrt{s}=800\mathrm{GeV}`$ running, the $`L=1000\mathrm{fb}^{-1}`$ wedge begins at $`m_{h_1}\simeq 25\mathrm{GeV}`$ and widens rapidly with increasing $`m_{h_1}`$. For $`L=2500\mathrm{fb}^{-1}`$, the wedge begins at a higher $`m_{h_1}`$ value ($`\simeq 80\mathrm{GeV}`$ for $`\sqrt{s}=800\mathrm{GeV}`$), but still expands rapidly as $`m_{h_1}`$ increases further. Thus, it is apparent that, despite the sum rules guaranteeing significant fermionic couplings for a light 2HDM Higgs boson that is unobservable in $`Z`$+Higgs production, $`\mathrm{tan}\beta `$ and the $`\alpha _i`$ mixing angles can be chosen so that the cross section magnitudes of the two Yukawa processes are simultaneously so small that detection of such an $`h_1`$ cannot be guaranteed for integrated luminosities that are expected to be available.
On a final technical note, we have found that the $`h_1`$ is, for the most part, either exactly, or almost exactly, CP-odd for the $`(\alpha _1,\alpha _2)`$ parameters corresponding to the curves plotted in Figs. 3 and 4. The only exception is for $`m_{h_1}`$ values between $`160\mathrm{GeV}`$ ($`240\mathrm{GeV}`$) and $`270\mathrm{GeV}`$ ($`300\mathrm{GeV}`$) for $`L=1000\mathrm{fb}^{-1}`$ ($`L=2500\mathrm{fb}^{-1}`$) at $`\sqrt{s}=800\mathrm{GeV}`$ along the upper lines in Fig. 4. For this range, $`\alpha _1`$ can be chosen close to 0 to minimize $`\sigma (t\overline{t}h_1)`$ while continuing to satisfy the $`Zh_1`$ event number constraint (see the discussion at the end of the previous section).
## 5 Discussion and conclusions
The sum rules, Eq. (14), relating the Yukawa and Higgs-$`ZZ`$ couplings of a general CP-violating two-Higgs-doublet model have important implications for Higgs boson discovery at an $`e^+e^{-}`$ collider. In particular, for any $`h_i`$, if the $`ZZh_i`$ coupling is small, then the $`t\overline{t}h_i`$ or $`b\overline{b}h_i`$ Yukawa coupling must be substantial. This means that any one of the three neutral Higgs bosons that is light enough to be produced in $`e^+e^{-}\to t\overline{t}h_i`$ (implying that $`e^+e^{-}\to Zh_i`$ and $`e^+e^{-}\to b\overline{b}h_i`$ are also kinematically allowed) will normally be found at an $`e^+e^{-}`$ linear collider if the integrated luminosity is sufficient. However, we have found that the mass reach in $`m_{h_i}`$ may fall well short of the $`\sqrt{s}-2m_t`$ kinematic limit for moderate $`\mathrm{tan}\beta `$ values and anticipated luminosities. We have made a precise determination of the value of $`\mathrm{tan}\beta `$ (as a function of $`m_{h_i}`$) for which the smallest (common) value of the $`t\overline{t}h_i`$ and $`b\overline{b}h_i`$ cross sections is attained when the $`ZZh_i`$ coupling is suppressed. From this, we have computed as a function of $`m_{h_i}`$ the minimum luminosity required in order to detect such an $`h_i`$. Even at $`\sqrt{s}=800\mathrm{GeV}`$, to guarantee detection of a Higgs boson with small $`ZZ`$ coupling for the worst possible choice of $`\mathrm{tan}\beta `$ and neutral Higgs sector mixing angles would require an integrated luminosity in excess of $`1000\mathrm{fb}^{-1}`$ starting at $`m_{h_i}\sim 10\mathrm{GeV}`$. Further, the minimum $`L`$ required to guarantee detection for the worst choices of $`\mathrm{tan}\beta `$ and mixing angles increases rapidly as $`m_{h_i}`$ increases, as does the band of $`\mathrm{tan}\beta `$ in which $`L>1000\mathrm{fb}^{-1}`$ is required.
We also discussed the case of an $`h`$ that is observed in the $`Zh`$ final state but also light enough to be seen in $`t\overline{t}h`$ and, by implication, $`b\overline{b}h`$. We have noted that if $`Zh`$ production proceeds with SM strength, then the same sum rules can be used to show that measurement of its $`b\overline{b}`$ coupling will be impossible for any conceivably achievable integrated luminosity, while measurement of its $`t\overline{t}`$ coupling may only be possible for $`m_h`$ up to values significantly below the $`\sqrt{s}-2m_t`$ phase space limit (the exact reach depending upon the integrated luminosity and Higgs sector mixing angles).
Finally, we note that detection ‘guarantees’ for the 2HDM model are likely to apply over an even more restricted range of model parameter space in the case of the Tevatron and LHC hadron colliders. In the case of the Tevatron, the small rate for $`t\overline{t}`$+Higgs production is a clear problem. In the case of the LHC, a detailed study is needed to determine what cross section level is required in order that Higgs detection in the $`t\overline{t}`$+Higgs channel will be possible. Existing studies in the context of supersymmetric models can be used to point to parameter regions that are problematical because of large backgrounds and/or signal dilution due to sharing of available coupling strength. Almost certainly, very small (large) $`\mathrm{tan}\beta `$ values will be needed in order to be certain that the $`t\overline{t}`$+Higgs ($`b\overline{b}`$+Higgs) modes will be viable. Still, it is clear that the sum rules do imply that difficult parameter regions are of limited extent.
Acknowledgments
This work was supported in part by the Committee for Scientific Research (Poland) under grants No. 2 P03B 014 14, No. 2 P03B 030 14, by Maria Sklodowska-Curie Joint Fund II (Poland-USA) under grant No. MEN/NSF-96-252, by the U.S. Department of Energy under grant No. DE-FG03-91ER40674 and by the U.C. Davis Institute for High Energy Physics. |
# Evidence for TeV Emission from GRB 970417a
## 1 Introduction
Gamma-ray bursts were discovered over 30 years ago Klebesadel, Strong & Olson (1973). Although thousands of GRBs have been observed the physical processes responsible for them are still unknown. The understanding of these objects was greatly enhanced by results from the Compton Gamma Ray Observatory, which contained experiments sensitive to photons from 50 keV to 30 GeV. One of these experiments, BATSE, a wide field instrument sensitive to gamma rays from 50 keV to above 300 keV Paciesas et al. (1999), has detected several thousand GRBs. EGRET Esposito et al. (1999) detected 7 GRBs with photon energies ranging from 100 MeV to 18 GeV Dingus, Catelli & Schneid (1998). No high energy cutoff above a few MeV has been observed in any GRB spectrum, and emission up to TeV energies is predicted in several models Dermer, Chiang & Mitman (1999); Pilla & Loeb (1998); Totani 1998a ; Meszaros & Rees (1994).
Very high energy gamma-ray emission may not be observable for sources at redshifts much greater than 0.5 because of pair production with infrared extragalactic background photons Jelley (1966); Gould & Schreder (1966). Recent observations of lower-energy afterglows associated with several GRBs have allowed the measurement of 9 redshifts, either by measuring the spectrum of the optical afterglow, or by measuring the spectrum of the putative host galaxy. These redshifts cover a range between 0.4 and 3.4, and imply that the distribution of intrinsic luminosities is broad Lamb & Reichert (1999). This suggests that the intensity of TeV gamma-ray emission from a GRB (which requires a relatively nearby source) may not be well correlated with the intensity of the sub-MeV emission detected by BATSE.
At energies greater than 30 GeV, gamma-ray fluxes from most astrophysical sources become too small for current satellite-based experiments to detect because of their small sensitive areas. Only ground-based experiments Hoffman, Sinnis, Fleury & Punch (1999); Ong (1998); Catanese & Weekes (1999) have areas large enough to detect these sources. These instruments detect the extensive air showers produced by the high energy photons in the atmosphere, thus giving them a much larger effective area at high energies. These showers can be observed by detecting the Cherenkov light emitted by the cascading relativistic particles as they traverse the atmosphere, or by detecting the particles which reach ground level.
TeV gamma-ray emission from several astrophysical sources has been detected using atmospheric Cherenkov telescopes. These instruments have extremely large collection areas ($`10^5`$ m<sup>2</sup>) and good hadronic rejection. Unfortunately, they have relatively narrow fields of view (a few degrees) and can operate only on dark clear nights, resulting in a low duty cycle. They are therefore ill suited to search for transient sources such as GRBs. Searches for GRBs at energies above 300 GeV have been made by slewing these telescopes within a few minutes of the notification of the GRB location Connaughton et al. (1997). No detections have been reported. However, because of the narrow field of view, coupled with the delay in slewing to the correct position, there have not been any prompt TeV gamma-ray observations at the GRB location.
At energies greater than 10 TeV, the Tibet collaboration reported a possibly significant deviation of the probability distribution from background, for the superposition of all the bursts within their field of view. However, no single burst showed a convincing signal Amenomori et al. (1996). Two GRBs occurred within the field of view of the HEGRA AIROBICC Cherenkov array. One very long duration burst showed an excess over background from a direction not entirely consistent with the sub-MeV emission, so this was not claimed as a firm detection Padilla et al. (1998).
Milagro, a new type of TeV gamma-ray observatory with a field of view greater than one steradian and a high duty cycle, began operation in December 1999 near Los Alamos, New Mexico. A prototype detector, Milagrito Atkins et al. 1999a , operated from February 1997 to May 1998, during which 54 GRBs detected by BATSE were within 45° of zenith of Milagrito. This paper reports on the search for TeV gamma-ray emission from these 54 gamma-ray bursts, but concentrates more specifically on GRB 970417a.
## 2 The Milagrito Detector
Milagrito consisted of a planar array of 228 8-inch photomultiplier tubes (PMTs) submerged in a light-tight water reservoir Atkins et al. 1999a . The PMTs were located on a square grid with 2.8 m spacing, covering a total area of 1800 m<sup>2</sup>. Data were collected at water depths of 0.9, 1.5 and 2.0 m above the PMTs. The PMTs detected the Cherenkov light produced as charged shower particles traversed the water. The abundant gamma rays in the air shower interact with the water via pair production and Compton scattering to produce additional relativistic charged particles, increasing the Cherenkov light yield. The continuous medium and large Cherenkov angle (41°) result in the efficient detection of shower particles incident on the reservoir with the array of PMTs. Simulations show that Milagrito was sensitive to showers produced by primary gamma rays with energies as low as $`\sim `$100 GeV. The relative arrival times of the shower front at the PMTs were used to reconstruct the direction of the incoming air shower. The trigger required $`>`$100 PMTs to register at least one photoelectron within a 300 ns time window. Events were collected at a rate of $`\sim `$300 s<sup>-1</sup>; almost all of these triggers were caused by the hadronic cosmic-ray background. The capability of Milagrito to detect TeV gamma rays was demonstrated by the observation of the active galaxy Markarian 501 during its 1997 flare Atkins et al. 1999b . The instrument had an angular resolution of about $`1^{\circ }`$.
## 3 Observations and Results
A search was conducted in the Milagrito data for an excess of events, above those due to the background of cosmic rays, coincident with BATSE GRBs. Only bursts within 45° of zenith of Milagrito were considered because the sensitivity of Milagrito fell rapidly with increasing zenith angle. For each burst, a circular search region on the sky was defined by the BATSE 90% confidence interval, which incorporates both the statistical and systematic position errors Briggs et al. (1999). The search region was tiled with an array of overlapping $`1.6^{\circ }`$ radius bins spaced $`0.2^{\circ }`$ apart in RA and DEC. This radius was appropriate for the measured angular resolution of Milagrito Atkins et al. 1999b ; Atkins et al. 1999a . The number of events falling within each of the $`1.6^{\circ }`$ bins was summed for the duration of the burst defined by the T90 interval reported by BATSE. This time period is that in which the BATSE fluence rose from 5% to 95% of its total. T90 was chosen, a priori, because the EGRET detections were much more significant during T90 than during longer time intervals Hurley et al. (1994).
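In outline, the tiling-and-counting step can be sketched as follows in Python (a flat grid of bin centers with a great-circle event–bin distance; the names and event-list format are illustrative, not the actual Milagrito software, and the real analysis ranks bins against the expected background, as described below, rather than by raw counts):

```python
import numpy as np

def ang_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (all inputs in degrees)."""
    r1, d1 = np.radians(ra1), np.radians(dec1)
    r2, d2 = np.radians(ra2), np.radians(dec2)
    c = np.sin(d1) * np.sin(d2) + np.cos(d1) * np.cos(d2) * np.cos(r1 - r2)
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def tile_and_count(ev_ra, ev_dec, ra0, dec0, radius, bin_r=1.6, step=0.2):
    """Count T90 events in overlapping bin_r-degree bins whose centers
    lie on a step-degree grid covering the circular search region."""
    results = []
    offsets = np.arange(-radius, radius + step, step)
    for dx in offsets:                     # RA offset on the sky (deg)
        for dy in offsets:                 # DEC offset (deg)
            if np.hypot(dx, dy) > radius:
                continue
            ra_c = ra0 + dx / np.cos(np.radians(dec0 + dy))
            dec_c = dec0 + dy
            n = int(np.sum(ang_sep(ev_ra, ev_dec, ra_c, dec_c) < bin_r))
            results.append((n, ra_c, dec_c))
    return results   # each bin is then compared to its background expectation
```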
For each GRB, the angular distribution of background events on the sky was characterized using two hours of data surrounding each burst. This distribution was then normalized to the number of events ($`N_{T90}`$) detected by Milagrito over the entire sky during T90. The resulting background data were also binned in the same $`1.6^{\circ }`$ overlapping bins as the initial data. Each bin in the actual data was compared to the corresponding bin in the background map. The Poisson probability of a background fluctuation giving rise to an excess at least as large as that observed was calculated. The bin with the lowest such probability was then taken as the most likely position of a very high energy gamma-ray counterpart to that particular BATSE burst.
The chance probability of obtaining at least the observed significance anywhere within the entire search region was determined by Monte Carlo simulations using the following procedure. For each burst a set of simulated signal maps was obtained by randomly drawing $`N_{T90}`$ events from the background distribution. These maps were searched, as before, for the most significant excess within the search region defined by the BATSE 90% confidence interval. The probability after accounting for the size of the search region is given by the ratio of the number of simulated data sets with probability less than that observed in the actual data to the total number of simulated data sets. The distribution of the chance probabilities obtained by this method for the 54 GRBs is given in Figure 1. Details of a somewhat different analysis, which yields consistent results with those reported here, as well as more detailed results from the other 53 bursts, will be described elsewhere Leonor (2000).
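A schematic version of this per-burst simulation is sketched below. It simplifies the procedure described above: the overlapping $`1.6^{\circ }`$ bins are replaced by disjoint stand-in bins, and the $`N_{T90}`$ events are shared among them in proportion to an assumed background acceptance, so treat it as illustrative only.

```python
import numpy as np
from scipy.stats import poisson

def post_trials_probability(p_obs, bkg_shape, n_t90, n_sims=10_000, seed=0):
    """Fraction of simulated signal maps whose most significant bin has a
    Poisson probability at least as small as p_obs (the best bin in the
    real data); bkg_shape is the relative background acceptance per bin."""
    rng = np.random.default_rng(seed)
    w = np.asarray(bkg_shape, dtype=float)
    w /= w.sum()
    mu = n_t90 * w                    # expected background counts per bin
    hits = 0
    for _ in range(n_sims):
        counts = rng.multinomial(n_t90, w)      # one simulated map
        if poisson.sf(counts - 1, mu).min() <= p_obs:
            hits += 1
    return hits / n_sims
```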
One of these bursts, GRB 970417a, shows a large excess above background in the Milagrito data. The BATSE detection of this burst shows it to be a relatively weak burst with a fluence in the 50–300 keV energy range of $`1.5\times 10^{-7}`$ ergs/cm<sup>2</sup> and T90 of 7.9 seconds. BATSE determined the burst position to be RA $`=295.7^{\circ }`$, DEC $`=55.8^{\circ }`$. The low BATSE fluence results in a large positional uncertainty of $`6.2^{\circ }`$ (1$`\sigma `$). The resulting search region for TeV emission has a radius of $`9.4^{\circ }`$. The $`1.6^{\circ }`$ radius bin with the largest excess in the Milagrito data is centered at RA $`=289.9^{\circ }`$ and DEC $`=54.0^{\circ }`$, corresponding to a Milagrito zenith angle of $`21^{\circ }`$. This location is consistent with the position determined by BATSE. The uncertainty in the candidate location is approximately $`0.5^{\circ }`$ (1$`\sigma `$), much better than the BATSE uncertainty. Figure 2 shows the number of counts in this search region for the array of $`1.6^{\circ }`$ bins. The bin with the largest excess has 18 events with an expected background of $`3.46\pm 0.11`$ (statistical error based on the background calculation method used). The Poisson probability for observing an excess at least this large due to a background fluctuation is $`2.9\times 10^{-8}`$. The probability of such an excess or greater anywhere within the search region for this burst was found by the Monte Carlo simulation described above to be $`2.8\times 10^{-5}`$ (see Figure 1). For 54 bursts, the chance probability of background fluctuating to at least the level observed for GRB 970417a for at least one of these bursts is $`1.5\times 10^{-3}`$. The individual events contributing to this excess were examined. The distributions of the number of tubes hit per event and the shower front reconstructions were consistent with those from other shower events. There is no evidence that the detector was malfunctioning during the burst analysis time period.
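The 54-burst number quoted above follows from elementary trials arithmetic, which can be verified directly:

```python
# Chance that at least one of 54 independent bursts fluctuates to a
# post-trials probability of 2.8e-5 or smaller:
p_single = 2.8e-5
print(1.0 - (1.0 - p_single) ** 54)   # ~1.5e-3, as quoted in the text
```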
Although the initial search was limited to T90, upon identifying GRB 970417a as a candidate, longer time intervals were also examined. EGRET observed longer duration GeV emission (Hurley et al. 1994), and TeV afterglows are predicted by several models (Meszaros & Rees 1994; Totani 1998b). A search for TeV gamma rays integrated over time intervals of one hour, two hours and a day after the GRB start time did not show any significant excesses. Histograms of shorter time intervals, where the data are binned in intervals of one second, are shown in Figure 3. An analysis of the data also revealed no statistically significant evidence for TeV after-flares.
## 4 Discussion
If the observed excess of events in Milagrito is indeed associated with GRB 970417a, then it represents the highest energy photons yet detected from a GRB. The energy spectrum and maximum energy of emission are difficult to determine from Milagrito data. The small size of the pond compared to the lateral extent of typical air showers, along with the poor ability of this instrument to measure the amount of energy deposited in the pond, make the estimation of shower energy on an event-by-event basis nearly impossible. The very high energy fluence implied by this observation depends on the spectrum and upper energy cutoff of the emission, which Milagrito is unable to determine. Monte Carlo simulations of gamma-ray-initiated air showers show that the effective area of Milagrito increases slowly with energy, so that no sharp energy threshold can be defined (Atkins et al. 1999a). However, Milagrito had very little sensitivity below 100 GeV, so this observation indicates the emission of photons with energies greater than a few hundred GeV from GRB 970417a. Figure 4 shows the implied fluence of this observation above 50 GeV as a function of upper cutoff energy for several assumed differential power-law spectra. The observed cosmic-ray event rate agrees well with the rate predicted by simulations (Atkins et al. 1999b), implying that the systematic error on the energy scale for Milagrito is $`<`$30%.
Several studies (Salamon & Stecker 1998; Primack, Bullock, Somerville & Macminn 1999) find that the opacity due to pair production for $`>`$200 GeV gamma rays exceeds one for redshifts larger than $`\sim `$0.3. Thus, if Milagrito has detected high energy photons from GRB 970417a, it must be a relatively nearby object. The observed excess implies a fluence above 50 GeV between $`10^{-3}`$ and $`10^{-6}`$ ergs/cm<sup>2</sup> and the spectrum must extend to at least a few hundred GeV. The very high energy gamma-ray fluence ($`>50`$ GeV) inferred from this result is at least an order of magnitude greater than the sub-MeV fluence.
To summarize, an excess of events with chance probability $`2.8\times 10^{-5}`$ coincident both spatially and temporally with the BATSE observation of GRB 970417a was observed using Milagrito. The chance probability that an excess of at least this significance would be observed from the entire sample of 54 bursts is $`1.5\times 10^{-3}`$. If the observed excess coincident with GRB 970417a is not an unlikely fluctuation of the background, then a GRB bright at TeV energies has been identified. A search for other coincidences with BATSE will be continued with the current instrument, Milagro, which has significantly increased sensitivity to GRBs between 0.1 and 10 TeV.
Many people helped bring Milagrito to fruition. In particular, we acknowledge the efforts of Scott DeLay, Neil Thompson and Michael Schneider. This work was supported in part by the National Science Foundation, the U. S. Department of Energy (Office of High Energy Physics and Office of Nuclear Physics), Los Alamos National Laboratory, the University of California, and the Institute of Geophysics and Planetary Physics.
# Electron momentum distribution of a single mobile hole in the $`t`$-$`J`$ model
## Abstract
We investigate the electron momentum distribution function (EMDF) for the two-dimensional $`t`$-$`J`$ model. The results are based on the self-consistent Born approximation (SCBA) for the self-energy and the wave function. In the Ising limit of the model we give the results in closed form; in the Heisenberg limit the results are obtained numerically. An anomalous momentum dependence of the EMDF is found, and the anomaly is expressed analytically to lowest order in the number of magnons. We interpret the anomaly as a fingerprint of an emerging large Fermi surface coexisting with hole pockets.
The electron momentum distribution function $`n_𝐤=\langle \mathrm{\Psi }_{𝐤_0}|\sum _\sigma c_{𝐤,\sigma }^{\dagger }c_{𝐤,\sigma }|\mathrm{\Psi }_{𝐤_0}\rangle `$ is the key quantity for resolving the structure of the Fermi surface in cuprates. Here we study the EMDF for $`|\mathrm{\Psi }_{𝐤_0}\rangle `$, which represents a weakly doped antiferromagnet (AFM), i.e., it is the ground state (GS) wave function of a planar AFM with one hole and with the total momentum $`𝐤_0`$. In the present work we investigate the low-energy physics of the CuO<sub>2</sub> planes in cuprates within the framework of the standard $`t`$-$`J`$ model
$$H=-t\underset{<ij>\sigma }{\sum }\left(\stackrel{~}{c}_{i,\sigma }^{\dagger }\stackrel{~}{c}_{j,\sigma }+\text{H.c.}\right)+J\underset{<ij>}{\sum }\left[S_i^zS_j^z+\frac{\gamma }{2}(S_i^+S_j^{-}+S_i^{-}S_j^+)\right],$$
(1)
where $`\stackrel{~}{c}_{i,\sigma }^{\dagger }`$ ($`\stackrel{~}{c}_{i,\sigma }`$) are electron creation (annihilation) operators acting in a space forbidding double occupancy on the same site. $`S_i^\alpha `$ are spin operators. Our approach is based on a spinless fermion Schwinger boson representation of the $`t`$-$`J`$ Hamiltonian and on the SCBA for calculating the Green’s function $`G_𝐤(\omega )`$ and the corresponding wave function $`|\mathrm{\Psi }_𝐤\rangle `$.
In general the expectation value $`n_𝐤`$ has to be calculated numerically. The Ising limit, $`\gamma =0`$, is an exception. The quasiparticle is dispersionless with the GS energy $`ϵ_𝐤=ϵ_0`$, the residue $`Z_𝐤=Z_0`$ and the Green’s function $`G_𝐤(\omega )=G_0(\omega )`$. Therefore it is possible to express the required matrix elements in $`n_𝐤`$ analytically and to perform a summation of the corresponding non-crossing contributions to all orders $`n\to \infty `$. The result is
$`n_𝐤`$ $`=`$ $`1-{\displaystyle \frac{1}{2}}Z_0(\delta _{\mathrm{𝐤𝐤}_0}+\delta _{\mathrm{𝐤𝐤}_0+𝐐})+{\displaystyle \frac{1}{N}}\delta n_𝐤,`$ (2)
$`\delta n_𝐤`$ $`=`$ $`4P\gamma _𝐤-4(1-Z_0)\gamma _𝐤^2,`$ (3)
where $`P=\sum _{m=0}^{\infty }\sqrt{A_mA_{m+1}}`$ with $`A_0=Z_0`$, $`A_m=A_{m-1}[2tG_0(ϵ_0-2mJ)]^2`$, $`\sum _{m=0}^{\infty }A_m=1`$ and $`\gamma _𝐤=(\mathrm{cos}k_x+\mathrm{cos}k_y)/2`$. We note that the result Eqs. (2,3) exactly fulfills the sum rule $`\sum _𝐤n_𝐤=N-1`$ and $`\delta n_𝐤\le 1`$. In Eq. (2) the only dependence on the GS momentum $`𝐤_0`$ enters through the two delta functions separated by the AFM vector $`𝐐=(\pi ,\pi )`$. The EMDF $`\delta n_𝐤`$ is determined by only two parameters, $`P`$ and $`Z_0`$, presented as functions of $`J/t`$ in Fig. 1. Note that $`P=1`$ and $`Z_0=0`$ for $`J\to 0`$, so the result simplifies to $`\delta n_𝐤=4\gamma _𝐤(1-\gamma _𝐤)`$.
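A numerical sketch of this closed-form result is given below. The continued-fraction closure $`G_0(\omega )=[\omega -4t^2G_0(\omega -2J)]^{-1}`$ (the standard SCBA/retraceable-path form for the Ising limit), the bisection bracket and the parameter values are our assumptions, not details spelled out above:

```python
import numpy as np

def G0(w, t=1.0, J=0.4, depth=400):
    # assumed SCBA continued fraction, evaluated for real w below the
    # spectrum by downward recursion from a truncation at `depth` strings
    g = 1.0 / (w - 2.0 * J * depth)
    for m in range(depth - 1, -1, -1):
        g = 1.0 / (w - 2.0 * J * m - 4.0 * t * t * g)
    return g

def ising_emdf(t=1.0, J=0.4, mmax=200):
    D = lambda w: w - 4.0 * t * t * G0(w - 2.0 * J, t, J)  # D = 1/G0
    hi = -8.0 * t              # scan up to bracket the pole e0 (lowest zero of D)
    while D(hi) < 0.0:
        hi += 0.01 * t
    lo = hi - 0.01 * t
    for _ in range(60):        # bisection for the ground-state energy e0
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if D(mid) < 0.0 else (lo, mid)
    e0 = 0.5 * (lo + hi)
    h = 1.0e-6 * t
    Z0 = 2.0 * h / (D(e0 + h) - D(e0 - h))   # residue, Z0 = 1/D'(e0)
    A, sumA, P = Z0, Z0, 0.0                 # string weights A_m
    for m in range(1, mmax + 1):
        A_next = A * (2.0 * t * G0(e0 - 2.0 * J * m, t, J)) ** 2
        P += np.sqrt(A * A_next)
        sumA += A_next
        A = A_next
    return e0, Z0, P, sumA                   # sumA should be ~1 (sum rule)

e0, Z0, P, sumA = ising_emdf()
gam = lambda kx, ky: 0.5 * (np.cos(kx) + np.cos(ky))
dn = lambda kx, ky: 4.0 * P * gam(kx, ky) - 4.0 * (1.0 - Z0) * gam(kx, ky) ** 2
print(e0, Z0, P, sumA, dn(np.pi / 2, 0.0))
```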
Now we turn to the Heisenberg model, $`\gamma \to 1`$. Here the important ingredient is the gapless magnons with linear dispersion and a more complex ground state of the planar AFM. $`G_𝐤(\omega )`$ is strongly $`𝐤`$-dependent. The GS is fourfold degenerate and the results must be averaged over the GS momenta $`𝐤_0=(\pm \pi /2,\pm \pi /2)`$. To get more insight into the structure of $`\delta n_𝐤`$, we simplify the wave function by keeping only the one-magnon contributions. The leading-order contribution to $`\delta n_𝐤`$ is then
$$\delta n_𝐤^{(1)}=Z_{𝐤_0}M_{𝐤_0𝐪}G_{𝐤_0}(ϵ_{𝐤_0}-\omega _𝐪)\left[2u_𝐪+M_{𝐤_0𝐪}G_{𝐤_0}(ϵ_{𝐤_0}-\omega _𝐪)\right],$$
(4)
with $`𝐪=𝐤-𝐤_0`$ \[or equivalent in the Brillouin zone (BZ)\], $`𝐯=t(\mathrm{sin}k_{0x},\mathrm{sin}k_{0y})`$, $`M_{𝐤_0𝐪}`$ is the hole-magnon coupling and $`u_𝐪`$ is the usual spin wave Bogoliubov coefficient. The momentum dependence of the EMDF, contained in Eq. (4), essentially captures the full numerical solution. A surprising observation is that the EMDF exhibits in the extreme Heisenberg limit a discontinuity $`Z_{𝐤_0}\sim N^{-1/2}`$ and $`\delta n_𝐤^{(1)}\propto (1+\mathrm{sign}q_x)/q_x`$. We interpret this result as an indication of an emerging large Fermi surface with discontinuities at points $`𝐤_0`$, not lines, in the BZ.
The anomalous structure at $`𝐤=(\pm \pi /2,\pm \pi /2)`$ is clearly seen in Fig. 2, where $`\delta n_𝐤^{(1)}`$ is shown for $`Z_𝐤t/J\ll 1`$ and $`\gamma \to 1`$. The Green’s function is here approximated with the non-interacting expression, $`G_{𝐤_0}(\omega )\simeq 1/\omega `$. It should be noted that $`\delta n_𝐤^{(1)}`$ also exhibits a (weak) singularity ($`\delta n_𝐤^{(1)}>1`$) at $`\gamma =1`$. However, the $`n_𝐤`$ sum rule is still exactly satisfied. For the purpose of presentation, $`\delta n_𝐤^{(1)}`$ is truncated in Fig. 2 to $`-6<\delta n_𝐤^{(1)}<1`$.
In the present work we considered the electron momentum distribution function for a single hole in an AFM, possibly relevant to underdoped cuprates. The non-analytic properties encountered in Eq. (4) are evidence of an emerging large Fermi surface at $`𝐤\simeq (\pm \pi /2,\pm \pi /2)`$ coexisting, however, with a ’hole pocket’ type of Fermi surface. As long-range AFM order is destroyed by doping, ’hole-pocket’ contributions should disappear while the singularity in $`\delta n_𝐤`$ could persist. We thus interpret this result as relevant for the understanding of the electronic structure found recently with ARPES experiments in underdoped cuprates, where only portions of a large Fermi surface close to $`𝐤_0`$ were seen.
# Do wavelets really detect non-Gaussianity in the 4-year COBE data?
## 1 Introduction
Observations of temperature fluctuations in the cosmic microwave background (CMB) provide a valuable means of distinguishing between two competing theories for the formation of structure in the early Universe. Currently, the most favoured theory is the simple inflationary cold-dark-matter (CDM) model, for which the distribution of temperature fluctuations in the CMB should be Gaussian. The second class of theories invokes the formation of topological defects such as cosmic strings, monopoles or textures, which should imprint some non-Gaussian features in the CMB (Bouchet, Bennett & Stebbins 1988; Turok 1996). Thus, the detection (or otherwise) of a non-Gaussian signal in the CMB is an important means of discriminating between these two classes of theory.
In order to test for large-scale non-Gaussianity in the CMB, the 4-year COBE-DMR dataset (in various forms) has already been analysed using a number of different statistical techniques, as discussed below. These tests have been performed either on some combination of the 31-, 53- and 90-GHz A & B 4-year DMR maps, or the 4-year DMR maps from which Galactic emission has been removed. Two such Galaxy-removed maps are generally available, each one created using a different separation method. The DMR-DCMB map is a linear combination of all six individual COBE-DMR maps designed to cancel the free-free emission (Bennett et al. 1992), whereas the DMR-DSMB map is constructed by first subtracting templates of synchrotron and dust emission and then removing free-free emission (Bennett et al. 1994).
The first investigation of non-Gaussianity in the 4-year COBE data was performed by Kogut et al. (1996). This analysis used the 4-year DMR 53 GHz $`(A+B)/2`$ map at high latitudes ($`|b|>20\mathrm{°}`$) with cut-outs near Ophiuchus and Orion (Bennett et al. 1996), and found that traditional statistics such as the three-point correlation function, the genus and the extrema correlation function, were completely consistent with a Gaussian CMB signal. Colley, Gott & Park (1996) also computed the genus statistic, but for the DMR-DCMB map with $`|b|>30\mathrm{°}`$, and arrived at similar conclusions. The full set of Minkowski functionals were computed for the 4-year 53 GHz $`(A+B)/2`$ map (with a smoothed Galactic cut) by Schmalzing & Gorski (1998), taking proper account of the curvature of the celestial sphere. They also concluded that the CMB is consistent with a Gaussian random field on degree scales. On computing the bi-spectrum of the 4-year COBE data, Heavens (1998) also found no evidence for non-Gaussianity. Finally, Novikov, Feldman & Shandarin (1999) have calculated the partial Minkowski functionals for both the DMR-DCMB and DMR-DSMB maps and do report detections of non-Gaussianity, but the analysis was performed without making a Galactic cut and the detections most probably result from residual Galactic contamination.
Recently, however, two apparently robust detections of non-Gaussianity in the 4-year COBE data have been reported. Ferreira, Magueijo & Gorski (1998) applied a technique based on the normalised bi-spectrum to a map created by averaging the 53A, 53B, 90A and 90B 4-year COBE-DMR channels (each weighted according to the inverse of its noise variance) and then applying the extended Galactic cut of Banday et al. (1997) and Bennett et al. (1996). They concluded that Gaussianity can be rejected at the 98 per cent confidence level, with the dominant non-Gaussian signal concentrated near the multipole $`\ell =16`$. This non-Gaussian signal is certainly present in the COBE data, but Banday, Zaroubi & Gorski (1999) have now shown that it is not cosmological in origin and is most likely the result of an observational artefact. Nevertheless, using an extended bi-spectrum analysis, Magueijo (1999) reports a new non-Gaussian signal above the 97 per cent level, even after removing the observational artefacts discovered by Banday et al.
A second detection of non-Gaussianity was reported by Pando, Valls-Gabaud & Fang (1998) (hereinafter PVF), who applied a technique based on the discrete wavelet transform (DWT) to Face 0 and Face 5 of the QuadCube pixelisation of the DMR-DCMB and DMR-DSMB maps in Galactic coordinates (i.e. the North and South Galactic pole regions respectively). PVF computed the skewness, kurtosis and scale-scale correlation of the wavelet coefficients of DMR maps in certain domains of the wavelet transform, and compared these statistics with the corresponding probability distributions computed from 1000 realisations of simulated COBE observations of a Gaussian CMB sky. In all cases, they found that the skewness and kurtosis of the wavelet coefficients were consistent with a Gaussian CMB signal. On the other hand, the scale-scale correlation coefficients showed evidence for non-Gaussianity at the 99 per cent confidence level on scales of 11–22 degrees in Face 0 of both the DMR-DCMB and DMR-DSMB maps. Nevertheless, in both maps, Face 5 was found to be consistent with Gaussianity. We note that Bromley & Tegmark (1999) confirm the findings of both PVF and Ferreira et al. (1998).
In this paper, we also apply to the 4-year COBE data a non-Gaussianity test based on the skewness, kurtosis and scale-scale correlation of the wavelet coefficients. In the analysis presented below, however, we calculate the skewness and kurtosis statistics using unbiased estimators based on $`k`$-statistics (Hobson, Jones & Lasenby 1999 - hereinafter HJL), as opposed to the straightforward calculation of sample moments employed by PVF. For the scale-scale correlation, we adopt the same definition as that used by PVF. We also note that the analysis presented below is slightly more general than that presented by PVF, since we calculate the statistics of the wavelet coefficients in all the available domains of the wavelet transform, as opposed to using only those regions that represent structure in the maps on the same scale in the horizontal and vertical directions.
Perhaps the most important point addressed in the analysis presented here, however, is the fact that non-Gaussianity tests based on any orthogonal compactly-supported wavelet decomposition are sensitive to the orientation of the input map. This is discussed in detail below. As an illustration of this point, we therefore present the results of two separate analyses, in which the relative orientations of the input maps differ by 180 degrees. Nevertheless, it should be remembered that, in general, different techniques for detecting non-Gaussianity are each sensitive to different ways in which the data may be non-Gaussian. We should therefore not be too surprised if the detailed results of an analysis are orientation dependent. Obviously, it would be troubling if the general conclusions concerning non-Gaussianity of the data depended on orientation, but that is not the case here.
## 2 The wavelet decomposition
The basics of the wavelet non-Gaussianity test are discussed in detail in HJL and also by PVF and so we give only a brief outline here. The two-dimensional discrete wavelet transform (DWT) (Daubechies 1992, Press et al. 1994) performs the decomposition of a planar digitised image of size $`2^{J_1}\times 2^{J_2}`$ into the sum of a set of two-dimensional planar (digitised) wavelet basis functions
$$\frac{\mathrm{\Delta }T}{T}(𝒙_i)=\underset{j_1=0}{\overset{J_1-1}{\sum }}\underset{j_2=0}{\overset{J_2-1}{\sum }}\underset{l_1=0}{\overset{2^{j_1}-1}{\sum }}\underset{l_2=0}{\overset{2^{j_2}-1}{\sum }}b_{j_1,j_2;l_1,l_2}\psi _{j_1,j_2;l_1,l_2}(𝒙_i).$$
(1)
In equation (1), the wavelets $`\psi _{j_1,j_2;l_1,l_2}(𝒙)`$ (with $`j_1,j_2,l_1,l_2`$ taking the values indicated in the summations) form a complete and orthogonal set of basis functions. Each two-dimensional wavelet is simply the direct tensor product of the corresponding one-dimensional wavelets $`\psi _{j_1;l_1}(x)`$ and $`\psi _{j_2;l_2}(y)`$, which in turn are defined in terms of the dilations and translations of some mother wavelet $`\psi (x)`$ via
$$\psi _{j_1;l_1}(x)=\left(\frac{2^{j_1}}{L}\right)^{1/2}\psi (2^{j_1}x/L-l_1),$$
(2)
where $`0\le x\le L`$, and a similar expression holds for $`\psi _{j_2;l_2}(y)`$. Thus, the scale indices $`j_1`$ and $`j_2`$ correspond to the scales $`L/2^{j_1}`$ and $`L/2^{j_2}`$ in the $`x`$\- and $`y`$-directions respectively (so $`J_1`$ and $`J_2`$ are the smallest possible scales – i.e. one pixel – in each direction), whereas the location indices $`l_1`$ and $`l_2`$ correspond to the $`(x,y)`$-position $`(Ll_1/2^{j_1},Ll_2/2^{j_2})`$ in the image. Since each wavelet basis function $`\psi _{j_1,j_2;l_1,l_2}(x,y)`$ is localised at the relevant scale/position, the corresponding wavelet coefficient $`b_{j_1,j_2;l_1,l_2}`$ measures the amount of signal in the image at this scale and position.
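For concreteness, the standard (tensor-product) decomposition of equation (1) can be built by applying a full one-dimensional transform to every row and then every column. The sketch below uses PyWavelets with `db2` (the four-tap filter, which we take to correspond to the Daubechies 4 wavelet of Press et al.; both the library choice and this correspondence are our assumptions), and groups the resulting domains by $`k=2^{j_1}+2^{j_2}`$:

```python
import numpy as np
import pywt  # PyWavelets

def dwt1_full(x, wavelet="db2"):
    """Full 1-D pyramid of a length-2^J signal, packed as
    [smooth, j=0 (1 coeff), j=1 (2), j=2 (4), ...]."""
    level = int(np.log2(len(x)))  # 5 levels for 32 samples; pywt may warn
    coeffs = pywt.wavedec(x, wavelet, mode="periodization", level=level)
    return np.concatenate(coeffs)

def dwt2_standard(image, wavelet="db2"):
    """Tensor-product 2-D DWT: transform every row, then every column."""
    out = np.apply_along_axis(dwt1_full, 1, image, wavelet)
    return np.apply_along_axis(dwt1_full, 0, out, wavelet)

def domain_masks(J=5):
    """Boolean masks labelling each domain of the packed transform by
    k = 2^j1 + 2^j2, restricted to j1, j2 >= 1 (ten distinct k-values)."""
    n = 2**J
    masks = {}
    for j1 in range(1, J):
        for j2 in range(1, J):
            k = 2**j1 + 2**j2
            m = masks.setdefault(k, np.zeros((n, n), dtype=bool))
            m[2**j2:2**(j2 + 1), 2**j1:2**(j1 + 1)] = True
    return masks
```

(The ordering of coefficients within each scale may differ from the Press et al. convention by a permutation, which does not affect the per-domain statistics used below.)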
### 2.1 Orientation sensitivity
At this point, it is important to note the sensitivity of the orthogonal wavelet decomposition to the orientation of the original input map. As shown by Daubechies (1992), it is impossible to construct an orthogonal wavelet basis in which the basis functions are both symmetric (or anti-symmetric) and have compact support. This asymmetry of the basis functions is the cause of the orientation sensitivity. This is most easily appreciated by considering an input map consisting of just one of the wavelet basis functions. If this map is rotated through 180 degrees (say), then because the basis functions are asymmetric it is not possible to represent the rotated basis function in terms of just one of the original basis functions. Instead, the signal in the rotated map must be represented by several wavelet basis functions with different scale and position indices. Thus any statistics based on the wavelet coefficients are sensitive to the orientation of the original input map. Since the origin of this effect is the asymmetry of the one-dimensional wavelet basis functions, it also occurs for two-dimensional orthogonal wavelet decompositions based on the Mallat algorithm (Mallat 1989), which is also commonly called the multiresolution analysis method. In order to obtain wavelet statistics that are invariant under rotations of the input image by 90, 180 and 270 degrees (and also insensitive to cyclic translations of the image by an arbitrary number of pixels in each direction), it is necessary to use the à trous wavelet algorithm (see e.g. Starck, Murtagh & Bijaoui 1998) with a symmetric filter function. The application of this technique to the detection of non-Gaussianity in the CMB will be presented in a forthcoming paper.
### 2.2 Application to COBE data
In this paper, we will be concerned with Face 0 and Face 5 of the COBE QuadCube pixelisation scheme in Galactic coordinates, each of which consists of $`32\times 32`$ equal-area pixels (i.e. $`J_1=J_2=5`$) of size $`(2.8\mathrm{°})^2`$. Thus the scale $`j`$ corresponds to an angular scale of $`2.8\mathrm{°}\times 2^{4-j}`$. Following the discussion by HJL, the structure of the corresponding wavelet domain is shown in Fig. 1, where the pixel numbers are plotted on a logarithmic scale. We see that the domain is partitioned into separate regions according to the scale indices $`j_1`$ and $`j_2`$ in the horizontal and vertical directions respectively. Thus regions with $`j_1=j_2`$ contain wavelet basis functions that represent the image at the same scale in the two directions, whereas regions with $`j_1\ne j_2`$ describe the image on different scales in the two directions. As discussed in HJL, regions with $`j_1=0`$ or $`j_2=0`$ actually contain basis functions that are tensor products of different one-dimensional basis functions and so for the remainder of this paper we restrict our attention to regions with $`j_1,j_2\ge 1`$. We also define the integer variable $`k=2^{j_1}+2^{j_2}`$, which serves as a measure of inverse scale length, and is constant within each region of the wavelet domain. We note that the value of $`k`$ is not altered if the values $`j_1`$ and $`j_2`$ are interchanged. In this paper, we also restrict ourselves to the Daubechies 4 wavelet basis used by PVF, although analogous analyses may also be performed for other orthogonal discrete wavelet bases, and indeed similar results to those presented in Section 3 are obtained.
### 2.3 Skewness and kurtosis spectra
Following HJL, when considering the statistics of the wavelet coefficients $`b_{j_1,j_2;l_1,l_2}`$ of an image, it is useful to consider separately all those coefficients that share each value of $`k`$. For each value of $`k`$, we then use the corresponding wavelet coefficients to calculate estimators of the skewness $`\widehat{S}`$ and (excess) kurtosis $`\widehat{K}`$ of the parent distribution from which the coefficients were drawn. We therefore obtain the skewness and kurtosis ‘spectra’ $`\widehat{S}(k)`$ and $`\widehat{K}(k)`$ for the image.
As mentioned in the Introduction, at this point our method diverges from that used by PVF in two ways. Firstly, PVF only consider regions of the wavelet domain for which $`j_1=j_2`$ and $`j_1,j_2\ge 1`$ (i.e. $`k=4,8,16,32`$), whereas we consider all regions with $`j_1,j_2\ge 1`$. Secondly, we calculate the estimators $`\widehat{S}`$ and $`\widehat{K}`$ in a different way from that adopted in PVF, as follows. At each value of $`k`$ the skewness and (excess) kurtosis of the parent distribution of the wavelet coefficients are given by
$`S`$ $`=`$ $`\mu _3/\mu _2^{3/2}=\kappa _3/\kappa _2^{3/2},`$ (3)
$`K`$ $`=`$ $`\mu _4/\mu _2^2-3=\kappa _4/\kappa _2^2,`$ (4)
where $`\mu _n`$ is the $`n`$th central moment of the distribution and $`\kappa _n`$ is the $`n`$th cumulant (see HJL for a brief discussion). In PVF, the estimators $`\widehat{\mu }_n`$ of the central moments are simply taken to be the central moments of the sample of wavelet coefficients. It is easily shown, however, that these estimators are biased, so that $`\widehat{\mu }_n\ne \mu _n`$, and this bias is quite pronounced when the sample size is small (as it is in this case). PVF then estimate the skewness and (excess) kurtosis by inserting the biased estimators $`\widehat{\mu }_n`$ into (3) and (4) respectively. Thus, the corresponding estimators $`\widehat{S}`$ and $`\widehat{K}`$ are also significantly biased. In this paper, we instead calculate our estimates of the skewness and (excess) kurtosis using $`k`$-statistics (see Kenney & Keeping 1954; Stuart & Ord 1994; HJL). These provide unbiased estimates $`\widehat{\kappa }_n`$ of the cumulants of the parent population from which the wavelet coefficients were drawn. These unbiased estimators of the cumulants are then inserted into (3) and (4) to obtain the estimators $`\widehat{S}`$ and $`\widehat{K}`$.
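A minimal implementation of these unbiased estimators (the standard $`k`$-statistic formulas of Stuart & Ord 1994):

```python
import numpy as np

def k_stat_skew_kurt(b):
    """Skewness and excess-kurtosis estimators built from the unbiased
    cumulant estimators k2, k3, k4, as in equations (3) and (4)."""
    b = np.asarray(b, dtype=float)
    n = b.size
    d = b - b.mean()
    m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
    k2 = n * m2 / (n - 1)
    k3 = n**2 * m3 / ((n - 1) * (n - 2))
    k4 = n**2 * ((n + 1) * m4 - 3 * (n - 1) * m2**2) \
         / ((n - 1) * (n - 2) * (n - 3))
    return k3 / k2**1.5, k4 / k2**2
```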
### 2.4 Scale-scale correlation spectrum
In addition to the skewness and kurtosis spectra, we may also measure the correlation between the different domains of the wavelet transform by defining the estimators of the scale-scale correlation as
$$\widehat{C}_{j_1,j_2}^p=\frac{2^{j_1+j_2+2}\underset{l_1}{\sum }\underset{l_2}{\sum }b_{j_1,j_2;[l_1/2],[l_2/2]}^pb_{j_1+1,j_2+1;l_1,l_2}^p}{\underset{l_1}{\sum }\underset{l_2}{\sum }b_{j_1,j_2;[l_1/2],[l_2/2]}^p\underset{l_1}{\sum }\underset{l_2}{\sum }b_{j_1+1,j_2+1;l_1,l_2}^p}.$$
(5)
In equation (5), the sums on $`l_1`$ extend from $`0`$ to $`2^{j_1+1}-1`$ (similarly for $`l_2`$), $`p`$ is an even integer and $`[]`$ denotes the integer part. Thus $`C_{j_1,j_2}^p`$ measures the correlation between the wavelet coefficients in the domains $`(j_1,j_2)`$ and $`(j_1+1,j_2+1)`$. In PVF, it was assumed that $`j_1=j_2`$, so that the correlation of wavelet coefficients were only calculated between adjacent diagonal domains in Fig. 1. When $`j_1\ne j_2`$, however, it is convenient to extend the sums in (5) to include also the corresponding domains with $`j_1`$ and $`j_2`$ interchanged. Thus, in each case, we in fact measure the correlation between wavelet coefficients with inverse scalelengths of $`k`$ and $`2k`$ respectively (see Fig. 1). For each possible value of $`k`$, we denote this correlation by $`\widehat{𝒞}^p(k)`$, thereby producing a scale-scale correlation spectrum. Following PVF, we restrict our analysis to the case where $`p=2`$.
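Equation (5) translates directly into array operations on the packed transform of the earlier sketch; here is a minimal version for a single $`(j_1,j_2)`$ pair (the mirrored $`(j_2,j_1)`$ domain, included in the sums when $`j_1\ne j_2`$, is omitted for brevity):

```python
import numpy as np

def scale_scale_corr(w, j1, j2, p=2):
    """C^p_{j1,j2} of equation (5): correlation between the wavelet
    coefficients in domain (j1, j2) and in domain (j1+1, j2+1)."""
    coarse = w[2**j2:2**(j2 + 1), 2**j1:2**(j1 + 1)]           # b_{j1,j2}
    fine = w[2**(j2 + 1):2**(j2 + 2), 2**(j1 + 1):2**(j1 + 2)]
    # parent of fine cell (l1, l2) is coarse cell ([l1/2], [l2/2])
    parent = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    num = 2**(j1 + j2 + 2) * np.sum(parent**p * fine**p)
    den = np.sum(parent**p) * np.sum(fine**p)
    return num / den
```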
### 2.5 The non-Gaussianity test
The skewness, (excess) kurtosis and scale-scale correlation spectra $`\widehat{S}(k)`$, $`\widehat{K}(k)`$ and $`\widehat{𝒞}^2(k)`$ of the wavelet coefficients form the basis of the non-Gaussianity test. The procedure is as follows. We first calculate the $`\widehat{S}(k)`$, $`\widehat{K}(k)`$ and $`\widehat{𝒞}^2(k)`$ spectra for Face 0 or Face 5 of the 4-year COBE map. We then generate 5000 realisations of an all-sky CMB map drawn from an inflationary/CDM model with parameters $`\mathrm{\Omega }_\mathrm{m}=1`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, $`h=0.5`$, $`n=1`$ and $`Q_{\mathrm{rms}\text{-}\mathrm{ps}}=18`$ $`\mu `$K, convolved with a $`7\mathrm{°}`$-FWHM Gaussian beam. For each realisation, we then add random Gaussian pixel noise, where the rms of the noise in each pixel is taken from the COBE rms noise map. The $`\widehat{S}(k)`$, $`\widehat{K}(k)`$ and $`\widehat{𝒞}^2(k)`$ spectra are then calculated for Face 0 and Face 5 of each of the 5000 realisations to obtain approximate probability distributions for the $`\widehat{S}(k)`$, $`\widehat{K}(k)`$ and $`\widehat{𝒞}^2(k)`$ statistics when the CMB signal is the chosen Gaussian inflationary/CDM model. By comparing these probability distributions with the corresponding spectra for Face 0 and Face 5 of the COBE map, we thus obtain (at each $`k`$-value) an estimate of the probability that the CMB signal in the DMR-DSMB map is drawn from a Gaussian ensemble characterised by the chosen inflationary/CDM model. For each face, however, we obtain skewness and kurtosis statistics at ten different $`k`$-values, and six different scale-scale correlation statistics. Thus, the total number of statistics obtained for each face is 26, and care must be taken in assessing the significance of non-Gaussianity detections at individual $`k`$-values (see below). As discussed in section 2.1, however, the orthogonal wavelet decomposition is sensitive to the orientation of the input map. Thus, we repeat the above non-Gaussianity test for the case where Face 0 and Face 5 are both rotated through 180 degrees.
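Comparing a data statistic with its simulated distribution then amounts to an empirical tail fraction; whether the quoted limits are one- or two-sided is our reading of the figures, so the following is schematic:

```python
import numpy as np

def empirical_significance(stat_data, stat_sims):
    """Two-sided tail fraction of the data statistic within the
    distribution built from the Gaussian realisations (one k-value);
    small values flag a non-Gaussian outlier."""
    frac = np.mean(np.asarray(stat_sims) >= stat_data)
    return 2.0 * min(frac, 1.0 - frac)
```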
It is also clear that, to some extent, the results of such an analysis will depend on the chosen parameters in the inflationary/CDM model via the corresponding predicted ensemble-average power spectrum $`C_\ell `$, from which the 5000 realisations are generated. Nevertheless, since at each $`k`$-value the skewness and kurtosis statistics contain the variance $`\mu _2`$ of the wavelet coefficients in their denominators, and the scale-scale correlation in (5) is similarly normalised, we would expect these statistics to be relatively unaffected by changing the power spectrum of the inflationary/CDM model. As an interesting test, we repeated our entire analysis for the case where the 5000 realisations were instead generated using the maximum-likelihood $`C_\ell `$ spectrum calculated from the 4-year COBE data by Gorski (1997). As expected, we found that the results were virtually identical to those presented in the next Section.
## 3 Results
### 3.1 The DMR-DSMB map
In this Section, we present the results of the wavelet non-Gaussianity test when applied to Face 0 and Face 5 of the 4-year COBE DMR-DSMB map in Galactic coordinates. As mentioned in the Introduction, this Galaxy-removed map is constructed by first subtracting templates of synchrotron and dust emission and then removing the free-free emission (Bennett et al. 1994). We find that the results of the non-Gaussianity test are similar for both the DMR-DSMB and DMR-DCMB Galaxy-removed maps.
The resulting $`\widehat{S}(k)`$, $`\widehat{K}(k)`$ and $`\widehat{𝒞}^2(k)`$ spectra for Face 0 and Face 5 of the DSMB map are plotted in Fig. 2. In each plot, the crosses correspond to the values derived from the DSMB map orientated in the same manner as that used by PVF (orientation A), the solid squares correspond to the values obtained from the DSMB map after rotating it through 180 degrees (orientation B), and the open circles denote the mean of the corresponding distribution derived from the simulated COBE observations of the 5000 realisations of the inflationary/CDM model. The error bars denote the 68, 95 and 99 per cent limits of the distributions. These distributions were found to be virtually indistinguishable for the two orientations of the COBE data. For convenience, the $`\widehat{S}(k)`$ and $`\widehat{K}(k)`$ spectra have been normalised so that the variance of each distribution is equal to unity. Thus, for any particular $`k`$-value, an estimate of the significance level can be read off directly from the scale on the vertical axis.
As mentioned above, we calculate the $`\widehat{S}(k)`$ and $`\widehat{K}(k)`$ spectra for all available domains of the wavelet transform, and the $`\widehat{𝒞}^2(k)`$ spectrum for all pairs of domains whose $`k`$-values differ by a factor of 2 (with $`j_1,j_2\ge 1`$ in each case; see Fig. 1). In contrast, PVF only considered domains with $`j_1=j_2`$ and thus only obtained $`\widehat{S}(k)`$ and $`\widehat{K}(k)`$ values for $`k=4,8,16,32`$, and $`\widehat{𝒞}^2(k)`$ values at $`k=4,8,16`$.
We see from Fig. 2 that for orientation A (crosses), all the points in the skewness and kurtosis spectra lie comfortably within their respective Gaussian probability distributions for both faces. In the scale-scale correlation spectrum, however, we confirm PVF's finding of a point at $`k=4`$ that lies slightly outside the 99 per cent confidence limit. On the other hand, for orientation B (solid squares) we obtain two skewness detections someway beyond the 99 per cent confidence limit. These occur in Face 0 at $`k=32`$ and in Face 5 at $`k=24`$. From Fig. 1, however, we see that these $`k`$-values correspond to wavelet basis functions on small scales, corresponding to pixel-to-pixel variations in the COBE map. Thus it is unlikely that this non-Gaussianity is cosmological in origin; we return to this point below. The kurtosis spectrum and scale-scale correlation spectra show no strong non-Gaussian outliers for this orientation.
In order to investigate the robustness of the high-$`k`$ outliers in the $`\widehat{S}(k)`$ spectra for orientation B, we repeated the analysis for the DSMB map with all multipoles above $`\ell =40`$ removed. A similar filtering process was also performed on each of the 5000 CDM realisations. Since the 7-degree FWHM COBE beam essentially filters out all modes beyond $`\ell =40`$, we would expect these modes to contain no contribution from the sky and consist only of instrumental noise or observational artefacts. We also repeated the filtering process for orientation A. The corresponding $`\widehat{S}(k)`$, $`\widehat{K}(k)`$ and $`\widehat{𝒞}^2(k)`$ spectra for two orientations of Face 0 and Face 5 of the filtered DSMB map are plotted in Fig. 3.
We see immediately that the high-$`k`$ skewness detections that were present in the unfiltered map have now disappeared. This suggests that the non-Gaussianity present in the original DSMB map is not cosmological in origin, and is most likely an artefact resulting from the algorithm used to subtract Galactic emission. From Fig. 3(c), we also note that the three points that lay outside the 95 per cent limit in the $`\widehat{𝒞}^2(k)`$ spectrum for Face 0 of the original DSMB map in orientation B (see Fig. 2(c)) have all now been brought well within the Gaussian error bars. Thus we find no strong evidence for non-Gaussianity in the filtered DSMB map in orientation B. For orientation A, however, as we might expect, the level of significance of the $`\widehat{𝒞}^2`$ detection at $`k=4`$ was only slightly reduced by the filtering process.
### 3.2 The 53+90 GHz coadded map
Since the above analysis suggests some non-Gaussianity on pixel scales in the DSMB map, possibly introduced by the Galaxy subtraction algorithm, we repeat the analysis for the inverse noise variance weighted average of the 53A, 53B, 90A and 90B COBE DMR channels.
Fig. 4 shows the $`\widehat{S}(k)`$, $`\widehat{K}(k)`$ and $`\widehat{𝒞}^2(k)`$ spectra for Face 0 and Face 5 of the $`53+90`$ GHz coadded map in both orientations. For orientation A (crosses), none of the skewness, kurtosis or scale-scale correlation statistics lies outside the corresponding 99 per cent limit. Also, for orientation B (solid squares), we see that, in contrast to the DSMB map, no large detections of non-Gaussianity are obtained at high $`k`$ in the skewness spectra. Nevertheless, outliers do occur at the 99 per cent level for Face 0 in the $`\widehat{K}(k)`$ spectrum at $`k=4`$, and for Face 5 in the $`\widehat{𝒞}^2(k)`$ at $`k=6`$ and $`k=12`$. Indeed, the last of these lies someway outside the 99 per cent confidence limit. However, this statistic measures the correlation between the wavelet coefficients in the domains with $`k=12`$ and $`k=24`$, and is therefore influenced primarily by features in the map on the scale of one or two pixels in size.
We once again tested the robustness of these putative detections of non-Gaussianity by repeating the analysis after removing all multipoles above $`\ell =40`$ from the COBE map and the CDM realisations. The resulting spectra are shown in Fig. 5. From Fig. 5(f), we see that the large outlier in $`\widehat{𝒞}^2(12)`$ that was obtained for the unfiltered map in orientation B has now reduced to well within the Gaussian error bars. This suggests that the noise in Face 5 of the coadded map may contain some non-Gaussian component. Nevertheless, the two outliers at the 99 per cent limit in $`\widehat{K}(4)`$ for Face 0 and $`\widehat{𝒞}^2(6)`$ for Face 5 in orientation B remain unaffected by the filtering process, and thus might be interpreted as robust signatures of non-Gaussianity on large scales.
It is, however, important to remember that, although the significance level is above the 99 per cent level for these individual statistics, we must take into account the fact that no outliers are found in the large number of other statistics we have calculated; this is discussed below. It should also be borne in mind that no Galaxy subtraction has been performed on the 53+90 GHz coadded map. Although our analysis is restricted to Face 0 and Face 5 of the COBE QuadCube, which lie outside the standard Galactic cut, it is possible that these faces may be contaminated to some extent by high-latitude Galactic emission.
## 4 Discussion and conclusions
We have presented an orthogonal wavelet analysis of the 4-year COBE data, in order to search for evidence of large-scale non-Gaussianity in the CMB. In particular, we identify an orientation sensitivity associated with this method, which must be borne in mind when assessing its results.
We find that several statistics in the $`\widehat{S}(k)`$, $`\widehat{K}(k)`$ and $`\widehat{𝒞}^2(k)`$ spectra for the COBE DSMB and 53+90 GHz coadded maps (in orientations A and B) lay outside the 99 per cent limit of the corresponding probability distributions derived from 5000 simulated COBE observations of CDM realisations. However, only one such outlier in the DSMB map and two outliers in the 53+90 GHz coadded map were found to be robust to the removal of all multipoles above $`\ell =40`$ in the COBE map and CDM realisations. In the DSMB map, this occurs in $`\widehat{𝒞}^2(4)`$ for Face 0 in orientation A, and in the 53+90 GHz coadded COBE map the outliers are in $`\widehat{K}(4)`$ for Face 0 and $`\widehat{𝒞}^2(6)`$ for Face 5, both in orientation B.
We must, however, take care in assessing the significance of these outliers. For each face and orientation we calculate 26 different statistics. Thus for each data set (either DSMB or 53+90 GHz coadded), the total number of statistics is $`2\times 2\times 26=104`$, and we must take proper account of the fact that a large number of these show no evidence of non-Gaussianity (see, for example, Bromley & Tegmark 1999). Since the statistics presented here are not independent of one another and generally do not possess Gaussian one-point functions, the only way of obtaining a meaningful estimate of the significance of our results is by Monte-Carlo simulation. Indeed, in their bi-spectrum analysis of the 4-year COBE data, Ferreira et al. (1998) used Monte-Carlo simulations and a generalised $`\chi ^2`$-statistic to assess their results. In our case, we adopt a slightly different approach and simply use the 5000 CDM realisations to estimate the probability of obtaining a given number of robust outliers at the $`>99`$ per cent level in any of our 104 statistics, even when the underlying CMB signal is Gaussian. For the DSMB data, we obtained one robust outlier, and the corresponding probability of this occurring by chance was found to be 0.59. For the 53+90 GHz coadded data, two outliers were obtained, and the corresponding probability is 0.28. Therefore planar orthogonal wavelet analysis of the 4-year COBE data can only rule out Gaussianity at the 41 per cent level in the DSMB data and at the 72 per cent level in the 53+90 GHz coadded data. Thus, we conclude that this method does not provide strong evidence for non-Gaussianity in the CMB.
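Because each realisation supplies all 104 statistics at once, their mutual correlations are preserved automatically in this counting; a sketch of the empirical trials correction (our rendering of the procedure, with symmetric two-sided limits assumed):

```python
import numpy as np

def prob_outliers(sim_stats, n_observed, level=0.99):
    """sim_stats: array of shape (n_realisations, 104). Returns the
    fraction of Gaussian realisations with at least n_observed statistics
    outside the central `level` interval."""
    a = (1.0 - level) / 2.0
    lo = np.quantile(sim_stats, a, axis=0)
    hi = np.quantile(sim_stats, 1.0 - a, axis=0)
    n_out = np.sum((sim_stats < lo) | (sim_stats > hi), axis=1)
    return np.mean(n_out >= n_observed)
```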
## Acknowledgements
The authors thank David Valls-Gabaud for his work in independently verifying the numerical results presented here. PM acknowledges financial support from the Cambridge Commonwealth Trust. MPH thanks the PPARC for financial support in the form of an Advanced Fellowship.
# (Meta-)stable reconstructions of the diamond (111) surface: interplay between diamond- and graphite-like bonding
## Abstract
Off-lattice Grand Canonical Monte Carlo simulations of the clean diamond (111) surface, based on the effective many-body Brenner potential, yield the $`(2\times 1)`$ Pandey reconstruction in agreement with *ab-initio* calculations and predict the existence of new meta-stable states, very near in energy, with all surface atoms in three-fold graphite-like bonding. We believe that the long-standing debate on the structural and electronic properties of this surface could be solved by considering this type of carbon-specific configurations.
The discovery of fullerene has awakened an increasing interest in carbon-based nanostructures, as well as in processes such as diamond graphitization or the graphite-to-diamond transformation, which lead to structures promising to combine desirable properties of both graphite and diamond. It is important to develop predictive schemes to treat diamond, graphite and mixed bonding with approaches able to deal with large structures, often beyond the reach of *ab-initio* calculations. The effective many-body empirical potentials due to Tersoff for group IV elements are very accurate for Si and Ge, but less reliable for C. In particular, for C the Tersoff potential yields the unreconstructed (111) $`(1\times 1)`$ surface as the most stable, against the experimental evidence of a $`(2\times 1)`$ Pandey reconstruction. For the (001) face it predicts dimerization with a strong asymmetric displacement of the third-layer atoms. Here, we use the potential proposed by Brenner (parametrization I) and show that it is reliable also at the surface.
We perform off-lattice Grand Canonical Monte Carlo (GCMC) simulations of the (111) surface of diamond. We find the unbuckled undimerized $`(2\times 1)`$ Pandey chain reconstruction as the minimum energy structure and three new meta-stable states, close in energy, with all surface atoms in a three-fold graphite-like bonding. Two of them are obtained by a strong dimerization of the *lower* (4-fold coordinated) chain, inducing a small dimerization of the upper ($`\pi `$-bonded) chain as well. The third meta-stable $`(\sqrt{3}\times \sqrt{3})R30^\mathrm{o}`$ reconstruction is formed by a regular array of vacancies.
Surprisingly, the reconstruction of clean diamond(111) is not yet established in detail, although there is a consensus that the $`\pi `$-bonded Pandey $`(2\times 1)`$ reconstruction is the most stable. One important issue is whether this surface is metallic or semiconducting. In most calculations the band of surface states is metallic whereas experimentally the highest occupied state is at least 0.5 eV below the Fermi level. Dimerization along the $`\pi `$-bonded chain could open the surface gap but only one total-energy calculation obtains slightly dimerized chains yielding a 0.3 eV gap in the surface band. Experimentally, recent X-ray data and medium-energy ion scattering do not show any dimerization but favor the $`(2\times 1)`$ reconstruction accompanied by a strong tilt of the $`\pi `$-bonded chains, similar to the $`(2\times 1)`$ reconstruction of Si(111) and Ge(111). The tilt is however not confirmed by theoretical studies. Also, relaxations in deeper layers are debated. The bonds between first and second bilayers are found to be elongated by an amount which varies between 1% and 8%. Bonds between the second and third bilayers are slightly shifted ($`\sim 1`$%) in theoretical studies while X-ray data suggest a 5-6% relaxation.
Experimentally, uncertainties can be caused by variations of surface preparation. A partial graphitization and other structural phases can coexist at the real surface. It is noteworthy that most structural models were first suggested for Si and Ge and then extended to diamond. However, the former always favor tetrahedral four-fold coordination whereas C favors also the graphite-like three-fold bonding, the latter being in fact energetically stable in the bulk at normal conditions. One can therefore expect diamond to have additional low-energy surface structures, such as the meta-stable reconstructions presented here.
Empirical potentials, although less accurate than *ab-initio* calculations, allow one to explore larger portions of phase space and can lead to unexpected structures, which can then be tested in more accurate calculations and taken into account in the experimental data analysis. We exploit the MC simulated annealing scheme to overcome potential energy barriers and identify low-energy surface structures. Atoms in the bottom layers are kept fixed at their ideal bulk positions while the others (usually, four bilayers of 128 atoms each) are mobile. We consider either a $`VNT`$, $`PNT`$ or $`P\mu T`$ ensemble for different tasks. The canonical $`VNT`$ ensemble is used to minimize ordered structures. During the simulated annealing we allow for volume fluctuations ($`PNT`$). We consider also a grand canonical $`P\mu T`$ ensemble to access structures with a different number of atoms than in the bulk terminated structure. Each atom creation/destruction is enabled only in the near-surface region (2-3 top bilayers) and followed by 1000 MC equilibration moves. We have optimized the implementation of the potential by use of neighbor lists which allow us to calculate energy variations on a finite portion of the sample. The one-dimensional functions defining the potential are stored in tables with a fine grid and calculated by linear interpolation. The attractive $`V_A`$, repulsive $`V_R`$ and the cut-off $`f_c`$ terms are stored as a function of the square of interatomic distances to avoid square root operations. The three-dimensional $`F`$ function is stored on a finer grid and the tricubic interpolation is replaced by a linear one, reducing the terms to be evaluated from 64 to 8. We note that, in a strongly covalent system like diamond, creation and destruction are very improbable events, also because immediately after creation or destruction the neighboring atoms have not yet adjusted to the new local environment; after a destruction the system can gain up to 2 eV by relaxation. This energy has been added as an umbrella in the acceptance rule for creation/destruction.
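For reference, a schematic form of the biased destruction acceptance is sketched below; the Frenkel-Smit expression and the absorption of the thermal wavelength into $`\mu `$ are our assumptions, as is the exact way the 2 eV relaxation gain enters, so this is an illustration rather than the authors' rule:

```python
import numpy as np

def accept_destroy(dU, mu, N, V, T, E_relax=2.0, kB=8.617e-5):
    """Acceptance probability for removing one near-surface atom.
    dU = U(N-1) - U(N), evaluated before the neighbours relax; the
    umbrella term E_relax (~2 eV, the relaxation gain quoted above)
    compensates for that, and the bias must be unweighted in averages."""
    beta = 1.0 / (kB * T)                       # kB in eV/K
    arg = np.log(N / V) - beta * (mu + dU - E_relax)
    return float(np.exp(min(arg, 0.0)))         # min(1, exp(arg))
```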
Throughout, we give energy gains $`\mathrm{\Delta }E`$ per $`1\times 1`$ unit cell, relative to the bulk-terminated surface. We find the relaxed $`(1\times 1)`$ and Pandey $`(2\times 1)`$ structures shown in Fig. 1 to have $`\mathrm{\Delta }E=0.244`$ and 1.102 eV, respectively. Apart from a 4% elongation of the bond between first and second bilayer against 8% for the Pandey structure, our results agree remarkably well with *ab-initio* calculations.
To the best of our knowledge, no one has succeeded before in simulating a spontaneous transition from the ideal $`(1\times 1)`$ to the $`(2\times 1)`$ reconstructed diamond (111) surface. The $`(2\times 1)`$ reconstruction of both diamond(111) and Si(111) is associated with a large coherent displacement of the first bilayer by more than 0.5 Å, accompanied by rebonding in which one atom in the $`2\times 1`$ surface unit cell changes coordination from four to three while another does the opposite. *Ab-initio* molecular dynamics simulation of the spontaneous $`(2\times 1)`$ reconstruction of Si(111) shows that breaking and formation of a new bond occur at the same time. We find that for diamond, instead, the bond breaking leading to an increase of three-fold (graphite-like) atoms precedes the bond formation. In such a situation, competition between the $`(2\times 1)`$ reconstruction and (partial) surface graphitization becomes very important, especially if the annealing is performed at high temperatures. Conversely, a low annealing temperature makes it very difficult to overcome the potential barrier between the two ordered structures. In Fig. 2 we show the top view of a diamond (111) sample, obtained from the ideal relaxed $`(1\times 1)`$ structure after an annealing cycle (about $`0.5\times 10^6`$ MC steps) at $`T=750`$ K. The efficiency of phase space exploration is improved by increasing the step size so as to reduce the acceptance rate from the usual 50% down to 20-25%. In Fig. 2 we emphasize pieces of the lower, four-fold coordinated Pandey chains which represent the final stage of the $`(1\times 1)\to (2\times 1)`$ transition. Only one rotational domain is present and the relative position of the formed chains is somewhat disordered.
At higher temperatures, apart from a tendency towards graphite-like structures, we also observe other ordered phases, similar to the Pandey reconstruction but accompanied by a strong dimerization of the *lower* atomic chain. This dimerization can be performed in two ways, leading to the $`(2\times 1)`$ and $`(4\times 1)`$ structures shown in Fig. 3. The atom coordinates are given in Table I along with those for the Pandey reconstruction. The energy gain $`\mathrm{\Delta }E`$ is $`0.883`$ and $`1.023`$ eV, for the metastable $`(2\times 1)`$ and $`(4\times 1)`$ respectively. In the dimerized chains the short/long distances between atoms 13 and 14 are 1.459/2.215 and 1.444/2.455 Å for the $`(2\times 1)`$ and $`(4\times 1)`$ instead of 1.562 Å in the Pandey reconstruction. This dimerization induces dimerization of the $`\pi `$-bonded chain as well, albeit small ($`<1`$%), which might be of importance for surface electronic properties. The $`(4\times 1)`$ dimerized structure is only 160 meV ($`\sim 2000`$ K) per broken bond higher than the Pandey structure, i.e. these phases can coexist at the surface at high temperatures. Indeed, by heating the ordered Pandey structure to $`2350`$ K we observe a partial transformation to the dimerized state as shown in Fig. 4. Note also a precursor of surface graphitization.
Lastly, in GCMC runs we find the $`(\sqrt{3}\times \sqrt{3})R30^\mathrm{o}`$ reconstruction formed by an ordered array of vacancies, as shown on the right hand side in Fig. 3. The bond length between the atoms in the first bilayer is reduced to 1.390 Å, slightly less than the equilibrium bond length in graphite (1.42 Å). The bonds between the first and second layer are elongated by $`\sim 1`$% while the other bond lengths are close to the bulk value. Taking the bulk binding energy 7.346 eV as the chemical potential, the energy gain is estimated to be $`\mathrm{\Delta }E=0.6145`$ eV. Once formed, this structure is found to be very stable and remains unaltered after long GCMC annealing cycles at $`T=2350`$ K. Similar structures have been discussed for Si. For diamond(111) a $`(2\times 2)`$ vacancy structure was shown to be energetically unfavorable compared to the relaxed $`(1\times 1)`$ structure. However, contrary to the $`(2\times 2)`$ vacancy structure, our $`(\sqrt{3}\times \sqrt{3})`$ structure has only three-fold coordinated surface atoms and might thus be more favorable.
In conclusion, we have performed off-lattice GCMC simulations of the clean diamond (111) surface structure based on the Brenner potential. A spontaneous transition from the ideal $`(1\times 1)`$ to the stable Pandey $`(2\times 1)`$ reconstruction is obtained. We also find metastable reconstructions very close in energy with strong dimerization of the lower atomic chain, which are shown to coexist with the Pandey chain reconstruction at temperatures $`\sim 2000`$ K. Besides, we find a deep local minimum for the vacancy stabilized $`(\sqrt{3}\times \sqrt{3})`$ structure. These meta-stable structures have a larger number of three-fold coordinated atoms at the surface. The absence of consensus on the structural details and electronic structure of the clean (111) surface might be related to these surface structures, which are peculiar to carbon and have never been considered so far.
A.V.P. and A.F. would like to thank E. Vlieg, F. van Bouwelen, J.J. ter Meulen, W. van Enckevort, J. Schermer and B.I. Dunlap for useful discussions. D.P., F.E. and E.T. acknowledge support by MURST.
# SUPERWIND MODEL OF EXTENDED LYMAN $`\alpha `$ EMITTERS AT HIGH REDSHIFT
## 1 INTRODUCTION
### 1.1 Surveys for High-Redshift Galaxies
Recent great progress in observational astronomy has revealed that a large number of high-redshift galaxies can be accessed through the continuum emission of galaxies (stellar continuum, thermal continuum from dust grains, or nonthermal continuum from plasma heated by supernovae) over a wide range of observed wavelengths between optical and radio (e.g., Williams et al. 1996; Lanzetta, Yahil, & Fernández-Soto 1996; Chen, Lanzetta, & Pascarelle 1999; Steidel et al. 1996a, 1996b; Dey et al. 1998; Spinrad et al. 1998; Weymann et al. 1998; van Breugel et al. 1999; Smail et al. 1997; Hughes et al. 1998; Barger et al. 1998; Eales et al. 1999; Barger, Cowie, & Sanders 1999; Richards et al. 1999). On the other hand, it has also often been argued that forming galaxies at high redshift experienced very luminous starbursts and thus could be much brighter in line emission such as the Ly$`\alpha `$ and \[O ii\]$`\lambda `$3727 emission lines (e.g., Partridge & Peebles 1967; Larson 1974; Meier 1976). However, although many attempts have been made to search for such very strong emission-line sources at high redshift (see for a review, Pritchet 1994; see also Pahre & Djorgovski 1995; Thompson, Mannucci, & Beckwith 1996), most of these searches failed except for some successful surveys around known high-$`z`$ objects such as quasars (Hu & McMahon 1996; Hu, McMahon, & Egami 1996; Petitjean et al. 1996; Hu, McMahon, & Cowie 1999). Very recently, a new attempt with the Keck 10 m telescope has revealed the presence of Ly$`\alpha `$ emitters in blank fields at high redshift (Cowie & Hu 1998, hereafter CH98). Subsequently, Keel et al. (1999, hereafter K99) and Steidel et al. (1999, hereafter S99) also found a number of high-$`z`$ Ly$`\alpha `$ emitters in other sky areas. Since these three surveys have reinforced the potential importance of searches for high-$`z`$ Ly$`\alpha `$ emitters, it seems urgent to investigate the origin of Ly$`\alpha `$ emitters.
### 1.2 Extended Lyman $`\alpha `$ Emitters at High Redshift
A brief summary of the three recent surveys for high-$`z`$ Ly$`\alpha `$ emitters (CH98, K99, and S99) is given in Table 1. In this Letter, we adopt an Einstein-de Sitter cosmology with a Hubble constant $`H_0=100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. Each survey has discovered more than ten strong Ly$`\alpha `$ emitters with the Ly$`\alpha `$ equivalent width above 100 Å in the observed frame. It is interesting to note that five sources among them are observed to be very extended spatially, e.g., $`\sim `$ 100 kpc (K99; S99); we call them Ly$`\alpha `$ blobs following S99 (hereafter LABs). These five sources are cataloged as Object 18, Object 19, 53W002 (K99), Blob 1, and Blob 2 (S99). Their basic data are given in Table 2. Since the three LABs found in K99 are all strong C iv emitters (Pascarelle et al. 1996), it seems natural to conclude that they are photoionized by the central engine of active galactic nuclei (AGNs) (K99). On the other hand, the remaining two LABs found by S99 have no evidence for the association with AGNs (S99). It should be also noted that their observed Ly$`\alpha `$ equivalent widths, $`EW(\mathrm{Ly}\alpha )\sim `$ 1500 Å, are much larger than those of the three K99 sources. These suggest that the origin of LABs may be heterogeneous and thus the origin of the S99 LABs is different from that of the K99 ones.
Here we summarize the observational properties of the LABs found by S99: 1) the observed Ly$`\alpha `$ luminosities are $`10^{43}h^{-2}`$ ergs s<sup>-1</sup>, 2) they appear elongated morphologically, 3) their sizes amount to $`100h^{-1}`$ kpc, 4) the observed line widths amount to $`1000`$ km s<sup>-1</sup>, and 5) they are not associated with strong radio-continuum sources such as powerful radio galaxies. One possible origin may be that these LABs are superwinds driven by the initial starburst in galaxies, because i) a superwind could develop to a distance of $`100`$ kpc in the low-density intergalactic medium (IGM), and ii) a superwind often blows with a bi-conical morphology (e.g., Heckman, Armus, & Miley 1990). In this Letter, we investigate this possibility. We also discuss a possible evolutionary link between LABs and high-$`z`$, dust-enshrouded submm sources (Barger et al. 1999 and references therein).
## 2 SUPERWIND MODEL
### 2.1 Superwinds From Forming Galaxies
We consider the possibility that a LAB is a well-developed superwind seen from a nearly edge-on view. First, we investigate the properties of a superwind caused by the initial starburst in a galaxy. We adopt the dissipative collapse scenario for the formation of elliptical galaxies and bulges \[i.e., the monolithic collapse model<sup>1</sup><sup>1</sup>1It is not necessary to presume that this pregalactic cloud is a first-generation gigantic gas cloud. If a number of subgalactic gas clouds are assembled into one and then a starburst occurs in its central region, the physical situation seems to be nearly the same as that of the monolithic collapse. (Larson 1974)\] together with the galactic wind model proposed by Arimoto & Yoshii (1987; hereafter AY87; see also Kodama & Arimoto 1997). In this scenario, the initial starburst occurs at the epoch of galaxy formation in the galaxy center. Subsequently, massive stars die and then a large number of supernovae appear. These supernovae could overlap and then evolve into a so-called superbubble. If the kinetic energy deposited in the surrounding gas overcomes the gravitational potential energy of the galaxy, the gas clouds are blown out into intergalactic space as a superwind (e.g., Heckman et al. 1990).
The evolution of such a superwind can be described by superbubble models (McCray & Snow 1979; Koo & McKee 1992a, 1992b; Heckman et al. 1996; Shull 1995). The radius and velocity of the shocked shells<sup>2</sup><sup>2</sup>2It is noted that the derivation of $`r_{\mathrm{shell}}`$ requires that the baryonic component dominates the gravitational potential. Although the presence of a dark matter halo requires that this estimate of $`r_{\mathrm{shell}}`$ is not valid at arbitrarily large radii, we do not take account of this effect because our discussion is an order-of-magnitude one. at time $`t`$ (in units of $`10^8`$ years) are then
$$r_{\mathrm{shell}}\simeq 110L_{\mathrm{mech},43}^{1/5}n_{\mathrm{H},-5}^{-1/5}t_8^{3/5}\mathrm{kpc},$$
(1)
and
$$v_{\mathrm{shell}}\simeq 650L_{\mathrm{mech},43}^{1/5}n_{\mathrm{H},-5}^{-1/5}t_8^{-2/5}\mathrm{km}\mathrm{s}^{-1},$$
(2)
where $`L_{\mathrm{mech},43}`$ is the mechanical luminosity released collectively from the supernovae in the central starburst in units of $`10^{43}`$ ergs s<sup>-1</sup> and $`n_{\mathrm{H},-5}`$ is the average hydrogen number density of the IGM in units of $`10^{-5}`$ cm<sup>-3</sup>.
We can estimate $`L_{\mathrm{mech}}`$ directly from AY87. For an elliptical galaxy with a stellar mass $`M_{\mathrm{stars}}=10^{11}M_{\odot}`$, radius $`r\sim 10`$ kpc and $`n_\mathrm{H}\sim 1`$ cm<sup>-3</sup> (Saito 1979; AY87), we expect $`N_{\mathrm{SN}}\sim 3\times 10^9`$ stars that explode as supernovae. Since most of these massive stars were formed during the first $`5\times 10^8`$ years (= $`t_{\mathrm{GW}}`$), we obtain $`L_{\mathrm{mech}}\sim \eta E_{\mathrm{SN}}N_{\mathrm{SN}}/t_{\mathrm{GW}}\sim 10^{43}\mathrm{erg}\mathrm{s}^{-1}`$ where $`E_{\mathrm{SN}}`$ is the total energy of a single supernova ($`10^{51}`$ ergs) and $`\eta `$ is the efficiency of the kinetic energy deposited in the ambient gas ($`\sim 0.1`$; Dyson & Williams 1980). We assume for simplicity that the hydrogen number density in the IGM is $`n_{\mathrm{IGM}}(z)\sim 0.1n_{\mathrm{cr}}(z)=0.1n_{\mathrm{cr}}(0)(1+z)^3\simeq 1.1\times 10^{-6}h^2(1+z)^3`$ cm<sup>-3</sup>, where $`n_{\mathrm{cr}}(0)`$ is the critical number density corresponding to the critical mass density of the universe, $`\rho _{\mathrm{cr}}(0)=3H_0^2/(8\pi G)\simeq 1.9\times 10^{-29}h^2`$ g cm<sup>-3</sup>. We thus obtain $`n_{\mathrm{IGM}}(3)\simeq 7.3\times 10^{-5}h^2`$ cm<sup>-3</sup> at $`z=3`$. Since we assume that the superwind is seen from a nearly edge-on view, we obtain a characteristic size of the superwind, $`l\simeq 2\times r_{\mathrm{shell}}\simeq 150`$ kpc with $`n_{\mathrm{H},-5}=7.3`$. If we assume an opening angle of the superwind $`\theta _{\mathrm{open}}=45^{\circ}`$ (see section 2.3), we obtain a full width at half maximum velocity of the superwind, FWHM $`\simeq 2\times v_{\mathrm{shell}}\mathrm{sin}\theta _{\mathrm{open}}\simeq 620`$ km s<sup>-1</sup>. These values appear consistent with the observations (S99).
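These numbers are easy to reproduce. The following minimal sketch (ours, not part of the original analysis) evaluates equations (1)-(2) with the adopted IGM density at $`z=3`$ and the assumed $`45^{\circ}`$ semi-opening angle; all inputs are the values quoted above, with $`h=1`$ assumed.

```python
# Sketch: evaluate the superbubble scalings of eqs. (1)-(2) at z = 3.
# Coefficients and inputs are the values quoted in the text (h = 1 assumed).
import math

def shell_radius_kpc(L43=1.0, nH5=1.0, t8=1.0):
    """Shell radius [kpc]; L43 in 1e43 erg/s, nH5 in 1e-5 cm^-3, t8 in 1e8 yr."""
    return 110.0 * L43**0.2 * nH5**-0.2 * t8**0.6

def shell_velocity_kms(L43=1.0, nH5=1.0, t8=1.0):
    """Shell expansion velocity [km/s]."""
    return 650.0 * L43**0.2 * nH5**-0.2 * t8**-0.4

h, z = 1.0, 3.0
m_H = 1.67e-24                            # g, hydrogen atom mass
n_cr0 = 1.9e-29 * h**2 / m_H              # cm^-3, critical number density today
nH5 = 0.1 * n_cr0 * (1.0 + z)**3 / 1e-5   # ~7.3, i.e. n_IGM ~ 7.3e-5 cm^-3

r = shell_radius_kpc(nH5=nH5)             # ~74 kpc after 1e8 yr
v = shell_velocity_kms(nH5=nH5)           # ~440 km/s
print(f"l ~ {2*r:.0f} kpc, FWHM ~ {2*v*math.sin(math.radians(45)):.0f} km/s")
# consistent with the l ~ 150 kpc and FWHM ~ 620 km/s quoted above
```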
### 2.2 Frequency of Occurrence of Superwinds at High Redshift
Since our superwind model implies that the most probable progenitors of LABs are forming elliptical galaxies, it is important to compare the observed number density of LABs at high redshift with that of elliptical galaxies in the local universe. The observed number density of LABs, $`n_{\mathrm{LAB}}`$, can be related to the number density of elliptical galaxies responsible for the LABs, $`n_{\mathrm{E}\mathrm{LAB}}`$, as
$$n_{\mathrm{LAB}}\simeq n_{\mathrm{E}\mathrm{LAB}}\nu _{\mathrm{SW}}(1-\mathrm{\Delta }\mathrm{\Omega }/4\pi ),$$
(3)
where $`\nu _{\mathrm{SW}}`$ is the chance probability of finding superwinds from high-$`z`$ ellipticals, and $`\mathrm{\Delta }\mathrm{\Omega }`$ is the full opening solid angle of the pair of superwind cones in steradians. The last term reflects the assumption that we observe superwinds from a nearly edge-on view.
First, based on the results of CH98, K99, and S99, we estimate $`n_{\mathrm{LAB}}`$ using the following relation,
$$n_{\mathrm{LAB}}=\frac{N_{\mathrm{LAB}}(\mathrm{CH98})+N_{\mathrm{LAB}}(\mathrm{K99})+N_{\mathrm{LAB}}(\mathrm{S99})}{V(\mathrm{CH98})f_{\mathrm{cl}}(\mathrm{CH98})+V(\mathrm{K99})f_{\mathrm{cl}}(\mathrm{K99})+V(\mathrm{S99})f_{\mathrm{cl}}(\mathrm{S99})}$$
(4)
where $`V`$ is the co-moving volume of the surveyed area (see Table 1) and $`f_{\mathrm{cl}}`$ is the clustering factor of galaxies in the surveyed volume with respect to the so-called field. In the CH survey, no LAB is found in the two blank fields; i.e., $`N_{\mathrm{LAB}}(\mathrm{CH98})=0`$ and $`f_{\mathrm{cl}}(\mathrm{CH98})=1`$. In the S99 survey, the two LABs are found in the proto-cluster region in which the number density of galaxies is higher by a factor of $`\sim 6`$ than that in the field; i.e., $`N_{\mathrm{LAB}}(\mathrm{S99})=2`$ and $`f_{\mathrm{cl}}(\mathrm{S99})=6`$. In the K99 survey, although the three LABs are found in the 53W002 field, all of them are associated with AGNs. Therefore, we adopt $`N_{\mathrm{LAB}}(\mathrm{K99})=0`$. There is a rich group of galaxies in this field (Pascarelle et al. 1996). However, since it is difficult to estimate its clustering factor quantitatively, we assume $`f_{\mathrm{cl}}(\mathrm{K99})\simeq f_{\mathrm{cl}}(\mathrm{S99})=6`$. We do not use the data of the other five fields surveyed by K99 because the detection limits of Ly$`\alpha `$ emission are higher by a factor of 2 than those of the other survey fields. Then we obtain $`n_{\mathrm{LAB}}\simeq 3.4\times 10^{-5}h^3`$ Mpc<sup>-3</sup>.
Next we estimate the probability to observe superwinds, $`\nu _{\mathrm{SW}}`$. Since galaxies beyond $`z\sim 5`$ have been found (e.g., Hu et al. 1999; Dey et al. 1998; Spinrad et al. 1998; Weymann et al. 1998; van Breugel et al. 1999), we assume that elliptical galaxies were formed randomly at a redshift range between $`z=10`$ and $`z=3`$. According to the cosmology adopted here, the above redshift interval corresponds to a duration of $`\tau _{\mathrm{form}}\simeq 6.4\times 10^8h^{-1}`$ years. For an elliptical galaxy with a mass of $`10^{11}M_{\odot}`$, the galactic wind blows at $`t(\mathrm{SW})\simeq 3.5\times 10^8`$ years after the onset of the initial starburst (AY87). Therefore, superwinds could be observed from such ellipticals at $`z\lesssim 6`$. The chance probability of superwinds can be estimated as $`\nu _{\mathrm{SW}}=\tau _{\mathrm{SW}}/\tau _{\mathrm{form}}`$ where $`\tau _{\mathrm{SW}}`$ is the duration over which a superwind can be observed as an emission-line nebula. As shown in section 2.1, a duration of $`\tau _{\mathrm{SW}}\simeq 1\times 10^8`$ years is necessary to develop the superwind to a radius of $`100h^{-1}`$ kpc. Therefore, we obtain $`\nu _{\mathrm{SW}}\simeq 0.16h`$.
Thirdly, we estimate the probability to observe superwinds from a nearly edge-on view. A typical semi-opening angle of superwinds may be $`\theta _{\mathrm{open}}\simeq 45^{\circ}`$ (e.g., Heckman et al. 1990; Ohyama, Taniguchi, & Terlevich 1997 and references therein). This gives $`1-\mathrm{\Delta }\mathrm{\Omega }/4\pi =\mathrm{cos}\theta _{\mathrm{open}}\simeq 0.71`$.
Then, we obtain
$$n_{\mathrm{E}\mathrm{LAB}}\simeq n_{\mathrm{LAB}}\nu _{\mathrm{SW}}^{-1}(1-\mathrm{\Delta }\mathrm{\Omega }/4\pi )^{-1}\simeq 3.0\times 10^{-4}h^2\mathrm{Mpc}^{-3}.$$
(5)
Integrating the luminosity function of elliptical galaxies derived by Marzke et al. (1994), we find that the above number density corresponds to that of ellipticals above $`1L^{\ast}`$ when $`h`$ lies in a range between 0.5 and 1. Since the mass of an elliptical galaxy with $`L_{\ast}`$ is $`10^{11}M_{\odot}`$ (AY87; Kodama & Arimoto 1997), our superwind model appears consistent with the observations.
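The bookkeeping of equations (3)-(5) amounts to a few lines of arithmetic. As a sketch (ours), with $`n_{\mathrm{LAB}}`$ taken from equation (4) and the durations quoted above:

```python
# Sketch: the LAB number-density chain of eqs. (3)-(5); h = 1 assumed.
import math

n_LAB = 3.4e-5                 # h^3 Mpc^-3, from eq. (4) and Table 1
tau_SW = 1.0e8                 # yr, superwind visible as a Ly-alpha nebula
tau_form = 6.4e8               # h^-1 yr, spread of formation epochs (z = 10 -> 3)
nu_SW = tau_SW / tau_form      # ~0.16 h, chance of catching the wind phase
edge_on = math.cos(math.radians(45.0))   # 1 - dOmega/4pi ~ 0.71

n_E_LAB = n_LAB / (nu_SW * edge_on)      # eq. (5)
print(f"nu_SW ~ {nu_SW:.2f} h,  n_E-LAB ~ {n_E_LAB:.1e} h^2 Mpc^-3")
# -> ~3e-4 h^2 Mpc^-3, comparable to the local density of ~L* ellipticals
```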
### 2.3 Obscured Host Galaxies
Finally we comment on the visibility of galaxies hosting superwinds. In our superwind model, the central starburst region may be obscured by the surrounding gas and dust. Although AY87 assume that the superwind blows isotropically for simplicity, actual superwinds tend to have a bi-conical morphology. This implies that a lot of gas and dust may be located in the host galaxy in a disk-like configuration, being responsible for the collimation of the superwind. These gas clouds are expected to absorb the radiation from the central star cluster if we observe the superwind from a nearly edge-on view. Let us consider a case in which gas clouds with a total mass of $`M_{\mathrm{gas}}`$ are uniformly distributed in a disk with a radius of $`r`$ and a full height of $`d`$. We estimate the average number density of gas $`n_\mathrm{H}=M_{\mathrm{gas}}/[\pi r^2dm_\mathrm{H}]\simeq 14M_{\mathrm{gas},10}r_{10}^{-2}d_1^{-1}`$ cm<sup>-3</sup> where $`M_{\mathrm{gas},10}`$ is in units of $`10^{10}M_{\odot}`$, $`r_{10}`$ is in units of 10 kpc, $`d_1`$ is in units of 1 kpc, and $`m_\mathrm{H}`$ is the mass of a hydrogen atom. This gives an H i column density $`N_\mathrm{H}=n_\mathrm{H}r\simeq 4.2\times 10^{23}M_{\mathrm{gas},10}r_{10}^{-1}d_1^{-1}`$ atoms cm<sup>-2</sup> for an edge-on view toward the gas disk, corresponding to a visual extinction of $`A_V\simeq 280`$ mag, where we use the relation $`A_V(\mathrm{mag})=N_\mathrm{H}/(1.54\times 10^{21}\mathrm{cm}^{-2})`$ (e.g., Black 1987). Even if the dust-to-gas mass ratio is ten times smaller than that of our galaxy, the visual extinction is still large, $`A_V\simeq 30`$ mag. This may be responsible for the observed shortage of ultraviolet luminosity needed to account for the Ly$`\alpha `$ line luminosities (S99).
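For concreteness, the extinction estimate can be wrapped up as follows; this sketch uses the scaling coefficients exactly as quoted above rather than re-deriving them.

```python
# Sketch: edge-on column density and extinction of the collimating gas disk,
# using the scaling relations quoted in the text.
def column_density(M_gas10=1.0, r10=1.0, d1=1.0):
    """N_H [cm^-2]; M_gas in 1e10 Msun, r in 10 kpc, d in 1 kpc."""
    return 4.2e23 * M_gas10 / (r10 * d1)

def visual_extinction(NH, dust_fraction=1.0):
    """A_V [mag]; dust_fraction = 1 for the Galactic dust-to-gas ratio."""
    return dust_fraction * NH / 1.54e21

NH = column_density()
print(f"N_H ~ {NH:.1e} cm^-2, A_V ~ {visual_extinction(NH):.0f} mag, "
      f"A_V(10% dust) ~ {visual_extinction(NH, 0.1):.0f} mag")
# -> ~4.2e23 cm^-2, A_V ~ 270-280 mag, still ~30 mag with a tenth of the dust
```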
The obscuration described above may also be responsible for the observed large equivalent widths of the Ly$`\alpha `$ emission in S99; i.e., $`EW(\mathrm{Ly}\alpha )\simeq 1500`$ Å. Since these LABs are observed at $`z\simeq 3.1`$ (see Table 2), the rest-frame equivalent widths are estimated to be $`EW^0(\mathrm{Ly}\alpha )\simeq 375`$ Å. This value is still larger by a factor of two than those expected for star-forming, dust-free galaxies; e.g., $`EW(\mathrm{Ly}\alpha )\simeq `$ 50 – 200 Å (Charlot & Fall 1993; see also Tenorio-Tagle et al. 1999). However, in our model, the strong continuum radiation from the central star cluster can be obscured by the large amount of surrounding gas and dust. On the other hand, the Ly$`\alpha `$ emission arises from the superwind, which is far from the host, e.g., $`r\sim 100`$ kpc. Therefore, the larger-than-normal $`EW(\mathrm{Ly}\alpha )`$ is one of the important properties of our model.
### 2.4 A Possible Evolutionary Link Between LABs and Dust-enshrouded Submm Sources
As mentioned in section 2.2, the central starburst region in a forming elliptical galaxy could be enshrouded by a large amount of gas with dust grains, because these grains are expected to be supplied from Population III objects (if any), from the first massive stars in the initial starburst, or from both. Therefore, elliptical galaxies at this phase may be observed as dust-enshrouded (or dusty) submm sources (hereafter DSSs). Subsequent supernova explosions blow the gas out into the IGM as a superwind several times $`10^8`$ years after the onset of the initial starburst. Elliptical galaxies at this superwind phase are assumed to be LABs in our model. We note that they are expected to be much fainter at submm wavelengths than the DSSs because a significant part of the dust grains have already been expelled from the galaxy. In summary, the dissipative-collapse formation of elliptical galaxies together with the galactic wind model suggests the following evolutionary sequence. Step I: The initial starburst occurs in the center of the pregalactic gas cloud. Step II: This galaxy may be hidden by surrounding gas clouds for the first several times $`10^8`$ years (i.e., the DSS phase). Step III: The superwind blows and thus the DSS phase ceases. The superwind leads to the formation of extended emission-line regions around the galaxy (i.e., the LAB phase). This lasts for a duration of $`1\times 10^8`$ years. And Step IV: The galaxy evolves into an ordinary elliptical galaxy $`10^9`$ years after its formation.
## 3 CONCLUDING REMARKS
The origin of Ly$`\alpha `$ emission from high-$`z`$ objects may be heterogeneous; i.e., a) ionized gas irradiated by massive stars, b) ionized gas heated by superwinds, and c) ionized gas irradiated by the central engine of various types of AGNs. Such diversity is also reported for submm-selected galaxies (i.e., DSSs) with $`z>1`$ (Ivison et al. 1999). Therefore, in order to investigate the cosmic star formation history from high-$`z`$ to the present day (e.g., Madau et al. 1996), we will have to study carefully what the observed Ly$`\alpha `$ emitters at high redshift are.
We would like to thank an anonymous referee for useful suggestions and comments. YS is a JSPS fellow. This work was financially supported in part by the Ministry of Education, Science, and Culture (Nos. 07044054, 10044052, and 10304013). |
no-problem/0001/hep-ph0001173.html | ar5iv | text | # Cosmology of SUSY Q-balls
## 1 Non-topological solitons in MSSM
In a class of theories with interacting scalar fields $`\varphi `$ that carry some conserved global charge, the ground state is a Q-ball , a lump of coherent scalar condensate that can be described semiclassically as a non-topological soliton of the form
$$\varphi (x,t)=e^{i\omega t}\overline{\varphi }(x).$$
(1)
Q-balls exist whenever the scalar potential satisfies certain conditions that were first derived for a single scalar degree of freedom with some abelian global charge and were later generalized to a theory of many scalar fields with different charges . Non-abelian global symmetries and abelian local symmetries can also yield Q-balls.
It turns out that all phenomenologically viable supersymmetric extensions of the Standard Model predict the existence of non-topological solitons associated with the conservation of baryon and lepton number. If the physics beyond the standard model reveals some additional global symmetries, this will further enrich the spectrum of Q-balls . The MSSM admits a large number of different Q-balls, characterized by (i) the quantum numbers of the fields that form a spatially-inhomogeneous ground state and (ii) the net global charge of this state.
First, there is a class of Q-balls associated with the tri-linear interactions that are inevitably present in the MSSM . The masses of such Q-balls grow linearly with their global charge, which can be an arbitrary integer number . Baryonic and leptonic Q-balls of this variety are, in general, unstable with respect to their decay into fermions. However, they could form in the early universe through the accretion of global charge or, possibly, in a first-order phase transition .
The second class of solitons comprises the Q-balls whose VEVs are aligned with some flat directions of the MSSM. The scalar field inside such a Q-ball is a gauge-singlet combination of squarks and sleptons with a non-zero baryon or lepton number. The potential along a flat direction is lifted by some soft supersymmetry-breaking terms that originate in a “hidden sector” of the theory at some scale $`\mathrm{\Lambda }__S`$ and are communicated to the observable sector by some interaction with a coupling $`g`$, so that $`g\mathrm{\Lambda }__S\sim 100`$ GeV. Depending on the strength of the mediating interaction, the scale $`\mathrm{\Lambda }__S`$ can be as low as a few TeV (as in the case of gauge-mediated SUSY breaking), or it can be some intermediate scale if the mediating interaction is weaker (for instance, $`g\sim \mathrm{\Lambda }__S/m_{_{Planck}}`$ and $`\mathrm{\Lambda }__S\sim 10^{10}`$ GeV in the case of gravity-mediated SUSY breaking). For lack of a definitive scenario, one can regard $`\mathrm{\Lambda }__S`$ as a free parameter. Below $`\mathrm{\Lambda }__S`$ the mass terms are generated for all the scalar degrees of freedom, including those that parameterize the flat direction. At energy scales larger than $`\mathrm{\Lambda }__S`$, the mass terms turn off and the potential is “flat” up to some logarithmic corrections. If the Q-ball VEV extends beyond $`\mathrm{\Lambda }__S`$, the mass of a soliton is no longer proportional to its global charge $`Q`$, but rather to $`Q^{3/4}`$.
This allows for the existence of some entirely stable Q-balls with a large baryon number $`B`$ (B-balls). Indeed, if the mass of a B-ball is $`M__B\sim (1\mathrm{TeV})\times B^{3/4}`$, then the energy per unit baryon number, $`(M__B/B)\sim (1\mathrm{TeV})\times B^{-1/4}`$, is less than 1 GeV for $`B>10^{12}`$. Such large B-balls cannot dissociate into protons and neutrons and are entirely stable thanks to the conservation of energy and the baryon number. If they were produced in the early universe, they would exist at present as a form of dark matter .
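The stability threshold quoted above follows from a one-line inversion of the mass formula; as a minimal sketch (ours, not from the original paper):

```python
# Sketch: minimal baryon number for an absolutely stable B-ball.  A B-ball of
# mass M_B ~ M * B^(3/4) is stable once M_B/B drops below ~1 GeV per baryon.
M_TeV_in_GeV = 1.0e3        # soft SUSY-breaking scale, ~1 TeV (as in the text)
nucleon_GeV = 1.0           # ~1 GeV carried per unit baryon number by nucleons

B_crit = (M_TeV_in_GeV / nucleon_GeV) ** 4
print(f"stable for B > {B_crit:.0e}")   # -> 1e+12, as quoted in the text
```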
## 2 Fragmentation of Affleck–Dine condensate into Q-balls
Several mechanisms could lead to formation of B-balls and L-balls in the early universe. First, they can be produced in the course of a phase transition . Second, thermal fluctuations of a baryonic and leptonic charge can, under some conditions, form a Q-ball. Finally, a process of a gradual charge accretion, similar to nucleosynthesis, can take place . However, it seems that the only process that can lead to a copious production of very large, and, hence, stable, B-balls, is fragmentation of the Affleck-Dine condensate .
At the end of inflation, the scalar fields of the MSSM develop some large expectation values along the flat directions, some of which have a non-zero baryon number . Initially, the scalar condensate has the form given in eq. (1) with $`\overline{\varphi }(x)=const`$ over the length scales greater than a horizon size. One can think of it as a universe filled with Q-matter. The relaxation of this condensate to the potential minimum is the basis of the Affleck–Dine (AD) scenario for baryogenesis.
It was often assumed that the condensate remains spatially homogeneous from the time of formation until its decay into the matter baryons. This assumption is, in general, incorrect. In fact, the initially homogeneous condensate can become unstable and break up into Q-balls whose size is determined by the potential and the rate of expansion of the Universe. B-balls with $`12<\mathrm{log}_{10}B<30`$ can form naturally from the breakdown of the AD condensate. These are entirely stable if the flat direction is “sufficiently flat”, that is if the potential grows slower than $`\varphi ^2`$ on scales of the order of $`\overline{\varphi }(0)`$. The evolution of the primordial condensate can thus be summarized as follows: the homogeneous condensate develops an instability, breaks up into lumps, and the lumps relax into Q-balls.
This process has been analyzed analytically in the linear approximation. Recently, some impressive numerical simulations of Q-ball formation have been performed ; they confirm that the fragmentation of the condensate into Q-balls occurs in some Affleck-Dine models. The global charges of Q-balls that form this way are model dependent. The subsequent collisions can further modify the distribution of soliton sizes.
## 3 SUSY Q-balls as dark matter
Conceivably, the cold dark matter in the Universe can be made up entirely of SUSY Q-balls. Since the baryonic matter and the dark matter share the same origin in this scenario, their contributions to the mass density of the Universe are related. Therefore, it is easy to understand why the observations find $`\mathrm{\Omega }_{_{DARK}}\sim \mathrm{\Omega }_B`$ within an order of magnitude. This fact is extremely difficult to explain in models that invoke a dark-matter candidate whose present-day abundance is determined by the process of freeze-out, independent of baryogenesis. If this is the case, one could expect $`\mathrm{\Omega }_{_{DARK}}`$ and $`\mathrm{\Omega }_B`$ to be different by many orders of magnitude. If one doesn’t want to accept this equality as fortuitous, one is forced to hypothesize some ad hoc symmetries that could relate the two quantities. In the MSSM with AD baryogenesis, the amounts of dark-matter Q-balls and the ordinary matter baryons are related . One predicts $`\mathrm{\Omega }_{_{DARK}}=\mathrm{\Omega }_B`$ for B-balls with $`B\sim 10^{26}`$. This size is in the middle of the range of Q-ball sizes that can form in the Affleck–Dine scenario .
The value $`B\sim 10^{26}`$ is well above the present experimental lower limit on the baryon number of an average relic B-ball, under the assumption that all or most of the cold dark matter is made up of Q-balls. On their passage through matter, the electrically neutral baryonic SUSY Q-balls can cause proton decay, while the electrically charged B-balls produce massive ionization. Although the condensate inside a Q-ball is electrically neutral , it may pick up some electric charge through its interaction with matter . Regardless of its ability to retain electric charge, the Q-ball would produce a straight track in a detector and would release an energy of, roughly, 10 GeV/mm. The present limits constrain the baryon number of a relic dark-matter B-ball to be greater than $`10^{22}`$. Future experiments are expected to improve this limit. It would take a detector with an area of several square kilometers to cover the entire interesting range $`B\sim 10^{22}`$–$`10^{30}`$.
The relic Q-balls can accumulate in neutron stars and can lead to their ultimate destruction over a time period from one billion years to longer than the age of the Universe . If the lifetime of a neutron star is in a few Gyr range, the predicted mini-supernova explosions may be observable.
## 4 B-ball baryogenesis
An interesting scenario that relates the amounts of baryonic and dark matter in the Universe, and in which the dark-matter particles are produced from the decay of unstable B-balls was proposed by Enqvist and McDonald .
## 5 Phase transitions precipitated by solitosynthesis
In the false vacuum, a rapid growth of non-topological solitons can precipitate an otherwise impossible or slow phase transition .
Let us suppose the system is in a metastable false vacuum that preserves some U(1) symmetry. The potential energy in the Q-ball interior is positive in the case of a true vacuum, but negative if the system is in the metastable false vacuum. In either case, it grows as the third power of the Q-ball radius $`R`$. The positive contribution of the time derivative to the soliton mass can be written as $`Q^2/\int \overline{\varphi }^2(x)d^3x\propto R^{-3}`$, and the gradient surface energy scales as $`R^2`$. In the true vacuum, all three contributions are positive and the Q-ball is the absolute minimum of energy (Fig. 1). However, in the false vacuum, the potential energy inside the Q-ball is negative and goes as $`-R^3`$. As shown in Fig. 1, for small charge $`Q`$, there are two stationary points, the minimum and the maximum. The former corresponds to a Q-ball (which is, roughly, as stable as the false vacuum is), while the latter is a critical bubble of the true vacuum with a non-zero charge.
There is a critical value of charge $`Q=Q_c`$, for which the only stationary point is unstable. If formed, such an unstable bubble will expand.
If the Q-ball charge increases gradually, it eventually reaches the critical value. At that point the Q-ball expands and converts space into the true-vacuum phase. In the case of tunneling, the critical bubble is formed through coincidental coalescence of random quanta into an extended coherent object. This is a small-probability event. If, however, a Q-ball grows through charge accretion, it reaches the critical size with probability one, as long as the conditions for growth are satisfied. The phase transition can proceed at a much faster rate than it would by tunneling.
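The counting of stationary points described above (two below $`Q_c`$, none above) can be checked with a toy energy functional. In the sketch below the $`R`$-scalings are those quoted in the text, while the coefficients $`a`$, $`b`$, $`c`$ are arbitrary illustrative numbers, not derived from any particular scalar potential.

```python
# Sketch: stationary points of a thin-wall Q-ball energy in the false vacuum,
#   E(R) = a Q^2 / R^3 + b R^2 - c R^3,
# with illustrative coefficients (a, b, c are NOT derived from any model).
import numpy as np

a, b, c = 1.0, 1.0, 0.2

def stationary_radii(Q):
    R = np.linspace(0.1, 10.0, 200000)
    dE = -3.0 * a * Q**2 / R**4 + 2.0 * b * R - 3.0 * c * R**2   # dE/dR
    sign_flips = np.where(np.diff(np.sign(dE)) != 0)[0]
    return R[sign_flips]

for Q in (1.0, 3.0, 6.0):
    roots = stationary_radii(Q)
    tag = ("Q-ball (minimum) + charged critical bubble (maximum)"
           if len(roots) == 2
           else "no stationary point: the lump expands (Q > Q_c)")
    print(f"Q = {Q}: R = {np.round(roots, 2)} -> {tag}")
```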
## 6 Conclusion
Supersymmetric models of physics beyond the weak scale offer two plausible candidates for cold dark matter. One is the lightest supersymmetric particle, which is stable because of R-parity. Another one is a stable non-topological soliton, or Q-ball, carrying some baryonic charge.
SUSY Q-balls make an appealing dark-matter candidate because their formation is a natural outcome of Affleck–Dine baryogenesis and requires no unusual assumptions.
In addition, formation and decay of unstable Q-balls can have a dramatic effect on baryogenesis, dark matter, and the cosmic microwave background. Production of unstable Q-balls in the false vacuum can cause an unusually fast first-order phase transition.
no-problem/0001/astro-ph0001222.html | ar5iv | text | # Rings in the Planetesimal Disk of 𝛽 Pic
## 1 INTRODUCTION
Circumstellar dust disks around nearby stars provide strong indirect evidence for the existence of planetesimals in exosolar systems. Radiation pressure, Poynting-Robertson drag, collisions, and sublimation remove dust orbiting young main sequence stars on timescales 10<sup>2</sup>–10<sup>3</sup> times shorter than the stellar ages (Backman & Paresce, 1993). Therefore we infer that larger parent bodies exist and replenish dust in the same way that comets and asteroids resupply interplanetary grains in the Solar System.
We expect that an undisturbed system of many bodies orbiting a star will have an axially symmetric distribution. Over large (10<sup>2</sup> AU) scales, however, the northeast (NE) side of the $`\beta `$ Pic disk is not a mirror image of the southwest (SW) side (Kalas & Jewitt, 1994, 1995). Beginning at $`200`$ AU (10”) projected radius, the NE disk extension is brighter, longer, and thinner than the SW extension by roughly 20%. Asymmetry is also evident in the disk vertical height; in the SW the disk height is greater north of the midplane than to the south, whereas the opposite is true in the NE extension. The timescale for smoothing such asymmetries is controlled by the orbital period, and it is therefore short relative to the age of the star. Hence the observed asymmetries must be young.
Gravitational perturbation by a brown dwarf companion (Whitmire et al., 1988) or a close stellar flyby (Kalas & Jewitt, 1995) has been suggested as one possible mechanism for generating large-scale disk asymmetry. However, no stellar object physically associated with $`\beta `$ Pic has yet been identified. The key problem in identifying a candidate perturber near $`\beta `$ Pic is that the star is bright ($`V`$=+3.8 mag). Even when we artificially eclipse $`\beta `$ Pic using a coronagraph, spurious instrumental features that resemble both stars and extended objects dominate optical images. However, data obtained at different telescopes with different instruments have different instrumental noise signatures. We therefore search for the faintest real objects in $`\beta `$ Pic’s field using optical data from multiple telescopes and instruments.
## 2 OBSERVATIONS AND RESULTS
Table 1 summarizes the observational data for $`\beta `$ Pic. The Hubble Space Telescope (HST) Archive data consist of WFPC2 CCD images with the $`\beta `$ Pic disk oriented along either the Planetary Camera (PC) or the Wide Field Cameras (WFC). The ground-based observations and data reduction techniques are described in earlier work (Kalas & Jewitt, 1995; Smith & Terrile, 1984). Here we note that sensitivity to faint objects is hampered by $`\beta `$ Pic’s bright point spread function (PSF) and light from the circumstellar dust disk itself. Both are subtracted using template stars and/or idealized model fits that are detailed in Kalas & Jewitt (1995).
Subtraction of a smooth, symmetric (axially and vertically) model disk reveals a brightness enhancement along the NE extension of the disk midplane 785 AU from the star (Feature A; Fig. 1; Table 2). Four field stars common to every data set are utilized for image registration and astrometry, and the feature is confirmed in every data set (except the HST PC images, which lack a suitable field of view). We do not detect a similar brightness enhancement anywhere between 25” and 41” radius on the SW side of the disk. Feature A is unlikely to be a background galaxy because it is amorphous in the unbinned HST images, and appears extended along the position angle of the disk.
The NE disk midplane has several more brightness enhancements between 25” and 40” with a degree of positional correlation not present in brightness knots in the field or SW of the star (Fig. 1; Table 2). The centroids of Features F and G are spatially correlated to within 0.5” in the highest resolution images obtained with HST and the University of Hawaii 2.2 m telescope. The morphologies and positions of feature centroids vary due to instrumental noise, sub-pixel registration differences between the observed disk and the idealized model disk, and the number of pixels binned to form a final image. Experiments with the data reduction techniques show that the latter two effects can shift centroids by 1 pixel, which is equivalent to 0.4”-0.6”. These uncertainties are evident even for the highest signal-to-noise detection, Feature A (Fig. 1). We identify four more brightness enhancements (B, C, D, E; Fig. 1) that appear spatially correlated within these uncertainties. We expect that more sensitive, high resolution observations dedicated to imaging this portion of the disk will help establish the exact number of knots and their positions. Brightness knots near and within the SW disk extension are uncorrelated and can be attributed to noise.
Table 2 gives the measured positions and surface brightnesses of the seven features. Basic characteristics of the midplane structure are: a) feature A at $`40.7`$” (785 AU) radius is resolved and extended by $`4`$” ($`80`$ AU), b) feature A shows the greatest enhancement over the mean midplane surface brightness (i.e. displays the greatest contrast), c) the spacing between features generally increases with increasing radius, d) the SW disk midplane does not contain comparable features.
We interpret the observed features as dust density enhancements along the projected disk midplane.<sup>1</sup><sup>1</sup>1Given the number of faint (23 mag$`<m_R<`$24 mag) objects detected in the entire unobstructed field of view, there is a $`3`$% chance that one of the disk features is due to a background galaxy. The density enhancements may represent discrete clouds of dust produced by random planetesimal collisions, or a ring system viewed edge-on. We reason that if random planetesimal collisions were important, then we would observe density enhancements on the SW side of the disk also. Because density enhancements are absent in the SW midplane, the random collisions mechanism may be insignificant, though future work should perform quantitative tests.
Here we explore the validity of a ring model to explain the midplane density enhancements. We assume that the multiple features along the NE midplane represent nested eccentric rings viewed close to edge-on. The absence of comparable features to the SW of $`\beta `$ Pic means that the rings are not centered on the star. The well ordered nature of the system of bright features over a radial scale $`300`$ AU may be indicative of a global restructuring of the planetesimal disk in the recent past. Thus, as a first attempt to explain the origin of such a system, we test how a strong gravitational perturbation, such as from a close stellar flyby, might alter the global morphology of an initially symmetric planetesimal disk. In addition, we require that the resulting disk morphology qualitatively fits the radial and vertical disk asymmetries known to exist along the same region of the disk as the proposed ring system.
## 3 DYNAMICAL SIMULATIONS
We utilize a standard numerical code<sup>2</sup><sup>2</sup>2 We assume an initial $`r^{-3/2}`$ radial dependence of surface number density, and a vertically exponential density profile with the disk flared such that the scale height is proportional to $`r^{3/2}`$. (Mouillet et al., 1997) to follow $`10^6`$ collisionless test particles that are initialised in circular orbits about a point-mass potential; this system experiences an encounter with a secondary point-mass that follows a prescribed parabolic trajectory. The key parameters governing the disk response are: the mass ratio, the pericenter distance $`q`$, and the inclination of pericenter to the initial midplane of the disk $`i`$. The test particles are taken to represent the underlying parent bodies that replenish the dust through infrequent collisional disruption. We assume that the distribution of simulated particles traces the distribution of dust grains that would result from collisions in the real system (Mouillet et al., 1997).
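As an illustration of the initial conditions just described (footnote 2), the sketch below sets up such a particle disk; the radial range, scale-height normalization, and $`G=M_{\ast}=1`$ units are our illustrative choices, not values from the original code.

```python
# Sketch: test-particle disk with surface density ~ r^(-3/2), an exponential
# vertical profile flared as H ~ r^(3/2), and circular Keplerian orbits
# (units G = M_star = 1; all numerical choices here are illustrative only).
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
r_in, r_out = 0.3, 1.0

# dN/dr ~ r * Sigma(r) ~ r^(-1/2)  =>  the CDF goes as r^(1/2); invert it
u = rng.random(N)
r = (np.sqrt(r_in) + u * (np.sqrt(r_out) - np.sqrt(r_in))) ** 2
phi = 2.0 * np.pi * rng.random(N)

H0 = 0.05                                    # scale height at r = 1 (assumed)
z = rng.exponential(H0 * r**1.5) * rng.choice([-1.0, 1.0], N)

v_c = 1.0 / np.sqrt(r)                       # circular speed about a point mass
x, y = r * np.cos(phi), r * np.sin(phi)
vx, vy = -v_c * np.sin(phi), v_c * np.cos(phi)
# (x, y, z, vx, vy, 0) are then integrated in the combined potential of the
# primary and a secondary (mass ratio 0.3) on a prescribed parabolic orbit.
```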
We find that coplanar encounters distort disk structure near periastron, leading to the development of transient kinematic spiral features. These are regions of eccentricity growth. Within a few disk orbital periods of periastron passage the spiral patterns collapse into generally eccentric nested rings. If the encounter timescale is sufficiently short, as in relatively close encounters, then one spiral arm dominates the response. Correspondingly, the disk takes on a lopsided appearance (Larwood, 1997). After many orbital periods, the ring patterns become incoherent through orbital phase-mixing. Inside a radius $`q/4`$ the tidal influence of the perturber is slight; outside a radius $`q/3`$ the interaction rapidly increases in strength (Hall et al., 1996). Similarly, non-coplanar flyby encounters excite inclination changes in the orbits of disk particles (Clarke & Pringle, 1993), generating asymmetry about the midplane (Ostriker, 1994).
The stellar mass ratio is taken to be $`0.3`$ throughout the models presented here. The perturber therefore has the mass of an M star relative to $`\beta `$ Pic. However, varying the mass ratio was not found to affect the qualitative outcome significantly compared with varying $`q`$. We investigated the induced length asymmetry in the disk as a function of $`q`$ for coplanar encounters. For a broad range of viewing angles, the required length asymmetry ($`20\%`$) is produced at a radius corresponding to the initial size of the disk when $`q`$ is $`1.3`$ times larger than that value. We then considered various inclinations, $`i`$, for the encounter, and compared the resulting isophotal contours with those from observations, for various viewing angles.
Figures 2, 3, and 4 present the simulation which best reproduces the observational data. The model disk is seen $`9`$ orbital periods (at the initial outer radius) after periastron. Planetesimals have scattered outwards to several times the initial disk radius, and show well-defined ringed structure on one side of the disrupted disk, but not the other, in agreement with the observations (Fig. 1). In the edge-on view, model disk isophotes qualitatively match the observed asymmetries in length, width, and height above and below the midplane (Fig. 3). The rings appear as bumps along the disk midplane in the edge-on view, with spacing increasing with radius (Fig. 4).
Ring structure inside computational radius 2 has been eliminated by orbital phase mixing, which in the observational data corresponds to radius $`26`$” (500 AU). The region of relatively unperturbed particles, $`q/4`$, now scales to $`9`$”, which is consistent with the projected radius where disk asymmetries are observed to begin (Kalas & Jewitt, 1995). Having determined the length unit, we deduce that this model is at a state $`90000`$ yr after periastron. Since the removal timescale of dust due to Poynting-Robertson drag is significantly longer than 10<sup>5</sup> yr (Backman & Paresce, 1993), the distribution of parent bodies will trace the reflecting dust particles, as we initially assumed.
## 4 DISCUSSION
The stellar flyby hypothesis provides a simple explanation for the existence, spacing, and morphology of the brightness maxima along $`\beta `$ Pic’s midplane, as well as the large-scale disk asymmetries. The pumped up velocity dispersion of planetesimals may also lead to increased dust replenishment rates, owing to more frequent and destructive collisions (Stern & Colwell, 1997; Kenyon & Luu, 1999). Therefore, the large total dust mass around $`\beta `$ Pic compared to that of other main sequence stars probably results from both its youth (Barrado y Navascues, 1999) and the dynamical state of the system. Our model for $`\beta `$ Pic’s recent dynamical history implies that the top panel of Fig. 2 approximates the face-on view of the planetesimal disk.
The encounter distance assumed in the simulation scales to $`700`$ AU, implying a statistically unlikely event (0.01% chance in 10<sup>6</sup> yr, assuming empirical parameters for the Solar neighborhood; Garcia-Sanchez et al. 1999). However, if $`\beta `$ Pic formed with a bound stellar companion on a $`1000`$ AU radius orbit, and it was the companion that was perturbed by a stellar flyby into a new orbit that disrupted the planetesimal disk, then the flyby probability increases by an order of magnitude or more. This scenario is also consistent with the prograde and relatively small-angle trajectory of the perturber relative to the disk midplane in our simulation. Ultimately, identification of the perturber is necessary to validate the stellar flyby hypothesis. Future work will present a comprehensive, statistical analysis of the relative space motions and uncertainties for candidate perturbers using Hipparcos data (Kalas, Deltorn & Larwood, 2000).
We note that the orientation of the outer disk height asymmetry is in the same direction relative to the midplane as the proposed midplane warp imaged $`50`$ AU from the star (Burrows et al., 1995). This inner warp could be due to a planet that is inclined relative to the disk midplane (Mouillet et al., 1997). The stellar flyby hypothesis offers an alternate explanation for the warp. The non-coplanar flyby generates planetesimal orbits with increased eccentricity and inclination, which at apastron manifest as the flared disk in the SW extension. However, as seen in the edge-on projection, this family of vertically scattered planetesimals crosses the main disk on its way to periastron at $`50`$ AU to the NE of $`\beta `$ Pic (Fig. 2). This intersecting ‘second plane’ might contaminate isophotes from the quiescent inner disk, giving the appearance of a warped midplane. Hypothetical planets near 50 AU radius may also encounter the ‘second plane’ planetesimals and deliver them to the innermost parts of the system, leading to an enhancement of cometary activity (Knacke et al., 1994; Beust et al., 1996).
The main effect of the external perturber is to scatter planetesimals from their formation sites outward to greater radii, and vertically away from the midplane. Roughly 10% of the disk mass is actually lost to interstellar space in the simulation presented here.<sup>3</sup><sup>3</sup>3 Escaping particles have velocities $`5`$ km s<sup>-1</sup>, in which case they could cover the distance between $`\beta `$ Pic and the Sun in $`4`$ Myr. This rearrangement of the planetesimal disk was achieved primarily by the gas giant planets during the evolution of the Solar System. However, recent theoretical work links the high eccentricities and inclinations of Kuiper Belt objects to close stellar flybys with the young Sun (Ida et al., 1999). The current state of the $`\beta `$ Pic system may therefore accurately represent an early evolutionary phase of our Solar System.
P.K. and J.L. acknowledge support from both STScI and MPIA-Heidelberg. We thank A. Evans, F. Bruhweiler, C. Burrows, S. Ida, J. Surace, R. Terrile for contributing to this research. We are grateful to D. Jewitt, D. Backman, S. Beckwith, M. Clampin, and J. Papaloizou for commenting on drafts of the manuscript. |
no-problem/0001/nlin0001058.html | ar5iv | text | # Applying Blind Chaos Control to Find Periodic Orbits
## Abstract
Analysis of the PPF chaos control method used in biological experiments shows that it can robustly control a wider class of systems than previously believed, including those without stable manifolds. This can be exploited to find the locations of unstable periodic orbits by varying the parameters of the control system.
PACS numbers: 87.10.+e, 05.45.+b, 07.05.Dz
One of the most surprising successes of chaos theory has been in biology: the experimentally demonstrated ability to control the timing of spikes of electrical activity in complex and apparently chaotic systems such as heart tissue and brain tissue . In these experiments, PPF control — a modified formulation of OGY control — was applied to set the timing of external stimuli; the controlled system showed stable periodic trajectories instead of the irregular interspike intervals seen in the uncontrolled system. The mechanism of control in these experiments was interpreted originally as analogous to that of OGY control: unstable periodic orbits riddle the chaotic attractor and the electrical stimuli place the system’s state on the stable manifold of one of these periodic orbits.
Alternative possible mechanisms for the experimental observations have been described by Zeng and Glass and Christini and Collins . These authors point out that the controlling external stimuli serve to truncate the interspike interval to a maximum value. When applied, the control stimulus sets the next interval $`s_{n+1}`$ to be on the line
$$s_{n+1}=𝒜s_n+𝒞.$$
(1)
We will call this relationship the “control line.” Zeng and Glass showed that if the uncontrolled relationship between interspike intervals is a chaotic one-dimensional function, $`s_{n+1}=f(s_n)`$, then the control system effectively flattens the top of this map and the controlled dynamics may have fixed points or other periodic orbits. Christini and Collins showed that behavior analogous to the fixed-point control seen in the biological experiments can be accomplished even in completely random systems. Since neither chaotic one-dimensional systems nor random systems have a stable manifold, the interval-truncation interpretation of the biological experiments is different than the OGY interpretation. The interval-truncation method differs also from OGY and related control methods in that the perturbing control input is a fixed-size stimulus whose timing can be treated as a continuous parameter. This type of input is conventional in cardiology (e.g., ).
In this Letter, we show that the state-truncation interpretation is applicable in cases where there is a stable manifold of a periodic orbit as well as in cases where there are only unstable manifolds. We find that superior control can be achieved by intentionally placing the system’s state off any stable manifold. This suggests a powerful scheme for the rapid experimental identification of fixed points and other periodic orbits in systems where interspike intervals are of interest.
The chaos control in and was implemented in two stages. First, interspike intervals $`s_n`$ from the uncontrolled, “natural” system were observed. Modeling the system as a function of two variables $`s_{n+1}=f(s_n,s_{n-1})`$, the location $`s^{}`$ of a putative unstable flip-saddle type fixed point and the corresponding stable eigenvalue $`\lambda _s`$ were estimated from the data. (Since the fixed point is unstable, there is also an unstable eigenvalue $`\lambda _u`$.) The linear approximation to the stable manifold lies on a line given by Eq. 1 with $`𝒜=\lambda _s`$ and $`𝒞=(1-\lambda _s)s^{}`$. Second, using estimated values of $`𝒜`$ and $`𝒞`$, the control system was turned on. Following each observed interval $`s_n`$, the maximum allowed value of the next interspike interval was computed as $`𝒮_{n+1}=𝒜s_n+𝒞`$. If the next interval naturally was shorter than $`𝒮_{n+1}`$, no control stimulus was applied to the system. Otherwise, an external stimulus was provided to truncate the interspike interval at $`s_{n+1}=𝒮_{n+1}`$.
In practice, the values of $`s^{}`$ and $`\lambda _s`$ for a real fixed point of the natural system are known only imperfectly from the data. Insofar as the estimates are inaccurate, the control system does not place the state on the true stable manifold. Therefore, we will analyze the controlled system without presuming that $`𝒜`$ and $`𝒞`$ in Eq. 1 correspond to the stable manifold.
If the natural dynamics of the system is modeled by $`s_{n+1}=f(s_n,s_{n-1})`$, the dynamics of the controlled system is given by
$$s_{n+1}=\text{min}\{\begin{array}{cc}f(s_n,s_{n-1})\hfill & \text{Natural Dynamics}\hfill \\ 𝒜s_n+𝒞\hfill & \text{Control Line}\hfill \end{array}$$
(2)
We can study the dynamics of the controlled system close to a natural fixed point, $`s^{}`$, by approximating the natural dynamics linearly as
$`s_{n+1}=f(s_n,s_{n-1})`$ $`=`$ $`(\lambda _s+\lambda _u)s_n-\lambda _s\lambda _us_{n-1}`$ (3)
$`+s^{}(1+\lambda _s\lambda _u-\lambda _s-\lambda _u)`$
Since the controlled system (Eq. 2) is nonlinear even when $`f()`$ is linear, it is difficult to analyze its behavior by algebraic iteration. Nonetheless, the controlled system can be studied in terms of one-dimensional maps.
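Although the algebra is awkward, the controlled map itself is trivial to iterate. The sketch below (ours, with illustrative parameter values) implements Eq. 2 using the linearized dynamics of Eq. 3.

```python
# Sketch: iterate the PPF-controlled dynamics (Eq. 2) with the linearized
# natural map (Eq. 3).  Parameter values below are illustrative only.
def natural(s_n, s_nm1, lam_s, lam_u, s_star):
    return ((lam_s + lam_u) * s_n - lam_s * lam_u * s_nm1
            + s_star * (1.0 + lam_s * lam_u - lam_s - lam_u))

def controlled(s_n, s_nm1, A, C, lam_s, lam_u, s_star):
    return min(natural(s_n, s_nm1, lam_s, lam_u, s_star), A * s_n + C)

# Flip saddle (lam_u < -1) with the controller fixed point x* = C/(1-A)
# placed below s*: the orbit should lock onto x*.
lam_s, lam_u, s_star = 0.3, -1.8, 1.0
A, x_star = 0.0, 0.9
C = x_star * (1.0 - A)
s_nm1, s_n = 0.5, 0.7
for _ in range(50):
    s_nm1, s_n = s_n, controlled(s_n, s_nm1, A, C, lam_s, lam_u, s_star)
print(s_n)   # -> 0.9, i.e. the controller fixed point
```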
Following any interspike interval when the controlling stimulus has been applied, the system’s state $`(s_n,s_{n-1})`$ will lie somewhere on the control line. From this time onward the state will lie on an image of the control line even if additional stimuli are applied during future interspike intervals.
Figure 1 (left) shows an example of how the dynamics result in a simple one-dimensional map for the case where the natural dynamics have a flip saddle ($`\lambda _u<-1`$ and $`0<\lambda _s<1`$) and where the control line intersects the line of identity ($`s_n=s_{n-1}`$) below the natural fixed point $`s^{}`$. The stable and unstable manifolds are shown as arrows which intersect at the location of the natural fixed point $`s^{}`$. The control line is shown as a broad dark gray stripe. Its image under the natural dynamics is shown as a thin dashed line. At some points this image is above the control line and is therefore truncated (in the vertical direction) by the control stimulus to be on the control line. Overall, the image of the control line under the controlled dynamics is shown as the broad light gray bent line. In this case, the first, second, and all successive images of the control line are all the same: see Fig. 1 (right).
Once the control stimulus has been applied, the dynamics of the controlled system are described by a one-dimensional map: the bent light gray line in Fig. 1. The analysis of the dynamics of this map is straightforward. For the case shown in Fig. 1, the map has a fixed point (where the flat part of the map intersects the line of identity). Near this fixed point, the map is identical to the control line, so the fixed point of the map is also the “controller fixed point,”
$$x^{}=𝒞/(1-𝒜)$$
where the control line intersects the line of identity.
Figure 2 illustrates a more complicated case, a non-flip saddle with $`x^{}>s^{}`$, where successive images of the control line do not all overlap. In this case, successive images are not identical, but there is still a stable fixed point of the controlled dynamics at $`x^{}`$.
The stability of the controlled dynamics fixed point and the size of its basin of attraction can be analyzed in terms of the control line and its image. When the previous interspike interval has been terminated by a control stimulus, the state lies somewhere on the control line. If the controlled dynamics are to have a stable fixed point, this must be at the controller fixed point $`x^{}`$ where the control line intersects the line of identity. However, the controller fixed point need not be a fixed point of the controlled dynamics. For example, if the image of the controller fixed point is below the controller fixed point, then the interspike interval following a stimulus will be terminated naturally.
For the controller fixed point to be a fixed point of the controlled dynamics, we require that the natural image of the controller fixed point be at or above the controller fixed point. One such situation, for the flip saddle, is illustrated in Fig. 1 where it can be seen that the natural image of a neighborhood of the control line near $`x^{}`$ is above the control line. Thus the dynamics of the controlled system, close to $`x^{}`$, are given simply by
$$s_{n+1}=𝒜s_n+𝒞$$
The fixed point of these dynamics is stable so long as $`-1<𝒜<1`$. In the case of a flip saddle, we therefore have a simple recipe for successful state-truncation control: position $`x^{}`$ below the natural fixed point $`s^{}`$ and set $`-1<𝒜<1`$.
Fixed points of the controlled dynamics can exist for natural dynamics other than flip saddles. This can be seen using the following reasoning: Let $`\xi `$ be the difference between the natural fixed point and the controller fixed point: $`s^{}=x^{}+\xi `$. Then the natural image of the controller fixed point can be found from Eq. 3 to be
$`s_{n+1}`$ $`=`$ $`(\lambda _s+\lambda _u)x^{}-\lambda _s\lambda _ux^{}`$ (4)
$`+(1+\lambda _s\lambda _u-\lambda _s-\lambda _u)(x^{}+\xi )`$
The condition that
$$s_{n+1}\geq x^{}$$
(5)
will be satisfied depending only on $`\lambda _s`$, $`\lambda _u`$, and $`\xi =s^{}-x^{}`$. In the case represented in Fig. 1, where $`\xi >0`$ and $`\lambda _u<-1`$, the condition $`s_{n+1}>x^{}`$ will be satisfied for any $`\lambda _s<1`$. This means that for any flip saddle, so long as $`x^{}<s^{}`$, the point $`x^{}`$ will be a fixed point of the controlled dynamics and will be stable so long as $`-1<𝒜<1`$.
Equations 4 and 5 imply that control can lead to a stable fixed point for any type of fixed point except those for which both $`\lambda _u`$ and $`\lambda _s`$ are greater than 1 (so long as $`-1<𝒜<1`$). Since the required relationship between $`x^{}`$ and $`s^{}`$ for a stable fixed point of the controlled dynamics depends on the eigenvalues, it is convenient to divide the fixed points into four classes, as given in Table 1.
For example, for the non-flip saddle shown in Fig. 2, the natural image of the control line at $`x^{}`$ is above $`x^{}`$. Thus, the controlled image will be truncated (vertically) to be identical to $`x^{}`$ and therefore the controller fixed point is also a fixed point of the controlled dynamics. This will be stable for $`-1<𝒜<1`$, but with a finite basin of attraction.
Beyond the issue of the stability of the fixed point of the controlled dynamics, there is the question of the size of the fixed point’s basin of attraction. Although the local stability of the fixed point is guaranteed for the cases in Table 1 for $`-1<𝒜<1`$, the basin of attraction of this fixed point may be small or large depending on $`𝒜`$, $`𝒞`$, $`s^{}`$, $`\lambda _u`$ and $`\lambda _s`$. For the case of Fig. 1, the basin is finite when $`|\lambda _s+\lambda _u-\lambda _s\lambda _u/𝒜|>1`$. In the case of Fig. 2, and for non-flip repellers generally, any initial condition that is mapped to below the $`\lambda _s`$ eigenvector will recede away from $`x^{}`$.
The endpoints of the basin of attraction can be derived analytically. The size of the basin of attraction will often be zero when $`𝒜`$ and $`𝒞`$ are chosen to match the stable manifold of the natural system. Therefore, in order to make the basin large, it is advantageous intentionally to misplace the control line and to put $`x^{}`$ in the direction indicated in Table 1. In addition, control may be enhanced by setting $`𝒜\ne \lambda _s`$, for instance $`𝒜=0`$.
If the relationship between $`x^{}`$ and $`s^{}`$ is reversed from that given in Table 1, the controlled dynamics will not have a stable fixed point. To some extent, these can also be studied using one-dimensional maps. The flip saddle and double-flip repeller can display stable period-2 orbits and chaos. For the non-flip saddle and single-flip repeller, control is unstable when $`x^{}<s^{}`$.
The fact that control may be successful or even enhanced when $`𝒜`$ and $`𝒞`$ are not matched to $`\lambda _s`$ and $`s^{}`$ suggests that it may be useful to reverse the experimental procedure often followed in chaos control. Rather than first identifying the parameters of the natural unstable fixed points and then applying the control, one can blindly attempt control and then deduce the natural dynamics from the behavior of the controlled system. This use of PPF control is reminiscent of pioneering studies that used periodic stimulation to demonstrate the complex dynamics of biological preparations.
As an example, consider the Henon map:
$$s_{n+1}=1.4+0.3s_{n-1}-s_n^2$$
This system has two distinct fixed points. There is a flip-saddle at $`s^{}=0.884`$ with $`\lambda _u=-1.924`$ and $`\lambda _s=0.156`$ and a non-flip saddle at $`s^{}=-1.584`$ with $`\lambda _u=3.26`$ and $`\lambda _s=-0.092`$. In addition, there is an unstable flip-saddle orbit of period 2 following the sequence $`1.366\rightarrow -0.666\rightarrow 1.366`$. There are no real orbits of period 3, but there is an unstable orbit of period 4 following the sequence $`.893\rightarrow .305\rightarrow 1.575\rightarrow -.989\rightarrow .893`$. These facts can be deduced by algebraic analysis of the equations.
In an experiment using the controlled system, the control parameter $`x^{}=𝒞/(1-𝒜)`$ can be varied. The theory presented above indicates that the controlled system should undergo a bifurcation as $`x^{}`$ passes through $`s^{}`$. Figure 3 shows the bifurcation diagram for the controlled Henon system (with $`𝒜=0`$). For each value of $`x^{}`$, the controlled system was iterated from a random initial condition and the values of $`s_n`$ plotted after allowing a transient to decay. A bifurcation from a stable fixed point to a stable period 2 is seen as $`x^{}`$ passes through the flip-saddle value of $`s^{}=0.884`$. A different type of bifurcation occurs at the non-flip saddle fixed point at $`s^{}=-1.584`$. To the left of the bifurcation point, the iterates are diverging to $`-\mathrm{}`$ and are not plotted.
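A sketch of this numerical experiment (our reconstruction, not the authors' code) is given below; setting the parameter $`k>1`$ applies the control only every $`k`$-th iterate and reproduces the period-$`k`$ probing described below.

```python
# Sketch: bifurcation data for the truncation-controlled Henon map (A = 0),
# reconstructing the experiment of Fig. 3; k > 1 probes period-k orbits (Fig. 4).
import numpy as np

def controlled_henon(x_star, k=1, n_trans=300, n_keep=30):
    A = 0.0
    C = x_star * (1.0 - A)
    s_nm1, s_n, kept = 0.0, 0.1, []
    for n in range(n_trans + n_keep):
        s_next = 1.4 + 0.3 * s_nm1 - s_n**2       # natural Henon step
        if n % k == 0:                            # truncate every k-th interval
            s_next = min(s_next, A * s_n + C)
        s_nm1, s_n = s_n, s_next
        if abs(s_n) > 10.0:                       # orbit escaping to -infinity
            return []
        if n >= n_trans:
            kept.append(s_n)
    return kept

for x_star in np.arange(-2.0, 2.0, 0.01):
    for s in controlled_henon(x_star, k=1):      # scatter-plot (x_star, s)
        print(x_star, s)
```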
Adding gaussian dynamical noise (of standard deviation $`0.05`$) does not substantially alter the bifurcation diagram, suggesting that examination of the truncation control bifurcation diagram may be a practical way to read off the location of the unstable fixed points in an experimental preparation.
By activating the truncation control after every second, third or fourth iteration, it is possible to find periodic orbits of period 2, 3, and 4 respectively. The bifurcation diagrams are shown in Fig. 4. The location of the period-2 orbits can be clearly discerned even in the presence of noise. No period-3 orbit is indicated. Noise obscures the location of all but one of the period-4 points.
Unstable periodic orbits can be difficult to find in uncontrolled dynamics because there is typically little data near such orbits. Application of PPF control, even blindly, can stabilize such orbits and dramatically improve the ability to locate them. This, and the robustness of the control, may prove particularly useful in biological experiments where orbits may drift in time as the properties of the system change.
We would like to acknowledge helpful conversations with Thomas Schreiber and Leon Glass. |
no-problem/0001/cond-mat0001133.html | ar5iv | text | # Room Temperature Organic Superconductor?
## Abstract
The electron–phonon coupling in fullerene C<sub>28</sub> has been calculated from first principles. The value of the associated coupling constant $`\lambda /N(0)`$ is found to be a factor three larger than that associated with C<sub>60</sub>. Assuming similar values of the density of levels at the Fermi surface N(0) and of the Coulomb pseudopotential $`\mu ^{\ast}`$ for C<sub>28</sub>–based solids as those associated with alkali doped fullerides A<sub>3</sub>C<sub>60</sub>, one obtains T<sub>c</sub>(C<sub>28</sub>)$`8`$T<sub>c</sub>(C<sub>60</sub>).
PACS: 74.70.Wz, 63.20.Kr, 61.48.+c
The valence properties of small fullerenes , in particular of the smallest fullerene yet observed, C<sub>28</sub>, are a fascinating question at the fundamental level as well as in terms of its potential applications for the synthesis of new materials . In supersonic cluster beams obtained from laser vaporization, C<sub>28</sub> is the smallest even-numbered cluster, and thus the fullerene displaying the largest curvature, which is formed with special abundance. In fact, under suitable conditions, C<sub>28</sub> is almost as abundant as C<sub>60</sub> . At variance with its most famous family member C<sub>60</sub>, C<sub>28</sub> is expected to form a covalent crystal (like C<sub>36</sub> ), and not a Van der Waals solid . However, similarly to C<sub>60</sub>, fullerene C<sub>28</sub> maintains most of its intrinsic characteristics when placed inside an infinite crystalline lattice . The transport properties of the associated metal doped fullerides, in particular superconductivity, can thus be calculated in terms of the electron–phonon coupling strength $`\lambda `$ of the isolated molecule, and of the density of states of the solid . In keeping with the fact that curvature–induced hybridization of the graphite sheet $`\pi `$ orbitals seems to be the mechanism explaining (cf. and refs. therein) the large increase in T<sub>c</sub> in going from graphite intercalated compounds (T<sub>c</sub> ≲ 5 K) to alkali–doped C<sub>60</sub> fullerides (T<sub>c</sub> ≈ 30–40 K) , fullerene C<sub>28</sub> is a promising candidate with which to form a high–T<sub>c</sub> material. These observations call for an accurate, first–principle investigation of the electronic and vibrational properties, as well as of the electron–phonon coupling strength of this system. In the present work we present the results of such a study, carried out within ab–initio density functional theory (DFT) in the local spin density approximation (LSDA). Our findings are that the associated value of $`\lambda /N(0)`$ is a factor $`2.5`$ and $`1.2`$ larger than that associated with C<sub>60</sub> and C<sub>36</sub> respectively. Under similar assumptions for the density of levels at the Fermi energy N(0) and for the Coulomb pseudopotential $`\mu ^{*}`$ as those associated with alkali-doped fullerides A<sub>3</sub>C<sub>60</sub>, one will thus expect T<sub>c</sub>(C<sub>28</sub>)$`\approx `$8T<sub>c</sub>(C<sub>60</sub>), opening the possibility for C<sub>28</sub>–based fullerides which are superconducting at, or close to, room temperature.
The equilibrium geometry of C<sub>28</sub> obtained in the present calculation is similar to that proposed by Kroto and co–workers , and has the full T<sub>d</sub> point group symmetry. All atoms are three-fold coordinated, arranged in 12 pentagons and 4 hexagons. The large ratio of pentagons to hexagons makes the orbital hybridization in C<sub>28</sub> more of sp<sup>3</sup> type than sp<sup>2</sup>, the typical bonding of graphite and C<sub>60</sub>. The sp<sup>3</sup>–like hybridization is responsible for a series of remarkable properties displayed by small fullerenes in general and by C<sub>28</sub> in particular. Some of these properties are: a) the presence of dangling bonds, which renders C<sub>28</sub> a strongly reactive molecule, b) the fact that C<sub>28</sub> can be effectively stabilized (becoming a closed shell system displaying a large HOMO–LUMO energy gap) by passivating the four tetrahedral vertices either from the outside (C<sub>28</sub>H<sub>4</sub>) or from the inside (U@C<sub>28</sub>). It also displays a number of hidden valences: in fact, C<sub>28</sub>H<sub>10</sub>, C<sub>28</sub>H<sub>16</sub>, C<sub>28</sub>H<sub>22</sub> and C<sub>28</sub>H<sub>28</sub> are essentially as stable as C<sub>28</sub>H<sub>4</sub> (all displaying HOMO–LUMO energy gaps of the order of $`1.5`$ eV) , in keeping with the validity of the free–electron picture of $`\pi `$–electrons which includes, as a particular case, the tetravalent chemist picture, c) while typical values of the matrix elements of the deformation potential involving the LUMO state range between 10 and 100 meV, the large number of phonons which couple to the LUMO state produces a total electron–phonon matrix element of the order of 1 eV (cf. Table 1), as large as the Coulomb repulsion between two electrons in C<sub>28</sub>. This result (remember that the corresponding electron–phonon matrix element is $`\sim 0.1`$ eV and the typical Coulomb repulsion is 0.5–1 eV for C<sub>60</sub> ) testifies to the fact that one should expect unusual properties for both the normal and the superconducting state of C<sub>28</sub>–based fullerides, where the criticisms leveled against standard theories of high T<sub>c</sub> of fullerenes (cf. e.g. refs. and refs. therein) will be very much in place.
In Fig. 1(a), we report the electronic structure of C<sub>28</sub> computed within the Local Spin Density approximation, as obtained from a Car–Parrinello molecular dynamics scheme . Near the Fermi level we find three electrons in a $`t_2`$ orbital, and one in an $`a_1`$ orbital, all with the same spin, in agreement with the results of . The situation is not altered, aside from a slight removal of the degeneracy, when the negative anion, C<sub>28</sub><sup>-</sup>, is considered (see Fig. 1(b)). In this case, the additional electron goes into the $`t_2`$ state, and has a spin opposite to that of the four valence electrons of neutral C<sub>28</sub>.
The wavenumbers, symmetries, and zero-point amplitudes of the phonons of C<sub>28</sub> are displayed in Table 1, together with the matrix elements of the deformation potential defining the electron–phonon coupling with the LUMO state. The total matrix element summed over all phonons is equal to $`710`$ meV. The partial electron–phonon coupling constants $`\lambda _\alpha /N(0)`$, also shown in Table 1, sum up to $`214`$ meV. This value is a factor $`2.5`$ larger than that observed in C<sub>60</sub> , and a factor $`1.2`$ larger than the value recently predicted for C<sub>36</sub> . In Fig. 2 we display the values of $`\lambda /N(0)`$ for C<sub>70</sub>, C<sub>60</sub>, C<sub>36</sub> and C<sub>28</sub> , which testify to the central role the sp<sup>3</sup> curvature–induced hybridization has in boosting the strength with which electrons couple to phonons in fullerenes .
In keeping with the simple estimates of T<sub>c</sub> carried out in refs. for C<sub>60</sub> and C<sub>36</sub> based solids, we transform the value of $`\lambda /N(0)`$ of Table 1 into a critical temperature by making use of McMillan's solution of the Eliashberg equations
$$T_c=\frac{\omega _{ln}}{1.2}\mathrm{exp}\left[-\frac{1.04(1+\lambda )}{\lambda -\mu ^{*}(1+0.62\lambda )}\right],$$
(1)
where $`\omega _{ln}`$ is a typical phonon frequency (logarithmic average), $`\lambda `$ is the electron–phonon coupling and $`\mu ^{*}`$ is the Coulomb pseudopotential, describing the effects of the repulsive Coulomb interaction. Typical values of $`\omega _{ln}`$ for the fullerenes under discussion are $`\omega _{ln}\sim 10^3`$ K (cf. e.g. ). Values of N(0) obtained from nuclear magnetic resonance are $`7.2`$ and $`8.1`$ states/eV–spin for K<sub>3</sub>C<sub>60</sub> and Rb<sub>3</sub>C<sub>60</sub>, respectively (cf. ref. and refs. therein). Similar values for N(0) are expected for C<sub>36</sub> . Making use of these values of N(0) for all C<sub>n</sub>–based solids ($`n`$=70, 60, 36 and 28), one obtains $`0.2\lesssim \lambda \lesssim 3`$ for the range of values of the associated parameter $`\lambda `$. The other parameter entering Eq. (1), namely $`\mu ^{*}`$, which is as important as $`\lambda `$ in determining T<sub>c</sub>, is not accurately known. For C<sub>60</sub>, $`\mu ^{*}`$ is estimated to be $`0.25`$ . Using this value of $`\mu ^{*}`$, and choosing N(0) so that T<sub>c</sub>$`\approx `$19.5 K for C<sub>60</sub>, as experimentally observed for K<sub>3</sub>C<sub>60</sub> , one obtains T<sub>c</sub>(C<sub>28</sub>)$`\approx `$8T<sub>c</sub>(C<sub>60</sub>) and T<sub>c</sub>(C<sub>28</sub>)$`\approx `$1.3T<sub>c</sub>(C<sub>36</sub>) .
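The arithmetic of Eq. (1) is easy to reproduce. In the sketch below the C<sub>60</sub> coupling is an assumed value, tuned by hand so that T<sub>c</sub>(C<sub>60</sub>) comes out near 19.5 K (since N(0) itself is not fixed here), and the C<sub>28</sub> coupling is scaled up by the computed factor $`2.5`$:

```python
import math

def tc_mcmillan(lam, mu_star, omega_ln=1.0e3):
    """McMillan's formula, Eq. (1); omega_ln and the result in kelvin."""
    return (omega_ln / 1.2) * math.exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam)))

mu_star = 0.25
lam_c60 = 0.92               # illustrative: chosen so T_c(C60) ~ 19.5 K
lam_c28 = 2.5 * lam_c60      # lambda/N(0) is ~2.5 times larger for C28
print(tc_mcmillan(lam_c60, mu_star))   # ~19 K
print(tc_mcmillan(lam_c28, mu_star))   # ~110 K; exact ratio depends on N(0)
```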
We conclude that C<sub>28</sub>–fullerene displays such large electron–phonon coupling matrix elements, as compared to the repulsion between two electrons in the same molecule, that it qualifies as a particularly promising high T<sub>c</sub> superconductor. From this vantage point one can only speculate about the transport properties that a conductor constructed making use of C<sub>20</sub> as a building block could display. In fact, this molecule is made entirely out of 12 pentagons with no hexagons, being the smallest fullerene which can exist according to Euler's theorem for polyhedra, and thus displaying the largest curvature a carbon cage can have.
Calculations have been performed on the T3E Cray computer at CINECA, Bologna. |
no-problem/0001/cond-mat0001293.html | ar5iv | text | # References
Eur. Phys. J. B
Domino effect for world market fluctuations
N.Vandewalle, Ph.Boveroux and F.Brisbois
GRASP, Institut de Physique B5, Université de Liège,
B-4000 Liège, Belgium.
Abstract
In order to emphasize cross-correlations for fluctuations in major market places, series of up and down spins are built from financial data. Patterns frequencies are measured, and statistical tests performed. Strong cross-correlations are emphasized, proving that market moves are collective behaviors.
Keywords: econophysics, critical phenomena
PACS: 02.50.-r — 05.50.+j — 89.90.+n
Statistical physicists started a few years ago to investigate financial data, since such data seem to exhibit complex behaviors, i.e. departures from true randomness. Various physical methods have already been reported for sorting out correlations in financial data . Recently, Bonanno et al. studied data for 29 indices from different countries. The study demonstrated the existence of cross-correlations between these market places as well as a regional (continental) organization.
In order to emphasize and quantify the cross-correlations between the major financial indices around the world, we present here an analysis using a different approach. Our analysis distinguishes up and down fluctuations.
Figure 1 presents the closing values of three major financial indices from January 1980 till December 1999: the Japanese Nikkei, the German DAX and the Dow Jones Industrial Average (DJIA). Due to the Earth's rotation, the trading hours are of course different: from 9h00 till 15h00 (local time) in Tokyo, from 8h30 till 16h00 (local time) in Frankfurt, and from 9h30 till 16h00 (local time) in New York. Thus, there is only a small overlap during trading hours for the German DAX index and the DJIA. The considered period of 20 years corresponds to about $`5200\times 3`$ data points. Below, only the sign of the daily fluctuations will be considered, whatever their amplitude.
Figure 2 illustrates the different sequences of spins that one can build from the three data series: (a) from the DJIA only and (b) from the three series together. Positive and negative fluctuations are represented by up and down spins respectively. Over the whole period, the fraction of positive fluctuations is counted.
First, let us consider each index evolution separately, such as the DJIA. This evolution corresponds to the third vertical series of spins of Figure 2. For this series, a fraction $`b=0.510`$ of “up” spins (a bias) is measured. In our analysis, only patterns of length 3 made of “up” and “down” spins, also called triplets, are considered, built from 3 consecutive trading days in New York. Thus, there exist $`2^3=8`$ possible different triplets. Since $`b>\frac{1}{2}`$, the most frequent pattern is expected to be the “up-up-up” with a probability $`f_e=b^3`$ while the least frequent one is “down-down-down” with a probability $`f_e=(1-b)^3`$. Those expected probabilities $`f_e`$ are illustrated in the histogram (in grey) of Figure 3. One should note that the counting of pattern frequencies is similar to the Zipf technique which was originally introduced in the context of natural languages . The Zipf analysis has been e.g. applied to correlated systems like DNA sequences and also for investigating the distribution of incomes . The observed (measured) frequency of each pattern $`f`$ is reported in white in Figure 3. Error bars are indicated, and are calculated assuming a binomial distribution of spins taking the bias into account. No significant deviation from the biased random distribution (in grey) is observed in Figure 3. One concludes that correlations between the signs of daily fluctuations cannot be observed for the DJIA using this statistical analysis. Similar results have been obtained for the Nikkei and DAX indices. One should note that Zhang reported recently a similar statistical analysis on the New York Stock Exchange (NYSE) index. He found correlations which may be associated with the bias not taken into account in his work.
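For concreteness, a minimal sketch of this counting (the function names are ours; `closes` is a series of daily closing values):

```python
import numpy as np
from itertools import product

def triplet_frequencies(closes):
    """Map daily closing values to +/-1 spins and count the 2^3 = 8 possible
    sign triplets formed by consecutive trading days."""
    spins = np.sign(np.diff(np.asarray(closes, dtype=float))).astype(int)
    spins = spins[spins != 0]            # skip days with unchanged close
    counts = {t: 0 for t in product((1, -1), repeat=3)}
    for i in range(len(spins) - 2):
        counts[tuple(spins[i:i + 3])] += 1
    total = float(sum(counts.values()))
    return {t: c / total for t, c in counts.items()}

def expected_frequency(triplet, b):
    """Biased-random expectation f_e: independent spins, 'up' with prob. b."""
    n_up = sum(1 for s in triplet if s > 0)
    return b ** n_up * (1.0 - b) ** (3 - n_up)
```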
Consider now the three index evolutions of Figure 1 together, i.e. the spin series resulting from the lining up of the three daily spins in succession, as if the succession of spins is recorded around the world, as illustrated in Figure 2b. One should note that holidays do not take place on the same dates in different countries; days on which any market is closed are not considered in our measurements. Over all markets and for the whole 20-year period, a fraction $`b=0.502`$ of “up” spins has been measured. Such a bias is negligible but will nevertheless be taken into account in the following discussion. The observed frequencies of triplets are plotted in Figure 4. Since $`b=0.502`$ is close to $`\frac{1}{2}`$, the deviations from a uniform distribution are not visible in Figure 4. Error bars are indicated. Surprisingly, large deviations from the expected grey distribution are observed. The largest differences are observed for the “down-down-down” and “up-up-up” patterns. In these cases, the frequency is about $`f=0.17`$ instead of the $`f_e=0.125`$ expected for a random process, i.e. a relative difference of 44%! These deviations from the grey distribution represent what is known as the “domino effect”, indicating that one place influences the next opening market. In particular, two negative (positive) fluctuations are usually followed by another down (up) fluctuation on the next market. In other words, major market places fluctuate in a cooperative fashion. This behavior seems to be symmetrical with respect to up and down patterns for the whole 20-year period. Except for one work on price waves in French markets during the 19th century , this is, to our knowledge, the first time that this Zipf-like method has been applied to emphasize such correlations between market places.
One may ask if the strength of the domino effect is constant with time. Of course it is not. Figure 5 presents the histogram for the 2-year period preceding the crash of 1987. During that period, there was some “euphoria” and the indices were growing at a high rate (about an annual return of 20% for the DJIA), except for the DAX. The measured bias is thus quite large for that period: $`b=0.538`$. One also observes that the difference between the random and the observed distributions is quite large during that period compared to the 20-year period investigated in Figure 4. In other words, stronger correlations are observed before crashes, as suggested by recent works on the predictability of drastic events . Another remark is that the differences between observed and expected frequencies for “up-up-up” and “down-down-down” triplets are not similar. Indeed, for the “down-down-down” triplets, $`f\approx 0.15`$ instead of the expected $`f_e\approx 0.10`$, i.e. a relative difference of 50%, while for the “up-up-up” triplets, $`f\approx 0.20`$ instead of $`f_e\approx 0.16`$, i.e. a relative difference of 25%. This result means that the correlations are more marked for “down” spins than for “up” spins.
Our analysis of up and down daily fluctuations is rather simple. One may ask for a more complicated analysis. We have recently shown that the use of other fluctuation types for describing, for example, large or small up and down fluctuations, i.e. four spin types, leads to other types of correlations and more visible structures.
Statistical physicists love spin models because simple ingredients/rules produce complex dynamics. Spins can represent up and down daily fluctuations. A daily fluctuation series as considered above can be viewed as the growth of a semi-open chain of successive up or down spins . At each time step, a new spin is added at the extremity of the semi-open chain. Both histograms of Figures 4 and 5 mean that “ferromagnetic” interactions have to be considered and that successive domains of up and down spins exist. Though the modelling of the markets is outside the scope of the present paper, this suggests that modelling is possible in a physical (spin) framework like spin glasses . Also, physical quantities such as the entropy, susceptibility or magnetization can be useful as market indicators for analysts.
In summary, we have performed some analysis of the daily evolution of three major world financial indices. It has been discovered that strong correlations exist between market places. Moreover, these correlations have been quantified through the so-called domino effect. It has also been put into evidence that the amplitude of the domino effect varies with time and seems to be more pronounced before a crash.
Acknowledgements
NV is grateful to the FNRS (Brussels, Belgium) for financial support. Valuables discussions with M.Ausloos, R.D’hulst, S.Galam, A.Pekalski, R.N.Mantegna, D.Stauffer and H.E.Stanley are acknowledged.
Figure Captions
Figure 1 — Semi-log plot of three major world financial indices from January 1980 till December 1999: the Nikkei225, the DAX30 and the Dow Jones Industrial Average. Important financial events are emphasized.
Figure 2 — Typical examples of the construction of spins series from financial data series: (a) a single index and (b) three indices.
Figure 3 — Histogram of triplets frequencies for the Dow Jones Industrial Average. Two cases are illustrated: the expected frequency from a random distribution taking the bias into account (in grey) and the observed frequencies (in white). Error bars are indicated.
Figure 4 — Histogram of triplets frequencies for the lining series. Two cases are illustrated: the expected frequency from a random distribution taking the bias into account (in grey) and the observed frequencies (in white). Error bars are indicated.
Figure 5 — Histogram of triplets frequencies for the lining series of the two-year period preceding the crash of October 1987. Two cases are illustrated: the expected frequency from a random distribution taking the bias into account (in grey) and the observed frequencies (in white). Error bars are indicated.
no-problem/0001/hep-ph0001224.html | ar5iv | text | # 1 Introduction
## 1 Introduction
In R-parity violating ($`\overline{)}\mathrm{R}_\mathrm{p}`$) models the single resonant production of charged sleptons in hadron-hadron collisions is possible. The most promising channels for the discovery of these processes, at least with small $`\overline{)}\mathrm{R}_\mathrm{p}`$ couplings, involve the gauge decays of these resonant sleptons. In particular if we consider the production of a charged slepton, this can then decay to give a neutralino and a charged lepton, i.e. the process
$$\mathrm{u}+\overline{\mathrm{d}}\to \stackrel{~}{\mathrm{}}^+\to \mathrm{}^++\stackrel{~}{\chi }^0.$$
(1)
In addition to this $`s`$-channel process there are $`t`$-channel processes involving squark exchange. The neutralino decays via the crossed process to give a charged lepton, which due to the Majorana nature of the neutralino can have the same charge as the lepton from the slepton decay. We therefore have a like-sign dilepton signature which we expect to have a low Standard Model background.
## 2 Backgrounds
The dominant Standard Model backgrounds to this process come from
* Gauge boson pair production, i.e. production of ZZ or WZ followed by leptonic decays of the gauge bosons with some of the leptons not being detected.
* $`\mathrm{t}\overline{\mathrm{t}}`$ production. Either the t or $`\overline{\mathrm{t}}`$ decays semi-leptonically, giving one lepton. The second top decays hadronically. A second lepton with the same charge can be produced in a semi-leptonic decay of the bottom hadron formed in the hadronic decay of the second top, i.e.
$`\mathrm{t}`$ $`\to `$ $`\mathrm{W}^+\mathrm{b}\to \mathrm{e}^+\nu _\mathrm{e}\mathrm{b},`$
$`\overline{\mathrm{t}}`$ $`\to `$ $`\mathrm{W}^{}\overline{\mathrm{b}}\to \mathrm{q}\overline{\mathrm{q}}\overline{\mathrm{b}},\overline{\mathrm{b}}\to \mathrm{e}^+\nu _\mathrm{e}\overline{\mathrm{c}}.`$ (2)
* $`\mathrm{b}\overline{\mathrm{b}}`$ production. If either of these quarks hadronizes to form a $`\mathrm{B}_{\mathrm{d},\mathrm{s}}^0`$ meson this can mix to give a $`\overline{\mathrm{B}}_{\mathrm{d},\mathrm{s}}^0`$. This means that if both the bottom hadrons decay semi-leptonically the leptons will have the same charge as they are both coming from either b or $`\overline{\mathrm{b}}`$ decays.
* Single top production. A single top quark can be produced together with a $`\overline{\mathrm{b}}`$ quark by either an $`s`$\- or $`t`$-channel W exchange. This can then give one charged lepton from the top decay, and a second lepton with the same charge from the decay of the meson formed after the b quark hadronizes.
* Non-physics backgrounds. There are two major sources: (i) from misidentifying the charge of a lepton, e.g. in Drell-Yan production, and (ii) from incorrectly identifying an isolated hadron as a lepton. This means that there is a major source of background from W production with an additional jet faking a lepton.
Early studies of like-sign dileptons at the LHC only studied the backgrounds from heavy quark production. It was found that by imposing cuts on the transverse momentum and isolation of the leptons the heavy quark backgrounds could be significantly reduced. However more recent studies of like-sign dilepton production at the LHC and the Tevatron suggest that a major source of background to like-sign dilepton production is from gauge boson pair production and from fake leptons. Here we will consider the backgrounds from gauge boson pair production as well as heavy quark production. The study of the non-physics backgrounds (e.g. fake leptons) requires a full simulation of the detector and it is therefore beyond the scope of our study. In particular, the background from fake leptons cannot be reliably calculated from Monte Carlo simulations and must be extracted from data . We can use the differences between the $`\overline{)}\mathrm{R}_\mathrm{p}`$ signature we are considering and the MSSM signatures considered in to reduce the background from gauge boson pair production.
We impose the following cuts
* A cut on the transverse momentum of the like-sign leptons $`p_T>40`$ GeV.
* An isolation cut on the like-sign leptons so that the transverse energy in a cone of radius $`R=\sqrt{\mathrm{\Delta }\varphi ^2+\mathrm{\Delta }\eta ^2}=0.4`$ about the direction of each lepton is less than $`5`$ GeV.
* A cut on the transverse mass, $`M_T^2=2|p_{T_{\mathrm{}}}||p_{T_\nu }|(1-\mathrm{cos}\mathrm{\Delta }\varphi _{\mathrm{}\nu })`$, where $`p_{T_{\mathrm{}}}`$ is the transverse momentum of the charged lepton, $`p_{T_\nu }`$ is the transverse momentum of the neutrino, assumed to be all the missing transverse momentum in the event, and $`\mathrm{\Delta }\varphi _{\mathrm{}\nu }`$ is the azimuthal angle between the lepton and the neutrino, i.e. the missing momentum in the event. We cut out the region where $`60\mathrm{GeV}<M_T<85\mathrm{GeV}`$.
* A veto on the presence of a lepton in the event with the same flavour but opposite charge (OSSF) as either of the leptons in the like-sign pair if the lepton has $`p_T>10`$ GeV and which passes the same isolation cut as the like-sign leptons.
* A cut on the missing transverse energy, $`E_{\mathrm{miss}}^T<20`$ GeV .
While these cuts were chosen to reduce the background we have not attempted to optimize them. The first two cuts are designed to reduce the background from heavy quark production. As can be seen in Fig. 1, these cuts reduce this background by several orders of magnitude. The remaining cuts are designed to reduce the background from gauge boson pair, in particular WZ, production which is the major source of background after the imposition of the isolation and $`p_T`$ cuts. The transverse mass cut is designed to remove events with leptonic W decays as can be seen in Fig. 2a. The veto on the presence of OSSF leptons is designed to remove events where one lepton from the dilepton pair comes from the leptonic decay of a Z boson. The missing transverse energy cut again removes events with leptonic W decays, this is mainly to reduce the background from WZ production, as seen in Fig. 2b. The effect of these cuts on the heavy quark and gauge boson pair backgrounds are shown in Figs. 1 and 3, respectively.
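For illustration, a sketch of how this selection might act on a simplified event record is given below; the record format and function name are ours, not part of the HERWIG or PYTHIA output, and a real analysis would of course run on fully simulated detector objects.

```python
import math

def passes_selection(leptons, met, met_phi):
    """Like-sign dilepton selection of the text on a toy event record.
    Each lepton: dict with pt [GeV], charge, flavour, phi, cone_et [GeV]."""
    isolated = [l for l in leptons if l["cone_et"] < 5.0]
    hard = [l for l in isolated if l["pt"] > 40.0]
    pairs = [(a, b) for i, a in enumerate(hard) for b in hard[i + 1:]
             if a["charge"] == b["charge"]]
    if not pairs:
        return False
    a, b = pairs[0]
    for l in (a, b):                   # transverse-mass veto (W -> l nu)
        mt = math.sqrt(2.0 * l["pt"] * met
                       * (1.0 - math.cos(l["phi"] - met_phi)))
        if 60.0 < mt < 85.0:
            return False
    for l in isolated:                 # veto OSSF partner (Z -> l+ l-)
        if l["pt"] > 10.0 and any(l["flavour"] == p["flavour"] and
                                  l["charge"] == -p["charge"] for p in (a, b)):
            return False
    return met < 20.0                  # small missing transverse energy
```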
The backgrounds from the various processes are summarized in Table 1. The simulations of the $`\mathrm{b}\overline{\mathrm{b}}`$, $`\mathrm{t}\overline{\mathrm{t}}`$ and single top production were performed using HERWIG6.1 . The simulations of gauge boson pair production used PYTHIA6.1 . The major contribution to the background comes from WZ production; the major contribution to the error comes from $`\mathrm{b}\overline{\mathrm{b}}`$. For the $`\mathrm{b}\overline{\mathrm{b}}`$ simulation we have required a parton-level cut of $`40`$ GeV on the transverse momentum of the bottom quarks. This should not affect the results provided we impose a cut of at least $`40`$ GeV on the $`p_T`$ of the leptons. We also forced the B meson produced to decay semi-leptonically. In events where there was one $`\mathrm{B}_{\mathrm{d},\mathrm{s}}^0`$ meson, this meson was forced to mix; if there was more than one $`\mathrm{B}_{\mathrm{d},\mathrm{s}}^0`$, then one of the mesons was forced to mix and the others were forced not to mix. Even with these cuts it is impossible to simulate the full luminosity with the resources available, due to the large cross section for $`b\overline{b}`$ production. This gives the large error on the estimate of this background.
## 3 Signal
We used HERWIG6.1 to simulate the signal. This version includes the resonant slepton production, including the $`t`$-channel diagrams, and the R-parity violating decay of the neutralino including a matrix element for the decay . We will only consider first generation quarks as the cross sections for processes with higher generation quarks are suppressed by the parton distributions. There are upper bounds on the $`\overline{)}\mathrm{R}_\mathrm{p}`$ couplings from low energy experiments. The bound on $`\lambda '_{111}`$ from neutrinoless double beta decay is very strict, so we consider muon production via the coupling $`\lambda '_{211}`$, which has a much weaker bound,
$$\lambda '_{211}<0.059\times \left(\frac{M_{\stackrel{~}{d}_R}}{100\mathrm{GeV}}\right),$$
(3)
from the ratio $`R_\pi =\mathrm{\Gamma }(\pi \to \mathrm{e}\nu )/\mathrm{\Gamma }(\pi \to \mu \nu )`$ .
We have performed a scan in $`M_0`$ using HERWIG with the following SUGRA parameters: $`M_{1/2}=300`$ GeV, $`A_0=300`$ GeV, $`\mathrm{tan}\beta =2`$, $`\mathrm{sgn}\mu =+`$, and with the $`\overline{)}\mathrm{R}_\mathrm{p}`$ coupling $`\lambda '_{211}=0.01`$. The number of events which pass the cuts given in Section 2 is shown in Fig. 4a, while the efficiency of the cuts, i.e. the fraction of the signal events which have a like-sign dilepton pair passing the cuts, is shown in Fig. 4b. The dip in the efficiency between $`140\mathrm{GeV}<M_0<180\mathrm{GeV}`$ is due to the resonant production of the second lightest neutralino becoming accessible. Just above threshold the efficiency for this channel is low due to the low $`p_T`$ of the lepton produced in the slepton decay.
If we conservatively take a background of $`7.6`$ events, i.e. 1$`\sigma `$ above the central value of our calculation, a 5$`\sigma `$ fluctuation of the background would correspond to 20 events, using Poisson statistics. This is given as a dashed line in Fig. 4a. As can be seen, for a large range of values of $`M_0`$ resonant slepton production can be discovered at the LHC for $`\lambda '_{211}=0.01`$. The production cross section depends quadratically on the $`\overline{)}\mathrm{R}_\mathrm{p}`$ Yukawa coupling and hence it should be possible to probe much smaller couplings for small values of $`M_0`$.
As can be seen in Fig. 5, at this SUGRA point the sdown mass varies between $`622`$ GeV at $`M_0=50`$ GeV and $`784`$ GeV at $`M_0=500`$ GeV. The corresponding limit on the coupling $`\lambda '_{211}`$ varies between 0.37 and 0.46. We can probe couplings of $`\lambda '_{211}=2\times 10^{-3}`$ for $`M_0=50`$ GeV, which corresponds to a smuon mass of $`223`$ GeV, and at couplings of $`\lambda '_{211}=10^{-2}`$ we can probe values of $`M_0`$ up to $`500`$ GeV, i.e. a smuon mass of $`540`$ GeV. This is more than an order of magnitude smaller than the current upper bounds on the $`\overline{)}\mathrm{R}_\mathrm{p}`$ coupling given above for these values of $`M_0`$. This is a greater range of couplings and smuon masses than can be probed at the Tevatron . The backgrounds are higher at the LHC but this is compensated by the higher energy and luminosity, leading to significantly more signal events.
## 4 Conclusions
We have considered the backgrounds to like-sign dilepton production at the LHC and find a background after cuts of $`5.1\pm 2.5`$ events for an integrated luminosity of $`10\mathrm{fb}^{-1}`$. This means, taking a conservative estimate of the background of 7.6 events, that 20 events would correspond to a $`5\sigma `$ discovery. For a full analysis, however, non-physics backgrounds must also be considered.
A preliminary study of the signal suggests that an efficiency for detecting the signal in excess of 20% can be achieved over a range of points in SUGRA parameter space. At the SUGRA point studied this means we can probe $`\overline{)}\mathrm{R}_\mathrm{p}`$ couplings of $`2\times 10^{-3}`$ for a smuon mass of $`223`$ GeV and up to smuon masses of $`540`$ GeV for couplings of $`10^{-2}`$, and higher masses for larger couplings.
A more detailed scan of SUGRA parameter space for this signal remains to be performed. |
no-problem/0001/cond-mat0001458.html | ar5iv | text | # Classes of behavior of small-world networks
## Abstract
Small-world networks are the focus of recent interest because they appear to circumvent many of the limitations of either random networks or regular lattices as frameworks for the study of interaction networks of complex systems. Here, we report an empirical study of the statistical properties of a variety of diverse real-world networks. We present evidence of the occurrence of three classes of small-world networks: (a) scale-free networks, characterized by a vertex connectivity distribution that decays as a power law; (b) broad-scale networks, characterized by a connectivity distribution that has a power-law regime followed by a sharp cut-off; (c) single-scale networks, characterized by a connectivity distribution with a fast decaying tail. Moreover, we note for the classes of broad-scale and single-scale networks that there are constraints limiting the addition of new links. Our results suggest that the nature of such constraints may be the controlling factor for the emergence of different classes of networks.
Disordered networks, such as small-world networks, are the focus of recent interest because of their potential as models for the interaction networks of complex systems . Specifically, neither random networks nor regular lattices appear to be an adequate framework within which to study “real-world” complex systems such as chemical-reaction networks , neuronal networks , food-webs , social networks , scientific-collaboration networks , and computer networks .
Small-world networks —which emerge as the result of randomly replacing a fraction $`p`$ of the links of a $`d`$-dimensional lattice with new random links— interpolate between the two limiting cases of a regular lattice ($`p=0`$) and a random graph ($`p=1`$). A “small-world” network is characterized by the properties (i) the local neighborhood is preserved —as for regular lattices —, and (ii) the diameter of the network, quantified by the average shortest distance between two vertices , increases logarithmically with the number of vertices $`n`$ —as for random graphs . The latter property gives the name “small-world” to these networks, as it is possible to connect any two vertices in the network through just a few links while the local connectivity would suggest the network to be of finite dimensionality.
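Both defining properties are straightforward to check numerically, for instance with the networkx library (a sketch; the sizes and seed are arbitrary choices of ours):

```python
import networkx as nx

# Watts-Strogatz graph: n vertices on a ring, each linked to k neighbours,
# with a fraction p of the links rewired at random.
for p in (0.0, 0.01, 1.0):
    G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=p, seed=42)
    C = nx.average_clustering(G)              # local neighbourhood structure
    L = nx.average_shortest_path_length(G)    # network "diameter"
    print(f"p={p}: clustering={C:.3f}, mean distance={L:.2f}")
# Small p keeps the clustering close to the lattice value while the mean
# distance drops to near the random-graph (logarithmic) value: the
# small-world regime.
```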
The structure of small-world networks and of real networks has been probed through the calculation of their diameter as a function of network size . In particular, networks such as (a) the electric-power grid for Southern California, (b) the network of movie-actor collaborations, and (c) the neuronal network of the worm C. Elegans, appear to be small-world networks . Further, it was proposed that these three networks, the world-wide web , and the network of citations of scientific papers are scale-free —that is, they have a distribution of connectivities that decays with a power-law tail.
Scale-free networks emerge in the context of a growing network in which new vertices connect preferentially to the more highly connected vertices in the network . Scale-free networks are still small-world networks because (i) they have clustering coefficients much larger than random networks , and (ii) their diameter increases logarithmically with the number of vertices $`n`$ .
Here, we address the question of the conditions under which disordered networks are scale-free through the analysis of several networks in social, economic, technologic, biologic, and physical systems. We identify a number of systems for which there is a single scale for the connectivity of the vertices. For all these networks there are constraints limiting the addition of new links. Our results suggest that such constraints may be the controlling factor for the emergence of scale-free networks.
First, we consider two examples of technologic and economic networks: (i) the electric-power grid of Southern California , the vertices being generators, transformers and substations and the links high-voltage transmission lines, and (ii) the network of world airports , the vertices being the airports and the links non-stop connections. Figure 1 shows the connectivity distribution for these two examples. It is visually apparent that neither case has a power-law regime, and that both have exponentially decaying tails, implying that there is a single scale for the connectivity $`k`$.
Second, we consider three examples of “social” networks: (iii) the movie-actors network , the links in this network indicating that the two actors were cast at least once in the same movie, (iv) the acquaintance network of Mormons , the vertices being 43 Utah Mormons and the number of links the number of other Mormons they know, and (v) the friendship network of 417 Madison Junior High School students . Figure 2 shows the connectivity distribution for these social networks. The scale-free (power-law) behavior of the actors’ network is truncated by an exponential tail. In contrast, the network of acquaintances of the Utah Mormons and the friendship network of the high-school students display no power-law regime, but instead we find results consistent with a Gaussian distribution of connectivities, indicating the existence of a single scale for $`k`$.
Third, we consider two examples of networks from the natural sciences: (vi) the neuronal network of the worm C. Elegans , the vertices being the neurons and the links being connections between neurons, and (vii) the conformation space of a lattice polymer chain , the vertices being the possible conformations of the polymer chain and the links the possibility of connecting two conformations through local movements of the chain . The conformation space of a protein chain shares many of the properties of the small-world networks of Ref. . Figures 3a,b show for C. Elegans the cumulative distribution of $`k`$ for both incoming and outgoing neuronal links. The tails of both distributions are well approximated by exponential decays, consistent with a single scale for the connectivities. For the network of conformations of a polymer chain the connectivity follows a binomial distribution, which converges to the Gaussian , so we also find a single scale for the connectivity of the vertices (Fig. 3c).
Thus, there is empirical evidence for the occurrence of three classes of small-world networks: (a) scale-free networks, characterized by a connectivity distribution with a tail that decays as a power law ; (b) broad-scale or truncated scale-free networks, characterized by a connectivity distribution that has a power-law regime followed by a sharp cut-off, like an exponential or Gaussian decay of the tail \[see example (iii)\]; (c) single-scale networks, characterized by a connectivity distribution with a fast decaying tail, such as exponential or Gaussian \[see examples (i),(ii),(iv-vii)\].
A natural question is “What are the reasons for such a rich range of possible structures for small-world networks?” To answer this question let us recall that preferential attachment in growing networks gives rise to a power-law distribution of connectivities . However, preferential attachment can be hindered by two classes of factors: (I) aging of the vertices. This effect can be pictured for the network of actors: in time, every actress or actor will stop acting. For the network, this fact implies that even a very highly connected vertex will, eventually, stop receiving new links. The vertex is still part of the network and contributing to network statistics, but it no longer receives links. The aging of the vertices thus limits the preferential attachment preventing a scale-free distribution of connectivities. (II) cost of adding links to the vertices or the limited capacity of a vertex. This effect is exemplified by the network of world airports: for reasons of efficiency, commercial airlines prefer to have a small number of hubs where all routes would connect. To first approximation, this is indeed what happens for individual airlines, but when we consider all airlines together, it becomes physically impossible for an airport to become a hub to all airlines. Due to space and time constraints, each airport will limit the number of landings/departures per hour, and the number of passengers in transit. Hence, physical costs of adding links and limited capacity of a vertex will limit the number of possible links attaching to a given vertex.
To test numerically the effect of aging and cost constraints on the local structure of networks with preferential attachment, we simulate the scale-free model of Ref. but introduce aging and cost constraints of varying strength. Figure 4 shows that both types of constraints lead to cut-offs on the power-law decay of the tail of connectivity distribution and that for strong enough constraints no power-law region is visible.
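A minimal version of such a simulation might look as follows (our own sketch of preferential attachment with aging and capacity limits, not the exact procedure of the reference):

```python
import random
from collections import Counter

def grow_network(n=20000, m=2, max_age=None, max_links=None, seed=1):
    """Preferential attachment via a stub list (each vertex appears once per
    link end). Vertices older than max_age, or already holding max_links
    links, refuse new connections."""
    random.seed(seed)
    degree = Counter({0: 1, 1: 1})     # start from a single 0-1 link
    stubs = [0, 1]
    for v in range(2, n):
        targets, tries = set(), 0
        while len(targets) < m and tries < 10000:
            tries += 1
            t = random.choice(stubs)
            if max_age is not None and v - t > max_age:
                continue               # aging: vertex no longer accepts links
            if max_links is not None and degree[t] >= max_links:
                continue               # cost/capacity: vertex is saturated
            targets.add(t)
        for t in targets:
            degree[t] += 1
            degree[v] += 1
            stubs += [t, v]
    return degree

# Unconstrained growth (max_age=max_links=None) gives a power-law P(k);
# strong constraints, e.g. max_age=100 or max_links=30, cut off the tail.
```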
We note that the possible distributions of connectivity of the small-world networks have an analogy in the theory of critical phenomena . At the gas-liquid critical point, the distribution of sizes of the droplets of the gas (or of the liquid) is scale-free, as there is no free-energy cost in their formation . As for the case of a scale-free network, the size $`s`$ of a droplet is power-law distributed: $`p(s)\sim s^{-\alpha }`$. As we move away from the critical point, the appearance of a non-negligible surface tension introduces a free-energy cost for droplets which limits their sizes, so that their distribution becomes broad-scale: $`p(s)\sim s^{-\alpha }f(s/\xi )`$, where $`\xi `$ is the typical size for which surface tension starts to be significant and the function $`f(s/\xi )`$ introduces a sharp cut-off for droplet sizes $`s>\xi `$. Far from the critical point, the scale $`\xi `$ becomes so small that no power-law regime is observed and the droplets become single-scale distributed: $`p(s)\sim f(s/\xi )`$. Often, the distribution of sizes in this regime is exponential or Gaussian.
We thank J.S. Andrade Jr., R. Cuerno, N. Dokholyan, P. Gopikrishnan, C. Hartley, E. LaNave, K.B. Lauritsen, H. Orland, F. Starr and S. Zapperi for stimulating discussions and helpful suggestions. The Center for Polymer Studies is funded by NSF. |
no-problem/0001/hep-ph0001172.html | ar5iv | text | # Minimal Gaugino Mediation
## I Introduction
Hidden sectors are an essential ingredient of simple and natural models of supersymmetry (SUSY) breaking. The unifying idea is that SUSY is assumed to be broken in the hidden sector and then communicated to the minimal supersymmetric standard model (MSSM) by messenger interactions which are flavor blind. This structure results in flavor-universal scalar masses and solves the SUSY flavor problem.
In the context of extra dimensions with branes such hidden sectors are very natural. If – for example – the MSSM and the SUSY breaking sector are confined to two different parallel 3-branes embedded in extra dimensions, then the separation in the extra dimensions forbids direct local couplings between the “visible” MSSM fields and the hidden sector. However, fields on separated branes can still communicate by exchanging bulk messenger fields. Couplings which arise from such non-local bulk mode exchange are suppressed. For a messenger of mass $`M`$ and brane separation $`L`$ the suppression factor is $`e^{-ML}`$, which is the Yukawa propagator of the messenger field exchanged between the two branes.
This suggests a very simple scenario for communicating SUSY breaking to the MSSM which guarantees flavor-universal scalar masses. If all light bulk fields have flavor blind couplings then the soft SUSY breaking parameters generated by exchange of these messengers preserve flavor. Heavy bulk modes may violate flavor maximally but the resulting non-universal contributions to the scalar masses are exponentially suppressed .
The two obvious candidates for bulk fields which can communicate SUSY breaking to the Standard Model fields in a flavor-blind way are gravity and the Standard Model gauge fields. Gravity as a bulk messenger (“Anomaly Mediation” ) leads to a very simple and predictive model which unfortunately predicts negative slepton masses and is therefore ruled out in its simplest and most elegant form.<sup>*</sup><sup>*</sup>*For models which cure Anomaly Mediation by introducing new fields and interactions see . The alternative, Standard Model gauge fields as messengers (“Gaugino Mediation”), has been proposed recently by D.E. Kaplan, Kribs, and Schmaltz as well as by Chacko, Luty, Nelson, and Ponton and was found to work perfectly. In Gaugino Mediation the MSSM matter fields (quarks, leptons and superpartners) live on a “matter brane”, while SUSY breaks on a parallel “SUSY breaking brane”, and the MSSM gauge superfields live in the bulk. Because the gaugino fields are bulk fields they couple directly to the SUSY breaking and obtain soft masses. The MSSM scalars are separated from SUSY breaking by the distance $`L`$ and therefore obtain much smaller masses from non-local loops with high momentum modes of the bulk gauge fields . Thus at the compactification scale the theory matches onto a four-dimensional theory with gaugino masses and negligibly small scalar masses.
Vanishing scalar masses and non-vanishing gaugino masses at a high scale, as in no-scale models , is very attractive because evolving the theory to low energies via the renormalization group equation generates flavor-universal and positive soft scalar masses. Consistent electroweak symmetry breaking also requires a $`\mu `$ term of size comparable to the gaugino masses. Thus a minimal version of Gaugino Mediation has only three high energy parameters
$$\mu ,M_{1/2},M_c.$$
(1)
Here $`M_{1/2}`$ is the common gaugino mass at the unification scale, and $`M_c`$ is the compactification scale where the higher dimensional theory is matched onto the effective four-dimensional theory. Since we wish to preserve the successful prediction of $`\mathrm{sin}^2\theta _w`$ from gauge coupling unification in the MSSM we limit $`M_c>M_{GUT}`$.
In Section II of this paper we show that this scenario, which we call “Minimal Gaugino Mediation” (Mg̃M), with only the parameters in Eq. (1) works very well phenomenologically. The minimal scenario which we advocate here differs from the more general models in that we do not introduce soft supersymmetry breaking mass parameters in the Higgs sector of the theory at $`M_c`$. Radiative electroweak symmetry breaking works automatically in Mg̃M and determines $`\mu `$ by fitting to the $`Z`$ mass. Therefore the entire superpartner spectrum of Mg̃M can be computed via the renormalization group equations in terms of only two free parameters: $`M_{1/2}`$ and $`M_c`$. We will see that the running from $`M_c`$ to $`M_{GUT}`$ in the grand unified theory is important for the masses of the lightest superpartners. We find that the Bino is the LSP and a perfect cold dark matter candidate in a large region of the model's parameter space. “Minimal Gaugino Mediation” also evades all existing collider bounds without fine-tuning.
In Section III we present a complete and economical model which gives rise to the Mg̃M boundary condition. The model generates the hierarchy between the Planck scale and the SUSY breaking scale with the extra-dimensional dynamical supersymmetry breaking mechanism of Arkani-Hamed, Hall, Smith, and Weiner . To solve the $`\mu `$ problem without introducing a $`B\mu `$ problem we propose a new mechanism in which five-dimensional $`N=1`$ supersymmetry relates $`\mu `$ and the gaugino mass.
In Section IV we briefly explain why Mg̃M has no SUSY CP problem, estimate the neutralino relic density, and conclude.
## II Sparticle spectrum in Mg̃M
In this section we determine the predictions of Mg̃M for the spectrum of MSSM particles. The input parameters of the model are listed in Eq. (1). We use the renormalization group equations (RGEs) of the $`\overline{\mathrm{DR}}`$ scheme to calculate the soft breaking parameters at the electroweak scale. We first outline our procedure for the running and discuss general features of the evolution. Then we present the spectrum of superparticles and describe how the experimental limits translate into constraints on the parameter space of the model.
At the compactification scale $`M_{GUT}\lesssim M_c\lesssim M_{Planck}/10`$ the mass parameters of Mg̃M are
$$M_{1/2}\sim \mu \ne 0,\qquad m^2=A=B=0.$$
(2)
We limit the range of compactification scales from below by the GUT scale in order to preserve the successful prediction of $`\mathrm{sin}^2\theta _w`$ from four dimensional unification in the MSSM. Note that this requirement would still allow compactification scales slightly below $`M_{GUT}`$; however, as we will discover below, $`M_c`$ needs to be slightly larger than $`M_{GUT}`$ to avoid a charged LSP. The upper limit on $`M_c`$ is more model dependent. It arises from demanding that flavor violating soft masses are sufficiently small. Such masses are generated from exchange of massive bulk fields with flavor-violating couplings, which are expected to be present in any fundamental theory which explains the Yukawa couplings of the Standard Model. Assuming that the lightest such states have masses of order $`M_{Planck}`$, the suppression factor is of the order of $`\mathrm{exp}(-M_{Planck}/M_c)`$. Requiring that this exponential suppresses off-diagonal squark masses sufficiently gives $`M_c\lesssim M_{Planck}/10`$.
To connect the boundary condition of Eq. (2) to experiments at the weak scale we first run from $`M_c`$ to $`M_{GUT}`$ in the unified theory and then run from $`M_{GUT}`$ to the weak scale with the RGEs of the MSSM. Gaugino domination, or the no-scale, boundary conditions have been studied extensively in the literature, however only including renormalization below the GUT scale . Since the renormalization effects above the GUT scale are not discussed very frequently in the literature we describe them in some detail first.
Naively, one might be tempted to argue against calculating renormalization effects above the GUT scale because: i. the running above the GUT scale gives only very small masses because $`\mathrm{log}(\frac{M_c}{M_{GUT}})\ll \mathrm{log}(\frac{M_{GUT}}{M_{weak}})`$ and ii. the running of soft masses above the GUT scale is model dependent because the theory above the GUT scale contains new unknown fields and couplings which enter the RGEs and give rise to unknown threshold effects. Both of these arguments are invalid as is easy to see: Argument i. neglects group theory factors. For example, the mass which is generated for the right-handed sleptons from running below the GUT scale is very small because they only couple to hypercharge. Above the GUT scale, sleptons are unified into larger GUT representations and the associated larger multiplicity factors more than compensate for the smaller log. The second argument would apply in general theories with soft masses, but it does not apply to Mg̃M where (at one loop) all generated soft masses are determined by gauge charges only. To understand this consider a generic one-loop RG equation for scalar soft terms
$$\frac{d}{dt}(\mathrm{soft})\sim g^2M_{1/2}+(\mathrm{soft})\times f(g^2,\mathrm{SUSY\ couplings}).$$
(3)
Here the first term is determined entirely by the known gauge charges, whereas the second term depends on unknown new fields and couplings. However, in Mg̃M all soft terms for the scalars are zero at $`M_c`$. Therefore, the soft masses appearing in the second term are small (loop-suppressed compared to $`M_{1/2}`$), and it is a good approximation to drop the second term. The only remaining model dependence is in the gauge interactions above the GUT scale. The predictions depend on the choice of unified gauge group, and we present predictions for both $`SU(5)`$ and $`SO(10)`$. Furthermore, there is also a weak dependence on the running of the unified gauge coupling above the GUT scale. We perform our renormalization group analysis assuming a minimal set of GUT representations ($`3\times (10+\overline{5})+5+\overline{5}+24`$ for the case of $`SU(5)`$ and $`3\times 16+10+45+16+\overline{1}6`$ for $`SO(10)`$). However, even adding as many as three additional adjoints to either theory would change the final scalar masses by at most a few percent.
Even though we perform our renormalization group analysis numerically one can also obtain extremely simple approximate formulae for the soft parameters at the GUT scale as follows. At one loop the ratio $`\frac{M_{1/2}}{g^2}`$ is RGE invariant. Thus, the running of $`M_{1/2}`$ is trivial as it traces the running of the gauge coupling, and we present our results using $`\alpha `$ and $`M_{1/2}`$ evaluated at $`M_{GUT}`$ rather than at $`M_c`$. Assuming that the running of the couplings above the GUT scale is not too fast all other soft terms at the GUT scale are then given by
$`A_{top}`$ $`={\displaystyle \frac{2\alpha }{\pi }}M_{1/2}`$ $`t_c[{\displaystyle \frac{24}{5}},{\displaystyle \frac{63}{8}}],`$ (4)
$`A_{bot}`$ $`={\displaystyle \frac{2\alpha }{\pi }}M_{1/2}`$ $`t_c[{\displaystyle \frac{21}{5}},{\displaystyle \frac{63}{8}}],`$ (5)
$`B`$ $`={\displaystyle \frac{2\alpha }{\pi }}M_{1/2}`$ $`t_c[{\displaystyle \frac{12}{5}},{\displaystyle \frac{9}{2}}],`$ (6)
$`m_{\overline{\mathrm{𝟓}}}^2`$ $`={\displaystyle \frac{2\alpha }{\pi }}M_{1/2}^2`$ $`t_c[{\displaystyle \frac{12}{5}},{\displaystyle \frac{9}{2}}],`$ (7)
$`m_{\mathrm{𝟏𝟎}}^2`$ $`={\displaystyle \frac{2\alpha }{\pi }}M_{1/2}^2`$ $`t_c[{\displaystyle \frac{18}{5}},{\displaystyle \frac{45}{8}}],`$ (8)
where $`t_c=\mathrm{log}(\frac{M_c}{M_{GUT}})`$ ranges between 0 and 4. All parameters in the equations above are evaluated at the GUT scale. The gauge coupling at the unification scale is determined from the low-energy values of the couplings and it corresponds to $`\alpha _{GUT}=1/24.3`$. The first set of numbers in parentheses applies to $`SU(5)`$, the second one to $`SO(10)`$. With an abuse of notation, for the case of $`SO(10)`$ we defined $`m_{\overline{\mathrm{𝟓}}}^2`$ to denote the soft mass for the Higgses of the MSSM, while $`m_{\mathrm{𝟏𝟎}}^2`$ denotes the common soft mass of the matter fields.
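As a numerical illustration, the short script below (a sketch, with our own function and key names) evaluates these formulae directly:

```python
import math

# Group-theory coefficients from Eqs. (4)-(8), listed as [SU(5), SO(10)].
COEFF = {"A_top": (24/5, 63/8), "A_bot": (21/5, 63/8), "B": (12/5, 9/2),
         "m2_5bar": (12/5, 9/2), "m2_10": (18/5, 45/8)}

def gut_soft_terms(m_half, t_c, group="SU(5)", alpha=1/24.3):
    """GUT-scale soft terms from the approximate formulae; m_half in GeV.
    A and B come out in GeV, the m2 entries in GeV^2."""
    i = 0 if group == "SU(5)" else 1
    base = 2.0 * alpha / math.pi * m_half * t_c
    out = {k: base * c[i] for k, c in COEFF.items()}
    out["m2_5bar"] *= m_half       # scalar masses-squared scale as M_1/2^2
    out["m2_10"] *= m_half
    return out

print(gut_soft_terms(300.0, 2.0))            # e.g. M_1/2 = 300 GeV, t_c = 2
print(gut_soft_terms(300.0, 2.0, "SO(10)"))
```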
Below the GUT scale we integrate the one-loop RGEs numerically. One-loop running has adequate precision if one uses the one-loop improved Higgs potential . The dominant correction to the lightest Higgs mass comes from top quark loops below the stop mass threshold. It can be accounted for by adding the term
$$\frac{3Y_{top}^4}{16\pi ^2}\mathrm{log}\frac{m_{\stackrel{~}{t}_L}m_{\stackrel{~}{t}_R}}{m_t^2}\left(H_u^{\dagger }H_u\right)^2.$$
(9)
In addition, we incorporate the contributions to squark and slepton masses arising from D-terms as described in Ref. .
After evolving all soft masses to the weak scale we impose the constraints which follow from radiative electroweak symmetry breaking. This determines the weak scale values of both $`\mu `$ and $`\mathrm{tan}\beta `$, and we are left with only two free parameters: $`M_{1/2}(M_{GUT})`$ and the compactification scale $`M_c`$. The $`\mu `$ parameter is multiplicatively renormalized, and it does not enter any RGE at one loop. Therefore, we will quote its value at the weak scale.
Figure 1 illustrates the significance of the RG evolution above the GUT scale. Without running above the GUT scale ($`M_c=M_{GUT}`$) the stau is the LSP; however for any compactification scale larger than only $`1.5M_{GUT}`$ the stau is heavier than the lightest neutralino. Note that the dependence of the stau mass on $`t_c=\mathrm{log}(\frac{M_c}{M_{GUT}})`$ is stronger in $`SO(10)`$ than in $`SU(5)`$. This follows from the larger group theoretical factors in $`SO(10)`$ which cause soft masses above the GUT scale to be generated more efficiently.
The allowed parameter space for $`SU(5)`$ and $`SO(10)`$ Mg̃M models is presented in Figure 2. We find a lower bound on $`t_c`$ from requiring that the LSP be neutral. An upper bound on $`t_c`$ is not shown on the figure, but as discussed above, flavor violating effects due to massive bulk fields limit $`t_c\lesssim 4`$. Since $`M_{1/2}`$ is the only source for superpartner masses, the experimental lower limits on superpartner masses and the Higgs mass translate into lower limits on $`M_{1/2}`$. In particular, we find that the LEP II limits on the Higgs mass ($`m_{h^0}\gtrsim 106`$ GeV) and the right-handed slepton masses ($`m_{\stackrel{~}{\tau }}\gtrsim 75`$ GeV and $`m_{\stackrel{~}{e}}\gtrsim 95`$ GeV) imply $`M_{1/2}\gtrsim 180`$ GeV. Furthermore, we find that the $`\mu `$ parameter in our model is given by $`\mu =3/2M_{1/2}`$ to an accuracy of better than 2$`\%`$ for all values of $`t_c`$. This implies a lower bound $`\mu \gtrsim 270`$ GeV with an associated mild tuning of the $`Z`$ mass.
The figure also shows contours of the relic abundance of the lightest neutralino corresponding to $`\mathrm{\Omega }_\chi h^2=0.1,0.3`$. The LSP relic abundance calculation is particularly simple in our model; we discuss it briefly in Section IV.
Figure 3 shows the Mg̃M spectrum as a function of the gaugino mass for the example case of an $`SU(5)`$ GUT with $`\mathrm{log}(\frac{M_c}{M_{GUT}})=2`$. The qualitative features of the spectrum are generic and do not depend on the choice of grand unified group or compactification scale. The masses of all superpartners and Higgs fields, except for the lightest Higgs, rise linearly with $`M_{1/2}`$. As in minimal supergravity the LSP is a Bino-like neutralino. The right-handed stau is the next-to-LSP. As usual, colored superpartners are heaviest, followed by charginos, neutralinos and Higgses with masses of order $`\mu `$. The mass of the lightest Higgs particle increases only logarithmically with $`M_{1/2}`$ through the one-loop improvement of the Higgs potential as described in Eq. (9).
Figure 3 also shows contours of constant $`\mathrm{tan}\beta `$ in the $`M_{1/2}`$–$`t_c`$ plane. Note that in the allowed region $`\mathrm{tan}\beta `$ is almost independent of $`M_{1/2}`$, but it increases with $`t_c`$.
Since Mg̃M has only 2 free parameters, measuring the masses of only two particles is in principle sufficient to determine the input parameters and predict the entire superpartner spectrum. In practice, presumably the Higgs will be the first new particle to be discovered. This is because the MSSM Higgs mass bound of 130 GeV applies also to the Mg̃M Higgs which could therefore be discovered (or ruled out) at Run II of the Tevatron , and might even be seen at LEP running at 205 GeV. The mass of the Higgs would give an estimate of $`M_{1/2}`$. Should $`M_{1/2}`$ be close to 200 GeV, there is a chance that LEP or the Tevatron will discover the first superpartners. For low enough compactification scales LEP would find the right-handed stau and/or selectron. Independent of the compactification scale the Tevatron could then observe charginos in the tri-lepton channel . For larger $`M_{1/2}`$ we would have to wait for the LHC.
It is exciting that observation of the first superpartner immediately also leads to a first test of the model. This is because the discovery would allow a measurement of both the discovered superpartner's mass and the LSP mass from the distribution of the missing energy. One could then use the measured masses of the Higgs and Bino to obtain two independent determinations of $`M_{1/2}`$ and therefore test the model. Once we know the mass of any of the sleptons we can extract the remaining free parameter – $`t_c`$. Note that discovery of just a few of the lightest superpartners would already allow a determination of the GUT gauge group!
## III Mg̃M, an explicit model
In this section we describe a simple model which breaks supersymmetry and yields only gaugino masses at the compactification scale. Our model is complete: it generates exponentially small supersymmetry breaking which is mediated to the gauginos via a higher dimensional operator, and $`\mu `$ is naturally of the same order as the gaugino masses. The model combines the idea of “gaugino mediation” with the supersymmetry breaking mechanism proposed in .
To begin we recall the higher dimensional set-up of . The MSSM matter and Higgs fields live on a 3+1 dimensional brane embedded in one extra dimension. Supersymmetry is broken dynamically on a parallel brane which is a distance $`L`$ apart from the matter brane (Fig. 4). The MSSM gauge fields and gauginos live in the bulk of the extra dimension. We take this extra dimension to be circular, with radius $`R`$. In order to preserve the quantitative prediction of $`\mathrm{sin}^2\theta _w`$ from gauge coupling unification in the four dimensional MSSM, we demand that the compactification scale be higher than the GUT scale, $`R^{-1}\equiv M_c\gtrsim M_{GUT}`$.<sup>§</sup>A similar constraint on the compactification scale also follows from demanding that the extra-dimensional theory remains perturbative up to the five dimensional Planck scale $`M`$: $`g_{GUT}^2=\frac{g_5^2}{2\pi R}<\frac{24\pi ^{5/2}}{2\pi RM}`$. Using $`2\pi RM^3=M_{Planck}^2`$ this becomes $`RM_{Planck}<750`$.
In our model, supersymmetry breaking manifests itself in a vacuum expectation value for the F-component, $`X_F`$, of a chiral superfield $`X`$ on the SUSY breaking brane. The MSSM gaugino fields can couple to $`X`$ directly, giving a gaugino mass
$$\int dx_5\,\delta (x_5-L)\int d^2\theta \,\frac{XWW}{M^2}\rightarrow \frac{X_F}{VM^2}\,\lambda \lambda .$$
(10)
Here, the factor of the extra-dimensional volume ($`V=2\pi R`$ for a circle) arises from the wave function normalizations of the four-dimensional gaugino fields $`\lambda `$. All other soft supersymmetry breaking parameters in the MSSM, such as soft scalar masses $`X^{\dagger }XQ^{\dagger }Q`$, are suppressed at short distances by extra-dimensional locality . The low-energy values of these parameters are generated from the renormalization group equations as discussed in the previous section.
It is useful to discuss the exact form of the short-distance suppressions in more detail. In general, there are two possible sources for such terms: direct contact terms suppressed by the cut-off $`M`$ or non-local terms from loops of the light bulk gauge fields. The contact terms are not present in the effective theory below $`M`$ to all orders in the local expansion in inverse powers of $`M`$ because they connect fields at different positions. However, this does not preclude the appearance of terms with coefficients $`e^{-LM}`$ which do not have an expansion in local operators. These operators are expected to be flavor violating and are therefore strongly constrained experimentally . The most stringent constraint comes from CP violation in the K system and gives roughly $`e^{-LM}\lesssim 10^{-4}`$ or $`LM\gtrsim 8`$. Therefore, the allowed range for the compactification scale is $`M_{GUT}\lesssim M_c\lesssim M_{Planck}/10`$.
The other source of short-distance scalar masses – loops of bulk gauge fields – leads to finite contributions to the masses which are suppressed by additional powers of the separation $`L`$ relative to the gaugino mass (10). However, they are flavor universal because they arise from gauge interactions. As discussed in detail in , these contributions are negligible compared to the much larger contributions from the renormalization group evolution.
In the following Subsections we turn to discussing the mechanism of supersymmetry breaking and the origin of the $`\mu `$ term in the model. Our mechanism for breaking supersymmetry and stabilizing the radius of the extra dimension is taken directly from the elegant paper of Arkani-Hamed et al. . In the following, we summarize their discussion and apply it to our model. Our solution to the $`\mu `$ problem is new.
### A Supersymmetry breaking
Following , we keep track of four-dimensional $`N=1`$ supersymmetry by employing four-dimensional $`N=1`$ superspace notation and treating the $`x_5`$ coordinate as a label. The action for a massive five-dimensional hypermultiplet $`(\mathrm{\Phi },\mathrm{\Phi }^c)`$ then reads
$$\int d^4x\,dx_5\left(\int d^4\theta \,(\mathrm{\Phi }^{\dagger }\mathrm{\Phi }+\mathrm{\Phi }^{c\dagger }\mathrm{\Phi }^c)+\int d^2\theta \,\mathrm{\Phi }^c(m+\partial _5)\mathrm{\Phi }\right).$$
(11)
The advantage of this formalism is that it is straightforward to write down $`N=1`$ supersymmetric couplings of $`\mathrm{\Phi }`$ to boundary fields. The supersymmetry breaking model of consists of the bulk field $`\mathrm{\Phi }`$ with superpotential couplings to a source $`J`$ and a field $`X`$ which are localized on different branes. $`J`$ is localized on the matter brane at $`x_5=0`$, while $`X`$ is localized on the SUSY-breaking brane at $`x_5=L`$
$$\int dx_5\left(\delta (x_5)\sqrt{M}J\mathrm{\Phi }^c+\delta (x_5-L)\sqrt{M}X\mathrm{\Phi }\right).$$
(12)
Here we have suppressed coupling constants but inserted factors of the fundamental mass scale $`M`$ to keep track of mass dimensions. The vacuum equations for the scalar field are then
$`\mathrm{\Phi }_F`$ $`=`$ $`\delta (x_5-L)\sqrt{M}X+(m-\partial _5)\varphi ^c=0,`$ (13)
$`\mathrm{\Phi }_F^c`$ $`=`$ $`\delta (x_5)\sqrt{M}J+(m+\partial _5)\varphi =0.`$ (14)
On a circle $`x_5\in [0,2\pi R)`$ the equation for $`\mathrm{\Phi }`$ has the unique solution
$$\varphi =\sqrt{M}J\frac{e^{-mx_5}}{1-e^{-2\pi mR}}.$$
(15)
Thus the source $`J`$ “shines” a vacuum expectation value for the bulk scalar $`\varphi `$ which decays exponentially with increasing $`x_5`$ (see Fig. 4). Supersymmetry is broken because $`X`$ obtains a non-vanishing $`F`$-component
$$X_F=\sqrt{M}\varphi (L)=MJ\frac{e^{-mL}}{1-e^{-2\pi mR}}\approx MJe^{-mL}.$$
(16)
Assuming a source $`J\sim M`$ and a mass $`m\sim M`$ one finds $`X_F\sim M^2e^{-ML}`$.
Note that this model is a higher dimensional generalization of a simple O’Raifeartaigh model. The source $`J`$ forces a non-zero expectation value for the field $`\varphi `$, which is in conflict with the $`X`$ equation of motion requiring $`\varphi =0`$. The role of the extra dimension is to modulate the resulting supersymmetry breaking by the factor $`e^{-ML}`$. Coupling the field $`X`$ to the gauge fields as in Eq. (10) then results in non-vanishing gaugino masses
$$M_{1/2}=\frac{X_F}{2\pi RM^2}\approx \frac{J}{2\pi RM}e^{-ML}.$$
(17)
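To get a feeling for the scales involved one may invert Eq. (17) for the exponent needed to produce a weak-scale gaugino mass. The sketch below uses purely illustrative inputs ($`M\sim 10^{17}`$ GeV for the cutoff, $`2\pi RM\sim 100`$, $`J\sim M`$, and a target $`M_{1/2}=200`$ GeV); none of these numbers is fixed by the text:

```python
import numpy as np

# Illustrative inputs (assumptions, not values fixed by the model):
M = 1.0e17      # 5D cutoff scale in GeV
vol = 100.0     # dimensionless volume factor 2*pi*R*M
J = M           # source of order the cutoff
M_half = 200.0  # target weak-scale gaugino mass in GeV

# Eq. (17): M_1/2 = (J/(2*pi*R*M)) * exp(-M*L); solve for the exponent M*L
ML = np.log(J / (vol * M_half))
print(f"required exponent M*L = {ML:.1f}")          # ~ 29
print(f"suppression exp(-M*L) = {np.exp(-ML):.1e}")
```

An exponent $`ML`$ of order 30 thus suffices, comfortably consistent with the flavor bound $`LM\gtrsim 8`$ quoted above.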
As in ordinary O’Raifeartaigh models, the scalar expectation value of $`X`$ is undetermined classically. A non-vanishing expectation value can be seen to act as a source for $`\mathrm{\Phi }^c`$ from Eq. (13). In order to simplify the analysis we assume that the $`X`$-expectation value is zero. This may either be enforced by additional tree-level superpotential terms on the supersymmetry breaking brane such as $`\delta (x_5-L)[XY+Y^2Z]`$, or it could be a result of quantum corrections lifting the flat direction.
### B The $`\mu `$ term
To generate a $`\mu `$ term of the correct size we utilize the $`\varphi ^c`$ component of the superfield $`(\mathrm{\Phi },\mathrm{\Phi }^c)`$. To break supersymmetry we used an expectation value for $`\varphi `$ which was “shining” clockwise from the source $`J`$ on the matter brane towards the supersymmetry breaking brane. For generating $`\mu `$ we “shine” an expectation value for $`\varphi ^c`$ by adding the superpotential
$$\int dx_5\left(\delta (x_5-L)\sqrt{M}J^c\mathrm{\Phi }+\delta (x_5)\frac{\kappa }{\sqrt{M}}\mathrm{\Phi }^cH_uH_d\right).$$
(18)
The new terms modify the equations of motion
$`\mathrm{\Phi }_F`$ $`=`$ $`\delta (x_5-L)\sqrt{M}J^c+(m-\partial _5)\varphi ^c=0,`$ (19)
$`\mathrm{\Phi }_F^c`$ $`=`$ $`\delta (x_5)\sqrt{M}J+(m+\partial _5)\varphi =0,`$ (20)
where we have assumed that the vacuum expectation values of $`X`$ and $`H_uH_d`$ are negligible compared to $`J,J^c\sim M`$. As mentioned in the previous Subsection this can be enforced by adding suitable brane potentials.
We see that the $`\varphi `$ equation is unchanged, while the new source $`J^c`$ also “shines” an expectation value for $`\varphi ^c`$
$$\varphi ^c=\sqrt{M}J^c\frac{e^{m(x_5-L)-2\pi mR\,\theta (x_5-L)}}{1-e^{-2\pi mR}}.$$
(21)
Note that since we have placed the source on the supersymmetry breaking brane, $`\varphi ^c`$ is “shined” in the opposite direction from $`\varphi `$, as depicted in Fig. 4. The generated $`\mu `$ term is equal to
$$\mu =\frac{\kappa }{\sqrt{M}}\varphi ^c(0)=\kappa J^c\frac{e^{-mL}}{1-e^{-2\pi mR}}\approx \kappa J^ce^{-mL}.$$
(22)
Comparing this to the gaugino mass Eq. (17) we find that we need to set $`\kappa \sim 1/(2\pi RM)\sim 1/100`$.
Note that $`\mu `$ has the exact same exponential suppression factor $`e^{-mL}`$ as $`M_{1/2}`$. This follows from the fact that $`\varphi `$ and $`\varphi ^c`$ reside in the same five dimensional supersymmetry multiplet. In other words, five dimensional supersymmetry relates the exponential suppression factors appearing in $`\mu `$ and $`M_{1/2}`$. It is disappointing that because of the volume suppression in the gaugino masses we still need to choose a small coupling $`\kappa `$ to get $`\mu \sim M_{1/2}`$. However, $`\kappa `$ is a superpotential coupling and as such can be small naturally. Note that the spatial separation of the supersymmetry breaking $`X_F`$ from the location of the Higgs fields does not allow a $`B\mu `$ term at the high scale. We therefore do not have the usual problem $`B\sim 16\pi \mu `$ which haunts most other approaches to the $`\mu `$ problem. Finally, we emphasize that this new extra-dimensional solution to the $`\mu `$ problem does have broader applicability.
### C Radius stabilization
In the discussion above we have assumed that the radius $`R`$ of the extra dimension and the distance $`L`$ between the branes are fixed. In a complete theory both parameters correspond to fields. We now discuss a simple supersymmetry-preserving mechanism to stabilize both $`R`$ and $`L`$. Our mechanism is a trivial modification of . In its simplest form it requires a single additional massive bulk hypermultiplet $`(\mathrm{\Psi },\mathrm{\Psi }^c)`$ with couplings to brane fields
$$\int dx_5\left(\delta (x_5)\sqrt{M}[I\mathrm{\Psi }^c+I^c\mathrm{\Psi }]+\delta (x_5-L)\sqrt{M}[A(\mathrm{\Psi }-\mathrm{\Lambda }\sqrt{M})+A^c(\mathrm{\Psi }^c-\mathrm{\Lambda }^c\sqrt{M})]\right).$$
(23)
Assuming that the brane fields $`A,A^c`$ have no vacuum expectation values (in the absence of supersymmetry breaking these expectation values are flat directions; it is straightforward to enforce the vanishing expectation values, for example by adding a brane superpotential $`\delta (x_5-L)[AB+B^2C]`$ for $`A`$ and similarly for $`A^c`$) one finds the following equations of motion
$`\mathrm{\Psi }_F`$ $`=`$ $`\delta (x_5)\sqrt{M}I^c+(m_\mathrm{\Psi }-\partial _5)\psi ^c=0,`$ (24)
$`\mathrm{\Psi }_F^c`$ $`=`$ $`\delta (x_5)\sqrt{M}I+(m_\mathrm{\Psi }+\partial _5)\psi =0,`$ (25)
$`A_F`$ $`=`$ $`\psi (L)-\mathrm{\Lambda }\sqrt{M}=0,\qquad A_F^c=\psi ^c(L)-\mathrm{\Lambda }^c\sqrt{M}=0,`$ (26)
which have unique supersymmetry preserving solutions for $`R`$ and $`L`$. For example, for symmetric values of the parameters $`\mathrm{\Lambda }=\mathrm{\Lambda }^c`$ and $`I=I^c`$ we find
$$L=\pi R=\frac{1}{m_\mathrm{\Psi }}\mathrm{arcsinh}\left(\frac{I}{2\mathrm{\Lambda }}\right).$$
(27)
Thus, for $`I`$ and $`\mathrm{\Lambda }`$ of order $`M`$ a radius of the desired size is generated by choosing a relatively small mass for the bulk scalar $`m_\mathrm{\Psi }\sim M/30`$.
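As a quick numerical check of Eq. (27), one can evaluate the stabilized separation for the illustrative choice $`I=2\mathrm{\Lambda }`$ (both of order $`M`$) together with $`m_\mathrm{\Psi }=M/30`$; the value of the ratio $`I/2\mathrm{\Lambda }`$ is an assumption of this sketch:

```python
import numpy as np

M = 1.0                    # work in units of the cutoff M
m_psi = M / 30.0           # bulk scalar mass suggested in the text
ratio = 1.0                # illustrative choice I/(2*Lambda) = 1

L = np.arcsinh(ratio) / m_psi   # Eq. (27), with L = pi*R
print(f"L*M = {L * M:.1f}")     # ~ 26
```

The resulting $`LM\approx 26`$ satisfies the flavor constraint $`LM\gtrsim 8`$ and is of the right magnitude to generate the exponential hierarchy between the cutoff and the weak scale.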
## IV Discussion
Minimal Gaugino Mediation is a very compelling and predictive theoretical framework which solves all supersymmetric naturalness problems without fine-tuning.
Mg̃M solves the supersymmetric flavor problem: At the high scale $`M_c`$ the scalar masses and A-terms vanish, and therefore the only flavor violation in renormalizable couplings resides in the Yukawa couplings. Gaugino loops generate universal positive scalar masses at low energies. Small non-universalities in the masses arise from the Yukawa interactions, but these contributions do not lead to new flavor violation because they are aligned with the Yukawa matrices. An exception to this is the running of the scalar masses above the GUT scale where flavor is broken by unified interactions . Since the right-handed sleptons are light in Mg̃M, event rates for lepton flavor violating processes such as $`\mu \to e\gamma `$ might be near the experimental bounds.
Mg̃M solves the supersymmetric CP problem: This is easy to understand by realizing that at the compactification scale (where $`m^2=A=B=0`$) the phases in $`M_{1/2}`$ and $`\mu `$ can be removed by phase redefinitions of the gaugino fields and the Higgs superfields, respectively. Therefore, the theory has no new phases beyond the phases in the Yukawa couplings and no supersymmetric CP violation. This does not solve the strong CP problem however.
Mg̃M is very predictive and therefore testable: The model has only two free parameters which implies that there are many relations between the masses of the superpartners and Higgses which can be tested experimentally.
Mg̃M has a great cold dark matter candidate: The LSP of Mg̃M is almost a pure Bino for most of the parameter space. This makes the calculation of the relic neutralino (Bino) density relatively easy, because in this scenario neutralino annihilations are dominated by the t-channel exchange of the right-handed sleptons. If one ignores the small (but interesting) region of parameter space where the stau and neutralino are degenerate to within 5$`\%`$ (and where co-annihilations are important ) the relic neutralino abundance is given by
$$\mathrm{\Omega }_\chi h^2\approx \frac{(m_{\stackrel{~}{l}_R}^2+m_\chi ^2)^4}{(1.4\,\mathrm{TeV})^2m_\chi ^2(m_{\stackrel{~}{l}_R}^4+m_\chi ^4)}\approx \frac{m_{\stackrel{~}{l}_R}^2}{(480\,\mathrm{GeV})^2}.$$
(28)
This formula is accurate to about 20$`\%`$ over the whole parameter space plotted in Figure 2 except where neutralinos and staus are almost degenerate (a narrow band surrounding the “stau LSP” excluded regions). For Mg̃M, Eq. (28) yields abundances which are generically cosmologically safe and often lie within the cosmologically interesting regime $`0.1<\mathrm{\Omega }_\chi h^2<0.3`$, as is evident from Figure 2.
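To illustrate how Eq. (28) is used, the snippet below evaluates both the full expression and the simplified estimate at one illustrative mass point (the masses are assumptions chosen for demonstration, not fit results):

```python
def omega_chi_h2(m_slep, m_chi):
    """Full relic abundance expression of Eq. (28); masses in GeV."""
    num = (m_slep**2 + m_chi**2)**4
    den = (1.4e3)**2 * m_chi**2 * (m_slep**4 + m_chi**4)
    return num / den

m_slep, m_chi = 120.0, 100.0           # illustrative masses in GeV
print(omega_chi_h2(m_slep, m_chi))     # ~ 0.059
print((m_slep / 480.0)**2)             # simplified estimate, ~ 0.063
```

Both numbers land in the cosmologically interesting window, illustrating why much of the allowed region in Figure 2 does as well.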
Mg̃M is theoretically well motivated: Separation of SUSY breaking and the MSSM matter fields onto two different branes naturally gives rise to the Gaugino Mediation boundary condition. If the Higgs fields also live on the MSSM matter brane then all supersymmetry breaking soft Higgs mass parameters vanish, giving Mg̃M. The model is very economical and unifies. We believe that the model is sufficiently “conservative”, successful in solving all the problems of supersymmetry, and elegant that it has a real chance of describing Nature.
###### Acknowledgements.
We thank Nima Arkani-Hamed, Howard Baer, Howie Haber, Markus Luty, Chris Kolda, Graham Kribs, Kirill Melnikov, Michael Peskin, and Jim Wells for useful discussions. MS is supported by the DOE under contract DE-AC03-76SF00515. WS is supported by the DOE under contract DOE-FG03-97ER40506. |
# Suppressing Linear Power on Dwarf Galaxy Halo Scales
## 1 Introduction
The field of physical cosmology has made rapid progress in the last decade, and a “standard model” is already beginning to emerge. Many of the main cosmological parameters are becoming known and there is good reason to believe that the measurements will be significantly improved, and the paradigm tested, in the next few years from observations of the Cosmic Microwave Background anisotropy and upcoming surveys of large-scale structure. While in broad outline the paradigm appears to work well, there are some discrepancies which indicate that revisions in our standard model may be required. In this paper we discuss several topics related to one of these issues: the lack of low mass halos in our local neighborhood and in particular consider what we might learn about the small-scale matter power spectrum.
The halo problem has been highlighted by several groups. Analytic arguments based on Press-Schechter () theory were given by Kauffmann, White & Guiderdoni (), while Klypin et al. () and Moore et al. (\[1999a\]) used very high resolution dark matter simulations. A summary of the situation has been given recently by Spergel & Steinhardt () and Kamionkowski & Liddle ().
Within the Press-Schechter theory, and its extensions, the number density of halos of a given mass is related to the amplitude of the linear theory power spectrum on a scale proportional to $`M^{1/3}`$. For example $`10^{10}M_{\odot }`$ halos in a model with $`\mathrm{\Omega }_\mathrm{m}=0.3`$ probe a linear scale of $`0.3h^{-1}`$Mpc. Numerous numerical simulations have demonstrated that halo number density seems to be governed by the linear power spectrum as Press-Schechter theory would predict. A deficit of low mass halos thus implies either additional physics (see below) or a deficit of linear theory power on small length scales. This modification could come about either by variations in the primordial power spectrum (e.g., from inflation) or in the cosmological processing of this power spectrum (e.g., from ‘warm’ dark matter).
Recently Kamionkowski & Liddle () pointed out that a well studied class of inflationary models (BSI; see e.g., Starobinsky ) could give rise to a deficit of small-scale power in the primordial power spectrum. One way to achieve such a deficit is to introduce a change in the slope of the inflaton potential at a scale determined by the astrophysical problem to be solved, in our case $`k\sim 5h\,\mathrm{Mpc}^{-1}`$. Thus in such models one ‘naturally’ achieves fewer low mass halos than in the conventional inflationary CDM models.
The linear theory power spectrum of these BSI models is well described by Kamionkowski & Liddle (). Compared to a scale-invariant model there is a small rise followed by a sharp drop in power at some scale $`k_0`$. Beyond $`k_0`$ the power spectrum oscillates with an envelope which falls more steeply than $`k^3`$. At high-$`k`$ the spectrum recovers to the usual $`k^3`$ slope but with much smaller amplitude.
We believe that it is of interest to constrain such modifications to the initial power spectrum, if possible. However, because of this sharp drop, the model near $`k_0`$ more closely resembles the familiar top-down scenarios (e.g. HDM) than a bottom-up CDM model for some range of wavenumbers. Thus arguments based on reasoning developed for ‘traditional’ CDM models should be checked against numerical simulations. Furthermore, the number density of objects may be one of the only probes of the linear theory power spectrum. Several astrophysical probes of small-scale power are sensitive not to the linear theory but to the non-linear power spectrum. As is well known, objects collapsing under gravitational instability feed power from large scales to small thus allowing small-scale power to be regenerated once a mode goes non-linear (e.g. Little, Weinberg & Park ; Melott & Shandarin , , ; Bagla & Padmanabhan ). How much power is regenerated, crucial for determining how much there was initially, requires numerical calculation.
We address several of these issues in the following sections.
Finally we should note that we believe that constraints on small-scale power are of intrinsic interest in and of themselves. We shall discuss this within the context of the sub-halo problem described above while noting that several other astrophysical effects may also explain the discrepancy. The most obvious examples are: the total number of Local Group satellites could be underestimated, feedback could be important (Kauffmann, White & Guiderdoni , Bullock, Kravstov & Weinberg ), or the satellites could fail to make stars and be dark (e.g. HI clouds).
## 2 Probes of Small-Scale Power
Any proposal to solve the small halo number density problem by modifying the initial power spectrum must simultaneously be able to pass other constraints on small-scale power. While we have a number of constraints on the linear and evolved power spectrum on larger scales, there are very few stringent constraints on linear scales below a Mpc. Kamionkowski & Liddle () argue that constraints from the abundance of damped Ly-$`\alpha `$ systems and the reionization epoch are passed by low-density versions of the BSI model, partly because of the uncertainties involved in making those predictions. The clustering of objects at high-$`z`$ does not appear to be a promising probe of the matter power spectrum on these small scales. A priori the two most obvious probes are the object abundances which motivated this modification of the power spectrum initially and the power spectrum of the flux in the Ly-$`\alpha `$ forest.
## 3 Simulations
To address some of these issues we ran two sets of N-body simulations. The base model in all cases was a $`\mathrm{\Lambda }`$CDM model with $`\mathrm{\Omega }_\mathrm{m}=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, $`h=0.7`$, $`n=1`$, COBE normalized using the method of Bunn & White (), i.e. with $`\sigma _8=0.88`$. The transfer functions were computed using the fits of Eisenstein & Hu () without the baryonic oscillations. This power spectrum was optionally filtered to suppress small-scale power. We have modeled the behavior displayed in Fig. 1 of Kamionkowski & Liddle () with a simple analytic form:
$$\mathrm{\Delta }^2(k)\equiv \frac{k^3P(k)}{2\pi ^2}=\left(\mathrm{\Delta }_{\mathrm{fid}}^{-2}+(k/k_0)^{3/2}\mathrm{\Delta }_{\mathrm{fid}}^{-2}(k_0)\right)^{-1}$$
(1)
Here $`\mathrm{\Delta }_{\mathrm{fid}}^2`$ is the fiducial power spectrum whose high-$`k`$ behavior we are modifying and the power-law slope of the $`k/k_0`$ term was chosen to match the behavior of their Fig. 1 in the range just above $`k_0`$. The model plotted in their Fig. 1 corresponds to $`k_0\approx 10h\,\mathrm{Mpc}^{-1}`$, as shown in our Fig. 2 below.
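A minimal implementation of the filter of Eq. (1) is straightforward. In the sketch below the fiducial spectrum is a toy power law standing in for the $`\mathrm{\Lambda }`$CDM $`\mathrm{\Delta }_{\mathrm{fid}}^2(k)`$; this stand-in, and the sample wavenumbers, are assumptions for illustration only:

```python
import numpy as np

def delta2_filtered(k, delta2_fid, k0):
    """Eq. (1): suppress the fiducial dimensionless power above k0."""
    return 1.0 / (1.0 / delta2_fid(k) + (k / k0)**1.5 / delta2_fid(k0))

# Toy stand-in for the fiducial Delta^2(k) on these scales (assumption):
delta2_fid = lambda k: (k / 3.0)**1.4

k = np.logspace(-1.0, 2.0, 7)                 # h/Mpc
print(delta2_filtered(k, delta2_fid, k0=2.0))
```

For $`k\ll k_0`$ the fiducial spectrum is recovered, while for $`k\gg k_0`$ the filtered power falls off as $`\mathrm{\Delta }_{\mathrm{fid}}^2(k_0)(k_0/k)^{3/2}`$, i.e. more steeply than $`k^{-3}`$ in $`P(k)`$.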
The first set of simulations used a PM code described in detail in (Meiksin, White & Peacock , White ). The simulations used $`256^3`$ particles and a $`512^3`$ force mesh in a box $`25h^{-1}`$Mpc on a side evolved from $`z=70`$ to $`z=3`$. The high mass resolution and quick execution times allowed us to explore parameter space and address the Ly-$`\alpha `$ forest questions (§4.3) where very high force resolution isn’t necessary.
The second set of simulations used a new implementation of a TreePM code similar to that described in Bagla (). These runs used $`128^3`$ particles in the same size box, evolved from $`z=60`$ to $`z=3`$ with the time step dynamically chosen as a small fraction of the local dynamical time. While higher mass resolution would be preferable, this would make the execution time prohibitive on desktop workstations with the current serial version of the code. A spline softened force (Monaghan & Lattanzio , Hernquist & Katz ) with $`h=8\times 10^{-4}L_{\mathrm{box}}\approx 20h^{-1}`$kpc comoving was used (the force was therefore exactly $`1/r^2`$ beyond $`h`$). Very roughly this corresponds to a Plummer law smoothing $`ϵ\approx h/3`$ (e.g. Springel & White ), although a Plummer law gives 1% force accuracy only beyond $`10ϵ`$.
We have performed numerous tests of the code, among them tests of self-similar evolution of power-law spectra in critical density models and stable evolution of known halo profiles. The simulations took $`\sim 200`$ time steps from $`z=60`$ to $`z=3`$. Comparison of final particle positions suggested the time step criterion was conservative. We have additionally compared the TreePM code with a cosmological Tree code (Springel & White ) and found good agreement in the clustering statistics for several different initial conditions, including one of those used here (V. Springel, private communication).
With both the PM and TreePM codes we ran 3 realizations of 4 models: the ‘fiducial’ $`\mathrm{\Lambda }`$CDM power spectrum, and 3 filtered versions with $`k_0=10h\,\mathrm{Mpc}^{-1}`$, closely approximating Fig. 1 of Kamionkowski & Liddle (), $`k_0=5h\,\mathrm{Mpc}^{-1}`$ and $`k_0=2h\,\mathrm{Mpc}^{-1}`$ which show a larger effect more easily resolved by these relatively small simulations. For each of the 3 realizations the same random phases were used for all 4 power spectra to allow inter-comparison. As additional checks on finite volume and resolution effects we also ran simulations in boxes of side $`50h^{-1}`$Mpc and $`35h^{-1}`$Mpc finding excellent agreement where the simulations overlapped.
## 4 Results
### 4.1 Visual impression
In Fig. 1 we show slices through the particle distributions of our 4 models. The most extreme model, with $`k_0=2h\,\mathrm{Mpc}^{-1}`$, looks markedly different from the others, with smooth low density regions (the simulation initial grid is still clearly visible), and a lack of substructure in the higher density areas. The differences between the other panels are more subtle, and in all cases, are really only apparent on the smallest scales.
### 4.2 Power spectrum
Most probes of small-scale structure, other than the object abundance, depend upon the non-linear power spectrum. The process of gravitational collapse transfers power from large scales to small, and can generate a $`k^{-3}`$ tail in $`P(k)`$ if it is absent initially. Fitting formulae for the non-linear power spectrum such as that of Peacock & Dodds () are not applicable for spectra, such as ours, which have regions with $`n<-3`$. We use our PM and TreePM simulations to study the non-linear power spectrum.
As can be seen in Fig. 2 the scales of interest are non-linear by $`z=3`$ and small-scale power removed by filtering has been regenerated by collapse of large-scale modes. The fiducial model has good agreement with the fitting function of Peacock & Dodds () on intermediate scales, though for scales smaller than $`k\approx 10h\,\mathrm{Mpc}^{-1}`$ in the TreePM simulations we obtain more power than Peacock & Dodds predict by a factor of about 2, independent of the realization or box size. We believe this is due to the very flat nature of the linear theory spectrum on these scales: Jain & Bertschinger () found a similar discrepancy with Peacock & Dodds for $`n=-2`$ spectra (see their Fig. 7). We note that other fitting formulae have been developed which might better reproduce the behaviour of our fiducial model (see e.g. Jain, Mo & White ; Ma ).
To focus on the dynamics of the power regeneration we show the evolution of the mass power spectrum for our fiducial model and one filtered model (with $`k_0=5h\,\mathrm{Mpc}^{-1}`$) in Fig. 3. We use the average of 3 realizations of PM simulation output here since the greater particle density allows us to probe smaller amplitude fluctuations before shot-noise contamination becomes severe. The PM and TreePM simulations agree on the power up to $`k\approx 20h\,\mathrm{Mpc}^{-1}`$, suggesting we resolve the relevant scales with our PM code.
Notice that even at $`z=6`$, the “peak” in power introduced by Eq. 1 has disappeared and small-scale power has been regenerated. The difference between the fiducial and filtered models grows progressively smaller as the evolution proceeds. For comparison, the generation of non-linear power has also been studied in numerical experiments by Little, Weinberg & Park (), who studied scale-invariant models, Melott & Shandarin (, , ) and by Bagla & Padmanabhan (), amongst others.
Finally it is of interest to ask how the redshift space power spectra evolve. Typically the redshift space spectra appear closer to the linear theory power spectrum than the real space spectra. In Fig. 4 we show the redshift space mass power spectrum as a function of redshift, as in Fig. 3 for the real-space spectra. We can see that even the redshift space spectra have a tail of power at small scales, induced by the non-linear clustering.
### 4.3 Ly-$`\alpha `$ forest
There has been a great deal of progress in theoretical understanding of the Ly-$`\alpha `$ forest recently, due in large part to hydrodynamic simulations (Cen et al. ; Zhang, Anninos & Norman ; Miralda-Escudé et al. ; Hernquist et al. ; Wadsley & Bond ; Zhang et al. ; Theuns et al. \[1998a\], \[1998b\]; Davé et al. ; Bryan et al. ). In these simulations, it has been found that at high $`z`$ ($`>2`$), most of the absorption in Ly-$`\alpha `$ forest spectra is due to a continuous, fluctuating photoionized medium. The physical processes governing this absorbing gas are simple (see e.g., Bi and Davidsen , Hui & Gnedin ), and as a result, the optical depth for absorption at a particular point can be related directly to the underlying matter density (Croft et al. ). Because of this, observations of the Ly-$`\alpha `$ forest in quasar spectra can be potentially very useful for probing the clustering of matter (e.g. Gnedin , Croft et al. , Nusser & Haehnelt ).
We have generated simulated Ly-$`\alpha `$ forest spectra from our PM $`N`$-body outputs at $`z=3`$ in order to test how constraining Ly-$`\alpha `$ measurements could be for the models described in this paper. To do this, we follow a similar procedure to that outlined in Hui & Gnedin () and Croft et al. (). We bin the particle distribution onto $`512^3`$ density and velocity grids using a cloud-in-cell scheme, and smooth with a Gaussian filter of width one grid cell. We convert the density in each cell to an optical depth for neutral hydrogen absorption by assuming the “Fluctuating Gunn-Peterson Approximation” (FGPA) (see Croft et al. \[, \]; see also Hui & Gnedin ) and assign a temperature to the cell using a power-law density-temperature relation. In all our tests, we use the form $`T=T_0\rho ^{\gamma -1}`$, with $`\gamma =1.5`$ (see Hui & Gnedin for the expected dependence of $`\gamma `$ on reionization epoch). We set the coefficient of proportionality between density and optical depth by requiring that the mean transmitted flux, $`\langle F\rangle =0.684`$, in accordance with the observations of McDonald et al. (). We run 256 lines of sight parallel to each of the three axes through one simulation box and create mock spectra from a convolution of the optical depths, peculiar velocities, and thermal broadening. The conversion to $`\mathrm{km}\,\mathrm{s}^{-1}`$ from $`h^{-1}`$Mpc at $`z=3`$ in this model is a factor of 112.
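The FGPA step of this procedure can be sketched in a few lines. The snippet below maps an overdensity field to transmitted flux and calibrates the optical-depth amplitude to the quoted mean flux; the exponent $`2-0.7(\gamma -1)`$ (from the temperature dependence of the recombination coefficient) and the omission of peculiar velocities and thermal broadening are simplifying assumptions of this sketch:

```python
import numpy as np
from scipy.optimize import brentq

def fgpa_flux(delta, gamma=1.5, mean_flux=0.684):
    """Map overdensity delta to Lya flux via the FGPA (no velocities/broadening)."""
    beta = 2.0 - 0.7 * (gamma - 1.0)   # tau ~ rho^2 T^-0.7 with T ~ rho^(gamma-1)
    tau = (1.0 + delta)**beta
    # fix the amplitude A so that <exp(-A*tau)> equals the observed mean flux
    g = lambda ln_a: np.mean(np.exp(-np.exp(ln_a) * tau)) - mean_flux
    A = np.exp(brentq(g, -15.0, 15.0))
    return np.exp(-A * tau)

# demo on a lognormal toy density field (assumption, not the PM output)
flux = fgpa_flux(np.random.lognormal(0.0, 0.5, 100000) - 1.0)
print(flux.mean())   # ~ 0.684 by construction
```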
On small scales, the finite pressure of the gas will in detail modify its clustering (see e.g., Hui & Gnedin , Bryan et al. , Theuns et al. ), tending to make the gas density field smoother than the dark matter only outputs of our simulations. We have also implemented a 2-species version of “Hydro-PM” (Hui & Gnedin ), which takes these effects into account, but find that for our purposes here the main results are adequately reproduced by the pure PM runs. We expect that the details of the spectra that we create will also depend on other assumptions about e.g., the reionization epoch and our simulation methodology. To the extent that we are interested primarily in relative comparisons between models this should not be cause for concern.
For each set of mock spectra we compute the one-dimensional flux power spectrum, using an FFT, and show the results in Fig. 5. In the top panel of this figure, we have set the temperature of the gas at the mean density, $`T_0`$, to be equal to $`10^4`$K for all models. The different curves show the effects of linear power suppression at different values of $`k_0`$, while the points are the observational results of McDonald et al. (). While a suppression in flux power is seen in Fig. 5, it is very small. If we vary the value of $`T_0`$, the change in the thermal broadening scale causes a more dramatic effect. This can be seen in the lower panel of Fig. 5, where we show results for the fiducial linear power spectrum only. In general, the flux power spectrum shape on small scales will depend on the temperature of the gas through thermal broadening and finite gas pressure, as well as the non-linearity of matter clustering. In the context of this $`\mathrm{\Lambda }`$CDM model, it seems as though the observations of McDonald et al. are consistent with a fairly high mean gas temperature, although a more detailed study involving hydrodynamic simulations is needed to give definitive results. What is certain from the present study is that the one-dimensional flux power spectrum provides little constraint on our models with suppressed linear power. Clustering in the flux has apparently been regenerated by non-linear gravitational evolution in a similar fashion to that seen in Fig. 2. There may be a good side to this, though, as insensitivity to the amount of small-scale linear power will mean that estimates of the temperature of the IGM made by looking at small-scale clustering of the flux (as in Fig. 5) should be more robust than expected.
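The one-dimensional flux power spectrum computation amounts to an FFT of the flux contrast along each sightline. A minimal version, with one common (here assumed) normalization convention, is:

```python
import numpy as np

def flux_power_1d(flux, box_kms):
    """P_1D(k) of delta_F = F/<F> - 1 for sightlines flux[nlos, npix]."""
    flux = np.atleast_2d(flux)
    npix = flux.shape[1]
    delta_f = flux / flux.mean() - 1.0
    dk = np.fft.rfft(delta_f, axis=1)
    pk = box_kms * np.abs(dk)**2 / npix**2                      # km/s
    k = 2.0 * np.pi * np.fft.rfftfreq(npix, d=box_kms / npix)   # s/km
    return k[1:], pk[:, 1:].mean(axis=0)
```

For the $`25h^{-1}`$Mpc box at $`z=3`$ one would call this with `box_kms = 25.0 * 112.0`, using the conversion factor quoted above.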
If we assume isotropy of clustering, the three-dimensional flux power spectrum, $`\mathrm{\Delta }_F^2(k)`$, can be simply recovered from the one-dimensional one (see Croft et al. for details). It was found by Croft et al. () that on sufficiently large scales the shape of $`\mathrm{\Delta }_F^2`$ measured from simulated spectra matches well that of the linear theory mass power spectrum, $`\mathrm{\Delta }^2(k)`$. In Fig. 6 we test this using the model with $`k_0=2h\,\mathrm{Mpc}^{-1}`$, and the fiducial model (both with $`T_0=10^4`$K). We can see that there is not much difference between $`\mathrm{\Delta }_F^2(k)`$ for the two (the same is true of the two intermediate models, which we do not plot). The linear theory mass power spectrum (arbitrarily normalized) is shown for comparison. On scales approaching the box size, cosmic variance is large enough to account for the difference between the linear curve and the points. On smaller scales, there is still scatter, but the points taken from the simulation with less linear power are systematically a bit lower. We might expect the simulation points to start to trace the linear theory shape around the scale of non-linearity, where $`\mathrm{\Delta }^2(k)`$ becomes comparable to 1, which from Fig. 2 is around $`k\approx 1`$–$`2h\,\mathrm{Mpc}^{-1}`$. If we look at Fig. 6, this does seem reasonable, and we do find similar results even if we assume different gas temperatures. On smaller scales though, $`\mathrm{\Delta }_F^2(k)`$ has been regenerated by non-linearity, so that the exact relationship between $`\mathrm{\Delta }_F^2(k)`$ and the matter clustering is complex, and, as in Fig. 5, the differences between models are small.
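For reference, the isotropy inversion used here takes the standard form (we quote the usual textbook relation for convenience; it is not copied from Croft et al.):

$$P_F^{1D}(k)=\frac{1}{2\pi }\int _k^{\infty }P_F(q)\,q\,dq,\qquad \mathrm{\Delta }_F^2(k)\equiv \frac{k^3P_F(k)}{2\pi ^2}=-\frac{k^2}{\pi }\frac{dP_F^{1D}(k)}{dk},$$

so that the three-dimensional power follows from the derivative of the one-dimensional one, which is why noise in $`P_F^{1D}`$ is amplified in $`\mathrm{\Delta }_F^2`$.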
Another statistic which we can check, in order to see whether suppression of linear power has caused any changes in higher-order clustering, is the probability distribution of the flux. We plot this in Fig. 7, showing the 4 models with different linear power (and all with $`T_0=10^4`$ K) in the top panel. There are small differences between the models, particularly at the high flux end, where the models with more power appear to have more truly empty regions. These small differences are likely to remain unobserved though due to the difficulty of accurate continuum fitting. This has implications for studies which use the flux PDF information to constrain the amount of linear power on the Jeans scale (e.g., Nusser & Haehnelt ). For the same reason that the flux power spectrum does not change much on small scales (generation of power), these methods are likely to also be insensitive to power truncation of the type we are considering.
The lower panel of this figure shows results for our fiducial model with different gas temperatures. There appears to be little difference between the curves, although we have found that differences do appear if the spectra are subjected to a moderate amount of smoothing (e.g. with a $`50\,\mathrm{km}\,\mathrm{s}^{-1}`$ Gaussian; not shown).
From our tests with both sets of statistics, we find that the Ly-$`\alpha `$ forest is not a promising discriminator between the models we are considering here. Two effects conspire to mask any differences in the Ly-$`\alpha `$ measurements on the small scales where there are large differences in the linear power spectra. First, the thermal broadening has the effect of smoothing the spectra (the thermal width of features is a few 10s of $`\mathrm{km}\,\mathrm{s}^{-1}`$). Second, non-linear evolution of the density field causes power to be rapidly transferred from large to small scales. For these models the scale of non-linearity at $`z=3`$ is about the same as or larger than the scale at which there are large differences in the linear power spectra. On smaller scales, the shape of the three-dimensional flux power spectrum no longer follows that of the linear mass power spectrum.
### 4.4 Halo abundance
Ideally we would like to evolve a large volume and study the number density of small halos present today within a larger halo such as the Milky Way. This is not possible with the limited dynamic range of the simulations presented here. While many effects could potentially disturb all the small halos as they interact with each other and a larger halo, very high resolution numerical simulations suggest that this may not be the case in practice. Moore et al. (\[1999a\]) find that a large number of small halos are not disrupted, so that the number remaining will still be a substantial fraction of the number that existed in the proto-galaxy. In this paper, we focus on the number density of halos in our simulations at $`z=3`$ where our $`25h^{-1}`$Mpc box is just about to go non-linear. We assume that these small halos would become incorporated into a larger halo at later times by the usual evolution of clustering, and that the fraction that survive disruption can be predicted by referring to the detailed calculations of Moore et al. (\[1999a\]). Here we are only interested in the deficit in the number of small halos in our suppressed models relative to the fiducial model. This relative fraction should be similar at the redshift of our simulation box to what it would be at $`z=0`$, although the absolute number of halos could only be quantified using simulations like those of Moore et al. Similar assumptions to ours were also used by Kamionkowski & Liddle ().
We show in Fig. 8 the Press-Schechter predictions for our fiducial and filtered models and the numerically determined mass functions. As there is no perfect algorithmic definition of a “group” of points, the mass function is sensitive to a small degree to the halo finding algorithm. We have used both the Friends-of-Friends (FOF; Davis et al. ) algorithm, with linking length 0.2, and the HOP (Eisenstein & Hut ) halo finding algorithm to construct these mass functions. We find that the mass function differed slightly if we changed the parameters in the algorithms or the algorithm used, and show results for both of these schemes. However, these differences should not affect our main conclusions, as we are interested in comparing models with different amounts of small-scale power to each other.
The N-body mass functions and the Press-Schechter predictions are shown in Figs. 8 and 9 for the HOP and FOF algorithms respectively. The “shelf” at the low mass end of the N-body mass functions arises because of the minimum number of particles allowed to form a group. There are no very low mass halos in the simulation. If we use $`\delta _c=1.69`$ with a top-hat window in the Press-Schechter predictions we find that the mass functions have too few large-mass halos compared to the HOP N-body results for both $`z=4`$ and 3. The two can be brought into better agreement if we decrease $`\delta _c`$ to 1.5 as we have done. With this modification, the Press-Schechter predictions overestimate the FOF results by a factor of up to 2. To check for simulation artifacts we also ran several larger boxes. We find that the mass functions from a sequence of larger boxes (up to $`50h^{-1}`$Mpc) with different random phases match smoothly and stably onto the mass function of these simulations, suggesting that there are no finite volume or sample variance effects operating. As a final check, a completely separate analysis chain using a different N-body code (a cosmological Tree code) obtains the same mass function at $`z=3`$ for one of our runs (V. Springel; private communication).
Figs. 8 and 9 suggest that the number of halos is indeed governed by the linear theory power spectrum. The amount of suppression relative to the fiducial model is robust to the parameters of our group finding algorithm or the algorithm used. The absolute number of halos can in principle be predicted from statistics of the initial density field, although there are uncertainties related to the definition of halos in the simulations and parameters in Press-Schechter theory.
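For concreteness, a minimal Press-Schechter evaluation with the adjustable $`\delta _c`$ used above can be sketched as follows; the integration range and sampling are arbitrary choices of this sketch, and `rho_bar` is the comoving mean matter density in units consistent with `M`:

```python
import numpy as np

def sigma_of_M(M, delta2, rho_bar):
    """rms linear fluctuation in a top-hat sphere containing mass M."""
    R = (3.0 * M / (4.0 * np.pi * rho_bar))**(1.0 / 3.0)
    lnk = np.linspace(np.log(1e-4), np.log(1e3), 4000)
    x = np.exp(lnk) * R
    W = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3      # top-hat window
    return np.sqrt(np.trapz(delta2(np.exp(lnk)) * W**2, lnk))

def ps_dn_dlnM(M, delta2, rho_bar, delta_c=1.5):
    """Press-Schechter dn/dlnM; delta_c=1.5 matches the HOP comparison above."""
    sig = np.array([sigma_of_M(m, delta2, rho_bar) for m in M])
    nu = delta_c / sig
    dlns_dlnM = np.gradient(np.log(sig), np.log(M))
    return (np.sqrt(2.0 / np.pi) * (rho_bar / M) * nu
            * np.abs(dlns_dlnM) * np.exp(-nu**2 / 2.0))
```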
We find that in order to reduce the number of small halos by a large factor (for example Kamionkowski & Liddle () recommend about an order of magnitude), we require a fairly severe filtering of the fiducial model, using a filter with $`k_0=2h\,\mathrm{Mpc}^{-1}`$.
Finally we remark that this set of simulations does not have enough mass resolution to probe the structure of the halos we find. However simulations by Moore et al. (\[1999b\]) suggest that the halo structure will not be sensitive to the filtering of the initial power spectrum. This lends some support to our assumption that the amount of disruption of the small halos when they become incorporated into a larger halo does not depend on the alterations we have made to the initial power spectrum.
## 5 Conclusions
While the essential picture of hierarchical formation of large-scale structure in a universe containing primarily cold dark matter appears to work well, some puzzles remain. One of these is the paucity of dwarf galaxies in the local neighborhood. One resolution of this “lack-of-small-halos problem” is a modification of the initial power spectrum, reducing the amount of small-scale power. There exist inflationary models which can accomplish this, though the scale of the modification must be put in by hand. Other approaches, such as assuming that the universe is dominated by Warm Dark Matter (WDM), will have a similar effect (and both approaches may solve other problems: see e.g. Sommer-Larsen & Dolgov ). In models with reduced small-scale power structure forms in a top-down manner over a range of scales near the break, so ansätze developed for the “traditional” bottom-up scenario should be treated with caution. In this work we have used numerical simulations to address the question of how one could constrain such a modification of the initial power spectrum. We note that we have dealt in detail only with a model with suppressed initial power. In a WDM model the deficit of power arises from the dark matter velocity dispersion, and so such a model may behave slightly differently, at least on the smallest scales.
We find that the halo mass function depends primarily on the linear theory power spectrum, so a suppression of small-scale power does reduce the number of low mass halos. While the Press-Schechter theory predicts qualitatively the right behavior, its free parameter ($`\delta _c`$) must be adjusted to fit the N-body results. To reduce the number of $`10^{10}M_{\odot }`$ halos by a factor of $`>5`$ compared to our fiducial model requires a fairly extreme filtering of the primordial power spectrum, and the structure that forms in such a model appears qualitatively different to the fiducial $`\mathrm{\Lambda }`$CDM model (Fig. 1).
Collapse of large-scale structures as they go non-linear regenerates a “tail” in $`P(k)`$ if it is suppressed in the initial conditions (and this holds in redshift as well as real space). Thus probes which measure primarily the evolved power spectrum are less sensitive to reduced small-scale power than one might think. We particularly examine measurements of clustering from the Ly-$`\alpha `$ forest flux. On the scales which govern the number of small halos, choosing a different gas temperature affects Ly-$`\alpha `$ clustering much more strongly than suppressing the linear power spectrum. The matter power spectrum measurement made from the low resolution Ly-$`\alpha `$ forest spectra by Croft et al. () probes scales just above this, which are still linear, and offers little constraint on these models. Any extension of these simple Ly-$`\alpha `$ forest measurements to smaller scales will necessarily yield less general conclusions.
Given that the number density of collapsed objects seems to be the most sensitive probe of this small-scale modification of the power spectrum, other observations which depend on this should be used to make consistency checks. At the moment, the obvious choices, such as the number density of damped Ly-$`\alpha `$ systems, or the redshift of reionization induced by the formation of the first stars and quasars, are difficult to predict accurately from theory. Their potentially strong discriminatory power will make them useful eventually though, as we learn whether more of Cosmology’s puzzles can be resolved by an absence of small-scale power.
M.W. would like to acknowledge useful conversations with Jasjeet Bagla, Chip Coldwell, Lars Hernquist and Volker Springel on the development of the TreePM code. We thank Marc Kamionkowski and David Weinberg for useful comments on an earlier draft. M.W. was supported by NSF-9802362 and R.A.C.C. by NASA Astrophysical Theory Grant NAG5-3820. Parts of this work were done on the Origin2000 system at the National Center for Supercomputing Applications, University of Illinois, Urbana-Champaign. |
# Universal and non-universal features of glassy relaxation in propylene carbonate
## I Introduction
In this paper the evolution of structural relaxation as observed upon cooling the van-der-Waals liquid propylene carbonate (PC) from above the melting temperature ($`T_m=218\mathrm{K}`$) to the glass-transition temperature ($`T_g=160\mathrm{K}`$) will be analyzed. It will be shown that the spectra, as measured within the four-decade frequency window below $`800\mathrm{G}Hz`$ by depolarized-light-scattering, by dielectric-loss, and by neutron-scattering spectroscopy can be quantitatively described by the solutions of a two-component schematic model of the mode-coupling theory (MCT), where the drift of the various spectral features over several orders of magnitude due to temperature changes can be fitted by smooth variations of the model parameters. The results of the data fits will be used to demonstrate in detail which features can be explained by the universal $`\beta `$-relaxation-scaling laws of the asymptotic MCT-bifurcation dynamics, and which are caused by either preasymptotic corrections to this scaling or by crossover phenomena to microscopic oscillatory motion.
Glassy PC spectra within the full GHz window have first been studied by Du et al. using depolarized-light-scattering spectroscopy. It was shown that the data can be interpreted with the universal laws predicted by MCT. In its basic version, which is also referred to as the idealized MCT, this theory implies an ideal liquid-glass transition at a characteristic temperature $`T_c`$. In an extended version, $`T_c`$ marks a crossover from the high-temperature regime, where the dynamics is dominated by non-linear-interaction effects between density fluctuations, to a low-temperature regime, where the dynamics deals with activated-hopping transport in an effectively frozen system. For temperatures $`T`$ near $`T_c`$, the MCT equations can be solved by asymptotic expansions for the so-called $`\beta `$-relaxation regime. This results in formulas for universal features of the MCT dynamics as reflected in the appearance of dynamical scaling laws, power-law-decay processes, and in algebraically diverging time scales. The different anomalous exponents and also the $`\beta `$-relaxation-master functions are determined by a system-dependent number which is called the exponent parameter $`\lambda `$ . The data analysis of Ref. suggested $`T_c\approx 187\mathrm{K}`$ and $`\lambda \approx 0.78`$. Relaxation curves measured for PC within the pico-second window in solvation-dynamics studies and dielectric-loss spectra determined within the GHz window have also been analyzed with the MCT-scaling-law formulas using parameters $`T_c`$ and $`\lambda `$ consistent within the experimental uncertainties with the values cited above. The critical temperature for PC has first been determined to be $`T_c\approx 180\mathrm{K}`$ by interpreting the $`\alpha `$-relaxation time for density fluctuations measured by neutron-scattering spectroscopy with the MCT-power-law prediction for this quantity. A similar analysis of the viscosity suggests a value of $`T_c`$ near $`190\mathrm{K}`$. The effective Debye-Waller factor for the elastic modulus has been measured for PC by Brillouin-scattering spectroscopy . Interpreting this quantity with the asymptotic formula of the idealized MCT, a critical temperature considerably higher than $`190\mathrm{K}`$ has been suggested. However, since the data interpretation is not compelling , this finding cannot be considered to be a falsification of the $`T_c\approx 187\mathrm{K}`$ result. Thus one could conclude that MCT describes some essential features of the glassy dynamics of PC qualitatively correctly, a statement which also holds for a series of other glass-forming systems .
In order to arrive at a more stringent assessment of MCT, Wuttke et al. have re-examined the above cited PC data for $`T>T_c`$. In addition, they have studied incoherent-neutron-scattering spectra $`S(q,\omega )`$ for a two-decade window in frequency $`\omega `$ and for wave vectors $`q`$ between $`0.7`$ and $`2.3\,\mathrm{\AA }^{-1}`$. The data exhibited the predicted factorization in a $`q`$-dependent but $`\omega `$-independent amplitude $`h_q`$, and a $`q`$-independent term describing the frequency and temperature variation: $`S(q,\omega )\propto h_q\chi ^{\prime \prime }(\omega )/\omega `$. The susceptibility spectrum $`\chi ^{\prime \prime }(\omega )`$ showed the subtle dependence on $`\omega `$ and on $`(T-T_c)`$ predicted by the MCT-scaling laws for the $`\beta `$-process, provided $`T_c\approx 182\mathrm{K}`$ and $`\lambda \approx 0.72`$ was chosen. These parameters are marginally compatible with the values found in the above cited earlier work on PC. The depolarized-light-scattering spectra have been remeasured within the $`\beta `$-relaxation window for $`T>T_c`$. The spectrometer used in Ref. incorporated several improvements over the one used in the original study , resulting in improved signal-to-noise ratios. Furthermore, the use of a narrow-band interference filter eliminated the possibility of higher-order transmission effects which have recently been recognized as a potential source of artifacts . But the new spectra agree with the old ones within the error bars of the latter. The remeasured spectra could be fitted convincingly with the universal asymptotic results using the newly found values for $`T_c`$ and $`\lambda `$. It was shown in addition that also the solvation-dynamics results and the dielectric-loss spectra could be fitted within the same frame using the new values for $`T_c`$ and $`\lambda `$. Actually, the new fit to the dielectric-loss data is more convincing than the original one , since the fit interval expands with decreasing $`(T-T_c)`$, as requested by MCT. The size of the $`(T-T_c)`$ interval and the window for the frequency where leading-order-asymptotic results describe the MCT-bifurcation dynamics, depend on the probing variable . It was assumed in that the range of validity of the asymptotic analysis is smaller for the dielectric-loss spectra than for the light-scattering spectra. It also had to be anticipated that preasymptotic corrections can account for a $`35\%`$ offset of the $`\beta `$-relaxation-time scale of the neutron-scattering data relative to the one for the light-scattering data.
To corroborate the cited MCT interpretations of glassy PC spectra, the previous work shall in this paper be extended in three directions. First, the $`\alpha `$-relaxation peaks will be included in the analysis, so that the low-frequency limit for the fit interval can be decreased to $`1\,\mathrm{GHz}`$ or lower. Thereby the crossover from $`\alpha `$- to $`\beta `$-relaxation and the non-universal $`\alpha `$-peak shapes can be described as well. Second, the crossover from relaxation to vibrational dynamics will be included in the analysis, so that the high-frequency limit for the fit interval can be increased by about a factor of four. Third, an extended form of the MCT instead of the idealized one will be used, so that the spectra for depolarized-light scattering and dielectric loss for $`\omega \lesssim 1\,\mathrm{GHz}`$ can be described also for temperatures below $`T_c`$. The specified goals will be achieved by studying the full solutions of an MCT model.
The paper is organized as follows: In Sec. II, the basic formulas for the schematic model to be used will be summarized, and then (Sec. III) the experimental data sets are fitted using this model with smoothly drifting parameters. After a short introduction to the necessary equations for the asymptotic analysis for the model (Sec. IV), the $`\beta `$-scaling laws are tested against the data in Sec. V. In Sec. VI, it will be shown that for the studied model a properly defined dielectric modulus is more suited for a description by scaling laws than the dielectric function. Section VII presents some conclusions.
## II A schematic mode-coupling-theory model
The idealized MCT is based on closed equations of motion for the auto-correlation functions of the density fluctuations $`\varphi _q(t)`$, which are positive definite functions of time $`t`$, depending on the wave vector modulus $`q`$ . The extended MCT also includes couplings of the density correlators $`\varphi _q(t)`$ to the auto-correlation functions for the currents . The general equation of motion expresses the density correlator in terms of relaxation kernels. It is formulated most transparently with Laplace-transformed quantities. For the latter, the convention $`F(z)=i\int _0^{\infty }\mathrm{exp}(izt)F(t)\,dt`$ with complex frequency $`z`$, and $`F(\omega )=F^{\prime }(\omega )+iF^{\prime \prime }(\omega )`$ for $`z=\omega +i0`$ will be used.
$`\varphi _q(z)={\displaystyle \frac{-1}{z+C_q(z)}},`$ (2)
$`C_q(z)=N_q(z)-{\displaystyle \frac{\mathrm{\Omega }_q^2}{z+M_q^{\text{reg}}(z)+\mathrm{\Omega }_q^2m_q(z)}}.`$ (3)
Here, $`\mathrm{\Omega }_q`$ denotes a characteristic frequency given by the thermal velocity $`v`$ and the static structure factor $`S_q`$: $`\mathrm{\Omega }_q^2=q^2v^2/S_q`$. The general current-flow kernel $`C_q(z)`$ describes density-fluctuation decay via two parallel channels. Phonon-assisted hopping is given by $`N_q(z)`$. The relaxation due to nonlinear interactions of density fluctuations is described by a force-fluctuation kernel which consists of a sum of a regular term $`M_q^{\text{reg}}(z)`$ and a mode-coupling term $`m_q(z)`$. The former deals with normal-liquid dynamics, and the latter with the slow motion caused by the cage effect. It is obtained as a polynomial $`\mathcal{F}_q`$ of the density correlators $`\varphi _q(t)`$:
$$m_q(t)=\mathcal{F}_q\left[\varphi _q(t)\right].$$
(4)
The coefficients of the polynomial are non-negative; they are given by the equilibrium structure and hence depend smoothly on external control parameters like temperature $`T`$. Systematic studies of the kernels $`N_q(z)`$ and $`M_q^{\text{reg}}(z)`$ are not available. The theory shall be simplified by Markov approximations of these quantities: $`M_q^{\text{reg}}(z)=i\nu _q`$, $`N_q(z)=i\mathrm{\Delta }_q`$. The friction constants $`\nu _q\ge 0`$ and hopping coefficients $`\mathrm{\Delta }_q\ge 0`$ shall be treated as model parameters, which depend smoothly on $`T`$.
Equations (II) can exhibit bifurcation singularities. Generically, if the temperature is considered as the single control parameter, the singularity occurs at a critical temperature $`T_c`$ if all hopping coefficients $`\mathrm{\Delta }_q`$ vanish. If some $`\mathrm{\Delta }_q\ne 0`$, the singularity is avoided. However, for small $`\mathrm{\Delta }_q`$ and small $`|T-T_c|`$ the singularity causes an anomalous dynamics: the glassy dynamics studied by MCT. At the singularity the correlators do not decay to zero but to a positive value $`f_q^c`$, which is called the plateau. It is approached by an algebraic decay law, called the critical decay, which is specified by an anomalous exponent $`a`$, $`0<a\le 1/2`$:
$`\varphi _q(t)-f_q^c=h_q(t/t_0)^{-a}+𝒪\left(t^{-2a}\right);`$ (5)
$`T=T_c,\mathrm{\Delta }_q=0.`$ (6)
The quantity $`h_q>0`$ is called the critical amplitude, and it can be determined from the mode-coupling functional $`\mathcal{F}_q`$ for $`T=T_c`$. The time scale $`t_0`$ is determined by the transient dynamics for $`T=T_c`$. For $`\mathrm{\Delta }_q=0`$ and small but negative $`(T_c-T)`$, the correlator falls below the plateau $`f_q^c`$ according to the von Schweidler law $`\varphi _q(t)-f_q^c\propto -t^b+𝒪\left(t^{2b}\right)`$, characterized by a second anomalous exponent $`b`$, $`0<b\le 1`$. From $`\mathcal{F}_q`$ for $`T=T_c`$, one can calculate the above mentioned exponent parameter $`\lambda `$, $`1/2\le \lambda <1`$, which determines the critical exponent $`a`$ and the von Schweidler exponent $`b`$ via $`\mathrm{\Gamma }(1-a)^2/\mathrm{\Gamma }(1-2a)=\lambda =\mathrm{\Gamma }(1+b)^2/\mathrm{\Gamma }(1+2b)`$. In the so-called $`\beta `$-relaxation window, implicitly defined by $`|\varphi _q(t)-f_q^c|\ll 1`$, MCT predicts that the dynamics is in leading order controlled by merely two smooth functions of $`T`$: the separation parameter $`\sigma `$ and the hopping parameter $`\delta `$. The former is determined by $`\mathcal{F}_q`$, and its zero defines the crossover temperature $`T_c`$: $`\sigma =C(T_c-T)/T_c+𝒪\left((T-T_c)^2\right)`$. The latter obeys $`\delta \ge 0`$; generically, $`\delta `$ vanishes only if $`\mathrm{\Delta }_q=0`$ for all $`q`$. The shape of the correlation functions in the asymptotic regime of the $`\beta `$-relaxation window is fully determined by the exponent parameter $`\lambda `$, as can be inferred from Ref. and the original papers cited therein.
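As an aside, the exponent relation just quoted is easy to invert numerically. The following sketch (a minimal illustration, assuming Python with SciPy available; it is not part of the original analysis) solves $`\mathrm{\Gamma }(1-a)^2/\mathrm{\Gamma }(1-2a)=\lambda =\mathrm{\Gamma }(1+b)^2/\mathrm{\Gamma }(1+2b)`$ by root finding and reproduces the values $`a\approx 0.30`$ and $`b\approx 0.56`$ quoted in Sec. III for $`\lambda =0.75`$:

```python
from scipy.special import gamma
from scipy.optimize import brentq

lam = 0.75  # exponent parameter, as obtained from the PC fits below

# critical exponent a: Gamma(1-a)^2/Gamma(1-2a) = lambda, 0 < a <= 1/2
a = brentq(lambda x: gamma(1 - x)**2/gamma(1 - 2*x) - lam, 1e-8, 0.5 - 1e-8)
# von Schweidler exponent b: Gamma(1+b)^2/Gamma(1+2b) = lambda, 0 < b <= 1
b = brentq(lambda x: gamma(1 + x)**2/gamma(1 + 2*x) - lam, 1e-8, 1.0)

print(f"a = {a:.3f}, b = {b:.3f}")  # a ~ 0.30, b ~ 0.56 for lambda = 0.75
```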
Testing the relevance of MCT by comparing the leading-order results for the $`\beta `$-relaxation with data is, however, hampered by a great difficulty. Without detailed microscopic calculations one cannot determine the size of the corrections to the asymptotic formulas, and therefore their range of validity is not known. In addition, the optimal choice of $`\lambda `$, which fixes the shape of the $`\mathrm{log}\chi ^{\prime \prime }`$-versus-$`\mathrm{log}\omega `$ graph, is tedious to determine and might well depend on the choice of the fit interval. The difficulty of fixing $`\lambda `$ from a $`\beta `$-relaxation study alone was demonstrated recently for the hard-sphere system . A set of density correlators $`\varphi _q(t)`$ calculated for various wave vectors and packing fractions was considered. A fit to them with the asymptotic predictions for a significantly wrong $`\lambda `$ was, by a standard fitting procedure, not distinguishable from the correct fits within typical experimental windows.
A different route for data interpretation is based on comparison of the measured spectra with the complete solutions obtained from schematic MCT models. This procedure was studied first by Alba-Simionesco et al. . Schematic models are truncations of the complete set of Eqs. (II) to a set dealing with a small number of correlators only. Thus the mathematical complexity of the problem is reduced considerably. Alas, the connection of the mode-coupling-functional coefficients with the microscopic structure gets lost; the coefficients are to be treated as fit parameters. The main advantage of this approach is that one does not rely on the applicability of asymptotic formulas; one is sure that all results on crossover phenomena and preasymptotic corrections are logically consistent with the MCT.
The simplest schematic model deals with a single correlator only, which shall be denoted by $`\varphi (t)`$. The first MCT equation is equivalent to Eqs. (2,3) with $`q`$ indices dropped:
$$\varphi (z)=\frac{-1}{z+i\mathrm{\Delta }-\mathrm{\Omega }^2/\left[z+i\nu +\mathrm{\Omega }^2m(z)\right]}.$$
(8)
For the mode-coupling functional, a quadratic polynomial that can reproduce all valid values for the exponent parameter $`\lambda `$ is used :
$$m(t)=v_1\varphi (t)+v_2\left(\varphi (t)\right)^2.$$
(9)
For $`\mathrm{\Delta }=0`$, ideal liquid-glass transitions occur on a line in the $`v_1`$-$`v_2`$ plane of coupling constants. One can use $`\lambda `$ to parameterize this line of critical coupling constants:
$$v_1^c=(2\lambda -1)/\lambda ^2,\qquad v_2^c=1/\lambda ^2,\qquad 1/2\le \lambda <1.$$
(10)
Thus this model is specified by two control parameters $`(v_1,v_2)`$, by two frequencies $`(\mathrm{\Omega },\nu )`$ quantifying the transient dynamics, and one rate $`\mathrm{\Delta }`$ for the activated transport processes. The model has many non-generic features, and therefore one cannot expect it to describe a measured spectrum. In the present paper, the correlator $`\varphi (t)`$ is introduced to mimic in an overall fashion the combined effect of all structure fluctuations in producing the bifurcation point and the exponent parameter $`\lambda `$ of the system.
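For $`\mathrm{\Delta }=0`$, the long-time limit $`f=\varphi (t\to \infty )`$ of Eqs. (8,9) obeys the standard MCT fixed-point equation $`f/(1-f)=m(t\to \infty )=v_1f+v_2f^2`$; iterating from $`f=1`$ converges to the largest solution. The following sketch is an illustration of this fixed-point structure, not the algorithm used for the fits; it recovers the critical plateau $`f^c=1-\lambda `$ on the line of Eq. (10):

```python
lam = 0.75
v1c, v2c = (2*lam - 1)/lam**2, 1/lam**2   # critical couplings, Eq. (10)

def plateau(v1, v2, iters=5000):
    f = 1.0                                # start above: the iteration decreases
    for _ in range(iters):                 # monotonically to the largest root
        F = v1*f + v2*f*f                  # m(t -> infinity), Eq. (9)
        f = F/(1.0 + F)
    return f

print(plateau(v1c, v2c))          # ~0.25 = 1 - lambda; convergence is slow
                                  # (power-law) right at the critical point
print(plateau(1.1*v1c, 1.1*v2c))  # inside the glass: f exceeds f^c
```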
The dynamics of some probing variable $`A`$ coupling to density fluctuations shall be described by a second correlator, to be denoted $`\varphi _A^s(t)`$. It obeys an equation analogous to Eq. (8):
$$\varphi _A^s(z)=\frac{-1}{z+i\mathrm{\Delta }_A^s-{\mathrm{\Omega }_A^s}^2/\left[z+i\nu _A^s+{\mathrm{\Omega }_A^s}^2m_A^s(z)\right]}.$$
(12)
Again the microscopic dynamics is quantified by two frequencies referred to as microscopic parameters $`(\mathrm{\Omega }_A^s,\nu _A^s)`$. The activated relaxation processes are described by $`\mathrm{\Delta }_A^s`$. The mode-coupling functional shall be specified by a coupling to $`\varphi (t)`$ quantified by a single coupling constant $`v_A^s`$:
$$m_A^s(t)=v_A^s\varphi (t)\varphi _A^s(t).$$
(13)
It is a peculiarity of this model that the dynamics of the probing variable $`A`$ is influenced by $`\varphi (t)`$ but not vice versa. Thus neither the position of the transition nor the value of $`\lambda `$ is modified by the introduction of the second correlator. The model was motivated by Sjögren for the description of tagged-particle motion in a glassy environment, and it will be used here in the same context for the interpretation of the neutron-scattering data. The MCT for the reorientational dynamics of a non-spherical probe molecule suggests the same schematic model for the dipole and quadrupole relaxation , an observation that motivates the application of the model for the description of the dielectric-loss and depolarized-light-scattering spectra, respectively. For the incoherent-neutron-scattering cross-section, the fit will be done using model parameters for $`\varphi _A^s(t)`$ that differ for different wave vectors. For the index $`A`$ the abbreviations ls, de, and ns for light scattering, dielectric loss, and neutron scattering, respectively, will be used. The specified two-component schematic model has been used earlier for data interpretation with the restriction to $`\mathrm{\Delta }=\mathrm{\Delta }_A^s=0`$. Depolarized-light-scattering spectra within the full GHz band have been described for glycerol for all temperatures above $`T_g`$ , and for ortho-terphenyl for $`T>T_c`$ . Rufflé et al. were the first to simultaneously describe glassy spectra for several probing variables $`A`$. Within the $`\beta `$-relaxation regime, they fitted coherent-neutron-scattering spectra for several wave vectors and also the longitudinal elastic modulus for $`\mathrm{Na}_{0.5}\mathrm{Li}_{0.5}\mathrm{PO}_3`$.
The single coupling constant $`v_A^s`$ determines all features of the structural-relaxation part of the second correlator. Thus, the $`\alpha `$-peak strengths, widths, and positions are correlated. These correlations follow the same pattern as found and explained for the $`\alpha `$ peaks of the hard-sphere system . Nevertheless, it is not obvious from the beginning, and thus truly remarkable, that such a simple model will be sufficient not only to explain the trends found in the data, but even to reproduce structural relaxation for PC quantitatively.
Equation (8) is equivalent to
$`\ddot{\varphi }(t)+(\mathrm{\Delta }+\nu )\dot{\varphi }(t)+(\mathrm{\Omega }^2+\mathrm{\Delta }\nu )\varphi (t)`$ (14)
$`+\mathrm{\Omega }^2{\displaystyle _0^t}m(tt^{})\left[\dot{\varphi }(t^{})+\mathrm{\Delta }\varphi (t^{})\right]𝑑t^{}=0,`$ (15)
to be solved with the initial conditions $`\varphi (t=0)=1`$, $`\dot{\varphi }(t=0)=-\mathrm{\Delta }`$. This equation, together with Eq. (9), is solved numerically with an algorithm similar to the one used in the preceding work for the case $`\mathrm{\Delta }=0`$. Equations (II) are treated in the same manner, but $`\varphi (t)`$ has to be used as input for Eq. (13). From the result for $`\varphi _A^s(t)`$, a Laplace transformation yields $`\varphi _A^s(z)`$. The fluctuation-dissipation theorem then determines the dynamical susceptibility $`\chi _A(z)`$ of variable $`A`$:
$$\chi _A(z)/\chi _A=z\varphi _A^s(z)+1.$$
(16)
Here, $`\chi _A\propto \langle A^2\rangle `$ is the thermodynamic susceptibility. In particular, the imaginary part of Eq. (16) determines the normalized susceptibility spectrum, $`\chi _A^{\prime \prime }(\omega )/\chi _A=\omega \varphi _A^{s\,\prime \prime }(\omega )`$, the quantity of main interest in the following. In our data analysis, $`\chi _A`$ enters as an additional fit parameter, which we treat, for the sake of simplicity, as a temperature-independent normalization constant.
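To make the numerical procedure concrete, here is a minimal sketch of a fixed-step integrator for Eqs. (14,15) together with the Fourier-cosine transform behind Eq. (16). It is an illustration only: the couplings (98% of the critical values), time step, and time window are hypothetical, and production runs use the decimation-type algorithm referred to above rather than this $`O(N^2)`$ scheme.

```python
import numpy as np

Omega, nu, Delta = 1.0, 0.0, 0.0                 # THz; nu = 0 as used for phi(t)
lam = 0.75
v1, v2 = 0.98*(2*lam - 1)/lam**2, 0.98/lam**2    # slightly on the liquid side

dt, nsteps = 0.01, 4000                          # ps; short window, illustrative
phi, dphi = np.zeros(nsteps + 1), np.zeros(nsteps + 1)
phi[0], dphi[0] = 1.0, -Delta                    # initial conditions of Eq. (14)
m = lambda p: v1*p + v2*p*p                      # mode-coupling functional, Eq. (9)

for n in range(nsteps):
    j = np.arange(n + 1)
    kern = m(phi[n - j])*(dphi[j] + Delta*phi[j])       # memory term of Eq. (14)
    conv = dt*(kern.sum() - 0.5*(kern[0] + kern[-1]))   # trapezoidal rule
    acc = -(Delta + nu)*dphi[n] - (Omega**2 + Delta*nu)*phi[n] - Omega**2*conv
    dphi[n + 1] = dphi[n] + dt*acc               # semi-implicit Euler step
    phi[n + 1] = phi[n] + dt*dphi[n + 1]

# normalized susceptibility spectrum, cf. Eq. (16); truncating phi(t) makes
# the low-frequency part unreliable in this toy calculation
t = np.arange(nsteps + 1)*dt
for w in (0.1, 1.0, 10.0):                       # frequencies in THz
    print(w, w*dt*np.sum(np.cos(w*t)*phi))
```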
## III Data analysis
### A Fits to the data
The results of our fits to the measured PC spectra are shown by the full lines in Figs. 1 and 2. Since one cannot expect the schematic model to provide a description of the microscopic band, the fits have been restricted to frequencies below $`500\,\mathrm{GHz}`$ for the light-scattering and neutron-scattering spectra. The fit range for the dielectric spectra could be extended up to $`1\,\mathrm{THz}`$. For the neutron-scattering data, a set of spectra for 3 representative $`q`$ vectors out of the 10 analyzed is shown. The analyzed $`q`$ range is $`0.5\,\mathrm{\AA }^{-1}\le q\le 1.4\,\mathrm{\AA }^{-1}`$; outside this range, experimentally accessible frequency windows become too small to gain meaningful information on MCT parameters. In Ref. , light-scattering spectra above $`T=250\,\mathrm{K}`$ have been published, but they show an apparent violation of $`\alpha `$ scaling. We were able to fit these curves with the same quality as the ones shown by assuming a slightly varying static susceptibility $`\chi _{\text{ls}}`$, which has the effect of shifting curves up and down in the log-log plot. These curves were omitted in Fig. 1 to avoid overcrowding.
All model parameters should be used as temperature-dependent fit parameters in our analysis. Within the studied temperature interval, there are no structural anomalies reported for PC. Thus, the fits are done with the constraint that the parameters drift smoothly and monotonically. In the following part of this section, the parameters used for the theoretical curves in Figs. 1 and 2 shall be discussed.
One experiences considerable flexibility in choosing the path $`(v_1(T),v_2(T))`$ followed by the coupling constants in the $`v_1`$-$`v_2`$ parameter plane for the interpretation of the data, as emphasized earlier . To arrive at an overview of the possibilities for fitting the many spectra, we started with a first step in which the path was varied but biased towards some smooth curve. Applying the general theory to Eq. (9), one derives the formula for the above-mentioned separation parameter $`\sigma `$,
$$\sigma =(1-f^c)\left[(v_1-v_1^c)f^c+(v_2-v_2^c){f^c}^2\right].$$
(17)
In our first step of the analysis, we also force $`v_1`$, $`v_2`$ to obey the asymptotic linear $`(T_c-T)`$ dependence of $`\sigma `$ cited above. In the second step, this latter restriction is eliminated and a free fit is started by examining small corrections to the result of the first step. The results obtained in this way also account for an inevitable uncertainty in the determination of the experimental temperatures. The fit yields $`T_c\approx 180\,\mathrm{K}`$ and $`\lambda \approx 0.75`$, corresponding to $`a\approx 0.30`$ and $`b\approx 0.56`$. The value for $`\lambda `$ is between the values reported in Refs. and and falls within the error bars of both. The linear interpolation of the found $`\sigma `$-versus-$`T`$ values gives $`\sigma =C(T_c-T)/T_c`$ with $`C\approx 0.069`$. The found distribution of $`(v_1,v_2)`$ points is shown in the upper part of Fig. 3. Upon lowering $`T`$, both $`v_1`$ and $`v_2`$ increase, which is consistent with the physical reasoning of the system’s mode-coupling coefficients becoming larger at lower temperatures. The lower diagram in Fig. 3 demonstrates that the asymptotic formula for $`\sigma `$ is well obeyed for $`150\,\mathrm{K}\le T\le 285\,\mathrm{K}`$. It should be stressed that the glass-transition line is just crossed by a regular drift, i. e. there is no accumulation of $`(v_1,v_2)`$ points close to it. This demonstrates how the critical phenomena predicted by the MCT originate from the mathematical structure of its equations of motion. In particular, the schematic model illustrates that within MCT no subtle $`q`$-interferences or hydrodynamic phenomena are responsible for the glass-transition dynamics.
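The linear $`\sigma `$ law can be made explicit with a toy path. Assuming, purely for illustration, that $`(v_1(T),v_2(T))`$ drift linearly through the critical point with a common relative rate $`k`$ (the actual fitted path is the one shown in Fig. 3), Eq. (17) gives $`\sigma `$ exactly proportional to $`(T_c-T)/T_c`$; the hypothetical choice $`k\approx 0.28`$ reproduces $`C\approx 0.069`$:

```python
lam, Tc, fc = 0.75, 180.0, 0.25            # fc = 1 - lambda, cf. Eq. (21) below
v1c, v2c = (2*lam - 1)/lam**2, 1/lam**2
k = 0.276                                   # hypothetical common drift rate
for T in (150.0, 170.0, 190.0, 210.0):
    x = (Tc - T)/Tc
    v1, v2 = v1c*(1 + k*x), v2c*(1 + k*x)   # linear path through (v1c, v2c)
    sigma = (1 - fc)*((v1 - v1c)*fc + (v2 - v2c)*fc**2)   # Eq. (17)
    print(T, round(sigma, 5), round(0.069*x, 5))          # columns coincide
```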
The fitted mode-coupling coefficients $`v_A^s(T)`$ for the light-scattering and dielectric data, and the corresponding coefficients $`v_{\text{ns}}^s(q,T)`$ for the neutron-scattering experiment are shown in Fig. 4. Again, we find monotonically increasing couplings with decreasing temperature. The coupling coefficients $`v_{\text{ns}}^s(q)`$ describing the incoherent-neutron-scattering data are decreasing with increasing $`q`$. This is equivalent to the plateau values $`f_q^{s,c}`$ decreasing with increasing $`q`$, which agrees qualitatively with the findings for incoherent-neutron-scattering results discussed within the microscopic MCT .
The parameters $`\mathrm{\Omega }_A^s`$, $`\nu _A^s`$ which specify the transient dynamics of $`\varphi _A^s`$ are shown in Fig. 5. The results from the neutron-scattering analysis reflect the behavior $`\mathrm{\Omega }_{\text{ns}}^s(q,T)\propto q\sqrt{T}`$ to a good approximation, which is in agreement with the result of the microscopic theory. But drawing more conclusions from the microscopic parameters would be over-interpreting the model. They are shown here mainly to demonstrate that no abnormal variations occur. We find much larger uncertainties for the microscopic fit parameters $`\mathrm{\Omega }`$, $`\nu `$, $`\mathrm{\Omega }_A^s`$, $`\nu _A^s`$ than for the parameters $`v_1`$, $`v_2`$, and $`v_A^s`$ ruling the structural-relaxation part of the spectra. In particular, it was possible to use the temperature-independent values $`\mathrm{\Omega }=1\,\mathrm{THz}`$ and $`\nu =0`$ for the parameters which specify the transient of the first correlator $`\varphi (t)`$.
The hopping coefficient $`\mathrm{\Delta }`$ in Eq. (8) determines the position of the susceptibility minimum below $`T_c`$. This minimum cannot be seen in the light-scattering data, and thus the chosen values are not unambiguously determined. The light-scattering spectra in the upper panel of Fig. 1 are fitted with the hopping parameter $`\mathrm{\Delta }_A^s`$ for the second correlator ignored: $`\mathrm{\Delta }_{\text{ls}}^s=0`$. The fits to the dielectric-loss spectra in the lower panel of Fig. 1 are done with a non-vanishing $`\mathrm{\Delta }_{\text{de}}^s`$. For the whole temperature range investigated, $`\mathrm{\Delta }(T)`$ can be assumed to follow an Arrhenius law, $`\mathrm{\Delta }(T)\propto \mathrm{exp}(-E_A/T)`$, as would be expected for thermally activated hopping over barriers. Figure 6 shows the values used for the fit. Although $`\mathrm{\Delta }`$ increases by an order of magnitude, the calculated curves for temperatures higher than $`190\,\mathrm{K}`$ show no influence of hopping effects on the spectra. This is demonstrated in Fig. 7. The irrelevance of the increasing hopping coefficients $`\mathrm{\Delta }_q`$ for temperatures increasing above $`T_c`$ can be understood on the basis of a discussion of the asymptotic formulas . It is the reason why the idealized theory can be used for data analysis for $`T`$ sufficiently larger than $`T_c`$. In the analyzed neutron-scattering experiment, the dynamical window and the studied temperature intervals are too small to investigate hopping effects, and therefore the curves in Fig. 2 are calculated with $`\mathrm{\Delta }_{\text{ns}}^s=0`$.
Above $`T_c`$, the spectra including hopping show deviations from the idealized ones only for small $`T-T_c`$. Below $`T_c`$, the crossover to the white-noise spectrum is suppressed, and a minimum occurs as hopping starts to be the dominant relaxation effect. Because of the insensitivity of the main body of the analyzed data to choices of $`\mathrm{\Delta }`$, the activation energy cannot be determined very precisely from the fit; the upper straight line in Fig. 6 corresponds to $`E_A=811\,\mathrm{K}`$. This value is in reasonable agreement with the one found in an earlier asymptotic analysis . Dielectric-loss spectra show hopping-induced minima at higher frequencies than the light-scattering spectra, and we have accounted for this by introducing a second hopping parameter $`\mathrm{\Delta }_{\text{de}}^s`$ there. In a similar way, $`(\mathrm{\Omega }_{\text{de}}^s/\mathrm{\Omega })^2\mathrm{\Delta }_{\text{de}}^s(T)`$ follows an Arrhenius law and has no influence on the spectra above $`T_c`$; this second hopping term has already been included in the comparison studied in Fig. 7. Here, the activation energy is of the order of $`2000\,\mathrm{K}`$, which makes the result more striking, since $`(\mathrm{\Omega }_{\text{de}}^s/\mathrm{\Omega })^2\mathrm{\Delta }_{\text{de}}^s`$ is allowed to vary over three orders of magnitude. In both cases, the activation energies as well as the prefactors are of reasonable magnitude. It should be stressed that, although the treatment of hopping by a frequency-independent $`\mathrm{\Delta }`$ is rather crude, the frequency range in which the schematic model gives a good fit to the experimental data is enlarged by about one decade for $`T<T_c`$ relative to the fit interval that can be treated by the idealized-MCT model.
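As a consistency check on the quoted numbers, the Arrhenius law with $`E_A=811\,\mathrm{K}`$ indeed spans roughly one decade across the fitted temperature range; a short sketch (the temperature grid is chosen for illustration):

```python
import numpy as np

E_A = 811.0                              # K, from the upper line in Fig. 6
T = np.array([150.0, 180.0, 210.0, 250.0, 290.0])
print(np.round(np.exp(-E_A/T)/np.exp(-E_A/T[0]), 1))  # ~1 ... ~14: one decade
```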
In the measurements of the dielectric functions, information on both the imaginary and the real part of $`\epsilon (\omega )=\epsilon ^{\prime }(\omega )+i\epsilon ^{\prime \prime }(\omega )`$ has been obtained . The fit to the $`\epsilon ^{\prime \prime }`$ data shown above was performed using $`\epsilon ^{\prime \prime }(\omega )=4\pi \chi _{\text{de}}^{\prime \prime }(\omega )=4\pi \chi _{\text{de}}\omega \varphi _{\text{de}}^{s\,\prime \prime }(\omega )`$, thus obtaining the proportionality factor $`\epsilon _0=4\pi \chi _{\text{de}}`$ as a by-product. Then, the real part is given by $`\epsilon ^{\prime }(\omega )-\widehat{\epsilon }=\epsilon _0(1+\omega \varphi _{\text{de}}^{s\,\prime }(\omega ))`$. The new parameter $`\widehat{\epsilon }`$ has to be determined by shifting the curves, and it can differ from $`\epsilon _{\infty }=1`$ in both directions: The liquid exhibits microscopic oscillations, which contribute to $`\epsilon ^{\prime }(\omega )`$ as some shift $`\mathrm{\Delta }\epsilon _{\text{micr.}}^{\text{exp}}`$ with respect to $`\epsilon _{\infty }=1`$ for the structural part of the response function. The schematic model uses a single damped oscillator, giving some $`\mathrm{\Delta }\epsilon _{\text{micr.}}^{\text{fit}}`$, which may be either too small or too large. Depending on the temperature, we find values of $`\widehat{\epsilon }=\epsilon _{\infty }+\left(\mathrm{\Delta }\epsilon _{\text{micr.}}^{\text{exp}}-\mathrm{\Delta }\epsilon _{\text{micr.}}^{\text{fit}}\right)`$ between $`-3`$ and $`1`$, which are of reasonable magnitude. Figure 8 shows the result of testing our fit against the accordingly shifted real part of the measured dielectric function. It is clear from the theory that the real and imaginary parts of the calculated curves are connected by Kramers-Kronig relations. But for the experiment, both quantities have to be regarded as almost independent data sets, since the measurements are restricted to a finite frequency range. Thus, Fig. 8 provides more than just a different view of the fit shown in Fig. 1, and it is an important point that the real-part data can be fitted with the schematic model as well, introducing only one additional fit parameter $`\widehat{\epsilon }`$. In the minimum region of the spectra, we find this to be confirmed, and for higher $`T`$, the $`\alpha `$-relaxation step can be described by the schematic model, too. The discrepancies for the $`\alpha `$ peak in the glass are the analogue of what can be seen in the $`\epsilon ^{\prime \prime }`$ fit. Similar observations hold for the high-frequency dynamics, where one has to notice in addition that the experimental error bars are relatively large for frequencies above $`300\,\mathrm{GHz}`$. A slightly better fit of the $`\epsilon ^{\prime }`$ data could have been achieved by allowing the static susceptibility $`\chi _{\text{de}}`$ to vary with temperature. This possibility is not examined here, since the shift is only small, and since we do not want to introduce assumptions on the $`T`$-dependence of the static quantity $`\epsilon _0`$.
### B Summary of the Data Analysis
Glass-forming liquids exhibit temperature-sensitive spectra for frequencies well below the band of microscopic excitations. These precursors of the glass transition are referred to as structural-relaxation spectra. The full lines in Figs. 1 and 2 demonstrate that the evolution of structural relaxation of PC, including the crossover to the microscopic regime, is described well by a schematic MCT model. The description holds for all spectra obtained by the depolarized-light-scattering spectrometer; in this case it deals with the three-decade dynamical window between $`0.3`$ and $`500\,\mathrm{GHz}`$, and it accounts for the change of the spectral intensity by a factor of $`10^3`$ if the temperature is shifted between the glass transition $`T_g`$ and $`30\,\mathrm{K}`$ above the melting temperature $`T_m`$. It accounts for the measured $`\alpha `$-peak-maximum shift by a factor of $`10`$ if $`T`$ is changed by $`30\,\mathrm{K}`$. A similar statement holds for the description of the dielectric-loss spectra, where the $`\alpha `$-peak shift from $`40\,\mathrm{GHz}`$ down to $`0.02\,\mathrm{GHz}`$ is described. This shift is caused by a temperature decrease from $`293\,\mathrm{K}=T_m+75\,\mathrm{K}`$ to $`243\,\mathrm{K}`$.
Between the $`\alpha `$ peak and the vibrational excitation peak near $`1\,\mathrm{THz}`$, the susceptibility spectra in Figs. 1 and 2 exhibit a minimum at some frequency $`\omega _{\text{min}}`$. It shifts to smaller frequencies as the temperature is lowered, but less than the $`\alpha `$-peak position. Its intensity $`\chi _{\text{min}}=\chi ^{\prime \prime }(\omega _{\text{min}})`$ exceeds the white-noise spectrum one would expect for the dynamics of normal liquids by more than two orders of magnitude. Such white noise would yield susceptibility spectra varying linearly with frequency, $`\chi _{\text{wh.n.}}^{\prime \prime }(\omega )\propto \omega `$, as is indicated by the dashed lines in Fig. 1. These anomalous minima are also treated properly by the model.
Neutron-scattering data are available for a series of wave vectors $`q`$, and hence the dynamics is probed on various length scales. The $`q`$ dependence is described in the schematic model by that of the coupling coefficient $`v_{\text{ns}}^s(q)`$. The data description in Fig. 2 is possible using a $`q`$ dependence in qualitative agreement with the results expected from the microscopic theory of simple systems.
It appears nontrivial that the schematic model used can deal with the spectra mentioned for PC. The success of the fits indicates that the studied glassy dynamics is rather insensitive to microscopic details of the system. Apparently the evolution of glassy dynamics within the GHz window reflects, above all, only quite general features of the nonlinear-interaction effects, which can also be modelled by simple truncations of the full microscopic theory. These conclusions require some reservation. The explanation of the PC data by the model used is based on the choice of the model parameters, in particular on the choice of the drift of all parameters with changes of temperature, which is documented in Figs. 3-6. Only a full microscopic theory can show whether or not the chosen parameters are in accord with the fundamental microscopic laws.
Furthermore, it has to be emphasized that the studied model cannot reproduce the spectra for frequencies below $`1\,\mathrm{GHz}`$ if the temperature is below the critical value $`T_c`$. Such spectra can be measured accurately using dielectric-loss spectroscopy, and the lower panel of Fig. 1 exhibits some of these data for $`T=173\,\mathrm{K}`$ and $`T=183\,\mathrm{K}`$. The lack of success of our work in handling these spectra is clearly connected with the improper treatment of hopping processes. It remains unclear at present whether this is due to the stochastic approximation, $`N_q(z)=i\mathrm{\Delta }_q`$, or due to restricting ourselves to a one-component schematic model, or whether the whole extension of MCT to a theory including hopping transport is inadequate.
## IV Some asymptotic formulas
Let us list some of the asymptotic results for the studied MCT model which will be needed below in Sec. V. These results are obtained by straightforward specialization of the general formulas discussed in Ref. . We will focus on the $`\beta `$-relaxation regime for $`T\approx T_c`$, with hopping effects neglected. A comprehensive discussion of the asymptotic results can be found in Ref. .
From the full MCT equations (II), a leading-order expansion in $`\sqrt{|\sigma |}`$ gives rise to the asymptotic predictions for the intermediate-time window of the $`\beta `$ relaxation. A central result is the factorization theorem, $`\varphi _q(t)-f_q^c=h_qG(t)`$, where the so-called $`\beta `$ correlator $`G(t)`$ is independent of $`q`$. This result still holds, in the generic case, for the tagged-particle density-fluctuation correlator or the correlator dealing with light scattering or dielectric response: $`\varphi _A^s(t)=f_A^{s,c}+h_A^sG(t)`$, with the same $`G(t)`$ as above. The Fourier-cosine transform of $`G(t)`$ is called the $`\beta `$ spectrum $`G^{\prime \prime }(\omega )`$. One gets for the normalized susceptibility spectra
$$\chi _x^{\prime \prime }(\omega )=\omega \varphi _x^{\prime \prime }(\omega )=h_x\chi ^{\prime \prime }(\omega ),$$
(18)
where $`\chi ^{\prime \prime }(\omega )=\omega G^{\prime \prime }(\omega )`$ is called the $`\beta `$-susceptibility spectrum. Here, the index $`x`$ denotes either the wave-vector modulus $`q`$, or $`x=(s,A)`$. The function $`G`$ depends on $`t/t_0`$, $`\sigma `$, and $`\delta `$ only: it is uniquely determined by the exponent parameter $`\lambda `$ as the solution of the equation
$$\sigma -\delta t+\lambda (G(t))^2=\frac{d}{dt}\int _0^tG(t-t^{\prime })G(t^{\prime })dt^{\prime },$$
(19)
to be solved with the initial condition $`G(t\to 0)=(t/t_0)^{-a}`$. The so-called hopping parameter $`\delta `$ has to be calculated from $`\mathrm{\Delta }_q`$, and for the studied model it reads
$$\delta =\mathrm{\Delta }\,{f^c}^2/(1-f^c).$$
(20)
In this context, the numbers $`\mathrm{\Delta }_A^s`$ only enter as corrections to scaling.
The plateau values $`f_x^c`$, and the critical amplitudes $`h_x`$ can be calculated from the mode-coupling functionals. In the case of the schematic model studied, the values for the first correlator are given by $`\lambda `$:
$$f^c=1-\lambda ,\qquad h=(1-f^c).$$
(21)
The relation between the exponent parameter $`\lambda `$ and the $`\alpha `$-peak strength $`f^c`$ is one of the non-generic features of that model. For the second correlator, the plateau value and critical amplitude read
$$f_A^{s,c}=1-\frac{1}{v_A^sf^c},\qquad h_A^s=\frac{1-f^c}{v_A^s{f^c}^2}.$$
(22)
Changing $`v_A^s`$, the $`\alpha `$-peak strength $`f_A^{s,c}`$ can be varied. Again, these equations establish a non-generic relation between the $`f_A^{s,c}`$ and the $`h_A^s`$. In our fits to the neutron-scattering data, a $`q`$ dependence of $`f_{\text{ns}}^{s,c}`$ and $`h_{\text{ns}}^s`$ can arise only through a $`q`$-dependence of the $`v_{\text{ns}}^s`$.
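A short sketch of the model relations Eqs. (21,22), illustrating that a large coupling $`v_A^s`$ pushes the plateau $`f_A^{s,c}`$ towards unity while shrinking the amplitude $`h_A^s`$ (the coupling values below are illustrative, not fitted):

```python
lam = 0.75
fc, h = 1 - lam, lam                 # Eq. (21): f^c = 1 - lambda, h = lambda

def probe_params(vA):                # Eq. (22) for the second correlator
    return 1 - 1/(vA*fc), (1 - fc)/(vA*fc**2)

for vA in (20.0, 45.0, 80.0):        # illustrative couplings
    fA, hA = probe_params(vA)
    print(vA, round(fA, 3), round(hA, 3))
# strong coupling: f_A^{s,c} > 0.9 with small h_A^s -- the regime found for
# the light-scattering and dielectric fits
```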
From Eq. (19) one identifies for the case $`\delta =0`$ the time scale for the $`\beta `$ relaxation: $`t_\sigma =t_0|\sigma |^{-1/(2a)}`$. Going over to rescaled times, $`\widehat{t}=t/t_\sigma `$, and rescaled frequencies, $`\widehat{\omega }=\omega t_\sigma `$, one gets from Eq. (18) the scaling law for the $`\beta `$-susceptibility spectra
$$\chi _x^{\prime \prime }(\omega )=h_xc_\sigma \widehat{\chi }(\widehat{\omega }),$$
(23)
where $`c_\sigma =\sqrt{|\sigma |}`$. The master spectrum $`\widehat{\chi }`$ is $`\sigma `$-independent. It is fixed through the exponent parameter $`\lambda `$, and thus through the static structure alone. For large rescaled frequencies, $`\widehat{\omega }\gg 1`$, one obtains the critical-power-law spectrum. This extends to all frequencies as $`\sigma \to 0`$:
$$\chi _x^{\prime \prime }(\omega )=h_x\mathrm{sin}(\pi a/2)\mathrm{\Gamma }(1-a)(\omega t_0)^a,\qquad T=T_c.$$
(24)
For small rescaled frequencies, one gets the von Schweidler law for $`\sigma <0`$, $`\widehat{\chi }(\widehat{\omega }\ll 1)\propto 1/\widehat{\omega }^b`$, and thus $`\widehat{\chi }`$ exhibits a minimum at some frequency $`\widehat{\omega }_{\text{min}}`$ with $`\widehat{\chi }_{\text{min}}=\widehat{\chi }(\widehat{\omega }_{\text{min}})`$. Due to the scaling law, Eq. (23), the variation of the spectral minima with temperature is, in the asymptotic region, given by
$$\omega _{\text{min}}=\widehat{\omega }_{\text{min}}/t_\sigma ,\chi _{\text{min}}=\widehat{\chi }_{\text{min}}c_\sigma ,\sigma <0.$$
(25)
The point $`(\widehat{\omega }_{\text{min}},\widehat{\chi }_{\text{min}})`$ is completely fixed by $`\lambda `$, and for $`\lambda =0.75`$ one gets: $`\widehat{\omega }_{\text{min}}=1.733`$, $`\widehat{\chi }_{\text{min}}=1.221`$.
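Combining $`t_\sigma =t_0|\sigma |^{-1/(2a)}`$ with the numbers quoted above ($`t_0=0.035\,\mathrm{ps}`$, $`C\approx 0.069`$, $`a\approx 0.30`$, $`\widehat{\omega }_{\text{min}}=1.733`$, $`\widehat{\chi }_{\text{min}}=1.221`$), a few lines suffice to sketch the asymptotic drift of the minimum with temperature; frequencies come out in inverse picoseconds, i.e. THz, and the temperature grid is chosen for illustration:

```python
import numpy as np

a, t0, C, Tc = 0.30, 0.035, 0.069, 180.0     # ps and K, from the fits above
w_hat, chi_hat = 1.733, 1.221                # lambda = 0.75 values, Eq. (25)
for T in (185.0, 190.0, 200.0, 210.0):
    sigma = C*(Tc - T)/Tc                    # negative on the liquid side
    t_sigma = t0*abs(sigma)**(-1/(2*a))      # beta time scale in ps
    print(T, w_hat/t_sigma, chi_hat*np.sqrt(abs(sigma)))
```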
On the glass side, $`\sigma >0`$, the idealized theory yields a constant for the $`\beta `$ correlator at large rescaled times, $`G(\widehat{t}\gg 1)=1/\sqrt{1-\lambda }`$. Thus the signature of the MCT-fold bifurcation is a $`\sqrt{T_c-T}`$ anomaly of the nonergodicity parameters $`f_x=\varphi _x(t\to \infty )`$:
$$f_x(T)=f_x^c+h_x\sqrt{\sigma /(1-\lambda )},\qquad T<T_c.$$
(26)
If the correlators deal with density fluctuations or tagged-particle densities, the quantity $`f_x`$ is the Debye-Waller factor or Lamb-Mößbauer factor, respectively. For $`\sigma <0`$, corresponding to $`T>T_c`$, the long-time limits of the correlators vanish, as is the case for $`T<T_c`$ but $`\delta \ne 0`$. But if $`\sigma `$ and $`\delta `$ are sufficiently small, the correlators still exhibit plateaus for times exceeding the transient scale $`t_0`$ before the decay towards zero sets in. The heights of these plateaus are given by $`f_x`$ for $`T<T_c`$, and by $`f_x^c`$ for $`T>T_c`$; they are then called effective nonergodicity parameters. The decay from the plateau is the $`\alpha `$ process, and thus the strength of the $`\alpha `$ peak in the susceptibility spectra is given by $`f_x`$. This also corresponds to the height of the relaxation step exhibited by the real part of the susceptibility when the frequency is shifted through the $`\alpha `$-peak window.
The preceding Eqs. (18-26) establish universality features of MCT. They provide the basis of a general explanation of the glassy MCT dynamics by means of features of the spectra not depending on the specific microscopic properties of a given system.
## V Scaling law analysis
In this section it shall be studied how well the MCT solutions calculated above can be described by the MCT-$`\beta `$-relaxation-scaling laws summarized in the preceding section. It has been demonstrated earlier that the range of validity of these equations can be analyzed by evaluating the next-to-leading-order corrections. Here, we will study the combined effect of all corrections due to structural relaxation as well as to vibrational transient dynamics. Only the solutions referring to the parameter sets used in Figs. 1 and 2 will be discussed. Thus the following analysis refers to control parameters and dynamical windows representative of state-of-the-art experimental studies of the evolution of glassy dynamics.
### A The critical decay
Solving the equations of motion for $`T=T_c`$, $`\mathrm{\Delta }=0`$, and $`\mathrm{\Delta }^s=0`$ for times up to $`10^{15}\,\mathrm{ps}`$, the critical power law, Eq. (5), was identified. The common time scale was determined to be $`t_0=0.035\,\mathrm{ps}`$. The leading-order result $`\widehat{\varphi }_x(t)=(t/t_0)^{-a}`$, where $`\widehat{\varphi }_x(t)=(\varphi _x(t)-f_x^c)/h_x`$, is shown in the double-logarithmic representation of Fig. 9 by straight dash-dotted lines with slope $`-a`$. For the two temperatures closest to $`T_c`$, the full lines in this diagram exhibit the solutions $`\widehat{\varphi }_x(t)`$. Dashed lines demonstrate the corresponding $`\beta `$ correlators $`G(t)`$, determined from Eqs. (17,19,20). The approach of the first correlator $`\varphi (t)`$ towards the plateau $`f^c`$ is well described by the scaling law for $`T=180\,\mathrm{K}`$ and $`190\,\mathrm{K}`$. For $`T=180\,\mathrm{K}`$ the critical power law is exhibited within a $`1.5`$-decade time window for times exceeding $`t_c`$ with $`t_c/t_0\approx 300`$, while for $`t<t_c`$ the vibrational transient dynamics masks the structural relaxation. In this case, the validity of the critical power law for larger times is restricted by the onset of hopping effects. Hopping plays no significant role for the $`\beta `$ relaxation at $`T=190\,\mathrm{K}`$ (compare Fig. 7). But there, the deviations of $`G(t)`$ from the short-time limit $`(t/t_0)^{-a}`$ set in already for $`t<t_c`$. Thus this power law cannot be identified anymore for distance parameters $`\epsilon =(T_c-T)/T_c`$ with $`|\epsilon |\gtrsim 0.06`$. This scenario is in semiquantitative agreement with the one discussed in Ref. for the density correlators of a hard-sphere system. Let us reiterate that the correlator $`\varphi (t)`$ drives the glass transition for the studied model, but that it is not the quantity measured.
The two lower sets of curves in Fig. 9 show that the decrease of $`\varphi _{\text{ls}}^s(t)`$ and $`\varphi _{\text{de}}^s(t)`$ towards their plateaus $`f_{\text{ls}}^{s,c}`$ and $`f_{\text{de}}^{s,c}`$, respectively, is described qualitatively by the dashed lines, i. e. by the scaling laws. However, there are remarkable quantitative deviations between the solutions $`\widehat{\varphi }_A^s(t)`$ and their asymptotic form $`G(t)`$. These appear as if the amplitude experiences some offset. The reason is that the transient dynamics influences the correlators $`\varphi _A^s(t)`$ also for times which exceed $`t_c`$ by up to two orders of magnitude. This means that the dynamics of the two probing variables is strongly influenced by oscillations within the very window where the driving correlator $`\varphi (t)`$ exhibits the $`t^{-a}`$ law. Therefore the power law $`\widehat{\varphi }_A^s(t)=(t/t_0)^{-a}`$ cannot be identified accurately in the curves shown for $`A=\text{ls}`$ and $`A=\text{de}`$. This is also demonstrated by the straight dash-dotted line in the upper panel of Fig. 1, which represents the asymptotic low-frequency-susceptibility spectrum at the critical point, Eq. (24).
Within the $`1.5`$-decade window where the $`\mathrm{log}\widehat{\varphi }`$-versus-$`\mathrm{log}(t/t_0)`$ curve for $`180\,\mathrm{K}`$ in Fig. 9 demonstrates the critical-decay asymptote, the graphs of $`\mathrm{log}\widehat{\varphi }_{\text{ls}}^s`$ and $`\mathrm{log}\widehat{\varphi }_{\text{de}}^s`$ versus $`\mathrm{log}(t/t_0)`$ for $`180\,\mathrm{K}`$ and $`183\,\mathrm{K}`$, respectively, also appear as nearly straight lines, so that they can be described very well in this window by some effective power law. One thus expects an effective power-law spectrum which is described by Eq. (24), but with $`a`$ and $`h_A`$ replaced by some $`a^{\text{eff}}`$ and $`h_A^{\text{eff}}`$, respectively. This phenomenon was also observed for the susceptibility spectra of the hard-sphere system . For the light-scattering result, one infers $`a^{\text{eff}}<a`$ and $`h_{\text{ls}}^{\text{eff}}<h_{\text{ls}}^s`$. The dotted line in the upper panel of Fig. 1 corroborates this conclusion. It exhibits the solution for the model evaluated for $`T=T_c`$ with hopping effects ignored. This line can be fitted well between $`10^{-5}\,\mathrm{THz}`$ and $`10^{-3}\,\mathrm{THz}`$ by an effective power law following Eq. (24) with $`a^{\text{eff}}/a\approx 0.92`$ and $`h^{\text{eff}}/h\approx 0.7`$. The crossover from this effective power law to the asymptotic critical law, Eq. (24), occurs only at frequencies around $`1\,\mathrm{MHz}`$.
### B The non-ergodicity-parameter anomaly
Figure 10 shows effective nonergodicity parameters of the three correlators underlying the curves in Fig. 1, which were determined from the plateau heights of the $`\varphi _x(t)`$-versus-$`\mathrm{log}t`$ diagrams. The crosses in the lower panel show the values deduced in Ref. from the step size of the measured real part of the dielectric function, divided by the value of $`\epsilon _0`$ assumed in the fit to the susceptibility spectra. Figs. 1 and 8 demonstrate that the present model describes the dielectric function of PC reasonably, and so it is not surprising that the calculated values (dots) reproduce the measured ones (crosses) reasonably well. The discrepancies between dots and crosses are anticipated to be mainly due to difficulties in determining the step size accurately in the experiment, where one carefully has to eliminate contributions from the $`\beta `$ relaxation.
Full lines in Fig. 10 exhibit the asymptotic laws, i. e. the values $`f_x`$ from Eq. (26) for $`T<T_c`$ and the constant $`f_x^c`$ for $`T\ge T_c`$. Figure 10(a) demonstrates that the $`60\%`$ variation of the effective non-ergodicity parameter $`f`$ of the first correlator is described well by the asymptotic formula. This holds for temperatures down to $`T_g`$. On the other hand, the results for the light scattering and for the dielectric response do not exhibit the asymptotic behavior; there is no evidence at all for the $`\sqrt{T_c-T}`$ anomaly to be noticed in the data. There are two reasons for this finding. The obvious one reflects the large size of $`f_A^{s,c}`$, i. e. it results from the observation that the $`\alpha `$ peaks of the susceptibility dominate over the remaining susceptibility spectrum (compare Fig. 1). Equation (26) for the probing-variable correlator is equivalent to
$$(1-f_A^s)=(1-f_A^{s,c})-h_A^s\sqrt{\sigma /(1-\lambda )}.$$
(28)
Since $`(1-f_A^s)`$ and $`h_A^s`$ are positive and $`(1-f_A^{s,c})`$ is less than $`0.1`$ for the two correlators discussed, the whole $`\sqrt{T_c-T}`$ effect is below $`10\%`$. Therefore it is difficult to separate the $`\sqrt{T_c-T}`$ anomaly from the scatter of the data. The less obvious reason results from the smooth but appreciable temperature drift found for the coupling coefficient $`v_A^s`$ (compare Fig. 4). This coupling determines the non-ergodicity parameters of the second correlator of the schematic model in terms of the parameter $`f`$: $`(1-f_A^s)=1/(v_A^sf)`$. The square-root singularity is due to the one in $`f`$, and expanding $`(1/f)`$ one reproduces Eq. (28), but with effective terms
$`(1-f_A^{s,c})^{\text{eff}}=R_A(1-f_A^{s,c}),\qquad h_A^{\text{eff}}=R_Ah_A^s,`$ (29)
$`R_A(T)=v_A^{s,c}/v_A^s(T).`$ (30)
Replacing the renormalization coefficient $`R_A(T)`$ by its value at the critical point, $`R_A^c=1`$, one reproduces the leading-order result, Eq. (26). However, within the temperature interval considered, the smooth drift of $`(1-f_A^{s,c})^{\text{eff}}`$ overwhelms the small variation of the $`\sqrt{T_c-T}`$ term. This is demonstrated in Figs. 10(b,c) by the dashed lines. The numerically found circles are well described by this line. One concludes that the drifting coupling coefficient $`v_A^s(T)`$ is responsible for the deviations from the leading-order asymptotics. Unfortunately, the $`v_A^s(T)`$ are not available directly from experiments.
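The masking effect is easy to reproduce. The sketch below combines Eq. (26) for $`f`$ with the model relation $`(1-f_A^s)=1/(v_A^sf)`$, using a hypothetical smooth linear drift for $`v_A^s(T)`$ (both the critical coupling and the drift rate are illustrative); the resulting plateau $`f_A^s(T)`$ drifts monotonically with hardly a visible kink at $`T_c`$:

```python
import numpy as np

lam, Tc, C = 0.75, 180.0, 0.069
fc, h = 1 - lam, lam
vc = 45.0                                    # illustrative critical coupling
for T in np.arange(150.0, 211.0, 10.0):
    sigma = C*(Tc - T)/Tc
    if T < Tc:
        f = fc + h*np.sqrt(sigma/(1 - lam))  # Eq. (26), first correlator
    else:
        f = fc                               # effective plateau above Tc
    vA = vc*(1 + 0.004*(Tc - T))             # hypothetical smooth drift
    print(T, round(1 - 1/(vA*f), 4))         # probe plateau f_A^s(T)
```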
### C Scaling of the $`\beta `$-relaxation minima
Figure 11 shows the susceptibility master spectrum $`\widehat{\chi }`$ for $`\lambda =0.75`$ and $`\delta =0`$ as dashed curves. The upper set of solid lines in this figure are the spectra of the first correlator rescaled, according to Eq. (23), as $`\omega \varphi ^{\prime \prime }(\omega )/(\sqrt{|\sigma |}h)`$. Asymptotic validity of scaling is demonstrated: the window of rescaled frequencies $`\widehat{\omega }=\omega t_\sigma `$, for which the rescaled spectra are close to the master spectrum $`\widehat{\chi }`$, expands with decreasing $`(T-T_c)`$. Convincing agreement between $`\widehat{\chi }`$ and the $`180\,\mathrm{K}`$ result can be found as long as hopping effects are ignored. For higher temperatures, where $`|\epsilon |=|T-T_c|/T_c\gtrsim 0.06`$, strong deviations are found. The $`T=210\,\mathrm{K}`$ spectrum, for which $`|\epsilon |=0.17`$, does not even show a minimum. The demonstrated deviations from the scaling laws are similar to what was explained in Ref. for the MCT solutions for the hard-sphere system.
Preasymptotic-correction effects for the variables discussed for PC in Figs. 1 and 2 differ from those for the auxiliary correlator $`\varphi `$. This is demonstrated in the lower part of Fig. 11 for $`\epsilon =-0.17`$. Deviations of the rescaled spectra $`\chi _A^{\prime \prime }(\omega )/(\sqrt{|\sigma |}h_A)`$ from the master spectrum $`\widehat{\chi }(\omega t_\sigma )`$ are larger for the dielectric loss than for the light scattering, and the latter are larger than those for the neutron-scattering results. While the predicted probe independence of $`\chi _A^{\prime \prime }(\omega )/h_A`$ holds rather well for $`\omega <\omega _{\text{min}}`$, deviations from the factorization theorem are observed mainly for higher frequencies. As discussed above in connection with Eq. (28), the large size of $`f_A^{s,c}`$ leaves only a $`10\%`$ decay of the correlator from the initial value unity to the plateau, and this decay is influenced by vibrational motion. This leads to the strong disturbances of the susceptibility spectra for $`\omega >\omega _{\text{min}}`$. For the neutron-scattering data for intermediate wave vectors, this problem is not so severe, since the critical Lamb-Mößbauer factor $`f_q^c`$ decreases with increasing $`q`$. Therefore the shape of the susceptibility minimum exhibited by the two neutron-scattering results shown in Fig. 11 is closer to the one of the master spectrum.
The $`q`$ dependence of the critical amplitude $`h_q^s`$ for the incoherent-neutron-scattering spectra has been measured as a byproduct of the test of the factorization theorem, $`\varphi _q^{s\,\prime \prime }(\omega )\approx h_q^s\chi ^{\prime \prime }(\omega )/\omega `$. A linear law, $`h_q^s\propto q`$, was found within the studied wave-vector interval . Such a strictly linear law is not compatible with the microscopic MCT, which predicts that the $`h_q`$-versus-$`q`$ graph exhibits a broad asymmetric peak near the position $`q_{\text{max}}`$ of the first sharp diffraction peak of the structure factor. For small $`q`$, the critical amplitude increases regularly as $`h_q\propto q^2+𝒪(q^4)`$ , and thus $`h_q`$ exhibits an inflection point for some $`q<q_{\text{max}}`$. Whether the found linear $`q`$ dependence of the experimental values is due to multiple-scattering effects is not clear . Our schematic-model fit, however, suggests that, even if $`h_q^s`$ can be approximated by a linear law, the $`q`$ dependence is not strictly linear but rather given by some intermediate crossover around the inflection point. Equation (22) relates $`h_q^s`$ to the inverse of $`v_q^s`$; thus a strictly linear law for $`h_q^s`$ would imply $`v_q^s\propto 1/q`$. Such a result is added as a dash-dotted line in Fig. 4b, and it shows that this is not consistent with our data analysis. An ad-hoc expression reflecting the crossover from the small-$`q`$ asymptote through the inflection point is $`h_q^s\propto q^2/\left[1+(q/q^{\ast })\right]`$. The resulting expression for $`v_q^s`$ is added in Fig. 4b as dashed lines for two temperatures, and it provides a reasonable interpolation of the found fit parameters. One would need measurements for wave vectors of the order of $`0.2\,\mathrm{\AA }^{-1}`$ and less in order to test for the $`q^2`$ behavior at small $`q`$ predicted by the microscopic MCT.
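For illustration, the ad-hoc crossover form and the coupling it implies via Eq. (22) can be tabulated over the analyzed wave-vector range; the crossover wave vector $`q^{\ast }`$ below is hypothetical, and only the $`q`$ dependence, not the absolute scale, of $`v_q^s`$ is meaningful:

```python
import numpy as np

lam = 0.75
fc = 1 - lam
q = np.linspace(0.5, 1.4, 10)       # analysed wave-vector range (1/Angstrom)
qstar = 0.9                          # hypothetical crossover wave vector
h_q = q**2/(1 + q/qstar)             # ad-hoc interpolation quoted above
v_q = (1 - fc)/(h_q*fc**2)           # inverting Eq. (22)
print(np.round(v_q/v_q[0], 3))       # decreases with q, cf. Fig. 4b
```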
There is a most bothersome preasymptotic-correction effect which can be seen in the lower part of Fig. 11: the positions $`\omega _{\text{min}}t_\sigma `$ of the susceptibility minima are not identical, and they are all larger than the asymptotic value $`\widehat{\omega }_{\text{min}}`$, which is shown by a diamond. Since the bands of microscopic excitations of the correlator spectra $`\varphi _A^{s\,\prime \prime }(\omega )`$ are located at much lower frequencies than that of the spectrum $`\varphi ^{\prime \prime }(\omega )`$, the spectra of the test variables cross over too quickly to the transient to be able to develop the universal relaxation pattern for $`\omega >\omega _{\text{min}}`$. As a result, $`\omega _{\text{min}}`$ gets an offset to larger frequencies. Figure 12 exhibits this result as a rectification diagram. The asymptotic result is shown as a full straight line: $`\omega _{\text{min}}^{2a}=(\widehat{\omega }_{\text{min}}/t_0)^{2a}|\sigma |=\widehat{C}(T-T_c)`$ with the constant $`\widehat{C}=(\widehat{\omega }_{\text{min}}/t_0)^{2a}C/T_c\approx 0.004`$. The positions of the observed minima can still be interpolated reasonably by straight lines, shown as dashed lines. However, the slopes of the dashed lines differ from those of the asymptotic line. In clear violation of the asymptotic factorization theorem, the lines for different probing variables are different. The linear interpolations lead to intersections with the abscissa which differ somewhat from the correct value of $`T_c`$. For the neutron-scattering data, this interpolation has been omitted in Fig. 12, since the error bars obtained by $`q`$-averaging do not allow for a well-determined estimate here.
There is also a strong temperature-dependent offset of the amplitude scale relative to the scaling-law prediction. This is demonstrated in Fig. 13 for the light-scattering spectra. All four rescaled minima $`\chi _{\text{min}}/(\sqrt{|\sigma |}h_{\text{ls}})`$ are far below $`\widehat{\chi }_{\text{min}}`$, which is indicated by a diamond. Moreover, with decreasing $`(T-T_c)`$, the discrepancy between rescaled curves and expected asymptote does not decrease; rather it increases. Such behavior is not anticipated from the leading-order corrections to the scaling laws , but it can be explained as a higher-order effect because of the important role played by the temperature dependence of the coupling coefficient $`v_A^s`$. This drift can be eliminated by introducing an effective amplitude $`h_{\text{ls}}^{\text{eff}}`$, as discussed above in connection with Eq. (29). The result is given by the upper set of curves in Fig. 13. Indeed, the discrepancies between asymptotics and rescaled curves are reduced, and they now decrease with decreasing $`(T-T_c)`$. But even for $`T=190\,\mathrm{K}`$, i. e. for $`|\epsilon |=0.06`$, there is a considerable offset of the minimum intensity from the scaling result. The $`180\,\mathrm{K}`$ curve demonstrates the approach towards the asymptotic limit that would occur were hopping absent. There still is a clear deviation between the rescaled spectrum and the scaling-law result, which increases with increasing $`\omega `$ for $`\omega >\omega _{\text{min}}`$. But the sign and size of this effect are similar to what was found for the hard-sphere system for wave vectors yielding a plateau $`f_q^c`$ as large as $`f_{\text{ls}}^c`$ .
### D Summary of the Scaling-Law Analysis
It is, of course, more satisfactory to interpret data for glassy dynamics with the set of universal formulas provided by MCT for the asymptotic dynamics near a glass-transition singularity than to explain experimental findings within schematic models. The more probing variables $`A`$ are taken into account, the more convincing such an analysis is, since the universal results also imply connections between spectra measured for different $`A`$. The preceding work on PC exemplifies these statements. However, the data are influenced by preasymptotic effects, and one cannot judge the relevance of these correction effects if one does not know the underlying microscopic MCT equations. Forcing data into the universal formulas can thus lead to self-contradictory results, as the preceding subsections have demonstrated. While the spectral shapes are rather robust and the rectification diagram for the scales appears correct and leads to a reasonable estimate of $`T_c`$, as shown by the dashed lines in Fig. 12, the prefactors for the asymptotic formulas extracted from the data can be quite wrong. This error cannot be noticed if one studies a single probing variable $`A`$ only, but it appears as a violation of the factorization theorem if one compares spectra for different $`A`$. One concludes that the problems with the analysis discussed in Ref. are neither due to inadequate application of MCT results nor due to failures of MCT. Rather they reflect the properties of MCT; more precisely, they exemplify the limitations for the application of asymptotic laws.
A general rule for the test of the $`\beta `$-relaxation-scaling law is corroborated by the present analysis: if the nonergodicity parameter $`f_A^c`$ is large, i. e. if the $`\alpha `$-peak strength is large compared to the strength of the microscopic-excitation peak of the susceptibility spectrum, the preasymptotic corrections are very important. This is especially true for the discussed light-scattering and dielectric-loss spectra. Neutron-scattering spectroscopy has the advantage that $`f_q^c`$ can be shifted by changing the wave vector $`q`$. Therefore, we found the scaling-law analysis to work best for the neutron-scattering data of Ref. . It would be very informative to corroborate this finding by a measurement of the expected $`\sqrt{T_c-T}`$ anomaly of the Debye-Waller or Lamb-Mößbauer factor.
## VI The dielectric modulus
Since the memory kernel $`m_q^s(t)`$ in the mode-coupling approach is expressed as a polynomial of the density correlators, Eq. (4), this quantity shows the same asymptotic scenario as the correlators themselves. From the factorization theorem for the correlators one concludes in leading order for the kernels: $`m_A^s(t)=f_{M,A}^{s,c}+h_{M,A}^sG(t)`$. Eqs. (13,21,22) determine the plateau $`f_{M,A}^{s,c}`$ and the critical amplitude $`h_{M,A}^s`$ for the memory kernel of probing variable $`A`$:
$`f_{M,A}^{s,c}=v_A^sf^cf_A^{s,c},`$ (31)
$`h_{M,A}^s=v_A^s\left(f^ch_A^s+f_A^{s,c}h\right)=v_A^sh.`$ (32)
While in the above discussion $`f_A^{s,c}`$ was found to be larger than $`90\%`$, such that the square-root singularity is suppressed to a below-$`10\%`$ effect, the situation for the memory kernel is different. The coupling coefficient $`v_A^s`$ now plays the role of a normalization constant. If one introduces the normalized memory kernel in analogy to the normalized correlators $`\varphi _x(t)`$, $`\widehat{m}_A^s(t)=m_A^s(t)/v_A^s`$, such that $`\widehat{m}_A^s(t=0)=1`$, one gets for the normalized plateau:
$$\widehat{f}_{M,A}^{s,c}=f^cf_A^{s,c}.$$
(33)
In cases where $`f_A^{s,c}`$ is close to unity, as in our analysis of the dielectric-loss and light-scattering data, one can approximate $`\widehat{m}_A^s(t)\approx f^c+hG(t)`$. This equals the asymptotic expression for the first correlator. Thus, we can expect $`\beta `$ scaling for the memory kernel of the probe variable $`A`$ to work as well as it does for the first correlator, and thus better than for the corresponding probe-variable correlator. Let us examine this in detail for the memory kernel of $`\varphi _{\text{de}}^s`$ underlying the fit to the dielectric-susceptibility spectra. For the light-scattering data, qualitatively the same picture arises.
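Numerically, the offset between the normalized kernel plateau and $`f^c`$ is indeed small once $`f_A^{s,c}`$ is close to unity; a short check with an illustrative (not fitted) coupling:

```python
lam = 0.75
fc = 1 - lam                       # Eq. (21)
vA = 80.0                          # illustrative strong coupling
fA = 1 - 1/(vA*fc)                 # Eq. (22): f_A^{s,c} ~ 0.95
f_hat = fc*fA                      # normalized kernel plateau, Eq. (33)
print(fA, f_hat, round((fc - f_hat)/fc, 3))   # plateau offset of only ~5%
```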
In the upper part of Fig. 14, three spectra of the memory kernels for the dielectric function, rescaled to $`\omega m_{\text{de}}^{\prime \prime }(\omega )/\sqrt{|\sigma |}`$, are plotted as solid lines for three temperatures above $`T_c`$. The asymptotic $`\beta `$-susceptibility spectrum for $`\lambda =0.75`$ is again shown as a dashed line. While the picture shows some similarity to the situation found in the upper part of Fig. 13 for the light-scattering susceptibilities, the reasons for the deviations from the scaling law are different. This can be inferred from the lower part of Fig. 14, where the same scaling is shown for the normalized memory functions. Here, the solid lines represent $`\omega \widehat{m}_{\text{de}}^{\prime \prime }(\omega )/(\sqrt{|\sigma |}h)`$. One notices that the $`\alpha `$-peak strength is remarkably smaller than in the dielectric susceptibility, and comparable to that for the first correlator of the model. Similarly, we find the standard scenario for the approach of the rescaled spectra to the master curve. The deviations from the asymptotics are qualitatively the same as exhibited in the upper part of Fig. 11 for $`\omega \varphi ^{\prime \prime }(\omega )`$. Thus one concludes: the deviations from scaling seen in the upper part of Fig. 14 are mainly due to the $`T`$-dependent normalization $`v_A^s`$, and not, as in the case discussed in connection with Fig. 13, due to microscopic crossover effects.
The question of normalization becomes even clearer for the effective non-ergodicity parameters. Figure 15(a) shows the unnormalized values $`f_{M,\text{de}}^s`$ as open circles. The full line exhibits the asymptotic prediction $`f_{M,\text{de}}^s=f_{M,\text{de}}^{s,c}+h_{M,\text{de}}^s\sqrt{\sigma /(1-\lambda )}`$ for $`T<T_c`$ and $`f_{M,\text{de}}^s=f_{M,\text{de}}^{s,c}`$ for $`T\ge T_c`$. Again, the drifting coupling coefficient $`v_{\text{de}}^s`$ is responsible for masking the predicted square-root law. But, unlike in Fig. 10, this is only true for the unnormalized quantity. The normalized function $`\widehat{f}_{M,\text{de}}^s`$, shown as filled circles in Fig. 15(b), exhibits good agreement with the asymptotic law. As for the values discussed for the tagged-particle density correlators, the drift of $`v_{\text{de}}^s`$ still results in a temperature dependence of $`\widehat{f}_{M,\text{de}}^s`$ for $`T>T_c`$, but this drift is now reduced to a $`10\%`$ effect. The asymptotic value of $`\widehat{f}_{M,\text{de}}^{s,c}`$ differs by only about $`5\%`$ from the one for the first correlator, $`f^c`$, which is shown as a dash-dotted line in Fig. 15(b). It is remarkable that even for the unnormalized quantity the position of $`T_c`$ can be estimated better than could be done for the plateau values of the tagged-particle density correlators. This can be done by noticing that the slope of a linear interpolation of the data changes when going over from $`T<T_c`$ to $`T>T_c`$.
From the MCT equations (II) with the hopping kernel set to zero, one derives the expression for the dynamic susceptibility, Eq. (16), in terms of the memory kernel $`m_A^s(z)`$,
$$\chi _A(z)=-{\mathrm{\Omega }_A^s}^2\chi _A/\left[z^2-{\mathrm{\Omega }_A^s}^2+zM_A^{\text{reg}}(z)+{\mathrm{\Omega }_A^s}^2zm_A^s(z)\right].$$
(34)
Let us define a dynamical susceptibility $`\chi _{M,\text{de}}(z)`$ corresponding to the kernel $`m_{\text{de}}^s(t)`$ in analogy to Eq. (16):
$$\chi _{M,\text{de}}(z)=zm_{\text{de}}^s(z)+m_{\text{de},0}^s,$$
(35)
with $`m_{\text{de},0}^s=m_{\text{de}}^s(t=0)`$. Then one can write for the dielectric function $`\epsilon (z)=\epsilon _{\infty }+4\pi \chi _{\text{de}}(z)`$
$$\epsilon (z)-\epsilon _{\infty }=\frac{-4\pi \chi _{\text{de}}}{\left(z/\mathrm{\Omega }_{\text{de}}^s\right)^2-1-m_{\text{de},0}^s+\left[iz\nu _{\text{de}}^s/\mathrm{\Omega }_{\text{de}}^{s\,2}+\chi _{M,\text{de}}(z)\right]}.$$
(37)
The inverse of the dielectric function, $`1/\epsilon (z)`$, is occasionally considered as the dielectric modulus . The exact Mori-Zwanzig representation, Eq. (34), suggests considering instead $`\left[\epsilon (z)-\epsilon _{\infty }\right]^{-1}`$, i.e. $`\chi _{\text{de}}^{-1}(z)`$. This function consists of a quadratic polynomial in the frequency, $`\left(z/\mathrm{\Omega }_{\text{de}}^s\right)^2-1-m_{\text{de},0}^s`$, a white-noise background, $`iz\nu _{\text{de}}^s`$, and a non-trivial part $`\chi _{M,\text{de}}(z)`$. The latter has all the standard properties of a susceptibility; in particular, it obeys the Kramers-Kronig relations. There is a trivial relation between the spectrum $`\chi _{M,\text{de}}^{\prime \prime }(\omega )`$ and the dielectric function
$$\mathrm{Im}\left[\left[\widehat{\epsilon }-\epsilon (\omega )\right]^{-1}\right]=\frac{1}{4\pi \chi _{\text{de}}}\left[\omega \left(\nu _{\text{de}}^s/\mathrm{\Omega }_{\text{de}}^{s\,2}\right)+\chi _{M,\text{de}}^{\prime \prime }(\omega )\right].$$
(38)
Here, $`\epsilon _{\infty }`$ was replaced by the constant $`\widehat{\epsilon }`$, discussed above in connection with the fit of $`\epsilon ^{\prime }(\omega )`$.
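For a fixed $`\widehat{\epsilon }`$, the left-hand side of Eq. (38) follows from the complex dielectric data by elementary algebra. A minimal sketch (Python/NumPy), with a toy Debye loss curve standing in for the measured PC spectra; shifting eps_hat by $`\pm 1`$ probes the sensitivity discussed next.

```python
import numpy as np

def modulus_spectrum(eps, eps_hat):
    """Im[(eps_hat - eps)^(-1)] for complex data eps = eps' + i*eps'';
    algebraically this equals eps''/((eps_hat - eps')**2 + eps''**2)."""
    return np.imag(1.0 / (eps_hat - eps))

omega = np.logspace(-3, 3, 7)            # frequency in units of 1/tau
eps = 3.0 + 60.0 / (1.0 - 1j * omega)    # toy Debye function, eps_inf = 3
for eps_hat in (2.0, 3.0, 4.0):          # probe subtracting eps_hat +/- 1
    print(eps_hat, modulus_spectrum(eps, eps_hat)[:3])
```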
The full lines in Fig. 16 exhibit the right-hand side of Eq. (38), evaluated with the model parameters used for the interpretation of the dielectric-loss spectra in Fig. 1. The symbols exhibit the left-hand side of Eq. (38) calculated with the data from Ref. and $`\widehat{\epsilon }`$ determined in connection with the fits of $`\epsilon ^{\prime }(\omega )`$ in Fig. 8. Figure 16 shows that the fit is of the same quality as the ones shown for the direct analysis of the dielectric-loss spectra. However, to produce the result in Fig. 16, one has to be careful to subtract the right value of $`\widehat{\epsilon }`$. The error bars shown in the figure for $`T=253\mathrm{K}`$ indicate the influence of subtracting $`\widehat{\epsilon }\pm 1`$ instead of $`\widehat{\epsilon }`$, to estimate the uncertainty introduced by this procedure. One notices that the shape of the curves for the high-frequency part is influenced. Thus an analysis based on $`\left[\widehat{\epsilon }-\epsilon (\omega )\right]^{-1}`$ is only practicable if one can avoid these problems of the inversion procedure. Up to trivial terms, the left-hand side of Eq. (38) is identical with the spectrum $`\omega m_{\text{de}}^{\prime \prime }(\omega )`$ discussed in Fig. 14. This quantity can be explained well by the $`\beta `$-relaxation scaling laws. Thus for PC, the corrections to the asymptotic laws are smaller for $`\left(\left[\epsilon (\omega )-\widehat{\epsilon }\right]^{-1}\right)^{\prime \prime }`$ than for $`\epsilon ^{\prime \prime }(\omega )`$. In particular, Fig. 15 shows that the square-root singularity can be identified from a discussion of $`\chi _{M,\text{de}}(\omega )`$.
## VII Conclusion
We have exemplified for propylene carbonate, a typical glass-forming van-der-Waals system, that the susceptibility spectra measured by three different experimental techniques can be described well by a schematic MCT model. Several decades of intensity variation in the GHz frequency window, as seen in light-scattering and dielectric-loss experiments, with the sensitive temperature dependence typical of glass-forming liquids, are fitted. Also, the results from incoherent neutron scattering, probing the dynamics for different wave vectors, could be included in this simultaneous fit. Real-part data from the dielectric experiment have been successfully analyzed as well, further corroborating the consistency of the schematic-model fit. For temperatures ranging from the critical value $`T_c\approx 180\mathrm{K}`$ to well above the melting point, the range of applicability of the model includes both the $`\alpha `$\- and $`\beta `$-relaxation windows, as well as the crossover to the microscopic spectrum. Below $`T_c`$ and down to the glass-transition temperature $`T_g`$, a rather simple approach to account for hopping phenomena improves the fit for the $`\beta `$-minimum regime, but fails to describe the $`\alpha `$-peak below $`T_c`$.
The schematic model used in the fit captures the general features of the glass-transition scenario predicted by the full microscopic theory. Still, it allows one to go further than an analysis based on the asymptotic predictions of MCT alone. In particular, we have used the schematic model to investigate which features of the measured spectra can be described by asymptotic laws, and where preasymptotic corrections set in. We find that the asymptotic formulas give a qualitatively adequate description of the data, thereby corroborating the preceding studies . But we have also demonstrated that preasymptotic-correction effects cause important quantitative differences between the data and the scaling-law results. One aspect of this is the $`T`$-drift of the critical amplitude $`h_{\text{ls}}^s`$ noted in an earlier analysis of PC light-scattering data . The drift of the coupling constant $`v_{\text{ls}}^s`$ is not sufficient to explain this, as was demonstrated in Figs. 9 and 13. Also, the crossover to the microscopic excitations influences the height of the spectra at the $`\beta `$ minimum. For the measurements analyzed, scaling works best for the neutron-scattering data, due to the relatively low plateau values $`f_q^{s,c}`$. An asymptotic analysis of the dielectric modulus could work even better in this respect. But due to uncertainties in the inversion of the dielectric function, such an analysis is not practicable unless the modulus itself is measured directly.
###### Acknowledgements.
We thank H. Z. Cummins, M. Fuchs, P. Lunkenheimer, M. R. Mayr, U. Schneider, A. P. Singh, and J. Wuttke for many helpful discussions and the authors of Ref. , Refs. , and Ref. for providing us with their files for the various propylene-carbonate data. This work was supported by Verbundprojekt BMBF 03-G05TUM. |
# The Solar Neutrino Problem in the Light of a Violation of the Equivalence Principle

Talk given by R. Zukanovich Funchal.
## 1 INTRODUCTION
Neutrinos have had, since their childhood in the early 30’s, profound consequences for our understanding of the forces of nature. In the past they led to the discovery of neutral currents and provided the first indication in favour of the standard model of the electroweak interaction. They may today be at the very heart of yet another breakthrough in our perception of the physical world.
Today the results coming from solar neutrino experiments as well as from atmospheric neutrino experiments are difficult to understand without admitting neutrino flavour conversion. Nevertheless, the dynamics underlying such conversion is yet to be established and, in particular, does not have to be a priori related to the electroweak force.
The interesting idea that gravitational forces may induce neutrino mixing and flavour oscillations, if the weak equivalence principle of general relativity is violated, was proposed about a decade ago , and many works on this subject have been performed since .
Many authors have investigated the possibility of solving the solar neutrino problem (SNP) by such gravitationally induced neutrino oscillations , generally finding it necessary, in this context, to invoke an MSW-like resonance, since they conclude that this type of long-wavelength vacuum oscillation cannot explain the specific energy dependence of the data . Nevertheless, we demonstrate that all the recent solar neutrino data coming from gallium, chlorine and water Cherenkov detectors can be well accounted for by long-wavelength neutrino oscillations induced by a violation of the equivalence principle (VEP).
## 2 THE VEP FORMALISM
We assume that neutrinos of different types suffer different time delays due to the weak, static gravitational field in the space on their way from the Sun to the Earth. Their motion in this gravitational field can be appropriately described by the parameterized post-Newtonian formalism, with a different parameter for each neutrino type. Neutrinos that are weak-interaction eigenstates and neutrinos that are gravity eigenstates are related by a unitary transformation that can be parameterized, assuming only two neutrino flavours, by a single parameter, the mixing angle $`\theta _G`$, which can lead to flavour oscillation .
In this work we assume oscillations only between two species of neutrinos, which are degenerate in mass, either between active and active ($`\nu _e\rightarrow \nu _\mu ,\nu _\tau `$) or active and sterile ($`\nu _e\rightarrow \nu _s`$, $`\nu _s`$ being an electroweak singlet) neutrinos.
The evolution equation for neutrino flavours $`\alpha `$ and $`\beta `$ propagating through the gravitational potential $`\varphi (r)`$ in the absence of matter can be found in Ref. . In the case where we take $`\varphi `$ to be a constant, it can be solved analytically to give the survival probability of a $`\nu _e`$ produced in the Sun after travelling the distance $`L`$ to the Earth
$$P(\nu _e\rightarrow \nu _e)=1-\mathrm{sin}^22\theta _G\mathrm{sin}^2\frac{\pi L}{\lambda },$$
(1)
where the oscillation wavelength $`\lambda `$ for a neutrino with energy $`E`$ is given by
$$\lambda =\left[\frac{\pi \text{ km}}{5.07}\right]\left[\frac{10^{-15}}{|\varphi \mathrm{\Delta }\gamma |}\right]\left[\frac{\text{MeV}}{E}\right],$$
(2)
which, in contrast to the wavelength for mass-induced neutrino oscillations in vacuum, is inversely proportional to the neutrino energy. Here $`\mathrm{\Delta }\gamma `$ is the quantity which measures the magnitude of the VEP.
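To make the inverse energy dependence explicit, Eqs. (1) and (2) can be combined in a few lines. A minimal sketch (Python/NumPy); the parameter values are merely of the order discussed in the fits below, not the best-fit ones.

```python
import numpy as np

L_SUN_EARTH_KM = 1.496e8   # mean Sun-Earth distance

def wavelength_km(E_MeV, phi_dgamma):
    """Eq. (2): oscillation length in km, inversely proportional to E."""
    return (np.pi / 5.07) * (1e-15 / phi_dgamma) / E_MeV

def P_ee(E_MeV, sin2_2theta, phi_dgamma, L_km=L_SUN_EARTH_KM):
    """Eq. (1): electron-neutrino survival probability at the Earth."""
    return 1.0 - sin2_2theta * np.sin(np.pi * L_km / wavelength_km(E_MeV, phi_dgamma)) ** 2

E = np.array([0.3, 0.86, 10.0])   # pp-, 7Be- and 8B-like energies in MeV
print(P_ee(E, sin2_2theta=1.0, phi_dgamma=1.0e-24))
```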
## 3 ANALYSIS
We will discuss here our analysis and results for active-to-active conversion. The same analysis for the $`\nu _e\rightarrow \nu _s`$ channel can be found in Ref. , giving similar results.
In order to examine the observed solar neutrino rates in the VEP framework, we have calculated the theoretical predictions for the gallium, chlorine and Super-Kamiokande (SK) water Cherenkov solar neutrino experiments, as a function of the two VEP parameters ($`\mathrm{sin}^22\theta _G`$ and $`|\varphi \mathrm{\Delta }\gamma |`$), using the solar neutrino fluxes predicted by the Standard Solar Model of Bahcall and Pinsonneault (BP98) and taking into account the eccentricity of the Earth’s orbit around the Sun.
We do a $`\chi ^2`$ analysis to fit these parameters, together with an extra normalization factor $`f_B`$ for the <sup>8</sup>B neutrino flux, to the most recent experimental results coming from Homestake, $`R_{\text{Cl}}=2.56\pm 0.21`$ SNU, GALLEX and SAGE combined, $`R_{\text{Ga}}=72.5\pm 5.5`$ SNU, and SK, $`R_{\text{SK}}=0.475\pm 0.015`$ normalized to BP98. We use the same definition of the $`\chi ^2`$ function to be minimized as in Ref. , except that our theoretical estimations were computed by convoluting the survival probability given in Eq. (1) with the absorption cross sections , the neutrino-electron elastic scattering cross section with radiative corrections and the solar neutrino flux corresponding to each reaction, $`pp`$, $`pep`$, <sup>7</sup>Be, <sup>8</sup>B, <sup>13</sup>N and <sup>15</sup>O; other minor sources were neglected.
We present in Fig. 1 (a) the allowed region determined by the rates alone with free $`f_B`$; for fixed <sup>8</sup>B flux ($`f_B=1`$) the allowed region is similar. In Ref. one can find a table which gives more details on the best-fit parameters as well as the $`\chi _{\text{min}}^2`$ values for fixed and free $`f_B`$. We found for $`f_B=1`$ that $`\chi _{\text{min}}^2=1.49`$ for 3-2=1 degree of freedom and for $`f_B=0.81`$ that $`\chi _{\text{min}}^2=0.32`$ for 3-3=0 degrees of freedom.
We then perform a spectral shape analysis fitting the <sup>8</sup>B spectrum measured by SK using the following $`\chi ^2`$ definition:
$$\chi ^2=\underset{i}{\sum }\left[\frac{S^{\text{obs}}(E_i)-f_BS^{\text{theo}}(E_i)}{\sigma _i}\right]^2,$$
(3)
where the sum is performed over all 18 experimental points $`S^{\text{obs}}(E_i)`$, normalized by the BP98 prediction, for the recoil-electron energies $`E_i`$; $`\sigma _i`$ is the total experimental error and $`S^{\text{theo}}`$ is our theoretical prediction, calculated using the BP98 <sup>8</sup>B differential flux, the $`\nu e`$ scattering cross section , the survival probability as given by Eq. (1) taking into account the eccentricity as we did for the rates, the experimental energy resolution as in Ref. , and the detection efficiency as a step function with threshold $`E_{\text{th}}`$ = 5.5 MeV.
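For a fixed oscillation hypothesis the minimization of Eq. (3) over $`f_B`$ is linear and has a closed form. A minimal sketch (Python/NumPy, hypothetical array names for the 18 spectral points):

```python
import numpy as np

def chi2_spectrum(S_obs, sigma, S_theo, f_B):
    """Eq. (3) for the BP98-normalized recoil-electron spectrum."""
    return np.sum(((np.asarray(S_obs) - f_B * np.asarray(S_theo)) / np.asarray(sigma)) ** 2)

def best_f_B(S_obs, sigma, S_theo):
    """Closed-form minimizer of Eq. (3) with respect to f_B alone."""
    w = 1.0 / np.asarray(sigma) ** 2
    return np.sum(w * np.asarray(S_obs) * np.asarray(S_theo)) / np.sum(w * np.asarray(S_theo) ** 2)
```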
After the $`\chi ^2`$ minimization with $`f_B=0.80`$ we have obtained $`\chi _{\text{min}}^2=15.8`$ for 18-3 = 15 degrees of freedom. The best-fit parameters, which can also be found in Ref. , permit us to compute the allowed region displayed in Fig. 1 (b).
Finally, we perform a combined fit of the rates and the spectrum, obtaining the allowed region presented in Fig. 1 (c). The combined allowed region is essentially the same as the one obtained from the rates alone. In all cases presented in Figs. 1 (a)-(c) we have two isolated islands of 90% C.L. allowed regions. See Ref. for a table with the best-fit parameters for this global fit as well as for the fitted values corresponding to the local minima in these islands. Note that only the upper corner of Fig. 1 (c), for $`|\varphi \mathrm{\Delta }\gamma |>2\times 10^{-23}`$ and maximal mixing in the $`\nu _e\rightarrow \nu _\mu `$ channel, can be excluded by CCFR . Moreover, there are no restrictions in the range of parameters we have considered in the case of $`\nu _e\rightarrow \nu _\tau ,\nu _s`$ oscillations.
## 4 DISCUSSIONS AND CONCLUSIONS
Let us finally remark that, in contrast to the usual vacuum oscillation solution to the SNP, in this VEP scenario no strong seasonal effect is expected in any of the present or future experiments, even the ones that will be sensitive to <sup>7</sup>Be neutrinos . Contrary to the usual vacuum oscillation case, the oscillation lengths for the low-energy $`pp`$ and <sup>7</sup>Be neutrinos are very large, comparable to or only a few times smaller than the Sun-Earth distance, so that the effect of the eccentricity on the oscillation probability is small. On the other hand, for the higher energy neutrinos relevant for SK, the effect of the eccentricity on the probability could be large, but it is averaged out after the integration over a certain neutrino energy range. These observations are confirmed by Fig. 4 of Ref. .
We have found a new solution to the SNP which is comparable in quality of fit to the other suggested ones and can, in principle, be discriminated from them in the near future. In fact, a very-long-baseline neutrino experiment at a $`\mu `$-collider could directly probe the entire parameter region where this solution was found.
## ACKNOWLEDGMENTS
We thank P. Krastev, E. Lisi, G. Matsas, H. Minakata, M. Smy, P. de Holanda and GEFAN for valuable discussions and comments. H.N. thanks W. Haxton and B. Balantekin and the Institute for Nuclear Theory at the University of Washington for their hospitality and the Department of Energy for partial support during the final stage of this work. This work was supported by the Brazilian funding agencies FAPESP and CNPq. |
# First Dark Matter Limits from a Large-Mass, Low-Background Superheated Droplet Detector
## Abstract
We report on the fabrication aspects and calibration of the first large active mass ($`\sim 15`$ g) modules of SIMPLE, a search for particle dark matter using Superheated Droplet Detectors (SDDs). While still limited by the statistical uncertainty of the small data sample on hand, the first weeks of operation in the new underground laboratory of Rustrel-Pays d’Apt already provide a sensitivity to axially-coupled Weakly Interacting Massive Particles (WIMPs) competitive with leading experiments, confirming SDDs as a convenient, low-cost alternative for WIMP detection.
The rupture of metastability by radiation has been historically exploited as a method for particle detection. Perhaps its most successful application is the Bubble Chamber, where ionizing particles deposit enough local energy in a superheated liquid to produce vaporization along their wake. Apfel extended this concept in the form of Superheated Droplet Detectors (SDDs, a.k.a. Bubble Detectors), in which small drops (radius $`\sim 10\mu `$m) of the liquid are uniformly dispersed in a gel or viscoelastic medium. In a SDD the gel matrix isolates the fragile metastable system from vibrations and convection currents, while the smooth liquid-liquid interfaces impede the continuous triggering on surface impurities that occurs in bubble chambers. The lifetime of the superheated state is extended, allowing for new applications: SDDs are increasingly popular as neutron dosimeters, where the nucleated visible bubbles provide a reading of the radiation exposure. SIMPLE (Superheated Instrument for Massive ParticLE searches) aims to detect particle dark matter using SDDs. We report here on the sensitivity attained at the early prototype stage, already comparable to the best achieved with competing technologies.
In the moderately superheated industrial refrigerants used in SDDs, bubbles are produced only by particles having elevated stopping powers ($`dE/dx\gtrsim 200`$ keV/$`\mu `$m) such as nuclear recoils. This is understood in the framework of the “thermal spike” model , common to bubble chambers: for the transition to occur, a vapor nucleus of radius $`>r_c`$ must be created, while only the energy deposited along a distance comparable to this critical radius $`r_c`$ is available for its formation. Hence, a double threshold is imposed: the deposited energy $`E`$ must be larger than the work of formation of the critical nucleus, $`E_c`$, and this energy must be lost over a distance $`\text{O}\left(r_c\right)`$, i.e., a minimum $`dE/dx`$ is required. More formally:
$`E>E_c=4\pi r_c^2\gamma /(3ϵ )`$ (1)
$`dE/dx>E_c/(ar_c),`$ (2)
where $`r_c=2\gamma /\mathrm{\Delta }P`$, $`\gamma \left(T\right)`$ is the surface tension, $`\mathrm{\Delta }P=P_V-P`$, $`P_V\left(T\right)`$ is the vapor pressure, $`P`$ and $`T`$ are the operating pressure and temperature, $`ϵ`$ varies in the range $`[0.02,0.06]`$ for different liquids , and $`a\left(T\right)\sim \text{O}\left(1\right)`$ .
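In SI units the two thresholds are one-liners. The sketch below (Python) leaves the temperature-dependent inputs $`\gamma (T)`$ and $`P_V(T)`$ as arguments, since tabulated refrigerant values must be supplied; the default $`ϵ=0.026`$ anticipates the value quoted later for these liquids.

```python
import numpy as np

KEV_PER_JOULE = 1.0 / 1.602e-16   # 1 keV = 1.602e-16 J

def critical_radius(gamma, P_V, P):
    """r_c = 2*gamma/(P_V - P); gamma in N/m, pressures in Pa, r_c in m."""
    return 2.0 * gamma / (P_V - P)

def threshold_energy_keV(gamma, P_V, P, eps=0.026):
    """Condition (1): E_c = 4*pi*r_c**2*gamma/(3*eps), converted to keV."""
    r_c = critical_radius(gamma, P_V, P)
    return 4.0 * np.pi * r_c ** 2 * gamma / (3.0 * eps) * KEV_PER_JOULE

def min_stopping_power(gamma, P_V, P, a=1.0, eps=0.026):
    """Condition (2): minimum dE/dx = E_c/(a*r_c), returned in keV/m."""
    return threshold_energy_keV(gamma, P_V, P, eps) / (a * critical_radius(gamma, P_V, P))
```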
Both thresholds can be tuned by changing the operating conditions: keV nuclear recoils like those expected from the scattering of WIMPs (currently the favored galactic dark matter candidates ) are detectable at room $`T`$ and atmospheric $`P`$, allowing for a low-cost search free of the complications associated with cryogenic equipment. Most importantly, the threshold in $`dE/dx`$ provides an insensitivity to the minimum-ionizing backgrounds that hamper the numerous WIMP detection efforts . A mere $`<10`$ WIMP recoils/kg target/day are expected, hence the importance of background reduction and/or rejection. SDDs of active mass O(1) kg can in principle considerably extend the present experimental sensitivity .
Prompted by the modest active mass of commercially available SDDs ($`\sim 0.03`$ g refrigerant/dosimeter) and the need to control the fabrication process, we developed an $`80`$ l, 60 bar pressure reactor dedicated to large-mass SDD production. It houses a variable-speed magnetic stirrer, heating and cooling elements and micropumps for catalyst addition (we nevertheless favored thermally-reversible food gels due to safety concerns in the handling of synthetic monomers). The fabrication of $`1`$ l SDD modules containing up to 3% in superheated liquid starts with the preparation of a suitable gel matrix; ingredients are selected and processed in order to avoid alpha emitters, the only internal radioemitters of concern . A precise density matching between matrix and refrigerant is needed to obtain a uniform droplet dispersion, making water-based gels inadequate unless large fractions of inorganic salts are added, which can unbalance the chemistry of the composite and contribute an undesirable concentration of these contaminants . We find that glycerol is for this and other reasons an additive of choice. It is purified using a bed of pre-eluted ion-exchanging resin specifically targeted at actinide removal. Polymer additives and gelating agent are washed in a resin bath. All components are forced through 0.2 $`\mu `$m filters to remove motes that can act as nucleation centers. The resulting mixture is outgassed and maintained above its gelation temperature in the reactor. The refrigerant is distilled and incorporated into this solution at $`P\gg P_V\left(T\right)`$ to avoid boiling during the ensuing vigorous stirring. After a homogenized dispersion ($`r\sim 30\pm 15\mu `$m) of droplets is obtained, cooling, setting and step-wise adiabatic decompression produce a delicate entanglement of superheated liquid and thermally-reversible gel, the SDD. The detectors are refrigerated and pressurized during storage to inhibit their response to environmental neutrons.
SDDs can bypass the listed problems associated with a former bubble chamber WIMP search proposal, but are not devoid of their own idiosyncrasies. For instance, the solubility of hydrogen-free refrigerant liquids in water-based gels is small (e.g., 0.002 mol/kg bar for R-12, $`CCl_2F_2`$), yet sufficient to produce unchecked bubble growth via permeation after a few days of continuous SDD operation. The engorged bubbles lead to fractures, spurious nucleations and depletion of the superheated liquid (commercial gel-based SDDs are designed for a few hours of exposure before recompression , a cycle that can be repeated a limited number of times). To achieve the long-term SDD stability needed for a WIMP search, we employ a multiple strategy: fracture formation can be delayed under a moderate $`P\sim 2`$ atm, or by choosing refrigerants with the lowest solubility in the matrix ($`\sim 0.0003`$ mol/kg bar for R-115, $`C_2ClF_5`$). Structure-making inorganic salts produce a ”salting-out” effect, i.e., they further reduce the refrigerant solubility. Their use being inadvisable for the reasons above, we introduce instead polymers known to have a similar effect , such as polyvinylpyrrolidone (PVP). As a result of these measures, present SIMPLE modules are stable over $`\sim `$40 d of continuous exposure. Another example of SDD-specific problems is the formation of clathrate-hydrates on droplet boundaries during fabrication or recompression. These metastable ice-like structures are inclusions of refrigerant molecules into water cages that shorten the lifetime of superheated drops encrusted by them, via transfer mechanisms still not well understood . Their presence may be responsible for a long-lived spurious nucleation rate observed in R-12 SDDs following fabrication . This is addressed in SIMPLE with the addition of polymers such as PVCap or PVP, which act as kinetic inhibitors of their growth , and by use of large molecular size refrigerants like R-115, for which the formation of most hydrates is stoichiometrically forbidden .
Prototype modules are tested in an underground gallery. The 27 m rock overburden and $`\sim 30`$ cm paraffin shielding reduce the flux of muon-induced and cosmic fast neutrons, the main source of nucleations above ground. Inside the shielding, a water$`+`$glycol thermally-regulated bath maintains $`T`$ constant to within $`0.1^{\circ }`$C. The characteristic violent sound pulse accompanying vaporization in superheated liquids is picked up by a small piezoelectric transducer in the interior of the module, amplified and stored. Special precautions are taken against acoustic and seismic noise. Fig. 1 displays the decrease in spontaneous bubble nucleation rate brought by progressive purification of the modules.
The response of smaller SDDs to various neutron fields has been extensively studied and found to match theoretical expectations. However, large-size, opaque SDDs require independent calibration: acoustic detection of the explosion of the smallest or most distant droplets is not a priori guaranteed. The energy released as sound varies as $`(P_V-P)^{3/2}`$ , making these additional characterizations even more imperative for SDDs operated under $`P>1`$ atm. Two separate types of calibration have been performed to determine the target mass effectively monitored in SIMPLE modules and to check the calculation of the $`T,P`$-dependent threshold energy $`E_{thr}`$ above which WIMP recoils can induce nucleations (defined as the lowest energy meeting both conditions (1) and (2) ). First, a liquid <sup>241</sup>Am source (an alpha emitter) is diluted into the matrix while still in the solution state. Following Eq. (1), the 5.5 MeV alphas and 91 keV recoiling <sup>237</sup>Np daughters cannot induce nucleations at temperatures below $`T_\alpha `$ and $`T_{\alpha r}`$, respectively . The expression $`a=4.3\left(\rho _v/\rho _l\right)^{1/3}`$ , where $`\rho _v(T)`$, $`\rho _l(T)`$ are the vapor- and liquid-phase densities of the refrigerant, correctly predicts the observed $`T_\alpha `$ for both R-12 and R-115 at $`P=`$1 and 2 atm. In the same conditions, the theoretical value of $`ϵ`$ for these liquids ($`ϵ\approx 0.026`$, neglecting a small $`T,P`$ dependence) generates a good agreement with the experimental $`T_{\alpha r}`$ (Fig. 1, insert). Prior to extensive component purification, the spectrum in non-calibration runs (Fig. 1, histogram) bears a close resemblance to that produced by <sup>241</sup>Am spiking (Fig. 1, insert); the initial presence of a small ($`\sim 10^{-4}`$ pCi/g) <sup>228</sup>Th contamination, compatible with the observed rate, was confirmed via low-level alpha spectroscopy. Three regimes of background dominance are therefore delimited by vertical lines in Fig. 1: the sudden rise at $`T\approx 15^{\circ }`$C originates in high-$`dE/dx`$ Auger electron cascades following interactions of environmental gammas with Cl atoms in the refrigerant . The calculated $`E_c`$ for R-115 at $`T=15.5^{\circ }`$C and $`P=`$2 atm is 2.9 keV, coincident with the binding energy of K-shell electrons in Cl, 2.8 keV (i.e., the maximum $`E`$ deposited via this mechanism). Thus, the onset of gamma sensitivity provides a welcome additional check of the threshold in the few-keV region.
Alpha calibrations are not suitable for a rigorous determination of the overall sound detection efficiency because a large fraction of the added emitters drifts to gel-droplet boundaries during fabrication, an effect explained by the polarity of actinide complex ions and dependent on matrix composition. While this migration does not affect $`T_\alpha `$ nor $`T_{\alpha r}`$, it enhances the overall nucleation efficiency in a somewhat unpredictable manner . To make up for this deficiency, SIMPLE modules have been exposed to a <sup>252</sup>Cf neutron source at the TIS/RP calibration facility (CERN). The resulting spectrum of neutron-induced fluorine recoils (Fig. 2, insert) mimics a typically expected one from WIMP interactions. A complete MCNP4a simulation of the calibration setup takes into account the contribution from albedo and thermal neutrons. The expected nucleation rate as a function of $`T`$ is calculated as in : cross sections for the elastic, inelastic, (n,$`\alpha `$) and (n,p) channels of the refrigerant constituents are extracted from ENDFB-VI libraries. Look-up tables of the distribution of deposited energies as a function of neutron energy are built from the SPECTER code , and stopping powers of the recoiling species are taken from SRIM98 . Since $`T`$ was continuously ramped up during the irradiations at a relatively fast $`1.1^{\circ }`$C/hr, a small correction to it ($`<1^{\circ }`$C) is numerically computed and applied to account for the slow thermalization of the module. Depending on $`T`$, the value of $`E_{thr}`$ for elastic recoils in fluorine (the dominant nucleation mechanism in R-115) is set by either of conditions (1) and (2), the other being always fulfilled for $`E>E_{thr}`$ . The handover from the second to the first condition at $`T`$ above $`5.5^{\circ }`$C ($`2.5^{\circ }`$C) for $`P=`$2 atm ($`P=`$1 atm) is clearly observed in the data as two different regimes of nucleation rate (Fig. 2). A larger-than-expected response, already noticed in R-12 , is evident at low $`T`$: the calculated $`E_{thr}`$ there is too conservative (too high). This behavior appears well below the normal regime of SDD operation (which is at $`T`$ high enough to have $`E_{thr}=E_c`$) and therefore does not interfere with neutron or WIMP detection. However, it is interesting in that it points at a higher than normal bubble nucleation efficiency from heavy particles, as discussed in early bubble chamber work . A best fit of the overall normalization of the Monte Carlo over the full data set (Fig. 2, dotted lines) yields the fraction of refrigerant mass monitored with the present sound acquisition chain, $`34\pm 2\%`$ ($`74\pm 4\%`$) of the total at $`P=`$2 atm ($`P=`$1 atm), a decisive datum for obtaining dark matter limits.
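Since only the overall normalization of the Monte Carlo is adjusted, the fit is linear. A sketch of this step (Python/NumPy, hypothetical variable names, Gaussian errors assumed):

```python
import numpy as np

def fit_scale(rate_obs, rate_err, rate_mc):
    """Least-squares scale s minimizing sum(((obs - s*mc)/err)**2);
    returns s and its 1-sigma uncertainty.  Applied to measured versus
    simulated nucleation rates, s estimates the monitored mass fraction."""
    obs, err, mc = (np.asarray(x, dtype=float) for x in (rate_obs, rate_err, rate_mc))
    w = 1.0 / err ** 2
    s = np.sum(w * obs * mc) / np.sum(w * mc ** 2)
    return s, 1.0 / np.sqrt(np.sum(w * mc ** 2))
```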
The installation 500 m underground of modules identical in preparation and sound detection system to those utilized in the <sup>252</sup>Cf calibrations started in July 1999. A decommissioned nuclear missile control center has been converted into an underground laboratory , facilitating this and other initiatives. The characteristics of this site (microphonic silence, unique electromagnetic shielding ) make it especially suitable for rare-event searches. Modules are placed inside a thermally-regulated water bath, surrounded by three layers of sound and thermal insulation. A 700 l water neutron moderator, resting on a vibration absorber, completes the shielding. Events in the modules and in external microphones are time-tagged, allowing us to filter out the small fraction ($`\sim 15`$%) of signals correlated with human activity in the immediate vicinity of the experiment. $`P`$ and $`T`$ are continually logged. The signal waveforms are digitally stored, but no event rejection based on pulse-shape considerations is performed at this stage, avoiding the criticisms associated with some WIMP searches in which large data cuts are made.
The raw counting rate from the first SIMPLE module operated in these conditions appears in Fig. 3. Accounting for the sound detection efficiency and a 62% fluorine mass fraction in R-115, limits can be extracted on the spin-dependent WIMP-proton cross section $`\sigma _{Wp}`$ (Fig. 3). The cosmological parameters and method in are used in the calculation of WIMP elastic scattering rates, which are then compared to the observed uncut nucleation rate at $`T=10^{\circ }`$C or $`14^{\circ }`$C, depending on WIMP mass. The expected nucleation rate at $`T`$ (i.e., integrated for recoil energies $`>E_{thr}(T)`$) from a candidate at the edge of the sensitivity of the leading DAMA experiment ($`1.5\times 10^4`$ kg-day of NaI) is offered as a reference in Fig. 3: SIMPLE sensitivity is presently limited by the large statistical uncertainty associated with a short exposure, and not yet by the background rate. A considerable improvement is expected after the ongoing expansion of the bath to accommodate up to 16 modules. In parallel to this, plastic module caps are being replaced by a sturdier design: runs using refrigerant-free modules show that a majority of the recorded events arise from pressure microleaks, correlated with the sense of $`T`$ ramping, able to stimulate the piezoelectric sensor. It must also be kept in mind that a $`T`$-independent, flat background implies a null WIMP signal, although this eventual approach to data analysis can only be exploited after a large reduction in statistical uncertainty is achieved.
The importance of the spin-dependent WIMP interaction channel (where F is the optimal target ) has been recently stressed by its relative insensitivity to CP-violation parameter values, which may otherwise severely reduce coherent interaction rates . Nevertheless, $`CF_3Br`$ modules able to exploit coherent couplings are presently under development. The intrinsic insensitivity of SDDs to most undesirable backgrounds, the low cost of the materials involved and the simplicity of production and operation open a new door to dark matter detection.
We thank the Communauté des Communes du Pays d’Apt and French Ministry of Defense for supporting the conversion of the underground site. Our gratitude goes to M. Auguste, J. Bourges, G. Boyer, R. Brodzinski, A. Cavaillou, COMEX-PRO, M. El-Majd, M. Embid, L. Ibtiouene, IMEC, J. Matricon, M. Minowa, Y.H. Mori, T. Otto, G. Roubaud, M. Same and C.W. Thomas. |
# Clues to the origin of parsec to kilo-parsec jet misalignments in EGRET sources
## 1 Introduction
The recently released third catalog of high-energy $`\gamma `$-ray sources detected by the EGRET telescope on the Compton Gamma-Ray Observatory contains 66 high-confidence identifications of blazars and 27 lower-confidence potential blazar identifications (Hartman et al. 1999). The $`\gamma `$-ray blazars are relativistically beamed and have strong jet components. Inverse Compton scattering is the most promising process responsible for the $`\gamma `$-ray emission (Marscher & Gear 1985; Bloom & Marscher 1996; Ghisellini & Madau 1996; Böttcher & Dermer 1998). All $`\gamma `$-ray AGNs identified by EGRET are radio-loud with flat spectra, but not all flat-spectrum AGNs are detectable $`\gamma `$-ray sources. One possibility is that the beaming cone for $`\gamma `$-ray emission is narrower than that for radio emission. In this case, the $`\gamma `$-ray emission could be beamed away from the line of sight, while the radio emission is still Doppler boosted due to a wider beaming cone (von Montigny et al. 1995).
The apparent position angle difference $`\mathrm{\Delta }`$PA between parsec and kilo-parsec jets can be related to the bending properties of the jets. Many authors have investigated the misalignment angle distributions for different samples of radio sources (Pearson & Readhead 1988; Conway & Murphy 1993; Appl et al. 1996; Tingay et al. 1998). If the angle between the relativistic jet and the line of sight is small, the projection effect of beaming can amplify the intrinsic bend of the jet (Conway & Murphy 1993). Several effects, such as motion of the host galaxy, collision of the jet with clouds, or precession of the central engine, can cause the bend.
Recently, the concept of jet-disc symbiosis was introduced, and the inhomogeneous jet model plus mass and energy conservation in the jet-disc system was applied to study the relation between disc and jet luminosities (Falcke & Biermann 1995; Falcke et al. 1995; Falcke & Biermann 1999). An effective approach to study the link between these two phenomena is to explore the relationship between the luminosity in line emission and the kinetic power of jets on different scales (Rawlings & Saunders 1991; Celotti, Padovani & Ghisellini 1997). Rawlings & Saunders (1991) derived the total jet kinetic power $`Q_{jet}`$ and found a correlation between $`Q_{jet}`$ and the narrow line luminosity $`L_{NLR}`$. Similar correlations between line and radio emission have also been found by other authors (Cao & Jiang 1999; Xu et al. 1999; Willott et al. 1999). The optical line region is photoionized by a nuclear source (probably radiation from the disk), so the optical line emission is a better accretion power indicator than the optical continuum, since the continuum radiation may be enhanced by relativistically beamed synchrotron radiation for some flat-spectrum quasars (Celotti et al. 1997). The extended radio emission, which is not affected by beaming, may reflect the jet power $`Q_{jet}`$. Thus, the ratio of extended radio to broad-line flux reflects the ratio of jet power to accretion power. In this work, we present correlations between the misalignment $`\mathrm{\Delta }`$PA and the ratio of radio to broad-line emission for a sample of $`\gamma `$-ray blazars. In Sect. 2, we describe the sample of sources. The results are contained in Sect. 3. The last section includes the discussion.
## 2 The sample
Complete information on the line spectra is available for very few sources in our sample, since different lines are observed for sources at different redshifts. We have to estimate the total broad-line flux from the available observational data. There is no solidly established procedure to derive the total broad-line flux, and we therefore adopt the method proposed by Celotti et al. (1997). The following lines: Ly$`\alpha `$, C iv, Mg ii, H$`\gamma `$, H$`\beta `$ and H$`\alpha `$, which contribute the major part of the total broad-line emission, are used in our estimate. We use the line ratios reported by Francis et al. (1991) and add the contribution from the H$`\alpha `$ line to derive the total broad-line flux (see Celotti et al. 1997 for details). We then search the literature to collect data on broad-line fluxes. We only consider values of line fluxes (or luminosities) given directly, or the equivalent width and the continuum flux density at the corresponding line frequency when these are reported together in the literature. When more than one value of the same line flux was found in the literature, we take the most recent reference.
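Schematically, the rescaling works as in the sketch below (plain Python). The relative line strengths and the total are placeholders only; the actual numbers are the Francis et al. (1991) composite ratios with the H$`\alpha `$ contribution added, as in Celotti et al. (1997).

```python
def total_broadline_flux(measured, strength, total_strength):
    """Scale the summed fluxes of the observed lines by the fraction of the
    total broad-line strength that those lines represent."""
    covered = sum(strength[line] for line in measured)
    return sum(measured.values()) * total_strength / covered

# placeholder strength table on a scale where Ly-alpha = 100
strength = {"Lya": 100.0, "CIV": 60.0, "MgII": 30.0,
            "Hgamma": 13.0, "Hbeta": 22.0, "Halpha": 77.0}
total_strength = 556.0   # placeholder: sum over *all* contributing lines

f_BLR = total_broadline_flux({"MgII": 2.0e-14, "Hbeta": 1.1e-14},  # erg cm^-2 s^-1
                             strength, total_strength)
```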
We start with the $`\gamma `$-ray blazars identified by Hartman et al. (1999), including lower-confidence blazars. There are 79 AGNs with available redshifts in the third EGRET catalog. Among these sources, we searched the literature extensively and found 44 sources with sufficient line data to estimate the total broad-line flux. The remainder of the sources that lack broad-line flux data comprises 9 BL Lac objects and 26 quasars. For the BL Lac objects, the broad-line fluxes have not been measured due to weak line emission. The situation for the quasars is quite different from the BL Lac objects. We note that spectroscopic observations of most of these 26 quasars have been performed. However, the line data of these quasars are usually incomplete, i.e., only the equivalent width, line profile or line-to-continuum ratio is given, while the continuum flux density at the given frequency is not available, probably due to the specific purpose of the literature or problems of calibration. Only a bit more than half of the $`\gamma `$-ray sources with known redshifts have sufficient data for the total broad-line flux to be estimated. This is similar to the situation in Cao & Jiang (1999). In their work, 198 sources within the starting sample of 378 sources have suitable data to derive the total broad-line flux. No evidence shows that the lack of broad-line flux for these sources would affect the main results of the present analyses, though it leads to a highly incomplete sample for the present study. Further spectroscopic observations of these sources would be helpful. We collect the data of all sources with both the broad-line flux and the misalignment angle $`\mathrm{\Delta }`$PA between parsec and kilo-parsec jets, which leads to a sample of 34 blazars (we add the TeV $`\gamma `$-ray object Mkn501, which is not listed in the EGRET catalog, to the sample). There are 26 quasars and 8 BL Lac objects in this sample, of which 7 sources are lower-confidence potential blazar identifications and two are the TeV $`\gamma `$-ray objects Mkn421 and Mkn501. The broad-line data are listed in Table 1. We compile the data of the extended radio flux density at 5 GHz in the rest frame of the source in column (9) of Table 1. The data given at wavelengths other than 5 GHz are K-corrected to 5 GHz in the rest frame of the source assuming $`\alpha _{\mathrm{ext}}=0.75`$ ($`f_{\mathrm{ext}}\propto \nu ^{-\alpha _{\mathrm{ext}}}`$).
We also give the core flux density data at two different frequencies for each source in the sample, and a two-point spectral index is then derived for the core of each source. The core flux density at 5 GHz in the rest frame of the source is then available by K-correction. The misalignment angles $`\mathrm{\Delta }`$PA between kilo-parsec and parsec jets are taken from the literature. For a few sources, different values of $`\mathrm{\Delta }`$PA are given by different authors, usually due to complex jet structures. We take the minimum $`\mathrm{\Delta }`$PA for these sources. All data of the core flux density and the misalignment angle $`\mathrm{\Delta }`$PA are given in Table 2.
The misalignment angle $`\mathrm{\Delta }`$PA is obtained by comparing VLA and VLBI maps. We note that only one position angle (VLA or VLBI) is available for some sources. Only the VLA position angle is available for the sources 0414-189 and 0954+556. There are five sources: 0454-234, 1504-166, 1741-038, 2200+420 and 2320-035, for which only the VLBI position angle is available. One reason is that these sources are too compact. High dynamic range VLA maps sufficient to reveal weak kilo-parsec structure are not available for many sources. Therefore, further radio observations of these sources might reveal their misalignment information. Further spectroscopic observations are also necessary to complete the sample.
## 3 Results
The distribution of $`\mathrm{\Delta }`$PA for the sample is plotted in Fig. 1 and appears bimodal, similar to that found by Pearson & Readhead (1988). Such a distribution can be explained by projection of a helical jet (Conway & Murphy 1993). In Fig. 2 we plot the relation between the apparent misalignment $`\mathrm{\Delta }`$PA and the ratio of the total radio flux at 5 GHz to the broad-line flux. The radio flux density is K-corrected to the rest frame of the source. A correlation is found at the 99.9 per cent significance level for the whole sample using Spearman’s correlation coefficient $`\rho `$. A slightly less significant correlation is present for the quasars in the sample. We also find a significant correlation between $`\mathrm{\Delta }`$PA and the ratio of the VLBI core flux to the broad-line flux at the 99.99 per cent level, where the VLBI core flux density is also K-corrected to the rest frame of the source at 5 GHz (see Fig. 3). In Fig. 4 we plot the relation between the apparent misalignment $`\mathrm{\Delta }`$PA and the ratio of the extended radio flux at 5 GHz in the rest frame of the source to the broad-line flux. A weak correlation at 90 per cent significance shows a trend for sources with higher ratios to have larger misalignments $`\mathrm{\Delta }`$PA.
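The quoted significance levels follow from Spearman's rank statistic. A sketch of the test (Python with scipy); the arrays here are synthetic placeholders standing in for the 34 entries of Tables 1 and 2.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)                 # placeholder data only
log_ratio = rng.normal(4.0, 1.0, 34)           # log10(nu*f_core/f_line)
dPA = np.clip(20.0 * log_ratio + rng.normal(0.0, 30.0, 34), 0.0, 180.0)

rho, p = stats.spearmanr(log_ratio, dPA)
# the text quotes 100*(1 - p) as the per cent significance level
print(f"rho = {rho:.2f}, significance = {100.0 * (1.0 - p):.2f} per cent")
```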
In the present sample, there are two TeV $`\gamma `$-ray sources: Mkn421 (1101+384) and Mkn501 (1652+398). We know that the three TeV $`\gamma `$-ray objects are quite different from the other $`\gamma `$-ray sources (Coppi & Aharonian 1999a,b). Only Mkn421 is listed in the third EGRET catalog (Hartman et al. 1999). However, we cannot find obviously different behaviour of these two TeV $`\gamma `$-ray objects from the remaining sources in the sample (see Figs. 2-4, labeled by large squares).
The redshifts of the sources in our sample range from 0.031 to 2.286, which means that the spatial resolutions of the VLA and VLBI observations are very different. However, the angular resolution is about the same for these sources, which could affect the correlation analyses (Appl et al. 1996). We therefore group the sources into low (z$`<`$0.5) and high (z$`>`$0.5) redshift objects. For the latter, the angular to linear scale mapping is approximately constant. We re-analyze the correlations and find a correlation between $`\mathrm{\Delta }`$PA and the ratio of the radio core flux to the broad-line flux at the 99.94 per cent significance level for the 24 objects with z$`>`$0.5.
## 4 Discussion
Recently, Serjeant et al. (1998) found a correlation between radio and optical continuum emission for a sample of steep-spectrum radio quasars that is evidence for a link between the accretion process and jet power. Their sample is limited to steep-spectrum quasars to reduce the Doppler beaming effect in the optical waveband. A similar correlation is given by Carballo et al. (1999) for the B3-VLA sample. Cao & Jiang (1999) present a correlation between radio and broad-line emission for a sample of radio quasars including both flat-spectrum and steep-spectrum quasars. They adopted the broad-line emission instead of the optical continuum in their investigation to avoid the beaming effect on the optical emission. The jet power $`Q_{jet}`$ is proportional to the bulk Lorentz factor of the jet, and also depends on the size of the jet and the particle density in the jet (Celotti & Fabian 1993). The apparent misalignment may be affected by the angle between the direction of the VLBI jet and the line of sight if this angle is small (Conway & Murphy 1993). The total radio flux density and the VLBI core flux density are both strongly increased by beaming in core-dominated radio blazars. The sources with higher ratios may therefore have higher Doppler factors. So, the correlations between $`\mathrm{\Delta }`$PA and $`\nu f_{\mathrm{rad}}/f_{\mathrm{line}}`$ or $`\nu f_{\mathrm{core}}/f_{\mathrm{line}}`$ can be explained by beaming effects. Impey et al. (1991) found that the sources with higher fractional optical polarization tend to have relatively larger misalignments $`\mathrm{\Delta }`$PA between parsec and kilo-parsec structures, which is consistent with both large apparent misalignments and optical fractional polarization being correlated with large beaming and small angles to the line of sight.
The extended radio emission may also be correlated with the jet power $`Q_{jet}`$ (see Fig. 4). If this is true, the ratio of the extended radio flux to the broad-line flux may reflect the ratio of the jet power to the disk luminosity: $`Q_{jet}/L_{acc}`$. The weak correlation between $`\mathrm{\Delta }`$PA and the ratio of extended radio to broad-line flux might imply that the intrinsic bend of the jet is related to the ratio of jet mechanical power to accretion power.
Jets in quasars may be powered by rotating black holes in the nuclei (Blandford & Znajek 1977; Moderski et al. 1998). The jet power $`Q_{jet}`$ is then related to the rotational energy of the black hole according to the BZ mechanism. Hence, the ratio of radio to broad-line flux might be related, to some extent, to the angular momentum of the black hole. Also, we know that the variation of the rotation axis of the black hole caused by the Lense-Thirring effect can result in a change of the orientation of the nuclear jet (Bardeen & Petterson 1975; Scheuer & Feiler 1996), which may lead to different ejection orientations of the small- and large-scale jets. A faster rotating black hole may cause the jet ejection orientation to change more rapidly, leading to a larger intrinsic bend of the jet.
More recently, Ghosh & Abramowicz (1997) and Livio et al. (1999) have shown that the electromagnetic output from the inner disc is generally expected to dominate over that from the hole. If this is the case for the $`\gamma `$-ray AGNs, the jet power $`Q_{jet}`$ from the disc is mainly determined by the poloidal magnetic field $`B_{pd}`$ at the disc surface. A tentative explanation of the relation in Fig. 4 is that both the bending and the jet power could be increased by large magnetic fields in the accretion disk.
Further study of this problem for a larger sample of core-dominated radio quasars is left to future work; we can then check whether the correlations are properties only of EGRET sources or of blazars in general.
###### Acknowledgements.
I thank the referee for his helpful comments and suggestions that improved the presentation of this paper. X.Y. Hong is thanked for helpful discussions on the measurement of the misalignment angle. The support by NSFC and Pandeng Plan is gratefully acknowledged. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautic and Space Administration.
# Evidence for TeV gamma-ray emission from the shell type SNR RX J1713.7-3946
## 1 Introduction
Supernova remnants (SNRs) are currently believed to be a major source of galactic cosmic rays (GCRs), based on arguments of energetics, shock acceleration mechanisms (Blandford & Eichler blandford87 (1987), Jones & Ellison jones91 (1991)), and the elemental abundances in the source of GCRs (Yanagita et al. yanagita90 (1990), Yanagita & Nomoto yanagita99 (1999)). EGRET observations suggest that the acceleration sites of GCRs at GeV energies are SNRs (Esposito et al. esposito96 (1996)). However, direct evidence for the SN origin of GCRs at TeV energies is scarce (e.g. Koyama et al. koyama95 (1995), Allen et al. allen97 (1997), Buckley et al. buckley98 (1998)). Arguably the best evidence for the existence of relativistic electrons with energies around 100 TeV is the CANGAROO observation of TeV gamma-rays from the northeast rim of SN1006, which coincides with the region of maximum flux in the 2–10 keV band of the ASCA data (Tanimori et al. 1998b ). This TeV gamma-ray emission was explained as arising from 2.7 K Cosmic Microwave Background Radiation (CMBR) photons being Inverse Compton (IC) up-scattered by electrons with energies up to $`\sim `$ 100 TeV and allowed, together with the observation of non-thermal radio and X-ray emission, the estimation of the physical parameters of the remnant, such as the magnetic field strength (Pohl pohl96 (1996), Mastichiadis mastichiadis96 (1996), Mastichiadis & de Jager jager96 (1996), Yoshida & Yanagita yoshida97 (1997), Naito et al. naito99 (1999)).
The shell type SNR RX J1713.7-3946 was discovered in the ROSAT All-Sky Survey (Pfeffermann & Aschenbach pfeffermann96 (1996)). The remnant has a slightly elliptical shape with a maximum extent of $`70^{\prime }`$. The 0.1–2.4 keV X-ray flux from the whole remnant is $`\sim `$ 4.4 $`\times `$ 10<sup>-10</sup> erg cm<sup>-2</sup> s<sup>-1</sup>, ranking it among the brightest galactic supernova remnants. Subsequent observations of this remnant by the ASCA Galactic Plane Survey revealed strong non-thermal hard X-ray emission from the northwest (NW) rim of the remnant that is three times brighter than that from SN1006 (Koyama et al. koyama97 (1997)). The non-thermal emission from the NW rim dominates the X-ray emission from RX J1713.7-3946, and the SNR X-ray emission as a whole is dominated by non-thermal emission (Slane et al. slane99 (1999), Tomida tomida99 (1999)). It is notable that the observed emission region of hard X-rays extends over an area $`0^{\circ }.4`$ in diameter. Slane et al. (slane99 (1999)) carried out 843 MHz radio observations using the Molonglo Observatory Synthesis Telescope, and discovered faint emission which extends along most of the SNR perimeter, with the most distinct emission from the region bright in X-rays. Slane et al. (slane99 (1999)) suggest the distance to RX J1713.7-3946 is about 6 kpc based upon the observation of CO emission from molecular clouds which are likely to be associated with the remnant.
The dominance of non-thermal emission from the shell is reminiscent of SN1006. Koyama et al. (koyama97 (1997)) proposed, from the global similarity of the new remnant to SN1006 in its shell type morphology, the non-thermal nature of the X-ray emission, and the apparent lack of a central engine like a pulsar, that RX J1713.7-3946 is the second example, after SN1006, of synchrotron X-ray radiation from a shell type SNR. These findings from X-ray observations would suggest that TeV gamma-ray emission could be expected, as observed in SN1006, from regions in the remnant extended over an area larger than the point spread function of a typical imaging telescope ($`\sim 0^{\circ }.2`$).
Both SN1006 and RX J1713.7-3946 show notably lower radio flux densities and relatively lower matter densities in their ambient space when compared with those for the other shell type SNRs (Green green98 (1998)) for which the Whipple group (Buckley et al. buckley98 (1998)) and the CANGAROO group (Rowell et al. gavin99 (1999)) have reported upper limits to the TeV gamma-ray emission. These characteristics might be related to the reason why TeV gamma-rays have been detected only from SN1006 and not from other shell type SNRs: the lower radio flux may indicate a weaker magnetic field, which may result in higher electron energies due to reduced synchrotron losses. In addition, the lower matter density would suppress the production of $`\pi ^0`$ decay gamma-rays. An observation of TeV gamma-rays from RX J1713.7-3946 would provide not only further direct evidence for the existence of very high energy electrons accelerated in the remnant but also other important information on physical parameters, such as the strength of the magnetic field, which are relevant to the particle acceleration phenomena occurring in the remnant, and would also help clarify the reason why TeV gamma-rays have until now been detected only from SN1006.
With the above motivation, we have observed RX J1713.7-3946 with the CANGAROO imaging TeV gamma-ray telescope in 1998. Here we report the result of these observations.
## 2 Instrument and Observation
The CANGAROO 3.8m imaging TeV gamma-ray telescope is located near Woomera, South Australia ($`136^{\circ }47^{\prime }`$E, $`31^{\circ }06^{\prime }`$S) (Hara et al. hara93 (1993)). A high resolution camera of 256 photomultiplier tubes (Hamamatsu R2248) is installed in the focal plane. The field of view of each tube is about $`0^{\circ }.12\times 0^{\circ }.12`$, and the total field of view (FOV) of the camera is about $`3^{\circ }`$. The pointing accuracy of the telescope is $`0^{\circ }.02`$, determined from a study of the trajectories of stars of magnitude 5 to 6 in the FOV. RX J1713.7-3946 was observed in May, June and August of 1998. During on-source observations, the center of the FOV tracked the NW rim (right ascension $`17^\mathrm{h}11^\mathrm{m}56^\mathrm{s}.7`$, declination $`-39^{\circ }31^{\prime }52^{\prime \prime }.4`$ (J2000)), which is the brightest point in the remnant in hard X-rays (Koyama et al. koyama97 (1997)). An off-source region having the same declination as the on-source region but a different right ascension was observed before or after the on-source observation for an equal amount of time each night, under moonless and usually clear sky conditions. The total observation time was 66 hours for on-source data and 64 hours for off-source data. After rejecting data affected by clouds, a total of 47.1305 hours of on-source data and 45.8778 hours of off-source data remained for this analysis.
## 3 Analysis and Result
The standard method of image analysis was applied to these data, based on the well-known parameterization of the elongated shape of the Čerenkov light images using “width,” “length,” “concentration” (shape), “distance” (location), and the image orientation angle “alpha” (Hillas hillas85 (1985), Weekes et al. weekes89 (1989), Reynolds et al. reynolds93 (1993)). However, the emitting region of TeV gamma-rays in this target may be extended, as in the case of SN1006. For extended sources, use of the same criteria as for a point source in the shower image analysis is not necessarily optimal. We made a careful Monte Carlo simulation for extended sources of various extents and found that the distributions of the shower image parameters width, length, and concentration for gamma-ray events are essentially the same, within statistical fluctuations, as in the case of a point source. However, the simulation suggests that we should allow a wider range, dependent on the extent of the source, for the parameters distance and alpha to avoid overcutting gamma-ray events. In this analysis, gamma-ray–like events were selected with the criteria 0.01 $`\le `$ width $`\le `$ 0.11, 0.1 $`\le `$ length $`\le `$ 0.45, 0.3 $`\le `$ concentration $`\le `$ 1.0 and 0.5 $`\le `$ distance $`\le `$ 1.2.
Figure 1a shows the resultant alpha distribution when we analyzed the distribution centered at the tracking point (right ascension $`17^\mathrm{h}11^\mathrm{m}56^\mathrm{s}.7`$, declination $`-39^{\circ }31^{\prime }52^{\prime \prime }.4`$ (J2000)), which is the brightest point in the remnant in hard X-rays (Koyama et al. koyama97 (1997)). The solid line and the dashed line indicate the on-source and off-source data respectively. Here we have normalized the off-source data to the on-source data to take into account the difference in observation time and the variation of trigger rates due to the difference in zenith angle between on- and off-source data and due to subtle changes in weather conditions. The value of the normalization factor $`\beta `$ is estimated to be 1.03 from the difference in total observation time for on- and off-source measurements. On the other hand, the actual value of the normalization factor $`\beta `$ is estimated to be $`0.99`$ from the ratio $`N_{\mathrm{on}}`$/$`N_{\mathrm{off}}`$, where $`N_{\mathrm{on}}`$ and $`N_{\mathrm{off}}`$ indicate the total numbers of gamma-ray–like events with alpha between $`40^{\circ }`$ and $`90^{\circ }`$ for on- and off-source data respectively. We selected the region with alpha $`>40^{\circ }`$ to avoid any “contamination” by gamma-rays from the source, in the knowledge that the source may be extended. The small discrepancy between the two estimates of the value of $`\beta `$ might come from a slight change in the mirror reflectivity during the observations due to dew. Here we adopt the value 0.99 for $`\beta `$ in the following analysis, including the small discrepancy in the systematic errors due to the uncertainty in the mirror reflectivity, as shown below. Figure 1b shows the alpha distribution of the excess events for the on-source over the off-source distribution shown in Figure 1a. A rather broad but significant peak can be seen at low alpha, extending to $`30^{\circ }`$. The alpha distributions expected for a point source and several disk-like extended sources of uniform surface brightness with various radii centered in our FOV were calculated using the Monte Carlo method. These distributions are shown in the same figure. The alpha distribution of the observed excess events appears to favour a source radius of $`0^{\circ }.4`$, which suggests the emitting region of TeV gamma-rays is extended around the NW rim of RX J1713.7-3946. The statistical significance of the excess is calculated as ($`N_{\mathrm{on}}(\alpha )-\beta N_{\mathrm{off}}(\alpha )`$) / $`\sqrt{N_{\mathrm{on}}(\alpha )+\beta ^2N_{\mathrm{off}}(\alpha )}`$, where $`N_{\mathrm{on}}(\alpha )`$ and $`N_{\mathrm{off}}(\alpha )`$ are the numbers of gamma-ray–like events with alpha less than $`\alpha `$ in the on- and off-source data respectively. The significance at the peak of the X-ray maximum was $`5.6\sigma `$ when we chose alpha $`\le 30^{\circ }`$, considering the result of the Monte Carlo simulation shown in Figure 1b.
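The excess significance defined above is easily reproduced. A minimal sketch (Python/NumPy) with illustrative counts only; the published numbers are the normalization $`\beta =0.99`$ and the $`5.6\sigma `$ excess at alpha $`\le 30^{\circ }`$.

```python
import numpy as np

def significance(N_on, N_off, beta):
    """(N_on - beta*N_off)/sqrt(N_on + beta**2*N_off): the simple excess
    significance used in the text (not a full likelihood-ratio formula)."""
    return (N_on - beta * N_off) / np.sqrt(N_on + beta ** 2 * N_off)

print(significance(N_on=2000.0, N_off=1700.0, beta=0.99))  # toy counts
```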
In order to verify this extended nature, we examined the effects of the shape-parameter cuts on the alpha distribution by varying each cut parameter over a wide range. We also produced alpha distributions for different energy ranges and data subsets. Similar broad peaks in the alpha distribution persisted throughout these checks. In addition, we examined more recent data on PSR 1706−44 from July and August 1998 and obtained a narrow peak at alpha $`<15^{\circ}`$, as expected for a point source. This confirms that the extended nature of the TeV gamma-ray emitting region is not an artifact of a malfunction of our telescope system and/or of systematic errors in our data analysis. A similar, but not as broad, alpha peak was seen for SN1006 (Tanimori et al. 1998b).
In order to determine the extent of the emitting region, we made a significance map of the excess events around the NW rim of RX J1713.7−3946. Significances for alpha $`\le 30^{\circ}`$ were calculated at all grid points in $`0^{\circ}.1`$ steps in the FOV. Figure 2 shows the resultant significance map of the excess events around the NW rim of RX J1713.7−3946, plotted as a function of right ascension and declination, with the contours of the hard X-ray flux (Tomida tomida99 (1999)) overlaid as solid lines. The solid circle indicates the size of the point spread function (PSF) of our telescope, which is estimated, by fitting a Gaussian function to Monte Carlo simulations of a point source, to have a standard deviation of $`0^{\circ}.25`$ for alpha $`\le 30^{\circ}`$. The area of highest significance in our TeV gamma-ray observation coincides almost exactly with the brightest area in hard X-rays. The region showing TeV gamma-ray emission with high significance (above the $`3\sigma `$ level) extends beyond our PSF and appears to coincide with the ridge of the NW rim that is bright in hard X-rays; it extends over a region with a radius of $`\sim 0^{\circ}.4`$. This region persisted in similar maps calculated for several alpha cuts narrower than $`30^{\circ}`$.
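The map construction can be sketched as follows. This is our reconstruction of the standard procedure, not the original code: alpha is recomputed for each event with respect to every trial grid point, as the angle between the image major axis and the line from the image centroid to that point, and the significance of the alpha ≤ 30° excess is evaluated there. The array names and the camera-coordinate convention are assumptions.

```python
import numpy as np

def alpha_deg(cx, cy, psi, x0, y0):
    """Angle (deg) between an image's major axis (orientation psi, rad)
    and the direction from its centroid (cx, cy) to a trial source
    position (x0, y0), folded into [0, 90] degrees."""
    phi = np.arctan2(y0 - cy, x0 - cx)
    a = np.degrees(np.abs(phi - psi)) % 180.0
    return np.minimum(a, 180.0 - a)

def significance_map(on, off, beta, grid, alpha_cut=30.0):
    """on, off: dicts of per-event arrays 'cx', 'cy', 'psi' (camera
    coordinates); grid: iterable of (x0, y0) trial positions."""
    sig = []
    for x0, y0 in grid:
        n_on = np.sum(alpha_deg(on["cx"], on["cy"], on["psi"], x0, y0) <= alpha_cut)
        n_off = np.sum(alpha_deg(off["cx"], off["cy"], off["psi"], x0, y0) <= alpha_cut)
        sig.append((n_on - beta * n_off) / np.sqrt(n_on + beta**2 * n_off))
    return np.array(sig)
```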
The integral flux of TeV gamma-rays was calculated, assuming emission from a point source, to be (5.3 $`\pm `$ 0.9 \[statistical\] $`\pm `$ 1.6 \[systematic\]) $`\times `$ 10<sup>-12</sup> photons cm<sup>-2</sup> s<sup>-1</sup> ($`\ge `$ 1.8 $`\pm `$ 0.9 TeV). The flux value and the statistical error were estimated from the excess number $`N_{\mathrm{on}}(30^{\circ})-\beta N_{\mathrm{off}}(30^{\circ})`$, where the value of $`30^{\circ}`$ for alpha is chosen by the argument given above. The systematic errors arise from uncertainties in (a) the assumed differential spectral index, (b) the loss of gamma-ray events due to the parameter cuts, (c) the estimate of the core distance of showers by the Monte Carlo method, (d) the trigger condition, (e) the conversion factor from ADC counts to the number of photoelectrons, and (f) the reflectivity of the reflector. These errors, from (a) to (f), are estimated as 15%, 22%, 3%, 12%, 10%, and 8% for the integral flux and 24%, 2%, 8%, 20%, 29%, and 17% for the threshold energy, respectively. The total systematic errors quoted above are obtained by adding these errors in quadrature.
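As a check on the quoted totals, adding the itemized errors in quadrature reproduces the stated uncertainties to within rounding; a short calculation:

```python
import math

flux_errors = [15, 22, 3, 12, 10, 8]     # causes (a)-(f), % of the integral flux
energy_errors = [24, 2, 8, 20, 29, 17]   # causes (a)-(f), % of the threshold energy

def quadrature(errs):
    """Total error from independent contributions added in quadrature."""
    return math.sqrt(sum(e * e for e in errs))

print(f"flux:   {quadrature(flux_errors):.0f}%")    # ~32%, matching the quoted 1.6e-12 of 5.3e-12
print(f"energy: {quadrature(energy_errors):.0f}%")  # ~47%, matching the quoted 0.9 TeV of 1.8 TeV
```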
To summarise, all our observed data support the hypothesis that the emitting region at the NW rim is extended. In general, the effective detection area of the telescope system for an extended source is reduced by some factor relative to that for a point source, because the gamma-ray detection efficiency decreases with the distance of the emitting point from the center of the FOV when observing with a single dish. We calculated this efficiency as a function of distance by the Monte Carlo method, analyzing the simulated events with the same criteria as applied to the actual data. We estimated the correction factor to the effective area to be $`\sim 1.2`$ for our target by integrating the efficiency over an extended disk-like source of uniform surface brightness with a radius of $`0^{\circ}.4`$. This factor of 1.2 is smaller than the systematic errors estimated above.
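The correction factor can be sketched as the ratio of the on-axis efficiency to the efficiency averaged over a uniform disk. The efficiency profile below is purely illustrative (a hypothetical Gaussian fall-off); the real profile must come from the Monte Carlo simulation described above.

```python
import numpy as np

def correction_factor(eff, radius=0.4, n=10000):
    """Ratio of the on-axis efficiency to the efficiency averaged over
    a uniform-brightness disk of the given angular radius (deg)."""
    r = (np.arange(n) + 0.5) * radius / n  # midpoint radii for the integral
    disk_avg = np.sum(eff(r) * 2 * np.pi * r) * (radius / n) / (np.pi * radius**2)
    return eff(0.0) / disk_avg

# Hypothetical efficiency profile, NOT from the actual simulation:
eff = lambda r: np.exp(-r**2 / (2 * 0.5**2))
print(correction_factor(eff, radius=0.4))  # ~1.17 for this illustrative profile
```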
## 4 Discussion
The SNR RX J1713.7−3946 is reminiscent of SN1006 both in the synchrotron X-ray emission from the shell far from the centre of the remnant and also in the TeV gamma-ray emission from an extended region coincident with that of the non-thermal X-rays. This suggests that the particles responsible for the emission of the high energy photons are accelerated in shocks.
There are several possible emission processes for TeV gamma-rays: emission induced by accelerated protons (via the $`\pi ^0`$ decay process) and by electrons, through bremsstrahlung and/or the Inverse Compton (IC) process. The expected integral flux of gamma-rays above our threshold energy of $`\sim 1.8`$ TeV from the $`\pi ^0`$ decay process is estimated to be $`<4\times 10^{-14}`$ photons cm<sup>-2</sup> s<sup>-1</sup> (Drury et al. drury94 (1994), Naito & Takahara naito94 (1994)), where we assume a distance of 6 kpc and an upper limit of 0.28 atoms/cm<sup>3</sup> for the number density in the ambient space of the remnant (Slane et al. slane99 (1999)). This flux is too low to explain our observed flux, even taking into account the large uncertainties in the estimates of the distance and the ambient matter density of the remnant (Slane et al. slane99 (1999), Tomida tomida99 (1999)). However, there remains the possibility of some contribution from the $`\pi ^0`$ decay process if the remnant is interacting with a molecular cloud located near the NW rim (Slane et al. slane99 (1999)). The emissivity of the bremsstrahlung process relative to the $`\pi ^0`$ decay process is estimated to be $`\sim 10`$ %, assuming a flux ratio of electrons to protons of $`\sim 1/100`$ and power law spectra with an index of 2.4 for both (Gaisser gaisser90 (1990)), indicating that this process too is unlikely to dominate. Therefore, the most likely process for the TeV gamma-ray emission is the IC process.
Under this assumption, the magnetic field strength in the supernova remnant can be deduced from the relation $`L_{\mathrm{syn}}/L_{\mathrm{IC}}=U_\mathrm{B}/U_{\mathrm{ph}}`$ between the IC luminosity $`L_{\mathrm{IC}}`$ and the synchrotron luminosity $`L_{\mathrm{syn}}`$, where $`U_\mathrm{B}=B^2/8\pi `$ and $`U_{\mathrm{ph}}`$ are the energy densities of the magnetic field and the target photon field, respectively. $`L_{\mathrm{syn}}`$ and $`L_{\mathrm{IC}}`$ in this relation must be due to electrons in the same energy range. The value of $`L_{\mathrm{syn}}`$ to be compared with our TeV gamma-ray data is estimated from the ASCA result as $`L_{\mathrm{syn}}=L_{\mathrm{ASCA}}\int_{E_{\mathrm{syn}}^{\mathrm{min}}}^{\infty }E^{-1.44}\,dE\big/\int_{0.5\,\mathrm{keV}}^{10\,\mathrm{keV}}E^{-1.44}\,dE`$, extrapolating the synchrotron spectrum with the same power law beyond the 0.5–10 keV energy range covered by ASCA (Tomida tomida99 (1999)). Here $`L_{\mathrm{ASCA}}=2.0\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup> is the X-ray luminosity in the 0.5–10 keV energy band observed by ASCA from the NW rim of the remnant, and the power law index of 1.44 is the mean energy spectral index of the X-rays in the same range (Tomida tomida99 (1999)). $`E_{\mathrm{syn}}^{\mathrm{min}}=0.14(B/10\mu \text{G})`$ keV is the typical energy of synchrotron photons emitted by the electrons that produce 1.8 TeV photons (the threshold energy of our observation) via the IC process, assuming the target photons are from the CMBR. The value of $`L_{\mathrm{IC}}`$ is calculated to be $`4.2\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup> from our measured number of TeV gamma-ray photons, using the fact that the synchrotron and IC photon spectra follow the same power law when the electrons have a power law spectrum. Thus, inserting $`L_{\mathrm{syn}}`$, $`L_{\mathrm{IC}}`$, and the CMBR energy density $`U_{\mathrm{ph}}=4.2\times 10^{-13}`$ erg cm<sup>-3</sup> into the above relation, we can solve for the magnetic field strength $`B`$. Finally, the magnetic field at the NW rim is estimated to be $`10.8\mu `$G. The extrapolation used to estimate $`L_{\mathrm{syn}}`$ is reasonable, because $`E_{\mathrm{syn}}^{\mathrm{min}}`$ is then estimated to be 0.15 keV; this is not so different from the minimum energy of the ASCA band (0.5 keV).
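This estimate can be checked with a few lines of code: the power-law integrals have closed forms, and solving $`L_{\mathrm{syn}}/L_{\mathrm{IC}}=U_\mathrm{B}/U_{\mathrm{ph}}`$ for $`B`$ reproduces the quoted 10.8 μG. (Strictly, $`E_{\mathrm{syn}}^{\mathrm{min}}`$ depends weakly on $`B`$; the sketch simply uses the self-consistent 0.15 keV from the text.)

```python
import math

L_ASCA = 2.0e-10   # erg cm^-2 s^-1, 0.5-10 keV (Tomida 1999)
L_IC   = 4.2e-11   # erg cm^-2 s^-1, from the TeV excess
U_ph   = 4.2e-13   # erg cm^-3, CMBR energy density
E_min  = 0.15      # keV, E_syn^min, self-consistent with B ~ 10.8 uG

def power_law_integral(e_lo, e_hi=None, index=1.44):
    """Closed form of int E^-index dE; converges at infinity for index > 1."""
    p = 1.0 - index
    hi = 0.0 if e_hi is None else e_hi**p
    return (hi - e_lo**p) / p

L_syn = L_ASCA * power_law_integral(E_min) / power_law_integral(0.5, 10.0)
U_B = (L_syn / L_IC) * U_ph            # from L_syn / L_IC = U_B / U_ph
B = math.sqrt(8.0 * math.pi * U_B)     # gauss
print(f"B = {B * 1e6:.1f} microgauss") # -> B = 10.8 microgauss
```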
The electrons responsible for the synchrotron and IC photon emission are likely to have been accelerated by the shocks in the remnant, as discussed above. If the maximum electron energy is limited by synchrotron losses, it can be estimated by equating the cooling time due to synchrotron losses with the time scale of acceleration by the first-order Fermi process in a strong shock, giving $`50(V_\mathrm{s}/2000\text{ km s}^{-1})(B/10\mu \text{G})^{-0.5}`$ TeV, where $`V_\mathrm{s}`$ is the shock velocity (Yoshida & Yanagita yoshida97 (1997)). On the other hand, equating the acceleration time with the age of the remnant, the maximum energy can be expressed as $`180(V_\mathrm{s}/2000\text{ km s}^{-1})^2(B/10\mu \text{G})(t_{\mathrm{age}}/10000\text{ year})`$ TeV. In either case, whether it is synchrotron losses or the age of the remnant that limits the maximum electron energy (Reynolds & Keohane reynolds99 (1999)), electrons should exist with energies high enough to emit the observed synchrotron X-rays and, via the IC process, the TeV gamma-rays.
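For reference, evaluating both expressions at the field strength estimated above, and at nominal values of the shock velocity and age (which are assumptions, not measurements from this work), gives maximum energies of several tens of TeV or more:

```python
def e_max_sync(v_s_kms, b_uG):
    """Synchrotron-loss-limited maximum electron energy (TeV)."""
    return 50.0 * (v_s_kms / 2000.0) * (b_uG / 10.0) ** -0.5

def e_max_age(v_s_kms, b_uG, t_age_yr):
    """Age-limited maximum electron energy (TeV)."""
    return 180.0 * (v_s_kms / 2000.0) ** 2 * (b_uG / 10.0) * (t_age_yr / 1.0e4)

# Nominal values; the shock velocity and age are not pinned down in the text:
print(e_max_sync(2000.0, 10.8))          # ~48 TeV
print(e_max_age(2000.0, 10.8, 1.0e4))    # ~194 TeV
```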
It is notable that both RX J1713.7−3946 and SN1006 have relatively low magnetic field strengths and low matter densities in their ambient space. These common features may have arisen if the magnetic field was ‘frozen in’ to the matter without amplification other than by compression by shocks, and may be the reason why electrons are accelerated to such high energies. These facts may also explain the radio quietness (Green green98 (1998)) and the weak emissivity of $`\pi ^0`$ decay gamma-rays of the remnants. For SN1006, the low matter density in the ambient space might result from the remnant being located far off the galactic plane and the supernova being of type Ia. For RX J1713.7−3946, the low matter density may be caused by material having been swept out by the stellar wind of the supernova progenitor (Slane et al. slane99 (1999)). The low magnetic field and the low matter density in the ambient space of SN1006 and RX J1713.7−3946 may explain why TeV gamma-rays have been detected so far only for these two remnants.
In conclusion, we have found evidence for TeV gamma-ray emission from RX J1713.7−3946 at the $`5.6\sigma `$ level. If confirmed (à la Weekes weekes99 (1999)), this would be the second case, after SN1006, showing directly that particles are accelerated up to energies of $`\sim 100`$ TeV in a shell-type SNR.
###### Acknowledgements.
We sincerely thank H. Tomida and K. Koyama for providing us with the ASCA data. We also thank the referee for helpful comments on the paper. This work is supported by a Grant-in-Aid for Scientific Research from Japan’s Ministry of Education, Science, and Culture, by a grant from the Australian Research Council and the (Australian) National Committee for Astronomy (Major National Research Facilities Program), and by the Sumitomo Foundation. The receipt of JSPS Research Fellowships (SH, AK, GPR, KS, and TY) is also acknowledged.