## Introduction
Quantum chromodynamics (QCD), the theory of the strong interaction, attracts a large body of theoretical work and experimental investigation. “Applications of QCD” [theory] probably means that the former should have something to do with the latter. This is a severe restriction. It leaves out, for example, phenomena at high temperature and large baryon density, which have not been tested in terrestrial laboratories. Yet there have been fascinating developments in these directions, which reflect once more the complexity that follows from a Lagrangian as simple as $`-G^2/4+`$ quarks. In this talk I concentrate on high energy QCD processes, which means that at least some part of the process must be tractable perturbatively. Even within this narrow frame striving for completeness would do injustice to the diversity of the (sub-)field. The following gives a survey and assessment of recent theoretical results on selected topics. For details please consult the original references and topical reviews. Apologies for omitting topics that should have been included, but have not been for various reasons (lack of time, competence, …).
The conceptual basis for discussing QCD processes at large momentum transfer $`Q`$ is provided by factorization:
$$d\sigma =d\widehat{\sigma }(Q,\mu )\otimes F(\mu ,\mathrm{\Lambda }_{\mathrm{QCD}})+O(\mathrm{\Lambda }_{\mathrm{QCD}}/Q).$$
(1)
The first factor, $`d\widehat{\sigma }(Q,\mu )`$, is insensitive to long distances of order $`1/\mathrm{\Lambda }_{\mathrm{QCD}}`$. It is computed in perturbation theory as the scattering of quarks and gluons and depends only on the strong coupling $`\alpha _s`$ and heavy quark masses. The second factor accounts for the fact that experiments are prepared and measurements are done far away ($`\gg 1`$ fm) from the interaction point. $`F(\mu ,\mathrm{\Lambda }_{\mathrm{QCD}})`$ parametrizes this long-distance sensitivity in terms of process dependent quantities: vacuum condensates, parton distributions, fragmentation functions, light cone wave-functions and many more. In principle, $`F(\mu ,\mathrm{\Lambda }_{\mathrm{QCD}})`$ depends only on $`\alpha _s`$ and light quark masses, but since $`\alpha _s(\mathrm{\Lambda }_{\mathrm{QCD}})`$ is large, we cannot compute it in perturbation theory. However, being independent of the hard scattering process, the same $`F(\mu ,\mathrm{\Lambda }_{\mathrm{QCD}})`$ may appear in a whole class of processes. There are some fortunate cases in which long-distance sensitivity appears only at $`O(\mathrm{\Lambda }/Q)`$ in (1). In these cases, we have particularly clean predictions if $`Q`$ is large enough. In general, we need to provide $`F(\mu ,\mathrm{\Lambda }_{\mathrm{QCD}})`$. It can sometimes be computed non-perturbatively by numerical methods (“lattice QCD”). Or it may be approximated by models of low-energy QCD. More often, however, some measurements are used to determine $`F(\mu ,\mathrm{\Lambda }_{\mathrm{QCD}})`$; others are then predicted. This makes QCD seem to depend on many infrared parameters along with $`\alpha _s`$. It also implies iterations of theory and experiment to arrive at predictions.
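A standard instance of (1), quoted here only for orientation, is inclusive deep inelastic scattering, where the long-distance factor is a parton distribution $`f_i(y,\mu )`$:

$$F_2(x,Q^2)=\sum _i\int _x^1\frac{dy}{y}\,C_i\left(\frac{x}{y},\frac{Q}{\mu },\alpha _s(\mu )\right)f_i(y,\mu )+O(\mathrm{\Lambda }_{\mathrm{QCD}}^2/Q^2),$$

with perturbative coefficient functions $`C_i`$ and a sum over parton species $`i`$.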
Eq. (1) suggests a procedure: for any given large momentum transfer process (i) establish (1) and identify $`F(\mu ,\mathrm{\Lambda }_{\mathrm{QCD}})`$; (ii) compute $`d\widehat{\sigma }(Q,\mu )`$ accurately; (iii) if $`F(\mu ,\mathrm{\Lambda }_{\mathrm{QCD}})`$ is known, predict $`d\sigma `$; otherwise determine $`F(\mu ,\mathrm{\Lambda }_{\mathrm{QCD}})`$ if $`d\sigma `$ is measured; (iv) check the accuracy of this procedure by addressing power corrections $`O(\mathrm{\Lambda }_{\mathrm{QCD}}/Q)`$. The outline of this talk is divided into sections according to this procedure rather than by topics, although in a different order. Sect. 1 covers perturbative calculations, Sect. 2 power corrections. Perturbative expansions of $`d\widehat{\sigma }(Q,\mu )`$ often fail in special kinematic regions, but accurate results can be recovered upon all-order resummations. In Sect. 3 I discuss three representative examples of this situation. Finally, Sect. 4 is devoted to some processes for which factorization has been established more recently.
It is important to remind ourselves that working with QCD we take many things for granted which have never been proven: that (1), obtained from factorization properties of Feynman diagrams, holds non-perturbatively; that the operator product expansion holds non-perturbatively; that perturbative expansions are asymptotic; that lattice QCD approaches the correct continuum limit. The overall picture of consistency that has emerged in applications of QCD suggests a pragmatic attitude towards these problems. However, the questions remain.
## 1 Perturbative calculations
For long-established QCD processes there are no easy perturbative calculations any more. Increasing the accuracy by one order in $`\alpha _s`$ has become technically demanding, usually requiring extensive or automated algebraic manipulations by computers and/or numerical computing. The complications increase with the number of loops, the number of mass scales, or the number of external legs.
### 1.1 More loops
Totally inclusive quantities are related to imaginary parts of correlation functions. This avoids infrared divergences in intermediate expressions. Such quantities are candidates for fully automated evaluation.
The $`\alpha _s^3`$ corrections to $`e^+e^-\to `$ hadrons (massless quarks and gluons) and related observables, and to some deep inelastic scattering sum rules, have been known for some time. More recently, the QCD $`\beta `$-function and the quark mass anomalous dimension have been computed at 4-loop order.
These results use the fact that any 3-loop, massless, 2-point integral is calculable in dimensional regularization. The most important tools are the integration-by-parts method, infrared rearrangement, which reduces the calculation of the 4-loop pole part to the above class of diagrams, and powerful computers that handle the algebra connected with about $`10^4`$ Feynman diagrams. Another important class of diagrams which is generically calculable is 3-loop, massive, vacuum bubble diagrams. There is no obvious way to extend these results to one more loop.
### 1.2 More scales
Observables that depend on more than one kinematic invariant, or on a kinematic invariant and quark masses, are difficult, even if they are totally inclusive. A method that has led to a number of interesting new results is based on asymptotic expansions in a ratio of scales, such that each term in the expansion is a single-scale integral that is analytically solvable. This method can be used even if the expansion parameter is not small, provided many terms in the expansion can be obtained and the radius of convergence is sufficiently large, or convergence can be improved by Padé approximants.
Asymptotic expansions can be performed (i) for large external momenta and small masses, or for large masses and small external momenta; (ii) around the mass shell; (iii) near thresholds or in $`t/s`$ for $`2\to 2`$ scattering; (iv) in Sudakov limits. These expansions are done at the integrand level. The fact that loop momenta cover all scales implies that, in general, extra terms have to be added to the Taylor expansion of the integrand.
A nice example to illustrate the method is the 3-loop coefficient in the relation between the pole mass and the $`\overline{\mathrm{MS}}`$ mass of a heavy quark. This requires 3-loop on-shell integrals, which were not known. Instead one expands the quark self-energy around external momentum $`p^2=0`$, which reduces the problem to 3-loop vacuum bubbles, which are calculable. One then sets $`p^2/m^2=1`$ and uses Padé approximants. Expansion to order $`(p^2/m^2)^{14}`$ (plus information from the opposite limit $`p^2\gg m^2`$) gives $`r_3=3.10\pm 0.06`$ for the coefficient at order $`\alpha _s^3`$ ($`n_f=4`$). (In retrospect, this turns out to be a “bad” example, because the 3-loop on-shell integrals are in fact exactly calculable. The exact number is $`r_3=3.0451\ldots `$, in nice agreement with the previous semi-analytic result.)
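As a toy illustration of the series-plus-Padé technique just described, with $`\mathrm{ln}(1+x)/x`$ standing in for the self-energy (the physical 3-loop coefficients are not reproduced), one can check how well a diagonal Padé approximant built from the Taylor coefficients does at the edge of the circle of convergence, $`x=1`$:

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of f(x) = ln(1+x)/x = sum_n (-1)^n x^n/(n+1);
# the series has radius of convergence 1, and we evaluate at x = 1,
# mimicking the step "expand around p^2 = 0, then set p^2/m^2 = 1".
coeffs = [(-1.0) ** n / (n + 1) for n in range(15)]

p, q = pade(coeffs, 7)      # diagonal [7/7] Pade approximant
x = 1.0
print("Pade [7/7] at x=1 :", p(x) / q(x))
print("exact ln(2)       :", np.log(2.0))
# The truncated Taylor series at x = 1 converges only like an alternating
# harmonic series; the Pade approximant agrees with ln(2) to ~10 digits.
```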
Applications of this method up to now concern quantities with internal masses or on-shell lines. $`e^+e^-\to b\overline{b}X`$ has been obtained at order $`\alpha _s^2`$ for general $`q^2`$ and in an expansion near threshold. The $`\alpha _s^2`$ corrections to inclusive heavy quark decays have been calculated for $`b\to cl\nu `$, $`t\to bW`$ and $`b\to ul\nu `$. The last result is particularly impressive, because the asymptotic expansion is obtained algebraically to all orders. It is then resummed to an exact result.
### 1.3 More legs
Higher order jet calculations pose a different sort of challenge, because the kinematics becomes complicated (as the number of jets increases), and because the calculation is done at the amplitude level. Infrared singularities cancel in an intricate way, or are factorized into parton densities (fragmentation functions) after cancellations. Almost certainly the final result is obtained only after numerical integration.
Relatively recent results include NLO corrections to $`e^+e^-\to 4`$ jets and to $`e^+e^-\to 3`$ jets with quark mass effects, which provide us with first, yet imprecise, evidence for the scale dependence of the bottom quark mass. Partial NLO results exist for $`p\overline{p}\to 3`$ jets. The full result is expected to be completed soon.
### 1.4 Towards NNLO jets
The conceptual and technical frontier is set by NNLO jet calculations, the basic processes being $`2\to 2`$ ($`pp\to 2`$ jets or 1 jet inclusive, $`pp\to \gamma \gamma X`$) or $`1\to 3`$ ($`e^+e^-\to 3`$ jets). NNLO calculations provide detailed insight into jet structure and a better determination of $`\alpha _s`$. In $`e^+e^-\to 3`$ jets they are important to understand the interplay between perturbative and power corrections.
There are several components to the NNLO jet project. The amplitudes have to be computed, which includes 2-loop 4-point diagrams. Amplitudes with five and six partons have to be integrated analytically over the singular regions of phase space. After cancellation of infrared divergences, the remaining phase space integrals have to be evaluated efficiently by numerical methods. There has been progress on many of these components recently.
Because of the integration over singular regions of phase space even tree amplitudes are non-trivial. In $`2\to 4`$ tree amplitudes one encounters a new situation, when two partons become simultaneously soft or three partons become collinear. The latter case (squared and integrated over phase space) gives rise to a new class of splitting amplitudes (functions), in which one parton decays into three collinear partons; these generalize the usual splitting functions. All soft, collinear, and mixed limits have now been analyzed. Likewise, although the 1-loop five-point amplitudes are known in four dimensions, this is insufficient, because the two-jet cross section includes configurations where two of the partons are not resolved. Making use of the universality of soft and collinear limits, these amplitudes are now known to all orders in the dimensional regularization parameter $`\epsilon `$ in those kinematic regions where the phase space integration is singular.
The most difficult amplitude is the 2-loop virtual correction to the basic $`2\to 2`$ (or $`1\to 3`$) process. Until very recently, it was unclear whether the basic scalar double box integrals are analytically calculable. In a stunning calculation an analytic result was obtained for the planar double box integral, expressed in terms of elementary special functions, and an algorithm was provided to compute the integral with an arbitrary numerator. It is equally surprising that this result was obtained by elementary methods: the $`\alpha `$-representation, Mellin-Barnes transformations, and summations of the multiple sums obtained after carrying out the Mellin-Barnes integrations. The crossed double box was subsequently calculated using the same methods. The numerator algebra, connected to multiple products of three-gluon vertices, is, however, highly non-trivial and remains to be done. Methods based on helicity amplitudes, colour decomposition, special gauges and unitarity exist to simplify the task. Up to now this has been completed in a toy $`N=4`$ supersymmetric theory (leaving the scalar integral unevaluated), and more recently for the maximally helicity violating amplitude in QCD.
Many of these results can also be used for NNLO corrections to $`e^+e^-\to 3`$ jets. However, the 2-loop double box integrals with one off-shell external leg are not yet known. The infrared singularities at order $`1/\epsilon ^{4,3,2}`$ are known, but the structure of the $`1/\epsilon `$ poles remains to be elucidated.
In my opinion, the results that have been achieved over the past two years make success predictable, at least for NNLO 2 jets. On the other hand, many hard algebraic and numerical tasks remain to be done. Even with a concerted effort the relevant time scale is years rather than months. However, this is clearly a beautiful case, in which many of the most advanced techniques for perturbative QCD calculations merge into a single project.
### 1.5 NNLO parton evolution
NNLO jets require NNLO parton distributions. Evolution of these parton distributions requires the NNLO DGLAP splitting functions. The complete NNLO splitting functions are still unknown, although some moments were computed some time ago and further constraints exist in the large-$`x`$ and small-$`x`$ limits.
A long evolution means large $`Q^2`$, since parton distributions are typically determined experimentally at some low scale. Large $`Q^2`$ means large $`x`$, and hence the known moments may already provide accurate information. Indeed, first constructions of approximate NNLO non-singlet splitting functions that make use of the known information have appeared. The constraints at small $`x`$ turn out to be quite weak, but there seems to be little uncertainty for $`x>0.1`$. When the splitting function is folded with a typical parton distribution this range extends down to $`x>0.02`$.
## 2 Beyond leading power
Perturbative expansions, if computed to arbitrarily high order, ultimately diverge. They become useless beyond a certain order, unless they are summed. In recent years, we have learnt to turn this embarrassment into a benefit, since the pattern of divergence tells us something about the scaling of power corrections $`(\mathrm{\Lambda }_{\mathrm{QCD}}/Q)^n`$ to a hard scattering cross section. A particularly interesting type of divergence, called an infrared renormalon, is related to the integration over small loop momentum in Feynman integrals. Roughly speaking, there is a relation between perturbative long-distance sensitivity, the size of perturbative coefficients in higher orders, and the scaling of non-perturbative power corrections. For inclusive deep inelastic scattering, $`n=2`$, and one recovers the higher twist corrections predicted by the operator product expansion.
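A minimal numerical sketch of this connection, using a model series $`r_n=n!\,b^n`$ rather than a real QCD observable (both $`b`$ and the coupling below are placeholder values):

```python
import math

b, alpha = 0.5, 0.15          # model renormalon parameter and coupling

# Terms of the asymptotic series sum_n r_n * alpha^(n+1) with r_n = n! b^n.
terms = [math.factorial(n) * b ** n * alpha ** (n + 1) for n in range(25)]

# The series diverges; its best accuracy is set by the minimal term,
# reached around n* ~ 1/(b*alpha).
n_star = min(range(len(terms)), key=lambda n: terms[n])
print("minimal term at n* =", n_star, ", expected ~", 1 / (b * alpha))
print("size of minimal term:", terms[n_star])
print("exp(-1/(b*alpha))   :", math.exp(-1 / (b * alpha)))
# With alpha = alpha_s(Q) running, exp(-1/(b*alpha_s)) translates into a
# power of Lambda_QCD/Q -- the power correction referred to in the text.
```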
### 2.1 Event shape observables and energy flow
For other, less inclusive, observables, such as event shape variables in $`e^+e^-`$ and $`ep`$ collisions, one often finds $`n=1`$. Since these variables are of order $`\alpha _s`$ perturbatively, they are prone to large non-perturbative (and perturbative) corrections. They have been investigated intensively over the past two years, both theoretically and experimentally.
The leading power correction originates from soft partons emitted from a fast, nearly back-to-back $`q\overline{q}`$ pair. Write
$$S=S^{\mathrm{pert}}(\mu _I)+\frac{\mu _I}{Q}S^{NP}(\mu _I)+O(\mathrm{\Lambda }_{\mathrm{QCD}}^2/Q^2)$$
(2)
for an average event shape variable $`S`$. The experimentally measured energy dependence of $`S`$ clearly supports the existence of a $`1/Q`$ power correction with a reasonably sized normalization $`S^{NP}(\mu _I)`$, which is non-perturbative. An interesting hypothesis (also applied to event shape distributions) states that the non-perturbative corrections are universal, i.e. $`S^{NP}(\mu _I)=c_S\overline{\alpha }(\mu _I)`$, where $`c_S`$ is observable dependent but calculable, and $`\overline{\alpha }(\mu _I)`$ is non-perturbative but independent of $`S`$. (This applies to thrust, the jet masses and the $`C`$ parameter. Other event shapes, such as the jet broadenings, involve complications.) Four-parton final states with two soft partons have also been investigated. Remarkably, one finds that $`c_S`$ is rescaled by the same factor for a variety of shape observables. The universality hypothesis has led to a number of instructive experimental tests. Recent results on average event shapes and event shape distributions in $`e^+e^-`$ annihilation and DIS tend to confirm the hypothesis within the expected accuracy. However, the fact that the value of $`\alpha _s`$, fitted simultaneously to each $`S`$, is somewhat unstable indicates that the present understanding is not perfect.
Universality may hold for a special class of observables, but it would be surprising if it held in general. What is needed to shed light on the issue is a factorization theorem for soft gluons beyond leading power. Recall that factorization theorems for event shapes usually demonstrate that soft gluon corrections cancel at leading power. We are now interested in the leading contribution that is left over after this cancellation.
One approach formulates the problem in terms of the energy flow of soft particles. The universal, non-perturbative objects relevant to the two-jet limit ($`q\overline{q}`$ plus soft partons) are
$$G(\vec{n}_1,\ldots ,\vec{n}_k;\mu _I)=\langle 0|W^{\dagger }\prod _{i=1}^{k}\mathcal{E}(\vec{n}_i)\,W|0\rangle ,$$
(3)
where $`\mathcal{E}(\vec{n}_i)`$ measures soft energy flow in the direction of $`\vec{n}_i`$, $`\mu _I`$ is a factorization scale that defines what “soft” means, and $`W`$ denotes a product of eikonal lines for the energetic $`q\overline{q}`$ pair. The $`G(\vec{n}_1,\ldots ,\vec{n}_k;\mu _I)`$ are horribly complicated objects and it is hardly conceivable that they could ever be extracted from measurements. However, the fact that they are independent of the hard scale $`Q`$ already entails interesting predictions. For example, event shape distributions can be expressed as a convolution of a perturbative distribution and a non-perturbative, $`Q`$-independent but observable-dependent “shape function” that follows from these energy flow correlation functions. Event shape averages can be represented as
$$S_{1/Q}=\int d\vec{n}\,w_S(\vec{n})\,G(\vec{n}),$$
(4)
with a calculable weight function $`w_S(\vec{n})`$. The single energy flow correlation function $`G(\vec{n})`$ can in principle be determined from the leading power correction to the energy-energy correlation.
I find this a promising step towards understanding soft power corrections. The concept of energy flow is clearly important and deserves more attention, as it corresponds directly to calorimetric measurements. Observables that can be represented in terms of energy flow are automatically infrared safe. They may also be defined non-perturbatively and may therefore be amenable to a more systematic analysis of power corrections.
### 2.2 OPE of the plaquette
There are also things that don't work as expected. Consider the operator product expansion (OPE) of the plaquette expectation value in pure gauge theory at finite lattice spacing, i.e. the inverse lattice spacing takes the role of the scale $`Q\gg \mathrm{\Lambda }_{\mathrm{QCD}}`$. The OPE gives
$$\langle \text{plaquette}\rangle =\sum _{n=1}c_n\,\alpha _s^{\mathrm{latt}}(Q)^n+\frac{C(\alpha _s^{\mathrm{latt}})}{Q^4}\Big\langle \frac{\alpha _s}{\pi }GG\Big\rangle +\ldots .$$
(5)
The coefficients $`c_n`$ of the perturbative expansion have been computed numerically to 8th order. After transformation to a continuum coupling definition, the coefficients exhibit the expected infrared renormalon growth. Summing the series approximately should give an accuracy of order $`1/Q^4`$; hence, after subtracting the summed series from non-perturbative Monte Carlo data for $`\langle \text{plaquette}\rangle `$, the remainder should scale as $`1/Q^4`$, consistent with the scaling of the gluon condensate term.
Contrary to this expectation, the remainder is found to approach a perfect $`1/Q^2`$ scaling behaviour. Since the OPE is one of the few tools we have to go beyond perturbation theory, this is clearly something we should understand. There may be subtleties in the transformation to the continuum scheme, since this transformation is not known to 8th order and may itself have power corrections. The effective action at finite lattice spacing contains an infinite set of higher dimension operators. Could these add up to a $`1/Q^2`$ power correction, so that the result is a lattice artefact? But there may be less profane explanations, such as power corrections from short distances that affect coefficient functions (and which therefore would not contradict the OPE). This possibility is not ruled out by any argument. It presents a fundamental question that challenges our understanding of non-perturbative short-distance expansions. It would also have implications for the phenomenology of power corrections to current correlation functions. For these reasons, the problem raised by this result should be cleared up!
## 3 Perturbative resummations
Returning to perturbative expansions, it is not unusual that a perturbative expansion in $`\alpha _s`$ breaks down, even though the coupling constant is small. This happens because the smallness of the coupling constant is compensated by a large kinematic invariant. In effect, one is dealing with a multi-scale problem. If all scales are large compared to $`\mathrm{\Lambda }_{\mathrm{QCD}}`$, the problem is perturbative and may be subjected to systematic all-order resummations. The kinematic conditions leading to the breakdown of perturbation theory can be quite different and the resummations reflect completely different physics. In this section I discuss three examples of such resummations, where progress has been made over the past two years.
### 3.1 Parton thresholds
A familiar source of large kinematic corrections is related to partonic thresholds. Consider the differential cross section
$$d\sigma =\sum _{i,j}f_{i/A}\otimes f_{j/B}\otimes d\widehat{\sigma }_{ij\to f}$$
(6)
for a hard hadron-hadron collision. Large logarithms appear in $`d\widehat{\sigma }`$ when the cms energy $`\widehat{s}`$ of $`i+j`$ is just large enough to produce a given final state. For example, in the production of a massive vector boson with mass $`Q`$, the leading correction at order $`\alpha _s^n`$ is $`\alpha _s^n\mathrm{ln}^{2n-1}(1-z)/[1-z]_+`$, where $`z=Q^2/\widehat{s}`$, and perturbation theory breaks down for $`z\to 1`$.
In this case large logarithms originate from the lack of phase space for real emission and the incomplete cancellation of sensitivity to collinear and soft momentum. Because of this relation the structure of these logarithms is well understood. The logarithms exponentiate and can be resummed:
$$\int dz\,z^{N-1}\,d\widehat{\sigma }(z)=H(\alpha _s)\,\mathrm{exp}\left[\mathrm{ln}N\,g_1(\alpha _s\mathrm{ln}N)+g_2(\alpha _s\mathrm{ln}N)+\alpha _sg_3(\alpha _s\mathrm{ln}N)+\ldots \right]+O(1/N).$$
(7)
This resummation was worked out at next-to-leading logarithmic order (i.e. including $`g_2(\alpha _s\mathrm{ln}N)`$) some years ago for $`2\to 1`$ processes (massive vector boson production) and $`1\to 2`$ processes (event shape variables in $`e^+e^-`$ in the 2-jet limit).
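Schematically (with generic coefficients $`g_i^{(n)}`$), the functions in the exponent of (7) organize the logarithms as

$$\mathrm{ln}N\,g_1(\alpha _s\mathrm{ln}N)=\sum _{n\ge 1}g_1^{(n)}\,\alpha _s^n\mathrm{ln}^{n+1}N\quad (\mathrm{LL}),\qquad g_2(\alpha _s\mathrm{ln}N)=\sum _{n\ge 1}g_2^{(n)}\,\alpha _s^n\mathrm{ln}^nN\quad (\mathrm{NLL}),$$

so that expanding the exponential reproduces the fixed-order towers $`\alpha _s^n\mathrm{ln}^{2n}N`$ and $`\alpha _s^n\mathrm{ln}^{2n-1}N`$ of the Mellin moments.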
Next-to-leading logarithmic resummation has now been extended to $`2\to 2`$ scattering processes. Several new complications appear in this case. Since the underlying hard process depends on an additional kinematic invariant, $`(-\widehat{t})/\widehat{s}`$, so do the functions that appear in the exponent of (7). Furthermore, the $`2\to 2`$ amplitude contains several colour amplitudes, and since soft gluon emission carries away colour, these amplitudes mix, turning the exponential into a matrix exponential on the independent colour amplitudes. While the structure of the resummation thus remains the same, the technical complications make the formalism more difficult to apply in practice.
Fortunately, simplifications occur for total cross sections. NLL resummed results have been presented for heavy quark production and prompt photon production. For di-jet production at large transverse momentum, the formalism is in principle complete, but it has not yet been implemented. It turns out that at energies of interest for heavy quark production and prompt photons, the effect of resummation is typically small, i.e. within the renormalization scale variation of a fixed order NLO calculation. The real benefit of resummation is a significant reduction of this scale dependence compared to NLO QCD, and hence, probably, of the theoretical uncertainty. The $`E_T`$ spectrum of prompt photons at low $`E_T`$ remains in disagreement with the data. Since at $`E_T`$ of several GeV power corrections in $`1/E_T`$, or intrinsic transverse momentum, can be very important, this is hardly a serious issue. It is, however, a serious problem for determining the gluon distribution at large $`x`$.
### 3.2 Non-relativistic
A different kind of partonic threshold is encountered in heavy quark production in $`e^+e^-`$ annihilation. When the cms energy squared is just above $`4m_Q^2`$, the quark and antiquark move at small relative velocity and attract each other through a strong Coulomb force, even if $`\alpha _s`$ is small. Formulated as a perturbative resummation problem, we need the terms
$$R_{e^+e^-\to Q\overline{Q}X}\sim v\sum _{n=0}^{\infty }\left(\frac{\alpha _s}{v}\right)^n\left\{1\ (\text{LO});\ \alpha _s,v\ (\text{NLO});\ \alpha _s^2,\alpha _sv,v^2\ (\text{NNLO});\ \ldots \right\}$$
(8)
at leading order (LO), next-to-leading order (NLO), etc., where $`v`$ is the small relative velocity.
The LO resummation is done by solving for the Green function of the Schrödinger equation with the Coulomb potential. To be more systematic, such concepts from quantum mechanics have to be derived from QCD, incorporating correctly the short-distance structure of QCD. This is done through a sequence of non-relativistic effective field theories. Quark and gluon modes can be classified as hard (h), soft (s), potential (p) and ultrasoft (us). These modes are then integrated out successively, according to the scheme $`\mathcal{L}_{\mathrm{QCD}}[Q(h,s,p);g(h,s,p,us)]\to \mathcal{L}_{\mathrm{NRQCD}}[Q(s,p);g(s,p,us)]\to \mathcal{L}_{\mathrm{PNRQCD}}[Q(p);g(us)]`$, passing from QCD to non-relativistic QCD (NRQCD) to potential non-relativistic QCD (PNRQCD). The equation of motion of PNRQCD is exactly the Schrödinger equation, with corrections to it that encode the information on the short-distance modes that have been integrated out.
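For orientation, the LO resummation of the $`(\alpha _s/v)^n`$ terms in (8), obtained from the Coulomb Green function, is the familiar Sommerfeld factor (quoted here schematically):

$$R_{\mathrm{LO}}\propto v\,\frac{X}{1-e^{-X}},\qquad X=\frac{\pi C_F\alpha _s}{v},$$

which approaches the finite value $`\pi C_F\alpha _s`$ as $`v\to 0`$: the Coulomb attraction compensates the phase-space suppression at threshold.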
With the help of this method the NNLO resummation has been performed. This leads to first-principles NNLO calculations of $`t\overline{t}`$ production near threshold (in $`e^+e^-`$ collisions). The NNLO correction turns out to be very important and has led to the conclusion that it is the $`\overline{\mathrm{MS}}`$ top quark mass rather than the pole mass that can be determined more accurately, though indirectly, from the cross section near threshold. Another important application concerns the determination of the $`b`$ quark mass from $`e^+e^-\to b\overline{b}X`$. The recent analyses that take care of adequate bottom mass renormalization prescriptions converge towards a common value for the bottom quark $`\overline{\mathrm{MS}}`$ mass, which I average as $`\overline{m}_b(\overline{m}_b)=4.23\pm 0.08`$ GeV. The centre of attention is now on understanding logarithmic corrections in $`v`$.
### 3.3 High energy, small $`x`$
The high energy limit $`s\gg Q^2`$ of QCD cross sections is an old, yet unsolved problem. Large logarithms can appear either in the high-energy limit of hard partonic reactions, such as in $`\gamma ^{*}\gamma ^{*}`$ scattering or forward jet production, or in the small-$`x`$ behaviour of parton distributions and their evolution. The leading logarithms $`(\alpha _s\mathrm{ln}(s/Q^2))^n`$ were summed long ago by means of the BFKL equation. This leads to cross sections that rise with energy as $`s^{\overline{\alpha }_s\,4\mathrm{ln}2}`$ ($`\overline{\alpha }_s=N_c\alpha _s/\pi `$). For many years most theoretical work was concerned with the physical mechanism that would make the high energy limit compatible with unitarity, but a quantitative theory has not yet emerged. Most of the recent activity in small-$`x`$ physics has, however, been inspired by the completion of the NLO correction to the BFKL kernel, and its interpretation. The following discussion concentrates on this aspect.
Recall that phenomenological applications of LO BFKL theory have remained ambiguous or unsuccessful. HERA data on the gluon density indicate that DGLAP evolution works well, in fact too well, down to $`x\sim 10^{-6}`$. No resummation of $`\mathrm{ln}x`$ corrections to the evolution kernels is required. There is some flexibility in the input gluon distribution; nevertheless the message is that departures from DGLAP cannot be large. Virtual photon scattering has been measured at LEP. Even allowing for the fact that LO BFKL may not predict the normalization of the cross section well, the observed energy dependence is less steep than predicted. Forward pion production at HERA may be described by LO BFKL, but other interpretations of the data seem possible.
It is therefore clearly interesting to see how NLO corrections affect this comparison. In the high energy limit, the cross section factorizes schematically as
$$\sigma =\int \frac{d^2k_1}{k_1^2}\,\mathrm{\Phi }_A(k_1)\int \frac{d^2k_2}{k_2^2}\,\mathrm{\Phi }_B(k_2)\int \frac{d\omega }{2\pi i}\left(\frac{s}{k_1k_2}\right)^\omega G_\omega (k_1,k_2),$$
(9)
where $`A`$ usually represents a virtual photon and $`B`$ a virtual photon or a proton. In the latter case the impact factor $`\mathrm{\Phi }_p(k_2)`$ is not perturbatively calculable. $`k_{1,2}`$ denote the transverse momenta of the scattering objects, $`k\sim Q`$ for virtual photons and $`k\sim \mathrm{\Lambda }_{\mathrm{QCD}}`$ for protons. The factorized form (9) is believed to hold to next-to-leading logarithmic order, but beyond this order there are terms that cannot be associated with the four-(reggeized)-gluon Green function $`G_\omega (k_1,k_2)`$. $`G_\omega (k_1,k_2)`$ satisfies the BFKL equation
$$\omega G_\omega (k_1,k_2)=\delta ^{(2)}(k_1-k_2)+\int \frac{d^2k}{\pi }K_\omega (k_1,k)G_\omega (k,k_2).$$
(10)
Roughly speaking, the leading order kernel $`K_\omega (k_1,k)`$ sums a single gluon ladder exchanged between $`A`$ and $`B`$ with emissions ordered in longitudinal momentum. The NLO correction has to account for all configurations in which one power of $`\mathrm{ln}x`$ is lost. Partial results had been collected over many years, and the full NLO correction has finally been completed. It is usually presented through the action of the kernel on a set of test functions:
$$\int d^2k^{\prime }\,K_\omega (k,k^{\prime })\left(\frac{k^{\prime 2}}{k^2}\right)^{\gamma -1}=\overline{\alpha }_s\chi _0(\gamma )\left[1-\beta _0\overline{\alpha }_s\mathrm{ln}\frac{k^2}{\mu ^2}\right]+\overline{\alpha }_s^2\chi _1(\gamma ).$$
(11)
In the saddle point approximation for the inverse Mellin integrals, treating $`\overline{\alpha }_s`$ as small, the energy growth $`s^\lambda `$ of hard high energy cross sections is then determined by
$$\lambda =\overline{\alpha }_s\chi _0(1/2)+\overline{\alpha }_s^2\chi _1(1/2)=\overline{\alpha }_s\,4\mathrm{ln}2\left[1-6.5\,\overline{\alpha }_s\right],$$
(12)
ignoring the scale dependent part of the kernel. The NLO correction is huge, large enough to modify qualitatively the conclusions drawn from leading order, which is good. At the same time, the NLO kernel taken at face value leads to nonsensical results, unless $`\overline{\alpha }_s\lesssim 0.05`$, which is unrealistically small.
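A two-line numerical check of (12) makes the problem explicit (the values of $`\overline{\alpha }_s`$ are illustrative):

```python
import math

# NLO BFKL intercept from eq. (12): lambda = abar*4*ln(2)*(1 - 6.5*abar),
# with abar = N_c*alpha_s/pi.
for abar in (0.05, 0.10, 0.15, 0.20):
    lo = abar * 4 * math.log(2)
    nlo = lo * (1 - 6.5 * abar)
    print(f"abar = {abar:.2f}   LO: {lo:.3f}   NLO: {nlo:+.3f}")
# Already around abar ~ 0.15 the NLO intercept passes through zero, and
# for a realistic abar ~ 0.2 it is negative.
```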
Much effort has gone into the question whether the NLO result invalidates the BFKL resummation programme as a whole. To answer this question one has to go beyond a systematic resummation of high energy logarithms. Such a step is unavoidably ambiguous and needs to be motivated by physics arguments. It appears that much of the NLO characteristic function $`\chi _1(\gamma )`$, even near $`\gamma =1/2`$, can be understood from the singularities at $`\gamma =0`$ and $`1`$. Note from (11) that these singularities correspond to transverse logarithms. The leading singularities $`1/\gamma ^{3,2}`$, $`1/(1-\gamma )^{3,2}`$ are related to the symmetric energy scale $`k_1k_2`$ chosen in (9), the running coupling, and the non-singular terms of the LO DGLAP splitting function. Remarkably, $`\chi _1(\gamma )`$ is extremely well reproduced just by keeping these singularities.
This suggests that these singularities should be summed to all orders. Unphysical transverse logarithms generated by the symmetric energy scale can be removed by replacing
$$\chi _0(\gamma )\to \chi _0^\omega (\gamma )=2\psi (1)-\psi (\gamma +\omega /2)-\psi (1-\gamma +\omega /2).$$
(13)
Although not unique, this seems to be a particularly natural choice. After performing this replacement, the NLO correction is reduced, though not small. Further support for this resummation comes from the possibility of introducing a “rapidity veto” $`y_{i+1}-y_i>\mathrm{\Delta }`$, which is essentially a hard cut-off on the momentum region where the ordering in rapidity was not a good approximation in the first place. After resummation, the $`\mathrm{\Delta }`$-dependence is small and the NLO correction moderate for all $`\mathrm{\Delta }`$, which indicates that the resummed kernel is less sensitive to momentum regions where the approximations necessary to derive it are not valid.
The remaining $`\gamma `$-singularities are double poles. Two further modifications beyond NLO small-$`x`$ logarithms need to be performed to take care of them. First, rather than improving the DGLAP anomalous dimension by small-$`x`$ logarithms, we can take the opposite point of view and improve $`\chi (\gamma )`$ by taking into account all information on collinear logarithms. In this way one can arrange, in addition, for momentum conservation, which requires a vanishing anomalous dimension at $`\omega =1`$. Second, the 1-loop evolution of $`\alpha _s`$ can be taken into account exactly, rather than perturbatively as in (11). There are two cases to consider: symmetric processes with $`k_1\sim k_2\gg \mathrm{\Lambda }_{\mathrm{QCD}}`$ and asymmetric processes. In the latter case, with $`Q\sim k_1\gg k_2\sim \mathrm{\Lambda }_{\mathrm{QCD}}`$ as for deep inelastic scattering, one must also apply collinear factorization to the four-gluon Green function, such that
$$G_\omega (k_1,k_2)=F_\omega ^{\mathrm{UV}}(k_1)F_\omega ^{\mathrm{IR}}(k_2)+O(k_2^2/k_1^2).$$
(14)
The dependence on the non-perturbative low momentum evolution of the running coupling is factorized into $`F_\omega ^{\mathrm{IR}}(k_2)`$, which can be absorbed into the input gluon distribution. This part remains beyond perturbative control, although it may well control the actual small-$`x`$ behaviour of the gluon distribution. On the other hand, only $`F_\omega ^{\mathrm{UV}}(k_1)`$ is $`Q`$-dependent and hence determines the evolution of the gluon density.
There does not yet seem to be a unanimous opinion on which of these aspects is most important. For example, some authors argue that $`\lambda `$ should be considered as a non-perturbative parameter, while others take a less agnostic attitude. Yet another analysis emphasizes the role of the running coupling, demonstrating that the effective scale of $`\alpha _s`$ in the anomalous dimension increases as $`x`$ decreases, because of ultraviolet diffusion. It is also claimed that this leads to an improved fit to structure function data compared to a standard DGLAP fit, which is definitely interesting. Despite these different viewpoints, theory clearly seems to be on the right track, as the results consistently point towards a smaller (but positive) hard pomeron intercept compared to LO BFKL. The resummed gluon anomalous dimension is also close to the DGLAP one down to rather small moments. It will be interesting to see consolidation of this field and the first true NLO+improved BFKL predictions for physical processes (which need the as yet unknown NLO impact factors).
## 4 Novel factorization “theorems”
In the previous sections I discussed hard scattering processes that have long been known as such. For other processes, factorization of the short-distance part has been established only recently. Often factorization comes at the expense of introducing new non-perturbative parameters. Even if these parameters are not accessible immediately, much is gained in terms of conceptual clarity. In this section I discuss three examples of such “new” applications of QCD.
### 4.1 Hard diffraction
A particularly nice example is hard diffraction. Although it was discovered in hadron-hadron collisions by UA8 about a decade ago, after inspiring earlier theoretical work, the extent to which hard diffraction is a hard process remained rather unclear. This has changed completely with the arrival of accurate data on hard diffraction in $`ep`$ scattering, the demise of Regge terminology, and the realization that hard diffraction in DIS can be described in close analogy with inclusive DIS.
In hard diffractive DIS, $`\gamma ^{*}p\to Xp`$, the proton scatters (quasi-)elastically off the virtual photon, which fragments into a colour neutral cluster $`X`$. The scattered proton is usually not detected, but since it typically loses only a small fraction of its momentum, the event is identified by a large gap in rapidity between $`p`$ and $`X`$. About 10% of all DIS events are rapidity gap events. Furthermore, hard diffraction is not suppressed by $`1/Q^2`$ relative to inclusive DIS.
In close analogy with inclusive DIS, the diffractive cross section factorizes into a short-distance cross section and a diffractive parton distribution:
$$\frac{d\sigma ^D(x,Q^2,\xi ,t)}{d\xi dt}=\sum _{i=q,g}\int _x^\xi dy\,\widehat{\sigma }^{\gamma ^{*}i}(Q,x,y;\mu )\frac{df_i^D(y,\xi ,t;\mu )}{d\xi dt}.$$
(15)
The diffractive parton distribution $`f_i^D(y,\xi ,t;\mu )`$ represents the probability of finding parton $`i`$ in the proton with momentum fraction $`y`$ under the condition that the proton stays intact and loses the longitudinal momentum fraction $`\xi `$. Note that this definition makes no reference to Regge factorization or the pomeron. Neither does it make reference to a rapidity gap $`\mathrm{\Delta }y`$, which follows from kinematics alone when $`\xi `$ is small: $`\mathrm{\Delta }y\sim \mathrm{ln}(1/\xi )`$. The hard scattering occurs on a single parton as in ordinary DIS. The dynamics that is responsible for the formation of a colour-singlet cluster is non-perturbative and therefore part of the definition of the diffractive parton distribution.
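A toy numerical evaluation of the convolution (15) may make its structure concrete; the hard cross section and the diffractive quark density below are placeholder shapes with arbitrary normalization, not fits to data:

```python
from scipy.integrate import quad

def sigma_hat(x, y):
    # placeholder hard coefficient: vanishes at the endpoints z = x/y -> 0, 1
    z = x / y
    return z * (1.0 - z)

def f_D(y, xi):
    # placeholder diffractive quark density, a simple shape in beta = y/xi
    beta = y / xi
    return beta * (1.0 - beta) ** 2 / y

def dsigma_D(x, xi):
    # eq. (15): convolute the hard cross section with f^D over x < y < xi
    val, _ = quad(lambda y: sigma_hat(x, y) * f_D(y, xi), x, xi)
    return val

print(dsigma_D(x=1e-3, xi=0.01))
```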
The physical picture of hard diffraction is perhaps clearest in the proton rest frame and is reminiscent of the “aligned jet model”. In the proton rest frame, at small Bjorken $`x`$, the virtual photon splits into a $`q\overline{q}`$ pair long before it hits the proton. The $`q\overline{q}`$ wave-function of the virtual photon suppresses configurations in which one of the quarks carries almost all the momentum. Yet it is these configurations that give rise to a large diffractive cross section, because the wave-function suppression is compensated by the large cross section for the scattering of a $`q\overline{q}`$ pair of hadronic transverse size off the proton. The harder of the two quarks is essentially a spectator to diffractive scattering. The scattering of the softer quark off the proton is non-perturbative and cannot be described by the exchange of a finite number of gluons. Hence there is an unsuppressed probability that the softer quark leaves the proton intact. This explains the leading twist nature of hard diffraction. The details of the scattering of the softer quark off the proton are encoded in the diffractive quark distribution. In a similar way, the $`q\overline{q}g`$ configuration of the virtual photon, in which the $`q\overline{q}`$ pair carries almost all the momentum, gives rise to the diffractive gluon distribution.
Because the short-distance cross section $`\widehat{\sigma }^{\gamma ^{*}i}`$ of hard diffractive DIS is identical to that of inclusive DIS, the evolution of the diffractive parton distributions is identical to that of ordinary parton distributions. It follows that the characteristics of diffraction are entirely contained in the input distributions at a given scale. It is therefore interesting to model these distributions. The original idea of a partonic content of the pomeron can be interpreted as an ansatz in which the diffractive parton distribution factorizes into a pomeron flux factor, which determines the $`\xi `$ dependence, and a parton distribution in the pomeron which depends only on $`\beta =x/\xi `$. The precise data from HERA no longer support this simple ansatz, although the problem can be fixed by adding more Regge poles. More recent approaches model the proton field off which the Fock states of the virtual photon scatter. The semi-classical approach, which preceded the factorization theorem, can be formulated in such a way that it models the diffractive parton distributions. It can be justified for a large nucleus. Applied to the proton, it gives a reasonable description of both diffractive and inclusive DIS. (Earlier work contains some elements of the semi-classical approach.) Another approach is based on two-gluon exchange. In this case one either has to deal with an infrared divergence, or couple the gluons to a small-size toy nucleon. Remarkably, these three approaches give similar results for the $`\beta `$-dependence of diffractive parton distributions and agree on the fact that the gluon distribution is enhanced by a large colour factor. This leads to positive scaling violations already at relatively large $`\beta `$, different from inclusive DIS, but in agreement with the data.
It is encouraging that simple models reproduce the gross features of the data. Given the differences among the models as far as the proton is concerned, it seems that hard diffraction probes the wave-function of the virtual photon rather than the structure of the proton!
Hard diffraction in hadron-hadron collisions is much harder to describe and more varied, as there can be rapidity gaps between jets, between a jet and a hadron remnant, etc. Factorization does not seem to hold in this case, nor is it expected to, since, for example, an elastically scattered hadron must traverse the remnant of the other hadron, which can cause its break-up. I would like to note, however, a recent suggestion to describe rapidity-gap-like events (between jets) in terms of small energy flow into the gap rather than the absence of particles. Although this does not correspond exactly to the notion of hard diffractive scattering, such a definition is more appropriate for a partonic interpretation.
### 4.2 Skewed processes
Factorization has also been shown for deeply virtual Compton scattering $`\gamma ^{*}p\to \gamma p`$ and for diffractive vector meson production (after earlier work) $`\gamma ^{*}(Q)p\to Vp`$, where $`V`$ can be an onium and $`Q`$ arbitrary, or $`V`$ can be a longitudinally polarized light vector meson, in which case $`Q^2`$ must be large. Note that two-gluon exchange is applicable to diffractive vector meson production, but not to diffractive DIS, because convolution with the virtual photon wave-function relevant to longitudinal vector meson production suppresses the asymmetric $`q\overline{q}`$ fluctuations, which have large transverse size. As a consequence, only the small-size $`q\overline{q}`$ component contributes at leading power.
Deeply virtual Compton scattering and diffractive vector meson production require a generalized parton distribution at the amplitude level, since the proton is scattered with non-zero momentum transfer, owing to the difference in invariant mass of the initial and final vector particle. These objects, defined as
$$p^+\int \frac{dz^-}{2\pi }\,e^{ixp^+z^-}\,\langle p^{\prime }|\overline{\psi }(0)\gamma ^+\psi (z^-)|p\rangle $$
(16)
for quarks, are referred to as skewed (off-diagonal, non-forward, …) parton distributions, and describe a parton $`i`$ (a quark above) extracted from the proton with momentum fraction $`x`$ and returned with momentum fraction $`x^{\prime }`$. For $`p^{\prime }=p`$, the skewed parton distribution reduces to the conventional one. The first moment, however, is related to a proton form factor. These hybrid properties are also reflected in the evolution properties. For $`x^{\prime }>0`$ the evolution resembles DGLAP evolution. For $`x^{\prime }<0`$, the skewed parton distribution describes the emission of a $`q\overline{q}`$ pair and the evolution resembles the ERBL evolution of light cone distribution amplitudes. The evolution properties and the form of skewed parton densities have been actively studied. An interesting observation is that the skewed parton density is determined by the conventional one for small $`x`$ and $`x^{\prime }\ll x`$.
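In the now-standard notation $`H^q(x,\xi ,t)`$ for the quark distribution (16) (not used explicitly above), the form factor connection just mentioned reads

$$\int _{-1}^1dx\,H^q(x,\xi ,t)=F_1^q(t),$$

independently of the skewedness parameter $`\xi `$.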
Is there experimental evidence for skewedness? ZEUS reports first evidence for deeply virtual Compton scattering, but the data are not yet good enough to allow detailed tests. Skewedness effects in diffractive vector meson production are largest if the invariant mass difference between the incoming photon and the outgoing vector meson is large. This suggests looking at $`\mathrm{\Upsilon }`$ photoproduction or vector meson production at large $`Q^2`$. Incorporation of skewedness improves the theoretical prediction in comparison with data, but other theoretical uncertainties remain large and preclude an unambiguous statement. The power behaviour of the ratio of longitudinal to transverse $`\rho `$ meson production appears to disagree with the naive estimate $`\sigma _L/\sigma _T\sim Q^2`$, but the (formally logarithmic) scale dependence of the gluon distribution may play an important role.
### 4.3 Exclusive $`B`$ decays
There exists a standard framework for discussing exclusive processes at large momentum transfer in terms of light cone distribution amplitudes. As we are entering the era of exclusive $`B`$ decays, it is only appropriate to consider them as bona fide hard reactions. After all, they involve momentum transfers $`q^2\sim m_b^2\approx 25\,\text{GeV}^2`$, and there will be millions of $`B`$'s! A general two-body decay amplitude can be written as
$$\mathcal{A}(B\to M_1M_2)=A_1e^{i\delta _1}e^{i\delta _{W1}}+A_2e^{i\delta _2}e^{i\delta _{W2}},$$
(17)
where $`A_{1,2}`$ denote the magnitudes of the amplitudes, $`\delta _{1,2}`$ strong interaction phases and $`\delta _{W1,2}`$ weak phases. The weak phases are CP-violating and of primary interest. Yet to determine them, the strong phases and amplitudes must be known, unless we are fortunate enough that only a single term contributes on the right hand side of (17), or we have enough experimental information to reduce strong interaction input.
The standard formalism does not immediately apply to $`B`$ decays, because the $`B`$ meson contains a soft spectator quark. The spectator quark may go into a final state meson without participating in a hard scattering. Hence the process cannot be described in terms of light cone distribution amplitudes alone. A more general factorization for decays into a heavy ($`D`$) and a light meson has been proposed, based on the idea that the light meson is initially ejected as a compact object from the weak decay vertex, although, with few exceptions, no quantitative conclusions were drawn. Recently, a systematic investigation of the heavy quark limit has been undertaken. The conclusion is that all soft and collinear configurations can be absorbed either into light cone distribution amplitudes or into a form factor for the transition $`B\to M_1`$, where $`M_1`$ is the meson that picks up the light spectator quark. In particular, the corrections conventionally termed “non-factorizable” are dominated by hard gluon exchange and are hence computable. The proposed factorization theorem, applicable to heavy-light final states ($`D\pi `$, etc.) and light-light final states ($`\pi \pi `$, $`\pi K`$, etc.), reads
$$\mathcal{A}(B\to M_1M_2)=F_{B\to M_1}(0)\int _0^1dx\,T^I(x)\,\mathrm{\Phi }_{M_2}(x)+\int _0^1d\xi \,dx\,dy\,T^{II}(\xi ,x,y)\,\mathrm{\Phi }_B(\xi )\,\mathrm{\Phi }_{M_1}(y)\,\mathrm{\Phi }_{M_2}(x),$$
(18)
with corrections that are suppressed by $`\mathrm{\Lambda }_{\mathrm{QCD}}/m_b`$. The second term is present only for light-light final states and has the form of a standard BL-type term. It accounts for hard gluon interactions with the spectator quark.
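To see how naive factorization is recovered from (18): at lowest order $`T^I`$ is a constant, and with the asymptotic distribution amplitude (a standard normalization, quoted here for illustration)

$$\mathrm{\Phi }_{M_2}(x)=6x(1-x),\qquad \int _0^1dx\,6x(1-x)=1,$$

the first term collapses to the form factor times a constant, i.e. naive factorization; the $`O(\alpha _s)`$ terms in $`T^I`$ then generate the computable “non-factorizable” corrections and the strong phases $`\delta _{1,2}`$.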
The implications of (18) taken at face value are far-reaching. Since non-perturbative form factors and light cone distribution amplitudes can either be measured or determined in principle with lattice QCD, the strong phases $`\delta _{1,2}`$ and amplitudes $`A_{1,2}`$ are completely predicted. CKM parameters can then be directly extracted from measurements of branching fractions and CP asymmetries.
Some work remains to be done to demonstrate that (18) gives accurate predictions at the $`b`$ quark scale. A factorization proof to all orders has yet to be given, which may imply that an integration over intrinsic transverse momentum in the $`B`$ meson has to be added to the second term of (18). Power corrections in $`\mathrm{\Lambda }_{\mathrm{QCD}}/m_b`$ may turn out to be uncomfortably large if enhanced by small current quark masses. It is also worth noting that the light cone properties of $`B`$ mesons have remained largely unexplored. In any event, the new approach improves on naive factorization, which has been the most commonly used theoretical tool. Because of its potential for the $`B`$ factories, its applications need to be carefully examined.
## Conclusion
QCD is a lively field of incredible variety. It is also often technical. Comparing today's QCD overviews with the discussion of big ideas 25 years ago, this variety may even appear intimidating. But this transformation in style reflects like no other indicator the progress in understanding how QCD works. The challenges posed by strong coupling have led to insights into how field theory works that are unparalleled in any other theory. Given the intrinsic beauty and simplicity of QCD, together with its role in the future high energy physics programme at the energy frontier, we can be sure of further progress in the field.
I could not have given this talk without the help of many colleagues. I benefited in particular from discussions with G. Buchalla, S. Catani, J. Forshaw, A. Hebecker, M. Krämer, Z. Kunszt, G. Salam and T. Teubner.
## Discussion
Günter Grindhammer (MPI, Munich): Considering deep inelastic scattering at low $`x`$, what happens with the difference between the BFKL and DGLAP approaches in the case of heavy quark production? In this case one has two hard scales, $`Q^2`$ and the quark mass.
The answer depends on where the heavy quarks are. If the heavy quarks couple to the virtual photon (“the top of the ladder”), there is no difference from inclusive DIS at small $`x`$. If the heavy quark pair is coupled to the bottom of the ladder, both ends are perturbative, as is the case for forward jet production at $`p_T\gg \mathrm{\Lambda }_{\mathrm{QCD}}`$. In this case there is no reason to believe that DGLAP evolution should be relevant.
# Bound states of neutral particles in external electric fields

Published in Phys. Rev. A 61 (2000) 022101
## I Introduction
In relativistic quantum theory, a charged fermion of spin $`\frac{1}{2}`$ moving in a background electromagnetic field is described by the Dirac equation with minimal coupling to the vector potential. In 1941, Pauli extended this equation to include an additional nonminimal coupling term which takes into account the interaction caused by the anomalous magnetic moment of the charged particle. This extended equation is usually called the Dirac–Pauli equation. Many works have been devoted to the investigation of exact solutions of this equation in various electromagnetic fields, say, a constant uniform magnetic field, an electromagnetic plane wave, and more complicated ones. A constant central (spherically symmetric) electric field was also considered by some authors. In this case the Dirac–Pauli equation is separable in spherical coordinates; however, exact solutions of the radial equation have not been found in closed form. In these studies, the nonminimal coupling is conceptually taken as some correction to the minimal one (though the correction is considerable for protons), and the simultaneous presence of both couplings causes some mathematical difficulty.
In this paper we consider neutral fermions of spin $`\frac{1}{2}`$ with a magnetic moment. Without electric charge, such particles can still interact with electromagnetic fields through the nonminimal coupling, and can be well described by the Dirac–Pauli equation. On the one hand, without the minimal coupling, the Dirac–Pauli equation is simpler. On the other hand, the interaction arising solely from the nonminimal coupling has not attracted enough attention, especially before the discovery of the Aharonov–Casher (AC) effect [3-5]. Since the AC effect is a consequence of the nonminimal coupling and has been observed in experiment, one may become interested in other consequences of the interaction. For instance, it may be of interest to study bound states of neutral fermions in external electromagnetic fields, especially when exact solutions are available. It appears that this problem has not been considered in the literature, as far as we know. The purpose of this paper is to deal with this problem. It is organized as follows.
In the next section we consider the Dirac–Pauli equation of a neutral fermion of spin $`\frac{1}{2}`$, with mass $`m_\mathrm{n}`$ and magnetic moment $`\mu _\mathrm{n}`$, interacting with an external electromagnetic field through the nonminimal coupling. For spherically symmetric or central electromagnetic fields, it can be shown that the total angular momentum is a constant of motion. By separation of variables in spherical coordinates, the stationary Dirac–Pauli equation in a central electric field, which involves four partial differential equations, can be reduced to a system of two coupled ordinary differential equations (ODE) for two radial wave functions. Given a specific electric field, one can in principle solve the system of ODE to obtain the radial wave functions and determine the energy levels for bound states. For a wide variety of electric fields, one can find bound-state solutions of critical energy value $`m_\mathrm{n}`$ or $`-m_\mathrm{n}`$ in analytic form. It turns out that these critical energy levels are infinitely degenerate. This is interesting because it reveals the possibility of condensing infinitely many fermions, say, neutrons, into a single energy level. Electric fields that support a finite number of critical bound states are also discussed. In Sec. III we study a radially constant field; in this case the system of ODE can be completely solved, and we have scattering as well as bound-state solutions. All bound-state solutions are given in closed form. Only the critical energy level has infinite degeneracy. In Sec. IV we deal with radially linear electric fields. The system of ODE is also completely solvable. In this case we have only bound-state solutions, and many of the energy levels are infinitely degenerate. In Sec. V we discuss the simultaneous presence of central magnetic and electric fields. In this case separation of variables is still possible in spherical coordinates. But the reduced system of ODE involves four coupled equations for four radial wave functions, and is thus much more difficult to solve. Some other remarks and discussions, say, on the nonrelativistic limit, are also included in this section.
## II Neutral fermions in central electric fields
We work in (3+1)-dimensional space-time and use natural units where $`\hbar =c=1`$. Consider a neutral fermion of spin $`\frac{1}{2}`$ with mass $`m_\mathrm{n}`$ and magnetic moment $`\mu _\mathrm{n}`$, moving in an external electromagnetic field described by the field strength $`F_{\mu \nu }`$. The fermion is described by a four-component spinorial wave function $`\mathrm{\Psi }`$ obeying the Dirac–Pauli equation
$$(i\gamma ^\mu \partial _\mu -\frac{1}{2}\mu _\mathrm{n}\sigma ^{\mu \nu }F_{\mu \nu }-m_\mathrm{n})\mathrm{\Psi }=0,$$
(1)
where $`\gamma ^\mu =(\gamma ^0,\boldsymbol{\gamma })`$ are Dirac matrices satisfying
$$\{\gamma ^\mu ,\gamma ^\nu \}=2g^{\mu \nu }$$
(2)
with $`g^{\mu \nu }=\mathrm{diag}(1,-1,-1,-1)`$, and
$$\sigma ^{\mu \nu }=\frac{i}{2}[\gamma ^\mu ,\gamma ^\nu ].$$
(3)
It can be shown that
$$\frac{1}{2}\sigma ^{\mu \nu }F_{\mu \nu }=i\boldsymbol{\alpha }\cdot \mathbf{E}-\boldsymbol{\Sigma }\cdot \mathbf{B}$$
(4)
where $`\mathbf{E}`$ is the external electric field and $`\mathbf{B}`$ the magnetic one, $`\boldsymbol{\alpha }=\gamma ^0\boldsymbol{\gamma }`$, and $`\mathrm{\Sigma }^k=\frac{1}{2}\epsilon ^{kij}\sigma ^{ij}`$ where $`\epsilon ^{kij}`$ is totally antisymmetric in its indices and $`\epsilon ^{123}=1`$. If both $`\mathbf{E}`$ and $`\mathbf{B}`$ are independent of the time $`t`$, one may set
$$\mathrm{\Psi }(t,\mathbf{r})=e^{-i\mathcal{E}t}\psi (\mathbf{r}),$$
(5)
and obtain a stationary Dirac–Pauli equation for $`\psi `$:
$$H\psi =\mathcal{E}\psi ,$$
$`(6\mathrm{a})`$
where the Hamiltonian $`H`$ is given by
$$H=\boldsymbol{\alpha }\cdot \mathbf{p}+i\mu _\mathrm{n}\boldsymbol{\gamma }\cdot \mathbf{E}-\mu _\mathrm{n}\gamma ^0\boldsymbol{\Sigma }\cdot \mathbf{B}+\gamma ^0m_\mathrm{n},$$
$`(6\mathrm{b})`$
where $`\mathbf{p}=-i\mathrm{\nabla }`$ or $`p^k=-i\partial _k`$.
Now let us consider spherically symmetric or central fields
$$\mathbf{E}=E(r)\mathbf{e}_r,\qquad \mathbf{B}=B(r)\mathbf{e}_r,$$
(7)
where $`r`$ is one of the spherical coordinates $`(r,\theta ,\varphi )`$ and $`\mathbf{e}_r`$ is the unit vector in the radial direction. As usual we define the orbital angular momentum $`\mathbf{L}=\mathbf{r}\times \mathbf{p}`$. It is not difficult to calculate the commutator $`[\mathbf{L},H]`$, and it turns out that $`[\mathbf{L},H]\ne 0`$ even for free particles. For central fields, however, it can be shown that the total angular momentum $`\mathbf{J}=\mathbf{L}+\mathbf{S}`$, where $`\mathbf{S}=\frac{1}{2}\boldsymbol{\Sigma }`$, is a conserved quantity, i.e., $`[\mathbf{J},H]=0`$. Thus one can have a common set of eigenstates for $`(H,\mathbf{J}^2,J_z)`$. Because $`\mathbf{S}^2=\frac{3}{4}`$ is a constant operator, it is also a conserved quantity. Unfortunately, $`\mathbf{L}^2`$ is not conserved and cannot have a common set of eigenstates with $`(H,\mathbf{J}^2,J_z)`$. If $`B(r)=0`$, we have a further conserved quantity $`K=\gamma ^0(\boldsymbol{\Sigma }\cdot \mathbf{L}+1)`$, which commutes with both $`H`$ and $`\mathbf{J}`$. In this case one can have a common set of eigenstates for $`(H,\mathbf{J}^2,J_z,K,\mathbf{S}^2)`$.
To solve the Dirac–Pauli equation, one should choose a specific representation for the $`\gamma `$ matrices. Here we use the Dirac representation . In this representation we have $`\boldsymbol{\Sigma }=\mathrm{diag}(\boldsymbol{\sigma },\boldsymbol{\sigma })`$ where $`\boldsymbol{\sigma }`$ are the Pauli matrices, and $`\mathbf{J}=\mathrm{diag}(\mathbf{j},\mathbf{j})`$ where $`\mathbf{j}=\mathbf{l}+\frac{1}{2}\boldsymbol{\sigma }`$ is a $`2\times 2`$ matrix. In this section we only consider a central electric field. The simultaneous presence of a central magnetic field will be discussed in Sec. V. We define $`\psi =(\phi ,\chi )^\tau `$, where $`\tau `$ denotes transpose, and both $`\phi `$ and $`\chi `$ are two-component spinors. The stationary Dirac–Pauli equation (6) then takes the form
$$\boldsymbol{\sigma }\cdot (\mathbf{p}-i\mu _\mathrm{n}E\mathbf{e}_r)\phi =(\mathcal{E}+m_\mathrm{n})\chi ,$$
$`(8\mathrm{a})`$
$$\boldsymbol{\sigma }\cdot (\mathbf{p}+i\mu _\mathrm{n}E\mathbf{e}_r)\chi =(\mathcal{E}-m_\mathrm{n})\phi .$$
$`(8\mathrm{b})`$
Here four partial differential equations are involved. We are going to simplify these equations by separation of variables in spherical coordinates. Let us define the two-component spinors
$$f_{lm}^+(\theta ,\varphi )=\left(\begin{array}{c}\sqrt{\frac{l+m+1}{2l+1}}Y_{lm}(\theta ,\varphi )\\ \sqrt{\frac{l-m}{2l+1}}Y_{l,m+1}(\theta ,\varphi )\end{array}\right),\qquad l=0,1,2,\ldots ;\;m=-(l+1),-l,\ldots ,l;$$
$`(9\mathrm{a})`$
$$f_{lm}^{-}(\theta ,\varphi )=\left(\begin{array}{c}\sqrt{\frac{l-m+1}{2l+3}}Y_{l+1,m}(\theta ,\varphi )\\ -\sqrt{\frac{l+m+2}{2l+3}}Y_{l+1,m+1}(\theta ,\varphi )\end{array}\right),\qquad l=0,1,2,\ldots ;\;m=-(l+1),-l,\ldots ,l.$$
$`(9\mathrm{b})`$
Here $`Y_{lm}(\theta ,\varphi )`$ are spherical harmonics as defined in Ref. . Both of them are common eigenstates of $`(\mathbf{j}^2,j_z,\mathbf{l}^2,\mathbf{s}^2)`$ with eigenvalues
$$((l+\frac{1}{2})(l+\frac{3}{2}),m+\frac{1}{2},l(l+1),\frac{3}{4})$$
and
$$((l+\frac{1}{2})(l+\frac{3}{2}),m+\frac{1}{2},(l+1)(l+2),\frac{3}{4}),$$
respectively. It can be shown that
$$\boldsymbol{\sigma }\cdot \mathbf{e}_r\,f_{lm}^\pm (\theta ,\varphi )=f_{lm}^{\mp }(\theta ,\varphi ),$$
(10)
and
$$\boldsymbol{\sigma }\cdot \mathbf{L}\,f_{lm}^+(\theta ,\varphi )=lf_{lm}^+(\theta ,\varphi ),$$
$`(11\mathrm{a})`$
$$\boldsymbol{\sigma }\cdot \mathbf{L}\,f_{lm}^{-}(\theta ,\varphi )=-(l+2)f_{lm}^{-}(\theta ,\varphi ).$$
$`(11\mathrm{b})`$
The relation
$$\boldsymbol{\sigma }\cdot \mathbf{p}=-i(\boldsymbol{\sigma }\cdot \mathbf{e}_r)\partial _r+\frac{i}{r}(\boldsymbol{\sigma }\cdot \mathbf{e}_r)(\boldsymbol{\sigma }\cdot \mathbf{L})$$
(12)
is also useful in the following. With these preparations we can simplify Eq. (8) for two different kinds of solutions.
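For reference, relation (12) follows from the familiar Pauli-matrix identity $`(\boldsymbol{\sigma }\cdot \mathbf{a})(\boldsymbol{\sigma }\cdot \mathbf{b})=\mathbf{a}\cdot \mathbf{b}+i\boldsymbol{\sigma }\cdot (\mathbf{a}\times \mathbf{b})`$; a two-line sketch (our addition) is
$$(\boldsymbol{\sigma }\cdot \mathbf{e}_r)(\boldsymbol{\sigma }\cdot \mathbf{p})=\mathbf{e}_r\cdot \mathbf{p}+i\boldsymbol{\sigma }\cdot (\mathbf{e}_r\times \mathbf{p})=-i\partial _r+\frac{i}{r}\boldsymbol{\sigma }\cdot \mathbf{L},$$
and multiplying on the left by $`\boldsymbol{\sigma }\cdot \mathbf{e}_r`$, with $`(\boldsymbol{\sigma }\cdot \mathbf{e}_r)^2=1`$, gives (12).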
The first kind of solution to Eq. (8) is $`\psi ^+=(\phi ^+,\chi ^+)^\tau `$, where
$$\phi ^+(r,\theta ,\varphi )=u^+(r)f_{lm}^+(\theta ,\varphi ),\qquad \chi ^+(r,\theta ,\varphi )=iv^+(r)f_{lm}^{-}(\theta ,\varphi ).$$
(13)
Note that $`\psi ^+`$ is a common eigenstate of $`(\mathbf{J}^2,J_z,K,\mathbf{S}^2)`$ with eigenvalues $`((l+\frac{1}{2})(l+\frac{3}{2}),m+\frac{1}{2},l+1,\frac{3}{4})`$, but it is not an eigenstate of $`\mathbf{L}^2`$. Using the relations (10)-(12), it is not difficult to show that Eq. (8) now reduces to a system of first-order ODE for the radial wave functions $`u^+(r)`$ and $`v^+(r)`$:
$$\frac{du^+}{dr}+\mu _\mathrm{n}Eu^+-\frac{l}{r}u^+=-(\mathcal{E}+m_\mathrm{n})v^+,$$
$`(14\mathrm{a})`$
$$\frac{dv^+}{dr}-\mu _\mathrm{n}Ev^++\frac{l+2}{r}v^+=(\mathcal{E}-m_\mathrm{n})u^+.$$
$`(14\mathrm{b})`$
Because $`\theta `$ and $`\varphi `$ are not defined at the origin, the appropriate boundary conditions for $`u^+`$ and $`v^+`$ are
$$|u^+(0)|<\infty \;\;(l=0),\qquad u^+(0)=0\;\;(l\ne 0),$$
$`(15\mathrm{a})`$
$$v^+(0)=0\;\;\forall \;l.$$
$`(15\mathrm{b})`$
Of course they should also satisfy appropriate boundary conditions at infinity. For the bound-state solutions to be considered below, they should fall off rapidly enough when $`r\to \infty `$ such that $`\psi ^+`$ is square integrable. For the scattering problem, they should be finite at infinity. Given a specific form for $`E(r)`$, one can solve Eq. (14) at least numerically. This is much simpler than dealing with Eq. (8). For $`\mathcal{E}\ne -m_\mathrm{n}`$, one can express $`v^+`$ in terms of $`u^+`$ by using Eq. (14a), and substitute it into Eq. (14b) to obtain a second-order ODE solely for $`u^+`$:
$$\frac{d^2u^+}{dr^2}+\frac{2}{r}\frac{du^+}{dr}+\left[\mathcal{E}^2-m_\mathrm{n}^2+\mu _\mathrm{n}\frac{dE}{dr}-\mu _\mathrm{n}^2E^2+2(l+1)\mu _\mathrm{n}\frac{E}{r}-\frac{l(l+1)}{r^2}\right]u^+=0.$$
(16)
This is similar to the radial Schrödinger equation in a central potential. It can be exactly solved for some specific forms of $`E(r)`$. This will be studied in the subsequent sections. When Eq. (16) is solved, it is easy to obtain $`v^+`$. If $`\mathcal{E}=-m_\mathrm{n}`$, one can directly solve Eq. (14) without difficulty.
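As an illustration of the numerical route mentioned above, the following minimal sketch (ours, not part of the original treatment) integrates the radial system (14) with SciPy for a trial energy; the field profile `E_field` and all parameter values are placeholders chosen purely for illustration, and a bound state would be located by tuning the energy until the solution decays at large $`r`$ (a shooting method).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Natural units (hbar = c = 1); all numbers are illustrative placeholders.
m_n, mu_n, l = 1.0, 0.1, 0
E_field = lambda r: 1.0 / (1.0 + r)   # trial central field E(r)

def rhs(r, y, energy):
    u, v = y
    # Radial system (14) for the first kind of solutions:
    du = -mu_n * E_field(r) * u + (l / r) * u - (energy + m_n) * v
    dv = mu_n * E_field(r) * v - ((l + 2) / r) * v + (energy - m_n) * u
    return [du, dv]

# Start slightly off the origin with the small-r behaviour u ~ r^l, v ~ 0.
r0, r1 = 1e-6, 50.0
sol = solve_ivp(rhs, (r0, r1), [r0**l, 0.0], args=(0.95 * m_n,), rtol=1e-8)
print(sol.y[:, -1])  # adjust the trial energy until u, v -> 0 at large r
```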
The second kind of solution to Eq. (8) is $`\psi ^{}=(\phi ^{},\chi ^{})^\tau `$, where
$$\phi ^{-}(r,\theta ,\varphi )=u^{-}(r)f_{lm}^{-}(\theta ,\varphi ),\qquad \chi ^{-}(r,\theta ,\varphi )=iv^{-}(r)f_{lm}^+(\theta ,\varphi ).$$
(17)
Note that $`\psi ^{-}`$ is also a common eigenstate of $`(\mathbf{J}^2,J_z,K,\mathbf{S}^2)`$ with eigenvalues $`((l+\frac{1}{2})(l+\frac{3}{2}),m+\frac{1}{2},-(l+1),\frac{3}{4})`$. As before, Eq. (8) reduces to a system of first-order ODE for the radial wave functions $`u^{-}(r)`$ and $`v^{-}(r)`$:
$$\frac{du^{-}}{dr}+\mu _\mathrm{n}Eu^{-}+\frac{l+2}{r}u^{-}=-(\mathcal{E}+m_\mathrm{n})v^{-},$$
$`(18\mathrm{a})`$
$$\frac{dv^{-}}{dr}-\mu _\mathrm{n}Ev^{-}-\frac{l}{r}v^{-}=(\mathcal{E}-m_\mathrm{n})u^{-}.$$
$`(18\mathrm{b})`$
This is similar to Eq. (14). If $`\mathcal{E}\ne m_\mathrm{n}`$, one can solve Eq. (18b) for $`u^{-}`$, and substitute it into Eq. (18a) to obtain a second-order ODE solely for $`v^{-}`$:
$$\frac{d^2v^{-}}{dr^2}+\frac{2}{r}\frac{dv^{-}}{dr}+\left[\mathcal{E}^2-m_\mathrm{n}^2-\mu _\mathrm{n}\frac{dE}{dr}-\mu _\mathrm{n}^2E^2-2(l+1)\mu _\mathrm{n}\frac{E}{r}-\frac{l(l+1)}{r^2}\right]v^{-}=0.$$
(19)
This is similar to Eq. (16). Note that the appropriate boundary conditions for $`u^{}`$ and $`v^{}`$ at the origin are
$$u^{-}(0)=0\;\;\forall \;l,$$
$`(20\mathrm{a})`$
$$|v^{-}(0)|<\infty \;\;(l=0),\qquad v^{-}(0)=0\;\;(l\ne 0).$$
$`(20\mathrm{b})`$
Thus Eqs. (16) and (19) have the same boundary conditions at the origin. Also note that they interchange if $`E(r)\to -E(r)`$. If $`\mathcal{E}=m_\mathrm{n}`$, Eq. (19) is invalid, and one can solve Eq. (18) directly.
Using the completeness relation of the spherical harmonics, it can be shown that the two-component spinors $`f_{lm}^+(\theta ,\varphi )`$ and $`f_{lm}^{}(\theta ,\varphi )`$ constitute a complete set on the sphere. More specifically, we have
$$\sum _{l=0}^{\infty }\sum _{m=-(l+1)}^{l}[f_{lm}^+(\theta ,\varphi )f_{lm}^{+\dagger }(\theta ^{\prime },\varphi ^{\prime })+f_{lm}^{-}(\theta ,\varphi )f_{lm}^{-\dagger }(\theta ^{\prime },\varphi ^{\prime })]=\delta (\mathrm{cos}\theta -\mathrm{cos}\theta ^{\prime })\delta (\varphi -\varphi ^{\prime }).$$
(21)
Therefore all possible forms of solutions to Eq. (8) are included in Eqs. (13) and (17).
In the subsequent sections we are going to solve Eqs. (16) and (19) for radially constant and radially linear electric fields. Before dealing with these specific cases, we would like to give some special bound-state solutions for more general forms of the electric field. We assume that $`E(r)`$ behaves like $`r^{-1+\delta _1}`$ when $`r\to 0`$ and like $`r^{-1+\delta _2}`$ when $`r\to \infty `$, where $`\delta _1`$ and $`\delta _2`$ are positive numbers, and is regular everywhere except possibly at $`r=0`$. If $`\mu _\mathrm{n}E(r)>0`$ for $`r>r_+`$, where $`r_+`$ is some finite radius, we have the following solution to Eq. (14) with energy level $`\mathcal{E}=m_\mathrm{n}`$:
$$u_l^+(r)=A_l^+r^l\mathrm{exp}\left[-\int _0^r\mu _\mathrm{n}E(r^{\prime })dr^{\prime }\right],\qquad v_l^+(r)=0,$$
(22)
where $`A_l^+`$ is a normalization constant. This obviously satisfies the boundary conditions (15) at the origin. It is a bound-state solution because it falls off rapidly enough to be square integrable. It is remarkable that the energy eigenvalue does not depend on the quantum numbers $`l`$ and $`m`$ (or $`j=l+\frac{1}{2}`$ and $`m_j=m+\frac{1}{2}`$). Thus the degeneracy of this energy level is countably infinite. This is somewhat similar to the situation of a charged particle in a magnetic field with infinite flux in two dimensions . To our knowledge, similar situations have not been encountered previously in realistic three-dimensional space. This is interesting because it reveals the possibility of condensing infinitely many fermions, say, neutrons, into a single energy level. If $`\mu _\mathrm{n}E(r)<0`$ for $`r>r_{-}`$, where $`r_{-}`$ is some finite radius, then we have the following solution to Eq. (18) with energy level $`\mathcal{E}=-m_\mathrm{n}`$:
$$u_l^{-}(r)=0,\qquad v_l^{-}(r)=A_l^{-}r^l\mathrm{exp}\left[\int _0^r\mu _\mathrm{n}E(r^{\prime })dr^{\prime }\right],$$
(23)
where $`A_l^{}`$ is a normalization constant. This is also a bound-state solution, and the energy level is infinitely degenerate. Here we have a negative-energy solution, and will have more in the following sections. The presence of negative-energy eigenvalues and eigenstates is a quite general feature of relativistic quantum mechanics. Though these solutions are nonphysical in the one-particle theory, it is well known that they correspond to antiparticles after second quantization. Therefore we will not exclude these solutions in this paper.
If $`E(r)\simeq \kappa /r`$ for large $`r`$, where $`\kappa `$ is a constant, the situation is of special interest. In this case the nonvanishing component of the critical solutions \[$`u_l^+(r)`$ for $`\mu _\mathrm{n}\kappa >0`$ or $`v_l^{-}(r)`$ for $`\mu _\mathrm{n}\kappa <0`$\] does not fall off exponentially at large $`r`$, but behaves like $`r^{l-|\mu _\mathrm{n}\kappa |}`$. To be normalizable, one should have $`l<|\mu _\mathrm{n}\kappa |-\frac{3}{2}`$. Therefore to have at least one critical bound state, one should have $`|\mu _\mathrm{n}\kappa |>\frac{3}{2}`$. When $`|\mu _\mathrm{n}\kappa |-\frac{3}{2}`$ is a natural number, we have $`|\mu _\mathrm{n}\kappa |-\frac{3}{2}`$ critical bound states (degeneracy over $`m`$ has not been taken into account). When $`|\mu _\mathrm{n}\kappa |-\frac{3}{2}`$ is not a natural number, the number of critical bound states is $`[|\mu _\mathrm{n}\kappa |-\frac{1}{2}]`$, where the square bracket denotes the integral part of a number. This is similar to the situation of a charged particle in a magnetic field with finite flux in two dimensions .
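As an illustrative count (our example, not from the text): take $`|\mu _\mathrm{n}\kappa |=4`$. Normalizability then requires $`l<4-\frac{3}{2}=\frac{5}{2}`$, so $`l=0,1,2`$ are allowed and the number of critical bound states is $`[4-\frac{1}{2}]=[3.5]=3`$, not counting the $`2l+2`$-fold degeneracy in $`m`$ for each $`l`$.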
For a specific electric field, critical bound states with $`\mathcal{E}=m_\mathrm{n}`$ and those with $`\mathcal{E}=-m_\mathrm{n}`$ do not appear simultaneously. This spectral asymmetry is also similar to that of a charged particle in a magnetic field in two dimensions. Therefore vacuum polarization similar to that in two dimensions for charged particles \[9-11\] or neutral ones may be expected for the present system after second quantization.
To conclude this section we write down the normalization condition for bound-state (or so-called square-integrable) solutions:
$$\int d\mathbf{r}\,\psi ^{\pm \dagger }(\mathbf{r})\psi ^\pm (\mathbf{r})=\int _0^{\infty }[u^\pm (r)]^2r^2dr+\int _0^{\infty }[v^\pm (r)]^2r^2dr=1.$$
(24)
The normalization constants in Eqs. (22) and (23), and those in the following sections are to be determined by this condition.
## III Radially constant electric fields
In this section we consider radially constant electric fields $`E(r)=E_0`$, where $`E_0`$ is a constant. As pointed out before, Eqs. (16) and (19) interchange when $`E(r)\to -E(r)`$. So we need only consider a positive $`E_0`$ or a negative $`E_0`$. For convenience we assume that $`\mu _\mathrm{n}E_0>0`$. Now Eq. (16) takes the form
$$\frac{d^2u^+}{dr^2}+\frac{2}{r}\frac{du^+}{dr}+\left[\mathcal{E}^2-m_\mathrm{n}^2-\mu _\mathrm{n}^2E_0^2+\frac{2(l+1)\mu _\mathrm{n}E_0}{r}-\frac{l(l+1)}{r^2}\right]u^+=0.$$
(25)
This has the same form as the radial Schrödinger equation in an attractive Coulomb field. The difference is that the "Coulomb field" (the next-to-last term in the square bracket) here depends on the quantum number $`l`$. Thus the energy levels will depend on $`l`$ as well as on a principal quantum number or a radial quantum number. As Eq. (25) is familiar in quantum mechanics, we will give only the solutions. Remember that Eq. (25) is invalid for $`\mathcal{E}=-m_\mathrm{n}`$. We have the bound-state energy levels
$$\mathcal{E}_{0l}=m_\mathrm{n},\qquad n_r=0,$$
$`(26\mathrm{a})`$
$$\mathcal{E}_{n_rl\pm }=\pm \left[m_\mathrm{n}^2+\mu _\mathrm{n}^2E_0^2\frac{(n_r+l+1)^2-(l+1)^2}{(n_r+l+1)^2}\right]^{\frac{1}{2}},\qquad n_r=1,2,\ldots .$$
$`(26\mathrm{b})`$
Here $`n_r`$ is a radial quantum number. When $`n_r=0`$ we have a positive critical energy level given in Eq. (26a). Though it is independent of $`l`$, we keep the subscript $`l`$ to make a clear correspondence to the corresponding wave functions below. For $`n_r\ne 0`$ we have positive- and negative-energy levels, indicated by the subscript $`\pm `$ in Eq. (26b). The corresponding radial wave functions are
$$u_{n_rl\pm }^+(r)=A_{n_rl\pm }\rho ^le^{-\rho /2}L_{n_r}^{2l+1}(\rho ),$$
$`(27\mathrm{a})`$
$$v_{n_rl\pm }^+(r)=A_{n_rl\pm }\frac{\mu _\mathrm{n}E_0}{(n_r+l+1)(\mathcal{E}_{n_rl\pm }+m_\mathrm{n})}\rho ^{l+1}e^{-\rho /2}L_{n_r-1}^{2l+3}(\rho )$$
$`(27\mathrm{b})`$
for $`n_r\ne 0`$, and
$$u_{0l}^+(r)=A_{0l}\rho ^le^{-\rho /2},$$
$`(27\mathrm{c})`$
$$v_{0l}^+(r)=0$$
$`(27\mathrm{d})`$
for $`n_r=0`$, where
$$\rho =\alpha _{n_rl}r,\alpha _{n_rl}=\frac{2(l+1)\mu _\mathrm{n}E_0}{(n_r+l+1)},$$
(28)
and $`L_{n_r}^{2l+1}(\rho )`$, etc., are Laguerre polynomials defined in Ref. , which are different from those used in Ref. . Note that the superscript + indicates the first kind of solutions (13), while the subscript $`\pm `$ indicates the sign of the energy levels. It is seen from Eq. (27) that negative- and positive-energy solutions have the same functional form, but the coefficients are different. The normalization constants are
$$A_{n_rl\pm }=\frac{(\mu _\mathrm{n}E_0)^{\frac{3}{2}}}{(n_r+l+1)^2}\left[\frac{2(l+1)^3n_r!}{(n_r+2l+1)!}\right]^{\frac{1}{2}}\left(\frac{\mathcal{E}_{n_rl\pm }+m_\mathrm{n}}{\mathcal{E}_{n_rl\pm }}\right)^{\frac{1}{2}}$$
$`(29\mathrm{a})`$
for $`n_r\ne 0`$ and
$$A_{0l}=\frac{(2\mu _\mathrm{n}E_0)^{\frac{3}{2}}}{\sqrt{(2l+2)!}}$$
$`(29\mathrm{b})`$
for $`n_r=0`$. The degeneracy of the energy level $`\mathcal{E}_{n_rl+}`$ or $`\mathcal{E}_{n_rl-}`$ is $`2l+2`$. As the energy level $`\mathcal{E}_{0l}=m_\mathrm{n}`$ is actually independent of $`l`$, its degeneracy is countably infinite. Indeed, the solution (27c) is a specific realization of the solution (22) discussed before.
When $`\mathcal{E}=-m_\mathrm{n}`$, Eq. (25) is invalid. In this case one should deal with Eq. (14) directly. It is easy to show that this energy value corresponds to a trivial solution. Thus all first-kind solutions are included in Eq. (27), and the corresponding energy levels are given by Eq. (26). Note that all energy levels have absolute values less than $`\sqrt{m_\mathrm{n}^2+\mu _\mathrm{n}^2E_0^2}`$. When $`|\mathcal{E}|`$ exceeds this value, we have scattering solutions to Eq. (25). This will not be discussed here.
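As a quick consistency check (our remark), the large-$`n_r`$ behaviour of Eq. (26b) is
$$\mathcal{E}_{n_rl\pm }=\pm \sqrt{m_\mathrm{n}^2+\mu _\mathrm{n}^2E_0^2\left[1-\frac{(l+1)^2}{(n_r+l+1)^2}\right]}\longrightarrow \pm \sqrt{m_\mathrm{n}^2+\mu _\mathrm{n}^2E_0^2}\qquad (n_r\to \infty ),$$
so the discrete levels accumulate at the continuum thresholds, just as the hydrogen levels accumulate at zero binding energy.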
Now we turn to Eq. (19), which in the present case becomes
$$\frac{d^2v^{-}}{dr^2}+\frac{2}{r}\frac{dv^{-}}{dr}+\left[\mathcal{E}^2-m_\mathrm{n}^2-\mu _\mathrm{n}^2E_0^2-\frac{2(l+1)\mu _\mathrm{n}E_0}{r}-\frac{l(l+1)}{r^2}\right]v^{-}=0.$$
(30)
Since $`\mu _\mathrm{n}E_0>0`$, this is equivalent to the radial Schrödinger equation in a repulsive Coulomb field. In this case only scattering solutions are available. These scattering solutions have energy $`\mathcal{E}>\sqrt{m_\mathrm{n}^2+\mu _\mathrm{n}^2E_0^2}`$ or $`\mathcal{E}<-\sqrt{m_\mathrm{n}^2+\mu _\mathrm{n}^2E_0^2}`$. If $`\mathcal{E}=m_\mathrm{n}`$, Eq. (30) is invalid. Then we may deal with Eq. (18) directly. It turns out that this energy value corresponds to a trivial solution. We thus conclude that there is no bound state of the second kind in the present case.
To finish this section we estimate the "Bohr radius" of the neutron in this radially constant field. It is roughly equal to $`\alpha _{n_rl}^{-1}`$. For the critical-energy state, $`n_r=0`$, and $`\alpha _{0l}^{-1}=(2\mu _\mathrm{n}E_0)^{-1}`$. In the MKS system it reads
$$\alpha _{0l}^{-1}=\frac{\hbar c^2}{2\mu _\mathrm{n}E_0}.$$
(31)
We take $`|E_0|=5.15\times 10^{11}`$ V/m, the electric field strength at the Bohr radius of the hydrogen atom. For neutrons, we have $`\alpha _{0l}^{-1}=9.5\times 10^{-4}`$ m. This is a macroscopic but rather small length scale. However, it might not be easy to realize a radially constant central electric field of the above magnitude in the laboratory. We do not know whether such a field exists somewhere in the universe.
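The estimate above is easy to reproduce numerically; the following sketch (ours) evaluates Eq. (31) with standard SI values of the constants:

```python
hbar = 1.0546e-34   # J s
c = 2.9979e8        # m/s
mu_n = 9.662e-27    # magnitude of the neutron magnetic moment, J/T
E0 = 5.15e11        # V/m, field strength at the hydrogen Bohr radius

radius = hbar * c**2 / (2.0 * mu_n * E0)   # Eq. (31)
print(f"{radius:.2e} m")   # ~9.5e-4 m
```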
## IV Radially linear electric fields
In this section we turn to another exactly solvable field, the radially linear electric field $`E(r)=\beta r`$, where $`\beta `$ is a constant. The electric charge density that produces this field is $`\rho _\mathrm{c}=3\beta /4\pi `$ in Gaussian units, which is a constant. To realize the above central field, however, the electric charge density should vanish outside some large sphere which the particle under consideration practically cannot reach. Otherwise the electric field would be zero everywhere. In the region of interest (inside the large sphere) the field is then radially linear. For reasons given earlier, we need only consider one sign of $`\beta `$. For convenience we assume that $`\beta \mu _\mathrm{n}>0`$. Eq. (16) then takes the form
$$\frac{d^2u^+}{dr^2}+\frac{2}{r}\frac{du^+}{dr}+\left[\mathcal{E}^2-m_\mathrm{n}^2+(2l+3)\beta \mu _\mathrm{n}-\beta ^2\mu _\mathrm{n}^2r^2-\frac{l(l+1)}{r^2}\right]u^+=0.$$
(32)
This is not valid for $`\mathcal{E}=-m_\mathrm{n}`$. In the latter case one can solve Eq. (14) directly and obtain a trivial solution. Thus all nontrivial solutions of the first kind are those arising from Eq. (32). Equation (32) has the same form as the radial Schrödinger equation for an isotropic harmonic oscillator. The difference is that the "energy" here depends on the quantum number $`l`$. Thus the dependence of the energy levels on the quantum numbers will be different from that of the isotropic harmonic oscillator. Since Eq. (32) is also familiar in quantum mechanics, we will give only the solutions. There are only bound-state solutions. The energy levels are
$$\mathcal{E}_0^+=m_\mathrm{n},\qquad n_r=0,$$
$`(33\mathrm{a})`$
$$\mathcal{E}_{n_r\pm }^+=\pm \sqrt{m_\mathrm{n}^2+4n_r\beta \mu _\mathrm{n}},\qquad n_r=1,2,\ldots ,$$
$`(33\mathrm{b})`$
where $`n_r`$ is a radial quantum number. Note that the superscript $`+`$ for $`\mathcal{E}`$ indicates the first kind of solutions, while the subscript $`\pm `$ indicates the sign of the energy. As before, we have negative- as well as positive-energy levels. The corresponding radial wave functions are
$$u_{0l}^+(r)=A_{0l}^+\rho ^le^{-\rho ^2/2},\qquad v_{0l}^+(r)=0$$
(34)
for $`n_r=0`$, and
$$u_{n_rl\pm }^+(r)=A_{n_rl\pm }^+\rho ^le^{-\rho ^2/2}L_{n_r}^{l+1/2}(\rho ^2),$$
$`(35\mathrm{a})`$
$$v_{n_rl\pm }^+(r)=A_{n_rl\pm }^+\frac{2\sqrt{\beta \mu _\mathrm{n}}}{\mathcal{E}_{n_r\pm }^++m_\mathrm{n}}\rho ^{l+1}e^{-\rho ^2/2}L_{n_r-1}^{l+3/2}(\rho ^2)$$
$`(35\mathrm{b})`$
for $`n_r\ne 0`$, where
$$\rho =\sqrt{\beta \mu _\mathrm{n}}r$$
(36)
and $`L_{n_r}^{l+1/2}(\rho ^2)`$, etc., are Laguerre polynomials as employed in Sec. III but the argument here is $`\rho ^2`$. The normalization constants are determined by Eq. (24) and are given by
$$A_{0l}^+=\frac{\sqrt{2}(\beta \mu _\mathrm{n})^{\frac{3}{4}}}{\sqrt{\mathrm{\Gamma }(l+3/2)}},n_r=0$$
$`(37\mathrm{a})`$
$$A_{n_rl\pm }^+=(\beta \mu _\mathrm{n})^{\frac{3}{4}}\left[\frac{n_r!}{\mathrm{\Gamma }(n_r+l+3/2)}\right]^{\frac{1}{2}}\left(\frac{\mathcal{E}_{n_r\pm }^++m_\mathrm{n}}{\mathcal{E}_{n_r\pm }^+}\right)^{\frac{1}{2}},\qquad n_r=1,2,\ldots .$$
$`(37\mathrm{b})`$
It is remarkable that all the above energy levels are independent of the quantum number $`l`$, and thus all of them are infinitely degenerate. The critical-energy solution (34) is another realization of the solution (22).
Now we consider the second kind of solutions (17). It is easy to show that Eq. (18) gives a trivial solution when $`\mathcal{E}=m_\mathrm{n}`$. Thus all nontrivial solutions arise from Eq. (19), which is valid for $`\mathcal{E}\ne m_\mathrm{n}`$ and in the present case becomes
$$\frac{d^2v^{-}}{dr^2}+\frac{2}{r}\frac{dv^{-}}{dr}+\left[\mathcal{E}^2-m_\mathrm{n}^2-(2l+3)\beta \mu _\mathrm{n}-\beta ^2\mu _\mathrm{n}^2r^2-\frac{l(l+1)}{r^2}\right]v^{-}=0.$$
(38)
This is very similar to Eq. (32). The only difference lies in the sign of the third term in the square bracket. This difference, however, will render the energy levels quite different from those obtained above. As before, we only give the results here. The energy levels are
$$\mathcal{E}_{N\pm }^{-}=\pm \sqrt{m_\mathrm{n}^2+(4N+6)\beta \mu _\mathrm{n}},\qquad N=n_r+l=0,1,2,\ldots ,$$
(39)
where $`n_r=0,1,2,\ldots `$ is a radial quantum number and $`N`$ is a principal quantum number. The superscript $`-`$ of $`\mathcal{E}`$ indicates the second kind of solutions, while the subscript $`\pm `$ indicates the sign of the energy. The spectrum obtained here has no overlap with that in Eq. (33). The corresponding radial wave functions are
$$u_{n_rl\pm }^{-}(r)=-A_{n_rl\pm }^{-}\frac{2\sqrt{\beta \mu _\mathrm{n}}}{\mathcal{E}_{N\pm }^{-}-m_\mathrm{n}}\rho ^{l+1}e^{-\rho ^2/2}L_{n_r}^{l+3/2}(\rho ^2),$$
$`(40\mathrm{a})`$
$$v_{n_rl\pm }^{-}(r)=A_{n_rl\pm }^{-}\rho ^le^{-\rho ^2/2}L_{n_r}^{l+1/2}(\rho ^2),$$
$`(40\mathrm{b})`$
where $`\rho `$ is given by Eq. (36), and $`n_r=0,1,2,\mathrm{}`$ is the radial quantum number. The normalization constants are given by
$$A_{n_rl\pm }^{-}=(\beta \mu _\mathrm{n})^{\frac{3}{4}}\left[\frac{n_r!}{\mathrm{\Gamma }(n_r+l+3/2)}\right]^{\frac{1}{2}}\left(\frac{\mathcal{E}_{N\pm }^{-}-m_\mathrm{n}}{\mathcal{E}_{N\pm }^{-}}\right)^{\frac{1}{2}},\qquad n_r=0,1,2,\ldots .$$
(41)
The energy levels $`\mathcal{E}_{N+}^{-}`$ and $`\mathcal{E}_{N-}^{-}`$ depend only on the principal quantum number $`N`$. Given $`N`$, $`l`$ may vary from 0 to $`N`$. For a given $`l`$, there are $`2l+2`$ different solutions. Therefore the degeneracy of the level $`\mathcal{E}_{N+}^{-}`$ or $`\mathcal{E}_{N-}^{-}`$ is
$$d_N=\underset{l=0}{\overset{N}{}}(2l+2)=(N+1)(N+2).$$
(42)
In conclusion, in the radially linear electric field we have two sets of bound-state energy levels. The first set is given in Eq. (33), corresponding to the first kind of solutions. The second set is given in Eq. (39), corresponding to the second kind of solutions. There is no scattering solution here. In contrast, the radially constant electric field studied in Sec. III admits both scattering and bound-state solutions, though there exists no bound state of the second kind. Finally we estimate the "Bohr radius" of the neutron in the present case. This is roughly equal to $`(\beta \mu _\mathrm{n})^{-1/2}`$, or $`(3/4\pi \rho _\mathrm{c}\mu _\mathrm{n})^{1/2}`$ where $`\rho _\mathrm{c}`$ is the electric charge density producing the field. In the MKS system this reads
$$\left(\frac{3\hbar }{4\pi \mu _0\rho _\mathrm{c}\mu _\mathrm{n}}\right)^{\frac{1}{2}},$$
where $`\mu _0`$ is the permeability of the vacuum. We take $`\rho _\mathrm{c}=e/a_0^3`$ where $`e`$ is the electron charge and $`a_0`$ is the Bohr radius of the hydrogen atom. For neutrons the above "Bohr radius" has the value $`4.4\times 10^{-8}`$ m. This is rather small. However, it may be difficult to achieve the above electric charge density.
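Again this is simple to verify numerically (our sketch), using $`\rho _\mathrm{c}=e/a_0^3`$ as above:

```python
import math

hbar = 1.0546e-34            # J s
mu0 = 4 * math.pi * 1e-7     # vacuum permeability, T m / A
e, a0 = 1.602e-19, 0.529e-10 # electron charge (C), hydrogen Bohr radius (m)
mu_n = 9.662e-27             # neutron magnetic moment magnitude, J/T

rho_c = e / a0**3            # assumed electric charge density
radius = math.sqrt(3 * hbar / (4 * math.pi * mu0 * rho_c * mu_n))
print(f"{radius:.1e} m")     # ~4.4e-8 m
```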
## V Summary and discussions
In the preceding sections we have studied the Dirac–Pauli equation of a neutral fermion with nonminimal coupling to a central electric field. By separation of variables in spherical coordinates, the stationary Dirac–Pauli equation, which involves four partial differential equations, can be reduced to a system of two coupled first-order ODE for two radial wave functions. There are two different kinds of solutions, and thus two independent systems of ODE. Bound states of critical energy values can be obtained analytically for a quite general class of electric fields, where the degeneracy of the critical energy level turns out to be countably infinite. This reveals the possibility of condensing infinitely many fermions into a single energy level. We also discussed a special form of the electric field that supports a finite number of critical bound states. Two specific electric fields, one radially constant and the other radially linear, are studied in detail, and all the bound-state solutions are obtained in closed forms. In the first case bound states exist only for the first kind of solutions, while scattering states exist for both kinds. Scattering states are not discussed in detail. In the second case, we have two sets of discrete energy levels corresponding to the two kinds of solutions. There is no scattering state. It turns out that the energy levels in the first set are all infinitely degenerate. In both fields we have negative as well as positive energy levels. Critical energy levels are also admitted in both cases, which may be positive or negative depending on the signs of $`\mu _\mathrm{n}`$ and the electric fields. Note, however, that the two critical energy levels are not admitted at the same time. This spectral asymmetry is likely to lead to vacuum polarization after second quantization.
In Sec. II we have shown that the total angular momentum $`\mathbf{J}`$ is a conserved quantity in the simultaneous presence of a central magnetic field and a central electric field. But we have not discussed the solutions of the Dirac–Pauli equation in this case. In the Dirac representation, the stationary Dirac–Pauli equation (6) takes the form
$$\boldsymbol{\sigma }\cdot (\mathbf{p}-i\mu _\mathrm{n}E\mathbf{e}_r)\phi =(\mathcal{E}+m_\mathrm{n}-\mu _\mathrm{n}B\,\boldsymbol{\sigma }\cdot \mathbf{e}_r)\chi ,$$
$`(43\mathrm{a})`$
$$\boldsymbol{\sigma }\cdot (\mathbf{p}+i\mu _\mathrm{n}E\mathbf{e}_r)\chi =(\mathcal{E}-m_\mathrm{n}+\mu _\mathrm{n}B\,\boldsymbol{\sigma }\cdot \mathbf{e}_r)\phi .$$
$`(43\mathrm{b})`$
These equations are similar to Eq. (8) but more complicated. They are still separable in spherical coordinates. We set
$$\phi (r,\theta ,\varphi )=u^+(r)f_{lm}^+(\theta ,\varphi )+u^{-}(r)f_{lm}^{-}(\theta ,\varphi ),$$
$`(44\mathrm{a})`$
$$\chi (r,\theta ,\varphi )=iv^+(r)f_{lm}^{-}(\theta ,\varphi )+iv^{-}(r)f_{lm}^+(\theta ,\varphi ).$$
$`(44\mathrm{b})`$
Substituting this ansatz into Eq. (43) and using the relations (10)-(12), we obtain the following system of ODE for the four radial wave functions:
$$\frac{du^+}{dr}+\mu _\mathrm{n}Eu^+-\frac{l}{r}u^+=-(\mathcal{E}+m_\mathrm{n})v^++\mu _\mathrm{n}Bv^{-},$$
$`(45\mathrm{a})`$
$$\frac{dv^+}{dr}-\mu _\mathrm{n}Ev^++\frac{l+2}{r}v^+=(\mathcal{E}-m_\mathrm{n})u^++\mu _\mathrm{n}Bu^{-},$$
$`(45\mathrm{b})`$
$$\frac{du^{-}}{dr}+\mu _\mathrm{n}Eu^{-}+\frac{l+2}{r}u^{-}=-(\mathcal{E}+m_\mathrm{n})v^{-}+\mu _\mathrm{n}Bv^+,$$
$`(45\mathrm{c})`$
$$\frac{dv^{-}}{dr}-\mu _\mathrm{n}Ev^{-}-\frac{l}{r}v^{-}=(\mathcal{E}-m_\mathrm{n})u^{-}+\mu _\mathrm{n}Bu^+.$$
$`(45\mathrm{d})`$
If $`B(r)=0`$, one may set $`u^{-}=v^{-}=0`$, which reduces Eq. (45) to Eq. (14) for the first kind of solutions, or set $`u^+=v^+=0`$, which reduces Eq. (45) to Eq. (18) for the second kind of solutions. This is what we have done before for a pure electric field. However, when a magnetic field is present at the same time, this is no longer allowed. The essential reason is that $`K`$ is no longer a conserved quantity in this case. All four ODE in Eq. (45) are coupled to one another. It seems difficult to solve them even for a pure magnetic field. We will not go into further details of these equations here.
Let us briefly discuss the nonrelativistic limit of the Dirac–Pauli equation. Consider the stationary case with a pure electric field. We can solve Eq. (8a) for $`\chi `$, and substitute it into Eq. (8b) to obtain an equation for $`\phi `$:
$$[\boldsymbol{\sigma }\cdot (\mathbf{p}+i\mu _\mathrm{n}\mathbf{E})][\boldsymbol{\sigma }\cdot (\mathbf{p}-i\mu _\mathrm{n}\mathbf{E})]\phi =(\mathcal{E}^2-m_\mathrm{n}^2)\phi .$$
(46)
This holds for any value of $`\mathcal{E}`$ except $`\mathcal{E}=-m_\mathrm{n}`$, and is valid for noncentral electric fields as well. To discuss the nonrelativistic limit we consider only positive $`\mathcal{E}`$ and set
$$\mathcal{E}=m_\mathrm{n}+\mathcal{E}^{\prime }.$$
When $`\mathcal{E}^{\prime }\ll m_\mathrm{n}`$ we get the nonrelativistic limit of Eq. (46):
$$[\boldsymbol{\sigma }\cdot (\mathbf{p}+i\mu _\mathrm{n}\mathbf{E})][\boldsymbol{\sigma }\cdot (\mathbf{p}-i\mu _\mathrm{n}\mathbf{E})]\phi =2m_\mathrm{n}\mathcal{E}^{\prime }\phi .$$
(47)
This has essentially the same form as Eq. (46), and thus the same solutions. However, it should be remarked that even when $`|\mu _\mathrm{n}\mathbf{E}|\ll m_\mathrm{n}`$, Eq. (47) is not valid for those $`\mathcal{E}^{\prime }`$ comparable with $`m_\mathrm{n}`$. For example, in the radially constant field with $`|\mu _\mathrm{n}E_0|\ll m_\mathrm{n}`$, Eq. (47) is good for all bound states, but not for scattering ones with large $`\mathcal{E}`$, say, $`\mathcal{E}=2m_\mathrm{n}`$. On the other hand, even if $`|\mathbf{E}|`$ is unbounded, Eq. (47) is still valid for small $`\mathcal{E}^{\prime }`$. For example, in the radially linear field, Eq. (47) may be good for the lower levels if $`|\beta \mu _\mathrm{n}|\ll m_\mathrm{n}^2`$. Since Eq. (47) is not simpler, it is more convenient to deal with Eq. (46) directly. The nonrelativistic limit with both magnetic and electric fields can be similarly discussed, though the situation is more complicated. We will not give further details here.
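For instance (our worked expansion), for the lower first-kind levels (33b) in the radially linear field,
$$\mathcal{E}^{\prime }=\sqrt{m_\mathrm{n}^2+4n_r\beta \mu _\mathrm{n}}-m_\mathrm{n}\simeq \frac{2n_r\beta \mu _\mathrm{n}}{m_\mathrm{n}}\qquad (n_r\beta \mu _\mathrm{n}\ll m_\mathrm{n}^2),$$
which is indeed small compared with $`m_\mathrm{n}`$ for the lower levels whenever $`|\beta \mu _\mathrm{n}|\ll m_\mathrm{n}^2`$, consistent with the statement above.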
We have pointed out in Sec. III that the radially constant electric field admits scattering solutions of both kinds. Though Eqs. (25) and (30) can be solved to give partial wave solutions, the scattering problem is difficult to handle in this case since these equations involve long-range "Coulomb potentials". An easier situation for the scattering problem may be the field $`E(r)\propto r^{-1}`$. This will be studied subsequently.
In this paper we have dealt with (3+1)-dimensional problems. The Dirac–Pauli equation (1) has a much simpler form in a (2+1)-dimensional space-time. Indeed, the situation for the AC effect is equivalent to a (2+1)-dimensional problem because of the specific field configuration. Recently, we have calculated the probability of neutral particle-antiparticle pair creation in the vacuum by external electromagnetic fields in 2+1 dimensions, based on the nonminimal coupling . Both scattering and bound-state problems in external fields are easier in 2+1 dimensions. These and other consequences of the nonminimal coupling will also be studied subsequently.
## Acknowledgments
The author is grateful to Professor Guang-jiong Ni for encouragement. This work was supported by the National Natural Science Foundation of China. |
# Cosmological SPH simulations with four million particles: statistical properties of X-ray clusters in a low-density universe
## 1 Introduction
Clusters of galaxies have been widely used as probes to extract cosmological information. In particular, the cluster abundances including the X-ray temperature function (XTF), X-ray luminosity function (XLF) and number counts turn out to put strong constraints on the cosmological density parameter $`\mathrm{\Omega }_0`$ and the fluctuation amplitude $`\sigma _8`$ (Henry & Arnaud, 1991; White, Efstathiou & Frenk, 1993; Jing & Fang, 1994; Viana & Liddle, 1996; Eke, Cole & Frenk, 1996; Kitayama & Suto, 1996, 1997; Kitayama, Sasaki & Suto, 1998). Since the theoretical predictions for those purposes are usually based on the Press–Schechter mass function and the simple virial equilibrium model for the X-ray clusters, the reliability of the resulting constraints is heavily dependent on the validity of those assumptions for the observed X-ray clusters. In fact there are numerous realistic physical processes which would somehow invalidate the simplifying assumptions employed in the above procedure; one-to-one correspondence between a virialized halo and an X-ray cluster, non-sphericity, substructure and merging, galaxy formation, radiative cooling, heating by the UV background and the supernova energy injection, etc.
While these can be addressed by hydrodynamical simulations in principle, the relevant simulations are quite demanding. This is why most previous studies simply check the Press–Schechter mass function against purely $`N`$-body simulations in cosmological volume (Suto, 1993; Ueda, Itoh & Suto, 1993; Lacey & Cole, 1994), or focus on hydrodynamical simulations of individual clusters in a constrained volume (Eke, Navarro & Frenk, 1998; Suginohara & Ostriker, 1998; Yoshikawa, Itoh & Suto, 1998); most previous hydrodynamical simulations of clusters in cosmological volume, on the other hand, lacked the numerical resolution to address the above question, and/or ignored the effect of radiative cooling (Evrard, 1990; Cen & Ostriker, 1992, 1994; Kang et al., 1994; Bryan & Norman, 1998; Frenk et al., 1999) except for a few of the latest state-of-the-art simulations (Pearce et al., 1999a, b; Cen & Ostriker, 1999).
In this paper, we present results from a series of cosmological SPH (smoothed particle hydrodynamics) simulations in cosmological volume with sufficiently high resolution. They enable us to compute the statistical properties of simulated clusters in an unbiased manner, and thus to directly address the validity of the widely used model predictions for the mass–temperature relation and the resulting XTFs.
## 2 Simulations
Our simulation code is based on the P³M–SPH algorithm, and the P³M part of the code has been extensively used in our previous $`N`$-body simulations (Jing & Fang, 1994; Jing & Suto, 1998). All the present runs employ $`N_{\mathrm{DM}}=128^3`$ dark matter particles and the same number of gas particles. We use the spline (S2) softening function for gravitational softening (Hockney & Eastwood, 1981), and the softening length $`\epsilon _{\mathrm{grav}}`$ is set to $`L_{\mathrm{box}}/(10N_{\mathrm{DM}}^{1/3})`$, where $`L_{\mathrm{box}}`$ is the comoving size of the simulation box. We set the minimum of the SPH smoothing length to $`h_{\mathrm{min}}=\epsilon _{\mathrm{grav}}/4`$, and adopt the ideal gas equation of state with an adiabatic index of $`5/3`$.
We consider a spatially flat low-density CDM (cold dark matter) universe with the mean mass density parameter $`\mathrm{\Omega }_0=0.3`$, cosmological constant $`\lambda _0=0.7`$, and the Hubble constant in units of $`100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, $`h=0.7`$. The power-law index of the primordial density fluctuation is set to $`n=1`$. We adopt the baryon density parameter $`\mathrm{\Omega }_\mathrm{b}=0.015h^2`$ (Copi et al., 1995) and the rms density fluctuations on $`8h^1`$ Mpc scale, $`\sigma _8=1.0`$, according to the constraints from the COBE normalization and cluster abundance (Bunn & White, 1997; Kitayama & Suto, 1997).
The initial conditions for particle positions and velocities at $`z=25`$ are generated using the COSMICS package (Bertschinger, 1995). We prepare two different initial conditions for $`L_{\mathrm{box}}=150h^{-1}`$ Mpc and $`75h^{-1}`$ Mpc to examine the effects of the numerical resolution. For the two different boxsizes, we consider three cases for the ICM (intracluster medium) thermal evolution (Table 1); L150A and L75A assume non-radiative evolution of the ICM (hereafter non-radiative models). L150C and L75C include the effect of radiative cooling (pure cooling models). In L150UJ and L75UJ (multiphase models), both radiative cooling and UV-background heating are taken into account, and an additional modification of the SPH algorithm is implemented in order to avoid the artificial overcooling, following Pearce et al. (1999a, b). In addition, we have L150CJ which is identical to L150UJ but without the UV background. While the L75C model is evolved until $`z=0.5`$, the simulations of the remaining six models are computed up to $`z=0`$.
The radiative cooling and the UV-background heating take account of the photoionization of H i, He i and He ii, the collisional ionization of H i, He i and He ii, the recombination of H ii, He ii and He iii, the dielectronic recombination of He ii, Compton cooling and the thermal bremsstrahlung emission. The UV-background flux density $`J(\nu )`$ is assumed to be parameterized as
$$J(\nu )=J_{21}(z)\left(\frac{\nu _\mathrm{L}}{\nu }\right)\times 10^{-21}[\mathrm{erg}/\mathrm{s}/\mathrm{cm}^2/\mathrm{sr}/\mathrm{Hz}],$$
(1)
where $`\nu _\mathrm{L}`$ is the Lymanโlimit frequency, and we adopt the redshift evolution:
$$J_{21}(z)=J_{21}^0\frac{(1+z)^4}{5+(1+z)^4}$$
(2)
following Vedel et al. (1994). We use the semi-implicit scheme in integrating the thermal energy equation (Katz, Weinberg & Hernquist 1996).
It is known that SPH simulations with the effect of radiative cooling produce unacceptably dense cooled clumps. While Suginohara & Ostriker (1998) ascribed this to some missing physical processes such as heat conduction and heating from supernova explosions, Thacker et al. (1998) and Pearce et al. (1999a) showed that this overcooling could be due to an artificial overestimate of hot gas density convolved with the nearby cold dense gas particles, and can be largely suppressed by simply decoupling the cold gas ($`T<10^4`$K) from the hot component. This numerical treatment can be interpreted as a phenomenological prescription for the galaxy formation. Therefore following their spirit, we adopt a slightly different approach โ using the Jeans criterion for the cooled gas:
$$h>\frac{c_s}{\sqrt{\pi G\rho _{\mathrm{gas}}}},$$
(3)
where $`h`$ is the smoothing length of gas particles, $`\rho _{\mathrm{gas}}`$ the gas density, $`c_s`$ the sound speed, and $`G`$ the gravitational constant. In practice, however, we made sure that the above condition is almost identical to the condition of $`T<10^4`$K adopted by Pearce et al. (1999a). Nevertheless we prefer using the Jeans criterion because it is rephrased in terms of the physical process. Apart from the fact that such cooled gas particles are ignored when computing the SPH density of hot gas particles, all the other SPH interactions are left unchanged.
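A minimal sketch of how the decoupling criterion (3) might be applied per gas particle is given below (our illustration; the function and variable names are placeholders, not taken from the actual simulation code):

```python
import numpy as np

G = 6.674e-8  # gravitational constant, cgs

def is_decoupled(h, cs, rho_gas):
    """Jeans criterion of Eq. (3): a gas particle whose smoothing length h
    exceeds c_s / sqrt(pi G rho_gas) is flagged as cooled material and is
    ignored when computing the SPH density of hot gas particles."""
    return h > cs / np.sqrt(np.pi * G * rho_gas)

# Illustrative cgs values: a cold dense clump versus diffuse hot gas.
print(is_decoupled(h=3.0e21, cs=1.0e6, rho_gas=1.0e-24))   # True
print(is_decoupled(h=3.0e21, cs=1.0e8, rho_gas=1.0e-27))   # False
```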
## 3 Results
### 3.1 Cluster Identification and Radial Profiles
At each epoch, we identify gravitationally bound objects using an adaptive "friend-of-friend" algorithm (Suto et al., 1992), and select objects with more than 200 dark matter and gas particles as clusters of galaxies. Specifically we use the SPH gas density in assigning the local linking length with the linking parameter of $`0.5`$. With this parameter, the mass function of the selected clusters is nearly identical to that from the conventional "friend-of-friend" method with a fixed linking length of $`0.2`$ times the mean particle separation. The virial mass $`M`$ for each cluster is defined as the total mass at the virial radius within which the averaged mass density is $`18\pi ^2\mathrm{\Omega }_0^{0.4}\overline{\rho }_c(z)`$ predicted from the non-linear spherical collapse model, where $`\overline{\rho }_c(z)`$ is the critical density of the universe at $`z`$ (White, Efstathiou & Frenk, 1993; Kitayama & Suto, 1996). The X-ray luminosity is computed on the basis of the bolometric and band-limited thermal bremsstrahlung emissivity (Rybicki & Lightman, 1979), which ignores metal line emission. We also compute the mass-weighted and emission-weighted temperatures, $`T_X^\mathrm{m}`$ and $`T_X`$, using the 2–10 keV band emission. For models L150UJ and L75UJ, we do not include the contribution of those cold particles which satisfy the criterion (3) in computing $`T_X`$, $`T_X^\mathrm{m}`$ and $`L`$, since they are not supposed to remain as a gaseous component if an appropriate star formation scheme is implemented in our simulation.
In Figure 1, we show the spherically averaged profiles of dark matter and gas densities, ICM temperature, and cumulative X-ray luminosity for the most massive cluster in models L150A, L150UJ and L150C. Those for models L75A, L75UJ and L75C are also depicted in Figure 2. The centers of clusters are defined as the peaks of gas density. The profiles of dark matter are fairly well approximated by the model proposed by Navarro, Frenk, & White (1997). Due to an artificial cooling catastrophe, the pure-cooling model (L150C and L75C) produces unacceptably high X-ray luminosity, consistent with Suginohara & Ostriker (1998). This artificial feature can be largely removed by decoupling the cold gas from the hot component, and models L150UJ and L75UJ yield a fairly reasonable range of X-ray luminosities. Nevertheless we do not claim that the luminosities in L150UJ and L75UJ are reliable; the current modified scheme is to be interpreted as a tentative and phenomenological remedy at best and should be replaced by a more realistic scheme for galaxy formation. In addition, we find that the values of the luminosities are still significantly affected by the numerical resolution (§ 3.3). Lewis et al. (1999) have obtained a very similar result based on a more sophisticated treatment of galaxy formation in their simulation. They show that when both cooling and star formation are included in the simulation, cluster luminosities increase only moderately (a factor of 3) compared to the non-radiative case; this amount of increase is significantly less than that in the pure-cooling case.
In Figures 3 and 4, we show the spherically averaged profiles of the local dynamical timescale $`t_{\mathrm{dyn}}`$, 2-body heating timescale $`t_{2\mathrm{body}}`$ (Steinmetz & White, 1997) and cooling timescale $`t_{\mathrm{cool}}`$ for representative clusters with 3 different mass scales for models L150UJ and L75UJ, respectively. Each timescale is defined as
$`t_{\mathrm{dyn}}`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{G\rho _{\mathrm{tot}}}}},`$ (4)
$`t_{2\mathrm{b}\mathrm{o}\mathrm{d}\mathrm{y}}`$ $`=`$ $`\sqrt{{\displaystyle \frac{27}{128\pi }}}{\displaystyle \frac{\sigma _{1\mathrm{D}}^3}{G^2M_{\mathrm{DM}}\rho _{\mathrm{DM}}\mathrm{ln}\mathrm{\Lambda }}},`$ (5)
$`t_{\mathrm{cool}}`$ $`=`$ $`{\displaystyle \frac{3n_{\mathrm{gas}}k_\mathrm{B}T}{2\mathrm{\Lambda }_{\mathrm{cool}}(n,T)}},`$ (6)
where $`G`$ is the gravitational constant, $`\rho _{\mathrm{tot}}`$ the total mass density, $`\sigma _{1\mathrm{D}}`$ the 1-dimensional velocity dispersion of dark matter particles, $`\mathrm{ln}\mathrm{\Lambda }`$ the Coulomb logarithm, $`M_{\mathrm{DM}}`$ the mass of a dark matter particle, $`\rho _{\mathrm{DM}}`$ the mass density of dark matter, $`n_{\mathrm{gas}}`$ the number density of gas, $`k_\mathrm{B}`$ the Boltzmann constant, and $`\mathrm{\Lambda }_{\mathrm{cool}}(n,T)`$ the cooling function. The expression for $`t_{2\mathrm{body}}`$ is given in Steinmetz & White (1997) and we set the value of the Coulomb logarithm to $`\mathrm{ln}\mathrm{\Lambda }=5`$ as a nominal value.
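For orientation, Eqs. (4) and (6) are easily evaluated for typical ICM conditions; the sketch below (ours) replaces the full cooling function by a bremsstrahlung-only rate, which is an assumption made purely for this estimate:

```python
import numpy as np

G, k_B = 6.674e-8, 1.381e-16   # cgs

def t_dyn(rho_tot):
    return 1.0 / np.sqrt(G * rho_tot)   # Eq. (4)

def t_cool(n, T):
    # Eq. (6) with a free-free-only cooling rate,
    # Lambda ~ 1.4e-27 sqrt(T) n^2 erg cm^-3 s^-1 (Rybicki & Lightman).
    return 1.5 * n * k_B * T / (1.4e-27 * np.sqrt(T) * n**2)

# Illustrative ICM values: n ~ 1e-3 cm^-3, T ~ 1e7 K, rho ~ 1e-26 g cm^-3.
yr = 3.156e7
print(t_dyn(1e-26) / yr, "yr")     # ~1e9 yr
print(t_cool(1e-3, 1e7) / yr, "yr")   # ~1e10 yr
```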
For the clusters with $`M\gtrsim 2\times 10^{14}M_{\odot }`$ in the models with $`L_{\mathrm{box}}=150h^{-1}\text{Mpc}`$, $`t_{\mathrm{cool}}`$ is shorter than $`t_{2\mathrm{body}}`$, and hence the artificial 2-body heating is not effective for these clusters. On the other hand, clusters with $`M\lesssim 10^{14}M_{\odot }`$ have a cooling timescale comparable with the 2-body heating timescale. Thus, the relatively poor clusters with $`M\lesssim 10^{14}M_{\odot }`$ in $`L_{\mathrm{box}}=150h^{-1}\text{Mpc}`$ models may suffer from the artificial 2-body heating. Since it is difficult to predict the possible systematic effect based on the timescale argument alone, it is most straightforward to quantify the effect by a careful comparison with the results in $`L_{\mathrm{box}}=75h^{-1}\text{Mpc}`$ runs which are almost free from the artificial 2-body heating. In the following analysis, we use clusters with $`M>10^{14}M_{\odot }`$ and $`M>10^{13}M_{\odot }`$ for L150 and L75 models, respectively. Table 2 indicates the number of those clusters which satisfy the above criteria for each model at different redshifts.
It should also be noted that one need not worry about the numerical 2-body heating in the central regions of clusters, where $`t_{\mathrm{dyn}}\gg t_{\mathrm{cool}}`$, even if $`t_{\mathrm{cool}}\gtrsim t_{2\mathrm{body}}`$. As noted in Steinmetz & White (1997), those regions will experience a catastrophic cooling; in fact, they are exactly the place where we attempt to suppress the overcooling by decoupling the cold gas particles.
### 3.2 Temperature–mass relation
A conventional analytical modeling of clusters of galaxies assumes that the ICM is isothermal and in hydrostatic equilibrium within the dark matter potential. Then the ICM temperature is predicted to be
$`k_\mathrm{B}T_X=\gamma {\displaystyle \frac{\mu m_\mathrm{p}GM}{3r_{\mathrm{vir}}}}`$ (7)
$`\simeq 5.2\gamma \left({\displaystyle \frac{\mathrm{\Omega }_0\mathrm{\Delta }_\mathrm{c}}{18\pi ^2}}\right)^{1/3}\left({\displaystyle \frac{M}{10^{15}h^{-1}M_{\odot }}}\right)^{2/3}(1+z_\mathrm{f})\text{keV}`$ (8)
in terms of the cluster mass $`M`$, where $`\mathrm{\Delta }_\mathrm{c}`$ is the mean density of a virialized object at a formation redshift $`z_\mathrm{f}`$, and $`\gamma `$ is a fudge factor of order unity; if the cluster is isothermal and its one-dimensional velocity dispersion is equal to $`\sqrt{GM/3r_{\mathrm{vir}}}`$, then $`\gamma `$ is the inverse of the spectroscopic $`\beta `$-parameter and approximately given by $`1.2`$ (Kitayama & Suto, 1997).
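Equation (8) is straightforward to evaluate; a minimal sketch (ours) is given below, where the overdensity fit $`18\pi ^2\mathrm{\Omega }_0^{0.4}`$ quoted in § 3.1 is adopted for $`\mathrm{\Delta }_\mathrm{c}`$ for simplicity:

```python
import math

def kT_keV(M15, Omega0=0.3, z_f=0.0, gamma=1.2):
    """Virial temperature of Eq. (8); M15 = M / (1e15 h^-1 Msun)."""
    Delta_c = 18 * math.pi**2 * Omega0**0.4   # overdensity fit used in Sec. 3.1
    return (5.2 * gamma * (Omega0 * Delta_c / (18 * math.pi**2))**(1 / 3)
            * M15**(2 / 3) * (1 + z_f))

print(kT_keV(1.0))   # ~3.6 keV for a 1e15 h^-1 Msun cluster formed at z_f = 0
```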
Since the above $`T_X`$–$`M`$ relation is the most important ingredient in translating the Press–Schechter mass function into the XTF, the reliability of the conventional cluster abundance crucially depends on the applicability of this relation. Figure 5 plots the $`T_X`$–$`M`$ relations for non-radiative and multiphase models at $`z=2`$, 1 and 0. In order to avoid the possible artificial two-body heating effect, we show the results for clusters of $`M>10^{13}M_{\odot }`$ and $`M>10^{14}M_{\odot }`$ in L75 and L150 models, respectively.
This figure implies three major conclusions; first, comparison of the upper and middle panels indicates that the current numerical resolution is sufficiently good and the $`T_X`$–$`M`$ relation seems to be converged. Second, the mass weighted temperature $`T_X^\mathrm{m}`$ is well fitted to equation (8) with $`\gamma =1.2`$, while the emission weighted temperature $`T_X`$ is systematically higher for the same $`M`$. Finally, the simulated $`T_X`$–$`M`$ relation is almost unaffected by the phenomenological (in the current simulation) treatment of the thermal evolution of the ICM gas, and the results of L150A and L150UJ are almost identical. We also verify that the results of L150CJ are almost identical to those of L150UJ implying that the presence of a UV background does not affect our conclusions. This is the first successful check of the relation, made possible by the sufficiently large volume and good resolution of our simulations.
### 3.3 Luminosity–temperature relation
In contrast to the $`T_X`$–$`M`$ relation, the luminosity–temperature relation of X-ray clusters is not easy to predict. This is because the X-ray luminosity is proportional to the square of the gas density, and thus sensitive to the thermal evolution of the ICM. A simple self-similar model, which predicts $`L_X\propto T_X^2(1+z_\mathrm{f})^{3/2}`$ (Kaiser, 1986), is shown to be completely inconsistent with the observed relation $`L_X\propto T_X^\alpha (1+z)^\zeta `$ where $`2.6\lesssim \alpha \lesssim 3.0`$ and $`\zeta \simeq 0`$ (Edge & Stewart, 1991; David et al., 1993; Markevitch, 1998).
The $`L_X`$–$`T_X`$ relation of our simulated clusters (Fig. 6) also exhibits the difficulty of obtaining a reliable estimate of the luminosities. While it is reasonable that the results are sensitive to the ICM thermal evolution, they are also affected by the numerical resolution. The cluster luminosities in $`L_{\mathrm{box}}=150h^{-1}`$ Mpc models are systematically underestimated relative to those in $`75h^{-1}`$ Mpc models. Since we employ equal-mass particles in the simulations, the resolution problem would be more serious for smaller clusters. Actually this explains why the L150A model produces $`L_X\propto T_X^3`$ accidentally although this non-radiative model should result in $`L_X\propto T_X^2`$. Thus we conclude that a reliable estimate of the X-ray luminosities requires much higher resolution than ours, especially for less luminous clusters, in addition to a more physical implementation of the galaxy formation process. Therefore, we disagree with Pearce et al. (1999b) who concluded that the effect of cooling suppresses the X-ray luminosities; while our results do not reach convergence either, we find the opposite trend, i.e., the luminosities increase with the cooling (Figs. 1 and 6). After all, since their resolution is similar to ours, their results cannot escape from the resolution problem, and it is premature to draw any conclusion on the luminosities from cosmological SPH simulations.
### 3.4 X-ray Temperature Function
Finally we show the XTFs in non-radiative and multiphase models at redshift $`z=0.0`$ and $`1.0`$ (Fig. 7) as a function of the emission- and mass-weighted temperatures. These should be compared with the analytical prediction on the basis of the Press–Schechter mass function (Press & Schechter, 1974) and the $`T_X`$–$`M`$ relation mentioned above. Since we have already showed that the $`T_X`$–$`M`$ relation agrees well with the analytical expectation, our simulated clusters in cosmological volumes can examine the statistical reliability of the analytical prediction of the XTF for the first time. Figure 7 implies that the analytical and numerical results agree well with each other almost independently of the ICM thermal evolution model, if we adopt $`\gamma \simeq 1.6`$ and $`1.2`$ for XTFs in terms of emission- and mass-weighted temperatures, respectively. It should also be noted that since the XTFs from different numerical resolutions are nearly the same within the statistical errors, we can state that the XTFs from our simulations do not suffer from any serious numerical artifacts. These results justify the use of a simple analytical model for the cluster abundance extensively applied before (White, Efstathiou & Frenk, 1993; Viana & Liddle, 1996; Eke, Cole & Frenk, 1996; Kitayama & Suto, 1997; Kitayama, Sasaki & Suto, 1998). The observed XTF of local ($`z<0.1`$) clusters by Markevitch (1998) is also shown in Figure 7 and is lower by a factor of 2–3 than the simulated ones at $`T_X\simeq 3`$–9 keV.
## 4 Conclusions
On the basis of a series of large cosmological SPH simulations, we have specifically addressed the reliability of the analytical predictions for the statistical properties of X-ray clusters. In order to distinguish numerical artifacts from real physics, we performed simulations with two different numerical resolutions. Our main findings are summarized as follows:
(i) The inclusion of radiative cooling in the high-resolution simulations substantially changes the luminosity of simulated clusters. Without implementing a galaxy formation scheme, this leads to an artificial overcooling catastrophe as demonstrated by Suginohara & Ostriker (1998). With a phenomenological prescription of cold gas decoupling like that of Pearce et al. (1999a), however, the overcooling is largely suppressed. Nevertheless the predicted X-ray luminosities of clusters are not reliable even at the order-of-magnitude level.
(ii) In contrast to the huge uncertainties in the X-ray luminosities, the temperature of simulated clusters is fairly robust both to the ICM thermal evolution and to the numerical resolution, and is in fact in good agreement with the analytic predictions commonly adopted. We also showed that the emission-weighted temperature is 1.3 times higher than the mass-weighted one.
(iii) The analytical predictions for the X-ray temperature function translated from the Press–Schechter mass function are fairly accurate and reproduce the simulation results provided that the fudge factor $`\gamma =1.2`$ (1.6) is adopted in the mass-weighted (emission-weighted) temperature–mass relation (8). The XTFs simulated in the cosmology adopted in this paper are slightly higher than the observed one of local clusters by Markevitch (1998).
Our simulations have produced statistically unbiased synthetic catalogues of clusters of galaxies by virtue of their large simulation volume and sufficient numerical resolution. Thus, they can serve as theoretical references to be compared with the results from future cluster observations with X-ray satellites including XMM and Astro-E, and can contribute to probing cosmological parameters from observed cluster abundances in due course. Additional radiative processes such as heat conduction in the ICM and energy feedback from supernova explosions, which may considerably affect the physical properties of the ICM, must be considered in future work in order to resolve the discrepancy in the $`L`$–$`T`$ relation. In addition, since we consider only one fairly specific cosmological model, it is necessary to perform simulations for other cosmological models.
We thank an anonymous referee for pointing out the importance of the two-body heating in the simulations. K.Y. and Y.P.J. gratefully acknowledge the fellowship from the Japan Society for the Promotion of Science. Numerical computations were carried out on VPP300/16R and VX/4R at the Astronomical Data Analysis Center (ADAC) of the National Astronomical Observatory, Japan, as well as at RESCEU (Research Center for the Early Universe, University of Tokyo) and KEK (High Energy Accelerator Research Organization, Japan). This research is supported by Grants-in-Aid by the Ministry of Education, Science, Sports and Culture of Japan to RESCEU (07CE2002), and by the Supercomputer Project (No.99-52) of KEK. |
# The Stellar Content of the Halo of NGC 5907 from Deep HST NICMOS Imaging

Footnote 1: Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
## 1 Introduction
The presence of dark matter halos around galaxies is well-established through a number of observations. These include the flat rotation curves of spiral galaxies, the velocities of globular clusters and satellites around their host galaxies, the properties of hot X-ray gas around ellipticals, and mass-to-light ratio measurements from gravitational lensing (e.g. reviews by Sackett 1996, Ashman 1992, Trimble 1987). However, despite this wealth of data demonstrating the existence of massive dark halos, their composition remains unknown.
The total mass density inferred for galaxy halos is roughly $`\mathrm{\Omega }_{galaxies}\simeq 0.2`$ (e.g. Bahcall, Lubin, & Dorman 1995). This number comes from the combination of the observed luminosity density in the local universe (e.g. Loveday et al. 1992, Efstathiou et al. 1988) with the assumption that typical galaxy halos extend to $`\sim 200`$ kpc, as suggested by observations of satellites of spiral galaxies (Zaritsky et al. 1993), and have $`(M/L)_B\simeq 100h`$ within this radius around spirals and a factor of a few higher around ellipticals (e.g. Mushotzky et al. 1994). The best constraint on the corresponding value of the mass density in baryons comes from measurements of deuterium in quasar absorption lines, from which $`\mathrm{\Omega }_Bh^2\simeq 0.02`$ is derived (Burles & Tytler 1998a,b). For current estimates of $`H_0=70\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ (Mould et al. 1999), this baryonic density is smaller than the total mass density inferred for galaxy halos, indicating that some or most of the dark matter around galaxies is in the form of exotic particles. Nevertheless, $`\mathrm{\Omega }_B`$ is not negligible compared to $`\mathrm{\Omega }_{galaxies}`$, leaving room for a substantial contribution from baryons to the massive dark halos around galaxies.
Photometry of edge-on spiral galaxies in search of massive halos has been done since the 1970s, originally with TV cameras and photographic plates (e.g. Davis 1975). Faint, extended halo light is easiest to detect in thin, edge-on spirals because the regions above and below the galactic plane are not contaminated through projection effects by the brighter stellar disk. Skrutskie et al. (1985) set upper limits on faint $`V`$-band and, using large apertures, $`K`$-band emission in the late type edge-on spirals NGC 2683, NGC 4244 and NGC 5907, concluding that no more than $`1/3`$ of an isothermal dark halo could be composed of a luminous baryonic component. Refined halo mass models for NGC 5907 based on galaxy parameters derived in $`H`$ and $`K`$ from pixelated near-infrared detectors and the Skrutskie et al. (1985) upper limits for halo light increased the baryonic upper limit to $`2/3`$ (Barnaby & Thronson 1994).
NGC 5907 is a particularly interesting case since it exhibits a flat rotation curve well beyond its Holmberg radius (Sancisi & van Albada 1987, Sofue 1994), which allows a good estimate of its dynamical mass to be made. The thin stellar disk is slightly warped perpendicular to the line-of-sight. In a pioneering CCD photometry study of very low surface brightness features around edge-on spiral galaxies by Morrison, Boroson, & Harding (1994), faint extended $`R`$-band emission was discovered around NGC 5907. The extended light is unlike any known thick disk, both in terms of its shallow radial profile and its low surface brightness (Sackett et al. 1994, Morrison 1999). However, the faint NGC 5907 light is well fit with a halo-like profile that is moderately flattened toward the plane of the galaxy ($`c/a\sim 0.5`$) and a radial volume density profile $`\rho \propto r^{-2.3}`$, similar to that inferred for the massive halo from the rotation curve (Sackett et al. 1994). A similar flattening has also been tentatively observed in the globular cluster system of NGC 5907 in a recent study with HST by Kissler-Patig et al. (1999).
The $`R`$-band radial profile of the NGC 5907 halo is considerably shallower than that of known stellar populations in the halos of the Milky Way and M31, which are much steeper than that of the dark halo mass inferred from their rotation curves and other dynamical measures. Specifically, in the Galaxy the RR Lyrae distribution falls off as roughly $`r^{-3.5}`$ (Saha 1985) and the globular cluster system as $`r^{-3}`$ (Zinn 1985) or $`r^{-3.5}`$ (Harris & Racine 1979). In M31, the globular clusters are distributed like $`r^{-3}`$ (Racine 1991), while red giant branch stars appear to have a profile falling at least as steeply as $`r^{-3.8}`$ (Reitzel, Guhathakurta, & Gould 1998). In contrast, various kinematical and lensing constraints suggest that the total mass distribution of our Galaxy and other spirals has a much shallower radial profile, perhaps $`r^{-1}`$ within a few kpc and tapering to roughly $`r^{-2}`$ out to at least 50 kpc and perhaps much beyond (Sackett 1996, 1999 and references therein; Zaritsky 1999 and references therein).
The existence of this unusual stellar component with a shallow radial profile in NGC 5907 has been confirmed by a number of independent observations by other groups, both in the near-infrared (Rudy et al. 1997, James & Casali 1998) and at other optical wavelengths (Lequeux et al. 1996, 1998, Zheng et al. 1999). Although different proposals have been made for the origin of the extended light in these studies, all radial profiles agree at surface brightness levels of $`R\sim 27`$ mag/arcsec<sup>2</sup>, corresponding to a height above the plane of about $`120^{\prime \prime }`$ (8 kpc for a distance of 14 Mpc). Lequeux et al. report optical colors for this shallow, extended population of $`(B-V)\sim 1.0`$ and $`(V-I)\sim 1.4`$, as red as typical elliptical galaxies, while red $`J-K`$ colors of $`\sim 1-1.5`$ have been reported in the infrared studies (Rudy et al. 1997, James & Casali 1998). At longer wavelengths of 3.5-5 $`\mu `$m, Yost et al. (1999) have recently reported a non-detection around NGC 5907 at $`180^{\prime \prime }`$ to $`540^{\prime \prime }`$ (12-37 kpc). Combining this result with model atmospheres, they conclude that hydrogen-burning stars contribute no more than 15% of the dark mass within this region. Although this non-detection at 3.5-5 $`\mu `$m does not coincide spatially with the detections at $`2.2\mu `$m, the negative result at longer wavelengths suggests that whatever produces the faint optical light around NGC 5907 does not emit strongly at these wavelengths and probably does not have enough mass to account for the dark matter halo inferred from the rotation curve.
The origin of the stellar halo of NGC 5907 raises many puzzles. Its shallow radial profile differs from those of known disk populations, and its red colors differ from those of known halo populations. The relatively red color implies either an initial mass function (IMF) favoring extremely low mass dwarfs ($`<0.2M_{\odot }`$) or a normal, metal-rich, old stellar population. Stellar halos are expected to be metal-poor, as observed in the Milky Way, because halo formation is believed to occur early in the life of a galaxy, before the majority of metals have been generated in nuclear processes in stellar interiors. But an IMF as strongly biased towards low mass dwarfs as is required to explain the halo colors of NGC 5907 has not been observed in globular clusters or in the local halo population of the Milky Way (e.g. King et al. 1998 and references therein). On the other hand, if the red colors are generated by an old, metal-rich population, one must explain how such a population came to reside in the halo, where metal enrichment is thought to be low.
One possibility proposed by Lequeux et al. (1998) is that such a halo could result from accretion of a metal-rich population from a low-mass elliptical after a tidal encounter with NGC 5907. If such an encounter could disrupt the elliptical without damaging the thin disk of NGC 5907, the debris may naturally settle into a halo configuration similar to what is observed. Alternatively, Fuchs (1995) suggests that the extended halo population originates from the dynamical response of the stellar spheroid to the dark matter halo in which it is embedded. Zheng and collaborators suggest that the faint light seen in their deep observations of NGC 5907 (and by inference the observations of others as well) may be a combination of effects including confusion from a very narrow arc apparently associated with the galaxy, an unseen but postulated face-on warp, and stellar foreground confusion. If these suggestions, all of which involve normal stellar populations, are correct, the surface brightness observed from the ground should be associated with a substantial number of giants that can be resolved from space.
In this paper, we present and discuss the implications of NICMOS observations of the halo of NGC 5907, designed to detect the many individual bright giants expected to contribute significantly to the halo light if it is composed of a stellar population with an initial mass function like that observed locally and has a distance and colors consistent with existing observations. In Section 2, we describe the observations, the data reduction and analysis (including the detection limits), and show the resulting mosaic of our images of the stellar halo of NGC 5907. The comparison of these observations to models of the stellar population of this component is presented in Section 3. This section includes a detailed discussion of the effects of distance, metallicity, and initial mass functions on these models and thus on the interpretation of our data. The implications of the absence of stellar sources in our H-band images of the observed stellar halo of NGC 5907 are discussed in Section 4.
## 2 Observations and Data Reduction
We used the near-infrared camera (NICMOS; MacKenty et al. 1997) on the Hubble Space Telescope (HST) to observe a field in the halo of NGC 5907 during two separate observing windows on 1998 May 23 and 1998 July 10-11. A total of 35,326 s of integration time over 12 orbits was obtained. One dark exposure of zero length was taken at the beginning of each orbit in the ACCUM mode to reduce persistence effects from cosmic rays, and the remainder were taken through the F160W filter in the MULTIACCUM mode with the SPARS64 sequence. Between the two separate observing windows, the position angle of the telescope was free to rotate and rolled clockwise by $`43.7\mathrm{deg}`$. Observations were obtained with the NIC2 camera, which has a field of view of $`19.2^{\prime \prime }\times 19.2^{\prime \prime }`$ with 256 pixels on a side and $`0.075^{\prime \prime }`$ per pixel. The telescope was pointed so that the geometrical center of the NIC2 camera was $`\alpha `$ = 15<sup>h</sup>16<sup>m</sup>01<sup>s</sup>.54, $`\delta `$ = 56$`\mathrm{deg}`$ 20′31″.7 (J2000). This location is 75″ away from the center of the galaxy, perpendicular to the plane of the galaxy (Figure 1), corresponding to a distance above the plane of the galaxy of 5.1 kpc based on a distance to NGC 5907 of 14 Mpc (see Section 3.3 for a discussion of the distance to the galaxy). This position was chosen to be outside both the region of disk confusion and the thin, long arc found by Shang et al. (1998), but well inside the region where extended light has been reported in BVRIJK.
The images were processed through CALNICA, the standard NICMOS pipeline, which performs bias subtraction, dark-count correction, and flat-fielding. Our own subsequent data reduction consisted of masking bad pixels, correcting for constant offsets between the four quadrants of the NIC2 camera, and subtracting a scaled master "sky frame" constructed from the average of all of the individual images. The images were then registered and combined. Due to the different position angles, the images of visit 1 were combined separately from those of visits 2+3. The images of visits 2+3 were registered with respect to each other using the brightest object in the field and co-added to improve the signal-to-noise. The first image of visit 3 was not included in the combined image because of bad trailing due to the loss of one of our guide stars during that exposure. The visit 1 mosaic, generated using the world coordinate system in the image, was then combined with that of visits 2+3, after rotating the latter by an angle of $`43.7\mathrm{deg}`$ and registering on the brightest object in the field. The rotation was done after mosaicing because rotation spreads bad pixels and cosmic ray hits in a way that makes them difficult to remove accurately when combining individually rotated images. Each resulting mosaic covers a field of view of $`23.75^{\prime \prime }\times 20.25^{\prime \prime }`$. The final combined F160W image of our field is shown in Figure 2. The signal-to-noise ratio is higher at the center of the image, where the total integration time is longer, and lower at the edges of the frames.
### 2.1 Photometry
Photometry was performed on the final combined image using the version of the automated star-detection algorithm DAOPHOT (Stetson 1992) implemented in the IRAF package. In order to allow a uniform threshold for object detection to be applied across the whole image, each pixel in the image was normalized by the square root of the exposure time at that location. The object detection was performed with the subroutine DAOFIND and a detection threshold of 5 $`\sigma `$ above the local background level. Objects were also required to have DAOPHOT parameters $`-0.5<`$ SHARP $`<0.5`$ and CHI2 $`<5`$. These cuts are aimed at eliminating features that have a significantly smaller or larger extent than the PSF (e.g. cosmic rays or single pixel defects and unresolved blends or large galaxies, respectively). Only one object meets these criteria and is a potential star. This object is also the only object with a FWHM less than twice the FWHM of the PSF.
The single stellar object detected is marked on Figure 2 with a circle. The total magnitude of that star is $`m_{F160W}=23.6`$. This is determined by obtaining photometry in an aperture of 1.7 pixels in diameter and using an aperture correction derived either from TINYTIM models or NIC2 observations of brighter stars in other fields (these give the same results). The magnitudes in the F160W filter are very similar to typical ground-based H-band magnitudes, with an uncertainty in the calibration of NICMOS photometry for an object with unknown color of about $`10\%`$ (Colina & Rieke 1997). Due to the high Galactic latitude of NGC 5907 ($`b=51\mathrm{deg}`$), models of the stellar distribution in the Galaxy yield a negligible probability of a foreground star in our field (e.g. Cohen et al. 1993). However, there are few constraints on Galactic sub-stellar (e.g. brown dwarf) populations at these very faint infrared magnitudes. Based on deep NICMOS pointings in blank areas of the sky (e.g. Yan et al. 1998), we expect roughly ten faint field galaxies in the images, consistent with the number of resolved sources we detect.
### 2.2 Artificial Star Tests
In order to determine the magnitude limit of our photometry, we performed a series of artificial star tests on the images. Specifically, we added artificial stars of known magnitude into the final NIC2 image used for the analysis above, and then performed the same object detection and photometry on these images with the artificial stars that was used on the original image. Because of the lack of stellar objects in our field, we used a PSF determined from other NIC2 data (Marleau et al. 1999) to create the artificial stars. For each 0.1 magnitude bin, 105 artificial stars were added to the original image with a random spatial distribution. The object identification and output magnitudes for the artificial stars that were returned by the same procedures used on the real data were then compared to the input list. This was repeated three times for each 0.1 magnitude bin, and the results averaged for each bin. A plot of the recovery fraction (completeness) as a function of input magnitude is given in Figure 3. As shown in this plot, the average completeness of the photometry over the full image is $`50\%`$ for objects with $`m_{F160W}=24.9`$.
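The bookkeeping of such tests is simple to sketch. The following is a minimal illustration, not our actual pipeline: the `detect` function stands in for the full DAOFIND-plus-photometry procedure, and the toy roll-off parameters are arbitrary; only the 105 stars per 0.1-mag bin and three trials per bin follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def completeness(detect, m_bins, n_stars=105, n_trials=3):
    """Recovery fraction vs. input magnitude for artificial-star tests."""
    frac = []
    for m in m_bins:
        # n_trials independent injections of n_stars stars per magnitude bin
        runs = [np.mean([detect(m) for _ in range(n_stars)])
                for _ in range(n_trials)]
        frac.append(np.mean(runs))
    return np.array(frac)

# toy stand-in detector: 50% completeness at m = 24.9 with a soft roll-off
toy_detect = lambda m: rng.random() < 1.0 / (1.0 + np.exp((m - 24.9) / 0.25))
m_bins = np.arange(22.0, 26.5, 0.1)
print(completeness(toy_detect, m_bins)[::10])  # sample of the curve
```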
## 3 Results
### 3.1 Fiducial Model
The primary goal of our work is to constrain the nature of the halo of NGC 5907 by comparing the observed star counts in our NICMOS image to the star counts expected for various models of the stellar population of the halo of NGC 5907. Our procedure is to take a luminosity function in F160W for a given stellar population, either from observations such as those in the Galactic bulge, or from theoretical models, place this population at the distance of NGC 5907, and normalize the luminosity function to match the observed surface brightness within the region of our NICMOS data. We then convert this into a prediction of the number of stars expected in our NICMOS images by multiplying the predicted distribution in apparent magnitudes with the completeness function determined above.
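The structure of this calculation can be summarized in a short numerical sketch. Everything quantitative below is a placeholder (the luminosity function shape, the normalization target, and the completeness roll-off are illustrative, not the Tiede et al. 1995 data); only the sequence of steps follows the procedure just described.

```python
import numpy as np

# Illustrative H-band luminosity function phi(M); the true shape would
# come from Baade's Window observations or population synthesis models.
M = np.arange(-8.0, 2.0, 0.1)            # absolute F160W magnitudes
phi = 10 ** (0.3 * M)                     # placeholder power-law LF shape

mu = 5.0 * np.log10(14e6 / 10.0)          # distance modulus for d = 14 Mpc
m = M + mu                                # apparent F160W magnitudes

# Normalize the LF so its integrated flux reproduces the observed surface
# brightness within the field (here an arbitrary total-flux target).
flux_per_star = 10 ** (-0.4 * m)          # relative fluxes
target_flux = 10 ** (-0.4 * 17.0)         # e.g. total field flux at m_tot = 17
norm = target_flux / np.sum(phi * flux_per_star * 0.1)
counts = norm * phi * 0.1                 # stars per 0.1-mag bin in the field

# Fold in a completeness curve (50% at m = 24.9, roughly as in Fig. 3).
completeness = 1.0 / (1.0 + np.exp((m - 24.9) / 0.3))
print(f"expected detectable stars: {np.sum(counts * completeness):.1f}")
```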
We begin by considering a fiducial model, based as closely as possible on the observed properties of NGC 5907 and its extended light distribution. Specifically, our fiducial model has a surface brightness within the NICMOS region of $`R=25.85`$ mag/arcsec<sup>2</sup> as measured by Morrison et al. (1994), a distance to NGC 5907 of 14 Mpc based both on Tully-Fisher and on models for deviations from the Hubble flow in the region around NGC 5907 (see Section 3.3), and an H-band stellar luminosity function adapted from studies of Baade's window (Tiede et al. 1995), which has a metallicity similar to that suggested by the observed broad-band colors of the extended stellar light in NGC 5907. This fiducial model predicts that more than 100 stars should be seen in the combined NICMOS image, in stark contrast to our observation of one candidate star. This is graphically demonstrated in Figure 4, in which we simulate the expected appearance of the fiducial model in our NICMOS image, accounting for the expected Poisson shot noise. The many bright giants expected for the simplest assumptions about the stellar population in the halo of NGC 5907 are not present in the data. Because of the stark difference between the observations and the simplest prediction, we consider in turn each of the components that go into the prediction. A fundamental component of our calculation is the H-band luminosity function, which depends on the initial mass function (IMF), metallicity, and age of the stellar population. The distance to NGC 5907 and the surface brightness of its halo within our NICMOS field also play a role, and we consider all of these below.
### 3.2 Luminosity Function, IMF, and Metallicity
For the fiducial case shown in Figure 4, we adopt an H-band stellar luminosity function constructed from the K-band luminosity function observed in Baade's Window (Tiede et al. 1995), adjusted to F160W by a small color term. The choice of a luminosity function based on the Galactic Bulge is motivated by the similar metallicities of stars in Baade's Window (e.g. McWilliam & Rich 1994) and the halo of NGC 5907, as inferred from its optical colors as described in detail below. This approach also has the benefit of comparing the data to an empirical luminosity function. As demonstrated in Figure 4, the luminosity function based on observations in Baade's Window dramatically fails to account for the data.
Because of the failure of the fiducial model, we explored a range of stellar initial mass functions and metallicities to search for sets of parameters that are consistent with the data. We generated theoretical F160W luminosity functions using the single-burst 12 Gyr-old stellar population synthesis models of Bruzual & Charlot (1998) with the semi-empirical stellar spectral energy distributions of Lejeune, Cuisinier, & Buser (1997). The models are defined for \[Fe/H\] = $`-2.3,-1.7,-0.7,-0.4,0.0,+0.4`$ with a stellar mass range of $`0.1-125`$ $`M_{\odot }`$. We also consider a range of stellar initial mass functions, parametrized as a single power law with slope $`\alpha `$, such that $`dN\propto M^{-(\alpha +1)}dM`$, where a Salpeter IMF is $`\alpha =1.35`$. We caution that the stellar population models become unreliable for $`\alpha \gtrsim 4`$ because at these very steep slopes the stars at the bottom of the main sequence become the principal contributors to the integrated light. In this case, the accuracy of the model predictions rests on the assumed colors of old late-type M dwarfs and the shape of the IMF near the hydrogen burning limit, both of which are poorly constrained locally, much less as a function of metallicity. Nevertheless, the qualitative behavior of the models for cases of extremely large $`\alpha `$ should be correct, even if they are more uncertain quantitatively.
Figure 5 shows H-band luminosity functions predicted for our NICMOS observations of the halo of NGC 5907 as a function of IMF slope and metallicity, given the ground-based observations of the R-band surface brightness of $`25.85`$ magnitudes/arcsec<sup>2</sup> (§3.4), a distance to NGC 5907 of 14 Mpc (§3.3), and the completeness function shown in Figure 3 (§2.2). Figure 5 shows that only extremely steep IMFs or very low metallicities are consistent with our NICMOS data, in which only one possible star is detected. In the former case, the absence of giants originates in an extraordinarily high dwarf-to-giant ratio in the stellar halo of NGC 5907, while in the latter, low-metallicity giants are simply too faint to be detected because of the dependence of the H-band brightness of giants on metallicity.
This result can be quantified further by considering the Poisson statistics of either one or zero detected stars. For metallicities of \[Fe/H\] $`\gtrsim -0.7`$, even the largest $`\alpha `$ we consider is discrepant with the observation of one star at the $`>99.99\%`$ confidence level. Because of the uncertainties in the stellar population models for these extraordinarily dwarf-dominated IMFs discussed above, we simply adopt $`\alpha >4`$ as the result for \[Fe/H\] $`\gtrsim -0.7`$. Alternatively, if \[Fe/H\] is sufficiently low, then the brightest giants become too faint to detect. If one adopts the usual Salpeter IMF with $`\alpha =1.35`$, this requires \[Fe/H\] $`\lesssim -1.7`$. As can be seen in Figure 5, the transition from the presence of giants at \[Fe/H\] $`\gtrsim -0.7`$ to the absence of giants at \[Fe/H\] $`\lesssim -1.7`$ is fairly insensitive to modest changes in $`\alpha `$. We also note that if the light in NGC 5907 is assumed to come from a solely low metallicity population, the one stellar object is unlikely to be a star in the halo of NGC 5907, since the low metallicity models predict that any stars that are detected are found at the faintest limit of the data, while the stellar object is more than one magnitude brighter than our $`50\%`$ completeness limit.
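These confidence levels follow from elementary Poisson statistics. A minimal check (the mean of 100 is taken from the fiducial prediction above; the 99% threshold value is found numerically):

```python
import math

def prob_at_most(k, mu):
    """Poisson probability of observing k or fewer stars given mean mu."""
    return sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

mu = 100.0                     # giants predicted by the fiducial model
p = prob_at_most(1, mu)        # chance of seeing at most one star
print(f"P(<=1 | mu=100) = {p:.2e}")   # ~3.8e-42, far beyond 99.99% exclusion

# The largest Poisson mean still consistent with <=1 star at 99% confidence:
mu99 = 6.64                    # solves P(<=1 | mu) = 0.01 numerically
print(f"mu99 = {mu99}: P = {prob_at_most(1, mu99):.3f}")
```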
Tighter constraints on the stellar population of the extended light around NGC 5907 can be placed by requiring the integrated colors of the halo population to be consistent with the ground-based colors of the halo light reported by Lequeux et al. (1998). Figure 6 compares the observed (B$`-`$V) and (V$`-`$I) colors to the BC98 model populations over the same range of $`\alpha `$ and \[Fe/H\] shown in Figure 5. With the NICMOS data alone, it is possible to account for the absence of giants through very low metallicity, since then the giants would be too faint to detect in our images. Figure 6 shows that the published optical colors argue against the possibility of a very low metallicity with a normal IMF. We also note that the near-infrared (J$`-`$K) colors of Rudy et al. (1997) and James & Casali (1999) are even redder than the models that can account for the optical colors, which are already red. While it is unclear whether the problem is with these challenging observations, or with difficulties in the models for either the reddest metal-rich giants (important for populations with normal IMFs) or the latest-type M dwarfs (important for the steep IMFs), it is clear that the observational evidence to date suggests that the colors of the NGC 5907 halo are red. Therefore, the joint NICMOS star count and color constraints appear to require a dramatically higher dwarf-to-giant ratio than given by a typical IMF.
More quantitatively, in order to be consistent with the $`99\%`$ confidence limit of our observation of only a single star in the NICMOS field, the ratio of bright giants to fainter stars must be more than 100 times lower than that expected for a stellar population with a Salpeter IMF and a metallicity that is consistent with the optical colors at the $`2\sigma `$ level. This constraint translates to $`\alpha >3`$ for a simple power-law parametrization of the IMF. Such steep IMFs cause low metallicity models which are consistent with the absence of bright giants (\[Fe/H\] $`\lesssim -1.7`$) to become red enough to begin to match the optical colors. Because this result indicates a much higher ratio of dwarfs to giants than observed in Galactic globular clusters or expected from the stellar initial mass function in star forming regions of the Galaxy, we examine the robustness of each of the steps taken in obtaining this result.
### 3.3 Distance
The detection of bright giants in NGC 5907 clearly depends on the distance to the galaxy. Using both the H- and R-band Tully-Fisher relations, as well as the observed radial velocity of NGC 5907 combined with a model of the expected peculiar velocity at its location, we find that NGC 5907 is at a distance of 14 Mpc, with an uncertainty of $`20\%`$. Half of this uncertainty is due to the intrinsic scatter of the techniques we use to determine the distance to NGC 5907. The other half is due to the current uncertainty in the extragalactic distance scale, as manifested in the uncertainty in the distance to the Virgo cluster or, similarly, in the Hubble constant. This comes primarily from the uncertainty in the absolute calibration of the Cepheid Period-Luminosity relation. A distance to NGC 5907 of 14 Mpc is somewhat greater than that typically adopted in earlier work (e.g. Morrison et al. 1994) and in our own preliminary presentation of this work at conferences (Liu et al. 1998).
#### 3.3.1 H-band Tully-Fisher
The H-band Tully-Fisher relation is one of the most reliable methods for determining distances to edge-on spiral galaxies like NGC 5907. The Tully-Fisher relation has been heavily studied and tested, resulting in well-established relationships between spiral galaxy luminosity and line-width (e.g. Jacoby et al. 1992). Moreover, the H-band offers a significant advantage in its reduced sensitivity to internal extinction, which is the dominant source of uncertainty for optical determinations of the magnitudes of edge-on galaxies. Even in H, the internal extinction is probably not zero for galaxies as inclined as NGC 5907 (e.g. Moriondo, Giovanelli, & Haynes 1998, Tully et al. 1998).
One way to determine the H-band Tully-Fisher distance to NGC 5907 is to adopt the H-band Tully-Fisher relation given by the Hubble Space Telescope Key Project on the Hubble Constant, which is based on Cepheid distances to 21 spiral galaxies (Sakai et al. 1999). Specifically, the Key Project team found an H-band Tully-Fisher relation of $`H_{-0.5}^c=-11.03(\mathrm{log}W_{20}^c-2.5)-21.74`$, where $`H_{-0.5}^c`$ is the H-band magnitude within an aperture that is a fixed fraction of the B-band diameter, as defined by Aaronson, Huchra, & Mould (1979), and $`W_{20}^c`$ is the velocity width of HI at $`20\%`$ of the maximum HI flux, corrected for inclination. The observed dispersion in the relation is 0.36 magnitudes for the Key Project sample, similar to that found for samples of cluster spirals (e.g. Peletier & Willner 1991, 1993; hereafter PW91 and PW93). The slope of this H-band Tully-Fisher relation is similar to, but slightly steeper than, that found for spirals in Virgo and Ursa Major by PW91 and PW93, and by the earlier work of Aaronson, Huchra, & Mould (1979). Thus, the Sakai et al. relation will give a slightly larger distance for NGC 5907, which has a larger velocity width than most galaxies in the sample, although the difference is well within the 0.36 magnitude internal scatter.
For NGC 5907 itself, we adopt the values of $`H_{-0.5}=7.58`$ and log $`W_{20}^c=2.690`$ given by Aaronson et al. (1982). The value of log $`W_{20}^c=2.690`$ is well determined for this very edge-on system (e.g. Schöniger & Sofue 1994). The H magnitude of NGC 5907 may be more uncertain, as the galaxy is edge-on and internal extinction is likely to play a role even in the H-band. Two recent studies have attempted to determine the effect of extinction for spiral galaxies in the near-infrared. Based on a large study of 154 spiral galaxies, Moriondo, Giovanelli, & Haynes (1998) find an extinction in H of approximately 0.15 magnitudes for a galaxy with the inclination of NGC 5907. A smaller study by Tully et al. (1998) in K finds 0.25 magnitudes of extinction in the K-band. We therefore adopt 0.2 magnitudes as the best estimate of the H-band extinction in NGC 5907, and note that the $`\sim 0.4`$ magnitude scatter in the Tully-Fisher relation is sufficient to account for the uncertainties in this correction. With log $`W_{20}^c=2.690`$ and $`H_{-0.5}^c=7.38`$, we find a distance modulus to NGC 5907 of 31.21, or 17.5 $`\pm 2.7`$ Mpc, based on the Key Project relation.
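For transparency, the arithmetic of this estimate can be reproduced in a few lines, using only the numbers quoted above:

```python
# Key Project H-band Tully-Fisher relation (Sakai et al. 1999)
logW = 2.690                  # corrected HI width for NGC 5907
H = 7.58 - 0.20               # aperture H magnitude, minus 0.2 mag extinction

M_H = -11.03 * (logW - 2.5) - 21.74    # absolute magnitude, ~ -23.84
mu = H - M_H                            # distance modulus, ~ 31.22
d = 10 ** ((mu - 25.0) / 5.0)           # modulus -> distance in Mpc, ~ 17.5

# the 0.36 mag scatter corresponds to roughly an 18% distance error,
# of the order of the +/- 2.7 Mpc quoted above
err = (10 ** (0.36 / 5.0) - 1.0) * d
print(f"mu = {mu:.2f}, d = {d:.1f} +/- {err:.1f} Mpc")
```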
An alternative is to use the slope found for spirals in Ursa Major and Virgo while still setting the zero point to give the same 16 Mpc distance to the Virgo cluster determined by HST Cepheid observations (Macri et al. 1999). Peletier & Willner studied roughly 25 spirals in each of these clusters and found a slope of $`H_{-0.5}^c=-10.2(\mathrm{log}W_{20}^c-2.5)`$. We adopt the same zero point as above for the Key Project sample, effectively setting the two Tully-Fisher relations equal for galaxies with log $`W_{20}^c=2.5`$. This approach gives a distance to NGC 5907 of 16.5 Mpc. The difference is due to the somewhat shallower slope found for the cluster surveys.
A third approach is to take advantage of more modern two-dimensional H-band photometry of NGC 5907 (Barnaby & Thronson 1992). Although most calibration work is done with the Aaronson et al. (1982) aperture data, PW91 define a Tully-Fisher relationship based on the H-band light within the region where the H-band surface brightness is brighter than 19 magnitudes/arcsec<sup>2</sup> ($`H_{19}`$). They find a Tully-Fisher relation for this isophotal magnitude of log $`(V_{20})=-0.098(H_{19}-9)+2.525`$, with the zero point set by $`d_{Virgo}=`$ 16 Mpc. From two-dimensional photometry, Barnaby & Thronson (1992) derive face-on surface brightnesses and scale lengths for NGC 5907. Since the $`H_{19}`$ magnitude of PW91 is defined within an isophote on the sky, it is dependent on galaxy inclination, with more light being encompassed for edge-on systems. We therefore translate the Barnaby & Thronson model to $`i=60\mathrm{deg}`$, which is the most likely inclination in a random sample of galaxies. In this case, the $`H_{19}`$ magnitude gives a distance of 12.9 Mpc based on the calibration above. Lower and upper limits to the distance derived from this technique can be obtained by translating to edge-on and face-on systems, giving a range in distances from 11.2 Mpc to 16.1 Mpc.
#### 3.3.2 R-band Tully-Fisher
A somewhat independent check on the distance derived above can be obtained by considering the R-band rather than the H-band in the Tully-Fisher method. Morrison et al. (1994) estimated a reddening-free total magnitude for NGC 5907 of $`m_R=9.1`$ by modelling the disk outside of the dust lane, and then extrapolating this model into the masked regions. Taking the R-band Tully-Fisher calibration of Pierce & Tully (1992, see also Jacoby et al. 1992) and the rotation velocity given above gives a distance of 14.3 Mpc. This R-band distance estimate is consistent with the H-band distance given above within the uncertainties of about 0.36 magnitudes for each technique.
#### 3.3.3 Peculiar Velocity and Observed Density Field
The scatter in Tully-Fisher is not negligible, so it is important to consider other constraints on the distance to NGC 5907. One approach is to use the observed redshift of NGC 5907 and the expected peculiar velocity for a galaxy at its location to estimate its true distance. This requires a model for the underlying density field, which has been estimated both from IRAS galaxy surveys (Davis, Nusser, & Willick 1996) and optical galaxy surveys (Baker et al. 1998), using the expansion technique of Nusser & Davis (1994). The predicted peculiar velocity for NGC 5907 is then $`-250`$ $`\mathrm{km}\mathrm{s}^{-1}`$ from the optical survey and $`-190`$ $`\mathrm{km}\mathrm{s}^{-1}`$ from the IRAS survey. For field galaxies like NGC 5907, the uncertainty in these peculiar velocities is dominated by the random dispersion of galaxies around the smooth Hubble flow, which is measured to be 120 $`\mathrm{km}\mathrm{s}^{-1}`$ (e.g. Baker et al. 1999). Combined with $`cz=667`$ $`\mathrm{km}\mathrm{s}^{-1}`$ for NGC 5907 (RC3, de Vaucouleurs et al. 1991) and $`H_0=70`$ $`\mathrm{km}\mathrm{s}^{-1}`$ Mpc<sup>-1</sup> for consistency with the Virgo distance above, we find distances of 13.1 and 12.2 Mpc, respectively, with an uncertainty of $`\pm 1.7`$ Mpc due to random motions of galaxies. We account for the additional systematic uncertainty introduced by the Hubble constant below.
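A compact check of these flow-field distances, using only the numbers just quoted:

```python
H0 = 70.0                      # km/s/Mpc, as adopted above
cz = 667.0                     # recession velocity of NGC 5907 (RC3)

for survey, v_pec in [("optical", -250.0), ("IRAS", -190.0)]:
    d = (cz - v_pec) / H0      # remove the predicted peculiar velocity
    print(f"{survey}: d = {d:.1f} Mpc")

# the 120 km/s random dispersion about the Hubble flow maps to +/- 1.7 Mpc
print(f"sigma_d = {120.0 / H0:.1f} Mpc")
```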
#### 3.3.4 Final Distance Estimate
To determine the distance to NGC 5907, we combine the Tully-Fisher and flow-field estimates in quadrature. For Tully-Fisher, we simply average the $`H_{-0.5}^c`$, $`H_{19}`$, and R-band estimates, for which we find 14.9 Mpc. Since the errors in these estimates are highly correlated, we retain an internal error of 0.36 magnitudes, or $`\pm 2.3`$ Mpc. Similarly, for the flow-field estimate, we simply average the results from the IRAS and optical galaxy predictions, which gives a distance of $`12.7\pm 1.7`$ Mpc. We then combine these in quadrature, obtaining an answer of $`13.5\pm 1.4`$ Mpc. Both techniques depend directly on the Hubble constant, for which we have adopted a value of $`70`$ $`\mathrm{km}\mathrm{s}^{-1}`$ Mpc<sup>-1</sup> (e.g. Mould et al. 1999). We take the uncertainty in this value to be $`12\%`$, accommodating the latest results on the calibration of the Cepheid Period-Luminosity relationship (e.g. Maoz et al. 1999). Combining this uncertainty in the overall calibration of the distance scale with the uncertainties intrinsic to the techniques applied to NGC 5907, we find that the distance to NGC 5907 is $`13.5\pm 2.1`$ Mpc. For most of the work in this paper, we round this up to 14 Mpc.
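The quadrature combination amounts to an inverse-variance weighted average; the few lines below verify the quoted numbers:

```python
import math

estimates = [(14.9, 2.3),      # Tully-Fisher average (Mpc, error)
             (12.7, 1.7)]      # flow-field average

weights = [1.0 / err ** 2 for _, err in estimates]
d = sum(w * val for w, (val, _) in zip(weights, estimates)) / sum(weights)
sigma = math.sqrt(1.0 / sum(weights))
print(f"combined: {d:.1f} +/- {sigma:.1f} Mpc")        # 13.5 +/- 1.4

# fold in the 12% distance-scale (Hubble constant) uncertainty
total = math.sqrt(sigma ** 2 + (0.12 * d) ** 2)
print(f"with H0 term: {d:.1f} +/- {total:.1f} Mpc")    # 13.5 +/- 2.1
```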
The model luminosity functions in Figures 4 and 5 are based on this distance. In order for the detection of at most one star in our NICMOS image to be consistent with a Salpeter IMF and within $`2\sigma `$ of published optical colors, NGC 5907 would have to be at a distance of more than 24 Mpc, which is $`5\sigma `$ larger than our estimated distance of $`14\pm 2`$ Mpc. We also note that the distance we derive is larger than previously published distances, so accounting for the absence of bright giants in the extended light around NGC 5907 would require even larger discrepancies with earlier work.
### 3.4 R-band Surface Brightness
The predicted star counts are normalized so that their integrated flux reproduces the observed R-band surface brightness; therefore uncertainties in the surface brightness propagate to uncertainties in the predicted NICMOS counts. The R-band surface brightness within our NICMOS pointing is $`25.85\pm 0.15`$ mag/arcsec<sup>2</sup>, based on a direct measurement from the reduced R-band image of Morrison et al. (1994), kindly provided by H. Morrison and J. Monkiewicz. Independent measurements of the surface brightness in narrower filters at roughly the same wavelengths give good agreement on the surface brightness of the diffuse stellar component out to $`R<27`$ mag/arcsec<sup>2</sup> (Zheng et al. 1999). Thus, the uncertainty in the surface brightness within our NICMOS field is small compared to other uncertainties and to the dramatic difference between the number of giants expected and the number observed in the NICMOS image.
## 4 Conclusions
The fundamental result of our investigation is that we detect only one unresolved object in our NICMOS images of the stellar halo of NGC 5907, compared to the more than 100 giant stars that are expected to be observable within this field, given the observed surface brightness and colors in the region of our pointing and the simplest assumptions about the stellar population of the halo and the distance to the galaxy. Taken at face value, this result indicates that the dwarf-to-giant ratio is about 100 times greater than that of typical stellar populations.
Given the dramatic nature of this result, we consider other options. A very low metallicity (\[Fe/H\] $`\lesssim -1.7`$) results in giants too faint to be seen in our observations. However, a stellar population with such a low metallicity and a Salpeter IMF has integrated-light colors that are more than 3$`\sigma `$ bluer than the $`(B-V)`$ and $`(V-I)`$ observations of Lequeux et al. (1998). Alternatively, if NGC 5907 is more distant than typically believed, then the giants may be too faint to be observed. However, for a stellar population with a Salpeter IMF and a metallicity that gives colors within $`2\sigma `$ of the observed optical colors, the distance must be more than 24 Mpc to be even marginally consistent with our detection of only one stellar object. This is $`5\sigma `$ greater than our estimate of $`14\pm 2`$ Mpc for the distance to NGC 5907, and even more discrepant with earlier work, which adopted closer distances (e.g. Morrison et al. 1994, Yost et al. 1999). Thus, only the "last resort" option of a stellar population with a very high dwarf-to-giant ratio appears able to account for the absence of resolved stars in our NICMOS image of the stellar halo of NGC 5907 without conflict with other observational data. Specifically, for a simple power-law parametrization of the IMF, $`\alpha >3`$ is required to be within $`2\sigma `$ of the published (B$`-`$V) and (V$`-`$I) colors and within the 99% confidence limit of the detection of at most one star.
If confirmed, this result has important implications for the nature of the diffuse light observed at $`z`$-heights of 4-8 kpc in NGC 5907. First, our observations do not support a number of proposed origins for this stellar component. Specifically, if this diffuse light originated from an accreted elliptical galaxy (Lequeux et al. 1998), was dynamically heated from a thinner stellar disk, or is a tidal warp or ring (Shang et al. 1998), many giants would have been observed in our NICMOS images of NGC 5907, since none of these populations is expected to have an extremely steep IMF or a very low metallicity. Although a larger distance to NGC 5907 would allow any model of the extended stellar light in NGC 5907 to be reconciled with the data, no extant model provides an a priori explanation of why several different techniques should underestimate the distance to NGC 5907. Very speculatively, if a high dwarf-to-giant ratio is the source of the absence of giants in our NICMOS images, and the IMF is assumed to be a power law extended to masses less than $`0.1M_{\odot }`$, then the R-band mass-to-light ratio is high enough that the observed halo can constitute the mass that produces the galaxy's rotation curve. However, such a model is inconsistent with the combination of a non-detection of the halo of NGC 5907 at 3.5-5 $`\mu `$m and current models of stellar atmospheres of stars at the hydrogen burning limit (Yost et al. 1999).
Because a dramatically dwarf-rich stellar population has not been observed elsewhere, we consider several additional ways to constrain the nature of the observed stellar halo of NGC 5907. Although a stellar population with low metallicity and a standard IMF is not consistent with the published optical and near-infrared colors, measurements of colors at faint surface brightnesses are difficult. An independent, direct test of the low metallicity hypothesis can be made through WFPC2 observations, which should reveal thousands of giants if the observed halo light in NGC 5907 comes from metal-poor stars with a standard IMF and the distance of the galaxy is consistent with existing estimates. However, if the WFPC2 observations yield a null result, it will be difficult to obtain further independent tests of the distance to NGC 5907. One possibility is to obtain NICMOS observations closer to the center of the galaxy, which should reveal some bright stars if the distance to the galaxy is not much greater than expected. Initial HST observations of its globular cluster system did not reveal a sufficient number of clusters to reliably use the peak of the globular cluster luminosity function as a distance indicator (Kissler-Patig et al. 1999). Planetary nebulae might be observable, although observations in the halo could be suspect if the IMF is as dramatically different as suggested here.
We thank Stéphane Charlot for providing us with the isochrone synthesis data for galaxy evolution. We thank Heather Morrison and Joe Silk for valuable discussions, and the referee for helpful suggestions. We also thank Heather Morrison and Jackie Monkiewicz for providing direct measurements of the optical surface brightness in their images within our NICMOS field. We acknowledge excellent assistance from Doug van Orsow at STScI in getting our program executed, and financial support from NASA through HST grants GO-07277 (SEZ and FRM) and AR-07523 (FRM). FRM also acknowledges an IoA postdoctoral research fellowship.
# A Simple BATSE Measure of GRB Duty Cycle
## Introduction
The term duty cycle in astrophysics is defined as "the fraction of time a pulsed beam is on" hopkins80 . This term is more appropriate for describing periodic emitters such as pulsars than it is for non-periodic, one-time emitters such as gamma-ray bursts (GRBs). GRB emission consists of pulses varying in intensity over a burst's duration, and is thus more conducive to a definition recognizing the continuous nature of GRB emission than to one in which emission is simply "on" or "off." A more appropriate definition of GRB duty cycle should describe the effectiveness of a GRB as an emitter during the time that it emits. We therefore define the GRB duty cycle as the average flux relative to the peak flux. This duty cycle definition can be described in terms of measured BATSE parameters; it is essentially fluence divided by the quantity peak flux times duration.
Fluence and duration are two of the three defining characteristics of the three GRB classes identified by statistical clustering techniques mukherjee98 . Spectral hardness is the third. Fluence (time-integrated flux) incorporates information about duration in its definition. Overlapping information appears to be contained in fluence and duration bagoly98 . For this reason, duty cycle (as defined here) is a potentially valuable probe for studying properties of the three GRB classes.
Properties of the three GRB classes, as determined from statistical clustering techniques mukherjee98 , are summarized in Table 1.
## Duty Cycle Definition
We define the duty cycle (DC) in terms of BATSE parameters as:
$$DC=\frac{S_{23}}{AP_{64}T_{90}}$$
(1)
Here, S<sub>23</sub> is the channel 2+3 fluence (time-integrated flux between 50 and 300 keV), T<sub>90</sub> is the duration spanning 90% of the GRB emission, and P<sub>64</sub> is the 64 ms peak flux. $`A`$ is a constant for converting photon counts to energy (assuming a diagonal detector response matrix as a first-order approximation).
The peak flux used in this calculation must be measured on the shortest available timescale (64 ms) in order to avoid arbitrarily smoothing out the maximum value of the peak flux. A peak flux underestimate produces a corresponding duty cycle overestimate.
GRBs with T<sub>90</sub> durations less than 64 ms have had their T<sub>90</sub> values set to 64 ms, so that their durations are not given a different temporal resolution than that of their peak flux measure.
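In code, the measure reduces to a one-line ratio plus the duration clamp; the function below is a sketch (the argument names are illustrative, and the default conversion constant anticipates the value derived in the next section):

```python
def duty_cycle(S23, P64, T90, A=2.24e-7):
    """GRB duty cycle from BATSE attributes, following Eq. (1).

    S23 : channel 2+3 fluence, 50-300 keV (ergs cm^-2)
    P64 : 64 ms peak flux (photons cm^-2 s^-1)
    T90 : duration (s), clamped at 64 ms to match the peak-flux resolution
    A   : photon-count-to-energy conversion (ergs photon^-1)
    """
    return S23 / (A * P64 * max(T90, 0.064))
```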
## Analysis of GRB Classes
Our database consists of non-overwriting GRBs in the BATSE 4Br Catalog paciesas99 triggering on 1024 ms peak flux in the 50 to 300 keV range with the trigger threshold set $`5.5\sigma `$ above background. These criteria prevent trigger biases meegan99 from influencing our conclusions. We have also removed GRBs with large relative measurement errors in each of the four-channel fluences, T<sub>90</sub>, and P<sub>64</sub> so that measurement error does not bias our conclusions.
The GRBs have been assigned to a class using the supervised decision tree classifier C4.5 quinlan86 . The technique is described in more detail elsewhere hakkila99 .
We obtain an average value of $`A\sim 2.24\times 10^{-7}`$ ergs photon<sup>-1</sup> by integrating a typical bright GRB spectrum (soft power-law index $`\alpha \sim -1`$) over the 50 to 300 keV trigger energy range.
Many Class 2 (Short) GRBs have duty cycles $`DC>1.0`$, as their harder spectra lead to an underestimate of A and to a corresponding overestimate of DC. We obtain a separate value of $`A\sim 2.80\times 10^{-7}`$ ergs photon<sup>-1</sup> for these GRBs (using $`\alpha \sim 0`$), and recalculate their duty cycles. No attempt is made to account for Class 3 (Intermediate) spectra, which have similar spectral components to those of faint Class 1 (Long) bursts hakkila99 .
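Both values of A can be checked numerically as the mean photon energy of a power-law photon spectrum N(E) ∝ E<sup>α</sup> over the 50-300 keV band (a sketch assuming this simple spectral form, with the diagonal detector response implicit, as in the text):

```python
from scipy.integrate import quad

KEV_TO_ERG = 1.602e-9          # 1 keV in ergs
E1, E2 = 50.0, 300.0           # BATSE trigger band, keV

def A_const(alpha):
    """Mean photon energy (ergs) for a photon spectrum N(E) ~ E**alpha."""
    energy = quad(lambda E: E * E ** alpha, E1, E2)[0]
    photons = quad(lambda E: E ** alpha, E1, E2)[0]
    return KEV_TO_ERG * energy / photons

print(f"alpha = -1: A = {A_const(-1.0):.2e} ergs/photon")   # ~2.24e-07
print(f"alpha =  0: A = {A_const(0.0):.2e} ergs/photon")    # ~2.80e-07
```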
Figure 1 is a plot of $`DC`$ vs. hardness ratio $`HR321`$ (100 to 300 keV fluence divided by 25 to 100 keV fluence). This hardness ratio represents the third delineating attribute of the three GRB classes.
We note the following characteristics of GRBs, and of the three GRB classes (based upon Figure 1), as they pertain to the duty cycle:
* There are no efficient, soft GRBs.
* There are no inefficient, hard GRBs.
* Class 2 (Short) GRBs are generally efficient, with duty cycles of $`DC\gtrsim 0.1`$.
* Class 1 (Long) GRBs are rarely efficient, with duty cycles of $`DC\lesssim 0.7`$.
* Class 3 (Intermediate) GRBs blend into the Class 1 (Long) GRBs in this plot.
## Conclusions
The duty cycle measure as defined here is fairly effective and easy to calculate. The simplifying assumption of a diagonalized detector response matrix is not completely valid. A small correction factor is needed to account for excess high-energy photons from hard GRBs (primarily those belonging to Class 2). Nonetheless, this approach allows preprocessed BATSE attributes to be incorporated directly into the duty cycle calculation, without requiring the use of data in a less processed form.
Despite the aforementioned problem, Class 2 (Short) is well delineated from Class 1 (Long) in a plot of duty cycle vs. HR321. Class 2 GRBs are harder, more efficient emitters than Class 1 GRBs.
Class 3 (Intermediate) does not appear to be distinct from Class 1 (Long) on the basis of the duty cycle attribute. This result is in agreement with the findings of our artificial intelligence study hakkila99 . |
# NEARBY OPTICAL GALAXIES: SELECTION OF THE SAMPLE AND IDENTIFICATION OF GROUPS
## 1 INTRODUCTION
With the advent of large surveys of galaxy redshifts coupled to well-selected galaxy catalogs, it has become possible to delineate the three-dimensional (3D) distribution of galaxies and to attempt a 3D-definition of the galaxy density.
This paper, which is the third in a series of papers (Marinoni et al. 1998, Paper I; Marinoni et al. 1999, Paper II) in which we investigate the properties of the large-scale galaxy distribution, presents the all-sky sample of optical galaxies used in our study and the identification of galaxy groups in this sample.
The first 3D galaxy catalogue which covered both Galactic hemispheres with good completeness in redshift was the magnitude-limited ($`B\le 12`$ mag) "Revised Shapley-Ames Catalogue of Bright Galaxies" (RSA, Sandage & Tammann 1981). It was used by Yahil, Sandage & Tammann (1980) to calculate the galaxy density field in the Local Supercluster (LS). The structures of the LS region were well delineated by Tully & Fisher (1987) on the basis of the Nearby Galaxies Catalog (NBG, Tully 1988), which is a combination of the RSA catalog and a diameter-limited sample of late-type and fainter galaxies found in an all-sky HI survey. This catalog, which is limited to a depth of 3000 km/s and is complete down to $`B\sim 12`$ mag (although it extends to fainter magnitudes), was also used to determine local galaxy density parameters, which were exploited in statistical analyses of environmental effects on some properties of the LS galaxies (Giuricin et al. 1993, 1994, 1995; Monaco et al. 1994).
In an effort to go beyond the LS, Hudson (1993a, 1993b, 1994a, 1994b) constructed a wide galaxy sample by merging the diameter-limited northern UGC catalog (Nilson 1973) and the diameter-limited southern ESO catalog (Lauberts 1982; Lauberts & Valentijn 1989). He applied statistical corrections for the fairly large incompleteness in redshift of his sample as a function of angular diameter and position on the sky, and reconstructed the density field of optical galaxies to a depth of $`cz=8000`$ km/s.
The "Optical Redshift Survey" (ORS, Santiago et al. 1995), which provided $`\sim 1300`$ new redshifts for bright and nearby galaxies, marks a considerable advance towards the construction of an all-sky sample of nearby optical galaxies with good completeness in redshift. The ORS contains $`\sim 8300`$ galaxies with known redshift and consists of two overlapping optically-selected samples (limited in apparent magnitude and diameter, respectively) which cover almost all the sky with $`|b|>20^{\circ }`$. Each sample is the concatenation of three subsamples drawn from the UGC catalog in the north, the ESO catalog in the south (for $`\delta <-17.5^{\circ }`$), and the Extension to the Southern Observatory Catalogue (ESGC, Corwin & Skiff 1999) in the strip between the UGC and ESO regions ($`-17.5^{\circ }\le \delta \le -2.5^{\circ }`$). The authors selected their own galaxy sample according to the raw (observed) magnitudes and diameters and then attempted to quantify the effects of Galactic extinction on the galaxy density field, as well as the effects of random errors and systematic trends in the magnitude and diameter scales internal to different catalogs. They calculated the galaxy density field out to $`cz=8000`$ km s$`^{-1}`$ in redshift space on the basis of the UGC and ESO magnitude-limited samples and of the ESGC diameter-limited sample (for a total of $`\sim 6400`$ galaxies), after having collapsed the galaxy members of six rich nearby clusters to a single redshift (Santiago et al. 1996). Baker et al. (1998) calculated the peculiar velocity field resulting from the ORS sample (defined as above), adding the IRAS 1.2 Jy galaxy sample (Fisher et al. 1995) in the unsurveyed ZOA ($`|b|<20^{\circ }`$) and at large distances ($`cz>8000`$ km/s).
In this paper, we follow a different approach to the construction of an all-sky sample of optical galaxies with good completeness properties, by attempting the use of a uniform selection criterion (based on homogenized blue magnitudes corrected for Galactic extinction, internal extinction and K-dimming) over the sky. The sample we select (hereafter the Nearby Optical Galaxy (NOG) sample) is a complete, magnitude-limited and distance-limited, all-sky sample of $`\sim 7000`$ nearby and bright optical galaxies, which we extract from the Lyon-Meudon Extragalactic Database (LEDA) (e.g., Paturel et al. 1997).
This sample constitutes an extension in distance and in the number of redshifts (with a consequent increase in redshift completeness) of the all-sky sample of $`\sim 6400`$ bright and nearby galaxies ($`\sim 5400`$ galaxies above $`|b|=20^{\circ }`$) recently used in the calculation of different sets of galaxy distances corrected for non-cosmological motions by means of peculiar velocity field models (Paper I) and in the rediscussion of the local galaxy luminosity function (Paper II).
As previously emphasized (e.g., Hudson 1993a, Santiago et al. 1995), outside the zone of avoidance (ZOA), optical galaxy samples are more suitable for mapping the galaxy density field on small scales than IRAS-selected galaxy samples, which have been frequently used as tracers of the galaxy density field on large scales (e.g., Strauss et al. 1992, based on the IRAS 1.9 Jy sample; Fisher et al. 1995 and Webster, Lahav & Fisher 1997, both based on the IRAS 1.2 Jy sample; Branchini et al. 1999 and Schmoldt et al. 1999, both based on the PSCz sample by Saunders et al. 1999a, b), because IRAS samples do not include the early-type galaxies (which have little dust content and star formation), are relatively sparse nearby, and are based on far-infrared fluxes, which are much less linked with galaxy mass than optical and near-infrared fluxes. The latter are believed to be the best tracers of galaxy mass, and this motivates ongoing plans for constructing wide magnitude-limited samples of near-infrared selected galaxies, such as the 2MASS (e.g., Huchra et al. 1999) and DENIS (e.g., Epchtein et al. 1999) projects.
Moreover, as discussed by Santiago et al. (1996), standard extinction corrections on diameters are thought to be less reliable than extinction corrections on magnitudes. This makes it preferable to use magnitude-limited optical samples rather than diameter-limited ones for the reconstruction of the galaxy density field.
Since we plan to use the NOG sample to trace the galaxy density field also on small scales, in this paper we provide group assignments for the galaxies of the NOG sample by means of both the hierarchical (H) (e.g., Tully 1987) and the percolation (P) friends-of-friends methods (e.g., Huchra & Geller 1982) of group identification. The identification of groups, which allows us to study the continuity of the properties of galaxy systems over a large range of scales (e.g., Girardi & Giuricin 1999), is also useful for improving the determination of the 3D structure (e.g., the groups identified by Wegner, Haynes & Giovanelli 1993 in the Perseus-Pisces region). Furthermore, galaxy systems are favored targets for determining the peculiar velocity field with reduced uncertainties (e.g., Giovanelli et al. 1997). In a forthcoming paper we shall use the locations of individual galaxies and groups to reconstruct the galaxy density field (see Marinoni et al. 1999b for preliminary results).
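For reference, the logic of the percolation (friends-of-friends) method can be sketched as follows. This is a deliberately minimal version with fixed linking lengths and illustrative thresholds; the actual implementation, summarized in §4, scales the linking parameters with distance to compensate for the magnitude limit.

```python
import numpy as np

def fof_groups(ra, dec, cz, d_link=0.5, v_link=350.0, H0=75.0):
    """Minimal friends-of-friends grouping in redshift space.

    ra, dec in radians; cz in km/s; d_link is the projected-separation
    threshold (Mpc) and v_link the line-of-sight velocity threshold (km/s).
    Returns an integer group label for each galaxy.
    """
    x = np.cos(dec) * np.cos(ra)          # unit vectors on the sky
    y = np.cos(dec) * np.sin(ra)
    z = np.sin(dec)
    n = len(cz)
    labels = np.arange(n)                 # each galaxy starts as its own group
    for i in range(n):
        for j in range(i + 1, n):
            cos_t = np.clip(x[i] * x[j] + y[i] * y[j] + z[i] * z[j], -1.0, 1.0)
            d_mean = (cz[i] + cz[j]) / (2.0 * H0)      # mean distance, Mpc
            d_perp = d_mean * np.arccos(cos_t)         # projected separation
            if d_perp <= d_link and abs(cz[i] - cz[j]) <= v_link:
                labels[labels == labels[j]] = labels[i]  # merge the two groups
    return labels
```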
The outline of our paper is as follows. In §2 we present the selection of the NOG sample. In §3 we illustrate the distribution of NOG galaxies on the sky. In §4 we summarize the two identification procedures for groups, i.e. the H and P algorithms. In §5 we present the resulting catalogs of loose groups. Conclusions are drawn in §6.
Throughout, the Hubble constant is 75 km s<sup>-1</sup> Mpc<sup>-1</sup>.
## 2 The selection of the sample
Being aware that a sample must have a well-defined selection function in order to be useful for any sort of quantitative work (e.g. the review by Strauss 1999), we select a galaxy sample according to well-defined selection criteria. Relying, in general, on data (positions, redshifts, total blue magnitudes) tabulated in LEDA, we select a sample of 7076 galaxies which satisfy the following selection criteria:
* Galactic latitudes $`|b|>20^{\circ }`$;
* recession velocities (evaluated in the Local Group rest frame) $`cz\le 6000`$ km/s;
* corrected total blue magnitudes $`B\le 14`$ mag.
We transform tabulated heliocentric redshifts into the LG rest frame according to Yahil, Tammann & Sandage (1977). In the following we always refer to redshifts evaluated in the LG frame.
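For reference, this heliocentric-to-LG transformation is a standard solar-motion correction; the sketch below uses the commonly quoted Yahil, Tammann & Sandage coefficients (treat the exact values as an assumption to be checked against the original paper):

```python
import numpy as np

def cz_LG(cz_helio, l_deg, b_deg):
    """Heliocentric velocity -> Local Group frame (km/s).

    (l_deg, b_deg) are the Galactic coordinates of the galaxy; the
    coefficients encode a ~308 km/s solar motion toward l ~ 105, b ~ -7.
    """
    l, b = np.radians(l_deg), np.radians(b_deg)
    return (cz_helio
            - 79.0 * np.cos(l) * np.cos(b)
            + 296.0 * np.sin(l) * np.cos(b)
            - 36.0 * np.sin(b))
```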
Limiting the sample to a given depth ($`cz\le 6000`$ km/s in our case) has the main advantage of reducing the incompleteness in redshift for a given limiting magnitude, because a fraction of the galaxies with unknown redshift is presumably located beyond the limiting depth. With this choice our sample is also less affected by shot noise, which increases with increasing distance. Moreover, the choice of limiting the volume of the sample minimizes distance effects in the identification of galaxy groups. Lastly, the knowledge of the peculiar velocity field, which will be used to place the NOG objects into real-distance space, becomes very poor beyond this depth.
In the LEDA compilation, which collects and homogenizes several data for all the galaxies of the main optical catalogues, such as the UGC, ESO, ESGC, CGCG (Zwicky et al. 1961-1968) and MCG (Vorontsov-Velyaminov, Archipova & Krasnogorskaja 1962-1974) catalogs, the original raw data (blue apparent magnitudes and angular sizes) have been transformed to the standard systems of the RC3 catalog (de Vaucouleurs et al. 1991) and have been corrected for Galactic extinction, internal extinction, and K-dimming, as described in Paturel et al. (1997). Corrections for internal extinction, which are conspicuous in very inclined spiral galaxies, are in general neglected in magnitude-limited optical galaxy samples used in studies of the spatial galaxy distribution. The adopted corrections for internal extinction do not take into account a possible dependence on the galaxy luminosity (e.g., Giovanelli et al. 1995).
The adopted limits for the unsampled ZOA ($`|b|<20^{\circ }`$) are imposed by the requirement of intrinsic completeness of the sample. An additional problem which affects the construction of a well-controlled optical galaxy sample in the ZOA is the presumably low quality of available Galactic reddening maps in this region. As a matter of fact, precisely in the ZOA there are pronounced differences between the classical maps of Burstein & Heiles (1978, 1982) (substantially adopted in LEDA), which are largely HI maps with the zero-point adjusted and with smooth variations in dust-to-gas ratio estimated from galaxy counts, and the new maps derived by Schlegel, Finkbeiner & Davis (1998) from the COBE/DIRBE and IRAS/ISSA observations, which give a direct measure of the column density of the Galactic dust. Tests of the accuracy of reddening maps emphasize their unreliability in regions characterized by strong and very patchy Galactic extinction (e.g. Arce & Goodman 1999), such as the low-$`|b|`$ regions, and reveal large-scale errors across the sky in the ZOA, specifically an appreciable overestimate of Galactic extinction in the Vela region ($`230^{\circ }<l<310^{\circ }`$, $`|b|<20^{\circ }`$) (Burstein et al. 1987; Hudson 1999).
In the LEDA there are 6880 galaxies which satisfy the adopted selection criteria ($`B\le 14`$ mag, $`cz\le 6000`$ km/s, $`|b|>20^{\circ }`$). We add to this initial sample 196 galaxies (with $`B\le 14`$ and $`|b|>20^{\circ }`$) which have new redshift measures that we find by matching the LEDA with the NASA Extragalactic Database (NED), the Updated Zwicky Catalog (UZC) (Falco et al. 1999), the ORS (Santiago et al. 1995) and the PSCz (kindly provided to us by B. Santiago and W. Saunders, respectively).
Relying on information given in LEDA and NED for the binary and multiple systems of galaxies, we include in our sample only the individual components in these systems which satisfy our selection criteria.
The final distance-limited ($`cz\le 6000`$ km/s) and magnitude-limited ($`B\le 14`$ mag) NOG sample comprises 7076 galaxies (with $`|b|>20^{\circ }`$).
The logarithmic integral counts of all LEDA galaxies versus their blue total magnitude show a linear relation down to $`\sim 15.5`$ mag (Paturel et al. 1997), whilst the logarithmic differential counts of all LEDA galaxies with $`|b|>20^{\circ }`$ reveal that a linear relation is satisfied only down to magnitudes somewhat fainter than B=14 mag, which can be regarded as the limit of intrinsic completeness of the data base.
Thus, although the different galaxy catalogues, from which data are collected and homogenized in the LEDA, have different limits of completeness in apparent magnitude or angular diameter, the NOG sample turns out to be nearly intrinsically complete down to its limiting magnitude $`B=14`$ mag.
The redshift completeness of all-sky samples of bright optical galaxies is not yet extremely high and decreases with fainter limiting magnitudes (e.g. Giudice 1999). For the sample limited to $`|b|>20^{\circ }`$ and $`B\le 14`$ mag there are 550 objects without redshift measures. Some of these objects are galaxies with bright stars superposed, for which it is difficult to obtain a spectrum. Most of these objects are galaxies with faint (uncorrected) apparent magnitudes, and most are located in the southern sky (precisely at $`\delta <-10^{\circ }`$).
Thus, the degree of redshift completeness of this sample, with no limits in redshift, is 92%. This is indeed a lower limit to the redshift completeness of the NOG, since the NOG is limited to 6000 km/s.
We have estimated the NOG redshift completeness C by dividing the number $`N_z`$ of galaxies with known redshift ($`N_z`$=7076) by the total number $`N_T=N_z+N_p`$ of galaxies which are presumed to have $`cz\le 6000`$ km/s. We have calculated the number $`N_p`$ of objects with unknown redshifts which are predicted to have $`cz\le 6000`$ km/s as $`N_p=\sum _{i=1}^{n}P_i(B)`$, where $`P_i(B)`$ is the probability for a galaxy with magnitude B and unknown redshift to have $`cz\le 6000`$ km/s. We have estimated this probability under the assumption of a homogeneous universe, for the Schechter-like galaxy luminosity function which fits the differential galaxy counts. In this way we obtain a redshift completeness of 98%, which is a fixed average percentage over the sampled volume. Details on these calculations and on the selection function of the NOG sample will be presented in a subsequent paper (see Marinoni et al. 1999b for preliminary results).
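In practice, $`P_i(B)`$ can be evaluated by weighting the homogeneous radial distribution of galaxies of magnitude B, $`dN/dr\propto r^2\mathrm{\Phi }(M(B,r))`$, inside and outside the sample depth. The following Python sketch (not the authors' code) illustrates the estimate; the Schechter parameters are those quoted later in §4.1, and the outer integration cutoff is our own choice (the integral converges well before it).

```python
import numpy as np
from scipy.integrate import quad

def schechter(M, Mstar=-20.68, alpha=-1.19):
    """Shape of the Schechter luminosity function per unit magnitude
    (parameter values quoted in Sect. 4.1)."""
    x = 10.0 ** (-0.4 * (M - Mstar))
    return 0.4 * np.log(10.0) * x ** (alpha + 1.0) * np.exp(-x)

def p_inside(B, cz_lim=6000.0, H0=75.0, r_max=2000.0):
    """P_i(B): probability that a galaxy of apparent magnitude B (and unknown
    redshift) lies within cz_lim, assuming a homogeneous universe."""
    dNdr = lambda r: r * r * schechter(B - 5.0 * np.log10(r) - 25.0)
    inner = quad(dNdr, 0.1, cz_lim / H0)[0]
    return inner / (inner + quad(dNdr, cz_lim / H0, r_max)[0])

# N_p = sum(p_inside(B_i) for each object without redshift); the completeness
# is then C = N_z / (N_z + N_p), which yields the 98% quoted in the text.
```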
Adopting a sample selection based on corrected and homogenized magnitudes, we attempt to minimize systematic selection effects as a function of direction in the sky, which may arise from inconsistencies among the different magnitude systems used in the original catalogs, and we take into account the variable amounts of Galactic extinction across the sky and of internal extinction in galaxies of different morphological types and inclination angles. Clearly, systematic errors (though not zero-point errors in Galactic and internal extinctions) across the sky would affect the uniformity of galaxy sampling.
Notwithstanding the different selection criteria adopted, the NOG sample has many galaxies in common with the ORS sample, which comprises $`\sim `$6280 galaxies having $`cz\le 6000`$ km/s (and $`|b|>20^{\circ }`$), of which $`\sim `$4360 and $`\sim `$4280 objects belong to the magnitude-limited and diameter-limited ORS subsamples, respectively. A large fraction of these galaxies, 87% (95% and 86% of those belonging to the magnitude-limited and diameter-limited ORS subsamples restricted to $`cz\le 6000`$ km/s), are common to the NOG. There are 78% of NOG galaxies common to the ORS; to be more precise, 59% and 52% of NOG galaxies are common to the magnitude-limited and diameter-limited ORS subsamples, respectively.
## 3 The distribution of galaxies on the sky
Fig. 1 shows the distribution of NOG galaxies on the celestial sphere using equal-area Aitoff projections in equatorial, Galactic, and supergalactic coordinates. The region devoid of galaxies corresponds to the unsampled ZOA ($`|b|<20^{\circ }`$).
Although Galactic extinction is greater than the norm in the center ($`l\sim 0^{\circ }`$) and anticenter ($`l\sim 180^{\circ }`$) regions, there may be a real deficiency of galaxies in these regions at low $`|b|`$-values. In particular, this is suggested by redshift surveys which select galaxy candidates from the IRAS Point Source Catalog (1988), whose completeness is, however, quite questionable in these two regions. Specifically, a concatenation of large voids stretching from the Local Group all the way to the NOG distance limit and beyond (see, e.g., Lu & Freudling 1995) is thought to be responsible for the deficiency of galaxies in the Orion–Taurus anticenter region ($`l=150^{\circ }-190^{\circ }`$, $`b\sim -30^{\circ }`$). As regards the center region, redshift surveys have pointed out the presence of a nearby void, around $`l=0^{\circ }`$ and $`b=10^{\circ }`$, the Ophiuchus void (Wakamatsu et al. 1994, Nakanishi et al. 1997). This void appears to be contained in the large Local Void of Tully & Fisher (1987), which covers a large part of the sky between $`l\sim 0^{\circ }`$ and $`l\sim 80^{\circ }`$. The Local Void, which is centered at $`cz\sim 2500`$ km/s and has a diameter of $`\sim `$2500 km/s (Nakanishi et al. 1997), is probably interconnected with the more distant, large Microscopium void (centered at $`b\sim 0^{\circ }`$, $`l\sim 10^{\circ }`$, $`cz\sim 4500`$ km/s).
In order to distinguish structures more clearly, in Fig. 2 we show the Aitoff projections of the NOG galaxies on the celestial sphere in Galactic coordinates, for three redshift slices.
Prominent structures stand out in these plots. Many galaxies tend to be concentrated in the supergalactic plane, which stretches in the plots from $`l\sim 135^{\circ }`$ to $`l\sim 315^{\circ }`$. The densest part of the Local Supercluster is the overdensity at $`l=300^{\circ }-315^{\circ }`$, $`b=30^{\circ }-70^{\circ }`$ (Virgo Southern Extension) with the Virgo cluster at its northern tip ($`l=284^{\circ }`$, $`b=75^{\circ }`$).
In the low-redshift slice ($`cz<`$2000 km/s, 2012 galaxies) we further note some nearby clusters, such as Ursa Major ($`l=145^{\circ }`$, $`b=66^{\circ }`$), Fornax ($`l=237^{\circ }`$, $`b=-54^{\circ }`$), and the cluster surrounding NGC 1395 in the Eridanus cloud ($`l=214^{\circ }`$, $`b=-52^{\circ }`$). The last two clusters are the dominant overdensities of the Dorado–Fornax–Eridanus complex, also named Fornax wall (the southern supercluster of Mitra 1989), which ranges from $`l=190^{\circ }`$, $`b=-60^{\circ }`$ to $`l=270^{\circ }`$, $`b=-40^{\circ }`$. The Local Void is apparent as the paucity of galaxies between $`l\sim 0^{\circ }`$ and $`l\sim 80^{\circ }`$. Other voids are discernible, e.g. the Gemini void around $`l=190^{\circ }`$, $`b=20^{\circ }`$. The latter void is a part of a very large nearby void (named V$`\alpha `$ by Webster et al. 1997) which stretches below the Galactic plane down to the above-mentioned Orion–Taurus void ($`l=150^{\circ }-190^{\circ }`$, $`b\sim -30^{\circ }`$).
The intermediate-redshift slice ($`2000\le cz<4000`$ km/s, 2377 galaxies) intersects the Great Attractor region, which includes the Hydra–Centaurus complex, which stands out around $`b=20^{\circ }`$, $`l=260^{\circ }`$ (Hydra) and $`l=310^{\circ }`$ (Centaurus), together with the contiguous Telescopium–Pavo–Indus (T–P–I) supercluster (also named Centaurus Wall), whose foreground part is apparent from $`b=-20^{\circ }`$, $`l=330^{\circ }`$ to $`b=-60^{\circ }`$, $`l=30^{\circ }`$, and the Hydra Wall, which starts from the Hydra cluster and stretches in the southern Galactic hemisphere from $`b=-20^{\circ }`$, $`l=230^{\circ }`$ to $`b=-30^{\circ }`$, $`l=190^{\circ }`$. Noticeable clumps in the northern hemisphere are the Canes Venatici–Camelopardalis clouds at $`l=95^{\circ }`$, $`50^{\circ }<b<70^{\circ }`$ and the Ursa Major cloud at $`l=130^{\circ }`$, $`30^{\circ }<b<60^{\circ }`$. There is a prominent void, the Leo void, at $`l\sim 200^{\circ }`$, $`b\sim 60^{\circ }`$. The large Eridanus void around $`l=270^{\circ }`$, $`b=-60^{\circ }`$, which roughly corresponds to the void named V1 by da Costa et al. (1988) and V$`\beta `$ by Webster et al. (1997), stretches considerably towards the Galactic plane.
In the next redshift slice ($`cz\ge 4000`$ km s$`^{-1}`$, 2653 galaxies) the dominant overdensities are the Perseus–Pisces supercluster ($`l=110^{\circ }-150^{\circ }`$, $`-35^{\circ }<b<-20^{\circ }`$) and the main part of the Telescopium–Pavo–Indus supercluster in the southern Galactic hemisphere. The Cetus Wall runs southwards from Perseus–Pisces along $`b\sim -60^{\circ }`$. The galaxy concentration around $`l=190^{\circ }`$, $`b=-25^{\circ }`$ is the NGC 1600 region (Saunders et al. 1991). The galaxy overdensity around $`l=120^{\circ }`$, $`b=-70^{\circ }`$, which does not correspond to a specific galaxy cluster, was named C$`\gamma `$ by Webster et al. (1997). The void at $`l=300^{\circ }`$, $`b=-45^{\circ }`$ was named V3 by da Costa et al. (1988). In the northern sky we recognize the high-redshift component of the Hydra–Centaurus complex with the surrounding Hydra void (at $`l\sim 290^{\circ }`$, $`b\sim 30^{\circ }`$), the Cancer cluster ($`l=195^{\circ }`$, $`b=25^{\circ }`$), the Gemini filament (at $`180^{\circ }<l<210^{\circ }`$, $`15^{\circ }<b<30^{\circ }`$; see Focardi, Marano & Vettolani 1986), the Cygnus–Lyra filament (see Takata, Yamada & Saitō 1996), which crosses the Galactic plane from $`l\sim 90^{\circ }`$, $`b\sim 15^{\circ }`$ to $`l\sim 50^{\circ }`$, $`b\sim -10^{\circ }`$, and the Camelopardalis supercluster ($`l=135^{\circ }`$, $`b=25^{\circ }`$), which, according to Webster et al. (1997), is probably connected with the Perseus–Pisces supercluster.
The large void which covers most of the northern sky between $`l=145^{\circ }`$ and $`l=195^{\circ }`$ lies between the Virgo cluster and the "Great Wall" and was noted in the CfA1 redshift survey of Davis et al. (1982).
In Figs. 3 and 4 we show the distribution of NOG galaxies on the celestial sphere using equal-area polar hemispheric projections in equatorial coordinates, for different redshift slices. These plots better illustrate many other minor structures and voids in the galaxy distribution. The structures illustrated in our plots are qualitatively similar to those described in the analogous plots presented in Fairall's (1998) book for a generic (statistically uncontrolled) wider sample of galaxies with known redshift and no limit in magnitude (or diameter). This book gives a comprehensive description of the cosmography of the nearby universe (see also Tully & Fisher 1987 and Pellegrini et al. 1990 for previous detailed descriptions of the structures of the Local Supercluster and southern hemisphere, respectively).
The distribution of NOG galaxies appears qualitatively similar to that of the ORS galaxies (cf. the analogous Aitoff projections presented by Santiago et al. 1995 and Baker et al. 1998). Both optical galaxy samples trace essentially the same structures, with NOG providing a somewhat denser sampling (11% more galaxies) of the galaxy density field in the nearby universe (within 6000 km s$`^{-1}`$). Moreover, comprising 3204 galaxies with $`cz\le 3000`$ km/s, the NOG gives a much denser sampling of the LS region than the NBG sample.
A comparison with the distribution of the IRAS 1.2-Jy galaxies (cf. the plots given by Fisher et al. 1995 and Baker et al. 1998) shows that NOG samples the galaxy density field much better than the IRAS samples and delineates similar major overdensity regions but with a greater density contrast. This is related to the known fact that IRAS surveys under-count the dust-free early-type galaxies which congregate in high-density regions and give a galaxy density field characterized by a bias smaller by a factor of $`\sim 1.5`$ than that of the optical galaxy density field (e.g., Strauss et al. 1992; Hudson 1993; Hermit et al. 1996). The newly completed PSCz survey (Saunders et al. 1999a, b), which includes IRAS galaxies to a flux limit of 0.6 Jy, leads to a density field which compares fairly well with that derived from the IRAS 1.2 Jy sample (e.g., Branchini et al. 1999; Schmoldt et al. 1999). Although the NOG covers 79% of the solid angle covered by the PSCz, our sample contains 35% more galaxies.
## 4 The identification of galaxy groups
We identify galaxy groups by means of the most widely used objective group-finding algorithms, the hierarchical and the percolation friends of friends algorithms, which allow a comparison with wide group catalogs published in the literature, although other objective techniques of clustering analysis are available (e.g., Pisani 1996, Escalera & MacGillivray 1995).
### 4.1 The hierarchical algorithm
In the hierarchical (H) clustering method, first introduced by Materne (1978), one defines an affinity parameter between the galaxies (e.g. their separations) which controls the grouping operation. Then one starts with all galaxies of the sample as separate units and links the galaxies successively in order of affinity until there is only one unit that encompasses the ensemble. The result of this method is a hierarchical sequence of units organized by decreasing affinity. The merging of a galaxy into a given unit involves the consideration of the whole unit and not only of the last object merged into the unit. Another merit of this method is the easy visualization of the whole merging procedure in the form of a hierarchical arborescence, the dendrogram.
Customarily, it is believed that the H method has the practical drawback of requiring very long calculation times (e.g., in comparison with the percolation method). Paying attention to this problem, we have managed to speed up the hierarchical code considerably by using numerical tricks. In this way, we have made this code nearly as fast as the percolation algorithm. The code is written in the C programming language, which allows us to use sparse-matrix techniques (i.e. matrices with most elements equal to zero) in a natural way, through a data structure based on pointers. Specifically, for each pair of NOG galaxies, the affinity parameter, which is taken to be the galaxy luminosity density as explained below, is not stored in memory and is not exactly calculated, but replaced with zero, if its value is smaller than a preselected limit. The maximum value of this parameter is searched only among the few pairs for which the parameter values are greater than this limit. Then the limit is gradually lowered in the following steps until the dendrogram is completed.
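For illustration, the following Python sketch implements the basic H loop in its plain O($`n^3`$) form, without the sparse-matrix and threshold tricks just described; merging units at their unweighted mean position and evaluating the affinity from the total luminosities of the two units are our own simplifications of the full treatment.

```python
import numpy as np

def affinity(ua, ub):
    """Luminosity-density affinity between two units (cf. Sect. 4.1):
    3 (L_a + L_b) / (4 pi r^3), with r the separation of the unit centres."""
    r = np.linalg.norm(ua["pos"] - ub["pos"])
    return 3.0 * (ua["L"] + ub["L"]) / (4.0 * np.pi * r ** 3)

def hierarchical_groups(positions, luminosities, rho_lim=8e9):
    """Merge units in order of decreasing affinity; cut the hierarchy at the
    limiting luminosity density rho_lim (in L_sun Mpc^-3, the value adopted below)."""
    units = [{"members": [i], "L": L, "pos": np.asarray(p, float)}
             for i, (p, L) in enumerate(zip(positions, luminosities))]
    while len(units) > 1:
        rho, a, b = max((affinity(units[a], units[b]), a, b)
                        for a in range(len(units))
                        for b in range(a + 1, len(units)))
        if rho < rho_lim:                  # no pair denser than the cut remains
            break
        merged = {"members": units[a]["members"] + units[b]["members"],
                  "L": units[a]["L"] + units[b]["L"],
                  "pos": 0.5 * (units[a]["pos"] + units[b]["pos"])}
        units = [u for k, u in enumerate(units) if k not in (a, b)] + [merged]
    return [u["members"] for u in units]
```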
There are several possible choices for the grouping parameter. For instance Tully (1980, 1987) and Vennik (1984) employed a grouping parameter (galaxy luminosity divided by separation squared) which measures the gravitational force between galaxies $`i`$ and $`j`$, but cut the hierarchy according to the luminosity density and number density of the entity, respectively.
Following basically the procedure adopted by Gourgoulhon, Chamaraux & Fouqué (1992), we use the same parameter for the two operations, namely the luminosity density $`3(L_i+L_j)/(4\pi r_{ij}^3)`$, where $`L_i`$ and $`L_j`$ are the corrected luminosities (as defined below) of the galaxies $`i`$ and $`j`$, and $`r_{ij}`$ their mutual separation. We take into account the loss of faint galaxies with increasing distances within our magnitude-limited galaxy sample by multiplying the luminosity of each galaxy located at a distance $`r`$ by the factor
$$\beta (r)=\frac{\int _0^{\infty }L\mathrm{\Phi }(L)\,dL}{\int _{L_{min}(r)}^{\infty }L\mathrm{\Phi }(L)\,dL}$$
(1)
where $`\mathrm{\Phi }(L)`$ is the galaxy luminosity function of our sample, $`L_{min}`$ is the minimum luminosity necessary for a galaxy at a distance $`r`$ (in Mpc) to make it into the sample; $`L_{min}`$ corresponds to the absolute magnitude $`M_B=B_{lim}-5\mathrm{log}r-25`$, where $`B_{lim}`$=14 mag is the limiting apparent magnitude of our sample.
We use the Schechter (1976) form of the luminosity function with $`M^{\ast }=-20.68`$, $`\alpha =-1.19`$, $`\mathrm{\Phi }^{\ast }=0.0052\,Mpc^{-3}`$. This is the luminosity function, unconvolved with the magnitude error distribution (i.e., not Malmquist-corrected, according to the precepts of Ramella, Pisani & Geller 1997), that we derive by means of Turner's (1979) method (see also de Lapparent, Geller & Huchra 1989 and Paper II). For this calculation, using redshifts as distance indicators, we take the NOG galaxies having $`cz>500`$ km/s and $`M_B`$-values in the range $`-22.5\le M_B\le M_l`$, where $`M_l=-15.12`$ is the faintest absolute magnitude at which galaxies with magnitude limit $`B_{lim}`$=14 mag are visible at the fiducial distance $`r=500/(75h_{75})\simeq 6.7h_{75}^{-1}`$ Mpc. Convolving the Schechter form of the luminosity function with a Gaussian magnitude error distribution having zero mean and dispersion of 0.2 mag, we obtain the Malmquist-corrected luminosity function characterized by $`M^{\ast }=-20.59\pm 0.07`$, $`\alpha =-1.16\pm 0.05`$, $`\mathrm{\Phi }^{\ast }=0.0065\pm 0.0009\,Mpc^{-3}`$. The luminosity function is similar to that derived in Paper II from a similar, albeit smaller and less complete in redshift, sample of nearby and bright optical galaxies (see Paper II for a detailed discussion and a comparison with the galaxy luminosity functions given in the literature).
For $`B_{lim}`$=14 mag, $`\beta `$ is 1.19, 1.74, 3.07 at 2000, 4000, 6000 km/s respectively.
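As a consistency check, eq. (1) is straightforward to evaluate for the Schechter form; the following sketch (with distances taken as $`r=cz/H_0`$, $`H_0=75`$ km/s/Mpc, an assumption of ours) approximately reproduces the values of $`\beta `$ just quoted, with small residual differences presumably due to rounding and to the exact luminosity function variant adopted.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def beta_weight(cz, B_lim=14.0, Mstar=-20.68, alpha=-1.19, H0=75.0):
    """beta(r) of eq. (1) for a Schechter LF, with r = cz/H0 in Mpc."""
    M_min = B_lim - 5.0 * np.log10(cz / H0) - 25.0   # faintest M sampled at r
    x_min = 10.0 ** (-0.4 * (M_min - Mstar))         # L_min / L*
    total = gamma(alpha + 2.0)                       # int_0^inf x^(alpha+1) e^-x dx
    sampled = quad(lambda x: x ** (alpha + 1.0) * np.exp(-x), x_min, np.inf)[0]
    return total / sampled

for cz in (2000.0, 4000.0, 6000.0):
    print(cz, round(beta_weight(cz), 2))   # ~1.2, ~1.7, ~3.0 (cf. 1.19, 1.74, 3.07)
```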
We adopt $`8\times 10^9L_{\odot }\,Mpc^{-3}`$ (corresponding to a luminosity density contrast of 45) as the limiting luminosity density parameter used to cut the hierarchy and define groups. The same value was adopted by Gourgoulhon et al. (1992). Tully (1987), using only the luminosity of the brighter component in the evaluation of the entity density, chose the slightly smaller value of $`2.5\times 10^9L_{\odot }\,Mpc^{-3}`$. We have checked that the value adopted by us distinguishes some known nearby structures, such as the substructures identified in the Virgo cluster region by specific surveys (see the end of this subsection), better than Tully's (1987) value does.
Following Tully (1987) and Gourgoulhon et al. (1992), we distinguish two cases in the derivation of the separation $`r_{ij}`$ between galaxies $`i`$ and $`j`$ from their angular distance. In the case of small differences in the velocities, we assume that the differential velocities carry no information about the line-of-sight separations and take separations from plane-of-sky information, with the average projection factor $`4/\pi `$ applied to correct statistically for depth in the third dimension (see eq. 4 in Gourgoulhon et al. 1992).
In the case of large differences in the velocities, we assume that differential velocities are simply related to the expansion of the universe and directly infer a line-of-sight separation (see eq. 7 in Gourgoulhon et al. 1992).
For intermediate cases, we use the transition formula proposed by Gourgoulhon et al. (1992) (see their eqs. 5 and 6), which interpolates between the two above-mentioned limiting cases in a smooth way.
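The two limiting regimes can be sketched as follows; the smooth transition of Gourgoulhon et al. (1992, their eqs. 5 and 6) is not reproduced here, and the hard switch at the transition velocity $`V_l`$ (introduced just below) is our own simplification.

```python
import numpy as np

def separation(theta, cz_i, cz_j, V_l=170.0, H0=75.0):
    """Separation (Mpc) of two galaxies with angular distance theta (radians)
    and velocities cz_i, cz_j (km/s), in the two limiting regimes of the text."""
    if abs(cz_i - cz_j) < V_l:
        # small velocity difference: plane-of-sky separation, with the 4/pi
        # statistical correction for depth along the line of sight
        return (4.0 / np.pi) * (0.5 * (cz_i + cz_j) / H0) * theta
    # large velocity difference: attribute it entirely to the Hubble expansion
    return abs(cz_i - cz_j) / H0
```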
The procedure is regulated by the choice of a free parameter, the transition velocity $`V_l`$. The choice of $`V_l`$ is a compromise between too low values, which would lead to the rejection of group members with large peculiar velocities (with a consequent underestimate of the group velocity dispersion), and too high values, which would allow the inclusion into groups of galaxies which are accidental superpositions along the line of sight (with a consequent overestimate of the group velocity dispersion). Following Gourgoulhon et al. (1992), we adopt the fairly low value of $`V_l`$=170 km/s, which reliably identifies groups of low velocity dispersion. For his less deep sample, Tully's (1987) choice, $`V_l`$=300 km/s, was greater than our value; however, his value is roughly equivalent (in terms of corresponding galaxy separations) to the value we adopt, in view of the different transition formula employed by this author.
With low values of $`V_l`$ the clusters of galaxies are split into various subunits because of their large velocity dispersion. These subunits are located at about the same positions, but have different average velocities. This inconvenience of the method is related to the use of a universal $`V_l`$-value for the whole sample.
As done by Gourgoulhon et al. (1992), after running the algorithm we identify by hand 17 high-velocity, relatively rich systems, by collecting the various subunits into one aggregate (for a total of 440 galaxies), with the aid of the results obtained with the P algorithm (in the variant P1) discussed in §4.2. Tully (1987) removed the high-velocity systems before running the algorithm, which implies that system members are to be chosen a priori, whilst Garcia (1993) neglected this problem in many cases.
There are two regions of the sky where the initial results obtained from running the H algorithm were unsatisfactory, i.e. the region comprising the nearest systems to the Local Group and the complex region of the Virgo cluster. In the former case the algorithm groups together many nearby galaxies, because the redshift is no longer a reasonable indicator of distance; in this case, reliable results could be obtained from the algorithm by replacing the redshifts with redshift-independent distances. Therefore, to identify very nearby systems, we have first selected the members of four well-known nearby groups directly on the basis of the specific studies by van Driel et al. (1998) for the M81 group and by Côté et al. (1997) for the Sculptor and Centaurus A groups, and on the review by Mateo (1998) for the Local Group. Then, after having excluded the members of these groups, we have rerun the algorithm for the other galaxies.
Specific surveys of the Virgo region have long identified substructures in the Virgo cluster, first by means of an inspection of the morphological classification, brightness, and redshift of galaxies (e.g., Binggeli, Sandage & Tammann 1985) and then through accurate distance indicators (mainly the Tully–Fisher relation for spirals). The current knowledge of the main clumps of the Virgo cluster, which appears to be a structure considerably elongated along the line of sight, can be summarized as follows (see, e.g., the recent studies by Yasuda, Fukugita & Okamura 1997, Federspiel, Tammann & Sandage 1998, Gavazzi et al. 1999): the subcluster A centered on the galaxy M87 is the dominant substructure (at a velocity $`cz\sim 1350`$ km/s and at a distance of $`\sim `$14–18 Mpc); the clump B, offset to the south around M49, lying at similar $`cz`$ but at a larger distance ($`\sim `$20–24 Mpc), is thought to be falling onto Virgo A; the clouds M and W (both at $`cz\sim 2500`$ km/s) are background structures at twice the distance of Virgo A and may also be falling onto Virgo A; the cloud W′ is located at $`cz\sim 1500`$ km/s and $`\sim `$25 Mpc; the northern part of the Virgo Southern Extension (SE) lies at a redshift and distance similar to those of the main body. In this paper we have made membership assignments adopting borderlines between the different substructures in accordance with Binggeli, Tammann & Sandage (1987) and Binggeli, Popescu & Tammann (1993).
### 4.2 The friends of friends algorithm
We identify groups in redshift-space with the percolation (P) friends of friends algorithm (Huchra & Geller 1982). So far, this algorithm, being easier to implement than the H algorithm, has been the most widely used method of group identification in the literature. Unlike the H algorithm, this algorithm does not rely on any a priori assumption about the geometrical shape of groups, although it may suffer from some drawbacks which are mentioned at the end of §4.2.
For each galaxy in the NOG sample, this algorithm identifies all other galaxies with a projected separation $`D_{12}\le D_L(cz_1,cz_2)`$ and a line-of-sight velocity difference $`cz_{12}\le cz_L(cz_1,cz_2)`$, where $`cz_1`$, $`cz_2`$ are the velocities of the two galaxies in the pair. All pairs linked by a common galaxy form a group. We estimate the limiting number density contrast as
$$\frac{\delta \rho }{\rho }=\frac{3}{4\pi D_0^3}\left[\int _{-\infty }^{M_l}\mathrm{\Phi }(M)\,dM\right]^{-1}-1$$
(2)
where $`\mathrm{\Phi }(M)`$ is the luminosity function of the sample (see §4.1) and $`M_l=-15.12`$ mag is the faintest absolute magnitude at which galaxies with magnitude limit $`B`$=14 mag are visible at the fiducial distance $`r=500/(75h_{75})\simeq 6.7h_{75}^{-1}`$ Mpc. The estimate assumes that the galaxy separation along the line of sight is comparable with $`D_L`$ (e.g., spherical symmetry).
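Inverting eq. (2) gives the linking length $`D_0`$ for a chosen density contrast; a minimal sketch, assuming the §4.1 luminosity function:

```python
import numpy as np
from scipy.integrate import quad

def schechter(M, Mstar=-20.68, alpha=-1.19):
    x = 10.0 ** (-0.4 * (M - Mstar))
    return 0.4 * np.log(10.0) * x ** (alpha + 1.0) * np.exp(-x)

phistar = 0.0052                  # Phi* of the Sect. 4.1 luminosity function
n_gal = phistar * quad(schechter, -30.0, -15.12)[0]   # density brighter than M_l

drho = 80.0                       # density contrast adopted below (Sect. 4.2)
D0 = (3.0 / (4.0 * np.pi * (drho + 1.0) * n_gal)) ** (1.0 / 3.0)
print(D0)                         # ~0.41-0.42 Mpc, cf. the adopted D_0 = 0.41 Mpc
```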
In order to take into account the decrease of the magnitude range of the luminosity function sampled at increasing distance, the distance link parameter $`D_L`$ and the velocity link parameter $`cz_L`$ are in general suitably increased with increasing distance. Huchra & Geller (1982) initially and later other authors (e.g., Geller & Huchra 1983; Maia, da Costa & Latham 1989; Ramella, Geller & Huchra 1989; Ramella, Pisani & Geller 1997) scaled the distance and velocity link parameters in the same way, as $`D_L=D_0R`$ and $`cz_L=cz_0R`$, where
$$R=\left[\int _{-\infty }^{M_l}\mathrm{\Phi }(M)\,dM/\int _{-\infty }^{M_{12}}\mathrm{\Phi }(M)\,dM\right]^{1/3}$$
(3)
and $`M_{12}`$ is the faintest absolute magnitude at which a galaxy with apparent magnitude equal to the magnitude limit ($`B=14`$ mag in our case) is visible at the mean distance of the pair. Scaling both $`D_L`$ and $`cz_L`$ with distance, one keeps the number density enhancement, $`\delta \rho /\rho `$, constant.
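A compact redshift-space friends-of-friends sketch with this scaling (a plain O($`n^2`$) union-find pass; the small-angle projected separation and the numerical integration limits are our own simplifications):

```python
import numpy as np
from scipy.integrate import quad

def schechter(M, Mstar=-20.68, alpha=-1.19):
    x = 10.0 ** (-0.4 * (M - Mstar))
    return 0.4 * np.log(10.0) * x ** (alpha + 1.0) * np.exp(-x)

N_ML = quad(schechter, -30.0, -15.12)[0]        # LF integral down to M_l

def R(cz_mean, B_lim=14.0, H0=75.0):
    """Scaling factor of eq. (3), evaluated at the mean velocity of a pair."""
    M12 = B_lim - 5.0 * np.log10(cz_mean / H0) - 25.0
    return (N_ML / quad(schechter, -30.0, M12)[0]) ** (1.0 / 3.0)

def fof(u, cz, D0=0.41, cz0=200.0, H0=75.0):
    """Friends-of-friends with D_L = D0*R and cz_L = cz0*R.
    u: (n, 3) array of unit vectors on the sky; cz: velocities in km/s."""
    n, parent = len(cz), list(range(len(cz)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            czm = 0.5 * (cz[i] + cz[j])
            theta = np.arccos(np.clip(np.dot(u[i], u[j]), -1.0, 1.0))
            r = R(czm)
            if (czm / H0) * theta <= D0 * r and abs(cz[i] - cz[j]) <= cz0 * r:
                parent[find(i)] = find(j)       # link the two friends
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= 2]   # pairs and groups
```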
The properties of selected groups are known to be sensitive to the adopted distance and velocity links. As a matter of fact, the typical size of a group is mostly linearly related to the adopted value of $`D_0`$, whereas the typical velocity dispersion of a group mostly depends on the adopted value of $`cz_0`$ (e.g., Trasarti-Battistoni 1998). The adopted value of $`cz_L`$ must be small enough to avoid the inclusion of too many interlopers in groups, without biasing the velocity dispersion of groups towards too low values. The chosen value of $`\delta \rho /\rho `$ must be large enough to prevent unbound fluctuations in the distribution of galaxies within large-scale structures from being mistaken for real systems, without splitting rich systems into many multiple systems.
Geometrical Monte-Carlo simulations (Ramella et al. 1989, 1997) and especially cosmological N-body simulations which have used full 3D information (e.g., Nolthenius & White 1987; Moore, Frenk & White 1993; Nolthenius, Klypin & Primack 1994; Frederic 1995 a, b; Nolthenius, Klypin & Primack 1997; Diaferio et al. 1999) can help us in searching for the optimal sets of linking parameters and scaling relations with distance which maximize the efficiency of the P algorithm in picking up "real" groups. As a matter of fact, almost all relevant simulations were designed to describe the properties of redshift surveys whose magnitude limits are comparable to that of NOG (e.g., CfA1) or moderately fainter than that of NOG (e.g., CfA2), which, however, is limited to a smaller distance. Moreover, moderate differences in the luminosity functions and magnitude limits of galaxy samples (e.g. CfA1 versus CfA2) lead to minor differences (on the order of 10-15%) in the optimal choices of percolation linking parameters (as discussed by Trasarti-Battistoni 1998).
Investigations of the variation of the properties of groups (identified in several redshift surveys) with $`cz_0`$ and $`D_0`$ (or $`\delta \rho /\rho `$) showed that there is a range of values of the two parameters where the median properties of the groups are fairly stable (i.e., $`\delta \rho /\rho =`$60–160, $`cz_L=`$200–600 km/s at the velocity of 1000 km/s), with an "optimal choice" believed to be centered around $`\delta \rho /\rho `$=80 and $`cz_L`$=350 km/s (at the velocity of 1000 km/s) (e.g. Ramella et al. 1989, 1997; Frederic 1995 a, b). These simulations also show that an appreciable fraction of the poorer groups, those with $`n<5`$ members, is false (i.e. unbound density fluctuations), whereas the richer groups almost always correspond to real systems.
More specifically, testing the accuracy of group-finding algorithms through N-body cosmological structure simulations, Frederic (1995 a, b) pointed out that the optimal parameters which maximize the accuracy of group identification are indeed dependent on the purposes for which groups are being selected. With the above-mentioned scaling of the linking parameters, restrictive velocity linking lengths (i.e., $`cz_L\lesssim 200`$ km/s at 1000 km/s) tend to cause members of the few high velocity dispersion systems to be missed (biasing low their velocity dispersion and mass), but result in far fewer interlopers. Therefore generous velocity links (i.e., $`cz_L\gtrsim 500`$ km/s at 1000 km/s) may be preferred in studies which aim to identify well the high-velocity dispersion systems. On the other hand, restrictive velocity links, which is what we choose in this paper, are to be preferred in our case, because the NOG is limited to a relatively small depth and (unlike the CfA1 and CfA2 samples) it does not contain very rich (e.g. Coma-like) galaxy clusters, and especially because we shall use the NOG groups mainly to collapse their members to a single redshift, removing peculiar motion effects on group scales. Consistently with these considerations, Nolthenius (1993), who revised the identification of CfA1 groups with the introduction of galaxy distances calculated from a Virgo-Great Attractor flow field model, reduced significantly the interloper contamination by choosing a restrictive velocity link ($`cz_L`$=350 km/s at 5000 km/s, i.e. a value of $`cz_L`$ only $`\sim `$1/4 as large as that chosen in the original catalog of CfA1 groups by Geller & Huchra 1983).
We have run the P algorithm (with the above-mentioned scaling of $`D_L`$ and $`cz_L`$) for some pairs of values of the two linking parameters in the above-mentioned ranges and choose the values of $`\delta \rho /\rho `$=80 ($`D_0`$=0.41 Mpc) and $`cz_0`$=200 km/s (corresponding to 234 km/s at the velocity of 1000 km/s) for our final percolation catalog with customary scalings of the two search parameters. According to eq. (3), $`D_L`$ is 0.48, 0.61, 0.89, 1.05, and 1.27 Mpc at 1000, 2000, 4000, 5000, and 6000 km/s, respectively, whereas $`cz_L`$ is 234, 298, 434, 519, and 620 km/s at the respective distances. The resulting catalog turns out to be in good agreement with that obtained with the H algorithm (see §5).
The choice of a less restrictive velocity link parameter would lead to group catalogs more dissimilar to that of hierarchical groups, i.e. with an even smaller fraction of ungrouped galaxies and binary pairs and an even larger number of groups. For instance, choosing $`cz_0=`$300 km/s and the same value of $`D_0`$, we obtain a 7% smaller number of ungrouped galaxies, a 4% smaller number of binary pairs, and a 3% greater number of systems with at least three members. On the other hand, choosing $`cz_0=`$100 km/s and the same value of $`D_0`$, we obtain twice the number of ungrouped galaxies, together with only about 1/6 of the groups with at least three members. If we let $`\delta \rho /\rho `$ decrease to 60 (increase to 100), with $`cz_0`$=200 km/s, we obtain 8% less (6% more) ungrouped galaxies; the numbers of galaxy pairs and systems with at least three members vary by a smaller percentage in the same and opposite sense, respectively.
Several simulations (Nolthenius & White 1987, Moore, Frenk & White 1993, Nolthenius, Klypin & Primack 1994, 1997) suggest that the above-mentioned scaling of the velocity link parameter $`cz_L`$ increases too rapidly at large redshifts (see also Nolthenius 1993) and favour a mild increase of $`cz_L`$ with $`z`$ (together with a similar scaling of $`D_L`$) from about 200–400 km/s at 500 km/s to about 400–700 km/s at 6000 km/s, with details (especially the zero-point of the scaling relation) depending on the adopted cosmological model. A mild scaling of $`cz_L`$ with $`z`$ has the advantage of minimizing the number of interlopers at the price of failing to pick up all members of clusters characterized by high velocity dispersion (see, e.g., Nolthenius 1993; Frederic 1995 a, b).
In the absence of compelling reasons for making a precise choice of the detailed scaling of $`cz_L`$, we have also run the P algorithm keeping $`cz_L`$ constant with $`z`$, i.e. $`cz_L=cz_0`$ (and $`D_L`$ scaled as above). This is an extreme choice which, though conceptually very questionable, is used here in practice as an approximation to a slow variation of $`cz_L`$ with $`z`$, given the limited range of $`z`$ encompassed by NOG. Garcia (1993) also used the same approximation (i.e. $`cz_L`$ constant) in her application of the P algorithm to a sample of nearby galaxies limited to the depth of 5500 km/s.
We have run the P algorithm for some pairs of values of the two linking parameters lying in the above-mentioned ranges and we choose the values of $`\delta \rho /\rho `$=80 ($`D_0`$=0.41 Mpc) and $`cz_L`$=350 km/s for our final P group catalog with $`cz_L`$ kept constant.
If we let $`\delta \rho /\rho `$ decrease to 60 (increase to 100), with $`cz_L`$=350 km/s, the fraction of ungrouped galaxies decreases by 8% (increases by 6%) and the number of galaxy pairs accordingly varies by a smaller percentage. On the other hand, if we let $`cz_L`$ vary from $`cz_L`$=250 km/s to 600 km/s, with $`\delta \rho /\rho `$=80, the number of ungrouped galaxies decreases from values 10% greater to values 10% smaller than that relative to $`cz_L`$=350 km/s; the number of pairs accordingly varies by a smaller percentage. The number of groups with at least three members does not change appreciably in all these cases.
The two variants of the P algorithm (with $`cz_L`$ kept constant and with $`cz_L`$ scaled with $`z`$) considered in this paper are meant to represent two extreme cases for the scaling behaviour of $`cz_L`$. As discussed in §5, it is encouraging that the two respective catalogs of groups, hereafter denoted as P1 and P2 respectively, appear to be in very close agreement with each other; they turn out to be also in good agreement with the catalog of H groups, with P1 in slightly better agreement than P2. Clearly, for our sample, which covers a limited range of distances, differences in the adopted scaling of the velocity link parameter of the P algorithm are unimportant.
In each of its variants, the P algorithm groups together many nearby galaxies (among them many members of the Virgo and Ursa Major clusters and of well-known very nearby groups) into a very large unrealistic system, even if we let the values of the parameters $`cz_0`$ and $`\delta \rho /\rho `$ vary within reasonable intervals. Garcia (1993) encountered a similar problem in running the P algorithm for her sample of comparatively nearby galaxies. This problem stands out when the algorithm is applied to a dense sample of nearby galaxies. The problem is mainly related to the fact that the galaxies which at a given step are merged into a group are picked up only in reference to their closest neighbour in the group and not to the whole set of galaxy members gathered at the previous steps (as is done in the case of the H algorithm). This can lead to the selection of non-physical systems, like a long filament of galaxies with a small separation between physically unrelated neighbouring objects.
We have solved this problem by taking directly a few very nearby groups and the systems of the Virgo region as given in the literature (as explained at the end of §4.1) and by adopting the same results obtained with the H method in the nearby region ($`cz<500`$ km/s). Therefore, by definition the catalogs of groups selected with the P method are equal to the catalog of H groups in the Virgo region and nearby region ($`cz<`$500 km/s).
## 5 The catalogs of groups
Although we have identified groups in redshift space, we expect the group selection to be hardly affected by peculiar motions, since all galaxies located in a small volume tend to move together in redshift space.
Our final catalog of H groups comprises 1062 systems, i.e. 587 binaries and 475 groups with at least three members. These groups contain 3119 galaxies. Of these groups, 413 comprise $`n<10`$ members for a total of 1723 galaxies, 39 groups comprise $`10\le n<20`$ members for a total of 494 galaxies, and 23 groups (among which the major Virgo substructures and the well-known clusters Ursa Major, Fornax, Eridanus, Centaurus, Hydra) have $`n\ge 20`$ members for a total of 902 galaxies. The remaining 2783 galaxies are left ungrouped (field galaxies).
Our final catalog of P1 (P2) groups comprises 1079 (1093) systems, i.e. 572 (581) binaries and 507 (512) groups with at least three members. These groups contain 3239 (3295) galaxies; of them 444 (448) groups comprise less than 10 members for a total of 1842 (1889) galaxies, 44 (45) groups comprise $`10\le n<20`$ members for a total of 580 (587) galaxies, and 19 (20) groups have at least 20 members for a total of 817 (819) galaxies. There are 2693 (2619) galaxies which are left ungrouped (field galaxies).
Table 1 shows the numbers of H, P1, P2 groups for different group richness (number $`n`$ of galaxy members). By applying the nonparametric Kolmogorov-Smirnov and sign statistical tests (e.g., Hoel 1971), we find no significant differences between the distributions. Thus, the three catalogs of groups are, on average, similar as far as the distribution of galaxy members in groups is concerned.
Furthermore, we quantify the similarity between the catalogs of groups by counting the number of members of a H group which belong to a common P1 group. We first determine which members of each P1 group belong to the same H group. We calculate a largest group fraction (LGF) for each P1 group by dividing the number of members in the largest such subgroup by the total number of members in the P1 group (see Frederic 1995a for a similar definition of LGF). Fig. 5 shows, as a function of group richness (number of members), the fraction of P1 groups of a given richness with LGFs of unity and in each quartile below. For example, there are 22 P1 groups with seven members. Of these, 48% have LGF of 100%, 57% have LGF of at least 75%, 91% have LGF of at least 50%, and all of the n=7 groups have LGFs greater than 25%. The H groups give a similar histogram, with somewhat greater values along the ordinate axis (see Fig. 6). The large fractions of groups having high LGF-values confirm the similarity between the two catalogs of groups. If we repeat these calculations replacing P1 groups with P2 groups, we find slightly lower values along the ordinate axis in the plot corresponding to Fig. 5 and an almost equal histogram in the plot corresponding to Fig. 6. Thus, P1 groups are in slightly better agreement with H groups than P2 groups. If we compare P1 and P2 groups in the same way, we find a very good agreement, as expected (the values of LGF are almost always greater than 80% and are frequently greater than 90%).
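The LGF statistic itself is a one-liner; a small sketch (the group names and example split below are hypothetical):

```python
from collections import Counter

def lgf(members, label_other):
    """Largest group fraction: the fraction of a group's members belonging to
    the most populated counterpart group in the other catalog (ungrouped
    galaxies should carry unique labels)."""
    counts = Counter(label_other[i] for i in members)
    return max(counts.values()) / len(members)

# a 7-member P1 group whose members split 5 + 2 between two H groups
labels_H = {0: "H3", 1: "H3", 2: "H3", 3: "H3", 4: "H3", 5: "H9", 6: "H9"}
print(lgf(range(7), labels_H))   # 0.714..., i.e. LGF ~ 71%
```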
Furthermore, we have calculated the LGF-values separately for the nearby and distant NOG galaxies dividing the sample at 3500 km/s. In this way, we have verified that the agreement between the P1 and P2 groups gets slightly worse as we go to larger distances, as expected. On the other hand, there is no appreciable effect of this kind in the comparison between H and P1 (or P2) groups.
The ratio of the number of groups with at least three members to the number of non-member (binary and field) galaxies is 0.12, 0.13, 0.14 for the H, P1, P2 groups, respectively. These values lie in the range of published values coming from other group catalogs, e.g., 0.09 for the SSRS1 groups (Maia et al. 1989), 0.10 for the LCRS groups (Tucker et al. 1997) and the PPS groups (Trasarti-Battistoni 1998), 0.11 for the PGC groups (Gourgoulhon et al. 1992, Fouqué et al. 1992), 0.12 for the SSRS2 groups (Ramella et al. 1999b), 0.13 for the CfA2 north (Ramella et al. 1997) and ESP groups (Ramella et al. 1999a), 0.14 for the revised CfA1 groups (Nolthenius 1993), 0.17 for the NBG groups (Tully 1987), and 0.15 and 0.19 for the LEDA groups derived by Garcia (1993) using the P and H methods, respectively.
The ratio of members of groups with at least three members to the total number of galaxies is 0.44, 0.46, 0.47 for the H, P1, P2 groups, respectively, whereas published values are 0.35 for the SSRS1 (Maia et al. 1989), LCRS (Tucker et al. 1997) and PPS groups (Trasarti-Battistoni 1998), 0.40 for the SSRS2 groups (Ramella et al. 1999b), 0.41 for the ESP groups (Ramella et al. 1999a), 0.42 for the PGC groups (Gourgoulhon et al. 1992; Fouqué et al. 1992), 0.45 for the CfA2 north groups (Ramella et al. 1997), 0.48 for the revised CfA1 groups (Nolthenius 1993), 0.51 for the NBG groups (Tully 1987), and 0.63 and 0.47 for the LEDA groups (Garcia 1993), respectively derived by means of the P and H methods.
In general, our catalogs of groups are broadly consistent with the previous catalogs of groups selected in the same regions and our values for the two above-mentioned ratios appear to be consistent with typical values reported in the literature.
As regards the H groups, our values are close to those of the PGC groups and are a little lower than those of the NBG groups (because we adopt a greater limiting luminosity density parameter to cut the hierarchy; see §4.1). Compared to the LEDA groups identified with the H method, we find fewer groups, which is partially due to the fact that in many cases Garcia (1993) neglected the reconstruction of high-velocity systems, which the algorithm tends to break into several systems with different average velocities (see §4.1). Furthermore, compared to the LEDA groups identified with the P method, we basically find smaller groups with fewer members, because, on average, we adopt lower values of $`cz_L`$ (see §4.2). In general, there is much less similarity between the two catalogs of LEDA groups than between our two catalogs.
A comparison of the distribution of the centers of the two samples of groups with that of galaxies shows qualitatively that groups trace the large-scale structure of the nearby universe.
The final catalogs of the members of H, P1, and P2 groups are presented in Tables 2, 3, and 4, respectively. In these Tables we give the group number, the PGC and alternative names of the member galaxy, the 1950 right ascension and declination (in hours, minutes, seconds and in degrees, arcmin, arcsec, respectively), the velocity $`cz`$ (in the Local Group frame), and the corrected total blue magnitude.
The final catalogs of H, P1, and P2 groups (along with some group properties) are presented in Tables 5, 6, and 7, respectively. These tables give the NOG group number, the name of the brightest galaxy of the group, the number of galaxy members, the median values of the 1950 right ascension and declination of the group members, the median value of the recession velocity $`cz`$ (in the Local Group frame), the common name of the system (when available), the cross-identifications between NOG groups, and the cross-identifications between NOG groups and previous catalogs of groups. Of these we choose the all-sky catalogs of nearby groups published by Tully (1987) and by Garcia (1993) for a detailed comparison. Specifically, we consider Garcia's (1993) final catalog of groups, defined by her as the one that includes only systems common to the two original catalogs that she constructed by means of the H and P methods. Cross-identifications are tabulated only when there are at least three galaxies in common between our groups (with at least three members) and groups of previous catalogs, and two galaxies in common between pairs.
In Table 5 we denote by an asterisk the 17 systems which are split by the H algorithm along the line of sight and then are reconstructed by us with the aid of the results of the P1 method. Moreover, in Table 5 we denote by a flag $`+`$ the 11 systems which are constructed with the aid of membership assignments provided directly in the literature for the Virgo region (seven systems and 311 galaxies) and for four very nearby groups (comprising 55 galaxies) (see §4.1). As explained at the end of §4.2, the P1 and P2 systems are by definition taken to be equal to those identified with the H method in the Virgo region and in the very nearby region ($`cz<`$500 km/s). The latter region involves 13 systems (of which 3 are pairs) and 161 (118 grouped and 43 ungrouped) galaxies. These systems are denoted by a flag $`+`$ in Tables 6 and 7.
Tables 2, 3, 4, 5, 6, and 7 are available in electronic form only.
## 6 Conclusions
In this paper we describe the NOG sample, a distance-limited ($`cz<`$6000 km/s) and magnitude-limited (B$``$14 mag) sample of 7076 optically-selected galaxies which covers 2/3 of the sky ($`|b|>20^{}`$) and has a good completeness in redshift (98%).
We select the NOG on the basis of homogenized corrected blue magnitudes in order to minimize systematic effects in galaxy sampling, due to the use of different magnitude systems in different areas of the sky and to Galactic and internal extinction. In this sense the NOG, which is meant to be the first step towards the construction of a statistically well-controlled optical galaxy sample with homogenized photometric data covering most of the celestial sphere, is in principle designed to offer a largely unbiased view of the galaxy distribution.
We identify galaxy systems in the NOG by means of both the hierarchical and the percolation friends of friends methods. After an extensive search in the space of relevant parameters, with the guide of available numerical simulations, we choose optimal sets of parameters which allow us to obtain reliable and homogeneous catalogs of loose groups. Remarkably, these catalogs turn out to be substantially consistent as far as the distribution of members in groups is concerned. Containing about 500 systems (with at least three members), they are among the largest catalogs of groups presently available. Although they are drawn from a galaxy sample limited to bright magnitudes, they are useful for studies of the statistical properties of loose groups, since their physical properties were found to be stable, on average, against the inclusion of fainter galaxy members (Ramella et al. 1995a,b; Ramella, Focardi & Geller 1996). In particular, being extracted from the same galaxy sample, the catalogs allow one to investigate variations in group properties (e.g., velocity dispersion, virial mass and radius) strictly related to differences in the algorithm adopted. These differences indicate to what extent our knowledge of the location and properties of groups in the nearby universe is inaccurate. Previous comparisons between catalogs of groups identified with the H and P algorithms (Pisani et al. 1992) were based on catalogs extracted from different galaxy samples.
Most of the NOG galaxies ($`\sim `$60%) are found to be members of galaxy pairs ($`\sim `$580 pairs comprising $`\sim `$15% of the galaxies) or groups with at least three members ($`\sim `$500 groups comprising $`\sim `$45% of the galaxies). About 40% of the galaxies are left ungrouped (field galaxies).
Though limited to a depth of 6000 km s$`^{-1}`$, the NOG covers interesting regions of prominent overdensities (in mass and galaxies) of the nearby universe, such as the "Great Attractor" region and the Perseus–Pisces supercluster. Compared to previous all-sky optical and IRAS galaxy samples, the NOG provides a denser sampling of the galaxy density field in the nearby universe. Besides, as expected, the NOG delineates overdensity regions with a greater density contrast than IRAS galaxy samples do.
Given its high-density sampling and large sky coverage, the NOG sample is well suited for mapping the cosmography of the nearby universe beyond the Local Supercluster and for allowing a comparison of the density field as traced by optical galaxies with that described by IRAS galaxies (addressing questions concerning the amount of relative biasing in the galaxy distribution and its possible dependence on scale).
By virtue of the identification of NOG groups, the NOG is also well suited for deriving galaxy density parameters on small scales to be used in observational investigations of environmental effects on galaxy properties. Environmental studies in which the local galaxy density is decoupled from membership in galaxy systems go beyond the conventional comparison between the properties of cluster and field galaxies and thus can better constrain the physical processes responsible for the formation and evolution of galaxies. Much of the observed evolution of the properties and populations of galaxies (e.g., Ellis 1997) which has occurred during recent epochs ($`z<1`$) can be ascribed to the interaction of galaxies with their local environment.
In a subsequent paper (see Marinoni et al. 1999b for preliminary results) the NOG groups will be used to remove non-linearities in the peculiar velocity field (e.g., the velocity dispersion of group members) on small scales. To correct the redshift-distances of field galaxies and groups on large scales, we shall apply models of the peculiar velocity field, following the approach described in Paper I. We shall use the locations of individual galaxies and groups calculated in real-distance space (i.e. for distances predicted by different velocity field models) to calculate the selection function of the NOG sample (see Paper II) and to reconstruct the galaxy density field. Local galaxy density parameters to be used in studies of environmental effects on nearby galaxies will be provided.
We are indebted to B. Santiago (together with the ORS team) and to W. Saunders (together with the PSCz team) who provided us with data in advance of publication. We wish to thank S. Borgani, D. Fadda, R. Giovanelli, M. Girardi, M. Hudson, F. Mardirossian, M. Mezzetti, P. Monaco, and M. Ramella for interesting conversations. C. M. and L. C. are grateful to SISSA for its kind hospitality. This research has made use of the Lyon-Meudon Extragalactic Database (LEDA) supplied by the LEDA team at the CRAL-Observatoire de Lyon (France) and of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This work has been partially supported by the Italian Ministry of University, Scientific and Technological Research (MURST) and by the Italian Space Agency (ASI).
Algebra of differential forms with exterior differential $`d^3=0`$ in dimension one
## 1 Introduction
It is well known that one of the possible ways to generalize a classical Grassmann algebra is to increase the power of nilpotency of its generators. This means that a possible generalization can be defined as an associative unital algebra generated by $`\theta _1,\theta _2,\dots ,\theta _n`$ satisfying $`\theta _i^N=0,N>2,i=1,2,\dots ,n`$. In order to obtain an analogue of the classical Grassmann algebra one should add commutation relations between generators to the above-mentioned definition, and this leads to different generalizations of the classical Grassmann algebra in the case $`n>1`$. If one imposes the commutation relations $`\theta _i\theta _j=q\theta _j\theta _i(i<j)`$, where $`q`$ is an $`N`$-th primitive root of unity, then the corresponding structure is called a generalized Grassmann algebra ,. Another approach, based on ternary commutation relations and the representation of the group of cyclic permutations $`Z_3`$ by cubic roots of unity, was developed in , and the corresponding structure is called a ternary Grassmann algebra. It should be mentioned that both generalizations of the classical Grassmann algebra mentioned above coincide in the trivial case of one generator $`\theta `$ satisfying $`\theta ^N=0`$, and this algebra, known as the anyonic line, is closely related to fractional supersymmetry .
A classical Grassmann algebra underlies an exterior calculus on a smooth manifold with exterior differential $`d`$ satisfying $`d^2=0`$. Therefore the above mentioned generalizations of Grassmann algebra raise a natural question of possible generalizations of classical exterior calculus to one with exterior differential satisfying $`d^N=0,N>2`$. From an algebraic point of view an adequate algebraic structure underlying an exterior calculus is the notion of graded differential algebra. Hence one can generalize a classical exterior calculus with the help of an appropriate generalization of graded differential algebra. This generalization called graded $`q`$-differential algebra was proposed and studied by M. Dubois-Violette in the series of papers , where the author constructed several realizations of graded $`q`$-differential algebra.
According to the definition given by M. Dubois-Violette, a graded $`q`$-differential algebra is an associative unital $`\mathbb{N}`$-graded algebra endowed with a linear endomorphism $`d`$ of degree 1 satisfying $`d^N=0`$ and the graded $`q`$-Leibniz rule
$$d(\alpha \beta )=d(\alpha )\beta +q^a\alpha d(\beta ),$$
(1)
where $`a`$ is the grading of an element $`\alpha `$, $`q`$ is a primitive $`N`$-th root of unity.
From a point of view of differential geometry the above definition can be used to generalize the de Rham complex on a finite-dimensional smooth manifold. This question was studied in the series of papers , , where the authors used the notion of ternary Grassmann algebra (considering the first non-trivial generalization of classical exterior calculus corresponding to $`N=3`$) to construct the algebra of differential forms. If one assumes $`d^2\ne 0`$, replacing it by $`d^3=0`$, then in order to construct a self-consistent theory of differential forms it is necessary to add to the first order differentials of local coordinates $`dx^1,dx^2,\dots ,dx^m`$ a set of second order differentials $`d^2x^1,d^2x^2,\dots ,d^2x^m`$ (and higher order differentials in the case of $`N>3`$). The appearance of higher order differentials, which are missing in classical exterior calculus, is a peculiar property of the proposed generalization of differential forms. This has as a consequence certain problems. It is well known that in classical exterior calculus functions commute with differentials, i.e.
$$fdx^i=dx^if,\qquad i=1,2,\dots ,m,$$
(2)
where $`f`$ is a smooth function on a manifold, or, from an algebraic point of view, the space of 1-forms is a free finite bimodule over the algebra of smooth functions generated by the first order differentials, and (2) shows how its left and right structures are related to each other. Now, assuming that $`d`$ is no longer the classical exterior differential, i.e. $`d^2\ne 0`$, and differentiating (2) with regard to the graded $`q`$-Leibniz rule (1), one immediately obtains
$$d^2x^if=fd^2x^i+[df,dx^i]_q,$$
(3)
where $`[df,dx^i]_q=dfdx^i-qdx^idf`$ and we assign grade 1 to the first order differentials $`dx^i`$. The above relations (3) are not homogeneous in the sense that the commutation relations between functions and second order differentials include first order differentials as well.
In this paper, we show that the commutation relations between functions and second order differentials can be made homogeneous, i.e. they will not include first order differentials, if we take a non-commutative geometry point of view. In order to make our construction more transparent we begin with the simplest case of one-dimensional space. We construct the differential forms on this space with exterior differential satisfying $`d^3=0`$ and show that they form a graded $`q`$-differential algebra. In our construction we use the coordinate calculi developed in . Then we study the commutation relations between functions and second order differentials and show that the requirement of homogeneity (i.e. vanishing of the second term on the right-hand side of (3)) implies that the coordinate calculus we are considering is the differential calculus on the anyonic line, which means that the commutation relations between functions and second order differentials are homogeneous, i.e. they take on the form $`d^2xf=fd^2x`$, only in the case of the anyonic line (in dimension one).
## 2 Graded $`q`$-differential algebra on one-dimensional space
In this section, we construct a graded $`q`$-differential algebra of differential forms with exterior differential $`d`$ satisfying $`d^3=0`$ in dimension one. We study the structure of a bimodule of second order differentials and show that it is homogeneous in the case of the anyonic line. In this section, $`q`$ is a primitive cube root of unity, i.e. $`q^3=1`$.
Let $`\mathcal{A}`$ be a free unital associative $`\mathbb{C}`$-algebra generated by a variable $`x`$. If $`\xi :\mathcal{A}\to \mathcal{A}`$ is a homomorphism of this algebra and $`\partial :\mathcal{A}\to \mathcal{A}`$ is a linear map such that
$$\partial (x)=1,\qquad \partial (fg)=\partial (f)g+\xi (f)\partial (g),\qquad f,g\in \mathcal{A},$$
(4)
then according to coordinate calculi the map
$$d:f\mapsto \partial (f)dx,$$
(5)
where $`dx`$ is the first order differential of the variable $`x`$, is a coordinate differential, i.e. $`d`$ is a linear map $`d:\mathcal{A}\to D_\xi (\mathcal{A})`$ satisfying the Leibniz rule
$$d(fg)=d(f)g+fd(g),$$
(6)
and $`D_\xi (\mathcal{A})`$ is a free left module over $`\mathcal{A}`$ generated by $`dx`$ with the right module structure defined by the commutation rule
$$dxf=\xi (f)dx.$$
(7)
If $`f=\sum _m\alpha _mx^m`$ is an element of $`\mathcal{A}`$ then the derivative $`\partial :\mathcal{A}\to \mathcal{A}`$ can be written in explicit form
$$\partial (f)=\sum _{m\ge 1}\sum _{k=0}^{m-1}\alpha _m\xi ^k(x)x^{m-k-1}.$$
(8)
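For concreteness, the following sympy sketch implements $`\partial `$ for the choice $`\xi (x)=qx`$ (the anyonic line, anticipating the case singled out later in this paper) and checks the twisted Leibniz rule (4). The function names are ours, and we read $`\xi ^k(x)`$ as the $`k`$-th power $`(\xi (x))^k`$, as the Leibniz rule requires.

```python
import sympy as sp

x = sp.symbols('x')
q = sp.Rational(-1, 2) + sp.sqrt(3) * sp.I / 2   # primitive cube root of unity

def xi(f):
    """Algebra homomorphism of the anyonic line: xi(x) = q x."""
    return sp.expand(f.subs(x, q * x))

def partial(f):
    """Twisted derivative of eq. (8), here with xi(x) = q x (the q-derivative)."""
    p = sp.Poly(sp.expand(f), x)
    out = sp.Integer(0)
    for (m,), a in zip(p.monoms(), p.coeffs()):
        out += a * sum(xi(x) ** k * x ** (m - k - 1) for k in range(m))
    return sp.expand(out)

f, g = x**2 + 1, x**3
print(sp.expand(partial(f * g) - partial(f) * g - xi(f) * partial(g)))  # 0: rule (4)
print(partial(x**3))   # 0, since [3]_q = 1 + q + q^2 = 0 on the anyonic line
```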
In order to construct a generalization of the exterior calculus with an exterior differential $`d`$ satisfying $`d^2\ne 0`$, let us introduce the second order differential $`d^2x`$. Let $`(dx)^k(d^2x)^m`$ be monomials composed from first and second order differentials, where $`k,m`$ are nonnegative integers. As usual we assume that $`(dx)^0=(d^2x)^0=1`$, where $`1`$ is the unit element of $`\mathcal{A}`$. Let $`\mathrm{\Omega }_\xi (\mathcal{A})`$ be a free left module over the algebra $`\mathcal{A}`$ generated by the above introduced monomials. It is easy to see that $`\mathcal{A}\subset \mathrm{\Omega }_\xi (\mathcal{A})`$ and $`D_\xi (\mathcal{A})\subset \mathrm{\Omega }_\xi (\mathcal{A})`$. The module $`\mathrm{\Omega }_\xi (\mathcal{A})`$ becomes a unital associative algebra if we define a multiplication law on $`\mathrm{\Omega }_\xi (\mathcal{A})`$ by the relations
$`dx\,x`$ $`=`$ $`\xi (x)dx,\qquad d^2x\,f=\xi (f)d^2x+[\partial ,\xi ]_q(f)(dx)^2,`$ (9)
$`(dx)^3`$ $`=`$ $`0,d^2xdx=q^2dxd^2x,`$ (10)
where $`[\partial ,\xi ]_q(f)=\partial (\xi (f))-q\xi (\partial (f))`$, and $`f\in \mathcal{A}`$.
Analyzing the defining commutation relations (9,10) of the algebra $`\mathrm{\Omega }_\xi (\mathcal{A})`$ one can note that the bimodule $`D_\xi (\mathcal{A})`$ of the coordinate calculus is a submodule of $`\mathrm{\Omega }_\xi (\mathcal{A})`$, and the relation (7) between its left and right structures follows from the commutation relation between the first order differential $`dx`$ and the variable $`x`$. The second remark concerns the structure of the algebra $`\mathrm{\Omega }_\xi (\mathcal{A})`$ with respect to the second order differential. The relations (9, 10) show that no power of the second order differential vanishes. Hence the algebra $`\mathrm{\Omega }_\xi (\mathcal{A})`$ is an infinite-dimensional vector space, and an arbitrary element $`\omega `$ of this algebra can be written in the form
$$\omega =\sum _{m\ge 0}\sum _{k=0}^{2}f_{km}(dx)^k(d^2x)^m,\qquad f_{km}\in \mathcal{A}.$$
(11)
We shall call elements of the algebra $`\mathrm{\Omega }_\xi (\mathcal{A})`$ differential forms on the one-dimensional space generated by the variable $`x`$. The algebra of differential forms $`\mathrm{\Omega }_\xi (\mathcal{A})`$ becomes an $`\mathbb{N}`$-graded algebra if we assign grading zero to each element of the algebra $`\mathcal{A}`$ and grading $`k+2m`$ to the monomial $`(dx)^k(d^2x)^m`$, i.e. we assume that the variable $`x`$ has grading zero and the gradings of the differentials $`dx,d^2x`$ are respectively 1 and 2. Then the algebra of differential forms splits into the direct sum of its subspaces
$$\mathrm{\Omega }_\xi (\mathcal{A})=\bigoplus _{m=0}^{\mathrm{\infty }}\mathrm{\Omega }_\xi ^m(\mathcal{A})$$
where
$`\mathrm{\Omega }_\xi ^0(\mathcal{A})`$ $`=`$ $`\mathcal{A},`$
$`\mathrm{\Omega }_\xi ^{2k}(\mathcal{A})`$ $`=`$ $`\{f(d^2x)^k+h(dx)^2(d^2x)^{k-1}:f,h\in \mathcal{A}\},\quad k=1,2,\dots `$ (12)
$`\mathrm{\Omega }_\xi ^{2k+1}(\mathcal{A})`$ $`=`$ $`\{fdx(d^2x)^k:f\in \mathcal{A}\},\quad k=0,1,\dots .`$ (13)
We now extend the differential (5) of the coordinate calculus to the whole algebra $`\mathrm{\Omega }_\xi (\mathcal{A})`$ as follows: $`d(\omega )=(\partial f-h)dx(d^2x)^k`$ for $`\omega =f(d^2x)^k+h(dx)^2(d^2x)^{k-1}\in \mathrm{\Omega }_\xi ^{2k}(\mathcal{A})`$, and $`d(\omega )=f(d^2x)^{k+1}+\partial f(dx)^2(d^2x)^k`$ for $`\omega =fdx(d^2x)^k\in \mathrm{\Omega }_\xi ^{2k+1}(\mathcal{A})`$.
We shall call the above defined differential an exterior differential on the algebra of differential forms $`\mathrm{\Omega }_\xi (\mathcal{A})`$. It follows from the definition that the exterior differential is an endomorphism of degree 1 of the algebra $`\mathrm{\Omega }_\xi (\mathcal{A})`$, i.e. $`d:\mathrm{\Omega }_\xi ^m(\mathcal{A})\to \mathrm{\Omega }_\xi ^{m+1}(\mathcal{A})`$.
###### Proposition 1
The algebra of differential forms $`\mathrm{\Omega }_\xi (\mathcal{A})`$ is a graded $`q`$-differential algebra with respect to the exterior differential $`d`$, i.e. for any two differential forms $`\omega ,\theta `$ the exterior differential $`d`$ satisfies
$`d^3(\omega )`$ $`=`$ $`0,`$ (14)
$`d(\omega \theta )`$ $`=`$ $`d(\omega )\theta +q^{|\omega |}\omega d(\theta ),`$ (15)
where $`|\omega |`$ is the grading of a form $`\omega `$.
Proof. It follows from (12,13) that any differential form $`\omega `$ can be decomposed into the sum of two forms $`\omega _o,\omega _e`$ respectively of odd and even grading, where
$`\omega _e`$ $`=`$ $`\sum _{k\ge 1}[f_k(d^2x)^k+h_{k-1}(dx)^2(d^2x)^{k-1}],`$ (16)
$`\omega _o`$ $`=`$ $`\sum _{k\ge 0}g_kdx(d^2x)^k.`$
From the definition of the exterior differential it also follows that a differential form of even grading (16) is $`d`$-closed ($`d\omega _e=0`$) if it satisfies the following condition
$$\partial f_k=h_{k-1}.$$
(17)
Now it is easy to show that any form of odd grading is $`d^2`$-closed, i.e. $`d^2\omega _o=0`$. Indeed, applying the exterior differential $`d`$ to the form $`\omega _o`$, one obtains the form of even grading
$$d\omega _o=\sum _{k\ge 0}[g_k(d^2x)^{k+1}+\partial g_k(dx)^2(d^2x)^k],$$
which is $`d`$-closed according to (17). Differentiating form (16) of even grading twice, one obtains the form
$$d^2\omega _e=\sum _{k\ge 1}[(\partial f_k-h_{k-1})(d^2x)^{k+1}+(\partial ^2f_k-\partial h_{k-1})(dx)^2(d^2x)^k],$$
(18)
which is $`d`$-closed. Thus the cube nilpotency (14) of the exterior differential is proved. The $`q`$-Leibniz rule (15) can be verified by a direct calculation.
The homomorphism $`\xi `$ plays the role of a parameter in the structure of the algebra of differential forms and, by choosing a particular homomorphism, we can specify the structure of $`\mathrm{\Omega }_\xi (\mathcal{A})`$. Recall that, according to the definition, the algebra of differential forms $`\mathrm{\Omega }_\xi (\mathcal{A})`$ is a free left module over the algebra $`\mathcal{A}`$ generated by the monomials $`(dx)^k(d^2x)^m`$, with an associative multiplication law determined by the relations (9, 10). Actually this algebra is generated by the three generators $`x,dx,d^2x`$. Hence its structure will be more transparent if we define it by means of commutation relations imposed on these generators. The only relation in (9, 10) which is not a commutation relation between generators is the second relation in (9), which contains an arbitrary element $`f`$ of the algebra $`\mathcal{A}`$. The reason it contains an arbitrary element $`f`$ is that, in contrast to the commutation relation $`dxx=\xi (x)dx`$, the right-hand side of this relation is not a homomorphism of the algebra $`\mathcal{A}`$ because of the non-homogeneous term $`[\partial ,\xi ]_q(f)`$. Obviously, imposing the condition
$$[\partial ,\xi ]_q(f)=0,$$
(19)
we can replace the second relation in (9) by the commutation relation
$$d^2xx=\xi (x)d^2x,$$
(20)
which has exactly the same form as the first one involving the first order differential.
Actually the choice of a homomorphism $`\xi `$ is not very wide. Indeed, every homomorphism $`\xi `$ of the algebra $`\mathcal{A}`$ is determined by an element $`h_\xi \in \mathcal{A}`$ such that $`\xi (x)=h_\xi `$. From (4) it follows that if the derivative $`\partial `$ determined by a homomorphism $`\xi `$ is to satisfy $`\partial (x^m)\propto x^{m-1}`$, then $`\xi (x)=\alpha _\xi x`$, where $`\alpha _\xi `$ is a complex number. The condition (19) can be solved with respect to the homomorphism $`\xi `$, and the following proposition describes the structure which is induced on one-dimensional space in this case.
###### Proposition 2
The condition (19) is satisfied if and only if $`\xi (x)=qx`$. This solution leads to the $`q`$-differential calculus on the anyonic line with derivative
$$\partial (f)=\sum _{k\ge 1}\alpha _k\frac{x^{k-1}}{[k-1]_q!},\qquad f=\sum _{k\ge 0}\alpha _k\frac{x^k}{[k]_q!}.$$
(21)
This means that one can consistently add the relation $`x^3=0`$ to the relations (9, 10).
Proof. Using the formula (8) and writing $`h=h_\xi =\xi (x)`$, one can find
$`\partial (\xi (f))`$ $`=`$ $`\alpha _\xi \sum _m\alpha _m\sum _{k=0}^{m-1}\xi ^k(h)h^{m-k-1},`$
$`q\xi (\partial (f))`$ $`=`$ $`q\sum _m\alpha _m\sum _{k=0}^{m-1}\xi ^k(h)h^{m-k-1}.`$
Thus
$$[\partial ,\xi ]_q(f)=(\alpha _\xi -q)\sum _m\alpha _m\sum _{k=0}^{m-1}\xi ^k(h)h^{m-k-1}.$$
(22)
From the above formula it immediately follows that the condition (19) holds if and only if $`\alpha _\xi =q`$, i.e. $`\xi (x)=qx`$ or $`\xi (f(x))=f(qx)`$. Putting $`\xi (x)=qx`$ in (8), one obtains (21). It was explained in that the $`q`$-differential calculus determined by the derivative (21) is correctly defined at a cube root of unity on the one-dimensional space generated by a variable $`x`$ only in the case when $`x^3=0`$. It is easy to show that the relation $`x^3=0`$ can be consistently added to the relations (9, 10).
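As a quick numerical check of Proposition 2 (again our illustration, not the authors'), one can verify that the prefactor $`(\alpha _\xi -q)`$ of Eq. (22) vanishes only for $`\alpha _\xi =q`$, and that $`[3]_q=1+q+q^2=0`$ at a primitive cube root of unity, which is what makes the relation $`x^3=0`$ compatible with the derivative:

```python
import cmath

q = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity, q^3 = 1

def q_number(m, alpha):
    return sum(alpha**k for k in range(m))

# [3]_q = 1 + q + q^2 vanishes, so d(x^3) = [3]_q x^2 = 0: the relation
# x^3 = 0 is consistent with the derivative on the anyonic line.
print(abs(q_number(3, q)) < 1e-12)             # True

# The defect (alpha - q) of Eq. (22) is zero only when xi(x) = q x.
for alpha in (q, 1.0, q**2):
    print(abs(alpha - q) < 1e-12)              # True, False, False
```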
Now the algebra of differential forms $`\mathrm{\Omega }_\xi (\mathcal{A})`$ on the anyonic line can be defined as a unital associative algebra generated by the three generators $`x,dx,d^2x`$ satisfying the following commutation relations:
$`x^3`$ $`=`$ $`0,`$ (23)
$`dxx`$ $`=`$ $`qxdx,d^2xx=qxd^2x`$ (24)
$`(dx)^3`$ $`=`$ $`0,d^2xdx=q^2dxd^2x.`$ (25)
## Acknowledgments
The authors would like to acknowledge the financial support of the Estonian Science Foundation under the grants No. 3308 and No. 1134.
# Elastic moduli, dislocation core energy and melting of hard disks in two dimensions
## I Introduction
One of the first continuous systems to be studied by computer simulations is the system of hard disks interacting with the two body potential,
$`V(r)`$ $`=`$ $`\mathrm{\infty },\quad r\le \sigma `$ (1)
$`=`$ $`0,r>\sigma `$ (2)
where $`\sigma `$, the hard disk diameter (taken to be $`1`$ in the rest of the paper), sets the length scale for the system, and the energy scale is set by $`k_BT=1`$. Despite its simplicity, this system was shown to undergo a phase transition from solid to liquid as the density $`\rho `$ was decreased. The nature of this phase transition, however, is still being debated. Early simulations always found strong first order transitions. As computational power increased, the observed strength of the first order transition progressively decreased! Using sophisticated techniques, Lee and Strandburg and Zollweg and Chester found evidence for, at best, a weak first order transition. A first order transition has also been predicted by theoretical approaches based on density functional theory. On the other hand, recent simulations of hard disks by Jaster, using as many as $`N=65536`$ particles, find evidence for a continuous Kosterlitz-Thouless-Halperin-Nelson-Young (KTHNY) transition from liquid to a hexatic phase, with orientational but no translational order, at $`\rho =0.899`$. Nothing could be ascertained, however, about the expected transition from the hexatic to the crystalline solid at higher densities, because the computations became prohibitively expensive. The solid to hexatic melting transition was estimated to occur at a density $`\rho _c\simeq 0.91`$. A priori, it is difficult to assess why various simulations give contradicting results concerning the order of the transition. In this paper we take an approach complementary to Jaster's and investigate the melting transition of the solid phase. We show that the hard disk solid is unstable to perturbations which attempt to produce free dislocations, leading to a solid $`\to `$ hexatic transition in accordance with KTHNY theory. Though this has been attempted in the past, numerical difficulties, especially with regard to equilibration of defect degrees of freedom, make this task highly challenging. We also show that this transition lies close to a first order solid to liquid melting line. We calculate quantitatively the relative positions of the first order and the KTHNY transitions in the parameter space for this system and explain why earlier simulations failed to arrive at a consensus.
The coarse grained density of a crystalline solid can be expanded as $`\rho (\mathbf{r})=\sum _\mathbf{G}\rho _\mathbf{G}e^{i\mathbf{G}\cdot \mathbf{r}}`$, where $`\mathbf{G}`$ is a reciprocal lattice vector. The order parameters $`\rho _\mathbf{G}`$ are complex, $`\rho _\mathbf{G}=|\rho _\mathbf{G}|e^{i\mathbf{u}\cdot \mathbf{G}},`$ and the displacement vector $`\mathbf{u}`$ is the deviation of an atom from the nearest perfect lattice point $`\mathbf{R}`$. If fluctuations of the amplitude of $`\rho _\mathbf{G}`$ can be neglected, then a solid can be described in terms of $`\mathbf{u}`$ alone, which is the fundamental assumption of elasticity theory. The elastic Hamiltonian for hard disks is given by,
$$F=P\epsilon _++\frac{B}{2}\epsilon _+^2+(\mu +P)\left(\frac{\epsilon _{-}^2}{2}+2\epsilon _{xy}^2\right),$$
(3)
where $`B`$ is the bulk modulus. The quantity $`\mu _{eff}=\mu +P`$ is the "effective" shear modulus (the slope of the shear stress vs shear strain curve) and $`P`$ is the pressure. The hard disk solid, being a purely repulsive system, is always under a uniform hydrostatic pressure $`P(\rho )`$ at any density $`\rho `$. The Lagrangian elastic strains are defined as,
$$\epsilon _{ij}=\frac{1}{2}\left(\frac{\partial u_i}{\partial R_j}+\frac{\partial u_j}{\partial R_i}+\frac{\partial u_i}{\partial R_k}\frac{\partial u_k}{\partial R_j}\right),$$
(4)
where the indices $`i,j`$ run over $`x`$ and $`y`$, and finally $`\epsilon _+=\epsilon _{xx}+\epsilon _{yy}`$ and $`\epsilon _{-}=\epsilon _{xx}-\epsilon _{yy}`$.
In general a solid possesses two types of excitations: "smooth" phonons and "singular" dislocations. Long wavelength phonons inhibit long range order in 2-d solids, so that the intensity of a Bragg reflection peak $`I_G\propto e^{-2W_G}`$, where the Debye-Waller factor $`W_G\propto G^2a^{2-d}/(d-2)`$ ($`a`$ is the lattice parameter and $`d`$ the number of spatial dimensions) diverges, and order parameter correlations decay algebraically, an example of Quasi Long Ranged Order (QLRO). We know that singular excitations, like dislocations, can drive a QLRO $`\to `$ disorder transition (where correlations decay exponentially). This situation has been analysed by the KTHNY theory .
The KTHNY theory is usually presented for a 2-d triangular solid under zero external stress. It is shown that the dimensionless Young's modulus of a two-dimensional solid,
$$K=\frac{8}{\sqrt{3}\rho }\frac{\mu }{\{1+\mu /(\lambda +\mu )\}},$$
where $`\mu `$ and $`\lambda `$ are the Lamé constants, depends on the fugacity of dislocation pairs, $`y=\mathrm{exp}(-E_c)`$, where $`E_c`$ is the core energy of the dislocation, and on the "coarse-graining" length scale $`l`$. This dependence is expressed in the form of the following coupled differential equations (the recursion relations) for the renormalization of $`K`$ and $`y`$:
$`{\displaystyle \frac{\partial K^{-1}}{\partial l}}`$ $`=`$ $`3\pi y^2e^{\frac{K}{8\pi }}[{\displaystyle \frac{1}{2}}I_0({\displaystyle \frac{K}{8\pi }})-{\displaystyle \frac{1}{4}}I_1({\displaystyle \frac{K}{8\pi }})],`$ (5)
$`{\displaystyle \frac{\partial y}{\partial l}}`$ $`=`$ $`(2-{\displaystyle \frac{K}{8\pi }})y+2\pi y^2e^{\frac{K}{16\pi }}I_0({\displaystyle \frac{K}{8\pi }}).`$ (6)
where $`I_0`$ and $`I_1`$ are modified Bessel functions. The thermodynamic value is recovered by taking the limit $`l\to \mathrm{\infty }`$.
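For illustration, the recursion relations (5,6) can be integrated with a simple Euler scheme. The following is our sketch, not the authors' program; scipy's modified Bessel functions `i0` and `i1` are used, and the initial fugacity is parametrized as $`y_0=\mathrm{exp}(-E_c)`$ with a hypothetical core energy $`E_c=0.1K_0`$:

```python
import numpy as np
from scipy.special import i0, i1

def kthny_flow(K0, y0, dl=1e-3, l_max=50.0):
    """Euler-integrate the recursion relations (5,6); returns the
    renormalized coupling K and fugacity y at scale l_max."""
    K, y = K0, y0
    for _ in range(int(l_max / dl)):
        z = K / (8.0 * np.pi)
        dKinv = 3.0 * np.pi * y**2 * np.exp(z) * (0.5 * i0(z) - 0.25 * i1(z))
        dy = (2.0 - z) * y + 2.0 * np.pi * y**2 * np.exp(z / 2.0) * i0(z)
        K = 1.0 / (1.0 / K + dKinv * dl)
        y = y + dy * dl
        if y > 1.0:        # runaway fugacity: disordered side of the separatrix
            break
    return K, y

# Initial conditions slightly below and above K = 16*pi.
for K0 in (0.9 * 16 * np.pi, 1.2 * 16 * np.pi):
    print(K0 / (16 * np.pi), kthny_flow(K0, y0=np.exp(-0.1 * K0)))
```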
We see in Fig. (1) that the trajectories in the $`y`$-$`K`$ plane can be classified in two classes, namely those for which $`y\to 0`$ as $`l\to \mathrm{\infty }`$ (ordered phase) and those for which $`y\to \mathrm{\infty }`$ as $`l\to \mathrm{\infty }`$ (disordered phase). These two classes of flows are separated by lines called the separatrix. The transition temperature $`T_c`$ (or $`\rho _c`$) is given by the intersection of the separatrix with the line of initial conditions $`K(\rho ,T)`$ and $`y=\mathrm{exp}(-E_c(K))`$, where $`E_c\sim cK/16\pi `$. At the transition point the flow follows the separatrix, so that the renormalized $`K`$ jumps from $`16\pi `$ to $`0`$ at the transition. The ordered phase corresponds to the solid (no free dislocations) and the disordered phase is a phase where free dislocations proliferate. Proliferation of dislocations, however, does not produce a liquid, but rather a liquid crystalline phase called a "hexatic", with quasi-long ranged (QLR) orientational order but short ranged positional order. A second K-T transition destroys QLR orientational order and takes the hexatic to the liquid phase by the proliferation of "disclinations" (scalar charges). Apart from $`T_c`$ there are several universal predictions from the KTHNY theory; for example, the order parameter correlation length and susceptibility have essential singularities ($`\sim e^{bt^{-\nu }}`$, $`t\equiv T/T_c-1`$) near $`T_c`$. All these predictions can, in principle, be checked in simulations.
Note that, in order to use the KTHNY theory to study the solid-hexatic transition in hard disks, we have to bear in mind that for the hard disk solid, which is always under a uniform hydrostatic pressure $`P(\rho )`$, the effective shear modulus $`\mu _{eff}`$ has to be used in the definition of $`K`$.
The KTHNY theory predicts when a 2-D solid becomes unstable to the proliferation of dislocations. However, there is a second possibility. The free energy of the liquid may become higher than that of the stable solid already at a density smaller than that at which the hexatic phase would appear. This leads to a first order transition and a jump in density at the liquid-solid coexistence pressure (for simulations in the $`NVT`$ ensemble) instead of an intermediate hexatic phase. Often it is very difficult to distinguish the two possibilities, as the history of simulation studies of hard disks shows. This is further complicated by the fact that the KTHNY theory also predicts that the specific heat, or equivalently, in the case of the hard disk system, the compressibility, shows a smooth bump leading to a near flat region in the pressure-density diagram. In Fig. (2a) we show the conventional situation, where the dotted line designates the often observed first order transition. In Fig. (2b) we show Jaster's results, where it is seen that instead of a flat region in the $`P`$-$`\rho `$ curve, or a Maxwell loop usually associated with a first order transition, one gets a smooth bending over to a state with a high compressibility. Finite size effects, which would be present in the first order case, are negligible. This would indicate the presence of a liquid-hexatic transition. The question of a solid to hexatic transition is still open. It is worth noting that a detailed finite size scaling analysis of the orientational order in this system is not necessarily in contradiction to this result.
Why do simulations of hard disk solids find it so difficult to see a solid-hexatic transition? One of the reasons is, of course, the divergence of the correlation length as the system approaches the transition, so that one requires large systems. This is complicated by the fact that, in order to obtain equilibrated values of the dislocation density ($`y`$), one also needs very long simulation times, because in a high density solid the diffusion of defects is very slow. To illustrate this point we have attempted to calculate the defect density of a hard disk solid in a Monte Carlo simulation. We perform conventional Monte Carlo simulations in the NVT ensemble with the usual Metropolis updating scheme for $`N=3120`$ particles. We choose a single density $`\rho =.92`$; a sequence of initial states is then constructed by adding extra complete rows of atoms (thereby increasing the density to $`\rho _i\ge .92`$) and removing an equal number of atoms from the bulk at random. In equilibrium, these extra vacancies in the bulk should diffuse out and the lattice parameter adjust to fill in the gap. After about one million Monte Carlo steps we calculate the number of five coordinated ($`n_5`$) and seven coordinated ($`n_7`$) atoms. Since our system cannot have free vacancies (due to our choice of ensemble) we expect $`n_5=n_7`$ in equilibrium. The simulations at each $`\rho _i`$ are repeated for ten realizations of the initial state. Our results are shown in Fig. (3). We see that $`n_5\ne n_7`$ (except for the trivial case of $`\rho _i=.92`$), the difference growing with $`\rho _i`$ as expected, and the statistical errors are very large. We therefore conclude that, even for a relatively small system of $`3120`$ particles, the equilibration of defects (vacancies in this case) is an extremely slow process. So it should not come as a surprise that brute force simulations of the hard disk solid fail to produce the true equilibrium phase.
It may also happen, on the other hand, that the KTHNY theory fails, for the following reasons. Firstly, elastic theory itself may fail near the transition, so that amplitude or long wavelength phonon fluctuations may destabilise the solid, producing a continuous transition. Though remote, this possibility has nevertheless been discussed in the literature . Secondly, perturbation theory in $`y`$ may break down because $`E_c`$ is too small (i.e. $`y`$ too large) at the transition. Saito and Strandburg showed, using lattice discretized versions of a dislocation Hamiltonian, that KTHNY perturbation theory breaks down if $`E_c<2.7`$ at the transition. In our simulations of the hard disk system we check both these possibilities as well as the possibility of a first order transition.
In the next section we discuss our simulations together with our method for computing elastic constants and core energies. We use these inputs to check for a first order transition and a KTHNY scenario in Section III. We summarise and conclude this work in Section IV.
## II Elastic constants and dislocation core energies from constrained simulations
One way to circumvent the problem of large finite size effects and slow relaxation due to diverging correlation lengths is to simulate a system which is constrained to remain defect (dislocation) free and, as it turns out, without a phase transition. Relatively small systems simulated for short times therefore yield thermodynamically accurate data in this limit. Surprisingly, we show that using these data it is possible to predict the expected equilibrium behaviour of the unconstrained system. It is worth mentioning that, with an approach similar in spirit to the one followed here, we have obtained excellent results for the Kosterlitz-Thouless transition in the two-dimensional planar rotor model, which has served as an important model in the development of the KTHNY theory , after the proofs of the low temperature susceptibility divergence in this model and the existence of phase transitions without local order parameters in general were given.
We simulate $`N=3120`$ hard disks in an (almost) square box. We have also simulated two additional systems of $`N=2016`$ and $`N=4012`$ particles in order to look for residual finite size effects. Our algorithm follows closely the usual Metropolis scheme for simulating hard disks. The simulation is always started from a perfect triangular lattice which fits into our box, the size of the box determining the density. Once a regular MC move is about to be accepted, we perform a local Delaunay triangulation involving the moved disk and its nearest and next nearest neighbors. We compare the connectivity of this Delaunay triangulation with that of the reference lattice (a copy of the initial state) around the same particle. If any old bond is broken and a new bond formed (Fig. (4)) we reject the move, since one can show that this is equivalent to a dislocation-antidislocation pair separated by one lattice constant, involving dislocations of the smallest Burgers vector. Note that (i) only dislocation pairs of smallest Burgers vector are eliminated; dislocations of higher topological charge cost higher energy and may not be relevant at the densities where a melting transition is usually observed; (ii) other fluctuations, e.g. long wavelength phonon fluctuations and fluctuations of the amplitude of the order parameter (spontaneous production of voids in the system), are not eliminated as long as they preserve connectivity. The fraction of moves $`p`$ which are rejected because they violate the constraint is stored. Next, we need a method to calculate elastic constants accurately in our simulations, making sure that we extrapolate to the thermodynamic limit. Such a method has been recently developed by us and discussed in detail elsewhere. Below we include a brief description for completeness.
Since we have a dislocation free system, we can always associate an ideal, static, "reference" lattice point $`\mathbf{R}`$ with every hard disk all through the simulation and calculate $`\mathbf{u}_\mathbf{R}(t)=\mathbf{r}(t)-\mathbf{R}`$. Microscopic strains $`\epsilon _{ij}(\mathbf{R})`$ can now be calculated for every reference lattice point $`\mathbf{R}`$. Next, we coarse grain (average) the microscopic strains within a sub-box of size $`L_b`$,
$$\overline{\epsilon }_{ij}=L_b^{-d}\int ^{L_b}d^dr\,\epsilon _{ij}(\mathbf{r})$$
and calculate the ($`L_b`$ dependent) quantities,
$`S_{++}^{L_b}`$ $`=`$ $`<\overline{\epsilon }_+\overline{\epsilon }_+>`$ (7)
$`S_{--}^{L_b}`$ $`=`$ $`<\overline{\epsilon }_{-}\overline{\epsilon }_{-}>`$ (8)
$`S_{33}^{L_b}`$ $`=`$ $`4<\overline{\epsilon }_{xy}\overline{\epsilon }_{xy}>`$ (9)
The sub-blocks may be constructed by simply dividing the entire box of size $`L`$ into an integral number of smaller boxes, as done in this calculation so that $`L/L_b`$ is an integer, or multiple sub-boxes of arbitrary size $`L_b\le L`$ can be constructed within the simulation cell, as in Ref. . Lastly, quantities in the thermodynamic limit are obtained by fitting the data to the form,
$$S_{\gamma \gamma }^{L_b}=S_{\gamma \gamma }^{\mathrm{\infty }}\left[\mathrm{\Psi }(xL/\xi )-\left(\mathrm{\Psi }(L/\xi )-C\left(\frac{a}{L}\right)^2\right)x^2\right]+\mathcal{O}(x^4).$$
where the index $`\gamma =+,-,3`$, the ratio $`x=L_b/L`$, and the function $`\mathrm{\Psi }(\alpha )`$ is defined as,
$$\mathrm{\Psi }(\alpha )=\frac{2}{\pi }\alpha ^2\int _0^1\int _0^1dx\,dy\,K_0(\alpha \sqrt{x^2+y^2})$$
where $`K_0`$ is a Bessel function and $`\xi `$ is the correlation length for the $`\epsilon \epsilon `$ correlations.
The elastic constants in the thermodynamic limit are obtained from the set: $`B=1/2S_{++}^{\mathrm{\infty }}`$ and $`\mu _{eff}=1/2S_{--}^{\mathrm{\infty }}=1/2S_{33}^{\mathrm{\infty }}`$. The last two equations for $`\mu _{eff}`$ serve as a stringent internal consistency check and yield an accurate error estimate for this quantity. There are two ways to obtain the fluctuations $`S_{\gamma \gamma }^{L_b}`$ for every sub-block size $`L_b`$ entering the fit above. One can either accumulate $`<\epsilon _\gamma \epsilon _\gamma >`$ directly, or construct histograms of the block strains $`\epsilon _\gamma `$ and obtain $`S_{\gamma \gamma }`$ by fitting Gaussian profiles to the normalized probability distributions of $`\epsilon _\gamma `$ for every block size $`L_b`$. Again this constitutes another excellent consistency check and a measure of the statistical uncertainties involved. We accumulate data until all these uncertainties are less than a percent. Residual finite size effects, obtained by repeating the entire procedure for $`N=2016`$ and $`4012`$ particles at a few densities, are also seen to be within the same limit of accuracy.
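The scaling function $`\mathrm{\Psi }(\alpha )`$ entering the fit can be evaluated by direct numerical quadrature; a sketch (ours, assuming the form of $`\mathrm{\Psi }`$ as reconstructed above):

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import k0

def psi(alpha):
    """Psi(alpha) = (2/pi) alpha^2 int_0^1 int_0^1 dx dy K0(alpha*sqrt(x^2+y^2))."""
    val, _ = dblquad(lambda y, x: k0(alpha * np.sqrt(x * x + y * y)),
                     0.0, 1.0, 0.0, 1.0)
    return 2.0 / np.pi * alpha**2 * val

print(psi(5.0))   # typical value entering the block-fluctuation fits
```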
There are several distinct advantages to our method. In general it works for any system for which instantaneous configurations can be obtained (for example, either from other simulations or from real experiments). We obtain the finite size scaled results directly from a single simulation. As discussed above, there are a number of stringent internal consistency checks which can be used to obtain very accurate data. In spite of this, our method is easy to use, and the computational complexity is no greater than that of calculating, e.g., pair correlation functions. The method can also be easily adapted for calculating local strains and stresses in inhomogeneous situations.
## III Results and Discussion
Our results for the elastic moduli, the pressure and the fraction of moves, $`p`$, rejected due to the topological constraint discussed above are given in Table I as a function of density. In Fig. (5), we compare our results for the bulk and shear moduli with the data of two previous simulations of Ref. and Ref. . We also compare our simulation results to estimates from free volume theory in the simplest, independent cell approximation. Within this approach the Helmholtz free energy per particle is given by $`f=-\mathrm{log}(v_f)`$, where the available free volume is $`v_f=(a-1)^2/\rho _c`$ and the close packed density $`\rho _c=2/\sqrt{3}`$. Other thermodynamic quantities can be obtained by successive differentiation, viz.
$`P`$ $`=`$ $`\rho {\displaystyle \frac{x}{x-1}}`$ (10)
$`B`$ $`=`$ $`P\{1+{\displaystyle \frac{1}{2(x-1)}}\}`$ (11)
$`\mu _{eff}`$ $`=`$ $`{\displaystyle \frac{B}{2}}`$ (12)
where $`x=\sqrt{\rho _c/\rho }`$ and we have used the Cauchy relation, strictly valid only for a harmonic solid, for our estimate of the effective shear modulus $`\mu _{eff}`$. Note that the free volume elastic moduli and the pressure diverge as $`\rho \to \rho _c`$.
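A direct transcription of the free volume estimates (10)-(12) as a small Python helper (our sketch):

```python
import numpy as np

RHO_C = 2.0 / np.sqrt(3.0)   # close-packed density of hard disks (sigma = 1)

def free_volume(rho):
    """Free volume estimates of pressure, bulk and effective shear modulus."""
    x = np.sqrt(RHO_C / rho)
    P = rho * x / (x - 1.0)
    B = P * (1.0 + 1.0 / (2.0 * (x - 1.0)))
    mu_eff = B / 2.0          # Cauchy relation, as in the text
    return P, B, mu_eff

for rho in (0.95, 1.00, 1.05):
    print(rho, free_volume(rho))   # all three quantities diverge as rho -> rho_c
```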
We see that our bulk modulus interpolates smoothly from the free volume values at high densities to those of Ref. at low densities. Overall, the differences between the three sets of data are small. Our values for the shear modulus agree well with the free volume results at high density, but at low densities they are smaller than all other estimates, though close to those of Ref. .
Once the elastic constants are obtained we can analyze in detail the two competing scenarios, viz. a first order solid-liquid transition or a KTHNY transition to the hexatic phase.
### A Equation of state, free energy and first order melting
First of all, we should point out that our constrained simulations allow us to obtain elastic constants down to a density as low as $`\rho =.88`$, far below the density $`\rho =.899`$ where the transition to the liquid is expected to occur, which implies that amplitude and phonon fluctuations cannot destabilize the solid. So an ordinary second order transition is ruled out. However, there can always be a first order transition if the free energy of the liquid becomes lower than that of the perfect solid.
In order to investigate this question we obtain the equation of state $`P(\rho )`$ and the Gibbs free energy $`g(P)`$ of the liquid and the solid.
To obtain the equation of state of the liquid we use the semi-empirical, accurate, analytical form of Santos et al. , which is in excellent agreement with computer simulation data. The pressure is given by,
$$P/\rho =\{1-2\eta +\frac{2\eta _c-1}{\eta _c^2}\eta ^2\}^{-1}$$
(13)
where the packing fraction $`\eta =(\pi /4)\rho `$ and $`\eta _c`$ is the packing fraction at close packing. The Helmholtz free energy per particle is,
$$f(\rho )=\int _0^\eta d\eta ^{}\frac{P/\rho -1}{\eta ^{}}+f_{id},$$
(14)
where the ideal gas Helmholtz free energy per particle is $`f_{id}=\mathrm{log}(\rho )-1`$. The Gibbs free energy $`g(P)`$ is then obtained by the standard Legendre transformation, $`g=f+P/\rho `$. In addition we use the data of Jaster in the transition region to obtain a revised estimate of the free energy. This is done by fitting Jaster's data for $`P(\rho )`$ to a polynomial for $`\rho >0.85`$ which matches the results of Santos et al. for $`\rho \le 0.85`$. From this equation of state we can obtain the Helmholtz and hence the Gibbs free energy by integrating, starting from the value given by Eq. (14) at $`\rho =0.85`$.
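Numerically, Eqs. (13)-(14) are straightforward to evaluate; the integrand $`(P/\rho -1)/\eta ^{}`$ tends to the finite value 2 as $`\eta ^{}\to 0`$, so a standard quadrature suffices. A sketch (ours):

```python
import numpy as np
from scipy.integrate import quad

ETA_C = np.pi / (2.0 * np.sqrt(3.0))   # close-packing fraction, (pi/4) * 2/sqrt(3)

def Z_santos(eta):
    """Compressibility factor P/rho of Eq. (13)."""
    return 1.0 / (1.0 - 2.0 * eta + (2.0 * ETA_C - 1.0) / ETA_C**2 * eta**2)

def f_liquid(rho):
    """Helmholtz free energy per particle, Eq. (14), with f_id = log(rho) - 1."""
    eta = np.pi / 4.0 * rho
    excess, _ = quad(lambda e: (Z_santos(e) - 1.0) / e, 0.0, eta)
    return excess + np.log(rho) - 1.0

def g_liquid(rho):
    """Gibbs free energy per particle, g = f + P/rho."""
    return f_liquid(rho) + Z_santos(np.pi / 4.0 * rho)

print(g_liquid(0.88))
```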
The equation of state for the solid is obtained by integrating our bulk modulus values using the result of Bladon and Frenkel at $`\rho =1.049`$ as the reference pressure ($`P=22.00`$). The Gibbs free energy is obtained by further integration again using the result obtained for the free energy in Ref. at $`\rho =1.049`$ as a reference ($`g=25.64`$).
The possible (first order) transitions can be located by equating the Gibbs free energies. The slope discontinuity gives the (inverse) density difference of the coexisting phases. We find immediately that all the free energies have very similar slopes (see Fig. 6), so that any possible first order transition would have only a small jump in the density. It also implies that small errors in the free energy of our reference state make a large difference in the coexistence pressure. We have therefore reduced the reference free energy by a small amount ($`<4\%`$) so that the coexistence pressure is $`P_1=9.2`$, the value found in most recent simulations.
Using the semi-empirical free energy of Santos et al. we obtain a (metastable) first order transition with $`\rho _l=0.871`$ and $`\rho _s=0.912`$, as observed in early simulations. Of course, this estimate of $`\rho _l`$ is only a lower bound, as the theory of Ref. is expected to overestimate the free energy. The free energy from Jaster's data is lower and almost completely parallel to that of the solid, suggesting a very weak first order transition, if one at all. In this case we get a slope difference $`<1.3\%`$ (viz. $`\rho _l=0.899`$ and $`\rho _s=0.911`$), well within our numerical accuracy (Fig. 6).
### B Core energy $`E_c`$ and the KTHNY transition:
Next, we analyze our results in the light of the KTHNY theory. The unrenormalized $`K=16\pi `$ at $`\rho _c=0.904`$ ($`P_c=8.92`$) (see Fig. (2), lower arrow), which would imply that a weak first order transition from solid to liquid preempts a KTHNY solid-hexatic transition. However, the value of $`K`$ is renormalized by the presence of dislocations. We can estimate the extent of this renormalization from our data.
The dislocation pair probability
$$p_d=\mathrm{exp}(-2E_c)Z(K)$$
(15)
where $`Z(K)`$ is the "internal partition function" of a dislocation pair and is given by,
$$Z(K)=\frac{2\pi \sqrt{3}}{K/8\pi -1}I_0(\frac{K}{8\pi })\mathrm{exp}(\frac{K}{8\pi }).$$
(16)
Here we have set the core radius $`r_c=a`$, the lattice parameter. The core energy of a dislocation is a difficult quantity to obtain from a simulation, though this has been attempted in the past. In our case, an ansatz which gives excellent results in the 2D XY model , and which identifies the rejection ratio $`p`$ as $`p=p_d`$, can be used to obtain $`E_c`$; see Fig. (7). Throughout the relevant region $`E_c`$ is safely above the limit $`E_c>2.7`$ . At the transition $`E_c\approx 6`$, which is in good agreement with the results of Murray and Van Winkle ($`E_c\approx 5.6`$) from experiments on 2-d charge stabilised colloids and of Zahn et al. ($`E_c\approx 4`$) for paramagnetic colloids.
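Inverting Eqs. (15)-(16) gives the core energy as $`E_c=-\frac{1}{2}\mathrm{log}[p/Z(K)]`$. The sketch below (ours) applies this to one row of Table I; note that the table lists $`p\times 10^2`$:

```python
import numpy as np
from scipy.special import i0

def Z_pair(K):
    """Internal partition function of a dislocation pair, Eq. (16), with r_c = a."""
    z = K / (8.0 * np.pi)
    return 2.0 * np.pi * np.sqrt(3.0) / (z - 1.0) * i0(z) * np.exp(z)

def core_energy(p, K):
    """Invert p = exp(-2 E_c) Z(K) for the dislocation core energy E_c."""
    return -0.5 * np.log(p / Z_pair(K))

# The rho = 0.92 row of Table I: p x 10^2 = 0.09532 and K/16pi = 1.1722.
print(core_energy(p=0.09532e-2, K=1.1722 * 16.0 * np.pi))   # roughly E_c ~ 6
```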
Finally, to obtain the melting density we use the unrenormalised $`K`$ and $`y=\mathrm{exp}(-E_c(K))`$ as inputs to the KTHNY recursion relations (Eqs. (5,6)) and solve them numerically by a standard Euler discretization to obtain $`K_R`$, see Fig. (8). The melting density obtained from our value for $`K_R`$ is $`\rho _c=.916`$ and $`P_c=9.39`$ (Fig. (2), upper arrow). This means that the KTHNY transition now precedes the first order transition and the solid transforms to the hexatic phase.
## IV Summary and conclusion
We have simulated a dislocation free triangular solid of hard disks using a constrained Monte Carlo algorithm. Using a block analysis scheme we calculate the finite size scaled elastic constants of this solid. From the number of times the system attempts to violate our no-dislocation constraint we can obtain (virtual) dislocation probabilities and hence the core energy. The absence of a phase transition in our system implies that all correlation lengths remain finite and the problem of slow equilibration of defect densities is eliminated. In effect we obtain highly accurate values of the unrenormalized coupling constant $`K`$ and the defect fugacity $`y`$, which can be used as inputs to the KTHNY recursion relations. Numerical solution of these recursion relations then yields the renormalized coupling $`K_R`$ and hence the density and pressure of the solid to hexatic melting transition.
We can draw a few very precise conclusions from our results. Firstly, a solid without dislocations is stable against fluctuations of the amplitude of the solid order parameter and against long wavelength phonons. So any melting transition mediated by phonon or amplitude fluctuations is ruled out in our system. Secondly, the core energy $`E_c>2.7`$ at the transition, so KTHNY perturbation theory is valid, though numerical values of nonuniversal quantities may depend on the order of the perturbation analysis. Thirdly, the solution of the recursion relations shows that a KTHNY transition at $`P_c=9.39`$ preempts the first order transition at $`P_1=9.2`$. Since these transitions, as well as the hexatic-liquid KTHNY transition, lie so close to each other, the effect of, as yet unknown, higher order corrections to the recursion relations may need to be examined in the future. Due to this caveat, our conclusion that a hexatic phase exists over some region of density exceeding $`\rho =.899`$ must still be taken as preliminary. Also, in actual simulations, crossover effects near the bicritical point, where two critical lines corresponding to the liquid-hexatic and hexatic-solid transitions meet a first order liquid-solid line (see e.g. Ref. for a lattice model where such a situation is discussed), may complicate the analysis of the data, which may, in part, explain the confusion which persists in the literature on this subject. In systems with softer potentials, the signature of a KTHNY transition appears to be more pronounced. In the future, we would like to analyze more complicated systems, e.g. laser induced reentrant melting of charge stabilized colloids, and the influence of other defect variables, e.g. grain boundaries, on elastic constants and melting behaviour.
## V Acknowledgement
We are grateful for many illuminating discussions with D. Frenkel, M. Bates, Madan Rao, W. Janke and D. R. Nelson. One of us (S.S.) thanks the Alexander von Humboldt Foundation for a Fellowship. Support by the SFB 513 is gratefully acknowledged. This paper is dedicated to F. J. Wegner on the occasion of his 60th birthday.
| $`\rho `$ | $`N_c`$ | $`P`$ | $`B`$ | $`\mu _{eff}`$ | $`p\times 10^2`$ | $`K/16\pi `$ |
| --- | --- | --- | --- | --- | --- | --- |
| 0.88 | 10<sup>5</sup> | 8.117 | 27.69 | 11.63 | 0.36823 | 0.8550 |
| 0.9 | 10<sup>5</sup> | 8.777 | 32.47 | 13.87 | 0.20358 | 0.9925 |
| 0.905 | 10<sup>5</sup> | 8.957 | 33.67 | 14.46 | 0.17386 | 1.0271 |
| 0.910 | 10<sup>5</sup> | 9.145 | 35.38 | 15.22 | 0.14469 | 1.0744 |
| 0.915 | 10<sup>5</sup> | 9.342 | 37.09 | 15.99 | 0.11706 | 1.1225 |
| 0.920 | 10<sup>5</sup> | 9.545 | 38.48 | 16.88 | 0.09532 | 1.1722 |
| 0.925 | 10<sup>5</sup> | 9.759 | 40.67 | 17.88 | 0.07513 | 1.2337 |
| 0.930 | 10<sup>5</sup> | 9.982 | 42.72 | 18.90 | 0.05967 | 1.2948 |
| 0.935 | 10<sup>5</sup> | 10.217 | 44.69 | 19.91 | 0.04643 | 1.3538 |
| 0.94 | 2$`\times `$10<sup>4</sup> | 10.462 | 46.85 | 21.45 | 0.03432 | 1.4382 |
| 0.95 | 10<sup>5</sup> | 10.996 | 52.14 | 24.10 | 0.01855 | 1.5945 |
| 0.96 | 2$`\times `$10<sup>4</sup> | 11.586 | 59.67 | 27.61 | 0.00901 | 1.8067 |
| 0.97 | 10<sup>5</sup> | 12.251 | 67.45 | 31.59 | 0.00370 | 2.0379 |
| 0.98 | 2$`\times `$10<sup>4</sup> | 13.003 | 79.20 | 36.62 | 0.00137 | 2.3479 |
| 0.99 | 10<sup>5</sup> | 13.862 | 89.98 | 42.60 | 0.00041 | 2.6835 |
| 1. | 49400 | 14.843 | 104.78 | 50.25 | 0.00009 | 3.1206 |
| 1.02 | 10<sup>5</sup> | 17.301 | 148.88 | 69.91 | 0.0 | 4.2854 |
| 1.04 | 10<sup>5</sup> | 20.714 | 212.02 | 102.02 | 0.0 | 6.0857 |
| 1.06 | 10<sup>5</sup> | - | 319.07 | 158.69 | 0.0 | 9.1874 |
| 1.08 | 10<sup>5</sup> | - | 531.24 | 268.02 | 0.0 | 15.1567 |
| 1.1 | 10<sup>5</sup> | - | 1018.49 | 526.94 | 0.0 | 29.0094 |
Table I: Pressure $`P`$, bulk modulus $`B`$, effective shear modulus $`\mu _{eff}`$, ratio of moves rejected due to the zero dislocation density constraint $`p`$, and the (unrenormalized) coupling constant $`K/16\pi `$ as a function of the density $`\rho `$. The total number of configurations used for the averages, $`N_c`$, is also listed. The pressure $`P`$ was obtained by integrating $`B`$ below $`\rho =1.049`$.
# A Parallel Algorithm for Dilated Contour Extraction from Bilevel Images
(January 12, 2000)
## Abstract
We describe a simple but efficient algorithm for the generation of dilated contours from bilevel images. The initial part of the contour extraction is shown to be a good candidate for parallelization. The remainder of the algorithm is of linear nature.
The notion of shape is intimately related to the notion of boundary or contour, in that the contour delineates the shape from its exterior, and conversely the shape is defined in extent by its contour.
In digital image processing contour extraction is important for the purpose of enclosing and characterizing a set of pixels belonging to an object within a given image. In the following, we shall work only with bilevel images, i.e., images that contain only pixels of two spectral values, e.g., black and white pixels. Without loss of generality, we therefore assume that a given image contains only black and white pixels, where the white pixels represent pixels of objects, and black pixels represent pixels of holes and of the universe, respectively. Fig. 1.a shows an example of a bilevel image.
As a first step, we shall construct a set of edges which separates white pixels from black ones. In order to do this, we note that a given white pixel $`i`$ has four corners $`A^i,B^i,C^i,D^i`$ (cf. Fig. 2). From the pixel's corner points one can then construct its four edges $`A^iB^i,B^iC^i,C^iD^i,D^iA^i`$. Let $`A^iB^i\equiv e_1^i`$, $`B^iC^i\equiv e_2^i`$, $`C^iD^i\equiv e_3^i`$ and $`D^iA^i\equiv e_4^i`$, respectively. Then we can always tell, for a given edge $`e_j^i`$ $`(j=1,2,3,4)`$, the side on which the parent pixel is present. Thus, we always know for a given edge on which sides the inside and the outside of the corresponding shape are located.
A pixel $`i`$ can contribute to the set of edges which separates white pixels from black ones with either no, one, two, three or four edges. If two neighboring pixels are white, the common edge they share, i.e., $`e_1^i=e_3^j`$ or $`e_2^i=e_4^j`$ ($`i\ne j`$), will not contribute to our edge list. Hence, our desired edge list will consist only of edges which have a multiplicity of one. For the bilevel image in Fig. 1.a we show the corresponding (but not yet connected) edge set in Fig. 1.b.
We would like to emphasize that all given white pixels can be treated simultaneously when their lower, left, upper, and right edges are examined as candidates for the contour pixel edge set. Thus our technique offers an excellent opportunity for the generation of parallel computer code.
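As an illustration of this data-parallel step (our sketch, not the authors' code), the four edge tests can be written as whole-array comparisons of the image with shifted copies of itself, e.g. in NumPy:

```python
import numpy as np

def boundary_edges(img):
    """img: 2-D array with 1 = white (object), 0 = black. Returns four boolean
    masks marking which bottom/top/right/left pixel edges separate a white
    pixel from a black one; padding treats the outside as black (universe)."""
    p = np.pad(img, 1, constant_values=0).astype(bool)
    w = p[1:-1, 1:-1]                    # the white pixels themselves
    bottom = w & ~p[2:, 1:-1]            # neighbor one row down is black
    top    = w & ~p[:-2, 1:-1]
    right  = w & ~p[1:-1, 2:]
    left   = w & ~p[1:-1, :-2]
    return bottom, top, right, left

img = np.array([[0, 1, 0],
                [1, 1, 1],
                [0, 1, 0]])
print(sum(m.sum() for m in boundary_edges(img)))   # 12 edges of multiplicity one
```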
In the second step, we construct connected loops from the contour edge set which we have generated so far. In doing so, we first enumerate the edges of the contour edge set. Then, a new set is created, which is the set of end points (i.e., their coordinate pairs) of the edges themselves. Every point is listed only once, but for each point we also list which edges (their identities are given through their enumeration) are connected to it. The reader can easily convince himself that there are always either two or four edges connected to a point in the point list.
At this stage, all the knowledge for connecting the edges is available. To be specific, we shall generate contours, which are oriented counterclockwise about the objects they enclose, and which are oriented clockwise when they enclose holes within the objects, respectively.
Initially, all edges of the contour edge set are labeled as "unused". We take the first element of the contour edge set to construct our first contour loop. If the edge is of type $`e_1^i`$ ($`e_2^i`$ ($`e_3^i`$ ($`e_4^i`$))) we note point $`A^i`$ ($`B^i`$ ($`C^i`$ ($`D^i`$))) as our first point of the first loop in order to ensure its correct orientation. We label our first edge as "used" and then look up the end point list to examine to which other edge our current edge is connected at the other point $`B^i`$ ($`C^i`$ ($`D^i`$ ($`A^i`$))) of our used edge. At this point, we have to distinguish between two cases. In the first case, only two edges are connected to point $`B^i`$ ($`C^i`$ ($`D^i`$ ($`A^i`$))), wherein we just consider the as yet unused edge as our next edge. In the second case, four edges are connected to point $`B^i`$ ($`C^i`$ ($`D^i`$ ($`A^i`$))). Then we choose as the next edge the unused edge belonging to the current pixel $`i`$. In doing so, we always ensure a unique choice for building the contours. Furthermore, we weaken the connectivity between pixels that touch each other only in one point. In fact, as we shall see below, our dilated versions of the contours will lead to a total separation between two pixels that only share one common point. After having found our next edge $`j`$, we shall add its first point to our contour point chain list, and then we label that new edge as "used". We reiterate the above described procedure until we encounter our first "used" element of the contour edge set. In order to avoid double accounting, we shall not add the first point of the first edge to the contour point chain again. This, then, concludes the construction of the first contour. While creating the contour, we may count the number of edges which we have used so far and compare it to the total number of edges present in our contour edge set.
In case there are still "unused" edges present in our contour edge set, we shall scroll through our edge list until we find the next "unused" edge and use it as the first edge of our next contour.
We repeat the above described algorithm to create the new contour. This is done until eventually all edges have been used. The result will be a set of point chains, each of which represents a closed contour.
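A compact sketch (ours) of the linking and dilation steps, for the simplified case in which every end point meets exactly two edges; the four-edge case described above would additionally prefer the unused edge belonging to the current pixel:

```python
def link_contours(edges):
    """edges: list of ((x1, y1), (x2, y2)) pairs, directed so that the object
    lies consistently on one side. Assumes every end point meets exactly two
    edges, so each point has a unique successor."""
    nxt = {a: b for a, b in edges}       # first point -> second point
    used, contours = set(), []
    for start in nxt:
        if start in used:
            continue
        chain, pt = [], start
        while pt not in used:            # follow the loop until it closes
            used.add(pt)
            chain.append(pt)
            pt = nxt[pt]
        contours.append(chain)
    return contours

def dilate(chain):
    """Replace each corner point by the midpoint of its outgoing edge."""
    n = len(chain)
    return [((chain[i][0] + chain[(i + 1) % n][0]) / 2,
             (chain[i][1] + chain[(i + 1) % n][1]) / 2) for i in range(n)]
```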
At this point, we may note that the smallest contours consist of only four edge points. The latter result is valid for all isolated white pixels, i.e., those, which have no other white pixels as 4-neighbors.
Fig. 3.a shows the contours for the bilevel image given in Fig. 1.a. We stress that in our set of point chains each point chain is a list of the first points of our pixel edges. If we replace these points by the middle points of the edges without changing their connectivity, we get modified contours, which are a dilation of the centers of the boundary pixels. The dilated contour version derived from the example given in Fig. 3.a is shown in Fig. 3.b. In particular, in places where some of the contours touch themselves due to two white neighboring pixels which are in contact in only one point, we obtain a definite separation of the contour sections.
We would like to mention that another very popular contour extraction algorithm is given in Ref. . That algorithm is, however, very slow compared to ours, since it explores the 8-neighborhood of the image pixels in a sequential raster scan, whereas we use only the 4-neighborhood of the pixels and treat all pixels at the same time. Furthermore, we have convinced ourselves that the algorithm presented in Ref. cannot account for nested regions in the image very easily and reliably.
In summary, we have presented a partially parallel and otherwise linear algorithm to construct dilated contours from bilevel images. We stress that the generated contours are always nondegenerate, i.e., they always enclose an area larger than zero, and they never cross or overlap each other. Furthermore, our contours are oriented with respect to the objects and their possible holes.
## 1 Introduction
To verify whether or not the Higgs mechanism is responsible for spontaneous electroweak symmetry breaking, as expected by the standard model (Sm) or, indeed, the minimal supersymmetric standard model (Mssm), one must perform an experimental reconstruction of the Higgs self-energy potential. This reconstruction requires the measurement of the Higgs boson mass(es) and of the triple and quartic Higgs self-couplings. The triple Higgs self-coupling may become accessible to direct measurement at an electron-positron linear collider (Lc) with centre-of-mass energy at the TeV scale. In this study we compare the signal of the most promising channel with its dominant electroweak (Ew) and Qcd backgrounds to assess the feasibility of its measurement.
We restrict ourselves to a discussion of the triple Higgs self-coupling in the Sm (some of the trilinear couplings of the Mssm may prove to be accessible even at the Lhc via resonant decay of a heavy Higgs, e.g. $`H\to hh`$, and are under investigation elsewhere ), and in particular double Higgs-strahlung off $`Z`$ bosons, in the process $`e^+e^{-}\to HHZ`$. We adopt the $`H\to b\overline{b}`$ decay channel over the Higgs mass range $`M_H\stackrel{<}{_\sim }140`$ GeV and assume very efficient tagging and high-purity sampling of $`b`$ quarks. Then the backgrounds to the $`\lambda _{HHH}`$ measurement are primarily the "irreducible" Ew and Qcd backgrounds $`e^+e^{-}\to b\overline{b}b\overline{b}Z`$.
The Ew background is of $`\mathcal{O}(\alpha _{em}^5)`$ away from resonances, but can, in principle, be problematic due to the presence of both $`Z`$ vectors and Higgs scalars yielding $`b\overline{b}`$ pairs. In contrast, the Qcd background is of $`\mathcal{O}(\alpha _{em}^3\alpha _s^2)`$. Here, although there are no heavy objects decaying to $`b\overline{b}`$ pairs, the production rate itself could give difficulties due to the presence of the strong coupling. In addition, the double Higgs-strahlung process (see Fig. 1) contains diagrams proceeding via an $`HHZ`$ intermediate state but not dependent on $`\lambda _{HHH}`$, as well as a diagram sensitive to the triple Higgs self-coupling (the right-hand graph of Fig. 1).
The plan of the paper is as follows. The next section details the procedure adopted in computing the relevant scattering amplitudes. Section 3 displays our numerical results and contains our discussion. Finally, in the last section, we summarize and conclude.
## 2 The calculation
The signal process is rather straightforward to calculate in the case of on-shell $`HHZ`$ production (see Refs. for analytic expressions of the matrix elements (Mes) and further discussions of related signal processes). The Ew background is more complex, deriving from many graphs with different resonant structures. In order to perform an efficient integration we have grouped the Feynman diagrams into different collections of diagrams with identical (non-)resonant structure. This categorization allows one to compute each of the topologies separately, with the appropriate mapping of variables, thus optimizing the accuracy of the numerical integration. Furthermore, one is able to assess the relative weight of the various subprocesses in the full scattering amplitude, giving added insight into the fundamental dynamics. In contrast the Qcd background contains fewer diagrams, with only five different (non-)resonant topologies. This makes integration much simpler than in the Ew case and it can be carried out with percent accuracy directly over the full Me using standard multichannel Monte Carlo methods. For further details and numerical inputs see Ref. .
We assume that total and differential rates are those at the partonic level, as we identify jets with the partons from which they originate. To resolve the final state $`b`$ (anti)quarks as separate systems, we impose the following acceptance cuts: $`E(b)>10`$ GeV on the energy of each $`b`$ (anti)quark and $`\mathrm{cos}(b,b)<0.95`$ on the relative separation of all possible $`2b`$ combinations. We further assume that $`b`$ jets are distinguishable from light-quark and gluon jets (by using, for example, $`\mu `$-vertex tagging techniques). However, no efficiency to tag the four $`b`$ quarks is included in our results. The $`Z`$ boson is treated as on-shell and no branching ratio is applied to quantify its possible decays. In practice, in order to simplify the treatment of the final state, one may assume that the $`Z`$ boson decays leptonically (that is, $`Z\to \ell ^+\ell ^{-}`$, with $`\ell =e,\mu ,\tau `$) or hadronically into light quark jets (that is, $`Z\to q\overline{q}`$, with $`q\ne b`$). Also, we have not included Initial State Radiation (Isr) in our calculations.
## 3 Results
The total signal cross section is plotted as a function of $`M_H`$ in the top-left frame of Fig. 2, for three centre-of-mass (Cm) energies. Even at low Higgs masses, where both the production and decay rates are largest, the signal is rather small. In fact, the signal is below $`0.2`$ femtobarns for all energies from 500 to 1500 GeV, although this can be doubled simply by polarizing the incoming electron and positron beams. Thus, as already recognized in Ref. , where on-shell production studies of the signal were performed, luminosities of the order of one inverse attobarn need to be collected before statistically significant measurements of $`\lambda _{HHH}`$ can be performed.
The decrease of the signal with increasing Higgs mass is due to the suppression of the $`H\to b\overline{b}`$ decay channel. Above $`M_H\approx 140`$ GeV it becomes overwhelmed by the opening of the off-shell $`H\to W^\pm W^{\mp }`$ decay (see, for example, Fig. 1 of Ref. ). In contrast, the production cross section for $`e^+e^{-}\to HHZ`$ without specifying the subsequent decay is much less sensitive to $`M_H`$ . In addition, because the signal is an annihilation process proportional to $`1/s`$, a larger Cm energy ($`E_{\mathrm{cm}}`$) tends to deplete the production rates, as long as $`E_{\mathrm{cm}}\gg 2M_H+M_Z`$. When this is no longer true, e.g., at 500 GeV and $`M_H\stackrel{>}{_\sim }140`$ GeV, phase space suppression can overturn the $`1/s`$ propagator effects. This is evidenced by the crossing of the curves for 500 and 1000 GeV in Fig. 2.
Fig. 2 also shows the background processes plotted with respect to $`M_H`$ for the three Cm energies. As anticipated, the Ew background is problematic due to its resonance structures, whereas the Qcd background presents no such difficulty and does not significantly obscure the signal, despite the strong coupling constant. Our categorization of the Ew background into different resonant topologies now facilitates a closer examination. In particular we observe that only four sub-processes dominate the background. Generic Feynman diagrams of these sub-processes can be seen in Fig. 3, together with their contribution to the total Ew background rate, as a function of the Higgs boson mass. All other Ew sub-processes are much smaller, rarely exceeding $`10^{-3}`$ femtobarns, and have little effect on the kinematics or magnitude of the background.
The Qcd background is dominated by $`e^+e^{-}\to ZZ`$ production with one of the two $`Z`$ bosons decaying hadronically into four $`b`$ jets. Also significant is single Higgs-strahlung production (off a $`Z`$) with the Higgs scalar subsequently decaying into $`b\overline{b}b\overline{b}`$ via an off-shell gluon. The contributions of the other diagrams, which do not resonate, are typically one order of magnitude smaller than the $`ZZ`$ and $`ZH`$ mediated graphs, with the interferences smaller still (and generally negative).
We now investigate several differential spectra, to find kinematic cuts which will suppress the backgrounds. The distributions in $`E(b)`$ and $`\mathrm{cos}(b,b)`$ cannot be further exploited after the acceptance cuts are made. Instead we consider the invariant masses of $`b`$ (anti)quark systems: for $`2b`$ systems where the $`b`$ jets come from the same production vertex ("right" pairing) or otherwise ("wrong" pairing); for $`3b`$ systems; and for the $`4b`$ system. The spectra for the $`2b`$ and $`4b`$ systems are shown in Fig. 4 (the $`3b`$ spectra are less useful and are not shown here). It is clear that the narrow Higgs peak in the $`M_R(bb)`$ distribution (recall that for $`M_H=110`$ GeV one has $`\mathrm{\Gamma }_H\approx 3`$ MeV; the Higgs resonances in Fig. 4 are smeared by incorporating a $`5`$ GeV bin width) can be exploited to reduce the Qcd background. Although the Ew process also displays a resonance at $`M_H`$, only one $`2b`$ invariant mass will peak there, compared to two combinations in the signal. The Ew background can therefore also be cut away. Finally, one may require that none of the $`2b`$ invariant masses reproduces a $`Z`$ boson, provided that the invariant mass resolution of di-jet systems is at least as good as the difference $`(M_H-M_Z)/2`$, in order to resolve the $`Z`$ and $`H`$ peaks. The $`M(bbbb)`$ spectrum is also useful in distinguishing signal from background, since clearly $`M(bbbb)`$ must always be greater than $`2M_H`$ for the signal process, while it can be lower for both background processes (especially for the Qcd background).
In addition to the different resonance structures of signal and background, one can exploit the dominantly $`t`$-channel nature of the backgrounds as compared to the $`s`$-channel signal. In Fig. 4 we also show the cosine of the polar angle (i.e. with respect to the beam axis) of the four $`b`$ quark system (or, indeed, the real $`Z`$). Notice that the backgrounds are much more forward peaked than the signal. The Qcd events are predominantly $`e^+e^{-}\to ZZ`$ production followed by the decay of one of the gauge bosons into four $`b`$ quarks. The $`ZZ`$ pair is produced via $`t,u`$-channel graphs, so the four $`b`$ quarks are preferentially directed forwards and backwards into the detector. In contrast, the signal is entirely $`s`$-channel, resulting in more centrally produced $`b`$ jets. The EW background has a more complicated structure but is still sizably dominated by forward production. A similar effect is seen for $`\mathrm{cos}(bbb)`$ and $`\mathrm{cos}(bb)`$, allowing one to separate signal and background events efficiently.
The transverse momentum distributions of the 4$`b`$, 3$`b`$ and 2$`b`$ systems were also examined. However the distributions for the signal and the Ew background proved too similar to allow their use as efficient kinematic variables for cuts.
The main features of distributions studied above are rather stable to variation of the Cm energy or Higgs mass (within the ranges under discussion). We can therefore optimize the $`S/B`$ ratio by imposing the cuts:
$$|M(bb)-M_H|<5\mathrm{GeV}(\mathrm{on}\mathrm{exactly}\mathrm{two}\mathrm{combinations}\mathrm{of}2b\mathrm{systems}),$$
$$|M(bb)-M_Z|>5\mathrm{GeV}(\mathrm{for}\mathrm{all}\mathrm{combinations}\mathrm{of}2b\mathrm{systems})$$
$$M(bbbb)>2M_H,|\mathrm{cos}(2b,3b,4b)|<0.75.$$
(1)
In enforcing these constraints, we assume no $`b`$ jet charge determination.
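Purely for illustration (our sketch, not the analysis code used for this study), the cuts of Eq. (1) can be applied to a four $`b`$-quark final state as follows, with four-momenta stored as $`(E,p_x,p_y,p_z)`$ arrays:

```python
import numpy as np
from itertools import combinations

M_H, M_Z = 110.0, 91.19   # GeV; illustrative mass values

def inv_mass(*p4s):
    """Invariant mass of a set of four-momenta (E, px, py, pz)."""
    s = np.sum(p4s, axis=0)
    return np.sqrt(max(s[0]**2 - s[1]**2 - s[2]**2 - s[3]**2, 0.0))

def passes_cuts(bs):
    """bs: list of the four b four-momenta. Implements the cuts of Eq. (1)."""
    pairs = [inv_mass(bs[i], bs[j]) for i, j in combinations(range(4), 2)]
    if sum(abs(m - M_H) < 5.0 for m in pairs) != 2:   # exactly two Higgs-like pairs
        return False
    if any(abs(m - M_Z) < 5.0 for m in pairs):        # veto Z-like 2b pairs
        return False
    if inv_mass(*bs) <= 2.0 * M_H:                    # M(bbbb) > 2 M_H
        return False
    for n in (2, 3, 4):                               # polar-angle cuts
        for combo in combinations(bs, n):
            s = np.sum(combo, axis=0)
            if abs(s[3] / np.linalg.norm(s[1:])) >= 0.75:
                return False
    return True
```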
The signal and background cross-sections after the implementation of the selection cuts can be seen in Fig. 5. Both background rates are greatly reduced while a large portion of the original signal is maintained. This results in a $`S/B`$ ratio which is enormously large for not too heavy Higgs masses. For example, at Centre-of-mass energies $`E_{\mathrm{cm}}=500(1000)[1500]`$ GeV and for $`M_H=110`$ GeV, one finds $`S/B=25(60)[104]`$. This remarkable suppression of the backgrounds comes largely from the invariant mass cuts on $`M_{bb}`$. In fact, they are crucial not only in selecting the $`M_H`$ resonance of the signal, but also in minimizing the signal rejection around $`M_Z`$ when mispairings occur: notice the shoulder at 90 GeV of the $`M_W(bb)`$ signal spectrum.
A number of caveats to our analysis apply. Firstly, the value we have adopted for the resolution is rather high, considering the large uncertainties normally associated with the experimental determination of jet angles and energies, though not unrealistic in view of the most recent studies. The ability of the actual detectors to guarantee the performance foreseen at present is thus crucial for the feasibility of dedicated studies of double Higgs-strahlung events at the LC. Furthermore, one must consider the efficiency of tagging the $`b`$ quarks necessarily present in the final state, particularly in the case in which the $`Z`$ boson decays hadronically. Given the high production rate of six jet events from QCD and multiple gauge boson resonances in light quark and gluon jets, it is desirable to resort to heavy flavour identification in hadronic samples. However, the poor statistics of the $`HHZ`$ signal requires a judicious approach in order not to deplete it below detection level. According to recent studies, the two requirements can be combined successfully, as efficiencies for tagging $`b\overline{b}`$ pairs produced in Higgs decays were computed to be as large as $`\epsilon _{b\overline{b}}\approx 90\%`$, with mis-identification probabilities of light (charmed) quarks as low as $`\epsilon _{q\overline{q}(c\overline{c})}\approx 0.3(4)\%`$ (and negligible for gluons). If such a projection for the LC detectors proves to be true, then even the requirement of tagging exactly four $`b`$ quarks in double Higgs-strahlung events might be statistically feasible, thus suppressing the reducible backgrounds to really marginal levels. One should also bear in mind that experimental considerations, such as the performance of detectors, the fragmentation/hadronization dynamics and a realistic treatment of the $`Z`$ boson decays, are also important when determining what cuts should be made. Such considerations are beyond the scope of this paper, and are under study elsewhere.
Finally, the number of signal and background events seen per inverse attobarn of luminosity at $`E_{\mathrm{cm}}=500`$, $`1000`$, and $`1500`$ GeV, with $`M_H=110`$ GeV, can be seen in Tab. 1. One could relax one or more of the constraints we have adopted to try to improve the signal rates without letting the backgrounds become unmanageably large. However, such a relaxation only marginally affects the signal while significantly reducing the $`S/B`$ ratio, and should only be done if high luminosities cannot be obtained. Kinematic fits can also help in improving the $`S/B`$ ratio.
## 4 Summary
In conclusion, the overwhelming irreducible background from EW and QCD processes of the type $`e^+e^{-}\to b\overline{b}b\overline{b}Z`$ to double Higgs production in association with $`Z`$ bosons and decay in the channel $`H\to b\overline{b}`$, i.e., $`e^+e^{-}\to HHZ\to b\overline{b}b\overline{b}Z`$, should easily be suppressed down to manageable levels by simple kinematic cuts, e.g. in invariant masses and polar angles.
The number of signal events is generally rather low, but will be observable at the LC provided that it delivers very high luminosity, excellent $`b`$ tagging performance, and high di-jet resolution. As advocated in Ref. , one also requires a good forward acceptance for jets, since single jet directions in the double Higgs-strahlung process can stretch up to about $`20^\mathrm{o}`$ in polar angle.
Continued fractions and Catalan problems
## 1 Introduction
A Catalan problem is any enumerative problem that produces the Catalan sequence of numbers or one of its many $`q`$-analogs. Stanley provides a catalog of $`66`$ Catalan problems. Interestingly, many of the generating functions that arise from these problems can be given as a continued fraction with a simple yet elegant form. Two of these generating functions are reproduced below and a third we derive anew. Our intent is to show that the first two continued fractions are special instances of the third with the implication that many others are as well. We begin with the three Catalan problems and their corresponding generating functions.
Problem 1. A $`(132)`$ pattern (respectively, a $`(123)`$ pattern) in a permutation $`\pi `$ of length $`n`$ is a triple $`1\le i<j<k\le n`$ of indices for which $`\pi (i)<\pi (k)<\pi (j)`$ (respectively, $`\pi (i)<\pi (j)<\pi (k)`$). Let $`f_r(n)`$ denote the number of permutations $`\pi `$ of length $`n`$ that have no $`(132)`$ patterns and exactly $`r`$ $`(123)`$ patterns. Recently, Robertson, Wilf and Zeilberger derived the generating function
$$\sum_{r,n\ge 0}f_r(n)z^nq^r=\cfrac{1}{1-\cfrac{z}{1-\cfrac{z}{1-\cfrac{zq}{1-\cfrac{zq^3}{1-\cfrac{zq^6}{\ddots }}}}}}$$
(1)
in which the $`l`$th numerator is $`zq^{\binom{l-1}{2}}`$ ($`l`$ is for *level* in anticipation of Problem 3).
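For small $`n`$ these coefficients can be checked by brute force; the following sketch (our own illustration) enumerates permutations directly and, for $`n=3`$, reproduces the values $`f_0(3)=4`$ and $`f_1(3)=1`$ noted in Section 3 below.

```python
from itertools import permutations, combinations

def f(n):
    """f_r(n): 132-avoiding permutations of length n with exactly r
    increasing (123) patterns, returned as a dict {r: count}."""
    counts = {}
    for p in permutations(range(1, n + 1)):
        triples = list(combinations(range(n), 3))
        if any(p[i] < p[k] < p[j] for i, j, k in triples):
            continue                          # contains a (132) pattern
        r = sum(p[i] < p[j] < p[k] for i, j, k in triples)
        counts[r] = counts.get(r, 0) + 1
    return counts

print(f(3))                  # {1: 1, 0: 4}
print(sum(f(5).values()))    # 42, the fifth Catalan number
```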
Problem 2. The number of lattice paths from $`(0,0)`$ to $`(n,n)`$ with steps $`(1,0)`$ and $`(0,1)`$ that never rise above the line $`y=x`$ is a Catalan number. Let $`P`$ be such a path, $`A(P)`$ the area under the path (and above the $`x`$-axis), and let $`C_n(q)=\sum_Pq^{A(P)}`$. Then a generating function is given by (see Exercise 6.34 in Stanley and replace the $`x`$ therein with $`zq`$)
$$\sum_{n\ge 0}q^{\binom{n+1}{2}}C_n(1/q)z^n=\cfrac{1}{1-\cfrac{zq}{1-\cfrac{zq^2}{1-\cfrac{zq^3}{1-\cfrac{zq^4}{\ddots }}}}}$$
(2)
in which the $`l`$th numerator is $`zq^l`$.
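The area statistic is likewise easy to tabulate directly. A small recursive sketch (ours, purely illustrative):

```python
from collections import Counter

def area_counts(n):
    """Distribution of the area statistic over the C_n lattice paths below y = x."""
    out = Counter()
    def go(e, k, area):              # e east steps and k north steps taken so far
        if e == k == n:
            out[area] += 1
            return
        if e < n:
            go(e + 1, k, area + k)   # an east step at height k adds k to the area
        if k < e:
            go(e, k + 1, area)       # a north step must keep the path below y = x
    go(0, 0, 0)
    return out

print(area_counts(3))   # {0: 1, 1: 1, 2: 2, 3: 1}, i.e. C_3(q) = 1 + q + 2q^2 + q^3
```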
Problem 3. The number of ordered trees (also known as plane trees) on $`n`$ edges is a Catalan number. The *level* of a vertex is the number of edges on the unique path from the root to the vertex. Thus, the root is the unique vertex at level zero and the vertices at level one are adjacent to the root. Let $`T_{l_1,l_2,\ldots }`$ be the number of ordered trees that have $`l_k`$ vertices at level $`k>0`$ and let $`v_k`$, $`k>0`$, be indeterminates. The generating function $`T`$ that enumerates ordered trees by the number of vertices at each level is defined as
$$T(v_1,v_2,\ldots )=\sum_{l_1,l_2,\ldots \ge 0}T_{l_1,l_2,\ldots }v_1^{l_1}v_2^{l_2}\cdots .$$
The first few terms (number of edges $`n\le 3`$) of $`T`$ are
$$T(v_1,v_2,\ldots )=1+v_1+v_1v_2+v_1^2+v_1v_2v_3+2v_1^2v_2+v_1v_2^2+v_1^3+\cdots .$$
That $`T`$ can be written as a continued fraction that subsumes the previous continued fractions is our main result. It is simple, yet has some interesting applications.
###### Theorem 1.
The generating function that enumerates ordered trees by the number of vertices at each level is
$$T(v_1,v_2,\ldots )=\cfrac{1}{1-\cfrac{v_1}{1-\cfrac{v_2}{1-\cfrac{v_3}{\ddots }}}}$$
###### Proof.
We exploit the natural recursive property of ordered trees to obtain a recursion for $`T`$. The recursion immediately leads to the continued fraction. Any ordered tree on more than one vertex can be constructed from a collection of others (the subtrees) by joining the roots of these subtrees to a new vertex. The new vertex becomes the root of the tree under construction. Note that the level of a vertex in a subtree *increases by one* after the new root is inserted. The function $`T(v_1,v_2,\ldots )`$ enumerates the choices for a subtree and each of these choices contributes a factor of $`v_1T(v_2,v_3,\ldots )`$ because of the level changes. The factor of $`v_1`$ is present because the root of the subtree becomes a vertex at level one. Thus, the trees with $`k`$ subtrees (of the root) are enumerated by $`v_1^kT^k(v_2,v_3,\ldots )`$. The generating function satisfies
$$T(v_1,v_2,\ldots )=1+v_1T(v_2,v_3,\ldots )+v_1^2T^2(v_2,v_3,\ldots )+\cdots =\frac{1}{1-v_1T(v_2,v_3,\ldots )}.$$
Iteration of the last functional recursion produces the continued fraction. ∎
An immediate application is obtained by replacing each indeterminate $`v_k`$ with $`z`$; the resulting function, denoted simply by $`T(z)`$, enumerates ordered trees by the number of edges and is
$$T(z)=\cfrac{1}{1-\cfrac{z}{1-\cfrac{z}{1-\cfrac{z}{\ddots }}}}=\frac{1}{1-zT(z)}.$$
The well-known solution of the above generates the Catalan numbers and is $`T(z)=(1-\sqrt{1-4z})/(2z)`$.
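As a quick check, the functional equation $`T(z)=1/(1-zT(z))`$ yields the coefficient recursion $`c_{n+1}=\sum_{i=0}^nc_ic_{n-i}`$, which can be iterated in a few lines (an illustrative sketch):

```python
def catalan(N):
    """Coefficients of T(z) read off from T = 1/(1 - zT): c_{n+1} = sum c_i c_{n-i}."""
    c = [1]
    for n in range(N):
        c.append(sum(c[i] * c[n - i] for i in range(n + 1)))
    return c

print(catalan(6))   # [1, 1, 2, 5, 14, 42, 132]
```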
The more challenging applications are the evaluations needed to produce the continued fractions of permutations (Equation 1 of Problem 1) and lattice paths (Equation 2 of Problem 2). Both applications require that we map their respective problems to an ordered-tree problem. These mappings are of interest in their own right and we explore them now. We begin with the lattice path problem because it is simpler and the mapping is already known.
## 2 Lattice paths and ordered trees
We draw our ordered trees with the root at the top and proceed downward to the leaves. The first leaf is the leftmost leaf in the drawing and the remaining leaves are referred to by their positions in a left-to-right order. A preorder (depth-first) traversal of the ordered tree provides a well-known correspondence with a lattice path. When an edge of the tree is traversed downward away from the root we take a $`(1,0)`$ (east) step in the lattice, otherwise we take a $`(0,1)`$ (north) step. In this manner, an ordered tree with $`n`$ edges corresponds to a unique lattice path from $`(0,0)`$ to $`(n,n)`$.
If the path $`P`$ corresponds to the tree $`T`$, then we need to determine what statistic of the tree corresponds to the area $`A(P)`$ under the path. We let $`A(T)=A(P)`$ be this statistic of the tree and claim that it depends only on the vertex levels.
###### Lemma 2.
If $`T`$ is an ordered tree on $`n`$ edges, then
$$A(T)=\binom{n+1}{2}-\sum_{\mathrm{vertices}\ v}\mathrm{level}(v),$$
where $`\mathrm{level}(v)`$ is the level of vertex $`v`$.
###### Proof.
Let $`w`$ be the rightmost leaf of the tree $`T`$. Our immediate interest is to calculate the area under the east step that arises in the lattice path by traversing the last edge to this leaf downward away from the root. This area is equal to the height that the east step has in the lattice path and is equal to the number of north steps that have occurred prior to the east step. There are a total of $`n`$ north steps in the lattice path (one for each edge of the tree) and $`\mathrm{level}(w)`$ remaining north steps after the east step. Hence, the east step is at height $`n-\mathrm{level}(w)`$ in the lattice. If we now delete the leaf $`w`$ from the tree $`T`$, then the resulting tree has a lattice path with area $`n-\mathrm{level}(w)`$ less than that of $`T`$. A formal inductive argument on the number of edges provides the result. ∎
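Lemma 2 can be tested mechanically. In the sketch below (ours; the nested-list encoding of ordered trees is a convention of the illustration), a tree is a list of child subtrees and the preorder traversal generates the lattice path:

```python
from math import comb

def edges(t):                    # an ordered tree as nested lists of child subtrees
    return sum(1 + edges(c) for c in t)

def levels(t, d=1):
    out = []
    for c in t:
        out.append(d)
        out.extend(levels(c, d + 1))
    return out

def path_area(t):
    """Area of the preorder lattice path: each east (downward) step contributes
    the number of north (upward) steps taken before it."""
    area, norths = 0, 0
    def walk(children):
        nonlocal area, norths
        for c in children:
            area += norths       # east step at height `norths`
            walk(c)
            norths += 1          # coming back up is a north step
    walk(t)
    return area

for t in [[[[[]]]], [[[], []]], [[[]], []], [[], [[]]], [[], [], []]]:
    n = edges(t)
    assert path_area(t) == comb(n + 1, 2) - sum(levels(t))
```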
We use the lemma to prove the following continued-fraction result. The result is the same as that given in Equation 2.
###### Theorem 3.
If $`C_n(q)=\sum_Tq^{A(T)}`$ enumerates the set of ordered trees on $`n`$ edges by the area under their corresponding lattice paths, then
$$\sum_{n\ge 0}q^{\binom{n+1}{2}}C_n(1/q)z^n=\cfrac{1}{1-\cfrac{zq}{1-\cfrac{zq^2}{1-\cfrac{zq^3}{1-\cfrac{zq^4}{\ddots }}}}}$$
in which the $`l`$th numerator is $`zq^l`$.
###### Proof.
Let $`T`$ be an ordered tree on $`n`$ edges and assign to every nonroot vertex at level $`l>0`$ the value $`zq^l`$. The product of all these values is then $`z^nq^{\sum_v\mathrm{level}(v)}`$. Summing over all ordered trees on $`n`$ edges we have by the lemma
$$z^n\sum_Tq^{\sum_v\mathrm{level}(v)}=z^n\sum_Tq^{\binom{n+1}{2}-A(T)}=z^nq^{\binom{n+1}{2}}C_n(1/q).$$
The sum of these over all $`n\ge 0`$ then enumerates ordered trees and the generating function is given by the continued fraction of Theorem 1 with $`v_l=zq^l`$. ∎
## 3 Permutations and ordered trees
The previous problem used an existing bijection between the set of ordered trees and the set of lattice paths to get the desired result. We seek a similar approach for ordered trees and permutations. There are many ways to map a permutation onto a tree (often an unordered tree) but none of these serve our needs. The mapping we introduce appears to be new.
Let $`T`$ be an ordered tree on $`n`$ edges. We use a preorder traversal of $`T`$ to label the nonroot vertices in decreasing order with the integers $`n,n-1,\ldots ,1`$. Thus, the first vertex visited gets the label $`n`$ and the last receives $`1`$. We now construct a permutation written as a word by *reading* the labeled tree in *postorder*. We again traverse the tree from left to right and record the label of a vertex when we last visit it. In Catalan fashion, the five ordered trees and their corresponding permutations are shown in Figure 1. Note that the only permutation missing from those of length three is $`132`$. The (132) pattern has been avoided. Also note that there is exactly one permutation with a (123) pattern (the first permutation shown). Thus, recalling the definition of $`f_r(n)`$ given in the introduction we have $`f_0(3)=4`$, $`f_1(3)=1`$, and $`f_r(3)=0,r>1`$. We generalize these observations after introducing some useful notation.
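The construction is compact in code. The following sketch is our own illustration, using the same nested-list tree encoding as the sketch after Lemma 2; for the five trees on three edges it produces every permutation of length three except $`132`$, consistent with Figure 1:

```python
def edges(t):
    return sum(1 + edges(c) for c in t)

def pi_of_tree(tree):
    """Label nonroot vertices n, n-1, ..., 1 in preorder; read labels in postorder."""
    nxt = [edges(tree)]
    word = []
    def visit(children):
        for c in children:
            label = nxt[0]
            nxt[0] -= 1          # preorder: labels decrease as vertices are reached
            visit(c)
            word.append(label)   # postorder: record the label on the last visit
    visit(tree)
    return tuple(word)

trees3 = [[[[[]]]], [[[], []]], [[[]], []], [[], [[]]], [[], [], []]]
print([pi_of_tree(t) for t in trees3])
# [(1, 2, 3), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)]
# -- every permutation of length three except (1, 3, 2)
```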
If $`T`$ is an ordered tree on $`n`$ edges, then we let $`\pi (T)`$ be its corresponding permutation written as a word on the numbers $`1,2,\ldots ,n`$. We let $`\pi (T,k)`$ be the same permutation except we use the corresponding numbers $`1+k,2+k,\ldots ,n+k`$. For example, the first tree shown in the figure has $`\pi (T)=\pi (T,0)=123`$ and $`\pi (T,1)=234`$. For emphasis, we denote the concatenation of two words $`\pi `$ and $`\pi ^{\prime }`$ as $`\pi \ast \pi ^{\prime }`$ (instead of the usual $`\pi \pi ^{\prime }`$). The following lemma describes how the permutation of a tree can be constructed from those of its subtrees.
###### Lemma 4.
Let $`T`$ be an ordered tree on $`n`$ edges with subtrees $`T_1,T_2,\ldots ,T_s`$ on $`n_1,n_2,\ldots ,n_s`$ edges, respectively. Let $`N_0=n`$ and $`N_k=N_{k-1}-n_k-1`$, $`k=1,2,\ldots ,s`$, then
$$\pi (T)=\pi (T_1,N_1)\ast N_0\ast \pi (T_2,N_2)\ast N_1\ast \cdots \ast \pi (T_s,N_s)\ast N_{s-1}.$$
###### Proof.
Note that $`N_0=n`$ is the total number of vertices to be labeled, that $`N_1`$ is the total number of vertices to be labeled after those of the subtree $`T_1`$, and so on. Since the vertices of $`T`$ are labeled in decreasing order using a preorder traversal, $`i<j`$ implies that all vertices of $`T_i`$ receive labels greater than those of $`T_j`$. Since $`\pi (T)`$ is constructed by reading these labels in postorder, $`i<j`$ implies that the vertex labels in $`T_i`$ appear in $`\pi (T)`$ before any of those in $`T_j`$. Thus, $`\pi (T)`$ begins as $`\pi (T_1)`$ with each number incremented by $`N_1`$, i.e., as $`\pi (T_1,N_1)`$. The root of this subtree is the first vertex visited in the preorder traversal and receives the label $`n=N_0`$. It is read last among the vertices of $`T_1`$ in postorder, however, so that $`\pi (T)`$ begins as $`\pi (T_1,N_1)\ast N_0`$. The general case follows similarly. ∎
Note that the lemma also provides the means to prove that the mapping $`T\mapsto \pi (T)`$ is injective. That it is onto the set of all $`(132)`$-pattern avoiding permutations is proved in the next theorem. Before proceeding to this theorem we present another lemma which enables us to count $`(123)`$ patterns and their generalization. An *increasing* pattern of *length* $`k`$, $`k>0`$, in a permutation $`\pi `$ of length $`n`$ is a $`k`$-tuple $`1\le i_1<i_2<\cdots <i_k\le n`$ of indices for which $`\pi (i_1)<\pi (i_2)<\cdots <\pi (i_k)`$.
###### Lemma 5.
Let $`T`$ be an ordered tree on $`n`$ edges and $`V_k`$ a subset of $`k`$ vertices, $`0<k\le n`$, none of which are the root, then the labels of these vertices provide an increasing pattern of length $`k`$ in $`\pi (T)`$ if and only if they lie along a path from the root to some leaf.
###### Proof.
We induct on $`n`$. If $`n=1`$, then $`k=1`$ and the lemma is obvious. In fact, this is the case for all $`n`$ whenever $`k=1`$. We assume that the lemma is true for any ordered tree on $`n`$ or fewer edges, $`n>0`$, and let $`T`$ be a tree on $`n+1`$ edges. Suppose that $`\pi (T)`$ contains an increasing pattern of length $`k>1`$. We let $`v_i`$ be the vertex in $`T`$ that provides the $`i`$ in such a pattern, $`i=1,2,\ldots ,k`$. That $`v_k`$ must receive a larger label than $`v_1`$ implies that the subtree containing $`v_k`$ precedes or is the same as that of $`v_1`$ (see the proof of Lemma 4). However, that the label of $`v_k`$ must be read after that of $`v_1`$ implies that its subtree must follow or be the same as that of $`v_1`$. The conclusion is that they are in the same subtree. A similar argument applies to $`v_i`$ and $`v_1`$, $`i>1`$, so that all the vertices must be in the same subtree, say $`T_i`$. Thus, the increasing pattern of length $`k`$ lies entirely within $`\pi (T_i,N_i)\ast N_{i-1}`$.
Recall that it is the root of the subtree that receives the label $`N_{i-1}`$. If the root of the subtree is one of the vertices providing the increasing pattern, then it must be $`v_k`$. We must consider two cases depending on whether the subtree root is $`v_k`$ or not.
If it is not, then the pattern lies entirely within $`\pi (T_i,N_i)`$ and corresponds uniquely to an increasing pattern of length $`k`$ in $`\pi (T_i)`$. By the inductive hypothesis, the vertices $`v_1,v_2,\ldots ,v_k`$ must lie along some path from the root of the subtree to a leaf. Necessarily, this is also a path from the root of $`T`$ to a leaf as required by this lemma.
If the root of the subtree is $`v_k`$, then $`v_1,v_2,\ldots ,v_{k-1}`$ provide the labels for an increasing pattern of length $`k-1`$ in $`\pi (T_i,N_i)`$. This pattern corresponds to a unique increasing pattern of length $`k-1`$ in $`\pi (T_i)`$ and again by hypothesis the vertices $`v_1,v_2,\ldots ,v_{k-1}`$ must lie along a path from the root of the subtree to some leaf of the subtree. This path together with the root of the subtree provides the needed path of this lemma.
Conversely, if $`V_k=\{v_1,v_2,\ldots ,v_k\}`$ is a subset of nonroot vertices of $`T`$, where we may assume the label of $`v_i`$ is less than that of $`v_j`$ whenever $`i<j`$, and the vertices lie along a path from the root to a leaf, then $`v_i`$ lies below $`v_j`$ on this path whenever $`i<j`$. Thus, when the labels are read in postorder, the label of $`v_i`$ is read prior to that of $`v_j`$, $`i<j`$. The vertices then provide an increasing pattern of length $`k`$ in $`\pi (T)`$. ∎
The two previous lemmas enable us to prove the following interesting combinatorial theorem. Its corollary establishes a continued fraction as the generating function of $`(132)`$-avoiding permutations by number of increasing patterns of length $`k`$.
###### Theorem 6.
A permutation $`\pi `$ avoids the $`(132)`$ pattern if and only if $`\pi =\pi (T)`$ for some tree $`T`$. If this is the case, then the number of increasing patterns of length $`k`$ depends only on the levels of the vertices in the tree and is given by $`\sum_v\binom{\mathrm{level}(v)-1}{k-1}`$.
###### Proof.
Suppose that $`\pi (T)`$ contains a $`(132)`$ pattern and that $`T`$ is among the smallest such trees. We let $`v_3`$ be a vertex in $`T`$ that provides the $`3`$ in such a pattern and let $`v_1,v_2`$ be the vertices that provide the corresponding $`1`$ and $`2`$, respectively. That $`v_3`$ must receive a larger label than $`v_1`$ implies that the subtree containing $`v_3`$ precedes or is the same as that of $`v_1`$ (see the proof of Lemma 4). However, that the label of $`v_3`$ must be read after that of $`v_1`$ implies that its subtree must follow or be the same as that of $`v_1`$. The conclusion is that they are in the same subtree. A similar argument applies to $`v_2`$ and $`v_1`$ so that all three vertices must be in the same subtree, say $`T_i`$.
Also note that none of them can be the root of the subtree, since the root receives the largest label among the vertices of the tree and, hence, cannot be $`v_1`$ or $`v_2`$. That its label appears later in $`\pi (T)`$ than the others in the subtree implies that the root cannot be $`v_3`$. Thus, the $`(132)`$ pattern lies entirely within $`\pi (T_i,N_i)`$ which implies that $`\pi (T_i)`$ itself must have a $`(132)`$ pattern, contradicting our choice of $`T`$. Since the number of $`(132)`$-pattern avoiding permutations of length $`n`$ and the number of ordered trees on $`n`$ edges are the same Catalan number, the mapping $`T\mapsto \pi (T)`$ is a bijection between these sets. It is an instructive exercise to construct $`T`$ from a $`(132)`$-pattern avoiding permutation.
It remains to determine the number of increasing patterns of length $`k`$ in $`\pi (T)`$. As a result of Lemma 5 it is only necessary to count the number of vertex subsets of size $`k`$, none of which are the root, such that the vertices lie along a path from the root to a leaf. We claim this number is $`\sum_v\binom{\mathrm{level}(v)-1}{k-1}`$ as stated. To see this, let $`v`$ be any vertex of $`T`$ and choose $`v_k=v`$. There are $`\mathrm{level}(v)-1`$ nonroot vertices other than $`v`$ along the unique path from the root to $`v`$. From these we select any $`k-1`$ of them, which together with $`v`$, form the required subset. It is clear that every subset with the required properties arises this way and we are done. ∎
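Theorem 6 can also be verified mechanically. The sketch below (ours) reuses `pi_of_tree` and `trees3` from the sketch in Section 3 and checks the pattern-count formula for $`k=1,2,3`$:

```python
from itertools import combinations
from math import comb

def levels(t, d=1):
    out = []
    for c in t:
        out.append(d)
        out.extend(levels(c, d + 1))
    return out

def incr_patterns(word, k):
    """Number of increasing patterns of length k in the word."""
    return sum(all(word[a] < word[b] for a, b in zip(idx, idx[1:]))
               for idx in combinations(range(len(word)), k))

for t in trees3:                 # trees3, pi_of_tree: see the earlier sketch
    w = pi_of_tree(t)
    for k in (1, 2, 3):
        assert incr_patterns(w, k) == sum(comb(l - 1, k - 1) for l in levels(t))
```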
We now use the theorem to write a generating function as a continued fraction. We let $`f_r^{(k)}(n)`$ denote the number of permutations of length $`n`$ that have no $`(132)`$ pattern and exactly $`r`$ increasing patterns of length $`k`$. The case $`k=3`$ is that considered by Robertson, Wilf and Zeilberger.
###### Corollary 7.
The generating function that enumerates $`(132)`$-pattern avoiding permutations of length $`n`$ by number of increasing patterns of length $`k`$ is
$$\sum_{r,n\ge 0}f_r^{(k)}(n)z^nq^r=\cfrac{1}{1-\cfrac{N_1}{1-\cfrac{N_2}{1-\cfrac{N_3}{1-\cfrac{N_4}{\ddots }}}}}$$
(3)
in which the $`l`$th numerator $`N_l`$ is $`zq^{\binom{l-1}{k-1}}`$.
###### Proof.
Let $`T`$ be an ordered tree on $`n`$ edges and assign to every nonroot vertex at level $`l>0`$ the value $`zq^{\binom{l-1}{k-1}}`$. The product of all these values is then $`z^nq^{\sum_v\binom{\mathrm{level}(v)-1}{k-1}}`$ and the result follows from the previous theorem and Theorem 1. ∎
janim@wpunj.edu
jrieper@cybernex.net
SUPERCONDUCTIVITY IN A MESOSCOPIC DOUBLE SQUARE LOOP: EFFECT OF IMPERFECTIONS
## Abstract
We have generalized the network approach to include the effects of short-range imperfections in order to analyze recent experiments on mesoscopic superconducting double loops. The presence of weakly scattering imperfections causes gaps in the phase boundary $`B(T)`$ or $`\mathrm{\Phi }(T)`$ for certain intervals of $`T`$, which depend on the magnetic flux penetrating each loop. This is accompanied by a critical temperature $`T_c(\mathrm{\Phi })`$ that shows a smooth transition between symmetric and antisymmetric states. When the scattering strength of imperfections increases beyond a certain limit, gaps in the phase boundary $`T_c(B)`$ or $`T_c(\mathrm{\Phi })`$ appear for values of magnetic flux lying in intervals around half-integer multiples of $`\mathrm{\Phi }_0=hc/2e`$. The critical temperature corresponding to these values of magnetic flux is determined mainly by imperfections in the central branch. The calculated phase boundary is in good agreement with experiment.
Early experiments have revealed that the effect of nonmagnetic impurities on the transition temperatures of bulk superconductors is very small. The critical temperature, $`T_c`$, changes by about 1% for 1% concentration of impurities. An interpretation of these observations was first proposed by Anderson. The only effect of the impurities is to change the energy of a free electron to eigenenergies determined by those impurities. This modifies the density of states in the integral equation for $`T_c`$. Therefore, scattering by non-magnetic impurities only slightly changes $`T_c`$ in bulk superconductors. Investigations extending Anderson's work have been performed for multiband superconductors using the Abrikosov-Gor'kov approach. An interesting opportunity to intensify the effect of the imperfections occurs in mesoscopic superconducting structures, where the confined condensate is much more sensitive to the action of impurities than in bulk structures.
Recently, the onset of the superconducting state has been studied in different mesoscopic structures of Al, comprising lines, dots, loops, double loops, microladders etc., with sizes smaller than the coherence length $`\xi (T)`$. Refs. show that experimentally observed phase boundaries for square loops with two attached leads are in excellent agreement with calculations based on Ginzburg-Landau (GL) theory.
In the present letter, we analyze the case of a superconducting mesoscopic double square loop (see the inset to Fig. 1a) where experiments have revealed the phase boundary shown in Fig. 1a. Using the micronet approach for a superconducting double loop, we obtain a phase boundary (cf. Ref. ) which consists of the intersecting parabolas shown in Fig. 1b. One set (with minima at integer values of the magnetic flux $`\mathrm{\Phi }`$ through one loop in units of the magnetic flux quantum $`\mathrm{\Phi }_0\equiv hc/2e`$: $`\mathrm{\Phi }/\mathrm{\Phi }_0`$) depends on the magnetic flux quanta penetrating each loop with $`L=0,1,2,\ldots `$. The other set (with minima at half-integer values of $`\mathrm{\Phi }/\mathrm{\Phi }_0`$) depends on odd numbers of magnetic flux quanta penetrating the double loop as a whole with $`L=1/2,3/2,\ldots `$. Since the critical temperature corresponds to the lowest Landau level $`E_{LLL}(\mathrm{\Phi })`$, the way the parabolas intersect means that the minimum energy encounters a shift from one branch to another at certain values of magnetic flux. Moreover, since the derivative of the lowest Landau level with respect to magnetic flux is proportional to the persistent current (cf. Ref. Eq. (4.5)) we arrive at the following paradox: At the intersections, the left and right derivatives of $`E_{LLL}(\mathrm{\Phi })`$ are different, the persistent current has a discontinuity and its value is consequently indefinite. In order to resolve this paradox, an analogy between superconducting loops and semiconducting quantum rings can be exploited. In such rings, in the presence of impurities and an Aharonov-Bohm magnetic field, a crossing is known to change into an anti-crossing. In this case, the gaps between the different eigenenergies as a function of the magnetic flux widen and hence the degeneracy of states, which contribute to the lowest level, is raised (see Ref. ). In this letter we demonstrate that this also applies to the superconducting mesoscopic double square loop.
The observed phase boundary for a superconducting mesoscopic double square Al loop is given in Fig. 1a. After subtraction of the parabolic background, related to the finite width, $`w=130`$ nm, of the stripes (dashed line in Fig. 1a), at least 12 practically identical periods are seen. One such period is plotted in Fig. 1b (dots). From the period of the oscillations of the phase boundary with respect to the magnetic field, $`\mathrm{\Delta }B=1.24`$ mT, an effective loop side length $`Q=1.3`$ $`\mu `$m is obtained. This is close to the average loop size. For further experimental details, the reader is referred to Ref. . In order to understand the observed "anti-crossing" of the different elements of the experimentally observed phase boundary and, in particular, the smooth shape of the $`T_c(\mathrm{\Phi })`$ minima, we consider a superconducting mesoscopic double square loop with imperfections. These imperfections may be introduced during the fabrication of the mesoscopic structures. Most probably, one of the sources of such imperfections is the inhomogeneity of the superconducting lines written by e-beam lithography.
Within the framework of the GL approach, the presence of imperfections in a superconducting structure may be modelled by a spatial inhomogeneity in the parameters $`a`$ and $`b`$ in the GL equation:
$`\frac{1}{2m}\left(-i\hbar \nabla -\frac{2e}{c}\mathbf{A}(\mathbf{r})\right)^2\psi (\mathbf{r})+a(\mathbf{r})\psi (\mathbf{r})+b(\mathbf{r})|\psi (\mathbf{r})|^2\psi (\mathbf{r})=0.`$ (1)
Near the phase boundary, where the order parameter $`\psi (\mathbf{r})`$ is small, the system is adequately described by the linearized GL equation:
$`\frac{1}{2m}\left(-i\hbar \nabla -\frac{2e}{c}\mathbf{A}(\mathbf{r})\right)^2\psi (\mathbf{r})+a(\mathbf{r})\psi (\mathbf{r})=0.`$ (2)
Moreover, the magnetic field may be assumed to be equal to the applied magnetic field. The vector potential $`\mathbf{A}`$ of the uniform field $`\mathbf{B}\parallel \mathbf{e}_z`$ is taken in the symmetric gauge.
The presence of imperfections, localized in the loop around several points $`\mathbf{r}_s`$ over a distance which is much smaller than the coherence length or the typical loop size ("short-range imperfections"), can be modelled by the following function:
$`a(\mathbf{r})=a+\sum_sV_s\delta (\mathbf{r}-\mathbf{r}_s),`$ (3)
where $`a`$ is the GL coefficient of the substance, and the magnitudes $`V_s`$ are determined by specific characteristics of the imperfections. Eq. (2) then becomes similar to the Schrödinger equation for a particle of mass $`m`$ and charge $`2e`$ in the potential field described by the scalar form $`\sum_sV_s\delta (\mathbf{r}-\mathbf{r}_s)`$ and by the vector potential $`\mathbf{A}(\mathbf{r})`$, the quantity $`a`$ playing the role of the energy.
Short-range imperfections are assumed to be present in all three branches of the loop at the points characterized by the coordinates $`Q_s`$ with $`s=L,M,R`$ as shown in Fig. 2. Furthermore, taking $`\xi (T)\equiv \xi _0/\sqrt{1-T/T_c}`$ as a unit of length, we obtain the linearized GL equation in terms of dimensionless coordinates:
$`\left\{\left(-i\partial _x+\frac{2\pi }{\mathrm{\Phi }_0}A_x\right)^2+\left(-i\partial _y+\frac{2\pi }{\mathrm{\Phi }_0}A_y\right)^2+\sum_{s=L,M,R}\stackrel{~}{V}_s\delta (y-Q_s)-1\right\}\mathrm{\Psi }=0.`$ (4)
It should be noted that the dimensionless scattering magnitude $`\stackrel{~}{V}_s=2mV_s\xi (T)/\hbar ^2=C_s/\sqrt{1-T/T_c}`$ is temperature dependent.
We have solved Eq. (4) following the micronet approach. A new conceptual feature compared with Refs. is the use of additional nodal points at the positions of the imperfections. The presence of these imperfections implies an additional condition, which can be derived in the following way. In the vicinity of the point with $`y=Q_s`$, Eq. (4) takes the form
$$\left(-i\partial _y+\frac{2\pi }{\mathrm{\Phi }_0}A_y\right)^2\mathrm{\Psi }-\mathrm{\Psi }=-\stackrel{~}{V}_s\delta (y-Q_s)\mathrm{\Psi }.$$
(5)
We integrate both sides of this equation over $`y`$ from $`Q_s-\epsilon `$ to $`Q_s+\epsilon `$, $`\epsilon \to +0`$. Taking into account continuity of the order parameter, $`\mathrm{\Psi }(Q_s+\epsilon )=\mathrm{\Psi }(Q_s-\epsilon )`$, we obtain the additional constraint
$$\frac{d\mathrm{\Psi }}{dy}\Big|_{y=Q_s+\epsilon }-\frac{d\mathrm{\Psi }}{dy}\Big|_{y=Q_s-\epsilon }=\stackrel{~}{V}_s\mathrm{\Psi }(y=Q_s)$$
(6)
for the derivatives to the left and to the right of the new nodal point $`y=Q_s`$. Applying Eq. (6) to a single loop of length $`\mathcal{L}`$ (in units of $`\xi (T)`$) with one imperfection leads to the following secular equation
$`\mathrm{cos}(\phi )=\mathrm{cos}(\mathcal{L})+\frac{1}{2}\stackrel{~}{V}_s\mathrm{sin}(\mathcal{L}),`$ (7)
where $`\phi =2\pi \mathrm{\Phi }/\mathrm{\Phi }_0`$.
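For orientation, Eq. (7) is easy to solve numerically for the single-loop phase boundary. In the sketch below (ours; the parameter values are arbitrary illustrative choices), $`t=1-T/T_c`$, so that $`\mathcal{L}\propto \sqrt{t}`$ while $`\stackrel{~}{V}_s\propto 1/\sqrt{t}`$:

```python
import numpy as np
from scipy.optimize import brentq

L_XI0 = 10.0    # loop circumference in units of xi_0 (value assumed)
C_S   = 0.5     # dimensionless scattering strength C_s (value assumed)

def secular(t, phi):
    """Eq. (7) with calL = L_XI0*sqrt(t) and V = C_S/sqrt(t), t = 1 - T/T_c."""
    calL = L_XI0 * np.sqrt(t)
    return np.cos(phi) - np.cos(calL) - 0.5 * (C_S / np.sqrt(t)) * np.sin(calL)

def boundary_point(phi, tmax=1.0, n=20000):
    """Smallest root t > 0, i.e. the highest temperature on the phase boundary."""
    ts = np.linspace(1e-8, tmax, n)
    fs = secular(ts, phi)
    for a, b, fa, fb in zip(ts[:-1], ts[1:], fs[:-1], fs[1:]):
        if fa * fb <= 0:
            return brentq(secular, a, b, args=(phi,))
    return None

phis = np.linspace(0.0, 2.0 * np.pi, 121)
tc_curve = [boundary_point(p) for p in phis]   # 1 - T_c(Phi)/T_c versus flux
```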
Analyzing the onset of superconductivity in a superconducting ring with a lateral arm of length $`\ell `$, de Gennes derived an equation which differs from Eq. (7) by the substitution $`\stackrel{~}{V}_s=-\mathrm{tan}(\ell )/2`$. Thus, from the point of view of our present analysis, a lateral arm may be thought of as a kind of imperfection in a ring.
After some lengthy, but straightforward algebra, we obtain the secular equation which determines the phase boundary:
$`\mathrm{cos}^2\phi +\mathrm{cos}\phi \,\frac{D_L+D_R}{2D_M}-\frac{D_LD_R}{4}\left[\prod_{i=1}^{2}\left(\frac{N_{iM}}{D_M}+\frac{N_{iL}}{D_L}+\frac{N_{iR}}{D_R}\right)-\frac{1}{D_M^2}-\frac{1}{D_L^2}-\frac{1}{D_R^2}\right]-\frac{1}{2}=0`$ (8)
with
$`N_{1s}=\mathrm{cos}3Q+\stackrel{~}{V}_s\mathrm{sin}(2Q-Q_s)\mathrm{cos}(Q+Q_s),\qquad N_{2s}=\mathrm{cos}3Q+\stackrel{~}{V}_s\mathrm{cos}(2Q-Q_s)\mathrm{sin}(Q+Q_s),`$ (9)
$`D_s=\mathrm{sin}3Q+\stackrel{~}{V}_s\mathrm{sin}(2Q-Q_s)\mathrm{sin}(Q+Q_s);`$ (10)
$`N_{1M}=\mathrm{cos}Q+\stackrel{~}{V}_M\mathrm{sin}(Q-Q_M)\mathrm{cos}(Q_M),\qquad N_{2M}=\mathrm{cos}Q+\stackrel{~}{V}_M\mathrm{cos}(Q-Q_M)\mathrm{sin}(Q+Q_M),`$ (11)
$`D_M=\mathrm{sin}Q+\stackrel{~}{V}_M\mathrm{sin}(2Q-Q_M)\mathrm{sin}(Q+Q_M),`$ (12)
where $`s=L,R`$.
In Eq. (8), $`\phi =2\pi \mathrm{\Phi }/\mathrm{\Phi }_0`$, with $`\mathrm{\Phi }`$ the magnetic flux through each of the loops. Here, we recall that lengths in the above equations are expressed in units of $`\xi (T)`$ and are therefore functions of the temperature. Consequently, the secular equation establishes a relation between the magnetic flux $`\mathrm{\Phi }`$ and the temperature $`T`$.
The phase boundaries obtained by solving Eq. (8), with an imperfection in only one branch of the double loop, are shown in Figs. 3 to 5 ($`L`$ in Figs. 3 and 4, $`M`$ in Fig. 5).
As is seen in Fig. 3a, for small values of $`C_s`$, an "energy" gap forms between solutions corresponding to different numbers of magnetic flux quanta penetrating each loop ($`L=0,1/2,1,\ldots `$). The resultant phase boundary indicates that a continuous change takes place from a symmetric superconducting order parameter at integer values of $`\mathrm{\Phi }/\mathrm{\Phi }_0`$ to an antisymmetric state at half-integer values of the relative magnetic flux. It is also worthy of note that the presence of imperfections slightly diminishes the critical temperature of the double loop at zero magnetic field.
When increasing $`C_s`$ above a certain limit, the pattern of phase boundaries changes dramatically. Gaps appear in certain flux intervals ("flux gaps") around half-integer $`\mathrm{\Phi }/\mathrm{\Phi }_0`$ values. This behavior is illustrated by the curves in Fig. 3b. For a given $`T<T_c`$, a superconducting state exists when $`\mathrm{\Phi }`$ ranges from zero to a certain value of $`\mathrm{\Phi }_1`$, after which the sample turns into the normal state. With a further increase of magnetic flux along a horizontal straight line shown in Fig. 3b, the sample remains in the normal state until a value of $`\mathrm{\Phi }_2`$ is reached, at which the sample becomes again superconducting. This demonstrates a re-entrant behavior as a function of field. [When approaching in Fig. 3b the points where the phase boundary $`\mathrm{\Phi }(T)`$ has its extrema and, consequently, the derivatives $`\partial T/\partial \mathrm{\Phi }`$ would diverge, the superconducting state apparently becomes unstable.] It should be noted that the existence of such a regime, where the system is normal in a certain flux interval, was reported by de Gennes for a single superconducting ring with a lateral arm.
The trend of lowering of the critical temperature of a double loop at zero magnetic field with increasing $`C_s`$ is clearly seen by comparison of the curves in Fig. 3b which refer to different values of $`V_L`$.
In regions adjacent to integer values of $`\mathrm{\Phi }/\mathrm{\Phi }_0`$, a good agreement with experiment of the calculated phase boundary $`T_c(\mathrm{\Phi })`$ is achieved for the zero-temperature coherence length $`\xi _0`$ = 128 nm. This value is in accordance with previous estimates (see Ref. ). The resulting set of phase boundaries is shown in Fig. 4a. For comparison with the experimental data, it is necessary to renormalize the temperature scale, taking as a unit temperature the specific critical value of a loop with imperfections (see Fig. 4b).
In Fig. 5, a plot of the phase boundary is shown for a double loop with an imperfection in the middle branch ($`M`$ in Fig. 2). It is clear that by increasing $`V_M`$, one shifts a minimum of the $`1-T_c(\mathrm{\Phi })/T_c`$ curve (using a renormalized temperature), at half-integer values of $`\mathrm{\Phi }/\mathrm{\Phi }_0`$, to higher values, without modifying those parts of the phase boundary close to the integer values of $`\mathrm{\Phi }/\mathrm{\Phi }_0`$.
The best agreement between the calculated phase boundary and experimental data is achieved for a configuration where imperfections are present on all three branches of the double loop ($`L,M,R`$ in Fig. 2). The corresponding curves are shown in Fig. 6. The main observation which follows from these figures is that imperfections in the central branch play a decisive role in determining the critical temperature of a double loop at the half-integer values of $`\mathrm{\Phi }/\mathrm{\Phi }_0`$.
In conclusion, we have generalized the network approach to include the effects of short-range imperfections. The presence of weakly scattering imperfections leads to the formation of gaps between solutions corresponding to integer and half-integer numbers of magnetic flux quanta penetrating each loop. The phase boundary is characterized by a smooth transition between symmetric and antisymmetric states. For imperfections with relatively large magnitudes, gaps in the phase boundary $`T_c(B)`$ or $`T_c(\mathrm{\Phi })`$ appear when the magnetic flux lies in intervals around half-integer values. The critical temperature at the half-integer values of the relative magnetic flux has been shown to be determined mainly by imperfections in the central branch. The calculated phase boundary for a mesoscopic double square loop is in good agreement with experiment.
Acknowledgments. \- We thank V. N. Gladilin for fruitful interactions. This work has been supported by the Interuniversitaire Attractiepolen – Belgische Staat, Diensten van de Eerste Minister – Wetenschappelijke, technische en culturele Aangelegenheden; the F.W.O.-V. projects Nos. G.0287.95, G.0232.96, G.0306.00, W.O.G. WO.025.99N (Belgium), and the ESF Programme VORTEX.
The collision and snapping of cosmic strings generating spherical impulsive gravitational waves
## 1 Introduction
Several exact solutions of Einstein's equations have been published which describe an impulsive spherical gravitational wave in a Minkowski background generated either by a snapping cosmic string (identified by a deficit angle) or by an expanding string inside the sphere.
The first such solution was presented independently by both Gleiser and Pullin and Bičák and Schmidt. The first of these was obtained by pasting two appropriate forms of Minkowski space on either side of the spherical wavefront, while the second was obtained as a limiting case of a solution with boost-rotation symmetry. In this case, two null particles recede from a common point generating an impulsive spherical gravitational wave and there are either cosmic strings attaching each particle to infinity, or there is an expanding cosmic string along the axis of symmetry separating the two particles. However, as pointed out by Bičák, this solution does not strictly describe a snapping cosmic string, but two semi-infinite cosmic strings initially approaching at the speed of light and then separating again at the instant at which they collide.
In fact a general method for constructing expanding impulsive spherical gravitational waves had previously been presented by Penrose. This involves cutting Minkowski space-time along a null cone and then re-attaching the two pieces with a suitable "warp". The explicit general solution written in a continuous coordinate system was given by Nutku and Penrose and Hogan. The above solutions describing "snapping cosmic strings" are included within this family.
These solutions have been extended to include a cosmological constant. With this they describe expanding spherical gravitational waves in de Sitter and anti-de Sitter backgrounds. Further, the general class of solutions of this type has been shown to be equivalent to impulsive limits of the Robinson–Trautman type N class of solutions. Finally, it may be mentioned that Hortaçsu and his colleagues have considered aspects of particle creation in these backgrounds.
It may be noted that Hogan has also considered imploding-exploding gravitational waves. In this he has simply attached an impulsive wave on a past null cone to one on a future null cone without considering the necessary sources of the wave. Outside the cone, a snapping string must evolve continuously. However, this construction does permit a string inside an expanding cone to be different from one inside the prior imploding cone, although the singular event at which the two cones join would then require some physical justification.
The purpose of the present paper is to describe the Penrose method and its interpretation in detail and in full generality, using alternative spatial sections and including an arbitrary cosmological constant. This will enable us to construct an exact solution which describes the gravitational wave that would be generated by the collision (and consequent breaking) of a pair of moving cosmic strings in a Minkowski background. Such a situation was outlined in . The explicit solution is given in section VI.
## 2 The explicit Penrose construction
As mentioned above, Penrose has described a "cut and paste" method for constructing an expanding spherical gravitational wave in a Minkowski background. In this method, appropriate junction conditions across the null cone guarantee that the vacuum field equations are satisfied. However, there is an impulsive component in the Weyl tensor representing an impulsive gravitational wave located on this null cone.
The above procedure will now be performed explicitly. However, for later convenience, we will derive a more general form of the solution for an impulsive spherical wave in a Minkowski background than that outlined in . In addition, we will also include a cosmological constant $`\mathrm{\Lambda }`$, so that the resulting solutions will also describe expanding spherical impulsive waves in de Sitter and anti-de Sitter backgrounds. In this approach, it is convenient to start with the line element for space-times of constant curvature in the manifestly conformally flat form
$$\mathrm{d}s^2=\frac{2\mathrm{d}\eta \mathrm{d}\overline{\eta }-2\mathrm{d}u\mathrm{d}v}{[1+\frac{1}{6}\mathrm{\Lambda }(\eta \overline{\eta }-uv)]^2},$$
(1)
where the relation to conformal cartesian coordinates is $`\eta =\frac{1}{\sqrt{2}}(x+iy)`$, $`u=\frac{1}{\sqrt{2}}(t+z)`$ and $`v=\frac{1}{\sqrt{2}}(t-z)`$.
We may now perform the transformation
$`v=\frac{V}{p}-\epsilon U,`$
$`u=Z\overline{Z}\frac{V}{p}-U,`$ (2)
$`\eta =\frac{V}{p}Z,`$
where
$$p=1+\epsilon Z\overline{Z},\qquad \epsilon =-1,0,1.$$
(3)
The parameter $`\epsilon `$ is related to the Gaussian curvature of the 2-surfaces given by $`U=`$ const., $`V=`$ const. With this, the metric (1) becomes
$$\mathrm{d}s^2=\frac{2\frac{V^2}{p^2}\mathrm{d}Z\mathrm{d}\overline{Z}+2\mathrm{d}U\mathrm{d}V-2\epsilon \mathrm{d}U^2}{[1+\frac{1}{6}\mathrm{\Lambda }U(V-\epsilon U)]^2}.$$
(4)
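The identity behind the denominator of (4), namely $`\eta \overline{\eta }-uv=U(V-\epsilon U)`$ under the substitution (2), can be checked symbolically. A short sketch (ours), treating $`Z`$ and $`\overline{Z}`$ as independent symbols:

```python
import sympy as sp

U, V, eps = sp.symbols('U V epsilon')
Z, Zb = sp.symbols('Z Zbar')          # Z and its conjugate, kept independent
p = 1 + eps * Z * Zb

v    = V / p - eps * U
u    = Z * Zb * V / p - U
eta  = V * Z / p
etab = V * Zb / p

# the conformal factors of (1) and (4) agree on the family of cones U = const.
print(sp.simplify(eta * etab - u * v - U * (V - eps * U)))   # -> 0
```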
Let us also consider the alternative, and more involved, transformation given by
$`v=AV-DU,`$
$`u=BV-EU,`$ (5)
$`\eta =CV-FU,`$
where
$`A=\frac{1}{p|h^{\prime }|},\qquad B=\frac{|h|^2}{p|h^{\prime }|},\qquad C=\frac{h}{p|h^{\prime }|},`$
$`D=\frac{1}{|h^{\prime }|}\left\{\frac{p}{4}\left|\frac{h^{\prime \prime }}{h^{\prime }}\right|^2+\epsilon \left[1+\frac{Z}{2}\frac{h^{\prime \prime }}{h^{\prime }}+\frac{\overline{Z}}{2}\frac{\overline{h}^{\prime \prime }}{\overline{h}^{\prime }}\right]\right\},`$
$`E=\frac{|h|^2}{|h^{\prime }|}\left\{\frac{p}{4}\left|\frac{h^{\prime \prime }}{h^{\prime }}-2\frac{h^{\prime }}{h}\right|^2+\epsilon \left[1+\frac{Z}{2}\left(\frac{h^{\prime \prime }}{h^{\prime }}-2\frac{h^{\prime }}{h}\right)+\frac{\overline{Z}}{2}\left(\frac{\overline{h}^{\prime \prime }}{\overline{h}^{\prime }}-2\frac{\overline{h}^{\prime }}{\overline{h}}\right)\right]\right\},`$
$`F=\frac{h}{|h^{\prime }|}\left\{\frac{p}{4}\left(\frac{h^{\prime \prime }}{h^{\prime }}-2\frac{h^{\prime }}{h}\right)\frac{\overline{h}^{\prime \prime }}{\overline{h}^{\prime }}+\epsilon \left[1+\frac{Z}{2}\left(\frac{h^{\prime \prime }}{h^{\prime }}-2\frac{h^{\prime }}{h}\right)+\frac{\overline{Z}}{2}\frac{\overline{h}^{\prime \prime }}{\overline{h}^{\prime }}\right]\right\},`$ (6)
$`h=h(Z)`$ is an arbitrary holomorphic function (apart from its singular regions), and the derivative with respect to $`Z`$ is denoted by a prime. With this, the Minkowski, de Sitter and anti-de Sitter metric (1) becomes
$$\mathrm{d}s^2=\frac{2\left|\frac{V}{p}\mathrm{d}Z+Up\overline{H}\mathrm{d}\overline{Z}\right|^2+2\mathrm{d}U\mathrm{d}V-2\epsilon \mathrm{d}U^2}{[1+\frac{1}{6}\mathrm{\Lambda }U(V-\epsilon U)]^2},$$
(7)
where $`2H(Z)=\beta ^{\prime }-\frac{1}{2}\beta ^2`$, $`\beta =\alpha ^{\prime }/\alpha `$ and $`\alpha =h^{\prime }`$. Thus,
$$H(Z)=\frac{1}{2}\left[\frac{h^{\prime \prime \prime }}{h^{\prime }}-\frac{3}{2}\left(\frac{h^{\prime \prime }}{h^{\prime }}\right)^2\right],$$
(8)
is the Schwarzian derivative. Notice that the transformation (5) and (6) reduces to (2) when $`h=Z`$.
In the coordinates of both line elements (4) and (7), the null hypersurface $`U=0`$ represents a null cone (an expanding sphere) $`\eta \overline{\eta }-uv=0`$ in the background. Moreover, the reduced 2-metrics on this cone are identical. Following Penrose's "cut and paste" method, we may take the line element (4) for $`U<0`$ and attach this to (7) for $`U>0`$. The resulting line element can then be written in the combined form
$$\mathrm{d}s^2=\frac{2\left|\frac{V}{p}\mathrm{d}Z+U\mathrm{\Theta }(U)p\overline{H}\mathrm{d}\overline{Z}\right|^2+2\mathrm{d}U\mathrm{d}V-2\epsilon \mathrm{d}U^2}{[1+\frac{1}{6}\mathrm{\Lambda }U(V-\epsilon U)]^2},$$
(9)
where $`\mathrm{\Theta }(U)`$ is the Heaviside step function. This combined metric, which was presented for a Minkowski background in -, with a cosmological constant in , and in the most general form in , is explicitly continuous everywhere, including across the null hypersurface $`U=0`$. However, the discontinuity in the derivatives of the metric yields impulsive components in the curvature tensor proportional to the Dirac $`\delta `$-function.
It may be observed that the Penrose junction conditions can be obtained by comparing the transformations (2) at $`U=0_{-}`$ with (5)–(6) at $`U=0_+`$. This gives the identification
$`(Z,\overline{Z},V,U=0)_{M^{-}}=\left(h(Z),\overline{h}(\overline{Z}),\frac{1+\epsilon h\overline{h}}{1+\epsilon Z\overline{Z}}\frac{V}{|h^{\prime }|},U=0\right)_{M^+},`$ (10)
where the space-time has been divided into two halves $`M^{-}(U<0)`$ inside the null cone, and $`M^+(U>0)`$ outside.
Labelling coordinates $`(x^1,x^2,x^3,x^4)=(Z,\overline{Z},V,U)`$, and using the tetrad
$`k^\mu =\left[1+\frac{1}{6}\mathrm{\Lambda }U(V-\epsilon U)\right]^2\delta _3^\mu ,`$
$`\ell ^\mu =\delta _4^\mu +\epsilon \delta _3^\mu ,`$
$`m^\mu =p^2\frac{1+\frac{1}{6}\mathrm{\Lambda }U(V-\epsilon U)}{V^2-p^4U^2\mathrm{\Theta }(U)H\overline{H}}\left(\frac{V}{p}\delta _2^\mu -pU\mathrm{\Theta }(U)\overline{H}\delta _1^\mu \right),`$
the non-zero components of the curvature tensor for the line element (9) can be shown to be given by
$$\mathrm{\Psi }_4=\frac{p^2H}{V}\delta (U),\mathrm{\Phi }_{22}=\frac{p^4H\overline{H}}{V^2}U\delta (U).$$
(11)
This indicates an impulsive gravitational wave component. It also confirms that the space-time is vacuum everywhere except on the wave surface at $`V=0`$ and at possible singularities of the function $`p^2H(Z)`$.
It may be observed that the metric (4) contains a coordinate singularity at $`V=0`$. However, this becomes a physical singularity on the wave surface $`U=0`$ in the above construction. Unfortunately, for $`\epsilon =0`$, $`V=0`$ is a singular null line on the surface $`U=0`$. In fact, as seen from (2), this is a common line ($`x=y=0`$, $`z=t`$) to all the null cones $`U=`$ const. Thus, for $`\epsilon =0`$, there is a singular line on the wave surface where $`V=0`$, in addition to possible singular points of $`H`$. For a physical interpretation of these solutions, it would be better to remove the singularity $`V=0`$ from the wavefront. This can be achieved by considering solutions with $`\epsilon \ne 0`$. For these cases, as observed by Hogan, the family of null cones $`U=`$ const. have vertices on a timelike line ($`x=y=z=0`$) if $`\epsilon =1`$ and on a spacelike line ($`x=y=t=0`$) if $`\epsilon =-1`$. For either of these cases, the singularity at $`V=0`$ appears only at the vertex of the null cone $`x=y=z=t=0`$, which may be considered as the "origin" of the spherical wave.
## 3 A geometrical description of the impulse
Significantly, the construction of the spherical impulsive wave as described above admits an interesting geometrical interpretation. Specifically, the ratio
$$\xi \equiv \frac{\eta }{v}=\frac{x+iy}{t-z}$$
(12)
is the well known relation for a stereographic (one-to-one) correspondence between a sphere and an Argand plane by a projection from the North pole onto a plane through the equator (see chapter 1 of ). This permits us to represent the wave surface $`U=0`$ (taken at a typical or rescaled time $`t=1`$) either as a Riemann sphere, or as its associated complex plane. Conversely, any point $`\xi `$ on the complex plane, taken as $`z=0`$, identifies a unique point $`P`$ on the sphere with coordinates
$$x=\frac{\xi +\overline{\xi }}{1+\xi \overline{\xi }},\qquad y=\frac{i(\overline{\xi }-\xi )}{1+\xi \overline{\xi }},\qquad z=\frac{\xi \overline{\xi }-1}{\xi \overline{\xi }+1}.$$
In terms of standard spherical coordinates, $`\xi =\mathrm{cot}\frac{\theta }{2}e^{i\varphi }`$.
We may also recall some important properties of the stereographic projection. Any circle on the Riemann sphere maps to a circle in the complex Argand plane, and vice versa. As a special case, any circle passing through the North pole of the Riemann sphere maps to a straight line in the complex plane. A great circle which passes through the North pole maps to a straight line through the origin of the plane. We also note that the stereographic projection is conformal, i.e. angle preserving.
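These formulas are simple to exercise numerically: the round trip plane $`\to `$ sphere $`\to `$ plane returns the starting point. A small illustrative sketch (ours), with $`t=1`$:

```python
import numpy as np

def plane_to_sphere(xi):
    """(x, y, z) of the point P identified with xi, as in the formulas above."""
    d = 1.0 + abs(xi) ** 2
    return np.array([2 * xi.real / d, 2 * xi.imag / d, (abs(xi) ** 2 - 1) / d])

def sphere_to_plane(pt):
    x, y, z = pt
    return (x + 1j * y) / (1.0 - z)    # xi = (x + iy)/(t - z) at t = 1

xi = 0.3 - 1.2j
assert abs(sphere_to_plane(plane_to_sphere(xi)) - xi) < 1e-12
```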
Returning to the impulsive spherical wave, we observe from (2) and (5)–(6) that on $`U=0`$
$$\xi =\{\begin{array}{cc}Z\hfill & \text{for }U=0_{-}\hfill \\ h(Z)\hfill & \text{for }U=0_+\hfill \end{array}$$
(13)
This permits us to represent the Penrose junction condition (10) across the wave surface as a mapping on the complex Argand plane $`Z\mapsto h(Z)`$. This is equivalent to mapping points $`P_{-}`$ on the "inside" of the wave surface to the identified points $`P_+`$ on the "outside", see Fig. 1.
Normally, we will assume that the "inside" represented by $`Z`$ covers the complete sphere, but the function $`h(Z)`$ will not generally cover the entire sphere on the "outside". The restrictions on the range of the function $`h(Z)`$ together with its specific character will correspond to particular physical situations as will be described below.
For the sake of completeness, the results of these two sections have been given with an arbitrary cosmological constant included. However, for the remainder of this paper, we will only consider the case when $`\mathrm{\Lambda }=0`$.
## 4 A snapping cosmic string
The utility of the above geometrical approach may be demonstrated by the simplest and physically interesting case of an impulsive spherical wave generated by a "snapping cosmic string". Without loss of generality, the string may be taken to be located along the $`z`$-axis perpendicular to the above complex plane.
It may be noted that the initial Gleiser–Pullin solution describing this situation was presented in a slightly different form to that given above. Essentially, it employed two different constant non-zero values of $`H`$ in the regions inside and outside the spherical wave. This is effectively equivalent to taking $`\xi =e^Z`$ for $`U<0`$ and $`\xi =e^{(1-\delta )Z}`$ for $`U>0`$, where $`\delta `$ is a real positive constant.
Here we will describe the same solution. However we will use the above notation as also presented by Nutku and Penrose . Taking $`\mathrm{\Lambda }=0`$, the line element (9) becomes
$$\mathrm{d}s^2=2\left|\frac{V}{p}\mathrm{d}Z+U\mathrm{\Theta }(U)p\overline{H}\mathrm{d}\overline{Z}\right|^2+2\mathrm{d}U\mathrm{d}V-2\epsilon \mathrm{d}U^2.$$
(14)
where $`p`$ and $`\epsilon `$ are given by (3).
According to the above construction (13), in the interior region $`U<0`$, the complete spherical surface is covered by $`\xi =Z`$. It is then appropriate to represent the outer region $`U>0`$ by $`\xi =h_1(Z)`$ where
$$h_1(Z)=Z^{1-\delta }.$$
(15)
Putting $`Z=|Z|e^{i\varphi }`$ where $`\varphi \in [-\pi ,\pi )`$, $`h_1(Z)`$ covers the plane minus a wedge, e.g. $`\mathrm{arg}\,h_1(Z)\in [-(1-\delta )\pi ,(1-\delta )\pi ]`$. This represents Minkowski space with a deficit angle $`2\pi \delta `$ and may be considered to describe a cosmic string in the region outside the spherical wave as shown in Fig. 2. (Alternatively, if $`\delta <0`$, the exterior region may be taken to be complete and the range of $`\varphi `$ reduced so that the deficit angle representing the string appears inside the sphere.) Outside the spherical wavefront, the string may be taken to be located along the axis $`\eta =0`$. Then, by putting $`\eta =|\eta |e^{i\varphi }`$, it may be observed from (5) that, for the particular function $`h_1(Z)`$, $`\mathrm{arg}\,\eta =\mathrm{arg}\,h_1`$. Thus, the deficit angle is constant along the length of the string, corresponding to a constant tension.
For the case when $`h=h_1(Z)`$, the metric function $`H_1(Z)`$ in (14) is given by
$$H_1=\frac{\frac{1}{2}\delta (1-\frac{1}{2}\delta )}{Z^2}.$$
(16)
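Both (16) and the vanishing of $`H`$ for Möbius maps (used in section 5) follow directly from the Schwarzian derivative (8) and can be confirmed symbolically (an illustrative sketch of ours):

```python
import sympy as sp

Z = sp.symbols('Z')
delta = sp.symbols('delta', positive=True)

def H(h):
    """H(Z) = (1/2)[h'''/h' - (3/2)(h''/h')**2], Eq. (8)."""
    h1, h2, h3 = (sp.diff(h, Z, k) for k in (1, 2, 3))
    return sp.simplify(sp.Rational(1, 2) * (h3 / h1 - sp.Rational(3, 2) * (h2 / h1) ** 2))

print(H((2 * Z + 3) / (Z - 5)))   # 0: Moebius maps have vanishing Schwarzian
print(H(Z ** (1 - delta)))        # delta*(2 - delta)/(4*Z**2), i.e. Eq. (16)
```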
This indicates a singular point at $`Z=0`$, which is located at the South pole of the spherical wavefront. However, there is an additional singularity at $`V=0`$ which, for $`\epsilon =0`$, is located at the North pole at any time $`t>0`$. As suggested at the end of section II, a much better physical interpretation can be given to the $`\epsilon \ne 0`$ solutions of the form (14). For $`\epsilon =\pm 1`$, the physical singularity at $`V=0`$ appears only in the vertex of the conical impulsive surface $`U=0`$ at $`t=0`$. Moreover, according to (11) and (16)
$$\mathrm{\Phi }_{22}\propto \frac{(1+\epsilon Z\overline{Z})^4}{(Z\overline{Z})^2}\frac{U\delta (U)}{V^2}.$$
Therefore, there is a divergence not only at the South pole where $`Z=0`$, given by $`\mathrm{\Phi }_{22}\propto |Z|^{-4}`$, but also at the North pole where $`Z=\mathrm{\infty }`$, given by $`\mathrm{\Phi }_{22}\propto |Z|^4`$. Both these divergences are equivalent and indicate particles of equal "mass" at the two ends of the string.
## 5 Lorentz transformations
At this point, let us consider a space-time given by the line element (14) for some particular function $`H`$ and observe that it can best be interpreted by first calculating the associated function $`h(Z)`$. This can be achieved by integrating the three first order equations following (7). However, the resulting function is not unique as it contains three complex constants of integration. In fact, this arbitrariness corresponds to the freedom associated with the Lorentz transformations
$$Z\mapsto h(Z)=\frac{aZ+b}{cZ+d},$$
where $`a,b,c,d`$ are arbitrary complex constants satisfying the complex condition $`ad-bc=1`$. It is well known that this is the most general global holomorphic (i.e. analytic and conformal) transformation of the Riemann sphere to itself (see chapter 1 of ). It may also be noted that the Schwarzian derivative (8), i.e. $`H(Z)`$, is invariant under this transformation.
In particular, a rotation about an arbitrary direction can be given by
$$h_r(Z)=\frac{Ze^{i\psi }\mathrm{cos}\frac{\theta }{2}-\mathrm{sin}\frac{\theta }{2}}{Ze^{i\psi }\mathrm{sin}\frac{\theta }{2}+\mathrm{cos}\frac{\theta }{2}}e^{i\varphi },$$
(17)
where $`\theta ,\varphi ,\psi `$ are the Euler angles. We will adopt the following construction: first we perform a rotation through $`\psi `$ about the $`z`$-axis, then we rotate through $`\theta `$ about the original $`y`$-direction, and finally we perform a rotation through $`\varphi `$ about the original $`z`$-axis. This moves the North pole to a point on the sphere given by the spherical coordinates $`\theta ,\varphi `$.
Of the remaining Lorentz transformations, a boost in the $`z`$-direction with velocity $`v`$ is given simply by
$$h_b(Z)=wZ,$$
(18)
where $`w=\sqrt{(1+v)/(1-v)}`$, which corresponds to a uniform expansion in the complex plane. It leaves invariant both poles of the Riemann sphere. Also, the null rotation $`h_n(Z)=Z-Z_0`$, where $`Z_0`$ is complex, is a uniform translation in the complex plane. It leaves invariant only the North pole of the Riemann sphere. These are well described with useful pictures in chapter 1 of .
## 6 The collision of cosmic strings
By combining operations of the type given in (15), which introduces a deficit angle, with suitable Lorentz transformations, we can generate much more general solutions with predetermined physical properties. In particular, we can use this technique to construct an explicit solution describing the situation outlined by Nutku and Penrose in which two cosmic strings collide and split generating an impulsive spherical gravitational wave.
It is assumed that two strings are initially moving with constant velocity and approach each other. Of course, provided they are not parallel, it is always possible to make a Lorentz transformation to a frame of reference in which the two cosmic strings are orthogonal. There is therefore no loss of generality in considering only orthogonal strings. It is also assumed that both strings snap at their point of intersection and that the snapped ends propagate along the length of the string at the speed of light. This will generate an impulsive spherical gravitational wave.
We start with the simplest situation in which two orthogonal strings approach each other with a negligible velocity. The solution for such a situation can be constructed as follows.
Starting with (15), we first apply a rotation (17) with $`\psi =\theta =\varphi =\pi /2`$. This leads to $`(ih_1-1)/(h_1-i)`$, which represents a cosmic string located along the $`y`$-axis. Then we introduce another string in the same way as in (15) by taking the $`(1-\epsilon )`$-th power. This removes a wedge which represents a second string that is located along the $`z`$-axis and has deficit angle $`2\pi \epsilon `$. The final mapping is given by
$$h_2(Z)=\left(\frac{iZ^{1-\delta }-1}{Z^{1-\delta }-i}\right)^{1-\epsilon }.$$
(19)
This solution describes two snapped orthogonal cosmic strings which were at "relative rest" initially. The corresponding function $`H_2(Z)`$ has the form
$$H_2(Z)=\frac{\frac{1}{2}\delta (1-\frac{1}{2}\delta )}{Z^2}-\frac{\frac{1}{2}\epsilon (1-\frac{1}{2}\epsilon )\,4(1-\delta )^2Z^{-2\delta }}{(Z^{1-\delta }+i)^2(Z^{1-\delta }-i)^2}.$$
(20)
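The form (20) can be spot-checked numerically (a verification added here, not the authors' code; it assumes that $`H`$ equals half the Schwarzian derivative of $`h`$, a normalization consistent with (16)):

```python
import sympy as sp

Z = sp.symbols('Z')
d, e = sp.Rational(1, 10), sp.Rational(1, 5)    # sample values of delta and epsilon
X = Z**(1 - d)
h = ((sp.I*X - 1)/(X - sp.I))**(1 - e)          # the mapping (19)

f1, f2, f3 = [sp.diff(h, Z, k) for k in (1, 2, 3)]
H = (f3/f1 - sp.Rational(3, 2)*(f2/f1)**2)/2    # half the Schwarzian derivative of h

target = (d*(1 - d/2)/(2*Z**2)                  # the closed form (20)
          - (e*(1 - e/2)/2)*4*(1 - d)**2*Z**(-2*d)/((X + sp.I)**2*(X - sp.I)**2))

z0 = sp.Float('1.3') + sp.Float('0.4')*sp.I     # a generic test point
print(sp.N((H - target).subs(Z, z0), 8))        # differs from zero only by roundoff
```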
The solution is visualized in Fig. 3: on the left, the complex function $`h_2(Z)`$ is shown in the complex Argand plane; on the right, the corresponding Riemann sphere obtained using the stereographic identification. As can be seen, there are two perpendicular wedge cuts on the sphere, indicating that the flat space outside the spherical impulsive gravitational wave contains two orthogonal pairs of cosmic strings. This solution thus represents two cosmic strings which snapped at the initial time $`t=0`$ and at their point of intersection.
We can now similarly construct a more general solution describing a situation in which the two cosmic strings were in finite relative motion before the collision. In this case, the four snapped half-strings are also in relative translational motion. The more involved construction of such a solution is shown in Figs. 4 and 5.
We start with the initial cut $`h_1(Z)=Z^{1-\delta }`$ given by (15), introducing the first string with deficit angle $`2\pi \delta `$ along the $`z`$-axis as indicated in Fig. 2. A subsequent rotation (17) with $`\theta =\varphi =\pi /2`$, $`\psi =\pi `$ places the initial poles $`Z=0`$ and $`Z=\mathrm{\infty }`$ on the $`y`$-axis with the cut passing through the South pole. The corresponding function $`h_a(Z)=i(h_1+1)/(h_1-1)`$ is illustrated in the complex Argand plane in Fig. 4 (a) and on the Riemann sphere in Fig. 5 (a). Next we make a boost (18) with $`w=w_1`$ in the $`z`$-direction as shown in Fig. 4 (b) and Fig. 5 (b). Then another rotation (17) with $`\theta =\pi /2`$, $`\varphi =\psi =0`$ about the $`y`$-axis leads to
$$h_c(Z)=\frac{(w_1-i)Z^{1-\delta }+(w_1+i)}{(w_1+i)Z^{1-\delta }+(w_1-i)},$$
(21)
and puts the poles and the cut along the $`z=0`$ plane, symmetric about the $`x`$-axis, see Fig. 4 (c) and Fig. 5 (c). At this point we make a second cut from the North to the South pole through the negative $`x`$-axis by taking $`h_c^{1-\epsilon }`$. This introduces the second string with deficit angle $`2\pi \epsilon `$ along the $`z`$-axis as indicated in Fig. 4 (d) and Fig. 5 (d). (Note that the function $`h_c^{1-\epsilon }`$ for $`w_1=1`$ exactly reduces to the previous form (19), with the poles located at $`h_c(0)=(w_1+i)/(w_1-i)=i`$ and $`h_c(\mathrm{\infty })=\overline{h}_c(0)=-i`$.) The next step is to make a rotation about the $`y`$-axis using $`\theta =\pi /2`$, $`\varphi =\psi =0`$ (see Fig. 4 (e) and Fig. 5 (e)). Finally we perform a second boost $`w_2`$ in the negative $`z`$-direction. We thus obtain
$$h_2(Z)=w_2\frac{h_c^{1-\epsilon }-1}{h_c^{1-\epsilon }+1}.$$
(22)
This expression represents one cut through the North pole parallel to the $`x`$-direction, and a second cut through the South pole parallel to the $`y`$-direction, but with both pairs of strings separated either side of the plane $`z=0`$ as shown in Fig. 4 (f) and Fig. 5 (f). The resulting metric function using (22) with (21) is given by
$`H_2(Z)={\displaystyle \frac{\frac{1}{2}\delta (1-\frac{1}{2}\delta )}{Z^2}}-{\displaystyle \frac{\frac{1}{2}\epsilon (1-\frac{1}{2}\epsilon )\,16w_1^2(1-\delta )^2Z^{-2\delta }}{[(w_1+i)Z^{1-\delta }+(w_1-i)]^2[(w_1-i)Z^{1-\delta }+(w_1+i)]^2}}.`$ (23)
This is the explicit solution which describes the collision and consequent snapping of two orthogonal cosmic strings as outlined by Nutku and Penrose .
It may be noted that there exist null "particles" at the ends of the four semi-infinite strings. These are located on the spherical impulsive wave surface at points given by $`\xi _{1+}=h_2(h_c(Z=0))=iw_2\mathrm{tan}[(1-\epsilon )(\varphi _1-\pi /2)]`$, where $`\varphi _1=\mathrm{arg}(w_1+i)`$, i.e. $`\mathrm{cot}\varphi _1=w_1`$, $`\xi _{2+}=h_2(h_c(Z=\mathrm{\infty }))=\overline{\xi }_{1+}`$, and $`\xi _{3+}=h_2(h_c=0)=-w_2`$, $`\xi _{4+}=h_2(h_c=\mathrm{\infty })=w_2`$. These two pairs of points describe the four singular points in the complex Argand plane of Fig. 4 (f). Using the stereographic relation to standard spherical coordinates, $`\xi =\mathrm{cot}(\theta /2)e^{i\varphi }`$, the corresponding four points on the Riemann sphere, the ends of the two cuts of Fig. 5 (f), have $`\mathrm{cot}(\theta _1/2)=|\xi _{1+}|=|\xi _{2+}|`$ and $`\mathrm{cot}(\theta _2/2)=|\xi _{3+}|=|\xi _{4+}|`$. The values of $`\theta _1`$ and $`\theta _2`$ can be made arbitrary by a suitable choice of the boost parameters $`w_1`$ and $`w_2`$. Consequently, the cuts representing the strings can be distributed arbitrarily over the spherical impulsive wave (parallel to the $`x`$- and $`y`$-directions in our construction). It is natural to consider a geometrically privileged situation in which the two cuts are distributed symmetrically, as indicated in Fig. 5 (f). Obviously, this is given by the condition $`\theta _1+\theta _2=\pi `$, implying $`\mathrm{cot}(\theta _1/2)=\mathrm{tan}(\theta _2/2)`$, i.e. $`|\xi _{1+}||\xi _{3+}|=1`$. Therefore, the symmetry condition can be expressed as
$$w_2=\sqrt{\mathrm{cot}[(1-\epsilon )(\pi /2-\varphi _1)]},\qquad \text{where}\quad \mathrm{cot}\varphi _1=w_1.$$
(24)
For a negligible $`\epsilon `$, $`w_2\approx 1/\sqrt{w_1}`$.
Note that the above choice of the boost parameters in our construction results in a geometrically symmetric situation in which the two semi-infinite parts of the first snapped cosmic string are parallel to the $`y`$-axis and propagate in the positive $`z`$-direction with speed $`\mathrm{cos}\theta _1=|v_2|`$. Also, the two semi-infinite parts of the second string are parallel to the $`x`$-axis and move in the negative $`z`$-direction with the same speed $`|v_2|`$. For $`\epsilon =\delta `$ the strings have equal tension, and the geometrically symmetric situation corresponds to a choice of physically privileged coordinates in which the geometrical origin of the impulse coincides with the centre of mass of the colliding system.
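A short numerical illustration of this relation (added here; the value of $`w_1`$ is arbitrary), using the $`\epsilon \to 0`$ limit of the symmetry condition (24):

```python
import math

w1 = 3.0                               # first boost parameter (arbitrary, > 1)
w2 = 1.0/math.sqrt(w1)                 # the eps -> 0 limit of (24)
xi1 = w2*w1                            # |xi_{1+}| = w2*cot(phi_1) = w2*w1 as eps -> 0
theta1 = 2.0*math.atan(1.0/xi1)        # from cot(theta_1/2) = |xi_{1+}|
v2 = (w2**2 - 1.0)/(w2**2 + 1.0)       # invert w = sqrt((1+v)/(1-v))
print(math.cos(theta1), abs(v2))       # both equal 0.5: cos(theta_1) = |v_2|
```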
Finally, it can be observed that the solution describing the collision and snapping of two cosmic strings can alternatively and equivalently be constructed if, starting again with (15), we perform the rotation (17) using $`\theta =\varphi =\pi /2`$, $`\psi =0`$ in step (a), and the rotation $`\theta =\pi /2`$, $`\varphi =\psi =0`$ in step (c). Taking the other four steps to be the same as in the previous construction shown in Fig. 4 and Fig. 5, this leads to
$$h_c(Z)=\frac{(w_1+i)Z^{1-\delta }-(w_1-i)}{(w_1-i)Z^{1-\delta }-(w_1+i)}.$$
(25)
This results in the solution given by (22) with (25) presented in Fig. 6. Again, there is a cut through the North pole, parallel to the $`x`$-direction and a second cut through the South pole, parallel to the $`y`$-direction, but these cuts are smaller than in Fig. 5 (f). The resulting metric function is given by
$`H_2(Z)={\displaystyle \frac{\frac{1}{2}\delta (1-\frac{1}{2}\delta )}{Z^2}}-{\displaystyle \frac{\frac{1}{2}\epsilon (1-\frac{1}{2}\epsilon )\,16w_1^2(1-\delta )^2Z^{-2\delta }}{[(w_1+i)Z^{1-\delta }-(w_1-i)]^2[(w_1-i)Z^{1-\delta }-(w_1+i)]^2}}.`$ (26)
The two cuts representing the strings are distributed symmetrically along the spherical impulsive wave if $`w_2=\sqrt{\mathrm{cot}[(1-\epsilon )\varphi _1]}`$.
## 7 Conclusions
We have described the Penrose "cut and paste" method in detail and full generality, emphasizing its geometrical properties and physical interpretation. Using this, we have explicitly constructed the exact solution which describes two colliding cosmic strings which snap at the point of their intersection. The ends of each semi-infinite string are singular points (interpreted as null particles) located on a sphere which is expanding with the speed of light. This generates a spherical impulsive gravitational wave.
## Acknowledgments
This work was supported by a visiting fellowship from the Royal Society and, in part, by the grant GACR-202/99/0261 of the Czech Republic. |
## 1 Introduction:
The diffusive shock acceleration (DSA) mechanism is believed to be the source of radio emitting relativistic electrons in young and middle-aged SNRs of diameter $`D\lesssim 20`$ pc. For the radio emission of evolved, large-diameter SNRs, however, the mechanism of van der Laan (1962) or some modification of it (e.g., Blandford and Cowie 1982) is widely believed to be responsible. According to these models, electrons preexisting in the ambient ISM and compressed to a high level at the radiative shock are responsible for the radio emission of the remnant. Due to the instability of radiative shock waves, this mechanism may, however, encounter a number of difficulties in reproducing the radio emission of very large diameter SNRs evolving in the warm phase of the ISM. Moreover, if the density of the ambient medium is very low, the SNR will finish its life by merging with the ISM before cooling becomes important. For such remnants DSA becomes the main candidate for the generation of relativistic electrons emitting radio waves in magnetic fields of $`10^{-5}\div 10^{-6}`$ G. But electron acceleration in SNRs is more difficult to estimate quantitatively, since the injection efficiency and even the very process of acceleration for electrons are still unclear.
The goal of the present study is to apply the DSA mechanism, under very simple and common assumptions about the injection, to follow the evolution of shell-type SNRs up to very large radii and, at the same time, to obtain new constraints on the acceleration mechanism if agreement between the model predictions and the observations can be achieved.
## 2 The Model:
We used the onion-shell model of Moraal and Axford (1983), as was done in Asvarov (1992, 1994), where the Sedov solution for the remnant structure had been adopted. In the present study the SNR is modeled using the analytical approximation of Cox and Anderson (1982), which follows the development of an adiabatic spherical blast wave in a homogeneous ambient medium of finite pressure. At early times this approximation resembles the zero-pressure Sedov similarity solution, but it extends the range of investigation well into the regime in which the external pressure is significant.
To avoid the problem of injection we introduced, like Bell (1978), two injection parameters: we inject electrons as "test particles" at momentum $`p_{\mathrm{inj}}=\psi p_{\mathrm{th}}`$, where $`p_{\mathrm{th}}=(2\mathrm{m}_\mathrm{e}T_\mathrm{s})^{1/2}`$ is the downstream electron thermal momentum, and their concentration is assumed to be proportional to the density of the ambient thermal electrons: $`N_{\mathrm{inj}}=\phi n_{\mathrm{oe}}`$. At the shock front, equipartition between electron and proton temperatures is assumed, which means that $`p_{\mathrm{inj}}`$ is proportional to the shock velocity.
Of the energy loss processes affecting the accelerated electrons, only adiabatic cooling was taken into consideration.
Assuming the magnetic field to be frozen into the plasma, we model the density dependence of $`H`$ as $`H=H_0(\rho /\rho _0)^\mathrm{k}`$, where $`H_0`$ is the ambient value of the magnetic field strength.
Although in our analysis we consider only extended SNRs, we calculate several models in high-density environments, for which the radiative phase begins relatively soon after the explosion of the SN. In this case the total flux is obtained by integration over the layers of the shell where radiative cooling does not occur. This implies that we completely ignore the action of DSA at radiative shocks.
## 3 Results and Discussion:
Empirical $`\mathrm{\Sigma }-D`$ relations are very useful tools for testing theoretical models of SNR evolution. Before comparing our model with the observations, we formulate the common properties of the model predictions. Here we concentrate on the two main observable radio characteristics of SNRs: the spectral index and the surface brightness. DSA at strong shock waves in the test particle approximation predicts for the spectral index the value of $`0.5`$, which will increase as the shock intensity (Mach number) decreases. Calculations show that at Mach numbers $`M\lesssim 4`$ the value of the mean radio spectral index gets greater than $`0.6`$, but it remains bounded by a maximal value of $`0.75`$ during the following evolution. This is the result of the distribution of magnetic field strength at the shock, but the real average spectrum of electrons inside the remnant will be somewhat softer than $`2\alpha +1=2.5`$.
As concerns the other important radio characteristic of the SNR, the surface brightness, the remnant evolves at nearly constant radio surface brightness followed by a very steep drop. It is important to note that the dependence of these and other radio characteristics of the remnant on the shock Mach number has an almost universal nature, depending only very weakly on the input parameters.
To compare the model predictions with the observations we have collected a set of shell-type and a few composite SNRs in our Galaxy with known distances, together with remnants in the LMC. The corresponding $`\mathrm{\Sigma }-D`$ diagram is shown in Fig. 1. Error bars are due to uncertainties in the distances for some SNRs; for several objects only lower limits, and for one SNR an upper limit, are known.
As a standard set of input parameters we used: the energy of the SN explosion $`E_{\mathrm{SN}}`$, which is varied in the range $`(1\div 5)\times 10^{51}`$ $`\mathrm{erg}`$; the ambient thermal electron density $`n_{0\mathrm{e}}`$ in the range $`(5\div 5\times 10^{-3})`$ $`\mathrm{cm}^{-3}`$; and the strength of the ambient magnetic field $`H_0`$ in the range $`(3\div 10)\times 10^{-6}\mathrm{G}`$. In all models the mass of the SN ejecta is taken to be one solar mass. In Fig. 1 several $`\mathrm{\Sigma }_{1\mathrm{G}\mathrm{H}\mathrm{z}}(D)`$ tracks are shown for different values of $`n_{\mathrm{oe}}`$, $`E_{\mathrm{SN}}`$ and $`H_0`$. In all calculations we used $`\psi =3`$ and $`\phi =4\times 10^{-4}`$, justifying the use of the test particle approximation. It is important to note that the shapes of the evolutionary tracks depend very weakly on these parameters, although the magnitude of $`\mathrm{\Sigma }`$ depends on $`\phi `$ linearly.
As can be seen in Fig. 1, by varying mainly the values of $`n_{\mathrm{oe}}`$ and $`E_{\mathrm{SN}}`$ the model is able to cover all the remnants in the $`\mathrm{\Sigma }-D`$ diagram, including two large-diameter, relatively bright SNRs (HB 9, OA 184) and the giant radio loops. As the latter is attained at the price of a somewhat large value for $`E_{\mathrm{SN}}`$ of $`5\times 10^{51}`$ erg, we calculated several models in which the contribution of the ambient relativistic electrons was included via their injection into the DSA. As an example we have taken the spectrum $`j=1.4\times 10^{-2}E^{-2.2}`$ electrons m<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup> GeV<sup>-1</sup> ($`E`$ in GeV) from Fichtel et al. (1991). In Fig. 1 two such tracks are drawn with dashed lines.
It is interesting to note that the shapes of the evolutionary tracks predicted by our model are in excellent accordance with the prediction made by Berkhuijsen (1986) that "radio remnants may evolve adiabatically at nearly constant $`\mathrm{\Sigma }_\mathrm{R}`$, followed by a steep decrease".
As can be seen in Fig. 1, the radio evolution of SNRs depends equally on the two parameters $`E_{\mathrm{SN}}`$ and $`n_{0\mathrm{e}}`$.
In the framework of the present model a number of features of the empirical $`\mathrm{\Sigma }-D`$ relation obtain a simple explanation. For instance, the small number of remnants with small diameters and low $`\mathrm{\Sigma }`$ (the lower left corner of the diagram) is the result of the very fast evolution of the SN blast wave in the low-density ISM, where SNRs have low $`\mathrm{\Sigma }`$. According to our model it is also easy to account for the high concentration of SNRs at diameters $`30-50`$ $`\mathrm{pc}`$ in the $`\mathrm{\Sigma }-D`$ diagram: evolutionary tracks of different kinds intersect at these diameters, and the sample of remnants here consists of objects evolving under different initial conditions. Of course, we cannot exclude the contribution of various selection effects to the origin of the empirical $`\mathrm{\Sigma }-D`$ relations. Moreover, not all remnants can be described by our model. Indeed, in the $`\mathrm{\Sigma }-D`$ diagram (Fig. 1) the composite SNRs and the SNRs with $`\alpha \le 0.4`$ (indicated by open circles) have systematically large values of $`\mathrm{\Sigma }`$, which can be understood as indicating that an additional, more effective mechanism acts in these remnants.
It is important to note that the adopted values for Bell's parameters do not contradict the observations at standard values of the input parameters characterizing the ISM and the SNR itself. This fact implies that the test particle approximation is actually realized in evolved shell-type SNRs. One more argument in favor of the DSA mechanism is the statistics of spectral indices. The catalogue of SNRs of Green (1998) contains 80 SNRs with well determined values of the spectral index $`\alpha `$, of which $`57`$ remnants $`(71\%)`$ have $`\alpha \ge 0.45`$ and only two SNRs (one of them the young peculiar SNR Cas A) have $`\alpha >0.75`$. Practically there are no objects contradicting the prediction of our model that $`\alpha _{\mathrm{max}}\approx 0.75`$. It is well known that young SNRs have systematically large values of $`\alpha `$, which can be explained by back-reaction effects or by the action of mechanisms other than DSA. According to the model, SNRs with Mach numbers $`M\lesssim 4`$ have a mean value of $`\alpha \gtrsim 0.6`$. Assuming for simplicity that the SNR evolves according to the Sedov law, $`M\propto t^{-3/5}`$, for the number of SNRs with Mach numbers greater than $`M`$ we have $`N(>M)\propto M^{-5/3}`$, from which it follows that the number of SNRs with $`\alpha \gtrsim 0.6`$, $`N(\alpha \gtrsim 0.6)=N(M\lesssim 4)`$, must be about $`55\%`$ of the total number of SNRs. Here we have adopted a final Mach number $`M_\mathrm{f}=2.5`$, at which $`\mathrm{\Sigma }`$ drops by more than two orders of magnitude from its initial value and the SNR becomes invisible. In the catalogue of Green (1998), $`22`$ out of $`80`$ SNRs have $`0.60\le \alpha \le 0.75`$, which makes $`27.5\%`$. This discrepancy can easily be explained by the selection effect that SNRs with large diameters and small Mach numbers have low $`\mathrm{\Sigma }`$ and are consequently difficult to detect, though their number exceeds the number of bright remnants by a factor of $`8-10`$.
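The fraction quoted here can be reproduced in two lines (a check added for convenience, not in the original):

```python
M_f, M1 = 2.5, 4.0                        # final and reference Mach numbers
frac = 1.0 - (M1/M_f)**(-5.0/3.0)         # from N(>M) proportional to M^(-5/3)
print(f"fraction of SNRs with M <= 4: {frac:.2f}")   # -> 0.54, i.e. about 55%
```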
The mean size and age of an SNR with $`E_{\mathrm{SN}}=10^{51}\mathrm{ergs}`$, evolving in the ISM with $`n_{0\mathrm{e}}=0.01\mathrm{cm}^{-3}`$, when $`\mathrm{\Sigma }`$ drops to $`5\times 10^{-22}(\mathrm{Wm}^{-2}\mathrm{sr}^{-1}\mathrm{Hz}^{-1})`$, are $`150`$ pc and $`2\times 10^5`$ years, respectively. If SNe occur with a rate of $`1/30`$ year<sup>-1</sup>, then such SNRs occupy $`3.5\times 10^{65}\mathrm{cm}^3`$ of the galactic volume of $`\pi \times (25\mathrm{k}\mathrm{p}\mathrm{c})^2\times (1\mathrm{k}\mathrm{p}\mathrm{c})/6=9.65\times 10^{66}\mathrm{cm}^3`$, or $`1`$ part in $`28`$ $`(3.6\%)`$. This estimate depends on the value of $`E_{\mathrm{SN}}`$ as $`E_{\mathrm{SN}}^{4/3}`$. The probability that a random line of sight will hit such an SNR is $`0.21`$, or $`10`$ in $`47`$, and the dependence on $`E_{\mathrm{SN}}`$ is the same. The last estimate shows that SNRs can play an important role in the origin of the background radio and gamma emissions of our Galaxy. The spectral indices predicted by our model are in accordance with the radio background observations. As concerns the gamma-ray background, our model is in accordance with the "bubbling swiss cheese" model of Pohl & Esposito (1998), though the electron spectral index of $`2.0`$ demanded in their model is in contradiction with our predictions.
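The volume-filling arithmetic can also be restated numerically (a sketch added here, not from the paper; a SN rate of 1/30 per year is the value implied by the quoted numbers):

```python
import math

rate = 1.0 / 30.0                  # galactic SN rate, yr^-1
age = 2.0e5                        # radio-visible lifetime, yr
pc = 3.086e18                      # cm
v_snr = 4.0/3.0*math.pi*(75.0*pc)**3            # one remnant of 150 pc diameter
occupied = rate*age*v_snr                       # -> ~3.5e65 cm^3
kpc = 1.0e3*pc
v_gal = math.pi*(25.0*kpc)**2*(1.0*kpc)/6.0     # -> ~9.65e66 cm^3, as in the text
print(f"{occupied:.2e} cm^3, i.e. 1 part in {v_gal/occupied:.0f}")
```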
## 4 Conclusions:
The model, based on the assumption that the radio emitting electrons are accelerated by the DSA mechanism, explains the statistics of shell-type radio SNRs very well. From this we can conclude that the test particle approximation, and the supposition that the acceleration of electrons takes place from thermal energies, are realized in evolved SNRs in spite of all the theoretical difficulties concerning the physics of collisionless shock waves.
We obtain that $`\psi =p_{\mathrm{inj}}/p_{\mathrm{th}}\approx 3`$. The idea that there is no acceleration at the radiative stage of SNR evolution does not contradict observation.
The presented model is also in accordance with the radio and gamma-ray background observations.
References
Asvarov, A.I. 1992, AZh 69, 753
Asvarov, A.I. 1994, AZh 71, 228
Bell, A.R. 1978, MNRAS 182, 443
Berkhuijsen, E.M. 1986, A&A 166, 257
Blandford, R.D. & Cowie, L.L. 1982, ApJ 260, 625
Bogdan, T.J. & Volk, H.J. 1983, A&A 122, 129
Cox, D.P.& Anderson, P.R. 1982, ApJ 253, 268
Fichtel, C.E., Ozel, M.E., Stone, R.G.,& Sreekumar, P., 1991, ApJ 374,134
Green, D.A., 1998, "A Catalogue of Galactic Supernova Remnants (1998 September version)", MRAO, Cambridge, UK
Mills, B.Y., Turtle, A.J., Little, A.C. & Durdin, J.M., 1984, Austr. J. Phys. 37, 321
Moraal, H.& Axford, W.I, 1983, A&A 125, 204
Pohl, M. & Esposito, J.A. 1998, preprint, astro-ph/9806160
van der Laan, H. 1962, MNRAS 124, 179 |
# Dipolar Interactions and Origin of Spin Ice in Ising Pyrochlore Magnets
## Abstract
Recent experiments suggest that the Ising pyrochlore magnets $`\mathrm{Ho}_2\mathrm{Ti}_2\mathrm{O}_7`$ and $`\mathrm{Dy}_2\mathrm{Ti}_2\mathrm{O}_7`$ display qualitative properties of the spin ice model proposed by Harris et al., Phys. Rev. Lett. 79, 2554 (1997). We discuss the dipolar energy scale present in both these materials and consider how they can display spin ice behavior despite the presence of long range interactions. Specifically, we present numerical simulations and a mean field analysis of pyrochlore Ising systems in the presence of nearest neighbor exchange and long range dipolar interactions. We find that two possible phases can occur: a long range ordered antiferromagnetic one, and another dominated by spin ice features. Our quantitative theory is in very good agreement with experimental data on both $`\mathrm{Ho}_2\mathrm{Ti}_2\mathrm{O}_7`$ and $`\mathrm{Dy}_2\mathrm{Ti}_2\mathrm{O}_7`$. We suggest that the nearest neighbor exchange in $`\mathrm{Dy}_2\mathrm{Ti}_2\mathrm{O}_7`$ is antiferromagnetic and that spin ice behavior is induced by long range dipolar interactions.
An exciting development has occurred in the last two years with the discovery of an apparent analogy between the low temperature physics of the geometrically frustrated Ising pyrochlore compounds $`\mathrm{Ho}_2\mathrm{Ti}_2\mathrm{O}_7`$ and $`\mathrm{Dy}_2\mathrm{Ti}_2\mathrm{O}_7`$ (so called "spin ice" materials), and proton ordering in real ice. The magnetic cations $`\mathrm{Ho}^{3+}`$ and $`\mathrm{Dy}^{3+}`$ of these particular materials reside on the pyrochlore lattice of corner sharing tetrahedra. Single-ion effects conspire to make their magnetic moments almost ideally Ising-like, but with their own set of local axes. In particular, each moment has its local Ising axis along the line connecting its site to the middle of a tetrahedron to which it belongs (see inset of Fig. 1).
In a simple model of nearest neighbor ferromagnetic (FM) interactions, such a system has the same "ice rules" for the construction of its ground state as those for the ground state of real ice. In both cases, these rules predict a macroscopically degenerate ground state, a feature that a number of geometrically frustrated systems possess.
In $`\mathrm{Ho}_2\mathrm{Ti}_2\mathrm{O}_7`$, $`\mu `$SR data indicates a lack of ordering down to $`50`$ mK despite a Curie-Weiss temperature $`\theta _{\mathrm{cw}}\approx 1.9`$ K, while single crystal neutron scattering data suggests the development of short-range FM correlations, but the absence of ordering down to at least 0.35 K. $`\mathrm{Ho}_2\mathrm{Ti}_2\mathrm{O}_7`$ also displays field dependent behavior consistent with a spin ice picture. Quite dramatically, thermodynamic measurements on $`\mathrm{Dy}_2\mathrm{Ti}_2\mathrm{O}_7`$ show a lack of any ordering feature in the specific heat data, with the measured ground state entropy within 5% of Pauling's prediction for the entropy of ice.
However, both spin ice materials contain further interactions additional to the nearest neighbor exchange. Often, rare earth cations can have appreciable magnetic moments and, consequently, magnetic dipole-dipole interactions of the same order as, if not larger than the exchange coupling, can occur. Furthermore, it has been suggested that the nearest neighbor exchange interaction in $`\mathrm{Ho}_2\mathrm{Ti}_2\mathrm{O}_7`$ is actually antiferromagnetic (AF) , which by itself should cause a phase transition to a long range ordered ground state. Thus, how these systems actually display spin ice-like behavior is most puzzling. For example, one might naively expect that the long-range and anisotropic spin-space nature of the dipolar interactions would introduce so many constraints that a large degree of the degeneracy present in the simple nearest-neighbor ferromagnetic spin-ice model would be removed, and induce long-range order. It is this issue that we wish to address in this paper.
A previous attempt to consider dipolar effects in Ising pyrochlores was made by Siddharthan et al. In that work, the dipole-dipole interaction was truncated beyond five nearest neighbor distances, and a sharp transition between paramagnetism and a partially ordered phase (where rapid freezing occurs) was observed for interaction parameters believed appropriate for $`\mathrm{Ho}_2\mathrm{Ti}_2\mathrm{O}_7`$. However, we argue that the truncation of dipole-dipole interactions can be misleading, and may introduce spurious features in various thermodynamic properties. For example, we find that the sharp feature observed in the specific heat for truncation beyond five nearest neighbor distances is softened and rounded for truncation beyond the tenth nearest neighbor shell, and the observed dynamical freezing is pushed to lower temperatures. As we show below, in the limit of infinite range dipoles, the interaction parameters of Siddharthan et al. yield spin ice.
In this work, we consider the interplay between nearest neighbor exchange and dipolar interactions by taking into account the long range (out to infinity) nature of the dipolar interactions through the use of Ewald summation techniques. Our Monte Carlo simulations and mean field results show that dipolar forces are remarkably adept at producing spin ice physics over a large region of parameter space.
Some of our main conclusions are shown in Fig. 1. For Ising pyrochlores, the dipole-dipole interaction at nearest neighbor is FM, and therefore favors frustration. Beyond nearest neighbor, the dipole-dipole interactions can be either FM or AF, or interestingly, even multiply valued, depending on the neighbor distance. Defining the nearest neighbor dipole-dipole interaction as $`D_{\mathrm{nn}}`$ and the nearest neighbor exchange as $`J_{\mathrm{nn}}`$, our Monte Carlo results indicate that spin ice behavior persists in the presence of AF exchange up to $`J_{\mathrm{nn}}/D_{\mathrm{nn}}\approx -0.91`$. For $`J_{\mathrm{nn}}/D_{\mathrm{nn}}<-0.91`$ we find a second order phase transition to the globally doubly degenerate $`\mathbf{q}=0`$ phase of the nearest neighbor AF exchange-only model, where all spins point either all into, or all out of, a given tetrahedron.
Our Hamiltonian describing the Ising pyrochlore magnets is as follows,
$$H=-J\sum _{\langle ij\rangle }\mathbf{S}_i^{z_i}\cdot \mathbf{S}_j^{z_j}+Dr_{\mathrm{nn}}^3\sum _{i>j}\left[\frac{\mathbf{S}_i^{z_i}\cdot \mathbf{S}_j^{z_j}}{|\mathbf{r}_{ij}|^3}-\frac{3(\mathbf{S}_i^{z_i}\cdot \mathbf{r}_{ij})(\mathbf{S}_j^{z_j}\cdot \mathbf{r}_{ij})}{|\mathbf{r}_{ij}|^5}\right],$$
(1)
where the spin vector $`\mathbf{S}_i^{z_i}`$ labels the Ising moment of magnitude $`|S|=1`$ at lattice site $`i`$, with local Ising axis $`z_i`$. Because the local Ising axes belong to the set of $`\langle 111\rangle `$ vectors, the nearest neighbor exchange energy between two spins $`i`$ and $`j`$ is $`J_{\mathrm{nn}}\equiv J/3`$. The dipole-dipole interaction at nearest neighbor is $`D_{\mathrm{nn}}\equiv 5D/3`$, where $`D`$ is the usual estimate of the dipole energy scale, $`D=(\mu _0/4\pi )g^2\mu ^2/r_{\mathrm{nn}}^3`$. For both $`\mathrm{Ho}_2\mathrm{Ti}_2\mathrm{O}_7`$ and $`\mathrm{Dy}_2\mathrm{Ti}_2\mathrm{O}_7`$, $`D_{\mathrm{nn}}\approx 2.35`$ K.
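The geometric factors quoted above follow directly from the tetrahedral geometry; a short verification (added here, not from the paper):

```python
import numpy as np

# Corners of one tetrahedron of the pyrochlore lattice (units of the cubic cell edge)
r = np.array([[0, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1]]) / 4.0
z = r.mean(axis=0) - r                   # local <111> Ising axes point to the center
z /= np.linalg.norm(z, axis=1)[:, None]

rhat = (r[1] - r[0]) / np.linalg.norm(r[1] - r[0])
print(z[0] @ z[1])                                   # -> -1/3, hence J_nn = J/3
print(z[0] @ z[1] - 3*(z[0] @ rhat)*(z[1] @ rhat))   # -> +5/3, hence D_nn = 5D/3
```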
It is well known in the field of electrostatic interactions that the dipole-dipole interaction is difficult to handle due to its $`1/r^3`$ nature. In general, a lattice summation of dipole-dipole interactions is conditionally convergent, and must be considered with care. In order to include the important long range nature of the dipole-dipole interaction, we have implemented the well known Ewald method within our simulation technique, in order to derive an effective dipole-dipole interaction between spins within our simulation cell. Unlike in dipolar fluid simulations, the fixed lattice positions of our spins allow the effective interactions between all moments to be calculated only once, after which a numerical simulation can proceed as normal.
A standard Metropolis algorithm was used in our Monte Carlo simulations. We used a conventional cubic cell for the pyrochlore lattice, which contains 16 spins. In general, we found it sufficient to simulate up to $`4\times 4\times 4`$ cubic cells (i.e. $`L=4`$, or 1024 spins) with up to $`10^6`$ Monte Carlo steps per spin when necessary. Thermodynamic data were collected by starting the simulations at high temperatures and cooling very slowly.
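A minimal single-spin-flip Metropolis sweep of the kind described here might look as follows (a sketch added for illustration, not the authors' code; it assumes the Ewald-derived couplings have been precomputed into a symmetric, zero-diagonal N x N matrix `J_eff`):

```python
import numpy as np

def mc_sweep(sigma, J_eff, T, rng):
    """One Metropolis sweep over Ising variables sigma_i = +/-1; the energy
    convention assumed here is E = 0.5 * sigma @ J_eff @ sigma."""
    n = sigma.size
    h = J_eff @ sigma                          # local fields h_i = sum_j J_eff[i,j]*sigma_j
    for _ in range(n):
        i = rng.integers(n)
        dE = -2.0 * sigma[i] * h[i]            # energy change if spin i is flipped
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            sigma[i] = -sigma[i]               # accept the flip ...
            h += 2.0 * sigma[i] * J_eff[:, i]  # ... and update all local fields
    return sigma
```

Here `rng` is a numpy Generator, e.g. `np.random.default_rng(0)`; thermodynamic averages would then be accumulated over many such sweeps after slow cooling and equilibration.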
Referring to Fig. 1, the characterization of a system as having spin ice behavior was carried out by determining the entropy, via numerical integration of the specific heat divided by temperature. Pauling's argument for the entropy of ice yields $`R[\mathrm{ln}2-(1/2)\mathrm{ln}(3/2)]`$ or $`4.07`$ J mol<sup>-1</sup> K<sup>-1</sup>. We find for $`J_{\mathrm{nn}}/D_{\mathrm{nn}}=-0.91`$ (our spin ice data point closest to the phase boundary in Fig. 1) this value for the entropy to within $`3\%`$, using a system size $`L=4`$.
Our thermodynamic data indicate that when the nearest neighbor exchange is AF and large compared to the dipolar interactions, the system undergoes a second order phase transition to an all-in or all-out $`\mathbf{q}=0`$ ground state, as alluded to earlier. This AF phase persists slightly beyond the point where the (FM) nearest neighbor dipolar interaction is stronger than the nearest neighbor AF exchange.
In the spin ice regime, our specific heat data has a number of interesting features which help shed light on the effect of long range dipole-dipole interactions. In general, each data set shows qualitatively the same broad peak as observed in the nearest neighbor FM exchange model, and vanishes at high and low temperatures. We detect very little system size dependence upon comparison of data from $`L=2,3,4`$ simulation sizes. In particular, there is no noticeable size dependence of the specific heat maximum, nor its position. As the AF/spin ice phase boundary is approached from the spin ice side, the specific heat peak begins to narrow and more importantly, both the peak height and peak position begin to vary. As we discuss below, this has important ramifications for the interpretation of experimental data.
In Fig. 2 we plot the dependence of the specific heat peak height ($`C_{\mathrm{peak}}`$) and the temperature at which it occurs ($`T_{\mathrm{peak}}`$) on the ratio of the nearest neighbor exchange and dipole-dipole energies, $`J_{\mathrm{nn}}/D_{\mathrm{nn}}`$. Note that in the regime of large nearest neighbor FM coupling, the peak height plateaus to the value one observes in the nearest neighbor FM exchange-only model.
Indeed our data suggests that when the exchange becomes FM, the nearest neighbor effective bond energy is large enough to dominate the excitations of the system. This can be more dramatically seen in Fig. 3, where we have rescaled the specific heat curves for a number of interaction parameter values in terms of the effective nearest neighbor interaction $`J_{\mathrm{eff}}\equiv J_{\mathrm{nn}}+D_{\mathrm{nn}}`$ in the regime $`J_{\mathrm{nn}}/D_{\mathrm{nn}}>0`$.
This figure shows that in terms of an effective energy scale, the medium to long range effects of the dipolar interactions are "screened" by the system, and one recovers qualitatively the short range physics of the nearest neighbor spin ice model. Remarkably, inclusion of long range dipolar interactions appears to have the effect of removing a tendency towards ordering which can come from a short range truncation of the dipole-dipole interaction.
As the nearest neighbor exchange interaction becomes AF, we find that the approximate "collapse" onto a single energy scale becomes less accurate, with features in the specific heat becoming dependent on $`J_{\mathrm{nn}}/D_{\mathrm{nn}}`$ in a more complicated manner. It is within this regime that we believe that both $`\mathrm{Ho}_2\mathrm{Ti}_2\mathrm{O}_7`$ and $`\mathrm{Dy}_2\mathrm{Ti}_2\mathrm{O}_7`$ exist.
Since $`D_{\mathrm{nn}}`$ is a quantity which can be calculated once the crystal field structure of the magnetic ion is known, the nearest neighbor exchange $`J_{\mathrm{nn}}`$ is the only adjustable parameter in our theory. Fig. 2 enables us to test in two independent ways the usefulness of our approach to the long range dipole problem in spin ice materials.
If we consider $`\mathrm{Dy}_2\mathrm{Ti}_2\mathrm{O}_7`$ for example, specific heat measurements by Ramirez et al. indicate a peak height $`C_{\mathrm{peak}}^{\mathrm{Dy}}`$ of $`2.72`$ J mol<sup>-1</sup>K<sup>-1</sup>. Given that $`D_{\mathrm{nn}}\approx 2.35`$ K for this material, the left hand plot of Fig. 2 indicates a nearest neighbor exchange coupling $`J_{\mathrm{nn}}\approx -1.2`$ K. The same experimental specific heat data shows that this peak occurs at a temperature of $`T_{\mathrm{peak}}^{\mathrm{Dy}}\approx 1.25`$ K. Using the plot of $`T_{\mathrm{peak}}`$ in Fig. 2, we independently arrive at approximately the same conclusion for the value of the nearest neighbor exchange. Thus, we predict that AF exchange is present in $`\mathrm{Dy}_2\mathrm{Ti}_2\mathrm{O}_7`$ with $`J_{\mathrm{nn}}\approx -1.2`$ K. If there were no AF exchange present in this system, our results in Fig. 1 imply that there would be a peak in the specific heat at a temperature of at least $`2.3`$ K, which is not observed experimentally. Our best fit to the specific heat data of $`\mathrm{Dy}_2\mathrm{Ti}_2\mathrm{O}_7`$ by Ramirez et al. is shown in Fig. 4, where we find good agreement between theory and experiment for $`J_{\mathrm{nn}}=-1.24`$ K.
Specific heat measurements on a powdered sample of $`\mathrm{Dy}_2\mathrm{Ti}_2\mathrm{O}_7`$ in a magnetic field were also reported in Ref. . Three field independent peaks were observed at low temperature. For a large field in the $`\langle 110\rangle `$ direction, two spins on each tetrahedron are pinned by the field, while the other two remain free, since their Ising axes are perpendicular to the applied field. Due to the dipolar interaction, there will be a coupling between the fluctuating spins on these two sub-lattices. Our preliminary simulations on small lattice sizes suggest a field independent ordering at low temperature, as observed in experiment.
Considering $`\mathrm{Ho}_2\mathrm{Ti}_2\mathrm{O}_7`$, the experimental data on its thermodynamic properties are not so categorical. The specific heat data of Siddharthan et al. indicate a feature at $`0.8`$ K, although it has been suggested that this could be due to an additional contribution in this temperature range from an anomalously large hyperfine coupling in $`\mathrm{Ho}^{3+}`$. Nevertheless, using the plot of $`T_{\mathrm{peak}}`$ in Fig. 2, we find a substantial AF exchange coupling of the same order of magnitude as in the Dy compound. We note that Siddharthan et al. and den Hertog et al. find a similar order of magnitude for $`J_{\mathrm{nn}}`$ from the analysis of magnetization measurements. Furthermore, our numerical simulations within this region of parameter space indicate a Curie-Weiss temperature of $`2`$ K, in agreement with the experimental estimate by Harris et al. of $`\theta _{\mathrm{cw}}\approx 1.9`$ K.
$`\mathrm{Tb}_2\mathrm{Ti}_2\mathrm{O}_7`$ is an Ising pyrochlore system of similar type to the Ho<sup>3+</sup> and Dy<sup>3+</sup> based materials, but the Ising anisotropy is reduced to much lower temperature due to narrowly spaced crystal field levels. While this makes the interpretation of experimental data more difficult, initial estimates of the nearest neighbor exchange and dipole moment yield $`J_{\mathrm{nn}}/D_{\mathrm{nn}}\approx -1`$, placing this system very close to the phase boundary of Fig. 1. Indeed, $`\mu `$SR measurements suggest that $`\mathrm{Tb}_2\mathrm{Ti}_2\mathrm{O}_7`$ fails to order down to 70 mK, and thus it would appear that a spin ice picture for this material cannot be a priori ruled out.
While we believe our approach yields a reasonably successful quantitative theory of spin ice behavior in Ising pyrochlores, there still remains the question of why long range dipolar interactions do not appear to lift the macroscopic degeneracy associated with the ice rules, and select an ordered state.
Mean-field theory provides a more quantitative basis for examining this issue. Following the approach used in Refs. , we find that the soft modes for Ising pyrochlore systems described by Eq. 1 consist of two very weakly dispersive branches (less than 1% dispersion) over the whole Brillouin zone (except at $`\mathbf{q}\approx 0`$). Such a set of quasi-dispersionless branches is very similar to the two completely dispersionless soft branches of the nearest-neighbor FM spin ice model. Consequently, both nearest-neighbor FM and dipolar spin ice behave almost identically over the whole temperature range spanning $`O(w)\lesssim T<\mathrm{\infty }`$, where $`w`$ is the bandwidth of the soft branch of dipolar spin ice. We note that this near dispersionless behavior is only recovered asymptotically as the long range dipoles are included out to infinity. The lifting of the degeneracy at the 1% level, in the apparent absence of any small parameters in the theory, is at this point not completely understood. We do not believe it is due to numerical or computational error. A partial explanation may be that $`\langle 111\rangle `$ Ising anisotropy, the $`1/r^3`$ long-range nature of dipolar interactions, and the specific relationship between the topology of the pyrochlore lattice and the anisotropic (spin-space) coupling of dipolar interactions, combine in a subtle manner to produce a spectrum of soft modes that approximates very closely the spectrum of the nearest-neighbor ferromagnetic spin ice model of Harris et al. In other words, there must be an almost exact symmetry fulfilled in this system when long-range dipolar interactions are taken into account. However, the same long-range nature of these interactions renders it difficult to construct a simple and intuitive picture of their effects, and we have not been able to identify such an "almost exact" symmetry.
Also, in these systems, the absence of soft fluctuations (Ising spins) combined with the macroscopic degeneracy associated with the ice rules may be such that correlations associated with a "true" ground state are dynamically inhibited from developing. For energetic reasons, states obeying the ice rules are favored down to low temperature, by which time such large energy barriers exist that evolution towards the true ground state is never achieved. In simulation terms, at low temperature the Boltzmann weights for local spin flips "towards" this state are simply too small.
In conclusion, using Ewald summation techniques we have considered the effects of long range dipole-dipole interactions on the magnetic behavior of Ising pyrochlore systems. Our results show that spin ice behavior is recovered over a large region of parameter space, and we find quantitative agreement between our approach and experimental data on spin ice materials.
We thank S. Bramwell, B. Canals, and P. Holdsworth for useful discussions. We are grateful to A. Ramirez for making available his specific heat data. M.G. acknowledges financial support from NSERC of Canada, Research Corporation for a Research Innovation Award and a Cottrell Scholar Award, and the Province of Ontario for a Premier Research Excellence Award. |
## 1 Introduction
X-ray emission in the Universe arises in intense gravity environments. At high galactic latitudes, active galactic nuclei (AGN) and other emission line galaxies dominate the source counts at all explored fluxes, with galaxy clusters being the second most abundant source class.
Recent ROSAT deep surveys (, , ) have shown that most of the soft X-ray volume emissivity in the Universe arises at redshifts $`z>1-2`$ (). The AGN and star formation rates per unit volume follow a remarkably similar evolution in the Universe () and therefore can both be used as tracers of the evolution of large-scale structure in the Universe. Using AGN and star-forming galaxies as tracers of cosmic inhomogeneities is most sensitive to intermediate redshifts ($`z\sim 2`$), providing a critical link between cosmic microwave background studies (which map the $`z\sim 1000`$ Universe) and local galaxy surveys ($`z\sim 0`$).
In this paper we briefly review the current status of the use of X-ray observations towards the study of large-scale structure. More details are presented in . Then we explore possibilities of making qualitative progress in this field by carrying out different types of X-ray surveys.
## 2 What we know so far
At galactic latitudes $`|b|>20^{\circ }`$ the contribution from the Galaxy to the X-ray sky is small: less than 10% of the X-ray background above 2 keV is due to galactic emission, absorption is negligible above this photon energy, and a census of X-ray sources down to any flux limit contains less than 10-20% galactic stars. Observations of the X-ray background at high galactic latitudes and photon energies above 2 keV can therefore be used to map the extragalactic X-ray sky.
### 2.1 The isotropy of the X-ray background
The all-sky distribution of the X-ray background for cosmological purposes has been best mapped by the HEAO-1 mission. A galactic anisotropy dominates the large-scale anisotropy, but this can be modelled out (). A dipole contribution is detected in the X-ray sky, in rough alignment with the direction of our motion with respect to the Cosmic Microwave Background frame (, ). The amplitude of this dipole accounts for both the kinematical effect of our motion (the Compton-Getting effect) and the excess X-ray emissivity associated with the structures which are pulling us. These two effects are expected to be of the same order () and the analysis done in shows this to be the case. However, in an analysis of the ROSAT all-sky data at lower photon energies (which have the disadvantage of a larger contamination from the Galaxy), Plionis & Georgantopoulos find a dipole several times larger than the expected kinematical dipole. The difference between both results might be partly affected by the elimination of X-ray bright clusters in the Scharf et al analysis, as clusters are known to be a strongly biased population (). The bias parameter derived from the XRB dipole is large ($`b_X\sim 3-6`$).
Treyer et al have analyzed higher order multipoles of the HEAO-1 A2 X-ray background. The discrete nature of the XRB contributes a constant term to all multipoles which scales as $`S_{cut}^{0.5}`$, where $`S_{cut}`$ is the minimum flux at which sources have been excised. Treyer et al detect a signal growing towards lower-order multipoles which is consistent with a gravitational collapse picture, as predicted by . Excluding the dipole, this analysis yields a moderate bias parameter for the X-ray sources ($`b_X\sim 1-2`$).
On smaller (a few degrees) angular scales, probing linear scales of hundreds of Mpc, the "excess fluctuations" technique has often been used in the analysis of the XRB. The way this works is by modelling the distribution of XRB intensities on a given angular scale in terms of confusion noise, plus a contribution coming from source clustering (). These studies have so far yielded only upper limits for the excess fluctuations: $`<2\%`$ on scales of $`5^{\circ }\times 5^{\circ }`$ () and $`<4\%`$ on scales of $`1^{\circ }\times 2^{\circ }`$ ().
On yet smaller (a few arcmin) angular scales, which probe the galaxy-galaxy clustering scale, data from X-ray imaging telescopes have been used. The autocorrelation function of the XRB on these scales should reflect the clustering of high redshift X-ray sources in the nonlinear regime (). Soltan et al have found a strong positive detection for angular separations of 0.3-20 arcmin, which is, however, difficult to interpret, as both the Galaxy and the Local Supercluster could contribute to this.
### 2.2 Clustering of X-ray selected AGN
Studying the clustering of X-ray selected AGN is likely to be the most direct way to map the structure of the X-ray sky. At soft X-ray energies this requires fairly deep surveys (going below $`10^{-14}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$), as otherwise very few objects at $`z>1`$, where most of the X-ray volume emissivity is produced, would be sampled.
Carrera et al have analysed a set of "pencil beam" medium and deep ROSAT images containing 200 X-ray selected AGN, sampling the redshift interval $`z\sim 0-2`$. The net result is the detection of X-ray selected AGN clustering which is relatively weak (the 3D correlation length is $`r_0<5h^{-1}\mathrm{Mpc}`$, for $`h=H_0/\left(100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}\right)`$) and strongly evolving with redshift (faster than comoving). At much brighter flux limits, the ROSAT All Sky Survey sources have been used to derive a 2D correlation function that, when translated to 3D with an appropriate catalogue depth, yields a larger correlation length ($`r_0\sim 6h^{-1}\mathrm{Mpc}`$).
### 2.3 Do X-rays trace mass?
In we compile various measurements of the bias parameter for X-ray sources, and in particular for X-ray selected AGN and the XRB. The bias parameter is likely to be redshift dependent. For a simple model where all objects form at the same early redshift, Fry finds $`b_X\left(z\right)=b_X\left(0\right)+z\left(b_X\left(0\right)-1\right)`$, which implies that at high $`z`$ the bias parameter could be large.
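As an added aside (not in the original text), this expression follows from the standard linear-growth argument $`b_X(z)=1+[b_X(0)-1]/D(z)`$ if an Einstein-de Sitter growth factor $`D(z)=(1+z)^{-1}`$ is assumed:
$$b_X(z)=1+[b_X(0)-1](1+z)=b_X(0)+z[b_X(0)-1].$$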
The other effect that comes into play, especially when using the XRB, is that at low redshift clusters become more numerous and their imprint on the local XRB features becomes more important. As clusters are a strongly biased source population ( estimate $`b_X\left(0\right)\approx 4`$), it is not surprising that the amplitude of the XRB dipole calls for large bias factors, while higher order multipoles (sensitive to more distant sources) do not.
Within present knowledge and uncertainties, the bias factor for the AGN, as the dominant X-ray source population, appears to take moderate values $`b_X=1-2`$ at low to intermediate redshifts. Indeed, at higher redshifts the AGN population might be more strongly biased.
## 3 Deep hard X-ray surveys
Deep surveys, particularly at hard photon energies, are a key ingredient of forthcoming studies of the large-scale structure of the X-ray Universe. Currently popular models for the X-ray background assume a population of AGN with a distribution of absorbing columns (, , ), where most of the X-ray energy produced by accreting black holes is absorbed and re-radiated in the infrared (, ). Several claims have been made that the absorbed AGN population evolves differently from the unabsorbed one (, , ). As most of the energy content of the XRB resides at $`\sim 30`$ keV, it is crucial to explore harder photon energies than previously achieved with the ROSAT deep surveys.
XMM is the most sensitive X-ray observatory with which to survey the X-ray sky at photon energies above 2 keV. Although its point-spread function is significantly worse than that of Chandra, at energies above 2 keV both instruments are photon-starved, and the much larger collecting area of XMM then makes it more efficient. We have carried out extensive simulations of XMM EPIC observations at various depths and found that the deepest planned XMM observations (PVCal, GTO and AO-1), reaching 350-400 ks, will not be confusion noise limited. Figure 1 shows the resulting image in the 2-10 keV band of a simulation of a 350 ks XMM EPIC-pn exposure in a blank field (using the standard model ). As the EPIC field of view is $`30^{\prime }`$ in diameter, we expect to find $`\sim 300`$ sources in such a pointing once vignetting has been corrected for. Most of these sources are expected to lie at redshifts $`z>1-2`$, from which the X-ray volume emissivity in hard X-rays will be derived and compared with the one, assumed so far, obtained with ROSAT for soft X-ray photons.
## 4 X-ray background surveys
The measurement of large-scale structure in the X-ray Universe does not necessarily require individually resolving all sources in very large areas of the sky down to very faint fluxes. If the X-ray volume emissivity as a function of $`z`$ can be derived from deep surveys, fluctuation analyses of the XRB can also be used (). If the X-ray volume emissivity peaks at some intermediate redshift, as it does in soft X-rays ($`z\sim 1-2`$), then for a fixed angular scale the XRB fluctuations are related almost uniquely to the value of the power spectrum of the inhomogeneities in the Universe at a single comoving wavenumber. The scales to be probed by XMM (from a few to a few tens of arcmin) will provide a measurement of the $`k\sim (0.1-1)h\mathrm{Mpc}^{-1}`$ regime. The power spectrum is expected to peak at $`k\sim 0.05h\mathrm{Mpc}^{-1}`$, which corresponds to an angular scale of $`\sim 1\mathrm{deg}`$.
In we argue that to detect the excess fluctuations of the XRB due to source clustering for a beam size of 1 $`\mathrm{deg}^2`$, a large fraction of the sky needs to be surveyed. To prove that this is feasible, we have carried out simulations of hard X-ray source populations over the whole sky with a simple clustering model for the sources, and measured XRB intensities (details in ). These intensities are then "measured" with a proportional counter of 1 m<sup>2</sup> effective area during one complete 6-month scan of the sky at 100% efficiency, including a stable particle background, in a manner similar to the Ginga LAC observations.
Figure 2 shows one of these simulations, where the clustering has been modeled with a gaussian correlation function with comoving evolution. Ignoring data within $`|b|<20^{\circ }`$, Figure 3 shows the histograms of the XRB intensities in 3 cases: absence of clustering, linear clustering evolution, and comoving clustering evolution ($`b_X=1`$ in all cases). The distributions are clearly distinguishable, and the excess fluctuations can be determined with very high accuracy. Indeed, with a significantly smaller collecting area and similar circumstances, excess fluctuations can still be detected, but measured with larger statistical uncertainties.
One such survey will also benefit other approaches to measure large-scale structure in the Universe, particularly the multipole expansion, especially if a sensitive source survey could also be carried out. In this way, those $`1\mathrm{deg}^2`$ regions where sources above a given flux are detected could be masked out for the multipole analysis, as discussed in .
## 5 Large-area surveys
Direct measurements of the large-scale structure of the Universe at redshifts $`z\sim 1-2`$ via X-ray observations require surveying areas of hundreds of square degrees to a sufficient depth. Using galaxy clusters as tracers of the large-scale structure of the Universe presents the difficulty of the faintness of most of these objects beyond redshift $`z\sim 1`$. Even XMM would require a large amount of time to do a sensible mapping of galaxy clusters out to these redshifts.
Using AGN has the advantage that they have strong positive evolution up to $`z\sim 1-2`$. In order to reach the redshifts where most of the X-ray emissivity is produced ($`z\sim 1-2`$), AGN surveys have to go down to, at least, a 2-10 keV flux of $`10^{-14}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. Figure 4 illustrates the redshift distribution for different flux limits assuming the model . The advantage of X-ray AGN surveys over similar optical work (e.g. the Sloan Digital Sky Survey) is that with hard X-rays the absorbed AGNs can also be used up to earlier times, if they evolve more strongly than the unabsorbed broad-line objects.
Mapping a 100 $`\mathrm{deg}^2`$ contiguous area of the sky with the XMM EPIC cameras, which have the largest field of view among all operating X-ray facilities, to a depth of $`\sim 20`$ ks (needed to get reliable detections at $`>10^{-14}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$) will require 10 Msec of effective exposure time with XMM. This will collect $`\sim 10^5`$ sources.
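The bookkeeping behind this figure is simple (a rough check added here; the field-of-view area is derived from the 30 arcmin diameter quoted above):

```python
import math

fov = math.pi*0.25**2              # EPIC field of view (30 arcmin diameter), deg^2
pointings = 100.0/fov              # tiles needed for a contiguous 100 deg^2 area
print(f"{pointings:.0f} pointings x 20 ks = {pointings*20e3/1e6:.0f} Msec")
```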
A more efficient way to carry out that project is by means of a dedicated mission with a wide-field X-ray telescope. The Panoram-X mission proposed in would cover the whole sky to a depth of $`\sim 10^{-14}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. Of course, finding redshifts for a reasonable fraction of the several tens of millions of sources to be discovered by such a mission is simply impossible. A full analysis of the spatial distribution of X-ray selected AGN would then require modelling the 2D distribution from these maps, using redshift distributions from the deep hard X-ray surveys.
## 6 Outlook
X-ray cosmology is just in its infancy. Basic questions, such as the value of the bias parameter for different classes of X-ray sources, are still partly unanswered. However, the fact that most of the X-ray sky is dominated by extragalactic sources, of which AGN are the major component, makes the X-ray sky especially suited for cosmology at intermediate redshifts.
Making quantitative progress in this field requires not only proper use of existing or planned observatory-type facilities (i.e., Chandra and XMM), but probably also dedicated missions to survey all (or most of) the sky. X-ray cosmology is now in a position to make specific predictions for the structure of the X-ray universe (once hard X-ray surveys have been carried out with Chandra and XMM). These surveys can then be designed and optimized to obtain detections and precise measurements of the large-scale structure of the Universe at redshifts $`z\sim 1-2`$. That would really be a major boost for cosmology at intermediate redshifts.
# Physical Consequences of Moving Faster than Light in Empty Space
## I Introduction
In , based on the theory of multifractal time and space proposed in , the main features of the relative motion of systems close to inertial (“almost” inertial systems) in a space with fractal dimension of time $`d_t=d_t(𝐫,t)=1+\epsilon ,|\epsilon |\ll 1`$ were formulated, and it was shown that in such systems motion of a body with any velocity (from zero up to infinite) becomes possible. For the total momentum and energy of a moving object the following relations were obtained
$$p=\beta ^{*-1}m_0v=\frac{m_0v}{\sqrt[4]{\beta ^4+4a_0^2}},\qquad E=E_0\sqrt{\frac{v^2/c^2}{\sqrt{\beta ^4+4a_0^2}}+1}$$
(1)
where $`\beta =\left|1-v^2/c^2\right|^{1/2}`$, $`a_0=\sum _i\beta _i𝐅_{0i}\frac{𝐯}{c}ct`$ and $`𝐅_i=dL_i/d𝐫`$, with $`L_i`$ standing for the Lagrangian density of the $`i`$th physical field, $`t`$ for time, and $`\beta _i`$ being dimensional factors for the $`i`$th field, providing dimensionlessness of $`\epsilon `$: $`\epsilon =\sum _i\beta _iL_i`$. However, the question of what new physical phenomena can be observed from the point of view of this theory if a body’s velocity exceeds that of light in empty space was temporarily put aside. The present paper is devoted to filling this gap and deals with several consequences of the proposed multifractal concept of time and relations (1) that allow for experimental verification.
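The qualitative behavior of $`E(v)`$ implied by relations (1) is easy to tabulate numerically. The following minimal sketch (with purely illustrative values $`a_0=10^{-6}`$ and $`E_0=c=1`$, which are assumptions made here and not values from the text) exhibits the features discussed below: agreement with the special-relativistic $`E=E_0/\beta `$ for $`v<c`$, a sharp maximum $`E\approx E_0/\sqrt{2a_0}`$ at $`v=c`$, and the finite limit $`E\to E_0\sqrt{2}`$ as $`v\to \infty `$.

```python
import numpy as np

def total_energy(v, a0=1e-6, E0=1.0, c=1.0):
    """E(v) from Eq. (1); a0, E0, c are illustrative placeholder values."""
    beta2 = abs(1.0 - (v / c) ** 2)                   # beta^2 = |1 - v^2/c^2|
    beta_star2 = np.sqrt(beta2 ** 2 + 4.0 * a0 ** 2)  # (beta*)^2
    return E0 * np.sqrt((v / c) ** 2 / beta_star2 + 1.0)

for v in [0.5, 0.999, 1.0, 1.001, 10.0, 1e4]:
    print(f"v = {v:8g} c   E/E0 = {total_energy(v):.4g}")
# E grows toward ~E0/sqrt(2*a0) ~ 707*E0 at v = c,
# then decreases toward sqrt(2)*E0 as v -> infinity.
```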
## II Vavilov-Cherenkov-like radiation at $`v>c`$
According to the main statements of , the small noninertiality of moving frames of reference arising from time being multifractal, and the openness of any real system (for the statistical theory of open systems see ), must lead to small deviations from the conservation laws. In particular, the law of energy conservation is fulfilled only approximately. Based on the laws of electrodynamics, which remain valid in the theory of multifractal time at any velocities, it was shown that when a body’s velocity $`v`$ reaches that of light $`c`$ and continues increasing, the energy of the moving body reaches its maximum value and then begins diminishing (see (1)). As this takes place, the energy lost by the body to span the small deviation from energy conservation is emitted to the surrounding matter (space) through radiation, quite similar to Vavilov-Cherenkov radiation. But, unlike in the tachyon theory, here the motion faster than light is performed by ordinary particles. Moreover, free motion is then always a motion with acceleration, for in this region it is growth of velocity that is accompanied by a decrease of energy (we still consider the principle of energy minimum in the equilibrium state to be valid). In the region of velocities greater than $`c`$, a source of energy is needed in order to decelerate a moving particle! Thus, increasing the velocity beyond the speed of light leads to energy loss accompanied by radiation. With the energy of this radiation included, the energy conservation law is “almost” fulfilled
$$E=E_0\sqrt{\frac{v^2}{c^2\beta ^{*2}}+1}+E_{rad},\qquad v>c$$
(2)
It can be shown that at velocities greater than the speed of light the perturbation of the multifractal structure of time still remains negligibly small, just as is the case for particles at rest.
## III Does causality remain in the $`v>c`$ region?
In special relativity, the necessary condition for two events that take place in a fixed frame of reference at points $`x_1`$ and $`x_2`$ at times $`t_1`$ and $`t_2`$, and for the same events in a moving frame, to be causally connected is the validity of the following inequalities
$$t_2-t_1>0,\qquad t_2^{\prime }-t_1^{\prime }=\frac{t_2-t_1}{\sqrt[4]{\beta ^4+4a_0^2}}\left(1-\frac{v}{c^2}v_{inf}\right)>0$$
(3)
where $`v_{inf}`$ is the velocity of the influence spreading between the points $`x_1`$ and $`x_2`$. If $`v_{inf}<c^2/v`$, the causal connection between the two events remains at any speed of motion $`v\to \infty `$. Though, when $`v=\infty `$, the rate of influence must be zero. Nevertheless, if this inequality is not fulfilled ($`v_{inf}v>c^2`$), causality does not remain. This violation of causality for different events is one of the main arguments against the tachyon theory, restricting, in particular, the free will of objects. Fortunately, in our model causality does not break down, for the following reason. Any body moving faster than light radiates energy, and the greater its speed the less its energy. Hence, motion with velocity greater than that of light is always a motion with acceleration and thus is not “almost inertial” in our terminology. Therefore, in terms of the assumptions made, any comparison between frames of reference moving with acceleration and fixed ones has no physical sense.
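As a simple worked illustration of (3) (with numbers chosen here only for definiteness), let $`v=2c`$: an influence propagating with $`v_{inf}=0.4c`$ gives $`vv_{inf}/c^2=0.8<1`$, so the temporal order of the two events is preserved in the moving frame, whereas $`v_{inf}=0.6c`$ gives $`vv_{inf}/c^2=1.2>1`$ and the sign of $`t_2^{\prime }-t_1^{\prime }`$ would be inverted.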
Let us stress now the main differences between motion faster than light in the tachyon and multifractal time models. In the tachyon model the whole region of velocities consists of two separate parts, with velocities greater and less than that of light in vacuum. Particles whose speed is greater than $`c`$ (tachyons) can not cross the barrier and move into the other region. Tachyons have several peculiarities in their motion, and the principle of causality can be violated if we are to compare events in the different regions of velocities. In the model of multifractal time any particle, if supplied with a sufficiently large energy, can be accelerated up to velocities greater than the speed of light, and thus find itself in the tachyon region. However, in this region it will be constantly radiating energy and moving with growing velocity. In this process its energy tends to a finite value $`E\to E_0\sqrt{2}`$ as $`v\to \infty `$. By consuming energy it can, though, be slowed down, and, having its energy increased up to $`E=E_0/\sqrt{2a_0}`$ (see (1)), can return to the region $`v<c`$ and begin to radiate again, but now while being decelerated.
## IV Possible physical phenomena at $`v>c`$
A body (with nonzero rest mass) moving with the speed of light has the maximum possible energy and represents a sort of energy reservoir: if its velocity increases or decreases, the excess of energy is emitted through radiation. Such a body thus can serve as an energy source, since a small initial (e.g., spontaneous) increase of its velocity would lead to the release of immense energy, of order $`E\sim E_0/\sqrt{a_0}`$. In this connection, the following possible consequences allowing for experimental observation and applications of motion faster than light can be pointed out.
a) A sudden burst of radiation can occur as a particle’s velocity increases from $`v=c`$ to $`v>c`$. As an example of observation of this effect we can consider a charged particle in accelerators like a synchrotron. At energies much greater than the rest energy ($`v\approx c,v<c`$), the particle’s velocity almost does not alter while its energy can vary by orders of magnitude, with this change accompanied by considerable growth of synchrotron radiation. When the velocity reaches the value $`v=c`$, the energy has its maximal value (see (1)). Then, in the narrow region of velocities $`0<v^2/c^2-1\le 4a_0`$, the particle loses almost all its energy through Vavilov-Cherenkov-like radiation and synchrotron radiation. In this process the radius of its orbit remains almost the same (the particle’s velocity is still close to $`c`$). As this occurs, the radiation power grows sharply and is of the order of $`E_010^3t^{1/2}\mathrm{sec}^{-1/2}`$ (which equals $`10^{12}t^{1/2}\mathrm{eV}\mathrm{sec}^{-1/2}`$ for protons). This jump of radiation power can be detected by registering the appearance of high-energy $`\gamma `$-quanta, mesons, electron-positron pairs etc. Further increase of velocity will result in the particle’s getting out of the stationary orbit and becoming invisible to the observer. The latter is connected with the fact that in order to reduce the particle’s velocity down to $`v=c`$ it is necessary to supply it with the energy it loses through radiation ($`10^{12}`$eV during one second for a proton). Such a particle will move undergoing acceleration without substantial change in energy ($`E_{min}=E_0\sqrt{2}`$). Experimental observation then becomes possible only if it collides with something that can supply it with the amount of energy required for the transition into the region $`v<c`$ (another high energy particle or $`\gamma `$-quantum). In this case the particle can be detected as an ordinary charged particle with very high energy, that ionizes matter and gives birth to bunches of $`\gamma `$-quanta, mesons, electron-positron pairs etc.
b) Propagation of a beam of faster-than-light particles in a dense medium would lead to a decrease of the medium’s temperature, due to such particles’ gaining energy while being decelerated, and this can serve as a possible method to decrease the energy of hot dense matter (thermonuclear plasma, neutron stars, nuclear power reactors etc.). This method of cooling very hot matter with high density and a high scattering cross-section for faster-than-light particles may turn out to be one of the most effective ways of doing so for these kinds of media, because of the huge amounts of energy necessary to slow down such particles.
c) Energy $`E_010^3t^{1/2}\mathrm{sec}^{-1/2}`$ released over small time intervals corresponds to a temperature of $`T\sim E_010^{19}t^{1/2}\mathrm{sec}^{-1/2}\mathrm{K}`$, and perhaps can be used for initiating or controlling thermonuclear fusion in deuterium-tritium media.
d) Provided that the conditions for the appearance of coherent radiation are satisfied, a laser with working frequency $`\nu \sim 10^3E_0\hbar ^{-1}t^{1/2}\mathrm{sec}^{-1/2}`$ can probably be created.
## V Conclusion
Investigation of relative motion in the multifractal time model for “almost inertial” systems indicates the possible appearance of a number of new physical phenomena in the region of velocities greater than the speed of light, and new notions about the properties of mass and energy as functions of velocity in the case of small fractional corrections to the topological dimension of time (in particular, the possibility of motion with any velocity, the existence of a maximum energy for a particle, and several others). The proposed model, based on the concept of time with fractional dimension, does not contradict special relativity and is not its generalization. Indeed, all motions and frames of reference in this model are absolute. Due to the inhomogeneity of time and space and the openness of space-time in general, the Galilei and Lorentz transformations and the conservation laws are only approximate. This does not contradict the usually observed phenomena, since the deviations from the classical laws are very small (as is the case with space-time curvature in general relativity). The model of relative motion in multifractal time, on which the present paper is based, hence represents a theory of relative motion in “almost inertial” frames of reference in a space of almost homogeneous time with dimension very close to integer. The theory proposed in contains special relativity as a special case, corresponding to zero fractional corrections to the dimension of time, and reduces to it if we set the time dimension to be unity (then all the approximate laws named in the paper become exact). The approximate validity of the Lorentz transformations follows from the assumption of the smallness of the fractional correction to the topological dimension of time. On the other hand, one should not be surprised by the dependence of the velocity of light on the Lagrangian densities of physical fields in “almost inertial” frames; in general relativity, for example, it depends on the gravitational potentials. As was mentioned in , the appearance of fractional dimensions of space and time can be interpreted in terms of Penrose’s ideas concerning the appearance of the equations of free physical fields as a result of deformations of certain complex manifolds (such as cohomologies of sheaves with coefficients) characterizing space-time (in our case, with fractional dimension).
The main results of the model of multifractal time, which disappear when the usual concept of time is used, are the following.
1. The possibility for any object to move faster than light (instead of the factor $`\beta =\sqrt{1-v^2/c^2}`$ of special relativity, the modified factor $`\beta ^{*}=\sqrt[4]{\beta ^4+4a_0^2}`$ appears)
2. Total energy at $`v>c`$ is determined by the expression
$$E=\sqrt{p^2c^2+E_0^2}=E_0\sqrt{\frac{1+\beta ^2}{\sqrt{\beta ^4+4a_0^2}}+1},E_0=m_0c^2$$
with $`\beta ^2=\frac{v^2}{c^2}-1`$, which does not coincide with the relation $`E=\beta ^{*-1}E_0`$ valid for $`v<c`$
3. The maximal values of the total energy (mass, momentum) are bounded by the values corresponding to motion with the velocity of light
$$E_{max}=m_{max}c^2=E_0/\sqrt{2a_0},\qquad p_{max}=m_{max}c$$
Both energy and momentum remain finite as $`v\to \infty `$
$$E_{\infty }=E_0\sqrt{2},\qquad p_{\infty }=m_0c$$
4. Existence of Vavilov-Cherenkov-like radiation not connected with deceleration processes
5. If the fractional correction $`\epsilon `$ to the time dimension is zero, our model fully reduces to the equations and conclusions of special relativity.
The energies necessary, according to our theory, to accelerate particles up to the velocity of light ($`E\sim 10^{10}`$ eV for an electron, $`E\sim 10^{12}`$ eV for a proton) seem likely to become available within the nearest decade, thus making possible the experimental verification of the theory of “almost inertial” frames of reference.
# Comment on “Hidden assumptions in decoherence theory”
## Abstract
It is shown that the conclusion of the paper โHidden assumptions in decoherence theoryโ is the result of a misunderstanding of the concept of pointer states. It is argued that pointer states are selected by the interaction of quantum systems with the environment, and are not based on any measurement by a conscious observer.
Italo Vecchi has very recently written an article which claims to point out some “hidden assumptions” in the formulation of the idea of decoherence . The author claims that there is an ambiguity in the formulation of decoherence, in that any vector can be chosen as a pointer basis. We show here that this is not so, and that the claim is born of a misunderstanding of the idea of pointer states.
The author starts his argument by considering a system $`S`$ which is in a superposition of certain eigenstates $`|n\rangle `$ and its environment $`W`$ which is in a state $`|\mathrm{\Phi }_0\rangle `$. The evolution can then be described as
$$\underset{n}{\sum }c_n|n\rangle |\mathrm{\Phi }_0\rangle \rightarrow \mathrm{exp}(-iHt)\underset{n}{\sum }c_n|n\rangle |\mathrm{\Phi }_0\rangle =\underset{n}{\sum }c_n|n\rangle |\mathrm{\Phi }_n(t)\rangle $$
Now, the author of claims that $`|\mathrm{\Phi }_n(t)\rangle `$ are pointer states. He further goes on to say that “any act of measurement on $`W`$ induces a collapse of its vectors into one of the pointer vectors”. As we started from the assumption that $`W`$ is the environment with which $`S`$ is interacting, it makes no sense to talk of a measurement on the environment. The environment is not something which can be controlled or measured. The source of confusion can probably be traced back to the article by Joos, where he uses the states of the combined apparatus-environment system .
Pointer states are emergent states of a quantum system (the pointer, or the apparatus) arising from its interaction with the environment. These states of the apparatus emerge as stable states as a result of environment-induced superselection. Superpositions of these states would be destroyed by the environment.
In fact, in the above example, if $`S`$ is assumed to be the apparatus interacting with the rest of the world, that is, an environment $`W`$, it can be used to understand the concept of pointer states. Of course, for a rigorous calculation, one has to consider a specific model of the environment. In this case, clearly, the pointer states are $`|n\rangle `$, because $`S`$ was in some arbitrary state, which was represented as a superposition of the states $`|n\rangle `$, and after interacting with the environment, these states get entangled with certain environment states $`|\mathrm{\Phi }_n(t)\rangle `$, which will eventually lead to a loss of coherence between the different $`|n\rangle `$ states. The states $`|n\rangle `$ are not arbitrarily chosen by any external measurement, but emerge because of the nature of the interaction and the nature of $`S`$ itself.
There are several examples available in the literature where pointer states are shown to emerge from an interaction with the environment. For a harmonic oscillator interacting with the environment, coherent states emerge as pointer states . In , a harmonic oscillator is assumed to act as a pointer for measuring a spin-1/2. This calculation does not rely on any concept of a predictability sieve. The superpositions between the coherent states are destroyed because of interaction with a model environment which has an infinite number of degrees of freedom.
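The mechanism can be made concrete with a toy calculation in the spirit of Zurek’s spin-environment model (the couplings below are arbitrary values chosen only for illustration): the overlap $`\langle \mathrm{\Phi }_m(t)|\mathrm{\Phi }_n(t)\rangle `$ of the environment states correlated with different system states decays rapidly as the number of environment degrees of freedom grows, which is what suppresses the off-diagonal terms of the system’s density matrix in the $`|n\rangle `$ basis.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200                                  # number of environment spins (assumed)
g = rng.uniform(0.5, 1.5, N)             # hypothetical coupling strengths

def overlap(t):
    """|<Phi_m(t)|Phi_n(t)>| in a pure-dephasing toy model: each environment
    spin acquires a branch-dependent phase 2*g_k*t, so the overlap is a
    product of cos(2*g_k*t) factors."""
    return np.abs(np.prod(np.cos(2.0 * g * t)))

for t in [0.0, 0.05, 0.1, 0.5]:
    print(t, overlap(t))                 # decays toward zero as t grows
```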
The author’s other statement, “…it is based on the unphysical no-recoil assumption on the scattering process…”, is also baseless, because the whole argument is not based on the no-recoil assumption, which is just an approximation used to derive a simplified result. Apparent diagonalization of a system’s density matrix can be demonstrated using the exact quantum dynamics of a model system coupled to a model environment .
The author’s example of Planck’s radiation law can be easily understood in the light of the recent results of Paz and Zurek , which show that for quantum systems very weakly coupled to the environment, energy eigenstates are the pointer states. Thus one knows that the right thing to do is to apply entropy maximization to the discrete energy spectrum, and not to any other basis. Thus one does not need any observer for Planck’s law, as claimed in . The Rayleigh-Jeans law is just the long-wavelength limit of Planck’s law. In other words, in the appropriate limit, Planck’s law reduces to the Rayleigh-Jeans law, and one does not need to invoke an observer associated with continuous spectra.
In conclusion, we emphasize that the pointer states are the emergent stable states of a quantum system because of its interaction with the environment, and are not an outcome of any measurement by an observer. Thus there is no hidden assumption in the decoherence theory in this regard, as claimed in . |
# Stochastic Resonance and Nonlinear Response by NMR Spectroscopy
## Abstract
We revisit the phenomenon of quantum stochastic resonance in the regime of validity of the Bloch equations. We find that a stochastic resonance behavior in the steady-state response of the system is present whenever the noise-induced relaxation dynamics can be characterized via a single relaxation time scale. The picture is validated by a simple nuclear magnetic resonance experiment in water.
The interplay between dissipation and coherent driving in the presence of dynamical nonlinearities gives rise to a variety of intriguing behaviors. The most paradigmatic and counterintuitive example is the phenomenon of stochastic resonance (SR), whereby the response of the system to the driving input signal attains a maximum at an optimum noise level . By now, stochastic resonance has been demonstrated in several overdamped bistable systems as diverse as lasers, semiconductor devices, SQUIDs, and sensory neurons, the required noise tuning being accomplished either by controlling the injection of external noise or by suitably varying the temperature of the noise-inducing environment.
Due to the broad typology of situations it can exemplify and its inherent simplicity, a preferred candidate for theoretical analysis is represented by a driven dissipative two-level system (TLS). The investigation has only recently been taken into the quantum world, where some prominent results have been established for the so-called spin-boson model. The latter schematizes the archetypal situation of a driven quantum-mechanical tunneling system in contact with a harmonic heat bath, the resulting dissipation being commonly addressed in the linear Ohmic regime . Within this framework, a quantum SR phenomenon induced by a resonant irradiation with a continuous-wave field has been characterized analytically and verified through exact numerical path-integral calculations .
In the present work, we show that a stochastic resonance phenomenon occurs for a much wider class of driven two-state quantum systems, whose relaxation dynamics can be accounted for by conventional Bloch equations. We find that, irrespective of the details of the microscopic picture, the essential requirement is the emergence of a single relaxation time scale. The prediction is neatly demonstrated by a nuclear magnetic resonance experiment on a water sample.
Let us consider a two-state quantum system whose density operator $`\rho `$ is represented in terms of the Bloch vector $`\stackrel{}{s}`$ as $`\rho =(1+\stackrel{}{s}\cdot \stackrel{}{\sigma })/2`$ i.e., $`s_i(t)=\langle \sigma _i(t)\rangle `$, $`i=1,2,3`$, in the customary pseudo-spin formalism . Within the semigroup approach for open quantum systems , the most general (completely) positive relaxation dynamics induced by the coupling to some environment is described by a quantum Markov master equation of the form
$`\dot{\rho }=-{\displaystyle \frac{i}{\hbar }}[H(t),\rho ]+{\displaystyle \frac{1}{2}}{\displaystyle \sum _{k,l=1}^{3}}a_{kl}\left\{[\sigma _k\rho ,\sigma _l]+[\sigma _k,\rho \sigma _l]\right\}.`$ (1)
The Hamiltonian $`H(t)`$, which describes the interaction of the TLS with the (classical) driving field, can be expressed as $`H(t)=\hbar \omega _0\sigma _3/2+V(t)`$, $`V(t)=-\hbar (2\omega _1)\mathrm{cos}(\mathrm{\Omega }t)\sigma _1/2`$, where the Larmor frequency $`\omega _0=(E_2-E_1)/\hbar `$ associated with the TLS energy splitting and the Rabi frequency $`\omega _1`$ proportional to the alternating field amplitude have been introduced. The above Hamiltonian is identical to the one describing a driven tunneling process in a symmetric double-well system with “localized” states provided by $`\sigma _1`$-eigenstates: By rotating the spin coordinate by $`\pi /2`$ about the $`\widehat{y}`$-axis, one formally recovers the picture of tunneling in the $`\widehat{z}`$-representation that is encountered in the literature . The dissipative component of the TLS dynamics is fully characterized by the positive-definite $`3\times 3`$ relaxation matrix $`A=\{a_{kl}\}`$, determining the equilibrium state of the system and the relaxation time scales connected with the equilibration process. The Bloch equations correspond to an especially simple realization of $`A`$, the non-zero elements being specified in terms of 3 independent parameters: $`a_{11}=a_{22}=(2T_1)^{-1}`$, $`a_{33}=(T_2)^{-1}-(2T_1)^{-1}`$, $`a_{12}=a_{21}^{*}=i(\sqrt{2}T_1)^{-1}s_{eq}`$. $`T_1,T_2`$, and $`s_{eq}`$ are identified as the longitudinal and transverse lifetimes, and the equilibrium value of the population difference, respectively. Thus, Eq. (1) takes the following familiar form :
$`\{\begin{array}{ccc}\dot{s_1}\hfill & =\hfill & -\omega _0s_2-T_2^{-1}s_1,\hfill \\ \dot{s}_2\hfill & =\hfill & \omega _0s_1-T_2^{-1}s_2+2\omega _1\mathrm{cos}(\mathrm{\Omega }t)s_3,\hfill \\ \dot{s}_3\hfill & =\hfill & -2\omega _1\mathrm{cos}(\mathrm{\Omega }t)s_2-T_1^{-1}(s_3-s_{eq}).\hfill \end{array}`$ (5)
In microscopic derivations of (5), including the ones based on the spin-boson model in the appropriate limit , relaxation is caused by elementary processes involving noise-assisted transitions between the TLS energy levels or purely dephasing events with no energy exchange between the system and the environment. The overall relaxation rates $`T_{1,2}^{-1}`$ are obtained by integrating such fluctuation and dissipation effects over the environmental modes, weighted by the appropriate noise spectral densities. Since the latter contain the coupling strength between the system and the environment, the relaxation rates are themselves directly proportional to the underlying noise intensity. Note that the above treatment in terms of a constant matrix $`A`$ is only valid for external fields that are relatively weak on the TLS energy scale i.e., $`2|\omega _1|\ll \omega _0`$. In spite of the many restrictions involved, it is remarkable that the Bloch equations (5) are of such wide applicability as to cover the majority of magnetic or optical resonance experiments.
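It is instructive to integrate Eqs. (5) directly; the minimal sketch below does so with toy parameter values (assumed here purely so that the integration is fast, not the experimental ones) and shows the transient decaying on the scale of $`T_{1,2}`$ toward a driven steady state.

```python
import numpy as np
from scipy.integrate import solve_ivp

w0 = 2 * np.pi * 1e3     # Larmor frequency [rad/s], toy value
W = w0                   # resonant driving, Omega = omega_0
w1 = 2 * np.pi * 50.0    # Rabi frequency [rad/s], toy value
T1 = T2 = 3e-3           # relaxation times [s], toy values
s_eq = 1.0

def bloch(t, s):
    """Right-hand side of the Bloch equations (5)."""
    s1, s2, s3 = s
    drive = 2.0 * w1 * np.cos(W * t)
    return [-w0 * s2 - s1 / T2,
             w0 * s1 - s2 / T2 + drive * s3,
            -drive * s2 - (s3 - s_eq) / T1]

sol = solve_ivp(bloch, (0.0, 0.03), [0.0, 0.0, s_eq], max_step=2e-6)
print(np.abs(sol.y[0, -2000:]).max())   # steady-state |s1| envelope
```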
For times long compared to the time scales $`T_{1,2}`$ of the transient dynamics, the motion of the system reaches a steady-state behavior that is insensitive to the initial condition and acquires the periodicity of the driving. In particular, the asymptotic TLS coherence properties are captured by the off-diagonal matrix elements $`s_1(t)`$, $`s_2(t)`$. It is standard practice to formulate an input/output problem, where the TLS is regarded as a dynamical system generating $`s_1(t)=\langle \sigma _1(t)\rangle `$ as the output signal in response to a given input drive $`V(t)`$. By letting $`\lim _{t\to \infty }\langle \sigma _1(t)\rangle =s_1^{\infty }(t)`$ denote the limiting steady-state value of $`s_1(t)`$, a figure of merit for the system response is the so-called fundamental spectral amplitude ,
$$\eta (\mathrm{\Omega },\omega _1)=|s_1^{\infty }(t)|=\hbar (2\omega _1)|\chi (\mathrm{\Omega },\omega _1)|,$$
(6)
where the connection to the complex susceptibility $`\chi (\mathrm{\Omega },\omega _1)=\chi ^{\prime }(\mathrm{\Omega },\omega _1)-i\chi ^{\prime \prime }(\mathrm{\Omega },\omega _1)`$ is made explicit.
The competition between driving and dissipative forces sets the boundary between the linear vs. nonlinear response regimes. In the limit where $`\omega _1\ll T_{1,2}^{-1}`$, only first-order contributions in $`\omega _1`$ are significant and the susceptibility $`\chi (\mathrm{\Omega },\omega _1)=\chi (\mathrm{\Omega })`$ in (6) can be calculated within ordinary linear response theory. The linear regime implies that absorption of energy from the applied field occurs without disturbing the populations from their equilibrium value $`s_{eq}`$. Linear behavior breaks down whenever $`\omega _1\gtrsim T_{1,2}^{-1}`$. Strongly nonlinear response regimes can be entered for arbitrarily weak fields as long as the coupling to the environment and the induced noise effects are weak enough. For both linear and nonlinear driving, the amplitude $`\eta (\mathrm{\Omega },\omega _1)`$ of the output signal also depends on the various parameters characterizing the noise process. Quite generally, the phenomenon of stochastic resonance can be associated with the appearance of non-monotonic dependencies upon noise parameters, leading to the optimization of the response at a finite noise level.
We focus on the Bloch equations (5) with resonant driving, $`\mathrm{\Omega }=\omega _0`$. It is then legitimate to invoke the rotating-wave approximation and replace the alternating field $`V(t)`$ with $`V(t)=-\hbar \omega _1\mathrm{cos}(\mathrm{\Omega }t)\sigma _1/2-\hbar \omega _1\mathrm{sin}(\mathrm{\Omega }t)\sigma _2/2`$. The rotating-frame description of the Bloch vector $`\stackrel{}{\mu }`$ is introduced via the time-dependent rotation $`R=\mathrm{exp}(i\mathrm{\Omega }\sigma _3t/2)`$ i.e., $`\rho _R=R\rho R^{-1}=(1+\stackrel{}{\mu }\cdot \stackrel{}{\sigma })/2`$, $`\stackrel{}{\mu }=(u,v,w)`$ . The steady-state solution to the Bloch equations is well known . In particular, $`\chi ^{\prime }`$ and $`\chi ^{\prime \prime }`$ are read from the dispersive and absorptive components $`u,v`$ of the Bloch vector respectively, and the spectral amplitude $`\eta (\mathrm{\Omega }=\omega _0,\omega _1)=(u^2+v^2)^{1/2}`$. A simple expression is found for the nonlinear response:
$$\eta (\mathrm{\Omega }=\omega _0,\omega _1)=s_{eq}\frac{\omega _1T_2}{1+\omega _1^2T_1T_2}.$$
(7)
Suppose now that we have the capability of manipulating the strength of the coupling of the TLS to its environment, thereby changing the relaxation times $`T_1,T_2`$. For a fixed driving amplitude $`\omega _1`$, $`\eta `$ displays purely monotonic behaviors if $`T_1,T_2`$ are varied independently. However, if a single relaxation time is present, $`T_1=T_2=T_{12}`$, $`\eta `$ develops a local maximum characterized by
$$T_{12}^{*}=\omega _1^{-1},\qquad \eta (\mathrm{\Omega }=\omega _0,\omega _1,T_{12}^{*})=\frac{s_{eq}}{2}.$$
(8)
The occurrence of such a peak in the steady-state response as a function of the noise strength can be pictured as a stochastic resonance effect in the TLS. Physically, the condition for the maximum in (8) can be thought of as a synchronization between the periodicity of the rotating-frame vector in the absence of relaxation and the additional time scale emerging when dissipation is present. In semiclassical terms, a constraint of the form $`T_1=kT_2`$, $`k=`$ const., indicates that the spectral densities of the fluctuating environmental fields along different directions are not independent of each other. Simple examples include noise processes that effectively originate from a single direction or that equally affect the system in the three directions. In fact, the existence of a single relaxation time is a feature shared with earlier investigations of quantum SR based on the driven spin-boson model , where it arises as a necessary consequence of the initial assumption that environmental forces exclusively act along the tunneling axis. However, we emphasize that our discussion is done without reference to a specific model, encompassing in principle a larger variety of physical situations.
Apart from this conceptual difference, the SR phenomenon evidenced above is characterized by the same distinctive features found for the spin-boson model on resonance . According to (8), the maximum steady-state response is independent of the driving amplitude, whereas the position of the peak shifts toward shorter relaxation times with increasing $`\omega _1`$. Thus, weaker noise strengths require weaker input fields to attain a large response. This brings the nonlinear nature of the SR mechanism to light, for weaker dissipation more easily pushes the system into a regime where $`\omega _1T_{12}\gtrsim 1`$. No SR peak occurs in the limit $`\omega _1T_{12}\ll 1`$ where linear response theory applies and $`\eta \simeq s_{eq}(\omega _1T_{12})\ll s_{eq}`$. Thus, SR results in efficient noise-assisted signal amplification. Breakdown of linear behavior is more convincingly demonstrated by looking at the dependence of the response (7) upon the external field strength. It is easily checked that the condition (8) simultaneously optimizes the response against $`\omega _1`$, with $`\omega _1^{*}=(T_1T_2)^{-1/2}`$. However, it is only when $`T_1=kT_2`$ that the existence of such an optimal field amplitude coexists with an SR effect.
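The content of Eqs. (7)-(8) is easy to verify numerically; in the following sketch the Rabi frequency is an arbitrary illustrative value.

```python
import numpy as np

def eta(T12, w1, s_eq=1.0):
    """Steady-state response of Eq. (7) with T1 = T2 = T12."""
    return s_eq * w1 * T12 / (1.0 + (w1 * T12) ** 2)

w1 = 2 * np.pi * 30.0            # illustrative Rabi frequency [rad/s]
T12 = np.logspace(-4, 0, 2001)   # scanned relaxation time [s]
resp = eta(T12, w1)
i = resp.argmax()
print(T12[i], 1.0 / w1)          # SR peak location, ~1/w1 as in Eq. (8)
print(resp[i])                   # maximum response, ~s_eq/2
```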
Nuclear Magnetic Resonance (NMR) provides a natural candidate for a direct experimental verification of the predicted phenomenon. In NMR, the Bloch equations (5) describe the motion of the magnetization vector $`\stackrel{}{M}`$ of spin 1/2 nuclei (<sup>1</sup>H) that are subjected to a static magnetic field $`B_0`$ along the $`\widehat{z}`$-axis and a radio-frequency signal with amplitude $`2|B_1|\ll B_0`$ applied at frequency $`\mathrm{\Omega }`$ along the $`\widehat{x}`$-axis. The mapping is established by identifying $`\stackrel{}{s}=\stackrel{}{M}`$, $`s_{eq}=M_0`$, $`\omega _0=\gamma B_0`$, $`\omega _1=\gamma B_1`$, $`M_0`$ and $`\gamma `$ denoting the equilibrium magnetization and the gyromagnetic ratio respectively. Relaxation processes arise due to a multiplicity of microscopic mechanisms . For a liquid spin $`1/2`$ sample, the leading contribution arises from fluctuations of the local dipolar field caused by bodily motion of the nuclei. The longitudinal relaxation time $`T_1`$ is essentially determined by the $`\widehat{x}`$- and $`\widehat{y}`$- components of the local magnetic fields at the Larmor frequency, while the transverse lifetime $`T_2`$ takes extra contributions from static components of the $`\widehat{z}`$-field, implying that $`T_2\le 2T_1`$ ordinarily . Let us assume as above that $`\mathrm{\Omega }=\omega _0`$. Once the steady state is reached, the magnetization vector $`\stackrel{}{M}`$ precesses about the $`\widehat{z}`$-axis with the periodicity of the r.f. field. The variation of the dipole moment in the transverse plane induces a measurable e.m.f. in a Faraday coil. This provides access to the relevant quantity $`\eta `$ of Eqs. (6)-(7), which represents the length of the transverse magnetization vector, $`(u^2+v^2)^{1/2}=(M_x^2(t)+M_y^2(t))^{1/2}`$.
Our experiment consists in probing the steady-state magnetization response of water as a function of the noise strength inducing the natural relaxation processes. The <sup>1</sup>H Larmor frequency at $`B_0=9.4`$ T is $`\omega _0/2\pi =400`$ MHz, with relaxation times $`T_1=3.6`$ s, $`T_2=2.5`$ s. The sample can be brought to a regime where $`T_1\simeq T_2`$ upon addition of the paramagnetic salt copper sulfate (CuSO<sub>4</sub>). With concentrations in the range between 40 mM and 100 mM, collision events with the impurity dominate the nuclear relaxation dynamics. This effectively pushes the system into a regime of rapid motion where the correlation time of the local magnetic fields seen by the nuclei is very short on the scale $`\omega _0^{-1}`$, thereby ensuring that $`T_1=T_2=T_{12}`$ . Higher concentrations of the CuSO<sub>4</sub> additive result in a shorter relaxation time $`T_{12}`$, hence implying an effective tuning of the noise strength. All measurements were performed at room temperature with a Bruker AMX400 spectrometer on five water samples with additive concentration in the above range.
Independent measurements of $`T_1`$ and $`T_2`$ were made to confirm that the amount of CuSO<sub>4</sub> was sufficient to make them equal. $`T_1`$ was measured via an inversion recovery technique , by looking at the recovery curve of $`M_z(t)`$ after the application of a $`\pi `$ pulse causing $`M_z(0)=-M_0`$. Values of $`T_2`$ were inferred from the decay of the echo signals in a standard Carr-Purcell sequence, where $`\pi `$ rotations were used to refocus dephasing due to inhomogeneous broadening . For the 5 concentrations utilized, $`T_1`$ and $`T_2`$ were found to be within 1 ms of the average value $`T_{12}`$, which is listed for each sample in Table I.
| CuSO<sub>4</sub> (mM) | $`T_{12}`$ (ms) $`\pm `$ 1 ms |
| --- | --- |
| 40 | 45.5 |
| 50 | 36.5 |
| 60 | 28.5 |
| 75 | 25.0 |
| 100 | 18.0 |
TABLE I. Relaxation time $`T_{12}=T_1=T_2`$ as a function of the CuSO<sub>4</sub> paramagnetic impurity concentration for the 5 water samples used in the experiment.
For each sample, the response to a long external r.f. pulse was measured for various values of the driving amplitude. For a given driving amplitude, the duration of the pulse was increased up to about 200 ms, and the reading was continued until a constant e.m.f. value was reached, confirming that all transient responses had sufficiently decayed. Under these conditions, the observed steady-state value is equivalent to the one produced by a cw-irradiation as assumed in Eqs. (5). A delay long with respect to $`T_{12}`$ was waited between pulses to allow the sample to return to equilibrium. The value of the r.f. amplitude was determined by extrapolating measurements of the nutation rate $`\nu _1=\omega _1/2\pi `$ at a high field setting down to the relevant lower-field domain in the neighbourhood of $`\omega _1\sim T_{12}^{-1}`$. While the relative error between two r.f. settings is found to be small, the systematic error associated with the extrapolation turns out to be significant. A linear correction of the frequency scale was included in the analysis to compensate for this error.
The experimental results are shown in Figs. 1 and 2. Fig. 1 evidences the SR peak for three values of the driving amplitude. A bell-shaped maximum in the response profile is clearly visible, as well as the expected shift of the peak location with increasing $`\omega _1`$. For each curve, the SR condition $`T_{12}^{*}\simeq \omega _1^{-1}`$ of Eq. (8) is in fairly good agreement with the observed behavior, the existing discrepancies being accounted for by the residual error affecting the determination of $`\omega _1`$. In Fig. 2 the complementary characterization of the SR effect in terms of nonlinear response to the driving field is displayed for three values of $`T_{12}`$. In each case, ordinary linear response theory is valid for small $`\omega _1`$, to the left side of the maximum. Thus, SR reveals itself as a signal optimization marking the crossover between linear and nonlinear response.
Besides validating the predictions from the Bloch equations (5), our experimental results also support the conclusions independently reached in earlier theoretical analyses . A few remarks are in order concerning the specific case of NMR. First, the present experiment is not a stochastic NMR experiment . While the obvious similarity is that both methods probe the nuclear spin system by looking at the transverse magnetization response, in stochastic NMR the system is directly excited by noise, which is therefore always extrinsic (and classical) in origin. More importantly, as mentioned already, the solutions to the Bloch equations have a long history as a tool to investigate magnetic resonance behaviors. In particular, the existence of an optimum r.f. amplitude $`\omega _1^{*}=(T_1T_2)^{-1/2}`$ is a feature pointed out long ago by Bloch himself . However, only the SR paradigm provides the motivation to regard relaxation features as controllable parameters and to look at the usual response behavior along different axes in the parameter space. Even once this is done, this does not automatically lead to SR. Rather, it is the recognition that a single axis $`T_1=T_2`$ is effectively needed that brings out the fingerprint of the phenomenon, and this is the novel element added to the standard NMR analysis.
In summary, we established both theoretically and experimentally the occurrence of stochastic resonance in two-state quantum systems whose relaxation dynamics are described by Bloch equations. In addition to substantially broadening the existing paradigm for stochastic resonance in quantum systems, our results point to the possibility of characterizing intrinsic relaxation behavior via resonance effects. By offering an optimized way for input/output transmission against noise, stochastic resonance carries a great potential for systems configured to perform specific signal processing and communication tasks . In particular, full exploitation of stochastic resonance phenomena could potentially disclose a useful scenario for reliable transmission of quantum information in the presence of environmental noise and decoherence.
This work was supported by the U.S. Army Research Office under grant number DAAG 55-97-1-0342 from the DARPA Microsystems Technology Office.
Corresponding author: vlorenza@mit.edu |
# X-ray Measurements of the Gravitational Potential Profile in the Central Region of the Abell 1060 Cluster of Galaxies
## 1 INTRODUCTION
Measurements of the gravitational potential in clusters of galaxies provide one of the best ways to investigate the large-scale distribution of dark matter, and to constrain models of cosmic structure evolution.
The potential structures of clusters have been probed mainly by X-ray observations of the intra-cluster medium (ICM). In previous studies, ICM density distributions were usually represented by a $`\beta `$ model, which approximates an isothermal ICM hydrostatically confined in a King-type (King 1962) potential with a flat density core (e.g. Jones and Forman 1984). However, the King potential , or its modifications, are not the only possible solutions to the equation describing isothermal self-gravitating systems, and there is no a priori reason to believe that this particular type of potential is realized in actual clusters.
On the basis of N-body simulations of cold dark matter particles, Navarro, Frenk, & White (1996, 1997; hereafter NFW96 and NFW97 respectively) have shown that the density profiles of the simulated mass clumps (halos) can be universally described by a simple analytic formula as
$`\rho _{\mathrm{tot}}^{\mathrm{NFW}}(r)`$ $`\propto `$ $`\left({\displaystyle \frac{r}{r_\mathrm{s}}}\right)^{-1}\left(1+{\displaystyle \frac{r}{r_\mathrm{s}}}\right)^{-2},`$ (1)
where $`r`$ is the three-dimensional radius and $`r_s`$ is a scale radius. This profile, hereafter referred to as the NFW profile, exhibits a density cusp at the center instead of a flat core, but the gravitational potential remains finite. The potential produced by the NFW mass density profile may be referred to as the NFW potential. Some other simulations also indicate similar density profiles with central cusps, although the cusp slope may be somewhat different from that of equation (1) (Moore et al. 1997; Fukushige and Makino 1997).
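For orientation, the mass enclosed within radius $`r`$ by the profile of equation (1) integrates in closed form to $`M(<r)\propto \mathrm{ln}(1+x)-x/(1+x)`$ with $`x=r/r_\mathrm{s}`$, which grows only logarithmically and keeps the potential finite despite the central cusp. The sketch below evaluates this numerically; the normalization $`\rho _s`$ and the units are arbitrary placeholders, since equation (1) fixes only the shape.

```python
import numpy as np

def nfw_enclosed_mass(r, rs=1.0, rho_s=1.0):
    """M(<r) for rho(r) = rho_s*(r/rs)**-1*(1+r/rs)**-2 (arbitrary units)."""
    x = r / rs
    return 4.0 * np.pi * rho_s * rs**3 * (np.log(1.0 + x) - x / (1.0 + x))

def nfw_potential(r, rs=1.0, rho_s=1.0, G=1.0):
    """Gravitational potential of the same profile; finite as r -> 0."""
    x = r / rs
    return -4.0 * np.pi * G * rho_s * rs**2 * np.log(1.0 + x) / x

for r in [1e-3, 0.1, 1.0, 10.0]:
    print(r, nfw_enclosed_mass(r), nfw_potential(r))
# ln(1+x)/x -> 1 as x -> 0, so the central potential tends to -4*pi*G*rho_s*rs^2
```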
ASCA X-ray observations have shown noteworthy results concerning the potential structure in the central regions of nearby clusters. In the Fornax cluster, Ikebe et al. (1996) found a hierarchical distribution of the total mass and dark matter. In the Hydra-A, Abell 1795, and a few other clusters, Ikebe et al. (1997), Xu et al. (1998), and Xu (1998), respectively, revealed similar deviations from the King-type potential in their central regions. Qualitatively these phenomena are reminiscent of the NFW potential, but the measured potential profiles appear to deviate more strongly from the King potential than can be explained by equation (1).
It is therefore of great interest to examine the relation between the NFW potential and the hierarchical ASCA potential. However, further assessing the reality of the NFW potential model is rather difficult. This is because the clusters showing evidence of hierarchical potential structure all have cD galaxies at their centers, whereas the $`N`$-body simulations of NFW97 and others do not consider the significant baryonic component that must be associated with the cD galaxy. In addition, most of these clusters show significant cool emission with a temperature of $`\sim 1`$ keV near the center, which introduces relatively large errors in the potential shape determinations via X-ray observations.
Accordingly, we consider the centers of “non-cD clusters” to be the best places to compare the X-ray observations with the NFW prediction, since these systems are thought to have a relatively small fraction of baryonic matter and insignificant emission from a cool component in the center. We may then examine whether or not the high-quality X-ray data from ROSAT and ASCA of appropriate objects agree with the predictions of the NFW potential.
We select the Abell 1060 cluster (A1060 for short), based on our belief that it is the most suitable cluster for our purpose. This cluster, with $`z=0.011`$, is the nearest one after the Virgo, Fornax, and Centaurus clusters among X-ray luminous ones. X-ray images of the cluster obtained with Einstein (Fitchett and Merritt 1988), ROSAT and ASCA have a good circular symmetry, justifying the assumption of a spherically symmetric ICM distribution to be employed in our analysis. The symmetric image also ensures that there is no significant bulk motion of the ICM, such as a merger with substructure, at the center of A1060. There are two giant elliptical galaxies, NGC 3311 and NGC 3309, the former sometimes regarded as a cD galaxy. However, NGC 3311 is smaller in size and mass than the more typical cD galaxies in other nearby X-ray clusters. In fact, the central excess X-ray luminosity above the $`\beta `$ model (Jones and Forman 1984), and the estimated cooling flow rate (Edge and Stewart 1991, Singh et al. 1988), are both very low in A1060 among nearby X-ray clusters. Using the ASCA data, Tamura et al. (1996; hereafter T96) have shown that the ICM in the central regions of A1060 is quite isothermal at $`3.1`$ keV, with little evidence of a cool emission component. We therefore expect the influence of the central galaxy (or galaxies) to be smaller in A1060 than in other nearby cD clusters.
Under this motivation, we re-analyze in this paper the ASCA data of A1060, employing also the ROSAT data. The paper is arranged in the following way. In the next section we briefly describe the ASCA and ROSAT observations of A1060. In §3, we evaluate spatially resolved spectra from the cluster and confirm that the ICM is close to isothermal within $`20^{\prime }`$ of the center, in agreement with T96. This result justifies the assumption of hydrostatic equilibrium of the ICM and its single-phased treatment, which are employed in the subsequent analysis. In §4, we investigate the radial brightness distribution assuming an isothermal ICM, to estimate the ICM density profile and hence the potential profile of the cluster. In §5, allowing a deviation from the isothermal condition, we perform a combined analysis of the ASCA and ROSAT data, and constrain the potential profile. In the last section, we discuss and summarize the obtained results.
Throughout this paper, we assume the Hubble constant to be $`H_0=70h_{70}`$ km s<sup>-1</sup>Mpc<sup>-1</sup>, and use the 90% confidence level unless stated otherwise. The solar Fe/H ratio is taken to be $`4.68\times 10^{-5}`$ by number. At $`z=0.011`$, 1 arcmin corresponds to $`13h_{70}^{-1}`$ kpc.
## 2 OBSERVATIONS
ASCA observations of A1060 were performed on 1993 June 28 and 29, with the GIS (Gas Imaging Spectrometer; Ohashi et al. 1996, Makishima et al. 1996) in the PH normal mode and the SIS (Solid-State Imaging Spectrometer; Burke et al. 1994; Yamashita et al. 1997) in the 4-CCD bright mode. After screening events using the standard data-selection criteria (T96), we obtained net exposure times of 36 ksec and 33 ksec for the GIS and SIS, respectively. From these observations, some authors have already reported results of spatially sorted spectral analysis, including T96, Mushotzky et al. (1996), and Fukazawa et al. (1998). In this paper we perform a more detailed analysis of the spatial variations of the X-ray spectrum, and derive constraints on the underlying potential.
We also analyze the archival ROSAT PSPC data of A1060. The observation was performed on 1992 January 1 with a net exposure time of 15.8 ksec. An imaging analysis using these data is described by Peres et al. (1998), together with those of 55 other clusters, and by Loewenstein and Mushotzky (1996). Figure 1 shows the X-ray image of A1060 obtained with the PSPC in 0.5–2.0 keV. The X-ray source seen at $`29^{\prime }`$ north-east of A1060 is the group of galaxies HCG 48, which lies at a similar distance to the cluster. We exclude the data within $`5^{\prime }`$ in radius of the X-ray center of HCG 48 when we investigate the surface brightness profile of A1060 with the PSPC. The ASCA GIS image, presented in T96, is similar to the PSPC image.
## 3 SPECTRAL ANALYSIS
In the present paper, we estimate the gravitating mass distribution of the cluster under the assumptions of hydrostatic equilibrium and a single-phase nature of the ICM. Although we have selected A1060 as the target of our study believing that its ICM is nearly in an ideal condition, we must further examine the validity of these assumptions. For this purpose, we investigate in this section the temperature structure of A1060 with the highest accuracy. Large-scale temperature variations would suggest a cluster merger or substructure, and hence a deviation from the hydrostatic equilibrium of the ICM. A cool plasma component, ascribed to ICM cooling or to the interstellar medium of the central galaxies, may exist along with the hot ICM. In such cases, the mass determination would become significantly less reliable. In this section, we jointly use the X-ray spectra obtained from ASCA and ROSAT to demonstrate, with a higher reliability than was obtained by T96, that the ICM in A1060 is close to being isothermal.
### 3.1 Large Scale Temperature Profile
Utilizing the ASCA SIS and GIS data, T96 reported that the ICM in A1060 has spatially uniform temperature and metallicity within typical uncertainties of ±10% and ±30%, respectively. However, they did not correct the ASCA data for the complex point-spread function (PSF) of the X-ray telescope (XRT) onboard ASCA. Accordingly, we re-analyze the ASCA data taking the PSF into account, and constrain the temperature profile with a higher reliability. To evaluate the large-scale temperature profile, in this subsection we use only the ASCA GIS data, since it has a wider field of view than the ASCA SIS and a better spectral resolution than the ROSAT PSPC. Since the X-ray brightness of A1060 is circularly symmetric, we accumulated X-ray spectra from five concentric ring regions ($`0^{\prime }`$–$`3^{\prime }`$–$`6^{\prime }`$–$`9^{\prime }`$–$`12^{\prime }`$–$`20^{\prime }`$) in the GIS detector plane, centered on the X-ray centroid. Due to the extended tails of the PSF, each of these spectra contains photons scattered in from other sky regions. To correctly take this effect into account, we followed the method of Takahashi et al. (1995) and Ikebe et al. (1997). We divided the sky region into the corresponding five rings ($`0^{\prime }`$–$`3^{\prime }`$–$`6^{\prime }`$–$`9^{\prime }`$–$`12^{\prime }`$–$`20^{\prime }`$) around the cluster center. For each energy bin, we calculated a 5 (sky annuli) $`\times `$ 5 (detector annuli) matrix, called the image response matrix, which describes how photons from each sky region are distributed into the five detector regions. In this calculation we assumed that the spectrum is uniform within each sky region. Then, by specifying model spectra in the five sky regions (including their proper normalization), we can predict the five spectra on the detector plane, which can be fitted simultaneously to the actual five spectra.
In practice, we specified the brightness normalization in each sky region independently of the others. For the spectral model, we employed the Raymond-Smith (Raymond and Smith 1977) plasma emission model modified by photoelectric absorption. The column density and metallicity were assumed to be constant over these regions, at the Galactic value of $`6\times 10^{20}`$cm<sup>-2</sup> and 0.32 solar, respectively, as obtained in T96. Thus, the model involved 10 free parameters: the normalizations and temperatures of the five sky regions. The background spectra were obtained from the blank-sky (containing no bright sources) database, by extracting events within the identical region from the same detector as the on-source data. The background was added to the emission model. When calculating the fit goodness, we assigned 3% and 10% systematic errors to the spectral model and the background normalization, respectively. The former is due to uncertainties in the energy response and PSF, while the latter represents errors in reproducing the non-X-ray background and the intrinsic fluctuation of the Cosmic X-ray background (Ishisaki 1996).
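Schematically, the fit amounts to folding trial sky-ring spectra through the image response and comparing the result, plus background, with the five detector-plane spectra. The sketch below illustrates only this forward step; the matrix entries and model values are random placeholders standing in for the actual ray-tracing calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_e, n_sky, n_det = 64, 5, 5     # energy bins, sky rings, detector rings

# R[e, s, d]: probability that a photon of energy bin e emitted in sky
# ring s is detected in detector ring d (placeholder values).
R = rng.uniform(0.0, 0.05, size=(n_e, n_sky, n_det))
model_sky = rng.uniform(0.5, 1.5, size=(n_e, n_sky))   # trial ring spectra
background = rng.uniform(0.0, 0.1, size=(n_e, n_det))  # estimated background

# Predicted detector spectra: scattered-in contributions from all sky
# rings, with the background added to the model as described in the text.
predicted = np.einsum('esd,es->ed', R, model_sky) + background
```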
We first tested an isothermal model in which all five temperatures are tied to the same value, and found that it is roughly acceptable with $`\chi ^2/\nu =178/164`$. The obtained temperature is $`3.07\pm 0.05`$ keV, in agreement with the value obtained from the averaged spectra (T96). We next left the five temperatures free. The fit improved to $`\chi ^2/\nu =146/160`$, and yielded a roughly constant temperature within $`12^{\prime }`$ in radius with a slight drop in the outermost annulus, as presented in Figure 2. According to an $`F`$-test, the fit improvement is significant at the 84% confidence level, indicating that the slight drop of temperature is only marginally significant. As shown in Figure 2, the effect of the PSF correction can be found only in the outermost region. This can be explained as follows. Due to the energy dependence of the PSF, higher energy photons are more heavily scattered off than softer ones. Accordingly, the outer-region spectra become artificially harder than they are, and the possible outward temperature drop becomes less evident in T96. Thus, correcting for the PSF, we have confirmed with a higher reliability the inference made in T96 that the ICM in A1060 has a uniform temperature at least within $`12^{\prime }`$.
### 3.2 Possibility of the Central Cool Emission
In many cD clusters, significant cool emission is found in their central regions (e.g., Fabian et al. 1994). In the case of A1060, we have shown using the GIS data that its ICM is close to an isothermal condition. In addition, T96 gave a rather tight upper limit on the cool emission component at the cluster center using the GIS and SIS data. To constrain the cool component more tightly, we further employ here the ROSAT PSPC data in addition to the ASCA data, because the PSPC has a higher sensitivity than the ASCA instruments to cooler emission components. Therefore a joint fit using the three detectors, which covers the energy range of 0.3–10 keV, is an ideal method for our purpose.
We accumulated photons over the central region of radius $`5^{\prime }`$ (or $`67h_{70}^{-1}`$ kpc), separately for the three detectors. We chose this particular region because the cool emission is typically confined to central regions of $`r<70h_{70}^{-1}`$ kpc in the Virgo (Matsumoto et al. 1996), Centaurus (Fukazawa et al. 1994), and AWM 7 clusters (Xu et al. 1997), and because the angular size of $`5^{\prime }`$ is large enough compared to the PSF of ASCA. As a first step, we separately evaluated the spectra obtained with the three detectors. The GIS background was obtained in the same way as in §3.1. That of the SIS was also obtained from the blank-sky observations. On the other hand, the PSPC background was accumulated from the region of radius $`36^{\prime }`$–$`46^{\prime }`$ in the field of view. We estimate the cluster emission in this background region to be $`\sim 1`$% of that in the central region (R$`<5^{\prime }`$) based on an extrapolation of the radial brightness profile. Therefore the cluster contribution in the background spectrum is negligible. These background spectra were subtracted from the data before fitting to the model. The temperature, column density and metallicity were all allowed to be free, and different from instrument to instrument. In Table 1, we show the results of fitting with a single-temperature Raymond-Smith model. We consider that the relatively larger column density obtained with the SIS is due to the response uncertainty of the detector below 1 keV; the fitting results with the SIS in other objects show slightly larger column densities than those with the GIS (see, e.g., the calibration status memo at http://heasarc.gsfc.nasa.gov/docs/asca/ahp\_proc\_analysis.html, maintained at NASA/GSFC). Although the GIS and SIS temperature determinations are in good agreement, the PSPC temperature is somewhat lower, and disagrees with the ASCA values at the 90% level.
The lower temperature obtained with the PSPC suggests the presence of plasma cooler than the global ICM in the central region. To examine this possibility, we fitted the PSPC spectrum by adding another plasma component to the model (i.e., a two-temperature model). If we leave the two temperatures free, the two components couple too strongly with each other. Therefore we fixed the temperature of the hot component at 3.1 keV, which is the global temperature of the cluster determined with ASCA. The metallicities of both components were also fixed at the global value of 0.3 solar derived from the ASCA spectra. This two-temperature model gave a slightly better fit ($`\chi ^2/\nu =39/37`$) to the data. The temperature of the cool component is found to be 1.1 (0.9–1.5) keV. However, the fit improvement is significant only at the 40% confidence level based on the $`F`$-test. Adding a cooling flow model instead of the cool plasma model did not improve the fit significantly ($`\chi ^2/\nu =41/36`$).
Alternatively, the lower PSPC temperature may be an indication of a temperature decrease toward the center on small scales. To examine this possibility, we evaluated the PSPC spectrum within a smaller radius of $`2^{\prime }.25`$. A single-temperature fit gave a temperature of 2.2 (1.8–2.7) keV, which is similar to that obtained previously from the region of $`5^{\prime }`$ radius. A two-temperature model for this spectrum did not give a significant improvement in the goodness of the fit ($`\chi ^2/\nu =119/133`$ vs. $`\chi ^2/\nu =119/131`$). Thus, the PSPC spectra show no significant temperature decrease on a few arcmin scale.
As the next step, we fitted jointly the three spectra (SIS+GIS+PSPC) with a single-temperature model to further examine the isothermality near the cluster center. The model temperature and metallicity were constrained to be common among the three detectors. On the other hand, the normalization and column density were allowed to take independent values, to take into account the slight mismatch in absolute photometric calibration and systematic uncertainties in the low-energy detector responses, respectively. This gave a roughly acceptable fit with $`\chi ^2/\nu =556/459`$, yielding a temperature of $`3.3\pm 0.1`$ keV and a metallicity of $`0.29\pm 0.03`$ solar (Table 1 and Figure 3). This implies that the emission within a projected radius of $`5^{\prime }`$ is approximately isothermal.
In a search for the possible cool emission, we added another plasma component to the joint fit of the spectra from the three detectors. The metallicities of the two components were assumed to be the same, while the two temperatures were left free. Compared to the single temperature fit, introducing the cooler component improved the fit slightly, from 556/459 to 537/457. A cool component may therefore in fact be present, as suggested by the lower PSPC temperature. However, the improvement is significant only at the 32% confidence level based on the $`F`$-test. We therefore quote a conservative 90% upper limit on the cool-component emission measure of 4% of that of the hot component, when the cool component has a temperature of 1 keV. This is in good agreement with T96.
Based on these results, we conclude that the ASCA and ROSAT PSPC spectra do not strongly require an additional cool component. Consequently, we assume a single-phase ICM in the subsequent analysis.
## 4 RADIAL BRIGHTNESS DISTRIBUTIONS
In this section we evaluate the X-ray surface brightness distribution of A1060 and derive the ICM density profile. In the previous section, we found that the X-ray emission is dominated by an isothermal component of $`\sim 3`$ keV. Therefore, the brightness directly relates to the ICM density and hence to the gravitational potential profile of the cluster. We analyze the X-ray images obtained with the ASCA GIS and the ROSAT PSPC separately, but we do not use the ASCA SIS data because of its limited field of view. Since the GIS and the PSPC have different energy bands, a comparison of the brightness distributions from the two instruments provides another estimate of the temperature distribution: if the ICM is actually isothermal, the two brightness profiles should be similar to each other.
### 4.1 Model Fittings to the GIS Data
We derived an azimuthally averaged count-rate profile from the two GIS detectors (S2 and S3), centered on the X-ray peak, in the 0.7–10 keV band. We quantify the profile in a "forward" way: we start from a model surface brightness distribution, apply the XRT vignetting to it, and further convolve it with the XRT$`+`$GIS point spread function. The background, obtained in the same way as in § 3.1, is added to the model, and the resulting model prediction is compared with the observed count-rate profile. The image response matrices were again utilized to take the spatial response into account in reproducing the model-predicted count profiles; in the present case, we employed matrices of 20 (sky annuli) $`\times `$ 20 (detector annuli) in size. Following the results obtained in T96 and in §3 of this paper, we assumed a model X-ray spectrum with a temperature of 3.1 keV and a metallicity of 0.32 solar, absorbed with the Galactic column density of $`6\times 10^{20}`$ cm<sup>-2</sup>. We assigned 3% and 10% systematic errors to the count-rate models and background estimation, respectively, as in §3.
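The forward-folding step can be sketched as a simple matrix operation. The following toy example (not the authors' code; the response-matrix entries and profile values are placeholder assumptions) illustrates how a model annular profile is mapped through a 20 $`\times `$ 20 image response matrix, with background added, before comparison with the observed counts:

```python
import numpy as np

# Toy 20x20 image response matrix: mostly diagonal, with some PSF
# mixing into neighbouring annuli (placeholder values, not calibration).
n = 20
R = 0.8 * np.eye(n) + 0.1 * np.eye(n, k=1) + 0.1 * np.eye(n, k=-1)

# Toy beta-like model profile in the 20 sky annuli (arbitrary units).
model = 1.0 / (1.0 + (np.arange(n) / 4.0) ** 2) ** 1.6
background = np.full(n, 0.02)         # flat background per annulus

predicted = R @ model + background    # to be compared with data via chi^2
print(predicted[:5])
```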
As the model ICM density profiles, we considered the following two cases. One is the case where an isothermal ICM is gravitationally confined in a King-type potential (e.g., Sarazin 1988). The model density in this case becomes the usual $`\beta `$ model, as
$`\rho _{\mathrm{icm}}^{\mathrm{beta}}(r)\propto \left[1+\left({\displaystyle \frac{r}{r_\mathrm{c}}}\right)^2\right]^{-1.5\beta },`$ (2)
where $`r_\mathrm{c}`$ is the core radius and the parameter $`\beta `$ usually takes a value of $`\sim 0.7`$. The other is the case in which an isothermal ICM is confined in the NFW potential corresponding to equation (1). As calculated by Makino, Sasaki, and Suto (1998), the ICM in this case exhibits a radial density profile of
$`\rho _{\mathrm{icm}}^{\mathrm{NFW}}(r)`$ $`\propto `$ $`\left(1+{\displaystyle \frac{r}{r_\mathrm{s}}}\right)^{\frac{Br_\mathrm{s}}{r}},`$ (3)
where $`B`$ is a parameter related to the ICM temperature and $`r_\mathrm{s}`$ is a scale length. Although this functional form appears quite different from that of the $`\beta `$ model, in practice it can mimic the $`\beta `$ model profile fairly closely, especially outside the core region (Makino et al. 1998). We call this density profile the NFW ICM model.
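As an illustration of this similarity, the following minimal Python sketch (not from the paper; all parameter values are illustrative assumptions, not fitted results) evaluates equations (2) and (3) and compares their logarithmic shapes outside the core:

```python
import numpy as np

def rho_beta(r, r_c, beta):
    """Beta-model ICM density, eq. (2), up to an overall normalization."""
    return (1.0 + (r / r_c) ** 2) ** (-1.5 * beta)

def rho_nfw_icm(r, r_s, B):
    """Isothermal ICM in an NFW potential, eq. (3), up to normalization."""
    return (1.0 + r / r_s) ** (B * r_s / r)

# Illustrative parameters only; r is in units of the beta-model core radius.
r = np.logspace(-0.5, 1.0, 200)
b = rho_beta(r, r_c=1.0, beta=0.7)
n = rho_nfw_icm(r, r_s=2.0, B=10.0)

# Normalize both profiles at r = 1 and compare their shapes at r > 1;
# even these untuned parameters agree to within ~0.2 dex over a decade
# in radius, and tuning B and r_s tightens the match (Makino et al. 1998).
i0 = np.argmin(np.abs(r - 1.0))
diff = np.log10(b / b[i0]) - np.log10(n / n[i0])
print("max |Delta log10 rho| for r > 1:", np.abs(diff[r > 1.0]).max())
```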
As shown in Table 2 and Figure 1, both the $`\beta `$ model and the NFW ICM model give acceptable fits to the GIS count-rate profiles. The obtained core radius and $`\beta `$ are consistent with those obtained with Einstein (Jones and Forman 1984). Although Jones and Forman (1984) found a central excess above the $`\beta `$ model in the Einstein IPC data of A1060, the GIS data do not indicate a significant excess in the central region. This is probably because the IPC has a better spatial resolution (a half power diameter of $`1^{}.2`$) than ASCA.
### 4.2 Model Fittings to the PSPC Data
To further investigate the ICM density profile, we utilize the ROSAT PSPC data. The PSPC has a better spatial resolution of $`25^{\prime \prime }`$ in half power diameter, compared to ASCA. Accordingly, we expect to obtain stronger constraints on the ICM density profile.
We followed a standard method (Zimmermann et al. 1997) to convert the archival PSPC data of the cluster into a surface brightness profile in the 0.5–2.0 keV range. We derived a radial count-rate profile by accumulating photons within a number of ring-shaped regions in $`30^{\prime \prime }`$ bins. Dividing this count-rate profile by an exposure map, we obtained an azimuthally averaged surface brightness profile as a function of projected radius. The cluster emission is detected at least up to radii of $`30^{}`$ with signal-to-background ratios larger than 0.2. We did not consider the finite width of the PSF of the PSPC, since its scale is similar to or less than the binning of the count-rate profile; in other words, we assume the image response matrix to be diagonal, and accordingly we cannot discuss structures smaller than the PSF. Since background estimation is rather difficult with the PSPC, the background was added to the fitting model as a flat component, with a free normalization and a 5% systematic uncertainty.
We fitted the obtained PSPC profile first with the $`\beta `$ model. As shown in Table 2 and illustrated in the corresponding figure, the $`\beta `$ model is rejected with high confidence, because of a significant data excess above the model within a radius of $`1^{}`$. We then fitted the data with the NFW ICM model. This model describes the data better than the $`\beta `$ model, although the fit is still formally unacceptable (Table 2).
### 4.3 A Modified ICM Model
The observed PSPC brightness still exhibits an excess in the central region above the NFW ICM model. To reproduce this excess, we modified the NFW ICM model function by introducing another parameter $`n`$, as
$`\rho _{\mathrm{icm}}^{\mathrm{NFW}^{\prime }}(r)`$ $`\propto `$ $`\left[1+\left({\displaystyle \frac{r}{r_\mathrm{s}}}\right)^n\right]^{\frac{Br_\mathrm{s}}{r}}.`$ (4)
The case $`n=1`$ corresponds to the NFW ICM model; the smaller the index $`n`$, the steeper the ICM density of equation (4) becomes within $`r_\mathrm{s}`$ of the center. Although this functional form is not particularly simple, it is a natural generalization of the NFW ICM and $`\beta `$ models, matching their behavior in the outer region $`r/r_\mathrm{s}>1`$ for $`n>0.95`$. We hereafter call equation (4) the modified NFW ICM model. We fitted this model to the PSPC brightness profile and obtained an acceptable fit with $`n=0.97`$ (Table 2 and the corresponding figure).
These results imply that the ICM of A1060 is more concentrated towards the cluster center than predicted by the $`\beta `$ model. The fitting result with the NFW ICM model suggests that the ICM is even more concentrated towards the center than predicted by the NFW potential under the isothermal assumption. Consequently, either the gravitational potential is deeper than the NFW model \[eq.(1)\], or the ICM temperature distribution deviates from isothermality. More quantitatively, an isothermal ICM takes the form of eq.(4) when it is confined in a gravitational potential whose total mass density is given by
$`\rho _{\mathrm{tot}}^{\mathrm{NFW}^{\prime }}`$ $`\propto `$ $`x^{2n-3}(1+x^n)^{-2}\left[1+\left({\displaystyle \frac{1-n}{n}}\right){\displaystyle \frac{1+x^n}{x^n}}\right]`$ (5)
with $`x\equiv \frac{r}{r_\mathrm{s}}`$. In Figure 2 we present this total mass density profile with $`n=0.97`$, as obtained above, together with its integral form. Over the observed central region, $`\rho _{\mathrm{tot}}^{\mathrm{NFW}^{\prime }}`$ with $`n=0.97`$ is approximated as $`r^{-1.5}`$, compared to the original NFW profile which scales as $`r^{-1.0}`$.
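The effective central slope quoted above can be checked numerically. The following sketch (not from the paper; the representative radius is an assumption) evaluates the logarithmic slope of equation (5) for $`n=1`$ and $`n=0.97`$:

```python
import numpy as np

def rho_tot(x, n):
    """Total mass density of eq. (5), x = r/r_s, up to normalization."""
    return (x ** (2 * n - 3) * (1 + x ** n) ** (-2)
            * (1 + ((1 - n) / n) * (1 + x ** n) / x ** n))

x, h = 0.05, 1e-4          # a representative central radius (assumed)
for n in (1.00, 0.97):
    slope = (np.log(rho_tot(x * (1 + h), n)) -
             np.log(rho_tot(x * (1 - h), n))) / np.log((1 + h) / (1 - h))
    # prints roughly -1.1 for n = 1 (NFW-like) and -1.5 for n = 0.97
    print(f"n = {n}: d ln rho / d ln r = {slope:.2f}")
```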
We also note that the best-fit $`\beta `$ model parameters for the PSPC profile and those for the GIS (Table 2) are similar, even though the former fit is unacceptable. This suggests that the central excess brightness found with the PSPC is relatively independent of X-ray energy: if, for example, the central excess were due to a central temperature drop, we would have obtained a smaller core radius and a flatter $`\beta `$ with the PSPC than with the GIS. We further address these issues in the next section.
## 5 COMBINED SPECTRAL AND SPATIAL ANALYSIS: MASS CALCULATION
### 5.1 Motivations and Method
In this section, we estimate the total gravitating mass distribution while allowing the ICM to deviate from isothermality within the tolerance of the ASCA plus ROSAT spectroscopy. To do this, we consider the three dimensional temperature and density profiles of the ICM together, by combining the spectral and spatial analyses. The temperature profile was constrained in §3, but only as a projection onto two dimensions. Furthermore, the derived ICM density profile might change if we properly take into account small variations in the X-ray spectrum across the cluster, which were neglected in §4.
We hereafter utilize the GIS spectra and the PSPC brightness profile simultaneously, because ASCA has superior spectroscopic capabilities while ROSAT has a better angular resolution. To our knowledge there have so far been no such attempts, except for Ikebe et al. (1999), even though many investigators use the ASCA and ROSAT data jointly. We do not attempt to analyze the SIS data, for the same reason as mentioned in §3.1 and §4.
In order to find a cluster model that can simultaneously reproduce the ASCA spectra and the ROSAT radial brightness profile, we have used a new analysis scheme. It is a variation of the method of Hughes (1989), who examined the mass of the Coma cluster using spatially averaged spectra from Tenma and EXOSAT and X-ray imaging data from the Einstein IPC. Markevitch and Vikhlinin (1997) adopted a similar method to derive the total mass in Abell 2256.
The procedure consists of the following steps.
* We assume the total mass density $`\rho _{\mathrm{tot}}(r)`$ and the ICM mass density $`\rho _{\mathrm{icm}}(r)`$, both in spherically-symmetric analytic forms as functions of the three-dimensional radius $`r`$. Assuming $`\rho _{\mathrm{tot}}(r)`$ is equivalent to assuming the integrated total mass profile, or the gravitational potential. For simplicity's sake, we assume a single component total mass made of dark matter, instead of multi-component mass modeling that accounts for the baryonic components. This is reasonable because previous analyses showed that dark matter dominates the total mass of this cluster within the observed region (e.g., Loewenstein and Mushotzky 1996).
* The condition of hydrostatic equilibrium relates the total gravitating mass $`M_{\mathrm{tot}}(<r)`$ within $`r`$, $`\rho _{\mathrm{icm}}(r)`$, and pressure $`P=\frac{\rho _{\mathrm{icm}}(r)kT(r)}{\mu m_p}`$ where $`\mu `$ and $`m_p`$ are the mean molecular weight and proton mass, respectively, as
$`{\displaystyle \frac{dP}{dr}}`$ $`=`$ $`-{\displaystyle \frac{GM_{\mathrm{tot}}\rho _{\mathrm{icm}}}{r^2}}.`$ (6)
We can integrate this equation outward from $`r=0`$ for a given set of the temperature at $`r=0`$, $`\rho _{\mathrm{tot}}(r)`$, and $`\rho _{\mathrm{icm}}(r)`$ to obtain $`P(r)`$ and $`T(r)\propto P(r)/\rho _{\mathrm{icm}}(r)`$. However, the solution for $`T(r)`$ is very sensitive to the choice of $`T(0)`$, or the normalization of the total mass (Hughes 1989; Loewenstein 1994). Therefore, following Loewenstein (1994), we rewrite equation (6) as
$`{\displaystyle \frac{dP}{dz}}=GM_{\mathrm{tot}}\rho _{\mathrm{icm}}`$ $`\mathrm{with}`$ $`z\equiv {\displaystyle \frac{1}{r}}.`$ (7)
Integrating this equation from $`z=0`$ ($`r\rightarrow \mathrm{\infty }`$) with a boundary condition of $`P(z=0)\equiv P_{\mathrm{\infty }}`$, we can compute $`T(r)`$ numerically without specifying $`T(0)`$. A non-zero pressure at infinity ($`P_{\mathrm{\infty }}>0`$) would imply that the temperature goes to infinity at large radii; we consider such solutions unphysical and assume $`P_{\mathrm{\infty }}=0`$. Note that $`T(r)`$ must take a particular form in order for the assumed ICM to be in hydrostatic equilibrium in the assumed potential via equation (6), since $`\rho _{\mathrm{tot}}(r)`$ and $`\rho _{\mathrm{icm}}(r)`$ are independently specified in advance (a numerical sketch of this integration is given just after this list).
* According to the specification of $`\rho _{\mathrm{icm}}(r)`$ and $`T(r)`$, and utilizing the Raymond-Smith emission code, we analytically model the X-ray emissivity as a function of $`r`$ and energy. We assume a constant metallicity of 0.32 solar and the Galactic column density over the entire cluster. We transform the model emissivities into a set of expected spectra obtained with the GIS, using the image response matrices which take into account the projection effects and the XRT+GIS response. The model cluster is also converted to the simulated PSPC brightness in the 0.5–2.0 keV range.
* These model predictions are simultaneously fitted to the GIS ring-sorted spectra (5 regions: $`0^{}`$–$`3^{}`$, $`3^{}`$–$`6^{}`$, $`6^{}`$–$`9^{}`$, $`9^{}`$–$`12^{}`$, and $`12^{}`$–$`20^{}`$) and the PSPC radial surface brightness profile up to the projected radius of $`50^{}`$ (670$`h_{70}^{-1}`$ kpc). The fitting to the five GIS spectra takes into account not only their spectral shapes, but also their relative normalizations. The goodness of the model is evaluated through $`\chi ^2`$. Systematic errors were assigned as in the previous fittings. If necessary, the initial models ($`\rho _{\mathrm{tot}}`$ and $`\rho _{\mathrm{icm}}`$) are adjusted in order to improve $`\chi ^2`$.
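As referenced above, a minimal numerical sketch of the hydrostatic integration in the second step follows (not the authors' code; units and all parameter values are arbitrary, illustrative assumptions). It integrates equation (7) from $`z=0`$ with $`P_{\mathrm{\infty }}=0`$ for an NFW total mass and the modified NFW ICM density, and recovers the temperature profile up to an overall constant:

```python
import numpy as np

G = 1.0                                  # arbitrary units throughout
r_s, a, B, n = 1.0, 2.4, 10.2, 0.97      # illustrative parameters only

def m_tot(r):
    """Enclosed mass of an NFW density profile, up to normalization."""
    x = r / r_s
    return np.log(1.0 + x) - x / (1.0 + x)

def rho_icm(r):
    """Modified NFW ICM density, eq. (4), with ICM scale length a."""
    return (1.0 + (r / a) ** n) ** (B * a / r)

# Integrate dP/dz = G * M_tot * rho_icm from z = 0 (r -> infinity),
# with P(z = 0) = 0, down to an innermost radius r = 0.05 r_s.
z = np.linspace(1e-6, 1.0 / 0.05, 4000)
r = 1.0 / z
dPdz = G * m_tot(r) * rho_icm(r)
P = np.concatenate(([0.0],
                    np.cumsum(0.5 * (dPdz[1:] + dPdz[:-1]) * np.diff(z))))
T = P / rho_icm(r)                       # T(r) up to constant factors

i_ref = np.argmin(np.abs(r - 2.0))
for rr in (0.1, 0.5, 2.0):
    i = np.argmin(np.abs(r - rr))
    print(f"r/r_s = {rr}: T/T(2 r_s) = {T[i] / T[i_ref]:.2f}")
```

The point of the change of variable is visible here: no central temperature needs to be specified, and the shape of $`T(r)`$ is fully determined once $`\rho _{\mathrm{tot}}`$ and $`\rho _{\mathrm{icm}}`$ are chosen.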
Instead of the above approach, we could first assume the temperature profile in some analytic form; indeed, Henriksen and Mushotzky (1986) employed a form of $`T(r)\propto \rho ^\gamma `$, and David et al. (1995) assumed $`T(r)\propto r^\alpha `$, with $`\gamma `$ and $`\alpha `$ both being parameters. However, such a parameterization of the temperature along with a particular ICM density model severely constrains the range of mass profiles. Since our primary goal is to constrain the total mass, or its density, rather than to examine the temperature profile, we first assume mass distributions as described above, and then determine the temperature profile in such a way that the implied spectra are consistent with the observed data.
### 5.2 Results
#### 5.2.1 The NFW Potential Model
Using the method described above, we examine the NFW potential model \[eq.(1)\] with a central density cusp. For the ICM density, we assume the modified NFW ICM model \[eq.(4)\], which was found in the previous section to be the best representation of the PSPC radial profile. We denote the scale radius of the ICM density as $`a`$ and allow $`a`$ to take a value different from that of the total mass density, $`r_\mathrm{s}`$. The case $`n=1`$ and $`a=r_\mathrm{s}`$ corresponds to an isothermal ICM distribution.
The best-fit model is obtained with $`r_\mathrm{s}=14^{}.6`$ ($`190h_{70}^{-1}`$ kpc), $`a=35^{}.6`$, $`B=10.2`$, and $`n=0.97`$. This model is acceptable with $`\chi ^2/\nu =242/262`$. As shown in Figure 3, the solution reproduces both the ASCA spectra and the ROSAT radial brightness profile well. The value of $`n`$ agrees with that obtained in §4 assuming isothermality. The acceptable (99% confidence) range of the scale radius of the total mass is $`12^{}<r_\mathrm{s}<19^{}`$, which also overlaps considerably with those derived in § 4 with the GIS or PSPC. The total mass density and temperature profiles for a range of acceptable models are plotted as solid lines in Figure 4. These results are reliable only within $`20^{}`$, because the GIS data extend only to $`20^{}`$. The implied temperature exhibits a moderate drop within $`3^{}`$, because the best-fit ICM profile with $`n=0.97`$ increases towards the center more steeply than an isothermal ICM ($`n=1`$) in the NFW potential.
#### 5.2.2 The Power-law Potential Model
In §4.2, we found the PSPC radial brightness to be even more concentrated towards the center than predicted by the NFW potential. The underlying potential is therefore inferred to be deeper at the center than the simple NFW model. Hence, we examine a total mass distribution with a steeper central slope than the NFW one. We assume a total mass density model of the form
$`\rho _{\mathrm{tot}}^{\mathrm{Power}}`$ $`\propto `$ $`\left({\displaystyle \frac{r}{r_\mathrm{s}}}\right)^{-\eta }\left(1+{\displaystyle \frac{r}{r_\mathrm{s}}}\right)^{\eta -3},`$ (8)
where $`\eta `$ is a free parameter, and the case of $`\eta =1`$ corresponds to the NFW model of equation (1). Since the density slope $`\eta `$ and scale radius $`r_\mathrm{s}`$ were difficult to determine separately, we fixed $`r_\mathrm{s}`$ at $`100^{}`$ ($`1400h_{70}^{-1}`$ kpc). In this case, the total mass density is close to a power-law form, $`\rho _{\mathrm{tot}}\propto r^{-\eta }`$, in the central region of the cluster ($`r<20^{}`$) in which we are interested. We refer to this as the power-law density model. This profile approximates the total mass density of eq.(5), derived in § 4.2 based on the PSPC radial brightness profile and the isothermal assumption (Figure 2). The formula implies that the total mass diverges in proportion to $`\mathrm{log}(r)`$, as does the NFW model. When an isothermal ICM is confined within this power-law density potential, the ICM density profile becomes very close (though not exactly identical) to the modified NFW ICM profile of equation (4); $`n=0.97`$ corresponds to $`\eta \sim 1.5`$. Therefore, we express the ICM density profile by equation (4), to be combined with the total density profile of equation (8).
The best-fit model is obtained with $`\eta =1.53`$, $`a=35^{}.1`$, $`B=10.1`$, and $`n=0.97`$ (dashed lines in Figure 4). This model is also acceptable, with $`\chi ^2/\nu =248/262`$. The obtained slope of the total mass, $`\eta =1.53`$, is consistent with the result from the brightness profile analysis (§4, Figure 2), where we assumed isothermality of the ICM. In fact, the temperature in this solution is nearly isothermal (Figure 4), consistent with the result of the spectral analysis (§3). Acceptable fits (99% confidence) were obtained with $`1.42<\eta <1.65`$. When $`\eta `$ was fixed at 1.53 and $`r_\mathrm{s}`$ was left free, acceptable fits were obtained with $`r_\mathrm{s}>43^{}`$.
Moore et al. (1998) found a density profile steeper than the NFW model, through their $`N`$-body simulations in a standard CDM cosmology with much higher resolution than NFW97. They employed the form
$`\rho _{\mathrm{tot}}^{\mathrm{Moore}}`$ $`\propto `$ $`\left({\displaystyle \frac{r}{r_\mathrm{s}}}\right)^{-1.4}\left[1+\left({\displaystyle \frac{r}{r_\mathrm{s}}}\right)^{1.4}\right]^{-1}`$ (9)
with $`r_\mathrm{s}`$ being 0.18 times the virial radius. We examined this density model ($`r_\mathrm{s}=16^{}`$ in the case of A1060) and found that this is also consistent with the ASCA and ROSAT data. This is reasonable, since eq.(9) is fairly close to the best-fit solution based on the power-law model.
We may explain how total mass density models with $`\eta `$ outside the above range are rejected by the data. For example, if $`\eta `$ is too large, the model implies too much mass in the central region, requiring a steep central ICM pressure gradient to balance the deeper gravitational potential. Since the ICM density profile is tightly constrained by the PSPC brightness data, the ICM temperature then has to increase towards the center, and this temperature increase becomes inconsistent with the GIS spectrum in the inner region. Similarly, too small a value of $`\eta `$ requires too strong a central temperature decrease to be consistent with the GIS spectra.
#### 5.2.3 The King-type Potential Model
We also examine a more traditional King-type total mass profile of
$`\rho _{\mathrm{tot}}^{\mathrm{King}}`$ $`\propto `$ $`\left[1+\left({\displaystyle \frac{r}{r_\mathrm{c}}}\right)^2\right]^{-\frac{3}{2}},`$ (10)
having a flat density core at the center. This is an approximation to the inner portions of a self-gravitating sphere. At large radii, the density is proportional to $`r^{-3}`$, as in the NFW \[eq.(1)\] and power-law density \[eq.(8)\] models. For the ICM density, we could assume a $`\beta `$ model as usual. However, we already found in §4.2 that the isothermal $`\beta `$ model cannot describe the observed PSPC radial brightness, regardless of the potential profile. Therefore, we again employ the modified NFW ICM model \[eq.(4)\], as in the above two cases.
Utilizing the same method as in the previous subsections, we obtain the best-fit model with $`r_\mathrm{c}=5^{}.8`$ ($`83h_{70}^{-1}`$ kpc), $`a=36^{}.1`$, $`B=10.4`$, and $`n=0.97`$ (dotted lines in Figure 4). The fit is, however, poorer ($`\chi ^2/\nu =291/262`$) than for the NFW or power-law models, and can be rejected at 90% confidence. The King potential, by its nature, has a flat density core at the center, while the ICM density has a cusp. To compensate for the density increase and allow the pressure to balance the potential, the temperature is required to drop at the center, as seen in Figure 4. This disagrees with the good isothermality in the central region; the spectral discrepancy would become even more severe if we took into account the SIS spectrum.
## 6 SUMMARY AND DISCUSSION
Using the high quality data from ASCA and ROSAT, we have constrained the total mass profile in the central region ($`14h_{70}^{-1}`$–$`300h_{70}^{-1}`$ kpc in radius) of A1060. The spatially-resolved X-ray spectra obtained with the ASCA SIS, GIS, and ROSAT PSPC have been found to be consistent with isothermality of the ICM in the central region of A1060. Hence, as a first step, we assumed an isothermal ICM distribution and derived the gravitational potential profile based on the PSPC radial brightness profile. The potential was found to be deeper than the universal dark halo potential (NFW model) proposed by NFW96 and NFW97, whose density scales as $`r^{-1}`$ at the center. The total mass density profile has a central slope roughly proportional to $`r^{-1.5}`$.
As a second step, we allowed the ICM to deviate from isothermality and tried to reproduce the GIS and PSPC data simultaneously with the NFW model, its modification (the power-law model), or the widely-assumed King model. Among the three, the first two models give successful descriptions of the spectral and spatial data. On the other hand, we could find no acceptable solution based on the King potential with a flat density core: the King model requires a temperature profile that deviates from isothermality, resulting in a poorer fit to both the GIS and PSPC data. Therefore, we conclude that the total mass and ICM temperature profiles implied by the King model are less likely than those implied by the NFW or power-law density solutions.
Based on the NFW and power-law density solutions, we evaluate the radial density profiles of the various mass components. In Figures 5 (a) and (b) we show the total mass profile, together with the ICM and stellar mass density profiles. The ICM density was derived from the X-ray surface brightness distribution, while the stellar mass densities were estimated from the optical luminosity distributions of the central galaxy (NGC 3311; Västerberg et al. 1991) and of the cluster (Fitchett and Merritt 1988), assuming a constant mass-to-light ratio of $`M_{\mathrm{stellar}}/L_{\mathrm{blue}}=8`$, where $`M_{\mathrm{stellar}}`$ and $`L_{\mathrm{blue}}`$ represent the mass of the stellar component and the blue luminosity, respectively. In the region $`14h_{70}^{-1}\mathrm{kpc}<r<290h_{70}^{-1}\mathrm{kpc}`$, the total mass of the luminous matter (the ICM and stellar components) is less than 20% of the derived total mass, and hence dark matter dominates the total mass. This is consistent with the previous estimate of the baryon fraction in A1060 (Loewenstein and Mushotzky 1996), namely 11–16% within 500 kpc assuming $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, and justifies our formalism in which the baryonic component has been neglected. We therefore conclude that the dark matter radial profile in A1060 has a central density cusp, consistent with the NFW prediction, instead of a flat density core.
In Figure 5 (c) we plot the integrated ICM mass fraction relative to the total mass as a function of radius, based on our solutions. This plot clearly indicates that the total mass, and hence the dark matter, is much more concentrated than the ICM over the region $`20h_{70}^{-1}\mathrm{kpc}\lesssim r\lesssim 200h_{70}^{-1}\mathrm{kpc}`$, i.e., on the "galaxy scale". What produces this distinction? We consider it a result of the difference in temperature distribution between the collision-less dark matter and the collisional ICM. In a bottom-up scenario of cosmic structure formation, small-scale structures with shallow gravitational potentials collapse at an early stage and settle into the cluster center, while the entire cluster collapses more recently, when the total gravitational energy becomes large. As a result, a "dynamically cool central region" develops and persists in the dark matter because of the lack of interaction between the particles; consequently, the dark matter is concentrated towards the center. This kind of temperature inversion was predicted by $`N`$-body simulations based on hierarchical clustering (e.g., NFW96; Fukushige & Makino 1997). In contrast, such a dynamically cool central region did not develop in the ICM, as a result of strong heat conduction. Hence, the ICM is more spread out than the dark matter.
In addition to the difference in dynamics between the dark matter and the ICM, extra-gravitational heating could make the ICM more extended than the dark matter. For example, galactic winds during the cluster formation stage could heat the ICM effectively, and this pre-heating may result in excess entropy of the central ICM. Indeed, Ponman, Cannon, & Navarro (1999) found such an excess in clusters and galaxy groups with ICM temperatures of $`<4`$ keV, by comparing the central ICM entropy and ICM temperature of clusters with those obtained from the $`N`$-body/gas-dynamical simulations of Eke, Navarro, & Frenk (1998; hereafter ENK98). They calculated the entropy, $`T/n_\mathrm{e}^{2/3}`$, where $`T`$ and $`n_\mathrm{e}`$ are the ICM temperature and electron density, at a fiducial radius of 0.1 times the virial radius; note that ENK98 do not include extra-gravitational heating. Similarly to the samples of Ponman, Cannon, & Navarro (1999), A1060 has an entropy higher by a factor of $`\sim 2`$ than the ENK98 prediction. We also note that some studies based on $`N`$-body/gas-dynamical simulations indicate that heating of the ICM before the gravitational collapse strongly affects the evolution of the ICM properties, and hence the statistical properties of clusters, but insignificantly affects the ICM distribution of the equilibrium system (Metzler & Evrard 1994; Navarro, Frenk, & White 1995). Furthermore, these simulations and ENK98 do not include radiative cooling of the ICM, which could affect the central distribution of the ICM as the pre-heating does. Therefore, we consider it rather difficult at this time to find evidence for extra-gravitational heating by comparing the derived ICM distribution of a cluster with theoretical predictions.
Based on the NFW model, we find the scale length $`r_\mathrm{s}`$ and the normalization of the total mass in units of the critical density of the universe, $`\delta _\mathrm{c}`$, to be $`(190_{-30}^{+60})h_{70}^{-1}`$ kpc and $`(1.2_{-0.4}^{+0.5})\times 10^4`$, respectively. The virial mass $`M_{200}`$ (defined, e.g., in NFW97) is then calculated to be $`(2.1_{-0.2}^{+0.5})\times 10^{14}h_{70}^{-1}M_{\odot }`$. Through $`N`$-body simulations based on the standard CDM universe ($`\mathrm{\Omega }_0=1`$), NFW97 predicted a $`\delta _\mathrm{c}`$ of $`2\times 10^4`$ for a virial mass of $`2\times 10^{14}h_{70}^{-1}M_{\odot }`$ (see Figure 9 of NFW97), which agrees with our result within a factor of 2. In the Centaurus cluster, Ikebe et al. (1999) also found similar agreement between the X-ray measurements and the NFW prediction.
We have shown that a total mass profile with a steeper central density profile, $`\rho \propto r^{-\eta }`$ with $`1.42<\eta <1.65`$ (the power-law solution), is consistent with the data, as is the original NFW model with $`\rho \propto r^{-1}`$. Recent $`N`$-body simulations with much higher resolution than those of NFW97 show such steeper density profiles of dark halos (e.g., Fukushige & Makino 1997; Moore et al. 1998), in good agreement with our best-fit solution. In fact, we showed that the density profile found by Moore et al. (1998), through the $`N`$-body simulation with the highest resolution, is consistent with the data. Accordingly, the results of $`N`$-body simulations in a hierarchical clustering universe are consistent with our observational results.
We thank all the members of the ASCA team, led by Y. Tanaka, and the ROSAT team for making this study possible. We also thank S. Sasaki, Y. Suto, M. Sekiguchi, and H. Böhringer for constructive discussions. We appreciate the valuable comments provided by the referee. The ASCA data were mainly analyzed utilizing software developed by the ASCA-ANL and SimASCA teams. The ROSAT data were obtained through the ROSAT Archive Browser, provided by the Max-Planck-Institut für extraterrestrische Physik. T.T. is supported by the post-doctoral program of the Japan Society for the Promotion of Science.
no-problem/0001/astro-ph0001490.html | ar5iv | text | # Detection of Stellar Spots from the Observations of Caustic-Crossing Binary-Lens Gravitational Microlensing Events
## 1 Introduction
Massive searches for gravitational microlensing events, monitoring transient brightening of source stars located in the Galactic bulge and the Magellanic Clouds, have been and are being carried out by several groups (EROS: Aubourg et al. 1993; MACHO: Alcock et al. 1993; OGLE: Udalski et al. 1993; DUO: Alard & Guibert 1997). These surveys have detected $`\sim 400`$ events to date (Stubbs 1999).
The light curve of a single-lens microlensing event (denoted by the subscript "s") with a point source (denoted by the subscript "0") is given by
$$A_{\mathrm{s},0}=\frac{u^2+2}{u(u^2+4)^{1/2}},$$
$`(1.1)`$
where $`u`$ is the lens-source separation in units of the angular Einstein ring radius $`\theta _\mathrm{E}`$. The angular Einstein ring radius is related to the physical parameters of the lens by
$$\theta _\mathrm{E}=\left(\frac{4GM}{c^2}\frac{D_{ls}}{D_{ol}D_{os}}\right)^{1/2},$$
$`(1.2)`$
where $`M`$ is the lens mass and $`D_{ol}`$, $`D_{ls}`$, and $`D_{os}`$ are the separations between the observer, lens, and source star, respectively. Typical main-sequence stars in the Galactic bulge have radii that subtend only $`\sim 1`$ $`\mu `$-arcsecond, while the angular Einstein ring radius of an event caused by a solar mass lens with $`D_{ol}\sim 5`$ kpc is $`\theta _\mathrm{E}\sim 0.3`$ milli-arcsecond. Therefore, equation (1.1) is a good approximation for the majority of Galactic microlensing events.
However, for an event with a very close lens-source impact and a source star of considerable radius, such as a subgiant or giant, the source can no longer be approximated as a point. In this case, different parts of the source are amplified by different amounts (differential amplification) due to the finite size of the source star, and the resulting light curve deviates from the point-source one (Schneider & Weiss 1986; Witt & Mao 1994). The light curve of an extended source event caused by a single lens is given by the intensity-weighted amplification averaged over the surface of the source star, i.e.
$$A_\mathrm{s}(r_{*})=\frac{\int _0^{2\pi }\int _0^{r_{*}}I(r,\vartheta )A_{\mathrm{s},0}(\left|\mathbf{r}-\mathbf{r}_L\right|)r\,dr\,d\vartheta }{\int _0^{2\pi }\int _0^{r_{*}}I(r,\vartheta )r\,dr\,d\vartheta },$$
$`(1.3)`$
where $`r_{*}`$ is the radius of the source star, $`I(r,\vartheta )`$ is the surface intensity distribution of the source star, and the vectors $`\mathbf{r}_L`$ and $`\mathbf{r}=(r,\vartheta )`$ represent the displacement vector of the center of the source star with respect to the lens and the orientation vector of a point on the source star surface with respect to the source center, respectively.
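For concreteness, the following minimal numerical sketch (not from the paper; the grid resolution and geometry are assumptions) evaluates equations (1.1) and (1.3) for a uniform source disk, integrating in polar coordinates about the lens so that the integrand $`A_{\mathrm{s},0}(u)\,u`$ stays regular at $`u=0`$:

```python
import numpy as np

def A_point(u):
    """Point-source single-lens magnification, eq. (1.1)."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def A_finite(d, r_star, n_u=400, n_th=400):
    """Uniform-disk source of radius r_star (in Einstein radii) whose
    centre lies a distance d from the lens, eq. (1.3). Polar grid about
    the lens, where A_point(u) * u -> 1 as u -> 0."""
    u = np.linspace(1e-9, d + r_star, n_u)
    th = np.linspace(0.0, 2.0 * np.pi, n_th, endpoint=False)
    U, TH = np.meshgrid(u, th)
    # keep grid points that fall on the source disk
    on_disk = U**2 + d**2 - 2.0 * U * d * np.cos(TH) <= r_star**2
    integral = (A_point(U) * U * on_disk).sum() * (u[1] - u[0]) * (th[1] - th[0])
    return integral / (np.pi * r_star**2)

# At d = 0 this reproduces the analytic value sqrt(r_star^2 + 4)/r_star.
for d in (0.0, 0.05, 0.3):
    print(f"d = {d}: A = {A_finite(d, r_star=0.1):.2f}")
```

With the lens well outside the disk the result converges to $`A_{\mathrm{s},0}(d)`$, while inside it the finite source size smooths out the point-source divergence.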
By observing the distortions in microlensing light curves caused by the finite source effect, one can obtain useful information about both the lens and the source star. First, the finite source effect can be used to determine $`\theta _\mathrm{E}`$, with which one can partially break the lens parameter degeneracy in the measured Einstein time scale $`t_\mathrm{E}`$ (Gould 1994; Nemiroff & Wickramasinghe 1994; Maoz & Gould 1994; Peng 1997). Second, since different parts of the source star (with varying surface intensity and spectral energy distribution) are resolved at different times during an event, one can recover the intensity profile of the source (Witt 1995; Loeb & Sasselov 1995; Gould & Welch 1996) and probe the stellar atmosphere (Valls-Gabaud 1994, 1998; Sasselov 1997; Gaudi & Gould 1999) by taking a sequence of photometric and spectro-photometric measurements of the event.
Recently, Heyrovský & Sasselov (1999) investigated the sensitivity of single-lens microlensing event light curves to small spots with radii $`r_\mathrm{s}\lesssim 0.2`$ source radii. From this investigation, they found that during source transit events spots can cause deviations in amplification larger than 2%, and can thus be detectable. In this paper, we explore the feasibility of spot detection from observations of caustic-crossing binary-lens microlensing events instead of single-lens events. For this purpose we investigate the sensitivity of binary-lens event light curves to spots and compare it to that of single-lens events. We find that during caustic crossings the fractional amplification deviations from spotless source events are comparable to those of single-lens events, implying that spots can be detected with a photometric precision similar to that required for spot detection in single-lens events. We discuss the relative advantages of observing caustic-crossing binary-lens events over single-lens events in detecting stellar spots.
## 2 Single-Lens Events for Finite Source Stars with Spots
If the surface of a source star is maculated by a spot, the light curve of a single-lens microlensing event becomes
$$A_{\mathrm{s},\mathrm{spot}}=\frac{\int _0^{2\pi }\int _0^{r_{*}}I(r,\vartheta )A_{\mathrm{s},0}(\left|\mathbf{r}-\mathbf{r}_L\right|)r\,dr\,d\vartheta -\int _{\mathrm{\Sigma }_{\mathrm{spot}}}f(\mathbf{r}^{\prime })I(\mathbf{r}^{\prime })A_{\mathrm{s},0}(\left|\mathbf{r}^{\prime }-\mathbf{r}_L\right|)\,d\mathrm{\Sigma }_{\mathrm{spot}}}{\int _0^{2\pi }\int _0^{r_{*}}I(r,\vartheta )\left[1-f(r,\vartheta )\right]r\,dr\,d\vartheta },$$
$`(2.1)`$
where $`\mathbf{r}^{\prime }`$ is the orientation vector of a point on the surface of the spot with respect to the source center, $`f(\mathbf{r}^{\prime })`$ represents the fractional decrement in the surface intensity due to the spot, and the notation $`\int _{\mathrm{\Sigma }_{\mathrm{spot}}}\mathrm{\cdots }\,d\mathrm{\Sigma }_{\mathrm{spot}}`$ represents the surface integral over the spot region of the source star.
Due to the presence of the spot, the event light curve deviates from that of a spotless event. To see the pattern of the deviations in microlensing light curves caused by spots and to explore the feasibility of spot detection by this method, we compute the fractional amplification deviation in the light curve from that of a spotless event, i.e.
$$\epsilon _\mathrm{s}=\frac{\left|A_{\mathrm{s},\mathrm{spot}}-A_\mathrm{s}\right|}{A_\mathrm{s}},$$
$`(2.2)`$
by using equations (1.3) and (2.1). For the computation of $`\epsilon _\mathrm{s}`$, we assume a constant surface brightness $`I_{*}`$ for the entire region of the source star outside the spot. This is partially because limb darkening does not have a significant effect on the light curve, but more importantly because we want to see the deviation caused solely by the spot. The spot is modeled as a circular area with a radius $`r_{\mathrm{spot}}`$ and also has a uniform surface brightness $`I_{\mathrm{spot}}`$. We test two cases of events, for which the source stars have radii of $`r_{*}=0.05\theta _\mathrm{E}`$ and $`0.1\theta _\mathrm{E}`$. For both cases, the spots have relative radii of $`r_{\mathrm{spot}}/r_{*}=0.2`$ with a surface brightness contrast parameter of $`\mathcal{C}=I_{*}/I_{\mathrm{spot}}=1/(1-f)=10`$. With these assumptions, the light curve in equation (2.1) simplifies to
$$A_{\mathrm{s},\mathrm{spot}}(r_{*},r_{\mathrm{spot}},f)=\frac{\int _0^{2\pi }\int _0^{r_{*}}A_{\mathrm{s},0}(\left|\mathbf{r}-\mathbf{r}_L\right|)r\,dr\,d\vartheta -f\int _0^{2\pi }\int _0^{r_{\mathrm{spot}}}A_{\mathrm{s},0}(\left|\mathbf{r}^{\prime }-\mathbf{r}_L\right|)r^{\prime }\,dr^{\prime }\,d\vartheta ^{\prime }}{\pi (r_{*}^2-fr_{\mathrm{spot}}^2)}.$$
$`(2.3)`$
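A minimal numerical sketch of equations (2.2) and (2.3) follows (not from the paper; the collinear lens–star–spot geometry and the grid resolution are assumptions). It reuses the polar-grid integration idea above and prints the fractional deviation for a few lens positions:

```python
import numpy as np

def A_point(u):
    """Point-source single-lens magnification, eq. (1.1)."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def disk_integral(d, R, n_u=400, n_th=400):
    """Integral of A_point over a disk of radius R whose centre lies a
    distance d from the lens (polar coordinates about the lens)."""
    u = np.linspace(1e-9, d + R, n_u)
    th = np.linspace(0.0, 2.0 * np.pi, n_th, endpoint=False)
    U, TH = np.meshgrid(u, th)
    on_disk = U**2 + d**2 - 2.0 * U * d * np.cos(TH) <= R**2
    return (A_point(U) * U * on_disk).sum() * (u[1] - u[0]) * (th[1] - th[0])

# r_spot = 0.2 r_star, f = 0.9 (i.e. C = 10), spot offset s = 0.5 r_spot.
r_star, r_spot, f, s = 0.1, 0.02, 0.9, 0.01
for d in (0.005, 0.05, 0.2):                 # lens to star-centre distances
    A_plain = disk_integral(d, r_star) / (np.pi * r_star**2)
    A_spotty = ((disk_integral(d, r_star) - f * disk_integral(abs(d - s), r_spot))
                / (np.pi * (r_star**2 - f * r_spot**2)))
    # the deviation is largest when the lens passes over the spot itself
    print(f"d = {d}: eps_s = {abs(A_spotty - A_plain) / A_plain:.3%}")
```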
In the upper panels of Figure 1, we present the contours of the deviation $`\epsilon _\mathrm{s}`$ as a function of the lens position $`(x_L,y_L)`$. In each panel, the two circles represent the source star (big empty circle centered at the origin) and the stellar spot on it (small filled circle), respectively. The spot is located at $`s=0.5r_{\mathrm{spot}}`$ from the center of the source star. Contours are drawn with a spacing of 0.2% from $`\epsilon _\mathrm{s}=0.2\%`$, and the regions with $`\epsilon _\mathrm{s}\geq 1\%`$ and $`\epsilon _\mathrm{s}\geq 2\%`$ are shaded by darkening gray tones. In the lower panels, we also present several example light curves of events for source stars with (solid lines) and without spots (dotted lines); the corresponding lens trajectories (dot-long dashed lines) are marked in the upper panels. Each pair of trajectory and corresponding light curve is marked by the same number. We note that our Figure 1 is equivalent to Figures 1 and 2 of Heyrovský & Sasselov (1999), except that their adopted source size is $`r_{*}=\theta _\mathrm{E}/13.23`$ and all their lengths are scaled by the source size, not by the angular Einstein ring radius.
From the figure, one finds that the deviation can be larger than 2%, as noted by Heyrovský & Sasselov (1999), and thus spots can, in principle, be detectable. However, the region of noticeable deviations (e.g., $`\epsilon _\mathrm{s}\geq 2\%`$) is confined to a very small localized area close to the spot. Even for source-transit events, unless the lens almost directly crosses the spot, the deviations will not be large enough to reveal the existence of the spot. This implies that with a reasonable photometric precision, spots can be detected only for a very limited number of (almost) direct spot-transit events.
## 3 Caustic-Crossing Binary-Lens Events for Finite Source Stars with Spots
In the previous section, we investigated the effect of stellar spots on the light curves of single-lens microlensing events. In this section, we investigate how spots on source stars affect the light curves of caustic-crossing binary-lens events.
When lengths are normalized to the combined Einstein ring radius $`r_\mathrm{E}`$, which is equivalent to the Einstein ring radius of a single lens with a mass equal to the total mass of the binary, the lens equation in complex notations for a binary-lens system with a point source is given by
$$\zeta =z+\frac{m_1}{\overline{z}_1-\overline{z}}+\frac{m_2}{\overline{z}_2-\overline{z}},$$
$`(3.1)`$
where $`m_1`$ and $`m_2`$ are the mass fractions of the individual lenses (and thus $`m_1+m_2=1`$), $`z_1`$ and $`z_2`$ are the positions of the lenses, $`\zeta =\xi +i\eta `$ and $`z=x+iy`$ are the positions of the source and images, and $`\overline{z}`$ denotes the complex conjugate of $`z`$ (Witt 1990). The amplification of each image, $`A_i`$, is given by the Jacobian of the transformation (3.1) evaluated at the image position, i.e.
$$A_{\mathrm{b},0,i}=\left(\frac{1}{|\mathrm{det}J|}\right)_{z=z_i};\mathrm{det}J=1-\frac{\partial \zeta }{\partial \overline{z}}\overline{\frac{\partial \zeta }{\partial \overline{z}}}.$$
$`(3.2)`$
Then the total amplification of a binary-lens event (denoted by the subscript "b") with a point source is given by the sum of the amplifications of the individual images, i.e. $`A_{\mathrm{b},0}=\sum _iA_{\mathrm{b},0,i}`$. The set of source positions with infinite amplification, i.e. $`\mathrm{det}J=0`$, forms closed curves called caustics. Therefore, whenever a point source crosses a caustic, the amplification becomes formally infinite, producing a sharp peak in the light curve. Since the caustics form closed figures, a caustic-crossing binary-lens event involves at least two crossings.
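As an illustration, the caustics can be traced numerically from the condition $`\mathrm{det}J=0`$: writing $`\partial \zeta /\partial \overline{z}=e^{i\varphi }`$ gives, for each phase $`\varphi `$, a fourth-order polynomial in $`z`$ whose roots (the critical curves) are mapped to the source plane through the lens equation. The sketch below uses this standard technique (not the authors' code; the equal-mass, $`a=1`$ lens is an illustrative choice):

```python
import numpy as np

z1, z2, m1, m2 = -0.5 + 0j, 0.5 + 0j, 0.5, 0.5   # equal masses, a = 1

def caustic_points(n_phi=200):
    p1 = np.poly([z2, z2])            # coefficients of (z - z2)^2
    p2 = np.poly([z1, z1])            # coefficients of (z - z1)^2
    p12 = np.polymul(p1, p2)          # (z - z1)^2 (z - z2)^2
    pts = []
    for phi in np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False):
        # critical curve: m1/(z - z1)^2 + m2/(z - z2)^2 = exp(-i phi)
        coeff = np.polysub(np.exp(-1j * phi) * p12,
                           np.polyadd(m1 * p1, m2 * p2))
        zc = np.roots(coeff)          # four critical-curve points
        # map them to the source plane with the lens equation (3.1)
        pts.append(zc + m1 / np.conj(z1 - zc) + m2 / np.conj(z2 - zc))
    return np.concatenate(pts)

zeta = caustic_points()
print("caustic extent (Re):", zeta.real.min(), zeta.real.max())
```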
The finite source effect also affects the light curves of binary-lens events. The light curve of a binary-lens event with a finite source, $`A_\mathrm{b}`$, is obtained in a similar fashion to the single-lens case, i.e. as the intensity-weighted amplification averaged over the surface of the source star, as in equation (1.3), except that the single-lens point-source amplification $`A_{\mathrm{s},0}`$ is replaced by the binary-lens amplification for a point source, $`A_{\mathrm{b},0}`$. Due to the finite source effect, the observed amplification remains finite even during the caustic crossings.
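In practice, the finite-source binary-lens amplification is conveniently obtained by inverse ray shooting, a standard technique (the sketch below is not the authors' code; grid sizes and the source position are illustrative assumptions): rays on a regular grid in the image plane are mapped to the source plane with equation (3.1), and the binned ray density, normalized by the unlensed shot density, gives the magnification map, which can then be averaged over the source disk:

```python
import numpy as np

z1, z2, m1, m2 = -0.5 + 0j, 0.5 + 0j, 0.5, 0.5   # same lens as above

# Shoot rays on a grid in the image plane and map them via eq. (3.1).
L, N = 2.0, 1500                       # image-plane half-size and grid
x = np.linspace(-L, L, N)
Z = x[None, :] + 1j * x[:, None]
zeta = Z + m1 / np.conj(z1 - Z) + m2 / np.conj(z2 - Z)

# Bin the landing positions; magnification = ray density / shot density.
# (Rays from |z| > L are missing, so the map is reliable only well
#  inside the source-plane window.)
Ls, nb = 1.0, 200                      # source-plane half-size and bins
zr, zi = zeta.real.ravel(), zeta.imag.ravel()
ok = np.isfinite(zr) & np.isfinite(zi)
H, _, _ = np.histogram2d(zr[ok], zi[ok], bins=nb,
                         range=[[-Ls, Ls], [-Ls, Ls]])
mag = H / ((N / (2 * L)) ** 2 * (2 * Ls / nb) ** 2)

# Finite-source amplification: average the map over the stellar disk.
c = np.linspace(-Ls, Ls, nb + 1)[:-1] + Ls / nb   # bin centres
RE, IM = np.meshgrid(c, c, indexing="ij")         # H axis 0 is Re(zeta)
disk = (RE - 0.1) ** 2 + IM ** 2 <= 0.05 ** 2     # source at 0.1, r = 0.05
print("A_b(finite source) ~", mag[disk].mean())
```

A spotted source is handled the same way: the disk average is simply replaced by the intensity-weighted average of equation (2.1), with the spot region down-weighted by $`f`$.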
If a spot exists on the surface of a finite source, the light curve further deviates from $`A_\mathrm{b}`$. The amplification of the binary-lens event for a source star with a spot, $`A_{\mathrm{b},\mathrm{spot}}`$, is obtained by using equation (2.3), but with $`A_{\mathrm{b},0}`$ instead of $`A_{\mathrm{s},0}`$. To see how the light curves of binary-lens events are affected by spots and to explore the feasibility of using binary-lens events for spot detection, we compute the fractional deviation in the amplification from that of a spotless event by
$$\epsilon _\mathrm{b}=\frac{\left|A_{\mathrm{b},\mathrm{spot}}-A_\mathrm{b}\right|}{A_\mathrm{b}},$$
$`(3.3)`$
and the result is presented in Figure 2. In the upper panel of Figure 2, we present the contours of $`\epsilon _\mathrm{b}`$ in the vicinity of the caustics of an example binary-lens system with a binary separation (normalized by $`\theta _\mathrm{E}`$) and mass ratio of $`a=1.0`$ and $`q=1.0`$, respectively. The closed figure (marked by thick solid curves) in each panel represents the caustics of the binary-lens system. The contours are drawn at the levels of $`\epsilon _\mathrm{b}=1\%`$ and 2%, and the regions with $`\epsilon _\mathrm{b}\geq 1\%`$ and $`\epsilon _\mathrm{b}\geq 2\%`$ are shaded by darkening gray tones. For direct comparison of the deviations to those of the single-lens events in Figure 1, we adopt the same radii of source stars and spots, i.e. $`r_{*}=0.05\theta _\mathrm{E}`$ and $`0.1\theta _\mathrm{E}`$ and $`r_{\mathrm{spot}}=0.2r_{*}`$, and the same surface brightness contrast, i.e. $`\mathcal{C}=10`$. Unlike the single-lens event cases, however, we place the spots at the center of the source stars, i.e. $`s=0`$; we discuss the dependence of the deviation $`\epsilon _\mathrm{b}`$ on the spot position in the following paragraph. In the lower panels, we also present several light curves for source stars with (solid lines) and without spots (dotted lines). The source star trajectories corresponding to the individual light curves are represented by dot-long dashed lines in the upper panels, and each pair of light curve and trajectory is marked by the same number.
From the figure, one finds the following patterns of $`\epsilon _\mathrm{b}`$. First, significant deviations in amplification occur in the regions along the caustics, implying that noticeable deviations in the light curve occur when the spot crosses the caustic. If the spot is located at a different position on the source star, the regions of optimal spot detection will change, because the spot will cross the caustic at a different time; however, since the spot is confined within a small region of the source star, the change will be very slight. Second, since the region of significant deviation surrounds most of the caustic lines, one can detect the deviation for nearly all caustic-crossing events, regardless of the lens trajectory. Third, while the center-to-limb surface intensity variation, another important stellar surface structure, produces very smooth deviations in the light curve (e.g., the Galactic bulge event MACHO 97-BLG-28; Albrow et al. 1999a), the deviations caused by the spot are bumpy. Therefore, one can easily separate the deviations in the light curve caused by the spot.
## 4 Single-Lens Versus Binary-Lens Events
Although both single-lens and binary-lens microlensing events can be used to detect stellar spots, observing caustic-crossing binary-lens events has the following relative advantages over observing source-transit single-lens events.
First, caustic-crossing binary-lens events are much more common than source-transit single-lens events. To date, a total of 11 candidate caustic-crossing binary-lens events have been reported. These include MACHO LMC#1 (Dominik & Hirshfeld 1994, 1996; Rhie & Bennett 1996), OGLE#7 (Udalski et al. 1994), DUO#2 (Alard, Mao, & Guibert 1995), 97-BLG-28 (Albrow et al. 1999a), 98-SMC-1 (Afonso et al. 1998; Albrow et al. 1999b; Alcock et al. 1999), and 96-BLG-3, 97-BLG-1, 97-BLG-41, 98-BLG-12, 98-BLG-42, and 99-BLG-28 (http://darkstar.astro.washington.edu). On the other hand, only one candidate source-transit single-lens event has been reported (MACHO Alert 95-30; Alcock et al. 1997). In addition, while one can detect the deviations caused by spots for nearly all caustic-crossing events, spot detection for a significant fraction of source-transit single-lens events will be difficult, because only nearly direct spot-transit events produce deviations large enough to detect spots.
Second, for a caustic-crossing binary-lens event, the deviations caused by a spot can be measured with high precision and time resolution from followup observations of the event. The deviations in the light curve last only a few hours, while the lens (for a single-lens event) or the caustic (for a binary-lens event) transits the source star. Therefore, followup observations with high photometric precision and time resolution are essential for the detection of stellar spots. For a binary-lens event, the caustic crossing happens twice, so even if the first crossing is missed one can prepare followup observations of the second crossing. By contrast, since the source transit of a single-lens event can neither be predicted a priori nor repeats, followup observations cannot be performed for an important fraction of these events.
Figure 1: Upper panels: contours of the fractional amplification deviation $`\epsilon _\mathrm{s}`$ as a function of the lens position $`(x_L,y_L)`$ for single-lens microlensing events. The two circles in each panel represent the source star (big empty circle centered at the origin) and the stellar spot on it (small filled circle), respectively. The source stars have radii of $`r_{*}=0.05\theta _\mathrm{E}`$ and $`0.1\theta _\mathrm{E}`$. For both cases, the spot has a relative radius of $`r_\mathrm{s}/r_{*}=0.2`$ and the adopted contrast parameter is $`\mathcal{C}=I_{*}/I_{\mathrm{spot}}=10`$. Contours are drawn with a spacing of 0.2% from $`\epsilon _\mathrm{s}=0.2\%`$, and the regions with $`\epsilon _\mathrm{s}\geq 1\%`$ and $`\epsilon _\mathrm{s}\geq 2\%`$ are shaded by darkening gray tones. Lower panels: the light curves (solid curves) of single-lens microlensing events for source stars with spots. The lens trajectories corresponding to the individual light curves are represented by dot-long dashed lines in the upper panels, and each pair of light curve and trajectory is marked by the same number. The dotted curves represent the light curves expected when the source stars have no spot.
Figure 2: Upper panel: contours of the amplification deviation $`\epsilon _\mathrm{b}`$ as a function of the source star position $`(\xi ,\eta )`$ for binary-lens microlensing events. The lens system is composed of equal mass lenses (i.e. mass ratio $`q=1.0`$) with a normalized binary separation of $`a=1.0`$. The closed figure (marked by thick solid curves) in each panel represents the caustics of the binary-lens system. The contours are drawn at the levels of $`\epsilon _\mathrm{b}=1\%`$ and 2%, and the regions with $`\epsilon _\mathrm{b}\geq 1\%`$ and $`\epsilon _\mathrm{b}\geq 2\%`$ are shaded by darkening gray tones. The radii of source stars and their spots and the surface brightness contrast are the same as in Figure 1. Lower panel: the light curves (solid curves) of binary-lens microlensing events for source stars with spots. The source star trajectories corresponding to the individual light curves are represented by dot-long dashed lines in the upper panels, and each pair of light curve and trajectory is marked by the same number. The dotted curves represent the light curves without spots.
no-problem/0001/hep-ph0001110.html | ar5iv | text | # Outstanding problems in the phenomenology of hard diffractive scattering
## 1 The problems
Although it has long been suspected that the factorisation of the hard component of diffractive scattering should not apply to hadron-hadron collisions, the magnitude of the breakdown at the Tevatron has come as a surprise. A model for diffractive hard scattering that contains all the essential features of the factorisation hypothesis is that of Ingelman and Schlein. In diffractive DIS at HERA, for example, large rapidity gap events can be interpreted as the result of a highly virtual photon probing the structure of a pomeron "emitted" from the proton. Collins has recently proved factorisation for lepton induced diffractive processes, but as expected the proof is not valid for hadron-hadron collisions. Alvero and collaborators have quantified this breakdown by extracting diffractive parton densities (which in the Ingelman-Schlein picture would be interpreted as parton distributions of the pomeron) from the HERA diffractive DIS and diffractive jet photoproduction data and using them to predict diffractive jet, W, Z and charm production and double pomeron exchange rates in $`p\overline{p}`$ collisions at the Tevatron. The diffractive parton distributions themselves need a large amount of glue at the starting scale in order to fit the HERA photoproduction data. Hard gluon distributions (behaving as $`1-\beta `$ at large $`\beta `$) are preferred, although the present data cannot rule out an even harder distribution, similar to the form presented by the H1 Collaboration in their analysis of the diffractive DIS data, that is strongly peaked towards $`\beta =1`$. The predicted cross sections for the above processes at the Tevatron are consistently larger than those measured, the differences ranging from factors of a few for diffractive W and Z production to factors of up to 30 for the gluon dominated fits in dijet production, indicating a severe breakdown of factorisation. The breakdown in the double pomeron rate is even more severe: there the gluon dominated fits fail by factors of order 100. For all processes, the low-glue fits, which are disfavoured at HERA, yield much better results. With this background, several questions present themselves. Firstly, is the picture of diffraction à la Ingelman-Schlein valid? Are the parton distributions extracted at HERA any use outside HERA? Should a new approach be sought? Whilst not providing any answers, we review several suggestions that may provide a starting point for future work.
## 2 Possible Solutions
### 2.1 Rapidity Gap Survival Probability
Perhaps the most obvious solution to the apparently low yield of rapidity gap events at the Tevatron relative to HERA is to attribute the difference to a rapidity gap survival factor. Such factors are able to explain the qualitative differences in the gaps-between-jets fractions in $`\sim 200`$ GeV $`\gamma p`$ collisions at HERA ($`10\%`$) and in 630 GeV ($`3\%`$) and 1800 GeV ($`1\%`$) $`p\overline{p}`$ collisions at the Tevatron, although large uncertainties remain. The idea is simple, although the construction of viable models is an extremely difficult problem. A rapidity gap produced by the exchange of a colour-singlet object may be filled in by secondary interactions between spectator partons in the event. Since there are more spectator partons in $`p\overline{p}`$ collisions than in $`\gamma p`$ collisions, and the number density of partons increases with increasing centre of mass energy, one would expect the rate of gap destruction to be significantly larger at the Tevatron than at HERA, and to increase with centre of mass energy. Such an analysis has yet to be performed for the case of hard diffractive scattering. It is worth noting that, certainly in the gaps-between-jets case, it may be possible to control the non-perturbative physics by a careful definition of a rapidity gap. For example, a gap event may be defined as an event in which the total energy in a given rapidity region is less than some value $`Q`$, where $`Q\gg \mathrm{\Lambda }_{\mathrm{QCD}}`$, or as an event in which there is no jet with $`E_T^{jet}>E_{T(min)}`$ in some rapidity region.
An interesting question to ask in the context of gap survival is whether or not the gap destruction mechanism depends on the gap production subprocess. Most models to date introduce gap survival as a multiplicative factor dependent only on the centre of mass energy, although this is almost certainly an over-simplification. If the destruction mechanism does depend on the subprocess, the shapes of the diffractive parton distributions measured at different colliders and centre of mass energies would necessarily differ. Such a difference is present between the $`\beta `$ distributions measured by CDF and those extracted by H1 in diffractive DIS.
CDF measured the diffractive structure function of the antiproton using a method employing two samples of dijet events produced in $`p\overline{p}`$ collisions at $`\sqrt{s}=1800`$ GeV: a diffractive sample, collected by triggering on a leading antiproton detected in a forward Roman Pot Spectrometer (RPS), and an inclusive sample, collected with a minimum bias trigger. In leading order QCD, the ratio of the diffractive to inclusive cross sections as a function of the Bjorken $`x`$ of the struck parton of the antiproton, obtained from the dijet kinematics, is equal to the ratio of the corresponding structure functions. Thus, the diffractive structure function can be calculated by multiplying the measured ratio of cross sections by the known inclusive structure function. This method, which bypasses the use of (model dependent) Monte Carlo generators, yields the colour-weighted structure function
$$F_{jj}^D(x)=x\left\{g^D(x)+\frac{4}{9}\sum _i\left[q_i^D(x)+\overline{q}_i^D(x)\right]\right\}$$
where $`g^D(x)`$ and $`q^D(x)`$ are the antiproton gluon and quark diffractive parton densities. Changing variables from $`x`$ to $`\beta =x/\xi `$, where $`\xi `$ is the $`\overline{p}`$ fractional momentum loss measured by the RPS, yields the structure function $`F_{jj}^{D(3)}(\xi ,\beta ,Q^2)`$.
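As a toy numerical illustration of this colour weighting (the density shapes below are made-up assumptions, not fits), gluons enter with unit weight while each quark and antiquark flavour carries the colour factor 4/9:

```python
# Toy diffractive densities x*g^D(x) and x*q^D(x); purely illustrative.
def xg(x):
    return 6.0 * x * (1.0 - x)        # assumed "hard gluon" shape

def xq(x):
    return 0.2 * x * (1.0 - x) ** 3   # assumed light-quark shape

def F_jj_D(x, n_flavours=2):
    """Colour-weighted combination above, with q_i = qbar_i assumed."""
    return xg(x) + (4.0 / 9.0) * n_flavours * 2.0 * xq(x)

print(F_jj_D(0.1))   # the gluon term dominates for this choice of shapes
```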
In Fig. 1, the $`F_{jj}^{D(3)}(\xi ,\beta ,Q^2)`$ measured by CDF at the Tevatron for $`Q^2\sim 75`$ GeV<sup>2</sup> in the region $`0.035<\xi <0.095`$ and $`|t|<1`$ GeV<sup>2</sup> is compared (see ) with that calculated using parton densities extracted by the H1 Collaboration from diffractive DIS measurements at HERA, scaled down by a factor of 20. The Tevatron and HERA $`\beta `$ distributions disagree both in normalisation and in shape. One should note, however, that the H1 data lie in a different $`\xi `$ region from the Tevatron data, namely $`\xi <0.04`$. In the H1 analysis, in which the data are fitted with two components, pomeron and reggeon, there are significant reggeon contributions in the $`\xi `$ region of the Tevatron data. For the reggeon, a pion structure function is assumed by H1. Allowing for a different reggeon structure, constrained by the data, could introduce some flexibility in the gluon component extracted from the diffractive DIS measurements. Furthermore, one should note that the Tevatron data are mostly sensitive to the diffractive gluon content, which, in the diffractive parton densities published by the H1 collaboration, is derived from the observed scaling violations of the diffractive structure function. The question awaiting an answer must then be: is there a common set of diffractive pdfs which fits the shapes of both the HERA and Tevatron measurements, leaving an overall normalisation factor which can be explained by a simple factorisable gap survival factor? Even if this is not so, can a sufficiently refined gap survival model account for the differences in the shapes of the extracted diffractive pdfs? Or is a more fundamental revision of diffractive phenomenology called for?
### 2.2 Soft Colour Interactions
The soft colour interaction approach differs from the above phenomenology in that it moves the gap formation from the initial state to the hadronisation phase. The hard subprocess and the perturbative evolution of partons are treated exactly the same for gap and non-gap events. However, after the perturbative phase, the colour structure of an event can be rearranged by exchange of soft gluons, typically between the perturbatively produced partons and the background colour field of the hadrons. Such colour reconnections may result in regions devoid of colour, i.e. rapidity gaps. A review can be found in these proceedings , where it is shown that, once a global parameter for the reconnection probability is fixed so as to describe DIS gap events at HERA, it is not only possible to describe diffractive jet production (single and double diffraction) and diffractive W production at the Tevatron, but a good description of high-$`p_{\perp }`$ quarkonium production is obtained as well.
Some questionable features of the soft colour interaction models were pointed out during the workshop. The fact that the models do not modify in any way the perturbative evolution of an event means that, e.g., the size of a rapidity gap in diffractive DIS events is completely determined by the most forward parton emitted in the perturbative phase. But so far the reproduction of DIS gap events has only been possible when implementing the soft colour interaction in the LEPTO generator, which is known not to be able to describe perturbative emissions in the forward region (see e.g. ). Also, it has been shown that when a similar reconnection model is introduced in the ARIADNE program (which *is* able to describe perturbative features of the forward region) neither the rate nor the distribution of rapidity gap events can be adequately described. The rate could, of course, be fixed by modifying the cut-off in the perturbative cascade or the reconnection probability, but this would not change the fact that e.g. the $`m_X`$ distribution comes out completely wrong.
Although this casts serious doubts on the physical relevance of the soft colour interaction models, it does not prove that they are wrong. But to prove that they have anything to do with physics, it is highly desirable that they be implemented in an event generator which gives a reasonable description of the perturbative emissions in the forward region. After the workshop, work has started to implement soft colour interactions in the RAPGAP program, which is similar to LEPTO but implements a resolved virtual photon model to obtain a good description of e.g. forward jet rates. The model must then also describe all diffractive HERA data (i.e. charm, jets, photoproduction, etc.).
### 2.3 A new approach
A totally new approach has been suggested by one of us , which avoids the above complications regarding gap survival, and allows the structure of the pomeron to be derived from that of the parent hadron. Using a non-factorizing ansatz for $`F_2^{D(3)}`$, inspired by the observed scaling behaviour of the soft single diffractive cross section
$$\frac{d\sigma _{sd}}{dM^2}\sim \frac{1}{(M^2)^{1+\epsilon }}$$
(1)
a formula is obtained which can be interpreted as a renormalized pomeron flux folded with the structure function $`F_2`$ of the proton in the pomeron-proton scattering subsystem. Details can be found in these proceedings .
## 3 Outlook
Hard diffraction at HERA and the Tevatron is clearly not fully understood. The factorized pomeron picture does not explain all the data, and whether or not any of the alternative models suggested will be fully successful remains to be seen. This difficult border region between perturbative and non-perturbative QCD remains a challenge, which will probably require more experimental data before it can be met successfully.
# A characterization of semiampleness and contractions of relative curves
## Introduction
The Fujita-Zariski Theorem asserts that a line bundle $`\mathcal{L}`$ that is ample on its base locus is *semiample*. Semiampleness means that a multiple $`\mathcal{L}^n`$, $`n>0`$, is globally generated. For discrete base locus the result goes back to Zariski (, Thm. 6.2), and the general form is due to Fujita (, Thm. 1.10). This note contains two applications of the Fujita-Zariski Theorem.
The first section contains a generalization of both the Fujita-Zariski Theorem and the cohomological criterion for ampleness due to Grothendieck-Serre. The result is the following characterization: A line bundle $`\mathcal{L}`$ is semiample if and only if the modules $`H^1(X,\mathcal{I}\otimes \mathrm{Sym}\,\mathcal{L})`$ are finitely generated over the ring $`\mathrm{\Gamma }(X,\mathrm{Sym}\,\mathcal{L})`$ for every coherent ideal $`\mathcal{I}\subset \mathcal{O}_B`$. Here $`B\subset X`$ is the stable base locus of $`\mathcal{L}`$. This gives a positive answer to Fujita's question (, 1.16) whether it is possible to weaken the assumption in the Fujita-Zariski Theorem.
In the second section I generalize results of Piene and Emsalem . They used the Fujita-Zariski Theorem to obtain sufficient conditions for contractions in normal arithmetic surfaces. Our result is a characterization of contractible curves in 1-dimensional families over local noetherian rings in terms of complementary closed subsets. This also sheds some light on the noncontractible curve constructed by Bosch, Lütkebohmert, and Raynaud (, chap. 6.7). For proper normal algebraic surfaces, similar results appear in .
## 1. Characterization of semiampleness
Throughout this section, $`R`$ is a noetherian ring, $`X`$ is a proper $`R`$-scheme, and $`\mathcal{L}`$ is an invertible $`\mathcal{O}_X`$-module. According to the Grothendieck-Serre Criterion (, Prop. 2.6.1), $`\mathcal{L}`$ is ample if and only if for each coherent $`\mathcal{O}_X`$-module $`\mathcal{F}`$ there is an integer $`n_0>0`$ so that $`H^1(X,\mathcal{F}\otimes \mathcal{L}^n)=0`$ for all $`n>n_0`$. Let me reformulate this in terms of graded modules. For a coherent $`\mathcal{O}_X`$-module $`\mathcal{F}`$, set
$$H_{\bullet }^{p}(\mathcal{F},\mathcal{L})=H^p(X,\mathcal{F}\otimes \mathrm{Sym}\,\mathcal{L})=\underset{n\geq 0}{\bigoplus }H^p(X,\mathcal{F}\otimes \mathcal{L}^n).$$
This is a graded module over the graded ring $`\mathrm{\Gamma }_{\bullet }(\mathcal{L})=\mathrm{\Gamma }(X,\mathrm{Sym}\,\mathcal{L})`$. The Grothendieck-Serre Criterion takes the form: $`\mathcal{L}`$ is ample if and only if the modules $`H_{\bullet }^{1}(\mathcal{F},\mathcal{L})`$ are finitely generated over the ring $`\mathrm{\Gamma }_0(\mathcal{L})=\mathrm{\Gamma }(\mathcal{O}_X)`$ for all coherent $`\mathcal{O}_X`$-modules $`\mathcal{F}`$. In this form it generalizes to the semiample case. Following Fujita , we define the *stable base locus* $`B\subset X`$ of $`\mathcal{L}`$ to be the intersection of the base loci of $`\mathcal{L}^n`$ for all $`n>0`$. We regard it as a closed subscheme with reduced scheme structure.
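To fix ideas, here is a standard example (added for orientation; it is not part of the original text): take $`X=\mathbb{P}_R^d`$ with $`d\geq 1`$ and $`\mathcal{L}=\mathcal{O}_X(1)`$. Then $`B=\mathrm{\emptyset }`$ and
$$\mathrm{\Gamma }_{\bullet }(\mathcal{L})=\underset{n\geq 0}{\bigoplus }\mathrm{\Gamma }(X,\mathcal{O}_X(n))=R[T_0,\dots ,T_d],$$
and for any coherent $`\mathcal{F}`$ the graded piece $`H^1(X,\mathcal{F}\otimes \mathcal{L}^n)`$ vanishes for $`n\gg 0`$ by Serre vanishing, so $`H_{\bullet }^{1}(\mathcal{F},\mathcal{L})`$ has only finitely many nonzero components, each a finite $`R`$-module; it is thus finitely generated even over $`\mathrm{\Gamma }_0(\mathcal{L})=R`$, in accordance with the ampleness of $`\mathcal{O}_X(1)`$.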
###### Theorem 1.1.
Let $`B\subset X`$ be the stable base locus of $`\mathcal{L}`$. Then the following are equivalent:
1. The invertible sheaf $`\mathcal{L}`$ is semiample.
2. The modules $`H_{\bullet }^{p}(\mathcal{F},\mathcal{L})`$ are finitely generated over the ring $`\mathrm{\Gamma }_{\bullet }(\mathcal{L})`$ for each coherent $`\mathcal{O}_X`$-module $`\mathcal{F}`$ and all integers $`p\geq 0`$.
3. The modules $`H_{\bullet }^{1}(\mathcal{I},\mathcal{L})`$ are finitely generated over the ring $`\mathrm{\Gamma }_{\bullet }(\mathcal{L})`$ for each coherent ideal $`\mathcal{I}\subset \mathcal{O}_B`$.
###### Proof.
The implication (i)$`\Rightarrow `$(ii) is well known, and (ii)$`\Rightarrow `$(iii) is trivial. To prove (iii)$`\Rightarrow `$(i) we assume that $`\mathcal{L}`$ is not semiample. According to the Fujita-Zariski Theorem the restriction $`\mathcal{L}_B`$ is not ample. By the Grothendieck-Serre Criterion there is a coherent ideal $`\mathcal{I}\subset \mathcal{O}_B`$ with $`H^1(X,\mathcal{I}\otimes \mathcal{L}^n)\neq 0`$ for infinitely many $`n>0`$. Thus $`H_{\bullet }^{1}(\mathcal{I},\mathcal{L})`$ is not finitely generated over $`\mathrm{\Gamma }_0(\mathcal{L})`$. Since $`B\subset X`$ is the stable base locus, the maps $`\mathrm{\Gamma }(X,\mathcal{L}^n)\to \mathrm{\Gamma }(B,\mathcal{L}_B^n)`$ vanish for all $`n>0`$. Consequently, the irrelevant ideal $`\mathrm{\Gamma }_{+}(\mathcal{L})\subset \mathrm{\Gamma }_{\bullet }(\mathcal{L})`$ annihilates $`H_{\bullet }^{1}(\mathcal{I},\mathcal{L})`$, which is therefore not finitely generated over $`\mathrm{\Gamma }_{\bullet }(\mathcal{L})`$. ∎
Sommese introduced a quantitative version of semiampleness: Let $`k\geq 0`$ be an integer; a semiample invertible sheaf $`\mathcal{L}`$ is called *$`k`$-ample* if the fibers of the canonical morphism $`f:X\to \mathrm{Proj}\,\mathrm{\Gamma }_{\bullet }(\mathcal{L})`$ have dimension $`\leq k`$. For example, $`0`$-ampleness means ampleness.
###### Theorem 1.2.
Let $`\mathcal{L}`$ be a semiample invertible $`\mathcal{O}_X`$-module. Then $`\mathcal{L}`$ is $`k`$-ample if and only if the modules $`H_{\bullet }^{k+1}(\mathcal{F},\mathcal{L})`$ are finitely generated over the ground ring $`R`$ for all coherent $`\mathcal{O}_X`$-modules $`\mathcal{F}`$.
###### Proof.
Set $`Y=\mathrm{Proj}\,\mathrm{\Gamma }_{\bullet }(\mathcal{L})`$ and let $`f:X\to Y`$ be the corresponding contraction. Suppose $`\mathcal{L}`$ is $`k`$-ample. Choose $`n_0>0`$ so that $`\mathcal{L}^{n_0}=f^{*}(\mathcal{M})`$ for some ample invertible $`\mathcal{O}_Y`$-module $`\mathcal{M}`$. Put $`\mathcal{G}=\mathcal{F}\otimes (\mathcal{L}\oplus \mathcal{L}^2\oplus \dots \oplus \mathcal{L}^{n_0})`$. Choose $`m_0>0`$ with $`H^p(Y,R^qf_{*}(\mathcal{G})\otimes \mathcal{M}^m)=0`$ for $`p>0`$, $`q\leq k+1`$, and $`m>m_0`$. Consequently, the edge map $`H^{k+1}(X,\mathcal{G}\otimes \mathcal{L}^{mn_0})\to H^0(Y,R^{k+1}f_{*}(\mathcal{G})\otimes \mathcal{M}^m)`$ in the spectral sequence
$$H^p(Y,R^qf_{*}(\mathcal{G})\otimes \mathcal{M}^m)\Rightarrow H^{p+q}(X,\mathcal{G}\otimes \mathcal{L}^{mn_0})$$
is injective for $`m>m_0`$. The fibers of $`f:X\to Y`$ are at most $`k`$-dimensional, so $`R^{k+1}f_{*}(\mathcal{G})=0`$. Thus $`H^{k+1}(X,\mathcal{F}\otimes \mathcal{L}^n)=0`$ for all $`n>n_0m_0`$.
Conversely, assume that the condition holds. Seeking a contradiction we suppose that some fiber of $`f:X\to Y`$ has dimension $`>k`$. Using we find a coherent $`\mathcal{O}_X`$-module $`\mathcal{F}`$ with $`R^{k+1}f_{*}(\mathcal{F})\neq 0`$. Replacing $`\mathcal{L}`$ by a suitable multiple, we have $`\mathcal{L}=f^{*}(\mathcal{M})`$ for some ample invertible $`\mathcal{O}_Y`$-module $`\mathcal{M}`$. Passing to a higher multiple if necessary, $`H^p(Y,R^qf_{*}(\mathcal{F})\otimes \mathcal{M}^n)=0`$ holds for $`p>0`$, $`q\leq k`$, and $`n>0`$. Then the edge map $`H^{k+1}(X,\mathcal{F}\otimes \mathcal{L}^n)\to H^0(Y,R^{k+1}f_{*}(\mathcal{F})\otimes \mathcal{M}^n)`$ is surjective for $`n>0`$. Choose a global section $`s\in \mathrm{\Gamma }(Y,\mathcal{M}^n)`$ for some $`n>0`$ so that the open subset $`Y_s\subset Y`$ contains the set of associated points of $`R^{k+1}f_{*}(\mathcal{F})`$. Then $`s\in \mathrm{\Gamma }_{\bullet }(\mathcal{L})`$ is not a zero divisor for $`H_{\bullet }^{0}(R^{k+1}f_{*}(\mathcal{F}),\mathcal{M})`$. It follows that $`H_{\bullet }^{0}(R^{k+1}f_{*}(\mathcal{F}),\mathcal{M})`$ is nonzero in infinitely many degrees. Consequently, the same holds for $`H_{\bullet }^{k+1}(\mathcal{F},\mathcal{L})`$, which is therefore not finitely generated over $`R`$. ∎
###### Remark 1.3.
For a vector bundle $`\mathcal{E}`$, it might happen that $`\mathcal{O}_{\mathbb{P}(\mathcal{E})}(1)`$ is semiample, whereas $`\mathrm{Sym}^n(\mathcal{E})`$ fails to be globally generated for all $`n>0`$. For example, let $`k`$ be an algebraically closed field of characteristic $`p>0`$, and $`X`$ be a smooth proper curve of genus $`g>p-1`$ so that the absolute Frobenius $`\mathrm{Fr}_X:H^1(\mathcal{O}_X)\to H^1(\mathcal{O}_X)`$ is zero. For an example see , p. 348, ex. 2.14. Let $`D\subset X`$ be a divisor of degree $`1`$. According to the commutative diagram
$$\begin{array}{ccccccc}H^0(\mathcal{O}_X)& \to & H^0(\mathcal{O}_D)& \to & H^1(\mathcal{O}_X(-D))& \to & H^1(\mathcal{O}_X)\\ \mathrm{Fr}_X^{*}\downarrow & & \mathrm{Fr}_X^{*}\downarrow & & \mathrm{Fr}_X^{*}\downarrow & & \downarrow \mathrm{Fr}_X^{*}=0\\ H^0(\mathcal{O}_X)& \to & H^0(\mathcal{O}_{pD})& \to & H^1(\mathcal{O}_X(-pD))& \to & H^1(\mathcal{O}_X),\end{array}$$
the $`p`$-linear map $`\mathrm{Fr}_X^{*}:H^1(\mathcal{O}_X(-D))\to H^1(\mathcal{O}_X(-pD))`$ is not injective. Hence there is a nontrivial extension
$$0\to \mathcal{O}_X\to \mathcal{E}\to \mathcal{O}_X(D)\to 0$$
whose Frobenius pull back $`\mathrm{Fr}_X^{*}(\mathcal{E})`$ splits. The surjection $`\mathcal{E}\to \mathcal{O}_X(D)`$ gives a section $`A\subset \mathbb{P}(\mathcal{E})`$ representing $`\mathcal{O}_{\mathbb{P}(\mathcal{E})}(1)`$ with $`A^2=1`$ (, Prop. 2.6, p. 371). The Fujita-Zariski Theorem implies that $`\mathcal{O}_{\mathbb{P}(\mathcal{E})}(1)`$ is semiample, and we obtain a birational contraction $`\mathbb{P}(\mathcal{E})\to Y`$. It is easy to see that the exceptional set is an integral curve $`R\subset \mathbb{P}(\mathcal{E})`$ which has degree $`p`$ on the ruling. Hence $`\mathbb{P}(\mathcal{E})\to Y`$ does not restrict to closed embeddings on the fibers of $`\mathbb{P}(\mathcal{E})\to X`$. Consequently, $`\mathrm{Sym}^n(\mathcal{E})`$ is not globally generated at any point $`x\in X`$.
## 2. Contractions of relative curves
Throughout this section, $`R`$ is a local noetherian ring, and $`X`$ is a proper $`R`$-scheme with 1-dimensional closed fiber $`X_0\subset X`$. Then all fibers of the structure morphism $`X\to \mathrm{Spec}(R)`$ are at most 1-dimensional. For example, $`X`$ could be a flat family of curves.
A *Stein factor* of $`X`$ is a proper $`R`$-scheme $`Y`$ together with a proper morphism $`f:X\to Y`$ so that $`\mathcal{O}_Y\to f_{*}(\mathcal{O}_X)`$ is bijective (compare , sec. 5). Our objective is to describe the set of all Stein factors for a given $`X`$.
Let $`C_i`$, $`i\in I`$, be the finite collection of all 1-dimensional integral components of the closed fiber $`X_0`$. A subset $`J\subset I`$ yields a subcurve $`C=\bigcup _{i\in J}C_i`$. We call such a curve $`C\subset X`$ *contractible* if there is a Stein factor $`f:X\to Y`$ so that $`f(C_i)`$ is a closed point if and only if $`i\in J`$. According to , Theorem 5.4.1, a Stein factor is determined up to isomorphism by its restriction $`f_0:X_0\to Y_0`$. The task now is to determine the contractible curves $`C\subset X`$. It follows from and that all curves $`C\subset X`$ are contractible provided that the ground ring $`R`$ is henselian. In particular this holds if $`R`$ is complete. On the other hand, a noncontractible curve is discussed in , chapter 6.7.
We seek to describe contractible curves $`C\subset X`$ in terms of complementary closed subsets $`D\subset X`$. We need a definition: Suppose $`D\subset X`$ is a closed subset of codimension $`1`$. Let $`R\subset R^{\prime }`$ be the completion with respect to the maximal ideal, $`X^{\prime }`$ the normalization of $`X\otimes _RR^{\prime }`$, and $`C_i^{\prime },C^{\prime },D^{\prime }\subset X^{\prime }`$ the preimages of $`C_i,C,D\subset X`$, respectively. Let $`h:X^{\prime }\to Z^{\prime }`$ be the contraction of all $`C_i^{\prime }\subset X_0^{\prime }`$ disjoint from $`C^{\prime }`$. We call $`D`$ *persistent* if $`h(D^{\prime })\subset Z^{\prime }`$ has codimension $`1`$.
###### Example 2.1.
Suppose $`R`$ is a discrete valuation ring with residue field $`k`$ and fraction field $`K`$. Let $`X`$ be the proper $`R`$-scheme obtained from $`X^{\prime }=\mathbb{P}_R^1`$ by identifying the closed points $`0,\mathrm{\infty }\in \mathbb{P}_k^1`$. Then the closure $`D\subset X`$ of the point $`0\in \mathbb{P}_K^1`$ is not persistent.
###### Theorem 2.2.
Suppose $`J\subset I`$ is a subset so that the curve $`C=\bigcup _{i\in J}C_i`$ is connected. Then $`C\subset X_0`$ is contractible if and only if there is a persistent closed subset $`D\subset X`$ of codimension $`1`$ disjoint from $`C`$ and intersecting each irreducible component $`C_i\subset X_0`$ with $`i\in J`$.
###### Proof.
Assume that $`C`$ is contractible. The corresponding contraction $`f:X\to Y`$ maps $`C`$ to a single point. Let $`V\subset Y`$ be an affine open neighborhood of $`f(C)`$. Set $`U=f^{-1}(V)`$ and $`D=X\setminus U`$. Clearly $`D\cap C=\mathrm{\emptyset }`$. Furthermore, $`D\cap C_i\neq \mathrm{\emptyset }`$ for $`i\in J`$; otherwise $`f(C_i)`$ would be a proper curve contained in the affine scheme $`V`$, which is absurd. Let $`X^{\prime },Y^{\prime }`$ be the normalizations of $`X\otimes _RR^{\prime },Y\otimes _RR^{\prime }`$, respectively. The induced morphism $`f^{\prime }:X^{\prime }\to Y^{\prime }`$ is the contraction of the preimage $`C^{\prime }\subset X^{\prime }`$ of $`C`$. The preimage $`V^{\prime }\subset Y^{\prime }`$ of $`V`$ is affine, so $`Y^{\prime }\setminus V^{\prime }`$ is of codimension $`1`$ ( II, 2.2.6). Hence the preimage $`D^{\prime }\subset X^{\prime }`$ of $`D`$ is of codimension $`1`$. Obviously, the same holds if we contract the preimages $`C_i^{\prime }\subset X^{\prime }`$ of the $`C_i`$ disjoint from $`C^{\prime }`$. Thus $`D\subset X`$ is of codimension $`1`$ and persistent.
Conversely, assume the existence of such a subset $`D\subset X`$. Set $`U=X\setminus D`$. We claim that the affine hull $`U^{\text{aff}}=\mathrm{Spec}\,\mathrm{\Gamma }(U,\mathcal{O}_X)`$ is of finite type over $`R`$ and that the canonical morphism $`U\to U^{\text{aff}}`$ is proper.
Suppose this for a moment. Then $`U\to U^{\text{aff}}`$ contracts $`C`$ and is a local isomorphism near each $`x\in U_0\setminus C`$. Choose for each $`x\in X_0\setminus C`$ an affine open neighborhood $`U_x\subset X`$ of $`x`$ disjoint from the exceptional set of $`U\to U^{\text{aff}}`$. Then $`U_x\cap U\to U^{\text{aff}}`$ is an open embedding. It is easy to see that the schemes $`U_x\cup _{U_x\cap U}U^{\text{aff}}`$, $`x\in X_0\setminus C`$, and $`U^{\text{aff}}`$ form an open cover of a proper $`R`$-scheme $`Y`$. The induced morphism $`f:X\to Y`$ is the desired contraction.
It remains to verify the claim. Let $`R\subset R^{\prime }`$ be the completion. According to , VIII Corollary 3.4, the scheme $`U^{\text{aff}}`$ is of finite type if and only if $`U^{\text{aff}}\otimes _RR^{\prime }`$ is of finite type. Furthermore, $`U\to U^{\text{aff}}`$ is proper if and only if it is proper after tensoring with $`R^{\prime }`$ (, VIII Cor. 4.8). Since $`U^{\text{aff}}\otimes _RR^{\prime }=(U\otimes _RR^{\prime })^{\text{aff}}`$ by , Proposition 21.12.2, it suffices to prove the claim under the additional assumption that $`R`$ is complete.
Now each curve in $`X_0`$ is contractible. Observe that the contraction of $`C`$ does not change $`U^{\text{aff}}`$, so we can as well assume that $`C`$ is empty. Now our goal is to prove that $`U`$ is affine. Since $`R`$ is complete, hence universally japanese, the normalization $`X^{\prime }\to X`$ is finite. Using Chevalley's Theorem (, Thm. 6.7.1), we reduce the problem to the case that $`X`$ is normal. Now the irreducible components of $`X`$ are the connected components. Treating them separately we may assume that $`X`$ is connected. Contracting the curves $`C_i`$ contained in $`D`$ we can assume that $`D_0`$ is finite and intersects each $`C_i`$. If $`D=X`$ or $`D=\mathrm{\emptyset }`$ there is nothing to prove. Assume that $`D\subset X`$ is of codimension 1, in other words a Weil divisor. The problem is that it might not be Cartier. To overcome this, consider the graded quasicoherent $`\mathcal{O}_X`$-algebra $`\mathcal{R}=\bigoplus _{n\geq 0}\mathcal{O}_X(nD)`$. The graded subalgebra $`\mathcal{R}^{\prime }\subset \mathcal{R}`$ generated by $`\mathcal{R}_1=\mathcal{O}_X(D)`$ is of finite type over $`\mathcal{O}_X`$. Set $`X^{\prime }=\mathbf{Proj}(\mathcal{R}^{\prime })`$ and let $`g:X^{\prime }\to X`$ be the structure morphism. Then $`g`$ is projective and $`\mathcal{O}_{X^{\prime }}(1)`$ is a $`g`$-very ample invertible $`\mathcal{O}_{X^{\prime }}`$-module. The canonical maps $`\cdot D:\mathcal{O}_X(nD)\to \mathcal{O}_X((n+1)D)`$ induce a homomorphism $`\mathcal{R}^{\prime }\to \mathcal{R}^{\prime }`$ of degree one, hence a section $`s:\mathcal{O}_{X^{\prime }}\to \mathcal{O}_{X^{\prime }}(1)`$. It follows from the definition of homogeneous spectra that $`s`$ is bijective over $`U`$ and vanishes on $`g^{-1}(D)`$. Thus the corresponding Cartier divisor $`D^{\prime }\subset X^{\prime }`$ representing $`\mathcal{O}_{X^{\prime }}(1)`$ has support $`g^{-1}(D)`$.
Let $`A\subset X_0^{\prime }`$ be a closed integral subscheme of dimension $`n>0`$. If $`g(A)\subset X_0`$ is a curve, then $`A`$ is not contained in $`D^{\prime }`$ but intersects $`D^{\prime }`$. Hence $`D^{\prime }\cdot A>0`$. If $`g(A)\in X`$ is a point, then $`\mathcal{O}_A(1)`$ is ample, so $`(D^{\prime })^n\cdot A>0`$. By the Nakai criterion for ampleness we conclude that $`\mathcal{O}_{X^{\prime }}(1)`$ is ample on its base locus. Now the Fujita-Zariski Theorem tells us that $`\mathcal{O}_{X^{\prime }}(1)`$ is semiample. It follows that $`U\simeq X^{\prime }\setminus D^{\prime }`$ is affine. This finishes the proof. ∎
Let us consider the special case that the total space $`X`$ is a normal surface. Replacing $`R`$ by $`\mathrm{\Gamma }(X,\mathcal{O}_X)`$, we are in the following situation: Either $`R`$ is a discrete valuation ring, such that $`X\to \mathrm{Spec}(R)`$ is a flat deformation of $`X_0`$. Or $`R`$ is a local normal 2-dimensional ring, hence $`X\to \mathrm{Spec}(R)`$ is the birational contraction of $`X_0`$. In either case we call a Weil divisor $`H\in Z^1(X)`$ *horizontal* if it is a sum of prime divisors not supported by $`X_0`$.
Suppose $`J\subset I`$ is a subset with $`C=\bigcup _{i\in J}C_i`$ connected. Let $`V\subset X_0`$ be the union of all $`C_i`$ disjoint from $`C`$.
###### Corollary 2.3.
Notation as above. Then $`C\subset X_0`$ is contractible if and only if there is a horizontal Weil divisor $`H\subset X`$ disjoint from $`C`$ with the following property: For each $`C_i`$, $`i\in J`$, either $`H`$ intersects $`C_i`$, or $`H`$ intersects a connected component $`V^{\prime }\subset V`$ with $`V^{\prime }\cap C_i\neq \mathrm{\emptyset }`$.
###### Proof.
Suppose $`C\subset X_0`$ is contractible. Let $`D\subset X`$ be a persistent Weil divisor as in Theorem 2.2. Then its horizontal part $`H\subset D`$ satisfies the above conditions. Conversely, assume there is a horizontal Weil divisor $`H\subset X`$ as above. It follows that $`D=H+V`$ is a persistent Weil divisor disjoint from $`C`$ intersecting each $`C_i`$ with $`i\in J`$. Thus $`C\subset X_0`$ is contractible. ∎
# The red giant branches of Galactic globular clusters in the $`[(V-I)_0,M_I]`$ plane: metallicity indices and morphology
## 1 Introduction
In very recent times, new determinations of Galactic globular cluster (GGC) metallicities have provided us with new homogeneous $`[\text{Fe}/\text{H}]`$ scales. In particular, Carretta & Gratton (cg97 (1997); CG) obtained metallicities from high resolution spectroscopy for 24 GGCs, with an internal uncertainty of 0.06 dex. For an even larger sample of 71 GGCs, metallicities have been obtained by Rutledge et al. (rutledge97 (1997); RHS97) based on spectroscopy of the Ca II infrared triplet. The equivalent widths of the Ca II triplet have been calibrated by RHS97 on both the CG scale and the older Zinn & West (zw84 (1984); ZW) scale. The compilation by RHS97 is by far the most homogeneous one which is currently available.
In the same period, we have been building the largest homogeneous $`V,I`$ photometric sample of Galactic globular clusters (GGC) based on CCD imaging carried out both with Northern (Isaac Newton Group, ING) and Southern (ESO) telescopes (Rosenberg et al. 1999b , 1999c ). The main purpose of the project is to establish the relative age ranking of the clusters, based on the methods outlined in Saviane et al. (srp97 (1997), 1999b ; SRP97, SRP99) and Buonanno et al. (b98 (1998); B98). The results of this investigation are presented in Rosenberg et al. (1999a ; RSPA99). Here it suffices to say that for a set of 52 clusters we obtained $`V`$ vs. $`(V-I)`$ color-magnitude diagrams (CMD), which cover a magnitude range that goes from a few mags below the turnoff (TO) up to the tip of the red giant branch (RGB).
At this point, homogeneous spectroscopic and photometric databases are both available: the purpose of this study is to exploit them to perform a thorough analysis of the morphology of the RGB as a function of the cluster's metallicity. As a first step, we want to obtain a new, improved calibration of a few classical photometric metallicity indices. Secondly, we want to provide to the community a self-consistent, analytic family of giant branches, which can be used in the analysis of old stellar populations in external galaxies.
### 1.1 Metallicity indices
Photometric indices have been widely used in the past to estimate the mean metallicities of those stellar systems where direct determinations of their metal content are not feasible. In particular, they are used to obtain $`[\text{Fe}/\text{H}]\text{ }`$ values for the farthest globulars and for those resolved galaxies of the Local Group where a significant Pop II is present (e.g. the dwarf spheroidal galaxies).
The calibration of $`V,I`$ indices is particularly important, since with comparable exposure times, deeper and more accurate photometry can be obtained for the cool, low-mass stars in these broad bands than in $`B,V`$. Moreover, our huge CMD database allows a test of the new CG scale on a large basis: we are able to compare the relations obtained for both the old ZW and new scale, and check which one allows to rank GGCs in the most accurate way. Indeed, the most recent calibration of the $`V,I`$ indices (Carretta & Bragaglia cb98 (1998)) is based on just 8 clusters.
### 1.2 Old stellar populations in Local Group galaxies
A reliable metallicity ranking of GGC giant branches also allows studies that go beyond a simple determination of the *mean* metallicity of a stellar population. As an illustration, we may recall the recent investigation of the halo metallicity distribution function (MDF) of NGC 5128 (Harris et al. harris5128 (1999)), which was based on the fiducial GC lines obtained by Da Costa & Armandroff (da90 (1990); DA90). These studies can be made more straightforward by providing a suitable analytic representation of the RGB family of GGCs. Indeed, assuming that most of the GGCs share a common age (e.g. Rosenberg et al. 1999a ), one expects that there should exist a "universal" function of $`\{(V-I)_0,M_I,[\mathrm{Fe}/\mathrm{H}]\}`$ able to map any $`[(V-I)_0,M_I]`$ coordinate pair into the corresponding metallicity (provided that independent estimates of the distance and extinction of the star are available). We will show here that such a relatively simple mono-parametric function can actually be obtained, and that this progress is made possible thanks to the homogeneity of both our data set and analysis.
In order to enforce a proper use of our calibrations, we must clearly state that, in principle, the present relations are valid only for rigorously old stellar populations (i.e. for stars as old as the bulk of Galactic globulars). At fixed abundance, giant branches are somewhat bluer for younger ages (e.g. Bertelli et al. bertelli94 (1994)). Moreover, in real stellar systems AGB stars are also present on the blue side of the RGB (cf. Fig. 2). Both effects must be taken into account when dealing with LG galaxies, since they could lead to systematic effects in both the mean abundances and the abundance distributions (e.g. Saviane et al. 1999a ).
### 1.3 Layout of the paper
The observational sample, on which this investigation is based, is presented in Sect. 2. Sect. 3 is devoted to the set of indices which are to be calibrated. They are defined in Sect. 3.1. The reliability of our sample is tested in Sect. 3.3, where we demonstrate that our methodology produces a set of well-correlated indices. In Sect. 4 we show that, once a distance scale is assumed for the GGCs, our whole set of RGBs can be approximated by a *single* analytic function, which depends on the metallicity alone. This finding allows a new and easier way to determine the distances and mean metallicities of the galaxies of the Local Group, extending the methods of Da Costa & Armandroff (da90 (1990)) and Lee et al. (lee.et.al93 (1993)). The metallicity indices are calibrated in Sect. 6, where analytic relations are provided both for the ZW and for the CG scales. Using these indices, we are able to test our analytic RGB family in Sect. 7. Our conclusions are in Sect. 8.
## 2 The observational sample
Thirty-nine clusters have been observed with the ESO/Dutch 0.9m telescope at La Silla, and 16 at the RGO/JKT 1m telescope in La Palma. This database comprises $`75\%`$ of the GGCs whose distance modulus is $`(m-M)_\mathrm{V}<16`$. The zero-point uncertainties of our calibrations are $`<0.03`$ mag for each band. Three clusters were observed both with the southern and the northern telescopes, thus providing a consistency check of the calibrations: no systematic differences were found, at the level of accuracy of the zero-points. A detailed description of the observations and reduction procedures will be given in forthcoming papers (Rosenberg et al. 1999b , 1999c ) presenting the single clusters.
A subsample of this database was used for the present investigation. We retained those clusters whose CMD satisfied a few criteria: (a) the HB level could be well determined; (b) the RGB was not heavily contaminated by foreground/background contamination; and (c) the RGB was well defined up to the tip. This subsample largely overlaps that used for the age investigation, but a few clusters whose TO position could not be measured, are nevertheless useful for the metallicity indices definition. Conversely, in a few cases the lower RGB could be used for the color measurements, while the upper branch was too scarcely defined for a reliable definition of the fiducial line. Two of the CMDs that were used are shown in Figs. 1 (NGC 1851) and 2 (NGC 104), and they illustrate the good quality of the data.
The dataset of 31 clusters used in this paper is listed in Table 1. From left to right, the columns contain the NGC number, the reddening both in $`(B-V)`$ and $`(V-I)`$, the metallicity according to three different scales, and the apparent magnitude of the horizontal branch (HB). The $`E_{(B-V)}`$ values were taken from the Harris (harris96 (1996)) on-line table (http://physun.physics.mcmaster.ca/Globular.html). The $`(V-I)`$ reddenings were obtained by assuming that $`E_{(V-I)}=1.28\times E_{(B-V)}`$ (Dean et al. dwc78 (1978)). The values of the metallicity were taken from RHS97: they represent the equivalent widths of the Ca II infrared triplet, calibrated either onto the Zinn & West (zw84 (1984)) scale (ZW column) or the Carretta & Gratton (cg97 (1997)) scale (RHS97 column). Moreover, the original Carretta & Gratton metallicities (CG column) are also given for the clusters comprised in their sample.
The HB level was found in different ways for clusters of different metallicity. For the metal rich and metal intermediate clusters, a magnitude distribution of the HB stars was obtained, and the mode of the distribution was taken. Where the HB was too scarcely populated, a horizontal line was fitted through the data. The blue tail of the metal poorest clusters does not reach the horizontal part of the branch: in that case, a fiducial HB was fitted to the tail, and the magnitude of the horizontal part was taken as the reference level. The fiducial branch was defined by taking a cluster having a bimodal HB color distribution (NGC 1851, cf. Fig. 1) and then extending its HB both to the red and to the blue by "appending" clusters being more and more metal rich and metal poor, respectively. The details of this procedure, as well as the errors associated to the $`V_{\mathrm{HB}}`$ in Table 1, are discussed in RSPA99. For NGC 1851, $`V_{\mathrm{HB}}=16.18\pm 0.05`$ was adopted (dashed line in Fig. 1), and this value is just $`0.02`$ mag brighter than the value found by Walker (walker1851 (1992)) and Saviane et al. (ivo1851 (1998)).
Based on this observational sample, a set of metallicity indices were measured on the RGBs of the clusters. In the next section, the indices are defined and the measurement procedures are described. Consistency checks are also performed.
## 3 Metallicity indices
### 3.1 Definitions
The metallicity indices calibrated in this study are represented and defined in Fig. 1 and Fig. 2. The figures represent the CMD of NGC 1851 and NGC 104 in different color-magnitude planes, and the crosses mark the position of the RGB points used in the measurement of the indices.
The left panel of Fig. 1 shows the apparent colors and magnitudes for NGC 1851: the inclined line helps to identify the first index, $`S`$. This was defined, in the $`(B-V,V)`$ plane, by Hartwick (hartwick68 (1968)) as the slope of the line connecting two points on the RGB: the first one at the level of the HB, and the second one 2.5 mag brighter. We use the same definition for the $`(V-I,V)`$ plane here; however, in order to be able to use our metal richest clusters, we redefined $`S`$ by measuring the second RGB point 2 mag brighter than the HB. Since $`S`$ is measured on the apparent CMD, it is independent of both the reddening and the distance modulus.
The right panel of the same figure shows the apparent $`V`$ magnitude vs. the de-reddened $`(V-I)_0`$ color. In this panel, four other indices are identified, i.e. $`(V-I)_{0,g}`$, $`\mathrm{\Delta }V_{1.1}`$, $`\mathrm{\Delta }V_{1.2}`$, and $`\mathrm{\Delta }V_{1.4}`$. The first one is the RGB color at the level of the HB, and the other three measure the magnitude difference between the HB and the RGB at a fixed color $`(V-I)_0=1.1`$, 1.2 and 1.4 mag. The former index was originally defined by Sandage & Smith (sandsmith66 (1966)) and the latter one by Sandage & Wallerstein (sandwall60 (1960)), in the $`(B-V)_0,V`$ plane. The other two indices, $`\mathrm{\Delta }V_{1.1}`$ and $`\mathrm{\Delta }V_{1.2}`$, are introduced later to measure the metal richest GCs. These indices require an independent color excess determination.
Finally, Fig. 2 shows the CMD of NGC 104 (47 Tuc) in the absolute $`(V-I)_0,M_I`$ plane: the adopted distance modulus, $`(m-M)_V=13.35`$, was obtained by correcting the apparent luminosity of the HB according to Lee et al. (ldz (1990); cf. Sect. 6). By comparison, Harris' catalog reports $`(m-M)_V=13.32`$. Two other indices are represented in the figure: $`(V-I)_{-3.0}`$ and $`(V-I)_{-3.5}`$. They are defined as the RGB color at a fixed absolute $`I`$ magnitude of $`M_I=-3.0`$ (Da Costa & Armandroff da90 (1990)) or $`M_I=-3.5`$ (Lee et al. lee.et.al93 (1993)). The latter index was also discussed by Armandroff et al. (taft-3p5 (1993)), and a calibration formula was given in Caldwell et al. (caldwell-3p5 (1998)). This is based on the DA90 clusters plus M5 and NGC 362 from Lloyd Evans (lloyd83 (1983)).
Since these two indices are defined on the bright part of the RGB, they can be measured even for the farthest objects of the Local Group (LG). Due to the fast luminosity evolution of the stars on the upper RGB, this part of the branch was typically under-sampled by the early small-size CCDs, so no wide application of these indices has been made for Galactic globulars. However, this is of no concern for galaxy-size stellar systems. It will be shown in Sect. 6 that good accuracies can be obtained even for GCs, provided that the analytic function of Eq. (1) is used.
### 3.2 Measurement procedures
Colors and magnitudes were measured on a fiducial RGB, which has been found by least-square fitting an analytic function to the observed branch. After some experimenting, it was found that the best solution is to use the following relation:
$$y=a+bx+c/(x-d)$$
(1)
where $`x`$ and $`y`$ represent the color and the magnitude, respectively. One can see from Figs. 1 and 2 that the function is indeed able to represent the giant branch over the typical metallicity range of globular clusters. Moreover, it is shown in Sect. 4 that, when the CMDs are corrected for distance and reddening, the four coefficients can be parametrized as a function of \[Fe/H\], so that one is able to reproduce the RGB of each cluster, using just one parameter: the metallicity. At any rate, the indices were measured on the original loci, so that an independent check of the goodness of the generalized hyperbolae can be made, by comparison of the measured vs. predicted indices.
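As an illustration of such a fit (a minimal sketch added here; the synthetic input arrays and the starting values are our own assumptions, not data from this paper), Eq. (1) can be adjusted to a measured branch with a standard nonlinear least-squares routine:

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(x, a, b, c, d):
    # Eq. (1): y = a + b*x + c/(x - d)
    return a + b * x + c / (x - d)

# Synthetic fiducial points (color, magnitude); placeholders only.
x = np.linspace(0.85, 1.6, 25)
y = hyperbola(x, 2.0, -1.5, 0.4, 1.9) + np.random.normal(0.0, 0.02, x.size)

# Initial guess for (a, b, c, d); d must lie outside the color range
# so that the model has no singularity inside the data.
popt, pcov = curve_fit(hyperbola, x, y, p0=(1.0, -1.0, 0.5, 2.0))
print("best-fit (a, b, c, d):", popt)
```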
All the indices' values that have been measured are reported in Table 2. In this table, the cluster NGC number is given in column 1; the following columns list, from left to right, $`(V-I)_{0,\mathrm{g}}`$, $`S`$, $`\mathrm{\Delta }V_{1.1}`$, $`\mathrm{\Delta }V_{1.2}`$, $`\mathrm{\Delta }V_{1.4}`$, and finally the RGB color measured at $`M_I=-3`$ and $`-3.5`$. The Lee et al. (1990) distance scale was used to compute the last two indices (cf. Sect. 6).
### 3.3 Internal consistency checks
Before discussing the indices as metallicity indicators, we checked their internal consistency. We will show in Sect. 6 that the index $`S`$ is the most accurate one, as expected, since it does not require reddening and distance corrections. The rest of the indices are therefore plotted vs. $`S`$ in Figs. 3 and 4, and we expect that most of the scatter will be in the vertical direction. Second order polynomials were fitted to the distributions, and the *rms* of the fit was computed for each index. In order to intercompare the different indices, a relative uncertainty has been computed by dividing the *rms* by the central value of each parameter (this value is identified by a dotted line in each figure).
In this way, the scatter of the metal index $`i`$ is $`\mathrm{\Delta }i/i=0.02`$, 0.02, 0.04, 0.06, 0.12, and 0.26, for the indices $`(V-I)_{-3.0}`$, $`(V-I)_{-3.5}`$, $`(V-I)_{0,\mathrm{g}}`$, $`\mathrm{\Delta }V_{1.4}`$, $`\mathrm{\Delta }V_{1.2}`$, and $`\mathrm{\Delta }V_{1.1}`$, respectively.
The indices will be calibrated in terms of \[Fe/H\] in Sect. 6; however, before moving to this section, we want to present a new way to provide โstandardโ GGC branches in the $`(VI)_0,M_I`$ plane, along the lines of the classical Da Costa & Armandroff (da90 (1990)) study. Using this family of RGB branches, we are able to make predictions on the trend of the already defined indices with metallicity; these trends can thus be compared to the observed ones, and therefore provide a further test of the reliability of our RGB family (cf. Sect 7).
## 4 New standard globular cluster giant branches
Da Costa & Armandroff (da90 (1990)) presented in tabular form the fiducial GGC branches of 6 globulars, covering the metallicity range $`2.17[\mathrm{Fe}/\mathrm{H}]0.71`$. The RGBs were corrected to the absolute $`(VI)_0,M_I`$ plane using the apparent $`V`$ magnitude of the HB, and adopting the Lee et al. (ldz (1990)) theoretical HB luminosity. Since the DA90 study, these branches have been widely used for stellar population studies in the Local Group. Based on these RGBs, in particular, a method to determine both the distance and mean metallicity of an old stellar population was presented by Lee et al. (lee.et.al93 (1993)).
Both DA90 and Lee et al. (1993) provided a relation between the metallicity \[Fe/H\] and the color of the RGB at a fixed absolute $`I`$ magnitude ($`M_I=3`$ and $`3.5`$, respectively), and recently a new relation for $`(VI)_{3.5}`$ has also been obtained by Caldwell et al. (caldwell-3p5 (1998)). Once the distance of the population is known (e.g. via the luminosity of the RGB tip), then an estimate of its *mean* metallicity can be obtained using one of the calibrations. It is assumed that the age of the population is comparable to that of the GGCs, and that the age spread is negligible compared to the metallicity spread (RSPA99).
In such case, one expects that any RGB starโs position in the absolute CMD is determined just by its metallicity, and that a better statistical determination of the populationโs metal content would be obtained by converting the color of *each* star into a \[Fe/H\] value. With this idea in mind, in the following sections we will show that this is indeed possible, at least for the bright/most sensitive part of the giant branch. We found that a relatively simple *continuous* function can be defined in the $`(VI)_0,M_I,[\mathrm{Fe}/\mathrm{H}]`$ space, and that this function can be used to transform the RGB from the $`(VI)_0,M_I`$ plane to the $`[\mathrm{Fe}/\mathrm{H}],M_I`$ plane.
In order to obtain this function, we first selected a subsample of clusters with suitable characteristics, so that a reference RGB grid can be constructed. The fiducial branches for each cluster were then determined in an objective way, and they were corrected to the absolute $`((VI)_0,M_I)`$ plane. In this plane, the analytic function was fitted to the RGB grid. These operations are described in the following sections.
### 4.1 Selection of clusters
The clusters that were used for the definition of the fiducial RGBs are listed in Table 3, in order of increasing metallicity. The table reports the cluster name, and some of the parameters listed in Table 1 are repeated here for ease of use. The values of the reddening were in some cases changed by a few thousandth magnitudes (i.e. well within the typical uncertainties on $`E_{(BV)}`$), to obtain a sequence of fiducial lines that move from bluer to redder colors as \[Fe/H\] increases, and again the corresponding $`E_{(VI)}`$ values were obtained assuming that $`E_{(VI)}=1.28\times E_{(BV)}`$ (Dean et al. dwc78 (1978)). Indeed, due to the homogeneity of our sample, we expect that if a monotonic color/metallicity sequence is not obtained, then only the uncertainties on the extinction values must be taken into account.
In order to single out these clusters from the total sample, some key characteristics were taken into account. In particular, we considered clusters whose RGBs are all well-defined by a statistically significant number of stars; they have low reddening values ($`E_{(BV)}0.05`$); and they cover a metallicity range that includes most of our GGCs ($`2.2[\mathrm{Fe}/\mathrm{H}]0.7`$ on the ZW scale).
The DA90 fiducial clusters were NGC 104, NGC 1851, NGC 6752, NGC 6397, NGC 7078 and NGC 7089 (M2). NGC 104 is the only cluster in common with the previous study, and M2 is not present in our dataset. The other objects have been excluded from our fiducial sample since they have too large reddening values ($`E_{(BV)}>0.05`$ for NGC 6397 and NGC 7078), or their RGBs are too scarcely populated in our CMDs (NGC 1851 and NGC 6752). Nevertheless, the calibrations that we obtain for the $`(VI)_{3.0}`$ and $`(VI)_{3.5}`$ are in fairly good agreement with those obtained by DA90 (for the small discrepancies at the high metallicity end, cf. Sect. 6.2 and 6.3), and in particular with the recent Caldwell et al. (caldwell-3p5 (1998)) calibration for the $`(VI)_{3.5}`$ index.
### 4.2 Determination of the fiducial loci
The ridge lines of our fiducial RGBs were defined according to the following procedure. The RGB region was selected from the calibrated photometry, by excluding both HB and AGB stars. All stars bluer than the color of the RR Lyr gap were removed; AGB stars were also removed by tracing a reference straight line in the CMD, and by excluding all stars blue-side of this line. This operation was carried out in the $`((VI),I)`$ plane, where the RGB curvature is less pronounced, and a straight line turns out to be adequate.
The fiducial loci were then extracted from the selected RGB samples. The $`(VI)`$ and $`I`$ vectors were sorted in magnitude, and bins were created containing a given number of stars. Within each bin, the median color of the stars and the mean magnitude were used as estimators of the bin central color and brightness. The number of stars within the bins was exponentially increased going from brighter to fainter magnitudes. In this way, (a) one can use a small number of stars for the upper RGB, so that the color of the bin is not affected by the RGB slope, and (b) it is possible to take advantage of the better statistics of the RGB base. Finally, the brightest two stars of the RGB were not binned, and were left as representatives of the top branch. After some experimenting, we found that a good RGB sampling can be obtained by taking for each bin a number of stars which is proportional to $`e^{0.2i}`$, where $`i`$ is an integer number. The resulting fiducial vectors were smoothed using an average filter with a box size of 3.
The RGB regions of the 6 clusters are shown in Fig. 5, together with the fiducial lines: it can be seen that in all cases the AGBs are easily disentangled from the RGBs. The values of the fiducial points corresponding to the solid lines in Fig. 5, are listed in Table 4.
### 4.3 Analytic fits to the fiducial loci
The fiducial branches defined in Sect. 4.2 were fitted with a parametrized family of hyperbolae. First, the RGBs were moved into the absolute $`(VI)_0,M_I`$ plane. The distance modulus was computed from the apparent magnitude of the HB (cf. Table 3) and by assuming the common law $`M_V(\mathrm{HB})=a[\mathrm{Fe}/\mathrm{H}]+b`$; in order to compare our results with those of DA90, $`a=0.17`$ and $`b=0.82`$ were used, but we also obtained the same fits using more recent values as in Carretta et al. (cetal99 (1999)), i.e. $`a=0.18`$ and $`b=0.90`$. The RGB was modeled with an hyperbola as in Rosenberg et al. (1999a ), but in this case the coefficients were taken as second order polynomials in \[Fe/H\]. In other words, we parametrized the whole family of RGBs in the following way:
$$M_I=a+b(VI)+c/[(VI)d]$$
(2)
where
$$a=k_1[\mathrm{Fe}/\mathrm{H}]^2+k_2[\mathrm{Fe}/\mathrm{H}]+k_3$$
(3)
$$b=k_4[\mathrm{Fe}/\mathrm{H}]^2+k_5[\mathrm{Fe}/\mathrm{H}]+k_6$$
(4)
$$c=k_7[\mathrm{Fe}/\mathrm{H}]^2+k_8[\mathrm{Fe}/\mathrm{H}]+k_9$$
(5)
$$d=k_{10}$$
(6)
The list of the parameters of the fits in magnitude is reported in Table 5, together with the *rms* of the residuals around the fitting curves. The table shows that the parameter $`d`$ does not depend on the choice of the distance scale, as expected. Even the other coefficients are little dependent on the distance scale, apart from $`k_3`$. It is affected by the zero-point of the HB luminosity-metallicity relation, and indeed there is the expected $`0.1`$ mag difference going from the LDZ to the C99 distance scale.
One could question the choice of a constant $`d`$, but after some training on the theoretical isochrones, we found that even allowing for a varying parameter, its value indeed scattered very little around some mean value. This empirical result is a good one, in the sense that it allows to apply a robust linear least-square fitting method for any choice of $`d`$, and then to search for the best value of this constant by a simple *rms* minimization. We chose to fit the $`M_I=f\{(VI)_0,[\mathrm{Fe}/\mathrm{H}]\}`$ function, and not the $`(VI)_0=f(M_I,[\mathrm{Fe}/\mathrm{H}])`$ function, since the latter one would be double-valued for the brightest part of the metal rich clustersโ RGBs. This choice implies that our fits are not well-constrained for the vertical part of the giant branch, i.e. for magnitudes fainter than $`M_I1`$. However, we show in the next section that our analytic function is good enough for the intended purpose, i.e. to obtain the \[Fe/H\] of the RGB stars in far Local Group populations, and thus to analyze how they are distributed in metallicity.
Our synthetic RGB families are plotted in Figs. 6 and 7, for the LDZ distance scale. In the former figure, the ZW metallicity scale is used, while the CG scale is used in the latter one. The figures show that the chosen functional form represents a very good approximation to the true metallicity โdistributionโ of the RG branches. The *rms* values are smaller than the typical uncertainties in the distance moduli within the Local Group. We further stress the excellent consistency of the empirical fiducial branches for clusters of similar metallicity. We have two pairs of clusters whose metallicities differ by at most 0.03 dex (depending on the scale): NGC 288 and NGC 5904 on the one side, and NGC 5272 and NGC 6205 on the other side. The figures show that the fiducial line of NGC 288 is similar to that of NGC 5904, and the NGC 5272 fiducial resembles that of NGC 6205, further demonstrating both the homogeneity of our photometry and the reliability of the procedure that is used in defining the cluster ridge lines.
If the coefficients of the hyperbolae are taken as third order polynomials, the resulting fits are apparently better (the *rms* is $`0.05`$ mag); however, the trends of the metallicity indices show an unphysical behavior, which is a sign that further clusters, having metallicities not covered by the present set, would be needed in order to robustly constrain the analytic function.
In the following section, the indices are calibrated in terms of metallicity, so that in Sect. 7 they will be used to check the reliability of our generalized fits.
## 5 Calibration of the indices. Introduction
In order to obtain analytic relations between the indices and the actual metallicity, our photometric parameters were compared both with the ZW and the CG values. A summary of the resulting equations is given in Table 6. For each index (first column) both linear and quadratic fits were tried, of the form: $`[\mathrm{Fe}/\mathrm{H}]=\alpha \mathrm{index}+\beta `$ and $`[\mathrm{Fe}/\mathrm{H}]=\alpha \mathrm{index}^2+\beta \mathrm{index}+\gamma `$. The coefficients of the calibrating relation are given in the columns labelled $`\alpha `$, $`\beta `$, and $`\gamma `$; in column 7, the *rms* of the residuals is also given. In the case of the $`(VI)_{3.0}`$ and $`(VI)_{3.5}`$ indices, neither the linear nor the quadratic fits give satisfactory results, when the CG scale is considered. Instead, a good fit is obtained if a change of variables is performed, setting $`z=0.02\times 10^{[\mathrm{Fe}/\mathrm{H}]}`$, and linearly interpolating in the index (i.e. setting $`z=\alpha \mathrm{index}+\beta `$). The column 8 of Table 6 identifies the kind of fitting function that is used for each parameter/metallicity combination: the symbols โ1โ, โ2โ and โ$`z`$โ refer to the linear, quadratic, and linear in $`z`$ fits, respectively. Relations on both the CG and ZW metallicity scales are given, and column 3 flags the \[Fe/H\] scale that is used.
In order to measure the $`(VI)_3`$ and $`(VI)_{3.5}`$ indices (cf. Sect. 3) a distance scale must be adopted. The most straightforward way is to use the observed $`V_{\mathrm{HB}}`$ (cf. Table 1) coupled with a suitable law for the HB absolute magnitude.
It has become customary to parameterize this magnitude as $`M_V(\mathrm{HB})=a[\mathrm{Fe}/\mathrm{H}]+b`$, although there is no consensus on the value of the two parameters $`a`$ and $`b`$. The current calibrations of these two metallicity indices were obtained by Da Costa & Armandroff (da90 (1990)) and Lee et al. (lee.et.al93 (1993)), and they are based on the Lee et al. (ldz (1990); LDZ) theoretical luminosities of the HB. LDZ gave a relation $`M_V(\mathrm{HB})=0.17[\mathrm{Fe}/\mathrm{H}]+0.82`$ valid for $`Y=0.23`$.
As discussed in Sect 4, since many current determinations of Population ii distances within the Local Group are based on the Lee et al. (ldz (1990)) distance scale, and for the purpose of comparison with previous studies, we provide a calibration using the latter HB luminosity-metallicity relation. However, in the last ten years revisions of this relation have been discussed by many authors, so we also calibrated the two indices using $`M_V(\mathrm{HB})=0.18[\mathrm{Fe}/\mathrm{H}]+0.90`$ (Carretta et al. cetal99 (1999)), which is one of the most recent HB-based distance scales.
We must stress that *metallicities on the ZW scale must be used in the $`M_V`$ vs.* \[Fe/H\] *relation*. Indeed, CG showed that their scale is not linearly correlated to that of ZW, so not even the *$`M_V`$ vs.* \[Fe/H\] relation will be linear: if one wishes to use the new scale, then *the absolute magnitude of the HB must be re-calibrated* in a more complicated way.
The best calibrating relations are shown in Figs. 8 to 11. In the following sections, for each index a few remarks on the accuracy of the calibrations and comparisons with past studies are given.
## 6 Calibration of the indices. Discussion
### 6.1 $`S`$
On the CG scale, the second-order fit has a residual *rms* of 0.12 dex in \[Fe/H\]. On the ZW scale, the linear fit is obtained with a *rms* of 0.12 dex. This index can therefore be calibrated on both scales, with a comparable level of accuracy. A parabolic fit does not improve the relation on the ZW scale, since the coefficient of the quadratic term is very small (-0.004) and the *rms* is the same. These relations are shown in Fig. 8 as solid lines, where the upper panel is for the ZW scale, and the lower panel for the CG scale (this layout is reproduced in all the following figures).
The cluster NGC 6656 (M22) was excluded from the fits, and is plotted as an open circle in Fig. 8. It is well-known that M22 is a cluster that shows a metallicity spread, and indeed it falls outside the general trend in most of the present calibrations.
### 6.2 $`(VI)_{3.0}`$
The first definition of the $`(VI)_{3.0}`$ index was given in Da Costa & Armandroff (da90 (1990)), where a calibration in terms of the ZW scale was also given: $`[\mathrm{Fe}/\mathrm{H}]=15.16+17.0(VI)_34.9(VI)_3^2`$. The same index (measured on the *absolute* RGBs corrected with the LDZ HB luminosity-metallicity relation) is plotted, in Fig. 9, as a function of the metallicity on both scales, and the solid lines represent our calibrations. The top panel shows the quadratic relation on the ZW scale, whose *rms* is 0.14 dex. The bottom panel of Fig. 9 shows the relation on the CG scale. In this case, a quadratic fit is not able to reproduce the trend of the observational data. A better result can be obtained by making a variable change, i.e. using the variable $`z=0.0210^{[\mathrm{Fe}/\mathrm{H}]}`$; in this case, a linear relation is found, and its *rms* is 0.15 dex. This measure of the residual scatter has been computed after transforming back to metallicity, so the reliability of the index can be compared to that of the other ones. Again, the index can be calibrated on both scales with a comparable accuracy. The dashed curve in the upper panel of Fig. 9 shows the original relation obtained by DA90: there is a small discrepancy at the high-metallicity end, which can be explained by the different 47 Tuc fiducial line that was adopted by DA90 (cf. below the discussion on $`(VI)_{3.5}`$).
As already recalled, we checked the effect of adopting another distance scale, by repeating our measurements and fits, and adopting the C99 distance scale. For the ZW metallicity scale, we obtain the quadratic relation whose coefficients are listed in Table 6, and whose *rms* is 0.15 dex. The bottom panel of Fig. 9 shows the relation on the CG scale. Again, a quadratic fit is not able to reproduce the trend of the observational data. Making the already discussed variable substitution, the linear relation in $`z`$ has an *rms* of 0.16 dex, so the two metallicity scales yield almost comparable results.
### 6.3 $`(VI)_{3.5}`$
Using the same โstandardโ GC branches of DA90, Lee et al. (lee.et.al93 (1993)) defined a new index, $`(VI)_{3.5}`$, to be used for the farthest population ii objects. It was also calibrated in terms of the ZW scale: $`[\mathrm{Fe}/\mathrm{H}]=12.64+12.6(VI)_{3.5}3.3(VI)_{3.5}^2`$. A new calibration was also given recently in Caldwell et al. (caldwell-3p5 (1998)): \[Fe/H\]$`=1.00+1.97q3.20q^2`$, where $`q=[(VI)_{3.5}1.6]`$. The index and our calibrations (solid lines) are plotted, in Fig. 10, on both metallicity scales. Again, the measurements were made in the absolute CMD, assuming the LDZ distance scale. Our quadratic calibration vs. the ZW scale has a residual *rms* scatter of 0.13 dex, which is the same of the linear relation on the CG metallicity vs. $`z`$.
The Lee et al. relation (dashed line) predicts slightly too larger metallicities on the ZW scale, for $`[\mathrm{Fe}/\mathrm{H}]>1`$. This can also be interpreted as if the DA90 47 Tuc branch were $`<0.1`$ mag bluer than ours. Indeed, if one looks at Fig. 5 of DA90, one can easily see that some weight is given to the brightest RGB star, which is brighter than the trend defined by the previous ones. The result is a steeper branch, which also justifies the DA90 slightly bluer RGB fiducial. Since our metal richest point is defined by two clusters, and since the two measured parameters agree very well, we are confident that our calibration is reliable. In any case, the discrepancy between the two scales is no larger than $`0.1`$ dex. It is also reassuring that the Caldwell et al. (caldwell-3p5 (1998)) relation (pluses) is closer to the present calibration, since the former is based on a larger set of clusters. This might be an indication that the Lee et al. relation is actually inaccurate at the metal rich end, due to the small set of calibrating clusters.
As before, we obtained a further calibration also using the C99 $`M_V`$ vs. \[Fe/H\] relation; the quadratic fit on the ZW scale has a residual *rms* scatter of 0.13 dex, while the $`z`$ variable can be fitted with a straight line, with an *rms* of 0.14 dex.
### 6.4 The $`\mathrm{\Delta }V`$ family and $`(VI)_{0,\mathrm{g}}`$
For any $`\mathrm{\Delta }V`$ index, the quadratic relations vs. the ZW metallicity do not improve the *rms* and they are not plotted in the figures. The coefficients are listed in Table 6.
The best metallicity estimates of the โ$`\mathrm{\Delta }V`$ familyโ are obtained with the $`\mathrm{\Delta }V_{1.4}`$ index. The errors on $`[\mathrm{Fe}/\mathrm{H}]`$ are just slightly larger than the standard uncertainties of the spectroscopic determinations. The solid lines of Fig. 11 show the calibrations that we obtain. The quadratic equation on the CG scale, and the linear one on the ZW scale, are obtained with residual scatters of 0.16 dex.
The rest of the indices in this family, and $`(VI)_{0,\mathrm{g}}`$, lack the precision of the other abundance indicators. This is due to the fact that the error on any $`\mathrm{\Delta }V`$ index is proportional to the uncertainty on the color of the RGB (which depends on the reddening), times its local slope where the reference point is measured. Since the RGB slope increases going away from the tip (i.e. towards bluer colors), we expect that the scatter on the $`\mathrm{\Delta }V`$ indices will also increase as the color of the reference point gets bluer. Indeed, Table 6 shows that in most cases the rms uncertainties are $`>0.2`$ dex for these indices. The residual scatter is largest for the $`(VI)_{0,\mathrm{g}}`$ index, which is the most affected by the uncertainties on the reddening.
The $`\mathrm{\Delta }V_{1.2}`$ and $`(VI)_{0,\mathrm{g}}`$ parameters have been earlier calibrated, on the CG scale, by Carretta & Bragaglia (cb98 (1998)). Using their quadratic relation for $`\mathrm{\Delta }V_{1.2}`$, and both their linear and quadratic relations for $`(VI)_{0,\mathrm{g}}`$, the corresponding *rms* of the residuals in metallicity are 0.21 dex and $`0.41`$ dex, respectively. Our new and the old calibrations are therefore compatible, within the (albeit large) uncertainties.
## 7 A test of the โmodelโ RGBs; comparison with the observed \[Fe/H\] indices
A straightforward test of our new analytic RGBs can be made by generating the same metallicity indices that have been measured on the observed RGBs, and then checking the consistency of the predicted vs. measured quantities. To this aim, for a set of discrete \[Fe/H\] values a $`(VI)_0`$ vector was generated, and the combination of the two was used to compute the $`M_I`$ vector of the giant branch, using Eqs. (2-6). Then for each branch the metallicity indices were measured as it was done for the clustersโ fiducials.
In Figs. 8 to 11, the predicted indices are identified by the small open squares (spaced by 0.1 dex) connected by a solid line. The best predictions are for those indices that rely on the brightest part of the RGB (i.e. $`(V-I)_{3.0}`$, $`(V-I)_{3.5}`$ and $`\mathrm{\Delta }V_{1.4}`$), while the computations are partially discrepant for those indices that rely on a point measured on the faint RGB. This is easily explained by the nature of our fit: since the best match is searched for along the ordinates (for the reasons discussed in Sect. 4), the fit is better constrained in the upper part of the RGB, where its curvature becomes more sensitive to metallicity. We must also stress that the metal-richest cluster in the reference grid is 47 Tuc (\[Fe/H\]$`=-0.70`$ on the ZW scale), whereas NGC 6352 (\[Fe/H\]$`=-0.50`$ on the same scale) is the metal-richest cluster for which metallicity indices have been measured. Some of the discrepancies seen at the highest metallicities are therefore due to the lack of low-reddening clusters that could be used to extend the reference grid to higher \[Fe/H\] values.
The mean differences between the predicted and fitted indices are, on the ZW scale, around 0.03 dex for the $`(V-I)_{3.0}`$ and $`(V-I)_{3.5}`$ indices. They are around 0.08 dex for the $`\mathrm{\Delta }V_{1.2}`$, $`\mathrm{\Delta }V_{1.4}`$, and $`S`$ indices. They rise to $`\sim 0.1`$ and $`\sim 0.3`$ dex for the $`\mathrm{\Delta }V_{1.1}`$ and $`(V-I)_{0,\mathrm{g}}`$ indices. A similar trend is seen for the comparison on the CG scale. In this case, the mean differences are $`\sim 0.05`$ dex for $`(V-I)_{3.0}`$, $`(V-I)_{3.5}`$, and $`S`$; they are $`\sim 0.1`$ dex for $`\mathrm{\Delta }V_{1.2}`$ and $`\mathrm{\Delta }V_{1.4}`$; and they are 0.12 and 0.27 dex for the $`\mathrm{\Delta }V_{1.1}`$ and $`(V-I)_{0,\mathrm{g}}`$ indices.
We can therefore conclude that, apart from the $`\mathrm{\Delta }V_{1.1}`$ and $`(V-I)_{0,\mathrm{g}}`$ indices, our mono-parametric RGB family gives a satisfactory reproduction of the actual changes of the RGB morphology and location as a function of metallicity. It is then expected that, using this approach, one can exploit the brightest $`\sim 3`$ mag of the RGB to determine the mean metallicity and, even more important, the metallicity *distribution* of the old stellar population of any Local Group galaxy. In a forthcoming paper, we will demonstrate this possibility by re-analyzing our old photometric studies of the dwarf spheroidal galaxies Tucana (Saviane et al. tucana (1996)), Phoenix (Held et al. 1999a; Martínez-Delgado et al. 1999b), Fornax (Saviane et al. 1999a), LGS 3 (Aparicio et al. aaj\_lgs3 (1997)), Leo I (Gallart et al. aaj\_leoi (1999); Held et al. 1999b) and NGC 185 (Martínez-Delgado et al. 1999a).
## 8 Conclusions
In this work, we have provided the first calibration of a few metallicity indices in the $`(V-I),V`$ plane, namely the indices $`S`$, $`\mathrm{\Delta }V_{1.1}`$ and $`\mathrm{\Delta }V_{1.4}`$. Calibrations on both the Zinn & West (1984) and Carretta & Gratton (1997) scales have been obtained. The metallicity indices $`(V-I)_{0,\mathrm{g}}`$, $`\mathrm{\Delta }V_{1.2}`$, $`(V-I)_{3.0}`$ and $`(V-I)_{3.5}`$ have also been calibrated on both scales, and we have shown that our new relations are consistent with existing ones. In the case of the latter two indices, we have obtained the first calibration on the CG scale; for both scales, we have also obtained the first calibration that takes into account new results on the RR Lyr distances. The accuracy of the calibrations is generally better than 0.2 dex, regardless of the metallicity scale that is used.
Our results are an improvement over previous calibrations, since a new approach to the definition of the RGB is used, and since our formulae are based on the largest homogeneous photometric database of Galactic globular clusters.
The availability of such a database also allowed us to make progress towards the definition of a standard description of the RGB morphology and location. We were able to obtain a function in the $`(V-I)_0,M_I,[\mathrm{Fe}/\mathrm{H}]`$ space which is able to reproduce the whole set of GGC giant branches in terms of a single parameter (the metallicity). We suggest that the use of this function will improve the current determinations of metallicity and distances within the Local Group, extending the methods of Lee et al. (1993).
###### Acknowledgements.
We thank the referee, Gary Da Costa, for helpful suggestions that improved the final presentation of the manuscript. I.S. acknowledges the financial support of the Italian and Spanish Foreign Ministries, through an "Azioni Integrate/Acciones Integradas" grant.
no-problem/0001/astro-ph0001275.html | ar5iv | text | # Dust in the 55 Cancri planetary system
## 1 Introduction
Planetary systems are born in dusty circumstellar disks. Once planets form, the circumstellar dust is thought to be continually replenished by collisions and sublimation (and subsequent condensation) of larger bodies such as asteroids, comets, and Kuiper Belt objects (Nakano 1988; Backman & Paresce 1993). Such debris disks have now been directly imaged around several nearby main sequence stars: $`\beta `$ Pictoris, HR 4796A, Vega, Fomalhaut and $`\epsilon `$ Eridani (Holland et al. 1998; Jayawardhana et al. 1998; Koerner et al. 1998; Greaves et al. 1998). The presence of debris disks around stars which, in some cases, may be $`2\times 10^8`$ - $`10^9`$ years old suggests that an appreciable amount of dust, perhaps tens of lunar masses, may be present even in mature planetary systems. The ring of dust recently imaged at 850$`\mu `$m around the nearby K2V star $`\epsilon `$ Eridani is also spatially analogous to the Kuiper Belt of our own solar system (Greaves et al. 1998).
The G8V star 55 Cancri, at a distance of 13 pc, is unique in harboring both planets and a substantial dust disk. It contains one planet of about 2 Jupiter masses in an orbit with a semi-major axis of 0.11 AU (Butler et al. 1997) and evidence for a second planet at several AU in the form of a residual drift in the stellar velocity over the past 10 years (Marcy & Butler 1998). The dust disk, first inferred by Dominik et al. (1998) using ISO observations at 25$`\mu `$m and 60$`\mu `$m, is much larger, with a radius of $`\sim `$ 50 AU. Recent near-infrared coronographic observations of Trilling & Brown (1998) have resolved the 55 Cancri dust disk and confirm that it extends to at least 40 AU (3.24″) from the star.
We have recently commenced a mini-survey of the parent stars of known extrasolar planets using the Submillimeter Common User Bolometer Array (SCUBA) on the James Clerk Maxwell Telescope (JCMT). Our program is to obtain 850$`\mu `$m and 450$`\mu `$m flux measurements in the photometry mode since the expected disk sizes are too small to be spatially resolved at present. Our goals are to explore the kinship between circumstellar dust and planets and to provide significant constraints on the nature and amount of dust associated with the Kuiper Belts of these extrasolar planetary systems. Here we report the detection of sub-millimeter emission from 55 Cnc during the first observing shift of our survey program.
## 2 Observations and Results
We observed 55 Cnc with the SCUBA instrument (Holland et al. 1999) on the JCMT on Mauna Kea, Hawaii. The data were obtained on 1999 February 4-9 UT using the SCUBA photometry mode. Although SCUBA operates at 450 and 850$`\mu `$m simultaneously, the observing conditions are generally poorer at the shorter wavelength. Zenith atmospheric opacities were exceptionally good at 850$`\mu `$m, ranging from 0.10 to 0.15. Observations of Uranus were used for calibrations. Pointing accuracy was 2″, which is small compared with the beam size of 15″ at 850$`\mu `$m (FWHM) and 8″ at 450$`\mu `$m. The data were reduced using the SCUBA User Reduction Facility (Jenness & Lightfoot 1998).
We also obtained mid-infrared observations of 55 Cnc on 1999 May 3 UT using the OSCIR instrument on the Keck II telescope. OSCIR is a mid-infrared imager/spectrometer built at the University of Florida (additional information on OSCIR is available at www.astro.ufl.edu/iag/), using a 128$`\times `$128 Si:As Blocked Impurity Band (BIB) detector developed by Boeing. On Keck II, OSCIR has a plate scale of 0.062″/pixel, providing a 7.9″$`\times `$7.9″ field of view. We used a chop frequency of 4 Hz and a throw of 8″. Images were obtained in N(10.8 $`\mu `$m) and IHW18(18.2 $`\mu `$m) filters, with on-source integration times of 120 sec and 300 sec, respectively. The standard stars $`\mu `$ UMa and $`\alpha `$ Boo were used for flux calibration.
In the sub-millimeter, we measure 2.8$`\pm `$0.5 mJy at 850$`\mu `$m and 7.9$`\pm `$4.2 mJy at 450$`\mu `$m from 55 Cnc, presumably due to thermal emission of dust in a Kuiper Belt-like population. In the mid-infrared, where the emission is dominated by the stellar photosphere, we measure 1.0$`\pm `$0.1 Jy at 10.8$`\mu `$m and 280$`\pm `$28 mJy at 18.2$`\mu `$m. The mid-infrared images do not show any evidence for spatial extension. This is not surprising given that 55 Cnc has little or no excess above the photosphere at these wavelengths. Table 1 lists all available mid-infrared to sub-millimeter flux measurements and limits from our observations, $`IRAS`$, and $`ISO`$.
## 3 Discussion
Following Backman & Gillett (1987), we can write the fractional luminosity of dust as $`\tau =L_d/L_{*}`$, where $`L_d`$ and $`L_{*}`$ are the luminosities of the dust debris and the star, respectively. For 55 Cnc, based on its far-infrared excesses, $`\tau \sim 7\times 10^{-5}`$, some two orders of magnitude lower than that of the debris disk prototype $`\beta `$ Pictoris.
Figure 1 shows that a single-temperature blackbody can match the far-infrared and sub-millimeter flux measurements of 55 Cnc quite well. If one assumes that the emission at $`\lambda \lesssim `$ 25$`\mu `$m is primarily due to the stellar photosphere, a $`T=100`$ K blackbody fits the ISO 60$`\mu `$m measurement and the SCUBA 450 and 850$`\mu `$m detections. It is also roughly consistent with the ISO 90$`\mu `$m limit. Figure 1 also includes a modified blackbody fit with $`T=60`$ K and $`\beta =`$0.5, where $`F_\nu \propto \nu ^{2+\beta }`$, for comparison. This can fit the 60$`\mu `$m and sub-millimeter points well, but does not meet the ISO 90$`\mu `$m constraint.
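To make the comparison concrete, a minimal sketch of such a fit is given below: a modified blackbody is simply $`\nu ^\beta `$ times the Planck function, which reduces to $`F_\nu \propto \nu ^{2+\beta }`$ on the Rayleigh-Jeans tail. The normalization to the 2.8 mJy flux at 850$`\mu `$m follows the caption of Figure 1; this is an illustrative evaluation, not the fitting code used for the paper.

```python
import numpy as np

h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10  # cgs units

def planck_nu(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def modified_bb_mjy(lam_um, T, beta, lam0_um=850.0, f0_mjy=2.8):
    """F_nu ~ nu^beta * B_nu(T), pinned to the 850 um flux of 55 Cnc."""
    nu = c / (np.asarray(lam_um, dtype=float) * 1e-4)
    nu0 = c / (lam0_um * 1e-4)
    return f0_mjy * (nu**beta * planck_nu(nu, T)) / (nu0**beta * planck_nu(nu0, T))

# Compare the two curves of Figure 1 at the observed wavelengths (um):
for lam in (60.0, 90.0, 450.0, 850.0):
    print(lam, modified_bb_mjy(lam, 100.0, 0.0), modified_bb_mjy(lam, 60.0, 0.5))
```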
A blackbody temperature of 100 K for the grains suggests that the dust disk around 55 Cnc has a central disk hole with a minimum radius of 13 AU. If the dust temperature is closer to 60 K, the hole could be as large as 35 AU. This would not conflict significantly with the coronograph image of Trilling & Brown (1998), where reliably detected emission begins at $`\sim `$ 27 AU from the star. Thus, the 55 Cnc dust disk is likely to be well outside the orbits of the two known planets in the system. Since $`\beta `$=0 corresponds to large grains and $`\beta `$=1 to small grains for optically thin emission, our best-fit to the data in Figure 1 would imply a population of grains with $`a\gtrsim 100\mu `$m.
To better constrain the disk parameters using data at all wavelengths, we used the model discussed by Dent et al. (1999). Calculations are made with a 2-D continuum radiative transfer code which includes the star and a thin disk with inner and outer boundaries $`r_{in},r_{out}`$, and a power-law mid-plane density $`r^{-p}`$. The dust emission is characterized by a single characteristic grain size $`a`$, a critical wavelength $`\lambda _0`$ and an opacity index $`\beta `$; shortwards of $`\lambda _0`$ the grains act as blackbodies, while longwards the emissivity is given by $`Q=(\lambda /\lambda _0)^{-\beta }`$.
We have assumed that the $`r^{-3}`$ power-law density distribution derived from the near-infrared observations also continues down to the inner radius $`r_{in}`$. For 55 Cnc, the best fit model (Figure 2) has $`r_{in}`$ of 10 AU, a grain size $`a`$ = 100 $`\mu `$m and opacity index $`\beta =0.5`$. Both the simple fit and the model are roughly consistent with the ISO 90 $`\mu `$m upper limit, although the lower $`\beta `$ may provide a better fit to this data point. The model indicates an upper limit to the dust density in the region 3 AU $`\lesssim r\lesssim `$ 10 AU of $`<`$10% of the density at 10 AU; thus the region where planets are detected is significantly depleted of dust.
Since the sub-millimeter flux is less sensitive to the temperature of the grains than the infrared flux, we can use it to estimate the dust mass. Following Jura et al. (1995), the dust mass $`M_d`$ is given by

$$M_d=F_\nu R^2\lambda ^2/\left[2kT_{gr}K_{abs}\left(\lambda \right)\right],\qquad (1)$$

where $`R`$ denotes the distance from the Sun to 55 Cnc. Assuming a dust absorption coefficient $`K_{abs}(\lambda )`$ between 1.7 and 0.4 cm<sup>2</sup> g<sup>-1</sup> at 850$`\mu `$m (Greaves et al. 1998), we obtain a dust mass of 0.0008-0.005 Earth masses, for $`T`$ = 100-130 K. The lower value of $`K_{abs}(\lambda )`$ is suggested by models of large, icy grains (Pollack et al. 1994), while the higher estimate has been used for previous observations of debris disks (Holland et al. 1998). However, as for all the extrasolar debris disks, very large grains could dominate the total mass while adding little submillimeter emission, so our mass estimates only provide lower limits.
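As a check on the quoted range, Eq. (1) can be evaluated directly; the sketch below uses cgs units and the bracketing values of $`T_{gr}`$ and $`K_{abs}`$ given above, and reproduces the 0.0008-0.005 $`M_{Earth}`$ interval.

```python
# Evaluate Eq. (1) for the 55 Cnc 850 um detection (cgs units throughout).
k_B = 1.381e-16        # Boltzmann constant [erg/K]
pc = 3.086e18          # parsec [cm]
M_earth = 5.972e27     # Earth mass [g]

F_nu = 2.8e-3 * 1.0e-23   # 2.8 mJy -> erg s^-1 cm^-2 Hz^-1
R = 13.0 * pc             # distance to 55 Cnc
lam = 850.0e-4            # 850 um in cm

def dust_mass(T_gr, K_abs):
    """M_d = F_nu * R^2 * lambda^2 / (2 k T_gr K_abs), Eq. (1)."""
    return F_nu * R**2 * lam**2 / (2.0 * k_B * T_gr * K_abs)

print(dust_mass(130.0, 1.7) / M_earth)  # ~0.0009 M_Earth (warm, emissive grains)
print(dust_mass(100.0, 0.4) / M_earth)  # ~0.005  M_Earth (cool, icy grains)
```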
Our dust mass estimates are consistent with Dominik et al. (1998), who derive $`M_d>4\times 10^{-5}M_{Earth}`$ by fitting a disk model to the ISO and IRAS data. On the other hand, using a low albedo (near-infrared reflectance of 6%) and an average particle density of 1 g cm<sup>-3</sup>, Trilling & Brown (1998) estimate a dust mass of 0.4 $`M_{Earth}`$ in the 55 Cnc disk from their scattered light observations. Their estimate is inconsistent with ours. The reason for the discrepancy is not clear. One possibility is that Trilling & Brown (1998) may have overestimated the disk brightness in the near-infrared due to difficulties in background subtraction.
The amount of dust in our solar system's Kuiper Belt is not well determined. Based on far-infrared observations of $`COBE`$ and $`IRAS`$, Backman, Dasgupta & Stencel (1995) and Stern (1996) have placed an upper limit of $`10^{-5}M_{Earth}`$ on the Kuiper Belt mass in the form of dust (particles with $`a\lesssim `$ 1 cm). However, Teplitz et al. (1999) show that the dust mass could be as high as $`7\times 10^{-4}M_{Earth}`$ depending on assumptions about albedo, distribution in particle size, contribution of foreground and background sources to the far-infrared emission, etc. Thus, the 55 Cnc disk may be somewhat "dustier" than our Kuiper Belt. It also appears somewhat "over-dusty" for 55 Cnc's stellar age of $`\sim `$ 5 Gyr (Gonzalez & Vanture 1998; Baliunas et al. 1997), when compared to dust masses in the handful of nearby, well-studied debris disks.
Trilling & Brown (1998) suggest that the apparent dust mass excess in 55 Cnc is consistent with the idea that the inner planet migrated toward the star from its birthplace (Trilling et al. 1998; Murray et al. 1998). In this scenario, a planet migrates inward by exchanging angular momentum with a circumstellar disk which initially extends to a few stellar radii. This migration could transfer material from the inner part of the disk to the outer part, enhancing the mass at Kuiper Belt distances. If that is true, other extrasolar planetary systems with "hot Jupiter" planets should also harbor appreciable amounts of dust in their outer regions. We expect to test this prediction during our on-going sub-millimeter survey of parent stars of radial-velocity planets.
The radiation field of a G8 star is generally too weak to expel dust grains by radiation pressure. In the case of 55 Cnc, the Poynting-Robertson timescale is much shorter than the estimated $`\sim `$ 5-Gyr age of the star (Dominik et al. 1998). Therefore, the dust grains in the system must be replenished by collisions or sublimation of larger bodies such as asteroids, comets, and Kuiper Belt objects.
In summary, we have detected sub-millimeter thermal emission from dust in the 55 Cnc planetary system. Our results confirm that state-of-the-art sub-millimeter instruments are able to detect continuum emission from even a relatively small amount of dust surrounding nearby sun-like stars. The observed dust mass in the 55 Cnc system appears to be somewhat higher than that associated with the Kuiper Belt in our solar system. Far-infrared observations from the Space Infrared Telescope Facility and the Stratospheric Observatory for Infrared Astronomy as well as detailed modeling will be crucial for reliably constraining the spatial extent, size distribution and composition of the dust.
We are grateful to Charles Telesco, Scott Fisher and Robert Piña for their assistance with the OSCIR observations at Keck. We also wish to thank the staff of JCMT and Keck for their outstanding support.
Figure Captions
Figure 1. Composite spectral energy distribution (SED) of 55 Cancri from infrared to sub-millimeter wavelengths. The near-infrared fluxes (open circles) are from Persson et al. (1977). ISO measurements are shown as filled circles and ISO upper limits as open circles, while filled squares and open squares designate IRAS measurements and upper limits, respectively. Our mid-infrared fluxes are shown as diamonds and our JCMT measurements are indicated by filled stars. All error bars are smaller than the symbols except where shown. Also shown are the photospheric emission with T=5250 K (solid line), and modified blackbodies with $`\beta `$=0.0, T=100 K (dashed line), and $`\beta `$=0.5, T=60 K (dotted line), constrained to fit the 850$`\mathrm{\mu m}`$ flux.
Figure 2. Best-fit model for the 55 Cnc SED. The model assumes a thin disk with an inner radius $`r_{in}=`$10 AU, a grain size $`a=`$100 $`\mu `$m, and an opacity index $`\beta =`$0.5. Symbols are the same as in Figure 1. |
no-problem/0001/hep-ph0001180.html | ar5iv | text | # 1 The QCD pomeron exchange mechanism of the processes ๐พโข๐พโ๐ฝ/๐โข๐ฝ/๐.
## Acknowledgments
We are grateful to the Organizers for the interesting and stimulating meeting. This research was partially supported by the Polish State Committee for Scientific Research (KBN) grants 2 P03B 89 13, 2 P03B 084 14 and by the EU Fourth Framework Programme "Training and Mobility of Researchers", Network "Quantum Chromodynamics and the Deep Structure of Elementary Particles", contract FMRX-CT98-0194.
no-problem/0001/astro-ph0001512.html | ar5iv | text | # The Local Interstellar Medium in Puppis-VelaBased on observations collected at the European Southern Observatory at La Silla, Chile
## 1 Introduction
We view the Universe through local interstellar gas, and this gaseous material in the immediate vicinity of the Sun affects all absorption line studies of more distant Galactic lines of sight.
The Local Interstellar Medium (LISM) in the direction of Puppis-Vela \[$`l`$ = 250° to 275°, $`b`$ = −15° to +5°\] was selected for a Na I absorption line study to determine the kinematics of large-scale structures in the interstellar medium (ISM) toward Puppis-Vela, and was initiated by Sahu & Blaauw (1994). Within $`\sim `$ 2 kpc in this direction, three large structures have been identified: (1) the Vela Molecular Ridge at $`\sim `$ 1 kpc (Murphy & May 1991), (2) the IRAS Vela Shell at $`\sim `$ 450 pc (Sahu 1992) and (3) the 36°-diameter, H$`\alpha `$-emitting Gum Nebula. Na I absorption caused by local (d $`<`$ 200 pc) interstellar gas is present in spectra toward the distant background stars embedded in and beyond these extended objects. The absorption components arising within 200 pc must first be identified and understood before attempting to study the kinematics of more distant ISM structures. This work is the first attempt to place components of interstellar gas three-dimensionally ($`l`$, $`b`$, d) in a localized region on the sky using optical absorption spectroscopy, and is preliminary to mapping clouds and extended structures at larger distances. The only interstellar clouds that have been mapped three-dimensionally are the Local Interstellar Cloud (LIC) and the G cloud (Linsky et al. 2000), using principally H column densities from Hubble Space Telescope (HST) Space Telescope Imaging Spectrograph (STIS) and Goddard High Resolution Spectrograph (GHRS) spectra and the assumption that the clouds have constant density. For the Puppis-Vela sight lines discussed here, the technique of Linsky et al. cannot be applied, since the Sun is not embedded in the gas under study and there is no evidence to allow the assumption that the lines of sight have constant density. Distances to target stars derived from Hipparcos parallax measurements allow limits to be placed on the extent of interstellar gas components, the first step in mapping the interstellar clouds or understanding the "astronephography" (Linsky et al. 2000) of the region. With multiple sight lines at various distances, recurring velocity features may be identified and used as a basis for defining the locations, sizes, and characteristics of interstellar gas.
Little is known about the LISM in Puppis-Vela within $`\sim `$ 200 pc. In total, absorption line spectra of only five Puppis-Vela lines of sight have been published (Crawford 1991; Dunkin & Crawford 1999; Ferlet et al. 1985 and Welsh et al. 1994). There has been no localized, systematic study of the local gas toward Puppis-Vela until now. One implication of this lack of information is that LISM data must be obtained from all-sky surveys (Welsh et al. 1998; Génova et al. 1997), whose velocity and column density generalizations may not be indicative of the nature of LISM gas in a specific direction. The identification of local velocity components toward Puppis-Vela performed here will allow local absorption features to be distinguished from more distant absorption features in any subsequent study of the ISM in Puppis-Vela. Questions posed by Puppis-Vela ISM studies that are addressed here include: What is the contribution of the LIC in this direction? Does the Local Bubble (or Cavity) extend beyond $`\sim `$ 70 pc in this direction? Are there any neutral velocity components in the LISM in Puppis-Vela?
We investigate the kinematics and structure of the LISM gas in Puppis-Vela using Na I absorption line data for 11 stars with Hipparcos-based distances $`<`$ 200 pc, in conjunction with data for five previously studied lines of sight. The Na I spectra presented here form a subset of a larger sample containing absorption spectra toward $`\sim `$ 75 more distant stars in Puppis-Vela, which corroborate the conclusions made here and will be presented in a subsequent paper.
## 2 Components of the LISM
It is helpful to review the structures contained in the LISM to more fully understand the Puppis-Vela lines of sight. The three known components of the LISM include the Local Interstellar Cloud (LIC), the G cloud, and the Local Bubble. The Sun is moving through a warm (T $`\sim `$ 7000 K), low density (n(H I) $`\sim `$ 0.1 cm<sup>-3</sup>), partially ionized interstellar cloud termed the LIC (Lallement & Ferlet 1997; Lallement & Bertin 1992). Figure 1 shows a schematic diagram of the components of the LISM as viewed from above the plane of the Galaxy. The LIC is observed in projection toward most but not all nearby stars, and extends approximately 3-8 pc. The LIC is the only interstellar cloud where in situ measurements of interstellar dust grains (Baguhl et al. 1996) and interstellar gas (Witte et al. 1992) have been performed, through Ulysses & Galileo spacecraft observations. Models of the LIC (Redfield & Linsky 1999) place the Sun just inside the LIC, in the direction of the Galactic Center and toward the North Galactic Pole. The Solar Wind is carving out a cavity in the LIC which is slightly elongated in the direction of relative motion of the Sun in the LIC. In the direction of the Galactic Center, another cloud termed the G cloud is seen. This cloud is colder (T $`\sim `$ 5400 K) (Linsky & Wood 1996) and is approaching the Sun at about 29 km s<sup>-1</sup>. From Figure 1, it is clear that the G cloud is not expected to be seen in the Puppis-Vela direction. Since the Sun is within the LIC, interstellar sight lines in all directions contain absorption from gas associated with the LIC, but these features are not always strong enough to be detected in Na I absorption. Specifically toward Puppis-Vela, very low Na I column densities of $`\sim `$ 10<sup>8</sup> to 10<sup>9</sup> cm<sup>-2</sup> can be estimated using the empirical N(H)/N(Na I) formula given by Ferlet et al. (1985), and low color excesses, E(B-V) $`\lesssim `$ 0.06, are detected toward d $`<`$ 200 pc Puppis-Vela sight lines. Very low mean electron densities (n<sub>e</sub> $`\sim `$ 0.03 cm<sup>-3</sup>) are also deduced from the dispersion measures of three pulsars within 200 pc (Toscano et al. 1999).
Surrounding the Sun, the LIC, and the G cloud is the Local Bubble (or Cavity). The Local Bubble is an irregularly shaped region that extends radially approximately 70 pc from the Sun (Welsh et al. 1998). The Local Bubble appears to protrude toward $`\beta `$ CMa \[$`l`$ = 226°, $`b`$ = −14°\], forming an extension called the $`\beta `$ CMa tunnel, and borders the sight lines to Puppis-Vela. The extent of the tunnel is not well determined, and is discussed in detail in §6.3. It is not yet known whether the Local Bubble is a bounded region created from a cataclysmic event such as a supernova explosion, or if the Local Bubble is an intercloud region, isolated by the boundaries of neighboring structures (Snowden et al. 1990). The majority of the volume of the Local Bubble is filled with hot X-ray emitting plasma with a characteristic temperature of T $`\sim `$ 10<sup>6</sup> K, while most of the mass in the Local Bubble is cool and diffuse (T $`\sim `$ 100 K). Typical densities in the Local Bubble range between $`\sim `$ 0.002 and 500 cm<sup>-3</sup> (for a recent review on this subject see Breitschwerdt (1998)).
## 3 Using Na I to study the LISM
There are advantages and disadvantages associated with using Na I to study the LISM. The main advantage of Na I is that it is possible to perform high resolution (R $`\sim `$ 100,000) surveys with ground based telescopes. Na I is a tracer of cold, high column density gas since it has a low ionization potential of 5.14 eV. On the other hand, the disadvantage of observing Na I is that it is a trace ion in the ISM. Na I is well suited for mapping the boundaries of hot plasma in the Local Bubble since low column densities of Na I are expected within the Local Bubble and higher column densities are detected without. Additionally, cold clouds are embedded in the hot Local Bubble, and these clouds may be located and mapped with Na I. Estimates of the extent, distance, velocity and Na I column density of previously undocumented components containing neutral gas are presented in §6.
The most abundant species in the LISM is H I; however, H I is more difficult to observe since space based instruments must be employed to measure its column density. There are two empirical methods by which column densities of H I may be estimated: the ratio of N(H I):E(B-V) (Bohlin et al. 1978; Diplas & Savage 1994), and the ratio of N(H):N(Na I) (Hobbs 1974a, 1974b, 1976; Stokes 1978; Ferlet et al. 1985). Neither of these two methods of empirically calculating N(H I) is suitable for the study of the LISM toward Puppis-Vela.
The first method of calculating atomic hydrogen column densities given the color excesses of stars does not produce accurate results for sight lines with very low reddening, since the uncertainties associated with the MK spectral type are large compared to the observed values of (B-V). The second method of determining the distribution of hydrogen, which estimates N(H I + H<sub>2</sub>) given column densities of Na I, is also inaccurate for regions of low column density. The studies mentioned above contain few data points for N(Na I) $`\lesssim `$ 10<sup>11</sup> cm<sup>-2</sup>. Nearly 80% of the local Puppis-Vela LISM sight lines contain absorption components with column densities below 10<sup>11</sup> cm<sup>-2</sup>, the regime where the N(H) to N(Na I) empirical relationship has not been shown to apply. In addition to the lack of data points in the low column density limit, Welty et al. (1994) have also noted that several of the low column density data points are incorrect. The large uncertainty in N(H):N(Na I) in the low column density limit makes it useful for only order of magnitude estimates. As a point of reference, however, direct H I observations along several lines of sight in Puppis-Vela at d $`\sim `$ 100 pc exhibit log N(H I) $`\sim `$ 18.1 to 20.1 cm<sup>-2</sup> (Table 5.1, Dring 1997).
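For order-of-magnitude work, the Ferlet et al. (1985) relation is often quoted in the form log N(Na I) ≈ 1.04 log N(H) − 9.09; a minimal sketch of its inversion is given below. The coefficients are quoted from common usage and should be treated as assumptions to be checked against the original paper, and, as stressed above, the relation is unvalidated below N(Na I) $`\sim `$ 10<sup>11</sup> cm<sup>-2</sup>.

```python
# Order-of-magnitude inversion of the empirical N(H)-N(Na I) relation.
# Assumed form: log N(Na I) = A * log N(H) + B, with A ~ 1.04 and
# B ~ -9.09 (approximate coefficients -- verify against Ferlet et al. 1985).
A, B = 1.04, -9.09

def log_nh_from_nai(log_n_nai):
    """Return log N(H) [cm^-2]; unreliable for log N(Na I) < ~11 (see text)."""
    return (log_n_nai - B) / A

for log_nai in (9.0, 10.0, 11.0):
    print(log_nai, round(log_nh_from_nai(log_nai), 2))
```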
## 4 Observations and Data Reduction
Na D<sub>2</sub> and D<sub>1</sub> $`\lambda \lambda `$5889.951, 5895.924 spectra for 11 B-type stars within 200 pc in the direction of the IRAS Vela Shell ($`l`$,$`b`$) $`\sim `$ (263°,−8°) were obtained by M. S. Sahu and A. Blaauw using the Coudé Echelle Spectrograph (CES) on the 1.4 meter Coudé Auxiliary Telescope (CAT) at the European Southern Observatory. The observations were made both on site in La Silla and remotely from Garching in January 1993 (HD 76805) and January 1994. The Long Camera and the UV-coated Ford Aerospace/Loral 2048 $`\times `$ 2048 CCD (#27) were used for all observations. The CCD's pixel size was 15 $`\mu `$m $`\times `$ 15 $`\mu `$m, and it had a low dark current of 3 e<sup>-</sup>/pixel/hour, a low readout noise ($`\sim `$ 6 e<sup>-</sup> rms), and few apparent defects. The net efficiency of the system is 3.8% at 5400 Å and 4.6% at 6450 Å (Pasquini et al. 1992).
Table 1 contains general stellar information for the 11 stars observed. To supplement the sample, we searched the literature and found five additional stars with Na I column density and velocity measurements in the direction towards Puppis-Vela. The following information is listed for each star: HD number, Galactic position (longitude and latitude), MK spectral type classification, visual magnitude, observed Johnson photometric colors from the Tycho catalogue, calculated B-V color excess, distance obtained using Hipparcos trigonometric parallax data, heliocentric radial velocity ($`V_{rad}`$), projected rotational velocity ($`v\mathrm{sin}i`$), and references for the MK spectral type and $`V_{rad}`$, respectively.
A standard data reduction procedure was followed that first included bias subtraction and flat-fielding of the science spectra. A thorium-argon arc lamp was the wavelength calibration source, and the stability of the CES yielded unchanging calibration exposures throughout each night. The calibration spectra were used in conjunction with the Th-Ar line list by Willmarth (1987) to visually identify the emission lines, fit a second degree polynomial to the resulting pixel vs. wavelength arrays, and thereby convert our absorption line spectra from pixel to wavelength space. The absolute wavelength solutions had rms variations of 0.7 km s<sup>-1</sup> or 0.014 Å for the 1993 spectrum and 1.1 km s<sup>-1</sup> or 0.023 Å for the 1994 spectra. The instrumental resolution determined using the FWHM of the thorium lines for both observing runs was 3.1 km s<sup>-1</sup>, equivalent to $`\lambda /\mathrm{\Delta }\lambda `$ $`\sim `$ 95,000.
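A minimal sketch of such a pixel-to-wavelength solution is given below; the pixel centroids and wavelengths are placeholder values, not the actual Th-Ar identifications used here.

```python
import numpy as np

# Placeholder Th-Ar identifications: CCD pixel centroids and laboratory
# wavelengths (Angstroms) -- illustrative values, not the real line list.
pixels = np.array([212.4, 587.1, 903.8, 1344.2, 1760.5])
lams = np.array([5885.70, 5889.20, 5892.15, 5896.25, 5900.10])

# Second-degree polynomial wavelength solution, as described in the text
sol = np.polyfit(pixels, lams, deg=2)
fit = np.polyval(sol, pixels)

# rms of the solution expressed in velocity units, c * d(lambda)/lambda
rms_kms = 2.998e5 * np.sqrt(np.mean(((lams - fit) / lams) ** 2))
print(f"rms of the wavelength solution: {rms_kms:.2f} km/s")
```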
In addition to the interstellar Na I absorption features, the raw spectra were contaminated by many telluric absorption lines, which are abundant at wavelengths surrounding the Na I D doublet. Multiple observations of $`\alpha `$ Vir (HD 116658) were made since it is a nearby star with a high rotational velocity ($`v\mathrm{sin}i`$ = 159 km s<sup>-1</sup>; Hoffleit & Jaschek 1982) and little interstellar Na I absorption. These spectra were used as templates to remove the telluric absorption lines contaminating the spectra. To normalize the template spectra, the continua were fit with cubic splines and then divided by the fits. The optical depths of the atmospheric absorption lines in $`\alpha `$ Vir were adjusted by a multiplicative factor for each star so that the strengths of the well separated telluric lines at wavelengths of 5883-5901 Å matched those in each object spectrum. Next, the object spectra were divided by the scaled $`\alpha `$ Vir template spectra to eliminate the telluric absorption lines. The observed spectra were continuum normalized by fitting a cubic spline and dividing the object spectrum by the fit. The resulting spectra had signal-to-noise ratios of 110 to 250.
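Because scaling a normalized spectrum's optical depth is equivalent to raising the intensity to a power ($`I=e^{-\tau }`$, so $`I^\alpha =e^{-\alpha \tau }`$), the division step can be sketched as below; the exponent $`\alpha `$ and the clipping floor are illustrative choices, not values from this reduction.

```python
import numpy as np

def remove_tellurics(obj_flux, template_flux, alpha):
    """Divide out telluric lines using an optical-depth-scaled template.

    template_flux is the continuum-normalized alpha Vir spectrum; raising
    it to the power alpha rescales its telluric optical depths.  In
    practice alpha is tuned star by star until the isolated telluric
    lines near 5883-5901 A match those in the object spectrum.
    """
    scaled = np.clip(template_flux, 1e-3, None) ** alpha  # avoid divide-by-zero
    return obj_flux / scaled
```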
## 5 Presentation of the Data
To fit the spectral lines and infer physical properties about the interstellar velocity components along each line of sight, we used the profile fitting method and software developed by Welty, Hobbs, & Kulkarni (1994). This technique assumes that the components have Maxwellian velocity distributions. Each line was fit with the smallest number of components necessary; additional components either increased the $`\chi ^2`$ statistic or required that one or more components have unphysical b values. The parameters describing each component were adjusted by an iterative, non-linear, least-squares method to achieve an rms deviation of the absorption line fit comparable to the rms deviation in the stellar continuum, while reducing the value of the $`\chi ^2`$ statistic.
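The essence of the method can be sketched as follows: each component contributes a Gaussian optical-depth profile, the total intensity is $`e^{-\tau }`$, and the component parameters are optimized by non-linear least squares. This is an illustrative re-implementation (it omits, e.g., convolution with the 3.1 km s<sup>-1</sup> instrumental profile), not the Welty, Hobbs, & Kulkarni code itself.

```python
import numpy as np
from scipy.optimize import least_squares

def profile(v, params):
    """Normalized intensity for a sum of Gaussian optical-depth components.

    params is a flat array of (v0, b, tau0) triples: central velocity and
    Doppler b parameter in km/s, and central optical depth.  Components
    are assumed Maxwellian, as in the method used in the text."""
    tau = np.zeros_like(v, dtype=float)
    for v0, b, tau0 in np.reshape(params, (-1, 3)):
        tau += tau0 * np.exp(-(((v - v0) / b) ** 2))
    return np.exp(-tau)

def fit_components(v, flux, guess):
    """Iteratively adjust component parameters by non-linear least squares."""
    result = least_squares(lambda p: profile(v, p) - flux, guess)
    return result.x, np.sqrt(np.mean(result.fun ** 2))  # params, rms of fit
```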
The Na I spectra of the 11 stars observed are displayed in Figure 2. For each sight line, normalized intensity is plotted versus heliocentric velocity for both the D<sub>2</sub> and D<sub>1</sub> lines. Filled circles indicate the data points, the best-fit models are plotted with solid lines, and dashed lines trace the individual Gaussian components. Beneath the D<sub>1</sub> spectra are the residuals to the fits for the D<sub>2</sub> lines. For the spectra of HD 72232 and HD 79416, the y-axes in the residual plots range from −0.1 to +0.1 in units of normalized intensity, whereas for the rest of the spectra, the y-axes of the residual plots range from −0.015 to +0.015. Upward arrows beneath the D<sub>2</sub> lines are located at the projected LIC velocity. No LIC components are revealed in the spectra, as expected (see §2).
Table 2 contains numerical data associated with the model fits and includes: (1) the HD number, (2) central heliocentric velocity, (3) equivalent width, (4) Doppler b parameter, (5) logarithm of the Na I column density, (6) signal-to-noise ratio, and (7) the heliocentric to LSR velocity conversion factor. Uncertainties were estimated by comparing the measured Na I D<sub>2</sub> and D<sub>1</sub> profile fit parameters. If there were no sources of error, these values would be identical, since each pair of doublet lines arises from absorption by the same Na I gas. The maximum difference in measured velocities between corresponding D<sub>2</sub> and D<sub>1</sub> components is $`\pm `$0.5 km s<sup>-1</sup>, which is consistent with the calculated rms velocity dispersion. The uncertainties associated with the column density and b value are generally $`<`$ 10%, and a quality estimate is listed for each spectral line in Table 2.
## 6 Discussion
### 6.1 Identification of Three Velocity Components
Along the eleven sight lines toward early type stars located within 250° $`<`$ $`l`$ $`<`$ 299° and −8° $`<`$ $`b`$ $`<`$ +4°, Na I gas is primarily found in one of three distinct velocity ranges (to an accuracy of $`\sim `$ 1 km s<sup>-1</sup>). Five additional sight lines with Na I kinematic information were found in the literature. General stellar information for these stars is listed in Table 3. Information about the Na I components observed is in Table 4 and includes: (1) HD number, (2) central heliocentric velocity, (3) Doppler b parameter, (4) logarithm of the Na I column density, (5) velocity resolution at which the observation was made, and (6) a reference for the data.
Although the sixteen lines of sight are spread over $`\sim `$ 50° in Galactic longitude, the absorption components do not appear at random velocities. The majority of the velocity components can be associated with one of the following velocity ranges: (A) +6 km s<sup>-1</sup> $`<`$ V<sub>helio</sub> $`<`$ +9 km s<sup>-1</sup>, (B) +12 km s<sup>-1</sup> $`<`$ V<sub>helio</sub> $`<`$ +15 km s<sup>-1</sup>, and (C) +21 km s<sup>-1</sup> $`<`$ V<sub>helio</sub> $`<`$ +23 km s<sup>-1</sup>. The Galactic coordinates of the sixteen lines of sight have been plotted in Figure 3. Filled triangles indicate the locations of stars for which we present new Na I data, while open triangles pinpoint the locations of stars for which data are from the literature. To illustrate the presence of absorption components at the three common velocities described, lines of sight with absorption at velocities A, B, and C have been encircled. No distinction has been made regarding the source of the data in Figures 3b-3d. The symbol sizes indicate the relative Na I column density detected. Small symbols correspond to 10.0 $`<`$ log\[N(Na I)\] $`<`$ 10.4, medium symbols denote 10.4 $`<`$ log\[N(Na I)\] $`<`$ 11.1, and large symbols represent column densities of 11.7 $`<`$ log\[N(Na I)\] $`<`$ 12.6.
Analysis of the distribution of data points in Figure 3 reveals that velocity components A, B, and C are located at distinct regions in $`l`$ and $`b`$. The size of the data points, which corresponds to the Na I column density, reveals that higher column densities of gas at velocities A and B exist in this region of the LISM, whereas the sight lines with absorption at velocity C have lower column densities. Comparing the locations where each of the components is detected, the velocities of the components in the Local Standard of Rest decrease with increasing Galactic longitude. General Galactic rotation also follows this trend, but the Galactic rotation velocities calculated at d = 150 pc, assuming a galactocentric distance of 8.5 kpc and a local circular speed of 220 km s<sup>-1</sup> in the solar neighborhood (Mihalas & Binney 1981), do not match the velocities of the Na I absorption features observed in the LISM. The sight lines to stars at d $`>`$ 200 pc confirm the locations of local gas absorption for components B and C, whereas the region 280° $`<`$ $`l`$ $`<`$ 300°, where component A is detected, is outside of the sample area. Also, from the spectra of the more distant sight lines, absorption features at the velocity of component B were additionally observed along sight lines with $`b`$ $`\sim `$ −10° and $`l`$ $`\sim `$ 260°, and gas with velocities coincident with component C was observed up to the Galactic plane ($`b`$ $`\sim `$ 0°).
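For reference, the expected contribution of differential Galactic rotation at these distances can be sketched as below, assuming a flat rotation curve with the R<sub>0</sub> = 8.5 kpc and $`\mathrm{\Theta }_0`$ = 220 km s<sup>-1</sup> values adopted above; at d = 150 pc toward Puppis-Vela it amounts to only a fraction of a km s<sup>-1</sup>, far below the observed component velocities.

```python
import numpy as np

R0, THETA0 = 8.5, 220.0  # kpc, km/s -- values adopted in the text

def v_lsr_rotation(d_kpc, l_deg, b_deg):
    """LSR radial velocity from a flat rotation curve (Theta = Theta0).

    v_r = (Theta/R - Theta0/R0) * R0 * sin(l) * cos(b), which for a flat
    curve reduces to Theta0 * (R0/R - 1) * sin(l) * cos(b).
    """
    l, b = np.radians(l_deg), np.radians(b_deg)
    d_plane = d_kpc * np.cos(b)  # distance projected onto the Galactic plane
    R = np.sqrt(R0**2 + d_plane**2 - 2.0 * R0 * d_plane * np.cos(l))
    return THETA0 * (R0 / R - 1.0) * np.sin(l) * np.cos(b)

print(v_lsr_rotation(0.15, 263.0, -5.0))  # ~ +0.5 km/s at d = 150 pc
```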
In Figure 4 the Puppis-Vela region is viewed from above the Galactic plane; the figure contains the same data points as Figure 3. Here, distance increases radially outward and Galactic longitude increases counterclockwise. The symbol sizes are defined identically as in Figure 3, and the same Galactic latitude range is applied: −10° $`<`$ $`b`$ $`<`$ +5°. The symbols in Figure 4 indicate the velocity of the absorption component, and have been placed at the appropriate distance and Galactic longitude of the target star. From this perspective, the locations of components with similar velocities are also seen to be grouped together, and not randomly distributed. Arcs have been overplotted to highlight the maximum distance and minimum extent in Galactic longitude of the front edge of a gas component at a particular velocity. The arcs have been labeled with "A," "B," or "C" to match the velocity ranges defined above and on the plot. Note that the arc labeled "C" is dashed in the center to indicate where its presence is uncertain according to the spectra presented here, but when spectra from more distant stars are included, gas with velocity C is seen throughout the entire Galactic longitude range.
The locations and characteristics of the three components are summarized in Table 5. For each component, Table 5 lists the minimum extent in Galactic longitude and latitude where the gas is observed, the maximum distance to the gas, the heliocentric velocity range of the gas, and the spread in Na I column densities for spectral lines within the given velocity range. Three stars (HD 61831, HD 65575, and HD 74195) have been omitted from Figure 4 for reasons discussed in §6.3.
Sight lines either pass completely through the component, or partially penetrate the gas. Using the distribution of the measured column densities towards the target stars of known distances, it is possible to deduce limits on the distances to each gas component. Refining these distance estimates would necessarily require more observations ($`\sim `$ 18 additional sight lines exist) to obtain a finer distribution of observations, so that the boundaries of the gas and the density distribution of the gas within the components could be distinguished.
The three components of gas observed in the LISM have unique characteristics. Sight lines with velocity A gas exhibit rather high column densities (large symbols) along three out of four sight lines. These three high column density sight lines (#1, 3, 8) are located at Galactic latitudes of −4.9° $`<`$ $`b`$ $`<`$ −3.0°, while the fourth target whose spectrum exhibits gas with velocity A is at $`b`$ = +3.8°. Since the latitude of the latter target, HD 106490, is on average 8° away from the others, a smaller column density of gas at V<sub>helio</sub> = +6 to +9 km s<sup>-1</sup> exists above the Galactic plane towards HD 106490 (#2). The absorbing material is confined to a distance $`<`$ 100 pc, since the column density of Na I does not increase when more distant sight lines are observed.
Sight lines with components at velocity B tend to have moderate to high column densities at larger distances. Toward the stars HD 79416 (#9) and HD 72232 (#10) we find log N(Na I) $`\sim `$ 11.8, whereas the remainder of the sight lines with absorption at velocity B range from log N(Na I) = 10.28 to 11.08. One of the seven stars with spectral features at velocity B, HD 74560, has a low column density (small symbol), yet it is physically located between several stars that have substantially higher column densities. This variation in the magnitude of the column densities suggests that the velocity B gas is patchy and inhomogeneous. The high column density along the line of sight towards HD 74195 (#8), observed by Welsh et al. (1994), reveals gas at velocity B; however, the Doppler parameter given, b = 0.3$`\pm `$0.1 km s<sup>-1</sup>, is small compared to the 3.6 km s<sup>-1</sup> velocity resolution, so this data point has been omitted from our analysis.
There are two concentrations of sight lines with velocity C gas: one at $`l`$ $`\sim `$ 270° and one at $`l`$ $`\sim `$ 252°. Low column densities (small symbols) are observed towards the stars grouped at $`l`$ $`\sim `$ 270°, while moderate column densities (medium symbols) are seen toward $`l`$ $`\sim `$ 252°. Stars HD 74146 (#5), HD 74071 (#6), and HD 74560 (#7) may be located near the edge of the component, or the component may be very diffuse, considering the low values of N(Na I) detected.
Because of the absence of observed sight lines with d $`<`$ 200 pc, the arc tracing the estimated boundary of velocity C gas is dashed in the middle. Three nearby stars are located behind the dashed portion of the arc but do not exhibit absorption by gas at velocity C in their Na I spectra. As noted above, it is likely that the gas is patchy and inhomogeneous, which would then account for the observations. The presence of component C in the spectra of the longer sight lines in the sample is also intermittent, again indicating that the component C gas is patchy. One of these stars, HD 79416 (#10), is located above the Galactic plane at $`b`$ = +3.3°, while all of the stars revealing velocity C gas have $`b`$ ≤ −5.9°. The spectra from the more distant stars in our sample include component C gas for $`b`$ $`\lesssim `$ 0° and for the same extent in $`l`$.
With the components of neutral Na I gas arising in the first 200 pc of the ISM toward Puppis-Vela identified, any subsequent spectroscopic study of the neutral ISM in this direction can distinguish between local gas and more distant gas. The presence of absorption components at velocities A, B, or C in the spectra of future studies will not be confused with absorption due to the myriad other structures that exist toward Puppis-Vela. The three-dimensional information about the components (the extent of the gas as projected on the sky, and the limits on the distance to the gas) serves as a basis for estimating the typical velocities and column densities present in the LISM.
In general, the LISM toward Puppis-Vela has low positive velocities ranging from +6 km s<sup>-1</sup> to +23 km s<sup>-1</sup>. Absorbing gas at these velocities is located at distances closer than $`\sim `$ 104 pc, and the Na I column densities detected range from log N = 10.2 cm<sup>-2</sup> to 11.9 cm<sup>-2</sup>, indicating that the cold gas is clumpy or patchy, and not uniformly distributed.
### 6.2 Using LISM Component Data: An Example
To put our results in perspective, we searched the literature for UV interstellar absorption line studies in the direction of Puppis-Vela in order to demonstrate that knowing the properties of the LISM gas in a particular region of the sky is extremely valuable for the interpretation of longer lines of sight. The only recent study has been toward $`\gamma `$<sup>2</sup> Velorum by Fitzpatrick & Spitzer (1994; hereafter FS94). Their Hubble Space Telescope (HST)/Goddard High-Resolution Spectrograph (GHRS) observations have a velocity resolution similar to that of our Na I data (3.1 km s<sup>-1</sup> versus 3.5 km s<sup>-1</sup> for the GHRS data), facilitating a comparison between the two datasets. $`\gamma `$<sup>2</sup> Velorum is located at \[($`l`$ = 263°, $`b`$ = −8°); $`d`$ $`\sim `$ 260 pc\], so it is likely that some of the components observed in the GHRS spectra arise in the LISM. FS94 detect seven components, as described in Table 6. Comparing the velocities of the FS94 features with the three velocity components we detect in the LISM, their component 6 corresponds to our component C and their components 4 and 5 correspond to our component B.
The neutral species detected with GHRS include S I and C I. FS94 fit the S I spectrum with component 4, and the C I feature with components 2 and 4. In addition to these neutral species, we also include a Na I spectrum of $`\gamma `$<sup>2</sup> Vel that was obtained at the same time and with the same instruments as the LISM data toward Puppis-Vela described in §4. The Na I D<sub>2</sub> and D<sub>1</sub> spectra of $`\gamma `$<sup>2</sup> Vel are shown in Figure 5, and the parameters of the fit are listed in Table 7. Na I absorption is seen at velocities corresponding to FS94 components 2, 4, and 6. Comparing the Na I data for the $`\gamma `$<sup>2</sup> Vel line of sight with the distribution of gas in the foreground, we may conclude that component 2 arises in gas located beyond 200 pc, yet closer than 260 pc, the distance to $`\gamma `$<sup>2</sup> Vel. Both components 4 and 6 were detected in the LISM at distances $`<`$ 200 pc.
The presence of component 4 is complicated, however, by the fact that it appears in the spectra of all of the species detected in FS94, both neutral and highly ionized, and in our Na I spectrum. In addition, FS94 note that a Copernicus spectrum toward $`\gamma `$<sup>2</sup> Vel reveals H<sub>2</sub> absorption which may be assigned to either component 4 or a blend of components 2 and 5. Although FS94 assign H II region origins to component 4 absorption because of the presence of species such as C IV and Si IV, it is difficult to reconcile neutrals and molecules coexisting with such highly ionized species.
FS94 propose that the H II region containing the component 4 gas is the $`\sim `$ 40 pc region surrounding a $`\sim `$ 60 pc-radius stellar wind blown bubble centered on $`\gamma `$<sup>2</sup> Vel. We suggest that there are two distinct regions of gas with velocities matching component 4. First, the ionized region extending 100 pc from the star described in FS94 contains ionized gas at V<sub>helio</sub> $`\sim `$ 13 km s<sup>-1</sup>. Second, the neutral gas absorption along the line of sight to $`\gamma `$<sup>2</sup> Vel at V<sub>helio</sub> $`\sim `$ 13 km s<sup>-1</sup> occurs at d $`\lesssim `$ 115 pc, corresponding with our component B. If the H<sub>2</sub> spectrum is fit by component 4, then it is consistent to place the H<sub>2</sub> gas with the Na I in component B, thereby providing a unique example of H<sub>2</sub> gas near the edge of, or possibly within, the Local Bubble.
Not only is the mapping of the nearby gas essential in the characterization of the LISM itself, these data also clarified the complex absorption signature along the $`\gamma `$<sup>2</sup> Vel line of sight, and should prove useful for other Puppis-Vela sight lines. Caution must be observed, however, since it has been shown here and in other papers (Watson & Meyer 1996; Frail et al. 1994; Lauroesch et al. 1998, 1999) that interstellar Na I is patchy, as evidenced by the subparsec-scale structures that appear to be ubiquitous in the diffuse ISM; but this focused study will detect most components specific to the Puppis-Vela LISM, which previous all-sky local gas surveys ignored.
### 6.3 LISM Absorption Components at Peculiar Velocities
Of the 27 absorption features detected (excluding the ultra-high resolution data of Dunkin & Crawford (1999)) along 16 lines of sight, 9 features have velocities outside of velocity ranges A, B, and C. The peculiar velocities of all but three components can be understood with simple explanations. A total of three absorption components toward HD 61831 and HD 72232 lie just outside of the velocity ranges A, B, and C. It is likely that these absorption features are blends of multiple narrow lines that are unresolved at R $`\sim `$ 100,000, as evidenced by the many components distinguished in the ultra-high resolution data towards HD 81188 cited in Table 4. Three additional components toward HD 65575 and HD 106490 have peculiar velocities, but these stars are located at the periphery of the sample, and it is natural that at some point the characteristics of the gas change and components A, B, and C cease being detected.
In total, only 3 of 27 components cannot be simply accounted for; they include the V<sub>helio</sub> = +132.2 km s<sup>-1</sup> and −10.6 km s<sup>-1</sup> lines toward HD 62226 and the V<sub>helio</sub> = −139 km s<sup>-1</sup> line toward HD 74146. The spectra of HD 62226 and HD 74146 are plotted in Figures 6a and 6b, respectively. For each line of sight, the Na I D<sub>2</sub> absorption is plotted at the top, and the D<sub>1</sub> absorption is plotted in the middle of the panel. The residual to the best fit to the D<sub>2</sub> spectrum is on the bottom, plotted with the y-axis ranging from −0.015 to +0.015. Unlike the spectra of the other LISM sight lines in this region, that of HD 62226 contains three components at three distinct velocities. The absorption at V<sub>helio</sub> = +22.2 km s<sup>-1</sup> is consistent with the gas observed towards adjacent sight lines in the region, but the absorption features at V<sub>helio</sub> = −10.6 km s<sup>-1</sup> and +132.2 km s<sup>-1</sup> are not observed towards any other stars in the area.
Although lines of sight adjacent to HD 62226 were also observed, namely HD 61878, HD 64503, and HD 61831, the spectra toward these stars do not show absorption at either V<sub>helio</sub> = −10.6 km s<sup>-1</sup> or +132.2 km s<sup>-1</sup>. Because the sight line towards HD 62226 is fortuitously flanked by these three nearby lines of sight that have also been observed, it may be concluded that the gas producing the V<sub>helio</sub> = −10.6 km s<sup>-1</sup> and +132.2 km s<sup>-1</sup> components is confined to a small region projected on the sky at d $`<`$ 190 pc. The V<sub>helio</sub> = −10.6 km s<sup>-1</sup> component is very weak, but is clearly visible in both the D<sub>2</sub> and D<sub>1</sub> spectra. The D<sub>2</sub>/D<sub>1</sub> ratio of equivalent widths for the +132.2 km s<sup>-1</sup> lines equals 2.2, which is close to the expected ratio of 2 for unsaturated Na I D<sub>2</sub> and D<sub>1</sub> lines.
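When the doublet ratio is near 2, the lines lie on the linear part of the curve of growth and the column density follows directly from the equivalent width; a minimal sketch is given below. The oscillator strengths are quoted from standard compilations and should be verified against a current line list.

```python
def column_density_linear(w_mA, lam_A, f):
    """Optically thin column density: N = 1.13e20 * W[A] / (f * lambda^2[A^2]).

    Valid only for unsaturated lines, which a doublet ratio
    W(D2)/W(D1) ~ 2 indicates; W is the equivalent width in mA.
    """
    return 1.13e20 * (w_mA / 1000.0) / (f * lam_A**2)

# Na I D2 and D1 oscillator strengths (~0.641 and ~0.320; verify values)
print(column_density_linear(50.0, 5889.951, 0.641))  # ~2.5e11 cm^-2
print(column_density_linear(25.0, 5895.924, 0.320))  # ~2.5e11 cm^-2
```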
The other sight line with unexplained components is that of HD 74146, which also contains an absorption component at a high velocity, but here the feature is a broad (b = 16.7 km s<sup>-1</sup>) line at negative velocity, V<sub>helio</sub> = −139 km s<sup>-1</sup>, with D<sub>2</sub>/D<sub>1</sub> = 0.8. The breadth of the line and the low D<sub>2</sub>/D<sub>1</sub> ratio are atypical of unsaturated interstellar lines, and it is unlikely that such a weak line could be saturated. Additionally, several sight lines adjacent to HD 74146 were also observed (HD 76805, HD 74071, and HD 74560), though their spectra did not reveal absorption at high negative velocities.
A literature search for information regarding whether or not circumstellar material has been observed around HD 62226 or HD 74146 yielded nothing. Additionally, we attempted to use International Ultraviolet Explorer (IUE) spectra to determine the origin of the high velocity absorption features. HD 74146 was not observed with IUE, and only one high dispersion spectrum of HD 62226 was obtained. The single spectrum toward HD 62226 had a low signal-to-noise ratio (S/N $`<`$ 10), so the presence or absence of high velocity features associated with characteristic circumstellar or interstellar absorption could not be ascertained. With the data available, we were not able to determine whether the high velocity features in the spectra of HD 62226 and HD 74146 were of circumstellar or interstellar origin.
### 6.4 The $`\beta `$ CMa Tunnel
In §2 it was mentioned that the extension of the Local Bubble (or Cavity) known as the $`\beta `$ CMa tunnel is adjacent to Puppis-Vela when projected on the sky. This tunnel of rarefied hot gas has been estimated to extend $`\sim `$ 250-300 pc (Sfeir et al. 1999; Welsh 1991; Welsh, Crifo, & Lallement 1998), and appears to be almost free of neutral gas. For instance, H I column densities of N(H I) $`\sim `$ 10<sup>18</sup> cm<sup>-2</sup> measured toward the 153$`\pm `$15 pc line of sight to $`\beta `$ CMa are thought to arise primarily in gas along the first $`\sim `$ 5 pc, within the LIC (Gry et al. 1985). In this section, previous estimates of the tunnel's extent will be reviewed and a new measurement of the size of the tunnel, within the first 150 pc, will be presented.
After the low gas densities towards several stars in the direction of $`\beta `$ CMa were noticed, subsequent studies of the LISM probed adjacent sight lines to ascertain the size of the tunnel. Using optical absorption spectroscopy to observe the Na I D doublet, three estimates of the extent of the tunnel were made. First, Welsh (1991) assigned minimum angular dimensions to the tunnel: an elliptical hole located at 226° $`\le `$ $`l`$ $`\le `$ 242°, −20° $`\le `$ $`b`$ $`\le `$ −11°, extending to a depth of $`\sim `$ 300 pc. Later, Welsh, Crifo, & Lallement (1998) revised the size of the tunnel and substantially widened the estimate. They reported minimum approximate dimensions of the low gas density tunnel towards $`\beta `$ CMa of 250 pc long by 90 pc wide. Analysis of Figure 2 in Welsh et al. indicates that the tunnel narrows at larger distances; specifically, the extent of the tunnel is 210° $`\le `$ $`l`$ $`\le `$ 277° at d = 150 pc, while at d = 250 pc regions of higher column density gas bound the tunnel to within 215° $`\le `$ $`l`$ $`\le `$ 240°. Sfeir et al. (1999) included the $`\beta `$ CMa tunnel in their maps of the neutral gas associated with the Local Bubble, but used only a few data points to confine the low gas density tunnel. Their data are inconclusive regarding the depth of the tunnel, and they proposed that the tunnel is either terminated at a distance of $`\sim `$ 250 pc by Na I absorbing gas producing equivalent widths of $`\sim `$ 50 mÅ, or that the boundary of the tunnel is closer but contains several perforations.
The maps illustrating the tunnel in Welsh et al. (1998) and Sfeir et al. (1999) do not represent differences in the stars' Galactic latitude, however. Both papers show the tunnel on plots of distance versus Galactic longitude with the stars projected onto the Galactic plane, thus obscuring information regarding the Galactic latitude of the target stars. Almost all of the published Na I spectral data in the direction of $`\beta `$ CMa pertain to stars below the Galactic plane at $`b`$ $`\sim `$ −10° to −30°. Our observations add a new dimension to the tunnel boundary, since the target stars in our sample are concentrated within $`b`$ $`\sim `$ −10° to 0°.
In addition to the Na I column density measurements given in Tables 2 and 4 for stars toward Puppis-Vela, N(Na I) measurements along sight lines in and around the $`\beta `$ CMa tunnel were compiled. Stellar information (as was given in Table 1) about these published sight lines is in Table 8, and the corresponding Na I column densities and references are in Table 9.
The column densities toward sight lines listed in Tables 2, 4, and 9 and projected onto the Galactic plane are illustrated as a function of distance and Galactic longitude in Figure 7. The Galactic latitudes of the lines of sight have been divided into two groups: those at -10° $`<`$ $`b`$ $`<`$ 0° are indicated with filled symbols, while targets located at -30° $`<`$ $`b`$ $`<`$ -10° are plotted using open symbols. Diamonds are used when precise measurements of the Na I column density are known and triangles are plotted for sight lines where only N(Na I) upper limits have been determined. The symbol sizes suggest the relative column density detected along each line of sight, with larger symbols indicating larger column densities, as defined in the caption to Figure 3.
The thin solid lines in Figure 7 depict the longitudinal angular extent of the tunnel estimated by Welsh (1991). In view of the data shown in Figure 7, it appears that his estimate should be expanded by $`\sim `$ 5° on either edge out to a distance of $`\sim `$ 150 pc. Note that all of the sight lines supporting this tunnel estimate have $`b`$ $`<`$ -10°. So far, no observations have been made to constrain the tunnel's extent in latitude or length. The expanded extent of the tunnel proposed by Welsh et al. (1998) (in their Figure 2) is sketched in Figure 7 with dotted lines. With the additional lines of sight near the Galactic plane presented here, the Welsh et al. high longitude tunnel boundary appears misplaced. One or more of the following reasons may account for this. The new data reveal that the first $`\sim `$ 150 pc of the $`\beta `$ CMa tunnel do not extend beyond $`l`$ $`\sim `$ 267°, near the plane of the Galaxy (-10° $`<`$ $`b`$ $`<`$ 0°). There are no data to confirm whether or not the tunnel extends beyond $`l`$ = 267° for -30° $`<`$ $`b`$ $`<`$ -10°. At distances of $`\sim `$ 150 to 200 pc, the tunnel does not extend beyond $`l`$ $`\sim `$ 251° near the plane of the Galaxy, as evidenced by the filled data points at $`l`$ $`>`$ 250° in Figure 7. There is no observational evidence to determine whether the $`\beta `$ CMa tunnel extends past $`l`$ $`\sim `$ 250° at lower Galactic latitudes, since no sight lines have been observed there. The lower Galactic longitude boundary also appears to encompass sight lines with high Na I column densities, and should be relocated to $`l`$ $`\sim `$ 215°. From the compilation of data on almost 40 lines of sight, it appears that outside the Local Bubble (d $`>`$ 70 pc), low neutral gas densities are to be found between $`l`$ $`\sim `$ 215° to 250° and $`b`$ $`\sim `$ -21° to -9°. The arc drawn in Figure 7 at d $`\sim `$ 150 pc illustrates the extent in Galactic longitude over which low column density lines of sight with $`b`$ $`\le `$ -10° have been observed. The arrows pointing to larger distances indicate that this bound on the $`\beta `$ CMa tunnel is only a lower limit, as dictated by the present data.
The most distant sight lines included in this sample have d $`\sim `$ 200 pc and $`b`$ $`\sim `$ -30°. At this limit, the sight lines have not penetrated the scale height of the disk of the Galaxy (see Dame & Thaddeus (1994) and Dickey & Lockman (1990)). Studies of more distant sight lines attempting to map the distant end of the tunnel could easily be confused by the Galactic disk's decreasing density gradient, whereby stars at large distances and lower latitudes might exhibit lower N(Na I) measurements because of the geometry of the Galaxy, and not necessarily because of extensions in the Local Bubble. From nearby interstellar Na I measurements, there is a region of low density towards $`\beta `$ CMa, but it is not clear whether the tunnel is present in the plane of the Galaxy. Additional observations along sight lines 0° to 30° below the Galactic plane are needed to clarify the three dimensional extent of the $`\beta `$ CMa tunnel.
## 7 Conclusions
We have presented high resolution Na I spectra toward 11 early type stars plus kinematic data for 5 lines of sight taken from the literature to study the LISM in the direction of Puppis-Vela. Additionally, we have compiled a list of Na I column density measurements made toward nearby (d $`<`$ 200 pc) sight lines in the $`\beta `$ CMa tunnel. Our conclusions are stated below:
1. Observations of Na I in the LISM toward Puppis-Vela revealed absorption at three distinct velocities with the following properties: Component A: [$`l`$ $`\sim `$ 276° to 298°, $`b`$ $`\sim `$ -5° to +4°], V<sub>helio</sub> = +6 to +9 km s<sup>-1</sup>, and d $`\le `$ 104 pc; Component B: [$`l`$ $`\sim `$ 264° to 276°, $`b`$ $`\sim `$ -7° to +3°], V<sub>helio</sub> = +12 to +15 km s<sup>-1</sup>, and d $`\le `$ 115 pc; Component C: [$`l`$ $`\sim `$ 252° to 271°, $`b`$ $`\sim `$ -8° to -6°], V<sub>helio</sub> = +21 to +23 km s<sup>-1</sup>, and d $`\le `$ 131 pc. This identification of LISM gas will enable future studies of the Puppis-Vela ISM to separate local gas absorption from more distant features, as was illustrated for the $`\gamma `$<sup>2</sup> Vel line of sight (a velocity-based classification of this kind is sketched after this list). Including a distinction of the distance at which gas exists, as well as its position on the sky, is a fundamental step in the new field of three dimensional astronephography.
2. The LIC is not detected in Na I absorption toward Puppis-Vela because the column densities of LIC gas in this direction are too low, generally $`<`$ 10<sup>11</sup> cm<sup>-2</sup>.
3. Low column density sight lines in the $`\beta `$ CMa tunnel are confined to $`l`$ = 215° to 250° and $`b`$ = -21° to -9°, to a distance of $`\sim `$ 150 pc. New observations closer to the Galactic plane (-10° $`<`$ $`b`$ $`<`$ 0°) reveal higher column densities, suggesting that the tunnel may not extend in latitude to $`b`$ = 0°. More observations near the Galactic plane and at distances greater than 150 pc are needed to ascertain the true size and location of the $`\beta `$ CMa tunnel.
4. Although most of the absorption components seen in the spectra of stars within 200 pc toward Puppis-Vela could be assigned to one of the three components, A, B, or C, a few components had peculiar velocities. Toward HD 62226 and HD 74146, high velocity features at V<sub>helio</sub> = +132.2 km s<sup>-1</sup> and V<sub>helio</sub> = -139 km s<sup>-1</sup>, respectively, were present. In both cases, nearby sight lines were also observed and contained no high velocity features. The origin of the high velocity gas components, be they circumstellar or interstellar, is unknown.
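The component boxes in conclusion 1 can be turned into a simple membership test. A minimal sketch (Python) is given below; the box boundaries are taken directly from conclusion 1, while the function name and the example calls are purely illustrative, and real features near a box edge would require the measurement uncertainties to be folded in.

```python
def lism_component(l_deg, b_deg, v_helio):
    """Assign a Na I absorption feature toward Puppis-Vela to component
    A, B, or C using the longitude, latitude, and velocity ranges of
    conclusion 1; features outside all three boxes (e.g. the high
    velocity gas toward HD 62226 and HD 74146) return None."""
    boxes = {
        "A": ((276.0, 298.0), (-5.0, 4.0), (6.0, 9.0)),
        "B": ((264.0, 276.0), (-7.0, 3.0), (12.0, 15.0)),
        "C": ((252.0, 271.0), (-8.0, -6.0), (21.0, 23.0)),
    }
    for name, ((l0, l1), (b0, b1), (v0, v1)) in boxes.items():
        if l0 <= l_deg <= l1 and b0 <= b_deg <= b1 and v0 <= v_helio <= v1:
            return name
    return None

print(lism_component(262.8, -7.7, 22.0))   # a sight line inside the component C box
print(lism_component(256.0, -5.0, 132.2))  # high velocity feature -> None
```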
ANC and HWM acknowledge support from NASA contract NAS 5-32985 to Johns Hopkins University and MSS acknowledges support from a GTO grant to the STIS IDT. We thank F. Bruhweiler, W. Landsman, and J. Lauroesch for helpful comments on the manuscript and E. Burgh for a useful graphics routine. We acknowledge use of the Simbad Database at the Centre de Données astronomiques de Strasbourg (http://simbad.u-strasbg.fr/Simbad).
Figure Captions
Figure 1. A schematic diagram of the structures immediately surrounding the Sun. The Sun (at center) is embedded in the LIC and near the G cloud, which are both within the Local Bubble. Approximate locations, sizes, and angles subtended by the clouds are given by Linsky et al. (1999) and Lallement (1998). The velocities of the LIC and G cloud shown are with respect to the heliocentric frame of reference. The arrow extending from the Sun shows the direction of the motion of the Sun relative to the Local Standard of Rest (LSR). Dashed lines indicate the approximate angle subtended by Puppis-Vela.
Figure 2. Na I D<sub>2</sub> and D<sub>1</sub> spectra of 11 sight lines towards Puppis-Vela with d $`<`$ 200 pc. Normalized intensity is plotted versus heliocentric velocity (km s<sup>-1</sup>) for each star. The best-fit models to the absorption line profiles are shown as solid lines and the individual component fits are indicated by dashed lines. The bottom panel for each line of sight shows the residuals to the fits, with the y-axis ranging from -0.1 to +0.1 for HD 72232 and HD 79416, and from -0.015 to +0.015 for all other lines of sight. The arrow on each plot indicates the projected LIC velocity for the particular sight line. The complete Na I spectra toward HD 62226 and HD 74146 are shown in Figure 6.
Figure 3. Panel (a) shows the Galactic positions of the LISM stars towards Puppis-Vela for which Na I spectra were obtained (filled symbols) and those for which Na I kinematic data were found in the literature (open symbols). Only stars at Galactic latitudes of -10° $`<`$ $`b`$ $`<`$ +5° have been included. Panel (b) shows where Na I was detected with velocities of +6 to +9 km s<sup>-1</sup>, (c) shows where velocities of +12 to +15 km s<sup>-1</sup> were seen, and (d) gives the locations of +21 to +23 km s<sup>-1</sup> Na I gas. In panels (b)-(d), the relative symbol size indicates the Na I column density. Small symbol: 10.00 $`\le `$ log N(Na I) $`<`$ 10.50. Medium symbol: 10.50 $`\le `$ log N(Na I) $`\le `$ 11.10. Large symbol: 11.70 $`\le `$ log N(Na I) $`\le `$ 12.60.
Figure 4. The LISM stars in Puppis-Vela for which there exist Na I kinematic data, projected onto the Galactic plane. The distance to the stars increases radially, and the Galactic longitude increases along the arc. All stars have Galactic latitudes of -10° $`\le `$ $`b`$ $`\le `$ +5°. The diamonds denote the sight lines in which (A) +6 $`\le `$ V<sub>helio</sub> $`\le `$ +9 km s<sup>-1</sup> Na I gas was detected; the triangles and squares mark where velocities of (B) +12 $`\le `$ V<sub>helio</sub> $`\le `$ +15 km s<sup>-1</sup> and (C) +21 $`\le `$ V<sub>helio</sub> $`\le `$ +23 km s<sup>-1</sup> Na I gas, respectively, were seen. The symbol sizes depict N(Na I) as defined for Figure 3. The arcs are drawn to indicate the maximum distance and minimum extent in Galactic longitude of gas pockets with velocities A, B, and C. The stars shown are: 1) HD 103079, 2) HD 106490, 3) HD 93030, 4) HD 76805, 5) HD 74146, 6) HD 74071, 7) HD 74560, 8) HD 81188, 9) HD 79416, 10) HD 72232, 11) HD 61878, 12) HD 62226, 13) HD 64503.
Figure 5. Na I D<sub>2</sub> (top) and D<sub>1</sub> (middle) spectra along the line of sight to $`\gamma `$<sup>2</sup> Vel (HD 68273). The data points from each spectrum are represented as dots, a solid line indicates the best-fit model, and the dashed lines show the three components that compose the fit. The residual to the D<sub>2</sub> line fit is plotted at the bottom of the panel where the y-axis ranges between $`\pm `$0.05.
Figure 6. Complete spectra for the sight lines to a) HD 62226 and b) HD 74146. In each panel, the normalized intensity of the Na I D<sub>2</sub> line is on top and the D<sub>1</sub> line is in the middle. At the bottom of each panel the residuals to the D<sub>2</sub> model fits are shown, with the y-axes ranging from -0.015 to +0.015. An IUE spectrum of HD 62226 was analyzed to help determine the origin of the high velocity components, but was inconclusive due to the low signal-to-noise ratio.
Figure 7. Na I absorption line data in the direction of the $`\beta `$ CMa tunnel. The solid lines confine the first 200 pc of the $`\beta `$ CMa tunnel as proposed by Welsh (1991) and the dotted lines outline the first 200 pc of the tunnel described in Welsh et al. (1998). The thick arc highlights the longitudinal extent of the $`\beta `$ CMa tunnel according to the compilation of the data and pertains to the region below the Galactic plane where -30° $`<`$ $`b`$ $`<`$ -10°. The arrows indicate that it is a lower limit on the distance to the end of the tunnel. Symbol sizes indicate the column density of Na I observed for given lines of sight as defined for Figure 3. If only an upper limit on the column density for a sight line is known, a downward triangle is used for the data point rather than a diamond. Filled symbols correspond to stars with -10° $`<`$ $`b`$ $`<`$ 0°, and open symbols are used for stars with -30° $`<`$ $`b`$ $`<`$ -10°. |
no-problem/0001/cond-mat0001294.html | ar5iv | text | # References
Avalanches of popping bubbles
in collapsing foams
N.Vandewalle<sup>1,2</sup>, J.F.Lentz<sup>1</sup>, S.Dorbolo<sup>1</sup> and F.Brisbois<sup>1</sup>
<sup>1</sup> GRASP, Institut de Physique B5, Université de Liège,
B-4000 Liège, Belgium.
<sup>2</sup> Laboratoire des Milieux Désordonnés et Hétérogènes, Tour 13, Case 86,
4 place Jussieu, F-75252 Paris Cedex 05, France.
keywords: avalanches, foams, topological rearrangements
Cellular structures are very common in nature. Each cell of the cellular structure can be a bubble in a beer, a biological cell in a tissue, a grain in a polycrystal or a magnetic domain in a solid. Foams have become paradigms of disordered cellular systems. Among the physical properties of interest is the long-term behaviour of a froth driven by topological rearrangements. In the present work, we report acoustic experiments on foam systems. We have recorded the sound emitted by crackling cells during the collapse of foams. The sound pattern is then analyzed using classical methods of statistical physics. Fundamental processes at the surface of the collapsing foam are found. In particular, size is not a relevant parameter for exploding bubbles.
Foams have been created by blowing air at the base of a water/soap mixture in a cylindrical vessel. A typical resulting foam is illustrated in Figure 1. Polygonal bubbles are observed near the air/foam interface while spherical bubbles are seen at the water/foam interface. The thickness of the foam layer can be controlled such that the foam can be considered as dry in the region of our interest, i.e. the air/foam interface. Near this interface, the evolution of dry foam is slow and driven by geometrical constraints like the motion of vertices and edges. In addition, many topological rearrangements such as cell side switching or vertex disappearance also take place in the foam. The combination of bubble area growth/decay and topological rearrangements induces a complex dynamics in which subtle correlations are found, as described e.g. by the so-called Aboav-Weaire law.
Moreover, one should note that many bubble explosions occur at the air/foam interface. Indeed, each bubble at the surface presents a very thin and curved face which is more fragile than the planar faces located inside the foam. The explosion of the surface bubbles leads to a macroscopic collapse of the foam, as usually observed in beer and soap froths. A relevant question concerns the type(s) of bubbles which explode at the air/foam interface. Another fundamental question is whether or not the bubble explosions are correlated. If they are, what is the nature of these correlations?
In order to study the explosion of bubbles, a microphone has been placed above the foam. The sound of popping cells has been recorded at a sampling rate of 32 $`kHz`$. In order to minimize the external noise as well as any secondary reflection of acoustic waves, the setup has been placed in a special "anechoic chamber". Different commercial soaps have been tested. Containers of various sections have also been used.
A typical recording of the acoustic activity $`A(t)`$ (dimensionless) is presented in Figure 2. Peaks of various amplitudes are distributed along the time series. Each peak represents the explosion of one bubble at the surface of the collapsing foam. The characteristic duration of a peak is typically $`\tau _0=5\mu s`$. A white noise component having a very small amplitude is also present. In order to extract the exact position and size of each peak, we have numerically treated the time series. The noise has been first removed by selecting a lower cutoff for peak amplitudes (noise thresholding). The resulting filtered signal $`\stackrel{~}{A}(t)`$ is also shown in Figure 2. In time series 20 minutes long, we typically observe 5000 events. One should note that the exploding events are not homogeneously distributed along the time axis. Bursts of acoustic activity separated by periods of stasis are observed. This heterogeneous acoustic activity will be characterized below.
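As a concrete illustration of the numerical treatment, the noise thresholding and event grouping described above, together with the peak-power integral defined in Eq. (1) below, can be sketched as follows (Python). The threshold and the minimum silent gap separating two events are illustrative parameters, not values used in the experiment.

```python
import numpy as np

def peak_times_and_powers(a, dt, threshold, min_gap):
    """Extract popping events from a sampled acoustic signal.

    a         : dimensionless amplitude samples A(t)
    dt        : sampling interval in seconds (32 kHz -> dt = 1/32000)
    threshold : noise cutoff defining the filtered signal A~(t)
    min_gap   : number of silent samples separating two distinct peaks
    (threshold and min_gap are illustrative choices, not values from the paper)
    """
    a_f = np.where(np.abs(a) > threshold, a, 0.0)        # noise thresholding
    hot = np.flatnonzero(a_f)                            # samples above the cutoff
    if hot.size == 0:
        return np.array([]), np.array([])
    # split into clusters wherever the silence exceeds min_gap samples
    breaks = np.flatnonzero(np.diff(hot) > min_gap)
    clusters = np.split(hot, breaks + 1)
    times = np.array([c[0] * dt for c in clusters])      # arrival time of each peak
    powers = np.array([np.sum(a_f[c] ** 2) * dt for c in clusters])  # Eq. (1)
    return times, powers
```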
After noise filtering, the power $`P`$ dissipated within a small interval $`\mathrm{\Delta }t\sim \tau _0`$ is then calculated for each peak, i.e.
$$P=\int _{t_0-\mathrm{\Delta }t/2}^{t_0+\mathrm{\Delta }t/2}\stackrel{~}{A}^2(t)𝑑t.$$
(1)
The dissipated power $`P`$ is given in arbitrary units; nevertheless, it is assumed to be proportional to the energy dissipated during the explosion of the bubble membrane, i.e. to be proportional to the surface area of the disappearing cell. Figure 3 presents a typical histogram $`h(P)`$ of the frequency of peak occurrence as a function of the peak intensity $`P`$. This distribution presents a maximum and a long "tail". The inset of Figure 3 shows a log-log plot of the long tail. For large $`P`$ values, $`h(P)`$ behaves like a power law
$$h(P)\sim P^{-\nu }.$$
(2)
This power law behaviour of the tail holds over 1.5 decades in the best cases. Both the position of the maximum of $`h(P)`$ and the tail exponent $`\nu `$ have non-universal values, i.e. they depend mainly on the nature of the soap/water mixture. Exponent values ranging from $`\nu =1.5`$ up to 3.0 have been found. The asymptotic power law behaviour indicates that the distribution of peak amplitudes is quite broad. In other words, no sharp cutoff is observed in $`h(P)`$. This implies that a wide variety of bubble sizes is exploding and that large bubbles are sometimes stronger than small ones. This experimental result contrasts with the widely accepted and intuitive argument that only large bubbles are fragile and explode at the air/foam interface. The latter intuitive argument would lead to a sharp exponential cutoff in $`h(P)`$. On the contrary, our experiment demonstrates that no critical bubble size exists. The stabilization of some large bubbles with respect to apparently weaker smaller bubbles should find some explanation in the neighbouring environment of each bubble. This should be confirmed by direct observation of a collapsing foam, which is outside the scope of this work.
A wide variety of bubbles (small and large ones) thus participates in the collapse of the foam. Let us investigate whether correlations exist or not. Figure 4 presents in a log-log plot the histogram $`h(\tau )`$ of the interpeak durations $`\tau `$, i.e. the time intervals $`\tau `$ separating successive bubble explosions. Four different series obtained with four different soap/water mixtures in different containers are illustrated. For each analyzed series, the data points have been rescaled by a constant factor in order to emphasize that all series seem to behave like a power law
$$h(\tau )\sim \tau ^{-\alpha }$$
(3)
with an exponent $`\alpha =1.0\pm 0.1`$. A power law with $`\alpha =1`$ is illustrated by the continuous line in Figure 4. Each histogram spans about 2 decades. The value of the exponent $`\alpha `$ does not change when other types of soap/water mixtures are used. Only the total number of explosions as well as the total dissipated power may change.
For a homogeneous (random) occurrence of bubble explosions, one expects a Poissonian distribution, i.e. an exponential decay of $`h(\tau )`$. The power law behaviour indicates that bubble explosions are correlated events! Moreover, a unique value for $`\alpha `$ implies that the temporal correlations in the collapsing foam are universal.
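The corresponding analysis can be sketched as follows (Python): collect the waiting times between successive peaks and build a logarithmically binned histogram, whose slope on a log-log plot is then compared with $`\alpha =1`$. The number of bins is an illustrative choice.

```python
import numpy as np

def interpeak_histogram(times, n_bins=20):
    """Logarithmically binned histogram h(tau) of waiting times between
    successive peaks; on a log-log plot, h(tau) ~ tau^-1 appears as a
    straight line of slope -1, whereas a Poisson process would bend over
    exponentially at large tau."""
    tau = np.diff(np.sort(times))
    tau = tau[tau > 0]
    edges = np.logspace(np.log10(tau.min()), np.log10(tau.max()), n_bins + 1)
    counts, _ = np.histogram(tau, bins=edges)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    return centers, counts / widths             # density, comparable across bins

# Rough slope estimate over the populated bins:
# dens = counts / widths
# slope, _ = np.polyfit(np.log10(centers[counts > 0]),
#                       np.log10(dens[counts > 0]), 1)
```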
The power law behaviour of $`h(\tau )`$ and the long tail of $`h(P)`$ suggest that the energy release is discontinuous and quite similar to that of self-organized critical systems. Simulations and experiments have indeed shown that a slowly driven foam can be described by avalanches having a broad distribution of event rate versus energy release. In these studies, events are abrupt topological rearrangements (mainly coarsening and vanishing bubbles), while in the present study, events are exploding bubbles at the surface of a collapsing foam. One understands that the explosion of a bubble implies a topological change for neighbouring bubbles which may propagate in the bulk of the foam as well as along the air/foam interface. These topological changes may destabilize other bubbles at the interface and thus create avalanches of popping bubbles.
In summary, our acoustic experiments have demonstrated the intermittent and correlated character of popping bubbles in a collapsing dry foam. In other words, the dynamics of a collapsing foam is discontinuous and evolves by sudden bursts of activity separated by periods of stasis. These avalanches are correlated. Moreover, we have found that a wide variety of bubble sizes participates in the phenomenon.
Acknowledgements
NV and SD are financially supported by FNRS and FRIA, respectively. Thanks to the CEDIA laboratory at the University of Liège and in particular to A. Calderon, who provided access to the anechoic chambers, allowing for high quality acoustic recordings.
Correspondence and requests should be addressed to Nicolas Vandewalle, e-mail: nvandewalle@ulg.ac.be
Figure Captions
Figure 1 - Picture of a typical foam obtained by blowing air at the base of a water/soap mixture.
Figure 2 - Acoustic recording of crackling bubbles in a collapsing dry foam: (top) typical recording and (bottom) filtered recording from which a white noise component has been removed.
Figure 3 - The histogram $`h`$ of the power $`P`$ dissipated during each bubble explosion. The inset presents the tail of $`h`$ in a log-log plot. The continuous line represents a power law fit.
Figure 4 - Log-log plot of the histogram $`h`$ of the interpeak durations $`\tau `$. Two cases are illustrated. The continuous line is a power law with an exponent $`\alpha =1`$. |
no-problem/0001/hep-ph0001061.html | ar5iv | text | # 1 Magnitudes of the imaginary parts of the dominant SM amplitudes and the contributions coming from s-channel KK-excitations. The contributions for $`\theta _s=90^o`$ are represented by triangles for the SM contributions, by circles for $`F_{+-+-}`$ with $`M_s=1`$ TeV, and by diamonds for $`F_{+-+-}`$ with $`M_s=2`$ TeV. In this scenario, the amplitude $`F_{+--+}`$ equals $`F_{+-+-}`$.
Accepted for publication in Physics Letters B
UM-P 014-2000
RCHEP 002-2000
A note on low scale unification and gamma-gamma scattering
S.R. Choudhury (src@ducos.ernet.in)
Department of Physics, Delhi University, Delhi, India,
A. Cornell (a.cornell@tauon.ph.unimelb.edu.au) and G.C. Joshi (joshi@physics.unimelb.edu.au)
School of Physics, University of Melbourne,
Parkville, Victoria 3108, Australia
$`31^{st}`$ of October, 1999
## Abstract
In this note we study an interesting effect of low energy gravity on photon-photon scattering at high energies.
In a recent paper Gounaris, Porfyriadis and Renard have highlighted the possibility of exploring new physics through the scattering process $`\gamma \gamma \to \gamma \gamma `$ at c.m. energies in the TeV range. The $`\gamma \gamma `$ mode is a possible mode of operation of an $`e^+e^-`$ linear collider, and this makes the study of this particular scattering at the TeV scale useful in a very practical way and not just of purely theoretical interest. Recently a lot of interest has also been generated in TeV-scale unification, wherein one envisages a Kaluza-Klein (KK) scenario in (4+n) dimensions. Ordinary matter and gauge fields are localized in a (3+1)-dimensional brane configuration, whereas gravity propagates in (4+n) dimensions, with the n extra dimensions compactified. For n=2, the compactification scale turns out to be in the mm range with a weak Planck scale in the TeV range (which acts as a cut-off for all effective theories), and these numbers make this particular choice of n particularly interesting from the point of view of collider experiments.
In the scenario just outlined, excitations of the gravitons in the compactified dimensions would appear in the (3+1)-dimensional world as towers of particles. At every level, there are one massive spin-2 state, one spin-1 state and one spin-0 state. The spin-1 states decouple from ordinary matter and the spin-0 states couple through the dilaton mode. The spin-2 KK-modes, with masses starting from the 1/R scale and effectively cut off at $`M_s`$ (that is, at some scale of the order of 1 or 2 TeV), couple to fermions and also to gauge particles and are the most visible signature of theories with compactification at a low scale of the order of mm. The coupling of an individual KK-state is not of much interest since it is gravitationally suppressed; their interaction summed over the tower of states, however, is significant. It acquires an effective strength that can be phenomenologically relevant to processes at the TeV scale. This note estimates such effects for $`\gamma \gamma `$ scattering at the TeV scale. We show that the inclusion of the spin-2 KK-excitations of gravitons in the TeV range results in changes in the scattering amplitude that are definitely within the measurement range of collider experiments.
Consider the scattering process:
$$\gamma (k_1,\lambda _1)+\gamma (k_2,\lambda _2)\to \gamma (k_3,\lambda _3)+\gamma (k_4,\lambda _4)$$
where the $`k`$'s and the $`\lambda `$'s refer respectively to the momenta and helicities of the particles in the c.m. frame. The helicities take on the values +1 and -1. The invariant scattering amplitude for this process is denoted by $`F_{h_1h_2h_3h_4}(s,t,u)`$, where the $`h`$'s take on the signs of the helicities and $`s,t`$ are the usual Mandelstam variables. These amplitudes in the Standard Model (SM) have been calculated by Jikia and Tkabladze. At values of $`s`$ and $`t`$ such that $`s\gg |t|\gg M_W^2`$, the amplitudes $`F_{++++}`$ and $`F_{+-+-}`$, together with their parity equivalents, dominate:
$$F_{++++}(s,t,u)=F_{+-+-}(s,t,u)$$
(1)
$$F_{++++}(s,t,u)=(16\alpha ^2i\pi )(s/t)\mathrm{log}(-t/M_W^2)$$
(2)
Let us now estimate the contributions coming from KK-excitations in all three channels. The vertex connecting the KK-excitations with a pair of photons has been explicitly worked out by Han, Lykken and Zhang. The resultant amplitudes have the symmetry:
$$F_{++++}(s,t,u)=F_{+-+-}(u,t,s)$$
(3)
$$F_{----}(s,t,u)=F_{-+-+}(u,t,s)$$
(4)
$$F_{+--+}(s,t,u)=F_{+-+-}(s,u,t)$$
(5)
and
$$F_{+-+-}(s,t,u)=(i/4)(\kappa ^2)(u^2)[D(s)+D(t)]$$
(6)
$$F_{-+-+}(s,t,u)=F_{+-+-}(s,t,u)$$
(7)
where
$$D(s)=\sum _{KK}\left(\frac{1}{s-M_{KK}^2}\right)$$
(8)
with $`M_{KK}`$ denoting the mass of a KK-excitation; the summation is over the entire tower of excitations. In the last equation, $`M_{KK}`$ is understood to include the width term $`-i\mathrm{\Gamma }/2`$ as well.
The contributions of these KK-excitations are complex in general. However, with respect to the SM contributions, Gounaris et al. have made the very important observation that the SM contribution is dominantly imaginary for values of $`s`$ much greater than $`M_W^2`$ for directions away from the forward one. When the contributions coming from the KK-exchanges enter as corrections to the main SM contributions, clearly only the imaginary parts of the contributions become relevant. The important point to note is that in the expression for the amplitudes only the s-channel resonance contributes an imaginary part, whereas the other terms are completely real. Thus, in the range of energies where the KK-contributions are expected to be in the nature of correction terms to the main SM contribution, only the s-channel resonance contributions need be taken into account. This means only $`F_{+-+-}`$ and $`F_{+--+}`$, together with their parity equivalents, need be considered. The imaginary parts of the KK-resonance contributions come from the imaginary parts of the propagator denominators $`(s-M_{KK}^2)`$. This is easily estimated. The imaginary parts of the amplitudes above are given by:
$$\mathrm{Im}\sum _{KK}\left(\frac{1}{s-M_{KK}^2}\right)=\frac{\pi s^{n/2-1}R^n}{\mathrm{\Gamma }(n/2)(4\pi )^{n/2}}$$
(9)
The nonvanishing contributions of the s-channel tower of KK-excitations to the imaginary parts of the amplitudes are thus given by:
$$\mathrm{Im}F_{+-+-}=\left(\frac{\kappa ^2(1+\mathrm{cos}\theta _s)^2}{16}\right)\left(\frac{\pi s^{n/2+1}R^n}{\mathrm{\Gamma }(n/2)(4\pi )^{n/2}}\right)$$
(10)
where $`\theta _s`$ is the c.m. scattering angle. $`\mathrm{Im}F_{+--+}`$ is given by the above expression with $`\mathrm{cos}\theta _s`$ replaced by its negative. Using now the relation (equation 64 of Han, Lykken and Zhang)
$$\kappa ^2R^n=M_s^{-(n+2)}(4\pi )^{n/2}\mathrm{\Gamma }(n/2)$$
(11)
we get
$$\mathrm{Im}F_{+-+-}=\left(\frac{\pi (1+\mathrm{cos}\theta _s)^2}{16}\right)(s^{1/2}/M_s)^4$$
(12)
for n=2.
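For orientation, the size of this window can be estimated numerically from the asymptotic SM expression of Eq. (2) and the KK term of Eq. (12). The sketch below (Python) compares the magnitudes of the two imaginary parts at $`\theta _s=90^o`$ for an assumed $`M_s`$ = 2 TeV; the overall sign of the SM expression is ignored, and the chosen energies are illustrative.

```python
import numpy as np

ALPHA = 1.0 / 137.036   # fine-structure constant
MW = 80.4               # W mass in GeV

def im_f_sm(s, cos_th):
    """Magnitude of the asymptotic SM amplitude of Eq. (2),
    16*pi*alpha^2*(s/|t|)*log(|t|/MW^2), valid for s >> |t| >> MW^2;
    used here only for an order-of-magnitude comparison."""
    t = -0.5 * s * (1.0 - cos_th)
    return 16.0 * np.pi * ALPHA**2 * (s / abs(t)) * np.log(abs(t) / MW**2)

def im_f_kk(s, cos_th, ms):
    """Im F_{+-+-} from the n=2 tower of s-channel KK gravitons, Eq. (12)."""
    return (np.pi / 16.0) * (1.0 + cos_th)**2 * (np.sqrt(s) / ms)**4

# Ratio of the KK correction to the dominant SM term at theta_s = 90 degrees:
for rts in (500.0, 1000.0, 1500.0):          # sqrt(s) in GeV
    s = rts**2
    r = im_f_kk(s, 0.0, ms=2000.0) / im_f_sm(s, 0.0)
    print(f"sqrt(s) = {rts:6.0f} GeV : Im F_KK / Im F_SM = {r:.3f}")
```

At a few hundred GeV the KK term is a small, calculable correction, while it becomes comparable to the SM term as $`s^{1/2}/M_s`$ approaches unity, consistent with the window discussed below.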
Multiple KK-excitation exchanges will give contributions proportional to higher powers of $`s^{1/2}/M_s`$. These cannot be computed in a straightforward manner, and thus the single KK-exchange contribution is a reliable correction only in the domain where $`s^{1/2}/M_s`$ is smaller than one. We exhibit in Figures 1 and 2 the contributions to the imaginary parts of the amplitudes $`F_{+-+-}`$ and $`F_{+--+}`$ relative to the SM predictions. All other SM amplitudes are negligible at these energy and angle values and, as reasoned above, the magnitudes of the KK-contributions can be taken seriously when $`s^{1/2}/M_s`$ is not too close to unity. It is clear that there will be a window, whose extent will depend upon the value of $`M_s`$, wherein the KK-exchange contributions will act as a correction term to the dominant SM term with a definite magnitude. Deviations of the measured cross-sections from the SM can thus be fitted to the correction term with a single parameter $`M_s`$ in the TeV range. As the energies become higher, so that multiple KK-exchange contributions become important as well, we are unable to calculate the KK-exchange contribution beyond saying that it will dominate the cross-section. We have calculated the cross-sections above away from the forward direction. Qualitatively similar conclusions can of course be drawn for the forward amplitude and hence for the total $`\gamma \gamma `$ cross-section also.
In conclusion, a low scale scenario leads to a definite pattern of deviation from the SM prediction for $`\gamma \gamma `$ scattering. There exists a window at around a few hundred GeV c.m. energy where the new physics gives rise to a calculable correction to the SM values and thus provides a well defined signature. At still higher energies, the contributions coming from multiple KK-exchanges start dominating over the SM but do not lend themselves to estimation in any reliable manner.
After this work was completed, the following related papers on photon-photon scattering appeared. However, our note emphasizes the phenomenological importance of a window around a few hundred GeV. |
no-problem/0001/cond-mat0001008.html | ar5iv | text | # Extension of the Brinkman-Rice picture and the Mott transitionโ
## Abstract
In order to explain the metal-Mott-insulator transition, the Brinkman-Rice (BR) picture is extended. In the case of less than one as well as one electron per atom, the on-site Coulomb repulsion is given by $`U=\kappa \rho ^2U_c`$ by averaging the electron charge per atom over all atomic sites, where $`\kappa `$ is the correlation strength of $`U`$, $`\rho `$ is the band filling factor, and $`U_c`$ is the critical on-site Coulomb energy. The effective mass of a quasiparticle is found to be $`\frac{m^{}}{m}=\frac{1}{1-\kappa ^2\rho ^4}`$ for $`0<\kappa \rho ^2<1`$ and seems to follow the heat capacity data of Sr<sub>1-x</sub>La<sub>x</sub>TiO<sub>3</sub> and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> at $`\kappa =1`$ and $`0<\kappa \rho ^2<1`$. The Mott transition of the first order occurs at $`\kappa \rho ^2=1`$ and a band-type metal-insulator transition takes place at $`\kappa \rho ^2=0`$. This Mott transition is compared with that in the $`d=\mathrm{\infty }`$ Hubbard model.
PACS number(s): 71.27.+a, 71.30.+h, 74.20.Mn, 74.20.Fg
Although several theoretical studies have been performed to reveal the mechanism of the metal-insulator transition (MIT),<sup>1-4</sup> the Mott MIT (called "Mott transition") in 3$`d`$ transition-metal oxides including strongly correlated high-$`T_c`$ superconductors still remains to be clarified.<sup>5,6</sup> In particular, the effective mass near the Mott transition has attracted special attention because it is related to both the mechanism of the MIT and the two-dimensional density of states (2D-DOS). The latter offers a clue to explain the mechanism of high-$`T_c`$ superconductivity.
In this paper, we build an extended BR picture with a generalized effective mass depending on the carrier density, which reduces to the BR picture in the case of one electron per atom, and apply it to experimental data of Sr<sub>1-x</sub>La<sub>x</sub>TiO<sub>3</sub> and YBCO.
In strongly correlated metals with one electron per atom on a $`d=\mathrm{\infty }`$ simple cubic lattice, the on-site Coulomb repulsion $`U`$ is uniquely given. However, in real metallic crystals, in which the number of electrons $`n`$ is less than the number of atoms $`m`$, $`U`$ is determined not uniquely but by probability, because the electronic band structure differs between sites and the system does not simply transform from real space to $`K`$-space. Therefore, when the charges on a site are averaged over all atomic sites, $`U`$ can be defined, just as in the case of one electron per atom.
In the case when $`n=m`$, the existence probability ($`P=n/m=\rho =\rho _{\uparrow }+\rho _{\downarrow }`$, "the band filling factor") of electrons on nearest neighbour sites is one. The on-site Coulomb interaction of two electrons in the conduction band is given by $`U=U^{}\equiv \frac{e^2}{r}`$. In the case when $`n<m`$, $`P<1`$ and the on-site charge is $`e^{}=eP`$ (or $`e^{}/e=P`$) when averaged over sites. Then, the Coulomb energy is given by $`U=P^2U^{}`$. $`U^{}`$ does not necessarily agree with the critical value $`U_c=8|\overline{\epsilon }|`$ of the interaction in the BR picture. The Coulomb repulsion is thus given by
$`U=P^2U^{}=\rho ^2U^{},`$ (1)
$`U^{}=\kappa U_c,`$ (2)
$`U=\kappa \rho ^2U_c,`$ (3)
where $`0<\rho \le 1`$, while $`0<\kappa \le 1`$ is the correlation strength. In the case of $`\kappa \le 1`$ and $`\rho =1`$ ($`\rho _{\uparrow }=\rho _{\downarrow }=\frac{1}{2}`$), $`U`$ reduces to the correlation in the BR picture. In the case when $`\kappa =1`$ and $`\rho <1`$, $`U`$ tends to $`U_c`$ as the band approaches half filling. This indicates that having more carriers in the conduction band increases the correlation, and vice versa.
Although $`U`$ in the Hubbard model is replaced by Eq. (1), the calculations of the expectation value of $`U`$ based on the Gutzwiller variational theory<sup>7</sup> do not change because $`\rho ^2`$ is constant. Thus we consider the conditions for applying Eq. (3) to the BR picture. In the Gutzwiller theory, $`\overline{\nu }`$ atoms are doubly occupied with a probability $`\eta ^{\overline{\nu }}`$, where $`0<\eta <1`$.<sup>7</sup> In the BR picture<sup>1</sup> at half filling, $`\rho =1`$ and $`\rho _{\uparrow }=\rho _{\downarrow }=\frac{1}{2}`$, $`\eta =\overline{\nu }/(0.5-\overline{\nu })`$, and hence $`0<\overline{\nu }<1/4`$. For the lowest energy state, $`\overline{\nu }=(1/4)(1-\kappa )`$ where $`U/U_c=\kappa `$, the above limits for $`\overline{\nu }`$ imply $`0<\kappa <1`$. When $`\rho <1`$ and $`\rho _{\uparrow }=\rho _{\downarrow }=\frac{1}{2}\rho `$, the BR picture can be applied as well, with $`U`$ averaged over all sites. Since the lowest energy state, $`\overline{\nu }=(1/4)(1-\kappa \rho ^2)`$ where $`U/U_c=\kappa \rho ^2`$, is satisfied for $`0<\overline{\nu }<1/4`$, the condition $`0<\kappa \rho ^2<1`$ is reached. Thus the inverse of the discontinuity $`q`$ in the BR picture,
$`{\displaystyle \frac{1}{q}}={\displaystyle \frac{m^{}}{m}}`$ $`=`$ $`(1-({\displaystyle \frac{U}{U_c}})^2)^{-1},`$ (4)
$`=`$ $`(1-\kappa ^2\rho ^4)^{-1},`$ (5)
where $`m^{}`$ is the effective mass of a quasiparticle, is defined under the combined condition $`0<\kappa \rho ^2<1`$, although the separate conditions are $`0<\rho \le 1`$ and $`0<\kappa \le 1`$. Therefore, $`m^{}`$ increases without bound as $`\kappa \to 1`$, $`\rho \to 1`$. For $`\kappa \to 0`$ and $`\rho \to 0^+`$, $`m^{}`$ decreases and, finally, the correlation undergoes a (normal or band-type) MIT, which differs from the Mott MIT exhibiting a first order transition. For $`\rho `$=1, Eq. (5) reduces to the effective mass in the BR picture. At $`\kappa \rho ^2=1`$, the MIT of the first order occurs and the state can be regarded as the paramagnetic insulating state because $`\overline{\nu }=0`$. Eq. (5) is illustrated in Fig. 1 (a). In addition, the expectation value of the energy in the (paramagnetic) ground state is given by $`H_N=\overline{\epsilon }(1-\kappa \rho ^2)^2`$. Here, the band energy $`\overline{\epsilon }=\overline{\epsilon }_{\uparrow }+\overline{\epsilon }_{\downarrow }=2\mathrm{\Sigma }_{k<k_F}\epsilon _k<0`$ is the average energy without correlation and $`\epsilon _k`$ is the kinetic energy in the Hamiltonian in the BR picture<sup>1</sup>, with the zero of energy chosen so that $`\mathrm{\Sigma }_k\epsilon _k`$=0. This picture may be called an extended BR picture with band filling.
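A short numerical illustration of Eq. (5) follows (Python); the chosen values of $`\rho `$ are arbitrary and only serve to show the divergence of the mass enhancement as the band approaches half filling at $`\kappa =1`$.

```python
def effective_mass_ratio(kappa, rho):
    """m*/m = 1/(1 - kappa^2 rho^4), Eq. (5); valid for 0 < kappa*rho^2 < 1."""
    x = kappa * rho**2
    if not 0.0 < x < 1.0:
        raise ValueError("outside the metallic regime 0 < kappa*rho^2 < 1")
    return 1.0 / (1.0 - x**2)

# At kappa = 1 the mass enhancement diverges as rho -> 1 (the Mott transition):
for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"rho = {rho:4.2f} : m*/m = {effective_mass_ratio(1.0, rho):8.2f}")
```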
On the other hand, in the $`d=\mathrm{\infty }`$ Hubbard model, in which the width of the DOS $`\rho (\omega )`$ of the coherent part decreases with increasing $`U`$ while its peak height stays constant, the Mott transition occurs for the minimum number of quasiparticles, i.e. the integral of $`\rho (\omega )`$ over energies near $`\omega =0`$ as $`U`$ approaches $`U_c`$. This is a marked difference from the BR picture, in which the Mott transition occurs at $`\rho `$=1 (the maximum number of quasiparticles). The Mott transition in the $`d=\mathrm{\infty }`$ Hubbard model can be regarded as the band-type MIT in the extended BR picture because of the decrease of the number of quasiparticles in the coherent part. In addition, the $`t`$-$`J`$ model and the Hubbard model on a square lattice predict critical behavior according to $`m^{}/m=1/(1-x)`$.
Eq. (5) is applied to the heat capacity data of Sr<sub>1-x</sub>La<sub>x</sub>TiO<sub>3</sub>, which is well known as a strongly correlated system. The number of quasiparticles determined by the Hall coefficient increases linearly with $`x`$ up to at least $`x`$=0.95. The heat capacity in Fig. 3 of reference 5 is replotted in Fig. 1(b). Eq. (5) closely follows the heat capacity data in the case when $`\kappa `$=1. A first-order transition is found between $`x`$=0.95 and $`x`$=1. This transition corresponds to the Mott MIT because $`\kappa \rho ^2=1`$ in this picture, with $`\rho =1`$ for LaTiO<sub>3</sub> and $`\kappa =1`$ from the experimental result. Thus LaTiO<sub>3</sub> is a Mott insulator.
Eq. (5) is also applied to the specific heat data for YBCO deduced from magnetic measurements by Däumling. Eq. (5) seems to follow the data, as shown in Fig. 1(d). Although it is difficult to confirm whether YBCO at $`\rho `$=1 is a Mott insulator, because the effective mass at $`\rho `$=1 is divergent, the divergence is regarded as a Mott transition.
We conclude from the experimental data that the correlation strength is found to be $`\kappa `$=1, and that the presently proposed extended BR picture is better able to explain the Mott transition than Hubbard models. Further, instead of the van Hove singularity (vHs), Eq. (5) can be used as the 2D-DOS to describe the mechanism of high-$`T_c`$ superconductivity.
I thank Mr. Nishio for valuable comments, and Prof. Tokura, Prof. Kumagai, and Dr. M. Däumling for permission to use the experimental data in Fig. 1(b),(c). |
no-problem/0001/astro-ph0001145.html | ar5iv | text | # The Lake Baikal neutrino experiment
## 1 Detector and Site
The Baikal Neutrino Telescope is deployed in Lake Baikal, Siberia, 3.6 km from shore at a depth of 1.1 km. NT-200, the medium-term goal of the collaboration, was put into operation on April 6, 1998 and consists of 192 optical modules (OMs), see fig.1. An umbrella-like frame carries 8 strings, each with 24 pairwise arranged OMs. Three underwater electrical cables connect the detector with the shore station.
In April 1993, the first part of NT-200, the detector NT-36 with 36 OMs on 3 strings, was put into operation and took data up to March 1995. A 72-OM array, NT-72, ran in 1995-96. In 1996 it was replaced by the four-string array NT-96. NT-144, a six-string array with 144 OMs, took data in 1997-98.
Summed over 1140 days of effective lifetime, $`6.6\times 10^8`$ muon events have been collected with NT-36, -72, -96, -144, and -200.
The OMs are grouped in pairs along the strings. They contain 37-cm diameter QUASAR PMs which have been developed specially for our project. The two PMs of a pair are switched in coincidence in order to suppress background from bioluminescence and PM noise. A pair defines a channel.
A muon-trigger is formed by the requirement of $`N`$ hits (with hit referring to a channel) within 500 ns. $`N`$ is typically set to 3 or 4. For such events, amplitude and time of all fired channels are digitized and sent to shore. A separate monopole trigger system searches for clusters of sequential hits in individual channels which are characteristic for the passage of slowly moving, bright objects like GUT monopoles.
The main challenge of large underwater neutrino telescopes is the identification of extraterrestrial neutrinos of high energy. In this paper we present results of a search for neutrinos with $`E_\nu >10`$ TeV obtained with the deep underwater neutrino telescope NT-96 at Lake Baikal.
## 2 Search strategy and the limits on the diffuse neutrino flux
The search strategy used for high energy neutrinos relies on the detection of the Cherenkov light emitted by the electromagnetic and (or) hadronic particle cascades and high energy muons produced at the neutrino interaction vertex in a large volume around the neutrino telescope.
We select events with high multiplicity of hit channels corresponding to bright cascades. The volume considered for generation of cascades is essentially below the geometrical volume of NT-96. A cut is applied which accepts only time patterns corresponding to upward traveling light signals (see below).
Neutrinos produce showers and high energy muons through CC-interactions
$$\nu _l(\overline{\nu _l})+N\stackrel{CC}{\to }l^{-}(l^+)+\text{hadrons},$$
(1)
through NC-interactions
$$\nu _l(\overline{\nu _l})+N\stackrel{NC}{\to }\nu _l(\overline{\nu _l})+\text{hadrons},$$
(2)
where $`l=e`$ or $`\mu `$, and through resonance production
$$\overline{\nu _e}+e^{-}\to W^{-}\to \text{anything},$$
(3)
with the resonant neutrino energy $`E_0=M_W^2/2m_e=6.3\times 10^6`$ GeV and cross section $`5.02\times 10^{-31}`$ cm<sup>2</sup>.
Within the first 70 days of effective data taking, $`8.4\times 10^7`$ events with $`N_{hit}\ge 4`$ have been selected.
For this analysis we used events with $`\ge `$4 hits along at least one of the hit strings. The time difference between any two channels on the same string was required to obey the condition:
$$(t_i-t_j)-z_{ij}/c<az_{ij}+2\delta ,(i<j).$$
(4)
The $`t_i,t_j`$ are the arrival times at channels $`i,j`$, and $`z_{ij}`$ is their vertical distance. $`\delta =5`$ nsec accounts for the timing error and $`a=1`$ nsec/m.
8608 events survive the selection criterion (4). The highest multiplicity of hit channels (one event) is $`N_{hit}=24`$.
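For illustration, the selection can be sketched as follows (Python), applied to the hits of a single string. Only the constants $`a=1`$ nsec/m and $`\delta =5`$ nsec are taken from the text; the ordering of the channels along the string and the sign conventions of the reconstructed condition (4) are assumptions of this sketch.

```python
from itertools import combinations

C = 0.2998  # vacuum speed of light in m/ns

def passes_upward_cut(hits, a=1.0, delta=5.0):
    """Check the time-pattern condition of Eq. (4) for one string.

    hits : list of (z, t) pairs for the fired channels, given in channel
           order i < j along the string (z in m, t in ns); how the channels
           are numbered along the string is an assumption of this sketch.
    a = 1 ns/m and delta = 5 ns, as quoted in the text.
    """
    for (z_i, t_i), (z_j, t_j) in combinations(hits, 2):
        z_ij = abs(z_i - z_j)                      # vertical channel separation
        if (t_i - t_j) - z_ij / C >= a * z_ij + 2.0 * delta:
            return False                           # pattern not upward-going
    return True

# An event is kept if at least one hit string has >= 4 hits passing the cut.
```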
Since no events with $`N_{hit}>24`$ are found in our data we can derive upper limits on the flux of high energy neutrinos which produce events with multiplicity
$$N_{hit}\ge 25.$$
(5)
The shape of the neutrino spectrum was assumed to behave like $`E^{-2}`$, as typically expected for Fermi acceleration. In this case, 90% of the expected events would be produced by neutrinos from the energy range $`10^4`$ to $`10^7`$ GeV. Comparing the calculated rates with the upper limit on the actual number of events, 2.3 at 90% CL, we obtain the following upper limit on the diffuse neutrino flux:
$$\frac{d\mathrm{\Phi }_\nu }{dE}E^2<1.4\times 10^{-5}\text{cm}^{-2}\text{s}^{-1}\text{sr}^{-1}\text{GeV}.$$
(6)
Fig.2 shows the upper limits to the diffuse high energy neutrino fluxes obtained by the BAIKAL (this work), SPS-DUMAND, AMANDA, EAS-TOP and FREJUS (triangle) experiments, as well as the model independent upper limit obtained by V. Berezinsky (curve labelled B) (with the energy density of the diffuse X- and gamma-radiation $`\omega _x\sim 2\times 10^{-6}`$ eV cm<sup>-3</sup>, as follows from EGRET data) and the atmospheric neutrino fluxes from horizontal and vertical directions (upper and lower curves, respectively). Also shown are the predictions of the Stecker and Salamon model (curve labelled SS) and the Protheroe model (curve labelled P) for diffuse neutrino fluxes from quasar cores and blazar jets.
For the resonant process (3) our 90% CL limit at the W resonance energy is:
$$\frac{d\mathrm{\Phi }_{\overline{\nu }}}{dE_{\overline{\nu }}}\le 3.6\times 10^{-18}\text{cm}^{-2}\text{s}^{-1}\text{sr}^{-1}\text{GeV}^{-1}.$$
(7)
The limit (6) obtained for the diffuse neutrino flux is of the same order as the limit announced by FREJUS, but extends to much higher energies. We expect that the analysis of 3 years of data taking with NT-200 will allow us to lower this limit by another order of magnitude.
This work was supported by the Russian Ministry of Research,the German Ministry of Education and Research and the Russian Fund of Fundamental Research ( grants 99-02-18373a, 97-02-17935, 99-02-31006 and 97-15-96589), and by the Russian Federal Program โIntegrationโ (project no. 346). |
no-problem/0001/astro-ph0001135.html | ar5iv | text | # Theory and Observations of Type I X-Ray Bursts from Neutron Stars
## I Introduction
The gravitational energy release from matter accreted onto a neutron star (NS) of mass $`M`$ and radius $`R`$ is $`GMm_p/R\approx 200\mathrm{MeV}`$ per nucleon, much larger than that released from thermonuclear fusion ($`E_{nuc}\approx 5`$ MeV per nucleon when a solar mix goes to iron group elements). Hansen and Van Horn (1975) showed that the burning of the accumulated material in the NS atmosphere occurred in radially thin shells and so was susceptible to a thermal instability. Evidence of the instability came soon after with the discovery of recurrent Type I X-ray bursts from low accretion rate ($`\dot{M}<10^{-9}M_{\odot }\mathrm{yr}^{-1}`$) NSs. The successful association of the thermal instabilities found by Hansen & Van Horn (1975) with the X-ray bursts made a nice picture of a recurrent cycle that consists of fuel accumulation for several hours followed by a thermonuclear runaway that burns the fuel in 10-100 seconds (see Lewin, van Paradijs and Taam 1995 for an overview and references). The observational quantity $`\alpha `$ (defined as the ratio of the time-averaged accretion luminosity to the time-averaged burst luminosity) is close to the value expected (i.e. $`\alpha =(GM/R)/E_{nuc}\approx 40`$) for a thermonuclear burst origin.
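These numbers follow directly from fiducial NS parameters. A quick check (Python), assuming $`M=1.4M_{\odot }`$ and $`R=10`$ km (values not quoted above):

```python
G = 6.674e-8          # gravitational constant (cgs)
M_SUN = 1.989e33      # solar mass in g
M_P = 1.673e-24       # proton mass in g
MEV = 1.602e-6        # erg per MeV

M, R = 1.4 * M_SUN, 1.0e6              # assumed 1.4 M_sun, 10 km neutron star
e_grav = G * M * M_P / R / MEV         # gravitational release per nucleon (MeV)
e_nuc = 5.0                            # MeV per nucleon, solar mix to iron group
print(f"E_grav ~ {e_grav:.0f} MeV per nucleon; alpha = E_grav/E_nuc ~ {e_grav / e_nuc:.0f}")
```

This returns roughly 200 MeV per nucleon and $`\alpha \approx 40`$, as quoted above.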
Though our basic understanding from 25 years ago is unchanged, we now know much more about how the thermal instability depends on $`\dot{M}`$, both theoretically and observationally. It is this comparison that I emphasize, as it provides many important lessons that are likely applicable to thin shell flashes on accreting white dwarfs (classical novae); where 100-1000 yr recurrence times prohibit such detailed comparisons. I focus solely on NSs accreting at $`\dot{M}>10^{10}M_{}\mathrm{yr}^1`$, which is appropriate for most persistently bright Low Mass X-ray Binaries (in particular the โZโ and โAtollโ sources of Hasinger & van der Klis 1989). These NSs are weakly magnetic, with $`B<10^{10}`$G.
I start by reviewing the simplest aspects of the physics of the accumulation and ignition of the fresh fuel on the NS (leaning heavily on results from my Bildsten 1998 review article, to which I refer the reader for the complete set of original references). I then discuss the EXOSAT observations of the $`\dot{M}`$ dependence of the Type I X-Ray burst properties and speculate that a solution to these puzzles is possible if freshly accreted matter accumulates near the equator. This problem, as well as the observations of nearly coherent oscillations during the burst (summarized by Swank at this meeting) are the first good indicators of the breaking of spherical symmetry. I close with a detailed discussion of the Type I bursts from the binary GS 1826-234, which is a beautiful example of limit-cycle mixed hydrogen/helium burning.
## II Accumulation, Ignition, Explosion
Once the freshly accreted hydrogen and helium has thermalized and become part of the "star", it undergoes hydrostatic compression from the new material that is continuously piled on. The extreme gravity on the NS surface compresses the fresh fuel to ignition densities and temperatures within a few hours to days.<sup>1</sup> The physics of the compression and burning depends on the accretion rate per unit area, $`\dot{m}\equiv \dot{M}/A_{acc}`$, where $`A_{acc}`$ is the covered area of fresh material. I sometimes quote numbers for both $`\dot{m}`$ and $`\dot{M}`$. When I give $`\dot{M}`$, I have assumed $`A_{acc}=4\pi R^2\approx 1.2\times 10^{13}\mathrm{cm}^2`$. The short thermal time in the atmosphere (only $`\sim 10`$ s at the ignition location, $`P\approx 10^{22}`$-$`10^{23}\mathrm{erg}\mathrm{cm}^{-3}`$) compared to the time to accumulate the material (hours to days) makes the compression far from adiabatic. Indeed, the temperature contrast from the photosphere to the burning layer is a factor of ten, whereas the density contrast exceeds $`10^4`$.
The temperature exceeds $`10^7`$ K in most of the accumulating atmosphere, so that hydrogen burns via the CNO cycle and we can neglect the pp cycles. At high temperatures ($`T>8\times 10^7\mathrm{K}`$), the timescale for proton captures becomes shorter than the subsequent $`\beta `$ decay lifetimes, even for the slowest <sup>14</sup>N(p,$`\gamma `$)<sup>15</sup>O reaction. The hydrogen then burns in the "hot" CNO
$${}_{}{}^{12}\mathrm{C}(p,\gamma )^{13}\mathrm{N}(p,\gamma )^{14}\mathrm{O}(\beta ^+)^{14}\mathrm{N}(p,\gamma )^{15}\mathrm{O}(\beta ^+)^{15}\mathrm{N}(p,\alpha )^{12}\mathrm{C},$$
(1)
cycle and is limited to $`5.8\times 10^{15}Z_{\mathrm{CNO}}\mathrm{ergs}\mathrm{g}^{-1}\mathrm{s}^{-1}`$, where $`Z_{\mathrm{CNO}}`$ is the mass fraction of CNO; this rate is independent of temperature. The hydrogen burns this way in the accumulating phase when $`\dot{m}>900\mathrm{g}\mathrm{cm}^{-2}\mathrm{s}^{-1}(Z_{CNO}/0.01)^{1/2}`$ and is thermally stable. The amount of time it takes to burn the hydrogen is $`(10^3/Z_{\mathrm{CNO}})\mathrm{s}`$, or about one day for solar metallicities. For lower $`\dot{m}`$'s, the hydrogen burning is thermally unstable and is the trigger for the Type I burst.
The slow hydrogen burning during the accumulation allows for a unique burning regime at high $`\dot{m}`$'s. This simultaneous H/He burning occurs when $`\dot{m}>(2-5)\times 10^3\mathrm{g}\mathrm{cm}^{-2}\mathrm{s}^{-1}(Z_{CNO}/0.01)^{13/18}`$, as at these high rates the fluid element is compressed to helium ignition conditions long before the hydrogen is completely burned (Lamb and Lamb 1978, Taam and Picklum 1978). The strong temperature dependence of the helium burning rate (and the lack of any weak interactions) leads to a strong thin-shell instability for temperatures $`T<5\times 10^8\mathrm{K}`$ and causes the Type I X-Ray burst for these $`\dot{m}`$'s. The critical condition of thin burning shells ($`h\ll R`$) holds before burning and remains true even during the flash (when temperatures reach $`10^9\mathrm{K}`$), as the large gravitational well on the neutron star would require temperatures of order $`10^{12}\mathrm{K}`$ for $`h\sim R`$. Stable burning sets in at higher $`\dot{M}`$'s (comparable to the Eddington limit) when the helium burning temperature sensitivity finally becomes weaker than the cooling rate's sensitivity (Ayasli & Joss 1982 and Taam, Woosley & Lamb 1996).
For solar metallicities, there is a narrow window of $`\dot{m}`$'s where the hydrogen is completely burned before the helium ignites. In this case, a pure helium shell accumulates underneath the hydrogen-burning shell until densities and pressures are reached for ignition of the pure helium layer. The recurrence times of these bursts must be longer than the time to burn all of the hydrogen, so pure helium flashes should have recurrence times in excess of a day and $`\alpha \sim 200`$. To summarize, in order of increasing $`\dot{m}`$, the regimes of unstable burning we expect to witness from NSs accreting at sub-Eddington rates ($`\dot{m}<10^5\mathrm{g}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$) are (Fujimoto, Hanawa & Miyaji 1981, Fushiki and Lamb 1987):
1. Mixed hydrogen and helium burning triggered by thermally unstable hydrogen ignition for $`\dot{m}<900\mathrm{g}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ ($`\dot{M}<2\times 10^{-10}M_{\odot }\mathrm{yr}^{-1}`$).
2. Pure helium shell ignition for $`900\mathrm{g}\mathrm{cm}^{-2}\mathrm{s}^{-1}<\dot{m}<(2-5)\times 10^3\mathrm{g}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ following completion of hydrogen burning.
3. Mixed hydrogen and helium burning triggered by thermally unstable helium ignition for $`\dot{m}>(2-5)\times 10^3\mathrm{g}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ ($`\dot{M}>(4.4-11.1)\times 10^{-10}M_{\odot }\mathrm{yr}^{-1}`$).
The transition $`\dot{m}`$'s are for $`Z_{CNO}\approx 0.01`$. Reducing $`Z_{CNO}`$ lowers the transition accretion rates and, more importantly, makes the $`\dot{m}`$ range for pure helium ignition quite narrow. We now discuss what happens as the thermal instability develops into a burst and what observational differences are to be expected between a pure helium ignition and a mixed hydrogen/helium ignition.
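The three regimes can be summarized as a simple classification in $`\dot{m}`$. In the sketch below (Python), the helium-trigger boundary is placed at $`3\times 10^3\mathrm{g}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, an illustrative choice within the quoted (2-5)$`\times 10^3`$ range, and the $`Z_{CNO}`$ scalings are those given above.

```python
def burning_regime(mdot, z_cno=0.01):
    """Classify the unstable-burning regime for a local accretion rate
    mdot (g cm^-2 s^-1), following the three regimes listed above.
    The He-trigger boundary is uncertain by a factor of a few; 3e3 is an
    illustrative choice inside the quoted (2-5)e3 range."""
    m_h = 900.0 * (z_cno / 0.01) ** 0.5               # H-ignition boundary
    m_he = 3.0e3 * (z_cno / 0.01) ** (13.0 / 18.0)    # He-trigger boundary
    if mdot < m_h:
        return "mixed H/He burst triggered by unstable H ignition"
    elif mdot < m_he:
        return "pure He flash beneath a H-burning shell"
    else:
        return "mixed H/He burst triggered by unstable He ignition"

for mdot in (3e2, 2e3, 1e4):
    print(f"mdot = {mdot:8.0f} g/cm^2/s : {burning_regime(mdot)}")
```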
The flash occurs at fixed pressure, and the increasing temperature eventually allows the radiation pressure to dominate. For an ignition column of $`10^8\mathrm{g}\mathrm{cm}^{-2}`$, the pressure is $`P=gy\approx 10^{22}\mathrm{ergs}\mathrm{cm}^{-3}`$, so $`aT_{max}^4/3\approx P`$ gives a maximum temperature $`T_{max}\approx 1.5\times 10^9\mathrm{K}`$. For pure helium flashes, the fuel rapidly burns (since there are no limiting weak interactions) and the local Eddington limit is often exceeded, leading to a radius expansion burst and a duration set mostly by the time it takes the heat to escape, 5-10 seconds.
When hydrogen and helium are both present, the high temperatures reached during the thermal instability easily produce elements far beyond the iron group (Hanawa et al. 1983; Wallace & Woosley 1984; Hanawa and Fujimoto 1984) via the rapid-proton (rp) process of Wallace and Woosley (1981). This burning starts a few seconds after the initial helium flash (see Hanawa and Fujimoto 1984 for an illuminating example), which makes new seed nuclei and increases the temperature. The rp process burns hydrogen by successive proton captures and $`\beta `$ decays. The seed nuclei move up the proton-rich side of the valley of stability (much like the r-process, which occurs by neutron captures on the neutron-rich side), more or less limited by the $`\beta `$-decay rates. Theoretical work shows that the end-point of this time-dependent burning is at elements far heavier than iron (Hanawa and Fujimoto 1984, Schatz et al. 1997, Koike et al. 1999).<sup>2</sup> In steady-state burning at $`\dot{M}>10^{-8}M_{\odot }\mathrm{yr}^{-1}`$, Schatz et al. (1999) showed that the rp-process burns all of the hydrogen and ends at nuclei with $`A`$ near 100. The long series of $`\beta `$ decays allows for energy release 10-100 seconds after the burst has started. We thus expect a mixed hydrogen/helium burst to last much longer than a pure helium burst.
## III Observations of $`\dot{M}`$ Dependencies
The 3.8 day orbit of EXOSAT was an excellent match for the long-term monitoring of the Type I bursters needed to reveal the dependence of their nuclear burning behavior on $`\dot{M}`$.<sup>3</sup> I hope that the equally well-matched Chandra and XMM satellites will devote as much time to Type I bursters. While in a particular burning regime, we expect that the time between bursts should decrease as $`\dot{M}`$ increases, since it takes less time to accumulate the critical amount of fuel at a higher $`\dot{M}`$. Exactly the opposite behavior was observed from many low accretion rate ($`\dot{M}<10^{-9}M_{\odot }\mathrm{yr}^{-1}`$) NSs. A particularly good example is 4U 1705-44, where the recurrence time increased by a factor of $`\sim 4`$ when $`\dot{M}`$ increased by a factor of $`\sim 2`$ (Langmeier et al. 1987, Gottwald et al. 1989). If the star is accreting matter with $`Z_{CNO}=10^{-2}`$, then these accretion rates are at the boundary between unstable helium ignition in a hydrogen-rich environment at high $`\dot{M}`$ and unstable pure helium ignition at lower $`\dot{M}`$. The expected change in burst behavior as $`\dot{M}`$ increases would then be to more energetic and more frequent bursts. This was not observed.
Other NSs showed similar behavior. van Paradijs, Penninx & Lewin (1988) tabulated this effect for many bursters and concluded that increasing amounts of fuel are consumed in a less visible way than Type I X-ray bursts as $`\dot{M}`$ increases. The following trends were always found as $`\dot{M}`$ increases:
* The recurrence time increases from 2-4 hours to $`>1`$ day.
* The bursts burn less of the accumulated fuel, with $`\alpha `$ increasing from $`\sim 40`$ to $`>100`$ (see top panel of Figure 1).
* The duration of the bursts decreases from $`\sim 30`$ s to $`\sim 5`$ s.
The low $`\dot{M}`$ bursts look like mixed hydrogen/helium burning (namely, energetic and of long duration from the rp-process) whereas the high $`\dot{M}`$ bursts look like pure helium burning (not so energetic, recurrence times typically long enough to have burned the hydrogen to helium before the burst and short duration due to the lack of any weak interactions). The simplest explanation would be to say that the NS has transitioned from the low $`\dot{M}`$ mixed burning regime (noted as 1 in ยงII) to the higher $`\dot{M}`$ pure helium burning (noted as 2 in ยงII). For this to be true, these NSs should be accreting at $`10^{-10}M_{\odot }\,\mathrm{yr}^{-1}`$ in the lower $`\dot{M}`$ state and a factor of 4-5 higher in the high $`\dot{M}`$ state. However, these estimates are not consistent with the observations.
van Paradijs et al. (1988) estimated the $`\dot{M}`$ from those bursters which have shown Eddington limited radius expansion bursts. For these systems, the ratio of the persistent flux to the flux during radius expansion measures the accretion rate in units of the Eddington accretion rate ($`2\times 10^{-8}M_{\odot }\,\mathrm{yr}^{-1}`$). They showed that most bursters accrete at rates $`\dot{M}\approx (3\text{-}30)\times 10^{-10}M_{\odot }\,\mathrm{yr}^{-1}`$, at least a factor of three (and typically more) higher than the calculated rate where such a transition should occur. Moreover, if the accretion rates were as low as needed, the recurrence times for the mixed hydrogen/helium burning would be about 30 hours, rather than the observed 2-4 hours. Fujimoto et al. (1987) discussed in some detail the challenges these observations present to a spherically symmetric model, while Bildsten (1995) attempted to resolve this by having much of the thermally unstable burning occur via slow deflagration fronts that do not lead to Type I bursts, but rather slow hour-long flares.
Another comparably embarrassing conundrum is the lack of regular bursting from the six โZโ sources (Sco X-1, Cyg X-2, GX 5-1, GX 17+2, GX 340+0, GX 349+2) which are accreting at $`3\times 10^{-9}`$-$`2\times 10^{-8}M_{\odot }\,\mathrm{yr}^{-1}`$. These NSs very rarely show Type I bursts, and when they do, they are so infrequent that the resulting $`\alpha `$ values are usually $`>10^3`$ (see Kuulkers et al. 1997 and Smale 1998 for examples and discussions). In other words, these bursts are clearly not responsible for burning all of the accreted fuel, whereas theory clearly says that these objects should be burning nearly all of their fuel unstably in the mixed hydrogen/helium regime (noted as 3 above). The same mystery holds for the Atoll sources with $`\dot{M}\sim 10^{-9}M_{\odot }\,\mathrm{yr}^{-1}`$ (GX 3+1, GX 13+1, GX 9+1 and GX 9+9), which at best are infrequent bursters.
## IV Accumulation in the Equator?
Many of these puzzles are resolved by relaxing our spherical symmetry presumption and allowing the fresh material to only cover a fraction of the star prior to igniting. There are observational hints that this is happening, as the other clear trend (in addition to those noted in the previous section) found by EXOSAT was an increase in the apparent black-body radius ($`R_{app}`$) as $`\dot{M}`$ increased (see bottom panel in Figure 1). This parameter is found by spectral fitting in the decaying tail of the Type I bursts and is susceptible to absolute spectral corrections (see discussion in Lewin, van Paradijs and Taam 1995) that will hopefully be resolved with XMM observations. In a similar vein, van der Klis et al. (1990) found that the temperature of the burst at the moment when the flux was one-tenth the Eddington limit decreased as $`\dot{M}`$ increased (hence a larger area) for the Atoll source 4U 1636-53. In total, these observations raise the distinct possibility that the covered area increases enough with increasing $`\dot{M}`$ so that the accretion rate per unit area actually decreases.
By interpreting the measured $`R_{app}`$ as an indication of the fraction of the star that is covered by freshly accreted fuel, we can measure directly $`\dot{m}=\dot{M}/4\pi R_{app}^2`$, which is independent of the distance to the source, as $`F_x=GM\dot{M}/4\pi d^2R`$ gives $`\dot{m}\simeq (F_xR/GM)(d/R_{app})^2`$. The bottom panel of Figure 1 shows data for the burster EXO 0748-676. The lower (upper) solid lines are curves where the local accretion rate is constant at $`\dot{m}=8.3\times 10^3\,\mathrm{g\,cm^{-2}\,s^{-1}}`$ ($`\dot{m}=3.7\times 10^3\,\mathrm{g\,cm^{-2}\,s^{-1}}`$). The arrow points in the direction of increasing $`\dot{m}`$. The points at higher $`\dot{M}`$ (as inferred from $`F_x`$) tend to lie at comparable or slightly lower $`\dot{m}`$. The radius increase appears adequate to offset the $`\dot{M}`$ increase. In addition, for this source, the inferred values of $`\dot{m}`$ are in the range where the NS is transitioning from the mixed H/He burning at high $`\dot{m}`$ to the pure helium case at lower $`\dot{m}`$. More physically stated, the data point to the possibility that, as $`\dot{M}`$ increases, the area increases fast enough to allow the hydrogen to complete its burning before high enough pressures are reached for helium ignition. If such small covering areas persist to the higher $`\dot{M}`$โs of the Z sources, then their apparently stable nuclear burning is easily explained.
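The distance-independent estimate is simple enough to script; the sketch below (our own illustration) assumes a $`1.4M_{\odot }`$, 10 km star and, for definiteness, plugs in the GS 1826-238 numbers quoted in ยงV:

```python
G = 6.674e-8          # [cm^3 g^-1 s^-2]
M = 1.4 * 1.989e33    # NS mass [g] (assumed)
R = 1.0e6             # NS radius [cm] (assumed 10 km)
kpc = 3.086e21        # [cm]

def mdot_local(F_x, d, R_app):
    """Local accretion rate mdot ~ (F_x R / G M) (d / R_app)^2 [g cm^-2 s^-1]."""
    return (F_x * R / (G * M)) * (d / R_app) ** 2

# GS 1826-238 numbers from Sec. V: F_x ~ 2e-9 erg/cm^2/s, R_app ~ 9 km at 8 kpc
print(f"{mdot_local(2.0e-9, 8.0 * kpc, 9.0e5):.1e} g cm^-2 s^-1")  # ~8e3
```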
We know that these NSs accrete from a disk formed in the Roche lobe overflow of the stellar companion. However, there are still debates about the โfinal plungeโ onto the NS surface. Some advocate that a magnetic field controls the final infall, while others prefer an accretion disk boundary layer. This is now an important issue to resolve, both for the reasons I have noted here as well as for the oscillations seen during the bursts. If material is placed in the equatorial belt, it is not clear that it will stay there very long. If angular momentum were not an issue, the lighter accreted fuel (relative to the ashes) would cover the whole star quickly. However, on these rapidly rotating neutron stars, the fresh matter added at the equator must lose angular momentum to get to the pole. This competition (namely understanding the spreading of a lighter fluid on a rotating star) has only recently been investigated by Inogamov and Sunyaev (1999), to which I refer the interested reader.
## V An Example of Mixed H/He Bursts
Despite the complications I discussed in the previous sections, there are times when Type I bursters behave in a near limit cycle manner, with bursts occurring nearly periodically as $`\dot{m}`$ apparently stays at a fixed value for a long time. The most recent (and beautiful!) example of such a Type I burster is GS 1826-238. Ubertini et al. (1999) show that during 2.5 years of monitoring with the BeppoSAX Wide Field Camera, 70 bursts were detected from this object with a quasi-periodic recurrence time of $`5.76\pm 0.62`$ hours. The persistent flux during this bursting period was $`F_x\approx 2\times 10^{-9}\,\mathrm{erg\,cm^{-2}\,s^{-1}}`$ (Ubertini et al. 1999, In โt Zand et al. 1999, Kong et al. 2000), which when combined with the measured ratio of $`R_{app}/d`$ (about 9 km at 8 kpc; Kong et al. 2000) gives $`\dot{m}\approx 8\times 10^3\,\mathrm{g\,cm^{-2}\,s^{-1}}`$, safely in the mixed H/He burning regime.
At the upper-limit distance of 8 kpc (In โt Zand et al. 1999) the global accretion rate is $`\dot{M}\approx 10^{-9}M_{\odot }\,\mathrm{yr}^{-1}`$, and apparently the whole star is covered with fresh material. No coherent pulsations have been detected in these bursts, so we do not know this NSโs rotation rate.
The type I bursts from this source are a โtextbookโ case for the mixed hydrogen/helium burning expected at these accretion rates. The estimated $`\dot{m}`$ gives an accumulated column on the NS prior to the burst of $`1.6\times 10^8\,\mathrm{g\,cm^{-2}}`$, just what is expected from theory. These quasi-periodic bursts allow for a very secure measurement of $`\alpha \approx 50`$, which implies a nuclear energy release of $`\approx 4\,\mathrm{MeV}`$ per accreted nucleon for a $`1.4M_{\odot }`$, 10 km NS. Energy releases this large can only come about via hydrogen burning, and the long ($`>100`$ s) duration of the bursts is consistent with the expected long-time energy release from the rp-process. Figure 2 shows the time profile of such a burst seen with RXTE (Kong et al. 2000) in a few different energy bands. Though these data were taken to study the optical reprocessing (top panel is the simultaneous optical burst), they provide an important confirmation of the delayed energy release expected when hydrogen is burning via an rp-process. The resemblance of these profiles to Hanawa and Fujimotoโs (1984) theoretical results is striking.
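Both numbers in this paragraph follow from short arithmetic; here is a hedged check (our own, assuming the same $`1.4M_{\odot }`$, 10 km star as the text):

```python
G, Msun = 6.674e-8, 1.989e33
M, R = 1.4 * Msun, 1.0e6            # 1.4 Msun, 10 km NS (assumed)
m_u, MeV = 1.66054e-24, 1.60218e-6  # atomic mass unit [g], MeV in [erg]

mdot, t_rec = 8.0e3, 5.76 * 3600.0  # g cm^-2 s^-1, burst recurrence [s]
print(f"accumulated column ~ {mdot * t_rec:.1e} g cm^-2")  # ~1.7e8

E_grav = G * M * m_u / R / MeV      # gravitational release per nucleon [MeV]
alpha = 50.0                        # persistent-to-burst fluence ratio
print(f"E_nuc ~ {E_grav / alpha:.1f} MeV per accreted nucleon")  # ~3.9
```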
The upcoming launch of the High Energy Transient Explorer should provide long-term coverage comparable to that of the Wide Field Camera on BeppoSAX and gather more information on such nice bursts from many more LMXBโs.
## VI Conclusions
I hope I have made the case that Type I bursts from neutron stars are still very interesting to study in their own right and provide important lessons to those studying thin shell flashes in other astrophysical contexts. The detailed comparison provided by the neutron star systems is likely telling us to seriously consider the possibility and repercussions of fuel preferentially accumulating in the equatorial region. Another place where this might prove immediately applicable is the recurrent novae, where currently one infers high accretion rates and white dwarf masses in order to get the short recurrence times of 20-50 years (Livio 1994). These constraints are relaxed if we allow for a smaller covering fraction. I am not the first to say this, but hopefully the Type I burst observations make such a warning harder to ignore!
Jean Swank reviewed the observations of nearly coherent oscillations in the 300-600 Hertz range during many Type I bursts and so I will not summarize those results here. Though I am convinced that these modulations are intimately connected to rapid stellar rotation, there are still important unresolved questions. The ones that bother me the most are:
1. What causes the asymmetry at late times in the burst, long after the peak?
2. Why does the modulation appear sometimes at twice the spin frequency?
3. How does the burning front really spread on a rapidly rotating star? Ignition at one spot is plausible, but we do not understand how the ignited/hot fuel spreads around a rapidly rotating star.
It is even an open question as to why, from a particular NS, only some bursts show these oscillations. Before any meaningful theoretical work can be carried out, what is needed is the phenomenology of the oscillations in the context of the well established burst phenomenology I have discussed here. My current mental tabulations point to a complete absence of oscillations during the long bursts (even during the long rise) indicative of mixed H/He burning. All reported detections of oscillations during bursts that I am aware of are from short duration, high $`\alpha `$ bursts.
I thank Andrew Cumming, Erik Kuulkers, and Michiel van der Klis for discussions and comments on the manuscript. My recent plunge into the EXOSAT Type I burst literature occurred when I was the CHEAF Visiting Professor at the Astronomical Institute โAnton Pannekoekโ of the University of Amsterdam. I thank them for the hospitality and Michiel for reminding me to eventually place his 1990 article on 4U 1636-53 in a meaningful context. This research was supported by NASA via grant NAG 5-8658 and by the NSF under Grant PHY 94-07194. L. B. is a Cottrell Scholar of the Research Corporation.
# The nature of RX J0052.1-7319
## 1 Introduction
RX J0052.1-7319 (= 1E 0050.3-7335) has been discovered during Einstein observations (e.g. Seward and Mitchell 1981), but the nature of the source could not be determined from these Einstein observations. The source has been found to coincide with the nebular complex DEM 70 in the SMC (Davies, Elliot & Meaburn 1976). RX J0052.1-7319 has been found as a spectrally hard and highly variable X-ray source in the ROSAT PSPC X-ray survey of Kahabka & Pietsch (1996) and it is Number 84 in the ROSAT PSPC X-ray catalog of Kahabka et al. (1999).
Due to its spectral hardness and the observed time variability of the X-ray flux it has been classified as a persistent and highly variable X-ray source and a candidate X-ray binary system in the Small Magellanic Cloud by Kahabka & Pietsch (1996).
X-ray pulsations from this source were not detected for a long time and the nature of the source remained unclear. But recently 15.3 sec X-ray pulsations have been discovered during ROSAT HRI and BATSE observations performed in Nov/Dec 1996 (Lamb et al. 1999). This indicated an X-ray pulsar associated with this source.
The source is contained in the catalog of X-ray sources detected with the ASCA satellite in the field of the SMC (Yokogawa et al. 1999). It has been found to be a comparatively weak X-ray source and no pulsations have been found for this source from the ASCA observations.
Searches for an optical counterpart in a 10โณ error circle for the X-ray source have been performed by Israel & Stella (1999). They found a B-type star with R=14.54$`\pm `$0.03 which shows H$`\alpha `$ emission, indicating a Be-type nature of the star. In addition they found another object with R=16.05$`\pm `$0.05.
The R=14.5 mag star has been found to be contained in the OGLE microlensing database towards the SMC by Udalski et al. (1999). It is a long-term variable star with quasiperiodic light variation of amplitude 0.13 mag in the I band. A possible period of $`\sim `$600-700 days has been found for the star. But it is rather uncertain due to the comparable length of the used OGLE SMC database (of 745 days, from 1997 Jan. 17 to 1999 Feb. 1). A second variable object with V=15.9 has been found in the X-ray error circle which may be identical with the object found by Israel & Stella (1999).
Here we discuss the possible nature of RX J0052.1-7319. We make use of two ROSAT HRI observations performed in 1995 and 1996 in a systematic program by the author to study the time variability of (candidate) X-ray binary systems in the Small Magellanic Cloud. First and preliminary results have been reported elsewhere (Kahabka 1999a,b).
## 2 Observations
The observations discussed in this paper have been performed with the HRI detector of the ROSAT satellite (Trรผmper 1983) in 1995 and 1996. In Table 1 a log of these observations is given. The 1995 observation was centered on the SMC supernova remnant SNR 0049-736 = N19 (cf. Kahabka et al. 1999). The 1996 observation was centered on the supersoft X-ray source RX J0048.4-7332 (cf. Kahabka et al. 1994). The source was at an off-axis angle of 5โฒ and 21โฒ in the 1995 and 1996 observation respectively. It was in the 1996 observation close to the rim of the HRI detector and affected by the varying attitude of the satellite.
## 3 Results
### 3.1 The X-ray position
During the 1995 HRI observation the source had a HRI count rate of $`(5.6\pm 2.5)\times 10^{-3}\,\mathrm{s}^{-1}`$ and was quite faint. It was in the central field of the detector and an accurate position could be determined. The observation extended over more than half a year (cf. Fig. 1). Three time intervals have been analyzed independently to constrain the X-ray position and to determine the positional uncertainty due to a variable satellite aspect. As there are two time variable optical counterparts in the previously reported ROSAT PSPC 11โณ error box of RX J0052.1-7319 (Kahabka & Pietsch 1996) a more accurate HRI position may allow one to determine the association of the X-ray source to either of these sources. As the nature of the optically fainter source seems not to be constrained (e.g. in terms of a stellar source or a background AGN) it is also not clear whether it is a detectable X-ray source. In principle both objects may contribute to the observed X-ray source.
From the MayโJune 1995 ROSAT HRI observation a mean position R.A. = 0<sup>h</sup>52<sup>m</sup>15<sup>s</sup>.5, Decl. = -73ยฐ19โฒ14โณ (equinox 2000.0; $`\pm `$4โณ at 90% confidence) has been reported by Kahabka (1999a).
We derive from the three observations (cf. Tab. 2) an average position R.A. = 0<sup>h</sup>52<sup>m</sup>15<sup>s</sup>.6, Decl. = -73ยฐ19โฒ13โณ (equinox 2000.0) which is in agreement with the position derived by Kahabka (1999a). From the mean deviation we derive an error radius of $`\sim `$4โณ (at 90% confidence). Assuming that this is the positional error due to the uncertain satellite attitude we constrain the total (statistical and systematic) positional error at 90% confidence to 6โณ.
The source is observed during the 1996 observation at a large off-axis angle of 21โฒ. The 50% power radius of the HRI point-spread function is considerable (32โณ) at such an off-axis angle but the position may be more accurately determined due to the central core of the point-spread function. The statistical 90% error radius determined with a maximum likelihood analysis in EXSAS is $`\sim `$5โณ. The positional uncertainty taking systematic errors into account may be between these two limits. Still the position of the source found in the 1996 observation agrees with the position of the source found in the 1995 observation. For comparison the position derived for the symbiotic nova RX J0048.4-7332 during the same observation deviates $`\sim `$5โณ from the optical position (Morgan, 1992).
### 3.2 X-ray pulsations
We have searched for the 15.3 sec pulsations reported in this source by Lamb et al. (1999) in the 1996 data. The event times have been projected from the spacecraft to the solar-system barycenter with standard EXSAS software (Zimmermann et al. 1994). In addition the photon event table has been screened by selecting the wobble phase interval (0.8,1.2) using standard EXSAS software. This procedure has been applied as the point-spread function of the source is heavily affected by the very outer edge of the detector. Due to the satellite wobble the point-spread function of the source was temporarily outside of the detector. By applying this selection we screened the data and accepted only time intervals when the point-spread function was to a large fraction within the detector. The source count rate is strongly affected by this effect. From Fig. 1 we see that the maximum effective background subtracted count rate was large, i.e. $`(1.1\pm 0.1)\,\mathrm{s}^{-1}`$. Assuming that this is the count rate when the source was to a large degree inside the detector we conclude that the source has about this count rate. Interestingly this is about 1.5$`\times `$ the count rate as reported from the ROSAT HRI observations of Lamb et al. (1999) which were performed $`\sim `$1-2 months after our observation. Apparently the peak of the outburst occurred either during our observation or even earlier. Assuming a spectrum with an absorbing column density of $`3\times 10^{21}\,\mathrm{H~atoms~cm^{-2}}`$, a photon index of 1.0, a HRI count rate of $`1.1\,\mathrm{s}^{-1}`$ corresponds to a flux of $`9.6\times 10^{-11}\,\mathrm{erg\,cm^{-2}\,s^{-1}}`$ and to an absorbed luminosity of $`4.1\times 10^{37}\,\mathrm{erg\,s^{-1}}`$ and to an unabsorbed luminosity of $`5.2\times 10^{37}\,\mathrm{erg\,s^{-1}}`$.
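The flux-to-luminosity conversion can be verified directly; the sketch below (ours) assumes an SMC distance of 60 kpc, which is our assumption since the text only refers to the SMC:

```python
import math

kpc = 3.086e21                  # [cm]
d = 60.0 * kpc                  # SMC distance (assumed)
F_x = 9.6e-11                   # absorbed flux [erg cm^-2 s^-1] for 1.1 cts/s

L = 4.0 * math.pi * d**2 * F_x  # isotropic luminosity
print(f"L ~ {L:.1e} erg/s")      # ~4.1e37, as quoted
```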
A best period of 15.3$`\pm `$0.1 sec has been derived from the 1996 data (cf. Fig. 2). The period is not very accurately constrained as the observation was short (1.5 ksec) and additional screening of the data reduced the effective exposure time to 0.47 ksec. We used 6 and 4 phase bins respectively and the pulse profile is given in Fig. 2. The period uncertainty as determined from the relation $`\delta \mathrm{P}=\mathrm{P}/(\mathrm{T}_{\mathrm{obs}}\times \mathrm{N}_{\mathrm{bin}})`$ (with the effective exposure time $`T_{\mathrm{obs}}`$ and the number of phase bins $`\mathrm{N}_{\mathrm{bin}}`$) would be $`10^{-2}`$ seconds. The significance of the period detection is $`6\sigma `$. The 1996 October 19 observation was performed about one month before the outburst reported by Lamb et al. (1999) from the ROSAT HRI and BATSE observations.
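For readers unfamiliar with the folding technique used here, the toy script below (our own illustration, not the EXSAS implementation) shows the principle on synthetic data: fold barycentred arrival times on trial periods and pick the period maximising the $`\chi ^2`$ of the binned pulse profile against a flat one.

```python
import numpy as np

rng = np.random.default_rng(0)
P_true, T_obs, rate, frac = 15.3, 470.0, 1.1, 0.4  # s, s, counts/s, pulsed frac

# synthetic arrival times with a sinusoidal modulation (accept-reject)
t = rng.uniform(0.0, T_obs, size=rng.poisson(2 * rate * T_obs))
keep = rng.uniform(size=t.size) < 0.5 * (1 + frac * np.sin(2 * np.pi * t / P_true))
t = np.sort(t[keep])

def chi2(times, period, n_bins=6):
    """Chi^2 of the folded, binned profile against a flat profile."""
    counts, _ = np.histogram(np.mod(times, period) / period, bins=n_bins, range=(0, 1))
    mean = counts.mean()
    return ((counts - mean) ** 2 / mean).sum()

periods = np.linspace(15.0, 15.6, 601)
scores = np.array([chi2(t, p) for p in periods])
print(f"best period: {periods[scores.argmax()]:.3f} s")  # close to 15.3
```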
## 4 Discussion
Kahabka & Pietsch (1996) have argued that RX J0052.1-7319 may be a persistent and highly variable X-ray source as it has already been observed during Einstein observations (Seward & Mitchell 1981; Inoue et al. 1983; Wang & Wu 1992). The source has been detected in ROSAT PSPC observations performed in October 1991 and April 1992, and with the HRI performed in May to December 1995 and October 1996. The count rate was varying by a large factor $`>`$100. The source has not always been detected during ROSAT observations. Cowley & Schmidtke (1997) report that the source was not detected during HRI observations performed in April, May, and October 1994. They derive an upper limit to the count rate of $`<0.004\,\mathrm{s}^{-1}`$ which is still in the range of count rates derived for the 1995 observation ($`(0.0057\pm 0.0025)\,\mathrm{s}^{-1}`$). Analysis of archival PSPC observations performed in December 1992 and May 1993 do not reveal a significant detection of the source which has been at large off-axis angles of 44โฒ and 35โฒ respectively. The $`2\sigma `$ upper limit PSPC count rate is $`2\times 10^{-3}\,\mathrm{s}^{-1}`$ for the May 1993 observation and a factor of 10 lower than the PSPC count rate derived for the April 1992 observation (Kahabka & Pietsch 1996). The rise in X-ray flux (of $`0.01\,\mathrm{counts\,s^{-1}\,d^{-1}}`$) during the April 1992 observation could be due to the onset of an X-ray outburst which might have happened around August 1993 and decayed till November 1993. This would give an outburst repetition time of 3.2 years (or 1200 days). This behaviour is in agreement with a High-Mass Be-type X-ray binary system which undergoes high states during periastron passage of the neutron star.
The source was indeed observed during the 1996 observations for about 50 days (from 1996 October 19 till December 9) in an outburst with an HRI count rate of $`0.7`$-$`1.2\,\mathrm{s}^{-1}`$. If this outburst is due to the periastron passage of the neutron star then constraints can be derived for the orbital period which has to be considerably longer.
RX J0052.1-7319 is in the OGLE SMC6 field and an about 2 year observational database exists for this source. Two OGLE detected variable stars are within the reported X-ray error circle of RX J0052.1-7319, SMC SC6 99923 and SMC SC6 99991 (Udalski 1999). The first is a long-term variable star with quasiperiodic light variation of amplitude 0.13 in the I band (the mean I magnitude is 14.5). The possible period is 600-700 days, but it is uncertain due to the length of the database. The second optical variable has an I magnitude in the range 16.1 to 15.7 with a pronounced linear rise in the I magnitude during a 200 day period. It may be identical with the object reported by Israel & Stella 1999 (IAU Circ. No. 7101), which remains so-far unidentified. No $`\mathrm{H}\alpha `$ line emission has been detected from this object. It could in principle be a background AGN. This may explain the fact that no X-ray pulsations have been detected with ASCA although an X-ray source has been detected at the position of RX J0052.1-7319. Both objects are within the 6<sup>โฒโฒ</sup>HRI error circle.
It appears to be likely that RX J0052.1-7319 is associated with the 14.5 mag Be-type star and is a Be-type transient. But it cannot be excluded that the X-ray source is confused by a second nearby source, e.g. a time variable background stellar source or AGN. If it is one X-ray source then it also has to be understood why it is active in X-rays for nearly two months. If the source follows the relation between rotation period and orbital period found by Corbet for the galactic Be-type transients (Corbet 1986) then the 15.3 second rotation period would correspond to an orbital period of $`\sim `$40 days. If the duration of the 1996 outburst is indeed $`\sim `$50 days then this would argue against activity related to periastron passage for such a short orbital period.
Another possibility is that the high-mass star associated with RX J0052.1-7319 periodically undergoes โeruptionsโ or outbursts (of duration a few months) during which it efficiently transfers mass towards the neutron star companion (cf. Marlborough 1997). If the star settles back to its normal configuration the mass-transfer reduces. Such a scenario would still be consistent with an orbital period of $`\sim `$40 days in this system as the duration of the X-ray outburst would be determined by the duration of the outburst of the star. A Be star undergoing periodic outbursts in our Galaxy is $`\lambda `$ Eri. In a recent work Mennickent et al. (1998) found for this system an outburst repetition period of 469 days (or 939 days). The duration of the outburst is $`\sim `$120 days.
The similarity of the long-period lightcurve of $`\lambda `$ Eri and of the OGLE microlensing light curve of the 15.9 mag star SMC SC6 99991 in the error box of RX J0052.1-7319 is striking. As it is not clear whether this object is a stellar source (in the SMC) or a time variable background AGN (cf. Kawaguchi et al. 1998) it could be related to RX J0052.1-7319. Note that the fact that no $`\mathrm{H}\alpha `$ emission is observed (Israel & Stella 1999) also applies to $`\lambda `$ Eri (Mennickent et al. 1998). One problem with this identification may be that the observed 1996 X-ray outburst occurred outside the โoptical outburstโ, although โprojectingโ the SMC SC6 99991 light curve onto the $`\lambda `$ Eri light curve a repetition period of $`\sim `$3.5 years would be obtained which is quite close to the X-ray outburst period of RX J0052.1-7319 estimated from the X-ray observations.
## 5 Summary
For RX J0052.1-7319 15.3 sec pulsations have been detected in ROSAT HRI X-ray observations performed in October 1996. The count rate of $`(1.1\pm 0.1)\,\mathrm{s}^{-1}`$ observed during this observation is the highest reported so far for this source. The corresponding X-ray luminosity is $`5\times 10^{37}\,\mathrm{erg\,s^{-1}}`$ for SMC distance. The position of the X-ray source coincides with a 14.5 mag Be-type star in the SMC and a fainter $`\sim `$15.9 mag object which remains so far unidentified. Both objects have been found to be variable with a timescale of a few hundred days and are contained in the OGLE microlensing database. It is unclear which is the optical counterpart of RX J0052.1-7319 or if even both objects contribute to the observed X-ray flux.
###### Acknowledgements.
The ROSAT project is supported by the Max-Planck-Gesellschaft and the Bundesministerium fรผr Forschung und Technologie (BMFT). This research was supported in part by the Netherlands Organisation for Scientific Research (NWO) through Spinoza Grant 08-0 to E.P.J. van den Heuvel. I thank Lex Kaper for discussions and W. Kundt for reading the manuscript. I thank the referee R.C. Lamb for useful comments.
# On local invariants of pure three-qubit states
## 1 Introduction
The invariants of many-particle states under unitary transformations which act on single particles separately (โlocalโ transformations) are of interest because they give the finest discrimination between different types of entanglement. They can be regarded as coordinates on the space of entanglement types (equivalently, the space of orbits of the group of local transformations). In this paper we study the case of pure states of three spin-$`\frac{1}{2}`$ particles, or qubits. For mixed states of two qubits, it is possible to give a complete set of invariants, describing the 9-dimensional space of orbits in terms of 18 invariants, nine of which may be taken to have only discrete values (for example, the signs of certain polynomials). For pure three-qubit states, where the space of orbits is known to be 6-dimensional, we can at present do no more than find a set of six algebraically independent invariants. We will show (Section 3) that in order to do this with polynomials in the state coordinates it is necessary to go to polynomials of order 8, and we will exhibit there a set of six independent invariants; their physical meaning is discussed in Section 4. We will also discuss (Section 5) the possibility of finding a more convenient set of non-polynomial invariants. Section 2 is an introductory discussion of the invariants of pure $`n`$-qubit states.
## 2 Pure states: general considerations
A general theory of local invariants of mixed $`n`$-particle states has been given by Rains and by Grassl et al. Here we review the part of that theory that refers to pure states.
The most general system is that of $`n`$ non-identical particles $`A,B,\mathrm{\ldots }`$ with one-particle state spaces of dimensions $`d_A,d_B,\mathrm{\ldots }`$. Let $`\{|\psi _i^X\rangle :i=1,\mathrm{\ldots },d_X\}`$ be an orthonormal basis of one-particle states of particle $`X`$; then the general $`n`$-particle state can be written
$$|\mathrm{\Psi }\rangle =\underset{ijk\mathrm{\ldots }}{\sum }t^{ijk\mathrm{\ldots }}|\psi _i^{(A)}\rangle |\psi _j^{(B)}\rangle |\psi _k^{(C)}\rangle \mathrm{\ldots }$$
where the sum is over values of $`i`$ from 1 to $`d_A`$, values of $`j`$ from 1 to $`d_B`$, and so on. By the First Fundamental Theorem of invariant theory applied to $`U(d_A),U(d_B),\mathrm{\ldots }`$, any polynomial in $`t^{ijk\mathrm{\ldots }}`$ which is invariant under the action on $`|\mathrm{\Psi }\rangle `$ of the local group $`U(d_A)\times U(d_B)\times \mathrm{\ldots }`$ is a sum of homogeneous polynomials of even degree (say $`2r`$), of the form
$$P_{\sigma \tau \mathrm{\ldots }}(\mathbf{t})=t^{i_1j_1k_1\mathrm{\ldots }}\mathrm{\cdots }t^{i_rj_rk_r\mathrm{\ldots }}\overline{t}_{i_1j_{\sigma (1)}k_{\tau (1)}\mathrm{\ldots }}\mathrm{\cdots }\overline{t}_{i_rj_{\sigma (r)}k_{\tau (r)}\mathrm{\ldots }}$$
(2.1)
where $`\sigma ,\tau ,\mathrm{\ldots }`$ are permutations of $`(1,\mathrm{\ldots },r)`$. Here $`\overline{t}_{ijk\mathrm{\ldots }}`$ is the complex conjugate of $`t^{ijk\mathrm{\ldots }}`$, and we adopt the usual summation convention on repeated indices, one in the upper position and one in the lower. Note that $`P_{\sigma \tau \mathrm{\ldots }}`$ is unchanged by simultaneous conjugation of the permutations $`\sigma ,\tau ,\mathrm{\ldots }`$:
$$P_{\sigma \tau \mathrm{\ldots }}(\mathbf{t})=P_{\sigma ^{\prime }\tau ^{\prime }\mathrm{\ldots }}(\mathbf{t})\text{ if }\sigma ^{\prime }=\kappa \sigma \kappa ^{-1},\tau ^{\prime }=\kappa \tau \kappa ^{-1},\mathrm{\ldots }$$
since such a conjugation merely expresses the effect of changing the order of the factors in each summand in $`P`$.
For two particles $`A,B`$ there is just one permutation $`\sigma `$, which we can decompose into cycles $`\kappa _1,\mathrm{\ldots },\kappa _s`$ of orders $`l_1,\mathrm{\ldots },l_s`$ with $`l_1+\mathrm{\cdots }+l_s=r`$. The polynomial $`P_\sigma (\mathbf{t})`$ then splits into a product of polynomials $`P_{\kappa _1}\mathrm{\cdots }P_{\kappa _s}`$, where $`P_\kappa `$ depends only on the order of the cycle $`\kappa `$, which is equal to half the degree of $`P_\kappa `$:
$`P_\kappa (\mathbf{t})`$ $`=t^{i_1j_1}\overline{t}_{i_1j_{\kappa (1)}}t^{i_{\kappa (1)}j_{\kappa (1)}}\overline{t}_{i_{\kappa (1)}j_{\kappa ^2(1)}}\mathrm{\cdots }`$
$`=t^{i_1j_1}\overline{t}_{i_1j_2}t^{i_2j_2}\overline{t}_{i_2j_3}\mathrm{\cdots }t^{i_lj_l}\overline{t}_{i_lj_1}`$
(by renaming the dummy indices $`j_{\kappa (1)},j_{\kappa ^2(1)},\mathrm{\ldots },j_{\kappa ^{l-1}(1)}`$)
$$=\mathrm{tr}(\rho _B^l)$$
where $`\rho _B=\mathrm{tr}_A|\mathrm{\Psi }\rangle \langle \mathrm{\Psi }|`$ is the density matrix of particle $`B`$, with matrix elements
$$(\rho _B)_k^j=t^{ij}\overline{t}_{ik}.$$
Thus the polynomial invariants of a two-particle pure state are the sums of the powers of the eigenvalues of $`\rho _B`$. These can all be expressed in terms of the first $`d_B`$ power-sums, which generate the algebra of invariant polynomials and are algebraically independent if the eigenvalues are independent. However, they are not independent if $`d_A<d_B`$, for in that case some of the eigenvalues of $`\rho _B`$ vanish. But clearly the same argument could be used to show that the algebra of invariants is generated by the traces of the powers of $`\rho _A`$, which is consistent because the non-zero eigenvalues of $`\rho _A`$ are the same as those of $`\rho _B`$. Thus the algebra of polynomial invariants of two-particle pure states has a set of independent generators
$$\mathrm{tr}(\rho _A^l)=\mathrm{tr}(\rho _B^l),\qquad l=1,\mathrm{\ldots },\mathrm{min}(d_A,d_B).$$
The non-zero eigenvalues of $`\rho _A`$ (or $`\rho _B`$) are in fact the squares of the coefficients in the Schmidt decomposition of $`|\mathrm{\Psi }\rangle `$, so what we have here is the well-known fact that the local invariants of a pure two-particle state are the symmetric functions of the Schmidt coefficients.
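A quick numerical illustration of this statement (our own sketch, not from the paper): the power sums $`\mathrm{tr}(\rho _B^l)`$ coincide with the symmetric functions of the squared singular values (Schmidt coefficients) of the coefficient matrix $`t`$.

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 2, 3
t = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
t /= np.linalg.norm(t)                  # <Psi|Psi> = 1

rho_A = t @ t.conj().T                  # (rho_A)^i_l = t^{ij} tbar_{lj}
rho_B = t.T @ t.conj()                  # (rho_B)^j_k = t^{ij} tbar_{ik}
s = np.linalg.svd(t, compute_uv=False)  # Schmidt coefficients

for l in range(1, min(dA, dB) + 1):
    p = np.trace(np.linalg.matrix_power(rho_B, l)).real
    assert np.allclose(p, np.trace(np.linalg.matrix_power(rho_A, l)).real)
    assert np.allclose(p, np.sum(s ** (2 * l)))
print("power sums match the symmetric functions of the Schmidt coefficients")
```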
## 3 Polynomial invariants of three-qubit states
For the remainder of the paper we consider three spin-$`\frac{1}{2}`$ particles $`A,B,C`$. The classification of pure states of this system has been discussed in , and their invariants in . It is known that the dimension of the space of orbits is 6; there are therefore six algebraically independent local invariants. We will show that there are no more than five algebraically independent invariants of degree less than 8, and exhibit a set of six algebraically independent invariants with maximum degree 8.<sup>1</sup><sup>1</sup>1I understand that similar conclusions have been reached by Markus Grassl .
The vector space of homogeneous invariants of degree $`2r`$ is spanned by functions $`P_{\sigma \tau }`$ labelled by pairs of elements of $`S_r`$, the group of permutations of $`r`$ things. Thus there is one independent invariant of degree 2,
$$I_1=P_{ee}(\mathbf{t})=t^{ijk}\overline{t}_{ijk}=\langle \mathrm{\Psi }|\mathrm{\Psi }\rangle $$
where $`e`$ is the identity permutation, so that $`S_1=\{e\}`$. If $`S_2=\{e,\sigma \}`$, the four linearly independent quartic invariants are
$`P_{ee}(\mathbf{t})`$ $`=t^{i_1j_1k_1}\overline{t}_{i_1j_1k_1}t^{i_2j_2k_2}\overline{t}_{i_2j_2k_2}=\langle \mathrm{\Psi }|\mathrm{\Psi }\rangle ^2,`$
$`I_2=P_{\sigma \sigma }(\mathbf{t})`$ $`=t^{i_1j_1k_1}\overline{t}_{i_1j_2k_2}t^{i_2j_2k_2}\overline{t}_{i_2j_1k_1}=\mathrm{tr}(\rho _A^2),`$
$`I_3=P_{\sigma e}(\mathbf{t})`$ $`=t^{i_1j_1k_1}\overline{t}_{i_1j_2k_1}t^{i_2j_2k_2}\overline{t}_{i_2j_1k_2}=\mathrm{tr}(\rho _B^2),`$
$`I_4=P_{e\sigma }(\mathbf{t})`$ $`=t^{i_1j_1k_1}\overline{t}_{i_1j_1k_2}t^{i_2j_2k_2}\overline{t}_{i_2j_2k_1}=\mathrm{tr}(\rho _C^2)`$
where $`\rho _A,\rho _B,\rho _C`$ are the one-particle density matrices:
$$\rho _X=\mathrm{tr}_{YZ}|\mathrm{\Psi }\rangle \langle \mathrm{\Psi }|\text{ where }\{X,Y,Z\}=\{A,B,C\}\text{ in some order.}$$
Thus there are at most four algebraically independent invariants of degree $`\le 4`$.
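These contractions are easy to evaluate numerically; the following sketch (ours) computes $`I_1,\mathrm{\ldots },I_4`$ for a random three-qubit state with `numpy.einsum`, using the labelling $`I_2=\mathrm{tr}(\rho _A^2)`$, $`I_3=\mathrm{tr}(\rho _B^2)`$, $`I_4=\mathrm{tr}(\rho _C^2)`$ adopted above.

```python
import numpy as np

rng = np.random.default_rng(1)
t = rng.normal(size=(2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2))
t /= np.linalg.norm(t)

rho_A = np.einsum('ijk,ljk->il', t, t.conj())  # trace over B and C
rho_B = np.einsum('ijk,imk->jm', t, t.conj())
rho_C = np.einsum('ijk,ijn->kn', t, t.conj())

I1 = np.einsum('ijk,ijk->', t, t.conj()).real  # <Psi|Psi>
I2 = np.trace(rho_A @ rho_A).real
I3 = np.trace(rho_B @ rho_B).real
I4 = np.trace(rho_C @ rho_C).real
print(I1, I2, I3, I4)
```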
Higher-order invariants $`P_{\pi \sigma }(\mathbf{t})`$ with $`\pi ,\sigma \in S_3`$ are functions of the four quadratic and quartic invariants if $`\pi `$ and $`\sigma `$ are equal or if either of them is the identity. To see this, note first that if $`\pi =\sigma `$,
$`P_{\sigma \sigma }(\mathbf{t})`$ $`=t^{i_1j_1k_1}\mathrm{\cdots }t^{i_rj_rk_r}\overline{t}_{i_1j_{\sigma (1)}k_{\sigma (1)}}\mathrm{\cdots }\overline{t}_{i_rj_{\sigma (r)}k_{\sigma (r)}}`$
$`=(\rho _A)_{i_{\tau (1)}}^{i_1}(\rho _A)_{i_{\tau (2)}}^{i_2}\mathrm{\cdots }(\rho _A)_{i_{\tau (r)}}^{i_r}`$
where $`\tau =\sigma ^{-1}`$. This is a product of traces of powers of $`\rho _A`$. But since $`\rho _A`$ is a $`2\times 2`$ matrix, the Cayley-Hamilton theorem enables us to express $`\mathrm{tr}(\rho _A^r)`$ for $`r\ge 3`$ as a function of $`\mathrm{tr}\rho _A`$ and $`\mathrm{tr}\rho _A^2`$.
Secondly, if $`\pi =e`$,
$`P_{e\sigma }(\mathbf{t})`$ $`=t^{i_1j_1k_1}\mathrm{\cdots }t^{i_rj_rk_r}\overline{t}_{i_1j_1k_{\sigma (1)}}\mathrm{\cdots }\overline{t}_{i_rj_rk_{\sigma (r)}}`$
$`=(\rho _C)_{k_{\sigma (1)}}^{k_1}\mathrm{\cdots }(\rho _C)_{k_{\sigma (r)}}^{k_r}`$
which is a product of traces of powers of $`\rho _C`$; and similarly $`P_{\pi e}(\mathbf{t})`$ is a product of traces of powers of $`\rho _B`$.
Thus the only sextic invariants $`P_{\pi \sigma }`$ which might be algebraically independent of the quadratic and quartic invariants are those for which $`\pi `$ and $`\sigma `$ are distinct 2-cycles, or distinct 3-cycles, or one is a 2-cycle and the other is a 3-cycle. Moreover, in each of these categories all the possible pairs $`(\pi ,\sigma )`$ are related by simultaneous conjugation and therefore give the same invariant. There are therefore three possible independent sextic invariants:
1. $`\pi `$, $`\sigma `$ distinct 3-cycles, say $`\pi =(123)`$, $`\sigma =(132)`$. This gives
$`I_5=P_{(123)(132)}(\mathbf{t})`$ $`=t^{i_1j_1k_1}t^{i_2j_2k_2}t^{i_3j_3k_3}\overline{t}_{i_1j_2k_3}\overline{t}_{i_2j_3k_1}\overline{t}_{i_3j_1k_2}`$
$`=(\rho _{BC})_{j_2k_3}^{j_1k_1}(\rho _{BC})_{j_3k_1}^{j_2k_2}(\rho _{BC})_{j_1k_2}^{j_3k_3}`$ (3.1)
where $`\rho _{BC}=\mathrm{tr}_A|\mathrm{\Psi }\rangle \langle \mathrm{\Psi }|`$ is the density matrix of the two-particle system of particles $`B`$ and $`C`$. This invariant was identified by Kempe as one which distinguishes three-particle states which have identical density matrices for every subsystem. It has exactly the same form when expressed as a function of $`\rho _{AB}`$ or of $`\rho _{AC}`$.
2. $`\pi `$, $`\sigma `$ distinct 2-cycles, say $`\pi =(12)`$, $`\sigma =(23)`$. This gives
$`I_5^{\prime }=P_{(12)(23)}(\mathbf{t})`$ $`=t^{i_1j_1k_1}t^{i_2j_2k_2}t^{i_3j_3k_3}\overline{t}_{i_1j_2k_1}\overline{t}_{i_2j_1k_3}\overline{t}_{i_3j_3k_2}`$
$`=(\rho _B)_{j_2}^{j_1}(\rho _C)_{k_2}^{k_3}(\rho _{BC})_{j_1k_3}^{j_2k_2}`$
$`=\mathrm{tr}[(\rho _B\otimes \rho _C)\rho _{BC}].`$ (3.2)
3. $`\pi `$ a 2-cycle, say (12), and $`\sigma `$ a 3-cycle, say (123), or vice versa. These give
$`I_5^{\prime \prime }=P_{(12)(123)}(\mathbf{t})`$ $`=t^{i_1j_1k_1}t^{i_2j_2k_2}t^{i_3j_3k_3}\overline{t}_{i_1j_2k_2}\overline{t}_{i_2j_1k_3}\overline{t}_{i_3j_3k_1}`$
$`=(\rho _{AC})_{i_2k_3}^{i_1k_1}(\rho _A)_{i_1}^{i_2}(\rho _C)_{k_1}^{k_3}`$
$`=\mathrm{tr}[(\rho _A\otimes \rho _C)\rho _{AC}]`$ (3.3)
and
$`I_5^{\prime \prime \prime }=P_{(123)(12)}(\mathbf{t})`$ $`=t^{i_1j_1k_1}t^{i_2j_2k_2}t^{i_3j_3k_3}\overline{t}_{i_1j_2k_2}\overline{t}_{i_2j_3k_1}\overline{t}_{i_3j_1k_3}`$
$`=\mathrm{tr}[(\rho _A\otimes \rho _B)\rho _{AB}].`$ (3.4)
Primes have been placed on the symbols for these last three invariants because they will not feature in our final list of independent invariants, each of them being expressible in terms of $`I_5`$ and the quadratic and quartic invariants. To show this, we write $`I_5`$ in terms of $`2\times 2`$ matrices by considering the $`4\times 4`$ matrix $`\rho _{BC}`$ as a set of four $`2\times 2`$ matrices $`X_{j_2}^{j_1}`$: the matrix elements of $`X_{j_2}^{j_1}`$, labelled by $`(k_1,k_2)`$, are
$$(X_{j_2}^{j_1})_{k_2}^{k_1}=(\rho _{BC})_{j_2k_2}^{j_1k_1}.$$
Then
$$I_5=\mathrm{tr}(X_{j_2}^{j_1}X_{j_1}^{j_3}X_{j_3}^{j_2}).$$
Now we use the $`2\times 2`$ matrix identity
$$\begin{array}{cc}\hfill \mathrm{tr}(XYZ)+\mathrm{tr}(XZY)=\mathrm{tr}X\mathrm{tr}(YZ)& +\mathrm{tr}Y\mathrm{tr}(ZX)+\mathrm{tr}Z\mathrm{tr}(XY)\hfill \\ & -\mathrm{tr}X\mathrm{tr}Y\mathrm{tr}Z\hfill \end{array}$$
(3.5)
which holds for any $`2\times 2`$ matrices $`X,Y,Z`$, and can be obtained by trilinearising (or โpolarisingโ: replace $`X`$ first by $`X+Y`$ and then by $`X+Y+Z`$) the cubic identity
$$\mathrm{tr}X^3=\frac{3}{2}\mathrm{tr}X\mathrm{tr}X^2-\frac{1}{2}(\mathrm{tr}X)^3$$
which in turn is obtained by taking the trace of the Cayley-Hamilton theorem. Apply (3.5) to the matrices $`X_{j_2}^{j_1}`$, $`X_{j_1}^{j_3}`$, $`X_{j_3}^{j_2}`$ occurring in the expression for $`I_5`$. The first term on the left-hand side is $`I_5`$; the second is
$$\mathrm{tr}(X_{j_2}^{j_1}X_{j_3}^{j_2}X_{j_1}^{j_3})=\mathrm{tr}(\rho _{BC}^3)=\mathrm{tr}(\rho _A^3)$$
since the non-zero eigenvalues of $`\rho _{BC}`$ are the same as those of $`\rho _A`$ (both being the squares of the coefficients in a Schmidt decomposition of $`|\mathrm{\Psi }`$). The first term on the right-hand side is
$`\mathrm{tr}(X_{j_2}^{j_1})\mathrm{tr}(X_{j_1}^{j_3}X_{j_3}^{j_2})`$ $`=(\rho _B)_{j_2}^{j_1}(\rho _{BC})_{j_1k_2}^{j_3k_1}(\rho _{BC})_{j_3k_1}^{j_2k_2}`$
$`=(\rho _B)_{j_2}^{j_1}t^{i_1j_3k_1}\overline{t}_{i_1j_1k_2}t^{i_2j_2k_2}\overline{t}_{i_2j_3k_1}`$
$`=(\rho _B)_{j_2}^{j_1}(\rho _A)_{i_2}^{i_1}(\rho _{AB})_{i_1j_1}^{i_2j_2}`$
$`=\mathrm{tr}[(\rho _A\otimes \rho _B)\rho _{AB}];`$
the second and third terms differ from the first only by permuting the indices $`j_1,j_2,j_3`$ and therefore (after summing) are equal to it; and the last term is
$$\mathrm{tr}(X_{j_2}^{j_1})\mathrm{tr}(X_{j_3}^{j_2})\mathrm{tr}(X_{j_1}^{j_3})=\mathrm{tr}(\rho _B^3).$$
Thus (3.5) gives
$$I_5=3\mathrm{tr}[(\rho _A\otimes \rho _B)\rho _{AB}]-\mathrm{tr}(\rho _A^3)-\mathrm{tr}(\rho _B^3).$$
(3.6)
Similarly, using the alternative expressions for $`I_5`$ in terms of $`\rho _{AB}`$ and $`\rho _{AC}`$ gives
$`I_5`$ $`=3\mathrm{tr}[(\rho _B\otimes \rho _C)\rho _{BC}]-\mathrm{tr}(\rho _B^3)-\mathrm{tr}(\rho _C^3)`$ (3.7)
$`=3\mathrm{tr}[(\rho _A\otimes \rho _C)\rho _{AC}]-\mathrm{tr}(\rho _A^3)-\mathrm{tr}(\rho _C^3).`$ (3.8)
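Identities (3.6)-(3.8) are easy to confirm numerically; the self-contained sketch below (ours) checks (3.6) for a random state, evaluating Kempeโs invariant directly from its definition (3.1):

```python
import numpy as np

rng = np.random.default_rng(1)
t = rng.normal(size=(2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2))
t /= np.linalg.norm(t)

rho_A = np.einsum('ijk,ljk->il', t, t.conj())
rho_B = np.einsum('ijk,imk->jm', t, t.conj())
rho_AB = np.einsum('ijk,lmk->ijlm', t, t.conj()).reshape(4, 4)

# Kempe's invariant I_5, the cyclic contraction of eq. (3.1)
I5 = np.einsum('abc,def,ghi,aei,dhc,gbf->',
               t, t, t, t.conj(), t.conj(), t.conj()).real

cube = lambda r: np.trace(np.linalg.matrix_power(r, 3)).real
rhs = 3 * np.trace(np.kron(rho_A, rho_B) @ rho_AB).real - cube(rho_A) - cube(rho_B)
assert np.allclose(I5, rhs)  # eq. (3.6); (3.7) and (3.8) check the same way
print("I5 =", I5)
```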
So there are at most five independent invariants of degree 6 or less. Since six invariants are needed to parametrise the orbits, we must use at least one invariant of degree 8 or more. A convenient, and physically significant, choice is the 3-tangle identified by Coffman, Kundu and Wootters:
$$I_6=\frac{1}{4}\tau _{123}^2=\left|\epsilon _{i_1i_2}\epsilon _{i_3i_4}\epsilon _{j_1j_2}\epsilon _{j_3j_4}\epsilon _{k_1k_3}\epsilon _{k_2k_4}t^{i_1j_1k_1}t^{i_2j_2k_2}t^{i_3j_3k_3}t^{i_4j_4k_4}\right|^2$$
(3.9)
where $`\epsilon _{ij}`$ is the antisymmetric tensor in two dimensions ($`\epsilon _{12}=-\epsilon _{21}=1`$, $`\epsilon _{11}=\epsilon _{22}=0`$). The expression between the modulus signs is an SU$`(2)^3`$ invariant (though not a U$`(2)^3`$ invariant: its phase is not invariant under local transformations), so its modulus is a local invariant. The invariant $`I_6`$ can be put into our standard form of a sum of terms like (2.1) by multiplying the SU$`(2)^3`$ invariant by its complex conjugate
$$\epsilon ^{i_5i_6}\epsilon ^{i_7i_8}\epsilon ^{j_5j_6}\epsilon ^{j_7j_8}\epsilon ^{k_5k_7}\epsilon ^{k_6k_8}\overline{t}_{i_5j_5k_5}\overline{t}_{i_6j_6k_6}\overline{t}_{i_7j_7k_7}\overline{t}_{i_8j_8k_8}$$
(where the contravariant tensor $`\epsilon ^{ij}`$ is numerically the same as $`\epsilon _{ij}`$), and using the identity
$$\epsilon ^{ab}\epsilon _{cd}=\delta _c^a\delta _d^b-\delta _d^a\delta _c^b.$$
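The contraction in (3.9) maps directly onto an einsum; the sketch below (ours) evaluates it for a random state and checks numerically that the modulus is unchanged under permutations of the three qubits, as expected of the 3-tangle.

```python
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])  # epsilon_{12} = -epsilon_{21} = 1

def I6(t):
    """Modulus squared of the degree-4 invariant inside eq. (3.9)."""
    E = np.einsum('ab,cd,ef,gh,ik,jl,aei,bfj,cgk,dhl->',
                  eps, eps, eps, eps, eps, eps, t, t, t, t)
    return abs(E) ** 2

rng = np.random.default_rng(2)
t = rng.normal(size=(2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2))
t /= np.linalg.norm(t)

vals = [I6(np.transpose(t, p)) for p in [(0, 1, 2), (1, 0, 2), (2, 1, 0), (0, 2, 1)]]
assert np.allclose(vals, vals[0])  # permutation invariance of the modulus
print("I6 =", I6(t))
```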
To show that the invariants $`I_1,\mathrm{\ldots },I_6`$ are independent it is sufficient to show that their gradients are linearly independent at some point. To calculate these gradients in the 16-(real)dimensional space of pure states, we can treat $`t^{ijk}`$ and $`\overline{t}_{ijk}`$ formally as independent coordinates; the fact that our invariants are real means that the 16 components of the gradient of $`I_a`$ are the real and imaginary parts of the partial derivatives with respect to $`t^{ijk}`$. The results of calculating $`\partial I_a/\partial t^{ijk}`$ and putting
$$t^{000}=t^{010}=t^{110}=0,t^{011}=t^{100}=t^{101}=t^{111}=1,t^{001}=\mathrm{i},$$
$$\overline{t}^{ijk}=\text{ complex conjugate of }t^{ijk}$$
(where $`0`$ and $`1`$ are the two possible values of $`i,j,k`$) are as follows:
$`\nabla _{\mathbf{t}}I_1`$ $`=(0,\mathrm{i},0,1,1,1,0,1)`$
$`\nabla _{\mathbf{t}}I_2`$ $`=(2\mathrm{i},8\mathrm{i},2,8,4,10,2,8)`$
$`\nabla _{\mathbf{t}}I_3`$ $`=(0,2-8\mathrm{i},0,6-2\mathrm{i},6,8-2\mathrm{i},2+2\mathrm{i},6+2\mathrm{i})`$
$`\nabla _{\mathbf{t}}I_4`$ $`=(2-2\mathrm{i},2-6\mathrm{i},0,6-2\mathrm{i},6,8-2\mathrm{i},0,8+2\mathrm{i})`$
$`\nabla _{\mathbf{t}}I_5`$ $`=(6-9\mathrm{i},12-36\mathrm{i},6,30-12\mathrm{i},21,45-12\mathrm{i},9+6\mathrm{i},36+12\mathrm{i})`$
$`\nabla _{\mathbf{t}}I_6`$ $`=(8,0,8+16\mathrm{i},8,8,0,8\mathrm{i},0)`$
These six vectors are indeed linearly independent over the reals.
## 4 Physical significance of the invariants
The invariant $`I_1`$ is just the norm of the three-party state and therefore has no physical significance; we will normally set it equal to 1. The three invariants $`I_2,I_3,I_4`$ are one-particle quantities, giving the eigenvalues of the one-particle density matrices; they are equivalent to the one-particle entropies, which measure how entangled each particle is with the other two together. The entanglement in each pair of particles and the three-way entanglement of the whole system are all given by the last invariant $`I_6`$, as follows. A good measure of the entanglement of two qubits $`A,B`$ in a mixed state is the *2-tangle* $`\tau _{AB}`$, which is a monotonic function of the entanglement of formation. The three-way entanglement of three qubits $`A,B,C`$ in a pure state is measured by the *3-tangle*
$$\tau _{ABC}=\tau _{A(BC)}-\tau _{AB}-\tau _{AC}$$
where $`\tau _{A(BC)}=4\det \rho _A=2(I_1^2-I_2)`$ is another measure (equivalent to the entropy of $`A`$) of how entangled $`A`$ is with the pair $`(BC)`$. It can be shown that $`\tau _{ABC}`$ is invariant under permutations of $`A`$, $`B`$ and $`C`$; in fact it is equal to our invariant $`I_6`$. By solving the equations expressing the permutation invariance of $`\tau _{ABC}`$, we can now give formulae for all three 2-tangles and the 3-tangle in terms of our invariants:
$`\tau _{AB}`$ $`=1-I_2-I_3+I_4-\frac{1}{2}I_6,`$
$`\tau _{AC}`$ $`=1-I_2+I_3-I_4-\frac{1}{2}I_6,`$
$`\tau _{BC}`$ $`=1+I_2-I_3-I_4-\frac{1}{2}I_6,`$
$$\tau _{ABC}=I_6.$$
The 3-tangle $`I_6`$ is maximal for the GHZ state $`|000\rangle +|111\rangle `$, whose 2-tangles vanish; on the other hand, $`I_6`$ vanishes at the states $`p|100\rangle +q|010\rangle +r|001\rangle `$.
The remaining invariant, $`I_5`$, is a different and independent measure of the entanglement of each pair of qubits. Its existence shows that the 2-tangles and 3-tangle are not sufficient to determine a pure 3-qubit state up to local equivalence. As is shown by eqs. (3.6)-(3.8), this invariant is equivalent to any one of the two-qubit quantities $`\kappa _{AB}=\mathrm{tr}[(\rho _A\otimes \rho _B)\rho _{AB}]`$ (together with one-qubit quantities), and it relates these three 2-qubit quantities to each other. If we regard the hermitian operators $`\rho _A`$ and $`\rho _B`$ as observables, then $`\kappa _{AB}`$ is the expectation value of $`\rho _A\otimes \rho _B`$, so $`\kappa _{AB}-I_2I_3`$ is the correlation between the eigenvalues of $`\rho _A`$ and $`\rho _B`$. It is related to the relative entropy of the two-qubit state $`\rho _{AB}`$ relative to the product state $`\rho _A\otimes \rho _B`$, and is a second measure of the entanglement of the pair $`(A,B)`$, independent of the 2-tangle $`\tau _{AB}`$.
Finally, we give the values of these invariants for some special states (all of which are taken to be normalised).
For a factorised state $`a|111\rangle +b|100\rangle `$,
$$I_2=1,I_3=I_4=a^4+b^4,I_5=a^6+b^6,I_6=0.$$
For a generalised GHZ state $`p|000\rangle +q|111\rangle `$,
$$I_2=I_3=I_4=p^4+q^4,I_5=p^6+q^6,I_6=4p^2q^2.$$
For the minimally 3-tangled state $`p|100\rangle +q|010\rangle +r|001\rangle `$,
$$I_2=p^4+(q^2+r^2)^2,I_3=q^4+(r^2+p^2)^2,I_4=r^4+(p^2+q^2)^2,$$
$$I_5=p^6+q^6+r^6+3p^2q^2r^2,I_6=0.$$
## 5 Canonical coordinates
An alternative type of invariant, not necessarily a polynomial in the coordinates of the state vector, is obtained by specifying a canonical point on each orbit. The values of the invariant functions at any point are then the coordinates of the canonical point on its orbit. The canonical points lie on a manifold corresponding to the space of orbits, and their coordinates can (at least locally) be expressed in terms of an appropriate number of parameters.
One form of canonical state was suggested independently by Linden and Popescu and by Schlienz, who pointed out that any pure state of three qubits can be written as
$$\begin{array}{cc}\hfill |\mathrm{\Psi }\rangle =& \mathrm{cos}\theta |0\rangle \left(\mathrm{cos}\varphi |0\rangle |0\rangle +\mathrm{sin}\varphi |1\rangle |1\rangle \right)\hfill \\ & +\mathrm{sin}\theta |1\rangle \left(r(\mathrm{sin}\varphi |0\rangle |0\rangle +\mathrm{cos}\varphi |1\rangle |1\rangle )+s|0\rangle |1\rangle +t\mathrm{e}^{i\omega }|1\rangle |0\rangle \right)\hfill \end{array}$$
(5.1)
where $`0\le \theta ,\varphi \le \pi /4`$, $`0\le \omega <2\pi `$, and $`r,s,t`$ are non-negative real numbers satisfying $`r^2+s^2+t^2=1`$. Simpler canonical forms, in which the number of non-zero coefficients is reduced to five, have since been proposed by Acรญn et al. and Carteret et al.; the latter form is
$$p|100\rangle +q|010\rangle +r|001\rangle +s|111\rangle +t\mathrm{e}^{\mathrm{i}\theta }|000\rangle $$
where $`p,q,r,s,t`$ and $`\theta `$ are real parameters. It is straightforward to calculate the invariants $`I_1,\mathrm{\ldots },I_6`$ in terms of either of the above sets of parameters; the results are not enlightening.
We will now describe another, more intrinsically defined, form of canonical point whose coordinates are more simply related to $`I_1,\mathrm{\ldots },I_6`$.
The three-particle state $`|\mathrm{\Psi }\rangle `$ has three Schmidt decompositions:
$$\begin{array}{cc}\hfill |\mathrm{\Psi }\rangle & =\underset{i}{\sum }\alpha _i|\varphi _i\rangle _A|\mathrm{\Phi }_i\rangle _{BC}\hfill \\ & =\underset{i}{\sum }\beta _i|\theta _i\rangle _B|\mathrm{\Theta }_i\rangle _{AC}\hfill \\ & =\underset{i}{\sum }\gamma _i|\chi _i\rangle _C|X_i\rangle _{AB}\hfill \end{array}$$
(5.2)
where $`\{|\varphi _i\rangle \}`$, $`\{|\theta _i\rangle \}`$ and $`\{|\chi _i\rangle \}`$ ($`i=0,1`$) are orthonormal pairs of one-particle states, $`\{|\mathrm{\Phi }_i\rangle \}`$, $`\{|\mathrm{\Theta }_i\rangle \}`$ and $`\{|X_i\rangle \}`$ are orthonormal pairs of two-particle states, the suffices indicate which of the three particles $`A,B,C`$ are in which state, and $`\{\alpha _i\}`$, $`\{\beta _i\}`$ and $`\{\gamma _i\}`$ are pairs of non-negative real numbers satisfying
$$\alpha _1^2+\alpha _2^2=\beta _1^2+\beta _2^2=\gamma _1^2+\gamma _2^2=\mathrm{\Psi }|\mathrm{\Psi }=I_1.$$
(5.3)
These Schmidt coefficients, being the positive square roots of the eigenvalues of the one-particle density matrices $`\rho _A,\rho _B,\rho _C`$, are related to the quartic invariants by
$$\begin{array}{cc}\hfill \alpha _1^4+\alpha _2^4& =\mathrm{tr}(\rho _A^2)=I_2,\hfill \\ \hfill \beta _1^4+\beta _2^4& =\mathrm{tr}(\rho _B^2)=I_3,\hfill \\ \hfill \gamma _1^4+\gamma _2^4& =\mathrm{tr}(\rho _C^2)=I_4.\hfill \end{array}$$
(5.4)
These equations have unique real non-negative solutions for $`\alpha _i,\beta _i,\gamma _i`$ provided the invariants $`I_1,\mathrm{\ldots },I_4`$ satisfy
$$I_1>0,\qquad \frac{1}{2}I_1^2\le I_2,I_3,I_4\le I_1^2.$$
Now consider the coordinates $`c^{ijk}`$ of $`|\mathrm{\Psi }\rangle `$ with respect to the canonical basis $`|\varphi _i\rangle _A|\theta _j\rangle _B|\chi _k\rangle _C`$. If the states $`|\varphi _i\rangle ,|\theta _j\rangle ,|\chi _k\rangle `$ were uniquely determined by $`|\mathrm{\Psi }\rangle `$ (and they almost are) then the coordinates $`c^{ijk}`$ would be local invariants. However, the Schmidt decompositions do not determine the phases of $`|\varphi _i\rangle `$, $`|\theta _j\rangle `$ and $`|\chi _k\rangle `$. We can fix these by requiring that four of the $`c^{ijk}`$ should be real: for example, we can change the phases of $`|\varphi _0\rangle `$ and $`|\varphi _1\rangle `$ to make $`c^{000}`$ and $`c^{100}`$ real, then change the phases of $`|\theta _0\rangle `$ and $`|\theta _1\rangle `$ to make $`c^{001}`$ and $`c^{011}`$ real, simultaneously changing the phase of $`|\chi _0\rangle `$ to keep $`c^{000}`$ and $`c^{100}`$ real. (It is easy to show that under the six-dimensional group of phase changes of the basis vectors, the generic set of coordinates has two-dimensional stabiliser, so that the orbits are four-dimensional and therefore four phases can be removed.)
From the Schmidt decompositions we obtain the one-particle density matrices
$$\begin{array}{cc}\hfill \rho _A& =\underset{i}{\sum }\alpha _i^2|\varphi _i\rangle \langle \varphi _i|,\hfill \\ \hfill \rho _B& =\underset{i}{\sum }\beta _i^2|\theta _i\rangle \langle \theta _i|,\hfill \\ \hfill \rho _C& =\underset{i}{\sum }\gamma _i^2|\chi _i\rangle \langle \chi _i|.\hfill \end{array}$$
(5.5)
Hence the coordinates $`c^{ijk}`$ satisfy
$$\begin{array}{cc}\hfill \underset{jk}{\sum }c^{ijk}\overline{c}_{ljk}& =\alpha _i^2\delta _l^i,\hfill \\ \hfill \underset{ik}{\sum }c^{ijk}\overline{c}_{imk}& =\beta _j^2\delta _m^j,\hfill \\ \hfill \underset{ij}{\sum }c^{ijk}\overline{c}_{ijn}& =\gamma _k^2\delta _n^k.\hfill \end{array}$$
(5.6)
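Numerically, the canonical basis and the coordinates $`c^{ijk}`$ are obtained by diagonalising the one-particle density matrices; the sketch below (ours; the residual phase freedom discussed above is not fixed here) verifies (5.6) for a random state.

```python
import numpy as np

rng = np.random.default_rng(3)
t = rng.normal(size=(2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2))
t /= np.linalg.norm(t)

rho_A = np.einsum('ijk,ljk->il', t, t.conj())
rho_B = np.einsum('ijk,imk->jm', t, t.conj())
rho_C = np.einsum('ijk,ijn->kn', t, t.conj())

wA, UA = np.linalg.eigh(rho_A)  # columns = Schmidt basis |phi_i>, etc.
wB, UB = np.linalg.eigh(rho_B)
wC, UC = np.linalg.eigh(rho_C)

# c^{ijk} = <phi_i|<theta_j|<chi_k| Psi>
c = np.einsum('ijk,ia,jb,kc->abc', t, UA.conj(), UB.conj(), UC.conj())

# eq. (5.6): the marginals of c are diagonal, carrying alpha_i^2 etc.
assert np.allclose(np.einsum('ijk,ljk->il', c, c.conj()), np.diag(wA))
assert np.allclose(np.einsum('ijk,imk->jm', c, c.conj()), np.diag(wB))
assert np.allclose(np.einsum('ijk,ijn->kn', c, c.conj()), np.diag(wC))
print("canonical marginals are diagonal")
```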
To obtain a relation between the $`c^{ijk}`$ and Kempeโs invariant $`I_5`$, we calculate
$`\mathrm{tr}[(\rho _A\otimes \rho _B)\rho _{AB}]`$
$`=\mathrm{tr}\left[\left({\displaystyle \underset{i}{\sum }}\alpha _i^2|\varphi _i\rangle _A\langle \varphi _i|_A\right)\otimes \left({\displaystyle \underset{j}{\sum }}\beta _j^2|\theta _j\rangle _B\langle \theta _j|_B\right)\left({\displaystyle \underset{k}{\sum }}\gamma _k^2|X_k\rangle _{AB}\langle X_k|_{AB}\right)\right]`$
$`={\displaystyle \underset{ijk}{\sum }}\alpha _i^2\beta _j^2\gamma _k^2|\langle \varphi _i|\langle \theta _j|X_k\rangle |^2.`$
But
$$c^{ijk}=\langle \varphi _i|_A\langle \theta _j|_B\langle \chi _k|_C|\mathrm{\Psi }\rangle =\gamma _k\langle \varphi _i|_A\langle \theta _j|_B|X_k\rangle _{AB}.$$
Hence
$$\mathrm{tr}[(\rho _A\otimes \rho _B)\rho _{AB}]=\underset{ijk}{\sum }\alpha _i^2\beta _j^2|c^{ijk}|^2$$
and so, using (3.6),
$$I_5=3\underset{ijk}{\sum }\alpha _i^2\beta _j^2|c^{ijk}|^2-\underset{i}{\sum }\alpha _i^6-\underset{j}{\sum }\beta _j^6.$$
(5.7)
Finally, the relation between the $`c^{ijk}`$ and the 3-tangle $`I_6`$ needs a longer argument which we will not give here. The result is
$$I_6=\det R$$
(5.8)
where
$$R_j^i=(\alpha _i^4+\alpha _i^2)\delta _j^i-\underset{kl}{\sum }(\beta _k^2+\gamma _l^2)c^{ikl}\overline{c}_{jkl}$$
In order to determine how many states have the same values of the invariants $`I_1,\mathrm{\ldots },I_6`$, and therefore how many further discrete-valued invariants are needed to specify uniquely a pure state of three qubits up to local transformations, one would need to find the number of different sets of coordinates $`c^{ijk}`$ satisfying the reality conditions given above and the equations (5.6), (5.7) and (5.8), where $`\alpha _i,\beta _i`$ and $`\gamma _i`$ are determined by (5.3) and (5.4).
## Acknowledgements
I am grateful to Bob Gingrich for finding an error in an earlier version of this paper and for performing the calculations whose result is reported at the end of Section 3. |
# Frobenius-Perron Resonances for Maps with a Mixed Phase Space
## Abstract
Resonances of the time evolution (Frobenius-Perron) operator $`\mathcal{P}`$ for phase space densities have recently been shown to play a key role for the interrelations of classical, semiclassical and quantum dynamics. Efficient methods to determine resonances are thus in demand, in particular for Hamiltonian systems displaying a mix of chaotic and regular behavior. We present a powerful method based on truncating $`\mathcal{P}`$ to a finite matrix which not only allows to identify resonances but also the associated phase space structures. It is demonstrated to work well for a prototypical dynamical system.
Effectively irreversible behavior of classical Hamiltonian systems can be elucidated by studying the phase space density and its propagator, the Frobenius-Perron operator $`\mathcal{P}`$. Due to Liouvilleโs theorem $`\mathcal{P}`$ can be represented by an infinite unitary matrix whose spectrum lies on the unit circle in the complex plane. Nevertheless, means and correlation functions of observables can relax (see figure 1c) with damping factors known as (Ruelle-Pollicott) resonances of $`\mathcal{P}`$. These resonances have recently attracted attention, e.g. in a superanalytic approach to universal fluctuations in quantum (quasi-) energy spectra which originated from the physics of disordered systems. In that approach the Frobenius-Perron resonances constitute a link between classical and quantum chaos. There is even a recent experiment where quantum fingerprints of Ruelle-Pollicott resonances are identified. To further clarify the interrelations between classical, semiclassical, and quantum behavior, a practical scheme to actually determine classical resonances is called for which is free of restrictions of previous investigations, like hyperbolicity, one-dimensional (quasi-) phase space or isolation of the phase-space regions causing intermittency.
Our thus motivated quantitative investigations into Hamiltonian systems with mixed phase space, a still largely unexplored area of great interest and promise, lead us to the discrete unimodular Frobenius-Perron eigenvalues (with eigenfunctions localized in islands of regular motion around elliptic periodic orbits, see figure 1a) and to resonances, smaller than unity in modulus (with eigenfunctions localized on the unstable manifolds of hyperbolic periodic orbits, see figure 2a-c). Both the discrete spectrum and the resonances are determined by diagonalizing truncated Frobenius-Perron matrices $`\mathcal{P}^{(N)}`$ and studying the cut-off dependence of their eigenvalues. Using the information about eigenfunctions, we then reproduce resonances by the so-called cycle expansion of periodic-orbit theory and furthermore through decay rates of correlation functions. We should add that similarly motivated but technically different (not involving eigenfunctions and employing external noise) efforts to determine resonances can be found in the literature.
As a prototypical dynamical system we have employed the kicked top, i.e. a periodically kicked angular momentum $`\vec{J}=(j\mathrm{sin}\theta \mathrm{cos}\phi ,j\mathrm{sin}\theta \mathrm{sin}\phi ,j\mathrm{cos}\theta )`$ of conserved length $`j`$ whose phase space is the sphere $`\vec{J}^2/j^2=1`$; we confront a single degree of freedom with the โazimuthalโ angle $`\varphi `$ as the coordinate and the cosine of the โpolarโ angle $`\theta `$ as the conjugate momentum. The dynamics is specified as a stroboscopic area preserving map $`M`$ on phase space. It consists of rotations $`R_z(\beta _z),R_y(\beta _y)`$ about the $`z`$ and $`y`$ axes and a โtorsionโ, i.e. a nonlinear rotation $`R_z(\tau \mathrm{cos}\theta )`$ about the $`z`$ axis which changes $`\phi `$ by $`\tau \mathrm{cos}\theta `$,
$$M=R_z(\tau \mathrm{cos}\theta )R_z(\beta _z)R_y(\beta _y).$$
(1)
The equivalent map of the phase space density $`\rho `$ is brought about by the Frobenius-Perron operator $`๐ซ`$,
$$๐ซ\rho (\mathrm{cos}\theta ,\phi )=\rho (M^{-1}(\mathrm{cos}\theta ,\phi )).$$
(2)
We keep $`\beta _z=\beta _y=1`$ fixed and vary the torsion constant $`\tau `$, starting with the integrable case $`\tau =0`$. Increasing values of $`\tau `$ bring about more and more chaos until for $`\tau >10`$ elliptic islands have become so small that they are difficult to detect. We shall focus on $`\tau =4`$ (roughly $`90\%`$ of the phase space dominated by chaos) and $`\tau =10`$ (more than $`99\%`$ chaos).
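For concreteness, the following minimal sketch implements the stroboscopic map numerically; the ordering and sign conventions of the three rotations are one possible choice, assumed here purely for illustration.

```python
import numpy as np

beta_y, beta_z, tau = 1.0, 1.0, 10.0

def kicked_top_map(ct, ph):
    """One period of M = R_z(tau*cos(theta)) R_z(beta_z) R_y(beta_y),
    acting on the phase-space coordinates (cos(theta), phi)."""
    st = np.sqrt(1.0 - ct**2)
    x, y, z = st*np.cos(ph), st*np.sin(ph), ct        # point on the unit sphere
    # rotation by beta_y about the y axis
    x, z = np.cos(beta_y)*x + np.sin(beta_y)*z, -np.sin(beta_y)*x + np.cos(beta_y)*z
    ph_new = np.arctan2(y, x) + beta_z                # linear rotation about z
    ph_new += tau*z                                   # torsion: z-rotation by tau*cos(theta)
    return z, np.mod(ph_new, 2.0*np.pi)

# iterate a sample trajectory
ct, ph = 0.3, 1.0
for _ in range(5):
    ct, ph = kicked_top_map(ct, ph)
    print(ct, ph)
```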
A Hilbert space of phase space functions on the sphere is spanned by the spherical harmonics $`Y_{lm}(\theta ,\phi )`$ with $`l=0,1,2,\mathrm{}`$ and $`|m|\le l`$. These functions are ordered with respect to phase space resolution by the index $`l`$: if all $`Y_{lm}`$ with $`0\le l\le l_{max}`$ are admitted, phase space structures of area roughly $`4\pi /(l_{max}+1)^2`$ can be resolved. If we so truncate the infinite Frobenius-Perron matrix $`๐ซ_{lm,l^{}m^{}}`$, we (i) destroy unitarity, (ii) restrict the spectrum to $`N=(l_{max}+1)^2`$ discrete eigenvalues whose moduli cannot exceed unity, and (iii) give up resolving phase space structures of linear dimension below $`\sqrt{4\pi }/(l_{max}+1)`$. Upon diagonalizing the truncated $`N\times N`$ matrix $`๐ซ^{(N)}`$ and increasing $`N`$ we find the "newly born" eigenvalues close to the origin, while the "older" ones move about in the complex plane. "Very old" ones eventually settle for good. If the classical dynamics is integrable ($`\tau =0`$ or $`\beta _y=0`$), the asymptotic large-$`N`$ loci are back on the unit circle, where the full $`๐ซ`$ has its spectrum. But not so for a mixed phase space: while some eigenvalues of $`๐ซ^{(N)}`$ "freeze" with unit moduli, others come to rest inside the unit circle as $`N\to \mathrm{}`$. Table I illustrates how non-unimodular eigenvalues found for the kicked top with $`\tau =10`$ at $`l_{max}=40`$ remain in their positions as $`l_{max}`$ is increased to $`l_{max}=50,60`$ and $`70`$.
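The truncation is straightforward to set up numerically. The rough sketch below (ours, with a deliberately tiny cut-off and the map conventions of the previous snippet) builds $`๐ซ^{(N)}`$ from eq. (2) by quadrature over the sphere and diagonalizes it; note that scipy's sph_harm takes its arguments in the order (m, l, azimuth, polar), and that the quadrature is only approximate, since the transported harmonics are not band-limited.

```python
import numpy as np
from scipy.special import sph_harm          # argument order: (m, l, azimuth, polar)

beta_y, beta_z, tau, l_max = 1.0, 1.0, 10.0, 8    # tiny cut-off; the text uses 40-70

def inverse_map(ct, ph):
    """M^{-1}(cos(theta), phi), inverting the map conventions sketched above."""
    ph = ph - tau*ct - beta_z               # undo torsion and R_z; cos(theta) unchanged
    st = np.sqrt(np.clip(1.0 - ct**2, 0.0, None))
    x, y, z = st*np.cos(ph), st*np.sin(ph), ct
    x, z = np.cos(beta_y)*x - np.sin(beta_y)*z, np.sin(beta_y)*x + np.cos(beta_y)*z
    return z, np.arctan2(y, x)

# product quadrature: Gauss-Legendre in cos(theta), uniform grid in phi
n = 4*(l_max + 1)
xg, wg = np.polynomial.legendre.leggauss(n)
phg = 2.0*np.pi*np.arange(n)/n
ct, ph = np.repeat(xg, n), np.tile(phg, n)
w = np.repeat(wg, n)*(2.0*np.pi/n)
ctp, php = inverse_map(ct, ph)

basis = [(l, m) for l in range(l_max + 1) for m in range(-l, l + 1)]
Y  = np.array([sph_harm(m, l, ph,  np.arccos(ct))  for l, m in basis])
Yp = np.array([sph_harm(m, l, php, np.arccos(ctp)) for l, m in basis])

P = (np.conj(Y)*w) @ Yp.T                   # P[a,b] = <Y_a | Y_b(M^{-1} .)>
ev = np.linalg.eigvals(P)
print(np.sort(np.abs(ev))[::-1][:10])       # leading moduli
```

Repeating this for increasing $`l_{max}`$ and watching which eigenvalues stay put is precisely the freezing test described above.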
We could pass over such findings and speak of the danger of tampering with infinity, were there not good reasons for and a physical interpretation of the existence of such stable non-unimodular eigenvalues.
The following qualitative argument suggests the persistence of non-unimodular eigenvalues as $`N\mathrm{}`$ for non-integrable dynamics. In contrast to regular motion, chaos brings about a hierarchy of phase space structures which extends without end to ever finer scales. A truncated Frobenius-Perron operator $`๐ซ^{(N)}`$ must reflect the flow of probability towards the unresolved scales as a loss, however large the cut-off $`N`$ may be chosen.
Arguments from perturbation theory indicate that any non-unitary approximation to a unitary operator with continuous spectrum has some eigenvalues in positions near (non-unimodular) resonances of the unitary operator, i.e. poles of the resolvent in a higher Riemann sheet. The perturbation series for such an eigenvalue does not converge but produces, with increasing order, a sequence of points concentrated in the neighborhood of the respective resonance. It is intuitive to interpret the freezing of non-unimodular eigenvalues (which need not be a strict convergence) as analogous to the "spectral concentration" known from perturbation theory.
To find further evidence for our interpretation of frozen eigenvalues as resonances we have looked at the eigenfunctions of $`๐ซ^{(N)}`$, with the following salient results. Eigenvalues freezing with unit moduli have eigenfunctions localized on islands of regular motion surrounding elliptic periodic orbits in phase space. Such islands are bounded by invariant tori which form impenetrable barriers in phase space. We can thus expect the function which is constant inside the elliptic islands around a $`p`$-periodic orbit and zero outside to be an eigenfunction of $`๐ซ`$ with eigenvalue unity. Reasoning similarly we expect, for $`p>1`$, the $`p`$-th roots of unity to arise as eigenvalues as well; their eigenfunctions should have constant moduli and be invariant under $`๐ซ^p`$. For the kicked top with $`\tau =4`$ and $`l_{max}=60`$ an eigenfunction with the eigenvalue $`0.9993`$, i.e. almost at unity, is shown in figure 1a. It is localized on the three islands around an elliptic orbit of period three (see figure 1b) and does have the two expected partners. We have indeed found frozen eigenvalues near the $`p`$-th roots of unity and their eigenfunctions localized near elliptic period-$`p`$ orbits for $`p`$ up to 6; without much further effort such signatures of higher periods could be identified.
Now on to the eigenvalues freezing with moduli smaller than unity. Once such freezing has been observed, the corresponding eigenfunction has approached its final shape on the resolved phase space scales. The eigenfunctions are sharply localized around unstable manifolds of hyperbolic periodic orbits, at first ones with low periods, since these are easiest to resolve; but with growing $`l_{max}`$ more complex orbits of higher periods appear in the "support" of eigenfunctions. Even though all periodic orbits contributing to the structure of an eigenfunction have similar stability coefficients, and even though the latter do describe the rate of mutual departure of neighboring trajectories, it would be too naive to simply identify resonances with stability coefficients; we shall rather have to resort to cycle expansions further below.
Just as for the eigenvalues there is no strict convergence of the eigenfunctions. With increasing resolution new structures on finer scales become visible, in correspondence with the infinitely convoluted shape of the unstable manifolds (see figure 2a-b). Since no finite approximation $`๐ซ^{(N)}`$ accounts for arbitrarily fine structures one encounters the aforementioned loss of probability from resolved to unresolved scales. Not even in the limit $`N\to \mathrm{}`$ can the unitarity of $`๐ซ`$ be restored: rather, the eigenfunctions tend to singular objects outside the Hilbert space, in tune with a continuous spectrum of $`๐ซ`$.
The reader may have noticed that all eigenvalues in table I are real or almost purely imaginary. In fact, all eigenvalues we have identified as frozen inside the unit circle have phases corresponding to those of roots of unity, a fact demanding explanation. Clearly, since $`๐ซ^{(N)}`$ is real, the eigenvalues are either real or come in complex conjugate pairs, but no phase other than zero is distinguished by that argument. Again, the eigenfunctions offer further clues. We find that the phases of the complex eigenvalues are determined by the length $`p`$ of the shortest periodic orbit present in an eigenfunction $`f`$ as those of the $`p`$-th roots of unity. The following intuitive argument indicates that this is to be expected.
Assume an eigenfunction is mostly concentrated around a shortest unstable orbit with period $`p`$ as well as a longer one with period $`p^{}`$. Denote by $`\delta _{p,n}`$ a "characteristic function" which is constant near the $`n`$-th point of the period-$`p`$ orbit, $`n=1,\mathrm{},p`$, and zero elsewhere. The truncated Frobenius-Perron operator $`๐ซ^{(N)}`$ maps $`\delta _{p,n}`$ into $`๐ซ^{(N)}\delta _{p,n}=r_p\delta _{p,n+1}`$ with the real positive factor $`r_p`$ smaller than unity accounting for losses, in particular to unresolved scales. Independent linear combinations of the $`\delta _{p,n}`$ can be formed as $`f_{pk}=\sum _{n=1}^p\mathrm{e}^{\mathrm{i}2\pi kn/p}\delta _{p,n}`$ with $`k=1,\mathrm{},p`$. Now consider a sum of two such functions, $`g=f_{pk}+f_{p^{}k^{}}`$, and apply $`๐ซ^{(N)}`$. For $`g`$ to qualify as an approximate eigenfunction we must obviously have $`r_p\approx r_p^{}`$ and $`k/p=k^{}/p^{}`$. But then indeed $`๐ซ^{(N)}g\approx r_p\mathrm{e}^{\mathrm{i}2\pi k/p}g`$ and $`[๐ซ^{(N)}]^pg\approx r_p^pg`$. The phase is thus dictated by the shortest orbit. Needless to say, the argument is identical to the one used before for the eigenfunctions living in islands around elliptic orbits, save for $`r_p=1`$ in those regular cases. Since orbits of low period are most likely to be resolved first, the eigenvalues found for $`l_{max}=40`$ in table I have phases according to $`p=1,2`$ and $`4`$.
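The phase argument can be verified in miniature: a lossy cyclic shift on the $`p`$ points of an orbit is just $`r_p`$ times a permutation matrix, whose eigenvalues are $`r_p`$ times the $`p`$-th roots of unity. A toy check, with arbitrary illustrative values:

```python
import numpy as np

p, r_p = 4, 0.8                        # period and loss factor (illustrative values)
S = r_p*np.roll(np.eye(p), 1, axis=0)  # maps delta_{p,n} to r_p * delta_{p,n+1}
print(np.linalg.eigvals(S))            # r_p * exp(i 2 pi k/p), k = 1..p
```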
Knowing which orbits are linked to a non-unimodular eigenvalue, we can adopt a cycle expansion to calculate decay rates from periodic orbits. A cycle expansion of the spectral determinant, i.e. the characteristic polynomial of the Frobenius-Perron operator, allows for the calculation of resonances in hyperbolic systems with high accuracy. The spectral determinant is expressed in terms of the traces of the Frobenius-Perron operator $`\mathrm{Tr}๐ซ^n`$ as $`d(z)=\mathrm{exp}\left(-\sum _{n=1}^{\mathrm{}}\frac{z^n}{n}\mathrm{Tr}๐ซ^n\right)`$ and subsequently expanded as a finite polynomial up to some order $`n_{max}`$. Only the first $`n_{max}`$ traces are required for the calculation of this polynomial. The traces $`\mathrm{Tr}๐ซ^n`$ are calculated by summing over hyperbolic periodic orbits of length $`n`$ as $`\mathrm{Tr}๐ซ^n=\sum _{x=M^n(x)}\frac{1}{|det(1-J)|}`$, where the $`2\times 2`$ matrix $`J=\partial M^n(X)/\partial X`$ is the linearized map $`M^n`$ evaluated at any of the points of a contributing period-$`n`$ orbit and $`X=(\mathrm{cos}\theta ,\phi )`$ the phase space point. The zeros of the polynomial which are insensitive to an increase of $`n_{max}`$ are the inverses of resonances.
The condition under which the ordinary cycle expansion of a spectral determinant converges is that all periodic orbits are hyperbolic and sufficiently unstable. But if we only consider one ergodic region in phase space at a time, i.e. bar contributions from elliptic orbits, and impose a stability bound by including only the relatively few hyperbolic orbits identified in an eigenfunction of $`๐ซ^{(N)}`$, we can still use the cycle expansion as follows. We assume the spectral determinant factorized as $`d(z)=\prod _{i=1}^{\mathrm{}}d_i(z)`$ with one factor $`d_i`$ for the family of eigenfunctions to which a given set of periodic orbits contributes. Each such factor $`d_i(z)`$ is then calculated separately with the above well-known expressions, but restricting the periodic-orbit sum for the trace $`\mathrm{Tr}๐ซ^n`$ to the orbits previously identified as contributing to the eigenfunctions. In table II resonances reproduced via the spectral determinant from only a few orbits show surprisingly good agreement with the resonance-eigenvalues of $`๐ซ^{(N)}`$ for $`\tau =10`$ and $`l_{max}=60`$. The index $`n_{max}`$ gives the order up to which the spectral determinant is expanded, i.e. the length of the longest (pseudo-)orbits employed. The total number of orbits used is given in brackets after the resonances. For the resonance $`0.8103`$ the three relevant orbits of period $`1,2`$ and $`4`$ are marked in the magnified region of the eigenfunction ($`l_{max}=60`$) in figure 2c. The first repetition of the single period-2 orbit contributing to the resonance $`0.6597`$ gives an almost diverging contribution to the spectral determinant, thus hindering its expansion to higher order.
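A compact sketch of this restricted cycle expansion follows; it assumes the orbit data (primitive periods $`p`$ and $`2\times 2`$ linearized maps $`J`$ over one traversal) have already been extracted from the eigenfunctions, and the single orbit supplied at the bottom is a placeholder, not one of the kicked-top orbits of table II.

```python
import numpy as np

def restricted_spectral_det(orbits, n_max):
    """Coefficients of d_i(z) = exp(-sum_{n<=n_max} z^n Tr P^n / n),
    with the traces restricted to the given orbits and their repetitions."""
    tr = np.zeros(n_max + 1)
    for n in range(1, n_max + 1):
        for p, J in orbits:
            if n % p == 0:                         # (n/p)-th repetition of the orbit
                Jr = np.linalg.matrix_power(J, n // p)
                tr[n] += p/abs(np.linalg.det(np.eye(2) - Jr))
    c = np.zeros(n_max + 1); c[0] = 1.0            # expand the exponential order by order
    for n in range(1, n_max + 1):
        e = np.zeros(n_max + 1); e[0] = 1.0
        k, term = 1, 1.0
        while n*k <= n_max:
            term *= -tr[n]/(n*k)
            e[n*k] = term
            k += 1
        c = np.polynomial.polynomial.polymul(c, e)[:n_max + 1]
    return c

# zeros z* of the truncated polynomial that are stable under raising n_max
# approximate inverse resonances; here a single hyperbolic fixed point:
c = restricted_spectral_det([(1, np.array([[2.0, 1.0], [1.0, 1.0]]))], n_max=6)
zs = np.roots(c[::-1])
print(1.0/zs[np.abs(zs) > 1e-6])
```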
In the cycle expansion the phases of the resonances are reproduced exactly, since they are again directly determined by the lengths of orbits. If $`p`$ is the shortest orbit length used in $`d_i(z)`$, the polynomial can equally well be written as a polynomial in $`z^p`$, thus allowing the zeros to have the phases of the $`p`$-th roots of unity.
As a final check on the physical meaning of our frozen eigenvalues with moduli smaller than unity as Frobenius-Perron resonances, we have compared these moduli with rates of correlation decay. In a numerical experiment we investigated the decay of the correlator $`C(n)=\left[\langle \rho (n)\rho (0)\rangle -\langle \rho (\mathrm{})\rho (0)\rangle \right]\left[\langle \rho (0)\rho (0)\rangle -\langle \rho (\mathrm{})\rho (0)\rangle \right]^{-1}`$. Depending on the choice of $`\rho (0)`$ different long-time decays are observable. We chose $`\rho (0)`$ to cover the regions where the hyperbolic orbits relevant for a given resonance are situated. Figure 1c illustrates the very good agreement between the long-time decay of $`C(n)`$ (dots) and the decay predicted by the corresponding resonance $`0.81`$ (full line). Together with the resonances at $`l_{max}=60`$, table II displays the associated decay factors by which $`C(n)`$ decreases over one timestep, obtained from a numerical fit. Again the agreement is convincing.
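In the truncated harmonic basis this check takes only a few lines; the sketch below reuses P and basis from the truncation snippet above, with an arbitrary illustrative initial density rather than the specific ones used for figure 1c. Starting orthogonal to the uniform $`Y_{00}`$ component removes the $`\rho (\mathrm{})`$ subtraction automatically, up to truncation and quadrature error.

```python
rho0 = np.zeros(len(basis), dtype=complex)
rho0[5] = 1.0                 # structure on resolved scales, no Y_00 part (illustrative)
v, corr = rho0.copy(), []
for n in range(1, 31):
    v = P @ v                 # rho(n) = P^n rho(0)
    corr.append(abs(np.vdot(rho0, v)))
ratios = [corr[k + 1]/corr[k] for k in range(len(corr) - 1)]
print(ratios[-5:])            # per-step decay factor; compare with resonance moduli
```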
In conclusion, we have presented a method to determine Frobenius-Perron resonances and the associated phase-space structures, applicable to systems with mixed phase spaces. The acquired knowledge of phase space structures allows us to check the accuracy to which resonances are determined by the otherwise independent approaches of cycle expansion and correlation decay.
We are grateful to Shmuel Fishman for discussions initiating as well as accompanying this work. Support by the Sonderforschungsbereich "Unordnung und große Fluktuationen" and by the grant GAAV No. A1048804 of the Czech Academy of Sciences is gratefully acknowledged. F. H. also thanks the Isaac Newton Institute for hospitality during the workshop "Supersymmetry and Trace Formulae" in 1997 during which this work was begun; especially fruitful interactions with Ilya Goldscheid were made possible there.
## 1 Prolegomena to any meta future physics
The title assigned to this talk, while certainly both eye-catching and pithy, doesnโt make any literal sense in English. โPhysics needs for future acceleratorsโ is a phrase which could be interpreted in any number of ways, leading to very different sorts of talks. Let me begin, therefore, by briefly describing the roads not taken in my review. This will allow me to mention some important issues which, while not the focus of my talk, are worthy of serious high-minded discussion in international forums such as the Lepton-Photon meetings.
### 1.1 Physics needs for building future accelerators
This, it seems to me, is the most obvious literal interpretation of the original title. Since, like all theorists, I consider myself to be essentially omniscient, I even briefly considered giving such a talk. I went as far as dusting off my copy of Jackson, which contains, after all, most of the basic physics that you will need to build future accelerators.
This facetious exercise illustrates a worrisome development to which we had better start paying more attention. Since (in principle) all of us understand the basics of accelerator physics, there is a disturbing tendency to denigrate this essential branch of our field, to relegate it to โtechniciansโ โthe implication being that first-rate minds are drawn to more โcutting-edgeโ physics problems. Combined with the general trend towards increasing specialization, we have largely decoupled and discounted an activity which in fact largely defines and limits the future of our field.
Furthermore, if you look seriously at the designs and R&D work related to any of the proposed future accelerators (linear colliders, hadron colliders, as well as very ambitious ideas like the muon collider or the CLIC two-beam acceleration concept) you will find a host of interesting and highly challenging physics problems. As a community, I doubt that we are doing enough towards attracting, training, supporting, and encouraging the next generation of accelerator physicists, the cadre of first-rate, creative, and experienced people without whom no future accelerators are likely to get built. In that case everything else I say in this talk addresses a moot topic, since we will have failed before we have even properly begun.
How can we avoid this calamity? One way to attract new talent to accelerator R&D, and to validate the importance of this activity, is for more experimentalists (and even theorists) to devote some finite fraction of their time to thinking about accelerator physics problems. We should keep in mind that lack of technical experience has the positive virtue of stimulating new kinds of questions and new ways of thinking, and thus ultimately leads to innovation. Bringing fresh minds to bear on these sorts of problems can only help, and will enhance the prestige, and thus the overall vigor, of these activities. Small steps in this direction can be encouraged now with existing resources; a few years down the road we should aim to establish new centers for advanced accelerator physics.
The payoff for nurturing this branch of physics will be well worth the investment, and could be spectacular. It may well be, for example, that physics innovation, rather than engineering or manufacturing breakthroughs, is the key to major cost reductions in linear collider or hadron collider designs.
### 1.2 Physics needs for funding future accelerators
Another possible interpretation of the title of this talk is a little more political. What do we need to get out of existing experiments and facilities in order to strengthen the case for particular future machines, as well as goose up the overall momentum and glamour of our field?
While we are correct to spend sleepless nights agonizing over the optimal path to our long term future, we should also rejoice that our immediate future looks extremely bright. As this transparency from the next Lepton-Photon conference illustrates, the LEP experiments have a very real chance of discovering the Higgs. The B factories coming on line now will provide fundamental new inputs to our global view of particle physics, including, most likely, some surprises. The same can be said for the many neutrino experiments in progress or under construction, and for a number of other low energy projects. As a bonus, a flood of new astrophysical data will impact on a number of important ideas and problems circulating in particle physics.
Last but not least, the next run of the Tevatron will extend our reach for the Higgs, supersymmetry, B physics, top physics, electroweak physics, and new strong dynamics, to name but a few. Major discoveries are very possible. If we are fortunate, we might even get the first experimental hints of extra spatial dimensions, quantum gravity, or strings.
Obviously new discoveries from any of these arenas will help bootstrap funding and resources for future experiments and new facilities. In addition, discoveries (or even hints) of physics beyond the Standard Model will crystallize our thinking about what future facilities are needed.
We can congratulate ourselves on having assembled such a rich physics program for our immediate future. There is a danger, however, that in our rush to get to the LHC and other future accelerators, we may not maximally exploit those opportunities already at hand. If we are so foolish as to sacrifice our present to โsecureโ our future, we are more likely to end up sacrificing both.
## 2 Physics questions for future accelerators
What is the right way to think about future physics? The traditional approach is to enumerate a number of possible scenarios for physics beyond the Standard Model, guided by whatever theoretical prejudices seem most fashionable at the moment. Although I will myself be guilty of this sort of theorizing later in the talk, I donโt think this is the best approach for making major decisions, decisions that will commit billions of dollars of the taxpayersโ money and determine the future or nonfuture of high energy physics.
I will attempt instead to outline a more robust approach to thinking about future physics, one which relies less on theoretical assumptions and more on an appreciation for what high energy physics is really all about.
### 2.1 Crimes and misapprehensions
The first thing we need to do is to discredit some harmful dogmas which have been clouding our thinking about high energy physics for more than a decade. Anyone who falls under the spell of these dogmas is not likely to hold rational or constructive views about the future needs of our field. As far as I know, no one is willing to admit authorship of these pernicious doctrines, but certainly the bourgeois overlords of high energy physics and their running dog lackeys (i.e. this audience) must assume responsibility for tolerating or promulgating these ideas.
The basic crime here is to have allowed the misapprehension โamong ourselves, students, and the publicโ that particle physics is โalmost doneโ. This misapprehension arises from two rather different but equally radical notions, which I will now briefly review.
#### 2.1.1 Organized religion
This is the dogma that high energy physics has been to a large extent supplanted by a new activity called string theory. String theory is the one true Theory of Everything (TOE), people who are a lot smarter than you will have it figured out any day now, and they will soon be able to compute the electron mass, etc. purely on the basis of mathematical consistency. Thus the traditional activities of high energy physics (such as experiment) have become largely irrelevant. Put another way, since no future accelerator can ever directly probe the most fundamental scale of physics, the โbottom upโ approach is pointless, and we should instead invest in the more promising โtop downโ approach to connecting fundamental physics with our existing data banks.
This extreme view is, of course, quite silly. String theory does indeed hold great promise for advancing our understanding of fundamental physics, and has already produced some profound insights about black hole physics, about gauge theories, and other areas as well. But anyone who has followed the rapid advances in string theory knows that, for every question successfully disposed of, three new ones seem to crop up in its wake. Fundamental physics is, not surprisingly, rich, dense, and confusing. The road to fundamental understanding will be a long road, and this makes the traditional activities of high energy physics (such as experiment) even more interesting and important in that light.
#### 2.1.2 Feudalism
This is the dogma that the Standard Model is king and will reign forever. This is particularly discouraging for young people, because the implication here is that all the good stuff happened in the seventies and you missed it. There are a few things left to do โweโll find the Higgs, measure a few more parametersโ and then thatโs it. So unfortunately young physicists entering the field today are coming in at the tail end of the Golden Age, but โtough luckโ thereโs only one Golden Age and ours is almost over.
This extreme view is equally silly, yet it seems to have penetrated into the morose subconscious of a large fraction of high energy physicists. If you are suffering from this problem and the Zoloft isnโt working, pay attention and I will attempt to dispel your weltschmerz by rational argument.
#### 2.1.3 Trotsky was right
As it turns out, somewhat surprisingly, the guy who had it right was Trotsky. This great philosopher and humanist was the first to point out that high energy physics is exciting, and will continue to remain exciting, precisely because it exists in a state of permanent revolution. As experiments probe higher energies and smaller distance scales, and as our theoretical frameworks struggle to produce a coherent explanation of what we see, our fundamental view of the physical world and how to describe it changes dramatically. High energy physicists are continually engaged in a process of creating new frontiers, moving to those frontiers, civilizing the rough elements, then pushing out to the next frontier.
This rather obvious historical fact has lately been obscured by two phenomena. One phenomenon is that particle physics during the past decade was in one of those recurring phases where theoretical assimilation of earlier experimental findings dominates (to some extent) over new experimental surprises. This gives the superficial impression that not much is happening, when in fact during this period our understanding has increased dramatically, such that our basic theoretical frameworks are remarkably different from what they were 10 or 15 years ago.
The second phenomenon was the cancellation of the SSC, and the concomitant popularization of the high energy physics version of Horganism. The idea here is that we have slipped into an era of diminishing returns for our investments in basic science. High energy physics, in this view, is becoming increasingly expensive and complicated to pursue, and as a practical matter the field will die out completely in a decade or two.
This dire prediction may indeed be correct, but for reasons rooted in politics and sociology, not physics. Precisely because high energy physics is constantly redefining itself, we have no idea what it will be like a century from now. Advances of the 21st century could easily rival those of the 20th century. There is no reason to imagine that we are near the end of this process, barring the complete collapse of our civilization. Or, as Trotsky put it, our expectation is
> Revolution whose every successive stage is rooted in the preceding one and which can end only in complete liquidation.
### 2.2 The Standard Model as an effective field theory
Let me now be much more explicit about the present status of particle physics, and about the process of getting to the next iteration of our understanding. The key realization here is that the Standard Model is an effective field theory. As advocated in Ken Wilson's pioneering work, quantum field theories in general are nonperturbatively defined as effective field theories valid below some explicit ultraviolet cutoff $`\mathrm{\Lambda }`$. The parameters of the effective Hamiltonian can be regarded as encoding the effects of integrating out the ultraviolet degrees of freedom above the cutoff scale, including the high momentum modes of the light degrees of freedom. Effective Hamiltonians can contain an arbitrarily large number of operators, but at energies small compared to the cutoff the effects of higher dimension operators are suppressed by powers of energy divided by $`\mathrm{\Lambda }`$.
The Standard Model, of course, is famously renormalizable. However, as emphasized by Steven Weinberg, the power-counting renormalizability of the Standard Model has no particular physical significance. The physically important sense in which the Standard Model is renormalizable is that the ultraviolet divergences of the model are controlled by gauge symmetries, such that counterterms exist to cancel all infinities. In this sense effective gauge field theories with higher dimension operators are also renormalizable. Renormalizability is in no sense an indication that the Standard Model is "fundamental". Our generic expectation is that the full Standard Model Hamiltonian will turn out to contain a number of higher dimension operators, whose effects are too suppressed to have shown up in present experiments. Indeed it is this expectation which makes precision low energy experiments interesting as probes of new physics.
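Schematically (a standard illustration of this logic, not a specific operator basis from the talk), the full effective Lagrangian has the form

$$\mathcal{L}_{\mathrm{eff}}=\mathcal{L}_{\mathrm{SM}}+\sum _i\frac{c_i}{\mathrm{\Lambda }^{d_i-4}}\mathcal{O}_i,$$

where a typical dimension-six example is a four-fermion operator such as $`(c/\mathrm{\Lambda }^2)(\overline{q}\gamma ^\mu q)(\overline{\ell }\gamma _\mu \ell )`$; at energies $`E`$ well below $`\mathrm{\Lambda }`$ its effects enter amplitudes only at relative order $`E^2/\mathrm{\Lambda }^2`$, which is why such operators need not yet have shown up.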
In light of the above, the enterprise of searching for new physics can be described in a completely model-independent way. The task for experiments at future accelerators (as well as at existing facilities) is to address the following set of physics questions:
* The Standard Model is an effective field theory for physics below some high energy cutoff $`\mathrm{\Lambda }`$. What is the value of $`\mathrm{\Lambda }`$?
* What are the relevant degrees of freedom in the new effective theory at energies above $`\mathrm{\Lambda }`$?
* What are the symmetries and organizing principles of this new effective theory?
* What symmetries and organizing principles of the Standard Model turn out to be artifacts of the โlow energyโ approximation?
* Do the symmetries and organizing principles of the new effective theory explain the parameters and parameter hierarchies of the Standard Model (e.g. all the notorious mysteries of flavor)?
* Does the new effective theory give any hints (e.g. higher dimension operators, spontaneously broken symmetries) of new physics at even higher scales?
### 2.3 What is the scale of new physics?
Let me elaborate further on the first of these questions, which has obvious importance for making smart choices about future accelerators. How do we go about determining $`\mathrm{\Lambda }`$ for the Standard Model? There seem to be three basic complementary methods.
* One method is to use high energy machines to search for evidence of new degrees of freedom characteristic of the new effective theory above the cutoff. This could take the form of new particles, resonances, or collective effects. It could also show up as evidence of compositeness, form factors, indications of symmetry restoration, or of symmetry breaking. We might even see signals telling us about new spatial degrees of freedom.
So far all such searches have turned up negative, even at the Tevatron, our highest energy collider. This indicates either that the new degrees of freedom are somewhat obscured (by Standard Model backgrounds and instrumental effects) or that present experiments are not yet probing above the cutoff scale $`\mathrm{\Lambda }`$.
* Another method is to search for evidence of higher dimension operators in the effective Hamiltonian of the Standard Model itself. At high energies this approach is not entirely distinct from the previous method, but it can also be utilized in a variety of lower energy experiments. It is important to realize here that the symmetries and approximate symmetries of the known contributions to the Standard Model may not be respected by the full effective action. This motivates a variety of searches for flavor changing neutral currents, CP violation, lepton number violation, proton decay, etc.
So far, with the exception of the strong case for neutrino oscillations, experiments of this type have not produced compelling results for new physics. This is an indication either that certain operators are simply forbidden in the full effective action, or that $`\mathrm{\Lambda }`$ is sufficiently large that these โirrelevantโ operators decouple rather efficiently in current experiments.
* The third method is to consider the Higgs sector, which is more sensitive to $`\mathrm{\Lambda }`$ for several reasons. The first is that, in the case where the Higgs mass is large, the Higgs becomes strongly coupled at some high energy scale. $`\mathrm{\Lambda }`$ is certainly no larger than this scale. Similarly, in the case where the Higgs mass is small, the Higgs effective potential becomes unstable at some high energy scale, also providing an upper bound on $`\mathrm{\Lambda }`$. These limits are summarized in Figure 5, where we see that they have only logarithmic sensitivity to the high scale. Also shown is the current 95% confidence level upper bound on the Standard Model Higgs mass from electroweak precision data, which is approximately 250 GeV. Combined with the lower bound on the Higgs mass from the direct searches at LEP, the net result is that $`\mathrm{\Lambda }`$ is rather weakly bounded.
A much stronger and more robust upper bound on $`\mathrm{\Lambda }`$ is provided by consideration of the Higgs naturalness problem. The Higgs has an unprotected scalar mass, whose value is naturally of order the cutoff $`\mathrm{\Lambda }`$. This would lead us to the conclusion that $`\mathrm{\Lambda }`$ is not much more than 250 GeV. A light Higgs can be arranged if, in the new effective theory above $`\mathrm{\Lambda }`$, the Higgs mass is related to parameters which vanish in some symmetry limit. However even in these cases it is hard to arrange a large hierarchy between the electroweak scale (defined as the Higgs vacuum expectation value $`v=246`$ GeV) and the cutoff $`\mathrm{\Lambda }`$.
The bottom line of this effective field theory analysis is that there is a strong (and growing) tension in the Standard Model between the Higgs naturalness/hierarchy problem, which wants $`\mathrm{\Lambda }`$ to be close to the electroweak scale, and the apparent decoupling of new physics effects in current data. The obvious resolution of this dialectic is that $`\mathrm{\Lambda }`$ is only just out of reach of current experiments. An educated guess would be:
$$500\,\mathrm{GeV}\lesssim \mathrm{\Lambda }\lesssim 1\,\mathrm{TeV}$$
I emphasize that although the numbers in this estimate are a little soft, the physics input that goes into it is very robust. Indeed it ultimately only depends on our understanding of the Standard Model as an effective quantum field theory.
### 2.4 What could be out there?
What will we find when we begin to probe the new effective theory above the scale $`\mathrm{\Lambda }`$? Most theoretical speculation about the new effective theory at high energies involves adding things to the Standard Model:
* Add new particles: 4th generation, superpartners, messenger sector, etc.
* Add new symmetries: e.g. supersymmetry, etc.
* Add new gauge interactions: e.g. technicolor, $`Z^{}`$, etc.
However it is just as likely that at higher energy scales we have instead (or in addition) much more radical changes:
* Qualitatively new degrees of freedom: e.g. strings, membranes, extra dimensions.
* Symmetries of the Standard Model are broken: e.g. B and L violation.
* โSacred principlesโ of the Standard Model are violated! This would not be the first time that sacred cows got ground into hamburger.
Indeed, already in this century we have seen several examples of sacred principles which turned out to be artifacts of some approximation. It is worth reminding ourselves of these history lessons:
1. Newtonian mechanics $``$ electromagnetism $``$ special relativity.
Lesson: Galilean invariance is only an approximation, good at low speeds.
2. Thermodynamics $``$ electromagnetism $``$ quantum mechanics.
Lesson: Rayleighโs formula for blackbody emittance is only an approximation, good at low frequencies.
3. Newtonian gravity $``$ special relativity $``$ general relativity.
Lesson: Newtonian gravity is only an approximation, good for weak gravitational fields and low speeds.
A conservative view is that all of the following theoretical assumptions/frameworks may break down under certain conditions at certain energy scales:
* The assumption that the fundamental dynamical entities are point-like particles.
* Relativistic quantum field theory (and the associated ideas of locality, microcausality, CPT invariance).
* General relativity.
* Quantum mechanics.
It is unfortunate that these possibilities are today largely ignored by both theorists and experimenters. Of course we should not ascribe every burp in the data to a breakdown of microcausality, but neither should we assume that all of our current paradigms will remain sacrosanct indefinitely. It is amusing to recall in this regard that Werner Heisenberg, in 1939, seriously suggested that quantum mechanics breaks down at an energy scale around 1 GeV. Nowadays anyone who questions the universal validity of quantum mechanics is (usually correctly) labelled as a crank.
Although string theory has not (yet) done a good job of matching to the Standard Model at low energies, it has proven to be a great exercise for both organizing and liberating our thinking about new physics. For example:
* If string theory is correct, both general relativity and quantum field theory break down at some energy scale $`M_s`$. We don't know what this string scale is; its lower bound, set by experiment, is about 1 TeV.
* If string theory is correct, the fundamental physical entities are not quarks and leptons, but a whole collection of particle-like, string-like, and membrane-like objects.
* Furthermore these objects propagate in a 9+1 or 10+1 dimensional spacetime.
### 2.5 Model-independent conclusions
Without making any particular assumptions about what new physics is out there, we can now draw some important conclusions with a high degree of confidence:
* There is a whole new effective theory waiting to be explored at the TeV scale.
* The new physics will be rich, surprising, confusing, and take a long time to untangle.
These conclusions also imply the following:
* To explore the new theory you will want high energies, reasonable luminosities, and reasonable detectors.
* To understand the new physics, you will also want detailed studies, for which you need excellent luminosities and excellent detectors.
* You will need detailed studies not only to unravel the new effective theory, but also to give you hints about physics at even higher scales.
## 3 Future accelerators
In the second half of my talk I would like to discuss the physics driving various proposals for future accelerators. To be concrete and focus our thinking I have summarized these proposals below according to my opinion of what we might have and when. The dates of course are only estimates, but the groupings are important. It is important in thinking about future machines to make a clear distinction between those which are intended to operate in the LHC era, and those which are clearly post-LHC successors. It is also important to discriminate between those proposals which extend the energy frontier, and those which would operate within the energy frontier defined by the LHC. I have also made note of possible upgrades of the Tevatron, LHC, and linear $`e^+e^{}`$ collider (LC), since recycling is likely to become increasingly popular in a difficult funding climate.
Given the very exciting prospect for future physics discoveries advocated in the first half of this talk, it is rather straightforward to make the physics case for machines of the LHC era. I emphasize that my summary of the physics needs does not factor in the dollar cost or political cost of various machines; neither do I address the question of whether we can attract sufficient human resources to pursue several big projects simultaneously. I will also touch briefly on machines of the post-LHC era.
2006 โ 2012: The LHC Era
* LHC: $`\sqrt{s}=14`$ TeV, $`\mathcal{L}=10^{33}`$ to $`10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$.
* LC: $`\sqrt{s}=350`$ GeV to 1 TeV, $`\mathcal{L}=10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$-ish.
* $`\nu `$ factory: 1 millimole of muons per year.
* upgraded Tevatron?: $`\sqrt{s}=4`$ to $`6`$ TeV.
2013 โ 2025: Within the Energy Frontier
* stretch LC: $`\sqrt{s}`$$`=`$1.5 TeV.
* $`\gamma \gamma `$, $`e^{}e^{}`$: piggyback on LC.
* First Muon Collider: Higgs factory? Heavy Higgs factory?
2013 โ 2025: Extending the Energy Frontier
* upgraded LHC?: $`\sqrt{s}`$$`=`$?.
* CLIC: $`\sqrt{s}=3`$ to $`5`$ TeV, $`\mathcal{L}=10^{35}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$.
* High Energy Muon Collider: $`\sqrt{s}=3`$ to $`4`$ TeV, potential for 10 to 15 TeV.
* VLHC: $`\sqrt{s}=100`$ to $`200`$ TeV, $`\mathcal{L}=10^{35}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$.
### 3.1 What is the physics driving the LHC?
You are supposed to know this already! The LHC will advance the energy frontier by roughly a factor of five over present experiments. This should be amply sufficient to probe the new effective theory above the cutoff $`\mathrm{\Lambda }`$, discover most of its degrees of freedom, symmetries, and organizing principles. Looking back, we should be able to get a fundamental understanding of the mechanism of electroweak symmetry breaking, and perhaps shed light on flavor problems or other mysteries of the Standard Model.
### 3.2 What is the physics driving the LC?
The various linear collider proposals involve machines which will operate within the energy frontier established by the LHC. I have attempted here to summarize the important physics needs driving these proposals:
* Higgs physics is golden.
* The LHC wonโt be sufficient to unravel the new physics at the TeV scale.
* The LC has unique capabilities to divine new physics at even higher scales.
#### 3.2.1 Higgs physics is golden
Precision measurements are already telling us that either the Higgs is light (mass less than about 250 GeV), or new physics is misleading us! Even if the Higgs turns out not to be Standard Model-like, it is still likely to be discovered at either LEP, the Tevatron, or the LHC. However, the discovery of a $`b\overline{b}`$ invariant mass bump (say) in some data sample will be only the beginning of understanding the Higgs. As soon as a discovery is made, a main priority of high energy physics will be to answer the following questions:
1. Is this the Higgs of electroweak symmetry breaking?
2. Is this the Higgs associated with the generation of fermion masses?
3. Is this the only Higgs?
In short, we will need to find out everything we possibly can about the Higgs sector. Higgs physics will be golden, will occupy us for many years, and will require a large number of challenging measurements.
For example, we will need precise measurements of all the Higgs branching fractions. Disentangling the Higgs branching to taus, charm, and glue is a tall order, and we will need sensitivity to the rare decay modes as well. The LHC can do part of this job, but we will need an LC (with good luminosity and detectors) to do the rest. For a light Higgs a 350 or 500 GeV linear collider has enough energy reach. Eventually we would want to look at $`t\overline{t}H`$ production at higher energies.
Higgs physics will be interesting for a long time. Thus even in the post-LHC era we may be very interested in low energy machines which can refine our knowledge of the Higgs sector. This includes the $`\gamma \gamma `$ option for a linear collider, and an s-channel Higgs factory as First Muon Collider.
#### 3.2.2 The LHC wonโt be sufficient to unravel the new physics at the TeV scale.
The LHC experiments will do a lot, including a large number of precision measurements. But, as I have argued, the new physics at the TeV scale will be both rich and confusing. Mere prudence will demand that we probe this new world with all the tools at our disposal. A linear collider offers different sensitivities, polarization, reduced backgrounds, better contained events, and even more precise measurements.
Examples:
* Untangling the neutralino and slepton sectors in supersymmetry. What variety of SUSY is it?
* Deciphering virtual effects of extra dimensions. Is your Drell-Yan anomaly due to spin 2 Kaluza-Klein graviton exchange?
#### 3.2.3 LC precision measurements can pin down new physics scales
A case study which illustrates this point was recently made by Ambrosanio and Blair. They assumed that the new physics is a minimal version of gauge-mediated supersymmetry, and examined the question of whether experiments at a 500 GeV linear collider could measure the hidden sector supersymmetry breaking scale $`\sqrt{F}`$.
This is a scenario in which the lightest neutralino is the next-to-lightest superpartner (NLSP), and decays to a Goldstino (which is not seen) plus a photon: $`\stackrel{~}{\chi }_1^0\to \gamma G`$. The decay length of the NLSP, $`c\tau `$, has only log sensitivity to the gauge mediation messenger scale, but is highly sensitive to the SUSY breaking scale $`\sqrt{F}`$:
$$c\tau _{\stackrel{~}{\chi }_1^0}\propto \frac{F^2}{M_{\stackrel{~}{\chi }_1^0}^5}$$
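Since only the proportionality is quoted here, the overall normalization in the small scaling routine below is an assumption, fixed to an arbitrary reference point purely for illustration; the point is the steep dependence on $`\sqrt{F}`$ and on the NLSP mass.

```python
def ctau_scaled(sqrtF, m_chi, ctau_ref=1.0, sqrtF_ref=100.0, m_ref=100.0):
    """Decay length relative to an assumed reference point, using
    c*tau ~ F^2/M^5 = (sqrt F)^4/M^5; only ratios are meaningful here,
    so inputs just need the same units as the (hypothetical) references."""
    return ctau_ref*(sqrtF/sqrtF_ref)**4*(m_ref/m_chi)**5

# doubling sqrt(F) lengthens the decay length 16-fold, while doubling the
# NLSP mass shortens it by a factor of 32:
print(ctau_scaled(200.0, 100.0), ctau_scaled(100.0, 200.0))
```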
This suggests a challenging physics program, in which we must first determine that we have weak scale supersymmetry, that we have gauge-mediated supersymmetry breaking, that this is โminimalโ gauge mediation, and that the neutralino is the NLSP. Having done all of this, we must then be prepared to measure $`c\tau `$ of $`\stackrel{~}{\chi }_1^0`$ in the entire range from 10 microns to 30 meters, using various (overlapping) techniques:
* Projective tracking
* 3D tracking
* Photon pointing
* Calorimeter timing
* Statistical (counting single $`\gamma `$ versus 2 $`\gamma `$)
The conclusion of this study is that in such a scenario, with an appropriate detector and 200 $`fb^1`$ of integrated luminosity, we could measure $`\sqrt{F}`$ to $`\pm 5`$% at a 500 GeV linear collider.
### 3.3 Why a Neutrino Factory?
Neutrino oscillations are a strong hint that there is new physics associated with scales in the $`10^{10}`$-$`10^{16}`$ GeV range, or with brane-bulk physics in the case of extra dimensions. It will be a big job to pin down this new physics, and, in particular, to link this new physics to anything else in the Standard Model. We will certainly need precise and overconstrained measurements of the lepton mass matrix, just as we are now achieving for the CKM matrix of the quark sector. Flavor problems are hard, and it seems highly probable that we will need to build at least one new accelerator facility optimized for neutrino physics.
Possible designs of a muon storage ring neutrino factory are currently under study. The basic idea is to use a muon storage ring as a source of very intense beams of fairly high energy neutrinos. The muon charge, momentum, and polarization determine the neutrino composition and spectrum; thus the initial characteristics of the neutrino beam will be known with high confidence. Many types of oscillation experiments can be considered, including very long baseline possibilities such as Fermilab to Gran Sasso or CERN to Brookhaven. This presents the opportunity for truly international collaborations. This is particularly so since there are serious detector challenges to be overcome, including building in the capability to discriminate lepton flavors and measure their charges.
My belief that the neutrino factory can be brought on line as an LHC era facility is based mostly on its moderate size, and upon the growing interest and enthusiasm exhibited in many quarters for this idea. However a number of serious technical obstacles stand in the way of this goal, and without an especially aggressive R&D effort my rosy scenario will not come to pass. Since a neutrino factory appears also to be the most practical path to a muon collider, we have an especially strong motivation to pursue this idea as vigorously as possible.
### 3.4 Pushing the energy frontier
The reality of modern high energy physics is such that, if you want a new energy frontier collider in 2020, you had better be doing serious R&D for it now. This puts us in something of a quandary, since we don't yet know how to estimate the next interesting energy scale. From the arguments reviewed in this talk we will certainly need a post-LHC energy frontier machine, but beyond that general statement we know almost nothing at this point in time.
In fact at the moment we cannot with confidence answer even the most basic of questions. For example, to explore the new physics of the post-LHC era, will a 3 or 4 TeV lepton collider, or an LHC upgrade, be good enough, or do we need to push straight to a 10โ15 TeV muon collider or 100โ200 TeV VLHC? Until we answer such questions we had better be pursuing R&D for all options.
The point here is that LHC+LC data is going to be essential for making good decisions about post-LHC facilities. For example, LHC/LC data may indicate that, due to the existence of extra dimensions, the effective Planck scale is only about 3 TeV. In this case hard scatterings at future colliders may produce mostly black holes. This knowledge would certainly have a major impact on your choices for $`\sqrt{s}`$, luminosity, and detector design for post-LHC experiments!
A Final Thought
> It is much more likely that we will fail to build new accelerators than that these accelerators will fail to find interesting physics!
# Alternative Supersymmetric Spectra
## Abstract
We describe the features of supersymmetric spectra that are alternative to, and qualitatively different from, those of most versions of the MSSM. The spectra are motivated by extensions of the MSSM with an extra $`U(1)^{}`$ gauge symmetry, expected in many grand unified and superstring models, which provide a plausible solution to the $`\mu `$ problem, both for models with supergravity and for gauge-mediated supersymmetry breaking. Typically, many or all of the squarks are rather heavy (heavier than one TeV), especially for the first two families, as are the sleptons in the supergravity models. However, there is a richer spectrum of Higgs particles, neutralinos, and (possibly) charginos. Concrete examples of such spectra are presented, and the phenomenological implications are briefly discussed.
preprint: UPR-0867-T FERMILAB-Pub-99/369-T hep-ph/0001073
Introduction.
The minimal supersymmetric standard model (MSSM) and its simple extensions contain many free parameters associated with supersymmetry breaking. Most analyses have been based on two generic classes of models of soft supersymmetry breaking: (1) supergravity, in which supersymmetry breaking in a hidden sector is transmitted to the observable sector via supergravity. One usually assumes universality or at least a comparable scale for the soft parameters at the Planck scale. (2) Gauge-mediated models, with the breaking transmitted via messenger fields at relatively low energy, such as $`10^5`$ GeV. In both cases the scale of the soft supersymmetry breaking parameters ultimately sets the electroweak scale via radiative electroweak breaking, provided that the supersymmetric $`\mu `$ parameter is of a comparable magnitude. Once universality is relaxed there are many free parameters in supergravity, and there are many versions of gauge-mediation. However, in both cases a typical spectrum involves sparticle masses in the several hundred GeV range due to naturalness arguments; i.e., the mass scale of the superpartners should be in this range (and at most $`๐ช(1\mathrm{TeV})`$) for SUSY to explain the origin of the electroweak scale without excessive fine-tuning. Most studies of the implications for current and future colliders and precision measurements have been based on such a spectrum.
However, it is well known that naturalness does not necessarily require that all sparticles have masses below the TeV scale . In the scalar sector, naturalness only constrains the masses of the third generation sfermions and the electroweak Higgs doublets, as these are the fields which have large Yukawa couplings and thus play dominant roles in radiative electroweak symmetry breaking. Therefore, the sparticle masses of the first and second generations can be significantly larger than the other sparticle (and particle) masses without violating naturalness criteria. Recent work has demonstrated that this hierarchy can be generated dynamically via renormalization group evolution (first pointed out in and investigated in the context of grand unified models in ). In this scenario, the soft supersymmetry breaking scalar mass-squared parameters can be multi-TeV ($`4`$ TeV) at the high scale (while the gaugino masses and scalar trilinear couplings are $`M_W`$; the Higgs and third generation masses are driven to smaller values due to their large Yukawa couplings, while the first and second generations remain heavy. The results of a recent extension of this framework including the possibility of multi-TeV $`A`$ parameters indicate that such inverted hierarchies can be generated with the first two generations up to $`20`$ TeV. This scenario has distinctive implications (such as in collider searches for superpartners; see, e.g., ), and can be advantageous phenomenologically, as stringent laboratory constraints on the SUSY parameter space from flavor-changing neutral currents (FCNC) and CP-violation can be considerably weakened . Another possibility in supergravity models pointed out in is that since the Higgs soft mass-squared parameter at the electroweak scale can be quite insensitive to the initial values of the scalar masses due to โfocus-pointโbehavior of the RGEโs, scalar masses for all three generations of squarks and sleptons of order $`23`$ TeV can be consistent with naturalness (see for a discussion in the context of gauge-mediated models).
The purpose of this paper is to point out that there is another class of (string-motivated) models based on gauge extensions of the MSSM with an additional $`U(1)^{}`$ gauge group in which this type of spectrum is naturally achieved. In these models, it has been shown that the $`U(1)^{}`$ may be broken at the TeV scale by a radiative mechanism analogous to that for electroweak breaking, provided there are sufficiently large Yukawa couplings of a standard model singlet $`S`$ which carries $`U(1)^{}`$ charge. Such extended gauge groups, exotic particle content, and large Yukawa couplings are generically present in classes of quasi-realistic perturbative superstring constructions. These models also provide an alternative resolution to the $`\mu `$ problem of the MSSM, since gauge invariance can forbid the elementary $`\mu `$ term while an effective $`\mu `$ term can be generated via a trilinear coupling of the SM singlet to the two electroweak Higgs doublets. Furthermore, the enhanced symmetry avoids the problems of domain walls, which are common to models involving an effective $`\mu `$ generated by the VEV of a scalar but not associated with the breaking of an extra gauge symmetry.
In this framework the VEV of the singlet field sets the scale of the $`Z^{}`$ mass. This VEV is generally of order several TeV, since the nonobservation of an additional $`Z^{}`$ boson and the stringent constraints on the $`ZZ^{}`$ mixing angle $`\alpha _{ZZ^{}}`$ typically require that the $`Z^{}`$ is significantly heavier than the $`Z`$ (the lower bounds on $`M_Z^{}`$ are model dependent, but are in the range of 500 GeV to 1 TeV or so). Since the singlet VEV is achieved radiatively, its value generally sets the scale of the required initial values of the soft breaking parameters. Typical supersymmetry breaking parameters are at the TeV scale, at least for the first two generations. However, there is typically a much richer spectrum of Higgs particles and neutralinos, as well as the $`Z^{}`$ and (usually) exotic fermions and their partners. Specific models based on perturbative heterotic string constructions also involve extended chargino sectors.
Since the electroweak and $`U(1)^{}`$ symmetry breaking are coupled in these models, the large ratio of the $`Z^{}`$ and $`Z`$ masses requires a certain amount of tuning of the parameters (cancellations are needed for the expectation values of the Higgs doublets to be sufficiently small). Nevertheless, such models are worth exploring as viable alternatives to the MSSM which are well motivated theoretically both within quasi-realistic string constructions and GUT models. An additional motivation to consider such models seriously arises from recent precision electroweak data. The $`Z`$ lineshape and atomic parity data hint at the existence of an extra $`Z^{}`$ at a scale around 1 TeV (the implications of the atomic parity data alone have been considered recently in ; for earlier references, see ). In this paper, we illustrate typical spectra from several concrete models, some with supergravity-mediated SUSY soft breaking parameters and another with gauge-mediated supersymmetry breaking, and comment briefly on phenomenological implications.
Results: Alternative Supersymmetric Spectra.
The models we consider are extensions of the MSSM with an additional nonanomalous $`U(1)^{}`$ gauge symmetry and additional matter fields, typically including both SM singlets (with $`U(1)^{}`$ charges) and SM exotics. Such models are motivated by a class of quasi-realistic (perturbative heterotic) superstring models. It was shown in that after vacuum restabilization this class of string models generically contains extended Abelian gauge structures and additional matter content at the string scale. The trilinear couplings, which can be calculated exactly in string perturbation theory, usually include the top quark Yukawa coupling and the coupling between the Higgs doublets and a singlet field, i.e. an effective $`\mu `$-term. The coefficients of these couplings are calculable in string theory, and are of $`๐ช(1)`$.
A general analysis of these scenarios in the supergravity-mediated supersymmetry breaking framework was analyzed in a minimal model with no additional exotics . However, this model is not $`U(1)^{}`$ anomaly free, and thus does not have the necessary ingredients for a fully realistic theory, and thus the case of a string-motivated and anomaly-free $`E_6`$-type model was proposed in and studied in . These analyses demonstrated that there are corners of parameter space for which a phenomenologically acceptable $`ZZ^{}`$ hierarchy at the electroweak scale can be obtained. In these scenarios, the $`U(1)^{}`$ breaking is radiative and triggered by a large $`๐ช`$(TeV) SM-singlet VEV Another possibility is that the symmetry breaking is driven by a large value of the soft supersymmetry breaking trilinear coupling . However, this solution yields a light $`Z^{}`$ that is phenomenologically excluded except in the case of models with certain (leptophobic) couplings, and thus we do not consider this scenario further in this paper.. This solution provides a $`Z^{}`$ mass close to the natural upper limit of $`12`$ TeV, with the electroweak scale achieved via cancellations that require a certain amount of tuning of the soft mass parameters. This is the least desirable feature of these models.
In all these models, the low energy spectra display features that differ from the standard MSSM spectrum. In general, the requirement of a phenomenologically acceptable $`ZZ^{\prime }`$ hierarchy leads to low energy values of some or all of the scalar masses that are generically a few TeV. This feature can be understood heuristically within the supergravity-mediated supersymmetry breaking scenarios, where the boundary conditions are implemented at the string scale $`M_G`$ (the small discrepancy between the observed unification scale $`M_G`$ and the perturbative heterotic string scale is not significant for the cases considered here), as follows. In the limit $`\langle S\rangle \gg \langle H_1\rangle ,\langle H_2\rangle `$, the $`Z^{\prime }`$ mass is set by the low energy value of the soft mass-squared parameter $`m_S^2`$ of the singlet $`S`$: $`M_{Z^{\prime }}\simeq \sqrt{-2m_S^2}`$ (in this limit the $`U(1)^{\prime }`$ breaking can be considered separately from the electroweak breaking). To obtain the large and negative $`m_S^2`$ parameter at low energies and to avoid large fine-tuning, it is in general necessary that the singlet couple with $`\mathcal{O}(1)`$ Yukawa couplings to additional SM exotic quarks, as such couplings drive the singlet mass-squared parameter strongly to negative values. In this case, the RG evolution provides the desired value of $`m_S^2`$ at low energies provided that the scale of the soft mass-squares of the exotic quarks at $`M_G`$ is about a few TeV. This scale determines the typical magnitudes of the soft mass-squared parameters in such phenomenologically acceptable models (which are not excessively tuned either at the electroweak scale or at the high scale). The first and second generation sparticles (whose soft mass-squared parameters do not run significantly due to their smaller Yukawa couplings) are typically at a few TeV. However, a certain amount of fine-tuning is needed to obtain low energy values of the Higgs soft mass-squared parameters of the order of the electroweak scale. Since the $`U(1)^{\prime }`$ symmetry breaking is at the TeV scale, there are also additional Higgs bosons and neutralinos in the low energy spectrum. In the large $`\langle S\rangle `$ limit, some of these additional states acquire masses $`\simeq M_{Z^{\prime }}`$. We note that in the explicit string-derived model analyzed in , the couplings are more complicated, and an extended Higgs sector must be invoked to achieve a realistic $`ZZ^{\prime }`$ hierarchy. As a result, the low energy spectrum includes additional charginos as well as Higgs bosons and neutralinos.
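As a quick numerical illustration of the heuristic relation above, the following sketch inverts $`M_{Z^{\prime }}\simeq \sqrt{-2m_S^2}`$; the input value of $`m_S^2`$ is hypothetical, chosen only to land at a TeV-scale $`Z^{\prime }`$:

```python
# Illustration of M_Z' ~ sqrt(-2 m_S^2) in the large singlet-VEV limit.
# The value of m_S^2 below is a hypothetical input, not a model prediction.

import math

m_S_sq = -0.5e6                      # GeV^2, negative at low energies
M_Zprime = math.sqrt(-2.0 * m_S_sq)  # GeV
print(f"M_Z' ~ {M_Zprime:.0f} GeV")  # -> ~1000 GeV
```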
To illustrate these features, we now turn to several supergravity models and demonstrate the symmetry breaking pattern and the low energy spectrum explicitly. For the sake of simplicity, the models discussed are those with a minimal Higgs sector of two electroweak doublets and one SM singlet. For each model, we display the relevant mass parameters at both the electroweak scale and the string (or GUT) scale in Tables I and II, and present the detailed low energy spectra explicitly in Table III.
The first example we consider is an anomaly-free model with $`U(1)^{\prime }`$ charges that allow an induced $`\mu `$ term $`(Q_1+Q_2+Q_S=0)`$, where $`Q_1`$, $`Q_2`$, and $`Q_S`$ are respectively the $`U(1)^{\prime }`$ charges of $`H_1`$, $`H_2`$, and $`S`$, and that does not include additional SM exotics; it was first presented in the Appendix of . The charge assignments in this model (in self-evident notation) are given by
$$\begin{array}{cc}Q_{E_3}=Q_2-Q_1,\hfill & Q_{L_3}=-Q_2,\hfill \\ Q_{Q_3}=-\frac{1}{3}Q_1,\hfill & Q_S=-(Q_1+Q_2),\hfill \\ Q_{D_3}=\frac{1}{3}(Q_1+3Q_2),\hfill & Q_{U_3}=\frac{1}{3}(Q_1-3Q_2),\hfill \end{array}$$
(1)
for arbitrary $`Q_1`$ and $`Q_2`$, with the first and second families having zero $`U(1)^{\prime }`$ charges. We stress that this model is only semi-realistic; while these charge assignments are consistent with the gauge invariance conditions for the top quark ($`Q_{U_3}+Q_{Q_3}+Q_2=0`$) and the tau lepton ($`Q_{E_3}+Q_{L_3}+Q_1=0`$) Yukawa interactions, the bottom quark Yukawa interaction (and those of the first two generations) is forbidden by the symmetry. The bottom quark mass can be generated from a higher-dimensional operator, but its value is suppressed by the $`U(1)^{\prime }`$ breaking scale and thus is too small. However, we present this model as a minimal example in which to display the patterns of the $`U(1)^{\prime }`$ symmetry breaking and the resulting spectra. Nonuniversal boundary conditions (or the addition of exotics) at the string scale are required to drive the singlet mass-squared parameter negative at the electroweak scale in this model. The boundary conditions at the string scale are presented in Table I, for an example in which $`M_{Z^{\prime }}=1`$ TeV, the $`ZZ^{\prime }`$ mixing is $`\alpha _{ZZ^{\prime }}=2\times 10^{-3}`$, and $`\mathrm{tan}\beta =2`$.
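Since the charge assignment of Eq. (1) is fully explicit, the conditions quoted above can be checked mechanically. The short Python sketch below does so for arbitrary illustrative values of $`Q_1`$ and $`Q_2`$; the function and variable names are ours, introduced only for this check:

```python
# Consistency check (sketch) of the U(1)' charge assignment of Eq. (1):
# gauge invariance of the top and tau Yukawas, the induced mu-term
# condition, and the absence of a bottom Yukawa for generic Q1, Q2.

from fractions import Fraction as F

def uprime_charges(Q1, Q2):
    return {
        "E3": Q2 - Q1,
        "L3": -Q2,
        "Q3": -F(1, 3) * Q1,
        "S":  -(Q1 + Q2),
        "D3": F(1, 3) * (Q1 + 3 * Q2),
        "U3": F(1, 3) * (Q1 - 3 * Q2),
    }

Q1, Q2 = F(2), F(1)                  # arbitrary illustrative values
q = uprime_charges(Q1, Q2)

assert q["U3"] + q["Q3"] + Q2 == 0   # top Yukawa:  Q_U3 + Q_Q3 + Q2 = 0
assert q["E3"] + q["L3"] + Q1 == 0   # tau Yukawa:  Q_E3 + Q_L3 + Q1 = 0
assert Q1 + Q2 + q["S"] == 0         # induced mu term S H1 H2
# bottom Yukawa charge sum = Q1 + Q2, nonzero for generic choices:
print("bottom Yukawa charge sum:", q["Q3"] + q["D3"] + Q1)
```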
Another example, which provides acceptable anomaly-free $`U(1)^{\prime }`$ quantum numbers and is approximately consistent with gauge unification, is a string-motivated model with $`E_6`$ particle content (without $`E_6`$-type relations among the Yukawa couplings) . The particle content of the model under consideration includes three $`E_6`$ 27-plets, each of which includes an ordinary family, two Higgs-type doublets, two standard model singlets, and two exotic $`SU(2)`$-singlet quarks with charge $`\pm 1/3`$. We also assume a single vector-like pair of Higgs-type doublets from a $`27+\overline{27}`$, which does not introduce any anomalies. The particle content is consistent with gauge unification. It is further assumed that only a subset of these fields (the SM Higgs doublets, the SM singlet $`S`$, and an exotic quark pair $`D`$ and $`\overline{D}`$) play significant roles in the radiative breaking due to the presence of trilinear superpotential couplings (with $`\mathcal{O}(1)`$ coefficients) of the form $`SH_1H_2`$ and $`SD\overline{D}`$.
The $`U(1)^{\prime }`$ symmetry breaking patterns of this model were analyzed in detail in , assuming a general set of supergravity-mediated soft supersymmetry breaking mass parameters; we refer the reader to this work for further details. In general, non-universal boundary conditions are required to achieve the desired hierarchy. The soft parameters and low energy spectrum of the first numerical example of this model ($`E_6`$(UA)), which includes only the dominant effect of the top Yukawa couplings and assumes universal $`A`$-parameters at $`M_G`$, are presented for a case in which $`M_{Z^{\prime }}=1700`$ GeV and $`\alpha _{ZZ^{\prime }}=2\times 10^{-3}`$. The second numerical example of this model ($`E_6`$(NUA)), which has non-universal $`A`$-terms and also includes the bottom, tau and charm Yukawas, is presented for a case in which $`M_{Z^{\prime }}=1600`$ GeV and $`\alpha _{ZZ^{\prime }}=1\times 10^{-3}`$. The non-universal $`A`$ parameters result in a different pattern in the squark spectrum, as presented in Table III.
This class of models was also analyzed recently assuming gauge-mediated supersymmetry breaking . The particle content of the observable sector in the particular example considered includes the MSSM fields, as well as a vector-like pair of quark singlets ($`D`$ and $`\overline{D}`$) and an additional singlet field whose couplings to the two Higgs doublets and to $`D`$, $`\overline{D}`$ are allowed by gauge invariance. The supersymmetry breaking scale is set to the standard value of $`10^5`$ GeV. In this model it is assumed that the messenger fields are not charged under the $`U(1)^{\prime }`$ symmetry; therefore, the soft mass-squared of the singlet field is zero at the messenger scale. As a result, to achieve the desired value of $`m_S^2`$ at the electroweak scale over the short period of RGE running (from $`10^5`$ GeV to $`10^2`$ GeV), the soft mass-squares of the scalar exotic quarks generated at the messenger scale are required to be of order a TeV, which thus sets the mass scale for the masses of the other squarks. A numerical example of the model is presented in Tables I and II, with a $`Z^{\prime }`$ mass of $`1110`$ GeV and a mixing angle $`\alpha _{ZZ^{\prime }}=0.004`$.
An inspection of Table III indicates that the low energy spectra of these models share several common features. The scalar particles generally have masses at the TeV scale, higher than in the standard scenarios of the MSSM. In particular, the squarks of the first and second generation have masses of about $`1`$–$`3`$ TeV in each of the examples. The first $`E_6`$ example, with universal $`A`$-parameters at the GUT scale, predicts a hierarchy between the third family and the first two family squarks; in particular, the stops and sbottoms are much lighter. In contrast, in the second $`E_6`$ example with non-universal $`A`$-parameters, the squarks of the three families can have masses of the same order. In the case in which the third family sparticles are also heavy, there is an additional tuning issue because the stops enter the Higgs potential at the loop level; however, given that cancellations between terms of order $`(\mathrm{TeV})^2`$ are already present at the tree level, the tuning is not significantly worsened for stops with TeV masses. In the gauge-mediated model, the squarks (including the exotics) acquire TeV scale masses at the messenger scale; since the running time is very short (from $`10^5`$ GeV to $`10^2`$ GeV), their masses stay heavy, $`\mathcal{O}`$(TeV). In the slepton sector, both of the $`E_6`$ models predict heavy sleptons with masses above about one TeV, while the gauge-mediated model has sleptons of a few hundred GeV due to the gauge coupling hierarchy at the messenger scale (see also ).
We now turn to a discussion of the Higgs and neutralino sectors, as discussed in detail in . With the assumption of a minimal set of Higgs fields, in addition to the MSSM Higgs bosons there is one additional neutral CP-even Higgs boson which is predominantly the real component of the singlet field and has a mass $`\simeq M_{Z^{\prime }}`$ in the large $`\langle S\rangle `$ limit. In the neutralino sector, there are two additional neutralinos: one extra gaugino (corresponding to the fermionic partner of the $`Z^{\prime }`$) and an extra Higgsino (corresponding to the fermionic partner of the $`S`$ field). In the large $`\langle S\rangle `$ limit, these neutralinos mix and have masses controlled by $`M_{Z^{\prime }}`$. We also note that the upper bound on the tree level mass of the lightest Higgs receives a contribution from the $`U(1)^{\prime }`$ $`D`$-term and thus can be larger than in the MSSM, which is a particular feature of this class of models.
Concluding Remarks.
The purpose of this paper has been to emphasize two main points: (i) supersymmetric models with an additional $`U(1)^{\prime }`$ gauge symmetry broken at the TeV scale are well motivated extensions of the MSSM both theoretically and phenomenologically, and (ii) the characteristic low energy mass spectra of this class of models exhibit patterns with distinctive features compared to those of the MSSM. In particular, the strong phenomenological constraints on the $`Z^{\prime }`$ mass and its mixing with the ordinary $`Z`$ dictate that the $`U(1)^{\prime }`$ is broken by a large singlet VEV of order several TeV, which sets the initial scale of the soft scalar mass-squared parameters. The resulting low energy spectra generically have heavy scalars, as well as a richer spectrum of Higgs bosons, neutralinos, and possibly charginos. In this scenario, the electroweak scale is generated by cancellations, which in turn suggests a natural upper limit on the mass scale of the heavy scalars (and the $`Z^{\prime }`$ mass) of order several TeV to avoid excessive fine-tuning.
We conclude with a brief discussion of the phenomenological implications of the mass spectra in this class of models. In general, the heavy squarks (and sleptons in the supergravity models) can lead to distinctive phenomenological signatures and can ease the strong constraints on the SUSY parameter space from FCNC and CP violation (see e.g. and references therein for the analysis of these processes within the MSSM), as discussed recently in . The heavy scalars in the models considered here are typically in the range $`1`$–$`3`$ TeV; as such, it is well known that flavor changing neutral current (FCNC) operators due to box diagrams are typically suppressed compared to the MSSM because of the larger scale, although one must still rely somewhat on the assumption that the soft scalar mass-squares generated by some (unknown) supersymmetry breaking mechanism are diagonal in flavor space. In our RGE analysis, the scalar mass-squares and the $`A`$ terms are assumed to be diagonal (but not necessarily universal). For this reason, we do not go into a detailed analysis of the implications of the spectra presented in this paper for FCNC and CP-violating processes. Instead, we note that with squark masses of order a few TeV, the requirement of universality of the soft scalar masses can be relaxed compared to the case of the MSSM. Namely, the splitting between the scalar masses of the three quark families, $`|m_{q_1}-m_{q_2}|/m_{q_3}`$, can be as large as $`\mathcal{O}(1)`$ while still satisfying present experimental bounds on FCNC. We also point out that in the models considered there can be new flavor changing effects due to family non-universal $`U(1)^{\prime }`$ couplings (e.g., the string-derived model in has family non-universal $`U(1)^{\prime }`$ charges). While such family non-universal couplings are subject to severe constraints for the first two generations, the third generation couplings are less constrained. A complete analysis of these effects is currently underway .
In addition to the effects from the heavy scalars, the extended gauge, Higgs, neutralino, and (possibly) chargino sectors implicit in these models have a number of phenomenological consequences. The implications include new expectations for precision experiments and collider searches, and possibly new patterns for dark matter . In addition, the presence of the $`U(1)^{\prime }`$ symmetry and the exotics can have effects on $`R`$-parity violation, neutrino masses, quark and charged lepton masses and mixings, and scenarios for baryogenesis. A comprehensive study of such issues is beyond the scope of this paper and is deferred to a future study.
###### Acknowledgements.
We thank M. Cvetič, J. Erler, and J. R. Espinosa for helpful discussions and suggestions. This work is supported in part by the U.S. Department of Energy Grants No. EY-76-02-3071 (P.L.,M.P.), DE-FG02-95ER40899 (L.E.), and DE-AC02-76CH03000 (J.W.), and in part by the Feodor Lynen Program of the Alexander von Humboldt Foundation (M.P.).
# Preheated Advection Dominated Accretion Flow
## 1 Introduction
Gas accretes onto compact objects in disk or (quasi) spherical shape depending on the angular momentum and the entropy of the gas. The flow may have a thin disk shape with very small radial motion, or it could be spherical or spheroidal with significant radial motion. The amount of radiation emitted depends on the shape of the flow as well as on the nature of the compact object. Accretion onto white dwarfs and neutron stars always produces luminosity approximately proportional to the mass accretion rate. Thin disk accretion also produces radiation that is a constant fraction of the gravitational energy. However, spheroidal or spherical flow onto the black holes with relatively large radial velocities emits differing amount of radiation, roughly scaling with the square of the accretion rate (Shapiro 1973a; Park 1990a,b), depending on the physical states of the gas and how the radiation escapes from the flow. Therefore, accretion onto black holes can have a variety of geometrical flow shapes and can produce a wide range of luminosity and emitted spectrum.
### 1.1 Spherical accretion
Accretion flows onto black holes become spherical or nearly spherical when the angular momentum is very small or when the hole is rapidly moving relative to the gas (Hoyle & Lyttleton 1939; Bondi & Hoyle 1944; Bondi 1952; Loeb & Laor 1992). We review first the simpler case of spherical accretion because we find that the type of solution obtained here are quite relevant to more complex ADAF case. As long as two-body processes are important, the solutions become scale-free and are applicable to black holes of arbitrary mass (Chang & Ostriker 1985). These scale-free solutions then depend primarily on the dimensionless luminosity
$$l\equiv \frac{L}{L_E}$$
(1)
and the dimensionless mass accretion rate (we define the Eddington mass accretion rate as $`\dot{M}_E\equiv L_E/c^2`$, without the $`e^{-1}`$ factor, because the radiation efficiency $`e`$ of accretion onto black holes is not fixed in general)
$$\dot{m}\equiv \frac{\dot{M}}{\dot{M}_E}=\frac{\dot{M}c^2}{L_E},$$
(2)
or, equivalently, the dimensionless luminosity $`l`$ and the radiation efficiency
$$e\equiv \frac{L}{\dot{M}c^2},$$
(3)
where $`L_E`$ is the Eddington luminosity (Ostriker et al. 1976). However, if we require the self-consistency between the gas and the radiation field generated by the accretion process, only certain combinations of the accretion rate and the luminosity (plus spectrum) are allowed, and a given self-consistent solution is described by a line in the $`(l,\dot{m})`$ plane. Within a given type of solution, only one parameter is needed to characterize the specific flow, and the dimensionless mass accretion rate, $`\dot{m}`$, is the most meaningful parameter. Broad reviews of accretion physics can be found, for example, in Rybicki & Lightman (1979), Pringle (1981), Treves, Maraschi, & Abramowicz (1988), Frank, King, & Raine (1992), Chakrabarti (1996b).
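For concreteness, the dimensionless quantities of equations (1)-(3) are easy to evaluate numerically. The short Python sketch below does this for an assumed stellar-mass hole; the input values of $`L`$ and $`\dot{M}`$ are illustrative only:

```python
# Sketch: the dimensionless quantities of Eqs. (1)-(3) for an assumed
# 10 M_sun black hole; the values of L and Mdot below are illustrative.

import math

G, c = 6.674e-8, 2.998e10            # cgs
m_p, sigma_T = 1.673e-24, 6.652e-25
M = 10 * 1.989e33                    # g

L_E = 4 * math.pi * G * M * m_p * c / sigma_T   # Eddington luminosity, erg/s
Mdot_E = L_E / c**2                  # Eddington rate as defined in the text

L, Mdot = 1.0e37, 1.0e18             # assumed luminosity and accretion rate
l, mdot = L / L_E, Mdot / Mdot_E     # Eqs. (1) and (2)
e = L / (Mdot * c**2)                # Eq. (3); note e = l / mdot
print(f"L_E = {L_E:.2e} erg/s, l = {l:.4f}, mdot = {mdot:.3f}, e = {e:.4f}")
```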
#### 1.1.1 Spherical flow with small mass accretion rate ($`\dot{m}\ll 1`$)
The electron scattering optical depth from infinity down to the horizon is roughly given by $`\tau _{es}=\dot{m}(r/r_s)^{-1/2}`$ for freely falling flow, where $`r_s`$ is the Schwarzschild radius of the hole; therefore, for $`\dot{m}\ll 1`$, spherical flow is optically thin to scattering. Photons escape through the flow almost freely, and the exchange of momentum and energy between the gas and the radiation field is negligible. Radiative cooling and Compton heating are also quite negligible. Gas is almost freely falling, $`v/c\simeq (r/r_s)^{-1/2}`$, which is true even in relativistic flow (Shapiro 1973a,b, 1974); adiabatic heating keeps the gas up to the virial temperature, $`T\propto (r/r_s)^{-1}`$, until electrons become relativistic (Shapiro 1973a; Park 1990a,b). Once electrons reach relativistic temperature, the adiabatic index of the gas changes from $`\gamma =5/3`$ to $`\gamma =13/9`$ if electrons and protons are well coupled. Now the temperature increases less rapidly and radiative cooling, mainly relativistic bremsstrahlung and synchrotron, becomes more efficient. Regardless, the gas can reach as high as a few times $`10^{10}\mathrm{K}`$ even when electrons are loosely coupled to ions via the Coulomb process. The luminosity and efficiency of the non-magnetic solutions are quite small due to the very low gas density, typically $`l<10^{-7}`$ and $`e<10^{-6}`$ (solid line marked as S \[Shapiro 1973a\] and large crosses \[Park 1990b\] in Figs. 1 & 3), with the efficiency approximately proportional to the mass accretion rate, $`e\propto \dot{m}`$, and the luminosity $`l\propto \dot{m}^2`$. The radiation temperatures $`T_X`$, defined such that $`4kT_X`$ equals the energy-weighted mean photon energy, of the solutions are plotted as crosses in Figure 2.
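These free-fall scalings are trivial to tabulate; a minimal sketch follows (the value of $`\dot{m}`$ is purely illustrative):

```python
# Free-fall scalings quoted above: tau_es = mdot (r/r_s)^(-1/2) and
# v/c ~ (r/r_s)^(-1/2); for mdot << 1 the flow is optically thin everywhere.

def freefall(mdot, x):                   # x = r / r_s
    return mdot * x ** -0.5, x ** -0.5   # (tau_es, v/c)

for x in (1e4, 1e2, 1e0):
    tau, v = freefall(mdot=0.01, x=x)
    print(f"r/r_s = {x:6.0e}: tau_es = {tau:.1e}, v/c ~ {v:.1e}")
```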
#### 1.1.2 Spherical flow with large mass accretion rate ($`\dot{m}\gg 1`$)
As the mass accretion rate $`\dot{m}`$ approaches 1, radiative cooling becomes more efficient due to the increased density and the flow cools down to $`10^4\mathrm{K}`$ in the absence of Compton heating. The flow forms an effectively optically thick, i.e., optically thick to absorption, core with blackbody radiation field inside. The flow no longer behaves as an adiabatic gas.
Further increase of the mass accretion rate makes the core grow larger, and soon the whole flow becomes effectively optically thick. The flow is now cooled by the diffusion of radiation as in a stellar interior (Flammang 1982, 1984; Soffel 1982; Blondin 1986; Park 1990a; Nobili, Turolla, & Zampieri 1991). However, the ultimate source of the luminosity is gravity, through adiabatic compression, rather than nuclear burning. Triangles in Figures 1–3 show the luminosity $`l`$, the radiation temperature $`T_X`$, and the radiation efficiency $`e`$ as a function of the mass accretion rate $`\dot{m}`$ (Nobili et al. 1991). Note that the high temperature, low $`\dot{m}`$ solutions (large crosses) are quite distinct from the low temperature, high $`\dot{m}`$ solutions (triangles) in Figure 2, but they are continuous in Figure 1. Although the luminosity increases steadily with larger mass accretion rate, the flow is still a quite inefficient radiator, $`e\sim 10^{-7}`$.
The electron scattering optical depth becomes larger than unity when $`\dot{m}>1`$, and the flow becomes optically thick to scattering. Photons collide more often with the flow, and the diffusion speed of photons decreases with increasing optical depth. For high $`\dot{m}`$, the bulk velocity of the flow exceeds the photon diffusion speed, $`v>c/(\tau _{es}+\tau _{abs})`$, and radiation trapping occurs (Begelman 1978). The behavior of radiation and gas in this regime has to be treated within a rigorous relativistic framework (Flammang 1982, 1984; Park 1990a; Nobili et al. 1991; Park 1993). In such an optically thick relativistic flow, the flux seen by a stationary observer relative to the background spacetime is significantly different from the flux seen by an observer comoving with the flow. The momentum transferred to the gas from the radiation is closely linked to the latter, the comoving-frame flux, and the luminosity seen by the observer at infinity to the former, the fixed-frame flux (Mihalas & Mihalas 1984; Park 1993). Also in this regime, the usual diffusion relation between the flux and the radiation pressure does not hold and should not be used (Miller 1990; Park & Miller 1991). This radiation trapping critically affects all types of accretion flow with large mass accretion rates.
The possibility of another mode at the same mass accretion rate was first investigated by Wandel, Yahil, & Milgrom (1984). Subsequent rigorous studies show that the flow can be maintained up to high temperature through Compton preheating by the radiation that the accretion flow itself produces at smaller radii (Park 1990a,b; Nobili et al. 1991; Mason & Turolla 1992). They find this high-temperature, higher-luminosity, and therefore higher-efficiency flow does exist, but only for $`3\lesssim \dot{m}\lesssim 100`$, as shown in Figures 1–3 as large open circles (Park 1990b). Hence, for a certain range of the mass accretion rate, there exists more than one family of accretion flow with different properties.
These high-temperature accretion flows have gas temperatures as high as $`10^9`$–$`10^{10}\mathrm{K}`$, in contrast to the low-temperature ($`10^4\mathrm{K}`$) case. The flow is optically thick to scattering, and radiation is trapped inside $`r\lesssim \dot{m}r_s`$. Yet the flow is optically thin to absorption despite the large scattering optical depth because of the high electron temperature. The flows are much more luminous, $`l\sim 10^{-4}`$–$`10^{-2}`$ (large open circles in Fig. 1), than their low-temperature counterparts (triangles) and produce much harder photons. The radiation efficiency $`e\sim 10^{-4}`$ is about $`10^3`$ times higher than for the low-temperature flow, yet still far less than the $`e\simeq 0.1`$ of the thin disk. The self-consistent luminosity $`l`$ increases with the mass accretion rate $`\dot{m}`$, while the spectrum of the emitted radiation, which preheats the flow in turn, is harder for lower $`\dot{m}`$ and softer for higher $`\dot{m}`$ (circles in Fig. 2). The global stabilities of these flows have been investigated in a relativistic framework, and they are found to be thermally unstable and to develop moving shocks (Zampieri, Miller, & Turolla 1996), as suspected in steady-state work (Park 1990a,b).
There exist a few candidate processes that might significantly increase the radiation efficiency. Mészáros (1975) considered dissipational heating from turbulent motion, Maraschi et al. (1982) magnetic field reconnection, and Park & Ostriker (1989) electron-positron pair production. Solutions of the first and second types can reach $`e`$ as high as $`0.1`$, small squares in Figures 1 & 3 (Maraschi et al. 1982), while those of the last type reach up to $`e\sim 10^{-2.5}`$, small filled circles in Figures 1 & 3 (Park & Ostriker 1989). They constitute yet another type of accretion flow in this so-called super-Eddington accretion regime. However, the foundations for these higher-efficiency families are less solid than for the other two families because the solutions are found without consideration of preheating (Mészáros 1975; Maraschi et al. 1982) or with only the inner supersonic part of the flow dynamically considered (Park & Ostriker 1989). A rigorous and full treatment of dynamics and preheating will certainly show that they are unstable to the preheating instability (Cowie, Ostriker, & Stark 1978; Ciotti & Ostriker 1999). It is possible that the overheated flow is stabilized by developing a shock (Chang & Ostriker 1989; small crosses in Figs. 1 & 3), yet no self-consistent solutions with preheating and a steady shock have been constructed despite some hints of their existence (Park 1990b; Nobili et al. 1991).
### 1.2 Preheating
For this higher $`\dot{m}`$ and $`l`$ flow, radiation has more chance to interact with the gas: both momentum and energy are transferred from one to the other. Transfer of momentum, when the interaction of radiation with matter is mediated by electron Thomson scattering, leads to the critical Eddington luminosity ($`l=1`$ lines in Figs. 1 & 3). Another, lower type of luminosity upper limit was found by Ostriker et al. (1976), which can be as small as $`10^{-2}`$ times the Eddington luminosity. This new critical luminosity exists due to preheating of the gas flow by hard Compton radiation. When the gas is preheated significantly near the sonic point, it becomes too hot to accrete in a steady state. For example, when the surrounding gas temperature is near $`10^4\mathrm{K}`$ and the Compton temperature of the preheating radiation is $`10^8\mathrm{K}`$, steady-state accretion is impossible above $`l_{cr}\simeq 0.01`$ (rectangular region I in Figs. 1 & 3). This preheating explains why self-consistent solutions are not found for $`\dot{m}\lesssim 3`$ and $`\dot{m}\gtrsim 100`$ (Park 1990a,b; Nobili et al. 1991): since Compton preheating depends on the energy and the luminosity of the outcoming photons, the preheating instability occurs when the radiation is too hard ($`\dot{m}\lesssim 3`$; Fig. 2) or the luminosity is too high ($`\dot{m}\gtrsim 100`$; Fig. 1). The region can be smaller under certain conditions (Ostriker et al. 1976; Cowie et al. 1978; Bisnovatyi-Kogan & Blinikov 1980; Stellingwerf & Buff 1982; Krolik & London 1983).
Preheated flow shows interesting time-dependent behavior (Cowie et al. 1978; Grindlay 1978) that may be applicable to the variability of accretion powered astronomical sources, including galactic X-ray sources, active galactic nuclei, QSOs, and cooling flows (Ciotti & Ostriker 1997, 1999). Cowie et al. (1978) found two distinct types of time-dependent behavior inside and near the sonic radius. The flow is overheated inside the sonic point for high $`l`$, low $`e`$ accretion (region II in Figs. 1 & 3), producing recurrent flaring on short time intervals. Low $`l`$, high $`e`$ flow (region III in Figs. 1 & 3) develops preheating outside the sonic radius, causing accretion rate and luminosity changes on longer time scales. However, these time-dependent studies did not impose self-consistency, meaning the radiation efficiency was arbitrarily assigned rather than calculated from the flow itself.
All of the high temperature, physically thick self-consistent solutions shown in Figure 3 that are known or expected to be stable have a quite low efficiency ($`e\lesssim 10^{-4}`$). It seems likely that all physically thick solutions having a high enough density to efficiently produce radiation will undergo relaxation oscillations, with the primary cause being preheating instabilities. The exception to this may be the slim disks, which have a relatively low emission temperature and which essentially resemble massive stars.
### 1.3 Axisymmetric accretion
Accretion flow onto a black hole with angular momentum can assume a disk-like or spheroidal shape depending on how efficiently the heat produced is removed. The flow becomes thin and disk-like when the energy from viscous dissipation is readily radiated away in the vertical direction (Shakura & Sunyaev 1973). If too much radiation is produced and trapped within the flow, the disk puffs up to a thick one, and much of the radiation and gas energy is advected into the hole with significant radial velocity (Jaroszyński, Abramowicz, & Paczyński 1980; Paczyński & Wiita 1980). When the disk is not thin, constructing the flow solution becomes a two-dimensional problem, which has been tackled only by arbitrarily fixing certain physical parameters (e.g., Paczyński 1998), by reducing it to a one-dimensional problem (Abramowicz et al. 1988; Narayan & Yi 1994, NY1 hereafter; Narayan & Yi 1995b, NY3 hereafter), by numerical simulations (Molteni, Lanzafame, & Chakrabarti 1994; Ryu et al. 1995; Chen et al. 1997; Igumenshchev & Beloborodov 1997; Igumenshchev & Abramowicz 1999; Stone, Pringle, & Begelman 1999), or by assuming self-similar forms (Narayan & Yi 1995a, NY2 hereafter).
#### 1.3.1 Thin disk
Since the seminal works of the 1970s, thin disk accretion, especially the so-called $`\alpha `$-disk, has been the de facto standard accretion mode applied to various accretion powered sources (Shakura 1972; Pringle & Rees 1972; Shakura & Sunyaev 1973; Novikov & Thorne 1973). The thin disk has the merit of being simple: the equations reduce to one-dimensional ones, and all physical interactions can be described by local quantities. Angular momentum of the rotating gas is transported outward by the viscous stress through the differentially rotating gas. Gravitational potential energy is radiated away via viscous dissipation. Although the disk is geometrically thin, it is optically thick to absorption, and radiation diffuses out in the vertical direction. Because of the geometrical shape, the emitted radiation does not in general interact with other parts of the flow. The innermost edge of the disk determines the amount of gravitational potential energy to be released, and the radiation efficiency is roughly $`0.1`$ regardless of whether the central compact object is a neutron star or a black hole. Therefore, all thin disk accretion appears on the dotted line $`e=0.1`$ (TD in Figs. 1 & 3).
However, the thin disk formalism may not be extended to high luminosity systems or, equivalently, high $`\dot{m}`$ systems: the disk becomes unstable to thermal or secular instability when radiation pressure dominates over the gas pressure (Lightman & Eardley 1974; Shakura & Sunyaev 1976; Pringle 1976; Piran 1978). Furthermore, hard X-ray sources like Cygnus X-1 cannot be explained by a thin disk: it is too cool and emits a blackbody-like spectrum. Though Shapiro, Lightman, & Eardley (1976) found a new family of optically thin disks with very high electron temperature, these also proved to be unstable to the thermal instability (Pringle 1976; Piran 1978; Park 1995). Thus, for high flow rates or high temperature flows, this simple, high efficiency solution is not available or appropriate.
#### 1.3.2 Thick disk
As the dimensionless mass accretion rate $`\dot{m}`$ approaches $`e^{-1}`$ (i.e., $`l\simeq 1`$), the vertical height of the disk becomes comparable to the radius. In the thin disk case, gravity is balanced by the Keplerian rotation and the radial motion is negligible. When the disk becomes geometrically thick, the dynamical balance in the radial direction becomes important, and significant radial motion occurs, resulting in sub-Keplerian rotation. Now the disk is two-dimensional: dynamics in the radial direction as well as in the azimuthal direction should be considered.
This geometrically thick accretion disk has been studied by Paczyński and collaborators (e.g. Paczyński & Wiita 1980; Jaroszyński et al. 1980; Abramowicz, Calvani, & Nobili 1980). Thick disk flow can only be described by full two-dimensional partial differential equations because the angular momentum of the gas is not known a priori, radial infall motion cannot be ignored, and the energy equations become non-local. Studies of thick disks around black holes show that the flow develops a cuspy inner edge through which the gas flows into the hole with roughly free-fall velocity. Photons are locked inside the disk and advected into the hole; yet a large amount of radiation may escape through the surface of the thick disk, which shapes a funnel around the rotation axis.
#### 1.3.3 Slim disk
To eschew the complexities while retaining the essential features of the thick accretion flow, Abramowicz et al. (1988) applied height-integrated $`\alpha `$ disk equations to super-Eddington ($`\dot{m}>e^{-1}`$) accretion flow. They found stable disk solutions with significant radial motion in the high $`\dot{m}`$ regime, in which the thin disk is unstable. The flow is optically thick and the radiation is trapped at small radii, so that, as noted above, these solutions resemble massive stars. The entropy is directly advected with the flow, which adds to the radiative cooling through the surface of the disk. This new family of disk solutions is called the “slim disk” because the disk is assumed to be slim enough to validate the height-integrated equations, as in the thin disk solutions. In this accretion mode the radiation efficiency is not predetermined: some of the gravitational potential energy is ultimately absorbed into the hole due to the entropy advection, and the amount of radiation emitted depends on the specific conditions of the flow. In this regard, the slim disk has some similarity to the case of high $`\dot{m}`$ spherical accretion. The luminosity and radiation efficiency of these slim disks are plotted in Figures 1 & 3 as the dot-dashed curve labeled SD; the flow can reach super-Eddington luminosity, $`l\gtrsim 1`$, and $`e`$ decreases from $`\sim 0.1`$ as $`\dot{m}`$ increases (Szuszkiewicz, Malkan, & Abramowicz 1996). However, it is not at all clear whether the slim disk approach can be extended to super-Eddington systems because the disk is likely to be thick rather than slim.
#### 1.3.4 Advection dominated accretion flow
The slim disk solutions were constructed by explicitly integrating the height-averaged hydrodynamic equations (Abramowicz et al. 1988). However, Narayan & Yi (NY1) found that these one-dimensional equations admit simple self-similar solutions if the cooling due to entropy advection is assumed to be a constant fraction of the viscous heating (also see Spruit et al. 1987): the density, velocity, angular velocity, and total pressure are simple power laws in radius, $`\rho \propto r^{-3/2}`$, $`v\propto r^{-1/2}`$, $`\mathrm{\Omega }\propto r^{-3/2}`$, and $`P\propto r^{-5/2}`$. Most of the dissipation energy generated is advected inward with the flow, and the solutions are called advection dominated accretion flows (ADAF; see Narayan, Mahadevan, & Quataert 1998b for a review). (The basic formulation of ADAF is the same as that of the slim disk flow; they differ only in $`\dot{m}`$, or in the effective optical depth.) This type of accretion had been studied earlier (Ichimaru 1977) under different names, such as “ion torus” (Rees et al. 1982).
However, since the temperature of this type of flow is close to the virial value, it is not certain whether height-integrated equations are adequate. In subsequent work (NY2), full two-dimensional self-similar solutions were constructed under the standard $`\alpha `$-viscosity assumption. The radial velocity of the flow turns out to be a fraction of (roughly proportional to $`\alpha `$ for $`\alpha \lesssim 1`$) the free-fall value at the equatorial plane, yet the flow along the pole is pressure supported with zero infall velocity. The matter is preferentially accreted along the equatorial plane. The flow is subsonic due to the near virial pressure, which is valid for $`r\gg r_s`$. Another unique feature of ADAF is that the Bernoulli constant of the flow is positive, as noted by NY2 themselves and critically analyzed by Blandford & Begelman (1999), which has led Xu & Chen (1997) and Blandford & Begelman (1999) to propose a generalization of ADAF that has both inflow and outflow at the same time. Another important result from NY2 is that the physical quantities calculated from the vertically integrated one-dimensional equations generally agree with those averaged over the polar angle. In a way this justifies the slim disk approach to the two-dimensional accretion flow, although this is proved only for the self-similar flow. This has permitted further extensive work on the dynamics of ADAF based on the one-dimensional height-integrated hydrodynamic equations (Narayan, Kato, & Honma 1997; Chen, Abramowicz, & Lasota 1997; Nakamura et al. 1997; Gammie & Popham 1998; Popham & Gammie 1998; Chakrabarti 1996a; Lu, Gu, & Yuan 1999). Because, everywhere in the flow, the radial velocities are quite substantial and the pole-to-equator density variations are moderate, these solutions share many of the characteristics of purely spherical flow.
Detailed microphysics is readily incorporated into these one-dimensional solutions, showing the main characteristics of ADAF: low radiation efficiency and high electron temperature (NY3; Abramowicz et al. 1995). These make ADAF applicable to low luminosity, hard X-ray sources, like Sgr A* (Narayan, Yi, & Mahadevan 1995; Mahadevan, Narayan, & Krolik 1997; Manmoto, Mineshige, & Kusunose 1997; Narayan et al. 1998a), NGC 4258 (Lasota et al. 1996), soft X-ray transient sources (Narayan, McClintock, & Yi 1996; Narayan, Barret, & McClintock 1997), and low-luminosity galactic nuclei (Di Matteo & Fabian 1997; Mahadevan 1997), as well as to other diverse problems like lithium production in AGN (Yi & Narayan 1997), torque-reversal in accretion-powered X-ray pulsars (Yi & Wheeler 1998), QSO evolution (Yi 1996; Choi, Yang, & Yi 1999), and the X-ray background (Yi & Boughn 1998). Constructing an accretion flow with an ADAF inner part and a thin disk outer part offers more merits and freedom than the simple thin disk, and therefore it can be successful in representing a wider range of astronomical systems.
Typical ADAFs are presented as dashed lines in Figures 1 & 3 (NY3). They are inefficient radiators: most of the gravitational energy of the accreted matter is carried with the flow into the hole without being released in the form of radiation. This very property of ADAF is similar to that of the hot, adiabatic, low-$`\dot{m}`$ spherical accretion flow (Shapiro 1973a,b) as the slim disk is to the low-temperature, high-$`\dot{m}`$ spherical flow (Flammang 1982, 1984). The self-similar functions of radius for various physical quantities are exactly those of the non-relativistic adiabatic spherical flow, and the isodensity surface of the ADAF is spheroidal rather than disk-like (NY3). In fact, optically thin two-temperature spherical accretion flow with a magnetic field produces a spectrum that roughly agrees with that of ADAF (Melia 1992, 1994).
#### 1.3.5 Unified description
Recent works on various types of axisymmetric accretion flow with a wide range of mass accretion rates make it possible to describe these seemingly distinct families of accretion flow in a unified way (Abramowicz et al. 1995; Chen et al. 1995). Each family of solutions is constructed on appropriate assumptions, and a specific name is attached to it, e.g., thin disk, slim disk, and ADAF. The traditional way of presenting the disk solutions in a plane of the mass accretion rate, $`\dot{m}`$, versus the surface mass density, $`\mathrm{\Sigma }`$, at a given radius is very revealing, in contrast to spherical accretion, where global properties like the mass accretion rate and the total luminosity are used. Different families of solutions appear as different curves in this plane (Fig. 4).
Since the solutions sometimes depend on the mass of the hole $`M`$ and the value of the viscous parameter $`\alpha `$, we take the specific case of $`M=10M_{\odot }`$ and $`\alpha =0.1`$ (Chen et al. 1995). Figure 4 shows two disconnected curves, representing the low-$`\mathrm{\Sigma }`$ flow (dotted curve on the left) and the high-$`\mathrm{\Sigma }`$ flow (solid curve on the right). The former is optically thin to absorption and the latter optically thick. This shows that there can be more than one type of accretion flow in a certain range of mass accretion rate. Also, the former type of accretion flow exists only for $`\dot{m}\lesssim 1`$, while the latter type is possible for the full range of $`\dot{m}`$.
The lower (GTD) and middle (RTD) branches of the S-shaped curve on the right side of Figure 4 correspond respectively to the classic gas pressure and radiation pressure dominated thin disk solutions (Shakura & Sunyaev 1973). Radiative cooling through the surface of the disk is dominant over the advective transport. The positive slope of the gas pressure dominated disk shows the flow being stable, and vice versa for the radiation dominated disk (Lightman & Eardley 1974; Shakura & Sunyaev 1976). The uppermost branch (SD) of the curve is the slim disk solution, in which the advection of radiation plus gas stabilizes the radiation pressure dominated flow. The vertical scale height of the slim disk can be comparable to the radius, especially when $`l=e\dot{m}\gtrsim 1`$. Even at this high $`\dot{m}`$, the temperature of the flow is less than $`10^8\mathrm{K}`$, and the spectrum of the emitted radiation is the superposition of modified blackbodies with different temperatures. Both the gas pressure dominated and the radiation pressure dominated thin disks are efficient radiators, $`e\simeq 0.1`$ (TD in Figs. 1 & 3), while the slim disk can have a smaller efficiency, $`e\lesssim 0.1`$ (SD in Figs. 1 & 3).
The right branch (SLE in Fig. 4) of the optically thin accretion disk is the two-temperature hot disk solution of Shapiro et al. (1976). The slope indicates that the disk is stable to viscous perturbations. However, it is thermally unstable on much shorter time scales (Pringle 1976; Piran 1978). The flow has different ion and electron temperatures, and the viscous dissipation energy is readily radiated away through the optically thin flow. Therefore, the radiation efficiency is always $`e\simeq 0.1`$. Recent work on the stability of SLE solutions suggests that advective cooling may affect the thermal instability (Wu 1997).
The left branch (ADAF in Fig. 4) represents the ADAF of NY1 and Abramowicz et al. (1995), in which the flow is optically thin and the viscous dissipation energy is mostly advected with the flow, thereby stabilizing the flow. The electron temperature of the flow can reach up to $`10^9\mathrm{K}`$, and the radiation spectrum is that of Comptonized bremsstrahlung and synchrotron. The radiation efficiency of this branch (NY3) is roughly $`e\propto \dot{m}`$ (see NY in Fig. 3) and the luminosity $`l\propto \dot{m}^2`$ (see NY in Fig. 1), as in the optically thin spherical accretion (NY3).
There also can be other interesting families of axisymmetric accretion if processes like pair production (Kusunose & Mineshige 1992) or various shocks (Chakrabarti 1996a) exist, which have properties analogous to the equivalent solutions in the spherical flow. However, preheating, which is important in the spherical flow, has never been incorporated in any of the disk solutions shown in Figure 4.
### 1.4 Two-dimensional nature of axisymmetric accretion flow
One important property of the slim or thick axisymmetric accretion flows, including ADAFs, is their two-dimensional nature. This has generally been ignored under the assumption that height-integrated equations are good enough (NY2). However, Park & Ostriker (1999, hereafter PO) emphasize that the radiation emitted must interact with the accreting matter due to the geometrical shape of ADAF. Unlike the thin disk, where the radiation easily escapes the flow through the vertical surface without any interactions, photons produced at smaller radii of the flow must escape through the tenuous outer part of the flow in much the same way as in spherical accretion. By studying the interaction of radiation with matter in two-dimensional ADAF, they find that the dynamics and thermal properties of the flow should be substantially changed in some parts of the parameter space (see also Esin 1997 for a one-dimensional treatment). Especially, radiative cooling and Compton preheating will change the polar region of the flow dramatically, and under the right conditions winds along the polar axis will be produced. But PO reached these conclusions based on a simple comparison of various time scales, leaving some interesting questions unanswered. For example, PO showed that when Compton heating is negligible, the electrons near the polar axis cannot be maintained at high temperatures, suggesting the possibility of the cooling of ions. But answers to the questions of what temperature the electrons will finally settle at, and whether the ions too will cool due to the increased coupling, can be obtained only after the energy equations for electrons and ions are properly solved with all heating, cooling, and energy exchange effects. Also, the existence of the proposed solutions maintained by Compton preheating, for mass accretion rates at which a high temperature solution is not possible, can only be checked by solving the energy equations.
So, in this work we explicitly integrate the ion and electron energy equations of ADAF with special considerations given to all relevant radiative cooling, the Compton preheating, and radiative transfer of Comptonizing radiation. As discussed above, we need to make fundamental assumptions or simplifications to deal with two dimensional nature of ADAF. Since we are interested in the two-dimensional thermal properties of ADAF, we cannot use the usual height-integrated formalism. So we here adopt the fundamental assumption that the density and the velocity of the ADAF are described by the self-similar solutions of NY2, regardless of ion and electron temperature profiles. Since the dynamics of the flow is controlled by the pressure, and pressure, in turn, is determined by energetics of electrons and ions, this approach is not completely self-consistent. In both the work that follows and in NY2 the pressure is dominated by ions, and since both the cooling and preheating that we treat affect primarily the electrons, the total pressure, which is the quantity most relevant to the dynamics, differs relatively little between our work and NY2. In any case, we believe this approach will nicely complement the extensive work on energetics of ADAF in one-dimensional formalism (NY3; Narayan, Kato, & Honma 1997; Chen, Abramowicz, & Lasota 1997; Nakamura et al. 1997; Gammie & Popham 1998; Popham & Gammie 1998; Chakrabarti 1996a).
## 2 Equations
### 2.1 Self-Similar ADAF
Viscous accretion flow can be heated by $`PdV`$ work, viscous dissipation, and the interaction with radiation. It cools mostly by emission of radiation. However, locally, since the flow is moving, advection acts as a loss of gas energy at a fixed point. The self-similar advection-dominated accretion flow solutions (NY1, NY2) refer to a family of solutions where viscous heating is mainly balanced by the advective cooling with negligible radiative cooling. The dynamical properties of the self-similar ADAF, e.g., density and velocity, are determined by the viscosity parameter $`\alpha `$ and the parameter $`\epsilon ^{\prime }\equiv f^{-1}(5/3-\gamma )/(\gamma -1)`$ determined by the adiabatic index $`\gamma `$ of the gas and the ratio $`f`$ of the advective cooling to the viscous heating. When the flow is advection dominated, $`f\simeq 1`$.
### 2.2 Dynamics
The big obstacle in investigating axisymmetric accretion flow is the two-dimensional nature of the problem and the global coupling between the radiation and the gas. The physical state of the gas at one position is linked with all the other parts of the flow through the transfer of radiation. Also, the dynamics of the flow has to be determined self-consistently. It is very difficult at present to solve such two-dimensional problems in a truly self-consistent way, as was done in spherical accretion (Shapiro 1973a,b; Park 1990a,b; Nobili et al. 1991). So we here adopt a fundamental simplifying assumption: the dynamics of the flow, i.e., the velocity and density of the gas, will be that of the self-similar solutions (NY2) regardless of the thermal structure of the flow. (We note, however, that recent time-dependent numerical simulations of two-dimensional axisymmetric accretion flows very near the black hole show diverse dynamical behavior; Igumenshchev & Abramowicz 1999; Stone et al. 1999.) One can alternatively consider this an exercise in determining the self-consistency of the NY2 solution. Thus we take the radial velocity to be
$$v_r(r,\vartheta )=r\mathrm{\Omega }_k(r)v(\vartheta ),$$
(4)
the poloidal velocity
$$v_\vartheta =0,$$
(5)
the rotational velocity
$$v_\varphi =r\mathrm{\Omega }_k(r)\mathrm{\Omega }(\vartheta )\mathrm{sin}\vartheta ,$$
(6)
where $`r\mathrm{\Omega }_k(r)=(GM/r)^{1/2}`$. The density is
$$n(r,\vartheta )/n_0=\dot{m}(r/r_s)^{-3/2}n(\vartheta ),$$
(7)
where $`n_0`$ is defined by $`n_0r_s\sigma _T=1/[2(1+Y_{He})]`$ and $`\sigma _T`$ is the Thomson cross section. The free-fall spherical flow corresponds to $`v(\vartheta )=\sqrt{2}`$, $`\mathrm{\Omega }(\vartheta )=0`$, and $`n(\vartheta )=1`$. As noted above, these assumptions would hold only if the pressure field of the final solution is similar to that of the self-similar one. We will discuss how self-consistent the final solutions are.
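For concreteness, a minimal numerical sketch of this assumed flow field follows (Eqs. \[4\]–\[7\], plus the line-of-sight scattering depth defined in Eq. \[10\] below). The constant angular profiles used here are placeholders standing in for the tabulated NY2 self-similar functions, and the black hole mass is an arbitrary illustrative choice:

```python
# Sketch of the assumed flow field, Eqs. (4)-(7), plus the scattering depth
# of Eq. (10).  The constant angular profiles below are placeholders for the
# NY2 self-similar functions, NOT the actual profiles.

import math

G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33   # cgs
M = 10 * M_sun                               # illustrative black hole mass
r_s = 2 * G * M / c**2                       # Schwarzschild radius

def flow_field(r, theta, mdot,
               v_ang=lambda th: 0.5,         # placeholder for v(theta)
               Om_ang=lambda th: 0.3,        # placeholder for Omega(theta)
               n_ang=lambda th: 20.0):       # placeholder for n(theta)
    Om_k = math.sqrt(G * M / r**3)                       # Keplerian Omega_k(r)
    v_r = r * Om_k * v_ang(theta)                        # Eq. (4); infall speed
    v_phi = r * Om_k * Om_ang(theta) * math.sin(theta)   # Eq. (6)
    n_over_n0 = mdot * (r / r_s) ** -1.5 * n_ang(theta)  # Eq. (7)
    tau_es = mdot * (r / r_s) ** -0.5 * n_ang(theta)     # Eq. (10)
    return v_r, v_phi, n_over_n0, tau_es

print(flow_field(r=100 * r_s, theta=math.pi / 2, mdot=0.1))
```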
### 2.3 Energy equations
To get the relevant energy equations in axisymmetric accretion flow, we extend the relativistic energy equations for the spherical flow under radiative heating and cooling to the axisymmetric case (Park 1990a,b). Although most ADAF calculations are done in a non-relativistic framework, the difference in temperature between the relativistic and non-relativistic treatments is so large that we use the correct relativistic energy equations here. For some parameters, the output luminosity can be altered by more than a factor of 10.
The ion and electron energy equations are
$$\frac{d\epsilon _i}{dr}-\frac{\epsilon _i+P_i}{n_i}\frac{dn_i}{dr}=\frac{-\mathrm{\Gamma }_{vis}+Q_{ie}}{v_r},$$
(8a)
$$\frac{d\epsilon _e}{dr}-\frac{\epsilon _e+P_e}{n_e}\frac{dn_e}{dr}=\frac{\mathrm{\Lambda }_e-\mathrm{\Gamma }_e-Q_{ie}}{v_r},$$
(8b)
where $`\mathrm{\Gamma }_{vis}`$ is the viscous heating rate for ions, $`\mathrm{\Lambda }_e`$ and $`\mathrm{\Gamma }_e`$ the cooling and heating rates for electrons, $`\epsilon _i`$ and $`\epsilon _e`$ the internal energies, all per unit volume, $`P_i`$ and $`P_e`$ the pressures, $`n_i`$ and $`n_e`$ the number densities, $`T_i`$ and $`T_e`$ the temperatures, of ions and electrons, respectively. This form of energy equations inherently accounts for the advective cooling of both ions and electrons (see Nakamura et al. for discussion on advection of electron energy). The energy transfer through Coulomb coupling is described by the rate $`Q_{ie}`$ where
$$Q_{ie}=\frac{3}{2}\frac{m_e}{m_i}\sum _iZ_i^2n_en_i\sigma _Tc(kT_i-kT_e)\frac{1+\sqrt{\pi /2}(\theta _e+\theta _i)^{1/2}}{\sqrt{\pi /2}(\theta _e+\theta _i)^{3/2}}\mathrm{ln}\mathrm{\Lambda },$$
(9)
$`Z_i`$ is the ion charge, $`m_i`$ and $`m_e`$ are the ion and electron masses, respectively, $`\theta _i\equiv kT_i/m_ic^2`$, $`\theta _e\equiv kT_e/m_ec^2`$, $`\sigma _T`$ is the Thomson cross section, and $`\mathrm{ln}\mathrm{\Lambda }=20`$ is the Coulomb logarithm. There exists a better expression for $`Q_{ie}`$ at transrelativistic temperatures (Stepney 1983; Stepney & Guilbert 1983), but it is quite costly to evaluate. The result is not sensitive to the exact functional form of $`Q_{ie}`$ in the transition regime.
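Equation (9) is simple to evaluate directly. The following Python sketch does so for a pure hydrogen plasma (a single ion species with $`Z_i=1`$); the plasma parameters in the example call are arbitrary illustrative values:

```python
# Direct evaluation (sketch) of the Coulomb energy-exchange rate, Eq. (9),
# for a hydrogen plasma (single ion species, Z_i = 1).  cgs units.

import math

k_B, m_e, m_p = 1.381e-16, 9.109e-28, 1.673e-24
c, sigma_T, lnLambda = 2.998e10, 6.652e-25, 20.0

def Q_ie(n_e, n_i, T_i, T_e):
    th_e = k_B * T_e / (m_e * c**2)
    th_i = k_B * T_i / (m_p * c**2)
    s = th_e + th_i
    pref = 1.5 * (m_e / m_p) * n_e * n_i * sigma_T * c * k_B * (T_i - T_e)
    return pref * (1.0 + math.sqrt(math.pi / 2.0) * math.sqrt(s)) \
                / (math.sqrt(math.pi / 2.0) * s**1.5) * lnLambda

# e.g. a two-temperature plasma with hot ions and cooler electrons:
print(f"Q_ie = {Q_ie(1e10, 1e10, 1e11, 5e9):.3e} erg cm^-3 s^-1")
```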
Since electrons usually reach relativistic temperatures, $`\epsilon _e=(3/2)x_e^{\prime }n_ekT_e`$, where $`x_e^{\prime }=(2/3)[5\eta \theta _e^{-1}-(\eta ^2-1)\theta _e^{-2}-1]`$, $`\eta =K_3(\theta _e^{-1})/K_2(\theta _e^{-1})`$, and $`K_3`$, $`K_2`$ are the modified Bessel functions (Shapiro 1973a; Park 1990b). The factor $`x_e^{\prime }`$ incorporates the change of the equation of state at relativistic temperatures and has an asymptotic value of 1 when $`kT_e\ll m_ec^2`$ and 2 when $`kT_e\gg m_ec^2`$. Also, the ion energy density is $`\epsilon _i=(3/2)x_i^{\prime }n_ikT_i`$, where $`x_i^{\prime }`$ is similarly defined. The change of the equation of state at high temperatures can produce a significant difference compared to a nonrelativistic treatment. Even neglecting the derivative of $`x_e^{\prime }`$ can introduce an error of a factor of 2 (Park 1990a). Direct calculation of $`\eta `$ is numerically demanding, so we use a convenient fitting formula instead (Service 1986).
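As a cross-check of the limits quoted above, the factor $`x_e^{\prime }`$ can be evaluated from the Bessel functions directly; the sketch below (ours, using scipy rather than the faster Service 1986 fit) recovers the two asymptotes:

```python
# Cross-check (sketch) of the factor x'_e defined above, evaluated from
# K_2 and K_3 directly.  Requires scipy.

from scipy.special import kv

def x_prime(theta_e):
    """x'_e = (2/3) [5 eta / theta_e - (eta^2 - 1) / theta_e^2 - 1]."""
    eta = kv(3, 1.0 / theta_e) / kv(2, 1.0 / theta_e)
    return (2.0 / 3.0) * (5.0 * eta / theta_e
                          - (eta**2 - 1.0) / theta_e**2 - 1.0)

for th in (0.01, 1.0, 100.0):
    print(f"theta_e = {th:6.2f}: x'_e = {x_prime(th):.3f}")
# approaches 1 for kT_e << m_e c^2 and 2 for kT_e >> m_e c^2, as stated
```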
### 2.4 Radiative transfer
In a freely falling spherical accretion flow, the electron scattering optical depth from radius $`r`$ to infinity is simply $`\tau _{es}(r)=\dot{m}(r/r_s)^{-1/2}`$. In ADAF, the flow is moving more slowly than free-fall, and therefore its density and optical depth are higher for the same $`\dot{m}`$. We define the optical depth along a given polar angle $`\vartheta `$,
$$\tau _{es}(r,\vartheta )=\dot{m}(r/r_s)^{-1/2}n(\vartheta ),$$
(10)
which has the highest value along the equator, $`\vartheta =\pi /2`$. For $`\alpha =0.1`$ and $`\epsilon ^{\prime }=0.1`$, $`1.0`$, and $`10`$ ADAF, $`n(\pi /2)\simeq 30`$, $`50`$, and $`400`$, respectively (NY2). However, $`n(\vartheta )`$ along the pole is $`\simeq 30`$, $`20`$, and $`10`$ for $`\epsilon ^{\prime }=0.1`$, $`1.0`$, and $`10`$. Hence the flow remains optically thin along the pole for $`\dot{m}\lesssim 0.1`$.
It is not the intention of this work to rigorously solve the radiative transfer for the nonspherical flow. We add the emission from all $`\vartheta `$ at a given $`r`$ to get the averaged luminosity profile $`L_X(r)`$ and the radiation temperature $`T_X(r)`$, which is defined such that $`4kT_X`$ equals the energy-weighted mean photon energy (Levich & Syunyaev 1971; Park 1990a), and use these as an approximation to the real radiation field. This approximation should be a good one for small $`\epsilon ^{\prime }`$ flow, where the density profile is almost spherical, or for low $`\dot{m}`$ flow, where the flow is optically thin. Moreover, the preheating radiation field at large radius should be well described by streaming radiation, and a spherical radiation field should be a good enough approximation. A more advanced treatment would expand the radiation field into spherical harmonics, with the next (herein neglected) term the quadrupole component (Loeb & Laor 1992).
Under this simplification, $`L_X(r)`$ is just the sum of all frequency-integrated emission, i.e., radiative cooling rate, inside radius $`r`$,
$$L_X(r)=2\pi \int _{r_{in}}^{r_{out}}\int _0^\pi \mathrm{\Lambda }_{all}(r,\vartheta )\mathrm{sin}\vartheta d\vartheta r^2dr.$$
(11)
A far more difficult task is to calculate the Comptonization temperature $`T_X(r)`$ without really solving the frequency-dependent radiative transfer equation. Since we treat every emission as being Comptonized in situ, the main change of the radiation temperature in radius is due to the addition of newly produced photons to already existing photons. If the spectral energy density of existing photons is $`E_X`$ and its radiation temperature is $`T_X`$, incremental addition of newly produced and locally Comptonized photons of energy density $`dE_S`$ and radiation temperature $`T_S`$ will change the radiation temperature of the total to
$$T_X^{\prime }=\frac{E_XT_X+dE_ST_S}{E_X+dE_S}=T_X+dT_X,$$
(12)
where $`dT_X`$ is the change in $`T_X`$ within $`dr`$. Solving for $`dT_X`$ gives (Park 1990a)
$$\frac{d\mathrm{ln}T_X(r)}{d\mathrm{ln}r}=\frac{T_S(r)-T_X(r)}{T_X(r)}\frac{\mathrm{\Lambda }_S(r)r}{F(r)},$$
(13)
where $`\mathrm{\Lambda }_S(r)`$ is the $`\vartheta `$-average of all emissions and $`F(r)=L(r)/4\pi r^2`$. We used the approximate relation $`dE_S/E_X=\mathrm{\Lambda }_Sdr/F`$. The $`\vartheta `$-averaged radiation temperature, $`T_S(r)`$, is calculated as
$$T_S(r)=\frac{_0^\pi \mathrm{\Lambda }_S(r,\vartheta )T_S(r,\vartheta )\mathrm{sin}\vartheta d\vartheta }{_0^\pi \mathrm{\Lambda }_S(r,\vartheta )\mathrm{sin}\vartheta d\vartheta }.$$
(14)
In the case where both bremsstrahlung and synchrotron emission contribute, $`\mathrm{\Lambda }_S(r,\vartheta )T_S(r,\vartheta )`$ is replaced by $`\mathrm{\Lambda }_{Cbr}T_S^{Cbr}+\mathrm{\Lambda }_{Csyn}T_S^{Csyn}`$.
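Equations (12)-(13) amount to a simple outward march in which newly Comptonized photons are mixed into the existing radiation field. A discretized sketch follows; all of the radial profiles used in the example call are placeholders, not the solutions of this paper:

```python
# Discretized sketch of the radiation-temperature update of Eqs. (12)-(13):
# newly Comptonized photons of temperature T_S are mixed into the existing
# radiation field.  The emissivity and flux profiles below are placeholders.

import numpy as np

def evolve_T_X(r, T_S, Lambda_S, F, T_X_inner):
    """March T_X(r) outward on a logarithmic radial grid r[0] < r[1] < ..."""
    T_X = np.empty_like(r)
    T_X[0] = T_X_inner
    for i in range(1, len(r)):
        dlnr = np.log(r[i] / r[i - 1])
        # d ln T_X / d ln r = (T_S - T_X)/T_X * Lambda_S r / F   (Eq. 13)
        dlnT = (T_S[i - 1] - T_X[i - 1]) / T_X[i - 1] \
               * Lambda_S[i - 1] * r[i - 1] / F[i - 1] * dlnr
        T_X[i] = T_X[i - 1] * np.exp(dlnT)
    return T_X

r = np.logspace(0, 5, 100)        # radius in units of r_s (placeholder grid)
T_S = np.full_like(r, 1e9)        # local Comptonized temperature (assumed)
Lambda_S = 1e-3 / r**2            # placeholder emissivity profile
F = np.ones_like(r)               # placeholder flux profile
print(evolve_T_X(r, T_S, Lambda_S, F, T_X_inner=5e8)[-1])
```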
### 2.5 Cooling and heating
We consider atomic cooling, bremsstrahlung, and synchrotron radiation and their inverse Comptonization as the main radiative cooling processes on electrons, and viscous dissipation and Compton scattering as the main heating processes on ions and electrons, respectively. Special care has been taken for absorption and inverse Comptonization of soft bremsstrahlung and synchrotron photons. A detailed description is given in the Appendix.
## 3 Calculation
### 3.1 Method of Calculation
When the preheating radiation field is negligible, finding the solution is straightforward. One just integrates the energy equations (8a,b) with zero preheating luminosity. However, when the flow is affected by the preheating radiation, we have to find solutions that satisfy the energy equations (8a,b) but with a not-yet-determined radiation field.
We use an iteration method to build the solution (Park 1990a,b). First, initial estimates $`L^0(r)=L_0`$ and $`T_X^0(r)=T_{X,0}`$ are guessed for given $`M`$, $`\dot{m}`$, $`\alpha `$, $`ϵ^{}`$, and the ratio of the gas pressure to the total (gas plus magnetic) pressure $`\beta `$. The gas energy equations (8a,b) are integrated under this $`L^0(r)`$ and $`T_X^0(r)`$ for selected values of $`\vartheta `$. Now we have $`T_i(r,\vartheta )`$ and $`T_e(r,\vartheta )`$ at each $`(r,\vartheta )`$. Normally $`r`$ is divided into 100 logarithmic intervals and $`\vartheta `$ into 10 or 20 intervals from 0 to $`\pi /2`$. The angle-averaged luminosity $`L^1(r)`$ and radiation temperature $`T_X^1(r)`$ are calculated from this temperature profile (eqs. 11 and 13). In general, $`L_0(r_{out})\ne L^1(r_{out})`$ and $`T_{X,0}(r_{out})\ne T_X^1(r_{out})`$, and the solution is not self-consistent. Only for a specific combination of initial $`L_0`$ and $`T_{X,0}`$ will the model be self-consistent, in the sense that the iterated solution has $`L^1(r_{out})=L^0(r_{out})`$ and $`T_X^1(r_{out})=T_X^0(r_{out})`$. We search the two-dimensional plane of $`L_0`$ and $`T_{X,0}`$ for solutions that satisfy this self-consistency condition to within 1%.
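Schematically, this self-consistency search can be phrased as a two-dimensional root find. In the sketch below, `iterate_once` is a hypothetical stand-in for one full pass of the procedure just described (integrate the gas energy equations under an assumed $`L^0`$ and $`T_X^0`$ and return the resulting $`L^1(r_{out})`$ and $`T_X^1(r_{out})`$); a library root finder replaces the plane search for brevity.

```python
# A schematic version of the two-parameter self-consistency search,
# assuming iterate_once(L0, TX0) -> (L1, TX1) encapsulates one iteration.
import numpy as np
from scipy.optimize import fsolve

def self_consistent_solution(iterate_once, L0_guess, TX0_guess):
    def mismatch(p):
        L0, TX0 = 10.0**p[0], 10.0**p[1]
        L1, TX1 = iterate_once(L0, TX0)
        # fractional mismatch between assumed and iterated boundary values
        return [L1 / L0 - 1.0, TX1 / TX0 - 1.0]

    p = fsolve(mismatch, [np.log10(L0_guess), np.log10(TX0_guess)], xtol=1e-2)
    return 10.0**p[0], 10.0**p[1]
```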
### 3.2 Boundary Conditions
There are two natural boundaries in the accretion problem: an inner one and an outer one. At the outer boundary, $`r_{out}`$, the state of the gas has to be specified when the gas energy equations are integrated, and at the inner boundary $`r_{in}`$, the state of the radiation field, $`L(r_{in})`$ and $`T_X(r_{in})`$, needs to be specified for equations (11) and (13).
The velocity and density of the gas are fixed by equations (4)-(7) for given $`\dot{m}`$ and $`M`$, so the only meaningful boundary condition is the temperature of the electrons and ions. Due to the strong atomic cooling around $`10^4\mathrm{K}`$, the gas at large radius in spherical flows (except for very low $`\dot{m}`$) is most likely to be at the usual equilibrium temperature $`T_e=T_i=T_{eq}\approx 10^4\mathrm{K}`$. However, self-similar ADAF solutions have $`T_e=T_i\approx T_{vir}`$ at all radii, where $`(5/2)kT_{vir}\equiv GMm_p/r`$. So we apply both boundary conditions: $`T(r_{out})=T_{vir}(r_{out})`$ or $`T(r_{out})=T_{eq}(r_{out})`$. The outer boundary radius in most calculations is taken to be $`10^5r_s`$, which corresponds to $`T_{vir}\approx 10^7\mathrm{K}`$.
We adopt $`r_{in}=3r_s`$ since the radius of the marginally stable orbit around a Schwarzschild black hole is $`3r_s`$. The luminosity at $`r_{in}`$ would be the sum of the gravitationally redshifted emission between $`r_{in}`$ and $`r_s`$, which we approximate as
$`L(r_{in})`$ $`\approx `$ $`{\displaystyle \int \mathrm{sin}\vartheta \,d\vartheta \int _{r_s}^{r_{in}}\mathrm{\Lambda }r^3\,d\mathrm{ln}r}`$ (15)
$`\approx `$ $`\frac{1}{2}\mathrm{\Lambda }(r_{in})r_{in}^3\mathrm{ln}(r_{in}/r_s),`$
and the radiation temperature as
$$T_X(r_{in})=\frac{\mathrm{\Lambda }_{Cbr}T_S^{Cbr}+\mathrm{\Lambda }_{Csyn}T_S^{Csyn}}{\mathrm{\Lambda }_{Cbr}+\mathrm{\Lambda }_{Csyn}}|_{r_{in}}.$$
(16)
## 4 Results
The dynamical structure of the flow, i.e., its density and velocity, is fixed by the parameter $`ϵ^{}`$, which is expected to be between 0 and 1. Esin (1997) argues that $`\gamma =(83\beta )/(63\beta )`$, and $`\gamma =13/9`$ for $`\beta =1/2`$. This would make $`ϵ^{}=1/2`$ for $`f=1`$. But Quataert & Narayan (1999) argue that $`\gamma `$ can be close to 5/3. Also, in the relativistic regime $`\gamma `$ approaches 4/3, and in many of our solutions $`f`$ can differ from 1. So we choose $`ϵ^{}=1`$ as our representative value. The flow structure for $`ϵ^{}=0.1`$ and that for $`ϵ^{}=1`$ are not significantly different, whereas that for $`ϵ^{}=10`$ is rather extreme (NY2). We also choose $`Y_{He}`$ to be 0.1.
### 4.1 Un-preheated flow
First, we searched for solutions that have negligible preheating. This is simply achieved by starting the iteration with a very small preheating luminosity. We first choose $`T(r_{out})=T_{vir}`$ as the outer boundary condition, which would favor high temperature solutions.
For the flow onto $`M=10M_{\mathrm{\odot }}`$ or $`M=10^8M_{\mathrm{\odot }}`$ black holes, we find solutions with near virial ion temperature everywhere when $`\dot{m}\lesssim 10^{4}`$, which validates the self-similar ADAF. However, we also find that ions and electrons cool down to $`10^4\mathrm{K}`$ at all radii for $`\dot{m}\gtrsim 0.04`$ regardless of the mass of the black hole, possibly becoming either a cool thin disk (dotted line TD in Figs. 1 & 3) or a cool spherical flow (triangles in Figs. 1-3). This non-existence of hot solutions agrees rather well with the one-dimensional calculation (NY3) despite the different approach: at $`r=10^5r_s`$, the critical mass accretion rate above which the ADAF of NY3 does not exist is $`\dot{m}\approx 2\times 10^{2}`$ (in our units) for $`f=0.3`$ and $`m=10`$. For intermediate mass accretion rates $`10^{4}\lesssim \dot{m}\lesssim 0.04`$, both ions and electrons near the polar axis have $`T\approx 10^4\mathrm{K}`$ while the gas near the equatorial plane is maintained near the virial temperature. As the mass accretion rate decreases, the low temperature region around the polar axis shrinks: for $`\dot{m}=10^{3}`$, only the conical region of the flow within $`6\mathrm{°}`$ of the polar axis is cooled, whereas for $`\dot{m}=10^{2}`$ that within $`17\mathrm{°}`$ is cooled (circles in Fig. 5). This confirms that electrons around the polar axis will cool down, and that ions follow the electrons due to the increased Coulomb coupling between them (PO). We could not find any case where ions are kept at high temperature while electrons are cooled down; once electrons are cool, the Coulomb coupling becomes too strong to allow a two-temperature plasma. These effects will be more severe for lower $`\alpha `$ flows because they have lower radial infall velocities.
Figure 6 shows typical temperature profiles of ions and electrons along different $`\vartheta `$ for the flow with an equipartition magnetic field. The solid line shows the ion and electron temperature profiles for $`\vartheta =0`$ (the polar axis) in an ADAF with $`\dot{m}=10^{2}`$. Up to $`\vartheta =3\pi /32`$ (long-dashed line), the flow stays near $`10^4\mathrm{K}`$ (dotted line for $`\vartheta =\pi /32`$ and short-dashed line for $`\vartheta =\pi /16`$). Because of the very small viscous and adiabatic heating near the polar axis, electrons cannot be maintained at a high temperature, and the stronger coupling between ions and electrons at lower electron temperature forces ions and electrons to the same temperature at all radii for these $`\vartheta `$'s. Only for large enough $`\vartheta `$ (dot-dashed lines for $`\vartheta =\pi /8`$; upper one for ions and lower one for electrons) do ions and electrons maintain their high temperatures, owing to the higher viscous heating rate and shorter flow time. As a result we expect the conical region with $`\vartheta \lesssim 3\pi /32`$ around the polar axis to collapse while the flow along the equatorial plane accretes with significant radial velocity at high temperature. It is likely that the collapse would lead to an empty funnel along the polar axis. The temperature change across the surface of the funnel is going to be sudden, and energy transfer by conduction may lead to further cooling of the hot part of the flow, but the existence and generic shape of the funnel will remain the same. Of course this funnel may be filled by a tenuous outgoing wind, but consideration of this is beyond the scope of the present paper. However, the temperature profile of the flow near the equatorial plane agrees with the result from direct integration of the height-averaged equations with dynamics (Nakamura et al. 1997; Manmoto et al. 1997). Also, the flow remains at high temperatures for all values of the mass accretion rate when the atomic cooling peak is intentionally removed, so it is clear that the effect we find (a collapse of the polar gas) is due to atomic cooling. In sum, it appears that for $`\dot{m}>10^{4}`$ the ADAF solutions collapse at the pole, with the funnel possibly filled by a tenuous hot outgoing wind.
We now turn to the cases where the temperature at the outer boundary is equal to the equilibrium temperature $`10^4\mathrm{K}`$ (this condition would apply when a quasi-spherical flow at large radii is matched onto the inner ADAF). Because of the strong atomic cooling peak near $`10^4\mathrm{K}`$, electrons stay at this temperature until very strong heating takes over. For the same parameters as in the $`T(r_{out})=T_{vir}`$ cases, we find that the flow is kept at $`T_i=T_e\approx 10^4\mathrm{K}`$ for all $`\vartheta `$ when the mass accretion rate $`\dot{m}\gtrsim 2\times 10^{4}`$ (triangles in Fig. 5). The result differs significantly from the prior $`T(r_{out})=T_{vir}`$ cases. This implies that only very low mass accretion rate flows will reach high temperature when the outer boundary is at $`10^4\mathrm{K}`$, and, therefore, that the cool thin disk would not switch to a hot ADAF automatically at intermediate mass accretion rates unless there is some forcing process.
The luminosity and the radiation temperature (both at the outer boundary) of the lower flow rate ($`\dot{m}\lesssim 2\times 10^{4}`$) hot solutions for this boundary condition are shown in Figures 7 & 8, respectively, as diamonds. The dimensionless luminosity $`l`$ (diamonds in Fig. 7) for a given dimensionless mass accretion rate $`\dot{m}`$ is about 100 times larger than that for spherical accretion (Shapiro 1973b) because the density of the ADAF is roughly 20-50 times higher (for $`\alpha =0.1`$ and $`ϵ^{}=1`$, depending on $`\vartheta `$) than that of the spherical flow with the same $`\dot{m}`$. The radiation temperature $`T_X`$ (diamonds in Fig. 8) is very low due to the copious soft synchrotron photons emitted (Shapiro 1973b). However, it increases as $`\dot{m}`$ approaches $`10^{3.7}`$ because lower temperature electrons cool mainly by bremsstrahlung. The sudden decrease of $`l`$ around $`\dot{m}\approx 2\times 10^{4}`$ in Figure 7 reflects the transition from the hot ADAF to cold solutions discussed above (triangles in Fig. 5). This transition does not depend on the strength of the magnetic field since it is mainly determined by the atomic cooling peak near $`T\approx 10^4\mathrm{K}`$. For comparison, the detailed ADAF models for Sgr A ($`M=2.5\times 10^6M_{\mathrm{\odot }}`$; Narayan et al. 1998) and NGC 4258 ($`M=3.6\times 10^7M_{\mathrm{\odot }}`$; Lasota et al. 1996) are shown as separate symbols in Figures 7 and 8, respectively.
The luminosity and the radiation temperature of the flow without any magnetic field, which therefore cools only by Comptonized bremsstrahlung, are shown as stars in Figures 7 & 8. Due to the absence of soft synchrotron photons, the luminosity is much lower, while the radiation temperature is much higher, than for the flow with a magnetic field. An approximate effective temperature of the disk surface for a gas pressure dominated thin disk is shown as the dotted line TD in Figure 8 for comparison.
When the conditions of self-consistency (with regard to the thermodynamics of the emitted radiation) are applied to the solutions with $`T(r_{out})=T_{eq}(r_{out})\approx 10^4\mathrm{K}`$, we see from Figure 8 that both the low and high temperature branches of the ADAF solutions are possible only for $`\dot{m}\lesssim 10^{3.7}`$.
### 4.2 Preheated flow
PO suggested the possibility of preheated ADAF solutions, self-consistently maintained by Compton preheating as in spherical accretion flows. These solutions are found by starting from ad hoc initial luminosity and radiation temperature high enough to affect the flow significantly (Park 1990a,b). The two-dimensional plane of luminosity and radiation temperature, ($`l`$, $`T_X`$), is then searched for self-consistent values.
As in the un-preheated flow, the outer boundary condition is important because the physical state of the flow inside depends on the entropy of the flow at the outer boundary. We have two possible choices: $`T(r_{out})=T_{eq}(r_{out})\approx 10^4\mathrm{K}`$ or $`T(r_{out})=T_{vir}(r_{out})`$. In most cases, the latter condition is awkward because $`T_{vir}(r_{out})`$ is not the equilibrium temperature at $`r_{out}`$: the flow immediately adjusts to the equilibrium temperature $`T_{eq}\approx 10^4\mathrm{K}`$ and stays there until much stronger heating brings the temperature up. The former choice, which is more natural in usual spherical accretion, is unfortunately not consistent with the self-similar ADAF, in which the sum of the ion pressure and the electron pressure is assumed to have the virial value. However, it makes joining the ADAF solutions to an outer thin disk solution much more reasonable, so we adopt the former boundary condition.
Figures 7 & 8 show the preheated solutions found in the ($`\dot{m}`$, $`l`$) and ($`\dot{m}`$, $`T_X`$) planes; these may be called preheated ADAFs (PADAFs). The circles represent the solutions with an exact equipartition magnetic field. We find solutions sustained by preheating in the range $`6.5\times 10^{3}\lesssim \dot{m}\lesssim 0.15`$, which includes the range in which no high temperature flow is possible when preheating is not considered ($`\dot{m}>0.04`$ or $`\dot{m}>2\times 10^{4}`$, for the two outer boundary conditions respectively). The luminosity of the preheated flow generally increases with the mass accretion rate (circles in Fig. 7). However, for $`\dot{m}\lesssim 0.015`$ the luminosity increases as the mass accretion rate decreases, because the electron temperature rises above $`10^9\mathrm{K}`$ in this low $`\dot{m}`$ flow and synchrotron emission makes an increasing contribution to the total luminosity. The majority of photons are then low energy synchrotron photons, and the radiation temperature of these solutions decreases as $`\dot{m}`$ decreases and $`l`$ increases. This can be seen quite clearly in Figure 8: the radiation temperature decreases quite rapidly as $`\dot{m}`$ becomes smaller than $`0.015`$. Compared to the (un-preheated) ADAF, the PADAF has much higher radiation temperatures because most soft synchrotron photons are absorbed and inverse Comptonization is stronger. Solutions of NY3 are also shown (dotted line) in Figure 7 for comparison.
The temperature profile of a typical PADAF solution is shown in Figure 9. The parameters of the flow are $`\dot{m}=0.01`$, $`M=10^8M_{\mathrm{\odot }}`$, $`l=4.1\times 10^{6}`$, and $`T_X=1.2\times 10^9\mathrm{K}`$. At the outer radii, the flow is kept near $`10^4\mathrm{K}`$. When the Compton heating becomes significant, the temperature suddenly jumps above $`10^6\mathrm{K}`$, first at the pole (solid line). This arises from the classic phase change due to the atomic cooling peak. Away from the pole, the density is higher, and the jump occurs at successively smaller radii; due to the faster infall velocity, the transition is also smoother. Once the temperature is above $`10^8\mathrm{K}`$, the Coulomb coupling gets weaker, and the electron temperature deviates from that of the ions. The flow is mostly heated by Compton preheating by hot photons produced at smaller radii, with added help from adiabatic compression and viscous heating. The electron temperature increases with a varying slope around $`T_e\approx 10^9\mathrm{K}`$ owing to relativistic effects and then flattens due to highly efficient Comptonized relativistic bremsstrahlung and Comptonized synchrotron radiation in this temperature range.
Now we look at a higher $`\dot{m}`$, higher $`l`$ solution (Fig. 10 for $`\dot{m}=0.1`$, $`M=10M_{\mathrm{\odot }}`$). Due to the higher density, ions and electrons are well coupled. The flow along the polar axis (solid line) is heated first, at much larger radii than in the lower $`\dot{m}`$ flow, and flows at larger $`\vartheta `$ are heated successively (dotted line for $`\vartheta =\pi /8`$, short-dashed line for $`\vartheta =\pi /4`$, long-dashed line for $`\vartheta =3\pi /8`$, and dot-dashed line for $`\vartheta =\pi /2`$). Note that within the radius range $`10^3<r/r_s<10^6`$, the flow temperature of the polar region is much greater than the virial temperature (thin solid line) while that of the equatorial plane (dot-dashed line) is close to the virial value. This will produce the preheating instability or a wind preferentially along the pole, as suggested in PO, because the longer infall timescale along the pole makes that part of the flow more vulnerable to the instability while the shorter infall timescale near the equatorial plane stabilizes the flow. We may call this type of solution an ADAF with polar wind (WADAF). So preheating may be yet another process producing outflows in ADAFs (Xu & Chen 1997; Blandford & Begelman 1999; Das 1999; Turolla & Dullemond 2000; see also Menou et al. for the polar wind in neutron star accretion). Meanwhile, if the mass accretion rate, and accordingly the luminosity, is much higher, all parts of the flow will be heated well above the virial value. Since in spherical flows overheating outside the sonic point produces long term modulation of the accretion rate and overheating inside the sonic point produces short term flaring (Cowie et al. 1978), these higher $`l`$ flows are likely to develop either relaxational or flaring variability.
The range of $`\dot{m}`$ over which equatorial accretion and the preheated polar outflow exist simultaneously is rather limited, around $`\dot{m}\sim 0.15`$. This range will be wider for larger $`ϵ^{}`$ and narrower for smaller $`ϵ^{}`$, because a larger $`ϵ^{}`$ flow has a higher density contrast between the pole and the equator.
When there is no magnetic field, Comptonized bremsstrahlung is the only cooling process, and the properties of these flows show more continuous behavior (crosses in Figs. 7 & 8). They also exist in a wider range, $`10^{3.5}\lesssim \dot{m}\lesssim 10^{1}`$. The slope of $`l`$ versus $`\dot{m}`$ changes around $`\dot{m}\approx 10^{2}`$ because the electrons reach near $`10^9\mathrm{K}`$, and relativistic radiative processes become important.
## 5 Summary and Discussion
We have studied the two-dimensional thermal properties of the ADAF by integrating the energy equations under the assumption that the density and velocity of the flow are described by the self-similar solutions of Narayan & Yi (1995a). Special consideration is given to the effects of preheating by Comptonizing hot photons produced at smaller radii and the radiative transfer of these photons, as well as to atomic cooling, Comptonized relativistic bremsstrahlung, and Comptonized synchrotron radiation. We find that Compton preheating is important in general.
1. When preheating is not considered, high temperature flows do not exist when the mass accretion rate is higher than $`10^{1.5}L_E/c^2`$, and even below this mass accretion rate a roughly conical region around the pole cannot sustain high temperature ions and electrons. Funnels should exist in these flows unless $`\dot{m}\equiv \dot{M}c^2/L_E<10^{4}`$. If the flow starts at large radii with the normal equilibrium temperature $`10^4\mathrm{K}`$, as in spherical flows, the critical mass accretion rate becomes $`\dot{m}\approx 10^{3.7}`$, above which no self-consistent hot solutions exist.
2. Even above this critical mass accretion rate, the flow can be self-consistently maintained at high temperature if Compton preheating is considered. These solutions constitute a new branch of solutions, as in the case of spherical accretion flows; these preheated high temperature ADAFs can exist above the critical mass accretion rate in addition to the usual cold thin disk. Therefore, Compton preheating could be the mechanism that triggers the phase change from the thin disk to the PADAF, the preheated ADAF.
3. We also find solutions where the flow near the equatorial plane is accreting normally while that near the polar axis is overheated by Compton preheating, possibly becoming a polar wind. Such a flow may be called a WADAF, an ADAF with a polar wind.
The formalism we have used in this work is not completely satisfactory in the sense that the dynamics of the flow is prescribed to be ADAF-like. In a real flow, changes in the thermal properties will induce changes in the dynamical properties. Hence, although solving for the two-dimensional flow structure with correct radiative transfer is a daunting task, understanding the true nature of ADAF-like solutions seems to demand it. We feel confident, however, that the characteristic solutions for all accretion rates of interest ($`\dot{m}\gtrsim 10^{3}`$) will be of either PADAF or WADAF type when the outer boundary approaches the equilibrium ($`T\approx 10^4\mathrm{K}`$) temperature.
We gratefully acknowledge useful conversations with E. Quataert, K. Menou, R. Narayan, I. Yi, R. Blandford, X. Chen, and B. Paczyński. This work was supported by Korea Research Foundation grant KRF-1999-015-DI0113 and NSF grant 94-24416.
## Appendix A Cooling and heating processes
### A.1 Atomic cooling and bremsstrahlung
Cooling due to atomic processes and bremsstrahlung is expressed as a composite formula (Svensson 1982; Stepney & Guilbert 1983; Nobili et al. 1991)
$$\mathrm{\Lambda }_{Cbr+atomic}=\sigma _Tc\alpha _fm_ec^2n_i^2\left[\left\{\lambda _{Cbr}(T_e)+6.0\times 10^{22}\theta _e^{1/2}\right\}^{1}+\left(\frac{\theta _e}{4.82\times 10^{6}}\right)^{12}\right]^{1},$$
(A1)
where $`\sigma _T`$ is the Thomson cross section, $`\alpha _f`$ the fine-structure constant, and $`n_i`$ the number density of ions. In the high temperature part of the flow, where most of the radiation is produced, the bremsstrahlung cooling rate is not much affected by free-free absorption because most of the energy is carried by photons with energy $`\sim kT_e`$ in bremsstrahlung emission.
The cooling rate due to the Comptonized bremsstrahlung $`\mathrm{\Lambda }_{Cbr}`$ can be expressed as some average enhancement factor times the un-Comptonized bremsstrahlung rate
$$\lambda _{br}=\left(\frac{n_e}{n_i}\right)\left(\sum _iZ_i^2\right)F_{ei}(\theta _e)+\left(\frac{n_e}{n_i}\right)^2F_{ee}(\theta _e)$$
(A2)
where
$`F_{ei}`$ $`=`$ $`4\left({\displaystyle \frac{2}{\pi ^3}}\right)^{1/2}\theta _e^{1/2}(1+1.781\theta _e^{1.34})\text{ for }\theta _e<1`$
$`=`$ $`{\displaystyle \frac{9}{2\pi }}\theta _e\left[\mathrm{ln}(1.123\theta _e+0.48)+1.5\right]\text{ for }\theta _e>1`$
$`F_{ee}`$ $`=`$ $`{\displaystyle \frac{5}{6\pi ^{3/2}}}(443\pi ^2)\theta _e^{3/2}(1+1.1\theta _e+\theta _e^21.25\theta _e^{5/2})\text{ for }\theta _e<1`$
$`=`$ $`{\displaystyle \frac{9}{\pi }}\theta _e\left[\mathrm{ln}(1.123\theta _e)+1.2746\right]\text{ for }\theta _e>1.`$
Obtaining a precise estimate of the enhancement factor for arbitrary optical depth, electron temperature, and flow geometry is quite a formidable task by itself. However, Dermer, Liang, & Canfield (1991) provide a convenient yet reasonable way to deal with thermal Comptonization. They define a Comptonized energy enhancement factor $`\eta (\nu )`$ as the average change in photon energy between creation and escape in a flow with electron scattering optical depth $`\tau _{es}`$ and electron temperature $`T_e`$,
$$\eta (\nu )=1+\eta _1\eta _2\left(\frac{x}{\theta _e}\right)^{\eta _3}$$
(A5)
where $`x\equiv h\nu /m_ec^2`$, $`P=1\mathrm{exp}(\tau _{es})`$, $`A=1+4\theta _e+16\theta _e^2`$, $`\eta _1\equiv P(A1)/(1PA)`$, $`\eta _3\equiv 1\mathrm{ln}P/\mathrm{ln}A`$, and $`\eta _2\equiv \eta _1/3^{\eta _3}`$. Although this formula is not applicable in certain ranges of $`\nu `$ and $`\tau _{es}`$, it is good enough for general use.
We apply $`\eta (\nu )`$ to the bremsstrahlung spectrum
$$ϵ_{br}(x)d\left(\frac{x}{\theta _e}\right)=\mathrm{\Lambda }_{br}\mathrm{exp}\left(\frac{x}{\theta _e}\right)d\left(\frac{x}{\theta _e}\right)$$
(A6)
to get the Comptonized bremsstrahlung cooling rate. When absorption is not important, a photon with energy $`x`$ ($`x\lesssim 3\theta _e`$) would be upscattered to $`\eta x`$, and the cooling rate would be
$`\mathrm{\Lambda }_{Cbr}`$ $`=`$ $`{\displaystyle \int _0^{3\theta _e}}\eta x{\displaystyle \frac{ϵ_{br}}{x}}dx+{\displaystyle \int _{3\theta _e}^{\mathrm{\infty }}}ϵ_{br}(x)dx`$ (A7)
$`=`$ $`{\displaystyle \int _0^{\mathrm{\infty }}}ϵ_{br}(x)dx+{\displaystyle \int _0^{3\theta _e}}(\eta 1)ϵ_{br}(x)dx.`$ (A8)
For the second integral, we use a simpler form of $`ϵ_{br}`$,
$`ϵ_{br}^0(x)d\left({\displaystyle \frac{x}{\theta _e}}\right)`$ $`=`$ $`\mathrm{\Lambda }_{br}d\left({\displaystyle \frac{x}{\theta _e}}\right)\text{ for }x\le \theta _e`$ (A9)
$`=`$ $`0\text{ for }x>\theta _e`$ (A10)
since very little emission exists above $`x=\theta _e`$. The resulting cooling rate is
$$\mathrm{\Lambda }_{Cbr}=\mathrm{\Lambda }_{br}\left[1+\eta _1\frac{\eta _2}{\eta _3+1}\right].$$
(A11)
When $`\theta _e`$ and $`\tau _{es}`$ increase such that $`PA1`$, most photons are likely to be upscattered to $`\sim kT_e`$. Equation (A5) has a formal divergence in this saturated Comptonization regime. If we assume that photons with energy above $`h\nu _{abs}`$ are all upscattered to $`\sim kT_e`$, the saturated energy enhancement factor for the whole emission is
$`\mathrm{\Lambda }_{Cbr}^{sat}/\mathrm{\Lambda }_{br}`$ $`=`$ $`1+{\displaystyle \int _{x_{abs}}^{\theta _e}}{\displaystyle \frac{3\theta _e}{x}}d\left({\displaystyle \frac{x}{\theta _e}}\right)`$ (A12)
$`=`$ $`1+3\mathrm{ln}\left({\displaystyle \frac{\theta _e}{x_{abs}}}\right).`$ (A13)
Hence, the Comptonized bremsstrahlung cooling rate in general is
$$\mathrm{\Lambda }_{Cbr}=\mathrm{\Lambda }_{br}\mathrm{min}\left[1+\eta _1\frac{\eta _2}{\eta _3+1},1+3\mathrm{ln}\left(\frac{\theta _e}{x_{abs}}\right)\right].$$
(A14)
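For orientation, the enhancement factors of eqs. (A5)-(A14) can be written out compactly. The sketch below follows the Dermer, Liang & Canfield (1991) parametrization exactly as quoted above; the function names are ours.

```python
# A sketch of the thermal-Comptonization enhancement of eqs. (A5)-(A14);
# theta_e = kT_e/(m_e c^2), tau_es is the scattering depth, x_abs the
# absorption cutoff in units of m_e c^2.
import numpy as np

def dlc_eta_params(theta_e, tau_es):
    P = 1.0 - np.exp(-tau_es)                    # scattering probability
    A = 1.0 + 4.0 * theta_e + 16.0 * theta_e**2  # mean amplification factor
    eta1 = P * (A - 1.0) / (1.0 - P * A)
    eta3 = -1.0 - np.log(P) / np.log(A)
    eta2 = eta1 / 3.0**eta3
    return eta1, eta2, eta3

def comptonized_brems_factor(theta_e, tau_es, x_abs):
    """Lambda_Cbr / Lambda_br, capped at the saturated value, eq. (A14)."""
    eta1, eta2, eta3 = dlc_eta_params(theta_e, tau_es)
    unsaturated = 1.0 + eta1 - eta2 / (eta3 + 1.0)   # eq. (A11)
    saturated = 1.0 + 3.0 * np.log(theta_e / x_abs)  # eq. (A13)
    return min(unsaturated, saturated)
```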
The energy-weighted mean photon energy for the emission is expressed as
$`{\displaystyle \frac{kT_S^{Cbr}}{m_ec^2}}`$ $`=`$ $`\mathrm{\Lambda }_{Cbr}^{1}{\displaystyle \int _0^{\mathrm{\infty }}}(\eta x)^2{\displaystyle \frac{ϵ_{br}}{x}}dx`$ (A15)
$`\approx `$ $`\mathrm{\Lambda }_{Cbr}^{1}\left[{\displaystyle \int _0^{\mathrm{\infty }}}xϵ_{br}(x)dx+{\displaystyle \int _0^{\theta _e}}(\eta ^21)xϵ_{br}^0(x)dx\right]`$ (A16)
$`=`$ $`\theta _e{\displaystyle \frac{\mathrm{\Lambda }_{br}}{\mathrm{\Lambda }_{Cbr}}}\left[1+\eta _1(\eta _1+2){\displaystyle \frac{2\eta _2(1+\eta _1)}{\eta _3+2}}+{\displaystyle \frac{\eta _2^2}{2\eta _3+1}}\right].`$ (A17)
Since this expression diverges near the saturated regime, we impose the upper limit $`T_S=T_e`$ (Wien spectrum),
$$T_S^{Cbr}=\frac{1}{4}T_e\mathrm{min}\left[4,\frac{1+\eta _1(\eta _1+2)\frac{2\eta _2(1+\eta _1)}{\eta _3+2}+\frac{\eta _2^2}{2\eta _3+1}}{1+\eta _1\frac{\eta _2}{\eta _3+1}}\right]$$
(A18)
which has a limit value $`T_S^{Cbr}=\frac{1}{4}T_e`$ when Comptonization is negligible.
### A.2 Synchrotron
Even an equipartition magnetic field does not play an important role in the dynamics of spherical accretion (Shapiro 1974). But synchrotron emission can nevertheless be the dominant cooling process. In this work we assume a magnetic field of order equipartition strength:
$$P_m=\frac{B^2}{24\pi }=(1\beta )(P_g+P_m)=(1\beta )P$$
(A19)
where $`P_m`$ is the magnetic pressure, $`P_g`$ the gas pressure, and $`P`$ the total pressure.
The angle-averaged synchrotron emission by relativistic Maxwellian electrons is given by (Pacholczyk 1970)
$$ϵ_{syn}d\nu =\frac{2\pi }{\sqrt{3}}\frac{e^2}{c}\frac{n_e\nu }{\theta _e^2}I^{}(x_M)d\nu $$
(A20)
where $`x_M\equiv 2\nu /(3\nu _0\theta _e^2)`$, $`\nu _0\equiv eB/(2\pi m_ec)`$, and
$$I^{}(x_M)=\frac{4.0505}{x_M^{1/6}}\left(1+\frac{0.40}{x_M^{1/4}}+\frac{0.5316}{x_M^{1/2}}\right)\mathrm{exp}(1.8899x_M^{1/3})$$
(A21)
is a fitting formula (Mahadevan, Narayan, & Yi 1996).
When absorption is not important, the cooling rate due to optically thin synchrotron emission is obtained by integrating equation (A20),
$$\mathrm{\Lambda }_{syn}^0=213.6\frac{e^2}{c}n_e\nu _0^2\theta _e^2.$$
(A22)
In a generic accretion flow, a large fraction of the low energy synchrotron photons can be absorbed either by synchrotron self-absorption or by free-free absorption, and this effect should be incorporated along with that of Comptonization.
If the flow becomes optically thick in absorption at frequency $`\nu _{abs}`$, the synchrotron cooling (before Comptonization) can be expressed as
$$\mathrm{\Lambda }_{syn}=f_{syn}\mathrm{\Lambda }_{syn}^0$$
(A23)
where
$$f_{syn}\equiv \int _{x_{abs}}^{\mathrm{\infty }}xI^{}(x)dx/\int _0^{\mathrm{\infty }}xI^{}(x)dx$$
(A24)
and $`x_{abs}=h\nu _{abs}/m_ec^2`$. By this, we assume that all photons above $`\nu _{abs}`$ escape the gas flow. The treatment of cooling when the flow is optically thick to absorption for lower energy photons and thin for higher energy ones can be tricky: the absorbed radiation heats the gas, and the slight difference in the radiation field between two adjacent radii drives the transfer of energy, while the higher energy photons escape directly. A correct calculation would require solving the gas energy equations and the radiative transfer equations with frequency dependence. A simpler way is to consider only the photons that escape as the source of cooling (Ipser & Price 1982), neglecting the diffusive cooling due to the absorbed photons, which we adopt here (see NY3 for a different approach).
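Since $`I^{}(x_M)`$ of eq. (A21) is an explicit fitting formula, the escaping fraction $`f_{syn}`$ of eq. (A24) can be evaluated by direct quadrature. A sketch follows; the finite upper limit stands in for infinity (the integrand is negligible there), and the cutoff is assumed to be expressed in the same scaled variable $`x_M`$ as the emissivity.

```python
# A direct numerical evaluation of f_syn, eq. (A24), using the fitting
# formula for I'(x_M) of eq. (A21).
import numpy as np
from scipy.integrate import quad

def I_prime(x):
    return (4.0505 / x**(1.0 / 6.0)
            * (1.0 + 0.40 / x**0.25 + 0.5316 / np.sqrt(x))
            * np.exp(-1.8899 * x**(1.0 / 3.0)))

def f_syn(x_abs):
    """Fraction of synchrotron power above the absorption cutoff."""
    num, _ = quad(lambda x: x * I_prime(x), x_abs, 5000.0)
    den, _ = quad(lambda x: x * I_prime(x), 1e-12, 5000.0)
    return num / den
```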
For Comptonization of absorbed synchrotron photons, we assume that photons near the cut-off frequency $`\nu _{abs}`$ are most affected by Comptonization, and the cooling rate due to the Comptonization of synchrotron photons is simply
$$\mathrm{\Lambda }_{Csyn}=\eta (\nu _{abs})f_{syn}\mathrm{\Lambda }_{syn}^0.$$
(A25)
Similarly, the radiation temperature of the Comptonized synchrotron photons, $`T_S^{syn}`$, is
$$4kT_S^{syn}=h\nu _{abs}\eta (\nu _{abs}).$$
(A26)
### A.3 Absorption
There are two sources of absorption: free-free and synchrotron self-absorption. At high temperatures, where most of the radiation is produced, synchrotron self-absorption is generally more important. In this work, we estimate the frequency at which each absorption process becomes important under the effect of Comptonization, and use the larger of the two as $`\nu _{abs}`$.
The difficulty of treating absorption under Comptonization is that a photon that would very likely be absorbed in the absence of Comptonization can be upscattered, increasing its mean free path against absorption so that it escapes before being absorbed. The effective frequency at which absorption becomes important therefore decreases due to Compton scattering. This matters far more for synchrotron emission than for bremsstrahlung, because most of the synchrotron emission lies in the exponential tail of the spectrum, and the precise location of the absorption frequency can change the emission by an exponential factor.
Dermer, Liang, & Canfield (1991) estimate the effective absorption frequency by equating the characteristic timescale for absorption with that for the energy increase due to Compton scattering in case of free-free absorption. We apply this approach to both free-free absorption and synchrotron self-absorption.
The absorption frequency $`\nu _{abs}^{ff}`$ for free-free absorption is estimated from the relation (Dermer, Liang, & Canfield 1991)
$$\tau _{ff}(\nu _{abs}^{ff})\mathrm{{\rm Y}}=1$$
(A27)
where $`\tau _{ff}=ra_{ff}`$ and $`a_{ff}`$ is the absorption coefficient
$$a_{ff}(x)=\sqrt{8\pi }\frac{\alpha _f^2\sigma _Tr_e^3}{x^2\theta _e^{3/2}\left[1+(8/\pi )^{1/2}\theta _e^{3/2}\right]}n_i^2\overline{g}$$
(A28)
and
$`\overline{g}`$ $`=`$ $`(1+3Y_{He})(1+2\theta _e+2\theta _e^2)\mathrm{ln}\left[{\displaystyle \frac{4\eta _E(1+3.42\theta _e)\theta _e}{x}}\right]`$
$`+(1+Y_{He}^2)({\displaystyle \frac{3\sqrt{2}}{5}}+2\theta _e)\theta _e\mathrm{ln}\left[{\displaystyle \frac{4\eta _E(11.2+10.4\theta _e^2)\theta _e}{x}}\right],`$
$`\alpha _f`$ the fine structure constant, and $`r_e=e^2/m_ec^2`$ the classical electron radius (Svensson 1984). This expression is valid for $`x\lesssim \theta _e`$, and we used $`\sqrt{\pi /2}\theta _e^{1/2}[1+(8/\pi )^{1/2}\theta _e^{3/2}]`$ to approximate $`\mathrm{exp}(1/\theta _e)\text{K}_2(1/\theta _e)`$. The factor $`\mathrm{{\rm Y}}`$ includes the effect of Comptonization,
$`\mathrm{{\rm Y}}`$ $`=`$ $`{\displaystyle \frac{1}{\mathrm{min}[1,8\theta _e]\tau _{es}}}\text{if}y>1\text{and}\tau _{es}>1`$ (A30)
$`=`$ $`1+\xi \tau _{es}\text{otherwise}`$ (A31)
where $`\xi =1/3`$ is a geometry factor and $`y\equiv (4\theta _e+16\theta _e^2)\mathrm{max}[\tau _{es},\tau _{es}^2]`$ is the usual Compton parameter.
The absorption frequency $`\nu _{abs}^{syn}`$ for synchrotron self-absorption is similarly calculated (Svensson 1984; Dermer, Liang, & Canfield 1991),
$$\tau _{syn}(\nu _{abs}^{syn})\mathrm{{\rm Y}}=1$$
(A32)
where
$$\tau _{syn}=\frac{1}{4\sqrt{3}}\frac{e^2cn_er}{\nu kT_e\theta _e^2}I^{}(x_M)$$
(A33)
(Ipser & Price 1982; Mahadevan, Narayan, & Yi 1996).
The effective absorption frequency $`\nu _{abs}`$ is then set to the maximum of $`\nu _{abs}^{ff}`$ and $`\nu _{abs}^{syn}`$.
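Operationally, eqs. (A27) and (A32) are one-dimensional root finds in frequency. A sketch follows, assuming the caller supplies the optical depth as a function of frequency and that the chosen limits bracket the root; both the limits and the function names are ours.

```python
# A sketch of solving tau(nu) * Upsilon = 1 for the absorption frequency,
# as in eqs. (A27) and (A32).  tau_of_nu is a caller-supplied callable
# (free-free or synchrotron optical depth at the local conditions).
import numpy as np
from scipy.optimize import brentq

def absorption_frequency(tau_of_nu, upsilon, nu_lo=1e6, nu_hi=1e22):
    g = lambda log_nu: tau_of_nu(10.0**log_nu) * upsilon - 1.0
    # bracket in log-frequency; assumes the root lies inside [nu_lo, nu_hi]
    return 10.0**brentq(g, np.log10(nu_lo), np.log10(nu_hi))

# nu_abs is then max(nu_abs_ff, nu_abs_syn), as stated in the text.
```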
### A.4 Viscous heating
The ion heating rate per unit volume under the $`\alpha `$-viscosity prescription (Shakura & Sunyaev 1973) is (NY2)
$$\mathrm{\Gamma }_{vis}=\alpha \rho c_s^2\mathrm{\Omega }_K\left[3v^2(\vartheta )+\frac{9}{4}\mathrm{\Omega }^2(\vartheta )\mathrm{sin}^2\vartheta +\left(\frac{dv(\vartheta )}{d\vartheta }\right)^2+\mathrm{sin}^2\vartheta \left(\frac{d\mathrm{\Omega }(\vartheta )}{d\vartheta }\right)^2\right].$$
(A34)
### A.5 Compton heating
We now turn to the process that can be decisive in disrupting the accretion flow. Electrons in the outer regions, $`r/r_s\sim 10^4`$-$`10^5`$, can be heated by high-energy photons produced in the inner hot region, $`r/r_s\sim 10`$-$`10^2`$, of the flow. We use the Compton heating rate derived from the Kompaneets equation (Levich & Syunyaev 1971; Rybicki & Lightman 1979),
$$\mathrm{\Gamma }_{Compt}=n_e\sigma _T\frac{4k[T_X(r)T_e(r)]}{m_ec^2}\frac{L_X(r)}{4\pi r^2}.$$
(A35)
The information on the preheating radiation spectrum is incorporated in the angle-averaged quantities, $`T_X(r)`$ and $`L_X(r)`$, which are calculated from the equations (11) and (13) along with corresponding auxiliary equations. |
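In code, eq. (A35) is a one-liner; a sketch in cgs units follows, with nothing assumed beyond the equation itself.

```python
# Eq. (A35) written out directly in cgs units.
import numpy as np

K_B = 1.380649e-16       # Boltzmann constant [erg/K]
M_E_C2 = 8.187105e-7     # electron rest energy [erg]
SIGMA_T = 6.652459e-25   # Thomson cross section [cm^2]

def compton_heating(n_e, T_X, T_e, L_X, r):
    """Volume heating rate [erg cm^-3 s^-1]; negative means net cooling."""
    return (n_e * SIGMA_T * 4.0 * K_B * (T_X - T_e) / M_E_C2
            * L_X / (4.0 * np.pi * r**2))
```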
# ROTSE All Sky Surveys for Variable Stars I: Test Fields
## 1 Introduction
The Robotic Optical Transient Search Experiment (ROTSE; for more details see http://www.umich.edu/~rotse) is a collection of instruments designed to search for astrophysical transients, especially those associated with Gamma-ray Bursts (GRBs). Along with observations triggered by external sources, ROTSE instruments perform regular patrol observations. In particular, the ROTSE-I instrument has been imaging the entire sky visible from New Mexico in two epochs nightly since March 1998. These observations provide a unique opportunity to perform uniform all-sky surveys for variable stars.
All-sky observations of variable stars, even at comparatively bright magnitudes, have important advantages over narrow field searches. Variables found in uniform all-sky surveys are ideal for performing studies of galactic structure. They can detect structure in the galaxy on all scales, while avoiding the confusion that substructure creates in pencil beam surveys. All-sky surveys are more sensitive to intrinsically faint classes of disk variables, such as Delta Scuti stars and main sequence contact binary systems. They are also very useful for finding rare, intrinsically bright variables, such as red supergiant variables (Feast, et al., 1980). Finally, all sky surveys at relatively bright magnitudes identify complete samples of nearby variables, ideal for detailed study by other techniques such as parallax measurement, x-ray observations, and high resolution spectroscopy. This is especially important as we prepare for the era in which the Full-sky Astrometric Mapping Explorer (see http://aa.usno.navy.mil/FAME) and the Space Interferometry Mission (see http://sim.jpl.nasa.gov/index.html) will be able to use these objects to directly calibrate the distance ladder.
We present here first results from analysis of ROTSE-I sky patrol data. For this analysis we have concentrated on the study of periodic variable stars. A discussion of aperiodic transients will be presented in a subsequent publication. In the following sections we describe relevant details of the ROTSE-I system, data reduction, variable identification, phasing, and automatic classification. This is followed by a description of some general properties of the variables discovered. We conclude with a discussion of what we expect from future ROTSE variable catalogs.
## 2 Observations
The ROTSE-I instrument consists of four Canon 200mm f/1.8 lenses, each equipped with a thermoelectrically cooled CCD camera incorporating a Thompson TH7899M CCD. The $`14\mu \text{m}`$ pixels of these CCDs subtend 14.4″ at this focal length. Each of these four assemblies has a field of view of 8.2°x 8.2°. All four optical assemblies are co-mounted on a single rapidly slewing mount. Pointing offsets between the four optical assemblies allow the instrument to cover a combined 16°x16° field of view. The Canon lenses provide a point spread function which has a typical full width at half maximum of about 20″. As a result stellar images are only moderately undersampled. To maximize sensitivity to GRB optical transients, ROTSE-I CCDs are currently operated without filters.
All ROTSE instruments are designed for completely automatic operation. The ROTSE-I telescope is housed on the roof of a military surplus electronics enclosure. It is protected by day and in bad weather by a clamshell cover which flips completely out of the way during operation. Inside the ROTSE-I enclosure are a set of five Linux PCs which operate the system. Four of these are dedicated to operation of the CCD cameras. The fifth is a master computer which completely controls observatory operations. ROTSE instruments are currently installed at Los Alamos National Lab, near the LANSCE end station. Deployment of these instruments to a dark site at Fenton Hill, west of Los Alamos, is contemplated for the near future.
At the beginning of the night the ROTSE master computer checks the weather status, which is measured by local monitoring hardware. If conditions allow, it opens the clamshell and waits for astronomical twilight. ROTSE has two principal observing modes: patrol mode and trigger response mode. Most of the time is spent in patrol mode. The large ROTSE-I field of view allows the entire celestial sphere to be tiled with a set of 206 field centers (see Figure 1). In its current New Mexico location, ROTSE-I can image 160 of these fields at various times of the year. While in patrol mode, we successively image all available sky patrol fields. Occasionally (about once every ten days) an accessible GRB trigger arrives through the GCN system (Barthelmy, et al., 1998). When this occurs, ROTSE immediately interrupts patrol observing and obtains burst response data. Details of ROTSE studies of GRBs, including the first observation of optical emission contemporaneous with a GRB, can be found in Akerlof, et al. (1999) and Kehoe, et al. (1999).
Sky patrol observations are conducted in the following manner. A list of available (elevation $`>20\mathrm{°}`$) fields is generated at the start of a patrol. During the night, the telescope slews to each patrol location and obtains a pair of 80 s exposures. Exposures are taken in pairs to eliminate confusion caused by cosmic rays, satellite trails, etc. and to allow robust detection of aperiodic transients. Paired observations also prove extremely useful for detection of periodic variables, as described below. On a typical night two patrol sequences are obtained, covering about 18,000 square degrees of sky, and recording the brightness of $`9\times 10^6`$ stars with four images taken in two epochs. The time of each observation is recorded with an accuracy of 20 ms. Maintenance of an accurate time standard throughout the system is accomplished by use of the Network Time Protocol, and is required to support ROTSE's GRB mission.
Instrumental calibrations required for reduction of ROTSE data are also obtained automatically. A sequence of 12 dark frames is recorded during the night. These dark frames are median averaged to provide a global dark for each night. Since the cameras are TE cooled, this correction is primarily important for removing the small number ($`<<1\%`$) of pixels which have high dark current rates. Since ROTSE-I pixels are so large, they have relatively high sky rates. As a result, patrol observations are sky noise limited. We obtain flatfields from night sky images by median averaging a single observation of each patrol field. As there are typically 90 fields observed in at least one epoch, this process yields very good, stable flats. Flatfield corrections for ROTSE-I are dominated by vignetting in the lens, which amounts to about a 40% loss of sensitivity at the CCD corners.
Regular patrol observations by ROTSE-I began in March 1998. These early observations utilized 25 s patrol exposures, reducing the sensitivity, but increasing the number of observation epochs. In March 1999 exposure lengths were increased to 80 s. As of November 1999 more than 2.6 terabytes of imaging data have been obtained, a total of about 430,000 images. For this analysis we have selected 9 of the 160 patrol fields for which ROTSE-I data exist. We have analyzed all observations of these fields obtained from March 15, 1999 to June 15, 1999. The number of available epochs varies by field from $`\sim `$40 to $`\sim `$110. These data include about 40% of the available time coverage for these fields, which constitute 5.6% of the ROTSE-I sky coverage.
## 3 Data Analysis and Calibration
Except for automatic generation of darks and flats, ROTSE-I sky patrol analysis for this study was conducted offline. Online analysis of some data began in August 1999, and we expect to begin online analysis of all data in Spring 2000. Data analysis begins with frame correction. Median dark images are subtracted from each sky exposure. Images are then corrected for variable system response by application of the night sky flats described above. These corrected images are then reduced to object lists by the SExtractor (Bertin & Arnouts, 1996) package. Since ROTSE-I data is heavily dominated by stars we retain only a small set of the available SExtractor outputs: position, size, magnitude, and magnitude errors. The remainder of the analysis, from calibration through automatic variable classification, is carried out with a series of IDL routines generated at the University of Michigan over the last several years.
Calibration for the ROTSE-I data is accomplished in a somewhat unusual manner. Each camera in the ROTSE-I array has a 64 square degree field of view. This implies that within each image there are typically 1500 Tycho (Hog, 1998) stars. The Tycho catalog is derived from Hipparcos observations, and includes both highly accurate astrometry and two-color (B and V) space-based CCD photometry. Most Tycho stars are fainter than the $`m_V\approx 9.0`$ saturation limit of ROTSE-I observations. Astrometric and photometric calibrations are based on these stars. The availability of large numbers of well measured calibration stars in each image is a remarkable resource, both for calibration and monitoring of data quality.
Astrometric calibrations are accomplished by identifying ROTSE-I objects with Tycho stars through a triangle matching routine. The transformation of CCD (x,y) to (ra,dec) is accomplished through a third order polynomial warp. The resulting quality of the astrometric calibration can be tested by examining the residuals to this fit. Astrometric residuals for a typical field are shown in Figure 2. Errors for bright ($`m_V<12.5`$) stars are $`\sim `$1.5″, about one tenth of a pixel.
Photometric calibration is somewhat more complex. The Tycho catalog includes only B and V photometry, and ROTSE-I images are obtained with unfiltered CCDs. We use the Tycho B and V magnitudes to produce an empirically predicted m<sub>ROTSE</sub> for each Tycho star:
$$m_{ROTSE}=m_V\frac{m_Bm_V}{1.875}$$
(1)
These color-corrected magnitudes are then compared to ROTSE-I instrumental magnitudes to set the ROTSE-I zeropoints. The net effect of this procedure is to place ROTSE observations onto a V-equivalent scale, in the sense that the average Tycho star has m<sub>ROTSE</sub>=$`m_V`$.
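As a sketch, the zeropoint fit amounts to predicting m<sub>ROTSE</sub> for the matched Tycho stars via eq. (1) and taking a robust offset against the instrumental magnitudes. The median below is a stand-in for whatever robust estimator is preferred; the input arrays are a hypothetical matched Tycho/ROTSE star list.

```python
# A sketch of the Tycho-based zeropoint determination, assuming matched
# arrays of instrumental magnitudes and Tycho B and V photometry.
import numpy as np

def tycho_predicted_mag(m_B, m_V):
    return m_V - (m_B - m_V) / 1.875      # eq. (1)

def fit_zeropoint(m_instr, m_B, m_V):
    """Median offset between Tycho-predicted and instrumental magnitudes."""
    return np.median(tycho_predicted_mag(m_B, m_V) - m_instr)
```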
While m<sub>ROTSE</sub> is clearly not a standard V magnitude, this technique provides very good, stable, zeropoints. As a measure of the quality and stability of these photometric calibrations we present in Figure 3 the standard deviation of 18016 stars observed in 114 different epochs over a period of four months. The rms errors rise from $`\sim `$2% at 10th magnitude to about 20% at the 5$`\sigma `$ threshold around $`m_{ROTSE}`$ =15.5. These errors are typical of the fields included in this study.
The presence of many Tycho stars in each image provides an excellent opportunity for monitoring data quality. The calibration routines generate summary outputs including both astrometric and photometric fit residuals. This summary information is carefully examined before observations from a particular day are accepted and incorporated into the overall light curve catalog.
Once calibrated object lists from each observation of each field are generated, they must be collated into tables of light curves. This is done using the calibrated object positions. A requirement is imposed that each object must appear in both observations of a ROTSE pair to be included in the light curve measurement. This provides a strong veto against cosmic rays, satellite glints, etc. The output of this process is an array of light curves for each field. These light curve tables then form the input to the subsequent process of variable identification and classification.
## 4 Variable Identification
To automatically detect variables we implement the technique of Welch & Stetson (1993) (the WS technique) as modified by Stetson (1996). This method increases our sensitivity to periodic variables by taking advantage of paired observations. Both observations in a ROTSE-I pair are taken within 2.5 minutes. Since this time is much shorter than the period of the variables we seek, we expect the two observations to record essentially the same magnitude. For variable objects, the residuals from the comparison of each magnitude to the mean magnitude will be correlated. For stable objects there is no correlation of residuals. Products of these residuals will, for variables, generally be positive, while products of uncorrelated residuals may be either positive or negative. As a result, the sum of these products for paired observations will increase monotonically for periodic variables and cluster around zero for stable objects.
For each object with a light curve we calculate a variability index through a series of steps. The variations from the most obvious implementation of the WS technique are all designed to make the measurement more robust against the presence of a few bad observations. We begin by forming the appropriately weighted products of residuals for each pair of observations.
$`\delta _{1i}={\displaystyle \frac{V_{1i}\overline{V}}{(\sigma _V)_{1i}}}`$
$`P_i=\delta _{1i}\delta _{2i}`$ (2)
where $`\delta _{1i}`$ is the uncertainty weighted residual between the first of a pair of observations of an object and the mean magnitude of that object through all observations. $`P_i`$ is the product of the residuals from a pair of adjacent observations. The mean magnitude used in this calculation is based on the robust mean of Stetson (1987). It is an iteratively weighted mean, where the weights are based on the residuals between each observation and the previously determined mean. The uncertainty $`\sigma _V`$ used for this calculation is the photometric error calculated by SExtractor added in quadrature to an assumed 2% systematic error. We generate sums of these products of residuals to calculate the variability index
$$J_{ROTSE}=\sqrt{\frac{1}{n(n1)}}\sum _{i=1}^nsgn(P_i)\sqrt{|P_i|}$$
(3)
where n is the number of available epochs. This varies by field from 22 to 114, primarily because different time periods were analysed for each field. This form is simpler than that discussed by Stetson (1996) because all ROTSE observations appear in pairs, and all pairs are equally weighted. As a last addition, we modify this index by the kurtosis measure of Stetson (1996). This parameter is designed to account for the fact that many real variables have sinusoidal light curves. Magnitude measures for a sinusoidal variable cluster around the maximum and minimum values. Those for a stable star with Gaussian errors cluster around the mean value. As a measure of this we calculate
$$K_{ROTSE}=\frac{\frac{1}{N}\sum _{i=1}^N|\delta _i|}{\sqrt{\frac{1}{N}\sum _{i=1}^N\delta _i^2}}$$
In this equation N represents the total number of observations (twice the number of epochs) and $`\delta _i`$ is the residual from the mean of each observation. This parameter has values K=1.0 for a square wave, K=0.90 for a sinusoid, and K=0.798 for a stable object with Gaussian errors. When the residuals $`\delta _i`$ are dominated by a single bad measurement with residual $`\mathrm{\Delta }`$, this parameter becomes
$`K_{ROTSE}{\displaystyle \frac{\frac{\mathrm{\Delta }}{N}}{\sqrt{\frac{\mathrm{\Delta }^2}{N}}}}`$
$`{\displaystyle \frac{1}{\sqrt{N}}}`$ (4)
Since this goes to zero in the limit of large N, the kurtosis index helps to reduce the sensitivity to single bad observations. It also reduces our sensitivity to variables with a low duty cycle, such as flare stars and detached eclipsing binaries. We combine this with the variability index $`J_{ROTSE}`$ to determine the final selection index
$$I_{var}=J_{ROTSE}K_{ROTSE}/0.798$$
(5)
This is equal to J when the residuals are Gaussian, slightly amplified for sinusoidal light curves, and significantly reduced when a single bad measurement dominates the variability index J.
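Gathering eqs. (2)-(5), the selection index for a single light curve can be sketched as below. The plain mean stands in for the iteratively reweighted robust mean described above, and the 2% systematic error floor is included as stated.

```python
# A sketch of the variability index I_var of eqs. (2)-(5) for one light
# curve stored as paired observations: mags and errs have shape (n_pairs, 2).
import numpy as np

def variability_index(mags, errs, sys_err=0.02):
    sigma = np.sqrt(errs**2 + sys_err**2)
    delta = (mags - np.mean(mags)) / sigma          # weighted residuals
    P = delta[:, 0] * delta[:, 1]                   # pair products, eq. (2)
    n = len(P)
    J = np.sqrt(1.0 / (n * (n - 1.0))) * np.sum(np.sign(P) * np.sqrt(np.abs(P)))
    d = delta.ravel()
    K = np.mean(np.abs(d)) / np.sqrt(np.mean(d**2)) # kurtosis measure
    return J * K / 0.798                            # I_var, eq. (5)
```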
Variable candidates are selected by examining I<sub>var</sub> for all objects which are present in at least 11 epochs. This requirement is made because phasing is ambiguous for objects observed in fewer epochs. Figure 4 shows the distribution of I<sub>var</sub> vs. $`m_{ROTSE}`$ for a typical field. For this analysis variables are selected to be those which have an I<sub>var</sub> value more than 4.75 $`\sigma `$ above the mean value. In a typical camera (64 square degrees), this cut identifies between 40 and 100 candidate variables.
Two remaining cuts are made to clean the sample. Despite the robustness of I<sub>var</sub> against objects whose light curves have just a few deviant pairs, there remain a small number of โflaringโ objects per field which pass our I<sub>var</sub> cut. Some of these objects are real variables such as detached eclipsing binaries and flare stars. Since we have little information on their variability, we opt to remove them from this analysis and reserve them for further study. Such โflareโ objects are defined as objects for which there are at most two pairs of observations falling in the magnitude range from 0.5 to 1.0 times the absolute value of the maximum residual. This cut removes about 25% of candidate variables.
Finally, we make a cut designed to remove problems with deblending. The SExtractor deblending algorithm works well for these stellar images, but there are some cases where stars are sufficiently close to one another that they are sometimes deblended, and sometimes blended. This creates a very characteristic light curve in which the measured magnitude shifts between two distinct values. To detect these automatically, we first generate a list of ROTSE observed stars (if there are any) within a few pixels of each variable. We then extract a subset of the observations in which the candidate variable is found and its close neighbor is not. These are candidate โblendโ measurements. We compare the mean and standard deviation of the โblendโ and โnon-blendโ measurements. If the means are separated by more than two standard deviations, the object is considered a deblending problem and removed from the variable list. The effect of this cut is strongly dependent on crowding (and hence on galactic latitude). It removes between 5% and 30% of candidate variable objects.
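A sketch of this veto follows, with the bookkeeping reduced to a boolean flag per epoch recording whether the close neighbor was separately detected. The choice of which standard deviation to use in the 2 sigma criterion is our assumption, since the text does not specify it.

```python
# A sketch of the deblending veto: compare "blend" epochs (neighbor not
# separately detected) against "non-blend" epochs.
import numpy as np

def is_deblend_artifact(mags, neighbor_found):
    blend = mags[~neighbor_found]   # epochs where the pair was merged
    clean = mags[neighbor_found]    # epochs where it was deblended
    if len(blend) < 2 or len(clean) < 2:
        return False
    sep = abs(np.mean(blend) - np.mean(clean))
    return sep > 2.0 * max(np.std(blend), np.std(clean))
```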
The output of this process is a set of objects which show moderate evidence for variability. At this point we allow the inclusion of some false variables, as we wish to maintain sensitivity to real variables of the lowest possible amplitudes. We use robust evidence of periodicity as our final restriction. This list of variable candidates includes 7396 objects, drawn from a total of 917,266 which pass the 11 epoch cut.
## 5 Variable Confirmation and Classification
Variable classification proceeds in two steps. First, an attempt is made to fit each light curve to a third order polynomial over the full period of observation. Since these observations cover about 100 days, this fit can be quite good for a variety of long period variables, and is used to flag long period objects. For these objects the polynomial fit parameters are used to derive an amplitude (maximum variation within the observation window) and a measure of goodness of fit. Of the 7396 variable candidates, 739 are identified as long period variables by this technique.
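A sketch of this long-period flag follows, using a least-squares cubic in time; the rms of the residuals is our stand-in for the goodness-of-fit measure, and the amplitude is the maximum variation of the fit within the window, as described above.

```python
# A sketch of the long-period fit: cubic polynomial over the full window.
import numpy as np

def long_period_fit(t, mag):
    coeffs = np.polyfit(t, mag, 3)
    model = np.polyval(coeffs, t)
    amplitude = model.max() - model.min()   # max variation in the window
    rms = np.sqrt(np.mean((mag - model)**2))
    return amplitude, rms
```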
The second step is to attempt phasing of all objects which pass the I<sub>var</sub> cut. For this purpose we use a cubic spline method described in detail in Akerlof, et al. (1994). This technique provides best fit periods, period error estimates, and spline fit approximations for object light curves. The general quality of these spline fits is illustrated in the sample light curves described below. In total 1195 of our 7396 variable candidates are successfully phased. The remainder (5462) are mostly near the detection threshold. Additional ROTSE data will be required to determine how many of the remaining candidates are real variables.
With phased light curves in hand, the classification process begins. Automatic classification is based on period and light curve shape, as quantified by the spline fit. We begin by Fourier analyzing the spline fit to the light curve:
$$\mathrm{\Delta }_m(t)=\sum _{i=0}^{255}p_i\mathrm{cos}\left(\frac{2\pi it}{\mathrm{\Gamma }}\right)$$
(6)
where $`\mathrm{\Delta }_m(t)`$ is the phased light curve. Classification then relies on the derived period $`\mathrm{\Gamma }`$, the ratios of the Fourier coefficients:
$`\begin{array}{c}r_1=\frac{p_2}{p_1}\\ r_2=\frac{p_3}{p_1}\\ r_3=\frac{p_2}{p_3}\end{array}`$
and the sign of the largest deviation from the mean. Cuts based on these Fourier power ratios are referred to below as โratio cutsโ. In addition, most classifications require that the largest power in the Fourier series is in the first term. This implies that the phased light curve has one cycle per period. The cuts used here were selected through examination of the parameters of a subset of manually classified objects. We expect them to evolve in future ROTSE variable studies.
Classification for the purposes of this study is confined to 8 classes: RRab, RRc, Delta Scuti, Cepheid, Contact Binaries, Eclipsing, Mira, and long period variable. The classifications given here should be considered preliminary. We are aware that there is not necessarily complete correspondence between our classifications and more traditional definitions of these classes. We refer to our classes as RRAB, RRC, DS, C, E, EW, M, and LPV in what follows.
RRAB stars were the original target of this project. Although most RRABs have periods from 0.3-0.9 days, some have been detected with periods as long as 2 days. Therefore the period range is defined as 0.3-2.0 d. Classification within this range is based on ratio cuts. RRAB stars are characterized by asymmetric light curves with a rapid rise followed by a slower decay. As a result we search for light curves which are not sinusoidal, but have substantial contributions from higher harmonics. RRAB stars are selected to have:
$`\begin{array}{ccc}0.08<r_1<1.0,& 0.01<r_2<1.0,& r_3>0.6\end{array}`$
The RRC, DS, and C types are quite similar in the sense that all have more or less sinusoidal light curves. The primary distinction between these classes is period range. We begin by requiring that the contribution from higher harmonics be small. All three classes share the same ratio cuts:
$`\begin{array}{cc}r_1<0.16,& r_2<0.024\end{array}`$
Those phased objects which pass these cuts we classify as DS ($`\mathrm{\Gamma }<0.2`$d), RRC ($`0.2`$d$`<\mathrm{\Gamma }<1.0`$d), or C ($`1.0`$d$`<\mathrm{\Gamma }<50.0`$d).
This RRC classification overlaps with the low end of the RRAB period and ratio space. This is not an enormous problem, as the majority of RRC stars have periods less than 0.4 d. To accommodate the region of overlap we tighten the RRC ratio cuts to $`r_1<0.08`$ and $`r_2<0.01`$ in the range $`\mathrm{\Gamma }>0.4`$ when the amplitude $`>0.35`$ mag. This accounts for the only apparent difference between RRAB and RRC objects in this range; the RRC stars have lower amplitude and a light curve too sinusoidal to be called RRAB. A comparison of RRAB and RRC light curves in this period and amplitude range is given in Figure 5.
Eclipsing objects include the EW (contact) and E (detached) types. There are no period restrictions for selection of eclipsing systems. EW objects are selected as those with
$`\begin{array}{cc}0.04<r_1<0.2,& 0.007<r_2<0.04\end{array}`$
Since these objects tend to overlap with RRC and DS objects, the distinguishing feature becomes the sign of the greatest deviation. For EW stars, the greatest deviation is always less than the mean. For RRC and DS stars the greatest deviation tends to be brighter than the mean. The E classification likewise requires a negative greatest deviation. E type systems are treated in two sets: those with eclipses of equal depth, and those with eclipses of different depths. The second case is the only class which we allow to have two cycles per period. For those objects which have minima of equal depth, the real orbital period of the system is twice the photometric period. Periods for equal depth E and EW type systems are corrected for this effect once classification is complete.
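Gathering the published cuts, an automatic classifier can be sketched as below, using the features extracted above. The precedence among overlapping classes is our own guess (the text fixes the cuts but not their ordering), and for brevity the E and long-period classes are not handled.

```python
def classify(period, r1, r2, r3, amplitude, extreme_is_faint):
    """Assign a preliminary class from the ratio cuts; None means the
    object is left for hand scanning."""
    if 0.04 < r1 < 0.2 and 0.007 < r2 < 0.04 and extreme_is_faint:
        return "EW"
    if r1 < 0.16 and r2 < 0.024 and not extreme_is_faint:
        if period < 0.2:
            return "DS"
        if period < 1.0:
            # tightened cuts against the RRAB overlap region
            if period > 0.4 and amplitude > 0.35 and not (r1 < 0.08 and r2 < 0.01):
                return None
            return "RRC"
        if period < 50.0:
            return "C"
    if 0.3 < period < 2.0 and 0.08 < r1 < 1.0 and 0.01 < r2 < 1.0 and r3 > 0.6:
        return "RRAB"
    return None
```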
As a final step for this classification, each light curve is checked by visual inspection, allowing for small adjustments to classification. In addition, each light curve and its classification was manually graded for quality from 10 (excellent) to 1 (marginal). Experience gained from this hand-scanning convinces us that we will be able to essentially automate classification by implementing further techniques like those described by Rucinski (1997).
## 6 Results
In this preliminary study we have analyzed $`\sim`$2000 square degrees of ROTSE-I sky patrol data in an effort to assess the usefulness of these data for detection of periodic variables. The locations of these fields in galactic and celestial coordinates are shown in Figure 6. A primary conclusion of this work is confirmation that ROTSE-I data form an excellent resource for the discovery and classification of periodic variables. We have discovered a total of 1781 periodic variables, 89% of which are not included in the General Catalog of Variable Stars (Kholopov, et al., 1988), based on position matching within a 28.8″ aperture. This reiterates the long standing assertion (Paczyński, 1997) that many variable stars brighter than 15th magnitude remain to be discovered. We refer to this catalog as the ROTSE Survey for Variables 1 (RSV1).
Some general properties of the objects discovered are presented below. Example light curves are presented for each variable class to give an idea of data quality. The distribution of several classes of objects on the sky is shown in Figure 7. The basic object list is presented in Table 1. The entire catalog, along with light curves, is available online through http://www.umich.edu/~rotse.
Accurate determination of our variable detection efficiency is a complex exercise. Since detection efficiency is a strong function of period, amplitude, and light curve shape, it must be determined separately for each variable class. This can be done correctly only after a complete understanding of the period, amplitude, and light curve shape distributions in the observing bands is obtained. These studies will be reported for each variable class in future publications.
For the moment we make a simple estimate of detection efficiency by measuring the fraction of GCVS stars recovered here. For this purpose we have selected GCVS stars within our survey area with maxima less than $`m_V`$=10 and minima greater than $`m_V`$=13. Within this sample we recover more than 80% of the RR Lyrae and Delta Scuti stars. Our efficiency for eclipsing types, as expected, varies strongly with type, from $`\sim`$70% for close binaries to $`\sim`$30% for detached systems. All 52 known Miras within the ROTSE-I magnitude range are recovered. Our lowest recovery rates are for GCVS types L and SR (the slow irregulars and semiregulars) at about 37%. This low efficiency is due to a combination of variability timescale, amplitude, and aperiodicity.
### 6.1 RR Lyraes
RR Lyrae stars are extremely useful as distance indicators and tracers of the structure of the Milky Way. They are attractive because they are easy to identify and to measure well. As a result they were the original target of this investigation. With a typical $`M_V`$=0.74 (Fernley, et al., 1998), RR Lyraes are detectable by ROTSE-I from 0.7 to about 7 kpc. Among all the variable types which we classify, our classification of objects as RR Lyrae stars is the most secure.
In this preliminary survey, we identify 186 RRAB stars, 126 of which are newly discovered. As these stars have large amplitude and distinctive variability, it is not surprising that a relatively large fraction (32%) are previously known. The period and amplitude distributions for these stars are shown in Figure 8. All ROTSE RRABs which appear in the GCVS are classified there as either RRAB (55 of 60) or RR (5 of 60). Sample light curves for RRAB stars are shown in Figure 9.
Since the RRAB sample has a large overlap with the GCVS, we have made direct comparisons of ROTSE and GCVS derived periods for these objects. The dependence of this period difference on $`m_{ROTSE}`$ is shown in Figure 10. There are 57 overlapping stars with measured GCVS periods. For two of these, ROTSE measures periods substantially different from the GCVS period. In one case the ROTSE period provides an enormously better fit to the ROTSE data, suggesting that either the GCVS period is in error or the period of this object has changed. In the second, the ROTSE phase coverage is incomplete. Examination of additional ROTSE data for this object shows much better agreement with the GCVS period. For the remaining 55 overlap stars the RMS period error between ROTSE and the GCVS is 0.00026 d. Typical period errors for ROTSE determinations are 0.00012 d, so these offsets are probably dominated by GCVS period errors.
In addition, we identify a total of 113 RRC stars, 104 of which are new. The large fraction of newly identified RRC stars is not surprising given their relatively small pulsation amplitudes. The classification of these stars as RRCs and not, for example, contact binaries is dependent on hand scanning of the light curves. Of the nine objects known in the GCVS, seven are classified there as RRC, two as EW. Visual examination of the light curves supports the classification as RRC in both cases of disagreement. The period and magnitude histograms for these objects are also given in Figure 8. Examples of light curves for these objects are presented in Figure 9.
RRC stars make up about 9% of GCVS RR Lyraes. It is interesting to note that the RRC fraction found here, 38%, is substantially larger. This illustrates the important advantage of ROTSE-I CCD photometry over earlier wide area surveys based on photographic photometry. This difference is particularly striking for variable classes like RRCs, with mean amplitudes $`A_V`$=0.3 m. The magnitude distributions for all new and previously known RR Lyrae stars of both types are shown in Figure 11.
### 6.2 Delta Scuti Stars
The Delta Scuti stars are observationally (and physically) similar to the RRC class. They obey a period-luminosity relation which is now well determined by Hipparcos calibration (McNamara, 1997). Their periods range from 0.1 d to 0.28 d and their amplitudes range from 0.1 to 0.5 m. We have classified 91 objects as DS stars, of which two are known from the GCVS. As with the RRC stars, the precision of ROTSE CCD photometry helps to reveal large numbers of previously undetected stars in this class. Examples of DS light curves are included in Figure 9.
### 6.3 Close Binary Systems
Close binary systems (mostly of the W UMa type) are very common in the ROTSE-I data. These objects have recently been shown to obey a reasonably tight period-color-luminosity relation (Rucinski & Duerbeck, 1997). We have identified 382 candidate close binaries, of which 368 are new. The detection of such a large number of systems is not unexpected. Most contact systems contain relatively low-mass G and K type stars. It is estimated that as many as 1 in 500 G and K type stars are members of contact binary systems. Relatively shallow, wide area surveys such as this are especially sensitive to such low luminosity objects. Examples of EW systems are presented in Figure 12.
### 6.4 Other Eclipsing Systems
With between 22 and 114 epochs per location this analysis has relatively poor sensitivity to widely separated eclipsing systems. Nonetheless, we identify a total of 109 eclipsing systems, 95 of which are new. This is a relatively inhomogeneous set, which includes both $`\beta `$ Lyrae systems and detached Algol type eclipsing binaries.
### 6.5 Intermediate Period Pulsators
We have identified 201 systems with periods from 1 d to 100 d with more or less sinusoidal light curves. All these objects are placed in class C. They are not yet fully identified, though it is clear that some are Cepheids, W Virginis stars, and RS CVn systems. Only 2 of these 201 objects are known in the GCVS: the dwarf nova AH Her, and HZ Her, a low-mass x-ray binary. An important application of the ROTSE all-sky variable survey will be identification of a complete sample of bright Cepheids for calibration of the distance ladder.
### 6.6 Miras and Other Long Period Variables
The long period objects in our sample are drawn from at least two different groups, Miras and red supergiant variables (RSVs). We have classified those with observed variations larger than one magnitude as M. There are 146 such objects in our sample, 66 of which are in the GCVS. Of the overlap objects, most (60 out of 66) are classified by GCVS as Miras. The remainder are classified as long period or semi-regular variables. While periods for these very long period variables cannot be firmly established by these data, we note that analysis of the full two year ROTSE-I database will be extremely effective in this regard.
Miras are among the most venerable distance indicators, and obey a good period-luminosity relation, at least in K band (Bedding & Zijlstra, 1998). For Miras this PL relation is contaminated in the optical by strong TiO absorption. Pierce, Jurcevic, & Crabtree (1999) have recently shown that there is a good optical PL relation for the red supergiant variables, and it is likely that many of the objects classified here as M are actually in this class. Separation of these two classes should be possible using IR data of the type which the 2MASS survey (Beichman, 1998) will provide. As both types of stars are intrinsically very luminous ($`-7<M_{ROTSE}<-4`$) they can be observed in ROTSE-I data to great distance. The most luminous of these objects can be observed to about 300 kpc. While identification and period measurement of these objects can be accomplished with existing ROTSE-I data, their use as distance probes may require data in standard passbands.
We group all other long period variables into a single catch-all class. There are 534 such objects detected, of which 501 are newly discovered. This large number of new objects is remarkable, especially in light of our relatively low detection efficiency ($`\sim`$37%) for these objects. The combination suggests that longer duration observations will unveil substantially larger numbers of LPVs. Again, it is impossible within these four months of data to accurately determine periods for these objects. Examples of M and LPV stars are presented in Figure 12.
## 7 Multiwavelength Correlations
We have correlated our variable catalog with several all-sky catalogs in other wavebands. Comparison to the ROSAT All Sky Survey Bright Source Catalog (Voges, et al., 1999) yields 26 matches within a radius of 40″. Of these, only four are listed in SIMBAD (http://hea-www.harvard.edu/SIMBAD): AM Her (a CV), HZ Her (an LMXRB), PW Her (an RS CVn star) and TW CrB. The last is unidentified in SIMBAD, but is clearly an EW object in ROTSE data. Of the remainder, 5 are short period eclipsing systems, one exhibits a long ($`>`$100 d) fade and 16 are longer period C type variables. These last may be RS CVn stars, CVs, or x-ray binaries. We are planning a program of follow-up spectroscopy to determine the nature of these interesting objects.
We have also cross-correlated our catalog of variables with the IRAS Point Source Catalog (Beichman, et al., 1988). A total of 269 matches are found within a radius of 40″ (which is a typical IRAS error ellipse major axis). Only 85 of these are known in the GCVS. Every one of these objects is a long period variable, which we have classified as M (105), LPV (156), or C (8). All of these objects are detected in the IRAS 12 $`\mu `$m channel, and about half are detected in the 25 $`\mu `$m channel. These characteristics are perfectly consistent with their classification as pulsating giant stars.
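The positional matching used for both comparisons is a fixed-radius cross-match on the sphere. A self-contained sketch (brute force, which is adequate at these catalog sizes) might read:

```python
import numpy as np

def cross_match(ra1, dec1, ra2, dec2, radius_arcsec=40.0):
    """Index pairs (i, j) of sources in two catalogs (coordinates in
    degrees) separated by less than radius_arcsec on the sky."""
    ra1, dec1 = np.radians(ra1), np.radians(dec1)
    ra2, dec2 = np.radians(ra2), np.radians(dec2)
    rad = np.radians(radius_arcsec / 3600.0)
    pairs = []
    for i in range(len(ra1)):
        # haversine formula for the angular separation
        a = (np.sin((dec2 - dec1[i]) / 2) ** 2
             + np.cos(dec1[i]) * np.cos(dec2) * np.sin((ra2 - ra1[i]) / 2) ** 2)
        sep = 2 * np.arcsin(np.sqrt(np.clip(a, 0.0, 1.0)))
        pairs.extend((i, j) for j in np.nonzero(sep < rad)[0])
    return pairs
```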
## 8 Conclusions and Future Prospects
We have searched a small fraction of ROTSE-I sky patrol data for periodic variables using a robust selection technique. All detected variables have been phased and automatically classified, and the derived classifications have each been checked by visual inspection of the phased light curves. A large number of new variable stars are uncovered, including representatives of many known classes of periodic variables.
The most important conclusion of this work is confirmation that many new variables of all types await discovery at bright magnitudes, and that the ROTSE-I data are a highly effective discovery archive. These data allow accurate classification, period, and ephemeris determination for all variables with amplitudes greater than $`0.1\text{ m}`$. We identify a fraction 1781/917226 $`\sim`$ 0.2% of all observed objects as variable. Because it is known that the variable selection used here is biased against some variable types, this is a firm lower limit on the variability fraction.
ROTSE-I all sky patrol data are already having a real impact on the study of bright variables. In the magnitude range from 10.5 to 12.5 the new variables presented here (973) represent a 30% increase in the total number of GCVS variables (3162) in this magnitude range. This is despite the fact that they are drawn from only 5% of the celestial sphere.
We have confirmed the expectation that many variables of modest amplitude, such as RRc and Delta Scuti stars, have escaped detection in earlier all sky searches. We find that the fraction of RR Lyraes which are of the RRC type is about 38%, in contrast to the 9% suggested by the GCVS.
The data analyzed here constitute only 5.6% of the ROTSE-I sky coverage. The entire sky visible from New Mexico has already been observed for a number of epochs ranging from 150 at -30° dec to 1100 at the north celestial pole. As this study includes substantial regions at both low (b $`\sim`$ 10°) and high (b=90°) galactic latitude, it is reasonable to predict the number of objects we will detect in the full survey by simple extrapolation. From this extrapolation we expect to uncover a total of about 32,000 variable stars via ROTSE-I patrol analysis.
Perhaps most important, ROTSE-I variable star selection will be carried out in a completely uniform way across the full sky north of -30° dec. The entire catalog will be based on CCD photometry, obtained with a single instrument and reduced in a single, consistent way. This will make the ROTSE-I variable catalog uniquely useful for studies of the galactic distribution of variables, and for direct studies of galactic structure.
We have concentrated here on the discovery and classification of periodic variables. Several selection criteria are deliberately biased against the inclusion of flaring objects. Analysis of these data for a variety of aperiodic transients is underway, and will be reported in subsequent publications.
In addition to extending our analysis of existing data, the ROTSE program is assembling a variety of new tools for the study of astrophysical transients. In the near future we expect to begin repeating the ROTSE-I patrol scheme with modifications to allow us to obtain three color (V, R, and I) observations. A fourth channel will retain an open CCD for increased sensitivity and comparison to earlier ROTSE-I results. In the longer term, the ROTSE program is engaged in a large scale expansion designed to improve our sensitivity to GRB optical counterparts. We are constructing an array of ten 0.45 m telescopes, each with a 2° × 2° field of view and a thinned CCD camera. These telescopes will be deployed globally to a total of 6 sites, and will provide 24 hour coverage of the sky. These ROTSE-III instruments will allow us to extend the kind of variable studies presented here to at least 19th magnitude while still covering a substantial fraction of the sky.
The authors acknowledge useful conversations with Bohdan Paczyński and Joyce Guzik, and thank Michael Pierce and John Jurcevic for making available information on red supergiant variables prior to publication. ROTSE is supported at the University of Michigan by NSF grants AST 9970818 and AST 9703282, NASA grant NAG5-5101, the Research Corporation, the University of Michigan, and the Planetary Society. Work performed at LANL is supported by the DOE under contract W7405-ENG-36, by NASA, and by a Laboratory Directed Research and Development grant. Work performed at LLNL is supported by the DOE under contract W7405-ENG-48.
## 1 Introduction
Personal views are shaped by past experiences and so it may be worthwhile pondering a little about accidental circumstances which channelled the course of one's own thinking: meeting the right person at the right time, stumbling across a book or article which suddenly opens a new window.
Fifty years ago, as eager students at Munich University just entering the phase of our own scientific research, we were studying the enormous papers by Julian Schwinger on Quantum Electrodynamics, following the arguments line by line but not really grasping the message. I remember the feelings of frustration, realizing that we were far away from the centers of action. But, mixed with this, also some dismay which did not only refer to the enormous arsenal of formalism in the new developments of QED but began with the standard presentation of the interpretation of Quantum Theory. I remember long discussions with my thesis advisor Fritz Bopp, often while circling some blocks of streets ten times late in the evening, where we looked in vain for some reality behind the enigma of wave-particle dualism. Why should physical quantities correspond to operators in Hilbert space? Why should probabilities be described as absolute squares of amplitudes? Etc., etc. Since we did not seem to make much headway by such efforts I decided to postpone philosophy and concentrate on learning what was really done, which aspects were used in an essential way and were responsible for the miraculous success of Quantum Theory. Leave aside for a while questions of interpretation discussed by Bohr and Heisenberg, extrapolations like Dirac's transformation theory or von Neumann's theory of measurement and return to the more pragmatic attitude pervading for instance the book by Leonard Schiff on Quantum Mechanics. For this purpose Walter Heitler's book "*Quantum Theory of Radiation*" (second part of the first edition) proved immensely helpful. Here I saw what the typical problems were which Quantum Field Theory tried to address and I also learned to appreciate the progress made meanwhile by covariant perturbation theory and Feynman diagrams.
The next great piece of luck for me was (indirectly) caused by Niels Bohr. In the course of the planning of a great joint European laboratory for high energy physics (now CERN) he saw the need to introduce a young generation of theoretical physicists in Europe to this area and offered the hospitality of his Institute. He called an international conference in 1952 which I could attend and a year later I got a fellowship for spending a year in Copenhagen. This clearly ended the frustration of being isolated from the great world. The first fringe benefit of the 1952 conference was a garden party at the residence of Niels Bohr where I met Arthur Wightman who gave me the invaluable advice to read the 1939 paper by Wigner on the representations of the inhomogeneous Lorentz group. Returning to Munich I followed the advice and it was a revelation. Not only by putting an end to our concern about wave equations for particles with higher spin but because here I recognized a most natural starting point. The group, nowadays called the Poincarรฉ group, is the symmetry group of the geometry of space-time according to the theory of special relativity. What would be more natural than to ask for the irreducible representations of this group? Equally remarkable was the result that these representations (more precisely those of positive energy) correspond to the quantum theory of the simplest physical system, a single particle. It has just two attributes: a mass and a spin. Everything that can be observed for such a system, including, as far as possible, a position at given time, could be expressed within the group algebra. No reference to canonical commutation relations, guessed from classical mechanics, nor to the wave picture. The wave equation arises from the irreducibility requirement. I could not understand why this paper had remained almost unnoticed by the physics community for such a long time. In fact, even in 1955 I was introduced at some conference in Paris as: โHe is one who has read the 1939 paper of Wignerโ. This was indeed my major claim to fame.
Probably all the young theorists who had the privilege of spending a year as members of the CERN theoretical study group in Copenhagen will remember this as a wonderful time. The atmosphere at the Institute with the *spiritus loci*, emanating from the personality of Niels Bohr, and upheld by Aage Bohr and Christian Mรธller who was the official director of the study group, combined in a rare way scientific aspirations on the highest level with a friendliness discouraging any competitive struggles and allowing everyone to proceed at his own pace. Though there was some joint topic suggested, at my time it was the Tamm-Dancoff method and alternatives to perturbation theory, with some emphasis on nuclear physics, everyone was allowed to work on the subject of his own choice. So, after a short excursion into nuclear physics I returned to my pet subjects. Prominent among them was collision theory in quantum mechanics and field theory. The widely used recipe of โadiabatic switching off of the interactionโ appeared to me not only as ugly but also as highly suspect because the notion of โinteractionโ was not clear *a priori* and if one switched off the wrong thing one would decompose a nucleus into its fragments. This led me to appreciate the physical significance of various topologies for vectors and operators in Hilbert space. Since in high energy physics the experiments were not concerned with fields but with particles there was the idea that the role of a field was just to interpolate between incoming and outgoing free fields which were associated to some specific type of particle. Trying to implement this I unfortunately used the wrong topology. But just at the end of my stay in Copenhagen we received a preprint of the paper by Lehmann, Symanzik and Zimmermann who did things correctly and thereby provided an elegant algorithm relating โGreenโs functionsโ in quantum field theory to scattering amplitudes for particles.
Speaking of Copenhagen and quantum theory the โCopenhagen interpretationโ immediately comes to mind. I prefer to call it the โCopenhagen spiritโ or, more specifically, the natural philosophy of Niels Bohr. I did have some opportunities to talk to the great master but, in spite of my great admiration and some efforts, this was not fruitful. It was only in later years that I understood the depth of various parts of his philosophy. But there always remained one disagreement which came from the question: what are we trying to do and what is guiding us? Physics began by the recognition that there are relations between phenomena which are reproducible. These could be studied systematically, isolating simple processes, controlling and refining the conditions under which they occur. The formulation of the regularities found and the unification of the results of many different experiments by one coherent picture was achieved by a mapping into abstract worlds: a world of appropriate concepts and a world of mathematical structures dealing with relations within and between various sets of mental constructs, one of them being the set of complex numbers. This endeavor manifestly led to some level of understanding of โthe laws of natureโ as evidenced by the development of a technology which provided mankind with enormous powers to serve their conveniences and vices. But what was understood and what was the relative role in this process played by observations, by creation of concepts and by mathematics? When Dirac wrote on the first pages of the 1930 edition of his famous book: โ*The only object of theoretical physics is to calculate results that can be compared with experiment*โ, this can hardly be taken at face value. As he often testified later, he was searching for beauty and he found it in mathematical structures. So much so that he preferred to look for beautiful mathematics first and consider their possible physical relevance later. Indeed, the road from phenomena to concepts and mathematics is not a one-way street. As the studies shifted from coarser to finer features the theory could not be derived directly from experiments but, as Einstein put it, it had to be freely invented and tested subsequently by suggesting experiments. In this passage back and forth between phenomena and mental structures many aspects entered which cannot be rationalized. The belief in harmony, simplicity, beauty are driving forces and they relate more to musicality than to logic or observations.
There was one further highly significant and somewhat accidental occurrence in shaping my subsequent work and this may also illustrate the above remarks. Professors F. Bopp and W. Maak in Munich decided in 1955 that it was important to exchange experience between theoretical physicists and mathematicians. This initiative was not rewarded by visible success. The number of participants dwindled quickly and the enterprise ended after a few months. But for me it was of paramount importance. I was introduced there to rather recent work of the Russian mathematicians Gelfand and Naimark on involutive, normed algebras and to the work of von Neumann on operator algebras and reduction theory. I saw that Hilbert space resides in some wider setting which, at least from the mathematical point of view, constitutes a rather canonical structure resulting from a few natural structural relations. Besides the standard algebraic operations it needed a \*-operation (involution) and one is led to a natural topology induced by a unique "minimal regular norm". It appeared highly likely that this structure was behind the scenes in the mathematical formalism of quantum theory. The prototype of such an algebra is furnished by group theory. There a most important tool is the consideration of functions on the group with values in the complex numbers. They obviously form a linear space because we can multiply them by complex numbers and add them. If there is a distinguished measure on the group (which is the case for compact groups and, up to a normalization factor, for locally compact ones) the group multiplication defines a product in the space of these functions, the convolution product. The inverse in the group defines a \*-operation in this algebra of functions by $`f^{\ast}(g)=\overline{f}(g^{-1})`$. The resulting algebra yields the representation theory of the group. An irreducible representation corresponds to a minimal (left) ideal in the algebra.
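For a reader who wants to see these structures in the smallest possible example, the convolution algebra of a finite cyclic group can be written down in a few lines. The numerical check below is merely illustrative and, of course, no substitute for the general theory.

```python
import numpy as np

n = 6  # the cyclic group Z_n: elements 0, ..., n-1 under addition mod n

def convolve(f1, f2):
    """Convolution product (f1 * f2)(g) = sum over h of f1(h) f2(h^{-1} g)."""
    return np.array([sum(f1[h] * f2[(g - h) % n] for h in range(n))
                     for g in range(n)])

def star(f):
    """Involution: f*(g) is the complex conjugate of f(g^{-1})."""
    return np.array([np.conj(f[(-g) % n]) for g in range(n)])

rng = np.random.default_rng(0)
f1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
f2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# the defining property of the involution: (f1 * f2)^* = f2^* * f1^*
assert np.allclose(star(convolve(f1, f2)), convolve(star(f2), star(f1)))

# for an abelian group the minimal ideals are spanned by the characters;
# the discrete Fourier transform diagonalizes the convolution product
assert np.allclose(np.fft.fft(convolve(f1, f2)), np.fft.fft(f1) * np.fft.fft(f2))
```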
If in the following I try to describe things which I believe to have learned concerning the interrelation between observed phenomena, concepts and mathematical structures I must precede this with some apologies. The inadequate handling of references is due to the state of disorder in my notes and lack of time. The abstractions used in describing the procedures of acquiring knowledge may be too schematic. There is a painful gap between their qualitative character and the very precise mathematical structures into which they are mapped.
## 2 Concepts and Mathematics in Quantum Theory
A paper describing some fascinating recent experiments was entitled โ*Reality or Illusion?*โ These experiments (see e.g. refs. ) have lent impetus to the long standing discussion about the meaning of reality in quantum theory. Do the discoveries force us to abandon the naive idea of an outside world called nature whose laws we try to find? What is the role of the observer? Do the puzzles relate to the mind-body problem?
Many different views concerning such questions have been voiced throughout the past seventy years. So if I try to express mine it may be pardonable to proceed in an extremely pedantic fashion.
A single experiment of the type alluded to above combines many individual clicks of some detectors. Though each click is unique and neither repeatable nor predictable even under optimally controlled circumstances, we may regard it as a โreal factโ in the sense in which these words are used in any other context. Let us call it an โeventโ. Its existence is not dependent on the state of consciousness of human individuals. In modern times it is usually registered automatically, stored in computer memories and there is no dispute between the members of a group of experimenters about its โrealityโ. The outcome of the experiment refers to the frequency with which a particular configuration of events occurs in many runs and this is reported as a probability of the phenomenon under precisely described circumstances. To be accepted, this result must be reproducible by any other group of scientists who is willing to invest time and resources in repeating the experiment. The events mentioned are coarse. A detector is macroscopic. We regard macroscopic bodies as โreal objectsโ and statements about their placement in space and time as โreal attributesโ. The word โrealโ just means here that such objects, events and their space-time attributes belong to common experience shared by many persons and do not depend on the state of consciousness of an individual. The observed relations between them constitute the only empirical basis from which the โfree inventionโ of a theory can proceed. With regard to this mental task there is a piece of wisdom which I learned from F. Hund. It might be called Hundโs zeroth rule. He pointed out that the progress of physical theory depended on the lucky circumstance that always some effects were small enough to remain unnoticed or could be disregarded as insignificant at the time a particular piece of the theory was proposed. We cannot take many steps at the same time. We should regard a theory always as preliminary; it will disregard some fine features of which we are luckily ignorant or which we neglect in order to obtain a tractable idealization.
The purpose of these lengthy elaborations is twofold. First, I do not think that physics can make any contribution to the mind-body problem. The attempt to explain some puzzling aspects of quantum physics by invoking subjective impressions and the role of the consciousness of individual human beings is not an appropriate answer. Secondly, the concept of *event* is necessary in quantum physics. It is an independent concept. The mental picture that it corresponds to an interaction process between an atomic object and a macroscopic one is misleading because experiments tell us that there are no atomic objects in an ontological sense (see below). Of course, from this we may conclude that there are no macroscopic objects either and that their apparent reality results from an asymptotic idealization. This is true (see e.g. the discussion in ref. of the emergence of classical concepts due to large size and decoherence). But the idealization is covered by Hundโs zeroth rule and is essential for the form of the present theory. If we want to avoid it we must take the next step in the development of the theory. Let us address now some specific aspects.
1. In experiments we usually (necessarily?) distinguish two parts: a source which determines the probability assignments (subsumed under the notion of โstateโ), and a set of detectors to whose responses the probabilities apply. Though in the description of the source a variety of considerations enter which will have to be looked at more closely, we shall, for simplicity of language, just idealize it as characterized by some pattern of โsource eventsโ. The total setting, consisting of source events and target events, where the former determine a conspicuous probability assignment for the occurrence of the latter, may be called a โquantum processโ. Bohr emphasized the indivisibility of the process as one of the key lessons of quantum theory. This poses the question of how we can isolate such a process from the rest of the world. In technical terms, what do we have to take into account in โpreparing a stateโ in order to get a reproducible probability assignment for a pattern of target events (defined by some arrangement of detectors). Here we are helped by lucky circumstances. We live in a reasonably steady environment; its influence does not change rapidly in space and time. So, if we are stupid in the state preparation we just get an uninteresting probability assignment, a โvery impure stateโ. The art of the experimenter is needed to improve state preparation and render the probability assignment as conspicuous as possible. It appears, however, that there is a limit to such improvements, idealized by the notion of a โpure stateโ.
2. Given some definite process one would like to assign to it a โphysical systemโ as the agent producing the target events or, more carefully, as the messenger between source and target events. This is clearly a mental construct. Can we attach any element of reality to it? If we focus on a single event involving one detector far removed from the source we may think of a single particle as this messenger. But we may also consider patterns of several events, seen in coincidence arrangements of detectors far removed from the source and from each other. Then we sometimes find correlations in their joint probabilities which are of a very peculiar type. If we believe that there is a specific messenger from the source to each target event (for instance a particle) then, whatever notion of state we try to assign to those, we cannot represent the joint probability for the pattern of events as arising from joint probabilities for a corresponding set of individual states of the messengers. This is the conclusion to be drawn from the violation of Bellโs inequality. It is not so easily seen in the first discussions which focused on hidden variables but emerges clearly in ref. . Another equally surprising effect has been demonstrated by Hanbury-Brown and Twiss. They start from two entirely independent source events (for instance photons emitted from two far distant surface regions of a star which happen to arrive almost simultaneously in the observatory). So they can cause a coincidence in two detectors. Each detector responds to one photon but can, of course, not distinguish from which source event it comes. By varying the difference of the optical paths from the telescope exit to the two detectors one finds varying intensity correlations in the coincidence signals. This means that the cause for the response of one detector cannot be attributed to the arrival of either a messenger from the left edge or from the right edge of the star. Both photons work together though there is no phase relationship between the two emission acts. There is a causal relation between the pattern of two source events and the pattern of two target events but it cannot be split into causal ties between single events.
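The quantitative content of the first example can be made explicit in a few lines. For the spin singlet, the correlations of the four CHSH measurement settings sum to $`2\sqrt{2}`$ in magnitude, whereas any assignment of individual outcomes carried by separate messengers is bounded by 2. The small numerical sketch below is only meant to make this concrete.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation of spin measurements along angles a, b in the x-z plane."""
    A = np.cos(a) * sz + np.sin(a) * sx
    B = np.cos(b) * sz + np.sin(b) * sx
    return np.real(singlet.conj() @ np.kron(A, B) @ singlet)

a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)   # -2*sqrt(2); local assignments of individual outcomes give |S| <= 2
```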
Taken together these experiences imply that the notion of a โphysical systemโ does not have independent reality. What is relevant for the click of a single detector is some notion of โpartial stateโ prevailing in its neighborhood. In both of the above examples this is described as an impure state of a single particle. In the EPR-example as discussed by Bell it is determined by one source event, the decay of an unstable particle. In the second example it is caused by two source events. The probability for the response of both detectors in coincidence depends on the partial state in the union of the two neighborhoods and this is apparently not determined by the pair of partial states around the individual detectors. Thus quantum physics exemplifies the saying: โthe whole is more than the sum of its partsโ and it does so in extreme fashion. The Pauli principle claims that all electrons in the universe are correlated. The reality behind the mental picture of a physical system consisting of a certain number of particles refers to a certain set of events with causal connections between them, manifested by the existence of a probability for the total process. In an ideal experiment this is obtained by counting the number of times the pattern of target events is realized, dividing it by the number of times the source events occured. In the usually prevailing cases where the source is not adequately known we can still determine relative probabilities of different patterns of target events assuming that the source remains constant.
The holistic aspect mentioned above is often called the โessential non-localityโ of quantum theory. But this is an unfortunate terminology because the only reason why we can talk about specific processes at all resides in the locality of individual events and the causal structure of space-time.
3. The reader may have wondered why I specialized the usual notion of โobservableโ to that of a detector and talked about events instead of measuring results indicated by the position of a pointer of some instrument. The spectral resolution of self-adjoint operators which played such an essential role in the development of early quantum mechanics was not even mentioned so far. One reason for this is the problem of how to achieve the mapping from a particular arrangement of instruments to its representative in the mathematical scheme. In early quantum mechanics the idea that we consider a physical system consisting of a definite number of particles seemed to pose no problem (a beautiful illustration of Hundโs zeroth rule). The degrees of freedom were positions and momenta, appearing in the canonical formalism of classical mechanics on equal footing. Though it became clear that these degrees of freedom could not be real attributes of the system one still talked about measuring one of them (or simple functions of them like energy and angular momentum). Bohr emphasized that the full description of the experimental arrangement is needed โto tell our friends what we learnedโ and that this could only be done in plain language. But, since the classical degrees of freedom persisted, this description of the arrangement could ultimately be summarized by one mathematical object which corresponded in a symbolic way to a classical quantity. How can one proceed in this passage from the description of an arrangement of hardware to a mathematical symbol relating to the system? The primary piece of information is given by the placement of macroscopic bodies in space-time. These bodies perform different functions. Some parts may be considered as โstate preparation proceduresโ representing the source events. Other parts yield the measuring result which is an unresolvable phenomenon, an unpredictable decision in nature, a coarse event. Its primary attribute is an approximate position in space-time. The representation of the whole arrangement (apart from the primary source) by a self-adjoint operator, interpreted as describing the measurement of a quantity related to some function of the classical degrees of freedom involves the theory (Schrรถdinger equation) in conjunction with idealizations and approximations which are transparent only in simple cases. The operators corresponding to momentum and energy have clear significance as generators of translations in space and time but are only indirectly related to observations, which in the last resort concern the position of an event in space-time. The position operator of a particle at a prescribed time yields spectral projectors which can approximately characterize an event. But the assumed existence of a family of mutually exclusive events with certainty that one of them must happen is an extrapolation which becomes highly unnatural in relativistic situations. This is mildly indicated already by the ambiguities arising in the attempt to define a position operator in Wignerโs analysis.
The fundamental discovery, that the โelementary particlesโ, formerly believed to be the building stones of matter, are not eternal but can be created or destroyed in processes, forces us to consider states whose particle content is not only varying but undefined in some regions. While the concepts of โsystemโ or โparticleโ suggest some object existing in an ontological sense, the concept of โstateโ belongs to the realm of possibilities (potentialities, propensities) for the realization of *something coming into existence*, an event. This would not be so in a deterministic theory but if we believe that the indeterminacy in the prediction of phenomena, inherent in the formulation of quantum physics, is a feature of the laws of nature and not just due to ignorance which could be lifted by future studies then the distinction between the realm of possibilities and the realm of facts becomes imperative. The โstateโ belongs to the former. Strictly speaking it provides a quantitative description of a *contribution* to the probability for the occurrence of events. The other contribution is given by the placement and type of detectors. Thus also the notions of โsystemโ and โparticleโ belong to the realm of possibilities. But they retain their importance. They allow us to classify (at least under favorable circumstances) the possible partial states in a region by a *denumerable set*.
This procedure involves the center piece of the mathematics of quantum theory: the superposition principle and eigenvalue problems; more generally, the determination of invariant subspaces in a complex linear space with respect to the action of the symmetry group of the theory. The intuitive steps leading to the recognition that this mathematical structure (Hilbert space, involutive algebras, representation theory of groups) offers the key to quantum theory appear to me as a striking corroboration of Einsteinโs emphasis on free creations of the mind and Diracโs conviction that beauty and simplicity provide guidance.
Returning to our line of argument: the ordering of states in classes by the concept of a โsystemโ corresponds to the selection of an invariant subspace, under the action of the symmetry group. A minimal invariant subspace, an irreducible representation, may be called an elementary system. Its attributes are group characters. If we consider only the symmetry group of space-time, the Poincarรฉ group, the irreducible representations give us states of a stable system, a system which could persist eternally if it were alone in the world and no events could occur. This simple system is a single particle. Its attributes are a value of the mass and the spin, which define a group character. The reason why the simplest systems play such an important role for observations is due to the circumstance that in many experiments the partial state pertaining to a large but limited region of space-time can be very closely approximated by the restriction of *a global single particle state* to the region. This will, in fact be the standard situation in the overwhelming part of space-time if the mean density of matter is small. To obtain a basis in the space of single particle states we must choose some maximal set of commuting generators. In this choice the generators of space-time *translations*, whose spectral values are energy-momentum 4-vectors, play a preferred role in the following respect. If we look in a region whose extension is small compared to its mean distance from source events then the partial state there is well approximated by a mixture of parts of plane waves, (improper) eigenstates of the generators of translations, each belonging to a specific energy-momentum vector.
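For completeness, the classification just sketched is fixed by the two Casimir invariants of the Poincaré group: for a massive representation of mass $`m`$ and spin $`s`$ (metric signature $`(+,-,-,-)`$, units with $`\hbar =c=1`$)

$$P_\mu P^\mu =m^2,\qquad W_\mu W^\mu =-m^2s(s+1),$$

where $`W^\mu =\frac{1}{2}\epsilon ^{\mu \nu \rho \sigma }M_{\nu \rho }P_\sigma `$ is the Pauli–Lubanski vector built from the translation and Lorentz generators; the zero mass representations are classified by helicity instead.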
We confined attention so far to the Poincarรฉ group describing the space-time symmetry. The full symmetry group includes โgauge symmetriesโ whose characters are charge quantum numbers. The first example was the electric charge with its description as a character of U(1). The generalizations of this in high energy physics led to flavor and color multiplets associated with the groups SU(2), SU(3). To avoid misunderstandings it must be stressed that we talk here about a global gauge group. The significance of local gauge invariance will be addressed later.
### 2.1 Conclusions
Position and momentum belong to different parts of the scheme. Position is an (approximate) attribute of an event, not of a particle, and the event marks a position in space-time not a position in space at an arbitrarily assumed time (as the picture of a world line for a particle would suggest). In simple cases the event may be regarded as the interaction process between a particle and a detector. But the notion of โparticleโ does not correspond to that of an object existing in any ontological sense. It relates to the simplest type of global state and describes possibilities, not facts. The notion of โpartial stateโ demands in addition that we ignore all possible events outside some chosen region and thus ignore possible correlations with outside events. The concepts of โparticleโ and โphysical systemโ arise from the possibility of ordering global states into distinct classes defined by the symmetry group of the theory. A particle corresponds to an irreducible representation of this group. Its attributes are group characters. A system corresponds to some subrepresentation of the tensor product of irreducible representations. Experience tells us that only a countable set of irreducible representations (particle types) appears in nature. The determination of these (the masses, spins, charge quantum numbers of physical particles) is one of the tasks of the theory.
In observations we are concerned with partial states which result by the restriction of global states to some regions in which we choose to place detectors. For a fixed global state the partial state in a region can be approximately described by the restriction of a global state which belongs to the class of some specific system. In other words: if we focus attention on some particular region then the global state may tell us for instance that in there the probability for events is almost the same as that predicted from the restriction of some single particle state. Indeed, if we choose the region sufficiently small then it will usually suffice to consider only mixtures of single particle states with definite momenta. The existence of zero mass particles complicates this picture somewhat, as evidenced by laser beams and by infrared problems where the number of particles is no longer useful for the description of a partial state.
The analysis of global states in terms of various systems approximating the partial states in various regions of space and time is the other task of the theory (the theory of collision processes).
In the whole scheme we still need an observer. No facts are created if no detectors are around anywhere. Though the consciousness of an individual plays no role (it was eliminated by the assumed "as if" reality of macroscopic bodies and coarse events) the scheme still appears somewhat artificial. It is a description of what we may learn by experiments. But looking at the detailed mathematical structure, developed to cope with the above mentioned tasks of the theory, it seems clear that the notions of macroscopic bodies and coarse events are asymptotic concepts. If, on the other hand, we wish to replace them by finer ones we encounter difficulties. They can, I believe, not be overcome without a radical change of the formalism involving our understanding of space and time. As long as there is an enormous disparity between collision partners, one being a macroscopic body, the other an atomic object, we can talk about an approximate position of the event and give upper bounds for its uncertainty, relating to the size and the time of sensitivity of the (effective part of the) detector, and we can give lower bounds for the energy-momentum transfer needed to overcome the barriers against the appearance of a significant change. This suffices for practical purposes but does not seem to be the ultimate answer if we look at the vertex of a high energy event in a storage ring.
## 3 The mathematical structure in relativistic quantum physics and its interpretation
In quantum field theory the basic mathematical objects, the fields, are functions of points in space-time. These are singular objects which have to be smeared out over some finite regions to yield observables which can be represented by operators in a Hilbert space. There are problems. Some serious ones are related to gauge invariance, specifically to the local gauge principle first encountered in quantum electrodynamics (QED). If one wants to avoid "unphysical states" one has to restrict attention to gauge invariant quantities. From these one may hope to construct algebras of observables. To be precise: we abstract from this heuristic consideration that we can obtain a normed, involutive algebra (for short, a $`C^{\ast}`$-algebra) for each bounded, open region $`๐ช`$ of space-time. The correspondence
$$๐ช\to ๐(๐ช)$$
(3.1)
between regions and algebras yields one essential piece of information for the analysis of the consequences of the theory. We call $`๐(๐ช)`$ the algebra of observables of the region $`๐ช`$. There are some natural relations between these local algebras. Obvious is the inclusion relation:
($`i`$) $`๐ช_1\subset ๐ช_2`$ implies $`๐(๐ช_1)\subset ๐(๐ช_2)`$.
This allows the definition of a global $`C^{}`$-algebra $`๐`$ as the โinductive limitโ, the completion of the union of all local algebras in the norm topology.
The second important relation reflects the causal structure of space-time:
($`ii`$) If $`๐ช_1`$ is space-like to $`๐ช_2`$ then $`๐(๐ช_1)`$ and $`๐(๐ช_2)`$ commute.
The third basic ingredient is covariance with respect to the Poincaré group $`๐ซ`$. We need a realization of $`๐ซ`$ by automorphisms of $`๐`$; to each element $`g\in ๐ซ`$ there is an automorphism of $`๐`$ denoted by $`\alpha _g`$ which should have the obvious geometric significance:
($`iii`$) If $`A\in ๐(๐ช)`$, then $`\alpha _gA\in ๐(g๐ช)`$,
where $`g๐ช`$ denotes the region resulting from shifting $`๐ช`$ by $`g`$. It is convenient to assume that these algebras have a common unit element.
We call the structure defined by the correspondence (3.1) with the properties mentioned a (covariant) net of local algebras. A general *state* $`\omega `$ corresponds to a normalized, positive linear form, i.e. a linear function $`A\mapsto \omega (A)`$ from the algebra $`๐`$ to the complex numbers which takes real, non-negative values on the positive elements of the algebra:
$$\omega (A^{\ast}A)\geq 0\quad \text{for any }A\in ๐;\qquad \omega (\mathrm{๐})=1.$$
(3.2)
A partial state in some region $`๐ช`$ is defined in the same way with $`๐`$ replaced by $`๐(๐ช)`$. It corresponds to the restriction of a class of global states to the subalgebra considered.
From section 2 we see that the physical interpretation requires a characterization of those elements of $`๐`$ which represent detectors for an event in a region $`๐ช`$. The first guess might be to identify the projectors in $`๐(๐ช)`$ with such detectors. This is, however, not sufficient. A detector, in contrast to a source, must be passive; it should not click in the vacuum situation. We must control the energy-momentum transfer. Starting from any element $`A\in ๐`$ we can construct elements
$$A(f)=\int (\alpha _xA)f(x)\,d^4x,$$
(3.3)
where $`x`$ refers to a translation in space-time. If the Fourier transform of the function $`f`$ has support in a region $`\mathrm{\Delta }`$ in $`p`$-space then the energy-momentum transfer of $`A(f)`$ is limited to $`\mathrm{\Delta }`$. Therefore we add to the structure described so far the (somewhat over-idealized) assumption that there exists a ground state $`\omega _0`$, the vacuum, which is invariant with respect to the Poincaré group and is annihilated by any $`L\in ๐`$ which is of the form (3.3) with support $`\mathrm{\Delta }`$ outside of the closed forward cone $`V^+`$ (positive time-like vectors in $`p`$-space including $`0`$). This assumption, called the *spectrum condition*, allows us to define detectors which are approximately associated to a region $`๐ช`$ in position space and to some window $`\mathrm{\Delta }`$ in $`p`$-space indicating the minimal energy-momentum of the "atomic object" needed for the response of the detector. Any element
$$P=L^{\ast}L\quad \text{with}\quad \|P\|=1,\;L=A(f)$$
(3.4)
represents such a detector if we start with $`A\in ๐(๐ช)`$ and choose the function $`f`$ so that (apart from small negligible tails) the support in $`x`$-space is a small region around the origin and its Fourier transform is practically zero outside of the region $`\mathrm{\Delta }`$. Of course this does not yield a precise localization or momentum transfer but this is not relevant if we think of a detector as a macroscopic body. Let us note that $`P`$ will in general not be a projector but this is not necessary either, because we need not consider the negation, an instrument which indicates with certainty that no event has happened.
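The mechanism behind this construction can be checked in a toy model. In a finite dimensional Hilbert space (a drastic simplification, of course) the smearing (3.3), restricted to time translations, multiplies the matrix elements of $`A`$ in the energy eigenbasis by $`\widehat{f}(E_m-E_n)`$; choosing $`\widehat{f}`$ to vanish on the non-negative frequencies makes $`L`$ annihilate the ground state, so the vacuum response of $`P=L^{\ast}L`$ vanishes. In the sketch we prescribe $`\widehat{f}`$ directly; all numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                                    # toy Hilbert space dimension
H = rng.standard_normal((d, d))
H = (H + H.T) / 2                        # a random Hamiltonian
E, U = np.linalg.eigh(H)                 # spectrum, ascending order
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# <m|A(f)|n> = <m|A|n> fhat(E_m - E_n); prescribe fhat on the Bohr frequencies
fhat = lambda w: np.exp(-(w + 2.0) ** 2) * (w < 0)   # vanishes for w >= 0
Aeig = U.conj().T @ A @ U
L = Aeig * fhat(E[:, None] - E[None, :])

ground = np.zeros(d); ground[0] = 1.0    # ground state in the eigenbasis
print(np.linalg.norm(L @ ground))        # 0: the detector gives no vacuum response
```

In the field theoretic setting the same argument runs through the spectrum condition for the energy-momentum operators.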
The above characterization of a detector does not tell us what the detector detects. But we discover that such additional information is not needed *a priori*. If the net is given and a vacuum state exists (the spectrum condition), then we have the tools to analyze the physical content of the theory by studying the response of coincidence arrangements, represented by products of $`P_k`$ belonging to mutually space-like situated localization regions, in any state. A single particle state, for instance, can be defined as a state which is "simply localized at all times", i.e. never capable of producing a coincidence of two (space-like separated) detectors:
$$\omega (P_1P_2)=0$$
(3.5)
for any such choice of the $`P_k`$ ($`k=1,2`$), but $`\omega (P)\neq 0`$ for some $`P`$. For further elaborations see .
A net satisfying only the requirements mentioned so far need not yield a physically reasonable theory. It may, for instance, describe no particles at all or a non-denumerable number of different types. Further properties are needed. Some necessary conditions are known which concretize the structure considerably and relate to various physical aspects ranging from the appearance of charge quantum numbers in particle physics to properties of thermodynamic equilibrium states . But we do not know yet how to formulate restrictive conditions powerful enough to define a specific net, let alone the ambitious aim of constructing a net whose physical content is corroborated by experiments.
It is my personal conviction that in this step the local gauge principle plays a crucial role. This assessment stems partly from progress in theoretical high energy physics in the past decades and partly from my belief in simplicity and naturalness of fruitful basic concepts. The principle mentioned tells us that we should not try to focus on global symmetries. In a local theory the symmetries should only govern the structure in the small and the comparison of their action in different regions needs additional information which is called a "connection" because the comparison depends on the way we pass from one region to the other. In the two important classical field theories which have proved their worth for physics, Maxwell's electrodynamics and Einstein's general relativity, this principle is encoded. In the former it was recognized rather late and refers to the gauge symmetry related to electric charge; in the latter it was one of the guiding principles and refers to the Poincaré symmetry of space-time. The Lorentz part, which keeps one point in space-time fixed, is reduced to a local symmetry for the tangent space at this point; the translations are replaced by the connection. Quantum physics as we know and use it is anchored on the uncritical acceptance of space-time as an arena in which we can place instruments, an arena with known geometry including a causal structure. Some aspects of the theory depend also on the existence of a global symmetry for this geometry. (Quantum physics in "curved space-time", representing a given, external gravitational field, retains the first part of these requirements: the net structure of local algebras with the properties ($`i`$), ($`ii`$) persists, while the loss of ($`iii`$) implies that the spectrum condition has to be replaced. A considerable amount of work has been devoted to this problem but we shall not discuss it here; it would be beyond the scope of this paper.) If we lose this anchor completely we enter an area in which the conceptual structure and mathematical formalism of quantum physics cannot persist. In this area the problem mentioned at the end of section 2 and some of the questions addressed in the next section may become imperative. So, to stay on the present level, we wish to keep global Poincaré symmetry and only reduce the internal symmetries, relating to the charge structure, to local significance needing the definition of a connection. In a classical field theory the formalism of Yang-Mills theories, generalizing electrodynamics to non-Abelian local internal symmetry groups, is well understood, using the notions of sections and connections in a fiber bundle. The transfer of this formalism to quantum theory is highly nontrivial and, in my opinion, not yet adequately understood. If we use the approach via algebras of observables sketched above then the incorporation of the additional structure due to (not directly observable) local internal symmetries is obscured by the singular nature of points and lines used in the classical case. To handle this we need knowledge about the short distance behavior (ultraviolet limit) of the theory. A few tentative suggestions concerning the notion of a quantum connection are given in ref. . I consider the clear understanding of how local internal symmetries can be incorporated in a well defined mathematical structure as one of the most important immediate aims on which many subsequent developments may hinge.
Such a theory constitutes, of course, a hybrid, since the global nature of the geometric symmetries is kept. So it may not be of primary importance to clarify whether the continuum limit really exists.
## 4 Retrospective and Perspectives
Comparing the picture sketched so far with the discussions on the interpretation of quantum theory seventy years ago we may note:
1. The "language of classical physics" stressed so much by Bohr as indispensable for the observer (to enable him to tell what was done and learned) remains an essential ingredient but, if we disregard questions of convenience, it may be reduced to the description of geometric relations in the placement of various macroscopic bodies and the coarse events observed in space and time. All further information is contained in the mathematical structure. The correspondence principle needed to map the description into the realm of mathematical symbols is provided by the reference to classical space-time and its geometric symmetry on both sides. It is the correspondence (3.1) together with the action of the translation group, needed to characterize the passive nature of detectors by (3.4). Apart from this, the global symmetry of space-time is needed in two respects: in a single experiment, because it studies the statistical relations in an ensemble of many individual event patterns which occur at different times; and in the communication with other observers who would like to test the results in a different region of space-time.
2. The indivisibility of a process emphasized by Bohr leads to the concept of an event as an irreducible unit, and it manifests itself also in the holistic aspect of the causal relations between events. The isolation of an individual process as a distinguishable, coherent part in the history of the universe, a pattern of events which can be considered by itself without mentioning its ties with other parts, depends to some extent on the observer's choice of how much he wants to consider; but this choice is limited by the requirement that it must lead to a well defined, conspicuous probability assignment for the total process, a requirement which can be precisely fulfilled only in a steady environment.
3. The distinction between possibilities and facts, which appears to be unavoidable in a formulation of indeterministic laws, implies a distinction between future and past. Bohr mentions the "essential irreversibility inherent in the very concept of observation". If the term observation does not mean that the ultimate responsibility for deciding what constitutes a fact is delegated to the consciousness of an individual human being (which, I think, would be too unreliable to be useful for the purposes of physics), then we must accept the essential irreversibility inherent in the concept of an event. This endows the "arrow of time" with an intrinsic significance in the physical theory and corresponds to a picture of reality as evolving in successive steps of a process with a moving boundary, separating past facts from future possibilities. It corresponds to the picture drawn by the philosopher A.N. Whitehead . This does not conflict with the existence of a time reversal symmetry of the theory, which describes a symmetry in the probability assignments for processes. The significance of the arrow of time is encoded in the existing theory by the spectrum condition for the energy-momentum of states entering in the characterization of detectors (which provide one contribution for the probability of an event). The time reversal operator, being anti-unitary, does not change the sign of the energy.
Turning now to perspectives for future development of the theory, we might take a few hints from the preceding discussion. First, that all symmetries should be considered as local, but that we should not associate the meaning of local with a point in a space-time continuum but with a possible event. A pattern of events with a web of causal ties between them bears some analogy to a section in a fiber bundle whose base space is the set of events and whose typical fiber is a direct sum of representations of the symmetry group. The causal ties provide the connection. The dynamical law must then describe the probability assignment for different possibilities of growth of such a pattern in the evolution process in which possibilities turn into facts and the boundary between past and future changes. Included in this task is the determination of the subset of representations in the fiber of an event, the generalized eigenvalue problem yielding the relation between masses, spins and charge quantum numbers. I shall not try to elaborate on the many questions connected with such a picture and its relation to the existing formalism. This is beyond the scope of this paper and the capabilities of its author.
# Quantum mechanics and the Continuum Problem (II)
## Abstract
In the one-dimensional case, it is shown that the basic principles of quantum mechanics are properties of the set of intermediate cardinality.
PACS numbers: 03.65.Bz, 02.10.Cz
The concept of discrete space is not the unique alternative to continuous space. Since a discrete space is a countable set, there is an intermediate possibility connected with the continuum problem: space may be neither continuous nor discrete. The commonly held view is that the independence of the continuum hypothesis (CH) is not a definite solution of the continuum problem, as a consequence of the incompleteness of set theory. Nevertheless, the independence of CH implies a unique, definite status for the set of intermediate cardinality. It is important here that this set must be a subset of the continuum (the continuum must contain a subset equivalent to the intermediate set). Taking into account that any separation of the subset would be a proof of the existence of the intermediate set, which contradicts the independence of CH, we get that the set of intermediate cardinality exists only as a subset of the continuum. In other words, the subset of intermediate cardinality, in principle, cannot be separated from the continuum (set theory "confinement"). If Zermelo-Fraenkel set theory is consistent, complete, and gives the correct description of the notion of set, then this is the only possible understanding of the independence of CH.
Note that if we postulate the existence of the intermediate set (in other words, if we take the negation of CH as an axiom), the result will be the same: since any construction or separation of the set is forbidden by the independence of CH, we have to reconcile ourselves to the same "latent" intermediate subset in the continuum which we get without any additional assumption. And it is not reasonable to take CH as an axiom because, as a consequence, we would lose this subset.
According to the separation axiom schema, for any set $`X`$ and for any property expressed by a formula $`\phi `$ there exists a subset of $`X`$ which contains only those members of $`X`$ having $`\phi `$. Then a subset cannot be separated from the continuum if each point of the subset has no peculiar properties of its own but only combines properties of the members of the countable set and of the continuum.
At first sight, this seems to be meaningless. But the content of the requirement coincides with the content of wave-particle duality: a quantum particle combines properties of a wave (continuum) and of a point-like particle (the countable set).
As an illustration, consider a brick road which consists of black bricks and white bricks. If we know (or suspect) that among them there are some bricks which have a white top side and a black bottom side (or vice versa), we nevertheless cannot find them. Based only on the top view, the problem of separation (and even existence) of black-and-white bricks is undecidable. Each brick can be black-and-white with some probability. However, if we have both the top view and the bottom view, we can find these bricks: each of them looks like a white brick in one view and like a black brick in the other ("black-white duality").
In order to get information about the "invisible" set, consider the maps of the intermediate set $`I`$ to the sets of real numbers ($`R`$) and natural numbers ($`N`$).
Let the map $`I\to N`$ decompose $`I`$ into a countable family of equivalent, mutually disjoint infinite subsets: $`\bigcup _nI_n=I`$ ($`n\in N`$). Let $`I_n`$ be called a unit set. All members of $`I_n`$ have the same countable coordinate $`n`$.
Consider the map $`I\to R`$. The continuum $`R`$ contains a subset $`M`$ equivalent to $`I`$, i.e., there exists a bijection
$$f:I\to M\subset R.$$
(1)
This bijection amounts to a separation of the intermediate subset $`M`$ from the continuum. Since any separation procedure is a proof of the existence of the intermediate set and, therefore, contradicts the independence of the continuum hypothesis, we, in principle, do not have a rule for assigning a definite real number to a point of the intermediate set. Hence, any bijection can take a point of the intermediate set only to a random real number. If no real numbers are preferred, then we have the equiprobable mapping. This already conforms to the free quantum particle. In the general case, we have the probability $`P(r)dr`$ of finding a point $`s\in I`$ about $`r`$.
Thus the point of the intermediate set has two coordinates: a definite natural number and a random real number:
$$s:(n,r_{random}).$$
(2)
Only the natural number coordinate gives reliable information about the relative positions of the points of the set and the size of its interval. But the points of a unit set are indistinguishable. It is clear that the probability $`P(r)`$ depends on the natural number coordinate of the corresponding point. Note that the information about a point in the one-dimensional intermediate set is necessarily two-dimensional.
For two real numbers $`a`$ and $`b`$, the probability $`P_{ab}dr`$ of finding $`s`$ in the union of the neighborhoods $`(dr)_a\cup (dr)_b`$ satisfies
$$P_{ab}dr\ne [P(a)+P(b)]dr$$
(3)
because $`s`$ corresponds to both (all) points at the same time (the events are not mutually exclusive). It is convenient to introduce a function $`\psi (r)`$ such that $`P(r)=\mathcal{P}[\psi (r)]`$ and $`\psi _{ab}=\psi (a)+\psi (b)`$. The idea is to compute the non-additive probability from some additive object by a simple rule.
We have
$$P_{ab}=\mathcal{P}(\psi _{ab})=\mathcal{P}[\psi (a)+\psi (b)]\ne \mathcal{P}[\psi (a)]+\mathcal{P}[\psi (b)],$$
(4)
i.e., the dependence $`\mathcal{P}[\psi (r)]`$ is non-linear. The simplest non-linear dependence is quadratic:
$$\mathcal{P}[\psi (r)]=|\psi (r)|^2.$$
(5)
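A minimal numerical illustration of Eqs. (3)-(5), with arbitrary illustrative amplitudes: the squared modulus of a sum of complex amplitudes differs from the sum of the individual probabilities by an interference term.

```python
import numpy as np

# Toy check of Eqs. (3)-(5): with P = |psi|^2, the probability of the
# union of two neighbourhoods is not the sum of the two probabilities.
# The amplitudes below are arbitrary, chosen only for illustration.
psi_a = 0.6 * np.exp(2j * np.pi * 0.10)   # psi(a) = A(a) e^{2*pi*i*n(a)}
psi_b = 0.5 * np.exp(2j * np.pi * 0.35)   # psi(b) = A(b) e^{2*pi*i*n(b)}

P_a, P_b = abs(psi_a) ** 2, abs(psi_b) ** 2
P_ab = abs(psi_a + psi_b) ** 2            # Eq. (5) applied to psi_a + psi_b

print(f"P(a) + P(b) = {P_a + P_b:.3f}")
print(f"|psi_a + psi_b|^2 = {P_ab:.3f}  (interference term present)")
```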
The probability $`P(r)`$ is not a probability density because we cannot integrate it, due to its non-additivity (an integral is a sum). The normalization condition means only that $`f`$ is a bijection: we can find only one image of the point $`s`$ in $`R`$. Actually, the concept of probability should be modified. An illustration in terms of the above brick road will make this clear: if we know the exact number $`N_{BW}`$ of black-and-white bricks, we do not need to check all the bricks of a perhaps infinite brick road. It is reasonable to stop checking when all these bricks have been found, and to put
$$P_{BW}=\frac{N_{BW}}{N_{checked}},$$
(6)
where $`P_{BW}`$ is the probability of finding a black-and-white brick, $`N_{checked}`$ is the exact (minimal) number of the bricks checked. Thus only $`N_{checked}`$ may vary in the different test runs (finding all the black-and-white bricks) and we have to use the average value.
The concept of probability for the continuum may be modified in a similar way, since the point may always be found in a finite interval. We do not need to take into consideration the remaining empty continuum.
But we shall not alter the concept of probability because it is not altered in quantum mechanics (although this results in infinite probabilities). The main purpose of this paper is to show that quantum mechanics describes the set of intermediate cardinality.
The function $`\psi `$ necessarily depends on $`n`$: $`\psi (r)\to \psi (n,r)`$. Since $`n`$ is accurate up to a constant (shift) and the function $`\psi `$ is defined up to the factor $`e^{i\text{const}}`$, we have
$$\psi (n+\text{const},r)=e^{i\text{const}}\psi (n,r).$$
(7)
Hence, the function $`\psi `$ is of the following form:
$$\psi (n,r)=A(r)e^{2\pi in}.$$
(8)
Thus the point of the intermediate set corresponds to the function Eq.(8) in the continuum. We can specify the point by the function $`\psi (n,r)`$ before the mapping, and by the random real number and the natural number when the mapping has been performed. In other words, the function $`\psi (n,r)`$ may be regarded as the image of $`s`$ in $`R`$ between mappings.
Consider the probability $`P(a,b)`$ of finding the point $`s`$ at $`b`$ after finding it at $`a`$. Let us use a continuous parameter $`t`$ for correlation between the continuous and countable coordinates of the point $`s`$ (simultaneity) and in order to distinguish between the different mappings (events ordering):
$$r(t_a),n(t_a)\to \psi (t)\to r(t_b),n(t_b),$$
(9)
where $`t_a<t<t_b`$ and $`\psi (t)=\psi [n(t),r(t)]`$. For simplicity, we shall identify the parameter with time without further discussion. Note that we cannot use the direct dependence $`n=n(r)`$. Since $`r=r(n)`$ is a random number, the inverse function is meaningless.
Assume that $`s`$ is an "observable" point, i.e., for each $`t\in (t_a,t_b)`$ there exists an image of the point in the continuum $`R`$.
Partition the interval $`(t_a,t_b)`$ into $`k`$ equal parts $`\epsilon `$:
$`k\epsilon =t_b-t_a,`$
$`\epsilon =t_i-t_{i-1},`$
$`t_a=t_0,t_b=t_k,`$ (10)
$`a=r(t_a)=r_0,b=r(t_k)=r_k.`$
The conditional probability of finding the point $`s`$ at $`r(t_i)`$ after $`r(t_{i-1})`$ is given by
$$P(r_{i-1},r_i)=\frac{P(r_i)}{P(r_{i-1})}$$
(11)
(between the points $`t_{i-1}`$ and $`t_i`$, the continuous image of the point is out of control, but the unmonitored zone will be reduced to zero by passage to the limit $`\epsilon \to 0`$), i.e.,
$$P(r_{i-1},r_i)=\left|\frac{A_i}{A_{i-1}}e^{2\pi i\mathrm{\Delta }n_i}\right|^2,$$
(12)
where $`\mathrm{\Delta }n_i=|n(t_i)-n(t_{i-1})|`$. Note that $`\mathrm{\Delta }n_i`$ is really a vector.
The probability of the sequence of transitions (we may use the word "transition" because we have the substantiated notion of time)
$$r_0,\dots ,r_i,\dots ,r_k$$
(13)
is given by
$$P(r_0,\dots ,r_i,\dots ,r_k)=P(r_0,r_1)\cdots P(r_{i-1},r_i)\cdots P(r_{k-1},r_k),$$
(14)
i.e.,
$$P(r_0,\dots ,r_i,\dots ,r_k)=\left|\frac{A_k}{A_0}\mathrm{exp}\left(2\pi i\sum _{i=1}^{k}\mathrm{\Delta }n_i\right)\right|^2.$$
(15)
Then the probability of the corresponding continuous sequence of transitions $`r(t)`$ is
$$P[r(t)]=\underset{\epsilon \to 0}{lim}P(r_0,\dots ,r_i,\dots ,r_k)=\left|\frac{A_k}{A_0}e^{2\pi im}\right|^2,$$
(16)
where
$$m=\underset{\epsilon \to 0}{lim}\sum _{i=1}^{k}\mathrm{\Delta }n_i.$$
(17)
Since at any time $`t_a<t<t_b`$ the point $`s`$ corresponds to all points of $`R`$, it also corresponds to all continuous random sequences of mappings $`r(t)`$ simultaneously (we emphasize that $`r(t)`$ is not necessarily a classical path).
The probability $`P[r(t)]`$ of finding the point at any time $`t_a\le t\le t_b`$ on $`r(t)`$ is non-additive too. Therefore, we introduce an additive functional $`\varphi [r(t)]`$. In the same way as above, we get
$$P[r(t)]=|\varphi [r(t)]|^2.$$
(18)
Taking into account Eq.(16), we can put
$$\varphi [r(t)]=\frac{A_k}{A_0}e^{2\pi im}=\text{const}e^{2\pi im}.$$
(19)
Thus we have
$$P(a,b)=\left|\sum _{\text{all }r(t)}\text{const}\,e^{2\pi im}\right|^2,$$
(20)
i.e., the probability $`P(a,b)`$ of finding the point $`s`$ at $`b`$ after finding it at $`a`$ satisfies the conditions of Feynman's approach (section 2-2 of ) for $`S/\hbar =2\pi m`$ (indeed, Feynman does not essentially use in Chap. 2 that $`S/\hbar `$ is just action).
Therefore,
$$P(a,b)=|K(a,b)|^2,$$
(21)
where $`K(a,b)`$ is the path integral (2-25) of :
$$K(a,b)=\int _{r_a}^{r_b}e^{2\pi im}\,Dr(t).$$
(22)
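Eq. (22) can be made concrete with a crude Monte Carlo discretization: sample random paths pinned at $`a`$ and $`b`$, assign each a value of $`m`$ via the discretized Eq. (17), and sum the phases. The assignment $`n=\eta (r)`$ below is an arbitrary illustrative choice, not one taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def eta(r):
    # Toy assignment of the countable coordinate n = eta(r); this quadratic
    # form is an arbitrary illustrative choice.
    return 5.0 * r ** 2

def m_of_path(path):
    # Discretized Eq. (17): m = sum_i |Delta n_i| along the path.
    return np.sum(np.abs(np.diff(eta(path))))

def K(a, b, n_paths=20000, k=30, sigma=0.2):
    # Crude Monte Carlo version of Eq. (22): average exp(2*pi*i*m) over
    # random paths r(t) pinned at r(t_a)=a and r(t_b)=b (Brownian bridges).
    t = np.linspace(0.0, 1.0, k + 1)
    total = 0.0 + 0.0j
    for _ in range(n_paths):
        walk = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, sigma, k))])
        bridge = walk - t * walk[-1]          # vanishes at both endpoints
        path = a + (b - a) * t + bridge
        total += np.exp(2j * np.pi * m_of_path(path))
    return total / n_paths

print(abs(K(0.0, 1.0)) ** 2)                  # |K(a,b)|^2, cf. Eq. (21)
```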
Thus we can apply Feynman's method in the following way.
1) We substitute $`2\pi m`$ for $`S/\hbar `$ in Eq.(2-15) of .
2) In section 2-3 of , Feynman explains how the principle of least action follows from the dependence
$$P(a,b)=\left|\sum _{\text{all }r(t)}\text{const}\,e^{(i/\hbar )S[r(t)]}\right|^2.$$
(23)
By the same nonrigorous reasoning, for "very, very" large $`m`$, we get "the principle of least $`m`$". This also means that for large $`m`$ the point $`s`$ has a definite stationary path and, consequently, a definite continuous coordinate. In other words, the corresponding interval of the intermediate set is sufficiently close to continuum (let the interval be called macroscopic), i.e., the cardinality of the intermediate set depends on its size. Recall that we can measure the size of an interval of the set only in unit sets (some packets of points).
3) Since large $`m`$ and $`\mathrm{\Delta }n_i`$ may be considered as continuous variables, we have
$$m=\underset{\epsilon \to 0}{lim}\sum _{i=1}^{k}\mathrm{\Delta }n_i=\int _{t_a}^{t_b}dn(t)=\mathrm{min}.$$
(24)
The function $`n(t)`$ may be regarded as some function of $`r(t)`$: $`n(t)=\eta [r(t)]`$. It is important that $`r(t)`$ is not random due to the second item. Therefore,
$$\int _{t_a}^{t_b}dn(t)=\int _{t_a}^{t_b}\frac{d\eta }{dr}\dot{r}\,dt=\mathrm{min},$$
(25)
where $`\frac{d\eta }{dr}\dot{r}`$ is some function of $`r`$, $`\dot{r}`$, and $`t`$ (note the absence of time derivatives higher than $`\dot{r}`$), i.e., large $`m`$ can be identified with action:
$$m=\int _{t_a}^{t_b}L(r,\dot{r},t)\,dt=\mathrm{min}.$$
(26)
Since the value of action depends on units of measurement, we need a parameter $`h`$ (depending on units only) such that
$$hm=\int _{t_a}^{t_b}L(r,\dot{r},t)\,dt.$$
(27)
Note that we can substitute action for $`m`$ only for a sufficiently high time rate of change of the countable coordinate $`n`$ because, if $`\mathrm{\Delta }n_i=n(t_i)-n(t_{i-1})`$ in Eq.(24) is not sufficiently large to be considered as an (even infinitesimal) interval of continuum, the action reduces to zero. This may be understood as vanishing of the mass of the point. Recall that mass is a factor which appears in the Lagrangian of a free point as a peculiar property of the point under consideration, i.e., formally, mass may be regarded as a consequence of the principle of least action .
Finally, we may substitute $`S/\hbar `$ for $`2\pi m`$ in Eq.(22) and apply Feynman's method to the set of intermediate cardinality.
Consider the special case of a constant time rate of change $`\nu `$ of the countable coordinate $`n`$. We have $`m=\nu (t_b-t_a)`$. Then "the principle of least $`m`$" reduces to "the principle of least $`t_b-t_a`$". If $`\nu `$ is not sufficiently large (massless point), this is the simplest form of Fermat's least time principle for light. The more general form of Fermat's principle follows from Eq.(24): since
$$\int _{t_a}^{t_b}dn(t)=\nu \int _{t_a}^{t_b}dt=\mathrm{min},$$
(28)
we obviously get
$$\int _{t_a}^{t_b}\frac{dr}{v(t)}=\mathrm{min},$$
(29)
where $`v(t)=dr/dt`$. In the case of non-zero action (mass point), the principle of least action and Fermat's principle "work" simultaneously. It is clear that any additional factor can only increase the "pure least" time. As a result, $`t_b-t_a`$ for a massless point bounds below $`t_b-t_a`$ for any other point and, therefore, $`(b-a)/(t_b-t_a)`$ for a massless point bounds above the average speed between the same points $`a`$ and $`b`$ for the continuous image of any point of the intermediate set. This is a step towards special relativity.
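Eq. (29) is Fermat's principle in its familiar form; as a numerical check, minimizing the travel time of a ray crossing two media with different speeds reproduces Snell's law. All numbers below are arbitrary illustrative values.

```python
import numpy as np

# Least-time check of Eq. (29) for a ray crossing two media with speeds
# v1, v2: minimize t(x) = sqrt(h1^2 + x^2)/v1 + sqrt(h2^2 + (d-x)^2)/v2
# over the crossing point x.
v1, v2, h1, h2, d = 1.0, 0.5, 1.0, 1.0, 2.0

x = np.linspace(0.0, d, 200001)
t = np.sqrt(h1**2 + x**2) / v1 + np.sqrt(h2**2 + (d - x)**2) / v2
xm = x[np.argmin(t)]

sin1 = xm / np.hypot(h1, xm)
sin2 = (d - xm) / np.hypot(h2, d - xm)
print(sin1 / v1, sin2 / v2)   # equal at the least-time path (Snell's law)
```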
It is important to make some general remarks on the description of the set of intermediate cardinality.
The complete description of the intermediate set falls into two basic parts: continuous and countable. The continuous description is classical mechanics (the principle of least action is an intrinsic property of the set of intermediate cardinality).
Quantum mechanics is a connecting link and must be considered as a separate description (a countable description in terms of the continuous one). This description has its own transitional main law (with action but without the principle of least action): the wave equation. Therefore, quantum mechanics is relevant for a sufficiently large interval, which may be considered as continuum. Compare this with the Copenhagen macroscopic measuring apparatus.
Thus the complete description of the intermediate set consists of three parts: macroscopic (continuous), microscopic in macroscopic terms (let us call it "submicroscopic"), and proper microscopic, i.e., it is a system of three dual theories.
The mathematical "invisibility" of the intermediate set leads to confusion: all descriptions are placed in the same continuous space. As a result, the directions of the countable descriptions are lost and replaced with spin. We also lose the microscopic dimensions of the non-continuous descriptions.
The total number of space-time dimensions of three 3D descriptions is ten. The same number of dimensions appears in string theories. But the extra dimensions of the intermediate set are essentially microscopic and do not require compactification. Since microscopic intervals (unlike macroscopic ones) are essentially non-equivalent, the proper microscopic description must split into a system of countable (quantum) dual "theories" with the number of extra dimensions corresponding to the number of distinguishable cardinalities.
By definition, a proper microscopic interval cannot be considered as continuous, i.e., it has no length. In other words, its macroscopic (continuous) image is exactly a point. Thus from a macroscopic point of view, there are two kinds of points: the true points and the composite points. A composite point consists of an infinite number of points. It is uniquely determined by the number of unit sets. Note that, in string theories, in order to get one natural number (mode) one needs at least two real numbers (length, tension) and additional assumptions. The cardinality of the proper microscopic interval may be regarded as some qualitative property of the point. This property vanishes if the interval is destroyed (decay of the corresponding point). The minimal building block for a composite point is a unit set. In the three-dimensional case, there must be three types of unit sets forming, in the macroscopic limit, three-dimensional approximately continuous space.
# Prompt Optical Observations of Gamma-ray Bursts
## 1 Introduction
The gamma-ray emission of GRBs typically has a duration of the order of tens of seconds or less and exhibits little pattern in its very pronounced temporal variation. Until 1997, the brevity of bursts prevented the multi-wavelength observations which would allow an accurate localization of the source. As a result, the burst mechanism, environment, location and energy scale have been elusive. Since 1997, multi-wavelength afterglow observations (e.g. Costa et al. (1997), van Paradijs et al. (1997)) have established the distance to several bursts (e.g. Metzger et al. (1997), Kulkarni et al. (1998)) and illuminated the physical processes occurring a few hours to days after the gamma-ray onset. The burst mechanism, however, remains a mystery.
Prompt radiation provides critical detail about the processes of the burst itself. Detection of such emission, in the optical for instance, requires instruments with wide field-of-view, rapid response and automated operation. The ROTSE-I CCD telephoto array meets these objectives (see Kehoe et al. (1999)). Preliminary data from the BATSE detectors (Fishman et al. (1989)) on-board the Compton Gamma-Ray Observatory are used by the GRB Coordinates Network (GCN, Barthelmy et al. (1998), Barthelmy et al. (1995)) to generate triggers about once per day, providing rough coordinates ($`\mathrm{\Delta }\theta \sim 10^{\circ }`$) within $`\sim 5`$ seconds of a burst. A dynamic response points ROTSE-I at the trigger coordinates within 3 seconds of their receipt. The discovery of prompt optical emission from GRB990123 (Akerlof et al. (1999)) has illustrated properties of early shock development and the immediate environment of bursts. This paper presents a further search for optical counterparts in a subset of our GRB trigger data, as well as a comparison of the results with GRB990123.
## 2 Observations and Reduction
The subset of 6 triggers discussed here was taken in the first year of operation. They were selected for this analysis because they possess localization errors of about 1 square degree or smaller (see Table 1). This positional accuracy, which reduces the search area and background by a factor of more than 200 from that available from BATSE alone, is generally obtained from the relative timing of signals from BATSE and the gamma-ray detectors on-board Ulysses (Hurley et al. (1999b)). Thin "Interplanetary Network" (IPN) annuli are generated which are only about $`0.1^{\circ }`$ wide (Hurley et al. (1999a)). The intersection of the BATSE position probability distribution with such a timing annulus produces an IPN arc a few degrees long. If available, a third detection reduces this arc to a smaller diamond-shaped region. In the current sample, 4 bursts are localized to IPN arcs, and GRB981121 has an IPN diamond using NEAR data. The BeppoSAX (Feroci et al. (1997), Jager et al. (1997)) satellite observed GRB980329 (Frontera et al. (1998)), providing a very accurate position. This sample does not include GRBs on the faint end of the BATSE fluence distribution or short bursts (see Kouveliotou et al. (1993), Kouveliotou et al. (1996)). Both limitations will be addressed in later analyses.
For prompt GRB triggers, we initially begin taking 5 second exposures to retain sensitivity to rapid variation, then lengthen to 25 second and 125 second exposures to maximize sensitivity. If the trigger position error is of the same order as the ROTSE-I field-of-view ($`16^{\circ }\times 16^{\circ }`$), we also "tile" around the given position at specific epochs in the sequence to ensure coverage of sources with errant initial positions but well-localized later. We then return to the direct pointing with longer exposures and begin the sequence again.
ROTSE-I first triggered on GRB980329 and began the first exposures 11.5 seconds after the burst had started. Unfortunately, the sky was cloudy for the early data and hazy for the later images. Nevertheless, some early images are clear enough in the immediate region of the burst to detect 10th magnitude objects. To maximize sensitivity in later, clearer images, the last two observations are the result of co-adding two and three frames, respectively. GRB980401 occurred during focusing tests so a manual response of eight exposures was performed. The last four 125 second exposures were co-added into one observation spread over 897 seconds. The final localization for GRB980420 places it near the galactic plane (most probable $`g_b\sim 10^{\circ }`$), and focused images are very crowded. The optics of the two cameras covering the main part of the IPN arc, however, were poorly focused. The final localization for GRB980627 places a majority of the probable area outside of even tiled exposures, with the result that we have 40% coverage in only four tiled images. Observing conditions for GRB981121 and GRB981223 were good.
Raw images are dark subtracted and flat-fielded, followed by source finding in the corrected images using SExtractor (Bertin and Arnouts (1996)). We then perform an astrometric and photometric calibration by comparison to the Hipparcos catalog (Høg et al. (1998)). Our astrometric errors are 1.4 arcsec. Since we operate with unfiltered CCDs to maximize light-gathering ability, photometry is established by comparing raw ROTSE magnitudes to V-band measures and color correcting based on $`B-V`$. The resulting magnitude, $`m_{ROTSE}`$, corresponds on average to $`m_V`$ but includes sensitivity in the B, V, I and especially R bands. Our photometric errors are 0.02 magnitude for stars brighter than magnitude 12.
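The calibration step described above amounts to fitting a zero point and a colour term against reference stars; a minimal sketch, with placeholder arrays standing in for matched Hipparcos stars (not real data):

```python
import numpy as np

# Fit V ~= m_raw + zp + c * (B - V) to reference stars; the unfiltered
# "ROTSE" magnitude then follows from the fitted zero point and colour term.
m_raw = np.array([10.2, 11.5, 9.8, 12.1, 10.9])   # instrumental magnitudes
V     = np.array([10.5, 11.9, 10.0, 12.6, 11.2])  # catalog V magnitudes
BV    = np.array([0.4, 0.9, 0.1, 1.1, 0.6])       # catalog B-V colours

A = np.column_stack([np.ones_like(m_raw), BV])
(zp, c), *_ = np.linalg.lstsq(A, V - m_raw, rcond=None)
print(f"zero point = {zp:.3f}, colour term = {c:.3f}")
resid = V - (m_raw + zp + c * BV)
print(f"rms residual = {resid.std():.3f} mag")
```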
## 3 Analysis and Discussion
Due to the observation of an X-ray counterpart for GRB980329, optical (Djorgovski et al. (1998), Palazzi et al. (1998), Pedersen et al. (1998b)) and radio (Taylor et al. (1998)) counterparts were observed several hours later. For such precise localizations, we would accept any detection at the known location of the burst. No optical emission was observed, so for the early images we take the sensitivity to be 0.5 magnitudes brighter than the dimmest SAO and GSC stars visible in the immediate region of the burst. We calculated the limiting sensitivities for the co-added frames by extrapolation of the Hipparcos-derived calibration to our $`5\sigma `$ threshold. A cross-check of this calibration was performed by directly comparing to the USNO catalog and finding the faintest $`m_{ROTSE}`$ to which we are more than 50% efficient. Given the afterglow measurements, we are able to constrain the overall power-law decline of the optical emission from GRB980329 to be shallower than $`t^{-1.8}`$ with respect to the earliest afterglow detection. This contrasts with the faster decline of the X-ray emission (Greiner et al. (1998)).
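The slope bound quoted above is simple arithmetic: a power-law decay $`Ft^\alpha `$ extrapolated back from the first afterglow detection must not violate the earlier optical limit. A sketch with placeholder epochs and magnitudes (not the measured values):

```python
import numpy as np

# A non-detection (limit m_lim at time t1) plus a later detection
# (m_det at time t2 > t1) bound the decay index alpha of F ~ t^(-alpha):
#     F(t1) <= F_lim  =>  alpha <= (m_det - m_lim) / (2.5 * log10(t2/t1)).
# The numbers below are placeholders for illustration only.
t1, m_lim = 12.0 / 86400.0, 13.5     # early optical limit (days, mag)
t2, m_det = 1.0, 23.0                # first afterglow detection (days, mag)

alpha_max = (m_det - m_lim) / (2.5 * np.log10(t2 / t1))
print(f"decline must be shallower than t^-{alpha_max:.2f}")
```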
For the other bursts, we require that a source magnitude vary by at least $`0.5+5\sigma `$ where $`\sigma `$ is the statistical error on the dimmest measurement. Varying objects are only considered bona fide optical counterpart candidates if they appear in at least two successive images. This removes backgrounds such as cosmic rays and satellite glints which show up frequently in ROTSE-I images. No rapidly varying objects were found in the allowed error regions of these five bursts. The image sensitivities were determined from the Hipparcos calibration as described above, and in most cases were cross-checked with the USNO catalog comparison.
Limiting magnitudes are given in Table 2 for up to three epochs marking significant improvements in sensitivity. Figure Prompt Optical Observations of Gamma-ray Bursts displays all relevant observations where coverage exceeded 50%, and indicates that ROTSE-I has sensitivity to optical bursts significantly fainter than GRB990123. The earliest limit is $`m_{ROTSE}>13.1`$ at 10.85 seconds for GRB981223. The best limit of this sample is $`m_{ROTSE}>16.0`$ at 62 minutes for GRB980401. We can conclude that bright optical counterparts (i.e. $`m_{ROTSE}\sim 10`$) are uncommon.
Since prompt optical emission has been seen in GRB990123, we ask whether optical emission from a GRB is correlated with gamma-ray output, as is suggested by Sari and Piran (1999). Because we do not know whether fluence or peak flux are accurate measures of the total gamma ray emission, we consider both in our comparison. To bring all bursts onto a common footing, we first adjust their $`m_{ROTSE}`$ limits by $`2.5\mathrm{log}(f/f_{GRB990123})`$, where $`f`$ is the gamma-ray fluence. We calculate this fluence to be that measured in the BATSE 50 - 100 keV plus 100 - 300 keV channels to avoid systematics due to problems in spectral fitting the other channels (Briggs (1999)). These fluence-scaled limits are plotted along with the GRB990123 observations in Figure Prompt Optical Observations of Gamma-ray Bursts. We have also adjusted our optical limits by scaling according to the BATSE measure of peak flux in the 64ms binning of the 50 - 300 keV data (see Figure Prompt Optical Observations of Gamma-ray Bursts).
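The fluence scaling used here is a one-line operation; a sketch with placeholder fluences and limits (the sign convention follows from assuming optical flux proportional to gamma-ray fluence, so lower-fluence bursts get numerically brighter, i.e. more constraining, scaled limits):

```python
import numpy as np

# Shift each optical limit by 2.5*log10(f / f_990123) before comparison
# with the GRB990123 light curve.  Fluences and limits are placeholders,
# not the BATSE values used in the paper.
f_990123 = 1.0                          # reference 50-300 keV fluence (arb.)
f_burst  = np.array([0.10, 0.25, 0.05])
m_limit  = np.array([13.1, 14.2, 15.0])

m_scaled = m_limit + 2.5 * np.log10(f_burst / f_990123)
print(m_scaled)   # limits on the common (GRB990123-equivalent) scale
```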
Variation of galactic extinction over the IPN arcs prevents us from quoting an accurate value for most of these bursts. However, it is much less than 1 magnitude at their most probable locations. Since GRB990123 has a similarly low value of extinction (0.04 mag), the effect of galactic extinction on our comparison should be minimal. The one exception is GRB980420, which may have over 2 magnitudes of extinction. Although extinction near the source can only be measured for GRB980329, it is likely most GRBs are not so heavily obscured since the great majority observed to have both X-ray and radio counterparts also exhibit optical emission (Frail (1999)).
Under the assumption of gamma-ray scaling, ROTSE-I is sensitive to GRB990123-like optical bursts for GRB981223 from 30 to 300 seconds. Around 1 minute, the optical emission of GRB981121 and GRB981223 would have been more than 2 magnitudes over our detection threshold. Either GRB990123 is atypical of GRBs in general, or there is not a strong correlation of optical flux with gamma-ray emission and the inherent dispersion to any actual correlation must be larger than two magnitudes to explain the results from GRB981121 and GRB981223.
## 4 Conclusions
In a study of six well-localized gamma-ray bursts, no optical counterparts were identified. When comparing to afterglow observations of GRB980329, we constrain the overall power-law decay of the optical emission to be shallower than $`t^{-1.8}`$. When using either gamma-ray fluence or peak flux as a predictor of optical emission, we find that, especially around 1 minute, optical emission is at least two magnitudes dimmer than for GRB990123. This non-detection of another optical burst indicates that optical emission is not strongly correlated with gamma-ray output.
We thank the BATSE team for their GRB data and Michael Briggs for further assistance. We also thank the NEAR team for their GRB981121 data. ROTSE is supported by NASA under SR&T grant NAG5-5101, the NSF under grants AST-9703282 and AST-9970818, the Research Corporation, the University of Michigan, and the Planetary Society. Work performed at LANL is supported by the DOE under contract W-7405-ENG-36. Work performed at LLNL is supported by the DOE under contract W-7405-ENG-48.
# Outbursts of EX Hydrae: mass transfer events or disc instabilities?
## 1 Introduction
If intermediate polars (IPs) are cataclysmic variables possessing partial accretion discs, with the centre disrupted by a magnetic field, then we expect that they can show disc instability outbursts, as dwarf novae do. Several IPs – XY Ari (Hellier, Mukai & Beardmore 1997), YY Dra (Patterson et al. 1992) and GK Per (e.g. Kim et al. 1992) – appear to show just that.
However, two other IPs – V1223 Sgr & TV Col – show short, low-amplitude outbursts that are unlike dwarf nova eruptions and probably result from another instability such as mass-transfer bursts (see Warner 1996 and Hellier et al. 1997 for reviews). The outbursts of TV Col last $`\sim `$8 h with 2-mag amplitudes (Szkody & Mateo 1984; Schwarz et al. 1988; Hellier & Buckley 1993); increased S-wave emission during this period points to enhanced mass transfer. V1223 Sgr has shown a very similar event (van Amerongen & van Paradijs 1989). Since these two stars have novalike discs (e.g. TV Col shows superhumps and V1223 Sgr shows VY Scl low states) the outbursts are unlikely to be thermal instabilities in the disc. Very similar short-lived flare events have been seen in AM Her stars (e.g. Warren et al. 1993), which cannot be disc instabilities since such stars do not have discs. \[The possible IP RX J0757+6306 may be a similar system (Tovmassian et al. 1998; Kato 1999), but the data are currently too sparse for certainty.\]
The remaining IP showing outbursts, EX Hya, is intermediate between the above two types, with outbursts lasting $`\sim `$2–3 d (cf. 0.5 d for TV Col & V1223 Sgr, and 5 d for XY Ari & YY Dra). Previous outbursts have been reported by Bond et al. (1987); Hellier et al. (1989; hereafter Paper 1); Reinsch & Beuermann (1990) and Buckley & Schwarzenberg-Czerny (1993).
A major finding from the 1987 outburst (Paper 1) was a high-velocity feature in the emission-line wings. This seemed to arise from an enhanced accretion stream overflowing the disc and connecting directly with the magnetosphere. We predicted that since the stream rotated with the orbital frequency ($`\mathrm{\Omega }`$) and the magnetosphere with the spin frequency ($`\omega `$), the relative geometry (and thus the X-ray emission from stream-fed accretion) should vary at the $`\omega -\mathrm{\Omega }`$ frequency. Such X-ray beat modulations have since been seen in many IPs (e.g. Hellier 1998) but not, so far, in EX Hya. Accordingly, we applied for Target of Opportunity time to observe the next outburst of EX Hya with the rapid-response RXTE X-ray satellite. This was successfully triggered during an outburst in 1998 August and we report the results here.
EX Hya is also well studied in quiescence, showing a prominent sinusoidal modulation at the 67-min spin period and a grazing eclipse recurring with the 98-min orbital period. See, e.g., Hellier (1987, hereafter Paper 2) for spectroscopy, Siegel et al. (1989) for optical photometry, and Rosen et al. (1991) for X-ray data.
## 2 The long-term record
Since EX Hya's behaviour is clearly different from that of normal dwarf novae it is worth presenting the complete record. Fig. 1 shows the visual estimates of EX Hya compiled by the Variable Star Section of the RASNZ. While EX Hya mostly sits at 13$`^{\text{th}}`$ mag, it rises to mag 9.5 in infrequent outbursts lasting 2–3 d. Note that much of the variability at quiescence is real, caused by the spin and orbital modulations. Fig. 2 contains details of the outbursts on an expanded scale. We note the following points:
1. 15 outbursts have been seen in 44 years, for an average recurrence of $`\sim `$ 3 yrs. However, the yearly and monthly data gaps mean that many will have been missed. Coverage dense enough to catch 2-d outbursts is $`\sim `$ 2/3$`^{\text{rds}}`$ complete in recent times, dropping to $`\sim `$ 1/3$`^{\text{rd}}`$ complete earlier, so we can estimate that only $`\sim `$ half the outbursts have been caught, reducing the recurrence to $`\sim `$ 1.5 yrs.
2. The outbursts occur irregularly: near JD 244 8370 a "double" outburst occurred with an interval of only 8 d. In contrast, taking the sampling into consideration, there is a 95% probability that inter-outburst intervals $`>`$ 2.7 yrs have occurred (most likely in the 12-yr period between JD 244 2300–244 6600 when no outburst was seen).
3. The outburst rises are unresolved in the RASNZ data, where rises of 3.4 mags in $`<`$ 12 hrs are seen. Reinsch & Beuermann (1990) caught part of a rise, seeing the brightness increase by a factor 10 within 3 hrs.
4. The declines are slower than the rises and are variable: the outburst at JD 244 6920 declined by 3.5 mags in 1.8 d while that at JD 245 1040 took 3.0 d to decline by the same amount.
5. The outburst at JD 244 8760 was peculiar. EX Hya rose from mag 13.1 to 9.9 in $`<`$ 15 hrs and declined from 10.3 to 12.6 in only $`<`$ 4.5 hrs, the whole event being over in $`<`$ 1 d. \[The two outburst points are by different observers; the observers involved (including the current authors AJ and DO) are highly experienced observers of EX Hya, and the data points have been confirmed from the original observing logs.\]
6. If the accretion rate scales as the optical magnitude and if the quiescent accretion rate is $`\sim `$10<sup>16</sup> g s$`^{\text{-1}}`$ then the outbursts typically involve $`\sim `$10<sup>22</sup> g of material (a back-of-the-envelope version of this estimate is sketched below).
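The mass estimate in point 6 can be reproduced under the stated assumption that the accretion rate scales with the optical flux; the idealized linear decline below is a simplification for the sketch.

```python
import numpy as np

# Scale the quiescent accretion rate by the optical amplification
# 10^{0.4*(m_q - m(t))} and integrate over an idealized outburst
# (linear decline from mag 9.5 back to mag 13.0 over 2.5 d).
# Mdot_q = 1e16 g/s follows the text; the light-curve shape is assumed.
mdot_q = 1.0e16                                 # quiescent rate [g/s]
m_q, m_peak, decline_d = 13.0, 9.5, 2.5

t = np.linspace(0.0, decline_d * 86400.0, 10001)      # [s]
m_t = m_peak + (m_q - m_peak) * t / t[-1]             # linear decline
mass = np.trapz(mdot_q * 10.0 ** (0.4 * (m_q - m_t)), t)
print(f"accreted mass ~ {mass:.1e} g")                # ~1e22 g
```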
## 3 The 1998 outburst data
The August 1998 outburst (Fig. 3) showed the usual unresolved rise but decayed to quiescence in 3 d, 1 d longer than the other well-studied outbursts.
### 3.1 RXTE X-ray observations
Following notification of the outburst we observed EX Hya with RXTE \[see Bradt, Rothschild & Swank (1993) for a description of this satellite\], gaining three sections of data on the outburst decline (Fig. 3). The first section, lasting 1 hr, recorded a 2–15 keV count rate varying between 70 and 330 c s$`^{\text{-1}}`$ (all 5 PCUs); during the second section, lasting 9 hr, the count rate was in the range 60–220; and by the third section, lasting 6 hr, the count rate had declined to 35–70, essentially a quiescent count rate.
We Fourier transformed the 2–15 keV X-ray dataset, first normalizing the three data sections to the same count rate. The result (Fig. 4) reveals power at the spin frequency ($`\omega `$) and at the beat frequency between the orbital and spin periods ($`\omega -\mathrm{\Omega }`$). An X-ray beat frequency has never been seen in EX Hya in quiescence, but its occurrence in outburst confirms the prediction in Paper 1. However, some scepticism is in order since the X-ray data cover only $`\sim `$4.5 cycles of the 3.5-hr beat period. One might also be concerned that since the spacecraft orbital period is near EX Hya's orbital period (96 vs 98 mins), beating with the spacecraft orbit might explain the peak seen. However, this would produce equal peaks at $`\omega -\mathrm{\Omega }_{\mathrm{rxte}}`$ and $`\omega +\mathrm{\Omega }_{\mathrm{rxte}}`$ whereas there is no power at $`\omega +\mathrm{\Omega }_{\mathrm{rxte}}`$.
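As a rough plausibility check, the expected frequencies follow directly from the spin and orbital periods, and a Lomb-Scargle periodogram of an unevenly sampled light curve recovers both. The sketch below uses synthetic data with arbitrary amplitudes and sampling; it illustrates the method, and is not a reanalysis of the RXTE light curve.

```python
import numpy as np
from astropy.timeseries import LombScargle

P_spin, P_orb = 67.03 * 60.0, 98.26 * 60.0          # [s]
f_spin, f_beat = 1.0 / P_spin, 1.0 / P_spin - 1.0 / P_orb

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 16.0 * 3600.0, 4000))  # ~16 hr, uneven sampling
y = (100.0 + 25.0 * np.sin(2 * np.pi * f_spin * t)
           + 10.0 * np.sin(2 * np.pi * f_beat * t)
           + rng.normal(0.0, 10.0, t.size))

freq = np.linspace(0.5 * f_beat, 2.0 * f_spin, 20000)
power = LombScargle(t, y).power(freq)
for f0, name in [(f_spin, "spin"), (f_beat, "beat")]:
    i = np.argmin(np.abs(freq - f0))
    print(f"{name}: peak power near {f0:.2e} Hz is {power[i-50:i+50].max():.2f}")
```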
By fitting a sinusoid to the three sections of 2–15 keV X-ray data we find that the spin pulse had a modulation depth (semi-amplitude/mean) of 52% in the first section, declining to 25% in the second section and 5% in the third (the errors are dominated by flickering, and the first result is particularly unreliable since the data cover only $`\sim `$1 cycle). For comparison, Rosen et al. (1991) quote a 14% depth in quiescent Ginga data over a similar energy range; thus the pulse amplitude was markedly bigger during outburst. There was no apparent change in pulse phase during the RXTE observations.
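The quoted depth (semi-amplitude/mean) can be obtained from a linear least-squares fit of a sinusoid at the known spin period; a minimal sketch, with fake data standing in for the light curve of each section:

```python
import numpy as np

def modulation_depth(t, rate, period):
    # Fit rate = c0 + c1*cos + c2*sin at a fixed period; the depth is
    # the semi-amplitude divided by the mean, as in the text.
    ph = 2.0 * np.pi * t / period
    A = np.column_stack([np.ones_like(t), np.cos(ph), np.sin(ph)])
    c, *_ = np.linalg.lstsq(A, rate, rcond=None)
    return np.hypot(c[1], c[2]) / c[0]

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 3600.0, 500))
rate = 100.0 * (1.0 + 0.25 * np.sin(2 * np.pi * t / 4021.8)) \
       + rng.normal(0.0, 8.0, t.size)
print(f"depth = {modulation_depth(t, rate, 4021.8):.2f}")   # ~0.25
```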
The spectral changes over the spin cycle are consistent with the usual quiescent behaviour (greater modulation at lower energies), but the difficulty of disentangling (in a limited dataset) two periodicities, considerable flickering, and the outburst decline made further investigation unreliable.
The X-ray data show the expected narrow, partial eclipse (not shown), with a profile similar to that of the quiescent eclipse. It is close to the time predicted by the ephemeris of Hellier & Sproats (1992; not including the sinusoidal term), being early by 40 $`\pm `$ 10 s (0.007 in phase). Other than this, there was no orbital modulation.
### 3.2 CTIO photometry
We obtained $`R`$-band photometry with the Cerro Tololo Inter-American Observatory 0.9-m telescope over 5 nights of the decline and return to quiescence (Figs. 3 & 5).
The spin pulse is present throughout the dataset, but with different amplitudes (the semi-amplitudes/mean, as far as can be told given the flickering, are 12, 13, 24, 10 and 7 per cent on the five nights respectively). Reinsch & Beuermann (1990) also report the pulsation throughout their outburst dataset, with an amplitude comparable to that in quiescence.
It might appear from Fig. 5 that the pulse is late compared to the predicted times of maxima (which use the quadratic ephemeris of Hellier & Sproats 1992) but the situation is more complex: Fig. 6 shows that X-ray maximum occurs where predicted (to within 0.05 in phase) but that the optical pulse remains bright for $`\sim `$0.15 longer. This effect has not been reported previously, but this is the first simultaneous optical/X-ray dataset.
## 4 The quiescent eclipse
Since interpreting the eclipse profiles during outburst will be crucial, we will first take a detour into the quiescent lightcurve. Note, firstly, that the partial, flat-bottomed X-ray eclipse implies that the secondary limb grazes the white dwarf, eclipsing the lower accreting pole but leaving the upper pole uneclipsed (Beuermann & Osborne 1988; Rosen et al. 1991; Mukai et al. 1998).
To investigate the quiescent optical eclipse we have used the 45 h of $`B`$-band photometry reported by Sterken et al. (1983) and Sterken & Vogt (1995). We first folded the data on the 67-min spin period (using 50 phase bins) to obtain the mean pulse profile. We then removed the pulse by subtracting from each datapoint the value of the mean pulse profile at that phase. Then we folded the data on the orbital cycle, to obtain the curve displayed in Fig. 7. The $`\sim `$ 30 per cent optical eclipse lasts for 3 mins and is coincident with the X-ray eclipse. Detailed studies (e.g. Siegel et al. 1989) show that the eclipse centroid depends on spin phase and reveal that most of the eclipsed light arises from the accretion curtain of material falling onto the lower pole of the white dwarf.
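The fold-subtract-fold procedure amounts to a simple prewhitening; a schematic implementation (the input arrays here are synthetic stand-ins for the Sterken et al. photometry):

```python
import numpy as np

def fold_mean(t, y, period, nbins=50):
    # Mean profile in phase bins, as in the 50-bin spin fold described above.
    phase = (t / period) % 1.0
    idx = np.minimum((phase * nbins).astype(int), nbins - 1)
    prof = np.array([y[idx == b].mean() for b in range(nbins)])
    return idx, prof

def prewhiten_and_fold(t, y, p_spin, p_orb, nbins=50):
    idx, prof = fold_mean(t, y, p_spin, nbins)      # 1: fold on spin period
    y_clean = y - (prof[idx] - prof.mean())         # 2: subtract pulse profile
    return fold_mean(t, y_clean, p_orb, nbins)[1]   # 3: fold on orbital cycle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 45 * 3600.0, 20000))   # ~45 h of points
p_spin, p_orb = 4021.8, 5895.4                      # [s]
y = (1.0 + 0.12 * np.sin(2 * np.pi * t / p_spin)
         + 0.05 * np.sin(2 * np.pi * t / p_orb)
         + rng.normal(0.0, 0.03, t.size))
print(prewhiten_and_fold(t, y, p_spin, p_orb).round(3))
```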
Fig. 7 also shows an orbital hump extending between phases $`\sim `$ 0.6–0.15, which is presumably caused by the bright spot where the stream hits the accretion disc. Similar features are seen in dwarf novae such as Z Cha, where the hump extends between phases 0.62–0.13 (Wood et al. 1986).
By analogy with Z Cha, we would also expect to see a disc eclipse and a bright spot eclipse. However, the disc eclipse involves only 23 per cent of the light in Z Cha (Wood et al. 1986), and in the grazing eclipse of EX Hya the fraction might be lower. It is possible that the EX Hya light curve contains a disc eclipse of $`\sim `$ 10 per cent depth, which starts at phase 0.95 as a steepening of the hump decline, and finishes at phase $`\sim `$ 0.05. It is also possible that the "shoulder" to the eclipse, ending at $`\sim `$ 0.07, involves the eclipse of the bright spot. However, both interpretations are near the margins of the data quality given the flickering.
Since the evidence for a disc eclipse is marginal we can ask whether EX Hya contains a disc at all, especially given the proposed models of discless accretion in IPs (Wynn & King 1995) and in EX Hya in particular (King & Wynn 1999). The other evidence for a disc can be summarised as (see Hellier 1991 for a fuller account): (1) the dominance of the spin period, rather than the beat period, in quiescent X-ray lightcurves, which implies that the accreting material circularizes and loses knowledge of orbital phase; (2) a weak "rotational disturbance" seen in the emission lines (Paper 2); (3) an emission-line S-wave with the correct phase and velocity to arise from an impact at the edge of a disc (Paper 2); (4) the orbital hump and its being at the same phase as in Z Cha (above); (5) material above the plane consistent with a splash where the stream hits the disc edge, revealed by soft X-ray and EUV dips (Córdova, Mason & Kahn 1985; Mauche 1999); and (6) the double-peaked lines seen in quiescence (Paper 2).
Note, though, that none of these secure the velocity field of the disc, and so they do not rule out a magnetically threaded structure (e.g. King & Wynn 1999) if it is able to mimic a disc in the above respects. We leave the issue of whether the outburst was a disc instability, thus implying the presence of a disc, to the discussion.
## 5 The outburst eclipses
In the last night of photometry (when EX Hya was back in quiescence) the observed eclipse was narrow, V-shaped, coincident with the X-ray eclipse, and similar to previous quiescent eclipses (Section 4). On the penultimate night (JD 245 1038) the eclipse egress had a "shoulder" lasting until phase 0.07. The night before, both eclipses had asymmetrical V shapes with minima at phase 0.02 (relative to the X-ray mid-eclipse). Earlier still in the outburst the eclipse is difficult to discern: some shallow dips may be eclipses, but there is not enough repeatability in consecutive cycles to distinguish them from flickering. Similarly, near the peak of the 1987 outburst Reinsch & Beuermann (1990) saw broad, shallow dips that may be eclipses, but again there were not enough cycles to be sure.
The lateness of the eclipses on the third night (JD 245 1037) implies that they are probably eclipses of an accretion stream rather than a disc. To test this we have computed the eclipse of a model stream, assuming it to have a constant brightness along the freefall trajectory between the initial impact with the disc and the point of its closest approach to the white dwarf. The model parameters (based on those of Paper 2) are: $`P_{\mathrm{orb}}`$ = 5895 s, $`M_1`$ = 0.7 $`M_{\odot }`$, $`M_2`$ = 0.13 $`M_{\odot }`$, $`i`$ = 79$`^{\circ }`$ and $`R_{\mathrm{disc}}`$ = 0.76 $`R_{\mathrm{L1}}`$.
Fig. 8 shows that the model stream eclipse exhibits the same features (V shape, minimum at phase 0.02, faster ingress, slower egress) as the two eclipses observed on that night. The only free parameter is the model normalization, where in order to match the data we have diluted the stream with uneclipsed light such that the stream is 43% of the total \[for comparison, in AM Her stars the stream is commonly found to contribute 50–60% of the total light (Harrop-Allin et al. 1999)\]. Fig. 9 illustrates the geometry of the above model, showing the system at phases 0.00 (white dwarf eclipse), 0.02 (stream-eclipse minimum) and 0.07 (see below).
On the fourth night (JD 245 1038) the eclipse is of the white dwarf and its environs, with an additional shoulder lasting until phase 0.07 (unfortunately we have only one cycle that night so cannot check the feature's repeatability). The constant intensity during the shoulder implies an eclipse of a pointlike source. If this source is located along the track of the accretion stream, the start and end phases of the shoulder and the depth would be reproduced if it emitted 22% of the system's light and was 0.29$`a`$ from the white dwarf (where $`a`$ is the stellar separation).
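For reference, the conversion of this fractional distance into centimetres follows from Kepler's third law and the binary parameters listed above; a short sketch:

```python
import numpy as np

# Binary separation from Kepler's third law, using P_orb = 5895 s,
# M1 = 0.7 Msun and M2 = 0.13 Msun from the stream model in Section 5.
G, Msun = 6.674e-8, 1.989e33                  # cgs
P, M1, M2 = 5895.0, 0.7 * Msun, 0.13 * Msun

a = (G * (M1 + M2) * P**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)
print(f"a = {a:.2e} cm, 0.29a = {0.29 * a:.2e} cm")   # ~1.3e10 cm
```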
Is this distance (1.3 $`\times 10^{10}`$ cm) the radius of the magnetosphere? The (not particularly reliable) estimate of Paper 2 is different, at 6 $`\times 10^9`$ cm. However, we can check the plausibility by estimating the field strength that would place the magnetosphere there. Using the standard theory for a disc (e.g. Frank, King & Raine 1992) we find that for an accretion rate of 10<sup>16</sup> g s<sup>-1</sup> the implied magnetic moment is 9 $`\times 10^{32}`$ G cm$`^\text{3}`$, an order of magnitude greater than other estimates (e.g. Paper 2; Warner 1996). One could argue that a stream would penetrate further in than a disc, increasing the derived magnetic moment further, although if the stream carried only a fraction of the accretion flow this would reduce the estimate again.
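The quoted magnetic moment can be reproduced, to order of magnitude, by inverting the usual magnetospheric-radius estimate; the 0.5 disc-correction factor below is a conventional choice assumed for this sketch, not a value taken from the text.

```python
# Invert r_mag ~ 0.5 * (mu^4 / (2 G M Mdot^2))^(1/7) (cf. Frank, King &
# Raine) to get the magnetic moment implied by r_mag = 1.3e10 cm.
G, Msun = 6.674e-8, 1.989e33          # cgs
M, mdot, r_mag = 0.7 * Msun, 1.0e16, 1.3e10

r_A = r_mag / 0.5                     # Alfven radius, spherical accretion
mu = (r_A**7 * 2.0 * G * M * mdot**2) ** 0.25
print(f"mu ~ {mu:.1e} G cm^3")        # ~6e32, same order as the quoted 9e32
```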
Is the distance then the radius of the outer disc edge? Again, it is inconsistent with Paper 2, which found a value of 2.3 $`\times 10^{10}`$ cm from the extent in phase of the rotational disturbance during the eclipse, and also from the separation of the double peaks in the emission lines, assuming these to give the Keplerian velocity at the disc edge. However, the extent of the rotational disturbance is hard to estimate, and the assumption of Keplerian motion could well be wrong. Further uncertainties are that an enhanced stream might penetrate into the disc, and also that the disc size might change in outburst, enlarging due to the enhanced viscosity of a disc instability but shrinking due to the addition of low-angular-momentum material from an enhanced stream.
Note, though, that the egress at phase 0.07 is consistent with a possible bright-spot egress at that phase in the quiescent lightcurve; thus, overall the most likely conclusion is that the feature is at the disc edge, and that previous estimates of the disc radius were too large.
## 6 Discussion
Several features of EX Hya's outbursts are unlike those expected from a disc instability (Section 2). The most striking is the rarity of the outbursts. Over time, only 4% of EX Hya's accretion occurs during outburst (assuming, simplistically, that the accretion rate scales as the optical flux). In contrast, the figure for a typical dwarf nova such as SS Cyg is 90%. Another peculiarity is the range of interoutburst intervals, from 8 d to $`>`$ 2 y, when there is no change in quiescent magnitude (and hence mass-transfer rate). Note also the decline times: the 2–3 d declines are typical of dwarf novae and are comparable with the viscous timescale of a disc, but the 5-hr decline is not (even if allowance is made for the lack of inner disc). Further, the emission line equivalent widths increase during outburst (Paper 1); in dwarf novae (with the exception of IP Peg) they decrease.
In contrast, the evidence for an enhanced mass-transfer stream is clear. High-velocity line wings from an overflowing stream hitting the magnetosphere have been observed in both the 1987 and 1991 outbursts (Paper 1; Buckley & Schwarzenberg-Czerny 1993). They were accompanied by greatly enhanced line emission from the stream impact at the edge of the disc. An X-ray beat period, caused by the stream connecting to the magnetic field and predicted in Paper 1, has now been seen (Section 3.1). Lastly, the eclipse profiles during late decline reveal a bright overflowing stream (Section 5).
The above suggests that EX Hya's outbursts are mass transfer events, rather than disc instabilities, but this is not conclusive. The disc-instability enthusiast could argue that the instabilities are reduced in duration and frequency by the magnetic disruption of the inner disc (Angelini & Verbunt 1989). If they are reduced to minor perturbations on the disc, the irregularity of the outbursts could follow. The enhanced mass transfer might then be a consequence of a disc instability, triggered by enhanced irradiation of the secondary star. This is easier in a magnetic system than in a dwarf nova since radiation from the magnetic poles is less likely to be hidden by the disc, compared to radiation from a boundary layer. There is indeed increased line emission from the secondary in the 1987 outburst, in addition to the increased line emission from the stream/bright-spot (Paper 1).
In principle the optical eclipse profiles early in the outburst should tell us whether the disc has gone into a high state; however, the difficulty of judging which features are real, given the flickering and limited data, precludes a firm conclusion. The first two nights of our optical dataset, and also the dataset early in an outburst by Reinsch & Beuermann (1990), are compatible with broad dips, $`\sim `$20 per cent deep, at the expected eclipse times. If a disc were the only light source, the grazing eclipse would produce dips of $`\sim `$30 per cent depth, so the observed depth is consistent with some dilution by light from the magnetosphere (which must be present given the spin pulse). The broad dips might also be centered slightly late, compared to the expected eclipse time (Reinsch & Beuermann 1990, and also upper panel of our Fig. 5), which would indicate a contribution from an enhanced stream.
The absence, early in the outburst, of the narrow eclipses seen in quiescence is puzzling. It indicates either that the white dwarf and its accretion curtains are relatively faint (but why then do we still see a spin pulse?) or that in outburst we see predominantly the upper (uneclipsed) accretion curtain (but why is this?).
In summary, there is clearly enhanced mass transfer, but we cannot be sure whether or not this is triggered by a disc-instability outburst.
## 7 Conclusions
(1) EX Hya's outbursts are unlike those of any other dwarf novae. However, it is possible that they are characteristic of disc instabilities in a magnetically truncated disc rather than the result of a different process.
(2) There is clearly enhanced mass transfer in outburst. The evidence includes an enhanced stream/disc impact, eclipse profiles resulting from a bright stream overflowing the disc, line emission from where the overflowing stream hits the magnetosphere, and an X-ray periodicity at the beat period, indicating coupling of the overflowing stream to the magnetosphere.
(3) It is unclear whether the enhanced mass transfer is triggered by a disc instability. The eclipse profiles early in outburst are consistent with a brightened disc, but there is not enough repeatability over different cycles to distinguish them from flickering with certainty.
(4) After reviewing the evidence we conclude that EX Hya does possess an accretion disc, or a circulating structure with very similar characteristics. The optical orbital modulation in quiescence is similar to that of non-magnetic dwarf novae, including an orbital hump and marginal evidence for eclipses of the disc and bright spot.
(5) We find evidence that previous estimates for the disc size are too large, preferring instead a disc radius of 0.29 of the stellar separation.
(6) In possessing an overflowing stream giving rise to distorted emission line wings and distorted eclipse profiles, EX Hya in outburst shows some of the characteristics of SW Sex stars (e.g. Hellier 1999).
## Acknowledgments
We thank Darragh O'Donoghue for suggesting the analysis of Section 4, Chris Sterken for kindly sending us the data, and Janet Wood for helping us to interpret it. We thank the RXTE team for their rapid response to our TOO request. Further, we thank Klaus Beuermann for a helpful referee's report. The Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatories, is operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under cooperative agreement with the National Science Foundation.
# A continuous low star formation rate in IZw 18?

This research has made use of NASA's Astrophysics Data System Abstract Service.
## 1 Introduction
The blue compact galaxy IZw 18 is a fiducial object: it has a very low metallicity and is currently experiencing an intense star formation episode. Its metallicity (Searle & Sargent 1972; Skillman & Kennicutt 1993) is the lowest (1/50 $`Z_{\odot }`$) observed in this kind of object, and more metal-deficient blue compact dwarf galaxies have not been found despite extensive searches (Terlevich 1982; Terlevich et al. 1991; Masegosa et al. 1994; Izotov et al. 1994; Terlevich et al. 1996), with the possible exception of SBSG 0335-052W (Lipovetsky et al. 1999). This led Kunth & Sargent (1986) to suggest that during a single starburst event, the metals ejected by the massive ionizing stars are mixed within a short time scale in the HII region and lead, in a few Myr, to a metallicity level comparable to that of IZw 18. Other studies have also shown that only one burst is sufficient to account for the oxygen abundance in IZw 18 (Alloin et al. 1978; Lequeux et al. 1981; Kunth et al. 1995). Thus, if the metallicity measured in IZw 18 is solely the result of the metals produced in the current burst, a discontinuity in the spatial abundance distribution would be expected, corresponding to the edge of the recently enriched region, typically a few hundred parsecs away from the young stellar core (Roy & Kunth 1995).
Early measurements of the abundance in the HI halo of IZw 18 by Kunth et al. (1994) indicated that the metallicity of the very massive neutral cloud in front of the ionizing cluster might be a factor of about 20 lower than in the HII region, strengthening the possibility of a sharp abundance drop. However the UV absorption lines they used were saturated, and this result remains very uncertain (Pettini & Lipman 1995). Moreover, recent HI observations by Van Zee et al. (1998) displayed lower velocity dispersion in the HI halo than those assumed by Kunth et al. (1994), leading to a metallicity comparable to the abundance in the central ionized region.
On the other hand, several observations of starburst galaxies (Kobulnicky & Skillman 1997, and references therein) have shown no significant gradient or discontinuity in the oxygen abundance distributions within the HII regions, except for the well established local overabundance of nitrogen in NGC5253 (Welch 1970; Walsh & Roy 1987, 1989; Kobulnicky et al. 1996, 1997). This corroborates models which predict that during a starburst, the heavy elements produced by the massive stars are ejected with high velocities into a hot phase, leaving the starburst region without immediately contributing to the enrichment of the interstellar medium (Pantelaki & Clayton 1987; Tenorio-Tagle 1996; Devost et al. 1997; Kobulnicky & Skillman 1997; Pilyugin 1999). In this scenario, the metals observed now would have their origin in a previous star formation event, and an underlying old stellar population would be expected. Early observations of IZw 18 did not clearly reveal such an old population (Thuan 1983; Hunter & Thronson 1995), but a recent reanalysis of HST archive data (Aloisi et al. 1999) has shown that stars older than 1 Gyr must be present. Moreover, Ostlin (1999) studied the resolved stellar population in the near infrared with NICMOS onboard HST and also found that while the NIR colour magnitude diagram is dominated by stars 10-20 Myr old, the numerous red AGB stars require a much higher age, in agreement with Aloisi et al. (1999). The NICMOS data require stars older than 1 Gyr to be present, and an age as high as 5 Gyr is favoured. This holds even if a distance slightly higher than the conventional 10 Mpc is adopted. This suggests that the present star formation episode in IZw 18 is not the first one. The rather high C/O ratio observed in IZw 18 (Garnett et al. 1997) could also suggest a carbon enrichment by an evolved population of intermediate mass stars. However, other starburst galaxies show considerably lower C/O ratios (Garnett et al. 1995; Kobulnicky & Skillman 1998; Izotov 1999), so this fact remains puzzling and controversial (Izotov 1999). Considering the large uncertainties in the determinations of the stellar yields (Prantzos 1998, 1999) and of the C/O ratio (Izotov et al. 1997; Izotov 1999), this may not be used as strong evidence for an enrichment by an old stellar population.
Thus the mechanism responsible for the dispersal and mixing of newly synthesized elements in a starburst galaxy remains unclear, as does the chain of star formation events responsible for the observed abundances. IZw 18, as the lowest abundance galaxy among starbursts, is an ideal laboratory to study these processes; its low metallicity is indicative of a rather "simple" star formation history, and one would expect the material ejected by the present massive stars to give a high contrast in abundance between the enriched and the non-enriched zones. However, if the small companion galaxy northwest of IZw 18 has had an influence, as suggested by Dufour et al. (1996), through tidal effects or streaming gas resulting from a collision with the main body of IZw 18, the recent history of dispersal and mixing of elements may not be that easy to disentangle.
We conducted deep long slit spectroscopy of IZw 18 in order to measure the O/H abundances as far as possible from the central HII region of the NW knot, and to detect any discontinuity or systematic gradient in the metallicity distribution. Observations and reduction are described in section 2; results are presented in sections 3 and 4, and the star formation history of this galaxy is discussed in the last sections.
## 2 Observations and data reduction
Seventeen exposures of 3000 seconds each of the blue compact galaxy IZw 18 were obtained with the 3.6 m Canada-France-Hawaii Telescope during three successive nights between 1995 February 1 and 4, using the MOS spectrograph with the 2048 $`\times `$ 2088 Loral 3 thick CCD detector. A long slit (1.52 arcsec wide) was used with a position angle of 45°, covering the spectral range from 3700 to 6900 Å. The position of the slit is displayed in Fig 1. The spatial resolution was 0.3145 arcsec/pix and the dispersion 1.58 Å/pix, leading to a spectral resolution of about 8.2 Å. The seeing was between 1 and 1.5 arcsec. The spectra were reduced using IRAF. The bias was removed using the overscan section of each frame. Pixel-to-pixel sensitivity variations and illumination effects (vignetting) were corrected using dome flat-field and sky flat images. The images were calibrated in wavelength using a combination of two exposures made during the second night with a Neon and a Helium lamp respectively. Five 50-second exposures of the standard star Feige 34 were obtained in order to flux calibrate the spectra. To account for wavelength-dependent atmospheric refraction, we fitted a low-order polynomial along the stellar continuum in each frame and then realigned each spectrum before combination. The $`H\beta `$ spatial profile on the seven frames obtained during the first night was different from that of the ten obtained during the two other nights. We assume that the positioning of the slit was slightly different on the first night. Nevertheless, as the offset was less than 1″ (the slit positioning error), we aligned and combined all the nights together in order to increase the S/N ratio.

After reduction, an abnormal "diffuse light" background appeared in the blue part of the long exposure images. This "light" is probably due to a slight increase in the temperature of the CCD with time, or to light diffused in the instrument during long exposures. The feature was removed by subtraction of the background (task BACKGROUND) and using the task APSCATTER, which is designed for this kind of purpose. The residuals after correction were less than 0.5% of the continuum level. Three bad columns (579 to 581, i.e., 4600 to 4602 Å) of the CCD were ignored. We applied a Doppler correction to shift the final spectrum to zero velocity.
Spectra were extracted by summing along the slit. The apertures used were 5 pixels (1.57″) wide with 2 pixels (0.63″) of overlap. In order to increase the signal to noise (especially for the \[OIII\]4363 Å line), large aperture spectra were extracted by summing over 12 pixels (3.78″) every 6 pixels (1.89″) along the slit, but this did not allow extension of the region over which \[OIII\]4363 Å could be measured. We also extracted a large aperture spectrum integrated over the whole galaxy (25 pixels centred on the maximum of the continuum emission) in order to compare our observations with the spectroscopic measurements (but with a different PA) of Skillman & Kennicutt (1993). A small aperture spectrum integrated over 2 pixels (0.62″) was also extracted to match the aperture used by Izotov (1999). The large aperture spectrum is displayed in figure 2 and the results of the line measurements (for both small and large apertures) are shown in table 1. The mean FWHM of the lines is around 8 Å, and the lines are unresolved.

Emission lines were measured automatically using the routine TWOFITLINES<sup>1</sup><sup>1</sup>1Twofitlines, Version 1.4 package for IRAF provided by Jose Acosta, Instituto de Astrofisica de Canarias - SPAIN. We compared the measurements with those made interactively with Gaussian fits through the IRAF task SPLOT and found no differences larger than two percent. A few weak lines in regions of low S/N, for which no Gaussian could be fitted, were measured by direct integration. The error bars were computed by summing in quadrature the effective photon noise on the line flux and the rms noise in the local continuum. An additional two percent error accounts for uncertainties in the flat-fielding and sky+diffuse light subtraction process.
## 3 Dust extinction
The interstellar dust extinction, or reddening, was first evaluated using the H$`\alpha `$/H$`\beta `$, H$`\gamma `$/H$`\beta `$ and H$`\delta `$/H$`\beta `$ ratios, assuming their intrinsic values to be 2.75, 0.475, 0.264 respectively for an electron temperature of 20000 K and a density of 100 cm<sup>-3</sup> (Osterbrock 1989). We used the extinction function
$$\frac{\mathrm{I}(\lambda )}{\mathrm{I}(\mathrm{H}\beta )}=\frac{\mathrm{F}(\lambda )}{\mathrm{F}(\mathrm{H}\beta )}10^{\mathrm{K}(\lambda )\mathrm{E}(\mathrm{B}-\mathrm{V})}$$
(1)
where I($`\lambda `$) is the intrinsic line intensity, F($`\lambda `$) is the observed flux at each wavelength and K($`\lambda `$) is the extinction function according to the galactic reddening law of Seaton (1979).
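To make the procedure concrete, the relation above can be inverted for each Balmer ratio. The following minimal Python sketch does this with illustrative Seaton-like extinction coefficients k(λ) (these particular values, and the helper names, are assumptions, not the exact K(λ) table used by the authors); the intrinsic ratios are the case B values quoted above.

```python
import numpy as np

# Assumed Seaton-like extinction coefficients k(lambda) and the intrinsic
# case B ratios quoted above (T_e = 20000 K, n_e = 100 cm^-3)
K = {"Halpha": 2.53, "Hbeta": 3.61, "Hgamma": 4.17, "Hdelta": 4.48}
INTRINSIC = {"Halpha": 2.75, "Hgamma": 0.475, "Hdelta": 0.264}

def ebv_from_ratio(line, observed_ratio_to_hbeta):
    """E(B-V) from one observed line/Hbeta ratio:
    F(line)/F(Hbeta) = R_int * 10**(-0.4 * E(B-V) * (k(line) - k(Hbeta)))."""
    dk = K[line] - K["Hbeta"]
    return 2.5 * np.log10(INTRINSIC[line] / observed_ratio_to_hbeta) / dk

print(ebv_from_ratio("Halpha", 3.1))   # ~0.12 mag for a mildly reddened decrement
```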
In order to correct for the effect of stellar absorption in the Balmer lines, we first assumed that its strength was the same for all the lines. We then derived the value for which consistent results for E(B-V) were obtained using the three different Balmer ratios. As the extent over which the flux was integrated is larger than the size of the ionizing star cluster, we corrected for the underlying stellar absorption only in the central area, i.e., over 3.8″ (185 pc) centred on the maximum continuum emission, according to the images of Hunter & Thronson (1995). We found the underlying stellar absorption to be around 1.8 Å, close to the value of 2 Å used by Skillman & Kennicutt (1993) and Roy & Walsh (1987), and adopted this value (1.8 Å) for correction.
The variation of the extinction parameter E(B-V) along the slit is shown in Fig 3.a. Good agreement between the three computed values is obtained only in the central region (we have indeed forced this agreement there by defining the strength of the absorption lines). Outside the central area, values obtained using the H$`\gamma `$/H$`\beta `$ and H$`\delta `$/H$`\beta `$ ratios are systematically lower than those obtained with H$`\alpha `$/H$`\beta `$, and fall most of the time below zero. Artificially increasing the H$`\beta `$ flux by less than four percent erases this discrepancy, suggesting it could be (partially) related to small calibration errors due to the Balmer absorption lines in the Feige 34 spectra.
Stasinska & Schaerer (1999) have shown that $`\mathrm{H}_\alpha `$ is partially excited by collisions in IZw 18, so that the $`\mathrm{H}_\alpha /\mathrm{H}_\beta `$ ratio should be between 2.95 and 3.00 for at least the main body of the nebula, higher than for case B recombination. This effect would explain the discrepancy between the reddening values estimated using different line ratios. These authors conclude that the reddening affecting this ionized nebula should be practically equal to zero. Therefore, we have assumed no reddening at all along the slit in our calculations.
## 4 Abundance determinations
### 4.1 Electron temperature
The temperature sensitive \[OIII\]4363 Å line was measured over a slit length of 12 arcsec. This extent is comparable with that reported by Martin (1996), despite the large difference in exposure times (51000 s for this observation against 12000 s for Martin's) and the different orientation of the slit (PA = 45° against 7.6° for Martin). Nevertheless, the \[OIII\]5007 line was observed over a length of 49 arcsec (2.5 kpc), against 23 arcsec in Martin's observations. We then computed the ratio of the \[OIII\]4959+\[OIII\]5007 line strength to \[OIII\]4363 Å to evaluate the electron temperature $`\mathrm{T}_\mathrm{e}`$(OIII) with a program based on the 3-level atom formulae from McCall (1984), using atomic data from Mendoza (1983). Uncertainties were propagated through all steps to derive the error bars. Fig 3.b shows the variation of the derived electron temperature as a function of position along the slit. The electron temperature can be considered constant across the HII region. Using the large and small aperture spectra, we also obtain mean electron temperatures of 19300 $`\pm `$ 600 K and 19700 $`\pm `$ 1000 K, in agreement with the previous determinations of Skillman & Kennicutt (1993) and Izotov (1999) respectively. However, our measurement in the small aperture appears somewhat smaller (though still compatible within the error bars) than the value of Izotov (1999).
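As a rough cross-check of this step, the line ratio can be inverted for $`\mathrm{T}_\mathrm{e}`$ numerically. The sketch below deliberately uses the standard Osterbrock (1989) analytic approximation to the (\[OIII\]4959+5007)/4363 ratio instead of the McCall (1984) three-level-atom formulae actually used, so it reproduces the quoted temperatures only approximately.

```python
import numpy as np
from scipy.optimize import brentq

def oiii_ratio(T, ne=100.0):
    """Predicted ([OIII]4959+5007)/4363 at temperature T (K), density ne (cm^-3)."""
    return 7.90 * np.exp(3.29e4 / T) / (1.0 + 4.5e-4 * ne / np.sqrt(T))

def te_from_oiii(observed_ratio, ne=100.0):
    """Invert the ratio for T_e by root finding between 5000 and 50000 K."""
    return brentq(lambda T: oiii_ratio(T, ne) - observed_ratio, 5.0e3, 5.0e4)

print(te_from_oiii(43.0))   # ~19400 K, close to the mean values quoted above
```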
### 4.2 The oxygen abundance
Oxygen abundances were derived for the regions where the electron temperature was measured using \[OIII\]$`\lambda 4363`$. They were obtained by summing over ionization states using the expression:
$$\frac{\mathrm{O}}{\mathrm{H}}=\frac{\mathrm{O}^+}{\mathrm{H}^+}+\frac{\mathrm{O}^{++}}{\mathrm{H}^+}$$
(2)
The presence of HeII 4686 suggests that $`\mathrm{O}^{+++}`$ should also be present. Generally, HeII 4686 is used to evaluate the abundance of $`\frac{\mathrm{O}^{+++}}{\mathrm{H}^+}`$, but the origin, nebular or circumstellar, of HeII 4686 in IZw 18 is not well established. However, it has been shown (Legrand et al. 1997) that this line peaks at the position of the WR feature, suggesting that these stars could be responsible for a higher excitation locally, and that $`\mathrm{O}^{+++}`$ is not an abundant ion. Skillman & Kennicutt (1993) and Izotov & Thuan (1998) have estimated that this stage contributes less than 4 percent to the total oxygen abundance. In terms of a possible abundance gradient in IZw 18, neglecting $`\mathrm{O}^{+++}`$ will not change the general trend of the abundance profile and would, at worst, slightly underestimate, by a few percent, the oxygen abundance in the region around the WR stars. So the contribution of $`\frac{\mathrm{O}^{+++}}{\mathrm{H}^+}`$ was not included. The contribution of the ionization states $`\mathrm{O}^+`$ and $`\mathrm{O}^{++}`$ was computed using a program based on the 3-level atom formulae from McCall (1984) with atomic data from Mendoza (1983), using the \[OII\] 3727 and \[OIII\] 4959 lines. We compared the abundances delivered by our program with abundances calculated by IRAF using the 5-level atom approximation from Shaw & Dufour (1995) and found no significant differences. The spatial profile of the oxygen abundance is given in Fig 3.c.
We also used the large and small aperture spectra to derive the mean oxygen abundance, obtaining $`12+\mathrm{log}(\frac{\mathrm{O}}{\mathrm{H}})=7.18\pm 0.03`$ in both cases, in excellent agreement with Skillman & Kennicutt (1993). This value differs from that reported by Izotov (1999), mainly because of the differences in the electron temperatures adopted.
Our results (Fig. 3.c) show unambiguously that there is no significant abundance gradient nor discontinuity in the NW-HII region of IZw 18 on scales smaller than 600 pc (using $`\mathrm{H}_0=75\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$). Martin (1996) suggested a possible, but weak, gradient in an orthogonal direction. The spatial resolution of our observations is 50 pc, and smaller scale inhomogeneities cannot be excluded. Moreover, the spatial profile of the oxygen lines does not indicate any abrupt change in metallicity at larger distances. Combined with the results of Van Zee et al. (1998), who found for the HI halo an abundance comparable with that of the HII gas, our results strongly favour a homogeneous metallicity distribution over the whole galaxy. This is fully consistent with related studies which have found very homogeneous spatial abundance distributions in several other giant HII regions (Diaz et al. 1987; Gonzalez-Delgado et al. 1994; Skillman 1985), as in 30 Doradus (Rosa & Mathis 1987), the LMC and SMC (Dufour & Harlow 1977; Pagel et al. 1978; Russell & Dopita 1990), and dwarf and irregular galaxies (Devost et al. 1997; Kobulnicky & Skillman 1997, 1996; Roy et al. 1996; Pagel et al. 1980; Masegosa et al. 1991), again with the exception of NGC 5253, already referred to in the Introduction.
## 5 Toward a new star formation history for IZw 18
### 5.1 A previous star formation event
The oxygen abundance distribution in IZw 18 appears extremely homogeneous throughout the galaxy, indicating a thoroughly mixed interstellar medium. If the measured abundances result from the metals ejected by the massive stars involved in the current burst, as suggested by Kunth & Sargent (1986), this would imply efficient mixing of the ejecta on scales of at least 600 pc within a timescale comparable to the age of the present burst, i.e., a few Myr (Hunter & Thronson 1995). However, dispersal of the heavy elements ejected by the massive stars can hardly be accomplished in less than $`10^8`$ yr on scales between 100 and 1000 pc (Roy & Kunth 1995); the timescale required for complete mixing is even longer (Tenorio-Tagle 1996). Thus the observed metals cannot arise from the material ejected by the stars formed in the current burst. It follows that the presently observed metals must have been formed in a previous star formation episode. The metals ejected in the current burst of star formation most probably remain hidden in a hot phase, as suggested by Pantelaki & Clayton (1987) and more recently by Tenorio-Tagle (1996), Devost et al. (1997), Kobulnicky & Skillman (1997) and Pilyugin (1999). Bomans (1999) has shown, with a deep pointing of the ROSAT HRI instrument, that there is extended X-ray emission to the SW, and maybe to the NE, of the central bubble of IZw 18. This extended emission seems to trace the expanding $`\mathrm{H}_\alpha `$ loops, leading the author to conclude that it supports the picture of hot, metal-enriched gas streaming out of IZw 18. This gas would have been ejected into the halo, where it would take long excursions while cooling before returning to the central galactic region to become available for future processing into stars (Tenorio-Tagle 1996). X-ray observations of the BCD VIIZw 403 (Papaderos et al. 1994) are also interpreted as hot material ejected by the present starburst. The availability of powerful X-ray observatories in the near future will allow the metallicity of this hot gas to be derived, allowing this scenario to be tested.
If the observed metals in IZw 18 were formed in a previous star formation episode, this would imply that the object is not a โyoungโ galaxy undergoing its first star formation as suggested by Searle & Sargent (1972). Such a view is also supported by other studies, which independently lead to another scenario (Dufour et al. 1988; Dufour & Hester 1990; Hunter & Thronson 1995; Kunth et al. 1995; Garnett et al. 1996; Aloisi et al. 1999).
### 5.2 The dearth of low metallicity galaxies
The metal abundances measured in IZw 18 are the lowest known in the interstellar matter (but not in stars) of the local universe; this remains so despite extensive metallicity measurements in emission line galaxies (Terlevich 1982; Terlevich et al. 1991; Masegosa et al. 1994; Izotov et al. 1994; Terlevich et al. 1996). Because of the correlation between size, luminosity and metallicity in dwarf galaxies (Skillman et al. 1989), Masegosa et al. (1994) proposed that galaxies with very low metallicity are too faint to be "caught" in their sample, raising the possibility that extremely metal deficient objects are very faint. IZw 18 and other starburst galaxies (Roennback & Bergvall 1995) lie quite far away from the correlation established by Skillman et al. (1989) for dwarf irregular galaxies. This may reflect the fact that they are presently undergoing a strong star formation event which increases their luminosity. However, the galaxies used by Skillman et al. (1989) were selected from the H$`\alpha `$ catalog of Kennicutt et al. (1989), thus the sample allows for current star formation! The origin of the correlation remains unclear (see also Skillman 1999).
It is easy to show that the present star formation rate in IZw 18 or in other starbursts cannot be sustained for a Hubble time without producing excessive chemical enrichment and an overly numerous stellar population. It is generally admitted that blue compact galaxies experience violent star formation events separated by long quiescent phases (Searle & Sargent 1972), during which they would appear as Low Surface Brightness Galaxies (LSBG) or quiescent dwarfs. However, this population does not contain any objects more metal poor than IZw 18 (McGaugh & Bothun 1993; McGaugh 1994; Roennback & Bergvall 1995; Van Zee et al. 1997a, b). Does the metallicity of IZw 18 represent a lower limit for the abundance in the gas of local galaxies? If so, why?
### 5.3 The lack of HI clouds without optical counterpart
Different observing programs have been carried out to search for isolated intergalactic HI clouds, so far without success (Briggs 1997). Most local so-called primeval HI cloud candidates turned out to be associated with stars (see for example Djorgovski 1990; Impey et al. 1990; McMahon et al. 1990; Salzer et al. 1991; Chengalur et al. 1995, for HI1225+01). Does this mean that such entities do not exist? If so, this would imply that all gas clouds (with a mass comparable to that of a dwarf galaxy) have formed stars. However, the detection limits for HI surveys remain quite high ($`N_{HI}\sim 10^{18}\,\mathrm{cm}^{-2}`$), and the existence of very small primeval HI clouds cannot be ruled out. Nevertheless, if isolated HI clouds massive enough to be dwarf galaxy progenitors existed, they would have sizes and masses comparable to small galaxies, and would present column densities sufficient to be detected by radio techniques. Their non-detection so far indicates that if such clouds exist, they are very rare. This idea is reinforced by the fact that the absorption line systems of high column density in the spectra of quasars seem to arise mainly from the halos of bright galaxies and not from small HI clouds (Lanzetta et al. 1995; Tripp et al. 1997), indicating again that the latter are rare. Furthermore, it has been shown that the diffuse cosmic UV background can ionize the extreme outer HI disks of spiral galaxies (Van Gorkom 1991; Corbelli et al. 1989; Maloney 1990; Corbelli & Salpeter 1993a, b), producing an abrupt fall in their HI column density. This effect could help to hide some primeval HI gas from the current surveys.
### 5.4 The temporal evolution of the metallicity
Absorption lines in Damped Lyman Alpha (DLA) systems are used to study the temporal evolution of the metallicity of the interstellar gas. Although the nature of the absorbing systems is still controversial (Tripp et al. 1997), it is generally admitted that the metallic lines are associated in some way with galaxies. The temporal evolution of the metallicity in the DLA systems reported by Lu et al. (1996) and more recently by Lu et al. (1998) and Prochaska & Wolfe (1999) is reproduced in Fig 4.
One notices that the mean metallicity of the interstellar gas increases as one approaches the present epoch. This is generally interpreted as the effect of cumulative enrichment by strong star formation events. However, a more intriguing feature is that the metallicity of the most underabundant systems also seems to increase with time! No extremely underabundant system has been found at low redshift. If the enrichment were solely the result of starburst events, we should find, locally, objects which have not undergone any burst (or very few of them); these objects would have a very low metallicity (comparable to what is observed at high redshift). Does the apparent increase in metallicity of the most underabundant systems indicate the existence of a minimal and continuous enrichment of the interstellar medium? The number of systems observed at low redshift is small (Meyer & York 1992; Steidel et al. 1995; Pettini & Bowen 1997; de la Varga & Reimers 1997; Boisse et al. 1998; Shull et al. 1998); if some unevolved systems exist, they must be very rare. The non-detection of such systems could arise from a selection effect rather than from their nonexistence.
### 5.5 A new star formation regime
It is generally accepted that the metal enrichment of the ISM builds up mainly in bursts. Different studies have been carried out to model these bursts to reproduce the global properties of galaxies. In the case of IZw 18, Kunth et al. (1995) have shown that only one burst, with an intensity comparable to the present one, is enough to produce the observed abundances. As we have shown, this single burst cannot be the present one. Previous massive star formation has occurred. We cannot eliminate the possibility that this previous star formation event was a starburst.
However, starburst episodes must be separated by quiescent phases, during which these systems appear as quiescent dwarfs or Low Surface Brightness Galaxies (LSBG). Studies of the latter objects (Van Zee et al. 1997c) have revealed that, despite their low gas density, star formation occurs (with low efficiency), probably as a local process rather than a global event. The SFR between bursts is very low, but not zero, so the metallicity increases slowly during these quiescent phases. Because these star formation rates are so low, they are generally neglected in studies of the star formation history of galaxies. However, when dealing with very low metallicity galaxies, they are capable of raising the metallicity up to values comparable to that of IZw 18 in less than a Hubble time.
For example, the galaxy UGC 9128, studied by Van Zee et al. (1997b), presents a SFR of about $`1.7\times 10^{-4}M_{\odot }\,\mathrm{yr}^{-1}`$ for a HI mass of $`3.55\times 10^7M_{\odot }`$. If such a low SFR lasts 10 Gyr, it will form $`1.7\times 10^6M_{\odot }`$ in stars, and no more than 5% of the initial mass of gas will have been transformed into stars. At this low continuous star formation rate, sustained even during a Hubble time, the fraction of gas still available at the present epoch remains high (about 95%). Thus the existence of a continuous low star formation rate in dwarf galaxies is consistent with the large HI reservoirs generally observed in these objects.
The current metallicity of the gas, $`Z_{gas}`$, assuming the simple closed box model (Pagel 1998), can be expressed as (Searle & Sargent 1972)
$$Z_{gas}\simeq -y\mathrm{ln}(G)$$
(3)
where $`G`$ is the fraction of gas presently available and $`y`$ the yield in heavy elements. The uncertainties on this last parameter are large, but $`y`$ is likely to be in the range 0.01 to 0.036 (Maeder 1992). Using a mean value $`y\simeq 0.02`$, and for the example above with $`G=0.95`$, we estimate the metallicity of the gas resulting from this low SFR enrichment to be close to $`10^{-3}`$, that is 1/20th solar!
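A quick numerical check of this argument, combining the UGC 9128 numbers above with the closed-box relation of equation (3); all inputs are taken directly from the text.

```python
import numpy as np

sfr, m_hi, t = 1.7e-4, 3.55e7, 1.0e10   # Msun/yr, Msun, yr (UGC 9128 example above)
m_stars = sfr * t                       # 1.7e6 Msun formed in stars
G = 1.0 - m_stars / m_hi                # gas fraction remaining, ~0.95

y = 0.02                                # adopted mean yield (quoted range 0.01-0.036)
z_gas = -y * np.log(G)                  # closed-box model, equation (3)
print(G, z_gas, z_gas / 0.02)           # ~0.95, ~1e-3, i.e. ~1/20 solar
```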
Consequently, a low continuous star formation rate cannot be neglected, especially when dealing with low metallicity galaxies; this may be the dominant star forming and metal enrichment process in dwarf galaxies.
We propose that in the most extreme objects, like IZw 18, a continuous low star formation regime can account for the observed abundances. We surmise that the present starburst is the first major one in the history of IZw 18, and that a mild star formation rate has been going on for several Gyr. Preliminary calculations strengthen this hypothesis (Legrand & Kunth 1998). If such a low regime is universal, we expect that all small systems have been forming stars and that their metallicity has increased slowly but steadily with time. This scenario explains the lack of local objects more underabundant than IZw 18, the absence of HI clouds without an optical counterpart and the evolution as a function of redshift of the most metal poor quasars absorption systems. Detailed modelling of this low star formation regime is presented in Legrand (2000).
## 6 Conclusion
We have acquired deep long slit spectroscopy of the metal poor dwarf star forming galaxy IZw 18. We confirm the very low metal content of the galaxy, and show that no significant abundance gradient nor inhomogeneities larger than $`\pm 0.05`$ dex are present in IZw 18 on scales of 50 pc to 600 pc. This is in apparent contradiction with the hypothesis of instantaneous local pollution proposed by Kunth & Sargent (1986). Instead, it supports a picture where metals ejected in the current burst of star formation escape into a hidden hot phase in the halo, follow a long excursion while cooling, and come back much later into the central galactic region (or escape into the intergalactic medium). This also implies that star formation occurred previous to the current burst. Based on different observational facts, we propose that the metals in IZw 18 are the result of a mild continuous star formation rate. The generalization of this model to all gas clouds can account for the scarcity of local galaxies with a metallicity lower than IZw 18, for the increase with time of the metallicity of the most underabundant DLA systems, and for the apparent absence of HI clouds without optical counterparts. If starbursts appear as important episodes in the history of galaxies, the low continuous star formation regime, dominant during the quiescent inter-burst periods, cannot be neglected.
###### Acknowledgements.
This work is part of the PhD thesis of FL. We thank P. Petitjean, J. Silk, M. Fioc, R. Terlevich, N. Prantzos, F. Combes, G. Tenorio-Tagle and the referee, R. Dufour, for helpful suggestions and discussions.
# The PSCz catalogue
## 1 Introduction
Data from IRAS (the Infra-Red Astronomical Satellite) allows unparalleled uniformity, sky coverage and depth for mapping the local galaxy density field. In 1992, with completion of the QDOT and 1.2 Jy surveys (Rowan-Robinson et al. 1990a, Lawrence et al. 1999, Strauss et al. 1990, 1992, Fisher et al. 1995), and with other large redshift surveys in progress, it became clear that a complete redshift survey of the IRAS Point Source Catalog (the PSC, Beichman et al. 1984, henceforth ES) had become feasible.
Our specific targets for the PSCz survey were two-fold: (a) we wanted to maximise sky coverage in order to predict the gravity field, and (b) we wanted to obtain the best possible completeness and flux uniformity within well-defined area and redshift ranges, for statistical studies of the IRAS galaxy population and its distribution. The availability of digitised optical information allowed us to relax the IRAS selection criteria used in the QIGC (Rowan-Robinson et al. 1990b), and use optical identification as an essential part of the selection process. This allowed greater sky coverage, being essentially limited only by requiring that optical extinctions be small enough to allow complete identifications. The PSC was used as starting material, because of its superior sky coverage and treatment of confused and extended sources as compared with the Faint Source Survey (Moshir et al. 1989). The depth of the survey ($`0.6\mathrm{Jy}`$) derives from the depth to which the PSC is complete over most of the sky.
The topology of the survey is analysed and presented by Canavezes et al. (1998); the inferred velocity field by Branchini et al. (1999); the real-space power spectrum and its distortion in redshift-space by Tadros et al. (1999); the redshift-space power spectrum by Sutherland et al. (1999). The direction and convergence of the dipole have been investigated by Rowan-Robinson et al. (1999), and its implications for cosmological models by Schmoldt et al. (1999). Sharpe et al. (1999) presented a least-action reconstruction of the local velocity field, while an optical/IRAS clustering comparison was presented by Seaborne et al. (1999). Many of these results are summarised in Saunders et al. (1999).
## 2 Construction of the Catalogue
### 2.1 Sky coverage
We aimed to include in the survey all areas of sky with (a) reliable and complete IRAS data, and (b) optical extinction small enough to allow reliable galaxy identifications and spectroscopic followup. We defined a mask of those parts of the sky excluded from the survey; the final mask was the union of the following areas:
1. Areas failing to get 2 Hours-Confirming coverages (HCONs, ES III.C.1). In these areas, there is either no data at all or the data does not allow adequate source confirmation.
2. Areas flagged as High Source Density at $`12`$, $`25`$ or $`60\mu \mathrm{m}`$. HSD at $`12`$ or $`25`$ implies an impossibly high stellar density for galaxy identifications. Areas flagged as HSD at $`60\mu \mathrm{m}`$ were processed differently in the PSC, with completeness sacrificed for the sake of reliability.
3. Areas with $`I_{100}>25\,\mathrm{MJy}\,\mathrm{ster}^{-1}`$. The $`100\mu \mathrm{m}`$ intensity values are those of Rowan-Robinson et al. (1991). Above this value, we found the fraction of sources which were identifiable as galaxies dropped dramatically.
4. Areas with extinctions $`A_B>2^m`$, where secure optical identifications become increasingly difficult. Our extinction estimates were based on the $`I_{100}`$ values above, and incorporated a simple model for the variation in dust temperature across the Galaxy; they are discussed further in Section 4.3. Except towards the centre and anti-centre, this criterion and the previous one are almost equivalent.
5. Small patches covering the LMC and SMC, defined by $`I_{100}>10`$ and $`5\,\mathrm{MJy}\,\mathrm{ster}^{-1}`$ respectively. In these areas, there are large numbers of HII regions in the clouds themselves with similar optical and IRAS properties to background galaxies, making identifications very uncertain.
The mask is specified as a list of excluded $`1\mathrm{deg}^2`$ "lune-bins" (ES Ap.X.1), so these values are necessarily averages over $`1\mathrm{deg}^2`$.
The overall area outside the mask is 84% of the sky. For statistical studies of the IRAS galaxy population and its distribution, where uniformity is more important than sky coverage, we made a "high $`|b|`$" mask, as above but including all areas with $`A_B>1^m`$, and leaving 72% of the sky. In practice, this criterion is almost identical to $`I_{100}>12.5\,\mathrm{MJy}\,\mathrm{ster}^{-1}`$. Henceforth, when we say "high-latitude", we simply mean outside this mask. Both masks are shown in Figure 1.
### 2.2 PSC selection criteria
Our aim was to relax the criteria of the QIGC sufficiently to pick up virtually all galaxies, even at low latitude, purely from their IRAS properties; simultaneously, we wanted to keep contamination by Galactic sources to a reasonable level. Our actual colour selection criteria were almost the same as those of the QIGC:
| $`\mathrm{log}_{10}(S_{60}/S_{25})`$ | $`>`$ | $`0.3`$ |
| --- | --- | --- |
| $`\mathrm{log}_{10}(S_{25}/S_{12})`$ | $`<`$ | $`1.0`$ |
| $`\mathrm{log}_{10}(S_{100}/S_{25})`$ | $`>`$ | $`0.3`$ |
| $`\mathrm{log}_{10}(S_{60}/S_{12})`$ | $`>`$ | $`0.0`$ |
| $`\mathrm{log}_{10}(S_{100}/S_{60})`$ | $`<`$ | $`0.75`$ |
However, unlike the QIGC, upper limits were used only where they guaranteed inclusion or exclusion. We did not at this stage exclude any source solely on the basis of an identification with a Galactic source. Because the PSCz evolved from the QIGC, there remain in the catalogue 3 galaxies which actually fail the new selection criteria. In total, we selected 16422 sources from the PSC.
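For illustration, the colour cuts can be written as a simple boolean filter. The sketch below takes the thresholds exactly as printed in the table and assumes well-measured fluxes in all four bands; it does not attempt to reproduce the treatment of upper limits described above.

```python
import numpy as np

def passes_colour_cuts(s12, s25, s60, s100):
    """True where measured fluxes (Jy) in all four bands satisfy the five cuts."""
    return (np.log10(s60 / s25) > 0.3) & \
           (np.log10(s25 / s12) < 1.0) & \
           (np.log10(s100 / s25) > 0.3) & \
           (np.log10(s60 / s12) > 0.0) & \
           (np.log10(s100 / s60) < 0.75)

# Example: an SED rising steeply from 12 to 100 microns, typical of a galaxy
print(passes_colour_cuts(0.1, 0.2, 1.2, 3.0))   # True
```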
Many very nearby galaxies have multiple PSC sources, associated with individual starforming regions. 70 such sources were pruned to leave, for each such galaxy, the single brightest PSC source. Also at this stage, Local Group galaxies were excised from the catalogue, and a separate catalogue of far-infrared properties for Local Group galaxies was made.
### 2.3 Extended sources
All IRAS surveys are bedevilled by the question of how to deal with galaxies which are multiple or extended with respect to the IRAS $`60\mu \mathrm{m}`$ beam. The size of most $`60\mu \mathrm{m}`$ detectors was $`1.5^{\prime }`$ in-scan by $`4.75^{\prime }`$ cross-scan. The raw data were taken every $`0.5^{\prime }`$, and for the PSC this was then filtered with an 8-point zero-sum linear filter, of the form $`--++++--`$. Hence galaxies with in-scan far-infrared sizes larger than about $`1.5^{\prime }`$ will have their fluxes underestimated in the PSC. There is a lesser sensitivity to cross-scan diameter, caused by some of the scans only partially crossing the full width of extended sources.
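To see why such a filter penalises extended sources, consider its response to Gaussian in-scan profiles of increasing width. This toy demonstration is not the actual PSC processing; the sampling interval and filter shape are as quoted above, and everything else (normalisation, grid size) is assumed.

```python
import numpy as np

filt = np.array([-1, -1, 1, 1, 1, 1, -1, -1], dtype=float) / 4.0   # zero-sum; positive lobe sums to 1
dx = 0.5   # arcmin per in-scan sample, as quoted above

def filtered_peak(fwhm):
    """Peak filter response to a unit-flux Gaussian of given in-scan FWHM (arcmin)."""
    sigma = fwhm / 2.3548
    x = (np.arange(400) - 200) * dx
    profile = np.exp(-0.5 * (x / sigma) ** 2) * dx / (sigma * np.sqrt(2.0 * np.pi))
    return np.convolve(profile, filt, mode="same").max()

point = filtered_peak(0.5)
for fwhm in [0.5, 1.5, 3.0, 6.0]:
    print(fwhm, filtered_peak(fwhm) / point)   # response falls off for extended sources
```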
The approach we have settled on is to preferentially use PSC fluxes, except for sources identified with individual galaxies whose large diameters are likely to lead to significant flux underestimation in the PSC. The PSC flux is based on a least chi-squared template fit to the point-source-filtered data-stream, so the flux is correctly measured for slightly extended sources, with diameters smaller than about $`1.5^{\prime }`$<sup>1</sup><sup>1</sup>1By comparison, the Faint Source Survey (Moshir et al. 1989) fluxes are peak amplitudes of the point-source-filtered data, so are much less tolerant of slightly extended sources. As far-infrared diameters are typically half the optical ones (e.g. Rice et al. 1988), and optical $`D_{25}`$ diameters are typically several times the FWHM for spirals, we can expect that isolated galaxies must have extinction-corrected diameters larger than several arcmin for their fluxes to be badly underestimated by the PSC.
Fluxes for extended sources can be derived using the ADDSCAN (or SCANPI) software provided at IPAC, which coadds all detector scans passing over a given position. We have tested how the ratio of PSC-to-extended flux depends on $`D_{25}`$, and, as expected, we find that the ratio is almost constant for galaxies up to $`2.5^{\prime }`$ (Figure 2), where the addscan flux is typically 10% larger. However, we also find that addscan fluxes are systematically 5% larger than PSC fluxes, even for much smaller galaxies. This discrepancy is not due to any difference in calibration (we have tested this by extracting a point-source-filtered flux from the addscan data and substituting this for the PSC flux, but the discrepancy in Figure 2 remains). The reason is the large number of multiple and/or interacting galaxies seen by IRAS, for which the addscan picks up the entire combined flux. Using such combined fluxes throughout is not a consistent approach, since nearby interacting galaxies will always be resolved into two or more sources, while more distant ones will not be<sup>2</sup><sup>2</sup>2In any survey with unresolved sources, a consistent approach to multiple sources is not possible, except by artificially degrading the resolution for each source to make it the same in physical units as for the most distant. The effect of the PSC filter is more subtle: a close pair of galaxies unresolved by the IRAS beam will in general have unequal fluxes, and in a flux limited survey, at least one will be below the flux limit. In such a situation, the zero-sum linear PSC filter still overestimates the flux of the brightest source on average (because of clustering), but by much less than the addscan.
Very large galaxies will have fluxes underestimated by the addscan, because their cross-scan extent is large compared with the detector size. Fluxes for galaxies with optical diameters $`D_{25}>8^{\prime }`$ have been taken from Rice et al. (1988), who made coadded maps for each such galaxy. In principle, galaxies somewhat smaller than this may have underestimated fluxes from the addscans. However, we find that, for the 19 PSCz galaxies in Rice et al. with quoted diameters $`8^{\prime }-10^{\prime }`$, the flux difference between addscan and Rice et al. fluxes is $`\mathrm{log}_{10}(S_{60A}/S_{60R})=0.044\pm 0.034`$, that is, the addscan fluxes are slightly but not significantly larger; so we do not expect smaller galaxies to have underestimated addscan fluxes.
Bearing these points in mind, we proceeded as follows: we made a catalogue, with the same sky coverage as the PSCz, of optically-selected galaxies from the LEDA database (Paturel et al. 1989) with extinction-corrected $`D_{25}`$ diameters larger than $`2.25^{\prime }`$, where the extinctions were estimated as per Section 4.3, and the consequent corrections to the diameters are as given by Cameron (1990). In Saunders et al. (1995), it was argued that LEDA is reasonably complete to this limit. For the largest sources, we used the positions and fluxes of Rice et al. (1988). IPAC kindly addscanned all the remainder for us to provide coadded data. We then used software supplied by Amos Yahil to extract positions, fluxes and diameters from this data, on the assumption that the galaxies have exponential profiles. Where these addscan fluxes are used in the catalogue, they have been arbitrarily decreased by 10%, to bring them statistically into line with the PSC fluxes at the $`2.25^{\prime }`$ switchover. We ended up with 1402 sources associated with large galaxies entering the catalogue, and 1290 PSC sources associated with large galaxies flagged for deletion. The latter are kept in the catalogue, so that a purely IRAS-derived catalogue can be extracted if wanted.
### 2.4 Other problem sources
In the QIGC, sources flagged as associated with confirmed extended sources, or with poor Correlation Coefficient, or flagged as confused, were addscanned. The above prescription should have dealt with the extended sources; the only other reasons for a galaxy source to have a poor CC are that (a) it is multiple, in which case, as argued above, we prefer PSC fluxes, or (b) it has low S/N, in which case the PSC is an unbiased flux estimator, while addscanning risks noise-dependent biases. The confusion flag is set very conservatively (i.e. most sources flagged as confused have sensible PSC fluxes) and in any case the PSC in general deals better with confusion than the addscans, in both cross-scan and in-scan directions. For these reasons, and to avoid introducing any latitude-dependent biases into the catalogue, we opted to use PSC fluxes throughout for sources not identified with optical galaxies as above. Addscan fluxes are included in the catalogue for all sources, for those wishing to experiment with them.
### 2.5 Supplementary sources
For a source to be accepted into the PSC, it required successful Hours-Confirmation on two separate HCONs (ES V.D.6). Thus at low S/N, areas of the sky with only 2 HCONs are inevitably less complete than those with 3 or more; the estimated completeness at $`0.6-0.65\,\mathrm{Jy}`$ in 2HCON areas is only 82% (ES XII.A.4). To improve this completeness, we supplemented the catalogue with 1HCON sources satisfying our colour criteria in the Point Source Catalog Reject File (ES VII.E.1), where there is a corresponding source in the Faint Source Survey. We demanded that PSC and FSS sources be within each other's $`2\sigma `$ error ellipse, and that the fluxes agree to within a factor of $`1.5`$. This revealed many sources in the Reject File where two individual HCON detections had failed to be merged in the PSC processing (ES XII.A.3), as well as sources which failed at least one HCON for whatever reason. New sources were created or merged with existing ones, and the Flux Overestimation Parameters (ES XII.A.1) assigned or recalculated accordingly. As well as these 1HCON sources, we also searched in the Reject File for additional sources with flux quality flags 1122 and 1121<sup>3</sup><sup>3</sup>31=upper limit only, 2=moderate quality detection, 3=good detection, in each of the four bands; neither of which made the PSC. Finally, we looked for sources in the PSC itself with flux quality flags 1113, but with correlation coefficient at $`60\mu \mathrm{m}`$ equal to A, B or C, indicating a meaningful detection. Altogether we found an additional 323 galaxies, mostly in 2HCON regions, and made 143 deletions as a result of merging individual HCON detections.
### 2.6 Optical identifications
Optical material for virtually all sources was obtained from COSMOS or APM scans, including new APM scans taken of 150 low-latitude POSS-I E plates. In general, we used red plates at $`|b|<10^{\circ }`$ and blue otherwise. The actual categories of optical material are:
1. Uncalibrated POSS-O plates scanned with APM. Here the magnitudes are typically good to $`0.5^m`$ for faint ($`B\sim 19^m`$) galaxies.
2. SRC J plates scanned with APM. These plates were scanned and matched for the APM survey, and have $`b_J`$ magnitudes typically good to $`0.25^m`$ for faint galaxies.
3,4,5. Uncalibrated SRC J or EJ plates scanned with COSMOS, giving $`b_J`$ accurate to about $`0.5^m`$.
6. SRC SR plates scanned with COSMOS. The quoted magnitudes are r-magnitudes, good to $`0.5^m`$.
7. POSS-E plates scanned with APM. For these plates, the standard POSS-O calibration was assumed. Since the O plates are about 1 magnitude deeper than the E plates, while $`B-R\sim 1`$ for IRAS galaxies, this gives a reasonable approximation to the $`B`$ magnitude. When there is extinction, this procedure gives systematically too bright a magnitude by about $`0.43A_B`$ (assuming Mathis 1990). The scatter is still dominated by the $`0.5^m`$ zero-point error.
At bright magnitudes, photographic photometry for galaxies becomes increasingly uncertain because of non-linearity and saturation in the emulsion.
The image parameters from COSMOS and APM data were also used to get arcsecond positions and offsets from nearby stars, and to make "cartoon" representations of the $`4^{\prime }\times 4^{\prime }`$ field centred on each IRAS source. The identifications were made using the likelihood methods of Sutherland and Saunders (1992). In general, and always at low latitudes, these cartoons were supplemented by grayscale images from the Digital Sky Survey and/or inspection of copy plates, and any change in best identification noted.
At this stage, non-galaxies were weeded out by a combination of optical appearance, IRAS colours and addscan profiles, VLA 20cm maps from the NVSS survey (Condon et al. 1998), millimetre data from Wouterloot and Brand (1989), SIMBAD and other literature data. Several hundred low-latitude sources were also imaged at $`K^{\prime }`$, as part of the extension of the PSCz to lower latitudes (Saunders et al. 1999). We found a total of 1376 confirmed non-galaxies. We are left with 15332 confirmed galaxies, and a further 79 sources where no optical identification is known but there is no clear Galactic identification either. The distribution of identified galaxies and the mask are shown in Figure 1.
The Galactic sources are dominated by infrared cirrus (774 sources). However, there are also 88 planetary nebulae, 140 emission-line stars, 24 sources identified with Galactic HII regions, 212 sources identified with bright stars or reflection nebulae, and 138 sources associated with YSOs.
## 3 The redshift survey
An essential part of the project was maintenance of a large database of redshift information from the literature, from databases such as NED and LEDA (the NASA and Lyons Extragalactic Databases) and Huchra's ZCAT, and also from work in progress for other surveys. Redshifts were accepted on the basis that their claimed error was better than any other available for that source, including our own.
Of the 15,411 galaxies in the sample, about 8,000 had known redshifts at the inception of the project, with about another 2,000 expected to be observed as part of ongoing projects such as the CfA2 and SSRS surveys. The project was fortunate enough to be allocated 6 weeks of INT+FOS time, 1 week of INT+IDS, 6 nights of AAT+FORS time, 18 nights on the CTIO 1.5m, two weeks at the INAOE 2.1m and 120 hours at the Nançay radiotelescope, over a total of 4 years. 4600 redshifts were obtained in this time. For the INT+IDS and CTIO spectra, redshift determination was made from the average observed wavelengths of the $`\mathrm{H}\alpha `$, NII and SII features. The rest of the data was taken with low dispersion spectrographs with resolutions of around 15-20 Å, and the $`\mathrm{H}\alpha `$/NII lines are blended together, as are the two SII lines. We modelled each continuum-subtracted spectrum as a linear combination of $`\mathrm{H}\alpha `$, NII and SII features, with redshift as a free parameter. The model giving the smallest $`\chi ^2`$ versus the data gave the redshift and its error, as well as the various line strengths and a goodness of fit. Spectra with poor $`\chi ^2`$, large redshift uncertainty or unphysical line ratios were checked by hand and where necessary refitted with different FWHM or initial guesses. Wavelength calibration included cross-correlation of each sky spectrum with a well calibrated template. The final derived errors average $`120\,\mathrm{km}\,\mathrm{s}^{-1}`$, as compared with $`300\,\mathrm{km}\,\mathrm{s}^{-1}`$ for the QDOT survey. Further details are presented in Keeble (1996) and Oliver et al. (1999).
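Schematically, the blended-line fit can be set up as follows. This sketch assumes Gaussian profiles, standard rest wavelengths for $`\mathrm{H}\alpha `$, NII and SII, and an 18 Å instrumental FWHM; the initial guesses and function names are illustrative, not the actual fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Rest wavelengths (A): [NII]6548, Halpha, [NII]6584, [SII]6716, [SII]6731
REST = np.array([6548.0, 6562.8, 6583.5, 6716.4, 6730.8])

def blend_model(wave, z, fwhm, a1, a2, a3, a4, a5):
    """Five Gaussians sharing one redshift and one instrumental FWHM (A)."""
    sigma = fwhm / 2.3548
    amps = [a1, a2, a3, a4, a5]
    return sum(a * np.exp(-0.5 * ((wave - w0 * (1.0 + z)) / sigma) ** 2)
               for a, w0 in zip(amps, REST))

def fit_redshift(wave, flux, z0=0.02):
    p0 = [z0, 18.0] + [flux.max()] * 5               # assumed starting guesses
    popt, pcov = curve_fit(blend_model, wave, flux, p0=p0)
    return popt[0], np.sqrt(pcov[0, 0])              # redshift and formal error
```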
By construction, we have minimised the overlap with other redshift surveys, but there are to date 448 galaxies for which we have both our own measurement and higher resolution measurements by other workers (Figure 3). There are 31 sources where the velocity difference is more than $`3\sigma `$ (errors combined in quadrature); if these are clipped out, the remaining 417 sources give an average offset $`V_{PSCz}-V_{lit}=6.1\pm 6.1\,\mathrm{km}\,\mathrm{s}^{-1}`$, with a scatter of $`124\,\mathrm{km}\,\mathrm{s}^{-1}`$ and a reduced $`\chi _\nu ^2=0.89`$. The scatter includes the contribution from the non-PSCz redshift, and suggests that our quoted errors are good estimates of the genuine external error. Of the 31 discrepant redshifts, 5 involve HI observations with unrealistically small quoted errors and with absolute differences less than $`250\,\mathrm{km}\,\mathrm{s}^{-1}`$, a further 11 are between 100 and 600 $`\mathrm{km}\,\mathrm{s}^{-1}`$ and $`3-5\sigma `$, suggesting a non-Gaussian error distribution, and 15 are greater than $`700\,\mathrm{km}\,\mathrm{s}^{-1}`$ and $`5\sigma `$, and must represent different galaxies or incorrect line identification or calibration. We thus have 15 redshifts seriously in error out of a total of 896 recent measurements (both ours and other workers') giving an error rate of 1.7%.
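The comparison statistics above can be reproduced schematically from paired velocity lists. In the sketch below, the inverse-variance weighting and the definition of the reduced chi-squared are assumptions (the paper does not state them), and the arrays are placeholders for the real paired measurements.

```python
import numpy as np

def compare_velocities(v_pscz, e_pscz, v_lit, e_lit):
    """Mean offset, scatter and reduced chi-squared after 3-sigma clipping."""
    dv = v_pscz - v_lit
    err = np.hypot(e_pscz, e_lit)            # errors combined in quadrature
    keep = np.abs(dv) < 3.0 * err            # clip the discrepant pairs
    w = 1.0 / err[keep] ** 2                 # inverse-variance weights (assumed)
    mean = np.sum(w * dv[keep]) / np.sum(w)
    mean_err = 1.0 / np.sqrt(np.sum(w))
    chi2_nu = np.mean((dv[keep] / err[keep]) ** 2)
    return mean, mean_err, np.std(dv[keep]), chi2_nu, int(np.sum(~keep))
```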
We did not pursue very faint ($`b_J>19.5^m`$) galaxies, on the basis that (a) they were certain to be at distances too large to be included for either statistical or dynamical studies, and (b) very large amounts of telescope time would be needed to achieve useful completeness for these galaxies. There are 438 sources without known redshift with a clear or probable faint optical galaxy identification (with $`b_J>19.5^m`$), and a further 127 with no obvious optical identification but no secure identification as a Galactic source either. At time of writing we are still lacking redshifts for 189 brighter ($`b_J<19.5^m`$) galaxies. The sky distribution of these categories is shown in Figure 4. Redshifts are available for 14677 sources.
### 3.1 n(z) and selection function
The $`n(z)`$ distribution is shown in Figure 5. The source density amounts to 1460 gals $`\mathrm{ster}^{-1}`$ at high latitudes and the median redshift is $`8500\,\mathrm{km}\,\mathrm{s}^{-1}`$. To account for the effect of the flux limit on the observed number density of galaxies as a function of distance, we need to know the selection function $`\psi (r)`$, here defined as the expected number density of galaxies in the survey as a function of distance in the absence of clustering. We have derived this both non-parametrically and parametrically using the methods of Mann, Saunders and Taylor (1996). This method is almost completely insensitive to the assumed cosmology, in the sense that the derived expected $`n(z)`$ is invariant. The resulting selection function can be transformed to other cosmologies or definitions of distance simply by mapping the volume element or distance, keeping $`n(z)dz`$ invariant.
For simplicity, and to allow comparison with simulations, we assume for derivation purposes a Euclidean Universe without relativistic effects, and with distance $`r`$ equal to $`V/(100h\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1})`$. We derive the result both non-parametrically, and parametrically using the double power-law form
$$\psi (r)=\psi _{\ast }\left(\frac{r}{r_{\ast }}\right)^{1-\alpha }\left[1+\left(\frac{r}{r_{\ast }}\right)^\gamma \right]^{-\left(\frac{\beta }{\gamma }\right)}$$
(1)
which very well describes the non-parametric results. The parameters $`\psi _{\ast }`$, $`\alpha `$, $`r_{\ast }`$, $`\gamma `$ and $`\beta `$ respectively describe the normalisation, the nearby slope, the break distance in $`h^{-1}\mathrm{Mpc}`$, its sharpness and the additional slope beyond it.
Using the high-latitude PSCz to derive the selection function, and correcting redshifts only for our motion with respect to the centroid of the Local Group, $`V=V_{hel}+300\,\mathrm{sin}\,l\,\mathrm{cos}\,b`$, we obtain
$`\psi _{\ast }=0.0077,\alpha =1.82,r_{\ast }=86.4,\gamma =1.56,\beta =4.43`$
Both non-parametric and parametric results are shown in Figure 6. The uncertainty in the selection function is less than 5% for distances $`30-200\,h^{-1}\mathrm{Mpc}`$, and 10% for $`10-300\,h^{-1}\mathrm{Mpc}`$, although the selection function drops by 4 decades over this range.
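For reference, equation (1) with these best-fit parameters, together with the Local Group velocity correction used above, can be evaluated as follows; the density units (assumed here to be $`h^3\mathrm{Mpc}^{-3}`$) and the example coordinates are illustrative.

```python
import numpy as np

# Best-fit high-latitude parameters quoted above
PSI0, ALPHA, RSTAR, GAMMA, BETA = 0.0077, 1.82, 86.4, 1.56, 4.43

def v_local_group(v_hel, l_deg, b_deg):
    """Heliocentric to Local Group frame: V = V_hel + 300 sin(l) cos(b)."""
    return v_hel + 300.0 * np.sin(np.radians(l_deg)) * np.cos(np.radians(b_deg))

def selection_function(r):
    """Equation (1): expected density at distance r (h^-1 Mpc), assumed h^3 Mpc^-3."""
    x = r / RSTAR
    return PSI0 * x ** (1.0 - ALPHA) * (1.0 + x ** GAMMA) ** (-BETA / GAMMA)

# Example: a galaxy near the median depth of the survey
r = v_local_group(8500.0, 90.0, 40.0) / 100.0   # distance in h^-1 Mpc
print(r, selection_function(r))
```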
In Figure 6, we also show the non-parametric selection function derived by the same method, for the QDOT and $`1.2\mathrm{Jy}`$ surveys.
## 4 Reliability, completeness, uniformity, flux accuracy
The utility of the PSCz for cosmological investigation depends on its reliability, completeness and uniformity.
### 4.1 Reliability
The major sources of unreliability in the catalogue are (a) incorrect identifications with galaxies nearby in angular position, and (b) incorrect identification of spectral features. Later cross-referencing used increasingly sophisticated methods based on Sutherland and Saunders (1992), but many older identifications depend on simple $`2^{\prime }`$ proximity. The most recent cross-correlation with ZCAT provided 1975 updated velocities, of which 84 were altered by more than $`500\,\mathrm{km}\,\mathrm{s}^{-1}`$ and 50 by more than $`1000\,\mathrm{km}\,\mathrm{s}^{-1}`$, suggesting an error rate of 2%, in agreement with the analysis of Section 3.
The identifications made for our own redshift followup were done much more carefully than those taken from the literature, using a likelihood approach including full knowledge of the error ellipse, optical galaxy source counts etc., and checked by eye from a greyscale image. The number of ambiguous identifications is negligible, though the PSC source is often confused between two galaxies separated by a few arcmin in the cross-scan direction. Where there was real ambiguity, we took redshifts for both galaxies and used the one with the larger $`\mathrm{H}\alpha `$ flux. It is also possible that Galactic sources were occasionally identified as background galaxies, but we believe this to be very rare.
### 4.2 Completeness
Incompleteness enters into the catalogue for many reasons:
1) There will be IRAS incompleteness where sources with true $`60\mu \mathrm{m}`$ flux greater than $`0.6\mathrm{Jy}`$ fail to appear in the catalogue. The largest source of this incompleteness is the areas of sky with only 2HCONs, where the PSC incompleteness is estimated as 20% (differential) and 5% (cumulative) at $`0.6\mathrm{Jy}`$ (ES XII.A.4). Based on source counts, we estimate that our recovery procedure (Section 2.5) for these sources has reduced the overall incompleteness in these areas from 5% to 1.5%; Figure 7 shows the source counts for high-latitude 2HCON sky. Our recovery procedure was impossible for $`|b|<10^{\circ }`$, so lower-latitude 2HCON sky (2% of the catalogue area) retains the PSC incompleteness.
2) The PSC is confusion limited in the Plane. However, by construction, the PSCz does not cover any areas with High Source Density at $`60\mu \mathrm{m}`$ (as defined in ES V.H.6), and the mask defined at $`100\mu \mathrm{m}`$ effectively limits the number of confusing sources. Also, at low latitudes, there are known problems in the PSC with noise lagging. However, the source counts shown in Figure 8 show low-latitude incompleteness to be small down to $`0.6\mathrm{Jy}`$.
3) Some galaxies are excluded by our colour criteria. Cool, nearby galaxies may fail the $`100/60\mu \mathrm{m}`$ condition, but will normally be included separately as extended sources. From the comparison with the $`1.2\mathrm{Jy}`$ survey, we estimate that about 50 galaxies from the PSC have been excluded (see Section 5).
4) No attempt was made to systematically obtain redshifts for galaxies with $`b_J>19.5^m`$. At high latitudes, the work of e.g. Clements et al. (1996) shows that these will in general lie at $`z>0.1`$. At lower latitudes, incompleteness cuts in at lower redshifts. It was originally hoped that the PSCz would be everywhere complete to $`z=0.05`$, but it is clear from the 3D density distributions that there is significant (10% or more) incompleteness down to $`z=0.04`$ towards the anti-centre. The reasons for this are discussed in the next subsection.
Patchy extinction can lead to higher local extinction than our values, which are averages over $`1\mathrm{deg}^2`$ bins. We have obtained $`K^{\prime }`$ images of most low-latitude sources without obvious galaxy or Galactic identification; many faint galaxies are revealed, but only a handful of nearby ones. Spectroscopy of the missing low-latitude galaxies is continuing as part of the Behind The Plane extension to the PSCz survey; for the time being, we estimate that good completeness has been achieved for $`z<0.1/10^{(0.2(A_B+A_B^2/10))}`$. The incompleteness is strongly concentrated towards the anticentre and at low latitudes; for $`|b|>10^{\circ }`$, the survey is estimated to be useably complete to $`z=0.05`$ everywhere.
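To make the quoted completeness limit concrete, here is a minimal reading of that relation (our own illustration; the coefficients are exactly those quoted above):

```python
# Completeness depth as a function of B-band extinction, from the relation above
def z_complete(A_B):
    return 0.1 / 10**(0.2 * (A_B + A_B**2 / 10))

for A_B in (0.0, 1.0, 2.0, 4.0):
    print(f"A_B = {A_B:.0f} mag  ->  complete to z ~ {z_complete(A_B):.3f}")
# one magnitude of extinction already pulls the depth from z = 0.1 to z ~ 0.06
```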
The number of galaxies with $`b_J<19.5^m`$ for which redshifts are still unknown is 192, or 1.2% of the sample. A further 85 have only marginal redshift determinations and some of these will be incorrect. Both unknown and marginal redshifts are well distributed round the sky, but with some preference for lower latitudes.
### 4.3 Extinction maps
Our extinction maps started from the $`I_{100}`$ intensity maps of Rowan-Robinson et al. (1991), binned into lune bins, with the $`I_{100}/A_V`$ ratio as given by Boulanger and Perault (1988), and $`A_B/A_V`$ ratio as given by Mathis (1990). Because the dust temperature declines with galactic radius, this procedure over-estimates extinctions towards the galactic centre, and under-estimates them towards the anti-centre. We attempted to correct for this dependence of dust temperature on position by devising a simple model for the dust and starlight in our galaxy. We assumed a doubly exponential, optically thin distribution of dust and stars, with standard IAU scalelengths for the solar radius and stellar distribution ($`r_0=8.5\mathrm{kpc},r_{sc}=3.5\mathrm{kpc},z_{sc}=0.35\mathrm{kpc}`$), and assumed the dust to have the same radial and half the vertical scalelength. The dust properties were assumed to be uniform, and we assumed a dust temperature of $`20\mathrm{K}`$ at the solar radius, proportional to the one fifth power of the local stellar density elsewhere. We then found, for each position on the celestial sphere, the ratio of $`100\mu \mathrm{m}`$ emission to column density of dust, and normalised this ratio by the Boulanger and Perault value for the NGP.
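A minimal numerical sketch of this toy model is given below. The ingredients (double-exponential dust and stars, 20 K at the solar radius, $`T\rho _{\ast }^{1/5}`$, normalisation at the pole) follow the description above; the integration depth, step size and the use of a pure Planck spectrum at $`100\mu \mathrm{m}`$ are our own simplifying assumptions:

```python
import numpy as np

R0, RSC, ZSC = 8.5, 3.5, 0.35            # kpc; dust has same radial, half the vertical scale
h, kB, c = 6.626e-27, 1.381e-16, 3e10    # CGS
nu = c / 100e-4                          # 100 micron

def planck(T):
    return nu**3 / np.expm1(h * nu / (kB * T))   # B_nu up to constants

def ratio(l_deg, b_deg, smax=30.0, n=3000):
    """100um emission-to-dust-column ratio along galactic (l, b)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    s = np.linspace(1e-3, smax, n)               # distance along the line of sight, kpc
    x, y, z = R0 - s*np.cos(b)*np.cos(l), s*np.cos(b)*np.sin(l), s*np.sin(b)
    r = np.hypot(x, y)
    n_dust = np.exp(-r / RSC - np.abs(z) / (ZSC / 2))
    rho_star = np.exp(-r / RSC - np.abs(z) / ZSC)
    T = 20.0 * (rho_star / np.exp(-R0 / RSC))**0.2   # 20 K at the Sun, T ~ rho_*^(1/5)
    return np.trapz(n_dust * planck(T), s) / np.trapz(n_dust, s)

norm = ratio(0.0, 90.0)                  # normalise at the pole, analogous to the text
for l in (0.0, 90.0, 180.0):
    print(l, ratio(l, 5.0) / norm)       # hotter towards the centre, cooler anticentre
```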
Subsequent to our definition of the PSCz catalogue and mask, Schlegel et al. (1998) used the IRAS ISSA and COBE DIRBE data to make beautiful high resolution maps of the dust emission and extinction in our Galaxy. Comparison of our own maps and those of Schlegel et al. shows that (a) our temperature corrections across the galaxy are too small, and (b) there are several areas where cooler (16 K) dust extends to $`|b|\sim 30^{\circ }`$.
Overall, our extinction estimates may be in error by a factor 1.5-2, in the sense of being too low towards the anticentre and too high towards the galactic centre. This has two principal effects. (a) Towards the anticentre, galaxies may fall below the $`b_J=19.5^m`$ limit because of extinction. A subsequent program of $`K^{\prime }`$-imaging has revealed a handful of extra nearby galaxies, and a significant number at redshifts 0.05 and greater. (b) The definition of the optically-selected catalogue in Section 2.3 depends on the extinction corrections. We will have selected too many optical galaxies towards the centre and too few towards the anticentre. However, our matching of addscan and PSC fluxes was designed to be robust to exactly this sort of error. To test for any effect, we have compared the simple, number-weighted dipole of the surface distribution of PSCz galaxies, (a) using the normal catalogue and (b) using purely PSC-derived fluxes as described in section 2.3. The dipole changes by $`1^{\circ }`$ in direction and 2.5% in amplitude when we do this, showing that any bias caused by incorrect extinctions is negligible.
### 4.4 Flux accuracy and uniformity
The error quoted for PSC $`60\mu \mathrm{m}`$ fluxes for genuine point sources is just 11% (ES VII.D.2). We are more concerned with any non-fractional random error component, since this may lead to Malmquist-type biases. The analysis in Lawrence et al. (1999), based on the $`12/60\mu \mathrm{m}`$ colours of bright stars, finds an absolute error of $`0.059\pm 0.007\mathrm{Jy}`$, in addition to a fractional error of 10%. Stars are better point sources than galaxies, so this may underestimate the absolute error for our sample. We have made an independent estimate, by investigating the scatter between our PSC and addscan fluxes. Of course PSC and addscan fluxes start from the same raw data, but the processing and background estimation are entirely different, while the actual photon noise is negligible. The absolute component of the scatter is $`0.06`$–$`0.07\mathrm{Jy}`$, of which an estimated 20% comes from the error in the addscan flux, confirming that the Lawrence et al. value is a reasonable estimate of the PSC absolute error component.
This error estimate leads to Malmquist biases in the source densities of order 5-6% (differential, at the flux limit) and 2-3% (cumulative) (Murdoch, Crawford and Jauncey 1973). 2HCON sky may have biases as large as 10% (differential) and 4% (cumulative), so the non-uniformity introduced into the catalogue should be no worse than 5% (differential) and 2% (cumulative). This is borne out by the source counts in Figures 7 and 8, but note that in any noise-limited catalogue such as the PSC, where the noise varies across the catalogue, there will always be a regime where Malmquist biases are masked by incompleteness.
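The order of magnitude of such biases is easy to check by simulation. The sketch below is our own illustration (with an assumed Euclidean slope for the counts, which is not stated above): it scatters true fluxes with the 10% fractional plus $`\sim 0.06\mathrm{Jy}`$ absolute errors discussed above and counts sources above the 0.6 Jy limit:

```python
import numpy as np

rng = np.random.default_rng(2)

# True fluxes from assumed Euclidean counts, N(>S) ~ S^-1.5, above a 0.1 Jy floor
S_true = 0.1 * (1.0 + rng.pareto(1.5, size=2_000_000))
# Observed fluxes: 10% fractional error plus a 0.06 Jy absolute error component
S_obs = S_true * (1 + 0.10 * rng.normal(size=S_true.size)) \
        + 0.06 * rng.normal(size=S_true.size)

lim = 0.6
n_true, n_obs = np.sum(S_true > lim), np.sum(S_obs > lim)
print(f"cumulative Malmquist excess at {lim} Jy: {(n_obs - n_true) / n_true:+.1%}")
# a few percent, of the same order as the cumulative estimates quoted above
```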
Lawrence et al. (1999) find evidence for slight non-linearity in the PSC flux scale, but this should not affect any analysis except evolutionary studies. The effect on evolution was considered by Saunders et al. (1997).
Non-uniformity can also come about as a result of simple changes in the absolute flux scale. A small fractional error in the calibration will on average lead to an error half again as large in the source density. There are three obvious reasons for flux scale variations:
1) The absolute calibration of the third HCON was revised by a few percent after the release of the PSC (and FSS). The effect of this revision on PSC fluxes would be to change those in the 75% of the sky covered by 3HCONs by 1%.
2) Whenever the satellite entered the South Atlantic Anomaly, radiation hits altered the sensitivity of the detectors, and data taken during such times were discarded (ES III.C.4). However, this leaves the possibility of Malmquist effects, and data taken near the boundaries of the SAA potentially suffer residual effects. We have investigated this by checking the source counts for declinations $`-40^{\circ }<\delta <-10^{\circ }`$, where the data taken are most likely to be affected. We see no evidence for any variation (Figure 7) above the few percent level in the source counts.
3) Whenever the satellite crossed the Galactic Plane, or other very bright sources, the detectors suffered from hysteresis. This effect was investigated by Strauss et al. (1990). They found that the likely error is typically less than 1% and always less than 2.2%. This is confirmed by the constancy, to within a few percent, of our galaxy source counts for identified galaxies in the PSCz as a function of $`I_{100}`$.
Overall, differential source densities across the sky due to incompleteness, Malmquist effects and sensitivity variations are not believed to be greater than a few percent anywhere at high latitudes for $`z<0.1`$. Tadros et al. (1999) found an upper limit to the rms amplitude of large scale, high latitude spherical harmonic components to the density field of the PSCz of 3%. Since this is close to the expected variations due to clustering, the variations due to non-uniformity in the catalogue must be smaller than this. At lower latitudes, variations are estimated to be no greater than 10% for $`z<0.05`$.
## 5 Comparison with the $`1.2\mathrm{Jy}`$ survey
The $`1.2\mathrm{Jy}`$ survey of 5500 IRAS galaxies (Fisher et al. 1995) used looser colour criteria than the PSCz, and as such acts as a valuable check on the efficacy of our selection procedure from the PSC. We find that the $`1.2\mathrm{Jy}`$ survey contains 11 galaxies that have been excluded by our selection criteria. It contains a further 5 galaxies that satisfy the PSCz criteria, but are not included due to programming and editing errors, either during construction of the QIGC survey or its various extensions and supplements to form the PSCz. Extrapolating these numbers to lower fluxes, we can expect that about 50 PSC galaxies are missing altogether from the catalogue.
Conversely, the conservative selection criteria for the $`1.2\mathrm{Jy}`$ survey led to much greater levels of contamination by cirrus and other Galactic sources than in the PSCz. In the $`1.2\mathrm{Jy}`$ survey, these were eliminated by visual inspection of sky survey plates; inevitably real galaxies occasionally got thrown out by mistake, especially at low latitudes. We have found, within the PSCz area, 110 galaxies which are misclassified as Galactic in the $`1.2\mathrm{Jy}`$ survey, and we have obtained redshifts for 67 of them. Most of the remainder are fainter than our $`b_J=19.5^m`$ cutoff. To date, a further 117 galaxies outside the PSCz area are known to be classified as Galactic in the $`1.2\mathrm{Jy}`$ survey. These were found as part of the ongoing "Behind the Plane" extension of the PSCz to lower latitudes described in more detail in Saunders et al. (1999).
## 6 Ancillary information
Along with the PSC data, the catalogue also contains the following information: POSS/SRC plate and position on that plate; the RA, $`\delta `$, offset, diameters and magnitude of the best match from the digitised sky survey plates; name, magnitude and diameters from UGC/ESO/MCG, PGC name, de Vaucouleurs type and HI widths where available; most accurate available redshift and our own redshift measurement; classification as galaxy/cirrus/etc; estimated $`I_{100}`$ and extinction; addscan flux and width when treated as an extended source, and point source filtered addscan flux.
## 7 Accessing the catalogue
The data is available from the CDS catalogue service (http://cdsweb.u-strasbg.fr/Cats.html). Full and short versions of the catalogue, maskfiles, description files, format statements and notes, are also available via the PSCz web site
(http://www-astro.physics.ox.ac.uk/~wjs/pscz.html), or by anonymous ftp from ftp://ftp-astro.physics.ox.ac.uk/pub/users/wjs/pscz/.
## 8 Acknowledgements
The PSC-z survey has only been possible because of the generous assistance from many people in the astronomical community. We are particularly grateful to John Huchra, Tony Fairall, Karl Fisher, Michael Strauss, Marc Davis, Raj Visvanathan, Luis DaCosta, Riccardo Giovanelli, Nanyao Lu, Carmen Pantoja, Tadafumi Takata, Kouichiro Nakanishi, Toru Yamada, Tim Conrow, Delphine Hardin, Mick Bridgeland, Renee Kraan-Kortweg, Amos Yahil, Ron Beck, Esperanza Carrasco, Pierre Chamaraux, Lucie Bottinelli, Gary Wegner, Roger Clowes and Brent Tully, for the provision of redshifts prior to publication or other software or data. We are also grateful to the staff at IPAC and the INT, AAT, CTIO and INOAE telescopes. We have made very extensive use of the ZCAT, NED, LEDA, Simbad, VLA NVSS and STSCI DSS databases.
# The INT Wide Field Imaging Survey (WFS)
## 1 Introduction
Astronomy is basically an observational science, and the development and advancement of the subject has relied heavily on surveys of the sky at optical wavelengths to expand our knowledge of the observable Universe. However, despite the considerable advances in optical detector technology, very little improvement has been made in large scale surveys beyond those available in the 1950s when the Palomar Sky Survey was carried out. A photographic plate taken on a 1.2-m Schmidt telescope is sky limited in about 1 hour but is only 1-2% efficient. Thus our current best wide field optical sky surveys are equivalent to no more than a $`\sim `$60 second glance at the Universe with modern CCDs using a 1m telescope. In spite of the inherent limitations of photographic plates, they are still used for major scientific programs. In recent years, this has been primarily due to the availability of online digital atlas images and catalogues based on this photographic material, e.g. http://www.ast.cam.ac.uk/~apmcat, http://skyview.gsfc.nasa.gov/.
In an effort to rectify this apparently dismal situation, and to provide the necessary underpinning imaging requirements for the 8m telescope era, the 2.5m Isaac Newton Telescope on La Palma is being used to carry out a series of wide field imaging programs under the generic title of the INT Wide Field Survey (WFS) project. The WFS project consists of a series of independent survey programs with distinct aims, as we outline below. The WFS project takes into account both surveys like SDSS (Gunn & Weinberg 1995), which also uses a 2.5m telescope and has an exposure time of $`\sim `$60 seconds, and other CCD based surveys such as those that are being carried out by NOAO (http://www.noao.edu/) and ESO (http://www.eso.org/, Nonino et al. 1999). The unique elements of the INT survey are: (i) optimal choice of fields so that most are easily visible from telescopes in both hemispheres; (ii) inclusion of U band; (iii) large area; (iv) temporal information; (v) good overlapping coverage with existing deep radio surveys, i.e. FIRST, WENSS; (vi) wide RA coverage optimised for efficient follow-up; (vii) choice of SDSS bandpasses for longevity.
This article briefly describes the Wide Field Survey (WFS) program. This is a peer reviewed survey program that aims to provide deep high quality CCD data to the community both quickly and in a convenient form.
## 2 The INT Wide Field Survey
The concept of the WFS originated in 1991 within the context of the science case for a CCD mosaic for the Isaac Newton Telescope. Formal approval for the survey program began with a proposal to the ING Board in October 1997. The primary goal was to exploit the excellent capabilities of a recently completed CCD based mosaic that effectively filled the unvignetted focal plane of the 2.5m Isaac Newton Telescope (see Figure 1). The immediate aim was to carry out a major CCD based multi-colour survey in a timely fashion over a period of 4–5 years and allow instant and easy access to the processed data to facilitate its rapid scientific exploitation.
The WFS proposal was approved by the ING Board in October 1997 with a subsequent "Announcement of Opportunity" closing in March 1998. Conditions of solicitation included that the survey data is available to all UK and NL based astronomers in near real-time. Raw data is typically available as taken, whilst the pipeline reduced data is available after one month. Subsequently the raw and processed data is available to the rest of the astronomical community after one year. Pipeline processing of the data is the responsibility of the Cambridge Astronomical Survey Unit (http://www.ast.cam.ac.uk/).
A WFS International Review Panel approved three main programmes in the first year, allocating five–six "dark/grey" weeks per semester to the WFS. In June 1999 a first year review carried out by PATT and the International Review Panel confirmed the continuation of the first year WFS programmes into 2000.
## 3 The INT Wide Field Camera
The INT Wide Field camera (Ives, Tulloch & Churchill 1996, see also paper in these proceedings) is mounted at the prime focus (f/3) of the 2.5m Isaac Newton telescope on La Palma, Canary Islands. The camera consists of a close packed mosaic of 4 thinned EEV42 2k$`\times `$4k CCDs. The layout is shown in Figure 1. The CCDs have a pixel size of 13.5 microns corresponding to 0.33″/pixel. The edge to edge limit of the mosaic, neglecting the $`\sim `$1′ inter-chip spacing, is 34.2′. In normal survey mode we use a step size in RA and Dec of 30′ and 20′ respectively. This provides $`\sim `$10% overlap on all edges and means that the partially vignetted chip is overlapped completely to aid photometric calibration.
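As a quick sanity check (ours, assuming the four chips are full 2048$`\times `$4096 devices, which is not stated explicitly above), the per-pointing sky coverage implied by these numbers is:

```python
# Sky area per pointing from the mosaic geometry quoted above
npix = 4 * 2048 * 4096          # assumed full-frame chips
scale = 0.33                    # arcsec per pixel
area_deg2 = npix * scale**2 / 3600.0**2
print(f"~{area_deg2:.2f} deg^2 per pointing")   # ~0.28 deg^2
```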
## 4 The current WFS Programmes
The main science programmes chosen include a "wide shallow" programme, a smaller deep area programme, and a programme to address temporal variability. The specific programs are described briefly below:
### 4.1 The INT Wide Angle Survey (WAS): co-PIs, McMahon, Irwin, Walton
This is the largest programme approved and includes sub-projects ranging from determination of cosmological parameters (via SN Type Ia) to searches for solar system objects. The underlying philosophy of the WAS survey is encompassed in Table 2, where we summarise the time requirements of over 20 topical scientific programs. If all these programs were carried out under the normal PI-based time allocation procedures, the total on-sky time required is almost 600 nights. However, if the programs are combined they can be executed in around 100 nights. By merging the requirements of the various programs we end up with a highly efficient observing strategy. An important aspect of the reduced time requirements is that the projects will also be executed quickly.
The limiting magnitudes and wavebands being used are summarised in Table 1. Figure 2 shows how these limits transform onto the observational plane for extragalactic studies. The main survey regions are listed in Table 3. A number of smaller regions are also being surveyed as determined by calibration requirements and the observing schedule. In addition, we are adding bands to other programs so that we can increase the areal coverage of multi-colour data at low cost. These fields are listed in Table 4.
The WAS program is the umbrella programme for the WFS project and leads the coordination efforts with the other programmes on, for instance, field and filter selection, to maximise scientific return of the WFS project. All programs remain autonomous during this procedure so that the peer reviewed science goals are protected.
Some of the science goals of the WAS are outlined below:
* Galactic Studies: including both halo and disk white dwarf luminosity functions, which are relevant both to DM models and to independent calibrations of the Hubble time; stellar density distributions towards the NGP, to improve extant $`K_z`$ determinations of the local DM; stellar counts towards the anti-centre and other widely spaced directions, to determine the stellar warp and refine models of Galactic structure.
* Clusters of Galaxies: the aim is to determine the space density and cluster-cluster correlation function over the range $`0.5<z<1.0`$. Galaxy clusters are the largest gravitationally-bound structures in the Universe, and the study of their abundance and evolutionary history with look-back-time places strong constraints on cosmological parameters and the primordial power spectrum that gave rise to the observed large scale structure.
* Radio Sources & Radio Galaxies: Deep optical identification of radio sources allows: accurate counts of different types of host along mJy tracks in the P–z plane; studies of radio source luminosity evolution; multi-band investigation of giant-E standard candles; the largest known sample of low-luminosity RGs with good photometry; large-scale structure from photometric redshifts and cell counts in redshift slices; accurate optical positions of FIRST sources for WYFFOS/2DF followup.
* Intermediate redshift Type 1a Supernovae: Whilst dramatic progress has been made in the determination of the fundamental cosmological parameters ($`\mathrm{\Omega }`$, $`\mathrm{\Lambda }`$) in the last two years, the analysis is now limited by systematic errors. Identifying $`\sim `$20 Type 1a Supernovae in the critical range $`0.1<z<0.4`$ will allow a detailed treatment of these systematic errors.
The WAS also incorporates two independent distinct science programmes in the spring semester centred on Virgo and the North Galactic Pole. In fact, in the proposal submission procedure many co-Is of the WAS program submitted discrete proposals.
* A multicolour survey of the Virgo Cluster: PI, Davies. This aims to obtain the galaxy luminosity function (LF) of the Virgo cluster as a function of colour and position in the cluster.
* The Millennium Galaxy Catalogue (MGC): PI, Driver. The MGC will provide a complete and local galaxy catalogue. This survey is being carried out in the B band and lies in a region of sky covered by the 2DF redshift survey.
### 4.2 A Deep UBVRI Imaging Survey with the WFC: PI, Dalton
This programme is carrying out deep imaging of 10 deg$`^2`$ to a limiting magnitude of B=26 and I=24.5. It will enable the study of the evolution of galaxy clustering as a function of colour at faint magnitudes and provide a catalogue of rich galaxy clusters at intermediate redshifts.
### 4.3 Faint Sky Variability Survey (FSVS): PI, van Paradijs
This programme is searching an area of $`\sim `$10 deg$`^2`$, studying photometric and astrometric variability on scales of one hour to a year to a magnitude of V=25. Example areas of investigation include: the evolution of specific galactic populations (e.g. CVs, RR Lyraes, halo AGB stars, brown & white dwarfs, Kuiper-Edgeworth belt objects, sdB stars), the structure of the galactic halo, statistics of optical transients related to $`\gamma `$-ray bursts, and deep proper motion studies.
## 5 Choice of survey regions and photometric bands
In order to maximise the scientific value of the WFS data, the WAS survey is concentrating on fields that are equatorial, so that follow-up can be carried out from telescopes in both hemispheres. This simple consideration doubles the scientific return of the survey. We also deliberately centred some of the fields on Landolt photometric calibration fields, i.e. SA95 and SA114.
The choice of photometric wavebands was relatively straightforward. We decided to use bands similar to the SDSS bands (Fukugita et al. 1996). Note our u and z bands are not identical to the SDSS bands. See the WFS WWW pages for further details. The choice of the SDSS bands means that the INT surveys will be directly comparable with work carried out as part of the SDSS. Interestingly, the SDSS g band is very close to the UKST $`B_J`$ band. However, manufacturing delays have meant that we had to start the survey using the standard INT filter set.
## 6 Survey Coverage to Date
Survey data is being obtained on a monthly basis and thus a summary of the data obtained will soon be out of date. A complete summary of observations obtained is kept on-line at http://www.ast.cam.ac.uk/~wfcsur/status.
The situation at the end of May 1999 was that $`\sim `$60 deg$`^2`$ had been observed in the first ten months of the survey.
## 7 Data Products
The data products currently available for access include:
* Observing logs built from the FITS headers
* A SYBASE WWW user interface to access the raw and processed data
* Library bias frames, flatfield frames, defringing frames and non-linearity corrections
* Colour equations for all filters
* Processed 2D image maps, with a full record of processing steps in the FITS headers
* Astrometric calibration, with the World Coordinate System in the FITS headers
* Photometric calibration – zero points and extinction
In the coming months the data products provided will be expanded after some quality control to include:
* Object catalogues, generated using APM-based routines (Irwin, 1985) and SExtractor (Bertin & Arnouts 1996).
## 8 Further Information
Further information about the INT Wide Field Imaging Survey can be obtained at the Isaac Newton Group's WWW page (www.ing.iac.es/WFS) or the UK mirror (www.ast.cam.ac.uk/ING/WFS). In addition, the Wide Angle Survey has a WWW page at www.ast.cam.ac.uk/~rgm/int_sur/. Further details of the pipeline processing are contained in a paper by Irwin and Lewis (these proceedings).
#### Acknowledgements
The authors would like to acknowledge the unsung builders of the INT Wide Field Camera, the various peer review committee members who have guided the project since its inception in 1991 and the encouragement of their many colleagues during the long gestation period of the INT Wide Field Survey project.
# Exact Zeros of the Partition Function for a Continuum System with Double Gaussian Peaks
## I Introduction
It has been a central theme since the discovery of statistical mechanics to understand how the analytic partition function for a finite-size system acquires a singularity in the thermodynamic limit if the system undergoes a phase transition. The Lee-Yang theory has partly furnished the answer to this quest. They proposed a scenario where the zeros of the partition function form a line and cut across the real axis. They showed that the discontinuity in the first-order derivative of the partition function is proportional to the angular density of zeros, using an analogy with two-dimensional electrostatics. Then they proved this scenario for Ising-like discrete systems under very general conditions. They could show that the zeros were distributed on a unit circle in this case.
There have been many attempts to generalize the "Lee-Yang circle theorem" ever since. Fisher initiated a study of zeros of the partition function in the complex temperature plane, and extensive studies of zeros of the partition function in the complex temperature plane followed. In these works authors considered continuous phase transitions or critical points.
The conceptual basis of the Lee-Yang circle theorem was finally clarified in ref. by considering the first-order transition of a system with more general continuous degrees of freedom, with a doubly peaked probability distribution for the order parameter. Since the Ising-like models considered by Lee and Yang would be described by two symmetric Gaussian peaks in the thermodynamic limit, this result provides a simple conceptual basis for Lee-Yang unit circle theorem. Furthermore it is a generalization since general asymmetric configurations were considered, whose zeros form a curve which is not a unit circle in general.
One interesting problem to consider is what happens when the positions of the two Gaussian peaks coincide. Since this is the limit where the latent heat $`l`$ vanishes, one might naively expect that the system would exhibit a second-order transition of the Ehrenfest classification, where there is a finite discontinuity in the specific heat but no latent heat (Fig.1).
However, when we consider the exact zeros of the partition function for the system with two Gaussian peaks, we find there is a branch of zeros other than the one described in ref. For $`l\ne 0`$, this branch can be neglected, since for generic systems the Gaussian approximation breaks down at this point due to the contributions from the higher order cumulants. However, for $`l=0`$ this is no longer true and we have to take this branch into account. Because of this, the system exhibits a critical behavior instead of the second-order transition.
## II Locus and Density of Zeros
We consider a canonical partition function of a continuum system, which can be written as in ref.
$`\mathcal{Z}(t)\equiv Z(\beta )/Z(\beta _0)=\int _{-\infty }^{\infty }e^{tx}f(x)dx`$ (1)
where the probability density function is given by
$$f(x)=\mathrm{\Omega }(x/\beta _0)e^{-x}/\int _{-\infty }^{\infty }\mathrm{\Omega }(x/\beta _0)e^{-x}dx,$$
(2)
$`t=1-\beta /\beta _0`$, $`x=\beta _0E`$, $`\mathrm{\Omega }(E)`$ is the density of states at energy $`E`$ and $`\beta _0`$ is the inverse of the transition temperature we are interested in. When one is interested in a field driven phase transition, one may replace the energy $`E`$ by the magnetization $`M`$ and the inverse temperature $`\beta `$ by the magnetic field $`H`$ in the case of magnetic systems, and so on. We investigate the locus of zeros on the plane of complex temperature $`z=re^{i\theta }\equiv e^t`$.
Now consider the case of interest in this paper, when $`f(x)`$ is given by summation of two Gaussian peaks up to normalization,
$$f(x)=\frac{1}{\sqrt{2\pi }\sigma _1}\mathrm{exp}\left[-\frac{(x-\mu _1)^2}{2\sigma _1^2}\right]+a\frac{1}{\sqrt{2\pi }\sigma _2}\mathrm{exp}\left[-\frac{(x-\mu _2)^2}{2\sigma _2^2}\right]$$
(3)
These two peaks represent two different phases of the system. When $`\mu _1\ne \mu _2`$ the system undergoes the first-order transition. Let us denote the two Gaussian functions by $`f_1(x)`$ and $`f_2(x)`$. Since we can relabel $`f_1`$ and $`f_2`$, and redefine $`a`$, we may assume $`m\equiv (\mu _2-\mu _1)/2>0`$ without loss of generality. We then have
$`\mathcal{Z}(t)`$ $`=`$ $`{\displaystyle \int _{-\infty }^{\infty }}e^{tx}[f_1(x)+af_2(x)]dx`$ (4)
$`=`$ $`\mathrm{exp}(\psi _1)+\mathrm{exp}(\psi _2)`$ (5)
where
$`\mathrm{exp}(\psi _1(t))`$ $`\equiv `$ $`{\displaystyle \int e^{tx}f_1(x)dx}`$ (6)
$`\mathrm{exp}(\psi _2(t))`$ $`\equiv `$ $`{\displaystyle \int e^{tx}af_2(x)dx}.`$ (7)
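Since $`f_1`$ and $`f_2`$ are Gaussians, the integrals in Eqs. (6) and (7) are elementary (Gaussian moment-generating functions), giving explicitly

$$\psi _1(t)=\mu _1t+\frac{1}{2}\sigma _1^2t^2,\qquad \psi _2(t)=\mathrm{ln}a+\mu _2t+\frac{1}{2}\sigma _2^2t^2,$$

so that $`\psi _2(t)-\psi _1(t)=\mathrm{ln}a+2mt+\stackrel{~}{\sigma }^2t^2`$; this is what reduces the zero condition below to a quadratic equation in $`t`$.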
From the expressions above, one can easily see that the locations of zeros are given by the solutions to the following equation as in ref.
$$\psi _2(t_k)-\psi _1(t_k)=2iI_k\equiv i(2k+1)\pi $$
(8)
where $`k`$ runs through all the integers. For the double Gaussian distribution the equation above can be rewritten as
$$iI_k=\frac{1}{2}\mathrm{ln}a+mt_k+\frac{\stackrel{~}{\sigma }^2}{2}t_k^2$$
(9)
where $`\stackrel{~}{\sigma }^2=(\sigma _2^2-\sigma _1^2)/2`$. This equation is quadratic and easily solved. The solutions are
$`t_k^\pm `$ $`=`$ $`-{\displaystyle \frac{m}{\stackrel{~}{\sigma }^2}}\pm {\displaystyle \frac{\sqrt{2}}{|\stackrel{~}{\sigma }|}}\sqrt{iI_k-{\displaystyle \frac{1}{2}}\mathrm{ln}a+{\displaystyle \frac{m^2}{2\stackrel{~}{\sigma }^2}}}`$ (10)
$`=`$ $`-{\displaystyle \frac{m}{\stackrel{~}{\sigma }^2}}\pm {\displaystyle \frac{|I_k|}{|\lambda _k\stackrel{~}{\sigma }|}}\pm i\mathrm{sign}(I_k)|{\displaystyle \frac{\lambda _k}{\stackrel{~}{\sigma }}}|`$ (11)
where
$$\lambda _k\equiv \sqrt{\sqrt{\left(\frac{m^2}{2\stackrel{~}{\sigma }^2}-\frac{\mathrm{ln}a}{2}\right)^2+I_k^2}-\left(\frac{m^2}{2\stackrel{~}{\sigma }^2}-\frac{\mathrm{ln}a}{2}\right)}$$
(12)
Note that there are two branches of solutions. One passes through the transition point $`t=0`$ in the thermodynamic limit and the other does not, so the latter was implicitly discarded in ref. As we will see, the second branch closes in toward $`t=0`$ as we take the limit $`l\to 0`$.
Now we redefine the variables
$`m`$ $`=`$ $`{\displaystyle \frac{Nl}{2}}`$ (13)
$`I_k`$ $`=`$ $`{\displaystyle \frac{Ny_k}{2}}`$ (14)
$`\stackrel{~}{\sigma }^2`$ $`=`$ $`{\displaystyle \frac{N\mathrm{\Delta }c}{2}}`$ (15)
and consider the thermodynamic limit $`N\to \infty `$. We then get
$`\mathrm{ln}(r_k)`$ $`\equiv `$ $`\mathrm{Re}(t_k)=-{\displaystyle \frac{m}{\stackrel{~}{\sigma }^2}}\pm {\displaystyle \frac{|I_k|}{|\stackrel{~}{\sigma }\lambda _k|}}`$ (16)
$`=`$ $`-{\displaystyle \frac{l}{\mathrm{\Delta }c}}\pm {\displaystyle \frac{|y_k|}{\sqrt{\sqrt{(\frac{l^2}{2})^2+y_k^2(\mathrm{\Delta }c)^2}-\frac{l^2}{2}}}}`$ (17)
$`\theta _k`$ $`\equiv `$ $`\mathrm{Im}(t_k)=\pm \mathrm{sign}(I_k)|{\displaystyle \frac{\lambda _k}{\stackrel{~}{\sigma }}}|`$ (18)
$`=`$ $`\pm \mathrm{sign}(y_k)\sqrt{\sqrt{{\displaystyle \frac{1}{4}}({\displaystyle \frac{l}{\mathrm{\Delta }c}})^4+{\displaystyle \frac{y_k^2}{(\mathrm{\Delta }c)^2}}}-{\displaystyle \frac{l^2}{2(\mathrm{\Delta }c)^2}}}`$ (19)
The terms involving $`\mathrm{ln}a`$ are finite size corrections and vanish in this limit. We solve the second equation of (19) in terms of $`y_k`$ to get
$$y_k=\pm \theta _kl\sqrt{1+\theta _k^2(\frac{\mathrm{\Delta }c}{l})^2}$$
(20)
We substitute (20) into the first equation of (19) to get the locus of zeros,
$$r_\pm =\mathrm{exp}\left[-\frac{l}{\mathrm{\Delta }c}\pm \frac{l}{|\mathrm{\Delta }c|}\sqrt{1+\theta _k^2\left(\frac{\mathrm{\Delta }c}{l}\right)^2}\right]$$
(21)
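The algebra leading to Eq. (21) is easy to check numerically. The sketch below is our own illustration (parameter values are arbitrary): it computes the exact finite-$`N`$ zeros from Eq. (10) with $`a=1`$ and compares them with the locus (21); for $`a=1`$ only the $`\mathrm{ln}a`$ term was dropped, so the two agree to essentially machine precision:

```python
import numpy as np

N, l, dc = 400.0, 0.5, 1.0           # assumed values: size, latent heat, specific-heat jump
m, s2t = N * l / 2, N * dc / 2        # m and sigma-tilde^2, as defined in the text
a = 1.0

k = np.arange(200)
Ik = (2 * k + 1) * np.pi / 2          # from 2 i I_k = i(2k+1) pi, Eq. (8)
disc = m**2 - 2 * s2t * (0.5 * np.log(a) - 1j * Ik)
for sgn in (+1, -1):
    t = (-m + sgn * np.sqrt(disc)) / s2t        # exact zeros, Eq. (10)
    theta, lnr = t.imag, t.real
    lnr_eq21 = -l / dc + sgn * (l / abs(dc)) * np.sqrt(1 + theta**2 * (dc / l)**2)
    print(sgn, np.max(np.abs(lnr - lnr_eq21)))  # ~1e-13: floating-point agreement
```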
We can also obtain the angular density of zeros. By taking the formal derivative with respect to the integer $`k`$, we get
$`\left|{\displaystyle \frac{d\theta _k}{dk}}\right|`$ $`=`$ $`{\displaystyle \frac{1}{2\sqrt{\sqrt{\frac{1}{4}(\frac{l}{\mathrm{\Delta }c})^4+\frac{y_k^2}{(\mathrm{\Delta }c)^2}}-\frac{l^2}{2(\mathrm{\Delta }c)^2}}}}`$ (23)
$`\times {\displaystyle \frac{2y_k\pi }{N(\mathrm{\Delta }c)^2\sqrt{\frac{1}{4}(\frac{l}{\mathrm{\Delta }c})^4+\frac{y_k^2}{(\mathrm{\Delta }c)^2}}}}`$
$`=`$ $`{\displaystyle \frac{2\pi l\sqrt{1+\theta ^2(\frac{\mathrm{\Delta }c}{l})^2}}{N[l^2+2(\mathrm{\Delta }c)^2\theta ^2]}}`$ (24)
Therefore, the angular densities of zeros $`g_\pm `$ of two branches are given by
$$2\pi g_\pm (\theta )\equiv \frac{2\pi }{N}\left|\frac{dk}{d\theta }\right|=l\frac{1+2(\frac{\mathrm{\Delta }c}{l})^2\theta ^2}{\sqrt{1+(\frac{\mathrm{\Delta }c}{l})^2\theta ^2}}$$
(25)
## III First-Order Transitions
We will now consider both loci of zeros of the partition function at the first-order transition. Note that all the quantities above depend only on the ratio $`l/\mathrm{\Delta }c`$ except for an overall factor of $`l`$ in front of $`g(\theta )`$. (When both $`l`$ and $`\mathrm{\Delta }c`$ are zero these quantities are ill-defined and we can no longer use the Gaussian approximation; one then has to take into account higher order cumulants.) When $`l/\mathrm{\Delta }c\ne 0`$, we get the first-order transition. This is the case considered in ref. There only the locus of zeros near the transition point $`t=0`$ was treated carefully, since these were the only things of interest. In fact, for generic systems we expect that the Gaussian approximation breaks down away from the transition point $`t=0`$ due to the higher-order cumulants.
Let us elaborate on this point. The loci of zeros cross the real axis at $`t=0`$ and $`t=-2l/\mathrm{\Delta }c`$, indicating there are two phase transitions. This can be easily understood. The probability density at arbitrary temperature is given by
$`e^{tx}\mathrm{\Omega }(x)`$ $`=`$ $`{\displaystyle \frac{e^{tx}e^{-(x-x_1)^2/(2\sigma _1^2)}}{\sqrt{2\pi }\sigma _1}}+{\displaystyle \frac{e^{tx}e^{-(x-x_2)^2/(2\sigma _2^2)}}{\sqrt{2\pi }\sigma _2}}`$ (26)
$`=`$ $`{\displaystyle \frac{e^{-(x-x_1-\sigma _1^2t)^2/(2\sigma _1^2)+x_1t+\sigma _1^2t^2/2}}{\sqrt{2\pi }\sigma _1}}+{\displaystyle \frac{e^{-(x-x_2-\sigma _2^2t)^2/(2\sigma _2^2)+x_2t+\sigma _2^2t^2/2}}{\sqrt{2\pi }\sigma _2}}`$ (27)
We see that for nonzero $`t`$ the positions of the peaks are shifted, and also the relative weights change. We see that the position of the peak with larger $`\sigma _i`$ gets shifted by a larger amount for a given temperature change, consistent with the fact that it has larger specific heat. The weight of the peak 1 relative to the peak 2 is given by:
$$w_1/w_2\equiv \mathrm{exp}\left(\frac{\sigma _1^2-\sigma _2^2}{2}t^2+(x_1-x_2)t\right)$$
(28)
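As a consistency check (ours, with arbitrary numbers), the exponent in Eq. (28) vanishes at $`t=0`$ and at exactly one other temperature, which is the second crossing point discussed below:

```python
import numpy as np

x1, x2, s1, s2 = 0.0, 1.0, 1.0, 1.5     # assumed peak positions and widths (s2 > s1)
w_ratio = lambda t: np.exp(0.5 * (s1**2 - s2**2) * t**2 + (x1 - x2) * t)

t_star = 2 * (x1 - x2) / (s2**2 - s1**2)     # nonzero root of w1/w2 = 1
print(t_star, w_ratio(0.0), w_ratio(t_star)) # -1.6, 1.0, 1.0
# with l = x2 - x1 and dc = s2^2 - s1^2 (N = 1 units), t_star equals -2*l/dc
```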
By construction, at $`t=0`$, the weights of the two Gaussian peaks are equal. Assuming $`\sigma _2>\sigma _1`$, we see that for $`t>0`$ the peak labeled by 2 dominates. When $`t`$ becomes slightly negative, then the peak 1 dominates. Also the positions of the Gaussian peaks get shifted to the left, but the peak 2 moves faster. For $`t<(x_1-x_2)/(\sigma _2^2-\sigma _1^2)`$ the peak 2 goes to the left of the peak 1. At $`t=-2l/\mathrm{\Delta }c`$, the weight of the peak 2 becomes equal to that of 1 again, and the peak 2 is dominant for $`t<-2l/\mathrm{\Delta }c`$. Therefore at this temperature there is another first-order transition with latent heat $`l`$ and specific heat change $`\mathrm{\Delta }c`$. We can make similar arguments for $`\sigma _1>\sigma _2`$. This process is depicted in Fig.2.
This mechanism works only if we trust that the Gaussian form given in (3) is exact. However, for a generic system, this is just a leading truncation of the cumulant expansion
$$\mathrm{exp}[Nf(x)]=\mathrm{exp}[N(f(x_0)+\frac{f^{\prime \prime }(x_0)}{2}(\mathrm{\Delta }x)^2+\frac{f^{\prime \prime \prime }(x_0)}{3!}(\mathrm{\Delta }x)^3+\mathrm{\cdots })],$$
(29)
so the higher order cumulants can be ignored only when $`\mathrm{\Delta }x\ll O(1/\sqrt{N})`$. But at the first-order transition at $`t=-2l/\mathrm{\Delta }c`$, the system is dominated by the peaks which are located at distances of $`O(1)`$ from the positions of the peaks at $`t=0`$. Therefore the higher-order cumulants would contribute, and we cannot trust the picture above. However, when $`\mathrm{\Delta }x\ll O(1/\sqrt{N})`$, or when the higher order cumulants are extremely small for some reason, the transition at $`t=-2l/\mathrm{\Delta }c`$ cannot be neglected anymore. In particular, in the limit $`l\to 0`$, the second branch touches the first branch, preventing the system from exhibiting the second-order transition.
The behaviors of the loci of zeros for various values of $`\mathrm{\Delta }c/l`$ are depicted in Fig.1,2 and 3 in the complex $`z\equiv \mathrm{exp}(t)`$ plane. $`t_+`$ is the outer curve and $`t_{-}`$ is the inner curve. Only the zeros in the first Riemann sheet are shown. When $`\mathrm{\Delta }c/l>0`$ $`(<0)`$, $`t_+`$ $`(t_{-})`$ passes through $`t=0`$, and becomes a unit circle as one approaches the symmetric limit, $`\mathrm{\Delta }c/l\to 0`$. This is consistent with Lee-Yang's unit circle theorem. The other branch $`t_{-}`$ $`(t_+)`$ degenerates to the origin (goes to infinity). The loci intersect the real axis orthogonally as long as $`l\ne 0`$.
## IV $`l\to 0`$ limit and the critical behavior
The limit $`l/\mathrm{\Delta }c=0`$ may be considered as the opposite limit from the symmetric case $`\mathrm{\Delta }c/l=0`$. Now the two loci $`t_\pm `$, which were separate when $`l\ne 0`$, touch each other at $`\theta =0`$ and form a single curve (Fig.6). Their loci are given by
$$r_\pm =\mathrm{exp}(\pm |\theta |)$$
(30)
The density of zeros is
$$2\pi g(\theta )=2\pi (g_+(\theta )+g_{-}(\theta ))=4\mathrm{\Delta }c|\theta |.$$
(31)
Note that $`g(\theta )`$ is zero at $`\theta =0`$, consistent with the fact that the first derivative of the partition function has no discontinuity. The loci intersect the real axis at angles of 45 degrees, and the intermediate region dominated by peak 1 with the smaller specific heat, which used to separate two domains dominated by peak 2 with the larger specific heat, touches the real axis at just one point. Therefore the system is dominated by the peak 2 for $`t\ne 0`$, and the peak 1 has the same weight as the peak 2 only at $`t=0`$, when their positions coincide. The qualitative behaviors of the two peaks for $`t>0`$ and $`t<0`$ are the same as the ones depicted in Fig.2(a) and Fig.2(d).
Therefore the system is exhibiting a critical behavior where it is just on the verge of making a phase transition. However, in contrast to many familiar examples of critical behavior, the specific heat near $`t=0`$ remains finite instead of blowing up. At this stage it is not yet clear whether there is an example of a discrete system whose critical behavior in the thermodynamic limit can be described by this model.
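Numerically, the $`l\to 0`$ limit of Eq. (25), summed over the two branches, indeed approaches Eq. (31); a one-line check (ours, with arbitrary $`\theta `$ and $`\mathrm{\Delta }c`$):

```python
import numpy as np

# 2 pi (g_+ + g_-) from Eq. (25), summed over both branches
g_sum = lambda th, l, dc: 2 * l * (1 + 2 * (dc / l)**2 * th**2) / np.sqrt(1 + (dc / l)**2 * th**2)

th, dc = 0.3, 1.0
for l in (1e-1, 1e-2, 1e-3):
    print(l, g_sum(th, l, dc), 4 * dc * abs(th))  # converges to 4*dc*|theta|, Eq. (31)
```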
###### Acknowledgements.
This work was supported in part by the Ministry of Education, Republic of Korea through a grant to the Research Institute for Basic Sciences, Seoul National University, in part by the Korea Science Foundation through Research Grant to the Center for Theoretical Physics, Seoul National University, and in part by the Japanese Society for Promotion of Science through the Institute of Physics, University of Tokyo.
# Charge Fluctuations on Membrane Surfaces in Water
## I Introduction
In recent years there has been a growing interest in electrostatic systems that are dominated by ion fluctuations and ion distributions around larger charged objects. In some of these systems one finds attraction between like-charged objects and direct electrostatic contributions in systems that are over-all neutral.
In this paper we will generalize some theoretical results for systems of neutral surfaces (membranes) that nonetheless interact electrostatically via ion fluctuations and correlations. These predictions are relevant to the experimental work done both on biological systems and on artificial systems where charges are introduced in order to improve membrane characteristics. Examples are the charged membranes in membrane-DNA complexes used for gene transfection and the formation of equilibrium bilayer vesicles from mixed charged lipids.
Recently it has been shown that charge fluctuations can lead to attractions between over-all neutral surfaces. However, the system treated was the somewhat artificial case of uniform layers where the interacting surfaces separate regions of the same dielectric. In this paper we specifically focus on the role of the dielectric discontinuities in systems of lipid membranes in an aqueous solution and how they affect these interactions. In Sec. II we introduce a model system for the membrane which includes two surfaces charged with both positive and negative mobile ions (charged lipid heads at the bilayer surface) that are over-all neutral. The system is treated within the Debye-Hückel model for a two-dimensional salt solution. We calculate the interaction between these two surfaces resulting from the fluctuations and correlations of the mobile charges, and find that the resulting attraction depends in a non-trivial way on the dielectric discontinuity between lipid and water.
## II Interaction Between Two Salty Surfaces
In this section we calculate the effective interaction between two surfaces that contain mobile charges but are over-all neutral. This is a model system for mixed charged lipid membranes or for membranes that are very highly charged to the extent that their counter-ions are restricted to a near-by layer that is thin enough to be considered as a two-dimensional surface. Pincus and Safran have calculated this interaction within the Debye-Hückel approximation for a uniform system, i.e. a system with no dielectric discontinuities. We will follow their method, while introducing the dielectric contributions to this model.
### A Model
The Debye-Hückel model is an expansion of the energy to second order in the charge density fluctuations and includes both the electrostatic and entropic contributions due to these fluctuations:
$$\delta H=\int d\vec{\rho }\int d\vec{\rho }^{\prime }\left[\frac{1}{2}\underset{i=1,2}{\sum }\left(\frac{\delta (\vec{\rho }-\vec{\rho }^{\prime })}{\sigma _0}+\varphi (\vec{\rho }-\vec{\rho }^{\prime },z=0)\right)\delta \sigma _i(\vec{\rho })\delta \sigma _i(\vec{\rho }^{\prime })+\varphi (\vec{\rho }-\vec{\rho }^{\prime },z=d)\delta \sigma _1(\vec{\rho })\delta \sigma _2(\vec{\rho }^{\prime })\right]$$
(1)
The self energy of each of the surfaces separately is given by the first two terms while the third term is the interaction term between charges on the different surfaces. $`\sigma _{1,2}`$ are the charge densities on the surfaces (the index $`i=1,2`$ denotes the surface number) while $`\rho `$ is the in-plane coordinate and $`z`$ is the coordinate perpendicular to the surface. The first term ($`\delta `$ function) is the entropic contribution from the charge density fluctuations in both surfaces. In this expression we have assumed, for the sake of simplicity and without taking away from the generality of the treatment, that the charge fluctuations are only due to density fluctuations of one type of charge while the other sign does not fluctuate and therefore does not contribute to the free energy to this order. Thus the entropic contribution can be written in terms of the total charge density fluctuations on each surface, where $`\sigma _0`$ is the average charge density of each species (separately). The electrostatic contributions, $`\varphi `$, both between charges in the same surface ($`z=0`$) and between charges on the opposite surfaces ($`z=d`$), are not trivial because of the dielectric discontinuities that are formed by these surfaces (Fig. 1). The discontinuities reflect the fields, thus creating image charges in the region outside the membrane. Because this system has two such discontinuities on either side of the membrane, each image charge is reflected over and over again, so that we have an infinite number of charges over which to sum when calculating the potential. We require expressions both for the interaction between charge fluctuations in the same surface (they will also contribute to the inter-surface interaction via the reflections) and fluctuations on opposite surfaces. The interaction potential of two charges that are in the same surface is:
$$\varphi (\vec{\rho }-\vec{\rho }^{\prime },z=0)=\frac{e^2}{\overline{\epsilon }}\left(\frac{1}{|\vec{\rho }-\vec{\rho }^{\prime }|}+\frac{\epsilon _m}{\overline{\epsilon }}\sum _{n=1}^{\infty }u^{2n-1}\frac{1}{\sqrt{|\vec{\rho }-\vec{\rho }^{\prime }|^2+(2nd)^2}}\right)$$
(2)
While the interaction energy for two charges on the two different sides of the membrane is given by:
$$\varphi (\vec{\rho }-\vec{\rho }^{\prime },z=d)=\frac{e^2\epsilon _m}{\overline{\epsilon }^2}\sum _{n=1}^{\infty }u^{2n-2}\frac{1}{\sqrt{|\vec{\rho }-\vec{\rho }^{\prime }|^2+((2n-1)d)^2}}$$
(3)
Here $`\epsilon _{w,m}`$ are the dielectric constants of water and membrane lipid respectively, $`\overline{\epsilon }=\frac{\epsilon _w+\epsilon _m}{2}`$, $`u=\frac{\epsilon _m-\epsilon _w}{\epsilon _m+\epsilon _w}`$ and $`d`$ is the membrane thickness.
The sums in Eq.2 and 3 are easily performed if we use the identity $`\int _0^{\infty }\mathrm{exp}(-qz)J_0(qr)dq=\frac{1}{\sqrt{r^2+z^2}}`$ to transform them into simple geometric series. The resulting energy in momentum space has the form:
$$\delta H=\underset{q}{\sum }\left[\frac{1}{2}\left(|\delta \sigma _1(q)|^2+|\delta \sigma _2(q)|^2\right)A(q)+\delta \sigma _1(q)\delta \sigma _2(q)B(q)\right]$$
(4)
The coefficients are: $`A(q)=\left(\frac{1}{\sigma _0}+\frac{2\pi \langle l\rangle }{q}+\frac{2\pi \delta l}{q}\frac{\mathrm{exp}(-2qd)}{1-u^2\mathrm{exp}(-2qd)}\right)`$ and $`B(q)=\frac{2\pi l_m}{q}\frac{\mathrm{exp}(-qd)}{1-u^2\mathrm{exp}(-2qd)}`$. Here we have defined three different "Bjerrum lengths": $`\langle l\rangle =\frac{e^2}{\overline{\epsilon }k_BT}`$, $`\delta l=\langle l\rangle \frac{2(\epsilon _m-\epsilon _w)\epsilon _m}{\overline{\epsilon }^2}`$ and $`l_m=\langle l\rangle \frac{\epsilon _m}{\overline{\epsilon }}`$.
At this point it is worth noting the differences between this expression and that which is found for the uniform case of no dielectric variations: The differences are expressed through the various effective Bjerrum lengths. In the uniform case there is only one such length scale, which would be equal to $`\langle l\rangle `$ with $`\overline{\epsilon }=\epsilon `$. In that case $`l_m=\langle l\rangle =l_B`$ and $`\delta l=0`$. Hence the differences enter not only in the way they change the interaction amplitude through $`l_m`$ and $`\langle l\rangle `$, but also by adding an additional interaction term that is $`d`$ dependent, but which is also proportional to the dielectric difference, $`\epsilon _m-\epsilon _w`$, through $`\delta l`$, and affects the resulting interaction in a non-trivial way, as will be seen below.
The Gibbs free energy for these fluctuations is now given by the logarithm of the partition function:
$$\frac{G}{k_BT}=-\mathrm{log}\left\{\mathrm{\Pi }_q\int d\sigma _q\mathrm{exp}(-\delta H/k_BT)\right\}=\underset{q}{\sum }\mathrm{log}\left\{A(q)^2-B(q)^2\right\}$$
(5)
The pressure between the two surfaces due to charge fluctuations as a function of membrane thickness is given by the negative derivative of the Gibbs free energy with respect to the thickness:
$$\mathrm{\Pi }(d)=\frac{k_BT}{A}\underset{q}{\sum }q\mathrm{exp}(-2qd)\frac{\frac{\delta l}{\langle l\rangle }\left(\lambda q+1+\frac{\delta l}{\langle l\rangle }\mathrm{exp}(-2qd)\right)-\left(\frac{l_m}{\langle l\rangle }\right)^2}{\left(\lambda q+1+\frac{\delta l}{\langle l\rangle }\mathrm{exp}(-2qd)\right)^2-\left(\frac{l_m}{\langle l\rangle }\right)^2\mathrm{exp}(-2qd)},$$
(6)
where we have introduced a Gouy-Chapman length scale: $`\lambda =\frac{1}{2\pi \langle l\rangle \sigma }`$. In integral form we find the expression:
$$\mathrm{\Pi }(d)=k_BT\frac{1}{2\pi d^3}\int dxx^2\mathrm{exp}(-2x)\frac{\frac{\delta l}{\langle l\rangle }\left(\frac{\lambda }{d}x+1+\frac{\delta l}{\langle l\rangle }\mathrm{exp}(-2x)\right)-\left(\frac{l_m}{\langle l\rangle }\right)^2}{\left(\frac{\lambda }{d}x+1+\frac{\delta l}{\langle l\rangle }\mathrm{exp}(-2x)\right)^2-\left(\frac{l_m}{\langle l\rangle }\right)^2\mathrm{exp}(-2x)}$$
(7)
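Equation (7) is straightforward to evaluate by quadrature. The sketch below is our own illustration (parameter values are arbitrary, the result is quoted in units of $`k_BT`$, and with these conventions negative values mean attraction); it also shows the sign reversal when the roles of the two dielectrics are exchanged:

```python
import numpy as np
from scipy.integrate import quad

def pressure(d, lam, em, ew):
    """Pi(d) from Eq. (7), in units of k_B T."""
    eb = 0.5 * (em + ew)                 # epsilon-bar
    dl = 2 * (em - ew) * em / eb**2      # delta-l / <l>
    lm2 = (em / eb)**2                   # (l_m / <l>)^2
    def f(x):
        g = lam / d * x + 1 + dl * np.exp(-2 * x)
        return x**2 * np.exp(-2 * x) * (dl * g - lm2) / (g**2 - lm2 * np.exp(-2 * x))
    return quad(f, 0.0, 50.0)[0] / (2 * np.pi * d**3)

print(pressure(d=1.0, lam=0.1, em=2.0, ew=80.0))   # lipid slab in water: negative (attraction)
print(pressure(d=1.0, lam=0.1, em=80.0, ew=2.0))   # roles reversed: positive (repulsion)
```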
### B Results and Discussion
The most convenient way to analyze the results of the previous section is by looking at the various limits of the integral, Eq. 7. We have three dimensionless parameters that determine the behavior of this integral and thus the $`d`$ and $`\epsilon _w,\epsilon _m`$ dependence of the pressure. The first is the ratio between the two length scales in the problem:
$$\frac{\lambda }{d}=\frac{1}{2\pi \langle l\rangle \sigma d},$$
which parameterizes the strength of the charging in the membrane relative to the distance between the surfaces. The other two parameters are ratios of the dielectric constants and also their relative difference:
$$\frac{\delta l}{\langle l\rangle }=\frac{2(\epsilon _m-\epsilon _w)\epsilon _m}{\overline{\epsilon }^2}\quad \mathrm{and}\quad \frac{l_m}{\langle l\rangle }=\frac{\epsilon _m}{\overline{\epsilon }}$$
The first of these two ratios reflects the effect of image charges on the fluctuation induced interactions, while the second measures the relative weakening or strengthening of the primary interactions between fluctuations on the two sides due to the difference in dielectric response of the material between them.
We have three different parameters with which we find two main regimes and one sub-regime. The first regime is reached when we take the limit $`\frac{\lambda }{d}\ll 1`$ (high ion density: the average distance between ions $`\ll \sqrt{d\langle l\rangle }`$):
$$\mathrm{\Pi }(d)\simeq \frac{k_BT}{\pi d^3}\left(\frac{\delta l}{\langle l\rangle }-\left(\frac{l_m}{\langle l\rangle }\right)^2\right)\sim -\frac{k_BT}{d^3}\frac{\epsilon _m\left(2\epsilon _w-\epsilon _m\right)}{\overline{\epsilon }^2}.$$
(8)
The $`1/d^3`$ behavior remains the same throughout this regime, although the sign of the pressure changes from being attractive for $`\epsilon _w>\epsilon _m`$ (as is expected for a lower dielectric between the surfaces and is the case for a biomembrane) and even $`\epsilon _m`$ slightly bigger than $`\epsilon _w`$, becoming repulsive only when the internal dielectric, $`\epsilon _m`$, is at least twice as big as the external one, $`\epsilon _w`$.
The next main regime is the opposite one, when $`\frac{\lambda }{d}\gg 1`$. Here we distinguish between two regimes: The first is when the dielectric contrast is not very big (compared with $`\left(\frac{l_m}{\langle l\rangle }\right)^2\frac{d}{\lambda }`$) and in this case the behavior is, as expected, similar to that found for the uniform case in this limit:
$$\mathrm{\Pi }(d)\simeq -\left(\frac{l_m}{\langle l\rangle }\right)^2\frac{k_BT}{d\lambda ^2}\sim -\left(\frac{\epsilon _m}{\overline{\epsilon }^2}\right)^2\frac{\sigma ^2}{d}\frac{e^4}{k_BT}.$$
(9)
In this case the pressure is inversely proportional to the temperature (through the $`\lambda `$ dependence) and is argued to be a correlation, rather than a fluctuation, effect. The dielectric effects enter in the coefficient $`\left(\frac{l_m}{\langle l\rangle }\right)^2`$ and reduce the interaction as the internal dielectric (lipid) becomes smaller than the external one (water) and the dielectric contrast increases. However, as this contrast increases another effect becomes important: the effect of the image charges, which dominate when $`|\frac{\delta l}{\langle l\rangle }|`$ is not small compared with $`\left(\frac{l_m}{\langle l\rangle }\right)^2\frac{d}{\lambda }`$, and we find:
$$\mathrm{\Pi }(d)\simeq \frac{\delta l}{\langle l\rangle }\frac{k_BT}{d^2\lambda }\sim -\frac{(\epsilon _w-\epsilon _m)\epsilon _m}{\overline{\epsilon }^3}\frac{\sigma e^2}{d^2}.$$
(10)
Here once again we find that the interaction will change sign when the internal and external dielectrics reverse roles. However, the dominant effect is that the power law changes from $`d^{-1}`$ to $`d^{-2}`$, and therefore for smaller $`d`$ this effect becomes more important than the previous result, Eq. 9. Note that in this regime the pressure is independent of T and is therefore neither a pure fluctuation nor a correlation effect. Moreover, it is linearly dependent on the surface charge density, $`\sigma `$ (and not quadratic in it), indicating that the correlations lead to an average charge distribution which is temperature independent and the result is an interaction between each charge and its effective image charge which does not include, to first order, the rest of the mobile charges.
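To see the $`d^{-2}`$ scaling emerge from the full integral, one can reuse the `pressure` routine from the sketch above deep in the image-dominated regime (again with arbitrary illustrative numbers):

```python
# Image-charge regime of the previous sketch: lam/d >> 1 and strong contrast
lam, em, ew = 1.0e3, 2.0, 80.0
vals = [pressure(d, lam, em, ew) for d in (1.0, 2.0, 4.0)]
print(vals[0] / vals[1], vals[1] / vals[2])   # both ~4: Pi ~ 1/d^2, as in Eq. (10)
```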
Because the membrane thickness is typically of order $`40\AA `$, the limit of $`\frac{\lambda }{d}\gg 1`$ can only be achieved for very low charging of the membrane, and in this limit it might not be strong enough to compete with the Van der Waals interaction; in any case the stronger power dependence on $`d`$ might not be easy to detect. However, note that if we reverse the membrane and water roles, and we look instead at the inter-membrane interactions, say in a stack, we find that this last result might be more important. Because the inter-membrane distances in stacks can be relatively small, this limit is easily achieved even in moderately charged membranes (one of every 5-6 lipids is charged). Because the dielectric constants are now reversed, the ratios change but we remain in this last limit where the reflections dominate the interaction. Moreover, due to the reversal of the dielectrics, the interaction (between membranes across the water) is repulsive and therefore the interplay with the Van der Waals attraction becomes more interesting. It is especially meaningful in this case because, unlike the lipid material in the membrane, in some experimental set-ups water can flow in and out of the stack and therefore the stack separation can be more effectively controlled by this interaction.
In summary, we have shown that fluctuation-induced interactions are strongly dependent on the dielectric properties of the system not only quantitatively but also qualitatively. The lower dielectric constant of lipid will reduce the strength of the interaction between the two surfaces of the membrane but will also change the scaling with the membrane thickness. When looking at interactions in a stack the reverse happens: the interaction is enhanced by a factor of $`\frac{\epsilon _w}{\overline{\epsilon }}\approx 2`$ with respect to the uniform case and we might also be able to see the effects of dielectric reflections when looking at the inter-membrane interactions.
###### Acknowledgements.
We are grateful to Sam Safran and Helmut Schiessel for useful discussions. R.M. was supported by the National Institute of Health under award No. 1 F32 GM19971. This work was supported by the National Science Foundation under awards No. 8-442490-21587 and 8-442490-22213, and partially by the MRL program of the National Science Foundation under award No. DMR96-32716.
# Microscopic chaos and diffusion
## I Introduction
It is generally assumed that diffusion in macroscopic systems is a result of chaos on a microscopic scale, following Einstein's 1905 explanation for Brownian motion, that is, the movement of a colloidal particle due to thermal fluctuations in the surrounding fluid. We describe chaos, diffusion and related properties in detail in Sec. III. We define microscopic chaos (usually shortened to just "chaos") in terms of unpredictability as quantified by positive Lyapunov exponents or a positive Kolmogorov-Sinai entropy. (Specifically, a positive Lyapunov exponent of a dynamical system means that initially similar states of the system, as measured by distance in phase space, separate exponentially fast; the closely related property of positive Kolmogorov-Sinai entropy means that a finite amount of information per unit time is needed to construct the future phase space trajectory of the system, knowing the infinite past trajectory to an arbitrary (but finite) resolution.) Later we will find that properties based on periodic orbits also provide a useful description of chaotic aspects of the microscopic dynamics. Note that we restrict the use of the term "chaotic" to its usual meaning, that is a positive Kolmogorov-Sinai entropy; while the properties of periodic orbits are important, we do not classify systems as chaotic or nonchaotic based on these properties. We call a system diffusive when the mean square displacement of a particle is proportional to the time, or a distribution of particles satisfies the diffusion equation.
A recent experiment of Gaspard et al. , described below, purports to show that the diffusion of a Brownian particle is due to microscopic chaos. While we believe that Brownian motion (including both the Brownian particle and the solvent) is most likely chaotic, our simulations using nonchaotic models, in a brief Comment and in greater detail here, lead to the same results as found in the experiment, so that no experimental proof of microscopic chaos has been given in Ref. . Here we explore the question of what kind of experimental measurements or data analysis might be required to identify microscopic chaotic dynamics, and the connection between microscopic chaos and diffusion.
We consider generalizations of models of diffusion due to Lorentz and Ehrenfest (Figs. 1-3), where a single point particle undergoes elastic collisions with a fixed arrangement of circular or square scatterers in two dimensions, respectively. Collisions with the circular scatterers of the (chaotic) Lorentz gas lead to exponential separation of nearby trajectories, that is, a positive Lyapunov exponent, while collisions with the square scatterers of the (nonchaotic) Ehrenfest model lead to at most linear separation of nearby trajectories, and the Lyapunov exponents are all zero. Actually, we consider models with three types of scatterers, all of which are non-overlapping: circles as in the Lorentz gas, squares oriented such that their diagonals are parallel to the coordinate axes and with only four particle velocity directions as in the Ehrenfest wind-tree model, and squares of arbitrary orientations with arbitrary particle velocity directions as a generalization of the Ehrenfest model.
For each of these three models, we consider the following three cases: an infinite number of randomly placed fixed scatterers as in the original models of Lorentz and Ehrenfest, a small number of fixed scatterers in an elementary cell subject to periodic boundary conditions, and an arrangement of a finite number of fixed scatterers enclosed by absorbing boundaries. Note that a model with periodic boundary conditions can be viewed as an infinite periodic system. For the purposes of studying properties such as the mean free time, it is better to view the model as a finite system with periodic boundaries, but to compute the diffusion coefficient in terms of the growth of the mean square displacement with time, it is necessary to view the model as an infinite periodic system.
A link between chaos and diffusion involves a fundamental question of statistical mechanics, since one has to make a connection between the microscopic and macroscopic behavior of large systems. The models we consider here are very special from a physical point of view, but very attractive from a mathematical point of view, because there is only one moving particle, rather than a large number as in, for example, Brownian motion. These can be considered bona fide statistical mechanical models if the large number of scatterers are included, as long as their lack of motion is irrelevant to the questions we ask. We believe that our models incorporate the essential features needed for a discussion of microscopic chaos and diffusion.
We now describe the experiment and analysis of Gaspard et al. in more detail. The position of a Brownian particle in a fluid was measured at regular intervals of $`1/60\mathrm{s}`$. The experimental time series data was then interpreted using standard techniques of chaotic time series analysis, suggesting a positive lower bound on the Kolmogorov-Sinai entropy, hence microscopic chaos. The method they used, due to Cohen and Procaccia , was adapted from an approach pioneered by Grassberger and Procaccia . In this method one analyzes the distribution of recurrences, i.e. instances where the system approximately retraces part of its previous trajectory in phase space for a certain length of time, leading to a determination of the Kolmogorov-Sinai entropy. There is detailed mathematical justification for the method (under assumptions such as the length of the data set being sufficient to approximate infinite time limits, see Sec. IV), but the idea is very simple: recurrences give useful information about the predictability of the system. If long recurrences occur very often, it is easy to predict the future of the system from previous instances similar to the recent part of the trajectory, so the system has a small or zero Kolmogorov-Sinai entropy, whereas a rapid decay of the frequency of recurrences with their length indicates a high degree of microscopic chaos. Thus, the Cohen-Procaccia method used by Gaspard et al. can in principle be used to calculate the Kolmogorov-Sinai entropy from a time series, but certain mathematical conditions and limits apply, sometimes restricting its applicability in practice. Gaspard et al. concluded from their analysis that the Kolmogorov-Sinai entropy of the system containing the Brownian particle was positive, and hence that the microscopic dynamics was chaotic.
Subsequently, we showed that the same approach applied to an "identical" time series generated by a numerical simulation of the nonchaotic (infinite, fixed orientation) Ehrenfest model yielded virtually identical results. This is surprising, since the collisions with the flat sides of square scatterers do not lead to positive Lyapunov exponents, as described above. In Ref. we attributed the discrepancy to the physical issue of time scales: the time interval between measurements ($`1/60\mathrm{s}`$) was vastly greater than the typical collision times of the Brownian particle with the solvent particles in the fluid (approximately $`10^{-12}\mathrm{s}`$). While this is certainly an experimental problem, it leaves open the question of whether in principle a similar analysis with a much higher resolution could ever prove from experimental data that the microscopic dynamics is chaotic or not.
One aim of the current work is to shed light on this and related questions. We perform the same Cohen-Procaccia analysis on all the models mentioned above, finding that the same results are obtained even for a model with a periodic array of squares. This rules out the possibility that the apparently positive value of the Kolmogorov-Sinai entropy is due to a loss of information associated with the randomness of the many scatterers. We discuss the relevant time scales in our models, finding a single microscopic time scale, and we can then qualitatively explain the results of the Cohen-Procaccia algorithm when the time step is shorter or longer than this time. We conclude that due to the rarity of recurrences in a diffusive system, the determination of chaoticity in such a system requires vastly longer time series than are practical for experimental or computational work, even when the measurements are taken at microscopic time and distance scales.
At that point we return to the question of how microscopic dynamics manifests itself in a time series, as well as in the macroscopic behavior. We develop a new method, which we call "almost periodic recurrences", that selects a particular class of recurrences different from those used in the Cohen-Procaccia method. Our approach is designed to distinguish systems based on their periodic orbit properties, which are related to, but not equivalent to, the usual definition of chaos as positive Kolmogorov-Sinai entropy. We find that, in contrast to the Cohen-Procaccia method, this method can distinguish between the chaotic Lorentz model and the nonchaotic Ehrenfest model and thus reveals at least one way in which microscopic chaos is manifest, although on macroscopic scales both models exhibit diffusion. The basic idea of our method is simple: almost periodic recurrences are related to periodic orbits, which have very different properties in chaotic and nonchaotic systems, so a search for periodic orbits using recurrences may distinguish between the two classes of systems. The periodic orbits of chaotic systems (such as the Lorentz gas) are exponentially unstable, so many repetitions close to a periodic orbit are unlikely. In contrast, the periodic orbits of nonchaotic systems (for example our wind-tree models) can be power law unstable, allowing many repetitions.
The markedly different properties of periodic orbits in the Lorentz and Ehrenfest models lead to the following important observations about diffusion in finite geometries. Solutions of the diffusion equation with absorbing boundaries exhibit exponential decay, corresponding to the probability of a particle remaining in the system for a given time. The randomly oriented wind-tree model has periodic orbits with power law escape, so that the escape from the whole system cannot be exponential at long times. The fixed orientation wind-tree model has no periodic orbits at all, so the particle cannot remain in the system longer than a fixed maximum time. Both of these are examples of situations in which the infinite model satisfies the diffusion equation, by having a Gaussian distribution with the mean square displacement proportional to the time, but the corresponding finite model does not satisfy the diffusion equation.
The outline of this paper is as follows: first we introduce our models (Sec. II), and discuss in detail their relevant time scales and chaotic and diffusive properties (Sec. III). Then we discuss the Cohen-Procaccia method and its inability to distinguish chaotic from nonchaotic dynamics in our case (Sec. IV). Finally we introduce our alternative "almost periodic recurrence" method and discuss the properties of the periodic orbits (Sec. V), and their consequences for finite systems (Sec. VI). We conclude with a discussion of our results and some open questions (Sec. VII).
## II The models
### A Definitions
The Ehrenfest wind-tree model describes a point "wind" particle moving in straight lines in the plane, punctuated by elastic collisions with fixed square scatterers, the "trees". In the original model (Fig. 1), the trees are oriented with their diagonals along the $`x`$ and $`y`$ axes, and the wind particle moves with a fixed velocity in only four possible directions, along these axes. The trees are located at random positions, with a given number density, and not overlapping. There is an overlapping version we will not consider; it exhibits anomalous (sub)-diffusion and non-Gaussian distributions .
Secondly, we consider the case of randomly oriented scatterers (Fig. 2), which allows all possible wind particle directions. In both the fixed and randomly oriented cases all Lyapunov exponents are zero, because a collision with a flat scatterer causes only a linear separation of nearby trajectories; it does not cause the exponential separation associated with convex curved scatterers.
Thirdly, we consider the (two dimensional) Lorentz gas (Fig. 3), that is, a model where the scatterers are circular, and we choose the area and number density of the scatterers to correspond to the wind-tree models. This model is known to have normal diffusive properties and is used here as a comparison to examine the effects of positive versus zero Lyapunov exponents.
### B Numerical details
For the numerical simulations, we take the velocity of the wind particle to be $`1`$. For the wind-tree models we take the side length $`l`$ equal to $`\sqrt{2}`$. This is done for computational convenience; space is divided into unit squares (of side length 1) aligned with the coordinate axes, so that at most one scatterer can be contained in each unit square. For the Lorentz gas the circular scatterers have a radius $`R=\sqrt{2/\pi }`$ to give them the same area $`l^2=2`$ as the squares. In all cases the region considered is $`L`$ unit squares across in both the $`x`$ and $`y`$ directions, with $`L`$ up to $`3500`$, and periodic boundary conditions are used. To simulate an infinite system, a large value of $`L`$ (up to 3500) is used; for the maximum time of $`10^6`$ time units the particle never travels far enough to sample the periodic boundary conditions. To simulate a periodic system, a small size such as $`L=4`$ is used; we always consider the case of finite horizon, so that the time between collisions is bounded. To simulate a system with absorbing boundaries, an intermediate size such as $`L=20`$ is used; the absorbing boundaries are one distance unit inside the edge of the system to avoid the possibility of the particle colliding with a scatterer as well as with its periodic image.
We use a number of scatterers $`N`$ equal to $`L^2/4`$. This means that the number density $`n=N/L^2`$ is $`1/4`$ in all cases and the proportion of the total area covered by the scatterers is $`\rho =N(l/L)^2=1/2`$. Thus we have an intermediate density of scatterers, which is most convenient from the point of view of the simulations and also the time scales. For a significantly lower density, an infinite horizon is much more likely in all but the largest systems. For a significantly higher density (limited by a maximum of $`\rho =1`$) the Metropolis algorithm (see below) used to position the scatterers is very much slower. Also, both for low and high density systems, there is an additional time scale (see section III A), which complicates the analysis.
We now describe the method used to position the fixed scatterers. Each configuration of random positions and orientations is obtained from a version of the Metropolis algorithm used in Ref. . The square scatterers of the wind-tree models are placed initially at points belonging to a square lattice defined such that they do not overlap, that is, with their centers at integer coordinates $`(m,n)`$ such that $`m+n`$ is even, and their orientation as in Fig. 1. Not every lattice site is then occupied, depending on the number density. We use a random initial placement of the scatterers on the lattice to accelerate the convergence of the shifting algorithm, described next. The random placement leads to large scale fluctuations in density; these fluctuations would occur anyway as a result of the small shifts, but only after a very large number of iterations. Each scatterer is shifted and rotated by a small random amount in turn, typically a few tenths of a distance unit and of order ten degrees respectively. If the shift causes an overlap, the move is rejected and the previous configuration is used. The procedure is repeated with different shifts and rotations. A large fixed number of shifts is attempted, sufficient for reasonable measures of the correlation between scatterers to have long converged to their final values.
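To make the placement procedure concrete, the following sketch implements the shift-and-rotate loop just described. It is a minimal, unoptimized illustration, not the code used for our simulations: periodic images are ignored, the unit-square bookkeeping is replaced by an all-pairs separating-axis overlap test, and the step sizes, sweep count and function names are our own illustrative choices.

```python
import math, random

L = 20                 # linear size of the cell (images across the boundary ignored here)
N = L * L // 4         # number density n = 1/4
SIDE = math.sqrt(2.0)  # side length l = sqrt(2), so each square has area 2

def corners(center, angle):
    """Corners of a square of side SIDE centered at `center`, rotated by `angle`."""
    cx, cy = center
    h, c, s = SIDE / 2.0, math.cos(angle), math.sin(angle)
    return [(cx + c * dx - s * dy, cy + s * dx + c * dy)
            for dx, dy in ((h, h), (-h, h), (-h, -h), (h, -h))]

def overlap(sq1, sq2):
    """Separating-axis test for two convex quadrilaterals."""
    for poly in (sq1, sq2):
        for i in range(4):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % 4]
            ax, ay = y2 - y1, x1 - x2          # edge normal: candidate axis
            p1 = [ax * x + ay * y for x, y in sq1]
            p2 = [ax * x + ay * y for x, y in sq2]
            if max(p1) < min(p2) or max(p2) < min(p1):
                return False                   # a separating axis exists
    return True

# initial placement: random subset of the non-overlapping lattice (m + n even)
lattice = [(m, n) for m in range(L) for n in range(L) if (m + n) % 2 == 0]
centers = random.sample(lattice, N)
angles = [0.0] * N

# shifting algorithm: small random translation + rotation, reject any overlap
for sweep in range(200):
    for i in range(N):
        new_c = (centers[i][0] + random.uniform(-0.3, 0.3),
                 centers[i][1] + random.uniform(-0.3, 0.3))
        new_a = angles[i] + math.radians(random.uniform(-10.0, 10.0))
        sq = corners(new_c, new_a)
        if all(not overlap(sq, corners(centers[j], angles[j]))
               for j in range(N) if j != i):
            centers[i], angles[i] = new_c, new_a   # accept the move
```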
If the circles of the Lorentz gas are placed on the same lattice as the squares they overlap slightly; at the chosen density of $`\rho =1/2`$ there is enough freedom to allow them to find non-overlapping locations, so that the final configuration is non-overlapping, as we have checked. At higher densities circles may not find non-overlapping locations, and a close packed triangular lattice should then be used instead.
### C Notation
We now define the notation we will use to distinguish between the various models. There are a very large number of possible parameters that could appear in general, referring to the shape of the scatterers (squares, circles, etc.), different densities and different shaped boundary conditions to name a few, but for conciseness we limit the notation to include only those models we use here, in particular keeping the areal density always at $`\rho =1/2`$. Using square brackets \[,\] to denote different cases, our notation is \[F,R,L\]\[$`\mathrm{\infty }`$,\[P,A\]$`L`$\]. The first symbol \[F,R,L\] denotes the type and orientation of the scatterers, F for fixed oriented squares, R for randomly oriented squares and L for circles (the Lorentz gas). The next symbol \[$`\mathrm{\infty }`$,P,A\] gives information about the boundary conditions, either $`\mathrm{\infty }`$, that is, no boundaries, or P for periodic boundary conditions or A for absorbing boundaries. The symbols P and A are followed by the size of the system, $`L`$, which is an even integer. Thus the original Ehrenfest model is denoted F$`\mathrm{\infty }`$, while a periodic Lorentz gas with $`L=8`$ (containing $`N=L^2/4=16`$ scatterers at the density used in our simulations) is denoted LP8 (see Fig. 3). The latter model is generated by random shifts of the scatterers as above, ensuring that periodic images of the scatterers do not overlap. When no confusion can arise, we will sometimes refer to classes of models with a simplified notation; for example we denote all randomly oriented wind-tree models as R, and all Lorentz models with absorbing boundaries as LA.
## III Fundamental properties
### A Time scales
This section collects a number of fundamental properties and results that form a basis for sections IV, V and VI. First we consider the important issue of time scales, which puts the time series analysis methods of sections IV and V into a physical context. Then we consider chaotic properties, giving known results and conjectures for our models; all of the following sections use this. After this we study diffusive properties, obtaining some new results. These are of direct importance to Sec. VI; the time series analysis methods are applied only to our diffusive models (we include here one model exhibiting superdiffusion, see Sec. III C), but the diffusion is somewhat incidental, as the methods can be applied to arbitrary time series. At the end of this section we relate the chaotic and diffusive properties of our models.
One of the difficulties of the Gaspard et al. experiment is that the interval between their measurements (1/60 s) is so much greater than the relevant time scale ($`10^{-12}\mathrm{s}`$) of the dynamics, that is, the time interval determined by collisions between the Brownian particle and the other particles in the fluid. It is thus important to clarify the issue of what time scales are relevant in our models, so that the simulation results can be put in context. There are three relevant time scales here: the time taken for the particle to traverse the length of a scatterer (defined to be of order one time unit; see Sec. II B), the mean free time between collisions $`\overline{\tau }`$, and the time at which the particle begins to notice the finite size $`L`$ of the system (when it is not infinite). It turns out (see Sec. III C) that all the random and periodic models except FP have a well defined diffusion coefficient $`D`$, so in these cases the time taken to reach the boundary is of order $`L^2/D`$.
The mean free time can be calculated exactly (see the Appendix) and is given by $`\overline{\tau }_F=1`$, $`\overline{\tau }_R=\pi /(2\sqrt{2})\approx 1.111`$ and $`\overline{\tau }_L=\sqrt{\pi /2}\approx 1.253`$ for the F, R, and L models respectively, independent of the size of the system (ignoring the case of absorbing boundaries). We have checked these values numerically, and the results agree within their uncertainties of about $`0.002`$. It is clear that all the time scales in these models are of order one time unit, except those defined by the boundary. This means that a time step of one time unit should be sufficient to observe effects due to the microscopic chaos in the time series analysis methods of sections IV and V.
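As a plausibility check (ours, independent of the derivation in the Appendix), the R and L values also follow from the standard mean free path formula for a two-dimensional billiard with isotropic velocity directions, mean free path $`=\pi \times `$ (accessible area)/(scatterer perimeter). With unit speed, and per unit cell area,

$$\overline{\tau }=\frac{\pi (1-\rho )}{ns},\qquad \overline{\tau }_R=\frac{\pi /2}{\frac{1}{4}\cdot 4\sqrt{2}}=\frac{\pi }{2\sqrt{2}},\qquad \overline{\tau }_L=\frac{\pi /2}{\frac{1}{4}\cdot 2\pi R}=\sqrt{\frac{\pi }{2}},$$

where $`s`$ is the perimeter of a single scatterer ($`4l`$ for the squares, $`2\pi R`$ for the circles) and $`n=1/4`$, $`\rho =1/2`$ as above. The F model, with only four velocity directions, falls outside this isotropic formula, consistent with its different value $`\overline{\tau }_F=1`$.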
### B Microscopic dynamical properties
We began with the notion of microscopic chaos as the presence of a positive Lyapunov exponent arising from collisions with strictly convex scatterers, or as a positive Kolmogorov-Sinai entropy quantifying the lack of predictability of a chaotic system. We now want to make these ideas more precise and clarify what is known and what is conjectured about our models.
From a mathematical point of view, there are a large number of dynamical properties that a system may exhibit . Those of interest to us are
1. Ergodicity. We need ergodicity for the time series analysis methods of Secs. IV and V; that is, a single long trajectory is supposed to sample the dynamics of the whole phase space. We either know or conjecture that ergodicity holds in all of our models (see below).
2. Decay of velocity correlations. We note that Eqs. (6,8) below give the diffusion and Burnett coefficients as integrals over velocity autocorrelation functions: the diffusion coefficient involves two-time correlations, while the Burnett coefficient involves four-time correlations. For these coefficients to be defined the appropriate correlations must decay sufficiently quickly. (Mixing, which implies ergodicity, is equivalent to the statement that all two-time correlation functions decay, not just velocity correlations. It is neither necessary nor sufficient for the existence of the diffusion coefficient: it is not necessary because correlations other than velocity correlations need not decay in a diffusive system, and it is not sufficient because the velocity correlation could decay too slowly for the integral to converge.) Numerical evidence for the existence or nonexistence of these coefficients for our models is given in Sec. III C.
3. A positive Lyapunov exponent, corresponding to exponential separation of initially close trajectories, follows from the shape of the scatterers. The Lorentz gas has a positive Lyapunov exponent due to its convex scatterers, while the wind-tree models have all zero Lyapunov exponents due to their (piecewise) flat scatterers. We are not interested in the exact value of the Lyapunov exponent in the Lorentz gas (although it is easy to calculate numerically) because there is no general relation linking it to, for example, the diffusion coefficient. For example, in our wind-tree models the Lyapunov exponent is zero, but the diffusion coefficient remains positive. The same is true for the KS entropy.
4. A positive Kolmogorov-Sinai entropy, which is our definition of chaos, and which the Cohen-Procaccia method (see Sec. IV) is designed to compute. We either know or conjecture that the KS entropy is equal to the sum of the positive Lyapunov exponents in our models (Pesin's theorem, see below). Actually there is at most one positive Lyapunov exponent in these systems.
5. Periodic orbit properties sensitive to chaoticity are described in Sec. V and used in Secs. V and VI.
We now briefly discuss what is known about our models regarding these dynamical properties. The periodic Lorentz gas has exponential decay of (all two-time) correlations , which implies ergodicity and the convergence of the integral in (6) below. For (8) see Ref. . The periodic wind-tree models are generically ergodic (the term generic here denotes a positive measure of scatterer configurations, as opposed to initial positions of the particle) . Pesin's theorem holds for all of these \[F,R,L\]P models; see Refs. for the Lorentz gas and Ref. for the wind-tree models.
The infinite models are much more difficult to treat rigorously because the phase space is not compact. In order to make reasonable conjectures about the above properties, we use the solution of the diffusion equation (2) below, which is verified numerically in Sec. III C for the models \[F,R,L\]$`\mathrm{\infty }`$, to argue that the number of different scatterers hit by the particle in time $`t`$ is proportional to $`t/\mathrm{log}t`$, as follows. In two dimensions, the probability density of the particle being at any point decays as $`1/t`$. This means that in a time $`t`$, the particle comes close (say, within a radius $`\epsilon `$) to the point a number of times proportional to the integral of $`1/t`$, that is $`\mathrm{log}t`$. If the point corresponds to the location of a scatterer, we could say that the particle collides with each scatterer a number of times proportional to $`\mathrm{log}t`$. In this case the total number of different scatterers hit by the particle in time $`t`$ might be expected to grow proportional to $`t/\mathrm{log}t`$, which is what we find numerically in Fig. 4.
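The counting argument can be summarized in formulas (a heuristic estimate on our part, not a proof). Using the Gaussian solution, Eq. (2) below, the return probability density, the number of visits to a given scatterer, and the number of distinct scatterers hit scale as

$$P(\mathbf{x}_0,t)\sim \frac{1}{4\pi Dt},\qquad \mathrm{visits}\propto \int ^t\frac{dt^{\prime }}{t^{\prime }}\sim \mathrm{log}t,\qquad \mathrm{distinct\ scatterers}\sim \frac{t/\overline{\tau }}{\mathrm{log}t}\propto \frac{t}{\mathrm{log}t},$$

since the total number of collisions up to time $`t`$ is of order $`t/\overline{\tau }`$ with $`\overline{\tau }`$ of order one.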
These observations have the following consequences for the chaotic properties of the infinite models. Because the particle returns to each scatterer an infinite number of times, we expect that it passes arbitrarily close to every point in phase space, that is, the system is ergodic. The KS entropy gives the amount of information per unit time required to predict the trajectory knowing its infinite past. The unpredictability has two sources: the instability associated with the collisions in the Lorentz gas (that is, the positive Lyapunov exponent), and the random positions of the scatterers with which the particle has not yet collided, in all the models. The number of different scatterers hit by the particle grows as $`t/\mathrm{log}t`$, so the rate decreases to zero as $`1/\mathrm{log}t`$. We conclude from this argument that the random positions of the scatterers do not contribute to the KS entropy, although results at finite time might suggest otherwise, given that $`1/\mathrm{log}t`$ decays so slowly. In other words, Pesin's theorem is satisfied for these infinite models also. (The argument hinges on the solution of the diffusion equation, Eq. (1) below, in two dimensions. In three dimensions, the density decays as $`t^{-3/2}`$, which is integrable at infinity, so that the particle collides with each scatterer only a finite number of times and thus does not sample the whole phase space. In addition, there is a contribution to the KS entropy from the random positions of the scatterers, thus violating Pesin's theorem. There is no contradiction here; the proofs of Pesin's theorem all demand that the phase space be compact.)
We have discussed a number of dynamical properties with regard to our models. The chaotic Lorentz gas satisfies more of these properties than the nonchaotic wind-tree models. Chaos is defined here by positive KS entropy and is clearly relevant for diffusion, yet we show in subsequent sections that the periodic orbit properties also appear to be relevant to diffusion. The conclusion discusses an example of a model which is chaotic, as it has positive KS entropy, but has periodic orbit properties and diffusive properties more similar to the nonchaotic wind-tree models. Although we have a substantial number of relevant numerical results below, a full understanding of the dynamical properties (KS entropy, periodic orbit properties or perhaps others) most closely related to diffusion still eludes us.
### C Macroscopic diffusion
Now we discuss diffusive properties. We begin with the diffusion equation, and then relate it to the mean square displacement of a particle. The diffusion equation
$$\frac{\partial P}{\partial t}=D\nabla ^2P$$
(1)
for a probability density $`P(\mathbf{x},t)`$ with a constant diffusion coefficient $`D`$ is linear. Thus its general solution is an integral over Green's functions given by the solution for an initial Dirac delta distribution $`P(\mathbf{x},0)=\delta (\mathbf{x}-\mathbf{x}_0)`$, that is,
$$P(\mathbf{x},t)=(4\pi Dt)^{-d/2}\mathrm{exp}\left[-\frac{(\mathbf{x}-\mathbf{x}_0)^2}{4Dt}\right]$$
(2)
is the conditional probability density that a particle initially at the point $`\mathbf{x}_0`$ will be at the point $`\mathbf{x}`$ a time $`t`$ later; this is therefore a 2-time probability distribution (or density) function. The spatial dimension $`d=2`$ in our case.
The diffusion equation is known to hold for the probability density of a particle undergoing deterministic dynamics in a number of systems including the Lorentz gas . In the diffusion equation a macroscopic approximation is used, implying that $`P(\mathbf{x},t)`$ is an average over space and time scales large compared to the characteristic microscopic lengths and times of the dynamics. The mean square displacement of the particle in the $`x`$-direction after a macroscopic time $`t`$ is obtained from (2),
$$\langle \mathrm{\Delta }x^2\rangle =\int \mathrm{\Delta }x^2\,P(\mathbf{x},t)\,d\mathbf{x}=2Dt$$
(3)
where $`\mathrm{\Delta }x=x-x_0`$. This is the Einstein relation for diffusion. Note that $`x=x(t)`$ is now an explicit function of time. Similarly, the fourth and sixth cumulants are
$`\langle \mathrm{\Delta }x^4\rangle _c\equiv \langle \mathrm{\Delta }x^4\rangle -3\langle \mathrm{\Delta }x^2\rangle ^2=0`$ (4)
$`\langle \mathrm{\Delta }x^6\rangle _c\equiv \langle \mathrm{\Delta }x^6\rangle -10\langle \mathrm{\Delta }x^4\rangle \langle \mathrm{\Delta }x^2\rangle +15\langle \mathrm{\Delta }x^2\rangle ^3=0`$ (5)
respectively. It is also possible to obtain dimensionless forms of the above expressions by dividing them by the appropriate power of $`\langle \mathrm{\Delta }x^2\rangle `$. In this form the fourth cumulant is usually called the kurtosis, and is a common measure of how close a probability density is to a Gaussian distribution.
Straightforward manipulations applied to the mean square displacement lead to the Green-Kubo expression for the diffusion coefficient (if it exists)
$$D=\int _0^{\mathrm{\infty }}\langle v_tv_0\rangle \,dt$$
(6)
where $`v`$ is one (say the $`x`$-) component of the velocity and the subscript denotes the time. This shows that the existence of a diffusion coefficient is also related to the rate of decay of the velocity autocorrelation function. The integral diverges if the decay rate of the integrand is $`t^{-1}`$ or slower, unless the integrand is oscillatory. An oscillatory integrand cannot be ruled out, but is not typical for the Lorentz gas as observed in numerical simulations.
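The "straightforward manipulations" fit in one line (the standard derivation, assuming a stationary velocity autocorrelation $`\langle v_{s_1}v_{s_2}\rangle =C(s_1-s_2)`$): writing $`\mathrm{\Delta }x=\int _0^tv_s\,ds`$,

$$\langle \mathrm{\Delta }x^2\rangle =\int _0^t\int _0^t\langle v_{s_1}v_{s_2}\rangle \,ds_1\,ds_2=2\int _0^t(t-s)C(s)\,ds\to 2t\int _0^{\mathrm{\infty }}C(s)\,ds\quad (t\to \mathrm{\infty }),$$

which reproduces Eq. (3) with $`D`$ given by Eq. (6), provided $`C(s)`$ decays fast enough for the last integral to converge.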
The diffusion equation is the lowest order macroscopic description of the diffusion process. In the next approximation, in the direction of microscopic space and time scales, i.e. when the probability density $`P`$ varies sufficiently rapidly, the right hand side of Eq. (1) may be augmented by terms such as $`B\nabla ^4P`$, where $`B`$ is called the linear super-Burnett coefficient . Like the diffusion coefficient, it obeys an Einstein relation
$$\langle \mathrm{\Delta }x^4\rangle -3\langle \mathrm{\Delta }x^2\rangle ^2=24Bt$$
(7)
This is consistent with the zero kurtosis found previously (4): to find the kurtosis we divided by $`\langle \mathrm{\Delta }x^2\rangle ^2`$, which is proportional to $`t^2`$ if the diffusion coefficient exists, while the Burnett coefficient in (7) gives a smaller term (for large $`t`$), proportional to $`t`$. The Burnett coefficient can also be written in the form of a Green-Kubo relation in terms of integrals over four-time velocity correlation functions,
$$B=\frac{1}{6}\int _0^{\mathrm{\infty }}\int _0^{\mathrm{\infty }}\int _0^{\mathrm{\infty }}\left(\langle v_0v_1v_2v_3\rangle -\langle v_0v_1\rangle \langle v_2v_3\rangle -\langle v_0v_2\rangle \langle v_1v_3\rangle -\langle v_0v_3\rangle \langle v_1v_2\rangle \right)dt_1\,dt_2\,dt_3$$
(8)
where $`v_0=v(t=0)`$ as above and $`v_1=v(t=t_1)`$ etc. A slow decay of the relevant correlation functions leads more easily to a divergence in this case than in Eq. (6). This means that it is possible to have a well defined diffusion coefficient, but a divergent Burnett coefficient, which occurs if the fourth cumulant increases faster than $`t`$ but slower than $`t^2`$.
We estimate the cumulants defined in Eqs. (3-5) numerically by choosing a large number (typically $`10^5`$) of random initial conditions for the particle not inside a scatterer, for a single configuration of scatterers. We report only the cumulants in the $`x`$-direction, although we have checked that the $`y`$ distribution is similar. See Figs. 5-7 and Table I.
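A sketch of the cumulant bookkeeping is given below. It assumes displacement samples from some trajectory generator; here we simply substitute Gaussian random walks (with an assumed $`D=0.25`$) to exercise the estimator, so the printed numbers only illustrate the Gaussian limits, not our billiard results.

```python
import numpy as np

def cumulants_1d(dx):
    """Cumulant combinations of Eqs. (3)-(5) for displacement samples dx
    at a fixed time t (symmetric distribution assumed)."""
    m2, m4, m6 = np.mean(dx**2), np.mean(dx**4), np.mean(dx**6)
    c2 = m2                                  # <dx^2>      -> 2 D t, Eq. (3)
    c4 = m4 - 3.0 * m2**2                    # Eq. (4)     -> 24 B t, Eq. (7)
    c6 = m6 - 10.0 * m4 * m2 + 15.0 * m2**3  # Eq. (5); vanishes for a Gaussian
    kurtosis = c4 / m2**2                    # dimensionless form of Eq. (4)
    return c2, c4, c6, kurtosis

rng = np.random.default_rng(0)
D = 0.25  # assumed diffusion coefficient for the stand-in Gaussian data
for t in (10.0, 100.0, 1000.0):
    dx = rng.normal(0.0, np.sqrt(2.0 * D * t), size=100_000)
    c2, c4, c6, k = cumulants_1d(dx)
    print(f"t={t:7.1f}  <dx^2>/2t={c2 / (2 * t):.4f}  "
          f"kurtosis={k:+.4f}  c6/c2^3={c6 / c2**3:+.4f}")
```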
For five models, \[F,R,L\]$`\mathrm{\infty }`$ and \[R,L\]P4, the mean square displacement is found to be proportional to the time, and the fourth and sixth cumulants increase slower than $`t^2`$ and $`t^3`$ respectively at large times, indicating that the distribution approaches a Gaussian. The exception to this rule is FP4, for which the mean square displacement grows faster than the time, approximately as $`t^{1.4}`$, and the kurtosis is also nonzero. This superdiffusive behavior is not particularly stable in the sense that the mean square displacement for this model has larger fluctuations than the other models, and the exponent of approximately $`1.4`$ varies unpredictably between 1 and 2 with the exact positions of the scatterers and the size of the cell (for example for FP6 or FP8).
### D Connections between chaotic and diffusive properties
We now look at how the chaotic and diffusive properties of our models are related in the light of our results, specifically of: (a) normal diffusion without chaos, (b) superdiffusion without chaos, (c) a possible relation between the diffusion coefficient and the periodic orbits, and (d) the Burnett coefficient of the periodic Lorentz gas. In no way does the data presented in Figs. 5-7 distinguish between chaotic and nonchaotic models.
#### Normal diffusion
Our results show that it is possible to have a well defined diffusion coefficient and a Gaussian distribution function without positive Lyapunov exponents, that is, in the F$`\mathrm{\infty }`$, R$`\mathrm{\infty }`$ and RP4 models. The RP4 model is particularly interesting in this regard, since the two most obvious sources of unpredictability, dispersive collisions and collisions with new randomly placed scatterers, are absent. Recall that we argued (in section III B) that these two processes might together determine the KS entropy, that is, the rate of loss of information about the particle's position and velocity. The RP4 model is thus the clearest example of a model in which there is normal diffusion, and zero KS entropy. An example with the same properties, using a periodic rhombus model, has been studied in Ref. , where a mean square displacement proportional to the time was observed, but the higher moments were not studied.
#### Superdiffusion
The FP4 model, in contrast, has a mean square displacement growing faster than linearly with the time, and a non-Gaussian distribution. This can be related to a slow decay of the velocity autocorrelation function (see above), but a physical explanation of why this occurs in the FP4 model but not in the RP4 model is lacking.
#### Diffusion and periodic orbits
It is perhaps somewhat hazardous to deduce more from this data than the existence of normal diffusion; however we would nevertheless like to point out a possible connection between the value of the diffusion coefficient (given by the vertical displacement of the curves in the inset of Fig. 5) in the infinite models and the properties of periodic orbits discussed in section V. We make no statement about the diffusion coefficient of the periodic models because this coefficient depends on the position of the scatterers. The idea is that the presence of periodic orbits in a model with randomly positioned scatterers might be expected to lower the diffusion coefficient, since the periodic orbits may keep the particle trapped in roughly the same place; this takes time without contributing to the mean square displacement. Thus we observe that the fixed orientation wind-tree model F$`\mathrm{\infty }`$ has no periodic orbits and has the largest diffusion coefficient; the random Lorentz gas L$`\mathrm{\infty }`$ has periodic orbits, but they are exponentially unstable, and has the next largest diffusion coefficient; the random orientation wind-tree model R$`\mathrm{\infty }`$ has periodic orbits with power law instability that can inhibit diffusion for a long time, and the diffusion coefficient is the smallest of the three.
#### The Burnett coefficient
The fourth cumulant grows linearly with time for the LP4 model, indicating a finite Burnett coefficient, as expected, given its exponential decay of correlations . The other models, F$`\mathrm{\infty }`$, R$`\mathrm{\infty }`$, RP4 and L$`\mathrm{\infty }`$, have a divergent Burnett coefficient, which is not surprising since their decay of correlations is most likely as a power law. Thus, from the more subtle properties of the two-time (displacement) correlation function, we can deduce some properties relating to the rate of decay of higher order correlations, even special four-time (velocity) correlations, but not whether there is a positive Lyapunov exponent, since L$`\mathrm{\infty }`$ behaves similarly to R$`\mathrm{\infty }`$ from this point of view. Even in the intermediate region between the ballistic behavior at small times and the (super)-diffusive behavior at large times in Fig. 5, that is, where the distribution functions are measured at microscopic time scales, there is no clear distinction between the chaotic and nonchaotic models.
We have now discussed at some length the connection between chaotic and diffusive properties of our models. While this has yielded valuable insight into how little chaoticity is needed for diffusion to occur, it has not provided us with an understanding of how chaotic and nonchaotic diffusion differ, let alone a method to distinguish chaotic and nonchaotic diffusive systems experimentally. The reason, given in Ref. , is that two-time correlations (for example our cumulants) are insufficient to characterize chaos; multi-time correlations (or their underlying probability distributions) are required. The remaining sections of this paper all use multi-time distributions of one form or another to investigate our models. We begin with a more detailed study of the Grassberger-Procaccia method than in Ref. , where we found it to incorrectly classify the F$`\mathrm{\infty }`$ model as chaotic.
## IV The Grassberger-Procaccia method
Grassberger and Procaccia gave the earliest practical methods for computing chaotic properties such as entropies and dimensions from experimental or computer generated time series. These are still the most popular methods in use, sometimes with minor variations. In Ref. the conclusion of microscopic chaos using data from a Brownian motion experiment was based on a slightly different method of Cohen and Procaccia . Both methods are clearly reviewed in Ref. .
We consider these methods as applied to the calculation of the KS entropy $`K`$, the non-zero value of which characterizes chaos. We recall the discussion of section III B where we concluded that the wind-tree models all have zero KS entropy, but the Lorentz gas has a KS entropy equal to its positive Lyapunov exponent.
The original Grassberger-Procaccia method computes a slightly different dynamical entropy $`K_2`$ (defined using the square of the probability, $`p^2`$ rather than the conventional $`p\mathrm{log}p`$) satisfying the inequality $`K_2<K`$. Thus a positive estimate $`K_2>0`$ implies that $`K>0`$ and hence chaoticity. We follow Ref. in using an adaptation due to Cohen and Procaccia that allows $`K`$ to be estimated directly.
We replace the experimental time series of the positions of the Brownian particle in Ref. by a numerical time series of any of our infinite or periodic diffusive models, containing the $`x`$ and $`y`$ positions of the particle at $`10^6`$ times uniformly spaced by a separation $`\mathrm{\Delta }t`$. Unlike Ref. , our values of $`\mathrm{\Delta }t`$, equal to 1 and 0.01 time units, are close to the dynamical time scale determined by the mean free time $`\overline{\tau }`$ (see Sec. III A). (Because there is only one experimental time series in Ref. , the authors generated new time series with time steps equal to a multiple of $`\mathrm{\Delta }t`$ by removing data points; the new time step was denoted $`\tau `$. We do not need to do this in our numerical work, and we do not use the notation $`\tau `$ for time steps, to avoid confusion with the mean free time.)
A single point on our numerical trajectory is denoted $`(x_i,y_i)`$, where $`i`$ is an integer ranging from 0 to $`10^6-1`$. For the fixed oriented square (F) models the trajectory contains all the phase space information except the velocity, which can take only four values in this case. For the randomly oriented square (R) and the Lorentz (L) models the velocity has one continuous degree of freedom. The Brownian motion experiment uses only one component (say, $`x`$) of the particle position, ignoring the huge number of other degrees of freedom contained in the fluid system. From a mathematical point of view, as long as the degrees of freedom are all coupled (that is, all the degrees of freedom depend on each other directly or indirectly), this type of method usually converges to the correct entropy (or dimension) using as few as one measured variable; however, the efficiency may then be so low that an unreasonably large number of data points is required for a reliable estimate of these quantities.
A number (typically $`10^2`$ to $`10^3`$) of (in our case non-overlapping) uniformly spaced subsequences of the long trajectory are denoted "reference segments" ("reference trajectories" in Ref. ), having lengths $`n`$ ranging from 1 to 100. These reference segments are compared with all subsequences of the trajectory having the same length. The comparison is made according to the position relative to the initial point of each subsequence, with a Euclidean metric in real space and a maximum metric over the trajectory. Thus, if $`i`$ and $`j`$ give the positions in the full trajectory of the initial points of the $`M`$ reference segments and $`N\approx 10^6`$ total segments respectively, the distance between the segments beginning at $`i`$ and $`j`$ is defined as
$$d_n(i,j)=\underset{k=0}{\overset{n-1}{\mathrm{max}}}\sqrt{(\mathrm{\Delta }x_{i+k}-\mathrm{\Delta }x_{j+k})^2+(\mathrm{\Delta }y_{i+k}-\mathrm{\Delta }y_{j+k})^2}$$
(9)
where $`\mathrm{\Delta }x_{i+k}=x_{i+k}-x_i`$ etc.
We are now in a position to count the number of recurrences, in terms of what is called the "pattern probability" in . For a given tolerance or spatial resolution $`\epsilon `$ we can define the probability of a certain pattern, defined by the reference sequence beginning at $`i`$, as the proportion of all the subsequences of the full trajectory that are within a distance (as defined above) $`\epsilon `$ of the reference sequence:
$$P(i,n,\epsilon ,\mathrm{\Delta }t)=\frac{1}{N}\mathrm{\#}_j[d_n(i,j)<\epsilon ]$$
(10)
and from this the "pattern entropy"
$$K(n,\epsilon ,\mathrm{\Delta }t)=-\frac{1}{M}\underset{i}{\sum }\mathrm{log}_{10}P(i,n,\epsilon ,\mathrm{\Delta }t)$$
(11)
which gives the information theoretic entropy of the dynamics with respect to the pattern (of length $`n`$ and resolution $`\epsilon `$) probability distribution. The pattern entropy thus contains information about the $`n`$-time (displacement) distribution functions, as suggested by the discussion at the end of the previous section.
When the pattern entropy increases linearly with time in the limit of large $`n`$, it is possible to define an entropy per unit time,
$$h(\epsilon ,\mathrm{\Delta }t)=\frac{1}{\mathrm{\Delta }t}\underset{n\to \mathrm{\infty }}{\mathrm{lim}}[K(n+1,\epsilon ,\mathrm{\Delta }t)-K(n,\epsilon ,\mathrm{\Delta }t)]$$
(12)
In practice $`n`$ is finite, and $`K`$ is always bounded above by $`\mathrm{log}_{10}N`$ (since it is an average of $`-\mathrm{log}_{10}P`$, where $`P`$ is bounded below by $`1/N`$) and tends to saturate towards this value at long times. However, we can often find a large enough linear region in a plot of $`K`$ as a function of $`n`$ (for fixed $`\epsilon `$ and $`\mathrm{\Delta }t`$) to compute $`h(\epsilon ,\mathrm{\Delta }t)`$. In Ref. $`h(\epsilon ,\mathrm{\Delta }t)`$ was plotted as a function of $`\epsilon `$ with different curves corresponding to different $`\mathrm{\Delta }t`$. From dimensional arguments it follows that any diffusive process has a scale invariance leading to a dependence $`h\propto 1/\epsilon ^2`$, which is in fact observed in such plots.
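A direct transcription of Eqs. (9)-(12) might look as follows. This is a naive sketch for short series only (the all-pairs comparison stores an $`N\times n`$ array and costs $`O(MNn)`$ operations; a serious implementation would need neighbor searching), with the position arrays, reference count and tolerance as placeholders.

```python
import numpy as np

def pattern_entropy(x, y, n, eps, M=200):
    """K(n, eps, dt) of Eq. (11): minus the average log10 of the fraction of
    segments staying within eps of each reference segment for n steps."""
    N = len(x) - n                                  # number of usable segments
    refs = np.linspace(0, N - 1, M).astype(int)     # uniformly spaced references
    dx = np.array([x[j:j + n] - x[j] for j in range(N)])  # positions relative
    dy = np.array([y[j:j + n] - y[j] for j in range(N)])  # to each initial point
    logP = []
    for i in refs:
        d = np.sqrt((dx - dx[i])**2 + (dy - dy[i])**2).max(axis=1)  # Eq. (9)
        logP.append(np.log10(np.count_nonzero(d < eps) / N))        # Eq. (10)
    return -np.mean(logP)                                           # Eq. (11)

def entropy_rate(x, y, eps, dt, n_lo=5, n_hi=20):
    """Finite-n estimate of h(eps, dt) from the slope of K versus n, Eq. (12)."""
    K = [pattern_entropy(x, y, n, eps) for n in range(n_lo, n_hi + 1)]
    return np.mean(np.diff(K)) / dt
```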
Assuming that there is sufficient data that all the limits (of large $`N`$, $`M`$ and $`n`$) can be approximated reliably, the KS entropy $`K`$ is equal to the maximum value of $`h(\epsilon ,\mathrm{\Delta }t)`$ as $`\epsilon `$ and $`\mathrm{\Delta }t`$ are varied. The measured value $`h(\epsilon ,\mathrm{\Delta }t)`$ will be less than the true KS entropy $`K`$ if the values of $`\epsilon `$ and/or $`\mathrm{\Delta }t`$ are so large that not all of the information contained in the dynamics is represented in the time series. This effect does not explain why a nonchaotic system may appear to be chaotic, that is, why the measured value of $`h(\epsilon ,\mathrm{\Delta }t)`$ is greater than $`K=0`$, as in Ref. : this must be due to a problem with the above-mentioned limits, and we defer a discussion of this point until after we have presented our results.
Our results for each of the six models \[F,R,L\]\[$`\mathrm{\infty }`$,P4\] are illustrated in Fig. 8 with $`\mathrm{\Delta }t=1`$ and in Fig. 9 with $`\mathrm{\Delta }t=0.01`$. The case $`\mathrm{\Delta }t=1`$ samples the motion of the particle as it begins to diffuse among the scatterers. The models (both chaotic and nonchaotic) are indistinguishable, with the linear growth of the pattern entropy suggesting positive KS entropy, except that the superdiffusive model FP4 is showing signs that the KS entropy is zero, as a nonchaotic model should.
For $`\mathrm{\Delta }t=0.01`$ (Fig. 9), where mostly ballistic behavior occurs, the graphs of pattern entropy vs. number of time steps are no longer linear. The R and L models look much the same, while the F models have an unusual feature when the pattern entropy is equal to $`\mathrm{log}_{10}4\approx 0.6`$, which is due to the four available wind directions, that is, only a fourth of the trajectories at a given $`(x,y)`$ point are close in phase space after a certain time. The pattern entropy is becoming linear at larger times in all models, suggesting positive KS entropy.
We can draw the following conclusions about the use of Grassberger-Procaccia type methods for this class of problems: (a) measurements need to be made at microscopic time scales, (b) even at microscopic time scales, the method does not seem to distinguish between chaotic and nonchaotic dynamics, and (c) the reason seems to be that unfeasibly long time series are needed in order to do that.
#### Macroscopic measurements
It is clear that in an analysis as explained above, using large distance and time scales, a diffusive system shows itself as almost independent of its microscopic dynamics (whether chaotic, nonchaotic or stochastic), although we cannot exclude the possibility that an unreasonably long trajectory might still give some evidence of weak correlations that survive for macroscopic times. In the Brownian motion experiment, for example, there are of the order of $`10^{10}`$ collisions of the Brownian particle with the surrounding solvent between each measurement of its position, so any correlations arising from, say, nonchaotic microscopic dynamics would be effectively averaged out between measurements. This is essential from the point of view of designing future experiments, but it is not the full story, as it does not explain our results, which are obtained using short times. The Brownian motion experiment could not investigate the motion at short time scales, but it is important to know whether data from an improved experiment might determine the microscopic chaoticity in principle.
#### Microscopic measurements
Our results do not suggest that the Grassberger-Procaccia type methods can distinguish between chaotic and nonchaotic diffusive models even at short times because the nonchaotic R models consistently give the same results as the chaotic L models. The only distinctions we have been able to make are between diffusion and superdiffusion for $`\mathrm{\Delta }t=1`$, and between the continuous velocity space of the R and L models and the four velocity directions of the F models. Both of these distinctions can be made from a time series without using the intensive analysis of the Grassberger-Procaccia method.
#### How long a time series is required?
Our wind-tree models appear unpredictable (in the sense of positive KS entropy) because it takes the system some time to "realize" that it is nonchaotic; in other words, quite a long time series length $`N`$ is required. For example, the most pronounced nonchaotic diffusive model we have is RP4. Once the particle "knows" the positions and orientations of all the scatterers, the system is completely predictable, and hence a zero KS entropy is manifest. However, it is necessary for the particle to enumerate the large number of ways of achieving this, with sufficient statistics that nonchaotic recurrences dominate the GP calculation, for which our trajectory length of $`N=10^6`$ is apparently insufficient. (A conservative and very rough estimate for the number of ways for the particle to determine the positions and orientations of the scatterers is as follows: it must collide with two adjacent sides of all four scatterers, a total of eight collisions. For each scatterer there are four possible pairs of adjacent sides, making $`4^4=256`$ combinations. The eight collisions could occur in any order, giving an extra factor of $`8!=40320`$. Thus our estimate is $`4^4\times 8!\approx 10^7`$. Collisions may not be able to occur in any order, thus lowering the estimate, but we have ignored situations where the particle collides with three or four sides of a scatterer, or with the same sides more than once, thus greatly increasing the estimate. The trajectory length would need to be substantially larger than the true number of ways the particle can determine the scatterer configuration, in order to obtain reasonable statistics. Nonchaotic recurrences are those with a probability that decreases more slowly than exponentially with length, indicating greater probability than is characteristic of chaotic systems, and hence zero KS entropy.) For the infinite models such as R$`\mathrm{\infty }`$ the motion of the particle is never completely predictable, but the proportion of collisions at which the particle encounters new scatterers decreases as $`1/\mathrm{log}t`$, according to the discussion of Sec. III B. The measurement of such a slow decrease is far beyond the capabilities of any feasible implementation of the Grassberger-Procaccia method.
Since the Grassberger-Procaccia approach does not seem to distinguish our chaotic and nonchaotic diffusive models, we turn to an alternative time series analysis method, with a view towards suggesting possible methods for analyzing future experiments that are able to sample the dynamics on microscopic time scales. Such a method cannot possibly enumerate all possible trajectory segments, as discussed above, but there are certain types of trajectory segments, namely those that are almost periodic, that stand out as predictable in a nonchaotic system. The method of the next section takes advantage of this property, and in effect distinguishes chaotic and nonchaotic systems by singling out this subset of all possible recurrences.
## V Almost periodic recurrences
### A Motivation
We have seen in Sec. IV that the variant of the Grassberger-Procaccia method used by Gaspard et al. cannot distinguish between nonchaotic and chaotic models, for the reason that the time series (either experimental or numerical) is not long enough to provide a reasonable approximation to the infinite time limit implied by the method.
We can attempt to circumvent this difficulty by looking for specific types of recurrences, as opposed to taking an average over all types of recurring trajectory segments. In particular we will show that a promising candidate for a useful specific recurrence is given by one that corresponds to an orbit that is almost periodic (again, within a spatial tolerance $`\epsilon `$). We can identify these almost periodic orbits as those that repeat (within a distance $`\epsilon `$) their previous motion at equally spaced intervals of time, hence the name "Almost periodic recurrences" (APR).
If we compare the APR approach with the Grassberger-Procaccia (GP) method, there are a number of similarities and differences. Both methods are designed to deduce dynamical properties from time series data. GP aims to make quantitative statements about the KS entropy, whereas APR as yet only provides a qualitative statement about chaoticity as expressed in periodic orbit properties, which are not equivalent to the usual definition of chaos as positive KS entropy. Both methods make use of the probability of recurrences. GP then averages over many probabilities, while APR singles out a few especially significant recurrences. This means, for example, that in an intermittent system, where the dynamics switches irregularly from chaotic to regular behavior, GP measures only the average, chaotic, dynamics, while APR is sensitive to the lack of chaoticity in the regular regions of phase space. In other words, an intermittent system can have a positive Lyapunov exponent, and also periodic orbits with nonchaotic properties. The APR method characterizes such a system as nonchaotic, which is an appropriate designation with regard to diffusive properties, as we observe in Sec. VI below, but which is inappropriate from the usual point of view, in which chaos is defined as positive KS entropy. An example of such an intermittent system is given in the final discussion.
We now discuss the properties of periodic orbits in chaotic and nonchaotic systems (particularly our models), before describing the APR method in more detail and presenting the results.
### B Periodic orbits in chaotic systems
Periodic orbits are of great importance in chaotic systems. Although typical trajectories (as defined by the Liouville or natural measures) are not periodic, there are many systems for which they are approximated by periodic orbits when the periodic orbits are dense in the phase space or on an attractor. This allows many properties to be computed from expansions involving periodic orbits .
The properties of the periodic orbits are not directly related to the other dynamical properties discussed in Sec. III B, yet they do tend to differ between chaotic and nonchaotic systems, and so they constitute additional dynamical properties related to chaos. The properties of the periodic orbits in the Lorentz gas are of most interest to us, and in some sense typical for chaotic systems, so we focus on these. In this and the following section we examine two relevant properties of periodic orbits, and sketch proofs of these properties.
#### Existence
There is an easy constructive proof of the existence of periodic orbits in the Lorentz gas: Choose two disks arbitrarily; the line joining their centers gives a periodic orbit unless there is a disk interposing; consider one of the original disks and the interposing disk, and repeat until no disk interposes.
#### Stability
The periodic orbits are exponentially unstable, that is, almost all trajectories beginning a small distance $`\epsilon `$ from the periodic orbit at time $`t=0`$ are at a distance $`\epsilon e^{\lambda _pt}`$ at time $`t`$, where $`\lambda _p`$ is the maximum (here the only positive) Lyapunov exponent, which depends on the periodic orbit $`p`$. This exponential instability makes it very unlikely for a typical trajectory to remain near a given periodic orbit for long times.
### C Periodic orbits in nonchaotic systems
We study here the properties of periodic orbits of our wind-tree models, which are typical of nonchaotic systems, without attempting to classify all possible nonchaotic periodic behavior. For a complementary study of the periodic orbits in polygonal billiards, see Ref. .
Not all wind-tree models have periodic orbits. The argument given above for the Lorentz gas fails because an orbit connecting two square scatterers is periodic only if the scatterers have the same orientation (of zero probability in the randomly oriented model) and the particle moves perpendicular to the surface of the scatterers (excluded by definition in the fixed orientation model). In general, the existence of periodic orbits depends on the locations and orientations of the scatterers. We will find that for generic locations of scatterers periodic orbits exist in the R$`\mathrm{\infty }`$ model, but not in the F$`\mathrm{\infty }`$ model. The periodic models (such as FP4 and RP4) are harder to treat mathematically, and are not discussed here.
#### F model: nonexistence
The absence of periodic orbits for generic configurations of the F$`\mathrm{\infty }`$ model follows directly from arguments in a proof of Aarnes . In order for the particle to return to its starting point, an expression linear in the positions of the scatterers must vanish. The coefficients of the $`(x,y)`$ position of a scatterer in this expression are integers depending on which faces the particle hits, and are zero only if the particle hits opposite faces an equal number of times. A simple reductio ad absurdum argument shows then that there is a scatterer (perhaps more than one), the furthest to the top and right (largest value of $`x+y`$), which has collisions on only one of its faces (the lower left), and hence the coefficients corresponding to the position of this scatterer are nonzero. If the scatterers are randomly placed, no linear combination of the positions can vanish, so no periodic orbit can exist. Of course, there are special configurations of the scatterers (such as at the corners of a square aligned with the coordinate axes) that allow periodic orbits. Since there are therefore generically no periodic orbits in the F$`\mathrm{\infty }`$ model, the remainder of this section refers to the R$`\mathrm{\infty }`$ model.
#### R model: existence
The existence of periodic orbits for generic configurations of the R$`\mathrm{\infty }`$ model follows from the observation that any set of configurations of a finite number of (e.g. 3) scatterers with nonzero (Lebesgue) measure includes configurations appearing somewhere in a generic infinite configuration. A period 3 orbit can always be found in an acute triangular billiard by minimizing the total path length, see Fig. 10. If three scatterers outline an acute triangle and are positioned so that their faces contain the path of minimal length (clearly a situation of nonzero measure), a period 3 orbit exists.
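The minimizing path is the classical Fagnano orbit: the orthic triangle, whose vertices are the feet of the three altitudes. The reflection law at each foot is easy to check numerically; the short sketch below (our illustration, with an arbitrarily chosen acute triangle) prints equal tangential velocity components before and after the bounce, as specular reflection requires.

```python
import numpy as np

A, B, C = map(np.array, ([0.0, 0.0], [4.0, 0.0], [1.5, 3.0]))  # an acute triangle

def foot(P, Q, R):
    """Foot of the perpendicular from point P onto the line through Q and R."""
    d = R - Q
    return Q + d * np.dot(P - Q, d) / np.dot(d, d)

# orthic triangle: feet of the altitudes from A, B and C
Ha, Hb, Hc = foot(A, B, C), foot(B, C, A), foot(C, A, B)

def unit(v):
    return v / np.linalg.norm(v)

# reflection law at Hb, which lies on side CA: the tangential component of
# the velocity along the side is the same before and after the collision
t = unit(A - C)                    # tangent of the side containing Hb
u = unit(Hb - Ha)                  # incoming direction (Ha -> Hb)
w = unit(Hc - Hb)                  # outgoing direction (Hb -> Hc)
print(np.dot(u, t), np.dot(w, t))  # equal: Ha -> Hb -> Hc is a billiard orbit
```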
#### R model: stability
Finally we investigate the stability of periodic orbits in the R$`\mathrm{\infty }`$ model. The combination of the linear dynamics of the free particle with length preserving reflections leads to a linear separation of nearby trajectories. That is, an initial separation of $`\epsilon `$ in the direction of the velocity leads to a separation $`\epsilon t`$ in the position after time $`t`$. Another way of saying this is that the fraction of uniformly distributed trajectories remaining within a distance $`\epsilon `$ of the periodic orbit is of order $`\epsilon /t`$ for large $`t`$. Thus a particle in the R$`\mathrm{\infty }`$ model is quite likely to spend a long time near a periodic orbit, in contrast to the Lorentz gas.
### D Details and results
Now we return to our method of Almost Periodic Recurrences for distinguishing chaotic properties of diffusive systems from a time series. This consists of counting the number of almost periodic sequences in the time series, thus hoping to exploit the different properties of periodic orbits in chaotic and nonchaotic systems as discussed in the previous two sections.
As in the Grassberger-Procaccia method we begin with a time series containing $`10^6`$ positions of the particle $`(x_i,y_i)`$ spaced at a time interval $`\mathrm{\Delta }t=1`$, see Sec. III A. Analogous to Eq. (9) we define a distance between two segments of length $`T`$ (the period of an almost periodic orbit) that begin at points $`i`$ and $`j`$ on the trajectory,
$$d_T(i,j)=\underset{k=0}{\overset{T-1}{\mathrm{max}}}\sqrt{(x_{i+k}-x_{j+k})^2+(y_{i+k}-y_{j+k})^2}$$
(13)
Using Eq. (13) we compute the number of initial points $`i`$ for which the orbit repeats within a tolerance $`\epsilon `$,
$$N_T=\mathrm{\#}_i[d_T(i,i+T)<\epsilon ]$$
(14)
where we use $`\epsilon =1`$ for the results presented in Fig. 11.
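Eqs. (13) and (14) translate almost line by line into NumPy; the sketch below is ours and is written naively for clarity (the sliding maximum could be replaced by a rolling-maximum filter such as `scipy.ndimage.maximum_filter1d` for speed):

```python
import numpy as np

def apr_count(xy, T, eps=1.0):
    """N_T of Eq. (14): the number of initial points i whose segment of
    length T almost repeats, d_T(i, i+T) < eps, with d_T from Eq. (13).
    xy is the (10**6, 2) array of positions sampled at dt = 1."""
    n = len(xy)
    # |r_i - r_{i+T}| for every admissible i
    dist = np.linalg.norm(xy[:n - T] - xy[T:], axis=1)
    # d_T(i, i+T) = max over k = 0..T-1 of dist[i + k]
    d_T = np.array([dist[i:i + T].max() for i in range(n - 2 * T + 1)])
    return int((d_T < eps).sum())
```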
It is clear that the chaotic Lorentz models, with an exponential decay of $`N_T`$ with $`T`$, can easily be distinguished from the nonchaotic F and R models, which mostly have a slower and noisier decay. We have thus achieved our objective of finding a time series analysis method that can distinguish between time series of chaotic and nonchaotic diffusive systems. It remains for us to explain this difference, and the form of the plots, using the properties of the periodic orbits in these systems. For that, we note that there are two types of (almost) periodic orbits that might appear in the expression for $`N_T`$: an orbit of length $`T`$ that has been (almost) repeated, and a shorter orbit of length $`T/n`$ that has appeared $`2n`$ times, so that the total time is equal to $`2T`$.
#### The L models
In the Lorentz gas, the probability of remaining near a periodic orbit for a time $`T`$ is of order $`e^{-\lambda _pT}`$ (see Sec. V B), where $`\lambda _p`$ is the maximum (here the only) positive Lyapunov exponent, and depends on the periodic orbit $`p`$. This is why both the infinite and periodic Lorentz gases give an exponential form in Fig. 11. Because $`N_T`$ can be due to long orbits or repeats of short orbits, the exponential form depends on both (1) the exponential escape from short orbits with many repetitions and (2) the exponential instability of long orbits, that is, the fact that $`\lambda _p`$ does not approach zero for long orbits. (There is a minor technical difficulty associated with the random Lorentz gas: due to the random placement of the scatterers it is possible to have arbitrarily long free paths between collisions, so that $`\lambda _p`$ could in principle become arbitrarily small. However, the probability of a long free path is also exponentially small, so the decay remains exponential.)
#### The R models
In the randomly oriented wind-tree models, the probability of remaining near a periodic orbit for a time $`T`$ is of order $`1/T`$ (see Sec. V C). This form is not immediately apparent in Fig. 11 because orbits (particularly those surviving for long times) can be counted more than once, due to contributions from an interval of different initial points $`i`$. Both the R$`\mathrm{\infty }`$ model and the RP4 model continue beyond $`T=100`$ due to increasingly rare long orbits and many repeats of shorter orbits. For example, the almost periodic orbit giving rise to the conspicuous point with period $`T=58`$ in the RP4 model is repeated many times, also contributing to multiples of the period: double $`T=`$114,115; triple $`T=`$172,173; and so on up to $`T=`$1033,1034. It is clear that the dynamics is completely different from that of a chaotic system.
#### The F models
In the fixed oriented wind-tree models, there are no periodic orbits (see Sec. V C). This means that any contribution to $`N_T`$ is due to orbits that are close to periodic. The infinite model F$`\mathrm{\infty }`$ contains many arrangements of a few scatterers that come arbitrarily close to generating a true periodic orbit, so the plot is similar to that of the R models. The periodic model FP4, in contrast, has only a few short orbits that are at all close to periodic, and none that can be repeated to give further contributions at higher values of $`T`$, or these would be observed in the figure.
What we have presented here barely scratches the surface of the Almost Periodic Recurrences method and its variants. It would be quite easy to search for 3 or more occurrences of an (almost) periodic orbit, or to distinguish between a single repeat of a long orbit and many repeats of a short orbit. We have restricted the presentation here to a demonstration of the method as a means to distinguish time series of chaotic and nonchaotic diffusive systems, but we hope that it will be useful in more general contexts.
## VI Absorbing boundary conditions
The case of absorbing boundary conditions provides a rich context for illuminating the differences between chaotic and nonchaotic diffusion. This is because the diffusion equation with absorbing boundary conditions leads naturally to an exponential escape process, while the periodic orbits of the randomly oriented wind-tree model require a power law escape process (see Sec. V C), thus violating the diffusion equation.
We consider here the probability that a single randomly placed particle will remain in an open system (we describe systems with absorbing boundaries as "open") for a given time, or equivalently, the number of identical noninteracting particles remaining in the system after the same time. For chaotic dynamics the escape rate formalism of Gaspard and Nicolis uses the escape rate (defined below) to connect the diffusion coefficient to dynamical properties such as the KS entropy and the positive Lyapunov exponents. For nonchaotic systems the escape process is qualitatively different depending on the properties of the periodic orbits as we have noted above, and the escape rate formalism must be generalized if it is to make any sense at all. We now describe the escape rate formalism as it applies to chaotic systems, discuss the results for our models, and then attempt to generalize the formalism to include nonchaotic systems.
A macroscopic description of escape on a square (for simplicity; other geometries are analogous) is given by the diffusion equation (1) for the particle density $`P(\mathbf{x},t)`$, together with the boundary condition $`P=0`$ along the lines $`x=0`$, $`y=0`$, $`x=L`$ and $`y=L`$. The general solution is
$$P(\mathbf{x},t)=\underset{m,n=1}{\overset{\mathrm{\infty }}{\sum }}a_{m,n}\mathrm{sin}(m\pi x/L)\mathrm{sin}(n\pi y/L)e^{-\gamma _{m,n}t}$$
(15)
with the decay rate
$$\gamma _{m,n}=\frac{D\pi ^2}{L^2}(m^2+n^2)$$
(16)
At long times, the solution is dominated by the slowest decaying mode, corresponding to the escape rate
$$\gamma =\gamma _{1,1}=\frac{2D\pi ^2}{L^2}$$
(17)
Note that the escape of particles is exponential in (15). The escape rate formalism equates this macroscopic escape rate $`\gamma `$ with the (exponential) escape rate of a (microscopically) chaotic system, which is related to its KS entropy $`K`$ and the sum of its positive Lyapunov exponents $`\lambda _+`$ (here at most a single positive Lyapunov exponent) by
$$\gamma =\lambda _+-K$$
(18)
the "escape rate formula" which generalizes Pesin's formula (Sec. III B) to open systems. Not only can the escape rate of chaotic systems be calculated from periodic orbit theory , but, as we will show below, there is an intimate connection between periodic orbits and the escape process in nonchaotic systems.
We now present our numerical results, from which we learn how the above theory for chaotic systems is modified in the nonchaotic case. As described in Sec. II B, the arrangement of scatterers used is the same as in the periodic case with $`L=20`$; the absorbing boundaries are one distance unit from the edge of the periodic cell, leading to an open system of size $`L=18`$. As before we use fixed oriented squares (F), randomly oriented squares (R) or circles (L). We denote absorbing boundary conditions by A, so the full notation for these models is \[F,R,L\]A18. We place $`10^7`$ particles uniformly (without overlapping the scatterers) in the square of size $`L=18`$ and compute the number of particles remaining in the system as a function of time, see Figs. 12 and 13.
The chaotic Lorentz gas exhibits exponential decay as predicted by the diffusion equation for all times, but the nonchaotic wind-tree models deviate from exponential decay at late times. The results are well described by the empirical expressions
$`L(t)`$ $`=`$ $`6.6\times 10^6\times e^{-0.0165t}`$ (19)
$`R(t)`$ $`=`$ $`6.6\times 10^6\times e^{-0.0088t}+3.2\times 10^5/t`$ (20)
$`F(t)`$ $`=`$ $`\{\begin{array}{cc}6.6\times 10^6\times e^{-0.025t}\hfill & t<226\hfill \\ 40(800-t)\hfill & 226<t<800\hfill \\ 0\hfill & t>800\hfill \end{array}`$ (24)
for the L, R and F models respectively over the range of times considered. The initial exponential decay may be compared with Eqs. (15-17) to give an effective diffusion coefficient. We find $`D_L=0.27`$, $`D_R=0.14`$ and $`D_F=0.41`$, which are certainly consistent with the results of the mean square displacement, Tab. I in Sec. III C, given that Eqs. (15-17) depend on the macroscopic diffusion equation, and so require the large system limit $`L\to \mathrm{\infty }`$. The coefficient $`6.6\times 10^6`$ is obtained by matching Eq. (15) to the uniform probability density of the initial conditions; this comes to $`64N(0)/\pi ^4`$ with $`N(0)=10^7`$ as the initial number of particles in the system. From this close agreement, we conclude that the early escape is well described by the diffusion equation with the same diffusion coefficient that appears in the mean square displacement of Sec. III C; there is no trace of non-diffusive behavior at short times.
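The effective diffusion coefficients quoted above follow from a one-line inversion of Eq. (17); as a check:

```python
import numpy as np

L = 18.0  # size of the open system
for model, gamma in [("L", 0.0165), ("R", 0.0088), ("F", 0.025)]:
    D = gamma * L**2 / (2 * np.pi**2)  # Eq. (17) inverted for D
    print(f"{model}A18: D = {D:.2f}")
# prints D = 0.27, 0.14, 0.41, as quoted in the text
```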
#### The R model
We now discuss the long time behavior of the nonchaotic models. The randomly oriented wind-tree model has a $`1/t`$ decay in (20) due to its periodic orbits; particles that are initially close to a periodic orbit lead to this power law as described in Sec. V C. The coefficient gives an estimate of the density of periodic orbits; $`3.2\times 10^5`$ is quite small compared to the number of particles, $`10^7`$, so periodic orbits are relatively rare in a sense that is difficult to define precisely. In terms of the escape rate formalism, a power law decay corresponds to an escape rate $`\gamma =0`$ which trivially satisfies (18), but yields no information about the diffusion coefficient $`D`$, that is, (17) is not satisfied.
#### The F model
The fixed oriented wind-tree model shows a complete escape of all the particles in a finite time, as might be expected from the lack of periodic orbits. The dramatic transition from the initial exponential decay to a much slower (at first) linear decay is somewhat surprising. We interpret this as follows: While there are no exactly periodic orbits in the F model, it is possible for the particle to remain in an orbit that is nearly periodic for some time, allowing the particle to remain in the system longer than the exponential decay would predict. If there is a bundle of trajectories that survive for just 800 time units, then this would lead to a linear law because particles are initially evenly distributed along the bundle of trajectories. If there were, in addition, a large bundle of trajectories that survive for, say, 500 time units, there would be a kink in the plot, with a sharp decrease in gradient at $`t=500`$. The observation that the curve is close to linear (see Fig. 13) thus implies that the lengths of these long-lived trajectories are strongly peaked around 800 time units. One such long-lived orbit is depicted in Fig. 14; its length is seen to be due to two almost periodic orbits. The complete escape of the particles corresponds to an escape rate $`\gamma =\mathrm{\infty }`$, which flagrantly violates both (17) and (18) and yields no information about the diffusion coefficient $`D`$.
We conclude from these observations that, although the wind-tree models satisfy the diffusion equation when there are no boundaries, that is, the two-time distribution function is of the correct Gaussian form, such a macroscopic approximation is not valid when there are absorbing boundary conditions. This means we cannot determine the diffusion coefficient in the same way as for the Lorentz gas, Eqs. (15-17), that is, from
$$D=-\frac{1}{2\pi ^2}\underset{L\to \mathrm{\infty }}{\mathrm{lim}}L^2\underset{t\to \mathrm{\infty }}{\mathrm{lim}}\frac{1}{t}\mathrm{ln}\frac{P_L(t)}{P_L(0)}$$
(25)
because the infinite time limit leads to zero or infinity as explained above. Here $`P_L(t)`$ is the probability of a particle remaining in a system of size $`L`$ for time $`t`$; this is obtained by observing many independent particles, and taking the limit of an infinite number of particles.
We can, however, attempt to calculate the diffusion coefficient in the finite wind-tree models by taking the $`t\to \mathrm{\infty }`$ limit at the same time as the $`L\to \mathrm{\infty }`$ limit. This is because the diffusion equation is a good approximation even in the open case as long as the time is not too long, and the decay remains exponential. For example, assuming that the "density of periodic orbits" in the RA model is independent of $`L`$ (which seems to be the case numerically), the transition time from exponential to power law decay, as determined by comparing the two terms in Eq. (20) using Eq. (17), is of order $`t_{tr}\sim L^2\mathrm{log}L`$. This should be compared with the time scale for which the slowest decaying mode of the diffusion equation dominates, $`t_D\sim L^2`$, see (16). Thus, for a given $`L`$, there is a narrow range in time $`t_D\ll t\ll t_{tr}`$ in which the decay is exponential, and in which a limit can be taken to obtain $`D`$ from the escape process. For example we can combine the $`L`$ and $`t`$ limits by setting $`L=u`$ and $`t=u^2\sqrt{\mathrm{log}u}`$,
$$D=-\frac{1}{2\pi ^2}\underset{u\to \mathrm{\infty }}{\mathrm{lim}}\frac{1}{\sqrt{\mathrm{log}u}}\mathrm{ln}\frac{P_u(u^2\sqrt{\mathrm{log}u})}{P_u(0)}$$
(26)
We expect this relation to hold for all the models \[L,R,F\]A. We emphasize that Eq. (26), if it is mathematically valid, still does not provide a practical method for computing the diffusion coefficient beyond the estimates we gave following Eq. (24). This is because the range of time scales $`t_D\ll t\ll t_{tr}`$ grows so slowly with $`L`$ that extremely large system sizes are required for more precise estimates of $`D`$.
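The narrowness of this window is easy to make explicit: dropping all model-dependent prefactors, the ratio of the two time scales is just $`\mathrm{log}L`$, e.g.

```python
import numpy as np

# Ratio t_tr / t_D ~ log L, with all model-dependent prefactors omitted.
for L in [10.0, 1e2, 1e4, 1e8]:
    print(f"L = {L:8.0e}:  t_tr / t_D ~ log L = {np.log(L):5.1f}")
# even L = 1e8 stretches the exponential window by only a factor ~ 18
```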
We can also turn the argument around, and suggest that the escape at long times of finite systems may be a good experimental technique for detecting (or ruling out at some level) nonchaotic microscopic dynamics. In any case, the range of validity of the diffusion equation as a macroscopic description of nonchaotic systems is restricted.
## VII Discussion and open questions
We have explored the connections between microscopic chaos and diffusion. On a superficial level, the details of the microscopic dynamics seem to have little effect on the diffusion process: Gaussian diffusion is observed in our RP4 model, which has scatterers with flat sides (hence no exponential separation of nearby trajectories) and is periodic (hence there is no source of randomness from a disordered environment). In addition, the Grassberger-Procaccia method of time series analysis cannot distinguish our chaotic and nonchaotic diffusive models because it would require impractically long data sets. On a deeper level, however, the subtle differences between chaotic and nonchaotic diffusion are quite apparent if you know where to look: Our time series analysis method based on periodic orbits has no trouble distinguishing between the chaotic and nonchaotic models, and the long time behavior of escape from an open system is also determined by the properties of the chaotic versus nonchaotic periodic orbits.
In the light of our results, we can make a few concrete suggestions for an experimental determination of the chaotic or nonchaotic properties in a diffusive system. Firstly, as we remarked in Ref. , it is necessary to make measurements on the distance and time scales relevant to the microscopic dynamics. In our models, this question is simplified by the fact that all microscopic time scales are of order one in our units, Sec. III A. Once this is achieved, one could use an approach based on the method of Almost Periodic Recurrences, which searches for periodic behavior; alternatively the late time decay of particles from an open system can also reveal the chaotic or nonchaotic nature of the dynamics.
This raises again the question of exactly what dynamical properties would be measured; there are systems with the positive KS entropy of the Lorentz gas as well as the power law unstable periodic orbits of the wind-tree models. We will now make some remarks about such systems, although we make no attempt to enumerate all degrees and classes of chaotic dynamics.
Our example in this discussion is a model containing both circular and randomly oriented square scatterers. The circular scatterers lead, as in the Lorentz gas, to a positive Lyapunov exponent, while the square scatterers lead, as in the randomly oriented wind-tree model, to power law unstable periodic orbits, using the same argument as in Sec. V C. We have not simulated such a model numerically; however the arguments used in the previous sections may be applied, leading to the following predictions:
Sec. III D: There will be a positive diffusion coefficient with a Gaussian distribution function, as in both the L and R models.
Sec. IV: The Grassberger-Procaccia method will describe the dynamics as chaotic, due to a positive Lyapunov exponent and hence positive KS entropy, as in the L models.
Sec. V: The APR method will describe the dynamics as nonchaotic, due to the power law unstable periodic orbits, as in the R models.
Sec. VI: Such a system with absorbing boundaries will exhibit power law escape at long times, again due to the periodic orbits, as in the R model.
The different conclusions reached by the GP and APR methods exemplify the fact that these methods are based on different dynamical properties; for diffusion in an open system, it is clear that the mixed model is most similar to the R model, and hence that the APR method distinguishes the relevant dynamical property in this case.
We conclude with some open questions and possibilities for further work. In terms of microscopic chaos, we have studied only a few models. We note that there are examples of lack of chaoticity we have not considered, such as the coexistence of chaotic and nonchaotic regions found in KAM theory. In terms of diffusion, we note that in the light of Sec. VI the diffusion equation is not a complete macroscopic description of nonchaotic diffusion. In terms of time series analysis, our APR method, while sufficient for our purposes here, could be much more developed and applied. With regard to periodic orbit theory, the methods developed for exploiting the importance of periodic orbits to compute properties of chaotic systems do not apply to nonchaotic systems, and yet we have seen that periodic orbits also play an important role in nonchaotic systems.
We note that our infinite models, while sharing some of the properties of corresponding finite billiards, appear to fall outside the domain of current mathematics. For example, a recent study of infinite billiard systems is mostly restricted to cases with finite areas that are finitely connected; our models obey neither of these conditions. It would be interesting to see if it is possible to rigorously determine the status of our models, particularly their ergodic properties, rate of decay of correlations and Kolmogorov-Sinai entropy within the framework of some mathematical theory of infinite nonchaotic systems. More generally, a mathematical understanding of statistical mechanics including the thermodynamic limit naturally leads to the study of infinite systems.
Finally, we remark that all the models we have considered, with the exception of one (FP4), are diffusive, which limits our investigation of the presence or absence of microscopic chaos to such systems. In addition, the precise roles played by microscopic chaos, as represented by the Lyapunov exponents, and by "macroscopic chaos", as embodied by the randomly placed scatterers, in the existence of a diffusion process and the value of the diffusion coefficient, remain open questions. A similar but more complicated situation obtains when diffusion of momentum (viscosity) or energy (heat conduction) and other transport processes are considered.
## Acknowledgments
First of all we want to acknowledge that H. van Beijeren suggested the wind-tree model as an important alternative to the Lorentz gas for the study of microscopic chaoticity and diffusion, which led to the work of Ref. and this paper. We thank him also, along with L. Bunimovich, J. R. Dorfman, P. Gaspard and I. Procaccia for stimulating and helpful discussions. This work was supported by the Engineering Research Program of the Office of Basic Energy Sciences of the US Department of Energy under contract #DE-FG02-88-ER13847.
## A The mean free time
We give here a calculation of the mean free time for our models quoted in Sec. III A. Our derivation is due to Chernov , extended to the fixed oriented model, and with a technical caveat for the infinite models. Refer to Sec. II B for the definitions of $`R`$, $`L`$, $`N`$ and $`\rho `$.
The mean free time $`\overline{\tau }`$ is equal to the mean free path, since the velocity is one. The mean free time is known exactly for billiard systems, which include the \[R,L\]P models discussed here. We make a minor extension to allow the FP model (which differs because we do not want to allow all velocity directions). We expect that the formula for the mean free time would still hold in the infinite models on physical grounds, but we cannot justify this mathematically. Briefly, the argument in Ref. observes that the total volume of phase space $`V`$ can be computed in two different ways.
One expression for the volume of phase space is as an integral over the area of the billiard, giving $`V=4A`$ for the original wind-tree model with fixed orientations, and $`V=2\pi A`$ for random orientations or the Lorentz gas. Here, $`V`$ is the volume of phase space, including both position and velocity, while $`A`$ is the area accessible to the point particle. For our infinite billiards we think of a large but finite periodic system of length $`L`$, and formally take the limit $`L\to \mathrm{\infty }`$ at the end of the calculation. Since the area $`A`$ is equal to $`L^2(1-\rho )`$, we have $`A=L^2/2`$ here as $`\rho =1/2`$. The prefactor $`4`$ comes from the four possible wind directions, and the $`2\pi `$ from integrating over all possible directions. Putting these expressions together we have $`V=4L^2(1-\rho )`$ for fixed oriented squares and $`V=2\pi L^2(1-\rho )`$ for randomly oriented squares or circles.
An alternative expression for the volume of phase space is an integral over the boundary of the billiard, where the contribution from each point is given by the free path/time $`\tau `$. In this way we find $`V=\sqrt{2}\overline{\tau }P`$ for fixed oriented squares and $`V=2\overline{\tau }P`$ for randomly oriented squares or circles. Here $`P`$ is the length of the boundary, that is, the total perimeter of all the scatterers, $`\overline{\tau }`$ is the mean free path/time, and the numerical prefactor is the integral over the component of velocity perpendicular to the boundary. For the wind-tree models we have $`P=4\sqrt{2}N`$ where $`N`$ is the number of scatterers, so $`P=2\sqrt{2}\rho L^2`$; for the Lorentz gas, we have $`P=2\pi RN=\sqrt{2\pi }\rho L^2`$.
Comparing the expressions for $`V`$ in both calculations, we find $`\overline{\tau }=(1-\rho )/\rho =1`$ (using our value of $`\rho =1/2`$) for the case of fixed oriented squares, $`\overline{\tau }=\pi (1-\rho )/(2\sqrt{2}\rho )=\pi /(2\sqrt{2})\approx 1.111`$ for the case of random oriented squares, and $`\overline{\tau }=\sqrt{\pi /2}(1-\rho )/\rho =\sqrt{\pi /2}\approx 1.253`$ for the Lorentz gas.
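These closed forms are easily checked numerically; a small sketch:

```python
import numpy as np

rho = 0.5  # scatterer density used throughout
tau_F = (1 - rho) / rho                              # fixed oriented squares
tau_R = np.pi * (1 - rho) / (2 * np.sqrt(2) * rho)   # randomly oriented squares
tau_L = np.sqrt(np.pi / 2) * (1 - rho) / rho         # Lorentz gas
print(tau_F, tau_R, tau_L)  # -> 1.0  1.1107...  1.2533...
```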
# Pinning of quantized vortices in helium drops by dopant atoms and molecules
## Abstract
Using a density functional method, we investigate the properties of liquid <sup>4</sup>He droplets doped with atoms (Ne and Xe) and molecules (SF<sub>6</sub> and HCN). We consider the case of droplets having a quantized vortex pinned to the dopant. A liquid drop formula is proposed that accurately describes the total energy of the complex and allows one to extrapolate the density functional results to large $`N`$. For a given impurity, we find that the formation of a dopant+vortex+<sup>4</sup>He<sub>N</sub> complex is energetically favored below a critical size $`N_{\mathrm{cr}}`$. Our results support the possibility of observing quantized vortices in helium droplets by means of spectroscopic techniques.
Since the first observation of the $`\nu _3`$ vibrational band of SF<sub>6</sub> dissolved in <sup>4</sup>He droplets , the infrared spectroscopy of molecules inside or attached to helium has attracted a wide interest (see, for instance, Refs. and references therein). A major motivation for these efforts is that cold helium droplets offer the possibility of resolving rotational spectra of rather complex molecules and may constitute "the ultimate spectroscopic matrix" to create and study novel species. This unique feature of helium droplets originates from their quantum nature: not only are they fluid at zero temperature, due to the large zero point motion, but they also exhibit a crucial superfluid behavior. The superfluid character of <sup>4</sup>He droplets is interesting also from a fundamental viewpoint. In fact, the observation of superfluid effects in finite-sized quantum systems has to do with important concepts, like order parameter, Bose-Einstein condensation, and phase coherence, which were originally introduced for uniform systems and which are now widely used in different contexts.
In the case of liquid helium, Grebenev et al. recently showed that only a rather small number of <sup>4</sup>He atoms is needed to develop a superfluid droplet, also confirming theoretical predictions . In that experiment, the evidence for superfluidity is the appearance of a sharp rotational spectrum of an OCS molecule in <sup>3</sup>He-<sup>4</sup>He mixed drops, when the number of <sup>4</sup>He atoms surrounding the dopant is larger than about $`60`$. In the same spirit, experiments have been made to observe critical velocities (i.e., the occurrence of a Landau criterion for superfluidity), and a reduction of the moment of inertia (see and references therein). In contradistinction, detecting quantized vortices in droplets still remains an open question. It is worth stressing that all these investigations have many analogies with the current activity on Bose-Einstein condensation in trapped gases, where new results are now available about critical velocities , moments of inertia and vortices .
In this work we address the problem of quantized vortices. One first observes that a vortex line in a pure droplet is expected to be difficult to produce and stabilize, since it implies a significant increase of energy compared to a vortex-free droplet. In order to circumvent this limitation, we explore the possibility of pinning the vortex line to a dopant atom or molecule. If the dopant is deeply bound inside the droplet, it might stabilize the vortex for a time long enough to permit its observation. A second advantage is that the dopant could make the detection feasible via spectroscopic techniques.
Our purpose is to determine the energy and density profile of an impurity+vortex+<sup>4</sup>He<sub>N</sub> complex, for droplets up to $`N`$= 1000, using a finite-range density functional. We then subtract to its energy that of the same droplet without vortex and/or impurity and show that the difference fits very well to a liquid drop formula, which allows one to safely extrapolate to larger droplets. The density functional method consists in minimizing the total energy of the system at zero temperature written as a functional of the He density. We use the Orsay-Paris functional , which is based on an effective non-local interaction with a few parameters fixed to reproduce known properties of bulk liquid He. This functional has been shown to accurately reproduce the static properties of pure and doped He clusters and has also been used to describe a quantized vortex line in bulk liquid helium. In the latter case, the vortex is included with the Onsager-Feynman (OF) ansatz for the velocity field. This implies a singular vorticity and hence the vanishing of the density on the vortex axis. At $`T`$= 0 this approximation is a reasonable starting point, since it enormously reduces the computational cost. Recent calculations have shown that the density profile and energy of the vortex line given by the OF approximation are reasonably close to the ones obtained by assuming non-singular vorticity. Finally, the actual temperature of the droplets, below $`0.4`$ K , is low enough for neglecting thermal contributions to the fluid motion. Thus, corrections beyond OF and at $`T\ne 0`$ are expected not to change the main results of the present work.
The minimization of the energy is performed in axial symmetry by mapping the density on a grid of points, putting the vortex line along the $`z`$-axis and the dopant in the center, at $`\mathbf{r}=0`$. The numerical code used to calculate the density profile and energy is the same used in . The potentials for rare gas impurities have been taken from , that of the spherically averaged SF<sub>6</sub> from , and that of HCN from .
We first consider pure droplets with and without vortex. In Fig. 1 we show density profiles, at $`z=0`$, obtained for different $`N`$. For large droplets the shape approaches that of a rectilinear vortex in the uniform liquid ; the core radius is of the order of 1-2 Å and the density oscillates as a consequence of the He-He interaction. In Fig. 2 we plot the energy associated with the vortex flow, defined as
$$\mathrm{\Delta }E_\mathrm{V}(N)\equiv E_\mathrm{V}(N)-E(N),$$
(1)
where $`E_\mathrm{V}`$ and $`E`$ are the energies of droplets with and without vortex, respectively. The solid line represents the results obtained with a liquid drop formula of the kind:
$$\mathrm{\Delta }E_\mathrm{V}(N)=\alpha N^{1/3}+\beta N^{1/3}\mathrm{log}N+\gamma N^{-1/3},$$
(2)
with the parameters $`\alpha `$= 2.868 K, $`\beta `$= 1.445 K, and $`\gamma `$= 0.313 K extracted from a fit to the density functional calculations. This formula works well. The reason can be easily understood by means of a hollow-core model for the vortex, having core radius $`a`$, in a droplet of radius $`R`$ and constant density $`\rho _0`$. By integrating the kinetic energy of the vortex flow in the limit $`Ra`$, one gets
$$E_{\mathrm{kin}}=\frac{2\pi \hbar ^2\rho _0}{m_4}\left[R\mathrm{log}\left(\frac{2R}{a}\right)-R+\frac{a^2}{4R}\right].$$
(3)
Writing $`R`$= $`r_0N^{1/3}`$ one recovers the $`N`$-dependence as in (2).
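Eq. (2) is trivial to evaluate with the fitted constants; the sketch below assumes that log denotes the natural logarithm:

```python
import numpy as np

alpha, beta, gamma = 2.868, 1.445, 0.313  # K, fitted to the density functional results

def vortex_energy(N):
    """Liquid-drop vortex energy of Eq. (2)."""
    return alpha * N**(1 / 3) + beta * N**(1 / 3) * np.log(N) + gamma * N**(-1 / 3)

for N in (100, 500, 1000):
    print(f"N = {N:4d}:  Delta E_V = {vortex_energy(N):6.1f} K")
```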
The next step is the inclusion of a dopant atom or molecule. As an example, in Fig. 3 we show the He density distribution for a drop of $`N=500`$ with HCN hosted in the vortex core. Both the axis of the linear molecule and that of the vortex are taken along $`z`$. The density is very inhomogeneous near the dopant, due to the complexity of the HCN-He interaction. The energetics of the system can be conveniently analysed by introducing the following energies:
$`\mathrm{\Delta }E_\mathrm{V}^\mathrm{X}(N)`$ $`\equiv `$ $`E_{\mathrm{X}+\mathrm{V}}(N)-E_\mathrm{X}(N),`$ (4)
$`S_\mathrm{X}(N)`$ $`\equiv `$ $`E_\mathrm{X}(N)-E(N),`$ (5)
$`S_{\mathrm{X}+\mathrm{V}}(N)`$ $`\equiv `$ $`E_{\mathrm{X}+\mathrm{V}}(N)-E(N),`$ (6)
where the subscripts $`X`$ and $`V`$ refer to drops doped with impurity $`X`$ and/or vortex line.
The energy $`\mathrm{\Delta }E_\mathrm{V}^\mathrm{X}`$ is the one associated with the vortex flow in the doped cluster. In Fig. 2 it is compared with the vortex energy in pure droplets, $`\mathrm{\Delta }E_\mathrm{V}`$. The difference
$$\delta _\mathrm{X}(N)=\mathrm{\Delta }E_\mathrm{V}^\mathrm{X}-\mathrm{\Delta }E_\mathrm{V}<0$$
(7)
is almost independent of $`N`$, apart from the smallest droplets. The reason is that this difference has to do with the "geometrical extension" of the dopant, i.e., the "hole" made by the dopant in the vortex flow, as well as with the distortion of the density near the dopant caused by the pinning of the vortex core. Both effects are localized near the dopant and, thus, they are expected to give a shift in energy which becomes $`N`$-independent for large droplets.
The quantity $`S_\mathrm{X}(N)`$ in Eq. (5) is the solvation energy of the dopant in a vortex-free droplet. The results obtained for Ne, Xe, HCN and SF<sub>6</sub> are shown in Fig. 4. As already discussed in , the solvation energy becomes almost $`N`$-independent for $`N`$ larger than a few hundreds. The value at $`N=1000`$ can be safely taken to represent the solvation energy in the bulk, $`S_\mathrm{X}(\mathrm{\infty })\approx S_\mathrm{X}(1000)`$. For our analysis, we have chosen impurities having binding energies on a wide range.
The key quantity in the present study is the solvation energy of the dopant+vortex complex given by $`S_{\mathrm{X}+\mathrm{V}}(N)`$ in Eq. (6). The results are shown in Fig. 5. From the definitions (4)-(7) one can also write
$`S_{\mathrm{X}+\mathrm{V}}(N)`$ $`=`$ $`E_\mathrm{X}(N)+\mathrm{\Delta }E_\mathrm{V}^\mathrm{X}(N)-E(N)`$ (8)
$`=`$ $`S_\mathrm{X}(N)+\mathrm{\Delta }E_\mathrm{V}(N)+\delta _\mathrm{X}(N).`$ (9)
In Fig. 5 we compare $`S_{\mathrm{X}+\mathrm{V}}`$ with the sum $`S_\mathrm{X}+\mathrm{\Delta }E_\mathrm{V}`$; the difference is $`\delta _\mathrm{X}`$. The simple picture which emerges from this analysis is that the solvation energy of the dopant+vortex complex is just the sum of the solvation energy of the dopant with no vortex and the extra energy of a vortex in a pure droplet, apart from a small shift which depends on the dopant. Deviations from this rule are significant only for small droplets, having radius of the order of the size of the dopant. Our numerical results provide a quantitative basis for this picture and yield typical estimates of $`\delta _\mathrm{X}`$. It is worth noticing that, by rearranging the terms in (7), this quantity can be written as the difference between the solvation energies of the dopant in a droplet with and without vortex, $`\delta _\mathrm{X}=(E_{\mathrm{X}+\mathrm{V}}-E_\mathrm{V})-(E_\mathrm{X}-E)`$, and can hence be interpreted as the binding energy of the dopant to the vortex .
Since the solvation energy $`S_\mathrm{X}`$ is negative and almost constant for $`N>300`$ while the vortex energy $`\mathrm{\Delta }E_\mathrm{V}`$ always increases, the dopant+vortex complex has a solvation energy which changes sign at some $`N_{\mathrm{cr}}`$. This means that for $`N<N_{\mathrm{cr}}`$, the dopant+vortex complex is energetically favored. In the case of Ne, as one can see in Fig. 5, $`N_{\mathrm{cr}}\approx 380`$. This number is rather small as compared to the typical droplet size in current experiments, and is a consequence of the weak binding of Ne. Dopants with stronger binding have larger $`N_{\mathrm{cr}}`$. An estimate for HCN, Xe, and SF<sub>6</sub> can be easily obtained by means of the liquid drop formula. One has to insert expression (2) in (9) and use the large-$`N`$ values of $`S_\mathrm{X}`$ and $`\delta _\mathrm{X}`$. The $`S_\mathrm{X}`$ values turn out to be $`-310`$ K, $`-320`$ K and $`-622`$ K, and the $`\delta _\mathrm{X}`$ values are $`-5.0`$ K, $`-4.4`$ K, and $`-7.7`$ K, for Xe, HCN, and SF<sub>6</sub>, respectively. These numbers yield $`N_{\mathrm{cr}}\approx 7600`$ for Xe, $`\approx 8100`$ for HCN, and $`\approx 40000`$ for SF<sub>6</sub>.
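Equivalently, $`N_{\mathrm{cr}}`$ is the root of $`S_{\mathrm{X}+\mathrm{V}}(N)=0`$ with Eqs. (2) and (9); a sketch (again taking the natural logarithm; with the rounded constants quoted above the roots land within a few percent of the $`N_{\mathrm{cr}}`$ values given in the text):

```python
import numpy as np
from scipy.optimize import brentq

def S_complex(N, S_X, delta_X):
    """Solvation energy of the dopant+vortex complex, Eq. (9)."""
    return (S_X + delta_X + 2.868 * N**(1 / 3)
            + 1.445 * N**(1 / 3) * np.log(N) + 0.313 * N**(-1 / 3))

for name, S_X, d_X in [("Xe", -310.0, -5.0), ("HCN", -320.0, -4.4), ("SF6", -622.0, -7.7)]:
    print(name, round(brentq(S_complex, 10.0, 1e6, args=(S_X, d_X))))
```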
In conclusion, the analysis of the energetics of doped helium droplets has allowed us to disclose a possible mechanism to create and stabilize vortex lines. A dopant+vortex complex could be formed by picking up the impurity, assuming that the collision imparts sufficient angular momentum. The vortex line is expected to appear attached to the dopant, since the binding energy $`\delta _\mathrm{X}`$ is negative. The formation of the complex is energetically favored below a critical $`N`$ which is well within the range of droplet sizes met in current experiments if the dopant has a large solvation energy. A metastable state could also exist for $`N>N_{\mathrm{cr}}`$, but estimating its lifetime is a more demanding calculation. One should also explore the energy barrier associated with other possible decay processes. Further work is planned in this direction.
We thank Kevin Lehmann for useful discussions. F.D. would like to thank the Dipartimento di Fisica of the Università di Trento, where part of this work has been done. This work has been performed under grants No. PB98-1247 from DGESIC, Spain, and No. 1998SGR-00011 from Generalitat of Catalunya.
# Shell model description of normal parity bands in odd-mass heavy deformed nuclei
## Abstract
The low-energy spectra and B(E2) electromagnetic transition strengths of <sup>159</sup>Eu, <sup>159</sup>Tb and <sup>159</sup>Dy are described using the pseudo SU(3) model. Normal parity bands are built as linear combinations of SU(3) states, which are the direct product of SU(3) proton and neutron states with pseudo spin zero (for even number of nucleons) and pseudo spin 1/2 (for odd number of nucleons). Each of the many-particle states have a well-defined particle number and total angular momentum. The Hamiltonian includes spherical Nilsson single-particle energies, the quadrupole-quadrupole and pairing interactions, as well as three rotor terms which are diagonal in the SU(3) basis. The pseudo SU(3) model is shown to be a powerful tool to describe odd-mass heavy deformed nuclei.
PACS numbers: 21.60.Fw, 21.60.Cs, 27.70.+q
The shell model is a fundamental theory that is applicable in nuclear, atomic and non-relativistic quark physics . In its simplest formulation it provides a natural explanation of magic numbers as shell closures and the energy spectra of closed shell $`\pm 1`$ odd-mass nuclei . Powerful computers and special algorithms for diagonalizing large matrices have allowed systematic studies of nuclei of the sd-shell and pf-shell up to $`A=56`$ . New methods for solving large scale shell-model problems in medium mass nuclei have also been developed . A shell-model description of heavy nuclei requires further assumptions that include a systematic and proper truncation of the model space .
In light deformed nuclei the dominance of the quadrupole-quadrupole interaction led to the introduction of the SU(3) shell model , and with it a very natural means to truncate large model spaces. Although realistic interactions mix different irreducible representations (irreps), the ground state wave function of well-deformed light nuclei normally consists of only a few SU(3) irreps . The strong spin-orbit interaction renders the usual SU(3) scheme useless in heavy nuclei, but at the same time pseudo-spin emerges as a good symmetry .
Pseudo-spin symmetry refers to the experimental fact that single-particle orbitals with $`j=l-1/2`$ and $`j=(l-2)+1/2`$ in the shell $`\eta `$ lie very close in energy and can therefore be labeled as pseudo spin doublets with quantum numbers $`\stackrel{~}{j}=j`$, $`\stackrel{~}{\eta }=\eta -1`$ and $`\stackrel{~}{l}=l-1`$. The origin of this symmetry has been traced back to the relativistic Dirac equation .
In the simplest version of the pseudo SU(3) model, the intruder level with opposite parity in each major shell is removed from active consideration and pseudo-orbital and pseudo-spin angular momentum are assigned to the remaining single-particle states. The coupling of a deformed rigid-rotor core with one extra particle in a pseudo SU(3) orbital has been used to describe rotational bands and electromagnetic properties of heavy odd-mass nuclei and identical normal and superdeformed bands .
A fully microscopic description of low-energy bands in even-even nuclei has been developed using the pseudo SU(3) model. The first applications used the pseudo SU(3) as a dynamical symmetry, with a single SU(3) irrep describing the whole yrast band up to backbending . A comparison of quantum rotor and microscopic SU(3) states provided a classification of the SU(3) irreps in terms of their transformation properties under $`\pi `$ rotations in the intrinsic frame and led to the construction of a $`K^2`$ operator which plays a crucial role in the description of the gamma band .
On the computational side, the development of a computer code to calculate reduced matrix elements of physical operators between different SU(3) irreps represented a breakthrough in the development of the pseudo SU(3) model. For example, with this code it is possible to include pairing, which is an SU(3) symmetry breaking interaction, in the Hamiltonian and exhibit its close relationship with triaxiality . Full-space calculations in the pf-shell in an SU(3) basis show that for a description of the low-energy spectra of deformed nuclei the Hilbert space can be truncated to leading irreps of the quadrupole-quadrupole and spin-orbit (or pseudo spin-orbit) interactions. However, the inclusion of a pairing-type interaction is essential for a correct description of moments of inertia in such a truncated space.
Once a basic understanding of this overall structure was achieved, a powerful shell-model theory for a description of normal parity states in heavy deformed nuclei emerged. For example, the low-energy spectra of many Gd and Dy isotopes, their B(E2) and B(M1) transitions strengths for both their scissors and twist modes and their fragmentation were successfully described with a realistic Hamiltonian .
In the present letter we introduce a refined version of the pseudo SU(3) formalism which uses a realistic Hamiltonian with single-particle energies plus quadrupole-quadrupole and monopole pairing interactions with strengths taken from known systematics. The model is applied to three odd-mass rare earth nuclei: <sup>159</sup>Eu, <sup>159</sup>Tb and <sup>159</sup>Dy. The results represent a full implementation of the very ambitious program implied in first applications of the pseudo SU(3) model to odd-mass nuclei performed nearly thirty years ago .
Many-particle states of $`n_\alpha `$ active nucleons in a given normal parity shell $`\eta _\alpha `$, $`\alpha =\nu `$ or $`\pi `$, can be classified by the following chains of groups:
$`\{1^{n_\alpha ^N}\}\{\stackrel{~}{f}_\alpha \}\{f_\alpha \}\gamma _\alpha (\lambda _\alpha ,\mu _\alpha )\stackrel{~}{S}_\alpha K_\alpha \stackrel{~}{L}_\alpha J_\alpha ^N`$
$`U(\mathrm{\Omega }_\alpha ^N)\supset U(\mathrm{\Omega }_\alpha ^N/2)\times U(2)\supset SU(3)\times SU(2)\supset SO(3)\times SU(2)\supset SU_J(2),`$ (1)
where above each group the quantum numbers that characterize its irreps are given and $`\gamma _\alpha `$ and $`K_\alpha `$ are multiplicity labels of the indicated reductions.
The most important configurations are those with highest spatial symmetry . This implies that $`\stackrel{~}{S}_{\pi ,\nu }=0`$ or $`1/2`$, that is, only configurations with pseudo spin zero for even number of nucleons and $`1/2`$ for odd number of nucleons are taken into account.
We will describe <sup>159</sup>Tb as a first example. It has 15 protons and 12 neutrons in the 50-82 and 82-126 shells, respectively. The number of nucleons in normal (N) and abnormal (A) parity orbitals is determined by filling the Nilsson levels with a pair of particles for $`\beta \approx 0.25`$ in order of increasing energy. This gives
$`n_\pi ^N=9,n_\pi ^A=6,n_\nu ^N=8,n_\nu ^A=4`$ (2)
After decoupling the pseudospin in Eq. (1) we get $`\{\stackrel{~}{f}_\pi \}=\{2^41\},\{\stackrel{~}{f}_\nu \}=\{2^4\}`$ with $`\stackrel{~}{S}_\pi =1/2`$ and $`\stackrel{~}{S}_\nu =0`$. Table I lists the 15 pseudo SU(3) irreps, with the largest value of the Casimir operator $`C_2`$, which were used in this calculation.
Table I
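For reference, in one common normalization the second-order SU(3) Casimir is $`C_2(\lambda ,\mu )=\lambda ^2+\mu ^2+\lambda \mu +3(\lambda +\mu )`$, so ranking candidate irreps amounts to a one-line comparison; the sample $`(\lambda ,\mu )`$ pairs below are illustrative only, not the entries of Table I:

```python
def casimir2(lam, mu):
    # second-order SU(3) Casimir, C2 = lam^2 + mu^2 + lam*mu + 3*(lam + mu)
    return lam**2 + mu**2 + lam * mu + 3 * (lam + mu)

sample = [(10, 1), (8, 2), (6, 3), (4, 4), (2, 5)]
for lam, mu in sorted(sample, key=lambda lm: -casimir2(*lm)):
    print((lam, mu), casimir2(lam, mu))
```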
The Hamiltonian contains spherical Nilsson single-particle terms for protons and neutrons ($`H_{sp,\pi [\nu ]}`$), the quadrupole-quadrupole ($`\stackrel{~}{Q}\cdot \stackrel{~}{Q}`$) and pairing ($`H_{pair,\pi [\nu ]}`$) interactions as well as three "rotor-like" terms which are diagonal in the SU(3) basis:
$`H`$ $`=`$ $`H_{sp,\pi }+H_{sp,\nu }-{\displaystyle \frac{1}{2}}\chi \stackrel{~}{Q}\cdot \stackrel{~}{Q}-G_\pi H_{pair,\pi }`$
$`-G_\nu H_{pair,\nu }+aK_J^2+bJ^2+A_{asym}\stackrel{~}{C}_2.`$
The term proportional to $`K_J^2`$ breaks the SU(3) degeneracy of the different K bands , the $`J^2`$ term represents a small correction to fine tune the moment of inertia, and the last $`\stackrel{~}{C}_2`$ term is introduced to distinguish between SU(3) irreps with $`\lambda `$ and $`\mu `$ both even from the others with one or both odd .
The Nilsson single-particle energies as well as the pairing and quadrupole-quadrupole interaction strengths were taken from systematics ; only $`a`$ and $`b`$ were used for fitting. Parameter values are listed in Table II and are consistent with those used in the description of neighboring even-even nuclei .
Table II
Figure 1(a) shows the calculated and experimental $`K=\frac{3}{2},\frac{5}{2}`$ and $`\frac{1}{2}`$ bands for <sup>159</sup>Tb. The agreement between theory and experiment is in general excellent. The model predicts a continuation of the $`K=\frac{5}{2}`$ band and over-emphasizes staggering in the $`K=\frac{1}{2}`$ band.
Figure 1
The role played by each term in the Hamiltonian will be discussed in detail elsewhere . In this letter we wish to emphasize that the pairing interaction is absolutely essential despite the strong truncation of the Hilbert space. To this end we present in part (b) of Fig. 1 the low-energy spectra of <sup>159</sup>Tb with the same Hamiltonian except that the pairing interaction has been turned off. It clearly exhibits the importance of the pairing interaction in building up the correct moment of inertia: the spectrum without pairing is strongly compressed. It can also be seen that pairing affects the other energies in a similar way, with an overall effect that resembles the introduction of a multiplicative factor in the Hamiltonian. We conclude that the proposed truncation scheme is justified and works as expected.
Theoretical and experimental B(E2) transition strengths between yrast states in <sup>159</sup>Tb are shown in Table III. The E2 transition operator that was used is given by
$$Q_\mu =e_\pi Q_\pi +e_\nu Q_\nu \approx e_\pi \frac{\eta _\pi +1}{\eta _\pi }\stackrel{~}{Q}_\pi +e_\nu \frac{\eta _\nu +1}{\eta _\nu }\stackrel{~}{Q}_\nu ,$$
(4)
with effective charges $`e_\pi =2.3,e_\nu =1.3`$. These values are very similar to those used in the pseudo SU(3) description of even-even nuclei . They are larger than those used in standard calculations of B(E2) strengths due to the passive role assigned to nucleons in unique parity orbitals, whose contribution to the quadrupole moments is parametrized in this way.
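The numerical factors in Eq. (4) follow from the pseudo-shell mapping $`\stackrel{~}{\eta }=\eta -1`$, i.e. $`\eta _\pi =4`$ and $`\eta _\nu =5`$ for the shells used here; as a quick check:

```python
# Quadrupole scaling factors of Eq. (4), using eta_pi = 4 and eta_nu = 5
# (the normal-parity shells whose pseudo counterparts are eta~ = 3 and 4).
for label, eta, e in [("protons", 4, 2.3), ("neutrons", 5, 1.3)]:
    factor = (eta + 1) / eta
    print(f"{label}: (eta+1)/eta = {factor:.2f}, e * factor = {e * factor:.2f}")
```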
In Figure 2 (a) we present the low-lying energy spectra of <sup>159</sup>Eu, including the $`K=\frac{5}{2},\frac{3}{2}`$ and $`\frac{1}{2}`$ bands built with 7 protons in the normal parity subshell $`\stackrel{~}{\eta }=3`$ and 8 neutrons in $`\stackrel{~}{\eta }=4`$. There is a good agreement between the experimental and theoretical results. The model predicts a second $`\frac{7}{2}^+`$ state in the $`K=\frac{3}{2}`$ band which is missing in the experimental spectra, as well as several other states in the excited bands.
Figure 2
It is interesting to notice that the ground state in <sup>159</sup>Tb is $`\frac{3}{2}^+`$ while in <sup>159</sup>Eu it is $`\frac{5}{2}^+`$. Reproducing this effect is one of the successes of this theory; realistic single-particle energies are required to get this ordering correct.
The low energy spectrum of <sup>159</sup>Dy is presented in Fig. 2(b). There are three bands, with $`K=\frac{3}{2},\frac{5}{2}`$ and $`\frac{1}{2}`$, respectively. As in the other cases the agreement between theory and experiment is remarkably good. In the $`K=\frac{3}{2}`$ ground state band the $`\frac{17}{2}^{}`$ state is predicted to have an energy higher than the experimentally observed one. This departure of the experimental ground state band from the rigid rotor behavior may be related to a band crossing. The possibility of describing it by increasing the Hilbert space is under investigation. In the $`K=\frac{1}{2}`$ band the $`\frac{3}{2}^{}`$ state lies higher than its $`\frac{5}{2}^{}`$ partner, which contradicts the experimental results. As in the other cases, the model predicts several excited levels that are as yet undetected.
It has been shown that normal parity bands in odd-mass heavy deformed nuclei can be described quantitatively using the pseudo SU(3) model. Only a few representations with largest $`C_2`$ values and pseudo spin 0 or 1/2 are needed. The Hamiltonian uses Nilsson single-particle energies, quadrupole-quadrupole and pairing interactions with strengths fixed by systematics, and three small rotor terms which with the others yield excellent results for energies and B(E2) values in A=159 nuclei.
This work exhibits the usefulness of the pseudo SU(3) model as a shell model, one which can be used to describe deformed rare-earth and actinide isotopes by performing a symmetry dictated truncation of the Hilbert space. It opens up the possibility of a more detailed microscopic description of other properties of heavy deformed nuclei, both with even and odd protons and neutrons numbers, like g-factors, M1 transitions, and beta decays.
This work was supported in part by Conacyt (México) and the U.S. National Science Foundation.
Table Captions
Table I: The 15 pseudo SU(3) irreps used in the description of <sup>159</sup>Tb bands.
Table II: Parameters used in the Hamiltonian.
Table III: Theoretical and experimental B(E2) transition strengths for <sup>159</sup>Tb.
Figure Captions
Fig. 1: Energy spectra of <sup>159</sup>Tb; "Exp" represents the experimental results and "Theo" the calculated ones. Insert (a) shows the energies obtained with the Hamiltonian parameters listed in Table II, insert (b) shows the energies obtained without pairing.
Fig. 2: (a) Energy spectra of <sup>159</sup>Eu and (b) <sup>159</sup>Dy, with the same convention of Fig. 1. |
# CCD photometry in the region of NGC 6994: the remains of an old open cluster
Table 1 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html
## 1 Introduction
Open clusters are very useful for many purposes concerning our galaxy's structure and evolution; the oldest ones (with ages of about 1 Gyr or greater) are particularly suitable for studying the Galactic disk. As pointed out by Janes & Phelps (1994), the spatial distributions of old and young open clusters are notably different: the old ones, projected onto the Galactic plane, are located in the outer disk, at distances greater than 7.5 kpc from the Galactic center, towards the Galactic anticenter, whereas the young open clusters are distributed symmetrically about the Sun. The scale heights estimated by fitting exponential laws to their respective distributions perpendicular to the Galactic plane indicate that the old open cluster population (scale height of 375 pc) is considerably thicker than the young one (scale height of 55 pc). This distribution of old open clusters can be understood in terms of their dynamical evolution, as the fact of remaining in the outer disk and far from the Galactic plane helps them to avoid tidal encounters with giant molecular clouds, mostly present in the inner disk, as well as the effect of other disruptive forces (Friel (1995) and references therein).
NGC 6994 (C2056-128) is an object located at low galactic latitude ($`\alpha =20^h58\stackrel{m}{.}9`$, $`\delta =-12°38'`$ (J2000.0); $`l=35°.7`$, $`b=-34°`$), in Aquarius, and it has been very little studied. Collinder (1931) estimated for it a distance of 3.8 kpc and an angular diameter of 2.8 arcmin, and wondered whether it was an open or globular cluster. Ruprecht (1966) classified NGC 6994 as a Trumpler (1930) class IV 1 p, a very sparse and poor open cluster. In a statistical analysis of Galactic clusters' ages, Wielen (1971) included it in the group of old and nearby ones, with high values of galactic latitude, but again considering it as a doubtful open cluster.
As far as we know, there have been no previous photometric studies of NGC 6994. Our investigation attempts to shed light on the nature of this object by means of CCD photometry, determining its probable members and true extension, and estimating its reddening, distance, age and metallicity. We also include a comparison with a model of the Galactic stellar distribution that lends support to our observational results.
Sect. 2 describes observations and data reduction. Membership and the fundamental parameters of the cluster are discussed in Sect. 3. In Sect. 4 we present a comparison with a model of the Galaxy, and in Sect. 5 an analysis of the radial distribution. Our conclusions and a summary of the results are provided in the final Sect.
## 2 Observations and reductions
Observations for this project were carried out on the nights 12/13 and 13/14 October 1996, using the 2.15 m telescope at the Complejo Astronómico El Leoncito (CASLEO) in San Juan, Argentina. Direct CCD images were collected with a TEK 1024 chip and $`BV(RI)_{KC}`$ filters; a focal reducer was attached to the telescope so that the scale was 0.8 arcsec/pixel, providing a circular usable field with a diameter of 9.6 arcmin. In each filter, a set of two frames in at least three different exposures were obtained for NGC 6994, as well as 40 frames of three Landolt (1992) fields containing 12 standard stars, which cover a range in color from $`(B-V)=0.339`$ to 1.551. The seeing ranged from 1.8 to 2.4 arcsec during both nights.
A blue CCD image is shown in Fig. 1 and it is evident from it that, formerly, the cluster was supposed to consist of just the four central brightest stars. The stellar distribution on the frame is presented in Fig. 2, which can be used as a finding chart and where we have included a 3 arcmin circle about the aforementioned brightest stars.
All reductions were performed using IRAF (IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract to the National Science Foundation). Preliminary processing was done in the standard way; frames were trimmed, bias subtracted and flat-fielded using dome flats. Instrumental magnitudes of stars in the region of NGC 6994 were derived with the DAOPHOT package (Stetson (1987)); a position dependent point spread function (PSF) and the corresponding aperture corrections were calculated for each frame, and the frames were reduced separately. Standard stars were measured using aperture photometry and the following transformation equations were applied to transform our instrumental magnitudes
$$b=V+(B-V)+3.180+0.275X-0.129(B-V)$$

$$v=V+1.911+0.165X+0.067(B-V)$$

$$r=V-(V-R)+1.796+0.115X-0.033(V-R)$$

$$i=V-(V-I)+2.775+0.075X-0.101(V-R)$$
where lower case letters refer to instrumental magnitudes and upper case ones represent standard system values, X is the airmass, and the extinction coefficients were taken from Minniti et al. (1989). The rms error in the fits of all the transformation equations was of the order of 0.01 mag.
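Since the color terms are small, equations of this type can be inverted for the standard values by fixed-point iteration; a minimal sketch for the $`b`$, $`v`$ pair (the function is ours, not part of the reduction pipeline):

```python
def invert_bv(b, v, X, tol=1e-4, itmax=50):
    """Recover V and B-V from instrumental b, v at airmass X by iterating
    the two transformation equations above, starting from B-V = 0."""
    BV = 0.0
    for _ in range(itmax):
        V = v - 1.911 - 0.165 * X - 0.067 * BV
        BV_new = b - V - 3.180 - 0.275 * X + 0.129 * BV
        if abs(BV_new - BV) < tol:
            return V, BV_new
        BV = BV_new
    return V, BV
```

The iteration contracts with factor about 0.2 per step, so a handful of iterations suffices.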
We finally calculated, for the 144 stars measured in this field, mean values of their colors and magnitudes, weighted according to the photometric errors given by DAOPHOT, together with the corresponding errors of the means, which are shown in Fig. 3 as a function of $`V`$ magnitude. The full photometric data set is listed in Table 1, where the X and Y coordinates are given in units of CCD pixels.
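The weighting scheme is the usual inverse-variance one; a short sketch with made-up repeated measurements of one star:

```python
import numpy as np

def weighted_mean(values, errors):
    """Inverse-variance weighted mean and error of the mean, as used
    for combining the repeated magnitude measurements of each star."""
    w = 1.0 / np.asarray(errors) ** 2
    mean = np.sum(w * np.asarray(values)) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return mean, err

# e.g. three V measurements of one star with their DAOPHOT errors
m, e = weighted_mean([15.42, 15.45, 15.40], [0.02, 0.03, 0.02])
print(f"<V> = {m:.3f} +/- {e:.3f}")
```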
## 3 Results
Figs. 4, 5, and 6 show the colorโmagnitude diagrams (CMDs) for all the stars studied in the region of NGC 6994. We have made no attempt to eliminate the field stars because we lack a comparison field and we had no conclusive proof that the probable cluster did not extend out of the limits of the frame. A comparison with a model of our Galaxy for this field, which is discussed in the next Sect., will help to settle the question.
Membership, reddening and distance were determined together, following an iterative process. Even though without radial velocities or metallicities we cannot confirm membership, we attempted to identify likely cluster members based on the locus of the stars in the $`(B-V)`$ vs $`V`$ diagram. We started by assuming that only stars in the central group, that is within the 3 arcmin circle (see Fig. 2), were members of the cluster. In the following steps, once we had estimates of the color excess and distance modulus, we selected and added as member candidates those stars lying within $`\pm 2\epsilon `$ of the Zero Age Main Sequence (ZAMS) from Schmidt-Kaler (1982), shifted according to the chosen reddening and distance modulus ($`\epsilon `$ being the internal error of the photometry), and we also included those lying within 0.75 mag of the upper envelope in case any of the member candidates were binaries (see Fig. 4). The reddening was estimated by means of the $`BVI_C`$ technique, discussed by Cousins (1978), using the $`[(B-V)-(V-I)]`$ vs $`(V-I)`$ diagram (Fig. 7); member candidates with $`(V-I)`$ colors between 0.5 and 1 mag were preferred for this determination because this color range is the most sensitive to the reddening in this diagram, as pointed out by Barrado & Byrne (1995). Finally, the distance modulus was chosen so as to obtain the best possible fit of the isochrones from the work of VandenBerg (1985) to our data, taking into account the already estimated color excess; the isochrone fitting also gave us information on the probable metallicity and age of the cluster. The whole procedure was repeated until no new member candidates were added to the group.
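Schematically, one selection pass of this iteration looks as follows (a sketch only: the ZAMS table here is illustrative, and a real run would interpolate the Schmidt-Kaler (1982) ZAMS and refit $`E(B-V)`$ and the distance modulus between passes):

```python
import numpy as np

# Illustrative stand-in for the ZAMS; intrinsic (B-V) vs absolute M_V.
zams_bv = np.array([0.0, 0.3, 0.6, 0.9, 1.2, 1.5])
zams_Mv = np.array([1.5, 3.0, 4.6, 6.0, 7.4, 8.8])

def members(stars, EBV, dm, eps, binary_width=0.75):
    """One selection pass: keep stars within +/-2*eps of the shifted
    ZAMS, or up to binary_width mag above it (possible binaries).
    'stars' is a list of (V, B-V) pairs."""
    sel = []
    for V, BV in stars:
        zV = np.interp(BV - EBV, zams_bv, zams_Mv) + dm
        if abs(V - zV) <= 2 * eps or 0 < zV - V <= binary_width:
            sel.append((V, BV))
    return sel

# One pass with the adopted E(B-V)=0.07 and (V-M_V)=9.2; toy stars.
print(members([(13.2, 0.55), (18.0, 1.40)], EBV=0.07, dm=9.2, eps=0.03))
```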
The final selection of member candidates was performed by also checking their positions in the other CMDs. Star #4, though located slightly to the left of the sequences in the CMDs, was considered a probable member because the errors in its mean $`V`$ and $`(B-V)`$ values (0.06 and 0.07 mag, respectively) were higher than for other stars of similar magnitude. Star #14 was considered a field star. Stars #6 and #7 lie on the binary sequence, about 0.7 mag above the main sequence. In this way, a total of 24 members, including star #4, are proposed as belonging to this cluster: 7 of them are located within the central 3 arcmin circle, and the other 17 stars are distributed out to the limits of our frames, so that the true angular size of NGC 6994 might be even larger than 9 arcmin. Member star candidates are shown as filled circles in all the CMDs, the probable member #4 as an open circle, and field stars as crosses. It is interesting to note that all stars in the CCD frames brighter than $`V=14.5`$ mag, i.e. the 11 brightest stars, seem to be members of NGC 6994.
The adopted reddening is $`E(V-I)=0.09\pm 0.03`$ mag, and by means of the relation of Dean et al. (1978) we get $`E(B-V)=0.07\pm 0.02`$ mag. The shift of the Cousins' sequence to the position shown in Fig. 7 proved to be relatively uncertain although, according to the direction of the reddening line, it was the best fit that could be achieved including member candidates with $`(V-I)`$ colors between 0.5 and 1 mag, as explained above. Consequently, we decided to check the color excess and to obtain other estimates of the reddening from the maps of Burstein & Heiles (1982) and the recently published ones of Schlegel et al. (1998). In the former maps, the position corresponding to the galactic coordinates of the cluster is very close to the contour of $`E(B-V)=0.06`$ mag, and we obtained values ranging between 0.05 and 0.06 mag within the boundaries of our frames. The more reliable maps of Schlegel et al. give $`E(B-V)`$ between 0.04 and 0.06 mag for the same region. Thus, the estimates of reddening from both maps agree, within the errors, with our previous determination, so we decided to keep that color excess as the most accurate value and use it as a constraint in the isochrone fitting process.
In order to estimate the distance modulus we considered theoretical VandenBerg isochrones of different metallicities and ages, shifted according to the $`E(B-V)`$ color excess. Isochrones of metallicity \[Fe/H\] $`=-0.45`$ and more metal poor were inconsistent with our data for all ages. The $`-0.23`$ dex ones at ages 2 – 3 Gyr provided a proper fit for a distance modulus $`(V-M_V)9.3`$ mag, and the best global fit was obtained with the isochrones of solar metallicity corresponding to the same ages, at $`(V-M_V)=9.2\pm 0.25`$ mag. Due to the small number of stars involved in the fit we cannot discard the metallicity \[Fe/H\] $`=-0.23`$, but both distance moduli are in agreement within the errors, so we finally assumed that the distance derived with the solar metallicity isochrones was the most accurate one (Fig. 8).
If we adopt a value of $`R=A_V/E(B-V)=3.2`$, we obtain an unreddened distance modulus of $`(V_0-M_V)9\pm 0.25`$ mag, corresponding to a distance from the Sun of approximately $`d=620`$ pc. Taking into account its galactic latitude and assuming a solar Galactocentric distance of 8.5 kpc, we determine that NGC 6994 is located at about 350 pc below the Galactic plane and has a Galactocentric distance of about $`R_{gc}=8.1`$ kpc.
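For reference, the distance, height below the plane, and Galactocentric distance follow directly from the quoted modulus and coordinates; a short arithmetic sketch:

```python
import numpy as np

V0_MV = 9.0                       # unreddened distance modulus
d = 10 ** ((V0_MV + 5) / 5)       # distance in pc (~630 pc)
b = np.radians(-34.0)             # galactic latitude
l = np.radians(35.7)              # galactic longitude
R0 = 8500.0                       # solar Galactocentric distance, pc

z = d * np.sin(b)                 # height below the Galactic plane
d_plane = d * np.cos(b)           # in-plane projection of d
Rgc = np.sqrt(R0**2 + d_plane**2 - 2 * R0 * d_plane * np.cos(l))
print(f"d = {d:.0f} pc, z = {z:.0f} pc, Rgc = {Rgc/1000:.2f} kpc")
```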
The two stars on the left side of the CMDs, which are identified as #41 and #50 in Fig. 2 and Table 1, deserve further discussion. According to their $`(B-V)`$ colors they might be white dwarfs; it is thus worth finding out, by means of models, whether they are cluster white dwarf candidates or not. We used the photometric calibration of Bergeron et al. (1995). The first step was to estimate the visual absolute magnitudes of these stars assuming that they were located at the same distance as the cluster ($`M_V=7.7-8.5`$ mag, taking into account the error of the distance modulus); then, we obtained all the corresponding $`(B-V)`$ colors and ages from the calibrations, including the models for pure hydrogen and pure helium compositions as well as those for different values of the surface gravity. The model $`(B-V)`$ colors range from $`-0.17`$ to $`-0.33`$ mag, that is, quite different from the observed ones (see Table 1); and the corresponding ages would be younger than $`10^7`$ yr, which seems unlikely for the sparse cluster that we are dealing with. On the other hand, the absolute magnitudes from the models that correspond to the observed $`(B-V)`$ are $`M_V9.5`$ mag, which differ by at least 1 mag from those obtained from the observations. We are then led to conclude that stars #41 and #50 are not cluster white dwarfs. They may be either field white dwarfs or another type of blue object.
## 4 Comparison with a Galactic model
In order to confirm the identification of the member star candidates as an old cluster, we attempted a comparison of our observational results with Galaxy model predictions. If we obtain, for the particular field we are studying, a theoretical distribution of field stars that matches the observed one, we can be quite confident that the sequence drawn by the member candidates corresponds to a genuine cluster.
We obtained the star distribution predicted by the Galactic model of Reid & Majewski (1993), corresponding to a field of the same size, located at the same position as NGC 6994, and including stars up to the same limiting $`V`$ magnitude. Due to the low value of the galactic latitude, we expected to find, as field star contamination, several stars from the thick disk and from the halo added to the ones belonging to the Galactic disk. Figs. 9 and 10 show the theoretical $`V`$ vs $`(B-V)`$ and $`V`$ vs $`(V-I)`$ CMDs, reddened according to the values obtained in the previous section, and in which we have included the sequence of member stars to allow a better comparison with Figs. 4 and 6, respectively. We can see that the model and the observed CMDs appear very similar, with a high proportion of stars from the thick disk and the halo, and that the candidate cluster members are located on a separate sequence, away from the field stars. The same effect is present in the $`V`$ vs $`(V-R)`$ CMD, which is not shown. We believe that this comparison gives strong support to the identification of the cluster star candidates as members of a genuine open cluster.
By means of this comparison of the observations with the model predictions, we can see that, up to the limiting $`V`$ magnitude of the stars involved in this paper ($`V=21`$ mag), all the stars redder than $`(V-I)=1.5`$ mag belong to the disk population, without any contribution from the halo (see Fig. 10). This fact has already been pointed out by Reid et al. (1996) in their analysis of two deep and relatively high galactic latitude fields.
## 5 Stellar radial distribution
Given the small number of member candidates of NGC 6994, it is worth computing the surface density of stars in the region studied. Taking as a rough center the position of the four brightest stars, that is, the center of the 1.5 arcmin radius circle in Fig. 2, we calculated the cumulative projected number density within a series of concentric annular rings about that point. It is displayed in Fig. 11, which shows that NGC 6994 presents an increase of the density towards the center. However, since this distribution represents important evidence for the existence of the cluster, it should be analyzed statistically. By means of a random number generator we made a set of 10 uniform distributions of the same number of stars, scattered across the same area, and then applied a Kolmogorov-Smirnov test (Press et al. 1992) to test whether they were statistically different from the observed stellar radial distribution. The statistic in this analysis is the maximum vertical separation between the observed and each random cumulative distribution, and P is the corresponding significance level for the hypothesis that the two samples are drawn from the same parent distribution. The probabilities P obtained, in sorted order, were 0.151, 0.141, and the rest ranged from 0.045 to 0.003; these results show that the observed distribution is significantly different from the uniform distributions and has not arisen just by chance. In this way, the existence of the cluster as a true entity is reinforced, and it should also be noted that it is highly likely that the outer members of the cluster lie beyond the bounds of our CCD frames.
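The Monte Carlo comparison can be reproduced schematically as follows (the observed radii are replaced here by an illustrative, centrally concentrated toy sample; radii uniform in surface density follow $`r=R_{max}\sqrt{u}`$ for uniform $`u`$):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
R_max = 4.8                       # field radius, arcmin

# Illustrative stand-in for the 24 observed projected radii:
# a power-law draw more concentrated than uniform surface density.
r_obs = R_max * rng.power(1.2, size=24)

for i in range(10):
    # Uniform surface density: P(<r) ~ r^2, hence r = R_max*sqrt(u)
    r_uni = R_max * np.sqrt(rng.random(size=r_obs.size))
    stat, p = ks_2samp(r_obs, r_uni)
    print(f"trial {i}: D = {stat:.3f}, P = {p:.3f}")
```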
## 6 Conclusions
The results of our photometry and the comparison with a Galactic model suggest that NGC 6994 is an old and sparse open cluster. We identify only 24 member candidates within the limits of our CCD frames, including the four brightest members that are located close to the center. The best isochrone fits give an age that lies within the range of 2 to 3 Gyr, assuming solar metallicity. The distance of the cluster from the Sun is estimated as 620 pc and the Galactocentric distance as 8.1 kpc; it is situated about 350 pc below the Galactic plane.
Our interpretation of the above results is based on the dynamical evolution of the cluster. Friel (1995) discussed the distribution of old open clusters projected on the Galactic plane and perpendicular to it. All the old open clusters present Galactocentric distances larger than 7.5 kpc, and their distribution with height from the plane is much broader than that of the young ones; that is, the old cluster population can be fitted by a 375 pc scale-height exponential, compared to the 55 pc scale-height of clusters with ages less than the Hyades' (Janes & Phelps 1994). Both facts argue in favour of the cluster's longevity: the giant molecular clouds, whose encounters with open clusters can be devastating (Terlevich 1987), are mainly located in the inner disk.
On the other hand, clusters older than 1 Gyr are expected to present considerable mass segregation. They have had enough time to relax dynamically, so that more massive stars end up centrally concentrated, while low-mass stars have moved to the outer regions and may have escaped from the cluster. This process of evaporation is experienced even by isolated clusters but is more efficient in the presence of an external field due to the host galaxy; this seems to have happened to a high proportion of NGC 6994's low-mass members.
Finally, it is interesting to note that the N-body simulations of Terlevich (1987) showed that, after 300–400 Myr of evolution, some of the stars remained around each open cluster, forming an extended corona outside King's tidal radius, but still bound to the cluster. We find a similar kind of corona in the outskirts of NGC 6994, formed by stars that are likely to escape in the near future.
###### Acknowledgements.
The authors acknowledge use of the CCD and data acquisition system supported under U. S. National Science Foundation grant AST-90-15827 to R. M. Rich. We are greatly indebted to T. von Hippel for making many useful suggestions which improved this paper, and for providing the code for the Galactic model calculations. We are also grateful to H. G. Marraco for his discussions and advice and for reading the manuscript, and to A. Feinstein and D. D. Carpintero for their comments. Thanks are due to S. D. Abal de Rocha, M. C. Fanjul de Correbo and E. Suárez for their technical assistance. This work was supported by grants from La Plata University and from the CONICET.
# Resonances and Chaos in the Collective Oscillations of a Trapped Bose Condensate
| Resonance | $`\lambda _1`$ | $`\lambda _2`$ | Resonance | $`\lambda `$ |
| --- | --- | --- | --- | --- |
| $`(1,2,0)`$ | $`0.683`$ | $`1.952`$ | $`(0,1,1)`$ | $`1`$ |
| $`(1,3,0)`$ | $`0.438`$ | $`3.081`$ | $`(0,1,2)`$ | $`1.512`$ |
| $`(1,4,0)`$ | $`0.320`$ | $`4.159`$ | $`(0,1,3)`$ | $`2.393`$ |
| $`(1,5,0)`$ | $`0.255`$ | $`5.226`$ | $`(0,2,1)`$ | $`0.454`$ |
| $`(2,5,0)`$ | $`0.527`$ | $`2.530`$ | $`(0,2,3)`$ | $`0.802`$ |
| $`(3,5,0)`$ | $`0.888`$ | $`1.501`$ | $`(0,3,1)`$ | $`0.300`$ |
Table 1. Low-order resonances of the normal modes of the condensate and the corresponding values of the anisotropy parameter $`\lambda `$ of the external trap.
## Figure Captions
Figure 1. Poincaré sections of the $`m_z=0`$ modes. From left to right: $`\chi =1/5`$, $`2/5`$ and $`3/5`$, respectively. $`\chi `$ is the relative increase of energy with respect to the ground-state. Trap anisotropy $`\lambda =1.501`$.
Figure 2. The same as in Figure 1. Trap anisotropy $`\lambda =\sqrt{8}`$ (JILA).
Figure 3. Configurations of the ($`\lambda `$,$`\chi `$) plane for which the $`m_z=0`$ oscillations are regular. $`\lambda `$ is the trap anisotropy and $`\chi `$ is the relative increase of energy with respect to the ground-state. Full triangles: MIT trap. Full circles: $`(3,5,0)`$ resonance. Full squares: JILA trap. Open circles: all the other configurations. |
Figure 1. Solid lines: the fluid-solid and fluid-fluid phase lines from the simulations of Dijkstra et al. for $`R=1/30`$. Dotted lines and dash-dotted lines: the coordination number calculated as in the text using the AO potential and the potential from the simulations, respectively. Long-dashed line: the $`n_b=2.4`$ line proposed by Buhot. Inset: $`g_{ll}(r)`$ for $`R=0.2`$, $`\eta _l=0.25`$, $`\eta _s^r=0.25`$ with the AO potential. Solid line: from $`g_{ll}(r)=\mathrm{exp}(-\beta V_{dep}(r))`$; dashed line: from the PY integral equation.
โFluid-solid phase-separation in hard-sphere mixtures is unrelated to bond-percolation.โ
In a recent letter, A. Buhot proposes that entropy driven phase-separation in hard-core binary mixtures is directly related to a bond-percolation transition. In particular, Buhot suggests that a phase-instability occurs when the coordination number $`n_b`$, defined as:
$$n_b=\rho _l\int _{\sigma _l\le r\le \sigma _l(1+R)}g_{ll}(r)\mathrm{d}\mathbf{r},$$
(1)
is equal to $`zp_c`$, where z is the coordination number of a particular crystal lattice, and $`p_c`$ is its bond-percolation threshold. Here $`\rho _l`$ is the number density of the larger particles, $`g_{ll}(r)`$ is the radial distribution function of the larger particles, and $`R=\sigma _s/\sigma _l<1`$ is the ratio of the diameters $`\sigma _i`$. However, for binary hard-sphere mixtures, calculations based on an accurate approximation to $`g_{ll}(r)`$ demonstrate that $`n_b`$ varies widely along the phase-boundaries calculated directly by simulations, implying that bond-percolation is unrelated to the phase-separation in these systems.
For highly asymmetric binary hard-sphere systems, Dijkstra et al. conclusively demonstrated that an effective one-component description based on a depletion potential picture quantitatively describes the fluid-solid transition. This in turn implies that the one-component description should give a fair representation of the radial distribution function $`g_{ll}(r)`$. Recent simulations of the Asakura-Oosawa (AO) depletion potential show that the Percus-Yevick (PY) approximation quantitatively describes the pair-correlations along the fluid-solid transition line. In the inset of Fig. 1, the simple form $`g_{ll}(r)=\mathrm{exp}\left(-\beta V_{dep}(r)\right)`$, where $`V_{dep}(r)`$ is the effective depletion potential, is compared to the more accurate PY integral equation results. Typically, for packing fractions $`\eta _l=\pi \rho _l\sigma _l^3/6\lesssim 0.25`$ along the phase boundaries, this form gives near quantitative agreement for $`r\le \sigma _l(1+R)`$, which is not surprising since for small $`\eta _l`$ the potential well is typically at least $`2.5k_BT`$ deep along the fluid-solid transition line while the hard-core induced correlations are small, so that the exponential form dominates. In fact, Buhot's treatment of binary hard-spheres reduces exactly to this simple form, but with an AO depletion potential which is valid only when $`\eta _l\to 0`$. If one replaces $`\eta _s`$ with the $`\eta _s^r`$ of small spheres in a reservoir kept at constant chemical potential, the correct form of the AO potential is recovered. In Fig. 1, the metastable fluid-fluid and the stable fluid-solid phase lines taken from simulations are compared to lines of constant coordination number, which are calculated with eqn. (1) and $`g_{ll}(r)=\mathrm{exp}\left(-\beta V_{dep}(r)\right)`$, together with the depletion potential used for the simulations as well as the simpler AO potential. The difference between the two is very small, implying that the coordination number is not very sensitive to the exact form of the depletion potential. Also included is the proposed bond-percolation induced fluid-solid line at $`n_b=2.4`$ derived from the approximation to $`g_{ll}(r)`$ used by Buhot.
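To make the construction of the constant-$`n_b`$ lines explicit, a minimal numerical sketch (assuming the standard AO form for $`\beta V_{dep}`$; the state point in the example is that of the inset) is:

```python
import numpy as np
from scipy.integrate import quad

def beta_V_AO(r, R, eta_s_r, sigma_l=1.0):
    """Standard Asakura-Oosawa depletion potential in units of k_B T,
    nonzero for sigma_l <= r <= sigma_l*(1+R)."""
    x, q = r / sigma_l, 1.0 + R
    if x >= q:
        return 0.0
    return -eta_s_r * q**3 / R**3 * (1 - 3*x/(2*q) + x**3/(2*q**3))

def n_b(eta_l, R, eta_s_r, sigma_l=1.0):
    """Coordination number of eq. (1) with g_ll(r) = exp(-beta V_dep)."""
    rho_l = 6.0 * eta_l / (np.pi * sigma_l**3)
    integrand = lambda r: 4*np.pi*r**2 * np.exp(-beta_V_AO(r, R, eta_s_r))
    val, _ = quad(integrand, sigma_l, sigma_l * (1.0 + R))
    return rho_l * val

print(n_b(eta_l=0.25, R=0.2, eta_s_r=0.25))
```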
Clearly: (a) As expected, the approximation used by Buhot for $`g_{ll}(r)`$ breaks down as $`\eta _l`$ increases. (b) The lines of constant coordination number are not related to either the fluid-fluid or the fluid-solid phase lines, implying that there is no direct relation between bond-percolation and phase-separation. The same results were found for other size ratios, and it is hard to see how more accurate approximations for $`g_{ll}(r)`$ could change this picture.
The breakdown of the bond-percolation picture for this archetypical hard-core mixture model implies a similar breakdown for other, more complex, mixtures. The good agreement found at a few state points for hard-square systems probably results from either their 2-d nature, the imposed parallel symmetry, or the rather unusual purported 2nd order fluid-solid transition. It does not imply that bond-percolation is generally relevant for fluid-solid phase-separation in binary hard-core mixtures.
A.A. Louis
Department of Chemistry, Lensfield Rd,
Cambridge CB2 1EW, UK
PACS numbers: 64.75.+g, 61.20.Gy, 64.60.Ak |
# The Magnetic Fields of the Universe and Their Origin
## 1. Introduction
The problem of understanding the origin of the large scale galactic magnetic fields has been with us for over forty years. There have been many papers and reviews on the galactic and extragalactic magnetic fields (see Moffatt 1978; Parker 1979; Krause and Radler 1980; Ruzmaikin et al. 1988; Wielebinski & Krause 1993; Beck et al. 1996; Zweibel & Heiles 1997; Kulsrud 1999), and observational reviews (see Miley 1980; Bridle & Perley 1984; Krönberg 1994), including the observations themselves (e.g. Perley et al. 1984; Taylor et al. 1990; Taylor & Perley 1993; Eilek et al. 1984).
Recent rapid progress in observational work on galaxy clusters has revealed a surprising result. The intracluster medium (ICM) appears to be definitely magnetized and, in many cases, perhaps highly magnetized, as convincingly argued by Eilek et al. (2000). Figure 1 presents one such example in the Hydra A cluster, as shown by the rotation measure ($`R_m`$) map made by Taylor & Perley (1993). We will show in this article that the magnetic energy and flux implied by the extensive $`R_m`$ maps of a dozen or so galaxy clusters are so exceedingly large that conventional galactic dynamo models may prove to be inadequate. We argue that a new source of energy and a different form of the galactic dynamo are required.
As the rotation measure observations of galaxy clusters are relatively new and some of them are as yet unpublished by the observing teams, we will first explain some of the observational results in detail and then discuss their physical implications at length. In the second half of the article, we will propose a new paradigm related to AGN accretion disks and describe some of our recent efforts in understanding a sequence of physical processes revolving around the origin of cluster magnetic fields.
## 2. Galactic and Extragalactic Magnetic Fields
Faraday rotation measures, $`R_m`$, have been shown to be consistent with six other physical measures of magnetic fields in our own and nearby galaxies (starlight polarization, interstellar Zeeman splitting, synchrotron emission, synchrotron polarization, and fields inferred from x-ray emission and from cosmic-ray isotropy and pressure) (see Krönberg 1994 for a review), thus establishing $`R_m`$ as a reliable measure of galactic and extragalactic magnetic fields. Because of the existence of many self-illuminating as well as background sources, usually AGN, and the increasing sensitivity of radio detection, $`R_m`$ has become the recognized measure of extragalactic magnetic fields (Krönberg 1994; Taylor et al. 1994; Ge & Owen 1994; Krause & Beck 1998).
### 2.1. Magnetic Flux and Energy in Galaxy Clusters
Recently, high quality $`R_m`$ maps of self-illuminating sources of galaxy clusters where the distances are known have become available (for example, Taylor & Perley 1993; Eilek et al. 2000). An important quantity that has received less discussion in these papers is the magnitude of the magnetic flux and energy.
Figure 1 shows the $`R_m`$ map of the region illuminated by Hydra A in the cluster (courtesy of Taylor & Perley 1993). The largest single region of highest field in this map has approximately the following properties: a size $`L\simeq 50`$ kpc and a field $`B\simeq 33`$ $`\mu `$G, derived on the basis that the field is patchy and is tangled on a 4 kpc scale. This leads to startling estimates of the flux, $`F\simeq BL^2\simeq 8\times 10^4`$ $`\mu `$G kpc<sup>2</sup>, and energy, $`W=(B^2/8\pi )L^3\simeq 4\times 10^{59}`$ ergs, assuming that the tangled field is confined to the 50 kpc region. If this is extended to the whole cluster, which is $`500`$ kpc across, then the implied flux and energy are correspondingly larger by factors of $`100`$ and $`10^3`$, respectively. A similar conclusion is reached when a larger sample of $`R_m`$ maps of galaxy clusters is analyzed using the data presented in Eilek et al. (2000). In Table 1, we have reproduced part of the table given in Eilek et al. (2000) and added two columns where the approximate fluxes and energies are calculated assuming that the fields are partially tangled or in loops.
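As a rough numerical check of these estimates (an illustrative sketch, not part of the observational analysis), the conversion from the quoted $`B`$ and $`L`$ to flux and energy in cgs units can be written out explicitly:

```python
import numpy as np

kpc = 3.086e21                   # cm
B = 33e-6                        # field strength, G (33 microgauss)
L = 50 * kpc                     # size of the high-field region, cm

F = B * L**2                     # flux estimate F ~ B L^2, G cm^2
W = B**2 / (8 * np.pi) * L**3    # energy W = (B^2/8pi) L^3, erg

print(f"F ~ {F:.1e} G cm^2 = {F / (1e-6 * kpc**2):.1e} uG kpc^2")
print(f"W ~ {W:.1e} erg")        # of order 10^59 erg
```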
Furthermore, the estimated values of the fluxes and energies are most likely lower limits to the actual magnetic fields existing in the galaxy clusters. Faraday rotation depends upon the component of the field strength along the line of sight, $`B_{\parallel }`$, the distance along the line of sight, $`Z_o`$, and the electron density, $`n_e`$. Estimates of $`n_e`$ can be made from the x-ray emission measurements of the clusters with a typical accuracy of $`20\%`$; $`n_e`$ varies by factors of 2 to 4 over the region of the source, but otherwise is nearly uniform, and clumping is small (Taylor et al. 1994). If the field is folded in any fashion so that regions of oppositely directed field lie along the line of sight, then the observed $`R_m`$ will be smaller than if the same field lines were straightened out into one direction. In other words, $`R_m`$ is a minimum measure of $`B_{\parallel }`$.
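For orientation, the standard thermal-plasma relation between $`R_m`$, electron density, and field can be inverted for this minimum line-of-sight field; the numbers in the example below are merely illustrative Hydra-A-like values, not measured inputs:

```python
# Standard uniform-slab relation (assumed here):
#   RM [rad m^-2] = 812 * n_e [cm^-3] * B_par [uG] * L [kpc]
# Field reversals along the path only lower RM, so the inferred
# B_par is a minimum field.

def B_parallel_min(RM, n_e, L_kpc):
    """Minimum line-of-sight field in microgauss."""
    return RM / (812.0 * n_e * L_kpc)

# Illustrative values: RM ~ 4000 rad/m^2, n_e ~ 0.01 cm^-3,
# effective path ~ 15 kpc (a few tangling lengths).
print(B_parallel_min(4000.0, 0.01, 15.0), "uG")   # ~ 30 uG
```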
To put the above numbers in perspective, for a typical galaxy like ours, e.g., with a 1 kpc thickness, a 3 kpc Holmberg radius, and a field of $`3\mu `$ G, the magnetic flux and energy are roughly $`10^{38}`$ G cm<sup>2</sup> and $`4\times 10^{52}`$ ergs, respectively. One observes that the fluxes and energies given in Table 1 range from close to the Hydra A limit to no more than $`10^2`$ times those of a typical galaxy.
The implied fluxes and energies, roughly $`10^3`$ and $`10^6`$ times those within standard galaxies respectively, are so large that their origin requires a new source of energy and a different form of the dynamo than previous galactic models. These minimum energies are sometimes even larger than the baryonic binding energy of galaxies ($`2\times 10^{58}`$ ergs). The extremely large fluxes also seem out of reach of amplification by ordinary galaxy rotation in a Hubble time.
Next, we discuss the difficulty with using turbulence to create these nearly uniform, highly correlated and coherent regions of $`R_m`$ as seen in Figure 1. We then discuss the still greater difficulty of creating the total magnetic energy of the cluster based upon a turbulence dynamo model.
### 2.2. Turbulent versus Coherent Fields
It has been suggested by a number of people (Eilek 1999; DeYoung 1980; Ruzmaikin et al. 1989; Goldman & Rephaeli 1991; Jaffe 1980) that the entire cluster is uniformly turbulent due to Rayleigh-Taylor instabilities during matter infall into the cluster, and that this turbulence drives the cluster dynamo creating the fields. The problems with this interpretation are the total magnetic energy, the magnitude of the turbulence, the strength of the fields, the apparent correlation of $`R_m`$ maps with single AGN structures, and finally the limited number of rotations of the cluster in a Hubble age.
Because of the small rotation rate of the typical cluster, $`\sim 100`$ km $`s^{-1}`$, the available rotation energy is small, $`\sim 10^{-2}`$ of the cluster binding energy (the cluster thermal velocity being $`\sim 10^3`$ km $`s^{-1}`$). So applying the turbulence model to Hydra A implies a magnetic energy $`\sim 10^3`$ times greater than the rotational energy. Therefore the dynamo must be of the $`\alpha ^2`$ type, where fields are generated on the small scale; yet, as Taylor et al. (1994) point out, the fields of Hydra A and A1795 reverse on the two sides of the core, requiring coherence on scales of $`\sim 100`$ kpc. Since this reversal is correlated with the structure of the source, and since the energy generated at the small scale is small compared to the turbulence input, which in turn should be small compared to the binding energy (DeYoung 1992), we believe that all these factors point to random, localized sources of magnetic energy of size $`>10^{60}`$ ergs. This is probably too demanding for turbulence.
Furthermore, it would be difficult to produce the large scale coherent $`R_m`$ regions that have been observed in Hydra A (Figure 1, northern region) and several other galaxy clusters (Eilek et al. 2000). This is because in a turbulent plasma the emission, the $`R_m`$, and the degree of polarization should all be statistically symmetric. Despite the unlikelihood of all these factors conspiring to create both a pattern and a nearly uniform $`R_m`$, one observes in many $`R_m`$ maps of AGN, mostly in clusters, a distinct match of the $`R_m`$ pattern with the jet-like pattern of emission. In particular, the sign of the average $`R_m`$ in several cases reverses across a symmetry plane through the core of the AGN (Taylor & Perley 1993). The size of the regions of uniform $`R_m`$ correlates strikingly with the size of the jet as a function of distance from the nucleus. We interpret this correlation as due to the source of the field being the AGN jet, as opposed to a turbulent $`\alpha \mathrm{\Omega }`$ dynamo in the cluster as a whole.
### 2.3. Average Field Structure
Using serendipitous polarized background sources, and therefore random lines of sight through random clusters, Clark, Krönberg & Böhringer (1999) have made $`\sim 80`$ $`R_m`$ measurements. Their observations have produced a boundary of the typical cluster in $`R_m`$ such that the average field is $`\sim 3\mu `$ G out to a radius of $`R_{cluster}\sim 300`$ kpc. The magnetic flux and energy, $`\sim 10^4\mu `$ G kpc<sup>2</sup> and $`\sim 10^{60}`$ ergs, are then similar to those of the largest structure already discussed in Hydra A. If each galaxy of a typical cluster with $`\sim 50`$ large galaxies contributes a high field region during its AGN phase, then the probability of a random line of sight intersecting such a region, of area $`\sim 1\%`$ of the cluster, is roughly $`5\times 10^{-3}`$, so that in 100 lines of sight the probability of intersecting a Hydra-like region of an AGN in a cluster is $`\sim 50\%`$. This is not inconsistent with the variability they observed. Finally we note that the large degree of polarization observed in these sources ($`\sim 50\%`$) indicates that the rotation source and the emission source cannot be in the same location (Burn 1966; Taylor 1991); otherwise polarized emission from various depths in the source would undergo different degrees of rotation and hence emerge depolarized. Therefore in any model the Faraday screen and the emission source must be related, and even congruent, in order that the screen and hence $`R_m`$ be correlated with the core of the AGN.
### 2.4. Black Hole Accretion Disk as the Engine
Purely based on the energetics, the accretion disk around supermassive black holes in AGNs offers an attractive site for the production of magnetic fields. The accessible binding energy of the black hole is $`\sim 0.1\times 10^8M_{\odot }c^210^{61}`$ ergs, and the winding number of the disk forming the BH of nearly every galaxy is $`N_w\sim 5\times 10^{10}`$ at $`10R_g`$, where $`R_g`$ is the BH horizon radius ($`\sim 1`$ AU). Using the canonical numbers thought to apply to AGN disks, the BH dynamo flux can be $`F_{\mathrm{BHdyn}}B_{\mathrm{BHdyn}}\pi R_{\mathrm{BHdyn}}^2N_w10^{43}`$ G cm<sup>2</sup>, where we have used $`B_{\mathrm{BHdyn}}10^4`$ G at $`L_{AGN}10^{46}`$ ergs $`s^{-1}`$ and $`R_{\mathrm{BHdyn}}10R_g10^{14}`$ cm. Both the flux and energy from this simple analysis are $`\sim 10`$ times the maximum observed values. No other source of energy is likely to be sufficient; alternatives fall short by many orders of magnitude. Therefore it is much more reasonable to assume that every AGN, both within and external to clusters, produces the magnetic energy and flux that we observe in this extreme case from the binding energy released in the accretion disk forming the central BH. This implies that every galaxy contains a BH where 90-95% of the accessible binding energy is transformed into magnetic energy during its AGN phase by an accretion disk dynamo. On average this flux and energy are distributed throughout the universe as force-free fields, and only a small fraction, 5-10%, of the magnetic energy is dissipated in the form of the AGN spectra, thus explaining the problem of the missing AGN luminosity (Richstone 1998; Krolik 1999). In this picture a larger fraction of the magnetic energy is dissipated where the brightest AGNs are seen, in galaxy clusters, because only in clusters is a sufficient gas density retained by the gravity of the cluster to confine the field, increasing the fraction of the magnetic energy that is dissipated. For most galaxies external to dense clusters a small fraction of this magnetic energy is dissipated as the AGN radiation, a small fraction remains in the galaxy, and the bulk of the magnetic energy and flux is distributed in the walls and voids of the universe.
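The winding-number and flux estimates can be reproduced from the quoted numbers; in the sketch below the accretion lifetime of $`10^8`$ yr is our assumption for the AGN phase, while the other values are the canonical ones in the text:

```python
import numpy as np

G, M = 6.67e-8, 1e8 * 2e33         # cgs; a 10^8 solar-mass BH
R = 1e14                           # dynamo radius ~ 10 R_g, cm
B = 1e4                            # dynamo field, G

P = 2 * np.pi * np.sqrt(R**3 / (G * M))   # Keplerian period at R, s
N_w = 1e8 * 3.15e7 / P                    # turns in an assumed 10^8 yr

F = B * np.pi * R**2 * N_w                # accumulated flux, G cm^2
print(f"N_w ~ {N_w:.1e}, F ~ {F:.1e} G cm^2")   # ~5e10 and ~1e43
```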
## 3. Astrophysical Requirements and Progress with a Model
The sequence of phenomena that can explain this astonishing extragalactic magnetic flux and energy must start with an accretion disk forming a massive central galactic BH. This in turn presumes an answer to an equally enigmatic question, namely the formation of these massive galactic BHs themselves (Begelman et al. 1989; Rees 1999). By focusing on the transport of angular momentum we believe that the flat rotation curve mass distribution can be explained as a plausible result of any non-linear collapse of an initial gaseous baryonic density fluctuation by hierarchical tidal torquing (Newman & Wasserman 1999). The BH forms from this mass distribution when the Rossby vortex torque mechanism supersedes tidal torquing and an accretion disk forms. All this mass then collapses to a BH. The flat rotation curve, $`M\propto R`$, results in $`\mathrm{\Sigma }\propto R^{-1}`$. When the surface density reaches $`\mathrm{\Sigma }100`$ to $`1000`$ g $`cm^{-2}`$, heat is confined for several revolutions, and the Rossby vortex instability sets in at $`M_{disk}10^7`$ to $`10^8M_{\odot }`$. Finally the dynamo-produced fields supersede the previous torque mechanisms.
### 3.1. The Rossby Vortex Torque Mechanism
We have predicted and demonstrated analytically and numerically how a new instability in Keplerian flow, the Rossby vortex instability, can grow (Lovelace et al. 1999; Li et al. 2000a; Li et al. 2000b). The production of vortices is shown in Figure 2. This instability produces torque and thus transports angular momentum within an accretion disk by purely hydrodynamic means via the interaction of large, two-dimensional, co-rotating Rossby vortices. The enhanced transport of angular momentum by co-rotating vortices is recognized in rotational atmospheric flows (Staley & Gall 1979) and in laboratory measurements of the Ranque-Hilsch tube (Hilsch 1946; Fröhlingsdorf & Unger 1999; Colgate & Buchler 1999).
### 3.2. The Dynamo, Star-Disk Collisions, and Helicity
A coherent dynamo can form in a Keplerian accretion disk because of the large azimuthal velocity shear, provided that there exists a robust source of non-axisymmetric helicity. Classically, turbulence has been invoked to explain this helicity within mean field dynamo theory, but we know of no way to create this degree of turbulence, with vertical motions, hydrodynamically in an accretion disk, because purely hydrodynamic turbulence is damped in an accretion disk (Balbus & Hawley 1998). The magnetic instability of Balbus & Hawley will lead to turbulence, but its magnitude is orders of magnitude too small compared to the Keplerian stress. Instead we have identified a new, robust source of helicity driven by star-disk collisions by a small mass fraction, $`10^{-3}`$ to $`10^{-4}`$, of pre-galaxy-formation stars. The Keplerian shear and a star-disk collision with the twist producing helicity are shown in Fig. 3. We have demonstrated with laboratory flow visualization experiments (Beckley & Colgate 1998; Beckley et al. 2000) how plumes, driven in a rotating frame, counter-rotate relative to the frame and thus produce a robust and coherent helicity, in which flux is always added in the same direction and the driving force is large compared to the Keplerian stress in the disk.
We have simulated the positive, exponential gain of both the quadrupole and dipole poloidal field of such a dynamo with a vector potential code in 3-D, cylindrical coordinates, where the velocity field simulates both the Keplerian rotation and star collision-produced plumes. We have observed a growth rate of $`10\%`$ per revolution, two plumes per two revolutions, $`R_{plume}=1/3R_{disk}`$, and with a magnetic Reynolds number, $`R_{ey,\mathrm{\Omega },B}=100`$ (Pariev, Colgate & Finn 2000).
### 3.3. The Saturation of the Dynamo and the Formation of the Helix
With positive gain and a large winding number, the dynamo will saturate regardless of how small the seed field is. Since the helicity does not depend on turbulence, it is not subject to turbulent $`\alpha `$-quenching at the small scale (Vainshtein & Cattaneo 1992; Vainshtein & Rosner 1991). Furthermore, since the stars maintain virial velocity, their motion is supersonic relative to the disk and the resulting shock stress is large. At the back-reaction limit the field grows until the torque of the field affects the Keplerian motion, and the accessible BH binding energy is converted into magnetic energy. The progressive loss of this flux takes the form of a force-free, helical Poynting flux, which we identify with the collimated AGN jets. We have investigated the field topology of these twisted helical flux surfaces by integrating the Grad-Shafranov equations for a force-free axisymmetric field with a Keplerian distribution of winding number (Li et al. 2000c), as shown in Fig. 4. Since the field decreases as $`B_{helix}\propto 1/R`$, the pressure at large radius, as the helix extends to Mpc scales, becomes of the order of the IGM pressure, and the outer boundary of the helix is self-collimating (Lynden-Bell 1996). The energy carried by this helix, at a mean radius near the BH, $`R_{dyn}10R_{BH}`$, is the accessible energy of accretion, $`\dot{M}_{BH}c^2/10=(B_{helix}^2/8\pi )(100\pi R_{BH}^2)c`$, or $`B_{helix}10^4`$ G, $`I=5R_{helix}B_{helix}=5\times 10^{18}`$ amperes, $`V_{potential}=10^{20}`$ volts, and $`I\times V_{potential}=10^{39}`$ watts $`=10^{46}`$ ergs $`s^{-1}`$. General relativity inside the innermost stable orbit will add additional energy (Blandford & Znajek 1977; Livio et al. 1999).
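The current and power quoted above follow from simple circuit arithmetic; the sketch below uses the standard line-current relation $`B\mathrm{(G)}=0.2I\mathrm{(A)}/R\mathrm{(cm)}`$ together with the quoted potential:

```python
B = 1e4          # helix field near the dynamo radius, G
R = 1e14         # mean helix radius, cm
V = 1e20         # quoted potential, volts

I = 5 * B * R    # amperes, inverting B[G] = 0.2 I[A] / R[cm]
P = I * V        # watts

print(f"I ~ {I:.0e} A, P ~ {P:.0e} W = {P * 1e7:.0e} erg/s")
```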
### 3.4. $`J_{\parallel }`$ Reconnection and Acceleration
The distribution of this flux in the universe occurs by partial tearing-mode reconnection producing the minimum energy Taylor state (Taylor 1986). The total flux is conserved, but a fraction of the energy is dissipated in the tearing-mode, $`J_{\parallel }`$ reconnection. The resulting $`E_{\parallel }`$ acceleration of the current carriers produces the emission that we associate with the AGN.
#### Acknowledgments.
We are indebted to Richard Lovelace, Howard Beckley, Vladimir Pariev, John Finn, Mike Warren, Dave Westpfahl, Van Romero, Ragnar Ferrel, and Warner Miller for direct contributions to this project and to very many more who have contributed in discussions, criticisms and encouragements. HL acknowledges the support of an Oppenheimer Fellowship. This research is supported by the DOE, under contract W-7405-ENG-36.
## References
Balbus, S.A., & Hawley, J.F. 1998, Rev. Mod. Phys., 70, 1
Beck, R. et al. 1996, ARAA, 34, 155
Beckley, H.F. & Colgate, S.A. 1998, APS, DFD., Abst. 5253
Beckley, H.F. et al. 2000, Phys. Fluids, to be submitted
Begelman, M.C. et al. 1989, Theory of Accretion Disks, ed. F Meyer, W Duschl, J. Frank, and E Meyer-Hofmeister, NATO series C, Kluwer Pub., 290, p373, ibid p387
Blandford, R.D. & Znajek, R.L. 1977, MNRAS, 179, 433
Bridle, A.H. & Perley, R.A. 1984, ARAA, 22, 319
Burn, B.F. 1966, MNRAS, 133, 67
Clark, T., Krönberg, P.P. & Böhringer, H. 1999, preprint
Colgate, S.A. & Buchler, R.J. 1999, 14th Florida Workshop in Nonlinear Astronomy, ed. R. Buchler and H. Kantrup, Gainsville, FL, Proc. of NY Acad. Sci.
DeYoung, D.S. 1980, ApJ, 241, 81
DeYoung, D.S. 1992, ApJ, 386, 464
Eilek, J.A. et al. 1984, ApJ, 278, 37
Eilek, J.A. 1999, Magnetic fields in Clusters: Theory vs. Observation, Ringberg workshop, Germany, MPE-Report
Eilek, J.A. et al. 2000, ApJ, to be submitted
Fröhlingsdorf, W. & Unger, H. 1999, Int. Jour. of Heat and Mass Transfer, 42, 415
Ge, J-P. & Owen, F.N. 1994, AJ, 108, 1523
Goldman, I. & Rephaeli, Y. 1991, ApJ, 380, 344
Hilsch, R. 1946, Die Expansion von Gasen im Zentrifugalfeld als Kälteprozess, Zeitschrift für Naturforschung, 1, 208; and 1947, Rev. Sci. Instr. 18, 108
Jaffe, W. 1980, ApJ, 241, 925
Krause, F. & Beck, R. 1998, A&A, 335, 789
Krause, F. & Radler, K-H. 1980, Mean Field Electrodynamics and Dynamo Theory, Berlin: Akademie-Verlag, Oxford, Pergamon
Krolik, J.H. 1999, Active Galactic Nuclei, Princeton: Princeton Univ. Press
Krönberg, P.P. 1994, Rep. Prog. Phys., 57, 325
Kulsrud, R.M. 1999, ARAA, 37, 37
Li, H. et al. 2000a, ApJ, in press
Li, H. et al. 2000b, ApJ, to be submitted
Li, H. et al. 2000c, ApJL, to be submitted
Livio, M. et al. 1999, ApJ, 512, 100
Lovelace, R.V.E. et al. 1999, ApJ, 513, 805
Lynden-Bell, D. 1996, MNRAS, 279, 389
Miley, G.K. 1980, ARAA, 18, 165
Moffatt, H.K. 1978, Magnetic Field Generation in Conducting Fluids, Cambridge Univ. Press
Newman, W.I. & Wasserman, I. 1999, ApJ, 354, 411
Pariev, V., Colgate, S.A. & Finn, J.M. 2000, ApJ, to be submitted
Parker, E.N. 1979, Cosmical Magnetic Fields: Their origin and Their Activity, Oxford: Claredon
Perley, R.A. et al. 1984, ApJS, 54, 291
Rees, M.J. 1999, astro-ph/9912346
Richstone, D. 1998, Nature, 395, 14
Ruzmaikin, A.A. et al. 1989, MNRAS, 241, 1
Ruzmaikin, A.A. et al. 1988, Magnetic Fields in Galaxies, Astrophys. and Space Sci. Lib., Kluwer, Dordrecht
Staley, D. O. & Gall, R. L. 1979, J. Atmos. Sci., 36, #6, 973
Taylor, G.B. et al. 1990, ApJ, 360, 41
Taylor, G.B. 1991, Ph.D. thesis, UCLA
Taylor, G.B. & Perley, R.A. 1993, ApJ, 416, 554
Taylor, G.B. et al. 1994, AJ, 107, 1942
Taylor, J.B. 1986, Rev. of Modern Phys., 58, 741
Vainshtein, S.I. & Rosner, R. 1991, ApJ, 376, 199
Vainshtein, L.I. & Cattaneo, F. 1992, ApJ, 393, 165
Wielebinski, R. & Krause, F. 1993, Astron Astro. Rev., 4, 449
Zweibel, E.G. & Heiles, C. 1997, Nature, 385, 131 |
# The strain energy and Young's modulus of single-wall carbon nanotubes calculated from the electronic energy-band theory
## Abstract
The strain energies of straight and bent single-walled carbon nanotubes (SWNTs) are calculated by taking account of the total energy of all the occupied band electrons. The obtained results are in good agreement with previous theoretical studies and experimental observations. The Young's modulus and the effective wall thickness of SWNTs are obtained from the bending strain energies of SWNTs with various cross-sectional radii. The repulsion potential between ions contributes the main part of the Young's modulus of SWNTs. The wall thickness of a SWNT comes entirely from the overlap of electronic orbitals and is approximately the extent of the $`\pi `$ orbital of the carbon atom. Both the Young's modulus and the wall thickness are independent of the radius and the helicity of the SWNT, and insensitive to the fitting parameters. The results show that continuum elasticity theory can serve well to describe the mechanical properties of SWNTs.
Since their discovery in 1991, carbon nanotubes (CNTs) have attracted considerable interest. There have been many theoretical and experimental studies of the electronic structure of CNTs, and many exciting and novel properties have been discovered. For example, it was found that the insulating, semi-metallic, or metallic behavior depends upon the radius and the helicity of the CNTs. As for thermal and mechanical properties, the tubes are significantly stiffer than any material presently known. To understand these many intriguing properties, many groups have calculated the strain energy and the Young's modulus of single-wall carbon nanotubes (SWNTs). Many of these calculations depend on the choice of an empirical potential between the carbon atoms, such as the Tersoff-Brenner potential. Lenosky et al. employed an empirical model with three parameters reducible to a continuum model with two elastic moduli. They showed that the continuum elasticity model serves well to describe the deformation of multi-wall carbon nanotubes (MWNTs). Recent theoretical studies of the Young's moduli of SWNTs show some discrepancies arising from the adoption of different empirical potentials and different relations in continuum elasticity theory (CET), especially different values of the effective wall thickness of the SWNT. How to calculate the Young's modulus of a SWNT is still an open question.
Here we present a simple method for computing the strain energy of straight SWNTs directly from the electronic band structure without introducing any empirical potential. The method is also extended to calculate the strain energy of bent tubes. It is found that the wall thickness of SWNTs can be calculated simply from the band electrons, and the Young's modulus by consideration of both the repulsion energy between ions and the bond-length dependence of the electronic energy. Our results show that CET describes the bending of SWNTs well and that both the Young's modulus and the effective wall thickness are independent of the radius and the helicity of the tubes, and insensitive to the fitting parameters. We obtain a Young's modulus of about $`5`$ TPa, roughly 5 times larger than the value for MWNTs or bulk graphite samples, and an effective wall thickness of about $`0.7`$ $`\mathrm{\AA }`$, roughly the size of a carbon atom.
Generally, the total energy of the carbon system is given by the sum:
$`E_{total}=E_{el}+E_{rep},`$ (1)
where $`E_{el}`$ is the sum of the energy of band electrons of the occupied states and $`E_{rep}`$ is given by a repulsive pair potential depending only on the distance between two carbon atoms. They are given by
$$E_{el}=\sum _{occ}E_k,$$
(2)
and
$$E_{rep}=\sum _i\sum _{j>i}\varphi (r_{ij}),$$
(3)
respectively. Since $`\varphi (r)`$ is a short-range potential, only interactions between neighboring atoms need to be considered. On account of the relaxation effect, the bond length of a SWNT is slightly larger than that of graphite ($`r_0=1.42`$ $`\mathrm{\AA }`$). However, even in $`C_{60}`$, for which the relaxation effect is significant on account of its small radius, calculations show that the energy contribution of the bond relaxation can still be safely ignored. The total energy can now be rewritten as:
$$E_{total}=\frac{1}{2}\sum _{i,j}\kappa _0(\delta r_{ij})^2+E_{ang},$$
(4)
where $`\kappa _0`$ is an effective force constant, and the first term on the right-hand side of $`E_{total}`$ is the sum of the repulsion energy between ions and the electronic-energy contribution of the bond-length change, with $`\delta r_{ij}`$ the change of the distance between the $`i`$th and $`j`$th atoms in the SWNT from that in graphene. The second term is the electronic-energy contribution of the angular change of the bonds when rolling graphene into a SWNT. The positions of the atoms of straight SWNTs lie on the cylindrical surface of the tube when the relaxation effect of the bonds is neglected. $`\delta r_{ij}`$ is proportional to $`\rho ^{-2}`$, where $`\rho `$ is the cross-sectional radius of the SWNT, so the first term of Eq. (4) can be ignored, since it is of order $`\rho ^{-4}`$. Therefore, the strain energy of straight SWNTs comes from the curvature-induced electronic-energy change and can be obtained by taking account of the electronic energy of all the occupied bands.
In order to calculate the electronic energy bands of SWNTs, we use a simple nearest-neighbor tight-binding (TB) model. This model contains nine TB parameters of graphite: four hopping integrals, $`V_{ss\sigma }=-6.769`$, $`V_{sp\sigma }=-5.580`$, $`V_{pp\sigma }=-5.037`$, $`V_{pp\pi }=-3.033`$, in units of $`\mathrm{eV}`$; four overlap integrals, $`S_{ss\sigma }=0.212`$, $`S_{sp\sigma }=0.102`$, $`S_{pp\sigma }=0.146`$, $`S_{pp\pi }=0.129`$; and an energy difference between the $`2s`$ and $`2p`$ orbitals of the carbon atoms, $`\mathrm{\Delta }=(\epsilon _{2s}-\epsilon _{2p})=-8.868`$ $`\mathrm{eV}`$. The model has been widely used for the calculation of the electronic properties of both graphene and SWNTs. In general, these TB parameters depend upon the bond length as
$$V_{\lambda \lambda ^{\prime }\mu }(r)=V_{\lambda \lambda ^{\prime }\mu }(r_0)\mathrm{exp}(-\gamma (r-r_0)).$$
(5)
However, in the case of straight SWNTs, the $`\rho ^{-2}`$ order dependence of the strain energy will not be affected even if we simultaneously ignore these dependencies and the repulsion energy.
With the notation used by White et al. , each SWNT is indexed by a pair of integers $`(n_1,n_2)`$ corresponding to the lattice vector $`\vec{R}=n_1\vec{a}_1+n_2\vec{a}_2`$ on the graphene, where $`\vec{a}_1`$, $`\vec{a}_2`$ are the unit cell vectors of the graphene. The tube structure is obtained by a rotation operation $`\mathcal{C}_N`$ and a screw operation $`\mathcal{S}(h,\alpha )`$. The operation $`\mathcal{C}_N`$ is a rotation of $`\frac{2\pi }{N}`$ about the axis, where $`N`$ is the largest common factor of $`n_1`$ and $`n_2`$. The $`\mathcal{S}(h,\alpha )`$ operation is a rotation by an angle $`\alpha `$ about the axis of the SWNT in conjunction with a translation of $`h`$ units along the axis, with both $`h`$ and $`\alpha `$ depending on the tube parameters . Let $`[m,l]`$ denote a primitive unit cell in the tube generated by mapping the $`[0,0]`$ cell to the surface of the cylinder first and then translating and rotating this cell by $`l`$ applications of the rotational operator $`\mathcal{C}_N`$ followed by $`m`$ applications of $`\mathcal{S}(h,\alpha )`$. Because $`\mathcal{S}(h,\alpha )`$ and $`\mathcal{C}_N`$ commute with each other, we can generalize the Bloch sums and obtain the Hamiltonian matrix:
$`H_{ij}^{AA}(k,n)=H_{ij}^{BB}(k,n)=\epsilon _i\delta _{ij},`$ (6)
$`H_{ji}^{BA}(k,n)=(H_{ij}^{AB}(k,n))^{*},`$ (7)
$`H_{ij}^{AB}(k,n)={\displaystyle \sum _r}\mathrm{exp}[{\displaystyle \frac{2ni\pi }{N}}\mathrm{\Delta }l(r)+ik\mathrm{\Delta }m(r)]V_{ij}^{AB}(r),`$ (8)
where $`[\mathrm{\Delta }m(r),\mathrm{\Delta }l(r)]`$ $`(r=1,2,3)`$ are the cell indices of the primitive unit cells containing the three nearest-neighbor atoms $`B`$ of atom $`A`$ in the tube, and $`A`$ and $`B`$ are the two independent carbon atoms in a primitive unit cell of the SWNT. Let $`n=0,1,\mathrm{\dots },N-1`$ label the $`N`$ sub-Brillouin zones and $`k`$ be a one-dimensional wave vector. $`\epsilon _1=\epsilon _{2s}`$ and $`\epsilon _i=\epsilon _{2p}`$ for $`i=2,3,4`$. Taking the $`2p`$ wave functions as a vector and the $`2s`$ wave function as a scalar, one can easily obtain:
$`V_{p_i,p_j}^{AB}(r)=(\widehat{e}_{A_i}\cdot \widehat{e}_{B_j(r)})V_{pp\pi }-(\widehat{e}_{A_i}\cdot \widehat{u}_r)(\widehat{e}_{B_j(r)}\cdot \widehat{u}_r)(V_{pp\pi }-V_{pp\sigma }),`$ (10)
$`V_{s,p_i}^{AB}(r)=(\widehat{e}_{B_i}(r)\cdot \widehat{u}_r)V_{sp\sigma },`$ (11)
$`V_{p_i,s}^{AB}(r)=-(\widehat{e}_{A_i}\cdot \widehat{u}_r)V_{sp\sigma },`$ (12)
$`V_{s,s}^{AB}(r)=V_{ss\sigma }.`$ (13)
where $`\widehat{u}_r`$ $`(r=1,2,3)`$ is the unit vector from atom $`A`$ to its three neighboring atoms $`B`$, and $`\widehat{e}_{A_i}`$ and $`\widehat{e}_{B_j}(r)`$ are the unit vectors of the $`2p_i`$ wave function of atom $`A`$ and of the $`2p_j`$ wave function of atom $`B`$, respectively. The overlap matrix $`S(k,n)`$ has the same form as the Hamiltonian matrix, with the four overlap parameters replacing the four hopping parameters, and with unity replacing the energies $`\epsilon _{2s}`$ and $`\epsilon _{2p}`$ of the $`2s`$ and $`2p`$ wave functions. Thus we obtain an 8 $`\times `$ 8 Hamiltonian matrix $`H(k,n)`$ and an overlap matrix $`S(k,n)`$. By solving the secular equation $`H(k,n)C_i(k,n)=E_iS(k,n)C_i(k,n)`$, we can calculate the electronic energy bands $`E_i`$ of SWNTs.
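To make the band-structure step concrete, the following sketch solves the analogous generalized eigenvalue problem $`HC=ESC`$ for the simplest case, the two-orbital $`\pi `$ bands of graphene, with $`V_{pp\pi }`$ and $`S_{pp\pi }`$ taken from the parameter set above; the full $`8\times 8`$ SWNT problem is handled identically, just with the larger matrices:

```python
import numpy as np
from scipy.linalg import eigh

Vpp, Spp, a = -3.033, 0.129, 2.46      # eV, overlap, lattice constant (A)

def pi_bands(kx, ky):
    """Generalized eigenproblem H C = E S C for the graphene pi bands;
    f(k) is the usual sum of the three nearest-neighbor phase factors."""
    f = (np.exp(1j * ky * a / np.sqrt(3))
         + 2 * np.exp(-1j * ky * a / (2 * np.sqrt(3))) * np.cos(kx * a / 2))
    H = np.array([[0, Vpp * f], [Vpp * np.conj(f), 0]])
    S = np.array([[1, Spp * f], [Spp * np.conj(f), 1]])
    return eigh(H, S, eigvals_only=True)

print(pi_bands(0.0, 0.0))              # pi and pi* levels at the Gamma point
```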
Taking account of the total energy of all the occupied band electrons in SWNTs relative to that in graphene, we have calculated the strain energy $`E_s`$ of straight SWNTs. With the possible bond-length dependence of the TB parameters neglected, and with the real bond lengths of the SWNT given by $`|\vec{u}_r|`$, where $`\vec{u}_r`$ represents the vectors between nearest-neighbor atoms in the tube , we have calculated the direction cosines $`\widehat{e}_i\cdot \widehat{u}_r`$ of Eqs. (10)-(13). Fig. 1 shows that $`E_s`$ depends only on the radius $`\rho `$ of the tubes. The characteristic behavior $`E_s=C/\rho ^2`$ is found with $`C1.44`$ $`\mathrm{eV}\mathrm{\AA }^2/\mathrm{atom}`$, in good agreement with the previously calculated values of 1.34 and 1.53 $`\mathrm{eV}\mathrm{\AA }^2/\mathrm{atom}`$ , and remarkably close to the value of 1.57 extracted from the measured phonon spectrum of graphite .
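The $`C/\rho ^2`$ behavior can be extracted by a linear fit in $`1/\rho ^2`$; the sketch below uses synthetic strain energies standing in for the calculated ones, so the recovered slope is illustrative only:

```python
import numpy as np

rho = np.array([2.0, 2.7, 3.4, 4.1, 5.4])     # tube radii, A (synthetic)
E_s = 1.44 / rho**2 \
      + 0.002 * np.random.default_rng(2).standard_normal(rho.size)

C = np.polyfit(1.0 / rho**2, E_s, 1)[0]       # slope of E_s vs 1/rho^2
print(f"C ~ {C:.2f} eV A^2/atom")             # ~ 1.44
```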
Recently, "curved SWNTs" and "torus-like SWNTs" have been found. They still have the $`sp^2`$ bond structure, but they are predicted to contain pentagon-heptagon defects. In "curved SWNTs", the bond length is nearly the same as that in the graphite sheet, since the distortion created by the bending of the curved tube is topologically relaxed by the inclusion of fivefold and sevenfold rings. However, the application of an external force moment at the two ends of the tube gives a different deformation. The hexagonal structure of the tube will not change until it reaches a critical bending curvature. The tube undergoes only a simple compression on the inner side and a stretching on the outer side. In the following discussion, the term "curved SWNT" refers to an as-grown SWNT with pentagon-heptagon defects, and the term "bent SWNT" refers to a SWNT that bends with outer-stretching and inner-compressing deformations under external force moments applied to its two ends. Using an empirical model employed by Lenosky et al., Ou-Yang et al. have developed a macroscopic continuum elastic model for "curved SWNTs". In their work, the strain energy of a "curved SWNT" comes from the angular change of the bonds, i.e. the curvature of the tube. However, in the case of a "bent SWNT", the bond-length effect contributes the main part of the strain energy. In what follows, we treat only the latter case.
The "bent SWNT" surface can be described by
$`\stackrel{}{Y}(s,\varphi )=\stackrel{}{r}(s)+\rho [\stackrel{}{N}(s)\mathrm{cos}\varphi +\stackrel{}{b}(s)\mathrm{sin}\varphi ]`$ (14)
where $`\stackrel{}{r}(s)`$ is the position vector of the axis and $`s`$, with $`0<s\le l`$, is the arc-length parameter along the bent SWNT axis; $`0<\varphi \le 2\pi `$. $`\stackrel{}{N}(s)`$ and $`\stackrel{}{b}(s)`$ are the unit normal and unit binormal vectors of $`\stackrel{}{r}(s)`$, respectively. The position of each carbon atom is described by the two parameters $`s`$ and $`\varphi `$. The two operations $`๐_N`$ and $`๐ฎ(h,\alpha )`$ can still be used to determine the positions of atoms in the SWNT. Therefore, a translation of $`h`$ units along the axis of the SWNT means an addition of $`h`$ to $`s`$, and a rotation of $`\alpha `$ about the axis means an addition of $`\alpha `$ to $`\varphi `$. However, because the rotational symmetry about the axis of the bent SWNT is broken, $`๐_N`$ and $`๐ฎ(h,\alpha )`$ are not symmetry operations. It is necessary to generalize the Bloch sums to the crystal unit cell containing $`โณ\times N`$ primitive unit cells of the SWNT. Here $`โณ`$ is the length of the cell along the axis direction (in units of $`h`$). $`โณ=2(n_1^2+n_2^2+n_1n_2)/N^2`$ for $`n_1-n_2`$ not a multiple of $`3N`$, and $`โณ=2(n_1^2+n_2^2+n_1n_2)/(3N^2)`$ for $`n_1-n_2`$ a multiple of $`3N`$ . We calculate only SWNTs with constant radius of curvature $`R`$; it is not difficult in principle to extend the present treatment to generally bent SWNTs. In a bent SWNT, $`๐ฑ_{ij}^{AB}(r)`$ depends on the position of atom $`A`$ and will be written as $`๐ฑ_{ij}^{AB}(l,m;r)`$, where $`[m,l]`$ are the indices of the primitive unit cell of atom $`A`$. When the SWNT is bent in a different direction $`\stackrel{}{N}(s)`$, the $`๐ฑ_{ij}^{AB}(r)`$ will differ, but the Hamiltonian matrix elements are almost independent of the bending direction; we have found that the anisotropy effect is very small. As in the case of the straight SWNT, it is then easy to obtain the $`(8โณN)\times (8โณN)`$ matrices of the Hamiltonian and the overlap integrals.
Since the change of bond length $`\delta r`$ in a bent SWNT is proportional to $`\rho /R`$, the energy contribution of the bond stretching and bond compression will be of the order of $`1/R^2`$. It is necessary to calculate both the electronic energy $`E_{el}`$ and the repulsion energy $`E_{rep}`$ between the ions. In order to fit the force constants of graphite , we take $`\gamma =1.024`$ $`\mathrm{\AA }^{-1}`$, $`\varphi ^{\prime }=\partial \varphi /\partial r=13.63\gamma `$ $`\mathrm{eV}/\mathrm{\AA }`$ and $`\varphi ^{\prime \prime }=60.4`$ $`\mathrm{eV}/\mathrm{\AA }^2`$. With these parameters, we correctly arrive at the second derivative of the stretching energy $`E_c`$ of SWNTs, $`D=\partial ^2E_c/\partial ฯต^2=58.5`$ $`\mathrm{eV}`$, and the Poisson ratio, $`\sigma =0.24`$, where $`ฯต`$ is the relative compression along the axis of the SWNT.
Fig. 2(a) shows the strain energy $`E_b`$ per atom of the (5,5) SWNT as a function of the bending radius $`R`$. The data follow the expected behavior $`E_b=E_s+\lambda /R^2`$ quite well. A least squares fit to the data yields a value of $`\lambda \approx 173`$ $`\mathrm{eV}\mathrm{\AA }^2/\mathrm{atom}`$. Previous studies on "curved SWNTs" suggest that a simple formula can be given,
$$E_b=\frac{๐R}{\rho ^2\sqrt{R^2-\rho ^2}}\approx \frac{๐}{\rho ^2}+\frac{๐}{2R^2}.$$
(15)
Comparing with our results, we find that the value of $`\lambda `$ for a "curved SWNT" equals $`๐/2`$, only $`0.7`$ $`\mathrm{eV}\mathrm{\AA }^2/\mathrm{atom}`$. This implies that the strain energy of the pentagon-heptagon defects is far less than the strain energy of the stretching and compression of the bond lengths. The experimental fact that a "bent SWNT" deforms through bond-length changes rather than through pentagon-heptagon defects reveals a high potential barrier between the two deformation modes, which prevents the hexagonal structure from changing when a moment of external force is applied at the two ends of the SWNT.
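As a simple illustration of how $`\lambda `$ is extracted, the sketch below performs the least squares fit $`E_b=E_s+\lambda /R^2`$ as a linear fit in the variable $`1/R^2`$. The data points are made-up placeholders chosen to be roughly consistent with the quoted $`\lambda `$, not the actual computed strain energies.

```python
import numpy as np

# Hypothetical (R, E_b) samples for a (5,5) tube: R in Angstrom,
# E_b in eV/atom; replace with the computed bending strain energies.
R = np.array([40.0, 60.0, 80.0, 120.0, 200.0])
E_b = np.array([0.258, 0.198, 0.177, 0.162, 0.154])

# E_b = E_s + lambda/R^2 is linear in x = 1/R^2
x = 1.0 / R**2
lam, E_s = np.polyfit(x, E_b, 1)  # slope = lambda, intercept = E_s
print(f"E_s = {E_s:.3f} eV/atom, lambda = {lam:.0f} eV A^2/atom")
```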
Within CET, we can calculate the Young's modulus $`Y`$ of a SWNT from three different strain energies: the rolling energy $`E_s`$, the compressing or stretching energy $`E_c`$, and the bending strain energy $`\mathrm{\Delta }E_b`$. The three energies are given by
$`E_s={\displaystyle \frac{๐}{\rho ^2}},`$ (16)
$`E_c={\displaystyle \frac{1}{2}}Dฯต^2,`$ (17)
$`\mathrm{\Delta }E_b={\displaystyle \frac{\lambda }{R^2}},`$ (18)
where $`ฯต`$ is the relative stretch or compression along the axis of SWNTs, and $`D`$ is the second derivative of $`E_c`$. The three quantities $`๐`$, $`D`$ and $`\lambda `$ are given by
$`๐={\displaystyle \frac{\mathrm{\Omega }}{24(1-\sigma ^2)}}Yb^3,`$ (19)
$`D=\mathrm{\Omega }Yb,`$ (20)
$`\lambda ={\displaystyle \frac{\mathrm{\Omega }}{4}}Yb(\rho ^2+b^2/4),`$ (21)
respectively, where $`\mathrm{\Omega }=2.62`$ $`\mathrm{\AA }^2/\mathrm{atom}`$ is the occupied area per carbon atom in SWNTs and $`b`$ is the effective wall thickness of the SWNT. Previous calculations indicate that the value of $`D`$ for SWNTs is about $`58`$ $`\mathrm{eV}/\mathrm{atom}`$, the same as that of graphite. However, since the wall thickness is not well defined in a single-layered structure, various values of $`b`$ have been used in the literature, and the obtained values of $`Y`$ differ considerably. Lu and Hernández et al. took the interwall distance of graphite ($`3.4`$ $`\mathrm{\AA }`$) as the thickness, and obtained an average Young's modulus of SWNTs of about $`1`$ $`\mathrm{TPa}`$, consistent with the corresponding measurements on multiwall nanotubes and bulk graphite samples. But the average value of $`Y`$ cannot describe all kinds of deformations of SWNTs, such as the rolling of graphene and the bending of a SWNT, though it can describe the stretching and compressing deformation along the tube axis. Yakobson et al. obtained $`Y=5.5`$ $`\mathrm{TPa}`$ and $`b=0.66`$ $`\mathrm{\AA }`$ by simultaneously using the rolling energy formula of the graphite sheet \[Eq. (19)\] and the stretching energy formula of graphene or SWNTs \[Eq. (20)\]. The obtained value of $`b`$ is about the $`\pi `$ orbital extension of the carbon atom, which reflects the general fact that elasticity results from the overlap of the electron clouds of neighboring atoms. However, since Eq. (19) describes the rolling of a single graphene layer, the results of Yakobson et al. seem to correspond to the graphite sheet rather than to SWNTs. The Young's modulus and the effective wall thickness of SWNTs, and their dependence on the tube radius and helicity, still remain unknown.
With Eq. (21), from the calculation of the bending strain energy of SWNTs with various radii, one may simultaneously find $`Y`$ and $`b`$ of SWNTs. Fig. 2(b) shows the relationship between $`\lambda `$ and the radius of the tube in the form $`\lambda =b^{}\rho ^2+a^{}`$. Comparison with Eq. (21) implies that both $`Y`$ and $`b`$ are independent of the radius and helicity of SWNTs. The value $`b^{}=15.3`$ $`\mathrm{eV}/\mathrm{atom}`$ is consistent with the value $`D/4=14.7`$ $`\mathrm{eV}/\mathrm{atom}`$. However, it is difficult to obtain the exact value of $`a^{}`$, because the first term of $`\lambda `$ is much greater than the second term $`a^{}`$, which introduces large errors into our fit for $`a^{}`$. Careful analysis of the strain energy of a "bent SWNT" shows that only the electronic energy arising from the angular change of the bonds contributes to the $`\rho ^0`$ order of the bending strain energy of the SWNT. The other terms, including the repulsion energy between ions and the electronic energy from the nonzero $`\gamma `$ effects (Eq. (5)), depend on $`\delta r`$ and hence on the tube radius $`\rho `$. To extract the wall thickness it is therefore unnecessary to consider the bond length dependence of the TB parameters and the repulsion energy; it is only required to calculate the $`\gamma =0`$ bond-angle contribution $`E_{el0}`$ to the electronic energy. When $`\gamma =0`$, the first-order ($`1/R`$) perturbation of the Hamiltonian is zero, and $`E_{el0}=\frac{\lambda _{el0}}{R^2}`$. The $`\rho ^0`$ order term of $`\lambda `$ comes entirely from $`\lambda _{el0}`$, and the residual part of $`\lambda `$ affects only the $`\rho ^2`$ order. The exact value of $`a^{}`$ can therefore be obtained by calculating $`E_{el0}`$. Fig. 2(c) shows the expected relationship $`E_{el0}=\lambda _{el0}/R^2`$. Fig. 2(d) gives the values of $`\lambda _{el0}`$ for several SWNTs. Fitting them to $`a^{}+a_1\rho +a_2\rho ^2`$ leads to $`a^{}=1.05r_0^2`$ $`\mathrm{eV}\mathrm{\AA }^2/\mathrm{atom}`$. If the wall thickness of the tube is assumed identical to that of the graphene sheet, then Eq. (19) and Eq. (21) give $`a^{}=\frac{3}{2}(1-\sigma ^2)๐\approx 1.0r_0^2`$ $`\mathrm{eV}\mathrm{\AA }^2/\mathrm{atom}`$. Therefore $`b`$ is about $`0.74`$ $`\mathrm{\AA }`$ and $`Y`$ is about $`5.1`$ $`\mathrm{TPa}`$. This shows that both $`Y`$ and $`b`$ are independent of the radius and helicity of the tube, and that the Young's modulus of a SWNT is five times greater than the average value for MWNTs. The obtained value of $`b`$ is independent of the fitting parameters $`\gamma `$ and $`\varphi ^{\prime \prime }`$, and $`Y`$ is also insensitive to these parameters.
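The inversion of the fitted coefficients for $`Y`$ and $`b`$ is elementary and can be checked directly. The sketch below reproduces the quoted numbers from the relations $`b^{}=\mathrm{\Omega }Yb/4`$ and $`a^{}=\mathrm{\Omega }Yb^3/16`$ implied by Eq. (21); the bond length $`r_0=1.42`$ Å is an assumption made here for the conversion of $`a^{}`$.

```python
import numpy as np

EV_PER_A3_TO_TPA = 0.1602  # 1 eV/A^3 = 160.2 GPa

b_prime = 15.3             # eV/atom, rho^2 coefficient of lambda
a_prime = 1.05 * 1.42**2   # eV A^2/atom, rho^0 coefficient of lambda
Omega = 2.62               # A^2/atom, occupied area per carbon atom

# a'/b' = b^2/4 eliminates Omega*Y from the pair of relations
b = 2.0 * np.sqrt(a_prime / b_prime)   # effective wall thickness, A
Y = 4.0 * b_prime / (Omega * b)        # Young's modulus, eV/A^3
print(f"b = {b:.2f} A, Y = {Y * EV_PER_A3_TO_TPA:.1f} TPa")
```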
In summary, our calculation shows the following results: the strain energy of a straight SWNT comes mainly from the occupied band electrons; the obtained Young's modulus of SWNTs is independent of the radius and the helicity and is much larger than the modulus of the bulk sample; and the effective thickness of a SWNT is about the size of the carbon atom, far less than the distance between the layers of graphite. These results show that CET can describe the deformation of bent tubes well.
The authors acknowledge useful discussions in our group. We would like to thank Dr. X.-J. Bi, Mr. Y.-H. Su and G.-R. Jin for correcting an earlier version of the manuscript, and Prof. Y.-Z. Xie and Dr. H.-J. Zhou for correcting the English of the manuscript. The numerical calculations were performed partly at ITP-Net and partly at the State Key Lab. of Scientific and Engineering Computing.
# Simple model for the linear temperature dependence of the electrical resistivity of layered cuprates
## I Introduction
The linear temperature dependence of the electrical resistivity, $`\rho _{ab}\propto T`$, is one of the most important properties of the normal phase kinetics of high-$`T_c`$ layered cuprates. However, despite intensive investigations over a period of more than ten years and the numerous theoretical models proposed, this simple law does not yet have a unique explanation. In the physics of conventional metals it is well established that Pt, for instance, has a linear resistivity, but the underlying physics is completely different. A parallel between the similar behaviour of Cu and the data for La<sub>1.825</sub>Sr<sub>0.175</sub>CuO<sub>4</sub> and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> has been drawn as well . The linear temperature dependence of the resistivity is one of the most discussed problems in the physics of high-$`T_c`$ superconductors .
The aim of this paper is to present a simple model estimation explaining the $`\rho _{ab}\propto T`$ behaviour. Our model is based on the strong anisotropy of the electrical resistivity in layered cuprates. In the $`c`$-direction, perpendicular to the conducting CuO<sub>2</sub> planes ($`ab`$-planes), the electrical resistivity $`\rho _c`$ is significantly higher than the in-plane one, $`\rho _{ab}`$. For $`c`$-polarised electric fields the plasma frequency $`\omega _c`$ can also be lower than the critical temperature
$$โ\omega _c<k_BT_c.$$
(1)
Plasmons in superconductors are observable only in the superconducting phase, while in the normal phase they are overdamped. Therefore the criterion for applicability of the model is the lack of coherent transport in the $`c`$-direction and an almost frequency independent electromagnetic response for $`\mathrm{}\omega <k_BT`$ (in Ref. , for instance, the author finds that coherence is not very important for the normal resistivity; for the applicability of our model it would suffice that the mean free path in the $`c`$-direction not be significantly larger than the lattice constant $`c_0`$ in that direction). In this case the electric field between the CuO<sub>2</sub> planes can be considered as a classical field, and Boltzmann's distribution for the energy of the plane capacitors formed by these planes is applicable. Such a picture is probably most appropriate for YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> and Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub>, which contain double CuO<sub>2</sub> layers spaced by a distance $`d_0`$ almost corresponding to the diameter of the oxygen ions. Every plaquette within a double CuO<sub>2</sub> plane is considered here as an independent plane capacitor with capacitance $`C=\epsilon _0a_0^2/d_0,`$ where $`1/4\pi \epsilon _0\approx 9\times 10^9`$ J m/C$`^2`$, $`\epsilon _0`$ being the dielectric permittivity of vacuum and $`a_0`$ the lattice parameter of the CuO<sub>2</sub> plane; the distance between the copper and oxygen ions is thus $`\frac{1}{2}a_0.`$
## II Model
The capacitors defined in Sec. I are the main ingredient of the proposed mechanism for the creation of resistivity. According to the equipartition theorem their average energy can be written as $`\frac{1}{2C}โจQ^2โฉ=\frac{1}{2}k_BT,`$ which gives for the averaged square of the electric charge
$$โจQ^2โฉ=\epsilon _0\frac{a_0^2}{d_0}k_BT.$$
(2)
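For orientation, the size of these thermal charge fluctuations is easy to estimate numerically. The sketch below uses the lattice parameters of the numerical example in Sec. III and room temperature; it is only an illustration of Eq. (2).

```python
import numpy as np

eps0 = 8.854e-12   # F/m, vacuum permittivity
k_B = 1.381e-23    # J/K
e = 1.602e-19      # C, elementary charge

a0, d0 = 3.85e-10, 3.18e-10   # m, plaquette size and bilayer spacing
T = 300.0                     # K

C = eps0 * a0**2 / d0            # plaquette capacitance
Q_rms = np.sqrt(C * k_B * T)     # from <Q^2> = C k_B T, Eq. (2)
print(f"Q_rms = {Q_rms:.2e} C = {Q_rms / e:.3f} e")  # about 0.026 e
```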
Consider now the scattering of one nearly free charge carrier (e.g. a hole moving in a CuO<sub>2</sub> plane) by a localized charge $`Q`$. For definiteness the hole is assumed to move in the $`x`$-direction and to pass by the charge $`Q`$ at a minimum distance $`r`$. The trajectory of the hole is approximated by a straight line, and its velocity is nearly constant, corresponding in our model estimation to the Fermi velocity $`v_F`$. The time needed for the charge carrier to pass by the scatterer (the flypast-time) we evaluate as $`\tau _Q\approx 2r/v_F`$ . The maximal Coulomb force acting perpendicular to the trajectory is $`F_{\perp }=eQ/4\pi \epsilon _0r^2.`$ Hence for the perpendicular momentum gained by the scattered hole one has $`\mathrm{\Delta }p_{\perp }\approx \tau _QF_{\perp }.`$ The latter quantity is much smaller than the Fermi momentum $`p_F=mv_F,`$ and for small scattering angles $`\theta _r\ll 1`$ one has
$$\theta _r\approx \frac{\mathrm{\Delta }p_{\perp }}{p_F}=\frac{A}{r},$$
(3)
where $`A=eQ/4\pi \epsilon _0E_F`$, $`E_F=\frac{1}{2}mv_F^2`$ is the Fermi energy, $`m=m_{\mathrm{eff}}m_0`$ is the effective mass in the CuO<sub>2</sub> plane, $`m_{\mathrm{eff}}`$ being its dimensionless value and $`m_0`$ the free electron mass. We note that Rutherford scattering is the same in classical and in quantum mechanics, and the Coulomb logarithms of plasma theory exceed the accuracy of such model estimations. In principle the classical and quantum results might differ slightly upon taking into account prefactors containing Coulomb logarithms; however, the difference would be much smaller than the uncertainty introduced by the lack of knowledge of material and/or model parameters. Let us stress that any mechanism for electrical resistivity must incorporate in an essential way some mechanism for transmission of the electron quasimomentum to the lattice. The capacitor model does this implicitly: the thermally excited charges in the capacitors play the role of the defects in a metal. Strictly speaking, the model considered is not purely electronic; it contains implicitly some weak inelastic electron-phonon interaction. In spite of its large relaxation time, the latter ensures the equipartition of the thermal energy of the independent capacitors.
The picture outlined above can easily be generalized to account for the influence of all scattering plaquettes along the $`x=0`$ line. Accordingly, the charge carrier travels at distances $`r=\pm a_0,\pm 2a_0,\pm 3a_0,\dots `$ Since the charges of the capacitors are independent random variables, the average square of the scattering angle is an additive quantity
$$โจ\theta ^2โฉ_{\mathrm{line}}=\underset{r}{\sum }\theta _r^2=2\left(\frac{A}{a_0}\right)^2\left(1+\frac{1}{2^2}+\frac{1}{3^2}+\cdots \right).$$
(4)
The field outside the plane capacitors is essentially a dipole field; however, we will not discuss such details because the corresponding correction gives a factor of order one: $`\zeta (2)=1+1/2^2+1/3^2+\cdots =\pi ^2/6\approx 1.6.`$
Now let us apply the discrete lattice model in order to address the diffusion of the charge carrier momentum over the Fermi surface. The mean free path $`l`$ is the distance after which the charge carrier "forgets" the direction of its earlier motion and scatters by $`90^{\circ }=\pi /2`$, having travelled a distance equal to $`l/a_0`$ lattice constants, i.e.
$$โจ\theta ^2โฉ_{l/a_0}=\left(\frac{l}{a_0}\right)โจ\theta ^2โฉ_{\mathrm{line}}=\left(\frac{\pi }{2}\right)^2.$$
(5)
Consequently, for the mean free path we get finally
$$l=3\pi \frac{4\pi \epsilon _0}{e^2}\frac{E_F^2d_0}{k_BT}a_0.$$
(6)
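The random-walk picture behind Eqs. (4)โ(6) can be tested with a toy simulation. In the sketch below the deflections are drawn from a normal distribution as an illustrative stand-in for the actual distribution of the $`\theta _r`$; the parameters are arbitrary and serve only to confirm that the mean square angle grows linearly with the number of lattice constants travelled.

```python
import numpy as np

rng = np.random.default_rng(0)

theta_line = 0.02   # illustrative RMS deflection per lattice constant, rad
n_steps = 2000      # lattice constants travelled
n_walkers = 2000

# Independent random deflections accumulate, so their variances add
kicks = rng.normal(0.0, theta_line, size=(n_walkers, n_steps))
theta = np.cumsum(kicks, axis=1)

var_end = np.mean(theta[:, -1] ** 2)
print(f"simulated <theta^2> = {var_end:.3f}, "
      f"additive prediction = {n_steps * theta_line**2:.3f}")

# Mean free path criterion: <theta^2> reaches (pi/2)^2 after l/a0 steps
print(f"l/a0 = {(np.pi / 2)**2 / theta_line**2:.0f} lattice constants")
```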
Inserting the transport life-time of the carriers, defined for metals as $`\tau _{\mathrm{tr}}=l/v_F`$, in Drude's formula for the conductivity
$$\sigma =\frac{ne^2\tau _{\mathrm{tr}}}{m}=\frac{1}{\rho },$$
(7)
where $`n`$ is the number of charge carriers per unit volume and $`e`$ is the electron charge, one recovers the linear temperature dependence of the resistivity (a distinguishing feature of the classical statistics, cf. )
$$\rho (T)=\frac{p_F}{ne^2l}=\frac{p_F}{3\pi (4\pi \epsilon _0)nd_0a_0E_F^2}k_BT=\frac{k_BT}{3\pi ^2\epsilon _0nd_0a_0mv_F^3}.$$
(8)
It is remarkable that the squared electron charge $`e^2`$ cancels and that $`โ`$ does not appear explicitly either. The electrical resistivity is by definition a property of the normal state, whereas the cuprates have attracted attention because of their high $`T_c.`$ That is why we consider it useful to perform a comparison with experiment employing parameters of the superconducting phase. For clean superconductors, when the mean free path $`l(T_c)`$ is much larger than the Ginzburg-Landau coherence length $`\xi _{ab}(0)`$, we can evaluate the effective mass $`m_{\mathrm{eff}}`$ as half of the effective mass of the Cooper pairs. For thin cuprate films, $`d_{\mathrm{film}}\ll \lambda _{ab}(0),`$ the effective mass of the Cooper pairs is determined by the electrostatic modulation of the kinetic inductance , but in principle it is also accessible from the Bernoulli effect , the Doppler effect for plasmons , magnetoplasma resonances , or the surface Hall effect . Further, the electron density can be extracted from the in-plane penetration depth extrapolated to zero temperature
$$\frac{1}{\lambda _{ab}^2(0)}=\frac{\mu _0ne^2}{m},\mu _0=4\pi \times 10^{-7},\epsilon _0=1/\mu _0c^2,c=299792458\mathrm{m}\mathrm{s}^{-1}.$$
(9)
The Fermi momentum $`p_F`$ can be determined on the basis of the model of a two-dimensional (2D) electron gas which for bilayered cuprates gives
$$n=\frac{2}{c_0}n^{(2\mathrm{D})},n^{(2\mathrm{D})}=\frac{p_F^2}{2\pi โ^2}.$$
(10)
So after some elementary algebra Eq. (8) takes the form
$$\frac{\sqrt{m_{\mathrm{eff}}}}{\lambda _{ab}^5(0)}\frac{\mathrm{d}\rho }{\mathrm{d}T}=C_{\rho \lambda }\equiv \frac{8}{3\pi ^{1/2}}\frac{k_Be^5}{m_0^{1/2}(2\pi โ)^3}\frac{\mu _0^{5/2}}{\epsilon _0}\frac{1}{a_0d_0c_0^{3/2}}=\mathrm{const}.$$
(11)
We expect a weak doping dependence of the left-hand side of the above equation, while
$$T_c\propto E_F\propto n^{(2\mathrm{D})}\propto 1/\lambda _{ab}^2(0),\text{ for }n\le n_{\mathrm{opt}}$$
(12)
can vary significantly upon going from the underdoped to the optimally doped regime $`n_{\mathrm{opt}}`$. In the overdoped regime $`n>n_{\mathrm{opt}}`$ the resistivity often displays a non-linear temperature dependence. Simultaneously, if the doping dependence of the effective mass is negligible for $`n\le n_{\mathrm{opt}},`$ i.e. $`m_{\mathrm{eff}}\approx \mathrm{const}`$, the model predicts
$$\frac{\mathrm{d}\rho }{\mathrm{d}T}\propto \lambda _{ab}^5(0)\propto \frac{1}{n^{5/2}}\propto \frac{1}{T_c^{2.5}},\text{ for }n\le n_{\mathrm{opt}}.$$
(13)
Let us note also that for a single-plane material ($`d_0\simeq c_0`$) only the 2D density $`n^{(2\mathrm{D})}=nc_0,`$ i.e., the number of electrons per unit area, is relevant for the bulk 3D resistivity. The cancellation of the lattice constant $`c_0`$ can be easily understood by inspecting the expression for the bulk conductivity of a system of equidistant conducting planes, $`\sigma =c_0^{-1}n^{(2\mathrm{D})}e^2\tau _{\mathrm{tr}}(c_0)/m.`$ According to the plane capacitor model, cf. Eq. (2), the scattering rate is proportional to $`c_0^{-1}`$ and, according to Eq. (6), the transport life-time $`\tau _{\mathrm{tr}}\propto c_0.`$ As a result, for single-plane materials $`\sigma `$ does not depend on the interplane distance $`c_0,`$ assuming $`n^{(2\mathrm{D})}=\mathrm{const}.`$
## III Numerical example
Let us now provide an estimate for $`v_F`$ and $`l`$ based on the proposed model, for the set of parameters $`\mathrm{d}\rho /\mathrm{d}T=0.5\mu \mathrm{\Omega }\mathrm{cm}/\mathrm{K}`$ , $`n^{(2\mathrm{D})}=1/(2a_0^2)=3.37\times 10^{14}\mathrm{cm}^{-2},`$ $`n=1/(a_0^2c_0)=5.72\times 10^{21}\mathrm{cm}^{-3},`$ $`2\pi โ/p_F=3.54a_0,`$ $`a_0=3.85\mathrm{\AA },`$ $`c_0=11.8\mathrm{\AA },`$ $`d_0=3.18\mathrm{\AA },`$ $`m=m_{\mathrm{eff}}m_0,`$ $`m_{\mathrm{eff}}=3,`$ cf. Ref. , and $`m_0=9.11\times 10^{-31}\mathrm{kg}`$. Substituting these parameters into Eq. (8) and Eq. (6) we get an acceptable value for the Fermi velocity (cf. Table 3 of Ref. , where estimates of 31, 140, 200 and 220 km/s are cited)
$$v_F=\left(\frac{k_B}{3\pi ^2m}\frac{a_0c_0}{d_0\epsilon _0}\frac{\mathrm{d}T}{\mathrm{d}\rho }\right)^{1/3}=1.76\times 10^5\mathrm{m}\mathrm{s}^{-1}=176\text{ km/s},$$
(14)
and for the mean free path, respectively
$`l(T=300\mathrm{K})`$ $`=`$ $`5.67a_0=22\mathrm{\AA },`$ (15)
$`\pi \left({\displaystyle \frac{l}{a_0}}\right)^2`$ $`\approx `$ $`101.`$ (16)
For $`T_c=90\mathrm{K}`$ and $`\xi _{ab}(0)=12\mathrm{\AA }`$ we have $`\xi _{ab}(0)/l(T_c)=16\%,`$ $`\lambda _{ab}(0)=122`$ nm, $`\lambda _{ab}(0)/\xi _{ab}(0)=101,`$ and $`8/(3\sqrt{\pi })\approx 1.50`$; this numerical prefactor changes slightly if we apply a more sophisticated approach to the treatment of the electric field fluctuations and the electron scattering by the random electric potential. Finally, for the constant $`C_{\rho \lambda }`$ introduced in Eq. (11) and describing the $`\rho `$-$`\lambda `$ correlations, we get $`C_{\rho \lambda }=320\mathrm{\Omega }\mathrm{K}^{-1}(\mu \mathrm{m})^{-4}.`$ These numerical estimates lead us to conclude that the suggested model does not contradict the experimental data, cf. Ref. . Furthermore, we consider that a detailed state-of-the-art treatment of the charge density fluctuations of the plasma in layered cuprates could provide an adequate quantitative model for the theory of their electrical resistivity. Along the same line, a systematic study of the $`\rho `$-$`\lambda `$ correlations would provide an effective tool for analysing the scattering mechanisms in layered perovskites.
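The arithmetic of this section is straightforward to reproduce. The short script below is a sketch in SI units with the parameter values listed above; it recovers the quoted $`v_F`$ and $`l`$ from Eqs. (14) and (6).

```python
import numpy as np

k_B, eps0, e = 1.381e-23, 8.854e-12, 1.602e-19
m0 = 9.11e-31

a0, c0, d0 = 3.85e-10, 11.8e-10, 3.18e-10   # m
m = 3.0 * m0                                # in-plane effective mass
drho_dT = 0.5e-8                            # Ohm m / K (= 0.5 microOhm cm/K)

# Eq. (14): Fermi velocity from the slope of rho(T)
v_F = (k_B / (3 * np.pi**2 * m) * a0 * c0 / (d0 * eps0) / drho_dT) ** (1 / 3)

# Eq. (6): mean free path at T = 300 K, with E_F = m v_F^2 / 2
T = 300.0
E_F = 0.5 * m * v_F**2
l = 3 * np.pi * (4 * np.pi * eps0 / e**2) * E_F**2 * d0 * a0 / (k_B * T)

print(f"v_F = {v_F / 1e3:.0f} km/s, l = {l / a0:.2f} a0 = {l * 1e10:.0f} A")
```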
The mechanism of the electrical resistivity is qualitatively fairly simple and is presented schematically in Fig. 1. The conducting CuO<sub>2</sub> planes constitute the plates of plane capacitors, and one has to take into account the Boltzmann (or Rayleigh-Jeans) statistics of the electrostatic energy of these capacitors. The last criterion for applicability of the model is a significant low frequency reflection coefficient for an electromagnetic wave incident on a single CuO<sub>2</sub> plane. This requires a high two-dimensional conductivity, $`\sigma c_0>\epsilon _0c_{\mathrm{light}}`$ , where the sheet resistance $`\rho _{ab}(4/c_0)=300\mathrm{\Omega }`$ is evaluated for Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> just above $`T_c.`$ While moving in the conducting CuO<sub>2</sub> planes the charge carriers are scattered by charge density fluctuations in the same planes. In fact it is a self-consistent problem for a collisionless plasma. The electrical resistivity appears upon taking into account the charge density fluctuations. This scattering mechanism is analogous to Rayleigh's blue-sky law, where light is scattered by fluctuations of the air density. Only the "electron" sky is rather red: in the case of Rutherford scattering the faster charges of smaller wavelength are scattered less intensively, while for light the effect is opposite.
## IV Discussion and conclusions
The numerical example presented in Sec. III demonstrates that scattering by charge density fluctuations can explain the total resistivity of layered cuprates, or at least a significant part of it. The plane capacitor is an important structural detail of the scenario. Now we want to address two interesting issues. (i) Is the existence of high-$`T_c`$ structures having linear resistivity possible without the plane capacitor detail, and vice versa? (ii) Why do other layered structures not display linear resistivity? The answers have a qualitative character:
(i) High-$`T_c`$ superconductivity can exist even in artificial structures having a CuO<sub>2</sub> monolayer, and the resistivity is still linear. In this case the plane capacitors are missing; however, one has to take into account the electric field fluctuations close to the two-dimensional (2D) CuO<sub>2</sub> layer. The 2D plasmons are gapless and overdamped above $`T_c.`$ Therefore we have to calculate the 2D charge density fluctuations and the corresponding thermodynamic fluctuations of the electric field. Due to the equipartition theorem the linear resistivity is recovered again, with only the prefactor being different.
(ii) There are various artificial metal-insulator structures where the temperature dependence of the in-plane conductivity is specific and non-universal. When a metallic layer contains even a few monolayers we should take into account that the electric field does not penetrate through the metal. Thus, for trivial electrostatic reasons, the thermodynamic fluctuations of the electric field in the insulator layers would be an inefficient mechanism for charge scattering in thick metallic layers. Metallic monolayers have residual resistivity related to defects, significant electron-phonon coupling, etc. According to Matthiessen's rule we could search for the charge-density-fluctuation part of the scattering, but it is unlikely that this mechanism dominates. The theory of fluctuations of the electromagnetic field between metallic layers (this is the geometry of the Casimir effect) is a typical problem in statistical physics; the difficult task, however, will be the experimental separation of the linear term when many other mechanisms contribute to the resistivity.
To summarise, we come to the conclusion that an important hint for the applicability of the suggested model would be the existence of linear resistivity in other layered structures containing no CuO<sub>2</sub> planes but having almost 2D charge carriers and, of course, good quality. If our simple electrostatic explanation is correct, any layered material having good 2D metallic layers should display the same behaviour regardless of its electronic structure. In this case it is ensured that linear resistivity is not related to some specific subtle properties of the electron band structure of the CuO<sub>2</sub> plane, or to some sophisticated non-Fermi-liquid-like strongly correlated electron processes due to Cu $`3d`$ electrons, but rather to such a universal cause as the omnipresent fluctuations of the electromagnetic field and the charge density fluctuations related to them: who could be blind to the blue sky?
The linear resistivity of the layered ruthenates can be considered as such an example and a crucial experiment. In Sr<sub>2</sub>RuO<sub>4</sub> the temperature dependence of the Hall coefficient is similar to the one measured in cuprates, and the striking linear dependence of the resistivity persists over the whole temperature range 1โ1000 K . It is impressive to observe any physical quantity exhibiting linear behaviour over a three-orders-of-magnitude change of the temperature, although here it is a simple consequence of the conventional transport theory of metallic solids. The authors of Ref. note that they were unaware of any model that specifically predicts or can convincingly account for essentially linear behaviour of the resistivity over three decades of temperature, although most theories of high-$`T_c`$ superconductivity discuss linear resistivity. Nevertheless they suggest that linear resistivity is not an exclusive feature of the normal state of high-$`T_c`$ cuprates, but rather of all layered oxides, especially perovskites, possibly even independently of the magnitude of $`T_c`$ . Indeed, Sr<sub>2</sub>RuO<sub>4</sub> ($`T_c<1`$ K) presents an unequivocal demonstration that its linear resistivity is not related to processes involved in the pairing mechanism. The linear resistivity is created by thermally activated electric fields, while the pairing could originate in the conventional Heisenberg and Heitler-London type exchange attraction, cf. Ref. .
The emerging consensus that the model and mechanism of high-$`T_c`$ superconductivity must simultaneously explain both the linear normal resistivity and the superconducting gap is likely an erroneous presumption. It is not a priori clear why high-$`T_c`$ superconductivity should repeat the history of phonon superconductors . One could think of the scattering mechanism as one that "lives" between the CuO<sub>2</sub> planes, while the pairing mechanism is localized within them. The final solution of the problem, however, will more likely come from the development of the technologies for artificial layered structures needed for the oxide electronics of the next millennium .
Concluding, we believe that there is nothing mysterious in the linear temperature dependence of the in-plane resistivity of high-$`T_c`$ layered cuprates: it is just a consequence of well-known physical laws dating back to the end of the 19th and the beginning of the 20th century.
###### Acknowledgements.
The authors are thankful to Prof. J. Indekeu and Prof. F. Vidal for their hospitality and fruitful discussions. The authors are much indebted to Prof. A. Varlamov and Prof. D. Pavuna for the interesting correspondence and for providing their results before publication. The authors also appreciate the interest of Prof. V. Moshchalkov in the present paper and his stimulating comments. One of the authors (TMM) is thankful to Prof. A. A. Abrikosov, Dr. I. Bozovic, Dr. D. M. Eagles, Prof. A. Leggett, Prof. K. Maki, and the referees for sending papers, for correspondence and discussions, and for the implied clarifications of the capacitor model of resistivity. Last but not least, special thanks go to E. Penev for a critical reading of the manuscript, for providing the electronic version of Fig. 1, and for many suggested improvements. This paper was partially supported by the Bulgarian NSF No. 627/1996, a Visiting Professor fellowship from the Spanish MEC, the Belgian DWTC, the Flemish Government Programme VIS/97/01, the IUAP and the GOA.
## 1 Introduction
Neutral hydrogen in the nearby Universe is observed to reside in the gravitational potential wells associated with galaxies. In optically luminous galaxies, the neutral gas is a minor fraction of the baryonic mass, while in some extreme low surface brightness dwarfs, the mass in gas is comparable to or may even exceed the mass in luminous stars. The standard models for primordial nucleosynthesis require that the total baryonic mass density of the Universe exceed that observed in luminous matter (Tytler et al 1996, Tosi et al 1998), and it appears that the majority of the Universe's baryons are spread through the ionized intergalactic medium (IGM) (Burles & Tytler 1996, Shull et al 1996, Rauch et al 1997). The role of the galactic potentials is to confine the atomic gas to sufficient density that it can be self-shielding in the face of the ionizing background radiation.
Although only a minor constituent in the $`z=0`$ Universe, the neutral gas is a useful kinematic tracer of the confining gravitational potentials inhabited by galaxies, and a large research effort has used the 21cm line to establish the existence of dark matter in galaxies and to begin to decode its distribution (cf. van Albada et al 1985, and the contribution by Swaters to these proceedings).
Quasar absorption-line surveys for damped Lyman-$`\alpha `$ (DLa) lines indicate that a greater fraction of the Universe's baryons was neutral in the redshift interval $`z=2`$ to 3 (Wolfe et al 1995, Lanzetta et al 1995, Storrie-Lombardi et al 1996). In fact, the mass in neutral gas exceeded the mass in stars at that time. Absorption line studies of the low column density clouds in the Lyman-$`\alpha `$ forest show that the IGM had already been re-ionized by the epoch at which the highest redshift background quasars are presently observed, at $`z\approx 5`$ (Schneider et al 1991, Fan et al 1999). The lines of sight through the DLa absorbers at subsequent epochs must then be probing neutral gas confined to the potential wells of evolving gravitationally bound systems. These systems may evolve into the galaxies of today or may be the protogalactic clumps that merge to form the present day galaxies. Sections 3 and 4 of this review return to the observational tools that the 21cm line provides for characterizing the sizes and kinematics of these high $`z`$ systems.
The following section addresses the question of whether there might be fossil relics of these original gravitationally bound systems surviving in the nearby Universe until the present. Could there be a population of dark matter mini-halos, capable of capturing and confining neutral hydrogen in a primitive state from a formation epoch at high $`z`$ until the present? If so, then these would be convenient and highly informative objects to study in detail.
## 2 The Significance of the Galactic High Velocity Clouds
Blitz et al (1999) recently presented a case for the Galactic population of High Velocity Clouds (HVCs) being $`\sim 10^8`$ M<sub>โ</sub> dark matter mini-halos, each containing $`\sim 10^7`$ M<sub>โ</sub> of neutral gas of primordial composition, unpolluted by mass loss from stars. Each individual cloud must be stable on cosmic time scales, implying that the neutral gas is a minor dynamical component in comparison with the dissipationless dark matter, which is responsible for the binding potential.
Since the HVCs contain no resident stellar populations, a significant hurdle in understanding the physical properties of the HVCs is the absence of a clear distance indicator (such as stars that would be accessible to spectroscopic parallax methods). Thus, one particularly attractive aspect of the Blitz et al hypothesis is the construction of an independent distance indicator based on the dynamical stability of each cloud. The only adjustable parameter in this picture is $`f`$, the fraction of the total dynamical mass composed of neutral gas. Blitz et al favor $`f\approx 0.1`$, with the implication that the clouds are at distances of roughly 1 Mpc, forming a Local Group population rather than a Galactic Halo population, and typically have HI masses around $`10^7`$ M<sub>โ</sub>. At this mass level, the HVC population adds significantly to the integral HI content of the Local Group.
Theoretical support for this interpretation of the HVCs comes from CDM simulations (cf. Klypin et al 1999, Moore et al 1999), which find that many of the dark matter mini-halos that are the building blocks of galactic systems survive intact through the mergers and accretion events that go into building a system like our Local Group. A population of several hundred of these mini-halos would be expected in the Local Group. This number far exceeds the number of star-bearing satellite galaxies in the Local Group (cf. Mateo 1998) and is more consistent with the number of HVCs.
If the HVC population is an important component of the Universe and contributes significantly to the formation of all galaxies and groups of galaxies, then HVC populations should exist around other galaxies in the nearby Universe, in addition to the galaxies of the Local Group. Furthermore, if the HVCs are representative of a widespread extragalactic population, they could be present outside groups and galaxy halos as independent entities. In fact, the neutral gas masses of $`M_{HI}\sim 10^7`$ M<sub>โ</sub> are large enough that emission from these objects could be detected in 21cm line observations of nearby galaxies and groups.
A number of surveys have now been conducted that are capable of sensing the presence of an extragalactic HVC population. Weinberg et al 1991 were searching specifically for such a population of CDM mini-halos in their VLA survey but found none. Several extragalactic HI surveys of substantially larger volumes to more sensitive HI mass limits have also found no objects with HVC properties (Zwaan et al 1997, Spitzak & Schneider 1999, Kilborn 1999).
Consider the Arecibo HI Strip Survey (Zwaan et al 1997, Sorar 1994) as an example. The survey was an unbiased probe of the HI content of the nearby Universe, conducted without prior recognition of the optically identified galaxies in the survey strips. The flavor of the survey as applied to the HVC question (Zwaan & Briggs 1999) is presented in Fig. 1. Sorar (1994) conducted the original survey by repeatedly scanning strips of constant declination (over the course of as many as 30 days) with the sensitive, high resolution 3โฒ beam of the Arecibo telescope. The thin horizontal lines in Fig. 1 mark the survey areas, which consist of two declination strips, each of length $`10^h`$ of right ascension to a depth of 7500 km s<sup>-1</sup>. Although the survey declinations were selected at random, the strips happen to probe the halos of many galaxies at impact parameters of $`\sim 1`$ Mpc. In total, the survey probed โผ200 catalogued galaxy halos and the environments of โผ14 catalogued groups with sensitivity to neutral hydrogen masses $`\sim 10^7`$ M<sub>โ</sub>. Zwaan and Briggs (1999) applied the formulation of Blitz et al to the Wakker and van Woerden (1991) HVC catalog to deduce distribution functions for comparable HVC populations in these extragalactic settings, leading to the conclusion that the survey should have made โผ75 HVC detections in groups and โผ380 detections around galaxies. No objects with the properties of HVCs were discovered. In the context of the Blitz et al model, Zwaan and Briggs conclude that the $`f`$ parameter must be adjusted to $`f\approx 0.02`$, which lowers the HI masses by a factor of 25 and positions the objects at typical distances of โผ200 kpc, thereby assigning them to the Galactic halo, rather than giving them Local Group membership.
What did the Arecibo Survey detect? Zwaan et al (1997) present the follow-up and analysis for the 66 detected signals. As with comparable surveys conducted to date (Szomoru et al 1994, Henning 1995, Henning et al 1998, Kraan-Korteweg et al 1998, Spitzak & Schneider 1999, Kilborn et al 1999), all detected signals located away from Galactic extinction and from bright foreground stars could be optically identified with galaxies containing stars. No surprising new populations of intergalactic clouds or ultra-low-surface brightness objects were uncovered.
The strong association of the neutral gas reservoirs in the nearby Universe with stars suggests there is some causal relation. Perhaps the confinement of HI to sufficient densities that it can remain neutral also leads to conditions where it is difficult to avoid the instabilities that lead to cooling, collapse and star formation. In such a picture, the shallower potential wells of the lower mass dwarf galaxies would be the places where the HI is more gently confined and evolutionary processes would generally proceed more slowly. Indeed, this is the domain inhabited by the dimmest of the gas-rich LSB galaxies.
The HVC population might form the extension of the gas-rich dwarf population to extremely low masses, much in the spirit of the proposal by Blitz et al. Closing the loop between the Galactic HVCs and the extragalactic analogs will be an important step in understanding their nature and may form important evidence in the case for how galaxies formed and accumulated their interstellar media.
In the meanwhile, we are left with the conclusions of Wakker and van Woerden (1997), who, after an extensive review of the HVC literature, find that no single origin can account for the properties of the HVC population. Instead, several mechanisms must be invoked, including infalling extragalactic clouds, cloud circulation within the Galactic halo driven by a galactic fountain, and a warped outer arm extension of the Galaxy. Explanations for the Magellanic Stream and associated HVC complexes require tidal interactions within the Local Group (Putman et al 1998), and it may be that measurement of metal abundances in the HVCs using absorption-line techniques is the best discriminant between "primordial" objects and the remnant debris of merging.
## 3 Damped Lyman-$`\alpha `$ absorbers and the HI content of the Lyman Break Galaxies
Quasar absorption line studies of the Damped Lyman-$`\alpha `$ absorbers lead to the statistical result that the Universe at $`z\approx 2.5`$ contained at least 5 times the neutral gas that exists at the $`z=0`$ present (cf. Wolfe 1988). Subsequent surveys reached the conclusion that the neutral content peaked around this time, at $`z=2`$ to 3 (Wolfe et al 1995, Lanzetta et al 1995, Storrie-Lombardi et al 1996). A number of other indicators point to this period as the time when mass was most vigorously redistributed (cf. review by Briggs 1999): (1) the comoving number density of quasars peaked, implying the AGN were being fed efficiently, (2) the onset of the formation of metal-rich ionized halos can be seen in the CIV absorption-line statistics for these redshifts, and (3) the comoving star-formation-rate density appears to have peaked at this time, as has been discussed in several presentations at this conference.
One of the principal indicators used to measure the comoving SFR density during the peak in the action at $`z\approx 2.5`$ has been the Lyman Break galaxy population (cf. Giavalisco et al 1996). These objects appear to have nearly the comoving density of the present day $`L^{}`$ galaxies, and it is interesting to ponder the relation between them and the DLa absorbers that collectively contain so much neutral gas. If the HI mass of present day $`L^{}`$ galaxies is simply scaled up to account for the DLa evolution and the mass is assigned to the LB galaxies, then $`M_{HI}`$ for each LB galaxy would be a bit more than $`10^{10}`$ M<sub>โ</sub>. The statistical cross section for DLa absorption greatly exceeds the apparent optical size of the LB objects in HST images. If the large DLa cross section were assigned to the LB population, then each LB galaxy in the HST images would be an indicator of only the most central region of a much larger neutral (and metal enriched โ see Pettini in these proceedings) gas-rich system. On the other hand, the results of CDM simulations imply that, instead, the large galaxies that we now classify as $`L^{}`$ built up over time from the coalescence of many smaller protogalactic clumps. In this picture, the HI cross section at $`z\approx 2.5`$ sensed by the DLa studies must be apportioned among many smaller objects, implying that the LB objects could be representative of only a tiny fraction of the galaxy population coexisting with the LB galaxies.
A crucial test of galaxy evolution models will be to measure how the neutral gas (whose baryon density appears to exceed that in stars at these redshifts) is distributed and how it relates to the LB galaxies. Unfortunately, mapping the 21cm line emission from individual protogalaxies at these redshifts will be very challenging, even with the most optimistic design parameters for next generation radio telescopes. On the other hand, considerable progress might be made with a straightforward statistical method, on time scales much shorter than the construction of a new radio telescope. Several current generation aperture synthesis telescopes (the Westerbork Synthesis Radio Telescope and the Giant Metrewave Radio Telescope in India) are equipped to observe the 21cm line redshifted to $`z3`$. The field of view of these telescopes can survey several square degrees of sky in a single integration with sufficient angular resolution to avoid confusion among the LB galaxies. If an adequate catalog of LB galaxies could be constructed for such a synthesis field, with of order $`10^4`$ LB objects with celestial coordinates and redshifts, then the radio signals could be stacked to obtain a statistical measure of the HI content of the LB population. This would allow the "average HI content per LB galaxy site" to be determined. Of course, we would rather examine individual galaxies in order to directly measure their physical sizes and perform kinematical studies to infer the depths of their potential wells (i.e. the evolution of the dark matter halos over time). How this might be accomplished is the subject of the next section.
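A minimal sketch of the stacking step is shown below. It assumes a calibrated spectrum has been extracted toward each catalogued LB galaxy on a common observed frequency axis, and simply averages the spectra after shifting each to its rest-frame velocity scale; all array names are illustrative, not a description of an existing pipeline.

```python
import numpy as np

F_HI = 1420.405751  # MHz, rest frequency of the 21cm line

def stack_spectra(obs_freqs, spectra, redshifts, v_grid):
    """Average many observed spectra in the rest frame of each galaxy.

    obs_freqs : observed frequency axis (MHz), shared by all spectra
    spectra   : array (n_gal, n_chan) of flux densities toward each galaxy
    redshifts : array (n_gal,) of catalog redshifts
    v_grid    : common rest-frame velocity grid (km/s) for the stack
    """
    c = 2.998e5  # km/s
    stack = np.zeros_like(v_grid)
    for spec, z in zip(spectra, redshifts):
        f_line = F_HI / (1.0 + z)                 # expected line frequency
        v = c * (f_line - obs_freqs) / f_line     # channel -> velocity offset
        # v decreases with frequency; reverse both arrays for np.interp
        stack += np.interp(v_grid, v[::-1], spec[::-1])
    return stack / len(redshifts)
```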
## 4 Mapping gas rich galaxies at high redshift
Considerable progress in assessing the extent and kinematics of the DLa class of quasar absorption line systems could be made with minimal technical adaptation of existing radio facilities. The technique requires background radio quasars or high redshift radio galaxies with extended radio continuum emission. Some effort needs to be invested in surveys to find redshifted 21cm line absorption against these types of sources. These surveys can either key on optical spectroscopy of the quasars to find DLa systems for subsequent inspection in the 21cm line, or they can make blind spectral surveys in the 21cm line directly, once the new wideband spectrometers being constructed for Westerbork and the new Green Bank Telescope are completed. Then radio interferometers with suitable angular resolution at the redshifted 21cm line frequency must be used to map the absorption against the extended background source. This involves interferometer baselines of only a few hundred kilometers โ shorter than is typically associated with VLBI techniques, but longer than the VLA and GMRT baselines. The shorter spacings in the European VLBI Network and the MERLIN baselines would form an excellent basis for these experiments, although considerable effort will be required to observe at the interference-riddled frequencies outside the protected radio astronomy bands.
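The observing frequencies involved are simple to tabulate; the trivial sketch below gives the redshifted 21cm frequency for the $`z=0.437`$ absorber discussed in the next paragraph and for a hypothetical $`z=3`$ system.

```python
F_HI = 1420.405751  # MHz, rest frequency of the 21cm hyperfine line

for z in (0.437, 3.0):
    print(f"z = {z}: 21cm line observed at {F_HI / (1 + z):.1f} MHz")
# z = 0.437 -> 988.5 MHz; z = 3.0 -> 355.1 MHz, both far outside
# the protected band around 1420 MHz
```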
Fig. 2 shows an example of how these experiments might work. The top panel shows contours for the radio source 3C196. Brown and Mitchell (1983) discovered a 21cm line in absorption at $`z=0.437`$ against this source in a blind spectral survey. The object has been the target of intensive optical and UV spectroscopy (summarized by Cohen et al 1996), as well as HST imaging to identify the intervening galaxy responsible for the absorption (Cohen et al 1996, Ridgway and Stockton 1997). Fig. 2 includes a dashed ellipse in the top panel to indicate the approximate extent and orientation of the galaxy identification.
The second panel from the top in Fig. 2 illustrates the 21cm line emission spectrum typical of nearby HI-rich disk galaxies, observed with a low resolution ("single-dish") beam that does not resolve the gaseous structure of the galaxy. The rotation of a galaxy with a flat rotation curve produces the velocity field shown to the left of the spectrum.
For disk systems observed in absorption, the information accessible to the observer is more limited, since we can only ever hope to learn about the gas opacity and kinematics for regions that fall in front of the background continuum. This restricts our knowledge to the zones outlined in the third panel of Fig. 2. The "restricted emission" spectrum is drawn to illustrate what fraction of the galaxy's gas content might be sensed by a sensitive synthesis mapping observation. A comparison with the total gas content in the upper spectrum suggests that much of the important information (the velocity spread, for example) would be measured by a synthesis map of the absorption against the background source.
The single-dish spectrum of the absorption lines observed for an object like 3C196 is weighted by the regions where the background continuum has the highest brightness. As shown in the lower panel, this weighting emphasizes the bright spots in the radio lobes. Clearly, sensitive mapping will recover much of the information that is lost in the integral spectrum produced by a low angular resolution observation. A preliminary look at recent observations of the $`z=0.437`$ absorber in 3C196 can be found in de Bruyn et al (1997).
## 5 Conclusion
Neutral gas clouds rely on confinement in gravitational potential wells in order to maintain sufficient density that they are not ionized by the background radiation emitted by star forming regions and AGN. In this sense, the damped Lyman-$`\alpha `$ absorbers are signposts that draw our attention to the evolving potential wells of galaxies and protogalactic clumps, perhaps before they become sufficiently luminous to be studied through their optical or UV emission. The 21cm line then becomes a convenient probe of the cold gas that traces these possibly primitive potential wells in which galaxies form.
## 6 Acknowledgements
I am grateful to the Organizing Committee of the XIXth Rencontres de Moriond for the invitation and for the support to attend this excellent conference.
# The effect of compression on the global optimization of atomic clusters
## I Introduction
One of the most important types of global optimization problem, and one which is of particular interest to chemical physicists, is the determination of the lowest energy configuration of a molecular system, such as a protein, a crystal or a cluster. However, such a task can be very difficult because of the large number of minima that a potential energy surface (PES) can have; it is generally expected that the number of minima of a system will increase exponentially with size. Therefore, if applications to large systems with realistic descriptions of the interatomic interactions are to be feasible, it is necessary that efficient global optimization algorithms, which scale well with system size, be developed.
A key part of this development is understanding when and why an algorithm is likely to succeed or fail, because, as well as providing useful information about the limitations of an algorithm, this physical insight might be utilised in the design of better algorithms. This is the motivation behind the current paper. Here, we analyse the reasons for the success of a recent algorithm when applied to the global optimization of Lennard-Jones (LJ) clusters for some particularly difficult sizes.
The global optimization of LJ clusters has probably become the most common benchmark for configurational optimization problems. Putative global minima have been obtained for all sizes up to 309 atoms, and up-to-date databases of these structures are maintained on the web. There are two types of difficulty for the LJ cluster problem. First, there is the general increase in the number of minima with cluster size. Second, on top of this effect there are size-specific effects related to the topography of the PES.
For most of the clusters the topography of the PES aids global optimization. There is a funnel from the high-energy liquid-like clusters to the low energy minima with structures based upon the Mackay icosahedra. When there is a dominant low-energy icosahedral minimum at the bottom of the funnel, such as when complete Mackay icosahedra can be formed, global optimization is particularly easy.
However, there are some sizes for which the global minimum is not icosahedral. At $`N`$=38 the global minimum is a face-centred-cubic (fcc) truncated octahedron (38A in Figure 1), at $`N`$=75โ77 and 102โ104 the global minima are based on Marks decahedra (e.g. 75A in Figure 1), and at $`N`$=98 the global minimum is a Leary tetrahedron (98A in Figure 1). For these sizes the PES has a fundamentally different character. As well as the wide funnel leading down to the low-energy icosahedral structures, there is a much narrower funnel which leads down to the global minimum. Relaxation down the PES is much more likely to take the system into the wider funnel where it is then trapped. The time scale for interfunnel equilibration is very long because of the large energy and free energy barriers between the two funnels.
As a result these eight clusters are hard to optimize, the larger examples being virtually impossible to optimize by traditional approaches, such as simulated annealing. However, these cases are solvable by a set of methods in which the "basin-hopping" transformation is applied to the PES. This transformation is used by the Monte Carlo minimization or basin-hopping algorithm, and implicitly by all the most successful genetic algorithms. The transformation of the PES works by changing the thermodynamics of the clusters such that the system is now able to pass between the funnels more easily. However, the non-icosahedral global minima still take much longer to find than the icosahedral global minima, and there is no way of knowing if one has waited long enough to rule out the possibility of a non-icosahedral global minimum. This is illustrated by the Leary tetrahedron at $`N`$=98. Despite the fact that powerful optimization techniques had been applied to LJ<sub>98</sub>, the global minimum was discovered only very recently. Subsequently, it was confirmed that this minimum could be found by some of the previously applied methods.
Given this background, it would be useful to develop techniques that are more efficient for these double-funnel examples. Two potential approaches have very recently been put forward. First, Hartke has achieved improvements in the genetic algorithm approach by forcing the system to maintain a diversity of structural types in the population, thus preventing the population from becoming concentrated in the icosahedral funnel. Second, Schoen and Locatelli noted that the exceptions to the icosahedral structural motifs are usually more spherical than the competing icosahedral structures. This is because the exceptions generally occur at sizes where both a particularly stable form of the alternative morphology is possible and the icosahedral structures involve an incomplete overlayer. Therefore, Schoen and Locatelli added a term to the potential energy favouring compact clusters. Using this PES transformation, the non-icosahedral global minima at $`N`$=38, 98, 102โ104 were much more likely to be found by their multi-start minimization algorithm. An additional transformation had to be applied in order to find the global minima at $`N`$=75โ77.
It is the reasons for the success of this second approach that we examine in this paper. In particular we show how Schoen and Locatelliโs transformation affects the topography of the PES. We also show how the transformation can be incorporated as an element of an existing algorithm, namely basin-hopping.
## II Methods
The atoms in the clusters interact via the Lennard-Jones potential:
$$E_{\mathrm{LJ}}=4ฯต\underset{i<j}{\sum }\left[\left(\frac{\sigma }{r_{ij}}\right)^{12}-\left(\frac{\sigma }{r_{ij}}\right)^6\right],$$
(1)
where $`ฯต`$ is the pair well depth and $`2^{1/6}\sigma `$ is the equilibrium pair separation. To this, Schoen and Locatelli added a term proportional to $`\sum _{i<j}r_{ij}`$, which penalizes long pair distances. Here, we use a slightly different form, which again acts to compress the cluster. The energy of such a compressed Lennard-Jones (CLJ) cluster is given by
$$E_{\mathrm{CLJ}}=E_{\mathrm{LJ}}+\underset{i}{\sum }\mu _{\mathrm{comp}}\frac{|๐ซ_i-๐ซ_{\mathrm{c}.\mathrm{o}.\mathrm{m}.}|^2}{\sigma ^2},$$
(2)
where $`\mu _{\mathrm{comp}}`$ is a parameter that determines the magnitude of the compression acting on the cluster, and $`๐ซ_{\mathrm{c}.\mathrm{o}.\mathrm{m}.}`$ is the position of the centre of mass of the cluster. We found the additional term to be approximately proportional to Schoen and Locatelli's expression, and so the effects of the two transformations on the PES topography are virtually identical.
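For concreteness, a minimal sketch of the CLJ energy evaluation in reduced units ($`ฯต=\sigma =1`$) is given below; the function and variable names are our own illustration, not code from the original studies.

```python
import numpy as np

def clj_energy(coords, mu_comp=0.0):
    """Compressed Lennard-Jones energy, Eq. (2), in reduced units.

    coords : (N, 3) array of atomic positions (epsilon = sigma = 1)
    """
    # Pairwise Lennard-Jones sum, Eq. (1)
    diff = coords[:, None, :] - coords[None, :, :]
    r2 = np.sum(diff**2, axis=-1)
    iu = np.triu_indices(len(coords), k=1)
    inv_r6 = 1.0 / r2[iu] ** 3
    e_lj = 4.0 * np.sum(inv_r6**2 - inv_r6)

    # Harmonic compression about the centre of mass
    com = coords.mean(axis=0)
    e_comp = mu_comp * np.sum((coords - com) ** 2)
    return e_lj + e_comp
```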
To map the PES topography of these CLJ clusters we use the same methods as those we have applied to LJ and Morse clusters to obtain large samples of connected minima and transition states that provide good representations of the low-energy regions of the PES. The approach involves repeated applications of eigenvector-following to find new transition states and the minima they connect.
In the basin-hopping algorithm, the transformed potential energy is given by
$$\stackrel{~}{E}(\mathbf{x})=\mathrm{min}\left\{E(\mathbf{x})\right\},$$
(3)
where $`\mathbf{x}`$ represents the vector of nuclear coordinates and min signifies that an energy minimization is performed starting from $`\mathbf{x}`$. Hence the energy at any point in configuration space is assigned to that of the local minimum obtained by the minimization, and the transformed PES consists of a set of plateaus or steps, each corresponding to the basin of attraction surrounding a minimum on the original PES. This PES is then searched by constant temperature Monte Carlo. Additionally, the algorithm has been found to be more efficient for clusters if the configuration is reset to that of the new local minimum at each accepted step.
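One step of this procedure can be sketched as follows; this is an illustrative Python fragment written for this text (the step size and temperature are placeholders, and the choice of minimizer is discussed below), not the original implementation:

```python
import numpy as np
from scipy.optimize import minimize

def basin_hopping_step(coords, e_old, energy, rng, step=0.4, kT=0.8):
    """One Monte Carlo step on the transformed PES of Eq. (3): perturb
    the coordinates, quench to the nearest local minimum, and accept or
    reject with the Metropolis criterion on the quenched energies."""
    trial = coords + rng.uniform(-step, step, coords.shape)
    res = minimize(lambda x: energy(x.reshape(-1, 3)), trial.ravel(),
                   method="L-BFGS-B")
    if res.fun < e_old or rng.random() < np.exp(-(res.fun - e_old) / kT):
        # accepted: reset the configuration to the new local minimum
        return res.x.reshape(-1, 3), res.fun
    return coords, e_old
```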
There are two ways that one might incorporate a further PES transformation into this algorithm. One could use basin-hopping to first find the global minimum of the transformed PES, then reoptimize the $`n_{\mathrm{low}}`$ lowest energy minima under the original potential. However, if the global minimum of the original PES is not among the $`n_{\mathrm{low}}`$ lowest energy minima of the transformed PES this approach is bound to fail.
Alternatively, at each step one could first optimize a new configuration using the transformed potential, then reoptimize the resulting minimum using the original potential. By incorporating this second minimization the shortcomings of the first approach are avoided. Furthermore, if the energy of this final minimum is used in the Metropolis acceptance criterion, the Boltzmann weight of each minimum is unchanged. However, the occupation probability of a particular minimum will be proportional to the area of the basin of attraction of the minimum on the transformed rather than the original PES, i.e.
$$p_i\propto n_i\stackrel{~}{A}_i\mathrm{exp}(-\beta E_i),$$
(4)
where $`n_i`$ is the number of permutational isomers of $`i`$ and $`\stackrel{~}{A}_i`$ is the total area of the basins of attraction of the minima on $`\stackrel{~}{E}`$ which when reoptimized on $`E`$ lead to minimum $`i`$. Therefore, if the relative area of the global minimum is larger on the transformed PES, optimization should be easier using this approach. We refer to this version of the basin-hopping algorithm as two-phase basin-hopping. This variation is not much more computationally demanding than standard basin-hopping because the starting point for the second minimization is likely to be close to a minimum of the untransformed PES.
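A hedged sketch of the corresponding two-phase step, reusing the lj_energy and clj_energy functions and the imports from the sketches above (parameters again illustrative), is:

```python
def two_phase_step(coords, e_old, rng, mu_comp=5.0, step=0.4, kT=0.8):
    """Quench on the compressed surface, requench on the original LJ
    surface, then apply the Metropolis test to the final LJ energy so
    that the Boltzmann weight of each minimum is unchanged."""
    trial = (coords + rng.uniform(-step, step, coords.shape)).ravel()
    first = minimize(lambda x: clj_energy(x.reshape(-1, 3), mu_comp),
                     trial, method="L-BFGS-B")
    second = minimize(lambda x: lj_energy(x.reshape(-1, 3)),
                      first.x, method="L-BFGS-B")
    if second.fun < e_old or rng.random() < np.exp(-(second.fun - e_old) / kT):
        return second.x.reshape(-1, 3), second.fun
    return coords, e_old
```

The second quench is cheap for precisely the reason given above: first.x already lies close to a minimum of the untransformed PES.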
There is one further difference from previous implementations of the basin-hopping algorithm. Previously, we had performed the minimization in Equation (3) by conjugate gradient. However, we have since found a limited memory BFGS algorithm that is more efficient.
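In a library setting this change amounts to swapping the method string; a trivial illustration (f, grad_f and x0 are hypothetical placeholders for the potential, its gradient and the starting coordinates):

```python
from scipy.optimize import minimize

res_cg = minimize(f, x0, jac=grad_f, method="CG")           # conjugate gradient
res_lbfgs = minimize(f, x0, jac=grad_f, method="L-BFGS-B")  # limited-memory BFGS
```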
## III Results
In global optimization the aim of transforming the potential energy surface is to make the global minimum easier to locate. Typically, one therefore wants the transformation to reduce the number of minima and the barriers between them. Furthermore, if the transformation is to change the relative energies of the minima, one wants the energetic bias towards the global minimum to increase.
Because the number of minima and transition states on the CLJ<sub>13</sub> PES is small enough that virtually all can be found, we can examine whether the compressive term has the first of the above effects by following CLJ<sub>13</sub> as a function of $`\mu _{\mathrm{comp}}`$. The number of minima and transition states clearly decreases as $`\mu _{\mathrm{comp}}`$ increases (Table I). It is interesting to note that minima with low symmetry preferentially disappear. The PES transformation places the cluster in a harmonic potential about its centre of mass. This potential plays a role similar to a soft spherical box, and so less compact minima disappear from the PES as $`\mu _{\mathrm{comp}}`$ increases. Similar results are found when periodic boundary conditions are applied: the number of minima is much less than for a LJ cluster of equivalent size, and the number of minima decreases as the pressure in the cell is increased.
It is also worth noting that the magnitude of the downhill barriers relative to the energy difference between the minima decreases as $`\mu _{\mathrm{comp}}`$ increases (Table I). In the terminology used by Berry and coworkers, the profiles of the pathways to the global minimum become more staircase-like and less sawtooth-like with increasing $`\mu _{\mathrm{comp}}`$. The combination of the changes to the number of stationary points and the barrier heights act to make relaxation to the icosahedral global minimum easier as the PES is further transformed.
Next, we examine the CLJ<sub>38</sub> cluster. For a cluster of this size it is not feasible to obtain a complete representation of the PES in terms of stationary points, so instead we obtain a good representation of the lower energy regions of the PES. At each value of $`\mu _{\mathrm{comp}}`$ we obtained a sample of 6000 minima. The effect of $`\mu _{\mathrm{comp}}`$ on the number of stationary points, which we noted for CLJ<sub>13</sub>, is again evident (Table II). As $`\mu _{\mathrm{comp}}`$ increases, $`n_{\mathrm{search}}`$, the number of minima from which we have to perform transition state searches in order to generate the 6000 minima, increases and it becomes more likely that a new transition state does not connect to a new minimum, but rather to one already in our sample.
The second desired effect of a PES transformation is to change the energetics in a manner that makes the global minimum more favourable. We can get a simple guide as to how the energies of the minima depend on $`\mu _{\mathrm{comp}}`$ if we assume there is no structural relaxation in response to changing $`\mu _{\mathrm{comp}}`$. Then $`E_{\mathrm{CLJ}}=E_{\mathrm{LJ}}+\mu _{\mathrm{comp}}Q_{\mathrm{comp}}`$ where the order parameter, $`Q_{\mathrm{comp}}=\sum _i|\mathbf{r}_i-\mathbf{r}_{\mathrm{c}.\mathrm{o}.\mathrm{m}.}|^2/\sigma ^2`$, is evaluated at $`\mu _{\mathrm{comp}}`$=0. From the values of $`Q_{\mathrm{comp}}`$ we can predict the changes in the relative energies of any two minima.
$`Q_{\mathrm{comp}}`$ is a measure of the compactness of the cluster, and from Figure 2 one can see how the compactness of the global minima depends on size. For the first two shells the icosahedral global minima are most compact when complete Mackay icosahedra can be formed, e.g. $`N`$=13 and 55. However, for the third shell the most compact icosahedral structure is at $`N`$=135, where twelve vertex atoms of the Mackay icosahedron are missing, rather than at $`N`$=147.
If we examine LJ<sub>38</sub> as an example of a cluster with a non-icosahedral global minimum, we see that this size corresponds to a pronounced minimum in Figure 2: the truncated octahedron is particularly compact compared to the other global minima of similar size. Furthermore, from Figure 3b we can see that the LJ<sub>38</sub> global minimum has the lowest value of $`Q_{\mathrm{comp}}`$ of all the LJ<sub>38</sub> minima. Therefore, the energy gap between the global minimum and the lowest-energy icosahedral minimum increases with $`\mu _{\mathrm{comp}}`$ (Table II). To visualize how this deepening of the fcc funnel changes the PES topography we present disconnectivity graphs of CLJ<sub>38</sub> for a range of $`\mu _{\mathrm{comp}}`$ values in Figure 4.
Disconnectivity graphs provide a representation of the barriers between minima on a PES. In a disconnectivity graph, each line ends at the energy of a minimum. At a series of equally-spaced energy levels we compute which (sets of) minima are connected by paths that never exceed that energy. We then join up the lines in the disconnectivity graph at the energy level where the corresponding (sets of) minima first become connected. In a disconnectivity graph an ideal single-funnel PES would be represented by a single dominant stem associated with the global minimum to which the other minima directly join. For a multiple-funnel PES there would be a number of major stems which only join at high energy.
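The superbasin analysis underlying such a graph is essentially repeated union-find over the transition states; a compact sketch written for this text is:

```python
def superbasins(minima, transition_states, e_level):
    """Group minima connected by paths that never exceed e_level;
    transition_states is a list of (energy, min_a, min_b) triples."""
    parent = {m: m for m in minima}

    def find(m):
        while parent[m] != m:
            parent[m] = parent[parent[m]]   # path compression
            m = parent[m]
        return m

    for e_ts, a, b in transition_states:
        if e_ts <= e_level:                 # this barrier lies below the level
            parent[find(a)] = find(b)
    groups = {}
    for m in minima:
        groups.setdefault(find(m), []).append(m)
    return list(groups.values())

# Evaluating superbasins at a series of equally spaced energy levels and
# recording where sets of minima first merge yields the disconnectivity graph.
```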
From the disconnectivity graph of LJ<sub>38</sub> one can deduce that the cluster has a double-funnel PES (Figure 4a). There is a narrow funnel associated with the global minimum, and a wider funnel associated with the icosahedral minima. There are a number of low-energy minima at the bottom of the icosahedral funnel, which, although they have only small differences in the way the outer layer is arranged (e.g. the second lowest icosahedral minimum, 38C, is depicted in Figure 1), can be separated by moderate-sized barriers. As a result there is a certain amount of fine structure at the bottom of the icosahedral funnel with not all minima joined directly to the stem of the lowest-energy icosahedral minimum. From the data in Table II one can see that there are many more minima associated with the icosahedral funnel.
As the fcc funnel becomes deeper with increasing $`\mu _{\mathrm{comp}}`$ it increases in size relative to the icosahedral funnel (Figure 4). By $`\mu _{\mathrm{comp}}`$=$`5\epsilon `$ the fcc funnel dominates the PES, and the disconnectivity graph has the form expected for an ideal single funnel with only a very small sub-funnel for the icosahedral minima. These changes are also reflected in the number of minima associated with both funnels (Table II).
These changes to the PES topography of course affect the thermodynamics. For LJ<sub>38</sub> there are two peaks in the heat capacity curve (Figure 5). The first is due to a transition from the fcc global minimum to the icosahedral minima, which is driven by the greater entropy of the latter. The second corresponds to melting. The first transition hinders global optimization because it is thermodynamically favourable for the cluster to enter the icosahedral funnel on cooling from the molten state, where it can then be trapped. However, as $`\mu _{\mathrm{comp}}`$ increases, the decreasing entropy of the icosahedral funnel can no longer overcome the increasing energy difference between the global minimum and the icosahedral funnel (Table II) and so this first transition is suppressed. Consequently, the heat capacity curves for the CLJ<sub>38</sub> clusters in Figure 5 show only one peak, indicating that the global minimum is most stable up to melting.
Of course, the changes to the PES topography and thermodynamics mean that on relaxation down the PES the system is more likely to enter the fcc funnel as $`\mu _{\mathrm{comp}}`$ increases. Furthermore, the energy barrier to escape from the icosahedral funnel, relative to the energy difference between the bottoms of the two funnels, becomes smaller (Table II), thus making escape from the icosahedral funnel easier. To quantify these effects we performed annealing simulations for CLJ<sub>38</sub> at a number of values of $`\mu _{\mathrm{comp}}`$ (Table III). For LJ<sub>38</sub>, 80% of the longer annealing runs ended at the bottom of the icosahedral funnel, and only 2% at the global minimum. However, by $`\mu _{\mathrm{comp}}`$=$`5\epsilon `$, 99.5% of the long annealing runs reached the global minimum.
Given the above, it is unsurprising that two-phase basin-hopping finds the global minimum more rapidly as $`\mu _{\mathrm{comp}}`$ increases (Figure 6b). At large $`\mu _{\mathrm{comp}}`$ the first-passage time is 40 times shorter than for LJ<sub>38</sub>. Conversely, the first-passage time to reach the icosahedral minimum 38C increases. These changes are driven by changes to $`\stackrel{~}{A}_i`$ in Equation (4). The basin of attraction of the global minimum increases in size relative to those of the icosahedral minima as the PES is further transformed.
Locatelli and Schoen's transformation works for LJ<sub>38</sub> because the global minimum is the most compact spherical minimum. However, this does not necessarily have to be the case, even for those clusters with non-icosahedral global minima. From Figure 2 one can see that the non-icosahedral global minima at $`N`$=98 and 102–104 have particularly low values of $`Q_{\mathrm{comp}}`$ and Figure 3d confirms that the Leary tetrahedron, 98A, has the lowest $`Q_{\mathrm{comp}}`$ value of all the LJ<sub>98</sub> minima. Therefore, Locatelli and Schoen were able to locate these global minima. However, for $`N`$=75–77 the values of $`Q_{\mathrm{comp}}`$ for the Marks decahedra are not set apart from the nearby icosahedral global minima (Figure 2) and Figure 3c shows that there are a number of LJ<sub>75</sub> minima which have lower values of $`Q_{\mathrm{comp}}`$ than 75A. In particular, the icosahedral minimum 75C that is third lowest in energy has a lower $`Q_{\mathrm{comp}}`$, and the Marks decahedron is no longer the CLJ<sub>75</sub> global minimum beyond $`\mu _{\mathrm{comp}}`$=$`3.1\epsilon `$.
The geometric root of this behaviour is that the Marks decahedra at $`N`$=75–77 are the least spherical of the non-icosahedral global minima. The 75-atom Marks decahedron is somewhat oblate and some of the icosahedral minima with which 75A is competing are prolate by a similar degree, leading to comparable values of $`Q_{\mathrm{comp}}`$. Therefore, although the transformation may aid global optimization by reducing the number of minima and by increasing the energy of many minima relative to the Marks decahedron, unlike for LJ<sub>38</sub> it does not remove the fundamental double-funnel character of the PES. To locate the global minimum Locatelli and Schoen had to add an additional "diameter penalization" to the potential.
Locatelli and Schoen found that for many of the clusters their transformation did not aid global optimization. This was not unexpected, but simply reflects the fact that often the icosahedral global minima are not the most compact minima. We analyse one example. At $`N`$=34 it is possible to form a compact Leary tetrahedron (34H in Figure 1), which is the eighth lowest-energy LJ<sub>34</sub> minimum. This structure has a significantly lower value of $`Q_{\mathrm{comp}}`$ than the global minimum (Figure 3a). As a result, the Leary tetrahedron becomes the CLJ<sub>34</sub> global minimum at $`\mu _{\mathrm{comp}}`$=$`0.3\epsilon `$. The results of two-phase basin-hopping runs are similar to those for LJ<sub>38</sub> in that as $`\mu _{\mathrm{comp}}`$ increases the compact non-icosahedral structure becomes significantly easier to locate and the low-energy icosahedral minima more difficult (Figure 6). The difference, though, is that now this scenario is undesirable, because it is the global minimum that is becoming more difficult to reach.
## IV Conclusions
By analysing the effect of a compressive transformation on the PES topography we have obtained insights into the reasons for its success in aiding the optimization of LJ clusters that have non-icosahedral global minima. Firstly, we have shown that the transformation reduces the number of minima and transition states on the PES. Secondly, for examples where, as is often the case, the non-icosahedral global minimum is the most compact structure, the transformation causes the funnel of the global minimum to become increasingly dominant. For LJ<sub>38</sub> the PES has a double funnel, whilst at large $`\mu _{\mathrm{comp}}`$ the PES has an ideal single-funnel topography, enabling the system to relax easily down the PES to the global minimum. However, when, as for LJ<sub>75</sub>, the decahedral global minimum is only one of the more compact minima, the transformation is less beneficial for global optimization. By contrast, for sizes with icosahedral global minima the transformation is often unhelpful, as we saw for LJ<sub>34</sub>, because the global minimum is much less likely to be the most compact structure. Therefore, the transformation needs to be used in combination with other methods. As the transformation is most likely to be successful for clusters where other methods fail, it can act as a good complement to them. For example, when the basin-hopping algorithm is applied, usually a series of runs is performed at each size. If one of the runs used the two-phase approach, this would increase the chance of success for those sizes where the PES has a multiple-funnel topography.
Other PES transformations could also be usefully employed alongside standard basin-hopping runs in this two-phase approach, if they are likely to aid global optimization for some sizes. For example, increasing the range of the potential is another transformation that reduces the number of stationary points on the PES. Using the transformations alongside standard runs avoids one of the major difficulties associated with PES transformations. They are rarely universally effective, but rather there are likely to be some instances when they destabilize the global minimum, thus making optimization more difficult. This is certainly the case when increasing the range of the potential, where the range-dependence of the most stable cluster structure is well-documented.
Although we have seen how a compressive transformation can be useful in aiding the global optimization of LJ clusters, an important question is how generally useful it will be. While this question can only be definitively answered through applications to a variety of systems, one would expect it to be useful for metal and simple molecular clusters that form compact structures, particularly those that favour 12-coordination. For these systems, as with LJ clusters, the strength of this approach would be locating those global minima that are not based on the dominant morphology, because the alternative morphologies are only likely to be most stable when they are compact and spherical. It might also be useful in systems such as proteins where there are a large number of less compact unfolded configurations. However, it would not be useful for clusters of substances, such as water and silicon, which form open network structures where the liquid can be denser than the solid.
###### Acknowledgements.
J.P.K.D. is the Sir Alan Wilson Research Fellow at Emmanuel College, Cambridge. The author is grateful to David Wales for supplying a modified version of the basin-hopping code, and would also like to thank Marco Locatelli and Fabio Schoen for helpful discussions and for sharing results prior to publication. |
## 1 Introduction
Isolated neutron stars (INSs) which do not show radio pulsar activity now attract much attention from astrophysicists due to recent observations of several candidates with the ROSAT satellite (see Neuhäuser & Trümper 1999 and a review in Treves et al. 2000). As we discussed in our previous paper (Popov & Prokhorov 2000), INSs can be important for the discussion of different models of magnetic field decay (MFD) in NSs in general.
During its evolution an INS can pass through four phases: "ejector", "propeller", "accretor" and "georotator". At the first stage the INS spins down according to the magneto-dipole formula until the so-called ejector period is reached. At the second stage the captured matter cannot penetrate down to the surface of the INS, and the star continues to spin down, faster than at the ejection stage. Finally, the so-called accretor period is reached, and matter can fall down: accretion starts. If the INS's velocity (or magnetic field) is high enough, the star can instead appear as a georotator, where matter cannot be captured because the magnetosphere radius is larger than the radius of gravitational capture.
Several models of MFD in NSs were suggested during the last 20–30 years (see for example a recent brief review by Konar & Bhattacharya). Most of these models can be fitted by exponential or power-law decay, or by their combination with some set of parameters. INSs can be an important class of objects for the verification of different theories of MFD, because in these sources the accretion rate is negligible, so it is not necessary to take into account the influence of accretion on MFD (Urpin et al. 1996). Spin-up/spin-down rates at the accretion stage are also relatively low in comparison with NSs in binary systems. This means that in INSs MFD operates in its "purest" form (Popov & Konenkov 1998). That is why these objects are, in our opinion, of special importance for investigations of the observational appearance of different effects of MFD.
Recently, Colpi et al. (2000) discussed power-law models of MFD in INSs and applied them to highly magnetized NSs, "magnetars". Here we briefly discuss the later stages of evolution of INSs with power-law MFD, and estimate whether it is possible for them to reach the stage of accretion, and if so, what their properties at this stage can be.
Our analysis follows the papers of Popov & Prokhorov (2000) and Colpi et al. (2000). In essence, we repeat the calculations of Popov & Prokhorov (2000) but for power-law decay, using some results of Colpi et al. (2000), and we refer to these papers for all details of terminology, calculations, etc.
## 2 Power-law decay
Power-law (like exponential) MFD is a widely discussed variant of NSs' field evolution. A power law is a good fit to several different calculations of field evolution (Goldreich & Reisenegger 1992, Geppert et al. 2000). Power-law MFD can be described with the following simple formula (Colpi et al. 2000):
$$\frac{dB}{dt}=-aB^{1+\alpha }.$$
(1)
So, we have only two parameters of decay: $`a`$ and $`\alpha `$. Since this decay is relatively slow for the most interesting values of $`\alpha \gtrsim 1`$ (we use the same units as in Colpi et al. 2000), we do not specify any bottom magnetic field, contrary to what we did for the more rapid exponential decay (Popov & Prokhorov 2000). Even for Model C from Colpi et al. (2000) (see Table 1), with its relatively fast MFD, the magnetic field can decrease only down to $`10^8`$ G in $`10^{10}`$ yrs (see Fig. 1). But for $`\alpha <1`$ the magnetic field can decay significantly during the Hubble time (here we call "the Hubble time" the time interval $`10^{10}`$ yrs, which is nearly equal to the age of our Galaxy) for any reasonable value of $`a`$, and in the latter case it is probably useful to introduce a bottom field.
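Equation (1) integrates in closed form, which makes the slowness of the decay for $`\alpha \gtrsim 1`$ explicit. Below is a minimal Python sketch written for this note; the value of $`a`$ is an illustrative placeholder, not one of the Model A–C parameters from Table 1:

```python
def B_of_t(t, B0, a, alpha):
    """Solution of dB/dt = -a*B**(1+alpha) for alpha > 0:
        B(t) = B0 * (1 + alpha*a*B0**alpha * t)**(-1/alpha),
    with B in units of 1e13 G and t in units of 1e6 yr, as in the text."""
    return B0 * (1.0 + alpha * a * B0**alpha * t) ** (-1.0 / alpha)

# Example: B0 = 1e12 G (0.1 in these units) after the Hubble time (t = 1e4):
B_end = B_of_t(1.0e4, 0.1, a=0.01, alpha=1.0)   # a = 0.01 is a placeholder
```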
At the stage of ejection an INS is spinning down according to the magneto-dipole formula:
$`P\dot{P}\simeq bB^2`$. Here (and everywhere below) $`b=3`$; values of the magnetic field, $`B`$, $`B_{\infty }`$ and $`B_0`$, are taken in units of $`10^{13}`$ G and time, $`t`$, in units of $`10^6`$ yrs (as in Colpi et al. 2000).
In the table we show the parameters of Models A, B, C from Colpi et al. (2000). $`B_{\infty }`$ is the magnetic field calculated for $`t=t_{Hubble}=10^{10}`$ yrs and for the initial field $`B_0=10^{12}`$ G. Models A and B correspond to ambipolar diffusion in the irrotational and the solenoidal modes respectively. Model C describes MFD in the case of the Hall cascade.
In Fig. 2 we show the dependence of the ejector period, $`p_e`$, and the asymptotic period, $`p_{\infty }`$, on the parameter $`a`$ for $`\alpha =1`$ for different values of the initial magnetic field, $`B_0`$:
$$p_e=25.7B_{\infty }^{1/2}n^{-1/4}v_{10}^{1/2}\mathrm{s},$$
(2)
$$p_{\infty }^2=\frac{2}{2-\alpha }\frac{b}{a}B_0^{2-\alpha }.$$
(3)
Here $`v_{10}`$ is the velocity $`(v_{INS}^2+v_s^2)^{1/2}`$ in units of 10 km/s, where $`v_{INS}`$ is the spatial velocity of the INS and $`v_s`$ the sound velocity; $`n`$ is the interstellar medium (ISM) number density, and $`B_0`$ is the initial magnetic field.
$`p_e`$ was calculated for $`t=t_{Hubble}=10^{10}`$ yrs, i.e. for the moment when $`B=B_{\infty }`$.
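Both periods are simple enough to tabulate directly; a sketch in the same units as above (all inputs are placeholders) is:

```python
def p_ejector(B_inf, n=1.0, v10=1.0):
    """Ejector period, Eq. (2): B_inf in 1e13 G, n in cm^-3, v10 in 10 km/s."""
    return 25.7 * B_inf**0.5 * n**-0.25 * v10**0.5          # seconds

def p_asymptotic(B0, a, alpha, b=3.0):
    """Asymptotic spin period, Eq. (3); valid for alpha < 2."""
    return (2.0 / (2.0 - alpha) * (b / a) * B0**(2.0 - alpha)) ** 0.5

# An INS can leave the ejector stage within the Hubble time only when
# p_asymptotic(B0, a, alpha) exceeds p_ejector(B_inf, n, v10).
```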
It is clear from Fig. 2 that for initial fields $`\gtrsim 10^{11}`$ G low velocity INSs are able to come to the stage of accretion: for $`B_0=10^{11}`$ G the lines for $`p_{\infty }`$ and $`p_e`$ coincide for the lowest possible velocity, 10 km/s.
In Fig. 3 we show "forbidden" regions in the $`a`$–$`\alpha `$ plane, where an INS with a given velocity certainly cannot come to the stage of accretion within the Hubble time (compare with the "forbidden" regions in Popov & Prokhorov 2000). In a forbidden region an INS with the specified parameters cannot leave the ejector stage even after $`10^{10}`$ years of evolution. If one also takes into account the propeller stage (between the ejector and accretor stages) it becomes clear that the "forbidden" regions for an INS which cannot reach the stage of accretion are even larger. We note that the propeller stage can be shorter (probably much shorter, especially for a constant field) than the ejector stage (see Lipunov & Popov 1995 for detailed arguments), so the "forbidden" regions in Fig. 3 cannot become much larger if one also takes the propeller stage into account. It is also important that we take a very low INS velocity and a high ISM density; for most INSs all the plotted "forbidden" regions should be larger.
One can see that for the most interesting cases (Models A, B, C from Colpi et al. 2000) and $`v<200`$ km/s INSs can reach the stage of accretion. This is an important point, since the fraction of low velocity NSs is very small (Popov et al. 2000), and most of them have velocities of about 200 km/s.
## 3 Evolved magnetars
In the last several years a new class of objects, highly magnetized NSs or "magnetars" (Duncan & Thompson 1992), became very popular in connection with soft $`\gamma `$-repeaters (SGR) and anomalous X-ray pulsars (AXP) (see Mereghetti & Stella 1995, Kouveliotou et al. 1999, Mereghetti 1999 and the recent theoretical works Alpar 1999, Marsden et al. 2000, Perna et al. 2000).
Magnetars come to the propeller stage with periods of $`10`$–$`100`$ s in Models A, B, C (see Fig. 2 in Colpi et al. 2000). Then their periods quickly increase, and the NSs come to the stage of accretion with significantly longer periods; at that stage they evolve towards a so-called equilibrium period (Lipunov & Popov 1995, Konenkov & Popov 1997) due to accretion of the turbulent ISM:
$$p_{eq}\simeq 2800B^{2/3}I_{45}^{1/3}n^{-2/3}v_{10}^{13/3}v_{t_{10}}^{-2/3}M_{1.4}^{-8/3}\mathrm{s}$$
(4)
Here $`v_t`$ is the characteristic turbulent velocity, $`I`$ the moment of inertia, and $`M`$ the INS's mass.
An isolated accretor can be observed with either a positive or a negative sign of $`\dot{p}`$ (Lipunov & Popov 1995). Spin periods of INSs can differ significantly from $`p_{eq}`$, contrary to NSs in disc-fed binaries and similar to NSs in wide binaries, where the accreted matter is captured from the giant's stellar wind. This happens because the spin-up/spin-down torques are relatively small.
As the field decays the equilibrium period decreases, reaching 28 s when the field is equal to $`10^{10}`$ G (we note here the recently discovered object RX J0420.0-5022 (Haberl et al. 2000) with a spin period of $`22.7`$ s).
It is important to discuss the possibility that an evolved magnetar can appear as a georotator (see Lipunov 1992 for a detailed description, or Popov et al. 2000 for a short description of the different INS stages). This happens if:
$$v\gtrsim 300B^{-1/5}n^{1/10}\mathrm{km}/\mathrm{s}.$$
(5)
For all values of $`a`$ and $`\alpha `$ that we used (see Fig. 3), NSs at the end of their evolution ($`t=10^{10}`$ yrs) have magnetic fields $`\lesssim 10^{12}`$ G for a wide range of initial fields, so they never appear as georotators if $`v<580`$ km/s for $`n=1\mathrm{c}\mathrm{m}^{-3}`$. But without MFD, magnetars with $`B\sim 10^{15}`$ G and velocities $`v\gtrsim 100`$ km/s can appear as georotators.
In Popov et al. (2000) it was shown that the georotator stage is rare for INSs, because an INS can reach it only from the propeller or accretor stage, but all these phases require relatively low velocities, and high velocity INSs spend most of their lives as ejectors. This situation is opposite to that in binary systems, where many georotators are expected for fast stellar winds (the wind velocity can be much higher than the INS's velocity relative to the ISM).
Without MFD, magnetars can also appear as accreting sources. In that case they can have very long periods and very narrow accretion columns (which means a high temperature). Such sources are not observed at present. The absence of specific sources associated with evolved magnetars (binary or isolated) can put some limits on their number and properties (Dr. V. Gvaramadze drew our attention to this point).
During the accretion part of INSs' evolution the periods stay relatively close to $`p_{eq}`$ (but can fluctuate around this value), and the INSs' magnetic fields decay down to $`10^{10}`$–$`10^{11}`$ G in several billion years for Models A and B. This corresponds to a polar cap radius of about 0.15 km and a temperature of about 250–260 eV, higher than for the observed INS candidates with temperatures of about 50–80 eV. We calculate the polar cap radius, $`R_{cap}=R\sqrt{(R/R_A)}`$, with the following formula:
$$R_{cap}=6\times 10^3B^{-2/7}n^{1/7}v_{10}^{-3/7}\mathrm{cm}.$$
(6)
Here $`R_A\simeq 1.8\times 10^{10}n^{-2/7}v_{10}^{6/7}B^{4/7}\mathrm{cm}`$ is the Alfven radius. The temperature can be even higher than follows from the formula above, since for a very high field the matter can be channeled into a narrow ring, so that the area of the emitting region is just a fraction of the total polar cap area.
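Both radii follow from $`R_{cap}=R\sqrt{R/R_A}`$, which is easy to check numerically; the sketch below assumes a standard NS radius $`R=10`$ km (our choice for illustration):

```python
def alfven_radius(B, n=1.0, v10=1.0):
    """Alfven radius in cm; B in 1e13 G, n in cm^-3, v10 in 10 km/s."""
    return 1.8e10 * n**(-2.0 / 7.0) * v10**(6.0 / 7.0) * B**(4.0 / 7.0)

def polar_cap_radius(B, n=1.0, v10=1.0, R_ns=1.0e6):
    """R_cap = R*sqrt(R/R_A) in cm, for an assumed R_ns = 10 km = 1e6 cm."""
    return R_ns * (R_ns / alfven_radius(B, n, v10)) ** 0.5

# For B = 1e10-1e11 G (1e-3 to 1e-2 in these units) this gives R_cap of a
# few times 1e4 cm, i.e. a few tenths of a km, of the order quoted above.
```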
As the field decreases the radius of the polar cap increases, and the temperature falls. Sources with such properties (temperatures of about 250–260 eV) have not been observed yet (Schwope et al. 1999). But if the number of magnetars is significant (about 10% of all NSs), accreting evolved magnetars may be found in the near future, since we now know of about 5 accreting INS candidates (Treves et al. 2000, Neuhäuser & Trümper 1999), and their number can increase in the future. Measurements of $`\dot{p}`$ are necessary to understand the nature of such sources, if they are observed.
The recently discovered object RX J0420.0-5022 (Haberl et al. 2000), with a spin period of $`22.7`$ s, can be an example of an INS with a decayed magnetic field accreting from the ISM, as RX J0720.4-3125 was previously. Due to its relatively low temperature, 57 eV, its progenitor cannot be a magnetar for power-law MFD (Models A, B, C) or similar sets of parameters, because a very large polar cap is needed, which is difficult to obtain in these models. Of course RX J0420.0-5022 can also be explained as a cooling NS. The question "are the observed candidates cooling or accreting objects?" is still open (see Treves et al. 2000). An object with $`p\gtrsim 100`$ s and a temperature of about 50–70 eV, if found, would be a strong argument for the accretion scenario, since such long periods can be reached only for very high initial magnetic fields (see Fig. 2 in Colpi et al. 2000) for reasonable models of MFD and other parameters.
## 4 Conclusions
Our main result means that for power-law MFD (contrary to exponential decay) we cannot put serious limits on the parameters of the decay using the ROSAT observations of INS candidates, since for all plausible models of power-law MFD INSs from the low velocity tail are able to become accretors. For more detailed conclusions a NS census for power-law MFD is necessary, similar to the non-decaying and exponential cases (Popov et al. 2000).
An interesting possibility of observing evolved accreting magnetars appears both for the case of MFD and for constant field evolution. These sources should be different from the typical present-day INS candidates observed by ROSAT. The existence or absence of old accreting magnetars is very important for NS astrophysics as a whole.
Acknowledgments
We thank Drs. Monica Colpi, Vasilii Gvaramadze and Roberto Turolla for discussions. S.P. thanks the University of Milan, the University of Padova and the Astronomical Observatory of Brera for their hospitality. This work was supported by the RFBR, INTAS and NTP "Astronomy" grants.
# A Mechanism for Cutting Carbon Nanotubes with a Scanning Tunneling Microscope
## I Introduction
Since the discovery of carbon nanotubes in 1991, a lot of progress has been made in the synthesis as well as in the characterization of the electronic, optical and mechanical properties of these remarkable molecules. They are promising structures to use as components in submicrometer-scale devices and in nanocomposites. A carbon nanotube can be visualized as a graphite sheet rolled up seamlessly into a cylinder. They have diameters in the range of 0.6–30 nm and are many microns in length. Depending on the synthesis conditions they can appear in multi-walled or single-walled configurations. The tube symmetry determines not only the electronic (metallic or semiconducting) character but also the plastic/brittle behavior. The special geometry makes the nanotubes excellent candidates for mesoscopic quantum wires. Evidence for (1D) quantum confinement was obtained from electronic transport measurements on single-walled nanotubes. The transition from one-dimensional (wire) to zero-dimensional (quantum dot) behavior can be achieved directly by cutting a long nanotube to a shorter length. This has recently been done by Venema et al. by applying voltage pulses to the tip of a scanning tunneling microscope (STM) located just above a nanotube. Discrete energy states, consistent with a 1D particle-in-a-box model, have been measured with STM spectroscopy in such short tubes. The possibility to control the length of nanotubes by the cutting technique is of interest for various applications of carbon nanotubes in nanoscale devices.
In this paper we discuss a number of possible mechanisms that can explain the experimentally observed cutting of single-walled carbon nanotubes by a voltage pulse applied to the STM tip. In the following Section the experimental data are briefly described. In Section III, we discuss some relevant physical mechanisms that can result in the breaking of tubes and then select one as the most promising. We analyze the proposed cutting mechanism in more detail in Section IV and compare the theoretical model to the experimental results. We end the paper with a short discussion and outlook.
## II Experimental results
As found earlier, individual carbon nanotubes can be locally cut by applying a voltage pulse to the tip of a scanning tunneling microscope (STM). The carbon nanotubes that were studied were single-walled, synthesized by a laser vaporization technique, and consisted mainly of $`\sim 1.4`$ nm diameter nanotubes (material from R.E. Smalley and coworkers). Samples were prepared by depositing a dispersion of nanotubes in 1,2-dichloroethane onto single-crystal Au(111) surfaces. The experiments were done both at room temperature and at 4 K.
Nanotubes are cut by the following procedure: During imaging of a nanotube in constant-current mode, scanning is interrupted and the STM tip moves to a selected position on the nanotube. Feedback is then switched off and a voltage pulse between tip and sample is applied for 1 ms. After this pulse, the feedback is switched on again and scanning resumes where imaging was interrupted. The distance between the STM tip and the nanotube during a pulse is determined by the settings for the feedback current and voltage. Fig. 1(b) shows an example of a nanotube that has been cut into various smaller tube pieces as a result of voltage pulses of -3.75 V applied at the positions marked in Fig. 1(a). Often, tube parts beneath the STM tip are picked up during a pulse. This usually leads to degradation of the tip quality. Cleaning of the tip can then be done by applying voltages on the gold surface, away from the nanotube.
The cutting efficiency as a function of the bias voltage applied during a pulse is shown in Fig. 2. This experimental result provides essential input for the theoretical modeling of the cutting mechanism described below. The efficiency at a specific voltage is defined as the number of successful cutting events divided by the total number of applied pulses at that voltage. A large number (about 150) of voltage pulses were applied at room temperature in a range of 1 to 6 V, at positive and negative polarity. The pulses were applied on various nanotubes, both semiconducting and metallic. To study the dependence of cutting efficiency on the distance between the STM tip and the nanotube during a pulse, feedback currents were varied between 20 pA and 1 nA and feedback voltages between 0.1 and 3 V were used. No dependence of cutting efficiency on the tunnel distance was found. Pulses applied with various feedback currents and voltages are therefore included in same graph of Fig. 2. The main experimental results are listed below:
* We find a sharp threshold for the voltage of 3.8 $`\pm `$ 0.2 V for cutting nanotubes. This is independent of polarity. Below 3.6 V, nanotubes could almost never be cut. Above 4 V, tubes were almost always cut.
* The cutting efficiency is independent of the feedback tunnel current or voltage. The tunnel resistance has been varied over three orders of magnitude, which changes the tunnel distance significantly. This demonstrates that the determining physical quantity for cutting is the voltage, rather than the electric field.
* The cutting procedure appears to be effective for different types of nanotubes. We observe no dependence on the electronic character (i.e. semiconducting or metallic) of a tube.
* Nanotubes can be cut at room temperature as well as at 4 K.
* Nanotubes within bundles can be cut as efficiently as isolated single-wall tubes (see for an example Fig. 3).
* Upon decreasing the tunnel distance considerably by increasing the tunnel current beyond 1 nA, nanotubes are moved away laterally by the STM tip during imaging.
* The separation between nanotube ends created by a cut varies significantly in size, from a few nm to 20 nm. Sometimes, the tube ends are displaced after the cut. Fig. 4(a) for example shows a strongly bent nanotube on which two voltage pulses were applied near the marked positions. Fig. 4(b) shows that the three tube parts separated by the cuts were moved significantly by the cutting events. Most likely, the nanotube was fixed on the substrate under some strain that was released by the cutting.
## III Physical concepts for the cutting mechanism
In this Section we survey a number of possible physical mechanisms for the breaking of nanotubes by a voltage pulse that are interrelated to various degrees.
1. Shear pressure; Simply crashing the tip into the tube could be a possible cutting mechanism. However, experimentally we find that the tubes are moved laterally when the tip is brought close to the tube. This is related to the large reversible elastic response (flexibility) exhibited by carbon nanotubes. Simulations of C<sub>60</sub>-molecule impacts on carbon nanotubes have shown that even large radial forces produce reversible elastic distortions, indicating that crashing the tip onto the tube is not an efficient method to cut. Furthermore, there is no obvious energy scale of 4 eV in the crashing process.
2. Crack propagation; This is related to the propagation of voids and cracks already present in as-grown nanotubes under low (tensile) loads. However, there is no evidence, from STM or otherwise, for the presence of local defects or cracks in the SWNT material studied here. This mechanism is hence not considered to be important.
3. Collective excitations (plasmons); Plasmons excited by inelastic electron scattering of the tunneling current can decay into electron-hole pairs, phonons or other excitations that induce a polarization or charge separation in the tube. Eventually the release of the plasmon energy leads to a break through local heating and atom evaporation. Due to the particular cylindrical geometry of carbon nanotubes we expect to have a $`\pi `$-plasmon excitation at about 5 eV. This has been confirmed experimentally for SWNTs of $`\sim 1.4`$ nm diameter. The decay of the plasmon excitation into atom evaporation is a well-known phenomenon in metallic clusters, where the surface-plasmon energy is of the order of the binding energy, leading to atom evaporation as an effective decay mechanism of the collective excitations. In the case of tubes with an internal binding energy greater than $`\sim `$7 eV/atom (as for most carbon solids; see Table I), however, multiple-plasmon excitations need to be active to induce transitions which weaken the carbon bonds. This puts this mechanism behind first-order models. However, it can enhance the probability of electronic excitations (see below).
4. Localized particle-hole excitations; This mechanism is concerned with the symmetry-allowed interband excitation of localized $`\sigma `$-states to states of $`\pi ^{*}`$ character near the Fermi level. These electronic excitations leave localized $`\sigma `$-holes behind which weaken the C-C bonds by creating possible nucleation sites for breaking of the tube. Electrons tunneling inelastically between tip and tube are the source for these excitations. The probability for this process is favored by the local electric field at the tip-tube interface, which, independent of the metallic or semiconducting behavior, enhances the number of possible bonding/antibonding transitions triggering the bond-breaking. The density of states (DOS) for a (10,10) nanotube, which is metallic, is shown in Fig. 5. Near the Fermi level, the DOS is finite and constant. At higher energies, sharp peaks can be observed which are the Van Hove singularities at the subband onsets. At above/below 3.6 eV from the Fermi level the interband excitations involve states with a predominant $`\sigma ^{*}`$/$`\sigma `$-localized character with a small curvature-induced $`\sigma -\pi `$/$`\sigma ^{*}-\pi ^{*}`$ hybridization. The excitations of the $`\sigma `$/$`\sigma ^{*}`$ states have been found in nanotubes to lead to a broad spectral feature in the experimental electron-energy loss-spectra close to the $`\pi `$-plasmon excitation, similar to the case of $`\sigma \to \pi ^{*}`$ interband transitions in graphite. This interband excitation involving localized $`\sigma `$-states introduces a natural energy threshold for the cutting process at around 3.6 eV. This agrees quite well with the experimental observation of a sharp threshold voltage for cutting of 3.8 V that is independent of polarity.
5. Field-induced elastic deformation; The large electric field from the tip apex causes significant changes in the C-C bond lengths. This introduces a mechanical instability in the tube that triggers the formation of topological defects (double pentagon-heptagon defect pairs, see Fig. 6) that dynamically evolve towards breaking of the tube (see next section for details). This effect is enhanced by the stress introduced in the nanotube during the electronic excitation of localized $`\sigma `$-states (as discussed in the previous mechanism). The induced stress acts on the tube for the whole 1 ms applied voltage pulse which is long enough for the formation and evolution of the dislocation cores.
The combination of the last two processes is the most likely to constitute the basic mechanism for the cutting of nanotubes. The cutting process is then triggered by inelastic electron excitations involving transitions of localized $`\sigma `$-states at about 3.6 eV. This accounts for the threshold voltage found in the experiments. The C-C bonds are further weakened by the mechanical stress induced by the large electric field between tip and sample. These two processes together induce topological defects and drive the system to mechanical instability. In the next section we present a more detailed analysis of mechanisms 4 and 5, as well as specific molecular dynamics simulations of bond rearrangement and bond-breaking driven by the stress introduced in the nanotube.
## IV The cutting mechanism
In the previous section we have sorted out a possible scenario for the cutting of nanotubes by a voltage pulse. This section discusses in more detail the bond weakening due to mechanisms 4 and 5 described in the previous section. The formation and propagation of topological defects due to the bond weakening will also be discussed.
### A Localized electronic excitation
In order to understand the role of electronic excitations in the bond-weakening and cutting process we show in Fig. 5 how the density of states of a (10,10) SWNT is modified by an applied electric field in the direction perpendicular to the tube axis. Here the structural relaxation and electronic calculations were done in the framework of the ab-initio total-energy density-functional pseudopotential theory . We find that the field tends to increase the nanotubeโs lattice parameter by a few %, nearly independent of the polarity of the applied bias potential. Eventually, as the field strength increases, the structure can reach a state which is not in a stable equilibrium for the carbon atoms any more .
In the calculations shown in Fig. 5 the field acts on the whole tube for fixed atomic coordinates. Results are shown corresponding to electric fields accessible in the experimental setup (up to 1 eV/Å). One could also take into account the spatial variation of the applied field related to the tip size. In that case there are distinct regions: one far from the tip, where the electronic properties of the tube are dictated by the isolated tube, and the other just beneath the tip, where the electronic properties are modified by the applied field. From Fig. 5 we see that the applied field results in the appearance of localized levels which increase the DOS in an energy region close to the Fermi level (this effect is related to the bond-weakening and bond-length increase in carbon nanotubes under an applied voltage discussed above). The increase of the density of states near the Fermi level enhances the number of possible transitions that can take place. Semiconducting nanotubes have a DOS comparable to that of metallic nanotubes, but have an energy gap with zero DOS near the Fermi energy. However, the electric field will induce states within the gap, similar to the increase of DOS near the Fermi level for metallic nanotubes. This allows localized particle-hole excitations to take place also for semiconducting nanotubes. Indeed, in the experiments no difference is found in cutting efficiency between semiconducting and metallic nanotubes.
It may appear that any electronic mechanism should be current dependent. However, the number of broken bonds is far less than the number of available electrons in the cutting process. The excitations become possible because of the large current available in the STM experiment. The experimental range of currents during the cutting process (more than 50 nA) is such that during the 1 ms pulse more than a thousand electrons are involved in inelastic events. This means that we are in a saturated regime where cutting can in principle be achieved independent of the current. This situation can be compared to that of cutting Si:H bonds with an STM, where the desorption yield was found to be independent of bias or current once the bias is large enough. The formation of topological defects is a natural way of releasing the excitation energy and naturally triggers the breaking process.
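The saturation is easy to see by counting the charge delivered in one pulse; a back-of-the-envelope sketch (only the 50 nA and 1 ms figures come from the text, the inelastic fraction is an illustrative assumption):

```python
E_CHARGE = 1.602e-19      # elementary charge, C
current = 50e-9           # A, lower end of the cutting-pulse currents
pulse = 1e-3              # s, duration of the voltage pulse

n_electrons = current * pulse / E_CHARGE   # ~3e8 electrons per pulse
# Even a tiny inelastic fraction yields thousands of excitation events,
# far more than the number of C-C bonds that must be weakened.
frac_inelastic = 1e-5                      # illustrative assumption
n_events = n_electrons * frac_inelastic    # ~3e3 events per pulse
```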
### B Field-induced elastic deformation
We study the tip-field induced mechanical stress in the nanotube within a macroscopic approach. An argument for using a macroscopic model is that the specific elastic constants of single-wall nanotubes determined from experiments follow quite well the predictions of macroscopic elasticity theory (in terms of the Young's modulus, Poisson ratio and torsion and bending elastic constants). Although the ultimate description of fracture mechanics is a complex phenomenon that requires both macroscopic and microscopic descriptions, we thus rely on a continuous model to get a first estimate of the parameters involved in the process. The typical energy of the distortion process is such that it corresponds to bond-breaking energies for carbon compounds (see Table I) or to displacements which fulfill a Lindemann melting criterion of a 10% change in bond distance.
We consider the tube as a thin hollow rod which is clamped to the surface by van der Waals forces. In the cutting process, the tube is distorted by the tip in a region the size of the tip (length-scale L). For a hollow tube (inner radius $`a`$ and outer radius $`b`$) of bending inertia I=$`\pi (b^4-a^4)/4`$ and Young's modulus Y, we expect a relative deformation $`\delta `$ for an applied force $`F`$ of
$$\frac{\delta }{L}\sim \frac{FL^2}{NYI}$$
(1)
where N is a number of order 10-100 depending on details of the modeling such as force distribution and boundary conditions. We estimate the force F from the interaction between a sphere of radius R (representing the tip curvature) and a tube of outer radius b (7.5 ร
in our situation) to be :
$$F=\frac{ฯต_0}{d}\sqrt{Rb}(2\pi V)^2.$$
(2)
where $`V`$ is the applied bias voltage and $`d`$ the tip-tube distance. We see that it depends on the inverse separation. In this equation we have neglected the hollow inner part of the tube, which would reduce the force by approximately 20%, but not its dependence on $`d`$ and $`V`$. With V=4 Volts, a distance of d=10 Å and R=100 Å (a reasonable value in STM when simulating a tip with a sphere) we find a force of 15 nN. This is independent of the polarity of the bias, as found experimentally. Soler et al. experimentally found that a graphite surface would experience a 1 Å deformation in an STM configuration for a force of the order of 1 nN. Notice however that the relevant elastic constants in their case are related to the weak interlayer bonding in graphite, as reflected in the corresponding c<sub>33</sub> and c<sub>44</sub> elastic constants. These are much smaller than the c<sub>11</sub> constant (reflecting the strong intralayer $`sp^2`$-like bond) which is relevant in our case (see Table II).
Using now Eqn. (1) and inserting Y $`\sim `$ 1 TPa, we find that for a distortion $`\delta /L`$ higher than the 10% Lindemann criterion, L has to be at least about 5 nm (of course strongly depending on N). The 10% change in the bond lengths can lead to brittle behavior (see below). All this happens for electric fields of the order of 1 V/Å, which are in the experimental range. Notice that this pull of electromagnetic origin occurs over the length covered by the tip shape and drops dramatically outside the tip, providing a highly "distorted" region.
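These numbers are straightforward to reproduce; the short Python check below was written for this text, with the inner radius a and the prefactor N chosen illustratively within the ranges quoted above:

```python
import numpy as np

eps0 = 8.854e-12                       # vacuum permittivity, F/m
V, d = 4.0, 10e-10                     # bias (V) and tip-tube distance (m)
R, b, a = 100e-10, 7.5e-10, 4.1e-10    # tip radius, tube outer/inner radii (m)

F = eps0 / d * np.sqrt(R * b) * (2 * np.pi * V) ** 2   # Eq. (2): ~1.5e-8 N
I = np.pi * (b**4 - a**4) / 4.0        # bending inertia of the hollow rod
Y, N = 1e12, 10.0                      # Young's modulus (Pa); N is illustrative

# Eq. (1): length scale L needed to reach a 10% distortion delta/L
L = np.sqrt(0.1 * N * Y * I / F)       # ~4e-9 m, i.e. about 4-5 nm
print(F * 1e9, "nN;", L * 1e9, "nm")
```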
### C Dynamical bond-breaking
Now we address how the electronically and mechanically induced strain can lead to the breakdown of a tube. A plausible mechanism is the Stone-Wales (SW) transformation leading to bond-rotation defects. This consists of a pair of pentagon-heptagon defects obtained by a simple C-C bond rotation in the hexagonal network, see Fig. 6. These defects are the main source of strain release for tubes under tension and determine the overall electronic character of the tube. The pentagon-heptagon defect behaves as a single edge dislocation in the tube circumference. Once nucleated, the dislocation pair can relax further by successive Stone-Wales transformations.
We have performed molecular dynamics simulations of strain-induced defects and plastic/brittle behavior using the tight-binding parameterization of Ref. , which reproduces quite well density-functional calculations within the local-density approximation. The computed (T=0 K) SW defect formation energy in an armchair nanotube at different applied strains shows that for a tensile strain of about 5% for a (10,10) tube and 12% for a (12,0) tube, the defect geometry is energetically favourable over the perfect tube. The activation barrier for the formation of defects is lowered by the applied tension. Indeed, the activation barrier for bond rotation in a (10,10) nanotube is found to decrease from 5.6 eV at zero strain to 3 eV at 10% strain; a similar reduction is observed for the barrier for separation of the pentagon-heptagon dislocation cores. The dynamics of these defects are dictated by the tube chirality as well as the applied tension and temperature. In particular, at high applied strain and low temperature all tubes are brittle. In the simulations, restricted to armchair tubes, we observed that after the nucleation of the first SW defect, octagonal and higher order rings start to appear as the applied strain is increased. This leads to brittle behavior and shows the important role of the strain induced by the applied voltage. The simulation time scale for the formation and evolution of these defects is of the order of nanoseconds, much shorter than the applied voltage pulse of 1 ms. We thus expect the natural nucleation and evolution of these defects beneath the tip region of the STM, leading to bond breaking and possible local collapse of the structure.
## V Conclusions and Outlook
From the previous discussion we conclude that the applied voltage at the STM tip creates: (i) excitation of localized $`\sigma `$-bonding states, (ii) a change in the density of states around the Fermi level making the tubes more metallic-like, and (iii) an overall distortion of the region beneath the tip, leading to the formation and evolution of strain-induced topological defects. These processes together are responsible for the cutting mechanism with its threshold voltage of $`\sim 4`$ V. The proposed mechanism relies only on the applied voltage. It appears that, within the experimental values, the tip-tube distance does not change the results.
It would be interesting to attempt the cutting on multi-walled carbon nanotubes as well. The cutting will probably not work for these nanotubes since the inner layers are able to accommodate the stress acting on the outer layers. Furthermore, we expect the inter-tube van der Waals interaction in multi-walled nanotubes to be strong enough to partially release the concentration of elastic strain, so that only atoms of the outer surface will be affected. This is different from the case of nanotubes in bundles, where the cutting process is as effective as for isolated nanotubes (see Fig. 3). In this case the other tubes in the bundle act as a global support, similar to the gold substrate for the isolated SWNTs.
It may be interesting to see if the breaking process is accompanied by photon emission to any significant degree, to gain further information about the energetics of the possible processes. Photon emission provides a signal that is dependent on the local physical environment beneath the tip, with a spatial resolution determined by the size of the local screening charge (collective mode) involved in the photon emission. The typical resolution can be estimated as $`\sqrt{Rd}\sim 20`$ Å using the length scales introduced above. A related topic would be field emission from nanotubes.
To summarize, we have characterized in detail the possible mechanisms responsible for the breaking up of tubes as found experimentally. Nanotube cutting is a promising technique for the emerging field of nano-manipulation and nanodevices. More studies will have to be made to understand all the interesting physics taking place at this nanoscale level.
## Acknowledgments
This work was supported in part by grants from the Swedish Natural Science Research Council, Iberdrola S.A., JCyL (Grant: VA28/99) and the European Community TMR contracts ERB-FMRX-CT98-0198 and ERBFMRX-CT96-0067 (DG12-MIHT). SPA and AR are grateful for the enlightening atmosphere at the University of the Basque Country where most of this work was done. We thank H.L.J. Temminck Tuinstra for help with the experiments and acknowledge discussions with J.W.G. Wildöer, D. Tomanek, P. Bernier and M. Buongiorno-Nardelli. The work at Delft was supported by the Dutch Foundation for Fundamental Research of Matter (FOM).
|
# Rapidity gaps at HERA and the Tevatron from soft colour exchanges
Presented by R. Enberg at UK Phenomenology Workshop on Collider Physics, Durham, England, September 1999.
# Introduction to Neutrinos
## 1 Neutrino Story
Once it became apparent that the spectrum of $`\beta `$ electrons was continuous, something drastic had to be done! In December 1930, in a letter that starts with typical panache, "Dear Radioactive Ladies and Gentlemen…", W. Pauli puts forward a "desperate" way out: there is a companion neutral particle to the $`\beta `$ electron. Thus came to the awareness of humans on our planet the existence of the neutrino, so named in 1933 by Fermi (Pauli's original name for it was neutron, superseded by Chadwick's discovery of a heavy neutral particle), implying that there is something small about it, specifically its mass, although nobody at that time thought it was that small.
Fifteen years later, B. Pontecorvo proposes the unthinkable, that neutrinos can be detected: an electron neutrino that hits a $`{}_{}{}^{37}Cl`$ atom will transform it into the inert radioactive gas $`{}_{}{}^{37}Ar`$, which then can be stored and then detected through its radioactive decay. Pontecorvo did not publish the report, perhaps because of the times, or because Fermi thought the idea ingenious but not immediately achievable.
In 1956, using a scintillation counter experiment they had proposed three years earlier , Cowan and Reines discover electron antineutrinos through the reaction $`\overline{\nu }_e+pe^++n`$. Cowan passed away before 1995, the year Fred Reines was awarded the Nobel Prize for their discovery. There emerge two lessons in neutrino physics: not only is patience required but also longevity: it took $`26`$ years from birth to detection and then another $`39`$ for the Nobel Committee to recognize the achievement! This should encourage future physicists to train their children at the earliest age to follow their footsteps, in order to establish dynasties of neutrino physicists. Perhaps then Nobel prizes will be awarded to scientific families?
In 1956, it was rumored that Davis , following Pontecorvoโs proposal, had found evidence for neutrinos coming from a pile, and Pontecorvo , influenced by the recent work of Gell-Mann and Pais, theorized that an antineutrino produced in the Savannah reactor could oscillate into a neutrino and be detected. The rumor went away, but the idea of neutrino oscillations was born; it has remained with us ever since. Neutrinos give up their secrets very grudgingly: its helicity was measured in 1958 by M. Goldhaber , but it took 40 more years for experimentalists to produce convincing evidence for its mass.
The second neutrino, the muon neutrino is detected in 1962, (long anticipated by theorists Inouรซ and Sakata in 1943 ). This time things went a bit faster as it took only 19 years from theory (1943) to discovery (1962) and 26 years to Nobel recognition (1988).
That same year, Maki, Nakagawa and Sakata introduce two crucial ideas; one is that these two neutrinos can mix, and the second is that this mixing can cause one type of neutrino to oscillate into the other (called today flavor oscillation). This is possible only if the two neutrino flavors have different masses.
In 1964, using Bahcallโs result of an enhanced capture rate of $`{}_{}{}^{8}B`$ neutrinos through an excited state of $`{}_{}{}^{37}Ar`$, Davis proposes to search for $`{}_{}{}^{8}B`$ solar neutrinos using a $`100,000`$ gallon tank of cleaning fluid deep underground. Soon after, R. Davis starts his epochal experiment at the Homestake mine, marking the beginning of the solar neutrino watch which continues to this day. In 1968, Davis et al reported a deficit in the solar neutrino flux, a result that stands to this day as a truly remarkable experimental tour de force. Shortly after, Gribov and Pontecorvo interpreted the deficit as evidence for neutrino oscillations.
In the early 1970โs, with the idea of quark-lepton symmetries comes the suggestion that the proton could be unstable. This brings about the construction of underground (to avoid contamination from cosmic ray by-product) detectors, large enough to monitor many protons, and instrumentalized to detect the ฤerenkov light emitted by its decay products. By the middle 1980โs, several such detectors are in place. They fail to detect proton decay, but in a serendipitous turn of events, 150,000 years earlier, a supernova erupted in the large Magellanic Cloud, and in 1987, its burst of neutrinos was detected in these detectors! All of a sudden, proton decay detectors turn their attention to neutrinos, while to this day still waiting for its protons to decay! As we all know, these detectors routinely monitor neutrinos from the Sun, as well as neutrinos produced by cosmic ray collisions.
## 2 Standard Model Neutrinos
The standard model of electro-weak and strong interactions contains three left-handed neutrinos. The three neutrinos are represented by two-components Weyl spinors, $`\nu _i`$, $`i=e,\mu ,\tau `$, each describing a left-handed fermion (right-handed antifermion). As the upper components of weak isodoublets $`L_i`$, they have $`I_{3W}=1/2`$, and a unit of the global $`i`$th lepton number.
These standard model neutrinos are strictly massless. The only Lorentz scalar made out of these neutrinos is the Majorana mass, of the form $`\nu _i^t\nu _j`$; it has the quantum numbers of a weak isotriplet, with third component $`I_{3W}=1`$, as well as two units of total lepton number. Higgs isotriplet with two units of lepton number could generate neutrino Majorana masses, but there is no such higgs in the Standard Model: there are no tree-level neutrino masses in the standard model.
Quantum corrections, however, are not limited to renormalizable couplings, and it is easy to make a weak isotriplet out of two isodoublets, yielding the $`SU(2)\times U(1)`$ invariant $`L_i^t\stackrel{}{\tau }L_jH^t\stackrel{}{\tau }H`$, where $`H`$ is the Higgs doublet. As this term is not invariant under lepton number, it is not be generated in perturbation theory. Thus the important conclusion: The standard model neutrinos are kept massless by global chiral lepton number symmetry. The detection of non-zero neutrino masses is therefore a tangible indication of physics beyond the standard model.
## 3 Neutrino Mass Models
Neutrinos must be extraordinarily light: experiments indicate $`m_{\nu _e}<10\mathrm{eV}`$, $`m_{\nu _\mu }<170\mathrm{keV}`$, $`m_{\nu _\tau }<18\mathrm{MeV}`$ , and any model of neutrino masses must explain this suppression. The natural way to generate neutrinos masses is to introduce for each one its electroweak singlet Dirac partner, $`\overline{N}_i`$. These appear naturally in the Grand Unified group $`SO(10)`$ where they complete each family into its spinor representation. Neutrino Dirac masses stem from the couplings $`L_i\overline{N}_jH`$ after electroweak breaking. Unfortunately, these Yukawa couplings yield masses which are too big, of the same order of magnitude as the masses of the charged elementary particles $`m\mathrm{\Delta }I_w=1/2`$.
The situation is remedied by introducing Majorana mass terms $`\overline{N}_i\overline{N}_j`$ for the right-handed neutrinos. The masses of these new degrees of freedom are arbitrary, as they have no electroweak quantum numbers, $`M\mathrm{\Delta }I_w=0`$. If they are much larger than the electroweak scale, the neutrino masses are suppressed relative to that of their charged counterparts by the ratio of the electroweak scale to that new scale: the mass matrix (in $`3\times 3`$ block form) is
$$\left(\begin{array}{cc}0& m\\ m& M\end{array}\right),$$
(1)
leading, for each family, to one small and one large eigenvalue
$$m_\nu m\frac{m}{M}\left(\mathrm{\Delta }I_w=\frac{1}{2}\right)\left(\frac{\mathrm{\Delta }I_w=\frac{1}{2}}{\mathrm{\Delta }I_w=0}\right).$$
(2)
This seesaw mechanism provides a natural explanation for small neutrino masses as long as lepton number is broken at a large scale $`M`$. With $`M`$ around the energy at which the gauge couplings unify, this yields neutrino masses at or below tenths of eVs, consistent with the SuperK results.
The lepton flavor mixing comes from the diagonalization of the charged lepton Yukawa couplings, and of the neutrino mass matrix. From the charged lepton Yukawas, we obtain $`๐ฐ_e`$, the unitary matrix that rotates the lepton doublets $`L_i`$. From the neutrino Majorana matrix, we obtain $`๐ฐ_\nu `$, the matrix that diagonalizes the Majorana mass matrix. The $`6\times 6`$ seesaw Majorana matrix can be written in $`3\times 3`$ block form
$$=๐ฑ_\nu ^t๐๐ฑ_\nu \left(\begin{array}{cc}๐ฐ_{\nu \nu }& ฯต๐ฐ_{\nu N}\\ ฯต๐ฐ_{N\nu }^t& ๐ฐ_{NN}\end{array}\right),$$
(3)
where $`ฯต`$ is the tiny ratio of the electroweak to lepton number violating scales, and $`๐=\mathrm{diag}(ฯต^2๐_\nu ,๐_N)`$, is a diagonal matrix. $`๐_\nu `$ contains the three neutrino masses, and $`ฯต^2`$ is the seesaw suppression. The weak charged current is then given by
$$j_\mu ^+=e_i^{}\sigma _\mu ๐ฐ_{MNS}^{ij}\nu _j,$$
(4)
where
$$๐ฐ_{MNS}=๐ฐ_e๐ฐ_\nu ^{},$$
(5)
is the Maki-Nakagawa-Sakata (MNS) flavor mixing matrix, the analog of the CKM matrix in the quark sector.
In the seesaw-augmented standard model, this mixing matrix is totally arbitrary. It contains, as does the CKM matrix, three rotation angles, and one CP-violating phase. In the seesaw scenario, it also contains two additional CP-violating phases which cannot be absorbed in a redefinition of the neutrino fields, because of their Majorana masses (these extra phases can be measured only in $`\mathrm{\Delta }=2`$ processes). These additional parameters of the seesaw-augmented standard model, need to be determined by experiment.
## 4 Present Experimental Issues
Today, very impressive limits have been set on the direct detection of neutrino masses
$$m_{\nu _e}\mathrm{few}\mathrm{eVs};m_{\nu _\mu }160\mathrm{KeVs};m_{\nu _\tau }18\mathrm{MeVs}.$$
(6)
The best limits on the electron neutrino mass come from Tritium $`\beta `$ decay. Also, one does not know if what kind of mas neutrinos have Majorana or Dirac masses. An important clue is the absence of neutrinoless double $`\beta `$ decay, which puts a limit on electron lepton number violation.
Neutrino masses much smaller than these limits can be detected through neutrino oscillations. These can be observed using natural sources; some are somewhat understood and predictable, such as neutrinos produced in cosmic ray secondaries, neutrinos produced in the sun; others, such as neutrinos produced in supernovas close enough to be detected are much rarer. The second type of experiments monitor neutrinos from reactors, and the third type uses accelerator neutrino beams. Below we give a brief description of some of these experiments.
* Atmospheric Neutrinos
Neutrinos produced in the decay of secondaries from cosmic ray collisions with the atmosphere have a definite flavor signature: there are twice as many muon like as electron like neutrinos and antineutrinos, simply because pions decay all the time into muons. It has been known for sometime that this 2:1 ratio differed from observation, hinting at a deficit of muon neutrinos. However last year SuperK was able to correlate this deficit with the length of travel of these neutrinos, and this correlation is the most persuasive evidence for muon neutrino oscillations: after birth, muon neutrinos do not all make it to the detector as muon neutrinos; they oscillate into something else, which in the most conservative view, should be either an electron or a tau neutrino. However, a nuclear reactor experiment, CHOOZ, rules out the electron neutrino as a candidate. Thus there remains two possibilities, the tau neutrino or another type of neutrino that does not interact weakly, a sterile neutrino. The latter possibility is being increasingly disfavored by a careful analyses of matter effects: it seems that muon neutrinos oscillate into tau neutrinos. The oscillation parameters are
$$(m_{\nu _\tau }^2m_{\nu _\mu }^2)10^3\mathrm{eV}^2;\mathrm{sin}^22\theta _{\nu _\mu \nu _\tau }.86.$$
(7)
Although this epochal result stands on its own, it should be confirmed by other experiments. Among these is are experiments that monitor muon neutrino beams, both at short and long baselines.
* Solar Neutrinos
Starting with the pioneering Homestake experiment, there is clearly a deficit in the number of electron neutrinos from the Sun. This has now been verified by many experiments, probing different ranges of neutrino energies and emission processes. This neutrino deficit can be parametrized in three ways
+ Vacuum oscillations of the electron neutrino into some other species, sterile or active, with parameters
$$(m_{\nu _e}^2m_{\nu _\mathrm{?}}^2)10^{10}10^{11}\mathrm{eV}^2;\mathrm{sin}^22\theta _{\nu _e\nu _\mathrm{?}}.7.$$
(8)
This possibility implies a seasonal variation of the flux, which the present data is so far unable to detect.
+ MSW oscillations. In this case, neutrinos produced in the solar core traverse the sun like a beam with an index of refraction. For a large range of parameters, this can result in level crossing region inside the sun. There are two distinct cases, according to which the level crossing is adiabatic or not. These interpretations yield different ranges of fundamental parameters.
The non-adiabatic layer yields the small angle solution
$$(m_{\nu _e}^2m_{\nu _\mathrm{?}}^2)5\times 10^6\mathrm{eV}^2;\mathrm{sin}^22\theta _{\nu _e\nu _\mathrm{?}}2\times 10^3.$$
(9)
The adiabatic layer transitions yields the large angle solution
$$(m_{\nu _e}^2m_{\nu _\mathrm{?}}^2)10^410^5\mathrm{eV}^2;\mathrm{sin}^22\theta _{\nu _e\nu _\mathrm{?}}0.65.$$
(10)
This solution implies a detectable day-night asymmetry in the flux.
How do we distinguish between these possibilities? Each of these implies different distortions of the Boron spectrum from the laboratory measurements. In addition, the highest energy solar neutrinos may not all come from Boron decay; some are expected to be โhepโ neutrinos coming from $`p+{}_{}{}^{3}He{}_{}{}^{4}He+e^++\nu _e`$.
In their measurement of the recoil electron spectrum, SuperK data show an excess of high end events, which would tend to favor vacuum oscillations. They also see a mild day-night asymmetry effect which would tend to favor the large angle MSW solution. In short, their present data does not allow for any definitive conclusions, as it is self-contradictory.
A new solar neutrino detector, the Solar Neutrino Observatory (SNO) now coming on-line, should be able to distinguishe between these scenarios. It contains heavy water, allowing a more precise determination of the electron recoil energy, as it involves the heavier deuterium. Thus we expect a better resolution of the Boron spectrumโs distortion. Also, with neutron detectors in place, SNO will be able to detect all active neutrino species through their neutral current interactions. If successfull, this will provide a smoking gun test for neutrino oscillations.
* Accelerator Oscillations have been reported by the LSND collaboration , with large angle mixing between muon and electron antineutrinos. This result has been partially challenged by the KARMEN experiment which sees no such evidence, although they cannot rule out the LSND result. This controversy will be resolved by an upcoming experiment at FermiLab, called MiniBoone. This is a very important issue because, assuming that all experiments are correct, the LSND result requires a sterile neutrino to explain the other experiments, that is both light and mixed with the normal neutrinos. This would require a profound rethinking of our ideas about the low energy content of the standard model.
At the end of this Century, the burning issues in neutrino physics are
* The origin of the Solar Neutrino Deficit
This is being addressed by SuperK, in their measurement of the shape of the $`{}_{}{}^{8}B`$ spectrum, of day-night asymmetry and of the seasonal variation of the neutrino flux. Their reach will soon be improved by lowering their threshold energy.
SNO is joining the hunt, and is expected to provide a more accurate measurement of the Boron flux. Its raison dโรชtre, however, is the ability to measure neutral current interactions. If there are no sterile neutrinos, we might have a flavor independent measurement of the solar neutrino flux, while measuring at the same time the electron neutrino flux!
This experiment will be joined by BOREXINO, designed to measure neutrinos from the $`{}_{}{}^{7}Be`$ capture. These neutrinos are suppressed in the small angle MSW solution, which could explain the results from the $`pp`$ solar neutrino experiments and those that measure the Boron neutrinos.
* Atmospheric Neutrinos
Here, there are several long baseline experiments to monitor muon neutrino beams and corroborate the SuperK results. The first, called K2K, already in progress, sends a beam from KEK to SuperK. Another, called MINOS, will monitor a FermiLab neutrino beam at the Soudan mine, 730 km away. A Third experiment under consideration would send a CERN beam towards the Gran Sasso laboratory (also about 730 km away!). Eventually, these experiments hope to detect the appearance of a tau neutrino.
This brief survey of upcoming experiments in neutrino physics was intended to give a flavor of things to come. These experiments will not only measure neutrino parameters (masses and mixing angles), but will help us answer fundamental questions about the nature of neutrinos, especially the possible kinship between leptons and quarks. But there is much more to come: there is increasing talk of producing intense neutrino beams in muon storage rings, and even of detecting the neutrino cosmic background!
## 5 Theories
Theoretical predictions of lepton hierarchies and mixings depend very much on hitherto untested theoretical assumptions. In the quark sector, where the bulk of the experimental data resides, the theoretical origin of quark hierarchies and mixings is a mystery, although there exits many theories, but none so convincing as to offer a definitive answer to the communityโs satisfaction. It is therefore no surprise that there are more theories of lepton masses and mixings than there are parameters to be measured. Nevertheless, one can present the issues as questions:
* Do the right handed neutrinos have quantum numbers beyond the standard model?
* Are quarks and leptons related by grand unified theories?
* Are quarks and leptons related by anomalies?
* Are there family symmetries for quarks and leptons?
The measured numerical value of the neutrino mass difference (barring any fortuitous degeneracies), suggests through the seesaw mechanism, a mass for the right-handed neutrinos that is consistent with the scale at which the gauge couplings unify. Is this just a numerical coincidence, or should we view this as a hint for grand unification?
Grand unified Theories, originally proposed as a way to treat leptons and quarks on the same footing, imply symmetries much larger than the standard modelโs. Implementation of these ideas necessitates a desert and supersymmetry, but also a carefully designed contingent of Higgs particles to achieve the desired symmetry breaking. That such models can be built is perhaps more of a testimony to the cleverness of theorists rather than of Natureโs. Indeed with the advent of string theory, we know that the best features of grand unified theories can be preserved, as most of the symmetry breaking is achieved by geometric compactification from higher dimensions .
An alternative point of view is that the vanishing of chiral anomalies is necessary for consistent theories, and their cancellation is most easily achieved by assembling matter in representations of anomaly-free groups. Perhaps anomaly cancellation is more important than group structure.
Below, we present two theoretical frameworks of our work, in which one deduces the lepton mixing parameters and masses. One is ancient , uses the standard techniques of grand unification, but it had the virtue of predicting the large $`\nu _\mu \nu _\tau `$ mixing observed by SuperKamiokande. The other is more recent, and uses extra Abelian family symmetries to explain both quark and lepton hierarchies. It also predicts large $`\nu _\mu \nu _\tau `$ mixing. Both schemes imply small $`\nu _e\nu _\mu `$ mixings.
### 5.1 A Grand Unified Model
The seesaw mechanism was born in the context of the grand unified group $`SO(10)`$, which naturally contains electroweak neutral right-handed neutrinos. Each standard model family is contained in two irreducible representations of $`SU(5)`$. However, the predictions of this theory for Yukawa couplings is not so clear cut, and to reproduce the known quark and charged lepton hierarchies, a special but simple set of Higgs particles had to be included. In the simple scheme proposed by Georgi and Jarlskog , the ratios between the charged leptons and quark masses is reproduced, albeit not naturally since two Yukawa couplings, not fixed by group theory, had to be set equal. This motivated us to generalize their scheme to $`SO(10)`$, where their scheme was (technically) natural, which meant that we had an automatic window into neutrino masses through the seesaw. The Yukawa couplings were of the Higgs-heavy, with $`\mathrm{๐๐๐}`$ representations, but the attitude at the time was โdamn the Higgs torpedoes, and see what happensโ. A modern treatment would include non-renormalizable operators , but with similar conclusion.
The model yielded the masses
$$m_b=m_\tau ;m_dm_s=m_em_\mu ;m_dm_s=3(m_em_\mu ).$$
(11)
and mixing angles
$$V_{us}=\mathrm{tan}\theta _c=\sqrt{\frac{m_d}{m_s}};V_{cb}=\sqrt{\frac{m_c}{m_t}}.$$
(12)
While reproducing the well-known lepton and quark mass hierarchies, it predicted a long-lived $`b`$ quark, contrary to the lore of the time. It also made predictions in the lepton sector, namely maximal $`\nu _\tau \nu _\mu `$ mixing, small $`\nu _e\nu _\mu `$ mixing of the order of $`(m_e/m_\mu )^{1/2}`$, and no $`\nu _e\nu _\tau `$ mixing.
The neutral lepton masses came out to be hierarchical, but heavily dependent on the masses of the right-handed neutrinos. The electron neutrino mass came out much lighter than those of $`\nu _\mu `$ and $`\nu _\tau `$. Their numerical values depended on the top quark mass, which was then supposed to be in the tens of GeVs!
Given the present knowledge, some of the features are remarkable, such as the long-lived $`b`$ quark and the maximal $`\nu _\tau \nu _\mu `$ mixing. On the other hand, the actual numerical value of the $`b`$ lifetime was off a bit,and the $`\nu _e\nu _\mu `$ mixing was too large to reproduce the small angle MSW solution of the solar neutrino problem.
The lesson should be that the simplest $`SO(10)`$ model that fits the observed quark and charged lepton hierarchies, reproduces, at least qualitatively, the maximal mixing found by SuperK, and predicts small mixing with the electron neutrino .
### 5.2 A Non-grand-unified Model
There is another way to generate hierarchies, based on adding extra family symmetries to the standard model, without invoking grand unification. These types of models address only the Cabibbo suppression of the Yukawa couplings, and are not as predictive as specific grand unified models. Still, they predict no Cabibbo suppression between the muon and tau neutrinos. Below, we present a pre-SuperK model with those features.
The Cabibbo supression is assumed to be an indication of extra family symmetries in the standard model. The idea is that any standard model-invariant operator, such as $`๐_i\overline{๐}_jH_d`$, cannot be present at tree-level if there are additional symmetries under which the operator is not invariant. Simplest is to assume an Abelian symmetry, with an electroweak singlet field $`\theta `$, as its order parameter. Then the interaction
$$๐_i\overline{๐}_jH_d\left(\frac{\theta }{M}\right)^{n_{ij}}$$
(13)
can appear in the potential as long as the family charges balance under the new symmetry. As $`\theta `$ acquires a $`vev`$, this leads to a suppression of the Yukawa couplings of the order of $`\lambda ^{n_{ij}}`$ for each matrix element, with $`\lambda =\theta /M`$ identified with the Cabibbo angle, and $`M`$ is the natural cut-off of the effective low energy theory. As a consequence of the charge balance equation
$$X_{if}^{[d]}+n_{ij}X_\theta =0,$$
(14)
the exponents of the suppression are related to the charge of the standard model-invariant operator , the sum of the charges of the fields that make up the the invariant.
This simple Ansatz, together with the seesaw mechanism, implies that the family structure of the neutrino mass matrix is determined by the charges of the left-handed lepton doublet fields.
Each charged lepton Yukawa coupling $`L_i\overline{N}_jH_u`$, has an extra charge $`X_{L_i}+X_{Nj}+X_H`$, which gives the Cabibbo suppression of the $`ij`$ matrix element. Hence, the orders of magnitude of these couplings can be expressed as
$$\left(\begin{array}{ccc}\lambda ^{l_1}& 0& 0\\ 0& \lambda ^{l_2}& 0\\ 0& 0& \lambda ^{l_3}\end{array}\right)\widehat{Y}\left(\begin{array}{ccc}\lambda ^{p_1}& 0& 0\\ 0& \lambda ^{p_2}& 0\\ 0& 0& \lambda ^{p_3}\end{array}\right),$$
(15)
where $`\widehat{Y}`$ is a Yukawa matrix with no Cabibbo suppressions, $`l_i=X_{L_i}/X_\theta `$ are the charges of the left-handed doublets, and $`p_i=X_{N_i}/X_\theta `$, those of the singlets. The first matrix forms half of the MNS matrix. Similarly, the mass matrix for the right-handed neutrinos, $`\overline{N}_i\overline{N}_j`$ will be written in the form
$$\left(\begin{array}{ccc}\lambda ^{p_1}& 0& 0\\ 0& \lambda ^{p_2}& 0\\ 0& 0& \lambda ^{p_3}\end{array}\right)\left(\begin{array}{ccc}\lambda ^{p_1}& 0& 0\\ 0& \lambda ^{p_2}& 0\\ 0& 0& \lambda ^{p_3}\end{array}\right).$$
(16)
The diagonalization of the seesaw matrix is of the form
$$L_iH_u\overline{N}_j\left(\frac{1}{\overline{N}\overline{N}}\right)_{jk}\overline{N}_kH_uL_l,$$
(17)
from which the Cabibbo suppression matrix from the $`\overline{N}_i`$ fields cancels, leaving us with
$$\left(\begin{array}{ccc}\lambda ^{l_1}& 0& 0\\ 0& \lambda ^{l_2}& 0\\ 0& 0& \lambda ^{l_3}\end{array}\right)\widehat{}\left(\begin{array}{ccc}\lambda ^{l_1}& 0& 0\\ 0& \lambda ^{l_2}& 0\\ 0& 0& \lambda ^{l_3}\end{array}\right),$$
(18)
where $`\widehat{}`$ is a matrix with no Cabibbo suppressions. The Cabibbo structure of the seesaw neutrino matrix is determined solely by the charges of the lepton doublets! As a result, the Cabibbo structure of the MNS mixing matrix is also due entirely to the charges of the three lepton doublets. This general conclusion depends on the existence of at least one Abelian family symmetry, which we argue is implied by the observed structure in the quark sector.
The Wolfenstein parametrization of the CKM matrix ,
$$\left(\begin{array}{ccc}1& \lambda & \lambda ^3\\ \lambda & 1& \lambda ^2\\ \lambda ^3& \lambda ^2& 1\end{array}\right),$$
(19)
and the Cabibbo structure of the quark mass ratios
$$\frac{m_u}{m_t}\lambda ^8\frac{m_c}{m_t}\lambda ^4;\frac{m_d}{m_b}\lambda ^4\frac{m_s}{m_b}\lambda ^2,$$
(20)
can be reproduced by a simple family-traceless charge assignment for the three quark families, namely
$$X_{๐,\overline{๐ฎ},\overline{๐}}=(2,1,1)+\eta _{๐,\overline{๐ฎ},\overline{๐}}(1,0,1),$$
(21)
where $``$ is baryon number, $`\eta _{\overline{๐}}=0`$, and $`\eta _๐=\eta _{\overline{๐ฎ}}=2`$. Two striking facts are evident:
* the charges of the down quarks, $`\overline{๐}`$, associated with the second and third families are the same,
* $`๐`$ and $`\overline{๐ฎ}`$ have the same value for $`\eta `$.
To relate these quark charge assignments to those of the leptons, we need to inject some more theoretical prejudices. Assume these family-traceless charges are gauged, and not anomalous. Then to cancel anomalies, the leptons must themselves have family charges.
Anomaly cancellation generically implies group structure. In $`SO(10)`$, baryon number generalizes to $``$, where $``$ is total lepton number, and in $`SU(5)`$ the fermion assignment is $`\overline{\mathrm{๐}}=\overline{๐}+L`$, and $`\mathrm{๐๐}=๐+\overline{๐ฎ}+\overline{e}`$. Thus anomaly cancellation is easily achieved by assigning $`\eta =0`$ to the lepton doublet $`L_i`$, and $`\eta =2`$ to the electron singlet $`\overline{e}_i`$, and by generalizing baryon number to $``$, leading to the charges
$$X_{๐,\overline{๐ฎ},\overline{๐},L,\overline{e}}=()(2,1,1)+\eta _{๐,\overline{๐ฎ},\overline{๐}}(1,0,1),$$
(22)
where now $`\eta _{\overline{๐}}=\eta _L=0`$, and $`\eta _๐=\eta _{\overline{๐ฎ}}=\eta _{\overline{e}}=2`$. It is interesting to note that $`\eta `$ is at least in $`E_6`$. The origin of such charges is not clear, as it implies in the superstring context, rather unconventional compactification.
As a result, the charges of the lepton doublets are simply $`X_{L_i}=(2,1,1)`$. We have just argued that these charges determine the Cabibbo structure of the MNS lepton mixing matrix to be
$$๐ฐ_{MNS}\left(\begin{array}{ccc}1& \lambda ^3& \lambda ^3\\ \lambda ^3& 1& 1\\ \lambda ^3& 1& 1\end{array}\right),$$
(23)
implyingno Cabibbo suppression in the mixing between $`\nu _\mu `$ and $`\nu _\tau `$. This is consistent with the SuperK discovery and with the small angle MSW solution to the solar neutrino deficit. One also obtains a much lighter electron neutrino, and Cabibbo-comparable masses for the muon and tau neutrinos. Notice that these predictions are subtly different from those of grand unification, as they yield $`\nu _e\nu _\tau `$ mixing. It also implies a much lighter electron neutrino, and Cabibbo-comparable masses for the muon and tau neutrinos.
On the other hand, the scale of the neutrino mass values depend on the family trace of the family charge(s). Here we simply quote the results our model . The masses of the right-handed neutrinos are found to be of the following orders of magnitude
$$m_{\overline{N}_e}M\lambda ^{13};m_{\overline{N}_\mu }m_{\overline{N}_\tau }M\lambda ^7,$$
(24)
where $`M`$ is the scale of the right-handed neutrino mass terms, assumed to be the cut-off. The seesaw mass matrix for the three light neutrinos comes out to be
$$m_0\left(\begin{array}{ccc}a\lambda ^6& b\lambda ^3& c\lambda ^3\\ b\lambda ^3& d& e\\ c\lambda ^3& e& f\end{array}\right),$$
(25)
where we have added for future reference the prefactors $`a,b,c,d,e,f`$, all of order one, and
$$m_0=\frac{v_u^2}{M\lambda ^3},$$
(26)
where $`v_u`$ is the $`vev`$ of the Higgs doublet. This matrix has one light eigenvalue
$$m_{\nu _e}m_0\lambda ^6.$$
(27)
Without a detailed analysis of the prefactors, the masses of the other two neutrinos come out to be both of order $`m_0`$. The mass difference announced by superK cannot be reproduced without going beyond the model, by taking into account the prefactors. The two heavier mass eigenstates and their mixing angle are written in terms of
$$x=\frac{dfe^2}{(d+f)^2},y=\frac{df}{d+f},$$
(28)
as
$$\frac{m_{\nu _2}}{m_{\nu _3}}=\frac{1\sqrt{14x}}{1+\sqrt{14x}},\mathrm{sin}^22\theta _{\mu \tau }=1\frac{y^2}{14x}.$$
(29)
If $`4x1`$, the two heaviest neutrinos are nearly degenerate. If $`4x1`$, a condition easy to achieve if $`d`$ and $`f`$ have the same sign, we can obtain an adequate split between the two mass eigenstates. For illustrative purposes, when $`0.03<x<0.15`$, we find
$$4.4\times 10^6\mathrm{\Delta }m_{\nu _e\nu _\mu }^210^5rmeV^2,$$
(30)
which yields the correct non-adiabatic MSW effect, and
$$5\times 10^4\mathrm{\Delta }m_{\nu _\mu \nu _\tau }^25\times 10^3\mathrm{eV}^2,$$
(31)
for the atmospheric neutrino effect. These were calculated with a cut-off, $`10^{16}\mathrm{GeV}<M<4\times 10^{17}\mathrm{GeV}`$, and a mixing angle, $`0.9<\mathrm{sin}^22\theta _{\mu \tau }<1`$. This value of the cut-off is compatible not only with the data but also with the gauge coupling unification scale, a necessary condition for the consistency of our model, and more generally for the basic ideas of Grand Unification.
## 6 Outlook
Theoretical predictions of neutrino masses and mixings depend on developing a credible theory of flavor. We have presented two flavor schemes, which predicted not only maximal $`\nu _\mu \nu _\tau `$ mixing, but also small $`\nu _e\nu _\mu `$ mixings. Neither scheme includes sterile neutrinos. The present experimental situation is somewhat unclear: the LSND results imply the presence of a sterile neutrino; and superK favors $`\nu _\mu \nu _\tau `$ oscillation over $`\nu _\mu \nu _{\mathrm{sterile}}`$. The origin of the solar neutrino deficit remains a puzzle, which several possible explanations. One is the non-adiabatic MSW effect in the Sun, which our theoretical ideas seem to favor, but it is an experimental question which is soon to be answered by the continuing monitoring of the $`{}_{}{}^{8}B`$ spectrum by SuperK, and the advent of the SNO detector. Neutrino physics is at an exciting stage, and experimentally vibrant, as upcoming measurements will help us sharpen our understanding of fundamental interactions.
## 7 Acknowledgements
I wish to thank Professor Tran Thanh Van for inviting me to this instructive conference which has not only increased my comprehension and awareness of present issues in high energy physics, but also contributed to my culture gรฉnรฉrale. This research was supported in part by the department of energy under grant DE-FG02-97ER41029. |
no-problem/0001/physics0001018.html | ar5iv | text | # Hierarchy of time scales and quasitrapping in the ๐-atom micromaser
## Conclusions and discussion
The main result of our work is the discovered possibility to purposefully create in the cavity quasistable states close to Fock states. We analyzed the dynamics of the micromaser pumped by $`N`$-atomic clusters . Our approach generalizes the basic model of the one atom micromaser and can be experimentally realized. We assumed the point-like nature of $`N`$-atomic clusters. This assumption can easily be realized in practice when clusters are created in a gas flow by focused laser pulses in the light range. In this case the width of the beam is of order of few microns while the size of the cavity can be of order of few millimeters. In our work we have pointed out the conditions when the time evolution of a (sub)diagonal of the reduced density matrix is independent of the other elements of the density matrix. We have investigated the properties of the spectrum of the evolution operator (see Figures 1,2) and discussed their connection to the properties of the RDM dynamics. We have discussed the hierarchy of the time scales of the micromaser dynamics and have shown that the sectors of the spectrum around zero are responsible for rapid processes while the sectors close to 1 correspond to quasi-equilibrium. For the first time in the existing literature we have introduced an important notion of the quasitrapped states. The Figure 6 shows that these states are close to the Fock states. The domains in the Fock space corresponding to quasitrapping are rather narrow, their locations change smoothly with variations of the number of atoms in the cluster. This means that the overall picture of the dynamics is stable with respect to small variations of the number of atoms in a cluster. In our future work we plan to investigate this phenomenon in a greater detail as well as to study how the properties of the quasitrapped states depend on the choice of the initial density matrix of the $`N`$-atomic cluster.
## Figure captions
Figure 1. The spectrum of $`W(0,\tau )`$ for $`N=1,5,10`$ and $`g\tau =1.355`$ .
Figure 2. The spectrum of the evolution operator $`S(N)`$ in ascending order for $`N=1,15`$, $`N_{ex}=20,\mathrm{}`$, and $`g\tau =1.355`$ .
Figure 3. The rates of change of integral probabilities of the Fock states in the second $`14n24`$ and the third $`39n49`$ quasitrapping domains for $`N=10`$ .
Figure 4. Integral probabilities of the Fock states in the second $`14n24`$ and the third $`39n49`$ quasitrapping domains for $`N=1`$ .
Figure 5. Integral probabilities of the Fock states in the second $`14n24`$ and the third $`39n49`$ quasitrapping domains for $`N=10`$ .
Figure 6. The photon number distributions of the diagonal elements of RDM at the $`l`$ -moments of maximal probabilities of the Fock states in the second $`14n24`$ and the third $`39n49`$ domains of quasitrapping. |
no-problem/0001/quant-ph0001049.html | ar5iv | text | # A Generalized Jaynes-Cummings Hamiltonian and Supersymmetric Shape-Invariance
## I Introduction
Supersymmetric quantum mechanics deals with pairs of Hamiltonians which have the same energy spectra, but different eigenstates. A number of such pairs of Hamiltonians share an integrability condition called shape invariance . Although not all exactly-solvable problems are shape-invariant , shape invariance, especially in its algebraic formulation , is a powerful technique to study exactly-solvable systems.
Supersymmetric quantum mechanics is generally studied in the context of one-dimensional systems. The partner Hamiltonians
$`\widehat{H}_1`$ $`=`$ $`\widehat{A}^{}\widehat{A}`$ (2)
$`\widehat{H}_2`$ $`=`$ $`\widehat{A}\widehat{A}^{},`$ (3)
are most readily written in terms of one-dimensional operators
$`\widehat{A}`$ $``$ $`W(x)+{\displaystyle \frac{i}{\sqrt{2m}}}\widehat{p},`$ (5)
$`\widehat{A}^{}`$ $``$ $`W(x){\displaystyle \frac{i}{\sqrt{2m}}}\widehat{p},`$ (6)
where $`W(x)`$ is the superpotential. Attempts were made to generalize supersymmetric quantum mechanics and the concept of shape-invariance beyond one-dimensional and spherically-symmetric three-dimensional problems. These include non-central , non-local , and periodic potentials; a three-body problem in one-dimension with a three-body force ; N-body problem ; and coupled-channel problems . It is not easy to find exact solutions to these problems. For example, in the coupled-channel case a general shape-invariance is only possible in the limit where the superpotential is separable which corresponds to the well-known sudden approximation in the coupled-channel problem . Our goal in this article is to introduce a class of shape-invariant coupled-channel problems which correspond to the generalization of the Jaynes-Cummings Hamiltonian .
## II Shape Invariance
The Hamiltonian $`\widehat{H}_1`$ of Eq. (I) is called shape-invariant if the condition
$$\widehat{A}(a_1)\widehat{A}^{}(a_1)=\widehat{A}^{}(a_2)\widehat{A}(a_2)+R(a_1),$$
(7)
is satisfied . In this equation $`a_1`$ and $`a_2`$ represent parameters of the Hamiltonian. The parameter $`a_2`$ is a function of $`a_1`$ and the remainder $`R(a_1)`$ is independent of the dynamical variables such as position and momentum. As it is written the condition of Eq. (7) does not require the Hamiltonian to be one-dimensional, and one does not need to choose the ansatz of Eq. (I). In the cases studied so far the parameters $`a_1`$ and $`a_2`$ are either related by a translation or a scaling . Introducing the similarity transformation that replaces $`a_1`$ with $`a_2`$ in a given operator
$$\widehat{T}(a_1)\widehat{O}(a_1)\widehat{T}^{}(a_1)=\widehat{O}(a_2)$$
(8)
and the operators
$$\widehat{B}_+=\widehat{A}^{}(a_1)\widehat{T}(a_1)$$
(9)
$$\widehat{B}_{}=\widehat{B}_+^{}=\widehat{T}^{}(a_1)\widehat{A}(a_1),$$
(10)
the Hamiltonians of Eq. (I) take the forms
$$\widehat{H}_1=\widehat{B}_+\widehat{B}_{}.$$
(11)
and
$$\widehat{H}_2=\widehat{T}\widehat{B}_{}\widehat{B}_+\widehat{T}^{}.$$
(12)
Using Eq. (7) one can also easily prove the commutation relation
$$[\widehat{B}_{},\widehat{B}_+]=\widehat{T}^{}(a_1)R(a_1)\widehat{T}(a_1)R(a_0),$$
(13)
where we used the identity
$$R(a_n)=\widehat{T}(a_1)R(a_{n1})\widehat{T}^{}(a_1),$$
(14)
valid for any $`n`$. The ground state of the Hamiltonian $`\widehat{H}_1`$ satisfies the condition
$$\widehat{A}\psi _0=0=\widehat{B}_{}\psi _0.$$
(15)
The $`n`$-th excited state of $`\widehat{H}_1`$ is given by
$$\psi _n\left(\widehat{B}_+\right)^n\psi _0$$
(16)
with the eigenvalue
$$\epsilon _n=\underset{k=1}{\overset{n}{}}R(a_k).$$
(17)
Note that the eigenstate of Eq. (16) needs to be suitably normalized. We discuss the normalization of this state in the next section.
## III Generalization of the Jaynes-Cummings Hamiltonian
To generalize the Jaynes-Cummings Hamiltonian to general shape-invariant systems we introduce the operator
$$\widehat{S}=\sigma _+\widehat{A}+\sigma _{}\widehat{A}^{},$$
(18)
where
$$\sigma _\pm =\frac{1}{2}\left(\sigma _1\pm i\sigma _2\right),$$
(19)
with $`\sigma _i`$, with $`i=1,\mathrm{\hspace{0.17em}2},\mathrm{and}\mathrm{\hspace{0.17em}\hspace{0.17em}3}`$, being the Pauli matrices and the operators $`\widehat{A}`$ and $`\widehat{A}^{}`$ satisfy the shape invariance condition of Eq. (7). We search for the eigenstates of $`\widehat{S}`$. It is more convenient to work with the square of this operator, which can be written as
$$\widehat{S}^2=\left[\begin{array}{cc}\widehat{T}& 0\\ 0& \pm 1\end{array}\right]\left[\begin{array}{cc}\widehat{B}_{}\widehat{B}_+& 0\\ 0& \widehat{B}_+\widehat{B}_{}\end{array}\right]\left[\begin{array}{cc}\widehat{T}^{}& 0\\ 0& \pm 1\end{array}\right].$$
(20)
Note the freedom of sign choice in this equation, which results in two possible decompositions of $`\widehat{S}^2`$.
We next introduce the states
$$\mathrm{\Psi }_\pm =\left[\begin{array}{cc}\widehat{T}& 0\\ 0& \pm 1\end{array}\right]\left[\begin{array}{c}m\\ n\end{array}\right]$$
(21)
where $`m`$ and $`n`$ are the abbreviated notation for the states $`\psi _n`$ and $`\psi _m`$ of Eq. (16). Using Eqs. (13), (20) and (21) and the fact that the operator $`\widehat{T}`$ is unitary one gets
$`\widehat{S}^2\mathrm{\Psi }_\pm `$ $`=`$ $`\left[\begin{array}{cc}\widehat{T}& 0\\ 0& \pm 1\end{array}\right]\left[\begin{array}{cc}\widehat{B}_+\widehat{B}_{}+R(a_0)& 0\\ 0& \widehat{B}_+\widehat{B}_{}\end{array}\right]\left[\begin{array}{c}m\\ n\end{array}\right]`$ (22)
$`=`$ $`\left[\begin{array}{cc}\widehat{T}& 0\\ 0& \pm 1\end{array}\right]\left[\begin{array}{cc}\epsilon _m+R(a_0)& 0\\ 0& \epsilon _n\end{array}\right]\left[\begin{array}{c}m\\ n\end{array}\right].`$ (23)
Using Eqs. (14) and (17) one can write
$`\widehat{T}\left[\epsilon _m+R(a_0)\right]\widehat{T}^{}`$ $`=`$ $`\widehat{T}\left[R(a_1)+R(a_2)+\mathrm{}+R(a_m)+R(a_0)\right]\widehat{T}^{}`$ (24)
$`=`$ $`R(a_2)+R(a_3)+\mathrm{}+R(a_{m+1})+R(a_1)=\epsilon _{m+1}.`$ (25)
Hence the states
$$\mathrm{\Psi }_m_\pm =\frac{1}{\sqrt{2}}\left[\begin{array}{cc}\widehat{T}& 0\\ 0& \pm 1\end{array}\right]\left[\begin{array}{c}m\\ m+1\end{array}\right],m=0,1,2,\mathrm{}$$
(26)
are the normalized eigenstates of the operator $`\widehat{S}^2`$
$$\widehat{S}^2\mathrm{\Psi }_m_\pm =\epsilon _{m+1}\mathrm{\Psi }_m_\pm .$$
(27)
One can also calculate the action of the operator $`\widehat{S}`$ on this state
$$\widehat{S}\mathrm{\Psi }_m_\pm =\frac{1}{\sqrt{2}}\left[\begin{array}{c}\pm \widehat{T}\widehat{B}_{}m+1\\ \widehat{B}_+m\end{array}\right].$$
(28)
Introducing the operator
$$\widehat{Q}^{}=\left(\widehat{B}_+\widehat{B}_{}\right)^{1/2}\widehat{B}_+$$
(29)
one can write the normalized eigenstate of $`\widehat{H}_1`$ as
$$m=\left(\widehat{Q}^{}\right)^m0.$$
(30)
Using Eqs. (29) and (30) one gets
$$\widehat{B}_+m=\sqrt{\epsilon _{m+1}}m+1.$$
(31)
Similarly
$`\widehat{T}\widehat{B}_{}m+1`$ $`=`$ $`\widehat{T}\widehat{B}_{}{\displaystyle \frac{1}{\sqrt{\widehat{B}_+\widehat{B}_{}}}}\widehat{B}_+m`$ (32)
$`=`$ $`\widehat{T}\sqrt{\widehat{B}_{}\widehat{B}_+}m`$ (33)
$`=`$ $`\widehat{T}\sqrt{\epsilon _m+R(a_0)}m`$ (34)
$`=`$ $`\sqrt{\epsilon _{m+1}}\widehat{T}m.`$ (35)
Using Eqs. (31) and (35), Eq. (28) takes the form
$`\widehat{S}\mathrm{\Psi }_m_\pm `$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}\sqrt{\epsilon _{m+1}}\left[\begin{array}{c}\pm \widehat{T}m\\ m+1\end{array}\right]`$ (36)
$`=`$ $`\pm \sqrt{\epsilon _{m+1}}\mathrm{\Psi }_m_\pm .`$ (37)
Eqs. (27) and (37) indicate that the Hamiltonian
$$\widehat{H}=\widehat{S}^2+\sqrt{\mathrm{}\mathrm{\Omega }}\widehat{S},$$
(38)
where $`\mathrm{\Omega }`$ is a constant, has the eigenstates $`\mathrm{\Psi }_m_\pm `$
$$\widehat{H}\mathrm{\Psi }_m_\pm =\left(\epsilon _{m+1}\pm \sqrt{\mathrm{}\mathrm{\Omega }}\sqrt{\epsilon _{m+1}}\right)\mathrm{\Psi }_m_\pm $$
(39)
with the exception of the ground state. It is easy to show that the ground state is
$$\mathrm{\Psi }_0=\left[\begin{array}{c}0\\ 0\end{array}\right],$$
(40)
with eigenvalue 0. To emphasize the structure of Eq. (39) as the generalized Jaynes-Cummings Hamiltonian we rewrite it as
$$\widehat{H}=\widehat{A}^{}\widehat{A}+\frac{1}{2}[\widehat{A},\widehat{A}^{}]\left(\sigma _3+1\right)+\sqrt{\mathrm{}\mathrm{\Omega }}\left(\sigma _+\widehat{A}+\sigma _{}\widehat{A}^{}\right).$$
(41)
When $`\widehat{A}`$ describes the annihilation operator for the harmonic oscillator, $`[\widehat{A},\widehat{A}^{}]=\mathrm{}\omega `$, where $`\omega `$ is the oscillator frequency. In this case Eq. (41) reduces to the standard Jaynes-Cummings Hamiltonian.
When $`\widehat{A}^{}\widehat{A}`$ describes the Morse Hamiltonian, Eq. (41) takes the form
$`\widehat{H}`$ $`=`$ $`{\displaystyle \frac{\widehat{p}^2}{2M}}+V_0\left(e^{2\lambda x}2e^{\lambda x}\right)+\sqrt{V_0}{\displaystyle \frac{\mathrm{}\lambda }{\sqrt{2M}}}\left(\sigma _3+1\right)e^{\lambda x}`$ (43)
$`+\sqrt{\mathrm{}\mathrm{\Omega }V_0}\left[\sigma _1\left(1{\displaystyle \frac{\mathrm{}\lambda }{2\sqrt{2MV_0}}}e^{\lambda x}\right)\sigma _2{\displaystyle \frac{\widehat{p}}{\sqrt{2MV_0}}}\right]`$
with the energy eigenvalues
$`E_m`$ $`=`$ $`\sqrt{V_0}{\displaystyle \frac{\mathrm{}\lambda }{\sqrt{2M}}}(m+1)\left[2{\displaystyle \frac{\mathrm{}\lambda }{\sqrt{2MV_0}}}(m+2)\right]`$ (45)
$`\pm \left\{\mathrm{}\mathrm{\Omega }\sqrt{V_0}{\displaystyle \frac{\mathrm{}\lambda }{\sqrt{2M}}}(m+1)\left[2{\displaystyle \frac{\mathrm{}\lambda }{\sqrt{2MV_0}}}(m+2)\right]\right\}^{\frac{1}{2}}.`$
Both harmonic oscillator and Morse potential are shape-invariant potentials where parameters are related by a translation. It is also straightforward to use those shape-invariant potentials where the parameters are related by a scaling in writing down Eq. (41).
## IV Conclusions
In this article we introduced a class of shape-invariant bound-state problems which represent two-level systems. The corresponding coupled-channel Hamiltonians generalize the Jaynes-Cummings Hamiltonian. If we take $`\widehat{H}_1`$ to be the simplest shape-invariant system, namely the harmonic oscillator, our Hamiltonian, Eq. (41), reduces to the standard Jaynes-Cummings Hamiltonian, which has been extensively used to model a single field mode on resonance with atomic transitions.
In this article we only addressed generalization of the Jaynes-Cummings model to other shape-invariant bound state systems. Supersymmetric quantum mechanics has been applied to alpha particle and Coulomb scattering problems. More recently shape-invariance was utilized to calculate quantum tunneling probabilities . It may be possible to generalize our results to such continuum problems. Such an investigation will be deferred to a later publication.
## ACKNOWLEDGMENTS
This work was supported in part by the U.S. National Science Foundation Grant No. PHY-9605140 at the University of Wisconsin, and in part by the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation. A.B.B. acknowledges the support of the Alexander von Humboldt-Stiftung. M.A.C.R. acknowledges the support of Fundaรงรฃo de Amparo ร Pesquisa do Estado de Sรฃo Paulo (Contract No. 98/13722-2). A.N.F.A. acknowledges the support of Fundaรงรฃo Coordenaรงรฃo de Aperfeiรงoamento de Pessoal de Nรญvel Superior (Contract No. BEX0610/96-8). A.B.B. thanks to the Max-Planck-Institut fรผr Kernphysik and M.A.C.R. to the Nuclear Theory Group at University of Wisconsin for the very kind hospitality. |
no-problem/0001/astro-ph0001257.html | ar5iv | text | # Identification of the Coronal Sources of the Fast Solar Wind
## 1 INTRODUCTION
One of the primary objectives of the SOHO mission is the identification of the source and acceleration mechanisms of the fast solar wind. Although polar coronal holes and regions with open magnetic field configuration were recognized long ago to be at the origin of the fast wind, the paucity of direct coronal hole observations from space in the long interval between the Skylab and SOHO missions prevented in the last two decades a real progress in studying how the fine structure of coronal holes regulates coronal expansion. Polar plumes, that are the most prominent features in polar coronal holes, and interโplume regions have both been proposed to play an important role in the generation of the highโspeed wind (e.g., Ahmad and Withbroe, 1977; Wang, 1994). SOHO observations have confirmed that plumes are denser, cooler and less dynamic structures than the surrounding regions (Doschek et al., 1997; Noci et al., 1997; Wilhelm et al., 1998). Nevertheless they are site of quasiโperiodic compressional waves (DeForest and Gurman, 1998), identified as slow magnetosonic ones that however can carry only a fraction ($``$2$`\times `$10<sup>3</sup> erg cm<sup>-2</sup> s<sup>-1</sup>) of the energy required to accelerate the fast solar wind, $``$10<sup>5</sup> erg cm<sup>-2</sup> s<sup>-1</sup> (Ofman et al., 1999). The Ultraviolet Coronagraph Spectrometer (UVCS) onboard SOHO, is the first instrument that can allow us to determine the outflow velocity of the wind plasma in the outer corona and therefore to distinguish between the dynamic conditions of plumes and surrounding regions, at the height where the wind velocity has become significant. The results of such an analysis performed on the basis of the Doppler dimming of the O VI $`\lambda \lambda `$ 1032, 1037 ร
, detected in the period April 6โ9, 1996, are reported in the present paper. Preliminary results of this study are found in the Ph.D. thesis by Giordano (1998).
## 2 OBSERVATION OF THE POLAR CORONAL HOLE
In the first detailed observation of the ultraviolet emission of a coronal hole above 1.5 R, performed at solar minimum with UVCS during April 6โ9, 1996, the polar region was scanned over an interval of 72 hours, starting on April 6, 1996 at 07:22 UT. The instantaneous field of view (29 arcmin x 14 arcsec), centered on the North pole, was moved by 14 arcsecond steps in the radial direction, thus ensuring a continuous coverage of the corona between 1.45 to 2.48 R. For each spatial element, 14โ x 14โ, the two O VI lines, at $`\lambda `$ 1031.91 ร
and $`\lambda `$ 1037.61 ร
were detected with spectral resolution of 0.2 ร
, and integration time of 3600 seconds. The raw data are calibrated according to the standard procedure (Gardner et al., 1996). The O VI line profiles are then fitted with a function resulting from the convolution of a gaussian function, for the solar spectral profile, with the Voigt curve describing the instrumental broadening, and a function accounting for the width of the slit (Giordano, 1998). The best fit is obtained by adjusting as free parameters standard deviation, $`\sigma _\lambda `$, mean wavelength, $`\lambda _o`$, and peak, $`I(\lambda _o)`$, of the solar profile. The observed line intensity is then the integral $`I_{tot}=\sqrt{2\pi }I(\lambda _o)\sigma _\lambda `$.
In the O VI images of the polar hole observed on April 6โ9, plumes are clearly identified at least up to 2 R. Four main plumes are present within $`\pm `$14 from the North pole, as the O VI 1032 image shows in Figure 1. They appear as bright broad features, dimming with heliodistance. Since the observation time to scan the corona out to 2 R is 1.4 days, plumes are either fairly stable or they tend to form again in almost the same position. In 1.4 days the displacement of the plumes due to solar rotation is negligible. Their width at 1.7 R is roughly 5$`\times `$10<sup>9</sup>โ10<sup>10</sup> cm. Outside the central region, $`|\pm 14^{}|`$, plumes are weaker and fewer as shown by the intensity along the heliocentric circumference with radius 1.7 R (Figure 1).
Aim of the analysis is to determine the solar wind velocity in plumes and surrounding regions, including interโplume lanes and darker areas of lower plume population, outside $`\pm `$14, that is, background coronal hole regions, with the intent of identifying the source of the highโspeed wind. Therefore the different regions are studied at a height 1.7 R, where plumes are still well identified and the wind has acquired on the average a sufficiently high velocity, $``$ 100 km s<sup>-1</sup>(Strachan et al., 1993; Antonucci et al., 1997a, b; Kohl et al., 1997a, 1998; Giordano, Antonucci and Dodero, 1999).
The plume emission is averaged over the brightest peaks within $`\pm `$14 identified as dark segments in Figure 2, and the lane emission is averaged over the dimmest regions (dashed segments in Figure 2). The emission of dark background regions is averaged outside the interval $`\pm `$14. The background average heliodistance, 1.82 R, is higher than that of plumes and interโplume lanes, 1.72 R.
## 3 WIND VELOCITY IN PLUMES AND LANES
The solar wind velocity relative to the three different regimes that have been identified, plume, interโplume lanes and dark background, is then measured by determining the outflow velocity of the oxygen ions through the ratio of the Doppler dimmed O VI $`\lambda `$ 1032 and $`\lambda `$ 1037 ร
resonance lines (Noci, Kohl and Withbroe, 1987; Dodero et al., 1998; Li et al., 1998).
In the outer corona these lines are emitted both via collisional excitation of the coronal ions and via resonant scattering of photons coming from the transition region. The second process is of increasing importance as the corona becomes more rarefied. In the frame of reference of the expanding solar wind the relative wavelength shift of incident photons and coronal absorbing profiles, causes a dimming of the resonant emission that is a function of the outflow velocity (Beckers and Chipman, 1974). The wind velocity can then be determined by the intersections of the observed O VI intensity ratios with the emissivity ratios curves calculated with the code by Dodero et al. (1998), for the coronal conditions observed, or inferred, at 1.7 R. In a spherical symmetric corona, the ratios of the emissivities on the plane of the sky closely approximates the line intensity ratios (Noci, Kohl and Withbroe, 1987).
The electron temperature, $`T_e`$, is deduced from the SOHO observations that indicate values that remain below 1$`\times `$10<sup>6</sup> K in a coronal hole and decrease above 1.15 R(David et al., 1998). A rare measurement in a bright plume, at 1.6 R, yields a 4$`\times `$10<sup>5</sup> K temperature (Wilhelm et al., 1998). Higher coronal temperature values are determined on the basis of inโsitu charge state ionization measurements, performed with SWICS/Ulysses, which are however derived in the assumption of slow wind (Ko et al., 1997). Here, we assume the temperature roughly equal to 3$`\times `$10<sup>5</sup> K at 1.7 R for both plumes and surrounding regions. The wind velocity results do not change in any significant way if a higher temperature, e.g. 1$`\times `$10<sup>6</sup> K, is assumed outside plumes. The plume electron density is derived from white light observations performed during minimum solar activity in 1996 (Guhathakurta et al., 1999). Since the interโplume and background region density is not explicitly given in that paper, it is assumed to be equal to the quiet coronal hole (Table 1). The ratio of the plume and interโplume density is consistent with the values published by Cranmer et al. (1999).
The O VI coronal absorbing profiles along the lineโofโsight are directly measured with UVCS. The width of the ion velocity distribution along the lineโofโsight, $`v_{1/e}=\sqrt{2}\frac{c}{\lambda _o}\sigma _\lambda `$, is equivalent to the kinetic temperature, $`T_k=\frac{m_i}{2k_B}v_{1/e}^2`$, where $`k_B`$ is the Boltzmann constant and $`m_i`$ is the ion mass. Table 2 reports the observed intensity ratios, $`\rho `$, of the OVI 1037 to the OVI 1032 line and the observed kinetic temperature of the O VI ions along the lineโofโsight, $`T_k`$, together with the average heliodistance of the plume, interโplume and background regions. The observed $`T_k`$ values (Table 2) confirm that the width of the ion velocity distribution is broader in lanes and background regions than in plumes (Antonucci et al., 1997b; Noci et al., 1997).
In the computation of the O VI emissivity the ion velocity distribution is considered to be biโmaxwellian. Since the broadening of the O VI lines in regions of open magnetic field lines is much larger than expected on the basis of ionโproton and protonโelectron thermal balance (Antonucci et al., 1997a; Kohl et al., 1997a, b; Noci et al., 1997), it is reasonable to assume the width of the radial distribution (perpendicular to the lineโofโsight) as a variable between two extreme cases, that is, oxygenโelectron thermal balance along the radial direction and isotropic distribution, respectively. Furthermore, the existence of a degree of anisotropy in the oxygen velocity distribution is proven at least above 1.8 R(Kohl et al., 1997a, 1998; Antonucci, Giordano and Dodero, 1999; Cranmer et al., 1999). The emissivity ratios are then computed both for the radial velocity distribution characterized by $`T_{k,r}=T_e`$, (i.e. maximum of anisotropy, corresponding to oxygen ions and electrons in thermal equilibrium along the radial direction) in Figure 3, and by $`T_{k,r}=T_k`$, (i.e. isotropic velocity distribution) in Figure 4. Calculated and observed emissivity ratio of the O VI lines are compared to derive the outflow velocity.
## 4 OUTFLOW VELOCITY RESULTS
The expansion velocity, derived from the intersection of the emissivity ratio curve with the observed O VI ratio, in Figure 3 and 4, has not a unique value because of the lack of information on the radial ion velocity distribution, but it falls within a higher limit derived for isotropic oxygen velocity distribution and a lower value for the anisotropic case.
It is immediately obvious from the inspection of Figure 3 and 4, that lanes between plumes and dimmer background regions outside $`\pm 14^{}`$ are the privileged sites for the fast solar wind acceleration. In these regions, at 1.7 R, the fast solar wind has already reached speeds above 100 km s<sup>-1</sup>, while the plasma in plumes is either expanding slowly or remains almost static. In lanes the outflow velocity is between 105 km s<sup>-1</sup> and 150 km s<sup>-1</sup>, and in darker regions between 110 km s<sup>-1</sup> and 180 km s<sup>-1</sup> for anisotropic and isotropic ion velocity distribution, respectively (Table 3). These values are of the order of the average outflow velocity detected from the O VI line emission integrated along the instantaneous fieldโofโview, at this height, which is between 100 km s<sup>-1</sup> and 125 km s<sup>-1</sup>(Giordano, Antonucci and Dodero, 1999). The tendency to measure higher velocity in the darker background regions might reflect the fact that they are located on the average at higher heliodistances, where the wind is faster. The errors of the outflow velocity derived for isotropy are larger because in this case the derivative of the emissivity ratio is smaller than in the case of anisotropy, as shown in Figure 3 and 4. No significant outflow velocity can be found in plumes for isotropic velocity distribution. However if the distribution of the oxygen ion velocity deviates from isotropy, it is possible to detect a slow expansion at 65 km s<sup>-1</sup>, much lower than that found for interโplume lanes and background regions.
## 5 CONCLUSIONS
As a result of the spectroscopic analysis of the ultraviolet emission from a solar minimum polar hole in the outer corona, we can therefore conclude that there is strong evidence that the fast solar wind is preferentially accelerated in interโplume lanes and in the darker background regions of a polar coronal hole. It is interesting to note that these are also the regions where the O VI line broadening is enhanced relative to that observed in plumes (Antonucci et al., 1997b; Noci et al., 1997).
According to the UVCS results (e.g., Kohl et al., 1997a), the wind acceleration is higher where line profiles are broader. This fact has led to interpreting the excess line broadening in terms of preferential heating of the oxygen ions related to preferential acceleration of the solar wind. The correlation of higher outflow velocity and broader line profiles outside the plumes is consistent with this interpretation. This does not contradict the model by Wang (1994), which invokes higher heating at the base of plumes, since the enhanced broadening of the oxygen lines in interโplume lanes, evidence for preferential heating of the oxygen ions, is probably related to wave-particle interaction occurring higher up in the corona (Kohl et al., 1998; Cranmer et al., 1999).
The present results, which identify the regions surrounding plumes as the dominant sources of the highโspeed solar wind, together with those recently obtained by Hassler et al. (1999) and Peter and Judge (1999), who observed blue shifts at the base of polar coronal holes suggesting that the solar wind emanates from regions along the boundaries of magnetic network cells, finally indicate how to track the fast solar wind velocity field lines back to the coronal base.
The authors wish to acknowledge the financial support of the Italian Space Agency and NASA. |
# ISO Spectroscopy of H2 in Star Forming Regions
## 1 Introduction
Molecular hydrogen is expected to be ubiquitous in the circumstellar environment of Young Stellar Objects (YSOs). It is the main constituent of the molecular cloud from which the young star has formed and is also expected to be the main component of the circumstellar disk. Most of this material will be at temperatures of 20โ30 K and difficult to observe. However, some regions may be heated to temperatures of a few hundred K and produce observable H<sub>2</sub> emission. The intense UV radiation generated by accretion as well as by the central star itself will create a photodissociation region (PDR) in any surrounding neutral material, whose surface layer is heated by collisions with electrons photoelectrically ejected from grain surfaces. Another possibility to produce warm H<sub>2</sub> is in shocks caused by the interaction of an outflow with the surrounding molecular cloud. Shocks are usually divided into J- or Jump-shocks, and C- or Continuous-shocks. In J-shocks the molecular material is dissociated in the shock front, where the gas is heated to several times 10<sup>4</sup> degrees. Behind the shock front, molecular material will re-form, and warm H<sub>2</sub> may be observed in the post-shock gas. C-shocks, in contrast, are not sufficiently powerful to dissociate molecular material, but may produce observable amounts of H<sub>2</sub> within the shock front itself.
Until recently the study of H<sub>2</sub> in star forming regions has mainly concentrated on the study of the near-infrared ro-vibrational lines observable from the ground. However, the launch of the Infrared Space Observatory (ISO) has opened up the possibility to also study the mid-infrared pure rotational lines of H<sub>2</sub>, with much lower upper energy levels, and directly detect the thermal emission of warm H<sub>2</sub> in a wide variety of sources. In these proceedings we report on our study of H<sub>2</sub> lines in the ISO spectra of a sample of 21 YSOs. We will show that emission of warm H<sub>2</sub> is common in the environments of intermediate- and high-mass YSOs and can be explained by the phenomena of shocks and PDRs outlined above.
## 2 Observations and Analysis
ISO Short Wavelength Spectrometer (SWS; 2.4 โ 45 $`\mu `$m) spectra were obtained for a sample of 21 YSOs, mostly of intermediate and high mass. Data were reduced in a standard fashion using calibration files corresponding to OLP version 7.0. In each object, molecular hydrogen line fluxes or upper limits (computed as the total flux of a line with peak flux equal to 3$`\sigma `$) of 0โ0 S(0) to S(11), 1โ0 Q(1) to Q(6) and 1โ0 O(2) to O(7) were determined.
Pure rotational (0โ0) H<sub>2</sub> emission was detected in 12 out of our 21 sources. Ro-vibrational (1โ0) H<sub>2</sub> emission was detected in 4 sources, all of which were also detected in the pure rotational lines. A first inspection of our data shows that H<sub>2</sub> emission was only found in the vicinity of early-type ($`<`$ B4) stars or near embedded sources. Qualitatively this is in agreement with what one would expect: the strong UV fluxes of early-type stars are expected to produce extended PDRs, whereas embedded YSOs are expected to drive strong outflows, causing a shock as the outflow hits the surrounding molecular cloud.
The 28.2188 $`\mu `$m 0โ0 S(0) line was not detected. This shows directly that we did not detect the cool quiescent H<sub>2</sub> in the molecular cloud. A more quantitative analysis of our data can be made by plotting the log of $`N(\mathrm{J})/\mathrm{g}`$, the apparent column density for a given J upper level divided by the statistical weight, versus the energy of the upper level. For the statistical weight we have assumed the high temperature equilibrium relative abundances of 3:1 for the ortho and para forms of H<sub>2</sub>. The resulting excitation diagrams are shown in Fig. 1. For comparison we also show excitation diagrams of three sources known to be dominated by shocks (Orion BN/KL peak 1, IC 443 and RCW 103) and three well-known PDRs (the Orion Bar, NGC 2023 and S140), created using data from the literature. In Fig. 1 we also show Boltzmann distribution fits to the low-lying pure rotational lines. The fact that for most sources the points for ortho and para H<sub>2</sub> are both well fitted by this nearly straight line proves that our assumption on their relative abundances is correct. For a number of sources, the lines at higher energy levels can be seen to deviate strongly from the Boltzmann fit. In these cases, we have attempted to characterize this behaviour by fitting a second Boltzmann distribution to the higher energy level populations; this behaviour may reflect the combined effects of UV-pumped infrared fluorescence and the presence of a very warm, but thin, surface layer in a PDR, or the re-formation of H<sub>2</sub> in J-shocks. The resulting excitation temperatures and derived mass of molecular hydrogen are also indicated in Fig. 1.
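The excitation temperature of a component follows from a straight-line fit in the excitation diagram: for a Boltzmann-populated set of levels, ln(N<sub>J</sub>/g<sub>J</sub>) is linear in the upper-level energy with slope โ1/T<sub>ex</sub>. A minimal sketch (the column densities are illustrative numbers; only the H<sub>2</sub> level energies are standard values) is:

```python
import numpy as np

# Upper-level energies E_J/k_B [K] for the 0-0 S(0)..S(3) lines (J = 2..5)
E_over_k = np.array([510.0, 1015.0, 1682.0, 2504.0])
# Illustrative ln(N_J / g_J) values as read off an excitation diagram
ln_N_over_g = np.array([40.1, 39.2, 38.0, 36.5])

# Boltzmann distribution: ln(N/g) = const - E/(k_B T_ex)
slope, intercept = np.polyfit(E_over_k, ln_N_over_g, 1)
T_ex = -1.0 / slope
print(f"T_ex = {T_ex:.0f} K")   # ~ 550 K for these toy values
```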
Employing predictions of H<sub>2</sub> emission from PDR, J-shock and C-shock models by Burton et al. (1992), Hollenbach & McKee (1989) and Kaufman & Neufeld (1996), we determined the excitation temperature $`T_{\mathrm{rot}}`$ from the low-lying pure rotational levels as a function of density $`n`$ and either incident FUV flux $`G`$ (in units of the average interstellar FUV field G<sub>0</sub>) or shock velocity $`v_s`$ in an identical way as was done for the observations. The results of this procedure, plotted against $`G`$ (PDRs) or the total flux observed in all lines (shocks) are shown in Fig. 2. Note that there is considerable overlap between the $`T_{\mathrm{rot}}`$ predicted by PDR, J-shock and C-shock models. This means that pure rotational H<sub>2</sub> emission alone cannot distinguish between these mechanisms in all cases and additional information will be needed. However, our ISO spectra provide just such information. Detectable \[S i\] 25.25 $`\mu `$m emission means that a shock must be present, whereas PAH emission is indicative of the presence of a PDR. The presence or absence of ionic lines such as \[Si ii\] 34.82 $`\mu `$m can further distinguish between a C-shock and a J-shock. The information from these lines is used in Fig. 2 to distinguish between likely shocks (circles) and PDRs (squares).
In general, the PDR sources fall, within errors, in the parameter space outlined by the PDR models. Since the J-shocks only predict a very narrow range of $`T_{\mathrm{rot}}`$, only one source, AFGL 2591, is compatible with the observed H<sub>2</sub> emission arising in such a dissociative shock. All shock sources are compatible with the range in $`T_{\mathrm{rot}}`$ predicted by C-shock models. However, the detection of ionic lines in most of these sources shows that a J-shock component must be present. Most likely real astrophysical shocks are never as simple as the purely dissociative or non-dissociative shocks in the employed models, but are made up of a combination of the two, with the non-dissociative component dominating the H<sub>2</sub> spectrum.
## 3 Conclusions
We have shown that pure-rotational emission from warm H<sub>2</sub> is readily detectable in the vicinity of intermediate- and high-mass YSOs and can be used to gain insight into the physical conditions in the circumstellar material. The main mechanisms that produce warm H<sub>2</sub> in these types of environments are shocks and PDRs. No deviations from the 3:1 ortho/para ratio of H<sub>2</sub> were found for either type of heating mechanism. Both shocks and PDRs show a warm and a hot component in H<sub>2</sub>. The warm component probes the thermal emission from warm gas. For PDRs the hot component may reflect the combined effects of UV-pumped infrared fluorescence and the presence of a thin, very warm surface layer. In shocks the hot H<sub>2</sub> component may be due to the re-formation of H<sub>2</sub> with non-zero formation energy. The warm H<sub>2</sub> component in shocks appears to be dominated by the non-dissociative part of the shock. The evolution of YSOs is expected to be from shock-dominated to PDR-dominated, and H<sub>2</sub> may be one of the best tracers of the end of the outflow phase of a young star.
# Drift-Controlled Anomalous Diffusion: A Solvable Gaussian Model
## Abstract
We introduce a Langevin equation characterized by a time dependent drift. By assuming a temporal power-law dependence of the drift we show that a great variety of behavior is observed in the dynamics of the variance of the process. In particular diffusive, subdiffusive, superdiffusive and stretched exponentially diffusive processes are described by this model for specific values of the two control parameters. The model is also investigated in the presence of an external harmonic potential. We prove that the relaxation to the stationary solution is power-law in time, with an exponent controlled by one of the model parameters.
02.50.Ey, 05.40.Jc, 05.70.Ln
Diffusive stochastic processes, i.e. stochastic processes $`x(t)`$ characterized by a linear growth in time of the variance, $`<x^2(t)>\propto t`$, are quite common in physical systems. However deviations from a diffusive process are observed in several stochastic systems. Superdiffusive ($`<x^2(t)>\propto t^\nu `$ with $`\nu >1`$) and subdiffusive ($`<x^2(t)>\propto t^\nu `$ with $`\nu <1`$) random processes have been detected and investigated in physical and complex systems. A classical example of a superdiffusive random process is Richardsonโs observation that two particles moving in a turbulent fluid, which at time $`t=0`$ are placed very close to each other, have a relative separation $`\ell `$ at time $`t`$ that follows the relation $`<\ell ^2(t)>\propto t^3`$. More recent examples include anomalous kinetics in chaotic dynamics due to flights and trapping, anomalous diffusion in aggregates of amphiphilic molecules and anomalous diffusion in a two-dimensional rotating flow. Subdiffusive stochastic processes have also been detected and investigated. Examples include charge transport in amorphous semiconductors and the dynamics of a bead in polymers. Another class of stochastic processes which are not diffusive in a simple way is the one characterized by a variance with a stretched exponential time dependence. When such a process is Gaussian distributed, the probability of return to the origin $`P_0(t)`$ is described by the Kohlrausch law $`P_0(t)\propto \mathrm{exp}[-t^\nu ]`$ with $`\nu <1`$. Similar behaviors are observed in glassy systems and in random walks in ultrametric spaces.
The modeling of some of the above discussed anomalously diffusing stochastic processes has been done by using a variety of approaches. To cite some examples, we recall that superdiffusive and subdiffusive processes have been modeled by writing down a generalized diffusion equation, by introducing Lรฉvy walk models, by using a fractional Fokker-Planck equation approach or by using โad hocโ stochastic models such as, for example, the fractional Brownian motion.
In this rapid communication we introduce a class of Langevin equations able to describe all the different anomalous regimes discussed above for Gaussian processes. Specifically, we study the properties of the class of Langevin equations
$$\dot{x}+\gamma (t)x=\mathrm{\Gamma }(t),$$
(1)
where $`\gamma (t)`$ is a function of time $`t`$ and $`\mathrm{\Gamma }(t)`$ is a Langevin force with zero mean and with a correlation function given by $`<\mathrm{\Gamma }(t_2)\mathrm{\Gamma }(t_1)>=D\delta (t_2-t_1)`$. Equation (1) describes an Ornstein-Uhlenbeck process in the particular case $`\gamma (t)=\gamma `$. This equation is linear and solvable. For the sake of simplicity we set the boundary condition of Eq. (1) at $`t=0`$. The formal solution of Eq. (1) is

$$x(t)=x(0)G(t)+G(t)\int _0^t\frac{\mathrm{\Gamma }(s)}{G(s)}ds,$$

(2)

where $`G(t)\equiv \mathrm{exp}[-\int _0^t\gamma (s)ds]`$. By using this formal solution and all order correlation functions of $`\mathrm{\Gamma }(t)`$ we obtain all central moments of $`x(t)`$. The first two central moments are given by

$`<x(t)>=x(0)G(t)\equiv \mu (t),`$

$`<(x(t)-\mu (t))^2>=DG^2(t){\displaystyle \int _0^t}{\displaystyle \frac{1}{G^2(s)}}ds\equiv \sigma ^2(t).`$ (3)
The general relation between the higher-order even central moments and the second central moment of the investigated processes is the one observed in a Gaussian process. Moreover, the odd central moments are zero; hence we conclude that the stochastic process described by Eq. (1) is Gaussian.
We now consider the two-time correlation functions of the process $`x(t)`$ and of its time derivative $`\dot{x}(t)`$. In the following we label the two times $`t_1`$ and $`t_2`$ of the correlation functions in such a way that $`t_2\ge t_1`$. By using the formal solution of Eq. (2) we determine the two-time correlation function for the random variable $`x(t)`$,

$$<x(t_1)x(t_2)>=\mu (t_1)\mu (t_2)+\frac{G(t_2)}{G(t_1)}\sigma ^2(t_1).$$

(4)
In general, the correlation function $`<x(t_1)x(t_2)>`$ is not a function of $`t_2-t_1`$ and therefore the process is usually non-stationary.
By starting from the correlation function of $`x(t)`$ and from the formal solution of the Langevin equation we obtain the two-time correlation function of $`\dot{x}(t)`$ as

$`<\dot{x}(t_1)\dot{x}(t_2)>=\mu _v(t_1)\mu _v(t_2)+D\delta (t_2-t_1)+\gamma (t_2)G(t_2)F(t_1),`$ (5)

where $`\mu _v(t)=-\gamma (t)\mu (t)`$ indicates the mean of the time derivative $`\dot{x}(t)`$ and $`F(t_1)\equiv \left[\gamma (t_1)\sigma ^2(t_1)-D\right]/G(t_1)`$. The two-time correlation function of $`\dot{x}(t)`$ is the sum of a delta function and of a smooth function.
The Fokker-Planck equation associated with the Langevin equation given in Eq. (1) is

$$\frac{\partial \rho }{\partial t}=\frac{\partial }{\partial x}\left(\gamma (t)x\rho \right)+\frac{D}{2}\frac{\partial ^2\rho }{\partial x^2}.$$

(6)
This Fokker-Planck equation is the same as the Smoluchowski equation of a Brownian particle moving in a harmonic oscillator with a time dependent potential $`U(x)\propto \gamma (t)x^2`$. In our study we consider both positive and negative values of $`\gamma (t)`$. For positive values of $`\gamma (t)`$ the position $`x=0`$ is a stable equilibrium position whereas in the opposite case $`x=0`$ is an unstable equilibrium position.
We calculate the two-time conditional probability density $`P(x_2,t_2|x_1,t_1)`$ as the Green function of the Fokker-Planck equation. In our formalism $`t_1\le t_2`$. Our determination is done by working with the Fourier transform of $`P(x_2,t_2|x_1,t_1)`$ with respect to the $`x`$ variable. The equation for the Fourier transform of $`P(x_2,t_2|x_1,t_1)`$ is a first order partial differential equation, which can be solved by the method of characteristics. We obtain

$`P(x_2,t_2|x_1,t_1)={\displaystyle \frac{1}{\sqrt{2\pi s^2(t_2,t_1)}}}\mathrm{exp}\left(-{\displaystyle \frac{(x_2-m(t_2,t_1)x_1)^2}{2s^2(t_2,t_1)}}\right),`$ (7)

where $`m(t_2,t_1)=\mathrm{exp}\left[-\int _{t_1}^{t_2}\gamma (y)dy\right]`$, and $`s^2(t_2,t_1)=D\int _{t_1}^{t_2}\mathrm{exp}\left[-2\int _z^{t_2}\gamma (y)dy\right]dz`$. Hence the transition probability of Eq. (7) is a Gaussian transition probability. Moreover, Eq. (7) satisfies the Chapman-Kolmogorov equation. In fact, from a direct integration one can verify that $`P(x_3,t_3|x_1,t_1)=\int P(x_3,t_3|x_2,t_2)P(x_2,t_2|x_1,t_1)dx_2`$.
In the rest of this rapid communication we restrict our attention to the class of Langevin equations with a drift term which has a temporal behavior of the form
$$\gamma (t)\simeq a/t^\beta $$

(8)
for large time values. We study the stochastic process of Eq. (1) for different values of the parameters $`a`$ and $`\beta `$. Specifically we focus on the asymptotic temporal evolution of the variance and of the two-time correlation function of $`\dot{x}(t)`$. We recall that for $`a=0`$ Eq. (1) describes a Wiener process, with a variance increasing in a diffusive way, $`\sigma ^2(t)\propto t`$, and a delta correlated $`\dot{x}(t)`$. When $`\beta =0`$ Eq. (1) describes an Ornstein-Uhlenbeck process and two regimes are observed depending on the sign of $`a`$. When $`a>0`$ the stochastic process has a stationary Gaussian solution, whereas when $`a<0`$ there is no stationary state and the variance increases asymptotically in an exponential way, $`\sigma ^2(t)\propto \mathrm{exp}(2|a|t)`$. The two-time correlation of the velocity decreases in an exponential way as $`\mathrm{exp}(-|a|(t_2-t_1))`$.
The cases considered above are known; in addition to these cases we observe a large variety of new behaviors controlled by the specific values of the parameters $`a`$ and $`\beta `$. By investigating the $`(\beta ,a)`$ set of parameters, we detect different anomalous behaviors that we discuss below systematically by considering different regions of the $`\beta `$ parameter.
(i) Region with $`\beta >1`$. The process $`x(t)`$ is diffusive and its variance increases linearly with time for any value of $`a`$. The two-time correlation function of $`\dot{x}(t)`$ can be obtained starting from Eq. (5). A direct calculation gives
$$<\dot{x}(t_1)\dot{x}(t_2)>\simeq \frac{a}{t_2^\beta }\mathrm{exp}\left(-\frac{at_2^{1-\beta }}{1-\beta }\right)F(t_1).$$

(9)

The process $`\dot{x}(t)`$ can be positively or negatively correlated depending on the sign of $`a`$. When $`a>0`$ $`(a<0)`$ the correlation is negative (positive). This property is valid for any value of the $`\beta `$ parameter. By investigating the explicit form of Eq. (9) one observes that the correlation function decreases as a function of $`t_2`$ with a power-law dependence, $`<\dot{x}(t_1)\dot{x}(t_2)>\propto 1/t_2^\beta `$.
(ii) Region with $`\beta =1`$. In this case we observe two regimes. When $`a>-1/2`$ the variance increases in a diffusive way, $`\sigma ^2(t)\propto t`$. We find a different behavior when $`a<-1/2`$. In fact, by using Eq. (3) one can show that

$$\sigma ^2(t)\propto t^{2|a|}.$$

(10)
Therefore the particle performs a Gaussian superdiffusive random process. At the boundary value $`a=-1/2`$ the variance increases in a log divergent way as $`\sigma ^2(t)\propto t\mathrm{log}t`$. The two-time correlation function of $`\dot{x}(t)`$ is determined starting from Eq. (5). An explicit calculation gives

$$<\dot{x}(t_1)\dot{x}(t_2)>\simeq \frac{a}{t_2^{1+a}}F(t_1).$$

(11)
The two-time correlation function of $`\dot{x}(t)`$ shows a power-law time dependence and the $`\dot{x}(t)`$ process is a strongly dependent random process. We wish to point out that when $`a=-1`$ the diffusion of the $`x(t)`$ process is ballistic. This specific case has already been investigated by E. Nelson in the framework of stochastic mechanics. The Ito equation describing the stochastic process associated with the free evolution of a Gaussian quantum wave packet is
$$dx(t)=\frac{t-c}{t^2+c^2}xdt+dw(t),$$

(12)
where $`w(t)`$ is a Wiener process and $`c`$ is a constant. This stochastic equation describes the same random process as Eq. (1) for large values of $`t`$.
(iii) In the region $`0<\beta <1`$ we observe two regimes, which depend on the sign of $`a`$. When $`a>0`$ the variance increases as
$$\sigma ^2(t)\propto t^\beta .$$

(13)
This is the customary behavior observed in subdiffusive random processes. The two-time correlation function of $`\dot{x}(t)`$ behaves asymptotically as
$$<\dot{x}(t_1)\dot{x}(t_2)>\simeq -\frac{aD}{t_2^\beta }\mathrm{exp}\left(-\frac{a}{1-\beta }(t_2^{1-\beta }-t_1^{1-\beta })\right).$$

(14)
If the time interval $`\tau \equiv t_2-t_1`$ is shorter than $`t_1`$ the power-law term dominates in this equation and the stochastic process is power-law anti-correlated. For $`\tau \gg t_1`$ the two-time correlation function decreases exponentially.
When $`a<0`$ the variance increases as a stretched exponential
$$\sigma ^2(t)\propto \mathrm{exp}\left[\frac{2|a|}{1-\beta }t^{1-\beta }\right].$$

(15)
Since the process is Gaussian, the probability of return to the origin follows the Kohlrausch law, $`\rho (x(0),t)=1/(\sqrt{2\pi }\sigma (t))\propto \mathrm{exp}\left[-\frac{|a|}{1-\beta }t^{1-\beta }\right]`$. This kind of anomalous diffusion has been observed in glasses and in random walks on an ultrametric space. The two-time correlation function of $`\dot{x}(t)`$ increases with time as Eq. (14).
(iv) Region $`\beta <0`$. This region is essentially different from the previous ones because the absolute value of the drift term increases in time and eventually diverges. In this case we also observe two regimes depending on the sign of $`a`$. When $`a<0`$ the time evolution of the variance is formally the same as Eq. (15) of case (iii). In this region of the $`\beta `$ parameter the variance increases more than exponentially in time. When $`a>0`$ we find that the variance decreases with time with a power-law dependence, $`\sigma ^2(t)\propto 1/t^{|\beta |}`$. By using the Smoluchowski picture, we can interpret this behavior as the motion of a Brownian particle moving in a time dependent potential which leads to a localization of the particle at the point $`x=0`$. For both regimes the two-time correlation function of $`\dot{x}(t)`$ is given by Eq. (9).
We summarize the above discussed variety of diffusive behavior of $`\sigma ^2(t)`$ in Table I.
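These regimes are straightforward to check numerically. A minimal Euler-Maruyama sketch of Eq. (1) follows; the starting time $`t_0>0`$, which avoids the singularity of the drift at $`t=0`$, and all parameter values are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def variance_at(a, beta, t_max, D=1.0, dt=1e-3, n_paths=2000, t0=1.0):
    """Euler-Maruyama integration of Eq. (1), dx = -(a/t**beta) x dt + dW,
    with <dW^2> = D dt, over an ensemble of paths started at x(t0) = 0.
    Returns the sample variance at t_max."""
    x = np.zeros(n_paths)
    t = t0
    for _ in range(int((t_max - t0) / dt)):
        x += -(a / t**beta) * x * dt + np.sqrt(D * dt) * rng.standard_normal(n_paths)
        t += dt
    return x.var()

# beta = 1, a = -1: the variance should grow ~ t**2 (ballistic case of Eq. (10))
for t in (10.0, 20.0, 40.0):
    print(t, variance_at(a=-1.0, beta=1.0, t_max=t))
```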
The results obtained above refer to $`x(t)`$ random processes which are not stationary. We now consider the problem of a process $`x(t)`$ whose dynamics is controlled by a modified version of Eq. (1) in which the effects of the presence of an โexternalโ time-independent potential are taken into account. To this end we consider the specific case of an overdamped particle of mass M moving in a viscous medium in the presence of a potential having a time-dependence of the kind of Eq. (8) and a time-independent part. The equation of motion of such a system is
$$M\eta \dot{x}+g(t)x-F(x)=M\stackrel{~}{\mathrm{\Gamma }}(t),$$

(16)
where $`\eta `$ is the friction constant and $`\stackrel{~}{\mathrm{\Gamma }}(t)`$ is the Langevin force with diffusion constant $`2\eta k_BT/M`$. This equation is formally equivalent to
$$\dot{x}+\gamma (t)x+V^{\prime }(x)=\mathrm{\Gamma }(t),$$

(17)
when $`V^{\prime }(x)=-F(x)/M\eta `$, $`\gamma (t)=g(t)/M\eta `$ and $`D=2k_BT/M\eta `$. The prime in $`V(x)`$ indicates a spatial derivative. It is worth pointing out that when $`\gamma (t)`$ goes to zero as $`t`$ increases (as, for example, in the case $`\gamma (t)\simeq a/t^\beta `$ with $`\beta >0`$) Eq. (17) might have a stationary solution. The presence of a stationary solution depends on the exact shape of $`V(x)`$.
To investigate in a concrete example the relaxation dynamics of the probability density function of $`x(t)`$ towards the stationary solution we study Eq. (17) in the presence of an external harmonic potential, $`V(x)=\frac{1}{2}kx^2`$. In this case the process has a stationary state. A general solution of Eq. (17) is found by using the substitution $`\gamma (t)\to \gamma (t)+k/M\eta `$ in Eq. (2). In this case the variance of the process is equal to
$$\sigma ^2(t)=De^{-2kt/M\eta }G^2(t)\int _0^t\frac{e^{2ks/M\eta }}{G^2(s)}ds.$$

(18)
The asymptotic stationary value of $`\sigma ^2(t)`$ is $`\sigma _{st}^2\equiv DM\eta /2k=k_BT/k`$, which is independent of the parameters $`a`$ and $`\beta `$. However, we observe a relaxation dynamics whose functional form is controlled by the values of the $`a`$ and $`\beta `$ parameters. To detect the different relaxation dynamics we evaluate numerically the integral in Eq. (18) by setting $`\gamma (t)=a/(\tau ^\beta +t^\beta )`$. In Fig. 1 we show in a log-log plot the quantity $`\mathrm{\Delta }(t)\equiv |\sigma ^2(t)-\sigma _{st}^2|/\sigma _{st}^2`$ as a function of time. The quantity $`\mathrm{\Delta }(t)`$ provides a measure of the distance of the system from the stationary behavior. In Fig. 1 we show that the quantity $`\mathrm{\Delta }(t)`$ decreases following the power-law behavior $`\mathrm{\Delta }(t)\propto 1/t^\beta `$ for large values of $`t`$ and for all the investigated values of the parameters $`a`$ and $`\beta `$. In particular, when $`a>0`$, $`\sigma ^2(t)-\sigma _{st}^2`$ goes to zero taking negative values, whereas when $`a<0`$ the same quantity goes to zero taking positive values. In order to illustrate this result we show $`\sigma ^2(t)`$ as a function of time for $`\beta =0.6`$ and $`a=\pm 1`$ in the inset of Fig. 1. For $`0<\beta \le 1`$ (therefore including subdiffusive, superdiffusive and stretched exponentially diffusing processes) it is not possible to define a characteristic time scale for the convergence of $`\sigma ^2(t)`$ during the process of relaxation. This is due to the fact that the integral $`\int _{t_1}^{\infty }\mathrm{\Delta }(t)/\mathrm{\Delta }(t_1)dt=\infty `$. Although a power-law behavior is still observed when $`\beta >1`$, it is worth pointing out that in this interval of the $`\beta `$ parameter a typical time scale might be determined by considering the above discussed integral, which is finite in this region of the $`\beta `$ parameter.
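A minimal numerical sketch of this evaluation (direct quadrature of Eq. (18) for $`\gamma (t)=a/(\tau ^\beta +t^\beta )`$, with $`M\eta =1`$ and illustrative parameter values) is:

```python
import numpy as np
from scipy.integrate import quad

def sigma2(t, a, beta, k=1.0, D=1.0, tau=1.0):
    """Eq. (18) with M*eta = 1, G(t) = exp(-int_0^t gamma(s) ds); the
    prefactor and integrand are folded into one exponent for stability."""
    gamma = lambda s: a / (tau**beta + s**beta)
    int_gamma = lambda s: quad(gamma, 0.0, s)[0]
    ig_t = int_gamma(t)
    integrand = lambda s: np.exp(2.0 * k * (s - t) + 2.0 * (int_gamma(s) - ig_t))
    return D * quad(integrand, 0.0, t)[0]

sigma2_st = 0.5   # D/(2k) = k_B T / k for these parameters
for t in (5.0, 10.0, 20.0, 40.0):
    delta = abs(sigma2(t, a=1.0, beta=0.6) - sigma2_st) / sigma2_st
    print(t, delta)   # Delta(t) should fall off roughly as t**(-beta)
```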
In conclusion, the Langevin equations (1) and (17) with the choice of Eq. (8) describe non-stationary and stationary random processes showing a wide class of (normal and anomalous) diffusion behaviors. When a stationary state exists, the relaxation dynamics to the stationary state has a power-law time dependence. The processes modeled by Eqs. (1) and (17) are characterized by a time dependent drift term in the associated Fokker-Planck equation. Our model is complementary to Batchelorโs description of anomalous diffusion obtained by assuming a time-dependent diffusion term. Equations (1) and (17) can be used to model metastable systems in which one of the physical observables, such as, for example, the viscosity, is time dependent. They can also be used to develop simple and efficient algorithms generating realizations of random processes with controlled anomalous diffusion.
The authors thank INFM and MURST for financial support. F. Lillo acknowledges FSE-INFM for his fellowship. |
Areas of the Event Horizon and Stationary Limit Surface for a Kerr Black Hole
C. A. Pickett and J. D. Zund<sup>a</sup>
Departments of Mathematical Sciences and Physics,
New Mexico State University, Las Cruces, New Mexico, 88003
I. INTRODUCTION
The notions of an event horizon, (EH), and stationary limit surface, (SLS), are important concepts in black hole physics. The former requires no comment, while the latter is less well-known. The SLS geometrically bounds a region outside the EH known as an ergosphere. Inside the ergosphere a test particle cannot travel along the orbit of a timelike Killing vector and remain at rest<sup>1</sup>, and the red shift is infinite on the SLS<sup>2</sup>.
While the surface area of an EH has been related to the entropy of a black hole<sup>3</sup>, that of the SLS has not been given a physical interpretation. In this note our approximate evaluation of the area of a SLS suggests a reinterpretation of the area of an EH.
II. THE KERR BLACK HOLE
Employing the original form of the uncharged Kerr solution<sup>4,5</sup>, (KS), expressed in spherical polar coordinates $`(\theta ,\varphi )`$, the angular part $`d\omega ^2`$ of the line element $`ds^2`$ is given by
$$d\omega ^2=\rho ^2d\theta ^2+\mathrm{sin}^2\theta \left[r^2+a^2+\frac{2mr}{\rho ^2}a^2\mathrm{sin}^2\theta \right]d\varphi ^2$$
(1)
where
$$\rho ^2=r^2+a^2\mathrm{cos}^2\theta $$
(2)
with $`0<\theta <\pi `$, $`0\le \varphi \le 2\pi `$. This expression employs geometric, or relativistic, units in which the speed of light and the Newtonian gravitational constant are unity. Quantity $`a`$ is then the angular momentum $`J`$ per unit mass of a spinning particle of mass $`m`$. Note that $`a\ne 0`$ for the KS, and when $`a=0`$ the KS reduces to the Schwarzschild solution, (SS).
Rewriting (1) as
$$d\omega ^2=\gamma _{AB}dx^Adx^B$$
(3)
where $`x^A=(\theta ,\varphi )`$, $`A=2,3`$, the area $`\mathcal{A}`$ of the corresponding 2-dimensional (angular) surface is given by
$$\mathcal{A}=\int _0^{2\pi }\int _0^\pi \sqrt{det\gamma _{AB}}d\theta d\varphi .$$
(4)
Inspection of $`det\gamma _{AB}`$ shows that it is independent of $`\varphi `$, so consequently (4) reduces to
$$\mathcal{A}=2\pi \int _0^\pi \sqrt{det\gamma _{AB}}d\theta .$$
(5)
III. SURFACE AREAS OF THE EH AND SLS
By definition, for an EH
$$r=r_{EH}=m+\sqrt{m^2-a^2}$$
(6)
and for a SLS
$$r=r_{SLS}=m+\sqrt{m^2-a^2\mathrm{cos}^2\theta }$$
(7)
Then, upon using these values of $`r`$ in (5), one obtains expressions for the respective surface areas $`\mathcal{A}_{EH}`$ and $`\mathcal{A}_{SLS}`$. Of course, for the SS, since $`a=0`$ the EH and the SLS coincide.
Evaluation of (5) for the EH is elementary, and one immediately obtains
$$\mathcal{A}_{EH}=8\pi m\left(m+\sqrt{m^2-a^2}\right).$$
(8)
However, the evaluation of (5) for the SLS is non-trivial, and involves the integral
$$\mathcal{A}_{SLS}=4\pi m\int _0^\pi r_{SLS}\sqrt{1+\frac{a^2}{mr_{SLS}}\mathrm{sin}^2\theta }\mathrm{sin}\theta d\theta .$$
(9)
The difficulty calculating this is obvious since $`r_{SLS}`$ involves $`\theta `$, and appears twice in the integrand!
To the best of our knowledge, no one has succeeded in evaluating the integral (9) in closed form. It is perhaps noteworthy that in his treatise<sup>5</sup> Chandrasekhar calculated $`\mathcal{A}_{EH}`$ and gave the result (8), but made no comment about the evaluation of (9). Indeed, he did not even cite an approximate expression for $`\mathcal{A}_{SLS}`$.
IV. AN APPROXIMATE EVALUATION OF $`A_{SLS}`$
Since the expression (9) seems to be intractable it is natural to seek an approximate determination of $`\mathcal{A}_{SLS}`$.

Fortunately, both the uncharged KS and the โNo Hairโ theorem require that

$$m^2-a^2\ge 0,$$

(10)

and this permits us to obtain an approximate expression for $`\mathcal{A}_{SLS}`$. The elementary inequality, which is easily proven,

$$a^2/m^2\le 1,\qquad a^2/mr_{SLS}\le 1,$$

(11)

then permits us to do a binomial expansion

$$(1\pm X)^{1/2}\simeq 1\pm \frac{1}{2}X,$$

(12)

with $`X\ll 1`$, for the radical in $`r_{SLS}`$.
For $`\mathcal{A}_{SLS}`$ this expansion must be used twice. First, expansion of the radical in the integrand yields

$$\mathcal{A}_{SLS}\simeq 4\pi m\int _0^\pi r_{SLS}\left(1+\frac{a^2}{2mr_{SLS}}\mathrm{sin}^2\theta \right)\mathrm{sin}\theta d\theta $$

(13)

which we split into two integrals:

$$\mathcal{A}_{SLS}\simeq 4\pi m\left\{\int _0^\pi r_{SLS}\mathrm{sin}\theta d\theta +\frac{a^2}{2m}\int _0^\pi \mathrm{sin}^3\theta d\theta \right\}.$$

(14)

The first integrand in (14) requires a second expansion,

$$\begin{array}{ccc}\int _0^\pi r_{SLS}\mathrm{sin}\theta d\theta \hfill & \simeq \hfill & m\int _0^\pi \left[1+\left(1-\frac{a^2}{m^2}\mathrm{cos}^2\theta \right)^{1/2}\right]\mathrm{sin}\theta d\theta \hfill \\ & \simeq \hfill & m\int _0^\pi \left(2-\frac{a^2}{2m^2}\mathrm{cos}^2\theta \right)\mathrm{sin}\theta d\theta \hfill \\ & =\hfill & m\left(4-\frac{1}{3}\frac{a^2}{m^2}\right);\hfill \end{array}$$

while the second integral in (14) is elementary

$$\frac{a^2}{2m^2}\int _0^\pi \mathrm{sin}^3\theta d\theta =\frac{2}{3}\frac{a^2}{m^2}.$$

Adding these two expressions and multiplying by $`4\pi m`$, we obtain

$$\mathcal{A}_{SLS}\simeq 16\pi m^2+\frac{4\pi }{3}a^2$$

(15)

which is our approximate evaluation of $`\mathcal{A}_{SLS}`$.
V. A GEOMETRIC REINTERPRETATION OF $`A_{EH}`$ AND $`A_{SLS}`$
Although we have the exact value of $`\mathcal{A}_{EH}`$ exhibited in (8), since we have only an approximate value for $`\mathcal{A}_{SLS}`$, it seems natural to consider what our approximation procedure gives for $`\mathcal{A}_{EH}`$. This is easy, and by using (12) in (8) we obtain

$$\mathcal{A}_{EH}\simeq 16\pi m^2-4\pi a^2$$

(16)
which invites a comparison with (15). By recalling that the usual Euclidean surface area of a sphere is
$$4\pi (radius)^2,$$
we observe that
$$16\pi m^2=4\pi (2m)^2=4\pi r_S^2,$$
(17)
where $`r_S=2m`$ is the Schwarzschild radius.
This suggests calling the expression in (17), which is the common value of $`\mathcal{A}_{EH}`$ and $`\mathcal{A}_{SLS}`$ for the SS, the Schwarzschild area

$$\mathcal{A}_S=4\pi r_S^2$$

(18)

Of course, for the KS the expressions (15) and (16) are more complicated. However, both the second terms on the right hand sides of these expressions involve the ubiquitous factor of $`4\pi `$. This suggests introducing the notion of an angular momentum sphere having $`a\ne 0`$ as a radius, since dimensionally in geometrized units $`a`$ is a length. Such a sphere has the Euclidean surface area

$$\mathcal{A}_J=4\pi a^2,$$

(19)

and hence we can rewrite (16) and (15) as

$`\mathcal{A}_{EH}\simeq \mathcal{A}_S-\mathcal{A}_J`$

$`\mathcal{A}_{SLS}\simeq \mathcal{A}_S+{\displaystyle \frac{1}{3}}\mathcal{A}_J.`$ (20)

These show, as is to be expected, that $`\mathcal{A}_{SLS}>\mathcal{A}_{EH}`$, with $`\mathcal{A}_{SLS}=\mathcal{A}_{EH}`$ only for the SS. The values of $`\mathcal{A}_{EH}`$ and $`\mathcal{A}_{SLS}`$ given in (20) are surprisingly close, in contrast to the usual pictorial illustrations of the ergosphere given in the literature.
VI. NUMERICAL EVALUATION OF $`A_{EH}`$ AND $`A_{SLS}`$
Since the approximate values of $`\mathcal{A}_{EH}`$ and $`\mathcal{A}_{SLS}`$ in (20) are curiously close and tantalizingly simple, one wonders whether this is accidental, or quite sensible. While our use of the binomial expansion (12) seems sensible, are we justified in omitting the higher order terms?

To answer this question we have done a numerical computation of $`\mathcal{A}_{EH}`$ and $`\mathcal{A}_{SLS}`$ by using Mathcad<sup>6</sup>. This computes definite integrals by using a Romberg algorithm which accelerates the convergence of a sequence of simple trapezoidal / midpoint approximations of the value of the integral by extrapolating both the sequence of estimates and the widths of the subintervals. Since our goal is to check the accuracy of the expressions given in (20), we will use the values of $`a`$ and $`m`$ given by Shapiro and Teukolsky<sup>7</sup>, and let Mathcad choose the method of evaluation with an automatic default setting of 0.001.
Then in geometric units: $`m=1.478\times 10^5cm,J=4.034\times 10^9cm^2`$ so that $`a=2.73\times 10^4cm`$, and
$$a/m=0.185.$$
Upon direct calculation (15) gives
$$\mathcal{A}_{SLS}=1.101\times 10^{12}cm^2,$$

while Mathcadโs evaluation is

$$\mathcal{A}_{SLS}=1.094\times 10^{12}cm^2.$$
Upon taking the latter to be โexactโ this yields a relative error of $`5.797\times 10^{-3}`$, i.e. 0.58%.
Likewise, upon direct evaluation (16) gives

$$\mathcal{A}_{EH}=1.089\times 10^{12}cm^2$$

while evaluation of the exact expression (8) yields

$$\mathcal{A}_{EH}=1.089\times 10^{12}cm^2,$$

and the error between these is $`8.128\times 10^7cm^2`$, i.e. 0.008%.
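For readers without Mathcad, the same comparison can be reproduced with any standard quadrature routine. A minimal sketch (Python with scipy assumed; the values of m and a are those quoted above) is:

```python
import numpy as np
from scipy.integrate import quad

m, a = 1.478e5, 2.73e4   # geometrized units [cm]

def r_sls(theta):
    # Eq. (7)
    return m + np.sqrt(m**2 - a**2 * np.cos(theta)**2)

def integrand(theta):
    # integrand of Eq. (9)
    r = r_sls(theta)
    return r * np.sqrt(1.0 + a**2 * np.sin(theta)**2 / (m * r)) * np.sin(theta)

A_sls = 4.0 * np.pi * m * quad(integrand, 0.0, np.pi)[0]         # "exact"
A_sls_approx = 16.0 * np.pi * m**2 + (4.0 * np.pi / 3.0) * a**2  # Eq. (15)
A_eh = 8.0 * np.pi * m * (m + np.sqrt(m**2 - a**2))              # Eq. (8)

print(f"A_SLS = {A_sls:.4e} cm^2 (quad), {A_sls_approx:.4e} cm^2 (approx)")
print(f"A_EH  = {A_eh:.4e} cm^2")
```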
This numerical comparison gives us confidence in the accuracy of the approximate value of $`๐_{SLS}`$ displayed in equation (15).
ACKNOWLEDGMENT
The second author would like to express his thanks to Professor George Burleson for encouraging him to offer a course in black hole physics (Spring 1999), where the topic of this current investigation was assigned as a problem.
a) Electronic mail: jzund@nmsu.edu
1. See Pankaj S. Joshi, Global Aspects in Gravitation and Cosmology (Clarendon Press, Oxford, 1993) pp 77-82; and for a less mathematical discussion, Jean-Pierre Luminet, Black Holes, (Cambridge University Press, Cambridge, 1992).
2. Remo Ruffini and John A. Wheeler, โRelativistic cosmology and space platformsโ in The Significance of Space Research for Fundamental Physics (European Space Research Organization, Neuilly-sur-Seine, 1971) pp 72-99. This is the technical version of the authors popular exposition โIntroducing the black hole,โ Physics Today, January 1971, 30-41.
3. Jacob D. Bekenstein, โBlack holes and entropy,โ Physical Review D 7, 2333-2346 (1973); and โBlack hole thermodynamics,โ Physics Today, January 1980, 24-31.
4. Roy P. Kerr, โGravitational field of a spinning mass as an example of algebraically special metrics,โ Physical Review Letters 11, 237-238 (1963) and pp 273-318 of reference 5.
5. Subrahmanyan Chandrasekhar, The Mathematical Theory of Black Holes (Clarendon Press, Oxford 1983) p. 317.
6. Mathcad Userโs Guide: Mathcad 8 Professional (Mathsoft Inc., Cambridge MA, 1998).
7. Stuart L. Shapiro and Saul A. Teukolsky, Black Holes, White Dwarfs, and Neutron Stars (J. Wiley & Sons, New York, 1983) p. 357. |
# Atmospheric neutrinos
## 1 INTRODUCTION
Atmospheric neutrinos are produced in the cascade originated in the atmosphere by a primary cosmic ray. Underground detection of atmospheric neutrino-induced events was pioneered by the Kolar Gold Field (KGF) experiment in India and the CWI2 experiment in a mine in South Africa. The field gained new interest when the large underground detectors for proton decay experiments were put into operation. At the beginning atmospheric neutrinos were studied mainly as possible sources of background for proton decay searches. But very soon, the water Cherenkov experiments, IMB in the United States and Kamiokande in Japan, discovered that the ratio between events with a muon and those with an electron was lower than expected.
The first historical observation of the anomaly was in 1986 in the IMB paper โCalculation of Atmospheric Neutrino-Induced Backgrounds in a Nucleon Decay Searchโ. It was observed in this paper that โThe simulation predicts that $`34\%\pm 1\%`$ of the events should have an identified muon decay while our data has $`24\%\pm 3\%`$โ. The importance of this discrepancy as a possible signature for neutrino oscillations over the path length between the production point and the detector (in the range 10โ13000 km) was not fully recognized at the beginning. In 1988 there was the first paper by the Kamiokande collaboration dedicated to this anomaly, followed by two papers from the IMB collaboration.
However, this anomaly was not confirmed by the proton decay iron fine-grained experiments NUSEX (in the Mont Blanc tunnel between France and Italy) and Frejus (in another tunnel under the Alps), and there was the suggestion that the anomaly was due to differences between the neutrino cross sections in water and in iron not taken into account in the Fermi gas model used in the original calculations. A calculation by Engel showed that this effect should be negligible for the energies of interest. Later, the results from another fine-grained iron detector, Soudan-2, showed that there was probably a statistical fluctuation in the NUSEX and Frejus data.
In 1994 another anomaly was observed by the Kamiokande experiment: the distortion of the angular distribution of the events with a single muon in the so-called internally produced Multi-GeV data sample, with a reduction of the flux of the vertical up-going events.
There were several attempts to look for a possible angular distortion in other categories of events, for example in the externally produced, neutrino-induced upward-going muons. Results were produced at that time by the IMB experiment, the Baksan experiment in the USSR and the Kamiokande experiment itself. The results were inconclusive or in contradiction with the neutrino oscillation hypothesis, particularly as concerns the analysis of the stopping muon / through-going muon ratio in the IMB experiment.
The MACRO tracking experiment in the Gran Sasso laboratory began operation for neutrino physics in 1989 with a small fraction of the final detector. The first results of MACRO in 1995 showed that there was a deficit of events, particularly in the vertical direction. However, the statistics at that time were not sufficient to discriminate unambiguously between the oscillation and the no-oscillation hypothesis.
Another big step forward in this field was due to the Superkamiokande experiment. In 1998 at the Takayama Neutrino conference there was the announcement of the observation of neutrino oscillations ($`\nu _\mu `$ disappearance) from the Superkamiokande experiment. It is notable that, at the same conference, the two other running experiments, Soudan-2 and MACRO, presented results in strong support of the same $`\nu _\mu `$ oscillation pattern observed by SuperKamiokande.
## 2 NEUTRINO OSCILLATIONS AND MATTER EFFECT
Neutrino oscillations were suggested by B. Pontecorvo in 1957 after the discovery of the $`K^0\overline{K^0}`$ transitions.
If neutrinos have masses, then a neutrino of definite flavor, $`\nu _{\ell }`$, is not necessarily a mass eigenstate. In analogy to the quark sector, the $`\nu _{\ell }`$ could be a coherent superposition of mass eigenstates.

The fact that a neutrino of definite flavor is a superposition of several mass eigenstates, whose differing masses $`M_m`$ cause them to propagate differently, leads to neutrino oscillations: the transformation in vacuum of a neutrino of one flavor into one of a different flavor as the neutrino moves through empty space. The amplitude for the transformation $`\nu _{\ell }\to \nu _{\ell ^{\prime }}`$ is given by:
$$A(\nu _{\ell }\to \nu _{\ell ^{\prime }})=\sum _mU_{\ell m}e^{-i\frac{M_m^2}{2}\frac{L}{E}}U_{\ell ^{\prime }m}^{*}$$
(1)
where $`U`$ is a $`3\times 3`$ unitary matrix in the hypothesis of the 3 standard neutrino flavors ($`\nu _\mu ,\nu _e,\nu _\tau `$). In the hypothesis of a sterile neutrino $`U`$ is a $`4\times 4`$ unitary matrix.
The probability $`P(\nu _{\ell }\to \nu _{\ell ^{\prime }})`$ for a neutrino of flavor $`\ell `$ to oscillate in vacuum into one of flavor $`\ell ^{\prime }`$ is then just the square of this amplitude. For two neutrino oscillations and in vacuum:
$$P(\nu _{\ell }\to \nu _{\ell ^{\prime }\ne \ell })=\mathrm{sin}^22\theta \mathrm{sin}^2\left[1.27\delta M^2\frac{L}{E}\right]$$
(2)
$`\delta M^2(\text{eV}^2),L(\text{km}),E(\text{GeV})`$
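As a quick numerical illustration of equation (2), the sketch below evaluates the two-flavor vacuum transition probability for a down-going and an up-going atmospheric neutrino; the parameter values are illustrative, of the order of those discussed later in the text.

```python
import numpy as np

def p_transition(L_km, E_GeV, delta_m2=2.5e-3, sin2_2theta=1.0):
    """Two-flavor vacuum oscillation probability of equation (2);
    delta_m2 in eV^2, L in km, E in GeV."""
    return sin2_2theta * np.sin(1.27 * delta_m2 * L_km / E_GeV)**2

# Down-going (L ~ 20 km) versus up-going (L ~ 13000 km) nu_mu at 4 GeV;
# the latter oscillates rapidly, averaging to sin^2(2 theta)/2.
for L in (20.0, 13000.0):
    print(L, p_transition(L, 4.0))
```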
This simple relation should be modified when a neutrino propagates through matter and when there is a difference in the interactions of the two neutrino flavors with matter.
The neutrino weak potential in matter is:
$$V_{\mathrm{weak}}=\pm \frac{G_Fn_B}{2\sqrt{2}}\times \{\begin{array}{cc}-2Y_n+4Y_e\hfill & \text{for }\nu _e\text{,}\hfill \\ -2Y_n\hfill & \text{for }\nu _{\mu ,\tau }\text{,}\hfill \\ 0\hfill & \text{for }\nu _s\text{,}\hfill \end{array}$$
(3)
where the upper sign refers to neutrinos, the lower sign to antineutrinos, $`G_F`$ is the Fermi constant, $`n_B`$ the baryon density, $`Y_n`$ the neutron and $`Y_e`$ the electron number per baryon (both about 1/2 in normal matter). Numerically we have
$$\frac{G_Fn_B}{2\sqrt{2}}=1.9\times 10^{-14}\mathrm{eV}\frac{\rho }{\mathrm{g}\mathrm{cm}^{-3}}.$$
(4)
The weak potential in matter produces a phase shift that could modify the neutrino oscillation pattern if the oscillating neutrinos have different interactions with matter. The matter effect could help to discriminate between different neutrino channels. According to equation 3, the matter effect in the Earth could be important for $`\nu _\mu \to \nu _e`$ and for $`\nu _\mu \to \nu _s`$ oscillations, while for $`\nu _\mu \to \nu _\tau `$ oscillations there is no matter effect. For some particular values of the oscillation parameters the matter effect could enhance the oscillations, originating โresonancesโ (MSW effect). The internal structure of the Earth could have an important role in the resonance pattern. However, for maximum mixing, the only possible effect is the reduction of the amplitude of oscillations.
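A minimal numerical sketch of equations (3) and (4) follows; the function name and the ~5.5 g/cm<sup>3</sup> mean Earth density used in the example are illustrative assumptions, not values from the text.

```python
def v_weak_eV(flavor, rho_g_cm3, Y_n=0.5, Y_e=0.5, antineutrino=False):
    """Weak potential of equation (3), using the numerical value of
    equation (4) for G_F n_B / (2 sqrt(2)); returns V in eV."""
    unit = 1.9e-14 * rho_g_cm3
    coeff = {"nu_e": -2.0 * Y_n + 4.0 * Y_e,
             "nu_mu_tau": -2.0 * Y_n,
             "nu_s": 0.0}[flavor]
    v = unit * coeff
    return -v if antineutrino else v

# e.g. for an assumed mean Earth density of ~5.5 g/cm^3:
print(v_weak_eV("nu_mu_tau", 5.5))   # ~ -1.0e-13 eV
```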
## 3 ATMOSPHERIC NEUTRINOS
In the hadronic cascade produced from the primary cosmic ray we have the production of neutrinos with the following basic scheme:

$`p+N\to \pi +K+\mathrm{}`$

$`\pi /K\to \mu ^+(\mu ^{-})+\nu _\mu (\overline{\nu }_\mu )`$

$`\mu ^+(\mu ^{-})\to e^+(e^{-})+\nu _e(\overline{\nu _e})+\overline{\nu _\mu }(\nu _\mu )`$
From these decay channels one expects at low energies about twice as many muon neutrinos as electron neutrinos. This result doesnโt change very much in a detailed calculation. The calculation of the absolute neutrino fluxes is a more complicated matter, with several sources of uncertainty due to the complicated shower development in the atmosphere and to the large uncertainties in the cosmic ray spectrum.
There are two basic topologies of neutrino induced events in a detector: internally produced events and externally produced events. The internally produced events have neutrino interaction vertices inside the detector. In this case all the secondaries can be in principle observed. The range of neutrino energies involved goes from a fraction of GeV up to 10 GeV or more. Both electron neutrinos and muon-neutrinos can be detected. The externally produced events have neutrino interaction vertex in the rock below the detector. Typical neutrino energies involved are of the order of 100 GeV. Only muon neutrinos can be detected. Figure 1 shows the basic geometrical factors of the neutrino production and detection in an underground detector.
The neutrino events can have a background connected with the production of hadrons by photoproduction due to the down-going muons. This background has been measured by the MACRO experiment. The photoproduced neutrons can simulate internal events and the pions can simulate stopping or through-going muons. The rate of this background depends on the rate of the down-going muons and therefore on the depth. As shown in Figure 2, this effect could be important for detectors at shallow depth and it could be one of the reasons for some past results in contradiction with the current oscillation scenario.
## 4 THE SOUDAN-2 EXPERIMENT
The results of the Soudan-2 experiment are discussed in detail in another talk at this conference. Here I want to stress the importance of this experiment for the Sub-GeV events (events having energies of the order of 1 GeV or less), where a possible contradiction between the iron sampling calorimeters and the water Cherenkov detectors was suggested in the past. Figure 3 shows the current experimental situation together with the new results of Soudan-2 with 4.6 kton-years of data.
Recently the Soudan-2 group has been able to study the L/E distribution for a sample of events selected to have a high energy resolution. The distributions are shown in Figure 4. After background subtraction they have 90.5 $`\nu _\mu `$ CC events and 116.4 $`\nu _e`$ CC events (153.6 predicted).
Due to the nuclear effects and to the limited statistics it is not possible to see the sinusoidal pattern of equation (2), with the first minimum at $`Log(\frac{L}{E})=2.5`$ predicted in the case of oscillations with $`\delta m^2\simeq 3\times 10^{-3}eV^2`$. One of the main goals of the next generation atmospheric neutrino experiments is the measurement of this pattern, which could provide a precise measurement of the oscillation parameters and a way to discriminate alternative hypotheses such as neutrino decay.
However, from the study of the $`\chi ^2`$ of the L/E distribution as a function of the oscillation parameters, the Soudan-2 group has been able to set a $`90\%`$ confidence region for the oscillation parameters, reported together with the other experiments in Figure 12. The best $`\chi ^2`$ is for $`\delta m^2=0.8\times 10^{-2}eV^2`$ and $`sin^22\theta =0.95`$.
Soudan-2 measures a flux of $`\nu _e`$ neutrinos smaller than the one expected (with an old version of the Bartol flux), while SuperKamiokande finds agreement between predictions and data. This disagreement could be due either to a statistical fluctuation or to some physical effect due to the different geomagnetic cuts or to differences in the neutrino samples (Soudan-2 accepts events with a smaller energy and accepts multiprong events).
## 5 NEUTRINO DETECTION IN THE MACRO EXPERIMENT
The MACRO detector is described elsewhere. Active elements are streamer tube chambers used for tracking and liquid scintillator counters used for the time measurement.
Figure 5 shows a schematic plot of the three different topologies of neutrino events analyzed up to now: $`Up`$ $`Through`$ events, $`Internal`$ $`Up`$ events and $`Internal`$ $`Down`$ together with $`Up`$ $`Stop`$ events. The requirement of a reconstructed track selects events having a muon.
The $`Up`$ $`Through`$ tracks come from $`\nu _\mu `$ interactions in the rock below MACRO. The muon crosses the whole detector ($`E_\mu >1`$ GeV). The time information provided by scintillator counters permits to know the flight direction (time-of-flight method). The typical neutrino energy for this kind of events is of the order of 100 GeV. The data have been collected in three periods, with different detector configurations starting in 1989 with a small fraction of the apparatus.
The $`Internal`$ $`Up`$ events come from $`\nu `$ interactions inside the apparatus. Since two scintillator layers are intercepted, the time-of-flight method is applied to identify the upward going events. The typical neutrino energy for this kind of events is around 4 GeV. If the atmospheric neutrino anomalies are the result of $`\nu _\mu `$ oscillations with maximum mixing and $`\mathrm{\Delta }m^2`$ between $`10^{-3}`$ eV<sup>2</sup> and $`10^{-2}`$ eV<sup>2</sup>, a reduction of about a factor of two in the flux of this kind of events is expected, without any distortion in the shape of the angular distribution. Only the data collected with the full MACRO (live-time around 4.1 years) have been used in this analysis.
The $`Up`$ $`Stop`$ and the $`Internal`$ $`Down`$ events are due to external interactions with upward-going tracks stopping in the detector ($`Up`$ $`Stop`$) and to neutrino induced downgoing tracks with vertex in the lower part of MACRO ($`Internal`$ $`Down`$). These events are identified by means of topological criteria. The lack of time information prevents distinguishing the two sub samples. The data set used for this analysis is the same used for the $`Internal`$ $`Up`$ search. An almost equal number of $`Up`$ $`Stop`$ and $`Internal`$ $`Down`$ events is expected if neutrinos do not oscillate. The average neutrino energy for this kind of events is around 4 GeV. In case of oscillations no reduction is expected in the flux of the $`Internal`$ $`Down`$ events (having path lengths of the order of 20 km), while a reduction similar to the one expected for the $`Internal`$ $`Up`$ events is expected in the number of $`Up`$ $`Stop`$ events.
## 6 UPWARD THROUGH-GOING MUONS
The direction that muons travel through MACRO is determined by the time-of-flight between two different layers of scintillator counters. The measured muon velocity is calculated with the convention that muons going down through the detector are expected to have 1/$`\beta `$ near +1 while muons going up through the detector are expected to have 1/$`\beta `$ near -1.
Several cuts are imposed to remove backgrounds caused by radioactivity or showering events which may result in bad time reconstruction. The most important cut requires that the position of a muon hit in each scintillator as determined from the timing within the scintillator counter agrees within $`\pm `$70 cm with the position indicated by the streamer tube track.
In order to reduce the background due to the photoproduced pions each upgoing muon must cross at least 200 g/cm<sup>2</sup> of material in the bottom half of the detector. Finally, a large number of nearly horizontal ($`\mathrm{cos}\theta >-0.1`$), but upgoing muons have been observed coming from azimuth angles corresponding to a direction containing a cliff in the mountain where the overburden is insufficient to remove nearly horizontal, downgoing muons which have scattered in the mountain and appear as upgoing. This region is excluded from both the observation and the Monte-Carlo calculation of the upgoing events.
There are 561 events in the range $`-1.25<1/\beta <-0.75`$ defined as upgoing muons for this data set. These data are combined with the previously published data for a total of 642 upgoing events. Based on the events outside the upgoing muon peak, it is estimated there are $`12.5\pm 6`$ background events in the total data set. In addition to these events, there are $`10.5\pm 4`$ events which result from upgoing charged particles produced by downgoing muons in the rock near MACRO. Finally, it is estimated that $`12\pm 4`$ events are the result of interactions of neutrinos in the very bottom layer of MACRO scintillators.
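Schematically, the time-of-flight measurement amounts to the following; this is a toy sketch with hypothetical function names, not the actual MACRO reconstruction, which uses the full scintillator timing and tracking.

```python
C_M_PER_NS = 0.299792458   # speed of light [m/ns]

def one_over_beta(t_top_ns, t_bottom_ns, path_m):
    """1/beta with the convention of the text: ~ +1 for downgoing muons
    (top counter fires first), ~ -1 for upgoing muons."""
    return C_M_PER_NS * (t_bottom_ns - t_top_ns) / path_m

def is_up_through(inv_beta):
    # selection window used for the upgoing muon sample
    return -1.25 < inv_beta < -0.75

# A muon firing the bottom layer 40 ns before the top one, 12 m apart:
ib = one_over_beta(t_top_ns=40.0, t_bottom_ns=0.0, path_m=12.0)
print(ib, is_up_through(ib))   # ~ -1.0, True
```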
In the upgoing muon simulation the neutrino flux computed by the Bartol group is used. The cross-sections for the neutrino interactions have been calculated using the GRV94 parton distribution set, which varies by +1% with respect to the Morfin and Tung parton distributions used in the past. The systematic error on the upgoing muon flux due to uncertainties in the cross section, including low-energy effects, is 9%. The propagation of muons to the detector has been done using the energy loss calculation for standard rock. The total systematic uncertainty on the expected flux of muons, adding the errors from neutrino flux, cross-section and muon propagation in quadrature, is $`\pm 17\%`$. This theoretical error in the prediction is mainly a scale error that doesnโt change the shape of the angular distribution. The number of events expected integrated over all zenith angles is 824.6, giving a ratio of the observed number of events to the expectation of 0.74 $`\pm 0.031`$(stat) $`\pm 0.044`$(systematic) $`\pm 0.12`$(theoretical).
Figure 6 shows the zenith angle distribution of the measured flux of upgoing muons with energy greater than 1 GeV for all MACRO data, compared to the Monte Carlo expectation for no oscillations and with a $`\nu _\mu \to \nu _\tau `$ oscillated flux with $`\mathrm{sin}^22\theta =1`$ and $`\mathrm{\Delta }m^2=0.0025`$ eV<sup>2</sup>.
The shape of the angular distribution has been tested against the hypothesis of no oscillations, normalizing data and predictions. The $`\chi ^2`$ is $`22.9`$ for 8 degrees of freedom (probability of 0.35% for a shape at least this different from the expectation). $`\nu _\mu \to \nu _\tau `$ oscillations are also considered. The best $`\chi ^2`$ in the physical region of the oscillation parameters is 12.5, for $`\mathrm{\Delta }m^2`$ around $`0.0025\mathrm{eV}^2`$ and maximum mixing (the best $`\chi ^2`$ is 10.6, outside the physical region).
To test the oscillation hypothesis, the independent probabilities for obtaining the number of events observed and the angular distribution for various oscillation parameters are calculated. They are reported for $`\mathrm{sin}^22\theta =1`$ in Figure 7 A) for $`\nu _\mu \to \nu _\tau `$ oscillations. It is notable that the value of $`\mathrm{\Delta }m^2`$ suggested from the shape of the angular distribution is similar to the value necessary in order to obtain the observed reduction in the total number of events in the hypothesis of maximum mixing. Figure 7 B) shows the same quantities for sterile neutrino oscillations, taking into account matter effects.
The maximum of the probability is 36.6% for $`\nu _\mu \to \nu _\tau `$ oscillations. The probability for no oscillation is 0.36%.
The maximum probability for the sterile neutrino is $`8.6\%`$ in a region of $`\mathrm{\Delta }m^2`$ around $`10^{-2}eV^2`$. The probabilities for sterile neutrinos are comparable to the ones for $`\tau `$ neutrinos only in the small regions shown in Figure 8.
Another way to try to discriminate between the oscillation of $`\nu _\mu `$ into $`\nu _s`$ and that into $`\nu _\tau `$ is to study the angular distribution in two bins, computing the ratio between the two bins as shown in Figure 9. The statistical significance is higher than in the case of data binned in 10 bins, but some features of the angular distribution could be lost using this ratio. The ratio is insensitive to most of the errors on the theoretical prediction of the $`\nu `$ flux and cross section. From this plot the $`\nu _s`$ hypothesis is disfavored at the $`2\sigma `$ level.
It is interesting to check whether the data measured by different experiments agree. In the case of oscillations it is important to take correctly into account the energy thresholds of the different experiments: Superkamiokande has an average energy threshold of the order of 7 GeV, while for MACRO it is 1 GeV. The comparison between Kamiokande, Superkamiokande and MACRO shown in Figure 10 is done by comparing the ratios between the events measured and the events expected in the case of oscillations (as computed by each experiment). There is remarkable agreement between the three experiments, even if some discrepancy remains possible in the region around the vertical.
## 7 THE MACRO LOW ENERGY EVENTS
The analysis of the $`Internal`$ $`Up`$ events is similar to the analysis of the $`Up`$ $`Through`$ events. The main difference is the requirement that the interaction vertex be inside the apparatus. About 87% of the events are estimated to be $`\nu _\mu `$ CC interactions. The uncertainty due to the acceptance and analysis cuts is 10%. After the background subtraction (5 events), 116 events are classified as $`Internal`$ $`Up`$ events.
The $`Internal`$ $`Down`$ and the $`Up`$ $`Stop`$ events are identified via topological constraints. The main requirement is the presence of a reconstructed track crossing the bottom scintillator layer. The tracking algorithm for this search requires at least 3 streamer hits (corresponding roughly to 100 g/cm<sup>2</sup>). All the track hits must be at least 1 m from the detector's edges. The criteria used to verify that the event vertex (or stopping point) is inside the detector are similar to those used for the $`Internal`$ $`Up`$ search. To reject ambiguous and/or wrongly tracked events which survived the automated analysis cuts, real and simulated events were randomly merged and directly scanned with the MACRO Event Display. About 90% of the events are estimated to be $`\nu _\mu `$ CC interactions. The main background for this search is the low energy particles produced by down-going muons. After background subtraction ($`7\pm 2`$ events) 193 events are classified as $`Internal`$ $`Down`$ and $`Up`$ $`Stop`$ events.
The Monte Carlo simulation for the low energy events uses the Bartol neutrino flux and the low energy neutrino cross sections reported in . The simulation is performed in a large volume of rock (170 kton) around the MACRO detector (5.3 kton). The uncertainty on the expected muon flux is about 25%. The number of expected events was also evaluated using the "NEUGEN" neutrino event generator (developed by the Soudan and MINOS collaborations) as input to our full Monte Carlo simulation. The NEUGEN generator predicts $`6\%(5\%)`$ fewer detectable IU (ID+UGS) events in MACRO than , well within the estimated systematic uncertainty for neutrino cross sections ($`15\%`$).
The angular distributions of data and predictions are compared in Figure 11. The low energy samples show a uniform deficit of the measured number of events with respect to the predictions over the whole angular distribution, while there is good agreement with the predictions based on neutrino oscillations.
The theoretical errors coming from the neutrino flux and cross section uncertainties almost cancel if the ratio between the measured numbers of events $`\frac{IU}{(ID+UGS)}`$ is compared with the expected one. The partial error cancellation arises from the nearly equal energy spectra of parent neutrinos for the IU and the ID+UGS events. The experimental systematic uncertainty on the ratio is 6%. The measured ratio is $`\frac{IU}{ID+UGS}=0.60\pm 0.07_{stat}`$, while the one expected without oscillations is $`0.74\pm 0.04_{sys}\pm 0.03_{theo}`$. The probability (one-sided) of obtaining a ratio so far from the expected one is 5%, nearly independent of the neutrino flux and neutrino cross sections used for the predictions.
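The quoted one-sided probability follows from a simple Gaussian estimate; the sketch below treats all errors as Gaussian and uncorrelated, which is an approximation to the actual procedure:

```python
import math

r_meas, e_stat = 0.60, 0.07            # measured IU/(ID+UGS) ratio
r_exp = 0.74                           # expected ratio, no oscillations
e_exp = math.sqrt(0.04**2 + 0.03**2)   # sys and theo errors in quadrature

z = (r_exp - r_meas) / math.sqrt(e_stat**2 + e_exp**2)
p = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided Gaussian tail
print(f"z = {z:.2f}, one-sided probability = {p:.1%}")  # about 5%
```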
The confidence level regions as a function of $`\mathrm{\Delta }m^2`$ and $`\mathrm{sin}^22\theta `$ are studied using a $`\chi ^2`$ comparison of data and Monte Carlo for the data of Fig. 11. The $`\chi ^2`$ includes the shape of the angular distribution, the $`\frac{IU}{ID+UGS}`$ ratio and the overall normalization. The systematic uncertainty is 10% in each bin of the angular distributions, while it is 5% for the ratio. The result is shown in Figure 12 (MACRO low energies).
A direct comparison can be made with the Superkamiokande stopping muons, because this analysis uses the same Bartol neutrino flux as MACRO. The ratio $`\frac{measured}{expected}`$ with oscillations at the best fit point of each experiment and for $`\mathrm{cos}\theta <-0.2`$ is $`0.97\pm 0.11`$ for Superkamiokande, in good agreement with $`1.02\pm 0.15`$ for the MACRO $`IUP`$ and $`0.91\pm 0.11`$ for the MACRO $`IDW+STOP`$ events.
## 8 CONCLUSIONS
The Soudan-2 detector is able to study atmospheric neutrino oscillations in the sub-GeV region. MACRO is able to cover the multi-GeV and the $`100`$ GeV region. The MACRO and Soudan-2 results can be compared to Kamiokande and Superkamiokande, which cover all three regions. The $`90\%`$ allowed confidence regions for Kamiokande, Superkamiokande, Soudan-2 and MACRO for the oscillation $`\nu _\mu \rightarrow \nu _\tau `$ are shown in Figure 12. The statistical power of the Superkamiokande experiment is larger than that of the others, but it is remarkable that the same effect detected in Superkamiokande is seen by detectors using completely different experimental techniques and in similar energy regions.
Using the matter effect it is possible to discriminate between different oscillation hypotheses. In particular, the oscillation $`\nu _\mu \rightarrow \nu _s`$ with maximum mixing is disfavored by the MACRO experiment with respect to the $`\nu _\mu \rightarrow \nu _\tau `$ oscillation. A similar result is obtained by Superkamiokande.
Although the $`\nu _\mu \rightarrow \nu _\tau `$ oscillation hypothesis is the simplest one that fits the current data, other more complex scenarios exist and can fit the data well. They require additional hypotheses on the existence of exotic particles, such as the sterile neutrino, or on oscillations among more than two families.
The exact determination of the oscillation parameters and of the oscillation channels will be the main goal of the future generation of experiments using atmospheric neutrinos or artificial neutrino beams.
I am greatly indebted to W. A. Mann, Maury Goodman and T. Kafka of the Soudan-2 group and to all the colleagues of the MACRO collaboration for the results presented in this talk and for the very useful discussions.
# Dynamics of Collapse of flexible Polyelectrolytes and Polyampholytes
## Abstract
We provide a theory for the dynamics of collapse of strongly charged polyelectrolytes (PEs) and flexible polyampholytes (PAs) using the Langevin equation. After the initial stage, in which counterions condense onto the PE, the mechanism of approach to the globular state is similar for the PE and the PA. In both instances, metastable pearl-necklace structures form on a characteristic time scale that is proportional to $`N^{\frac{4}{5}}`$, where $`N`$ is the number of monomers. The late stage of collapse occurs by merger of clusters, with the largest one growing at the expense of smaller ones (Lifshitz-Slyozov mechanism). The time scale for this process is $`\tau _{COLL}\sim N`$. Simulations are used to support the proposed collapse mechanism for the PA and the PE.
Due to the interplay of several length scales, charged polymers, referred to as polyelectrolytes (PEs), exhibit diverse structural characteristics depending on the environment (pH, temperature, ionic strength, etc.). The collapse of polyelectrolytes mediated by counterions is relevant in describing the folding of DNA and RNA. Reversible condensation of DNA induced by multivalent cations is required for its efficient packaging. Similarly, the role of divalent cations in enabling RNA to form its functionally competent state has been firmly established. In both cases, a highly charged polyelectrolyte undergoes a collapse transition from an ensemble of extended conformations.
Recently, simulations as well as theories have been advanced to describe the collapse of flexible polyelectrolyte chains mediated by condensation of counterions. An understanding of the collapse transition in polyelectrolyte chains could be valuable in gaining insights into the condensation mechanism in biomolecules. Motivated by this we provide a description of the dynamics of collapse of flexible PEs and polyampholytes (PAs). The theory for the latter becomes possible because of our suggestion (see below) that the physical processes governing the collapse of polyampholytes and polyelectrolytes are similar.
In poor solvents (with reference to the uncharged polymer) the polyion is stretched at sufficiently high temperatures due to intramolecular electrostatic repulsion. The translational entropy of the counterions at high temperatures is dominant, and hence their interaction with the polyion is negligible. As the temperature is decreased the binding energy of the counterions to the polyelectrolyte exceeds the free energy gain due to translational entropy and they condense. This results in a great decrease in the effective charge on the polyion and, because the solvent is poor, this leads to compaction of the chain.
These considerations have been used by Schiessel and Pincus to propose a diagram of states for highly charged polyelectrolytes. Consider a PE consisting of $`N`$ monomers with a fraction $`f`$ ($`f\le 1`$) of monomers carrying a charge of $`e`$. Strongly charged PE implies $`f(l_B/A)^2>1`$, where $`l_B=e^2/(4\pi \epsilon k_BT)`$ is the Bjerrum length, $`\epsilon `$ is the dielectric constant of the solvent, and $`A`$ is the mean distance between charges. At low temperatures we expect counterions (with valence $`z`$) to condense onto the PE because the Manning parameter $`(l_B/A)>1/z`$. The electrostatic blob length $`\xi _{el}`$ is the scale at which the mutual repulsion between two charges is approximately $`k_BT`$, and is given by $`\xi _{el}\sim l_Bz^2/k^2`$, where $`k=-\mathrm{ln}\varphi `$ with $`\varphi `$ being the volume fraction of free counterions. The PE is stretched, with size $`L\sim k^2A^2N/(l_Bz^2)`$, provided $`\xi _{el}<\xi _T\sim a_o\mathrm{\Theta }/(\mathrm{\Theta }-T)`$, where $`\xi _T`$ is the thermal blob length and $`\mathrm{\Theta }`$ is the collapse temperature for the uncharged polymer. When $`\xi _T<\xi _{el}`$ the PE undergoes a transition to a globular state.
Here, we study the kinetics of this collapse process using the Langevin equation. In our earlier study we showed, using simulations, that the approach to the globular state occurs in three stages following a temperature quench. In the initial stage the counterions condense. The intermediate time regime is characterized by the formation of metastable pearl-necklace structures. In the final stage the pearls (domains) merge, leading to the compact globular conformation. Here, the theory for the intermediate time regime is developed by adopting the procedure suggested by Pitard and Orland to describe the dynamics of collapse (and swelling) of homopolymers. We describe the dynamics of the late stages of collapse using an analogy to the Lifshitz-Slyozov growth mechanism.
The equation of motion of the polyelectrolyte chain is assumed to be given by the Langevin equation
$$\frac{\partial r(s,t)}{\partial t}=-\frac{1}{\zeta }\frac{\delta H}{\delta r(s,t)}+\eta (s,t)$$
(1)
where $`\zeta =k_BT/D`$ and $`D`$ is diffusion constant of monomer, T is the temperature. The Hamiltonian of the flexible polyelectrolyte chain with an effective Kuhn length $`a_o`$ is
$$H(t)=\frac{k_BT}{2a_o^2}\int _0^N\left(\frac{\partial r(s,t)}{\partial s}\right)^2ds+V_c(t)$$
(2)
and
$$V_c(t)=k_BTl_B\int _0^Nds\int _0^Nds^{\prime }\frac{q(s)q(s^{\prime })}{|r(s,t)-r(s^{\prime },t)|}$$
(3)
The thermal noise $`\eta (s,t)`$ is assumed to be Gaussian with zero mean and the correlation is given by
$$<\eta (s,t)\eta (s^{\prime },t^{\prime })>=2D\delta (s-s^{\prime })\delta (t-t^{\prime }).$$
(4)
In Eq.(3), $`r(s,t)`$ is the location of monomer $`s`$ at time $`t`$, $`V_c`$ is the Coulomb interaction between monomers at $`s`$ and $`s^{\prime }`$, and $`l_B`$ is the Bjerrum length. In writing Eq.(3) we have neglected hydrodynamic interactions. Furthermore, excluded volume interactions are also omitted. The effects due to self-avoidance are not likely to be important in the early stages of collapse because in this time regime attractive interactions mediated by counterion condensation dominate.
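As an illustration of how Eqs.(1-4) can be integrated numerically, a minimal Euler-Maruyama sketch for a discretized bead-spring chain is given below; the time step and parameter values are illustrative assumptions and not those of the simulations reported here:

```python
import numpy as np

def langevin_step(r, q, dt=1e-4, D=1.0, a0=1.0, lB=1.0, kT=1.0):
    """One Euler-Maruyama step of Eqs. (1)-(4): r is an (N,3) array of
    monomer positions, q an (N,) array of charges in units of e."""
    N = len(r)
    f = np.zeros_like(r)
    # connectivity term of Eq. (2): -dH/dr_i = (kT/a0^2)(r_{i+1} - 2 r_i + r_{i-1})
    f[:-1] += kT / a0**2 * (r[1:] - r[:-1])
    f[1:] += kT / a0**2 * (r[:-1] - r[1:])
    # pairwise Coulomb forces from Eq. (3)
    for i in range(N):
        d = r[i] - np.delete(r, i, axis=0)       # r_i - r_j for all j != i
        qq = q[i] * np.delete(q, i)
        dist = np.linalg.norm(d, axis=1)
        f[i] += kT * lB * ((qq / dist**3)[:, None] * d).sum(axis=0)
    zeta = kT / D
    noise = np.sqrt(2.0 * D * dt) * np.random.standard_normal(r.shape)
    return r + (f / zeta) * dt + noise
```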
Collapse of Polyelectrolytes: The polyelectrolyte chain is initially assumed to be in the $`\mathrm{\Theta }`$-solvent. At $`t=0`$, we imagine a quench to a temperature below $`\mathrm{\Theta }`$ so that the chain is effectively in a poor solvent. The equilibrium conformations of a weakly charged polyelectrolyte $`(f(l_B/A)^21)`$, in poor solvents has been described by Dobrynin et al. and has been further illustrated by Micka et al. These studies showed that when the net charge on the polyelectrolyte chain exceeds a certain critical value then the equilibrium conformation resembles a pearl-necklace. This structure consists of clusters of charged droplets connected by strings under tension. This is valid when counterion condensation, which is responsible for inducing attraction between the monomers, is negligible.
Consider a charged polyelectrolyte chain ($`f=1`$). Upon a quench to $`T<\mathrm{\Theta }`$, such that the thermal blob length $`\xi _T`$ is not too small, the polyelectrolyte chain undergoes a sequence of structural changes en route to the collapsed conformation. In particular, after counterion condensation the PE evolves towards a metastable pearl-necklace structure. The dynamics of this process can be described using Eqs.(1-4) and the following physical picture. We assume that shortly following a quench to poor enough solvent conditions the counterions condense onto the polyelectrolyte chain. The time scale for condensation of the multivalent counterions is diffusion limited, and is approximately given by $`\tau _{COND}\sim \rho ^{-2/3}\zeta /k_BT`$, where $`\rho `$ is the density of the counterions. Explicit simulations reveal that $`\tau _{COND}`$ is much shorter than the time scales on which the macromolecule relaxes. Upon condensation of a multivalent cation of valence $`z`$ ($`z\ge 2`$), the effective charge around the monomer becomes $`(z-1)e`$. If the locations of the condensed cations occur randomly and if the correlations between the counterions are negligible, then in the early stages the PE chain with condensed counterions may be mapped onto an evolving random polyampholyte. We assume that the condensed counterions are relatively immobile over the time interval $`\tau _{COND}<t<\tau _{CLUST}`$, where $`\tau _{CLUST}`$ is the time required for the formation of pearl-necklace structures. With these assumptions the correlation between the renormalized charges on the polyanion can be written as
$$\overline{q_iq_j}=q_o^2(N\delta _{ij}-1)/(N-1)$$
(5)
where $`q_o`$ is the net charge of the polyion after counterion condensation and the average is done over the conformations of the polyelectrolyte with condensed counterions.
The above physical picture, which finds support in explicit numerical simulations, can be used to calculate the dependence of the early stage dynamics of the collapse process on $`N`$ using the method introduced by Pitard and Orland. The basis of this method is to construct a reference Gaussian Hamiltonian $`H_v(t)`$ with an effective Kuhn length $`a(t)`$
$$H_v(t)=\frac{k_BT}{2a^2(t)}\int _0^N\left(\frac{\partial r_v(s,t)}{\partial s}\right)^2ds.$$
(6)
The value of $`a(t)`$ is determined from the condition that the difference in the size of the polyelectrolyte chain computed using $`H_v`$ and the full theory vanishes to first order in $`\delta H=H_v-H`$ for all $`t`$. The equations of motion for the reference chain specified by $`r_v(s,t)`$ and for $`\chi (s,t)=r(s,t)-r_v(s,t)`$ are given (to first order in $`\chi `$) by
$$\frac{\partial r_v(s,t)}{\partial t}=\frac{D}{a^2(t)}\frac{\partial ^2r_v(s,t)}{\partial s^2}+\eta (s,t)$$
(7)
$$\frac{\partial \chi (s,t)}{\partial t}=\frac{D}{a^2(t)}\frac{\partial ^2\chi (s,t)}{\partial s^2}+D\left[\left(\frac{1}{a_o^2}-\frac{1}{a^2(t)}\right)\frac{\partial ^2r_v}{\partial s^2}+F(r_v(s,t))\right]$$
(8)
where $`F(r(s,t))=-\partial (V_c/k_BT)/\partial r(s,t)`$. The relaxation of each mode is obtained in terms of the Fourier representation $`\stackrel{~}{r}_n(t)=(1/N)\int _0^Ne^{i\omega _ns}r(s,t)ds`$ $`(\omega _n=\frac{2\pi n}{N})`$ and is given by
$$\stackrel{~}{r}_n(t)=\int ^tdt_1G_n(t-t_1)\eta _n(t_1)+G_n(t)\stackrel{~}{r}_n(0)$$
(9)
$$\stackrel{~}{\chi }_n(t)=D\int ^tdt_1G_n(t-t_1)\left[F_n(r_v(t_1))-\omega _n^2\left(\frac{1}{a_o^2}-\frac{1}{a^2(t_1)}\right)\stackrel{~}{r}_n(t_1)\right]$$
(10)
where $`G_n(t)=\mathrm{exp}(-D\int ^t\frac{\omega _n^2}{a^2(t^{\prime })}dt^{\prime })`$.
We assume that at $`t=0`$ the neutral chain is in a $`\mathrm{\Theta }`$-solvent. Therefore $`<\stackrel{~}{r}_n(0)\stackrel{~}{r}_m^{*}(0)>=\frac{Na_o^2}{4\pi ^2n^2}\delta _{mn}`$. The thermal noise in Fourier space satisfies $`<\stackrel{~}{\eta }_n(t)>=0`$ and $`<\stackrel{~}{\eta }_n(t)\stackrel{~}{\eta }_m^{*}(t^{\prime })>=\frac{2D}{N}\delta _{nm}\delta (t-t^{\prime })`$. We also assume that $`a(t)`$ is a slowly varying function of time for $`t\lesssim \tau _{ROUSE}`$, where $`\tau _{ROUSE}`$ is defined implicitly by $`\frac{4\pi ^2D}{N^2}\int _0^{\tau _{ROUSE}}\frac{dt}{a^2(t)}=1`$. Thus, we can let $`G_n(t)=\mathrm{exp}(-D\omega _n^2t/a^2)`$ for long wavelength modes.
The time dependent $`R_g^2(t)`$ can be expressed in terms of the amplitude and the relaxation rate of each mode
$$R_g^2(t)=\sum _n<|\stackrel{~}{r}_n(t)|^2>=\sum _n\frac{1}{N\omega _n^2}[(a_o^2-a^2(t))G_n(t)+a^2(t)].$$
(11)
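Eq.(11) is straightforward to evaluate numerically once $`a(t)`$ is known; the sketch below assumes the slowly varying form $`G_n(t)=\mathrm{exp}(-D\omega _n^2t/a^2)`$ introduced above and a finite mode cutoff:

```python
import numpy as np

def rg_squared(t, a_t, N=100, a0=1.0, D=1.0, nmax=1000):
    """Evaluate Eq. (11) for the squared radius of gyration at time t,
    given the effective Kuhn length a_t = a(t) (e.g. from Eq. (15))."""
    n = np.arange(1, nmax + 1)
    wn = 2.0 * np.pi * n / N                  # mode frequencies
    Gn = np.exp(-D * wn**2 * t / a_t**2)      # slowly varying a(t) assumed
    return np.sum(((a0**2 - a_t**2) * Gn + a_t**2) / (N * wn**2))
```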
The variational parameter $`a(t)`$ is obtained by equating the true value of the square of the radius of gyration $`R_g^2`$ to the one calculated with $`H_v`$. This implies that all the correction terms to $`R_g^2`$ computed using Eq.(6) should vanish. To first order in $`\chi (s,t)`$ the effective dynamic Kuhn length $`a(t)`$ satisfies the self consistent equation
$$\int _0^N<r_v(s,t)\chi (s,t)>ds=0$$
(12)
where the thermal average is done with respect to the noise term in the Langevin equation.
The self consistent equation for $`a(t)`$ is
$$\left(\frac{1}{a_o^2}-\frac{1}{a^2(t)}\right)=\frac{\sum _n\int ^tdt^{\prime }G_n(t-t^{\prime })<\stackrel{~}{r}_n(t)\frac{\partial (\mathrm{\Delta }H/k_BT)}{\partial \stackrel{~}{r}_n(t^{\prime })}>}{\sum _n\omega _n^2\int ^tdt^{\prime }G_n(t-t^{\prime })<\stackrel{~}{r}_n(t)\stackrel{~}{r}_n^{*}(t^{\prime })>}$$
(13)
where $`\mathrm{\Delta }H`$ is the total interaction energy (Coulomb, excluded volume) of the PE.
In the early stage of the coil-to-globule transition, $`t\lesssim \tau _{CLUST}`$, we can assume $`a^2(t)\approx a_o^2`$. The initial driving force for collapse is the counterion-mediated Coulomb attraction. All other forces (such as hydrophobic interactions due to the poor solvent quality) play a less significant role. Thus, we can set $`\mathrm{\Delta }H=V_c`$. With this observation the evaluation of $`<\stackrel{~}{r}_n(t)F^{*}(\stackrel{~}{r}_n(t^{\prime }))>`$ gives
$$<\stackrel{~}{r}_n(t)F^{*}(\stackrel{~}{r}_n(t^{\prime }))>=<\stackrel{~}{r}_n(t)\stackrel{~}{r}_n^{*}(t^{\prime })>\frac{\sqrt{2}}{3\pi ^2}\int \frac{<q(s)q(s^{\prime })>c_n^2(s,s^{\prime })}{(\mathrm{\Omega }(s,s^{\prime },t^{\prime })/2)^{3/2}}dsds^{\prime }$$
(14)
with $`c_n(s,s^{\prime })=e^{i\omega _ns}-e^{i\omega _ns^{\prime }}`$ and $`\mathrm{\Omega }(s,s^{\prime },t)=\frac{1}{3}\sum _lc_l^2(s,s^{\prime })<\stackrel{~}{r}_l^2(t)>`$. The physical picture relating the state of the PE chain (until the formation of the pearl-necklace structure) to a polyampholyte is used to perform an additional average over the "quenched" random charge variables (see Eq.(5)). The resulting equation, when substituted into the right hand side of Eq.(13), yields an expression for the time dependent Kuhn length $`a(t)`$, namely,
$$a^2(t)=a_o^2\left[1-c_1\frac{(Dt)^{3/4}}{k_BT}\left(a_o^{2/5}l_B\frac{N}{N-1}\right)\right]=a_o^2\left(1-\left(\frac{t}{\tau _c}\right)^{3/4}\right)$$
(15)
Since $`\overline{qq^{\prime }}=-q_o^2/(N-1)`$ when $`s\ne s^{\prime }`$, we obtain $`\tau _c\sim (a_o/l_B)^{4/3}a_o^2/D`$, which is nearly independent of the polymer size $`N`$. Note that $`\tau _c`$ is an estimate for the time scale on which the clusters in the metastable pearl-necklace structures form. Because the formation of such clusters occurs locally (i.e. by interaction between monomers that are not separated by long contour lengths), $`\tau _c`$ is expected to be independent of $`N`$. Therefore, we identify $`\tau _c=\tau _{CLUST}`$. In the time range $`\tau _{COND}<t<\tau _{CLUST}`$, the radius of gyration is obtained by substituting Eq.(15) into Eq.(11). We find that the decay of $`R_g(t)`$ can be approximated as
$$R_g^2(t)\approx R_g^2(0)\left(1-(t/\tau _{PE})^{\alpha _{PE}}\right)$$
(16)
with $`\alpha _{PE}=5/4`$ and $`\tau _{PE}=(\frac{\pi Na_o}{12\sqrt{2D}})^{\frac{4}{5}}(\tau _{CLUST})^{\frac{5}{3}}`$. The characteristic time therefore scales as $`\tau _{PE}\sim (Na_o/l_B)^{\frac{4}{5}}(a_o/D)^{\frac{7}{5}}`$.
The formation of local clusters has also been suggested as a mechanism for the collapse of homopolymers in poor solvents. The time scale for cluster formation for a homopolymer has been calculated by Pitard and Orland, who found $`\tau _{HP}\sim (a_o^2/D)(a_o^3/|v_2|)^{4/3}N^{4/3}`$. The counterion-mediated attraction leading to pearl-necklace like structures occurs on a shorter time scale, $`\tau _{PE}\sim N^{4/5}`$. In very poor solvents ($`l_B/a_o<|v_2|^2/a_o^6`$), where hydrophobic interactions dominate the driving force for collapse of the PE, we expect the dynamics to resemble that of a homopolymer.
Dynamics of PA collapse: The theory developed for the PE is directly applicable to describe the dynamics of collapse of the PA, without having to justify the averaging over the quenched random charges (cf. Eq.(5)). The Langevin dynamics for the PA is given by Eqs.(1-4), where the quenched random charges explicitly satisfy Eq.(5). It is known that when the total charge of the PA satisfies $`|Q|\lesssim \sqrt{N}e`$ the chain is collapsed at low temperature regardless of the quality of the solvent. The mathematical description of the dynamics given by Eq.(16) describes the early stages of collapse of the PA. For the PA, described by Eqs.(1-4), there is no counterion condensation. The metastable pearl-necklace structure forms after the random charges on the monomers are turned on, provided $`|Q|\lesssim \sqrt{N}e`$. We predict that the time for forming the metastable pearl-necklace conformations is also given by Eq.(16). The power law decrease of the radius of gyration (Eq.(16)) is limited to the time scale within which the structure develops into several clusters.
Late stages of collapse: For both PA and PE, the pearl-necklace structures merge (see below) at $`t>\tau _{CLUST}`$ to form compact collapsed structures. This occurs by the largest cluster growing at the expense of smaller ones (โpackmanโ effect) which is reminiscent of the Lifshitz-Slyozov description of the kinetics of precipitation in supersaturated solutions. The driving force for this growth is the concentration gradient across the clusters or domains. If this analogy is correct then we expect that the size of the largest cluster $`S(t)`$ to grow as
$$S(t)\sim t^\alpha $$
(17)
with $`\alpha \approx \frac{1}{3}`$. The collapse is complete when $`S(t)\approx R_g(t\rightarrow \mathrm{\infty })\sim a_oN^{\frac{1}{3}}`$, which implies that the characteristic collapse time $`\tau _{COLL}\sim N`$.
In order to validate the proposed mechanism, i.e., the formation of necklace-globule structures and the growth of the largest domain by devouring the smaller ones, we performed Langevin simulations for strongly charged flexible PEs and PAs in sufficiently poor solvents so that the equilibrium structure is the compact globule. In Fig.(1) we display examples of the conformations that are sampled in the dynamics of approach to the globular state starting from $`\mathrm{\Theta }`$-solvent conditions. Both panels (top is for PEs and the lower one is for PAs) show that in the later stages of collapse the largest clusters in the necklace-globule grow and the smaller ones evaporate. This lends support to the proposed Lifshitz-Slyozov mechanism.
In order to estimate $`\alpha `$ (cf. Eq.(17)) we have used molecular dynamics simulations of a polyelectrolyte and a randomly charged polyampholyte and calculated the number of particles that belong to the largest cluster, $`N_S(t)`$, as a function of time. In Fig.(2), the simulation results show the linear increase of $`N_S(t)`$ for times greater than $`t\approx 30\tau `$ for the PA and $`t\approx 100\tau `$ for the PE. Here $`\tau =a_o^2/D`$. Therefore, we estimate $`\alpha \approx \frac{1}{3}`$. The change of slope at long times is due to finite size effects and indicates the completion of the globule formation. Because the quality of the solvent (see the legend in Fig.(2)) is not the same for the PA and the PE, the onset of the linear behavior occurs at different times.
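The largest-cluster size used above can be extracted from simulation snapshots by a simple distance-cutoff clustering; in the sketch below the cutoff value is an illustrative assumption:

```python
import numpy as np

def largest_cluster_size(r, cutoff=1.5):
    """Number of monomers in the largest cluster: monomers closer than
    `cutoff` are assigned to the same cluster (union-find)."""
    N = len(r)
    parent = list(range(N))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]     # path compression
            i = parent[i]
        return i
    for i in range(N):
        for j in range(i + 1, N):
            if np.linalg.norm(r[i] - r[j]) < cutoff:
                parent[find(i)] = find(j)
    roots = [find(i) for i in range(N)]
    return max(roots.count(x) for x in set(roots))
```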
Conclusions: We have presented a unified picture of the collapse dynamics of PEs and PAs. Although the morphologies of the collapsed structures for the PA and the PE are different, our theory shows that the mechanism of approach to the globular state should be similar for both. In particular, both the PA and the PE reach the collapsed conformations via metastable pearl-necklace structures. For the PE the driving force for forming such structures is the counterion-mediated attraction. Charge fluctuations in the PA lead to pearl-necklace structures. The lifetime of such structures is determined by a subtle competition between attractive interactions mediated by counterions and hydrophobic interactions determined by the solvent quality. The interplay between these forces may be assessed by estimating the free energy of necklace-globule structures. The necklace-globule conformation consists of $`n`$ globules with nearly vanishing net charge that are in local equilibrium. The free energy of the $`i^{th}`$ globule is
$$F_i\approx -\frac{4\pi }{3}(\mathrm{\Delta }f)R_i^3+4\pi \sigma R_i^2$$
(18)
where $`R_i`$ is the radius of the $`i^{th}`$ globule and $`\sigma `$ is the surface tension. Note that $`\mathrm{\Delta }f`$ is the same before and after the merger of clusters. The free energy difference between a conformation consisting of two clusters and the conformation in which they are merged is $`\mathrm{\Delta }F\approx (8\pi \sigma -4(2)^{2/3}\pi \sigma )(N/n)^{2/3}`$. The typical charge fluctuation in each globule is $`\sim q_o(N/n)^{1/2}`$. If the Coulomb energy fluctuation of each globule ($`\delta E_{fluct}\sim (q_o^2/a_o)(N/n)^{2/3}`$) is less than the free energy difference between the conformation with two separated clusters and the one where they are merged, then the system spontaneously grows into a large cluster. If the solvent quality is not very poor, i.e. $`(8\pi \sigma -4(2)^{2/3}\pi \sigma )<q_o^2/a_o`$, the lifetime of the metastable necklace-globule can be long, so that the collapse mechanism is controlled by the energy barrier between the metastable state and the globule. In this case, the attractive interaction between the clusters is induced by mobile charge (counterion) fluctuations, just as for the counterion mediated interaction between like charged rods. The morphology of the final collapsed state is also determined by a competition between electrostatic interactions and hydrophobic forces. When the solvent is very poor the collapsed state is amorphous, whereas when the collapse is determined by $`\mathrm{\Delta }F`$ the globular state is an ordered Wigner crystal.
# On the Viewing Angle Dependence of Blazar Variability
## 1 Introduction
The rapid variability often exhibited by blazars and GRBs is widely attributed to formation of internal shocks in the expelled outflow (see e.g., Rees 1978; Romanova & Lovelace 1997; Levinson 1998, in the context of blazars, and Rees & Meszaros, 1994; Eichler 1994; Sari & Piran 1997, in the context of GRBs). Internal shocks are produced as a result of unsteadiness of the source, which ultimately leads to overtaking collisions of different fluid slabs. The observed variability time associated with a single front is on the order of the light travel time across the expelled fluid slab (provided the cooling time is sufficiently short, and that the slab is optically thin and not too thick geometrically), and, therefore, can be as short as the intrinsic timescale (e.g., the dynamical time of the central engine). If indeed associated with the gravitational radius of the putative black hole, this timescale comes out to be of order milliseconds in the case of GRBs, and minutes to hours in the case of blazars, consistent with the temporal substructure seen in these two classes of objects (e.g., Wagner, 1997; Ulrich, et al., 1997; Fishman & Meegan, 1995). The fraction of bulk energy that can be dissipated behind the shocks and, provided the cooling time is sufficiently short, radiated away, depends mainly on the difference in Lorentz factors of the colliding shells. In scenarios that invoke magnetically dominated outflows, effective dissipation of the shocked magnetic field is also required for high radiative efficiency (e.g., Levinson & Van Putten 1997).
The dynamics of internal fronts and the resulting variability pattern depend on the parameters of the expelled fluid and on environmental conditions. In particular, in situations whereby the front moves through an intense, roughly isotropic (in the frame of the central engine) radiation field, as in ERC models, it will be subject to a radiative drag that can affect its dynamics and emission considerably. In the model considered here (see Levinson, 1998 \[Paper I\]; Levinson 1999a \[Paper II\] for details), the created front is assumed to be adiabatic initially. During the adiabatic phase, shortly after its creation and prior to the onset of radiative losses, the front moves at some constant velocity intermediate between that of the colliding fluid slabs, such that the net momentum flux incident through the shocks and, consequently, the net force exerted on the front by the "push" of the exterior fluids vanish. A fraction of the energy dissipated behind the shocks is tapped for the acceleration of electrons to nonthermal energies and the rest to heat the front. The injected electrons then cool adiabatically, owing to the front expansion, and radiatively through synchrotron emission and inverse Compton scattering of external, soft photons (ERC emission). Since the synchrotron emission is isotropic in the comoving frame it does not contribute to momentum losses. However, the ERC emission, which is highly beamed in the front frame, gives rise to a radiative drag that leads to deceleration of the front during the rise of the emitted ERC power. The increasing radiative friction is balanced by an excess momentum transfer to the front from the fluid behind it (the fast fluid). The reason is that as the front decelerates the relative velocity between the front and the fast fluid and, hence, the net momentum flux incident through the reverse shock increases, while the net momentum flux incident through the forward shock decreases. The decelerating front will reach its minimum Lorentz factor roughly when the total ERC power radiated by the front peaks, provided the shock crossing time of the expelled fluid slabs is sufficiently long. If the intensity of external radiation declines with radius, as envisaged here, then the ERC flux radiated by the front and the associated drag will decline as the front moves outwards. This would then lead to re-acceleration of the front, following peak emission, until its initial speed and structure are restored (or until shock crossing of the fluid slabs is completed). The external, soft photons also contribute a large pair production opacity that can lead to absorption of escaping gamma rays and the initiation of pair cascades inside and ahead of the front at early times. This, in turn, will affect the evolution of the high energy spectrum. The synchrotron opacity also changes with time, owing to the front expansion. We stress that the variability in this model reflects the dynamics of the front and the radial profiles of the magnetic field and external photon intensity, and is not due to any explicit time changes of the front parameters or particle injection. To be concrete, in the case of long outbursts the variability is caused by a change in the Thomson and synchrotron opacities during the course of the front that results from the radial variations of external photon intensity and magnetic field.
In the case of short outbursts the variability is due to shock crossing (after which energy deposition in the front ceases) and the subsequent cooling of the hot fluid slabs. The combination of time varying Lorentz factor and optical depth effects should give rise to a certain dependence of the variability pattern on the viewing angle (Levinson 1999b). It is the purpose of this paper to explore this orientation effect within the framework of the radiative front model developed in Papers I and II.
## 2 Intensity and Observed Flux
In Papers I and II we analyzed the structure and dynamics of a radiative front and computed the temporal evolution of the angle averaged flux radiated during its course. In this section we generalize our previous treatment to allow for the calculation of the angular dependence of the emission. The structure and dynamics of the front are computed as before using the model developed in Paper I, but the equations governing the evolution of the radiated flux have been modified in a manner described below. The model assumes that a constant fraction of the power dissipated behind the shocks is injected in the form of nonthermal electron distribution. The rate of energy dissipation inside the front depends, in turn, on the relative velocities between the exterior fluids and the front and is computed self consistently. The electron acceleration time is assumed to be much shorter than the cooling and light crossing times, so that it does not affect the evolution of the radiated ERC and synchrotron spectra. The energy distributions of electrons, gamma-rays and synchrotron photons inside the front are computed by solving the appropriate kinetic equations, which are coupled to the MHD equations governing the front dynamics through the injection term and the energy and momentum loss terms. As shown in Papers I and II, the energy distribution of emitting electrons is determined essentially by the pair cascade and cooling processes and is insensitive to the form of injected electron spectrum, provided the acceleration process is efficient and that the Thomson opacity contributed by the external radiation field is large enough. In the examples presented below the injected spectrum was taken to be a power law with roughly equal energy injection rate per log energy.
We approximate the front (see fig. 1) as a cylindrical section with an axial length $`\mathrm{\Delta }X`$ and cross sectional radius $`d`$, and denote by $`\beta _c`$, $`\beta _{s+}`$ and $`\beta _{s-}`$ the velocities of the front, the forward shock and the reverse shock with respect to the injection frame, respectively, and by $`\mathrm{\Gamma }_c`$, $`\mathrm{\Gamma }_{s\pm }`$ the corresponding Lorentz factors. We suppose that in addition to its radial expansion, which is computed from the model, the front expands also sideways at some velocity, taken to be a fraction $`\psi \ll 1`$ of $`\beta _c`$. This then yields $`d(t)=\psi r(t)`$, where $`r(t)=r_o+c\int \beta _cdt`$ is the position of the front (more precisely, the contact discontinuity) at time $`t`$. We stress that the shock geometry invoked here is probably unrealistic, since the perimeter of the forward shock moves at velocity $`\beta _c\sqrt{1+\psi ^2}`$, which may violate causality under certain choices of parameters. The shock is more likely to be curved, or even corrugated as a result of instabilities. We do not expect, however, our results to be strongly dependent upon the shock geometry, but merely on the characteristic velocities. We further assume that the electron distribution is isotropic and homogeneous inside the front, and denote by $`j_\nu (\mu ,t)`$ and $`\kappa _\nu (\mu ,t)`$ the emission and absorption coefficients of some radiation process (e.g., synchrotron or ERC emission), as measured in the injection frame. In the case of synchrotron emission, the latter assumption implies that $`j_\nu `$ and $`\kappa _\nu `$ are also isotropic and homogeneous inside the front (but not necessarily the intensity). This is not true, however, for the ERC emission (Dermer, 1995), since the comoving distribution of scattered photons is highly anisotropic.
Consider now a photon emitted at time $`t`$ at some position inside the front in the direction $`\widehat{\mathrm{\Omega }}`$, and denote by $`z_t`$ the distance between the location from which the photon is emitted and the forward shock, and by $`R_t^\mathrm{\Omega }`$ the distance along the projection of $`\widehat{\mathrm{\Omega }}`$ on the plane perpendicular to the front axis (that is, along the vector $`\widehat{\mathrm{\Omega }}-\mu \widehat{z}`$, where $`\mu =\widehat{\mathrm{\Omega }}\widehat{z}`$) between the emission point and the front boundary (see fig. 1). Now, the photon (if not absorbed) will escape from the front through the forward shock if $`c\overline{t}\mathrm{sin}\theta <R_t^\mathrm{\Omega }`$ (ignoring the transverse expansion of the front), and through the sides when the opposite inequality holds. Here $`\mathrm{sin}\theta =\sqrt{1-\mu ^2}`$, and $`\overline{t}`$ is given implicitly by $`\overline{t}=(z_t/c+\int ^{\overline{t}}\beta _{s+}dt)/\mu `$. In terms of the time averaged shock velocity, $`\overline{\beta }_{s+}`$, one obtains $`c\overline{t}=z_t/(\mu -\overline{\beta }_{s+})`$, so that the condition that the photon will escape through the forward shock reads: $`z_t<R_t^\mathrm{\Omega }(\mu -\overline{\beta }_{s+})/\mathrm{sin}\theta `$. Note that photons emitted into directions $`\mu <\overline{\beta }_{s+}`$ can only escape from the sides.
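This escape criterion reduces to a simple geometric test; a sketch, with $`\overline{\beta }_{s+}`$ the time averaged forward shock speed, is:

```python
import math

def escapes_through_forward_shock(z_t, R_t, mu, beta_s):
    """Geometric escape test from the text: a photon emitted a distance
    z_t behind the forward shock, with transverse clearance R_t along its
    direction mu = cos(theta), exits through the forward shock iff
    z_t < R_t * (mu - beta_s) / sin(theta)."""
    if mu <= beta_s:              # such photons never catch up with the shock
        return False
    sin_theta = math.sqrt(1.0 - mu * mu)
    if sin_theta == 0.0:          # exactly on-axis: always exits forward
        return True
    return z_t < R_t * (mu - beta_s) / sin_theta
```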
The intensity crossing the front boundary along the direction $`\widehat{\mathrm{\Omega }}`$, at some position $`\mathrm{\Sigma }`$ on the boundary surface, is given, under the above assumptions, by
$$I_\nu (\mathrm{\Sigma },\widehat{\mathrm{\Omega }},t)=\int _0^{t_\mathrm{\Sigma }}j_\nu (\widehat{\mathrm{\Omega }},t-\chi )e^{-\tau _\nu (\chi )}cd\chi ,$$
(1)
where $`\tau _\nu (\chi )=c\int _0^\chi \kappa _\nu (\chi ^{\prime })d\chi ^{\prime }`$ is the optical depth traversed by a photon emitted at time $`t-\chi `$, and $`t_\mathrm{\Sigma }`$ is the light crossing time of the expanding front along the corresponding ray, measured in the frame of the central engine. The flux emitted from the boundary surface at some location is given by $`I_\nu (\mathrm{\Sigma },\widehat{\mathrm{\Omega }},t)(\widehat{\mathrm{\Omega }}\widehat{n}-\beta _\mathrm{\Sigma }\widehat{n})`$, where $`\widehat{n}`$ is the normal to the surface, and $`\beta _\mathrm{\Sigma }`$ is the velocity of the surface at the corresponding position (for instance $`\beta _\mathrm{\Sigma }=\beta _{s+}\widehat{z}`$ for any point on the forward shock surface). This must be multiplied by the time dilation factor, $`(1-\beta _\mathrm{\Sigma }\widehat{\mathrm{\Omega }})^{-1}`$, in order to obtain the observed flux. One finds,
$$\mathcal{F}_\nu =\frac{(1+Z)}{D_L^2}\int _\mathrm{\Sigma }I_\nu (\mathrm{\Sigma },\widehat{\mathrm{\Omega }},t)\frac{(\widehat{\mathrm{\Omega }}\widehat{n}-\beta _\mathrm{\Sigma }\widehat{n})}{1-\beta _\mathrm{\Sigma }\widehat{\mathrm{\Omega }}}d\mathrm{\Sigma }.$$
(2)
Here $`D_L`$ is the luminosity distance and $`Z`$ is the corresponding redshift. We note that the time evolution of $`\beta _\mathrm{\Sigma }`$ and the remaining front parameters is computed using the front equations derived in Paper I. It is also worth noting that the observed time change, $`dt_{obs}=(1-\beta _\mathrm{\Sigma }\widehat{\mathrm{\Omega }})dt`$, may be different for different emitting surfaces, so that the observed time structure may reflect, to some extent, the geometry of the front. The above treatment modifies the analysis presented in Papers I and II to account for retardation effects associated with the front expansion and with rapid time changes of the emissivity and opacity. We find that this modification does not give rise to significant alterations of the results obtained in Papers I and II for the evolution of the angle averaged flux, but does improve slightly the calculations of the light curves observed at relatively large viewing angles.
As a simple example consider a non-expanding blob moving with a velocity $`\beta _c`$, and let the volume emissivity be time independent. Then $`\beta _\mathrm{\Sigma }=\beta _c\widehat{z}`$, and equations (1) and (2) yield for the optically thin flux $`\mathcal{F}_\nu \propto \frac{jV}{(1-\beta _c\mu )}`$, where $`V=c\int t_\mathrm{\Sigma }d\mathrm{\Sigma }`$ is the volume of the front. In terms of the comoving volume, $`V^{\prime }=\mathrm{\Gamma }_cV`$, and comoving emissivity, the observed flux reduces to the familiar expression: $`\mathcal{F}_\nu =\frac{(1+Z)}{D_L^2}\mathcal{D}_c^3j_\nu ^{\prime }V^{\prime }`$, with $`\mathcal{D}_c=[\mathrm{\Gamma }_c(1-\beta _c\mu )]^{-1}`$ being the corresponding Doppler factor. As a second example, consider the flux emitted from an expanding, stationary front in the forward direction, viz., $`\mu =1`$. In that case the entire flux is emitted through the forward shock. The corresponding time dilation factor is then $`(1-\beta _{s+})^{-1}`$, and the light crossing time along the front's axis is $`t_\mathrm{\Sigma }=\overline{t}=\mathrm{\Delta }X/(1-\beta _{s+})`$. From equations (1) and (2) we obtain, again in the optically thin limit,
$$\mathcal{F}_\nu =\frac{(1+Z)}{D_L^2}\mathcal{D}_{s+}\mathcal{D}_c^2(\mathrm{\Gamma }_{s+}/\mathrm{\Gamma }_c)V^{\prime }j_\nu ^{\prime },$$
with $`\mathcal{D}_{s+}`$ being the Doppler factor associated with the forward shock. This example illustrates the effect of the expansion on the emitted flux. The enhancement of the forward flux by a factor $`(\mathrm{\Gamma }_{s+}/\mathrm{\Gamma }_c)^2`$ is entirely due to the growth of the front's volume.
The integration of eq. (1) for a specific process requires the determination of the corresponding volume emissivity $`j_\nu `$. The determination of the synchrotron emissivity is straightforward, since the emission is isotropic in the front frame, and is described in Paper II. The ERC emissivity is calculated using the head-on approximation; that is, the direction of the scattered photon is taken to be along the direction of the scattering electron. The electron distribution in the injection frame, denoted by $`n_e(E_e,\mu ,t)`$, is obtained at each time step by an appropriate Lorentz transformation of the comoving electron distribution which, as mentioned above, is assumed to be isotropic.
$$j_{ERC}(E_\gamma ,\mu ,t)=\int n_e(E_e,\mu ,t)\eta _{c\gamma }(E_\gamma ,E_e,t)dE_e$$
(3)
where $`\eta _{c\gamma }`$ is the corresponding redistribution function, and is given explicitly in Blandford & Levinson (1995). Note that $`\eta `$ is independent of $`\mu `$ by virtue of the assumed isotropy of the ambient radiation field. Finally, since we consider only cases for which gamma-ray production is dominated by ERC emission, we neglect the SSC emissivity in eq. (1) (see §4 for further discussion).
## 3 Results
Equations (1) and (2) and the front equations derived in Paper I were integrated for different values of $`\mu `$. In the following examples, the Lorentz factors of the fluids ahead of and behind the front and the rest-frame Alfven 4-velocity have been chosen to be, respectively, 5, 20, and 10. The magnetic pressure has been taken to decline as $`r^{-p}`$, and the intensity of background radiation as $`f(r)/r^2`$. A rapid magnetic field dissipation inside the front with the same decay constant as in the previous papers has been invoked. As a check, we computed the angular distribution of the observed flux for a front with a roughly time independent Lorentz factor (low radiative efficiency case), and compared the results with the analytic expressions, $`\mathcal{F}\propto \mathcal{D}^\delta `$, with $`\delta =3+\alpha `$ and $`\delta =4+2\alpha `$ for synchrotron and ERC emission, respectively (Dermer 1995). The analytic results were reproduced to a very good accuracy by the numerical model.
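For a steady source the check above amounts to comparing fluxes at different viewing angles through the Doppler factor alone; a helper that evaluates $`F(\mu )/F(\mu =1)`$ in this constant Lorentz factor limit is:

```python
import numpy as np

def relative_peak_flux(mu, gamma, alpha, erc=False):
    """F(mu)/F(mu=1) for a steady blob: D^(3+alpha) for synchrotron and
    D^(4+2*alpha) for ERC emission (Dermer 1995)."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    doppler = lambda m: 1.0 / (gamma * (1.0 - beta * m))
    delta = 4 + 2 * alpha if erc else 3 + alpha
    return (doppler(mu) / doppler(1.0))**delta
```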
Fig. 2 depicts the time profile of the front Lorentz factor, $`\mathrm{\Gamma }_c`$, obtained for sufficiently long outbursts, in the sense that the shock travel time across the fluid slab is longer than the time change associated with the radial variations of magnetic field and ambient radiation intensity (see Paper I for more details). The radial profiles of magnetic field and external radiation field in this example are $`r^{-2}`$ ($`p=2`$, $`f(r)=1`$). A steeper profile of the ambient radiation intensity leads quite generally to a larger acceleration of the front following peak emission, owing to the faster decrement of the radiative friction experienced by the front. The times of peak emission at different bands, and the corresponding Lorentz factors, are indicated in the figure, and it is seen that the emission at different energies has different Doppler factors. This is essentially a consequence of the combination of dynamical and opacity effects. In addition to this dependence of the Doppler factor on wavelength, there will also be a difference in the beaming patterns of synchrotron and ERC emission, owing to the difference in the angular dependence of the corresponding volume emissivities (see below).
The resultant beaming patterns at different energies are delineated in fig. 3, where the relative dependences of the peak fluxes (normalized to their values at $`\mu =1`$) on viewing angle are exhibited. The two gamma-ray bands shown correspond to the total gamma-ray flux above and below the threshold energy at which the pair production opacity (at $`r=r_o`$) is roughly unity. The difference in beaming patterns between synchrotron and ERC emission is clearly seen. Also evident is the stronger dependence of the optically thick emission on viewing angle. Although a model for the quiescent emission is required in order to calculate the amplitude of variations, the dependence of observed flux on $`\mu `$ shown in fig. 3 suggests that objects oriented to the observer at relatively large viewing angles may have preferentially larger amplitude outbursts at short wavelengths (optical/UV), and for small enough viewing angles also at gamma-ray energies below the break of the gamma-ray spectrum. It is also conceivable that intense outbursts at these energies will be followed by no activity at all at low frequencies.
The angular dependence of the observed flux reflects, in this model, also the radial profiles of magnetic field and ambient radiation intensity. This is because both the opacity and the front dynamics depend on the radial decrement of these quantities. Quite generally we find that the dependence of the ratio of optically thin and optically thick synchrotron fluxes on viewing angle becomes stronger for steeper ambient intensity profiles. The results obtained for an ambient radiation intensity with an exponential profile, $`f(r)=\mathrm{exp}(-r/r_1)`$ (with $`r_1=10r_o`$ in this example), that may reflect the density profile of the surrounding gas that re-processes or scatters the nuclear radiation across the front, are shown in Fig. 4. As seen, the dependence of the optically thin fluxes on viewing angle is insensitive to the form of $`f(r)`$. However, the peak of the optically thick synchrotron flux declines more steeply with increasing viewing angle. This is because the front re-accelerates much faster in this case, owing to the rapid drop in radiative drag, so that the Doppler factor of the optically thick bands changes more rapidly. The behavior of the optically thick ERC emission is more complicated; steeper ambient intensity profiles render the pair production opacity ahead of the front smaller, which, in turn, leads to earlier ERC emission. This counteracts the effect associated with the faster acceleration and, therefore, the angular dependence of this component is more sensitive to the choice of parameters. In fact, in the example shown in Fig. 4 the decline of the total gamma-ray flux above the threshold energy for pair production at $`r_o`$ is slower in the case of ambient intensity with an exponential profile, in contrast to the behavior of the low-frequency synchrotron emission.
The radial profile of the magnetic field affects mainly the synchrotron flux. Steeper profiles give rise to shorter delays of the low-frequency emission and, depending on the front acceleration scale, lead to a weaker dependence of the peak flux on viewing angle.
Fig. 5 presents sample light curves at various energies, computed for different viewing angles. As seen, for optically thick bands the shape of the light curve depends on $`\mu `$, tending towards a much steeper decline at larger values of $`\mu `$, thereby leading to a light curve that appears more symmetric. Moreover, the time of peak emission and flare duration decrease with increasing viewing angle, as can be seen from the figure. This is again due to the evolution of the Doppler factor caused by the re-acceleration of the front. The optically thin bands show little dependence of flux decay time on viewing angle, as expected. We note that in situations where the ejected slabs are thin enough, or the ambient intensity has a much steeper radial profile, the flares will also tend to have a roughly symmetric shape.
In the case of sufficiently short outbursts, shock crossing is completed before front re-acceleration sets in. The dependence of the variability pattern on viewing angle would then be different than for long outbursts, and may depend on the cooling time of emitting electrons in the heated fluid slab. If only the fast slab is short, the reverse shock will decay quickly while the forward shock will continue to propagate outward, similar to a GRB blast wave. This may give rise to somewhat different characteristics of the low-frequency emission. If both slabs are thin, then a lack of activity at long wavelengths, and a lower energy cutoff at gamma-ray energies are expected (see Paper II for a detailed discussion). The temporal behavior will be further complicated in situations in which multiple fronts with small enough duty cycle are created. This can lead to blending of different flares (as often seen at radio wavelengths; e.g., Aller, et al. 1985; Valtaoja, et al. 1999) and, in the case of ejection of a thin slab, to a collision of the decelerating slab, following shock crossing, with a newly expelled one, and the ultimate formation of a new shock in the radiating slab. Such episodes are far more difficult to simulate. Nonetheless, rapid, large amplitude flares should reflect the features associated with a single front.
## 4 Conclusions
We have considered the angular dependence of the observed variability pattern produced by internal fronts propagating through an ambient radiation field. We have shown that, for sufficiently long outbursts, the combination of dynamical effects caused by the radiative drag and optical depth effects gives rise to a strong dependence of the observed flare's properties on source orientation.
To be more concrete, the shape of the light curves of optically thick emission reflects, at sufficiently large viewing angles, the temporal evolution of the Doppler factor, and at small viewing angles the evolution of the radiated power density. As a consequence, the time of peak emission and flare duration decrease with increasing viewing angle. Moreover, the flare appears to be more symmetric at larger viewing angles. The time evolution of optically thin emission is insensitive to source orientation, since it originates when the front is near its minimum Lorentz factor.
The time evolution of the beaming factors also renders the characteristics of correlated emission sensitive to source orientation. Because the source is inhomogeneous, the fluxes at different energies originate from different locations along the course of the front and, therefore, have different beaming cones due to the varying Lorentz factor. As a result the ratio of peak fluxes of the low-frequency (self-absorbed) and high-frequency synchrotron emission decreases with increasing viewing angle. The beaming cone of ERC emission is narrower than that of the synchrotron emission, leading to a somewhat more sensitive dependence of the gamma-ray flux on viewing angle than the high-frequency synchrotron flux. One implication of the above results is that sources that are oriented at relatively large viewing angles to the observer may exhibit events whereby strong gamma-ray and/or IR-optical-UV flares are followed by little or no activity of the low frequency (radio-to-millimeter) flux.
Finally, we note that the neglect of SSC emission may not be justified at large viewing angles, even in cases where the total power radiated is dominated by ERC emission. This is because the beaming cone of ERC photons is narrower than that of SSC photons (Dermer 1995). Thus, it is conceivable that in some regime of parameter space, the origin of observed gamma-rays also depends on the orientation of the source with respect to the observer.
This research was supported by The Israeli Science Foundation. |
# The DELPHI Silicon Tracker in the global pattern recognition
## 1 Introduction
Since 1990 DELPHI has been operating a Silicon Tracker in the barrel region close to the beam pipe. This detector has been upgraded three times to follow the different tracking requirements for LEP 1 and LEP 2 as well as to improve the tracking performance. In its final upgraded version the Silicon Tracker covers nearly the full polar angle down to $`10.5^{\circ }`$. It is the innermost detector of the tracking system of the DELPHI detector, which is one of the most complex because of the presence of the Ring Imaging Cherenkov Counters (RICH) in the central (barrel) and the endcap (forward) regions.
The global track reconstruction software was developed in parallel with the upgrades of the Silicon Tracker. Early versions of the pattern recognition added the Silicon Tracker hits to tracks which were reconstructed using the outer tracking detectors. In 1995 the so-called "$`R_b`$ crisis" led to a careful study of reconstruction problems which were limiting the $`b`$-tagging performance. It was realised that the method of using the Silicon Tracker information at the end of the reconstruction was not sufficient for precise $`b`$-tagging and for an efficient vertex reconstruction in $`\tau `$ and heavy flavour decays. New track reconstruction algorithms starting from the Silicon Tracker information were needed to solve the problems. The development of a new reconstruction package ended in 1999 with the complete integration of the forward part of the Silicon Tracker (Very Forward Tracker, VFT) into the global pattern recognition.
This article is structured as follows. At the beginning a brief introduction to the Silicon Tracker and the DELPHI outer tracking system is given. Then the track reconstruction concepts and algorithms are discussed in the two following sections. In the third section a description of the precision tracking in the barrel part is given starting from the reconstructed Silicon Tracker clusters and the measured track elements in the outer tracking detectors. In the fourth section the use of the VFT hits in the forward tracking is discussed. The performance of the track reconstruction software in the barrel and forward regions is shown.
## 2 The layout of the Silicon Tracker and the DELPHI outer tracking system
In the DELPHI standard coordinate system the $`z`$ axis is along the electron direction, the $`x`$ axis points towards the centre of LEP and the $`y`$ axis points upwards. The polar angle to the $`z`$ axis is called $`\theta `$ and the azimuthal angle around the $`z`$ axis is called $`\varphi `$. The radial coordinate is $`R=\sqrt{x^2+y^2}`$.
Figure 1 shows an $`Rz`$ cross section of a quarter of the DELPHI Silicon Tracker. A detailed description of the detector (in its final setup) can be found in . It is divided into a barrel part (Vertex Detector, VD) and the VFT in the forward direction. For the installation around the beam pipe the mechanical structure is divided into two half shells in $`R\varphi `$.
The VD consists of 3 concentric layers (called Closer, Inner and Outer) at average radii of 6.6 cm, 9.2 cm and 10.6 cm, respectively. All three layers cover polar angles of $`25^{\circ }-155^{\circ }`$; the Inner layer extends the coverage to $`21^{\circ }-159^{\circ }`$. The Closer as well as the Outer layer is made of 24 modules with a 15 % azimuthal overlap, while only 20 modules are used for the Inner layer (figure 2). Each module in the Outer and the Inner layer consists of 8 sensors; the Closer layer modules are shorter and consist of only 4 sensors. Half of each module is bonded in series and read out at the outer ends. All Closer and Outer layer modules have double sided readout to measure $`Rz`$ and $`R\varphi `$, while only the "extreme" 2 sensors in the Inner layer are double sided; the "central" sensors measure only $`R\varphi `$. The n-side lines of one sensor of the "flipped" modules in the Closer and the Inner layer are connected to the p-side ones of the adjacent sensor.
The hit resolution in $`R\varphi `$ is $`8\mu `$m. In the $`Rz`$ plane the readout pitch is changed for plaquettes at different angles to give the best resolution perpendicular to the track, varying between $`10\mu `$m and $`25\mu `$m for tracks at different inclinations.
The VFT consists of two pixel layers, the first one being located inside the VD, and two mini strip layers. It covers the angular region of $`11^{\circ }`$–$`26^{\circ }`$ and $`154^{\circ }`$–$`169^{\circ }`$. The pixel dimension is $`330\times 330\mu `$m<sup>2</sup>. The two layers of back-to-back mini strip detectors have a readout pitch of $`200\mu `$m and one intermediate strip. To help the pattern recognition the mini strip modules are mounted at a small stereo angle.
A detailed description of the DELPHI apparatus can be found in . A schematic view of the detector is shown in figure 3. The DELPHI outer tracking system is divided into a barrel and a forward part.
In the barrel part precise tracking information is provided by the Time Projection Chamber (TPC) and the Inner (ID) and Outer (OD) Detectors. The ID consists of a jet chamber to perform a precise $`R\varphi `$ measurement and of 5 concentric layers of straw tubes. It is followed by the TPC, which covers polar angles between $`21^{\circ }`$ and $`159^{\circ }`$. The TPC single point resolution for charged particles is $`250\mu `$m in $`R\varphi `$ and $`880\mu `$m in $`z`$. The Outer Detector is mounted behind the Barrel RICH to give additional tracking information at a radius of $`2.02`$ m. The five layers of drift cells cover polar angles between $`42^{\circ }`$ and $`138^{\circ }`$ and provide $`R\varphi `$ and $`z`$ information.
In the forward region the tracking is improved by two planar drift chambers in front of and behind the Forward RICH. Their polar angle coverage is $`11^{\circ }`$–$`33^{\circ }`$ (FCA) and $`11^{\circ }`$–$`36.5^{\circ }`$ (FCB), respectively. The Forward RICH provides an additional track point from the ionisation of the charged particle inside the drift box.
## 3 Precision tracking in the barrel part
The detectors of the outer tracking system in the barrel region measure the tracks of charged particles with high redundancy. Local pattern recognition algorithms are used to reconstruct the track elements in the different detectors. The VD cluster reconstruction algorithm is documented in . A careful treatment of the VD hit information and good simulation description are needed together with an optimised track reconstruction software in order to exploit the high precision tracking information in the barrel region.
### 3.1 Internal VD alignment
The internal alignment of the Silicon Tracker is based on a mechanical survey, during which the components and the whole structure were measured by optical or mechanical means. The survey was done in two steps . First a measurement of each individual module was done to fix the position of a sensor within a module to a level of 1-2 $`\mu `$m. Then the position of each module in the two half shells was determined with a relative precision of about 10 $`\mu `$m.
The detector may significantly deform after the survey during transportation and installation. Therefore the final precision is obtained by an offline alignment using tracks from charged particles. The first step after the installation of the detector is the determination of the relative position of the half shells w.r.t. each other using charged tracks crossing the top or bottom overlaps of the two halves.
Several effects which influence the detector alignment are taken into account in the alignment procedure . The Lorentz angle effect leads to a shift of $`6\mu `$m in $`R\varphi `$ ($`B=1.2`$ Tesla). Due to the flipped module design the resulting shift for a reconstructed hit is opposite for sensors where the p-side is facing the beam pipe or the outer tracker. Moreover, in the study of the Lorentz angle effect it has been found that the barycentre of the holes and electrons created by the charged particle passing through the detector does not correspond exactly to the mid-plane of the detector. A 10-20 $`\mu `$m shift of the barycentre of holes and electrons in the radial position towards the p-side needs to be taken into account. Finally, bowing of individual modules along $`z`$ of up to 150 $`\mu `$m in radius has been observed. This effect is related to stress during installation, and the size of the bowing varies with temperature and humidity.
The final alignment is done using charged tracks in $`Z\to \mu \mu `$ and hadronic events. The Outer layer (see figure 2) is taken as a master layer for the whole detector. Its modules are aligned w.r.t. each other using hadronic tracks passing the 15 % azimuthal overlap between adjacent modules. The Closer layer is aligned w.r.t. the Outer layer using muons in $`Z\to \mu \mu `$ events, where the momentum of each muon track is fixed to the nominal value. The Inner layer is aligned to the other two using again tracks from hadronic events. This procedure is iterated a few times until a stable alignment is found. At the end an overall twist of the detector around the beam axis is measured using the geometrically signed impact parameter of tracks w.r.t. the beam spot as a function of their polar angle $`\theta `$.
### 3.2 Shaken VD alignment for the simulation
The intrinsic $`R\varphi `$ hit resolution of a VD sensor is 5 $`\mu `$m, while in DELPHI a resolution of $`8\mu `$m has been achieved. The difference reflects imperfections in the internal alignment and small deformations in the flatness of the sensors.
The simulation needs to reflect the actual precision obtained for the data. For example a simple hit smearing would not allow for correlations between tracks, because in real data tracks hitting the same sensors are affected in the same way by residual problems. Therefore a scheme of shaking the detector position in the reconstruction of simulated events is used to model the actual resolution. Such a scheme also includes effects of misalignment at the track reconstruction level into the simulation. The position of each sensor is varied from its nominal position on an event by event basis . This is needed in order not to have any visible pattern in the misalignment of the simulation. Such a pattern would have been cured by the alignment procedure on the real data. The RMS of the shifts applied were e.g. 6.4 $`\mu `$m in $`R\varphi `$, 7.3 $`\mu `$m in $`z`$ and 20 to 37 $`\mu `$m in $`R`$ for the simulation corresponding to the 1995 real data. Typical rotations are of the order of 0.15 - 0.4 mrad.
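As an illustration, such an event-by-event “shaking” can be sketched as follows; the function name, the data layout and the use of a single radial and rotational RMS are assumptions made here for compactness, while the numerical values are the ones quoted above for the 1995 simulation.

```python
import numpy as np

# RMS values quoted above for the simulation of the 1995 data
# (the radial RMS actually varies between 20 and 37 um; a single
# representative value is used in this sketch).
RMS_UM = {"rphi": 6.4, "z": 7.3, "r": 30.0}   # microns
RMS_ROT = 0.3e-3                              # rad; typical 0.15-0.4 mrad

def shake_sensors(nominal, rng):
    """Draw one coherent perturbation per sensor and per event, so that
    all hits on a sensor move together and the track-track correlations
    present in real data are reproduced in the simulation."""
    shaken = []
    for sensor in nominal:  # sensor: dict with 'pos' (cm) and 'angles' (rad)
        dpos = rng.normal(0.0, [RMS_UM["rphi"], RMS_UM["z"], RMS_UM["r"]]) * 1e-4
        dang = rng.normal(0.0, RMS_ROT, size=3)
        shaken.append({"pos": sensor["pos"] + dpos,
                       "angles": sensor["angles"] + dang})
    return shaken

rng = np.random.default_rng(seed=42)  # a fresh perturbation is drawn per event
```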
### 3.3 Quality cuts on cluster signal over noise
It is necessary to remove noise clusters from the event to avoid tails in the impact parameter resolution function. Cuts on cluster signal over noise are applied to improve the purity of good hits without losing much efficiency. The VD is made of several different sensors which have different signal over noise performance. Typical values range from 10 to 18 and close to 30 for the p-side of the single metal Outer layer . Hence the cuts on signal over noise are tuned for each module and vary from 6 to 15. The cuts are applied to the hits at association time and depend on the track topology to improve the efficiency.
Tracks having $`R\varphi `$ hits in 3 out of 3 layers are very tightly constrained and therefore the chance of picking up a random noise hit is very small. For such tracks it is required that only 2 out of 3 associated $`R\varphi `$ hits pass the signal over noise cut.
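In code, the topology-dependent cut can be sketched as below; the function and variable names are illustrative, and the per-module thresholds stand for the tuned values between 6 and 15.

```python
def rphi_hits_accepted(hits, sn_threshold, three_layer_track):
    """hits: list of (module_id, signal_over_noise) for one candidate.
    Tightly constrained 3-out-of-3-layer tracks only need 2 of their
    3 R-phi hits to pass the module-dependent signal-over-noise cut;
    all other topologies must pass the cut on every hit."""
    n_pass = sum(sn >= sn_threshold[module] for module, sn in hits)
    required = 2 if three_layer_track and len(hits) == 3 else len(hits)
    return n_pass >= required
```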
### 3.4 Efficiency correction for the simulation
The description of the hit efficiency of the detector is an important aspect of the simulation. Dead modules and dead channels need to be taken into account too. A sensor by sensor tuning of the hit efficiency is done in the simulation by randomly removing hits from the event in order to match the apparent efficiency to the one in the real data. Modules which are inefficient for only parts of the data taking period are allowed for by dropping the module information from a corresponding fraction of the simulated events.
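A minimal sketch of the random hit removal (the attribute names are invented for illustration):

```python
def thin_simulated_hits(hits, eff_data, eff_sim, rng):
    """Keep each simulated hit with probability eff_data/eff_sim of its
    sensor, so that the apparent per-sensor efficiency matches the real
    data; assumes eff_sim >= eff_data for every sensor."""
    return [h for h in hits
            if rng.random() < eff_data[h.sensor] / eff_sim[h.sensor]]
```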
### 3.5 Kalman Filter track fit
The task of the track fit is to determine the track parameters from the combination of VD hits and track elements measured in the outer tracking system. The track fit algorithm used by DELPHI is a Kalman Filter , which is a fast recursive algorithm. It is implemented using the weight matrix approach.
A Kalman Filter is an estimator for a linear system, while a track in a solenoid field is described by a helix. A Taylor expansion around the reference trajectory is used as a starting point to obtain a linear system. The fit is iterated to ensure good convergence.
The DELPHI track fit takes into account the effects of multiple scattering and energy loss of particles in the material. A simplified description of the detector material is sufficient for the purpose of track fitting. The geometry of the detector material of the outer tracker is approximated by a sequence of surfaces, which are either cylinders around or planes perpendicular to the beam pipe. For each of these surfaces an apparent thickness is specified in terms of radiation length and energy loss of a minimal ionising particle (for a particle crossing at $`90^{\circ }`$).
A different approach is used for the VD material in the fit. Multiple scattering and energy loss in the material of the VD and the beam pipe are dominant contributions to the impact parameter resolution. Therefore a more detailed description of the material distribution is used for the track extrapolation and fitting inside the VD. The description reflects the complicated support structures and the overlap of modules in the individual layers in the corresponding azimuthal regions as can be seen in figure 2.
In the track fit the effect of the multiple scattering is taken into account by increasing the error contour of the track extrapolation after crossing the material surface. The momentum dependent effect of the energy loss is taken into account by changing the curvature of the reference trajectory used for the Taylor expansion in the fit.
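The measurement update and the multiple-scattering noise can be sketched as follows. DELPHI implements the mathematically equivalent weight-matrix formulation; the gain-matrix form is shown here only because it is shorter, and all names are illustrative.

```python
import numpy as np

def kalman_update(x, C, H, m, V):
    """One Kalman Filter measurement update: track parameters x with
    covariance C are combined with measurement m (covariance V) through
    the projection H from parameter to measurement space."""
    S = H @ C @ H.T + V                    # innovation covariance
    K = C @ H.T @ np.linalg.inv(S)         # gain matrix
    r = m - H @ x                          # residual
    x_new = x + K @ r
    C_new = (np.eye(len(x)) - K @ H) @ C
    chi2 = float(r.T @ np.linalg.inv(S) @ r)
    return x_new, C_new, chi2

def add_scattering_noise(C, theta0, J):
    """Inflate the covariance after crossing a material surface; theta0
    is the projected multiple-scattering angle for the surface's
    apparent thickness, and J maps the two scattering angles onto the
    track parameters."""
    return C + theta0 ** 2 * (J @ J.T)
```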
Another important feature of the DELPHI fit is the logic to remove outliers. The fit is able to remove up to 3 measurements from a track candidate if it fails a fit $`\chi ^2`$ probability cut of 0.1 %. This is a very effective filter to remove wrong associations of hits to tracks. A ranking of detectors to be removed is used in order not to remove the most precise measurement (e.g. the TPC track element) from the track. Called from the track search packages, the fit always retains the track element which was used as a starting point to reconstruct the track.
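A sketch of this outlier loop, with the ranking reduced to a protected-detector set and all names assumed for illustration:

```python
def fit_with_outlier_removal(track, fit, prob_cut=1e-3, max_removals=3,
                             protected=("TPC", "seed")):
    """Refit after dropping the worst-contributing measurement (never a
    protected one, e.g. the TPC element or the search seed) until the
    fit chi2 probability exceeds 0.1% or 3 measurements were removed."""
    for _ in range(max_removals):
        result = fit(track)
        if result.prob > prob_cut:
            return result
        removable = [m for m in track.measurements
                     if m.detector not in protected]
        if not removable:
            break
        track.measurements.remove(max(removable, key=lambda m: m.chi2))
    return fit(track)
```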
### 3.6 Optimised track search algorithms
The first version of a DELPHI track search was based on the track elements found in the TPC. These track elements were extrapolated to the ID jet chamber and the OD to associate additional hits. The output of the searches was a sample of candidates. The track parameters for all of the candidates were determined by the track fit and bad combinations were removed. Remaining ambiguous associations were resolved by a two stage process selecting good tracks. No VD information was used to further constrain the ambiguity decision. The VD hits were associated afterwards. In a first step all tracks were extrapolated to the VD layers and the $`R\varphi `$ hits were associated track by track; in the second step the $`Rz`$ hits were associated. After the association all tracks were fitted to include the VD information in the track parameters. Any mistake made in the linear chain of reconstruction steps resulted in a problem for the following steps. This led to rather unstable results which furthermore were not reproduced in the simulation. The performance of the package was strongly dependent on the track density, and therefore problems were more frequent in events with heavy quarks.
The new track reconstruction software uses a completely different approach. Now the VD is used as a starting point for the track search, because it is the most precise detector and has the best two track resolution of all tracking detectors in DELPHI. There are two new track search algorithms: one uses all combinations of TPC tracks and 2 or 3 associated $`R\varphi `$ hits in the VD , the other starts with ID jet chamber hits plus VD $`R\varphi `$ hits . The algorithms were designed in $`R\varphi `$ only, without using VD $`Rz`$ hit information, because the Silicon Tracker had no double sided readout for half of the LEP 1 data taking period. In both new searches bad combinations are filtered using the track fit. Remaining candidates are extrapolated to the other detectors to search for possible associations of additional hits. Each association is tested again using the track fit outlier logic. All ambiguous track combinations are then fed into a new ambiguity processor to resolve the full event. This processor will be discussed later in this paper.
The new reconstruction software uses the VD $`Rz`$ hits, which are present for the last two years of LEP 1 data taking and for the complete LEP 2 data set, to improve the resolution. The $`Rz`$ hits are associated to the resolved tracks found using only the $`R\varphi `$ hit information. All possible associations of $`Rz`$ hits to tracks are considered and filtered using the full track fit. The ambiguities are then resolved in a second run of the global event ambiguity processor.
#### 3.6.1 Secondary interactions in material
Searches for vertices from hadronic interactions, from $`\gamma `$ conversions and from decays of $`K_s^0`$ and $`\mathrm{\Lambda }`$ are part of the new tracking software . Figure 4 shows an example of a hadronic interaction in the detector material in front of the TPC. The TPC track elements of secondary particles produced due to such interactions or decays are a problem for the track reconstruction, because wrong association of VD hits to secondary tracks would disturb the correct association of VD hits to primary tracks.
Therefore the vertex searches are called before the track searches to reconstruct such secondary vertices using only the TPC track elements. All track elements associated to the secondary vertices are removed from the event before the full reconstruction of primary tracks. No VD hits are associated to these TPC track elements. A dedicated track search is then performed to reconstruct the tracks which are pointing to the hadronic interaction vertices . This search uses the remaining unassociated hits in the VD and the ID jet chamber after the reconstruction of primary tracks which did not cause a hadronic shower before the TPC. The tracks measured only in the VD or the VD and the ID jet chamber are then linked to the secondary vertices to fully reconstruct the track-vertex structure. Finally elastic interactions and decays in flight of pions or kaons are reconstructed.
### 3.7 The concept of โexclusionsโ
The result of track searches is a set of ambiguous track candidates with many possible associations of individual hits to different track candidates. Also the local pattern recognition of the different detectors can create ambiguous hit combinations. For example in the VFT mini strip detector space points are reconstructed out of hits on the back-to-back module by combining the measurements in both orientations, so that hits from $`n`$ tracks on a module lead to $`n(n-1)`$ mirror images. In the ID jet chamber the left/right ambiguity can not be resolved using only the detector information itself. All such ambiguities need to be resolved in the process of the track reconstruction. It is beneficial to leave the decisions to the stage of the global event solution, since at this stage the full information of the reconstructed track candidates can be used to minimise mistakes.
All results from the different reconstruction packages are stored in the DELPHI event database . The database structure allows for so called “logical exclusions” between objects like hits or track candidates. An “exclusion” signals that two objects use conflicting or common detector information and that for the final solution of the event such conflicts need to be resolved.
### 3.8 Event ambiguity processing
The ambiguity solution is a combinatorial problem. The task of the ambiguity processor is to decide about the association of (VD) hits and to select the best tracks out of the set of mutually “excluded” candidates found by the search algorithms. The design of the DELPHI ambiguity processor was done in order to find a balance between performance and CPU consumption.
The ambiguity processor maximises a “score” function for a given event. The score of each track in the solution is determined by the number of hits associated to the track and the quality of the fit. A simple algorithm to resolve the event can start by selecting the track with the highest score. The hits associated to that track are removed from all other candidates. This implies refitting the candidates from which a hit has been dropped. The list of candidates is therefore changing in the course of the process. The process is iterated by selecting the next best track until no more candidates are left over. This algorithm is very fast, but any mistake at the beginning propagates through the rest of the event analysis. Another algorithm, which does not have this problem, would be to create all possible lists of tracks, which contain no “exclusions” anymore, in the same way as before. Here the list with the highest score would be selected. This algorithm is limited by combinatorics, because the number of lists increases very rapidly with the number of candidates.
The DELPHI ambiguity processor is a mixture of both algorithms. It is a recursive algorithm, which in each step subdivides the event into sets of mutually “excluded” tracks to resolve them independently. For each set all possible lists of tracks are tried. One track after the other is taken out of the set and each time the subset is resolved in the next recursion level. For each recursion the maximum possible score of the subset is calculated to truncate combinations below the current maximum. A fall back solution is implemented, which uses the simpler algorithm for a set in case it is not resolved after more than 2 minutes or the recursion depth exceeds 9 levels.
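The recursion can be sketched as a branch-and-bound search over the conflict sets. Everything below — names, data structures, and the simple optimistic bound — is an illustrative reconstruction of the strategy described above, omitting the timeout and recursion-depth fall-back.

```python
def resolve(cands, conflicts, score, best=float("-inf")):
    """Return the conflict-free subset of track candidates with the
    highest summed score. Branch on keeping/dropping the first
    candidate; prune whenever even accepting every remaining candidate
    (the optimistic bound) cannot beat the best score found so far."""
    if not cands:
        return [], 0.0
    if sum(score(t) for t in cands) <= best:
        return None, float("-inf")            # pruned branch
    first, rest = cands[0], cands[1:]
    # Branch 1: keep `first`, drop everything it excludes.
    sub, s = resolve([c for c in rest if c not in conflicts[first]],
                     conflicts, score, best - score(first))
    keep = ([first] + sub, score(first) + s) if sub is not None \
        else (None, float("-inf"))
    # Branch 2: drop `first`.
    sub2, s2 = resolve(rest, conflicts, score, max(best, keep[1]))
    drop = (sub2, s2) if sub2 is not None else (None, float("-inf"))
    return keep if keep[1] >= drop[1] else drop
```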
Additional protections are needed. Sub-tracks created during the processing are rejected if they are only generated by splitting a long track. A list of bad tracks is used to reject detector combinations of poor quality or high risk of being fake.
The scoring function is tuned to optimise the track reconstruction efficiency and the hit association purity at the same time. For each track a score of 100 is assigned, while a detector measurement associated to the track is given a score between 1 and 20, depending on the quality of the measurement, and a logarithm of the $`\chi ^2`$ probability of the track fit is added to disfavour bad track candidates.
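In pseudo-Python, with invented attribute names, the score reads:

```python
import math

def track_score(track):
    """100 per track, plus 1-20 per associated measurement according to
    its quality, plus log(chi2 probability) to disfavour bad fits."""
    s = 100.0
    s += sum(m.quality_weight for m in track.measurements)   # each in 1..20
    s += math.log(max(track.chi2_prob, 1e-300))              # guard log(0)
    return s
```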
The ambiguity processor is used three times in the track reconstruction code. It is called for the first time to resolve the tracks including the VD $`R\varphi `$ hits, a second time to resolve the association of the $`Rz`$ hits and finally to resolve ambiguities in the search dedicated to reconstruct tracks before interactions with the material.
### 3.9 Results of the new barrel track reconstruction package
The new central tracking has been successfully used for the final reprocessing of the full LEP 1 data set and for the processing of the LEP 2 data. An excellent reconstruction quality has been achieved and the precision of many DELPHI physics results has been improved.
The impact parameter resolution for charged tracks from hadronic $`Z^0`$ events has been measured to be:
$`\sigma _{IP_{R\varphi }}`$ $`=`$ $`{\displaystyle \frac{71\mu m}{p\times \mathrm{sin}^{3/2}\theta }}\oplus 28\mu m,`$ (1)
$`\sigma _{IP_z}`$ $`=`$ $`{\displaystyle \frac{75\mu m}{p\times \mathrm{sin}^{5/2}\theta }}\oplus 39\mu m.`$ (2)
In both cases the first term, which depends on the momentum $`p`$ and the track polar angle $`\theta `$, is the contribution due to multiple scattering; the second term is the asymptotic value reflecting the measurement error. The average miss distance at the interaction point between the two muons in $`Z\to \mu \mu `$ events is measured to be 33 $`\mu `$m in $`R\varphi `$ and 51.6 $`\mu `$m in $`Rz`$ . In figure 5 the ratio of the impact parameter distributions of reconstructed tracks from hadronic $`Z^0`$ events in real data and simulation are shown separately for $`R\varphi `$ and $`Rz`$. The resolutions are correctly described and the tails in the distributions due to wrong association of VD hits to tracks are reproduced in the simulation.
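Reading (1) and (2) as sums in quadrature with $`p`$ in GeV — both conventional for such parametrisations, but assumptions made here — the resolutions can be evaluated as:

```python
import numpy as np

def sigma_ip_rphi_um(p, theta):
    """Eq. (1): R-phi impact parameter resolution in microns,
    multiple-scattering term and measurement asymptote in quadrature."""
    return np.hypot(71.0 / (p * np.sin(theta) ** 1.5), 28.0)

def sigma_ip_z_um(p, theta):
    """Eq. (2): same for the z projection."""
    return np.hypot(75.0 / (p * np.sin(theta) ** 2.5), 39.0)

# a 1 GeV track at 90 degrees: about 76 um in R-phi and 85 um in z
print(sigma_ip_rphi_um(1.0, np.pi / 2), sigma_ip_z_um(1.0, np.pi / 2))
```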
The most dramatic improvement due to the new reconstruction algorithms is visible in the $`b`$-tagging performance . The average number of tracks per hadronic $`Z^0`$ decay used to determine the $`b`$ content increased from 9.5 to 14.3. Figure 6 shows a comparison of the $`b`$-tagging efficiency as a function of $`b`$ purity for different experiments. After the reprocessing DELPHI outperforms all other LEP experiments. Only SLD has a better resolution due to its smaller beam pipe and the smaller dimensions of the beam spot.
Figure 7 shows the mass difference between the $`D^{*+}`$ and the $`D^0`$ from the decay $`D^{*+}\to D^0\pi ^+`$, where the $`D^0`$ decays into $`K^{-}\pi ^+\pi ^{-}\pi ^+`$. The signal is shown for both processings using the old and the new track reconstruction code. In both cases the $`D^{*+}`$ decays are reconstructed using the same analysis code and the same set of cuts. A gain of a factor 2.5 in efficiency is observed for such complicated decay modes.
An example of an application of the secondary vertex search is shown in figure 8. The mass signal of $`\mathrm{\Sigma }\to \pi n`$ decays is reconstructed from the $`\mathrm{\Sigma }`$ tracks measured in the VD and the tracks of the decay pions. The vertices have been found by the search for decays in flight.
Figure 9 shows an example of a reconstructed high multiplicity multi-jet event taken at a centre-of-mass energy of 202 GeV in the year 1999. No significant degradation of the $`b`$-tagging resolution has been observed for such complicated events.
## 4 VFT standalone tracking and the acceptance in the forward region
The track reconstruction strategy in the forward region is completely different from that in the barrel. In this region the VFT is needed to improve the acceptance for charged particles. Standalone track reconstruction in the VFT is mandatory to measure the tracks before most of the particles shower in the material of the end rings of the barrel detectors.
### 4.1 VFT standalone track reconstruction
The VFT (see figure 1) consists of two layers of pixel detectors and two layers of back-to-back mini strip detectors. It gives on average 2 to 3 space points per track in the polar angle range from $`21^{\circ }`$ to $`10.5^{\circ }`$. An important aspect of the pixel detector is the low random noise rate of $`0.5\times 10^{-6}`$ after an offline suppression of 0.3 % of systematically noisy pixels . This ensures a small rate of fake hits and consequently a good purity in the reconstruction. The reconstructed clusters in both views of a back-to-back mini strip module are combined to obtain 3 dimensional hit information.
The standalone tracking is done in 3 steps. First all combinations of 3 layers are tested and the track parameters are determined using a helix fit. One requires the reconstructed track elements to point towards the primary interaction region. The primary interaction region has a dimension of 0.77 cm in $`z`$ and of 150 and 10 $`\mu `$m in $`x`$ and $`y`$, respectively . In the next step the track finding efficiency is improved using all left over two hit combinations in both pixel layers. The point of the average primary interaction position is added to the combinations to determine the track parameters. Finally a similar strategy is used for combinations of space points in both mini strip layers. A $`2^{\circ }`$ stereo angle and flipping of the module orientation leads to a relative angle of $`4^{\circ }`$ between the strips in the same projection of both layers. This relative angle is used to remove fake combinations of mirror images which no longer point towards the primary interaction region. The result of the standalone tracking as well as all hits are then used in the forward search to reconstruct the full track.
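The $`R\varphi `$ part of the 3-point helix fit amounts to the circle through three space points plus a beam-spot pointing requirement, sketched below. The formulas are the standard circumcircle expressions; the names and the tolerance handling are assumptions.

```python
import numpy as np

def circle_through(p1, p2, p3):
    """Centre and radius of the circle through three transverse-plane
    points -- the R-phi projection of a helix."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    centre = np.array([ux, uy])
    return centre, np.hypot(ux - ax, uy - ay)

def points_to_beam(centre, radius, beam_xy, tol):
    """Accept the combination only if the circle passes within `tol`
    of the average beam-spot position."""
    return abs(np.linalg.norm(centre - np.asarray(beam_xy)) - radius) < tol
```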
### 4.2 The VFT in the forward track search
The forward track reconstruction is limited by the material of the end rings of the barrel detectors, which amounts to 1.4 radiation lengths in front of the electromagnetic calorimeter. Furthermore, tracks drop out of the acceptance of the central detectors with decreasing polar angle, as shown in figure 10, where the number of measurements can be seen as a function of polar angle for all detectors but the forward chambers. The latter contribute 18 additional points over the full range, but they operate behind the material of the end rings of the barrel detectors.
The VFT, which is close to the interaction region, is the basis of the forward track finding. The track search algorithm uses as starting points as many different seeds as possible. These seeds are either VFT tracks from the standalone tracking, combinations of VFT hits with ID jet chamber hits, or other detector combinations like VD and ID jet chamber or TPC. In total 12 different combinations are tried. Starting from each of the seeds a simple road search is done to look for possible hits in the other detectors to be associated to the track candidate. On the resulting list of all possible hits from all detectors a search is done for track combinations including the maximum number of detectors. The Kalman Filter track fit with its outlier logic serves as the final filter to select good candidates, which are fed into the global event solution to resolve ambiguities. At this stage of the reconstruction no tracks measured only in the VFT or in the VFT and the ID jet chamber are considered. A dedicated search is done afterwards to reconstruct these tracks out of the remaining tracks found by the VFT standalone track reconstruction.
### 4.3 Results of the new forward tracking
Figure 11 shows the improvement of the track finding efficiency for primary tracks from the interaction region as a function of the polar angle $`\theta `$ after including the VFT into the forward track reconstruction. A clear gain is visible down to $`14^{\circ }`$ for tracks reconstructed using the VFT. Below $`13^{\circ }`$ the ID jet chamber drops out of the tracking and the VFT standalone tracks are used to extend the efficiency plateau down to $`11^{\circ }`$. An example of a $`\gamma \gamma `$ event is shown in figure 12. In the event one electron is measured in the Small Angle Tile Calorimeter. Most of the tracks are reconstructed using the VFT hit information.
## 5 Conclusion
The DELPHI Silicon Tracker is used successfully in two different tracking situations. In the barrel region the VD provides precision tracking information. Optimised algorithms were developed to reconstruct the charged tracks starting from the VD hits. The result of this new reconstruction software is an excellent data quality which gives DELPHI the best $`b`$-tagging performance of all LEP experiments. In the forward region different reconstruction strategies based on the VFT are used to significantly improve the track reconstruction efficiency down to a polar angle of $`10.5^{\circ }`$.
## Acknowledgements
I would like to thank P. Bruckman, K. รsterberg, M.E. Pol and C. Weiser for providing me with material for the presentation. I also like to thank K. รsterberg for reading the article and sending constructive comments. |
no-problem/0001/hep-ph0001137.html | ar5iv | text | A MONTE CARLO FOR BFKL PHYSICS (Presented at the International Workshop on Linear Colliders, Sitges, Spain, April 28-May 5, 1999)
## 1 Introduction
High energy $`e^+e^{-}`$ collisions can lead to the scattering of virtual photons emitted by the initial electron and positron. When the virtuality $`Q^2`$ of these photons is small compared to the center-of-mass energy $`\widehat{s}`$ of the $`\gamma ^{*}\gamma ^{*}`$ system, the scattering cross section is dominated by contributions in which the photons split into quark-antiquark pairs, with t-channel gluon exchange. The emission of additional soft gluons from the t-channel gluon gives rise to large logarithms that lead to corrections in powers of
$$\alpha _s(Q^2)\mathrm{ln}(\widehat{s}/Q^2),$$
(1)
which is of order one in this kinematic regime. These logarithms must therefore be resummed in the calculation of the cross section. The events that result from this process are characterized by electron-positron pairs with a large rapidity separation, and hadronic activity in between.
The large-logarithm resummation is performed by the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation , whose analytic solution gives a rise in the cross section $`\widehat{\sigma }\sim (\widehat{s})^\lambda `$, with $`\lambda =4C_A\mathrm{ln}2\alpha _s/\pi \approx 0.5`$. The BFKL equation applies not only to virtual photon scattering as described above, but also to dijet production at large rapidity difference in hadron-hadron collisions and to forward jet production in lepton-hadron collisions.
The BFKL equation can be solved analytically, but to do so requires giving up energy-momentum conservation, because it involves integration over arbitrarily large momenta of emitted gluons. Furthermore, because the sum over gluons is implicit, only leading-order kinematics can be included. This leads to predictions that do not correspond to any real experimental situation. In principle the corrections due to kinematic effects are subleading, but in practice, as we will see below, they can be quite important.
## 2 A Monte Carlo for BFKL Physics
The solution to the problem of lack of kinematic constraints in analytic BFKL predictions is to unfold the implicit gluon sum to make it explicit, and to implement the result in a Monte Carlo event generator . This is achieved as follows. The BFKL equation contains separate integrals over real and virtual emitted gluons. We combine the “unresolved” real emissions – those with transverse momenta below some minimum value (small compared to the momentum threshold for measured jets) – with the virtual emissions. Schematically, we have
$$\int _{virtual}+\int _{real}=\int _{virtual+real,unres.}+\int _{real,res.}$$
(2)
We perform the integration over virtual and unresolved real emissions analytically.
We then solve the BFKL equation by iteration, and we obtain a differential cross section that contains an explicit sum over emitted gluons along with the appropriate phase space factors. In addition, we obtain an overall form factor due to virtual and unresolved emissions. The subprocess cross section is
$$d\widehat{\sigma }=d\widehat{\sigma }_0\times \sum _{n\ge 0}f_n$$
(3)
where $`f_n`$ is the iterated solution for $`n`$ real gluons emitted and contains the overall form factor. It is then straightforward to implement the result in a Monte Carlo event generator. Emitted real (resolved) gluons appear explicitly, so that conservation of momentum and energy is based on exact kinematics for each event. In addition, we include the running of the strong coupling constant. See for further details.
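A toy illustration of this iterated structure — not the actual generator — in which the resolved-emission sum of Eq. (3) becomes an explicit sample of gluon momenta (the fixed coupling and all parameter values are assumptions made for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHA_BAR = 3.0 * 0.2 / np.pi   # alpha_s * C_A / pi, frozen in this toy
MU = 1.0                        # resolution scale separating unresolved
                                # (analytically integrated) emissions

def sample_resolved_gluons(delta_y, qt_max):
    """Draw the number of resolved gluons (Poisson, with the mean fixed
    by the analytically integrated unresolved+virtual form factor) and
    give each one an explicit (qt, y, phi), so that exact
    energy-momentum conservation can then be imposed event by event."""
    mean_n = ALPHA_BAR * delta_y * np.log(qt_max / MU)
    n = rng.poisson(mean_n)
    qt = MU * (qt_max / MU) ** rng.random(n)        # flat in ln(qt)
    y = np.sort(rng.random(n)) * delta_y            # ordered in rapidity
    phi = 2.0 * np.pi * rng.random(n)
    return qt, y, phi
```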
## 3 Results and Prospects
We have used this BFKL Monte Carlo approach to study dijet production at hadron colliders in detail . The most important conclusion is that the effects of kinematic constraints can be very large, because they suppress radiation of the gluons that give rise to what are considered to be characteristic BFKL effects. As a result, the predictions can change substantially. This is illustrated in Fig. 1, which shows the dijet cross section at the Tevatron for two different center-of-mass energies as a function of the dijet rapidity difference. The naive analytic BFKL prediction lies above the leading QCD curve, as expected. But when kinematic constraints are included, the BFKL prediction gets pushed below that of leading-order QCD. Clearly it is important to incorporate kinematic constraints in our BFKL predictions.
We are currently completing the application of our BFKL Monte Carlo to virtual photon scattering in $`e^+e^{-}`$ collisions and in forward jet production at HERA. In both cases we expect kinematic constraints to be large and to lead to some suppression of BFKL effects.
## Acknowledgments
Work supported in part by the U.S. Department of Energy, under grant DE-FG02-91ER40685 and by the U.S. National Science Foundation, under grant PHY-9600155.
|
no-problem/0001/nlin0001068.html | ar5iv | text | # Modulated Amplitude Waves and the Transition from Phase to Defect Chaos
## Abstract
The mechanism for transitions from phase to defect chaos in the one-dimensional complex Ginzburg-Landau equation (CGLE) is presented. We introduce and describe periodic coherent structures of the CGLE, called Modulated Amplitude Waves (MAWs). MAWs of various period $`P`$ occur naturally in phase chaotic states. A bifurcation study of the MAWs reveals that for sufficiently large period, pairs of MAWs cease to exist via a saddle-node bifurcation. For periods beyond this bifurcation, incoherent near-MAW structures occur which evolve toward defects. This leads to our main result: the transition from phase to defect chaos takes place when the periods of MAWs in phase chaos are driven beyond their saddle-node bifurcation.
Spatially extended systems can exhibit, when driven away from equilibrium, irregular behavior in space and time: this phenomenon is commonly referred to as spatio-temporal chaos . The one-dimensional complex Ginzburg-Landau equation (CGLE):
$$\partial _tA=A+(1+ic_1)\partial _x^2A-(1-ic_3)|A|^2A,$$
(1)
describes pattern formation near a Hopf bifurcation and has become a popular model to study spatiotemporal chaos . As a function of $`c_1`$ and $`c_3`$, the CGLE exhibits two qualitatively different spatiotemporal chaotic states known as phase chaos (when $`A`$ is bounded away from zero) and defect chaos (when the phase of $`A`$ displays singularities where $`A=0`$). The transition from phase to defect chaos can either be hysteretic or continuous; in the former case, it is referred to as $`L_3`$, in the latter as $`L_1`$ (Fig. 1). Despite intensive studies , the phenomenology of the CGLE and in particular its โphaseโ-diagram are far from being understood. Moreover, it is under dispute whether the $`L_1`$ transition is sharp, and whether a pure phase-chaotic (i.e. defect-free) state can exist in the thermodynamic limit .
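For orientation, Eq. (1) can be integrated with a standard pseudospectral split-step scheme; the sketch below uses the parameter values $`c_1=0.65`$, $`c_3=2.0`$ mentioned later for the $`L_3`$ region, while the step sizes and the first-order treatment of the cubic term are choices made here, not the algorithm of the references.

```python
import numpy as np

def cgle_step(A, dt, c1, c3, k2):
    """Strang split-step for dA/dt = A + (1+i c1) A_xx - (1-i c3)|A|^2 A:
    exact linear half-steps in Fourier space around a first-order
    pointwise update of the cubic term."""
    lin = np.exp(0.5 * dt * (1.0 - (1.0 + 1j * c1) * k2))
    A = np.fft.ifft(lin * np.fft.fft(A))
    A *= np.exp(-dt * (1.0 - 1j * c3) * np.abs(A) ** 2)
    return np.fft.ifft(lin * np.fft.fft(A))

L, N = 512.0, 1024
k2 = (2.0 * np.pi * np.fft.fftfreq(N, d=L / N)) ** 2
rng = np.random.default_rng(1)
A = 1.0 + 1e-2 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
for _ in range(40000):
    A = cgle_step(A, 0.025, c1=0.65, c3=2.0, k2=k2)
print("min |A| =", np.abs(A).min())   # zeros of |A| signal defects
```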
It is the purpose of this paper to elucidate these issues by presenting the mechanism which creates defects in transient phase chaotic states. Our analysis consists of four parts: (i) We describe a family of Modulated Amplitude Waves (MAWs), i.e., pulse-like coherent structures with a characteristic spatial period $`P`$. (ii) A bifurcation analysis of these MAWs reveals that their range of existence is limited by a saddle-node (SN) bifurcation. For all $`c_1,c_3`$ within a certain range, we define $`P_{SN}`$ as the period of the MAW for which this bifurcation occurs. (iii) We show that for $`P>P_{SN}`$, i.e., beyond the SN bifurcation, near-MAW structures display a nonlinear evolution to defects. It is found that, in phase chaos, near-MAWs with various $`P`$โs are created and annihilated perpetually.
The transition to defect chaos takes place when near-MAWs with $`P>P_{SN}`$ occur in a phase chaotic state. (iv) Finally, instabilities to splitting of resp. interaction between MAWs are identified as the relevant processes which locally decrease resp. increase $`P`$ in phase chaos. We will argue that the SN curve for $`P\to \mathrm{}`$ is a lower bound (see Fig. 1) for the transition from phase chaos to defect chaos.
From a general viewpoint, our analysis shows that there is no collective behavior that drives the transition. Instead, strictly local fluctuations drive local structures beyond their SN bifurcation and create defects.
(i) MAWs as coherent structures - By coherent structures we mean uniformly propagating structures of the form
$$A(x,t)=a(x-vt)e^{i\varphi (x-vt)}e^{-i\omega t},$$
(2)
where $`a`$ and $`\varphi `$ are real-valued functions of $`z:=x-vt`$. Such structures play an important role in various dynamical regimes of the CGLE . The substitution of Ansatz (2) into the CGLE leads to a set of three coupled ODEs for $`a`$, $`b=da/dz`$ and $`\psi =d\varphi /dz`$ . The MAWs correspond to limit-cycles of these ODEs, or equivalently, spatially periodic solutions of the CGLE. The MAWs occur in a two parameter family which we choose to parametrize by their spatial period $`P`$ and their average phase gradient $`\nu :=(1/P)\int _0^Pdz\,\psi `$. Some examples of MAWs are shown in Fig. 2b and Fig. 3. Only solutions for which $`\nu =0`$ are considered here; the reason for this will be discussed in (iii). To compute the MAWs and their bifurcations, we have used the software package AUTO94 to solve the ODEs for fixed $`P`$ and $`\nu `$.
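The three ODEs are not written out in the text; splitting Ansatz (2) into real and imaginary parts under the conventions of Eq. (1) gives the right-hand side below (derived here, so the sign conventions are ours, not necessarily the authors'). A quick consistency check: with $`\omega =-c_3`$ the homogeneous state $`(a,b,\psi )=(1,0,0)`$ is a fixed point, as used in (ii).

```python
def maw_rhs(z, u, c1, c3, v, omega):
    """a' = b; b' and psi' follow from the real/imaginary parts of the
    CGLE after substituting A = a e^{i phi} e^{-i omega t}, z = x - v t.
    Assumes a > 0 (MAW amplitudes stay bounded away from zero)."""
    a, b, psi = u
    g = 1.0 + c1 * c1
    da = b
    db = psi ** 2 * a - (v * b + c1 * v * psi * a
                         + (1.0 + c1 * omega) * a
                         + (c1 * c3 - 1.0) * a ** 3) / g
    dpsi = (c1 * v * b - 2.0 * g * psi * b - v * psi * a
            - (omega - c1) * a - (c3 + c1) * a ** 3) / (g * a)
    return [da, db, dpsi]
```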
(ii) MAW range of existence - MAWs with $`\nu \ne 0`$ bifurcate from unstable plane waves in the CGLE. We focus on the $`\nu =0`$ case, i.e., on the homogeneous oscillation $`A(x,t)=e^{ic_3t}`$. This solution becomes Benjamin-Feir (BF) unstable at $`c_1c_3=1`$, beyond which all plane waves are unstable (Benjamin-Feir-Newell (BFN) criterion) . In the ODEs, the fixed point $`(a,b,\psi )=(1,0,0)`$ that corresponds to the homogeneous solution undergoes a Hopf bifurcation (HB) upon increasing $`c_1`$ and $`c_3`$. For infinite $`P`$ the Hopf bifurcation occurs for $`c_1c_3=1`$, while for smaller $`P`$ the Hopf bifurcation occurs for larger $`c_1`$ and $`c_3`$. The sequence of bifurcations for fixed $`P=50`$ is illustrated in Fig. 2a. The square symbol denotes the Hopf bifurcation, and the resulting solutions have drifting velocity $`v=0`$. Via a secondary drift pitchfork (DP) bifurcation (diamond) the MAWs acquire $`v\ne 0`$. For the relevant parameters, i.e., sufficiently small $`\nu `$ and large $`P`$, both bifurcations are supercritical ; the amplitude modulations grow away from these bifurcations. The MAWs
undergo a saddle-node (SN) bifurcation (triangle) when $`c_1`$ or $`c_3`$ are sufficiently increased. The upper branch returns far back into the BF stable region of the CGLE; the recently discovered “homoclinic holes” are MAWs of this upper branch in the limit $`P\to \mathrm{}`$. The spatial profiles of MAWs on the upper (II) and lower (I) branches and SN are shown in Fig. 2b.
The SN curves in the $`c_1`$–$`c_3`$ parameter plane have been computed for various spatial periods $`P`$. For given parameters $`c_1`$ and $`c_3`$, we define $`P_{SN}`$ as the period for which a saddle-node bifurcation occurs. We find, roughly, that for larger $`P`$ this SN occurs for smaller values of $`c_1,c_3`$ (see Fig. 1).
To summarize: a family of coherent, periodic MAW solutions of the CGLE has been obtained. The range of existence of these solutions is limited by a SN bifurcation for large $`c_1,c_3`$.
(iii) Beyond the Saddle Node - In Fig. 3 the relevance of the SN for defect generation is illustrated. In Fig. 3a we show the time evolution of a MAW-like initial condition in a periodic system of size $`L>P_{SN}`$. While for $`L<P_{SN}`$ we obtain coherent MAWs, for $`L>P_{SN}`$ incoherent dynamics occurs: the amplitude modulation and drifting velocity grow until defects are formed. Extensive tests show that defects are always generated for MAW-like initial conditions when $`L>P_{SN}`$. In Fig. 3b,c the relevance of this defect generating mechanism for chaotic states is illustrated in a large system of size $`L=512`$ with coefficients close to the $`L_3`$ transition. The transient phase chaotic state (Fig. 3b) contains local structures which can come arbitrarily close to one-period MAWs. Fig. 3c shows a snapshot of a spatial profile of $`|A|`$ in a phase chaotic state; parts of this profile can be approximated
by a MAW with appropriate $`P`$. The phase gradient $`\nu `$ averaged between peaks of the amplitude is always close to zero; this is the reason why we focused on $`\nu =0`$ MAWs. Defects appear when one of these MAWs acquires a period larger than $`P_{SN}`$ (Fig. 3b). This illustrates the main result: the transition to defect chaos occurs when a phase chaotic state contains pulses with peak to peak distances larger than $`P_{SN}`$.
To test the generality of this picture, we have carried out extensive numerical simulations of Eq. (1) near the transition lines $`L_1`$ resp. $`L_3`$, adopting an integration algorithm developed in , in systems with sizes ranging from $`L=100`$ to $`L=5000`$ and integration times up to $`5\times 10^6`$. The distribution of peak-to-peak distances $`p`$ of the phase gradients has been determined. Even though the phase chaotic state is not everywhere MAW-like, we found that occurrences of large values of this “local” $`p`$ were approximated well by MAW profiles. Defects occurred in systems with $`L\ge 512`$ if and only if $`p>P_{SN}`$. Since large $`p`$’s are most “dangerous”, the maximum value of $`p`$, $`p_{max}`$, is the relevant quantity here. An example of $`p_{max}`$ as a function of $`c_1`$ near $`L_3`$ is shown in Fig. 4 (squares); as soon as $`p_{max}`$ crosses the SN curve, defects occur.
One may worry whether $`p_{max}`$ is a well-defined quantity, especially in the thermodynamic limit. For larger system sizes and integration times $`p_{max}`$ increases; however, the apparent transition where defects occur shifts accordingly. For example, we found in our simulations that for $`c_3=2.0`$, the critical value of $`c_1`$ approximates $`0.65`$, while Ref. finds, for shorter integration times, a critical value $`0.68`$. The fact that $`p_{max}`$ (slowly)
increases for larger systems/longer times is in agreement with earlier assertions that there is no sharp transition to defect chaos . We have not been able to establish an upper bound for the $`p`$’s occurring in phase chaos; therefore we conjecture that the SN line for $`P\to \mathrm{}`$ provides a lower boundary for the transition from phase to defect chaos.
(iv) MAW stability - Of course, the laminar patches that occur in MAWs of large period are linearly unstable, and large P-MAWs have only a small probability to occur. To get some further insight in the behavior of MAWs, we have calculated the linear stability properties of the MAWs. We start with a system of size $`L=P`$ and periodic boundary conditions. Both MAW branches have neutral modes corresponding to translational and phase symmetries. The eigenvalue associated with the SN is positive for solutions on branch II and negative for MAWs on branch I. Apart from these 3 purely real eigenvalues, the stability spectrum consists of pairs of complex conjugate eigenvalues.
In what follows the lower branch I is considered exclusively. For small enough $`P`$, all eigenvalues $`\lambda _i<0`$, but when we increase $`P`$, MAWs become unstable to finite wavenumber perturbations. By using a Bloch Ansatz, we extended the stability analysis to systems with $`n`$ identical pulses ($`L=nP`$). For $`n>1`$, new instabilities may appear. The shape of these eigenmodes suggests that the instabilities lead to splitting of resp. interaction between adjacent MAWs; a nonlinear analysis confirms this. These instabilities are the relevant processes which locally decrease resp. increase $`p`$, thus inhibiting or enhancing the generation of defects. The splitting and interaction mechanism is very similar to the cell splitting and instabilities one encounters in the Kuramoto-Sivashinsky equation .
The results of the stability analysis are summarized in Fig. 4 and 5. It is important to stress here that there is no qualitative difference between the behavior of MAWs near the $`L_3`$ and the $`L_1`$ transition.
The eigenvalues with largest real part on the connected curve in Fig. 5a,b correspond to “splitting” modes; Fig. 5c,d displays the nonlinear evolution that occurs when this mode is unstable. Clearly, this instability tends to reduce the spatial periods $`p`$ and prevents MAWs from crossing the SN boundary. Above a critical value for $`c_1`$ ($`c_3`$) the splitting modes are stable for all $`P`$ (Fig. 4). In this case the period of the MAWs can grow until $`P>P_{SN}`$ is reached and defects are created.
The eigenvalues labeled by open squares in Fig. 5a,b describe interaction between subsequent peaks that occur for $`n>1`$ . These interaction modes are mainly active for small $`P`$ (typically $`P<20`$). They cause instability of periodic MAWs and lead to local increase of the peak to peak distance $`p`$; Fig. 5e shows the nonlinear evolution in such a case.
Conclusion - We have presented a systematic study of modulated amplitude waves (MAWs) in the complex Ginzburg-Landau equation (CGLE). These periodic coherent structures originate in supercritical bifurcations due to the BF instability of the CGLE. MAW existence is bounded by saddle-node bifurcations towards large $`c_1,c_3`$. Approaching the transition from phase to defect chaos, near-MAWs with large $`P`$ occur in phase chaos. Defects are generated if the period of these MAWs becomes larger than $`P_{SN}`$. This scenario is valid for both the $`L_1`$ and $`L_3`$ transition. Indications have been given in favor of the existence of the phase turbulent regime even in the thermodynamic limit. Altogether, our study leaves little space for doubt that the transition from phase chaos to defect chaos in the CGLE is governed by coherent structures and their bifurcations.
It is a pleasure to acknowledge discussions with H. Chatรฉ and L. Kramer. AT and MB are grateful to ISI Torino for providing a pleasant working environment during the Workshop on โComplexity and Chaosโ in October 1999. MGZ is supported from a post-doctoral grant of the MEC (Spain). MvH acknowledges financial support from the EU under contract ERBFMBICT 972554. |
no-problem/0001/hep-th0001105.html | ar5iv | text | MIT-CTP-2939, YITP-00-3, hep-th/0001105, January 2000. T-duality of non-commutative gauge theories
## 1 Introduction
Non-commutative gauge theories have attracted much attention since the existence of the limit realizing non-commutative gauge theories as theories on D-branes was found. It is known that the non-commutativity on D-branes is due to the background $`B`$-field. By quantizing open strings in the non-vanishing $`B`$-field background, we obtain non-vanishing commutators of coordinates of the end points of the strings on D-branes.
Let us restrict our attention to two-dimensional commutative and non-commutative $`U(1)`$ gauge theories on D2-branes wrapped around a rectangular torus for simplicity. The easiest way to see the non-commutativity is T-dualizing the configuration. If the background $`B`$-field is non-zero, the D2-brane is transformed into a D1-brane wrapped around one cycle of a slanted torus. In , it is noticed that two end points of an open string on the D1-brane split due to the slant of the torus, and interaction among such open strings is described by non-commutative geometry. In the context of open strings on a D2-brane, the splitting of the endpoints is a result of the interaction of the string worldsheet with the $`B`$-field.
Recently, Pioline and Schwarz and Seiberg and Witten suggested the existence of a set of equivalent theories parameterized by the non-commutativity parameter $`\theta `$. This set contains a commutative gauge theory with the background $`B`$ field, a non-commutative gauge theory with a vanishing background $`2`$-form field and infinitely many other theories with a non-zero background $`2`$-form field and $`\theta `$. In this paper, we will refer to the background $`2`$-form field in gauge theories as $`\varphi `$-field because when $`\theta \ne 0`$ the background field is not identified with the $`B`$-field in the background spacetime. The non-commutative Born-Infeld action depends only on the sum $`f+\varphi `$, where $`f`$ is a field strength of a gauge field on the brane.
Because it is known that in the two special cases $`\varphi =0`$ and $`\theta =0`$ the theories are regarded as T-duals of the same D1-brane configuration on a slanted torus, it is natural to ask whether such a T-dual picture exists in the case of general $`\varphi `$ and $`\theta `$. In this paper, we suggest the T-dual configuration of the D1-brane for non-commutative $`U(1)`$ gauge theories with general $`\varphi `$, $`f`$ and $`\theta `$, and show that the relation of parameters for equivalent gauge theories can be easily obtained by a simple rotation of the D1-brane configuration.
Furthermore, we show that Morita equivalence is generated by combining this T-duality and the modular transformation of the dual D1-brane configuration.
## 2 Ordinary T-duality
First, let us review the T-duality between ordinary (commutative) gauge theories on a D2-brane and D1-brane configurations. For simplicity, we will consider only rectangular D2-branes. Let $`x`$ and $`y`$ be the two coordinates on the D2-brane and $`L_x`$ and $`L_y`$ be the lengths of sides along $`x`$ and $`y`$ direction respectively. Because we will carry out T-duality along the $`y`$ direction, the two sides of the rectangle with constant $`y`$ should be identified. The other two sides do not have to be identified and we can put $`L_x`$ infinite. However, to see change of the metric along $`x`$ direction, it is convenient to pick out a finite part with finite length $`L_x`$. Thus the background manifold is not a torus but a cylinder.
We will use the following two metrics on the D2-brane.
$$0\le x\le L_x,0\le y\le L_y,G_{ij}=diag(1,1),$$
(1)
$$0\le x\le 1,0\le y\le 1,G_{ij}=diag(L_x^2,L_y^2).$$
(2)
Although the metric (2) is used in much of the literature, we will mainly use the metric (1) because in this metric it is easy to intuitively imagine the meanings of physical values. Let $`\varphi `$ and $`f`$ denote the $`\varphi `$-field and the field strength of the gauge field in the metric (1). The total flux $`\mathrm{\Phi }`$ and $`F`$ of these fields in the whole rectangle are
$$\mathrm{\Phi }=L_xL_y\varphi ,F=L_xL_yf,$$
(3)
and these are also the values of the fields in the metric (2).
Let us carry out the T-duality transformation along $`y`$ direction, and call the new compactification period $`\stackrel{~}{L}_y`$. The relation between $`L_y`$ and $`\stackrel{~}{L}_y`$ is given by
$$\stackrel{~}{L}_y=\frac{2\pi }{TL_y},$$
(4)
where $`T`$ is the string tension.
By this duality, the flux $`\mathrm{\Phi }`$ is transformed into the twist of the cylinder. Namely, if we go $`L_x`$ along $`x`$ direction, the $`S^1`$ along $`y`$ direction is shifted by the angle $`\mathrm{\Phi }`$. Therefore, the slope of the base of the parallelogram obtained by cutting the cylinder along $`y=0`$ line is given by (Fig.1)
$$\text{slope of the base}=\frac{\varphi }{T}.$$
(5)
The Wilson line along $`y`$ direction on the D2-brane is transformed into the $`y`$ coordinate of the D1-brane. In the metric (1), gauge field $`a_y`$ and the coordinate $`y`$ are related by
$$2\pi \frac{y}{\stackrel{~}{L}_y}=L_ya_y.$$
(6)
(Because we are considering the T-duality along $`y`$ direction, the background gauge field should be constant along $`y`$ direction.) Differentiating this relation with respect to $`x`$, we obtain
$$\frac{dy}{dx}=\frac{f}{T},$$
(7)
where $`f=\partial _xa_y`$. Taking account of the slope of the base of the parallelogram, the net slope of the D1-brane is represented as follows.
$$\text{slope of the D1-brane}=\frac{f+\varphi }{T}.$$
(8)
Although the relations (5) and (8) still hold if $`\varphi `$ and $`f`$ depend on the coordinate $`x`$, we assume they are constant in what follows.
## 3 T-duality of non-commutative gauge theory ($`\varphi =f=0`$ case)
There is another dual description of a theory of open strings on the D1-brane on the slanted cylinder. As shown in , it is equivalent to a non-commutative gauge theory.
Let us consider the case of vanishing gauge field strength. In this case, the D1-brane is parallel to the base of the parallelogram (Fig.2). Let $`L_x`$ and $`\stackrel{~}{L}_y`$ be the length of the base and the height of the parallelogram, respectively. (These definitions are different from those in the previous section.) By the duality, this is transformed into a non-commutative gauge theory on a rectangular cylinder.
The width of the rectangle is $`L_x`$, and the compactification period $`L_y`$ along $`y`$ direction is determined by equating the energy $`T\stackrel{~}{L}_y`$ of a wrapped open string and the Kaluza-Klein momentum $`2\pi /L_y`$ in the gauge theory. As a result, the relation of $`L_y`$ and $`\stackrel{~}{L}_y`$ is the same as that for the ordinary T-duality (4).
This duality is very similar to the ordinary T-duality, and so we will refer to it as T-duality, too. The difference between these two T-dualities is that we use closed strings in the ordinary T-duality, while we use open strings on the D1-brane in the new T-duality. Because open strings are always at right angles to the D1-brane, their direction and length are different from those of closed strings. This causes the difference of the size of the cylinder of the dual gauge theories.
We can also reproduce the ordinary T-duality by using open strings. In this case, in order to obtain the relation between $`L_y`$ and $`\stackrel{~}{L}_y`$, we should use open strings parallel to the left and right sides of the parallelogram. They are not perpendicular to the D1-brane and are solutions of the equation of motion only when we neglect the non-diagonal part of the metric of the slanted cylinder. This corresponds to neglecting the background $`B`$-field in the D2-brane picture. In fact, it is known that, when we carry out the path integral of open strings, if we do not include the $`B`$-field term in the kinetic term we obtain an ordinary gauge theory as a low energy effective theory, and if we regard it as a part of the kinetic term we obtain a non-commutative one . Therefore, we can say that a gauge theory which we obtain by the T-duality from the D1-brane configuration depends on the direction of the open strings which we use. We refer to this direction as the “direction of T-duality.”
The shift between the upper base and the lower base causes the splitting of the end points of a wrapped open string on the D1-brane. Let $`\mathrm{\Delta }`$ denote the amount of the shift. The length of an open string with winding number $`n`$ is $`n\stackrel{~}{L}_y`$ and the splitting of the end points is $`n\mathrm{\Delta }`$. This string corresponds to an open string on the D2-brane moving along $`y`$ direction with momentum $`nT\stackrel{~}{L}_y`$ whose end points split along $`x`$ by the distance $`n\mathrm{\Delta }`$. This splitting of end points is the origin of the non-commutativity of the gauge theory, and the parameter of the non-commutativity $`\theta `$ is given as the ratio of the splitting to the momentum.
$$\theta =\frac{\mathrm{\Delta }}{T\stackrel{~}{L}_y}.$$
(9)
The parameter $`\theta `$ defined by (9) is a value in the metric (1). Since this has dimension of length<sup>2</sup>, the value $`\mathrm{\Theta }`$ in the metric (2) is given by
$$\mathrm{\Theta }=\frac{1}{L_xL_y}\theta .$$
(10)
We can show that interaction among open strings on the D1-brane is described with the $`\star `$-product defined with the parameter $`\theta `$ in the dual gauge theory. Let $`\varphi _n(x)`$ denote a field of open strings with winding number $`n`$ and endpoints at $`x\pm n\mathrm{\Delta }/2`$. Because open strings interact at their end points, the interaction is represented by using the following non-local product.
$$\varphi _n^1(x)=\sum _k\varphi _{n-k}^2\left(x-\mathrm{\Delta }\frac{k}{2}\right)\varphi _k^3\left(x+\mathrm{\Delta }\frac{n-k}{2}\right)$$
(11)
The dual field $`\varphi (x,y)`$ on the D2-brane is obtained from this field by the following Fourier transformation.
$$\varphi (x,y)=\sum _ne^{2\pi iny/L_y}\varphi _n(x).$$
(12)
Using this field, the product (11) is rewritten as the following $`\star `$-product.
$`\varphi ^1(x,y)`$ $`=`$ $`\varphi ^2(x,y)\star \varphi ^3(x,y)`$ (13)
$`\equiv `$ $`\mathrm{exp}\left[{\displaystyle \frac{i\theta }{2}}(\partial _{x_2}\partial _{y_3}-\partial _{y_2}\partial _{x_3})\right]\varphi ^2(x_2,y_2)\varphi ^3(x_3,y_3)|_{x_2=x_3=x,y_2=y_3=y}`$
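A quick numerical cross-check of the equivalence of (11) and (13) on single Fourier modes, with arbitrary test inputs; for quadratic stand-in functions the derivative series of the bi-differential operator terminates, so it can be summed exactly.

```python
import numpy as np
from math import factorial

theta, L, m, k, x = 0.3, 2.0, 2, -1, 0.7
Delta = 2.0 * np.pi * theta / L                 # from (4) and (9)
f = np.polynomial.Polynomial([1.0, 0.5, -0.2])  # stand-in for phi^2_m
g = np.polynomial.Polynomial([0.3, 1.0, 0.1])   # stand-in for phi^3_k

def deriv(p, order):
    return p if order == 0 else p.deriv(order)

# rule (11): shifted ordinary product of the mode amplitudes
shifted = f(x - Delta * k / 2.0) * g(x + Delta * m / 2.0)

# rule (13): the exponential bi-differential operator, with the y
# derivatives replaced by the mode factors 2*pi*i*m/L and 2*pi*i*k/L
moyal = sum((1j * theta / 2.0) ** (p + q) * (-1) ** q
            / (factorial(p) * factorial(q))
            * deriv(f, p)(x) * (2j * np.pi * k / L) ** p
            * deriv(g, q)(x) * (2j * np.pi * m / L) ** q
            for p in range(3) for q in range(3))

print(np.isclose(shifted, moyal))   # True: the two products agree
```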
## 4 T-duality with general parameters
In this section, we generalize the duality. Let us consider a non-commutative $`U(1)`$ gauge theory on a rectangular cylinder with size $`L_x\times L_y`$, and let $`\varphi `$, $`f`$ and $`\theta `$ denote the background $`2`$-form field, gauge field strength and non-commutativity parameter in the metric (1), respectively. What D1-brane configuration is dual to this theory? Now, we suggest that the relations (4), (5), (8) and (9) still hold in the general case if we define the parameters $`L_x`$, $`\stackrel{~}{L}_y`$ and $`\mathrm{\Delta }`$ as shown in Fig.3. (For $`\varphi =0`$ case, the relation (8) was already obtained in .)
We will not prove this statement. Instead, we would like to give two comments which make it seem reasonable.
The first one is about the compactification period $`L_y`$. The length of an open string along the direction of T-duality with winding number one (this string is a solution of the equations of motion if we take into account a particular portion of the non-diagonal element of the metric of the cylinder; in a low energy effective field theory obtained by quantizing the open strings, the rest would appear as a background $`\varphi `$-field in the gauge theory) is not $`\stackrel{~}{L}_y`$ but $`l=\stackrel{~}{L}_y-\mathrm{\Delta }f/T`$ in the present case, as shown in Fig. 3. Therefore, the Kaluza-Klein momentum in the dual gauge theory should be quantized in units of $`lT`$. This unit is different from the ordinary one, $`2\pi /L_y`$. This difference is interpreted as follows. In a non-commutative gauge theory, even if the gauge group is $`U(1)`$, the gauge field couples to an adjoint matter field $`\psi `$ via the non-commutativity. For example, the covariant derivative $`D_y\psi `$ is not just a partial derivative.
$$D_y\psi =\partial _y\psi +ia_y\star \psi -i\psi \star a_y.$$
(14)
If the field strength $`f=\partial _xa_y`$ is constant, we obtain the following relation.
$$D_y\psi =(1-f\theta )\partial _y\psi .$$
(15)
Because the factor $`1-f\theta `$ on the right-hand side can be removed by a rescaling of the coordinate $`y`$, the interaction with the background gauge field effectively changes the metric, and the effective period $`L_y^{\mathrm{eff}}`$ along the $`y`$ direction is
$$L_y^{\mathrm{eff}}=\frac{L_y}{1-f\theta }.$$
(16)
The unit of the Kaluza-Klein momentum $`lT`$ is related to this effective period in the usual way.
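The consistency of this interpretation is easy to check numerically: assuming $`T\stackrel{~}{L}_y=2\pi /L_y`$ as above (an identification implied by (9)-(12), not stated explicitly here), the Kaluza-Klein unit $`lT`$ coincides with $`2\pi /L_y^{\mathrm{eff}}`$ for any constant $`f`$. A minimal sketch with arbitrary illustrative values:

```python
import numpy as np

T, L_y, theta, f = 1.7, 2.0, 0.37, 0.25
Ly_tilde = 2.0 * np.pi / (T * L_y)        # assumed: T * Ly_tilde * L_y = 2*pi
Delta = theta * T * Ly_tilde              # eq. (9) solved for Delta
l = Ly_tilde - Delta * f / T              # winding-one string length (Fig. 3)
print(l * T, 2.0 * np.pi * (1.0 - f * theta) / L_y)   # equal: KK unit = 2*pi/L_y_eff
```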
The other comment is on the quantization of the gauge flux. If we identify the left side and the right side of the parallelogram, the D1-brane has to go through the corresponding points on the left and right sides. As a result, the slope of the D1-brane is quantized. This corresponds to the flux quantization on the D2-brane. Due to the slant of the sides, the quantized slopes are not integer multiples of a single constant; they are given by
$$\frac{f}{T}=\frac{\stackrel{~}{L}_yn}{L_x+\mathrm{\Delta }n},n\in \mathbb{Z}.$$
(17)
This relation is reproduced in the framework of the non-commutative gauge theory as follows. For the purpose of T-duality, the gauge field should not depend on the coordinate $`y`$, and if the flux is constant the gauge field is given by
$$a_y(x)=fx.$$
(18)
The gauge field values at the boundaries, $`a_y(0)`$ and $`a_y(L_x)`$, should be equivalent up to a gauge transformation.
$$a_y(0)=-iU(y)\star \partial _yU^{-1}(y)+U(y)\star a_y(L_x)\star U^{-1}(y).$$
(19)
To keep the gauge field independent of the coordinate $`y`$, $`U(y)`$ should be the following function.
$$U(y)=e^{2\pi iny/L_y},n\in \mathbb{Z}.$$
(20)
The quantization of $`n`$ is due to the periodicity along the $`y`$ direction. Using the definition of the $`\star `$-product (13), we can show the identity
$$e^{iky}\star u(x)\star e^{-iky}=u(x+k\theta ),$$
(21)
for an arbitrary function $`u(x)`$, and (19) is rewritten as
$$a_y(0)=-\frac{2\pi n}{L_y}+a_y\left(L_x+2\pi n\frac{\theta }{L_y}\right).$$
(22)
This equation restricts $`f`$ to the values
$$f=\frac{2\pi n/L_y}{L_x+2\pi n\theta /L_y},$$
(23)
and this is equivalent to the relation (17).
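The equivalence of (23) and (17) can also be confirmed numerically. The following sketch (ours; illustrative values, using only eq. (9) and the assumed relation $`T\stackrel{~}{L}_y=2\pi /L_y`$) checks that the two quantization conditions give the same slopes for several $`n`$:

```python
import numpy as np

T, L_x, L_y, theta = 1.7, 1.3, 2.0, 0.37
Ly_tilde = 2.0 * np.pi / (T * L_y)      # assumed: T * Ly_tilde * L_y = 2*pi
Delta = theta * T * Ly_tilde            # eq. (9) solved for Delta

for n in range(-3, 4):
    f_gauge = (2*np.pi*n/L_y) / (L_x + 2*np.pi*n*theta/L_y)   # eq. (23)
    f_brane = T * Ly_tilde * n / (L_x + Delta * n)            # from eq. (17)
    assert np.isclose(f_gauge, f_brane)
print("eq. (23) reproduces the quantized slopes of eq. (17)")
```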
Recently, it was suggested that two non-commutative gauge theories with different parameters are equivalent if the matrices $`M^{ij}`$ defined by the following equation (24) are the same. The matrix $`M^{ij}`$ is defined by
$$M^{ij}=\frac{1}{G_{ij}-\mathrm{\Phi }_{ij}/T}+T\mathrm{\Theta }^{ij},$$
(24)
where $`G_{ij}`$ is the metric in (2) and $`\mathrm{\Phi }_{ij}`$ and $`\mathrm{\Theta }^{ij}`$ are the following antisymmetric matrices representing the background $`\varphi `$-field and the non-commutativity. (Our definition of $`\mathrm{\Phi }`$ differs from that in by the sign.)
$$\mathrm{\Phi }_{ij}=\left(\begin{array}{cc}& -\mathrm{\Phi }\\ \mathrm{\Phi }& \end{array}\right),\mathrm{\Theta }^{ij}=\left(\begin{array}{cc}& -\mathrm{\Theta }\\ \mathrm{\Theta }& \end{array}\right).$$
(25)
$`\mathrm{\Phi }`$ and $`\mathrm{\Theta }`$ are defined with respect to the metric (2) and are related to $`\varphi `$ and $`\theta `$ in the metric (1) by (3) and (10). Substituting these into (24), we obtain the following elements of the matrix $`M^{ij}`$.
$$M^{ij}=\frac{1}{L_xL_y(1+\varphi ^2/T^2)}\left(\begin{array}{cc}L_y/L_x& -\varphi /T-(1+\varphi ^2/T^2)T\theta \\ \varphi /T+(1+\varphi ^2/T^2)T\theta & L_x/L_y\end{array}\right).$$
(26)
These elements can be rewritten as
$$M^{ij}=\left(\begin{array}{cc}1/b^2& -(Ta/2\pi b)\mathrm{cos}\gamma \\ (Ta/2\pi b)\mathrm{cos}\gamma & (Ta/2\pi )^2\mathrm{sin}^2\gamma \end{array}\right),$$
(27)
where $`a`$, $`b`$ and $`\gamma `$ are the lengths of two sides and the angle of a corner (Fig. 4).
These parameters are independent of the placement of the parallelogram and invariant under its rotation. Therefore, in the dual picture, all the parallelograms corresponding to equivalent non-commutative gauge theories are congruent with each other, and the change of parameters among the equivalent theories is regarded as a simple rotation of the parallelogram.
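The algebra leading from (24) to (26) can be verified numerically once the conventions not reproduced in this excerpt are fixed. In the sketch below (ours), we assume that the metric (2) is $`G_{ij}=\mathrm{diag}(L_x^2,L_y^2)`$ on the unit torus and that (3) reads $`\mathrm{\Phi }=L_xL_y\varphi `$; both are assumptions of this sketch, chosen for consistency with (10) and (26).

```python
import numpy as np

T, L_x, L_y, phi, theta = 1.7, 1.3, 2.0, 0.53, 0.37
Theta = theta / (L_x * L_y)                 # eq. (10)
Phi = L_x * L_y * phi                       # assumed form of eq. (3)
G = np.diag([L_x**2, L_y**2])               # assumed metric (2)
eps = np.array([[0.0, -1.0], [1.0, 0.0]])   # sign convention of eq. (25)

M = np.linalg.inv(G - (Phi / T) * eps) + T * Theta * eps   # eq. (24)

pref = 1.0 / (L_x * L_y * (1.0 + phi**2 / T**2))
off = phi / T + (1.0 + phi**2 / T**2) * T * theta
M_closed = pref * np.array([[L_y / L_x, -off], [off, L_x / L_y]])  # eq. (26)
assert np.allclose(M, M_closed)
```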
Next, let us consider the gauge field. In , the relation among the gauge fields of equivalent non-commutative gauge theories with different $`\theta `$ is given as the solution of a certain differential equation. The equation can be solved in the case of constant gauge field strength, and the solution is
$$\frac{1}{F}-\mathrm{\Theta }=\frac{1}{F_0},$$
(28)
where $`F`$ is the gauge field strength for a theory with non-commutativity $`\mathrm{\Theta }`$ and $`F_0`$ is that for $`\mathrm{\Theta }=0`$. This equation implies that all equivalent gauge theories have a common value of $`1/F-\mathrm{\Theta }`$. In the dual D1-brane configuration, this value is represented by the placement-independent variables as follows.
$$\frac{1}{F}-\mathrm{\Theta }=\frac{a}{2\pi c},$$
(29)
where $`c`$ is defined in Fig. 4. Therefore, the equivalence relation of the gauge fields is also reproduced by rotation of the parallelogram.
## 5 Morita equivalence
There is another kind of equivalence among non-commutative gauge theories, called Morita equivalence. Morita equivalence is a "T-duality" among non-commutative gauge theories. We stress that the duality which we discussed in the previous section is not included in Morita equivalence: in the two-dimensional case, Morita equivalence is regarded as a duality among D2-brane configurations, while we have discussed a duality between D2-branes and D1-branes. These two dualities, however, are intimately related.
In the two-dimensional case, the Morita equivalence group is $`SO(2,2;\mathbb{Z})\simeq SL(2,\mathbb{Z})\times SL(2,\mathbb{Z})`$. One $`SL(2,\mathbb{Z})`$ factor is nothing but the modular group of the torus. If we go to the dual D1-brane picture by the T-duality we have discussed, the other $`SL(2,\mathbb{Z})`$ is also simply the modular group of the dual torus, as we will show below.
Let us introduce orthogonal coordinates $`(X,Y)`$ in the D1-brane configuration such that $`Y`$ represents the direction of the T-duality. The two vectors generating the parallelogram are
$$\stackrel{}{v}=L_x(1,\varphi ),\stackrel{}{w}=\stackrel{~}{L}_y(\theta ,1+\theta \varphi ).$$
(30)
By the modular transformation, these vectors are transformed as
$$\stackrel{}{v}^{}=A\stackrel{}{v}+B\stackrel{}{w},\stackrel{}{w}^{}=C\stackrel{}{v}+D\stackrel{}{w},$$
(31)
where $`A`$, $`B`$, $`C`$ and $`D`$ are integers satisfying $`AD-BC=1`$. Under this transformation, $`L_x`$, $`L_y`$, $`\mathrm{\Theta }`$ and $`\mathrm{\Phi }`$ transform as follows.
$$L_x^{}=(A+B2\pi \mathrm{\Theta })L_x,L_y^{}=(A+B2\pi \mathrm{\Theta })L_y,$$
$$2\pi \mathrm{\Theta }^{}=\frac{C+D2\pi \mathrm{\Theta }}{A+B2\pi \mathrm{\Theta }},\frac{\mathrm{\Phi }^{}}{2\pi }=(A+B2\pi \mathrm{\Theta })^2\frac{\mathrm{\Phi }}{2\pi }+B(A+B2\pi \mathrm{\Theta }).$$
(32)
These are the same as those obtained in .
## 6 Conclusions
In this paper, we suggested a T-duality between a non-commutative gauge theory on a D2-brane with general background $`2`$-form field $`\varphi `$, gauge field strength $`f`$ and non-commutativity parameter $`\theta `$, and a theory of open strings on a D1-brane. By using this duality, we can represent the equivalence which connects theories with arbitrary $`\theta `$'s and Morita equivalence as rotations and modular transformations of the dual configuration, respectively.
Furthermore, it may clarify how the ambiguity in the choice of the parameters $`\varphi `$ and $`\theta `$ comes from the freedom to choose the background $`B`$-field when we quantize open strings. Namely, when we obtain a low energy effective field theory by quantizing open strings, we have to fix the background field $`B`$. In the dual D1-brane configuration, the direction of the T-duality is determined by solving the equations of motion of open strings on the corresponding background metric. Roughly speaking, the parameter $`\theta `$ is determined at this point. After the relation between the string theory and the low energy field theory is established, a change of the $`B`$-field causes the emergence of the background field $`\varphi `$ in the field theory.
## Acknowledgements
The author would like to thank W. Taylor for helpful comments. This work was supported in part by funds provided by the U.S. Department of Energy (D.O.E.) under cooperative research agreement #DE-FC02-94ER40818 and by a Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture (#9110). |
## 1 Introduction
Luminous supersoft X-ray sources (SSS) have been established as a new and distinct class of objects which are observationally distinguished by their very soft X-ray spectra with temperatures on the order of 30 eV and luminosities of $`10^{36}-10^{38}`$ erg s<sup>-1</sup> (for recent reviews see Kahabka & van den Heuvel 1997; van Teeseling 1998). Several SSS have been identified as accreting close binaries with orbital periods of $`\sim `$ 1 day or less. The most popular interpretation of these systems involves a white dwarf which accretes matter via Roche-lobe overflow and an accretion disk at a rate of $`(1-4)\times 10^{-7}M_{\mathrm{\odot }}`$/yr, sufficient to permit stable quasi-steady nuclear shell-burning in the surface layers of the white dwarf, either because of thermal timescale mass transfer from a more massive (slightly evolved) main sequence companion (van den Heuvel et al. 1992) or because of wind-driven mass transfer from a low-mass irradiated companion (van Teeseling & King 1998).
## 2 The transient binary SSS RX J0513.9-6951
The luminous transient soft X-ray source RX J0513.9-6951 (henceforth RX J0513) discovered in the ROSAT all-sky survey (Schaeidt et al. 1993) has been optically identified as a high mass-transfer accreting binary system in the LMC (Cowley et al. 1993; Pakull et al. 1993) with an orbital period of 0.76 days (Crampton et al. 1996). Optical monitoring has revealed that RX J0513 undergoes recurrent low states at quasi-regular intervals, in which the optical brightness drops by $`\sim `$ 1 magnitude (Reinsch et al. 1996; Southwell et al. 1996). The optical low-states are accompanied by a turn-on of the system in the soft X-ray range (Reinsch et al. 1996; Schaeidt 1996).
The optical low states last for $`\sim `$ 40 days and repeat about every 140-180 days. Such short time scales cannot be explained by the limit-cycle behaviour sketched by van den Heuvel et al. (1992) or by recurrent burning models (Fujimoto 1982). Within the framework of a shell-burning white dwarf an alternative explanation has been suggested by Pakull et al. (1993): the rather sudden changes in the soft X-ray flux are the direct response of the white dwarf to slight changes in the mass transfer rate. On the horizontal shell-burning branch, a small increase of the accretion rate may significantly affect the effective radius of the white dwarf envelope (Kato 1985). An increase of the photospheric radius by, e.g., a factor of 4 implies that the effective temperature drops by a factor of 2. Given the extreme sensitivity of the ROSAT PSPC and HRI count rates to temperature, this in turn implies that the source may become undetectable although the bolometric luminosity remains roughly the same (e.g. Heise et al. 1994). In this model of an expanding and contracting envelope the sudden drop of the optical flux, the colour variation, and the temporarily increased soft X-ray flux can be quantitatively described by variations in the effective temperature of the hot central star and variations in the irradiation of the accretion disk (Reinsch et al. 1996).
This model is supported by an independent estimate of the neutral hydrogen column density obtained with recent HST UV spectroscopy, which constrains the X-ray luminosity during the on-state of RX J0513 to $`(2.5-9)\times 10^{37}`$ erg s<sup>-1</sup>, i.e. somewhat below the Eddington limit, and confirms that the radius of the soft X-ray source is consistent with the radius of a non-expanded white dwarf (Gänsicke et al. 1998).
## 3 X-ray and optical monitoring
In order to obtain tight limits for the temporal development of the radius and the effective temperature of the photosphere, we have monitored RX J0513 with the ROSAT HRI detector at intervals of about two days covering one complete X-ray outburst cycle (Fig. 1, upper part). The source remained undetectable during the X-ray off state and showed a sudden increase of the soft X-ray flux by a factor of $`>100`$ at the end of August 1997, reaching maximum flux after $`\sim `$ 5 days. The X-ray outburst lasted $`\sim `$ 40 days and ended with a steep flux decline, again by a factor of $`>100`$, to non-detectability within 2 days. The slow flux variation during the X-ray outburst can be approximated by an exponential decline with a time constant of $`(34\pm 6)`$ days.
Detailed light curves of several optical low states are available from regular monitoring of RX J0513 as a serendipitous source on CCD images taken for the MACHO project (Southwell et al. 1996). In Fig. 1 (lower part) the light curves of the different observed optical minima are shifted in time such that the onset of the optical decline occurs at $`\mathrm{\Delta }t=0`$. MACHO data obtained quasi-simultaneously with our HRI monitoring indicate that the steep onset of the X-ray outburst occurred within 1-2 days after the beginning of the optical decline (W. Sutherland, private communication).
The shape of the optical low states repeats fairly well, with an uncertainty of some 5 days in the duration of the low state. Within this uncertainty, its length coincides with the duration of the X-ray outburst. Both the X-ray and the optical light curves show a similar small flux gradient before the final fast transition to the X-ray off/optical high state occurs.
## 4 Light curve modelling
To test whether the model of a contracting and expanding white dwarf can quantitatively explain both the X-ray light curve and the dips in the optical light curve we have calculated a predicted optical light curve from our X-ray data. First we used LTE white dwarf model atmosphere spectra (van Teeseling et al. 1994) to determine the photospheric radius as a function of the HRI count rate, where we assumed a distance of 50 kpc, a bolometric luminosity of $`10^{38}`$ erg s<sup>-1</sup>, and an absorption column of $`n_\mathrm{H}=6\times 10^{20}`$ cm<sup>-2</sup> (Gänsicke et al. 1998). Then we used the binary light curve code binary++ (van Teeseling et al. 1998) to calculate the orbital average optical magnitude as a function of the photospheric radius $`R_1`$ of the white dwarf. Since this code self-consistently calculates the amount of irradiation from an extended white dwarf on the accretion disk and companion, including all possible shielding effects, this calculation is more accurate than the semi-analytic approach we used in Reinsch et al. (1996) and allows us to investigate how the results depend on the various parameters. We assume a mass ratio of $`M_2/M_1=2`$, an orbital separation $`a=3.8\times 10^{11}`$ cm as appropriate for a quasi-main-sequence donor star and an orbital period $`P=0.76`$ days, an orbital inclination of 10°, a disk filling 80% of the average Roche-lobe radius, a uniform irradiation reprocessing efficiency of $`\eta =0.5`$, a secondary temperature of $`9000`$ K, and an accretion rate of $`3.4\times 10^{-7}M_{\mathrm{\odot }}`$/yr.
Figure 2 shows the resulting total absolute $`V`$ magnitude as a function of $`R_1`$, and the individual magnitudes of the disk, the companion star and the white dwarf. For $`R_1/a>0.13`$, or $`R_1\gtrsim 5\times 10^{10}`$ cm, the expanded white dwarf is the dominant optical light source. With increasing $`R_1`$ the disk first becomes brighter because of more effective irradiation, but becomes fainter again for $`R_1\gtrsim 2\times 10^{10}`$ cm because an increasing part of the inner disk disappears inside the white dwarf envelope.
In Figure 1, we have plotted the predicted optical light curve over the combined MACHO light curve. For data points with only an upper limit for the X-ray count rate, we assume a radius $`R_1\approx 4.5\times 10^{10}`$ cm, which correctly reproduces the amplitude of the dip in the MACHO light curve. The $`3\sigma `$ X-ray upper limit of 0.00014 cts/s for the X-ray off state requires a radius of $`R_1\gtrsim 1\times 10^{10}`$ cm (using $`L=10^{38}`$ erg s<sup>-1</sup>, $`T\lesssim 185000`$ K, $`n_\mathrm{H}=6\times 10^{20}`$ cm<sup>-2</sup>) during the optical bright state. Our calculations show that it is relatively easy to reproduce the observed optical dips from the X-ray data, with the correct amplitude and surprisingly accurate absolute magnitudes. It also illustrates that when the X-rays become detectable, the white dwarf photosphere has almost reached its minimum size and the optical light curve has almost reached the level of the faint phase plateau. The difference between the observed and predicted optical light curve immediately after the optical decline could be explained by the initial lack of an optically thick inner disk after the white dwarf envelope has contracted to its minimal proportions.
## 5 A limit-cycle model for RX J0513
The X-ray and optical light curves of RX J0513 suggest that the system follows a kind of limit-cycle behaviour with four typical time-scales: the $`\sim `$ 140 days of the optical high/X-ray off state, the rapid transition ($`\sim `$ 4 days) to the X-ray on/optical low state, the $`\sim `$ 30 days duration of this state, and, again, the rapid transition ($`\sim `$ 2 days) to the optical high/X-ray off state.
We propose here that this behaviour results from expansion of the white dwarf photosphere in response to enhanced accretion onto the white dwarf, together with the reaction of the disk to increased irradiation by this expanded photosphere, while the mass transfer from the companion star remains constant. At the (arbitrary) start of the cycle (Fig. 3(d)), let us assume we have an accretion disk supplying matter to a white dwarf with its non-expanded radius $`R_1\approx 10^9`$ cm. Because the mass supply rate is close to the Eddington critical accretion rate $`\dot{M}_{\mathrm{crit}}`$ (Fujimoto 1982; Kato 1985), the white dwarf radius begins to expand (Fig. 3 (d)-(e)), as explained in Sect. 2 above. This will, in turn, influence the disk temperature. An extended central source with radius $`R_1\gg H`$, where $`H`$ is the scale height of the disk, produces a surface temperature $`T_{\mathrm{irr}}`$ at disk radius $`R`$ in an optically thick disk (e.g. Adams et al. 1988) given by
$$\left(\frac{T_{\mathrm{irr}}}{T_1}\right)^4=\frac{\eta }{\pi }\left[\mathrm{arcsin}\rho -\rho (1-\rho ^2)^{1/2}\right].$$
(1)
Here $`\rho =R_1/R`$, $`T_1`$ is the temperature of the white dwarf photosphere, limb-darkening has been neglected, and $`\eta `$ is the reprocessing efficiency of the disk surface. Increasing $`R_1`$ has two effects: (i) at given radius $`R`$, the disk temperature rises approximately as $`T_{\mathrm{irr}}^4\propto R_1R^{-3}\propto R_1`$, and (ii) the inner disk disappears in the hot envelope of the star (see also Sect. 4 above).
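As a rough numerical illustration of eq. (1) (our own sketch, adopting $`\eta =0.5`$ from Sect. 4 and a constant bolometric luminosity $`L=10^{38}`$ erg s<sup>-1</sup>): at fixed $`L`$ one has $`T_1^4\propto R_1^{-2}`$, and for $`\rho \ll 1`$ the bracket in eq. (1) behaves as $`\frac{2}{3}\rho ^3`$, so that $`T_{\mathrm{irr}}^4\propto R_1`$ as quoted above.

```python
import numpy as np

def T_irr(R, R1, T1, eta=0.5):
    """Surface temperature of an irradiated flat disk, eq. (1)."""
    rho = R1 / R
    return T1 * (eta / np.pi * (np.arcsin(rho) - rho * np.sqrt(1 - rho**2)))**0.25

# Constant bolometric L fixes T1 ~ R1^{-1/2}, so for rho << 1
# T_irr^4 ~ T1^4 * rho^3 ~ R1 * R^{-3}: the ratio below is ~constant.
sigma_SB, L = 5.67e-5, 1e38                        # cgs units
T1 = lambda R1: (L / (4*np.pi*R1**2*sigma_SB))**0.25
for R1 in (1e9, 2e9, 4e9):
    print(R1, T_irr(3e10, R1, T1(R1))**4 / R1)
# For R1 = 1e9 cm and R = 3e10 cm this gives T_irr ~ 2.7e4 K,
# of the order of the ~30 000 K quoted below.
```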
The increase in disk temperature raises the mass-flow rate in the disk, since the disk viscosity coefficient $`\nu =\alpha c_\mathrm{S}H`$ is increased. Here $`c_\mathrm{S}`$ is the sound speed, and $`H=c_\mathrm{S}R^{3/2}/(2GM)^{1/2}`$ is the scale height of the disk. The disk is now no longer in a steady state, since the mass-flow rate within it exceeds the mass supply rate from the companion star at its outer edge. Its mass is therefore gradually drained onto the white dwarf on a viscous timescale
$$t_{\mathrm{visc},\mathrm{d}}=\frac{R_d^2}{\nu }=\frac{R_d}{\alpha }\frac{R_d}{c_\mathrm{S}H}\approx 130\left(\frac{\alpha }{0.1}\right)^{-1}\left(\frac{R_d}{10^{11}\mathrm{cm}}\right)\mathrm{days},$$
(2)
where $`10^{11}`$ cm is a characteristic disk radius (the radius of the Roche lobe is about $`1.5\times 10^{11}`$ cm), $`H/R_\mathrm{d}\approx 0.03`$, $`c_\mathrm{S}\approx 3\times 10^6`$ cm s<sup>-1</sup> for $`T_{\mathrm{irr}}\approx \mathrm{30\hspace{0.17em}000}`$ K, and $`\alpha \approx 0.1`$ is a typical value of the viscosity parameter. With the disk being drained, the accretion rate onto the white dwarf eventually drops below $`\dot{M}_{\mathrm{crit}}`$, the white dwarf reverts to its unexpanded state, the disk becomes cooler, and the system enters an optical low and X-ray on state (Fig. 3 (a)-(b)). The collapse of the expanded stellar envelope leaves the accretion disk with an inner hole of approximately the envelope radius, which is gradually refilled by accretion from the outer disk. The disk temperature at a given radius $`R`$ decreases as $`T_{\mathrm{irr}}^4\propto R_1R^{-3}`$, but with $`R_1\approx 10^9`$ cm in the optical low state the temperature at the edge of the hole ($`R_\mathrm{h}\approx 3\times 10^{10}`$ cm) becomes $`\approx \mathrm{32\hspace{0.17em}000}`$ K, similar to the outer disk temperature in the optical high state. The viscous time scale for refilling the hole, assuming again $`H/R_\mathrm{h}\approx 0.03`$ and $`c_\mathrm{S}\approx 3\times 10^6`$ cm s<sup>-1</sup>, is then
$$t_{\mathrm{visc},\mathrm{h}}=\frac{R_\mathrm{h}^2}{\nu }=\frac{R_\mathrm{h}}{\alpha }\frac{R_\mathrm{h}}{c_\mathrm{S}H}\approx 40\left(\frac{\alpha }{0.1}\right)^{-1}\left(\frac{R_\mathrm{h}}{3\times 10^{10}\mathrm{cm}}\right)\mathrm{days}.$$
(3)
This picture predicts a long X-ray off state and a shorter X-ray on state, with rapid (thermal-timescale) transitions between them, in quantitative agreement with what is observed.
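The numerical factors in eqs. (2) and (3) follow directly from $`\nu =\alpha c_\mathrm{S}H`$; a two-line check (our sketch, with the parameter values quoted above):

```python
import numpy as np

day = 86400.0
def t_visc(R, alpha=0.1, cs=3e6, H_over_R=0.03):
    """Viscous timescale R^2/nu with nu = alpha * cs * H, eqs. (2)-(3)."""
    return R**2 / (alpha * cs * (H_over_R * R)) / day

print(t_visc(1e11))    # ~130 d: draining of the disk, eq. (2)
print(t_visc(3e10))    # ~ 40 d: refilling of the inner hole, eq. (3)
```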
## 6 Short-term variability
Besides the exponential decline, the soft X-ray flux of RX J0513 shows significant variations of $`\pm `$ 0.1 HRI counts/s on timescales of hours to days (Fig. 4). A time-series analysis of the detrended HRI count rates, however, reveals no clear periodicity in the range 0.1-10 days. The strongest signal is found at $`P=0.4075`$ days but corresponds only to a $`2\sigma `$ detection. We have arbitrarily phase-folded the residual fluxes on the suggested orbital period of $`\sim `$ 0.76 days but find no obvious modulation of the light curve, either using the spectroscopic ephemeris (Crampton et al. 1996) or using the better defined photometric ephemeris (Alcock et al. 1996).
Although our analysis shows that variability on the orbital and possibly shorter timescales may be present, our data coverage is not sufficient to decide whether the flux variations are truly periodic or not.
## 7 Conclusions
The ROSAT HRI monitoring of a complete X-ray outburst of the transient SSS RX J0513 has shown that the transition between the X-ray on and off states occurs with a change of the soft X-ray flux by a factor of $`>100`$ within 2-4 days. In the model of an expanding and contracting white dwarf envelope this implies that the decrease of the effective radius by a factor of $`>7`$ and the increase of the effective temperature by a factor of $`\sim `$ 3 occur on the same time-scale during the X-ray turn-on, and vice versa during the X-ray turn-off.
The steepest intensity variations in the optical occur on a time-scale similar to that of the soft X-ray flux variations. This is consistent with our model, in which the optical variability is caused by the varying contribution of the accretion disk illumination by the expanding and contracting envelope of the white dwarf.
The existence of typical time-scales for the optical high/X-ray off state, the optical low/X-ray on state, and the transition phases, together with the fairly accurate repetition of the optical light curve, suggests that the observed variability is driven by a limit-cycle behaviour. A possible self-maintained mechanism is the periodic change of the accretion disk viscosity in response to changes of the irradiation by the hot central star. In this scenario, the mass-flow rate at the surface of the white dwarf varies while the mass-transfer rate from the companion star remains constant. Our model can qualitatively explain the observed time-scales and requires no external mechanism, such as the episodic occurrence of star spots near the $`L_1`$ point, to trigger the transition between the X-ray on and off states.
###### Acknowledgements.
We thank Will Sutherland (Oxford) for providing some information about the onset of the August 1997 optical low-state of RX J0513. We also thank the ROSAT team at the MPE (Garching) for their support with the time-critical scheduling of our HRI observations and for including additional target-of-opportunity pointings during the final phase of the X-ray outburst. This work was supported in part by the DLR under grant 50 OR 96 09 8. |
## I Introduction
For every given solid surface, breaking of periodicity in one dimension will result in a change in the electronic states near and at the surface, since the lack of nearest neighbours on one side of the surface atoms causes an appreciable local rearrangement of the surface structure and chemical bonds. We investigate here aluminum, which as a bulk material is a paradigmatic jelliumlike metal: cohesion and bonding are dominated by electron-gas features, where the ions can be considered, roughly speaking, as a perturbation. What happens at an Al surface, however, is much less intuitive: does the surface behave essentially like a jellium surface, or do surface atoms play a preeminent role? Here we investigate this issue and we show how a sharp answer is provided by a tool which is a very innovative one in condensed matter physics. So far, bonding features at crystal surfaces have been investigated by studying either the charge density or the local density of states. Instead, we are using here the so-called "electron localization function" (ELF), which has recently become very popular in the field of quantum chemistry.
We study here three basic choices for the orientation of the Al surface: (110), (100), and (111), schematically shown in Fig. 1. These high-symmetry surfaces have a rather different packing of the surface atoms: this is also visible in Fig. 1, where the same scale has been used for the three surfaces. Some correlation between packing and bonding properties is obviously expected, but what is surprising is that the three chosen surfaces span the whole range of possibilities, with (110) and (111) being at the two very extreme ends: while the Al(110) surface prominently displays well characterized Al atoms, Al(111) is essentially a weakly perturbed jellium surface. The atomiclike vs. jelliumlike character of the electron distribution is perspicuously shown by the immediate graphical language of ELF.
Several appealing features make ELF the tool of choice in the present study: ELF is a pure ground-state property, as the density is, but it "magnifies" by design the bonding features of a given electron distribution. Furthermore, ELF is dimensionless, and allows one to compare the nature of bonding on an absolute scale. ELF provides in a very simple way a quantitative estimate of the "metallicity" of a given bond, or more generally of a given valence region of the system. From our ELF calculations it is clear how in the bulk metal the valence electrons display a free-electron nature: actually, outside the core region, the charge density basically behaves as a free-electron gas. This jelliumlike behaviour of the crystalline system must be contrasted with the opposite extreme of the isolated atom, where the valence-shell region is characterized by very different ELF values. Through a series of ELF contour plots, we will show in the following how this free-atom behaviour is almost entirely reproduced by the topmost atoms of the least packed metal surface studied, namely Al(110). We stress that ELF discriminates an atomiclike valence-electron distribution from a jelliumlike distribution in a sharp quantitative way.
We are adopting here a fully ab-initio method, which has become the "standard model" in modern first-principles studies of simple metals, covalent semiconductors, simple ionic solids, and many other disparate materials: namely, density functional theory (DFT) with norm-conserving pseudopotentials. This framework is the natural choice for a real solid system such as the one chosen for this work, although the ELF has been originally introduced, and mostly applied in the quantum-chemistry literature, as an all-electron tool. Anyhow, we will demonstrate that the use of the pseudopotential approximation makes the ELF particularly meaningful, since getting rid of the core electrons and focussing solely on the bonding electrons notably simplifies the final outcome.
In a recent very interesting paper, Fall et al. have investigated the trend in the Al work function for the same three orientations as considered here. They provide an explanation of the trend in terms of "face-dependent filling of the atomiclike $`p`$ states at the surface". The ELF is an orbital-independent tool, yet it provides a concomitant message: we show that there is indeed a face-dependent filling of the electronic states localized in the surface region.
The present paper is organized as follows. In Sec. II we introduce the ELF definition, reviewing its fundamental properties and features. In Sec. III we discuss the use of the pseudopotential approximation for our ELF analysis. In Sec. IV we present our results for the three aluminum surfaces. Finally, in Sec. V we draw a few conclusions.
## II Definitions
The ELF has been originally proposed by Becke and Edgecombe as a convenient measure of the parallel-spin electron correlation. Starting from the short-range behavior of the parallel-spin pair probability, they defined a new scalar function, conveniently ranging from zero to one, that uniquely identifies regions of space where the electrons are well localized, as occurs in bonding pairs or lone-electron pairs. This tool has immediately shown its power to visualize the chemical bonding and the electron localization. The ELF is defined as
$$\text{ELF}=\frac{1}{1+[D(\mathbf{r})/D_h(\mathbf{r})]^2},$$
(1)
in which $`D(\mathbf{r})`$ and $`D_h(\mathbf{r})`$ represent the curvature of the parallel-spin electron pair density for, respectively, the actual system and a homogeneous electron gas with the same density as the actual system at point $`\mathbf{r}`$. By definition, ELF is identically one either in any single-electron wavefunction or in any two-electron singlet wavefunction: in both cases the Pauli principle is ineffective, and the ground wavefunction is nodeless or, loosely speaking, "bosonic". In a many-electron system ELF is close to one in the regions where electrons are paired to form a covalent bond, while it is small in low-density regions; ELF is also close to one where the unpaired lone electron of a dangling bond is localized. Furthermore, since in the homogeneous electron gas ELF equals 0.5 at any density, values of this order in inhomogeneous systems indicate regions where the bonding has a metallic character.
Savin et al. have proposed another illuminating interpretation of the same quantity, showing how $`D`$ can be simply calculated in terms of the local behaviour of the kinetic energy density, thus making no explicit reference to the pair distribution function. This is particularly convenient in our case, since the pair density is outside the scope of DFT. Owing to the Pauli principle, the ground-state kinetic energy density of a system of fermions is no smaller than that of a system of bosons at the same density: ELF can be equivalently expressed in terms of the extra contribution to the kinetic energy density due to the Pauli principle. The Savin et al. reformulation also provides a meaningful physical interpretation: where ELF is close to its upper bound, electrons are strongly paired and the electron distribution has a local "bosonic" character.
Following Ref. , $`D(\mathbf{r})`$ is the Pauli excess kinetic energy density, defined as the difference between the kinetic energy density and the so-called von Weizsäcker kinetic energy functional:
$$D(\mathbf{r})=\frac{1}{2}\nabla _\mathbf{r}\nabla _{\mathbf{r}^{}}\rho (\mathbf{r},\mathbf{r}^{})|_{\mathbf{r}=\mathbf{r}^{}}-\frac{1}{8}\frac{|\nabla n(\mathbf{r})|^2}{n(\mathbf{r})}$$
(2)
where $`\rho `$ is the one-body reduced (spin-integrated) density matrix. The von Weizsäcker functional provides a rigorous lower bound for the exact kinetic energy density and is ordinarily indicated as the "bosonic" kinetic energy, since it coincides with the ground-state kinetic energy density of a non-interacting system of bosons at density $`n(\mathbf{r})`$. Therefore, $`D`$ is positive semidefinite and provides a direct measure of the local effect of the Pauli principle. The other ingredient of Eq. (1) is $`D_h(\mathbf{r})`$, defined as the kinetic energy density of the homogeneous electron gas at a density equal to the local density:
$$D_h(\mathbf{r})=\frac{3}{10}(3\pi ^2)^{\frac{2}{3}}n(\mathbf{r})^{\frac{5}{3}}.$$
(3)
It is straightforward to verify that the ELF is identically one in the ground state of any one- or two-electron system, while it is identically 0.5 in the homogeneous electron gas at any density.
As commonly done in many circumstances, including other ELF investigations, we approximate the kinetic energy of the interacting electron system with that of the noninteracting Kohn-Sham (KS) one. We therefore use the KS density matrix:
$$\rho (\mathbf{r},\mathbf{r}^{})=2\sum _i\varphi _i^{KS}(\mathbf{r})\varphi _i^{KS}(\mathbf{r}^{}),$$
(4)
where $`\varphi _i^{KS}(\mathbf{r})`$ are the occupied KS orbitals. Such an approximation is expected to become significantly inaccurate only in the case of highly correlated materials.
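Given a set of (real) occupied KS orbitals on a grid, Eqs. (1)-(4) are straightforward to evaluate. The following minimal Python sketch (ours, written in a single radial variable for brevity) verifies the two limiting statements made above: ELF = 1 for a single doubly occupied orbital, and ELF = 0.5 for the homogeneous gas.

```python
import numpy as np

def elf(n, grad_n2, tau):
    """ELF from density n, |grad n|^2 and the kinetic-energy density
    tau = (1/2) grad_r . grad_r' rho |_{r=r'}, following eqs. (1)-(3)."""
    D = tau - grad_n2 / (8.0 * n)                              # eq. (2)
    Dh = 0.3 * (3.0 * np.pi**2)**(2.0 / 3.0) * n**(5.0 / 3.0)  # eq. (3)
    return 1.0 / (1.0 + (D / Dh)**2)                           # eq. (1)

# One doubly occupied 1s orbital (eq. (4) with a single phi): the Pauli
# excess D vanishes identically, so ELF = 1 everywhere.
r = np.linspace(0.1, 5.0, 5)
phi = np.exp(-r) / np.sqrt(np.pi)
n = 2.0 * phi**2
tau = phi**2                       # |grad phi|^2, since d(phi)/dr = -phi
grad_n2 = (4.0 * phi**2)**2        # (dn/dr)^2
print(elf(n, grad_n2, tau))        # -> array of ones

# Homogeneous gas: tau equals the Thomas-Fermi value D_h and grad n = 0,
# so D/D_h = 1 and ELF = 0.5 at any density.
n0 = np.array([0.03])
print(elf(n0, 0.0 * n0, 0.3 * (3*np.pi**2)**(2/3) * n0**(5/3)))  # [0.5]
```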
## III Isolated Pseudo-Atom
The original ELF definition is an all-electron one, and has the remarkable feature of naturally revealing the entire shell structure of heavy atoms. Such a feature is of no interest here, since only one electronic shell (the $`sp`$ valence one) is involved in the bonding of Al atoms in any circumstances. The pseudopotential scheme simplifies the landscape, since only the electrons of the relevant valence shell are dealt with explicitly: the ELF message therefore comes out much clearer, with basically no loss of information. In the spatial regions occupied by core electrons, the pseudo-electronic distribution shows a depletion and ELF assumes very low values. Outside the ionic cores, in the regions relevant to chemical bonding, the norm conservation endows the pseudocharge density with physical meaning, as widely discussed in the modern pseudopotential literature. The pseudo ELF carries therefore, in the material of interest here, the same information as the all-electron ELF, while it removes irrelevant and confusing features due to the inner, chemically inert, shells.
For an isolated pseudo-atom we have by construction only a single valence shell, clearly displayed by a single and very prominent ELF maximum. In Fig. 2 we report our results for an isolated Al atom: in the picture one immediately notices the spherical region associated with the ELF maximum (0.91), where the charge density is almost bosonic. The black cloud in Fig. 2 indicates the strong localization of the electronic valence shell and perspicuously distinguishes the free aluminum pseudo-atom from the crystalline one.
Indeed, looking at the following Figs. 3-5, and inspecting the bulk regions in them, the ELF plots quantitatively demonstrate that the valence electrons have a predominantly free-electron nature, with a jelliumlike (or Thomas-Fermi) ELF value. The ion cores only provide an "exclusion region" for electrons (white circles in the plots), but basically do not alter the jelliumlike nature of the electron distribution outside the core radii: in this sense we may regard the ion cores as a "weak" perturbation. Actually, the maximum value attained by ELF between nearest-neighbor atoms in bulk Al is only 0.61 and, outside the core radii, there is a widespread grey region, where ELF is almost constant and close to 0.5, thus indicating a jelliumlike electronic system. This peculiar behaviour will help us in the following to understand the bonding pattern occurring at the different Al surfaces.
## IV Aluminum Surfaces
All the calculations in this work use a state-of-the-art set of ingredients: a plane-wave expansion of the KS orbitals with a 16 Ry kinetic-energy cut-off, a set of Monkhorst-Pack special points for the Brillouin zone integration, with a Gaussian broadening of 0.01 Ry, and a norm-conserving pseudopotential in fully non-local form. The calculations for the (111) surface were performed using a 9+6 supercell (9 planes of Al, 6 equivalent planes of vacuum) and 37 k-points in the irreducible Brillouin zone. The corresponding values used for the (100) surface are an 8+6 supercell and 46 k-points, and for the (110) surface an 8+8 supercell and 48 k-points. We have preliminarily investigated the effect of surface ionic relaxation on ELF, and found it negligible. A similar insensitivity of other surface electronic properties was found in Ref. : we therefore present results for the unrelaxed surfaces. As shown above in Fig. 1, the three surfaces have different packings: the packing increases from Al(110) to Al(100) to Al(111). A measure of this packing is the number of nearest neighbors in the surface plane, called coordination in the following: this number is 2, 4, and 6 for the three surfaces, respectively.
In Fig. 3 we report two ELF contour plots along two non-equivalent planes passing through the topmost (twofold coordinated) atom of the Al(110) surface. Along a direction orthogonal to the surface and passing through the surface atom, our calculated ELF attains the maximum value of 0.86, thus indicating that the Pauli principle has little effect. Comparing Fig. 3 to Fig. 2, where the ELF maximum has the close value of 0.91, we quantitatively see how much the surface atom actually behaves as a free atom in the outer direction. A slightly different electronic distribution is present at the four-fold coordinated Al(100) surface in Fig. 4. In this case the ELF maximum, along the same direction, attains the lower value of 0.80 and shows therefore an electron distribution with a less pronounced atomiclike character.
The almost black regions around the surface atoms in Figs. 3 and 4 are also clearly visible in the top views of Fig. 6, where the contour plots are drawn in the outermost atomic plane. In this plane the maxima occur midway between nearest-neighbor atoms and assume the values of 0.77, 0.74, and 0.67 in order of increasing packing. This decrease of electron pairing shows a trend from covalent-like to metallic-like bonding between nearest-neighboring surface atoms; bonding to the underlying bulk atoms is instead metallic in all cases.
Finally, in Fig. 5 we have the extreme case of the six-fold coordinated Al(111) surface. The absolute ELF maximum is only 0.73 and lies at a low-symmetry point, slightly off the outermost atomic plane. Along the direction orthogonal to the surface and passing through the surface atom, the ELF maximum is only 0.65: this behaviour clearly marks an almost smooth decay from the bulk, where the electrons exhibit a prevailing jelliumlike distribution, to the vacuum region. Also in the top view of Fig. 6 there is no evidence of electron pairing between surface atoms, with a widespread region of almost uniform ELF, close in value to 0.5. We can therefore summarize the result by saying that the Al(111) surface is essentially a jellium surface perturbed by the atomic cores, in a pseudopotential sense: the perturbation obviously gives rise to an "exclusion region" (white plots within the core radii), but otherwise has little effect on the electron distribution in the bulk and at the (111) surface.
The values of the maxima in the direction orthogonal to the surface and passing through the surface atom in the three cases (0.91, 0.80, and 0.65, as reported above) can be interpreted as a measure of the occupation of the atomiclike states protruding from the surface. We recall in fact that ELF is identically equal to 1 for any one- or two-electron system: high ELF values indicate a strong localization of the wavefunction. Our numbers show an analogous trend to the one pointed out in Ref. , where it is shown that the filling of atomic $`p_{\perp }`$ orbitals, protruding from the surface, indeed decreases with increasing packing, i.e. going from (110) to (100) to (111).
## V Conclusions
In conclusion, we have shown how the ELF at the surfaces of a paradigmatic $`sp`$-bonded metal distinctly reveals the rearrangements in the electron distribution due to the surface formation and to the changes in the local coordination. We have shown that ELF, in a modern DFT-pseudopotential framework, is a powerful tool to investigate, without using any spectral information, the chemical bonding pattern: this makes the method appealing for dealing with much more complicated systems.
Finally, to put this work in a wider perspective, we notice in the current literature a quest for other innovative tools which address localization and bonding, and whose relationships to ELF have not been investigated yet. |
## Abstract
We calculate exactly the density of magnon states of the regularly alternating spin-$`\frac{1}{2}`$ $`XX`$ chain with Dzyaloshinskii-Moriya interaction. The obtained result permit us to examine the stability of the chain with respect to spin-Peierls dimerization. We found that depending on the dependences of Dzyaloshinskii-Moriya interaction on distortion amplitude it may act either in favour of the dimerization or against the dimerization.
PACS numbers: 75.10.-b
Keywords: spin-$`\frac{1}{2}`$ $`XY`$ chain, Dzyaloshinskii-Moriya interaction, spin-Peierls dimerization
Postal addresses:
Dr. Oleg Derzhko (corresponding author)
Olesโ Zaburannyi
Institute for Condensed Matter Physics
1 Svientsitskii Street, Lโviv-11, 290011, Ukraine
tel/fax: (0322) 76 19 78
email: derzhko@icmp.lviv.ua
Prof. Johannes Richter
Institut für Theoretische Physik, Universität Magdeburg
P.O. Box 4120, D-39016 Magdeburg, Germany
tel: (0049) 391 671 8841
fax: (0049) 391 671 1217
email: Johannes.Richter@Physik.Uni-Magdeburg.DE
The discovery of the inorganic spin-Peierls compound CuGeO<sub>3</sub> renewed interest in the investigation of the spin-Peierls instability of quantum spin chains. To date, a large number of papers concerning quantum spin chains, which are believed to model appropriately the spin degrees of freedom of the spin-Peierls compounds, has appeared. As a rule, the considered models are rather complicated and have been examined by exploiting different approximations. On the other hand, some generic features of spin-Peierls systems can be illustrated with simplified but exactly solvable models. As an example of such a model one can refer to the spin-$`\frac{1}{2}`$ $`XX`$ chain, which was studied in several papers. The purpose of the present paper is to examine the influence of introducing Dzyaloshinskii-Moriya interspin coupling on the spin-Peierls dimerization within the framework of the one-dimensional spin-$`\frac{1}{2}`$ $`XX`$ model in a transverse field. The presence of Dzyaloshinskii-Moriya interaction in CuGeO<sub>3</sub> was proposed in Refs. in order to explain the EPR and ESR experimental data. On the other hand, the multisublattice spin-$`\frac{1}{2}`$ $`XX`$ chain with Dzyaloshinskii-Moriya interaction was introduced in ; however, that paper was not devoted to the study of the spin-Peierls instability. In our paper we closely follow the idea of Ref. and compare the total ground state energies of the dimerized and uniform chains. In contrast to previous works, we use the continued-fraction representation for the one-fermion Green functions, which allows one to treat in a similar way not only the dimerized lattice but also more complicated lattice distortions. Based on the performed calculations, we found that the Dzyaloshinskii-Moriya interaction may act both in favour of the dimerization and against it. The result of its influence depends on how the Dzyaloshinskii-Moriya interaction varies with the distortion amplitude in comparison with such a dependence of the isotropic exchange interaction.
We consider $`N\to \mathrm{\infty }`$ spins $`\frac{1}{2}`$ on a circle with the Hamiltonian
$`H=2\sum _{n=1}^NI_n\left(s_n^xs_{n+1}^x+s_n^ys_{n+1}^y\right)`$
$`+2\sum _{n=1}^ND_n\left(s_n^xs_{n+1}^y-s_n^ys_{n+1}^x\right)`$
$`+\sum _{n=1}^N\mathrm{\Omega }_ns_n^z.`$ (1)
Here $`I_n`$ and $`D_n`$ are the isotropic exchange coupling and the Dzyaloshinskii-Moriya coupling between the neighbouring sites $`n`$ and $`n+1`$, and $`\mathrm{\Omega }_n`$ is the transverse field at site $`n`$. We restricted ourselves to the Hamiltonian (1) since after the Jordan-Wigner transformation it reduces to tight-binding spinless fermions. We introduce the temperature double-time one-fermion Green functions that yield the density of magnon states by the relation $`\rho (E)=\mp \frac{1}{\pi N}\sum _{n=1}^N\text{Im}G_{nn}^{\mp }`$, $`G_{nm}^{\mp }\equiv G_{nm}(E\pm i\epsilon )`$. In the case of the tight-binding fermions the required diagonal Green functions can be expressed by means of continued fractions
$`G_{nn}^{\mp }=\frac{1}{E\pm i\epsilon -\mathrm{\Omega }_n-\mathrm{\Delta }_n^{-}-\mathrm{\Delta }_n^{+}},`$ (2)
$`\mathrm{\Delta }_n^{-}=\frac{I_{n-1}^2+D_{n-1}^2}{E\pm i\epsilon -\mathrm{\Omega }_{n-1}-\frac{I_{n-2}^2+D_{n-2}^2}{E\pm i\epsilon -\mathrm{\Omega }_{n-2}-\mathrm{\cdots }}},`$
$`\mathrm{\Delta }_n^{+}=\frac{I_n^2+D_n^2}{E\pm i\epsilon -\mathrm{\Omega }_{n+1}-\frac{I_{n+1}^2+D_{n+1}^2}{E\pm i\epsilon -\mathrm{\Omega }_{n+2}-\mathrm{\cdots }}}.`$
For any periodic configuration of the intersite couplings and the transverse field, the fractions $`\mathrm{\Delta }_n^{-}`$ and $`\mathrm{\Delta }_n^{+}`$ entering $`G_{nn}^{\mp }`$ (2) become finite and can be evaluated exactly, yielding obviously the exact result for the density of states $`\rho (E)`$ and hence for the thermodynamic quantities of the spin model (1).
In what follows we shall use the result for the periodic chain having period 2 that is characterized by a sequence $`I_1D_1\mathrm{\Omega }_1I_2D_2\mathrm{\Omega }_2I_1D_1\mathrm{\Omega }_1I_2D_2\mathrm{\Omega }_2\mathrm{}`$. For such a case we have
$`\rho (E)=\{\begin{array}{cc}0,\hfill & \text{if }E\le b_4,\text{ }b_3\le E\le b_2,\text{ }b_1\le E,\hfill \\ \frac{1}{2\pi }\frac{|2E-\mathrm{\Omega }_1-\mathrm{\Omega }_2|}{\sqrt{\mathcal{A}(E)}},\hfill & \text{if }b_4<E<b_3,\text{ }b_2<E<b_1,\hfill \end{array}`$ (5)
$`\mathcal{A}(E)=4\mathcal{I}_1^2\mathcal{I}_2^2-\left[(E-\mathrm{\Omega }_1)(E-\mathrm{\Omega }_2)-\mathcal{I}_1^2-\mathcal{I}_2^2\right]^2`$
$`=-(E-b_4)(E-b_3)(E-b_2)(E-b_1),`$
$`\left\{b_4\le b_3\le b_2\le b_1\right\}=\{\frac{1}{2}\left(\mathrm{\Omega }_1+\mathrm{\Omega }_2\right)\pm h_1,\frac{1}{2}\left(\mathrm{\Omega }_1+\mathrm{\Omega }_2\right)\pm h_2\},`$
$`h_1=\frac{1}{2}\sqrt{\left(\mathrm{\Omega }_1-\mathrm{\Omega }_2\right)^2+4\left(|\mathcal{I}_1|+|\mathcal{I}_2|\right)^2},`$
$`h_2=\frac{1}{2}\sqrt{\left(\mathrm{\Omega }_1-\mathrm{\Omega }_2\right)^2+4\left(|\mathcal{I}_1|-|\mathcal{I}_2|\right)^2},`$
$`\mathcal{I}_n^2=I_n^2+D_n^2.`$
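Eq. (5) can be checked against a direct diagonalization of the Jordan-Wigner hopping problem (hopping amplitudes $`I_n+iD_n`$, on-site energies $`\mathrm{\Omega }_n`$; boundary terms are negligible for the density of states at large $`N`$). A minimal Python sketch with arbitrary parameter values:

```python
import numpy as np

I1, D1, Om1 = 1.2, 0.5, 0.3
I2, D2, Om2 = 0.8, 0.4, -0.2
J1, J2 = np.hypot(I1, D1), np.hypot(I2, D2)   # script-I_n = sqrt(I_n^2 + D_n^2)

s = 0.5 * (Om1 + Om2)
h1 = 0.5 * np.sqrt((Om1 - Om2)**2 + 4.0 * (J1 + J2)**2)
h2 = 0.5 * np.sqrt((Om1 - Om2)**2 + 4.0 * (J1 - J2)**2)
b1, b2, b3, b4 = s + h1, s + h2, s - h2, s - h1   # band edges of eq. (5)

def rho(E):
    """Closed-form magnon density of states, eq. (5)."""
    inband = ((E > b4) & (E < b3)) | ((E > b2) & (E < b1))
    A = -(E - b4) * (E - b3) * (E - b2) * (E - b1)
    return np.where(inband, np.abs(2.0 * E - Om1 - Om2)
                    / (2.0 * np.pi * np.sqrt(np.where(inband, A, 1.0))), 0.0)

# Direct check: spectrum of the Jordan-Wigner hopping matrix on a ring.
N = 1000
i = np.arange(N)
t = np.where(i % 2 == 0, I1 + 1j * D1, I2 + 1j * D2)   # bond n -> n+1
H = np.diag(np.where(i % 2 == 0, Om1, Om2)).astype(complex)
H[i, (i + 1) % N] = t
H[(i + 1) % N, i] = t.conj()
E = np.linalg.eigvalsh(H)
hist, edges = np.histogram(E, bins=50, range=(b4, b1), density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
ok = rho(mid) > 0
err = np.abs(hist[ok] - rho(mid)[ok]) / rho(mid)[ok]
print(np.median(err))   # a few per cent, limited by binning/finite size
```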
Let us examine the instability of the considered spin chain with respect to dimerization. To do this we assume $`|I_1|=|I|(1+\delta )`$, $`|I_2|=|I|(1-\delta )`$, $`|D_1|=|D|(1+k\delta )`$, $`|D_2|=|D|(1-k\delta )`$, where $`0\le \delta \le 1`$ is the dimerization parameter. Putting $`k=0`$ one has a chain in which the Dzyaloshinskii-Moriya interaction does not depend on the lattice distortion, whereas for $`k=1`$ the dependence of the Dzyaloshinskii-Moriya interaction on the lattice distortion is the same as that of the isotropic exchange interaction. We consider the case of zero temperature and look for the total energy per site $`\mathcal{E}(\delta )`$, which consists of the magnetic part $`e_0(\delta )`$ and the elastic part $`\alpha \delta ^2`$. From (5) one finds that
$`e_0(\delta )=-\frac{1}{2}\int _{-\mathrm{\infty }}^{\mathrm{\infty }}dE\rho (E)|E|`$
$`=-\frac{1}{\pi }h_1\text{E}(\psi ,\frac{h_1^2-h_2^2}{h_1^2})-\frac{1}{2}\left|\mathrm{\Omega }_1+\mathrm{\Omega }_2\right|\left(\frac{1}{2}-\frac{\psi }{\pi }\right),`$ (6)
$`\psi =\{\begin{array}{cc}0\hfill & \text{if }h_1\le \frac{1}{2}|\mathrm{\Omega }_1+\mathrm{\Omega }_2|,\hfill \\ \text{arcsin}\sqrt{\frac{h_1^2-\frac{1}{4}\left(\mathrm{\Omega }_1+\mathrm{\Omega }_2\right)^2}{h_1^2-h_2^2}}\hfill & \text{if }h_2\le \frac{1}{2}|\mathrm{\Omega }_1+\mathrm{\Omega }_2|<h_1,\hfill \\ \frac{\pi }{2}\hfill & \text{if }\frac{1}{2}|\mathrm{\Omega }_1+\mathrm{\Omega }_2|<h_2,\hfill \end{array}`$ (10)
$`h_1=\frac{1}{2}\sqrt{\left(\mathrm{\Omega }_1-\mathrm{\Omega }_2\right)^2+4\left[\sqrt{I^2(1+\delta )^2+D^2(1+k\delta )^2}+\sqrt{I^2(1-\delta )^2+D^2(1-k\delta )^2}\right]^2},`$
$`h_2=\frac{1}{2}\sqrt{\left(\mathrm{\Omega }_1-\mathrm{\Omega }_2\right)^2+4\left[\sqrt{I^2(1+\delta )^2+D^2(1+k\delta )^2}-\sqrt{I^2(1-\delta )^2+D^2(1-k\delta )^2}\right]^2},`$
and $`\text{E}(\psi ,a^2)\equiv \int _0^\psi d\varphi \sqrt{1-a^2\mathrm{sin}^2\varphi }`$ is the elliptic integral of the second kind. We also seek a nonzero solution $`\delta ^{*}\ne 0`$ of the equation $`\partial \mathcal{E}(\delta )/\partial \delta =0`$. Using (6) one gets
$`-\frac{1}{\pi }\text{E}(\psi ,\frac{h_1^2-h_2^2}{h_1^2})\frac{\partial h_1}{\partial \delta }`$
$`-\frac{1}{\pi }\frac{h_2^2\frac{\partial h_1}{\partial \delta }-h_1h_2\frac{\partial h_2}{\partial \delta }}{h_1^2-h_2^2}\left[\text{E}(\psi ,\frac{h_1^2-h_2^2}{h_1^2})-\text{F}(\psi ,\frac{h_1^2-h_2^2}{h_1^2})\right]`$
$`+2\alpha \delta =0,`$ (11)
$`\frac{\partial h_{1,2}}{\partial \delta }=\frac{1}{h_{1,2}}\left[\sqrt{I^2(1+\delta )^2+D^2(1+k\delta )^2}\pm \sqrt{I^2(1-\delta )^2+D^2(1-k\delta )^2}\right]`$
$`\times \left[\frac{I^2(1+\delta )+kD^2(1+k\delta )}{\sqrt{I^2(1+\delta )^2+D^2(1+k\delta )^2}}\mp \frac{I^2(1-\delta )+kD^2(1-k\delta )}{\sqrt{I^2(1-\delta )^2+D^2(1-k\delta )^2}}\right],`$
and $`\text{F}(\psi ,a^2)\equiv \int _0^\psi d\varphi /\sqrt{1-a^2\mathrm{sin}^2\varphi }`$ is the elliptic integral of the first kind.
Until the end of the paper we shall consider the case of a uniform transverse field $`\mathrm{\Omega }_1=\mathrm{\Omega }_2=\mathrm{\Omega }_0`$. In the limit $`\delta \ll 1`$, which is the interesting one for applications and is valid for hard lattices having large values of $`\alpha `$, one finds $`h_1=2|\mathcal{I}|`$, $`h_2=2|\mathcal{I}|\ell \delta `$ with $`|\mathcal{I}|=\sqrt{I^2+D^2}`$ and $`\ell =\frac{I^2+kD^2}{I^2+D^2}`$, and instead of Eqs. (6), (11) one has
$$e_0(\delta )=-\frac{2|\mathcal{I}|}{\pi }\text{E}(\psi ,1-\ell ^2\delta ^2)-|\mathrm{\Omega }_0|\left(\frac{1}{2}-\frac{\psi }{\pi }\right),$$ (12)
$$\psi =\{\begin{array}{cc}0\hfill & \text{if }2|\mathcal{I}|<|\mathrm{\Omega }_0|,\hfill \\ \text{arcsin}\sqrt{\frac{4\mathcal{I}^2-\mathrm{\Omega }_0^2}{4\mathcal{I}^2(1-\ell ^2\delta ^2)}}\hfill & \text{if }2|\mathcal{I}|\ell \delta \le |\mathrm{\Omega }_0|<2|\mathcal{I}|,\hfill \\ \frac{\pi }{2}\hfill & \text{if }|\mathrm{\Omega }_0|<2|\mathcal{I}|\ell \delta ;\hfill \end{array}$$ (16)
$$\frac{\pi \alpha }{|\mathcal{I}|}=\frac{\ell ^2}{1-\ell ^2\delta ^2}\left(F(\psi ,1-\ell ^2\delta ^2)-E(\psi ,1-\ell ^2\delta ^2)\right)$$ (17)
Consider the case $`\mathrm{\Omega }_0=0`$. After the rescalings $`I\to \mathcal{I}`$, $`\frac{\alpha }{\ell ^2}\to \alpha `$, $`\ell \delta \to \delta `$ one finds that Eq. (17) becomes exactly the equation considered in Ref. , and thus $`\delta ^{*}\sim \frac{1}{\ell }\mathrm{exp}\left(-\frac{1}{\ell ^2}\frac{\pi \alpha }{|\mathcal{I}|}\right)`$. Thus for $`k=1`$, when $`\ell =1`$, the Dzyaloshinskii-Moriya interaction leads to an increase of the dimerization parameter $`\delta ^{*}`$, whereas for $`k=0`$, when $`\ell <1`$, the Dzyaloshinskii-Moriya interaction leads to a decrease of the dimerization parameter $`\delta ^{*}`$.
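At $`\mathrm{\Omega }_0=0`$ (so that $`\psi =\pi /2`$), Eq. (17) is easily solved numerically for $`\delta ^{*}`$. The following sketch (ours, using SciPy's complete elliptic integrals with parameter $`m=1-\ell ^2\delta ^2`$) confirms the exponential scaling, up to the order-unity prefactor omitted in the asymptotic formula:

```python
import numpy as np
from scipy.special import ellipk, ellipe
from scipy.optimize import brentq

def gap_equation(delta, alpha_over_J, ell):
    """pi*alpha/|I| minus the r.h.s. of eq. (17) at Omega_0 = 0 (psi = pi/2)."""
    m = 1.0 - (ell * delta)**2              # parameter of the elliptic integrals
    rhs = ell**2 / m * (ellipk(m) - ellipe(m))
    return np.pi * alpha_over_J - rhs

alpha_over_J, ell = 0.8, 0.9
d = brentq(gap_equation, 1e-6, 0.999, args=(alpha_over_J, ell))
d_asym = np.exp(-np.pi * alpha_over_J / ell**2) / ell
print(d, d_asym)   # ~0.074 vs ~0.050: same exponent, prefactor ~4/e
```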
Consider further the case $`0<|\mathrm{\Omega }_0|<2|\mathcal{I}|`$. Varying $`\delta `$ in the r.h.s. of Eq. (17) from 0 to 1, one calculates the lattice parameter $`\alpha `$ for which the taken value of $`\delta `$ realizes an extremum of $`\mathcal{E}(\delta )`$. One immediately observes that for $`\frac{|\mathrm{\Omega }_0|}{2|\mathcal{I}|}\le \ell \delta `$ the dependence of $`\alpha `$ on $`\delta `$ remains as that in the absence of the field, whereas for $`0\le \ell \delta <\frac{|\mathrm{\Omega }_0|}{2|\mathcal{I}|}`$ the calculated quantity $`\alpha `$ starts to decrease. From this one concludes that the field $`\frac{|\mathrm{\Omega }_0|}{2|\mathcal{I}|}=\mathrm{exp}\left(-\frac{1}{\ell ^2}\frac{\pi \alpha }{|\mathcal{I}|}\right)`$ makes the dimerization unstable against the uniform phase. The latter relation tells us that the Dzyaloshinskii-Moriya interaction increases the value of that field for $`k=1`$ and decreases it for $`k=0`$.
In Figs. 1 and 2 we present the change of the total energy $`\mathcal{E}(\delta )-\mathcal{E}(0)`$ (6) with the dimerization parameter $`\delta `$ as the Dzyaloshinskii-Moriya interaction is switched on, and the nonzero solution $`\delta ^{*}`$ of Eq. (11) versus $`\alpha `$ as the Dzyaloshinskii-Moriya interaction is switched on, respectively. These results agree with the above ones, valid in the limit $`\delta \ll 1`$.
Let us emphasize that one cannot treat rigorously within the Jordan-Wigner picture the complete Dzyaloshinskii-Moriya interaction, which for neighbouring sites $`n`$ and $`n+1`$ reads $`D_n^x(s_n^ys_{n+1}^z-s_n^zs_{n+1}^y)+D_n^y(s_n^zs_{n+1}^x-s_n^xs_{n+1}^z)+D_n^z(s_n^xs_{n+1}^y-s_n^ys_{n+1}^x)`$, except for the last term, which is included in (1). The effects of $`D^x`$, $`D^y`$ may be examined numerically by performing finite-chain calculations.
To conclude, we have analysed the stability of the spin-$`\frac{1}{2}`$ transverse $`XX`$ chain with respect to dimerization in the presence of the Dzyaloshinskii-Moriya interaction, calculating for this purpose, with the help of continued fractions, the ground state energy for an arbitrary value of the dimerization parameter. Depending on the dependence of the Dzyaloshinskii-Moriya interaction on the amplitude of the lattice distortion, it acts either in favour of dimerization or against it.
It is generally known that increasing the external field leads to a transition from the dimerized phase to an incommensurate phase rather than to the uniform phase. Evidently, the incommensurate phase cannot appear in the presented treatment within the frame of the adopted ansatz for the lattice distortions $`\delta _1\delta _2\delta _1\delta _2\mathrm{\dots }`$, $`\delta _1+\delta _2=0`$. To clarify the possibility of more complicated distortions, chains with longer periods should be examined.
The present study was partly supported by the DFG (projects 436 UKR 17/20/98 and Ri 615/6-1). O. D. acknowledges the kind hospitality of Magdeburg University in the spring of 1999, when part of this paper was done. He is also indebted to Mrs. Olga Syska for continuous financial support.
FIGURE 1. Dependence $`E(\delta )-E(0)`$ vs $`\delta `$ for the spin-$`\frac{1}{2}`$ $`XX`$ chain with Dzyaloshinskii-Moriya interaction. $`|I|=1`$, $`\mathrm{\Omega }_0=0`$, $`\alpha =0.8`$, a: $`k=1`$ ($`|D|=0,0.2,0.4,0.6,0.8,1`$ from top to bottom), b: $`k=0`$ ($`|D|=0,0.2,0.4,0.6,0.8,1`$ from bottom to top), c: $`|D|=0.5`$ ($`k=1,0.8,0.6,0.4,0.2,0`$ from bottom to top).
FIGURE 2. Dependence $`\delta ^{}`$ vs $`\alpha `$ for the spin-$`\frac{1}{2}`$ transverse $`XX`$ chain with Dzyaloshinskii-Moriya interaction. $`|I|=1`$, $`\alpha =0.8`$, $`\mathrm{\Omega }_0=0`$ (a, d, g), $`\mathrm{\Omega }_0=0.1`$ (b, e, h), $`\mathrm{\Omega }_0=0.3`$ (c, f, i), $`k=1`$ (a, b, c) ($`|D|=0,0.2,0.4,0.6,0.8,1`$ from left to right), $`k=0`$ (d, e, f) ($`|D|=0,0.2,0.4,0.6,0.8,1`$ from right to left), $`|D|=0.5`$ (g, h, i) ($`k=1,0.8,0.6,0.4,0.2,0`$ from right to left).
# Evidence for a low density Universe from the relative velocities of galaxies
R. Juszkiewicz<sup>โยง</sup>, P. G. Ferreira<sup>โโฅยถ</sup>, H. A. Feldman<sup>#</sup>,
A. H. Jaffe<sup>โโ</sup>, M. Davis<sup>โโ</sup>
Département de Physique Théorique, Université de Genève, CH-1211 Genève, Switzerland
<sup>ยง</sup> Copernicus Astronomical Center, 00-716 Warsaw, Poland
Theory Group, CERN, CH-1211, Genève 23, Switzerland
CENTRA, Instituto Superior Tecnico, Lisboa 1096 Codex, Portugal
<sup>#</sup> Department of Physics and Astronomy, University of Kansas, Lawrence, KS 66045
<sup>โโ</sup>Center for Particle Astrophysics, University of California, Berkeley, CA94720, USA
The motions of galaxies can be used to constrain the cosmological density parameter, $`\mathrm{\Omega }`$, and the clustering amplitude of matter on large scales. The mean relative velocity of galaxy pairs, estimated from the Mark III survey, indicates $`\mathrm{\Omega }=\mathbf{0.35}_{\mathbf{-0.25}}^{+\mathbf{0.35}}`$. If the clustering of galaxies is unbiased on large scales, $`\mathrm{\Omega }`$ = 0.35 $`\pm `$ 0.15, so that an unbiased Einstein-de Sitter model ($`\mathrm{\Omega }=`$ 1) is inconsistent with the data.
The mean relative velocity for a pair of galaxies at positions $`\stackrel{}{r}_1`$ and $`\stackrel{}{r}_2`$ is $`\stackrel{}{u}_{12}=H\stackrel{}{r}`$, where $`\stackrel{}{r}=\stackrel{}{r}_1-\stackrel{}{r}_2`$ and the constant of proportionality $`H=100`$ h km s<sup>-1</sup>Mpc<sup>-1</sup> is the Hubble parameter (1,2). The quantity $`0.6<`$ h $`<1`$ parametrizes uncertainties in $`H`$ measurements. This law is an idealization, followed by real galaxies only on sufficiently large scales, corresponding to a uniform mass distribution. On smaller scales, the gravitational field induced by galaxy clusters and voids generates local deviations from the Hubble flow, called peculiar velocities. Correcting for this effect gives $`\stackrel{}{u}_{12}=H\stackrel{}{r}+v_{12}\stackrel{}{r}/r`$. The quantity $`v_{12}(r)`$ is called the mean pairwise streaming velocity. In the limit of large $`r`$, $`v_{12}=0`$. In the opposite limit of small separations, $`u_{12}(r)=0`$ (virial equilibrium). Hence, at intermediate separations $`v_{12}<0`$ and we can expect to observe gravitational infall, or the "mean tendency of well-separated galaxies to approach each other" (3). In a recent paper we derived an expression, relating $`v_{12}`$ to cosmological parameters (4); in another, using Monte Carlo simulations we showed how $`v_{12}`$ can be measured from velocity-distance surveys of galaxies (5). Our purpose here is to estimate $`v_{12}(r)`$ from observations and constrain the cosmological density parameter, $`\mathrm{\Omega }`$.
The statistic we consider was first introduced in the context of the Bogolyubov-Born-Green-Kirkwood-Yvon (BBGKY) kinetic theory describing the dynamical evolution of a self-gravitating collection of particles (3,6). One of the BBGKY equations is the so-called pair conservation equation, relating the time evolution of $`v_{12}`$ to $`\xi (r)`$, the two-point correlation function of spatial fluctuations in the fractional matter density contrast (3). Its solution is well approximated by (4)
$`v_{12}(r)`$ $`=`$ $`-\frac{2}{3}Hr\mathrm{\Omega }^{0.6}\overline{\overline{\xi }}(r)[1+\alpha \overline{\overline{\xi }}(r)],`$ (1)
$`\overline{\overline{\xi }}(r)`$ $`=`$ $`{\displaystyle \frac{3{\displaystyle \int _0^r}\xi (x)x^2dx}{r^3[1+\xi (r)]}},`$ (2)
where $`\alpha =1.2-0.65\gamma `$, $`\gamma =-(d\mathrm{ln}\xi /d\mathrm{ln}r)_{\xi =1}`$, and $`\mathrm{\Omega }`$ is the present density of nonrelativistic particles. The equations above have been obtained by interpolating between a second-order perturbative solution for $`v_{12}(r)`$ and the nonlinear stable clustering solution. For a particle pair at separation $`\stackrel{}{r}`$, the streaming velocity is given by
$`v_{12}(r)=\langle (\stackrel{}{v}_1-\stackrel{}{v}_2)\cdot \widehat{r}\rangle _\rho =\langle (\stackrel{}{v}_1-\stackrel{}{v}_2)\cdot \widehat{r}w_{12}\rangle ,`$ (3)
where $`w_{12}=(1+\delta _1)(1+\delta _2)[1+\xi (r)]^{-1}`$ is the pair-density weighting, $`\stackrel{}{v}_A`$ and $`\delta _A`$ are the peculiar velocity and fractional density contrast of matter at position $`\stackrel{}{r}_A`$, $`A=1,2,\dots `$, the separation $`r=|\stackrel{}{r}_1-\stackrel{}{r}_2|`$ is fixed for all pairs, the hats denote unit vectors, and $`\xi (r)=\langle \delta _1\delta _2\rangle `$. The expression in square brackets in the definition of $`w_{12}`$ ensures that $`\langle w_{12}\rangle =1`$ and the pairwise velocity probability density integrates to unity. Note that the pair-weighted average, $`\langle \dots \rangle _\rho `$, differs from simple spatial averaging, $`\langle \dots \rangle `$, by the weighting factor $`w_{12}`$. The pair-weighting makes the average different from zero, unlike the volume average $`\langle \stackrel{}{v}_1-\stackrel{}{v}_2\rangle \equiv 0`$, which vanishes because of isotropy.
Our approximate solution of the pair conservation equation was successfully tested against N-body simulations in the dynamical range $`\xi \lesssim 10^3`$ (4,7). It is valid for universes filled with non-relativistic particles and it is insensitive to the value of the cosmological constant (2,4). Eqn. (1) was derived under the additional assumption that the probability distribution of the initial, small-amplitude density fluctuations was Gaussian.
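For a pure power-law correlation function, $`\xi (r)=(r/r_0)^\gamma `$, the volume integral in Eq. (2) can be done analytically, so that Eqs. (1)-(2) reduce to a closed form. The Python sketch below is a minimal illustration of this closed form; the correlation length $`r_0=5`$ h<sup>-1</sup>Mpc is an assumption chosen for illustration only, not a value fitted in this paper.

```python
import numpy as np

H = 100.0  # Hubble parameter in units of h km/s/Mpc

def v12_power_law(r, omega, gamma=1.75, r0=5.0):
    """Mean pairwise streaming velocity, Eqs. (1)-(2), for a power-law
    correlation function xi(r) = (r/r0)**(-gamma); r, r0 in h^-1 Mpc.
    For this xi, the integral in Eq. (2) gives
    xi_bb = 3*xi / ((3 - gamma) * (1 + xi))."""
    xi = (r / r0) ** (-gamma)
    xi_bb = 3.0 * xi / ((3.0 - gamma) * (1.0 + xi))
    alpha = 1.2 - 0.65 * gamma           # as defined below Eq. (2)
    return -(2.0 / 3.0) * H * r * omega**0.6 * xi_bb * (1.0 + alpha * xi_bb)

# Infall (negative v12) at 10 h^-1 Mpc for two density parameters:
for omega in (0.35, 1.0):
    print(omega, v12_power_law(10.0, omega), "km/s")
```

Evaluating this expression at several separations, rather than at a single $`r`$, is what allows the degeneracy between $`\mathrm{\Omega }`$ and the clustering amplitude to be broken (bottom panel of Fig. 1).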
Until now we have also implicitly assumed that (i) the spatial distribution of galaxies traces the mass distribution and that (ii) $`v_{12}(r)`$ for the galaxies is the same as for the matter. If the galaxies are more clustered than mass, condition (i) is broken and we have "clustering bias". The galaxy two-point correlation function is close to a power law, $`\xi ^{\mathrm{gal}}(r)\propto r^{-\gamma }`$, over three orders of magnitude in separation $`r`$ (8). This is not true for the mass correlation function $`\xi (r)`$ in structure formation models of the cold dark matter (CDM) family (7). To reconcile theory with observation, one has to introduce a measure of bias that depends on separation and cosmological time, $`t`$: $`b^2(r,t)=\xi ^{\mathrm{gal}}(r,t)/\xi (r,t)`$. Because of the pair-density weighting, clustering bias can in principle induce "velocity bias" in a way similar to systematic error propagation. This is certainly true in the most simplistic of all biasing prescriptions, "linear biasing", under which $`b`$ is a constant and, moreover, $`\delta ^{\mathrm{gal}}=b\delta `$. The expression for $`v_{12}^{\mathrm{gal}}`$ can be obtained from Eqn. (3) by formally replacing the weighting function $`w_{12}(\delta _1,\delta _2)`$ with $`w_{12}(\delta _1^{\mathrm{gal}},\delta _2^{\mathrm{gal}})`$. In the linear limit, $`\xi \ll 1`$, we get $`v_{12}^{\mathrm{gal}}=bv_{12}`$ (9), in qualitative agreement with recent N-body simulations, which considered a whole range of biasing prescriptions, allowing nonlinear and/or non-local mapping of the mass density field onto $`\delta ^{\mathrm{gal}}`$ (10). However, there are also simulations which show exactly the opposite: although the galaxies do not trace the spatial distribution of mass, pairs of galaxies behave like pairs of test particles moving in the gravitational field of the true mass distribution, and $`v_{12}^{\mathrm{gal}}(r)=v_{12}(r)`$ (11). Direct measurements of $`v_{12}^{\mathrm{gal}}(r)`$ can help us decide which simulations and biasing schemes are more believable than others. Indeed, one can measure $`v_{12}^{\mathrm{gal}}`$ for different morphological classes of galaxies. The linear bias model predicts $`v_{12}^{(\mathrm{E})}/v_{12}^{(\mathrm{S})}=b^{(\mathrm{E})}/b^{(\mathrm{S})}`$, where the superscripts refer to elliptical (E) and spiral (S) galaxies. Observations suggest $`b^{(\mathrm{E})}/b^{(\mathrm{S})}\approx 2`$ and $`b^{(\mathrm{S})}\approx 1`$ (13). Hence, one expects $`v_{12}^{(\mathrm{E})}/v_{12}^{(\mathrm{S})}\approx 2`$ if the linear bias model is correct and $`v_{12}^{(\mathrm{E})}/v_{12}^{(\mathrm{S})}=1`$ in the absence of velocity bias.
Measurements of $`v_{12}(r)`$ can also be used to determine $`\mathrm{\Omega }`$. Indeed, if the mass correlation function is well approximated by a power law, $`\xi (r)\propto r^{-\gamma }`$, $`v_{12}`$ at a fixed separation can be expressed in terms of $`\mathrm{\Omega }`$ and the standard normalization parameter, $`\sigma _8`$. The latter quantity is the root-mean-square contrast in the mass found within a randomly placed sphere of radius 8 h<sup>-1</sup>Mpc. Unlike the conventional linear perturbative expression $`v_{12}(r)\propto \mathrm{\Omega }^{0.6}\sigma _8^2r^{1-\gamma }`$, our nonlinear Ansatz provides the possibility of separating $`\sigma _8`$ from $`\mathrm{\Omega }`$ by measuring $`v_{12}`$ at different values of $`r`$ [see the lowermost panel in Fig. 1 below; see also ref. (12)].
We will now describe our measurements. The mean difference between the radial velocities of a pair of galaxies is $`\langle s_A-s_B\rangle _\rho =v_{12}\widehat{r}\cdot (\widehat{r}_A+\widehat{r}_B)/2`$, where $`s_A=\widehat{r}_A\cdot \stackrel{}{v}_A`$ and $`\stackrel{}{r}=\stackrel{}{r}_A-\stackrel{}{r}_B`$. Here, as before, the latin subscripts number the galaxies in the survey, $`A,B=1,2,\dots `$ To estimate $`v_{12}`$, we minimize the quantity $`\chi ^2(v_{12})=\sum _{A,B}\left[(s_A-s_B)-p_{AB}v_{12}/2\right]^2`$, where $`p_{AB}\equiv \widehat{r}\cdot (\widehat{r}_A+\widehat{r}_B)`$ and the sum is over all pairs at fixed separation $`r=|\stackrel{}{r}_A-\stackrel{}{r}_B|`$. The resulting statistic is (5)
$`v_{12}(r)={\displaystyle \frac{2\sum (s_A-s_B)p_{AB}}{\sum p_{AB}^2}}.`$ (4)
Monte-Carlo simulations show that this estimator is insensitive to biases in the way galaxies are selected from the sky and can be corrected for biases due to errors in the estimates of the radial distances to the galaxies (5). The survey used here is the Mark III standardized catalogue of galaxy peculiar velocities (14,15,16). It contains 2437 spiral galaxies with Tully-Fisher (TF) distance estimates and 544 ellipticals with $`D_n`$-$`\sigma `$ distances. The total survey depth is over 120 h<sup>-1</sup>Mpc, with homogeneous sky coverage up to 30 h<sup>-1</sup>Mpc. The inverse TF and IRAS density field corrections for inhomogeneous Malmquist bias in the spiral sample agree with each other and give similar streaming velocities, with lognormal distance errors of order $`\sigma _{\mathrm{ln}d}\approx 23\%`$. For the elliptical sample, $`\sigma _{\mathrm{ln}d}\approx 21\%`$ and the distances assume a smooth Malmquist bias correction (17).
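A minimal implementation of the estimator in Eq. (4) is sketched below for a set of galaxies with measured positions and radial peculiar velocities. It is only a sketch: the binning in separation, the Malmquist-bias corrections, and the mock-catalogue error analysis of ref. (5) are all omitted, and the synthetic input values are placeholders.

```python
import numpy as np

def v12_estimate(pos, s, r_min, r_max):
    """Pairwise streaming velocity estimator, Eq. (4).
    pos : (N, 3) galaxy positions (h^-1 Mpc); s : (N,) radial peculiar
    velocities (km/s), s_A = rhat_A . v_A. Pairs with separation in
    [r_min, r_max) contribute to the sums."""
    rhat = pos / np.linalg.norm(pos, axis=1, keepdims=True)
    num, den = 0.0, 0.0
    n = len(s)
    for a in range(n):
        for b in range(a + 1, n):
            dr = pos[a] - pos[b]
            r = np.linalg.norm(dr)
            if r_min <= r < r_max:
                p_ab = np.dot(dr / r, rhat[a] + rhat[b])  # p_AB
                num += (s[a] - s[b]) * p_ab
                den += p_ab * p_ab
    return 2.0 * num / den

# Purely illustrative synthetic input (not Mark III data):
rng = np.random.default_rng(0)
pos = rng.uniform(20.0, 60.0, size=(200, 3))
s = rng.normal(0.0, 300.0, size=200)
print(v12_estimate(pos, s, 9.0, 11.0), "km/s")
```

For uncorrelated random velocities, as in this toy input, the estimate scatters around zero; a coherent infall signal drives it negative.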
The estimates from the spiral and elliptical samples are remarkably consistent with each other (Fig. 1), unlike previous comparisons using the velocity correlation tensor (18,19). For a velocity ratio $`R=v_{12}^{(\mathrm{E})}/v_{12}^{(\mathrm{S})}=1`$, we obtain $`\chi ^2\approx 1`$, while for $`R=2`$ the $`\chi ^2=2.1`$. The most straightforward interpretation of this result is that there is no velocity bias and the linear clustering bias model should be rejected. Its static character, and the resulting failure to describe the particle motion induced by gravitational instability, were pointed out earlier on theoretical grounds (20). Our results can, however, be reconciled with the linear bias model if it is generalized to allow scale dependence, $`b=b(r)`$. Biasing factors for both galaxy types can be arbitrarily large at small separations, where $`\xi (r)\gg 1`$, if biasing is suppressed at large separations, where $`|\xi (r)|<1`$. Indeed, in the nonlinear limit $`w_{12}(b\delta _1,b\delta _2)\to b^2\delta _1\delta _2/(b^2\xi )=\delta _1\delta _2/\xi `$, and hence $`v_{12}^{\mathrm{gal}}(r)\to v_{12}(r)`$.
We obtained an estimate of $`\sigma _8`$ and $`\mathrm{\Omega }`$ from the shape of the $`v_{12}(r)`$ profile as follows. We assumed that the shape of the mass correlation function $`\xi (r)`$ (but not necessarily the amplitude) is similar to the shape of the galaxy correlation function estimated from the APM catalogue (8), consistent with a power-law index $`\gamma =1.75\pm 0.1`$ (the errors we quote are conservative) for separations $`r\lesssim 10\mathrm{h}^{-1}\text{Mpc}`$. Given the depth of the Mark III catalogue, we expect the estimates of $`v_{12}(r)`$ to be only weakly correlated at $`r<10`$ h<sup>-1</sup>Mpc; we use N-body simulations to determine the covariance of the estimates over this range of scales and use a $`\chi ^2`$ minimization to obtain the 1-$`\sigma `$ constraints: $`\sigma _8\gtrsim 0.7`$ and $`\mathrm{\Omega }=0.35_{-0.25}^{+0.35}`$. Fixing $`\sigma _8=1`$ we obtain $`\mathrm{\Omega }=0.35\pm 0.15`$ (Fig. 2).
We can obtain a more conservative constraint on $`\sigma _8`$ and $`\mathrm{\Omega }`$ by examining $`v_{12}`$ at a single separation, $`r=r_{\ast }=10\mathrm{h}^{-1}`$Mpc. Substituting $`r=r_{\ast }`$ and $`\xi (r)\propto r^{-1.75}`$ into Eqns. (1)-(2), we get
$$v_{12}(r_{\ast })=-605\sigma _8^2\mathrm{\Omega }^{0.6}(1+0.43\sigma _8^2)/(1+0.38\sigma _8^2)^2\mathrm{km}/\mathrm{s}.$$
(5)
The above relation shows that at $`r=r_{\ast }`$, $`v_{12}`$ is almost entirely determined by the values of two parameters: $`\sigma _8`$ and $`\mathrm{\Omega }`$. The uncertainties in the observed $`\gamma `$ lead to an error in Eq. (5) of less than $`10\%`$ for $`\sigma _8\lesssim 1`$. In fact, at this level of accuracy and at this particular scale, our constraints depend only on the value of $`\mathrm{\Omega }`$ and the overall normalization $`\sigma _8`$ but do not depend on other model parameters, such as the shape of $`\xi (r)`$. The streaming velocity $`v_{12}(r_{\ast })`$ depends on $`\xi (r)`$ only at $`r<r_{\ast }`$, so unlike bulk flows, it is unaffected by the behavior of $`\xi (r)`$ at $`r>r_{\ast }`$ [compare our Eqn. (1) with Eqn. (21.76) in ref. (2)]. Moreover, the dominant contribution to $`v_{12}(r_{\ast })`$ comes from $`\overline{\xi }(r_{\ast })`$, an average of $`\xi (r)`$ over a ball of radius $`r_{\ast }`$, so the details of the true shape of $`\xi (r)`$ at $`r<r_{\ast }`$ have little effect on $`v_{12}(r_{\ast })`$ as long as $`\sigma _8`$ (and hence the volume-averaged $`\xi `$) is fixed. Hence, Eqn. (5) can provide robust limits on $`\sigma _8`$ and $`\mathrm{\Omega }`$ even if the assumption of the proportionality of $`\xi (r)`$ to the APM correlation function is dropped. This statement can be directly tested by comparing predictions of Eqn. (5) with predictions of CDM-like models, all of which fail to reproduce the pure power-law behavior of the observed galaxy correlation function. When this test was applied to four models recently simulated by the Virgo Consortium (7), we found that for fixed values of $`\sigma _8`$ and $`\mathrm{\Omega }`$, the predictions based on Eqn. (5) were within $`6\%`$ of the $`v_{12}(r_{\ast })`$ obtained from the simulations (21).
The measured value, $`v_{12}(r_{\ast })=-280_{-53}^{+68}\mathrm{km}/\mathrm{s}`$ (Fig. 1), is inconsistent with $`\sigma _8=1`$ and $`\mathrm{\Omega }=1`$ at the $`99\%`$ confidence level.
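That rejection is easy to reproduce from Eq. (5) as transcribed above. The short sketch below evaluates $`v_{12}(r_{\ast })`$ for a few $`(\sigma _8,\mathrm{\Omega })`$ pairs and checks them against the quoted measurement with its 1-$`\sigma `$ errors; the particular pairs are chosen for illustration.

```python
def v12_rstar(sigma8, omega):
    """Eq. (5): streaming velocity at r* = 10 h^-1 Mpc, in km/s."""
    s2 = sigma8**2
    return -605.0 * s2 * omega**0.6 * (1.0 + 0.43 * s2) / (1.0 + 0.38 * s2) ** 2

v_meas, err_lo, err_hi = -280.0, 53.0, 68.0  # measured v12(r*) and 1-sigma errors

for sigma8, omega in [(1.0, 1.0), (1.25, 0.3), (1.0, 0.35)]:
    v = v12_rstar(sigma8, omega)
    ok = v_meas - err_lo <= v <= v_meas + err_hi
    print(f"sigma8={sigma8}, Omega={omega}: v12(r*) = {v:.0f} km/s, "
          f"within 1-sigma: {ok}")
```

The unbiased Einstein-de Sitter pair $`(\sigma _8,\mathrm{\Omega })=(1,1)`$ gives roughly $`-450`$ km/s, well outside the measured range, while low-$`\mathrm{\Omega }`$ pairs such as the one used in Fig. 1 fall comfortably inside it.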
Our results are compatible with a number of earlier dynamical estimates of the parameter $`\beta \equiv \mathrm{\Omega }^{0.6}\sigma _8`$ [$`\beta `$ is sometimes defined as $`\mathrm{\Omega }^{0.6}/b`$, but $`\sigma _8\approx 1/b`$ and the two definitions differ at most at the 10% level; see, for example, ref. (8)]. A technique based on the action principle (22) gives $`\beta =0.34\pm 0.13`$; comparisons of peculiar velocity fields with redshift surveys based on the integral form of the continuity equation (called velocity-velocity comparisons) typically give $`\beta =0.5`$-$`0.6`$ (23-26). Of the velocity-velocity comparisons, the one with the smallest error bars is the VELMOD estimate: $`\beta =0.5\pm 0.05`$ (24). This constraint has several advantages over others; in particular, it correctly takes into account cross-calibration errors between different Mark III subcatalogues. To illustrate the consistency of our results with velocity-velocity studies, we now compare our limits on $`\sigma _8`$ and $`\mathrm{\Omega }`$, derived from the shape of $`v_{12}(r)`$ for a range of separations, with constraints from our measurement of $`v_{12}(10\mathrm{h}^{-1}\text{Mpc})`$ alone, combined with limits from VELMOD (Fig. 2). Again we find that a low-$`\mathrm{\Omega }`$ universe is favoured: $`\mathrm{\Omega }<0.65`$ and $`\sigma _8>0.7`$. The concordance region overlaps with the constraint derived from our measurements of $`v_{12}(r)`$.
Our results disagree with the IRAS-POTENT estimate, $`\beta =0.89\pm 0.12`$ (27). The IRAS-POTENT analysis is based on the continuity equation in its differential form; it uses a rather complicated reconstruction technique to recover the full velocity field from its radial component. The reason for the disagreement is not clear at present. One can think of at least two possible sources of systematic errors in the IRAS-POTENT analysis: (i) the reconstruction scheme itself (e.g., taking spatial derivatives of noisy data), and (ii) the nonlinear corrections adopted. The nonlinear corrections diverge like $`\mathrm{\Omega }^{-1.8}`$ in the limit $`\mathrm{\Omega }\to 0`$ (27). By contrast, the accuracy of the nonlinear corrections in the velocity-velocity approach is insensitive to $`\mathrm{\Omega }`$. The velocity-velocity approach is also simpler than IRAS-POTENT because it does not involve the reconstruction of the full velocity vector from its radial component measurements (although both approaches do require a reconstruction of galaxy positions from their redshifts). Note that our method is direct, not inverse: it does not require any reconstruction at all.
Finally, there is a potential caveat in the "no velocity bias" assumption in our own analysis. Although this assumption is based on empirical evidence from the two sets of galaxy types, the observational data are noisy and involve non-trivial corrections for Malmquist bias which could affect the two samples differently. Application of our approach to different data sets will clarify these issues. If, contrary to our preliminary results, the streaming velocity turns out to be subject to bias after all, such a finding may affect our estimates of $`\sigma _8`$ and $`\mathrm{\Omega }`$ based on the shape of the $`v_{12}(r)`$ profile, but not our rejection of the unbiased Einstein-de Sitter model. In this sense our differences with the IRAS-POTENT analysis do not depend on the presence or absence of velocity bias.
The advantages of the new statistic we have used here can be summarized as follows. First, $`v_{12}`$ can be estimated directly from velocity-distance surveys, without subjecting the observational data to the multiple operations of spatial smoothing, integration, and differentiation used in various reconstruction schemes. Second, unlike cosmological parameter estimators based on the acoustic peaks expected to appear in the cosmic microwave background power spectrum (28), the $`\mathrm{\Omega }`$ estimate based on $`v_{12}`$ is model-independent. Finally, our approach offers the possibility of breaking the degeneracy between $`\mathrm{\Omega }`$ and $`\sigma _8`$ by measuring $`v_{12}(r)`$ at different separations.
References and Notes
(1) E.P. Hubble, Proc. Natl. Acad. Sci. USA 15, 168 (1929).
(2) P.J.E. Peebles, Principles of Physical Cosmology, pp. 340-342 (Princeton University Press, Princeton, 1993).
(3) P.J.E. Peebles, The LargeโScale Structure of the Universe, pp. 266-268 (Princeton University Press, Princeton, 1980).
(4) R. Juszkiewicz, V. Springel, R. Durrer, Astrophys. J. 518, L25 (1999).
(5) P.G. Ferreira, R. Juszkiewicz, H. Feldman, M. Davis, A.H. Jaffe, Astrophys. J. 515, L1 (1999).
(6) M. Davis, P.J.E. Peebles, Astrophys. J. Suppl. 34, 425 (1977).
(7) A. Jenkins et al., Astrophys. J. 499, 20 (1998).
(8) G. Efstathiou, in Les Houches, Session LX. Cosmology and large scale structure. (eds. R. Schaeffer, et al.) 107-252 (North-Holland, Amsterdam, 1996).
(9) K.B. Fisher, M. Davis, M. Strauss, A. Yahil, J. Huchra, Mon. Not. R. Astron. Soc. 267, 927 (1994).
(10) V.K. Narayanan, A.A. Berlind, D.H. Weinberg, astro-ph/9812002 (1998).
(11) G. Kauffmann, J.M. Colberg, A. Diaferio, S.D.M. White, Mon. Not. R. Astron. Soc. 303, 188 (1999). To linear bias enthusiasts, this result may appear puzzling. A textbook example of such a possibility is the spatial distribution of the gas and stars in a galaxy. Their distribution itself is biased, while their relative velocities at separations comparable to the radius of the dark halo are not.
(12) N-body simulations show that the stable clustering solution ($`u_{12}=0`$, $`v_{12}\left(r\right)/Hr=-1`$) occurs for $`\xi >200`$ [ref. (7); see also B. Jain, Mon. Not. R. Astron. Soc. 287, 687 (1997)]. Note that in the limit $`r\to 0`$ our Ansatz gives $`v_{12}/Hr\to -\left[2/\left(3-\gamma \right)\right]\mathrm{\Omega }^{0.6}\left(1+\alpha \right)`$, which for the range of parameters considered is generally different from $`-1`$ (although of the right order of magnitude). It is possible to improve our approximation and satisfy the small-separation boundary condition by replacing $`\left(2/3\right)\mathrm{\Omega }^{0.6}`$ with the expression $`\left[\left(1-\gamma /3\right)F\xi \left(r\right)+\left(2/3\right)\mathrm{\Omega }^{0.6}\right]/\left[F\xi \left(r\right)+1\right]`$, where $`F\approx 1/100`$ is a fudge factor which ensures that $`v_{12}/Hr\to -1`$ when $`\xi \gg 100`$ while the perturbative, large-scale solution remains unaffected. However, stable clustering occurs at separations smaller than $`\sim 200`$ h<sup>-1</sup>kpc, which is a tenth of the smallest separation we consider here. As a result, in our range of separations $`r`$, Eqn. (1) is as close to the fully nonlinear N-body simulations as the improvement suggested above [see ref. (4), Fig. 2].
(13) M.A. Strauss, J.A. Willick, Phys. Rep. 261, 271 (1995).
(14) J.A. Willick, S. Courteau, S.M. Faber, D. Burstein, A. Dekel, Astrophys. J. 446, 12-38 (1995).
(15) J.A. Willick, et al., Astrophys. J. 457, 460 (1996).
(16) J.A. Willick, et al., Astrophys. J. Suppl. 109, 333 (1997).
(17) D. Lynden-Bell, et al., Astrophys. J 329, 19 (1988).
(18) K.M. Górski, M. Davis, M.A. Strauss, S.D.M. White, A. Yahil, Astrophys. J. 344, 1 (1989).
(19) E.J. Groth, R. Juszkiewicz, J.P. Ostriker, Astrophys. J. 346, 558 (1989).
(20) Under realistic circumstances one expects that gravitational growth of clustering pulls the mass with the galaxies. As a result, any clustering bias, introduced at the epoch of galaxy formation is likely to evolve towards unity at late times, at variance with the linear bias model, where $`b`$ is time-independent \[J. N. Fry, Astrophys. J. 461, L65 (1996); P. J. E. Peebles, astro-ph/9910234\]. A similar behavior was seen in N-body simulations (11). Note also that the linear bias relation $`\delta ^{\mathrm{gal}}=b\delta `$ is a conjecture which generally does not follow from the relationship between the correlation functions $`\xi ^{\mathrm{gal}}=b^2\xi `$ unless we restrict our model to a narrow subclass of random fields.
(21) Imagine that one of the CDM-like models is a valid description of our Universe. Let us choose the so-called $`\mathrm{\Lambda }`$CDM model, recently simulated by the Virgo Consortium (7). It is defined by the parameters $`h=0.7`$, $`\mathrm{\Omega }=0.3`$, $`\sigma _8=0.9`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, which is the cosmological constant's contribution to the density parameter. This model requires scale-dependent biasing because the predicted shape of $`\xi \left(r\right)`$ differs widely from the observed galaxy correlation function: at separations $`\mathrm{h}r/\mathrm{Mpc}=10,4,2`$ and 0.1 the logarithmic slope of the mass correlation function reaches the values $`\gamma =1.7,1.5,2.5`$ and 1, respectively. As we expected, however, these wild oscillations do not affect the resulting $`v_{12}\left(r_{\ast }\right)`$. Indeed, the N-body simulations (7) give $`v_{12}\left(r_{\ast }\right)=-220`$ km/s, while substituting $`\sigma _8=0.9`$ and $`\mathrm{\Omega }=0.3`$ in Eqn. (5) gives $`v_{12}\left(r_{\ast }\right)=-225`$ km/s.
(22) E.J. Shaya, P.J.E. Peebles, R.B. Tully, Astrophys. J. 454, 15 (1995).
(23) M. Davis, A. Nusser, J.A. Willick, Astrophys. J. 473, 22 (1996).
(24) J.A. Willick, M.A. Strauss, A. Dekel, T. Kolatt, Astrophys. J. 486, 629 (1997).
(25) L.N. da Costa, et al., Mon. Not. R. Astron. Soc. 299, 425 (1998).
(26) J. A. Willick, M.A. Strauss, Astrophys. J. 507, 64 (1998).
(27) Y. Sigad, A. Eldar, A. Dekel, M.A. Strauss, A. Yahil, Astrophys. J. Suppl. 109, 516 (1997).
(28) A.G. Doroshkevich, Ya.B. Zeldovich, R.A. Sunyaev, Sov. Astron. 22, 523 (1978).
(29) We are grateful to Luiz da Costa for encouragement at early stages of this project. We also benefitted from discussions with Stephane Courteau, Ruth Durrer, Kris Górski and Jim Peebles. RJ was supported by the KBN (Polish Government), the Tomalla Foundation (Switzerland), and the Poland-US M. Skłodowska-Curie Fund. MD and AHJ were supported by grants from the National Science Foundation and NASA. HAF and RJ were supported by the NSF-EPSCoR and GRF. We thank the Organizers of the Summer 1997 workshop at the Aspen Center for Physics, where this work began.
(30) Correspondence should be addressed to PGF (e-mail: pgf@astro.ox.ac.uk).
Figure 1: The streaming velocities of 2437 spiral galaxies (top panel) and 544 elliptical galaxies (center panel) estimated from the Mark III catalogue. The error bars are the estimated 1$`\sigma `$ uncertainties in the measurement due to lognormal distance errors, sparse sampling (shot noise) and the finite volume of the sample (sample variance). The error bars were estimated from the mock catalogues described in ref. (5). The small sample volume also introduces correlations between measurements of $`v_{12}\left(r\right)`$ at different values of $`r`$. To guide the eye, and to show that the $`v_{12}\left(r\right)`$ signal is similar in the two samples despite their different noise levels (a consequence of the much smaller number of galaxies in the elliptical sample), we also plot $`v_{12}\left(r\right)`$ calculated from equation (1) for a $`\xi \propto r^{-1.75}`$ power-law model with $`\sigma _8=1.25`$ and $`\mathrm{\Omega }=0.3`$. Three theoretical $`v_{12}\left(r\right)`$ curves are plotted (bottom panel) with $`\xi \propto r^{-1.75}`$, $`\sigma _8\mathrm{\Omega }^{0.6}=0.7`$ and $`\sigma _8=0.5`$ (solid line), $`1`$ (dotted line) and $`1.5`$ (dashed line). These curves show how measurements of $`v_{12}\left(r\right)`$ can break the degeneracy between $`\mathrm{\Omega }`$ and $`\sigma _8`$.
Figure 2: The blue region constrains the viable values of $`\mathrm{\Omega }`$, the fractional mass density of the universe, and $`\sigma _8`$, the variance of mass fluctuations at $`r=8h^{-1}`$Mpc, from the combination of the constraints on the streaming velocities (red region) and $`\beta =\sigma _8\mathrm{\Omega }^{0.6}`$ (green region). The streaming velocities are constrained at $`r=10h^{-1}`$Mpc from the Mark III catalogue of peculiar velocities, and $`\beta `$ is measured using the VELMOD comparison between the Mark III catalogue and the velocity field inferred from the IRAS redshift survey. The dashed line defines the 1-$`\sigma `$ region obtained from comparing expressions 1 and 2 with the Mark III catalogue from $`2`$h<sup>-1</sup>Mpc to $`10`$h<sup>-1</sup> Mpc.
# A Search for Sub-millisecond Pulsations in Unidentified FIRST and NVSS Radio Sources
## 1. Introduction
The FIRST survey (Faint Images of the Radio Sky at Twenty Centimeters) and the NVSS survey (NRAO VLA Sky Survey) are recent 1400 MHz VLA radio surveys of the Northern sky. The FIRST survey is an ongoing survey of the North and South Galactic caps using the VLA in B-configuration with a synthesized beam size of $`5.4^{\prime \prime }`$ (Becker et al. 1995). In the published FIRST catalog of radio sources from the first two observing sessions in 1993 and 1994 (White et al. 1997), 1550 square degrees of the North Galactic cap were covered, spanning $`7^\mathrm{h}<\alpha `$(J2000)$`<18^\mathrm{h}`$ and $`+28^{\circ }<\delta `$(J2000)$`<+42^{\circ }`$. The positions and flux densities of $`1.4\times 10^5`$ discrete radio sources are complete down to a flux density of $`\sim `$1 mJy. (The online catalog is updated regularly as observing proceeds and currently contains more than $`5.4\times 10^5`$ sources derived from data taken from 1993 to 1998; 99Jul21 catalog version, http://sundog.stsci.edu/first/catalogs.html.) The NVSS survey (Condon et al. 1998) covers $`\delta >-40^{\circ }`$ (82% of the celestial sphere) and catalogs more than 1.8 $`\times `$ 10<sup>6</sup> sources complete down to a flux density of $`\sim `$2.5 mJy. The NVSS survey was conducted with the VLA in D and DnC configurations with a synthesized beam size of $`45^{\prime \prime }`$. The NVSS survey also preserves polarization information.
Several large-scale pulsar surveys have previously been conducted at high Galactic latitudes (see Camilo 1997 and references therein). However, the rates at which the received analog power was sampled and digitized in these surveys, typically 3-4 kHz, and the low observing radio frequencies ($`\sim `$400 MHz), combined with relatively large radio frequency channel bandwidths of between 125 kHz and 1 MHz, restricted their sensitivity to sub-millisecond pulsars to very small dispersion measures (DMs) (DM $`\lesssim `$ 10 pc cm<sup>-3</sup>). Large-scale surveys that maintain sensitivity to sub-millisecond periodicities over a wide range of DMs are difficult: the fast sampling rate and small radio frequency channel bandwidth required make large total bandwidths and long integration times currently impractical. However, a targeted search for sub-millisecond pulsations is possible using narrow frequency channels and a fast sampling rate. Such a survey is of course also sensitive to long-period pulsars which may have been missed in previous surveys due to radio frequency interference or scintillation.
Consideration of the properties of known recycled pulsars and representative models of magnetic field decay and equations of state suggests that a significant population of sub-millisecond pulsars could be present in the Galaxy (e.g., Possenti et al. 1998). It is possible, therefore, that some of the sources which remain unidentified in radio survey catalogs could be bright sub-millisecond radio pulsars which have previously escaped detection in high-latitude pulsar surveys. To date, no pulsar has been found with a period shorter than that of the first millisecond pulsar discovered, PSR B1937+21, which has a 1.56 ms period (Backer et al. 1982). The discovery of a sub-millisecond pulsar would place important constraints on the equation of state of neutron matter at high densities (e.g., Kulkarni 1992).
## 2. Target Choice and Observations
The FIRST and NVSS surveys contain a number of bright sources which are unresolved and have no identification in other wavebands. Although over 99% of bright sources ($`S_{1400}>`$ 60 mJy) found in previous large-scale surveys are believed to be active galactic nuclei (AGN) (Condon et al. 1998), many sources in the FIRST and NVSS catalogs remain unidentified. One possibility is that they are previously unrecognized radio pulsars. Since pulsars often have a high degree of linear polarization (Lyne & Manchester 1988), polarized sources are good targets for pulsar searches. Han & Tian (1999) have identified 97 objects in the NVSS catalog which are coincident with known pulsars. Of the 89 redetected pulsars in Table 1 of their paper for which the degree of linear polarization could be determined from the NVSS observations, only 8 had an observed nominal fractional linear polarization of less than 5%. The intrinsic degree of polarization of these pulsars may be even higher if bandwidth depolarization effects are significant, in which case an even higher fraction of the pulsar sample is more than 5% linearly polarized. Figure 2 of Han & Tian (1999) shows that only $`\sim `$10% of identified quasars and $`\sim `$10% of BL-Lac objects are more than 5% linearly polarized. Thus, although there is not a clear polarization cutoff separating the pulsar and extragalactic populations, a polarization threshold of 5% excludes most ($`\sim `$90%) of the identified non-pulsar population while retaining the majority ($`\sim `$90%) of the identified pulsar population.
We have searched for radio pulsations in bright ($`S_{1400}\ge 15`$ mJy) point-like unidentified sources from the FIRST and NVSS survey catalogs which are more than 5% linearly polarized at 1400 MHz. Sources were selected directly from the catalogs if they met certain criteria.
Unidentified FIRST sources had their flux densities checked against their corresponding NVSS flux densities. If a source were extended, the better resolution of the FIRST survey would be expected to yield a lower flux density for the source than the NVSS survey. Therefore, in order to eliminate extended objects, sources were only included if their FIRST and NVSS flux densities agreed to within a few percent (indicating an unresolved non-variable source) or if the FIRST flux density exceeded the NVSS flux density (indicating an unresolved scintillating source). The large number of extended sources in the catalogs makes this filter necessary, though it does unfortunately eliminate scintillating sources which happen to be fainter in the FIRST survey.
For unresolved NVSS sources outside of the FIRST survey region, pointed VLA observations were undertaken in October 1995 in B-configuration in order to obtain the same angular resolution ($`5.4^{\prime \prime }`$) as the FIRST survey (R. Becker & D. Helfand, unpublished work). Sources in these observations were then subjected to the selection criteria described above. A total of 92 objects from the catalogs (39 appearing in both FIRST and NVSS, and 53 appearing only in NVSS) fit our selection criteria and were sufficiently far north to be observed in our search. The positions, NVSS total intensity peak flux densities, and fractional linear polarization from the NVSS peak flux values are listed for the selected sources in Table 1.
Each of the 92 unidentified sources was observed at a center frequency of 610 MHz in two orthogonal linear polarizations for 420 s with the Lovell 76-meter telescope at Jodrell Bank, UK. A total bandwidth of 1 MHz was split into 32 frequency channels with detected signals from each channel added in polarization pairs and recorded on Exabyte tape as a continuous 1-bit digitized time series sampled at 50 $`\mu `$s.
The minimum detectable flux density of periodicities in a pulsar search depends upon the raw sensitivity of the system and a number of propagation and instrumental effects (e.g., Dewey et al. 1985). Interstellar dispersion contributes to the broadening of the intrinsic pulse according to
$$\tau _{\mathrm{DM}}=\left(\frac{202}{\nu }\right)^3\mathrm{DM}\mathrm{\Delta }\nu $$
(1)
Here $`\tau _{\mathrm{DM}}`$ is in milliseconds, $`\nu `$ is the observing frequency in MHz, DM is the dispersion measure in pc cm<sup>-3</sup>, and $`\mathrm{\Delta }\nu `$ is the channel bandwidth in MHz. For our observing system, $`\tau _{\mathrm{DM}}`$ is 1.135 $`\mu `$s per pc cm<sup>-3</sup> of DM. The fast sampling rate ($`t_{\mathrm{samp}}`$ = 50 $`\mu `$s) and small channel bandwidth ($`\mathrm{\Delta }\nu =31.25`$ kHz) in our survey made it sensitive to sub-millisecond pulsars for a large range of DMs (DM $`\lesssim `$ 500 pc cm<sup>-3</sup>) in the absence of pulse scattering effects. Our estimated sensitivity to pulsations for a range of periods and DMs is shown in Figure 1. A detailed explanation of the calculation used to produce Figure 1 can be found in Crawford (2000).
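The smearing quoted above follows directly from Eq. (1). A minimal numerical check with the survey parameters:

```python
def tau_dm_ms(nu_mhz, dm, dnu_mhz):
    """Single-channel dispersion smearing, Eq. (1), in ms."""
    return (202.0 / nu_mhz) ** 3 * dm * dnu_mhz

nu = 610.0          # centre frequency (MHz)
dnu = 1.0 / 32.0    # channel bandwidth: 1 MHz split into 32 channels (MHz)

print(tau_dm_ms(nu, 1.0, dnu) * 1e3, "us per pc cm^-3")   # ~1.135 us
print(tau_dm_ms(nu, 500.0, dnu), "ms at DM = 500")         # ~0.57 ms
```

At DM = 500 pc cm<sup>-3</sup> the per-channel smearing reaches about 0.57 ms, comparable to a sub-millisecond period, which is why the quoted sensitivity limit sits near that DM.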
## 3. Data Reduction
In each observation, the frequency channels were dedispersed at 91 trial DMs ranging from 0 to 1400 pc cm<sup>-3</sup> and were summed. We searched this large DM range in the unlikely event that a source could be a previously missed long-period pulsar with a high DM. For DM $`\gtrsim `$ 500 pc cm<sup>-3</sup> the channel dispersion smearing is too great to maintain sensitivity to sub-millisecond pulsations from all of our sources. However, the Taylor & Cordes (1993) model of the Galactic free electron distribution indicates that for all of our sources, with the exception of two that are within $`5^{\circ }`$ of the Galactic plane, the DMs are expected to be less than 100 pc cm<sup>-3</sup> regardless of distance. Each resulting dedispersed time series of $`2^{23}`$ samples was then coherently Fourier transformed to produce an amplitude modulation spectrum corresponding to a trial DM.
Radio frequency interference (RFI) produced many false peaks in certain narrow regions of the modulation spectra at low DMs. We therefore masked several frequency ranges and their harmonics in which RFI appeared regularly so that any true pulsar signals were not swamped by interference. Typically 1-2% of the modulation spectrum in each observation was lost in this way.
We then looked for the strongest peaks in the spectra. First, each modulation spectrum was harmonically summed. In this process, integer harmonics in the modulation spectrum are summed, enhancing sensitivity to harmonic signals (e.g., Nice, Fruchter, & Taylor 1995). This is particularly useful for long-period pulsars, which have a large number of unaliased harmonics. After summing up to 16 harmonic signals, the highest candidate peaks in the modulation spectrum were recorded along with the period, DM, and the signal-to-noise ratio (S/N). Redundant harmonic candidates were then eliminated. Unique candidates were recorded if they had S/N $`>`$ 7 and if the candidate period appeared in at least 10 DM trials. The final candidates for each beam were inspected by dedispersing the original data at DMs near the candidate DM and folding the data at periods near the candidate period in order to look for a broad-band, continuous pulsar-like signal.
This technique was tested by observing several known bright pulsars (PSR B1937$`+`$21, PSR B0329$`+`$54, and PSR J2145$`-`$0750) throughout the survey. The results for these pulsars are listed in Table 2. All three pulsars were detected with S/N consistent with our survey sensitivity, though scintillation affects the detection strengths.
## 4. Discussion
We did not detect any significant pulsations from the target sources. Here we consider possible effects which could prevent detection if these sources were pulsars.
The selected sources were bright, with the weakest source having a 1400 MHz flux density of 15 mJy. Assuming a typical pulsar spectral index of $`\alpha =1.6`$ (Lorimer et al. 1995), where $`\alpha `$ is defined according to $`S\propto \nu ^{-\alpha }`$, this source would have a flux density of 58 mJy at 600 MHz (the horizontal dashed line in Figure 1). For expected DMs, this is about seven (four) times greater than our sensitivity limit for periods greater than (about equal to) 1 ms, as indicated in Figure 1. All of our sources, therefore, were bright enough (in the absence of scintillation) to be easily detectable with our observing system if they were pulsars.
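The 58 mJy figure is a simple power-law extrapolation; a short check:

```python
def extrapolated_flux(s_ref, nu_ref, nu, alpha=1.6):
    """Flux density under the assumed spectrum S ~ nu**(-alpha), in mJy."""
    return s_ref * (nu / nu_ref) ** (-alpha)

print(extrapolated_flux(15.0, 1400.0, 600.0))   # ~58 mJy at 600 MHz
```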
Dispersion smearing is not a factor preventing detection, since all but two of these sources have high Galactic latitudes ($`|b|>5^{\circ }`$) and should have DM $`<`$ 100 pc cm<sup>-3</sup> regardless of distance. This is well within our sensitivity limits to sub-millisecond pulsations (see Figure 1). Interstellar scattering, which can be estimated from the Taylor & Cordes (1993) model of the Galactic electron distribution, is expected to be negligible at 610 MHz. Two of the sources (0458+4953 and 0607+2915) are within $`5^{\circ }`$ of the Galactic plane; for an assumed DM of 100 pc cm<sup>-3</sup>, their predicted pulse scatter-broadening times are $`\sim `$140 $`\mu `$s at 610 MHz. This is small enough to maintain sensitivity to sub-millisecond pulsations, and is of the same order as the dispersion smearing (113 $`\mu `$s) at this DM.
An extremely wide beam would prevent modulation of the pulsed signal and could render a pulsar undetectable. However, this would likely occur only in an aligned rotator geometry in which both the spin and magnetic axes are pointing toward us. This is unlikely for our targets, since, if they were pulsars, the position angle of linear polarization is expected to follow the projected direction of the magnetic axis as the star rotates (Lyne & Manchester 1988). This geometry would significantly reduce the measured degree of linear polarization as the pulsar rotates, inconsistent with our choice of significantly polarized sources.
Scintillation is the modulation of a radio signal passing through a medium of variable index of refraction, such as an inhomogeneous interstellar plasma. In diffraction scintillation, the wave scattering causes interference which can enhance or suppress the amplitude of the radio signal on the time scale of minutes (e.g., Manchester & Taylor 1977). These intensity fluctuations vary as a function of radio frequency at any given time and have a characteristic bandwidth $`\mathrm{\Delta }\nu `$, where
$$\mathrm{\Delta }\nu \approx 11\nu ^{22/5}d^{-11/5}.$$
(2)
Here $`\mathrm{\Delta }\nu `$ is in MHz, $`\nu `$ is the observing frequency in GHz, and $`d`$ is the distance to the source in kpc (Cordes, Weisberg, & Boriakoff 1985). For pulsars with $`d<`$ 1 kpc, this characteristic bandwidth exceeds our 610 MHz observing bandwidth of 1 MHz. Indeed, of 28 nearby pulsars observed at 660 MHz by Johnston, Nicastro, & Koribalski (1998), 13 had scintillation bandwidths greater than our bandwidth of 1 MHz and had a characteristic fluctuation time-scale greater than our integration time of 420 s. More than half of these 13 pulsars had distances less than 1 kpc, and only one had $`d>`$ 2 kpc. If the sources we surveyed were placed at a distance of 2 kpc and a spectral power-law index of $`\alpha `$ = 1.6 is assumed, their 400 MHz luminosities would all be at the upper end of the observed pulsar luminosity distribution ($`L_{400}>450`$ mJy kpc<sup>2</sup>). This suggests that if these sources were pulsars, they are likely to be closer than 2 kpc and therefore in the distance range where the scintillation bandwidth exceeds our observing bandwidth. In this case, scintillation causes the probability distribution of the observed intensity to be an exponential function with a maximum in the distribution at zero intensity (McLaughlin et al. 1999). The number of sources we expect to see in our sample is the sum of the probabilities that we will see each individual source (i.e., that the scintillated flux is above the minimum detectable flux). For the range $`P>`$ 1 ms, we expect to see 88 of the 92 sources (5% missed). For the range $`P\lesssim 1`$ ms, we expect to see 85 of the 92 sources (8% missed). Thus, only a few of our sources are likely to have been missed due to scintillation. Therefore scintillation cannot account for the non-detection of the bulk of the sources in the survey.
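The expected-detection numbers quoted above can be sketched as follows: for a source whose scintillation bandwidth exceeds the observing bandwidth, the observed intensity is exponentially distributed, so the probability of detecting a source with mean flux density $`\overline{S}`$ and detection threshold $`S_{\mathrm{min}}`$ is $`\mathrm{exp}(-S_{\mathrm{min}}/\overline{S})`$. The flux list and thresholds below are illustrative placeholders, not the actual source table.

```python
import numpy as np

def expected_detections(mean_fluxes, s_min):
    """Sum of per-source detection probabilities for fully modulated
    exponential intensity statistics: P(S > s_min) = exp(-s_min / <S>)."""
    mean_fluxes = np.asarray(mean_fluxes, dtype=float)
    return np.exp(-s_min / mean_fluxes).sum()

# Placeholder: 92 sources with 610 MHz mean fluxes from ~58 mJy upward,
# and thresholds several times below the weakest extrapolated flux.
fluxes = np.linspace(58.0, 800.0, 92)
print(expected_detections(fluxes, s_min=58.0 / 7.0))   # P > 1 ms regime
print(expected_detections(fluxes, s_min=58.0 / 4.0))   # P ~ 1 ms regime
```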
A pulsar in a binary orbit experiences an acceleration which changes the observed modulation frequency during the course of the observation. Sensitivity to pulsations is degraded if the frequency drift exceeds a single Fourier bin $`\mathrm{\Delta }f=1/T_{\mathrm{int}}`$, where $`T_{\mathrm{int}}`$ is the integration time. Assuming that the acceleration of the pulsar is constant during the observation, a critical acceleration can be defined, above which the change in frequency is greater than $`\mathrm{\Delta }f`$ and the sensitivity is reduced:
$$a_{\mathrm{crit}}=\frac{c}{fT_{\mathrm{int}}^2}.$$
(3)
Since the Fourier drift scales linearly with acceleration, the reduction in sensitivity also scales linearly with acceleration. For our integration time of 420 s, the critical acceleration is $`a_{\mathrm{crit}}/P\approx 1.7`$ (where $`a_{\mathrm{crit}}`$ is in units of m/s<sup>2</sup> and $`P`$ is the pulsar period in ms). Our weakest source is several times brighter than our detection limit (Figure 1), so a reduction in sensitivity by a factor of several should still maintain detectability of pulsations. Thus, a more appropriate critical acceleration for the weakest source in our list is $`a_{\mathrm{crit}}/P\approx 12`$ (for $`P>1`$ ms) and $`a_{\mathrm{crit}}/P\approx 7`$ (for $`P\lesssim 1`$ ms).
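The numbers above follow directly from Eq. (3). A quick check:

```python
C = 2.998e8   # speed of light (m/s)

def a_crit_over_p(t_int_s):
    """a_crit / P from Eq. (3): a_crit = c/(f T^2) = c P / T^2,
    returned in (m/s^2) per millisecond of period."""
    return C / t_int_s**2 * 1.0e-3   # the 1e-3 converts P from s to ms

base = a_crit_over_p(420.0)
print(base)        # ~1.7 (m/s^2)/ms
print(base * 7.0)  # ~12: weakest source, P > 1 ms
print(base * 4.0)  # ~7:  weakest source, P ~ 1 ms
```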
Orbits containing millisecond or sub-millisecond pulsars that have been spun up by mass transfer from a low-mass donor would be expected to be circular. Of the 40 known pulsars in circular ($`e<`$ 0.01) binary orbits, the largest projected mean acceleration along our line of sight (assuming $`i=60^{\circ }`$) is $`a_{\mathrm{mean}}/P\approx 3.6`$ for PSR J1808$`-`$3658, an X-ray millisecond pulsar with a 2.5 ms period in a 2 hr binary orbit around a 0.05 $`M_{\odot }`$ companion (Chakrabarty & Morgan 1998). This acceleration is well below the critical acceleration $`a_{\mathrm{crit}}/P`$ even for our weakest source.
The mean flux density of the sources on our list, however, is much higher ($`S_{610}\approx 400`$ mJy, assuming $`\alpha `$ = 1.6) than that of our weakest source, which raises the critical acceleration for our typical source. Only very large accelerations ($`a_{\mathrm{mean}}/P\gtrsim 85`$ for $`P>1`$ ms and $`a_{\mathrm{mean}}/P\gtrsim 40`$ for $`P\lesssim 1`$ ms) would prevent detection of pulsations for our typical source. A system such as PSR J1808$`-`$3658 with $`S_{610}\approx 400`$ mJy would still be detectable if it had $`P\gtrsim 0.3`$ ms or if it had an orbital period $`P_b\gtrsim 15`$ minutes (but not both). Thus it is unlikely that binary motion in a sub-millisecond or millisecond pulsar system would be a significant source of non-detections for most of our sources.
We have also estimated the likelihood of serendipitously detecting a pulsar not associated with these sources using the observed surface density of both normal and millisecond pulsars in the Galactic plane (Lyne et al. 1998). With standard assumptions for a spectral power law index and luminosity distribution (Lorimer et al. 1993), we find that it is unlikely ($`<2`$% probability) that we would detect any pulsars from the chance placement of our 92 beams.
## 5. Conclusions
No pulsations were detected at 610 MHz from the 92 polarized, point-like sources that we searched from the FIRST and NVSS radio surveys. Sensitivity to sub-millisecond pulsations was maintained for DMs less than about 500 pc cm<sup>-3</sup> (without scattering effects), which encompasses the expected DM range for all of these sources. We find that several effects which could prevent detection (brightness, dispersion smearing, scattering, and beaming) are not significant factors here. Scintillation is expected to account for only a few of our non-detections and therefore cannot be the cause of the majority of our non-detections. For a source with a typical flux density in our list, Doppler motion in a tight binary system would only prevent detection if the mean projected line-of-sight acceleration of the pulsar were at least an order of magnitude higher than those observed in the known population of circular binary pulsar systems. We conclude that as a population, these sources are unlikely to be pulsars. Given that $``$ 10% of extragalactic sources in the NVSS survey were identified by Han & Tian (1999) as being at least 5% linearly polarized, it is possible that most of our target sources are unidentified extragalactic objects. However, the nature of these sources is still not certain.
We thank Bob Becker and David Helfand for providing us with the list of target sources from the FIRST and NVSS surveys, Andrew Lyne for assistance with observing, and Nichi D'Amico and Luciano Nicastro for providing the FVSLAI software package. We thank an anonymous referee for insightful and helpful comments. VMK is grateful for hospitality while visiting Jodrell Bank.
# Coherent Stranski-Krastanov growth in 1+1 dimensions with anharmonic interactions: An equilibrium study
## I Introduction
The preparation of arrays of defect-free three-dimensional (3D) nanoscale islands has been a subject of intense research in the last decade owing to possible optoelectronic applications as quantum dots. The latter are promising for the fabrication of lasers and light-emitting diodes.
When the adhesion forces between the substrate and film materials overcompensate the strain energy stored in the overlayer owing to the lattice mismatch, a thin pseudomorphous wetting layer consisting of an integer number of monolayers is first formed by a layer-by-layer mode of growth. This kind of growth cannot continue indefinitely because of the accumulation of strain energy and the disappearance of the energetic influence of the substrate after several atomic diameters. Then, in the thermodynamic limit, unstrained 3D islands are formed and grow on top of the wetting layer, the lattice misfit being accommodated by misfit dislocations (MDs) at the wetting layer \- 3D islands boundary. Thus the wetting layer and the 3D islands represent different phases in the sense of Gibbs separated by an interphase boundary. The energy of the latter is given by the energy of the array of MDs. This is the classical Stranski-Krastanov mechanism of growth (see Fig. (13a)). However, it has been found that under certain conditions coherently strained (dislocation free) 3D islands are formed on top of the wetting layer (Fig. (13b)). These islands are strained to fit the wetting layer in the middle of their bases but are more or less strain-free near their top and side walls. Such coherently strained islands are formed at large positive misfits when the lattice parameter of the overlayer is larger than that of the substrate and the overlayer is compressed. It has also been observed that the size distribution of the 3D islands is very narrow. The above observations have been reported for the growth of Ge on Si(100), InAs on GaAs(100), InGaAs on GaAs, and InP on In<sub>0.5</sub>Ga<sub>0.5</sub>P. In all cases the lattice misfit is positive and very large (4.2, 7.2, and $``$3.8% for Ge/Si, InAs/GaAs, and InP/In<sub>0.5</sub>Ga<sub>0.5</sub>P, respectively) for semiconductor materials which are characterized by directional and brittle chemical bonds. The only exception to the authorsโ knowledge is the system PbSe/PbTe(111) in which the misfit is negative (-5.5%) and the overlayer is expanded. However, the authors of Ref. () note that whereas the in-plane lattice parameter of the PbSe wetting layer is strained to fit exactly the PbTe substrate, the parameter of the 3D islands rapidly decreases, reaching 95% of the bulk PbSe lattice constant at about 4 monolayers coverage. One could speculate that the lattice misfit is accommodated by MDs introduced at the onset of the 3D islanding.
Whereas the classical SK growth is more or less clear from both thermodynamic and kinetic points of view, the formation of coherent 3D islands still lacks satisfactory explanation. We can consider as a first approximation the formation of coherent 3D islands in SK growth as homoepitaxial growth on uniformly strained crystal surface both film and substrate materials having one and the same bonding. If so, it is not clear what is the thermodynamic driving force for 3D islanding if the islands are coherently strained to the same degree as the wetting layer. It is also not clear why coherent 3D islands are observed in compressed rather than in expanded overlayers. Another question which should be answered is why the formation of coherent 3D islands requires very large value of the positive misfit. The reason for the narrow size distribution is still unclear although much effort has been made to elucidate the problem. Finally, the mechanism of formation of coherent 3D islands is still an open question.
Two major approximations are usually made when dealing theoretically with the formation of coherently strained 3D islands. The first is the use of the linear theory of elasticity in order to compute the strain contribution to the total energy of the islands. However, the validity of the latter is hard to accept bearing in mind the high values of the lattice mismatch. As will be shown below the MDs differ drastically in compressed and expanded films. Second, it is commonly accepted that the interfacial energy between the wetting layer and the dislocation free 3D islands is sufficiently small and can be neglected in the case of coherent SK growth. The latter is equivalent to the assumption that the substrate (the wetting layer) wets completely the 3D islands. In fact this assumption rules out the 3D islanding from thermodynamic point of view as 3D islands are only possible at incomplete wetting, or in other words, when the interfacial energy is greater than zero. As shown below the adhesion of the atoms to the wetting layer is also distributed along the island in addition to the strain distribution and plays a more significant role than the latter. Due to the lattice misfit the atoms are displaced from their equilibrium positions in the bottoms of the potential troughs they should ocupy at zero misfit. In such a way the adhesion of the atoms to the substrate is stronger in the middle of the islands and weaker at the free edges. The average adhesion of an island of a finite size is thus weaker compared with that of an infinite monolayer. An interfacial boundary appears and the wetting of the island by the substrate (the wetting layer) is incomplete in the average. It is this incomplete wetting which drives the formation of dislocation free 3D islands on the uniformly strained wetting layer.
In the present paper we make use of a more realistic interatomic potential which is characterized by its anharmonicity, in the sense that the repulsive branch is steeper than the attractive branch, and by its nonconvexity, which means that it possesses an inflection point beyond which its curvature becomes negative. Recently Tam and Lam have used a Mie potential to describe the mode of growth in a kinetic Monte Carlo procedure. However, the above mentioned authors did not study the effect of misfit sign. Moreover, the distribution of the stress in the 3D islands has been studied again within the continuum elasticity theory. Yu and Madhukar computed the energy and the distribution of strain in coherent Ge islands on Si(001) using a molecular dynamics coupled with the Stillinger-Weber potential but did not study the effect of anharmonicity in the general case.
The use of such a potential allows us to answer the question why coherently strained 3D islands appear predominantly in compressed overlayers. Comparing the energies of mono- and multilayer islands allows to make definite conclusions concerning the mechanism of formation and growth of the 3D islands, and the thermodynamic reason for the narrow size distribution. It turns out that there is a critical 2D island size above which the monolayer islands become unstable against the bilayer islands. Thus, as has been shown earlier by Stoyanov and Markov, Priester and Lannoo and Chen and Washburn, the monolayer islands appear as necessary precursors for the formation of 3D islands. Beyond another critical size the bilayer islands become unstable against trilayer islands, etc. Then, the growth of 3D islands consists of consecutive transformations. As a result of each one of them the islands thicken by one monolayer. The critical size for the mono-bilayer transformations increases sharply with the decrease of the lattice misfit going asymptotically to infinity at some critical misfit. The monolayer islands are thus always stable against the multilayer islands below this critical misfit which explains the necessity of large misfit in order to grow coherent 3D islands. The critical misfit in expanded overlayers is nearly twice greater in absolute value than that in compressed overlayers which in turn explains why coherent 3D islanding is very rarely (if at all) observed in expanded overlayers.
The edge atoms are more weakly bound than the atoms in the middle of the islands. This is due to the weaker adhesion of the edge atoms to the wetting layer. Thus, the 2D-3D transformation takes place by transport of atoms from the edges of the monolayer islands, where they are weakly bound, on top of their surfaces to form islands of the upper layer, where they are more strongly bound. This process is then repeated in the transformation of bilayer to trilayer islands, etc. The existence of a critical size for the 2D-3D transformation is the thermodynamic reason for the narrow size distribution of the 3D islands.
In the case of expanded overlayers the atoms interact with each other through the weaker attractive branch of the potential and most of the atoms are not displaced from their equilibrium positions. The size effect is very weak, the average adhesion is sufficiently strong, and the critical sizes for the 2D-3D transformation either do not exist or appear under extreme conditions of very large absolute values of the misfit. In any case MDs are introduced before the formation of bilayer islands. The coherent monolayer islands are thus either energetically stable against multilayer islands, or MDs are introduced before the 2D-3D transformation. As a result the classical SK growth is expected in expanded overlayers.
## II Model
We consider a model in 1+1 dimensions (substrate $`+`$ height) which we treat as a cross-section of the real 2+1 case. An implicit assumption is that in the real 2+1 case the monolayer islands have a compact rather than a fractal shape and the lattice misfit is one and the same in both orthogonal directions. Although the model is qualitative, it correctly captures all the essential properties of the real 2+1 system, as shown by Snyman and van der Merwe. In this model the monolayer island is represented by a finite discrete Frenkel-Kontorova linear chain of atoms subject to an external periodic potential exerted by a rigid substrate (Fig. (13)). We consider as a substrate the uniformly strained wetting layer of the same material consisting of an integer number of monolayers. In other words, we consider the SK growth in two separate stages. The first stage is a Frank-van der Merwe (layer-by-layer) growth during which the wetting layer is formed. The second stage is a Volmer-Weber growth of 3D islands on top of the wetting layer. In this paper we restrict ourselves to the second stage, assuming the wetting layer is already built up. The energetic influence of the initial substrate is then already lost, and the bonding between the atoms in the 3D islands is the same as that of the atoms of the first atomic plane of the 3D islands to the atoms belonging to the uppermost plane of the wetting layer.
The atoms of the chain are connected with bonds that obey the generalized Morse potential
$`V(x)=V_o\left[{\displaystyle \frac{\nu }{\mu -\nu }}e^{-\mu (x-b)}-{\displaystyle \frac{\mu }{\mu -\nu }}e^{-\nu (x-b)}\right],`$ (1)
shown in Fig. (13) where $`\mu `$ and $`\nu `$ ($`\mu >\nu `$) are constants that govern the repulsive and the attractive branches, respectively, and $`b`$ is the equilibrium atom separation. For $`\mu =2\nu `$ the potential (1) turns into the familiar Morse potential. In the case of homoepitaxy the bond strength $`V_o`$ is related to the energy barrier for desorption.
The potential (1) possesses an inflection point $`x_{inf}=b+\mathrm{ln}(\mu /\nu )/(\mu -\nu )`$ beyond which its curvature becomes negative. The latter leads to a distortion of the interatomic bonds, in the sense that long, weak and short, strong bonds alternate (see the upper right-hand corner of Fig. (13)), and to the appearance of structures consisting of multiple MDs (multikinks or kink-antikink-kink solutions). These represent two kinks (or solitons) connected by a strongly stretched-out bond (the antikink).
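For concreteness, Eq. (1) and the position of the inflection point are easily checked numerically. The following minimal Python sketch (the values $`\mu =12`$, $`\nu =6`$ are those quoted in Sect. III B; the function names and finite-difference step are ours) verifies that $`V(b)=-V_o`$ and that the curvature indeed vanishes at $`x_{inf}`$:

```python
import numpy as np

def V(x, V0=1.0, mu=12.0, nu=6.0, b=1.0):
    """Generalized Morse potential, Eq. (1); minimum V(b) = -V0."""
    return V0 * (nu / (mu - nu) * np.exp(-mu * (x - b))
                 - mu / (mu - nu) * np.exp(-nu * (x - b)))

def x_inflection(mu=12.0, nu=6.0, b=1.0):
    """x_inf = b + ln(mu/nu)/(mu - nu), where V''(x) changes sign."""
    return b + np.log(mu / nu) / (mu - nu)

x_inf = x_inflection()
h = 1e-5  # finite-difference step for the second derivative
curvature = (V(x_inf + h) - 2.0 * V(x_inf) + V(x_inf - h)) / h**2

print(V(1.0))     # -1.0: bond energy at the equilibrium spacing b
print(x_inf)      # ~1.1155 for mu = 12, nu = 6, b = 1
print(curvature)  # ~0: the curvature changes sign here
```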
The 3D islands can be represented by linear chains stacked one upon the other, as in the model proposed by Stoop and van der Merwe, and by Ratsch and Zangwill, each upper chain being shorter than the lower one. In principle, the Frenkel-Kontorova model is inadequate to describe a thickening overlayer because of two basic assumptions inherent in it. The first one is the rigidity of the substrate. Assuming that the substrate remains rigid upon formation of 3D islands on top of it rules out the interaction between the islands through the elastic fields around them. It is believed that this assumption is valid for very thin deposits not exceeding one or two monolayers. The second one is connected with the relaxation effects. When a new monolayer island is formed on top of the previous one, the latter should relax and the strains in the island will redistribute. One can expect that the formation, say, of a second monolayer will make the bonds between the first-monolayer atoms effectively stiffer. As will be discussed below, this will lead to weaker adherence of the atoms in the first monolayer to the wetting layer. MDs could also be introduced to relieve the strain. Nevertheless, the Frenkel-Kontorova model admits an excellent qualitative generalization in two dimensions, both horizontally and vertically. According to the authors of Ref. (), an $`n`$-layer island can be mimicked by assuming that the force constant of the interatomic bonds is $`n`$ times greater than that of a monolayer island. Thus a bilayer island under compression could be simulated by doubling the value of the repulsive constant $`\mu `$. This approach obviously gives the upper bound of the effect of the next layers on the redistribution of the strain in the lower layers. An implicit shortcoming of this method is that it assumes the same number of bonds (and correspondingly atoms) in the upper chains and thus does not allow calculations of clusters with different slopes of the side walls.
Another approach to the problem has been proposed by Ratsch and Zangwill. They accepted that each layer (chain) presents a rigid sinusoidal potential to the chain of atoms on top of it. The atom, or more precisely the potential trough, separation of the lower chain is taken as the average of all atom separations. As the strains of the bonds which are closer to the free ends are smaller, the average atom separation $`b_n`$ in the $`n`$th chain is closer to the unperturbed atom spacing $`b`$, and the lattice misfit $`f_{n+1}=(b-b_n)/b_n`$ for computing the energy of the $`n+1`$st chain is smaller in absolute value than the misfit $`f=(b-a)/a`$, which is valid only for the base chain that is formed on the wetting layer, the latter having an atom separation $`a`$. In such a way the lattice misfit, and in turn the bond strains, gradually decrease with the island height. Every upper chain is taken shorter than the lower one by an arbitrary number of atoms and is centered on top of it as shown schematically in Fig. (13). Moreover, every uppermost chain is kept frozen (the relaxation of the lower chain upon formation of the next one is ruled out) and serves as a template for the formation of the next one. Then, the formation of each next chain does not exert any influence on the distribution of strain in the previous chains, and thus this approach represents the lower bound of the effect of the next layer.
In the present paper we will use the approach of Ratsch and Zangwill. The main reason is that it allows a gradual attenuation of the strain with the island height, and also different angles of the side walls. We believe that, although rather crude, this approach gives correctly the essential physics, with one exception. It does not account for the decrease of the average adhesion of the base chain to the wetting layer upon thickening of the islands. An approximate evaluation of the latter effect can be obtained by using the upper bound approach. It should be emphasized that both approaches show qualitatively identical results. We could expect that the results of more accurate calculations including the strain relaxation will not differ qualitatively from those presented below. Preliminary studies with an energy minimization program allowing strain relaxation always produced dislocated expanded and coherent compressed islands, in agreement with the results shown below. Note that owing to the approximations of the model (1+1 dimensions and the lack of relaxation) the figures obtained as a result of the calculations, e.g. 3.25% for the critical misfit for 3D islanding, should not be taken as quantitatively meaningful. Finally, we have to mention that the numerical solution of the system of governing equations (8) requires no more than a few seconds on a 100 MHz PC even when the number of equations (atoms in the chain) is about 100.
Discussing the stability of mono- and multilayer islands we follow the approach developed by Stoyanov and Markov. We start from the classical concept of the minimum of the surface energy at a fixed volume. Following Stranski, the surface energy $`F(N)`$ is defined as the difference between the potential energy of the cluster consisting of $`N`$ atoms and the potential energy of the same number of atoms in the bulk crystal
$`F(N)=N\varphi _k-{\displaystyle \underset{i=1}{\overset{N}{}}}\varphi _i`$ (2)
which is valid for clusters of arbitrary shape and size. Here $`\varphi _k`$ is the work necessary to detach one atom from a kink position (or the energy of an atom in the bulk of the crystal), and the sum gives the work required to disintegrate the cluster into single atoms. Since the term $`N\varphi _k`$ does not depend on the cluster shape, the stability of mono- and multilayer islands is determined by the above sum. The maximum of the sum corresponds to a minimum of the surface (edge) energy of the cluster. Therefore, as a measure of stability, we adopt the potential energy per atom of the clusters, which is, in fact, equal to the above sum divided by the number of atoms and taken with a negative sign.
The potential energy of a chain of the $`n`$th layer consisting of $`N_n`$ atoms reads
$$E_n=\underset{i=1}{\overset{N_n-1}{}}V(X_{i+1}-X_i)+\underset{i=1}{\overset{N_n}{}}\Phi _i$$
(3)
where
$$\Phi _i=\frac{W}{2}\left[1-\mathrm{cos}\left(2\pi \frac{X_i}{b_{n-1}}\right)\right]$$
(4)
accounts for the adhesion of the $`i`$th atom. $`X_i`$ are the coordinates of the atoms taken from an arbitrary origin. The difference $`\mathrm{\Delta }X_i=X_{i+1}-X_i`$ is in fact the distance between the $`i+1`$st and $`i`$th atoms. The first sum in Eq. (3) gives the energy of the bond strains. The second sum gives the energy of the atoms in the periodic potential field created by the lower chain, where $`W`$ is its amplitude and $`b_{n-1}`$ is the average potential trough separation of the underlying layer. In general $`W`$ should be a function of the atom separation of the underlying layer and thus should depend on $`n`$, but for simplicity we neglect this dependence. As mentioned above, $`b_{n-1}=a`$ holds only for the base chain. The amplitude $`W`$ can be considered in our model as the barrier for surface diffusion. On a nearest neighbor bond hypothesis $`W`$ is related to the substrate-deposit bond strength by
$$W=gV_o$$
(5)
where $`g<1`$ is a constant of proportionality varying approximately from 1/30 for long-range van der Waals forces to 1/3 for short-range covalent bonds.
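Eqs. (3)-(5) translate directly into a short routine. The sketch below (Python; the parameter defaults $`\mu =12`$, $`\nu =6`$, $`g=1/3`$ follow values quoted in the text, while the function name and interface are ours) evaluates the energy of a chain with coordinates $`X_i`$ on a rigid sinusoidal substrate with trough spacing $`b_{n-1}`$:

```python
import numpy as np

def chain_energy(X, b_sub, V0=1.0, mu=12.0, nu=6.0, b=1.0, g=1.0 / 3.0):
    """Energy of one chain, Eq. (3): bond terms V(X_{i+1} - X_i) of Eq. (1)
    plus the adhesion terms Phi_i of Eq. (4), with W = g*V0 as in Eq. (5).
    b_sub is the trough spacing b_{n-1} of the layer underneath."""
    W = g * V0
    dx = np.diff(X)  # nearest-neighbour separations X_{i+1} - X_i
    bonds = V0 * (nu / (mu - nu) * np.exp(-mu * (dx - b))
                  - mu / (mu - nu) * np.exp(-nu * (dx - b)))
    adhesion = 0.5 * W * (1.0 - np.cos(2.0 * np.pi * X / b_sub))
    return bonds.sum() + adhesion.sum()
```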
The average of the second sum in Eq. (3) for the base chain divided by $`V_o`$
$$\Phi =\frac{1}{N_1V_o}\underset{i=1}{\overset{N_1}{}}\Phi _i$$
(6)
has the same physical meaning as the adhesion parameter
$`\Phi ={\displaystyle \frac{\sigma +\sigma _i-\sigma _s}{2\sigma }}=1-{\displaystyle \frac{\beta }{2\sigma }}`$ (7)
which accounts for the incomplete wetting of the 3D islands by the substrate in heteroepitaxy ($`\sigma `$, $`\sigma _i`$ and $`\sigma _s`$ being the specific surface energies of the overlayer, the interface and the substrate, respectively, and $`\beta `$ being the specific adhesion energy). In the case of the classical SK growth the adhesion parameter is given by $`\Phi =\epsilon _d/2\sigma `$, where $`\epsilon _d`$ is the energy of a net of MDs. We have the case of complete wetting when $`\Phi \le 0`$. The formation of 3D islands can obviously take place only when $`0<\Phi <1`$.
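The second equality in Eq. (7) is simply the Dupré relation in disguise: assuming the standard definition of the specific adhesion energy, $`\beta =\sigma +\sigma _s-\sigma _i`$, one obtains $`1-\beta /2\sigma =\left[2\sigma -(\sigma +\sigma _s-\sigma _i)\right]/2\sigma =(\sigma +\sigma _i-\sigma _s)/2\sigma `$, which is the first form quoted above.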
Minimization of $`E_n`$ with respect to $`X_i`$ results in a set of governing equations for the atom coordinates in the form
$$e^{-\mu \epsilon _{i+1}}-e^{-\nu \epsilon _{i+1}}-e^{-\mu \epsilon _i}+e^{-\nu \epsilon _i}+A\mathrm{sin}(2\pi \xi _i)=0,$$
(8)
where $`\epsilon _i=b_{n-1}(\xi _i-\xi _{i-1}-f_n)`$ is the strain of the $`i`$th bond, $`\xi _i=X_i/b_{n-1}`$ is the displacement of the $`i`$th atom with respect to the bottom of the $`i`$th potential trough, $`f_n`$ is the misfit between the $`n`$th chain and the substrate potential exerted by the $`(n-1)`$st chain, and $`A=\pi W(\mu -\nu )/\mu \nu b_{n-1}V_o`$. The lattice misfit has its largest value $`f=(b-a)/a`$ only for the base chain in multilayer islands, and goes to zero with increasing island thickness. Expanding the exponentials in a Taylor series for small strains gives the set of equations that govern the discrete harmonic model. Solving the system of equations (8) numerically gives the atom displacements $`\xi _i`$, from which all the parameters characterizing the system can easily be computed.
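Instead of solving Eqs. (8) directly, one may equivalently minimize the energy (3), of which Eqs. (8) are the stationarity conditions. A hedged sketch follows (reusing `chain_energy` from the previous listing; the optimizer, the commensurate starting configuration and the value of $`g`$ are our own choices, since the text does not quote the $`W`$ used in its examples):

```python
import numpy as np
from scipy.optimize import minimize

def relax_chain(N=21, f=0.05, b=1.0, **params):
    """Relax an N-atom base chain on a wetting layer of spacing a = b/(1 + f),
    starting from the commensurate state X_i = i*a (atoms at trough bottoms).
    Returns coordinates, bond strains, Phi of Eq. (6) and the total energy."""
    a = b / (1.0 + f)
    X0 = a * np.arange(N)
    res = minimize(lambda X: chain_energy(X, a, b=b, **params), X0,
                   method="BFGS")
    X = res.x
    V0 = params.get("V0", 1.0)
    W = params.get("g", 1.0 / 3.0) * V0
    Phi = np.mean(0.5 * W * (1.0 - np.cos(2.0 * np.pi * X / a))) / V0  # Eq. (6)
    return X, np.diff(X) - b, Phi, res.fun

X, strains, Phi, E = relax_chain()
print(Phi)  # > 0 for nonzero misfit: incomplete wetting on average
```

With the parameters used in Sect. III B ($`\mu =12`$, $`\nu =6`$, $`f=0.05`$, $`N=21`$) one should recover a small positive $`\Phi `$ of the order of the quoted 0.024, the exact number depending on the unspecified $`g`$.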
The properties of the solutions of the system (8) are of crucial importance for understanding the coherent SK growth. Two forces act on each atom: first, the force exerted by the neighboring atoms, and second, the force exerted by the substrate (the underlying chain or the wetting layer). The first force tends to preserve the natural spacing $`b`$ between the atoms, whereas the second force tends to place all the atoms at the bottoms of the corresponding potential troughs of the substrate, separated at a distance $`b_n\ne b`$. As a result of the competition between the two forces, the bond strains and the atom adhesion are distributed along the chain. The undislocated solution (Fig. (13a)) clearly shows the decrease of the atom adhesion at the ends of the chain, as the atoms are more and more displaced towards the chain ends. In the case of positive misfit the dislocation represents an empty potential trough, the bond in the core of the dislocation being strongly stretched out (Fig. (13b)). This picture is equivalent to a crystal plane in excess in the substrate. In the opposite case of negative misfit (Fig. (13c)) the dislocation represents two atoms in one trough (a crystal plane in excess in the overlayer), the bond in the dislocation core being compressed. Both configurations are energetically equivalent in the harmonic approximation, where the force between the atoms increases linearly with the atom separation. This is not, however, the case when an anharmonic potential is adopted. The latter displays a maximum force between the atoms at $`x=x_{inf}`$. This is the theoretical tensile stress of the material, $`\sigma _{tens}=V_o\mu (\nu /\mu )^{\mu /(\mu -\nu )}`$, and if the actual force exerted on the corresponding bond is greater than $`\sigma _{tens}`$ the bond will break up. Thus the interval of existence of dislocated solutions in compressed chains depends on the material parameters $`V_o,W,\mu ,\nu ,f`$, and becomes very narrow. Dislocated solutions in compressed chains exist only in sufficiently long chains, beyond some critical chain length. As will be shown below, this leads to coherent SK growth in compressed overlayers. On the contrary, the bonds in the cores of the MDs in expanded chains are compressed and cannot break. As a result MDs become energetically favored and can be introduced in very short chains. Thus, the classical SK growth should be expected in expanded overlayers, as the dislocated islands with a monolayer height can become energetically favored long before the coherently strained multilayer islands.
## III Results
### A Monolayer islands
The distribution of the bond strains along the chains is shown in Fig. (13a). As expected, the bonds in the middle of the chains are strained to fit exactly the uniformly strained wetting layer. The strains at the chain ends tend to zero. In fact the strains of the hypothetical zeroth and $`N`$th bonds should be exactly equal to zero. The strains in the middle of the expanded chain, compared with those of compressed ones, are much closer to $`f`$ owing to the weaker attraction between the atoms of the chain. Fig. (13b) shows the distribution of the bond energy. It is seen that in the case of compressed chains ($`f>0`$) the bond energy in the middle of the chain is smaller than that in expanded chains owing to the stronger atom repulsion.
The distribution of the adhesion of the separate atoms $`\Phi _i`$ (Eq. (4)) (taken in terms of the bond energy $`V_o`$ as $`(\Phi _i-V_o)/V_o`$) is demonstrated in Fig. (13). The weaker adhesion at the chain ends, which is often overlooked in theoretical models, is due to the displacement of the atoms from the bottoms of the potential troughs (see Fig. (13a)). What is more important is that the atoms in the expanded chains adhere much more strongly to the wetting layer compared with the atoms in the compressed chains.
Fig. (13) shows the dependence of the mean adhesion parameter $`\Phi `$ (Eq. (6)) on the number of atoms. As can be expected, the atom adhesion in expanded overlayers is stronger than that in compressed ones owing to the weaker attraction between the atoms in the former. The forces exerted by the substrate are stronger than the forces between the chain atoms, and the latter are situated more deeply in the potential troughs. The curves display maxima which are due to the interplay between the fraction of the most strongly displaced end atoms and the values of the particular displacements. In short chains the atoms are weakly displaced from the bottoms of the potential troughs and the adhesion is stronger. With increasing chain length the displacements of the end atoms increase and beyond some length saturate and do not increase anymore. The fraction of weakly displaced middle atoms increases and a maximum is displayed. The value of the maximum (not shown) decreases sharply with decreasing misfit, going asymptotically to zero at zero misfit. This means that $`\Phi >0`$ at any value of the nonzero misfit, which is the thermodynamic reason for 3D islanding.
Fig. (13) shows the distribution of the total energy (strain plus adhesion) in chains with positive and negative misfit. The maxima in the middle are due to the strain contribution whereas the increase of the energy at the ends is due to the weaker adhesion. It is first seen that the atoms in the expanded chain are considerably more strongly bound to each other and to the substrate. The main difference between both curves is that the atoms at the free ends in compressed chains are much more weakly bound than the end atoms in expanded chains. This result is of crucial importance for our understanding of the mechanism of transformation of the mono- to multilayer (3D) islands. We conclude from Fig. (13) that compressed islands display a greater tendency to transform into bilayer islands and further to form coherent 3D islands in comparison with expanded islands.
### B Multilayer islands
The multilayer (3D) islands can be full pyramids or frusta of pyramids and can have side walls with different slopes. The effect of the side-wall slope on the minimum energy shape is more or less clear. More unsaturated dangling bonds normal to the film plane appear on side walls with smaller slope, and the corresponding surface energy is greater. Obviously, the surface energy of the steepest walls, with a slope of $`60^o`$, is the lowest one. One could expect that the islands bounded by the steepest walls will be more stable than the flatter islands. The problem of whether the pyramids are full or frusta is more difficult to resolve. First, with increasing pyramid height the lattice misfit decreases and the mean strain vanishes. This in turn leads to an increase of the adhesion of the separate atoms and, as a whole, to an increase of the bond energy closer to the apex of the pyramids. On the other hand, the layers which are closer to the apex are smaller in size and the size effect increases. The latter leads to a smaller work of evaporation per atom of a whole uppermost atomic plane. As has been known for a long time, the work required to disintegrate a whole atomic plane into single atoms (the mean separation work), taken with a negative sign, is equal to its chemical potential at the absolute zero. Hence, adding to the pyramid smaller and smaller upper-base atomic planes leads to a decrease of the mean separation work of the upper base and in turn to a higher chemical potential. As a result we could expect that frusta of pyramids with a side-wall slope of $`60^o`$ will be energetically favored. This is clearly seen in Fig. (13), which demonstrates the energy per atom of pyramids with different side-wall slopes as a function of the height, taken as a number of monolayers. The curves display minima at a certain height which clearly show that the frusta of pyramids are the lowest energy configurations. The energy of the full pyramids is much higher. The minimum for the $`60^o`$ side-wall slope is the lowest one, thus confirming the above consideration. The steepest side-wall slope of $`60^o`$ is a natural consequence of the model, which considers a face-centered cubic rather than a diamond lattice. It is worth noting that Ratsch and Zangwill also report that the steepest side walls are energetically favored.
The above result does not mean that in the real experiment the coherent 3D islands will grow as frusta of pyramids. The lowest minimum in Fig. (13) represents in fact the equilibrium shape of the islands. In reality the crystallites grow with a shape which is determined by the rates of growth of the different walls and thus depends on the supersaturation. The growing crystal is bounded by the walls with the lowest growth rate at the given supersaturation. Mo et al have established with the help of scanning tunneling microscopy (STM) that small coherently strained Ge islands ("hut" islands) grow on Si(001) as full pyramids bounded by (105) side walls, whereas Voigtländer and Zinner observed frusta of tetrahedral Ge pyramids on Si(111) with an aspect (height-to-base) ratio showing a maximum of about 0.135 at a coverage of 4 MLs. All the above is valid for sufficiently large crystals. We are interested here in the initial stages of growth of the 3D islands, or more precisely, in the transformation of monolayer into multilayer islands. As shown in the next sections, the formation and growth of 3D islands proceeds by consecutive transformations of monolayer islands into bilayer and then into multilayer islands, which is the lowest energy path of the 2D-3D transformation.
It should be stressed that the adhesion parameter $`\Phi `$ of a monolayer island should differ significantly from that of a multilayer island with the same base chain length. In our model they are equal. The reason is that the model does not allow the relaxation of the lower chains after formation of new ones on top of them. The above is obviously incorrect, as the formation of a second chain on top of the base one leads to effectively stronger lateral bonds in the bilayer islands. We will try to qualitatively evaluate this problem and to discuss its consequences. As mentioned above, the bilayer island could be treated, to a first approximation, as a monolayer island with a doubled force constant. As a result both the fraction of the strongly displaced end atoms and the corresponding displacements will be larger. Then the adhesion parameter of a bilayer island will be greater than that of a monolayer island with the same width. An evaluation of this effect can be made by using the approach of van der Merwe et al mentioned above, doubling the constant $`\mu `$ in compressed chains. Thus for mono-, bi- and trilayer islands with $`\mu =12,24`$ and 36, one obtains $`\Phi =0.024,0.066`$ and 0.1, respectively ($`\nu =6,f=0.05,N=21`$). As seen, the effect of the third layer is weaker than that of the second, which is easy to understand. The formation of each next monolayer will have a smaller effect on the adhesion of the island, and after some thickness the adhesion parameter will not change anymore. Thus the base layer atoms in a coherent multilayer island are more weakly bound to the wetting layer. It follows that, once formed, the bilayer islands stabilize the further growth of the coherent 3D islands.
### C Stability of mono- and multilayer islands
We further compare the energies of mono- and multilayer islands with different thickness. The latter are bounded by $`60^o`$ side walls, as these have the lowest minimum energy as shown above. Fig. (13a) shows the dependence of the energy of compressed monolayer and multilayer islands on the total number of atoms at a comparatively small lattice misfit of 3%. As seen, the monolayer islands are always stable against bilayer and trilayer islands. A 2D-3D transformation is thus not expected, and the film should continue to grow in a layer-by-layer mode coupled with an introduction of MDs at a later stage. The same dependence but at a larger misfit of 5% is demonstrated in Fig. (13b). The monolayer islands become unstable against the bilayer islands beyond a critical island size $`N_{12}`$, the bilayer islands in turn become unstable against the trilayer islands beyond a second critical number $`N_{23}`$, etc. The curve denoted by MD represents the energy of a monolayer chain containing one MD. The latter begins at a large number of atoms ($`N=52`$) because the bonds in the cores of the MDs break up for shorter chains. This is due to the fact that the force exerted on these bonds by the neighboring atoms is greater than the theoretical tensile stress of the film material $`\sigma _{tens}`$, as mentioned above. Curve 1, which represents the energy of the undislocated monolayer chain, is computed for clarity only up to a number of atoms smaller than the number (52) at which the solutions of the dislocated chain appear. The reason is that the values of the energy are very close and the curves are indistinguishable to the eye. The energies of monolayer chains with and without MDs cross each other at about $`N=300`$ (not shown), which means that coherent 3D islands are formed long before the introduction of MDs. Moreover, the dislocated chain with a monolayer height has an energy much higher than the energies of the undislocated multilayer islands. The latter clearly shows that the film "prefers" to grow as coherent 3D islands, in which the gradual decrease of the strain energy overcompensates the surface energy, rather than to introduce MDs in the first monolayer.
Fig. (13) demonstrates the same dependence as in Fig. (13) but for expanded chains. The absolute value of the negative misfit is very large (-10%). At absolute values of the misfit smaller than 5.5% (not shown) the behavior of the energies is the same as in Fig. (13a). The energies of the coherent mono- and multilayer chains again cross each other at some critical numbers of atoms, but the dislocated monolayer chain (denoted by MD) becomes energetically favoured noticeably before the coherent bilayer chain becomes stable. The classical SK growth should thus take place in expanded overlayers.
Fig. (13) shows the misfit dependence of the first critical size $`N_{12}`$ for both positive and negative misfits. As seen, it increases sharply with decreasing misfit, going asymptotically to infinity at some critical misfits denoted by the vertical dashed lines. The existence of a critical positive misfit for coherent SK growth to occur explains why high-mismatch epitaxy is required in order to grow coherent 3D islands. The critical misfit below which the expanded monolayer islands are always stable against multilayer islands is nearly twice as large in absolute value as the same quantity in compressed overlayers. Thus coherent SK growth in expanded overlayers could be observed only at unrealistically large absolute values of the negative misfit.
We conclude that the classical SK growth or a 2D growth will be observed in the thermodynamic limit at small positive misfits and a coherent SK growth at misfits greater than a critical misfit. This result clearly explains why large positive misfit is required for the coherent SK growth to occur. The large positive misfit leads to large atom displacements and in turn to weaker adhesion. The physics is essentially the same as in the case of heteroepitaxial growth of 3D islands directly on top of the surface of the foreign substrate (Volmer-Weber growth).
### D Mechanism of 2D-3D transformation
It is natural to assume that once the monolayer islands become unstable against the bilayer islands ($`N>N_{12}`$), the former should rearrange themselves into bilayer islands. As shown below, the mono-bilayer transformation can be considered as the first step towards building sufficiently high 3D crystallites. The mechanism of the mono-bilayer transformation is easy to predict, bearing in mind that the edge atoms are more weakly bound than the atoms in the middle. The edge atoms can detach and diffuse on top of the monolayer islands, giving rise to clusters of the second layer. We first consider in more detail the transformation of a monolayer island (chain) with a length $`N_o>N_{12}`$ into a bilayer island. To this end we plot the energy $`E(n)`$ of an incomplete bilayer island, which consists of $`N_o-n`$ atoms in the lower layer and $`n`$ atoms in the upper layer, referred to the energy $`E_o`$ of the initial chain consisting of $`N_o`$ atoms, as a function of the number of atoms $`n`$ in the upper layer. This is the curve denoted by 1-2 in Fig. (13). As seen, it displays a maximum at $`n=1`$, after which $`\mathrm{\Delta }E_n=E(n)-E_o`$ decreases up to the complete mono-bilayer transformation, at which $`n=(N_o-1)/2`$.
Curve 1-2 in Fig. (13) has the characteristic behavior of a nucleation process. The cluster at which the maximum of $`\mathrm{\Delta }E`$ is observed can be considered as the critical nucleus of the second layer. As shown in Ref. (), the mono-bilayer transformation is a real nucleation process when the 2+1 heteroepitaxial Volmer-Weber model is considered, in other words, when the 3D islands are formed directly on top of the foreign substrate without the formation of an intermediate wetting layer. The chemical potential of the upper island at the maximum is exactly equal to that of the initial monolayer island, and the supersaturation with which the nucleus of the second layer is in equilibrium is equal to the difference of the energies of desorption of the atoms from the same and the foreign substrate. This is precisely the driving force for the 2D-3D transformation to occur. The 1+1 model is in fact one-dimensional, and the nuclei do not exist in the thermodynamic sense because the length of a row of atoms does not depend on the supersaturation. However, considering our 1+1 model as a cross-section of the real 2+1 case, we can treat the curve 1-2 in Fig. (13) as the size dependence of the free energy for nucleus formation and growth. We would like to emphasize that in the 2+1 case the nucleus does not necessarily consist of one atom. Its size should depend on the lattice misfit and, in the real situation, on the temperature. The curves denoted by 2-3 and 3-4 in Fig. (13) represent the energy changes of bilayer to trilayer islands, and of trilayer to four-layer islands, respectively. As seen, they behave in the same way, and the work for nucleus formation (the respective maxima) decreases with the thickening of the islands. The latter means that the mono-bilayer transformation is the rate-determining process of the total mono-multilayer (2D-3D) transformation.
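Within the frozen-template approximation of Sect. II, the curve 1-2 can be sketched by simple bookkeeping (Python, reusing `relax_chain` from the listing above; the neglect of any further relaxation of the lower chain and the uniform parameters are the approximations already discussed, so the resulting curve is only qualitative):

```python
import numpy as np

def delta_E(n, N0=41, f=0.05, b=1.0, **params):
    """Energy of an incomplete bilayer island (N0 - n atoms below, n on top)
    minus the energy of the initial N0-atom monolayer chain (curve 1-2)."""
    _, _, _, E_mono = relax_chain(N0, f, b, **params)
    X_low, _, _, E_low = relax_chain(N0 - n, f, b, **params)
    b1 = np.mean(np.diff(X_low))  # average trough spacing of the lower chain
    f2 = (b - b1) / b1            # reduced misfit felt by the upper chain
    _, _, _, E_up = relax_chain(n, f2, b, **params)
    return (E_low + E_up) - E_mono

print([round(delta_E(n), 4) for n in range(1, 11)])
# expected: a maximum at small n (the "nucleus"), then a monotonic decrease
```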
## IV Discussion
The Stranski-Krastanov growth mode appears as a result of the interplay of the film-substrate bonding, strain and surface energies. A wetting layer is first formed, on top of which 3D islands nucleate and grow. The 3D islands and the wetting layer necessarily represent different phases. If this were not the case, the growth would continue by 2D layers. Then we can consider, as a useful approximation, the 3D islanding on top of the uniformly strained wetting layer as Volmer-Weber growth. The latter requires the adhesion of the atoms to the substrate to be smaller than the cohesion between the overlayer atoms. In other words, the wetting of the substrate by the overlayer should be incomplete. In the classical SK growth this condition is fulfilled because of the formation of an array of misfit dislocations at the boundary between the islands and the wetting layer. The atoms are displaced from the bottoms of the potential troughs (mostly in the cores of the MDs, see Figs. (13b) and (13c)) and thus are on average more weakly bound to the underlying wetting layer, irrespective of the fact that the chemical bonding is one and the same. As a result the lattice misfit gives rise to an effective adhesion which is weaker than the cohesion of the overlayer atoms. Contrary to the wetting layer, the 3D islands are elastically relaxed and their atom density differs from that of the former. Thus, the wetting layer and the 3D islands really represent different phases separated by a clear interfacial boundary whose energy is in fact the energy of the array of MDs. The physical reason for 3D islanding in the coherent SK growth is practically the same. In this case the atoms near the island edges are displaced from the bottoms of the corresponding potential troughs (see Fig. (13a)) and they adhere more weakly to the wetting layer compared with the atoms in the middle. The thicker the islands, the stronger is this tendency. Thus, the average adhesion of the 3D islands to the wetting layer is again weaker than the cohesion in the islands themselves. Then we can treat the coherent SK growth as a Volmer-Weber growth on top of the wetting layer. The main difference is that in Volmer-Weber growth the adhesion parameter $`\Phi `$ is constant, whereas in the coherent SK mode it depends on the island thickness.
The weaker adhesion means in fact an incomplete wetting, which appears as the thermodynamic driving force for the 3D islanding. The smaller the misfit, the smaller are the displacements of the edge atoms and in turn the stronger is the average wetting. The latter leads to the appearance of a critical misfit below which the edge effects do not play a significant role. The average wetting is very strong and the formation of coherent 3D islands becomes thermodynamically unfavored. The film will continue to grow in a 2D mode until the strain is relaxed by introduction of MDs or dislocated 3D islands at a later stage. The existence of a critical misfit for the 2D-3D transformation to occur, both in compressed and expanded overlayers, has been noticed in several studies. Pinczolits et al have found that deposition of PbSe$`_{1-x}`$Te$`_x`$ on PbTe(111) remains purely two-dimensional when the misfit is less than 1.6$`\%`$ in absolute value (Se content $`<30\%`$). Leonard et al have successfully grown quantum dots of In$`_x`$Ga$`_{1-x}`$As on GaAs(001) with $`x=0.5`$ ($`f\approx 3.6\%`$) but 60$`\AA `$-thick 2D quantum wells at $`x=0.17`$ ($`f\approx 1.2\%`$). A critical misfit of 1.4% has been found by Xie et al upon deposition of Si$`_{0.5}`$Ge$`_{0.5}`$ films on relaxed buffer layers of Si$`_x`$Ge$`_{1-x}`$ with varying composition.
The average adhesion (the wetting) depends strongly on the anharmonicity of the interatomic forces. Expanded islands adhere more strongly to the wetting layer, and the critical misfit beyond which coherent 3D islanding is possible is much greater in absolute value compared with that in compressed overlayers. As a result coherent SK growth in expanded films could be expected at very (unrealistically) large absolute values of the negative misfit. The latter, however, depends on the materials parameters (degree of anharmonicity, strength of the chemical bonds, etc.) of the particular system and cannot be completely ruled out. Xie et al studied the deposition of Si$`_{0.5}`$Ge$`_{0.5}`$ films in the whole range from 2% tensile misfit to 2% compressive misfit on relaxed buffer layers of Si$`_x`$Ge$`_{1-x}`$, starting from $`x=0`$ (pure Ge) to $`x=1`$ (pure Si), and found that 3D islands are formed only under compressive misfit larger than 1.4%. Films under tensile strain were thus stable against 3D islanding, in excellent agreement with the predictions of our model.
The weaker average adhesion in compressed overlayers leads to another effect at misfits greater than the critical one. At some critical number of atoms $`N_{12}`$ the monolayer islands become unstable against the bilayer islands. The latter become in turn unstable against trilayer islands beyond another critical number $`N_{23}`$, and so on. As a result the complete 2D-3D transformation should take place during growth by consecutive transformations of mono- to bilayer islands, bi- to trilayer islands, etc. Owing to the stronger interatomic repulsive forces, the edge atoms in the compressed monolayer islands adhere more weakly to the wetting layer compared with the edge atoms in expanded islands. This results in an easier transformation of mono- to bilayer islands, which is the first step of the complete 2D-3D transformation. The latter also involves kinetics, in the sense that the edge atoms have to detach and form the upper layers. However, it is not the strain at the edges (which is nearly zero) that is responsible for the easier detachment of the edge atoms, as suggested by Kandel and Kaxiras, but the weaker adhesion. The 2D-3D transformation is hindered in expanded islands, as the edge atoms adhere more strongly to the wetting layer. On the other hand, the existence of such critical sizes, which determine the intervals of stability of islands with different thickness, could be considered as the thermodynamic reason for the narrow size distribution of the 3D islands which is observed in experiment. The latter does not mean that this is the only reason. Elastic interactions between islands and growth kinetics can have a greater effect than the thermodynamics. The 2D-3D transformation takes place by consecutive nucleation events, each successive one requiring a lower energetic barrier to be overcome. Thus, the mono-bilayer transformation appears as the rate-determining process.
Let us consider all the above from another point of view. The results displayed in Fig. (13b) show that the equilibrium-shape aspect ratio increases gradually with the island volume. The consecutive stability of islands with increasing thickness reflects the fact that the increase of the pyramid height is discrete (layer after layer) whereas the base chain length remains nearly constant. The stronger the adhesion or the smaller the misfit, the wider will be the intervals of stability of islands with a fixed height, and vice versa. The formation of every new crystal plane on the upper crystal face requires the appearance of a 2D nucleus. As the growing surface is usually very small, the formation of one nucleus is sufficient for the growth of a new crystal plane. Thus we could expect a mononucleus layer-by-layer growth of the pyramids. The latter has been independently established by using a kinetic Monte Carlo method by Khor and Das Sarma. It should be noted that Duport, Priester and Villain established that the monolayer islands are thermodynamically favored up to a critical size beyond which the equilibrium shape becomes nearly a full pyramid. The transition from a monolayer island to a pyramid is of first order and requires the overcoming of an activation barrier which is proportional to $`f^{-4}`$.
It should be stressed that our definition of the critical 2D island size $`N_{12}`$ for the 2D-3D transformation to begin differs from that in the papers of Priester and Lannoo, and of Chen and Washburn. The former authors define the critical size by comparing the energy per atom of monolayer islands with that of fully built 3D pyramids. Chen and Washburn have accepted as critical the size at which the energy of the monolayer islands displays a minimum. They also found that the critical size $`N_c`$ determined by the minimum of the energy increases very steeply with decreasing misfit ($`N_c\propto f^{-6}`$). Although our definition of $`N_c`$ is different, we also observe a very steep misfit dependence (see Fig. (13)).
A rearrangement of monolayer-height (2D) islands into multilayer (3D) islands has been reported by Moison et al, who established that the InAs 3D islands begin to form on GaAs at a coverage of about 1.75 ML but then the latter suddenly decreases to 1.2 ML. This decrease of the coverage in the second monolayer could be interpreted as a rearrangement of an amount of nearly half a monolayer into 3D islands. The same phenomenon has been noticed by Shklyaev, Shibata and Ichikawa in the case of Ge/Si(111). Voigtländer and Zinner noted that Ge 3D islands in Ge/Si(111) epitaxy have been observed at the same locations where 2D islands locally exceeded the critical wetting layer thickness of 2 bilayers.
Contrary to the linear theory of elasticity, the anharmonicity and the non-convexity of the real interatomic potentials lead to different intervals of existence of misfit dislocations in compressed and expanded overlayers. The nonconvexity of the interatomic potential gives rise to the possibility of breaking the expanded bonds in the cores of the MDs in compressed overlayers when the force exerted on them is greater than the theoretical tensile strength of the material. As a result, MDs in compressed overlayers appear only in sufficiently large islands, and small coherent 3D islands can appear before that. On the contrary, this restriction does not exist in expanded overlayers, where the bonds in the cores of the MDs are compressed. The introduction of MDs can thus become energetically favored in short chains (small islands) before the formation of coherent 3D islands, and the classical SK growth should be observed in most cases.
It should be noted that the results presented above depend on the approximations of the model, particularly when the energy of the multilayer islands is computed. Allowing a strain relaxation of the lower layers when new layers are formed on top of them could lead to an earlier introduction of MDs but also to a weaker adhesion of the 3D islands to the wetting layer. Thus, applying a more refined approach which accounts for the strain relaxation in the islands, as well as in the wetting layer, will allow us to study the transition from the coherent to the classical (dislocated) Stranski-Krastanov growth mode.
In summary, accounting for the anharmonicity and the non-convexity of the real interatomic potentials in a model in 1+1 dimensions, we have shown that coherent 3D islands can be formed on the wetting layer in the SK mode predominantly in compressed overlayers at sufficiently large values of the misfit. Coherent 3D islanding in expanded overlayers could be expected as an exception rather than as a rule. Monolayer-height islands with a critical size appear as necessary precursors of the 3D islands. The latter explains the narrow size distribution of the 3D islands from a thermodynamic point of view.
###### Acknowledgements.
One of the authors (EK) is financially supported by the Spanish DGES under Contract PB97-0076 and partly by Contract F608 of the Bulgarian National Fund for Scientific Research. IM gratefully acknowledges the inspiration and the fruitful discussions with R. Kaischew. The authors greatly benefitted from the remarks and criticism of Jacques Villain (Grenoble).
no-problem/0001/physics0001025.html | ar5iv | text | # Forecast and event control On what is and what cannot be possible Part I: Classical case
## Principle of self-consistency
An irreducible, atomic physical phenomenon manifests itself as a click of some detector. There can either be a click or there can be no click. This yes-no scheme is experimental physics in a nutshell (at least according to a theoretician). From that kind of elementary observation, all of our physical evidence is accumulated.
Such irreversibly observed events (whatever the relevance or meaning of those terms may be) are subject to the primary condition of consistency or self-consistency: "Any particular irreversibly observed event either happens or does not happen, but it cannot both happen and not happen."
Indeed, so trivial seems the requirement of consistency that David Hilbert polemicised against "another author" with the following words: "…for me, the opinion that the \[\[physical\]\] facts and events themselves can be contradictory is a good example of thoughtlessness."
Just as in mathematics, inconsistency, i.e., the coexistence of truth and falseness of propositions, is a fatal property of any physical theory. Nevertheless, in a certain very precise sense, quantum mechanics incorporates inconsistencies in a very subtle way, which assures overall consistency. For instance, a particle wave function or quantum state is said to "pass" a double slit through both slits at once, which is classically impossible. (Such considerations may, however, be considered as mere trickery quantum talk, devoid of any operational meaning.) Yet, neither a particle wave function nor quantum states are directly associable with any sort of irreversibly observed event of physical reality. We shall come back to a particular quantum case in the second part of this investigation.
And just as in mathematics and in formal logic, it can be argued that too strong capacities of intrinsic event forecast and intrinsic event control render the system overall inconsistent. This fact may indeed be considered as one decisive feature in finite deterministic ("algorithmic") models. It manifests itself already in the early stages of Cantorian set theory: any claim that it is possible to enumerate the real numbers leads, via the diagonalization method, to an outright contradiction. The only consistent alternative is the acceptance that no such capacity of enumeration exists. Gödel's incompleteness theorem states that any formal system rich enough to include arithmetic and elementary logic could not be both consistent and complete. Turing's theorem on the recursive unsolvability of the halting problem, as well as Chaitin's $`\mathrm{\Omega }`$ numbers, are formalizations of related limitations in formal logic, the computer sciences and mathematics.
In what follows we shall proceed along very similar lines. We shall first argue that any capacity of total forecast or event control, even in a totally deterministic environment, contradicts the (idealistic) idea that decisions between alternatives are possible; stated differently, that there is free will. Then we shall proceed with possibilities of forecast and event control which are consistent both with free will and the known laws of physics.
It is also clear that some form of forecast and event control is evidently possible; indeed, that is one of the main achievements of contemporary natural science, and we make everyday use of it, say, by switching on the light. These capacities derived from the standard natural sciences are characterized by a high chance of reproducibility, and therefore do not depend on single events.
In what follows, we shall concentrate on very general bounds of capacities of forecast and event control; bounds which are imposed upon them by the requirement of consistency. These considerations should be fairly general and do not depend on any particular physical model. They are valid for all conceivable forms of physical theories; classical, quantum and forthcoming alike.
## Strong forecasting
Let us consider forecasting the future first. Even if physical phenomena occur deterministically and can be accounted for ("computed") on a higher level of abstraction, from within the system such a complete description may not be of much practical, operational use.
Indeed, suppose there exists free will. Suppose further that an agent could predict all future events, without exceptions. We shall call this the strong form of forecasting. In this case, the agent could freely decide to counteract in such a way as to invalidate that prediction. Hence, in order to avoid inconsistencies and paradoxes, either free will has to be abandoned, or it has to be accepted that complete prediction is impossible.
Another possibility would be to consider strong forms of forecasting which are, however, not utilized to alter the system. Effectively, this results in the abandonment of free will, amounting to an extrinsic, detached viewpoint. After all, what is knowledge and what is it good for if it cannot be applied and put to use?
It should be mentioned that the above argument is of an ancient type. As has already been mentioned, it has been formalized recently in set theory, formal logic and recursive function theory, where it is called the "diagonalization method."
In doing this, we are inspired by the recent advances in the foundations of quantum (information) theory. There, due to complementarity and the impossibility of cloning generic states, single events may have important meanings to some observers, although they make no sense at all to other observers. One example of this is quantum cryptography. Many of these events are stochastic and are postulated to satisfy all conceivable statistical laws (the correlations are nonclassical, though). In such frameworks, high degrees of reproducibility cannot be guaranteed, although single events may carry valuable information, which can even be distilled and purified.
## Strong event control
A very similar argument holds for event control and the production of "miracles". Suppose there exists free will. Suppose further that an agent could entirely control the future. We shall call this the strong form of event control. Then this observer could freely decide to invalidate the laws of physics. In order to avoid a paradox, either free will or some physical laws would have to be abandoned, or it has to be accepted that complete event control is impossible.
## Weak forecast and event control
From what has already been said, it should be clear that it is reasonable to assume that forecast and event control should be possible only if this capacity cannot be associated with any paradox or contradiction.
Thus the requirement of consistency of the phenomena seems to impose rather stringent conditions on forecasting and event control. Similar ideas have already been discussed in the context of time paradoxes in relativity theory (cf. \[15, p. 272\], "The only solutions to the laws of physics that can occur locally … are those which are globally self-consistent").
There is, however, a possibility that the forecast and control of future events is conceivable for singular events within the statistical bounds. Such occurrences may be "singular miracles" which are well accountable within the known laws of physics. They will be called weak forms of forecasting and event control.
It may be argued that, in order to obey overall consistency, such a framework should not be extendable to any forms of strong forecast or event control, because, as has been argued before, this could either violate global consistency criteria or would make necessary a revision of the known laws of physics.
The relevant laws of statistics (e.g., all recursively enumerable ones) impose rather lax constraints, especially on finite sequences, and do not exclude local, singular, improbable events. For example, a binary sequence such as $`11111111111111111111111111111111`$ is just as probable as the sequences $`11100101110101000111000011010101`$ and $`01010101010101010101010101010101`$, and its occurrence in a test is equally likely, although the "meaning" an observer could ascribe to it is rather different. These sequences may be embedded in and be part of much longer stochastic sequences. If short finite regular (or "meaningful") sequences are padded into long irregular ("meaningless") ones, those sequences become statistically indistinguishable, for all practical purposes, from the previous sequences. Of course, the "meaning" of any such sequence may vary with different observers. Some of them may be able to decipher a sequence; others may not.
It is quite evident that per definition any finite regularity in an otherwise stochastic environment should exclude the type of high reproducibility which one has gotten used to in the natural sciences. Quite the contrary: single "meaningful" events which are hardly reproducible might indicate a new category of phenomena which is dual to the usual "lawful" and highly predictable ones.
Just as it is perfectly all right to consider the statement "This statement is true" to be true, it may be perfectly reasonable to speculate that certain events are forecasted and controlled within the domain of statistical laws. But in order to stay within the statistical laws, any such method must not be guaranteed to work at all times.
To put it pointedly: it may be perfectly reasonable to become rich, say, by singular forecasts of stock and futures values or in horse races, but such an ability must necessarily be non-extendible, irreproducible and secretive; at least to such an extent that no guarantee of an overall strategy and regularity can be derived from it.
The associated weak forms of forecasting and event control are thus beyond any global statistical significance. Their importance and meaning seem to lie mainly on a very subjective level of singular events. This comes close to one aspect of what Jung imagined as the principle of "synchronicity", and is dual to the more reproducible forms one is usually accustomed to.
## Against the odds
This final section reviews a couple of experiments which suggest themselves in the context of weak forecast and event control. All are based on the observation of whether or not an agent is capable of correctly forecasting or controlling future events such as, say, the tossing of a fair coin.
In the first run of the experiment, no consequence is derived from the agent's capacity apart from the mere recording of the data.
The second run of the experiment is like the first run, but the meaning of the forecasts or controlled events is different. They are taken as outcomes of, say, gambling against other individuals (i) with or (ii) without similar capacities, or against (iii) an anonymous "mechanical" agent such as a casino or a stock exchange.
As a variant of this experiment, the partners or adversaries of the agent are informed about the agent's intentions.
In the third run of experiments, the experimenter attempts to counteract the agent's capacity. Let us assume the experimenter has total control over the event. If the agent predicts or attempts to bring about a certain future event, the experimenter causes the event not to happen, and so on.
It might be interesting to record just how much the agent's capacity is changed by the setup. Such an expectation might be defined from a dichotomic observable
$$e(A,i)=\{\begin{array}{cc}+1& \mathrm{correct}\mathrm{guess}\hfill \\ -1& \mathrm{incorrect}\mathrm{guess}\hfill \end{array}$$
where $`i`$ stands for the $`i`$th experiment and $`A`$ stands for the agent. An expectation function can then be defined as usual by the average over $`N`$ experiments; i.e.,
$$E(A)=\frac{1}{N}\underset{i=1}{\overset{N}{}}e(A,i).$$
From the first to the second type of experiment it should become more and more unlikely that the agent operates correctly, since his performance is leveled against other agents with more or less the same capacities. The third type of experiment should produce a total anticorrelation. Formally, this should result in a decrease of $`E`$ when compared to the first round of experiments.
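As a toy illustration (Python; the chance-level null model, the sample size and the seed are our own choices), $`E(A)`$ for an agent guessing at random should vanish to within $`O(1/\sqrt{N})`$, while a weak but genuine capacity would show up as a persistent offset:

```python
import numpy as np

rng = np.random.default_rng(42)

def expectation(N=100_000, p_correct=0.5):
    """E(A) = (1/N) * sum_i e(A,i), with e = +1 for a correct guess and
    e = -1 otherwise; p_correct = 0.5 is pure chance (no capacity)."""
    e = np.where(rng.random(N) < p_correct, 1, -1)
    return e.mean()

print(expectation())                 # ~0, within about +-2/sqrt(N)
print(expectation(p_correct=0.55))  # ~0.10: a weak but persistent capacity
```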
Another, rather subtle, deviation from the probabilistic laws may be observed if correlated events are considered. Just as in the case of quantum entanglement, it may happen that the individual components of correlated systems behave totally at random and exhibit more disorder than the system as a whole.
If once again one assumes two dichotomic observables $`e(A,i),e(B,i)`$ of two correlated subsystems, then the correlation function
$$C(A,B)=\frac{1}{N}\underset{i=1}{\overset{N}{}}e(A,i)e(B,i)$$
and the associated probabilities may give rise to violations of the Boole-Bell inequalities (Boole's "conditions of possible \[\[classical\]\] experience") and may even exceed the Tsirelson bounds for "conditions of possible \[\[quantum\]\] experience." There, the agent should concentrate on influencing the coincidences of the events rather than the single individual events. In such a case, the individual observables may behave perfectly randomly, while the associated correlations might be nonclassical and even stronger-than-quantum, and might give rise to highly nonlocal phenomena. As long as the individual events cannot be controlled, this need not even violate Einstein causality. (But even then, consistent scenarios remain.)
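The bookkeeping for such a test is equally simple. In the sketch below (Python; the uncorrelated toy data are our own and serve only to fix conventions), the four correlators entering the simplest Boole-Bell (CHSH-type) combination are estimated from one run per setting pair; classically $`|S|\le 2`$, quantum mechanically $`|S|\le 2\sqrt{2}`$ (Tsirelson), and $`|S|\le 4`$ in general:

```python
import numpy as np

rng = np.random.default_rng(7)

def correlation(eA, eB):
    """C(A,B) = (1/N) * sum_i e(A,i) * e(B,i) for +-1 outcome arrays."""
    return float(np.mean(eA * eB))

# One run of N joint trials per setting pair (a, b); here both outcomes are
# locally random and uncorrelated, so S should vanish within ~1/sqrt(N) noise.
N = 100_000
C = {}
for pair in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    eA = rng.choice([-1, 1], size=N)
    eB = rng.choice([-1, 1], size=N)
    C[pair] = correlation(eA, eB)

S = C[(1, 1)] + C[(1, 2)] + C[(2, 1)] - C[(2, 2)]
print(S)  # |S| <= 2 classically, <= 2*sqrt(2) quantum, <= 4 no-signalling
```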
In summary it can be stated that, although total forecasting and event control are incompatible with free will, more subtle forms of these capacities remain conceivable even beyond the present laws of physics, at least as long as their effects upon the "fabric of phenomena" are consistent. These capacities are characterized by singular events and not by the reproducible patterns which are often encountered under the known laws of physics. Whether or not such capacities exist at all remains an open question. Nevertheless, despite the elusiveness of the phenomenology involved, it appears not unreasonable that the hypothesis might be testable, operationalizable and even put to use in certain contexts.
# E-print hep-ph/0001210 Preprint YARU-HE-00/02 Mass Shift of Axion in Magnetic Field
## Acknowledgments
This work was partially supported by INTAS under grant No. 96-0659 and by the Russian Foundation for Basic Research under grant No. 98-02-16694.
# Spectral Evolution of the Peculiar Ic Supernova 1998bw
## 1 Introduction
### 1.1 SN/GRB association
The suggestion that type Ic supernova (SN) 1998bw is the optical counterpart of $`\gamma `$-ray burst (GRB) 980425 has forced us to rethink the mechanics of both SNe and GRBs. The association of SN 1998bw and GRB 980425 is supported by the extremely low probability of a chance coincidence \[Galama et al. 1998\] and particularly by the peculiar observational characteristics of SN 1998bw. Patat & Piemonte \[Patat & Piemonte 1998a\] have classified SN 1998bw as a Type Ic supernova, since spectral lines due to helium, silicon and hydrogen are weak or absent in the early spectra. However, at $`M_B=-18.88`$ at maximum \[Galama et al. 1998\], this object was three times brighter than the average SN Ic, and early spectra show extremely broad lines and unusual line ratios \[Lidman et al. 1998\]. SN 1998bw rivals the brightest radio supernovae yet observed and radio observations indicate that the shock of the explosion was relativistic \[Kulkarni et al. 1998, Wieringa et al. 1998\]. Assuming the association and the low redshift \[Tinney et al. 1998\], the GRB was also unusual: at least 4 orders of magnitude fainter than other GRBs \[Galama et al. 1998\].
To what extent SNe are associated with GRBs is unclear due to the low numbers of well-observed Ib/c SNe and the large positional error of most GRBs. Wang & Wheeler \[Wang & Wheeler 1998\] argue that all Ib/c could produce GRBs, but due to beaming we would see only a fraction. Kippen et al. \[Kippen et al. 1998\] found no evidence for an association between SNe and strong GRBs, and Bloom et al. \[Bloom et al. 1998\] model the radio signature and suggest that 1% of GRBs are produced by SNe. If SN 1998bw is a member of a previously unobserved subclass of GRB progenitors \[Iwamoto et al. 1998\], we have a rare opportunity to test and refine our understanding of SNe and GRBs.
### 1.2 Type Ib/c supernovae
Supernovae of type Ib/c are identified by their early optical spectra, which lack the deep Si II absorption feature seen at 6150 Å in Ia spectra and the prominent hydrogen lines of Type II SNe. Unlike Ia SNe, Ib/c objects are radio emitters and are typically fainter at maximum by $`M_B\sim 1.5`$ magnitudes \[Filippenko 1997\]. The parent galaxies of these events (Sbc or later) and heterogeneity of this class suggest that Ib/c SNe are powered by the same mechanism as Type II SNe: core collapse of a massive star. The progenitors of Ib/c SNe have been modelled as Wolf-Rayet stars which have lost their hydrogen envelopes either via close binary interaction (Nomoto, Iwamoto & Suzuki 1995), through a strong stellar wind (Woosley, Langer & Weaver 1993) or a combination of the two mechanisms (Woosley, Langer & Weaver 1995).
This class is further divided into Ib SNe with strong helium lines in the early spectra, and Ic where helium lines are weak or absent. Opinion varies as to the relationship between Ic and helium-rich Ib SNe and whether there is a smooth or bimodal variation of He I strengths \[Filippenko et al. 1995, Clocchiatti et al. 1996\]. Ic progenitors may have lost their helium envelopes in another stage of mass loss, leaving a bare CO star at core collapse \[Harkness et al. 1987, Nomoto et al. 1995\] or the helium envelope may be poorly mixed with the Ni<sup>56</sup> \[Woosley & Eastman 1997\]. Piemonte \[Piemonte 2000\] stresses the spectral and photometric variation for both Ib and Ic SNe. These variations indicate a wide range of ejecta mass, which would depend on the main sequence mass of the progenitor and its mass loss history as well as secondary parameters such as metallicity and convection \[Woosley et al. 1995\]. In general, ejecta mass is expected to be small compared to other SN types.
### 1.3 SN 1998bw
The models for SN 1998bw presented to date tend to fall into two classes: an intrinsically energetic event or hypernova \[Iwamoto et al. 1998, Woosley et al. 1999\] with a massive progenitor star, and more normal SNe artificially brightened by beaming \[Wang & Wheeler 1998\]. Radio observations suggest that material has been ejected irregularly by a central engine \[Li & Chevalier 1999\]. All models agree in requiring some form of non-symmetric geometry, possibly produced by an asymmetric explosion.
With such diverse interpretations, it is important that all available data are used to provide observational limits for the models. In this paper we present the results of a cooperative spectral monitoring campaign carried out at Siding Spring, Australia, on the AAT, UKST and SSO 2.3-m telescope between May and November 1998 and compare the spectral evolution and velocity shifts of SN 1998bw with other well-observed Ic SNe.
## 2 Observations
Spectral observations of SN 1998bw were made in Director's override time and service time on the Anglo-Australian Telescope (AAT) and UK Schmidt Telescope (UKST), and with the cooperation of scheduled observers on the Siding Spring Observatories (SSO) 2.3-m telescope. Unfortunately, the site experienced the poorest observing statistics of the decade and $`>60`$% of allocated time was lost. We obtained useful spectra at 10 epochs which span 7 to 94 days past $`V_{max}`$ (or 23 to 110 days past the GRB event). Observational details are given in Table 1.
Observations were made using the scheduled instruments for the telescopes which included both conventional long-slit spectrographs (DBS, RGO) and fibre-fed spectrographs (2dF, FLAIR). The Nasmyth B Imager at the 2.3-m was used as a long-slit spectrograph by inserting blue and red grisms and an order-sorting filter. Long-slit data were processed in the usual way using the FIGARO data reduction package \[Shortridge et al. 1997\]. A first order correction has been made for background emission from the galaxy, but narrow emission from underlying H ii regions remains in the spectra. At least two wavelength regions were observed on each of the long-slit data epochs, and spectra were combined by normalising to match the overlap regions.
Data taken on FLAIR were processed using the IRAF data reduction package as outlined in Drinkwater & Holman \[Drinkwater & Holman 1996\]. 2dF observations were extracted using the S-DIST and C-DIST utilities from FIGARO. Sky emission was removed using neighbouring fibres. It was not possible to correct the 2dF or FLAIR data for background galaxy emission, but the level of contamination was low during this period with the exception of the narrow nebular lines.
Telluric absorption lines have been removed from the data using observations of low-metallicity stars observed on days 7, 11 and 19, scaled where necessary to match the strongest O<sub>2</sub> features. Considering the poor conditions it is probably not surprising that telluric line ratios were variable and weak residuals can be seen in the spectra, particularly around 7500 to 8100 Å and at $`\lambda >9500`$ Å.
Long-slit observations were corrected for instrumental response using a contemporaneous observation of a spectrophotometric standard. Fibre observations did not include accompanying standards, so a linear interpolation between bracketing long slit observations were used to correct these data. As these observations were made in non-photometric conditions and through narrow fibres or slits ($``$2 arcsec), correction to absolute flux was made by scaling to match the $`V`$-band light curve \[Galama et al. 1998, McKenzie & Schaefer 1999\] (the $`V`$-band was not covered on day 35 so the $`R`$-band was used). The success of this method was checked by comparing the photometry in the $`B`$ and $`R`$ bands with spectrophotometry from the corrected spectra, measured using filter responses from Bessell (private communication). Both $`B`$ and $`R`$ bands resulted in a mean ratio of 0.95, with $`\sigma `$ = 6% (Figure 1). Errors at the edge of the spectra are likely to be larger and in general line fluxes from this data set should be regarded with caution. For the purpose of comparison, spectra shown in this paper have been normalised by the $`V`$-band photometry, and shifted to rest wavelengths using $`vz=2580`$ km s<sup>-1</sup> as derived from the narrow H ii region emission. Both the fluxed and normalised spectra are available via anonymous ftp at ftp.aao.gov.au/pub/local/ras/98bw.
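For reference, the two reduction steps just described (scaling to broad-band photometry and shifting to rest wavelengths) amount to the following operations; this is a schematic sketch with placeholder array names and magnitudes, not the actual reduction code used.

```python
import numpy as np

C_KMS = 2.99792458e5   # speed of light, km/s
VZ_KMS = 2580.0        # recession velocity from the narrow H II region lines

def to_rest_wavelength(wavelength_obs):
    """Shift observed wavelengths (in Angstrom) to the supernova rest
    frame in the low-redshift approximation, lambda_rest = lambda_obs/(1+v/c)."""
    return np.asarray(wavelength_obs) / (1.0 + VZ_KMS / C_KMS)

def scale_to_photometry(flux, mag_synth, mag_phot):
    """Rescale a spectrum so that its synthetic V magnitude mag_synth
    matches the broad-band photometric magnitude mag_phot."""
    return np.asarray(flux) * 10.0 ** (-0.4 * (mag_phot - mag_synth))

# Example: an observed feature at 5941 A maps to ~5890 A (Na I D) at rest.
print(to_rest_wavelength([5941.0]))
```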
## 3 Spectral Evolution
Early spectral evolution of SN 1998bw has been presented in Iwamoto et al. \[Iwamoto et al. 1998\] (days -9 to +11). In Figure 2 the spectral evolution of SN 1998bw is shown between days 7 and 94. During this period, spectra are dominated by a strong continuum peaking around 5400 Å, with a small number of broad features which become increasingly dominant relative to the continuum. Line widths remain approximately stable. Our observation on day 94 agrees qualitatively with the description by Patat & Piemonte \[Patat & Piemonte 1998b\] of the spectrum on day 123.
The breadth of the spectral features and the uncertainty in continuum level hampers detailed analysis. In this paper we present preliminary results by looking at the most notable features of the spectrum in comparison to classic Ic SNe SN 1983V \[Clocchiatti et al. 1997\], SN 1987M \[Filippenko, Porter & Sargent 1990\] and SN 1994I \[Clocchiatti et al. 1996, Filippenko et al. 1995\], and to SN 1997ef which has unusually broad lines and bears the closest resemblance to SN 1998bw \[Iwamoto et al. 2000\].
### 3.1 Days 7-19
Eight of our spectra fall within the photospheric period, during which the B-band and V-band light curves were declining steeply prior to the radioactive tail. Spectra show little change over the period in the range 4000 to 7000 Å, as seen in Figure 3 (left panel), where spectra have been binned in wavelength and time to improve the signal-to-noise ratio. The height of the continuum is poorly defined as overlapping P-Cygni profiles result in line blanketing over much of the spectral range; apparent peaks are merely regions of relatively low opacity \[Iwamoto et al. 1998\]. In the right panel of Figure 3 spectra of other supernovae are shown at similar epochs. Qualitatively SN 1998bw is very different from the other supernovae, with merged absorption blueward and redward of 5300 Å. SN 1998bw lacks the spectral detail and the overall effect is that of heavy smoothing. However, detailed comparison between the SNe spectra shows that the same bands can be identified, and most differences in SN 1998bw can be attributed to the large line widths.
In Table 2, the wavelengths of the observed minima in SN 1998bw (day 19) are compared with those measured from other SNe, and converted to relative velocities (in units of $`10^3`$ km s<sup>-1</sup>) for suggested line identifications (see below). Estimates of measurement errors are shown, reflecting how distinct the minima are for each line. SN 1998bw blueshifts are $`\sim `$30% higher than SN 1983V and $`\sim `$50% higher than SN 1994I, but are comparable with SN 1987M.
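The conversion from a measured absorption minimum to a line-of-sight velocity used in Table 2 is, in the non-relativistic approximation, simply the Doppler formula; the sketch below illustrates it with round numbers rather than the measured values.

```python
C_KMS = 2.99792458e5  # speed of light, km/s

def line_velocity(lambda_min, lambda_rest):
    """Velocity implied by an absorption minimum at lambda_min for a line
    of rest wavelength lambda_rest (negative values are blueshifts)."""
    return C_KMS * (lambda_min - lambda_rest) / lambda_rest

# E.g. a minimum near 8100 A, attributed to Ca II with the strongest
# triplet component at 8542 A taken as the rest wavelength (an assumption),
# corresponds to a blueshift of about -15.5 x 10^3 km/s.
print(f"{line_velocity(8100.0, 8542.0) / 1e3:+.1f} x 10^3 km/s")
```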
The absorption band between 4100 and 4300 Å is resolved into two minima in SN 1998bw by days 17-19. The two minima are also seen in SN 1983V and SN 1994I. This absorption band is present but not resolved in SN 1997ef. In SN 1987M the band extends blueward compared to the other SNe. The absorption band is attributed to Fe ii blends, centred at rest wavelengths of 4274 Å and 4555 Å \[Clocchiatti et al. 1997\]. An alternative identification for the second minimum is Mg ii $`\lambda `$4481 \[Filippenko 1997\], which gives velocities which are more in agreement with other features for all but SN 1987M. Other species identified in this region are Ti ii and C ii \[Baron et al. 1996\] and Cr ii \[Iwamoto et al. 1998\].
Fe ii is also the usual identification for the absorption band between 4700 Å and 5200 Å. In SN 1987M the feature is resolved into three minima which match well with Fe ii $`\lambda \lambda `$4923, 5018, 5169 (multiplet 42). In SN 1983V and SN 1994I only two minima are resolved, identified as 5018 Å and 5169 Å \[Clocchiatti et al. 1997, Iwamoto et al. 1998, Iwamoto et al. 2000\]. However, we find that the identification of the bluer line as 4923 Å results in better agreement with other features. In SN 1998bw the band is only marginally resolved on day 19, with 4923 Å unusually dominant. In SN 1997ef the line ratios are more typical, and the band is resolved into two minima \[Iwamoto et al. 2000\].
In Ic SNe spectra, Na i $`\lambda \lambda `$5890, 5896 absorption typically strengthens later than the Fe ii features, becoming dominant around day 30 and fading by day 100. The line emerges more slowly in SN 1998bw compared to SN 1983V, SN 1987M and SN 1994I, but is comparable to SN 1997ef. The Na i blueshift is similar to Fe ii blueshifts for all four SNe in Table 2. The presence of He i $`\lambda `$5876 discussed for other Ic SNe \[Clocchiatti et al. 1996, Clocchiatti et al. 1997\] cannot easily be ascertained for SN 1998bw since the weaker line would be severely blended with Na i.
The Si ii $`\lambda \lambda `$6347, 6371 absorption shows the largest shift in velocity during this period, with minima velocities of $`-11600`$ km s<sup>-1</sup> on day 11 and $`-7900`$ km s<sup>-1</sup> on day 19. In Table 2, the Si ii line has the lowest blueshift of the measured lines for all SNe. The line profile of the Si ii feature is similar to that of SN 1983V, suggesting that there may be contribution from a second line resolved in SN 1983V, SN 1994I and SN 1997ef and identified by Clocchiatti et al. \[Clocchiatti et al. 1997\] as O i $`\lambda `$6158.
### 3.2 Days 19-94
Late-time spectral evolution of SN 1998bw is shown in Figure 4 (left panel). During this period the light curve is decaying linearly via radioactive decay \[McKenzie & Schaefer 1999\], and as expected we see a transition from an absorption-dominated photospheric spectrum to an emission-dominated nebular spectrum, though on day 94 the transition is still not complete. Early nebular spectra of SN 1994I and SN 1987M are shown in comparison to SN 1998bw in Figure 4 (right panel). SN 1998bw evolves slowly compared to the other two SNe: the day 94 spectrum bears a closer resemblance to SN 1987M on day 62 than day 96, and is very similar to SN 1994I on day 56. In order to separate the new emission from the other features in SN 1998bw, the day 45 spectrum has been subtracted from the day 94 spectrum, after scaling by the V-band photometry. Likewise, day 36 has been subtracted from day 56 for SN 1994I. The results are compared in Figure 5.
Velocity shifts of the main features are given in Table 3. \[O i\] $`\lambda \lambda `$6300, 6363 is blueshifted in SN 1987M and SN 1994I. In SN 1998bw on day 94 the line profile is symmetric, but in the residual emission it is also blueshifted by $`\sim 500`$ km s<sup>-1</sup>. Ca ii\] $`\lambda \lambda `$7291, 7323 has a similar blueshift in SN 1987M and SN 1994I. In SN 1998bw the blueshift is significantly greater, but in the residual spectrum the profile is similar to SN 1994I, so apparent differences are probably due to contribution from the persisting photospheric features. The Ca ii\] to \[O i\] ratio is greater in SN 1987M than in SN 1994I and SN 1998bw, which has been attributed to a difference in the relative abundances of calcium and oxygen \[Filippenko et al. 1995\]. While Mg i\] $`\lambda `$4571 is the usual identification for the emission peak at 4500 Å \[Filippenko 1997\], an alternative identification is given by Patat et al. \[Patat & Piemonte 1998b\] as Fe ii $`\lambda `$4555, and both transitions give an adequate fit to the line with a symmetric profile. We adopt the Mg i\] identification for this paper.
Approximate line widths have been measured using ABLINE in FIGARO (Table 3). Line widths are similar for the Mg i\], \[O i\] and Ca ii\] emission for each supernova. The mean width of these lines in SN 1998bw on day 94 is $`11600\pm 400`$ km s<sup>-1</sup> which is $`\sim `$45% broader than SN 1987M and SN 1994I. Na i emerges as a P-Cygni profile between days 19 and 94, evolving more slowly in equivalent width and in emission to absorption ratio than SN 1987M and SN 1994I. A noticeable difference between SN 1998bw and SN 1987M is the width of the absorption component of Na i, which is only half the width in SN 1987M, and considerably narrower than the emission lines.
The peak at 5200 Å has been tentatively identified as Fe ii $`\lambda `$5215 \[Patat & Piemonte 1998b\]. This transition is typically seen in older Ia SNe, but not in Ic SNe, and Patat and Piemonte note that its presence would indicate that SN 1998bw was a type Iac SN. However, the feature is also present in SN 1987M and more weakly in SN 1994I during the early nebular phase. It fades relative to Mg i\], \[O i\] and Ca ii\] at later times. It therefore seems reasonable to assume that the presence of Fe ii emission in Ic SNe spectra is typical during the transition from the photospheric to the nebular phase. In the residual spectrum (Figure 5) the feature is weaker relative to the 4500 Å peak and has a markedly different profile. This supports the identification of the 4500 and 5200 Å peaks as transitions of different species, and indicates that the 5200 Å peak is fading and/or is artificially enhanced by a relatively high continuum level.
### 3.3 7000 – 9000 Å
The red spectral region of SN 1998bw is shown in Figure 6 for days 7 and 19. It is this region which differs most markedly from typical Ic SNe such as SN 1987M (shown on day 7). In these SNe, strong absorption features from O i $`\lambda `$7774 and Ca ii $`\lambda \lambda `$8498, 8542, 8662 are well separated by an absorption-free region missing in SN 1998bw. Iwamoto et al. \[Iwamoto et al. 1998\] have successfully modelled this region for day $`-9`$ with a photospheric velocity of 28000 km s<sup>-1</sup>. However, by day $`-1`$ their model predicts that the two features should be resolved, and that by day 7 O i and Ca ii are unblended. Similar evolution is predicted by the direct analysis models of Branch \[Branch 2000\]. In SN 1997ef this behaviour is seen, with blended absorption on day 3 and well separated features on day 30 \[Iwamoto et al. 2000\]. In SN 1998bw, however, there is no sign of significant separation as late as day 19, our last epoch covering this region.
While it is presumably possible to fit this region by increasing the mass of the progenitor, and therefore the line widths of the O i and Ca ii profiles, there are two limitations which need to be considered. The first is that we do not expect to see redshifted absorption from a simple expanding envelope. Under this assumption, O i cannot contribute to the absorption band redward of 7774 Å, irrespective of the line width. The second limitation is that we detect a shallow dip at $`\sim `$8100 Å which we identify as the minimum of the Ca ii absorption. This feature aligns well with the Ca ii $`\lambda \lambda `$3933, 3968 line profile, and the velocity of $`-15300`$ km s<sup>-1</sup> is already higher than blueshifts of other lines at this epoch (Table 3).
In order to reproduce the spectrum on day 19 we would require a highly unusual geometry to produce redshifted absorption from O i, or to produce Ca ii absorption with a second minimum at around $`-24000`$ km s<sup>-1</sup>. An alternative explanation is the presence of a third component absorbing at around 7700 – 8000 Å. Adequate fits can be produced with lines of rest wavelength 8000 – 8350 Å, and candidate species include C ii, C iii and N i. A relatively weak contribution from any of these species would result in the blended spectrum we observe. Modelling is required to further investigate this region, and to determine whether unusual abundances, density or temperature distribution can explain the observations.
## 4 Conclusion
During the period between 7 and 94 days after V-band maximum, we have seen that SN 1998bw resembles other Ic SNe sufficiently to support this classification, but has unusually slow spectral evolution. On day 94 we see the emergence of a nebular spectrum, which retains many of the characteristics of the photospheric period. The late onset of the nebular phase, compared to SN 1987M (62 d) and SN 1994I (56 d), is consistent with the ejection of an unusually large mass, as predicted by light-curve models \[Iwamoto et al. 1998, Woosley et al. 1999\].
By day 19, SN 1998bw blueshifts are up to 50% larger than other Ic SNe. However, increased blueshifts alone seem insufficient to explain the unusually smooth and blended spectrum which persists to late times: for instance SN 1998bw on day 19 has similar blueshifts to SN 1987M on day 11, but SN 1987M has well defined spectral features. Emission line widths on day 94 are 45% broader than SN 1994I and SN 1987M and Na i absorption is far broader in SN 1998bw on day 94 than in SN 1987M at similar epochs. The line profiles of SN 1998bw may have disproportionately strong absorption wings; we lack an example of an unblended feature at earlier times for confirmation. Using the standard model for a homologously expanding envelope, absorption lines which are broader but of similar blueshift imply that the absorption region in SN 1998bw spans a larger range of velocity space, at both higher and lower velocities than other Ic SNe, as expected for a massive envelope. A closer inspection of the 7000 – 9000 Å region of SN 1998bw suggests that we are also seeing contribution from enhanced line species. Unusually strong lines from species such as N i, C ii, C iii, Ti ii and Cr ii may help to produce the extensive line blending. If so, this could indicate an overabundance of these elements, or unusual physical properties of the ejecta.
More work is required in establishing line identifications and spectral characteristics, best done using spectral models. Whether or not SN 1998bw was associated with GRB 980425, it is of great interest as an extreme example of Ic SNe. SN 1997ef bears some resemblance to SN 1998bw and may be an intermediate object between SN 1998bw and classical SNe, though it is too early to say whether we are seeing a bimodal or continuous variation in properties. Thanks to the interest inspired by the $`\gamma `$-ray burst, SN 1998bw has been observed extensively and successful modelling of this object is likely to enhance our understanding of this relatively poorly observed class of supernova.
## Acknowledgements
We thank the observers at the AAT, UKST and MSO 2.3-m telescope who kindly donated their time to our project. We thank Alejandro Clocchiatti, Craig Wheeler, Jodie Martin and Paolo Mazzali for providing digital supernova data to compare with our observations, and Peter Meikle, David Branch and Brian Schmidt for helpful discussions.
# Featuring the structure functions geometry
## Introduction
The behavior of the structure functions and their dependence on the Bjorken $`x`$ are among the most actively discussed subjects in unpolarized and polarized deep-inelastic scattering. A particular role here belongs to the small-$`x`$ region, where the asymptotic properties of the strong interactions can be studied. The characteristic point of the low-$`x`$ region is the essentially nonperturbative nature of the underlying dynamics in the whole region of $`Q^2`$. Although the results of perturbative QCD calculations are in good agreement with the latest HERA data, the conceptual feasibility of the perturbative QCD methods in this region has not been justified.
Of course, the shortcomings of various model approaches to the study of this nonperturbative region are also evident. However, one can hope to gain from these models information which cannot be obtained from the perturbative methods (cf. ). Among the possible extensions there could be considerations of the geometrical features of the structure functions, i.e. the dependence of the structure functions on the transverse coordinates or, in other words, on the impact parameter. This dependence would allow one to gain information on the spatial distribution of the partons inside the parent hadron and the spin properties of the nonperturbative intrinsic hadron structure. The geometrical properties of structure functions should play an important role in the analysis of lepton-nucleus deep-inelastic scattering and in hard production in heavy-ion collisions.
## 1 Definition and interpretation of $`b`$-dependent structure functions
In this note we study the $`b`$-dependence of the structure functions along the line used in , i.e. we suppose that the deep-inelastic scattering is determined by the aligned-jet mechanism . There are serious arguments in favor of its leading role and dominance over the other mechanism known as color transparency. The aligned-jet mechanism is an essentially nonperturbative one and allows one to relate structure functions with the discontinuities of the amplitudes of quark-hadron elastic scattering. These relations have the following form
$`q(x)`$ $`=`$ $`{\displaystyle \frac{1}{2}}\text{Im}[F_1(s,t)+F_3(s,t)]|_{t=0},`$
$`\mathrm{\Delta }q(x)`$ $`=`$ $`{\displaystyle \frac{1}{2}}\text{Im}[F_3(s,t)-F_1(s,t)]|_{t=0},`$
$`\delta q(x)`$ $`=`$ $`{\displaystyle \frac{1}{2}}\text{Im}F_2(s,t)|_{t=0}.`$ (1)
The functions $`F_i`$ are helicity amplitudes for the elastic quark-hadron scattering in the standard notations for nucleon-nucleon scattering. We consider the high energy limit, i.e. the region of small $`x`$.
The structure functions obtained according to the above formulas should be multiplied by the factor $`1/Q^2`$, the probability that such an aligned-jet configuration occurs .
The amplitudes $`F_i(s,t)`$ are the corresponding Fourier-Bessel transforms of the functions $`F_i(s,b)`$.
The relations Eqs. (1) will be used as a starting point for the definition of the structure functions which depend on the impact parameter. According to these relations it is natural to give the following operational definition:
$`q(x,b)`$ $`\equiv `$ $`{\displaystyle \frac{1}{2}}\text{Im}[F_1(x,b)+F_3(x,b)],`$
$`\mathrm{\Delta }q(x,b)`$ $`\equiv `$ $`{\displaystyle \frac{1}{2}}\text{Im}[F_3(x,b)-F_1(x,b)],`$
$`\delta q(x,b)`$ $`\equiv `$ $`{\displaystyle \frac{1}{2}}\text{Im}F_2(x,b),`$ (2)
and $`q(x)`$, $`\mathrm{\Delta }q(x)`$ and $`\delta q(x)`$ are the integrals over $`b`$ of the corresponding $`b`$-dependent distributions, i.e.
$$q(x)=\frac{Q^2}{\pi ^2x}\int _0^{\infty }b\,db\,q(x,b),\qquad \mathrm{\Delta }q(x)=\frac{Q^2}{\pi ^2x}\int _0^{\infty }b\,db\,\mathrm{\Delta }q(x,b)$$
(3)
and
$$\delta q(x)=\frac{Q^2}{\pi ^2x}\int _0^{\infty }b\,db\,\delta q(x,b).$$
(4)
The functions $`q(x,b)`$, $`\mathrm{\Delta }q(x,b)`$ and $`\delta q(x,b)`$ also depend on the variable $`Q^2`$ and have simple interpretations; e.g. the function $`q(x,b,Q^2)`$ represents the probability to find a quark $`q`$ in the hadron with a fraction $`x`$ of its longitudinal momentum at the transverse distance
$$b\pm \mathrm{\Delta }b,\qquad \mathrm{\Delta }b\sim 1/Q$$
from the hadron geometrical center. Interpretation of the spin distributions directly follows from their definitions: they are the differences of the probabilities to find quarks in the two spin states with longitudinal or transverse directions of the quark and hadron spins.
It should be noted that unitarity plays a crucial role in the direct probabilistic interpretation of the function $`q(x,b)`$. Indeed, due to unitarity
$$0\le q(x,b)\le 1.$$
(5)
The integral $`q(x)`$ is a quark number density which is not limited by unity and can have an arbitrary nonnegative value. Thus, the given definition of the $`b`$-dependent structure functions is self-consistent.
## 2 Unitarity and structure function geometrical profiles
Unitarity can be fulfilled through the $`U`$-matrix representation for the helicity amplitudes of elastic quark-hadron scattering. In the impact parameter representation the expressions for the helicity amplitudes are the following
$`F_{1,3}(x,b)=U_{1,3}(x,b)/[1-iU_{1,3}(x,b)],`$
$`F_2(x,b)=U_2(x,b)/[1-iU_1(x,b)]^2`$ (6)
Unitarity requires Im$`U_{1,3}(x,b)\ge 0`$. The $`U`$-matrix form of the unitarity representation, contrary to the eikonal one, does not itself generate an essential singularity in the complex $`x`$ plane at $`x\to 0`$, and the implementation of unitarity can be performed easily. Therefore we use this representation and not the method of eikonalization. The model which provides the explicit form of the helicity functions $`U_i(x,b)`$ has been described elsewhere . A hadron consists of the constituent quarks aligned in the longitudinal direction and embedded into the nonperturbative vacuum (condensate). The constituent quark appears as a quasiparticle, i.e. as a current valence quark surrounded by the cloud of quark-antiquark pairs of different flavors. The strong interaction radius of the constituent quark $`Q`$ is determined by its Compton wavelength.
The spin of a constituent quark, e.g. the $`U`$-quark, is given in this approach by the sum:
$$J_U=1/2=S_{u_v}+S_{\{\overline{q}q\}}+L_{\{\overline{q}q\}}=1/2+S_{\{\overline{q}q\}}+L_{\{\overline{q}q\}}.$$
(7)
In the model an exact compensation between the total spin of the quark-antiquark cloud and its angular orbital momenta occurs, i.e.
$$L_{\{\overline{q}q\}}=-S_{\{\overline{q}q\}}.$$
(8)
In this approach, based on an effective Lagrangian, the gluon degrees of freedom are integrated out.
On the grounds of the experimental data for polarized DIS we arrive at the conclusion that a significant part of the spin of the constituent quark is due to the orbital angular momentum of the current quarks inside the constituent one .
The explicit expressions for the helicity functions $`U_i(x,b)`$ at small $`x`$ can be obtained from the corresponding functions $`U_i(s,b)`$ given in by the substitution $`s\to Q^2/x`$, and at small values of $`x`$ they take the form:
$`U_{1,3}(x,b)=U_0(x,b)[1+\beta _{1,3}(Q^2)m_Q\sqrt{x}/Q],`$
$`U_2(x,b)=g_f^2(Q^2){\displaystyle \frac{m_Q^2x}{Q^2}}\mathrm{exp}[-2(\alpha -1)m_Qb/\xi ]U_0(x,b),`$ (9)
where
$$U_0(x,b)=i\stackrel{~}{U}_0(x,b)=i\left[\frac{a(Q^2)Q}{m_Q\sqrt{x}}\right]^{n+1}\mathrm{exp}[-Mb/\xi ].$$
(10)
$`a`$, $`\alpha `$, $`\beta `$, $`g_f`$ and $`\xi `$ are the model parameters, some of which in this particular case of quark-hadron scattering depend on the virtuality $`Q^2`$. The meaning of these parameters (cf. ) is not crucial here; note only that $`m_Q`$ is the average mass of constituent quarks in the quark-hadron system of $`n+1`$ quarks and $`M`$ is their total mass, i.e. $`M=\sum _{i=1}^{n+1}m_i`$. We consider here for simplicity a purely imaginary amplitude. Using the definition of the $`b`$-dependent structure functions given in Sec. 1 and Eqs. (6) we obtain at small $`x`$:
$$q(x,b)=\frac{\stackrel{~}{U}_0(x,b)}{1+\stackrel{~}{U}_0(x,b)},$$
(11)
$$\mathrm{\Delta }q(x,b)=\frac{\beta _{-}(Q^2)m_Q\sqrt{x}}{Q}\frac{\stackrel{~}{U}_0(x,b)}{[1+\stackrel{~}{U}_0(x,b)]^2},$$
(12)
$$\delta q(x,b)=\frac{g_f^2(Q^2)m_Q^2x}{Q^2}\mathrm{exp}\left[-\frac{2(\alpha -1)m_Qb}{\xi }\right]\frac{\stackrel{~}{U}_0(x,b)}{[1+\stackrel{~}{U}_0(x,b)]^2},$$
(13)
where $`\beta _{-}(Q^2)=\beta _3(Q^2)-\beta _1(Q^2)`$. From the above expressions it follows that $`q(x,b)`$ has a central $`b`$-dependence, while $`\mathrm{\Delta }q(x,b)`$ and $`\delta q(x,b)`$ have peripheral profiles. Their qualitative dependence on the impact parameter $`b`$ is depicted in Fig. 1. The peripheral dependence on the impact parameter, according to the relation Eq. (8), is a manifestation of a significant presence of angular orbital momenta in the spin balance of a nucleon.
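A quick numerical check of these profiles can be made by evaluating Eqs. (10)-(13) directly; all parameter values below are illustrative placeholders, not fitted values of the model, chosen only to exhibit the central versus peripheral shapes.

```python
import numpy as np

# Illustrative parameters (masses in GeV, b in GeV^-1); NOT fitted values.
m_Q, M, n = 0.35, 1.4, 3            # quark mass, total mass, n+1 constituents
a, xi, alpha, beta_minus, g_f = 0.1, 2.0, 1.5, 1.0, 1.0
Q2, x = 4.0, 1.0e-3
Q = np.sqrt(Q2)

b = np.linspace(0.0, 25.0, 500)
U0 = (a * Q / (m_Q * np.sqrt(x))) ** (n + 1) * np.exp(-M * b / xi)    # Eq. (10)

q_b  = U0 / (1.0 + U0)                                                 # Eq. (11)
dq_b = (beta_minus * m_Q * np.sqrt(x) / Q) * U0 / (1.0 + U0) ** 2      # Eq. (12)
tq_b = (g_f**2 * m_Q**2 * x / Q2) * np.exp(-2.0 * (alpha - 1.0) * m_Q * b / xi) \
       * U0 / (1.0 + U0) ** 2                                          # Eq. (13)

# q(x,b) peaks at b = 0 (central profile), while the spin distributions
# peak at finite b (peripheral), since U0/(1+U0)^2 vanishes when U0 >> 1.
print("peak positions:", b[np.argmax(q_b)], b[np.argmax(dq_b)], b[np.argmax(tq_b)])
```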
From Eqs. (11)–(13) it follows that factorization of the $`x`$ and $`b`$ dependencies is not allowed by unitarity. However, this result is valid for the small-$`x`$ region only, and approximate factorization is possible in the region of not too small $`x`$, where the account of unitarity reduces to factorization-breaking corrections.
The following relation between the structure functions $`\mathrm{\Delta }q(x,b)`$ and $`\delta q(x,b)`$ can also be inferred from the above formulas
$$\delta q(x,b)=c(Q^2)\frac{\sqrt{x}}{Q}\mathrm{exp}(-\gamma b)\mathrm{\Delta }q(x,b).$$
(14)
Thus, the function $`\delta q(x,b)`$ which describes the transverse spin distribution is suppressed by the factors $`\sqrt{x}`$ and $`\mathrm{exp}(-\gamma b)`$, i.e. it has a more central profile. This suppression also reduces double-spin transverse asymmetries in the central region in Drell-Yan production compared to the corresponding longitudinal asymmetries.
The strange quark structure functions also have a more central $`b`$-dependence than those of the $`u`$ and $`d`$ quarks. The radius of the corresponding quark matter distribution is
$$R_q(x)\sim \frac{1}{M}\mathrm{ln}(Q^2/x)$$
(15)
and the ratio of the radii of the strange quark distribution and the light quark distributions is given by the corresponding constituent quark masses; for the nucleon this ratio would be
$$R_s(x)/R_q(x)\simeq \left(1+\frac{\mathrm{\Delta }m}{4m_Q}\right)^{-1},$$
(16)
where $`\mathrm{\Delta }m=m_S-m_Q`$.
Time reversal invariance of strong interactions allows one to write down relations similar to Eqs. (1) for the fragmentation functions also, and to obtain expressions for the fragmentation functions $`D_q^h(z,b)`$, $`\mathrm{\Delta }D_q^h(z,b)`$, $`\delta D_q^h(z,b)`$ which have just the same dependence on the impact parameter $`b`$ as the corresponding structure functions. The fragmentation function $`D_q^h(z,b,Q^2)`$ is the probability for the fragmentation of a quark $`q`$ at transverse distance $`b\pm \mathrm{\Delta }b`$ ($`\mathrm{\Delta }b\sim 1/Q`$) into a hadron $`h`$ which carries the fraction $`z`$ of the quark momentum. In this case $`b`$ is the transverse distance between the quark $`q`$ and the center of the hadron $`h`$. It is positively defined and due to unitarity obeys the inequality
$$0\le D_q^h(z,b)\le 1$$
(17)
The physical interpretation of the spin-dependent fragmentation functions $`\mathrm{\Delta }D_q^h(z,b)`$ and $`\delta D_q^h(z,b)`$ is similar to that of the corresponding spin structure functions. The peripherality of the spin fragmentation functions can also be considered as a manifestation of the important role of angular orbital momenta.
## Conclusion
It is interesting to note that the spin structure functions have a peripheral dependence on the impact parameter, contrary to the central profile of the unpolarized structure function. In the considered model, where the hadron has an aligned structure, the peripherality of the spin structure functions implies that the main contribution to the spin of the constituent quark is due to the orbital angular momentum. This orbital angular momentum has a nonperturbative origin and does not result from the perturbative QCD evolution. This conclusion provides a clue for a possible solution of the problem of the nucleon spin structure. It is interesting to find out possible experimental signatures of the peripheral geometrical profiles of the spin structure functions and the significant role of the orbital angular momentum. One such indication could be the observation of different spatial distributions of charge and magnetization at Jefferson Lab . It would also be important to have precise data for the strange form factor. This could also be done by analyzing both the spin structure and the angular distribution in exclusive electroproduction, which is worth considering in a separate study.
## Acknowledgements
This work was supported in part by the Russian Foundation for Basic Research under Grant No. 99-02-17995. One of the authors (S.T.) is grateful to Alan Krisch for the warm hospitality at the Spin Physics Center of the University of Michigan where this work was finished.
# Evolution of Cluster Galaxies in Hierarchical Clustering Universes
## 1. Introduction
Rich clusters of galaxies are large laboratories for studying galaxy evolution (Dressler 1984), and their evolution can be followed with samples out to $`z\sim 1`$ (Rosati et al. 1998). It is well established that galaxy populations vary with the density of neighboring galaxies in clusters of galaxies (Dressler 1980) and depend on the distance from the clusters' centers (Whitmore et al. 1993). The increase in the fraction of blue, star-forming cluster galaxies with the redshift (Butcher, Oemler 1978, 1984a, 1984b) has also been well established. It has been suggested that galaxy-galaxy and galaxy-cluster interactions play important roles in these effects; especially, a major merger of galaxies produces elliptical galaxies as merger remnants (Barnes 1989, 1996) and cumulative tidal interactions induce a morphological transformation of spiral galaxies to S0 galaxies.
Since it is generally believed that cold dark matter dominates the mass in the universe, we expect that the formation process of dark matter halos significantly affects the process of the formation and evolution of galaxies. In this paper, we discuss our high-resolution cosmological $`N`$-body simulations which trace the motion of the dark matter particles and can resolve the galaxy-sized dark halos within a high-density environment. We also consider when the above mentioned interactions (i.e., major mergers and tidal interactions) act on the evolution of the cluster galaxies during the formation and evolution of clusters in a hierarchical clustering universe.
We should consider the hydrodynamical processes of the baryonic components in order to follow the evolution of galaxies. However, hydrodynamical simulations, e.g., smoothed particle hydrodynamics (SPH) simulations, need much more CPU time than collisionless simulations. Thus, it is difficult to obtain a wide dynamical range with such simulations. Here, we restrict ourselves to follow the evolution of dark matter halos and to use the galaxy tracing method described by Okamoto and Habe (1999, hereafter Paper I) to obtain merging history trees of galaxies.
Recently, some authors have studied the evolution of the dark matter halos of the cluster galaxies in SCDM (Ghigna et al. 1998; Paper I). The epoch of the formation of a cluster of galaxies is very sensitive to the value of the cosmological density parameter, $`\mathrm{\Omega }_0`$ (Richstone et al. 1992). Since the evolution of clusters affects the evolution of galactic halos within the clusters (Paper I), it is interesting to compare the evolution of cluster galaxies in various cosmological models with different values of the density parameter. Here, we examine two cosmological models: one is the critical universe ($`\mathrm{\Omega }_0=1`$), and the other is the open universe ($`\mathrm{\Omega }_0=0.3`$). For both models we assume that the mass of the universe is dominated by cold dark matter (CDM).
The plan of this paper is as follows: Techniques and parameters of the $`N`$-body simulations and the method of creation of merging history trees of galaxies are described in section 2. Our results are presented in section 3 and discussed in section 4.
## 2. Simulation
### 2.1. The Simulation Data Set
Our simulations followed the evolution of isolated spheres of radius $`R_{\mathrm{sim}}`$ in both the standard CDM (SCDM) universe ($`\mathrm{\Omega }=1`$, $`h\equiv H_0/100`$ $`\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}=0.5`$, $`\sigma _8=0.67`$) and the open CDM (OCDM) universe ($`\mathrm{\Omega }_0=0.3`$, $`h=0.7`$, $`\sigma _8=1`$). The normalisations were chosen to approximately match the observed cluster abundance. We imposed the constraint of a $`3\sigma `$ peak of the density field smoothed with an 8 Mpc Gaussian at the center of each simulation sphere, in order to obtain a rich cluster (Hoffman, Ribak 1991). The simulations were performed using a parallel tree-code, which was used in Paper I. To obtain a sufficient resolution to follow the evolution of the galaxy-sized halos with a relatively small number of particles, we used the multi-mass $`N`$-body code (Navarro et al. 1997; Paper I). The initial conditions of our simulations were made as follows.
First, only long-wavelength components were used for the realization of the initial perturbation in the simulation sphere using $`10^5`$ particles; we then performed a simulation with these low-resolution particles. After this procedure, we tagged the particles which were inside a sphere of radius 3 Mpc centered on the cluster center at $`z=0`$. Next, we went back to the initial time stage, and then divided each tagged particle into 64 high-resolution particles according to the density perturbation that is realized by including additional shorter wavelength components. As a result, the total number of particles became $`10^6`$.
We then calculated again the dark matter evolution using high- and low-resolution particles from the new initial condition. Our analyses were performed only for the high-resolution particles. The mass of a high-resolution particle was $`m\simeq 5.5\times 10^8h^{-1}M_{\odot }`$, and its softening length, $`\epsilon `$, was set to 5 kpc.
The overall parameters and mass of the clusters in both simulations at $`z=0`$ are listed in table 1.
### 2.2. Creation of Merging History Trees of Galaxies
To create merging history trees of galaxies, we have to identify the galactic dark halos in the sea of dark matter. The identification of halos in such environments is a critical step (Bertshinger, Gelb 1991; Summers et al. 1995). The most widely used halo-finding algorithm, friends-of-friends (e.g., Davis et al. 1985), is not acceptable, because it cannot separate substructures inside large halos. Since the DENMAX algorithm (Bertshinger & Gelb 1991) shows good performance, we used its offspring SKID (Governato et al. 1997).
This algorithm groups particles by moving them along the density gradient to the local density maximum. The density field and the density gradient are defined everywhere by smoothing each particle using the SPH-like method with the neighboring 64 particles. At a given redshift, only particles with local densities greater than one-third of the virial density at that epoch are moved to the local density maximum. This threshold roughly corresponds to the local density at the virial radius. The final step of the process is to remove all particles that are not gravitationally bound to their parent halos. Here, halos which contain more particles than a threshold number, $`n_{\mathrm{th}}`$, are identified as galactic halos. Unless explicitly stated otherwise, $`n_{\mathrm{th}}=30`$ is adopted. This is a large enough number to inhibit the numerical evaporation of halos (Moore et al. 1996a).
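A heavily simplified stand-in for this grouping step is sketched below: it replaces the SPH smoothing by a crude k-nearest-neighbour density estimate and omits the density threshold and the final unbinding step, but it shows the hill-climbing to local density maxima that defines the groups.

```python
import numpy as np
from scipy.spatial import cKDTree

def group_by_density_maxima(pos, k=16):
    """Link each particle to the densest particle among its k nearest
    neighbours and follow the links uphill to a local density maximum;
    particles sharing a maximum form one group (toy version of SKID)."""
    tree = cKDTree(pos)
    dist, idx = tree.query(pos, k=k + 1)      # neighbour lists (self included)
    dens = k / dist[:, -1] ** 3               # crude kNN density estimate
    link = idx[np.arange(len(pos)), np.argmax(dens[idx], axis=1)]
    for _ in range(64):                       # iterate links to fixed points
        nxt = link[link]
        if np.array_equal(nxt, link):
            break
        link = nxt
    return link                               # label = index of density maximum

rng = np.random.default_rng(1)
pos = np.vstack([rng.normal(0, 1, (500, 3)), rng.normal(8, 1, (500, 3))])
print("groups found:", len(np.unique(group_by_density_maxima(pos))))
```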
The method to create the merging history tree is similar to that mentioned in Paper I; the details are as follows.
We identify galactic halos with a 0.5 Gyr time interval, which is restricted by the disk space of our computer. Since this time interval is shorter than the dynamical time-scale of the clusters and the fading time-scale of evidence of starburst in galaxies, it is sufficiently short to construct the merger trees to investigate the effect of the merging of galaxies and the tidal interactions.
The three most bound particles in each halo are tagged as tracers. We consider three cases to construct the merger tree of galaxies.
First, if a halo at $`t_{i+1}`$, where $`i`$ is the number of a time stage, has two or more tracers that are contained in the same halo at $`t_i`$, then the halo at $`t_{i+1}`$ is defined as a next halo of the halo at $`t_i`$. In this case, the halo at $`t_i`$ is a progenitor of the halo at $`t_{i+1}`$.
Next, we consider the case that some halos at $`t_{i+1}`$ have one of the three tracers of a halo at $`t_i`$. Among them, the halo that has the tracer which is more bound in the halo at $`t_i`$ is chosen as the next halo of the halo at $`t_i`$.
Finally, if none of the three tracers of a halo at $`t_i`$ are contained in any halo at the next time stage ($`t_{i+1}`$), we refer to the particle which is the most bound tracer of the halo at $`t_i`$ as a stripped tracer. We thus call both the halos and stripped tracers galaxies throughout this paper.
We construct the merging history trees of galaxies in this way. A halo which has two or more progenitors at the former time stage is referred to as a merger. It often happens that satellite galaxies pass through a central halo. Such cases should not be considered as merging. Hence, we check that the tracers of galaxies at $`t_{i-1}`$, which are contained in a merger at $`t_i`$, are still in the same halo at $`t_{i+1}`$.
In order to estimate the mass of the stellar component of a galaxy, we assume that the mass of the stellar component is proportional to the sum of the masses of all its progenitors (hereafter we call this mass the summed-up-mass). Except for the case in which a large fraction of the stellar component of the galaxies has been stripped during the halo stripping, this assumption may be valid. We estimate the summed-up-mass of a galaxy as the sum of the summed-up-masses of all its progenitors at the previous time stage. For a newly forming halo which has no progenitors at a previous time stage, the summed-up-mass of the halo is set to the mass of the halo. To take into account the mass increase due to accretion of dark matter onto the halo after its first identification, we replace the summed-up-mass with the halo mass when the summed-up-mass is smaller than the halo mass.
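The bookkeeping rule just described can be written compactly as a recursion over the tree; the sketch below is an illustrative implementation with hypothetical identifiers, not the code used in our analysis.

```python
def summed_up_mass(halo, halo_mass, progenitors, cache):
    """Summed-up-mass: the sum of the progenitors' summed-up-masses,
    replaced by the current halo mass whenever the latter is larger
    (accretion); a newly forming halo simply gets its halo mass."""
    if halo in cache:
        return cache[halo]
    prog = progenitors.get(halo, [])
    if not prog:                               # newly forming halo
        m = halo_mass[halo]
    else:
        m = sum(summed_up_mass(p, halo_mass, progenitors, cache) for p in prog)
        m = max(m, halo_mass[halo])            # mass increase by accretion
    cache[halo] = m
    return m

# Toy tree: halo "c" is a merger of "a" and "b" and later accretes matter.
halo_mass = {"a": 1.0, "b": 0.5, "c": 2.0}
progenitors = {"c": ["a", "b"]}
print(summed_up_mass("c", halo_mass, progenitors, {}))   # -> 2.0
```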
## 3. Results
### 3.1. Evolution of the Cluster
To define the size of clusters, we calculated the virial overdensity based on the spherical-collapse model. For the SCDM model we used 200 as the virial overdensity, following previous studies (e.g., Navarro et al. 1996). In OCDM, the virial overdensity is a function of redshift. Therefore, we calculated it at each redshift; it becomes $`\delta \simeq 400`$ at $`z=0`$. In figure 1, we plot the mass evolution of the most massive virialized object in each model. It is well known that the formation epoch of the OCDM cluster is much earlier than that of the SCDM cluster. In our result, the formation redshift (the redshift when half of the final mass has accreted) of the SCDM cluster is $`z\simeq 0.15`$ and that of the OCDM cluster is $`z\simeq 1.6`$. We show the $`x`$-$`y`$ projection of a density map in a cube with sides of $`2r_{\mathrm{vir}}`$ ($`r_{\mathrm{vir}}`$ is the radius of the sphere having the virial overdensity) centered on the cluster's center in each model at $`z=0`$ (figure 2). The gray scale represents the logarithmically scaled density given by the SPH-like method. We found that many galaxy-sized density peaks survive even in the central parts of the rich clusters.
### 3.2. Merging of Galaxies
Numerical simulations have shown that mergers with a mass ratio of 3:1 or less produce a remnant resembling an elliptical galaxy (Barnes 1996). Therefore, we define a halo which has two or more progenitors with this mass ratio at the former time stage as a "major merger." In figure 3, we show the major merger fraction of the large galaxies ($`10^{11}h^{-1}M_{\odot }\le M_{\mathrm{sum}}\le 10^{13}h^{-1}M_{\odot }`$) in the cluster-forming regions as a function of the redshift. The points in the figure can be fitted by a curve in Gottlöber et al. (1999), $`\alpha (1+z)^\beta \mathrm{exp}[-\gamma (1+z)]`$, with $`\alpha =0.01,0.04,\beta =4.5,4.6,`$ and $`\gamma =1.4,1.2`$ for SCDM and OCDM, respectively. When the clusters start forming, the merger fraction steeply decreases. One reason for this is that the high velocity dispersion in clusters and groups inhibits the galaxies within these objects from merging with each other. Another reason is that the stripping of halos by the tidal fields of such large objects prevents the merging of individual galactic halos (Funato et al. 1993; Bode et al. 1994). Since the cluster in OCDM forms much earlier than in SCDM (see figure 1), this decline of the merger fraction appears at a higher redshift in OCDM than in SCDM.
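For reference, the quoted fitting form can be evaluated directly with the parameters above; the curve peaks where $`\beta =\gamma (1+z)`$, i.e. near z of about 2.2 (SCDM) and 2.8 (OCDM), consistent with the earlier decline in OCDM.

```python
import numpy as np

def merger_fraction(z, alpha, beta, gamma):
    """Fit of Gottloeber et al. (1999): alpha*(1+z)^beta * exp[-gamma*(1+z)]."""
    zp1 = 1.0 + np.asarray(z)
    return alpha * zp1 ** beta * np.exp(-gamma * zp1)

z = np.linspace(0.0, 4.0, 9)
for zi, f_scdm, f_ocdm in zip(z, merger_fraction(z, 0.01, 4.5, 1.4),
                              merger_fraction(z, 0.04, 4.6, 1.2)):
    print(f"z = {zi:3.1f}   SCDM: {f_scdm:.4f}   OCDM: {f_ocdm:.4f}")
```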
Governato et al. (1999) showed that the major merger rate of the galaxy-sized dark halos in the field for $`z<1`$ is proportional to $`(1+z)^{4.2}`$ and $`(1+z)^{2.5}`$ in SCDM and OCDM, respectively. Comparing their result obtained in the field with our result obtained in the cluster-forming regions, we can say that the efficiency of the major merging of galaxies in the cluster-forming regions increases more steeply toward high redshifts than in the field, especially in OCDM. Since the density contrast of the cluster-forming regions in OCDM takes a larger value than in SCDM, the evolution of the major merger rate in OCDM differs significantly between the field and the cluster-forming regions.
Recently, van Dokkum et al. (1999) observed a rich cluster at $`z=0.83`$, finding a high merger fraction in the cluster and a rapid evolution of the fraction. The fraction at $`z<1`$ is comparable to the result obtained here.
### 3.3. Tidal Stripping of Halos
We evaluated the effects of the tidal stripping on the galactic halos in different cosmologies. For this purpose, we chose galactic halos at a redshift when the number of large halos with $`M_\mathrm{h}\ge 10^{11}h^{-1}M_{\odot }`$ is largest, well before the cluster or group formation epochs. The tidal effects are probably negligible at such a redshift. Then, we examined whether the chosen galaxies lose their halos at lower redshifts. If they become $`M_\mathrm{h}<10^{10}h^{-1}M_{\odot }`$ at lower redshifts, they must be tidally stripped. We then adopted $`n_{\mathrm{th}}=19,18`$ for SCDM and OCDM, respectively. Therefore, when a galaxy has become a stripped tracer, it means that the galaxy does not have a halo with $`M_\mathrm{h}\ge 10^{10}h^{-1}M_{\odot }`$.
When they become such small halos, dissipative effects, which were not included in our simulations, should become important. We call such galaxies stripped galaxies.
In figure 4, we show the stripped galaxy fractions in $`0.25h^{-1}`$ Mpc radius bins from the cluster centers. The stripped galaxies show a similar distribution in both models at $`z=0`$. Their evolutions, however, are very different between the models. In SCDM, there are few stripped galaxies, except for the central part at $`z=0.5`$, and the fraction drastically increases near $`z=0`$. On the other hand, we can see a strong correlation between the fraction and the radius in the OCDM cluster, even at $`z=0.5`$; this fraction shows very weak evolution between $`z=0.5`$ and $`0`$. This is because the cluster in OCDM forms much earlier than in SCDM.
Next, we compared the radii of the halos, which were determined as the radii at which their circular velocity profiles take minimum values (Ghigna et al. 1998; Paper I), to the tidal radii of the halos estimated at their pericentric positions, which we calculated by using the dark halo model of Navarro et al. (1997). The tidal radii of the halos, $`r_{\mathrm{est}}`$, were estimated by the following approximation assuming an isothermal distribution of dark matter:
$$r_{\mathrm{est}}\simeq r_{\mathrm{peri}}\frac{v_{\mathrm{max}}}{V_\mathrm{c}},$$
(1)
where $`v_{\mathrm{max}}`$ is the maximum value of the circular velocity of a galactic halo and $`V_\mathrm{c}`$ is the circular velocity of the cluster. In figure 5, we plot $`r_{\mathrm{est}}`$ against $`r_\mathrm{h}`$ for the outgoing halos that must have passed pericenter recently. We plotted only the large halos ($`v_\mathrm{c}\ge 80`$ km $`\mathrm{s}^{-1}`$) to avoid any influence of the insufficient resolution. Moreover, we ignored the halos with $`r_{\mathrm{peri}}<300`$ kpc, because they have tidal tails due to impulsive collisions as they pass close to the cluster center (Ghigna et al. 1998; Paper I). In SCDM, most of the halos with $`r_\mathrm{h}>100`$ kpc have larger radii than $`r_{\mathrm{est}}`$. On the other hand, the halos in OCDM are sufficiently truncated and have radii comparable to $`r_{\mathrm{est}}`$, because few galaxies accrete onto the cluster in OCDM at low redshifts (the cluster formed at $`z_{\mathrm{form}}=1.6`$). This means that the galaxies in the SCDM cluster tidally evolve even at present, while such evolution has almost come to completion in the OCDM cluster. This result is consistent with the weak evolution of the stripped galaxy fraction of the cluster galaxies in OCDM.
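Eq. (1) translates into a one-line helper; the numbers in the example are illustrative, not measured values from the simulations.

```python
def tidal_radius_estimate(r_peri, v_max, V_c):
    """Eq. (1): tidal radius at pericentre for isothermal profiles,
    r_est ~ r_peri * (v_max / V_c); r_peri in kpc, velocities in km/s."""
    return r_peri * v_max / V_c

# A halo with v_max = 150 km/s at r_peri = 400 kpc in a cluster with
# V_c = 1200 km/s would be truncated near 50 kpc.
print(tidal_radius_estimate(400.0, 150.0, 1200.0), "kpc")
```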
## 4. Discussion
We have investigated the effects of the difference of the cosmological density parameter on the evolution of the cluster galaxies using cosmological $`N`$-body simulations in SCDM and OCDM. The cluster forms at $`z\simeq 0.15`$ and $`z\simeq 1.6`$ in SCDM and OCDM, respectively. We have shown that the difference between the formation epochs of the clusters changes the evolution of the cluster galaxies.
The major merger fraction in the cluster-forming regions is roughly proportional to $`(1+z)^{4.5}`$ in each cosmological model at low redshifts ($`0<z<2`$). The decline of this fraction appears at higher redshifts in OCDM than in SCDM. The reason is as follows. The efficiency of merging rapidly decreases as clusters form because of large internal velocities of the clusters and a reduction of the size of tidally truncated halos. Hence, the earlier formation of the cluster in OCDM leads to an earlier decline of the major merger fraction. From this result, we expect that in a lower density universe the elliptical galaxies in clusters would mainly form earlier. We will investigate this possibility in a forthcoming paper.
Tidal interactions also have the possibility to change the morphology of the galaxies and to induce active star formation (Moore et al. 1996b, 1998). We find that, in the SCDM universe, the fraction of cluster galaxies which have been stripped of their dark halos due to tidal interactions begins to increase from $`z\simeq 0.5`$. Thus, if the morphological transformation from S to S0 and the starburst due to galaxy harassment (Moore et al. 1996b, 1998) are caused by such tidal interactions, the morphology-density relation and the Butcher-Oemler effect should evolve from $`z\simeq 0.5`$ in SCDM. On the other hand, in OCDM, since this fraction is already significant at $`z=0.5`$, we can observe these effects at redshifts higher than $`z=0.5`$. For detailed analyses we need the star-formation history of each galaxy, which will be considered in future work.
In this paper we have shown how the cluster evolution affects the evolution of cluster galaxies. The effects of the different formation epochs of the clusters of galaxies between cosmological models on the color and morphological evolution of the cluster galaxies will be clarified by combining the merging history trees obtained here with a simple model of gas cooling, star formation, and feedback used in semianalytic work (Kauffmann et al. 1993; Cole et al. 1994; Kauffmann et al. 1999).
We wish to thank M. Fujimoto and M. Nagashima for useful discussions. Numerical computation in this work was carried out on the HP Exemplar at the Yukawa Institute Computer Facility and on the SGI Origin 2000 at the Division of Physics, Graduate School of Science, Hokkaido University.
## References
Barnes J.E. 1989, Nature 338, 123
Barnes J.E. 1996, in Formation of the Galactic Halo, Inside and Out, ed H. Morrison, A. Sarajedini ASP Conf. Ser. 92, p415
Bertshinger E., Gelb J.M. 1991, Comput. Phys. 5, 164
Bode P.W., Berrington R.C., Cohn H.N., Lugger P.M. 1994, ApJ, 433, 479
Butcher H., Oemler A.Jr 1978, ApJ 219, 18
Butcher H., Oemler A.Jr 1984a, ApJ 285, 426
Butcher H.R., Oemler A.Jr 1984b, Nature 310, 31
Cole S., Aragón-Salamanca A., Frenk C.S., Navarro J.F., Zepf S.E. 1994, MNRAS 271, 781
Davis M., Efstathiou G., Frenk C.S., White S.D.M. 1985, ApJ 292, 371
Dressler A. 1980, ApJ 236, 351
Dressler A. 1984, ARA&A 22, 185
Funato Y., Makino J., Ebisuzaki T. 1993, PASJ, 45, 289
Ghigna S., Moore B., Governato F., Lake G., Quinn T., Sadel J. 1998, MNRAS, 300, 146
Governato F., Moore B., Cen R., Stadel J., Lake G., Quinn T. 1997, NewAstron. 2, 91
Gottlöber S., Klypin A., Kravtsov A.V. 1999, astro-ph/9909012
Hoffman Y., Ribak E. 1991, ApJ 380, L5
Kauffmann G., Colberg J.M., Diaferio A., White S.D.M. 1999, MNRAS 303, 188
Kauffmann G., White S.D.M., Guiderdoni B. 1993, MNRAS 264, 201
Moore B., Katz N., Lake G. 1996a, ApJ 457, 455
Moore B., Katz N., Lake G., Dressler A., Oemler A.Jr 1996b, Nature 379, 613
Moore B., Lake G., Katz N. 1998, ApJ 495, 139
Navarro J.F., Frenk C.S., White S.D.M. 1997, ApJ 490, 493
Okamoto T., Habe A. 1999, ApJ 516, 591 (Paper I)
Richstone D., Loeb A., Turner E.L. 1992, ApJ 393, 477
Rosati P., Della Ceca R., Norman C., Giacconi R. 1998, ApJ 492, L21
Summers F.J., Davis M., Evrard A.E. 1995, ApJ 454, 1
van Dokkum P.G., Franx M., Fabricant D., Kelson D.D., Illingworth G.D. 1999, ApJ 520, L95
Whitmore B.C., Gilmore D.M., Jones C. 1993, ApJ 407, 489
# Anisotropic improved actions
## 1 INTRODUCTION
The advantages of using improved actions have been recognized in many areas of lattice simulations; at the same time, the merits of using anisotropic lattices are well understood. We have therefore started to study the anisotropic properties of improved actions. The improved actions we study in this work consist of the terms
$$S_{\mu ,\nu }=\beta (C_0L(\text{Plaq.})_{\mu ,\nu }+C_1L(\text{Rect.})_{\mu ,\nu })$$
where $`L(\text{Plaq.})`$ and $`L(\text{Rect.})`$ represent plaquette and 6-link rectangular loops respectively, and $`C_0`$ and $`C_1`$ satisfy $`C_0+8C_1=1`$. This class of improved actions includes
i) the tree-level Symanzik action ($`C_1=-1/12`$),
ii) the Iwasaki action ($`C_1=-0.331`$),
iii) the QCDTARO action ($`C_1=-1.409`$),
etc.
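Since the normalization condition fixes $`C_0`$ once $`C_1`$ is chosen, the coefficients of the actions listed above follow immediately; a minimal sketch:

```python
# Rectangle coefficients C1 of the improved actions listed above;
# the plaquette coefficient follows from the condition C0 + 8*C1 = 1.
actions = {
    "Symanzik (tree level)": -1.0 / 12.0,
    "Iwasaki": -0.331,
    "QCDTARO": -1.409,
}

for name, c1 in actions.items():
    c0 = 1.0 - 8.0 * c1
    print(f"{name:22s}  C1 = {c1:+.4f}  C0 = {c0:+.4f}")
```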
For these classes of improved actions, the anisotropic lattice is formulated in the same way as for the standard plaquette action. We introduce the coupling constants $`g_\sigma `$ ($`g_\tau `$) and lattice spacings $`a_\sigma `$ ($`a_\tau `$) in the space (temperature) direction. With these parameters, the action on the anisotropic lattice is written as
$$S=\beta _\sigma S_{ij}+\beta _\tau S_{4i},$$
where $`\beta _\sigma =g_\sigma ^{-2}\xi _R^{-1}`$, $`\beta _\tau =g_\tau ^{-2}\xi _R`$ and $`\xi _R=a_\sigma /a_\tau `$.
In the weak coupling expansion, the $`\eta `$ parameter is written as follows,
$$\eta =1+N\alpha (\xi _R,C_1)/\beta +O(g^4).$$
The behavior of $`\alpha (\xi _R,C_1)`$ has already been reported at Lattice '98; it is shown in Fig. 1.
We have found that $`\alpha (\xi _R,C_1)`$ changes sign around $`C_1\sim -0.160`$. Namely, in the weak coupling region, the $`\eta `$ parameter for the Iwasaki and QCDTARO actions is less than unity, contrary to the case of the standard plaquette action.
The natural question is then: what is the behavior of $`\eta `$ in the intermediate to strong coupling region for these improved actions?
## 2 NUMERICAL STUDY of $`\eta `$
Numerically, the parameter $`\eta `$ is calculated from the relation $`\eta =\xi _R/\xi _B`$, where $`\xi _B`$ is the bare anisotropy parameter which appears in the action,
$$S=\beta (\frac{1}{\xi _B}S_{ij}+\xi _BS_{i4}),$$
while the renormalized anisotropy parameter is defined by $`\xi _R=a_\sigma /a_\tau `$. To probe the scale in the space and temperature directions, we use the lattice potentials in these directions. The lattice potential in the temperature direction is defined through Wilson loops in the space-temperature plane,
$$V_{st}(\xi _B,p,t)=\mathrm{ln}(\frac{W_{st}(p,t)}{W_{st}(p+1,t)}).$$
The potential in the space direction is defined in a similar way.
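Given measured Wilson-loop expectation values, the potential above is a simple log-ratio at fixed temporal extent. A minimal sketch, assuming a hypothetical array `W[p, t]` of loop averages indexed by spatial and temporal extent in lattice units:

```python
import numpy as np

def V_st(W, p, t):
    """Lattice potential V_st(p, t) = ln( W(p, t) / W(p+1, t) ).

    W is assumed to hold Wilson-loop expectation values W[p, t];
    both indices are extents in lattice units.
    """
    return np.log(W[p, t] / W[p + 1, t])

# Toy check: loops obeying an area law W = exp(-sigma*p*t) give V_st = sigma*t.
sigma = 0.1
p_ax, t_ax = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
W = np.exp(-sigma * p_ax * t_ax)
print(V_st(W, p=2, t=3), sigma * 3)  # both ~0.3
```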
The simulations are mainly done on a $`12^3\times 24`$ lattice for $`\xi _R=2.0`$, and for some $`\beta `$ on a $`16^3\times 32`$ lattice to study the size dependence.
The method used to determine $`\eta `$ is explained in some detail using the example of $`\beta =4.5`$ for the Iwasaki action.
We fix $`\xi _R=2`$, and calculate the ratio
$$R(\xi _B,p,r)=\frac{V_{ss}(\xi _B,p,r)}{V_{st}(\xi _B,p,\xi _Rr)}$$
at $`\xi _B=2.0,2.1,2.2`$. The results for $`\xi _B=2.1`$ are shown in Fig. 2 for each $`p`$ and $`r`$.
For these $`R(\xi _B)`$ we interpolate with a second-order polynomial in $`\xi _B`$ and solve for the point where $`R=1`$ is satisfied. The $`\eta `$ is then obtained for each $`p`$ and $`r`$; the results are shown in Fig. 3.
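The interpolation step can be sketched as follows; the $`R(\xi _B)`$ values below are hypothetical placeholders, not the measured ones:

```python
import numpy as np

xi_B = np.array([2.0, 2.1, 2.2])   # simulated bare anisotropies
R = np.array([1.02, 1.00, 0.97])   # placeholder R(xi_B) values

# second-order polynomial through the three points
coeff = np.polyfit(xi_B, R, deg=2)

# solve R(xi_B) = 1, keeping the root inside the sampled interval
roots = np.roots(coeff - np.array([0.0, 0.0, 1.0]))
xi_star = [r.real for r in roots
           if abs(r.imag) < 1e-12 and 2.0 <= r.real <= 2.2][0]

eta = 2.0 / xi_star                # eta = xi_R / xi_B at xi_R = 2
print(xi_star, eta)
```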
It is found that for $`p\times r\gtrsim 15`$, $`\eta `$ is almost flat. Over this flat region we take the average, with errors estimated by the jackknife method. The same analysis has been repeated for the other values of $`\beta `$ and for the other improved actions.
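The jackknife error estimate over the flat region can be sketched as the standard delete-one procedure (the $`\eta `$ samples below are placeholders):

```python
import numpy as np

def jackknife(samples):
    """Delete-one jackknife mean and error of a 1D sample."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    loo = (samples.sum() - samples) / (n - 1)   # leave-one-out means
    mean = loo.mean()
    err = np.sqrt((n - 1) / n * ((loo - mean) ** 2).sum())
    return mean, err

print(jackknife([0.95, 0.97, 0.96, 0.94, 0.96]))
```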
In order to check the size dependence, simulations were carried out on a $`16^3\times 32`$ lattice at $`\beta =21.0`$ and $`3.5`$ for the Iwasaki action. It is found that at $`\beta =21.0`$ the size dependence is not small, but at $`\beta =3.5`$ it is quite small. Therefore, for the intermediate to strong coupling region the simulations were carried out on the $`12^3\times 24`$ lattice. All the results are summarized in Fig. 4.
## 3 CONCLUSIONS AND DISCUSSIONS
• Iwasaki's Action
The $`\eta `$ parameter has a shallow dip at $`\beta \sim 6.0`$ and then blows up towards the small-$`\beta `$ region. The prediction of the one-loop perturbative calculation breaks down qualitatively. As a result of this behavior, $`\eta \sim 1`$ over a wide range of $`\beta `$.
The results for $`\xi _R=3`$ are also shown in Fig. 4. The qualitative behavior is the same, but the dip around $`\beta =6.0`$ becomes deeper. This means that the renormalization of the anisotropy parameter is larger for larger $`\xi _R`$.
• Tree-level Symanzik's Action and the action with $`C_1=-0.16`$
The $`\eta `$ for the Symanzik action is qualitatively the same as that of the standard action; namely, it increases monotonically as $`\beta `$ decreases. But the slope is less steep, so the value of $`\eta `$ is smaller than for the standard action at a given $`\beta `$.
For the action with $`C_1=-0.16`$, it has been found that $`\alpha =0`$; namely, the renormalization of the anisotropy parameter vanishes in the weak coupling perturbative calculation. In the intermediate to strong coupling region, $`\eta `$ shows behavior similar to that of the standard plaquette and Symanzik actions, but again with a less steep slope.
• Global features of $`\eta (\beta ,C_1)`$
The calculation for the QCDTARO action has not been carried out yet, but we have found the global structure of the $`\eta `$ parameter as a function of $`\beta `$ and $`C_1`$. The interesting point is that for actions with $`C_1<-0.16`$, $`\eta `$ first decreases and then increases with decreasing $`\beta `$. As a result of this behavior, $`\eta `$ stays close to one over a wide range of $`\beta `$ for the Iwasaki action. This may be a good feature for numerical simulations.
• Further work
The calculation of $`\eta `$ in the stronger coupling region and at other anisotropies ($`\xi _R=0.5`$, $`1.5`$, and $`3.0`$ to $`5.0`$, etc.) is now under study. We are preparing to use smeared Wilson loops for the simulations in the stronger coupling region.
We are starting the calculation of the lattice spacing $`a`$ as a function of $`\beta `$ on the anisotropic lattices.
The behavior of $`\eta `$ at smaller $`C_1`$, such as for the QCDTARO action, is also an interesting problem.
We are preparing the simulations of physical quantities on an anisotropic lattice with improved actions. The targets are heavy quark spectroscopy, transport coefficients of quark gluon plasma etc.
ACKNOWLEDGMENTS
The present work has been done with the SX-4 at RCNP and the VX-4 at Yamagata. We are grateful to the members of RCNP for their warm hospitality and kind support.
# Non-parametric star formation histories for 5 dwarf spheroidal galaxies of the local group
## 1 Introduction
The local dwarf spheroidal galaxies form a sample of small galaxies which, due to their relatively nearby locations and close association with the Milky Way, could in principle furnish crucial observational and theoretical information on a range of astrophysical phenomena. As these systems are eventually disrupted and incorporated into the Milky Way, they illustrate locally one of the mechanisms thought to be responsible for the build-up of large galaxies. Thus, comparing their stars with those now found in our Galaxy, we can obtain a first estimate of the relevance of late mergers (Unavane et al. 1996). Dynamical studies of their stars have yielded valuable constraints on the nature and structure of dark matter halos at the smallest scales (Lin & Faber 1983, Gerhard & Spergel 1992). Their orbits probe the Galactic halo over a range of distances not sampled by any other objects and can thus be used to study the outer Galactic halo. Additionally, their small sizes make them in principle the simplest galactic systems, in which key processes such as star formation and gas flows can be studied under relatively well defined conditions.
However, this situation is complicated by the fact that we not only lack a theoretical understanding of these systems, but also an observational record of their evolution; only a present-day snapshot of their physical parameters is available, as is the case with most galactic systems. Whilst high redshift observations have recently opened up new areas of research as they begin to yield a statistical description of the evolution of bright galaxies, such an approach is likely to remain out of reach for these small systems for some time. Fortunately, their neighboring locations allow the study of their individual stars, which offers the possibility of directly probing their evolutionary histories by inferring star formation rates as a function of time, $`SFR(t)`$'s.
The recent availability of detailed colour-magnitude diagrams for several nearby systems has prompted the development, and allowed the application, of careful statistical methods aimed at reconstructing the star formation histories of these objects (e.g. Chiosi et al. (1989), Aparicio et al. (1990) and Mould et al. (1997) using Magellanic and local clusters, and Mighell & Butcher (1992), Smecker-Hane et al. (1994), Tolstoy & Saha (1996), Aparicio & Gallart (1995) and Mighell (1997) using local dSph's). Although much has been learnt of the complex $`SFR(t)`$'s of these systems, existing studies have lacked two major ingredients: a homogeneous set of observations including several of the dSph galaxies does not exist, and different data sets are generally analyzed using different techniques. These two points make comparisons between the derived $`SFR(t)`$'s at best risky. A further difficulty lies in the fact that the available rigorous statistical studies approach the problem parametrically, which is something one should try to avoid when the actual structure of the function one is trying to recover can be crucial, as is the case when the underlying physics is unknown. An example of this last point is the case of the Carina dwarf. Hurley-Keller et al. (1998) solve for the best fitting three discrete bursts solution to the $`SFR(t)`$ and conclude that star formation has proceeded spasmodically, whilst Mighell (1997) uses a non-parametric star count approach, albeit not a fully consistent statistical method, to obtain a more gradual solution for Carina's $`SFR(t)`$.
In this paper we have attempted to improve on the determination of the star formation histories of local dSph systems by addressing the two points mentioned above. We use recent HST observations of the resolved populations of a sample of dSph galaxies (Carina, LeoI, LeoII, Ursa Minor and Draco) uniformly taken and reduced, to recover the $`SFR(t)`$ of each, applying a new non-parametric maximum likelihood method. This allows meaningful comparisons to be made, as any systematics, at any level, will affect all our galaxies equally.
The outline of our paper is as follows: in section 2 we discuss the observations, in section 3 we include a brief outline of our method, which was introduced in our paper I (Hernandez et al. 1999). The results are presented in section 4, and in section 5 we summarize our results.
## 2 The observations
The main requirements of our observations were that they should comprise a homogeneous sample of local dSph galaxies, mostly in terms of the data reduction. Only such an internally consistent data set allows robust comparisons between different galaxies, once uniform data reduction and analysis methods are adopted. We extracted available archive HST data for the Carina, LeoI, LeoII, Ursa Minor and Draco galaxies, and used standard data reduction methods and standard HST calibration numbers throughout the sample (e.g. Elson et al. 1996).
The currently available data cover only small (and variable) sections of the total extent of these systems. This fact clearly limits the inferences which can be drawn to the small fractions observed; the star formation histories of these regions might not be representative of the average for a whole galaxy. While this limitation introduces an extra uncertainty into our results, it highlights the interesting possibility of studying spatial variations in the evolutionary histories of dSph galaxies, if comprehensive HST studies were undertaken. In the above sense, our results for the different galaxies refer in the strictest sense only to the fractions covered by the observed fields.
As we did not require the HR diagrams to extend much fainter than the oldest turnoff points ($`m_V\sim 24`$-$`25`$), or to be complete to the faintest limits (the faintest stars were in fact excluded from the analysis), the data reduction was straightforward. Our resulting CMDs do not show any systematic difference from comparable published ones for the galaxies we study. The technical details of the images used appear in the appendix.
## 3 The method
In this section we give a summary description of our HR diagram inversion method, which was described extensively in our paper I. In contrast with other statistical methods, we do not need to construct synthetic colour magnitude diagrams (CMD) for each of the possible star formation histories being considered. Rather we use a direct approach which solves for the best $`SFR(t)`$ compatible with the stellar evolutionary models assumed and the observations used.
The evolutionary model consists of an isochrone library, and an IMF. Our results are largely insensitive to the details of the latter, for which we use:
$$\rho (m)\propto \left\{\begin{array}{ll}m^{-1.3}\hfill & 0.08\,M_{\odot }<m\le 0.5\,M_{\odot }\hfill \\ m^{-2.2}\hfill & 0.5\,M_{\odot }<m\le 1.0\,M_{\odot }\hfill \\ m^{-2.7}\hfill & 1.0\,M_{\odot }<m\hfill \end{array}\right.$$
(1)
The above fit was derived by Kroupa et al. (1993) from a large sample towards both Galactic poles and the full solar neighborhood.
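A minimal sketch of this broken power law; the overall normalization is arbitrary, and the segment prefactor below is chosen only to make $`\rho (m)`$ continuous at the break masses:

```python
import numpy as np

def kroupa_imf(m):
    """Kroupa et al. (1993) broken power law, arbitrary normalization.

    The prefactor a1 matches the segments at the break m = 0.5 Msun,
    so rho(m) is continuous; the m^-2.2 and m^-2.7 pieces already
    match at m = 1 Msun.
    """
    m = np.asarray(m, dtype=float)
    a1 = 0.5 ** (2.2 - 1.3)                 # continuity at m = 0.5
    rho = np.where(m <= 0.5, m ** -1.3,
          np.where(m <= 1.0, a1 * m ** -2.2,
                   a1 * m ** -2.7))
    return np.where(m >= 0.08, rho, 0.0)

print(kroupa_imf([0.1, 0.5, 1.0, 10.0]))
```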
As the weak metallicity dispersions measured in the galaxies we are studying (around 0.3 dex) are comparable to the errors in the metallicity determinations themselves, we have not attempted to introduce any enrichment histories for any of our galaxies. In fact, as small internal metallicity spreads present in these galaxies would introduce only small differential age offsets in our inferred $`SFR(t)`$ (see Table I), we shall in all cases use single metallicity isochrone sets. Once a metallicity has been selected, we use the latest Padova isochrones (Fagotto et al. 1994, Girardi et al. 1996) together with a detailed constant phase interpolation scheme using only stars at constant evolutionary phase, to construct an isochrone library having a chosen temporal resolution.
In this case we implement the method with a resolution of 0.15 Gyr, sufficient for our present problem. It is one of the advantages of the method that this resolution can be increased arbitrarily (up to the stellar model resolution) with computation times scaling only linearly with it.
Our only other inputs are the positions of, say, $`n`$ observed stars in the HR diagram, each having a colour and luminosity, $`c_i`$ and $`l_i`$. Starting from a full likelihood model, we first construct the probability $`\mathcal{L}`$ that the $`n`$ observed stars resulted from a certain $`SFR(t)`$. This is given by:
$$\mathcal{L}=\prod _{i=1}^{n}\left(\int _{t_0}^{t_1}SFR(t)\,G_i(t)\,dt\right),$$
(2)
where
$$G_i(t)=\frac{\rho (l_i;t)}{\sqrt{2\pi }\,\sigma (l_i)}\mathrm{exp}\left(-\frac{\left[C(l_i;t)-c_i\right]^2}{2\sigma ^2(l_i)}\right)$$
In the above expression $`\rho (l_i;t)`$ is the density of points along the isochrone of age $`t`$, around the luminosity of star $`i`$, and is determined by the assumed IMF together with the duration of the differential phase around the luminosity of star $`i`$. $`t_0`$ and $`t_1`$ are a maximum and a minimum time needed to be considered, for example 0 and 15 Gyr. $`\sigma (l_i)`$ is the amplitude of the observational errors in the colour of the stars, which is a function of the luminosity of the stars. This function is supplied by the particular observational sample one is analyzing. Finally, $`C(l_i;t)`$ is the colour the observed star would actually have if it had formed at time $`t`$. We shall refer to $`G_i(t)`$ as the likelihood matrix, since each element represents the probability that a given star, $`i`$, was actually formed at time $`t`$. Since the colour of a star having a given luminosity and age can sometimes be a multi-valued function, in practice we check along a given isochrone to find all possible masses a given observed star might have as a function of time, and add all contributions (mostly 1, sometimes 2 and occasionally 3) in the same $`G_i(t)`$. In this construction we are only considering observational errors in the colour, and not in the luminosity, of the stars. The generalization to a two-dimensional error ellipsoid is trivial; however, the observational errors in colour dominate the problem to the extent of making this refinement unnecessary. Although the amplitude of the luminosity errors is only a factor of $`\sim 2`$ smaller than that of the colour errors, as can be inferred from the fact that CMD diagrams typically display a range of luminosities 5 times larger than in colour, in discriminating between isochrones errors in colour are $`\sim 10`$ times as important as errors in luminosity. The absence of a colour dependence from $`\rho (l_i;t)`$ is a direct consequence of having neglected errors in the luminosity of the stars. A star of a given observed luminosity and assumed age will thus have a colour determined by the isochrones used.
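As a minimal sketch of this construction, the function below assembles $`G_i(t)`$ on a discrete time grid. The helpers `isochrone_colour(l, t)` and `phase_density(l, t)` are hypothetical names standing in for the isochrone library and the IMF-weighted density of points described above; each is assumed to return one value per colour solution when the relation is multi-valued.

```python
import numpy as np

def likelihood_matrix(l_obs, c_obs, t_grid, sigma,
                      isochrone_colour, phase_density):
    """G[i, j]: probability that star i was formed at time t_grid[j].

    l_obs, c_obs : luminosities and colours of the n observed stars
    sigma(l)     : colour error as a function of luminosity
    """
    n, m = len(l_obs), len(t_grid)
    G = np.zeros((n, m))
    for i in range(n):
        s = sigma(l_obs[i])
        norm = 1.0 / (np.sqrt(2.0 * np.pi) * s)
        for j, t in enumerate(t_grid):
            # sum over all (1-3) colour solutions at this luminosity and age
            for c_iso, rho in zip(isochrone_colour(l_obs[i], t),
                                  phase_density(l_obs[i], t)):
                G[i, j] += rho * norm * np.exp(-(c_iso - c_obs[i]) ** 2
                                               / (2.0 * s ** 2))
    return G
```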
Equation (2) is essentially the extension of the discrete $`SFR(t)`$ case used by Tolstoy & Saha (1996) to the case of a continuous function (continuous in time, but obviously discrete with respect to the stars) in the construction of the likelihood. The challenge now is to find the optimum $`SFR(t)`$ without evaluating equation (2), i.e. without introducing a fixed set of test $`SFR(t)`$ cases from which one is selected.
The condition that $`\mathcal{L}(SFR)`$ has an extremum can be written as
$$\delta \mathcal{L}(SFR)=0,$$
and a variational calculus treatment of the problem applied. First, we develop the product over $`i`$ using the chain rule for the variational derivative, and divide the resulting sum by $`\mathcal{L}`$ to obtain:
$$\sum _{i=1}^{n}\left(\frac{\delta \int _{t_0}^{t_1}SFR(t)\,G_i(t)\,dt}{\int _{t_0}^{t_1}SFR(t)\,G_i(t)\,dt}\right)=0$$
(3)
Introducing the new variable $`Y(t)`$ defined as:
$$Y(t)=\int \sqrt{SFR(t)}\,dt\;\;\Longrightarrow \;\;SFR(t)=\left(\frac{dY(t)}{dt}\right)^2$$
and substituting the above expression into equation (3), we can develop the Euler equation to yield,
$$\frac{d^2Y(t)}{dt^2}\sum _{i=1}^{n}\left(\frac{G_i(t)}{I(i)}\right)=-\frac{dY(t)}{dt}\sum _{i=1}^{n}\left(\frac{dG_i/dt}{I(i)}\right)$$
(4)
where
$$I(i)=\int _{t_0}^{t_1}SFR(t)\,G_i(t)\,dt$$
This in effect has transformed the problem from one of searching for a function which maximizes a product of integrals (equation 2) into one of solving an integro-differential equation (equation 4). We solve this equation iteratively, with the boundary condition $`SFR(15\,\mathrm{Gyr})=0`$. Details of the numerical procedure required to ensure convergence to the maximum likelihood $`SFR(t)`$ can be found in our paper I, where the method is tested extensively using synthetic HR diagrams. The main advantages of our method over other maximum likelihood schemes are the totally non-parametric approach the variational calculus treatment allows, and the efficient computational procedure, in which no time-consuming repeated comparisons between synthetic and observational CMDs are necessary, as the optimal $`SFR(t)`$ is solved for directly.
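Equation (4) can be rearranged as $`d(\mathrm{ln}(dY/dt))/dt=-A(t)`$, where $`A(t)`$ is the ratio of the two weighted sums, which suggests a simple fixed-point iteration: freeze the $`I(i)`$ at the current estimate, update $`SFR=(dY/dt)^2`$ in closed form, and repeat. The sketch below implements this bare scheme; it is not the damped numerical procedure of paper I, and its convergence is not guaranteed in this stripped-down form.

```python
import numpy as np

def solve_sfr(G, t_grid, n_iter=200):
    """Schematic fixed-point solution of the Euler equation (4).

    Equation (4) gives d(ln Y')/dt = -A(t), with
    A = sum_i(dG_i/dt / I_i) / sum_i(G_i / I_i), so with the I_i
    frozen at the current estimate, SFR = (Y')^2 ~ exp(-2 int A dt).
    """
    dt = t_grid[1] - t_grid[0]
    dG = np.gradient(G, dt, axis=1)
    sfr = np.ones_like(t_grid)                 # flat initial guess
    for _ in range(n_iter):
        I = (G * sfr).sum(axis=1) * dt         # I(i) = int SFR G_i dt
        A = (dG / I[:, None]).sum(axis=0) / (G / I[:, None]).sum(axis=0)
        sfr = np.exp(-2.0 * np.cumsum(A) * dt)
        sfr /= sfr.sum() * dt                  # arbitrary overall scale
    return sfr
```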
The lower main sequence region of the CMD diagram is totally degenerate with age, and contains the lowest brightness stars, for which the errors are largest. We have seen from using synthetic HR diagrams that excluding this region produces a faster and more accurate convergence of the method, and have therefore, in analyzing real galaxies, excluded stars with magnitudes fainter than $`M_V=+5`$. This last cut, together with the fact that our isochrones only extend out to the tip of the red giant branch (to go further would necessitate combining results from different physical models, which we preferred not to do), leaves us with a mass range which actually varies as a function of time. To include also the fraction of the $`SFR(t)`$ outside this region, we apply a minor correction factor to the result of equation (4), which accounts for the fraction of mass outside the sampled range, as a function of time, as given by the IMF used.
Before presenting the star formation histories which result from applying our method to the colour-magnitude diagrams of the galaxies sampled, we include a summary of the systematic errors associated with the theoretical inputs of the method, which are more extensively discussed in our paper I.
The IMF convolved with the duration of the differential evolutionary phase enters the calculation of the likelihood matrix in determining the density of points around the luminosity of each observed star, for each of the isochrones considered. As the main sequence and the giant branch regions of the CMD are degenerate with age (for a single metallicity population), it is the region containing the turn-off points of the sampled population that drives the solution of the problem. As a consequence of the above, the details of the IMF used are largely unimportant; it is basically the main sequence lifetime of a star that the solution is sensitive to. This was shown explicitly in our paper I through the use of synthetic HR diagrams, where the IMF affected only the amplitude of the recovered $`SFR(t)`$'s, which were normalized through the total number of stars in the HR diagrams. As in this case we are normalizing the inferred $`SFR(t)`$'s through the total luminosities of the galaxies being studied, changing the IMF within any reasonable limits leaves the results unaffected. The effect of any blended binaries is equally unimportant, as the broadening of the main sequence occurs in the degenerate region and is in any case much smaller than the broadening produced by the observational colour spread.
The resolution of the method varies as a function of time and of the observational errors present in the CMD being analyzed, in ways that were studied in our paper I. The observational errors tend to smear the time structure in the $`SFR(t)`$ always towards older ages, i.e. a burst of age $`t`$ will be recovered as an episode of duration $`\mathrm{\Delta }t`$ ending at time $`t`$. This $`\mathrm{\Delta }t`$ varies with age, becoming significant ($`>1`$ Gyr) only for populations older than around 10 Gyr, for the level of observational errors present in our HST colour-magnitude diagrams. Younger populations are less affected, and the formal resolution of 0.15 Gyr at which the method was implemented is representative of our results for ages younger than 6 Gyr.
As explained in the previous section, our isochrones end at the tip of the red giant branch, which means that more advanced evolutionary phases can not be incorporated into the analysis. Fortunately, these later phases occupy regions of the CMD diagram distinct from those containing the phases we account for. Therefore, we can simply remove any red clump and horizontal branch stars from the analysis, leaving the structure of the studied regions unaffected. These later phases form only a minority component, containing little extra information, and do not affect our inferences. In the same way our results are not affected by the presence of some contaminating field stars. Provided they do not fall on the region of the CMD containing the turn off points of the underlying stars, they are simply removed from the analysis.
We are however sensitive to the assumed metallicity of the stellar populations being treated. In our paper I we presented a few examples of how the method reacts when inverting a synthetic HR diagram produced with isochrones different from the ones used in the inference procedure. If we construct the likelihood matrix using isochrones which are very different (about 1 dex) from the ones used to produce the HR diagram, the iterative method used in solving equation (4) becomes unstable and tends to divergent solutions. This property can be used to deduce large incompatibilities between the stars and the template isochrones against which they are being compared. Small metallicity offsets are much harder to detect, and produce distorted results. The degree of distortion varies with age, the shape of the overall $`SFR(t)`$ and the observational errors present, in a highly non-linear fashion.
To give some indication of these distortions we present Table I. We produced a synthetic HR diagram from a single Gaussian burst input $`SFR(t)`$ having a duration of 1 Gyr, of a given metallicity, and applied the method using a metallicity 0.2 dex lower than the one used to generate the stars. This was repeated for a range of metallicities and ages, for both positive and negative metallicity mismatches. The top row of Table 1 shows the input metallicity (in dex), the first column shows the input burst age, and the other entries show the age offset between the recovered and input $`SFR(t)`$, all ages are given in Gyr. A โ+โ sign denotes the inferred population was older than the input one, where a lower metallicity was used in the inference procedure. Similarly, a โ-โ indicates that the inferred population was younger than the input one, where a higher metallicity was used to invert the simulated CMD.
The metallicities shown cover the range present in the galaxies being studied. Populations much younger than 2 Gyr are not present in our galaxies, and those older than 10 Gyr are distorted by the observational errors to the point of eliminating much of the time resolution of the inversion in this region. This table can be used to estimate the effects of changing the assumed metallicities, or of introducing temporal gradients, within the small observationally restricted range. The well known age-metallicity degeneracy is apparent. This might affect our results even if the mean metallicity is well known, as temporal variations in the metallicities (which must exist at some level) are not considered by the method. Using an independent test on our results, we find these effects to be minor in most of the cases we study, as observational measurements of the metallicity of these systems suggest.
### 3.1 Testing the results
Once the IMF, metallicity and observational parameters are assumed for a given galaxy, the positions of the observed stars in the CMD are used to construct the likelihood matrix $`G_i(t)`$, which is the only input given to the inversion method. In our Paper I we tested this method using synthetic CMDs produced from known $`SFR(t)`$'s, with which we could assess the accuracy of the result of the inversion procedure. In working with real data, we require an independent method of comparing our final result to the starting CMD, in order to check that the answer our inversion procedure gives is a good one. From our paper I we know that when the stars being used in the inversion procedure were indeed produced from the isochrones and metallicity used to construct the likelihood matrix, the inversion method gives accurate results. The introduction of an independent comparison between our answer and the data is hence a way of checking the accuracy of the input physics used in the inversion procedure, i.e. the IMF, metallicity and observational parameters.
The most common procedure for comparing a certain $`SFR(t)`$ with an observed CMD is to use the $`SFR(t)`$ to generate a synthetic CMD, and compare this to the observations using a statistical test to determine the degree of similarity between the two. For example, Aparicio et al. (1997) manage to recover simultaneously the distance, enrichment history and $`SFR(t)`$ of the local dwarf LGS 3 by constructing synthetic CMDs from a $`SFR(t)`$ taken parametrically as a series of contiguous bursts, and finding the amplitudes of each burst that give the maximum statistical similarity with the data, in terms of a counts-in-cells maximum likelihood. In that case, they solve for the amplitudes of the bursts that give a total synthetic CMD most closely resembling the observational data set. Their synthetic CMD is a linear sum of the partial CMDs produced from a single realization of each burst. This has the advantage of allowing a large parameter space to be considered, as the synthetic CMDs are constructed trivially from a fixed single statistical realization of each burst. The disadvantage, however, is that one is not comparing the $`SFR(t)`$ with the data, but rather a particular realization of the $`SFR(t)`$ with the data. The distinction becomes arbitrary only when large numbers of stars are found in all regions of the CMD, which is generally not the case. Following a Bayesian approach, we prefer to adopt the $`W`$ statistic presented by Saha (1998), essentially
$$W=\prod _{i=1}^{B}\frac{(m_i+s_i)!}{m_i!\,s_i!}$$
where B is the number of cells into which the CMD is split, and $`m_i`$ and $`s_i`$ are the numbers of points the two distributions being compared have in each cell. This asks for the probability that two distinct data sets are random realizations of the same underlying distribution. In implementing this test we first produce a large number ($`\sim 500`$) of random realizations of our best answer $`SFR(t)`$, and compute the $`W`$ statistic between pairs in this sample of CMDs. This gives a distribution which is used to determine the range of values of $`W`$ expected to arise among random realizations of the $`SFR(t)`$ being tested. Next the $`W`$ statistic is computed between the observed data set and a new large number of random realizations of the $`SFR(t)`$; this gives a new distribution of $`W`$ which can be objectively compared to the one arising from the model-model comparison, to assess whether both data and modeled CMDs are compatible with a unique underlying distribution. Both distributions of $`W`$ were characterized in terms of a mean value and a $`1\sigma `$ amplitude. This final check of our answer is in fact the slowest part of the procedure, but necessary to obtain an independent check of the answer of our inversion method. In other terms, we are checking that our best inferred maximum-likelihood solution is also a good fit. The value of B used was $`\sim 6400`$.
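With $`B\sim 6400`$ cells the factorials overflow immediately, so in practice one accumulates $`\mathrm{ln}W`$ with log-gamma functions; a sketch (the Poisson toy counts below are placeholders):

```python
import numpy as np
from scipy.special import gammaln

def log_W(m, s):
    """ln W = sum over cells of ln[ (m_i+s_i)! / (m_i! s_i!) ]."""
    m = np.asarray(m, dtype=float)
    s = np.asarray(s, dtype=float)
    return np.sum(gammaln(m + s + 1) - gammaln(m + 1) - gammaln(s + 1))

# toy comparison of two binned CMDs with B = 6400 cells
rng = np.random.default_rng(0)
m = rng.poisson(2.0, size=6400)
s = rng.poisson(2.0, size=6400)
print(log_W(m, s))
```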
## 4 The galaxies
### 4.1 Carina
The first galaxy we study is the Carina dwarf, one of the first dSph's to be observed in terms of resolved stellar populations, and the one for which the most studies inferring the $`SFR(t)`$ from the CMD have been published. This gives us the opportunity of comparing our results with previous studies. Our CMD initially contained 2550 stars. After removing contamination, stars beyond the RGB and the lower degenerate region, we are left with 980 stars. The full observational CMD is shown in the left panel of Figure (1).
Using the numbers published in the recent review by Mateo (1998), we took as the central values of our observational parameters for Carina $`[Fe/H]=-2.0\pm 0.2`$, $`M_V=-9.3`$, $`E(B-V)=0.04\pm 0.02`$ and $`(m-M)_0=20.03\pm 0.09`$ (Smecker-Hane et al. 1994, Mighell 1997, Hurley-Keller et al. 1998 and Mateo et al. 1998). The metallicity fixes the isochrones and the integrated magnitude the normalization; we use the distance modulus and the reddening correction to fix the observations in the CMD. At this point we apply our method to invert the observational CMD and recover the underlying $`SFR(t)`$; this is shown in the right panel of Figure (1) by the solid curve. The dotted curves represent the upper and lower envelopes to a series of alternative reconstructions of the $`SFR(t)`$ produced by changing the assumed values of the distance modulus and the reddening corrections, within their respective error ranges. The internal metallicity spread present in this galaxy is very low, as can be seen from the narrow RGB, and is quoted as $`<0.1`$ dex in the review by Mateo (1998). In this way, our error margins for this galaxy due to metallicity uncertainties are not much larger than what is shown by the dotted lines, see Table (1).
In Figure (2) we illustrate the procedure of checking the inferred $`SFR(t)`$, the left panel gives one random realization of the central inferred $`SFR(t)`$, which is seen to resemble the data for Carina rather well. The right panel of Figure (2) shows the implementation of the W test. The solid histogram gives the distribution of values of W which arise from 500 model-model comparisons, and gives the variability arising from the different random realizations of the central $`SFR(t)`$, for the number of stars present in our observations. The dashed histogram gives the distribution of values of W which result from 250 data-model comparisons. Only $`<32`$ % of the random realizations of our central $`SFR(t)`$ would give distributions of W (when compared against all other realizations) having a mean value further removed from that of the 500 model-model distribution than the data. In this sense, we can accept the hypothesis that both the data and our 500 random realizations of the central $`SFR(t)`$ for Carina come from the same underlying generating function at a 1 $`\sigma `$ level. For the remaining galaxies we shall give only the results of the W test in terms of the mean and 1$`\sigma `$ amplitude of the model-model and the data-model distributions.
Our result shows an interesting $`SFR(t)`$ for this galaxy: very little star formation at early times, until around 10 Gyr ago, when over a period of 3 Gyr an intermediate-age population was formed. The $`SFR(t)`$ then decreased markedly, before entering a more recent and extended period of star formation which ended 2 Gyr ago. The very low levels found throughout for the star formation rate, 50-100 $`M_{\odot }/\mathrm{Myr}`$, are representative of what we find in all our galaxies, and should provide clues as to the physical processes driving the star formation activity in these systems.
The existence of some RR Lyr stars in this galaxy (e.g. Saha et al. 1986, Mateo et al. 1995 or Kuhn et al. 1996) signals the presence of a very old population at some level, although recent estimates of the $`SFR(t)`$ in Carina agree in that the amplitude of this old component is minor (Hurley-Keller & Mateo 1998), and it appears to have blended completely into the MS of our CMD, or to be found preferentially in a region of the galaxy not sampled by our observations. One of the most general claims about the $`SFR(t)`$ of Carina has been its extreme 'bursting' character (e.g. Smecker-Hane 1994, Hurley-Keller & Mateo 1998); we note however that all these studies have assumed a priori an extremely discrete form for the $`SFR(t)`$ of this galaxy, and then solved for the best such function. In contrast, Mighell (1997), using a non-parametric approach, finds a much more continuous solution, basically consistent with what we obtain. The resolution of our method in the region of ages 2-8 Gyr is sufficient to exclude the possibility of any total cessation of the star forming activity in this region lasting more than $`0.5`$ Gyr, as was shown in our Paper I, where synthetic HR diagrams produced from bursting $`SFR(t)`$'s were correctly inverted. We conclude that although the star formation history of this galaxy is clearly bimodal, it is not a series of discrete bursts lasting $`\sim 1`$ Gyr. However, we cannot exclude the possibility that sampling a much larger fraction of this galaxy could yield somewhat different results. Analyzing local variations of the $`SFR(t)`$ within these galaxies is an interesting project which will be treated in other papers, using data covering greater portions of the galaxies. As with all our galaxies, given the small number of stars available, the resolution of our solution is limited. Episodes of very short duration or low level, resulting in very few stars, will be totally missed. We are recovering the $`SFR(t)`$ responsible for producing the greater fraction of the observed stars.
### 4.2 Ursa Minor
The case of Ursa Minor seems to be the simplest of those we study, and it is actually the only one of our galaxies which agrees with the once common expectation of dSph systems being simply old and metal poor. Our observational CMD is shown in the left panel of Figure (3), and is made up of 1232 stars. After removing those stars incompatible with the phases included in our isochrones, together with the fraction fainter than $`M_V=6`$, we are left with 334 stars, which we used in the inversion procedure.
Using the values given in Mateo's (1998) review, we take $`[Fe/H]=-2.2\pm 0.1`$, $`M_V=-8.9`$, $`E(B-V)=0.03\pm 0.02`$ and $`(m-M)_0=19.11\pm 0.1`$ (Nemec et al. 1988 and Olszewski & Aaronson 1985) for this galaxy. The reported internal metallicity dispersion in this galaxy is also very low, at $`<0.2`$ dex, which introduces little uncertainty in our results. Applying our method using the central values of the observational parameters, we obtain the solid line shown in the right panel of Figure (3). Again, the dotted curves represent an envelope to a large number of reconstructions obtained by changing the observational parameters within their error ranges. Of this galaxy we can say that most of its stars are older than 12 Gyr. Given the observational errors present and the large age of the population of this galaxy, we cannot draw any inferences on the time structure of the $`SFR(t)`$, as this is totally lost in the noise. The duration of the star forming episodes can only be concluded to have been $`\lesssim 3`$ Gyr. Normalizing through the total luminosity of this galaxy, we obtain rates of $`>400\,M_{\odot }/\mathrm{Myr}`$. The result of applying the $`W`$ test to this galaxy gives $`47\pm 5`$ and $`44\pm 4`$ for the model-model and model-data sets, respectively, showing our answer to be compatible with the data at better than a 1 $`\sigma `$ level. Olszewski & Aaronson (1985) used a ground based CMD and a simple isochrone fitting procedure to conclude that the population of this galaxy is uniformly old, with the possibility of a 2 Gyr spread.
### 4.3 LeoI
The observations we obtained for LeoI are shown in the CMD in the left panel of Figure (4). This contains 11334 stars, reduced to 8691 after the exclusion of unsuitable stars (e.g. the red clump), as described for the previous two galaxies. The excluded region comprised stars with $`M_V<1`$ and $`0.7<B-V<0.9`$, though our results are highly insensitive to the details of this cut, as the RGB close to the red clump is an age-degenerate region. The CMD of this galaxy reveals a young MS, but also a RGB extending down into the turn off region of a much older MS. The distribution of stars along the MS region is not uniform, and is actually encoding an interesting $`SFR(t)`$.
Taking for this galaxy the central values of those given by Mateo (1998), $`[Fe/H]=-1.5\pm 0.4`$, $`M_V=-11.9`$, $`E(B-V)=0.01\pm 0.01`$ and $`(m-M)_0=21.99\pm 0.2`$ (Reid & Mould 1991, Lee et al. 1993b and Demers et al. 1994), we invert the CMD of LeoI to obtain the $`SFR(t)`$ shown by the solid curve in the right panel of Figure (4). The dotted curves contain all other possible answers compatible with our method and the observational parameters taken with their errors.
The $`SFR(t)`$ of LeoI can be divided into three distinct phases, which join continuously with no evidence of a discrete bursting behaviour. The first of these phases lasted from 15 to 10 Gyr ago, and proceeded at a rate of around 30 $`M_{\odot }/\mathrm{Myr}`$. The following two phases were extended peaks of star formation activity centered on ages of 8 and 4 Gyr, having durations of around 3 and 4 Gyr and maximum amplitudes of 100 and 150 $`M_{\odot }/\mathrm{Myr}`$, respectively, as shown in Figure (4). Any total cessation of the star forming activity can be excluded for ages between 1 and 10 Gyr. As time resolution is lost beyond this age, the population older than 10 Gyr could in principle be a single burst, appearing extended because of the observational errors.
The ground based study of Lee et al. (1993), reaching only the youngest turn off points, and subsequent analyses of this data set by Caputo et al. (1995) and Caputo et al. (1996), using isochrone matching techniques and luminosity function methods developed for single age globular clusters, revealed the presence of stars of ages 1-3 Gyr. Using more recent HST data and comparing to modeled CMDs, Gallart et al. (1998) describe the star formation history of LeoI as coming mostly from an episode lasting from 6-2 Gyr ago, with the addition of an older component of duration 2-3 Gyr, in good agreement with our inferred $`SFR(t)`$ for this galaxy. The population box of this galaxy given by Mateo (1998) is consistent with our results.
In this case, the $`W`$ test gives results showing that our inferred $`SFR(t)`$ is incompatible with the data at a two $`\sigma `$ level. As our inversion method has been extensively tested using synthetic CMDs, this result shows the data to be in conflict with our input assumptions. This is perhaps not surprising, as this system has a much larger internal metallicity spread ($`0.3\pm 0.1`$) than the two reviewed previously, which together with the errors in the metallicity determination allows for quite a large ($`\sim 1`$ dex) internal spread. We have thus solved for the best fitting single metallicity solution, and discovered that internal metallicity dispersions are important. This metallicity spread introduces a time uncertainty going from 1-3 Gyr, for ages going from 1-13 Gyr. Solving simultaneously for the enrichment and star formation histories is a problem we shall treat later, as an extension of the variational calculus approach. A further possible source of the disagreement found between the synthetic CMD reproductions of our recovered $`SFR(t)`$ and the data is the difficulty of modeling precisely the error structure present in real data, as pointed out by Aparicio & Gallart (1995). Although the inversion method itself is highly robust to the details of the assumed error structure, the very careful comparison of the $`W`$ test would pick up any discrepancy between the assumed error structure used in generating the synthetic CMDs and that actually present in the data. Finally, the presently available isochrones do not take into account the relative overabundance of $`\alpha `$ elements at low metallicities, which at some level introduces a slight mismatch between the observed stars and the assumed modeling. Unfortunately, we cannot distinguish between these possibilities easily.
### 4.4 LeoII
For this galaxy we obtained 7625 stars, of which we used 4492 after removing from the analysis the red clump (stars with both $`M_V<1`$ and $`B-V<0.9`$), the lower regions and some blue stragglers bluewards of $`V-I=0.2`$, which do not correspond to any of the evolutionary phases included in our isochrones. The full observational CMD is shown in the left panel of Figure (5). For the central values of the observational parameters of this galaxy as summarized by Mateo (1998), $`[Fe/H]=-1.9\pm 0.1`$, $`M_V=-8.9`$, $`E(B-V)=0.02\pm 0.01`$ and $`(m-M)_0=21.63\pm 0.09`$ (Mighell & Rich 1996, Demers & Irwin 1993 and Lee 1995), we obtain a divergent solution, indicating that the isochrones used do not correspond to the stars being analyzed. Changing the metallicity to $`[Fe/H]=-1.75`$, just marginally outside the errors reported by Mateo (1998), gives a stable convergence of the method and a significant result. The recent study by Mighell and Rich (1996) determined a metallicity of $`[Fe/H]=-1.6\pm 0.25`$ for this galaxy, consistent with what was used here. Our inferred $`SFR(t)`$ is shown by the solid curve in the right panel of Figure (5). Again, the dotted curves represent an envelope to all alternative reconstructions obtained by varying the observational parameters within their errors. As with LeoI, this galaxy shows a large internal metallicity spread, which is probably what the very sensitive $`W`$ test detects, again giving an incompatible result between the model-model and data-model comparisons at more than a two $`\sigma `$ level. The discussion of this point given for the previous galaxy applies also to LeoII; metallicity spreads will have to be considered for a more accurate rendering of the star formation history of this galaxy.
In this case, we see a gradually rising $`SFR`$ from 12 Gyr ago to a peak of 160 $`M_{\odot }/\mathrm{Myr}`$ at 8 Gyr, followed by a somewhat more abrupt descent, with star formation activity ending by around 6 Gyr ago, as shown by Figure (5). This result would be affected by the internal metallicity spread of $`0.3`$ dex of LeoII (Mateo 1998), producing a broadening of around 1-3 Gyr, see Table (1). Comparing with the study of Mighell and Rich (1996), who analyze an HST CMD of LeoII by fitting a 'fiducial sequence' to the CMD and then comparing it to theoretical isochrones to solve for the age of the galaxy, treated as a single parameter, we find no inconsistencies. They obtain an age of $`9\pm 1`$ Gyr for LeoII, with an age spread of around 4 Gyr, which is compatible with our results. They also report some degree of star formation at ages $`>10`$ Gyr, of which we see no evidence. This discrepancy is probably the result of the different methods used in the analysis. Given the high age resolution of our method, it is not only the median age and a representative value for the spread that we obtain; the shape of the burst is also recovered, ruling out for example a rectangular burst for this galaxy. Not just the age and duration of star formation episodes in these galaxies, but also their time structure can now be reliably inferred, and used to aid theoretical interpretations of the origin of these systems.
### 4.5 Draco
For this last galaxy our observational CMD contains 3091 stars, most of which are located in the lower, age-degenerate region of the diagram, and were thus excluded from the analysis, leaving 1210 stars after removing also the horizontal branch and blue stragglers (stars with both $`M_V<0`$ and $`B-V<0.7`$). Our full observational CMD for Draco is shown in the left panel of Figure (6). Mateo (1998) summarizes the metallicity and observational properties as $`[Fe/H]=-2.0\pm 0.15`$, $`M_V=-8.8`$, $`E(B-V)=0.03\pm 0.01`$ and $`(m-M)_0=19.58\pm 0.15`$ (Carney & Seitzer 1986, Lehnert et al. 1992, Nemec 1985 and Grillmair et al. 1998). Using the central values, we produced our observational CMD and inverted it to obtain the $`SFR(t)`$ plotted as the solid curve in the right panel of Figure (6). The dotted curves contain all variations obtained by shifting our observations within the error ranges of the values given by Mateo (1998) for $`E(V-I)`$ and $`(m-M)_V`$.
Our result for Draco is very similar to what we obtained for LeoII, only shifted by 2 Gyr towards younger ages, giving a median age of around 7 Gyr. The time structure of the $`SFR(t)`$ also differs slightly from that of LeoII in that the peak is broader in Draco, having a plateau lasting around 2 Gyr rather than a narrow maximum. The presence of a low level old component extending beyond 10 Gyr is also inferred, although the precise time structure in this region is not well constrained by the method. The values given by the normalization through the total luminosity are in the range of our other galaxies, with the maximum rate being around 110 $`M_{\odot }/\mathrm{Myr}`$. Using Saha's $`W`$ test in this case gives $`208\pm 9`$ for the model-model comparison and $`201\pm 8`$ for the data-model comparison, which gives us confidence in our results, as it shows our $`SFR(t)`$ is compatible with the data at better than a 1 $`\sigma `$ level.
Comparing this result with the recent study of Grillmair et al. (1998), we find the two results to be only marginally consistent, as they report an age of 10-12 Gyr $`\pm 2.5`$ Gyr (for the IMF we assumed) for the bulk of the stellar population of Draco, which they identify as essentially a single age event. We note several differences between their approach and ours, any of which on its own could bring the two results into closer agreement. They use HST data to construct their observational CMD, which is very similar to the one we obtain; no differences are evident at this level. Their analysis however differs markedly. They fit a fiducial sequence to the CMD, assumed to be representative of the bulk of the population, adjusting the MS and lower RGB regions through an inclusion envelope criterion and the bright RGB by eye. This fiducial sequence is then compared to theoretical isochrones (VandenBerg & Bell 1985) through a maximum likelihood analysis designed to find the age of the system, treated as a one-parameter problem. We note that comparing any fiducial sequence to theoretical isochrones will only be a meaningful statistical procedure in cases where the underlying $`SFR(t)`$ is indeed a single epoch burst, e.g. in the case of a globular cluster. Further, defining any such sequence so that it is a valid statistical representation of the underlying $`SFR(t)`$ is a problem that has not been treated yet. Grillmair et al. (1998) also noted that their isochrones showed systematic inconsistencies when compared to the stars they were dating, which also casts some doubt on their results, as they themselves remarked. They also had the difficulty of requiring multiple conversions between their observational bands and those available from theoretical stellar models.
Our results for this galaxy are weakened by the assumption of a single metallicity for the entire population. Although this cannot be rigorously correct, it essentially holds for the previous four galaxies, which show small internal metallicity spreads. Draco, however, has an internal spread of 0.5 dex (Mateo 1998), which could alter our results for this galaxy at the level of 1-3 Gyr. Another possible explanation of the difference between our results and those of Grillmair et al. (1998) is that we used the Padova isochrones (Fagotto et al. 1994, Girardi et al. 1996) rather than those of VandenBerg & Bell (1985). Finally, we note that Carney & Seitzer (1986) sampled a much larger region of Draco using ground based CCD data, and detected multiple turnoffs in this galaxy, corresponding to ages between 8 and 15 Gyr.
## 5 Summary
We have used a homogeneous set of observational colour magnitude data to study the star formation histories of a sample of 5 dSph galaxies, through a non-parametric variational calculus maximum likelihood method. We then performed a detailed statistical analysis to check the accuracy of our results for each galaxy, obtaining good results for three of our galaxies (Carina, Ursa Minor and Draco), and evidence of a systematic difference between our data and results for LeoI and LeoII, probably due to internal metallicity spreads. We can now compare the results we obtained for the different galaxies, with the added consideration of a possible extra 1-3 Gyr error margin in the results for LeoI and LeoII.
Ursa Minor appears to be the only essentially 'Population II' system, being characterized by a uniformly old star formation history. LeoII and Draco show similar star formation histories, both basically characterized by a single major episode. This lasted in both cases around 4 Gyr, centered at 8 Gyr, although Draco shows a low level extension to much older ages. LeoI shows the most complex $`SFR(t)`$, having a small old component of age $`>10`$ Gyr, and two later episodes centered at 8 and 3.5 Gyr, although the star formation activity did not stop altogether between them. It is interesting that the second episode in LeoI coincides in age with the period of star formation in Draco and LeoII. Finally, star formation in Carina highly resembles that in LeoI in the relative amplitude, duration and locations of the two main components.
Since we used the total luminosities of these galaxies to normalize the inferred $`SFR(t)`$ in physical units, we can derive other quantities of interest, for example the supernova rates as a function of time. The SN type II rates are obtained by scaling the total $`SFR(t)`$ by the number of stars more massive than $`8\,M_{\odot }`$ formed per unit mass, which for our assumed IMF translates $`100\,M_{\odot }/\mathrm{Myr}`$ into one SNII per 2 Myr. The only galaxy showing rates greater than $`150\,M_{\odot }/\mathrm{Myr}`$ is Ursa Minor, and it is also the only one with a $`SFR(t)`$ consistent with a single epoch burst. The other four systems, showing extended $`SFR(t)`$'s, always have rates of less than $`150\,M_{\odot }/\mathrm{Myr}`$. This last fact might indicate the presence of a threshold in these systems, above which the energy input from massive stars into the gas component is sufficient to totally disrupt the galaxies' interstellar medium and end star formation. No characteristic timescales of $`\sim 1`$ Gyr are evident from the recovered $`SFR(t)`$'s of these galaxies, with the possible exception of Ursa Minor, suggesting that SN type I are not determinant in driving the star formation processes in these systems.
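As a rough check of this conversion, the sketch below integrates the Kroupa et al. (1993) IMF quoted in Section 3, assuming an upper mass limit of $`100\,M_{\odot }`$ (a value we have assumed, not one stated in the text):

```python
from scipy.integrate import quad

def rho(m):
    """Kroupa et al. (1993) IMF, continuous at the breaks (arbitrary norm)."""
    a1 = 0.5 ** 0.9
    if m < 0.5:
        return m ** -1.3
    if m < 1.0:
        return a1 * m ** -2.2
    return a1 * m ** -2.7

n_massive = quad(rho, 8.0, 100.0)[0]                   # stars above 8 Msun
mass_tot = quad(lambda m: m * rho(m), 0.08, 100.0)[0]  # total mass formed

sfr = 100.0                                            # Msun / Myr
print(sfr * n_massive / mass_tot)  # ~0.5 SNII/Myr, i.e. about 1 per 2 Myr
```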
Figure (7) presents the luminosity weighted sum of the $`SFR(t)`$ for all the galaxies we analyzed. This shows the star formation activity in this set of low metallicity dSph's to have ended by around 2 Gyr ago, having been relatively steady during the period 3-9 Gyr ago. For older ages we see the average $`SFR(t)`$ to be essentially dominated by the old age Ursa Minor galaxy. Our results as summarized by Figure (7) support the calculations presented by Unavane et al. (1996), in that the average metal-poor dSph star is of intermediate age, and not as old as a 'Population II' halo star. A more complete sample would include the Sagittarius dwarf, with a mean age of $`\sim 10`$ Gyr, as well as the much larger Magellanic clouds, having ages of $`<3`$ Gyr. It seems reasonable to suppose that the total star formation history of the satellites of the Milky Way shows no preferred epoch of star formation, as suggested by Tolstoy (1998).
From these comparisons it is clear that the dSph galaxies of the Milky Way do not form a simple system, and straightforward correlations between $`SFR(t)`$ and other present day parameters, such as instantaneous galactocentric distance or metallicity, are not evident, and perhaps not even meaningful. A physical understanding of these systems will probably have to consider their complex interactions with the halo of the Galaxy (tidal forces, an evolving gaseous component, orbital structure etc.). It could well be the case that the present day sample of survivors actually experienced very distinct origins and evolutionary histories (as direct studies of their $`SFR(t)`$ appear to indicate), with little in common other than having been shaped under the dominating influence of the Milky Way. A ram pressure stripped dwarf irregular and a more recently tidally torn fragment of the Magellanic clouds could both end up as dSph systems today.
Much more information is needed on these systems before their full potential as tracers of the build up and evolution of the Milky Way can be realized, once their individual evolutionary histories are better understood. Studies aimed at recovering the full orbital structure of these galaxies, through proper motion measurements and potential theory reconstructions, will yield crucial independent information on the evolution and formation of these systems. A more complete sampling than the one we have conducted here is needed to characterize the local dSph's fully. Any advance in the modeling of advanced stellar phases will also improve the use of CMDs as tools in galactic evolution studies, as it would in principle eliminate the need to remove parts of the CMD from consideration.
Our present sample includes the 5 dSph galaxies having the lowest internal metallicity spreads, and therefore the ones for which our present method applies best. Obtaining a larger sample and analyzing it through a fully consistent, non-parametric statistical method will require the simultaneous recovery of the enrichment history and the $`SFR(t)`$. The development of such a method will be the subject of a future work. We emphasize the need for a homogeneous sample at all levels of the analysis, together with a fully consistent statistical inversion method which does not assume any a priori structure for the $`SFR(t)`$ one is solving for, in comparative studies of star formation histories.
## Acknowledgments
The work of X. Hernandez was partly supported by a DGAPA-UNAM grant.
## Appendix A The images
The images were recovered from the HST data archive. The image numbers, filters and observation dates are given in table A1. Table A2 gives the A-to-D gains, exposure times, and numbers of images for the dSph fields.
### A.1 Retrieving the data and image preparation
For each of the five dSph's, there were a number, $`n`$, of images taken, in both a long band (F814W, corresponding closely to Johnson's I) and a short band (F555W or F606W, corresponding to Johnson's V).
For each dSph, there is a set of long exposure "V" and "I" data files. Each data file contains the output from 4 detectors: the planetary camera (PC) and the three Wide Field cameras (WF2, WF3 and WF4). Each of these detector images is 800$`\times `$800 pixels in size, of which typically 730$`\times `$730 are usable.
Data treatment was carried out in the IRAF environment. Image combination was carried out using the STSDAS package, and photometry using the DAOPHOT package.
In order to remove the severe cosmic ray effects in each image, the $`n`$ images were combined by taking the mean value at each pixel position, after the rejection of values either too high or too low with respect to local variations. The task used here was "crrej" in the STSDAS package. Each image was also cropped to 730$`\times `$730 pixels by rejecting the first 60 rows/columns and the last 10.
In the following analyses, only the data from the three WF cameras were used. The pixel scale in these WF images is 0.10$`\mathrm{"}`$ per pixel.
### A.2 Source extraction
Sources were extracted to 2$`\sigma `$ above the mean background. Point spread function (PSF) fitting photometry was carried out by selecting isolated stars to define a PSF. Sources with a fit at $`\chi ^2>1.5`$ were rejected, and the magnitudes used were aperture magnitudes with a radius of 2 pixels (0.20$`\mathrm{"}`$).
### A.3 Galactic contamination
At faint magnitudes in optical wavebands, external galaxies can constitute a major contaminant in number counts. Williams et al. (1996), based on HDF (Hubble Deep Field) galaxy counts, find $`\sim 2\times 10^5`$ galaxies per square degree brighter than V=26. For the area of the three WF detectors, this corresponds to, on average, 250 contaminating galaxies in the field of view. Compared with the many thousands of stars in the images, this contamination is small.
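This estimate follows directly from the usable detector area; a quick check:

```python
n_gal = 2e5                        # galaxies per square degree to V = 26
pix = 0.10                         # arcsec per WF pixel
area = 3 * (730 * pix) ** 2        # three usable 730x730 WF fields, arcsec^2
print(n_gal * area / 3600.0 ** 2)  # ~250 galaxies in the field of view
```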
Nevertheless, the high resolution of the HST allows the separation of galaxies from stars more reliably than in ground based work, where the image resolution is necessarily lower and distinguishing stars from galaxies is less straightforward.
The cut in $`\chi ^2`$ which we use eliminates the majority of galaxies (and remaining cosmic ray events) because these will in general be fitted poorly by the PSF.
### A.4 Magnitude corrections
#### A.4.1 Aperture Corrections
An aperture correction, to render these magnitudes equivalent to those which would be obtained with a 0.5$`\mathrm{"}`$ radius aperture (see Holtzman et al. 1995), was found by selecting bright, unsaturated stars in each of the "V" and "I" bands, for each detector and each dSph separately. The mean differences between the magnitudes using a 5.02 pixel radius aperture (0.5$`\mathrm{"}`$) and a 2 pixel radius aperture were used to correct all magnitudes.
#### A.4.2 A to D gain correction
All the observations for the dSph's considered here were taken through bay 4 (see Holtzman et al. 1995), which means that the Analogue-to-Digital gain is only 7.0, rather than the standard value of 14.0. Due to some unshared electronics, this necessitates a different correction for each of the WF fields, as indicated below:
| Wide Field | $`\mathrm{\Delta }`$m |
| --- | --- |
| 2 | 0.754 |
| 3 | 0.756 |
| 4 | 0.728 |
#### A.4.3 Geometric correction
The WF cameras have geometric distortions which arise mainly from elements in the optical path. As a consequence, the effective pixel areas, in square angular measure, vary systematically across each WF detector. We make a parameterization of the data from the figures of Holtzman et al. (1995), and apply that as a correction. The correction is well represented by a quadratic function of distance from the centre of the detector, and never exceeds 0.04 magnitudes at the edge of the detectors.
We use $`\mathrm{\Delta }m=1.897\times 10^{-5}\,r+1.208\times 10^{-7}\,r^2`$ where $`r`$ is the distance in pixels from the position (400,400) on the detector.
#### A.4.4 Charge Transfer efficiency correction
The readout of the CCD detectors requires the transfer of charge through successive rows of the detector. As a consequence, the signal from the last rows to be read are diminished because of the loss of charge during transfers. The correction is a maximum of 0.04 magnitudes at the final row.
We use $`\mathrm{\Delta }m=0.04(y/800)`$ where $`y`$ is the row number.
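For reference, the three detector-level corrections of A.4.2-A.4.4 can be collected into a single routine; a minimal sketch in Python, where the overall signs (whether each term is added to or subtracted from the raw magnitude) are our assumption, since the text quotes only the sizes of the corrections:

```python
def detector_corrections(m, chip, x, y):
    """A/D gain (A.4.2), geometric (A.4.3) and CTE (A.4.4) corrections.
    chip is the WF number (2, 3 or 4); (x, y) is the pixel position."""
    gain = {2: 0.754, 3: 0.756, 4: 0.728}[chip]
    r = ((x - 400.0) ** 2 + (y - 400.0) ** 2) ** 0.5
    geom = 1.897e-5 * r + 1.208e-7 * r ** 2   # < ~0.04 mag at the edges
    cte = 0.04 * (y / 800.0)                  # up to 0.04 mag at last row
    return m + gain - geom - cte              # assumed sign convention
```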
#### A.4.5 Corrections for Leo I
WFPC2 data taken before 23rd April 1994 were obtained at a detector temperature of $`-76^{}C`$ rather than $`-88^{}C`$. The change reduced CTE effects and IR zeropoint problems. The Leo I observations were made before this change, and we use a linear ramp of size 0.12 magnitudes to correct for the CTE effects, as recommended by Holtzman et al. (1995), and additionally an additive offset of 0.05 magnitudes in I to correct for the higher zero point.
### A.5 Conversion to standard V and I
Reddening must be taken into account before corrections are made to V and I. We use the published values of reddening, which are fairly small for all these galaxies ($`E(B-V)<0.08`$), to make corrections to the magnitudes before applying the transformations given below.
We use the extinction values tabulated by Holtzman et al. (1995) for the filter F555W and F814W, and an estimate based on these for the F606W filter:
$`A_{F555W}=3.026E(B-V)`$

$`A_{F814W}=1.825E(B-V)`$

$`A_{F606W}=2.75E(B-V)`$ (estimate)
We subsequently apply synthetic transformations given by Holtzman et al. (1995), of the form
Output Band = $`m_{raw}+a_0+a_1(V-I)+a_2(V-I)^2`$
where the coefficients are given in table 4.
$`m_{raw}`$ is the output aperture magnitude using an aperture of 2 pixels radius, corrected as indicated above to an aperture of radius 0.5$`\mathrm{"}`$.
The above formulae were iterated until no further significant change in V or I occurred.
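The iteration itself is straightforward; a minimal sketch, with the coefficients $`(a_0,a_1,a_2)`$ of table 4 passed in as parameters (their values are not reproduced here):

```python
def to_standard_VI(v_raw, i_raw, coeff_v, coeff_i, tol=1e-4, itmax=50):
    """Iterate the synthetic transformations until V and I converge.
    coeff_v, coeff_i: (a0, a1, a2) for the V and I bands from table 4."""
    V, I = v_raw, i_raw                      # start from the raw magnitudes
    for _ in range(itmax):
        col = V - I
        V_new = v_raw + coeff_v[0] + coeff_v[1] * col + coeff_v[2] * col ** 2
        I_new = i_raw + coeff_i[0] + coeff_i[1] * col + coeff_i[2] * col ** 2
        if abs(V_new - V) < tol and abs(I_new - I) < tol:
            return V_new, I_new
        V, I = V_new, I_new
    return V, I
```

Since the color term $`(V-I)`$ is small and the quadratic coefficients are tiny, the iteration converges in a few steps.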
Finally, the reddenings which had been removed are restored using:
$`A_V=3.10E(B-V)`$

and

$`A_I=1.83E(B-V)`$
The reddenings used are refined iteratively after the fitting of isochrones to the MS region. Note, however, that because the reddenings are small and are restored to the data afterwards, even if the initial guess for the reddening is wrong by a substantial amount, very little effect is seen in the final photometry (e.g. a change of the reddening by 0.1 mag changes the photometry by less than 0.005 mag).
### A.6 Systematics
As Holtzman et al. (1995) point out, there remain some aspects of the photometric calibration of WFPC2 which are uncertain. For example, Holtzman et al. (1995) note discrepancies of $`\sim 0.05`$ magnitudes between long and short exposures, which are not understood. Several more minor systematic and random effects at the level of a few percent (corresponding to a few hundredths of a magnitude) are also not well understood. Furthermore, the conversion made here to standard V and I colours introduces further minor uncertainties. It must be noted that systematic effects in the zeropoints may be as large as 0.1 magnitudes. Any such uncertainties appear primarily as an offset in the adopted distance modulus, and would affect most CMDs in the same way. Differentially, any effects should be minimal.
# Do the precise measurements of the Casimir force agree with the expectations?
## Abstract
An upper limit on the Casimir force is found using the dielectric functions of perfect crystalline materials, which depend only on well defined material constants. The force measured with the atomic force microscope is larger than this limit at small separations between the bodies, and the discrepancy is significant. A simple modification of the experiment is proposed that would make its results more reliable and answer the question of whether the discrepancy is related to the existence of a new force.
The Casimir force between closely spaced macroscopic bodies is an effect of quantum electrodynamics (QED) and for that reason can be predicted very accurately. In the rigorous Lifshitz theory the force is determined by the optical properties of the materials used. Knowledge of these properties is the weakest element in the theory, restricting the accuracy that can be achieved. Though the measurement of the Casimir force is not the best way to test QED, such experiments are of great importance because they are sensitive to the presence of new fundamental forces predicted in many modern theories (see, for example, and references therein). To distinguish a new force from the background, we should be able to calculate the Casimir force with a precision better than the experimental one. In a series of recent experiments this force has been measured with a torsion pendulum (TP) in the range of distances $`0.6-6\mu m`$ and with an atomic force microscope (AFM) in the range $`0.1-0.9\mu m`$. The corresponding precisions were 5% and 1%, respectively.
The force per unit area between parallel plates arising as a result of electromagnetic fluctuations at nonzero temperature $`T`$ is given by the Lifshitz theory , where the plate material is taken into account via its dielectric function at imaginary frequencies $`\epsilon \left(i\zeta \right)`$:
$$F^{pl}(a)=\frac{kT}{\pi c^3}\sum _{n=0}^{\infty }{}^{\prime }\zeta _n^3\int _1^{\infty }dp\,p^2\left\{\left[G_1^2e^{2p\zeta _na/c}-1\right]^{-1}+\left[G_2^2e^{2p\zeta _na/c}-1\right]^{-1}\right\}.$$
(1)
Here the prime means that the $`n=0`$ term is taken with the coefficient $`1/2`$, $`a`$ is the distance between the bodies, and
$$G_1=\frac{p+s}{p-s},\hspace{1em}G_2=\frac{\epsilon \left(i\zeta _n\right)p+s}{\epsilon \left(i\zeta _n\right)p-s},$$

$$s=\sqrt{\epsilon \left(i\zeta _n\right)-1+p^2},\hspace{1em}\zeta _n=2\pi nkT/\hbar .$$
(2)
The Casimir result $`F_c^{pl}\left(a\right)=\pi ^2\hbar c/240a^4`$ is reproduced from (1) in the limits $`\epsilon \rightarrow \infty `$ and $`T\rightarrow 0`$. The function $`\epsilon \left(i\zeta _n\right)`$ cannot be measured directly but can be expressed via the imaginary part of the dielectric function on the real axis with the help of the dispersion relation
$$\epsilon \left(i\zeta \right)-1=\frac{2}{\pi }\int _0^{\infty }d\omega \,\frac{\omega \,\mathrm{Im}\,\epsilon \left(\omega \right)}{\omega ^2+\zeta ^2}.$$
(3)
Information on $`Im\epsilon \left(\omega \right)`$ can be extracted from the data on reflectivity and absorptivity of electromagnetic waves for a given material.
In the experiments the force is measured between a metallized disc and a sphere, because for two plates it is difficult to keep them parallel. For this configuration (1) has to be modified with the help of the proximity force theorem (PFT), which is valid for $`R\gg a`$, where $`R`$ is the radius of curvature of the spherical surface. Applying the PFT to (1), one can find the force between sphere and plate as $`2\pi R\int _a^{\infty }F^{pl}(a^{\prime })da^{\prime }`$. The integration gives
$$F(a)=-\frac{kTR}{c^2}\sum _{n=0}^{\infty }{}^{\prime }\zeta _n^2\int _1^{\infty }dp\,p\,\mathrm{ln}\left[\left(1-G_1^{-2}e^{-2p\zeta _na/c}\right)\left(1-G_2^{-2}e^{-2p\zeta _na/c}\right)\right].$$
(4)
This expression differs from those used in and in two respects. First, in the cited papers the integration connected with the PFT was not done analytically, which complicated the numerical analysis. Second, the zero temperature limit was taken, in which one can replace the sum over $`n`$ in (4) by an integral over $`\zeta `$. This limit was also considered in , though there the PFT integral was evaluated explicitly. It seems a reasonable approximation at small separations, because the temperature correction is proportional to $`(kTa/\hbar c)^3`$ and is small. However, one should remember that this correction has been found in the limit of an ideal conductor $`\epsilon \rightarrow \infty `$. For a real conductor it can behave as $`kTa/\hbar c`$ and be important. We have computed the force according to (4) and with the integral instead of the sum, and found that the difference at the smallest distances tested in the AFM experiments exceeds $`4pN`$, in contrast with the conservative estimate for the experimental errors of $`2pN`$ .
In the AFM experiments an additional $`Au_{0.6}Pd_{0.4}`$ layer of $`20nm`$ or $`8nm`$ thickness was deposited on top of the $`Al`$ metallization of the bodies to prevent aluminum oxidation. It has to be included in the analysis. This layer is transparent for electromagnetic waves with high frequencies $`\omega \sim c/a`$, since the absorption is proportional to $`Im\epsilon \left(\omega \right)`$, which is small for $`\omega \sim c/a`$, and for this reason the layer was ignored in . However, the force depends on $`\epsilon (i\zeta )`$, for which the low frequencies dominate in (3) because of the large $`Im\epsilon \left(\omega \right)`$ there, and that is why we cannot neglect the $`Au/Pd`$ layer. To take it into account, one has to generalize the expression for the force (1) to the case of layered bodies. Suppose that the top layer has thickness $`h`$ and dielectric function $`\epsilon _1`$. The bottom layer is thick enough to be considered infinite; let its dielectric function be $`\epsilon _2`$. The method described in for deriving Eq.(1) can easily be generalized to layered plates. We only have to add the matching conditions for the Green functions at the layer interface. The result looks exactly like (1) but with more complex $`G_{1,2}`$:
$$G_1=\frac{\left(s_1+s_2\right)\left(p+s_1\right)e^{\zeta _ns_1h/c}+\left(s_1-s_2\right)\left(p-s_1\right)e^{-\zeta _ns_1h/c}}{\left(s_1+s_2\right)\left(p-s_1\right)e^{\zeta _ns_1h/c}+\left(s_1-s_2\right)\left(p+s_1\right)e^{-\zeta _ns_1h/c}},$$

$$G_2=\frac{\left(\epsilon _2s_1+\epsilon _1s_2\right)\left(\epsilon _1p+s_1\right)e^{\zeta _ns_1h/c}+\left(\epsilon _2s_1-\epsilon _1s_2\right)\left(\epsilon _1p-s_1\right)e^{-\zeta _ns_1h/c}}{\left(\epsilon _2s_1+\epsilon _1s_2\right)\left(\epsilon _1p-s_1\right)e^{\zeta _ns_1h/c}+\left(\epsilon _2s_1-\epsilon _1s_2\right)\left(\epsilon _1p+s_1\right)e^{-\zeta _ns_1h/c}},$$
(5)
where $`s_{1,2}`$ are defined similarly to $`s`$ in (2). The force between plate and sphere is given by (4) with the above $`G_{1,2}`$. Qualitatively, the effect of the top layer will be negligible if $`h\omega _{1p}/c\ll 1`$, where $`\omega _{1p}`$ is the plasma frequency of this layer. For typical plasma frequencies $`10^{16}s^{-1}`$ it is definitely not the case, even for $`h=8nm`$. The force between layered bodies was also found in with a somewhat different technique, but it was not used there for actual calculations.
Now we are able to evaluate the Casimir force in the real geometry of the experiments if there is information on the dielectric functions of the materials used: $`Au`$, $`Al`$, and the $`Au_{0.6}Pd_{0.4}`$ alloy. Strictly speaking, one has to measure these functions over a wide range of wavelengths on the very samples used for the force measurement. This was not done in any of the experiments, and to draw conclusions from them we have to make some assumptions about $`\epsilon \left(\omega \right)`$. At low frequencies $`Au`$ and $`Al`$ are well described by the Drude theory, where
$$\epsilon \left(\omega \right)=1-\frac{\omega _p^2}{\omega \left(\omega +i\omega _\tau \right)}.$$
(6)
Here $`\omega _p`$ is the free electron plasma frequency and $`\omega _\tau `$ is the Drude damping frequency. A simple test of the validity of (6) is the behavior of the material resistivity, defined as
$$\rho \left(\omega \right)=\mathrm{Im}\frac{1}{\epsilon _0\left(1-\epsilon \left(\omega \right)\right)\omega }=\frac{\omega _\tau }{\epsilon _0\omega _p^2},$$
(7)
where $`\epsilon _0`$ is the free space permittivity. The resistivity is frequency independent within the Drude approximation. For crystalline samples of $`Au`$ and $`Al`$ (entries 2 in Table 1) the frequency behavior of the resistivity and of $`Im\epsilon \left(\omega \right)`$ is shown in Fig.1. The data on the dielectric functions were taken from , where data from many original works are collected. Palladium definitely cannot be described by (6) at any frequency. However, it is known experimentally that amorphous metallic alloys such as $`Au/Pd`$ can be described by the Drude approximation . The physical explanation is associated with the large Drude damping of compounds like $`Au_{0.6}Pd_{0.4}`$. Eq.(7) allows one to use the well defined static resistivity $`\rho \left(0\right)=\rho _0`$ instead of the damping frequency $`\omega _\tau `$.
Of course, at higher frequencies, when interband transitions are reached, the Drude approximation fails. Nevertheless, it is very useful, since low frequencies dominate in the dispersion relation. Extrapolation of (6) to high frequencies gives
$$\epsilon \left(i\zeta \right)=1+\frac{\omega _p^2}{\zeta \left(\zeta +\omega _\tau \right)}.$$
(8)
The relative error introduced in (8) by the extrapolation can be estimated as $`\omega _\tau /\omega _0`$, where $`\omega _0`$ is the frequency of the first resonance for the given metal. The error can be as large as 10%, but it does not significantly influence the force. If we use (8) for the force computation and change $`\omega _p`$ by 5% (a 10% correction to $`\epsilon \left(i\zeta \right)`$ at all frequencies), the force changes by less than 2%. Moreover, since the interband transitions give a correction to (8) which is frequency dependent, the actual effect on the force is reduced further below the experimental uncertainties. The possibility to neglect the interband transitions in $`Al`$ for the force evaluation was noted in . It agrees with our estimate and with a direct computation using the handbook data for $`Im\epsilon \left(\omega \right)`$. Therefore, in all cases of interest we can use Eq.(8) to describe the dielectric function of a material on the imaginary axis. Since the integral in (3) is saturated in the low frequency region, we should extract the parameters $`\omega _p`$ and $`\omega _\tau `$ from the data for the real and imaginary parts of $`\epsilon \left(\omega \right)`$ by fitting them in the infrared region with (6).
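To make the procedure concrete, the following Python sketch evaluates (8) and the sphere-plate force (4) for two non-layered $`Au`$ bodies; the sphere radius used below is an assumed illustrative value, and the $`n=0`$ term of the sum is omitted in this sketch:

```python
import numpy as np
from scipy.integrate import quad

hbar, c, kB = 1.0546e-34, 2.998e8, 1.381e-23

def eps_drude(z, wp, wt):
    """Eq.(8): Drude dielectric function on the imaginary axis."""
    return 1.0 + wp ** 2 / (z * (z + wt))

def force_sphere_plate(a, R, T, wp, wt, nmax=500):
    """Sphere-plate Casimir force from Eq.(4), identical bodies."""
    total = 0.0
    for n in range(1, nmax + 1):
        zn = 2.0 * np.pi * n * kB * T / hbar
        if zn * a / c > 30.0:          # remaining terms are negligible
            break
        eps = eps_drude(zn, wp, wt)

        def integrand(p):
            s = np.sqrt(eps - 1.0 + p * p)
            G1 = (p + s) / (p - s)
            G2 = (eps * p + s) / (eps * p - s)
            x = np.exp(-2.0 * p * zn * a / c)
            return p * np.log((1.0 - x / G1 ** 2) * (1.0 - x / G2 ** 2))

        val, _ = quad(integrand, 1.0, 1.0 + 30.0 * c / (2.0 * zn * a))
        total += zn ** 2 * val
    return -kB * T * R / c ** 2 * total

# Au: wp = 1.37e16 1/s and wt from Eq.(7) with rho0 = 2.25 microOhm cm;
# sphere radius R = 100 um (assumed), separation a = 1 um, T = 300 K
wt_Au = 2.25e-8 * 8.854e-12 * (1.37e16) ** 2
print(force_sphere_plate(1.0e-6, 1.0e-4, 300.0, 1.37e16, wt_Au))
```

The layered case only replaces $`G_{1,2}`$ by the expressions in (5); the structure of the computation is unchanged.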
It is important that the optical properties of evaporated (sputtered) films can be quite different from those of the bulk material and depend on the technological details. It is known, for example, that the film density is typically 0.7 of that of the bulk material if the film was not annealed. For the resistivity of sputtered and evaporated $`Au`$ films the value $`\rho _0=8.2\mu \mathrm{\Omega }cm`$ has been reported , in contrast with the bulk resistivity $`2.25\mu \mathrm{\Omega }cm`$. All this makes it impossible to use the handbook data for a reliable calculation of the Casimir force. This conclusion is illustrated by Table 1, where the parameters for $`Al`$ and $`Au`$ found by fitting the data from are presented.
Though we cannot use handbook data to evaluate the force, one can bound it from above for a given experiment. This statement is based on the observation that, because of better reflectivity, the force (4) increases whenever $`\omega _p`$ increases or $`\rho _0`$ decreases. For us it is important that any technological procedure will reduce $`\omega _p`$ and increase the resistivity $`\rho _0`$. The perfect crystalline material will have the largest plasma frequency and the smallest resistivity, and these parameters are well defined. The plasma frequency $`\omega _p`$ is defined by the concentration of free electrons in the metal $`n`$ and their effective mass $`m_e^{}`$
$$\omega _p=\sqrt{\frac{e^2n}{m_e^{}\epsilon _0}}.$$
(9)
Gold is a good conductor and $`m_e^{}`$ is quite close to the mass of the electron. We find the upper limit on the electron concentration if we suppose that every $`Au`$ atom produces one free electron. Then for the $`Au`$ plasma frequency one finds $`\omega _p^{Au}=1.37\times 10^{16}s^{-1}`$. The static resistivity can be used to get the damping frequency $`\omega _\tau `$ with the help of (7) at a given $`\omega _p`$. For crystalline gold it is $`\rho _0^{Au}=2.25\mu \mathrm{\Omega }cm`$. One can compare these parameters with those given in Table 1 to make sure that they correspond to the limiting values. In the TP experiment the bodies were covered with $`Au`$ of thickness $`0.5\mu m`$, which is thick enough to be considered infinite. Substituting the $`Au`$ parameters into (8) and calculating the force according to (4), one finds the upper limit on the Casimir force in the TP experiment. The residual force $`F^{exp}\left(a_i\right)-F^{lim}(a_i)`$ is shown in Fig.2a, where $`F^{exp}\left(a_i\right)`$ are the experimental points at separations $`a_i`$. The prediction obviously does not contradict the experiment, but dealing with an upper limit we cannot conclude that there is agreement, either.
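The plasma-frequency estimate from (9) is easily reproduced; a minimal sketch, using the tabulated bulk density of gold ($`19.3\,g/cm^3`$) and one free electron per atom:

```python
import numpy as np

e, me, eps0, NA = 1.602e-19, 9.109e-31, 8.854e-12, 6.022e23

def plasma_frequency(density_g_cm3, atomic_mass, electrons_per_atom):
    """Eq.(9), with n computed from the bulk density; free electron mass."""
    n = density_g_cm3 * 1e6 / atomic_mass * NA * electrons_per_atom  # m^-3
    return np.sqrt(e ** 2 * n / (me * eps0))

print(plasma_frequency(19.3, 196.97, 1))   # Au: ~1.37e16 s^-1
```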
For the AFM experiments the upper limit on the Casimir force is more restrictive. The plasma frequency for $`Al`$ can be restricted using (9) if one supposes that every atom produces 3 free electrons. It gives $`\omega _p^{Al}=2.40\times 10^{16}s^{-1}`$, which coincides with the largest value in Table 1. The resistivity of the perfect crystal is $`\rho _0^{Al}=2.65\mu \mathrm{\Omega }cm`$. Since we successfully predicted the plasma frequencies for the best samples of $`Au`$ and $`Al`$, the same approach can be used to estimate $`\omega _p`$ for $`Au/Pd`$. If each $`Au`$ atom gives one and each $`Pd`$ atom gives not more than two free electrons, then $`\omega _p^{Au/Pd}=1.69\times 10^{16}s^{-1}`$. This alloy is used in microelectronics, and the resistivity of the bulk material is known to be $`\rho _0^{Au/Pd}\sim 30\mu \mathrm{\Omega }cm`$, in accordance with the statement that alloys have large Drude damping. These data allow us to find the upper limit on the force using (4) with the functions $`G_{1,2}`$ defined in (5). The real surface of the bodies is always distorted. The distortion statistics were analysed with an atomic force microscope . The force has to be averaged over the distorted surfaces, and for this we use the procedure developed in . This procedure seems quite reliable. Moreover, the important progress in controlled metal evaporation has allowed the surface roughness to be reduced to a level where the correction to the force becomes practically unimportant for the experiment .
It was indicated that the thickness of the $`Au/Pd`$ layer is less than $`20nm`$; that is why the conservative value $`h=15nm`$ was chosen for the calculations. The top layer changes the force by $`13pN`$ at the smallest separation. Variation of $`\omega _p^{Al}`$ by 10% gives only a $`1pN`$ change in the force because of the screening effect of the top layer. The same variation in $`\omega _p^{Au/Pd}`$ changes the force by $`2pN`$. A variation of the resistivity of the $`Au/Pd`$ layer by 30% gives a $`1pN`$ effect. At larger separations all these effects become smaller. All this means that the limit is stable with respect to variation of the parameters. It is also clear that the top layer definitely cannot be ignored in the force evaluation. The residual force $`F^{exp}\left(a\right)-F^{lim}(a)`$ with the experimental points from is shown in Fig.2b by triangles.
In , the assumption of absolute transparency of the $`Au/Pd`$ layer was used not only for the theoretical interpretation of the result but also in the procedure of extracting the force from the raw data. For this reason we cannot use the points for the force directly. Fortunately, it is easy to restore the correct data by shifting all the points to larger separations by $`2h=16nm`$. The result for the residual force $`F^{exp}\left(a\right)-F^{lim}(a)`$ with the shifted experimental points from is presented in Fig.2b by open squares. This figure clearly indicates the presence of some unexplained attractive force which decreases rapidly as the distance between the bodies increases. One can speculate that the observed discrepancy is explained by a new Yukawa force mediated by a light scalar boson, but we do not discuss here the restrictions on the Yukawa parameters, which will be given elsewhere.
To make the experiment absolutely clear, it is preferable to use $`Au`$ instead of $`Al`$ metallization, because its non-reactive surface has a strong advantage over $`Al`$. It also excludes the additional uncertainties connected with the $`Au/Pd`$ layer. One can use silver or copper as well, but they are not as inert as gold. In practice it is difficult to measure the dielectric function at wavelengths larger than $`30\mu m`$, but this range gives an important contribution to the dispersion relation. That is why the material behavior in this range has to be predictable. One can say definitely that the materials of the platinum group cannot be used, since they are not described by (6) at low frequencies. An additional advantage of $`Au`$ metallization is the higher density of the body coating. In this case the hypothetical Yukawa force will be roughly $`\left(\rho _{Au}/\rho _{Al}\right)^2\approx 50`$ times larger. If the observed discrepancy has any relation with the Yukawa interaction, the AFM experiment with $`Au`$ metallization of the bodies will definitely reveal this new force, even without detailed knowledge of the optical properties of the metallization.
In conclusion, we have found an upper limit on the Casimir force that is realized for perfect crystalline coating of the bodies, for which the electrical and optical properties are well defined. This limit is smaller than the observed force in the AFM experiments, and the difference far exceeds the experimental errors and theoretical uncertainties at small separations between the bodies. $`Au`$ metallization of the bodies in the AFM experiment would allow one to reveal the origin of the discrepancy.
Figure captions
Figure 1. Validity of the Drude approximation for $`Al`$ (triangles) and $`Au`$ (circles) in the infrared range. The resistivity does not depend on frequency (left axis). Solid lines (right axis) demonstrate that $`Im\epsilon \left(\omega \right)`$ depends on $`\omega `$ according to (6) with the parameters given in Table 1 (entries 2).
Figure 2. The residual force $`F^{exp}\left(a_i\right)-F^{lim}(a_i)`$ for different experiments: (a) TP experiment ; (b) AFM experiments with the data from (triangles) and from (open squares).
# Interpretation of a microwave induced current step in a single intrinsic Josephson junction on a Bi-2223 thin film
## 1 INTRODUCTION
It is well established by now that the electronic $`c`$-axis transport in the superconducting state of high-$`T_c`$ superconductors (HTSC) like $`(Bi,Pb)_2Sr_2Ca_2Cu_3O_{10+x}`$ (Bi-2223) is determined by an intrinsic Josephson effect between the superconducting $`CuO_2`$ layers . Recently, the microwave properties of these materials have attracted considerable attention for future electronic applications like high frequency oscillators and high-speed digital devices. In this context microwave phase-locking , the emission of Cherenkov radiation and collective fluxon motion have been reported. However, due to the strong (inductive) coupling of the dynamics in different intrinsic Josephson junctions (ITJJ), a detailed analysis is complex. Therefore the study of recently fabricated samples consisting of a single junction is interesting, as it rules out the influence of interactions and heating effects.
## 2 EXPERIMENTAL RESULTS
We have prepared Bi-2223 thin films on a MgO(100) substrate, using a Pb dopant as a stabilizer of the 2223 phase. Details of the preparation technique and the experimental setup can be found elsewhere . SEM, AFM and TEM analysis reveals that the prepared films were composed of crystal grains with a size of $`4\mu \mathrm{m}\times 4\mu \mathrm{m}`$ and a roughness of about $`1.8\mathrm{nm}`$, which is one half of the $`c`$-axis lattice constant. The mesa-type stack structures with an area of $`2\mu \mathrm{m}\times 2\mu \mathrm{m}`$ were fabricated using standard photolithography and $`Ar`$-ion milling techniques. The $`I`$-$`V`$-curves of the stacks show 1-3 branches, which is consistent with the height of the stacks as expected from the etching rate of $`10\mathrm{n}\mathrm{m}/\mathrm{min}`$.
Typical parameters of the junctions are as follows: critical temperature $`T_c\approx 100K`$, critical current density $`j_c\approx 4\times 10^3\mathrm{A}/\mathrm{cm}^2`$. In contrast to stacks with many ITJJs, it is possible to extract the normal junction resistance $`R_n\approx 32\mathrm{\Omega }`$, the $`I_cR_n\approx 5.1\mathrm{mV}`$ product and the value $`\mathrm{\Delta }_0\approx 37.5\mathrm{mV}`$ of the superconducting gap directly. Due to the nonlinearity of the $`I`$-$`V`$-curve, the characteristic frequency $`f_c`$ and the McCumber parameter $`\beta _c`$ are, strictly speaking, not well defined material parameters, but they can be roughly estimated from $`R_n`$ ($`f_c=2eI_cR_n/h\approx 2.5\mathrm{THz}`$) or from the return current $`I_r`$ (from the resistive to the superconducting state): $`\beta _c=(4I_c/\pi I_r)^2\approx 16.6`$ . The contact resistance of the Au/Bi-2223 interface was evaluated from the slope of the superconducting branch at $`4.2\mathrm{K}`$ as $`8\times 10^{-8}\mathrm{\Omega }\mathrm{cm}^2`$. This value for the contact resistance is several orders of magnitude smaller than in previous experiments and essentially eliminates possible nonequilibrium effects due to heating.
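These estimates follow directly from the quoted numbers; a short check in Python, where the ratio $`I_c/I_r\approx 3.2`$ is back-solved from the quoted $`\beta _c`$ rather than taken from the raw data:

```python
e, h = 1.602e-19, 6.626e-34

f_c = 2 * e * 5.1e-3 / h                 # from IcRn = 5.1 mV: ~2.5e12 Hz
Ic_over_Ir = 3.2                         # assumed; back-solved from beta_c
beta_c = (4 * Ic_over_Ir / 3.141592653589793) ** 2
print(f_c, beta_c)                       # ~2.5 THz and ~16.6
```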
Figure 1 shows the $`I`$-$`V`$-characteristic at 4.2 K of the thinnest fabricated stack with a single resistive branch corresponding to a single unit cell. The $`I`$-$`V`$-curve can be well reproduced theoretically assuming a $`d`$-wave order parameter $`\mathrm{\Delta }(\theta )=\mathrm{\Delta }_0\mathrm{sin}(2\theta )`$ and a parallel resistance of $`7.8\mathrm{\Omega }`$.
We have also studied the behaviour of the single ITJJ under external microwave radiation at frequencies up to $`27\mathrm{GHz}`$, which is two orders of magnitude smaller than the characteristic frequency $`f_c\approx 2.5\mathrm{THz}`$. In this case the maximum current $`I_{s,\mathrm{max}}`$ of the superconducting branch decreased monotonically with increasing microwave power .
In addition, we observed a pronounced current step structure in the $`I`$-$`V`$-curve at a certain voltage $`V_m`$. For amplitudes that suppress the superconducting branch completely, this structure appears in the lower millivolt range. Figure 2 shows that with increasing power $`P`$ of the microwave irradiation the step structure shifts to higher voltages, $`V_m\propto \sqrt{P}`$, while the height $`I_m`$ of the step remains practically unchanged.
## 3 DISCUSSION
Shapiro steps can be ruled out as an explanation of this phenomenon, because they are expected at voltages $`V_{\mathrm{sh},n}=n\hbar \omega _{\mathrm{rf}}/2e`$, i.e. at integer multiples of the voltage $`\hbar \omega _{\mathrm{rf}}/2e\approx 34\mu V`$. As this voltage spacing is two orders of magnitude smaller than $`V_m`$, this interpretation becomes very unlikely, as it would predict a series of high-order Shapiro steps with rather low amplitude instead of one pronounced step.
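The scale of the would-be Shapiro spacing is fixed by the Josephson relation; a quick check, where the drive frequency is our inference from the quoted $`34\mu V`$ (the experiment used frequencies up to $`27\mathrm{GHz}`$):

```python
h, e = 6.626e-34, 1.602e-19
f_rf = 16.4e9                        # Hz; inferred from the quoted 34 uV
print(h * f_rf / (2 * e) * 1e6)      # first Shapiro step in microvolts
```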
On the other hand, the lateral size $`L_{ab}\approx 2\mu m`$ of the stack is still larger than the typical size $`2\lambda _J\approx 0.6\mu \mathrm{m}`$ of a Josephson vortex, which allows for fluxon motion parallel to the layers. It is thus reasonable to associate the observed step structure with some kind of collective vortex flow.
A complete model of the phase dynamics under the influence of external microwave irradiation would include a detailed discussion of the (unknown) boundary conditions on the surface of the sample, in order to determine the magnitude of the induced electric and magnetic fields both parallel and perpendicular to the superconducting layers. Various pinning mechanisms should in principle also be taken into account.
For the reproduction of the experimental features presented above it will be sufficient to consider the effect of ac-magnetic fields $`H_{\mathrm{ac}}(t)=H_{\mathrm{ac0}}\mathrm{sin}(\omega _{\mathrm{rf}}t)`$. Note that the influence of an external magnetic field $`H_{\mathrm{ac}}`$ is formally equivalent to currents injected parallel to the layers. It also turns out in the simulation that an externally applied oscillating $`c`$-axis current $`I_{\mathrm{ac}}(t)`$ is unable to reproduce the experimental data correctly. The nonlinearity of the quasiparticle current is modelled as in .
Figure 3 compares the experimental step structure and the theoretical simulation using the parameters given above. Both the dependence of the critical current $`I_c`$ and that of the voltage $`V_m`$ and the height $`I_m`$ of the step on the external microwave power $`P\propto H_{\mathrm{ac0}}^2`$ can thereby be successfully reproduced.
As the typical frequencies used here are much smaller than the plasma frequency, the external microwaves have effects similar to static external fields, and consequently the results do not depend on the exact value of the oscillation frequency.
As a consequence, the behaviour of the observed structure can be understood in terms of well known features of the flux-flow step in high magnetic fields : $`V_m=BsL_{ab}`$ ($`s`$: distance of the superconducting layers) and $`I_m/I_c=c_s/(L_{ab}f_c)\approx 0.06`$. Physically, the step structure occurs when the Josephson vortices created by the external field approach their limiting Swihart velocity $`\overline{c}\approx 2.5\times 10^5\mathrm{m}/\mathrm{s}`$ .
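As a consistency check of the flux-flow interpretation (the velocity and geometry values quoted above are order-of-magnitude inputs):

```python
c_s = 2.5e5                 # m/s, limiting (Swihart) velocity
L_ab, f_c = 2e-6, 2.5e12    # m, Hz
print(c_s / (L_ab * f_c))   # ~0.05, the scale of the quoted I_m/I_c
```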
The numerical solution of the sine-Gordon equation allows us to discuss the supercurrent distribution $`I_c\mathrm{sin}\mathrm{\Phi }`$ in the regions A and B marked in Fig. 3. Open/closed circles in Fig. 4 represent the centers of vortices/anti-vortices, respectively, and the plots (a)-(e) are taken at different (not equidistant) time steps. In contrast to the static case, the direction of the moving fluxons changes with the external frequency $`\omega _{rf}`$, the polarity following the (alternating) direction of the external field $`H_{ac}(t)`$.
In region A (cf. Fig. 4) the phase increases linearly, $`\mathrm{\Phi }(x,t)\approx \mathrm{\Phi }_0+kx`$, which corresponds to a very homogeneous field distribution in the stack. In region B, on the other hand, more pronounced kinks in the phase can be found, which correspond to a loose array of vortices with periodically oscillating relative distances between the fluxons.
## 4 CONCLUSIONS
The successful fabrication of single intrinsic junction stacks on Bi-2223 thin films with low contact resistance and $`I_cR_n5.1\mathrm{mV}`$ has been reported. Due to this fact we were able to study the properties of a single junction without interference with different junctions in the stack and eliminating the influence of heating. Under microwave irradiation, we observed a pronounced step in the $`I`$-$`V`$-characteristic of the single ITJJ in the lower millivolt range well below the superconducting gap edge. Its voltage position changes linearly as a function of the square root of the irradiated microwave power, while the step current remains constant. This behaviour could be qualitatively reproduced by numerical simulations in an external ac-magnetic field parallel to the layers, which show a collective motion of vortices in alternating directions.
The authors would like to thank Dr. K. Mizuno for valuable discussions. One of us (C.H.) gratefully acknowledges the hospitality of the Advanced Technology Research Laboratories in Kyoto and financial support by JISTEC, the Studienstiftung des Deutschen Volkes and the Department of Energy under contract W-7405-ENG-36. |
# Chiral condensate in the quenched Schwinger model
## I Introduction
Use of quenched QCD as an approximation to the full theory depends upon a good understanding of the regions of parameter space where the quenched theory differs in important ways from the full theory. For the case of the chiral condensate $`\overline{\psi }\psi `$, there may be qualitatively different behaviors for sufficiently small quark mass. Whereas the condensate is expected to be finite in the full theory, there are theoretical arguments and some initial numerical indications that it diverges in quenched QCD. Also numerical analysis of an instanton gas model shows a divergence. A careful study, using a lattice Dirac operator that obeys chiral symmetry, to determine the mass range in which quenched QCD is a good approximation to the full theory has not been done.
Similarly for the Schwinger model, the full theory has a finite condensate but there are predictions that it diverges in the quenched theory. Thus the Schwinger model can be used to investigate the two-dimensional versions of these questions concerning anomalies, topology, chiral symmetry, and condensates and their impact on the relationship between full and quenched theories. Although the Schwinger model is in most ways much simpler than QCD, it does present some peculiar difficulties of its own. Strong infrared effects, which are dynamical and nonperturbative in QCD, are already kinematical in the lower dimension of the Schwinger model. There is the possibility that infrared enhancement in the quenched case gives a fermion spectral density that is divergent as the eigenvalue $`\lambda `$ goes to zero and that there is a corresponding infinite condensate $`<\overline{\psi }\psi >`$. We have investigated this issue numerically using the overlap Dirac operator to describe the massless limit for the fermions and have found strong evidence for these divergences in the quenched Schwinger model.
When stated in terms of the low eigenvalue behavior of the fermion spectral density in the quenched Schwinger model, theoretical discussions have given a lower bound that is finite and a stronger one that is divergent. Some estimates that are not bounds have suggested a form diverging exponentially in the volume, $`e^{cg^2V}`$. The discussions in Refs. are given in terms of the eigenvalue shifts of the would-be-zero modes associated with subregions of the whole two-dimensional volume $`V`$. With larger shifts, due to interactions with other subareas, the spectrum is flat at small $`\lambda `$ in the infinite volume limit. Smaller shifts leave the would-be-zero modes concentrated near the origin, so that the spectral density there diverges as $`V\rightarrow \infty `$. Other arguments for the form of the divergence proceed along different lines, but the implication for the spectrum is that the lowest eigenvalues are exponentially small in the volume, with a correspondingly exponentially large spectral density and condensate.
The data that we present here covers a range of lattice sizes from $`8^2`$ to $`32^2`$. The full spectrum of the overlap Dirac operator was calculated in gauge backgrounds from the Wilson action at several bare couplings. The next section discusses the lattice formalism used in this paper. The third section discusses the physics issues in more detail. The fourth section gives our numerical results. In addition to the spectrum itself, there are measures of its behavior including $`<\overline{\psi }\psi >`$ and the distribution of the lowest eigenvalue. The last section contains a summary of our results and some concluding discussion.
## II Lattice formalism
Since we are interested in studying the small mass region and the massless limit, we need to work with a lattice Dirac operator that respects chiral symmetry. We will use the overlap Dirac operator for our numerical study. It has the form
$$D=\frac{1}{2}\left[1+m+(1-m)\gamma _5\epsilon (H_w)\right]$$
(1)
with $`H_w`$ being the hermitian Wilson Dirac operator in the supercritical region and $`0\le m\le 1`$ the bare fermion mass. The hermitian overlap Dirac operator $`H=\gamma _5D`$ has paired non-zero eigenvalues. The topological zero modes are chiral and have partners with unit eigenvalue and opposite chirality. In a fixed gauge field background,
$$<\overline{\psi }\psi >=\frac{|Q|}{mV}+\frac{1}{V}\sum _{\lambda >0}\frac{2m(1-\lambda ^2)}{\lambda ^2(1-m^2)+m^2}.$$
(2)
The sum is over all positive non-zero eigenvalues of $`H`$, $`Q`$ is the global topological charge, and $`V`$ is the lattice volume.
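For a single gauge configuration, (2) is a direct sum over the computed spectrum; a minimal sketch:

```python
import numpy as np

def chiral_condensate(m, lam, Q, V):
    """Eq.(2) in a fixed background: lam holds the positive non-zero
    eigenvalues of the hermitian overlap operator H, Q the topological
    charge and V the lattice volume."""
    lam = np.asarray(lam)
    zero_modes = abs(Q) / (m * V)
    bulk = np.sum(2 * m * (1 - lam ** 2) /
                  (lam ** 2 * (1 - m ** 2) + m ** 2)) / V
    return zero_modes + bulk
```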
In the numerical calculation, we generate gauge fields distributed according to the Wilson gauge action
$$S_g=-\frac{1}{g^2}\sum _p\mathrm{Re}\,U_p$$
(3)
with $`U_p`$ the product of U(1) link elements around a fundamental plaquette and $`g`$ the lattice coupling constant. The fermions have periodic boundary conditions <sup>*</sup><sup>*</sup>* This choice of boundary conditions is not as restrictive as it seems since we only have one fermion. A gauge field configuration can be multiplied by an arbitrary constant U(1) field on each link in either of the directions without changing the gauge action. Since all of these possibilities are included in the sum over gauge field configurations, there is no real distinction between periodic and anti-periodic boundary conditions. More generally all boundary conditions that are periodic up to a phase are equivalent. . For each choice of $`g`$ and $`L`$, we diagonalize $`H_w`$ in a fixed gauge field background and form $`H`$ by first forming $`ฯต(H_w)`$. We then diagonalize $`H^2`$ in the chiral sector that contains topological zero modes, if any. Since all computations are done in double precision, we know the non-zero eigenvalues of $`H`$ to an absolute precision of $`10^8`$. In addition we know the exact number of zero eigenvalues of $`H`$ by counting the difference between the number of positive and negative eigenvalues of $`H_w`$ .
## III Physics issues
In the multi-flavor Schwinger model, the classical U(1) chiral symmetry is explicitly broken by the anomaly, while the SU(N) chiral symmetry cannot be broken in two dimensions. The 't Hooft vertex $`<\prod _i\overline{\psi }_i\psi _i>`$ is not associated with an intact symmetry or Goldstone bosons, so it can and does have a nonzero value . In the quenched case, the exact zero modes of the massless Dirac operator cause a divergence in $`<\overline{\psi }\psi >`$ in the massless limit at finite volume. But as seen in (2), the divergence is of the form $`<|Q|>/(mV)`$. Since $`<|Q|>\sim \sqrt{V}`$, it follows that
$$\underset{m0}{lim}\underset{V\mathrm{}}{lim}<|Q|>/(mV)=0.$$
(4)
This trivial divergence does not contribute in the case where one first takes the thermodynamic limit and then takes the massless limit. But this finite volume divergence does not appear in the unquenched Schwinger model. The zero modes of the Dirac operator in these backgrounds cause a suppression of such gauge field configurations when the fermion determinant is included as part of the gauge field measure.
The small eigenvalue behavior of the spectrum determines the contribution that the second term in (2) makes to $`<\overline{\psi }\psi >`$. Thus the issues to be numerically investigated are centered upon the small eigenvalue behavior of the massless Dirac operator. The main question is whether the infinite volume spectrum is flat as the eigenvalue $`\lambda `$ goes to zero or has a divergence at small $`\lambda `$.
Consider the gauge field seen by the fermion. The plaquette magnetic field of the quenched Schwinger model is ultra-local with the field fluctuations on different plaquettes uncorrelated in infinite volume. The plaquette angles are approximately gaussian distributed at weak coupling. Thus the study of fermionic observables in the quenched theory is best thought of as an investigation of a disordered system .
Let us begin the discussion with two much simpler examples. For the case of free fermions on an $`L\times L`$ lattice with periodic boundary conditions, the low-lying levels are
$$\lambda =\left[\left(\frac{2\pi n_1}{L}\right)^2+\left(\frac{2\pi n_2}{L}\right)^2\right]^{1/2}$$
(5)
so that the level spacing is of order $`1/L`$, and the density of states per unit volume is of order $`\lambda `$ at the low end.
Another simple case is a uniform magnetic field $`B`$, which gives Landau levels. The level spacing is of order $`B`$, and the degeneracy of each level is of order $`BV`$. With the scale of energy intervals larger than $`B`$, the density of states is flat. As we will see later, the typical $`BV`$ is $`gL`$ so that the average of $`B`$ over $`V`$ does get smaller with increasing volume. Thus the Landau levels give a flat spectral density if the energy resolution is coarser than $`g/L`$.
For the case at hand of particles with gyromagnetic ratio 2, there is a cancellation between the paramagnetic magnetic moment interaction with the field and the diamagnetic kinetic energy contribution that puts the lowest Landau level at exactly zero energy. These are the $`BV/(2\pi )`$ zero modes.
When the net flux is zero and all boundary conditions are periodic so that the vector potential can be put in the form
$$A_\mu =ฯต_{\mu \nu }_\nu \varphi ,$$
(6)
there is also a pair of zero modes with opposite chirality. These have the form
$$\psi _+=e^{\varphi }\left(\begin{array}{c}1\\ 0\end{array}\right)\text{ and }\psi _{-}=e^{-\varphi }\left(\begin{array}{c}0\\ 1\end{array}\right)$$
(7)
For the quenched Schwinger model the field is neither zero nor uniform. As noted above, the magnetic field is random with no plaquette-plaquette correlation between the magnetic field on different plaquettes. Thus the variance increases as the area. With the coupling $`g`$ small and $`gR`$ large, the flux through the area $`R\times R`$ is gaussian distributed with a typical size of $`gR`$, so that the area average of the field strength is $`B=g/R`$.
What is the fermion spectrum in that case? The index theorem tells us that for a net flux $`2\pi f`$ through the area $`L^2`$, there are $`f`$ modes with zero eigenvalue. For $`f=0`$ and all boundary conditions periodic, there are two zero modes of the form above. But what else happens at the low end of the spectrum? There are two suggestions for an answer in the literature. The discussion of Casher and Neuberger begins by dividing the $`L\times L`$ volume into $`R\times R`$-sized pieces with $`gR`$ and $`L/R`$ large. Considered in isolation, each of these areas has of order $`gR`$ zero modes. It is then argued that the effect of interaction between different regions is to shift the eigenvalues of the zero modes away from zero, in such a way as to produce a spectrum that is bounded below by one that is flat at small $`\lambda `$.
With a similar approach, Smilga argues for a stronger lower bound that gives a spectral density that diverges as $`\lambda \rightarrow 0`$. His stronger result follows from using the fact that the value of the zero mode wave function on the boundary that separates a region with magnetic field from one with zero field is exponentially small in the flux through the region. Arguments using other methods in his paper and in the paper by Dürr and Sharpe conclude that the divergence is quite strong with a factor $`e^{cg^2V}`$. This would be a consequence of modes with eigenvalues as small as the inverse of that factor.
For a numerical test of the argument used by Smilga to produce the stronger bound, we construct a background gauge field configuration that has two regions of size $`R^2`$ each with constant magnetic field and opposite net fluxes of magnitude $`R`$. We study the low lying eigenvalues of the overlap Dirac operator and show that they go down exponentially with $`R`$.
On the $`L^2`$ lattice, we fix two regions of size $`R^2`$ separated by $`(L/2-1,L/2-1)`$. The slightly off-symmetric separation is chosen to avoid any accidental lattice symmetries. In one region of size $`R^2`$, we set up a constant magnetic field of flux $`R`$, and in the other region we set up a constant magnetic field of flux $`-R`$. Elsewhere the field is zero. An initial numerical check, with the field set to zero in one of the regions, verified the presence of $`R`$ topological zero modes for the overlap Dirac operator. Then, returning to the case of interest with the field in both regions, we calculated the spectrum again. In Fig.1, we plot the low end of the positive half of the spectrum as a function of $`R`$. The lowest few of these small eigenvalues go down exponentially in $`R`$. We verified that this behavior remains unchanged when small random perturbations are added to the link elements. The numerical results are less restrictive than the theoretical arguments in that they do not rely on a variational argument, which, without further work, applies only to a single mode on the lattice. On the other hand, all the lattice modes in addition to the would-be-zeros are included. This confirms the crucial point in the argument for the stronger bound of Ref.. However, it does not provide evidence that the spacings might be as small as $`e^{-cg^2V}`$.
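For reference, the background described above is simple to construct; a minimal sketch that builds the plaquette field and axial-gauge links (the row-wise flux deficits pushed to the boundary by this gauge choice are ignored in this sketch):

```python
import numpy as np

def two_flux_background(L, R):
    """Plaquette angles for two R x R regions carrying flux +R and -R
    (in units of 2*pi), zero field elsewhere, plus axial-gauge links
    U_x = 1, theta_y(x, y) = sum over x' < x of F(x', y)."""
    F = np.zeros((L, L))
    b = 2 * np.pi / R                 # field per plaquette inside a region
    F[0:R, 0:R] = b
    x0 = L // 2 - 1                   # slightly off-symmetric separation
    F[x0:x0 + R, x0:x0 + R] = -b
    theta_y = np.cumsum(np.vstack([np.zeros(L), F[:-1]]), axis=0)
    return F, theta_y
```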
Let us now consider the continuum limit. Given the remark above on the lack of correlation in the field strengths, the gauge field itself provides no correlation length with which to define the continuum limit. However, there is another simple approach. For $`g`$ small, ask how large $`R`$ has to be so that the typical flux through the $`R\times R`$ area is of order one. Since the flux variance for a single plaquette is $`g^2`$ and the uncorrelated fluxes add, the variance for the area is $`g^2R^2`$. Thus the typical flux through the area is $`gR`$, and $`g`$ sets an inverse length or energy scale. This happens to be of the same order as the scale of the unquenched Schwinger model, in which the mass in lattice units is $`g/\sqrt{\pi }`$. We may hope to get a sensible continuum limit by measuring continuum dimensionful quantities in units of appropriate powers of $`g/a`$. To get the finite volume continuum limit, we will want to take $`g`$ to zero and $`L`$ to infinity with $`gL`$ fixed. Lattice eigenvalues, $`<\overline{\psi }\psi >`$, and other quantities with continuum units of energy should be considered in ratios like $`\lambda /g`$ as $`g\rightarrow 0`$.
Finally, let us discuss the range of $`g`$ and $`L`$ where these interesting effects might be seen. From two points of view, we can see that $`gL`$ must be large. First, if $`L`$ is fixed and $`g`$ is small, then there are only perturbative effects from the gauge field, and the strong infrared fluctuations cannot appear. Also, if $`gL`$ is small, then there are essentially no would-be-zero modes that could realize the physical pictures of and . If $`g`$ is large, then the gauge field is very rough on the scale of a single lattice spacing, and the continuum-based arguments above do not apply. The smallest region that typically contains a unit of flux should be several lattice units across, so that it is large enough for the fermion to realize zero modes from the non-zero flux. Thus we must have $`g`$ small and $`gL`$ large, which means, of course, that $`L`$ must be large.
## IV Numerical results
The numerical results described in this section give substantial evidence that the infinite volume limit of the spectral density $`\rho (\lambda )`$ is indeed infinite for $`\lambda \rightarrow 0`$. We will show this by computing the spectrum of the overlap Dirac operator in U(1) gauge field backgrounds. The massless limit is approached in the conventional way by adding a standard mass term. We will also compute $`<\overline{\psi }\psi >`$ as a function of mass in finite lattice volumes and show that it grows as the mass is lowered, before it finally dives to zero as it must for $`m=0`$ at finite volume. We will also show that the average value of the lowest eigenvalue does not scale with the volume, nor does it fit the predictions of chiral random matrix theory.
Strong evidence for a divergence in the non-topological piece of $`<\overline{\psi }\psi >`$ is seen by plotting the gauge ensemble average of the second term in (2) as a function of $`m`$ at a fixed $`g=0.4\sqrt{\pi }`$ for several lattice sizes. We focus on the small mass region in Fig.2. The data at $`L=32`$ show a rise in $`<\overline{\psi }\psi >`$ as the mass is decreased. This divergence is due to an accumulation of very small eigenvalues at larger $`L`$ as seen in the histogram of the small non-zero eigenvalues in Fig.3. Even though only $`L=32`$ shows a rise in $`<\overline{\psi }\psi >`$ at small masses, an anomalous accumulation of very small eigenvalues is evident at $`L=24`$ in Fig.3. The accumulation is not enough to give a rise in $`<\overline{\psi }\psi >`$ on the $`L=24`$ lattice.
The smallest eigenvalue $`\lambda _{\mathrm{min}}`$ has to scale like $`1/V`$ for a finite value of the density of eigenvalues at zero, $`\rho (0)`$, and a finite value of $`<\overline{\psi }\psi >`$ in the massless limit. In Fig.4, we plot the histogram of $`\lambda _{\mathrm{min}}L^2`$ for the various ensembles in the zero topological sector. We see that there is no evidence for scaling in the distribution, whereas chiral random matrix theory predicts a universal function of the form $`\frac{z}{2}e^{-\frac{z^2}{4}}`$ with $`z=\mathrm{\Sigma }L^2\lambda _{\mathrm{min}}`$ and $`\mathrm{\Sigma }`$ the value of the chiral condensate. A previous analysis of the distribution of the low lying eigenvalues, done at smaller physical volume, showed that the distribution did not fit the predictions of unitary chiral random matrix theory . In Ref. this was attributed to finite volume effects. In our case, the reason for the discrepancy is not small volumes but a divergent chiral condensate. Now consider the average of the smallest nonzero eigenvalue as a function of $`L`$. Three simple functions motivated by heuristic physics arguments are $`c/V`$, $`e^{-cL}`$, and $`e^{-cV}`$. To determine which of these forms is closest to the data, we have plotted in Fig.5 $`V\langle \lambda _{\mathrm{min}}\rangle /16`$, $`\mathrm{ln}(\langle \lambda _{\mathrm{min}}\rangle /\langle \lambda _{\mathrm{min}}\rangle _{L=8})/(8-L)`$, and $`50\mathrm{ln}(\langle \lambda _{\mathrm{min}}\rangle /\langle \lambda _{\mathrm{min}}\rangle _{L=8})/(64-V)`$ versus $`L`$ for $`Q=0`$ and $`|Q|=1`$. (The normalizations are just for convenience.) To the extent that one of these functions represents the data well, the corresponding graph in Fig.5 should be flat. Evidently $`e^{-cL}`$ is preferred. Recall that this is the form that appears in the argument for the lower bound in and in the spectrum from the artificial configurations described in Section III.
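For comparison with the data, the chiral random matrix prediction and the rescaling are short routines; a minimal sketch:

```python
import numpy as np

def rmt_lowest(z):
    """Quenched chiral RMT (unitary ensemble, Q=0) distribution of the
    rescaled smallest eigenvalue z = Sigma * L^2 * lambda_min."""
    return 0.5 * z * np.exp(-z ** 2 / 4.0)

def compare_with_rmt(lam_min, Sigma, L, bins=30):
    """Histogram the measured lambda_min, rescaled, against rmt_lowest."""
    z = Sigma * L ** 2 * np.asarray(lam_min)
    hist, edges = np.histogram(z, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist, rmt_lowest(centers)
```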
Fig.2 shows that $`L=32`$ is needed at $`g=0.4\sqrt{\pi }`$ to see the divergent behavior in the chiral condensate. This corresponds to a physical volume of $`gL/\sqrt{\pi }=12.8`$. To study the effect of lattice spacing, we compared this result with others obtained on $`L=24`$ at $`g/\sqrt{\pi }=0.4(32/24)`$ and on $`L=28`$ at $`g/\sqrt{\pi }=0.4(32/28)`$. These have the same physical volume $`gL/\sqrt{\pi }=12.8`$ but are coarser lattices. The comparison in Fig.6 shows that the divergence visible on the $`L=32`$ lattice is not seen on the $`L=24`$ lattice. However, the $`L=28`$ data follow the $`L=32`$ data to very small masses and into the region where the condensate begins to grow. We have used dimensionless quantities in this plot to facilitate a proper comparison. The scaling behavior is good until $`m/g`$ gets small enough to emphasize the very smallest eigenvalues, some of which are being distorted on the coarser lattice. At smaller coupling and at the same physical volume, the scaling behavior extends to smaller $`m/g`$.
To illustrate the point that at fixed $`L`$, $`g`$ can be neither too big nor too small if the small $`\lambda `$ growth is to be seen, we have data from $`L=32`$ and four couplings in Fig.7. The smallest value of $`g`$, which corresponds to a physical size of 9.6, shows no growth at all. The medium values at sizes 12.8 and 16 show the effect. (Note that the finite size effects between these two are small.) The largest value of $`g`$ has size 32 but the gauge field there is too rough, and the small $`\lambda `$ peak is gone.
## V Conclusions
We have shown that for small coupling and large volume, a small eigenvalue peak appears in the spectral density of the overlap Dirac operator and in $`<\overline{\psi }\psi >`$. This is strong evidence for the predictions that these quantities diverge in the infinite volume limit of the quenched Schwinger model. There is some evidence that the divergence could be as strong as $`e^{cgL}`$, but the lattice sizes are insufficient to provide evidence for the stronger $`e^{dg^2L^2}`$ predictions. Similarly there is limited evidence that the would-be-zero modes of subregions of the lattice can provide a physical understanding of the results. However, a definite test of that model also awaits data from larger lattices.
The results in this paper clearly point the direction for further work. The calculations should be extended to larger lattices so that there are several sizes showing the small $`\lambda `$ growth of the spectral density and $`<\overline{\psi }\psi >`$. With that, it would be possible to study the volume dependence of the small $`\lambda `$ peaks and test in more detail the theoretical expectations.
Although the quenched Schwinger model is quite some distance from full four-dimensional QCD, results from it help to map out the range of territory available to massless fermions responding to a gauge field.
###### Acknowledgements.
R.N. would like to thank Urs Heller for general discussions on the quenched approximation and Herbert Neuberger for some discussions on the quenched Schwinger model. |
# Particle Multiplicities and Thermalization in High Energy Collisions
## 1 Introduction
The use of thermal or statistical models to describe multiparticle production has a long history . Recent studies observe that the multiplicities of hadron production in a variety of contexts ($`e^+e^{-}`$, $`pp`$, $`p\overline{p}`$ and heavy ion collisions) are extremely well described by models involving thermal distributions of free hadrons. The only free parameters in their analysis are the temperature and volume of the model thermal system, and a parameter reflecting the level of equilibration of strange particles (in the heavy ion case the baryon chemical potential is an additional parameter). One might conclude from their results that thermal and chemical equilibrium among hadrons is already reached in individual jets produced in high-energy scattering.
However, this conclusion is somewhat unjustified, given that any mechanism for producing hadrons which evenly populates the free particle phase space will mimic a microcanonical ensemble, and therefore yield apparently thermal results<sup>1</sup><sup>1</sup>1After completion of this work we learned that a similar conclusion was reached by C. N. Yang et al. in . They refer to the apparent temperature as a partition temperature and stress that no thermal equilibrium is implied. We thank Professor K.S. Lee of Chonnam National University in Korea for making us aware of this earlier work.. It is important to remark that this type of apparent thermalization will always yield Boltzmann weights which are functions of the free particle energy, and hence describe a non-interacting ensemble. True QCD thermalization, of the type associated with the quark-gluon plasma (QGP), and that experimentalists hope to observe at RHIC, involves large collective effects, and hence is probably poorly modelled by a non-interacting hadron gas.
Here we address the problem of deducing whether a process which leads to multiparticle production is thermal. We make the important distinction between phase space dominated phenomena, which lead to ensembles governed by free particle Boltzmann weights, and interacting thermal ensembles, in which collective effects can be important. We argue that a process which generates data that can be fit using free particle ensembles is merely good at populating phase space in a uniform way; it has not necessarily produced an interacting thermal region. In QCD, a region of this type with temperature of order 100 MeV or more should exhibit strong collective phenomena, such as hadron mass shifts, that preclude description in terms of a free particle ensemble. Hence, we argue that the failure of statistical techniques based on free particle ensembles should be regarded as a signal for the onset of true equilibration at heavy ion colliders such as RHIC.
This paper is organized as follows. In section 2 we review the relation between microcanonical and canonical ensembles in statistical mechanics. We argue that multiparticle production in many cases is equivalent to a method of populating a modified microcanonical ensemble. Such an ensemble will produce โthermalโ behavior even if there is no subsequent interaction of particles once produced. In section 3 we apply our results to multiparticle production and examine a specific example in a toy model involving tree-level photon production. In this toy model there is clearly no real thermal equilibrium โ once produced, the photons do not interact. However, a quasi-Boltzmann distribution results in the limit where a large number of photons is produced. In section 4 we discuss the implications of our conclusions for heavy ion collisions.
## 2 Microcanonical vs Canonical Ensembles
Here we give a brief review of the relationship between the microcanonical and canonical ensembles (MCE and CE, respectively) in statistical mechanics. Recall that the MCE sums only over states with some fixed total energy, while the CE sums over all states with Boltzmann weight $`e^{-\beta E}`$. The main result is rather familiar: under certain general assumptions, quantities computed in the MCE differ from those computed in the CE by an amount which vanishes as the size of the system $`N`$ is taken to infinity. The importance of this result is as follows: the cross section for particle production in a high energy collision can be written as the expectation of the matrix element squared in the MCE corresponding to the free theory. In the limit of large $`N`$, one can therefore rewrite the usual phase space integral appearing in a cross section in terms of the average of the matrix element squared in a CE, which is controlled by the Boltzmann factor. This naturally leads to a certain “thermal” behavior which we discuss below.
In the MCE the total energy is fixed and the probability density is constant over all of phase space. The computation of the entropy $`S(E)`$ is as follows:
$$\mathrm{\Gamma }(E)=\sum _s\delta (E_s-E),$$
(1)
$$S(E)=\mathrm{ln}\mathrm{\Gamma }(E).$$
(2)
In Eq. (1), $`E_s`$ is the energy of the state $`s`$. It is often more convenient to replace the delta function in (1) with the factor $`\mathrm{\Delta }(E)\mathrm{exp}(-\beta (E_s-E))`$, yielding a new quantity, the CE:
$$\overline{\mathrm{\Gamma }}(E)=\sum _se^{-\beta (E_s-E)}.$$
(3)
In the canonical ensemble there is no restriction on the energies of the states $`s`$. However, they appear with Boltzmann weight $`\mathrm{exp}(-\beta E_s)`$. The new quantity introduced, $`\beta `$, describes the temperature of the system, which is fine-tuned to ensure that the average energy is $`E`$. To see this, rewrite $`\overline{\mathrm{\Gamma }}(E)`$ as follows:
$`\overline{\mathrm{\Gamma }}(E)`$ $`=`$ $`{\displaystyle \int dE^{\prime }\sum _se^{-\beta (E^{\prime }-E)}\delta (E_s-E^{\prime })}`$ (4)
$`=`$ $`{\displaystyle \int dE^{\prime }\mathrm{\Gamma }(E^{\prime })e^{-\beta (E^{\prime }-E)}}`$
$`=`$ $`{\displaystyle \int dE^{\prime }e^{S(E^{\prime })-\beta (E^{\prime }-E)}}.`$
Now evaluate (4) in the saddle-point approximation. (This is also known as the Darwin-Fowler method .) Let
$$\beta =\frac{\partial S}{\partial E^{\prime }}|_E,$$
(5)
so
$`\overline{\mathrm{\Gamma }}`$ $`=`$ $`e^{S(E)}{\displaystyle \int dE^{\prime }e^{\frac{1}{2}S^{\prime \prime }(E)(E^{\prime }-E)^2+\cdots }}`$ (6)
$`=`$ $`\sqrt{{\displaystyle \frac{2\pi }{-S^{\prime \prime }(E)}}}e^{S(E)}+\cdots .`$
Here we have assumed that $`S^{}(E)>0`$ (positive temperature) and $`S^{\prime \prime }(E)<0`$ (positive specific heat). It is easy to see that the difference between the CE entropy $`\overline{S}(E)=\mathrm{ln}\overline{\mathrm{\Gamma }}(E)`$ and the MCE entropy $`S(E)`$ is of order $`(\mathrm{ln}N)/N`$. The terms represented by the ellipsis in (6) lead to even smaller corrections, and will be neglected.
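The size of these corrections is easy to check numerically. The sketch below uses the entropy $`S(E)=(3N/2)\mathrm{ln}E`$ (an ideal-gas-like form, assumed here only because it satisfies $`S^{}>0`$ and $`S^{\prime \prime }<0`$) and compares $`\mathrm{ln}\overline{\mathrm{\Gamma }}`$ from direct quadrature of Eq. (4) with the saddle-point value of Eq. (6).

```python
import numpy as np
from scipy.integrate import quad

def check(N, E=1.0):
    """Entropy difference per particle, and the error of the saddle-point
    formula, for the assumed entropy S(E') = (3N/2) ln E'."""
    S = lambda Ep: 1.5 * N * np.log(Ep)
    beta = 1.5 * N / E                 # Eq. (5): beta = dS/dE' at E' = E
    Spp = -1.5 * N / E**2              # S''(E) < 0
    # Direct quadrature of Eq. (4); the integrand peaks at E' = E.
    f = lambda Ep: np.exp(S(Ep) - beta * (Ep - E) - S(E))
    val, _ = quad(f, 1e-12, 20.0 * E, points=[E], limit=200)
    ln_direct = S(E) + np.log(val)
    ln_saddle = S(E) + 0.5 * np.log(2.0 * np.pi / (-Spp))
    return (ln_direct - S(E)) / N, ln_direct - ln_saddle

for N in (10, 100, 1000):
    d_entropy, d_saddle = check(N)
    print(N, d_entropy, d_saddle)
# d_entropy = (S_bar - S)/N shrinks like (ln N)/N as stated in the text;
# d_saddle, the intrinsic error of the saddle-point formula, is smaller still.
```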
Now consider a generic operator $`\mathcal{O}`$. The average in the CE is given by
$`\langle \mathcal{O}\rangle _C`$ $`=`$ $`{\displaystyle \frac{1}{\overline{\mathrm{\Gamma }}(E)}}\sum _se^{-\beta (E_s-E)}\mathcal{O}_s`$ (7)
$`=`$ $`e^{-S(E)}\sqrt{{\displaystyle \frac{-S^{\prime \prime }(E)}{2\pi }}}{\displaystyle \int dE^{\prime }e^{S(E^{\prime })-\beta (E^{\prime }-E)}\langle \mathcal{O}\rangle _M(E^{\prime })},`$
where $`\langle \mathcal{O}\rangle _M`$ is the average taken in the microcanonical ensemble. Its logarithm can be expanded as follows
$$\mathrm{ln}[\langle \mathcal{O}\rangle _M(E^{\prime })]=\mathrm{ln}[\langle \mathcal{O}\rangle _M(E)]+(E^{\prime }-E)\mathrm{ln}^{\prime }[\langle \mathcal{O}\rangle _M(E)]+\frac{1}{2}(E^{\prime }-E)^2\mathrm{ln}^{\prime \prime }[\langle \mathcal{O}\rangle _M(E)]+\cdots .$$
(8)
The integral in (7) can again be performed in the saddle-point approximation. Note that if the operator $`\mathcal{O}`$ is of order $`N`$ (as in the case of a particle multiplicity), the coefficients appearing in the expansion (8) are of order $`\mathrm{ln}N`$. They lead to small shifts (of order $`(\mathrm{ln}N)/N`$) in the saddle-point value of the temperature $`\beta `$ and the overall prefactor. Thus the canonical and microcanonical averages of particle multiplicities converge in the limit of large $`N`$.
In the next section we apply these results to the computation of cross sections for particle production in high energy collisions.
## 3 Multiplicity Results
The cross section for the process $`A+B\to n`$ particles is
$$\sigma =\frac{1}{2E_A2E_B|v_A-v_B|}\int \prod _f\frac{d^3p_f}{(2\pi )^3}\frac{1}{2E_f}|\mathcal{M}|^2(2\pi )^4\delta ^{(4)}\left(p_A+p_B-\sum _fp_f\right).$$
(9)
First, let us assume that the function<sup>2</sup><sup>2</sup>2In relativistic field theory, we usually adopt the normalization convention $`\langle p|p^{\prime }\rangle =2E_p(2\pi )^3\delta ^3(p-p^{\prime })`$, which leads to the factor of $`1/2E_p`$ in the phase space density. On the other hand, in statistical physics, we usually adopt an energy-independent normalization. Although the final result for the cross section is independent of our normalization convention, what we mean by “phase space dominated” depends on what we choose to be the unit of phase space. In this paper we will always be referring to the statistical mechanical unit of phase space, i.e. $`d^3p`$. $`|\mathcal{M}|^2/2E_f`$ in the dominant region of phase space is slowly varying (we will relax this assumption shortly). Then, we can use the following approximation
$$\sigma =\frac{1}{2E_A2E_B|v_A-v_B|}\frac{|\overline{\mathcal{M}}|^2}{\prod _f2\overline{E}_f}\int \prod _f\frac{d^3p_f}{(2\pi )^3}(2\pi )^4\delta ^{(4)}\left(p_A+p_B-\sum _fp_f\right),$$
(10)
where $`\overline{\mathcal{M}},\overline{E}`$ are averaged quantities. As noted, the integral in (10) is just the microcanonical ensemble for $`N`$ free particles, and hence leads to “thermal” properties of the particle distributions and multiplicities. Considering the more general case where the number of particles is not fixed, we simply sum over all cross sections,
$$\sigma =\sum _n\sigma _{AB\to n},$$
(11)
to obtain an MCE without fixed particle number. In the usual thermodynamic limit this sum is dominated by some particular value of $`n`$, so it is equivalent to consider the earlier case with $`n=N`$.
We now treat the matrix element more carefully, by retaining it in the phase space integral. The resulting integral can still be turned into a canonical ensemble using the result of the previous section, provided the modified “entropy” (i.e., the logarithm of the phase space integral including the matrix element) continues to grow with total energy $`E`$ (positive temperature), and the second derivative of this entropy with respect to $`E`$ is negative (positive specific heat<sup>3</sup><sup>3</sup>3These requirements are satisfied in the toy model we consider below, where the modified entropy behaves as $`S(E)\sim N\mathrm{ln}\mathrm{ln}(E/m)`$, for $`N`$ “photons” with total energy $`E`$.). The energy-momentum delta function can then be replaced by a Boltzmann weight $`\mathrm{exp}[(p_A+p_B-\sum _fp_f)\cdot \beta ]`$, which in the center of mass frame reduces to $`\mathrm{exp}[\beta (E-\sum _fE_f)]`$. This yields the following result for the differential cross section:
$$\frac{d^3\sigma }{dp_i^3}\propto \frac{1}{E_i}e^{-E_i/T}\int \prod _{f\ne i}d^3p_f\frac{1}{E_f}|\mathcal{M}|^2\mathrm{exp}\left(-\sum _{f\ne i}E_f/T\right).$$
(12)
Note the natural appearance of the Boltzmann factor in (12).
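That uniform population of free-particle phase space mimics a Boltzmann distribution can also be verified by direct sampling. For $`N`$ massless particles, and neglecting overall momentum conservation (a simplification that only matters at small $`N`$), the microcanonical measure $`\prod _iE_i^2`$ on the surface $`\sum _iE_i=E`$ can be sampled exactly with Gamma-distributed variables, as in the following sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N, E = 50, 50.0              # N massless particles sharing total energy E

# If X_i ~ Gamma(shape=3), then E_i = E * X_i / sum(X) has joint density
# proportional to prod(E_i^2) on the simplex sum(E_i) = E: this is the
# microcanonical measure for massless particles, with overall momentum
# conservation neglected.
X = rng.gamma(shape=3.0, size=(200_000, N))
Ei = E * X / X.sum(axis=1, keepdims=True)

# The single-particle spectrum should approach E1^2 exp(-E1/T), T = E/(3N).
T = E / (3.0 * N)
hist, edges = np.histogram(Ei[:, 0], bins=40, range=(0.0, 8.0 * T), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
bw = edges[1] - edges[0]
boltz = centers**2 * np.exp(-centers / T)
boltz /= boltz.sum() * bw
print(np.max(np.abs(hist - boltz)))  # small: no interactions, yet "thermal"
```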
To proceed further, we need a specific model for the behavior of the matrix element. There are very few cases in which the matrix element for $`N`$-particle production is explicitly known. One such case is the QED process
$$f\overline{f}\to 1\text{ spin up photon}+(n-1)\text{ spin down photons},$$
(13)
where the (massless) fermions have opposite spin. The matrix element squared for (13) is
$$|\mathcal{M}(P,Q;1_{\uparrow },2_{\downarrow },\mathrm{\dots },n_{\downarrow })|^2=(2e^2)^n\frac{(2Q\cdot p_1)^2}{(2P\cdot Q)}\prod _{i=2}^n\frac{(2P\cdot Q)}{(2P\cdot p_i)(2p_i\cdot Q)},$$
(14)
where $`P`$, $`Q`$ and $`p_i`$ are the momenta of the two incoming fermions and $`n`$ outgoing gauge particles respectively. In the $`(P,Q)`$ center of momentum frame this may be written as
$$|\mathcal{M}(P,Q;1_{\uparrow },2_{\downarrow },\mathrm{\dots },n_{\downarrow })|^2=\mathcal{K}E_1^2(1+\mathrm{cos}\theta _1)^2\prod _{i=2}^n\frac{1}{E_i^2\mathrm{sin}^2\theta _i},$$
(15)
where $`E_i,\theta _i`$ are the energy and production angle of the $`i`$th photon, and we have lumped all of the remaining constant factors into $`\mathcal{K}`$. Using this to solve for the differential cross section yields ($`i\ne 1`$)
$$d^3\sigma \propto \frac{1}{E_i^3\mathrm{sin}^2\theta }e^{-E_i/T}d^3p_i.$$
(16)
The resulting number density of (spin down) photons is then
$$n\propto \int \frac{d^3p}{(2\pi )^3}\frac{1}{E^3\mathrm{sin}^2\theta }e^{-E/T}.$$
(17)
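A minimal implementation of the toy-model weight may be useful; the function below evaluates the squared matrix element (15) for a given configuration of photon energies and angles, with the constant $`\mathcal{K}`$ set to unity and purely illustrative kinematics.

```python
import numpy as np

def weight_eq15(E, theta, K=1.0):
    """Squared matrix element of Eq. (15), up to the constant K.
    E[0], theta[0] refer to the single spin-up photon; the remaining
    entries are the spin-down photons."""
    w = K * E[0]**2 * (1.0 + np.cos(theta[0]))**2
    w *= np.prod(1.0 / (E[1:]**2 * np.sin(theta[1:])**2))
    return w

# Arbitrary 5-photon configuration (illustrative numbers only).
E = np.array([3.0, 1.0, 0.8, 1.2, 0.5])
theta = np.array([0.3, 1.0, 2.0, 0.7, 1.5])
print(weight_eq15(E, theta))
# The 1/(E^2 sin^2 theta) factors are the origin of the soft and
# collinear divergences discussed next.
```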
In this particular example we have a problem, because the photons are massless and there is an infrared catastrophe due to arbitrarily soft photons. Of course, the number density of observable photons — those with some minimum energy and angular separation from the initial $`f\overline{f}`$ pair — is finite. For our purposes, we can always eliminate this problem by introducing a photon mass by hand in our toy model. In fact, we can introduce several species of “photons” with masses $`m_i`$. Then, the abundance of each species is given by<sup>4</sup><sup>4</sup>4In writing Eq. (18) we have chosen to ignore the angular dependence. Had we chosen to retain it, the angular integration would produce the factor $`\mathrm{ln}\left[(1+\sqrt{1-4m_i^2/s})/(1-\sqrt{1-4m_i^2/s})\right]`$, where $`\sqrt{s}`$ is the center of mass energy of the collision.
$$n_i\propto \int \frac{d^3p}{(2\pi )^3}\frac{1}{(p^2+m_i^2)^{3/2}}e^{-\sqrt{p^2+m_i^2}/T}.$$
(18)
The integral in (18) differs from the one appearing in a pure Boltzmann distribution due to the factor of $`(p^2+m_i^2)^{-3/2}`$. Without this extra factor, the integral simply reduces to $`m_i^2TK_2(m_i/T)`$, where $`K_2(x)`$ denotes the modified Bessel function of order 2.
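The comparison is a short numerical exercise; the sketch below evaluates both integrals with scipy for illustrative mass pairs at an assumed $`T=150`$ MeV (the masses and temperature are placeholders, not fitted values).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

T = 150.0  # MeV; assumed temperature, for illustration only

def n_toy(m):
    """Abundance integral of Eq. (18), up to overall constants."""
    f = lambda p: p**2 * (p**2 + m**2)**-1.5 * np.exp(-np.sqrt(p**2 + m**2) / T)
    return quad(f, 0.0, 40.0 * T)[0]

def n_boltz(m):
    """Pure Boltzmann abundance: int p^2 exp(-E/T) dp = m^2 T K_2(m/T)."""
    return m**2 * T * kv(2, m / T)

for m1, m2 in [(140.0, 494.0), (140.0, 938.0)]:  # pion-, kaon-, nucleon-like
    print(m1, m2, n_toy(m1) / n_toy(m2), n_boltz(m1) / n_boltz(m2))
# The abundance ratios differ appreciably: the toy spectrum is
# Boltzmann-like in shape but not identical to a thermal gas.
```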
In a sense, the additional factor makes only a small difference relative to the exponential: when taken into the exponent it is of order $`\mathrm{ln}(\beta E)`$, versus $`\beta E`$ for the Boltzmann factor. However, actual ratios of particle abundances $`n_i/n_j`$ will differ from thermal ratios. In Fig. 1 we show the result of the multiplicities from (18) and a thermal best fit. While we didn't use any additional parameters, such as individual chemical potentials, the eventual quality of the fit would probably not be as good as what is observed in $`e^+e^{-}`$, $`pp`$, $`p\overline{p}`$ and heavy ion collisions . In other words, the hadronization process probably populates free particle phase space somewhat more evenly than our toy model. However, our toy model does demonstrate that Boltzmann-like distributions are not necessarily indicative of real thermal (chemical or kinetic) equilibrium.
## 4 Discussion
In the previous sections we argued that multiparticle production can readily lead to thermal behavior if the process in question is phase space dominated. Because phase space is determined by free particle kinematics, the results correspond to an ensemble of non-interacting particles. In other words, because the arguments of the energy delta function in the cross section Eq. (9) are simply free particle energies, the corresponding Hamiltonian appearing in the ensemble in Eqs. (1) and (3) is the free Hamiltonian, with no interactions. Our toy model of photons suggests that this result is rather generic in any process where a large number of particles is produced.
If the ensemble is dominated by a particular species of particle (i.e. the lightest particle), the apparent “temperature” will be related to the mass $`m`$ of that species. This is because phase space is maximized by producing as many particles as possible, each with a kinetic energy of order $`m`$. Suspiciously, the typical temperatures produced by the excellent thermal fits of $`e^+e^{-}`$, $`pp`$, $`p\overline{p}`$ and heavy ion collisions are all of order the pion mass.
Of course, an ensemble of non-interacting hadrons is not very interesting. It gives us no information about the actual QCD phase diagram. In real QCD the energy of a state consisting of many particles is modified due to interactions: it is not simply the sum of the free particle energies. At high density or temperature interaction effects are large and lead to large deviations from free particle results. Were this not so one could never see collective phenomena such as chiral symmetry restoration or a deconfinement phase transition. In an ensemble of interacting hadrons we already expect significant collective effects at temperatures $`T\sim m_\pi `$, such as a decrease in the value of the quark condensate $`\langle \overline{q}q\rangle _T`$, and shifts in the various hadron masses.
We expect these effects to lead to the failure of statistical techniques based on free particle ensembles, which predict that the multiplicities should fall roughly exponentially (as $`e^{-m/T}`$, see Fig. 1). However, if the masses and widths are shifted from their vacuum values at the instant of chemical freeze-out, as would be expected at the $`\sim 170`$ MeV temperatures obtained in the fits, then a plot of multiplicity versus vacuum mass will deviate from the thermal prediction: i.e. it will have the form<sup>5</sup><sup>5</sup>5We have simplified our discussion here in two ways. First, we have ignored the possibility of introducing chemical potentials related to conserved quantities. Since the number of data points to be fit is much larger than the handful of these potentials which may justifiably be introduced, it is highly unlikely that the effects of all of the mass shifts could be reproduced in this manner. Second, many of the hadronic states decay rapidly and so affect the relative populations of the observed particles. Inclusion of these effects adds no degrees of freedom to the fits. It would be amazing if the shifts in masses and widths should conspire to reproduce the vacuum results. $`e^{-m(T)/T}`$. On the other hand, it is also possible that the system will remain in equilibrium long enough for the masses and widths to return to their vacuum values. If so, then a thermal fit will perform well, but the resulting temperature would not be near the chiral phase transition.
Knowledge of these mass shifts would be necessary for any detailed predictions. Although there have been some promising results , it has proven difficult to extract this information from lattice data. A perturbative estimate of the decrease in the quark condensate was derived in Ref. . In the chiral limit ($`m_u=m_d=0`$), an analytic result is possible. It reads
$$\langle \overline{q}q\rangle _T=\langle \overline{q}q\rangle _{T=0}\left[1-\frac{3T^2}{24F^2}-\frac{3}{8}\left(\frac{T^2}{12F^2}\right)^2+\mathcal{O}(T^6)\right],$$
(19)
where $`F\approx 93`$ MeV is the (zero temperature) pion decay constant. Using real world quark masses changes this result only slightly. The corresponding estimate for the temperature of the chiral phase transition (i.e. the point at which the condensate is essentially zero) is 170 MeV in the chiral limit, and 190 MeV in the real world, consistent with lattice data.
We expect the thermal pion mass to obey the finite temperature version of the Dashen formula
$$m_\pi ^2(T)\approx \frac{2m_q}{F^2(T)}|\langle \overline{q}q\rangle _T|,$$
(20)
where (again, for two massless flavors)
$$F^2(T)=F^2(0)\left[1-\frac{T^2}{6F^2(0)}+\mathcal{O}(T^4)\right].$$
(21)
Note that at leading order the thermal pion mass actually increases slightly with temperature. The temperature dependence of the baryon masses is a harder problem, but in the naïve quark model we might expect them to behave roughly as
$$m_B(T)\sim 3|\langle \overline{q}q\rangle _T|,$$
(22)
where the constituent quark mass is simply due to the condensate. This estimate at least incorporates the fact that the baryons must become nearly massless at the chiral phase boundary, since chiral symmetry prevents them from obtaining a mass. Equations (20) and (22) suggest that the relative abundance of baryons compared to pions will increase at high temperature. While we don't necessarily believe that (22) is very accurate, the point is that the thermal pion and baryon masses probably do not depend on the quark condensate (and hence the temperature) in the same way. Thus, it is probably inconsistent to imagine fitting the properties of an interacting hadron gas at $`T\sim m_\pi `$ using vacuum hadron masses. Yet, essentially all recent heavy ion data<sup>6</sup><sup>6</sup>6In fact, from this point of view any model (such as a parton cascade model) which reproduces free particle thermal multiplicities at temperatures of order 150 MeV probably lacks some important dynamics associated with the phase transition. We might classify it as just another efficient populator of phase space. agrees well with multiplicities generated by free thermal models, with temperatures of roughly $`T\sim 50`$–170 MeV.
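To put rough numbers on this trend, the sketch below evaluates Eqs. (19)–(21) for two massless flavors with $`F=93`$ MeV and forms the naive mass ratios implied by Eqs. (20) and (22); since the chiral series is only trustworthy well below the transition, the output should be read as a trend, not a prediction.

```python
import numpy as np

F0 = 93.0  # MeV, zero-temperature pion decay constant

def condensate_ratio(T):
    """<qbar q>_T / <qbar q>_0 from Eq. (19), two massless flavors."""
    x = T**2 / (12.0 * F0**2)
    return 1.0 - 1.5 * x - 0.375 * x**2   # 1 - 3T^2/24F^2 - (3/8)(T^2/12F^2)^2

def F2_ratio(T):
    """F^2(T) / F^2(0) from Eq. (21), leading order."""
    return 1.0 - T**2 / (6.0 * F0**2)

for T in (50.0, 100.0, 150.0):
    c = condensate_ratio(T)
    # Eq. (20): m_pi^2(T)/m_pi^2(0) = |condensate ratio| / (F^2 ratio)
    mpi = np.sqrt(abs(c) / F2_ratio(T))
    mB = abs(c)            # Eq. (22): m_B(T)/m_B(0) tracks the condensate
    print(f"T={T:5.0f} MeV  cond={c:.3f}  m_pi(T)/m_pi(0)={mpi:.3f}  m_B(T)/m_B(0)={mB:.3f}")
# m_pi rises slightly while m_B falls, so baryons are enhanced relative
# to pions as the temperature grows.
```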
In Fig. 2 we reproduce a plot from the paper of Cleymans and Redlich , where the fitted temperatures and chemical potentials resulting from LEP, CERN/SPS, BNL/AGS and GSI/SIS data are displayed. One interpretation of these results (which we do not subscribe to) is that a large region of the QCD phase diagram in the temperature-density plane has already been explored! There is no question that the quality of the free thermal fits is quite good. However, this in itself suggests that an interacting thermal region has yet to be produced in these experiments. Rather, it is quite possible that the collisions simply serve as a mechanism for populating phase space, without ever evolving through configurations in real thermal and chemical equilibrium (i.e. actual points on the phase diagram in Fig. 2). If the system had passed through real equilibrium, we suggest that the observed final state multiplicities could deviate significantly from those which can be generated by free thermal models. Thus the failure of such models to fit the data could be a signal for real equilibrium. This idea is explored in the context of phenomenologically motivated models in . For more recent work related to this paper, which originally appeared as preprint nucl-th/0001044 in 2000, see .
## Acknowledgements
The authors would like to thank S. Das Gupta, C. Gale, R. Hwa, C.S. Lam, H. Minakata, B. Mueller, R. Pisarski and D. Rischke for useful discussions and comments. SH is particularly grateful to K.S. Lee for making him aware of the previous work of C.N. Yang and collaborators, and for re-stimulating his interest in this area. SH is supported under DOE contract DE-FG06-85ER40224. JH and GM are supported in part by the Natural Sciences and Engineering Research Council of Canada and the Fonds pour la Formation de Chercheurs et l'Aide à la Recherche of Québec.
# Electric Field Induced Phase Transition in KDP Crystal Near Curie Point: Raman and X-ray Scattering Studies
## Abstract
X-ray scattering measurements are performed in order to verify that the mechanism leading to the DC electric field induced $`C_{2v}^{19}\to C_{2v}^{j\ne 19}`$ phase transition in the KDP crystal at 119 K is the change of the local site symmetry of the phosphate group from $`C_2`$ in the $`C_{2v}^{19}`$ phase to $`C_s`$ in the $`C_{2v}^{j\ne 19}`$ phase. It is shown, by analyzing the integrated intensity of the (800) and (080) reflections, that under a DC electric field the density of oxygen atoms lying on these planes changes, indicating that the phosphate group rotates around the $`b`$ direction relative to the orthorhombic $`C_{2v}^{19}`$ structure. Some Raman results are also discussed.
Among the several ferroelectric materials which contain hydrogen bonds, potassium dihydrogen phosphate (KDP) is probably the most investigated. At a temperature of 122 K, KDP crystals undergo a ferroelectric phase transition , at which the crystal symmetry is lowered from the tetragonal $`D_{2d}^{12}`$ phase to the orthorhombic $`C_{2v}^{19}`$ phase. As a result the crystal lattice becomes polarized along the $`c`$ axis. Near and below the phase-transition temperature the protons are partially ordered , being located near either the upper or the lower oxygen atoms of the phosphate groups . Many works reporting on the investigation of the stability of both $`D_{2d}^{12}`$ and $`C_{2v}^{19}`$ phases as a function of hydrostatic pressure and low-intensity DC electric fields have already been published \[2-7\]. Recently, we have investigated the effect of uniaxial pressure on these KDP phases . We have shown that under uniaxial pressure, where the force was applied along the shear direction, KDP undergoes two metastable transitions, namely: (i) $`D_{2d}^{12}\to C_{2v}^{j\ne 19}`$ and (ii) $`C_{2v}^{19}\to C_{2v}^{j\ne 19}`$. A reasonable explanation for these transitions was given based on the change of the local site symmetry of the phosphate ions. In the ferroelectric phase, the phosphate ion changes its local site symmetry from $`C_2`$ to $`C_s`$ , maintaining the same factor group $`C_{2v}`$ but modifying the space group. These changes are due to the rotation of the phosphate ions around the $`b`$ direction of the orthorhombic structure. (It is important to mention that under an electric field and low temperature a phase transition from a monoclinic $`C_s`$ structure to an orthorhombic $`C_{2v}`$ one was observed ). Also, after discussing the reversibility criteria of this new metastable phase, we have drawn the phase diagram for the KDP transitions on the plane $`(\sigma _6,T)`$ for temperatures in the range from 110 K to 130 K. A theoretical explanation for the appearance of the metastable $`C_{2v}^{j\ne 19}`$ phase based on the Gibbs free energy density of the system was given. We have expanded the phenomenological Gibbs free energy density of the system up to $`P_3^{10}`$, where $`P_3`$ is the spontaneous polarization along the $`c`$ direction. Even for small values of the coefficient of the term $`P_3^{10}`$, a second minimum of the crystal energy can be achieved. This second minimum is associated with the metastable $`C_{2v}^{j\ne 19}`$ phase, which presents a lower value of the polarization than that presented by the stable $`C_{2v}^{19}`$ phase. This is in accordance with our assumption that the dipoles rotate around the orthorhombic $`b`$ axis when KDP undergoes a phase transition. This assumption can also be verified using other experimental techniques, e.g., X-ray diffraction, where we can observe modifications in the diffraction pattern associated with the $`(h00)`$ and $`(0k0)`$ reflection planes, with $`h,k=4n`$ where $`n=1,2,\mathrm{\dots }`$, since the number of oxygen atoms lying on these planes changes. However, it is somewhat difficult to perform X-ray scattering measurements using the stress apparatus described in Ref. . So, we decided to take advantage of the fact that the ferroelectric phase presents piezoelectricity to investigate the phase transition $`C_{2v}^{19}\to C_{2v}^{j\ne 19}`$ under a DC electric field applied along the ferroelectric $`c`$ direction.
In other words, due to the converse piezoelectric effect, a DC electric field applied along the $`c`$ direction should induce phase transitions in KDP crystals similar to those induced by uniaxial pressure with the force applied along the shear direction. Hence, the main goal of this work is to perform X-ray diffraction measurements to verify the assumption that, due to a DC electric field applied along the ferroelectric $`c`$ axis, the dipoles rotate around the $`b`$ direction of the orthorhombic structure, leading to a change in the local site symmetry of the phosphate ions.
The samples were cut from good optical quality crystals grown by slow evaporation into parallelepipeds of dimensions $`6\times 5\times 1.5`$ mm<sup>3</sup> and oriented by X-ray diffraction . The parallelepiped faces were orthogonal to the $`a`$, $`b`$ and $`c`$ directions of the orthorhombic structure. Electrodes of silver were evaporated on the large faces, which are perpendicular to the ferroelectric $`c`$ direction. A Keithley Instruments voltage supply model 246 was used as the voltage source.
Light scattering measurements were performed using conventional equipment (argon ion laser and double monochromator), with an experimental resolution of 1 cm<sup>-1</sup>. A continuous flow-type cryostat was used to record the Raman spectra at low temperatures, which could be controlled to $`\pm `$ 0.1 K. Geometries for the spectra listed in the figures follow the usual Porto notation A(BC)D. X-ray scattering measurements were performed using a Rigaku diffractometer with a Mo K$`\alpha `$ radiation source coupled with a low temperature chamber. The good penetration of the X-ray beam makes it possible to obtain diffraction from deeper planes, avoiding the non-uniformity of the electric field at the surface. The procedure used in the X-ray experiments was as follows: first, the crystal was aligned using the (440) reflection of the paraelectric phase and then cooled down to the ferroelectric phase, where, during the transition, the reciprocal lattice points of the (800) and (080) reflections appear, satisfying the diffraction conditions of the orthorhombic structure with the $`C_{2v}`$ factor group.
Before discussing the X-ray results, we need to show that under a DC electric field, the KDP crystal undergoes the $`C_{2v}^{19}\to C_{2v}^{j\ne 19}`$ phase transition. Then, in Fig. 1(a) and 1(b) we show part of the DC electric field dependent Raman spectra for the low-frequency region taken at 119 K, for the symmetries $`A_1`$ and $`B_1`$ of the $`C_{2v}`$ factor group, respectively. For $`E=0`$, both $`A_1`$ and $`B_1`$ spectra present good agreement with the mode distribution predicted by the group theory analysis. However, for $`E=5`$ kV/cm, we observe qualitative modifications in the Raman spectra. From Fig. 1(a) we can see three modifications: (i) the increase in intensity of the vibrations for $`\omega <250`$ cm<sup>-1</sup>; (ii) the increase in the intensity of the vibration at 525 cm<sup>-1</sup> and (iii) the disappearance of the vibration at 575 cm<sup>-1</sup>. For $`B_1`$ symmetry, as shown in Fig. 1(b), we observe an inversion in the intensity of the peaks oscillating at around 200 and 500 cm<sup>-1</sup>. Since the symmetries $`A_1`$ and $`B_1`$ are one-dimensional , the modifications exhibited by the Raman spectra are evidence that due to the DC electric field the $`C_{2v}^{19}`$ phase of KDP underwent a phase transition . The modifications observed are consistent with the symmetry analysis considering the transition from the space group $`C_{2v}^{19}`$ to the space group $`C_{2v}^{j\ne 19}`$, where the phosphate ion changes its local site symmetry from $`C_2`$ to $`C_s`$. The modifications observed are irreversible, since all features seen in the Raman spectra of Figs. 1(a) and 1(b) remain present even when the DC electric field is turned off and the crystal is maintained in this condition for an arbitrarily long time. To go back to the spectra of KDP for the $`C_{2v}^{19}`$ phase, we must increase the temperature of the crystal above 122 K, and then cool it again to temperatures below 122 K. This irreversibility can be understood as a manifestation of a lowering of the cell potential due to an increase in the dipole interactions. Switching off the DC electric field is not sufficient to overcome the potential barrier created by the dipolar interaction. This barrier is overcome only by transferring thermal energy to the dipoles. Due to these facts, we conclude that a DC electric field induces a phase transition in KDP similar to that induced by uniaxial pressure.
In order to present further experimental evidence of the mechanism leading to the appearance of the $`C_{2v}^{j\ne 19}`$ phase, we have performed single crystal X-ray measurements as a function of the DC electric field. The idea of the experiment is very simple: if the phosphate tetrahedron in fact rotates around the $`b`$ direction, a variation in the behavior of the integrated intensity of the diffraction peaks associated with the (800) and (080) reflection planes should be observed, since the number of oxygen atoms lying on these planes changes when $`E`$ is varied from 0 up to 5 kV/cm, as shown in Fig. 2.

Figure 3 shows the diffraction patterns corresponding to the (800) and (080) reflection planes of the orthorhombic structure as a function of the DC electric field up to 5 kV/cm at 119 K. The features observed are irreversible in the same way as those presented in the Raman spectra. The peaks emerge from the overlapping of the bands corresponding to the $`K\alpha _1`$ and $`K\alpha _2`$ lines of the Mo radiation. By performing a spectral decomposition into pseudo-Voigt components, we can plot the behavior of the integrated intensity of the (800) and (080) $`K\alpha _1`$ reflections as a function of the DC electric field, as displayed in Fig.4. It should be observed that the integrated intensity corresponding to the (800) reflection decreases with increasing DC electric field up to 5 kV/cm, while that of the (080) reflection increases. These changes in the integrated intensity behavior indicate that the density of oxygen atoms lying on these planes changed. This can be ascribed to a rotation of the phosphate ion around the $`b`$ direction relative to the $`C_{2v}^{19}`$ orthorhombic structure. This statement agrees with the observation of Bacon and Pease , who showed that KDP under a DC electric field exhibits a saturation polarization of the order of $`4.7\times 10^{-6}`$ C cm<sup>-2</sup>, whereas the observed value is $`5\times 10^{-6}`$ C cm<sup>-2</sup> for $`E=0`$ . Due to this fact, the local site symmetry exhibited by the phosphate ion changes from $`C_2`$ to $`C_s`$, which modifies the space group of the $`C_{2v}`$ symmetry from $`j=19`$ to $`j\ne 19`$.

In conclusion, we reported on the experimental verification, based on X-ray measurements, of the mechanism leading to the conformational $`C_{2v}^{19}\to C_{2v}^{j\ne 19}`$ phase transition of KDP when a DC electric field is applied along the ferroelectric $`c`$ axis. The behavior of the integrated intensity of the (800) and (080) reflections indicates that a modification occurs in the density of oxygen atoms lying on these planes. This modification results from the rotation of the phosphate ion around the orthorhombic $`b`$ axis. This rotation changes the local site symmetry of the phosphate ion from $`C_2`$ in the $`C_{2v}^{19}`$ phase to $`C_s`$ in the $`C_{2v}^{j\ne 19}`$ phase.
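For reference, the pseudo-Voigt decomposition used to extract the integrated intensities of Fig. 4 can be set up in a few lines; in the sketch below the $`K\alpha _1`$/$`K\alpha _2`$ doublet is synthetic, and all peak parameters are placeholders rather than the measured (800)/(080) profiles.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(x, A, x0, w, eta):
    """Area-A pseudo-Voigt profile: mix of a Lorentzian and a Gaussian
    sharing the half-width w and mixing parameter eta."""
    lor = (w / np.pi) / ((x - x0)**2 + w**2)
    gau = np.sqrt(np.log(2.0) / np.pi) / w * np.exp(-np.log(2.0) * ((x - x0) / w)**2)
    return A * (eta * lor + (1.0 - eta) * gau)

def doublet(x, A1, x1, A2, x2, w, eta):
    """K-alpha1 plus K-alpha2 components with common w and eta."""
    return pseudo_voigt(x, A1, x1, w, eta) + pseudo_voigt(x, A2, x2, w, eta)

# Synthetic diffraction profile (illustration only, not the measured data).
rng = np.random.default_rng(1)
x = np.linspace(29.0, 31.0, 400)
y = doublet(x, 100.0, 29.8, 50.0, 30.1, 0.05, 0.5) + rng.normal(0.0, 0.5, x.size)

p0 = [80.0, 29.8, 40.0, 30.1, 0.06, 0.5]
popt, _ = curve_fit(doublet, x, y, p0=p0)
print("integrated K-alpha1 intensity:", popt[0])  # area of the first component
```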
ACKNOWLEDGEMENTS
Financial support from CAPES, CNPq, FINEP and FUNCAP, Brazilian funding agencies, is gratefully acknowledged.
REFERENCES
R.J. Nelmes, W.F. Kuhs, C.J. Howard, J.E. Tibballs and T.W. Ryan, J. Phys. C: Solid State Phys. 18, L711 (1985).
G. Busch, Helv. Phys. Acta 11, 269 (1938).
F. Jona and G. Shirane, Ferroelectric Crystals (Dover, New York, 1993).
K. Itoh, T. Matsubayashi, E. Nakamura, and H. Motegi, J. Phys. Soc. Jpn. 39, 843 (1975).
F.E.A. Melo, K.C. Serra, R.C. Souza, S.G.C. Moreira, J. Mendes-Filho, and J.E. Moreira, Braz. J. Phys. 22, 95 (1992).
B. Morosin and G. Samara, Ferroelectrics 3, 49 (1971).
P.S. Pearcy and G. Samara, Phys. Rev. B 8, 2033 (1973).
F.E.A. Melo, S.G.C. Moreira, A.S. Chaves, I. Guedes, P.T.C. Freire, and J. Mendes Filho, Phys. Rev. B 59, 3276 (1999).
S.G.C. Moreira, F.E.A. Melo, and J. Mendes Filho, Phys. Rev. B 54, 6027 (1996).
G.E. Bacon and R.S. Pease, Proc. R. Soc. A 230, 359 (1955).
A. von Arx and W. Bantle, Helv. Phys. Acta 16, 211 (1943).
FIGURE CAPTIONS
Fig. 1- Raman spectra of KDP as a function of the DC electric field at 119 K for two different symmetries: (a) $`A_1`$ and (b) $`B_1`$.
Fig. 2- Schematic representation of the orthorhombic $`C_{2v}^{19}`$ structure of the KDP projected on the ab plane.
Fig. 3- Experimental single crystal X-ray diffraction pattern related to (800) and (080) reflection planes as a function of the DC electric field at 119 K.
Fig. 4- Plots of the integrated peak intensity corresponding to (800) and (080) $`K\alpha _1`$ reflections as a function of the DC electric field at 119 K. |
# Nanoarcsecond Single-Dish Imaging of the Vela Pulsar
## 1. Introduction
Under sufficiently strong scattering conditions, a point source whose radiation propagates through the Interstellar Medium (ISM) exhibits diffractive scintillation, whereby the fluctuations in the intensity, $`I`$, are fully modulated: $`\langle (I-\langle I\rangle )^2\rangle /\langle I\rangle ^2=1`$. If $`r_{\mathrm{diff}}`$ is the length scale on the scattering screen for which the root-mean-square phase difference is one radian, the angular size of the diffractive pattern is $`\theta _d=r_{\mathrm{diff}}/D`$, where $`D`$ is the distance between the observer and the scattering screen. The amplitude of the intensity fluctuations is suppressed if the angular size of a scintillating object, $`\theta _s`$, is comparable to $`\theta _d`$.
Finite source size effects are more pronounced at low frequency. Since the phase delay caused by the density inhomogeneities in the ISM is linearly proportional to wavelength, the scattering effect of density inhomogeneities is greater at lower frequency. The diffractive scale, which is a measure of the amplitude of the turbulent phase fluctuations, therefore decreases with frequency. Pulsar scattering data shows that the spectrum of turbulent density fluctuations in the ISM, $`\mathrm{\Phi }(\mathbf{q})`$, is consistent with a power law: $`\mathrm{\Phi }(\mathbf{q})\propto q^{-\beta }`$ over $`5`$ decades in wavenumber (Armstrong, Rickett & Spangler 1995). For such behaviour, the diffractive scale varies as $`r_{\mathrm{diff}}\propto \nu ^{2/(\beta -2)}`$, with evidence for both $`\beta =11/3`$ and $`\beta =4`$ for the Vela pulsar (Johnston et al. 1998).
Gwinn et al. (1997) (see also these proceedings) measured $`m=0.87`$ for the Vela pulsar at 2.3 GHz and, attributing the deviation from $`m=1`$ to a source size effect, used the theory of diffractive scintillation to derive a source size of $`500`$ km. However, as $`\theta _d/\theta _s`$ decreases for the stronger scattering encountered at lower frequencies the modulation index is also expected to decrease. For the source size stated by Gwinn et al. (1997), scintillation theory predicts that the modulation index is no larger than 0.45 if $`\beta =4`$ ($`m<0.35`$ if $`\beta =11/3`$) at 660 MHz, assuming that the size of the emission region does not decrease with frequency.
Conversely, source-size effects at higher frequencies are expected to be negligible, with the intensity probability distribution following a negative exponential distribution $`p(I)=(1/I_0)\mathrm{exp}(-I/I_0)`$ with mean intensity $`I_0`$ (e.g. Gwinn et al. 1998).
## 2. Results
We observed the Vela pulsar for 3 minutes with the Parkes telescope and the CPSR (Caltech-Parkes-Swinburne Recorder) backend to analyse the diffractive scintillation of Vela at 660 MHz, thereby testing the source-size assertion of Gwinn et al. (1997). We describe the data reduction procedure here briefly; full details of the analysis will be presented elsewhere (Macquart et al. 2000).
The CPSR system recorded a two-bit complex sampled data stream in each of two linear polarizations at a rate of 20 MHz. Each polarization stream was analysed separately. For each stream, the mean pulsar power was determined by subtracting the average off-pulse spectrum (obtained by FFTing the data stream) from the on-pulse spectrum. Although the variation of the pulsar flux density is negligible over the 20 MHz bandwidth, Faraday rotation across the band is not, as it causes the detected power in each (linear) polarization stream to vary as a function of frequency.
The spectra were then combined in groups of 10 pulses (equivalent to a third of the scintillation timescale) and normalized by the pulsar's mean power at that frequency and by the instrumental bandpass. The mean signal across the normalised band was subtracted to leave only the fluctuations in $`I(\nu )`$ across the band. The outer eighths of the band were clipped due to the tapering of the bandpass at the edges. These normalised pulsar spectra were then autocorrelated and cross-correlated to find the normalised covariance
$`\mathrm{\Gamma }(\mathrm{\Delta }\nu ,\mathrm{\Delta }t)={\displaystyle \frac{\langle [I(\nu +\mathrm{\Delta }\nu ,t+\mathrm{\Delta }t)-I(\nu ,t)]^2\rangle }{\langle I(\nu ,t)\rangle ^2}}.`$ (1)
Figure 1 shows the frequency autocorrelation function $`\mathrm{\Gamma }(\mathrm{\Delta }\nu ,0)`$. Fits to the covariance function yielded a decorrelation bandwidth $`\nu _d=244\pm 4`$ Hz for poln 0 ($`\nu _d=241\pm 8`$ Hz for poln 1) and a decorrelation timescale $`t_{\mathrm{diff}}=3.3\pm 0.3`$ s, determined from poln 0 only due to the higher signal-to-noise ratio available in this channel. The modulation indices are $`m=0.871\pm 0.003`$ (poln 0) and $`m=0.93\pm 0.03`$ (poln 1). The stated errors are those formally obtained from the fits to the data shown in Fig. 1. However, these measurements are subject to other errors:
* Intrinsic pulse-to-pulse flux variations may affect the scaling of the modulation indices. Suppose we receive a pulse $`Y`$ times stronger than the mean pulse flux density. The scintillation signal, which was normalised by the mean pulse flux density over one minute, is then measured to have a modulation index $`m_{\mathrm{meas}}=Ym_{\mathrm{real}}`$. The effect of intrinsic pulse variability is reduced by averaging together many independent pulses before calculating the autocorrelation function, and is further reduced by averaging together many such autocorrelation functions. Over the 3 min of data used, intrinsic pulse variations contribute an error $`\mathrm{\Delta }m\sim 0.01`$.
* Telescope gain variations play a similar rรดle to intrinsic pulsar variability. We estimate their effect on timescales of one minute to be less than a few percent.
* The finite spectral and temporal resolution of our observation may reduce our measurement of the modulation index due to smearing of the scintillation pattern. However, the spectral and temporal resolution used are sufficiently small compared to the decorrelation bandwidth and timescale that these effects are negligible.
In total, we estimate that the total error in our measured modulation index due to the effects mentioned above is no more than 5%.
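The reduction chain just described (normalize by the mean pulse power, subtract the band mean, correlate, read off the half-width) is summarized in the Python sketch below; the input is synthetic smoothed noise, and the array sizes and channel width are placeholders rather than the CPSR parameters.

```python
import numpy as np

def scint_parameters(I, dnu, max_lag=200):
    """Modulation index and decorrelation bandwidth (half-width at half
    maximum of the frequency ACF) from a dynamic spectrum I(t, nu)."""
    dI = I / I.mean(axis=1, keepdims=True) - 1.0     # normalized fluctuations
    m2 = (dI**2).mean()
    acf = np.array([(dI[:, : dI.shape[1] - k] * dI[:, k:]).mean()
                    for k in range(max_lag)]) / m2
    k_half = int(np.argmax(acf < 0.5))               # first lag below 1/2
    return np.sqrt(m2), k_half * dnu

# Synthetic scintillated spectrum: exponential noise smoothed in frequency.
rng = np.random.default_rng(2)
raw = rng.exponential(size=(64, 4096))
kern = np.exp(-np.arange(-40, 41)**2 / (2.0 * 10.0**2))
I = np.apply_along_axis(lambda r: np.convolve(r, kern, "same"), 1, raw)
m, nu_d = scint_parameters(I, dnu=1.0)               # dnu in arbitrary units
print(m, nu_d)
```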
In addition to the 660 MHz data, we have also obtained scintillation data at a frequency of 8.4 GHz (Macquart et al. 2000), for which one expects $`m=0.99`$ (i.e. negligible source-size effects). However, the observed intensity distribution deviates significantly below the expected negative exponential distribution at high intensity, and the modulation index is $`m\approx 0.93`$.
## 3. Discussion
Our measured modulation index $`m=0.87\pm 0.05`$ is significantly at variance with the modulation index of $`m<0.45`$ expected if the quenching of the diffractive scintillation at 2.3 GHz is a source-size effect. The 660 MHz scintillation data places an upper limit of 50 km on the size of this region.
The two main explanations for the apparent contradiction are:
* The pulsar's emission region is smaller at low frequency. The upper limit on the expected modulation index assumes that the emission region retains the same characteristic size between 2.3 GHz and 660 MHz; however, the radius-to-frequency mapping paradigm leads one to expect the emission region to be larger at low frequency and thus that the diffractive scintillation is quenched even further.
* The reduction in $`m`$ observed by Gwinn et al. is not related to the apparent source size. This explanation is supported by the 8.4 GHz scintillation data. This data may indicate that the statistics of the phase fluctuations on the scattering screen are not Gaussian, and thus that the expected intensity distribution due to scintillation of a point source is not exactly negative exponential in form.
For a complete discussion of the observing and calibration procedures used and further implications of the results see Macquart et al. (2000).
## References
Armstrong, J.W., Rickett, B.J., Spangler, S.R., 1995, ApJ, 443, 209
Gwinn, C.R. et al., 1997, ApJ, 483, L53
Gwinn, C.R. et al., 1998, ApJ, 505, 928
Gwinn, C.R. et al., 1999, ApJ, in press.
Johnston, S., Nicastro, L., Koribalski, B., 1998, MNRAS, 297, 108
Macquart, J.-P., Johnston, S., Walker, M.A., Stinebring, D.R., 2000, ApJ, submitted. |
# Connection between charge transfer and alloying core-level shifts based on density-functional calculations
## I Introduction
The concept of charge transfer is fundamental to chemistry and condensed-matter physics. Unfortunately, it is frustratingly difficult to give a precise definition of charge transfer, or even a well-defined prescription for measuring it. This is equally true for related quantities such as electronegativity and bond ionicity.
There have been a number of attempts to relate the charge transfer in an alloy to the positions of the core levels. These energies can be measured with high accuracy using X-ray spectroscopy. In general terms, the core levels of an atom are shifted when the atomโs environment changes. Interesting cases are, for example, when a crystal is formed out of free atoms, when an atom is at the surface rather than in the bulk of a solid, or when an alloy is formed out of two elemental solids.
In the case of the alloy core-level shift (the subject of this paper) a major objective has been to find a well-defined connection between the measured shift and the charge transfer between the constituents. Clearly, when charge is moved from one atom to another, an electrostatic potential builds up, which modifies the energy needed to eject an electron from a core level into the vacuum. In the simplest form of the “potential model”, the change of the potential felt by a core electron is described using a Madelung term and an on-site contribution. Unfortunately, cancellation between these effects and uncertainty in the model parameters make it difficult to extract reliable charge transfers using this approach.
Furthermore, the simple potential model is valid only for the “initial state” picture, i.e. when describing the positions of the core levels in the alloy and the pure metal before a core electron is removed. To compare with the measured binding energies, a final-state screening contribution must be taken into account: after a core hole is created, the remaining electrons relax to screen the hole. The kinetic energy of the emitted electron includes the screening energy. This can be included in the formalism (by a term generally denoted $`\delta R`$), but this adds yet another parameter whose numerical value is poorly known.
An alternative procedure is described in Ref. . It was pointed out that the relaxation energy can be measured directly via the shift of the Auger parameter (the sum of core-level ionization and Auger-energy shifts). For the compounds AuMg and AuZn, the resulting values of $`\delta R`$ are inconsistent with the estimates used in an earlier work based on the potential model, even though plausible ionicities were deduced there. A modified potential model was presented, which relates the valence charge transfer in a metal to an atomic property, namely the change in the potential at the core due to changes in the valence and core occupation numbers.
Under these circumstances, it is desirable to make a detailed theoretical analysis of a typical system using a method which can quantify the various contributions unambiguously. Here, we use ab-initio density-functional total-energy calculations to study the MgAu alloy. Using a supercell technique, the Mg, Au, and MgAu metals with and without a core hole on a selected atom can be described accurately. This cleanly separates the core-level shift into initial-state and final-state relaxation contributions, which can then be checked against the appropriate models. In addition, direct inspection of the densities of states, on-site charges, and screening charge distributions gives an understanding of the effects of alloying and the different screening responses to a core hole.
From the results, we are led to the conclusion that the final-state relaxation contribution is not small, as it changes both the sign and the magnitude of the core-level shift on the Mg atom. This happens because the screening of the Mg 1$`s`$ core hole is substantially less effective in the alloy than in the pure Mg metal. In contrast, the relaxation energy is found to be almost identical in the pure metal and the alloy for Au. By inspecting the screening density and comparing to free atom calculations, we try to offer a simple explanation for the changes in the screening properties upon alloying.
After obtaining a realistic picture of both the initial-state and the final-state screening terms, we have tried to interpret these results in terms of the potential model. However, it has not been possible to reproduce the features of the model. In brief, the potential model assumes the following connection between the shift $`\mathrm{\Delta }V`$ in the core-level binding energy and the charge transfer $`\mathrm{\Delta }q`$:
$$\mathrm{\Delta }V=(k-M)\mathrm{\Delta }q$$
(1)
where $`k\mathrm{\Delta }q`$ is the on-site Coulombic potential and $`M\mathrm{\Delta }q`$ is a Madelung term from charges on the other lattice sites. Sometimes additional terms are included, e.g. in the form
$$\mathrm{\Delta }V=k\mathrm{\Delta }q-M\mathrm{\Delta }q-\delta R-e\delta \varphi .$$
(2)
where $`\delta R`$ is the change in the final-state relaxation energy and $`\delta \varphi `$ is the change in the Fermi energy. Our calculations can supply reliable values for the well-defined quantity $`\delta R`$, but cannot assign values to poorly-defined quantities such as $`\delta \varphi `$, $`k`$, and $`M`$. A substantial effort to recover the potential model in either formulation did not lead to any quantitative or even qualitative agreement.
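As an illustration of why Eq. (2) is hard to invert, the short sketch below propagates assumed ranges for $`k`$, $`M`$, $`\delta R`$, and $`\delta \varphi `$ (all four numbers are invented for illustration, not taken from our calculations or from experiment) into the predicted shift for commonly quoted charge transfers.

```python
import numpy as np

# Purely illustrative parameter ranges (eV per electron for k and M; eV
# for dR and dphi); chosen only to expose the near-cancellation.
k_vals = np.linspace(8.0, 12.0, 5)     # on-site Coulomb term
M_vals = np.linspace(7.0, 11.0, 5)     # Madelung term
dR, dphi = 0.3, 0.1                    # relaxation and Fermi-level shifts

for dq in (0.1, 0.2):
    dV = np.array([[k * dq - M * dq - dR - dphi
                    for M in M_vals] for k in k_vals])   # Eq. (2)
    print(dq, float(dV.min()), float(dV.max()))
# For dq = 0.2 the predicted shift already ranges from about -1.0 to
# +0.6 eV: with such parameter uncertainty a measured shift cannot fix dq.
```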
## II Calculation and interpretation of core-level shifts
Certain excitation energies (such as core-level shifts and atomic ionization energies) can be obtained as the difference of the total energies of two self-consistent density-functional calculations for the ground state. This can be done whenever the excited state is formally the ground state for a different set of quantum numbers. Although the calculated eigenvalues should not be directly associated with the excitation energies, a connection can be made using Slater's transition state concept.
During an experiment such as XPS, an electron is emitted from the core state into the vacuum. The core-level binding energy is the difference of the total energies between the unperturbed, homogeneous crystal and the impurity system in which a single atom has a reduced core occupation. The first system is easy to handle using standard band-structure techniques, whereas the second requires some treatment suitable for impurities, such as use of supercells.
In a metal, a valence electron moves in from the surrounding crystal to screen the positive charge of the core hole. In effect, the core electron is thereby lifted to the Fermi level. The energy needed to do this can be expected to depend on the position of the core eigenvalue before the excitation and on the degree of screening of the core hole. The separation into “initial state” and “final-state screening” contributions can be made clearer using the transition-state concept. Within DFT, this is done using Janak's formula, which states that the derivative of the total energy with respect to some occupation number equals the corresponding eigenvalue. Applied to the present situation, the charge $`x`$ is taken from the core state of one atom in the supercell and put into the valence band, and
$$\frac{\partial E_T(x)}{\partial x}=E_\mathrm{F}-E_c(x)\equiv \epsilon _c(x),$$
(3)
where $`E_T`$ is the total energy, $`E_\mathrm{F}`$ is the Fermi energy, and $`E_c`$ is the core-level eigenvalue.
Actual calculations show that, to a very good approximation, the core eigenvalue drops in a nearly linear fashion as it is deoccupied, even though the overall core-level drop is substantial. For example, $`\epsilon _c`$ increases from 1248.6 to 1364.7 eV when the Mg 1$`s`$ occupation is reduced from two to one. Similarly, the Au 4$`f`$ state starts at 78.6 eV below the Fermi energy and drops to 93.3 eV.
Assuming a strictly linear dependence of $`\epsilon _c`$ on $`x`$, the core-level binding energy (the change in total energy when one electron is taken from the core) can be written in various illuminating ways:
$`E_T(1)-E_T(0)`$ $`=`$ $`{\displaystyle \int _0^1}\epsilon _c(x)dx`$ (4)
$`\approx `$ $`\epsilon _c(\frac{1}{2})`$ (5)
$`\approx `$ $`\frac{1}{2}[\epsilon _c(0)+\epsilon _c(1)]`$ (6)
$`\approx `$ $`\epsilon _c(0)+\frac{1}{2}[\epsilon _c(1)-\epsilon _c(0)]`$ (7)
These equations express the full core-level binding energy (CLBE) including final-state relaxation effects in terms of the eigenvalues at different occupations. Eq. 5 is Slater's transition-state rule, and Eq. 6 shows that the CLBE is the average of the eigenvalues before and after removing the core electron. In Eq. 7, the first term $`\epsilon _c(0)`$ is the initial-state CLBE and the relaxation contribution is identified as one-half of the core-eigenvalue drop upon depopulation.
This description is a useful tool to interpret the calculated results because it makes contact between core-level shifts and differences in the screening response. The core-level shift is the difference of the CLBE in two different environments, say $`A`$ and $`B`$. The initial-state core-level shift is the difference of the static core levels in the unperturbed systems. According to Eq. 7, the final-state relaxation contribution can be expressed as the difference of the core-eigenvalue drop upon depopulation. In general terms, a core level drops more strongly when the valence electrons screen the core hole less efficiently. Thus, the initial-state picture for the core-level shift is applicable if the screening of the core hole is the same in both systems. If there is a positive relaxation contribution to the core-level shift from $`A`$ to $`B`$, this shows that the core-level drop is larger and the screening less effective in system $`B`$. Conversely, a negative relaxation contribution indicates that screening is more effective in $`B`$.
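The bookkeeping of Eqs. (4)–(7) can be made explicit with the Mg 1$`s`$ numbers quoted above (eigenvalues in eV, measured from the Fermi level); the short sketch below separates the full CLBE into its initial-state and relaxation parts.

```python
# Mg 1s binding eigenvalue (relative to E_F) at core occupations 2 (x = 0)
# and 1 (x = 1), from the supercell calculations quoted in the text (eV).
eps0, eps1 = 1248.6, 1364.7

clbe_initial = eps0                    # initial-state estimate, Eq. (7)
clbe_full = 0.5 * (eps0 + eps1)        # Eq. (6): average of the eigenvalues
relaxation = 0.5 * (eps1 - eps0)       # Eq. (7): half the eigenvalue drop

print(clbe_initial, clbe_full, relaxation)   # 1248.6, 1306.65, 58.05
# A core-level SHIFT between two environments A and B then splits into the
# difference of the eps0 values (initial state) plus the difference of the
# relaxation terms (final-state screening).
```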
## III Calculational procedure
To determine the initial-state CLBE, a calculation for the unperturbed periodic systems is adequate. Hereby it is advantageous to use an all-electron method, which gives the core eigenvalues directly. For the complete CLBE including final-state relaxation, a supercell is used and the difference of the total energy with and without a core hole on one atom is evaluated. As discussed, the electron taken from the core state is placed into the valence band. For additional information, the promoted charge can take non-integer values. Within a self-consistent DFT calculation, the important screening effects should be described accurately. Note that the properties of the surface, specifically the work function, do not enter either description. This must be the case for an acceptable model since the true core binding energy relative to the Fermi level, expressed as the difference of two total energies, is a bulk property.
The electronic-structure and total-energy calculations presented here were done with the all-electron full-potential LMTO method, within the local-density approximation (LDA) to density-functional theory. Minimization of the energy under a constrained core occupation is rigorously justified in the DFT framework: a self-consistent calculation under the chosen constraint provides a variational total energy in the parameter subspace identified by the constraint.
We applied this technique to the Mg 1$`s`$ and Au 4$`f`$ core levels, first for the pure materials and second for the binary MgAu alloy in the CsCl structure. Accurate experimental data exist on the core level shifts upon formation of this alloy. To make comparisons between the materials easier, the fcc structure was adopted for pure Mg. To study the core-hole-excited solids, we used 16-atom supercells for both CsCl-structure MgAu, and fcc Mg and Au. The distance of the core hole from its periodic images exceeds 12 bohr in all cases, and tests show that our values for the core level shifts are converged with respect to cell dimension. The localization of the calculated density response to the core-hole perturbation, as detailed below, provides an a posteriori justification for the used supercells. The Brillouin-zone integration was done using more than 50 irreducible special points. Muffin-tin radii for Mg and Au are 2.94 and 2.60 bohr, respectively, in the pure metals and 2.50 and 2.70 in the compound at the experimental lattice constants, and are scaled with the lattice constant. All the calculations are scalar-relativistic and use the Vosko et al. parameterization of the LDA exchange-correlation potential.
## IV Results for MgAu
The calculated structural parameters for MgAu, Au and Mg are given in Table I. The results for these bulk systems are of standard DFT-LDA quality. For the calculation of the core holes, supercells were built up at the theoretical lattice constant.
Before moving to a discussion of the core-level shifts, we point out that the absolute core binding energies in Table II (referred to $`\mathrm{E}_\mathrm{F}`$ and obtained as total-energy differences) are in remarkable agreement with experiment, showing errors below 1%. The data in the table also illustrate the large drop of the core eigenvalues when an electron is removed (about 14.5 eV for Au and 120 eV for Mg). By averaging the eigenvalues before and after removal of the core electron, Eq. 6 can be verified, showing that to a good approximation the core eigenvalue indeed drops linearly relative to the Fermi energy as charge is removed.
The calculated initial-state and full core-level shifts are obtained by taking the differences of the corresponding values in Table II, leading to the values shown in Table III and (graphically) in Fig. 1. The difference between the full and initial-state shift then gives the screening contribution. The full results are in good agreement with experiment for both cases. We find that the initial-state estimate is already accurate for the Au 4$`f`$ shift, but that it is grossly incorrect and even has the wrong sign for Mg 1$`s`$. The screening contribution to the shift is thus completely different for the two types of atom: it is negligible for Au, but is the dominant contribution for Mg. The screening energies are in reasonable agreement with those deduced from Auger parameter measurements, but are incompatible with the assumptions made in earlier work.
In view of the discussion in Section II, the conclusion is that the Au $`4f`$ core hole is screened equally well in the pure metal and in the alloy. For Mg, on the other hand, the depopulated $`1s`$ core level has dropped by a larger amount in the alloy, showing that the core hole is screened significantly less effectively there than in pure Mg. Given the size of the effect, an analysis of measured core-level shifts which does not take screening into account is pointless.
The calculations reproduce the experimental core-level shifts and split these unambiguously into an initial-state and a final-state screening term. In the rest of this section, we discuss these contributions separately in view of the calculated electronic structure and the potential-type models. In the end, we will come to the conclusion that the potential models are difficult to justify on the basis of realistic calculations.
To help in the interpretation of the results, the site-resolved densities of states are presented in Figs. 2, 3, and 4. Fig. 2 compares the electronic structure of Mg, Au, and MgAu before making a core hole. The effects of a Mg $`1s`$ or Au $`4f`$ core hole on the valence states of Mg and Au are shown in Fig. 3, those in MgAu in Fig. 4.
### A Initial-state shifts
The calculated initial-state core-level binding-energy (i-CLBE) shift upon alloying Mg and Au to form MgAu is $`\mathrm{\Delta }_{\mathrm{Mg}}=-0.45`$ eV for the Mg $`1s`$ state and $`\mathrm{\Delta }_{\mathrm{Au}}=+0.71`$ eV for the Au $`4f`$ state. A charge transfer of about 0.1โ0.2 electrons from Mg to Au is generally considered reasonable, in view of the electronegativity values of 1.31 and 2.54 for Mg and Au, respectively. A reliable definition of the charge transfer (say, as a charge-density integral) from a density-functional calculation is very difficult to set up, so we will not try to verify the generally accepted value directly. However, we can examine whether the calculated initial-state shift is compatible with the potential model for this accepted value of the charge transfer. We also discuss other modeling concepts which attempt to explain the initial-state CLBE shift. In the end, an honest appraisal is that no simple model can account for the calculated values, despite extensive efforts to find one.
As described above, the basic feature of the potential model is that a charge transfer to the atoms of type $`B`$ causes a repulsive on-site Coulomb potential which pushes up the core states, reducing the (initial-state) CLBE. This effect is only partly compensated by the Madelung potential. Thus, the CLBE is reduced for those atoms which acquire additional charge, and vice versa. On the other hand, we can also present an equally simple alternative model, based on a rigid-band description, where this effect is reversed. Assume (with reference to Figs. 2, 3, and 4) that the density of states (DOS) of the alloy is obtained by adding together the two DOS of the constituents, shifted vertically to line up in some way which reflects the bonding. The charge on an atom in the alloy is then simply related to the position of the shared alloy Fermi energy with respect to the site-decomposed valence DOS. Furthermore, we assume that the core eigenvalues are at a fixed position relative to the valence band, so that the core levels track the shifts of the DOS. Since the CLBE is defined relative to the Fermi energy, it follows that charge transfer to sites of type $`B`$ is associated with an upward shift of the Fermi energy and a larger initial-state CLBE.
In the case of MgAu, we are assuming that there is charge transfer from Mg to Au, and have calculated that there is an increase of the Au $`4f`$ CLBE in the alloy. The Mg atom has lost some charge by alloying and has a reduced CLBE. Even if only the signs are considered, these features are incompatible with the potential model, but agree with the rigid-band description. In fact, we can become ambitious and try to use the calculated values of the density of states at the Fermi level for the pure Mg and Au materials ($`D_F`$(Mg)=0.46 states/eV and $`D_F`$(Au)=0.33 states/eV) to connect the CLBE shifts with the charge transfer. Assuming a reasonably flat DOS at the Fermi energy, the transferred charge for an atom of a certain type is approximately equal to
$$\mathrm{\Delta }q=D_\mathrm{F}\mathrm{\Delta }\epsilon _c$$
which yields $`-0.21`$ and $`+0.23`$ electrons for Mg and Au, respectively. These values are close to the generally accepted charge transfer for this type of alloy; furthermore, the charge transferred away from Mg is close to the charge transferred to the Au atom.
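As a quick check of this arithmetic, the rigid-band estimate can be evaluated directly from the numbers quoted above (a minimal sketch; the variable names are ours, and the inputs are simply the calculated values given in the text):

```python
# Rigid-band estimate of the charge transfer, Delta q = D_F * Delta eps_c.
D_F   = {"Mg": 0.46, "Au": 0.33}   # DOS at the Fermi level (states/eV), pure metals
shift = {"Mg": -0.45, "Au": 0.71}  # initial-state CLBE shifts (eV) upon alloying

for atom in ("Mg", "Au"):
    dq = D_F[atom] * shift[atom]
    print(f"{atom}: Delta q = {dq:+.2f} electrons")
# Mg: Delta q = -0.21 (charge lost); Au: Delta q = +0.23 (charge gained)
```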
Unfortunately, this gratifying result must be considered accidental, for several reasons. Foremost is that the alloy has a substantially smaller volume than the sum of the volumes of the constituents: the cell volumes are 113.3, 146.9, and 225.9 bohr<sup>3</sup> for Au, Mg, and MgAu, respectively. It makes sense to assign the shrinkage of 13% to the softer Mg atom. Thus, a more correct description could be to "prepare" the Mg atom by compressing it to a smaller volume, and then to form the alloy from this compressed Mg and the Au crystal. The total i-CLBE shift is then the sum of the effects of the two steps. Independent of whether the shrinkage is assigned to the Mg or the Au atoms, we are now considering a simpler system in which an alloy is formed without any volume change. If the rigid-band model is a reasonable description, it should be equally applicable here. When the Mg bulk is compressed, the $`1s`$ core level moves up by 0.51 eV, reducing the CLBE from 1248.62 eV to 1248.11 eV. This value is almost equal to that in the alloy, so that the estimated charge transfer for the Mg atom from the rigid-band model now comes out close to zero. Unfortunately, this is not compatible with the charge of 0.23 electrons added to the Au site, throwing the perceived success of the rigid-band model into doubt.
At this stage, it can be speculated that shifts of the Fermi energy should be included, arising from changes in the electronic structure due to alloying. Indeed, the plots of the DOS in Figs. 2 to 4 show that the rigid-band assumption is not conspicuously well satisfied. To demonstrate that all kinds of other effects of similar magnitude would still be neglected, we focus on just one aspect, namely the role of $`sp`$ to $`d`$ charge promotion during alloying. This can be most easily investigated in an ASA calculation, where the total crystal volume is assigned to atomic spheres. The ASA result for the initial-state CLBE shifts upon alloying Mg and Au reproduces the full-potential calculation reasonably well. The interpretation of the CLBE shift can now be given a new dimension, since we can directly investigate the response of the core eigenvalues to changes in the $`sp`$ and $`d`$ charges. We obtain response parameters $`\partial \epsilon _c/\partial Q_\ell `$ of about 3 eV/electron for the $`sp`$ states and about 1.5 eV/electron for the $`d`$ states. Furthermore, we can inspect the changes in the partial charges $`Q_\ell `$ when the Mg atom is taken from the pure (compressed) Mg crystal and placed in the alloy, obtaining $`\mathrm{\Delta }Q_{sp}\approx -0.21`$ and $`\mathrm{\Delta }Q_d\approx +0.28`$ electrons. Thus, only 0.07 electrons are added to the Mg atomic sphere, but about 0.25 electrons are promoted from the $`sp`$ to the $`d`$ states. Combined with the response parameters, it follows that a contribution to the initial-state CLBE shift of about 0.3 eV should be attributed to the $`sp`$ to $`d`$ promotion. Altogether, this shows that not only do the core states shift relative to the valence band when charge is transferred, but the size of the effect also differs significantly depending on the angular momentum which takes up the charge.
In sum, despite attempts in various directions, we have not been able to find a simple model which can describe the initial-state CLBE shifts caused by alloying. The simple potential model is not applicable, because even the signs are not predicted correctly. The rigid-band model seems slightly more plausible, but also suffers from a number of shortcomings. In principle, an extended model could be written down which includes numerous other relevant effects, such as $`sp`$-to-$`d`$ promotion. However, this model would be so complicated and unwieldy that the overall aim of a simple model would be negated. Having confidence in our calculations, we believe that the values of $`-0.45`$ eV and $`+0.71`$ eV for the i-CLBE shifts are reliable, but we have no convincing way to explain these numbers in simple terms.
### B Final-state screening contribution
Next, we discuss the final-state screening contribution to the CLBE shifts when Mg and Au are alloyed to make ordered MgAu. As mentioned above, the calculated screening contribution is 0.02 eV for Au and 0.70 eV for Mg (Table III). This was interpreted as follows: the screening of the Au $`4f`$ core hole happens in a way which is nearly independent of the environment. In contrast, the screening of the Mg $`1s`$ core hole is significantly different in the pure Mg bulk and in the alloy. More precisely, the core hole in the alloy is screened considerably less effectively than in pure Mg. In the following, we try to analyze this difference in the screening properties.
A major advantage of an accurate simulation such as a DFT calculation is that it can provide data which are not accessible to experiment; one example is the separation into initial-state and screening contributions. It is equally useful to use the calculation as a "microscope" to provide quantities such as the DOS or the charge density. In the present situation, we can develop a feeling for the nature of the core-hole screening by inspecting the screening density directly. This is simply the difference of the charge density with and without the core hole. The valence charge contains one additional electron and responds to the attractive potential of the core hole more or less flexibly. The screening densities are shown in Figs. 5, 6, 7, and 8 for pure Mg, pure Au, the Mg core hole in MgAu, and the Au core hole in MgAu, respectively. These plots are a central result of this paper, making it possible to think about the screening cloud in a straightforward and unambiguous way.
By comparing Figs. 5 and 6 for the pure constituents, basic differences between Mg and Au are evident. Whereas the screening cloud in Mg is wide and extended, screening in Au is performed by a localized lump of electrons. For a true transition metal this would be easy to explain: the screening electron would be taken up by the localized $`d`$ states at the Fermi energy. For Au, however, the $`d`$ shell is already full and this explanation is not possible.
Instead, the correct explanation for the localized screening in Au can be deduced from the corresponding DOS plot. On the Au atom with the core hole, the $`d`$ states are pulled down (and out of the crystal $`d`$ band) by the attractive core-hole potential. The electronic structure is similar to that of a Hg impurity in Au. In real space, the $`d`$ states contract, albeit without any change in the occupation number. To the screening cloud, this process contributes the difference of the contracted and uncontracted $`d`$ shell, which is a positive peak near the nucleus surrounded by a negative "ring." At this stage, we have not yet taken up the extra screening electron. This is done by the $`sp`$ states, which now in turn screen (and fill in) the attractive ring. Since the $`sp`$ states are more extended, this cannot be done completely, leaving some part of the negative ring visible in the total screening cloud.
For the quality of the screening, the charge closest to the nucleus is most relevant. On the Au atom this is dominated by the shrinking of the $`d`$ shell, which can be expected to be largely independent of the environment. Screening by $`sp`$ electrons is a more extended affair which can be influenced by the environment of the atom. However, for Au the $`sp`$ electrons play a less immediate role, even though they actually take up the additional screening electron. In contrast, screening of the Mg $`1s`$ core hole is done only by $`sp`$ valence electrons in the form of an extended cloud. Overall, these arguments can explain why screening is largely independent of the environment in Au but not in Mg, in agreement with the calculated results for the alloying process.
Next, we compare the screening of the core holes in the pure materials and the MgAu alloy. For the Au $`4f`$ core hole, the screening clouds in the density plots look very similar in the central $`d`$-electron lump, with some differences in the outer regions. Based on the discussion above, we can easily accept that the screening is similar in Au and MgAu and that only an insignificant contribution to the Au $`4f`$ CLBE shift is obtained. For the case of Mg, we wish to understand why the screening in the alloy is significantly less effective. Unfortunately, in this context it is again difficult to obtain a clear answer.
A first possible explanation for the less effective screening in the alloy is that charge has been transferred away from the Mg atom, leaving less charge to respond to the attractive core hole, leading to reduced screening. However, if we count the number of electrons inside a sphere of a fixed radius ($`R_0=2.8`$ bohr) we find that the sphere charge in the alloy is 0.36 electrons above that in pure Mg. This is presumably a consequence of the reduced volume in the alloy. Even though the additional charge is mainly in the outer regions of the sphere, it would seem to invalidate an explanation based on reduced available valence charge.
Secondly, a comparison of the DOS for pure Mg with the Mg site in MgAu shows that the simple-metal parabolic $`sp`$ DOS has acquired some covalent character in the alloy, with a minimum of the DOS around $`-0.2`$ Ry. It can be speculated that this leads to a somewhat more rigid valence charge density, which cannot respond as flexibly to the core-hole potential. This effect could play a role, but is neither confirmed nor invalidated by the calculation.
Finally, Fig. 7 shows antiscreening features on the neighboring Au atoms. This could be interpreted as a "variable-wavelength Friedel oscillation," whereby the Friedel wavelength changes from a value appropriate to Mg to a shorter one on the Au atoms. The antiscreening could push the first node of the screening density inwards, with a corresponding reduction of the screening charge. This explanation, while potentially applicable, also cannot be confirmed unambiguously.
## V Summary and Conclusions
In this paper, we have presented the results of ab initio density-functional theory calculations of the core-level shifts which arise upon alloying, using the prototypical intermetallic compound MgAu as an example. We were interested in the following questions: how well the experimental results can be reproduced; how the full core-level shifts can be separated into initial-state and final-state screening contributions; and whether the results can be understood in terms of simple models.
The agreement with experiment turns out to be good. The calculated core-level shifts are $`0.73`$ and $`0.25`$ eV for the Au $`4f`$ and Mg $`1s`$ states, respectively, close to the measured values of $`0.74`$ and $`0.34`$ eV. Given the complexity of the problem, these results are very satisfying. We have also found that the absolute core-level binding energies, calculated as the difference of two total energies, agree to within 1% with the experimental results.
The calculations give an unambiguous separation of each core level shift into a static initial-state contribution and a term due to the final-state screening of the core hole by the other electrons. Such a separation is central to all subsequent attempts to understand the results using simpler concepts. Somewhat unexpectedly, we find that the screening contribution is not just a small correction, but changes the picture drastically. Specifically, the shift of the Mg $`1s`$ core state changes sign when screening effects are included.
Extensive attempts were made to evaluate the calculated results in terms of simpler models. For the initial-state shifts, however, no convincing model could be found which is able to predict the calculated values. Among the descriptions considered, the well-known "simple potential model" in particular could not be confirmed. The basic difficulty is that a large number of effects influence the core-level binding energy. The situation is far too complicated to be cast into any simple model with only a few parameters. Possibly, a series of calculations for several different systems could uncover trends and help to formulate a better model, but we do not consider it plausible that an adequate general model can be found.
For the screening contribution, we have used the calculations to obtain accurate images of the screening clouds for the different cases. This information cannot be obtained from experiment and is of major help when trying to obtain insight into the nature of the screening process. Indeed, straightforward interpretations for the screening mechanism at the Mg and Au sites could be deduced. Whereas the screening of the Mg core hole is done by a relatively extended $`sp`$-electron cloud, screening in Au takes place in a two-step process. First, the full Au $`d`$ shell contracts in response to the attractive core-hole potential, then the $`sp`$ valence electrons fill up the depletion ring around the $`d`$ shell. This description is in line with the result that the Mg screening depends on the environment, while Au screening does not.
Two conclusions are drawn from the results. First, while it is possible to obtain insight into the electronic-structure changes upon alloying and the screening behavior, simple models which try to connect the alloying core-level shifts with charge transfer cannot be confirmed. This is mainly due to the complexity of the real system, which is not compatible with a description involving only a few quantities. Specifically, this means that charge transfer is only one of several quantities involved, and in fact one of the most poorly defined ones. Secondly, a full ab initio calculation can reproduce measured core-level binding energies and their shifts to very good accuracy. This shows that simpler models are not actually needed in order to interpret measured values, where such measurements are used to investigate systems with unknown properties. Instead, density-functional calculations should be used for this purpose.
# Models of the Small World: A Review
## 1 Introduction
The United Nations Department of Economic and Social Affairs estimates that the population of the world exceeded six billion people for the first time on October 12, 1999. There is no doubt that the world of human society has become quite large in recent times. Nonetheless, people routinely claim that, global statistics notwithstanding, it's still a small world. And in a certain sense they are right. Despite the enormous number of people on the planet, the structure of social networks (the map of who knows whom) is such that we are all very closely connected to one another (Kochen 1989, Watts 1999).
One of the first quantitative studies of the structure of social networks was performed in the late 1960s by Stanley Milgram, then at Harvard University (Milgram 1967). He performed a simple experiment as follows. He took a number of letters addressed to a stockbroker acquaintance of his in Boston, Massachusetts, and distributed them to a random selection of people in Nebraska. (Evidently, he considered Nebraska to be about as far as you could get from Boston, in social terms, without falling off the end of the world.) His instructions were that the letters were to be sent to their addressee (the stockbroker) by passing them from person to person, and that, in addition, they could be passed only to someone whom the passer knew on a first-name basis. Since it was not likely that the initial recipients of the letters were on a first-name basis with a Boston stockbroker, their best strategy was to pass their letter to someone whom they felt was nearer to the stockbroker in some social sense: perhaps someone they knew in the financial industry, or a friend in Massachusetts.
A reasonable number of Milgram's letters did eventually reach their destination, and Milgram found that it had only taken an average of six steps for a letter to get from Nebraska to Boston. He concluded, with a somewhat cavalier disregard for experimental niceties, that six was therefore the average number of acquaintances separating the pairs of people involved, and conjectured that a similar separation might characterize the relationship of any two people in the entire world. This situation has been labeled "six degrees of separation" (Guare 1990), a phrase which has since passed into popular folklore.
Given the form of Milgram's experiment, one could be forgiven for supposing that the figure six is probably not a very accurate one. The experiment certainly contained many possible sources of error. However, the general result that two randomly chosen human beings can be connected by only a short chain of intermediate acquaintances has been subsequently verified, and is now widely accepted (Korte and Milgram 1970). In the jargon of the field this result is referred to as the small-world effect.
The small-world effect applies to networks other than networks of friends. Brett Tjaden's parlor game "The Six Degrees of Kevin Bacon" connects any pair of film actors via a chain of at most eight co-stars (Tjaden and Wasson 1997). Tom Remes has done the same for baseball players who have played on the same team (Remes 1997). With tongue very firmly in cheek, the New York Times played a similar game with the names of those who had tangled with Monica Lewinsky (Kirby and Sahre 1998).
All of this, however, seems somewhat frivolous. Why should a serious scientist care about the structure of social networks? The reason is that such networks are crucially important for communications. Most human communication, where the word is used in its broadest sense, takes place directly between individuals. The spread of news, rumors, jokes, and fashions takes place by contact between individuals. And a rumor can spread from coast to coast far faster over a social network in which the average degree of separation is six than it can over one in which the average degree is a hundred, or a million. More importantly still, the spread of disease also occurs by person-to-person contact, and the structure of networks of such contacts has a huge impact on the nature of epidemics. In a highly connected network, this year's flu, or the HIV virus, can spread far faster than in a network where the paths between individuals are relatively long.
In this paper we outline some recent developments in the theory of social networks, particularly in the characterization and modeling of networks, and in the modeling of the spread of information or disease.
## 2 Random graphs
The simplest explanation for the small-world effect uses the idea of a random graph. Suppose there is some number $`N`$ of people in the world, and on average they each have $`z`$ acquaintances. This means that there are $`\frac{1}{2}Nz`$ connections between people in the entire population. The number $`z`$ is called the coordination number of the network.
We can make a very simple model of a social network by taking $`N`$ dots ("nodes" or "vertices") and drawing $`\frac{1}{2}Nz`$ lines ("edges") between randomly chosen pairs to represent these connections. Such a network is called a random graph (Bollobรกs 1985). Random graphs have been studied extensively in the mathematics community, particularly by Erdล‘s and Rรฉnyi (1959). It is easy to see that a random graph shows the small-world effect. If a person A on such a graph has $`z`$ neighbors, and each of A's neighbors also has $`z`$ neighbors, then A has about $`z^2`$ second neighbors. Extending this argument, A also has $`z^3`$ third neighbors, $`z^4`$ fourth neighbors, and so on. Most people have between a hundred and a thousand acquaintances, so $`z^4`$ is already between about $`10^8`$ and $`10^{12}`$, which is comparable with the population of the world. In general the number $`D`$ of degrees of separation which we need to consider in order to reach all $`N`$ people in the network (also called the diameter of the graph) is given by setting $`z^D=N`$, which implies that $`D=\mathrm{log}N/\mathrm{log}z`$. This logarithmic increase in the number of degrees of separation with the size of the network is typical of the small-world effect. Since $`\mathrm{log}N`$ increases only slowly with $`N`$, it allows the number of degrees to be quite small even in very large systems.
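To get a feel for how weak this logarithmic dependence is, the estimate $`D=\mathrm{log}N/\mathrm{log}z`$ can be evaluated for a world-sized network (a small sketch; the population and acquaintance figures are just the round numbers used above):

```python
from math import log

N = 6e9                        # rough population of the world
for z in (100, 300, 1000):     # plausible numbers of acquaintances
    print(f"z = {z:4d}: D = log N / log z = {log(N) / log(z):.1f}")
# Even for z = 100 the estimated diameter is only about five.
```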
As an example of this type of behavior, Albert et al. (1999) studied the properties of the network of "hyperlinks" between documents on the World Wide Web. They estimated that, despite the fact that there were $`N\approx 8\times 10^8`$ documents on the Web at the time the study was carried out, the average distance between documents was only about 19.
There is a significant problem with the random graph as a model of social networks, however. The problem is that people's circles of acquaintance tend to overlap to a great extent. Your friend's friends are likely also to be your friends, or, to put it another way, two of your friends are likely also to be friends with one another. This means that in a real social network it is not true to say that person A has $`z^2`$ second neighbors, since many of those friends of friends are also themselves friends of person A. This property is called clustering of networks.
A random graph does not show clustering. In a random graph the probability that two of person A's friends will be friends of one another is no greater than the probability that two randomly chosen people will be. On the other hand, clustering has been shown to exist in a number of real-world networks. One can define a clustering coefficient $`C`$, which is the average fraction of pairs of neighbors of a node which are also neighbors of each other. In a fully connected network, in which everyone knows everyone else, $`C=1`$; in a random graph $`C=z/N`$, which is very small for a large network. In real-world networks it has been found that, while $`C`$ is significantly less than 1, it is much greater than $`\mathrm{O}(N^1)`$. In Table 1, we show some values of $`C`$ calculated by Watts and Strogatz (1998) for three different networks: the network of collaborations between movie actors discussed previously, the neural network of the worm C. elegans, and the Western Power Grid of the United States. We also give the value $`C_{\mathrm{rand}}`$ which the clustering coefficient would have on random graphs of the same size and coordination number, and in each case the measured value is significantly higher than for the random graph, indicating that indeed the graph is clustered.
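The clustering coefficient is easy to measure on any graph stored as an adjacency structure. The sketch below computes $`C`$ as defined above and checks that, for a random graph, it comes out close to $`z/N`$ (a minimal implementation of our own; names and parameters are illustrative):

```python
import random

def clustering(nbrs):
    """Average over vertices of the fraction of neighbor pairs that are linked."""
    acc, counted = 0.0, 0
    for v in nbrs:
        s = list(nbrs[v])
        k = len(s)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if s[j] in nbrs[s[i]])
        acc += 2.0 * links / (k * (k - 1))
        counted += 1
    return acc / counted

# Random graph with N vertices and N*z/2 edges drawn at random.
N, z = 1000, 10
nbrs = {v: set() for v in range(N)}
edges = 0
while edges < N * z // 2:
    a, b = random.sample(range(N), 2)
    if b not in nbrs[a]:
        nbrs[a].add(b); nbrs[b].add(a)
        edges += 1

print(clustering(nbrs))   # close to z/N = 0.01 for a random graph
```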
In the same table we also show the average distance $`\ell `$ between pairs of nodes in each of these networks. This is not the same as the diameter $`D`$ of the network discussed above, which is the maximum distance between nodes, but it also scales at most logarithmically with the number of nodes on random graphs. This is easy to see, since the average distance is by definition less than or equal to the maximum distance, and so $`\ell `$ cannot increase any faster than $`D`$. As the table shows, the value of $`\ell `$ in each of the networks considered is small, indicating that the small-world effect is at work. (The precise definition of the "small-world effect" is still a matter of debate, but in the present case a reasonable definition would be that $`\ell `$ should be comparable with the value it would have on the random graph, which for the systems discussed here it is.)
So, if random graphs do not match well the properties of real-world networks, is there an alternative model which does? Such a model has been suggested by Duncan Watts and Steven Strogatz. It is described in the next section.
## 3 The small-world model of Watts and Strogatz
In order to model the real-world networks described in the last section, we need to find a way of generating graphs which have both the clustering and small-world properties. As we have argued, random graphs show the small-world effect, possessing average vertex-to-vertex distances which increase only logarithmically with the total number $`N`$ of vertices, but they do not show clusteringโthe property that two neighbors of a vertex will often also be neighbors of one another.
The opposite of a random graph, in some sense, is a completely ordered lattice, the simplest example of which is a one-dimensional lattice: a set of vertices arranged in a straight line. If we take such a lattice and connect each vertex to the $`z`$ vertices closest to it, as in Fig. 1a, then it is easy to see that most of the immediate neighbors of any site are also neighbors of one another, i.e., it shows the clustering property. Normally, we apply periodic boundary conditions to the lattice, so that it wraps around on itself in a ring (Fig. 1b), although this is just for convenience and not strictly necessary. For such a lattice we can calculate the clustering coefficient $`C`$ exactly. As long as $`z<\frac{2}{3}N`$, which it will be for almost all graphs, we find that
$$C=\frac{3(z-2)}{4(z-1)},$$
(1)
which tends to $`\frac{3}{4}`$ in the limit of large $`z`$. We can also build networks out of higher-dimensional lattices, such as square or cubic lattices, and these also show the clustering property. The value of the clustering coefficient in general dimension $`d`$ is
$$C=\frac{3(z-2d)}{4(z-d)},$$
(2)
which also tends to $`\frac{3}{4}`$ for $`z\gg 2d`$.
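Eq. (1) can be checked directly by building the ring lattice of Fig. 1b and counting connected neighbor pairs (a short self-contained sketch):

```python
# Ring lattice: N sites, each connected to its z nearest neighbors (z even).
N, z = 100, 8
nbrs = {v: {(v + d) % N for d in range(-z // 2, z // 2 + 1) if d != 0}
        for v in range(N)}

acc = 0.0
for v, s in nbrs.items():
    s = list(s)
    k = len(s)
    pairs = sum(1 for i in range(k) for j in range(i + 1, k)
                if s[j] in nbrs[s[i]])
    acc += 2.0 * pairs / (k * (k - 1))

print(acc / N)                        # measured clustering coefficient
print(3 * (z - 2) / (4 * (z - 1)))    # Eq. (1): 0.6428... for z = 8
```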
Low-dimensional regular lattices, however, do not show the small-world effect of typical vertexโ€“vertex distances which increase only slowly with system size. It is straightforward to show that for a regular lattice in $`d`$ dimensions which has the shape of a square or (hyper)cube of side $`L`$, and therefore has $`N=L^d`$ vertices, the average vertexโ€“vertex distance increases as $`L`$, or equivalently as $`N^{1/d}`$. For small values of $`d`$ this does not give us small-world behavior. In one dimension, for example, it means that the average distance increases linearly with system size. If we allow the dimension $`d`$ of the lattice to become large, then $`N^{1/d}`$ becomes a slowly increasing function of $`N`$, and so the lattice does show the small-world effect. Could this be the explanation for what we see in real networks? Perhaps real networks are roughly regular lattices of very high dimension. This explanation is in fact not unreasonable, although it has not been widely discussed. It works quite well, provided the mean coordination number $`z`$ of the vertices is much higher than twice the dimension $`d`$ of the lattice. (If we allow $`z`$ to approach $`2d`$, then the clustering coefficient, Eq. (2), tends to zero, implying that the lattice loses its clustering properties.)
Watts and Strogatz (1998), however, have proposed an alternative model for the small world, which perhaps fits better with our everyday intuitions about the nature of social networks. Their suggestion was to build a model which is, in essence, a low-dimensional regular lattice, say a one-dimensional lattice, but which has some degree of randomness in it, like a random graph, to produce the small-world effect. They suggested a specific scheme for doing this as follows. We take the one-dimensional lattice of Fig. 1b, go through each of the links on the lattice in turn, and, with some probability $`p`$, randomly "rewire" that link, meaning that we move one of its ends to a new position chosen at random from the rest of the lattice. For small $`p`$ this produces a graph which is still mostly regular but has a few connections which stretch long distances across the lattice, as in Fig. 1c. The coordination number of the lattice is still $`z`$ on average, as it was before, although the number of neighbors of any particular vertex can be greater or smaller than $`z`$.
In social terms, we can justify this model by saying that, while most people are friends with their immediate neighbors (neighbors on the same street, people that they work with, people that their friends introduce them to), some people are also friends with one or two people who are a long way away in some social sense (people in other countries, people from other walks of life, acquaintances from previous eras of their lives, and so forth). These long-distance acquaintances are represented by the long-range links in the model of Watts and Strogatz.
Clearly the values of the clustering coefficient $`C`$ for the Wattsโ€“Strogatz model with small values of $`p`$ will be close to those for the perfectly ordered lattice given above, which tend to $`\frac{3}{4}`$ for fixed small $`d`$ and large $`z`$. Watts and Strogatz also showed by numerical simulation that the average vertexโ€“vertex distance $`\ell `$ is comparable with that for a true random graph, even for quite small values of $`p`$. For example, for a random graph with $`N=1000`$ and $`z=10`$, they found that the average distance was about $`\ell =3.2`$ between two vertices chosen at random. For their rewiring model, the average distance was only slightly greater, at $`\ell =3.6`$, when the rewiring probability $`p=\frac{1}{4}`$, compared with $`\ell =50`$ for the graph with no rewired links at all. And even for $`p=\frac{1}{64}=0.0156`$, they found $`\ell =7.4`$, a little over twice the value for the random graph. Thus the model appears to show both the clustering and small-world properties simultaneously. This result has since been confirmed by further simulation, as well as by analytic work on small-world models, which is described in the next section.
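Numbers of this order are easy to reproduce. The sketch below builds a graph by the rewiring rule just described and measures $`\ell `$ by breadth-first search; the implementation details and names are our own, and exact values fluctuate from one random realization to the next, but for $`N=1000`$, $`z=10`$, $`p=1/4`$ the result should come out near the figures quoted above:

```python
import random
from collections import deque

def watts_strogatz(N, z, p):
    """Ring lattice with z neighbors per vertex; each link rewired with probability p."""
    nbrs = {v: set() for v in range(N)}
    for v in range(N):
        for d in range(1, z // 2 + 1):
            w = (v + d) % N
            if random.random() < p:       # move the far end of this link to a
                while True:               # random vertex (no self-loops/duplicates)
                    w = random.randrange(N)
                    if w != v and w not in nbrs[v]:
                        break
            nbrs[v].add(w)
            nbrs[w].add(v)
    return nbrs

def average_distance(nbrs, sources=100):
    """Mean shortest-path length, averaged over BFS trees from sampled sources."""
    total, pairs = 0, 0
    for s in random.sample(list(nbrs), sources):
        dist, queue = {s: 0}, deque([s])
        while queue:
            v = queue.popleft()
            for w in nbrs[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

print(average_distance(watts_strogatz(1000, 10, 0.25)))   # roughly 3.6
```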
## 4 Analytic and numerical results for small-world models
Most of the recent work on models of the small world has been performed using a variation of the Wattsโ€“Strogatz model suggested by Newman and Watts (1999a). In this version of the model, instead of rewiring links between sites as in Fig. 1c, extra links, often called shortcuts, are added between pairs of sites chosen at random, but no links are removed from the underlying lattice. This model is somewhat easier to analyze than the original Wattsโ€“Strogatz model, because it is not possible for any region of the graph to become disconnected from the rest, whereas this can happen in the original model. Mathematically, a disjoint section of the graph can be represented by saying that the distance from any vertex in that section to a vertex somewhere on the rest of the graph is infinite. However, this means that, when averaged over all possible realizations of the graph, the average vertexโ€“vertex distance $`\ell `$ in the model is also infinite for any finite value of $`p`$. (A similar problem in the theory of random graphs is commonly dealt with by averaging the reciprocal of the vertexโ€“vertex distance, rather than the distance itself, but this approach does not seem to have been tried for the Wattsโ€“Strogatz model.) In fact, it is possible to show that the series expansion of $`\ell /L`$ in powers of $`p`$ about $`p=0`$ is well-behaved up to order $`p^{z-1}`$, but that the expansion coefficients are infinite for all higher orders. For the version of the model where no links are ever removed, the expansion coefficients take the same values up to order $`p^{z-1}`$, but are finite for all higher orders as well. Generically, both versions of the model are referred to as small-world models, or sometimes small-world graphs.
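The shortcut version is even simpler to construct, since nothing is ever removed. Here is a minimal sketch (our own naming): one shortcut is added, with probability $`p`$, for each bond of the underlying lattice, so the mean number of shortcuts is $`pzL/2`$ in one dimension.

```python
import random

def newman_watts(N, z, p):
    """Ring lattice plus shortcuts; the underlying lattice is left intact."""
    nbrs = {v: set() for v in range(N)}
    for v in range(N):
        for d in range(1, z // 2 + 1):       # the regular ring, z neighbors each
            w = (v + d) % N
            nbrs[v].add(w); nbrs[w].add(v)
    for _ in range(N * z // 2):              # one chance per lattice bond...
        if random.random() < p:              # ...of adding a random shortcut
            a, b = random.sample(range(N), 2)
            nbrs[a].add(b); nbrs[b].add(a)
    return nbrs
```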
Many results have been derived for small-world models, and many of their other properties have been explored numerically. Here we give only a brief summary of the most important results. Barthรฉlรฉmy and Amaral (1999) conjectured that the average vertexโ€“vertex distance $`\ell `$ obeys the scaling form $`\ell =\xi G(L/\xi )`$, where $`G(x)`$ is a universal scaling function of its argument $`x`$ and $`\xi `$ is a characteristic length-scale for the model which is assumed to diverge in the limit of small $`p`$ according to $`\xi \sim p^{-\tau }`$. On the basis of numerical results, Barthรฉlรฉmy and Amaral further conjectured that $`\tau =\frac{2}{3}`$. Barrat (1999) disproved this second conjecture using a simple physical argument which showed that $`\tau `$ cannot be less than 1, and suggested on the basis of more numerical results that in fact it was exactly 1. Newman and Watts (1999b) showed that the small-world model has only one non-trivial length-scale other than the lattice spacing, which we can equate with the variable $`\xi `$ above, and which is given by
$$\xi =\frac{1}{pz}$$
(3)
for the one-dimensional model, or
$$\xi =\frac{1}{(pzd)^{1/d}}$$
(4)
in the general case. Thus $`\tau `$ must indeed be 1 for $`d=1`$, or $`\tau =1/d`$ for general $`d`$, and, since there are no other length-scales present, $`\ell `$ must be of the form
$$\ell =\frac{L}{2dz}F(pzL^d),$$
(5)
where $`F(x)`$ is another universal scaling function. (The initial factor of $`(2d)^{-1}`$ before the scaling function is arbitrary. It is chosen thus to give $`F`$ a simple limit for small values of its argument; see Eq. (6).) This scaling form is equivalent to that of Barthรฉlรฉmy and Amaral by the substitution $`G(x)=xF(x)`$ if $`\tau =1`$. It has been extensively confirmed by numerical simulation (Newman and Watts 1999a, de Menezes et al. 2000) and by series expansions (Newman and Watts 1999b) (see Fig. 2). The divergence of $`\xi `$ as $`p\rightarrow 0`$ gives something akin to a critical point in this limit. (De Menezes et al. (2000) have argued that, for technical reasons, we should refer to this point as a "first-order critical point" (Fisher and Berker 1982).) This allowed Newman and Watts (1999a) to apply a real-space renormalization-group transformation to the model in the vicinity of this point and prove that the scaling form above is exactly obeyed in the limit of small $`p`$ and large $`L`$.
Eq. (5) tells us that although the average vertexโ€“vertex distance on a small-world graph appears at first glance to be a function of three parameters ($`p`$, $`z`$, and $`L`$), it is in fact entirely determined by a single scalar function of a single scalar variable. If we know the form of this one function, then we know everything. Actually, this statement is strictly true only if $`\xi \gg 1`$, when it is safe to ignore the other length-scale in the problem, the lattice parameter of the underlying lattice. Thus, the scaling form is expected to hold only when $`p`$ is small, i.e., in the regime where the majority of a person's contacts are local and only a small fraction are long-range. (The fourth parameter $`d`$ also enters the equation, but is not on an equal footing with the others, since the functional form of $`F`$ changes with $`d`$; thus Eq. (5) does not tell us how $`\ell `$ varies with dimension.)
Both the scaling function $`F(x)`$ and the scaling variable $`x=pzL^d`$ have simple physical interpretations. The variable $`x`$ is two times the average number of shortcuts on the graph for the given value of $`p`$, and $`F(x)`$ is the average fraction by which the vertexโ€“vertex distance on the graph is reduced for the given value of $`x`$. From the results shown in Fig. 2, we can see that it takes about $`5\frac{1}{2}`$ shortcuts to reduce the average vertexโ€“vertex distance by a factor of two, and 56 to reduce it by a factor of ten.
In the limit of large $`p`$ the small-world model becomes a random graph or nearly so. Hence, we expect that the value of $`\ell `$ should scale logarithmically with system size $`L`$ when $`p`$ is large, and also, as the scaling form shows, when $`L`$ is large. On the other hand, when $`p`$ or $`L`$ is small, we expect $`\ell `$ to scale linearly with $`L`$. This implies that $`F(x)`$ has the limiting forms
$$F(x)=\{\begin{array}{cc}1\hfill & \text{for }x\ll 1\hfill \\ (\mathrm{log}x)/x\hfill & \text{for }x\gg 1\text{.}\hfill \end{array}$$
(6)
In theory there should be a leading constant in front of the large-$`x`$ form here, but, as discussed shortly, it turns out that this constant is equal to unity. The cross-over between the small- and large-$`x`$ regimes must happen in the vicinity of $`L=\xi `$, since $`\xi `$ is the only length-scale available to dictate this point.
Neither the actual distribution of path lengths in the small-world model nor the average path length $`\ell `$ has been calculated exactly yet; exact analytic calculations have proven very difficult for the model. Some exact results have been given by Kulkarni et al. (2000), who show, for example, that the value of $`\ell `$ is simply related to the mean $`\langle s\rangle `$ and mean square $`\langle s^2\rangle `$ of the shortest distance $`s`$ between two points on diametrically opposite sides of the graph, according to
$$\frac{\ell }{L}=\frac{\langle s\rangle }{L-1}-\frac{\langle s^2\rangle }{L(L-1)}.$$
(7)
Unfortunately, calculating the shortest distance between opposite points is just as difficult as calculating $`\ell `$ directly, either analytically or numerically.
Newman et al. (2000) have calculated the form of the scaling function $`F(x)`$ for $`d=1`$ small-world graphs using a mean-field-like approximation, which is exact for small or large values of $`x`$, but not in the regime where $`x\sim 1`$. Their result is
$$F(x)=\frac{4}{\sqrt{x^2+4x}}\mathrm{tanh}^{-1}\frac{x}{\sqrt{x^2+4x}}.$$
(8)
This form is also plotted in Fig. 2 (dotted line). Since Eq. (8) is exact for large $`x`$, it can be expanded about $`1/x=0`$ to show that the leading constant in the large-$`x`$ form of $`F(x)`$, Eq. (6), is $`1`$ as stated above.
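Eq. (8) is simple enough to tabulate directly (a short sketch):

```python
from math import sqrt, atanh

def F(x):
    """Mean-field scaling function of Eq. (8) for the d = 1 small-world model."""
    s = sqrt(x * x + 4.0 * x)
    return 4.0 / s * atanh(x / s)

for x in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"x = {x:6g}   F(x) = {F(x):.4f}")
# F(x) tends to 1 as x -> 0, in line with the small-x limit of Eq. (6),
# and decays slowly (logarithmically over x) for large x.
```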
Newman et al. also solved for the complete distribution of lengths between vertices in the model within their mean-field approximation. This distribution can be used to give a simple model of the spread of a disease in a small world. If a disease starts with a single person somewhere in the world, and spreads first to all the neighbors of that person, and then to all second neighbors, and so on, then the number of people $`n`$ who have the disease after $`t`$ time-steps is simply the number of people who are separated from the initial carrier by a distance of $`t`$ or less. Newman and Watts (1999b) previously gave an approximate differential equation for $`n(t)`$ on an infinite small-world graph, which they solved for the one-dimensional case; Moukarzel (1999) later solved it for the case of general $`d`$. The mean-field treatment generalizes the solution for $`d=1`$ to finite lattice sizes. (A similar mean-field result has been given for a slightly different disease-spreading model by Kleczkowski and Grenfell (1999).) The resulting form for $`n(t)`$ is shown in the inset of Fig. 2, and clearly has the right general sigmoidal shape for the spread of an epidemic. In fact, this form of $`n`$ is typical also of the standard logistic growth models of disease spread, which are mostly based on random graphs (Sattenspiel and Simon 1988, Kretschmar and Morris 1996). In the next section we consider some (slightly) more sophisticated models of disease spreading on small-world graphs.
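The sigmoidal shape of $`n(t)`$ is easy to reproduce by direct simulation: start the disease at one vertex of a small-world graph and add one shell of neighbors per time-step (a self-contained sketch; the parameters are illustrative choices of ours):

```python
import random

# Newman-Watts graph in d = 1: ring of N sites plus random shortcuts.
N, z, p = 10000, 10, 0.01
nbrs = {v: set() for v in range(N)}
for v in range(N):
    for d in range(1, z // 2 + 1):
        w = (v + d) % N
        nbrs[v].add(w); nbrs[w].add(v)
for _ in range(int(p * N * z / 2)):          # expected number of shortcuts
    a, b = random.sample(range(N), 2)
    nbrs[a].add(b); nbrs[b].add(a)

# One shell of new neighbors per time-step, starting from a single carrier.
infected, frontier, t = {0}, {0}, 0
while frontier:
    frontier = {w for v in frontier for w in nbrs[v]} - infected
    infected |= frontier
    t += 1
    print(t, len(infected))   # n(t): slow start, rapid growth, saturation at N
```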
## 5 Other models based on small-world graphs
A variety of authors have looked at dynamical systems defined on small-world graphs built using either the Wattsโ€“Strogatz rewiring method or the alternative method described in Section 4. We briefly describe a number of these studies in this section.
Watts and Strogatz (1998, Watts 1999) looked at cellular automata, simple games, and networks of coupled oscillators on small-world networks. For example, they found that it was much easier for a cellular automaton to perform the task known as density classification (Das et al. 1994) on a small-world graph than on a regular lattice; they found that in an iterated multi-player game of Prisoner's Dilemma, cooperation arose less frequently on a small-world graph than on a regular lattice; and they found that the small-world topology helped oscillator networks to synchronize much more easily than in the regular-lattice case.
Monasson (1999) investigated the eigenspectrum of the Laplacian operator on small-world graphs using a transfer-matrix method. This spectrum tells us, for example, what the normal modes would be of a system of masses and springs built with the topology of a small-world graph. Or, perhaps more usefully, it can tell us how diffusive dynamics would occur on a small-world graph; any initial state of a diffusive field can be decomposed into eigenvectors which each decay independently and exponentially, with a decay constant related to the corresponding eigenvalue. Diffusive motion might provide a simple model for the spread of information of some kind in a social network.
Barrat and Weigt (2000) have given a solution of the ferromagnetic Ising model on a $`d=1`$ small-world network using a replica method. Since the one-dimensional Ising model has no finite-temperature phase transition, we would expect no transition when $`p=0`$ and the graph is truly one-dimensional. On the other hand, as soon as $`p`$ is greater than zero, the effective dimension of the graph becomes greater than one, and increases with system size (Newman and Watts 1999b). Thus for any finite $`p`$ we would expect to see a phase transition at some finite temperature in the large-system limit. Barrat and Weigt confirmed both analytically and numerically that this is indeed the case. The Ising model is of course a highly idealized model, and its solution in this context is, to a large extent, just an interesting exercise. However, the similar problem of a Potts antiferromagnet on a general graph has real practical applications, e.g., in the solution of scheduling problems. Although this problem has not been solved on the small-world graph, Walsh (1999) has found results which indicate that it may be interesting from a computational-complexity point of view: finding a ground state for a Potts antiferromagnet on a small-world graph may be significantly harder than finding one on either a regular lattice or a random graph.
Newman and Watts (1999b) looked at the problem of disease spread on small-world graphs. As a first step away from the very simple models of disease described in the last section, they considered a disease to which only a certain fraction $`q`$ of the population is susceptible; the disease spreads neighbor to neighbor on a small-world graph, except that it only affects, and can be transmitted by, the susceptible individuals. In such a model, the disease can only spread within the connected cluster of susceptible individuals in which it first starts, which is small if $`q`$ is small, but becomes larger, and eventually infinite, as $`q`$ increases. The point at which it becomes infiniteโthe point at which an epidemic takes placeโis precisely the percolation point for site percolation with probability $`q`$ on the small-world graph. Newman and Watts gave an approximate calculation of this epidemic point, which compares reasonably favorably with their numerical simulations. Moore and Newman (2000a, 2000b) later gave an exact solution.
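A bare-bones version of this calculation is sketched below: each site is susceptible with probability $`q`$, only susceptible sites can belong to an outbreak, and the size of the largest connected cluster of susceptible sites signals the onset of an epidemic as $`q`$ grows (the graph parameters are arbitrary illustrative choices):

```python
import random

def largest_cluster(nbrs, q):
    """Site percolation: occupy sites with probability q; return largest cluster."""
    occupied = {v for v in nbrs if random.random() < q}
    seen, best = set(), 0
    for start in occupied:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            size += 1
            for w in nbrs[v]:
                if w in occupied and w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

# Newman-Watts ring plus shortcuts, as in the earlier sketches.
N, z, p = 10000, 10, 0.05
nbrs = {v: set() for v in range(N)}
for v in range(N):
    for d in range(1, z // 2 + 1):
        w = (v + d) % N
        nbrs[v].add(w); nbrs[w].add(v)
for _ in range(int(p * N * z / 2)):
    a, b = random.sample(range(N), 2)
    nbrs[a].add(b); nbrs[b].add(a)

for q in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6):
    print(q, largest_cluster(nbrs, q) / N)   # jumps up near the epidemic point
```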
Lago-Fernรกndez et al. (2000) investigated the behavior of a neural network of Hodgkinโ€“Huxley neurons on a variety of graphs, including regular lattices, random graphs, and small-world graphs. They found that the presence of a high degree of clustering in the network allowed the network to establish coherent oscillation, while short average vertexโ€“vertex distances allowed the network to produce fast responses to changes in external stimuli. The small-world graph, which simultaneously possesses both of these properties, was the only graph they investigated which showed both coherence and fast response.
Kulkarni et al. (1999) studied numerically the behavior of the Bakโ€“Sneppen model of species coevolution (Bak and Sneppen 1993) on small-world graphs. This is a model which mimics the evolutionary effects of interactions between large numbers of species. The behavior of the model is known to depend on the topology of the lattice on which it is situated, and Kulkarni and co-workers suggested that the topology of the small-world graph might be closer to that of interactions in real ecosystems than the low-dimensional regular lattices on which the Bakโ€“Sneppen model is usually studied. The principal result of the simulations was that on a small-world graph the amount of evolutionary activity taking place at any given vertex varies with the coordination number of the vertex, with the most connected nodes showing the greatest activity and the least connected ones showing the smallest.
## 6 Other models of the small world
Although most of the work reviewed in this article is based on the WattsโStrogatz small-world model, a number of other models of social networks have been proposed. In Section 2 we mentioned the simple random-graph model and in Section 3 we discussed a model based on a regular lattice of high dimension. In this section we describe briefly three others which have been suggested.
One alternative to the view put forward by Watts and Strogatz is that the small-world phenomenon arises not because there are a few "long-range" connections in the otherwise short-range structure of a social network, but because there are a few nodes in the network which have unusually high coordination numbers (Kasturirangan 1999) or which are linked to a widely distributed set of neighbors. Perhaps the "six degrees of separation" effect is due to a few people who are particularly well connected. (Gladwell (1998) has written a lengthy and amusing article arguing that a septuagenarian salon proprietor in Chicago named Lois Weisberg is an example of precisely such a person.) A simple model of this kind of network is depicted in Fig. 3, in which we start again with a one-dimensional lattice, but instead of adding extra links between pairs of sites, we add a number of extra vertices in the middle which are connected to a large number of sites on the main lattice, chosen at random. (Lois Weisberg would be one of these extra sites.) This model is similar to the Wattsโ€“Strogatz model in that the addition of the extra sites effectively introduces shortcuts between randomly chosen positions on the lattice, so it should not be surprising to learn that this model also displays the small-world effect. In fact, even in the case where only one extra site is added, the model shows the small-world effect if that site is sufficiently highly connected. This case has been solved exactly by Dorogovtsev and Mendes (1999).
Another alternative model of the small world has been suggested by Albert et al. (1999) who, in their studies of the World Wide Web discussed in Section 2, concluded that the Web is dominated by a small number of very highly connected sites, as described above, but also found that the distribution of the coordination numbers of sites (the number of "hyperlinks" pointing to or from a site) is a power law, rather than being bimodal as it is in the previous model. They produced a model network of this kind as follows. Starting with a normal random graph with average coordination number $`z`$ and the desired number $`N`$ of vertices, they selected a vertex at random and added a link between it and another randomly chosen site if that addition would bring the overall distribution of coordination numbers closer to the required power law; otherwise the vertex was left as it was. If this process is repeated for a sufficiently long time, a network is generated with the correct coordination numbers, but which is in other respects a random graph. In particular, it does not show the clustering property of which such a fuss has been made in the case of the Wattsโ€“Strogatz model. Albert et al. found that their model matched the measured properties of the World Wide Web quite closely, although related work by Adamic (1999) indicates that clustering is present in the Web, so that the model is unrealistic in this respect.
It is worth noting that networks identical to those of Albert et al. can be generated in a manner much more efficient than the Monte Carlo scheme described above by simply generating $`N`$ vertices with a power law distribution of lines emerging from them (using, for instance, the transformation method (Newman and Barkema 1999)), and then joining pairs of lines together at random until none are left. If one were interested in investigating such networks numerically, this would probably be the best way to generate them.
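That stub-pairing recipe translates directly into code. The sketch below draws approximately power-law degrees by the transformation method and then joins stub ends at random; the exponent and cutoff are arbitrary illustrative choices of ours, and self-loops and multiple edges are not removed:

```python
import random

def power_law_graph(N, alpha=2.5, kmin=1, kmax=1000):
    """Configuration model: power-law degrees, stubs paired uniformly at random."""
    degrees = []
    while len(degrees) < N:
        u = 1.0 - random.random()                     # u in (0, 1]
        k = int(kmin * u ** (-1.0 / (alpha - 1.0)))   # p(k) ~ k^-alpha, roughly
        if k <= kmax:
            degrees.append(k)
    stubs = [v for v, k in enumerate(degrees) for _ in range(k)]
    if len(stubs) % 2:
        stubs.pop()                    # drop one stub if the total is odd
    random.shuffle(stubs)
    edges = list(zip(stubs[0::2], stubs[1::2]))
    return degrees, edges

degrees, edges = power_law_graph(10000)
print(max(degrees), len(edges))   # a few highly connected hubs dominate
```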
A third suggestion has been put forward by Kleinberg (1999), who argues that a model such as that of Watts and Strogatz, in which shortcuts connect vertices arbitrarily far apart with uniform probability, is a poor representation of at least some real-world situations. (Kasturirangan (1999) has made a similar point.) Kleinberg notes that in the real world, people are surprisingly good at finding short paths between pairs of individuals (Milgram's letter experiment and the Kevin Bacon game are good examples) given only local information about the structure of the network. Conversely, he has shown that no algorithm exists which is capable of finding such paths on networks of the Wattsโ€“Strogatz type, again given only local information. Thus there must be some additional properties of real-world networks which make it possible to find short paths with ease. To investigate this question further, Kleinberg has proposed a generalization of the Wattsโ€“Strogatz model in which the typical distance traversed by the shortcuts can be tuned. Kleinberg's model is based on a two-dimensional square lattice (although it could be generalized to other dimensions $`d`$ in a straightforward fashion) and has shortcuts added between pairs of vertices $`i,j`$ with a probability which falls off as a power law $`d_{ij}^{-r}`$ of the distance between them. (In this work, $`d_{ij}`$ is the "Manhattan distance" $`|x_i-x_j|+|y_i-y_j|`$, where $`(x_i,y_i)`$ and $`(x_j,y_j)`$ are the lattice coordinates of the vertices $`i`$ and $`j`$. This makes good sense, since this is also the distance in terms of links on the underlying lattice that separates those two points before the shortcuts are added. However, one could in principle generate networks using a different definition of distance, such as the Euclidean distance $`\sqrt{(x_i-x_j)^2+(y_i-y_j)^2}`$, for example.) It is then shown that for the particular value $`r=2`$ of the exponent of the power law (or $`r=d`$ for underlying lattices of $`d`$ dimensions), there exists a simple algorithm for finding a short path between two given vertices, making use only of local information. For any other value of $`r`$ the problem of finding a short path is provably much harder. This result demonstrates that there is more to the small-world effect than simply the existence of short paths.
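The key point, that $`r=2`$ permits efficient decentralized search, can be illustrated by greedy routing on a simplified version of Kleinberg's lattice: each vertex is given one long-range contact drawn with probability proportional to $`d^{-r}`$, and a message is always forwarded to whichever neighbor is closest to the target in Manhattan distance. The sketch below uses our own parameter choices and draws shortcuts lazily, only for vertices actually visited:

```python
import random

L, r = 100, 2.0                 # lattice side and shortcut exponent (r = d = 2)
sites = [(x, y) for x in range(L) for y in range(L)]

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

shortcuts = {}                  # one long-range contact per vertex, drawn lazily
def shortcut(v):
    if v not in shortcuts:
        others = [w for w in sites if w != v]
        weights = [manhattan(v, w) ** -r for w in others]
        shortcuts[v] = random.choices(others, weights)[0]
    return shortcuts[v]

def greedy_steps(src, dst):
    """Forward to the neighbor (lattice or shortcut) nearest the target."""
    v, steps = src, 0
    while v != dst:
        x, y = v
        options = [w for w in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                   if 0 <= w[0] < L and 0 <= w[1] < L]
        options.append(shortcut(v))
        v = min(options, key=lambda w: manhattan(w, dst))
        steps += 1
    return steps

trials = [greedy_steps(*random.sample(sites, 2)) for _ in range(20)]
print(sum(trials) / len(trials))   # short average delivery time for r = 2
```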
## 7 Conclusions
In this article we have given an overview of recent theoretical work on the "small-world" phenomenon. We have described in some detail the considerable body of recent results dealing with the Wattsโ€“Strogatz small-world model and its variants, including analytic and numerical results about network structure and studies of dynamical systems on small-world graphs.
What have we learned from these efforts and where is this line of research going now? The most important result is that small-world graphs, those possessing both short average person-to-person distances and “clustering” of acquaintances, show behaviors very different from either regular lattices or random graphs. Some of the more interesting such behaviors are the following:
1. These graphs show a transition with increasing number of vertices from a “large-world” regime in which the average distance between two people increases linearly with system size, to a “small-world” one in which it increases logarithmically.
2. This implies that information or disease spreading on a small-world graph reaches a number of people which increases initially as a power of time, then changes to an exponential increase, and then flattens off as the graph becomes saturated.
3. Disease models which incorporate a measure of susceptibility to infection have a percolation transition at which an epidemic sets in, whose position is influenced strongly by the small-world nature of the network.
4. Dynamical systems such as games or cellular automata show quantitatively different behavior on small-world graphs and regular lattices. Some problems, such as density classification, appear to be easier to solve on small-world graphs, while others, such as scheduling problems, appear to be harder.
5. Some real-world graphs show characteristics in addition to the small-world effect which may be important to their function. An example is the World Wide Web, which appears to have a scale-free distribution of the coordination numbers of vertices.
Research in this field is continuing in a variety of directions. Empirical work to determine the exact structure of real networks is underway in a number of groups, as well as theoretical work to determine the properties of the proposed models. And studies to determine the effects of the small-world topology on dynamical processes, although in their infancy, promise an intriguing new perspective on the way the world works.
## Acknowledgements
The author would like to thank Luis Amaral, Marc Barthélémy, Rahul Kulkarni, Cris Moore, Cristian Moukarzel, Naomi Sachs, Steve Strogatz, Toby Walsh, and Duncan Watts for useful discussions and comments. This work was supported in part by the Santa Fe Institute and DARPA under grant number ONR N00014-95-1-0975.
## References
Citations of the form cond-mat/xxxxxxx refer to the online condensed matter physics preprint archive at http://www.arxiv.org/.
* Adamic, L. A. 1999 The small world web. Available as ftp://parcftp.xerox.com/pub/dynamics/smallworld.ps.
* Albert, R., Jeong, H. and Barabási, A.-L. 1999 Diameter of the world-wide web. Nature 401, 130–131.
* Bak, P. and Sneppen, K. 1993 Punctuated equilibrium and criticality in a simple model of evolution. Physical Review Letters 71, 4083–4086.
* Barthélémy, M. and Amaral, L. A. N. 1999 Small-world networks: Evidence for a crossover picture. Physical Review Letters 82, 3180–3183.
* Barrat, A. 1999 Comment on “Small-world networks: Evidence for a crossover picture.” Available as cond-mat/9903323.
* Barrat, A. and Weigt, M. 2000 On the properties of small-world network models. European Physical Journal B 13, 547–560.
* Bollobás, B. 1985 Random Graphs. Academic Press (New York).
* Das, R., Mitchell, M., and Crutchfield, J. P. 1994 A genetic algorithm discovers particle-based computation in cellular automata. In Parallel Problem Solving in Nature, Davidor, Y., Schwefel, H. P. and Manner, R. (eds.), Springer (Berlin).
* De Menezes, M. A., Moukarzel, C. F., and Penna, T. J. P. 2000 First-order transition in small-world networks. Available as cond-mat/9903426.
* Dorogovtsev, S. N. and Mendes, J. F. F. 1999 Exactly solvable analogy of small-world networks. Available as cond-mat/9907445.
* Erdös, P. and Rényi, A. 1959 On random graphs. Publicationes Mathematicae 6, 290–297.
* Fisher, M. E. and Berker, A. N. 1982 Scaling for first-order phase transitions in thermodynamic and finite systems. Physical Review B 26, 2507–2513.
* Gladwell, M. 1998 Six degrees of Lois Weisberg. The New Yorker, 74, No. 41, 52–64.
* Guare, J. 1990 Six Degrees of Separation: A Play. Vintage (New York).
* Kasturirangan, R. 1999 Multiple scales in small-world graphs. Massachusetts Institute of Technology AI Lab Memo 1663. Also cond-mat/9904055.
* Kirby, D. and Sahre, P. 1998 Six degrees of Monica. New York Times, February 21, 1998.
* Kleczkowski, A. and Grenfell, B. T. 1999 Mean-field-type equations for spread of epidemics: The “small-world” model. Physica A 274, 355–360.
* Kleinberg, J. 1999 The small-world phenomenon: An algorithmic perspective. Cornell University Computer Science Department Technical Report 99–1776. Also http://www.cs.cornell.edu/home/kleinber/swn.ps.
* Kochen, M. 1989 The Small World. Ablex (Norwood, NJ).
* Korte, C. and Milgram, S. 1970 Acquaintance linking between white and negro populations: Application of the small world problem. Journal of Personality and Social Psychology 15, 101–118.
* Kretschmar, M. and Morris, M. 1996 Measures of concurrency in networks and the spread of infectious disease. Mathematical Biosciences 133, 165–195.
* Kulkarni, R. V., Almaas, E., and Stroud, D. 1999 Evolutionary dynamics in the Bak-Sneppen model on small-world networks. Available as cond-mat/9905066.
* Kulkarni, R. V., Almaas, E., and Stroud, D. 2000 Exact results and scaling properties of small-world networks. Physical Review E 61, 4268–4271.
* Lago-Fernández, L. F., Huerta, R., Corbacho, F., and Sigüenza, J. A. 2000 Fast response and temporal coherent oscillations in small-world networks. Physical Review Letters 84, 2758–2761.
* Milgram, S. 1967 The small world problem. Psychology Today 2, 60–67.
* Monasson, R. 1999 Diffusion, localization and dispersion relations on small-world lattices. European Physical Journal B 12, 555–567.
* Moore, C. and Newman, M. E. J. 2000a Epidemics and percolation in small-world networks. Physical Review E 61, 5678–5682.
* Moore, C. and Newman, M. E. J. 2000b Exact solution of site and bond percolation on small-world networks. Available as cond-mat/0001393.
* Moukarzel, C. F. 1999 Spreading and shortest paths in systems with sparse long-range connections. Physical Review E 60, 6263–6266.
* Newman, M. E. J. and Barkema, G. T. 1999 Monte Carlo Methods in Statistical Physics. Oxford University Press (Oxford).
* Newman, M. E. J., Moore, C., and Watts, D. J. 2000 Mean-field solution of the small-world network model. Physical Review Letters 84, 3201–3204.
* Newman, M. E. J. and Watts, D. J. 1999a Renormalization group analysis of the small-world network model. Physics Letters A 263, 341–346.
* Newman, M. E. J. and Watts, D. J. 1999b Scaling and percolation in the small-world network model. Physical Review E 60, 7332–7342.
* Remes, T. 1997 Six Degrees of Rogers Hornsby. New York Times, August 17, 1997.
* Sattenspiel, L. and Simon, C. P. 1988 The spread and persistence of infectious diseases in structured populations. Mathematical Biosciences 90, 367–383.
* Tjaden, B. and Wasson, G. 1997 Available on the internet at http://www.cs.virginia.edu/oracle/.
* Walsh, T. 1999 In Proceedings of the 16th International Joint Conference on Artificial Intelligence, Stockholm, 1999.
* Watts, D. J. 1999 Small Worlds. Princeton University Press (Princeton).
* Watts, D. J. and Strogatz, S. H. 1998 Collective dynamics of “small-world” networks. Nature 393, 440–442.
# Structure and rheology of binary mixtures in shear flow
## I Introduction
The kinetics of phase separation of a disordered system quenched into a multiphase coexistence region has been extensively studied in recent years. The main features of the process are well understood: after an early stage during which ordered domains of the equilibrium phases are formed, the segregation proceeds in the late stage by coarsening of ordered regions according to the power-law growth $`R(t)\sim t^\alpha `$ for the average domain size. In binary liquids, the existence of several regimes characterized by different exponents $`\alpha `$, due to the presence of various growth mechanisms, is well established. In these regimes the pair correlation function $`C(r,t)`$ obeys a dynamical scaling law according to which it can be written as $`C(r,t)\simeq f(r/R)`$, where $`f(x)`$ is a scaling function.
From the theoretical point of view, the most relevant progress has been achieved in the framework of the continuous approach based on the Cahn-Hilliard equation with a Ginzburg-Landau free energy functional, the time dependent Ginzburg-Landau (TDGL) model. Within this approach, which neglects hydrodynamics, the properties of the phase-separation kinetics can be efficiently studied by means of numerical simulations or analytically in the context of approximate theories, among which the so-called large-$`n`$ limit (one-loop approximation). For a vectorial system with an infinite number of components $`n`$, indeed, the TDGL model is exactly soluble. The one-loop approximation is known to provide a mean-field picture of the phase-separation process which captures the essence of the phenomenon at a semi-quantitative level.
In this paper we study the process of phase separation in a binary mixture subject to a uniform shear flow. When shear is applied to the system, the time evolution is substantially different from that of ordinary spinodal decomposition. We consider both a stationary flow and an oscillating shear.
A stationary flow induces strong deformations of the domains formed after the quench, which become anisotropic and stretched along the flow direction. Consequently the growth rate along the flow is larger than in the other directions. In some experiments a power-law increase of the typical size of the domains is observed, and a value $`\mathrm{\Delta }\alpha =\alpha _x-\alpha _{\perp }`$ in the range 0.8–1 for the difference between the exponents in the flow and in the shear directions is reported. Two-dimensional molecular dynamics simulations find a slightly smaller value. In other experimental realizations, when the shear is strong enough, stringlike domains have been observed to extend macroscopically in the direction of the flow, preventing complete phase separation. In general, the scaling behavior of sheared systems is not clearly understood, and the very existence of a scaling regime in different experimental systems is questionable.
In a previous paper we have shown that the numerical solution of the one-loop approximation to the TDGL model for phase separation under shear exhibits a generalized scaling symmetry characterized by $`\mathrm{\Delta }\alpha =1`$. In the scaling regime the structure factor and other observables exhibit the interesting feature of an oscillatory pattern, which can be related to a mechanism of storage and dissipation of elastic energy in which domains are stretched and broken cyclically. This new effect has been shown to persist up to the longest available time of our computation and represents the hallmark of a complex dynamical pattern induced by the presence of the shear. In a recent paper Rapapa and Bray, by solving the one-loop equations asymptotically, confirmed analytically the existence of a (multi)-scaling symmetry; in the long time limit, however, they do not recover the cyclical pattern described so far and they infer “that the observed oscillations are slowly-decaying preasymptotic transients”. Since their solution is obtained in the infinite time limit, however, a reference theory for the description of this remarkable phenomenon is lacking.
Given that the one-loop approximation is a mean-field solution in spirit, the natural question of its accuracy for the description of the original model arises. A numerical analysis of the exact TDGL model has been performed recently, where it is shown that the global picture of the one-loop approximation is adequate. In particular the oscillatory pattern is recovered. The existence of a scaling symmetry and the determination of the related exponents, however, have not been clearly established numerically, mainly due to finite-size limitations. The actual value of the growth exponents can be inferred by scaling or renormalization-group arguments to be $`\alpha _{\perp }=1/3`$, as in the case without shear (we stress the fact that hydrodynamic effects are neglected in this model), and $`\alpha _x=4/3`$.
The shear also induces a peculiar rheological behavior. The break-up of the stretched domains liberates an energy which gives rise to an increase $`\mathrm{\Delta }\eta `$ of the viscosity. Experiments and simulations show that the excess viscosity $`\mathrm{\Delta }\eta `$ reaches a maximum at $`t=t_m`$ and then relaxes to smaller values. The maximum of the excess viscosity is expected to occur at a fixed $`\gamma t`$ and to scale as $`\mathrm{\Delta }\eta (t_m)\sim \gamma ^{-\nu }`$. Simple scaling arguments predict $`\nu =2/3`$, but different values have been reported. All these features are adequately described by the TDGL model already at the one-loop level.
In this paper we present a complete scenario of the behavior of the TDGL model for phase separation in a shear flow in the framework of the large-$`n`$ approximation. The behavior of the system is studied along the whole time history, from the instant of the quench onward, both in the presence of a steady flow and in the case of an oscillating shear, where interesting effects are uncovered. Results are presented for two- and three-dimensional systems.
This paper is organized as follows: In Sec. 2 we specify the model and introduce the one-loop approximation that will be studied thoroughly in the following sections. Section 3 is devoted to the analysis of the behavior of the model subjected to a steady flow. In Sec. 4 the dynamics in the presence of an oscillatory shear is considered. In Sec. 5 we present a discussion of the results, debate some open problems and draw our conclusions.
## II The model
The binary mixture is described at equilibrium by a Ginzburg-Landau free-energy
$$\mathcal{F}\{\phi \}=\int d^dr\left\{\frac{a}{2}\phi ^2+\frac{b}{4}\phi ^4+\frac{\kappa }{2}|\nabla \phi |^2\right\}$$
(1)
where $`\phi `$ is the order parameter which represents the concentration difference between the two components. The values of $`b,\kappa `$ are positive for any temperature $`T`$ of the fluid. The parameter $`a`$ separates stable states of the blend with $`a>a_c(T)`$ ($`a_c(T)\le 0`$) from the thermodynamically unstable states with $`a<a_c(T)`$, where the system phase separates. The time evolution of the order parameter is given by the convection-diffusion equation
$$\frac{\partial \phi }{\partial t}+\nabla \cdot (\phi \stackrel{}{v})=\mathrm{\Gamma }\nabla ^2\frac{\delta \mathcal{F}}{\delta \phi }+\eta $$
(2)
where the gaussian stochastic field $`\eta `$, with expectations
$`\langle \eta (\stackrel{}{r},t)\rangle `$ $`=`$ $`0`$ (3)
$`\langle \eta (\stackrel{}{r},t)\eta (\stackrel{}{r}^{},t^{})\rangle `$ $`=`$ $`-2T\mathrm{\Gamma }\nabla ^2\delta (\stackrel{}{r}-\stackrel{}{r}^{})\delta (t-t^{})`$ (4)
describes thermal fluctuations. In Eq. (2) $`\mathrm{\Gamma }`$ is a transport coefficient and the symbol $`\langle \cdots \rangle `$ indicates the ensemble average. The external velocity field here considered is of the form
$$\stackrel{}{v}=\gamma y\stackrel{}{e}_x$$
(5)
where $`\gamma `$ is the spatially homogeneous shear rate, which may however depend on time, and $`\stackrel{}{e}_x`$ is a unit vector in the flow direction. In the following we will consider a quench from an uncorrelated isotropic high-temperature initial condition at the critical composition, i.e. with $`\phi (\stackrel{}{r},0)=0`$ and $`<\phi (\stackrel{}{r},0)\phi (\stackrel{}{r}^{},0)>=\mathrm{\Delta }\delta (\stackrel{}{r}-\stackrel{}{r}^{})`$. The main observable for the description of the phase-separation kinetics is the structure factor
$$C(\stackrel{}{k},t)=<\phi (\stackrel{}{k},t)\phi (-\stackrel{}{k},t)>$$
(6)
where $`\phi (\stackrel{}{k},t)`$ is the Fourier transform of the field $`\phi (\stackrel{}{r},t)`$, solution of Eq. (2). In the high-temperature initial state we consider, one has $`C(\stackrel{}{k},0)=\mathrm{\Delta }`$.
The cubic term in the derivative $`\delta \mathcal{F}/\delta \phi `$ prevents an exact solution of Eq. (2), as in the case without shear. However, a soluble model is recovered in the one-loop approximation, which amounts to the factorization of the cubic term of Eq. (2) as
$$\phi ^3\to \langle \phi ^2\rangle \phi $$
(7)
It is possible to show that the substitution (7) becomes exact in models with a vectorial order parameter when the number $`n`$ of its components becomes infinite. Since $`\langle \phi ^2\rangle =S(t)`$ does not depend on space, due to translational invariance, the substitution (7) formally linearizes the theory. The large-$`n`$ limit is a well-developed approximation scheme in statistical mechanics which has been applied in different contexts: its validity and limitations are nowadays rather well understood.
In the large-$`n`$ approximation the dynamical equation for $`C(\stackrel{}{k},t)`$ is:
$$\frac{\partial C(\stackrel{}{k},t)}{\partial t}-\gamma k_x\frac{\partial C(\stackrel{}{k},t)}{\partial k_y}=-k^2[k^2+S(t)-1]C(\stackrel{}{k},t)+k^2T$$
(8)
where the function $`S(t)`$ is self-consistently given by
$$S(t)=\int _{|\stackrel{}{k}|<q}\frac{d\stackrel{}{k}}{(2\pi )^d}C(\stackrel{}{k},t)$$
(9)
and $`q`$ is a high momentum phenomenological cut-off. Notice that in Eq. (8) the parameters of the free-energy (1) and the mobility $`\mathrm{\Gamma }`$ have been eliminated by a redefinition of the time, space and field scales. The rheological properties of the mixture are described in terms of the shear stress
$$\sigma _{xy}(t)=-\int _{|\stackrel{}{k}|<q}\frac{d\stackrel{}{k}}{(2\pi )^d}k_xk_yC(\stackrel{}{k},t)$$
(10)
and of the first and second normal stress differences
$$\mathrm{\Delta }N_1=\int _{|\stackrel{}{k}|<q}\frac{d\stackrel{}{k}}{(2\pi )^d}[k_y^2-k_x^2]C(\stackrel{}{k},t).$$
(11)
and
$$\mathrm{\Delta }N_2=\int _{|\stackrel{}{k}|<q}\frac{d\stackrel{}{k}}{(2\pi )^d}[k_z^2-k_y^2]C(\stackrel{}{k},t).$$
(12)
For vectorial systems with $`n>d`$ ($`d`$ is the spatial dimensionality) topological defects are not stable. For large $`n`$, therefore, domains of the equilibrium phases are, strictly speaking, absent. Nevertheless, since from the solution of the one-loop equations presented below it is possible to identify characteristic growing lengths $`R_x(t)`$ and $`R_{\perp }(t)`$ in the flow and in the other directions, it is natural to interpret these quantities as the trace of the domain size after the one-loop approximation procedure has been performed. In the following we will always use the word domains in this broad sense.
## III Steady shear
In this section we consider the case of a constant shear rate $`\gamma `$. Eq. (8) can be formally integrated, yielding
$$C(\stackrel{}{k},t)=\mathrm{\Delta }e^{-\int _0^tk^2(u)[k^2(u)+S(t-u)-1]du}+2T\int _0^tk^2(u)e^{-\int _0^uk^2(s)[k^2(s)+S(t-s)-1]ds}du$$
(13)
where
$$\stackrel{}{k}(u)=\stackrel{}{k}+\gamma k_xu\stackrel{}{e}_y$$
(14)
and $`\stackrel{}{e}_y`$ is the unit vector in the shear direction normal to the flow. For steady flow it is usual to define the excess viscosity as
$$\mathrm{\Delta }\eta (t)=\frac{\sigma _{xy}(t)}{\gamma }$$
(15)
### A Analytic solution in the short and long time limit
The consistency condition (9) cannot be worked out along the whole time history of the system. For this reason, in the following sections the model equations will be solved numerically, both in $`d=2`$ and $`d=3`$. However, the model can be solved analytically in the short- and long-time limits.
Short times
For short times the linearized theory developed originally by Cahn and Hilliard for the situation with $`\gamma =0`$ can be extended to the present case. This amounts to neglecting the quartic term in the local part of the free energy (1) since in the initial high temperature state the order parameter is small. With this approximation the solution of Eq. (2) reads
$$C(\stackrel{}{k},t)=\mathrm{\Delta }e^{-\int _0^tk^2(z)[k^2(z)-1]dz}+2T\int _0^tk^2(z)e^{-\int _0^zk^2(s)[k^2(s)-1]ds}dz$$
(16)
This approach applies to the original model and to the large-$`n`$ approximation as well, because non-linear terms are neglected. It is well known that the linear theory describes the very initial transient of the phase-separation process, when domains are still forming. In this time domain the behaviour of the system in the presence of the flow is more interesting than in the simple case of an immobile fluid. A plot of the structure factor (16) is presented for a two-dimensional system in Fig. (1) for the case $`\gamma =1`$ and $`T=0`$. Initially, when domains are forming but the shear flow has not yet produced sensible effects, the structure factor evolves assuming the typical structure of a circular volcano, similarly to what happens in the case without shear. At $`\gamma t\simeq 0.5`$ the anisotropy induced by the shear produces a deformation in the profile of the edge of the volcano from a ring-like geometry into an ellipse, whose major axis forms with the positive direction of the $`k_y`$ axis an angle of approximately $`45^o`$ (see Fig. (1)). At the same time small dips start to develop in the edge at the ends of the axes of the ellipse and four peaks can be clearly observed at $`\gamma t\simeq 2`$. As time goes by, the angle formed by the major axis of the ellipse with the $`k_y`$-direction decreases and the dips in the profile of $`C(\stackrel{}{k},t)`$ along the major axis develop until $`C(\stackrel{}{k},t)`$ almost consists of two separated foils, at $`\gamma t\simeq 4`$, when a couple of peaks prevails. The same initial pattern is also observed by numerically solving the full model equation (2). At later times, however, the presence of the non-linear terms becomes fundamental and the linear theory breaks down, as in the case without shear. It is important to stress the fact that the presence of four peaks in the structure factor is exhibited already at the linear theory level of approximation. We will see in the following sections that the very existence of a multiply peaked $`C(\stackrel{}{k},t)`$ produces a rich dynamical pattern, giving rise to an oscillatory phenomenon.
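The four-peak structure can be checked directly from Eq. (16); the short sketch below evaluates the $`T=0`$ part of the linear solution on a grid (the grid sizes and the choice $`\gamma =1`$ are illustrative):

```python
import numpy as np

gamma, Delta, t = 1.0, 1.0, 2.0        # strain gamma*t = 2, as in the text

k = np.linspace(-1.5, 1.5, 101)
KX, KY = np.meshgrid(k, k, indexing="ij")
z = np.linspace(0.0, t, 200)[:, None, None]     # integration variable of Eq. (16)

# k(z) = k + gamma*kx*z e_y, Eq. (14); T = 0 part of the linear solution (16).
K2 = KX**2 + (KY + gamma * KX * z) ** 2
C = Delta * np.exp(-np.trapz(K2 * (K2 - 1.0), z.ravel(), axis=0))

i, j = np.unravel_index(np.argmax(C), C.shape)
print("strongest peak at kx = %+.2f, ky = %+.2f" % (KX[i, j], KY[i, j]))
```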
Long times
The self-consistency condition (9) has been worked out explicitly in the long-time domain. It is found that the model exhibits a multiscaling symmetry, as in the case without shear, characterized by the growth of the characteristic lengthscales as
$$R_x\sim \gamma \left(\frac{t^5}{\mathrm{ln}t}\right)^{\frac{1}{4}}$$
(17)
and
$$R_{\perp }\sim \left(\frac{t}{\mathrm{ln}t}\right)^{\frac{1}{4}}$$
(18)
in the flow direction and perpendicular to it, respectively. The excess viscosity and the normal stress differences behave as
$$\mathrm{\Delta }\eta (t)\sim \gamma ^{-2}\left(\frac{\mathrm{ln}t}{t^3}\right)^{\frac{1}{2}}$$
(19)
$$\mathrm{\Delta }N_1\sim -\mathrm{\Delta }N_2\sim \left(\frac{\mathrm{ln}t}{t}\right)^{\frac{1}{2}}$$
(20)
The same behaviors (apart from logarithmic corrections) are obtained by means of a scaling ansatz.
### B Numerical solution
We present in this section the results of the numerical integration of the large-$`n`$ equation (8), which allows us to follow the whole time history of the phase-separation process. We restrict ourselves to the case with $`T=0`$. An Euler first-order discretization scheme has been implemented in $`d=2`$ and $`d=3`$ on $`d`$-dimensional lattices with $`201`$ mesh points in each direction. For long times the structure factor is strongly peaked around typical wavevectors which move towards zero as time goes on (see Fig. 2). Given that the support of $`C(\stackrel{}{k},t)`$ also shrinks to zero, it is possible to greatly improve the quality of the numerical computation by using a self-adaptive mesh algorithm that follows the evolution of the support of the structure factor. We have solved Eq. (8) for various values of the shear rate $`\gamma `$ in the range $`[10^{-4},10^{-2}]`$. We found that the qualitative behavior is the same for all the values of $`\gamma `$ considered. From the knowledge of the structure factor we compute the characteristic lengths $`R(t)`$ as
$$R_x(t)=\frac{1}{\sqrt{\overline{k_x^2}}}$$
(21)
where
$$\overline{k_x^2}=\frac{\int d\stackrel{}{k}k_x^2C(\stackrel{}{k},t)}{\int d\stackrel{}{k}C(\stackrel{}{k},t)}$$
(22)
and the same for the other directions.
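A stripped-down version of this scheme, on a fixed mesh and at $`T=0`$ (the step sizes are illustrative; the production runs described above use the self-adaptive mesh instead), reads:

```python
import numpy as np

N, kmax = 201, 1.5
gamma, dt, Delta = 1e-3, 2e-3, 1.0
k = np.linspace(-kmax, kmax, N)
dk = k[1] - k[0]
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
C = np.full((N, N), Delta)          # uncorrelated initial condition, C(k,0) = Delta

def euler_step(C):
    S = C.sum() * dk**2 / (2.0 * np.pi) ** 2     # self-consistency, Eq. (9)
    dC_dky = np.gradient(C, dk, axis=1)          # advection term of Eq. (8)
    return C + dt * (gamma * KX * dC_dky - K2 * (K2 + S - 1.0) * C)

for _ in range(50000):
    C = euler_step(C)

# Characteristic lengths, Eqs. (21)-(22), and the excess viscosity, Eqs. (10), (15).
Rx = 1.0 / np.sqrt((KX**2 * C).sum() / C.sum())
Ry = 1.0 / np.sqrt((KY**2 * C).sum() / C.sum())
excess_eta = -(KX * KY * C).sum() * dk**2 / (2.0 * np.pi) ** 2 / gamma
```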
$`d=2`$
The behavior of $`C(\stackrel{}{k},t)`$ is shown in Fig. (2) for $`\gamma =0.001`$. Initially the evolution of the structure factor resembles the one observed in Fig. (1), where the linear theory for $`C(\stackrel{}{k},t)`$ was plotted. Later on, however, the linear theory fails because the non-linearities become relevant, and the long-time regime is entered. This is characterized by the shrinking of the support of $`C(\stackrel{}{k},t)`$ towards the origin with different rates for the shear and the flow directions, so that the tilt angle, namely the direction along which $`C`$ is aligned, decreases in time. The structure factor is divided into two separated foils which are symmetric due to the property $`C(\stackrel{}{k},t)=C(-\stackrel{}{k},t)`$. In each foil two distinct peaks can be observed, located at $`(k_{x_1},k_{y_1})`$ and $`(k_{x_2},k_{y_2})`$ with $`|k_{x_1}|\simeq 2|k_{x_2}|`$ and $`|k_{y_1}|\simeq 2|k_{y_2}|`$. Their heights change in time. The first peak to prevail is that located at $`(k_{x_1},k_{y_1})`$, while the other peak dominates later. As time elapses the two peaks are observed to prevail alternately. This oscillatory behavior continues up to the longest times of our computations.
In Fig. (3) the quantities $`(\gamma \mathrm{ln}t)^{1/4}R_x(t)`$ and $`(\gamma \mathrm{ln}t)^{1/4}R_y(t)`$ are plotted against the strain $`\gamma t`$. According to Eqs.(17,18) for long times these quantities should collapse, for different values of the shear, on two power-law mastercurves with exponents $`5/4`$ and $`1/4`$, respectively. Here we observe that the collapse is indeed good, but the predicted power-law behavior is modulated by an oscillatory pattern. These oscillations are observed to be periodic on a log-time axis and persist up to the limit of the computational time.
We now consider the rheological behavior of the mixture by plotting in Fig. (4) the quantity $`(\gamma /\mathrm{ln}t)^{1/2}\mathrm{\Delta }\eta (t)`$ against the strain. This quantity reaches a maximum at $`\gamma t\simeq 3.5`$ and then decreases, as also found in experiments. For long times Eq. (19) would predict a data collapse for different $`\gamma `$ on a single power-law master-curve with exponent $`-3/2`$. Here the situation is similar to the previous figure, in that the predicted behavior is modulated by log-time periodic oscillations. On the basis of simple scaling arguments the maximum of the excess viscosity $`\mathrm{\Delta }\eta (t_m)`$ is expected to occur at a fixed $`\gamma t`$ and to scale as $`\mathrm{\Delta }\eta (t_m)\sim \gamma ^{-\nu }`$, with $`\nu =2/3`$. These arguments do not directly apply to the one-loop approximation since, due to the mean-field nature, the exponents are different. The asymptotic solution (19) is not adequate to this early-stage effect. The $`\gamma `$ dependence of $`\mathrm{\Delta }\eta (t_m)`$ is plotted in the inset of Fig. 4, showing that a power-law behaviour with $`\nu \simeq 0.6`$ is obeyed, in partial agreement with the aforementioned scaling arguments.
In Fig. (5) we report the numerical results for the first normal stress by plotting $`(\gamma \mathrm{ln}t)^{-1/2}\mathrm{\Delta }N_1`$ against $`\gamma t`$ with $`\gamma =0.01`$. We find that $`\mathrm{\Delta }N_1(t)`$ scales asymptotically as predicted by Eq. (20), again modulated by an oscillatory pattern.
The periodic oscillations observed in all the physical observables are due to the competition between the different peaks of $`C(\stackrel{}{k},t)`$. Let us refer to the behavior of the excess viscosity to understand how this competition affects the rheological quantities, using the features of the structure factor to obtain information about the domain evolution under the action of shear. $`\mathrm{\Delta }\eta `$ reaches its first maximum when the shape of $`C(\stackrel{}{k},t)`$ is such that the peak located at $`(k_{x_1},k_{y_1})`$ prevails and the difference between the heights of the two peaks is maximal. At this time the domains are elongated by the flow and there is a prevalence of thin domains in the system. As these string-like domains are stretched further, they eventually break up into two or more domains, dissipating the stored energy. This has two effects: the excess viscosity decreases and, on the other hand, the thick domains, which have not yet been broken, prevail. In this situation the other peak of $`C(\stackrel{}{k},t)`$ (which is located at $`(k_{x_2},k_{y_2})`$ and represents the smaller features) grows faster until it prevails and $`\mathrm{\Delta }\eta `$ reaches a minimum. This behavior is reproduced with a characteristic frequency in log-time. Recently, a similar behavior has been observed in the numerical simulation of the full model equation (2).
$`d=3`$
In this section we report the results of the numerical solution of Eq. (8) in $`d=3`$. In Fig. (6) the time evolution of the structure factor in the special planes $`k_x=0`$, $`k_y=0`$ and $`k_z=0`$ is shown for $`\gamma =0.001`$. In the plane $`k_z=0`$, $`C(\stackrel{}{k},t)`$ behaves analogously to the previously discussed two-dimensional case. The structure factor on the plane $`k_y=0`$ gives information relative to the observation of the system along the shear direction: no velocity gradient is present in the plane perpendicular to this orientation, but there are different velocities in the $`x`$ and $`z`$ directions. This allows us to explain the observed behavior, which is rather different from the one observed at $`k_z=0`$. The structure factor develops initially a circular volcano, as without shear. The edge of the volcano is progressively deformed by the shear into an ellipse with the major axis along the $`k_z`$ direction. The dips in the edge of the volcano at values of $`k_x\ne 0`$ develop with time so that at $`\gamma t\simeq 1`$, $`C(\stackrel{}{k},t)`$ is made of two foils, but these are not completely separated. During the time evolution the axes of the ellipse shrink; the decrease is faster along the $`k_x`$ direction. The two foils are never completely separated and the angle formed with the $`k_z`$ direction is zero, as observed in experiments. At $`\gamma t\simeq 5`$ two well-formed peaks start to develop and grow on each foil of $`C(\stackrel{}{k},t)`$. These four peaks have the same height and their relative heights do not change in time, as can be seen at $`\gamma t=20`$ in the picture, differently from the situation on the $`k_z=0`$ plane.
In the $`k_x=0`$ plane the shear has no effect at all and the structure factor remains circular during its evolution.
The computed behavior of $`R_x(t)`$ and $`R_y(t)`$ is similar to that of the two-dimensional case. We also find $`R_z(t)\simeq R_y(t)`$, as expected.
We report in Fig. (7) the plots of $`(\gamma /\mathrm{ln}t)^{1/2}\mathrm{\Delta }\eta (t)`$, $`(\gamma \mathrm{ln}t)^{-1/2}\mathrm{\Delta }N_1(t)`$ and $`(\gamma \mathrm{ln}t)^{-1/2}\mathrm{\Delta }N_2(t)`$ as functions of $`\gamma t`$. It appears that the rheological quantities still have amplitudes which are modulated by log-time oscillations which are in phase with one another. The origin of such oscillations has to be found again in the oscillations of the peaks of the structure factor in the plane $`k_z=0`$. Since the support of $`C(\stackrel{}{k},t)`$ shrinks towards the origin faster in the $`k_z`$ than in the $`k_y`$ direction, the second normal stress difference $`\mathrm{\Delta }N_2(t)`$ is negative. This is in accordance with general experimental experience.
## IV Oscillating shear
In this Section we consider the case of a time-dependent shear rate with
$$\gamma (t)=\gamma _o\mathrm{cos}\omega t$$
(23)
This situation is of great experimental relevance, especially for probing the viscoelastic properties of the phase-separating binary mixture.
We solved Eq. (8) numerically in $`d=2`$ using the same numerical scheme as in the case of steady shear, for different values of $`\gamma _0`$ and $`\omega `$. We will describe below the case $`\gamma _0=10^{-3}`$, $`\tau =2\pi /\omega =6\times 10^3`$. The time evolution of the structure factor in the first cycle of $`\gamma (t)`$ is shown in Fig. (8). The dynamical pattern is analogous to the one with $`\gamma =const.`$ for times $`t<\tau /4`$, as can be seen at $`\gamma _ot=1.5`$. Then the time-dependent velocity field modifies the behaviour of the blend with respect to the case of a steady flow. In particular, at the end of the first oscillation, the four peaks of $`C(\stackrel{}{k},t)`$ are located at comparable distances from the origin of the $`k`$-space, differently from what is observed in Fig. (2) at $`\gamma t=6`$. The two highest maxima at $`\gamma _0t=6`$ in Fig. (8) are characterized by $`|k_y|>|k_x|`$. During the later time evolution these peaks grow and move towards the origin. The position in the $`k`$-plane of the other peaks rotates back and forth cyclically along an approximately circular path. The radius of this trajectory shrinks towards the origin at a rate comparable with that of the position of the other peaks. In the asymptotic regime the four peaks have approximately the same height and the cyclical rotation of the peak position persists. This is shown in Fig. (9), where the configurations of the structure factor are shown at each quarter of oscillation of the shear rate in the asymptotic stage.
In Fig. (10) the evolution of the characteristic lengths $`R_x(t)`$, $`R_y(t)`$ is plotted against $`\gamma _0t`$. We also plot, in the inset, the time average of these quantities over a period $`\tau `$, in order to smooth out the superimposed oscillations. Here we observe, for times $`t<\tau `$, growth laws analogous to the steady shear case, namely $`R_x(t)\sim t^{5/4}`$ and $`R_y(t)\sim t^{1/4}`$. The growth exponent of $`R_x`$ changes smoothly, from $`t\simeq \tau `$ onward, from $`5/4`$ to the asymptotic value $`1/4`$, which is reached at $`\gamma _ot\simeq 80`$ when all the four peaks of the structure factor have the same height. The gradual crossover of $`\alpha _x`$ from $`5/4`$ to $`1/4`$ can be better observed for larger values of $`\tau `$, since the regime with $`\alpha _x=5/4`$ persists for a longer time. This is shown in Fig. (11), where the evolution of $`R_x(t)`$, $`R_y(t)`$ is plotted against $`\gamma _0t`$ for $`\tau =5\times 10^5`$. For small $`\tau `$, instead, $`R_x`$ and $`R_y`$ grow with the same exponent $`1/4`$ from the beginning.
These observations suggest the following physical interpretation: for $`t<\tau /2`$, since $`\gamma `$ does not change sign, the evolution of the blend is comparable to the case with a constant shear rate. In particular, if $`\tau `$ is sufficiently large to exceed the initial stage when domains are forming, the power growth laws described in Sec. 3 for $`R_x,R_{\perp }`$ are observed, with $`\alpha _x=5/4`$ and $`\alpha _{\perp }=1/4`$. On timescales much longer than $`\tau `$, however, the network of the larger domains cannot be efficiently tilted along the flow orientation, which changes its sign periodically. This is confirmed by the behaviour of the two peaks with $`|k_y|>|k_x|`$, whose position in the $`k`$-plane moves toward the origin but does not cross the $`k_x=0`$ plane, as would be the case if the orientation of the domains corresponding to these peaks were reversed. In this situation the difference $`\mathrm{\Delta }\alpha =1`$ between the exponents in the flow and shear direction cannot be sustained, because the larger domains are not directed along the flow orientation at all times, and a growth law with the same exponent 1/4 in all the directions is obeyed. It is interesting to notice that the other peaks, which represent smaller domains formed by the break-up of the larger ones, cross the $`k_y=0`$ plane during their rotation every half period of $`\gamma `$. This suggests that these features are tilted by the oscillating shear and follow the flow orientation. Then we expect to observe in a real blend two types of domains which respond differently to the oscillations of the flow: a network of large and elongated structures which maintains the orientation imposed during the first half period of $`\gamma `$, and a multitude of more isotropic features, generated by the break-up of strained regions, which oscillate following the flow.
For studying rheological properties it is customary to introduce a complex viscosity $`\eta ^{\ast }\equiv \eta ^{\prime }-\mathrm{i}\eta ^{\prime \prime }`$ which is related to the shear stress by
$$\sigma _{xy}(t)=\gamma _o(\eta ^{\prime }\mathrm{cos}\omega t+\eta ^{\prime \prime }\mathrm{sin}\omega t)$$
(24)
when Eq. (23) holds. It is also useful to consider another representation of the shear stress given by
$$\sigma _{xy}(t)=C\mathrm{sin}(\omega t+\varphi )$$
(25)
The connection between Eqs. (24) and (25) is given by
$$C=\gamma _o\sqrt{\eta ^{\prime 2}+\eta ^{\prime \prime 2}}$$
(26)
and
$$\mathrm{tan}\varphi =\frac{\eta ^{\prime }}{\eta ^{\prime \prime }}.$$
(27)
By defining $`\gamma ^{\ast }(t)=\gamma _oe^{\mathrm{i}\omega t}`$, we can write Eq. (24) as
$$\sigma _{xy}(t)=\mathrm{Re}\left[\eta ^{\ast }\gamma ^{\ast }(t)\right]$$
(28)
In order to relate the real and imaginary parts of the viscosity to physical quantities, Eq. (24) can be cast as
$$\sigma _{xy}(t)=\eta \gamma (t)+G\int _0^t\gamma (t^{\prime })dt^{\prime }$$
(29)
where $`\eta =\eta ^{\prime }`$, $`G=\omega \eta ^{\prime \prime }`$ and the identity $`\mathrm{sin}\omega t=\omega \int _0^t\mathrm{cos}\omega t^{\prime }dt^{\prime }`$ has been used.
The coefficient $`\eta `$ in the r.h.s. of Eq. (29) multiplies the portion of the shear stress in phase with the shear rate and represents the viscosity of a viscoelastic fluid. The integral in the second term of the r.h.s. of Eq. (29) can be identified with the shear strain present in the mixture at time $`t`$. The coefficient $`G`$ is therefore the effective elastic shear modulus of the fluid. Pure viscous behavior corresponds to $`G=0`$ ($`\varphi =\pi /2`$), pure elastic behavior to $`\eta =0`$ ($`\varphi =0`$).
In order to compute $`\eta `$ and $`G`$ during the phase separation, we calculated the shear stress by numerical integration, using its general definition (10). By writing
$$\sigma _{xy}(t)=A\mathrm{cos}\omega t+B\mathrm{sin}\omega t$$
(30)
it follows that $`\eta =A(t)/\gamma _o`$ and $`G=\omega B(t)/\gamma _o`$. In general, $`A`$ and $`B`$ depend on time. During a single shear oscillation, however, we expect that Eq. (30) holds as a good approximation with constant values for $`A`$ and $`B`$. In this way $`\sigma _{xy}(t)`$ is expressed in terms of the first two coefficients in a Fourier series expansion over an interval of scaled time of duration $`2\pi `$. The values we obtain may be referred to the time that locates the middle of the interval. Thus we get
$`\eta \left(\left(m-{\displaystyle \frac{1}{2}}\right)\tau \right)`$ $`=`$ $`{\displaystyle \frac{1}{\gamma _o\pi }}{\displaystyle \int _{(m-1)2\pi }^{m2\pi }}\sigma _{xy}(t/\omega )\mathrm{cos}tdt`$ (31)
$`G\left(\left(m-{\displaystyle \frac{1}{2}}\right)\tau \right)`$ $`=`$ $`{\displaystyle \frac{\omega }{\gamma _o\pi }}{\displaystyle \int _{(m-1)2\pi }^{m2\pi }}\sigma _{xy}(t/\omega )\mathrm{sin}tdt`$ (32)
where $`m=1,2,..`$.
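In practice Eqs. (31) and (32) are projections of the measured shear stress onto $`\mathrm{cos}\omega t`$ and $`\mathrm{sin}\omega t`$ over one period; rewritten in terms of the physical time they take the form sketched below (array names are illustrative):

```python
import numpy as np

def eta_G_phi(t, sigma_xy, gamma0, omega, m):
    """Viscosity eta, elastic modulus G and phase angle phi over the m-th
    shear cycle, from Eqs. (31)-(32) written in terms of the physical time."""
    tau = 2.0 * np.pi / omega
    mask = (t >= (m - 1) * tau) & (t < m * tau)
    ts, ss = t[mask], sigma_xy[mask]
    eta = omega / (gamma0 * np.pi) * np.trapz(ss * np.cos(omega * ts), ts)
    G = omega**2 / (gamma0 * np.pi) * np.trapz(ss * np.sin(omega * ts), ts)
    phi = np.arctan(eta * omega / G)   # tan(phi) = eta'/eta'' = eta*omega/G
    return eta, G, phi
```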
In Fig. (12) we report the plots of $`\eta `$ and $`G`$ against $`\gamma _ot`$. The viscosity shows a crossover between a power-law decay with exponent $`-3/2`$ at short times and an asymptotic behavior whose exponent is $`-1/2`$. This can be explained by observing that, for the steady shear case, the dynamic viscosity $`\eta `$ coincides with the excess viscosity, which scales with the inverse of the domains volume $`V`$. When an oscillatory shear is applied, $`V`$ crosses over from an initial power-law increase $`V\sim t^{3/2}`$, similar to the one for the case with steady shear (see Fig. (3)), to a slower growth $`V\sim t^{1/2}`$, as already discussed above for $`R_x(t)`$, producing a corresponding crossover in $`\eta `$.
From the computed values of $`\eta `$ and $`G`$ we estimated the phase angle $`\varphi `$, which, according to Eqs. (27) and (29), is given by $`\varphi =\mathrm{arctan}\left({\displaystyle \frac{\eta }{G}}\omega \right)`$. In Fig. (13) we report the time evolution of $`\varphi `$ as a function of $`\gamma _ot`$. It can be seen that $`\varphi `$ decreases with time to reach an asymptotic value which is approximately 0.016. Accordingly, the system we are investigating shows in the asymptotic stage a behavior which is essentially elastic. Available experimental data confirm this behavior.
## V Summary and discussion
In this paper we have studied the kinetics of a phase-separating binary fluid, in the presence of a shear flow, by means of the TDGL model. It is nowadays well established that the corresponding model with $`\gamma =0`$ accurately describes the main features of the segregation process in binary alloys, where hydrodynamics can be neglected. In viscous fluids, such as polymeric blends, the validity of the present approach is limited to the early stage of spinodal decomposition; for longer times one should consider the full hydrodynamic description.
When shear is applied to a fluid the behavior of the system is profoundly changed in many respects, and the predictiveness of the proposed models is a matter of general debate. A discussion of possible effects of hydrodynamics is presented in the literature. Moreover, the numerical solution of the TDGL model with shear poses serious problems due to discretization limitations and finite-size effects and, although some progress has been achieved recently, a satisfactory description is nowadays not available. In this scenario it is important to devise a simple analytical scheme providing the fundamental tools for the comprehension of the fluid dynamics. A natural choice in the field of growth kinetics is the large-$`n`$ approximation, which has been thoroughly studied in the case without shear, where it has proven to give a reliable description of the segregation process, although at a mean-field level.
In this paper the behavior of the TDGL model in the one-loop approximation is studied in detail, and the whole time evolution of the blend is considered, from the quenching instant onward; the cases of a stationary flow and of an oscillating shear have been examined. In doing so we uncover a very rich dynamical pattern, where not only are some experimental findings reproduced, but new predictions are allowed. After an early stage, which is accurately described by the linear theory à la Cahn-Hilliard, the presence of the velocity field produces an anisotropic power-law growth of the characteristic lengths $`R_x`$, $`R_{\perp }`$, respectively in the flow direction and perpendicularly to it. The value of the exponent $`\alpha _{\perp }=1/4`$ in the directions perpendicular to the flow is the same as in models with vectorial conserved order parameter without shear; although the actual value of this exponent is not expected to be accurate for real fluids (since, even without shear, the exponent obtained at the same level of approximation is known to correspond to the Lifshitz-Slyozov exponent $`\alpha =1/3`$ for scalar fields), a growth exponent $`\alpha _{\perp }`$ unaffected by the presence of shear has been obtained also by scaling and renormalization-group arguments applied to the full model equations and is also measured in experiments. Moreover, a difference $`\mathrm{\Delta }\alpha =1`$ between the flow and shear exponents is also expected to be obtained by releasing the present approximation and is observed in some experiments. In the case of a stationary flow the anisotropic growth governed by these exponents is observed from the onset of the scaling regime onwards. The power-law behavior of any observable is decorated by log-time periodic oscillations. These oscillations characterize the scaling regime up to the longest simulated time, but they are not observed in the asymptotic analytic solution discussed above. Given that log-time periodicity appears to be a rather common feature, being observed, besides segregating fluids, during fracturing of heterogeneous solids and in stock market indices for instance, it would be interesting to devise an analytical approach to enlighten the origin of this new phenomenon, at least in the present model.
In experiments with real fluid systems carried out by Laufer et al. and, successively, by Mani et al. and Migler et al., a double overshoot in the time behavior of the viscosity and of the normal stress is observed, and an interpretation in terms of break-up and recombination of the domains network is proposed. On the basis of our results it is plausible that this double overshoot represents the first part of a log-time periodic phenomenon which could hopefully be detected with a suitable experimental setup. In the model we have studied, the oscillatory behaviour is due to the competition between the different maxima of a four-fold peaked structure factor. The presence of these maxima is interpreted in Sec. III B as due to the existence of different types of domains, and the recurrent prevalence of each peak is suggested to be caused by the interplay between these kinds of regions. A structure factor with four maxima has also been observed in polymer mixtures; however, to our knowledge the connection between the alternating dominance of the peaks of $`C(\stackrel{}{k},t)`$ and the overshoots observed in the viscosity and in the stresses has never been discussed so far, perhaps due to insufficient resolution, although an experimental confirmation of this hypothesis would be desirable.
When an oscillating shear is present, the anisotropic regime discussed so far for the steady shear case crosses over to an isotropic growth when domains are fully developed. In this late stage, from the analysis of the behavior of the structure factor, we conjecture again the existence of two types of domains responding differently to the flow oscillations: the network of elongated structures keeps the orientation assumed during its formation in the early stage, while small features generated by scission of strained parts oscillate in phase with the flow. In this late stage the growth kinetics is regulated by the same exponents as without flow. We are not aware of experiments reporting these features: it would be interesting to devise an experimental set-up for testing this prediction.
###### Acknowledgements.
F.C. is grateful to M. Cirillo, R. Del Sole and M. Palummo for hospitality at the University of Rome. F.C. acknowledges support by the TMR network contract ERBFMRXCT980183 and by PRA-1999 INFM and MURST (PRIN 97).
# Superconductivity in a Ferromagnetic Layered Compound
## Abstract
We examine superconductivity in layered systems with large Fermi-surface splitting due to coexisting ferromagnetic layers. In particular, the hybrid ruthenate-cuprate compound $`\mathrm{RuSr}_2\mathrm{GdCu}_2\mathrm{O}_8`$ is examined with regard to the coexistence of superconductivity and ferromagnetism, which has been observed recently. We calculate critical fields of the superconductivity taking into account the Fulde-Ferrell-Larkin-Ovchinnikov state in a model with Fermi-surfaces whose shapes are similar to those obtained by a band calculation. It is shown that the critical field is enhanced remarkably due to a Fermi-surface effect, and can be high enough to make the coexistence possible on a microscopic scale. We also clarify the direction of the spatial oscillation of the order parameter, which may be observed by scanning tunneling microscope experiments.
Recently, coexistence of superconductivity and ferromagnetism has been reported in the hybrid ruthenate-cuprate compounds $`R_{1.4}\mathrm{Ce}_{0.6}\mathrm{RuSr}_2\mathrm{Cu}_2\mathrm{O}_{10-\delta }`$ ($`R=\mathrm{Eu}`$ and Gd) and $`\mathrm{RuSr}_2\mathrm{GdCu}_2\mathrm{O}_8`$. These compounds have similar crystal structures to the high-$`T_\mathrm{c}`$ cuprate superconductor $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_7`$ except that layers of $`\mathrm{CuO}`$ chains are replaced with ruthenate layers. Experimental and theoretical studies indicate that the ruthenate layers are responsible for the ferromagnetic long-range order, while the cuprate layers are responsible for the superconductivity.
One of the remarkable features of these compounds is that the superconducting transition occurs at a temperature well below the ferromagnetic transition temperature, unlike in most of the other ferromagnetic superconductors. For example, in $`\mathrm{RuSr}_2\mathrm{GdCu}_2\mathrm{O}_8`$, the superconducting transition was observed at $`T_\mathrm{c}\simeq 46\mathrm{K}`$, whereas the ferromagnetic transition occurs at $`T_\mathrm{M}\simeq 132\mathrm{K}`$. Therefore, the ferromagnetic order can be regarded as a rigid background which is not modified very much by the appearance of the superconductivity. This picture is also supported by experimental observations.
According to the first-principles calculations by Pickett et al., magnetic fields in the cuprate layers due to the ordered spin moment in the ruthenate layers are much smaller than exchange fields mediated by electrons. The exchange fields play a role like magnetic fields which act only on the spin degrees of freedom but do not create a Lorentz force. Therefore, the present system is approximately equivalent to a quasi-two-dimensional system in magnetic fields nearly parallel to the layers.
However, such Fermi-surface splitting gives rise to a pair-breaking effect, as does that due to a parallel magnetic field. The exchange field in $`\mathrm{RuSr}_2\mathrm{GdCu}_2\mathrm{O}_8`$ is very large and seems to exceed the Pauli paramagnetic limit (Chandrasekhar-Clogston limit). The Pauli paramagnetic limit $`H_\mathrm{P}`$ at $`T=0`$ is roughly estimated from the zero-field transition temperature $`T_\mathrm{c}^{(0)}`$ by a simplified formula $`\mu _eH_\mathrm{P}=1.25T_\mathrm{c}^{(0)}`$, where $`\mu _e`$ denotes the electron magnetic moment. For $`\mathrm{RuSr}_2\mathrm{GdCu}_2\mathrm{O}_8`$, since the exchange field exists in practice, $`T_\mathrm{c}^{(0)}`$ of isolated cuprate layers is not known, but it will be appropriate to assume $`T_\mathrm{c}^{(0)}\stackrel{<}{\sim }90\mathrm{K}`$ from the transition temperature of $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_{7+\delta }`$ at the optimum electron density. Hence we obtain $`\mu _eH_\mathrm{P}\stackrel{<}{\sim }110\mathrm{K}`$ at $`T=0`$ from the above formula. On the other hand, the band calculation gives an estimation $`\mu _eB_{\mathrm{ex}}=\mathrm{\Delta }_{\mathrm{ex}}/2\simeq 25\mathrm{meV}/2\simeq 107\mathrm{K}`$. It is remarkable that the superconducting transition occurs at such a high temperature $`T_\mathrm{c}\simeq 46\mathrm{K}`$ in spite of the strong exchange field of the order of the Pauli paramagnetic limit at $`T=0`$.
There are some mechanisms by which the critical field of superconductivity exceeds the Pauli limit. For example, triplet pairing superconductivity is an important candidate. However, from their crystal structures and high transition temperatures, it is plausible that the present compounds are categorized as high-$`T_\mathrm{c}`$ cuprate superconductors, and therefore the superconductivity is due to an anisotropic singlet pairing with line nodes, which is conventionally called a $`d`$-wave pairing. For the singlet pairing, the possibility of an inhomogeneous superconducting state called a Fulde-Ferrell-Larkin-Ovchinnikov (FFLO or LOFF) state was discussed by Pickett et al. as a candidate for the mechanism.
On the possibility of the FFLO state, they pointed out that there are nearly flat areas in the Fermi-surfaces of $`\mathrm{RuSr}_2\mathrm{GdCu}_2\mathrm{O}_8`$, which favor the FFLO state. It is known that the FFLO critical field diverges at $`T=0`$ in one-dimensional models. However, if the Fermi-surfaces are too flat, nesting instabilities, such as those to spin density wave (SDW) and charge density wave (CDW) states, are favored for realistic interaction strengths. For the present compound, the nearly flat areas are not so flat that the nesting instabilities occur, but the small curvature still enhances the FFLO state.
It is also known that, even in the absence of the flat areas, the critical field is enhanced in two-dimensional (2D) systems in comparison to three-dimensional systems. Further, when the Fermi-surface structure of the system satisfies a certain condition, the critical field can reach several times the Pauli limit even in the absence of nearly flat areas. Such a Fermi-surface effect can be regarded as a kind of nesting effect analogous to those for SDW and CDW. The “nesting” effect was examined in detail in our previous papers, where 2D tight binding models are studied as examples.
Direct evidence of the FFLO state may be obtained by scanning tunneling microscope (STM) experiments. For a comparison with experimental results, the spatial structure of the order parameter should be predicted theoretically. In particular, the direction of the modulation of the order parameter is important. It may appear that the modulation must be in the direction perpendicular to the flattest area of the Fermi-surface, because then the spatial variation is minimized. However, in some 2D models, it is not perpendicular to the flattest areas. Only explicit calculations which take into account the Fermi-surface structure can clarify the direction of the modulation.
Therefore, the purposes of this paper are (1) estimation of the critical field of superconductivity, including the FFLO state, to examine the possibility of coexistence of singlet superconductivity and ferromagnetism on a microscopic scale, and (2) clarification of the direction of the spatial oscillation of the order parameter for comparison with STM experiments possible in the future. We examine a tight binding model with Fermi-surfaces whose shapes are similar to those of $`\mathrm{RuSr}_2\mathrm{GdCu}_2\mathrm{O}_8`$, because the quantities that we are calculating are sensitive to the Fermi-surface structure.
Recently, the FFLO state has been studied in a tight binding model with only nearest-neighbor hopping. It was found that the ratio of the FFLO critical field to the Pauli limit is small near half filling. Zhu et al. have hence argued that the coexistence of the superconductivity and the ferromagnetic order is difficult except in the vicinity of the ferromagnetic domains near half filling. However, some experimental results indicate coexistence on a microscopic scale and a bulk Meissner state. Here, we should note that the tight binding model with only nearest-neighbor hopping cannot reproduce the shapes of the Fermi-surfaces of $`\mathrm{RuSr}_2\mathrm{GdCu}_2\mathrm{O}_8`$. By taking into account the realistic Fermi-surface structure, we will show below that the critical field is enhanced remarkably and thus the coexistence on a microscopic scale is possible in this compound.
First, we define the tight binding model
$$H_0=\sum _{\mathbf{p}\sigma }\epsilon _{\mathbf{p}\sigma }c_{\mathbf{p}\sigma }^{\dagger }c_{\mathbf{p}\sigma }$$
$`(1)`$
with a dispersion relation
$$\epsilon _{\mathbf{p}\sigma }=-2t(\mathrm{cos}p_x+\mathrm{cos}p_y)-4t_2\mathrm{cos}p_x\mathrm{cos}p_y-\mu +h\sigma ,$$
$`(2)`$
where $`h`$ denotes the exchange field. When we apply the present theory to type II superconductors in a magnetic field $`\mathbf{B}`$, $`h`$ is written as $`h=\mu _e|\mathbf{B}|`$. We use a unit with $`t=1`$ and the lattice constant $`a=1`$ in this paper.
We take the value of the second-nearest-neighbor hopping energy $`t_2=0.6t`$, which gives shapes of the Fermi-surfaces similar to the symmetric $`\mathrm{CuO}_2`$ barrel Fermi-surfaces obtained by Pickett et al. at $`n=1.1`$, as shown in Fig. 1. Here, $`n`$ is the electron number per site.
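For a given filling, the chemical potential of the band (2) can be fixed numerically; the following sketch does this by bisection at $`h=0`$ (the mesh size is an illustrative choice):

```python
import numpy as np

t, t2 = 1.0, 0.6

def eps(px, py, mu, h=0.0, sigma=0):
    """Dispersion (2); sigma = +1 or -1 selects the exchange-split branches."""
    return (-2.0 * t * (np.cos(px) + np.cos(py))
            - 4.0 * t2 * np.cos(px) * np.cos(py) - mu + h * sigma)

def filling(mu, M=400):
    """Electrons per site at T = 0 (two spin species, h = 0)."""
    p = np.linspace(-np.pi, np.pi, M, endpoint=False)
    PX, PY = np.meshgrid(p, p)
    return 2.0 * (eps(PX, PY, mu) < 0.0).mean()

lo, hi, n_target = -8.0, 8.0, 1.1
for _ in range(60):                    # bisection for the chemical potential
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if filling(mid) < n_target else (lo, mid)
mu = 0.5 * (lo + hi)
# The Fermi surfaces of Fig. 1 are the contours eps(px, py, mu, h, sigma) = 0.
```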
We calculate the critical field in the ground state for $`n=0.922`$, applying a formula developed in our previous papers. For anisotropic pairing
$$\mathrm{\Delta }(\widehat{\mathbf{p}},\mathbf{r})=\mathrm{\Delta }_\alpha \gamma _\alpha (\widehat{\mathbf{p}})\mathrm{e}^{\mathrm{i}\mathbf{q}\cdot \mathbf{r}}$$
$`(3)`$
($`\widehat{\mathbf{p}}\equiv \mathbf{p}/|\mathbf{p}|`$), the critical field is given by
$$h_\mathrm{c}=\underset{\mathbf{q}}{\mathrm{max}}\left[\frac{\mathrm{\Delta }_{\alpha 0}}{2}\mathrm{exp}\left(-\int \frac{\mathrm{d}p_{\parallel }}{2\pi }\frac{\rho _{\perp }^\alpha (0,p_{\parallel })}{N_\alpha (0)}\mathrm{log}\left|1-\frac{\mathbf{v}_\mathrm{F}\cdot \mathbf{q}}{2h_\mathrm{c}}\right|\right)\right],$$
$`(4)`$
where $`\mathrm{\Delta }_{\alpha 0}\equiv 2\omega _\mathrm{D}\mathrm{exp}(-1/g_\alpha N_\alpha (0))\simeq 1.76k_\mathrm{B}T_\mathrm{c}`$ and $`\rho _{\perp }^\alpha (0,p_{\parallel })\equiv \rho _{\perp }(0,p_{\parallel })[\gamma _\alpha (\widehat{\mathbf{p}})]^2`$ with the momentum-dependent density of states $`\rho _{\perp }(\epsilon ,p_{\parallel })`$. Here, $`p_{\parallel }`$ denotes the momentum component along the Fermi-surface. The pairing interaction is assumed to have the form
$$V(\mathbf{p},\mathbf{p}^{\prime })=-g_\alpha \gamma _\alpha (\widehat{\mathbf{p}})\gamma _\alpha (\widehat{\mathbf{p}}^{\prime }).$$
$`(5)`$
In particular, for $`d`$-wave pairing, we use a model with
$$\gamma _d(\widehat{\mathbf{p}})\propto \mathrm{cos}p_x-\mathrm{cos}p_y,$$
$`(6)`$
where $`p_x`$ and $`p_y`$ are the momentum components on the Fermi-surface in the direction of $`\widehat{\mathbf{p}}`$. In our previous papers it was shown that the qualitative and semi-quantitative results are not sensitive to the detailed form of $`\gamma _d(\widehat{\mathbf{p}})`$ . An effective density of states $`N_\alpha (0)`$ for anisotropic pairing is defined by
$$N_\alpha (0)\equiv N(0)\left\langle [\gamma _\alpha (\widehat{\mathbf{p}})]^2\right\rangle ,$$
$`(7)`$
with the average over the Fermi-surface defined by
$$\left\langle \cdots \right\rangle =\int \frac{\mathrm{d}p_{\parallel }}{2\pi }\frac{\rho _{\parallel }(0,p_{\parallel })}{N(0)}(\cdots )\Big|_{|\mathbf{p}|=p_\mathrm{F}(p_{\parallel })},$$
$`(8)`$
where $`N(0)`$ is the density of states at the Fermi level. The Pauli limit $`H_\mathrm{P}`$ for anisotropic pairing is calculated by
$$\mu _eH_\mathrm{P}=\frac{\sqrt{\left\langle [\gamma _\alpha (\widehat{\mathbf{p}})]^2\right\rangle }}{\overline{\gamma }_\alpha }\frac{\mathrm{\Delta }_{\alpha 0}}{\sqrt{2}}$$
$`(9)`$
with
$$\frac{1}{\overline{\gamma }_\alpha }=\mathrm{exp}\left(\frac{\left\langle [\gamma _\alpha (\widehat{\mathbf{p}})]^2\mathrm{log}[1/|\gamma _\alpha (\widehat{\mathbf{p}})|]\right\rangle }{\left\langle [\gamma _\alpha (\widehat{\mathbf{p}})]^2\right\rangle }\right).$$
$`(10)`$
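To make the Fermi-surface averages in eqs. (7)-(10) concrete, here is a minimal sketch for a $`d`$-wave form factor on a circular Fermi-surface, $`\gamma _d\propto \mathrm{cos}2\theta `$; this isotropic simplification is ours, whereas the paper performs the averages over the tight-binding Fermi-surfaces with eq. (6).

```python
import numpy as np

# Illustration (ours) of eqs. (7), (9), (10) for a d-wave form factor on a
# circular Fermi surface, gamma_d(theta) = cos(2*theta); the paper instead
# averages over the tight-binding Fermi-surfaces with eq. (6).
theta = np.linspace(0.0, 2.0*np.pi, 200001, endpoint=False)
gamma = np.cos(2.0*theta)                       # normalized so max|gamma| = 1

g2 = np.mean(gamma**2)                          # <gamma^2> of eq. (7): 0.5
ag = np.clip(np.abs(gamma), 1e-12, None)        # regulator at the gap nodes
inv_gamma_bar = np.exp(np.mean(gamma**2*np.log(1.0/ag))/g2)   # eq. (10)
prefactor = np.sqrt(g2)*inv_gamma_bar           # eq. (9): mu_e H_P = prefactor*Delta0/sqrt(2)
print(f"<gamma^2> = {g2:.3f}, 1/gamma_bar = {inv_gamma_bar:.3f}")
print(f"mu_e H_P = {prefactor:.3f} * Delta_d0/sqrt(2) = {prefactor/np.sqrt(2):.3f} Delta_d0")
```

In this limit the nodes lower the Pauli limit to about $`0.61\mathrm{\Delta }_{d0}`$, compared with $`\mathrm{\Delta }_{s0}/\sqrt{2}\approx 0.71\mathrm{\Delta }_{s0}`$ for isotropic pairing.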
In the above equations, the vector $`\mathbf{q}`$ is the center-of-mass momentum of the Cooper pairs in the FFLO state. From the symmetry of the system, there are four or eight equivalent optimum vectors ($`\mathbf{q}_m`$'s), depending on whether $`\mathbf{q}`$ is in a symmetry direction or not, respectively. Actually, an arbitrary linear combination of the $`\mathrm{exp}(\mathrm{i}\mathbf{q}_m\cdot \mathbf{r})`$ gives the same second-order critical field, and the degeneracy is removed by the nonlinear term of the gap equation below the critical field . However, for the critical field and the optimum direction of the oscillation of the order parameter near the critical field, it is sufficient to take a single $`\mathbf{q}`$ as in eq. (3).
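Since eq. (4) determines $`h_\mathrm{c}`$ only implicitly, it has to be solved self-consistently at each $`\mathbf{q}`$ before maximizing. The following minimal sketch does this for the idealized case of a circular Fermi-surface with isotropic ($`s`$-wave) pairing, where the Fermi-surface average reduces to an angular mean; the circular-surface assumption is ours, and in this limit the procedure reproduces the well-known 2D result $`h_\mathrm{c}\to \mathrm{\Delta }_{\alpha 0}`$, a $`\sqrt{2}`$ enhancement over the Pauli limit $`\mathrm{\Delta }_{\alpha 0}/\sqrt{2}`$.

```python
import numpy as np

# Sketch (ours) of eq. (4) on a circular Fermi surface with s-wave pairing
# (gamma = 1, uniform density of states); units: Delta_{alpha 0} = v_F = 1.
Delta0, vF = 1.0, 1.0
theta = np.linspace(0.0, 2.0*np.pi, 8001, endpoint=False)

def F(h, q):
    # right-hand side of eq. (4) at fixed q; the average becomes an angular mean
    avg = np.mean(np.log(np.abs(1.0 - vF*q*np.cos(theta)/(2.0*h)) + 1e-14))
    return 0.5*Delta0*np.exp(-avg)

def h_of_q(q):
    # solve h = F(h, q) by bisection; F - h is decreasing for h >= q/2
    lo, hi = max(1e-3, 0.5*q*(1.0 + 1e-9)), 1.5*Delta0
    for _ in range(60):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if F(mid, q) > mid else (lo, mid)
    return 0.5*(lo + hi)

qs = np.linspace(0.0, 1.999, 400)
hs = [h_of_q(q) for q in qs]
i = int(np.argmax(hs))
print(f"max h_c = {hs[i]:.3f} Delta0 at vF*q = {qs[i]:.3f}")  # tends to Delta0 as q -> 2
print(f"enhancement over Pauli limit: {hs[i]*np.sqrt(2):.2f}")
```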
Figures 2 and 3 show the numerical results for the critical fields at $`t_2=0.6`$, together with our previous results for $`t_2=0`$ (dotted lines) . It is found that the critical fields are remarkably enhanced near the electron densities $`n\approx 1.46`$ and $`1.20`$ for the $`s`$-wave and the $`d`$-wave pairing, respectively. For example, at the electron density $`n=1.1`$, the ratios of the critical field to the Pauli paramagnetic limit are approximately equal to 1.66 and 3.19 for the $`s`$-wave and the $`d`$-wave pairing, respectively. These values (especially the latter) appear large enough to make the coexistence possible in $`\mathrm{RuSr}_2\mathrm{GdCu}_2\mathrm{O}_8`$.
In Fig. 3 for the $`d`$-wave pairing, the critical fields for both $`\phi _\mathbf{q}=\pi /4`$ and $`\phi _\mathbf{q}=0`$ are shown, but the higher one is the final result for the critical field given by eq. (4). Here, $`\phi _\mathbf{q}`$ is the angle between the optimum $`\mathbf{q}`$ and one of the crystal axes. A numerical calculation shows that the critical fields for the other values of $`\phi _\mathbf{q}`$ are lower than the higher of the critical fields for $`\phi _\mathbf{q}=\pi /4`$ and 0. Thus, the direction of the optimum wave vector $`\mathbf{q}`$ jumps from $`\phi _\mathbf{q}=\pi /4`$ to $`\phi _\mathbf{q}=0`$ at $`n\approx 1.63`$. On the other hand, for the $`s`$-wave pairing, $`\phi _\mathbf{q}=\pi /4`$ is optimum in the whole region of electron density. These behaviors differ from those for $`t_2=0`$, for which $`\phi _\mathbf{q}=0`$ .
For $`t_2=0.6`$, a cusp is seen in Fig. 2 for the $`s`$-wave pairing, whereas it does not appear in Fig. 3 for the $`d`$-wave pairing. The physical origin of the cusp at $`n\approx 1.46`$ is that the Fermi-surfaces satisfy a certain condition there, which was explained in our previous paper for $`t_2=0`$ . It is related to how the two Fermi-surfaces touch when one of them is translated by the optimum $`\mathbf{q}`$. In the present case ($`t_2=0.6`$ and $`n\approx 1.46`$), the touching occurs in the (110) direction, but because of the nodes of the order parameter the "nesting" is not efficient for the $`d`$-wave pairing. Therefore, the cusp does not appear for the $`d`$-wave pairing.
In spite of the absence of the cusp behavior, the critical field is still very large for the $`d`$-wave pairing near half filling. Figure 4 shows the nesting behavior of the Fermi-surfaces at $`t_2=0.6`$ and $`n=1.1`$. The direction of the optimum vector $`\mathbf{q}`$ is $`\phi _\mathbf{q}=\pi /4`$, and the Fermi-surfaces touch at two points (i.e., two lines in the $`p_xp_yp_z`$-space), $`(p_x,p_y)\approx (1.113\pi ,1.713\pi )`$ and $`(1.713\pi ,1.113\pi )`$. Since $`\phi _\mathbf{q}=\pi /4`$ is also the direction of a node of the $`d`$-wave order parameter, it may appear that this direction is unfavorable. In actuality, however, the critical field is remarkably enhanced for this "nesting" vector $`\mathbf{q}`$, since it gives two nesting lines that are far away from the nodes but near the flattest areas, as shown in Fig. 4. Besides, they are near both the maxima of the $`d`$-wave order parameter and the van Hove singularities, which also enhance the critical field. As the electron density increases, the two nesting lines approach the line node of the order parameter, and thus the critical field decreases.
Since the optimum direction $`\phi _\mathbf{q}=\pi /4`$ lies on a symmetry line, there are four equivalent directions, namely $`\phi _\mathbf{q}=\pm \pi /4`$ and $`\pm 3\pi /4`$. Therefore, symmetric linear combinations such as
$$\begin{array}{ccc}\hfill \mathrm{\Delta }(\mathbf{p},\mathbf{r})& \propto & \mathrm{cos}(qx^{\prime })\hfill \\ \hfill \mathrm{\Delta }(\mathbf{p},\mathbf{r})& \propto & \mathrm{cos}(qx^{\prime })+\mathrm{cos}(qy^{\prime })\hfill \end{array}$$
$`(11)`$
are promising candidates that may be observed in the present compound, where $`x^{\prime }=(x+y)/\sqrt{2}`$ and $`y^{\prime }=(x-y)/\sqrt{2}`$. In particular, 2D structures such as the latter of eq. (11) are favored at high fields .
For the FFLO state to appear, the temperature needs to be lower than the tricritical temperature $`T^{}`$ of the FFLO, BCS, and normal states. $`T^{}`$ is generally equal to about $`0.56T_\mathrm{c}^{(0)}`$ in simplified models such as eq. (5). If we apply this to the present system $`\mathrm{RuSr}_2\mathrm{GdCu}_2\mathrm{O}_8`$, the condition $`T^{}\stackrel{>}{\sim }T_\mathrm{c}\approx 46\mathrm{K}`$ requires $`T_\mathrm{c}^{(0)}\stackrel{>}{\sim }82\mathrm{K}`$, since $`46\mathrm{K}/0.56\approx 82\mathrm{K}`$. This condition on $`T_\mathrm{c}^{(0)}`$ may be relaxed by taking into account a mixing of order parameters of different symmetries, which increases $`T^{}`$ .
In conclusion, the FFLO critical field of the cuprate layers is remarkably enhanced by an effect of the Fermi-surface structure. The direction of the spatial oscillation of the order parameter is the $`(110)`$ direction for both the $`s`$-wave and the $`d`$-wave pairing. Although we examined only the ground state in this paper, the result $`H_\mathrm{c}/H_\mathrm{P}\approx 3.19`$ at $`T=0`$ is large enough to support coexistence of superconductivity and ferromagnetic order on a microscopic scale in $`\mathrm{RuSr}_2\mathrm{GdCu}_2\mathrm{O}_8`$. A calculation for finite temperatures is now in progress.
This work was supported by a grant for Core Research for Evolutionary Science and Technology (CREST) from the Japan Science and Technology Corporation (JST).
## Abstract
In view of the recent experiments of O'Hara et al. on excitons in Cu<sub>2</sub>O, we examine the interconversion between the angular-momentum triplet-state excitons and the angular-momentum singlet-state excitons by a spin-exchange process which has been overlooked in the past. We estimate the rate of this particle-conserving mechanism and find a substantially higher value than for the Auger process considered so far. Based on this idea, we give a possible explanation of the recent experimental observations and make certain predictions, the most important being that the singlet-state excitons in Cu<sub>2</sub>O are a very serious candidate for exhibiting the phenomenon of Bose-Einstein condensation.
PACS numbers: 25.70.Np,12.38.Qk
Bose-Einstein condensation has been the subject of numerous theoretical and experimental studies. While in recent years many of these studies have focused on the Bose-Einstein condensation of trapped alkali atoms , another possible candidate for undergoing this second-order phase transition is the exciton gas in semiconducting materials .
Excitons are much like positronium atoms. They are bound states formed between electrons and holes in a semiconductor after electrons are excited from the valence band to the conduction band, usually by a laser field. Since excitons consist of two fermions, they are expected to behave like bosons in the limit where their separation is much larger than their Bohr radius. Many experiments have been performed with excitons in Cu<sub>2</sub>O because of the many advantages of this material: it has a direct but dipole-forbidden gap, which makes the lifetime of excitons rather long; it has isotropic effective electron and hole masses; it does not form bound biexciton states or an electron-hole liquid; and finally, the exciton binding energy is quite large.
The traditional way of observing the kinetic-energy distribution of excitons is to look at the recombination spectrum, and more specifically at the optical-phonon-assisted lines. Since the optical phonons have a very weak dispersion relation, and since the transition matrix element does not depend on the exciton momentum, the energy distribution of the emitted photons essentially gives the kinetic-energy distribution of the excitons. Many experiments have demonstrated that excitons do indeed obey Bose-Einstein statistics in the limit of sufficiently high densities and sufficiently low temperatures, with the luminescence spectrum fitting very accurately to Bose-Einstein distributions. This fitting procedure gives the temperature and the chemical potential of the gas, since these are essentially independent parameters. A crucial assumption underlying this procedure is that very frequent collisions between the excitons bring the gas to quasi-equilibrium, with some time-dependent chemical potential and temperature which in general differ from the lattice temperature, the latter being kept very low, below 5 K. The typical effective temperature of the exciton gas is on the order of 10 to 100 K. Knowing the temperature and the chemical potential, one can deduce the particle density, assuming an ideal Bose gas with the experimentally known total exciton mass. The densities obtained by this method turn out to be on the order of 10<sup>18</sup> cm<sup>-3</sup>. Following this approach, Snoke et al. observed that the triplet-state (ortho)excitons do not Bose-condense, but move along lines parallel and close to the critical one; these lines are adiabats, i.e., lines of constant entropy per particle. This effect has been examined theoretically in Ref. , and has been attributed to a competition between acoustic-phonon cooling of the exciton gas and an Auger heating mechanism which prevents the Bose-Einstein condensation of orthoexcitons. Lin and Wolfe have also reported in Ref. this tendency of orthoexcitons to move along adiabats, but, most importantly, have observed evidence for Bose-Einstein condensation of the angular-momentum singlet-state (para)excitons .
However, recently O'Hara et al. have developed another method for estimating the density of the excitons . By calibrating their photon detector, they have evaluated the number of emitted photons, thus determining the number of orthoexcitons inside the crystal. They have also estimated the volume of the exciton gas from the surface area of the laser spot that creates the excitons, under the assumption that the exciton gas extends a typical depth into the crystal on the order of the absorption length of the laser light. Dividing the exciton number by the volume, they have found that the average density of the orthoexciton gas is two orders of magnitude smaller ($`10^{16}`$ cm<sup>-3</sup>) than the one estimated by the spectroscopic method. This implies that the gas should be completely classical, without any kind of quantum degeneracy.
Therefore, one is confronted with a paradox: on the one hand the spectra can be fitted very accurately to Bose-Einstein distributions, but on the other hand the densities seem to be considerably lower than the ones obtained from this fit, and certainly much lower than the region where one would observe Bose statistics. Furthermore, Refs. suggest that such low exciton densities might be due to a very effective Auger decay mechanism, in which two excitons collide, one recombines, transferring its energy to the other, which ionizes. The implication is that this Auger mechanism, which does not conserve the total number of excitons, prevents the onset of Bose-Einstein condensation. Up to now the Auger process was thought to provide the only relatively fast channel for exciton destruction . However, in view of the very long intrinsic radiative lifetime of orthoexcitons reported recently in Refs. , the Auger decay rate would have to take a giant value, exceeding by three orders of magnitude the value calculated in Ref. .
In this study we examine another mechanism , in which two orthoexcitons with opposite $`J_z`$ collide, where $`\mathbf{J}`$ is the total angular momentum of each exciton, exchanging their electrons or holes in the process and giving two paraexcitons in the final state. We estimate the rate of this spin-exchange process and find that it is rather high. Based on the result of our calculation, we propose that this ortho-to-para interconversion mechanism is actually the dominant one under the experimental conditions used so far. Strong experimental evidence for this argument is provided by Fig. 2 of the first paper in Refs. , Fig. 5 of the second paper in Refs. , and Fig. 4(a) of Ref. , where for late times it is clear that the decay of the paraexciton number is very slow. Consideration of this process removes the contradiction described above between the two methods that give the exciton density. An important conclusion drawn from this new scenario is that Bose-Einstein condensation of paraexcitons is probable, since at late times they should form a cold and relatively dense gas. The spin-exchange mechanism, even though it converts one species into the other, conserves the total number of excitons. Since the orthoexcitons lie higher in energy than the paraexcitons, due to the exchange interaction, by an amount $`\mathrm{\Delta }E`$, the interconversion process also transfers energy to the exciton gas. We also note that our mechanism explains the observed sublinear dependence of the orthoexciton number generated by the laser pulse as a function of the laser power . On the other hand, under extreme pumping conditions in the band-to-band region, we have a highly nonequilibrium system at early times, and one can expect that Auger processes between the free carriers are also effective.
Let us start by making an order-of-magnitude estimate of the rate $`\mathrm{\Gamma }_{o,p}`$ of the ortho-to-para conversion process. We consider two orthoexcitons with momenta $`\mathbf{K}`$ and $`\mathbf{P}`$ and opposite $`J_z`$ colliding and giving two paraexcitons with momenta $`\mathbf{K}^{\prime }`$ and $`\mathbf{P}^{\prime }`$. Fermi's golden rule gives for the rate
$`\mathrm{\Gamma }_{o,p}=\frac{2\pi }{\hbar }\sum_{\mathbf{K},\mathbf{P},\mathbf{K}^{\prime },\mathbf{P}^{\prime }}|M|^2f_{\mathbf{K}}^of_{\mathbf{P}}^o(1+f_{\mathbf{K}^{\prime }}^p)(1+f_{\mathbf{P}^{\prime }}^p)`$ (1)
$`\times \delta (E_{\mathbf{K}^{\prime }}+E_{\mathbf{P}^{\prime }}-E_{\mathbf{K}}-E_{\mathbf{P}}-2\mathrm{\Delta }E)\,\delta _{\mathbf{K}+\mathbf{P},\mathbf{K}^{\prime }+\mathbf{P}^{\prime }},`$ (2)
where $`M`$ is the matrix element for this process and $`f_{\mathbf{K}}^i`$ is the distribution function of species $`i`$ (ortho- or paraexcitons) with dispersion relation $`E_{\mathbf{K}}=\hbar ^2K^2/2m`$, $`m`$ being the total exciton mass. In this crude calculation we consider a cold ($`\mathbf{K},\mathbf{P}\approx 0`$) orthoexciton gas, which allows us to write
$`\mathrm{\Gamma }_{o,p}\approx \frac{2\pi }{\hbar }N_o^2\sum_{\mathbf{K}^{\prime },\mathbf{P}^{\prime }}|M|^2(1+f_{\mathbf{K}^{\prime }}^p)(1+f_{\mathbf{P}^{\prime }}^p)`$ (3)
$`\times \delta (E_{\mathbf{K}^{\prime }}^p+E_{\mathbf{P}^{\prime }}^p-2\mathrm{\Delta }E)\,\delta _{\mathbf{K}^{\prime }+\mathbf{P}^{\prime },0},`$ (4)
where $`N_i`$ is the total number of excitons of species $`i`$. The energy-conservation condition in the above equation implies that $`K^{\prime }`$ and $`P^{\prime }`$ are of order $`(m\mathrm{\Delta }E/\hbar ^2)^{1/2}`$. Since for these wavevectors the occupation number is much less than 1, we can ignore the enhancement factors $`1+f`$ above. In addition, we argue below that the typical momentum exchange that enters the matrix element $`M`$ is of order $`a_B^{-1}\sim (mE_b/\hbar ^2)^{1/2}\gg (m\mathrm{\Delta }E/\hbar ^2)^{1/2}`$, where $`a_B`$ is the exciton Bohr radius and $`E_b`$ is the exciton binding energy. Since $`E_b\gg \mathrm{\Delta }E`$, $`M`$ does not vary substantially in the sum of Eq. (4) and can be taken outside it,
$`\mathrm{\Gamma }_{o,p}\approx \frac{2\pi }{\hbar }\frac{N_o^2}{2}|M|^2\sum_{\mathbf{K}^{\prime }}\delta (E_{\mathbf{K}^{\prime }}^p-\mathrm{\Delta }E).`$ (5)
The last sum is simply the density of states evaluated at the energy $`\mathrm{\Delta }E`$. The interaction that enters the matrix element $`M`$ is a screened Coulomb potential $`V(q,\omega )`$,
$`V(q,\omega )=\frac{4\pi e^2}{\mathrm{\Omega }\epsilon (\mathbf{q},\omega )q^2},`$ (6)
where $`\mathrm{\Omega }`$ is the volume of the crystal and $`\epsilon (\mathbf{q},\omega )`$ is the dielectric function . The wavefunction $`\mathrm{\Psi }_{\mathbf{K}}(\mathbf{r}_e,\mathbf{r}_h)`$ of an exciton carrying momentum $`\mathbf{K}`$ can be written as
$`\mathrm{\Psi }_{\mathbf{K}}(\mathbf{r}_e,\mathbf{r}_h)=\frac{1}{\sqrt{\mathrm{\Omega }}}e^{i\mathbf{K}\cdot (\mathbf{r}_e+\mathbf{r}_h)/2}\sum_{\mathbf{q}}\varphi _{\mathbf{q}}e^{i\mathbf{q}\cdot (\mathbf{r}_e-\mathbf{r}_h)},`$ (7)
where $`\mathbf{r}_e`$ and $`\mathbf{r}_h`$ are the electron and hole coordinates, and $`\varphi _{\mathbf{q}}=8(\pi a_B^3)^{1/2}/[1+(qa_B)^2]^2`$ is the Fourier transform of the ground-state hydrogenic wavefunction, $`\mathrm{\Phi }=e^{-r/a_B}/(\pi a_B^3)^{1/2}`$, which we assume to be the relative electron-hole wavefunction. We have assumed that the two colliding orthoexcitons have $`\mathbf{K}=\mathbf{P}=0`$; denoting the momenta of the electron and the hole in each pair as $`\mathbf{k},-\mathbf{k}`$ and $`\mathbf{p},-\mathbf{p}`$ \[see Fig. 1\], after the two excitons have exchanged their electrons or holes, and some momentum $`\mathbf{q}`$, conservation of energy and momentum requires that
$`\frac{\hbar ^2(\mathbf{k}-\mathbf{p}+\mathbf{q})^2}{2m}=\mathrm{\Delta }E.`$ (8)
Since the exciton wavefunction has a momentum spread of order $`a_B^{-1}`$, we see that $`p,k\stackrel{<}{\sim }a_B^{-1}`$, or in other words, $`\hbar ^2p^2/2m\stackrel{<}{\sim }E_b`$. Because of the condition $`\mathrm{\Delta }E\ll E_b`$, we conclude that $`q`$ is also of order $`a_B^{-1}`$ . Furthermore, it is a rather good approximation to assume that $`\epsilon (\mathbf{q},\omega )\approx \epsilon _0`$ , the low-frequency dielectric constant of Cu<sub>2</sub>O, which is approximately equal to 7.5, and thus
$`|M|\approx \frac{4\pi e^2a_B^2}{\mathrm{\Omega }\epsilon _0}.`$ (9)
After some straightforward manipulations we can write the rate of the orthoexciton conversion process as $`\mathrm{\Gamma }_{o,p}/\mathrm{\Omega }=n_o/\tau _{o,p}`$, where $`n_i`$ is the density of species $`i`$, and
$`\tau _{o,p}^{-1}\approx 16\pi n_oa_B^3\sqrt{\frac{\mathrm{\Delta }E}{E_b}}\frac{E_b}{\hbar }.`$ (10)
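For completeness, one way to pass from eq. (5) to eq. (10) is the following reconstruction (ours), using the free-particle density of states together with the hydrogenic order-of-magnitude relations $`e^2/\epsilon _0\approx 2E_ba_B`$ and $`m\approx \hbar ^2/2E_ba_B^2`$ that the estimate implicitly assumes:

$`\sum_{\mathbf{K}^{\prime }}\delta (E_{\mathbf{K}^{\prime }}^p-\mathrm{\Delta }E)=\frac{\mathrm{\Omega }m\sqrt{2m\mathrm{\Delta }E}}{2\pi ^2\hbar ^3},\qquad \frac{\mathrm{\Gamma }_{o,p}}{\mathrm{\Omega }}=8\pi \frac{e^4a_B^4}{\epsilon _0^2}\frac{m\sqrt{2m\mathrm{\Delta }E}}{\hbar ^4}n_o^2\approx 16\pi n_o^2a_B^3\sqrt{\frac{\mathrm{\Delta }E}{E_b}}\frac{E_b}{\hbar },`$

so that $`\mathrm{\Gamma }_{o,p}/\mathrm{\Omega }=n_o/\tau _{o,p}`$ reproduces eq. (10).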
Let us now examine the reverse process of two paraexcitons colliding and giving two orthoexcitons. The energy scale which is crucial for this process is the energy splitting $`\mathrm{\Delta }E`$ between the orthoexcitons and the paraexcitons in Cu<sub>2</sub>O, which is equal to 12 meV at the zone center, corresponding to a temperature of approximately 150 K. We can write for the decay rate $`\mathrm{\Gamma }_{p,o}/\mathrm{\Omega }=n_p/\tau _{p,o}`$, with
$`\tau _{p,o}^{-1}\approx \tau _{o,p}^{-1}(n_p/n_o)e^{-\mathrm{\Delta }E/k_BT},`$ (11)
where $`T`$ is the temperature of the exciton gas. Thus, for temperatures much lower than $`\mathrm{\Delta }E/k_B`$, the interconversion mechanism converts orthoexcitons into paraexcitons, but not the reverse, since the para-to-ortho process is exponentially suppressed $`[e^{-\mathrm{\Delta }E/k_BT}\to 0]`$. On the other hand, for temperatures of order $`\mathrm{\Delta }E/k_B`$, the two rates $`\mathrm{\Gamma }_{o,p}`$ and $`\mathrm{\Gamma }_{p,o}`$ can be comparable, depending on the relative densities of ortho- and paraexcitons, and thus the net rate can be very low.
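To put numbers to this suppression, here is a small sketch (ours) that evaluates the Boltzmann factor of eq. (11) for $`\mathrm{\Delta }E=12`$ meV:

```python
import numpy as np

# Boltzmann suppression factor of eq. (11); Delta_E/k_B is roughly 140 K
kB = 8.617e-5   # Boltzmann constant in eV/K
dE = 12e-3      # eV, ortho-para splitting at the zone center
for T in (2.0, 10.0, 50.0, 150.0):
    print(f"T = {T:6.1f} K   exp(-dE/kB T) = {np.exp(-dE/(kB*T)):.2e}")
```

At liquid-helium temperatures the factor is astronomically small, while for gas temperatures approaching $`\mathrm{\Delta }E/k_B`$ it becomes of order unity, in line with the statements above.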
To calculate the actual value of the rate given by Eq. (10), we use the following numbers for excitons in Cu<sub>2</sub>O (which have very low uncertainty): the binding energy is 153 meV, the energy splitting $`\mathrm{\Delta }E`$ at the zone center is 12 meV, and finally the Bohr radius $`a_B`$ is 5.3 Å, as given by a variational calculation presented in Ref. . With these numbers we get
$`\tau _{o,p}^{-1}\approx 5n_o(10^{16}\mathrm{cm}^{-3})\mathrm{ns}^{-1},`$ (12)
where the notation $`n(10^{16}\mathrm{cm}^{-3})`$ means that the density is to be measured in units of $`10^{16}\mathrm{cm}^{-3}`$.
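As a transparent arithmetic check (ours) that eq. (12) follows from eq. (10) with the material parameters quoted above:

```python
import numpy as np

# Back-of-the-envelope evaluation of eq. (10) with the Cu2O numbers of the text
hbar = 6.582e-16   # eV s
Eb   = 0.153       # eV, exciton binding energy
dE   = 0.012       # eV, ortho-para splitting
aB   = 5.3e-8      # cm, exciton Bohr radius
n_o  = 1.0e16      # cm^-3, orthoexciton density

rate = 16.0*np.pi*n_o*aB**3*np.sqrt(dE/Eb)*Eb/hbar   # in s^-1
print(f"1/tau_op = {rate*1e-9:.1f} ns^-1")           # about 5 ns^-1, cf. eq. (12)
```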
We can now compare the decay rate of orthoexcitons measured recently by O'Hara et al. with the above theoretical number on the one hand, and with the theoretical number for the Auger process on the other. For the low lattice temperature of $`2`$ K in this experiment, the average kinetic energy per particle of the orthoexciton gas is expected to be much less than $`\mathrm{\Delta }E`$, and thus we may approximate the net interconversion rate as $`\mathrm{\Gamma }_{o,p}-\mathrm{\Gamma }_{p,o}\approx \mathrm{\Gamma }_{o,p}`$. The "two-body decay constant", $`A\approx 10^{-16}`$ cm<sup>3</sup>/ns, extracted from the data given in Eq. (5) of the first paper in Refs. has the same order of magnitude as the rate given by Eq. (12).
We mentioned above that the Auger process is expected theoretically to have a rather low decay rate, in view of the very long radiative lifetime of orthoexcitons reported recently in Refs. . More specifically, in the theoretical study of the Auger process described in Ref. , a detailed analysis of this mechanism implied that the phonon-assisted Auger decay process is the dominant one. However, to get the rate, the authors used the orthoexciton radiative lifetime of 25 ns at a temperature of 10 K that had been measured in the past. In Refs. the same quantity was measured to be approximately 10 $`\mu `$s, and since the radiative lifetime can be limited by imperfections or any other factor, the true radiative orthoexciton lifetime is at least 10 $`\mu `$s, or even longer. This implies that the phonon-assisted Auger decay rate, based on the theoretical study of Ref. , is negligible. These arguments provide strong evidence that the ortho-to-para exciton interconversion mechanism studied here is really the dominant process.
We turn now to the contradiction between the two methods which have been used for determining the orthoexciton density. It is important to first get an estimate of the interparticle elastic scattering rates, in order to compare them with the rates of interconversion by spin exchange. Typical rates $`\tau ^{-1}`$ for elastic collisions between excitons are expected to be on the order of $`\tau ^{-1}=n\sigma v_{\mathrm{th}}`$, where $`\sigma `$ is the scattering cross section and $`v_{\mathrm{th}}=(8k_BT/\pi m)^{1/2}`$ is the thermal velocity. At the low temperatures of interest one can assume hard-sphere scattering between the excitons. If $`a`$ is the scattering length, then for identical bosons $`\sigma =8\pi a^2`$. Recently the scattering length for excitons in Cu<sub>2</sub>O has been calculated with use of Monte Carlo simulations to be on the order of $`2a_B`$ . Therefore we get
$`\tau ^{-1}\approx 0.1n(10^{16}\mathrm{cm}^{-3})\sqrt{T(\mathrm{K})}\mathrm{ns}^{-1},`$ (13)
assuming that the total exciton mass is equal to 3 electron masses . Here the density is measured in units of $`10^{16}`$ cm<sup>-3</sup>, and the temperature in degrees Kelvin.
After the laser pulse starts to decrease, the paraexcitons become the dominant component of the gas because of the interconversion mechanism. For typical paraexciton densities of $`10^{17}`$ cm<sup>-3</sup> and temperatures of order 50 K, one sees from Eq. (13) that the typical scattering times are of order 100 ps, so the paraexcitons should be able to establish thermal equilibrium. The paraexcitons should also have a well-defined chemical potential, since after the initial times during which the orthoexcitons convert to paraexcitons, their number does not vary significantly with time. By contrast, orthoexciton-orthoexciton elastic scattering processes become less and less frequent because of the decreasing orthoexciton density, even if orthoexciton-paraexciton collisions can still bring them to thermal equilibrium; we claim, however, that chemical equilibrium has not been established in the orthoexciton gas, since the orthoexcitons have a relatively fast way of converting into paraexcitons and their number is not conserved. One can speculate that under such circumstances the orthoexciton gas could have a rather low chemical potential, but this is a non-equilibrium problem and requires a detailed study. One could, for example, use the Boltzmann equation to describe all the important processes which take place, and derive the distribution function of the orthoexcitons as a function of time.
A few remarks are in order concerning the model we are proposing. Firstly, the estimate we made for the interconversion rate, Eqs. (2) and (4), does not assume thermal equilibrium, which is rather important in our problem. Secondly, Bose-Einstein condensation of the paraexcitons does not seem hard to achieve, since, as we explained, there is no efficient mechanism that would destroy them on the timescales of interest, and their expansion could be the only factor working against condensation. The expansion, however, is not expected to be dramatic, and it can also be reduced by applying stress to the crystal, thus effectively trapping the excitons.
Finally, it has been argued in Ref. that the orthoexcitons (provided they are not far from equilibrium) have to move along lines of constant entropy, along which $`n_o\propto T^{3/2}`$. To derive this result, the authors assumed a competition between acoustic-phonon cooling and Auger heating. Since the phonon cooling rate is $`\propto T^{3/2}`$ for low lattice temperatures, and the Auger heating rate is $`\propto n_o`$, one finds $`n_o\propto T^{3/2}`$. Remarkably, if the Auger process is indeed negligible and the interconversion process is the dominant mechanism, the heating rate due to this effect is equal to $`\mathrm{\Delta }E/\tau _{o,p}`$, and thus still proportional to $`n_o`$. Since this argument does not depend on the quantum degeneracy of the gas, $`n_o`$ should still be proportional to $`T^{3/2}`$: we conclude again that the orthoexcitons are expected to move parallel to the phase boundary, along adiabats, in contrast to the paraexcitons, which most probably Bose-condense. More experimental and theoretical work is required to verify these predictions.
G.M.K. was supported by the European Commission, TMR program, contract No. ERBFMBICT 983142. Helpful discussions with K. Johnsen are gratefully acknowledged. G.M.K. would like to thank the Foundation for Research and Technology, Hellas (FORTH) for its hospitality. A.M. is grateful to the Humboldt Foundation for support during this work.