set of nearly degenerate excited states, with average energy \(E_{\alpha}\). This assumption is equivalent to saying that the spectral function associated with each term is dominated by a narrow peak at \(E_{\alpha}\). This significant approximation is reasonable because the correction to the density matrix is only used to enlarge the basis, to improve DMRG convergence. Correspondingly, we approximate \((E_{0}-H_{0})^{-1}\) as \((E_{0}-E_{\alpha})^{-1}\equiv 1/\varepsilon_{\alpha}\). This gives
\[|\psi^{\prime}\rangle\approx\sum_{s}\psi_{s}\sum_{\alpha}\frac{t_ {\alpha}}{\varepsilon_{\alpha}}\hat{A}^{\alpha}\bar{P}\hat{B}^{\alpha}|L_{s} \rangle|R_{s}\rangle.\] (9)
There are no first order corrections to the density matrix from \(|\psi^{\prime}\rangle\), since \(\bar{P}|\psi\rangle=0\). The lowest order correction to \(\rho\) can be written as
\[\Delta\rho=\sum_{ss^{\prime}}\psi_{s}\psi^{*}_{s^{\prime}}\sum_{ \alpha{\alpha^{\prime}}}\frac{t_{\alpha}}{\varepsilon_{\alpha}}\frac{t_{\alpha ^{\prime}}}{\varepsilon_{\alpha^{\prime}}}\hat{A}^{\alpha}|L_{s}\rangle\langle L _{s^{\prime}}|\hat{A}^{\alpha\dagger}M_{s^{\prime}\alpha^{\prime}\alpha s}\] (10)
where
\[M_{s^{\prime}\alpha^{\prime}\alpha s}=\langle R_{s^{\prime}}| \hat{B}^{\alpha^{\prime}\dagger}\bar{P}\hat{B}^{\alpha}|R_{s}\rangle\] (11)
Here if \(A\) is the unit operator, the term adds nothing to the basis. If \(B\) is the unit operator, \(M_{s^{\prime}\alpha^{\prime}\alpha s}\) vanishes. For the nontrivial pairs of operators \(A\) and \(B\), this matrix element somewhat resembles a correlation function, and it is natural to assume that the diagonal terms, with \(\alpha={\alpha^{\prime}}\) and \(s=s^{\prime}\), are dominant. We expect the off-diagonal terms \(\alpha\neq{\alpha^{\prime}}\) to describe coherence between different perturbation terms, which would tend to reduce the number of basis functions needed to describe the system block; ignoring the off-diagonal terms is therefore a conservative assumption. Accordingly, we take
\[M_{s^{\prime}\alpha^{\prime}\alpha s}\approx\delta_{s{s^{\prime} }}\delta_{\alpha{\alpha^{\prime}}}b_{\alpha}\] (12)
This gives Eq. (4) with \(a_{\alpha}=b_{\alpha}|t_{\alpha}|^{2}/\varepsilon_{\alpha}^{2}\), and where we omit block-Hamiltonian terms.
In practice, we take \(a_{\alpha}\) to be a small constant \(a\) independent of \(\alpha\). Constructing the correction to \(\rho\) takes a calculation time for a single step proportional to \(m^{3}\) times the number of connecting terms, which is typically significantly smaller than the other parts of the DMRG calculation, although the scaling is the same. Larger values of \(a\) introduce more "noise" into the basis, speeding convergence, but also limiting the final accuracy. Note that it is just as easy to apply the correction within the two-site method as within the single-site method, which may be useful in some very difficult cases. We do not present results for this combination here.
As a first test calculation, we consider the \(S=1\) Heisenberg model
\[H=\sum_{j}\vec{S}_{j}\cdot\vec{S}_{j+1}\;,\] (13)
where we have set the exchange coupling \(J\) to unity. The correction consists of the following: for each boundary site \(i\) of a block, i.e. a site directly connected to the other block, we add into the density matrix
\[\Delta\rho=a(S^{+}_{i}\rho S^{-}_{i}+S^{-}_{i}\rho S^{+}_{i}+S^{z }_{i}\rho S^{z}_{i}).\] (14)
For a chain with open boundaries, there is one site \(i\); for periodic boundaries, there are two. One could argue that this expression should be adjusted with factors of 2 between the \(z\) term and the other two terms, but this is not likely to make a significant difference. Note that the \(S^{+}\), \(S^{-}\) terms automatically increase the range of quantum numbers (i.e. total \(S^{z}\)) with nonzero density matrix eigenvalues. Figure 2 shows the convergence of the energy for a 100 site chain with open boundaries as a function of the sweep, keeping \(m=50\) states, relative to the numerically exact result obtained with \(m=200\) and 10 sweeps. One can see the excellent convergence of the standard approach. The single-site method without corrections does not do too badly in this case, but still gets stuck significantly above the two-site energy. Adding the corrections, in this case with \(a=10^{-4}\), dramatically improves the convergence, making the single site method converge nearly as fast as the two site method. The two site method is roughly a factor of three slower than the single site method. Thus, even in this simple 1D case where the standard approach works extremely well, there are advantages to using the corrected single site method.
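As an illustration, the correction of Eq. (14) is straightforward to apply numerically. The sketch below is a deliberately minimal toy (not the full DMRG step): the "block" is reduced to just the single spin-1 boundary site, with an assumed density matrix supported on the \(S^{z}=0\) state, to show how the \(S^{\pm}\) terms populate new quantum-number sectors:

```python
import numpy as np

# Spin-1 operators in the basis {|+1>, |0>, |-1>} for the boundary site.
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)   # S^+
Sm = Sp.T                                       # S^-
Sz = np.diag([1.0, 0.0, -1.0])

# Toy density matrix supported only on the S^z = 0 state of the site.
rho = np.zeros((3, 3))
rho[1, 1] = 1.0

a = 1e-4
# Eq. (14): the density-matrix correction for one boundary site.
delta = a * (Sp @ rho @ Sm + Sm @ rho @ Sp + Sz @ rho @ Sz)
rho_corr = rho + delta

# The S^+/S^- terms put weight into the S^z = +1 and -1 sectors,
# enlarging the range of quantum numbers with nonzero eigenvalues.
print(np.diag(rho_corr))  # approximately [2e-4, 1.0, 2e-4]
```

The new nonzero diagonal entries in the \(S^{z}=\pm 1\) sectors are exactly the mechanism by which the correction enlarges the range of quantum numbers kept in the basis.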
The results change significantly if we consider periodic boundary conditions. Here we consider the same superblock configuration as with open boundary conditions, but simply add in the connection to the Hamiltonian between the first and last sites. There are better configurations for periodic boundaries, such as considering it to be a ladder with the interchain couplings turned off except at the ends. These other configurations are superior only in the sense of i
does not have the relevant states. Hence, the fluctuation is not represented in the density matrix, and the new system block will not possess the relevant states for that fluctuation. Later, when the roles of system and environment are reversed, the relevant states again do not appear. In a 1D system with short-range interactions, the extra environment site does a very good job of ensuring that the most relevant fluctuations are at least approximately present in the environment, so that subsequent sweeps can build in the fluctuations to high accuracy.
In wide ladders or systems with longer range interactions, the addition of a single site to the environment is not always adequate. There may be missing fluctuations which are far from the extra site, and so are never built in. Even in these cases the extra site allows \(m\) to increase sensibly and as one lets \(m\to\infty\) one obtains exact results. However, for practical values of \(m\) one may find unacceptably slow convergence.
In this paper we describe an approximate correction to the density matrix to describe the key states which have been left out because the environment block is inadequate. With this correction, the single site superblock configuration converges well. In addition, convergence in more difficult systems is dramatically improved, in either the single site or two site configurations. We present two different derivations of the correction, and give examples using the \(S=1\) Heisenberg chain.
We first give a simple, rough argument. Consider the power method for finding the ground state: iterate \(\psi_{n+1}=(1-\varepsilon H)\psi_{n}\), where \(\varepsilon\) is a small constant. As long as \(\psi_{0}\) is not orthogonal to the exact ground state, and \(\varepsilon\) is small enough, the power method is guaranteed to converge to the ground state. Consequently, if the basis represents both \(\psi\) and \(H\psi\) exactly, and we minimize the energy within this basis, we expect exact convergence. The crucial point is the need to enlarge the basis to represent \(H\psi\). Within the standard DMRG basis obtained from \(\psi\), after solving for the ground state, \(H\psi=E\psi\), and nothing is changed by adding \(H\psi\) to the basis. To go beyond the basis, we need to construct the parts of \(H\psi\) as the basis is built up. The crucial terms of \(H\psi\) come from the terms of \(H\) which connect the system and environment blocks.
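The power-method argument above is easy to see in a small numerical experiment. The following sketch uses a random symmetric matrix as a stand-in Hamiltonian (illustrative only; the size, seed, and iteration count are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Small random symmetric matrix standing in for the superblock Hamiltonian.
n = 20
H = rng.standard_normal((n, n))
H = (H + H.T) / 2

# eps small enough that 1 - eps*E is largest (in magnitude) for the
# lowest eigenvalue, so iteration projects onto the ground state.
eps = 0.1 / np.linalg.norm(H, 2)

psi = rng.standard_normal(n)
psi /= np.linalg.norm(psi)

# Power method: psi <- (1 - eps H) psi, renormalized each step.
for _ in range(20000):
    psi -= eps * (H @ psi)
    psi /= np.linalg.norm(psi)

e_power = psi @ H @ psi
e_exact = np.linalg.eigvalsh(H)[0]
print(abs(e_power - e_exact))  # converges toward the ground-state energy
```

The DMRG point is precisely that this iteration only improves the state if the basis can represent \(H\psi\), not just \(\psi\).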
For the current superblock configuration, write the Hamiltonian in the form
\[H=\sum_{\alpha}t_{\alpha}\hat{A}^{\alpha}\hat{B}^{\alpha}.\] (1)
Here the \(\hat{A}^{\alpha}\) act only on the system block (including the site to be added to it), and the \(\hat{B}^{\alpha}\) act only on the environment block (plus its site). All the terms which do not connect the blocks are contained in two terms of the sum which have either \(A\) or \(B\) equal to the identity operator, so that this form is completely general. (The other term in each case is the block Hamiltonian.) In order to put \(H\psi\) into the basis, we need to target, in addition to \(\psi\), the terms \({\hat{A}^{\alpha}\psi}\) for all \(\alpha\). Let the states of the system have indices \(s\), \(p\), and \(q\), and the states of the environment \(e\). The state \(\hat{A}^{\alpha}\psi\) can be written as
\[\sum_{se}\sum_{p}A^{\alpha}_{sp}\psi_{pe}|s\rangle|e\rangle.\] (2)
Targeting this wavefunction means adding into the density matrix a term
\[\Delta\rho^{\alpha}_{ss^{\prime}}=a_{\alpha}\sum_{epq}A^{\alpha}_ {sp}\psi_{pe}\psi_{qe}^{*}{A^{\alpha}_{s^{\prime}q}}^{*}\] (3)
where \(a_{\alpha}\) is an arbitrary constant determining how much weight to put into this additional state. The total contribution of all the terms is
\[\Delta\rho=\sum_{\alpha}a_{\alpha}\hat{A}^{\alpha}\rho\hat{A}^{ \alpha\dagger}\] (4)
where \(\rho\) is the density matrix determined in the usual way, only from \(\psi\). This is the form of the correction that we use, with \(a_{\alpha}=a\sim 10^{-3}-10^{-4}\).
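A minimal numerical sketch of Eqs. (3) and (4), with random stand-in wavefunction coefficients and connection operators (illustrative dimensions, real arithmetic so that the dagger is just a transpose; not a real DMRG step), shows that the correction keeps \(\rho\) Hermitian and positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(1)
m, ne = 8, 12     # illustrative system/environment dimensions

# Real wavefunction coefficients psi_{se}; the usual density matrix is
# rho_{ss'} = sum_e psi_{se} psi_{s'e}.
psi = rng.standard_normal((m, ne))
psi /= np.linalg.norm(psi)
rho = psi @ psi.T

# Stand-in connection operators A^alpha acting on the system block.
A_ops = [rng.standard_normal((m, m)) for _ in range(3)]
a = 1e-4          # the constant weight a_alpha from the text

# Eq. (4): Delta rho = sum_alpha a_alpha A^alpha rho A^alpha^dagger.
delta_rho = sum(a * A @ rho @ A.T for A in A_ops)
rho_corr = rho + delta_rho

# Each term A rho A^dagger is positive semidefinite, so the corrected rho
# still is; in practice one would renormalize its trace to 1.
print(np.allclose(rho_corr, rho_corr.T))
print(np.linalg.eigvalsh(rho_corr).min() >= -1e-12)
```

Because every term \(\hat{A}^{\alpha}\rho\hat{A}^{\alpha\dagger}\) is positive semidefinite, the correction can only add weight to new states, never produce negative eigenvalues.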
As a second derivation, we utilize perturbation theory. First, imagine that the environment block, but not the system block, is complete. We obtain the ground state exactly for this superblock, and then transform to the basis of density matrix eigenstates for the system block, and then also do the same for the environment block. Then the wavefunction can be written in the form
\[|\psi\rangle=\sum_{s}\psi_{s}|L_{s}\rangle|R_{s}\rangle.\] (5)
The reduced density matrix is
\[\rho=\sum_{s}\langle R_{s}|\psi\rangle\langle\psi|R_{s}\rangle= \sum_{s}|\psi_{s}|^{2}|L_{s}\rangle\langle L_{s}|\] (6)
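Equations (5) and (6) are just the Schmidt decomposition of the superblock state, which numerically is an SVD of the coefficient matrix. A quick NumPy check (random state, illustrative dimensions of my choosing):

```python
import numpy as np

rng = np.random.default_rng(2)

# Coefficient matrix psi_{le} of a normalized superblock state.
psi = rng.standard_normal((6, 10))
psi /= np.linalg.norm(psi)

# SVD = Schmidt decomposition: psi_{le} = sum_s L_{ls} w_s R_{se},
# which is exactly the form of Eq. (5) with psi_s = w_s.
L, w, R = np.linalg.svd(psi, full_matrices=False)

# Reduced density matrix of the left block, Eq. (6): its eigenvalues
# are the squared Schmidt weights |psi_s|^2.
rho = psi @ psi.T
evals = np.sort(np.linalg.eigvalsh(rho))[::-1]
print(np.allclose(evals, w**2))  # True
```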
Now consider the realistic case where the environment block is not complete. Assume the incompleteness takes the simple form that some of the \(|R_{s}\rangle\) are missing, labeled \(\bar{s}\), whereas the \(s\) are present. Let \(P\) be a projection operator for the environment block, \(P=\sum_{s}|R_{s}\rangle\langle R_{s}|\), and take \(\bar{P}=1-P\). Let the unperturbed ground state, with energy \(E_{0}\) and density matrix \(\rho_{0}\), be obtained using the incomplete environment basis. We take as a perturbation the terms in the Hamiltonian which couple to the states \(\bar{s}\), namely
\[H^{\prime}=\sum_{\alpha}t_{\alpha}\hat{A}^{\alpha}(\bar{P}\hat{B }^{\alpha}P+P\hat{B}^{\alpha}\bar{P}).\] (7)
The first order perturbative correction to the wavefunction due to \(H^{\prime}\) is
\[|\psi^{\prime}\rangle=\sum_{\alpha}t_{\alpha}(E_{0}-H_{0})^{-1} \hat{A}^{\alpha}\bar{P}\hat{B}^{\alpha}|\psi\rangle\] (8)
where \(H_{0}=H-H^{\prime}\).
In order to make progress we assume that each perturbation term \(\hat{A}^{\alpha}\bar{P}\hat{B}^{\alpha}\) acting on the ground state creates a
# Dirac Cosmology and the Acceleration of the Contemporary Universe
Cheng-Gang Shao, Jianyong Shen, Bin Wang
wangb@fudan.edu.cn Department of Physics, Fudan University, Shanghai 200433, People's Republic of China
Ru-Keng Su
rksu@fudan.ac.cn China Center of Advanced Science and Technology (World Laboratory) P.O. Box 8730, Beijing 100080, People's Republic of China Department of Physics, Fudan University, Shanghai 200433, People's Republic of China
###### Abstract
A model is suggested to unify Einstein general relativity (GR) and Dirac cosmology. There is one adjustable parameter \(b_{2}\) in our model. After fixing \(b_{2}\) using the supernova data, we have calculated the gravitational constant \(\bar{G}\) and the physical quantities \(a(t)\), \(q(t)\) and \(\rho_{r}(t)/\rho_{b}(t)\), using the present-day quantities as the initial conditions, and found that at present the equation-of-state parameter is \(w_{\theta}=-0.83\), the density ratio of the addition creation is \(\Omega_{\Lambda}=0.8\), and the density ratio of the matter, including multiplication creation, radiation and normal matter, is \(\Omega_{m}=0.2\). The results are self-consistent and in good agreement with present knowledge in cosmology. These results suggest that the addition creation and multiplication creation in Dirac cosmology play the roles of the dark energy and dark matter.
pacs: 98.80.-k, 98.80.Cq
After introducing the term \(-b_{2}\alpha^{2}\) in our theory and fitting the adjustable parameter \(b_{2}\) to the supernova data, we have calculated the physical quantities \(\bar{G}(t)\), \(a(t)\), \(q(t)\) and \(\rho_{r}(t)/\rho_{b}(t)\), using the data of the present epoch as the initial conditions. We have found that the results are self-consistent and in good agreement with present knowledge of cosmology.
According to the Dirac large number hypothesis, matter will be created in the universe. We have calculated the matter that comes from addition creation and that from multiplication creation. An interesting picture in our theory is that the addition creation, which spreads over the universe, looks like the dark energy, and the multiplication creation, which clusters around the normal matter, looks like the dark matter. We have found that the pressure of the addition creation has a large positive value initially and falls quickly to a negative value; the pressure is negative in the region \(0>t>-0.74\times 10^{10}\,yr\). The equation-of-state parameter is \(w_{\theta}=-0.83\) at present. This value is in agreement with present dark energy models. We have also calculated the density ratio of the addition creation and found \(\Omega_{\Lambda}=0.8\). The same parameter for multiplication creation, radiation and normal matter has also been computed, and reads \(\Omega_{m}=0.2\). Both have the same magnitude as the observational values for dark energy and matter. This result suggests that the dark energy and dark matter are just the addition creation and the multiplication creation in Dirac cosmology.
Finally, we would like to emphasize that this model cannot be extended to the big bang epoch. We have not added terms with \(R^{m}\;(m>1)\) (or \(\alpha^{-n}\;(n\geq 1)\)) to the Lagrangian density. Obviously, these terms are very important in the very early universe, especially in the inflationary epoch. Our model can only be used in the time evolution region from the radiation dominated epoch to the present time.
###### Acknowledgements.
This work was supported in part by NNSF of China, by the National Basic Research Program 2003CB716300 and the Foundation of Education of Ministry of China. B. Wang's work was also supported in part by Shanghai Education Commission.
## References
* (1) P.A.M. Dirac, _Nature_ **139** (1937) 323; _Proc. Roy. Soc. London_ **A333** (1973) 403; _Proc. Roy. Soc. London_ **A365** (1979) 19
* (2) R.K. Su and S.W. Zhang, _Proc. of the 3rd Grossmann Meeting on Gen. Rel._, Ed. Hu Ning, p. 1381, Sci. Press and North-Holland Pub. Company (1983); R.K. Su and S.W. Zhang, _Acta. Math. Sci._ **3** (1983) 321; _Kexue Tongbao (Science Bulletin of China)_ **27** (1982) 944
* (3) C. Brans and R.H. Dicke, _Phys. Rev._ **124** (1961) 925
* (4) F. Hoyle and J.V. Narlikar, _Proc. Roy. Soc._ **A227** (1964) 1
* (5) H.W. Peng, gr-qc/0401105, gr-qc/0405002
* (6) V. Canuto, et al., _Phys. Rev. Lett._ **39** (1977) 429
* (7) T. Damour and K. Nordtvedt, Jr., _Phys. Rev. Lett._**70** (1993) 2217, _Phys. Rev._**D48** (1993) 3436
* (8) T. Damour, F. Piazza and G. Veneziano, _Phys. Rev. Lett._**89** (2002) 081601, _Phys. Rev._**D66** (2002) 046007
* (9) K. Nordtvedt, Jr., gr-qc/0301024
* (10) S. Carneiro and J.A.S. Lima, gr-qc/0405141, _Gen. Rel. Grav._**26** (1994) 909
* (11) J.-P. Uzan, _Rev. Mod. Phys._ **75** (2003) 403
* (12) J.G. Williams, S.G. Turyshev and D.H. Boggs, _Phys. Rev. Lett._**93** (2004) 261101
* (13) G. Magnano and L.M. Sokolowski, _Phys. Rev._**D50** (1994) 5039
* (14) A. Dobado and A.L. Maroto, _Phys. Rev._**D52** (1995) 1895
* (15) S. Capozziello, V.F. Cardone, S. Cardoni and A. Troisi, _Phys. Lett._**A326** (2004) 292
* (16) S.M. Carroll, V. Duvvuri, M. Trodden and M.S. Turner, _Phys. Rev._**D70** (2004) 043528
* (17) S. Nojiri and S.D. Odintsov, _Phys. Rev._**D68** (2003) 123512
* (18) A.A. Starobinsky, _Phys. Lett._**B91** (1980) 99
* (19) T. Padmanabhan, T.R. Choudhury, _Mon. Not. Roy. Astron. Soc._**344** (2003) 828
* (20) T.R. Choudhury and T. Padmanabhan, _Astron. Astrophys._**429** (2005) 807
* (21) C.L. Bennett, et al., _ApJS_**148** (2003) 1 |
cosmological scale, especially at late times of the universe's evolution s10 s11 . Though Einstein GR has been verified by many experiments at the scale of local systems, such as the excess perihelion precession of Mercury and the gravitational redshift, it probably needs to be modified at the cosmological scale, especially if we want to use it to explain the recent observation of the acceleration. The idea of Dirac cosmology with a varying \(G\) provides a possible way to modify Einstein GR.
As is well known, the Lagrangian density of Einstein GR with a cosmological term is
\[L_{E}=\frac{1}{{16\pi G}}\sqrt{-g}(R-2\Lambda)=\frac{1}{{16\pi G}}\sqrt{-g}R(1 -\alpha),\] (2)
where \(\alpha=2\Lambda/R\) is a dimensionless parameter. The cosmological constant \(\Lambda\) can be generally interpreted as the background fluctuation of the cosmological vacuum, and \(R\) is the 4-dimensional scalar curvature. In ordinary astrophysical problems of local systems, the magnitude of \(R\) is about \(8\pi G\rho+4\Lambda\). Since the vacuum energy density is much smaller than the density \(\rho\) in a local system, \(\alpha\) is a very small quantity and can be neglected. But in cosmological problems, because \(\rho\) is small and of the same order as the vacuum energy density, \(\alpha\) cannot be neglected. In the vacuum or matter-free limit, \(\rho\to 0\) and \(R\to 4\Lambda\), so \(\alpha\) can reach the magnitude \(1/2\). This means that \(\alpha\) plays an important role at the cosmological scale. However, only a first-order term in \(\alpha\) appears in the equations of Einstein cosmology. Instead of the factor \(1-\alpha\), we argue that in a complete cosmological theory the Lagrangian density of the gravitational field could contain higher-order terms in \(\alpha\). We take the Lagrangian density as
\[L_{E}=\frac{1}{{16\pi G}}\sqrt{-g}Rf(\alpha),\] (3)
where
\[f(\alpha)=1-\alpha-b_{2}\alpha^{2}-...\] (4)
The Einstein action becomes
\[S=\frac{1}{{16\pi G}}\int{d^{4}x}\sqrt{-g}R(1-\alpha-b_{2}\alpha^{2}-...)+\int {d^{4}x}\sqrt{-g}L_{M}.\] (5)
Starting from Eq.(5), we can establish a Dirac cosmology with varying \(G\) and matter creation. Obviously, this theory can unify GR and Dirac cosmology because in a local system with large scale, \(\alpha\to 0\) and the theory reduces to Einstein GR. Only at cosmological scale, the terms \(\alpha\), \(\alpha^{2}\)... become important and our theory reduces to Dirac cosmology.
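The two regimes of \(\alpha\) can be illustrated with toy numbers (purely schematic, in arbitrary units of my choosing with \(8\pi G=1\) and \(\Lambda=1\)):

```python
# Purely schematic numbers, in arbitrary units with 8*pi*G = 1 and Lambda = 1.
Lam = 1.0

def alpha(rho):
    R = rho + 4 * Lam          # R ~ 8*pi*G*rho + 4*Lambda
    return 2 * Lam / R

print(alpha(1e6))   # local system, rho >> vacuum density: alpha negligible
print(alpha(0.0))   # vacuum limit: alpha -> 1/2
```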
Recently, many authors have considered terms \(R^{m}\;(m>1)\) and/or \(R^{n}\;(n<0)\) in gravity s14 -s19 . But none of them has been connected with Dirac cosmology and the large number hypothesis.
we set
\[p_{\theta^{(1)}}=0,\quad p_{\theta^{(2)}}=p_{\theta}\] (22)
Eq.(17)-Eq.(22) and the equation of scalar curvature
\[R=6[\frac{{\ddot{a}}}{a}+(\frac{{\dot{a}}}{a})^{2}]=6(\dot{H}+2H^{2})\] (23)
given by the flat Robertson-Walker metric form a complete set. In principle, we can solve the set of equations to find the behavior of the cosmological evolution.
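The equality of the two forms of \(R\) in Eq. (23) can be checked on a concrete test case; the sketch below (a hand-computed example of mine, not from the paper) uses the matter-like scale factor \(a(t)\propto t^{2/3}\):

```python
# Check Eq. (23): R = 6[addot/a + (adot/a)^2] = 6(Hdot + 2H^2)
# on the test case a(t) = t^(2/3) at an arbitrary time t.
t = 3.7

a     = t ** (2 / 3)
adot  = (2 / 3) * t ** (-1 / 3)
addot = -(2 / 9) * t ** (-4 / 3)

H    = adot / a              # Hubble parameter, = (2/3)/t here
Hdot = -(2 / 3) / t ** 2     # analytic derivative of H(t) = (2/3)/t

lhs = 6 * (addot / a + H ** 2)   # first form of Eq. (23)
rhs = 6 * (Hdot + 2 * H ** 2)    # second form
print(abs(lhs - rhs) < 1e-12)    # True
```

This follows from \(\dot{H}=\ddot{a}/a-(\dot{a}/a)^{2}\), so that \(\dot{H}+2H^{2}=\ddot{a}/a+H^{2}\).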
## III The Numerical Solutions of the Universe With Matter And Radiation
We now study the behavior of our universe containing matter and radiation. To find the cosmological solution, we need the initial conditions of Eq.(17)-Eq.(22) in the early universe, about which, unfortunately, little is known. Hence, we have done our numerical computation using the present cosmological parameters as our initial conditions: the Hubble constant \(H_{0}=0.7\times 100\,km\cdot s^{-1}Mpc^{-1}\), the present deceleration \(q_{0}=-0.5\), the cosmological constant \(\Lambda=1\) (setting the scale), the baryon density \(\Omega_{b}=0.05\), the baryon-to-matter ratio (matter including baryons and multiplication creation) \(\Omega_{b}/\Omega_{m}=0.17\), and the baryon-to-photon ratio \(\eta=6.1\times 10^{-10}\) s22 . We will start from the present epoch and trace back the history of our universe.
Fig.2 shows the variation of \(\bar{G}\) during the evolution. The present time is set as \(t_{0}=0\). The three curves correspond to \(b_{2}=0.66, 2.5\) and \(10\) respectively. We see that the 'gravitational constant' \(\bar{G}\) increases at first as the universe expands and then decreases at late times. The ratio of the density of radiation to baryons rises quickly and the scale factor \(a(t)\to t^{1/2}\) as \(t\) approaches the age of the universe, as shown in Fig.3 and Fig.4 respectively. Therefore, radiation dominates the early universe. The age of the universe given by \(a(t)=0\) is \(1.02\,Gyr\), \(1.16\,Gyr\) and \(1.34\,Gyr\) for \(b_{2}=0.66, 2.5\) and \(10\) respectively. Since there is no mechanism of inflation in our model, it is easy to understand that our model can trace the universe back only to the epoch of radiation domination. Fig.5 shows the evolution of the deceleration factor \(q\). We see that the universe decelerates in the early era, gradually stops decelerating, and then starts accelerating. This is in good agreement with our present understanding of the universe. Through the deceleration factor \(q\), we notice that the scalar curvature \(R\) is always positive.
Obviously, \(b_{2}\) is the parameter in our model which needs to be adjusted. To determine this parameter, let us compare our model with t
looks like the character of the dark energy. If we take \(p_{\theta^{(2)}}=w_{\theta}\rho_{\theta^{(2)}}\) as the equation of state of the addition creation, we show the pressure-to-density parameter of the addition creation \(w_{\theta}\) versus time \(t\) in Fig.8 and find \(w_{\theta}(t=0)=-0.83\), which is much smaller than \(-1/3\) and is consistent with the requirement on dark energy to explain the accelerating universe. To compare our results with other models, we have also calculated the density ratio of the addition creation and find
Figure 6: The "velocity" of the scale factor \(\dot{a}\) versus the "position" \(a\). The dots are the supernova data from s20 s21 . The three curves from top to bottom correspond to \(b_{2}=0.66, 2.5\) and \(10\).
Figure 7: The evolution of the pressure of the addition creation \(p_{\theta^{(2)}}\) at \(b_{2}=2.5\).
\[\frac{{\rho_{\theta^{(2)}}}}{{\rho_{\theta^{(1)}}+\rho_{\theta^{(2)}}+\rho_{r} +\rho_{b}}}=\Omega_{\Lambda}=0.8\] (24)
and that of the density of matter (multiplication creation, radiation and normal matter) in our model reads
\[\frac{{\rho_{\theta^{(1)}}+\rho_{r}+\rho_{b}}}{{\rho_{\theta^{(1)}}+\rho_{ \theta^{(2)}}+\rho_{r}+\rho_{b}}}=\Omega_{m}=0.2.\] (25)
These results have the same magnitudes as those of dark energy and dark matter respectively. Hence, we suggest that the dark energy comes from the addition creation and the dark matter from the multiplication creation.
## IV Summary And Discussion
We have suggested a model to unify Einstein GR and Dirac cosmology. In a local system our theory reduces to GR, but at the cosmological scale it reduces to Dirac cosmology. The variation of the gravitational constant comes from the scalar curvature. In a local system the variation of the gravitational constant is negligible; this result is in good agreement with present experiments. But at the cosmological scale the change of \(\bar{G}\) is remarkable. The acceleration of the present universe is evidence of the decrease of the gravitational constant \(\bar{G}\), because a decrease of \(\bar{G}\) corresponds to an effective repulsion.
Figure 8: The evolution of the pressure-to-density parameter of the addition creation \(w_{\theta}\) versus time \(t\) at \(b_{2}=2.5\).
## I Introduction
According to Dirac's arguments, the large dimensionless numbers provided by atomic physics and the astronomy of our universe are connected with each other s1 . These numbers include: (i) the ratio of the electric to the gravitational force between an electron and a proton, \(a_{1}=e^{2}/(Gm_{p}m_{e})\sim 10^{39}\); (ii) the age of the universe expressed in atomic units, \(a_{2}=\frac{{m_{e}c^{3}}}{{e^{2}H}}\sim 10^{39}\), where \(H\) is the Hubble constant; (iii) the mass of the part of the universe which is receding from us with a velocity \(v<c/2\), expressed in units of the proton mass, \(a_{3}\sim 10^{78}\). Dirac introduced the large number hypothesis
\[a_{1}\cong a_{2}\cong a_{3}^{1/2}.\] (1)
Based on this hypothesis, Dirac suggested a model of cosmology with a varying gravitational constant \(G\) and an increase in the amount of matter in the universe.
Based on the large number hypothesis, a number of cosmological models with a varying gravitational constant have been proposed s2 -s10 . However, most of them met many difficulties. Noting that the gravitational constant \(G\) cannot vary in general relativity (GR), the first difficulty is that one must explain the contradiction between Einstein GR and Dirac cosmology. Though much effort, for example Milne's two-time-scale hypothesis s6 and Weyl's geometry s1 , has been devoted to reconciling the requirements of these two theories, it is still an open question to establish a theory which can unify Dirac cosmology and Einstein GR.
The second difficulty comes from experiments. Almost all experiments at the scales of the solar system and galaxies have found no variation of \(G\) s11 -s12 . A possible variation of \(G\) has been investigated, with no success, through geophysical and astronomical observations. From the experimental results one tends to believe that \(G\) is a constant for local systems, even on large scales.
The third difficulty concerns the conservation of energy and momentum. Usually we use a perfect-fluid energy-momentum tensor to describe the matter of the universe, and it is conserved during the cosmic evolution. The addition creation, the new matter created uniformly in the whole of space, and the multiplication creation, the new matter created in regions where old matter exists, must come from some other mechanism, as suggested by Dirac.
Many years ago a possible unified theory of Dirac cosmology and GR to overcome the above difficulties was suggested in s2 . The basic idea is as follows. Though one would expect a constant value of \(G\) in local systems such as the solar system, binary systems and galaxies, it must be stressed that the cosmological observations still cannot put strong limits on the time variation of \(G\) in the
2. case: \({\mathbf{1}}\in S\). (The case \(-{\mathbf{1}}\in S\) is analogous, so that we will omit it here.)
Again we choose \(n\) vectors from \(I_{p-1}\) and \(m=k-1-n\) vectors from \(-I_{p-1}\). We may assume these vectors are \(e_{1},\dots,e_{n},-e_{n+1},\dots,-e_{k-1}\). Set \(b=\frac{m-n+1}{p-k}\); note that \(|b|\leq\frac{k}{p-k}<1\). Consider the hyperplane
\[x_{1}+\dots+x_{n}-x_{n+1}-\dots-x_{k-1}+b\left(x_{k}+\dots+x_{p-1}\right)=1\ .\]
Our \(k\) chosen vectors are on this hyperplane, and again one can easily check that the remaining vectors in \(A_{2p}\) satisfy
\[x_{1}+\dots+x_{n}-x_{n+1}-\dots-x_{k-1}+b\left(x_{k}+\dots+x_{p-1}\right)<1\ .\]
**Remark.** One can use the correspondence of the facets of \({\mathcal{C}}_{2p}\) to the vertices of \(P(2,p)\) described in the proof of Proposition 17 to show that \({\mathcal{C}}_{2p}\) has \(p\binom{p-1}{\frac{p-1}{2}}\) facets. We do not know the number of facets for the more general cyclotomic polytopes \({\mathcal{C}}_{pq}\) for distinct primes \(p\) and \(q\); it would be interesting if the correspondence to transportation polytopes could lead to this number.
Proposition 17 and Corollary 20 allow us to prove Parker's Conjecture 2:
**Theorem 21.** _The coordinator polynomial of \({\mathbb{Z}}[\zeta_{2p}]\), where \(p\) is an odd prime, equals_
\[h_{2p}(x)=\sum_{k=0}^{\frac{p-3}{2}}\left(x^{k}+x^{p-1-k}\right)\sum_{j=0}^{k} \binom{p}{j}+x^{\frac{p-1}{2}}\sum_{j=0}^{\frac{p-1}{2}}\binom{p}{j}.\]
Proof. The cyclotomic polytope \({\mathcal{C}}_{2p}\) is simplicial by Proposition 17, so Corollary 16 applies. For \(j\leq\frac{p-1}{2}\), Corollary 20 gives
\[h_{j}=\sum_{k=0}^{j}(-1)^{j-k}\binom{p-1-k}{j-k}f_{k-1}=\sum_{k=0}^{j}(-1)^{j- k}\binom{p-1-k}{j-k}2^{k}\binom{p}{k}=\sum_{k=0}^{j}\binom{p}{k},\]
as one easily checks that
\[\sum_{k=0}^{j}(-1)^{j-k}\binom{p-1-k}{j-k}2^{k}\binom{p}{k}-\sum_{k=0}^{j-1}(- 1)^{j-1-k}\binom{p-1-k}{j-1-k}2^{k}\binom{p}{k}=\binom{p}{j}.\]
Palindromy of the \(h\)-vector gives \(h_{j}\) for \(j>\frac{p-1}{2}\).
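The binomial identity used in the proof, and the resulting closed form \(h_{j}=\sum_{k=0}^{j}\binom{p}{k}\), can be verified numerically for small odd primes:

```python
from math import comb

# Verify h_j = sum_{k=0..j} (-1)^(j-k) C(p-1-k, j-k) 2^k C(p, k)
#            = sum_{k=0..j} C(p, k)   for j <= (p-1)/2.
for p in (3, 5, 7, 11, 13):
    for j in range((p - 1) // 2 + 1):
        h_j = sum((-1) ** (j - k) * comb(p - 1 - k, j - k) * 2 ** k * comb(p, k)
                  for k in range(j + 1))
        assert h_j == sum(comb(p, k) for k in range(j + 1))
print("h_j identity verified")
```

For instance, \(p=5\), \(j=2\) gives \(6-30+40=16=\binom{5}{0}+\binom{5}{1}+\binom{5}{2}\).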
Going beyond \(m=p\) or \(2p\), next we prove Parker's Conjecture 3.
**Corollary 22.** _The coordinator polynomial of \({\mathbb{Z}}\left[\zeta_{15}\right]\) equals_
\[c_{{\mathbb{Z}}\left[\zeta_{15}\right]}(x)=\left(1+x^{8}\right)+7\left(x+x^{7} \right)+28\left(x^{2}+x^{6}\right)+79\left(x^{3}+x^{5}\right)+130x^{4}.\]
Proof. By Proposition 6, the polytope \({\mathcal{C}}_{15}\) has vertices
\[A_{15}=\left[\begin{array}[]{ccccccccccccccc}I_{4}&-{\mathbf{1}}&&&-I_{4}&{ \mathbf{1}}\\ &&I_{4}&-{\mathbf{1}}&-I_{4}&{\mathbf{1}}\end{array}\right],\]
and it is simplicial by Proposition 17. With this data, one can easily use the software polymake[11] to check that \({\mathcal{C}}_{15}\) has the \(h\)-polynomial \(x^{8}+7x^{7}+28x^{6}+79x^{5}+130x^{4}+79x^{3}+28x^{2}+7x+1\). The result now follows with Corollary 16.
is introduced in the field theoretical description of the mechanical momentum-energy current as a "kinetic momentum-energy tensor" (Minkowski).5 Here \(\mu_{0}\) is the scalar mass density and \(u_{i}\) is the velocity vector. The term \(S_{i}S_{k}\) is obviously quite analogous, except that the current vector is replaced by the velocity. And this analogy goes further if we take into consideration that, in a static spherically symmetric solution, the average value of the spatial components of \(S_{i}\) will necessarily vanish and only a time part can remain. This means that the average value of the vector \(S_{i}\) points in the direction of the velocity indeed. (In a system at rest, the latter has only one time component.)
Footnote 5: Cf., e.g., W. Pauli, Theory of Relativity (Teubner, Leipzig and Berlin, 1921) p. 675; M. v. Laue, The Theory of Relativity, Part 1, 4th Edition (Friedr. Vieweg & Sohn, Braunschweig, 1921) p. 207.
The second term of (20) is also well-known from hydrodynamics. There, an additional term \(pg_{ik}\) appears for the matter tensor if \(p\) means the hydrostatic pressure (which is a scalar). The relation (20) indicates that the mechanical mass density \(\mu_{0}\) is accompanied by a hydrostatic pressure of the value \(\mu_{0}/2\). This pressure is extremely high if we consider that in the CGS system we have to multiply by \(c^{2}\). For water we would obtain the enormous amount of \(4.5\times 10^{14}\) atm!6 However, this pressure is not meant macroscopically for neutral materials. We should consider it rather as the "cohesion pressure" required for the construction of an electron, i.e., to compensate for the strong electric repulsive force.7
Footnote 6: This remarkable result resembles the well-known conclusion of the theory of relativity that each mass \(m\) is connected with the enormous amount of energy \(mc^{2}\). Similarly to the kinetic energy in mechanics representing a small difference contribution compared to the rest energy, the common hydrostatic pressure of gravitational origin appears here as a second-order quantity compared to the enormous “eigenpressure” of matter. This pressure proves to be positive indeed — i.e., it is directed inwards, in spite of the seemingly opposite sign of the last term in equation (20). For the square of the length of the velocity vector \(u^{k}=idx^{k}/ds\) is not \(+1\) but \(-1\).
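The figure quoted above for water can be checked directly (a back-of-envelope CGS sketch with round modern values, not part of the original paper):

```python
mu0 = 1.0                       # g/cm^3, density of water
c = 2.998e10                    # cm/s, speed of light
p_dyn = mu0 * c ** 2 / 2        # cohesion pressure mu0 * c^2 / 2, in dyn/cm^2
p_atm = p_dyn / 1.013e6         # 1 atm ~ 1.013e6 dyn/cm^2
assert 4.0e14 < p_atm < 4.6e14  # of the order of the 4.5e14 atm quoted in the text
```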
# The Conservation Laws in the Field Theoretical Representation of Dirac's Theory1
Footnote 1: _Editorial note_: Published in Zeits. f. Phys. 57 (1929) 484–493, reprinted and translated in [3]. This is No. 3 in a series of four papers on relativistic quantum mechanics [1, 2, 3, 4] which are extensively discussed in a commentary by Andre Gsponer and Jean-Pierre Hurni [5]. Initial translation by Jósef Illy and Judith Konstág Maskó. Final translation and editorial notes by Andre Gsponer.
By Cornel Lanczos in Berlin
(Received on August 13, 1929)
(Version ISRI-04-12.3 October 11, 2023)
###### Abstract
We show that in the new description, Dirac's "current vector" is not related to a vector but to a tensor: the "stress-energy tensor." Corresponding to Dirac's conservation law, we have the conservation laws of momentum and energy. The stress-energy tensor consists of two parts: an "electromagnetic" part, which has the same structure as the stress-energy tensor of the Maxwell theory, and a "mechanical" part, as suggested by hydrodynamics. The connection between these two tensors, which appears organically here, eliminates the well-known contradictions inherent in the dynamics of electron theory. (_Editorial note:_ In this paper Lanczos continues to discuss his "fundamental equation," from which he consistently derives Proca's equation and its stress-energy tensor.)
In two previous papers,2 the author proposed a new way of describing Dirac's theory; namely, exclusively on the basis of the normal relativistic space-time structure and operating with customary field theoretical concepts only. In one respect, the new description displayed a peculiar deficiency: no vector could be found that would correspond to the fundamental zero-divergence "current vector" of Dirac's theory. Namely, the vector which could be considered as a form analogous to Dirac's current vector [cf., expression (90) in the first paper] is _not_ divergence free, whereas the formation which is really divergence free [cf., expression (13) in the second paper] does not represent a vector. This difficulty
one, for obviously there are still essential features missing. On the one hand, this is not at all conceivable on the basis of a linear system of equations and, on the other hand, these equations (just like the classical field equations) do not have regular "eigensolutions" of the kind which would give stationary energy nodes -- as would be expected for a really satisfactory "theory of matter."
However, the major objection which can be made to the conjecture, that quantum theory in the end would lead to a correction of classical field theory through the here revealed connection between Dirac's theory and the Maxwell equations, is that it does not yield the classical theory of the electron even as a "first approximation." To perform a comparison with electron theory, we should once more write down the reduced equation system (98) of the first paper which we obtained as a final result for the free electron if we omit all quantities which are extraneous to the theory of the electron. As the only difference, we shall introduce another field quantity:
\[\varphi_{i}=\frac{S_{i}}{\alpha},\] (22)
instead of the current vector \(S_{i}\). We shall call this quantity the "vector potential" in accordance with the feedback equation.
Then we have the equations:
\[\frac{\partial F_{i\nu}}{\partial x_{\nu}}=\alpha^{2}\varphi_{i},\qquad\frac{\partial\varphi_{i}}{\partial x_{k}}-\frac{\partial\varphi_{k}}{\partial x_{i}}=F_{ik}.\] (23)
In the vacuum equations of the electron theory, the right-hand side term of the first equation is missing.9 Thus, we obtain the classical field equations if we let the constant \(\alpha\) converge to 0. However, since the constant \(\alpha={2\pi mc}/{h}\) contains the Planck constant \(h\) in the numerator, this limiting process does not mean \(h\to 0\) but \(h\rightarrow\infty\). The macroscopic behavior of the electron will thus be characterized by the unnatural transition \(h\rightarrow\infty\) instead of the expected transition \(h\to 0\). In fact, one could consider the electron theory as a first approximation only if the constant of the theory were very small. In actual fact, this constant is very large: \(\alpha=2.59\times 10^{10}\,\text{cm}^{-1}\). That is, even if the equation were already completed by (still unknown) quadratic terms -- which is required anyway (cf., footnote 8) -- this would not yet solve the problem that the macroscopic behavior of the electron is certainly incorrectly described. Namely, at larger distances, where the quadratic
also form the components of a tensor. It is expedient to apply a factor of \(-\frac{1}{2}\) and therefore we put:
\[T_{ik}=-\frac{1}{2}(Fj_{i}\overline{F}^{*})_{k},\] (15)
and
\[U_{ik}=-\frac{1}{2}(G\overline{j}_{i}\overline{G}^{*})_{k}.\] (16)
The zero divergence tensor, which we shall denote by \(W_{ik}\), is composed of these two tensors:3
Footnote 3: The letter \(W\) should not remind one of probability (“Wahrscheinlichkeit”). If the Dirac vector could be interpreted as a “probability flux” (“Wahrscheinlichkeitsfluss”), then an analog interpretation for a tensor of second-order, here replacing the Dirac vector, would hardly have any meaning. Therefore, I think that at this stage no compromise is any longer possible between the “reactionary” viewpoint represented here (which aims at a complete field theoretical description based on the normal space-time structure) and the probability theoretical (statistical) approach.
\[W_{ik}=T_{ik}+U_{ik}.\] (17)
Thus, Dirac's conservation law for the four quaternions (5) appears in the form of a divergence equation for this tensor:
\[\text{div}(W_{ik})=\frac{\partial W_{i\nu}}{\partial x_{\nu}}=0,\] (18)
which describes the conservation laws of momentum and energy.
In actual fact, the tensor \(W_{ik}\) occurring here, whose divergence vanishes, can really with good reason be called a "stress-energy tensor" and thus we arrive at the following remarkable result:
In place of the Dirac current vector the stress-energy tensor occurs, and in place of the Dirac conservation law the momentum-energy law occurs.
The Dirac current vector was an extension of the scalar \(\psi\psi^{*}\) interpreted by Schrödinger as "the density of electricity." Here the same vector will be extended by one more rank: to a tensor of second order.4 However, the larger manifold of quantities may well be taken into account if we think of the fundamental significance the stress-energy tensor has for dynamics and of the fundamental significance of the Riemannian curvature tensor. Thereby, one can presume a metrical background for the whole theory proposed here, as well as a hidden connection with the most important and far-reaching branch of physics: with the general theory of relativity.
Footnote 4: This procedure resembles the development of gravitation theory where Newton’s scalar potential was extended to a tensor of second-order by Einstein.
Strangely, the stress-energy tensor given by (17) is not symmetric.
Let us first consider the tensor (15). It can be written in vector analytical terms as follows:
\[T_{ik}=SF_{ik}-M\widetilde{F}_{ik}-\frac{1}{2}(S^{2}+M^{2})g_{ik}+(F_{i}^{\nu} F_{k\nu}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}g_{ik}).\] (15')
The first two terms are antisymmetric, the others are symmetric.
The other tensor (16) appears in the form:
\[U_{ik}=\widetilde{(M_{i}S_{k}-M_{k}S_{i})}+M_{i}M_{k}+S_{i}S_{k}-\frac{1}{2}(M _{\nu}M^{\nu}+S_{\nu}S^{\nu})g_{ik}.\] (16')
Here, too, an antisymmetric term is produced by the interaction between the two vectors \(S_{i}\) and \(M_{i}\).
It is quite remarkable that the stress-energy tensor becomes symmetric when all those quantities which are extraneous to the Maxwell theory drop out, that is, if we set equal to zero the scalars \(S\) and \(M\), as well as the magnetic vector \(M_{i}\). In the first paper we indicated -- without an external field -- that this constraint is really possible, whereas in the second paper we saw that the same was not feasible after introducing the vector potential. Here it is indicated again that the introduction of the vector potential in the equations was not performed in the right way. In fact, the fundamental meaning of the stress-energy tensor would be lost if we sacrificed its symmetry -- there is no doubt about that.
If we retain only the electromagnetic field strength \(F_{ik}\) and the electric current vector \(S_{i}\) as fundamental quantities, then the now symmetric stress-energy tensor appears to be composed of two parts.
The first "electromagnetic" part \(T_{ik}\) is fully identical with the Maxwell stress-energy tensor of the electromagnetic field:
\[T_{ik}=F_{i}^{\nu}F_{k\nu}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}g_{ik}.\] (19)
The second part \(U_{ik}\) can also be given a certain meaning, in view of a similar formulation in mechanics. It is:
\[U_{ik}=S_{i}S_{k}-\frac{1}{2}S_{\nu}S^{\nu}g_{ik}.\] (20)
This tensor can be regarded as a "mechanical" stress-energy tensor. In fact, the symmetric tensor:
\[\mu_{0}u_{i}u_{k},\] (21)
where \(j_{\alpha}\) stands for one of the four quaternion units.
The fact that the Dirac divergence equation is quadrupled here suggests that we have a vectorial divergence instead of a scalar one. If this is true, then the set of the four quaternions (5) should be equivalent to a tensor. In actual fact, this is the case.
By means of a vector \(V\), one can namely form a vector again from an antisymmetric tensor \(F\) by means of the following quaternion product:
\[FV\overline{F}^{*}.\] (6)
Indeed,
\[F^{\prime}V^{\prime}\overline{F}^{\prime*}=pF\bar{p}\,pV\bar{p}^{*}\,p^{*}\overline{F}^{*}\bar{p}^{*}=p\left(FV\overline{F}^{*}\right)\bar{p}^{*}.\] (7)
However, for (6) we can write:
\[FV\overline{F}^{*}=(Fj_{\nu}\overline{F}^{*})V_{\nu},\] (8)
and the vector character of a quaternion \(Q\) can also be expressed by saying:
\[Q_{\mu}U_{\mu}=\text{invariant},\] (9)
where the components of quaternions \(Q\) are denoted by \(Q_{\mu}\) and where \(U\) is a vector. Hence we have:
\[(Fj_{\nu}\overline{F}^{*})_{\mu}U_{\mu}V_{\nu}=\text{invariant},\] (10)
and this means, according to the definition of a tensor, that the quantities:
\[(Fj_{i}\overline{F}^{*})_{k},\] (11)
are tensor components. In other words: if we write the components of each quaternion \(Fj_{\alpha}\overline{F}^{*}\) one after the other each in a line, then the four lines taken together yield an array of 16 quantities which are tensor components.
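Since the covariance identity (7) uses only the quaternion algebra and the normalization \(p\bar{p}=1\), it can be verified numerically for arbitrary complex components (a sketch; the tuple convention \((w,x,y,z)\) and all sample values are illustrative assumptions):

```python
import cmath

def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z) with complex components
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

qbar  = lambda q: (q[0], -q[1], -q[2], -q[3])        # quaternion conjugate (bar)
qstar = lambda q: tuple(c.conjugate() for c in q)    # complex conjugate (*)

# a unit biquaternion p (p pbar = 1), built from arbitrary sample components
p_raw = (1 + 0.2j, 0.5 - 0.1j, -0.3 + 0.4j, 0.2 + 0.1j)
s = cmath.sqrt(sum(c * c for c in p_raw))
p = tuple(c / s for c in p_raw)

F = (0.3 + 0.1j, -0.2 + 0.5j, 0.7 - 0.4j, 0.1 + 0.2j)
V = (0.6 - 0.3j, 0.4 + 0.2j, -0.5 + 0.1j, 0.9 - 0.2j)

Fp = qmul(qmul(p, F), qbar(p))            # six-vector rule F' = p F pbar
Vp = qmul(qmul(p, V), qstar(qbar(p)))     # vector rule     V' = p V pbar*

lhs = qmul(qmul(Fp, Vp), qstar(qbar(Fp)))                              # F'V'Fbar'*
rhs = qmul(qmul(p, qmul(qmul(F, V), qstar(qbar(F)))), qstar(qbar(p)))  # p(FVFbar*)pbar*
assert max(abs(l - r) for l, r in zip(lhs, rhs)) < 1e-12
```

The same helpers verify (13) with the rules \(G^{\prime}=pG\bar{p}^{*}\) and \(\overline{V}^{\prime}=p^{*}\overline{V}\bar{p}\).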
Something analogous can be done with the vector \(G\). There we can form a vector by means of the product \(G\overline{V}G\) or even by means of:
\[G\overline{V}\,\overline{G}^{*}=(G\overline{j}_{\nu}\overline{G}^{*})V_{\nu}.\] (12)
Then it holds that:
\[G^{\prime}\overline{V}^{\prime}\overline{G}^{\prime*}=pG\bar{p}^{*}\,p^{*}\overline{V}\bar{p}\,p\overline{G}^{*}\bar{p}^{*}=p\left(G\overline{V}\,\overline{G}^{*}\right)\bar{p}^{*}.\] (13)
That is, the quantities:
\[(G\overline{j}_{i}\overline{G}^{*})_{k},\] (14)
Footnote 7: Although the electron is, of course, “smeared,” an estimate of the dimensions may be of interest for a comparison with electron theory. Let us consider a spherical shell of radius \(a\) with evenly distributed charge and mass. Then the hydrostatic pressure connected with the mass density \({M}/{4\pi a^{2}}\) implies an inwards directed force of \({Mc^{2}}/{4\pi a^{3}}\) per unit surface. The outwards directed electric pulling force amounts to \(\frac{1}{2}\left({e}/{4\pi a^{2}}\right)^{2}\). The balance between the two forces requires that:
\[\frac{Mc^{2}}{4\pi a^{3}}=\frac{1}{2}\left(\frac{e}{4\pi a^{2}}\right)^{2}.\]
From this result, we obtain for the mechanical mass:
\[M=\frac{e^{2}}{8\pi c^{2}a}.\]
Just as large is the field’s electrostatic energy divided by \(c^{2}\), i.e., the “electromagnetic mass” calculated from electron theory.
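The footnote's balance algebra can be checked numerically (a sketch in Gaussian CGS units; the shell radius is an illustrative value, not a claim about the electron's size):

```python
import math

e = 4.803e-10   # esu, electron charge (Gaussian units)
c = 2.998e10    # cm/s
a = 1.0e-13     # cm, an illustrative shell radius

M = e ** 2 / (8 * math.pi * c ** 2 * a)            # mechanical mass from the footnote
inward = M * c ** 2 / (4 * math.pi * a ** 3)       # cohesion force per unit surface
outward = 0.5 * (e / (4 * math.pi * a ** 2)) ** 2  # electric pulling force per unit surface
assert math.isclose(inward, outward, rel_tol=1e-12)
```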
If we consider that the divergence of the Maxwell tensor yields the Lorentz force and the divergence of the mechanical tensor the inertia force, then we can see on these grounds how the dynamics of the electron follows as a harmonious closed whole, which has never been possible on the basis of classical field theory. Indeed, one had probably guessed that electromagnetic quantities needed to be completed by mechanical ones so that they could supply the "cohesion forces," i.e., prevent the electron from exploding into pieces and permit a differential formulation of dynamics. However, there had been no basis for expecting an organic merging of mechanics and electrodynamics.
The field equations obtained here provide such an inherent connection on account of the double coupling between field strength and current vector. The current vector ceases to be a "material" quantity forced from outside which does not really belong to the field and is only meant to avoid a singularity. Rather, here it represents an actual field quantity which is determined by the field equations. Similarly, the zero divergence of the matter tensor is no longer a heuristic principle for obtaining the dynamics in addition to the field equations, but these basic dynamic equations appear as a necessary consequence of the field equations. Thus, the inner closure is of the same structure as in the theory of general relativity where the divergence equation describes a mathematical identity of the curvature tensor and the principle of geodesics is affirmed already by the field equations.8
Footnote 8: Though it seems plausible to place electron dynamics on this basis, our approach is not yet sufficient for this. Namely, the divergence equation as a mathematical consequence of the field equations does not contain anything which would go beyond this. However, the field equations are linear and permit, therefore, the superposition principle which a priori excludes a dynamic influence. This discrepancy is most probably connected with the already often-mentioned difficulty: with the incorporation of the vector potential into the equations. This was first done on the basis of the quantum mechanical rule but it led to obviously unsatisfactory results. Such an incorporation would not at all be necessary since the field quantities are obviously available already in a sufficient choice and especially the current vector already plays the role of the vector potential in the “feedback,” so this should not be introduced separately as an extraneous element. The extension of the equations by the vector potential appears in this approach only as a makeshift for a not yet known nonlinearity of the system. Then the divergence equation could really contain the motion principle without becoming incompatible with the superposition principle (which is then not valid any more).
Thus, the connection (17) of the two essentially different tensors (15) and (16) is not an extraneous one but is unequivocally determined by the structure of the theory. For neither the one ("electromagnetic") nor the other ("mechanical") part has a vanishing divergence but only the given sum, whereby no factor and no sign remain free. Essential difficulties and inherent contradictions of electron theory are thus eliminated and the relationship revealed here, unexpectedly, is so impressive that it can hardly be doubted that this way leads us to deeper knowledge. Of course, we are not yet able to solve the electron problem by this al
have:
\[U_{ik}=\alpha^{2}(\varphi_{i}\varphi_{k}-\frac{1}{2}\varphi_{\nu}\varphi^{\nu} g_{ik}).\] (24)
In the peripheral domain, where \(\alpha\) has practically become zero, the mechanical component drops out and only the customary electromagnetic stress-energy tensor remains. However, in the central domain (just where it is demanded for the construction of the electron!) the strong mechanical component and, especially, the high cohesion pressure become active. Of course, the expression (24) and the sum to be formed from it on the basis of (17) can be considered for the matter tensor only in first approximation, with slowly varying \(\alpha\), because the zero divergence of this tensor was proven by assuming a constant \(\alpha\).
If the above anticipated possibilities should prove really viable, quantum mechanics would cease to be an independent theory. It would merge with a deeper "theory of matter," which relies on regular solutions of nonlinear differential equations -- in the final connection, it would be absorbed into the "world equations" of the Universe. Then the "matter-field" dualism would become just as obsolete as the "particle-wave" dualism.
Berlin-Nikolassee, August 1929.
## References
* [1] C. Lanczos, _Die tensoranalytischen Beziehungen der Diracschen Gleichung (The tensor analytical relationships of Dirac's equation)_, Zeits. f. Phys. **57** (1929) 447-473. Reprinted and translated **in** W.R. Davis _et al._, eds., Cornelius Lanczos Collected Published Papers With Commentaries, **III** (North Carolina State University, Raleigh, 1998) pages 2-1132 to 2-1185; e-print arXiv:physics/0508002 available at http://arXiv.org/abs/physics/0508002.
* [2] C. Lanczos, _Zur kovarianten Formulierung der Diracschen Gleichung (On the covariant formulation of Dirac's equation)_, Zeits. f. Phys. **57** (1929) 474-483. Reprinted and translated **in** W.R. Davis _et al._, eds., Cornelius Lanczos Collected Published Papers With Commentaries, **III** (North Carolina State University, Raleigh, 1998) pages 2-1186 to 2-1205; e-print arXiv:physics/0508012 available at http://arXiv.org/abs/physics/0508012.
* [3] C. Lanczos, _Die Erhaltungssätze in der feldmäßigen Darstellung der Diracschen Theorie (The conservation laws in the field theoretical representation of Dirac's theory)_, Zeits. f. Phys. **57** (1929) 484-493. Reprinted and translated **in** W.R. Davis _et al._, eds., Cornelius Lanczos Collected Published
Papers With Commentaries, **III** (North Carolina State University, Raleigh, 1998) pages 2-1206 to 2-1225; e-print arXiv:physics/0508013 available at http://arXiv.org/abs/physics/0508013.
* [4] C. Lanczos, _Diracs wellenmechanische Theorie des Elektrons und ihre feldtheoretische Ausgestaltung (Dirac's wave mechanical theory of the electron and its field-theoretical interpretation)_, Physikalische Zeits. **31** (1930) 120-130. Reprinted and translated **in** W.R. Davis _et al._, eds., Cornelius Lanczos Collected Published Papers With Commentaries, **III** (North Carolina State University, Raleigh, 1998) 2-1226 to 2-1247; e-print arXiv:physics/0508009 available at http://arXiv.org/abs/physics/0508009.
* [5] A. Gsponer and J.-P. Hurni, _Lanczos-Einstein-Petiau: From Dirac's equation to nonlinear wave mechanics,_**in** W.R. Davis et al., eds., Cornelius Lanczos Collected Published Papers With Commentaries, **III** (North Carolina State University, Raleigh, 1998) 2-1248 to 2-1277; e-print arXiv:physics/0508036 available at http://arXiv.org/abs/physics/0508036.
is solved -- as the author realized in the meantime -- by a fact which allows a much larger perspective for the field theoretical description and seems to confirm the inherent validity of the whole development to a great extent.
Footnote 2: Zeits. f. Phys. **57** (1929) 447 and 474. (_Editorial note:_ Refs. [1] and [2].)
As we mentioned before, the two Dirac equations for \(H\) and \(H^{\prime}\) [see equation (8) in the second paper] are completely equivalent to our whole system of field equations. Hence, we can form the "current vector" for these two Dirac equations (which is not complex and therefore actually represents only one vector) and in this way we obtain two zero-divergence expressions. We should not call these "vectors" because the vector character disappeared with our transformation properties of \(F\) and \(G\). However, the zero divergence follows simply from the field equations and is independent of the transformation properties.
Hence, we have two zero divergence quaternions:
\[H\overline{H}^{*},\quad H^{\prime}\overline{H}^{\prime*}\] (1)
and it is obvious that any arbitrary linear combination of them will be divergence-free as well.
If we write for a moment:
\[A=\frac{1}{2}(F+G),\quad B=\frac{1}{2}(F-G),\] (2)
then:
\[H=A+iBj_{z},\quad H^{\prime}=A-iBj_{z}.\] (3)
The zero divergence property holds for the following two quaternions as well:
\[A\overline{A}^{*}+B\overline{B}^{*}=\frac{1}{2}\left(F\overline{F}^{*}+G\overline{G}^{*}\right),\quad Bj_{z}\overline{A}^{*}+Aj_{z}\overline{B}^{*}=\frac{1}{2}\left(Fj_{z}\overline{F}^{*}-Gj_{z}\overline{G}^{*}\right).\] (4)
It is obvious, however, that the quaternion unit \(j_{z}\) cannot be distinguished from the remaining spatial units. The choice of \(j_{z}\) was only due to a special way of writing down the Dirac equation, which thereby requires a special ordering of the \(\psi\)-quantities. Accordingly, we may use either \(j_{x}\) or \(j_{y}\) instead of \(j_{z}\).
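Both relations in (4) are algebraic identities in \(F\) and \(G\) (they follow from bilinearity and the definitions (2)), so they can be spot-checked with arbitrary complex quaternion components (a sketch; the tuple convention \((w,x,y,z)\) and the sample values are assumptions):

```python
def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z) with complex components
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

qbar  = lambda q: (q[0], -q[1], -q[2], -q[3])        # quaternion conjugate (bar)
qstar = lambda q: tuple(c.conjugate() for c in q)    # complex conjugate (*)
qadd  = lambda a, b: tuple(x + y for x, y in zip(a, b))
qsub  = lambda a, b: tuple(x - y for x, y in zip(a, b))
half  = lambda q: tuple(x / 2 for x in q)

jz = (0, 0, 0, 1)                                    # quaternion unit j_z
F = (0.3 + 0.1j, -0.2 + 0.5j, 0.7 - 0.4j, 0.1 + 0.2j)
G = (0.6 - 0.3j, 0.4 + 0.2j, -0.5 + 0.1j, 0.9 - 0.2j)
A = half(qadd(F, G))                                 # A = (F + G) / 2
B = half(qsub(F, G))                                 # B = (F - G) / 2

# first relation: A Abar* + B Bbar* = (F Fbar* + G Gbar*) / 2
lhs1 = qadd(qmul(A, qstar(qbar(A))), qmul(B, qstar(qbar(B))))
rhs1 = half(qadd(qmul(F, qstar(qbar(F))), qmul(G, qstar(qbar(G)))))
# second relation: B jz Abar* + A jz Bbar* = (F jz Fbar* - G jz Gbar*) / 2
lhs2 = qadd(qmul(qmul(B, jz), qstar(qbar(A))), qmul(qmul(A, jz), qstar(qbar(B))))
rhs2 = half(qsub(qmul(qmul(F, jz), qstar(qbar(F))), qmul(qmul(G, jz), qstar(qbar(G)))))
assert max(abs(x - y) for x, y in zip(lhs1, rhs1)) < 1e-12
assert max(abs(x - y) for x, y in zip(lhs2, rhs2)) < 1e-12
```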
In this way, we obtain four divergence-free quaternions, which we can write down in the following compact form:
\[Fj_{\alpha}\overline{F}^{*}+G\overline{j}_{\alpha}\overline{G}^{*},\] (5)
terms have already been reduced to zero and the linear approximation is justified, one would not obtain potentials decreasing with \(1/r\), the potential behavior would rather be characterized by \({e^{-\alpha r}}/{r}\). Here \(\alpha\) is the strong attenuation constant which definitely contradicts experience since it completely excludes the action of an electron over a large distance.
Footnote 9: _Editorial note:_ Eq. (23) is the correct wave-equation for a massive spin 1 particle, and Eq. (17) the corresponding stress-energy tensor, both to be rediscovered by Proca in 1936. For more details, see section 11 in [5].
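The numerical value quoted above for \(\alpha\) can be reproduced from modern CGS constants (a quick check, not part of the original paper; the 1929 constants give the paper's \(2.59\times10^{10}\,\text{cm}^{-1}\)):

```python
import math

m = 9.109e-28                      # g, electron mass
c = 2.998e10                       # cm/s, speed of light
h = 6.626e-27                      # erg s, Planck constant
alpha = 2 * math.pi * m * c / h    # alpha = 2 pi m c / h
assert 2.55e10 < alpha < 2.65e10   # ~2.59e10 cm^-1
```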
If we consider it plausible that the quantum mechanical reaction of the single electron (spin action, etc.) acts over very short distances, then we can also say the following: At short distances the free electron behaves as if the constant \(\alpha\) were very large and at long distances as if the same constant were very small. Unless we want to accept the highly unlikely dualism that there are also special "quantum mechanical" processes in addition to the customary field theoretical ones, we necessarily arrive at the requirement that the constant \(\alpha^{2}\) of our theory should not be considered as an actual constant. It should be considered as a field function which depends on the fundamental field quantities themselves in some still unknown way.10
Footnote 10: The supposition of a correlation with the scalar Riemannian curvature (which also has the dimension \(\text{cm}^{-2}\)) is hardly rejectable in view of Einstein’s theory of gravitation. _Editorial note:_ A nonlinear generalization of the present theory in which the constant \(\alpha^{2}\) is replaced by \(\alpha^{2}\sigma\) where \(\sigma\) is a field function is considered in the last paper in this series, i.e., Ref. [4].
Then everything would fall into place. Then the term with \(\alpha^{2}\) is no longer a linear term but one of higher order. For the linear approximation, we would obtain the vacuum equations of the classical electron theory. The \(\alpha^{2}\) function would practically decrease to zero in the peripheral range, whereas it would be expected to have a practically smooth functional form of the given order of magnitude in the central domain, i.e., in the immediate vicinity of the electron's center. Then one could understand why the de Broglie-Schrödinger wave equation with constant \(\alpha\) cannot characterize a single electron but only a large "swarm" of electrons. Then statistical averages over multiple spatial neighborhoods of different electrons could result in a functional form of \(\alpha\) which is sufficiently constant for a larger domain, whereas \(\alpha\) decreases to zero very rapidly for a single electron.
In a comprehensive field theory, it would hardly be conceivable to introduce a quantity as a "universal constant" which contains the mass of the electron. It would then be hopeless to understand the mass difference between electron and proton.
Of course, this new hypothesis would also influence the matter tensor. In fact, the constant \(\alpha\) appears in the mechanical part of the matter tensor, and the vector potential according to (22) is introduced instead of the current vector. Then we
Fermi energy and with the temperature [10]. The spectral function in finite nuclei has been estimated taking into account the local density and the neutron-proton asymmetry [18] and compared to experimental data. Thermodynamic properties of the asymmetric [19] and pure neutron nuclear matter [20] have been discussed. We note that the \(T-\)matrix calculation can be performed very easily at finite temperature [6, 10, 12] (it is even simpler than at zero temperature).
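A nonzero imaginary part of the self-energy spreads the single-particle strength over energy. As a minimal illustration (a constant-width Lorentzian quasiparticle ansatz, not the self-consistent \(T-\)matrix spectral function), one can check the sum rule \(\int d\omega\,A(\omega)/2\pi=1\) numerically:

```python
import math

def spectral(omega, eps, gamma):
    # Lorentzian quasiparticle ansatz A(w) = Gamma / ((w - eps)^2 + Gamma^2 / 4)
    return gamma / ((omega - eps) ** 2 + gamma ** 2 / 4)

eps, gamma = 0.0, 5.0          # MeV, illustrative quasiparticle energy and width
lo, hi, n = -4000.0, 4000.0, 200001
h = (hi - lo) / (n - 1)
norm = sum(spectral(lo + i * h, eps, gamma) for i in range(n)) * h / (2 * math.pi)
assert abs(norm - 1.0) < 1e-2  # sum rule, up to discretization and tail cutoff
```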
## 3 Pairing
A general property of fermions interacting through an attractive potential is the transition to a superfluid state at low temperatures [21]. This phenomenon is usually not taken into account in nuclear matter studies, because the value of the superfluid gap, as extracted from the properties of finite nuclei, is expected to be small. On the other hand naive estimates of the neutron-proton pairing gap yield a value of several MeVs [22]. This value, obtained from a mean-field gap equation, is modified by many-body effects.
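The scale of such mean-field estimates can be illustrated with the gap equation for a contact interaction (a toy weak-coupling sketch with illustrative coupling and cutoff, not the in-medium \(T-\)matrix calculation discussed here):

```python
import math

lam, omega_c = 0.3, 10.0   # coupling lambda = g N(0), cutoff in MeV (illustrative)

# gap equation Delta = lam * Int_0^{omega_c} dxi Delta / sqrt(xi^2 + Delta^2);
# the integral is lam * Delta * asinh(omega_c / Delta), solved by fixed-point iteration
delta = 1.0
for _ in range(300):
    delta = lam * delta * math.asinh(omega_c / delta)

# weak-coupling closed form: Delta = omega_c / sinh(1 / lam)
closed = omega_c / math.sinh(1.0 / lam)
assert abs(delta - closed) < 1e-8
```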
A fundamental approach in the study of superfluid nuclear matter should aim at the description of the cold nuclear matter system dealing with a strong repulsive core in the interaction and at the same time with the formation of the superfluid state. A generalization of nuclear matter calculations must include the usual ladder diagram resummation
Figure 5: The spectral function in the superfluid phase, including the diagonal self-energy (solid line) and both the diagonal and anomalous self-energies (dashed line).
# Spectral properties of nuclear matter
P. Bozek1
Institute of Nuclear Physics, PL-31-342 Cracow, Poland
Footnote 1: Electronic address : piotr.bozek@ifj.edu.pl
(October 12, 2023)
###### Abstract
We review self-consistent spectral methods for nuclear matter calculations. The in-medium \(T-\)matrix approach is conserving and thermodynamically consistent. It gives both the global and the single-particle properties of the system. The \(T-\)matrix approximation allows one to address the pairing phenomenon in cold nuclear matter. A generalization of nuclear matter calculations to the superfluid phase is discussed and numerical results are presented for this case. The linear response of a correlated system going beyond the Hartree-Fock + Random-Phase-Approximation (RPA) scheme is studied. The polarization is obtained by solving a consistent Bethe-Salpeter (BS) equation for the coupling of dressed nucleons to an external field. We find that multipair contributions are important for the spin (isospin) response when the interaction is spin (isospin) dependent.
## 1 Introduction
Nuclear matter is an infinite system of strongly interacting fermions. Low energy nucleon-nucleon interactions in vacuum are well known from scattering experiments. The characteristic features of the nuclear force are the presence of a strongly repulsive core at small distances, attraction at moderate distances, and the appearance of a tensor component. The properties of nucleons in a strongly interacting medium are modified. This is taken into account by the dressing of nucleon propagators by a self-consistently calculated self-energy. A nonzero imaginary part of the self-energy implies a finite lifetime of
The result of the calculation with self-energy and vertex corrections is compared to the RPA result in Fig. 11. The collective mode in the density response has a large width in the full calculation. This corresponds to the coupling of the sound mode to multipair excitations giving rise to a finite decay width, even at zero temperature (the small width of the RPA collective mode is due to the finite temperature \(T=15\) MeV). For the isovector response shown in the lower panel of Fig. 11 the Fermi liquid theory predicts the presence of a well defined collective state (dashed-dotted line). For a scalar residual interaction the response function in the correlated system shows a collective state at similar energy (dashed line). It has a larger width due to the contribution of multipair configurations, just as in the density response. On the other hand, an isospin dependent residual interaction leads to an isovector response with a large imaginary part at high energies (Fig. 10). For the chosen interaction we observe the extreme scenario of the disappearance of the collective mode when the effects of the residual interaction are taken into account (solid line in the lower panel of Fig. 11). Generally we expect a whole range of behavior depending on the energy of the collective state and on the strength of isospin dependent terms in the residual interaction, but in every case the collective state is broader than in the Fermi liquid theory, due to the coupling to multipair configurations. The same phenomena are expected also for the spin wave collective state in the presence of spin dependent residual interactions.
## 5 Summary
We present a new method of calculations for nuclear matter. The method discussed in this work uses the in-medium Green's function formalism and is based on the summation of diagrams with a retarded propagation of a pair of fermions. The sum of such ladder diagrams is the in-medium scattering matrix (the \(T\)-matrix). Self-consistency in this approach means that the fermion propagators in the \(T\)-matrix diagrams are dressed with a nontrivial self-energy. The self-energy itself is obtained from the sum of ladder diagrams. This self-consistency requires the use of full spectral functions depending on the momentum and energy of the particle. The propagation off the mass shell, i.e. the full dependence of the nucleon propagator on the energy, is a serious difficulty in numerical applications. The progress achieved in recent years makes it possible to perform extensive calculations of the nuclear matter properties in the \(T-\)matrix approach at zero and finite temperatures.
Attractive nuclear interactions lead to the formation of a superfluid phase at low temperatures. Cooper pairs of fermions are formed, analogously to electron superconductivity in metals. The second important achievement reported here is the generalization of the ladder diagram summation to the superfluid phase.
The nucleon propagator has two poles, on both sides of the Fermi energy (Fig. 5). The most interesting result of this first, and so far unique, study is the observation of a strong reduction of the superfluid gap in correlated nuclear matter compared to results from a mean-field gap equation [24]. This effect was analyzed in detail [26] and can be explained by an effective modification of the density of states at the Fermi energy. The reduction of the superfluid energy gap is especially important for neutron-proton pairing close to the gap closure in symmetric nuclear matter [27].
Possible generalizations of the \(T\)-matrix approximation to the superfluid phase have been discussed by Haussmann [28]. The simplest scheme which is \(\Phi\)-derivable and includes the required ingredients, i.e. the resummation of ladder diagrams in the diagonal self-energy and the gap equation for the off-diagonal self-energy, requires the introduction of an additional \(T\)-matrix describing two-particle correlations for anomalous propagators. The generalized \(T\)-matrix has a singularity at twice the chemical potential for \(T\leq T_{c}\) (Fig. 6), and the imaginary part of the self-energy has a gap around the Fermi energy, corresponding to an energy gap for possible excitations in scattering processes [23]. This last, important property of the approximation does not hold for the simpler scheme discussed in [24]. Explicit calculations show that the corrections to the gap equation coming from ladder diagrams in the off-diagonal self-energy are small [23]. These small corrections make the superfluid gap energy dependent.
## 4 Linear response with dressed vertices
Processes occurring in dense nuclear matter are modified by medium effects [29]. This applies to neutrino rates in neutron stars and to particle or photon emission in hot nuclear matter. In particular, the off-shell propagation of nucleons in the medium is important for subthreshold particle production in heavy ion collisions. The role of correlations for neutrino emission could be especially important for processes in hot stars.
If the interaction with an external perturbation is small, the dynamics of the system in the external field can be described in terms of linear response functions. The response functions incorporate all the correlations due to the self-interaction of the particles in the system. For normal Fermi liquids the response to long-wavelength perturbations can be calculated within the Fermi liquid theory [30]. However, if one is interested in nuclear systems at higher temperatures, or in perturbations with large momentum, a different approach is required. The Fermi liquid theory does not allow for multipair excitations of the system in the external potential. Such multipair excitations are important for higher energies and momenta of the external perturbation or in the presence of tensor interactions [31].
For the \(T=0\) channels the correlated response function is similar to the one particle-one hole response function. On the other hand, the isovector response is closer to the naive one-loop result, without vertex corrections.
To study the role of multipair configurations in the collective modes we increase the value of the Landau parameters. Within the Fermi liquid theory a collective excitation at zero temperature is a discrete peak in the imaginary part of the response function; the state corresponding to a collective excitation cannot couple to incoherent one particle-one hole excitations. At finite temperature such a coupling is possible; it can be calculated, and the resulting finite temperature width of the collective state is usually small. The collective state can also acquire a finite width (even at zero temperature) due to the coupling to multipair configurations [30]. The description of the damping of collective states from such processes goes beyond the Fermi liquid theory.
Figure 11: The imaginary part of the polarization when a collective mode is present; density response (upper panel) and isospin response (lower panel). The results are obtained in the RPA approximation (dashed-dotted line), and in the full calculation with dressed vertices for an isospin dependent residual interaction (solid line) and a scalar one (dashed line). For the density response the result is independent of the type of the residual interaction.
* [20] P. Bozek, P. Czerski, Phys. Rev., **C66** (2002) 027301.
* [21] J. Bardeen, L. N. Cooper, J. R. Schrieffer, Phys. Rev., **108** (1957) 1175.
* [22] B. Vonderfecht, C. C. Gearhart, W. Dickhoff, A. Polls, A. Ramos, Phys. Lett., **B253** (1991) 1.
* [23] P. Bozek, Phys. Rev., **C65** (2002) 034327.
* [24] P. Bozek, Nucl. Phys., **A657** (1999) 187.
* [25] J. R. Schrieffer, _Theory of superconductivity_, W. A. Benjamin, New York, 1964.
* [26] P. Bozek, Phys. Rev., **C62** (2000) 054316.
* [27] P. Bozek, Phys. Lett., **B551** (2002) 93.
* [28] R. Haussmann, Z. Phys., **B91** (1993) 291.
* [29] J. Knoll, D. N. Voskresensky, Annals Phys., **249** (1996) 532.
* [30] D. Pines, P. Nozieres, _The Theory of Quantum Liquids Vol. I_, Benjamin, New York, 1966.
* [31] E. Olsson, C. J. Pethick, Phys. Rev., **C66** (2002) 065803.
* [32] D. Gogny, R. Padjen, Nucl. Phys., **A293** (1977) 365.
* [33] N. Kwong, M. Bonitz, Phys. Rev. Lett., **84** (2000) 1768.
* [34] S. Faleev, M. Stockman, Phys. Rev., **B66** (2002) 085318.
* [35] P. Bozek, Phys. Lett., **B579** (2004) 309.
* [36] D. Tamme, R. Schepe, K. Henneberger, Phys. Rev. Lett., **83** (1999) 241.
* [37] G. Baym, L. Kadanoff, Phys. Rev., **124** (1961) 287.
* [38] P. Bozek, J. Margueron, H. Muther, Ann. Phys., **318** (2005) 245.
## References
* [1] T. Matsubara, Prog. Theor. Phys., **14** (1955) 351.
* [2] L. V. Keldysh, Zh. Eksp. Teor. Fiz., **47** (1964) 1515.
* [3] G. Baym, Phys. Rev., **127** (1962) 1392.
* [4] L. Kadanoff, G. Baym, _Quantum Statistical Mechanics_, Benjamin, New York, 1962.
* [5] W. H. Dickhoff, Phys. Rev., **C58** (1998) 2807.
* [6] P. Bozek, Phys. Rev., **C59** (1999) 2619.
* [7] W. H. Dickhoff, C. C. Gearhart, E. P. Roth, A. Polls, A. Ramos, Phys. Rev., **C60** (1999) 064319.
* [8] Y. Dewulf, D. Van Neck, M. Waroquier, Phys. Lett., **B510** (2001) 89.
* [9] Y. Dewulf, D. Van Neck, M. Waroquier, Phys. Rev., **C65** (2002) 054316.
* [10] P. Bozek, Phys. Rev., **C65** (2002) 054306.
* [11] P. Bozek, P. Czerski, Acta Phys. Polon., **B34** (2003) 2759-2768.
* [12] T. Frick, H. Muther, Phys. Rev., **C68** (2003) 034310.
* [13] T. Alm, G. Ropke, A. Schnell, N. H. Kwong, H. S. Kohler, Phys. Rev., **C53** (1996) 2181.
* [14] P. Bozek, P. Czerski, Eur. Phys. J., **A11** (2001) 271.
* [15] P. Bozek, Eur. Phys. J., **A15** (2002) 325.
* [16] Y. Dewulf, W. H. Dickhoff, D. Van Neck, E. R. Stoddard, M. Waroquier, Phys. Rev. Lett., **90** (2003) 152501.
* [17] H. Q. Song, M. Baldo, G. Giansiracusa, U. Lombardo, Phys. Rev. Lett., **81** (1998) 1584.
* [18] P. Bozek, Phys. Lett., **B586** (2004) 239.
* [19] T. Frick, H. Muther, A. Rios, A. Polls, A. Ramos, Phys. Rev., **C71** (2005) 014313.
The solution of the \(T\)-matrix scheme at finite temperature was also obtained in [12].
At low temperatures two difficulties show up. The first is related to the transition to superfluidity in cold nuclear matter. This effect can be taken into account quite naturally within suitably generalized \(T\)-matrix approaches, which is the subject of the next section. We note, however, that most of the existing \(T\)-matrix calculations are done for the normal phase of nuclear matter, even at zero temperature. The second difficulty with low temperature nuclear matter is technical, and is related to the appearance of a well defined quasiparticle peak in the spectral function of the in-medium nucleon. Simple discretization algorithms for the energy integration break down in that case; the spectral function must be separated into background and quasiparticle contributions for momenta close to \(p_{F}\). At the time of this writing, there is only one numerical implementation of this procedure for the nuclear matter \(T\)-matrix (see [10] for details).
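The breakdown of a naive uniform energy grid can be illustrated with a toy spectral function: a narrow Lorentzian quasiparticle peak of weight \(Z\) on top of a smooth background carrying the remaining weight. All numbers below are illustrative, not taken from the nuclear matter calculation; the sketch only demonstrates why separating the peak and treating its weight analytically is necessary.

```python
import numpy as np

# Toy spectral function: narrow quasiparticle Lorentzian (weight Z, width
# gamma) plus a smooth background; all parameter values are illustrative.
Z, e_qp, gamma, bg_w = 0.7, 0.0, 1e-4, 50.0

def background(w):
    return (1.0 - Z) / np.pi * bg_w / (w**2 + bg_w**2)

def spectral(w):
    return Z / np.pi * gamma / ((w - e_qp)**2 + gamma**2) + background(w)

w = np.linspace(-200.0, 200.0, 4001)   # uniform grid, step 0.1 >> gamma
dw = w[1] - w[0]

# naive discretization: the narrow peak is grossly misweighted by the grid
naive = spectral(w).sum() * dw

# separated treatment: smooth background on the grid, quasiparticle weight
# added analytically (arctan form of the Lorentzian integral)
qp = Z / np.pi * (np.arctan((w[-1] - e_qp) / gamma)
                  - np.arctan((w[0] - e_qp) / gamma))
separated = background(w).sum() * dw + qp

print(naive, separated)   # 'separated' is close to the exact value ~0.95
```

This is the spirit, though not the detail, of the background/quasiparticle separation implemented in [10].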
First calculations [5, 6, 7] showed that self-consistency in the nucleon self-energies is very important. The effective scattering is reduced in the self-consistent calculation compared to an approximation neglecting the imaginary part of the self-energy [13]. The value of the critical temperature for the superfluid phase transition also goes down when fully dressed propagators are used in the Thouless criterion for superfluidity.
Further developments have addressed the binding energy in the \(T\)-matrix approximation [14, 15, 9, 16]. The influence of correlations, high momentum states, and low energy tails of the spectral function on the binding has been discussed [15]. Formally, one should not expect the \(T\)-matrix approximation to outperform the standard \(G\)-matrix approach for the binding energy [17].
Figure 3: The nucleon spectral function obtained in the self-consistent \(T\)-matrix approximation for nuclear matter.
The second important achievement reported here is the generalization of the ladder diagram summation to the superfluid phase. We discuss equations which allow for a simultaneous and consistent treatment of the short range nuclear interactions, the bound state formation (Cooper pairs) in the \(T\)-matrix, and the superfluidity in nuclear matter. Model calculations have allowed us to estimate the influence of the superfluidity on standard many-body effects in nuclear matter. At the same time, the influence of many-body corrections on the superfluid gap can be assessed. These corrections, which reduce the value of the gap, can be described by an effective superfluid gap equation, similar to the mean-field Bardeen-Cooper-Schrieffer equation, with an effective interaction and a renormalized value of the order parameter.
The self-consistent \(T\)-matrix approximation is thermodynamically consistent, a so-called conserving approximation. Single-particle properties obtained within this approximation are consistent with the global quantities describing the system, such as the binding energy or the pressure. Unlike other methods, it allows one to obtain single-particle properties directly: optical potentials, single-particle widths, and spectral functions. These observables can be compared to experimental values.
The calculation of processes occurring in the dense medium, e.g. particle absorption, emission, or neutrino cross-sections, requires the knowledge of different kinds of linear responses of the system to external probes. In a correlated system described using dressed propagators, the calculation of the linear response is a very difficult task. The problem lies in the need for a consistent dressing of the vertices corresponding to the coupling of dressed nucleons to an external field. The in-medium vertices are obtained from a solution of the BS equation, where the kernel of the equation is derived from the same generating functional \(\Phi\) as the self-energy. This procedure guarantees the fulfillment of conservation laws and sum rules. It must be contrasted with incomplete procedures that include only self-energy effects without vertex corrections. The linear response with self-energy and vertex corrections beyond the Hartree-Fock+RPA approximation includes the effects of multipair excitations. For fermions interacting with a scalar potential such effects are small, except for the damping of collective modes, where the multipair configurations are important. The situation is very different for spin or isospin dependent interactions. In that case the spin or isospin response is very different from the RPA result; multipair effects are essential.
The applications discussed in this talk are restricted to nuclear systems. We note that the same methods can be applied to other many-body fermionic systems: high-\(T_{c}\) superconductors, fermions near a Feshbach resonance, or the electron gas.
This work was partly supported by the Polish State Committee for Scientific Research Grant No. 2P03B05925.
Formally, one should not expect better results for the binding energy in the \(T\)-matrix approximation than in the standard \(G\)-matrix approach with higher order terms in the hole-line expansion [17]. However, the authors of ref. [16] claim that the \(T\)-matrix calculation, which does not take into account the contribution of ring diagrams, is better adapted to describe nuclear matter in finite nuclei. An important observation is made in ref. [14], where the role of the thermodynamical consistency of the \(T\)-matrix approximation is discussed. As expected from the \(\Phi\)-derivability of the scheme, the \(T\)-matrix calculation gives consistent results for the global quantities of the system and its single-particle properties (Fig. 4). In particular, the \(T\)-matrix results fulfill the Hugenholtz-Van Hove theorem and the Luttinger relation for the value of the Fermi momentum. In practice, such a consistency of the single-particle energy can be obtained within the Brueckner theory only approximately, by taking rearrangement terms to a given order.
The single-particle properties obtained in the self-consistent iteration of the \(T\)-matrix scheme represent a reliable estimate of the optical potential and of the single-particle width in the medium [10]. Nuclear matter in the non-superfluid phase behaves as a Fermi liquid, with the standard phase-space scaling of the scattering width with the distance to the Fermi energy.
Figure 4: The binding energy and the Fermi energy (upper panel) and two (formally equivalent) expressions for the pressure (lower panel) as functions of density in the \(T\)-matrix and the Brueckner-Hartree-Fock calculations.
The second, even more important, difference arises at the level of the self-energy, which in the \(\Phi\)-derivable \(T\)-matrix approximation must be taken self-consistently in the form (Fig. 2)
\[\Sigma=TrTG\ .\] (3)
The self-consistency means that the nucleon propagators at all stages of the approximation are dressed by this self-energy. It requires an iterative procedure for the solution of the coupled equations (2), (3), and (1). The \(T\) matrix obtained in a given iteration is used to generate the self-energy and the dressed propagators; the dressed propagators are then used to calculate the in-medium \(T\) matrix in the next iteration.
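The structure of this iteration can be sketched with a zero-dimensional toy model, in which the propagator, the \(T\) matrix, and the self-energy are single complex numbers rather than momentum- and energy-dependent functions; the coupling \(V\), the energy \(z\), and the mixing parameter are illustrative choices, not part of the actual scheme.

```python
import numpy as np

V, eps = -0.3, 0.0       # illustrative coupling and single-particle energy
z = 1.0 + 0.5j           # complex energy argument

G = 1.0 / (z - eps)      # start from the bare propagator G0
for it in range(500):
    T = V / (1.0 - V * G * G)        # ladder resummation, analogue of Eq. (2)
    Sigma = T * G                    # self-energy closure, analogue of Eq. (3)
    G_new = 1.0 / (z - eps - Sigma)  # Dyson equation, Eq. (1)
    if abs(G_new - G) < 1e-13:
        G = G_new
        break
    G = 0.5 * G + 0.5 * G_new        # linear mixing stabilizes the iteration

T = V / (1.0 - V * G * G)
Sigma = T * G
print(G, abs(G - 1.0 / (z - eps - Sigma)))   # residual ~ 0 at self-consistency
```

At the fixed point all three equations are satisfied simultaneously, which is exactly what "self-consistency" means in the text; in the real calculation each quantity is a full function of momentum and energy.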
The numerical calculations are very demanding, because the nucleons are dressed by nontrivial spectral functions coming from the imaginary part of the self-energy (Fig. 3); additional integrations over the energy of the off-shell nucleons appear. First results on the self-consistent in-medium \(T\) matrix appeared only in recent years [5, 6]. In [5, 7] the spectral function of the dressed nucleon was parameterized by three Gaussians, and in [8, 9] by a three-pole approximation for the in-medium propagator. The first solution using full spectral functions for the dressed propagators in nuclear matter was obtained numerically in ref. [6] at finite temperature, for a simple nuclear interaction. This approach, using numerical algorithms that significantly speed up the calculations, was then generalized to realistic interactions and to zero temperature [10, 11]. The solution of the \(T\)-matrix scheme at finite temperature was also obtained in [12].
Figure 1: Definition of the \(T\)-matrix as a sum of ladder diagrams in the interaction.
Figure 2: The self-energy in the \(T\)-matrix approximation.
in the normal phase. At the same time it should describe the formation of the superfluid long-range order in the two-particle correlations. Such correlations appear as a singularity of the \(T\) matrix, at \(T=T_{c}\), at an energy equal to twice the fermionic chemical potential. This corresponds to the formation of a two-fermion bound state. At temperatures below \(T_{c}\) such fermionic (Cooper) pairs have a binding energy, allowing for condensation and the formation of a superfluid order parameter.
In ref. [24] a first attempt to generalize nuclear matter calculations to the superfluid phase was discussed. At low temperatures a superfluid state forms, leading to nonzero expectation values of the off-diagonal (anomalous) Green's functions and self-energies [25]. The off-diagonal self-energy is obtained from a gap equation, but with dressed propagators. The \(T\)-matrix in the superfluid phase must be constructed so as to preserve the singularity at twice the chemical potential also for \(T<T_{c}\). Such a scheme was considered in [24], giving the first calculation of the superfluid nuclear matter problem including both the ladder resummation of the nuclear interaction and the superfluid properties obtained with a gap equation modified by many-body effects. The nucleon propagator in the superfluid phase is dressed by a diagonal self-energy \(\Sigma\) coming from a modified \(T\)-matrix approximation and by an off-diagonal anomalous self-energy \(\Delta\); it has two poles, on both sides of the Fermi energy (Fig. 5).
Figure 6: Inverse of the \(T\)-matrix in the pairing (deuteron) channel in the superfluid phase. The usual \(T\) matrix (dashed line) and the generalized \(T\) matrix with off-diagonal components (solid line) are shown [23]. The generalized \(T\) matrix has a singularity at the Fermi energy for zero total momentum of the pair.
Using the dressed vertex obtained as a solution of the BS equation, the response function in the correlated medium can be obtained from the diagram in Fig. 9
\[\Pi_{(ST)}=Tr\Gamma^{0}_{(ST)}G_{ph}\Gamma_{(ST)}\ ,\] (6)
where \(\Gamma_{(ST)}\) is the in-medium vertex for the coupling in a given spin-isospin \((ST)\) channel. The exact form of the BS equation in the real-time formalism (for the different types of vertices \(\Gamma_{(ST)}\)) can be found in [35, 38]. The solution of the BS integral equation is obtained by iteration. It is a serious numerical task, involving the calculation of two-loop diagrams with dressed propagators and with rotational symmetry broken by the presence of the external field.
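A minimal sketch of such a fixed-point iteration, with a small random matrix standing in for the discretized kernel \(K G_{ph}\) (purely illustrative, not a nuclear matter kernel); the iterated vertex is compared against the direct matrix inversion of the same linear equation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
Gamma0 = np.ones(n)                                  # bare vertex on the grid
M = 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in for K * G_ph

Gamma = Gamma0.copy()
for it in range(1000):                 # fixed-point iteration of Eq. (5)
    Gamma_new = Gamma0 + M @ Gamma
    if np.max(np.abs(Gamma_new - Gamma)) < 1e-12:
        Gamma = Gamma_new
        break
    Gamma = Gamma_new

Gamma_direct = np.linalg.solve(np.eye(n) - M, Gamma0)  # direct inversion
print(np.max(np.abs(Gamma - Gamma_direct)))            # agreement check
```

The iteration converges because the toy kernel is a contraction; in the real calculation each "matrix element" is itself a two-loop integral with dressed propagators, which is what makes the problem numerically demanding.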
The results for the response functions in different spin-isospin channels [38] show that a calculation using dressed propagators and dressed vertices obtained as solutions of the BS equation is close to the RPA approximation. This is expected, due to cancellations between self-energy and vertex corrections. Such cancellations occur for the exact solution of the system, as well as within consistent approximations derived from a generating functional \(\Phi\). For a scalar residual interaction the \(\omega\)-sum rule takes the simple form
\[-\int\frac{\omega d\omega}{2\pi}{\rm Im}\Pi^{r}_{(ST)}({\bf q},\omega)=\rho \frac{{\bf q}^{2}}{2m}\ ,\] (7)
in all the spin-isospin channels \({(ST)}\). The above sum rule severely constrains the acceptable forms of the response functions in the case of a scalar residual interaction and leads to very small multipair contributions.
Figure 8: The Bethe-Salpeter equation for the dressed vertex. The particle-hole irreducible kernel \(K\) is denoted by the box; the large and the small dots denote the dressed and the bare vertices, respectively, for the coupling of the external field to the nucleon.
Figure 9: The polarization function expressed using the dressed vertex for the coupling of the external field to the dressed nucleon.
In most of the kinematical regions the multipair contributions are very small. Some differences can occur only when collective modes are present, as discussed later.
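The \(\omega\)-sum rule (7) can be checked numerically for a model response in which a single collective mode at \(\omega_{0}\) exhausts the sum rule; the delta peaks of the ideal mode are represented by narrow Gaussians, and all parameter values are illustrative.

```python
import numpy as np

rho, q, m, w0 = 0.16, 1.2, 1.0, 0.8   # illustrative density, momentum, mass, mode energy
sig = 0.01                             # small width replacing the delta peaks

def gauss(w, center):
    return np.exp(-(w - center)**2 / (2 * sig**2)) / (sig * np.sqrt(2 * np.pi))

w = np.linspace(-5.0, 5.0, 200001)
dw = w[1] - w[0]

# Im Pi for a single mode exhausting the f-sum rule (odd in omega)
im_pi = -(np.pi * rho * q**2) / (2 * m * w0) * (gauss(w, w0) - gauss(w, -w0))

lhs = -(w * im_pi).sum() * dw / (2 * np.pi)   # left-hand side of Eq. (7)
rhs = rho * q**2 / (2 * m)                    # right-hand side of Eq. (7)
print(lhs, rhs)
```

The two sides agree to the accuracy of the discretization, illustrating how Eq. (7) fixes the total spectral weight regardless of how it is distributed in energy.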
The situation is very different for a more general form of the residual interaction.
The kernel of the BS equation depends strongly on the isospin channel; e.g., in the channel \(ST=11\) there are no vertex corrections from the residual interaction at all. The propagators are dressed by a nontrivial self-energy due to the residual interaction, but the vertex corrections are small or even absent. Therefore the cancellation between self-energy and vertex corrections, observed for a scalar interaction, can no longer be maintained. This has implications for the \(\omega\)-sum rule, which is modified for responses with \(T=1\) [38]. In the case of the isospin dependent residual interaction, in-medium dressed nucleons couple in the same way as free nucleons to isovector potentials. In Fig. 10 the response functions obtained with the isospin dependent interaction are compared to the response functions from the Fermi liquid theory. For the \(T=0\) channels the correlated response function is similar to the one particle-one hole response function.
Figure 10: The imaginary part of the polarization in the RPA approximation (dashed-dotted line), from the self-consistent calculation with dressed nucleons and vertices (solid line), and the naive one-loop polarization with dressed nucleons (dashed line). All results are for an isospin dependent interaction \(\frac{1}{2}(1+\tau_{1}\tau_{2})V\), \(q=210\) MeV, and \(T=15\) MeV.
quasiparticles and gives them nontrivial spectral properties; nucleons propagate off the energy shell. The description of nuclear matter in the language of such dressed nucleons, with broad spectral functions, has been the subject of intensive studies in recent years.
## 2 In medium \(T\) matrix
The description of the properties and excitations of a many-body system can be performed in the language of finite-temperature Green's functions. The Green's function approach is most easily formulated in the imaginary-time formalism [1], suitable for formal presentations and perturbative calculations, but the real-time formalism [2] is better adapted for numerical calculations. In-medium propagators (Green's functions) are dressed by a self-energy term. The physical approximation enters in the choice of a suitable expression for the self-energy used in the calculation. The analytical properties of the Green's functions are guaranteed by the dispersion relation between the imaginary and real parts of the self-energy and by the use of the Dyson equation
\[G^{-1}=G_{0}^{-1}-\Sigma\ ,\] (1)
where \(G\) and \(G_{0}\) represent the dressed and vacuum propagators respectively and \(\Sigma\) is the self-energy.
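The Dyson equation can be illustrated numerically: for a simple model retarded self-energy with a single pole at \(\omega_{c}-i\Gamma\) (an assumption made purely for illustration, not the nuclear self-energy), the dressed propagator and its spectral function follow directly, and the spectral sum rule can be checked.

```python
import numpy as np

# Model retarded self-energy Sigma(w) = g^2 / (w - w_c + i*Gamma);
# eps, g, w_c, Gamma are illustrative values.
eps, g, w_c, Gam = 0.0, 0.5, 1.0, 0.2

w = np.linspace(-50.0, 50.0, 400001)
dw = w[1] - w[0]
Sigma = g**2 / (w - w_c + 1j * Gam)
G = 1.0 / (w - eps - Sigma)          # dressed propagator from Eq. (1)
A = -2.0 * G.imag                    # spectral function

norm = A.sum() * dw / (2.0 * np.pi)  # sum rule: total weight should be ~1
print(norm)
```

Because the model self-energy is analytic in the upper half plane, the resulting spectral function is positive and normalized, which is exactly the analytical structure the text refers to.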
The choice of a specific form of the self-energy should be adapted to the physical system under consideration, but a general procedure to derive nonperturbative approximations in many-body systems has been proposed [3]: the self-energy is written as a functional derivative of a two-particle irreducible generating functional, \(\Sigma=\frac{\delta\Phi}{\delta G}\).
Due to the presence of a short range repulsive core, ladder diagrams in the free nucleon-nucleon interaction \(V\) must be resummed. The resulting correlated part of the two-particle Green's function is called the in-medium \(T-\)matrix [4] (Fig. 1)
\[T=V+VGGT\ .\] (2)
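Discretized on a momentum grid, Eq. (2) becomes a linear matrix equation that can be solved by direct inversion. The sketch below uses a smooth attractive toy kernel and a schematic two-particle propagator, not a realistic nucleon-nucleon interaction:

```python
import numpy as np

n = 200
k, wq = np.polynomial.legendre.leggauss(n)   # Gauss-Legendre nodes on [-1, 1]
k = 0.5 * (k + 1.0) * 10.0                   # map to momenta on [0, 10]
wq = wq * 5.0                                # rescaled quadrature weights

V = -0.4 * np.exp(-np.add.outer(k**2, k**2) / 4.0)  # smooth attractive kernel
E = 2.0 + 0.1j                               # pair energy off the real axis
G2 = 1.0 / (E - k**2)                        # toy two-particle propagator

kernel = V * (G2 * wq)[None, :]              # V(k, k'') G2(k'') dk''
T = np.linalg.solve(np.eye(n) - kernel, V)   # T = (1 - V G2)^(-1) V

residual = np.max(np.abs(T - (V + kernel @ T)))
print(residual)                              # ~ machine precision
```

In the self-consistent scheme the propagator pair \(GG\) itself carries full spectral functions, so this linear solve sits inside the energy integrations and the iteration loop described below.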
Although the ladder diagrams for the \(T\) matrix have the same form as for the Brueckner \(G\)-matrix, the resulting expressions are different. The \(T\)-matrix equation is defined with in-medium Green's functions. At zero temperature the two-nucleon propagator in Fig. 1 describes the propagation of a particle-particle pair (excitations above the Fermi energy) or a hole-hole pair (excitations below the Fermi energy), whereas in the Brueckner scheme the blocking operator forces the two nucleons in the ladder to be of the particle-particle type. The second, even more important, difference arises at the level of the self-energy.
Multipair excitations are also important in the presence of tensor interactions [31].
We take the interactions between the nucleons as a sum of a mean-field interaction based on the Gogny parameterization [32] and a residual interaction (scalar or isospin dependent). The isospin dependent residual interaction is obtained from the scalar one by multiplying it by the factor \(\frac{1}{2}(1+\tau_{1}\tau_{2})\). The self-energy is taken in the second-order direct Born approximation for the residual interaction (Fig. 7).
The residual interaction gives a finite width to nucleon excitations in the medium. Such a dressing of the nucleons is expected in any approach going beyond the simple mean field, e.g. the \(T\)-matrix approximation discussed previously.
When the description of the correlated system goes beyond the mean-field approximation, a consistent calculation of the response function becomes much more difficult [33, 34]. A naive calculation of the polarization bubble using dressed propagators
\[\Pi=Tr\Gamma_{0}G_{ph}\Gamma_{0}\ ,\] (4)
where \(G_{ph}\) is the particle-hole propagator with dressed nucleons and \(\Gamma_{0}\) is the free vertex for the coupling of the nucleon to an external field, is a very poor estimate of the response function [35]. In particular, it severely violates the \(\omega\)-sum rule [36].
A general recipe for calculating the in-medium coupling of the external potential to dressed nucleons is known [37]. The in-medium vertex describing the coupling of an external perturbation to nucleons is given by the solution of the BS equation (Fig. 8)
\[\Gamma=\Gamma_{0}+TrKG_{ph}\Gamma\ ,\] (5)
where \(K\) denotes the particle-hole irreducible kernel. The kernel \(K\) of the BS equation should be taken consistently with the chosen expression for the self-energy; it is given by the functional derivative of the self-energy with respect to the dressed Green's function, \(K={\delta\Sigma}/{\delta G}\) [37].
Using the dressed vertex obtained as a solution of the BS equation, the response function of the correlated medium can be computed.
Figure 7: Diagrams for the self-energy. The first two diagrams are the Hartree-Fock contribution for the Gogny interaction. The last diagram is the second-order contribution of the residual interaction.
# Can one predict DNA Transcription Start Sites by studying bubbles?
Titus S. van Erp\({}^{1,2}\), Santiago Cuesta-Lopez\({}^{2,3}\), Johannes-Geert Hagmann\({}^{1,2}\), and Michel Peyrard\({}^{2}\)
\(1\) Centre Europeen de Calcul Atomique et Moleculaire (CECAM)
\(2\) Laboratoire de Physique, Ecole Normale Superieure de Lyon, 46 allee d'Italie, 69364 Lyon Cedex 07, France
\(3\) Dept. of Condensed Matter Physics and Institute for Biocomputation and Physics of Complex Systems, University of Zaragoza, c/ Pedro Cerbuna s/n, 50009 Zaragoza, Spain
###### Abstract
It has been speculated that the formation of bubbles of several base pairs due to thermal fluctuations is indicative of biologically active sites. Recent evidence, based on experiments and molecular dynamics (MD) simulations using the Peyrard-Bishop-Dauxois model, seems to point in this direction. However, sufficiently large bubbles appear only rarely, which makes an accurate calculation difficult even for minimal models. In this letter, we introduce a new method that is orders of magnitude faster than MD. Using this method we show that the present evidence is unsubstantiated.
PACS numbers: 87.15.Aa, 87.15.He, 05.10.-a

Double-stranded DNA (dsDNA) is not a static entity. In solution, the bonds between bases on opposite strands can break even at room temperature. This can happen for entire regions of the dsDNA chain, which then form bubbles of several base pairs (bp). These phenomena are important for biological processes such as replication and transcription. The local opening of the DNA double helix at the transcription start site (TSS) is a crucial step for the transcription of the genetic code. This opening is driven by proteins, but the intrinsic fluctuations of DNA itself probably play an important role. The statistical and dynamical properties of these denaturation bubbles and their relation to biological functions have therefore been the subject of many experimental and theoretical studies. It is known that the denaturation process of finite DNA chains is not simply determined by the fraction of strong (GC) or weak (AT) base pairs; the sequence-specific order is important. Special sequences can have a high opening rate despite a high fraction of GC base pairs [Dornberger]. For supercoiled DNA, it has been suggested that these sequences are related to places known to be important for initiating and regulating transcription [PNAS1]. For dsDNA, Choi et al. found evidence that the formation of bubbles is directly related to the transcription sites [ChoiNuc2004]. In particular, their results indicated that the TSS could be predicted on the basis of the formation probabilities for bubbles of ten or more base pairs in the absence of proteins. Hence, the secret of the TSS is not in the protein that reads the code, but really a characteristic of DNA itself, as expressed by the statement: _DNA directs its own transcription_ [ChoiNuc2004]. In that work, S1 nuclease cleavage experiments were compared with molecular dynamics (MD) simulations of the Peyrard-Bishop-Dauxois (PBD) model [PB; PBD] of DNA. The method used is not without limitations.
The S1 nuclease cleavage is related to opening, but many other complicated factors are involved. Moreover, theoretical and computational studies have to rely on simplified models and considerable computational power. As the formation of large bubbles occurs only rarely in a microscopic system, MD or Monte Carlo (MC) methods require demanding computational efforts to obtain sufficient accuracy. Nevertheless, the probability profile found for bubbles of size ten and higher showed a striking correlation with the experimental results, yielding pronounced peaks at the TSS [ChoiNuc2004]. Still, the large statistical uncertainties make this correlation questionable. To settle the assessment, we would need either extensively long or exceedingly many simulation runs, or a different method that is significantly faster than MD.
In this letter, we introduce such a method for the calculation of bubble statistics in first-neighbor interaction models like the PBD. We applied it to the sequences studied in Ref. [ChoiNuc2004] and, to validate the method and to compare its efficiency, we repeated the MD simulations with 100 times longer runs. The new method gives results consistent with MD, but with much higher accuracy than even these considerably longer simulations. Armed with this method, we make a full analysis of the preferential opening sites for bubbles of any length. This analysis shows that, using equilibrium statistics, there is no strict correspondence between these preferential sites and the TSS. Hence, the previously found correlation must have been either accidental or due to some non-equilibrium effect, which remains speculative. We discuss this issue and, more generally, the theoretical and experimental advances required to answer the title's question definitively.
The PBD model reduces the myriad degrees of freedom of DNA to a one-dimensional chain of effective atom compounds describing the relative base-pair separations \(y_{i}\) from the ground state positions. The total potential energy \(U\) for an \(N\) base-pair DNA chain is then given by \(U(y^{N})=V_{1}(y_{1})+\sum_{i=2}^{N}\big{[}V_{i}(y_{i})+W(y_{i},y_{i-1})\big{]}\) with \(y^{N}\equiv\{y_{i}\}\) the set of relative base-pair positions and
\[V_{i}(y_{i}) = D_{i}\Big{(}e^{-a_{i}y_{i}}-1\Big{)}^{2}\] (1) \[W(y_{i},y_{i-1}) = \frac{1}{2}K\Big{(}1+\rho e^{-\alpha(y_{i}+y_{i-1})}\Big{)}(y_{i} -y_{i-1})^{2}\]
The first term \(V_{i}\) is the on-site Morse potential describing the hydrogen bonds between the two bases of a pair.
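The potential of Eq. (1) is straightforward to implement; the sketch below uses parameter values of the order of those quoted in the PBD literature, chosen here for illustration only, not a fitted parameter set.

```python
import numpy as np

# Illustrative parameters (eV and Angstrom units), not a fitted PBD set.
D_AT, D_GC = 0.05, 0.075
a_AT, a_GC = 4.2, 6.9
K, rho, alpha = 0.025, 2.0, 0.35

def morse(y, D, a):
    # on-site potential V_i of Eq. (1)
    return D * (np.exp(-a * y) - 1.0) ** 2

def stacking(y, y_prev):
    # nearest-neighbor coupling W of Eq. (1)
    return 0.5 * K * (1.0 + rho * np.exp(-alpha * (y + y_prev))) * (y - y_prev) ** 2

def pbd_energy(y, seq):
    """Total PBD potential energy of an N-bp chain; seq is a string like 'ATGC'."""
    at = np.isin(list(seq), list("AT"))
    D = np.where(at, D_AT, D_GC)
    a = np.where(at, a_AT, a_GC)
    U = morse(y[0], D[0], a[0])
    for i in range(1, len(y)):
        U += morse(y[i], D[i], a[i]) + stacking(y[i], y[i - 1])
    return U

print(pbd_energy(np.zeros(8), "ATGCGCTA"))       # closed chain: 0.0
print(pbd_energy(0.1 * np.ones(8), "ATGCGCTA"))  # stretched chain: > 0
```

The sequence enters only through the per-site Morse parameters, which is how the model captures the difference between AT (two hydrogen bonds) and GC (three hydrogen bonds) base pairs.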
Footnote 1: In principle, MD suffers from the same problem, as it allows for complete strand separation. As a consequence, very long MD simulations will always give erroneous results. To restrict the MD to dsDNA one can use a bias potential acting on \(y_{\rm min}={\rm MIN}[\{y_{i}\}]\); for instance, \(V^{\rm bias}(y_{\rm min})=(y_{\rm min}-y_{0})^{6}\) if \(y_{\rm min}>y_{0}\) and 0 otherwise. However, at 300 K complete denaturation occurs so rarely that it was not detected in any of our simulations.
We obtained relative errors of around 10% for Nose-Hoover dynamics and for Langevin dynamics with \(\gamma=10\) and \(5\) ps\({}^{-1}\). The errors for \(\gamma=0.05\) ps\({}^{-1}\), the value used in Ref. [ChoiNuc2004], were considerably larger, due to a stronger correlation between successive timesteps. The results of [ChoiNuc2004] were based on 100 times fewer statistics; hence, the corresponding errors must have been 10 times larger, which can explain the discrepancy with our results. Another explanation could be that the results of [ChoiNuc2004] are due to some out-of-equilibrium or dynamical effects. Such effects depend strongly on the choice of initial conditions, which poses the problem of defining biologically significant initial conditions and of determining, in a meaningful way, the relevant time scale over which the simulations have to be carried out to detect such non-equilibrium phenomena.
The error in the new method is mainly due to the finite integration step. To estimate the accuracy, we compared \(\Delta y=0.1\) and \(0.05\) with the almost exact results of \(\Delta y=0.025\). Using the TSS peak of the AAVP5 sequence with free boundaries as reference, we found that the systematic error drops from \(\sim 5\) % to 0.03 %, for CPU times of only 40 minutes and 3 hours, respectively. For comparison, the latter accuracy would take about 200 years with MD on the same machine. The evaluation of larger bubbles becomes increasingly difficult for MD. Bubbles of size 20 showed statistical errors \(>100\) % with MD, while these were only slightly increased for the integration method. It is interesting to note that the 10 bp size is more or less the upper limit for which one can get sufficient accuracy using MD, while it is a lower limit where its relation to biophysics becomes interesting Murakamiscience , stressing the importance of our method.
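The convergence test described above can be mimicked on a toy problem. The sketch below compares trapezoidal grids with different steps \(\Delta y\) for a single-site Morse Boltzmann integral, a stand-in for the full transfer-integral calculation; the parameters and integration bounds are illustrative.

```python
import math

D, a = 0.04, 4.45        # illustrative Morse parameters (eV, 1/A)
beta = 1.0 / 0.0259      # 1/kT at ~300 K, in 1/eV

def Z_single(dy, y_min=-0.5, y_max=5.0):
    """Trapezoidal single-site partition integral with step dy; a stand-in
    for the transfer-integral evaluation over the whole chain."""
    n = int(round((y_max - y_min) / dy))
    ys = [y_min + i * dy for i in range(n + 1)]
    w = [math.exp(-beta * D * (math.exp(-a * y) - 1.0) ** 2) for y in ys]
    return dy * (0.5 * w[0] + sum(w[1:-1]) + 0.5 * w[-1])

ref = Z_single(0.025)    # near-exact reference grid, as in the text
for dy in (0.1, 0.05):
    rel_err = abs(Z_single(dy) - ref) / ref
    print(dy, rel_err)   # the systematic error shrinks with the step
```

The trapezoidal error falls off as \(\Delta y^{2}\), so halving the step roughly quarters the systematic error, at only double the cost per integral.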
Finally, we calculated the \(P_{i}\) probabilities for the adenovirus major late promoter (AdMLP) and a control non-promoter sequence (Fig. 3). Here too, our results violate the TSS conjecture. The TSS shows some opening, but cannot be assigned on the basis of the bubble profile alone. Surprisingly, even the control sequence shows significant opening probabilities.
To conclude, we have shown that MD (or MC) has difficulty giving a precise indication of preferential opening sites. In particular, information on large bubbles is not easily accessible using standard methods. The method presented here is orders of magnitude faster than MD without imposing additional approximations. Using this method, we showed that the TSS is generally not the most dominant opening site for bubble formation. These results contradict previous conjectures based on less accurate simulation techniques. As for the question in the title: definitely, there are still many issues to be solved. Still, there is some chance that bubble dynamics, rather than bubble statics, is indicative of the TSS. Speculatively, the previously found correlation could be justified using this argument. However, a statistically significant foundation for this is lacking, and it is highly questionable whether the PBD model and this type of Langevin dynamics can give a sufficiently accurate description of the dynamics of DNA. The PBD model could, and probably should, be improved to give a correct representation of the subtle sequence-specific properties of DNA. Base-specific stacking interactions seem to give better agreement with some direct experimental observations Santiago . Also, the development of new experimental techniques is highly desirable. Our method is not limited to the PBD model or to bubble statistics only, but works whenever the proper factorization (6) can be applied. Therefore, we believe that the technique presented here will remain of importance for future investigations of bubbles in DNA and their biological consequences.
We thank Dimitar Angelov and David Dubbeldam for fruitful discussions. TSvE is supported by a Marie Curie Intra-European Fellowship (MEIF-CT-2003-501976) within the 6th European Community Framework Programme. SCL is supported by the Spanish Ministry of Science and Education (FPU-AP2002-3492), project BFM 2002-00113 DGES and DGA (Spain).
## References
* (1) U. Dornberger, M. Leijon, and H. Fritzsche, J. Biol. Chem. **274**, 6957 (1999).
* (2) C. J. Benham, Proc. Natl. Acad. Sci. USA **90**, 2999 (1993); C. J. Benham, J. Mol. Biol. **255**, 425 (1996).
* (3) C. H. Choi _et al._, Nucleic Acids Res. **32**, 1584 (2004); G. Kalosakas _et al._, Europhys. Lett. **68**, 127 (2004).
* (4) M. Peyrard and A. R. Bishop, Phys. Rev. Lett. **62**, 2755 (1989).
* (5) T. Dauxois, M. Peyrard, and A. R. Bishop, Phys. Rev. E **47**, 684 (1993).
* (6) A. Campa and A. Giansanti, Phys. Rev. E **58**, 3585 (1998).
* (7) K. S. Murakami _et al._, Science, **296**, 1285 (2002).
* (8) S. Cuesta-Lopez _et al._, to be published.
#### Atmospheric parameters
\(T_{\rm{eff}}\) uncertainties are the most important contributor to the abundance uncertainties for all elements except C and S (see below). To estimate a realistic \(T_{\rm eff}\) uncertainty we compare \(T_{\rm eff}\) derived from the spectroscopy with that derived using the relationship between \(T_{\rm eff}\) and \(V-K\) colour index from Alonso, Arribas & Martinez-Roger (1996). A small correction was made to the 2MASS \(K\) photometry (using formulae in Carpenter 2001) in order to convert it to the Carlos Sanchez Telescope (TCS) system used by Alonso et al. A reddening \(E(V-K)=0.055\) is assumed, corresponding to the \(E(B-V)=0.02\) determined for the cluster by Westerlund et al. (1988). We chose this colour index because (i) the data are available for all our targets; (ii) it is very sensitive to \(T_{\rm eff}\); (iii) it is almost independent of the photospheric composition and gravity; and (iv) it is unlikely to be significantly affected by chromospheric activity (Stauffer et al. 2003). This latter point could be important in a young cluster like Blanco 1, where chromospheric activity can lead to blue excesses and possible problems when using \(B-V\) or Stromgren photometry to determine \(T_{\rm eff}\) or \(\log g\).
\(T_{\rm eff}\) values derived from the \(V-K\) photometry and from the spectroscopy are listed in Table 1. A comparison of the two \(T_{\rm eff}\) determination methods yields a mean difference (\(T_{V-K}-T_{\rm spec}\)) of \(-13\pm 36\) K with a standard deviation of 102 K. As the precision of the photometry leads to uncertainties of only \(\simeq 50\) K in \(T_{V-K}\), most of this scatter must be due to \(T_{\rm spec}\) uncertainties of \(\simeq 100\) K. The agreement between the two scales lends some confidence that there are
\begin{table} \begin{tabular}{l l l l l l l l l} \hline & W64 & ZS58 & ZS141 & W113 & W60 & W63 & W8 & W38 \\ \hline Fe & 0.07 & 0.06 & 0.06 & 0.07 & 0.06 & 0.05 & 0.05 & 0.05 \\ Li & 0.11(0.07) & 0.11(0.07) & 0.10(0.06) & 0.09(0.05) & 0.08(0.06) & 0.07(0.04) & 0.07(0.04) & 0.07(0.05) \\ C & 0.10(0.14) & 0.09(0.12) & 0.09(0.13) & 0.09(0.13) & 0.08(0.09) & 0.07(0.10) & 0.07(0.10) & 0.07(0.09) \\ O & 0.12(0.16) & 0.11(0.14) & 0.14(0.18) & 0.09(0.14) & 0.06(0.09) & 0.06(0.10) & 0.06(0.09) & 0.06(0.09) \\ Si & 0.03(0.07) & 0.02(0.05) & 0.02(0.05) & 0.02(0.05) & 0.03(0.05) & 0.04(0.03) & 0.04(0.04) & 0.03(0.04) \\ S & 0.10(0.14) & 0.09(0.12) & 0.08(0.12) & 0.07(0.12) & 0.05(0.08) & 0.05(0.09) & 0.05(0.08) & 0.05(0.07) \\ Mg & 0.06(0.06) & 0.06(0.06) & 0.07(0.07) & 0.06(0.05) & 0.06(0.06) & 0.05(0.04) & 0.06(0.05) & 0.05(0.05) \\ Ca & 0.10(0.07) & 0.09(0.06) & 0.09(0.05) & 0.09(0.05) & 0.06(0.05) & 0.06(0.03) & 0.06(0.04) & 0.06(0.04) \\ Ti & 0.11(0.07) & 0.10(0.07) & 0.10(0.05) & 0.10(0.05) & 0.08(0.06) & 0.07(0.04) & 0.07(0.04) & 0.07(0.05) \\ Ni & 0.05(0.02) & 0.06(0.02) & 0.06(0.02) & 0.08(0.06) & 0.06(0.04) & 0.06(0.02) & 0.06(0.03) & 0.06(0.04) \\ \hline \end{tabular} \end{table} Table 4: Abundance uncertainties due to estimated atmospheric uncertainties.The quadratic sum of uncertainties due to effective temperature (\(\pm\)100 K), \(\log g\) (\(\pm\)0.2), [M/H] (\(\pm\)0.1 dex) and microturbulence (\(\pm\)0.2 km s\({}^{-1}\)) are presented for each star. The first number is the net uncertainty in [X/H], the number in brackets is the net uncertainty in [X/Fe].
\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline ID & \(A\)(Li) & \(A\)(Li) & n & [C/H] & n & [O/H] & [O/H] & n & [Mg/H] & n & [Si/H] & n \\ & LTE & NLTE & & & & LTE & NLTE & & & & & \\ \hline W64 & 2.69\(\pm\)0.05 & 2.65 & 1 & \(-0.13\pm 0.08\) & 3 & \(+0.03\pm 0.06\) & \(-0.09\) & 3 & \(-0.17\pm 0.04\) & 4 & \(-0.14\pm 0.02\) & 7 \\ ZS58 & 3.31\(\pm\)0.06 & 3.09 & 1 & \(-0.05\pm 0.10\) & 1 & \(+0.05\pm 0.05\) & \(-0.10\) & 2 & \(-0.07\pm 0.08\) & 4 & \(-0.16\pm 0.04\) & 6 \\ ZS141 & 2.89\(\pm\)0.05 & 2.81 & 1 & \(-0.16\pm 0.17\) & 3 & \(+0.02\pm 0.06\) & \(-0.11\) & 3 & \(+0.01\pm 0.05\) & 3 & \(-0.06\pm 0.02\) & 7 \\ W113 & 3.14\(\pm\)0.05 & 3.00 & 1 & \(-0.17\pm 0.01\) & 2 & \(+0.01\pm 0.02\) & \(-0.12\) & 3 & \(-0.07\pm 0.02\) & 4 & \(-0.04\pm 0.02\) & 5 \\ W60 & 3.23\(\pm\)0.12 & 3.11 & 1 & \(-0.01\pm 0.06\) & 3 & \(+0.40\pm 0.06\) & \(+0.24\) & 3 & \(-0.10\pm 0.05\) & 3 & \(+0.05\pm 0.05\) & 2 \\ W63 & 3.13\(\pm\)0.07 & 3.03 & 1 & \(-0.15\pm 0.01\) & 3 & \(+0.18\pm 0.02\) & \(+0.01\) & 3 & \(-0.14\pm 0.02\) & 4 & \(-0.05\pm 0.04\) & 6 \\ W8 & 2.60\(\pm\)0.13 & 2.54 & 1 & \(+0.13\pm 0.02\) & 3 & \(+0.35\pm 0.05\) & \(+0.19\) & 3 & \(-0.06\pm 0.05\) & 3 & \(+0.03\pm 0.04\) & 7 \\ W38 & 3.06\(\pm\)0.10 & 2.97 & 1 & \(+0.00\pm 0.02\) & 3 & \(+0.29\pm 0.01\) & \(+0.10\) & 3 & \(-0.14\pm 0.02\) & 4 & \(-0.07\pm 0.04\) & 7 \\ \(A\)(X)\({}_{\odot}\) & & & & 8.51 & 3 & \(8.89\pm 0.01\) & & 3 & 7.54 & 4 & 7.62 & 7 \\ \hline \end{tabular} \begin{tabular}{l c c c c c c c c c c} \hline ID & [S/H] & n & [Ca/H] & n & [Ti/H] & n & [Fe/H] & n & [Ni/H] & n \\ \hline W64 & \(+0.04\pm 0.09\) & 1 & \(-0.17\pm 0.02\) & 3 & \(-0.16\pm 0.03\) & 4 & \(-0.04\pm 0.02\) & 33 & \(-0.20\pm 0.03\) & 10 \\ ZS58 & \(+0.24\pm 0.14\) & 2 & \(-0.05\pm 0.04\) & 3 & \(+0.03\pm 0.03\) & 5 & \(-0.05\pm 0.02\) & 24 & \(-0.26\pm 0.01\) & 6 \\ ZS141 & \(+0.00\pm 0.08\) & 1 & \(-0.05\pm 0.04\) & 3 & \(+0.00\pm 0.05\) & 4 & \(+0.09\pm 0.02\) & 34 & \(-0.10\pm 0.02\) & 9 \\ W113 & 
\(-0.12\pm 0.06\) & 2 & \(+0.04\pm 0.04\) & 3 & \(-0.04\pm 0.05\) & 5 & \(+0.09\pm 0.02\) & 35 & \(-0.03\pm 0.01\) & 9 \\ W60 & \(+0.02\pm 0.02\) & 2 & \(-0.15\pm 0.05\) & 2 & \(-0.18\pm 0.12\) & 1 & \(+0.11\pm 0.03\) & 13 & \(-0.15\pm 0.05\) & 6 \\ W63 & \(-0.02\pm 0.01\) & 2 & \(-0.02\pm 0.03\) & 3 & \(-0.12\pm 0.03\) & 3 & \(+0.05\pm 0.02\) & 24 & \(-0.17\pm 0.03\) & 11 \\ W8 & \(+0.10\pm 0.04\) & 2 & \(+0.02\pm 0.03\) & 3 & \(+0.07\pm 0.06\) & 3 & \(+0.09\pm 0.02\) & 25 & \(-0.03\pm 0.03\) & 10 \\ W38 & \(-0.07\pm 0.01\) & 2 & \(-0.11\pm 0.03\) & 3 & \(-0.01\pm 0.10\) & 2 & \(-0.03\pm 0.02\) & 24 & \(-0.17\pm 0.04\) & 9 \\ \(A\)(X)\({}_{\odot}\) & 7.34 & 2 & 6.33 & 3 & 4.90 & 5 & \(7.44\pm 0.01\) & 43 & 6.23 & 11 \\ \hline \end{tabular} \end{table} Table 3: Abundances for target stars. Errors quoted are the standard errors in the measured abundances (\(\sigma/\surd n\)). Atmospheric uncertainties (detailed in Table 4) should also be considered. Columns labelled “n” identify the number of features used when obtaining each value. Solar abundances were determined for iron and oxygen, due to the availability of accurate laboratory gfs. In all other cases the solar abundance was fixed at the values listed in the last row. Abundances are quoted differentially with respect to the Sun apart from those for Li. LTE abundances are given and NLTE abundances are also listed for Li and O as discussed in the text. |
# Elemental abundances in the Blanco 1 open cluster
A. Ford\({}^{1}\), R.D. Jeffries\({}^{2}\) and B. Smalley\({}^{2}\)
\({}^{1}\)CSPA/SPME, Building 28M, Monash University, VIC 3800, Australia
\({}^{2}\)Astrophysics Group, School of Chemistry and Physics, Keele University, Keele, Staffordshire ST5 5BG, United Kingdom
(Submitted August 2005)
###### Abstract
High resolution spectroscopy is used to determine the detailed chemical abundances of a group of eight F- and G-type stars in the young open cluster Blanco 1. An average [Fe/H] of \(+0.04\)\(\pm 0.02\) (internal error) \(\pm 0.04\) (external error) is found, considerably lower than a previous spectroscopic estimate for this cluster. The difference is due mainly to our adoption of significantly cooler temperatures which are consistent with both photometric and spectroscopic constraints. Blanco 1 exhibits sub-solar [Ni/Fe] (\(-0.18\)\(\pm 0.01\)\(\pm 0.01\)), [Si/Fe] (\(-0.09\)\(\pm 0.02\)\(\pm 0.03\)), [Mg/Fe] (\(-0.14\)\(\pm 0.02\)\(\pm 0.03\)) and [Ca/Fe] (\(-0.09\)\(\pm 0.03\)\(\pm 0.03\)); ratios which are not observed among nearby field stars. The material from which Blanco 1 formed may not have been well mixed with interstellar matter in the galactic disc, which tallies with its current location about 240 pc below the galactic plane. A simultaneous deficit of Ni and alpha elements with respect to Fe is hard to reconcile with most published models of yields from supernovae of types Ia and II. The revised abundances for Blanco 1 indicate that overall radiative opacities in its stars, and hence convective zone properties at a given mass, are similar to those in the Pleiades at approximately the same age. This can explain a previous observation that the Li depletion patterns of G- and K-type stars in the two clusters are indistinguishable. The lower overall metallicity of Blanco 1 now makes it less attractive as a target for discovering transiting, short period exoplanets.
keywords: stars: abundances - stars: late-type - open clusters and associations: individual: Blanco 1
## 1 Introduction
Open clusters are excellent laboratories for testing our understanding of stellar structure. Their numerous stars share common ages and distances, reducing many uncertainties associated with field-star studies. Abundances for elements other than Fe and Li are rarely available in open clusters, yet these have a profound bearing on stellar structure calculations.
A case in point is the Blanco 1 cluster which has an age similar to, or a little younger than, the Pleiades (50-100 Myr - Perry, Walter & Crawford; Panagi et al. 1994). Edvardsson et al. (1995, hereafter E95) claimed [Fe/H] =\(+0.23\) for the cluster on the basis of spectroscopy of several F stars. E95 discussed this high metallicity in terms of the unusual location of Blanco 1. The cluster is 240 pc below the galactic plane and may have crossed the plane on one or more occasions. The apparent metal-rich status of the cluster has led to a number of investigations that have sought to isolate the composition dependence of various physical phenomena. Pillitteri et al. (2003, 2004) have used Blanco 1 to determine whether metallicity influences the coronal X-ray losses from low-mass stars with convective envelopes. Blanco 1 may well be a fruitful location to search for transiting exoplanets, given the established relationship between stellar metallicity and the frequency of short-period exoplanets around field stars (e.g. Santos, Israelian & Mayor 2004 and references therein).
Jeffries & James (1999, hereafter JJ99) found that the Li depletion pattern with \(T_{\rm eff}\) among the G/K stars of Blanco 1 could not be distinguished from the similarly aged Pleiades, which has [Fe/H]\(=-0.03\) (Boesgaard & Friel 1990). This result contradicts the strong metallicity dependence predicted by models of pre-main-sequence (PMS) Li depletion, implying that some unknown mechanism inhibits PMS Li depletion in the Blanco 1 stars and that some non-convective mixing process operates in main sequence stars to ensure that Li-depletion in a metal-rich ZAMS cluster like Blanco 1 could approach that seen in the Hyades after 700 Myr.
A possible escape route for the "standard" PMS models is if elements, other than Fe, that also form a significant source of opacity in PMS stars (particularly oxygen), are _underabundant_ in the Blanco 1 stars compared to a solar mixture. This would mean that, overall, the opacities in the outer envelope of the Blanco 1 stars could be similar to those |
no major systematic uncertainties in the temperatures we have used and no problems with our ionization balance temperatures caused by possible NLTE overionization effects in the Fe ii lines - which may become more apparent in cooler stars (\(<5500\) K - see Schuler et al. 2003; Allende-Prieto et al. 2004). In addition we have checked plots of abundance from the Fe i lines versus lower excitation potential and none show any significant trends that would indicate a \(T_{\rm eff}\) error of more than \(\pm 150\) K.
Having used a cluster isochrone in the \(T_{\rm eff}-\log g\) plane to determine \(\log g\), then an uncertainty in \(T_{\rm eff}\) naturally leads to an uncertainty in \(\log g\). In fact this uncertainty is small, but we choose (conservatively) to allow \(\log g\) to vary by \(\pm 0.2\). Because the isochrone \(\log g\) varies very slowly with \(T_{\rm eff}\), the uncertainties in \(T_{\rm eff}\) and the assumed \(\log g\) errors are essentially uncorrelated. The \(\log g\) uncertainty is dominant for the C and S abundance determinations, but less important than the \(T_{\rm eff}\) uncertainties for all the other elements. We adopt conservative microturbulence uncertainties of \(\pm 0.2\) km s\({}^{-1}\) and atmospheric metallicity uncertainties of \(\pm 0.1\) dex. These contribute 0.02-0.03 dex abundance uncertainty to the O, C and Fe abundances but add a negligible amount to the overall error budget for the other elements.
Table 4 details the quadratic sum of uncertainties in [X/H] due to the atmospheric parameter uncertainties, making the assumption that the different sources of error are independent. These have been estimated by repeating the abundance analysis for each star/element after perturbing their atmospheric parameters. We have also estimated total uncertainties on [X/Fe]. These are smaller or larger than corresponding uncertainties in [X/H] depending on whether changes in the atmospheric parameters cause changes in the derived abundances which are in the same (e.g. Ca, Ni) or contrary (e.g. C, O, S) direction to those in the Fe abundance. The errors in Table 4 should be combined in quadrature with those quoted in Table 3 in order to obtain the overall (internal) errors on the abundances for each star.
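The quadrature combination used for Table 4 is straightforward. A minimal sketch, in which the per-parameter abundance shifts are hypothetical stand-ins for those obtained by re-running the analysis with perturbed atmospheric parameters:

```python
import math

def quadrature_sum(shifts):
    """Combine independent abundance shifts (in dex) in quadrature."""
    return math.sqrt(sum(s * s for s in shifts))

# Hypothetical shifts in [X/H] obtained by perturbing Teff (+/-100 K),
# log g (+/-0.2), [M/H] (+/-0.1 dex) and microturbulence (+/-0.2 km/s)
# one at a time and repeating the abundance analysis:
shifts = {"Teff": 0.06, "logg": 0.02, "[M/H]": 0.01, "xi": 0.03}
print(round(quadrature_sum(shifts.values()), 3))  # 0.071
```

For [X/Fe] the same combination is applied to the differential shifts, which is why a parameter that moves X and Fe together (e.g. for Ca or Ni) yields a smaller bracketed error, while one that moves them oppositely (e.g. for C, O, S) yields a larger one.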
### Mean cluster abundances
With the internal uncertainties established, Table 5 lists the weighted mean abundances for Blanco 1 in the form of [X/H] and [X/Fe], using all eight stars and the quadratic sum of the uncertainties presented in Tables 3 and 4. We quote the standard errors in the weighted mean and also the reduced chi-squared (for 7 degrees of freedom) of the weighted mean fitted to the data and the probability that a chi-squared of this size could arise given the quoted errors. These results are graphically presented in Fig. 1.
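The weighted-mean statistics quoted in Table 5 can be reproduced directly. As an illustration, the sketch below applies them to the [Fe/H] values of Table 3, using the Table 4 atmospheric errors alone (the small internal errors of Table 3 are neglected for brevity, so the numbers are only approximate).

```python
def weighted_mean_stats(x, sigma):
    """Weighted mean, its standard error, and the reduced chi-squared
    (n - 1 degrees of freedom) of n measurements x with uncertainties sigma."""
    w = [1.0 / s ** 2 for s in sigma]
    W = sum(w)
    mean = sum(wi * xi for wi, xi in zip(w, x)) / W
    sem = W ** -0.5
    chi2 = sum(wi * (xi - mean) ** 2 for wi, xi in zip(w, x))
    return mean, sem, chi2 / (len(x) - 1)

# [Fe/H] per star (Table 3), with the Table 4 atmospheric errors:
x = [-0.04, -0.05, 0.09, 0.09, 0.11, 0.05, 0.09, -0.03]
s = [0.07, 0.06, 0.06, 0.07, 0.06, 0.05, 0.05, 0.05]
mean, sem, red_chi2 = weighted_mean_stats(x, s)
print(round(mean, 3), round(sem, 3), round(red_chi2, 2))  # 0.039 0.02 1.3
```

A reduced chi-squared near unity, as here for Fe, indicates that the star-to-star scatter is consistent with the quoted per-star uncertainties.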
For Fe, C, Mg, S, Ca, Ti and Ni the scatter in the [X/H] abundance measurements is consistent with the estimated uncertainties, with reasonably low reduced chi-squared values. This lends confidence in our methods and uncertainty estimates for these elements and also suggests that any star-to-star scatter of abundances within the cluster is smaller than the uncertainties estimated for each star. However, there are three elements (Li, O, Si) where a high reduced chi-squared is found. This could indicate either (i) that the elemental abundance genuinely varies from star-to-star, (ii) that the abundance uncertainties have been underestimated or (iii) that there is an apparent trend of abundance with \(T_{\rm eff}\) arising from an inadequate treatment of the atmosphere, NLTE effects or the temperature scale.
For Li it is very likely that explanation (i) applies. Li is known to be depleted in many cooler (\(T_{\rm eff}<5800\) K) stars among Blanco 1 and the similarly aged Pleiades (e.g. Soderblom et al. 1993; JJ99). This could account for low Li abundances in ZS141 and W64. Of more interest is the low Li abundance of W8 with a \(T_{\rm eff}\simeq 6500\) K and a NLTE \(A\)(Li) which is about 0.5 dex lower than the three other stars in the sample with similar \(T_{\rm eff}\). This star also has a lower Li abundance than any similar stars in the Pleiades. It is tempting to speculate that this marks the development of the "Boesgaard gap" (Boesgaard & Tripicco 1986) of Li-depleted F-stars that is clearly seen in the older (700 Myr) Hyades cluster. Steinhauer & Deliyannis (2004) claim that the gap starts to form as early as 150 Myr on the basis of Li-depleted F-stars in the open cluster M35. However, the membership of W8 in Blanco 1 may still be problematic (see section 4.1). If the membership of W8 can be confirmed and further examples of Li-depleted F-stars were found in Blanco 1 this would probably indicate that the cluster is a little older than the Pleiades.
The O abundances of the Blanco 1 stars show a clear trend with \(T_{\rm eff}\). The group of 4 cooler stars have a mean [O/H]\({}_{\rm NLTE}\) that is \(0.20\pm 0.05\) dex _lower_ than the 4 stars with \(T_{\rm eff}>6000\) K and this is responsible for the high reduced chi-squared value in Table 5. Blanco 1 is a young cluster and its stars are magnetically active as a consequence of rapid rotation and dynamo action. It has been noted by previous workers that it can be difficult to obtain oxygen abundances in chromospherically active stars using the O i triplet lines. Spuriously high oxygen abundances might be obtained, which are not adequately dealt with by NLTE corrections similar to those we adopt here. Such abundance over-estimates seem to increase with chromospheric activity (Morel & Micela 2004; Schuler et al. 2004). However, both Morel & Micela and Schuler et al. suggest that this effect, (which is possibly attributable to overionisation/excitation in the upper atmosphere), _does not_ seriously compromise oxygen abundances determined from the triplet lines for stars with \(T_{\rm eff}>5500\) K. This suggestion needs further testing with high signal-to-noise observations of the weak [Oi] 6300A feature in chromospherically active F and G dwarfs. We note here that the sample stars presented in this work are mostly slow rotators (although a low inclination angle could mask pole-on rapid rotators), and that the derived NLTE oxygen abundances for the cooler stars are _lower_ than for the hotter stars, suggesting chromospheric activity is not to blame for the trend we see. In any case it seems that the NLTE corrections are not entirely satisfactory and we judge it prudent to add a further systematic uncertainty of \(\pm 0.1\) dex to the mean cluster O abundances.
The Si abundances appear to follow a similar \(T_{\rm eff}\) trend to the O abundances, albeit a less significant one. Adding a slope to the abundance versus \(T_{\rm eff}\) relationship yields a gradient only 2 sigma above zero (compared with 4 sigma for O). We note that the strengths and excitation potentials of the Si lines we have used are similar to, although on average higher than, those of Mg, for which a small chi-squared and no trend with \(T_{\rm eff}\) are observed. Instead it could be that the Si abundance measurements, which are the most precise of all the elements considered here, highlight additional sources
- which E95 discount as a cluster member on the basis of its abundance!) and W63 ([Fe/H]\(_{\rm E95}=+0.21\pm 0.01\)) - but in our analysis these stars are cooler by 225 K, 150 K and 420 K respectively. If E95 had adopted the atmospheric parameters we have used, then their [Fe/H] for W8 and W63 would have been in excellent agreement with our results. The [Fe/H] of W60 would have been about 0.15 dex lower, but E95 explain that the rotational broadening of this star may have caused them to underestimate the EWs (and hence abundance), a problem which our spectral synthesis technique avoids.
In deciding what the overall abundances are in Blanco 1 the question of the adopted \(T_{\rm eff}\) scale is crucial. E95 chose to use temperatures indicated by the Stromgren photometry and the Edvardsson et al. (1993) calibration despite the fact that their own spectroscopy - in the form of trends of abundance versus excitation potential and the ionization balance - indicated that significantly lower temperatures were warranted. Other authors (R03) have also noted that the Edvardsson et al. (1993) \(T_{\rm eff}\) scale is 100-150 K hotter than that deduced from Stromgren photometry and the Alonso et al. (1996) or Saxner & Hammarback (1985) calibrations. We further note that for the Blanco 1 stars considered here, and by E95, that temperatures found from Stromgren photometry and the Alonso et al. (1996) calibrations are systematically hotter than those based on \(V-K\) by a further \(\simeq 150\) K, but are less precise (about \(\pm 120\) K) and are dependent on metallicity and gravity. For these reasons and because we are able to achieve ionisation balance at a similar temperature to that indicated by the \(V-K\) photometry, we believe our abundances are more robust. We do concede that an upward correction to our \(T_{\rm eff}\) scale of \(\simeq 100\) K is still possible and the consequences of such a shift were examined in section 4.4.
### Comparison with Jeffries & James 1999
Four of our cooler targets (W64, W113, ZS58, ZS141) were observed by JJ99. The temperatures derived there were based on the \(B-V\)/\(T_{\rm eff}\) calibration of Bohm-Vitense (1981) with an extra metallicity-dependent term. These temperatures are only 16-116 K hotter than the spectroscopically derived temperatures in this paper, but had JJ99 used the metallicities derived here rather than _assuming_ a mean cluster metallicity of \(+0.14\) then even better agreement would be obtained. Only Li abundances were calculated by JJ99, based on curves of growth presented by Soderblom et al. (1993) and also corrected for NLTE effects using the code of Carlsson et al. (1994). The difference between the JJ99 NLTE Li abundances and those in Table 3 is \((-0.01\pm 0.05)\) dex.
## 5 Discussion
### The abundance mix in Blanco 1
The main results of our analysis are:
1. The overall metallicity is much lower than found in the previous study by E95. This is mainly due to our adoption of a lower \(T_{\rm eff}\) scale.
2. We confirm the tentative findings of E95, that [Ni/Fe] and [Si/Fe] are significantly sub-solar. Furthermore, after considering the possible sources of internal and external error, we also find that [Mg/Fe] and [Ca/Fe] are sub-solar. The same sub-solar trend is indicated for [C/Fe], [S/Fe] and [Ti/Fe] but at a lower level of significance. In fact the only solar abundance ratio is that for [O/Fe], but the error bar is large enough that it may also be consistent with the ratios for the other alpha elements.
The abundance pattern in Blanco 1 is very unusual. [Ni/Fe], [Mg/Fe], [Si/Fe] and [Ca/Fe] are all derived from multiple lines with moderate excitation potentials and line strengths. These ratios are quite robust to systematic errors and have been measured in a similar way in several large studies of field dwarfs (Edvardsson et al. 1993; Chen et al. 2002; R03; Allende-Prieto 2004). Yet none of these studies contain _any_ stars with [Ni/Fe] or [Mg/Fe] as low as we find for Blanco 1, and stars that are underabundant in the other alpha elements are rare. On the other hand there is at least one other open cluster, M34, which has [Ni/Fe] \(=-0.12\pm 0.02\) and [Mg/Fe] =\(-0.10\pm 0.02\) derived using spectroscopic methods similar to those used here (Schuler et al. 2003).
The underabundance of both Ni and the alpha elements in Blanco 1 may perhaps be better considered as an excess of Fe and leaves us with two puzzles. The first is that R03 claim that at a given [Fe/H] there is only a very narrow (\(\leq 0.05\) dex) spread in [Ni/Fe] and the other abundance ratios discussed here. This is evidence that at any given time, the ISM in star forming regions of the galactic disc has been thoroughly mixed and is locally homogeneous. Blanco 1 is quite exceptional from this point of view; the gas from which it formed may not have been well mixed with the bulk of the galactic disc ISM. It should be noted however that the spectroscopic surveys of field F and G stars are quite limited in the distances that they probe, although they span a range of galactic birth site radii of a few kpc.
The second puzzle is the nature of the abundance anomalies in Blanco 1. The main sources of Fe in the ISM are supernovae of types Ia and II or possibly hypernovae. SN II and hypernovae invariably produce super-solar yields of [Si/Fe] (and the other alpha elements). The relative yields of iron-peak elements are somewhat dependent on the detailed supernova physics but recent models suggest that the Ni/Fe ratio is somewhere between 0.75 and 2.0 times the solar value (Nakamura et al. 2001; Hoffmann, Woosley & Weaver 2001). On the other hand SN Ia explosions produced by accretion onto a massive white dwarf produce little Mg but have ejecta with Ni/Fe ratios that are greater than 1.5 times the solar value (e.g. Iwamoto et al. 1999; Travaglio et al. 2004). It is therefore difficult to understand how the material from which Blanco 1 formed could appear depleted of alpha elements _and_ Ni with respect to Fe.
The solution to these puzzles (as first noted by E95) could be connected with a very unusual formation history for Blanco 1. The cluster lies at high galactic latitude (\(b=-79^{\circ}\)) and is some 240 pc below the galactic plane. This is far in excess of the maximum scale height achieved by similarly young field stellar populations or open clusters. E95 speculated that Blanco 1 may have formed in the shocked gas of a high velocity cloud during a collision with the galactic plane ISM (Comeron & Torra 1994). In any case, its undoubtedly peculiar trajectory may mean that the material from which Blanco 1 originated had travelled some distance |
# Selective advantage for multicellular replicative strategies: A two-cell example
Emmanuel Tannenbaum
etannenb@gmail.google.com Ben-Gurion University of the Negev, Be'er-Sheva, Israel 84105
###### Abstract
This paper develops a quasispecies model where cells can adopt a two-cell survival strategy. Within this strategy, pairs of cells join together, at which point one of the cells sacrifices its own replicative ability for the sake of the other cell. We develop a simplified model for the evolutionary dynamics of this process, allowing us to solve for the steady-state using standard approaches from quasispecies theory. We find that our model exhibits two distinct regimes of behavior: At low concentrations of limiting resource, the two-cell strategy outcompetes the single-cell survival strategy, while at high concentrations of limiting resource, the single-cell survival strategy dominates. The single-cell survival strategy becomes disadvantageous at low concentrations of limiting resource because the energetic costs of maintaining reproductive and metabolic pathways approach, and may even exceed, the rate of energy production, leaving little excess energy for the purposes of replicating a new cell. However, if the rate of energy production exceeds the energetic costs of maintaining metabolic pathways, then the excess energy, if shared among several cells, can pay for the reproductive costs of a single cell, leaving energy to replicate a new cell. Associated with the two solution regimes of our model is a localization to delocalization transition over the portion of the genome coding for the multicell strategy, analogous to the error catastrophe in standard quasispecies models. The existence of such a transition indicates that multicellularity can emerge because natural selection does not act on specific cells, but rather on replicative strategies. Within this framework, individual cells become the means by which replicative strategies are propagated. Such a framework is therefore consistent with the concept that natural selection does not act on individuals, but rather on populations.
Multicellularity, slime mold, biofilms, quasispecies, error catastrophe

One of the most interesting questions under investigation in evolutionary biology is the emergence of cooperation and multicellularity in biological systems Kreft and Bonhoeffer (2005); Smith (1998) (and references therein). While the emergence of certain types of cooperative behavior, such as division of labor, is reasonably well understood, the evolution of multicellular organisms is a more difficult question.
With division of labor, a group of cells can more efficiently metabolize environmental resources than if they worked alone, and so it is in each cell's replicative interest to cooperate with other cells. In the case of multicellular organisms, however, certain cells forgo their ability to replicate, so that other cells in the organism can survive and reproduce. This is clearly against the replicative interests of the non-replicating cells, a situation that makes the strategy prone to defections. Indeed, defection from a multicellular survival strategy, otherwise known as cancer, is a common phenomenon in multicellular organisms.
Nevertheless, in certain environments, there must exist selective pressures driving the emergence of multicellular organisms. Perhaps one of the clearest demonstrations of such selective pressures is the existence of the organism _Dictyostelium discoideum_, commonly known as a cellular _slime mold_. The slime mold has been the focus of considerable research (it is an NIH model organism), because it lives at the border between unicellular and multicellular life: When conditions are favorable, the slime mold exists as a collection of free-living, single-celled organisms. However, when the slime mold cells are stressed, say by depletion of some necessary resource, they respond by coalescing into a differentiated, multicellular organism. When conditions improve, the slime mold reproduces by sporulation Britton (2003).
One of the interesting features of the slime mold is that, during the differentiation process, some cells inevitably forgo replication for the sake of the multicellular structure Britton (2003). In this Letter, we attempt to elucidate the selective pressures driving this behavior by considering a highly simplified model motivated by the slime mold life cycle, one which we believe illustrates the underlying principles involved in the emergence of multicellularity. We emphasize, however, that this Letter does not consider the evolutionary dynamics modeling how such behavior could have emerged in the first place. We should also emphasize that, although our model is motivated by the slime mold life cycle, it is believed that the ability to engage in multicellular behavior is ubiquitous amongst single-celled organisms, and may even characterize the organization of bacterial biofilms Kreft and Bonhoeffer (2005).
For our model, we consider a population of organisms whose genomes consist of three distinct genes (or more appropriately, genome regions): (1) A reproduction region, denoted \(\sigma_{R}\), coding for all the various cellular machinery involved in the growth and reproduction of the organism. (2) A metabolism region, denoted \(\sigma_{M}\), coding for all the various cellular machinery involved in procuring food from the environment, and metabolizing it to release the energy required for various cellular processes (as in the metabolism of glucose and the storage of the energy in ATP). (3) A multicellular region, coding for the machinery necessary to implement the two-cell survival strategy. Among the various machinery required to
portion of the genome coding for this strategy undergoes a localization to delocalization transition, analogous to the error catastrophe (it is also similar to a phenomenon known as "survival of the flattest") Tannenbaum and Shakhnovich (2005); Wilke (2001).
Figure 2 shows the various solution regimes as a function of \(r(c)\) and \(p_{S}\). Note that, for a given value of \(p_{S}\), there exists a low-concentration regime where the fraction of cells adopting the two-cell strategy is positive. In this regime, there is a selective advantage for a genome to maintain a functional copy of the multicell switch \(\sigma_{S,0}\). At a critical concentration given by \(r(c)=r(c)_{=}\), resources are sufficiently plentiful that it becomes disadvantageous to instruct a cell to sacrifice its own reproductive ability for the sake of the other one. The reason for this is that, although the average fixed cost per cell is lower with the two-cell strategy, the cost of having to replicate the strategy outweighs the savings in fixed costs when resources are plentiful. Thus, once \(r(c)>r(c)_{=}\), the fraction of cells adopting the two-cell strategy disappears, and the population consists entirely of cells replicating via the single-cell strategy.
If \(r(c)>r(c)_{=}\) for \(p_{S}=1\), then varying \(p_{S}\) at this concentration will never lead to a selective advantage for maintaining a two-cell survival strategy. If \(r(c)<r(c)_{=}\) for \(p_{S}=1\), but \(r(c)>r(c)_{=}\) for \(p_{S}=0\), then for sufficiently large \(p_{S}\) a finite fraction of the cells will replicate via the two-cell strategy. As \(p_{S}\) drops below some critical value, denoted \(p_{S,crit}\), the probability of incorrectly duplicating the strategy becomes sufficiently large that the fraction of cells replicating via the two-cell strategy disappears. This concentration regime is interesting because replicating via the two-cell strategy is actually advantageous, yet the strategy might not be observed because of replication errors.
Finally, once \(r(c)\) drops below \(r(c)_{=}|_{p_{S}=0}\), then \(\kappa_{1}(c)<0\), so as long as \(\frac{1}{2}\kappa_{2}(c)p_{S}>0\), there will exist a selective advantage for maintaining the two-cell strategy in the population. Due to mutation, this will also lead to the maintenance of the single-cell strategy, although this strategy is not self-sustaining in the population.
If, due to saturation, \(r(\infty)\) is finite, then one possibility is that the parameters of our model are such that \(r(\infty)<r(c)_{=}\) at a given \(p_{S}\). Then for this value of \(p_{S}\), there will exist a selective advantage for the two-cell strategy no matter what the external concentration of resource (the cells cannot metabolize the resource sufficiently fast to eliminate the selective advantage for multicellularity).
The results of our model show that natural selection does not act on individual cells, but rather on the survival strategy as encoded in \(\sigma_{S,0}\). Individual cells are then more properly viewed as vehicles by which the multicell strategy is passed on to the next generation. When food resources become limited (or when the cells cannot rapidly metabolize the food resources present), the effective growth rate of the multicell strategy is competitive with the total growth rate of the single-cell strategies, resulting in its preservation in the population. Essentially, it becomes advantageous (from the point of view of the strategy) for several cells to pool their resources together for the purpose of replicating a single cell. When food becomes more plentiful, or when the rate of replication errors reaches a threshold value, the selective advantage for retaining the strategy disappears, and delocalization occurs over the corresponding region of the genome.
A potentially interesting avenue of future research is to determine whether there exist natural bounds on the possible multicellular replicative strategies, and whether it is possible, using thermodynamics and information theory, to connect these natural bounds to basic physicochemical properties of the constituent reaction networks.
###### Acknowledgements.
This research was supported by the Israel Science Foundation.
## References
* Kreft and Bonhoeffer (2005) J.U. Kreft and S. Bonhoeffer, Microbiology **151**, 637 (2005).
* Smith (1998) J.M. Smith, _Evolutionary Genetics: \(2^{nd}\) edition_ (Oxford University Press, New York, NY, 1998).
* Britton (2003) N.F. Britton, _Essential Mathematical Biology_ (Springer-Verlag, London, UK, 2003).
* Tannenbaum and Shakhnovich (2005) E. Tannenbaum and E.I. Shakhnovich, "Semiconservative replication, genetic repair, and many-gened genomes: Extending the quasispecies paradigm to living systems," Physics of Life Reviews, in press.
* Wilke (2001) C.O. Wilke, Nature **412**, 331 (2001).
Figure 2: Illustration of the two solution domains for our multi/single-cell replication model. Below \(r(c)_{=}\), the fraction of cells replicating via the two-cell strategy is a positive fraction of the population. Above \(r(c)_{=}\), a localization to delocalization transition occurs over \(\sigma_{S}\), and the fraction of cells replicating via the two-cell strategy drops to \(0\). |
cells in the replicating cell-pair has active reproductive pathways, the total energy consumption rate is given by \(2(\dot{\rho}-\Delta\dot{\rho})=\dot{\rho}_{R}+2\dot{\rho}_{M}+2\dot{\rho}_{S}\), where \(\Delta\dot{\rho}\equiv(1/2)(\dot{\rho}_{R}-2\dot{\rho}_{S})\). Therefore, the replication time is given by,
\[\tau_{rep}=\frac{1}{2}\frac{\rho+\Delta\rho}{(1-\omega_{M})r(c)-\dot{\rho}+ \Delta\dot{\rho}}\] (3)
yielding a first-order growth-rate constant of
\[\kappa_{2}(c)=2\frac{(1-\omega_{M})r(c)-\dot{\rho}+\Delta\dot{\rho}}{\rho+ \Delta\rho}\] (4)
We should note that we are implicitly assuming in this derivation that the amount of time it takes for two cells to find each other and combine is negligible compared to the replication time. We are also assuming that the costs associated with transporting metabolized resource from one cell to another are negligible. Finally, we assume that the reproductive pathways can process the metabolized resource as fast as it is produced.
We let \(n_{1}\) denote the number of organisms with the single-cell genome. Because we are neglecting the time it takes for two organisms replicating via the two-cell strategy to find each other and to combine, we may assume that all such cells exist in the two-cell state. We therefore define \(n_{2}\) to be the number of such cell-pairs in the system. Then define the total population of cells \(n=n_{1}+2n_{2}\), and population fractions \(x_{1}=n_{1}/n\) and \(x_{2}=1-x_{1}=2n_{2}/n\).
We also assume that cells may generate mutated daughter cells as a result of point-mutations during replication. For simplicity, we assume that replication of the master sequences \(\sigma_{R,0}\) and \(\sigma_{M,0}\) is error-free, so that we do not need to consider cells with faulty reproduction or metabolic pathways (this situation can be created by assuming that the portions of the genomes coding for reproduction and metabolism are short, so the probability of mutations occurring in these regions is negligible). However, we assume that the per-base replication error probability in \(\sigma_{S}\) is given by \(\epsilon\). We let \(L\) denote the length of \(\sigma_{S}\), and define \(\mu=L\epsilon\). We then consider the infinite sequence length limit, while holding \(\mu\) constant. In this limit, the probability of correctly replicating \(\sigma_{S}\) is given by \(p_{S}=e^{-\mu}\). We then have,
\[\frac{dx_{1}}{dt}=(\kappa_{1}(c)-\bar{\kappa}(t))x_{1}+\frac{1}{2 }\kappa_{2}(c)(1-p_{S})x_{2}\] (5) \[\frac{dx_{2}}{dt}=(\frac{1}{2}\kappa_{2}(c)p_{S}-\bar{\kappa}(t)) x_{2}\] (6) \[\frac{dn}{dt}=\bar{\kappa}(t)n\] (7)
where \(\bar{\kappa}(t)=\kappa_{1}(c)x_{1}+\frac{1}{2}\kappa_{2}(c)x_{2}\).
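As a quick numerical illustration of the dynamics (5)-(7), the following Python sketch integrates the population fractions by forward Euler; all parameter values are hypothetical and chosen for illustration only, not taken from the paper. It also checks the finite-length convergence of \(p_{S}=(1-\mu/L)^{L}\) to \(e^{-\mu}\):

```python
import math

# Illustrative parameters (not from the paper): growth-rate constants at a
# fixed resource concentration, and the genome-wide mutation number mu.
k1, k2, mu = 1.0, 3.0, 0.3
pS = math.exp(-mu)                      # infinite-sequence-length limit
# finite-L check of the limit p_S = lim (1 - mu/L)^L
assert abs((1.0 - mu / 1e6) ** 1e6 - pS) < 1e-6
assert 0.5 * k2 * pS > k1               # regime where the two-cell strategy persists

x1, x2 = 0.9, 0.1                       # initial population fractions
dt = 0.01
for _ in range(40000):                  # forward-Euler integration to t = 400
    kbar = k1 * x1 + 0.5 * k2 * x2
    dx1 = (k1 - kbar) * x1 + 0.5 * k2 * (1.0 - pS) * x2
    dx2 = (0.5 * k2 * pS - kbar) * x2
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2

kbar = k1 * x1 + 0.5 * k2 * x2
print(x2, kbar, 0.5 * k2 * pS)          # kbar relaxes to (1/2) kappa_2 p_S
```

For these values the fractions relax to a state with \(x_{2}>0\) and mean growth rate \(\frac{1}{2}\kappa_{2}p_{S}\); note that the Euler step preserves \(x_{1}+x_{2}=1\) exactly.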
The above population fractions will evolve to a steady state Tannenbaum and Shakhnovich (2005), whose properties we can readily determine: The condition that \(dx_{2}/dt=0\) at steady state implies that either \(x_{2}=0\) or \(\bar{\kappa}(t=\infty)=\frac{1}{2}\kappa_{2}(c)p_{S}\). If \(x_{2}=0\), then \(dx_{1}/dt=0\) implies that \(\bar{\kappa}(t=\infty)=\kappa_{1}(c)\).
For a steady-state to be stable to perturbations, we must have \(\bar{\kappa}(t=\infty)\geq\kappa_{1}(c),\frac{1}{2}\kappa_{2}(c)p_{S}\). Therefore, at steady-state we have, \(\bar{\kappa}(t=\infty)=\max\{\frac{1}{2}\kappa_{2}(c)p_{S},\kappa_{1}(c)\}\). Using the formulas for \((1/2)\kappa_{2}(c)p_{S}\) and \(\kappa_{1}(c)\), and assuming that \(\Delta\dot{\rho}>0\), we have that
\[\frac{1}{2}\kappa_{2}(c)p_{S}>\kappa_{1}(c),\mbox{ }x_{2}>0,\mbox { if $0\leq r(c)<r(c)_{=}$}\] (8) \[\frac{1}{2}\kappa_{2}(c)p_{S}<\kappa_{1}(c),\mbox{ }x_{2}=0,\mbox { if $r(c)>r(c)_{=}$}\] (9)
where,
\[r(c)_{=}=\frac{\dot{\rho}}{1-\omega_{M}}\frac{1-p_{S}\frac{\dot{\rho}-\Delta{ \dot{\rho}}}{\dot{\rho}}\frac{\rho}{\rho+\Delta\rho}}{1-p_{S}\frac{\rho}{\rho+ \Delta\rho}}\] (10)
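The crossing of the two growth rates at \(r(c)_{=}\) is easy to check numerically. The Python sketch below uses purely illustrative cost parameters (all numbers hypothetical) and verifies that \(\frac{1}{2}\kappa_{2}(c)p_{S}=\kappa_{1}(c)\) exactly at \(r(c)=r(c)_{=}\), with the two-cell strategy favoured below the crossing and the single-cell strategy above it:

```python
# Energetic parameters, chosen purely for illustration: build costs rho_i,
# fixed (maintenance) costs, the metabolic overhead omega_M, and the
# probability pS of correctly copying the multicell region sigma_S.
rhoR, rhoM, rhoS = 2.0, 1.0, 0.5
rRd, rMd, rSd = 0.4, 0.2, 0.05            # the fixed costs rho_R_dot, etc.
wM, pS = 0.2, 0.9

rho = (1 + wM) * (rhoR + rhoM)            # resource metabolized per new cell
drho = (1 + wM) * rhoS                    # extra cost of the two-cell machinery
rhod = rRd + rMd                          # rho_dot
drhod = 0.5 * (rRd - 2 * rSd)             # Delta rho_dot, assumed positive
assert drhod > 0

k1 = lambda r: ((1 - wM) * r - rhod) / rho                        # Eq. (2)
k2 = lambda r: 2 * ((1 - wM) * r - rhod + drhod) / (rho + drho)   # Eq. (4)

# Critical metabolism rate r(c)_=, Eq. (10)
rc = rhod / (1 - wM) \
     * (1 - pS * (rhod - drhod) / rhod * rho / (rho + drho)) \
     / (1 - pS * rho / (rho + drho))
print(rc, k1(rc), 0.5 * k2(rc) * pS)      # the two growth rates cross at rc
```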
Let \(z_{1,l}\) denote the fraction of the population whose genome \(\sigma_{R,0}\sigma_{M,0}\sigma_{S}\) is such that \(D_{H}(\sigma_{S},\sigma_{S,0})=l\), where \(l>0\). Then, using similar techniques to those found in Tannenbaum and Shakhnovich (2005), it is possible to show that,
\[\frac{dz_{1,l}}{dt}=\frac{1}{2}\kappa_{2}(c)x_{2}\frac{\mu^{l}}{l!}e^{-\mu}+ \kappa_{1}(c)e^{-\mu}\sum_{l^{\prime}=0}^{l-1}\frac{\mu^{l^{\prime}}}{l^{ \prime}!}z_{1,l-l^{\prime}}-\bar{\kappa}(t)z_{1,l}\] (11)
Defining the localization length \(\langle l\rangle\) via,
\[\langle l\rangle_{S}=\sum_{l=1}^{\infty}lz_{1,l}\] (12)
then at steady-state,
\[\langle l\rangle_{S}=\mu\frac{\bar{\kappa}(t=\infty)}{\bar{\kappa}(t=\infty)- \kappa_{1}(c)}\] (13)
which is finite as long as \(\bar{\kappa}(t=\infty)=\frac{1}{2}\kappa_{2}(c)p_{S}>\kappa_{1}(c)\), and \(\infty\) otherwise.
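Equation (13) can also be checked against a direct numerical solution of the steady state of Eq. (11). The Python sketch below (with illustrative parameter values, not taken from the paper) solves the recursion for \(z_{1,l}\) up to a cutoff and compares \(\langle l\rangle_{S}\) with the closed form:

```python
import math

# Illustrative parameters: growth-rate constants at a fixed resource level
# and the mutation number mu, with pS = exp(-mu) as in the text.
k1, k2, mu = 1.0, 3.0, 0.3
pS = math.exp(-mu)
kbar = 0.5 * k2 * pS                    # steady-state mean growth rate
assert kbar > k1                        # localized regime, x2 > 0
x2 = (kbar - k1) / (0.5 * k2 - k1)      # from kbar = k1*x1 + (k2/2)*x2

# Solve the steady state of Eq. (11) recursively for z_{1,l}.
lmax = 400
pois = [math.exp(-mu)]                  # Poisson weights exp(-mu) mu^l / l!
for l in range(1, lmax + 1):
    pois.append(pois[-1] * mu / l)

z = [0.0] * (lmax + 1)
for l in range(1, lmax + 1):
    s = 0.5 * k2 * x2 * pois[l]
    s += k1 * sum(pois[lp] * z[l - lp] for lp in range(1, l))
    z[l] = s / (kbar - k1 * pois[0])    # the l' = 0 term is moved to the lhs

l_avg = sum(l * z[l] for l in range(1, lmax + 1))
print(l_avg, mu * kbar / (kbar - k1))   # numerics vs. the closed form (13)
```

As a consistency check, \(\sum_{l\geq 1}z_{1,l}\) reproduces \(x_{1}=1-x_{2}\), since every single-cell genome carries a mutated \(\sigma_{S}\).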
In other words, once the selective advantage for replicating via the two-cell survival strategy disappears, the
Figure 1: Comparison of the single-cell and the two-cell replication strategies. |
implement the two-cell survival strategy is a switch that causes one of the cells to shut off its reproductive pathways, and to devote itself to metabolizing food from the environment for the sake of the other cell. This part of the genome is denoted by \(\sigma_{S}\).
The full cellular genome is denoted by \(\sigma=\sigma_{R}\sigma_{M}\sigma_{S}\). We assume that there exist master sequences, \(\sigma_{R,0}\), \(\sigma_{M,0}\), and \(\sigma_{S,0}\), corresponding to gene sequences coding for the appropriate enzymes necessary for the proper functioning of the various systems. In this single-fitness-peak approximation, any mutation to these master sequences leads to the loss of function of the corresponding system. A cell for which \(\sigma=\sigma_{R,0}\sigma_{M,0}\sigma_{S,0}\) replicates via a two-cell strategy, whereby it seeks out and joins with another cell with an identical genome. The pathways encoded within \(\sigma_{S,0}\) cause one of the cells to shut off its reproductive pathways, and to devote its metabolic efforts to sustaining the other cell (a possible algorithm that the switch could implement is to instruct a cell to shut off its reproductive pathways if the reproductive pathways of the other cell are on, and to turn on its reproductive pathways if the reproductive pathways of the other cell are off. The only two stable solutions of this algorithm are those where one of the cells has its reproductive pathways on, while the other cell has its reproductive pathways off. Presumably, although the two cells join with both of their reproductive pathways on, random fluctuations will break the symmetry and lead to collapse into one of these equilibrium states).
A cell for which \(\sigma=\sigma_{R,0}\sigma_{M,0}\sigma_{S}\), \(\sigma_{S}\neq\sigma_{S,0}\), replicates independently of the other cells. It is assumed that all other genotypes, with faulty copies of either the reproductive or the metabolic pathways, do not replicate at all.
The cells metabolize a single external resource, which provides both the energy and the raw materials for all the cells' needs. If we let the basic unit of energy be the amount of energy released by metabolism of a set quantity of resource, then up to a conversion factor it is possible to measure all energy and accumulation changes in terms of the resource itself. Of course, because only that quantity of resource that has been metabolized has provided the cell with energy and raw materials, our basic measurement unit becomes the quantity of metabolized resource.
It is assumed that resource is metabolized by each cell via a two-step process: (1) A binding step, whereby the resource binds to certain receptors, which then pass on the resource for metabolism. (2) A metabolism step, whereby the resource bound to the receptors is then metabolized. Assuming each of the steps is an elementary reaction, we obtain a metabolism rate \(r(c)\) of the Michaelis-Menten form \(\alpha c/(1+\beta c)\), where \(c\) denotes the concentration of resource in the environment. Note that this form of the metabolism rate has the property that it reaches a maximal value as the concentration of external resource becomes infinite. This makes sense, since a cell cannot metabolize an external resource at arbitrarily high rates. It should be noted, however, that our expression for \(r(c)\) is not the only one that exhibits this saturation property, but it is one of the simplest expressions possible.
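As a small numerical illustration of this saturation property (with purely hypothetical constants \(\alpha\) and \(\beta\)):

```python
# Michaelis-Menten metabolism rate r(c) = alpha*c/(1 + beta*c); the constants
# alpha and beta below are purely illustrative.
alpha, beta = 2.0, 0.5
r = lambda c: alpha * c / (1.0 + beta * c)

print(r(0.0))                      # no resource, no metabolism
print(r(1.0), r(10.0), r(1000.0))  # monotone approach to saturation
print(alpha / beta)                # r(c) -> alpha/beta as c -> infinity
```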
In order to replicate a cell, the various cellular systems must be replicated. Each system has an associated _build cost_ (measured in units of metabolized resource). Thus, if \(\rho_{R}\), \(\rho_{M}\), and \(\rho_{S}\) denote the build costs of the reproductive, metabolic, and two-cell pathways, respectively, then the total cost required to build a new cell replicating via the single-cell strategy is given by \(\rho_{R}+\rho_{M}\), while the total cost required to build a new cell replicating via the two-cell strategy is given by \(\rho_{R}+\rho_{M}+\rho_{S}\).
In addition to the build costs for the various systems, each system has an associated _fixed cost_, corresponding to the energy and resources required to maintain system function. These fixed costs arise because the various components of the cellular systems have intrinsic decay rates (protein degradation, auto-hydrolysis of mRNAs, etc.), and in the case of switches that have to respond to changes in the external environment or the internal states of the cell, there is a minimal rate of energy consumption associated with measuring ambient conditions.
There is also an _operating cost_ associated with each subsystem, corresponding to energy and resource costs associated with carrying out a given system task. For example, the replication machinery consumes energy in order to process a certain amount of metabolized resource toward the construction of a new cell. The metabolic pathways require energy to break down the external resource (in chemistry, such costs are known as activation barriers).
Let \(\omega_{R}\) denote the cost of replication per unit of metabolized resource incorporated into a new cell, and let \(\omega_{M}\) denote the cost of metabolizing one unit of resource. Then for a cell replicating via the single-cell strategy, the total amount of resource that must be metabolized is given by \(\rho=(1+\omega_{M})(\rho_{R}+\rho_{M})\). The net rate of energy production is given by \((1-\omega_{M})r(c)\). Since replication and metabolism consume energy at a rate given by \(\dot{\rho}\equiv\dot{\rho}_{R}+\dot{\rho}_{M}\), the net rate of energy accumulation is given by \((1-\omega_{M})\alpha c/(1+\beta c)-\dot{\rho}_{R}-\dot{\rho}_{M}\). The replication time is therefore given by,
\[\tau_{rep}=\frac{\rho}{(1-\omega_{M})r(c)-\dot{\rho}}\] (1)
yielding a first-order growth-rate constant of
\[\kappa_{1}(c)=\frac{1}{\tau_{rep}}=\frac{(1-\omega_{M})r(c)-\dot{\rho}}{\rho}\] (2)
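The energy bookkeeping behind Eqs. (1) and (2) can be mimicked by a direct accumulation loop. In this Python sketch (all numbers hypothetical), a cell accumulates metabolized resource at the net rate \((1-\omega_{M})r(c)-\dot{\rho}\) and divides once the build cost \(\rho\) is covered; the simulated division time reproduces Eq. (1):

```python
# A minimal bookkeeping sketch with hypothetical numbers: a cell accumulates
# metabolized resource at the net rate (1 - omega_M)*r(c) - rho_dot and
# divides once the accumulated surplus covers the build cost rho.
wM, rc = 0.2, 1.5            # metabolic overhead omega_M and metabolism rate r(c)
rhod = 0.6                   # fixed maintenance cost rho_dot
rho = 3.6                    # build cost of a daughter cell

net = (1 - wM) * rc - rhod   # net accumulation rate; must be positive to divide
assert net > 0

E, t, dt = 0.0, 0.0, 1e-4
while E < rho:               # integrate dE/dt = net until the build cost is met
    E += net * dt
    t += dt

print(t, rho / net)          # simulated vs. closed-form tau_rep, Eq. (1)
```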
For the two-cell replication strategy, the cell that is replicating in the cell-pair must accumulate a total of \((1+\omega_{M})(\rho_{R}+\rho_{M}+\rho_{S})=\rho+\Delta\rho\) of metabolized resource. The net rate of energy production from both cells is given by \(2(1-\omega_{M})r(c)\). Since only one of the |
## 2 Plaquette formulation and expression for the Jacobian
In this section we give our formulation of the plaquette representation for \(3D\)\(SU(N)\) LGT. A short description of our procedure can be found in [22] for pure gauge theory and in [23] for a theory with fermions. In subsection 2.1 we calculate the plaquette representation for the partition function on a dual lattice. In the second subsection we derive the plaquette representation for some observables.
### Expression for the Jacobian
Let us first give a qualitative explanation of our transformations. Consider the partition function given in Eqs. (2) and (3). To formulate the Bianchi identity related to a given \(3d\)-cube we define, following [17], a vertex \(A\) in this cube and a vertex \(B\) separated from \(A\) by the diagonal, see Fig.1. We extend this assignment to all neighbouring cubes and finally to the whole lattice. Thus, all \(A\) vertices are separated by two lattice spacings, and the same is true for the \(B\) vertices. Next, we take a path connecting the vertices \(A\) and \(B\) by three links \(U_{1}\), \(U_{2}\), \(U_{3}\) as shown in Fig.1. Then the matrix \(V_{c}\) in (2) and (3) entering the Bianchi identity for the cube \(c\) can be presented in the following form
\[V_{c}=\left(\prod_{p\in A}V_{p}\right)\ C\left(\prod_{p\in B}V_{p}\right)\ C^{ {\dagger}}\ .\] (14)
\(\prod_{p\in A}\) means an appropriately ordered product over three plaquettes of the cube attached to the vertex \(A\). The matrix \(C\) defines a parallel transport of vertex \(A\) into vertex \(B\) and equals the product of three link matrices connecting \(A\) and \(B\), see Fig.1
\[C=U_{1}U_{2}U_{3}^{{\dagger}}\ .\] (15)
The connector \(C\) plays a crucial role in the non-abelian Bianchi identity.
Its path has to fit the choices of \(\prod_{p\in A}\) and \(\prod_{p\in B}\). We choose the structure of the connectors as shown in Fig.2.
This \(8\)-cube fragment of the lattice is repeated through the whole lattice by simple translation. As is seen from Fig.2, there are only four different types of connectors. For example, the connector \(B_{1}A\) is the same as \(B_{7}A\). The collection of cubes with the same type of connector, e.g. \(B_{1}A\), forms on the dual lattice a body centred cubic (BCC) lattice with double lattice spacing. There are four different sub-lattices of this type, corresponding to the four types of connectors in Fig.2: \(B_{1}A\), \(B_{2}A\), \(B_{3}A\) and \(B_{4}A\), and therefore four different types of Bianchi identities.
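For orientation, it is useful to recall why no connectors appear in the abelian case (this is what makes the \(U(1)\) formulation of section 3 so simple): for commuting phases the conjugation by \(C\) in (14) cancels. Schematically,

\[
V_{c}=\Big(\prod_{p\in A}V_{p}\Big)\,C\,\Big(\prod_{p\in B}V_{p}\Big)\,C^{\dagger}
\ \longrightarrow\
e^{i\sum_{p\in A}\omega_{p}}\,e^{i\theta_{C}}\,e^{i\sum_{p\in B}\omega_{p}}\,e^{-i\theta_{C}}
=e^{i\sum_{p\in c}\omega_{p}}\ ,
\]

so that the abelian Bianchi identity constrains only the sum of the six plaquette angles of the cube, while in the non-abelian case the connectors survive and must be kept track of explicitly.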
Consider now the partition function given by Eq. (13) on a \(3D\) lattice with free or Dirichlet BC in the third direction and periodic BC in other directions for the link matrices. To get the plaquette representation we make a change of variables |
**Plaquette representation for \(3D\) lattice gauge models: I. Formulation and perturbation theory**
**O. Borisenko1, S. Voloshin2**
Footnote 1: email: oleg@bitp.kiev.ua
Footnote 2: email: sun-burn@yandex.ru
_N.N.Bogolyubov Institute for Theoretical Physics, National Academy of Sciences of Ukraine, 03143 Kiev, Ukraine_
**M. Faber3**
Footnote 3: email: faber@kph.tuwien.ac.at
_Institut für Kernphysik, Technische Universität Wien_
###### Abstract
We develop an analytical approach for studying lattice gauge theories within the plaquette representation where the plaquette matrices play the role of the fundamental degrees of freedom. We start from the original Batrouni formulation and show how it can be modified in such a way that each non-abelian Bianchi identity contains only two connectors instead of four. In addition, we include dynamical fermions in the plaquette formulation. Using this representation we construct the low-temperature perturbative expansion for \(U(1)\) and \(SU(N)\) models and discuss its uniformity in the volume. The final aim of this study is to give a mathematical background for working with non-abelian models in the plaquette formulation.
## 1 Introduction
### Motivation
There exist several equivalent representations of lattice gauge theories (LGT). Originally, LGT was formulated by K. Wilson in terms of group valued matrices on links as fundamental degrees of freedom [1]. The partition function can be written as
\[Z=\int DU\ \exp\{\beta S[U_{n}(x)]\}\ ,\] (1)
where \(S[U_{n}(x)]\) is some gauge-invariant action whose naive continuum limit coincides with the Yang-Mills action. The integral in (1) is calculated over the Haar measure on the group at every link of the lattice. Very popular in the context of abelian LGT is the dual representation which was constructed in [2]-[4]. Extensions of dual formulations to non-abelian groups have been proposed only in the nineties in [5]-[10]. The resulting dual representation appears to be a local theory of discrete |
partition function) for every cube and goes towards gradual restoration of the full identity with every order of the expansion. Thus, the generating functional contains an abelianized form of the identity without connectors. Higher-order terms include the expansion of the connectors (together, of course, with other contributions, which are however infrared finite), which can lead to the infrared problem. Let us also mention that in this respect there is a big difference between gauge models and \(2D\) \(SU(N)\) spin models. In the latter, the low-temperature expansion also goes towards restoration of the full non-abelian Bianchi identity [24], and the form of the generating functional is formally the same: it includes invariant link Green functions and the standard Green function. In \(2D\) the latter is not infrared finite and is the main source of trouble. As will be seen below, in \(3D\) gauge models all Green functions entering the generating functional are infrared finite. The source of the infrared problem in gauge models are the connectors of the non-abelian Bianchi identities.
### \(U(1)\) LGT
For a simple introduction to our method we first consider the abelian \(U(1)\) LGT without fermions where the expansion can be done in a straightforward manner. Due to a cancellation of the connectors from Bianchi identities for abelian models the plaquette formulation for the \(U(1)\) model on the dual lattice with the Wilson action reads
\[Z_{U(1)}(\beta)=\int_{-\pi}^{\pi}\prod_{l}\frac{d\omega_{l}}{2\pi}\exp\left[ \beta\sum_{l}\cos\omega_{l}\right]\prod_{x}J_{x}\ ,\] (46)
where the Jacobian is given by the periodic delta-function
\[J_{x} = \sum_{r=-\infty}^{\infty}e^{ir\omega_{x}}\ ,\] (47) \[\omega_{x} = \omega_{n_{1}}(x)+\omega_{n_{2}}(x)+\omega_{n_{3}}(x)-\omega_{n_{ 1}}(x-e_{1})-\omega_{n_{2}}(x-e_{2})-\omega_{n_{3}}(x-e_{3})\ .\]
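The constraint enforced by this Jacobian, namely that the oriented sum of the plaquette angles bounding any elementary cube vanishes (so that \(\omega_{x}=0\) modulo \(2\pi\) once the plaquette angles are folded into \((-\pi,\pi]\)), can be verified directly. The following Python sketch is our own illustrative check on a periodic \(4^{3}\) lattice with random link angles, written in direct-lattice notation rather than the dual notation of Eq. (47):

```python
import random

# Abelian Bianchi identity check: for U(1) link angles theta_n(x) on a 3D
# lattice, the oriented sum of the six plaquette angles bounding any
# elementary cube vanishes identically (before folding into (-pi, pi]).
L = 4
rng = random.Random(1)
theta = {(x, y, z, n): rng.uniform(-3.14, 3.14)
         for x in range(L) for y in range(L) for z in range(L) for n in range(3)}

def shift(s, n):
    s = list(s)
    s[n] = (s[n] + 1) % L           # periodic boundary conditions
    return tuple(s)

def plaq(s, m, n):                  # plaquette angle in the (m, n) plane at site s
    return (theta[s + (m,)] + theta[shift(s, m) + (n,)]
            - theta[shift(s, n) + (m,)] - theta[s + (n,)])

def cube(s):                        # oriented sum over the six faces of the cube at s
    return sum(plaq(shift(s, k), (k + 1) % 3, (k + 2) % 3)
               - plaq(s, (k + 1) % 3, (k + 2) % 3) for k in range(3))

total = max(abs(cube((x, y, z)))
            for x in range(L) for y in range(L) for z in range(L))
print(total)                        # zero up to floating-point rounding
```

Each link angle enters the cube sum twice with opposite signs, so the cancellation is exact in exact arithmetic.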
Dual links are defined by the point \(x\) and the positive direction \(n\), see Eq.(28). Strictly speaking, if the original link gauge matrices satisfy free boundary conditions, and so do the plaquette matrices, then the dual links \(\omega_{l}\) and representations \(r_{x}\) must obey zero Dirichlet BC. All general expansions given below are valid for any type of BC; the dependence on the BC enters only through the Green functions defined below. In \(3D\) the difference between Green functions for different types of BC is of the order \({\cal O}(L^{-2})\) for large \(L\). Since we are interested only in the TL behaviour, we can consider dual models without any reference to the original link representation and introduce any type of BC. Both in the abelian and in the non-abelian case we work with periodic BC. We note at this point that expressions obtained below on a finite lattice do not correspond to the standard link formulation, since the latter would include contributions from nontrivial Polyakov loops on a periodic finite lattice. However, in the TL both models must coincide and convergence to the TL is very
* [25] G. Batrouni, M.B. Halpern, Phys. Rev. D **30** (1984) 1775.
* [26] V.F. Müller, W. Rühl, Ann. Phys. **133** (1981) 240.
* [27] O. Borisenko, S. Voloshin, M. Faber, Plaquette representation for \(3D\) lattice gauge models: II. Dual form and its properties, in preparation. |
identical, while the only (but essential) difference in the non-abelian gauge models is the appearance of connectors.
A lattice PT in the maximal axial gauge was constructed for abelian and non-abelian models in the standard Wilson formulation by V.F. Müller and W. Rühl in [26]. In this gauge the PT for non-abelian models displays serious infrared divergences in separate terms of the expansion, starting from \({\cal O}(\beta^{-2})\) order for the Wilson loop and from \({\cal O}(\beta^{-1})\) for the free energy. Of course, it is expected that all divergences must cancel in all gauge invariant quantities. In [26] the authors worked out a special procedure for dealing with such divergences. The essential ingredients of the procedure of [26] are the following: 1) fixing Dirichlet BC in one direction and periodic BC in the other directions; 2) the Wilson loop under consideration is placed at distance \(R\) from the boundary and the limit \(R\to\infty\) is taken to restore time translation invariance; 3) divergent Green functions must be properly regularised, though there is no a priori preference for any regularization. With this procedure the first two coefficients of the expansion of Wilson loop expectation values in \(SU(2)\) LGT have been computed. These coefficients appear to be infrared finite in all \(D\geq 3\) and are proportional to the perimeter of the loop in \(4D\) and to \(R\ln T+T\ln R\) in \(3D\) (for a rectangular loop \(R\times T\)).
We are not aware of any proof of the infrared finiteness at higher orders of the expansion. That there could be a problem with the infrared behaviour was shown, however, in [21]. Namely, in addition to the Dirichlet BC on the boundary, one link was fixed in the middle of the lattice, and it was shown that the TL value of the expectation value of the plaquette to which the fixed link belongs differs from that obtained in [26]. The underlying reason for such infrared behaviour lies, of course, in the fact that the conventional PT is done around the vacuum \(U_{n}(x)=I\), which is the true ground state only at fixed \(L\), even in the maximal axial gauge. When the volume grows, the fluctuations of the link matrices become larger and larger and may cause infrared unstable behaviour. In particular, the integration regions over fluctuations are usually extended to infinity, but only at fixed \(L\) can one really prove that this introduces exponentially small corrections. There is no tool for proving that they remain exponentially small also in the large volume limit. On the contrary, the bound (4) on large plaquette fluctuations holds uniformly in the volume, implying that large plaquette fluctuations remain exponentially small also in the TL. This is the first achievement of the low-temperature expansion in the plaquette formulation. In constructing the low-temperature expansion in the plaquette formulation we aim not only to develop a new technique for calculating asymptotic expansions in LGT, but also to get a deeper insight into this infrared problem and, as a consequence, into the problem of the uniformity of the low-temperature expansion. We hope that this will help in the investigation of the general properties of asymptotic expansions of non-abelian gauge models. As will be seen below, all Green functions appearing in the plaquette formulation are infrared finite. The source of trouble are the connectors of the non-abelian Bianchi identity.
The low-temperature expansion in the plaquette formulation starts from the abelian Bianchi identity (zero order |
and then expand the integrand of (63) in powers of fluctuations of the link fields. We introduce now the external sources \(h_{k}(l)\) coupled to the link field \(\omega_{k}(l)\) and \(s_{k}(x)\) coupled to the auxiliary field \(\alpha_{k}(x)\) and adopt the definitions
\[\omega_{k}(l)\to\frac{\partial}{\partial h_{k}(l)}\ ,\ \alpha_{k}(x)\to\frac{ \partial}{\partial s_{k}(x)}\ .\] (65)
With this convention we get the following expansion for the partition function (63)
\[Z=\left[1+\sum_{k=1}^{\infty}\frac{1}{\beta^{k}}B_{k}\left(\partial_{h}, \partial_{s}\right)\right]M(h,s)\ ,\] (66)
where the operators \(B_{k}\) are defined through
\[1 + \sum_{k=1}^{\infty}\frac{1}{\beta^{k}}B_{k}\left(\partial_{h}, \partial_{s}\right)=\prod_{x}\left[1+\sum_{q=1}^{\infty}\frac{(-i)^{q}}{q!} \left(\sum_{k}\alpha_{k}(x)\sum_{n=1}^{\infty}\frac{\omega^{(n)}_{k}(x)}{(2 \beta)^{n/2}}\right)^{q}\right]\] (67) \[\times \prod_{l}\left[\left(1+\sum_{k=1}^{\infty}\frac{1}{(2\beta)^{k}} \sum_{l_{1},..,l_{k}}^{\prime}\frac{a_{1}^{l_{1}}...a_{k}^{l_{k}}}{l_{1}!...l_ {k}!}\right)\left(1+\sum_{k=1}^{\infty}\frac{(-1)^{k}}{(2\beta)^{k}}C_{k}W_{l} ^{2k}\right)\right]\ .\]
Here we have denoted
\[a_{k}=(-1)^{k+1}\frac{W_{l}^{2(k+1)}}{(2k+2)!}\ ,\ C_{k}=\sum_{n=0}^{k}\frac{1 }{(2n+1)!(2k-2n+1)!}\ .\] (68)
The first bracket on the rhs of Eq.(67) represents the expansion of the Jacobian. The first and the second brackets in the second line come from the expansion of the action and of the invariant measure, respectively. We have omitted the expansion of the term \(W_{x}/\sin W_{x}\) since it does not contribute to the asymptotic expansion (due to the constraint \(W_{x}=0\)) but only to exponentially small corrections. As usual, one has to put \(h_{k}=s_{k}=0\) after taking all the derivatives. Note that in writing down these general expansions we do not distinguish between the four BCC sub-lattices, since the generating functional (see just below) has a unique form for the whole lattice. Thus, the product over \(x\) in Eq.(67) goes over all sites of the dual lattice. The difference between the BCC sub-lattices appears only in the general expressions for the coefficients \(\omega^{(i)}_{k}(x)\) for \(i\geq 1\).
The generating functional \(M(h,s)\) is given by
\[M(h,s)=\int_{-\infty}^{\infty}\prod_{x,k}d\alpha_{k}(x)\int_{-\infty}^{\infty} \prod_{l,k}d\omega_{k}(l)\exp[-\frac{1}{2}\omega^{2}_{k}(l)-i\omega_{k}(l)[ \alpha_{k}(x+e_{n})-\alpha_{k}(x)]]\]
\[\times\ \sum_{m(x)=-\infty}^{\infty}\exp\left[2\pi i\sqrt{2\beta}\sum_{x}m(x) \alpha(x)+\sum_{l,k}\omega_{k}(l)h_{k}(l)+\sum_{x,k}\alpha_{k}(x)s_{k}(x) \right]\ ,\] (69)
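Since \(M(h,s)\) is built from elementary Gaussian integrals, its basic building block is the one-variable identity \(\int d\omega\, e^{-\omega^{2}/2+\omega h}=\sqrt{2\pi}\,e^{h^{2}/2}\). A quick numerical check of this identity (our toy illustration, not the full functional integral):

```python
import math

def gauss_with_source(h, n=100001, cut=12.0):
    # trapezoid approximation of \int_{-cut}^{cut} dw exp(-w^2/2 + w*h)
    dw = 2.0 * cut / (n - 1)
    total = 0.0
    for i in range(n):
        w = -cut + i * dw
        weight = 0.5 if i in (0, n - 1) else 1.0
        total += weight * math.exp(-0.5 * w * w + w * h)
    return total * dw

h = 0.7
exact = math.sqrt(2.0 * math.pi) * math.exp(0.5 * h * h)
assert abs(gauss_with_source(h) - exact) < 1e-6
```

Differentiating the right-hand side with respect to the source \(h\) then generates Gaussian moments, which is exactly how the operators \(B_k(\partial_h,\partial_s)\) act on \(M(h,s)\).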
problem of the standard perturbation theory (PT), i.e. whether PT is uniformly valid in the volume, can be briefly formulated in the following way. When the volume is fixed and the maximal axial gauge is imposed, the link matrices perform small fluctuations around the unit matrix, and PT works very well, producing an asymptotic expansion in inverse powers of \(\beta\). However, in the large-volume limit the integrand becomes arbitrarily flat even in the maximal axial gauge. This means that in the thermodynamic limit (TL) the system deviates arbitrarily far from the ordered perturbative state, so that no saddle point exists anymore, i.e. the configurations of link matrices are distributed uniformly in the group space. That there are problems with the conventional PT was shown in [21], where it was demonstrated that the PT results depend on the boundary conditions (BC) used to reach the TL. Fortunately, even in the TL the plaquette matrices stay close to unity: the inequality (4) holds and thus provides a basis for constructing the low-temperature expansion in a different and mathematically reliable way. In this paper we develop such a weak-coupling expansion for both abelian and non-abelian models.
This paper is organised as follows. In the next section we give our plaquette formulation of \(3D\) \(SU(N)\) LGT. We work in the maximal axial gauge and consider a model with arbitrary local pure gauge action. For fermions we choose either the Wilson or the Kogut-Susskind action. The plaquette representation will be formulated on a dual lattice for the partition function, 't Hooft and Wilson loops. In section 3 we construct the weak-coupling expansion for the abelian model using the plaquette representation. We give a general expansion for the partition function, calculate the zero-order generating functional and show how to compute corrections and expectation values of Wilson loops. Then, we extend the weak-coupling expansion to an arbitrary \(SU(N)\) gauge model. Here we give a general expansion of the Boltzmann factor, explain how to treat the Bianchi constraints in the expansion, compute the generating functional and establish some simple Feynman rules. Finally, we discuss some features of the large-\(\beta\) expansion in non-abelian models. Our conclusions are presented in section 4. Some computations are relegated to the Appendices. In Appendix A we study the link Green functions which appear as the main building blocks of the expansion in the plaquette formulation. In Appendices B and C we give all technical details of the calculation of the free-energy expansion for SU(N) models in the plaquette representation.
### Notations and conventions
We work on a \(3D\) cubic lattice \(\Lambda\subset Z^{3}\) with lattice spacing \(a=1\) and linear extension \(L\); \(\vec{x}=(x,y,z)\) with \(x,y,z\in[0,L-1]\) denotes the sites of the lattice. We impose either free or Dirichlet BC in the third direction and periodic BC in the other directions. Let \(G=U(N),SU(N)\); \(U_{l}=U_{n}(x)\in G\), \(V_{p}\in G\), and \(DU_{l}\), \(DV_{p}\) denote the Haar measure on \(G\). \(\chi_{r}(U)\) and \(d(r)\) denote the character and the dimension of the irreducible representation \(\{r\}\) of \(G\), respectively. We treat models with
following manner. Connectors generate contributions of the form \(\sum_{l\in C}\omega_{k}(l)\), and of high order, to all \(\omega_{k}^{(i)}(x)\), see Eqs.(100), (105)-(107). And though each \(\omega_{k}(l)\) behaves like \({\cal O}(1/\sqrt{\beta})\), it is not obvious whether the same remains true for the sums of the plaquette angles along connectors. Since in the TL practically all connectors become infinitely long, this raises the question of whether the property \(\sum_{l\in C}\omega_{k}(l)\sim{\cal O}(1/\sqrt{\beta})\) holds in the limit \(L\to\infty\).
## 4 Conclusions
In this article we proposed a plaquette formulation of non-abelian lattice gauge theories. Our approach to such formulations is summarised in section 2.1. We have also included dynamical fermions in our construction. The main formula of section 2, Eq.(27), gives a plaquette formulation for gauge models with dynamical fermions. As an application of our formulation we have developed a weak-coupling expansion which can be used for a perturbative evaluation both of the free energy and of gauge-invariant quantities like the Wilson loop. We believe that this work can be useful in at least two respects. The first concerns the problem of the uniformity of the perturbative expansion in non-abelian models and is described in section 3. The second concerns the perturbative expansion of lattice models with actions different from the Wilson action. Practically all standard actions discussed in the literature have a very simple form in the plaquette formulation. The hardest part of the perturbative expansion is to treat the contributions from the Jacobian. Since the Jacobian represents the Bianchi constraints on the plaquette matrices and is the same whichever original action is taken, it is sufficient to compute the contributions from the Jacobian only once and to use the result for all actions. For example, the expressions for the coefficients \(C_{meas}\), \(C_{J1}\) and \(C_{J2}\) in (109), given by (111), (119) and (120), respectively, must be the same for all lattice actions. Only the expression for \(C_{ac}\) varies from action to action.
In our next paper [27] we derive an exact dual representation of non-abelian LGT starting from the plaquette formulation and study its low-temperature properties in detail. In particular, we shall compute the low-temperature asymptotics of the dual Boltzmann weight, derive its continuum limit and obtain an effective theory for the Wilson loop.
In conclusion, we hope that the present investigation gives a certain background for an analytical study of gauge models in the low-temperature phase which is the only phase essential for the construction of the continuum limit. It can give a solid mathematical basis for conventional PT by proving (or disproving) its asymptoticity in the large volume limit. We also think that the present method can give reliable analytical tools for the investigation of infrared physics relevant for the non-perturbative phenomena like quark confinement, chiral symmetry breaking, etc.
## Appendix A
local interaction. Let \(H[U]\) be a real invariant function on \(G\) such that
\[|\,H[U]\,|\ \leq\ H[I]\] (5)
for all \(U\), and such that the coefficients of the character expansion of \(\exp(\beta H)\)
\[C[r]\ =\ \int DU\ \exp\left(\beta H[U]\right)\chi_{r}(U)\] (6)
exist. We introduce the plaquette matrix as
\[V_{p}\:=\:U_{n}(x)U_{m}(x+e_{n})U_{n}^{\dagger}(x+e_{m})U_{m}^{\dagger}(x)\ ,\] (7)
where \(e_{n}\) is a unit vector in the direction \(n\). The action of the pure gauge theory is taken as
\[S_{g}[U_{n}(x)]=\sum_{p\in\Lambda}H\left[V_{p}\right]\ .\] (8)
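As a concrete illustration of Eq.(7) (our own sketch, not from the text), one can build random SU(2) link matrices and verify that the ordered product defining \(V_p\) is again an SU(2) matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    # U = a0*I + i*(a1*sx + a2*sy + a3*sz) with (a0,...,a3) a unit 4-vector
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return a[0] * np.eye(2) + 1j * (a[1] * sx + a[2] * sy + a[3] * sz)

# the four links around a plaquette: V_p = U_n(x) U_m(x+e_n) U_n^+(x+e_m) U_m^+(x)
Un_x, Um_xen, Un_xem, Um_x = (random_su2() for _ in range(4))
Vp = Un_x @ Um_xen @ Un_xem.conj().T @ Um_x.conj().T

assert np.allclose(Vp @ Vp.conj().T, np.eye(2))   # unitarity
assert np.isclose(np.linalg.det(Vp), 1.0)         # det V_p = 1
```

The group property of \(V_p\) is what allows the change of variables from links to plaquettes later in the construction.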
We write the action for fermions in the form (colour and spinor indices are suppressed)
\[S_{q}[\overline{\psi}_{f}(x),\psi_{f}(x),U_{n}(x)]=\frac{1}{2}\sum_{x,x^{ \prime}\in\Lambda}\sum_{f=1}^{N_{f}}\ \overline{\psi}_{f}(x)A_{f}(x,x^{\prime} ;U_{l})\psi_{f}(x^{\prime})\ ,\] (9)
where
\[A_{f}(x,x^{\prime};U_{l})=M_{f}\delta_{x,x^{\prime}}+\frac{1}{2}\sum_{n=1}^{d} \left[\delta_{x+e_{n},x^{\prime}}\xi_{n}(x)U_{n}(x)+\delta_{x-e_{n},x^{\prime} }\overline{\xi}_{n}(x^{\prime})U_{n}^{\dagger}(x^{\prime})\right]\ .\] (10)
We have introduced here the following notations
\[M_{f}=m_{f}-rd\ ,\ \xi_{n}(x)=r+\gamma_{n}\ ,\ \overline{\xi}_{n}(x)=r-\gamma_ {n}\] (11)
for Wilson fermions and
\[M_{f}=m_{f}\ ,\ \xi_{n}(x)=\overline{\xi}_{n}(x)=\eta_{n}(x)=(-1)^{x_{1}+x_{2} +...+x_{n-1}}\] (12)
for Kogut-Susskind fermions. Here \(m_{f}\) is the mass of the fermion field, \(N_{f}\) is the number of quark flavours and \(r\) is the Wilson parameter (\(r=1\) is the conventional choice).
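The Kogut-Susskind phases in Eq.(12) depend only on the parity of the coordinates preceding the direction index; a minimal helper (our illustration, not part of the paper) makes this explicit:

```python
def eta(n, x):
    """Staggered phase eta_n(x) = (-1)^(x_1 + ... + x_{n-1}).

    n is the direction index (1-based), x a tuple of site coordinates.
    """
    return -1 if sum(x[:n - 1]) % 2 else 1

# eta_1 is identically +1; the sign in direction n flips with x_1 + ... + x_{n-1}
assert eta(1, (3, 5, 2)) == 1
assert eta(2, (1, 0, 0)) == -1
assert eta(3, (1, 1, 0)) == 1
```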
After integrating out the fermion degrees of freedom, the partition function of the gauge theory on \(\Lambda\) with symmetry group \(G\) can be written as
\[Z_{\Lambda}(\beta,m_{f},N_{f})=\int\prod_{x,n}dU_{n}(x)\times\exp\left\{\beta \sum_{p\in\Lambda}H[V_{p}]+\sum_{f=1}^{N_{f}}{\rm Tr}\ln A_{f}(x,x^{\prime};U_ {l})\right\}\ ,\] (13)
where the trace is taken over space, colour and spinor indices.
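The fermionic term \({\rm Tr}\ln A_{f}\) in Eq.(13) is, of course, \(\ln\det A_{f}\); a quick numerical reminder of this identity on a small positive-definite toy matrix (our own example, unrelated to an actual fermion matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(6, 6))
A = B @ B.T + 6.0 * np.eye(6)   # positive definite => real logarithms

tr_ln = np.sum(np.log(np.linalg.eigvalsh(A)))   # Tr ln A from eigenvalues
sign, ln_det = np.linalg.slogdet(A)             # ln det A directly
assert sign == 1.0
assert np.isclose(tr_ln, ln_det)
```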
## 2
the Wilson LGT, contrary to the claim of [17]. Exact calculations on a \(3^{3}\) lattice show the equivalence between Wilson LGT and the model of [17], but the equivalence is already lost for a \(4\times 3^{2}\) lattice. The point is that the constraints on the plaquette matrices in the model of [17] do not match the non-abelian Bianchi identity, as will be seen from our explicit calculations. Nevertheless, we have found that a certain decomposition of the lattice made in [17] can be useful in simplifying Batrouni's original representation. In the next section we use some ideas of [17] to reduce the number of connectors in the constraints on plaquette matrices from four to two for each cube of the lattice. This simplifies the whole representation, but it still remains quite involved.
Let us also mention that there exists a plaquette representation in terms of so-called gauge-invariant plaquettes [18]. This representation does not require gauge fixing and can be formulated both on finite and infinite lattices. It is quite possible to work with this formulation as well; all the methods developed in this paper can be straightforwardly extended to the model of gauge-invariant plaquettes. We have nevertheless found that the plaquette formulation obtained in the maximal axial gauge is simpler to handle, especially on finite lattices. Moreover, we shall explain how our formulation can be extended to periodic lattices. In addition to the previous works [12], [17], [18] we also include dynamical fermions in the plaquette formulation.
In spite of the complexity of the plaquette representation, we think it has certain advantages over the standard Wilson representation. Some of them have been mentioned and elaborated in [12] and [19]. Duality transformations, the Coulomb-gas representation and the strong-coupling expansion look more natural and simpler in the plaquette formulation. It is also possible to develop a mean-field method which is gauge-invariant by construction and agrees better with Monte-Carlo data than any mean-field approach based on the mean-link method [19].
Nevertheless we believe that the main advantage of this formulation, not mentioned in [12], [18] and [19] lies in its applications to the low-temperature region. Let \(V_{p}\) be a plaquette matrix in \(SU(N)\) LGT. The rigorous result of [20] asserts that the probability \(p(\xi)\) that \({\mbox{Tr}}(I-V_{p})\geq\xi\) is bounded by
\[p(\xi)\leq{\cal O}(e^{-b\beta\xi})\ ,\ \beta\to\infty\ ,\ b={\rm{const}}\] (4)
uniformly in the volume. Thus, all configurations with \(\xi\geq O(\beta^{-1})\) are exponentially suppressed. This is equivalent to the statement that the Gibbs measure of \(SU(N)\) LGT at large \(\beta\) is strongly concentrated around configurations on which \(V_{p}\approx I\). This property justifies the expansion of the plaquette matrices around unity when \(\beta\) is sufficiently large, while there is no such justification for the expansion of link matrices, especially in the large-volume limit. In particular, we think that replacing \({\rm Tr}V_{p}\) in the Gibbs measure by a Gaussian distribution is, in the region of sufficiently large \(\beta\), a well-justified approximation. In fact, all the corrections to this approximation must be non-universal.
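The content of the bound (4) is easy to visualise in a one-plaquette \(U(1)\) toy model with weight \(\exp[\beta\cos\theta]\), where the probability of \(1-\cos\theta\geq\xi\) indeed falls off exponentially in \(\beta\xi\). This is only an abelian caricature of the \(SU(N)\) statement, coded as follows:

```python
import math

def tail_probability(beta, xi, n=20000):
    # P(1 - cos(theta) >= xi) under the weight exp(beta*cos(theta)), theta in (-pi, pi]
    num = den = 0.0
    for i in range(n):
        th = -math.pi + (i + 0.5) * 2.0 * math.pi / n
        w = math.exp(beta * (math.cos(th) - 1.0))   # shifted to avoid overflow
        den += w
        if 1.0 - math.cos(th) >= xi:
            num += w
    return num / den

# the suppression strengthens rapidly as beta grows at fixed xi
p10 = tail_probability(10.0, 0.5)
p40 = tail_probability(40.0, 0.5)
assert p40 < p10 < 0.01
```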
Actually, this is one of our motivations to construct a low-temperature, i.e. large-\(\beta\) expansion of gauge models using the plaquette representation. The well-known |
* [6] I. Halliday, P. Suranyi, Phys.Lett. B350 (1995) 189.
* [7] R. Oeckl, H. Pfeiffer, Nucl.Phys. B598 (2001) 400-426.
* [8] D. Diakonov, V. Petrov, Journal Exp. Theor. Phys. 91 (2000) 873-893.
* [9] O. Borisenko, M. Faber, Dual representation for lattice gauge models, Proc. of the International School-Conference "New trends in high-energy physics", Ed. by P. Bogolyubov, L. Jenkovszky, Kiev (2000) 221.
* [10] O. Borisenko, M. Faber, Confinement picture in dual formulation of lattice gauge models, Proc. of the Vienna International Symposium "Confinement-IV", 2001, World Scientific Publishing, Singapore-New-Jersey-London-Hong-Kong, 269.
* [11] M.B. Halpern, Phys.Rev. D19 (1979) 517; Phys.Lett. B81 (1979) 245.
* [12] G. Batrouni, Nucl.Phys. B208 (1982) 467.
* [13] A. Guth, Phys.Rev. D21 (1980) 2291.
* [14] J. Frohlich, T. Spencer, Commun.Math.Phys. 83 (1982) 411.
* [15] M. Gopfert, G. Mack, Commun.Math.Phys. 81 (1981) 97; 82 (1982) 545.
* [16] M. Zach, M. Faber, P. Skala, Nucl.Phys. B529 (1998) 505; Phys.Rev. D57 (1998) 123.
* [17] B. Rusakov, Phys.Lett. B398 (1997) 331; Nucl.Phys. B507 (1997) 691.
* [18] G. Batrouni, M.B. Halpern, Phys.Rev. D30 (1984) 1782.
* [19] G. Batrouni, Nucl.Phys. B208 (1982) 12.
* [20] G. Mack, V. Petkova, Ann.Phys. 125 (1980) 117.
* [21] A. Patrascioiu, E. Seiler, Phys.Rev.Lett. 74 (1995) 1924.
* [22] O. Borisenko, S. Voloshin, M. Faber, Analytical study of low temperature phase of \(3D\) LGT in the plaquette formulation, Proc. of NATO Workshop "Confinement, Topology and Other Non-perturbative Aspects of QCD", Ed. by J. Greensite, and S. Olejnik, Kluwer Academic Publishers, 2002, 33.
* [23] O. Borisenko, S. Voloshin, Field-strength formulation of lattice QCD with dynamical fermions and related topological structure, Proceedings of XVI International Symposium ISHEPP, Dubna, Russia, 2002.
* [24] O. Borisenko, V. Kushnir, A. Velytsky, Phys.Rev. D62 (2000) 025013; e-print archive hep-lat/9809133, hep-lat/9905025 |
(7) in the partition function (13). The partition function then takes the form
\[Z_{\Lambda}(\beta,m_{f},N_{f})=\int\prod_{p}dV_{p}\exp\left\{\beta\sum_{p\in \Lambda}H[V_{p}]\right\}\prod_{p}\ J(V_{p})\ ,\] (16)
where the Jacobian of the transformation reads
\[J(V_{p})\:=\:\int\prod_{l}dU_{l}\ \prod_{p}\delta\left(V_{p}^{\dagger}\prod_{l \in p}U_{l}-I\right)\exp\left\{\sum_{f=1}^{N_{f}}{\rm Tr}\ln A_{f}(x,x^{\prime };U_{l})\right\}\ .\] (17)
The last equation is rather formal, for we have to specify the order of multiplication of the non-abelian matrices. An important point concerns the position of the plaquette matrix within the product of link matrices. The plaquette matrices \(V_{p}\) we insert
Figure 1: Vertices \(A\), \(B\) and the connector path on the cube.
Figure 2: Structure of connectors on the lattice.
variables which label the irreducible representations of the underlying gauge group and can be written solely in terms of group-invariant objects like the \(6j\)-symbols, etc. A closely related approach to the dual formulation is the so-called plaquette representation, invented originally in the continuum theory by M. Halpern [11] and extended to lattice models in [12]. In this representation the plaquette matrices play the role of the dynamical degrees of freedom and satisfy certain constraints expressed through Bianchi identities in every cube of the lattice. Each representation has its own advantages and deficiencies. E.g., the Wilson formulation is well suited for Monte-Carlo simulations, while the dual and plaquette representations are usually used for an analytical study of the models. In particular, duals of abelian \(U(1)\) LGT have been used to prove the existence of the deconfinement phase transition at zero temperature in four dimensions (\(4D\)) [13, 14] and to prove confinement at all couplings in \(3D\) [15]. Also, Monte-Carlo simulations proved to be very efficient in the dual of \(4D\) \(U(1)\) LGT [16].
So far, however, both the dual and plaquette formulations have not been very popular in the case of non-abelian models, probably due to the complexity of these representations. For instance, the plaquette representation can hardly be used for Monte-Carlo computations due to the number of constraints on the plaquette matrices. Let us recall the general form of Batrouni's construction. In [12] the plaquette representation was constructed in the maximal axial gauge. The partition function takes the following form if \(S[U_{n}(x)]\) in (1) is the standard Wilson action
\[Z=\int\prod_{p}dV_{p}\exp\left[\beta\sum_{p}{\rm Re\ Tr}V_{p}\right]\prod_{c}J (V_{c})\ ,\] (2)
where \(V_{p}\in SU(N)\) are plaquette matrices and \(dV_{p}\) is the invariant Haar measure of \(SU(N)\). The product over \(c\) runs over all cubes of the lattice. The Jacobian \(J(V_{c})\) is given by
\[J(V_{c})=\sum_{r}d_{r}\chi_{r}\left(V_{c}\right)\ ,\] (3)
where the sum over \(r\) runs over all representations of \(SU(N)\) and \(d_{r}=\chi_{r}(I)\) is the dimension of the representation \(r\). The last expression is nothing but an \(SU(N)\) delta-function which introduces a certain constraint on the plaquette matrices. This constraint is just the lattice form of the Bianchi identity. The \(SU(N)\) character \(\chi_{r}\) depends on an ordered product of \(SU(N)\) matrices, as dictated by the Bianchi identity. Its exact form will be given in the next section. An important point is that in the non-abelian case the resulting constraints on the plaquette matrices turn out to be highly nonlocal, and this fact makes an analytical study of the model rather difficult. In particular, it has so far prevented the construction of any well-controlled and useful weak-coupling expansion.
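The statement that Eq.(3) is a group delta-function rests on character orthogonality, \(\int dU\,\chi_{r}(U)\chi_{s}(U)=\delta_{rs}\) (SU(2) characters are real). For \(SU(2)\), where \(\chi_{j}(\theta)=\sin((2j+1)\theta/2)/\sin(\theta/2)\) with class measure \((1/\pi)\sin^{2}(\theta/2)\,d\theta\), this is easy to check numerically (our illustration, not from the paper):

```python
import math

def chi(j2, theta):
    # SU(2) character of spin j = j2/2 on the conjugacy class with angle theta
    if abs(math.sin(theta / 2.0)) < 1e-12:
        return float(j2 + 1)
    return math.sin((j2 + 1) * theta / 2.0) / math.sin(theta / 2.0)

def overlap(j2a, j2b, n=20000):
    # \int dU chi_a chi_b with the Haar class measure (1/pi) sin^2(theta/2) dtheta
    s = 0.0
    dth = 2.0 * math.pi / n
    for i in range(n):
        th = (i + 0.5) * dth
        s += math.sin(th / 2.0) ** 2 * chi(j2a, th) * chi(j2b, th)
    return s * dth / math.pi

assert abs(overlap(0, 0) - 1.0) < 1e-6   # trivial representation
assert abs(overlap(1, 1) - 1.0) < 1e-6   # fundamental representation
assert abs(overlap(0, 2)) < 1e-6         # orthogonality of distinct reps
```

The same orthogonality, summed over all representations with weight \(d_r\), is what turns the character sum in Eq.(3) into a delta-function on the group.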
A different plaquette formulation of \(3D\)\(SU(N)\) LGT has been proposed in [17]. It has a local form and does not require gauge fixing. Unfortunately, as we have found by explicit computations the model proposed in [17] does not coincide with |