Dataset schema (column, type, range / stats):

  text                large_string   length 252 - 2.37k
  length              uint32         252 - 2.37k
  arxiv_id            large_string   length 9 - 16
  text_id             int64          36.7k - 21.8M
  year                int64          1.99k - 2.02k
  month               int64          1 - 12
  day                 int64          1 - 31
  astro               bool           2 classes
  hep                 bool           2 classes
  num_planck_labels   int64          1 - 11
  planck_labels       large_string   66 values
FORMULA We can now see why the $\Psi^2$ contact terms can be neglected. As is easy to verify, $\Psi^2$ terms first appear in the $\theta$-expansion at order $\theta^4$. Consequently, a single vertex-operator insertion $V_{\Psi^2}$ suffices to saturate the four fermion zero modes, which is the case examined here. A single insertion, however, is proportional to $T_{M5}$ and is of order ${\cal O}(l_P^6)$ relative to two vertex-operator insertions: the latter give a contribution proportional to $T_{M5}^2$. Clearly, this analysis is valid provided the 'radius' of the six-cycle is much larger than the Planck length, ${\rm Vol}_{\Sigma}\gg l^6_P$.
[length: 648 | arxiv_id: hep-th/0701287 | text_id: 5340320 | date: 2007-01-31 | astro: false | hep: true | planck_labels (1): UNITS]
It is well known [CIT] that a simple combination of the reduced Planck mass $M_P=m_P/\sqrt{8\pi}$ and the Hubble parameter $H=H_0\sim 10^{-33}\,{\rm eV}$ gives a value $\rho_\Lambda \simeq M_P^2 H_0^2$ comparable to the observed dark energy density $\sim 10^{-10}\,{\rm eV}^4$ [CIT]. This interesting coincidence is, on one hand, reminiscent of the cosmic coincidence problem and, on the other, has motivated holographic dark energy models. These models are based on the holographic principle proposed by 't Hooft and Susskind [CIT], which claims that all of the information in a volume can be described by the physics at the boundary of that volume. Based on this principle, Cohen et al [CIT] proposed a relation between a UV cutoff ($a$) and an IR cutoff ($L$) by requiring that the total energy in a region of size $L$ cannot be larger than the mass of a black hole of that size. Saturating the bound, one obtains FORMULA where $d$ is a constant. Hsu [CIT] pointed out that for $L=H^{-1}$, the holographic dark energy behaves like matter rather than dark energy. Many attempts [CIT] have been made to overcome this IR cutoff problem, for example by using a non-minimal coupling to a scalar field [CIT] or an interaction between dark energy and dark matter [CIT]. Li [CIT] suggested that an ansatz for the holographic dark energy density FORMULA would give a correctly accelerating universe, where the future event horizon ($R_h$) is used instead of the Hubble horizon as the IR cutoff $L$.
[length: 1496 | arxiv_id: hep-th/0702121 | text_id: 5368243 | date: 2007-02-15 | astro: true | hep: true | planck_labels (1): UNITS]
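The coincidence quoted in the excerpt above is easy to check numerically. A minimal Python sketch, using standard approximate values (my assumption, not from the paper) for the reduced Planck mass and $H_0$ in eV units:

```python
# Order-of-magnitude check: M_P^2 H_0^2 versus the observed dark energy
# density ~1e-10 eV^4. The numeric values below are standard approximate
# figures assumed here, not taken from the excerpt.
M_P = 2.4e27   # reduced Planck mass in eV
H_0 = 1.5e-33  # Hubble parameter in eV

rho_lambda = M_P**2 * H_0**2  # saturated holographic bound with L = 1/H_0
print(f"rho_Lambda ~ {rho_lambda:.1e} eV^4")  # ~1e-11 eV^4
```

The result, roughly $10^{-11}\,{\rm eV}^4$, is indeed within an order of magnitude of the quoted $\sim 10^{-10}\,{\rm eV}^4$.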
At the Planck time $t_{Pl}= (\hbar G/c^5)^{1/2}\sim 10^{-43.3}$ s, the Planck mass $m_{Pl}= \hbar/(t_{Pl}c^2) = (\hbar c/G)^{1/2}\sim 10^{-4.7}\,\hbox{g}$, corresponding to the Planck energy $E_{Pl}\sim 10^{19.05}\,\hbox{GeV}$, is within a particle horizon whose size is the Planck length $\ell_{Pl}\sim ct_{Pl}\sim 10^{-32.83}$ cm. Any PBHs formed before or during inflation would have had their energy density diluted to a negligible value by the exponential expansion of the scale factor, so PBH formation is of interest mainly after the end of inflation, at $t\gtrsim t_{end}$.
[length: 618 | arxiv_id: hep-th/0703070 | text_id: 5404281 | date: 2007-03-08 | astro: true | hep: true | planck_labels (4): UNITS, UNITS, UNITS, UNITS]
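The Planck-scale numbers quoted in the excerpt above follow directly from $\hbar$, $G$ and $c$. A short Python check using CODATA SI values (the unit conversions are standard, not from the paper):

```python
import math

# Recompute the Planck time, mass, length and energy quoted in the text
# from SI values of hbar, G and c.
hbar = 1.054571817e-34  # J s
G = 6.67430e-11         # m^3 kg^-1 s^-2
c = 2.99792458e8        # m/s

t_Pl = math.sqrt(hbar * G / c**5)         # s,   ~10^-43.3 s
m_Pl = math.sqrt(hbar * c / G)            # kg,  ~2.2e-8 kg = 10^-4.7 g
l_Pl = c * t_Pl                           # m,   ~1.6e-35 m = 10^-32.8 cm
E_Pl_GeV = m_Pl * c**2 / 1.602176634e-10  # GeV (1 GeV = 1.602e-10 J)

print(math.log10(t_Pl), m_Pl * 1e3, math.log10(l_Pl * 100), E_Pl_GeV)
```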
But because of the special symmetries of string theory, there need to be a total of nine spatial dimensions, three of which are in the brane and are able to get stretched, and the others may be the bulk or may be dimensions that remain wrapped up at the Planck scale; and at the same time, in order for the particle properties and gauge interactions to come out right, the universe must have two holes, which means the brane has collided with itself in two places, in which case one of the many possible vacua of string theory is the one that is right for us. All the details cannot be explained here; but in any case, because the stryngbohtyk model is based on string theory, it is a theory of everything, that is to say, it can explain everything that has ever been, is, and will ever be.
[length: 790 | arxiv_id: astro-ph/0703774 | text_id: 5445219 | date: 2007-03-30 | astro: true | hep: false | planck_labels (1): UNITS]
We will model the landscape by a continuous inflaton potential FORMULA where $V_{0}$ is constant, and $\delta V(\phi)$ is a random contribution such that $|\delta V(\phi)|\ll V_{0}$, and $\phi$ is the inflaton or the order parameter describing the transitions. As in stochastic inflation [CIT], in different causally connected regions fluctuations have a randomly distributed amplitude and observers living in different Hubble patches see different expectation values of the inflaton. When stochastic fluctuations of the inflaton are large enough, the expectation value of the inflaton in a given Hubble patch is determined by the Langevin equation [CIT] FORMULA where the stochastic force $f(\phi,t)$ is Gaussian with correlation properties FORMULA From (REF) one can derive the Fokker-Planck equation, which controls the evolution of the probability distribution $\rho(\phi,t)$ describing how the values of $\phi$ are distributed among different Hubble patches in the multiverse. One finds [CIT] FORMULA The general solution to Eq. (REF) is given by FORMULA where $\psi_{n}$ and $E_{n}$ are respectively the eigenfunctions and the eigenvalues of the effective Hamiltonian FORMULA Here FORMULA is a functional of the scalar field potential $V(\phi)$. It is often denoted as the superpotential due to its "supersymmetric" form: the Hamiltonian (REF) can be rewritten as $\hat{H}=\hat{Q}{}^{\dagger}\hat{Q}$, where $\hat{Q}=-\partial/\partial\phi+v'(\phi)$ with $v(\phi)=4\pi^{2}\delta V(\phi)/(3H_{0}^{4})$.
[length: 1507 | arxiv_id: 0704.0144 | text_id: 5448975 | date: 2007-04-02 | astro: true | hep: true | planck_labels (1): FOKKER]
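The Langevin evolution described in the excerpt above can be sketched with a simple Euler-Maruyama integration. Everything below (the toy $\delta V$, the step size, the number of patches) is an illustrative assumption; only the drift $-V'/(3H)$ and the noise variance $H^3\,\Delta t/(4\pi^2)$ follow the standard stochastic-inflation conventions cited in the excerpt:

```python
import math
import random

# Euler-Maruyama sketch of the Langevin equation for the coarse-grained
# inflaton in stochastic inflation, in units where H_0 = 1. The potential
# V(phi) = V0 + 0.01*sin(phi) is a made-up illustration of a small random
# contribution |delta_V| << V0.
random.seed(0)
H0 = 1.0
dV = lambda phi: 0.01 * math.cos(phi)  # V'(phi) for V = V0 + 0.01 sin(phi)

def evolve(phi=0.0, dt=0.01, steps=10_000):
    """Evolve phi(t) in one Hubble patch for one noise realization."""
    sigma = math.sqrt(H0**3 * dt / (4 * math.pi**2))  # noise per step
    for _ in range(steps):
        phi += -dV(phi) / (3 * H0) * dt + sigma * random.gauss(0, 1)
    return phi

# Different Hubble patches end up with different expectation values of phi.
finals = [evolve() for _ in range(200)]
mean = sum(finals) / len(finals)
var = sum((x - mean) ** 2 for x in finals) / len(finals)
print(f"across patches: mean = {mean:.2f}, variance = {var:.2f}")
```

The spread across realizations is the quantity whose distribution the Fokker-Planck equation in the excerpt governs.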
Applying the methods that we developed for analyzing effective field theories with flow equations to quantum gravity, we disregarded, in a first step of our analysis, the violation of the STI. A set of arbitrary renormalization and improvement conditions was imposed, and by inverting the renormalization group trajectory it was shown that the improvement conditions force the UV cutoff of effective quantum gravity to be the Planck scale $M_P$. We then established that, for generic bare gravity actions, the family of theories described by the arbitrary renormalization and improvement conditions is predictive at scales $\Lambda$ far below the Planck scale $M_P$, with finite accuracy $(\Lambda/M_P)^2$.
[length: 716 | arxiv_id: 0704.3205 | text_id: 5485660 | date: 2007-04-24 | astro: false | hep: true | planck_labels (2): UNITS, UNITS]
In order to illustrate the effects of this supersymmetry breaking, we study a class of bosonic zero modes as well as their fermionic superpartners. We perform our analysis in linearised perturbation theory about the braneworld background, taking into account the corresponding brane actions. The modes that we focus on are those which are factorisable with regard to their worldvolume and orbifold dependencies, and which have a profile in the orbifold direction such that, were supersymmetry not broken, they would appear as massless fields from the 4-dimensional point of view. Here, however, the fermionic modes acquire a mass, while the bosons (which are insensitive to the orientation of the 5-sphere) remain massless. The mass of the fermions depends crucially on their $y$-dependence. In the most common case, the resulting mass is naturally of the order of the compactification scale $L_5$, which may be taken to be near the GUT or Planck scale. However, if the fermionic modes are such that they have a $y$-dependence that evolves contrary to the bulk warping, then their mass is suppressed by an additional bulk warp factor. In this way one obtains two scales of supersymmetry breaking, and thus both heavy and light fermions, by the same mechanism.
[length: 1259 | arxiv_id: 0704.3343 | text_id: 5487876 | date: 2007-04-25 | astro: false | hep: true | planck_labels (1): UNITS]
Our studies also support the work of the Extragalactic Foreground Sources Working Group of the Planck satellite[^1], particularly that of the Low Frequency Instrument (LFI) Consortium. In preparation for the Planck mission, one of our tasks is to estimate the number of extragalactic sources detectable at the Planck frequencies. The existence of unforeseen bright sources could seriously affect the primary task of Planck, i.e., the mapping of the cosmic microwave background (CMB). The theoretical aspects of blazar contamination in CMB maps have been thoroughly studied by [CIT]. Because of the lack of actual measurements at high radio frequencies, many of the source statistics have been based on extrapolated low-frequency data. We therefore find it important to investigate, also in practice, whether there are source populations or subpopulations that are brighter at high radio frequencies than previously assumed, especially those that exhibit significant variability and can at times contribute significantly to the contamination of the CMB maps. It is vital to understand the high-radio-frequency behaviour of the various BLOs to see how many of them, at least some of the time, can also be detected by Planck.
[length: 1220 | arxiv_id: 0705.0887 | text_id: 5506632 | date: 2007-05-07 | astro: true | hep: false | planck_labels (5): MISSION, MISSION, MISSION, MISSION, MISSION]
There are several problems related to the gravitational properties of the quantum vacuum [CIT]. The first one is the "unbearable lightness of space-time" [CIT]. According to the naive estimate, the natural value of the energy density of the vacuum is $\epsilon \sim c^7/\hbar G^2$. It is constructed from the Planck units which form the Bronstein cube [CIT]: $G$ is Newton's constant, $c$ is the speed of light, and $\hbar$ is Planck's constant. This estimate is far too big compared to the experimental value of the cosmological constant, $\Lambda\sim 10^{-123}c^7/\hbar G^2$. The next problem is: why is the vacuum (anti)gravitating? In other words, why is $\Lambda$ non-zero? It would be easier to accept that $\Lambda$ is exactly zero than 123 orders of magnitude smaller than the naive estimate. Then there is the coincidence problem -- why is the vacuum as heavy as the present (dark) matter? -- and others.
[length: 871 | arxiv_id: 0705.0991 | text_id: 5508264 | date: 2007-05-08 | astro: false | hep: true | planck_labels (2): UNITS, CONSTANT]
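The mismatch of 123 orders of magnitude quoted in the excerpt above is easy to reproduce. A Python sketch with approximate SI values; the observed dark-energy density of roughly $6\times 10^{-10}$ J/m$^3$ is a standard figure assumed here, not taken from the excerpt:

```python
import math

# Compare the "natural" vacuum energy density c^7 / (hbar G^2) with the
# observed dark-energy density. All numeric inputs are standard values.
hbar = 1.054571817e-34  # J s
G = 6.67430e-11         # m^3 kg^-1 s^-2
c = 2.99792458e8        # m/s

rho_planck = c**7 / (hbar * G**2)  # J/m^3, Planck energy density
rho_obs = 6e-10                    # J/m^3, observed dark energy density
ratio_log10 = math.log10(rho_obs / rho_planck)
print(f"log10(rho_obs / rho_planck) ~ {ratio_log10:.1f}")  # ~ -123
```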
As is well known, the CKM matrix is a unitary matrix in which all phases except one are reabsorbed into a redefinition of the quark wave functions. Therefore, its nine entries are parametrized by nine real coefficients and one phase, responsible for CP violation. In our framework, the analysis of the spectrum has been carried out by classifying the degrees of freedom according to their charge. This means that what we obtain are the "current eigenstates" (section REF). Subsequently, we have considered a "perturbation" of this configuration, obtained by switching on so-far-neglected degrees of freedom, in order to investigate their masses (section REF). In section REF we have related mass ratios to ratios of sub-volumes of the phase space, which is divided into several sectors by the breaking of the initial symmetry. Mass ratios are then related to the couplings of the broken symmetries. As we anticipated, there are two kinds of breaking: a "strong breaking", in which the would-be gauge bosons acquire a mass above the Planck scale, and a "soft breaking", in which the gauge bosons acquire a mass below the Planck scale. Only in this second case does the transition appear as an ordinary decay, mediated by a propagating massive boson. Otherwise, the boson of the broken symmetry acts somewhat like an external field: we do not see any boson propagating, and we interpret the phenomenon as a "family mixing". The off-diagonal entries of the CKM matrix precisely collect the effect of this type of "non-field-theory decay": they account for transitions from one generation to another that are not mediated by gauge bosons, as ordinary decays are.
[length: 1689 | arxiv_id: 0705.1130 | text_id: 5509503 | date: 2007-05-08 | astro: false | hep: true | planck_labels (2): UNITS, UNITS]
While the cosmic variance-limited simulated data are useful for determining how well the reionization history could possibly be constrained by large-scale $E$-mode polarization measurements, it is also interesting to ask how well we can do with future experiments that fall somewhat short of the idealized case that we have considered so far. In particular, the upcoming Planck satellite is expected to improve our knowledge of the large-scale $E$-mode spectrum substantially [CIT]; what does this imply for constraints on $x_e(z)$? To estimate what might be possible with Planck data, we assume that after subtracting foregrounds a single foreground-free frequency channel remains for constraining the low-$\ell$ $E$-mode polarization. We take this to be the 143 GHz channel with a white noise power level of $w_P^{-1/2}=81\,\mu$K arcmin and beam size $\theta_{\rm FWHM}=7.1'$, and we assume that the sky coverage is $f_{\rm sky}=0.8$ after cutting out the Galactic plane [CIT]. We compute the likelihood of Monte Carlo samples using the routines provided in CosmoMC and analyze parameter chains with the principal component method as described for *WMAP* and cosmic variance-limited data.
[length: 1184 | arxiv_id: 0705.1132 | text_id: 5509654 | date: 2007-05-08 | astro: true | hep: false | planck_labels (2): MISSION, MISSION]
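The noise description in the excerpt above (81 $\mu$K arcmin white noise, $7.1'$ beam) corresponds to the conventional white-noise-plus-Gaussian-beam model, $N_\ell = w_P^{-1}\exp[\ell(\ell+1)\theta_{\rm FWHM}^2/(8\ln 2)]$. A Python sketch of that noise spectrum; this is the standard textbook form, assumed here, and does not reproduce the paper's actual CosmoMC pipeline:

```python
import math

# Beam-deconvolved instrumental noise spectrum for the quoted Planck
# 143 GHz polarization channel: 81 uK-arcmin white noise, 7.1' beam.
arcmin = math.pi / (180 * 60)   # radians per arcminute
w_inv = (81.0 * arcmin) ** 2    # uK^2 sr, noise power w_P^{-1}
theta = 7.1 * arcmin            # beam FWHM in radians

def N_ell(ell):
    """Noise spectrum in uK^2; blows up beyond the beam scale."""
    return w_inv * math.exp(ell * (ell + 1) * theta**2 / (8 * math.log(2)))

print(N_ell(10), N_ell(1000))
```

At low $\ell$ (the reionization regime discussed in the excerpt) the beam factor is negligible and the noise is just the white-noise floor $w_P^{-1}$.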
The largest correction is at back scattering ($\theta = \pi$). However, even for linear momenta near the GZK cutoff ($p\approx 10^{11}$ GeV), the corrections are of the order of $10^{-16}$ ($l\approx 5\times 10^{-20}$ GeV$^{-1}$, corresponding to the Planck length), beyond any hope of being measured in the near future. At momenta in the TeV range, the situation is even more hopeless: the corrections would be of the order of $10^{-31} - 10^{-32}$.
[length: 436 | arxiv_id: 0705.1233 | text_id: 5510994 | date: 2007-05-09 | astro: false | hep: true | planck_labels (1): UNITS]
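The two quoted orders of magnitude are mutually consistent if the leading correction scales quadratically in $l\,p$: dropping eight orders of magnitude in $p$ (from the GZK scale down to a TeV) drops the correction by about sixteen. A small Python check of this scaling; the quadratic form is inferred from the quoted numbers, not stated explicitly in the excerpt:

```python
# Consistency check: with corrections ~ (l p)^2, going from p ~ 1e11 GeV
# (correction ~ 1e-16) to p ~ 1e3 GeV should suppress the correction by
# a factor of 1e-16, matching the quoted 1e-31..1e-32 range in order of
# magnitude. l ~ 5e-20 GeV^-1 is the Planck length in natural units.
l = 5e-20
p_gzk, p_tev = 1e11, 1e3
suppression = (l * p_tev) ** 2 / (l * p_gzk) ** 2
print(f"suppression going to TeV momenta: {suppression:.0e}")  # 1e-16
```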
The physical reason for this is that the dS constructions of all three classes satisfy a condition on the area of the potential barrier which separates the dS minimum from the Minkowski vacuum at infinity. This condition states, according to REF(#gravcrit){reference-type="eqref" reference="gravcrit"}, that the barrier area $O$ must be much larger than the geometric mean $\sqrt{V_0V_1}$ of the potential values of the dS minimum $V_0$ and the barrier $V_1$, that is FORMULA Since realistic SUSY breaking requires $V_1\gtrsim(1\,{\rm TeV})^4\sim 10^{-60}M_P^4$ and realistic cosmology demands $V_0\sim 10^{-120}M_P^4$, we need FORMULA in Planck units to satisfy the above area condition.
[length: 728 | arxiv_id: 0705.1557 | text_id: 5514411 | date: 2007-05-10 | astro: true | hep: true | planck_labels (1): UNITS]
In the temperature range we explore, the fiber time constants present a minimum around $100\,$K and rise slowly up to $300\,$K. The higher time constants measured below $50\,$K are responsible for long decay times at the end of the pulse. During the Planck HFI calibration [CIT], this drawback was corrected by adding a small permanent current on the fiber in addition to the pulse. This permanent current maintains most of the fiber at temperatures where the time constant remains small, and we recover a fiber decay time in the same range as the rise time.
[length: 592 | arxiv_id: 0707.4564 | text_id: 5665243 | date: 2007-07-31 | astro: true | hep: false | planck_labels (1): MISSION]
More recently, a new dark energy model, named agegraphic dark energy, has been proposed [CIT], which takes into account the uncertainty relation of quantum mechanics together with the gravitational effect of general relativity. One has the so-called Károlyházy relation $\delta t=\beta t_{p}^{2/3}t^{1/3}$ [CIT], and the energy density of spacetime fluctuations [CIT] FORMULA where $\beta$ is a numerical factor of order one and $t_{p}$ is the Planck time. The agegraphic dark energy model assumes that the observed dark energy comes from the spacetime and matter field fluctuations in the universe. The dark energy has the form (REF) and $t$ is identified with the age $T$ of the universe [CIT] FORMULA where $M_{pl}$ is the reduced Planck mass and the constant $n^{2}$ has been introduced to represent some unknown theoretical uncertainties. In both the radiation-dominated and matter-dominated epochs, the energy density of the agegraphic dark energy scales as $\rho_q\sim t^{-2}$, tracking the dominant energy component. Moreover, the model can also produce the late-time acceleration. For further developments of the model, see [CIT].
[length: 1139 | arxiv_id: 0708.1214 | text_id: 5680882 | date: 2007-08-09 | astro: true | hep: true | planck_labels (2): UNITS, UNITS]
Hawking's determination of the temperature, together with the first law (REF), fixed the coefficient in Bekenstein's formula for the black hole entropy: FORMULA This is an enormous amount of entropy. A solar mass black hole has $S_{BH} \sim 10^{77} k$. This is much greater than the entropy of the matter that collapsed to form it: thermal radiation has the highest entropy of ordinary matter, but a ball of thermal radiation has $M \sim T^4 R^3$ and $S \sim T^3 R^3$. When it forms a black hole, $R \sim M$, so $T \sim M^{-1/2}$ and hence $S \sim M^{3/2}$. On the other hand, $S_{BH} \sim M^2$, so $S_{BH}$ grows much faster with $M$ than the entropy of a ball of thermal radiation of the same size. Since we have suppressed all physical constants, the two entropies are equal only when $M$ is of order the Planck mass ($\sim 10^{-5}$ g). We will continue to set $c=k=\hbar=1$ in the following.
[length: 889 | arxiv_id: 0708.3680 | text_id: 5709476 | date: 2007-08-27 | astro: true | hep: false | planck_labels (1): UNITS]
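The quoted $S_{BH}\sim 10^{77}\,k$ for a solar-mass black hole can be verified from the Bekenstein-Hawking formula, which in SI quantities reads $S_{BH}=4\pi G M^2/(\hbar c)$ in units of $k$. A Python check with standard constant values (assumed here, not from the excerpt):

```python
import math

# S_BH = A / (4 l_P^2) = 4 pi G M^2 / (hbar c), in units of Boltzmann's
# constant k, evaluated for one solar mass.
hbar = 1.054571817e-34  # J s
G = 6.67430e-11         # m^3 kg^-1 s^-2
c = 2.99792458e8        # m/s
M_sun = 1.989e30        # kg

S_BH = 4 * math.pi * G * M_sun**2 / (hbar * c)  # in units of k
print(f"S_BH ~ 1e{math.log10(S_BH):.0f} k")
```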
We have studied here the possibility of using the cross-correlation between CMB and galaxy density maps as a tool for constraining the neutrino mass. On the one hand, massive neutrinos reduce the cross-correlation spectrum because their free-streaming slows down structure formation; on the other hand, they enhance it because of the behavior of the linear growth in the presence of massive neutrinos. Using both analytic approximations and numerical computations, we showed that in the observable range of scales and redshifts the first effect dominates, but the second one is not negligible. Hence the cross-correlation between CMB and LSS maps could bring some independent information on neutrino masses. We performed an error forecast analysis by fitting mock data inspired by the Planck satellite, the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For Planck and LSST, the inclusion of the cross-correlation data increases the sensitivity to $m_{\nu}$ by 38%, to $w$ by 83% and to $\Omega_{dm} h^2$ by 30% with respect to the CMB data alone. With the fiducial model employed in this analysis (based on eight free parameters), the standard deviation for the neutrino mass is 0.38 eV for Planck alone and 0.27 eV for Planck plus cross-correlation data. This is far from being as spectacular as the sensitivity expected from measurements of the auto-correlation power spectrum of future galaxy/cluster redshift surveys or cosmic shear experiments, for which the predicted standard deviation is closer to the level of 0.02 eV, leading to a 2$\sigma$ detection even in the case of the minimal mass scenario allowed by current data on neutrino oscillations (see [CIT] for a review). However, the method proposed here is independent and affected by different systematics. It therefore remains potentially interesting, but only if the neutrino mass is not much smaller than $m_{\nu} \sim 0.2$ eV.
[length: 1908 | arxiv_id: 0710.5525 | text_id: 5840880 | date: 2007-10-30 | astro: true | hep: true | planck_labels (4): MISSION, MISSION, MISSION, MISSION]
For cosmology one is interested not only in the homogeneous limit, but also in the perturbed system. The mode equation which determines the spectrum of scalar perturbations is also known when the kinetic energy is non-canonical [CIT] and is of course more complicated than in the canonical case. However, when the speed of sound is constant it simplifies considerably, so the study of perturbations in the class of models with constant $\gamma$ is possible at the same level of approximation as required in the case of $\gamma=1$. In particular, one can write down "$\gamma$-deformed" models corresponding to known cases where the spectrum of scalar perturbations is known analytically. The examples discussed are the constant potential (i.e. de Sitter space), the exponential potential[^2] (leading to power law inflation) and the model introduced by Easther [CIT]. These examples introduce $\gamma$ as a parameter in addition to the parameters present in the undeformed model. The deviation of $\gamma$ from unity leads to non-Gaussianity of the perturbation spectrum, and the observational consequences of the deformation can be understood in terms of the commonly used observables $r$, $n_S$ and $f_{NL}$. In the example of a constant potential, the deformation parameter $\gamma$ turns out to be very strongly constrained by the limits on the index of scalar perturbations. In the case of an exponential potential the situation is more interesting: the observables $r$, $n_S$ and $f_{NL}$ satisfy a relation (described in section [6]), which could be tested observationally by the Planck satellite experiment, launching this year.
[length: 1624 | arxiv_id: 0711.4326 | text_id: 5897753 | date: 2007-11-27 | astro: true | hep: false | planck_labels (1): MISSION]
There is another aspect of the problem that needs to be discussed for case A). That standard 4D (Newtonian) gravity is reproduced on the thick (Planck) brane is clear from the fact that there is a mass gap of energy $m\sim a\sim M_{pl}$ between the stable (ground state) graviton $\Psi_0$ and the normalizable massive graviton $\Psi_1$, which is also bound to the Planck brane. The continuous modes have masses bigger than that of $\Psi_1$ and, since they are delocalized, the corresponding amplitudes are suppressed at the origin with respect to the amplitudes of $\Psi_0$ and $\Psi_1$. However, since the probe brane where the SM particles live is located away from the origin, one could think that the massive continuous modes could play an important role in modifying gravitational interactions at the TeV brane, as long as the ratio of the corresponding amplitudes to $\Psi_0$ (and $\Psi_1$) grows as one recedes from $z=0$ in the extra space (an effect related to the localization of $\Psi_0$ and $\Psi_1$ on the thick brane and to the delocalization of the KK continuum). This is true (even if we need only a separation $z_0\approx 40/M_{pl}$ from the Planck brane to generate the correct hierarchy), but recall that the masses of the corresponding continuous modes are of the order of the Planck mass or larger. Consequently, a 4D observer placed at the TeV brane would not be able to detect those modes. As the probe brane position recedes from the origin, the energies accessible to probe brane observers decrease, as does the possibility of detecting massive modes of the KK continuum. The consequence is that, as long as correct Newtonian gravity is achieved at the Planck brane, gravity on a probe brane located at any position in the extra space will be Newtonian as well, confirming the result obtained in [CIT].
[length: 1829 | arxiv_id: 0712.3098 | text_id: 5947161 | date: 2007-12-19 | astro: false | hep: true | planck_labels (5): BRANE, BRANE, BRANE, UNITS, BRANE]
Funding for the Sloan Digital Sky Survey (SDSS) and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, The University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
[length: 1356 | arxiv_id: 0801.1232 | text_id: 5978036 | date: 2008-01-08 | astro: true | hep: false | planck_labels (3): MPS, MPS, MPS]
The important, less trivial input is the form of the coupling of $\phi$ to the curvature $R$. We take it in the form FORMULA though other similar types of the potential $V(\phi)$ are possible. Here $m^2_{Pl}$ is the value of the effective Planck mass at the stationary point of the equations of motion, $\phi =\phi_s$. As we shall see shortly, the universe has not yet reached the state where $\phi = \phi_s$. Hence the value of the Planck mass now, $M_{Pl}$, would be different from that at the stationary point, FORMULA where $\phi_0$ is the value of $\phi$ today.
[length: 561 | arxiv_id: 0801.3090 | text_id: 6000566 | date: 2008-01-20 | astro: true | hep: true | planck_labels (2): UNITS, UNITS]
We make mild assumptions about the compactification manifold $\mathcal{M}$. The manifold $\mathcal{M}$ is allowed to have boundaries, but only those that arise from orbifolding. We thus assume that $\mathcal{M}$ is compact and closed, or that $\mathcal{M}= \mathcal{M}' / G$ where $\mathcal{M}'$ is closed and compact and $G$ is a group which acts on $\mathcal{M}'$. In the latter case, we take all calculations to be carried out on the covering space $\mathcal{M}'$. The warp factor must be sufficiently well-behaved so that integration by parts is possible and that the four-dimensional Planck mass is finite. Distributional stress-energy sources such as branes are covered here, as are certain types of singularities in the curvature or warp factor. Our arguments here rely on averaging quantities over $\mathcal{M}$, so we require the curvature and warp term to have finite integrals with the weighted measures introduced in Section [2.2]. When the four dimensional universe is not exactly de Sitter, we make an additional assumption about the time-evolution of $\mathcal{M}$. The assumption amounts to excluding volume-preserving transformations of a certain type. When the moduli space approximation applies, we show the restriction we make is merely a gauge choice. If no such restriction is made, the four-dimensional effective theory has apparent ghost modes and so the four-dimensional interpretation is breaking down. For the scalar sector to have a positive-definite kinetic term some restriction on the allowed fluctuations of $\mathcal{M}$ is necessary. It is entirely possible that there are nonetheless consistent time-dependent reductions without making quite this restriction. For the no-go theorems proven here to be inapplicable to a specific time-dependent scenario, one must show that the corresponding restriction is not equivalent to ours under choices of gauge or coordinate transformations.
[length: 1916 | arxiv_id: 0802.3214 | text_id: 6060412 | date: 2008-02-21 | astro: true | hep: true | planck_labels (1): UNITS]
We have also discussed a future constraint on $N_\nu$ using the expected data from the Planck experiment. It was shown that the attainable constraint on $N_\nu$ from Planck is $2.68 \le N_\nu \le 3.44$ at 95% C.L. when the BBN relation is adopted for $Y_p$, which is the most stringent among the cases considered. Since the Planck experiment can probe the CMB down to smaller scales than WMAP, Planck alone can give a stringent constraint on $N_\nu$.
[length: 434 | arxiv_id: 0803.0889 | text_id: 6086895 | date: 2008-03-06 | astro: true | hep: false | planck_labels (4): MISSION, MISSION, MISSION, MISSION]
The initial state of the Universe has a very low entropy. In fact, from the point of view of the Wheeler--DeWitt equation, the entropy should be zero as the wavefunction of the Universe is unique. The present entropy of the observed Universe can be estimated by the degrees of freedom associated holographically to the causal horizon: FORMULA where $R_H$ is the Hubble radius and $\ell_P$ the Planck length. The number of microstates is then given by Boltzmann's formula $\Omega=e^S\simeq e^{10^{123}}$, and the probability associated with the Big Bang is FORMULA The Big Bang therefore appears to be an exceptionally special point in phase space, as finely tuned as the cosmological constant [CIT].
[length: 699 | arxiv_id: 0804.3598 | text_id: 6166084 | date: 2008-04-23 | astro: false | hep: true | planck_labels (1): UNITS]
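The quoted horizon entropy can be checked with $S\sim(R_H/\ell_P)^2$, as in the excerpt above. A Python sketch with approximate values for $H_0$ and $\ell_P$ (assumed here, not from the excerpt); the result lands within an order of magnitude of the quoted $10^{123}$ once $O(1)$ factors are included:

```python
import math

# Holographic estimate S ~ (R_H / l_P)^2 with R_H = c / H_0.
# All numeric inputs are standard approximate values.
c = 2.99792458e8   # m/s
H_0 = 2.2e-18      # s^-1 (~68 km/s/Mpc)
l_P = 1.616e-35    # m, Planck length

R_H = c / H_0
S = (R_H / l_P) ** 2  # in units of k, up to O(1) factors
print(f"S ~ 1e{math.log10(S):.0f}")
```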
Here some arguments are given on cosmological implications of the Károlyházy uncertainty relation (REF) and the energy density (REF). First, the Károlyházy relation (REF) obeys the holographic black hole entropy bound [CIT]: the relation (REF) gives a relation between $\delta l$ (UV cutoff) and the length scale $l$ (IR cutoff) of a system, $\delta l \sim l_{p}^{2/3}l^{1/3}$; the system has entropy FORMULA which is less than the black hole entropy with horizon radius $l$. Therefore, the Károlyházy uncertainty relation (REF) is a reflection of the interplay between the UV scale and the IR scale in effective quantum field theory [CIT]. The microscopic energy scales of quantum mechanics and the macroscopic properties of our present universe are intimately connected. Second, the energy density (REF) is dynamically tied to the large scales of the universe, thus violating naive decoupling between the UV scale and the IR scale. The appearance of both the Planck length and the largest observable scale in the energy density (REF) seems to suggest that the dark energy is due to an entanglement between ultraviolet and infrared physics [CIT]. Therefore, we expect that the interplay between the UV scale and the IR scale can give us some clues about why quantum gravity effects are still relevant today at large distance scales. Some of these expectations are borne out by explicit constructions of effective field theories from string theory [CIT].
[length: 1494 | arxiv_id: 0805.0546 | text_id: 6186431 | date: 2008-05-04 | astro: true | hep: true | planck_labels (1): UNITS]
Before proceeding further, we should stress two important points. Firstly, the presence of the GB term removes the big bang singularity in this setup, and the universe starts with an initial finite density. The Gauss-Bonnet term is essentially a string-inspired effect in the bulk, whose combination with the pure DGP scenario leads to a finite big bang proposal on the brane. A consequence of string-inspired field theories is the existence of a minimal observable length of the order of the Planck length [44-46]. One cannot probe distances smaller than this fundamental length; in fact, a string cannot live on a scale smaller than its length. This feature leads us to generalize the standard Heisenberg uncertainty relation to incorporate this Planck-scale effect [47,48]. The existence of this minimal observable length essentially removes the spacetime singularity and acts as a UV cutoff of the corresponding field theory (see for instance [49], which discusses inflation with a minimum length cutoff; see also [50]). So, in principle, the existence of a finite density big bang is supported at least from this viewpoint [51]; see also [52]. Secondly, the non-minimal coupling of the scalar field with induced gravity on the brane controls the value of the initial density. This is not the only importance of the non-minimal coupling of the scalar field and induced gravity: in fact, non-minimal coupling provides a mechanism for generating spontaneous symmetry breaking at the Planck scale on the brane [53]. In this respect, and based on the arguments presented in the introduction on the importance of the non-minimal coupling, the non-minimal coupling of the scalar field and induced gravity on the brane is itself a high energy correction of the theory, and it is natural to expect that this effect couples with stringy effects at the Planck scale. In fact, in this setup we encounter a smoother behavior due to the Gauss-Bonnet term (a finite density big bang) and the late-time effects of the non-minimally coupled scalar field component. These effects together provide a more reliable cosmological scenario.
[length: 2055 | arxiv_id: 0805.1537 | text_id: 6198489 | date: 2008-05-11 | astro: false | hep: true | planck_labels (4): UNITS, UNITS, UNITS, UNITS]
In general, the amount of fine-tuning in vector dark energy models is similar to that in scalar field models. As we have shown, there exist scaling attractors, so the initial conditions for the fields and their velocities do not matter. One is then left to explain the energy scales involved. Just as with scalar fields, by redefinitions and zero-point shiftings one may choose the numbers that go into the Lagrangian, but the field turns out to be extremely light. As an example, consider the exponential potential, $V(A^2)=V_0e^{\kappa\lambda(A^2-A^2_0)}$. If the potential energy is a significant contribution to the expansion rate today, the Friedmann equation tells us that the potential is of the order $V \sim H^2_0/\kappa$. However, the potential scale $V_0$ can be set to whatever seems natural by a suitable choice of the parameter $A^2_0$. The mass of the associated particle is $m_A \sim \sqrt{V'} \sim 10^{-33}$ eV if the slope $\lambda$ is roughly of order one. For the power-law potential, $V(A^2) = V_0(\kappa A^2)^n$, one finds the exact relation FORMULA where $V$ is the value of the potential at a given time. If the time is the present day and the scale of the potential is of order the Planck mass, then the mass of the field is the Planck mass suppressed by a factor $10^{-50(1-1/n)}$, which for large inverse powers can be quite tiny. These considerations hold even if the potential vanishes [CIT], since one may interpret the mass as effectively arising from the nonminimal coupling. For vector inflatons, one may find TeV-scale masses and potentials.
1,601
0805.4229
6,232,114
2,008
5
27
true
true
2
UNITS, UNITS
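The suppression factor quoted in the passage above can be evaluated directly. This is a minimal numeric sketch of the stated relation, mass = Planck mass times $10^{-50(1-1/n)}$; the Planck-mass value in eV is an assumed round number, not taken from the paper.

```python
# Sketch (assumed numbers, not the paper's elided exact formula): for the
# power-law potential V(A^2) = V_0 (kappa A^2)^n, the text states the field
# mass today is the Planck mass suppressed by a factor 10^(-50(1-1/n)).
M_P_eV = 1.2e28  # Planck mass in eV (~1.22e19 GeV); assumed round value

def suppressed_mass_eV(n):
    """Planck mass times the quoted suppression factor 10^(-50(1-1/n))."""
    return M_P_eV * 10.0 ** (-50.0 * (1.0 - 1.0 / n))

for n in (1, 2, 4, 100):
    print(n, suppressed_mass_eV(n))
```

For $n=1$ there is no suppression, while for large $n$ the mass approaches $M_P \times 10^{-50} \sim 10^{-22}$ eV, consistent with the text's remark that the mass "can be quite tiny."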
We acknowledge support from the Max-Planck Society and the Alexander von Humboldt Foundation through the Max-Planck-Forschungspreis 2005. Michael A. Strauss acknowledges support from the National Science Foundation grant AST-0707266. X. Fan acknowledges support from a David and Lucile Packard Fellowship in Science and Engineering. We thank James J. Condon for comments and suggestions. IRAM is funded by the Centre National de la Recherche Scientifique (France), the Max-Planck Gesellschaft (Germany), and the Instituto Geografico Nacional (Spain). The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated by Associated Universities, Inc.
677
0806.3022
6,276,108
2,008
6
18
true
false
3
MPS, MPS, MPS
Let us consider a mode $k$ as $\eta \rightarrow - \infty$. The physical wavelength of the mode is then smaller than the Planck scale, and it is natural to set a physical cutoff $p_c$ for the momentum. When $p > p_c$, one has to consider trans-Planckian effects. The influence of new degrees of freedom and new physical laws could be effectively encoded in a change of the dispersion relation [CIT], in space-time noncommutativity [CIT], or in some other way. When $p<p_c$, the solution of the Klein-Gordon equation is reliable, and the scalar field is a linear combination of $\delta \phi(\eta, \textbf{x})_k^{\pm}$ in (REF). A new set of modes for the trans-Planckian effect is expressed as a combination of the Euclidean modes by a Bogoliubov transformation [CIT] (Mottola-Allen transform), FORMULA where $\alpha$ is a complex number with $\mathrm{Re}\,\alpha<0$ denoting the rotation of field space, and $N_\alpha$ is derived from the Wronskian condition or the rule of the Bogoliubov transformation. Since FORMULA the equation for $\tilde a_n$ can be expressed as FORMULA Thus the new vacuum, called the $\alpha$-vacuum, is defined as follows: FORMULA The $\alpha$-vacuum is still de Sitter invariant, just like the Euclidean one. The Bogoliubov transform can be implemented by a unitary transform [CIT], FORMULA where FORMULA The relation between the two vacua is FORMULA
1,367
0806.4109
6,288,753
2,008
6
25
true
true
1
UNITS
The fine-tuning parameter $\epsilon$ has a magnitude that is related to the number of e-folds according to FORMULA To obtain, for example, an inflationary phase with $N=50$-$70$ e-folds (a characteristic range of values for current parameter estimates), the equation of state has to be very finely tuned, with $\epsilon=10^{-87}$-$10^{-122}$. It is worth noting that this is of a similar order of magnitude to the factor $10^{-120}$ relating the cosmological constant predicted by summing the zero-point energies of the Standard Model fields up to the Planck cutoff to that inferred from cosmological observations, although this is almost certainly just a numerical coincidence.
705
0807.2523
6,331,816
2,008
7
16
true
false
1
UNITS
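The factor $10^{-120}$ quoted above can be checked by a back-of-envelope ratio of the Planck-cutoff vacuum energy density to the observed dark-energy density. Both input numbers below are standard assumed values for illustration, not figures from this passage.

```python
import math

# Rough check of the 10^(-120) coincidence: ratio of the observed dark-energy
# density (~1e-10 eV^4) to the zero-point energy density cut off at the
# (reduced) Planck scale, ~M_P^4. Assumed round values, for illustration only.
M_P_red_eV = 2.4e27           # reduced Planck mass in eV
rho_planck = M_P_red_eV ** 4  # Planck-cutoff vacuum energy density, eV^4
rho_obs = 1e-10               # observed dark-energy density, eV^4

ratio = rho_obs / rho_planck
print(math.log10(ratio))      # close to -120
```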
Let me describe here a typical scenario for baryon number generation, taking the $SU(5)$ model as an example. Grand unified theories, together with the standard cosmological model based on classical gravity, can in principle describe the early universe at times substantially later than the Planck time $t_P \sim 10^{-44}$ sec, corresponding to a temperature $T \sim 10^{19}$ GeV. At earlier times quantum gravitational fluctuations, as well as other interactions unified with gravity, become strong and no reliable computations are possible. As the temperature drops, the universe undergoes a series of phase transitions during which the original symmetry of the model breaks down to the one observed at present, namely confining QCD and electromagnetism. In the simplest $SU(5)$ model these transitions are the following. At $T\sim 10^{16}$ GeV, the 24-plet of Higgs fields develops a non-zero vacuum expectation value and $SU(5)$ breaks to $U(1) \otimes SU(2) \otimes SU(3)$. At $T\sim$ 100 GeV, the standard model breaking occurs and $U(1) \otimes SU(2) \otimes SU(3)$ breaks to $U(1) \otimes SU(3)$. Still later, at $T \sim$ 100 MeV, we have the confining transition of QCD, during which the quarks and gluons become confined inside hadrons. At even later stages of the evolution of the universe, we have further transitions resulting in nucleosynthesis, hydrogen atom recombination, etc. To this simple scenario one must add the phase transitions for inflation and reheating, which I shall ignore here.
1,518
0807.4841
6,359,737
2,008
7
30
false
true
1
UNITS
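The quoted correspondence between the Planck time and $T \sim 10^{19}$ GeV can be reproduced with the standard order-of-magnitude radiation-era relation $t[\mathrm{s}] \sim (1\,\mathrm{MeV}/T)^2$. The $\mathcal{O}(1)$ prefactor is dropped; this relation is an assumption from standard cosmology, not a formula in the passage.

```python
# Order-of-magnitude radiation-era time-temperature relation, t[s] ~ (1 MeV/T)^2,
# with the O(1) prefactor dropped (standard-cosmology assumption, not from this
# text). It reproduces the quoted pairing t_P ~ 1e-44 s  <->  T ~ 1e19 GeV.
def age_seconds(T_MeV):
    return (1.0 / T_MeV) ** 2

T_planck_MeV = 1e19 * 1e3        # 1e19 GeV expressed in MeV
print(age_seconds(T_planck_MeV))  # ~ 1e-44 s
```

The same relation gives $t \sim 1$ s at $T \sim 1$ MeV, the nucleosynthesis epoch, as a sanity check.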
One main difference between the MaVaN Hybrid Scenarios with power-law type potentials (where a "high-scale" seesaw mechanism is in principle possible) and the standard MaVaN scenario is that in the former the present value $A_0$ of the acceleron field can be much larger than the eV scale, even close to the GUT or Planck scale. One can readily see that this might help with respect to the stability issue when the heavy right-handed neutrino fields $N$ are integrated out of the theory. Then, the interaction term between the light neutrino(s) and the acceleron field becomes (for, e.g., $\beta = 1$) FORMULA where $A = A_{0} + \delta A$, with $\delta A$ denoting the quantum fluctuations around the classical value $A_0$ of the field. With $\delta A \ll A_0$, the term in Eq. (REF) takes the form FORMULA where $\bar m_{\nu}^{(0)} = \frac{y^2v^2}{A_{0}}$. In the usual MaVaN scenario the field value $A_0$ is not much larger than $\bar m_{\nu}^{(0)}$. In contrast, as we have discussed in the previous section, in the MaVaN Hybrid Scenarios with power-law potentials the present value of $A_0$ can in principle be as large as the GUT scale (or Planck scale), and the coupling between neutrinos and $A$ can be strongly suppressed (by a factor $\bar m_{\nu}^{(0)}/A_0$).
1,274
0807.4930
6,361,273
2,008
7
31
true
true
2
UNITS, UNITS
It is in fact easy to see how such a term can arise in our setup. In general, the superpotential will contain contributions from sources away from the GUT stack, such as fluxes or additional 7-branes. At sufficiently low energies, we can model this by adding a constant $W_0$ to the superpotential FORMULA While this has no effect on the $M_{Pl}=\infty$ potential, it can play a role when $M_{Pl}$ is large but finite. At energies smaller than $M_{GUT}$ where 4-dimensional SUGRA is reliable, for instance, one can see directly that $W_0$ modifies the SUGRA potential by adding precisely a linear term of the sort (REF) FORMULA In the presence of $W_0$, then, the vacuum at $M=0$ is shifted to FORMULA Though we do not know the value of $W_0$ from first principles, we can obtain a reasonable estimate by following the suggestion of [CIT] and imposing the constraint that $V\sim 0$ at the vacuum. This leads to $|W_0|\sim |F_X| M_{Pl}$ and hence to the estimate FORMULA which we will use throughout the rest of this paper. If we take a strict limit $M_{Pl}\rightarrow\infty$ with $M_{GUT}$ fixed, then we recover $M=0$ as expected. However, $M_{GUT}^2/M_{Pl}$ is in reality around $10^{14}$ GeV, a scale which is small in Planck units but nevertheless gives a sizeable mass to the messenger fields and is sufficiently far from the origin that this vacuum can remain metastable and long-lived when the messengers are included [CIT].
1,482
0808.1571
6,382,601
2,008
8
12
false
true
1
UNITS
Now suppose that we do have such a coupling. We take a general Kähler class of the form FORMULA where $J_{B_2}$ is a class in $H^2(B_2)$, and $J_\infty$ is the Poincaré dual of $B_2$ in $B_3$ (where we use the embedding at infinity here -- this differs from the embedding at the location of the singular locus only by a class in $H^2(B_2)$). We need both $t_1$ and $t_2$ large in Planck units. The small-angle limit, where we can trust the $8d$ gauge theory description, corresponds to $t_1 \gg t_2$. We are interested in the Fayet-Iliopoulos term FORMULA The intersection of $G$ with $J_\infty$ would generically be non-zero, but in the present case we know that $\omega_Y$ is supported at the zero section (see equation (REF)), whereas $J_\infty$ is localized at the infinity section, and hence the intersection vanishes. The Fayet-Iliopoulos parameter then depends only on the intersection of $G$ with $p^*J_{B_2}$, which is proportional to $J_{B_2} \cdot_{B_2} c_1(\zeta)$. In particular, it vanishes if $J_{B_2} \cdot c_1(\zeta)=0$. For more general $U(1)$'s, the Fayet-Iliopoulos parameter can depend on the extension to the global model.
1,146
0808.2223
6,391,143
2,008
8
17
false
true
1
UNITS
The key simplification arising in this comparison is that $p_i$, $\Gamma^{\rm BB}_i$, and $\Gamma_i$, are generically double exponentials. That is, they are of the form $\exp(\pm \exp x)$ with $x\gg 1$. We will use a triple inequality sign for such numbers, for example FORMULA Such numbers obey special laws of arithmetic. For example, for $y$ and $z$ double-exponentially large, $y/z\approx y$ if $y>z$. Moreover, if $y$ is a single exponential and $z$ a double exponential, then $zy\approx z/y\approx z$. A double exponential takes the same value in any conventional system of units, though it can be useful to think in terms of Planck units for definiteness.
662
0808.3770
6,409,967
2,008
8
27
false
true
1
UNITS
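The "special laws of arithmetic" for double exponentials described above can be demonstrated in floating point by working with logarithms. This is a toy illustration, not code from the paper: a double exponential $y = \exp(\exp(x))$ is stored by its logarithm $L_y = \exp(x)$, which still fits in a float for $x \lesssim 700$.

```python
import math

# Toy illustration (not from the paper): represent y = exp(exp(x)) by its
# logarithm L_y = exp(x), which is astronomically large but float-representable.
L_y = math.exp(70.0)   # log y, where y = exp(exp(70))
L_z = math.exp(60.0)   # log z, where z = exp(exp(60)) < y
L_w = 100.0            # log w, where w = exp(100): a mere single exponential

# y/z ~ y: log(y/z) = L_y - L_z differs from L_y only at relative order
# exp(-10) ~ 5e-5, so the outer exponent of y is essentially unchanged.
print((L_y - (L_y - L_z)) / L_y)   # ~ 4.5e-5

# z*w ~ z/w ~ z: adding or subtracting log w = 100 to L_z ~ 1.1e26 is lost
# entirely to double-precision rounding (one ulp of L_z is ~1.7e10).
print(L_z + L_w == L_z)            # True
```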
In order to forecast the capability of instruments in terms of parameter estimation, we commonly use the Fisher approach, as it provides the achievable precision on the parameters as a function of the instrumental characteristics (noise, beam, etc.). However, predicting the precision is often not enough, and one should, in addition, estimate how sensitive the measurements are to any systematic or additional contribution to the signal. The aim of our study is to propose an analytical calculation of the bias introduced on the estimated parameters by the presence of an additional, non-primary signal. We consider the latter to be any signal that is added, or subtracted, to form the observed signal used for parameter estimation. The method presented here is thus general and can be applied not only to astrophysical contaminating signals (e.g. foregrounds) but also to instrumental systematics when they are additive. We develop our method in the framework of cosmological parameter estimation using CMB angular power spectra, and we apply it to the case of future Planck observations in Section [3].
1,110
0809.1364
6,431,848
2,008
9
8
true
false
1
MISSION
We turn on a vev for one of the scalar fields, say $Z^1$: FORMULA where $v$ is real and positive, and ${\bf 1}_{N\times N}$ is the $N\times N$ unit matrix. The vevs of the other scalar fields are set to zero. Basically, $v$ measures the distance from the orbifold fixed point to the $N$ coincident M2-branes.[^3] Note that $Z$ has dimension $1/2$, so the distance is given by $v l_{\rm P}^{3/2}$, where $l_{\rm P}$ is the Planck length of 11d M-theory. The ABJM action describes the low energy limit $l_{\rm P}\rightarrow 0$ of the $N$ M2-branes with the transverse target space ${\bf C}^4/{\bf Z}_k$.
596
0809.2137
6,444,811
2,008
9
13
false
true
1
UNITS
The presence of a radiation field triggers a photo-destruction rate: FORMULA where $n_{A}$ is the number density of species $A$, $i(\nu)$ is the specific intensity of radiation in the environment, and $h$ is Planck's constant. The integral runs from the threshold frequency of ionization, $\nu_{th}$, to infinity.
320
0809.2786
6,450,746
2,008
9
16
true
false
1
CONSTANT
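The elided rate integral above can be sketched numerically. The integrand form $4\pi\, i(\nu)\,\sigma(\nu)/(h\nu)$, the power-law cross-section, and the flat specific intensity below are all illustrative assumptions consistent with common photoionization-rate expressions, not values taken from this text.

```python
import numpy as np

# Sketch of a photo-destruction rate of the kind described in the text,
# assuming the common form  Gamma = \int_{nu_th}^inf 4*pi*i(nu)*sigma(nu)/(h*nu) dnu.
# The cross-section model sigma = sigma0*(nu_th/nu)^3 and the flat intensity
# are hypothetical toy choices.
h = 6.626e-27      # Planck's constant, erg s (cgs)
nu_th = 3.3e15     # hypothetical ionization threshold, Hz (~13.6 eV)
sigma0 = 6.3e-18   # hypothetical cross-section at threshold, cm^2

def gamma_photo(i_of_nu, nu_max=100 * nu_th, n=200_000):
    """Trapezoid-rule estimate of the per-particle photo-destruction rate."""
    nu = np.linspace(nu_th, nu_max, n)
    f = 4.0 * np.pi * i_of_nu(nu) * sigma0 * (nu_th / nu) ** 3 / (h * nu)
    return float(np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(nu)))

rate = gamma_photo(lambda nu: np.full_like(nu, 1e-21))  # flat intensity (toy)
print(rate)   # per-particle rate in s^-1; multiply by n_A for the volume rate
```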
On the theoretical side there are a number of deficiencies in the SM. Some of them may be just aesthetic defects, but some may go deeper. First of all, the SM has a relatively large number, ${\mathcal{O}}(30)$, of free parameters that cannot be determined from theory alone but must be measured experimentally. Although this does not indicate an inconsistency of the theory, it is certainly not in line with the hope that a fundamental theory of everything should have very few, possibly only $1$ or even $0$, free parameters. Moreover, some of the parameters appear to require an enormous degree of fine-tuning or seem unnaturally small. Well known examples are the Higgs mass but also the $\theta$ parameter of QCD (which must be extremely small in order not to conflict with the observed smallness of strong CP violation). Another dissatisfying feature is that gravity is not incorporated into the SM but rather treated as a separate part. This is not just an aesthetic defect but also an expression of the fact that the quantization of gravity is still not (fully) understood. Finally, strictly speaking, the SM will most likely not be valid up to arbitrarily high energy scales. On the one hand this is due to our current inability to properly quantize gravity. But even the non-gravity parts probably encounter problems in the form of Landau poles (places where the coupling becomes infinite): in the QED sector (at a very high scale, much beyond the Planck scale) but probably also in the Higgs sector (where the problem is much more immediate and will occur at scales much below the Planck scale; depending on the Higgs mass, possibly even not much above the electroweak scale).
1,697
0809.3112
6,454,919
2,008
9
18
false
true
2
UNITS, UNITS
CMB anisotropies have played a crucial role in firmly establishing the current "standard model" of cosmology [CIT]. To take advantage of the future high precision data which will be available from planned CMB anisotropy measurements, a survey with specifications similar to the *Planck surveyor* (Planck)[^9] is considered. The Fisher matrix for the CMB temperature and polarization anisotropies is constructed as given in [CIT].
429
0809.4052
6,467,567
2,008
9
24
true
false
2
MISSION, MISSION
The aims of this paper are to obtain the nucleation rate and the radius of vacuum bubbles, and to study the pair creation of black holes in the presence of a bubble wall. The processes of bubble nucleation and black hole production may mimic Wheeler's spacetime foam structure [CIT] in the very early universe, in which spacetime is no longer smooth at the Planck scale. Bubble nucleation [CIT] and Hawking-Moss type transitions [CIT] may occur in the early universe. They may play an important role in selecting out our universe or inflation [CIT]. In addition, the creation of black holes, as a topology changing process, may also play an important role in the spacetime foam structure. Some bubbles may contain black holes. Some of them may be nucleated in the presence of a black hole, as studied by Hiscock [CIT] and Berezin *et al.* [CIT], where the black hole acts as an effective nucleation center for bubble formation. Others may cause pair creation of black holes. On the other hand, the string theory landscape paradigm has many stable and metastable vacua [CIT]. So the tunneling process becomes a remarkable event in this framework, as does eternal inflation [CIT].
1,193
0809.4907
6,478,086
2,008
9
29
false
true
1
UNITS
Thus, the Friedmann equations together with the scalar field equations can be replaced by the system of first-order ODEs FORMULA with Eq. (REF) considered in the form of initial conditions: FORMULA We can make these equations dimensionless: FORMULA That is to say, the time $t$ is measured in Planck times $t_{Pl}$, the scale factor $a$ in Planck lengths $L_{Pl}$, and the potential $U$ in units of $M^2_{Pl}$.
450
0809.5226
6,481,899
2,008
9
30
true
true
2
UNITS, UNITS
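The Planck units used in the nondimensionalization above follow from $\hbar$, $G$ and $c$. A minimal check, using CODATA values for the constants (an external input, not from this passage):

```python
import math

# Planck time t_Pl = sqrt(hbar*G/c^5) and Planck length L_Pl = c*t_Pl,
# computed from CODATA constants (assumed values, not from the text).
hbar = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s

t_Pl = math.sqrt(hbar * G / c**5)   # ~5.39e-44 s
L_Pl = c * t_Pl                     # ~1.62e-35 m
print(t_Pl, L_Pl)
```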
Large scale structure surveys, together with precision measurements of the CMB anisotropies, have already provided us with a wealth of knowledge about the geometry, evolution and composition of the Universe. In the coming decade, cosmologists will carry out even larger galaxy and lensing surveys and produce higher resolution CMB maps. We consider a combination of three experiments in order to assess how well the lensing ratio can be measured in such future surveys. We take the redshift slice of foreground tracers (lenses) to be drawn from an ADEPT-like [^1] large scale structure survey and the background (source) galaxies from an LSST-like [^2] weak lensing experiment. For the CMB lensing measurements, we consider the upcoming Planck mission as well as a prospective polarization-based mission like CMBPOL.
831
0810.3931
6,530,516
2,008
10
21
true
false
1
MISSION
At the Planck scale, only $\theta_{12}$ receives an appreciable deviation [13], while the deviations of $\theta_{13}$ and $\theta_{23}$ are very small, less than $0.3^{\circ}$ [12]. In the new mixing induced by Planck-scale effects, we obtain a new multiplicative factor for the electric dipole moment, which is proportional to the Jarlskog determinant; the Planck-scale mixing changes the Jarlskog determinant [14]. If there exist Majorana neutrinos with masses $m_{i}^{2}\leq m_{W}^{2}$ and $|U_{ei}|^{2}\leq10^{-2}$, as obtained from the charged-current universality constraints [15], we get $d_e\leq10^{-32}\,e$-cm, assuming that the integral function $F$ in Eq. (9) is of order unity.
618
0810.4394
6,537,125
2,008
10
24
false
true
3
UNITS, UNITS, UNITS
Let us first consider how the initial position $S_{\rm ini}$ of the $S$ field is determined. The Kähler potential $K$ generically contains a linear term of $S$, FORMULA where $c$ is a numerical coefficient. Although the effects of this term on the local minimum $|S_{\rm min}| \ll M_P$ may be neglected due to Planck suppression in supergravity, they possibly affect the behavior of $S$ during and after inflation.
414
0810.5413
6,549,590
2,008
10
30
false
true
1
UNITS
The main lesson from this and other works on the trans-Planckian problem is that, without some understanding of the quantum behavior of gravity, the only safe scales $k$ relevant to our observable universe are those that lie within the range FORMULA If the lower end corresponds to modes that have not yet reentered the horizon, then the range available to the scales relevant to what we observe in the cosmic microwave background will be still narrower, or may not even exist at all. These constraints do not apply only to the examples described here, where the inapplicability of the perturbative description as we approach $k\sim k_\star$ is merely clearer, but to any theory of inflation. Inflationary models invariably must make some assumption about how nature behaves over scales shorter than a Planck length. Sometimes these assumptions are guided by what is reasonable, or renormalizable, for a quantum field theory in a curved background. However, in inflation we are speaking of the fluctuations of the background itself, and what might be reasonable there is an altogether different matter.
1,121
0810.5742
6,553,905
2,008
10
31
true
true
1
UNITS
Several theoretical calculations of light propagation and neutrino propagation in LQG [CIT] predict that the usual relation between energy and momentum from special relativity may be modified at Planck scales in the form FORMULA where $\alpha_1$ and $\alpha_2$ are constants of order one. This kind of modification of the dispersion relations admits several alternative interpretations [CIT]. Some of them are: (i) no effect of Planck-scale phenomena can be observed at low energies, so the modification of the dispersion relations has no observable consequences; (ii) Lorentz invariance breaks down and there is a preferred frame at the Planck scale; (iii) the relativity of inertial frames is maintained, but the Planck length or Planck energy becomes an observer-independent quantity. This last possibility is called Deformed Special Relativity (DSR). Experimentally, the effects of modified dispersion relations may be observed through gamma-ray and ultra-high-energy cosmic-ray thresholds [CIT].
1,003
0811.1158
6,568,454
2,008
11
7
false
true
5
UNITS, UNITS, UNITS, UNITS, UNITS
While Planck will perform, in temperature, as well as a CVL experiment up to $\ell \simeq 1500$, greater room for improvement is left for polarization measurements. This improvement may come either if Planck keeps functioning after the 14 months required to complete 2 full sky surveys, or from other ground-based or balloon-borne experiments, like the currently planned SPIDER [CIT]. SPIDER will cover $\sim 50 \%$ of the sky, with a polarization sensitivity higher than Planck's, but with a lower angular resolution. It is then interesting to consider how the combination of Planck temperature measurements with better polarization data will affect cosmological parameter estimation. To assess this point we consider an ideal experiment with temperature information equal to the combination of the $70 - 217$ GHz Planck channels and CVL polarization measurements up to a multipole $\ell_{\rm CV}$ ranging between $\ell = 10$ and $\ell = 800$. Above $\ell_{\rm CV}$, polarization sensitivity is equal to that expected from Planck. Let us also point out that CVL determination of $E$--mode polarization above $\ell$ of a few hundred will likely require the next generation of space CMB missions, like the recently proposed EPIC [CIT]. However, if such a mission does not have high angular resolution, we do not expect a significant improvement over Planck temperature data, which are effectively CVL for $\ell \la 1000$.
1,416
0811.2622
6,587,959
2,008
11
17
true
false
7
MISSION, MISSION, MISSION, MISSION, MISSION, MISSION, MISSION
We find that for Planck the normalisation and spectral index are completely degenerate, as shown in figure REF. Allowing for $n_{\rm T}$ completely disrupts the instrument's capability for measuring $r$, even for moderate values of $r = 0.10$, as Planck does not have sufficient leverage on the $B$--modes to simultaneously constrain 2 tensor mode parameters. Errors on the remaining parameters are similar to those shown in table REF, implying that $n_{\rm T}$ is not significantly degenerate with the other parameters, in particular with $n_{\rm s}$ and $n_{\rm run}$.
570
0811.2622
6,587,987
2,008
11
17
true
false
2
MISSION, MISSION
We consider a theory which combines the Randall-Sundrum and the DGP model and is described by the action [CIT] FORMULA where $M$ and $m$ are the five- and four-dimensional Planck masses, respectively. The two masses are related by an important length scale $\ell=2m^2/M^3$. On short length scales $(r\ll\ell)$ the usual four-dimensional general relativity is recovered, while on large length scales $(r\gg\ell)$ five-dimensional effects play an important role [CIT]. $\mathcal{R}$ denotes the scalar curvature of the bulk metric $g_{ab}$ and $R$ the scalar curvature of the induced brane metric $h_{ab}=g_{ab}-\epsilon n_an_b$, with $n^a$ being the inner unit normal vector field to the brane. $K$ is the trace of the extrinsic curvature of the brane $K_{ab}=h^c{}_{a}\nabla_cn_b$. $\Lambda_5$ denotes the bulk cosmological constant and $\sigma$ the brane tension. As ordinary matter fields are confined to the brane, the Lagrangian density $L$ does not depend on the bulk metric $g_{ab}$, but on the induced metric $h_{ab}$. For a spacelike extra-dimension $\epsilon=1$, whereas $\epsilon=-1$ for a timelike extra-dimension.
1,125
0811.4629
6,611,040
2,008
11
27
true
false
1
UNITS
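The crossover scale $\ell = 2m^2/M^3$ above can be evaluated numerically in natural units, where mass$^{-1}$ plays the role of length. The input values below (4d reduced Planck mass, present Hubble rate, and the demand that the crossover sit at the Hubble radius) are illustrative assumptions, not numbers from this passage.

```python
# Illustrative evaluation of ell = 2 m^2 / M^3 in natural units (GeV^-1 as
# length). Assumed inputs: m = 4d reduced Planck mass, and the requirement
# that the crossover occur at today's Hubble radius, ell = 1/H0.
m = 2.4e18        # 4d reduced Planck mass, GeV
H0 = 1.5e-42      # Hubble rate today, GeV
ell = 1.0 / H0    # crossover at the present Hubble radius, GeV^-1

M = (2 * m**2 / ell) ** (1.0 / 3.0)   # implied 5d Planck mass
print(M)          # ~2e-2 GeV, i.e. tens of MeV
```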
In what follows we consider the effective Lagrangian describing the relevant interactions for the processes we want to study. In the language of the weakly coupled 5D theory, the relevant degrees of freedom correspond to the fermionic zero modes, the SM gauge fields and the first KK vector resonances. At low energies the four-fermion interactions and the interactions with the KK vectors are described by: FORMULA where $G_\mu^{(1)}$ is the first KK gluon, $a=1,\dots,4$ labels the fermion generations, and we have neglected the momentum of the KK vectors compared with their mass. The coupling $g_{s\psi}$ is that between the first KK excitation of the gluon and the zero mode of the fermion $\psi$, and it depends on the degree of compositeness of the fermion, or its localization in the extra dimension in the 5D picture. For a 5D model, $g_{s\psi}$ varies between $\simeq 8.4\,g_s$, corresponding to a composite (or TeV-localized) fermion, and $\simeq -0.2\,g_s$, corresponding to a fundamental (or Planck-localized) fermion [CIT]. The mass of the first KK gluon, $M_1$, depends on the size of the extra dimension $1/M_{IR}$, and is approximately given by $M_1\simeq 2.4\, M_{IR}$. The Higgs mass and the mass of the condensing quark $U_4$ depend on $M_{IR}$. For instance, for $M_{IR}\simeq 1$ TeV one has [CIT] $m_h\simeq 900$ GeV and $m_{U_4}\simeq 700$ GeV. There are also similar interactions involving the KK modes of the electroweak gauge bosons that we have not written explicitly. Their structure is the same as that for the KK gluon, with the couplings normalized with respect to the electroweak couplings $g$ and $g'$ instead of the QCD coupling $g_s$.
1,675
0812.0368
6,617,668
2,008
12
1
false
true
1
MISSION
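The mass relations quoted above are simple enough to encode directly. This sketch only restates the numbers given in the passage ($M_1 \simeq 2.4\,M_{IR}$, and the quoted $m_h$, $m_{U_4}$ at $M_{IR} \simeq 1$ TeV); the latter two are lookup values from the cited work, not formulas.

```python
# Direct encoding of the quoted relation M_1 ~ 2.4 * M_IR for the first
# KK gluon. The benchmark values m_h ~ 900 GeV and m_U4 ~ 700 GeV at
# M_IR ~ 1 TeV are quoted numbers, not computed here.
def kk_gluon_mass(M_IR_GeV):
    return 2.4 * M_IR_GeV

M_IR = 1000.0                  # GeV (1 TeV benchmark from the text)
print(kk_gluon_mass(M_IR))     # 2400.0 GeV
```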
(Fig. REF b). With two reflections arranged in such a way that the s and p polarisation components are successively permuted, the flip of the field vectors provides an achromatic phase shift, as well as a pupil rotation of $\pi$. Suitably applied to a pair of interfering beams, a pair of periscopes yields two lightwaves phase-shifted by $\pi$ (opposed field vectors). A modified constructive interferometer then provides two nulled outputs by suitably mixing the beams. A prototype APS was developed at the Max-Planck-Institut für Astronomie in Heidelberg in collaboration with Kayser-Threde GmbH in Munich and the IOF Fraunhofer Institute for Applied Optics in Jena. [CIT]
668
0812.1901
6,636,661
2,008
12
10
true
false
1
MPS
In the framework of brane-world models the unitarity requirement can be reformulated in a different way. Actions like (REF) or (REF) have to be considered as low-energy effective descriptions of some fundamental unitary quantum theory of gravity, such as string theory. At short distances, say of the order of the Planck length, actions (REF) and (REF) are no longer valid and the infinite series of higher order terms should be taken into account. For instance, in the complete fundamental theory higher curvature terms can contribute in some form protected by topological invariance (as in the Gauss-Bonnet case) such that unitarity is restored.
639
0812.3010
6,650,743
2,008
12
16
false
true
1
UNITS
We have found that with Planck the use of the quadratic estimator vs. lensed power spectra leads to a significant improvement of the constraints on parameters to which lensing is sensitive. To be specific, we find a $39\%$ improvement on the neutrino mass scale and a $26\%$ improvement on $\Omega_\Lambda$. The improvement in the case of PolarBear and CMBpol is however only marginal. To illuminate this trend, in Figure REF we plot the power spectra of the lensing potential and lensing reconstruction noises as well as the total errors. The dotted lines show the lensing reconstruction noises for each experiment. PolarBear has better capability to map the lensing potential in the observed patches on the sky than Planck (although it reconstructs far fewer of these patches and therefore the total error is larger than for Planck). The lower lensing noise feeds into the estimation with the optimal quadratic estimator for reconstruction.
942
0901.0916
6,686,502
2,009
1
7
true
false
3
MISSION, MISSION, MISSION
In the limit of large $K3$, the tensions of the $Q$- and $P$-strings are given by FORMULA rescaled by a factor of the volume of $K3$ in ten-dimensional Planck units, $V_{K3}^{\scriptscriptstyle(P)}= V_{K3}\lambda_2$. Here $V_{K3}$ denotes the volume of $K3$ in string units, and $-\bar\lambda= -\lambda_1 + i\lambda_2$ is the axion-dilaton of the type IIB theory. In particular, the string coupling is given by $g_s = \lambda_2^{-1}$. Similarly, we will denote by $-\bar\tau= -\tau_1 + i \tau_2$ and $R_B^2 \tau_2$ the complex structure and the area of the type IIB torus $T^2_{\scriptscriptstyle(IIB)}$, respectively.
617
0901.1758
6,697,661
2,009
1
13
false
true
1
UNITS
The notation for energy scales is as follows. The mass of the scalar fields which comprise the flat directions will be $m$ or $\tm$, and these will always be of order $m_{\mathrm{susy}}\sim\mathrm{TeV}=1000\,\mathrm{GeV}$, which is the assumed scale of the supersymmetric particle spectrum. The $\mathrm{TeV}$ energy scale (approximately 1000 times the mass of the proton) is heavy by the standards of nuclear and particle physics, but it is relatively light compared to other scales in cosmology such as the Grand Unified scale, $M_{\mathrm{GUT}} \sim 10^{16}\,\mathrm{GeV}$, or the Planck scale, $M_P\sim 10^{19}\,\mathrm{GeV}$. Note the Planck mass is defined by $G_N = \frac{1}{M_P^2}$. This is not to be confused with the reduced Planck mass $M_P^* = M_P/\sqrt{8\pi} \approx 2.4\times 10^{18}\,\mathrm{GeV}$, which is not used in the thesis. An arbitrary heavy mass scale will be written as $M$, and in the results section $M$ is defined specifically as the scale of the VEVs, $|\Phi_i|\sim 10^{-2}M_P$. The ratio of the $\mathrm{TeV}$-scale states to one of these heavy scales is then generically written $\frac{m}{M}$. A quantity is unsuppressed when it is zeroth order in the expansion in $\frac{m}{M}$.
1,202
0901.3164
6,712,676
2,009
1
21
false
true
3
UNITS, UNITS, UNITS
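The notation conventions above can be checked arithmetically: the reduced Planck mass $M_P/\sqrt{8\pi}$ and the generic expansion parameter $m/M$ follow directly from the quoted scales.

```python
import math

# Check of the notation section: reduced Planck mass M_P* = M_P/sqrt(8*pi),
# and the generic small ratio m/M with m ~ 1 TeV and M ~ 1e-2 * M_P,
# using the scales as quoted in the text.
M_P = 1.22e19                       # Planck mass, GeV (from G_N = 1/M_P^2)
M_P_reduced = M_P / math.sqrt(8 * math.pi)
print(M_P_reduced)                  # ~2.4e18 GeV, matching the text

m = 1.0e3                           # susy/TeV scale, GeV
M = 1e-2 * M_P                      # VEV scale |Phi_i| ~ 1e-2 M_P, GeV
print(m / M)                        # ~8e-15: the expansion parameter m/M
```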
Models with a warped extra dimension, first proposed by Randall and Sundrum (RS) [CIT], in a very ambitious way address both these problems by allowing SM fields, except for the Higgs boson, to propagate into the five-dimensional bulk [CIT]. These models provide a geometrical explanation of the hierarchy between the Planck scale and the EW scale and allow for a natural generation of the hierarchies in the fermion spectrum and mixing angles [CIT], while simultaneously suppressing flavor changing neutral current (FCNC) interactions [CIT]. Recently realistic RS models of EW symmetry breaking have been constructed [CIT], and one can even achieve gauge coupling unification [CIT]. The models that we will analyze in the following are based on an enlarged bulk gauge group given by
783
0901.4599
6,729,831
2,009
1
29
false
true
1
UNITS
Here, we shall use the simulation method described in Section REF. We choose the parameters $r$ and $n_t$ as the *unfixed* parameters, and $A_s$ and $n_s$ as the *fixed* parameters. The values of $\ell_{\max}$, $N$ and the input values of the parameters are adopted as follows: FORMULA The background cosmological parameters are adopted as in Eq. (REF). We consider the Planck instrumental noise and the Planck window function, which are given in Eqs. (REF-REF). $N=300$ implies that our simulation results ($\overline{r_p}$, $\Delta r_p$, $\overline{n_t}$, $\Delta n_{t}$) have a $4\%$ statistical error.
605
0902.1848
6,756,880
2,009
2
11
true
true
2
MISSION, MISSION
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
928
0902.2324
6,762,179
2,009
2
13
true
false
2
MPS, MPS
**Acknowledgments**. Exchange with Peter Weisz, Erhard Seiler and Rainer Sommer was most beneficial to this project. Part of the effort was stimulated during a visit to the Max Planck (Werner Heisenberg) Institut in München, whose hospitality and support are gratefully acknowledged. I owe thanks to Willi Rath for helping me with under and to the HU physics compute team for providing a smooth infrastructure. Finally, financial support of the DFG via SFB Transregio 9 is acknowledged.
458
0902.3100
6,770,389
2,009
2
18
false
true
1
PERSON
(The user is able to choose Planck-scale conventions other than the one in equation REF; see Section REF.) The horizon occurs when $\Delta =0$, that is, at a radius given implicitly by FORMULA Here $r^{(d)}_s$, scaling as the $1/(d-2)$ power of its argument, is the Schwarzschild radius of a $(d+1)$-dimensional black hole, *i.e.* the horizon radius of a non-rotating black hole. Equation REF can be rewritten as: FORMULA where FORMULA
448
0902.3577
6,775,452
2,009
2
20
false
true
1
UNITS
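In $(d+1)$ spacetime dimensions the horizon radius of a non-rotating black hole scales as $r_s \propto M^{1/(d-2)}$. A minimal sketch of this scaling (the reference normalization `r0`, `m0` is our assumption, and the overall dimension-dependent coefficient is omitted):

```python
def schwarzschild_radius_scaling(mass, d, r0=1.0, m0=1.0):
    """Horizon radius of a non-rotating black hole in (d+1) spacetime
    dimensions, up to an overall coefficient: r_s ~ M**(1/(d-2)).
    r0 and m0 fix an arbitrary reference point (illustrative assumption)."""
    if d < 3:
        raise ValueError("need d >= 3 spatial dimensions")
    return r0 * (mass / m0) ** (1.0 / (d - 2))

# In d = 3 (ordinary 4D spacetime) r_s grows linearly with mass:
print(schwarzschild_radius_scaling(2.0, 3))  # -> 2.0
# In higher d the growth is much slower:
print(schwarzschild_radius_scaling(2.0, 6))  # -> 2**0.25, about 1.19
```

The flattening of $r_s(M)$ with growing $d$ is why higher-dimensional black holes of a given mass are so much smaller than their 4D counterparts.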
Another line of reasoning, having its roots in seminal papers [CIT], is to some extent parallel to the extra-dimensional picture. One can think of the space-time geometry becoming discontinuous or fractal at small distances. This idea has been explored from many different points of view: conventional wisdom of Planck-scale quantum-gravity fuzziness [CIT], space-time foam models [CIT], spin networks in loop quantum gravity [CIT], dynamical triangulations [CIT], fractal space-time structure in asymptotically safe gravity [CIT], noncommutative geometry phenomenology [CIT], modified commutation relations [CIT], modified dispersion relations in the context of Finslerian geometry [CIT], minimal length phenomenology [CIT], curved momentum space [CIT] and other approaches. We refer the interested reader to [CIT] for a bibliographical review devoted to models of short-distance space-time structure. In most of these approaches one associates the corresponding NP length scale $L$ with the Planck length $L_P$, although nothing prevents one from taking $L$ to be different from $L_P$ (e.g., much larger).
1,113
0903.0565
6,797,858
2,009
3
3
false
true
2
UNITS, UNITS
In order to reproduce the experimental value of the small parameter $r=\Delta m^2_{sun}/\Delta m^2_{atm}$ we need some amount of fine tuning. For instance, the RH neutrino Majorana mass $M$ should be below the cutoff $\Lambda$ (this is reminiscent of the fact that empirically $M \sim M_{GUT}$ rather than $M \sim M_{Planck}$). The neutrino spectrum is mainly of the normal hierarchy type (or moderately degenerate), and the smallest light neutrino mass and the $0\nu \beta \beta$ parameter $|m_{ee}|$ are expected to be larger than about $0.1\,\mathrm{meV}$. The model is compatible with the observed amount of the baryon asymmetry in the Universe, interpreted as an effect of leptogenesis.
676
0903.1940
6,813,118
2,009
3
11
false
true
1
UNITS
We have discussed in [CIT] our personal perspective on this problem, the possibility that GFTs could be the right setting for tackling it, and how. We summarize it here briefly. From the GFT point of view, the crucial issue is whether we expect the continuum approximation (as opposed to large-scale, semi-classical or other a priori distinct approximations) to involve very large numbers of GFT quanta or not[^1]. We opt for a positive answer, as naive reasoning would suggest (one would expect a generic continuum spacetime to be formed by zillions of Planck-size building blocks, rather than a few macroscopic ones). If this is the case, then we are dealing, from the GFT point of view, with a many-particle system whose microscopic theory is given by some fundamental GFT action, and we are interested in its collective dynamics and states in some thermodynamic approximation. This simple thought alone suggests that we look for ideas and techniques from statistical field theory and condensed matter theory, and try to apply, reformulate and re-interpret them in a GFT context. It also immediately suggests that the GFT formalism is the most natural setting for studying this dynamics, even coming from the pure loop quantum gravity perspective or from simplicial quantum gravity. In the first case, in fact, GFTs offer a second-quantized formalism for the same quantum geometric structures, and quantum field theory is indeed what comes naturally in condensed matter when dealing with large collections of particles/atoms. In the second case, GFTs offer an alternative non-perturbative definition of the quantum dynamics of simplicial gravity, re-interpret the usual sum over discrete geometries as a perturbative expansion around the no-spacetime vacuum, and in doing so suggest the possibility of different vacuum states and of a different reformulation of the same dynamics that is better suited for studying (combinatorially) complicated simplicial geometries.
See [CIT] for more details.
2,008
0903.3970
6,837,850
2,009
3
23
false
true
1
UNITS
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
1,311
0903.5057
6,850,456
2,009
3
29
true
false
3
MPS, MPS, MPS
What do we expect at higher energies? If we extrapolate what we *know* in a straightforward way, we find that the three gauge couplings have log corrections that point towards a grand unification scale at $M_X\approx 10^{16}$ GeV. Gravity is different, it grows quadratically with the energy and becomes of order one at the Planck scale, $M_P\approx 10^{19}$ GeV. Below $M_P$ one needs a consistent framework for the four interactions, and string theory is the only available candidate. The LHC is going to explore energies of up to 1 TeV. It could find, for example, supersymmetry, a discovery for the next decades that would provide consistency to the whole picture. But such discovery would leave us still very far from the fundamental scale. String theory and quantum gravity are in this framework *non-reachable*, almost non-physical.
839
0904.0921
6,866,811
2,009
4
6
false
true
1
UNITS
The existence of a fundamental length implies that processes involving energies higher than the Planck energy can possibly be suppressed, thereby improving the ultraviolet behavior of the theory. In particular, one hopes that, in a complete theory, gravity would provide an effective cutoff at the Planck scale [CIT]. Some very general considerations based on the principle of equivalence and the uncertainty principle seem to strongly indicate that it may not be possible to operationally define spacetime events beyond an accuracy of the order of $L_{_{\rm P}}$. Therefore, one may consider $L_{_{\rm P}}$ as the 'zero point length' of spacetime intervals [CIT]. Specifically, if $\sigma(x,x'|g_{\mu\nu})$ denotes the geodesic distance between the spacetime points $x$ and $x'$ in the background metric $g_{\mu\nu}$, then one can expect that FORMULA where $h_{\mu\nu}$ represents *all* possible quantum fluctuations about the background metric, and the angular brackets represent a suitable path integral average over these fluctuations. Such behavior can then be expected to render the coincidence limit of the propagators finite.
1,138
0904.3217
6,892,651
2,009
4
21
false
true
2
UNITS, UNITS
Fig. REF shows that the modification of $C_{\ell}^{BB}$ by the relativistic free-streaming gas is noticeable only at $\ell>200$. Since the amplitude of $C_{\ell}^{BB}$ is very small in this range, only very sensitive CMB experiments can be expected to detect this modification. Fig. REF also shows that the Planck mission is sensitive only to the reionization peak of $C_{\ell}^{BB}$, i.e. $\ell<10$, so it is not expected to be able to constrain the relativistic free-streaming gas in the Universe. However, for the CMBPol experiment, the signal $C_{\ell}^{BB}$ is larger than $\Delta D_{\ell}^{BB}$ when $\ell<300$, and a detection of this modification due to the relativistic free-streaming gas becomes possible. By solving Eq. (REF), we obtain $\Delta f=0.046$ for the model with $r=0.1$, and this uncertainty is reduced to $\Delta f=0.008$ for the ideal experiment.
885
0905.3223
6,949,600
2,009
5
20
true
true
1
MISSION
In this appendix, we shall justify, at a hand-waving level, that the extrapolation $\mu \to \mu_*$ is reliable. The first question is that of the graviton's contribution to the renormalization of the Planck mass. It is possible to calculate the contribution of the graviton to the running of the Planck mass using the effective theory of general relativity developed by Donoghue [CIT]. This beautiful and difficult calculation, done by Donoghue and collaborators [CIT], gives: FORMULA where we have identified $\mu = r^{-1}$, in which $r$ is the distance from the source of the potential. There is obviously some numerical uncertainty in the value of the cutoff, but the important result is that quantum gravitational interactions make the Planck mass bigger at high energy.
771
0906.0363
6,975,505
2,009
6
2
false
true
3
UNITS, UNITS, UNITS
Ultrahigh energy cosmic rays that produce giant extensive showers of charged particles and photons when they interact in the Earth's atmosphere provide a unique tool to search for new physics. Of particular interest is the possibility of detecting a very small violation of Lorentz invariance such as may be related to the structure of space-time near the Planck scale of $\sim 10^{-35}$m. We discuss here the possible signature of Lorentz invariance violation on the spectrum of ultrahigh energy cosmic rays as compared with present observations of giant air showers. We also discuss the possibilities of using more sensitive detection techniques to improve searches for Lorentz invariance violation in the future. Using the latest data from the Pierre Auger Observatory, we derive a best fit to the LIV parameter of $3.0^{+1.5}_{-3.0} \times 10^{-23}$, corresponding to an upper limit of $4.5 \times 10^{-23}$ at a proton Lorentz factor of $\sim 2 \times 10^{11}$. This result has fundamental implications for quantum gravity models.
1,035
0906.1735
6,989,848
2,009
6
9
true
true
1
UNITS
This work was primarily supported by NASA grants NAG5-12111, NAG5 11918-1, and NSF grant ATM-0312344 at Stanford University. Writing of this paper was partially conducted during W. Liu's appointment to the NASA Postdoctoral Program at the Goddard Space Flight Center, administered by Oak Ridge Associated Universities. He is grateful to his postdoctoral advisers, Brian Dennis and Gordon Holman, for fruitful discussions. Work performed by J. Mariska was supported by NRL basic research funds. The authors thank the referee for constructive comments and many individuals, including S. Liu, W. East, T. Donaghy, J. Pryadko, B. Park, R. Hamilton, J. McTiernan, and J. Leach, who contributed to the Stanford unified Fokker-Planck code over three decades.
751
0906.2449
7,000,043
2,009
6
13
true
false
1
FOKKER
Let $P(j,t)$ denote the distribution of an ensemble of planets as a function of time. The general form of the Fokker-Planck equation (e.g., Risken 1984) for this problem is given by FORMULA where $T_1(j)$ is the Type I migration torque and $D(j)$ is the appropriate diffusion parameter due to turbulent fluctuations. In this problem (see also JGM), the diffusion constant is defined to be $D \equiv (\Delta J)_T^2/{\tau_T}$, where the fluctuation amplitude $(\Delta J)_T$ and the time scale ${\tau_T}$ over which the turbulent perturbations are independent are specified in Section 2.2 (see equations [REF -- REF]). Notice also that the minus sign in the Type I term is included so that $T_1$ is the magnitude of the torque.
724
0906.4030
7,019,021
2,009
6
22
true
false
1
FOKKER
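The Fokker-Planck equation above has an equivalent Langevin description: each planet's angular momentum drifts under the Type I torque while random-walking with diffusion coefficient $D=(\Delta J)_T^2/\tau_T$. A toy ensemble integration of this Langevin process, with purely illustrative torque and diffusion profiles (not the ones used in the paper):

```python
import math
import random

def evolve_ensemble(n_planets=1000, steps=500, dt=1e-3, seed=0):
    """Langevin counterpart of the Fokker-Planck equation:
    dj = -T1(j)*dt + sqrt(2*D(j)*dt)*xi, with xi ~ N(0, 1).
    T1 and D below are illustrative placeholders, not the profiles
    of the source paper."""
    T1 = lambda j: 0.5 * j  # toy Type I torque magnitude (inward drift)
    D = lambda j: 0.1       # toy constant turbulent diffusion coefficient
    rng = random.Random(seed)
    js = [1.0] * n_planets  # all planets start at the same j
    for _ in range(steps):
        js = [j - T1(j) * dt + math.sqrt(2.0 * D(j) * dt) * rng.gauss(0, 1)
              for j in js]
    return js

js = evolve_ensemble()
mean_j = sum(js) / len(js)
print(mean_j)  # drifts below the initial j = 1 because of the inward torque
```

The minus sign on the drift term mirrors the sign convention in the text, where $T_1$ denotes the magnitude of the (inward) torque; the ensemble mean decays while turbulence spreads the distribution.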
*Studies involving cold antihydrogen.*---Comparisons of the spectra of hydrogen (H) and antihydrogen ($\overline{\rm H}$) are well suited for CPT and Lorentz tests. Among the various transitions that can be considered, the unmixed 1S--2S transition appears to be an excellent candidate: its projected experimental resolution is expected to be about one part in $10^{18}$, which is promising in light of potential Planck-suppressed quantum-gravity effects. On the other hand, the corresponding leading-order SME calculation establishes identical shifts for free H or $\overline{\rm H}$ in the initial and final states with respect to the conventional energy levels. From this perspective, the 1S--2S transition is actually less suitable for the measurement of unsuppressed CPT- and Lorentz-violating signals. The largest non-trivial contribution to this transition within the SME test framework is produced by relativistic corrections, and it is multiplied by two additional powers of the fine-structure parameter $\alpha$. The expected energy shift, already at zeroth order in $\alpha$ expected to be minuscule, is therefore associated with an additional suppression factor of more than ten thousand [CIT].
1,206
0907.1319
7,053,577
2,009
7
8
false
true
1
UNITS
The *SDSS* is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Korean Scientist Group, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. Funding for the creation and distribution of the *SDSS* Archive has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The *SDSS* Web site is http://www.sdss.org/.
967
0907.5295
7,105,456
2,009
7
30
true
false
3
MPS, MPS, MPS
Low energy supersymmetry (SUSY) is one of the most plausible candidates for physics beyond the Standard Model (SM) at the TeV scale [CIT]. In the context of supergravity, SUSY can be spontaneously broken in a hidden sector while giving a vanishing cosmological constant. In this framework, the gravitino mass is related to the scale of hidden sector SUSY breaking as $M_{\rm SB}\sim \sqrt{m_{3/2}M_{Pl}}$, where $M_{Pl}=2.4\times 10^{18}$ GeV is the reduced Planck mass. In the low energy SUSY scenario, the hidden sector SUSY breaking is transmitted to the supersymmetric standard model (SSM) to induce soft terms providing sparticle masses of ${\cal O}(1)$ TeV. Then, for a certain range of $m_{3/2}$, gravitinos produced in the early Universe decay after Big-Bang nucleosynthesis, which would destroy the successful prediction of the light element abundances [CIT]. This cosmological difficulty might be avoided if the gravitino is relatively heavy, e.g. $m_{3/2}\gtrsim {\cal O}(10)$ TeV, so that the decay occurs before nucleosynthesis.
1,043
0908.2154
7,136,049
2,009
8
17
false
true
1
UNITS
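The quoted order-of-magnitude relation $M_{\rm SB}\sim\sqrt{m_{3/2}M_{Pl}}$ is easy to evaluate; for a gravitino mass of $\mathcal{O}(10)$ TeV (our illustrative input) the hidden-sector SUSY-breaking scale comes out near $10^{11}$ GeV:

```python
import math

M_PL = 2.4e18  # reduced Planck mass in GeV, as quoted in the text

def susy_breaking_scale(m_gravitino_gev):
    """M_SB ~ sqrt(m_{3/2} * M_Pl): the order-of-magnitude relation
    quoted in the text, with no O(1) factors included."""
    return math.sqrt(m_gravitino_gev * M_PL)

# m_{3/2} = 10 TeV = 1e4 GeV (illustrative choice):
print(susy_breaking_scale(1.0e4))  # ~1.5e11 GeV
```

This intermediate scale, far below $M_{Pl}$ but far above the TeV scale, is the usual hallmark of gravity-mediated SUSY breaking.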
Non-linearities are now routinely extracted from all-sky observations of the microwave background anisotropy. Our purpose in this paper has been to propose a new technique with which to predict the observable signal. Present data already give interesting constraints on the skewness parameter $f_{\mathrm{NL}}$, and over the next several years we expect that the Planck survey satellite will make these constraints very stringent. It is even possible that higher-order moments, such as the kurtosis parameter $g_{\mathrm{NL}}$ [CIT] will become better constrained [CIT]. To meet the need of the observational community for comparison with theory, reliable estimates of these non-linear quantities will be necessary for various models of early-universe physics.
760
0909.2256
7,191,466
2,009
9
11
true
true
1
MISSION
We will be assuming Planck units throughout, with $G$, $\hbar$ and $c$ set to unity. We will mostly follow the notation used in Ref. [CIT], which in turn was based on the notation in the review by Brout *et al.* [CIT]. Additionally, to avoid cumbersome factors of $2M$ (where $M$ is the mass of the collapsing object), we will assume that all the coordinates are non-dimensionalized by dividing out by $2M$. This will lead to expressions for the flux, etc. with factors of $2M$ missing, which can be put back appropriately by dimensional arguments. Latin tensor indices $a,b,..$ range over $0..3$, and we use the mostly plus sign convention.
641
0909.4668
7,222,521
2,009
9
25
true
true
1
UNITS
If the SM matter fields propagate in the transverse dimensions of $AdS_d$, one therefore expects them to be localized at either the UV or the infrared (IR) brane. We shall show that in the former case a similar suppression of the effective 4D couplings is found for $R^{-1}\sim$ TeV, so that only IR localization is viable. The main model-building feature of the $AdS_d$ spaces seems to be their ability to combine the warped explanation for the weak/Planck hierarchy with the KK parity found in UED models, so these spaces admit only a partial unification of the appealing complementary features of UED and RS models. The main experimental signature for the $AdS_d$ spaces in this instance is the observation of warped KK gravitons in addition to UED KK modes. Such a signature also occurs when the $(d-1)$-dimensional UED model is realized by embedding the SM fields on the IR brane of $AdS_5\times T^{d-5}$, as discussed in [CIT]. However, as we shall show, the graviton KK towers on $AdS_d$ and $AdS_5\times T^{d-5}$ differ, so that if a $(d-1)$-dimensional UED scenario is discovered one would be able to experimentally determine which of these distinct warped spaces the UED model is embedded in by carefully studying the graviton KK spectrum.
1,249
0909.5454
7,231,766
2,009
9
29
false
true
1
UNITS
The scale-dependent monopole mass is important for the friction lengthscale, which is now $\ell_f\equiv M/(\theta T^2)\sim\eta_m^2 L/(\theta T^2)$. Another difference is that instead of gauge radiation we now have Goldstone boson radiation, whose rate of energy loss is FORMULA Notice that this is much stronger than the energy loss rates due to gauge radiation (except if the monopoles and strings form at the same energy scale, in which case they will be comparable) and due to gravitational radiation (except if both form at the Planck scale, in which case they will all be comparable).
591
0910.3045
7,270,595
2,009
10
16
false
true
1
UNITS
Composite Higgs models are examples of models where the breakdown of perturbative unitarity is postponed to higher energy[^5]. In the next section, I will discuss in detail the collider signatures of these models. The extra-dimensional Higgsless models that will be presented in Section [7] are other examples with delayed perturbative unitarity breakdown, thanks to the non-trivial dynamics of a tower of massive spin-1 particles. Both classes of models were already considered in the eighties (see for instance Ref. [CIT] for a historical perspective, and Ref. [CIT] for an account of recent developments), but they have experienced a recent revival and can now be cast in terms of models with warped extra dimensions. Thanks to this holographic description, models of strong EWSB can now be extrapolated in the far UV up to energies of the order of the Planck scale, and questions like the unification of gauge couplings can be legitimately addressed (see, for instance, Ref. [CIT]). Overall, serious competitors to the MSSM have emerged as possible extensions of the SM at the Fermi scale.
1,088
0910.4976
7,295,122
2,009
10
26
false
true
1
UNITS
Next we look at the case of the $\Omega\Lambda$CDM model. In this case, the situation is somewhat different (see Figure REF). One of the noticeable differences is that $m_\nu$, $H_0$ and $\Omega_k$ are notably correlated with one another in CMB. This degeneracy shows that the change in the angular diameter distance to the last scattering surface made by varying one of the above parameters cannot be efficiently compensated by just adjusting another. Thus other observations of geometrical distances would be necessary. When the BAO scale measurement is included, the constraint on $m_\nu$ is improved because of a better determination of $\Omega_k$ and $H_0$. For the analysis of Planck+$H_0$, the constraint on $m_\nu$ also becomes stronger. As seen from Table REF, Planck+BOSS and Planck+$H_0$ give almost the same limit on $m_\nu$. However, even if both the BAO scale and direct $H_0$ measurements are included in the analysis, the constraint is not much improved compared to Planck+BOSS and Planck+$H_0$. This is because the BAO scale measurement effectively gives the same information as the direct $H_0$ measurement in this model, which is also the reason why the limits on $m_\nu$ from Planck+BOSS and Planck+$H_0$ are at almost the same level.
1,246
0911.0976
7,318,270
2,009
11
5
true
true
7
MISSION, MISSION, MISSION, MISSION, MISSION, MISSION, MISSION
The Planck satellite, launched in May 2009, will obtain extremely precise measurements of the CMB temperature anisotropy power spectrum ($C_{\ell}^{\rm TT}$) up to $\ell\sim 2500$ and the E-mode polarization anisotropy power spectrum ($C_{\ell}^{\rm EE}$) up to $\ell\sim 1500$ [CIT]. Robust measurements of the acoustic horizon and distance to the SLS will break degeneracies in dark energy surveys [CIT]. Polarization measurements will yield the optical depth $\tau$ to the SLS [CIT], further constraining models of reionization and breaking the degeneracy between $n_{s}$ and $\tau$ [CIT]. Cosmological parameters will be determined with much greater precision. More precise values of $n_{s}$ and $\alpha_{s}$ will be obtained from CMB data alone, helping to robustly constrain inflationary models and alternatives to inflation [CIT]. The advent of Planck, ongoing (SPT [CIT] and ACT [CIT]) experiments at small scales, and a future space based polarization experiment like CMBPol [CIT] all require predictions of primary anisotropy multipole moments $C_{\ell}$ with ${\cal O}(10^{-3})$ accuracy.
1,099
0911.1359
7,322,657
2,009
11
6
true
false
2
MISSION, MISSION
[^1]: In Ref. [CIT], the results of Ref. [CIT] are used to explore the effect of progressively higher $n_{\rm max}$ on CMB $C_{\ell}$'s. In that work, it is noted that the fractional difference between the $C_{\ell}$'s for $n_{\rm max}=60$ and $n_{\rm max}=120$ falls within a heuristic Planck performance benchmark. Higher values of $n_{\rm max}$ come even closer to the fiducial case of $n_{\rm max}=120$, a fact used to argue that even $n_{\rm max}=60$ recombination is adequate for Planck data analysis. From the Cauchy convergence criterion, however, we know that a meaningful convergence test requires a comparison between successive members of a sequence. Using the results of Ref. [CIT] alone, the question of convergence with $n_{\rm max}$ thus remains open.
767
0911.1359
7,322,709
2,009
11
6
true
false
2
MISSION, MISSION
It has been argued that a theory is natural if it is stable under tiny variations of its fundamental parameters. Mass corrections to a fundamental scalar such as the SM Higgs field are proportional to the physical cutoff if it exists. If the Planck mass is the physical cutoff, a fine tuning to one part in $10^{38}$ in the coupling is needed if one were to keep the Higgs mass at the electroweak scale. It is fair to say that this line of reasoning has led to important developments in supersymmetric, technicolor, extra dimensional and little Higgs models [CIT]. From here on, the term "hierarchy problem" will simply refer to the existence of two scales, the electroweak and Planck scales, and the aforementioned issue. This problem exists *regardless* of whether or not the SM is embedded into some grand unified theory.
821
0911.3892
7,355,842
2,009
11
19
false
true
2
UNITS, UNITS
We assume that the energy density of the Universe receives significant contributions from three components: a) standard baryonic matter (BM); b) a species of weakly interacting, massive particles, which we identify with cold dark matter (CDM); and c) a slowly varying, classical scalar field $\phi$, whose contribution to the energy density is characterized as dark energy (DE). We also consider the possibility that there is a direct coupling between the CDM particles and the scalar field. Its equation of motion takes the form FORMULA We normalize all dimensionful quantities, such as the scalar field, with respect to the reduced Planck mass $M=(8\pi G)^{-1/2}$. The full $M$-dependence is displayed explicitly in appendix A. Our normalization here is equivalent to setting $M=1$. Equation (REF) can be obtained if we assume that the mass $m$ of the particles has a dependence on $\phi$ [CIT]. Then we have $\beta(\phi)=-{d\ln m(\phi)}/{d\phi}$. In order to be consistent with the stringent observational constraints for the baryonic sector, we assume that the interaction with the DE scalar field is confined to the CDM sector. The BM has no direct coupling to $\phi$.
1,173
0911.5396
7,372,139
2,009
11
28
true
true
1
UNITS
Lepton flavor violation is rather generic in a wide class of SUSY models. In fact, the flavor structures of the neutrinos and quarks can induce mixing of the slepton generations. In the see-saw models, the neutrino Yukawa interaction radiatively contributes to the left-handed slepton mass, while the right-handed one receives a correction from the CKM mixings above the GUT scale in the SUSY GUT models. When the SUSY breaking effect is mediated at the Planck scale, such as in gravity mediation, the slepton mass matrices thus acquire flavor mixing through the renormalization group evolution down to the weak scale, approximately given by FORMULA where $m_0$ and $a_0$ are typical values of the scalar mass and the trilinear coupling. Here $M_P$, $M_{R_k}$ and $M_{H_c}$ are the Planck scale, the mass of the $k$-th right-handed neutrino and the mass of the colored Higgs boson of the GUT, respectively. Obviously, the neutrino Yukawa couplings, $Y_\nu$, and the CKM matrix, $V_{\rm CKM}$, induce flavor violation even in the absence of slepton mixing at the input scale.
1,112
0912.0585
7,386,232
2,009
12
3
true
true
2
UNITS, UNITS
The previous analysis used a Planck scale of 1 TeV, which is a hard lower bound for $D > 6$. If one considers how Kaluza-Klein gravitons would affect supernovae cooling and neutron stars, the lower bounds on the Planck scale are higher [CIT]. However, it should be realized that the bounds provided by astrophysical arguments contain significant uncertainties, such as the compactification moduli. For $D < 8$, the lower bound on the fundamental Planck scale is too high to allow quantum black holes to be produced at the LHC. For higher dimensions, the bounds on the Planck scale are less stringent: for $D = 8$, $M_D > 4$ TeV, and for $D > 8$, $M_D > 1.4$ TeV. For $D > 8$, the lower bound on the Planck scale is set by the absence of black holes in neutrino cosmic ray showers [CIT]. However, Auger has yet to observe a single neutrino-induced shower. Ref. [CIT] is based on the ratio of vertical and quasi-horizontal neutrino showers. The vertical showers are used to normalize the product of the neutrino flux and their interaction cross section. The fact that Auger should have seen a few vertical showers by now, but has not seen any yet, relaxes the limits on $M_D$. In addition, the neutrino cosmic ray bounds can be evaded in a model of split fermions [CIT]. Using the results of Fig. REF, we see that for $D = 8$, $\sigma \sim 20$ pb, and for $D > 8$, $\sigma \gtrsim 2\times 10^4$ pb. Thus, most decay signatures in Table REF would be observable for $D > 8$, while probably none would be observable for $D < 8$.
1,522
0912.0826
7,389,004
2,009
12
4
false
true
5
UNITS, UNITS, UNITS, UNITS, UNITS
**The minimal $SO(10)$ scalar DM scenario.** The minimal scalar DM scenario [CIT] contains the SM Higgs in a scalar representation ${\bf 10}$ and the DM in a scalar ${\bf 16}$ of $SO(10)$. Below $M_\text{G}$ and above the EWSB scale the model is described by the $H_{1} \to H_{1}$, $S \to -S,$ $H_{2} \to -H_{2}$ invariant scalar potential FORMULA together with the GUT scale boundary conditions FORMULA and FORMULA While the parameters in Eq. (REF) are allowed by $SO(10)$, the ones in Eq. (REF) can be generated only after $SO(10)$ breaking, by operators suppressed by $n$ powers of the Planck scale $M_{\text{P}}$.
620
0912.3797
7,428,778
2,009
12
21
true
true
1
UNITS
Noncommutative (NC) physics has become an integral part of present-day high energy physics theories. It reflects a structure of space-time which is modified in comparison to the space-time structure underlying ordinary commutative physics. This modification of the space-time structure is a natural consequence of the appearance of a new fundamental length scale known as the Planck length [CIT]. There are two main physical contexts within which a signal for the existence of a Planck length scale appears. The first one lies within the loop quantum gravity framework, in which the Planck length plays a fundamental role: there, the presence of a new fundamental length scale leads, after quantization, to discrete spectra for the area and volume operators, with minimal eigenvalues proportional to the square and cube of the Planck length, respectively. The second physical context where one can find a signal for the existence of a fundamental length scale comes from observations of ultra-high energy cosmic rays which seem to contradict the usual understanding of some astrophysical processes, for example electron-positron production in collisions of high energy photons. It turns out that the deviations observed in these processes can be explained by modifying the dispersion relation in such a way as to incorporate the fundamental length scale [CIT]. NC space-time has also been revived in the paper by Seiberg and Witten [CIT], where an NC manifold emerged in a certain low energy limit of open strings moving in the background of a two-form gauge field.
1,643
0912.5087
7,440,074
2,009
12
27
false
true
4
UNITS, UNITS, UNITS, UNITS
If PBH evaporations leave stable Planck-mass relics, these might also contribute to the dark matter. This was first pointed out by MacGibbon [CIT] and has subsequently been explored in the context of inflationary scenarios by several authors [CIT]. If the relics have a mass $\kappa\,M_\mathrm{Pl}$, where $M_\mathrm{Pl}$ is the Planck mass, and reheating occurs at a temperature $T_\mathrm{R}$, then the requirement that they have less than the critical density implies [CIT] FORMULA for the mass range FORMULA Note that we would now require the density to be less than $\Omega_\mathrm{CDM} \approx 0.25$, which strengthens the limit by a factor of $4$. The lower mass limit arises because PBHs generated before reheating are diluted exponentially. The upper mass limit arises because PBHs larger than this dominate the total density before they evaporate, in which case the final cosmological photon-to-baryon ratio is determined by the baryon asymmetry associated with their emission. Recently Alexander and Mészáros [CIT] have advocated an extended inflationary scenario in which evaporating PBHs naturally generate the dark matter, the entropy, and the baryon asymmetry of the Universe. This triple coincidence applies provided inflation ends at $t \sim 10^{-23}\,\mathrm{s}$, so that the PBHs have an initial mass $M \sim 10^6\,\mathrm{g}$. This just corresponds to the upper limit indicated in Eq. REF, which explains one of the coincidences. The other coincidence involves the baryon asymmetry generated in the evaporations. It should be stressed that the limit of Eq. REF still applies even if there is no inflationary period, but it then extends all the way down to the Planck mass.
1,798
0912.5297
7,442,284
2,009
12
29
true
true
3
UNITS, UNITS, UNITS
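The role of evaporation in this relic scenario can be illustrated numerically. A minimal sketch, assuming the standard Hawking evaporation time $\tau \approx 5120\pi\, G^2 M^3/(\hbar c^4)$ (a textbook order-of-magnitude formula, neglecting greybody factors and the emitted particle content; it is not taken from the excerpt itself):

```python
import math

# Hawking evaporation time for a Schwarzschild black hole of mass M (kg):
# tau = 5120*pi*G^2*M^3/(hbar*c^4) -- order-of-magnitude estimate, neglecting
# greybody factors and the detailed particle spectrum.
G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8  # SI units

def evaporation_time(M_kg):
    return 5120 * math.pi * G**2 * M_kg**3 / (hbar * c**4)

# A PBH of initial mass ~1e6 g = 1e3 kg evaporates in a tiny fraction of a
# second, long before nucleosynthesis, leaving at most a Planck-mass relic.
print(evaporation_time(1e3))
```

The strong $M^3$ dependence is what makes the reheating temperature (and hence the minimum PBH formation time) the controlling parameter of the relic abundance.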
In five dimensions with $k = 1$, bubble production dominates over black holes if $x$ is sufficiently large, specifically $x \gtrapprox 0.5418$. For higher dimensions or larger values of $k$, black hole production is always dominant. Note that the entire range of $x$ cannot truly be trusted; black holes are reliable semiclassical objects only when the horizon size is large compared to the Planck length, i.e. when FORMULA Since for the above instantons $r_c$ turns out to be comparable to $l$ (for $d = 5$, $0.655 \lessapprox \frac{r_c}{l} < 1$, and the allowed range of $\frac{r_c}{l}$ shrinks as the dimension increases), one can really only trust the black holes when FORMULA
694
1001.2266
7,469,549
2,010
1
13
true
true
1
UNITS
Planck is a project of the European Space Agency with instruments funded by ESA member states, and with special contributions from Denmark and NASA (USA). The Planck-LFI project is developed by an International Consortium led by Italy and involving Canada, Finland, Germany, Norway, Spain, Switzerland, UK, USA. The Italian contribution to Planck is supported by the Italian Space Agency (ASI). We acknowledge the support of the Spanish Ministry of Science and Education.
459
1001.4737
7,496,558
2,010
1
26
true
false
2
MISSION, MISSION
In summary, if an $\mathcal{O}(1)$ fraction of the dark matter at our epoch are scalar glueballs and if their partial lifetime is not much larger than $10^{26}$ s, two experiments may see a signal: The contribution of glueball decays to the $\gamma$-ray spectrum below $10^2\,\text{GeV}$ may be detected by GLAST. Moreover, the $\gamma$-line near $10^6\,\text{GeV}$ may be seen by HESS. A lifetime of the order of $10^{26}\,\text{s}$ follows if $M_{10} \sim 10^{13}\,\text{GeV}$ according to Eq. REF(#lt5){reference-type="eqref" reference="lt5"}. Such a low 10d Planck scale may be realized in a large-volume compactification along the lines of [CIT]. Note that this scenario is incompatible with the aforementioned scenario in which $\lambda=\mathcal{O}(1)$: According to Eq. REF(#lt1){reference-type="eqref" reference="lt1"}, $\lambda$ has to be very small (or zero) for such a low 10d Planck scale.
967
1002.2830
7,538,058
2,010
2
15
true
true
2
UNITS, UNITS
As is well known, gravity is a non-renormalizable field theory. The same holds for supergravity, which by construction includes gravity. As such, it is regarded as an effective theory that is valid only below a certain ultraviolet cutoff $\Lambda_{UV}$. In the case at hand, this scale is the (reduced) Planck mass $M_\mathrm{Pl}=2.44\times 10^{18}\,\mathrm{GeV}$.
352
1002.2835
7,538,252
2,010
2
15
true
true
1
UNITS
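The quoted value can be checked directly from the full Planck mass, $m_P \approx 1.22\times 10^{19}$ GeV, via $M_\mathrm{Pl} = m_P/\sqrt{8\pi}$ (a standard-value consistency check, not a computation taken from the paper):

```python
import math

# Reduced Planck mass from the full Planck mass m_P = sqrt(hbar*c/G),
# using the standard textbook value of m_P in GeV.
m_P = 1.2209e19                     # full Planck mass [GeV]
M_pl = m_P / math.sqrt(8 * math.pi)
print(f"{M_pl:.3e} GeV")            # ~2.44e18 GeV, as quoted in the text
```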
It is simple to see from eq. (REF) that when $\phi\gg m_{\rm Pl}/\sqrt{\xi}$ (corresponding to $\sigma\gg m_{\rm Pl}$) the potential $V_E$ approaches a constant; this is the regime in which the effective Planck mass runs in the original Jordan frame. In this regime the flatness of the potential ensures that a phase of slow-roll inflation takes place. Inflation ends when $\phi\sim\phi_{e}\equiv m_{\rm Pl}/\sqrt\xi$. For details on the inflationary predictions we refer the reader to Refs. [CIT].
528
1002.2995
7,540,594
2,010
2
16
true
true
1
UNITS
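The flattening for $\phi\gg m_{\rm Pl}/\sqrt{\xi}$ can be seen in a quick numerical sketch. This assumes the standard non-minimally coupled setup (Jordan-frame potential $V=\lambda\phi^4/4$ and conformal factor $\Omega^2 = 1+\xi\phi^2/m_{\rm Pl}^2$, so $V_E = V/\Omega^4$); the parameter values are hypothetical, chosen only for illustration:

```python
# Einstein-frame potential V_E = (lambda*phi^4/4) / (1 + xi*phi^2/m_pl^2)^2,
# which approaches the plateau lambda*m_pl^4/(4*xi^2) for phi >> m_pl/sqrt(xi).
xi, lam, m_pl = 1.0e4, 0.1, 1.0      # hypothetical values, units where m_pl = 1

def V_E(phi):
    return 0.25 * lam * phi**4 / (1.0 + xi * phi**2 / m_pl**2) ** 2

plateau = lam * m_pl**4 / (4.0 * xi**2)
for n in (1, 3, 10, 30):             # phi measured in units of m_pl/sqrt(xi)
    phi = n * m_pl / xi**0.5
    print(n, V_E(phi) / plateau)     # ratio -> 1: the potential flattens
```

With these units the ratio reduces to $n^4/(1+n^2)^2$, which shows explicitly why $\phi_e \sim m_{\rm Pl}/\sqrt\xi$ marks the edge of the plateau.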
The Millennium Simulations used in this paper were carried out by the Virgo Supercomputing Consortium at the Computing Center of the Max-Planck Society in Garching. The semi-analytic galaxy catalogues used in this study are publicly available at http://galaxy-catalogue.dur.ac.uk:8080/MyMillennium/. The Millennium Gas Simulations were carried out at the Nottingham HPC facility, as was much of the analysis required by this work. The SDSS-DR4 group catalogue of [CIT] used in this study is publicly available at http://www.astro.umass.edu/~xhyang/Group.html.
558
1002.4414
7,556,910
2,010
2
23
true
false
1
MPS
A theory of position of massive bodies is proposed that results in an observable quantum behavior of geometry at the Planck scale, $t_P$. Departures from classical world lines in flat spacetime are described by Planckian noncommuting operators for position in different directions, as defined by interactions with null waves. The resulting evolution of position wavefunctions in two dimensions displays a new kind of directionally-coherent quantum noise of transverse position. The amplitude of the effect in physical units is predicted with no parameters, by equating the number of degrees of freedom of position wavefunctions on a 2D spacelike surface with the entropy density of a black hole event horizon of the same area. In a region of size $L$, the effect resembles spatially and directionally coherent random transverse shear deformations on timescale $\approx L/c$ with typical amplitude $\approx \sqrt{ct_PL}$. This quantum-geometrical "holographic noise" in position is not describable as fluctuations of a quantized metric, or as any kind of fluctuation, dispersion or propagation effect in quantum fields. In a Michelson interferometer the effect appears as noise that resembles a random Planckian walk of the beamsplitter for durations up to the light crossing time. Signal spectra and correlation functions in interferometers are derived, and predicted to be comparable with the sensitivities of current and planned experiments. It is proposed that nearly co-located Michelson interferometers of laboratory scale, cross-correlated at high frequency, can test the Planckian noise prediction with current technology.
1,629
1002.4880
7,560,057
2,010
2
25
true
true
1
UNITS
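The quoted scaling $\approx \sqrt{ct_P L}$ translates into attometer-scale displacements for laboratory baselines. An illustrative evaluation (the 40 m arm length is a hypothetical example, not a figure taken from the abstract):

```python
import math

# Typical transverse amplitude ~ sqrt(c * t_P * L) from the abstract's scaling;
# c * t_P is the Planck length.
l_P = 1.616e-35                 # Planck length c*t_P [m]
L = 40.0                        # hypothetical interferometer arm length [m]
amplitude = math.sqrt(l_P * L)
print(amplitude)                # ~2.5e-17 m: tens of attometers
```

The square-root dependence on $L$ is what lifts a Planck-scale effect up to within reach of interferometric displacement sensitivities.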
In this Section we present the predicted $1\sigma$ marginalized errors on the $f_{\rm NL}$ parameter, and the covariance of $f_{\rm NL}$ with the remaining cosmological parameters considered in our Fisher matrix analysis. We show forecasts from LSST and EUCLID data alone, as well as the expected errors after combining the results from these two experiments with Planck forecasted errors.
392
1003.0456
7,569,551
2,010
3
1
true
false
1
MISSION
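Combining forecasts follows the standard Fisher-matrix rule: for independent experiments the Fisher matrices add, and the marginalized $1\sigma$ error on a parameter is $\sqrt{(F^{-1})_{ii}}$. A minimal sketch with made-up $2\times2$ matrices (these are placeholders, not the paper's actual LSST/EUCLID/Planck forecasts):

```python
import numpy as np

# Hypothetical (f_NL, other-parameter) Fisher matrices; independent data sets
# combine by simple addition of their Fisher matrices.
F_lss = np.array([[4.0, 1.0], [1.0, 2.0]])   # placeholder large-scale-structure forecast
F_cmb = np.array([[1.0, 0.2], [0.2, 5.0]])   # placeholder CMB prior

def marginalized_sigma(F, i=0):
    # 1-sigma marginalized error on parameter i: sqrt of the (i,i) element
    # of the inverse Fisher matrix.
    return np.sqrt(np.linalg.inv(F)[i, i])

print(marginalized_sigma(F_lss))             # LSS data alone
print(marginalized_sigma(F_lss + F_cmb))     # combined: always tighter
```

The combined error is guaranteed to be smaller than either experiment's alone, which is the quantitative content of "combining with Planck forecasted errors".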
We present for the first time a coherent model of the polarized Galactic synchrotron and thermal dust emissions, which are the main diffuse foregrounds for the measurement of the polarized power spectra of the CMB fluctuations with the Planck satellite mission. We produce 3D models of the Galactic magnetic field, including regular and turbulent components, and of the distribution of matter in the Galaxy, relativistic electrons and dust grains. By integrating along the line of sight we construct maps of the polarized Galactic synchrotron and thermal dust emission for each of these models and compare them to currently available data. We consider the 408 MHz all-sky continuum survey, the 23 GHz band of the Wilkinson Microwave Anisotropy Probe and the 353 GHz Archeops data. The best-fit parameters obtained are consistent with previous estimates in the literature based only on synchrotron emission and pulsar rotation measurements. They allow us to reproduce the large-scale structures observed in the data. Poorly understood local Galactic structures and turbulence make an accurate reconstruction of the observations in the Galactic plane difficult. Finally, using the best-fit model we are able to estimate the expected polarized foreground contamination in the Planck frequency bands. In the CMB bands (70, 100, 143 and 217 GHz) at high Galactic latitudes, although the CMB signal dominates in general, a significant foreground contribution is expected at large angular scales. In particular, this contribution will dominate the CMB signal for the B modes expected from realistic models of a background of primordial gravitational waves.
1,648
1003.4450
7,615,782
2,010
3
23
true
false
2
MISSION, MISSION
Following the treatment of AHD09, we write the general Fokker-Planck equation for the equilibrium distribution of ${\mathbf\Omega}$: FORMULA The Fokker-Planck coefficients are FORMULA Here $\mathbf{D}$ denotes the mean drift in ${\mathbf\Omega}$, and $\mathsf{E}$ denotes the diffusion coefficient tensor.
308
1003.4732
7,621,013
2,010
3
24
true
false
2
FOKKER, FOKKER
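The equilibrium distribution of such a Fokker-Planck equation can be probed by simulating the corresponding Langevin dynamics. A 1-D toy sketch with a hypothetical linear drift and constant diffusion (an Ornstein-Uhlenbeck model; the actual AHD09 coefficients for ${\mathbf\Omega}$ are not reproduced here):

```python
import numpy as np

# Euler-Maruyama integration of dx = drift(x) dt + sqrt(2E) dW; the stationary
# distribution of x solves the 1-D Fokker-Planck equation with mean drift
# "drift" and diffusion coefficient E.
rng = np.random.default_rng(0)
drift = lambda x: -x                  # hypothetical linear mean drift
E = 0.5                               # constant diffusion coefficient
dt, n_steps = 1e-3, 300_000
noise = rng.standard_normal(n_steps) * np.sqrt(2 * E * dt)

x, samples = 0.0, []
for i in range(n_steps):
    x += drift(x) * dt + noise[i]
    if i > n_steps // 4:              # discard burn-in before equilibrium
        samples.append(x)

print(np.var(samples))                # ~E = 0.5, the stationary variance
```

For this drift the stationary solution is Gaussian with variance equal to $E$, which the simulated variance approaches; the same drift/diffusion correspondence underlies the equilibrium analysis of ${\mathbf\Omega}$ above.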