text
large_stringlengths 252
2.37k
| length
uint32 252
2.37k
| arxiv_id
large_stringlengths 9
16
| text_id
int64 36.7k
21.8M
| year
int64 1.99k
2.02k
| month
int64 1
12
| day
int64 1
31
| astro
bool 2
classes | hep
bool 2
classes | num_planck_labels
int64 1
11
| planck_labels
large_stringclasses 66
values |
---|---|---|---|---|---|---|---|---|---|---|
Next, we extend this method to the cross-spectrum with Planck. Although the Planck data are the same for the first and second measurements, the change of [Polarbear]{.smallcaps} data affects the cross-spectrum. Similar to REF, the cross spectra with one of Planck frequencies for the first and second measurements are computed as FORMULA and FORMULA respectively, where $D_{b,\mathrm{Planck}\times\mathrm{PB}}$ is the assumed signal spectrum depending on the Planck frequency, $N_b^{\mathrm{Planck}}$ is the noise spectrum for the Planck frequency, and $Z_{b,i}$ is a set of random numbers that follows a normal distribution with a variance of 1. The noise bias in the cross-spectrum becomes zero. | 697 | 2203.02495 | 20,603,777 | 2,022 | 3 | 4 | true | false | 7 | MISSION, MISSION, MISSION, MISSION, MISSION, MISSION, MISSION |
Let us highlight the key differences between our approach and the full Planck analysis. As previously, the Base data suggests a significantly lower $S_8$ compared to the Planck 2018 analysis that makes our approach entirely consistent with the low-redshift probes REF(#S8){reference-type="eqref" reference="S8"}. For the effective number of relativistic degrees of freedom we found $N_{\rm eff}=3.16\pm0.30$. While our estimate agrees with the Planck 2018 result [CIT], it allows for considerably larger values of $N_{\rm eff}$ which leads to a moderately higher $H_0$. | 569 | 2203.03666 | 20,613,176 | 2,022 | 3 | 7 | true | true | 3 | MISSION, MISSION, MISSION |
Data uncertainties used in the linear fits are derived from the diagonal QQ and UU pixel variances appropriate to each map. However, these per-pixel uncertainties do not fully encompass the more complex noise characteristics of *WMAP*and Planckdata (see Section [2]). We use simulations to verify that the use of the diagonal uncertainties in the fit does not bias recovery of the true parameters and their uncertainties. Simulated maps include realistic sky signals and independently generated instrument noise that more completely characterizes the full noise properties of the data. | 585 | 2203.11445 | 20,689,814 | 2,022 | 3 | 22 | true | false | 1 | MISSION |
We consider constraints from the SPT+Planck and Planck patches separately in Figure REF. As discussed earlier in Section [5], the consistency of these two patches was part of the unblinding criteria, thus these two constraints are consistent under the PPD metric. We find a $p$-value of 0.37 (0.33) when comparing the Planck (SPT+Planck) results to constraints from SPT+Planck (Planck). We also observe that the constraints are somewhat tighter in the SPT+Planck patch in $S_8$, consistent with the slightly larger signal-to-noise (see Table REF). We note however, that the signal-to-noise before scale cuts of the SPT+Planck patch is significantly larger than the Planck patch due to the lower noise and smaller beam size of the SPT lensing map (for $\langle \delta_g \kappa_{\rm CMB}\rangle$: 26.8 vs. 17.9; for $\langle \gamma_{\rm t}\kappa_{\rm CMB}\rangle$: 15.0 vs. 10.4), though most of the signal-to-noise is on the small scales which we had to remove due to uncertainties in the theoretical modeling. This highlights the importance of improving the small-scale modeling in future work. | 1,094 | 2203.12440 | 20,697,695 | 2,022 | 3 | 23 | true | false | 9 | MISSION, MISSION, MISSION, MISSION, MISSION, MISSION, MISSION, MISSION, MISSION |
The structure of this paper is as follows. In Sect. [2], we describe the surveys and fields selected for our analysis and the construction of the CFIS mosaic maps from the raw data, as well as the detailed prescription of how we measure and correct the cross-power spectra from the images. In Sect. [3], we present our measurements and our method to estimate and correct for the impact of our Galaxy on these measurements. Section [4] explains how we test the robustness of our method against potential systematic effects using simulated light cones. Finally, Sect. [5] describes our modelling and fitting of the data, and presents the resulting constraints on galaxy formation. Some discussion is contained in Sect. [6] and conclusions are given in Sect. [7]. Additionally, Appendix [8] presents the results of some null tests, while Appendix [9] presents some filtered images that show the cross-correlation visually. Throughout this paper, we assume a Planck cosmology [CIT] and a [CIT] initial mass function (IMF). | 1,018 | 2203.16545 | 20,727,695 | 2,022 | 3 | 30 | true | false | 1 | MISSION |
Based on the above insight in a fictitious, vacuum dominated, conformal, holographic universe where the Hubble radius is the Plank length, $R = 1$, the number of maximal degrees of freedom is the Planck area, $N_{\max} = 4 \pi$, and the corresponding minimal energy is its inverse, $\epsilon_{\min} = 1/(4 \pi)$. In such a universe the ultraviolet and infrared cut-offs are both the Planck scale, and the vacuum energy density is $\rho_{fictitious} = 3/(8 \pi)$. If we confuse our universe with the fictitious one and apply the Planck scale as an ultraviolet cut-off to our universe, instead of the actual ultraviolet regulator (the present Hubble radius, $R_0 = 1/\textsf{H}_0$), we will arrive at a discrepancy of $\rho_{fictitious}/\rho_{observed} = 1/\textsf{H}_0^2$ when calculating the vacuum energy density. The value of $1/\textsf{H}_0^2$ is $7.2 \times 10^{121}$ in Planck units. | 888 | 2203.16753 | 20,730,583 | 2,022 | 3 | 31 | true | true | 4 | UNITS, UNITS, UNITS, UNITS |
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics \| Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. | 1,354 | 2203.17026 | 20,731,828 | 2,022 | 3 | 31 | true | false | 3 | MPS, MPS, MPS |
Significant efforts are aimed at measuring $f_{\rm NL}^{\rm local}$, with the tightest current constraints coming from CMB observations. In particular, the Planck 2018 data yields $f_{\rm NL}^{\rm local} = -0.9 \pm 5.1$ [CIT]. Measurements from galaxy clustering are currently somewhat weaker, but are expected to improve significantly with upcoming galaxy surveys. These surveys will eventually reach the target of $f_{\rm NL}^{\rm local} \approx 1$ (see for example [CIT]). Almost all LSS analyses done so far use the fact that LPNG produces the so-called scale-dependent galaxy bias [CIT], and therefore can be constrained by observations of galaxy power spectra on large scales [CIT]. Whilst measuring the galaxy power spectrum and looking for scale-dependent bias has the advantage of being relatively straightforward, this may not be an optimal way to constrain LPNG from LSS data. Indeed, as many Fisher forecasts and full likelihood analyses indicate, the dominant source of information on $f_{\rm NL}^{\rm local}$ for the shot-noise limited samples is the galaxy bispectrum [CIT]. Developing consistent and robust pipelines to harvest this information is one of the major milestones on the way towards achieving the tightest possible bounds on LPNG. | 1,258 | 2204.01781 | 20,747,125 | 2,022 | 4 | 4 | true | true | 1 | MISSION |
The QG scenario in [CIT] suggests that, even at trans-Planckian scales, gravity decouples from the matter sector. In practice, such a scenario thus amount to simply extrapolating the SM or, in the present context, the respective GUEFT beyond the Planck scale, *i.e.*, FORMULA in Eq. REF. We note that, in any such QG scenario, the Standard Model remains UV-incomplete due to the U(1) Landau-pole. This obstruction, however, can be avoided in a GUT where the U(1) Abelian gauge group at high energies is part of a non-Abelian gauge group with self-interactions. Said self-interactions can -- depending on the respective gauge group and matter content -- be sufficiently antiscreening to provide for asymptotic freedom of the gauge coupling. Even asymptotically free gauge sectors can be sufficient to also render Yukawa couplings and quartic couplings asymptotically free: a proposal known as Complete Asymptotic Freedom (CAF) of gauge-Yukawa theories[^23]. The conditions to achieve CAF in gauge-Yukawa theories have been analysed, for instance, in [CIT]. The requirement of CAF without gravity, places additional non-trivial constraints on the viable parameter space at the Planck scale. Such a QG scenario is thus probably the most straightforward example of additional constraints from demanding a UV-completion. | 1,315 | 2204.03001 | 20,758,280 | 2,022 | 4 | 6 | false | true | 2 | UNITS, UNITS |
In case of DA set, our estimated median values of $\Omega_{0m}h_{0}^{2}$ and $\Omega_{0\Lambda}h_{0}^{2}$ show approximately $1.6\sigma$ deviation from Planck's values of these parameters. However, the median values of $\Omega_{0k}h_{0}^{2}$ and $h_{0}$ show low deviations (i.e., $0.02\sigma$ and $0.4\sigma$ respectively) from the values of these parameters obtained by [CIT]. We note in passing that most of the parameter values (estimated by us) show larger deviation from Planck results for DA set compared with our estimated results for DA+BAO set, since $31$ data generates a small number of sample specific values (which are used to estimate the values of parameters) of density parameters than the same for $53$ data. Moreover, the internal contaminations (e.g., large standard deviations) in DA $H(z)$ data also affect the estimated median values of cosmological parameters using DA data. Due to these reasons, the estimated uncertainty range of each parameter for DA set is larger than the same for DA+BAO set. However, the median values of these four parameters using DA set also show the consistency (within error ranges corresponding to these median values) with the values of these cosmological parameters obtained by [CIT]. | 1,239 | 2204.07099 | 20,791,780 | 2,022 | 4 | 14 | true | false | 2 | MISSION, MISSION |
***Introduction*:** The CDF collaboration has provided an updated measurement of the W boson mass $M_W = 80433.5 \pm 9.4$ MeV [CIT] using the data corresponding to 8.8 ${\rm fb}^{-1}$ integrated luminosity collected at the CDF-II detector of Fermilab Tevatron collider. This newly measured value has 7$\sigma$ deviation from the standard model (SM) expectation ($M_{W}=80357\pm6$ MeV). This has led to several discussions on possible implications and interpretations in the last week related to effective field theory [CIT], electroweak precision parameters [CIT], beyond standard model (BSM) physics like dark matter (DM) [CIT], additional scalar fields [CIT], supersymmetry [CIT] and several others [CIT]. Assuming this anomaly to be originating from beyond standard model (BSM) physics, here we consider a seesaw model for Dirac neutrinos where a real scalar triplet plays a non-trivial role in generating the required enhancement in W boson mass as well as light neutrino masses. Due to the existence of additional light species in the form of right chiral parts of light Dirac neutrinos, we can get enhancement in effective relativistic degrees of freedom $\Delta N_{\rm eff}$ depending upon Yukawa couplings and masses of additional particles including those involving the triplet scalar. We show that such enhanced $\Delta N_{\rm eff}$ can not only be constrained by existing data from the Planck collaboration, but also remains within reach of next generation cosmology experiments. After discussing the minimal model of tree level Dirac seesaw, we consider a radiative version of it which can accommodate dark matter as well as lepton anomalous magnetic moments, which are also signatures of BSM physics. | 1,713 | 2204.08266 | 20,797,738 | 2,022 | 4 | 18 | true | true | 1 | MISSION |
We present a formalism for jointly fitting pre- and post-reconstruction redshift-space clustering (RSD) and baryon acoustic oscillations (BAO) plus gravitational lensing (of the CMB) that works directly with the observed 2-point statistics. The formalism is based upon (effective) Lagrangian perturbation theory and a Lagrangian bias expansion, which models RSD, BAO and galaxy-lensing cross correlations within a consistent dynamical framework. As an example we present an analysis of clustering measured by the Baryon Oscillation Spectroscopic Survey in combination with CMB lensing measured by Planck. The post-reconstruction BAO strongly constrains the distance-redshift relation, the full-shape redshift-space clustering constrains the matter density and growth rate, and CMB lensing constrains the clustering amplitude. Using only the redshift space data we obtain $\Omega_\mathrm{m} = 0.303\pm 0.008$, $H_0 = 69.21\pm 0.78$ and $\sigma_8 = 0.743\pm 0.043$. The addition of lensing information, even when restricted to the Northern Galactic Cap, improves constraints to $\Omega_m = 0.300 \pm 0.008$, $H_0 = 69.21 \pm 0.77$ and $\sigma_8 = 0.707 \pm 0.035$, in tension with CMB and cosmic shear constraints. The combination of $\Omega_m$ and $H_0$ are consistent with Planck, though their constraints derive mostly from redshift-space clustering. The low $\sigma_8$ value are driven by cross correlations with CMB lensing in the low redshift bin ($z\simeq 0.38$) and at large angular scales, which show a $20\%$ deficit compared to expectations from galaxy clustering alone. We conduct several systematics tests on the data and find none that could fully explain these tensions. | 1,683 | 2204.10392 | 20,812,334 | 2,022 | 4 | 21 | true | false | 2 | MISSION, MISSION |
While our results are sufficiently constraining to considerably sharpen the tension in $\sigma_8$ between the CMB and LSS in $\Lambda$CDM, we still remain limited by the data that we use and by our modeling. Luckily, we anticipate rapid progress in both directions in the very near future. The galaxy maps are already sample variance dominated on the large scales from which we derive most of our cosmological information, so the next major improvement in errors at intermediate $\ell$ will come from CMB lensing maps with lower noise than Planck. Redshift-space clustering measured from DESI will also dramatically improve over those we used here. Using maps optimized for cross-correlations, with careful attention to foreground cleaning or hardening, derived from more sensitive and higher angular resolution ground-based experiments will dramatically lower the uncertainties of $C_\ell^{\kappa g}$. These lower-noise measurements should allow us to better distinguish between the shapes of various nonlinear contributions to $C^{\kappa g}_\ell$ even on the scales we have analyzed in this work. More ambitiously, as discussed in Appendix [9], recent work extending the LPT modeling in real space to more nonlinear scales using hybrid N-body models [CIT] can allow us to self-consistently double the $\ell$ reach of our formalism by switching the perturbative calculations of $C^{\kappa g}_\ell$ for an emulator (e.g. [CIT]). Combined with improved modeling future experiments will improve the constraints on the power spectrum amplitude, $\sigma_8$, and allow us to check the consistency between constraints derived from large and small scales, even from within the same and related theoretical models. The well-motivated and tested theoretical framework outlined herein should be ideal for such future work. | 1,812 | 2204.10392 | 20,816,359 | 2,022 | 4 | 21 | true | false | 1 | MISSION |
In Fig.REF, we compare the angular size of $H_1, H_2, H_3$ with the homogeneity scale $\theta_{\mathcal{H}}$ as a function of the mean value (and dispersion) of $H_0$ measured in each region. The $\theta_{\mathcal{H}}$ measurement corresponds to the full sky and is therefore assigned to the global Planck fit for $H_0$ [CIT]. We can also place in the same plot the local type Ia SN measurement of $H_0$ from [CIT] and the one from type II SN from [CIT]. Similar results are found using time delays in a lensed QSO [CIT]. For SNe, the angle is taken to be 60 degrees, as this is the angle in the CMB sky that corresponds to the radial separation $\chi_*$ between the CMB surface and the SNe measurements. The angular spread is taken from the largest variance in the other measurements. | 785 | 2204.10728 | 20,817,599 | 2,022 | 4 | 22 | true | true | 1 | MISSION |
The appendices contain various technical details of the computations used throughout the main text. In Appendix [9] we present the asymptotic behaviour of the mixed propagators. In Appendix [10] we provide the solutions and singularity analysis of the generalized scalar seeds. In Appendix [11], we briefly comment on the double-exchange and triple-exchange diagrams. In Appendix [12], we review the theory of free spinning particles in de Sitter space. Throughout the paper we take the convention of natural units $c=\hbar=1$, the reduced Planck mass $M_{\rm pl}^2 = 1/8\pi G$, and the metric signature $(-,+,+,+)$. | 616 | 2205.00013 | 20,844,114 | 2,022 | 4 | 29 | true | true | 1 | UNITS |
Assuming a spatially flat metric of the form FORMULA where $h_{ij}(\eta,{\ensuremath{\boldsymbol{x}}})$ is a gauge-invariant divergenceless and traceless tensor (Roman indices running only on spatial dimensions), the Einstein equations, linearised at first order in $h_{ij}$, read FORMULA Here a prime denotes differentiation with respect to $\eta$, the conformal time, the Laplacian operator is over the comoving spatial coordinates ${\ensuremath{\boldsymbol{x}}}$ and $M_\mathrm{Pl}$ stands for the reduced Planck mass. The source term $\varPi_{ij} = \delta T_{ij}^{\mathrm{{\scriptscriptstyle{TT}}}}(\eta,{\ensuremath{\boldsymbol{x}}})/a^2$ is the traceless and transverse anisotropic part of the linearised stress tensor $\delta T_{\mu\nu}$ at the origin of the perturbations [CIT]. In these equations, all scalars and vectors have been assumed to vanish as we are only focused in the generation and propagation of gravitational waves. In order to solve equation REF(#eq:einstein){reference-type="eqref" reference="eq:einstein"} in the presence of long cosmic strings, we need to compute $\varPi_{ij}(\eta,{\ensuremath{\boldsymbol{x}}})$ and invert the differential operator to get $h_{ij}(\eta,{\ensuremath{\boldsymbol{x}}})$. Moreover, we are interested in the statistical properties of the stochastic background generated by the superimposition of all these waves and one has to construct the two-point correlation functions of these quantities. | 1,452 | 2205.04349 | 20,876,896 | 2,022 | 5 | 9 | true | true | 1 | UNITS |
Including all of the above noise components in our model, we produced total forecasts for Rayleigh scattering signal-to-noise for upcoming experiments in the presence of atmospheric, galactic, and extragalactic foregrounds. These total forecasts are presented in Figure REF. All forecasted signal-to-noise values are shown in Table REF. These forecasts indicate that, in combination with Planck data, all upcoming ground-based CMB experiments can expect a Rayleigh scattering detection with a signal-to-noise of roughly 1-4. For wide experiments, the majority of this detection comes from Planck data, as indicated by the dotted lines. Though deep experiments can expect slightly lower signal-to-noise than wide experiments, their Rayleigh scattering detections come mostly from the experiments themselves. Without Planck, the highest-significance Rayleigh scattering detections of 1.5-2 come from deep experiments. It is also relevant to note that this model predicts that a roughly 3-$\sigma$ Rayleigh scattering detection is potentially present in the Planck dataset corresponding to the CMB-S4-Wide observing patch, which encompasses 65% of the sky. This is backed up by the forecasted signal-to-noise values for Planck alone with $f_\text{sky}=0.65$, which are shown in the last row of Table REF. | 1,301 | 2205.04494 | 20,878,418 | 2,022 | 5 | 9 | true | false | 5 | MISSION, MISSION, MISSION, MISSION, MISSION |
We employ the Planck 2018 low-$\ell$ TT+EE and Planck 2018 high-$\ell$ TT+TE+EE temperature and polarization power spectrum [CIT]. To marginalize over nuisance parameters, we use the "lite" likelihoods. These datasets are referred to as "Planck". We have also considered the Planck 2018 lensing power spectrum [CIT]. | 316 | 2205.08070 | 20,908,657 | 2,022 | 5 | 17 | true | false | 4 | MISSION, MISSION, MISSION, MISSION |
In conclusion, the effects of running BEI requires a deeper investigation. There is also a need to better understand how Barrow entropy can possibly arise from a theory of quantum gravity. Does the fractalized geometry only hold for horizons or any spacetime hypersurface in general? What about spacetime manifold itself? Barrow entropy is not the first proposal that involves fractal geometry in gravity. It has previously been proposed that spacetime dimension becomes fractalized and decreases towards the Planck scale [CIT], which is a different modification (a 2-sphere in such a fractalized geometry has a *lower* dimension [CIT], not higher as per Barrow's proposal), but perhaps one could obtain some ideas about how to derive Barrow entropy from these different approaches. Connections with fractional quantum mechanics [CIT] and other notions of entropies in relation to holographic dark energy [CIT] should also be further investigated. | 947 | 2205.09311 | 20,918,985 | 2,022 | 5 | 19 | true | true | 1 | UNITS |
In the present work, we examine the following points in the context of the recently proposed curvature coupling helical magnetogenesis scenario \cite{Bamba:2021wyx} -- (1) whether the model is consistent with the predictions of perturbative quantum field theory (QFT), and (2) whether the curvature perturbation induced by the generated electromagnetic (EM) field during inflation is consistent with the Planck data. Such requirements are well motivated in order to argue the viability of the magnetogenesis model under consideration. Actually, the magnetogenesis scenario proposed in \cite{Bamba:2021wyx} seems to predict sufficient magnetic strength over the large scales and also leads to the correct baryon asymmetry of the universe for a suitable range of the model parameter. However in the realm of inflationary magnetogenesis, these requirements are not enough to argue the viability of the model, particularly one needs to examine some more important requirements in this regard. We may recall that the calculations generally used to determine the magnetic field's power spectrum are based on the perturbative QFT -- therefore it is important to examine whether the predictions of such perturbative QFT are consistent with the observational bounds of the model parameter. On other hand, the generated gauge field acts as a source of the curvature perturbation which needs to be suppressed compared to that of contributed from the inflaton field in order to be consistent with the Planck observation. These set our motivation. Interestingly, both the aforementioned requirements in the context of the curvature coupling helical magnetogenesis scenario are found to be simultaneously satisfied by that range of the model parameter which leads to the correct magnetic strength over the large scale modes. | 1,810 | 2205.10561 | 20,927,766 | 2,022 | 5 | 21 | true | true | 2 | MISSION, MISSION |
[We should point out that for our analyses we have used uniform priors on all the three parameters. The prior range for $\log(E_{QG}$ (GeV)) $\in [6,19]$ is a conservative choice. Although some works have obtained limits on $E_{QG}<10^{19}$ GeV, these involve simplifying assumptions on the intrinsic time lags and hence we do not consider this. Despite this conservative choice, we do not get closed contours for $E_{QG}$ in both the LIV analyses. We should also note that a few works have also obtained 95% c.l. lower limits on $E_{QG}$ greater than the Planck scale [CIT]. However, given that some works [CIT] (see also references in A21) have argued for evidence for LIV, contradicting the above lower limits, it is important to test for signatures of LIV in order to verify these results. We did check that choosing a Gaussian prior on $\tau$ with a mean equal to zero and $\sigma=0.3$ does not qualitatively change our conclusions. Therefore, to summarize, although a detailed sensitivity studies of our results as a function of prior choices is beyond the scope of this work, we have shown that our results do not change with a Gaussian prior on $\tau$. Furthermore, since we not get closed contours for $E_{QG}$ despite choosing a wide prior range, they would not change our results if we truncate the prior.]{style="color: black"} | 1,339 | 2205.12780 | 20,941,884 | 2,022 | 5 | 25 | true | false | 1 | UNITS |
FORMULA where $K_B$ stands for the Boltzmann constant, $T$ for the absolute temperature of the environment, $k = A$ or $B$, the $M^k_i$ are $N$ Hermitian operators with eigenvalues in $\{ -1,1 \}$, the $\alpha^k_i \in [0,1]$ are such that $\sum^N_i \alpha^k_i = 1$, and $\beta_k \geq \ln 2$ is an inverse measure of $k$'s thermodynamic efficiency that depends on the internal dynamics $H_k$ --- see e.g. Refs. [CIT]. For fixed $k$, the operators $M^k_i$ clearly must commute, i.e. $[M^k_i, M^k_j] = M^k_i M^k_j - M^k_j M^k_i = 0$ for all $i, j$; hence $H_{AB}$ is swap-symmetric under the permutation group $S_N$ for each $k$. We can, therefore, write $N = \dim(H_{AB})$, i.e. the eigenvalues $H_{AB}$ can be encoded by $2^N$ distinct $N$-bit strings. Via the Holographic Principle [CIT], we can consider $H_{AB}$ to be defined at a finite boundary $\mathscr{B}$ that encodes, at each instant, one of these $2^N$ bit strings [CIT]. We assume a discrete topology for $\mathscr{B}$, with each point encoding one bit; in a widely accepted geometric picture, a "point" on $\mathscr{B}$ corresponds to a 4$L_P^2$ pixel, with $L_P$ denoting the Planck length. | 1,154 | 2205.13184 | 20,946,617 | 2,022 | 5 | 26 | false | true | 1 | UNITS |
The Planck satellite provides $I$, $Q$, and $U$ maps, each containing $N_{\rm pix}$ pixels, in $N_{\nu}$ frequency bands. We assume Gaussian noise that is uncorrelated between frequency channels but may have inter-pixel correlations. Fitting parametric foreground models is computationally intensive at full Planck resolution. Since in this paper we are interested in polarized foregrounds at large angular scales, we degrade the Planck LFI and HFI maps to a resolution of $N_{\rm side}=16$ containing $N_{\rm pix} = 12 \cdot N_{\rm side}^2=3072$ pixels following the procedure described in [CIT] [hereafter]. | 609 | 2205.13968 | 20,951,361 | 2,022 | 5 | 27 | true | false | 3 | MISSION, MISSION, MISSION |
This work is supported by the National Key R&D Program of China Grant (No. 2021YFC2203100, 2020YFC2201600), NSFC No. 12273035, 11903030 and 12150610459, the Fundamental Research Funds for the Central Universities under Grant No. WK2030000036 and WK3440000004. Some of the results in this paper have been derived using the HEALPix [CIT] package. Based on observations obtained with Planck (<http://www.esa.int/Planck>), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada. | 532 | 2205.14804 | 20,955,655 | 2,022 | 5 | 30 | true | false | 2 | MISSION, MISSION |
There are two shift parameters, $R$ and $l_{A}$, that contain much of information of CMB power spectrum, the former is defined as FORMULA and the latter reads FORMULA where $d_{A}(z_{\ast})=\frac{1}{1+z}\int_{0}^{z_{\ast}}\frac{d\tilde{z}}{H}$ is the angular distance at decoupling [CIT], which depends on the dominant components after decoupling. The redshift at decoupling $z_{\ast}$ is given by [CIT] FORMULA where $\omega_{(m)}=\omega_{(c)}+\omega_{(b)}$. In this work, we use the following Planck 2018 compressed likelihood [CIT] with these two shift parameters to perform a likelihood analysis, FORMULA where $C_{ij}=D_{ij}\sigma_i\sigma_j$ is the covariance matrix, $\sigma=(0.0046,0.09,0.00015)$ is the errors, and $D_{CMB}= \left(\begin{array}{ccc} 1 & 0.46 &-0.66 \\ 0.46 & 1 & -0.33 \\ -0.66&-0.33& 1\\ \end{array} \right)$ is the covariance. | 853 | 2205.14928 | 20,956,209 | 2,022 | 5 | 30 | true | false | 1 | MISSION |
Having extensively tested our analysis pipelines on simulations, we now shift our focus to discussing details of the analysis carried out on Planck data. We work with both the full mission as well as the half mission Planck data. For the high frequency instrument (HFI) the half mission data sets are provided by the Planck collaboration. For the low frequency instrument (LFI), no specific half mission data set is provided and hence we specifically choose to work with maps produced by combining $1^{\rm st}$ year & $3^{\rm rd}$ year data as part of half mission 1 data set and maps produced by combining $2^{\rm nd}$ year & $4^{\rm th}$ year data as part of the half mission 2 data set. Since a part of the analysis involves reconstruction of a $\mu$ map from Planck data, the LFI covering 30 GHz, 44 GHz and 70 GHz plays a very important role as was discussed in Sec. [5.1] (see Fig. REF). | 893 | 2205.15971 | 20,962,988 | 2,022 | 5 | 31 | true | true | 4 | MISSION, MISSION, MISSION, MISSION |
This 43% AzTEC confirmation rate indicates that our candidate list includes a significant number of foreground confusing sources even after the extensive vetting described in §[2]. Since our candidate selection requires an inclusion in at least two PCCS bands, each with $\ge7\sigma$ statistical significance, they are not likely spurious sources. Instead, Plancksources that are not a single bright AzTEC source are likely extended or distributed dust sources -- extended emission is obvious in some cases. Some Planckcandidates are likely foreground cirrus clouds with an arcminute scale structure that survived our filtering. Previous studies of Planckselected sources have also shown some to be over-density of fainter high-redshift DSFGs [CIT], rather than a single bright object. | 785 | 2206.00138 | 20,965,352 | 2,022 | 5 | 31 | true | false | 3 | MISSION, MISSION, MISSION |
. The Planck857 GHz (350, "$Planck_{350}$\"), 545 GHz (500, "$Planck_{500}$\"), and 353 GHz (850, "$Planck_{850}$\") flux densities reported in Table REF are the Planckaperture photometry (APERFLUX) values in the PCCS2[^10]. The PCCS2 includes four different estimates of source flux: detection pipeline photometry (DETFLUX), aperture photometry (APERFLUX), PSF fit photometry (PSFFLUX), and Gaussian fit photometry (GAUFLUX). DETFLUX is suggested as the flux estimation method of choice for unresolved sources in regions of low background, given its greater sensitivity. The internal consistency check has shown that DETFLUX is subject to a greater scatter and a significant flux bias. In addition, an external consistency check by comparing Planckand *Herschel*data indicates a greater reliability for APERFLUX over DETFLUX [CIT]. Therefore, we adopt the APERFLUX for Planckband photometry in our analysis. For the five Planck-*WISE*sources with published *Herschel*photometry, Planckphotometry is consistent with the higher resolution *Herschel*data (see Table REF). The agreement between the Planck353 GHz photometry and the published SCUBA-2 photometry [CIT] is not as good for the five sources in common, and there might be some systematic problems with either set of photometry data. As discussed in §REF, two Plancksources found in the literature (Cosmic Eyelash, PJ090403.9) included in Table REF are in crowded fields with multiple *Herschel*sources, and the discrepancy between Planckand *Herschel*can be explained by this source blending. | 1,550 | 2206.00138 | 20,965,364 | 2,022 | 5 | 31 | true | false | 9 | MISSION, MISSION, MISSION, MISSION, MISSION, MISSION, MISSION, MISSION, MISSION |
However, this is not adequate for studying the kinetics of the phase transition. If only considering the gradient force from the free energy landscape, the local stable SAdS black hole can never make a transition and switch to the AdS space, and the reverse statement is also true. It is the fluctuating force (thermal noise) from the bath that makes the transition process possible. Therefore, the dynamics of black hole phase transition is described by the Langevin equation for the stochastic evolution or equivalently the Fokker-Planck equation for the probabilistic evolution. In this sense, the late time stationary Boltzmann distribution $P\sim e^{-\beta F}$ for the fluctuating black holes can be obtained from the Fokker-Planck equation. | 746 | 2206.02623 | 20,981,209 | 2,022 | 6 | 6 | false | true | 2 | FOKKER, FOKKER |
We point out that reconstructing the $\mu$-type distortion anisotropies does not necessarily require an absolute measurement of the sky. The work of [CIT] was, to our knowledge, the first one that attempted a reconstruction of the fluctuating part of the $\mu$ distortions using a component separation method applied to imager data, namely those from the high-frequency instrument on board of the Planck satellite. However, this approach can be more affected by contaminations from residual primary CMB and other astrophysical foregrounds [e.g., [CIT]]. In addition, we stress that knowledge of the $\mu$ monopole is needed to break the degeneracy between the average level of $\mu$ distortions, $\langle\mu\rangle$, and $f_{\rm NL}$ (see [CIT] for discussions on measuring $\mu$ fluctuations with a relatively calibrated experiment). This possibility is only allowed by instruments such as spectrometers that are sensitive to the absolute sky temperature. | 956 | 2206.02762 | 20,982,015 | 2,022 | 6 | 6 | true | true | 1 | MISSION |
Let us recapitulate what we have done. We found that in the "vanilla" flux compactifications involving warped throats with SUSY-breaking anti-branes, D3-particles puff into spherical shells. These shells can be regarded as stabilised bubbles of vacuum decay, with the SUSY vacuum inside the bubble. This realises explicitly the ideas of black holes as brane shells put forward in [CIT], albeit for over-extremal objects, so that the name black holes is somewhat a misnomer. The shells can be significantly larger than the Planck scale, and can be pushed towards the classical regime. From that perspective it is interesting that we find over-extremal configurations since the naked singularity one would expect, is not resolved in a quantum mechanical sense as for the particles of the Standard Model (whose wavelength is vastly bigger than the classical backreaction radius). However, to get to actual astrophysical length scales one would have to lower the string scale to non-realistic values, values which is furthermore in tension with several Swampland bounds since it would mean the vacua exist at parametric weak coupling and large volume [CIT]. However, note that our construction does not rely on de Sitter uplifts. Our computations are equally valid for uplifts of AdS to another AdS, as long as the cosmological length scales are large and the uplift energy small. | 1,376 | 2206.04506 | 20,995,956 | 2,022 | 6 | 9 | false | true | 1 | UNITS |
In this section, we study the radio--SZ, radio--X, and SZ--X spatial correlations. For a proper comparison, we first smoothed the LOFAR 140,MHz and the XMM maps to the $1.65^{\prime}$ ACT+Planck map effective resolution. We then projected the ACT+Planck and the XMM-Newton maps to LOFAR geometry (1.5$^{\prime \prime}$ pixel size) using [Montage]{.smallcaps}.[^3] Note that the pixelization does not affect the results we will present in the following; using the ACT+Planck geometry, we get compatible results. Fig. REF shows the images used for our analysis. Measuring the level of the fluctuations in the ACT+Planck and LOFAR maps, we find $5.9 \times 10^{-6}$ and 1.0 mJy/beam, respectively. For the *XMM-Newton* map, we measure the X-ray background, corrected for the exposure time, in the north-east field with respect to A401, finding 4.2,counts,s$^{-1}$,deg$^{-2}$. | 872 | 2206.04697 | 20,997,775 | 2,022 | 6 | 9 | true | false | 4 | MISSION, MISSION, MISSION, MISSION |
Finally, to probe our scenario with a neutrino and a gravitino in the final state, we are going to employ the constraints with a pair of neutrinos in the final state, since the gravitino is completely stable and we can consider it decoupled from the rest of the spectrum being its interactions suppressed by the Planck scale. However we have to take into account that the energy of the signal, a monochromatic neutrino, is going to be modified for a massive gravitino. In our scenario the energy of the produced monochromatic neutrino is given by E\_=. []{#neutrinoenergy label="neutrinoenergy"} In the region where the gravitino can be considered massless (i.e. $m_{3/2} \ll m_{\tilde{\nu}_R}$), the energy of the signal would be the same as in the case considered by the experimental collaborations. However, if $m_{3/2}\sim m_{\tilde{\nu}_R}$ then $E_{\nu}\simeq m_{\tilde{\nu}_R}-m_{3/2}$. This means that the published constraints will be shifted since they are usually presented as a function of the DM mass, and not the energy of the signal. | 1,048 | 2206.04715 | 20,998,745 | 2,022 | 6 | 9 | false | true | 1 | UNITS |
In upper panel of Figure REF we show the solar spectrum computed with the MPS-ATLAS code, marking wavelengths at which [CIT] measurements were performed. In the lower panel of Figure REF we show computed and measured values of intensity at several view angles normalized to the disk-center intensity. In addition to our final product (i.e., computations with convection and overshoot) we also plot limb darkening computed assuming RE (i.e., no convection and no overshoot) and limb darkening computed taking convection into account but neglecting overshoot. One can see that MPS-ATLAS solar limb-darkening computations with overshoot accurately agree with the observations at all wavelengths and view angles. The convection without overshoot heats the atmosphere in very deep layers, which give only marginal contribution to the emergent radiation. Short-ward of $\sim 400$ nm the continuum opacity increases due to multiple photoionization processes and line haze, so that the contribution from the layers affected by convection can be neglected. Long-ward of $\sim 550$ nm, the temperature sensitivity of the Planck function decreases so that the temperature change due to convection does not noticeably affect limb darkening. Therefore, we only see the deviations between pure RE and convection models between $\sim 400$ nm to $\sim 550$ nm. In comparison to the convection, the overshoot affects the temperature structure of higher layers so that the agreement with measurements significantly deteriorates if overshoot is neglected. This result is in line with [CIT] who showed that limb darkening computed assuming RE does not agree well with observations. | 1,661 | 2206.06641 | 21,014,622 | 2,022 | 6 | 14 | true | false | 1 | LAW |
The explicit Planck factor demonstrates the particles, $N(\omega) = \int d\omega'|\beta_{\omega\omega'}|^2$, are distributed with a temperature, FORMULA in the high frequency approximation $\omega'\gg\omega$ [CIT]. Recall $\Delta_\omega\equiv \omega_{\textrm{max}}-\omega_{\textrm{min}}$, is the scale set by the sensitivity of detection. Thermal emission is not surprising considering the power plateau (Figure REF) and the close analogy for quantum and classical quantities of powers [CIT] and self-forces [CIT] between mirrors and electrons. | 544 | 2206.07291 | 21,018,711 | 2,022 | 6 | 15 | false | true | 1 | LAW |
We generally think that the LV modification terms can be suppressed by the integer power of the ratio of the typical energy $E$ of the actual physical process to the Planck scale $E_\text{Pl}$, so the linear LV energy scale $E_\text{LV}$ is near the Planck scale $E_\text{Pl}$. Generally, a quantum gravity theory will be realized near the Planck scale $E_\text{Pl}$, so the characteristic scale $E_\text{QG}$ is also near the Planck scale $E_\text{Pl}$. Although there is no one-to-one correspondence between a quantum gravity theory and a LV phenomenon, people often use the appearance of the LV phenomenon to mark a quantum gravity theory at work, so in a quantum gravity theory including LV effects, some people often use the characteristic scale $E_\text{QG}$ to replace the LV scale $E_\text{LV}$. To sum up, we expect that both the LV scale $E_\text{LV}$ and the quantum gravity characteristic scale $E_\text{QG}$ will appear near the Planck scale $E_\text{Pl}$, but their specific values need to be determined by experiments. | 1,033 | 2206.08180 | 21,024,748 | 2,022 | 6 | 16 | true | true | 5 | UNITS, UNITS, UNITS, UNITS, UNITS |
In section. 3, we analyze RGEs of coupling of the model. We show that adding a new VDM and scalar mediator, the RGEs will be modified. It is interesting to see behavior of RGEs of couplings in Planck scale. We solve the RG equations numerically and determine the RG evolution of the couplings of the models. For input parameters, we pick benchmark points for parameters of the model that are consistent with all constraints considered previously in the paper. Similar analyses with different input parameters have been performed for $U(1)$ extension of SM in Ref [CIT]. The running of couplings up to the Planck scale have been shown in Figures.(REF.a-c). In Figure. (REF.d), we compared running of $\lambda_H$ in the model with SM Higgs coupling. As it is known, SM Higgs coupling will be negative for $\mu> 10^{10} \rm GeV$. This means, it is not possible to establish all three conditions (perturbativity, vacuum stability and positivity) simultaneously in any scale. It is remarkable that the SM stability problem (positivity of $\lambda_H$) is solved in the model. This issue arises that $\lambda_{SH}$ changes very little in our model. This leads small changes for $\lambda_H$ and as a result, $\lambda_H$ remains positive until the Planck scale. | 1,252 | 2206.11041 | 21,045,867 | 2,022 | 6 | 22 | false | true | 3 | UNITS, UNITS, UNITS |
The pink contours in Fig. REF show the constraints in the $S_8$-$\Omega_{\rm m}$ plane derived by combining these RSD measurements (including the BAO constraints for the BOSS, eBOSS and MGS samples) with the Pantheon supernova magnitude-redshift relation [CIT], Ly$\alpha$-quasar and Ly$\alpha$-Ly$\alpha$ BAO measurements from [CIT] and BAO measurements from the 6dF Galaxy Survey [CIT]. To scale the BAO constraints we impose a Gaussian Planck prior on the sound horizon, $r_{\rm d} = 147.31 \pm 0.31 \text{Mpc}$. The RSD constraints in Fig. REF are consistent with the $S_8$ results from *Planck TTTEEE. There is also substantial overlap between the RSD and KiDS contours in Fig. REF.* | 688 | 2206.11794 | 21,053,483 | 2,022 | 6 | 23 | true | false | 2 | MISSION, MISSION |
We now focus on the analysis of the dark sector. In the early universe all the particles of the dark sector are in thermal equilibrium with the SM fields due to the production-annihilation diagrams shown in the Appendix [8], Figs. REF and REF. As the universe expands, the temperature drops. For unstable particles there is a maximum temperature below which the thermal bath doesn't have enough energy to produce them, while their annihilation and decays are still allowed and therefore disappear. However, recall that the lightest particle of the dark sector will be completely stable due to the residual $Z_6$ symmetry. This means that once such stable particle decouples from the thermal bath its relic density will be 'freezed out'. This relic density can then be computed and compared with Planck observations [CIT]. See [CIT] for a nice review on different DM production mechanisms. This type of DM could also be detected by nuclear recoil experiments such as XENON1T [CIT], see the diagrams shown in Fig. REF and REF. | 1,024 | 2206.11903 | 21,054,148 | 2,022 | 6 | 23 | false | true | 1 | MISSION |
We analyze the models by referring to the following cosmological observation datasets. We include both temperature and polarization likelihoods for high $l$ `plik` ($l=30$ to $2508$ in TT and $l=30$ to $1997$ in EE and TE) and low$l$ `Commander` and lowE `SimAll` ($l=2$ to $29$) of Planck (2018) measurement of the CMB temperature anisotropy [CIT]. We also include Planck lensing [CIT] and data of the BAO from 6dF [CIT], DR7 [CIT], and DR12 [CIT]. We use the datasets of the $Y_{P}$ [CIT] and $D/H$ measurements [CIT] to impose the BBN constraints. | 550 | 2206.13209 | 21,063,192 | 2,022 | 6 | 27 | true | false | 2 | MISSION, MISSION |
As long as the above conditions are satisfied, we are able to have the maximally relaxed scalar potential FORMULA where the inequality comes from the condition $\mathcal{U}>0$. In the meantime, the corresponding constraint is given by FORMULA where the last inequality is due to the dominance condition in Eq. REF(#Dominance){reference-type="eqref" reference="Dominance"}. Notice that $V_{D}^{\cancel{S}}$ is parametrically free up to the Planck scale $M_{pl}$, while the total scalar potential is parametrically free up to the SUSY breaking scale $M_S$. | 554 | 2206.13736 | 21,068,715 | 2,022 | 6 | 28 | false | true | 1 | UNITS |
The composite F-term is given by $\mathcal{F} = e^GG_TG^{T\bar{T}}G_{\bar{T}}$ after solving the equation of motion for the auxiliary fields $F^I$. For our G function we obtain an exponentially decreasing function $\mathcal{F} = 3|W_0|^2/(T+\bar{T})^3= 3|W_0|^2e^{-3\sqrt{2/3}\chi}$. This is what we want to get a viable supersymmetry breaking mechanism. The reason is that we look for a supersymmetry breaking scale during inflation $M_S^i\sim M_{pl}=1$, while the final scale should be parametrically lower than the Planck scale --for instance $M_S^f = 10^{-15}M_{pl}$. To achieve this large difference of scales, the vacuum expectation value of the field $\chi$ should change during the phase transition. On the other hand, the cutoff scale of our model can remain $\mathcal{O}(M_{pl})$ both before and after the phase transition. | 833 | 2206.13736 | 21,068,728 | 2,022 | 6 | 28 | false | true | 1 | UNITS |
The observation of the temperature anisotropies imprinted in the **Cosmic Microwave Background (CMB)**, the furthest photons we can observe in the universe, by the satellite Planck from 2009 to 2013, see Fig. REF and Fig. REF, provides the most precise measurement of Dark Matter abundance. As shown in Fig. REF, the amplitudes and positions of the peaks of the temperature power spectrum depend crucially on the DM abundance. In the Planck 2018 paper [CIT], we read the dark matter and baryon density in the universe FORMULA where $h = 0.674\pm 0.005$ is the Hubble constant in unit of $100$km/s/Mpc. In order to further reduce the uncertainties, it is useful to combine the CMB analysis with the measurement of the scale of **Baryonic Acoustic Oscillations** (which is around $147$Mpc today) [CIT] FORMULA | 808 | 2207.01633 | 21,099,019 | 2,022 | 7 | 4 | true | true | 2 | MISSION, MISSION |
By comparing in Fig. REF the Planck+BAO and Planck+CMB-S4 we can clearly see that CMB-S4 observations have the power to test neutrino interaction rates that are $\sim 1$ order of magnitude weaker than Planck, in general. The expected improvement from CMB-S4 depends sensitively on the value of $z_{\rm int}^{\rm max}$ and the number of interacting neutrino species $N_{\rm int}$. | 379 | 2207.04062 | 21,118,619 | 2,022 | 7 | 8 | true | true | 3 | MISSION, MISSION, MISSION |
The effective metric in Eq.(REF) depends on two \"Planck\" energy scales. One of them is related to the mass $m$ of the atom of the liquid and another one is determined by the average distance $a$ between the atoms in the liquid. Together they produce the effective Planck energy scale, which determines the vacuum energy (see Eq.(3.26) in Ref. [CIT]): FORMULA In general relativity, these two Planck scales are considered to be equal: $E_{\rm Planck,1}=E_{\rm Planck,2}=E_{\rm Planck}$, where $E_{\rm Planck,1}$ corresponds to the Planck mass, and $E_{\rm Planck,2}$ corresponds to the inverse Planck length. | 609 | 2207.05754 | 21,132,177 | 2,022 | 7 | 12 | false | true | 10 | UNITS, UNITS, UNITS, UNITS, UNITS, UNITS, UNITS, UNITS, UNITS, UNITS |
In [CIT] $\alpha(\nu)$ was computed for Planck 2015 data to test for SI. For the calculation, the CMB data on the sphere was first projected onto the plane using stereographic projection and then $\mathcal W$ was calculated. The authors found that the Planck 2015 temperature data is consistent with SI. However, the E-mode polarization data at 44 and 70 GHz frequency channels deviated from SI at roughly $\sim 4, \sigma$. This deviation is most likely predominantly due to two factors. The first is that the 2015 polarization data possibly contains low levels of systematic effects [CIT]. Secondly, stereographic projection was chosen for the analysis because it preserves angles, but it introduces size scaling of structures and hence can introduce projection errors. The definition of the CMT was generalized to random fields defined on curved manifolds and a method was given in [CIT] for its numerical computation directly on the sphere without requiring projection onto the plane. This method of calculation was applied to the Planck 2018 temperature data [CIT] and E-mode data [CIT]. Both temperature and E-mode data were found to be consistent with SI. Hence we conclude that the $4, \sigma$ deviation from SI obtained with the Planck 2015 E-mode data is caused by the combined effect of residual systematics in the data and stereographic projection. For the analysis of the E-mode data, the authors also used the so-called $\mathcal{D}$ statistic method to the CMB E-mode data and found the results to be in good agreement with the CMT analysis. | 1,555 | 2207.05765 | 21,132,706 | 2,022 | 7 | 12 | true | true | 4 | MISSION, MISSION, MISSION, MISSION |
For comparison, additional blocks of points in Fig. REF show how constraints on $S_8$ are affected by changes to the analysis choices while retaining the $\Lambda\text{CDM}$ model, and the impact of two extensions to $\Lambda\text{CDM}$ which we label ad hoc models. The $\Lambda\text{CDM}$ analysis choice variations include using the shear ratio likelihood, different scale cuts, the more general TATT IA model, the hyperrank method for marginalizing over source photo-$z$ uncertainties, and fixing neutrino mass. The ad hoc models include varying $X_{\rm Lens}$, which introduces a mismatch between the galaxy bias affecting galaxy clustering and that affecting galaxy--galaxy lensing (see its description as part of the robustness tests of Sec. [5.3]), and varying $A_{\mathrm{L}}$ [CIT], which scales the amount of lensing-related smoothing affecting the CMB temperature and polarization power spectra (See Sec. [3.6] and e.g. Ref. [CIT]). Both these ad hoc models correspond to the introduction of a parameter to explain features in the DES Y3 3$\times$ 2pt and Planck data respectively, and not new physics as opposed to the beyond-$\Lambda\text{CDM}$ models considered in this analysis. While $X_{\rm Lens}$ has little effect on the 3$\times$ 2pt constraints and thus on the 3$\times$ 2pt-Planck comparison, varying $A_{\mathrm{L}}$ leads to more consistent estimates of $S_8$ across all probe combinations shown (see also [CIT]). | 1,438 | 2207.05766 | 21,132,950 | 2,022 | 7 | 12 | true | false | 2 | MISSION, MISSION |
As was noted above in Sec. [6.2], for $\Omega_k$ the $p$-value median is 0.010, exactly at our threshold for reporting combined constraints. This merits further discussion, because in addition to being the most significant measure of tension reported, it is also the noisiest. The 16% and 84% quantiles are 0.002 and 1.0, respectively[^20]. This means that at an approximately $1\sigma$ level of certainty, our evaluation of tension between Planck and low-redshift $\Omega_k$ constraints could plausibly be consistent with both a slightly-greater-than-$3\sigma$ tension and with there being no tension at all. This large scatter is driven by the small value of $d_{\rm BMD}=1.5\pm 1.6$ (reporting the mean and standard deviation from the sample variance estimate). This small $d_{\rm BMD}$ means that that there is limited overlap in the parameter directions constrained by Planck and 3$\times$ 2pt+BAO+RSD+SN, making the assessment of tension extremely sensitive to noise in the posterior estimates. To further contextualize this finding, we note that the Planck-only preference for $\ensuremath{\Omega_k}<0$ driving this tension signal has been the subject of extensive discussion in the literature (see e.g. [CIT]) which highlights the fact that the interpretation of this tension can depend on subtleties related the choice of priors, parameters sampled, Planck likelihood calculation method, as well as the relationship to features in the Planck power spectra also captured by phenomenological parameter $A_{\rm L}$. | 1,521 | 2207.05766 | 21,132,956 | 2,022 | 7 | 12 | true | false | 5 | MISSION, MISSION, MISSION, MISSION, MISSION |
The later results do not contradict the well founded arguments pointing out that PBHs would quickly loose any initial electric charge. We simply find that this loss is not relevant for PBHs living in present day haloes, or that it can be shielded, and that a robust charge should remain for PBHs of all masses, or for extremal Kerr-Newman PBHs of Planck mass. This implies that if PBHs constitute the DM for the Universe, they would most likely hold electric charge. | 466 | 2207.05829 | 21,133,655 | 2,022 | 7 | 12 | true | true | 1 | UNITS |
In particular the detection of some form of the Unruh phenomenon [CIT] has been proposed in various set-ups [CIT]. However, in many of the proposed analog systems, the Unruh temperature FORMULA is still too small [CIT] for a direct experimental verification, as one sees that $1 {\rm m}/{\rm s}^2 \to \sim 4 \times 10^{-21}$K. In (REF) $a$ is the uniform acceleration and we explicitly kept the Planck constant, the speed of light and the Boltzmann constant to ease the units conversion. In the following we shall set to one $\hbar$,$c$ and $k_B$. | 547 | 2207.11935 | 21,183,240 | 2,022 | 7 | 25 | false | true | 1 | CONSTANT |
For the numerical simulations throughout this work, we consider a flat cosmology with the following density parameters as measured by Planck Collaboration [CIT] : $\Omega_{m,0} = 0.315$, $\Omega_{dm,0} = 0.264$, $\Omega_{r,0} = 9.237\times 10^{-5}$, and a Hubble constant $H_{0}=67.36, \mathrm{km,s^{-1} Mpc^{-1}}$ | 314 | 2207.13689 | 21,198,427 | 2,022 | 7 | 27 | true | true | 1 | MISSION |
As in our previous analyses, the other cosmological parameters not fixed by their priors are fully in agreement with the values measured by Planck. A distinction with respect to the $\Lambda$CDM and $w$CDM analyses is that the measurement of $A_s$ is not being pushed as much towards lower values as in those cases, indicating that the preference of the data for a low $\sigma_8$ is being driven by the interaction, via $A$. Furthermore, given that this dark sector interaction only affects the late-time evolution and does not modify early-time physics, its effect on the CMB is negligible, as shown in Ref. [CIT]. It is therefore justified that we use the Planck prior on the primordial parameters, whose measurements would be unaffected by the inclusion of the interaction. This indicates that this model of interacting dark energy can alleviate the $\sigma_8$ tension, thus re-establishing the concordance between the early- and late-Universe measurements of the clustering amplitude. | 988 | 2207.14784 | 21,209,365 | 2,022 | 7 | 29 | true | false | 2 | MISSION, MISSION |
Although [CIT] :2022 reports the tightest bound on the tensor-to-scalar ratio ($r_{0.05}<0.032$ at 95% CL), here we want to consider the case in which the tensor spectral tilt is left to vary, beyond the usual consistency relation for single-field slow-roll models ($r=-8 n_t$). So, while keeping in mind the work by [CIT] :2022, the actual state-of-the-art on $\qty(r, n_t)$ are the bounds set by Planck 2018: $r_{0.01} < 0.066$ and $-0.76 < n_t < 0.52$ at $95\%$ CL, when including both CMB and GWs interferometers (see section REF for further details on how these bounds are obtained). Here the subscript ${0.01}$ indicates the pivot scale that is typically assumed in this context, i.e. $0.01$ Mpc$^{-1}$. The main goal of this paper is to update these constraints, exploiting newly available data, both from an electromagnetic and a GW perspective. | 853 | 2208.00188 | 21,212,081 | 2,022 | 7 | 30 | true | false | 1 | MISSION |
In this work, we have obtained new bounds on the tensor-to-scalar ratio $r$ and the tensor spectral index $n_t$. We have exploited newly released datasets from both an electromagnetic point of view, i.e. BICEP/Keck 2018 [CIT] and Planck PR4 [CIT], and a GW perspective, that is, LVK collaboration data [CIT]. In particular, the complementarity of Planck and BK measurements allows us to better constrain the amplitude of the tensor perturbation spectrum, while the information at small scales coming from LVK can cut the values of permitted spectral tilts. | 556 | 2208.00188 | 21,212,085 | 2,022 | 7 | 30 | true | false | 2 | MISSION, MISSION |
At this point Padmanabhan has made a simple but powerful consideration. The relation $\lambda =1/m$ would suggest that the heavier is the particle, the smaller is the Compton wavelength. This is correct but it can be valid for a certain mass regime only. At a certain mass scale, gravity necessarily becomes important for the particle itself and can even lead to matter collapse. To avoid such a scenario, one has to make sure that paths smaller than the gravitational radius of the particle, $r_\mathrm{g}$, do not contribute to the path integral REF(#eq:pathint){reference-type="eqref" reference="eq:pathint"}. Accordingly, the most natural way to amend the standard path integral is: FORMULA The additional term implies the same modification emerging from the introduction of the zero point length REF(#eq:zeropoint){reference-type="eqref" reference="eq:zeropoint"} in the propagator REF(#eq:pathint){reference-type="eqref" reference="eq:pathint"}, provided $L_0^2=G$, being $r_\mathrm{g}\sim Gm$. Therefore, we can conclude that $L_0$ has to be of the order of the Planck length. | 1,083 | 2208.05390 | 21,252,568 | 2,022 | 8 | 10 | false | true | 1 | UNITS |
As shown in Fig. REF, when we use the data within the full spatial frequency range, VChGs give good alignment with the Planck polarization is the low-intensity region (approximately $\rm I\le1200$ K m/s). As for the high-intensity region, there are more negative alignments, i.e., the rotated gradients are nearly perpendicular to the Planck polarization (see also [CIT].496.2868L). One possible reason for the anti-alignment is the shock effect [CIT]. Globally, we have AM = 0.73 for the VChGs and the Planck polarization. Similar to the numerical analysis above, we then remove the low-spatial frequency keeping only $k>50$ components in the velocity channel map. After removing low-spatial frequencies, we find the global AM between VChGs and the Planck polarization increases to 0.90. The majority of anti-alignments seen in high-intensity regions is suppressed. Also, in Fig. REF, we plot the correlation of AM and $k_{max}$ which indicates the spatial frequency from $k=0$ to $k=k_{max}$ is removed. For IGs, VCGs, and VChGs, we see the AM keeps a flat curve around value 0.70 until $k_{max}\approx15$. When $k_{max}\ge15$, the AM starts increasing to it maximum value $\approx$ 0.90 at $k_{max}\approx50$. The AM gets drops significantly if $k_{max}>100$. The results agree with our numerical studies above. To sum up, the most significant contribution to gradients comes from small-scale structures. The accuracy of GT can be improved by filtering out low-spatial frequencies. | 1,484 | 2208.06074 | 21,260,206 | 2,022 | 8 | 12 | true | false | 4 | MISSION, MISSION, MISSION, MISSION |
We use the Galaxy masks provided by the Planck Legacy Archive.[^2] These masks are constructed to limit Galactic emission to varying levels. The masks with smaller sky fraction $f_\mathrm{sky}$ restrict the analysis to relatively high latitudes. The masks with larger $f_\mathrm{sky}$ allow more contributions from nearer the Galactic plane. The set of Galaxy masks is shown in Fig. REF. | 387 | 2208.07382 | 21,269,100 | 2,022 | 8 | 15 | true | false | 1 | MISSION |
Fig.REF shows the $n_{s}-r$ constraints coming from the marginalized joint 68% and 95% CL regions of the Planck 2018 in combination with BK14+BAO data on NI (REF) in the regime of $f(Q)$ theory. In the figure, we find the predictions of the model for the allowed values of $\gamma$ in four cases of $\log_{10}(f/M_{pl})$ in comparison with the result released in Planck 2018 for NI (pink color). The panel (a) is drawn for the case of $\log_{10}(f/M_{pl})=0.8$ for the allowed values of $-1.3\leq\gamma\leq-0.5$ in which there is a minimum overlap between NI in $f(Q)$ gravity (green color) and NI in the Planck 2018 release (pink color). By decreasing the value of $\log_{10}(f/M_{pl})$ to $0.6$ in panel (b), one can realize that more regions of two models (*i.e.* NI in Planck and $f(Q)$ gravity) are overlapped together, in particular, for $r>0.05$. In this case, the allowed range of $\gamma$ is reduced to $-1.17\leq\gamma\leq-1$. In panel (c), we decline the value of $\log_{10}(f/M_{pl})$ to $0.5$ for the allowed values of $-1.14\leq\gamma\leq-1.1$. As we can see, the overlapping starts from $r>0.04$ and keeps going for bigger values of $r$. For a more interesting case, we consider $\log_{10}(f/M_{pl})=0.45$ in a narrow range of $-1.13\leq\gamma\leq-1.11$ in which two NI models behave almost the same with a maximum overlap beginning from $r=0.03$. Note that all panel of Fig.REF are plotted for $N=50-60$ and $\alpha\sim M^{2}$. | 1,443 | 2209.06670 | 21,377,527 | 2,022 | 9 | 14 | true | true | 4 | MISSION, MISSION, MISSION, MISSION |
We refer to [section [2.6.3]](#sec:Planck_data_set) for a description of the used Planck 2018 [CIT] data sets and likelihoods. For consistency, we first run the Planck likelihood in our framework and reproduce constraints in agreement with the Planck 2018 results FORMULA | 272 | 2209.07595 | 21,386,249 | 2,022 | 9 | 15 | true | false | 3 | MISSION, MISSION, MISSION |
Adding the mock 21cm observations to Planck 2018 CMB data, it is possible to significantly narrow the constraints, with respect to Planck alone. Although the effect is observable on all the parameters, we get the most significant improvement on $\Omega_c h^2$ and $H_0$, for which the errors are lessen by a fourth. With 21cm multipoles $+$ Planck we estimate $\Omega_c h^2$ and $H_0$ at the $0.25\%$ and $0.16\%$ levels respectively, to be compared with $0.99\%$ and $0.79\%$, obtained with Planck alone. For $\ln (10^{10} A_s)$ and $\sigma_8$ the errors are reduced by more then a factor two. We constrain $(10^{10} A_s)$ at the $0.17\%$ and $\sigma_8$ at the $0.26\%$ level, to be compared with the $0.46\%$ and $0.73\%$ Planck estimates, respectively. Furthermore, we observe that combining the tomographic 21cm data set with CMB alleviates some of the degeneracies between the parameters, resulting in improved constraints. The strongest effect is visible in the $\Omega_c h^2$ - $H_0$ plane. Although 21cm observations are not sensitive to $\tau$, we find that with Planck the improvement on the other parameters is reflected also on $\tau$, reducing the error by a factor two. | 1,183 | 2209.07595 | 21,386,273 | 2,022 | 9 | 15 | true | false | 6 | MISSION, MISSION, MISSION, MISSION, MISSION, MISSION |
It is well-known that Planck dataset strictly set the angular scale FORMULA where $D_A=\int_0^{z_*} {dz\over H(z)}$ is the angular radius to last scattering surface, $r_s=\int_{z_*}^{\infty}\frac{c_s}{H(z)}dz$ is the sound horizon, and $z_*$ is the redshift of last scattering. In pre-recombination EDE setup, $r_s$ is suppressed (see [CIT] for recent result) so that we have a high $H_0$ in light of (REF). Recently, the relevant EDE models have widely studied e.g. [CIT], see also its effects on cosmic birefringence [CIT] and gravitational waves background [CIT]. | 566 | 2209.09685 | 21,400,076 | 2,022 | 9 | 20 | true | true | 1 | MISSION |
In this section, we provide the distributions of mock clusters and real PSZ2 clusters (REF(#fig:Mvsz){reference-type="autoref" reference="fig:Mvsz"}) in mass and redshift and also examples of *Clean mock data set*, *Planck mock data set* and *Planck real data set* clusters (REF(#fig:maps){reference-type="autoref" reference="fig:maps"}). This section supplements the method section in the main article. | 403 | 2209.10333 | 21,405,167 | 2,022 | 9 | 21 | true | false | 2 | MISSION, MISSION |
The above equations are solved together with the Friedmann equation $\dot a/a = H$, where the Hubble rate is given by FORMULA with $M_{\mathrm{P}}$ being the reduced Planck mass. The time-dependent Ricci scalar in this setup is given by FORMULA as the conformally invariant radiation component gives no contribution at the classical level. These equations can be solved independently of the equations of motion for the spectator field. In the latter the scale factor $a$ and the Ricci scalar $R$ then appear as external functions that source the non-trivial behaviour of the $\chi$-field. | 588 | 2209.10945 | 21,411,303 | 2,022 | 9 | 22 | true | true | 1 | UNITS |
Precise measurements of the Planck cosmic microwave background (CMB) angular power spectrum (APS) at small angles have stimulated accurate statistical analyses of the lensing amplitude parameter $A_{L}$. To confirm if it satisfies the value expected by the flat-$\Lambda$CDM concordance model, i.e. $A_{L} = 1$, we investigate the spectrum difference obtained as: the difference of the measured Planck CMB APS and the Planck best-fit $\Lambda$CDM APS model. To know if this residual spectrum corresponds to statistical noise or if it has a hiden signature that can be accounted for with a larger lensing amplitude $A_{L} > 1$, we apply the Ljung-Box statistical test and find, with high statistical significance, that the spectrum difference is not statistical noise. This spectrum difference is then analysed in detail using simulated APS, based on the Planck $\Lambda$CDM best-fit model, where the lensing amplitude is a free parameter. We explore different binnations of the multipole order \,$\ell$\, and look for the best-fit lensing amplitude parameter that accounts for the spectrum difference in a $\chi^2$ procedure. We find that there is an excess of signal that is well explained by a $\Lambda$CDM APS with a non-null lensing amplitude parameter $A_{lens}$, with values in the interval $[0.10,0.29]$ at 68\% confidence level. Furthermore, the lensing parameter in the Planck APS should be $1 + A_{lens} > 1$ at $\sim 3 \sigma$ of statistical confidence. Additionally, we perform statistical tests that confirm the robustness of this result. Important to say that this excess of lensing amplitude, not accounted in the Planck's flat-$\Lambda$CDM model, could have an impact on the theoretical expectation of large-scale structures formation once the scales where it was detected correspond to these matter clustering processes. | 1,837 | 2209.11660 | 21,415,967 | 2,022 | 9 | 23 | true | false | 6 | MISSION, MISSION, MISSION, MISSION, MISSION, MISSION |
The only exception in which the disagreement between ACT-DR4 and Planck is reduced below the threshold of $2\sigma$, is the minimal extension $\Lambda\text{CDM}+N_{\rm eff}$ where the effective number of relativistic degrees of freedom, $N_{\rm eff}$, can vary freely. In this case, our analysis confirms previous results discussed in literature [CIT] about a mild-to-moderate preference of the ACT-DR4 data for smaller amounts of radiation in the early Universe than expected in the Standard Model of particle physics[^3] ($N_{\rm eff}=2.35^{+0.40}_{-0.47}$ at 68% CL). However it is interesting to point out that this parameter can partially reduce the disagreement between the two experiments at the Gaussian equivalent level of 1.8 standard deviations. | 756 | 2209.14054 | 21,435,989 | 2,022 | 9 | 28 | true | false | 1 | MISSION |
First and foremost, the decay to massless vectors is free from external parameters and the corresponding characteristic lifetime can be evaluated explicitly: FORMULA Here $T_\text{Pl} = \hbar/(M_\text{Pl}, c^2) \simeq 5.391 \times 10^{-44} \text{ sec}$ is the Planck time scale. The smaller the scalaron mass, the bigger the expected lifetime. For $M \sim 10^{-7} M_\text{Pl}$ the characteristic lifetime is about a second. For $M$ equal to the top quark mass the characteristic lifetime is $\tau_{\text{S}\to\text{v}\overline{\text{v}}} \sim 10^{64} \text{ sec}$ which is by $47$ order of magnitude more then the age of the Universe. Finally, the lifetime is equal to the age of the Universe for $M/M_\text{Pl} \sim 10^{-10}$ which marks the region of scalaron masses for which the discussed decay mechanism is relevant: FORMULA | 830 | 2210.00781 | 21,452,799 | 2,022 | 10 | 3 | false | true | 1 | UNITS |
In this example, the dilution factor $\Delta_{\rm NR}$ represents the duration of the non--relativistic expansion period and is not immediately related to the reheating temperature. Since it is bounded from above by ${\cal O}(10)$ and $H_{\rm end} \sim 10^{13}\;$GeV, the constraints on feebly interacting stable scalars in this model are quite strong. According to (REF), such scalars with masses above 1 TeV or so are ruled out. The inflationary constraint depends on the size of the $\phi-s$ couplings. If these are very small, the bound (REF) yields[^9] FORMULA taking into account the non-thermalization constraint on $\lambda_s$ [CIT]. On the other hand, if the $\phi-s$ couplings are substantial as in (REF), the inflationary constraint is not applicable and we only get a set of constraints on $\lambda_{\phi s}$ and Planck--suppressed operators listed in Section [4]. For $\Delta_{\rm NR} \lesssim 10$, they are all very strong. Generic order one Wilson coefficients are allowed only if (cf.,Eq.,REF) FORMULA We thus conclude that the existence of stable feebly interacting scalars in this framework would be problematic, unless they are extremely light or quantum gravity corrections are well under control. | 1,217 | 2210.02293 | 21,465,169 | 2,022 | 10 | 5 | true | true | 1 | UNITS |
In classical Bianchi-I spacetimes, underlying conditions for what dictates the singularity structure - whether it is anisotropic shear or energy density, can be easily determined from the generalized Friedmann equation. However, in non-singular bouncing anisotropic models these insights are difficult to obtain in the quantum gravity regime where the singularity is resolved at a non-vanishing mean volume which can be large compared to the Planck volume, depending on the initial conditions. Such non-singular models may also lack a generalized Friedmann equation making the task even more difficult. We address this problem in an effective spacetime description of loop quantum cosmology (LQC) where energy density and anisotropic shear are universally bounded due to quantum geometry effects, but a generalized Friedmann equation has been difficult to derive due to the underlying complexity. Performing extensive numerical simulations of effective Hamiltonian dynamics, we bring to light a surprising, seemingly universal relationship between energy density and anisotropic shear at the bounce in LQC. For a variety of initial conditions for a massless scalar field, an inflationary potential, and two types of ekpyrotic potentials we find that the values of energy density and the anisotropic shear at the quantum bounce follow a novel parabolic relationship which reveals some surprising results about the anisotropic nature of the bounce, such as the maximum value of the anisotropic shear at the bounce is reached when the energy density reaches approximately half of its maximum allowed value. The relationship we find can prove very useful for developing our understanding of the degree of anisotropy of the bounce, isotropization of the post-bounce universe, and discovering the modified generalized Friedmann equation in Bianchi-I models with quantum gravity corrections. | 1,884 | 2210.07257 | 21,496,620 | 2,022 | 10 | 13 | true | false | 1 | UNITS |
We repeat the cosmological constraints on the WZDR model with the same datasets discussed in Ref. [CIT]. Meanwhile, we adopt a new parameterization with $N_{\rm{ur}}$ being a free parameter (denote as (f)WZDR). This datasets includes: the Planck Planck 2018 low-$\ell$ TT+EE and high-$\ell$ TT+TE+EE temperature and polarization power spectrum [CIT], as well as CMB lensing [CIT], the BAO measurements [CIT], the PANTHEON supernovae data [CIT], as well as measurements from SH0ES [CIT] via a prior on the intrinsic magnitude of supernovae $M_b$ [CIT]. Constraints at $68\%$ C.L. on the cosmological parameters and $\chi^2$ statistics can be found in Table REF. The posterior distributions and for WZDR and (f)WZDR parameterization are shown in Fig. REF and REF, respectively. | 775 | 2210.06851 | 21,497,631 | 2,022 | 10 | 13 | true | false | 2 | MISSION, MISSION |
Despite this success, the increasingly precise low-redshift measurements have started hinting at discrepancies between the observed $\Lambda$CDM parameter values and CMB predictions. Most significantly, the SN measurements of the Hubble parameter ($H_0$) suggest an expansion rate of the Universe today that is 5$\sigma$ above the prediction coming from Planck satellite CMB observations [CIT]. Additionally, there are also inconsistencies surrounding the amplitude of the weak lensing signal, as described by $S_8=\sigma_8\sqrt{\Omega_{\rm{m}}/0.3}$ (where $\Omega_{\rm{m}}$ is the relative matter density parameter, and $\sigma_8$ is the linear density field variance as measured on a scale of 8 $h^{-1}$Mpc), with different weak lensing surveys finding a value that is 2-3$\sigma$ below Planck's best fit prediction [CIT]. | 825 | 2210.07304 | 21,501,753 | 2,022 | 10 | 13 | true | false | 2 | MISSION, MISSION |
When, in addition to $w$, the curvature is also allowed to vary, BOSS+eBOSS and Planck become discrepant, most significantly in the $w-\omega_{\rm DE}$ plane and, subsequently, in the resulting values of $\sigma_{12}$, with Planck preferring a 2.4$\sigma$ higher value than BOSS+eBOSS. Varying dark energy models are, therefore, not able to bring the two probes in a better agreement for curved cosmologies. In addition to this result, our physical curvature density constraint for Planck $\omega_{\rm{K}}=-0.0116^{+0.0029}_{-0.0036}$ prefers a curved Universe at $4\sigma$ significance, which is $2.4\sigma$ higher than what is found using $\Omega_{\rm{K}}$. | 659 | 2210.07304 | 21,501,862 | 2,022 | 10 | 13 | true | false | 3 | MISSION, MISSION, MISSION |
Figure REF presents the comparison of $S_8$ ($\Sigma_8$) from our peak analyses with that from other studies. For $S_8$, besides the best-fit value (black filled circles), we also show the mean (orange) and the median (yellow) values of our studies. It is seen that our results are in good accordance with other WL studies [CIT]. Within the HSC-SSP analyses, our constraints are in excellent agreements with that from cosmic shear power spectra measurements in [CIT]. They are about $1.3-1.8\sigma$ smaller than the update results from 2-point correlation studies of [CIT]. Comparing to that from Planck 2018 [CIT], our $S_8$ values are about $2.0\sigma$ smaller, which is calculated by subtracting our best-fit $S_8$ value from the Planck best-fit and then dividing by the square root of the quadratic sum of our $S_8$ error on its higher side and that from Planck. We also note that without accounting for the boost correction in our theoretical modelling, $S_8$ is biased to a lower value. For the current analyses, this bias is already close to $1\sigma$. | 1,059 | 2210.07853 | 21,504,686 | 2,022 | 10 | 14 | true | false | 3 | MISSION, MISSION, MISSION |
Both the Planck satellite and the Atacama Cosmology Telescope show intriguing anomalies that seem to challenge the typical predictions of inflationary theories: the first data-set seems to disfavor the inflationary prediction for a flat background geometry at more than 99.9 % CL while the second, albeit in perfect agreement with spatial flatness, shows a preference for a larger spectral index consistent with a Harrison-Zel'dovich scale-invariant spectrum ($n_s=1$) of primordial density perturbations, introducing a tension with a significance of 99.3% CL with the results from the Planck satellite. These anomalies suggest either the presence of important observational systematic errors in one or both data-sets or a departure from the theoretical framework. In this work we have extensively explored both possibilities, extending the analysis presented in Ref. [CIT]. | 874 | 2210.09018 | 21,512,196 | 2,022 | 10 | 17 | true | false | 2 | MISSION, MISSION |
The action for a complex scalar field $\Phi = \frac{f}{\sqrt{2}}e^{i \theta}$ with non-minimal coupling with gravity in the Palatini formulation is given as FORMULA where $g=\det g_{\mu\nu}$, the mass parameter $M$ is defined as $M^2 = M_P^2 - \xi f_a^2$ with the Planck mass $M_P = 1/\sqrt{8 \pi G} \approx 2.4\times 10^{18} {\rm GeV}$ and $V(|\Phi|)=\lambda \left(|\Phi|^2 -f_a^2/2 \right)^2$. The vacuum expectation value of $\Phi$ is $\sqrt{\langle \Phi^\dagger \Phi \rangle} =f_a/\sqrt{2}$ such that the gravitational coupling becomes canonical at the vacuum. In order to keep the correct sign of the kinetic term of the graviton, we request $M^2 \geq 0$, or equivalently $\xi \leq M_P^2/f_a^2$. The Ricci scalar $R(\Gamma)$ is obtained from the Ricci tensor, that is explicitly given as FORMULA Due to the absence of second-order derivatives, unlike the metric formulation, the Gibbons-Hawking-York boundary term [CIT] is not necessary in the Palatini formulation. We take the Euclidean geometry with spherical symmetry, $ds^2 = dr^2 + a(r)^2 d^2\Omega_3$, where $r$ is the Euclidean time, $d^2\Omega_3$ is the line element on the three-dimensional unit sphere and $a$ is the radius of the sphere. We assume that $f$ and $\theta$ depend only on $r$, taking the spherical symmetry into account. | 1,299 | 2210.11330 | 21,532,962 | 2,022 | 10 | 20 | false | true | 1 | UNITS |
To this end, he will want to hover above the horizon to examine near-horizon modes. However, to do so he would need to sustain Planck-scale accelerations.[^14]And even if he could sustain such accelerations, his experiments would require Planck-length sensitivity because the physics of the true horizon would be behind the stretched horizon, and the gap between the two is just one Planck length. Thus, Bob would need control over Planck-scale physics to conduct his test ([CIT]). So it seems impossible that Bob will experimentally detect the failure of some prediction of quantum mechanics. | 593 | 2211.15650 | 21,570,970 | 2,022 | 10 | 28 | false | true | 4 | UNITS, UNITS, UNITS, UNITS |
Another important cosmological parameter in the context of the weak-lensing measurements is $S_8\equiv\sigma_8\sqrt{\Omega_m/0.3}$ calculated at the present time. We found from the eBOSS QSO analysis, FORMULA This value is consistent with the Planck CMB value $S_8=0.832 \pm 0.013$. It also $1.8-1.9\sigma$ higher than the DES-Y3, $S_8 = 0.776^{+0.017}_{-0.017}$ [CIT] and KiDS-1000 $S_8 = 0.759^{+0.024}_{-0.021}$ [CIT] results. | 429 | 2210.17044 | 21,574,263 | 2,022 | 10 | 31 | true | true | 1 | MISSION |
As explained in section [1], exactly soluble models are often well suited to sharpen conceptual issues. One such issue is: how does one's description of physical reality change when one passes from a less accurate theory to a more accurate one? General relativity, for example, opened entirely new classes of phenomena and possibilities that could not be envisaged in Newtonian gravity. These arise because, with both $G$ and the velocity of light $c$ at one's disposal, a new scale arises. One can now associate a length with a mass --the Schwarzschild radius $R_{\rm Sch} = 2GM/c^2$ that is not available in the Newtonian theory since it knows only about $G$-- and this scale then unleashes an entirely new class of phenomena. Similarly, in the quantum theory Planck's constant $\hbar$ provides new scales, dramatically changing our understanding of the atomic world and leading to a plethora of unforeseen phenomena that have shaped physics of the micro-world. In quantum gravity one has access to all three of these fundamental constants and there have been speculations, dating back to Planck himself, on the nature of new physics that would arise at Planck length, Planck frequency, and Planck density. Exactly soluble models provide a clean-cut platform to discuss the nature of this new physics. In this subsection we will leave $c$ unchanged but switch on and off $\hbar$ and $G$ and examine how the nature of physical reality changes. | 1,444 | 2211.01525 | 21,591,453 | 2,022 | 11 | 3 | false | true | 5 | CONSTANT, PERSON, UNITS, UNITS, UNITS |
Our analysis focuses on the Higgs/Starobinsky inflation model. This choice is well motivated for several reasons. First, the potential has a single parameter, fixed through the CMB power spectrum normalization. This restricts the parameter space to explore to the initial conditions of the field. Second, it is the best favoured (and the simplest) inflation model after Planck [CIT]. Third, the model has been considered in [CIT], which allows us to compare some of our results to the literature. In particular, we reproduce the case of a Gaussian field fluctuation on top of a background field value lying in the slow-roll region. But compared to previous work, our analysis has been extended to study more exhaustively the dynamics of the preinflation era and determine where inflation can take place and for which fluctuation sizes. Not only we consider the case of an inhomogeneous initial field, but also cases with inhomogeneous field velocity and equation of state that had not been considered so far. Some universal behaviours in the field and space-time dynamics are identified, depending on the characteristic fluctuation sizes. Our results further support the robustness of inflation against various configurations of initial conditions. And for sub-Hubble and Hubble-sized fluctuations, they better emphasize a universal behaviour in the form of an oscillating equation of state, a signature of the respectively quick or slow wobbling between field gradient and kinetic terms that alternatively dominate the total energy density. | 1,541 | 2211.03534 | 21,604,683 | 2,022 | 11 | 7 | true | false | 1 | MISSION |
When the $A_L$ parameter is allowed to vary, the three different sets of constraint contours overlap in all four models (see Figs. REF--REF). In the non-flat models there now is a bigger degeneracy between the cosmological parameters $\Omega_m$-$H_0$-$\Omega_k$-$A_L$ which causes the constraint contours to expand relative to the $A_L = 1$ case, especially for P18 data. For some parameters in the untilted non-flat $\Lambda$CDM model and the tilted non-flat $\Lambda$CDM new $P(q)$ model we observe a bimodal distribution when only P18 data is used, and the same parameters in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model have an almost bimodal distribution for P18 data. These bimodalities are likely a consequences of the above-mentioned geometrical degeneracy. | 773 | 2211.04268 | 21,610,932 | 2,022 | 11 | 8 | true | false | 1 | MISSION |
First, consider the constraints (or lack thereof) on $k_{\rm ent}$. The uniform distribution on $k_{\rm ent}$ occurs because we vary $\mu$ and $s_0$ over a large range, most notably also sampling small values for these parameters which consequently lead to negligibly small deviations from the BD spectrum. Thus, if these distinguishing features are small, it does not matter where on the spectrum they occur as they would all lie in the Planck error budget. | 458 | 2211.11079 | 21,664,715 | 2,022 | 11 | 20 | true | true | 1 | MISSION |
In Fig. REF the temperature maps of the disks are presented. These maps were made with `bettermoments` using a clip of 3$\times$SNR and the full Planck expression. A radial cut along the disks major axes following the projected fitted height profiles of these temperature maps can be found in the left column of Fig. REF. Each disk shows a decrease in temperature close to the star due to beam dilution, as the emitting region shrinks and no longer fills the beam [e.g., [CIT]]. Also, for HD 142666, the relatively low velocity resolution can lower the inferred temperature by underresolving the line. Both Fig. REF and REF show large differences between temperatures of individual disks. However, no clear trend is present between the group I and group II sources. Disks from either group can be found with very similar temperature profiles. For example HD 100453/AK Sco/HD 142666 and HD 34282/HD 163296/MWC 480. The only clear outliers are HD 100546 and HD 97048, which are much warmer in the region out to 450 au compared to the other disks. | 1,044 | 2211.12532 | 21,675,280 | 2,022 | 11 | 22 | true | false | 1 | LAW |
Consequently, gauge-symmetry-based solutions to the quality problem need to be implemented at any scale above the PQ scale, and in any sector which can possibly communicate with the axion. More generally, this discussion suggests that studies of the axion quality problem (or really of the fate of any global symmetry coupled to gravity) in terms of Planck-suppressed operators in an effective framework at an intermediate (e.g., PQ) scale should either include explicit UV assumptions (e.g., a "desert\" until the Planck scale or a secluded PQ sector) or use a conservative $M_P$ scaling for the symmetry-breaking operators. If one works instead with a full-fledged UV completion where all heavy fields below $M_P$ are specified, one should nevertheless evaluate all the relevant (classical or quantum) contributions to the PQ-breaking lagrangian. | 848 | 2212.00102 | 21,713,046 | 2,022 | 11 | 30 | false | true | 2 | UNITS, UNITS |
The goal of the validation is to ensure that `panco2`is able to recover accurate pressure profiles from different types of data. To that end, we seek to create realistic synthetic cluster maps from three instruments: the Planck satellite, the South Pole Telescope (SPT), and the NIKA2 camera at the IRAM 30-m telescope. The choice of these three instruments is motivated by their vastly different angular resolutions: the Compton$-y$ maps built from Planck and SPT data have angular resolutions (expressed as the full width at half maximum, or FWHM) of $10'$ and $1.25'$, respectively [CIT], and the beam of the NIKA2 camera 150 GHz band -- used for tSZ mapping -- has an FWHM of $18''$ [CIT]. | 693 | 2212.01439 | 21,723,732 | 2,022 | 12 | 2 | true | false | 2 | MISSION, MISSION |
The extreme energies of UHECR, as high as $10^{11}$ GeV, eleven orders of magnitude above the proton mass and \"only\" eight below the Planck mass, are a unique workbench to probe new ideas, models and theories beyond the Standard Model (SM) of particle physics, which show their effects at energies much larger than those ever obtained, or obtainable in the future, in accelerator experiments. This is the case of theories with Lorentz invariance violations [CIT] or models of Super-Heavy Dark Matter (SHDM) [CIT], that connect UHECR observation with the dark sector and cosmological observations. | 598 | 2212.01600 | 21,724,381 | 2,022 | 12 | 3 | true | true | 1 | UNITS |
In this note, we will consider an extreme version of scale separation, corresponding to $a=0$, which is to assume that the internal dimensions are not there at all or are frozen at the Planck scale: one just has a pure $d$-dimensional theory of gravity, with vacuum $AdS_d$, dual to CFT$_{d-1}$. A particular case of this are theories including fields in the gravity (super)multiplet only -- theories of "pure" supergravity. The non-supersymmetric version of this question (the existence of pure gravity in $AdS$) remains open after much work, even for $AdS_3/CFT_2$. Recently, [CIT] considered the holographic dual to pure $\mathcal{N}=8$ SUGRA[^1] in $AdS_5$ in the large $AdS$ limit, which is dual to the $\mathcal{N}=4$ stress-energy tensor superconformal multiplet (and multi-trace descendants of it); they found that the model satisfied, and in fact saturated, certain bootstrap constraints. This was interpreted in [CIT] as indirect evidence that pure $\mathcal{N}=8$ SUGRA in $AdS_5$, without the $S^5$ or any other internal dimensions, might make sense as a consistent quantum gravity. | 1,094 | 2212.01697 | 21,724,544 | 2,022 | 12 | 3 | false | true | 1 | UNITS |
The HROs of ten nearby Gould Belt molecular clouds (at distances of less than 450 pc, namely Taurus, Ophiuchus, Lupus, Chamaeleon--Musca, Corona Australis, Aquila Rift, Perseus, IC5146, Cepheus, and Orion) have been measured using the Planck data smoothed to 10$\arcmin$ resolution [CIT]. The $\xi$ of the HROs are found to decrease with increasing $N(H)$, indicating field orientation from preferentially parallel or having no preferred orientation at the lowest $N(H) \sim 10^{21}$ cm$^{-2}$ of the data to preferentially perpendicular at the highest $N(H) \sim 10^{22.5}$ cm$^{-2}$ of the data. Except for the Corona Australis cloud that shows an almost flat slope of $\xi$, the $C_{HRO}$ of the other nine clouds have a range from $-0.22$ to $-0.68$, and the $X_{HRO}$ have a range from $21.67$ to $22.70$. The HROs of the ten clouds and the high-latitude cloud L1642 have further been studied between the $N(H)$ derived from Herschel data at 20$\arcsec$ resolution and the magnetic fields inferred from Planck 850 $\mu$m polarization data, and negative slopes of $\xi$ versus $N(H)$ are identified [CIT]. Besides the Planck polarization data, the HRO analysis applied to the BLASTPol data at 250 $\mu$m, 350 $\mu$m, and 500 $\mu$m at 3$\arcmin$ resolution toward the Vela C molecular complex with $N(H)$ from 10$^{21.7}$ cm$^{-2}$ to 10$^{23.3}$ cm$^{-2}$ also suggests a similar trend of HRO as the Planck results [CIT]. | 1,426 | 2212.01981 | 21,726,013 | 2,022 | 12 | 5 | true | false | 4 | MISSION, MISSION, MISSION, MISSION |
In this letter we now investigate these models with the real Planck data using the binned bispectrum estimator derived in [CIT]. We determine the central value and the error bars of $f_\mathrm{NL}$ for the bispectrum shapes proposed in [CIT] from the data and find that there is no detection. Moreover, the values of $f_\mathrm{NL}$ required in order to remove the anomalies are excluded by 5.4$\sigma$, 6.4$\sigma$ and 14$\sigma$ for the three models considered. | 463 | 2212.05977 | 21,760,157 | 2,022 | 12 | 12 | true | false | 1 | MISSION |
It is now straightforward to compute the species scale associated to these modes using REF(#Lambdasp){reference-type="eqref" reference="Lambdasp"}, again with $n=1$. To do that we need the mass of the KK modes in Planck units. For a string compactification to 4d the relation between string and Planck scale is given by FORMULA In the conifold limit, we can expand the volume of the Calabi--Yau threefold as FORMULA Asymptotically the ratio $\mathcal{V}_{\rm rest}/g_s^2$ is kept constant such that $M_{\rm pl}^2\sim \big|\log|\mu|\big| M_s^2$. Using this and REF(#MKKMS){reference-type="eqref" reference="MKKMS"} the mass of the KK modes on the cigar in Planck units is thus given by FORMULA Using REF(#Lambdasp){reference-type="eqref" reference="Lambdasp"} we then find FORMULA which is consistent with $N_{\rm sp}\sim \big|\log |\mu|\big| \sim F_1$ in accordance with REF(#proposalasymptotic){reference-type="eqref" reference="proposalasymptotic"}. | 951 | 2212.06841 | 21,769,215 | 2,022 | 12 | 13 | false | true | 3 | UNITS, UNITS, UNITS |
However, the cluster-correlated CIB emission, which is what ought to be deprojected, is not a point-like source for an experiment with the resolution of Planck. This can be seen in Figure REF, which shows the CIB emission from the Websky simulation convolved by the Planck beam at 100,GHz stacked at the locations of the Wesbky clusters with $M_{500} > 10^{14} M_{\odot}$ and redshifts within $\Delta z = 0.02$ of four reference redshifts. We note that we have subtracted the value of the stacked profiles at the maximum radius considered (51.36,arcmin), effectively removing the contribution from uncorrelated CIB emission, and we have rescaled the profiles so that they all have unit values at the centre. The Planck isotropic beam at 100,GHz is also shown for comparison. The cluster-correlated CIB emission is clearly not point-like for any of the redshifts considered, which encompass those of most of the Planck clusters. This is also true for the remaining Planck HFI channels, as they have smaller beams. These findings are consistent with observations, with extended dust emission at the locations of the Planck clusters having been detected using Planck data [CIT]. | 1,175 | 2212.07410 | 21,775,488 | 2,022 | 12 | 14 | true | false | 7 | MISSION, MISSION, MISSION, MISSION, MISSION, MISSION, MISSION |
We have then introduced a spectrally constrained iterative MMF, or sciMMF, which is designed to remove the signals from a set of foregrounds with given SEDs, and we have applied it to our simulated Planck-like data. We have shown that our sciMMF can be highly effective at suppressing the CIB-induced bias from the cluster observables (signal-to-noise and $y_0$) and from the survey completeness while incurring only a small signal-to-noise penalty (see Figures REF, REF, REF, and REF). In particular, we have identified three cases featuring an acceptable loss of signal-to-noise for which the cluster observables are virtually unbiased across signal-to-noise and redshift. These are the 100--857,GHz sciMMF with deprojection of the mean CIB SED and its first-order moment with respect to either the dust emissivity index $\beta$ or the dust temperature parameter $\beta_T = T_0^{-1}$ (87.5,% and 93.9,% of signal-to-noise retained relative to the optimal unconstrained iMMF, respectively), and the 100--353,GHz sciMMF with deprojection of the mean CIB SED (65.16,% signal-to-noise retained). | 1,093 | 2212.07410 | 21,775,494 | 2,022 | 12 | 14 | true | false | 1 | MISSION |
In order to make sense of the constraints on $f^{\rm eq}_{\rm NL}$ we need to understand the other energy scales in the problem [CIT], as illustrated in Figure REF. The scale where the symmetry is broken can be defined by the Goldstone boson decay constant $f_\pi$ which in canonical slow roll models is given by $f_\pi^4 = \dot \phi^2$. Interestingly, in the EFT of inflation, this scale controls the amplitude of scalar fluctuations, $4 \pi^2 A_s = H^4 /f_\pi^4$, so that we know $f_\pi = 59 H$ from the value of $A_s$ measured by Planck, $A_s \approx 2.1\times 10^{-9}$. In order for the model of inflation to be described by a weakly coupled scalar, the UV scale $\Lambda$ must be larger than the scale of the background $\Lambda \gg f_\pi$ or $f^{\rm eq}_{\rm NL}\ll 1$ [CIT]. In contrast, any measurement of $f^{\rm eq}_{\rm NL}> 1$ would exclude canonical slow roll inflation. In addition, there is a conjecture that $f^{\rm eq}_{\rm NL}\ll 1$ is always described by a weakly coupled fundamental scalar [CIT]. In this precise sense, $f^{\rm eq}_{\rm NL}$ allows us to distinguish mechanisms of inflation. | 1,111 | 2212.08685 | 21,789,431 | 2,022 | 12 | 16 | true | true | 1 | MISSION |
The outcome of scattering processes at transplanckian center-of-mass energy and subplanckian impact parameter depends on the dynamics of quantum gravity. To obtain some intuition for what the outcome could be in asymptotically safe quantum gravity, we perform an RG improvement of the hoop conjecture. To that end, we simply take the classical Schwarzschild radius, $R_{S,, \rm cl} = \sqrt{2 G_{N} M}$ (with $c=1$), and upgrade $G_{N}$ to its scale-dependent counterpart, $G_{N}= G_{N}(k)$. We subsequently identify $k = \xi/b$, where $b$ is the impact parameter and $\xi$ is a number of order one. We thereby obtain an RG-improved $R_S$ that we can compare to its classical counterpart. If $R_S(b) \geq R_{S,, \rm cl}$, then the classical argument underestimates the impact parameter, at which black holes form. Conversely, if $R_S(b) < R_{S,, \rm cl}$, then the classical argument overestimates the impact parameter, at which black holes form. Because $G_{N}(k<M_{\rm Planck})= \rm const$, the classical and the quantum estimate agree for superplanckian impact parameter. For subplanckian impact parameter, $G_{N}(k> M_{\rm Planck}) \sim k^{-2} \sim %b^{-2} {b^2}$ implies that the RG-improved $R_S$ shrinks linearly with decreasing impact parameter: FORMULA In this regime, the critical radius at which black-hole formation occurs is therefore smaller than suggested by the classical estimate. Therefore, the classical hoop conjecture does not generically apply in this simple RG-improved setup. Whether this simple argument captures the relevant gravitational dynamics in asymptotic safety is of course an open question. | 1,624 | 2212.09495 | 21,796,163 | 2,022 | 12 | 19 | true | false | 2 | UNITS, UNITS |
Inflation is the leading paradigm to address the fine-tuned problem of the initial conditions of the Universe [CIT], though the nature of the inflaton field responsible for driving inflation remains unknown. The latest results from Planck, WMAP, and BICEP/Keck observations suggest that there is not significant primordial tensor perturbations [CIT], which implies that concave-like inflaton potentials are more favorable. | 422 | 2212.12183 | 21,822,076 | 2,022 | 12 | 23 | false | true | 1 | MISSION |
Before proceeding to the next section, we note the following: In this paper the Planck length $\ell_{pl}$ and mass $M_{pl}$ are defined, respectively, by $\ell_{pl} \equiv \sqrt{G\hbar/c^3}$ and $M_{pl} \equiv \sqrt{\hbar c/G}$, where $G$ denotes the Newtonian constant, $\hbar$ is the Planck constant divided by $2\pi$, and $c$ is the speed of light (Note that in the main text, $c$ will be used to denote a phase space variable, and only in this paragraph we use it to denote the speed of light, without causing any confusion.). Thus, in terms of the fundamental units, $M$, $L$ and $T$, the units of $\hbar$ and $c$ are $\left[\hbar\right] = M L^2T^{-1}, \; \left[c\right] = LT^{-1}$, where $M$, $L$ and $T$ denote the units of mass, length and time, respectively. In this paper we shall adopt the natural units, so that $\hbar = c = 1$. Then, we find $L = T, \; M = L^{-1}$, $\left[G\right]=L^3 M^{-1} T^{-2}=L^2$. In addition, all the figures will be plotted in the units of $\ell_{pl}$ and $M_{pl}$, whenever the length and mass parameters are involved. | 1,059 | 2212.14535 | 21,836,561 | 2,022 | 12 | 30 | true | true | 2 | UNITS, CONSTANT |