1607.02445_arXiv.txt
We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a ``constrained parameter goodness-of-fit'' test statistic, whose $p$-value we then use to define a ``plausibility region'' (\eg where $p \geq 10\%$). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (\eg $p < 10\%$). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.
Astrophysical and cosmological evidence indicates that roughly $85 \%$ of the matter in the Universe is in the form of dark matter (DM), most likely composed of as-yet-unknown elementary particles. Arguably the most extensively studied DM particle candidate is a weakly interacting massive particle (WIMP), which offers both theoretical appeal and hope for near-future detection. Most of the matter in our own galaxy resides in a spheroidal dark halo that extends well beyond the visible disk. Direct DM detection experiments represent one of the primary WIMP search methods currently employed. These experiments attempt to measure the recoil energy of nuclei after they collide with DM particles of the galactic dark halo passing through Earth. The current status of DM direct detection experiments remains ambiguous, with three experiments observing a potential DM signal and all others reporting upper bounds, some of which appear to be in irreconcilable conflict with the putative detection claims for most particle candidates~\cite{Bernabei:2010mq, Aalseth:2010vx, Aalseth:2012if, Aalseth:2011wp, Aalseth:2014eft, Aalseth:2014jpa, Agnese:2013rvf, Angle:2011th, Aprile:2011hi, Aprile:2012nq, Felizardo:2011uw, Archambault:2012pm, Behnke:2012ys, Ahmed:2012vq, Agnese:2015ywx,Akerib:2015rjg,Agnese:2015nto,Agnese:2014aze}. Interpreting the results of DM direct detection experiments typically requires assumptions on the local DM density, the DM velocity distribution, the DM-nuclei interaction, and the scattering kinematics. The uncertainties associated with these inputs can significantly affect the expected recoil spectrum (both in shape and magnitude) for a particular experiment, as well as the observed compatibility between experimental data sets.
Attempts have been made to remove the astrophysical uncertainty from direct DM detection calculations, and compare data in a ``halo-independent" manner, by translating measurements and bounds on the scattering rate into measurements and bounds on a function we will refer to as $\teta(\vmin,t)$ common to all experiments, which contains all of the information on the local DM density and velocity distribution (see \eg~\cite{Fox:2010bz,Fox:2010bu,Frandsen:2011gi,Gondolo:2012rs,HerreroGarcia:2012fu,Frandsen:2013cna,DelNobile:2013cta,Bozorgnia:2013hsa,DelNobile:2013cva,DelNobile:2013gba,DelNobile:2014eta,Feldstein:2014gza,Fox:2014kua,Gelmini:2014psa,Cherry:2014wia,DelNobile:2014sja,Scopel:2014kba,Feldstein:2014ufa,Bozorgnia:2014gsa,Blennow:2015oea,DelNobile:2015lxa,Anderson:2015xaa,Blennow:2015gta,Scopel:2015baa,Ferrer:2015bta,Wild:2016myz,Kahlhoefer:2016eds}). The function $\teta(\vmin,t)$ depends on the time $t$ and a particular speed $\vmin$. The physical interpretation of $\vmin$ depends on the type of analysis being used. If the nuclear recoil $\ER$ is considered an independent variable, then $\vmin$ is understood to be the minimum speed necessary for the incoming DM particle to impart a nuclear recoil $\ER$ to the target nucleus (and thus it depends on the target nuclide $T$ through its mass $m_T$, $\vmin^T=\vmin(\ER,m_T)$). This has been the more common approach~\cite{Fox:2010bz,Frandsen:2011gi,Frandsen:2013cna}. Alternatively, one can choose $\vmin$ as the independent variable, in which case $\ER^{T}$ is understood to be the extremum recoil energy (maximum for elastic scattering, and either maximum or minimum for inelastic scattering) that can be imparted by an incoming WIMP traveling with speed $v = \vmin$ to a target nuclide $T$. Note that for elastic scattering off a single nuclide target the two approaches are just related by a simple change of variables. 
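To make the kinematics just described concrete, the standard relation between $\vmin$ and the recoil energy is $\vmin = |m_T \ER/\mu_T + \delta|/\sqrt{2 m_T \ER}$, with $\mu_T$ the DM-nucleus reduced mass and $\delta$ the mass splitting ($\delta=0$ for elastic scattering). A minimal numerical sketch of this relation (the function name and unit conventions are ours, not from the paper):

```python
import math

def vmin_of_ER(ER_keV, mT_GeV, m_GeV, delta_keV=0.0):
    """Minimum DM speed (km/s) required to impart recoil energy ER
    to a nucleus of mass mT, for DM mass m and mass splitting delta
    (delta = 0: elastic; delta < 0: exothermic scattering).
    vmin = |mT*ER/mu + delta| / sqrt(2*mT*ER), in units of c."""
    c_km_s = 2.998e5                           # speed of light in km/s
    ER = ER_keV * 1e-6                         # keV -> GeV
    delta = delta_keV * 1e-6                   # keV -> GeV
    mu = m_GeV * mT_GeV / (m_GeV + mT_GeV)     # reduced mass in GeV
    return abs(mT_GeV * ER / mu + delta) / math.sqrt(2.0 * mT_GeV * ER) * c_km_s
```

For elastic scattering this reduces to $\vmin=\sqrt{m_T \ER/2}/\mu_T$; note that a negative $\delta$ (exothermic scattering) lowers $\vmin$ at fixed $\ER$.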
We will choose to treat $\vmin$ as an independent variable for the remainder of this paper, as this choice allows us to account for any isotopic target composition by summing terms dependent on $\ER^{T}(\vmin)$ over target nuclides $T$, for any fixed detected energy $\Ed$. Early halo-independent analyses were limited in the way they handled putative signals. Only weighted averages on $\vmin$ intervals of the unmodulated component of $\teta(\vmin,t)$, $\teta^0(\vmin)$, and of the amplitude of the annually modulated component, $\teta^1(\vmin)$, (see \Eq{eta} below) were plotted against upper bounds in the $\vmin - \teta$ plane (see \eg \cite{Fox:2010bz,Frandsen:2011gi,Gondolo:2012rs,DelNobile:2013cva}). This type of analysis leads to a poor understanding of the compatibility of various data sets. Recently, attempts have been made to move beyond this limited approach of taking averages over $\vmin$ intervals by finding a best fit $\teta^0$ function and constructing confidence bands in the $\vmin - \teta$ plane \cite{Fox:2014kua,Gelmini:2015voa}, from unbinned data with an extended likelihood~\cite{Barlow:1990vc}. One can then compare upper bounds at a particular confidence level (CL) with a confidence band at a particular CL to assess if they are compatible (see \cite{Gelmini:2015voa} for a discussion). From now on, when an upper index $0$ or $1$ is not written, $\teta(\vmin)$ is understood to be $\teta^0(\vmin)$. An alternative approach to analyzing the compatibility of data has been studied in~\cite{Feldstein:2014ufa} using the ``parameter goodness-of-fit'' test statistic \cite{Maltoni:2002xd,Maltoni:2003cu} derived from a global likelihood (an alternative approach is taken in \cite{Bozorgnia:2014gsa}). 
In \cite{Feldstein:2014ufa}, the compatibility of various experiments within a particular theoretical framework was determined by obtaining a $p$-value from Monte Carlo (MC) simulated data, generated under the assumption that the true halo function is the global best fit halo function. This approach has the advantage that one can make quantitative statements about the compatibility of the observed data sets for a given dark matter candidate model. However, this procedure assigns only a single number to the whole halo-independent parameter space, and we would like to have the ability to assess compatibility of the data with less restrictive assumptions on the underlying halo function. In this paper we extend the approaches of~\cite{Feldstein:2014ufa} and~\cite{Gelmini:2015voa} by using the global likelihood function to assess the compatibility of multiple data sets within a particular theoretical model across the halo-independent $\vmin-\teta$ parameter space. This is done with two distinct approaches. First, we extend the construction of the halo-independent pointwise confidence band presented in~\cite{Gelmini:2015voa} to the case of a global likelihood function, consisting of one (or more) extended likelihood functions and an arbitrary number of Gaussian or Poisson likelihoods. The resultant global confidence band can be compared directly with the confidence band constructed from the extended likelihood alone, to assess the joint compatibility of the data for any choice of DM-nuclei interaction and scattering kinematics. The drawback of this method is that it cannot quantitatively address the level of compatibility of the data sets.
To address this concern we also propose an extension of the parameter goodness-of-fit test, which we will refer to as the ``constrained parameter goodness-of-fit" test, that quantifies the compatibility of various data sets for a given DM particle candidate assuming the halo function $\teta(\vmin)$ passes through a particular point $(v^*,\teta^*)$. By calculating the $p$-values for each $(v^*,\teta^*)$ throughout the $\vmin-\teta$ plane, one can construct plausibility regions, such that for any halo function not entirely contained within the plausibility region the data are incompatible at the chosen level, \eg $p < 10\%$. In Sec.~\ref{haloindep} we review the procedure for constructing the best fit halo function $\teta_{BF}$ and confidence band from an extended likelihood. Readers familiar with~\cite{Gelmini:2015voa} may wish to skip this section and go directly to Sec.~\ref{sec:Extension}, which discusses how the construction of the best fit halo function and confidence band is altered when dealing with a global likelihood function that is the product of one (or more) extended likelihoods and an arbitrary number of Poisson or Gaussian likelihoods. In Sec.~\ref{globalband1}, we use the methods discussed in Sec.~\ref{sec:Extension} to construct the best fit halo function and global pointwise confidence band, for the combined analysis of CDMS-II-Si and SuperCDMS data assuming elastic isospin-conserving~\cite{Kurylov:2003ra,Chang:2010yk,Feng:2011vu} and exothermic isospin-violating spin-independent (SI) interactions~\cite{Gelmini:2014psa,Scopel:2014kba}. Sec.~\ref{sec:constrained} introduces the ``constrained parameter goodness-of-fit'' test statistic and the construction of the plausibility regions. This method is illustrated using CDMS-II-Si and SuperCDMS data, assuming elastic isospin-conserving spin-independent interactions. We conclude in Sec.~\ref{conclusion}.
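The logic of the parameter goodness-of-fit statistic can be made concrete in a deliberately reduced toy setting: two Poisson counting experiments sharing a single signal parameter $\eta$, with the $p$-value obtained from Monte Carlo pseudo-experiments generated under the global best fit. This is our own illustrative sketch (function names ours; no extended likelihood and no constraint $\teta(v^*)=\teta^*$), not the paper's full construction:

```python
import math, random

def logL_pois(n, mu):
    """Poisson log-likelihood up to a constant (log n! dropped)."""
    if mu <= 0.0:
        return 0.0 if n == 0 else float("-inf")
    return n * math.log(mu) - mu

def q_pg(n1, n2, s1, s2):
    """Parameter goodness-of-fit statistic for two Poisson experiments
    with expected counts s_i * eta sharing the parameter eta:
    q = -2 ln[ L_glob(eta_hat) / (L1(eta_hat_1) * L2(eta_hat_2)) ]."""
    eta_glob = (n1 + n2) / (s1 + s2)   # global maximum-likelihood estimate
    eta1, eta2 = n1 / s1, n2 / s2      # individual ML estimates
    return -2.0 * (logL_pois(n1, s1 * eta_glob) + logL_pois(n2, s2 * eta_glob)
                   - logL_pois(n1, s1 * eta1) - logL_pois(n2, s2 * eta2))

def p_value(n1, n2, s1, s2, n_mc=2000, seed=1):
    """Monte Carlo p-value: pseudo-data are generated under the global
    best fit, and q is recomputed for each pseudo-experiment."""
    rng = random.Random(seed)

    def pois(mu):  # simple multiplicative Poisson sampler, fine for small mu
        L, k, p = math.exp(-mu), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    eta_bf = (n1 + n2) / (s1 + s2)
    q_obs = q_pg(n1, n2, s1, s2)
    hits = sum(q_pg(pois(s1 * eta_bf), pois(s2 * eta_bf), s1, s2) >= q_obs
               for _ in range(n_mc))
    return hits / n_mc
```

Identical individual best fits give $q=0$ and $p=1$, while strongly discrepant counts drive $q$ up and $p$ down; the plausibility region is defined by repeating this kind of test under the additional constraint that the halo function passes through each point $(v^*,\teta^*)$.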
In this paper we have presented two distinct methods to assess the joint compatibility of data sets for a given DM particle model across halo-independent parameter space, using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. We have illustrated these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming WIMP candidates with SI contact interactions. The first method is a natural extension of the procedure presented in \cite{Gelmini:2015voa}, in which a best fit halo function and pointwise confidence band are constructed from the profile likelihood ratio. Here we have proven that the best fit halo function $\teta_{BF}$ for the global likelihood we studied is a piecewise constant function with the number of steps at most equal to the number of unbinned data points plus the number of data bins in all the single likelihoods, and argued why in practice the number of steps is smaller than this maximum number (see Section 3 and Appendix A). A best fit piecewise constant halo function had already been found in the literature (see~\cite{Feldstein:2014ufa}) for a global likelihood of the type we use, but only as a curiosity, without explanation (or a proof of uniqueness). In addition to showing how to find the best fit halo function $\teta_{BF}$ and proving that this function is unique (see Appendix B), here we have shown for the first time how to construct two-sided confidence bands at any CL for the type of global likelihood we studied. As an illustration of the method we have found the best fit halo function and the $68 \%$ and $90 \%$ CL confidence bands assuming two different choices for the DM particle model parameters $m$, $\delta$, and $f_n/f_p$.
The choice of a $9 \GeV$ DM particle scattering elastically ($\delta=0$) with an isospin-conserving coupling ($f_n/f_p=1$) leads to an apparent incompatibility between the observed CDMS-II-Si events and the SuperCDMS upper limit, in agreement with previous published results (see \eg \cite{DelNobile:2014sja,Gelmini:2015voa}). This incompatibility can be assessed by comparing the overlap or lack thereof of the global confidence bands with those of CDMS-II-Si alone. As shown in Fig.~\ref{fig:supercdms_full}, at the $68\%$ CL, it is not possible to find a halo function passing through both confidence bands. The situation is very different for a $3.5 \GeV$ DM particle with exothermic scattering ($\delta = -50$ keV) and a Ge-phobic SI interaction ($f_n/f_p = -0.8$)~\cite{Gelmini:2014psa}, for which the data sets are compatible. As shown in Fig.~\ref{fig:gephobic} the global and CDMS-II-Si alone confidence bands practically coincide. The drawback of this method is that it cannot provide a quantitative measurement of the level of incompatibility of the various data sets that comprise the global likelihood. To address this concern, we have proposed in Section 5 a second method in which we construct a ``plausibility region" arising from the global likelihood, using an extension of the parameter goodness-of-fit test~\cite{Maltoni:2002xd,Maltoni:2003cu,Feldstein:2014ufa}, that we refer to as the ``constrained parameter goodness-of-fit'' test. By evaluating the ratio of the global profile likelihood and the product of the individual profile likelihoods (assuming $\teta(v^*)=\teta^*$), a plausibility region can be constructed by grouping together regions of parameter space for which, at each point $(v^*,\teta^*)$, our observed test statistic has a $p$-value \eg $\geq 10\%$. 
This $p$-value was determined using a probability distribution constructed with Monte Carlo generated data assuming the true halo function is the constrained best fit $\teta_{BF}^c$ of the profile global likelihood, \ie the halo function that maximizes the global likelihood subject to the constraint $\teta(v^*)=\teta^*$. For any halo function not entirely contained within this plausibility region the data are incompatible for the assumed DM particle model at the assumed level (\eg $p < 10\%$). For halo functions entirely contained within the plausibility region, the data sets are compatible at the chosen level only if the constrained best fit halo functions at each point within the region are also entirely contained within the region. We have demonstrated this method for a $9 \GeV$ DM particle scattering elastically with an isospin-conserving coupling and for the aforementioned Ge-phobic particle candidate. The results are shown in Figs.~\ref{fig:MCband} and \ref{fig:MCband-gephobic}, respectively. In the first case the confidence bands are largely outside the plausibility region, while in the second case the confidence bands are entirely included in the plausibility region, and any halo function entirely contained within the plausibility region leads to compatibility of the data sets at the chosen level ($p \geq 10\%$). Together these two methods provide complementary assessments of the compatibility of the data given a particular dark matter model, across the $\vmin-\teta$ halo-independent parameter space. We expect these tools to prove useful for future direct dark matter searches, both to test the compatibility of different data sets and to provide guidance as to which types of halo function yield better or worse compatibility of all the data.
1607.00440_arXiv.txt
We employed the SERENDIP III system with the Arecibo radio telescope to search for possible artificial extraterrestrial signals. Over the four years of this search we covered 93\% of the sky observable at Arecibo at least once and 44\% of the sky five times or more with a sensitivity of $\sim 3\times 10^{-25}$~W\,m$^{-2}$. The data were sent to a $4\times 10^6$ channel spectrum analyzer. Information was obtained from over $10^{14}$ independent data points and the results were then analyzed via a suite of pattern detection algorithms to identify narrow band spectral power peaks that were not readily identifiable as the product of human activity. We separately selected data coincident with interesting nearby G dwarf stars that were encountered by chance in our sky survey for suggestions of excess power peaks. The peak power distributions in both these data sets were consistent with random noise. We report upper limits on possible signals from the stars investigated and provide examples of the most interesting candidates identified in the sky survey. {\em This paper was intended for publication in 2000 and is presented here without change from the version submitted to ApJS in 2000.}
Early radio searches for extraterrestrial intelligence used dedicated telescope time to search for emission from nearby stars (see Tarter 1991 for a partial listing, and Tarter 2001 for a full listing of these searches). This type of search became increasingly difficult to carry out at major facilities because of a general reluctance to devote dedicated telescope time to such projects, which, though interesting, are acknowledged to have a low probability of success. In addition, sky surveys were carried out which scanned substantial portions of the sky. The Berkeley SERENDIP project (Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations) solved the dedicated telescope time problem by using data obtained simultaneously with ongoing astronomical research. This program began over twenty years ago (Bowyer et al. 1983) and has continued to the present day with ever increasing sensitivity and an ever-widening set of search parameters. Other sky surveys using dedicated telescopes have been, and are continuing to be, carried out. The Ohio State program (Dixon 1985) was the earliest sky survey; this project is now terminated. The Harvard search (Leigh \& Horowitz 1997) has also been terminated. The Argentinian search (Lemarchand et al. 1997), and the Australian search (Stootman et al. 1999) continue. Targeted searches of nearby stars have been initiated by the SETI Institute (Tarter 1997) using substantial amounts of dedicated telescope time obtained in return for a major financial contribution to the telescope upgrade that was carried out after the conclusion of the SERENDIP III observations. We discuss our sky survey search for artificial extraterrestrial signals with the SERENDIP III system (Bowyer et al. 1997) and the Arecibo telescope. Although the search was a sky survey, nearby solar-type stars inevitably fell within the beam pattern of the telescope in the course of these observations.
As part of the analysis of the SERENDIP III data, we have separately investigated the data from observations of the sky coincident with these nearby stars. Although our integration times for individual targets are relatively short (as compared, for example, with the SETI Institute targeted search), our sensitivity is still substantial because of the large collecting area of the Arecibo telescope and the outstanding receivers that are available for use with this instrument. We report the results of our sky survey and provide upper limits on possible signals from stars in this paper.
We have carried out an extensive search with the world's largest telescope for evidence of radio emission produced by an extraterrestrial intelligence. We were able to carry out this search using this unique facility because of the non-intrusive character of our observing program. A major challenge in our search (and in all other searches for extraterrestrial intelligence) is the problem of false signals produced as a result of human activity. We developed a variety of techniques to deal with this problem and demonstrated their robustness. This shows that a non-intrusive collateral data collection technique such as ours is viable. Given the extensive character of our search we found many signals of potential interest. A prioritization scheme was developed to identify the most promising of these signals. These were examined in more detail. In the end, no extraterrestrial signals were identified. We find this neither surprising nor discouraging given our lack of knowledge as to the appropriate source locations, frequencies, and time periods that an intentional extraterrestrial signal may be employing. We are continuing our search.
1607.05265_arXiv.txt
Investigations of the thermal evolution of neutron stars with hyperon cores require neutrino emissivities for many neutrino reactions involving strongly degenerate particles (nucleons, hyperons, electrons, muons). We calculate the angular integrals $I_n$ (over orientations of momenta of $n$ degenerate particles) for major neutrino reactions with $n=$3, 4, 5 for all possible combinations of particle Fermi momenta. The integrals $I_n$ are necessary ingredients for constructing a uniform database of neutrino emissivities in dense nucleon-hyperon matter. The results can also be used in many problems of physical kinetics of strongly degenerate systems.
\label{s:introduc} It is well known that the thermal evolution of not too cold neutron stars is regulated by the neutrino emission from superdense matter in neutron star cores. In order to model the thermal evolution one needs the emissivities of many neutrino reactions which can operate and produce an efficient neutrino cooling of these stars (e.g., \citealt{YKGH01}). Consider, for instance, neutron star cores, which are massive and bulky internal regions of neutron stars \citep{ST83}. They are thought to contain uniform nuclear liquid of density $\rho$ ranging from $\sim \rho_0/2$ to $\sim10-20$ $\rho_0$, where $\rho_0 \approx 2.8 \times 10^{14}$ g~cm$^{-3}$ is the density of standard nuclear matter at saturation. A neutron star core can be divided into the outer core ($\rho \lesssim 2 \rho_0$) composed of neutrons (n) with some admixture of protons (p), electrons (e) and muons ($\mu$), and the inner core ($\rho \gtrsim 2 \rho_0$) containing the same particles and possibly other ones (for instance, hyperons). All constituents of the matter (n, p, e, $\mu$, hyperons) are strongly degenerate fermions. These particles can participate in many reactions producing neutrinos. Because neutron stars become fully transparent to neutrinos within about half a minute after birth, the neutrinos immediately escape from the star and cool it. Schematically, the neutrino emissivity [erg~cm$^{-3}$~s$^{-1}$] for any reaction can be written as \begin{equation} Q= (2\pi)^4 \int {\rm d}\Gamma \;{\cal M}_{\cal F I}\,\epsilon_\nu \, \delta(\bm{P}_{\cal F}-\bm{P}_{\cal I}) \;\delta(E_{\cal F}-E_{\cal I})\,F_{\cal F I}. 
\label{e:R} \end{equation} Here, $\epsilon_\nu$ is the energy of the generated neutrino (or neutrinos), ${\cal I}$ and ${\cal F}$ label initial and final states of a system, while $i$ and $f$ label corresponding states of reacting particles; $\bm{P}_{\cal I}=\sum_i \bm{p}_i$ and $\bm{P}_{\cal F}=\sum_f \bm{p}_f$ denote, respectively, total momenta of reacting particles in the ${\cal I}$ and ${\cal F}$ states, with $\bm{p}$ being a one-particle momentum; $E_{\cal I}=\sum_i \epsilon_i$ and $E_{\cal F}=\sum_f \epsilon_f$ are total energies of the particles ($\epsilon$ is a one-particle energy). The delta functions take into account momentum and energy conservation in a reaction event. The factor $F_{\cal F I}$ is \begin{eqnarray} F_{\cal FI} &=&\left(\prod_{i}f_i\right)\,\left( \prod_f (1-f_f) \right),\nonumber \\ f_i& =& \left[\exp\left(\frac{\epsilon_i-\mu_i}{k_{\rm B}T} \right)+1 \right]^{-1}. \label{e:F} \end{eqnarray} It contains the product of Fermi-Dirac functions $f_i$ for particles in the initial states and the product of blocking functions $(1-f_f)$ for particles in the final states; $\mu_i$ is the chemical potential, $T$ the temperature, and $k_{\rm B}$ the Boltzmann constant. In what follows, we take into account that neutron star matter is fully transparent to neutrinos (e.g., \citealt{YKGH01}). Then the chemical potential of neutrinos is zero, $\mu_\nu=0$, and the approximation of massless neutrinos is excellent. Other initial or final reacting fermions $j$ (which belong to the dense matter) are assumed to be strongly degenerate particles with any degree of relativity; their energies $\epsilon_j$ and chemical potentials $\mu_j$ may include or exclude the rest-mass energy, $m_jc^2$. The quantity ${\cal M}_{\cal F I}$ in Eq.\ (\ref{e:R}) is proportional to the squared matrix element for a given reaction summed over spin states. 
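The statistical factor $F_{\cal FI}$ of Eq.~(\ref{e:F}) is straightforward to evaluate numerically; a minimal sketch (function names and the overflow guard are ours):

```python
import math

def fermi_dirac(eps, mu, kT):
    """Fermi-Dirac occupation f = 1 / (exp((eps - mu)/kT) + 1)."""
    x = (eps - mu) / kT
    if x > 700.0:      # guard against overflow far above the Fermi level
        return 0.0
    return 1.0 / (math.exp(x) + 1.0)

def F_FI(initial, final, kT):
    """Product of occupations f_i for initial-state particles and of
    blocking factors (1 - f_f) for final-state particles.
    `initial`, `final`: lists of (energy, chemical potential) pairs."""
    F = 1.0
    for eps, mu in initial:
        F *= fermi_dirac(eps, mu, kT)
    for eps, mu in final:
        F *= 1.0 - fermi_dirac(eps, mu, kT)
    return F
```

At $\epsilon=\mu$ each factor equals $1/2$, and the product is exponentially suppressed for states more than a few $k_{\rm B}T$ away from the Fermi surfaces, which is the origin of the energy-momentum decomposition used below.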
Finally, \begin{equation} {\rm d}\Gamma = \prod_{l}\, \frac{{\rm d}\bm{p}_l}{(2 \pi \hbar)^3} \label{e:G} \end{equation} is the product of densities of states of all reacting particles~$l=j$ and $\nu$. It is well known (e.g., \citealt{Ziman,BP07,ST83}) that calculation of the emissivities (\ref{e:R}), reaction rates or related quantities in strongly degenerate matter is greatly simplified because the main contribution to the corresponding integrals comes from narrow thermal energy widths $|\epsilon_j - \mu_j|\ll k_{\rm B}T$. Accordingly, one can usually employ the so-called energy-momentum decomposition detailed, e.g., in \citet{Ziman,BP07,ST83}. It consists in fixing the lengths of all momenta of strongly degenerate reacting fermions to the corresponding Fermi momenta ($|\bm{p}_j|=p_{{\rm F}j}$) and the values of energies of these particles to the corresponding chemical potentials ($\epsilon_j=\mu_j$) in all functions of $\bm{p}_j$ and $\epsilon_j$ which vary smoothly within thermal energy widths in local elements near respective Fermi surfaces. Then in Eq.\ (\ref{e:G}) one can set ${\rm d}\bm{p}_j=p_{{\rm F}j}m_j^*\,{\rm d}\epsilon_j\,{\rm d}\Omega_j$, where $m_j^*$ is the Landau effective mass of a fermion $j$ at the Fermi surface, and ${\rm d}\Omega_j$ is a solid angle element in the direction of $\bm{p}_j$. The integration over particle momenta in Eq.\ (\ref{e:R}) is then decomposed into the integration over energies d$\epsilon_j$ and over solid angles d$\Omega_j$. In further calculations of the emissivity $Q$ one often approximates ${\cal M}_{\cal F I}$ by its value $\langle {\cal M}_{\cal F I} \rangle$ averaged over orientations of particle momenta. 
Then the emissivity becomes \begin{equation} Q= I_\epsilon I_\Omega, \label{e:R1} \end{equation} where \begin{equation} I_\Omega= \int \delta(\bm{P}_{\cal F}-\bm{P}_{\cal I})\, \prod_j \,{\rm d}\Omega_j \label{e:I} \end{equation} is the integral over orientations of all particle momenta placed on respective Fermi surfaces, while $I_\epsilon$ contains all other terms (including $\langle {\cal M}_{\cal F I} \rangle$) and the integration over particle energies. In a neutron star, generated neutrinos have much lower energies and momenta than the particles of the matter; it is quite sufficient to neglect neutrino momenta in the momentum-conserving delta function in Eq. (\ref{e:I}) (e.g., \citealt{YKGH01}). Then the integration over orientations of the neutrino momentum is trivial (e.g., it gives a factor of $4 \pi$ for an emission of one neutrino) and will be assumed to be included in $I_\epsilon$. Accordingly, the angular integration in $I_\Omega$ is performed only over orientations of momenta of strongly degenerate fermions $j$ of the matter. The number of these fermions will be denoted by $n$, so that $j=1,\ldots,n$ in $I_\Omega$. The case of strongly interacting fermions (nucleons and hyperons in dense nuclear matter) deserves a comment. We assume that the system is non-superfluid, i.e., it is a normal Fermi liquid (see, e.g., \citealt{BP07,LP1980}). Then the one-particle states with well defined energies and momenta refer actually to elementary excitations, called Landau quasiparticles. In a strongly degenerate Fermi liquid, quasiparticles form a dilute Fermi gas. Therefore, their distribution in momentum space can be well approximated by the Fermi-Dirac one. The Fermi momenta for quasiparticles coincide with those for real particles. These properties justify the use of Eq.~(\ref{e:F}). In what follows, by particles in Fermi liquids of nucleons and hyperons we will mean quasiparticles. Systems of strongly degenerate particles are also important in many branches of physics. 
In particular, we can mention solid state physics (degenerate electrons in metals and semiconductors; e.g., \citealt{Ziman,Kittel}), Fermi-liquid systems \citep{BP07}, as well as nuclear physics (symmetric nuclear matter in atomic nuclei). It is our aim to consider the angular integrals $I_\Omega$ for reactions involving different particle species with various Fermi momenta. These integrals determine the area of a hypersurface in 3$n$-dimensional momentum space which contributes to a given reaction. The advantage of the integrals $I_\Omega$ is that they are independent of specific interparticle interactions. They depend only on the total number $n$ of reacting particles and on the Fermi momenta of these particles, $p_{{\rm F}j}\equiv p_j$ ($j=1,\ldots,n$). For simplicity, we drop the subscript F because all momenta are assumed to be on the Fermi surfaces. The angular integrals $I_\Omega$ appear in many problems of Fermi systems (e.g., \citealt{Ziman,BP07,ST83}). Some approaches for calculating them are described, for instance, by \citet{ST83}. However, many distinct cases arise for different sets of Fermi momenta. Our aim is practical: to present $I_\Omega$ for all possible cases with $n\leq 5$. In particular, these cases correspond to major neutrino reactions in nucleon-hyperon matter of neutron stars. Section \ref{s:remarks} outlines a general method for calculating $I_\Omega$. Sections \ref{s:n*nd3}, \ref{s:n=4} and \ref{s:n=5} present the calculations for $n\leq 5$. Applications for neutrino reactions are briefly discussed in Section \ref{s:applic}, and we conclude in Section \ref{s:conclusion}.
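For the simplest case $n=3$ (direct Urca processes) the angular integral has the well-known closed form $I^{(3)}_\Omega = 8\pi^2/(p_1 p_2 p_3)$ when the three Fermi momenta satisfy the triangle condition, and vanishes otherwise. The sketch below cross-checks this numerically through the Fourier representation $\delta(\bm{P}) = (2\pi)^{-3}\int {\rm d}^3r\, e^{i \bm{P}\cdot\bm{r}}$, in which each angular integration yields $4\pi \sin(p r)/(p r)$; the damping factor and grid parameters are our numerical choices, not taken from the paper:

```python
import numpy as np

def I3_analytic(p1, p2, p3):
    """I_Omega for n = 3: 8*pi^2/(p1*p2*p3) if the momenta can form a
    closed triangle (momentum conservation), zero otherwise."""
    if p1 < p2 + p3 and p2 < p3 + p1 and p3 < p1 + p2:
        return 8.0 * np.pi**2 / (p1 * p2 * p3)
    return 0.0

def I3_numeric(p1, p2, p3, eps=1e-3, rmax=1.0e4, dr=1e-2):
    """Radial integral left after the Fourier representation of the
    momentum delta function, regularized by exp(-eps*r)."""
    r = np.arange(dr, rmax, dr)
    prod = np.ones_like(r)
    for p in (p1, p2, p3):
        prod *= 4.0 * np.pi * np.sin(p * r) / (p * r)
    integrand = r**2 * np.exp(-eps * r) * prod
    return 4.0 * np.pi / (2.0 * np.pi)**3 * integrand.sum() * dr
```

For $p_1,p_2,p_3 = 1.0, 1.2, 1.5$ (arbitrary units) the two expressions agree at the per-cent level, while for momenta violating the triangle condition the numerical result is consistent with zero.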
\label{s:conclusion} We have described calculations of the angular integrals $I^{(n)}_\Omega$, Eq.\ (\ref{e:I1}), which determine neutrino emissivities, reaction rates and related quantities for reactions involving $n$ degenerate fermions (in initial + final channels) with different or equal Fermi momenta $p_1,\ldots,p_n$. These angular integrals often occur in applications if differential reaction probabilities are determined by angle-averaged squared matrix elements (Section \ref{s:introduc}). The advantage of the angular integrals $I^{(n)}_\Omega$ is that they depend solely on the $n$ Fermi momenta $p_1,\ldots,p_n$, being independent of the nature of the reacting fermions and their interactions. The integrals $I^{(n)}_\Omega$ are described by analytic expressions which may have different forms, but they can be calculated once and for all. We have calculated $I^{(n)}_\Omega$ for all possible cases with $n$=2 and 3 (Section \ref{s:n*nd3}), 4 (Section \ref{s:n=4}) and 5 (Section \ref{s:n=5}). The formalism we have used (Section \ref{s:remarks}) allows one to perform similar calculations for higher~$n$. In Section \ref{s:applic} we have outlined some applications of the results, particularly for neutrino emission processes in neutron star cores composed of nucleons and hyperons. For illustration, we have discussed the expressions for angular integrals of major neutrino emission processes in neutron star cores containing neutrons, protons, electrons, muons, as well as sigma and lambda hyperons. They are eight direct Urca processes ($n=3$), 12 baryon-baryon bremsstrahlung processes ($n=4$) and 32 modified Urca processes ($n=5$). The majority of these neutrino reactions have not yet been studied in detail. We provide the angular integrals which are the most important ingredients for such studies. 
Our results can be useful for constructing a uniform database of neutrino emissivities in nucleon-hyperon matter of neutron star cores, which is needed to simulate the thermal structure and evolution of neutron stars. Let us stress that much work is required to complete such a database. Aside from the angular integrals calculated here, one needs the matrix elements of many neutrino reactions as well as the factors which describe the suppression of these reactions by possible superfluidity of nucleons and hyperons. This suppression can be either very strong or weak depending on the (largely unknown) critical temperatures for superfluidity of different particles (e.g., \citealt{YKGH01}). It would be a complicated project to calculate the matrix elements and suppression factors from first principles, but we expect to simplify this task using some self-similarity criteria, like those formulated in \citet{YKGH01}. In addition, superfluidity of various baryon species can induce a specific neutrino emission due to Cooper pairing of baryons. Such processes involving hyperons should also be studied and included into the database, taking into account in-medium effects in systems of superfluid baryons (\citealt{lp06}; also see references given by \citealt{pageetal2011,shterninetal2011}). Note also that neutrino reactions can be affected by strong magnetic fields. Much work should be done to study the effects of magnetic fields on various neutrino processes. The available calculations of these processes in magnetized neutron star crusts and nucleon cores (reviewed by \citealt{YKGH01}) show that one typically needs very strong fields to affect the neutrino emission of neutron stars. For instance, as demonstrated by \citet{BY99}, the direct Urca process in a nucleon neutron star core can be noticeably affected by fields $B \gtrsim 10^{16}$~G. 
The calculated angular integrals can also be used to study neutrino emissivities in quark stars and hybrid stars, or to study the cooling of compact stars due to the emission of other weakly interacting particles (for instance, axions; e.g., \citealt{Sedrak16}). In the crust of a neutron star one deals with neutrino reactions involving atomic nuclei and degenerate electrons (e.g., \citealt{YKGH01,BK01,BK02}), for instance, neutrino-pair bremsstrahlung in electron-nucleus collisions or Urca cycles involving Urca pairs of atomic nuclei. In these cases the nuclei do not behave as strongly degenerate fermions and the neutrino emissivities are not directly expressed through the angular integrals $I_\Omega$ [although they may contain similar integrals $\widetilde{I}_\Omega$, Eq.\ (\ref{e:tI1})].
We report results from simultaneous radio and X-ray observations of PSR B0611+22, which is known to exhibit bursting in its single-pulse emission. The pulse phase of the bursts varies with radio frequency. The bursts are correlated in the 327/150~MHz datasets, while they are anti-correlated, with bursts at one frequency associated with normal emission at the other, in the 820/150~MHz datasets. Also, the flux density of this pulsar is lower than expected at 327~MHz assuming a power law. We attribute this unusual behaviour to the pulsar itself rather than to absorption by external astrophysical sources. Using this dataset over an extensive frequency range, we show that the bursting phenomenon in this pulsar exhibits temporal variance over a span of a few hours. We also show that the bursting is quasi-periodic over the observed band. The anti-correlation in the phase offset of the burst mode at different frequencies suggests that the mechanisms responsible for the phase offset and the flux enhancement have different dependencies on frequency. We did not detect the pulsar with \textit{XMM-Newton} and place a 99$\%$ confidence upper limit on the X-ray efficiency of $10^{-5}$.
The 0.33~s pulsar PSR B0611+22 (characteristic age $\sim$90 kyr) was discovered by \cite{Da72} and was initially thought to be associated with the supernova remnant (SNR) IC~443, which lies at a small angular separation from the pulsar~\citep{Da72, Hi72}. This association was always doubtful, as the pulsar lies well beyond the radio shell \citep{Du75} of the remnant. Recent X-ray observations detected a compact X-ray source within the remnant shell and the corresponding pulsar wind nebula \citep{Ch01}, which rules out any association of the pulsar with the remnant. Moreover, IC~443 is known to lie within the molecular cloud G189$+$3.3~\citep{Bo00}, which lies along the line of sight to the pulsar. Although the distances to these sources are highly uncertain, it is reasonable to assume that the pulsar lies beyond these dense regions~\citep{Fe84,We03}. This suggests that the radio emission propagates through the dense medium, which might contribute to the pulsar's dispersion measure (DM) of $\sim$96~${\rm pc}~{\rm cm^{-3}}$. The environment of this pulsar makes it an interesting object for studies of radio emission and single-pulse properties. The pulsar was studied by \cite{No92}, who found that PSR B0611+22 appears to exhibit different modes, in which the enhanced emission mode peaked at a later pulse phase than the average profile and the weak mode peaked at an earlier phase. Recently, \cite{Se14} performed a detailed study of the emission behaviour of PSR~B0611+22. They found that, at 327~MHz, the pulsar shows steady emission in one mode, which is enhanced by bursting emission that is slightly offset in pulse phase from this steady emission. \cite{Se14} also observed the bursting to be quasi-periodic, with a period of $\sim1000$ pulse periods. This type of behaviour has also been seen in other pulsars like PSR~J1752+2358~\citep{Ga14} and PSR~J1938+2213 \citep{Lo13}.
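The quasi-periodicity reported by \cite{Se14} is typically quantified from a fluctuation spectrum of the single-pulse energy sequence. The following sketch uses purely synthetic pulse energies; the modulation amplitude, noise level and phase jitter are illustrative assumptions rather than fits to the real data, with only the $\sim1000$ pulse-period modulation taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-pulse energy sequence: steady emission modulated by a
# quasi-periodic bursting envelope (~1000 pulse periods, as at 327 MHz).
n_pulses = 16384
burst_period = 1000.0
phase_drift = rng.normal(0.0, 0.01, n_pulses).cumsum()  # slow drift makes it *quasi*-periodic
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n_pulses) / burst_period + phase_drift)
energies = envelope + rng.normal(0.0, 0.3, n_pulses)

# Fluctuation (power) spectrum of the pulse-energy sequence
power = np.abs(np.fft.rfft(energies - energies.mean()))**2
freqs = np.fft.rfftfreq(n_pulses, d=1.0)   # in cycles per pulse period
f_peak = freqs[np.argmax(power[1:]) + 1]   # skip the DC bin
print(f"burst modulation period ~ {1.0 / f_peak:.0f} pulse periods")
```

In practice the same spectrum is computed per frequency band, so that the stability of the modulation period across the band can be compared directly.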
PSR B0611+22's short mode changes, with their offset in emission phase, could be responsible for the high degree of timing noise the pulsar exhibits~\citep{Ar94}. The phenomena of nulling and mode changing, which relate to such emission behaviour, have been studied in different pulsars for four decades. They were first observed and reported by Backer \citep{Ba70d, Ba70c, Ba70b, Ba70a}. Mode-changing pulsars are pulsars in which, from time to time, the mean profile abruptly changes between two or more quasi-stable states \citep{Wa07, Ba82}, while nulling is the abrupt cessation of radio emission for one or more pulse periods. Nulling has been postulated to be an extreme case of mode changing \citep{Wa07,Ti10}. In a series of papers, Rankin \citep{Ra83,Ra86,Ra03} sought to understand the emission geometry and behaviour of such pulsars. According to her model, the emission beam of a pulsar consists of a central core emission beam surrounded by multiple annular cones of emission. The pulse profile we observe depends on which core and/or cone beams are traversed by the observer's line of sight. Rankin suggested that mode changing can be thought of as a reorganization of such core and conal emission, resulting in a change in the observed pulse profile. Mode changing has been observed in most multi-component pulsars (pulsars with more than one component in their emission profile)~\citep{Ra86}. Many pulsars, such as PSR B2319+60 \citep{Wr81}, PSR B0943+10 \citep{Su98} and PSR B1918+19 \citep{Ra13}, exhibit this phenomenon. Both nulling and mode changing have been studied in $\sim$200 pulsars so far \citep{Bi92, We06, Wa07,Ga12}. PSR~B0611+22 has been classified as a core-emission pulsar with a single component~\citep{Ra83}. This makes the pulsar interesting, as its phase offsets and flux enhancements are small in comparison to other pulsars in terms of magnitude and timescale, and are harder to explain in the standard framework.
Recently, a global picture of quasi-stable states of the magnetosphere has come to the fore~\citep{Ly10,He13}. \cite{He13} discovered an anti-correlation between the X-ray and radio emission in the two modes of emission of PSR~B0943+10. This result motivated us to ask whether such X-ray emission is also detectable in PSR~B0611+22 and, if so, how it relates to the mode changes seen in the radio. This led to a simultaneous radio and X-ray observation campaign on PSR~B0611+22. As mentioned above, PSR B0611+22 has a supernova remnant and a molecular cloud in its vicinity. Such dense environments around, and likely in front of, the pulsar make it an ideal candidate for studying the effects of these environments on the measured flux density. Pulsars within such dense environments have previously been known to show a spectral turnover at frequencies around $\sim1$~GHz~\citep{Ki07, Ki11}. A recent study by~\cite{Ra15} shows that it is possible to derive the physical parameters of these dense regions by modeling the flux density spectrum of the pulsar. In this paper, we characterize this peculiar emission behaviour with a multi-wavelength, broadband dataset of the pulsar. The observational details are given in Section 2. The results are presented in Section 3. The discussion based on the results is in Section 4. The conclusions are given in Section 5.
We have carried out a detailed analysis of simultaneous radio and X-ray observations of the pulsar PSR~B0611+22. The multi-frequency data reveal a wealth of information about the emission characteristics of this pulsar. The bursting behaviour varied across the radio band, with a quasi-periodic characteristic at all frequencies. The 327/150 MHz and 820/150 MHz simultaneous observations show an anti-correlation in the bursting. We leave the modeling of this unusual behaviour to a later paper. Future polarimetric studies of both modes will help in discerning the emission physics of this pulsar. Moreover, we obtained a flux density spectrum from the radio observations of this pulsar. The spectrum shows a turnover at higher frequencies. We considered free-free thermal absorption by the surrounding ISM as a possible explanation, but such a model cannot explain the flux density at 150~MHz. From the X-ray non-detection, we obtained an upper bound on the X-ray luminosity and X-ray efficiency of the pulsar. The X-ray non-detection shows that the X-ray efficiency is low and consistent with the X-ray efficiencies measured for other similarly aged pulsars.
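The efficiency bound can be turned into a luminosity bound with simple spin-down arithmetic, $\eta_X = L_X/\dot E$ with $\dot E = 4\pi^2 I \dot P/P^3$. In this sketch the period and characteristic age are taken from the text, while the moment of inertia and the inversion $\dot P = P/(2\tau_c)$ are fiducial assumptions, not values quoted in the paper:

```python
import math

# Spin parameters from the text; I and the age -> Pdot inversion are
# illustrative assumptions, not values quoted in the paper.
P = 0.33                    # s, spin period
tau_c = 90e3 * 3.156e7      # s, characteristic age ~90 kyr
Pdot = P / (2.0 * tau_c)    # from tau_c = P / (2 Pdot), assuming braking index 3
I = 1e45                    # g cm^2, fiducial neutron-star moment of inertia

Edot = 4.0 * math.pi**2 * I * Pdot / P**3   # spin-down luminosity, erg/s
eta_max = 1e-5                              # 99% upper limit from the X-ray non-detection
Lx_max = eta_max * Edot                     # implied X-ray luminosity bound, erg/s
print(f"Edot ~ {Edot:.1e} erg/s, L_X < {Lx_max:.1e} erg/s")
```

With these fiducial inputs $\dot E \sim 6\times10^{34}$ erg s$^{-1}$, so the quoted efficiency limit corresponds to $L_X \lesssim$ a few $\times 10^{29}$ erg s$^{-1}$.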
A remarkable prediction of the Standard Model is that, in the absence of corrections lifting the energy density, the Higgs potential becomes negative at large field values. If the Higgs field samples this part of the potential during inflation, the negative energy density may locally destabilize the spacetime. We use numerical simulations of the Einstein equations to study the evolution of inflation-induced Higgs fluctuations as they grow towards the true (negative-energy) minimum. These simulations show that forming a single patch of true vacuum in our past light cone during inflation is incompatible with the existence of our Universe; the boundary of the true vacuum region grows outward in a causally disconnected manner from the crunching interior, which forms a black hole. We also find that these black hole horizons may be arbitrarily elongated---even forming black strings---in violation of the hoop conjecture. By extending the numerical solution of the Fokker-Planck equation to the exponentially suppressed tails of the field distribution at large field values, we derive a rigorous correlation between a future measurement of the tensor-to-scalar ratio and the scale at which the Higgs potential must receive stabilizing corrections in order for the Universe to have survived inflation until today.
A striking feature of the Standard Model (SM) is that, in the absence of stabilizing corrections, the Higgs potential develops an instability, with the maximum of the potential occurring at $V(\Lmax)^{1/4} \sim 10^{10} \GeV$. This leads to the existence of a ``true vacuum'' at large Higgs field values, which may carry important consequences for our Universe \cite{Sher:1988mj,Sher:1993mf,Casas:1994qy,Altarelli:1994rb,Ellis:2009tp,EliasMiro:2011aa,Isidori:2007vm,Isidori:2001bm,Bezrukov:2012sa,Degrassi:2012ry,Buttazzo:2013uya}. Our present existence does not necessarily demand physics beyond the SM, since current measurements of the Higgs boson and top quark masses indicate that the electroweak (EW) vacuum is metastable, \ie, long-lived relative to the age of the Universe. The scenario is different, however, if our Universe underwent an early period of cosmic inflation with substantial energy density. The inflaton energy density, parametrized by the Hubble parameter $H$, produces large local fluctuations in the Higgs field, $\delta h \sim \frac{H}{2 \pi}$. As such, when $H$ is sufficiently large during inflation, the Higgs field may sample the unstable part of the potential. If sampling this part of the potential can be shown to be catastrophic for the surrounding spacetime, the eventual survival of our Universe in the EW vacuum would consequently imply constraints on the nature of the inflationary epoch that gave rise to our Universe. Conversely, near-future cosmic microwave background experiments will probe tensor-to-scalar ratios of $r \gsim 0.002$ \cite{Creminelli:2015oda}, corresponding to inflationary scales $H > 10^{13} \GeV$. If it can be shown that the SM Higgs potential is inconsistent with such high-scale inflation, a measurement of nonzero $r$ provides evidence for the existence of stabilizing corrections to the Higgs potential. 
In recent years, the interplay between the SM Higgs potential instability and inflation has received significant attention \cite{Espinosa:2007qp,Lebedev:2012sy,Kobakhidze:2013tn,Enqvist:2013kaa,Hook:2014uia,Enqvist:2014bua,Herranen:2014cua,Kobakhidze:2014xda,Fairbairn:2014zia,Shkerin:2015exa,Kearney:2015vba,Espinosa:2015qea}. A complete treatment of this problem has two important aspects: first, the evolution of the Higgs field under a combination of (inflation-induced) quantum fluctuations and the classical potential and, second, the evolution of spacetime responding to the Higgs vacuum. Initial groundwork on the first aspect was laid in \Rref{Espinosa:2007qp}, which employed a stochastic, Fokker-Planck (FP) approach to study the evolution and distribution of Higgs fluctuations in Hubble-sized patches during inflation. While this is a suitable approach incorporating both leading classical and quantum effects, the analysis of \cite{Espinosa:2007qp} was predicated on the assumption that fluctuations exceeding $\Lmax$ rapidly transitioned to the true vacuum and disappeared, resulting in a miscalculation of the distribution. It was subsequently shown in \cite{Hook:2014uia}, however, that fluctuations continue to evolve in an inflationary background well past the point where the Higgs quartic becomes negative. In fact, it is the formation of fluctuations well beyond $\Lmax$ that carries the most significant implications for our Universe, making it necessary to study the full distribution of fluctuations. As \Rref{Kearney:2015vba} later demonstrated, a true vacuum patch capable of backreacting on the inflating spacetime only forms at about the time that a fluctuation becomes sufficiently large that the Higgs field locally exits the slow-roll regime.
The first meaningful investigation of the second aspect---the response of the spacetime to the Higgs vacuum evolution---appeared in \Rref{Espinosa:2015qea}.\footnote{Earlier studies did not investigate the reaction of the spacetime, instead assuming a variety of outcomes. For example, \cite{Espinosa:2007qp} assumed that fluctuations to the true vacuum only locally terminate inflation, rapidly forming AdS regions that ``benignly" crunch (shrinking to negligible volume), while \cite{Kobakhidze:2013tn,Fairbairn:2014zia} supposed a single true vacuum patch in our past light cone eventually devours all of spacetime. Reference~\cite{Hook:2014uia} considered both extreme possibilities.} In order to make the study analytically tractable, they adopted an idealized setup of a spherically symmetric thin-wall anti-de Sitter (AdS) bubble in a de Sitter (dS) background and found that true vacuum bubbles persist throughout inflation for realistic parameters. As such, the formation of a single such true vacuum patch in our past light cone during inflation would be disastrous for our Universe---after inflation, such patches would expand and destroy the surrounding space in the EW vacuum.\footnote{See also \cite{Freivogel:2007fx,Johnson:2010bn} for related work on the collision of crunching bubbles.} The main goal of this paper is a comprehensive study of both aspects, the field evolution and subsequent reaction of the spacetime. We improve the study of the former aspect by numerically resolving the full probability distribution of Higgs fluctuations in the FP equation, even into the exponentially suppressed tails that govern single patches in our past light cone.
This is in contrast to previous studies \cite{Espinosa:2007qp,Hook:2014uia,Espinosa:2015qea}, which relied on a type of ``matching'' procedure between quantum-dominated and classical-dominated evolution in the FP treatment.\,\footnote{This matching procedure consists of using the FP equation to track the field evolution to the point where classical effects start to dominate over quantum effects, and switching to the classical equation of motion beyond this point (thus ignoring the quantum effects) to track the subsequent evolution.} We carry out a comprehensive study of the second aspect by employing full numerical solutions to the Einstein equations instead of the thin-wall approximation. Moving beyond the approximations previously employed in the literature is vital to providing a complete description of the interplay between inflation and the Higgs field for several reasons. First, a more complete numerical solution to the FP equation allows us to fully capture the important effects of the renormalization group-improved potential, as well as the crucial non-Gaussian tails of the Higgs field value distribution. In particular, as \Rref{Kearney:2015vba} argued based on the Higgs effective potential in dS space calculated in \cite{Herranen:2014cua} and Wilsonian effective field theory, an appropriate scale at which to evaluate the Higgs self-coupling is $\mu \simeq \sqrt{H^2 + h^2}$ as opposed to $\mu \simeq \abs{h}$. This choice minimizes large logarithms and incorporates the relevant energy scale from inflation. As we shall see, fully including the effects of the renormalization group-improved potential influences both small and large fluctuations. Meanwhile, as we demonstrate, it is the exponentially suppressed but long tails of the distribution that ultimately determine the rate at which true vacuum patches form. 
Second, since the evolution of a Higgs fluctuation becomes classical well before becoming sufficiently large to backreact on the spacetime, it is important to study a patch rapidly evolving to the true vacuum, gaining significant energy as it falls, as a dynamical general relativity process. The thin-wall approximation employed in \cite{Espinosa:2015qea} is valid when fluctuations beyond the potential barrier at $\Lmax$ occur via a Coleman-de Luccia tunneling process \cite{Coleman:1980aw}, resulting in a true vacuum bubble interior that rapidly transitions to the false vacuum exterior across a thin boundary. During inflation, however, Higgs fluctuations are more appropriately described by a broad, Hubble-sized variation in the field, more akin to a Hawking-Moss instanton \cite{Hawking:1981fz} (see \cite{Hook:2014uia} for a detailed discussion). Here we will not make any simplifying assumptions regarding the Higgs fluctuation being a region of AdS separated from the surrounding dS by an infinitely thin bubble wall, though we will still use the term ``bubble" to refer to dynamically formed regions where the Higgs field is near the true vacuum. Our numerical simulations allow us to study the full behavior of extended fluctuations, offering the first in-depth understanding of the field and spacetime dynamics of these Higgs fluctuations. In particular, we highlight three important aspects of true vacuum patch formation. First, we show that patches only rapidly diverge to the true vacuum and backreact on the inflating spacetime once the Higgs field locally exits the slow-roll regime. Second, the associated large negative energy density does terminate inflation locally, eventually producing a crunching region, but this region is hidden behind a black hole horizon that is surrounded by an expanding shell of negative energy density.
Third, for reasonable parameters, the shell of negative energy density expands into the surrounding spacetime in a manner causally disconnected from the crunching interior, in contrast to the thin-wall AdS bubble. As a result, its growth is not sensitive to the crunching behavior of the spacetime in the interior, allowing such true vacuum regions to persist through inflation. We thus confirm that the formation of a single, sufficiently large fluctuation during inflation precludes the existence of our Universe, resulting in a bound $H/\Lmax \lsim 0.07$ that, once a number of competing effects are taken into account, is similar to that found in previous studies~\cite{Hook:2014uia,Espinosa:2015qea}. In addition, our numerical approach enables us to study more complicated nonspherical solutions, where we find that the formation of AdS-like regions from the field falling to the true minimum at negative potential energy allows for the formation of black holes with arbitrarily elongated horizons (and black strings), in violation of the hoop conjecture~\cite{thorne_hoop}. We emphasize that, while the presence of new physics at the weak scale could substantially change the quantitative features of the Higgs evolution due to the modified Higgs potential, there are many conceptual points in the interplay between an inflating spacetime and a field with a vacuum instability that are applicable in a wider context. Furthermore, we illustrate in this work some of the qualitatively different features that Einstein gravity exhibits in the presence of negative energy density, including the formation of black holes with arbitrarily elongated horizons, or even black strings, that hide the crunching regions from outside observers. These touch on fundamental considerations in gravity such as the topology of black hole horizons, the hoop conjecture, and cosmic censorship. The rest of this paper is organized as follows. 
In \Sref{evolution_stages}, we briefly review the stochastic approach to studying the evolution of Higgs field fluctuations using the FP equation. \Sref{numerical_approach} is the main part of this paper where, using full numerical simulations, we study the spacetime dynamics of the patches exhibiting large fluctuations that evolve to the true vacuum. In \Sref{fp_limits}, we present a complete numerical solution of the FP equation, allowing us to extract constraints on the Hubble scale or the form of the Higgs potential from the survival of our Universe through inflation. Finally, we summarize our conclusions in \Sref{conclusions}.
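To make the FP treatment concrete, the following minimal sketch evolves the standard stochastic-inflation FP equation, $\partial_N P = \partial_h\!\left[V'(h)P/(3H^2)\right] + (H^2/8\pi^2)\,\partial_h^2 P$, with an explicit finite-difference scheme. The grid, potential and absorbing boundaries are illustrative simplifications, not the renormalization group-improved setup used in the paper; with $V'=0$ the variance should grow as $H^2 N/4\pi^2$, which provides a quick sanity check.

```python
import numpy as np

def evolve_fp(P, h, V_prime, H, dN, n_steps):
    """Explicit (FTCS) evolution of the stochastic-inflation Fokker-Planck
    equation in e-folds N:
        dP/dN = d/dh[ V'(h) P / (3 H^2) ] + (H^2 / 8 pi^2) d^2P/dh^2 .
    Absorbing boundaries crudely mimic probability exiting the grid."""
    dh = h[1] - h[0]
    D = H**2 / (8.0 * np.pi**2)          # quantum diffusion coefficient
    drift = V_prime(h) / (3.0 * H**2)    # classical slow-roll drift
    for _ in range(n_steps):
        conv = np.gradient(drift * P, dh)                 # d/dh [ drift * P ]
        lap = np.zeros_like(P)
        lap[1:-1] = (P[2:] - 2.0 * P[1:-1] + P[:-2]) / dh**2
        P = P + dN * (conv + D * lap)
        P[0] = P[-1] = 0.0
    return P

# Free-field sanity check: pure diffusion of an initial Gaussian (H = 1 units).
H = 1.0
h = np.linspace(-2.0, 2.0, 401)
P0 = np.exp(-h**2 / (2 * 0.1**2))
P0 /= np.sum(P0) * (h[1] - h[0])
P2 = evolve_fp(P0, h, lambda x: np.zeros_like(x), H, dN=1e-3, n_steps=2000)  # N = 2
var = np.sum(h**2 * P2) * (h[1] - h[0])  # expect ~ 0.1**2 + H**2 * 2 / (4 pi^2)
```

Resolving the exponentially suppressed tails additionally requires working with $\log P$ or very high precision, which is the nontrivial part of the full numerical solution.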
\label{conclusions} We have studied the dynamical response of inflating spacetime to unstable fluctuations in the Higgs field with numerical simulations of Einstein gravity. Our results offer, for the first time, an in-depth understanding of how spacetime evolves as a Higgs fluctuation falls towards, and eventually reaches, the true, negative-energy vacuum. We find that true vacuum patches stop inflating and create a crunching region, and that the energy liberated creates a black hole surrounded by a shell of negative energy density. This region of true vacuum persists and grows throughout inflation, with more and more energy being locked behind the black hole horizon. In contrast to the na\"ive expectation that this growth is due to the boundary between the true and metastable vacua sweeping outward in space, in an exponentially expanding spacetime the growth occurs in a causally disconnected manner: spatial points fall to the true vacuum independently of whether neighboring points have also reached it. Hence, under most circumstances, this process is insensitive to the behavior in the interior region and to the exact shape of the potential close to the true minimum. We also explored nonspherically symmetric solutions, where, in addition to confirming that the results from the spherically symmetric case apply more generally, we found that the formation of black holes with arbitrarily elongated horizons, or even black strings, was possible, in violation of the hoop conjecture. As such, the Higgs instability provides a quite different setting---one proceeding from an initially dS-like spacetime---where some of the exotic features seen in AdS-like spacetimes are realized. We also extended the numerical solution of the Fokker-Planck equation to resolve the field distribution in the exponentially suppressed tails.
This is necessary to extract the tiny probabilities associated with a single true vacuum patch in our past light cone, while simultaneously incorporating the effects of the renormalization group running of the quartic coupling in the Higgs potential on the evolution of the probability distribution. Using this solution, in conjunction with the result from our classical general relativity simulations that a single true vacuum patch in our past light cone destroys the Universe, we derived a bound $H/\Lmax \lsim 0.07$ on the scale of inflation. This bound is the most accurate available to date, and we compared it to bounds derived previously. We also found, as shown in \Fref{fig:HIlimit}, that a future measurement of the tensor-to-scalar ratio with $r > 0.002$ would imply the need for a stabilizing correction to the Higgs potential at a scale $\lsim 10^{14} \GeV$, supposing $m_t \gsim 171.4 \GeV$. We are thus able to correlate a cosmological quantity with the necessity of stabilizing corrections to the Higgs potential. Finally, we reemphasize that the results in this paper are of wider interest than the SM Higgs potential alone, as they are applicable to the inflationary dynamics of any scalar field with a negative-energy true vacuum.
We describe the Zonal Atmospheric Stellar Parameters Estimator (\zaspe), a new algorithm, and its associated code, for determining precise stellar atmospheric parameters and their uncertainties from high resolution echelle spectra of FGK-type stars. \zaspe\ estimates the stellar atmospheric parameters by comparing the observed spectrum against a grid of synthetic spectra, using only the spectral zones most sensitive to changes in the atmospheric parameters. Realistic uncertainties in the parameters are computed from the data itself, by taking into account the systematic mismatches between the observed spectrum and the best-fit synthetic one. The covariances between the parameters are also estimated in the process. \zaspe\ can in principle use any pre-calculated grid of synthetic spectra. We tested the performance of two existing libraries \citep{coelho:2005, husser:2013} and concluded that neither is suitable for computing precise atmospheric parameters. We therefore describe the synthesis of a new library of synthetic spectra, which was found to generate results consistent with parameters obtained by independent methods (interferometry, asteroseismology, equivalent widths).
\label{sec:intro} The determination of the physical parameters of stars is a fundamental requirement for studying their formation, structure and evolution. Additionally, the physical properties of extrasolar planets depend strongly on how well we have characterised their host stars. In the case of transiting planets, the measured transit depth is related to the ratio of the planet to stellar radii. Similarly, for radial velocity planets the semi-amplitude of the orbit is a function of both the mass of the star and the mass of the planet. In the case of directly imaged exoplanets, their estimated masses depend on the age of the systems. With more than 3000 planets and planetary candidates discovered, mostly by the \textit{Kepler} mission \citep[e.g.][]{howard:2012, burke:2014}, homogeneous and accurate determinations of the physical parameters of the host stars are required for linking their occurrence rates and properties with different theoretical predictions \citep[e.g.][]{howard:2010, buchhave:2014}. Direct determinations of the physical properties of single stars (mass, radius and age) are limited to a few dozen systems. Long baseline optical interferometry has been used on bright sources with known distances to measure their physical radii \citep{boyajian:2012,boyajian:2013}, and precise stellar densities have been obtained using asteroseismology on stars observed by \textit{Kepler} and \textit{CoRoT} \citep[e.g.][]{silva:2015}. Unfortunately, for the rest of the stars, including the vast majority of planetary hosts, the physical parameters cannot be measured directly and indirect procedures have to be adopted, in which atmospheric parameters such as the effective temperature (\teff), surface gravity (\logg) and metallicity (\feh) are derived from stellar spectra by using theoretical model atmospheres. Stellar evolutionary models are then compared with the estimated atmospheric parameters in order to determine the physical parameters of the star.
The amount of information about the properties of a stellar atmosphere contained in its spectrum is enormous. Current state-of-the-art high resolution echelle spectrographs are capable of detecting subtle variations in spectral lines which, in principle, can be translated into a determination of the physical atmospheric conditions of a star with exquisite precision. However, several factors reduce the precision that can be achieved. On the one hand, there are many other properties of a star that can produce changes in the absorption lines. For example, velocity fields on the surface of the star, which include the stellar rotation (which may be differential) and the micro- and macroturbulence, modify the shape of the absorption lines. Non-solar abundances change the strength of the lines of each element. Therefore, in order to obtain precise atmospheric parameters, all of these variables have to be considered. On the other hand, even when all the significant variables of the problem are taken into account, the precision of the parameters becomes limited by modelling uncertainties, e.g., imperfect modelling of the stellar atmospheres and spectral features due to unknown opacity distribution functions, uncertainties in the properties of particular atomic and molecular transitions, effects arising from the assumed geometry of the modelled atmosphere, and non-LTE effects. These sources of error are unavoidable and are currently the main obstacle to obtaining reliable {\em uncertainties} in the estimated stellar parameters. Most existing algorithms that compute atmospheric parameters from high resolution stellar spectra do not consider this factor in detail when deriving the uncertainties.
The problem is that if the reported uncertainties are unreliable, they propagate to the planetary parameters and can bias the results or hide potential trends in the properties of the system under study that, if detectable, could lead to deeper insights into its formation and evolution. A widely used procedure for obtaining the atmospheric parameters of a star consists of comparing the observed spectrum against synthetic models and adopting the parameters of the model that produces the best match. This technique has been implemented in algorithms such as \texttt{SME} \citep{valenti:96}, \texttt{SPC} \citep{buchhave:2012}, and \texttt{iSpec} \citep{blanco:2014} to derive the parameters of planetary host stars. Thanks to the large number of spectral features used, this method has been shown to be capable of dealing with spectra having low SNR, moderate resolution and a wide range of stellar atmospheric properties. However, one of the major drawbacks of spectral synthesis methods is the estimation of the parameter uncertainties. This problem arises because the source of error is not the Poisson noise of the observed spectrum; instead, it is usually dominated by imperfections in the synthesised model spectra, which produce highly correlated residuals. In such cases standard procedures for computing parameter uncertainties are not reliable. For example, \texttt{SPC} computes the internal uncertainties using the dispersion from different measurements in the low-SNR regime, but an arbitrary floor is applied when the uncertainties are expected to be dominated by the systematic mismatches between models and data.
Additionally, \cite{torres:2012} showed that there are strong correlations between the atmospheric parameters obtained using spectral synthesis techniques; therefore, the covariance matrix of the parameters should be a required output of any stellar parameter classification tool, so that the uncertainties of its results are properly propagated to the posterior inferences made using them. Recently, \citet{czekala:2014} introduced \texttt{Starfish}, a code that allows robust estimation of stellar parameters from synthetic models via a likelihood function with a covariance structure described by Gaussian processes. \texttt{Starfish} achieves robustness to synthetic model imperfections through a principled approach based on a sophisticated likelihood function and provides full posterior distributions for the parameters, but, as we will argue later, its uncertainties are significantly underestimated. In this paper we present a new algorithm, dubbed the Zonal Atmospheric Stellar Parameters Estimator (hereafter \zaspe), for estimating stellar atmospheric parameters using the spectral synthesis technique. The uncertainties and correlations of the parameters are computed from the data itself and include the systematic mismatches due to the imperfect nature of the theoretical spectra. The structure of the paper is as follows. In \S~\ref{method} we describe the method that \zaspe\ uses for determining the stellar parameters and their covariance matrix, including details on the synthesis of a new spectral library that overcomes the limitations of the existing libraries for stellar parameter estimation. In \S~\ref{ssec:results} we summarise the performance of \zaspe\ on a sample of stars with measured stellar parameters, and we compare our uncertainties with those produced by \texttt{Starfish}. Finally, in \S~\ref{sec:sum} we summarise and conclude.
\begin{table*} \label{grid} \centering \begin{minipage}{180mm} \caption{Grid extension and spacing for each ZASPE iteration.} \begin{tabular}{@{}cccccccccc@{}} \hline Iteration & $\teff^{i}$ [K] & $\teff^f$ [K] & $\Delta \teff$ [K] & $\logg^{i}$ & $\logg^f$ & $\Delta\logg$ & $\feh^i$ & $\feh^f$ & $\Delta\feh$\\ \hline 1 & 4000 & 7000 & 200 & 1.0 & 5.0 & 0.5 & -1.0 & 0.5 & 0.5 \\ 2 & $\teff^{c}$ - 500 & $\teff^{c}$ + 500 & 100 & $\logg^{c}$ - 0.6 & $\logg^{c}$ + 0.6 & 0.2 & $\feh^c$ - 0.4 & $\feh^c$ + 0.4& 0.1 \\ 3 & $\teff^{c}$ - 300 & $\teff^{c}$ + 300 & 75 & $\logg^{c}$ - 0.6 & $\logg^{c}$ + 0.6 & 0.2 & $\feh^c$ - 0.3 & $\feh^c$ + 0.3& 0.075 \\ 4 & $\teff^{c}$ - 200 & $\teff^{c}$ + 200 & 50 & $\logg^{c}$ - 0.4 & $\logg^{c}$ + 0.4 & 0.1 & $\feh^c$ - 0.2 & $\feh^c$ + 0.2& 0.05 \\ $>$ 4 & $\teff^{c}$ - 50 & $\teff^{c}$ + 50 & 10 & $\logg^{c}$ - 0.2 & $\logg^{c}$ + 0.2 & 0.05 & $\feh^c$ - 0.06 & $\feh^c$ + 0.06& 0.02 \\ \hline \end{tabular} \end{minipage} \end{table*} \section[]{The Method} \label{method} In order to determine the atmospheric stellar parameters of a star, \zaspe\ compares an observed {\em continuum normalised} spectrum against synthetic spectra using least squares minimisation by performing an iterative algorithm that explores the complete parameter space of FGK-type stars. For simplicity, we assume first that we are able to generate an unbiased synthetic spectrum with any set of stellar atmospheric parameters (\teff, \logg\ and \feh). By unbiased we mean that there are no systematic trends in the level of mismatch of the synthesised and real spectra as a function of the stellar parameters, but there can be systematic mismatches that are not a function of stellar parameters. 
If $F_{\lambda} $ is the observed spectrum and $S_{\lambda} (\vec{\theta})$ is the synthesised spectrum with parameters $\vec{\theta} = \{\teff, \logg, \feh\}$, the quantity that we minimize is \begin{equation} \label{dif} X^2(\vec{\theta}) = \sum_{\lambda} ( F_{\lambda} - S_{\lambda} (\vec{\theta}) )^2. \end{equation} In Equation~\ref{dif} we have not included the weights coming from the uncertainties in the observed flux because we are assuming that the signal-to-noise ratio (SNR) of the data is high enough for the uncertainties in the parameters to be governed by the systematic mismatches between the data and the models. The synthesised spectrum requires some processing before it can be compared against the observed one. We do not treat microturbulence and macroturbulence as free parameters, but instead we assume that these values are functions of the atmospheric parameters. The microturbulence value is required during the process of synthesising the spectra and it depends on the particular spectral library selected to do the comparison (see \S~\ref{grids}). On the other hand, the macroturbulence degradation is applied after the synthetic spectra have been generated. We compute the macroturbulence value for each synthetic spectrum from its \teff\ using the empirical relation given in \cite{valenti:2005}\footnote{As was pointed out by \cite{torres:2012}, the formula in \cite{valenti:2005} has a wrong sign.}, namely: \begin{equation} v_{\rm mac} = \left( 3.98 + \frac{T_{eff}-5770 K}{650 K} \right) \textnormal{km s}^{-1}. \end{equation} The effect of macroturbulence on the spectrum is given by a convolution with a Gaussian kernel whose standard deviation is given by $\sigma_{mac}=0.297 v_{mac}$, as was approximated in \cite{takeda:2008}.
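The macroturbulent broadening step above can be sketched as follows (a minimal illustration; the function names are ours and not part of \zaspe):

```python
def v_mac(teff):
    """Macroturbulence in km/s as a function of Teff (K), from the
    corrected Valenti & Fischer (2005) relation quoted in the text."""
    return 3.98 + (teff - 5770.0) / 650.0

def sigma_mac_wavelength(teff, lam, c=299792.458):
    """Standard deviation, in the same units as `lam`, of the Gaussian
    kernel approximating macroturbulence (sigma = 0.297 v_mac, as in
    Takeda et al. 2008), converted from velocity to wavelength."""
    return lam * 0.297 * v_mac(teff) / c  # km/s converted to wavelength units
```

For the Sun ($\teff = 5770$ K) the first function returns $v_{\rm mac}=3.98$ \kms, as expected from the relation.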
The degradation to the particular instrumental resolution, $R = \lambda / \Delta \lambda$, is performed by convolving the synthetic spectrum with another Gaussian kernel whose standard deviation is $\sigma_{res}=\lambda/(2.3R)$. The model spectrum is then split according to the echelle orders of the observed spectrum and the pixelization effect is taken into account by integrating the synthetic flux over each wavelength element of the observed spectrum. \subsection{The sensitive zones} \label{sec:zones} One of the novel features of \zaspe, in contrast to other similar codes, is that the comparison between the observed and synthetic spectra is performed in particular optimised wavelength zones, rather than using the full spectrum. These zones correspond to the regions of the spectrum most sensitive to changes in the stellar parameters and are redefined in each iteration of \zaspe. These sensitive regions are determined from the approximate gradient of the modelled spectra with respect to the stellar parameters at $\vec{\theta^c}$, where $\vec{\theta^c} = \{\teff^c, \logg^c, \feh^c\}$ is the set of parameters that produced the minimum $X^2$ in the previous iteration.
In practice, once $\vec{\theta^c}$ is determined, \zaspe\ computes the following finite differences: \begin{align} \Delta S_{\teff}^1 = |S_\lambda(\teff^c+200,\logg^c,\feh^c) - S_\lambda(\vec{\theta^c}) |, \\ \Delta S_{\teff}^2 = |S_\lambda(\teff^c-200,\logg^c,\feh^c) - S_\lambda(\vec{\theta^c}) |, \\ \Delta S_{\logg}^1 = |S_\lambda(\teff^c,\logg^c+0.3,\feh^c) - S_\lambda(\vec{\theta^c}) |, \\ \Delta S_{\logg}^2 = |S_\lambda(\teff^c,\logg^c-0.3,\feh^c) - S_\lambda(\vec{\theta^c}) |, \\ \Delta S_{\feh}^1 = |S_\lambda(\teff^c,\logg^c,\feh^c+0.2) - S_\lambda(\vec{\theta^c}) |, \\ \Delta S_{\feh}^2 = |S_\lambda(\teff^c,\logg^c,\feh^c-0.2) - S_\lambda(\vec{\theta^c}) |, \end{align} \noindent from which the approximate gradient of the synthesised spectra with respect to the atmospheric parameters, averaged over the three parameters, is estimated as \begin{multline} \Delta S_{\lambda}(\vec{\theta^c}) = \frac{1}{6} (\Delta S_{\teff}^1 + \Delta S_{\teff}^2 + \Delta S_{\logg}^1 \\ + \Delta S_{\logg}^2 + \Delta S_{\feh}^1 + \Delta S_{\feh}^2). \end{multline} Spectral regions where $\Delta S_{\lambda}(\vec{\theta^c})$ is greater than a predefined threshold are identified as the sensitive zones, which we denote as $\{z_i\}$. Figure~\ref{zones} shows a portion of the spectrum for three different stars and the sensitive zones selected in the final \zaspe\ iteration in each case. It can be seen that the selected sensitive zones correspond to the spectral regions where absorption lines are present, but not all the absorption lines are identified as sensitive zones at a given threshold. In addition, the regions that are selected as sensitive zones vary according to the properties of the observed star. For identifying the sensitive zones we have introduced four quantities that take arbitrary values.
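The averaged finite-difference gradient and the resulting zone selection can be sketched as follows (array layout, dictionary keys and function name are illustrative, not the actual \zaspe\ interface):

```python
import numpy as np

def sensitive_zone_mask(S, theta_c, steps, threshold=0.09):
    """Flag wavelength pixels whose averaged finite-difference gradient
    with respect to the stellar parameters exceeds `threshold`.

    S       : dict mapping (Teff, logg, feh) tuples to model flux arrays
    theta_c : current best-fit parameter tuple
    steps   : dict of step sizes, e.g. {'teff': 200., 'logg': 0.3, 'feh': 0.2}
    """
    teff, logg, feh = theta_c
    base = S[theta_c]
    shifted = [(teff + steps['teff'], logg, feh),
               (teff - steps['teff'], logg, feh),
               (teff, logg + steps['logg'], feh),
               (teff, logg - steps['logg'], feh),
               (teff, logg, feh + steps['feh']),
               (teff, logg, feh - steps['feh'])]
    # average of the six per-pixel absolute finite differences
    grad = np.mean([np.abs(S[s] - base) for s in shifted], axis=0)
    return grad > threshold  # boolean mask of sensitive pixels
```

Contiguous runs of `True` pixels in the returned mask then play the role of the zones $\{z_i\}$.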
These correspond to the step sizes for the three atmospheric parameters (200 K, 0.3 dex and 0.2 dex for \teff, \logg\ and \feh, respectively) that are used to compute the gradient of the grid, and the threshold value (0.09 by default) above which a spectral region is defined as a sensitive zone. The particular default values we selected allow \zaspe\ to identify a large number of sensitive zones even for F-type stars, while at the same time ensuring that each of these zones contains only one or two significant absorption lines, even in the case of the crowded spectra of K-type stars. This last requirement is mandatory for the procedure that \zaspe\ uses to compute the errors in the parameters (see \S~\ref{errors}). At the same time, we chose step sizes in the parameters that are slightly larger than the expected errors but small enough to avoid the emergence of other significant features in the spectra of each sensitive zone. \begin{figure} \includegraphics[width=\columnwidth]{zones2.pdf} \caption{Sensitive zones determined by \zaspe\ in a portion of the wavelength coverage for three different stars (top panel: late F-dwarf, central panel: solar-type star, bottom panel: K-giant). In each panel the upper plot shows the observed spectrum (thick line) and the optimal synthetic one determined by ZASPE (thin line), while the lower plot shows the gradient $\Delta S_{\lambda}(\vec{\theta^c})$ of the synthetic grid evaluated at the parameters of the optimal synthetic spectrum, and the threshold (horizontal line) that determines which regions of the spectrum are defined as sensitive zones. The green coloured regions correspond to the sensitive zones determined by ZASPE where the comparison between data and models is performed.
The red coloured regions are regions of the spectrum that are initially identified as sensitive zones by ZASPE but then rejected because the average residual between the optimal model and the data in these particular regions is significantly higher (greater than $3\sigma$) than in the rest of the sensitive zones.} \label{zones} \end{figure} The introduction of the zones into the problem also allows the rejection of portions of the spectra that strongly deviate from $S_{\lambda} (\vec{\theta^c})$, due to modelling problems or the presence of artifacts in the data that remain in the spectrum (e.g., cosmic rays, bad columns). In practice, outliers are identified by computing the root mean square (RMS) of the residuals between the observed spectrum and the optimal synthetic one in each sensitive zone, and zones with RMS values greater than 3 times the average RMS value are rejected. Once the sensitive zones are known, \zaspe\ builds a binary mask, $M_{\lambda}$, filled with ones in the spectral range of the sensitive zones and zeros elsewhere, i.e. \begin{equation} M_\lambda = \left\{ \begin{array}{lr} 1 : \lambda \in \{z_i\}\\ 0 : \lambda \not \in \{z_i\} \end{array} \right. \end{equation} For the next iteration, the function to be minimised is \begin{equation} \label{chism} X^2(\vec{\theta}) = \sum_{\lambda} M_{\lambda} ( F_{\lambda} - S_{\lambda} (\vec{\theta}) )^2. \end{equation} In the first iteration of \zaspe\ the complete spectral range is utilised and $M_\lambda \equiv 1\,\,\, \forall \lambda$. \subsection{Continuum normalization} \zaspe\ contains an algorithm that performs the continuum normalisation of the observed spectra, which is required for a proper comparison with the synthetic spectral library. One important assumption that we make at this step is that the large scale variations of the observed flux as a function of wavelength are smooth and can be accurately traced with a simple low-degree polynomial.
This means that the observed spectrum should at least be corrected for the blaze function and should not contain systematics, in order to define a proper continuum or pseudo-continuum. If the input observed spectrum satisfies this constraint, then our continuum normalisation algorithm can deal with the presence of both shallow and strong spectral features. The continuum is updated after each \zaspe\ iteration, because the optimal model is used by the algorithm to avoid overfitting of the wide spectral features, like the zone of the \ion{Mg}{I}b triplet for example. The idea is to bring the continuum of the observed spectrum to match the continuum of the optimal synthetic one. Therefore, for every echelle order, the optimal synthetic spectrum found after each \zaspe\ iteration is divided by the observed spectrum, and polynomials are fitted to these ratios using an iterative procedure that rejects regions where the model and data significantly differ. Given that both model and data contain the wide spectral features, these disappear when the division is performed, and the only significant features that remain are the instrumental response and the black body wavelength dependence of the observed spectrum. The polynomials obtained for each echelle order are then multiplied by the observed spectrum, which corrects for the large scale smooth variations. Finally, a straight line is fitted to this corrected spectrum using an iterative process that excludes the absorption lines from the fit. This last normalisation is applied to ensure that the continuum or pseudo-continuum takes values equal to 1, which is particularly important when determining and applying the mismatch factors of \S~\ref{errors}. Additionally, the synthetic spectra are also normalised by a straight line. Figure~\ref{cont} shows that the normalisation algorithm used by \zaspe\ performs better in zones with wide spectral features than a simple polynomial fit.
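A simplified sketch of this ratio-fitting normalisation, for a single echelle order, is given below (the polynomial degree, clipping level and iteration count are illustrative choices, not \zaspe's actual settings):

```python
import numpy as np

def continuum_correct(F_obs, S_model, lam, deg=3, nsigma=3.0, niter=5):
    """Bring the observed continuum to match the optimal model: fit a
    low-degree polynomial to the model/observed flux ratio, iteratively
    rejecting pixels where the two spectra differ strongly, and multiply
    the observed spectrum by the fitted polynomial."""
    ratio = S_model / F_obs
    good = np.ones(lam.size, dtype=bool)
    coef = np.polyfit(lam, ratio, deg)
    for _ in range(niter):
        resid = ratio - np.polyval(coef, lam)
        sig = np.std(resid[good])
        if sig == 0.0:  # perfect fit; nothing left to reject
            break
        good = np.abs(resid) < nsigma * sig
        coef = np.polyfit(lam[good], ratio[good], deg)
    return F_obs * np.polyval(coef, lam)
```

Because wide features are present in both model and data, they cancel in the ratio, so the polynomial traces only the smooth instrumental and blackbody variations, as described above.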
\begin{figure} \includegraphics[width=\columnwidth]{continuum.pdf} \caption{Top: The blue line corresponds to the polynomial fitted to the ratio between the optimal synthetic spectrum found in the previous \zaspe\ iteration and the observed spectrum. This procedure allows the continuum normalisation to be determined without overfitting wide spectral features. Bottom: comparison between the continuum determined by the algorithm that \zaspe\ uses (blue line) and the one determined by fitting a simple polynomial (red line), which is clearly heavily affected by the presence of strong absorption features.} \label{cont} \end{figure} \subsection{Radial velocity and \vsini} In each \zaspe\ iteration, the search for the $X^2$ minimum is performed simultaneously over the three atmospheric parameters. However, the velocity of the observed spectrum with respect to the synthesised spectra (radial velocity) and the \vsini\ value are updated in each \zaspe\ iteration after $\vec{\theta^c}$ is determined, because of the slight dependence of these quantities on the atmospheric parameters. In practice, the radial velocity and \vsini\ of the observed spectrum are obtained from the cross correlation function (CCF) computed between the observed spectrum and the one synthesised with parameters $\vec{\theta^c}$ and $\vsini = 0\,\, \kms$. This cross correlation function is given by \begin{equation} \label{ccf1} CCF(v,0) = \int M_\lambda F_\lambda S_{\lambda'} (\vec{\theta^c},0)\, d\lambda, \end{equation} where $\lambda'$ is the wavelength Doppler-shifted by a velocity $v$, given in the non-relativistic regime by $\lambda'=\lambda+\lambda v/c$, where $c$ is the speed of light. A Gaussian function is fitted to the CCF; the mean of the Gaussian is taken as the radial velocity of the observed spectrum, while \vsini\ is determined from the full width at half maximum (FWHM) of the CCF peak as follows.
New CCFs are computed between the synthetic spectrum without rotation and the same synthetic spectrum degraded by different amounts of \vsini: \begin{equation} CCF(v,\vsini) = \int M_\lambda S_\lambda(\vec{\theta^c},\vsini) S_{\lambda'} (\vec{\theta^c},0)\, d\lambda. \end{equation} The FWHM is computed for each CCF peak and a cubic spline is fitted to the relation between the FWHM and \vsini\ values. This cubic spline is then used to find the \vsini\ of the observed spectrum from the FWHM of the CCF computed in Equation~\ref{ccf1}. In the next \zaspe\ iteration all the synthesised spectra are degraded to the \vsini\ obtained in the previous iteration, and the observed spectrum is corrected in radial velocity by the amount found from the cross correlation function. The degradation of the spectrum by rotation is performed with a rotational kernel computed following Eq.~18.11 of \cite{gray:2008}. The limb darkening is taken into account using the quadratic limb-darkening law, with coefficients for the appropriate stellar parameters calculated using the code of \citet{espinoza:2015}. The \vsini\ value for the first \zaspe\ iteration is obtained by cross-correlating the observed spectrum against one with stellar parameters similar to those of the Sun. \subsection{Grid exploration} \label{exp} The synthesis of high resolution spectra is a computationally intensive process. For this reason \zaspe\ uses a pre-computed grid of synthetic spectra and, in order to obtain a synthetic spectrum for an arbitrary set of stellar parameters, a cubic multidimensional interpolation is performed.
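To illustrate the idea of interpolating the grid in the three parameters, the sketch below implements trilinear interpolation on a regular (\teff, \logg, \feh) grid; note that \zaspe\ itself uses cubic interpolation, and the linear version is shown here only for brevity:

```python
import numpy as np

def interp_spectrum(grid_axes, grid_flux, theta):
    """Trilinear interpolation of model spectra at arbitrary parameters.

    grid_axes : tuple of 3 sorted 1-D arrays (Teff, logg, feh nodes)
    grid_flux : array of shape (nT, ng, nf, npix)
    theta     : (teff, logg, feh) inside the grid limits
    """
    idx, w = [], []
    for ax, v in zip(grid_axes, theta):
        i = int(np.clip(np.searchsorted(ax, v) - 1, 0, len(ax) - 2))
        idx.append(i)
        w.append((v - ax[i]) / (ax[i + 1] - ax[i]))  # fractional position
    out = 0.0
    for dT in (0, 1):
        for dg in (0, 1):
            for df in (0, 1):
                wt = ((w[0] if dT else 1 - w[0]) *
                      (w[1] if dg else 1 - w[1]) *
                      (w[2] if df else 1 - w[2]))
                out = out + wt * grid_flux[idx[0] + dT, idx[1] + dg, idx[2] + df]
    return out
```

In the cubic case the per-axis weights are simply replaced by spline basis weights over four neighbouring nodes instead of two.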
Given the known correlations between the three atmospheric parameters and the possible existence of secondary minima in $X^2$ space due to the imperfect modelling of the synthetic spectra, the approach of \zaspe\ for finding the global $X^2$ minimum is to explore the complete parameter space covered by the grid, rather than to rely on local minimisation techniques that require an initial set of guess parameters. In each \zaspe\ iteration the extension and spacing of the parameter grid being explored changes. In the first iteration \zaspe\ explores the complete atmospheric parameter grid with coarse spacing, while from the fourth iteration on, \zaspe\ starts focusing on smaller regions of parameter space around $\vec{\theta^c}$, which are densely explored. Table~\ref{grid} shows the extension and spacings that \zaspe\ uses for each iteration in its default version, but these values can be easily modified by the user. \zaspe\ terminates the iterative process when the parameters obtained after each iteration do not change by significant amounts. In detail, convergence is assumed to be reached when the parameters obtained in the $i$-th iteration do not differ by more than 10 K, 0.03 dex and 0.01 dex in \teff, \logg\ and \feh, respectively, from the ones obtained in the $(i-1)$-th iteration. This convergence is usually achieved after $\sim 5-10$ iterations. \subsection{Parameter uncertainties and correlations} \label{errors} As we mentioned in \S~\ref{sec:intro}, one major issue of the algorithms that use spectral synthesis methods for estimating the stellar atmospheric parameters is the problem of obtaining reliable estimates of the uncertainties in the parameters and their covariances. \zaspe\ deals with this problem by assuming that the principal source of error is the systematic mismatch between the observed spectrum and the synthetic one.
The top panel of Figure~\ref{comp} shows a portion of a high resolution spectrum of a star and the synthetic spectrum that produces the best match with the data. Even though each absorption line is present in both spectra, the depths of the lines are frequently different. This systematic mismatch can be further identified in the central panel of Figure~\ref{comp}, where the residuals in the regions of the absorption lines can be seen to be in several cases significantly greater than those expected from photon noise alone. In addition, the residuals are clearly non-Gaussian and highly correlated in wavelength. \begin{figure} \includegraphics[width=\columnwidth]{temp.pdf} \caption{{\em Top}: portion of a high resolution echelle spectrum of a star (continuous line) and the synthetic spectrum that produces the best match with the data (dashed line). {\em Centre}: residuals between the two spectra and the expected 3$\sigma$ errors. Both panels show that the synthetic spectrum that best fits the data produces systematic mismatches in the zones of the absorption lines and that the errors are greater than the ones expected from the received flux. {\em Bottom}: mismatch factors $d^{z_i}$ computed for the 10 sensitive zones identified in this portion of the spectrum.} \label{comp} \end{figure} Our approach to take into account the systematic mismatches, which builds upon the approach of \citet{grunhut:2009}, is to define a random variable $D$ that modifies the strength of each absorption feature in a sensitive zone $z_i$ of the synthesised spectrum. If $S'^{z_i}_{\lambda}$ is a perfect synthetic spectrum in the $i$-th sensitive zone $z_i$, given a probability density $P(D)$ for the random variable $D$, an imperfect synthetic spectrum $S^{z_i}_{\lambda}$ (like the ones of the spectral libraries that \zaspe\ uses) is modeled as \begin{equation} S^{z_i}_{\lambda} = (S'^{z_i}_{\lambda} - 1) D + 1.
\end{equation} An estimate of the probability density function $P(D)$ can be obtained from the data itself by computing the set of mismatch factors $d^{z_i}$ for all sensitive zones, computed from the difference between the data and the optimal synthetic spectrum found in the final \zaspe\ iteration. For each sensitive zone, the factor $d^{z_i}$ is obtained as the median value, over all pixels in $z_i$, of the ratio between the observed and synthetic line depths: \begin{equation} d^{z_i} = \text{median} \left( \frac{F^{z_i}_{\lambda}-1}{S^{z_i}_{\lambda}-1} \right) . \end{equation} The bottom panel of Figure~\ref{comp} shows the mismatch factors for the 10 sensitive zones identified in that portion of the spectrum. Figure~\ref{diet} shows a histogram of the mismatch factors for the same spectrum of Figure~\ref{comp} but for a greater wavelength coverage ($5000\,\AA < \lambda < 6000\,\AA$). \begin{figure} \includegraphics[width=\columnwidth]{histogram.pdf} \caption{Histogram of the mismatch factors in the sensitive zones. In several regions of the spectrum the absorption lines of the synthetic spectrum can strongly deviate from those of the observed one.} \label{diet} \end{figure} The distribution of mismatch factors is fairly symmetric, centred around $d^{z_i}=1$, and shows a wide spread of values. Most of the absorption lines of the synthetic spectrum that best fits the data have between 50\% and 200\% of the strength of the observed ones. Some lines can deviate even more; however, those zones are rejected as strong outliers by \zaspe, as explained in \S~\ref{sec:zones}. \zaspe\ estimates the probability distribution of the stellar atmospheric parameters by running a random sampling method in which the synthetic spectrum that produces the minimum $X^2$ is searched for again in each of $B$ realisations, in the same way as described in the previous sections, but using a modified set of model spectra in each realisation.
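The per-zone mismatch factor defined above can be computed as in this minimal sketch, which assumes continuum-normalised flux arrays restricted to one sensitive zone:

```python
import numpy as np

def mismatch_factor(F_zone, S_zone):
    """d^{z_i}: median, over the pixels of one sensitive zone, of the
    ratio of observed to synthetic line depths. Both input arrays are
    continuum-normalised, so (flux - 1) is the (negative) line depth."""
    return np.median((F_zone - 1.0) / (S_zone - 1.0))
```

For example, if every observed line in a zone is 1.5 times deeper than in the model, the factor is 1.5, matching the spread of values seen in Figure~\ref{diet}.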
The only difference between the minimisation run on each realisation and the original search is that the set of sensitive zones $\{z_i\}$ is kept fixed at the set that \zaspe\ converged to. In each realisation, the strengths of the lines of the synthetic spectra are modified by randomly selecting mismatch factors from the $\{d^{z_i}\}$ set, with replacement. Each sensitive zone is modified by a different factor, which can be repeated, but the same factor is applied in each zone for the whole set of synthesised spectra. In the random sampling method, the quantity that is minimised in each realisation $b$ is \begin{equation} \label{chism2} X^2_b = \sum_{\lambda} M_{\lambda} \left( F_{\lambda} - \left[ (S_{\lambda}(\vec{\theta}) - 1)D_{\lambda} + 1 \right] \right)^2, \end{equation} where $D_\lambda$ is a mask defined for each realisation that contains the mismatch factors for each sensitive zone. In order to avoid possible biases in the final distribution of the parameters originating from the asymmetry of the sampling function, when a factor is selected from $\{d^{z_i}\}$ we include a 0.5 probability for this factor to take its reciprocal value, enforcing in practice symmetry in the function from which the factors are sampled. After each realisation of the sampling method a new set of atmospheric parameters is found. From this set of possible outcomes, the complete covariance matrix of the atmospheric parameters can be estimated. After testing the method on spectra with different stellar atmospheric parameters we found that about $B=100$ realisations are enough to obtain reliable parameter covariance matrices. The procedure that \zaspe\ uses to obtain the errors and correlations assumes that the systematic mismatches between the different zones are uncorrelated. This simplification of the problem means that some systematic errors between the data and the models are not accounted for by our method.
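One realisation of this resampling scheme can be sketched as follows (names are illustrative; `zones` stands for a list of pixel-index arrays, one per sensitive zone):

```python
import numpy as np

def sample_factors(d_factors, n_zones, rng):
    """Draw one realisation of per-zone mismatch factors: sample with
    replacement from the measured set, taking the reciprocal of each
    draw with probability 0.5 to symmetrise the sampling function."""
    draws = rng.choice(d_factors, size=n_zones, replace=True)
    flip = rng.random(n_zones) < 0.5
    draws[flip] = 1.0 / draws[flip]
    return draws

def perturbed_x2(F, S, zones, factors):
    """X^2_b of the text: the line strengths of the model in each
    sensitive zone z_i are rescaled by the drawn factor D_i before
    comparison with the observed spectrum."""
    x2 = 0.0
    for z, d in zip(zones, factors):
        S_mod = (S[z] - 1.0) * d + 1.0  # rescaled line depths
        x2 += np.sum((F[z] - S_mod) ** 2)
    return x2
```

Repeating this $B$ times and refitting $\vec{\theta}$ in each realisation yields the sample from which the parameter covariance matrix is estimated.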
If, for example, the abundance of one particular atomic species strongly deviates from the one assumed in our model, the degree of mismatch of the absorption lines of that element will be correlated. However, in \S~\ref{ssec:results} we will find that our assumption is able to account for the typical size of systematic errors in atmospheric parameters, as inferred from measuring the parameters with different methods. \subsection{The reference spectral synthetic library} \label{grids} In order to determine the atmospheric stellar parameters of a star, \zaspe\ compares the observed spectrum against a grid of synthetic models. In principle, after some minor specifications about the particular format of the grid, \zaspe\ can use any pre-calculated grid. We have tested \zaspe\ with two publicly available grids of synthetic spectra: that of \cite[hereafter C05]{coelho:2005}, which is based on the ATLAS model atmospheres \citep{kurucz:1993}; and that presented in \cite[hereafter H13]{husser:2013}, which is based on the PHOENIX model atmospheres. We have found that both grids present important biases when comparing the stellar parameters obtained with them by \zaspe\ for a set of reference stars. In Figure~\ref{zaspe-comp} we show the comparison of the results obtained by \zaspe\ against the values presented in SWEET-Cat \citep{santos:2013} for a set of publicly available spectra in the ESO archive. SWEET-Cat is a catalogue of atmospheric stellar parameters of planetary host stars. The parameters were computed using the equivalent width method and the ATLAS plane-parallel model atmospheres \citep{kurucz:1993} on a set of high signal-to-noise and high spectral resolution echelle spectra.
We decided to use SWEET-Cat for benchmarking our method because: (1) it includes stars with a wide range of stellar parameters; (2) the same homogeneous analysis is applied to each spectrum; (3) the equivalent width method has clear physical foundations and does not produce strong correlations between the inferred parameters; and (4) the inferred parameters have been shown to be consistent with results obtained with different, less model-dependent methods (infrared flux, interferometry, stellar density computed from transit light-curve modelling) and also with standard spectral synthesis tools like \texttt{SPC} and \texttt{SME} \citep{torres:2012}. The top panels of Figure~\ref{zaspe-comp} show the comparison of the results obtained by ZASPE using the H13 library. These results deviate strongly from the reference values for the three atmospheric parameters. The parameters are systematically underestimated by 300 K, 0.6 dex and 0.3 dex on average in \teff, \logg\ and \feh, respectively. There also appear to be quadratic trends in \teff\ and \logg, which produce greater deviations for hot and/or giant stars. These systematic trends can be expected from this kind of grid of synthetic spectra because the parameters of the atomic transitions come from theory or from laboratory experiments, and are not empirically calibrated with observed spectra. Another possible source for these strong biases can be related to the different model atmospheres used. We have estimated the atmospheric parameters of the Sun with \zaspe+H13, finding that they present important deviations with respect to the accepted reference values ($T_{eff \odot}^{H13}$=5430 K, log$g_{\odot}^{H13}$=4.1 dex, [Fe/H]$_{\odot}^{H13}$=$-$0.3 dex). These results suggest that, if the strong observed biases are produced by the use of different model atmospheres, the PHOENIX models are less accurate than the ATLAS ones for estimating atmospheric parameters.
The central panels of Figure~\ref{zaspe-comp} correspond to the results obtained by ZASPE using the C05 library. Even though the average values determined with the C05 grid are more compatible with the reference values than the ones obtained with the H13 grid, there is a strong trend in $\Delta\teff$. The systematic trend tends to bring the values of \teff\ towards that of the Sun ($\approx$5750 K) and can produce deviations of $\approx$500 K for F-type stars. In this case both sets of results are obtained using the same model atmospheres. The origin of the observed bias is unknown, but it can plausibly be related to two procedures that were adopted in the generation of the C05 grid. First, the oscillator strengths (log$gf$) of several \ion{Fe}{}\ transitions were calibrated using a high resolution spectrum of the Sun, which could bias the results if the physical processes responsible for the formation of the lines are not accurately modelled by the synthesising program; and second, all the spectra with $\logg>3.0$ were synthesised assuming a solar microturbulence value of $v_t=1.0$ \kms, but FGK-dwarfs have measured microturbulence values in the range of $\approx$0-6 \kms. The other parameters show smaller biases; however, the trend in \teff\ coupled with the correlations between the atmospheric parameters induces an important dispersion in \logg\ and \feh. \subsection{A new synthetic grid} \label{ngrid} As shown in the last section, it is not straightforward to use public libraries of synthetic spectra for estimating atmospheric parameters of stars, due to strong systematic trends and biases that can arise from erroneous physical assumptions and calibrations. For that reason we decided to synthesise a new grid. We used the \texttt{spectrum} code \citep{gray:1999} and the Kurucz model atmospheres \citep{castelli:2004} with solar scaled abundances.
In order to avoid biases in \teff\ related to assuming a fixed microturbulence value, we assume that the microturbulence is a function of \teff\ and \logg. \cite{ramirez:2013} established an empirical calibration of the microturbulence as a function of the three atmospheric parameters, but the validity of the proposed relation was limited to stars having $\teff>5000$ K. We thus decided to base our microturbulence calibration on the values computed in SWEET-Cat by \cite{santos:2013}. We considered only the systems having the homogeneity flag and, by visually inspecting the dependence of the microturbulence on the atmospheric parameters, defined three different regimes for our empirical microturbulence law. For dwarf stars ($\logg>3.5$) the microturbulence was assumed to depend on \teff\ through a third-degree polynomial, while for subgiant and giant stars the microturbulence was fixed to two different values, as follows \begin{figure*} \includegraphics[width=16cm]{comparison.pdf} \caption{Comparison of the atmospheric parameters obtained by \zaspe\ using three different libraries of synthetic spectra against the values reported in SWEET-Cat. The top panels correspond to the results obtained using the H13 grid, where strong biases and systematic trends are present in the three parameters, probably because the parameters of the atomic transitions were not empirically calibrated. The central panels correspond to the results obtained using the C05 grid, where a strong systematic trend drives \teff\ values towards that of the Sun. The bottom panels show the results obtained by \zaspe\ when using the synthetic library presented in this work.
Results are compatible with the values reported in SWEET-Cat and no strong systematic trends can be identified.} \label{zaspe-comp} \end{figure*} \begin{align*} v_t & = -36.125 + 0.019 \teff\ &&\\ & -3.65 \times 10^{-6} \teff^2 + 2.28 \times 10^{-10} \teff^3 && (\logg>3.5)\\ v_t & = 1.2 \kms\ && (3.0<\logg<3.5)\\ v_t &=1.6 \kms && (\logg<3.0)\\ \end{align*} We used the line list provided with the \texttt{spectrum} code. We initially synthesised a grid of spectra using the original parameters of the transitions in the line list. However, after testing the grid with \zaspe\ we found that, while the estimated \teff\ and \feh\ values were closer to the SWEET-Cat ones than the values found using the other two public libraries, some slight but significant biases in these parameters were still present, and the \logg\ values were strongly underestimated, by $\sim 0.8$ dex. For this reason we decided to follow a similar approach to C05, and we tuned the log$gf$ of several ($\sim$400) prominent atomic lines. As opposed to C05, though, we did not use the spectrum of the Sun to perform the tuning; instead we used the spectra of a set of stars that have some of their atmospheric parameters obtained using more direct procedures. In particular, we used stars whose \teff\ was measured by long baseline interferometry \citep{boyajian:2012,boyajian:2013} and another set of stars with \logg\ values precisely determined through asteroseismology using $Kepler$ data \citep{silva:2015}. For the latter sample of stars we obtained their spectra from the public Keck/HIRES archive, while for the former sample we obtained spectra from the same archive and also used data from the FEROS spectrograph found in the ESO archive. Tables~2 and 3 show the stars that were used to adjust the log$gf$ values.
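The piecewise microturbulence law defined above translates directly into code (as an implementation choice in this sketch, the boundary values $\logg=3.5$ and $\logg=3.0$ are assigned to the intermediate and giant regimes, respectively):

```python
def v_t(teff, logg):
    """Piecewise empirical microturbulence (km/s) used when
    synthesising the new grid, as a function of Teff (K) and logg."""
    if logg > 3.5:  # dwarfs: third-degree polynomial in Teff
        return (-36.125 + 0.019 * teff
                - 3.65e-6 * teff**2 + 2.28e-10 * teff**3)
    if logg > 3.0:  # subgiants
        return 1.2
    return 1.6      # giants
```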
For each absorption line we determined the best log$gf$ value for each reference spectrum by building synthetic spectra in that spectral region with different values of log$gf$ but with the stellar parameters fixed to the ones obtained by asteroseismology or interferometry. For each star we found the synthetic spectrum that produces the smallest $\chi^2$ and saved the log$gf$ value of that model. We then used the median of the log$gf$ values determined for the different stars as the calibrated log$gf$ value of the particular atomic transition. In addition to the log$gf$ values of the $\sim400$ spectral lines, we also manually adjusted the damping constants of the \ion{Mg}{I}b triplet and \ion{Na}{I} doublet using a similar procedure. \texttt{spectrum} uses the classical van der Waals formulation to generate the wings of the strong lines, but this procedure has been found to underestimate the strength of the absorption features. A common solution is to include an enhancement factor to correct for this behaviour. In our case, we determined this empirical enhancement factor for each of these strong lines using the above-mentioned set of standard stars. We found that the enhancement factor has a temperature dependence. \cite{anstee:1991} developed a detailed approximation of the van der Waals theory in which the temperature dependence of the damping constant was determined to follow a power law. In our case, we treated the temperature dependence of the damping constants empirically, by fitting linear relations to the enhancement factors determined from the standard stars as a function of temperature for each strong line. These parameters were then used to synthesise the \ion{Mg}{I}b and \ion{Na}{I} lines for spectra with different values of \teff. The spectral range of our grid goes from 4900\AA\ to 6100\AA.
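The per-line log$gf$ calibration just described (a grid of trial log$gf$ values, $\chi^2$ minimization per reference star, then the median across stars) can be sketched as follows; `synthesize` and `chi2` are placeholders for the actual \texttt{spectrum}-based synthesis and goodness-of-fit routines:

```python
import numpy as np

def calibrate_loggf(trial_loggf, reference_stars, synthesize, chi2):
    """Calibrate one absorption line: for every reference star, pick the
    trial log gf whose synthetic spectrum best matches the observed one
    (smallest chi^2), then return the median over stars.

    reference_stars: iterable of (observed_spectrum, fixed_parameters),
    where the fixed parameters come from asteroseismology/interferometry.
    """
    best_per_star = []
    for obs_spectrum, fixed_params in reference_stars:
        chi2_values = [chi2(obs_spectrum, synthesize(fixed_params, lg))
                       for lg in trial_loggf]
        best_per_star.append(trial_loggf[int(np.argmin(chi2_values))])
    return float(np.median(best_per_star))
```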
This range was selected because most of the spectral transitions for FGK-type stars are located at wavelengths shorter than 6000 \AA, but for $\lambda < 5000$ \AA\ the spectral lines become excessively crowded, which complicates the process of adjusting the log$gf$ values. The limits and spacings of the stellar parameters of the grid we synthesised are \begin{itemize} \item \teff: 4000K --- 7000K, $\Delta\teff$=200K \item \logg: 1.0 dex --- 5.0 dex, $\Delta\logg$= 0.5 dex \item \feh: -1.0 dex --- 0.5 dex, $\Delta\feh$=0.25 dex. \end{itemize} We used a multidimensional cubic spline to generate the model atmospheres with atmospheric parameters not available in the original set of atmospheres provided by the Kurucz models. The bottom panels of Figure \ref{zaspe-comp} show the results obtained using \zaspe\ with this new grid of synthetic spectra against the values stated in SWEET-Cat. The results agree very well with the reference values and no evident trends are present. The \teff\ values show excellent agreement, with only two outliers. The results obtained for \logg\ show some tentative systematic trends. In particular, SWEET-Cat reports some \logg\ values greater than 4.7 dex, but surface gravities that high are not common for FGK-type stars, so those values are suspect. The \feh\ values present no systematic trends, but a constant offset can be identified: the \feh\ values are on average underestimated by 0.05 dex compared to the SWEET-Cat values. However, differences of $\approx$0.09 dex in \feh\ have been previously reported when comparing SWEET-Cat metallicities against those obtained via spectral synthesis techniques, so the offset we observe is within the expected range given the different techniques used \citep{mortier:2013}. In order to further check the performance of our new grid, we used three other samples of stars with stellar parameters obtained in a homogeneous way.
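The model-atmosphere interpolation step described above can be illustrated with a regular-grid interpolator over the $(\teff, \logg, \feh)$ axes. This is a schematic sketch only (scipy assumed): for brevity a scalar placeholder quantity and a linear scheme are used, whereas the actual code interpolates full atmospheric structures with a cubic spline:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Grid axes matching the limits and spacings quoted in the text.
teff = np.arange(4000.0, 7001.0, 200.0)   # 4000 K .. 7000 K
logg = np.arange(1.0, 5.01, 0.5)          # 1.0 .. 5.0 dex
feh = np.arange(-1.0, 0.51, 0.25)         # -1.0 .. 0.5 dex

# Placeholder "atmospheric quantity" tabulated at every grid node;
# the real grid stores a full model atmosphere at each node instead.
T, G, F = np.meshgrid(teff, logg, feh, indexing="ij")
quantity = 1e-3 * T + 0.1 * G + F

interp = RegularGridInterpolator((teff, logg, feh), quantity)
value = interp([[5100.0, 4.3, 0.12]])[0]  # off-grid parameter point
```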
First, we used our two sets of stars with stellar parameters obtained through interferometry and asteroseismology, which are listed in Tables~2 and 3. Figure~\ref{inter-aste} shows the comparison of the parameters obtained using \zaspe\ with our new grid as a function of the reference values. The third sample of stars that we used corresponds to the exoplanet host stars analysed by \cite{torres:2012}, whose \logg\ values were precisely obtained from the stellar densities measured from the transit light curves. Figure~\ref{torres} shows the results obtained by \zaspe\ as a function of the parameters found in that study. For these three samples of stars, the parameters obtained with \zaspe\ using our new grid are in good agreement with the reference values. However, there is still a slight but significant overestimation of the \logg\ values in the case of dwarf stars. This problem can arise because (i) not all the spectral lines were empirically calibrated, and (ii) we impose that the modelling errors originate from unreliable log$gf$ values, and therefore, while the calibration significantly improves the quality of the synthetic grid, some additional weaker systematic errors could be introduced. \begin{figure*} \includegraphics[width=\textwidth]{cross.pdf} \caption{ Comparison between the parameters obtained by \zaspe\ using the new grid and the reference values for the sets of stars with asteroseismological (blue) and interferometric (red) derived parameters. The left panel shows the results in the case of \teff\ and the right panel for \logg.} \label{inter-aste} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{torres.pdf} \caption{Comparison between the parameters obtained by \zaspe\ using the new grid and the reference values for the set of stars analysed in \citet{torres:2012}.
Left, central and right panels correspond to the comparisons in \teff , \logg\ and \feh , respectively} \label{torres} \end{figure*} Our new spectral library has been made publicly available\footnote{http://www.astro.puc.cl/$\sim$rbrahm/new\_grid.tar.gz}. \begin{table} \label{inter} \centering \caption{Sample of stars with temperatures measured using interferometric observations \citep{boyajian:2012,boyajian:2013} that were used to empirically calibrate log$gf$ values and damping constants of prominent absorption lines.} \begin{tabular}{@{}lccccccc@{}} \hline Name & \teff\ [K] & $\sigma_{\teff}$ [K] & \logg & \feh & Instrument\\ \hline GJ105 & 4662 & 17 & 4.52 & -0.08 & FEROS\\ GJ166A & 5143 & 14 & 4.54 & -0.24 & FEROS \\ GJ631 & 5337 & 41 & 4.59 & 0.04 & FEROS \\ GJ702A & 5407 & 52 & 4.53 & 0.03 & FEROS \\ HD102870 & 6132 & 26 & 4.11 & 0.11 & FEROS \\ HD107383 & 4705 & 24 & 2.61 & -0.30 & HIRES\\ HD109358 & 5653 & 72 & 4.27 & -0.30 & HIRES\\ HD115617 & 5538 & 13 & 4.42 & 0.01 & FEROS \\ HD131156 & 5483 & 32 & 4.51 & -0.14 & FEROS\\ HD142860 & 6294 & 29 & 4.18 & -0.19 & FEROS \\ HD145675 & 5518 & 102 & 4.52 & 0.44 & HIRES\\ HD1461 & 5386 & 60 & 4.20 & 0.16 & FEROS \\ HD146233 & 5433 & 69 & 4.25 & -0.02 & FEROS \\ HD16895 & 6157 & 37 & 4.25 & -0.12 & HIRES\\ HD182572 & 5787 & 92 & 4.23 & 0.33 & FEROS\\ HD19373 & 5915 & 29 &4.21 & 0.09 & HIRES\\ HD20630 & 5776 & 81 & 4.53 & 0.0 & FEROS \\ HD210702 & 4780 & 18 & 3.11 & 0.03 & HIRES\\ HD222368 & 6288 & 37 & 3.98 & -0.08 & FEROS \\ HD22484 & 5997 & 44 & 4.07 & -0.09 & FEROS\\ HD30652 & 6516 & 19 & 4.30 & -0.03 & FEROS \\ HD33564 & 6420 & 50 & 4.24 & 0.08 & HIRES \\ HD34411 & 5749 & 48 & 4.21 & 0.05 & HIRES\\ HD39587 & 5961 & 36 & 4.47 & -0.16 & FEROS\\ HD4614 & 6003 & 24 & 4.39 & -0.30 & HIRES\\ HD4628 & 4950 & 14 & 4.63 & -0.22 & FEROS \\ HD7924 & 5075 & 83 & 4.56 & -0.14 & HIRES\\ HD82328 & 6300 & 33 & 3.87 & -0.12 & HIRES\\ HD82885 & 5434 & 45 & 4.39 & 0.06 & HIRES \\ HD86728 & 5612 & 52 & 4.26 & 0.20 & HIRES \\ 
HD90839 & 6233 & 68 & 4.41 & -0.16 & HIRES \\ \hline \end{tabular} \end{table} \begin{table*} \label{aste} \centering \caption{Sample of stars with \logg\ values measured using asteroseismology of $Kepler$ data \citep{silva:2015}, that were used to empirically calibrate log$gf$ values and damping constants of prominent absorption lines.} \begin{tabular}{@{}lccccccccc@{}} \hline Name & \teff\ [K] & $\sigma_{\teff}$ [K] & \logg & $\sigma_{\logg}$ & \feh & $\sigma_{\feh}$ & Instrument\\ \hline KOI 1612 & 6104 & 74 & 4.293 & 0.004 & -0.20 & 0.10 & HIRES \\ KOI 108 & 5845 & 88 & 4.155 & 0.004 & 0.07 & 0.11 & HIRES \\ KOI 122 & 5699 & 74 & 4.163 & 0.003 & 0.30 & 0.10 & HIRES \\ KOI 41 & 5825 & 75 & 4.125 & 0.004 & 0.02 & 0.10 & HIRES \\ KOI 274 & 6072 & 75 & 4.056 & 0.013 & -0.09 & 0.10 & HIRES \\ HIP94931 & 5046 & 74 & 4.560 & 0.003 & -0.37 & 0.09 & HIRES \\ KOI 246 & 5793 & 74 & 4.280 & 0.003 & 0.12 & 0.07 & HIRES \\ KOI 244 & 6270 & 79 & 4.275 & 0.008 & -0.04 & 0.10 & HIRES \\ KOI 72 & 5647 & 74 & 4.344 & 0.003 & -0.15 & 0.10 & HIRES \\ KOI 262 & 6225 & 75 & 4.135 & 0.008 & -0.00 & 0.08 & HIRES \\ KOI 277& 5911 & 66 & 4.039 & 0.004 & -0.20 & 0.06 & HIRES \\ KOI 123 & 5952 & 75 & 4.213 & 0.008 & -0.08 & 0.10 & HIRES \\ KOI 260 & 6239 & 94 & 4.240 & 0.008 & -0.14 & 0.10 & HIRES \\ KOI 1925 & 5460 & 75 & 4.495 & 0.002 & 0.08 & 0.10 & HIRES \\ KOI 5& 5945 & 60 & 4.007 & 0.003 & 0.17 & 0.05 & HIRES \\ KOI 245 & 5417 & 75 & 4.570 & 0.003 & -0.32 & 0.07 & HIRES \\ KOI 7 & 5781 & 76 & 4.102 & 0.005 & 0.09 & 0.10 & HIRES \\ KOI 263 & 5784 & 98 & 4.061 & 0.004 & -0.11 & 0.11 & HIRES \\ KOI 975 & 6305 & 50 & 4.026 & 0.004 & -0.03 & 0.10 & HIRES \\ KOI 69 & 5669 & 75 &4.468 &0.003 &-0.18 & 0.10 & HIRES \\ KOI 42 & 6325 & 75 &4.262 &0.008 & 0.01 & 0.10 & HIRES \\ \hline \end{tabular} \end{table*}
\label{sec:sum} In this work, we have presented a new algorithm based on the spectral synthesis technique for estimating the stellar atmospheric parameters of FGK-type stars from high-resolution echelle spectra. The comparison between the data and the models is performed iteratively in the zones of the spectra that are most sensitive to changes in the atmospheric parameters. These zones are determined after each \zaspe\ iteration, and the regions of the spectra that strongly deviate from the best model are not considered in subsequent iterations. \zaspe\ computes the errors and correlations in the parameters from the data itself, by assuming that the uncertainties are dominated by the systematic mismatches between the data and the models that arise from unknown parameters of the individual atomic transitions. These systematic effects manifest themselves by randomly modifying the strength of the absorption lines of the synthesised spectra. The distribution of mismatches is determined by \zaspe\ from the observed spectrum and the synthetic model that produces the best fit. A random sampling method uses this empirical distribution of line-strength mismatches to modify the complete grid of synthetic spectra in a number of realisations, and a new set of stellar parameters is determined in each realisation. The complete covariance matrix can then be computed from the distribution of outputs of the random sampling method. We have validated \zaspe\ by comparing its estimates with the SWEET-Cat catalogue of stellar parameters. We have found that the synthetic libraries of \cite{coelho:2005} and \cite{husser:2013} are not suitable for obtaining reliable atmospheric parameters, because they present strong systematic trends when the \zaspe\ results obtained with these grids are compared against the SWEET-Cat reference values. We have detailed the methodology used to generate our own library of synthetic spectra, which we have shown is able to produce results consistent with the SWEET-Cat catalogue.
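Schematically, the random-sampling error estimate described above amounts to the following; `fit_parameters` and `perturb_grid` are stand-ins for the actual \zaspe\ machinery, not its real interface:

```python
import numpy as np

def sampling_covariance(fit_parameters, perturb_grid, grid,
                        mismatch_dist, n_realisations=100, seed=0):
    """Monte-Carlo estimate of the parameter covariance matrix:
    modify the synthetic grid with line-strength mismatches drawn from
    the empirical distribution, refit the stellar parameters, and take
    the covariance of the resulting parameter sets."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_realisations):
        noisy_grid = perturb_grid(grid, mismatch_dist, rng)
        samples.append(fit_parameters(noisy_grid))
    samples = np.asarray(samples)
    return np.cov(samples, rowvar=False)
```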
We have further confirmed the performance of our new grid by estimating with \zaspe\ the stellar parameters of three sets of stars whose parameters have been refined by less model-dependent techniques (interferometry, asteroseismology, planetary transits). We have estimated stellar parameters for the Sun and Arcturus using high signal-to-noise archival spectra, obtaining results consistent with state-of-the-art estimates for these archetypal stars. Importantly, we obtain uncertainties that are in line with the expected level of systematic uncertainties based on studies that have performed repeated measurements of a sample of stars. Finally, we have estimated parameters for the star WASP-14, both to gauge performance on a typical star followed up in exoplanetary transit surveys and to compare with the \texttt{Starfish} code, the only other approach we are aware of that deals with the systematic mismatch between models and data. Unlike \zaspe, the \texttt{Starfish} code delivers underestimated uncertainties, a fact we believe is due to its modelling of the mismatch structure using a stationary kernel for what is fundamentally a non-stationary process, since the mismatch is concentrated in the line structure. Currently \zaspe\ works for stars of spectral types FGK. The main barriers to extending the use of \zaspe\ to stars with lower \teff\ are related to the assumption that the systematic mismatches can be modelled by one random variable that modifies the strength of the absorption lines. Molecular bands become the principal feature in the spectra of stars with $\teff < 4000$ K, and a more complex model is required to characterise the systematic differences between observed and synthetic spectra. Extension to later types will be the subject of future efforts. \zaspe\ is mostly a \texttt{Python}-based code with some routines written in \texttt{C}.
It can be run in parallel, with the user specifying the number of cores to be used. On a 16-core CPU it takes $\approx$10 minutes for \zaspe\ to find the synthetic spectrum that produces the best match with the data; determining the covariance matrix, however, requires a couple of hours. \zaspe\ has been adopted as the standard procedure for estimating the stellar atmospheric parameters of the transiting extrasolar systems discovered by the HATSouth survey \citep{bakos:2013}; to date its results have been used for the analysis of 27 new systems \citep[from HATS-9b to HATS-35b,][]{brahm:2015,mancini:2015,ciceri:2015,brahm:2015:hs17,rabus:2016,penev:2016,deval:2016,bhatti:2016,bento:2016, espinoza:2016:hs}. \zaspe\ is made publicly available at \url{http://github.com/rabrahm/zaspe}.
{A new accurate method for reconstructing the arrival direction of an extensive air shower (EAS) is described. Unlike existing methods, it does not rely on the minimization of a function and is therefore fast and stable. The method also does not require detailed knowledge of the curvature or thickness structure of an EAS. It achieves an angular resolution of about 1 degree in the central regions of a typical surface array, and a better angular resolution than other methods in the marginal areas of arrays.}
\label{sec:intro} It would not be an exaggeration to say that the most important property of an EAS is its arrival direction. Indeed, the first step in reconstructing an EAS is the estimation of its arrival direction. The EAS arrival direction is fundamental for reconstructing the core location and, more importantly, for determining the shower energy. Conversely, a mis-estimated arrival direction results in systematic errors in the other reconstructed parameters of an EAS.\\ The most common method for finding the arrival direction of an EAS is a fit of the recorded arrival times, $t^{rec}_i$, to the expected arrival times, $t^{exp}_i$, which is performed by minimizing the following function: \begin{equation} \label{eq:CHI2} \chi^2=\sum_{i=1}^N{w_i(t^{exp}_i-t^{rec}_i)^2} \end{equation} where $N$ is the number of triggered detectors (TD) of the array during an EAS event and $w_i$ is the weight assigned to the $i$th TD. Usually $t^{exp}$ is a plane, a cone with a fixed cone slope, a cone with a variable cone slope that is taken as a fit parameter, or a plane with a curvature correction.\\ The simplest functional form of $t^{exp}$ is a plane wave front propagating at the speed of light (the plane front approximation, PFA). This plane is represented by the following equation: \begin{equation} \label{eq:PFA1} \hat{\textbf{n}}\cdot(\textbf{r}-\textbf{r}_0)=c(t-t_0) \end{equation} where $\hat{\textbf{n}}=(n_x,n_y,n_z)$ is a unit vector along the EAS axis, $\textbf{r}_0$ is the position of an arbitrary point on the plane, and $t_0$ is the arrival time of the EAS forward front at this point. Only three independent constants (e.g. $n_x$, $n_y$ and $t_0$) need to be found, as can be seen more clearly if we rewrite equation \eqref{eq:PFA1} as follows: \begin{equation} \label{eq:PFA2} n_xx+n_yy+n_zz=c(t-t_0) \end{equation} where $\textbf{r}_0$ is replaced with the coordinates of the origin, $(0,0,0)$.
Now, the $\chi^2$ function can be written as: \begin{equation} \label{eq:chi2Plane} \chi^2=\sum_{i=1}^N{w_i(n_xx_i+n_yy_i+n_zz_i-c(t_i-t_0))^2} \end{equation} under the constraint $n_x^2+n_y^2+n_z^2=1$.\\ Sometimes all the weights are taken as $w_i=1$. In these situations, the summation often does not include all TDs. For example, the summation may be performed over a few TDs around the one recording the largest number of particles \citep{aglietta1993uhe}.\\ In some other cases, the thickness of the EAS front is considered and the weights are taken as $w_i=1/\sigma_i^2$, where $\sigma_i$ is the thickness of the EAS at the location of the $i$th detector \citep{yoshida1995cosmic,alexandreas1992cygnus}. \cite{linsley1986thickness} established empirically that: \begin{equation} \label{eq:sigma} \sigma_i=1.6(\frac{r_i}{30}+1)^{1.65}\quad\text{ns} \end{equation} where $r_i$ is the distance of the $i$th detector from the core location, measured in meters. When a detector detects more than one particle, the above equation should be divided by $\sqrt{n_i}$, where $n_i$ is the number of particles detected in the $i$th detector.\\ Some authors prefer to consider the front of an EAS as a cone with a fixed cone slope \citep{merck1996methods}. Assuming a conical front, equation \eqref{eq:chi2Plane} changes as follows: \begin{equation} \label{eq:chi2cone} \chi^2=\sum_{i=1}^N{w_i(n_xx_i+n_yy_i+n_zz_i+s_{cone}\rho_i-c(t_i-t_0))^2} \end{equation} where $s_{cone}$ is the EAS cone slope and $\rho_i$ is the transverse distance of the $i$th detector from the EAS axis.\\ Another possible treatment is to take the cone slope as a function of other EAS properties (e.g. a function of the zenith angle \citep{acharya1993angular}). In these circumstances the EAS parameters must first be found with an initial crude estimate (e.g. the arrival direction from a PFA with all $w_i$ taken as 1). A further option is to take the slope of the cone as an additional fit parameter (e.g.
\cite{mayer1993fast}).\\ All of the above methods for reconstructing the arrival direction of an EAS rely on the minimization of a multivariable function. With the exception of the special case of a simple PFA in which all $w_i$ are taken constant, whose minimization can be done analytically, all other techniques need a numerical minimization, which is time-consuming and does not have a unique solution. Moreover, numerical minimization requires a first guess for the desired parameters and, because of the inherent complexity of minimizing a multivariable function, may not converge to a solution. These methods also depend partly on the precision of the predicted shape of the EAS front curvature or its thickness, and so are model-dependent.\\ Some of the algorithms used in the literature are model-independent and do not make use of a minimization procedure, but are restricted to a specific array or a specific category of arrays \citep{klages1997kascade}. As an example, \cite{mayer1993fast} developed a fast gradient method which does not need a minimization procedure, but can only be used for large EASs detected by a square-network array.\\ In what follows, we introduce a new arrival-direction reconstruction algorithm which does not need a numerical optimization procedure and is therefore fast and stable in comparison with the above-mentioned common algorithms. The method is general, in the sense that it is not restricted to a special category of surface arrays, and it is also relatively accurate. It is based on a recently introduced method for reconstructing the core location of an EAS, named SIMEFIC II \citep{hedayati2015statistical}.
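As a concrete illustration of the analytically solvable special case mentioned above: with the detectors in the $z=0$ plane, the PFA of equations \eqref{eq:PFA2} and \eqref{eq:chi2Plane} reduces to a weighted linear least-squares problem for $(n_x, n_y, ct_0)$. A minimal sketch (function and variable names are ours), with Linsley's thickness formula \eqref{eq:sigma} available for the $w_i=1/\sigma_i^2$ weighting:

```python
import numpy as np

C = 0.29979  # speed of light [m/ns]

def linsley_sigma(r, n=1):
    """EAS front thickness [ns] at core distance r [m] after Linsley
    (1986), divided by sqrt(n) when n particles are detected."""
    return 1.6 * (r / 30.0 + 1.0) ** 1.65 / np.sqrt(n)

def plane_front_fit(x, y, t, w=None):
    """Analytic PFA: solve n_x*x_i + n_y*y_i + c*t0 = c*t_i by
    weighted linear least squares (detectors assumed at z = 0).
    Returns the unit direction vector of the EAS axis and t0."""
    x, y, t = (np.asarray(a, dtype=float) for a in (x, y, t))
    w = np.ones_like(x) if w is None else np.asarray(w, dtype=float)
    sw = np.sqrt(w)
    A = np.column_stack([x, y, np.ones_like(x)]) * sw[:, None]
    p, *_ = np.linalg.lstsq(A, C * t * sw, rcond=None)
    nx, ny, ct0 = p
    nz = np.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))  # unit-norm constraint
    return np.array([nx, ny, nz]), ct0 / C
```

Passing `w = 1.0 / linsley_sigma(r, n)**2` reproduces the thickness-weighted variant; `w=None` gives the simple equal-weight PFA.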
In this paper, a new technique named SIMAD for reconstructing the arrival direction of an EAS has been developed. This method assumes nothing about the shape of the EAS front or its thickness. The technique is based on finding a local arrival direction, DV, and a special weight (provided by the SIMEFIC method) for each TD of an array. The local arrival direction for a TD is found by fitting a plane to the arrival times of that TD and some other TDs around it. The weighted average direction over all TDs is a vector whose components are the direction cosines of the EAS arrival direction.\\ SIMAD has a high angular resolution, especially in the marginal parts of an array, where other methods often do not have satisfactory precision. It also has at least the same accuracy as sophisticated methods in the central part of an array.\\ It should be noted that SIMAD is currently in its initial version and should be optimized against different parameters and for other types of arrays. A few examples: the weights may not be in their most optimized form; the selection of the TDs used for finding a local arrival direction could be improved; finding a DV could perhaps be performed via some other approach; etc. Although the structure of SIMAD is general and not tied to a special kind of array, it should be tested and optimized for other arrays before being used in EAS data analysis.\\ The last point to be stressed is that the most important advantage of SIMAD in comparison with other methods is its model independence, so it can also be used to improve other techniques of arrival-direction estimation.
By cross-correlating both the Parkes Catalogue and the Second Planck Catalogue of Compact Sources with the arrival directions of the track-type neutrinos detected by the IceCube Neutrino Observatory, we identify the flat-spectrum blazar PKS 0723--008 as a good candidate for the high-energy neutrino event 5 (ID5). Apart from its coordinates matching those of ID5, PKS~0723--008 exhibits further interesting radio properties. Its spectrum is flat up to high \textit{Planck} frequencies, and its radio flux density increased fivefold over the last decade. Based upon these radio properties we propose a scenario of binary black hole evolution leading to the observed high-energy neutrino emission. The main contributing events are the spin-flip of the dominant black hole, the formation of a new jet with significant particle acceleration, and its interaction with the surrounding material, with the corresponding increase in radio flux. Doppler boosting from the underlying jet pointing towards the Earth makes it possible to identify the origin of the neutrinos, while the merger itself manifests in the form of extended flat-spectrum radio emission, a key selection criterion for finding traces of this complex process.
To date there is no compelling understanding of the origin of the extraterrestrial high-energy (HE) neutrinos observed by the IceCube Neutrino Observatory \citep{IC2014,IC2015}. Recently \citet{Kadler2016} reported that a major outburst of the blazar PKS B1424--418 occurred in temporal and positional coincidence with the third PeV-energy neutrino event (IC35) detected by the IceCube Neutrino Observatory. They have shown that the outburst of PKS B1424--418 provided an energy output high enough to explain the observed PeV event, indicative of a direct physical association. In this Letter we present a promising candidate for another HE-neutrino event (ID5), the flat-spectrum blazar PKS~0723--008. In contrast to PKS~B1424--418, which was associated with a shower-type event, this source is a candidate for a track-type event. Shower-type events exhibit a spherical topology; they are created by the electrons emerging from the interaction of electron-neutrinos with the ice, which scatter several times until their energy falls below the Cherenkov threshold. By contrast, track-type neutrino events appear as longer tracks, generated by the muons emerging from the interaction of muon-neutrinos with the ice. For this type of HE neutrino event the average median angular error is $\lesssim 1.\degr2$, about 10 times smaller than for the shower-type event ID35 ($15.\degr9$). We also propose a scenario explaining both the HE neutrino emission and the radio properties of PKS~0723--008, based on the merger of two supermassive black holes (SMBHs). Typical SMBH evolution leads to a spin-flip of the dominant black hole and its relativistic jet \citep{Gergely2009}, followed by the emission of low-frequency gravitational waves. After the spin-flip a new channel must be plowed through the surrounding material \citep{Becker2009}.
At the end of this process, as the jet penetrates the outer regions of the host galaxy, it will accelerate ultrahigh-energy cosmic ray (UHECR) particles, usually protons; therefore substantial UHECR emission is expected after the merger. The nature and origin of UHECR particles were recently reviewed in \citet{Biermann2016}. The UHECR background has been detected by the Pierre Auger Observatory \citep{Auger2009}, KASCADE-Grande \citep{KG2011} and the Telescope Array experiment \citep{TA2014} at EeV energies. Protons are accelerated to very high energies, about $10^{20}$ eV, in radio galaxies and their jets, based on observed active galactic nucleus (AGN) spectra \citep[e.g][]{Aharonian2002}; a simple estimate leads to $10^{21}$ eV as the maximum in the jet frame \citep{Biermann1987}. Then, if the proton energy is above the pion-production threshold, proton-proton hadronic collisions produce pions whose decay creates neutrinos with PeV energies \citep{Kadler2016}. Optical spectroscopic and very long baseline interferometry observations of the inner parts of some AGN are consistent with the presence of spatially unresolved SMBH binaries. Such sources are, e.g., Mrk~501 \citep{Villata1999}, 3C~273 \citep{Romero2000}, BL~Lac \citep{Stirling2003}, 3C~120 \citep{Caproni2004}, S5~1803+784 \citep{Roland2008}, NGC~4151 \citep{Bon2012}, S5~1928+738 \citep{Roos1993,Kun2014}, and PG~1302--102 \citep{Graham2015,Kun2015}. The observation-compatible binary parameters typically imply that the binaries are in the inspiral phase of their evolution. Such systems may be revealed in near-THz radio observations or in follow-up HE neutrino observations, as will be discussed in detail. This Letter is organized as follows. In Section \ref{section2} we describe the candidate selection process based on the Parkes Catalogue and the Second Planck Catalogue of Compact Sources. In Section \ref{section3} we present the most promising candidate source responsible for the HE neutrino.
In Section \ref{section4} we give more details on the scenario leading to the HE neutrino emission, while in Section \ref{section5} we briefly discuss and summarize our results.
\label{section5} We cross-correlated the Parkes Catalogue and the Second Planck Catalogue of Compact Sources in order to find possible candidates for the $15$ track-type neutrino events detected by the IceCube detector. We found four flat-spectrum radio sources close enough to a track-type neutrino event, two of them also detected at high frequencies by \textit{Planck}. Next we estimate the chance coincidence of finding a flat-spectrum radio source (from the Parkes Catalogue) that extends to higher frequencies (from the Planck Catalogue) within the error box of a track event on the sky. For this we employ the results of \citet{Drinkwater1997}, who, using the Parkes Catalogue, identified $323$ flat-spectrum sources ($\alpha_\mathrm{2.7GHz,5GHz}>-0.5$) in an area of $3.9$ sr of the sky. Assuming a homogeneous distribution, this translates to $\sim1040$ flat-spectrum radio sources over the full sky ($=41252$ deg$^2$). Taking $1.{\degr}2$ as the average median angular error of the track-type neutrino events, and hence an error area of $4.52$ deg$^2$ per event, the statistics yield $\sim0.11$ such sources per neutrino-event area. A fraction of about 1/10 of the flat-spectrum sources defined near 5 GHz extend to high Planck frequencies \citep{PCCS2015}, which gives $\sim0.01$ flat-spectrum sources per neutrino-event area. The combined probability for two flat-spectrum sources to emerge as candidates for track-type HE neutrino events by chance then has the tiny value of $\sim10^{-4}$. We may conclude that the coincidence is very probably real. As for the other track-type HE neutrino events, it is plausible that they pertain to sources at yet higher redshift, and thus with radio flux densities below the present detection threshold. We presented the flat-spectrum blazar PKS~0723--008 as an excellent candidate for ID5.
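The chain of chance-coincidence estimates above can be reproduced numerically; a short sketch using only the figures quoted in the text:

```python
import math

FULL_SKY_DEG2 = 41252.96            # full sky in square degrees
n_flat = 323 * (4 * math.pi / 3.9)  # 323 sources in 3.9 sr -> full sky
err_area = math.pi * 1.2 ** 2       # error circle of radius 1.2 deg

# Expected flat-spectrum sources inside one neutrino error circle.
per_event = n_flat / FULL_SKY_DEG2 * err_area           # ~0.11

# ~1/10 of flat-spectrum sources extend to high Planck frequencies.
per_event_planck = 0.1 * per_event                      # ~0.01

# Chance probability that two independent track events each contain
# such a source.
p_two = per_event_planck ** 2                           # ~1e-4
```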
We analysed the available MOJAVE data of the sources, which led to the selection of PKS~0723--008 from the two sources appearing at \textit{Planck} frequencies. This is a source with three important characteristics: it has a flat radio spectrum up to high frequencies, its radio flux increased significantly in the last decade, and it lies within the median angular error of a track-type HE neutrino event (ID5). The spectrum brightened and flattened at the \textit{Planck} frequencies after 2006, with a local maximum in the integrated flux density by 2011, suggesting a violent process in the core. The radio maps after 2011 reveal a component ejection from the core (Fig. \ref{figure:vlba}), explaining the local maxima in the total flux density. Such flares at radio wavelengths are attributed to adiabatic shock-in-jet models \citep{Marscher1985,Hughes1985}. Alternatively, the Turbulent Extreme Multi-Zone Model of \citet{Marscher2014} can also increase the flux density of the source, when the magnetically turbulent ambient jet flow crosses oblique or cone-shaped shocks. However, such a scenario seems disfavoured in the present case because of the observed component ejection. We argued that the probability of an accidental identification of two flat-spectrum sources, extending their spectra to near-THz frequencies, with track-type HE neutrino events is extremely small ($\sim 10^{-4}$). We proposed a scenario explaining the track-type HE neutrino event through a binary SMBH which, upon merging, induces the reorientation of the new jet towards the Earth, providing a strong boosting of all emissions. The scenario predicts low-frequency gravitational waves, UHECRs, HE neutrinos, and a luminous radio afterglow with a flat spectrum extending to near-THz frequencies, all generated by the merger of two SMBHs acting as the engine.
Based on more than seven years of \emph{Fermi} Large Area Telescope (LAT) Pass 8 data, we report on a detailed analysis of the bright gamma-ray pulsar (PSR) J0007$+$7303. We confirm that \psrj\ is significantly detected as a point source also during the off-peak phases, with a TS value of 262 ($\sim$ 16 $\sigma$). In the description of the \psrj\ off-peak spectrum, a power law with an exponential cutoff at 2.7$\pm$1.2$\pm$1.3 GeV (the first/second uncertainties correspond to statistical/systematic errors) is preferred over a single power law at a level of 3.5 $\sigma$. The possible existence of a cutoff hints at a magnetospheric origin of the emission. In addition, no extended gamma-ray emission compatible with either the supernova remnant CTA~1 or the very-high-energy ($>$ 100 GeV) pulsar wind nebula is detected. A flux upper limit of 6.5$\times$10$^{-12}$ erg cm$^{-2}$ s$^{-1}$ in the 10--300 GeV energy range is reported for an extended source assuming the morphology of the VERITAS detection. During the on-peak phases, a sub-exponential cutoff is significantly preferred ($\sim$ 11 $\sigma$) for representing the spectral energy distribution, both in the phase-averaged and in the phase-resolved spectra. Three glitches are detected during the observation period, and we find no flux variability at the times of the glitches or in the long-term behavior. We also report the discovery of a previously unknown gamma-ray source in the vicinity of PSR J0007+7303, Fermi J0020+7328, which we associate with the $z=1.781$ quasar \qso. A concurrent analysis of this source is needed to correctly characterize the behavior of CTA~1, and it is also presented in this paper.
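The spectral shapes compared in the abstract (a single power law versus a power law with an exponential or sub-exponential cutoff) can be written in one common parameterisation; a small sketch (the function name and defaults are ours, not the LAT tools'):

```python
import numpy as np

def plec(E, N0, gamma, Ec, b=1.0, E0=1.0):
    """Power law with (sub-)exponential cutoff, dN/dE in arbitrary
    units, with E, Ec and E0 in GeV.  b = 1 gives a simple exponential
    cutoff, b < 1 a sub-exponential one; Ec -> infinity recovers the
    pure power law."""
    return N0 * (E / E0) ** (-gamma) * np.exp(-(E / Ec) ** b)
```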
\label{intro} \psrj\/ is a $\sim$ 316 ms gamma-ray pulsar discovered by the \emph{Fermi} Large Area Telescope (LAT) in a blind search (Abdo et al. 2008). Using the timing ephemeris from the LAT, X-ray pulsations from \psrj\/ were detected by \emph{XMM-Newton} (Lin et al. 2010; Caraveo et al. 2010). Deep searches for optical and radio counterparts of \psrj\/ revealed none (Halpern et al. 2004; Mignani et al. 2013), leading to the characterization of \psrj\/ as a radio-quiet gamma-ray pulsar similar to Geminga (Bertsch et al. 1992) and PSR J1836+5925 (Halpern, Camilo \& Gotthelf 2007; Abdo et al. 2010; Lin et al. 2014). \psrj\ is one of the brightest pulsars in the Second \emph{Fermi} Large Area Telescope Catalog of Gamma-Ray Pulsars (Abdo et al. 2013, 2PC hereafter), providing enough statistics to investigate spectral and timing features and flux variability in detail. \psrj\/ is associated with the composite supernova remnant (SNR) CTA~1 (G119.5+10.2), discovered by Harris \& Roberts (1960). CTA~1 possesses a large radio shell that is incomplete towards the north-west (Pineault et al. 1993). \emph{ASCA} and \emph{ROSAT} observations revealed a centrally filled SNR with emission extending to the radio shell (Seward et al. 1995). \emph{Chandra} observations resulted in the detection of a pulsar wind nebula (PWN) and a jet-like structure (Halpern et al. 2004). The age of CTA~1 is estimated to be around 10 kyr (Pineault et al. 1993; Slane et al. 1997, 2004) and the distance is estimated to be 1.4 $\pm$ 0.3 kpc based on the associated H\,{\sc i} shell (Pineault 1997). The CTA~1 complex was established as an extended gamma-ray source above 500 GeV by VERITAS (Aliu et al. 2013). The extended morphology detected by VERITAS was approximated by a two-dimensional Gaussian with semi-major (semi-minor) axis of 0$\fdg$30$\pm$0$\fdg$03 (0$\fdg$24$\pm$0$\fdg$03). The TeV photons were proposed to originate in the PWN associated with \psrj\ (Aliu et al. 2013).
With two years of \emph{Fermi}-LAT observations, the off-peak emission of \psrj\ appeared to be extended and the morphology was fitted with a disk of radius 0$\fdg$7$\pm$0$\fdg$3 at 95\% confidence level. Given the extension and spectral shape derived with the two-year statistics, the emission was proposed to be associated with CTA~1 (Abdo et al. 2012). In this paper, we report further analysis of \psrj\ and its related SNR CTA~1 using more than seven years of \emph{Fermi}-LAT data and the newest response functions.
\label{discussion} Using more than seven years of \emph{Fermi}-LAT data and a contemporaneous ephemeris, we carried out a detailed analysis of \psrj\ during its off-peak and on-peak phase intervals. During the off-peak phase, \psrj\ is significantly detected with a TS value of 262. An exponential cutoff at 2.7$\pm$1.2$\pm$1.3 GeV is tentatively detected in its spectrum, with a significance of 3.5 $\sigma$. We explored the possible extension of \psrj\ during the off-peak phase, but a point-like source is favored (TS$_{ext}$=1.3). The point-like nature of the emission, together with the potential cutoff at GeV energies, argues for a magnetospheric origin of the off-peak gamma-ray emission of \psrj\/. Neither a point-like source nor extended gamma-ray emission was detected from \psrj\/ between 10 and 300 GeV during the off-peak phase. By removing the point-source model of \psrj\/ and assuming the same position and the 0.3-degree extension detected by VERITAS (Aliu et al. 2013), we calculated an upper limit for the possible emission coming from the PWN or the SNR CTA~1 of 6.5$\times$10$^{-12}$ erg cm$^{-2}$ s$^{-1}$ (10--300~GeV) at 99\% confidence level with Helene's method (Helene 1983), assuming a photon index of $2.0$ and considering the systematics. At the highest energies, the TeV emission detected by VERITAS is most likely coming from the PWN. The molecular mass in the vicinity of the complex is not enough to explain the TeV emission even under favorable assumptions for the cosmic-ray acceleration properties of the SNR (see the discussion by Martin et al. 2016). The new upper limit we impose on the GeV emission from the PWN is not in conflict with detailed multi-frequency models (Aliu et al. 2013; Torres et al. 2014). This PWN remains, however, a difficult case: it is unique in requiring a relatively high magnetization compared with other detected PWNe.
The latter and the estimated SNR age may indicate that the nebula (or at least part of it) is already contracting. However, even considering that the PWN could already have passed reverberation, the needed magnetization is still high (Martin, Torres \& Pedaletti 2016). Off-peak emission of 26 young pulsars and 8 millisecond pulsars has been significantly detected (2PC). Their off-peak luminosities range from $\sim$10$^{32}$ to $\sim$10$^{35}$ erg s$^{-1}$, and \psrj\ is near the geometric average (L$_{off\;peak}$=3.5$\times$10$^{33}$ erg s$^{-1}$). Considering a distance of 1.4 kpc and a spin-down power $\dot{E}$=$-I \Omega\dot{\Omega}$ (where $I\sim$ 10$^{45}$ g cm$^{2}$ is the pulsar's moment of inertia, and $\Omega$ and $\dot{\Omega}$ are the pulsar spin frequency and its first derivative) of 4.5$\times$10$^{35}$ erg s$^{-1}$, the off-peak emission efficiency (L$_{off\;peak}$/$\dot{E}$) of \psrj\/ is $\sim$ 0.8\%, which is among the lowest of pulsars with magnetospheric off-peak emission (2PC, Figure 14). The on-peak emission efficiency (L$_{on\;peak}$/$\dot{E}$) of \psrj\/ is $\sim$36.5\%. For the on-peak phase, \psrj\ could be modeled by a power law with a sub-exponential cutoff, which is favored over an exponential cutoff with a significance above 11 $\sigma$ for the phase-averaged spectrum (Table \ref{psrj_fit}) and of 3 $\sigma$ for the phase-resolved spectra (Figure \ref{resolved}). This makes \psrj\ the fourth pulsar with an established sub-exponential cutoff spectrum in at least some phase range, besides Geminga, Vela, and the Crab (see e.g., 2PC; Bochenek \& McCann 2015). \psrj\/ shows a two-peak pulse profile. The ratio of P1 to P2 evolves significantly with energy (Figure \ref{gaussian_fit}). At low energies, the strengths of P1 and P2 are comparable, yet P2 is more prominent at higher energies (Figure \ref{profile}), similar to the tendency observed in the Crab, Vela, and Geminga pulsars (Kanbach 1999; Aleksi\'c et al. 2014). This is consistent with the lower cutoff energy obtained for P1 than for P2 (Figure \ref{resolved}, left panel). Several hypotheses have been proposed to explain the deviation of the spectral cutoff from a pure exponential one. In the outer-gap model of pulsar radiation, the high-energy emission originates at high altitudes above the neutron star (see e.g., Cheng et al. 1986a, 1986b; Romani 1996). A spectral shape represented by a power law with an exponential cutoff is expected (see e.g., Prosekin et al. 2013; Vigan\`{o} et al. 2015a,b). In these kinds of models, the accelerating electric field depends on the height in the gap (Hirotani 2006, 2015). Particles at distinct heights will be accelerated to different energies, leading to a range of cutoff energies. The appearance of a sub-exponential cutoff could be taken as evidence that the emission at different pulsar phases is produced by different particle acceleration zones (or via different processes) with different radiation-reaction energies. As a result of the wide emission beams in the outer gap, emission at a particular phase is a combination of different beams with different cutoff energies. Therefore a blend of different cutoff energies could plausibly lead to the sub-exponential spectra. Leung et al. (2014) also proposed that the accelerating voltage in a given gap is unstable. Emission from even a single emitting zone is then a superposition of various acceleration states, which will lead to a sub-exponential cutoff in the pulsar spectrum. Such sub-exponential cutoffs can also be due to the contribution of a second component, arising from inverse Compton emission of electrons upscattering soft photon fields (Hirotani 2015; Lyutikov 2013). However, we note that the physical interpretation of $b<1$ should be considered provisional, since it may simply depend on our sensitivity.
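The blending argument can be illustrated numerically: summing pure exponential cutoffs ($b=1$) with a spread of cutoff energies and then fitting a single power law with a generalized cutoff yields a best-fit $b$ below unity. The sketch below is our own illustration with arbitrary numbers, not the paper's likelihood fit; it fits in log space with a grid search:

```python
import numpy as np

# Blend of b = 1 components: dN/dE = E^-Gamma * exp(-E/Ec), Ec spread over a factor 8
E = np.logspace(-1, 1.5, 200)                      # 0.1 to ~30 (arbitrary energy units)
blend = sum(E ** -1.5 * np.exp(-E / Ec) for Ec in (1.0, 2.0, 4.0, 8.0))
y = np.log(blend)

# Fit log(dN/dE) = logN0 - Gamma*log(E) - (E/Ec)^b; for fixed (b, Ec) the
# model is linear in (logN0, Gamma), so those two are solved by least squares.
A = np.column_stack([np.ones_like(E), -np.log(E)])
best = (np.inf, None, None)
for b in np.linspace(0.3, 1.2, 91):
    for Ec in np.linspace(0.5, 12.0, 116):
        target = y + (E / Ec) ** b
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        rss = np.sum((A @ coef - target) ** 2)
        if rss < best[0]:
            best = (rss, b, Ec)
_, b_fit, Ec_fit = best
print(f"best-fit b = {b_fit:.2f}")   # below 1: the blend mimics a sub-exponential cutoff
```

Here the grid search stands in for a full spectral likelihood fit; the point is only that a superposition of $b=1$ components is best fit by $b<1$.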
With increased statistics, we have seen that values of $b<1$ are needed to fit first the phase-averaged spectrum and then the phase-resolved ones. It may well be that even in the smaller phase bins considered we are summing up contributions with different acceleration features, thus producing sub-exponential cutoffs as a result of this sum. By reducing the phase bins even further, we would come to a situation in which cutoff power laws with $b=1$ and with $b<1$ would not produce significantly different fits. To what extent the existence of $b<1$ is physical, rather than a problem of sensitivity (too-large phase bins for the level of statistics attained), is still a subject of controversy. For the phase-averaged analysis, we found no flux variability in the long-term light curve. The integrated flux level and spectral parameters are consistent during all epochs preceding and following the glitches. We have identified Fermi J0020+7328, a previously unknown, flaring gamma-ray source appearing (due to the relative strength of the two sources) only during the off-peak phases of \psrj\/. The most probable counterpart for this source is \qso\/.
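The efficiency figures quoted in the discussion follow directly from the numbers given there; a quick consistency check (our own, using the generic rotational energy-loss relation):

```python
import math

# Quantities quoted in the text for PSR J0007+7303
P = 0.316            # spin period [s]
I = 1e45             # moment of inertia [g cm^2]
EDOT = 4.5e35        # spin-down power [erg/s]
L_OFF = 3.5e33       # off-peak gamma-ray luminosity [erg/s]

print(f"off-peak efficiency ~ {100 * L_OFF / EDOT:.1f}%")   # ~0.8%
print(f"on-peak luminosity ~ {0.365 * EDOT:.1e} erg/s")     # 36.5% of Edot

# consistency check: Edot = 4 pi^2 I Pdot / P^3  =>  implied Pdot
Pdot = EDOT * P ** 3 / (4 * math.pi ** 2 * I)
print(f"implied Pdot ~ {Pdot:.1e} s/s")
```

The implied period derivative is a consequence of the quoted $P$ and $\dot{E}$ only.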
(arXiv:1607.08868, 2016 July)

(arXiv:1607.06791)
{Coulomb breakup is used to infer radiative-capture cross sections at astrophysical energies. We test theoretically the accuracy of this indirect technique in the particular case of \ex{15}C, for which both the Coulomb breakup to \ex{14}C+n and the radiative capture \ex{14}C(n,$\gamma$)\ex{15}C have been measured. We analyse the dependence of Coulomb-breakup calculations on the projectile description in both its initial bound state and its continuum. Our calculations depend not only on the Asymptotic Normalisation Coefficient (ANC) of the \ex{15}C ground state, but also on the \ex{14}C-n continuum. This calls into question the method proposed by Summers and Nunes [Phys. Rev. C \textbf{78}, 011601 (2008); ibid.\ \textbf{78}, 069908 (2008)], which assumes that an ANC can be directly extracted from the comparison of calculations to breakup data. Fortunately, the sensitivity to the continuum description can be absorbed in a normalisation constant obtained by a simple $\chi^2$ fit of our calculations to the measurements. By restricting this fit to low \ex{14}C-n energies in the continuum, we can achieve a better agreement between the radiative-capture cross sections inferred from the Coulomb-breakup method and the exact ones. This result revives the Coulomb-breakup technique to infer neutron radiative captures to loosely bound states, which would be very useful for $r$- and $s$-process modelling in explosive stellar environments.} \FullConference{54th International Winter Meeting on Nuclear Physics\\ 25-29 January 2016 \\ Bormio, Italy} \begin{document}
Radiative captures are reactions during which two nuclei $b$ and $c$ fuse to form a nucleus $a$ by emitting a $\gamma$: \beq b+c\rightarrow a+\gamma. \eeqn{e1} These reactions, also denoted $c(b,\gamma)a$, take place in many astrophysical sites. For example, as discussed by Mossa during this conference \cite{Mos16}, $\rm d(p,\gamma)^3He$ is one of the reactions of the pp chain taking place in the Sun, and it also occurred during Big-Bang nucleosynthesis. The $s$ and $r$ processes that take place during supernova explosions consist of sequences of neutron radiative captures $(n,\gamma)$ \cite{BBFH,Ata16}. To constrain astrophysical models, the corresponding cross sections need to be measured at the relevant energies. These energies being rather low (of the order of a few tens of keV in stars), the direct measurement of radiative-capture cross sections can be quite difficult, either because of the Coulomb barrier between the colliding nuclei or because they involve neutrons. The former hinders the capture by repelling the colliding nuclei from each other, and the latter are difficult to handle experimentally. Hence the interest in indirect techniques to infer these cross sections \cite{BBR86,BR96,Mer16}. Coulomb breakup is one of them \cite{BBR86,BR96}. In that reaction, the projectile dissociates into lighter fragments through its interaction with a heavy (high $Z$) target $T$: \beq a+T\rightarrow b+c+T. \eeqn{e2} Assuming the dissociation to be due solely to the Coulomb interaction, the reaction can be described as an exchange of virtual photons between the projectile and the target. It can thus be seen as the time-reversed reaction of the radiative capture of the fragments, which should enable us to easily deduce the radiative-capture cross section from breakup measurements \cite{BBR86,BR96}.
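For reference, the detailed-balance relation behind this statement can be written as follows (our addition, in the standard formulation of \cite{BBR86,BR96}; $J_i$ are the spins, $k_\gamma$ and $k$ the photon and $b$-$c$ relative wave numbers, and the factor 2 counts the photon polarizations):

```latex
\beq
\sigma_{b+c\rightarrow a+\gamma} =
  \frac{2\,(2J_a+1)}{(2J_b+1)(2J_c+1)}\,
  \frac{k_\gamma^2}{k^2}\,
  \sigma_{a+\gamma\rightarrow b+c}.
\eeqn{e2bis}
```

Since $k_\gamma \ll k$ at the relevant energies, the capture cross section is much smaller than the photodissociation one, which is what makes breakup measurements attractive proxies.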
Using accurate reaction models, various studies have shown that higher-order effects and other reaction artefacts play a significant role in Coulomb breakup, which hinders the simple extraction of radiative-capture cross sections from breakup measurements \cite{EBS05,CB05}. The case of \ex{15}C is of particular interest to analyse this indirect method, as both its Coulomb breakup~\cite{Nak09} and the radiative capture $\rm ^{14}C(n,\gamma)^{15}C$~\cite{Rei08} have been measured accurately. Summers and Nunes have proposed an innovative analysis of the Coulomb-breakup measurement \cite{SN08}. They confirmed the significant influence of dynamical effects observed in previous analyses \cite{EBS05,CB05} and, accordingly, the need for an accurate reaction model to study such reactions properly. Since breakup reactions are mostly peripheral, in the sense that they probe mostly the tail of the projectile wave function \cite{CN07}, they suggested using the comparison between their calculations and the measurements to deduce the Asymptotic Normalisation Coefficient (ANC) of the \ex{15}C bound state from experimental data \cite{SN08}. They then suggested relying on this inferred ANC to compute the cross section for the $\rm ^{14}C(n,\gamma)^{15}C$ radiative capture. Their idea leads to inferred radiative-capture cross sections in good agreement with the direct measurements \cite{SN08,Esb09}. In the present work we analyse the influence of the description of the \ex{14}C-n continuum upon breakup calculations. In \Sec{C15}, we present the model of \ex{15}C used in this study, emphasising the \ex{14}C-n interaction in the continuum. We then discuss our results for the Coulomb breakup of \ex{15}C on Pb at $68A$~MeV (\Sec{Cbu}) and present our analysis of the extraction of the cross section for the radiative capture $\rm ^{14}C(n,\gamma)^{15}C$ following the prescription of Summers and Nunes in \Sec{ng}.
We also show how this method can be improved by selecting the data at low energies in the \ex{14}C-n continuum. We end with the conclusions and perspectives of this work.
Coulomb breakup has been proposed as an indirect technique to infer radiative-capture cross sections at astrophysical energies \cite{BBR86,BR96}. This idea is based on the hypothesis that Coulomb breakup, corresponding to an exchange of virtual photons between the projectile and the target, can be seen as the time-reversed reaction of the radiative capture. Unfortunately, the direct extraction of the latter cross section from the former can only be done if the Coulomb breakup takes place at first order, which we know is not the case \cite{EBS05,CB05}. To circumvent this issue, Summers and Nunes have proposed a method based on the fact that breakup reactions are mostly peripheral \cite{CN07}. In this method, an ANC for the projectile bound state is extracted from the fit of accurate breakup calculations to breakup data \cite{SN08,Esb09}. In this work, we have studied the sensitivity of this method to the description of the projectile continuum, which was ignored in Summers and Nunes' analysis. From accurate calculations of the breakup of \ex{15}C on Pb at $68A$~MeV using different descriptions of the projectile, we have shown that the sensitivity to the \ex{14}C-n continuum cannot be overlooked and that the scaling factor extracted from the fit suggested by Summers and Nunes contains information not only about the ANC of the projectile bound state, but also about its continuum. Nevertheless, the method works well. We understand this from the fact that the structure information absorbed in this fitting procedure is important for both processes. We have observed that this fit is not meaningful for extreme descriptions of the \ex{14}C-n continuum, as they lead to significant distortions in the breakup cross sections compared to regular potentials. To account for this, we suggest restricting the fit suggested by Summers and Nunes to low \ex{14}C-n energies, i.e. those that are of significance for astrophysics.
The radiative-capture cross sections obtained in this way are in excellent agreement with the direct data. This variant of Summers and Nunes' idea hence enables one to reliably extract radiative-capture cross sections from Coulomb-breakup data without having to worry about the description of the two-body projectile in either its bound state or its continuum. This, in a sense, revives the original idea of Baur, Bertulani and Rebel \cite{BBR86}. It would be interesting to see if this variant can be improved by selecting breakup data at forward angles, where the process is fully dominated by the Coulomb interaction. Another interesting perspective is to see whether this method can be applied to charged cases, like \ex{3}He($\alpha$,$\gamma$)\ex{7}Be or \ex{7}Be(p,$\gamma$)\ex{8}B.
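For a one-parameter scaling, the normalisation fit mentioned above reduces to a weighted linear fit with a closed-form solution. A schematic sketch with mock numbers (not data from the experiments cited):

```python
import numpy as np

# Hypothetical illustration of the fitting procedure: breakup "data"
# (y, sig) vs. 14C-n relative energy E_rel, and a theoretical
# calculation t on the same grid (all numbers are invented).
E_rel = np.array([0.2, 0.4, 0.6, 0.9, 1.5, 2.5, 4.0])    # [MeV]
y     = np.array([310., 420., 400., 330., 210., 110., 45.])
sig   = np.array([ 30.,  35.,  33.,  30.,  22.,  15., 10.])
t     = np.array([290., 400., 385., 320., 215., 120., 55.])

def scale_factor(mask):
    # chi^2-minimizing normalization c for y ~ c * t on the selected points
    w = 1.0 / sig[mask] ** 2
    return np.sum(w * y[mask] * t[mask]) / np.sum(w * t[mask] ** 2)

c_all = scale_factor(np.ones_like(E_rel, dtype=bool))
c_low = scale_factor(E_rel < 1.0)   # restrict to low continuum energies
print(f"c (all energies) = {c_all:.3f}, c (E_rel < 1 MeV) = {c_low:.3f}")
```

Restricting the mask to low $E_{\rm rel}$ implements the variant advocated here: only the energy range that matters for astrophysics constrains the scaling constant.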
(arXiv:1607.06791, 2016 July)

(arXiv:1607.04277)
The morphology of young Pulsar Wind Nebulae (PWN) is largely determined by the properties of the wind injected by the pulsar. We have used a recent parametrization of the wind obtained from Force-Free Electrodynamics simulations of pulsar magnetospheres to simulate nebulae for different sets of pulsar parameters. We performed axisymmetric Relativistic Magnetohydrodynamics simulations to test the dependence of the nebular morphology on the obliquity of the pulsar and on the magnetization of the pulsar wind. We compare these simulations to the morphology of the Vela and Crab PWN. We find that the morphology of Vela can be reproduced qualitatively if the pulsar obliquity angle is $\alpha \approx 45^\circ$ and the magnetization of the wind is high ($\sigma_0 \approx 3.0$). A morphology similar to that of the Crab Nebula is only obtained for low-magnetization simulations with $\alpha \gtrsim 45^\circ$. Interestingly, we find that Kelvin-Helmholtz instabilities produce small-scale turbulence downstream of the reverse shock of the pulsar wind.
Most of the rotational energy lost by a pulsar is transferred to a relativistic particle wind. These particles are by number predominantly electrons and positrons (referred to together as electrons in the following). The wind is thought to be cold, meaning that its thermal energy is much less than its bulk kinetic and magnetic energies. When the wind interacts with ambient material, the particles become isotropised and radiate \citep{Arons2012}. This is what is then seen as a Pulsar Wind Nebula (PWN) in the sky. To date, around 100 of these systems have been found \citep{Kargaltsev2015}. In the X-rays, several of them show a torus morphology, with a jet emerging perpendicular to it \citep{Kargaltsev2008}. Young PWN, with ages below $\approx$10000 yr, have not yet been distorted by the reverse shock of the stellar explosion \citep{Gaensler2006,Kargaltsev2015}. Their morphology is still closely related to the properties of the pulsar wind. In two cases in particular, the Crab and Vela PWN, the plasma outflow can be resolved observationally in great detail, down to spatial scales of $\Delta r \lesssim 0.03$~ly. These systems therefore provide a test bed to study the behaviour of relativistic plasma, which is also of relevance for other non-thermal sources such as Active Galactic Nuclei \citep{Netzer2014,Massaro2015} or Gamma-Ray Bursts \citep{Gehrels2012,Berger2014}. The properties of the pulsar wind -- such as its magnetic field and its particle and velocity distributions -- are not known with certainty today. However, over the past years there has been great progress in this respect. Several groups have performed Force-Free Electrodynamics (FFE) \citep{Spitkovsky2006,Kalapotharakos2012,Tchekhovskoy2013} and Particle-in-Cell (PIC) simulations of pulsar magnetospheres \citep{Philippov2014,Cerutti2015}. These simulations allow one to trace the properties of the wind out to several light-cylinder radii $r_{lc}$.
Due to the relativistic speed of the wind, it is expected that the wind does not have time to re-arrange itself on larger scales afterwards. In particular, the latitude dependence of its energy flux is expected to remain unchanged as the wind moves out into the nebula \citep{Tchekhovskoy2015}. Recently, the first analytic parametrization of the latitude-dependent luminosity of the pulsar wind has been derived from FFE simulations \citep{Tchekhovskoy2015}. The main parameter determining the wind properties is the obliquity angle $\alpha$ between the pulsar spin axis and its magnetic moment. The latter cannot be measured directly, and constraints from pulsar light-curve modelling usually differ vastly between pulsar emission models \citep{Pierbattista2015}. The other important unknown parameter is the magnetization $\sigma$ of the wind. It is thought that most of the wind energy is in its magnetic fields ($\sigma \gg 1$) at $r_{lc}$. When and where this energy is transferred to kinetic particle energy is not known. As FFE simulations of pulsar magnetospheres do not include non-thermal particle acceleration, this question cannot be addressed by them. In this paper we study the dependence of the nebula morphology on $\alpha$ and $\sigma$. Both of these parameters strongly affect the forces acting on the wind plasma. They are therefore expected to shape the morphology of the resulting nebula. We performed Relativistic Magnetohydrodynamic (RMHD) simulations to scan the parameter space of different obliquity angles and a high and low magnetization of the wind. RMHD simulations of PWN performed in the past have primarily focused on the Crab Nebula \citep{Hester2008,Buhler2014}. Qualitatively, the toroidal structure and the jet are well reproduced in 2D axisymmetric simulations \citep{Komissarov2004,DelZanna2004,DelZanna2006,Volpi2009,Bucciantini2011}. Dynamically, the motion of thin filaments, so-called ``wisps'', is also reproduced \citep{Camus2009}.
Recently, the first 3D simulations of the Crab Nebula showed that axisymmetric simulations overpredict the strength of the jet \citep{Porth2013,Porth2013b}. In addition, compared to 2D simulations, significant turbulence emerges far downstream of the wind termination shock in 3D. This potentially enhances the magnetic dissipation, allowing for larger magnetizations of the wind to reproduce the Crab's morphology. Unfortunately, performing several 3D simulations to scan the phase space of pulsar wind parameters is still computationally too expensive. We therefore performed 2D axisymmetric simulations. In contrast to most previous studies, we simulate both hemispheres. As was shown by \citet{Porth2013b}, this enhances the magnetic dissipation in the equatorial regions also in the axisymmetric case. Nevertheless, we will keep the axisymmetric limitation of our simulations in mind and will come back to it in the discussion of the simulation results in section \ref{sec:res}. We set the length scales and spin-down power of the pulsar to values appropriate for the Vela PWN \citep{Pavlov2003,Durant2013}. To our knowledge this system has not been simulated in RMHD to date. We expect that, apart from scaling factors, the PWN morphology does not depend strongly on this choice. Qualitatively, we expect the simulated morphologies to be similar also in other young PWN. We will use cgs units throughout, except for length scales, which for convenience will be given in light years.
\label{sec:summary} We have performed axisymmetric RMHD simulations of PWN to scan the parameter space of different pulsar wind properties. We have simulated the wind emerging for different pulsar obliquities, as recently derived from FFE simulations \citep{Tchekhovskoy2015}. In addition, we have tested different wind magnetizations. In general, we find that the average wind magnetization is the most important parameter in determining the PWN morphology. The main effect of an increasing obliquity angle is to increase the size of the striped wind region, where we have assumed a perfect dissipation of opposite magnetic field lines. We found that the wind region upstream of the reverse shock is smaller in size and becomes more oblate with increasing $\bar{\sigma}$. With the exception of the wind morphology for $\alpha=10^{\circ}$ and $\sigma_0=3.0$, all simulations showed a torus in their emission maps, which emerges from the equatorial region downstream of the reverse shock. We have compared the morphologies of the different simulations to the Vela and Crab PWN. For Vela, we found that the simulation with the parameters $\alpha=45^{\circ}$ and $\sigma_0=3.0$ gives the best match to the observed morphology. For the Crab Nebula, all simulations with a low $\bar{\sigma}$ match the observed morphology ($\alpha=10^{\circ}$ and $\sigma_0=0.03$; $\alpha=45^{\circ}$ and $\sigma_0=0.03$; $\alpha=80^{\circ}$ for $\sigma_0=0.03$ and $\sigma_0=3.0$). We found that, particularly for low magnetizations, KH instabilities develop downstream at the shear flow of the reverse shock. The KH loops have the effect of increasing the emission of the receding side of the nebula compared with what is expected from Doppler boosting of a radial flow. We suggest that this effect might help to explain why the innermost ring observed in X-rays in the Crab Nebula has almost constant brightness.
We have pointed out the caveat that these conclusions rely on axisymmetric simulations. It would be desirable to confirm these findings with 3D simulations in the future. For a more quantitative comparison of observations and simulations, it will also be important to include a model for particle acceleration in the simulations. From the observational side, the detection of the rings observed in the Vela PWN outside of the X-ray band would be crucial to constrain the electron energy distribution. Taken together, these steps offer the prospect of understanding the plasma flow in PWN quantitatively. This would be the first time this is achieved for relativistic plasma flows and would likely have implications for other sources such as GRBs or AGN as well.
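As an illustration of why the size of the striped-wind region drives the effective magnetization, a toy latitude average can be computed. The prescription below is our own simplification, not the parametrization of \citet{Tchekhovskoy2015}: the energy flux scales as $\sin^2\theta$, and within the striped region ($|90^\circ-\theta|<\alpha$) opposite field lines are assumed to dissipate completely ($\sigma\to 0$), with $\sigma=\sigma_0$ elsewhere:

```python
import numpy as np

def mean_sigma(alpha_deg, sigma0):
    # flux-weighted average magnetization over latitude (toy model)
    theta = np.linspace(0.0, np.pi, 100001)
    flux = np.sin(theta) ** 2 * np.sin(theta)   # sin^2 energy flux * solid-angle weight
    striped = np.abs(theta - np.pi / 2) < np.radians(alpha_deg)
    sigma = np.where(striped, 0.0, sigma0)      # complete dissipation in the stripes
    return (sigma * flux).sum() / flux.sum()

for alpha in (10, 45, 80):
    print(f"alpha = {alpha:2d} deg: <sigma> ~ {mean_sigma(alpha, 3.0):.3f}")
```

Even with $\sigma_0=3.0$, the flux-weighted average drops steeply with obliquity, which is the trend invoked above when comparing high- and low-$\bar{\sigma}$ runs.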
(arXiv:1607.04277, 2016 July)

(arXiv:1607.04107)
The 6.67\,hr periodicity and the variable X-ray flux of the central compact object (CCO) at the center of the SNR RCW\,103, named \src, have always been difficult to interpret within the standard scenarios of an isolated neutron star or a binary system. On 2016 June 22, the Burst Alert Telescope (BAT) onboard \swift\ detected a magnetar-like short X-ray burst from the direction of \src, coincident with a large long-term X-ray outburst. Here we report on \cxo, \nustar, and \swift\, (BAT and XRT) observations of this peculiar source during its 2016 outburst peak. In particular, we study the properties of this magnetar-like burst, we discover a hard X-ray tail in the CCO spectrum during the outburst, and we study its long-term outburst history (from 1999 to 2016 July). We find the emission properties of \src\, consistent with it being a magnetar. However, in this scenario the 6.67\,hr periodicity can only be interpreted as the rotation period of this strongly magnetized neutron star, which would therefore represent the slowest pulsar ever detected, by orders of magnitude. We briefly discuss the viable slow-down scenarios, favoring a picture involving a period of fall-back accretion after the supernova explosion, similar to what is invoked (although in a different regime) to explain the ``anti-magnetar'' scenario for other CCOs.
The central compact object (CCO) \src, lying within the supernova remnant (SNR) RCW\,103, has been a mysterious source for several decades (Tuohy \& Garmire 1980; Gotthelf, Petre \& Hwang 1997). Despite presumably being an isolated neutron star (NS), it shows long-term X-ray outbursts lasting several years, during which its luminosity increases by a few orders of magnitude. This source also has a very peculiar $\sim6.67$\,hr periodicity with an extremely variable profile at different luminosity levels (De Luca et al. 2006). Several interpretations of the nature of this system have been proposed, from an isolated slowly spinning magnetar with a substantial fossil disk, to a young low-mass X-ray binary system, or even a binary magnetar, but none of them is straightforward, nor can they explain the overall observational properties (Garmire et al. 2000; De Luca et al. 2006, 2008; Li 2007; Pizzolato et al. 2008; Bhadkamkar \& Ghosh 2009; Esposito et al. 2011; Liu et al. 2015; Popov, Kaurov \& Kaminker 2015). A millisecond burst from a region overlapping the SNR RCW\,103 triggered the \swift\, Burst Alert Telescope (BAT) on 2016 June 22 at 02:03 UT (D'A\`i et al. 2016). These short X-ray bursts are distinguishing characteristics of the soft gamma repeater (SGR) and anomalous X-ray pulsar (AXP) classes, believed to be isolated NSs powered by the strength and instabilities of their $10^{14-15}$\,G magnetic fields (a.k.a. magnetars; Duncan \& Thompson 1992; Olausen \& Kaspi 2014; Turolla, Zane \& Watts 2015). In this work, we report on the analysis of the magnetar-like burst detected by \swift-BAT, on simultaneous \cxo\ and \nustar\ observations performed soon after the BAT burst trigger, and on the long-term \swift-XRT monitoring (\S\,\ref{obs}). Furthermore, we put our results in the context of all \swift, \cxo, and \xmm\ campaigns on \src\ from 1999 until 2016 July (\S\,\ref{results}).
We then discuss our findings and derive constraints on the nature of this puzzling object (\S\,\ref{discussion}).
\label{discussion} We report on the analysis of a magnetar-like short burst from the CCO \src\, (D'A\`i et al. 2016), and study its coincident X-ray outburst activity. This short ms burst and its spectrum, the X-ray outburst energetics of this source, the spectral decomposition, and the surface cooling (see \S\,\ref{results}) are all consistent with observations of magnetar SGR-like bursts and outbursts (see Rea \& Esposito 2011, and references therein, for an observational review). This is the second X-ray outburst detected from \src, and it shows for the first time a coincident SGR-like burst and a non-thermal component up to $\sim30$\,keV. Two-peak SGR bursts with similar luminosities and spectra have been observed in other magnetars (see e.g. Aptekar et al. 2001; G\"otz et al. 2004; Collazzi et al. 2015). Due to their ms timescales and relatively soft spectra, these events cannot be interpreted as Type\,I X-ray bursts or short GRBs (see Galloway et al. 2008; Sakamoto et al. 2011). On the other hand, hard X-ray emission has been detected from at least half of the magnetar population (Olausen \& Kaspi 2014). Sometimes this emission is steady, while at other times it is transient and connected with the outburst peaks. Magnetar outbursts are expected to be produced by the instability of strong magnetic bundles which stress the crust (from outside or inside: Beloborodov 2009; Li, Levin \& Beloborodov 2016; Perna \& Pons 2011; Pons \& Rea 2012). This process heats the surface in one or more regions, and at variable depth inside the NS crust, which in turn drives the outburst duration. The high electron density in these bundles might also cause resonant cyclotron scattering of the seed thermal photons, creating non-thermal high-energy components in the spectrum. Such components can be transient if the untwisting of these bundles during the outburst decay produces a decrease in the scattering optical depth.
Furthermore, magnetospheric re-arrangements are expected during these episodes, and are believed to be the cause of the short SGR-like bursts (see Turolla, Zane \& Watts 2015 for a review). Repeated outbursts on several-year timescales have also been detected in at least four magnetars (Bernardini et al. 2011; Kuiper et al. 2012; Archibald et al. 2015), and their recurrence time is expected to be related to the source magnetic field strength and configuration, and to the NS age (see Perna \& Pons 2011; Vigan\`o et al. 2013). In this scenario, the only puzzling property of \src, which makes it unique among SGRs, AXPs, CCOs and all other known NSs, is the 6.67\,hr periodicity, which would represent the longest spin period ever detected in a pulsar. On the other hand, the extreme variability of the modulation in time and energy strongly disfavors an orbital origin for this modulation (see the detailed discussions in De Luca et al. 2008, Pizzolato et al. 2008), but remains fully consistent with the usual pulse profile variability observed in actively flaring magnetars (see e.g. Rea et al. 2009, 2013; Rodr{\'{\i}}guez Castillo et~al. 2014). Isolated pulsar spin periods are observed to be limited to $\sim$12\,s, with the slowest pulsars indeed being the magnetars. This period distribution is explained by Hall-Ohmic magnetic field decay during the evolution of these neutron stars (see Pons, Vigan\`o \& Rea 2013). The slowest spin period that magnetic field decay might produce in an isolated pulsar is $\sim30-50$\,s, according to self-consistent 2D simulations (e.g. Vigan\`o et al. 2013), even in the generous case of a field threading the stellar core, zero dissipation from crustal impurities, an initial field in the range 10$^{13-15}$\,Gauss, and typical spin periods at birth in the range 1--300\,ms. Regardless of the model inputs, we can in no case reproduce hours-long spin periods.
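As a back-of-the-envelope illustration of why dipolar braking alone cannot yield hours-long periods (a sketch with fiducial NS parameters and a constant field, not the 2D magneto-thermal simulations cited above), one can integrate the vacuum-dipole spin-down law:

```python
import math

# Vacuum-dipole spin-down: P dP/dt = K * B^2, with K = 8*pi^2*R^6 / (3*c^3*I).
# Integrating from an initial period P0 gives P(t) = sqrt(P0^2 + 2*K*B^2*t).
# Fiducial NS values (assumptions, not from the paper): R = 1e6 cm, I = 1e45 g cm^2.
C_LIGHT = 3.0e10          # speed of light, cm/s
R_NS = 1.0e6              # NS radius, cm
I_NS = 1.0e45             # NS moment of inertia, g cm^2
K = 8 * math.pi**2 * R_NS**6 / (3 * C_LIGHT**3 * I_NS)   # s / G^2

def spin_period(p0_s, b_gauss, age_yr):
    """Spin period after age_yr of constant-field vacuum-dipole braking."""
    t = age_yr * 3.156e7  # years -> seconds
    return math.sqrt(p0_s**2 + 2 * K * b_gauss**2 * t)

# Even a constant 1e15 G field acting for 100 kyr leaves P in the tens of
# seconds (about 78 s here), far short of the ~2.4e4 s of a 6.67 hr period:
p = spin_period(0.01, 1e15, 1e5)
print(f"P(100 kyr, B = 1e15 G) = {p:.0f} s")
```

Since field decay only weakens the braking over time, this constant-field estimate is an upper limit, consistent with the $\sim30-50$\,s ceiling quoted above.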
Given the strong evidence for the magnetar nature of the X-ray emission of this source, we are now left with discussing possible slow-down mechanisms other than the typical pulsar dipolar loss. Since its discovery, many authors have discussed several scenarios (see De Luca et al. 2006; Li 2007; Pizzolato et al. 2008; Bhadkamakar \& Gosh 2009; Lui et al. 2015; Popov, Kaurov, \& Kamiker 2015), which we cannot fully summarize here. We will, however, highlight the possibilities that remain open, along with their possible deficiencies. The first possibility is a long-lived fossil disk (Chatterjee, Hernquist \& Narayan 2000), which forms via the circularization of fall-back material after the supernova explosion (see e.g. De Luca et al. 2006; Li 2007) and might substantially slow down the spin period. However, recent studies on the formation of fossil disks apparently disfavor their existence around NSs under reasonable assumptions on the magnetic torque in the pre-SN phase (Perna et al. 2014). Moreover, the magnetar flaring activity during its lifetime would most probably expel such thin disks very quickly. Another possibility is that \src\, is a magnetar in a low-mass X-ray binary with an M6 companion (or later; De Luca et al. 2008), emitting as though it were isolated, but with its spin period tidally locked to the orbital motion of the system (see e.g. Pizzolato et al. 2008). However, in this case too, fine-tuning is needed to explain how a very low-mass companion remains gravitationally bound to the magnetar after the SN explosion. The most viable interpretation, in line with what has been proposed for other CCO systems (the ``anti-magnetars": see e.g. Halpern \& Gotthelf 2010; Torres-Forn\'e et al. 2016), seems to be that of a magnetar that experienced a strong SN fall-back accretion episode in the past (Chevalier 1999).
In particular, if \src\, was born with a magnetic field and spin period such that, when the fall-back accretion began, the source was in the propeller regime (Illarionov \& Sunyaev 1975; Li 2007; Esposito et al. 2011), then the accreted material would not reach the surface and bury the B-field, as for the ``anti-magnetar" CCOs; instead, during the first years (or longer) of its lifetime, the magnetar would accrete onto the magnetosphere, and hence experience a substantially larger spin-down torque. When the fall-back accretion stops, the magnetar continues to evolve like any other isolated pulsar, but with a substantially slower spin period.
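The propeller condition invoked here can be sketched numerically: the magnetospheric radius must exceed the corotation radius. The field, birth spin period and fall-back accretion rate below are illustrative fiducial values, not quantities measured for \src:

```python
import math

G = 6.674e-8              # gravitational constant, cgs
M_SUN = 1.989e33          # solar mass, g
M_NS = 1.4 * M_SUN        # fiducial NS mass

def magnetospheric_radius(b_gauss, mdot, r_ns=1e6):
    """Alfven-type radius r_m = (mu^4 / (2 G M mdot^2))^(1/7), mu = B R^3."""
    mu = b_gauss * r_ns**3
    return (mu**4 / (2 * G * M_NS * mdot**2))**(1.0 / 7.0)

def corotation_radius(p_spin):
    """Corotation radius r_c = (G M P^2 / 4 pi^2)^(1/3)."""
    return (G * M_NS * p_spin**2 / (4 * math.pi**2))**(1.0 / 3.0)

# Fiducial young magnetar accreting fall-back material (illustrative numbers):
r_m = magnetospheric_radius(1e15, 1e18)   # B = 1e15 G, Mdot ~ 1e18 g/s
r_c = corotation_radius(0.3)              # birth period P0 = 0.3 s
print(f"r_m = {r_m:.1e} cm, r_c = {r_c:.1e} cm, propeller: {r_m > r_c}")
```

For these values $r_m \gg r_c$, so the infalling matter is centrifugally halted at the magnetosphere rather than reaching the surface, as the scenario requires.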
year: 16, month: 7, arxiv_id: 1607.04107
subfolder: 1607, filename: 1607.06102_arXiv.txt
NGC 2420 is a well-populated, $\sim$2 Gyr-old open cluster that lies about 2 kpc beyond the solar circle, in the general direction of the Galactic anti-center. Most previous abundance studies have found this cluster to be mildly metal-poor, but with a large scatter in the metallicities obtained. Detailed chemical abundance distributions are derived for 12 red-giant members of NGC 2420 via a manual abundance analysis of high-resolution (R = 22,500) near-infrared ($\lambda$1.5 - 1.7$\mu$m) spectra obtained from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey. The sample analyzed contains 6 stars identified as members of the first-ascent red giant branch (RGB), as well as 6 members of the red clump (RC). We find small scatter in the star-to-star abundances in NGC 2420, with a mean cluster abundance of [Fe/H] = -0.16 $\pm$ 0.04 for the 12 red giants. The internal abundance dispersion for all elements (C, N, O, Na, Mg, Al, Si, K, Ca, Ti, V, Cr, Mn, Co and Ni) is also very small ($\sim$0.03 - 0.06 dex), indicating a uniform cluster abundance distribution within the uncertainties. NGC 2420 is one of the clusters used to calibrate the APOGEE Stellar Parameter and Chemical Abundance Pipeline (ASPCAP). The results from this manual analysis compare well with the ASPCAP abundances for most of the elements studied, although for Na, Al and V there are more significant offsets. No evidence of extra-mixing at the RGB luminosity bump is found in the $^{12}$C and $^{14}$N abundances of the pre-luminosity-bump RGB stars in comparison to the post-He core-flash RC stars.
The open cluster NGC 2420, with an age of roughly 2 Gyr, is located towards the Galactic anti-center at a Galactocentric distance of 10.78 kpc (Sharma et al. 2006). Given its age, location, and metallicity, this cluster is an interesting object for studies of Galactic chemical evolution. The first detailed photometric study of NGC 2420 was by Sarma $\&$ Walker (1962) and, later, West (1967) noted that its stars exhibited a mild excess in $\delta$(U - B), which suggested the cluster was somewhat metal-poor. The earliest determinations of spectroscopic metallicities for NGC 2420 were made by Pilachowski et al. (1980), Cohen (1980), and Smith \& Suntzeff (1987); these studies found a small range of metallicities clustering around [Fe/H] $\approx$ -0.60. Later studies using photometric data and isochrones (Anthony-Twarog et al. 2006) derived somewhat higher values of [Fe/H] $\approx$ -0.30. More recently, the high-resolution spectroscopic study by Pancino et al. (2010) found NGC 2420 to be considerably more metal-rich, with [Fe/H] $\approx$ -0.05 dex, while Jacobson et al. (2011) analyzed spectra of moderately high resolution (R $\approx$ 18,000) and found an average metallicity for this cluster of [Fe/H] $\approx$ -0.20 dex. The large scatter of the [Fe/H] values in the literature suggests that a new abundance analysis using different spectra would be worthwhile, and here a sample of red-giant members of NGC 2420 is analyzed using near-infrared (NIR) high-resolution spectra from the SDSS-III/APOGEE survey (Apache Point Observatory Galactic Evolution Experiment; Eisenstein et al. 2010; Majewski et al. 2016). The APOGEE-1 survey observed more than 146,000 Galactic red giants in three years of operation, ending in July 2014. A number of red giants in disk open clusters, including NGC 2420, were targeted by APOGEE-1 to serve as calibration clusters for the survey, to study cluster membership, and to measure Galactic metallicity gradients.
Stellar parameters (effective temperatures and surface gravities), chemical abundances of several elements, and metallicities for all the stars observed in the APOGEE survey are derived automatically by means of the pipeline ASPCAP (APOGEE Stellar Parameter and Chemical Abundances Pipeline; Garc\'ia P\'erez et al. 2016). This paper presents chemical abundances for 12 red-giant members of the open cluster NGC 2420, derived from a manual spectroscopic abundance analysis following the same methodology as Cunha et al. (2015) and Smith et al. (2013). We derive stellar parameters and the abundances of 16 elements: C, N, O, Na, Mg, Al, Si, K, Ca, Ti, V, Cr, Mn, Fe, Co and Ni. One of the goals of this study is to provide a direct comparison of the results from a manual abundance analysis with those derived automatically by the ASPCAP pipeline. The APOGEE team is continually improving ASPCAP, and the most recent version has produced the stellar parameters and metallicity results for the 13$^{th}$ SDSS Data Release (hereafter DR13), which will become publicly available in summer 2016. The results presented in this independent work will help to verify ASPCAP.
We analyzed 12 red-giant members of NGC 2420: six from the red clump and six from the red-giant branch. Line-by-line measurements of the iron abundances for all studied stars are presented in Table 3; the individual elemental abundances have typical standard deviations of the mean that are less than 0.07 dex. There is also small scatter in the star-to-star abundances in NGC 2420, with a mean cluster abundance and standard deviation of the mean of $\langle$A(Fe)$\rangle$ = 7.29 $\pm$ 0.04 for the 12 giants. This translates to $\langle$[Fe/H]$\rangle$ = -0.16 $\pm$ 0.04 for NGC 2420, using Asplund et al. (2005) as the solar reference. The mean C and N abundances obtained for the stars in our sample are quite consistent, with small standard deviations of the mean: $\langle$[C/Fe]$\rangle$ = -0.07 $\pm$ 0.04 and $\langle$[N/Fe]$\rangle$ = +0.17 $\pm$ 0.03. These carbon and nitrogen results are overall consistent with the CN-cycle, given that the abundance of carbon is depleted (slightly below the scaled-solar value) while the abundance of nitrogen is enhanced relative to the scaled-solar value (Section 5.2). The alpha element oxygen is also mildly enhanced: $\langle$[O/Fe]$\rangle$ = +0.10 $\pm$ 0.03. We note that this spread is very similar to the values found by Bertran de Lis et al. (2016) for stars with similar temperatures in other clusters with metallicities near solar, such as M67, NGC 6819 and NGC 2158. The mean abundances of the other alpha elements, however, are roughly scaled-solar, with the mean value for Mg, Si, Ca and Ti being $\langle$[$\alpha$/Fe]$\rangle$ = $\langle$[((Mg+Si+Ca+Ti)/4)/Fe]$\rangle$ = +0.01 $\pm$ 0.02 dex. For the iron-peak elements we obtained $\langle$[((Cr+Mn+Co+Ni)/4)/Fe]$\rangle$ = -0.06 $\pm$ 0.02 dex, while the odd-Z elements Na, Al and K show a marginal enhancement of $\langle$[((Na+Al+K)/3)/Fe]$\rangle$ = +0.06 $\pm$ 0.06 dex.
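The bracket-notation arithmetic behind the quoted mean can be made explicit (a minimal sketch; the solar iron abundance A(Fe)$_{\odot}$ = 7.45 is the Asplund et al. 2005 value, assumed here):

```python
# Bracket-notation bookkeeping behind the quoted cluster mean (a sketch; the
# solar value A(Fe)_sun = 7.45 is taken from Asplund et al. 2005).
A_FE_SUN = 7.45

def bracket(a_star, a_sun):
    """[X/H] = A(X)_star - A(X)_sun, where A(X) = log10(N_X/N_H) + 12."""
    return a_star - a_sun

fe_h = bracket(7.29, A_FE_SUN)   # mean A(Fe) of the 12 giants from the text
print(f"[Fe/H] = {fe_h:+.2f}")   # reproduces the quoted -0.16
```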
\subsection{Comparisons with ASPCAP and the Literature} One of the objectives of this study is to compare the results from the APOGEE automated abundance analysis derived using ASPCAP with an independent manual abundance analysis. ASPCAP abundances and stellar parameters are obtained from automatic matches of APOGEE spectra to synthetic libraries (Zamora et al. 2015) via a 6- or 7-D optimization of T$_{\rm eff}$, log g, [M/H], [C/Fe], [N/Fe], [$\alpha$/Fe] and, in some cases, the microturbulence ($\xi$), using the FERRE code (Allende Prieto et al. 2006). DR13 includes both raw ASPCAP values and calibrated values that were adjusted in order to match literature abundances of selected calibrators (see discussion in Holtzman et al. 2015). \subsubsection{Stellar Parameters} Figure 4 shows an H-R diagram plotted as log g versus T$_{\rm eff}$ for the target stars. DR13 results for both the raw and calibrated ASPCAP stellar parameters are also shown. This comparison indicates that there is a clear offset between the stellar parameters derived in this study (red circles) and the raw values from ASPCAP (brown pentagons), while the calibrated ASPCAP values (grey diamonds) show overall much better agreement with our results. It can be seen from the top left panel of Figure 5 that our effective temperatures (computed from photometric calibrations; Section 3) agree quite well with the ASPCAP T$_{\rm eff}$ values, which are derived purely from the APOGEE spectra. There is just a small tendency for our effective temperatures to be hotter than those from ASPCAP: the average difference between the two independent scales is $\langle\delta$T$_{\rm eff}$(This work - ASPCAP)$\rangle$ = 49 $\pm$ 22 K. (We note that the ASPCAP effective temperatures were not calibrated for DR13.) We also show in the bottom left panel the T$_{\rm eff}$ results from Jacobson et al. (2011) and Pancino et al. (2010) for the sample of stars that we have in common with those studies (Table 1). The effective temperatures from Jacobson et al.
(2011; green triangles) and Pancino et al. (2010; blue squares), which are both derived from the photometric calibrations of Alonso et al. (1999), do not show significant offsets from our results. The surface gravity comparisons are shown in the right panels of Figure 5. Our derived log g values agree very well with those obtained by Pancino et al. (2010; blue squares) and Jacobson et al. (2011; green triangles) for the stars in common. This is expected, because those previous log g derivations are based on physical relations (Eq. 1). It is also clear from this figure that the surface gravity results in DR13, which come directly from the ASPCAP analysis of the APOGEE spectra (brown pentagons), are systematically larger than the log g values obtained from fundamental relations: $\langle\delta$log g(This work - ASPCAP)$\rangle$ = -0.26 $\pm$ 0.12. We note that for the RC sample the log g difference is $\delta$ = -0.34 $\pm$ 0.10, while for the RGB sample it is $\delta$ = -0.18 $\pm$ 0.07. This systematic offset in the ASPCAP-derived surface gravities was also noticed in the previous APOGEE data releases (DR10, Ahn et al. 2014; DR12, Alam et al. 2015), and calibrations have been applied to correct for this bias (see discussions in Holtzman et al. 2015 and M\'esz\'aros et al. 2013). The calibration of the ASPCAP log g results in DR13 uses an algorithm for deciding whether a star is on the RC or the RGB based on its T$_{\rm eff}$, log g and [C/N] abundances. The DR13 ASPCAP calibrated log g values show, on average, much better agreement with our (non-spectroscopic) log g determinations: $\langle\delta$log g(This work - ASPCAP$_{\rm calibrated}$)$\rangle$ = 0.00 $\pm$ 0.12. The source of the offset between the uncalibrated ASPCAP values of log g and the physical log g values is unknown, and we note that the APOGEE spectra themselves cannot be used to check the Fe I/Fe II ionization balance, as no Fe II lines are detected in APOGEE spectra.
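Equation 1 itself is not reproduced in this excerpt; the standard fundamental-relation form of log g, assumed here, can be sketched as follows (the stellar inputs are purely illustrative, not values from Table 1):

```python
import math

# Fundamental relation for surface gravity (assumed form of the "Eq. 1"
# referenced in the text):
#   log g = log g_sun + log(M/Msun) + 4 log(Teff/Teff_sun) + 0.4 (Mbol - Mbol_sun)
LOGG_SUN, TEFF_SUN, MBOL_SUN = 4.44, 5777.0, 4.75

def logg_fundamental(mass_msun, teff_k, mbol):
    """Physical (non-spectroscopic) log g from mass, Teff and bolometric mag."""
    return (LOGG_SUN + math.log10(mass_msun)
            + 4.0 * math.log10(teff_k / TEFF_SUN)
            + 0.4 * (mbol - MBOL_SUN))

# Hypothetical giant near the cluster turn-off mass (illustrative inputs only):
lg = logg_fundamental(1.6, 4800.0, 1.0)
print(f"log g = {lg:.2f}")
```

This is why such log g values are insensitive to the spectroscopic fitting itself: only the mass, temperature and luminosity enter.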
\subsubsection{Chemical Abundances} Elemental abundances obtained for the NGC 2420 stars, along with the raw and calibrated ASPCAP results, are shown in Figure 6 as a function of the effective temperatures derived here. The mean abundance differences between our results and ASPCAP are also indicated in each panel of Figure 6. For a significant fraction of the elements, the abundances obtained manually are similar to those derived automatically by ASPCAP, with all three sets of results (manual, ASPCAP raw, and ASPCAP calibrated) agreeing in the mean to within $\sim$0.05 dex. This is the case for the elements C, Mg, K, Ca, Cr, Mn, Fe, and Ni. The remaining 8 elements exhibit offsets between the mean abundances of these three sets that are greater than $\sim$0.05 dex. In the case of O and Al, in particular, the ASPCAP calibrated values fall below both the manual and raw ASPCAP results, by 0.09 dex and 0.14 dex, respectively. The coolest RGB star in our sample has both raw and calibrated ASPCAP abundances that fall $\sim$0.15 dex below the manual value, with the manual abundance result agreeing with the abundances from the hotter giants: the manual O and Al abundances show no significant trend with T$_{\rm eff}$, while the ASPCAP results do. The abundances of Na, Si, and V exhibit similar behaviors among themselves, with the manual abundances falling in between the calibrated and raw ASPCAP values. We note the large corrections to the raw ASPCAP abundances for Na, Si and V, which become as large as $\sim$0.3 dex in the case of Na. Cobalt abundances from both the raw and calibrated ASPCAP results simply show larger scatter when compared to the manual analysis. The manually derived nitrogen abundances show marginal differences with the raw ASPCAP results, while the corrected ASPCAP abundances show good agreement. For titanium, the differences between the three sets of results are close to 0.1 dex, with a similar abundance scatter.
As discussed previously, several spectroscopic investigations in the 1980's found that the metallicity of the open cluster NGC 2420 was around [Fe/H] = -0.6 dex (Pilachowski et al. 1980, [Fe/H] = -0.7 dex; Cohen 1980, [Fe/H] = -0.6 dex; and Smith \& Suntzeff 1987, [Fe/H] = -0.5 dex). More recently, Pancino et al. (2010) analyzed several open clusters, including NGC 2420, using high-resolution (R = $\lambda$/$\Delta\lambda$ $\approx$ 30,000) echelle optical spectra and found a near-solar metallicity for NGC 2420, [Fe/H] = -0.05 $\pm$ 0.03, i.e., much more metal-rich than the previous determinations. The study of Jacobson et al. (2011), using spectra obtained with the Hydra spectrograph on WIYN (R = $\lambda$/$\Delta\lambda$ $\approx$ 18,000), found a metallicity of -0.20 $\pm$ 0.06. The mean iron abundance obtained here from the APOGEE spectra of 12 red giants in NGC 2420 is $\langle$[Fe/H]$\rangle$ = -0.16 $\pm$ 0.04, and this result compares very well with the mean metallicity from Jacobson et al. (2011). In addition, our analysis has twelve chemical elements in common with Pancino et al. (2010) and Jacobson et al. (2011). Figure 7 provides a visual comparison of these results, shown as [X/Fe] versus [Fe/H]. Our abundances show small internal scatter in both [X/Fe] and [Fe/H], probably due to the high quality of the APOGEE spectra coupled with a homogeneous analysis. Because Pancino et al. (2010) found a larger metallicity ([Fe/H]) than both this study and Jacobson et al. (2011), all of the Pancino et al. points are shifted to larger values of [Fe/H]; the Jacobson et al. (2011) iron abundances show larger scatter than ours, but generally overlap with our results. Examining the various element ratios ([X/Fe]) in Figure 7, the differences between the mean elemental abundances in the three studies are typically close to 0.1 dex, with a few points worth noting. Pancino et al.
(2010) find two stars (from their sample of three) that show somewhat higher values of [O/Fe] and lower values of [Al/Fe]. There are offsets between the Jacobson et al. (2011) results and this study for almost all elements ([O/Fe], [Mg/Fe], [Si/Fe], [Ca/Fe], and [Ti/Fe]), except for sodium and nickel, which overlap almost perfectly. These offsets are expected to fall within the combined uncertainties from the stellar parameter determinations and the gf-values. Table 5 presents the final average chemical abundances from all stars analyzed in NGC 2420 and their respective standard deviations. The derived standard deviations for all elements range from 0.02 - 0.05 dex, well within the expected uncertainties of the abundance analysis itself. The standard deviations obtained limit any intrinsic abundance differences among this sample of red giants to less than these rather small values: the observed red giants in NGC 2420 are chemically homogeneous to a few hundredths of a dex. Using a novel and very different technique, Bovy (2016) analyzed APOGEE spectra from 4 open clusters, including NGC 2420, to constrain abundance spreads in these clusters. The technique removes T$_{\rm eff}$ trends in relative flux levels in both observed and simulated spectra and then evaluates the residuals both with, and without, abundance scatter in the simulated spectra. The distributions of the values of the residuals can be used to place strong constraints on any underlying abundance variations in the cluster stars. Bovy (2016) finds quite small upper limits to any abundance variations in all 4 clusters, including NGC 2420; values from Bovy (2016) are included in Table 5. The upper limits set by Bovy (2016) compare well with the limits set by the standard deviations resulting from the classical spectroscopic abundance analysis performed here.
The largest difference between the two techniques for limiting abundance variations is for oxygen, from OH, where here $\sigma$ = 0.03 dex, while the limit from Bovy (2016) is 0.06 dex. The scatter found here is indeed small, given that OH is sensitive to both T$_{\rm eff}$ and stellar metallicity (Table 4). Since the red giants analyzed here have, except for one star, very similar temperatures and the same metallicity, the small scatter found for oxygen may not be so surprising. \subsection{Mixing in Red Giants} The members of NGC 2420 present a useful combination of stellar mass and metallicity for probing red-giant mixing along the RGB. With an estimated turn-off mass of M $\sim$ 1.6M$_{\odot}$ and a metallicity of [Fe/H] = -0.16, as measured here, the NGC 2420 red giants fall in a mass/metallicity range where the extent and impact of non-standard mixing across the luminosity bump is sensitive to the details of the type of mixing and the input physics used in the modeling (e.g. Charbonnel \& Lagarde 2010; Lagarde et al. 2012). Of the elemental abundances analyzed here, it is $^{12}$C, $^{14}$N, and the minor isotope $^{13}$C whose abundances are most sensitive to both standard and non-standard mixing. Eleven of the red giants in our study have effective temperatures that are too hot (T$_{\rm eff}$ $\sim$ 4700 -- 4800 K) to allow easy measurement of the $^{13}$C$^{16}$O or $^{13}$C$^{14}$N lines, and thus to strongly constrain values of $^{12}$C/$^{13}$C, which is one of the most sensitive indicators of extra-mixing. The value of $^{12}$C/$^{14}$N can also be used to probe extra-mixing, although it is not as sensitive.
Assuming initial scaled-solar values of [C/Fe] and [N/Fe] for NGC 2420 (since the cluster is only slightly sub-solar in metallicity, this is likely a good approximation), the red giants measured here have slightly lowered mean values of [$^{12}$C/Fe] = -0.06 and elevated values of [$^{14}$N/Fe] = +0.11, as expected qualitatively for the first dredge-up in low-mass red giants. The altered $^{12}$C and $^{14}$N abundances are due to H-burning on the CN-cycle, as predicted by stellar evolution, with the result that the total number of CNO nuclei is conserved. Neglecting $^{13}$C, which is a minor isotope, the approximate conservation of $^{12}$C + $^{14}$N nuclei can be tested in these red giants, under the assumption that the initial abundance ratios were [C/Fe] = 0.0 and [N/Fe] = 0.0. The NGC 2420 red giants are identified in Figure 8 as either RGB or RC stars (see discussion in Section 3), with the error bars equal to the standard deviations of the means from each abundance determination. The hotter red giants, near the lower RGB and RC, scatter around the C+N curve quite closely: within less than 0.1 dex, which is similar to the expected uncertainties. These red giants display the signature of the first dredge-up of matter exposed to the CN-cycle. The coolest red giant analyzed here, 2M07381507+2134589, is offset from the hotter giants, as well as from the C+N curve. This offset ($\sim$0.1 dex) is relatively small by typical abundance standards; however, given the accuracy of the analysis of APOGEE spectra, it is significantly larger than the abundance uncertainties. This effect for carbon abundances, as derived from CO molecular lines, has been noted in NGC 6791 from APOGEE spectra (Cunha et al. 2015), with the result that carbon abundances decrease by $\sim$0.1 dex from T$_{\rm eff}$ $\sim$ 5000 K to 4000 K. For the discussion here, this red giant is therefore not used to constrain stellar models from its $^{12}$C abundance alone.
In Figure 8 the two groups of red giants (RGB and RC) do not show obvious differences in their respective C and N abundances. The mean abundances are $\langle$A($^{12}$C$_{\rm RGB}$)$\rangle$ = 8.17 $\pm$ 0.03 and $\langle$A($^{14}$N$_{\rm RGB}$)$\rangle$ = 7.77 $\pm$ 0.03 for the five RGB stars (we do not include the coolest RGB star), and the mean values for the six RC stars are $\langle$A($^{12}$C$_{\rm RC}$)$\rangle$ = 8.18 $\pm$ 0.02 and $\langle$A($^{14}$N$_{\rm RC}$)$\rangle$ = 7.80 $\pm$ 0.04. The corresponding mean values of $^{12}$C/$^{14}$N for the RGB and RC stars are, respectively, 2.50 $\pm$ 0.29 and 2.36 $\pm$ 0.18. We note that the RC mean value of C/N is slightly smaller than that of the lower-RGB stars, which would be in the sense of extra-mixing; however, this difference is not statistically significant or conclusive. Differences in C/N between RC and RGB stars have also been reported by Mikolaitis et al. (2012) and Drazdauskas et al. (2016), who obtain ratios of (C/N$_{\rm RC}$ = 1.62, C/N$_{\rm RGB}$ = 2.04) and (C/N$_{\rm RC}$ = 1.60, C/N$_{\rm RGB}$ = 1.74) for the open cluster Collinder 261. In addition, Tautvai{\v s}iene et al. (2000) obtained C/N$_{\rm RC}$ = 1.40 and C/N$_{\rm RGB}$ = 1.70 for M67. These three studies all find somewhat lower C/N ratios on the RC when compared with the RGB. On a more quantitative footing, the results here constrain any extra-mixing between the lower RGB and the RC (through the He core-flash) to change the linear C/N ratio by less than 0.1-0.3. Recent studies using the previous APOGEE data release (DR12) have used the [C/N] ratios to estimate stellar masses and ages for the APOGEE sample (Masseron \& Gilmore 2015; Ness et al. 2016; Martig et al. 2016). The results from Martig et al.
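The linear ratios quoted above follow directly from the mean logarithmic abundances (a consistency sketch; the small differences from the quoted 2.50 and 2.36 presumably reflect averaging per-star ratios rather than ratios of mean abundances):

```python
# Converting the mean logarithmic abundances A(X) = log10(N_X/N_H) + 12
# into linear 12C/14N number ratios.
def linear_ratio(a_c, a_n):
    """N(12C)/N(14N) = 10**(A(C) - A(N))."""
    return 10.0**(a_c - a_n)

cn_rgb = linear_ratio(8.17, 7.77)   # lower-RGB mean abundances from the text
cn_rc = linear_ratio(8.18, 7.80)    # red-clump mean abundances from the text
print(f"C/N (RGB) = {cn_rgb:.2f}, C/N (RC) = {cn_rc:.2f}")
```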
(2016) would indicate a mean mass for our NGC 2420 sample of M$_{\star}$ $\sim$ 1.31 $\pm$ 0.12 M$_{\odot}$ and a mean age of $\sim$ 3.56 $\pm$ 0.86 Gyr, thus finding this open cluster to be older than the age we adopt. The mean masses and ages for the studied stars estimated by Ness et al. (2016) are M$_{\star}$ = 1.52 $\pm$ 0.22 M$_{\odot}$ and age $\sim$ 2.84 $\pm$ 0.86 Gyr. However, both of these studies are based on DR12, and the improved abundances from DR13 have not yet been adopted. \subsection{Abundance Comparisons with Galactic Trends} Results for Milky Way field disk stars, defining the Galactic trends, are also shown as comparisons in Figure 9. We use the results from Adibekyan et al. (2012; blue circles), Bensby et al. (2014; green triangles), Allende Prieto et al. (2004; magenta squares), Nissen et al. (2014; cyan pentagons), Reddy et al. (2003; grey asterisks) and Carretta et al. (2000; black pluses) to define the disk trends. The abundances obtained for our sample of red giants in NGC 2420 are in general agreement with those obtained for field disk stars at the corresponding metallicity of NGC 2420, although the derived abundances of, for example, Mg, Ca, Ti, V, and Co show some marginal systematic differences when compared to the field-star results shown in Figure 9; these fall close to the lower envelope of the elemental distributions obtained in the other studies. Some of those samples are quite local to the solar neighborhood, such as Allende Prieto et al. (2004), whose stars lie within 15 pc of the Sun, while other samples extend much further into the disk, as well as into the thick disk (Bensby et al. 2014). In addition, there is a metallicity gradient in the Milky Way disk. Several recent studies derive metallicity gradients from open clusters (Cunha et al. 2016, Frinchaboy et al. 2013, Jacobson et al. 2011, Andreuzzi et al. 2011, Carrera $\&$ Pancino 2011, Magrini et al. 2009). For the APOGEE results, in particular, Cunha et al.
(2016) present metallicity gradients based on DR12 abundances of 29 open clusters. The gradients obtained in [X/H] are typically -0.030 dex/kpc, with some possible evidence of flatter gradients for R$_{\rm GC}$ $>$ 12 kpc. At the Galactocentric distance of NGC 2420 (R$_{\rm GC}$ $\sim$ 11 kpc), the abundances derived here, about 0.1 dex below solar, are in line with the gradients derived from the APOGEE open cluster sample in DR12, although for some elements there are small systematic offsets due to the different line lists used here and in DR12.
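The consistency of this statement is simple arithmetic (a sketch assuming a solar Galactocentric radius R$_0$ = 8 kpc and a zero point [X/H] = 0 at R$_0$, neither of which is specified in the text):

```python
# Predicted abundance at the cluster's Galactocentric radius from a linear
# gradient.  SLOPE is the typical [X/H] gradient quoted in the text; the
# zero point (solar [X/H] at R0 = 8 kpc) is an assumption for illustration.
SLOPE = -0.030   # dex/kpc
R0 = 8.0         # kpc (assumed solar Galactocentric radius)

def predicted_x_h(r_gc_kpc):
    return SLOPE * (r_gc_kpc - R0)

pred = predicted_x_h(11.0)
print(f"predicted [X/H] at R_GC = 11 kpc: {pred:+.2f}")   # about -0.1 dex
```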
year: 16, month: 7, arxiv_id: 1607.06102
subfolder: 1607, filename: 1607.08603_arXiv.txt
S0 galaxies are known to host classical bulges with a broad range of sizes and masses, while some of these S0s are barred and some are not. The origin of the bars has remained a long-standing problem -- what made bar formation possible in certain S0s? By analysing a large sample of S0s with classical bulges observed by the Spitzer Space Telescope, we find that most of our barred S0s host comparatively low-mass classical bulges, typically with bulge-to-total ratio ($B/T$) less than $0.5$, whereas S0s with more massive classical bulges do not host any bar. Furthermore, we find that amongst the barred S0s there is a trend for longer and more massive bars to be associated with comparatively bigger and more massive classical bulges -- possibly suggesting bar growth being facilitated by these classical bulges. In addition, we find that the bulge effective radius is always less than the bar effective radius -- indicating an interesting synergy between the host classical bulge and the bar that was maintained while bar growth occurred in these S0s.
\label{sec:intro} Lenticular (S0) galaxies in the local universe are primarily characterised by the presence of a bulge and a disc with no apparent spiral arms \nocite{Barwayetal2007,Vandenbergh2009}({Barway} {et~al.} 2007; {van den Bergh} 2009) - but a number of observations have shown that, like their progenitor spirals, S0 galaxies, especially the less luminous ones, come both barred and unbarred \nocite{Barwayetal2011,vandenBergh2012}({Barway}, {Wadadekar} \& {Kembhavi} 2011; {van den Bergh} 2012). What has made bar formation possible in some S0 galaxies has remained a long-standing puzzle. Significant progress has been made over the last decade or so in our understanding of the redshift evolution of bars in disc galaxies. A number of these studies suggest that the bar fraction in spiral galaxies is strongly dependent on their mass \nocite{NairAbraham2010, Cameronetal2010}({Nair} \& {Abraham} 2010; {Cameron} {et~al.} 2010). It has been shown that the bar fraction in low-mass spirals remains nearly constant out to $z \sim 1$, corresponding to a look-back time of $7.8$~billion years \nocite{Elmegreenetal2004,Jogeeetal2004,Barazzaetal2008, NairAbraham2010, Cameronetal2010}({Elmegreen}, {Elmegreen} \& {Hirst} 2004; {Jogee} {et~al.} 2004; {Barazza}, {Jogee}, \& {Marinova} 2008; {Nair} \& {Abraham} 2010; {Cameron} {et~al.} 2010). More recently, \nocite{Simmonsetal2014}{Simmons} {et~al.} (2014), using the HST CANDELS survey, extended such a study to $z \sim 2$ and found no significant change in the bar fraction. These findings imply that bars are robust stellar structures; once formed, they are hard to destroy.
Based on the modelling of stellar kinematics, it is believed that barred spirals that shed their spiral arms were the progenitors of the present-day barred lenticulars \nocite{Cortesietal2011, Cortesietal2013}({Cortesi} {et~al.} 2011, 2013) - it thus becomes clear that bars in the present-day S0s formed long ago, most likely during the cosmic assembly of disc galaxies. During that early phase of evolution, a disc would have assembled and grown around a classical bulge that was either merger-built \nocite{Kauffmanetal1993, Baughetal1996, Hopkinsetal2009}({Kauffmann}, {White} \& {Guiderdoni} 1993; {Baugh}, {Cole}, \& {Frenk} 1996; {Hopkins} {et~al.} 2009) or formed as a result of other processes likely to be active in the high-redshift universe, e.g., clump coalescence or violent disc instability \nocite{Elmegreenetal2008,Ceverinoetal2015}({Elmegreen}, {Bournaud} \& {Elmegreen} 2008; {Ceverino} {et~al.} 2015). One would then expect the classical bulge to intervene in the bar formation process that occurred in the host stellar disc of the present-day S0s. Indeed, \nocite{Barazzaetal2008}{Barazza} {et~al.} (2008) showed that the bar fraction rises sharply from $\sim 40 \%$ to $70\%$ as one moves from early-type to late-type galaxies, which are disc-dominated rather than ones with prominent bulges. A massive classical bulge can produce a strong inner Lindblad resonance (ILR) barrier that cuts the feedback loop required for the swing amplification mechanism to work effectively in the disc and form a bar in the first place \nocite{Dubinskietal2009}({Dubinski}, {Berentzen} \& {Shlosman} 2009). So it is desirable for a stellar disc not to have a strong ILR in the early phase of galaxy assembly. A massive classical bulge can also produce enough central concentration to have a destructive effect on the orbital backbone of a bar \nocite{PfennigerNorman1990, Hasanetal1993}({Pfenniger} \& {Norman} 1990; {Hasan}, {Pfenniger} \& {Norman} 1993).
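The ILR argument can be illustrated with a toy rotation-curve model (entirely illustrative, in arbitrary units, and not from the cited works): a cored disc alone keeps $\Omega - \kappa/2$ bounded, so a moderate bar pattern speed $\Omega_p$ encounters no ILR, while adding a compact central "classical bulge" raises the inner $\Omega - \kappa/2$ curve above $\Omega_p$, creating the resonance barrier:

```python
import numpy as np

# Toy units throughout.  An ILR exists wherever Omega - kappa/2 = Omega_p.
r = np.linspace(0.05, 10.0, 2000)

def omega_minus_half_kappa(v2):
    """Omega - kappa/2 from a circular-speed-squared profile v2(r),
    using kappa^2 = r d(Omega^2)/dr + 4 Omega^2."""
    omega2 = v2 / r**2
    kappa2 = r * np.gradient(omega2, r) + 4.0 * omega2
    return np.sqrt(omega2) - 0.5 * np.sqrt(kappa2)

v2_disc = 1.0 * r**2 / (r**2 + 1.0**2)   # cored disc: solid-body centre, flat outside
v2_bulge = 0.5 / r                       # compact point-mass-like bulge, G*M_b = 0.5

curve_disc = omega_minus_half_kappa(v2_disc)
curve_both = omega_minus_half_kappa(v2_disc + v2_bulge)

omega_p = 0.5  # fiducial bar pattern speed (toy units)
print("ILR without bulge:", np.max(curve_disc) > omega_p)   # bounded curve, no ILR
print("ILR with bulge:   ", np.max(curve_both) > omega_p)   # diverging inner curve, ILR
```

In the bulge-free case the curve peaks at roughly 0.11, well below $\Omega_p$; the Keplerian-like bulge contribution makes $\Omega - \kappa/2 \to \Omega/2$ diverge toward the centre, guaranteeing an ILR for any finite pattern speed.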
Overall, it turns out that a massive classical bulge and a bar might not coexist in a spiral galaxy. But it remains unclear how to reconcile this with the observed properties of bars and classical bulges in S0 galaxies. The primary aim of the current work is to understand what physical parameters of a classical bulge are a prerequisite for a bar to form and grow stronger in an S0 galaxy. The paper is organised as follows. Section~\ref{sec:data} describes the sample data and its analysis. The roles of S0 discs and classical bulges in the context of bar formation are considered in section~\ref{sec:disc} and section~\ref{sec:ClB}. Section~\ref{sec:discuss} is devoted to discussion and conclusions. Throughout this paper, we use the standard concordance cosmology with $\Omega_M= 0.3$, $\Omega_\Lambda= 0.7$ and $h_{100}=0.7.$ \begin{figure} \rotatebox{0}{\includegraphics[height=7.5cm]{f1.eps}} \caption{Distribution of absolute magnitudes (in 3.6 $\mu m$) of the host stellar discs in barred and unbarred S0 galaxies. Both barred and unbarred discs seem to have a similar range of disc luminosities.} \label{fig:Mdisc} \end{figure}
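As a cross-check of the look-back time quoted in the introduction ($\sim 7.8$~Gyr at $z \sim 1$), the adopted concordance cosmology can be integrated numerically. The following is an illustrative sketch (not part of the paper's analysis); the unit-conversion constants and the function name are our own assumptions.

```python
import math

# Concordance cosmology adopted in the text.
H0 = 70.0          # km/s/Mpc  (h_100 = 0.7)
OM, OL = 0.3, 0.7  # Omega_M, Omega_Lambda (flat universe)

# Hubble time in Gyr: (1 Mpc in km) / H0, converted via seconds per Gyr.
T_HUBBLE = 3.0857e19 / H0 / 3.156e16   # ~13.97 Gyr

def lookback_time_gyr(z, steps=10000):
    """Look-back time t(z) = t_H * int_0^z dz' / [(1+z') E(z')] for flat LCDM,
    with E(z) = sqrt(Omega_M (1+z)^3 + Omega_Lambda), via the midpoint rule."""
    dz = z / steps
    total = 0.0
    for i in range(steps):
        zp = (i + 0.5) * dz
        e = math.sqrt(OM * (1.0 + zp) ** 3 + OL)
        total += dz / ((1.0 + zp) * e)
    return T_HUBBLE * total

print(round(lookback_time_gyr(1.0), 1))  # ~7.7 Gyr, close to the ~7.8 Gyr quoted
```

The small difference from the quoted 7.8~Gyr is within the spread expected from the rounding of $H_0$.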
\label{sec:discuss} The role of a classical bulge in the growth and evolution of a bar has not been fully investigated. It is known that a massive, centrally concentrated object (e.g., a supermassive black hole) can considerably weaken a bar by scattering stars off the $x_1$-family of orbits which constitute the backbone of the bar \nocite{Hasanetal1993, SellwoodMoore1999, Athanassoulaetal2005,HozumiHerquist2005,Hozumi2012}({Hasan} {et~al.} 1993; {Sellwood} \& {Moore} 1999; {Athanassoula}, {Lambert} \& {Dehnen} 2005; {Hozumi} \& {Hernquist} 2005; {Hozumi} 2012). Some of the classical bulges in the current sample have the right mass for such action, but are not as centrally concentrated as would be required to mimic the effect of a supermassive black hole. Such massive classical bulges could, however, in principle delay or even stop a bar from forming in the first place by producing an ILR near the centre of the galaxy, which would cut the feedback loop required for the swing amplification \nocite{Toomre1981}({Toomre} 1981). In fact, as mentioned in section~\ref{sec:ClB}, we do find that S0 galaxies with massive classical bulges are unbarred. This agrees with the reduction in bar fraction with rising bulge-to-disc mass ratio reported by \nocite{Barazzaetal2008}{Barazza} {et~al.} (2008) for disc galaxies. It remains unclear why some spiral galaxies are barred and some are not; as we see here, S0s face the same unresolved issue. In order to make progress, one has to disentangle the effects of the various parameters of the disc, classical bulge and dark matter halo which determine the bar growth in a galaxy. N-body simulations have shown that a bar forms and grows rapidly in a cool, rotating, self-gravitating disc \nocite{Hohl1971,SellwoodWilkinson1993,Athanassoula2002, Dubinskietal2009, Sahaetal2012}({Hohl} 1971; {Sellwood} \& {Wilkinson} 1993; {Athanassoula} 2002; {Dubinski} {et~al.} 2009; {Saha} {et~al.} 2012, and references therein).
Furthermore, the bar continues to grow in size and mass by transferring angular momentum from the inner disc to the surrounding dark matter halo via resonant gravitational interaction \nocite{DebattistaSellwood1998,Athanassoula2002,Holley-Bockelmannetal2005, WeinbergKatz2007a,Ceverinoklypin2007,SahaNaab2013}({Debattista} \& {Sellwood} 1998; {Athanassoula} 2002; {Holley-Bockelmann}, {Weinberg} \& {Katz} 2005; {Weinberg} \& {Katz} 2007; {Ceverino} \& {Klypin} 2007; {Saha} \& {Naab} 2013). But if the initial disc was hotter and dark matter dominated, a bar would grow rather slowly over several billion years and might remain weak and too faint to be detected \nocite{Sahaetal2010, Shethetal2012, Saha2014}({Saha} {et~al.} 2010; {Sheth} {et~al.} 2012; {Saha} 2014). These two inputs lead us to suggest that the bars in S0s are unlikely to have formed in their later phase of evolution. In other words, we think that bars in S0s formed during the early phase of disc assembly around a classical bulge with a comparatively lower $B/T$. Bars in S0 galaxies are preferentially formed in the presence of classical bulges with $B/T < 0.5$. These classical bulges have stellar masses in the range $\sim 10^8 - 10^9 M_{\odot}$. Massive classical bulges with $B/T > 0.5$ are not found in any barred S0s in our sample. Amongst barred S0s with similar disc mass, there exists a strong correlation between the bar and classical bulge properties. The host stellar discs are unlikely to have played a major role in the formation of bars in these S0s.
{ Luminous Blue Variables (LBVs) are massive stars caught in a post-main-sequence phase, during which they are losing a significant amount of mass. As, on one hand, it is thought that the majority of massive stars are close binaries that will interact during their lifetime, and, on the other, the most dramatic example of an LBV, $\eta$~Car, is a binary, it would be useful to find other binary LBVs. We present here interferometric observations of the LBV HR\,Car obtained with the AMBER and PIONIER instruments attached to ESO's Very Large Telescope Interferometer (VLTI). Our observations, spanning two years, clearly reveal that HR\,Car is a binary star. It is not yet possible to fully constrain the orbit, and the orbital period may lie between a few years and several hundred years. We derive a radius for the primary in the system and possibly also resolve the companion. The luminosity ratio in the $H$-band between the two components is changing with time, going from about 6 to 9. We also tentatively detect the presence of some background flux which remained at the 2\% level until January 2016, but then increased to 6\% in April 2016. Our AMBER results show that the emission line forming region of Br$\gamma$ is more extended than the continuum emitting region as seen by PIONIER and may indicate some wind-wind interaction. Most importantly, we constrain the total mass of the system, with the most likely range being between 33.6~M$_\odot$ and 45~M$_\odot$. Our results show that the LBV HR\,Car is possibly an $\eta$~Car analog binary system with smaller masses and variable components, and further monitoring of this object is definitely called for. }
Luminous Blue Variables (LBVs) are post-main-sequence massive stars undergoing a brief, but essential, phase in their life, characterised by extreme mass-loss and strong photometric and spectroscopic variability \citepads{1984IAUS..105..233C,1994PASP..106.1025H,2012ASSL..384..221V}. The best known LBV, \object{$\eta$\,Carinae}, is known to have lost about 10--30 solar masses during its great outburst in the 1840s \citepads{2003AJ....125.1458S}. There is, as yet, no firmly established mechanism to explain the large mass loss of LBVs, nor how it happens: is the mass lost due to a steady radiatively-driven stellar wind, or is it removed by punctuated eruption-driven mass loss, such as the great outburst of $\eta$\,Car? Numerous hypotheses have been proposed. Apart from single star processes, such as core and atmospheric instabilities \citepads{1994PASP..106.1025H} or supercritical rotation, the binary hypothesis \citepads[e.g.][]{1989ASSL..157..185G,2011MNRAS.415.2020S} is a very strong contender, especially as it is well established that massive stars form nearly exclusively in multiple systems and that binary interactions are critical for these stars \citepads{Chini2012,Sana2012,Sana2014}. There is currently a hot debate in the literature on the evolutionary status of LBV stars and on the importance of binarity in their formation \citepads{2015MNRAS.447..598S,2016arXiv160301278H}. So far, however, while several wide LBV binaries have been identified, LBV systems similar to $\eta$\,Car (relatively close \& eccentric) have not been found \citepads{Mar2012,2016A&A...587A.115M}. The only possible exception might be the LBV candidate \object{MWC 314} \citepads{Lobel2013}, but this is apparently a massive semi-detached binary system, and thus not directly comparable to $\eta$\,Car.
On the one hand, this may appear rather surprising, as it is thought that, given their very high multiplicity rate, more than 70\% of all massive stars will exchange mass with a companion \citepads{Sana2012}. On the other hand, LBVs are rare objects with complex emission line spectra and intricate nebulae. Located at average distances of a few kpc or more, they therefore require at least milli-arcsecond resolution for the direct detection of close companions. Such a resolution is only reachable by interferometry. \begin{figure*}[htbp] \begin{center} \includegraphics[width=16cm]{fig_HRCar_AMBER.pdf} \end{center} \caption[xx]{\label{fig:AMBER}Examples of AMBER observations of HR\,Car. In the $u,v$-plane coverage, shown in the centre, solid lines connect visibility and phase observations with their respective $u,v$ points; open circles are conjugated $u,v$ points. Panels a) and d) show observations obtained on MJD\,56718.149, panels b) and c) on MJD\,56676.226, and panel f) on MJD\,56726.113. One baseline from the observation taken on MJD\,55311.024 is shown as a dashed line in the flux panel and in panel e); its $u,v$ position is marked by a cross. } \end{figure*} Here, we report on interferometric measurements of the LBV \object{HR\,Carinae} (HD\,90177), one of the very few in the Milky Way. \citetads{1990A&AS...82..189V} derived for this star an effective temperature of 14\,000$\pm$2\,000 K and a bolometric luminosity (M$_{bol}$) of $-9.5$, with a mass-loss rate of $2.2\times10^{-5}$~M$_\odot$yr$^{-1}$. The luminosity was revised to M$_{bol}=-8.9$ and the distance to 5$\pm$1 kpc by \citetads{1991A&A...246..407V}, putting HR\,Car most likely in the Carina spiral arm. At the same time, \citetads{1991A&A...248..141H} derived a kinematic distance of 5.4$\pm$0.4~kpc and showed that HR\,Car has a multiple-shell expanding atmosphere.
\citetads{1997A&A...320..568W} found that HR\,Car has a nebula that appears bipolar, with each lobe having a diameter of $\sim$0.65~pc and a line-of-sight expansion velocity of 75--150 kms$^{-1}$. We note in passing that the Hipparcos measurement of the parallax of HR\,Car of 1.69$\pm$0.82 mas \citepads{2007A&A...474..653V}, translating to a very imprecise distance of 592$^{+557}_{-193}$ pc, is most likely incorrect in view of the other indicators, and quite possibly a result of the hitherto unknown binarity. Effective temperature determinations for HR\,Car range from about 10\,000 K \citepads{2002A&A...387..151M}, through 14\,000$\pm$2\,000 K (see above) and 17\,900 K \citepads{2009ApJ...705L..25G}, to 22\,000 K \citepads{2010AN....331..349H}. The star is highly variable: it had its last S~Dor outburst in July 2001 and is currently in a quiet state, two magnitudes fainter than at maximum (in $V$). Visual magnitudes obtained from the AAVSO web site indeed indicate that the star now has a magnitude $V\sim 8.7-9$. According to \citetads{2011MNRAS.410..190T}, HR\,Car is a B2evar star with a mass of 18.1$\pm$5.5~M$_\odot$ and an age of 5.0$\pm$1.4 Myr, while \citetads{2010AN....331..349H} quote a value of 23.66$\pm$7.24~M$_\odot$. Similarly, \citetads{2009ApJ...705L..25G} show the star to have a high rotational velocity of $150\pm20$ kms$^{-1}$, i.e. rotating at 88\% of its critical velocity, and derive a current mass of about 25~M$_\odot$ and an initial mass of 50$\pm$10~M$_\odot$, but we should stress here that all these values are very model-dependent. The high rotational velocity and the difficulty LBVs have in losing angular momentum led \citetads{2009ApJ...705L..25G} to suggest that HR\,Car could explode during its current LBV phase, making the link with detections of LBV-like progenitors of Type IIn supernovae \citepads{2015MNRAS.447..598S}. It is thus important to characterise this very interesting star as well as possible.
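The quoted Hipparcos numbers follow from the naive inversion $d = 1/\varpi$, with the asymmetric error bars obtained by inverting $\varpi \pm \sigma_\varpi$. A minimal sketch (the function name is ours):

```python
def parallax_to_distance_pc(plx_mas, sigma_mas):
    """Naive distance from a parallax in mas: d [pc] = 1000 / plx.

    Returns (d, err_plus, err_minus); the errors are asymmetric because
    1/plx is non-linear (a smaller parallax means a larger distance).
    """
    d = 1000.0 / plx_mas
    d_far = 1000.0 / (plx_mas - sigma_mas)
    d_near = 1000.0 / (plx_mas + sigma_mas)
    return d, d_far - d, d - d_near

# Hipparcos value for HR Car: 1.69 +/- 0.82 mas
d, plus, minus = parallax_to_distance_pc(1.69, 0.82)
print(round(d), round(plus), round(minus))  # 592 558 193, i.e. 592^{+~557}_{-193} pc
```

The large fractional parallax error ($\sigma_\varpi/\varpi \approx 0.49$) is precisely why this naive inversion is so imprecise here.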
We show here, based on interferometric measurements, that the LBV HR\,Car is in fact a binary system, with an orbital period of several years, making it the first LBV found to be similar to $\eta$~Car in terms of binarity. Our observations are presented in Sect.~\ref{Sec:Obs} and discussed in Sect.~\ref{Sec:Dis}.
We have obtained interferometric observations of the LBV HR\,Car that clearly reveal its binary nature, and we have detected the orbital motion over a period of two years. It is still not possible to derive the orbital period, which could be of the order of a few to several tens of years, with a separation of the order of 10--270 au, but with the constraint that the largest orbit must also be the most eccentric, with a periastron distance most likely fixed around 2 mas, or 11 au. If the eccentricity is small and the orbit turns out to be of the order of 5 to 10 years, {\bf HR\,Car would be the second binary LBV presenting all the hallmarks and properties which make $\eta$~Car truly such a unique object}, but with components of much smaller masses. We should note, however, that no giant eruption has ever been recorded for HR\,Car, unlike that of $\eta$~Car in the 1840s, and that estimates of the ejecta mass surrounding HR\,Car are of the order of 1~M$_\odot$ \citepads{2000ApJ...539..851W}, much smaller than what is seen around $\eta$~Car. Apart from highlighting the possible role of binarity in the formation and/or evolution of LBVs, the fact that HR\,Car is a binary is essential, as it will allow us to derive the masses of the stars, which will be very useful to compare to stellar evolutionary models. For now, we constrain the most likely range of total masses to be 33.5--45~M$_\odot$. AMBER has shown that the emission line forming region of Br$\gamma$ is larger than the minimum projected separation of the components of 2\,mas, measured by PIONIER. Hence HR\,Car must undergo wind-wind interaction detectable in Br$\gamma$, and probably also in H$\alpha$. Whether the interaction is permanent, in the case of a circular orbit, or phase-dependent at periastron, in the case of an eccentric orbit, cannot be said with the current AMBER data, although the increase in background flux seen in the PIONIER data seems to favour the latter (if it proves to be non-instrumental).
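For a resolved binary, the total mass follows from Kepler's third law once the semi-major axis and period are known: in solar units, $M_1 + M_2 = a^3/P^2$ with $a$ in au and $P$ in years. The specific $a$ and $P$ in the example below are purely hypothetical illustrations consistent with the scales discussed above, not fitted values:

```python
def total_mass_msun(a_au, p_yr):
    """Kepler's third law in solar units: M1 + M2 = a^3 / P^2
    (a in au, P in years, result in solar masses)."""
    return a_au ** 3 / p_yr ** 2

# Hypothetical example: a near-circular orbit at the ~11 au periastron
# scale quoted above, with an assumed ~6 yr period, lands inside the
# 33.5--45 Msun total-mass range derived in the text.
print(round(total_mass_msun(11.0, 6.0), 1))  # 37.0
```

This illustrates why constraining the period to within a factor of a few already pins down the total mass to a usefully narrow range.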
In either case, however, the HR\,Car system is considerably simpler than its much better known, nearby LBV sibling, $\eta$\,Car, and probably much easier to constrain, model, and ultimately understand. In particular, in the case of an eccentric orbit, the next periastron would offer an excellent opportunity for a concerted multi-wavelength, multi-technique campaign to provide constraints for theoretical modelling. Whatever interaction happens in the system of HR\,Car now, in the minimum phase of its S\,Dor cycle, it must be very different when the star is at maximum. In the maximum of an S\,Dor cycle the primary, well separated from the secondary now, even at the closest distance, will possibly approach or exceed its Roche-lobe radius, and maybe even become large enough for the secondary to pass through the primary's outer layers. In the recent past, two S\,Dor cycles have been observed for HR\,Car, with maxima around 1991 and 1999 \citepads[see, e.g.,][]{2003IAUS..212..243S} or 2001 (as indicated by the AAVSO data). Once the orbit is better constrained, it will be seen how these dates relate to the orbital parameters, whether one should have expected strong interaction between the components, and in particular what to expect in the next S\,Dor maximum phase. We will continue to monitor HR\,Car with the PIONIER instrument to settle the orbital period of this interesting binary as soon as Nature allows us -- from the current modelling, in 2018 we should already be able to distinguish between the main families of solutions. This, combined with a precise Gaia distance, should also constrain the total mass of the system. We encourage spectroscopic monitoring of this target to try to derive the associated spectroscopic orbit, although we understand that this won't be a task for the faint-hearted.
\noindent We are interested in the numerical solution of large systems of hyperbolic conservation laws, or of systems for which the characteristic decomposition is expensive to compute. Solving such equations using finite volume or discontinuous Galerkin methods requires a numerical flux function which solves local Riemann problems at cell interfaces. There are various methods to express the numerical flux function. At one end, there is the robust but very diffusive Lax-Friedrichs solver; at the other, the upwind Godunov solver, which respects all resulting waves. The drawback of the latter method is the costly computation of the eigensystem. This work presents a family of simple first order Riemann solvers, named HLLX$\omega$, which avoid solving the eigensystem. The new method reproduces all waves of the system with less dissipation than other solvers with similar input and effort, such as HLL and FORCE. The family of Riemann solvers can be seen as an extension or generalization of the methods introduced by Degond et al. \cite{DegondPeyrardRussoVilledieu1999}. We only require the same number of input values as HLL, namely the globally fastest wave speeds in both directions, or an estimate of these speeds. Thus, the new family of Riemann solvers is particularly efficient for large systems of conservation laws when the spectral decomposition is expensive to compute or no explicit expression for the eigensystem is available.
\label{sec:introduction} In the finite volume method, integrating conservation laws over a control volume leads to a formulation which requires the evaluation of the flux function at cell interfaces. Since the exact information is not available, local Riemann problems are solved at cell interfaces, with initial states typically given by the left and right adjacent cell values. Since these local Riemann problems have to be solved many times in order to find the numerical solution, the Riemann solver is a building block of the finite volume method. Over the last decades, many different Riemann solvers have been developed; see e.g. \cite{ToroRiemannSolvers} for a broad overview. The challenge for the solver is the need for computational efficiency and easy implementation, while at the same time it needs to yield accurate results which do not create artificial oscillations. Riemann solvers can be classified into complete and incomplete schemes, depending on whether or not all characteristic fields present are considered in the model. According to this classification, the upwind scheme and Roe's scheme \cite{Roe1981} are complete schemes. They yield good, monotone results; however, an evaluation of the eigensystem of the flux Jacobian is needed. This characteristic decomposition is expensive to compute, especially for large systems, and in some cases an analytic expression is not available at all. Nevertheless, using Roe's scheme, all waves are well resolved, and it typically yields the best resolution of the Riemann wave fan. In order to reduce the computational cost and at the same time keep the high resolution, there have been many attempts to approximate the upwind scheme without solving the eigenvalue problem; see e.g. \cite{DegondPeyrardRussoVilledieu1999, Torrilhon2012, CastroGallardoMarquina2014} and references therein. In this article, we are interested in incomplete Riemann solvers.
In comparison to complete solvers, they require less characteristic information and are easier to implement. However, they contain more dissipation and thus yield lower resolution, especially for slow waves. Nevertheless, in many test cases these Riemann solvers may be sufficient to obtain good results, especially if the system contains only fast waves. \newline \newline The rest of the paper is structured as follows. In Sec.~\ref{sec:FV} we introduce the necessary notation for finite volume (FV) schemes and Riemann problems in general, and Sec.~\ref{sec:riemannsolvers} reviews some well-known Riemann solvers. In Sec.~\ref{sec:hybridSolvers} we discuss some hybrid Riemann solvers, that is, solvers which are constructed as weighted combinations of the ones presented in the previous section. Sec.~\ref{sec:HLLXomega} presents the new family of Riemann solvers and discusses its construction and parameter choices. The numerical results of Sec.~\ref{sec:numericalresults} underline the excellent performance of the new solvers, and finally, in Sec.~\ref{sec:conclusion}, we draw some conclusions.
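To make the dissipation hierarchy concrete, here is a minimal scalar sketch (with our own function names, not taken from the paper) of two incomplete solvers that share the HLLX$\omega$ input requirements: the HLL flux, which uses only the two extreme wave-speed bounds, and the local Lax-Friedrichs (Rusanov) flux, which uses a single maximal speed. For a right-moving Burgers shock, HLL already coincides with the exact upwind (Godunov) flux, while Lax-Friedrichs adds extra dissipation.

```python
def hll_flux(uL, uR, f, sL, sR):
    """HLL numerical flux for a scalar law u_t + f(u)_x = 0,
    given left/right states and bounds sL <= sR on the wave speeds."""
    if sL >= 0.0:
        return f(uL)   # all waves move right: pure upwinding from the left
    if sR <= 0.0:
        return f(uR)   # all waves move left: pure upwinding from the right
    return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)

def llf_flux(uL, uR, f, smax):
    """Local Lax-Friedrichs (Rusanov) flux with maximal speed smax:
    central average plus a dissipation term proportional to the jump."""
    return 0.5 * (f(uL) + f(uR)) - 0.5 * smax * (uR - uL)

# Burgers flux f(u) = u^2/2, right-moving shock with uL = 1, uR = 0:
f = lambda u: 0.5 * u * u
print(hll_flux(1.0, 0.0, f, 0.0, 1.0))  # 0.5 (the exact Godunov value here)
print(llf_flux(1.0, 0.0, f, 1.0))       # 0.75 (extra dissipation on the jump)
```

The gap between the two values is exactly the kind of excess dissipation that hybrid solvers such as HLLX$\omega$ aim to reduce without computing the eigensystem.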
\label{sec:conclusion} This paper presented a family of approximate hybrid Riemann solvers, HLLX$\omega$, for large non-linear hyperbolic systems of conservation laws. The solvers do not require the characteristic decomposition of the flux Jacobian, only an estimate of the maximal propagation speeds in both directions is needed. The family of solvers contains a parameter $\omega$ which orders the solvers from fully-monotone to fully non-monotone. The intermediate solvers contain monotone as well as non-monotone parts. We showed that these intermediate family members, even though containing non-monotone parts for certain wave speeds, do not lead to oscillatory solutions. Extremely slow waves and stationary waves will still be approximated with higher dissipation than the upwind scheme, however, the computational cost of the new solvers is lower. Compared to solvers with similar prerequisites, the new Riemann solvers are able to rigorously decrease the dissipation of the scheme. The numerical examples underline the excellent performance of the new family of solvers with respect to other solvers. \begin{appendix}
The overabundance of high-energy cosmic positrons, observed by PAMELA and AMS-02, can be considered a consequence of dark matter decays or annihilations. We show that recent Fermi-LAT measurements of the isotropic diffuse gamma-ray background impose severe constraints on such dark matter explanations and render them practically untenable. %
\nocite{Belotsky:2016tja} The unexpected increase of the positron fraction in cosmic rays with energies above 10 GeV (also known as the ``positron anomaly'') was observed for the first time in the PAMELA experiment \cite{Adriani:2008zr} and was later confirmed by AMS-02 \cite{Aguilar:2013qda}. A lot of attention was paid to this discovery, since the standard mechanisms of positron production and acceleration predicted a much steeper energy spectrum of cosmic positrons. The list of possible explanations includes, inter alia, decays or annihilations of dark matter (DM) particles, implying an interconnection between our world and the ``dark world''. This intriguing possibility is, however, highly constrained by a set of direct, indirect and accelerator-based observations, which force DM models to become more and more sophisticated. But no matter how complicated a DM model explaining the positron anomaly is, it should fulfill the principal requirement that it produces a sufficient amount of high-energy positrons. The undesirable consequence of this fact is that, regardless of the prior (internal) processes, the production of charged particles is accompanied by gamma-ray emission (see Fig.~\ref{FSRdiag}). \begin{figure}[h!]\label{FSRdiag} \centering \begin{fmffile}{diagramFSR} \fmfframe(5,5)(5,5){ \begin{fmfgraph*}(70,40) \fmfpen{thin} \fmfleft{i1,i2} \fmflabel{DM}{i1} \fmflabel{DM}{i2} \fmfright{o1,o2,o3,o4} \fmflabel{$X$}{o1} \fmflabel{$e^+$}{o2} \fmflabel{$\nu_e$}{o4} \fmflabel{$\gamma$}{o3} \fmf{fermion}{i1,v2,i2} \fmfblob{.15w}{v2} \fmf{photon,tension=0.5,label=$W^+$,side=up}{v2,v3} \fmf{fermion}{o2,v4,v3,o4} \fmffreeze \fmf{photon}{v4,o3} \fmf{fermion}{v2,o1} \fmfi{plain}{vpath (__v2,__o1) shifted (thick*(0,2))} \fmfi{plain}{vpath (__v2,__o1) shifted (thick*(-1,-2))} \end{fmfgraph*} } \end{fmffile} \caption{A diagram illustrating an example of a DM annihilation process providing a positron via $W^+$ decay, together with some variety of states $X$.
The positron emits final state radiation (FSR).} \end{figure} In addition, gamma rays are produced during the propagation of charged particles through the Galactic gas and the electromagnetic media, mainly in processes such as bremsstrahlung and inverse Compton scattering (ICS). As we are going to show, even this at first sight small contribution to Galactic gamma rays may come into conflict with the latest Fermi-LAT data on the isotropic diffuse gamma-ray background \cite{Ackermann:2014usa,DiMauro:2016cbj} and, furthermore, rule out DM explanations of the high-energy cosmic positron excess. Basically, the reason for this problem is the following: the total amount of positrons and photons depends on the size of the volume in which their sources are concentrated, and though physically both positrons and photons have the same source, the volume of space from which they mostly arrive is substantially different. While only those positrons that were produced within $\sim 3$ kpc of the Earth can reach it (due to their stochastic motion in the Galactic magnetic fields and the corresponding energy losses), gamma rays can come to us directly from any point of the DM halo where they were born. Now, since the DM halo is indeed large, the amount of gamma rays can simply exceed the observed limits.
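The $\sim 3$ kpc positron horizon can be motivated with a back-of-the-envelope estimate: the distance a positron diffuses before losing most of its energy is $\lambda \approx \sqrt{4 K(E)\,\tau(E)}$. The propagation parameters below ($K_0$, $\delta$, the 1 GeV loss time) are typical ``MED''-like values that we assume purely for illustration; they are not taken from this work.

```python
import math

K0 = 0.0112    # kpc^2/Myr, diffusion-coefficient normalisation (assumed)
DELTA = 0.7    # diffusion spectral index (assumed)
TAU_E = 317.0  # Myr, energy-loss time at 1 GeV (~1e16 s, assumed)

def positron_horizon_kpc(e_gev):
    """Diffusion length lambda = sqrt(4 K(E) tau(E)) before severe energy loss,
    with K(E) = K0 (E/GeV)^delta and tau(E) = TAU_E / (E/GeV) for b(E) ~ E^2."""
    k = K0 * e_gev ** DELTA
    tau = TAU_E / e_gev
    return math.sqrt(4.0 * k * tau)

print(round(positron_horizon_kpc(100.0), 1))  # ~1.9 kpc at 100 GeV
```

Under these assumptions the horizon stays at the kpc scale for the energies of interest, in line with the $\sim 3$ kpc figure quoted above, while gamma rays sample the full halo along the line of sight.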
We present asymptotic giant branch (AGB) models of solar metallicity to allow the interpretation of observations of Galactic AGB stars, whose distances should soon be available after the first release of the Gaia catalogue. We find an abrupt change in the AGB physical and chemical properties occurring at the threshold mass to ignite hot bottom burning, i.e. $3.5~\Msun$. Stars with mass below $3.5~\Msun$ reach the C-star stage and eject into the interstellar medium gas enriched in carbon, nitrogen and $^{17}O$. The higher mass counterparts evolve at large luminosities, between $3\times 10^4 L_{\odot}$ and $10^5 L_{\odot}$. The mass expelled by the massive AGB stars shows the imprint of proton-capture nucleosynthesis, with considerable production of nitrogen and sodium and destruction of $^{12}C$ and $^{18}O$. The comparison with the most recent results from other research groups is discussed, to evaluate the robustness of the present findings. Finally, we compare the models with recent observations of Galactic AGB stars, outlining the possibilities offered by Gaia to shed new light on the evolutionary properties of this class of objects.
Stars of mass in the range $1\Msun \leq M \leq 8\Msun$, after the consumption of helium in the core, evolve through the asymptotic giant branch phase: above a degenerate core, composed of carbon and oxygen (or of oxygen and neon in the stars of highest mass), a $3\alpha$ burning zone and a region with CNO nuclear activity alternately provide the energy required to support the star \citep{becker80, iben82, iben83, lattanzio86}. Because helium burning is activated under conditions of thermal instability \citep{schw65, schw67}, CNO cycling is for most of the time the only active nuclear channel, whereas ignition of helium occurs periodically, during rapid events known as thermal pulses (TP).\\ Though the duration of the AGB phase is extremely short when compared to the evolutionary time of the star, it proves of paramount importance for the feedback of these stars on the host environment. This is because it is during the AGB evolution that intermediate mass stars lose their external mantle, thus contributing to the gas pollution of the interstellar medium. In addition, these stars have been recognised as important manufacturers of dust, owing to the thermodynamic conditions of their winds, which are a favourable environment for the condensation of gas molecules into solid particles \citep{gail99}.\\ For the above reasons, AGB stars are believed to play a crucial role in several astrophysical contexts. \\ On the purely stellar evolution side, they are an ideal laboratory to test stellar evolution theories, because of the complexity of their internal structure. In the context of the Galaxy's evolution, the importance of AGB stars for the determination of the chemical trends traced by stars in different parts of the Milky Way has been recognised in several studies \citep{romano10, kobayashi11}. Still in the Milky Way environment, massive AGB stars have been proposed as the main actors in the formation of multiple populations in Globular Clusters \citep{ventura01}.
Moving out of the Galaxy, it is generally believed that AGB stars give an important contribution to the dust present at high redshift \citep{valiante09, valiante11}; furthermore, these stars play a crucial role in the formation and evolution of galaxies (Santini et al. 2014).\\ It is for these reasons that research on AGB stars has attracted the interest of the astrophysical community over the last decades.\\ The description of these stars is extremely difficult, owing to the very short time steps (of the order of one day) required to describe the TP phases, which leads to very long computation times. Furthermore, the evolutionary properties of these stars are determined by the delicate interface between the degenerate core and the tenuous, expanded envelope, thus rendering the results obtained extremely sensitive to convection modelling \citep{herwig05, karakas14b}. \\ There are two mechanisms potentially able to alter the surface chemical composition, namely hot bottom burning (hereinafter HBB) and third dredge-up (TDU). The efficiency of these two mechanisms depends critically on the method used to determine the temperature gradients in regions unstable to convective motions \citep{vd05a} and on the details of the treatment of the convective borders, as concerns both the base of the convective envelope and the boundaries of the shell that forms in conjunction with each TP, the so-called ``pulse driven convective shell''. The description of mass loss also plays an important role in the determination of the evolutionary time scales \citep{vd05b, doherty14}.\\ Given the poor knowledge of some of the macro-physics input necessary to build the evolutionary sequences, primarily convection and mass loss, comparison with observations is at the moment the only way to improve the robustness of the results obtained.
\\ On this side, the Magellanic Clouds have so far been used much more extensively than the Milky Way \citep{martin93, marigo99, karakas02, izzard04, marigo07, stancliffe05}, given the unknown distances of Galactic sources, which render any interpretation of the observations difficult. Very recent work outlines the possibility of calibrating AGB models based on observations of the AGB population in dwarf galaxies in the Local Group \citep{rosenfield14, rosenfield16}. The attempt to interpret the observations of metal-poor environments, typical of the Magellanic Clouds and of the galaxies in the Local Group, has so far pushed our attention towards sub-solar AGB models, published in previous works of our group \citep{ventura08, ventura09, ventura11, ventura13}. The main drivers of this research were the understanding of the presence of multiple populations in globular clusters and the comparison of our predictions with the evolved stellar populations of the Magellanic Clouds \citep{flavia15a, flavia15b, ventura15, ventura16} and of metal-poor dwarf galaxies of the Local Group \citep{dellagli16}. The advent of the ESA Gaia mission will open new frontiers in the study of stars of any class, and in particular of the evolved stellar population of the Milky Way. Launched in December 2013, Gaia will allow the construction of a catalogue of more than 1 billion astronomical objects (mostly stars) brighter than 20 G mag (where G is the Gaia white-light passband, Jordi et al. 2010), which encompasses $\sim 1\%$ of the Galactic stellar population. During the five-year mission lifetime each object will be observed 70 times on average, for a total of $\sim 630$ photometric measurements in the G band, the exact number of observations depending on the magnitude and position of the object (ecliptic coordinates) and on the stellar density in the object's field.
Gaia will perform $\mu$as global astrometry for all the observed objects, thus allowing the determination of the distances of many AGB stars with unprecedented accuracy, refining the parallax determination of all the stars in the Hipparcos catalogue and dramatically increasing the number of accurately known parallaxes. The first release of the Gaia catalogue is foreseen by the end of summer 2016, and it will contain positions and G magnitudes for all single objects with good astrometric behaviour. In order to benefit from the possibilities offered by the upcoming Gaia data, we calculated new AGB models with solar metallicity, completing our library, so far limited to models of sub-solar chemical composition. The main goal of the present work is to explore the possibilities, offered by comparison with observations, of further constraining some of the still poorly known phenomena affecting this class of objects. This task is essential in order to assess the role played by AGB stars in the various contexts discussed earlier in the section.\\ To this aim, after presenting the main physical and chemical properties of the solar-chemistry AGB models, we compare our theoretical results with a) the models available in the literature, to determine their degree of uncertainty and their robustness, and b) recent observations of Galactic AGB stars.
In some cases we also discuss how Gaia will help discriminate among the various possibilities still open at present.\\ The paper is structured as follows: the description of the input used to build the evolutionary sequences is given in section 2; in section 3 we present an overall review of the evolution through the AGB phase; the contamination of the interstellar medium by the gas ejected from these stars is discussed in section 4; section 5 presents a detailed comparison with two of the most widely used sets of models available in the literature; in section 6 we test our models against the chemical composition of samples of Galactic AGB stars; the conclusions are given in section 7.
We present solar metallicity models of the AGB phase of stars with masses in the range $1~\Msun < M < 8~\Msun$. This investigation integrates previous explorations by our group, focused on sub-solar chemistries. The main physical and chemical properties of AGB stars are extremely sensitive to the stellar mass. A threshold mass $M \sim 3-3.5~\Msun$ separates two distinct behaviours. The chemical composition of stars of mass $M \leq 3~\Msun$ is altered by the TDU mechanism, which favours a gradual increase in the surface carbon content. We find that stars with masses in the range $1.5~\Msun \leq M \leq 3~\Msun$ become carbon stars during the AGB phase. Once the C-star stage is reached, the consumption of the envelope is accelerated by the expansion of the external regions and by the effects of radiation pressure acting on the carbonaceous dust particles in the circumstellar envelope. These effects prevent further significant enrichment in surface carbon, keeping the C/O ratio below $\sim 1.5$. The gas ejected by these stars is enriched in carbon and nitrogen by a factor $\sim 3$ compared to the material from which the stars formed. The luminosities of carbon stars fall in the range $8\times 10^3L_{\odot} < L < 1.2\times 10^{4}L_{\odot}$. Stars of mass $M > 3~\Msun$ experience HBB at the bottom of the convective envelope. The strength of the HBB increases with the mass of the star. The pollution from these stars reflects the equilibrium abundances of the HBB nucleosynthesis experienced. On general grounds, we expect carbon-poor and nitrogen-rich ejecta, owing to CN cycling. In stars of mass above $\sim 5~\Msun$ the HBB temperatures are sufficiently high to activate full CNO and Ne-Na nucleosynthesis: the gas expelled by these stars is enriched in sodium, whereas the oxygen content is smaller than it was when the star formed. These stars are expected to evolve as lithium-rich sources for a significant fraction of the AGB phase.
The comparison with results in the literature outlines some similarities but also significant differences, particularly as regards the strength of the HBB experienced, and thus the luminosities at which these stars evolve and the kind of pollution expected. The carbon, nitrogen and sodium contents of stars of mass above $3~\Msun$ are extremely different from the results of other research teams, stressing the importance of confirmation from observations. We compare the models presented here with the CNO elemental and isotopic abundances in different types of Galactic AGB stars as estimated from observational data at very different wavelengths (from the optical to the radio domain); this part of the research has the double aim of adding robustness to the present results and of characterising the stars observed, in terms of the mass and age of the progenitors. The comparison with the observations is hampered by the unknown distances of the sources discussed.
{Some models for the topology of the magnetic field in sunspot penumbrae predict the existence of field-free or dynamically weak-field regions in the deep photosphere.}{To confirm or rule out the existence of weak-field regions in the deepest photospheric layers of the penumbra.}{The magnetic field at $\log\tau_5=0$ is investigated by means of inversions of spectropolarimetric data of two different sunspots located very close to disk center, with a spatial resolution of approximately 0.4-0.45\arcsec. The data have been recorded using the GRIS instrument attached to the 1.5-meter GREGOR solar telescope at El Teide observatory. They include three Fe \textsc{i} lines around 1565 nm, whose sensitivity to the magnetic field peaks half a pressure scale height deeper than the sensitivity of the widely used Fe \textsc{i} spectral line pair at 630 nm. Prior to the inversion, the data are corrected for the effects of scattered light using a deconvolution method with several point spread functions.}{At $\log\tau_5=0$ we find no evidence for the existence of regions with dynamically weak ($B<500$~Gauss) magnetic fields in sunspot penumbrae. This result is much more reliable than those of previous investigations done with Fe \textsc{i} lines at 630 nm. Moreover, the result is independent of the number of nodes employed in the inversion, of the point spread function used to deconvolve the data, and of the amount of straylight (i.e. wide-angle scattered light) considered.}{} \titlerunning{Deep-probing of sunspot penumbra: no evidence for field-free gaps} \authorrunning{Borrero et al.}
\label{section:intro} The last decade has witnessed an unprecedented advance in our knowledge of sunspot penumbrae. Owing to the improvement in instrumentation, data analysis methods, and the realism of numerical simulations, a unified picture of the topology of the penumbral magnetic and velocity fields has begun to emerge. The foundations of this picture rest on the so-called \emph{spine/intraspine} structure of the sunspot penumbra, first mentioned by \citet{lites1993pen}, whereby regions of strong and somewhat vertical magnetic fields (i.e. spines) alternate horizontally with regions of weaker and more inclined field lines that harbor the Evershed flow (i.e. intraspines). At low spatial resolution ($\approx 1\arcsec$) the intraspines are identified with penumbral filaments. At the same time, \citet{solanki1993pen} established that these two distinct components also interlace vertically, thereby explaining the asymmetries in the observed circular polarization profiles (Stokes $V$). It was later found that the vertical and horizontal interlacing of these two components implies that the magnetic field in the spines wraps around the intraspines \citep{borrero2008pen}, with the latter remaining unchanged at all radial distances from the sunspot's center \citep{borrero2005pen,borrero2006pen,tiwari2013decon} and the former being nothing but the extension of the umbral field into the penumbra \citep{tiwari2015decon}. It has also been confirmed that the Evershed flow can reach supersonic and super-Alfv\'enic values, not only in the outer penumbra \citep{borrero2005pen,vannoort2013decon} but also close to the umbra \citep{deltoro2001pen,bellot2004pen}, and has a strong upflowing component at the inner penumbra that turns into a downflowing component at larger radial distances \citep{franz2009pen,franz2013pen,tiwari2013decon}.
Finally, there is strong evidence for an additional component of the velocity field in intraspines, which appears as convective upflows along the center of the intraspines that turn into downflows at the filaments' edges \citep{zakharov2008pen,joshi2011pen, scharmer2011pen,tiwari2013decon}. These downflows seem capable of dragging the magnetic field lines and turning them back into the solar surface \citep{basilio2013pen,scharmer2013pen}. In spite of this emerging unified picture, a number of controversies persist. One of them pertains to the strength of the convective upflows/downflows at the intraspines' centers/edges. \citet{tiwari2013decon} and \citet{pozuelo2015pen} found an average speed for this convective velocity pattern of about 200 m\,s$^{-1}$. Although these flows are ubiquitous, their strength does not seem capable of sustaining the radiative cooling of the penumbra, whose brightness amounts to about 70\,\% of the quiet-Sun value. However, \citet{scharmer2013pen} find an rms convective velocity of 1.2 km\,s$^{-1}$ at the intraspines' centers/edges, hence strong enough to explain the penumbral brightness. The latter result agrees well with numerical simulations of sunspot penumbrae \citep{rempel2012mhd}. On the other hand, the pattern of upflows/downflows at the heads/tails, respectively, of the penumbral intraspines is easily discernible \citep{franz2009pen,ichimoto2010pen, franz2013pen} and harbors plasma flows of several km\,s$^{-1}$, albeit occupying only a small fraction of the penumbral area. Which of these two convective modes accounts for the energy transfer in the penumbra is unclear from an observational point of view, although the scale is starting to tip in favor of the former. Another remaining controversy concerns the strength of the magnetic field inside intraspines, where convection takes place. \citet{scharmer2006gap} and \citet{spruit2006gap} originally proposed that they would be field-free, thereby coining the term \emph{field-free gap}.
However, most observational evidence points towards a magnetic field strength of at least 1 kG \citep{borrero2008pen,borrero2010pen,puschmann2010pen,tiwari2013decon,tiwari2015decon}. Three-dimensional magnetohydrodynamic simulations of penumbral fine structure also yield magnetic field values of the order of 1--1.5 kG inside penumbral intraspines \citep{rempel2012mhd}, irrespective of the boundary conditions and grid resolution. \citet{spruit2010pen} interpreted the striations seen perpendicular to the penumbral filaments in high-resolution continuum images as a consequence of the fluting instability, and established an upper limit of $B \le 300$ Gauss for the magnetic field inside intraspines. This redefines \emph{field-free} to mean instead \emph{dynamically weak} magnetic fields, for which the magnetic pressure is smaller than the kinematic pressure. We note however that this interpretation has been challenged by \citet{barthi2012pen}, who argued that the same striations can be produced by the sideways swaying motions of the intraspines even if these harbor strong magnetic fields ($B \ge 1000$ Gauss). The limited observational evidence in favor of strong convective motions perpendicular to the penumbral filaments, and the almost complete lack of evidence for weak magnetic fields in penumbral intraspines, have traditionally been ascribed to: {\bf (a)} the insufficient spatial resolution of the spectropolarimetric observations \citep[see Sect.~3.2 in][]{scharmer2012pen}; {\bf (b)} the smearing effects of straylight that are incorrectly dealt with by two-component inversions employing variable filling factors \citep[see Sect.~2.2 in][]{scharmer2013pen}; and {\bf (c)} the impossibility of probing layers located deep enough to detect them \citep[see Sect.~5.4 in][]{spruit2010pen}. In this work we will address these issues by employing spectropolarimetric observations of the Fe \textsc{i} spectral lines at 1565 nm recorded with the GRIS instrument at the GREGOR telescope.
The spatial resolution is comparable to that of the Hinode/SP instrument and 2.5 times better than that of previous investigations carried out with these spectral lines. In addition, the lines observed by GRIS are much more sensitive to magnetic fields at the continuum-forming layer (i.e. $\log\tau_5=0$) than their counterparts at 630 nm. Finally, we will account for the straylight within the instrument by performing a deconvolution of the observations employing principal component analysis (PCA) and different point spread functions (PSFs). We expect that with these new data and analysis techniques we will be able to settle, in either direction, the dispute about the strength of the magnetic field in penumbral intraspines (i.e. filaments). A study of the convective velocity field will be presented elsewhere.
\label{section:conclu} We have studied the magnetic field topology in the penumbra of two sunspots at the deepest layers of the solar photosphere. This was done through the inversion of the radiative transfer equation applied to spectropolarimetric data (i.e. the full Stokes vector $\ve{I}$) of three Fe \textsc{i} spectral lines at 1565 nm in order to retrieve the magnetic field $\ve{B}$. The estimated spatial resolution of the data employed in this work is 0.4-0.45\arcsec\ and the noise level is $10^{-3}$. Moreover, the observed spectral lines are better suited to studying the magnetic field in the deep photosphere than the widely used Fe \textsc{i} spectral lines at 630 nm because, besides the Zeeman splitting being about three times larger than in the lines at 630 nm, the lines at 1565 nm convey information from deeper photospheric layers. In order to account for the degradation of the data due to straylight (i.e. wide-angle scattered light) within the instrument, we have applied, prior to the inversion, a PCA deconvolution method using an empirical point spread function. Our results show no evidence for the presence of weak-field regions ($B<500$~Gauss), let alone of dynamically weak fields \citep{spruit2010pen} or field-free regions \citep{scharmer2006gap,spruit2006gap}, in the deepest regions of the photosphere ($\log\tau_5=0$). This agrees with previous observational results, in particular with \citet{borrero2008penb,borrero2010pen,tiwari2013decon}, and with three-dimensional MHD simulations of sunspot fine structure \citep{rempel2009mhd,nordlund2010mhd,rempel2011mhd,rempel2012mhd}. These results are independent of the amount of straylight used in the PSF and of the inversion set-up (i.e. number of nodes). None of the aforementioned works can rule out the existence of field-free plasma deep beneath the sunspot. Indeed it is perfectly plausible that at some point underneath the sunspot \citep[i.e.
under the magneto-pause;][]{jahn1994sun} normal field-free convection resumes. The question is whether this happens sufficiently close to $\log\tau_5=0$ so as to explain the penumbral brightness. This is precisely what our work rules out. On the other hand, the amount of flux returning back into the solar surface ($B_z<0$) within the penumbra is very dependent on the amount of straylight considered and, consequently, we shall refrain from drawing conclusions at this point. In summary, we have addressed all major concerns raised by \citet{spruit2010pen}, \cite{scharmer2012pen}, and \citet{scharmer2013pen}: we have used high-spatial-resolution spectropolarimetric observations (indeed the highest resolution ever achieved at 1565 nm) \citep{scharmer2012pen} that convey very reliable information about the magnetic field in the deep photosphere \citep{spruit2010pen}. We have also deconvolved the data with several empirically determined PSFs, thereby removing the need for the so-called non-magnetic filling factor \citep{scharmer2013pen}. In all cases, no traces of regions where $B \le 500$ Gauss have been found at $\log\tau_5=0$.
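The statement above, that the Zeeman splitting of the 1565 nm lines is about three times larger than that of the 630 nm pair, can be verified with a back-of-the-envelope estimate. The effective Landé factors used here ($g_{\rm eff}=3$ for Fe \textsc{i} 15648.5~\AA\ and $g_{\rm eff}=2.5$ for Fe \textsc{i} 6302.5~\AA) are the commonly quoted values, assumed rather than taken from the text:

```latex
% Zeeman splitting in wavelength units (lambda in Angstrom, B in Gauss):
\Delta\lambda_B = 4.67\times10^{-13}\, g_{\rm eff}\, \lambda^{2} B .
% The magnetic sensitivity is set by the splitting relative to the Doppler
% width, which itself grows linearly with wavelength:
\frac{\Delta\lambda_B}{\Delta\lambda_D} \propto g_{\rm eff}\,\lambda
\quad\Longrightarrow\quad
\frac{(g_{\rm eff}\,\lambda)_{15648}}{(g_{\rm eff}\,\lambda)_{6302}}
 = \frac{3\times 15648}{2.5\times 6302} \approx 3 .
```

The quadratic wavelength scaling of the splitting, partially offset by the linear scaling of the Doppler width, is what makes the infrared lines the better probe of weak fields at $\log\tau_5=0$.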
Large-scale gaseous filaments with lengths up to the order of 100 pc are on the upper end of the filamentary hierarchy of the Galactic interstellar medium. Their association with Galactic structure and their role in Galactic star formation are of great interest from both observational and theoretical points of view. Previous ``by-eye'' searches, combined together, have started to uncover the Galactic distribution of large filaments, yet inherent bias and small sample sizes prevent conclusive statistical results from being drawn. Here, we present (1) a new, automated method to identify large-scale velocity-coherent dense filaments, and (2) the first statistics and the Galactic distribution of these filaments. We use a customized minimum spanning tree algorithm to identify filaments by connecting voxels in the position-position-velocity space, using the Bolocam Galactic Plane Survey spectroscopic catalog. In the range of $7.^{\circ}5 \le l \le 194^{\circ}$, we have identified 54 large-scale filaments and derived their mass ($\sim 10^3 - 10^5$\msun), length (10--276 pc), linear mass density (54--8625 \msun\,pc$^{-1}$), aspect ratio, linearity, velocity gradient, temperature, fragmentation, Galactic location, and orientation angle. The filaments concentrate along major spiral arms. They are widely distributed across the Galactic disk, with 50\% located within $\pm$20 pc of the Galactic mid-plane and 27\% running in the centers of spiral arms. On the order of 1\% of the molecular ISM is confined in large filaments. Massive star formation is more favored in large filaments than elsewhere. This is the first comprehensive catalog of large filaments, useful for quantitative comparison with spiral structures and numerical simulations.
\label{sec:intro} The interstellar medium (ISM) has a highly filamentary and hierarchical structure. On the upper end of this filamentary hierarchy are large-scale gaseous filaments with lengths up to the order of 100 pc. What is their distribution in our Galaxy and what role do they play in the context of Galactic star formation? Answers to these questions are important for a critical comparison with theoretical studies and numerical simulations of galaxy formation and filamentary cloud formation. The observational key to answering these questions is a homogeneous sample of large filaments across the Galaxy identified in a uniform way. Studies in the past years have revealed a number of large filaments with a wide range of aspect ratios and morphologies, from linear filaments to collections of cloud complexes. \cite{Goodman2014} find that the 80 pc long infrared dark cloud (IRDC) ``\object{Nessie}'' \citep{Jackson2010} in the southern sky can be traced up to 430 pc in position-position-velocity (PPV) space in $^{12}$CO (1--0), guided by connecting the IR-dark patches presumably caused by high column density regions obscuring the otherwise smooth IR background emission from the Galactic plane. They argue that Nessie runs in the center of the Scutum-Centaurus spiral arm in PPV space, and is hence termed a ``bone'' of the Milky Way. In a follow-up study, \cite{Zucker2015} searched the region covered by the MIPSGAL \citep[\spt/MIPS Galactic Plane Survey, $|l|<62^\circ, \, |b|<1^\circ$;][]{MIPSGAL}, focusing on the PPV loci of arms expected by various spiral arm models, and found 10 bone candidates with lengths of 13--52 pc and aspect ratios of 25--150. \cite{Ragan2014-GFL} and \cite{Abreu2016} extend this ``mid-IR extinction'' method to a blind search, i.e., not restricted to arm loci but covering the full extent of the observed PPV space.
They find 7 and 9 filaments with lengths of 38--234 pc in part of the first and fourth Galactic quadrants covered by the GRS (Galactic Ring Survey; \citealt{Jackson2006-GRS}) and the ThrUMMS (Three-mm Ultimate Mopra Milky Way Survey; \citealt{Barnes2015-ThrUMMS}), respectively. The aspect ratios of those filaments are not well defined due to their complex morphology, but judging from the figures in the papers, the typical aspect ratio is much less than 10. In contrast to the indirect\footnote{Indirect because ``IR-dark'' does not necessarily correspond to a dense cloud; it can be caused by a real ``hole in the sky'' \citep{Jackson2008,Wilcock2012}.} ``mid-IR extinction'' method, \cite{me15} identify large filaments directly based on emission at far-IR wavelengths near the spectral energy distribution (SED) peak of cold filaments. They develop a Fourier transform filter to separate high-contrast filaments from the low-contrast background/foreground emission. Fitting the SED built up from the multi-wavelength \her\ data from the Hi-GAL survey \citep{Hi-GAL}, they derive temperature and column density maps, and have used those maps to select the ``largest, coldest and densest'' filaments. They present 9 filaments with lengths of 37--99 pc and aspect ratios of 19--80, identified primarily from the GRS field. These systematic searches have started to uncover the spatial distribution of large filaments in our Galaxy, revealing filaments within and outside major spiral arms. However, with different searching methods and selection criteria, in addition to inherent bias from manual inspection, it is difficult to cross-compare the results from these studies. The small sample size also limits the robustness of statistical attempts \citep[e.g., see discussion in][]{me15}.
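The temperature and column-density estimation step described above can be illustrated with a single-pixel greybody fit. The emissivity index $\beta=1.8$, the opacity normalization, and the band set below are our own illustrative assumptions, not the actual fitting pipeline of \cite{me15}:

```python
# Fit a modified blackbody (greybody) to a far-IR SED, as used to derive
# dust temperature and optical depth (a proxy for column density).
import numpy as np
from scipy.optimize import curve_fit

H = 6.626e-27   # Planck constant [erg s]
K = 1.381e-16   # Boltzmann constant [erg/K]
C = 2.998e10    # speed of light [cm/s]

def greybody(nu_hz, log_tau0, temp_k, beta=1.8, nu0_hz=1.0e12):
    """Optically thin greybody: tau(nu) * B_nu(T), rescaled to order unity."""
    tau = 10.0 ** log_tau0 * (nu_hz / nu0_hz) ** beta
    bnu = 2.0 * H * nu_hz**3 / C**2 / np.expm1(H * nu_hz / (K * temp_k))
    return 1.0e15 * tau * bnu  # arbitrary flux units

# Synthetic Hi-GAL-like bands: 160, 250, 350, 500 micron
wav_um = np.array([160.0, 250.0, 350.0, 500.0])
nu = C / (wav_um * 1.0e-4)                 # micron -> cm -> Hz
flux = greybody(nu, -3.0, 20.0)            # "observed" SED of 20 K dust
# curve_fit varies only (log_tau0, temp_k); beta and nu0 keep their defaults
popt, _ = curve_fit(greybody, nu, flux, p0=(-2.0, 15.0))
```

For this noiseless synthetic SED the fit recovers the input temperature and opacity; in a real map the same fit is run per pixel and the optical depth is converted to a column density with an assumed dust opacity law.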
All the above-mentioned searches start from a ``by-eye'' inspection of dust features (either mid-IR extinction or far-IR emission), identify candidate filaments, and then verify the coherence in radial velocity using gas tracers --- spectral line data. We automate the identification process by applying a customized minimum spanning tree algorithm in PPV space. We present the first homogeneous sample of 54 large-scale velocity-coherent filaments in the range of $7.^{\circ}5 \le l \le 194^{\circ}$ (see the exact coverage in \autoref{sec:data}). We derive mass, length, linearity, aspect ratio, velocity gradient and dispersion, temperature, column/volume density, fragmentation, Galactic location, and orientation angle. For the first time, we are able to investigate the Galactic distribution of their physical properties, and to estimate the fraction of the ISM confined in large filaments and the star formation therein. We describe the data set in \autoref{sec:data} and present our identification method in \autoref{sec:method}. The identified sample of filaments and their physical properties and statistics are presented in \autoref{sec:para}, followed by a discussion of the nature and implications of the filaments in \autoref{sec:discuss}. The main conclusions are summarized in \autoref{sec:sum}. Following the spirit of \cite{me15}, we focus on the densest filaments traced by millimeter dust continuum emission, and not the more diffuse CO filaments.
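A minimal sketch of such a PPV minimum-spanning-tree grouping is given below (in Python, using SciPy). The velocity-to-distance scaling, the edge-length cut, and the minimum group size are placeholder values for illustration, not the criteria actually adopted in \autoref{sec:method}:

```python
# Sketch of an MST-based filament finder in PPV space.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def find_filaments(x_pc, y_pc, v_kms, v_scale=2.0, max_edge_pc=5.0, min_size=5):
    """Group clumps into velocity-coherent structures.

    v_scale [pc per km/s] converts velocity offsets into an equivalent
    spatial separation, so velocity coherence enters the edge weights.
    """
    points = np.column_stack([x_pc, y_pc, v_scale * np.asarray(v_kms)])
    dist = squareform(pdist(points))          # dense pairwise distances
    mst = minimum_spanning_tree(dist).toarray()
    mst[mst > max_edge_pc] = 0.0              # cut edges longer than threshold
    n, labels = connected_components(mst, directed=False)
    # keep only groups with at least min_size members
    keep = [g for g in range(n) if np.sum(labels == g) >= min_size]
    return [np.where(labels == g)[0] for g in keep]

# Toy data: a 40 pc chain of clumps with a smooth velocity gradient,
# plus two isolated clumps far away in position and velocity.
x = np.linspace(0.0, 40.0, 15)
clump_x = np.concatenate([x, [80.0, 120.0]])
clump_y = np.concatenate([0.1 * np.sin(x / 5.0), [5.0, -5.0]])
clump_v = np.concatenate([0.05 * x, [30.0, -20.0]])
groups = find_filaments(clump_x, clump_y, clump_v)
# the 15-clump chain survives as one group; the isolated clumps are dropped
```

Cutting the long MST edges is what enforces contiguity, and the velocity scaling is what makes the grouping "velocity-coherent": a clump at the right sky position but the wrong velocity ends up far away in the 3D metric.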
\label{sec:discuss} \subsection{Comparison to previously known filaments} \label{sec:knownFL} Our MST algorithm finds some filaments previously identified by other methods. Our search field (\autoref{sec:data}) partially overlaps with the previous searches by \cite{Ragan2014-GFL}, \cite{me15}, and \cite{Zucker2015}. Of the 9 cold and dense prominent filaments presented by \cite{me15}, 7 are in our search field, 3 of which are identified by MST (F7, F25, and F41 correspond to G11, G24, and G49, respectively). The others are not identified because of a lack of dense BGPS sources (G29, G47, and G64) or too large a disruption in velocity space due to active star formation (G28). Among the 10 ``bone'' candidates presented by \cite{Zucker2015}, 6 are in our search field, 2 of which have dense BGPS sources: BC011.13--0.12 and BC024.95--0.17. The former corresponds to F7 (the Snake), and the latter is not identified by MST because of too large a velocity disruption. In addition, our MST filament F28 is visible in their Figure 13 and seems to fulfill all their criteria, but was not identified by \cite{Zucker2015}. Among the 7 giant molecular filaments presented by \cite{Ragan2014-GFL}, 6 are partially covered in our field (``partially'' because most of those filaments extend beyond our coverage in $|b|$). F36 is a small dense part of GMF38.1-32.4a, but note that \cite{Ragan2014-GFL} used a kinematic distance of 3.3--3.7 kpc, a factor of 2 larger than the ML distance of 1.7 kpc. F19 and F38 fall within the positional coverage of GMF20.0-17.9 and GMF41.0-41.3, respectively, but outside the velocity ranges. In addition, F33 is the dense part of the ``massive molecular filament'' G32.02+0.06 presented by \cite{Battersby2012-FL} in a case study. F13 is part of the IRDC G14.225-0.506 \citep{Busquet2013}. F31 runs across a well-studied IRDC, \object{\ga}, also known as the ``Dragon'' nebula \citep{me11,me12,my-Springer-sum}.
The IRDC is the IR-dark and submm-bright arc, bent towards the bottom of the panel in \autoref{fig:rgb}. The MST filament F31 is a new filament that runs across the IRDC at P1, where a proto-cluster is forming \citep{me11,me12,qz15}. Interestingly, the clump-scale magnetic fields \citep{me12} are aligned with F31. At scales of the order of 10 pc, magnetic fields may be shaped by gravity, while on smaller scales (within 1 pc, or clump scale), the magnetic fields control the formation of a secondary filament, as interpreted in \cite{me12}. The secondary filament is a small part of F31. Dust polarization observations of these filaments are needed to further investigate the role of magnetic fields in the formation and evolution of these filaments. In summary, our MST method successfully finds previously known filaments where the criteria are satisfied. An important difference between the MST-identified filaments and others is that the former contain dense clumps over their \textit{whole} extent, while this is not the case for previously ``by-eye'' identified large filaments (``gaps'' in velocity space are allowed). The MST method also finds filaments embedded in a crowded PPV space, which are difficult to isolate by eye (e.g., F31). It is noteworthy that, because of the filamentary and hierarchical nature of the ISM, one can find an arbitrary number of filaments in the same data set using different criteria. Therefore, when presenting a filament sample, it is \textit{equally important} to explicitly list the criteria used to define filaments. For the same reason, when comparing different samples of filaments one has to note the differences in criteria, otherwise the comparison is misleading. \subsection{Completeness and bias} \label{sec:bias} The 54 filaments form the first comprehensive sample of large-scale velocity-coherent gas structures in the northern Galactic plane covered by the BGPS spectroscopic survey.
The homogeneous sample allows us to investigate statistical trends (\autoref{sec:stat}) for the first time. With lengths in the range of 10--276 pc and average column densities above $10^{21}$ \cms, these filaments are among the densest and largest structures observed in the Galaxy, and provide excellent tracers of Galactic structure and kinematics \citep[e.g.][]{Englmaier1999,Dame2001,Dobbs2012,Reid2014,Vallee2016,Smith2016}. The completeness of our filament sample largely depends on the data we use. The BGPS continuum catalog is 98\% complete at the 0.4 Jy level \citep{Rosolowsky2010_BGPS2}. The spectroscopic catalog \citep{Shirley2013} contains 50\% of the sources in the survey coverage with dense gas lines detected. As our identification used the spectroscopic catalog, the results are biased towards dense clumps. This is evident in the high average column density and high linear mass density. However, we emphasize that this is indeed our goal --- we are interested in the most prominent dense filaments. On the other hand, given the location of our Sun in the Galaxy, even homogeneous surveys like BGPS or ATLASGAL are biased towards structures (a) closer to the Sun and (b) on the same side of the mid-plane as the Sun (\autoref{fig:hist}; \citealt{ATLASGAL}, but see discussion in \citealt{Rosolowsky2010_BGPS2}). As mentioned in \autoref{sec:stat}, the large filaments show less bias than BGPS sources in $z$, but the distribution of $z$ is indeed not perfectly symmetric. Should we mirror the distribution at $z>0$ to $z<0$, the total number of filaments would increase to 74. That is, 27\% of the filaments may be missed due to this effect. Although its completeness is much improved compared to previous methods, the MST approach cannot find all large filaments in our Galaxy. One of the main strengths of this method is its repeatability compared to manual approaches.
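The mirror-symmetry correction quoted above works out as follows (the per-side split is implied by the quoted totals rather than stated explicitly in the text):

```python
# Completeness correction: mirror the z > 0 half of the distribution.
n_observed = 54            # filaments actually identified
n_mirrored = 74            # quoted total after mirroring z > 0 onto z < 0
n_sun_side = n_mirrored // 2          # implied count at z > 0: 37
n_far_side = n_observed - n_sun_side  # implied count at z < 0: 17
n_missed = n_mirrored - n_observed    # filaments presumably missed: 20
missed_fraction = n_missed / n_mirrored   # ~0.27, i.e. the quoted 27%
```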
\begin{figure} \centering \includegraphics[width=.45\textwidth,angle=0]{arm.pdf} \includegraphics[width=.45\textwidth,angle=0]{arm2.pdf}\\ \includegraphics[width=.43\textwidth,angle=0]{faceon.pdf} \caption{ The Galactic distribution of the filaments. \textit{Upper:} The longitude-velocity plot showing the spiral arm segments derived from maser parallaxes (\citealt{Reid2014}; Reid et al. 2015, private communication). For simplicity, only the related arms (Scutum, Sagittarius, Norma, Local, Perseus, and Outer) are plotted. The color-shaded segments are of $\pm 5$ \kms\ width with respect to the arm centers. The 54 filaments and 13 bones are labeled. The grey-shaded horizontal strips along the x axis depict the searched longitude ranges. \textit{Middle:} A zoom-in of the upper panel for clarity. \textit{Lower:} A ``face-on'' view from the northern Galactic pole. The arm widths (170--630 pc) are from \cite{Reid2014} except for Norma, whose width is not available and for which we plot a 200 pc width for reference. The solar symbol $\odot$ is plotted at (0, 8.34) kpc. } \label{fig:arm} \end{figure} \subsection{Galactic distribution and number of large filaments and ``bones'' in the Galaxy} \label{sec:Gal-dist-bones} Most of the filaments are associated with major spiral arms (\autoref{fig:arm}), consistent with the observations by \cite{me15}. Many of them concentrate along the longitude-velocity tracks of the Scutum, Sagittarius and Norma arms; a few are associated with the Local arm and the Perseus arm, and one with the Outer arm\footnote{In the $l-v$ view (\autoref{fig:arm}), most filaments follow the spiral arms, while the association is less evident in the face-on view. This originates from the difference in the distance determination methods for the arm segments (parallax measurements) and the filaments (mainly kinematic distances). The same is seen in e.g. \cite{Abreu2016}.}.
Only a small fraction (11/54, or 20\%) of the filaments are not within $\pm 5$ \kms\ of any arm structure; these are analogs of the ``spurs'' observed in other galaxies. How many large filaments exist in our Galaxy? Using our method, we have identified 48 filaments in the contiguous coverage of $7.5^{\circ} \le l \le 90.5^{\circ}$. It is reasonable to estimate a similar number of filaments in the fourth quadrant. In the outer Galaxy, the BGPS survey is targeted at several star formation regions, therefore the 6 identified filaments provide a strict lower limit. Taking all this into account, and correcting for the bias as discussed in \autoref{sec:bias}, we estimate that there are about 200 velocity-coherent filaments longer than 10 pc and with a global column density above $10^{21}$ \cms, similar to the filaments presented in this study. How many filaments lie in the centers of spiral arms and thus sketch out the ``bones'' of the Milky Way? \cite{Goodman2014} argued that the long and skinny IRDC ``Nessie'' lies in the center of the Scutum-Centaurus spiral arm in the $(l,v)$ space, and within $z = \pm 20$ pc of the physical mid-plane. Following \cite{Goodman2014} and \cite{Zucker2015}, our criteria for a ``bone'', on top of our large-filament criteria (1--5) (\autoref{sec:method}), are: \begin{enumerate} \item[(6)] Lies in the very center of the physical Galactic mid-plane, with $|z| \le 20$ pc. \item[(7)] Runs almost parallel to the arms in the projected sky, with $|\theta| \le 30^{\circ}$. \item[(8)] The flux-weighted LSR velocity $v_{\rm wt}$ is within $\pm 5$ \kms\ of spiral arms. \end{enumerate} However, the exact structure and position of the spiral arms in our Galaxy are not well established. Diverse models have been derived from a variety of data, ranging from atomic, molecular, and ionized gas to stars and pulsars \citep[e.g.][]{Reid2014-AR,Hou2014-arm,Vallee2015,Vallee2016}. Here we have adopted the spiral segments derived from maser parallaxes (\citealt{Reid2014}; Reid et al.
2015, private communication), which have well-constrained distances. In \autoref{fig:arm} we superpose the filaments on the spiral segments. Among the 54 filaments, 27 fulfill criteria (1--6), 21 fulfill criteria (1--7), and 13 of them also fulfill criterion (8). These 13 filaments (F2, F3, F7, F10, F13, F14, F15, F18, F28, F29, F37, F38, and F48) are ``bones'' according to our definition. Our criteria for a bone are stricter than those of \cite{Zucker2015} in terms of velocity coherence and mean column density. When compared to other filaments, bones do not stand out in mass, length (\autoref{fig:hist}(a--b)), column/volume density, or temperature. All 13 bones are located in the first quadrant (which is not surprising given our search field, see \autoref{sec:data}), making up 27\% of the 48 filaments in the same region covered by the blind BGPS survey. Obviously, owing to the disagreement among the many spiral arm models, adopting a different model will lead to different ``bones''; however, the fraction of bones among filaments should not change dramatically for any reasonable spiral model. \subsection{Fraction of ISM confined in large filaments} \label{sec:fractionISM} Given the importance of filamentary geometry in enhancing massive clustered star formation, it is of great interest to quantify the fraction of the ISM contained in large filaments and to evaluate the star formation activities therein. To address this question we consider only the range of $7.5^{\circ} \le l \le 90.5^{\circ}$ where the BGPS and its spectroscopic follow-up are contiguous. In this field there are 5841 BGPS v1 sources, 2893 of which have \hcop or \NTH (3--2) detections; we term these ``dense BGPS sources''. We identified 48 filaments in this field, which comprise 512 BGPS sources. That means 17.7\% (512/2893) of dense BGPS sources, or 8.8\% (512/5841) of all BGPS sources, are confined in large filaments.
If we count BGPS clumps in the bones only, 6.8\% of dense BGPS sources, or 3.4\% of all BGPS sources, are confined in bones. The compact 1.1 mm continuum emission of the BGPS sources outlines the dense, inner parts of the much larger and less dense envelopes of molecular clouds \citep{Dunham2011-BGPS-NH3}. Assuming a dense gas mass fraction of 10--20\% \citep[cf.][]{Ragan2014-GFL,Ginsburg2015-W51}, we infer that on the order of 1\% of the molecular ISM is confined in large filaments, with about 1/3 of this amount confined in bones, which mark spiral arm centers. \begin{figure*} \centering \includegraphics[width=.46\textwidth,angle=0]{frag.pdf} \includegraphics[width=.52\textwidth,angle=0]{sf.pdf} \caption{ \textit{Left:} Mean clump mass versus mean edge length for the 54 filaments, as compared to IRDCs and theoretical predictions. The magenta line is \textit{not} a fit to the data points; instead, it depicts cylindrical fragmentation appropriate for the filaments (see text). In comparison, the resolved fragmentation of IRDCs \citep[data from][]{me11,me12,me14,qz11} is consistent with turbulent Jeans fragmentation. The green line, orange line, and associated shaded regions correspond to a range of density and temperature appropriate for IRDCs \citep{me14}. \textit{Right:} Clump mass versus radius for BGPS clumps, clumps in large filaments, and clumps in other velocity-coherent structures. The colored lines depict various empirical criteria for star formation: blue -- the \cite{Krumholz2008Nature} criterion for massive star formation; yellow -- the average of \cite{Heiderman2010} and \cite{Lada2010} for ``efficient'' star formation; red -- the \cite{Kauffmann2010} criterion for massive star formation with a correction of the adopted dust opacity law as in \cite{Dunham2011-BGPS-NH3}.
} \label{fig:frag} \end{figure*} \subsection{An apparent length limit of 100 pc and the longest filaments beyond this limit} \label{sec:100pc} \cite{me15} pointed out an apparent upper limit of 100 pc on the projected length of the longest filaments in their study, which was designed to find cold and dense filaments based on \her\ far-IR emission. In this study, we use a different approach that does not limit the temperature. Except for the extremely long filament F5, all other filaments are indeed shorter than 100 pc. Interestingly, this limit is also seen in \cite{Zucker2015} despite the different search method. The 100 pc limit seems to be present in filaments with a global column density above the order of $10^{21} - 10^{22}$ \cms. Relaxing the column density cut to $<10^{20}$ \cms, longer filaments start to be picked up in $^{12}$CO/$^{13}$CO (1--0): the 430 pc ``optimistic Nessie'' \citep{Goodman2014}, the 500 pc ``wisp'' \citep{LiGX2013-FL500pc}, and a few filaments reported by \citet{Ragan2014-GFL} and \citet{Abreu2016}. However, those CO filaments have much smaller aspect ratios (typically $\ll$10, see figures in their papers), and the low-$J$ CO gas outlines the relatively diffuse envelopes of denser structures traced by MIR extinction. For example, one filament reported by \cite{Ragan2014-GFL} contains F36 as a small part (\autoref{sec:knownFL}). Whether 100 pc is a true limit for dense filaments warrants further study, and it provides a quantitative test case for numerical simulations \citep[e.g.][]{Falceta2015}. So far, the 276 pc long filament F5 (\autoref{fig:rgb}) is the only known dense ($>10^{21}$ \cms) filament longer than 100 pc. Compared to the above-mentioned extremely long CO filaments, F5 is at least 10 times denser in average column density, and it may also have less dense extensions similar to those of the 80 pc ``classic Nessie''. More systematic searches and comparison to numerical simulations can resolve the true length limit of the longest filaments.
We emphasize that average column density is a crucial parameter in defining the boundary of filaments and thus the length and aspect ratio. It is also worth noting that our filaments, as defined by a collection of dense BGPS clumps, form the centers of larger and less dense structures. The origin of large velocity-coherent filaments is still a mystery. Numerical simulations of the multiphase ISM in galactic disks demonstrate that the cold, dense gas component tends to organize itself naturally into a filamentary network \citep[e.g.][]{Tasker2009,Smith2014}. Spiral arms can sweep up and compress gas, generating bones \citep{Goodman2014}. Gravitationally unstable disk regions condense into gaseous rings and arcs \citep{Behrendt2015}. In differentially rotating disks, structures like molecular cloud complexes could be sheared into elongated filaments. Our calculations (Burkert et al. in prep) show that tidal effects of the Milky Way are too weak to affect the maximum length of filaments. This is consistent with our observations, in which filament lengths do not correlate with Galactocentric radius ($C_{\rm Pearson} = -0.07$). The maximum filament length of order 100 pc might instead be related to the timescale of $\tau_{\rm SF} = 10^7$ yr \citep[e.g.][]{Burkert2013} on which stars form in dense molecular gas and destroy their environment. Typical turbulent velocities on large scales in galactic disks are of order $\sigma = 10$ \kms\ \citep{Dib2006}. If $\sigma$ is the maximum velocity with which coherent filaments can grow, and if $\tau_{\rm SF}$ denotes the timescale on which they are destroyed again, their length is limited to $l = \sigma \tau_{\rm SF} \approx 100$ pc, in agreement with the observations. \subsection{Fragmentation of large-scale filaments and subsequent star formation} \label{sec:frag} By definition, the filaments presented in this study are in the form of a chain of dense clumps physically connected by less dense gas in between.
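As a quick sanity check on the $l = \sigma\,\tau_{\rm SF}$ estimate in the previous subsection, the arithmetic can be written out explicitly (our illustrative calculation; the numerical constants are standard unit conversions):

```python
# Check that sigma = 10 km/s acting over tau_SF = 10^7 yr spans ~100 pc.
KM = 1.0e3          # metres per kilometre
YR = 3.156e7        # seconds per year
PC = 3.086e16       # metres per parsec

sigma = 10.0 * KM    # large-scale turbulent velocity [m/s] (Dib et al. 2006)
tau_sf = 1.0e7 * YR  # star formation / destruction timescale [s]

l_max = sigma * tau_sf / PC  # maximum coherent filament length [pc]
print(f"l = sigma * tau_SF = {l_max:.0f} pc")  # ~100 pc
```

The product evaluates to roughly 100 pc, consistent with the apparent length limit discussed above.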
For linear filaments, this geometry resembles a fragmented ``cylinder'' with regularly spaced clumps formed under the ``sausage instability'' of self-gravity (e.g. F9 in \autoref{fig:rgb}). According to the framework of \cite{Chandra1953}, an isothermal gas cylinder becomes super-critical when its linear mass density exceeds the critical value $(M/L)_{\rm crit}$, and will fragment into a chain of equally spaced fragments with a spacing of $\lambda_\mathrm{cl}$, with each fragment having a mass of $M_\mathrm{cl} = (M/L)_{\rm crit} \times \lambda_\mathrm{cl}$. In short, the fragmentation is governed by the central density and pressure (thermal plus non-thermal). This framework has been followed by many authors \citep[e.g.][]{Ostriker1964_FL,Nagasawa1987_FL,Bastien1991_FL,Inutsuka1992_FL,Fischera2012_FL}. See \cite{me11,me14} for a useful derivation of the formulas. \autoref{fig:frag} (left panel) plots the mean clump mass of each filament against the mean separation between clumps (the mean length of the edges in the filament). The observed fragmentation is consistent with the theoretical prediction of cylindrical fragmentation assuming a central density of $1\times 10^4$ \cmc\ and a velocity dispersion of 0.4--2.2 \kms\ (magenta line). The spread of the data points around the prediction line may be due to a range of central densities and imperfect cylinder geometry. Recent numerical simulations have shown that geometric bending, which is often seen in observed filaments, can change the regularity of the spacing \citep{Gritschneder2016}, indicating that more theoretical work is required in order to understand the stability and dynamics of filaments. Dense clumps with a typical mass of $10^3$ \msun\ and a typical size of 1 pc (\autoref{fig:frag}, left panel) are, in general, capable of forming a cluster of stars. Statistically, dense clumps residing within large filaments are slightly denser than clumps elsewhere (see below).
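The fragmentation framework referenced above can be made concrete with a short numerical sketch (our illustration, not the authors' code): for a self-gravitating cylinder the critical line mass is $(M/L)_{\rm crit} = 2\sigma^2/G$, where $\sigma$ is the sound speed for purely thermal support or an effective velocity dispersion when non-thermal motions dominate, and the fragment mass follows from the observed clump spacing:

```python
# Critical line mass and fragment mass for cylindrical fragmentation.
G = 6.674e-11                 # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30              # kg
PC = 3.086e16                 # m
KMS = 1.0e3                   # m/s per km/s

def critical_line_mass(sigma_kms):
    """(M/L)_crit = 2 sigma^2 / G, returned in M_sun per pc.
    sigma_kms: effective velocity dispersion in km/s (the text quotes
    0.4--2.2 km/s for these filaments)."""
    ml_si = 2.0 * (sigma_kms * KMS) ** 2 / G   # kg per metre
    return ml_si * PC / M_SUN

def fragment_mass(sigma_kms, spacing_pc):
    """M_cl = (M/L)_crit * lambda_cl, in M_sun, for a clump spacing in pc."""
    return critical_line_mass(sigma_kms) * spacing_pc

# Illustrative numbers: sigma = 1 km/s gives (M/L)_crit ~ 465 M_sun/pc,
# so a 3 pc spacing implies clump masses of ~1.4e3 M_sun, comparable to
# the ~10^3 M_sun clumps in the left panel of the figure.
```

The quadratic dependence on $\sigma$ is why the 0.4--2.2 \kms\ range quoted in the text spans more than an order of magnitude in predicted clump mass.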
In \autoref{fig:frag} (right panel), we plot clump mass versus deconvolved radius (not all BGPS clumps have a valid radius, see \autoref{sec:para}) for three categories of BGPS clumps: I -- the 1710 clumps with well-determined distances from \cite{Ellsworth2015}; II -- the 294 clumps in velocity-coherent structures but not in large filaments; III -- the 469 clumps in large filaments. Categories I, II, and III have 41.0\%, 39.8\%, and 46.2\% of their clumps satisfying the \cite{Kauffmann2010} threshold for forming massive stars. Thus, categories I and II are indistinguishable, while in comparison, category III is slightly more favorable for massive star formation. If we count by mass instead of by number of clumps, the fractions are 79.2\% for BGPS sources, 86.3\% for velocity-coherent structures that are not large filaments, and 91.0\% for large filaments. Surprisingly, bones do not show a higher fraction than large filaments, whether counted by number or by mass. This indicates that the local environment, such as a velocity-coherent filament, plays a role in enhancing massive star formation. Filaments, in particular, provide a preferred geometry to channel mass flows that can inhomogeneously feed star-forming clumps \citep[e.g.][]{Arzoumanian2011,Peretto2014-SDC13,qz15,Heigl2016,Federrath2016}. On the other hand, the Galactic environment does not seem to affect local star formation across the few-hundred-pc spread of vertical position $z$, consistent with previous studies \citep{Eden2012,Eden2013}.
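The per-category fractions above follow from comparing each clump's mass and radius against an empirical mass--size threshold. A minimal sketch (our illustration, not the survey pipeline; we assume the commonly quoted form $m(r) > 870\,M_\odot\,(r/{\rm pc})^{1.33}$, noting that the exact coefficient depends on the adopted dust opacity correction):

```python
def above_massive_sf_threshold(mass_msun, radius_pc, coeff=870.0, index=1.33):
    """True if a clump lies above an m(r) = coeff * (r/pc)^index threshold
    for massive star formation. coeff = 870 is one commonly quoted value
    of the Kauffmann & Pillai-type criterion; the paper applies an
    opacity-law correction following Dunham et al. (2011)."""
    return mass_msun > coeff * radius_pc ** index

def fraction_above(clumps, **kwargs):
    """clumps: iterable of (mass_msun, radius_pc) pairs.
    Returns the fraction of clumps above the threshold (counting by number)."""
    clumps = list(clumps)
    n_above = sum(above_massive_sf_threshold(m, r, **kwargs) for m, r in clumps)
    return n_above / len(clumps)
```

Applying `fraction_above` separately to the clump lists of categories I--III would reproduce the kind of percentages quoted above; counting by mass simply replaces the number count with a mass-weighted sum.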
We report the detection of a very low mass star (VLMS) companion to the primary star 1SWASPJ234318.41$+$295556.5A (J2343$+$29A), using radial velocity (RV) measurements from the PARAS (PRL Advanced Radial-velocity Abu-sky Search) high resolution echelle spectrograph. The periodicity of the single-lined eclipsing binary (SB$_1$) system, as determined from 20 sets of RV observations from PARAS and 6 supporting sets of observations from SOPHIE data, is found to be $16.953$~d, as opposed to the $4.24$~d period reported from SuperWASP photometry. It is likely that inadequate phase coverage of the transit with SuperWASP photometry led to the incorrect determination of the period for this system. We derive the spectral properties of the primary star from the observed stellar spectra: $T_{\rm{eff}}$~=~5125$~\pm~67$~K, $[Fe/H]$~=~$0.1~\pm~0.14$ and ${\rm log}~g$~=~$4.6~\pm~0.14$, indicating a K1V primary. Applying the Torres relation to the derived stellar parameters, we estimate a primary mass of $0.864_{-0.098}^{+0.097}$~M$_\odot$ and a radius of $0.854_{-0.060}^{+0.050}$~R$_\odot$. We combine RV data with SuperWASP photometry to estimate the mass of the secondary, $M_B = 0.098 \pm 0.007 M_{\sun}$, and its radius, $R_B = 0.127 \pm 0.007~R_{\sun}$, each with an accuracy of $\sim$7$\%$. Although the observed radius is found to be consistent with Baraffe's theoretical models, the uncertainties on the mass and radius of the secondary reported here are model dependent and should be used with discretion. Here, we establish this system as a potential benchmark for the study of VLMS objects, worthy of both photometric follow-up and the investment of time on high-resolution spectrographs paired with large-aperture telescopes.
M dwarfs make up most of the Galaxy's stellar budget. However, due to their small masses and faint magnitudes in the visible band, such systems have remained largely unexplored. The mass-radius (M-R) relation serves as a crucial test for the verification of theoretical models of M dwarfs against values derived from observation, and is also integral to the understanding of stellar structure and evolution. Compared to M dwarfs, stars with masses $\ge 0.6 M_{\sun}$ have a well-established M-R relation \citep{Torres2010}. The vast majority of observations of M dwarfs of varying masses have reported radii higher than those predicted by models \citep{Torres2010, Lopez-Morales2007}. Improper assumptions about opacity in M dwarf models are speculated to be one of the reasons for this mismatch, at the level of $\sim10-15\%$, between observational and theoretically predicted radii of M dwarfs. \cite{Chabrier2007} argue that the discrepancy is due to the high rotation rate in M dwarfs, or the effect of magnetic-field-induced reduction of the efficiency of large-scale thermal convection in their interiors. M dwarfs with masses of less than $0.3 M_{\sun}$ seem to match the Baraffe models \citep{Baraffe2015} closely, as stars become completely convective at this boundary \citep{Lopez-Morales2007}. If we concentrate on very low mass stars (VLMS) with masses $\leq 0.1 M_{\sun}$, there is a dearth of such samples discovered with accuracies of $\leq$ 1--2 $\%$. There have been only a handful of eclipsing binary (EB) systems in which one of the components is a VLMS object with mass~$\leq$~0.1~$M_{\odot}$ and for which the masses have been determined at high accuracy \citep{Wisniewski2012, Triaud2013, Gomez2014, Ofir2012, Tal-Or2013, Beatty2007}. Table~\ref{vlms} gives a non-exhaustive compilation of the stars having masses between $0.08-0.4$~M$_\odot$ for which masses and radii are determined at accuracies better than 10$\%$.
The first three columns contain the name of the object, its mass, and its radius. The systems are SB$_1$ or SB$_2$ by nature. The classification of the EB, based on spectral types, is mentioned in the penultimate column. The literature references in which the sources are studied are cited in the last column of the table. A theoretical M-R diagram is plotted in Fig.~\ref{plot_thesis_1} for M dwarfs having masses between $0.08-0.4$~M$_{\odot}$, based on the Baraffe models \citep{Baraffe2015} for a 1 Gyr isochrone and solar metallicity (most of the objects studied here have this age and metallicity). We have overplotted the objects from Table~\ref{vlms}, with their respective error bars on masses and radii, on the theoretical M-R diagram. As seen from Fig.~\ref{plot_thesis_1}, many of the stars studied in the literature fall above the theoretical M-R relation, clearly indicative of the M dwarf radius problem. Moreover, this discrepancy is more pronounced for objects having masses above 0.3~$M_{\odot}$. Out of the 26 systems previously studied in the literature in the mass range of $0.08-0.4$~$M_{\odot}$, as shown in Table~\ref{vlms}, only 6 systems with masses between $0.08-0.2$~$M_{\odot}$ have been studied with accuracies better than a few per~cent \citep{Beatty2007, Doyle2011, Triaud2013, Nefs2013, Fernandez2009}. There is thus a clear need to identify more such VLMS candidates in EB systems, specifically in the mass range of $0.08-0.4$~$M_{\odot}$ where only a few sources have been studied. The object chosen for the current study, J2343$+$29A, was first identified from SuperWASP (SW) photometry by \cite{Christian2006} and \cite{Cameron2007}. Observations taken with SW North listed the primary star, J2343$+$29A, as having a temperature of 5034 K, spectral type K3, a radius of ${R_{A}=~0.85~R_{\sun}}$, and a transit depth of 21 mmag.
The transit light curve showed substantial scatter, and the source was suspected to be a stellar binary system based on preliminary observations with SOPHIE \citep{Cameron2007}. Table~\ref{swasp} summarizes the basic stellar parameters listed for this source in the literature \citep{Cameron2007}. Here, we present the discovery and characterization of a VLMS companion to J2343$+$29A, enabled by the PRL Advanced Radial velocity Abu-sky Search spectrograph \citep[hereafter PARAS;][]{chakraborty14}. Spectra acquired using PARAS, over a time period of $\sim$~3 months and well-sampled in phase, are described in \S 2. In \S 3 we describe our radial velocity (RV) analysis, both independently and in concert with SW photometry. We have developed an {\tt{IDL}}-based tool, {\tt{PARAS SPEC}}, to determine stellar properties such as $T_{\rm{eff}}$, surface gravity (${\rm log}~g$), and metallicity ($[Fe/H]$). This procedure, which involves matching synthetic spectra to the observed spectra and fitting the equivalent widths (EWs) of Fe I and Fe II absorption lines, is discussed in \S 4. Also discussed in \S 4 are the stellar parameters of the primary star derived from the observed PARAS spectra. In \S 5, we discuss the implications of this work, and we conclude in \S 6.
The important conclusions of this work are as follows: \begin{enumerate}[1.] \item Stellar parameters determined for the primary using spectroscopic analysis suggest that the star has $T_{\rm{eff}}$~=~5125$~\pm~67$~K, $[Fe/H]$~=~$0.1~\pm~0.14$ and ${\rm log}~g$~=~$4.6~\pm~0.14$. Hence, the primary has a mass of $0.864_{-0.098}^{+0.097}$~M$_\odot$ and a radius of $0.854_{-0.060}^{+0.050}$~R$_\odot$. \item High resolution spectroscopy taken with PARAS, combined with SOPHIE archival RV data and SW archival photometric data, yields an RV semi-amplitude for the source of $8407_{-10}^{+11}$~m~s$^{-1}$. The secondary mass from RV measurements is $M_B~=~0.098~\pm0.007~M_{\sun}$, with an accuracy of $\sim~7$~per~cent (model dependent). Hence, we conclude that J2343$+$29 is an EB with a K1 primary and an M7 secondary \citep{Pecaut2013}. The period of the EB, based on combined RV and SW photometry measurements, is revised to 16.953~d, compared to the previously reported value of 4.24~d. \item We fit the light curve data simultaneously with the RV data and determine the transit depth to be $25$ mmag. Based on the transit depth, the radius of the secondary is estimated as $R_B~=0.127\pm0.007~R_{\sun}$, with an accuracy of $\sim~7$~per~cent (model dependent). The observed radius is consistent with the theoretically derived radius values from the Baraffe models. \end{enumerate}
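As a consistency check (our sketch, not the fitting code used in the paper), the secondary mass can be recovered from the quoted RV semi-amplitude via the binary mass function $f(m) = K^3 P (1-e^2)^{3/2}/(2\pi G) = (M_B \sin i)^3/(M_A + M_B)^2$, solved numerically for $M_B$. We assume here that the semi-amplitude is in m~s$^{-1}$, an edge-on eclipsing geometry ($\sin i \approx 1$), and a circular orbit:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
DAY = 86400.0      # s

def secondary_mass(K_ms, P_day, M_A_msun, sin_i=1.0, ecc=0.0, n_iter=100):
    """Solve the binary mass function for the secondary mass (in M_sun)
    by fixed-point iteration of (M_B sin i)^3 / (M_A + M_B)^2 = f(m)."""
    f_kg = (K_ms ** 3) * (P_day * DAY) * (1.0 - ecc ** 2) ** 1.5 / (2.0 * math.pi * G)
    f_msun = f_kg / M_SUN
    M_B = f_msun  # starting guess; iteration converges quickly
    for _ in range(n_iter):
        M_B = (f_msun * (M_A_msun + M_B) ** 2) ** (1.0 / 3.0) / sin_i
    return M_B

# With the values quoted in the text -- K ~ 8407 m/s, P = 16.953 d,
# M_A = 0.864 M_sun -- this returns a value close to the quoted
# M_B = 0.098 M_sun.
```

That the quoted $K$, $P$, $M_A$, and $M_B$ close under the mass function supports the internal consistency of the reported solution.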
We present fully general-relativistic magnetohydrodynamic simulations of the merger of binary neutron star (BNS) systems. We consider BNSs producing a hypermassive neutron star (HMNS) that collapses to a spinning black hole (BH) surrounded by a magnetized accretion disk in a few tens of ms. We investigate whether such systems may launch relativistic jets and hence power short gamma-ray bursts. We study the effects of different equations of state (EOSs), different mass ratios, and different magnetic field orientations. For all cases, we present a detailed investigation of the matter dynamics and of the magnetic field evolution, with particular attention to its global structure and possible emission of relativistic jets. The main result of this work is that we observe the formation of an organized magnetic field structure. This happens independently of EOS, mass ratio, and initial magnetic field orientation. We also show that those models that produce a longer-lived HMNS lead to a stronger magnetic field before collapse to a BH. Such larger fields make it possible, for at least one of our models, to resolve the magnetorotational instability and hence further amplify the magnetic field in the disk. However, by the end of our simulations, we do not (yet) observe a magnetically dominated funnel nor a relativistic outflow. With respect to the recent simulations of Ruiz et al. [Astrophys. J. 824, L6 (2016)], we evolve models with lower and more plausible initial magnetic field strengths and (for computational reasons) we do not evolve the accretion disk for the long time scales that seem to be required in order to see a relativistic outflow. Since all our models produce a similar ordered magnetic field structure aligned with the BH spin axis, we expect that the results found by Ruiz et al. 
(who only considered an equal-mass system with an ideal fluid EOS) should be general and, at least from a qualitative point of view, independent of the mass ratio, magnetic field orientation, and EOS.
} With the revolutionary first detections of gravitational waves (GWs) by Advanced LIGO~\cite{LIGO:BBHGW:2016, LIGO_GW151226} from the merger of compact binary systems composed of two black holes (BHs), there have been even greater expectations of possible near-future detections of other sources, including binaries composed either of two neutron stars (NSs) or of a NS and a BH. While solar-mass binary BH mergers are not expected to emit electromagnetic (EM) signals (but see, e.g., Refs.~\cite{Perna2016, Janiuk2016, Murase2016} for possible alternatives), binary neutron star (BNS) and NS-BH systems are considered very powerful sources of a variety of EM counterparts, ranging from collimated emission, such as short gamma-ray bursts (SGRBs), to more isotropic ones, such as the so-called kilonova/macronova~\cite{Li:1998:L59, Kulkarni:2005:macronova-term, Metzger2012}. In particular, the possibility that SGRBs are powered by BNS or NS-BH mergers is supported by observational evidence (see Ref.~\cite{Berger2014} for a recent review). The simultaneous detection of a SGRB and GWs from a BNS or a NS-BH merger would represent definitive proof that these binary mergers power the central engine of SGRBs. Moreover, this association could provide strong constraints on the equation of state (EOS) of NS matter~\cite{Giacomazzo2013}. One of the leading theoretical models describing the gamma-ray emission in SGRBs is based on the launch of a relativistic jet from a spinning BH surrounded by an accretion disk. Jets may be launched via neutrino-antineutrino annihilation~\cite{Narayan1992, Piran:2004:76, Nakar:2007:442} or via magnetic mechanisms, such as the Blandford-Znajek (BZ) mechanism~\cite{Blandford1977}. 
While fully general relativistic simulations of BNS mergers have shown that, in those cases where the merger results in BH formation on a dynamical time scale, disks as massive as $\sim 0.1 M_{\odot}$ can easily be formed~\cite{Rezzolla:2010:114105}, whether the emission of relativistic jets occurs or not is still under investigation. This has driven an increasing effort in performing fully general-relativistic magnetohydrodynamic (GRMHD) simulations of BNS mergers, with the first simulations dating back a few years~\cite{Anderson2008PhRvL.100s1101A, Liu2008, Giacomazzo2009}. More recently, some groups have started to investigate the formation of jets~\cite{Rezzolla:2011:6, Kiuchi:2014:41502, Ruiz2016, Dionysopoulou:2015:92}. The simulation by Rezzolla et al.~\cite{Rezzolla:2011:6} was in particular the first to show the possibility of forming an ordered and mainly poloidal magnetic field configuration aligned with the BH spin axis. Even if no outflow was observed, this provided a strong indication that BNS mergers can at least provide some of the necessary conditions to launch a relativistic jet. A subsequent simulation by Kiuchi et al.~\cite{Kiuchi:2014:41502}, using a different EOS, challenged that result. Meanwhile, both local and global simulations of magnetic field evolution in the merger of BNS systems have shown that very large fields of up to $\sim 10^{16}$ G can be formed during merger~\cite{Zrake2013, Giacomazzo:2015, Kiuchi:2015:1509.09205}. Since it was shown that the formation of a magnetically dominated region in the BH ergosphere is a necessary condition for the activation of the BZ mechanism~\cite{Komissarov2009}, these new results encouraged further studies. Very recently, GRMHD simulations by Ruiz et al.~\cite{Ruiz2016} have shown that, when starting with very large magnetic fields, it is possible to observe the formation of a mildly relativistic outflow a few tens of ms after BH formation.
Even if the initial magnetic fields were unrealistically large, i.e., $\sim 10^{15}$ G, such fields should be produced after merger and therefore these simulations provide a proof of concept that jets may indeed be launched. Moreover, these recent simulations have shown that jets may be launched even when considering magnetic fields confined inside the NSs. All previous simulations considered only equal-mass systems and only two EOSs: ideal fluid~\cite{Rezzolla:2011:6, Ruiz2016} or piecewise polytropic~\cite{Kiuchi:2014:41502}. In this paper we extend the previous investigations by studying, with our GRMHD code Whisky~\cite{Giacomazzo:2007:235, Giacomazzo2011PhRvD..83d4014G, Giacomazzo2013ApJ...771L..26G}, the magnetic field structure that is formed after the merger of BNS systems and how it depends on the initial mass ratio, EOS, and initial magnetic field orientation. As such, our work allows us to assess the robustness of previous results when these important parameters are changed and we consider this as a preliminary step before performing simulations with very high resolutions or using our subgrid model~\cite{Giacomazzo:2015} to further study the effect of large magnetic field amplifications. All our simulations start with plausible values for the initial magnetic field, i.e., $\sim 10^{12}$ G. The role of neutrino emission is not included in our simulations and we believe that this does not affect our results qualitatively. We are currently working on the implementation of neutrino treatment in our GRMHD code and we point out that up to now only one recent work has presented GRMHD simulations of BNS merger including magnetic fields, neutrino emission, and a finite-temperature EOS~\cite{Palenzuela2015}. Our paper is organized as follows. In Sec.~\ref{sec_numerical_methods} we describe our numerical setup and in Sec.~\ref{sec_initial_data} we describe the initial data used in our simulations. 
We remark that our equal-mass models are the same as those that were evolved by Rezzolla et al.~\cite{Rezzolla:2011:6} and Kiuchi et al.~\cite{Kiuchi:2014:41502}, while the unequal-mass ones are studied here for the first time. In Sec.~\ref{evolution} we describe in detail the evolution of our different initial models, for the first time with a very accurate description of the magnetic field configurations formed after merger (also implementing advanced visualization tools that are described in the Appendix). In Sec.~\ref{sec_SGRB} we discuss the connection with SGRBs and other possible EM counterparts, while in Sec.~\ref{sec_GWs} we present the GW signal. In Sec.~\ref{sec_conclusions} we conclude and summarize the main results of our work. We use a system of units in which $G=c=M_{\odot}=1$ unless specified otherwise. The time is shifted so that $t=0$ refers to the time of merger, which corresponds to the maximum amplitude in the GW signal.
} In this paper we started our investigation of the magnetic field structure formed in the post-merger phase of high-mass BNS systems, i.e., of systems that produce a BH on a dynamical time scale after merger. We focused in particular on two different EOSs, ideal fluid and H4, both of which were used recently by other groups to study the merger of equal-mass systems~\cite{Rezzolla:2011:6, Kiuchi:2014:41502, Ruiz2016}. We have extended those previous investigations by also including unequal-mass BNSs and by changing, for one configuration, the initial magnetic field orientation. Compared to previous work, here we have introduced a more systematic way to study the magnetic field structure in order to better understand whether an ordered poloidal field is formed after the merger or not. This has important consequences for the possible formation of relativistic jets and for the central engine of SGRBs. The main result of this work is that we observe the formation of an organized magnetic field structure after the formation of a BH surrounded by an accretion disk. This happens independently of EOS, mass ratio, and initial magnetic field orientation. The main difference from what was reported by Rezzolla et al.~\cite{Rezzolla:2011:6} is that the field along the BH axis is neither strong nor strongly collimated. We observe a strong field near the edge of the torus, which is not composed of straight magnetic field lines, but instead has a more helical structure, similar to the one observed in Ref.~\cite{Ruiz2016}. The initial magnetic field orientation does not produce large differences, but we point out that the \texttt{UD} configuration is the one leading to the smallest amount of magnetic energy and the smallest values of $B_{90}$ along the conical structure separating the low-density funnel and the higher-density disk, where the magnetic field amplification is generically found to be the most efficient.
The largest magnetic field is obtained in the unequal-mass model evolved with the H4 EOS ({\tt H4\_q08}). This is due to the much longer HMNS phase in this case, which allows for a much larger magnetic field amplification (likely aided by a better-resolved MRI, cf.~Sec.~\ref{res}). We did not observe the formation of a jet in any of the simulations, consistent with what was seen in Refs.~\cite{Rezzolla:2011:6, Kiuchi:2014:41502}, but this is not unexpected considering the recent results of Ref.~\cite{Ruiz2016}. It is indeed known that a magnetically dominated region in the BH ergosphere is a necessary condition for the activation of the BZ mechanism~\cite{Blandford1977}. On the one hand, our resolution is in general not high enough to fully resolve the KH instability during merger and the MRI after merger (with the possible exception of model {\tt H4\_q08}, see Sec.~\ref{res}), and therefore the magnetic field amplification might not be strong enough to activate the BZ mechanism. On the other hand, our simulations are limited to a few tens of ms after BH formation, while it could take longer to realize the conditions needed to form a jet \cite{Ruiz2016}. A recent study~\cite{Parfrey2015} investigated a mechanism in which magnetic loops drifting into the BH are inflated and forced to open due to differential rotation between the disk and the BH, potentially powering jets. The study assumed the force-free MHD limit as well as axisymmetry, and required a critical size for the initial loops in the case of prograde disks. Therefore it is not clear whether this mechanism plays a role in our setup; future studies can help in assessing the viability of this scenario. Our next step will be to employ the analysis techniques developed in this paper to study the same (or similar) systems when evolved with our subgrid model~\cite{Giacomazzo:2015} or with resolutions that are high enough to better capture the KH instability and the MRI. Moreover, we will evolve for a longer time after BH formation.
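Whether the MRI is resolved is commonly judged by comparing the local grid spacing with the wavelength of the fastest-growing mode, $\lambda_{\rm MRI} \sim 2\pi v_A/\Omega$, where $v_A$ is the Alfv\'en speed. A rough Newtonian sketch of this quality factor follows (our illustration; the rule of thumb $Q \gtrsim 10$ and the input values in the comment are generic, not taken from our simulations, and relativistic corrections are ignored):

```python
import math

def mri_quality_factor(B_gauss, rho_cgs, Omega, dx_cm):
    """Q = lambda_MRI / dx, with lambda_MRI ~ 2*pi*v_A/Omega (Newtonian,
    CGS units).

    B_gauss : magnetic field strength [G]
    rho_cgs : rest-mass density [g/cm^3]
    Omega   : local angular velocity [rad/s]
    dx_cm   : local grid spacing [cm]
    A common rule of thumb is that Q >~ 10 is needed to capture the
    fastest-growing MRI mode.
    """
    v_alfven = B_gauss / math.sqrt(4.0 * math.pi * rho_cgs)  # cm/s
    lam_mri = 2.0 * math.pi * v_alfven / Omega               # cm
    return lam_mri / dx_cm

# Illustrative inputs: B = 1e15 G, rho = 1e11 g/cm^3, Omega = 4000 rad/s,
# dx = 200 m give Q of a few tens, i.e. marginally resolved.
```

Since $\lambda_{\rm MRI}\propto B/\sqrt{\rho}$, the stronger fields reached in the longer-lived HMNS of {\tt H4\_q08} directly translate into a larger quality factor at fixed resolution, consistent with the discussion above.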
Since in this paper we have shown that the magnetic field structure is qualitatively the same independently of the EOS, mass ratio, and magnetic field orientation, we expect the results of Ref.~\cite{Ruiz2016} to be general, and we will assess this statement in future simulations. Another important ingredient will be the use of finite-temperature EOSs and neutrino emission, which were included only recently in GRMHD simulations by another group~\cite{Palenzuela2015}. We do not expect these to produce qualitatively different results, but they will provide a more accurate description of the post-merger phase and GW emission. Our step-by-step study will help in assessing the individual contributions of the different physical ingredients (high magnetic fields, finite-temperature EOSs, and neutrino emission) to the possible emission of relativistic jets and SGRBs. Initial data used for the simulations described in this paper, as well as gravitational wave signals and movies from our simulations, are publicly available online.
We develop a method to simulate galaxy-galaxy weak lensing by utilizing all-sky, light-cone simulations and their inherent halo catalogs. Using the mock catalogs to study the error covariance matrix of galaxy-galaxy weak lensing, we compare the full covariance with the ``jackknife'' (JK) covariance, the method often used in the literature that estimates the covariance from resamples of the data itself. \change{We show that the JK covariance varies among realizations of the mock lensing measurements, while the average JK covariance over the mocks gives a reasonably accurate estimate of the true covariance up to separations comparable with the size of the JK subregions. The scatter among the JK covariances is found to be $\sim$10\% after we subtract the lensing measurement around random points.} However, the JK method tends to underestimate the covariance at larger separations, increasingly so for a survey with a higher number density of source galaxies. We apply our method to the Sloan Digital Sky Survey (SDSS) data, and show that the 48 mock SDSS catalogs nicely reproduce the signals and the JK covariance measured from the real data. We then argue that the use of the accurate covariance, compared to the JK covariance, allows us to use the lensing signals at large scales beyond the size of the JK subregions, which contain cleaner cosmological information in the linear regime.
Cross-correlation of large-scale structure (LSS) tracers, galaxies or clusters, with shapes of background galaxies, referred to as galaxy-galaxy weak lensing or stacked lensing, offers a unique means of measuring the average total matter distribution around the foreground objects at the lens redshift \citep[][]{1996ApJ...466..623B, 1998ApJ...503..531H,Fischeretal:00, 2002MNRAS.335..311G,2004ApJ...606...67H,2006MNRAS.368..715M,2007arXiv0709.1159J,2013ApJ...769L..35O, 2013MNRAS.431.1439G, 2014MNRAS.437.2111V}. In particular, by combining the weak lensing and auto-clustering correlations of the same foreground tracers, one can recover the underlying matter clustering and then constrain cosmology by breaking the degeneracy with the galaxy bias uncertainty \citep[e.g.,][]{Seljaketal:05,2009MNRAS.394..929C,2013MNRAS.432.1544M,2015ApJ...806....2M,2016arXiv160407871K}. Combining different probes of LSS and the cosmic microwave background (CMB) will become a standard strategy for achieving the full potential of ongoing and upcoming wide-area galaxy surveys in addressing fundamental physics with cosmological observables \citep[e.g.,][]{OguriTakada:11,Schaanetal:16}. These surveys include the Baryon Oscillation Spectroscopic Survey (BOSS), the Dark Energy Survey (DES)\footnote{\url{https://www.darkenergysurvey.org}}, the Kilo-Degree Survey (KiDS)\footnote{\url{http://kids.strw.leidenuniv.nl}}, the Subaru Hyper Suprime-Cam (HSC) survey\footnote{\url{http://hsc.mtk.nao.ac.jp/ssp/}}, the Dark Energy Spectroscopic Instrument (DESI)\footnote{\url{http://desi.lbl.gov}}, and the Subaru Prime Focus Spectrograph (PFS) survey \citep{Takadaetal:14} in the coming 5 years, and ultimately the Large Synoptic Survey Telescope (LSST)\footnote{\url{https://www.lsst.org}}, Euclid\footnote{\url{http://sci.esa.int/euclid/}} and WFIRST\footnote{\url{http://wfirst.gsfc.nasa.gov}} on a 10-year time scale.
In order to properly extract cosmological information from a given survey, it is important to understand the statistical properties of LSS probes that arise from the properties of the underlying matter distribution. The statistical precision of galaxy-galaxy weak lensing measurements is determined by the covariance matrix, which itself contains two contributions: the measurement noise and the sample variance caused by incomplete sampling of the fluctuations due to the finite survey volume. An accurate estimation of the covariance is becoming a challenging issue for upcoming wide-area galaxy surveys \citep{Hartlap2007,DodelsonSchneider:13,Tayloretal:13}, especially if the dimension of the data vector is large, e.g. when combining different LSS probes. Even though the initial density field is nearly Gaussian, the sample variance of LSS probes receives substantial non-Gaussian contributions from the nonlinear evolution of large-scale structure \citep[e.g.,][]{Scoccimarro:99,CoorayHu:01}. Since different Fourier modes are no longer independent but rather correlated with each other in the weakly or deeply nonlinear regime, it is important to accurately model the non-Gaussian contribution to the sample variance. For this, it is now recognized that the super-sample covariance (SSC), which arises from mode-couplings of sub-survey (observable) modes with super-survey (unobservable) modes comparable with or greater than the size of the survey volume, is the largest non-Gaussian contribution to the sample variance for cluster counts, the matter power spectrum, and the cosmic shear statistics \citep{2003ApJ...584..702H,Hamiltonetal:06, TakadaBridle:07, TakadaJain:09, Sato2009, Sato2011, Takahashietal:09, Kayoetal:13,KayoTakada:13, 2013PhRvD..87l3504T,TakadaSpergel:14, Schaanetal:14, Takahashietal:14, Lietal:14b,2015MNRAS.453.3043S,KrauseEifler:16,Mohammedetal:16}.
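In the response approach of \citet{2013PhRvD..87l3504T}, the SSC term has a simple rank-one structure: it is the outer product of the observables' responses to a uniform background density mode, scaled by the variance $\sigma_b^2$ of that mode within the survey window. A minimal sketch, where the response values and $\sigma_b^2$ are illustrative placeholders rather than measured quantities:

```python
import numpy as np

def ssc_covariance(responses, sigma_b2):
    """Super-sample covariance in the response approach:
    Cov^SSC_ij = sigma_b^2 * (dO_i/d delta_b) * (dO_j/d delta_b),
    where sigma_b^2 is the variance of the background (super-survey)
    density mode within the survey window."""
    r = np.asarray(responses, dtype=float)
    return sigma_b2 * np.outer(r, r)

# rank-one by construction: a single background mode modulates all bins
cov_ssc = ssc_covariance([1.0, 0.8, 0.5], 1e-4)
```

Because a single super-survey mode modulates all bins coherently, this contribution is fully correlated across bins, which is why it cannot be beaten down by adding more bins.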
In particular, \citet{2013PhRvD..87l3504T} developed a unified approach to describing the SSC effect in terms of the response of a given observable to a background mode modeling the super-survey mode \citep[also see][]{Lietal:14a,Lietal:14b}. Several cosmic shear measurements have taken the SSC contribution into account in their cosmological analyses \citep{Beckeretal:15,HarnoisDerapsetal:16,Hildebrandtetal:16}. However, the covariance matrix of galaxy-galaxy weak lensing has not been fully studied. A commonly-used approach to estimating the covariance is the jackknife (JK) method, a well-known technique in statistics and data analysis \citep[][]{Efron:82} \citep[also see][for pioneering cosmological applications]{Bothunetal:83}. The JK method gives an internal estimator of the errors, i.e. it estimates the covariance from resamples of subdivided copies of the data itself \citep[e.g.,][for the use of the JK covariance for galaxy-galaxy weak lensing]{2013MNRAS.432.1544M,Cacciatoetal:14,Couponetal:15,2016PhRvL.116d1301M,Clampittet:16}. The advantage of the JK method is that it can account for various observational effects inherent in the data, such as inhomogeneities in the depth and measurements across the survey area. However, the drawback is that the JK covariance is generally noisy because it is estimated from one particular realization of the fluctuations, i.e. the data itself. For the same reason, the JK covariance can be unstable or even singular, especially if the dimension of the data vector is comparable with the number of JK resamples, which can easily happen when dividing the data into different bins of physical quantities or combining different probes. Furthermore, \citet{2016arXiv161100752S} recently showed that the use of a random catalog of lensing galaxies (or clusters) is important for the covariance estimation of galaxy-galaxy lensing, in contrast to the cosmic shear covariance. What does the JK method really estimate?
What are the limitations? These questions have not been fully addressed yet \citep[also see][for a study with a similar motivation for the clustering correlation function]{Norbergetal:09}. In particular, it is not clear whether the JK method can capture the SSC contribution in galaxy-galaxy weak lensing measurements. Hence the purpose of this paper is to study the covariance matrix of galaxy-galaxy weak lensing. To do this, we develop a method to construct a mock catalog of galaxy-galaxy weak lensing measurements by fully utilizing a set of full-sky, light-cone simulations containing the lensing fields in multiple source planes as well as the dark matter and halo distributions in multiple lens planes \citep{2015MNRAS.453.3043S} (also see Takahashi et al. in preparation). In order to properly simulate the properties of source galaxies, we populate a ``real'' catalog of source galaxies into the light-cone simulation realization \citep[also see][]{Shirasaki2014}. In this way we can take into account the observed characteristics of the source galaxies (their angular distributions, redshifts, intrinsic ellipticities, and the survey geometry). By identifying halos that are considered to host galaxies and clusters as foreground tracers, based on a prescription such as the halo occupation distribution and the halo mass-cluster observable scaling relation, we can make hypothetical galaxy-galaxy weak lensing measurements from the mock catalog. With the help of such mock catalogs, we study the importance of the SSC contribution to the sample variance in galaxy-galaxy weak lensing as well as the limitations of the JK method. We then apply the developed method to the Sloan Digital Sky Survey (SDSS) data: the catalog of source galaxies \citep{2013MNRAS.432.1544M} and the lens samples of redMaPPer clusters \citep{2014ApJ...785..104R} and luminous red galaxies \citep{2001AJ....122.2267E}.
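The delete-one JK estimator discussed above can be written compactly. A minimal sketch, assuming `signals` holds the lensing signal re-measured with each of the $N_{\rm sub}$ subregions omitted in turn (the array contents below are toy values):

```python
import numpy as np

def jackknife_covariance(signals):
    """Delete-one jackknife covariance.

    signals : (N_sub, N_bin) array; row i is the signal measured
              with the i-th subregion removed from the footprint.
    Returns the (N_bin, N_bin) covariance estimate.
    """
    signals = np.asarray(signals, dtype=float)
    n_sub = signals.shape[0]
    dev = signals - signals.mean(axis=0)
    # (N_sub - 1)/N_sub prefactor: delete-one resamples are strongly
    # correlated, so the naive sample covariance must be scaled up.
    return (n_sub - 1) / n_sub * dev.T @ dev

# toy resamples standing in for delete-one lensing measurements
rng = np.random.default_rng(0)
resamples = rng.normal(size=(100, 4)) / 100 + 1.0
cov = jackknife_covariance(resamples)
```

The resulting matrix is symmetric and positive semi-definite by construction; the quality of the estimate, however, depends on how well the subregions sample the fluctuations, which is exactly the question addressed with the mocks.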
We also discuss how the use of the accurate covariance could improve cosmological constraints, compared to the JK method. The paper is organized as follows. Section~\ref{sec:stack} summarizes the basics of galaxy-galaxy weak lensing, the estimator of the signal, and the different covariance estimators. Section~\ref{sec:sim} describes the details of the $N$-body simulations, halo catalogs, and ray-tracing simulations used in this paper, as well as our method for generating mock catalogs of galaxy-galaxy weak lensing. Section~\ref{sec:result} presents the main results, including a detailed comparison of the different covariance estimators as well as the application to the real SDSS data. We discuss the results in Section~\ref{sec:conclusions}.
\label{sec:conclusions} In this paper, we have developed a method to create a mock catalog of the cross-correlation between positions of lensing objects and shapes of background galaxies -- the stacked lensing of galaxy clusters or galaxy-galaxy weak lensing. To do this, we fully utilized the full-sky, light-cone simulations based on a suite of multiple $N$-body simulation outputs, where the lensing fields of source galaxies are given in multiple shells in the radial direction out to a maximum source redshift $z_{\rm s}\simeq 2.4$, together with the halo catalogs in multiple lens planes in the light cone. Our method generates a mock catalog of the stacked lensing through the following procedure: (1) define the survey footprints based on the assigned RA and dec coordinates in the full-sky simulation, (2) populate the real catalog of source galaxies into the light-cone simulation realization according to the angular position (RA and dec) and redshift of each source galaxy, (3) randomly rotate the ellipticity of each source galaxy to erase the real lensing effect, (4) simulate the lensing effects on each source galaxy, using the lensing fields in the light-cone simulation, and (5) identify halos that are considered to host the galaxies or clusters of interest, according to a prescription connecting the galaxies or clusters to halos (e.g. the scaling relation between halo mass and cluster richness, or the halo occupation distribution model for galaxies). With this method, we can use the observed properties of the data: the survey footprints and the positions and characteristics of source galaxies (the intrinsic ellipticities and the redshift distribution). We applied this method to the real SDSS catalog of source galaxies as well as to the SDSS catalogs of redMaPPer clusters or luminous red galaxies (LRGs), as shown in Fig.~\ref{fig:sdss_footprint}.
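Steps (3) and (4) of the procedure above can be sketched with complex ellipticities; the reduced-shear transformation below is the standard one for the $|e|\le 1$ convention, and the shear value is an illustrative placeholder rather than a value taken from the light-cone simulations:

```python
import numpy as np

def rotate_randomly(e_int, rng):
    """Step (3): random spin-2 rotation erases the real lensing signal
    while keeping the intrinsic |e| distribution of the data."""
    phi = rng.uniform(0.0, np.pi, size=e_int.shape)
    return e_int * np.exp(2j * phi)

def apply_reduced_shear(e_int, g):
    """Step (4): imprint the simulated reduced shear g on the complex
    intrinsic ellipticity (|e| <= 1 convention)."""
    return (e_int + g) / (1.0 + np.conj(g) * e_int)

rng = np.random.default_rng(1)
e = 0.3 * np.exp(2j * rng.uniform(0, np.pi, 10000))  # mock intrinsic shapes
e = rotate_randomly(e, rng)
g = 0.02 + 0.01j            # placeholder shear at the galaxy's position
e_obs = apply_reduced_shear(e, g)
# averaging e_obs over many galaxies recovers g once the randomly
# oriented intrinsic shapes average away
```

In the actual pipeline the shear at each galaxy's angular position and redshift would be read from the light-cone lensing fields rather than set by hand.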
We then showed that our mock catalogs well reproduce the signals as well as the jackknife (JK) covariance error bars estimated from the real data (Fig.~\ref{fig:sdss_dSigma}). Our method provides a powerful way to estimate the error covariance matrix for ongoing and upcoming wide-field weak lensing surveys. With accurate mock catalogs of the cluster/galaxy-shear cross-correlation in hand, we were able to study the nature of the error covariance matrix. In particular, we focused on addressing the validity and limitations of the JK method, which has often been used in the literature. The JK method is based on the real data itself, and is therefore referred to as an internal covariance estimator; it is known in statistics and data science to give an unbiased estimator of the covariance if the field is Gaussian or Poissonian. We found that the JK method gives a reasonably accurate estimate of the true covariance, to within 10$\%$ in amplitude on average, at separation scales smaller than the size of the JK subregions, but that it can underestimate the true error at larger separations, especially for a survey with a higher number density of source galaxies, as in ongoing and upcoming surveys such as the Subaru HSC survey. However, we should keep in mind a limitation of the JK method: the JK covariance can be noisy on a realization-by-realization basis, because it is estimated from one particular realization (i.e. the data). The JK covariance matrix becomes noisier or unreliable if the number of JK subregions/resamples is small or if the dimension of the data vector is comparable with the number of JK subregions. Thus the use of an accurate covariance matrix for the stacked lensing measurement is important in future cosmological analyses. The full covariance gives us access to larger separations beyond the scale of the JK subregions, where the JK covariance ceases to be valid (Fig.~\ref{fig:var_dSigma_Nsub}).
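The noise in an estimated covariance also biases its inverse, which is what enters parameter fitting. A minimal sketch of the standard correction of \citet{Hartlap2007}; note it is strictly derived for independent Gaussian resamples, so it is only approximate for correlated JK resamples:

```python
import numpy as np

def debiased_precision(cov_hat, n_resamples):
    """Hartlap et al. (2007) correction: the inverse of a covariance
    estimated from n_resamples independent Gaussian resamples is
    biased high; rescale by (n_s - p - 2)/(n_s - 1), valid only for
    n_s > p + 2 where p is the data-vector dimension."""
    p = cov_hat.shape[0]
    if n_resamples <= p + 2:
        raise ValueError("need more resamples than data-vector bins + 2")
    factor = (n_resamples - p - 2) / (n_resamples - 1)
    return factor * np.linalg.inv(cov_hat)

# the factor shrinks toward zero as p approaches n_s: a 20-bin data
# vector with 25 resamples keeps only (25-22)/24 ~ 12% of the naive inverse
prec = debiased_precision(np.eye(20), 100)
```

This makes concrete why a data vector whose dimension is comparable with the number of JK resamples yields an unstable, or formally singular, inverse covariance.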
We showed that, with the full covariance, the signals retain ${\rm S/N}\ge 1$ out to about $100~h^{-1}\, {\rm Mpc}$ in the projected separation. Thus the use of an accurate covariance for the stacked lensing is highly desirable in order to attain the full potential of ongoing and upcoming surveys. In particular, such large-scale weak lensing signals are expected to contain useful information on fundamental physics such as the baryon acoustic oscillations, the primordial power spectrum, the primordial non-Gaussianity, and the neutrino mass. Exploring the improvement in cosmological parameters from the SDSS data enabled by the accurate covariance is our future work and will be presented elsewhere. Combining the stacked lensing and auto-correlation measurements for the same foreground tracers allows one to improve cosmological constraints by recovering the underlying dark matter clustering against the bias uncertainty \citep{Seljaketal:05,2013MNRAS.432.1544M,2015ApJ...806....2M}. This remains true even if the foreground objects are affected by the assembly bias uncertainty, provided the two measurements are properly combined \citep{2016PhRvL.116d1301M,McEwenWeinberg:16}. Furthermore, it would be interesting to combine the stacked lensing with the redshift-space distortion (RSD) measurement for the same foreground galaxies in order to improve cosmological constraints and to test gravity on cosmological scales, by calibrating small-scale systematic effects in the RSD measurement such as the Finger-of-God effect \citep{Hikageetal:13,2013MNRAS.435.2345H}.
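The cumulative significance quoted above follows from the data vector and its covariance as ${\rm S/N} = \sqrt{d^{\rm T} C^{-1} d}$. A small sketch with toy numbers, illustrating that positive bin-to-bin correlations (such as those the SSC term induces) lower the total significance relative to a diagonal-only covariance:

```python
import numpy as np

def cumulative_snr(signal, cov):
    """Total detection significance of a measured signal vector given
    its covariance: S/N = sqrt(d^T C^{-1} d)."""
    d = np.asarray(signal, dtype=float)
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# toy data: five equal bins, same per-bin variance in both cases
d = np.ones(5)
cov_diag = np.eye(5) * 0.04                              # uncorrelated bins
cov_corr = 0.04 * (0.5 * np.ones((5, 5)) + 0.5 * np.eye(5))  # 50% correlated
snr_diag = cumulative_snr(d, cov_diag)
snr_corr = cumulative_snr(d, cov_corr)
```

Here `snr_diag` exceeds `snr_corr`, so ignoring the off-diagonal terms (as a diagonal-only error budget does) overstates the significance of the measurement.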
A joint experiment of galaxy weak lensing and CMB weak lensing for the same foreground clusters/galaxies can be used to directly measure the lensing efficiency function, $\Sigma_{\rm cr}(z_{\rm l},z_{\rm s})$, without being affected by nonlinear structure formation including unknown baryonic physics, as well as to calibrate multiplicative systematic biases in the CMB lensing or galaxy weak lensing that are otherwise difficult to calibrate with either method alone \citep{DasSpergel:09,Schaanetal:16, 2016arXiv160505337M}. Once one starts to combine different clustering measurements, the dimension of the data vector quickly increases and the calibration of auto- and cross-covariances becomes more demanding. For this reason, it would be desirable to develop a hybrid method combining the mock light-cone catalog of large-scale structures containing various fields (weak lensing, halos, and velocity fields) with an analytical method to model the SSC terms in the different observables. This seems feasible, and will be our future work.
We present simultaneous mappings of $J=1-0$ emission of $^{12}$CO, $^{13}$CO, and C$^{18}$O molecules toward the whole disk ($8' \times 5'$ or 20.8 kpc $\times$ 13.0 kpc) of the nearby barred spiral galaxy NGC~2903 with the Nobeyama Radio Observatory 45-m telescope at an effective angular resolution of $20''$ (or 870 pc). We detected $^{12}$CO($J=1-0$) emission over the disk of NGC~2903. In addition, significant $^{13}$CO($J=1-0$) emission was found at the center and bar-ends, whereas we could not detect any significant C$^{18}$O($J=1-0$) emission. In order to improve the signal-to-noise ratio of the CO emission and to obtain accurate line ratios of $^{12}$CO($J=2-1$)/$^{12}$CO($J=1-0$) ($R_{2-1/1-0}$) and $^{13}$CO($J=1-0$)/$^{12}$CO($J=1-0$) ($R_{13/12}$), we performed a stacking analysis of our $^{12}$CO($J=1-0$), $^{13}$CO($J=1-0$), and archival $^{12}$CO($J=2-1$) spectra with velocity-axis alignment in nine representative regions of NGC~2903. We successfully obtained the stacked spectra of the three CO lines, and measured averaged $R_{2-1/1-0}$ and $R_{13/12}$ with high significance for all the regions. We found that both $R_{2-1/1-0}$ and $R_{13/12}$ differ among the regions, reflecting differences in the physical properties of the molecular gas, i.e., its density ($n_{\rm H_2}$) and kinetic temperature ($T_{\rm K}$). We determined $n_{\rm H_2}$ and $T_{\rm K}$ from $R_{2-1/1-0}$ and $R_{13/12}$ based on the large velocity gradient approximation. The derived $n_{\rm H_2}$ ranges from $\sim 1000$ cm$^{-3}$ (in the bar, bar-ends, and spiral arms) to 3700 cm$^{-3}$ (at the center), and the derived $T_{\rm K}$ ranges from 10 K (in the bar and spiral arms) to 30 K (at the center). We examined the dependence of star formation efficiencies (SFEs) on $n_{\rm H_2}$ and $T_{\rm K}$, and found a positive correlation between SFE and $n_{\rm H_2}$, with a correlation coefficient of $R^2 = 0.50$ for the least-squares power-law fit.
This suggests that molecular gas density governs the spatial variations in SFEs.
Molecular gas is one of the essential components of galaxies because it is closely related to star formation, a fundamental process of galaxy evolution. The observational study of molecular gas is thus indispensable for understanding both star formation in galaxies and galaxy evolution. However, the most abundant constituent of molecular gas, H$_2$, emits no electromagnetic radiation in cold molecular gas with a typical temperature of $\sim$ 10 K because it lacks a permanent dipole moment. Instead, rotational transition lines of $^{12}$CO, the second most abundant molecule, have been used as a tracer of molecular gas. For example, some extensive $^{12}$CO surveys of external galaxies, consisting of single pointings toward central regions and some mappings along the major axis, have been reported (e.g., \cite{braine1993}; \cite{young1995}; \cite{elfhag1996}). These studies provided new findings about the global properties of galaxies, such as the excitation conditions of molecular gas in galaxy centers and the radial distributions of molecular gas across galaxy disks. In order to further understand the relationship between molecular gas and star formation in galaxies, spatially resolved $^{12}$CO maps covering whole galaxy disks are necessary because star formation rates (SFRs) often differ between galaxy centers and disks. In particular, single-dish observations are essential to measure the \emph{total} molecular gas content in the observing beam, from the dense to the diffuse component, avoiding missing flux (e.g., \cite{caldu-primo2015}). So far, two major surveys of wide-area $^{12}$CO mapping toward nearby galaxies have been performed using multi-beam receivers mounted on large single-dish telescopes. One is the $^{12}$CO($J=1-0$) mapping survey of 40 nearby spiral galaxies performed with the Nobeyama Radio Observatory (NRO) 45-m telescope in the position-switch mode (\cite{kuno2007}, hereafter K07).
Their $^{12}$CO($J=1-0$) maps cover most of the optical disks of the galaxies at an angular resolution of 15$''$, and clearly show the two-dimensional distributions of molecular gas in galaxies. K07 found that the degree of central concentration of molecular gas is higher in barred spiral galaxies than in non-barred spiral galaxies. In addition, they found a correlation between the degree of central concentration and the bar strength adopted from \citet{Laurikainen2002} \footnote{\citet{Laurikainen2002} estimated the maxima of the tangential force and the averaged radial force at each radius in a bar using {\it JHK} band images, and defined the maximum of the ratio between the two forces as the bar strength.}; i.e., galaxies with stronger bars tend to exhibit a higher central concentration. This correlation suggests that stronger bars accumulate molecular gas toward central regions more efficiently, which may contribute to the onset of intense star formation at galaxy centers (i.e., higher SFRs than in disks). Using the $^{12}$CO($J=1-0$) data, \citet{sorai2012} investigated the physical properties of molecular gas in the barred spiral galaxy Maffei 2. They found that molecular gas in the bar ridge regions may be gravitationally unbound, suggesting that it is difficult for molecular gas to become dense and form stars in the bar. The other survey is the Heterodyne Receiver Array CO Line Extragalactic Survey performed with the IRAM 30-m telescope \citep{leroy2009}. They observed $^{12}$CO($J=2-1$) emission over the full optical disks of 48 nearby galaxies at an angular resolution of 13$''$, and found that the $^{12}$CO($J=2-1$)/$^{12}$CO($J=1-0$) line intensity ratio (hereafter $R_{2-1/1-0}$) typically ranges from 0.6 to 1.0 with an average value of 0.8. In addition, \citet{leroy2013} examined the quantitative relationship between the surface densities of molecular gas and SFR for 30 nearby galaxies at a spatial resolution of 1~kpc using the $^{12}$CO($J=2-1$) data.
They found a first-order linear correspondence between the surface densities of molecular gas and SFR, but also second-order systematic variations; i.e., the apparent molecular gas depletion time, defined as the ratio of the surface density of molecular gas to that of SFR, becomes shorter with decreasing stellar mass, metallicity, and dust-to-gas ratio. They suggest that this can be explained by a CO-to-H$_2$ conversion factor ($X_{\rm CO}$) that depends on dust shielding. However, such global CO maps of galaxies have raised a new question: the cause of the spatial variation in star formation efficiencies (SFEs), defined as the SFR per unit gas mass \footnote{In this paper, a correction factor to account for helium and other heavy elements is not included in the calculation of molecular gas mass and SFE.}. It is reported that SFEs differ not only among galaxies (e.g., \cite{young1996}) but also among locations/regions within a galaxy (e.g., \cite{muraoka2007}); i.e., higher SFEs are often observed in galaxy mergers than in normal spiral galaxies, and in nuclear star-forming regions than in galaxy disks. Some observational studies based on HCN emission, an excellent dense gas tracer, suggest that SFEs increase with increasing molecular gas density (or dense gas fraction) in galaxies (e.g., \cite{gao2004}; \cite{gao2007}; \cite{muraoka2009}; \cite{usero2015}), but the cause of the spatial variation in SFEs is still an open question because HCN emission in galaxy disks is too weak to map except in some gas-rich spiral galaxies (e.g., M~51; \cite{chen2015}; \cite{bigiel2016}). Instead, isotopologues of the CO molecule are promising probes of molecular gas density.
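As a concrete illustration of the definitions above, SFE is the SFR per unit gas mass and the depletion time is its reciprocal; the surface densities below are illustrative round numbers, not measurements (and, as in the text, no helium correction is applied):

```python
# SFE = Sigma_SFR / Sigma_gas and depletion time t_dep = 1 / SFE.
# Illustrative surface densities only (not measured values):
sigma_gas = 20.0      # Msun / pc^2
sigma_sfr = 8.0e-3    # Msun / yr / kpc^2
sfe = sigma_sfr / (sigma_gas * 1.0e6)   # per yr (1 kpc^2 = 1e6 pc^2)
t_dep = 1.0 / sfe                       # yr; here 2.5e9 yr, i.e. Gyr scale
```

The Gyr-scale depletion time that falls out of these round numbers is the sense in which a "second-order variation" in SFE corresponds to regions consuming their gas faster or slower than this typical timescale.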
In particular, $^{13}$CO($J=1-0$) is thought to be optically thin and thus to trace denser molecular gas ($\sim 10^{3-4} {\rm cm}^{-3}$) than $^{12}$CO($J=1-0$), which is optically thick and traces relatively diffuse molecular gas ($\sim 10^{2-3} {\rm cm}^{-3}$). Therefore, the relative intensity between $^{13}$CO($J=1-0$) and $^{12}$CO($J=1-0$) is sensitive to the physical properties of molecular gas. For example, spatial variations in the $^{13}$CO($J=1-0$)/$^{12}$CO($J=1-0$) intensity ratio (hereafter $R_{13/12}$) have been observed in nearby galaxy disks (e.g., \cite{sakamoto1997}; \cite{tosaki2002}; \cite{hirota2010}). Such variations in $R_{13/12}$, typically ranging from 0.05 to 0.20, are interpreted as variations in molecular gas density; i.e., $R_{13/12}$ increases with increasing molecular gas density. However, some observations suggest that $R_{13/12}$ in the central regions of nearby galaxies is lower than in disk regions (e.g., \cite{paglione2001}; \cite{tosaki2002}; \cite{hirota2010}; \cite{watanabe2011}), although the central regions of galaxies often show intense star formation activity, suggesting higher molecular gas density. The cause of the low $R_{13/12}$ in central regions is thought to be the high temperature of molecular gas due to heating by UV radiation from numerous young massive stars. Such a degeneracy between the density and temperature of molecular gas in a single line ratio can be broken by using two (or more) molecular line ratios together with a theoretical calculation of the excitation of molecular gas, such as the large velocity gradient (LVG) model \citep{scoville1974, goldreich1974}. For example, the density and kinetic temperature of giant molecular clouds (GMCs) were determined using $R_{13/12}$ and the $^{12}$CO($J=3-2$)/$^{12}$CO($J=1-0$) ratio for the Large Magellanic Cloud \citep{minamidani2008} and M~33 \citep{muraoka2012}, and using $R_{13/12}$ and $R_{2-1/1-0}$ for the spiral arm of M~51 \citep{schinnerer2010}.
This method of determining molecular gas density is useful for investigating the cause of the variation in SFEs. Thus the dependence of SFEs on molecular gas density should be investigated for various galaxies at high angular resolution based on multiple line ratios including $R_{13/12}$. In this paper, we investigate the relationship between SFE and molecular gas density within the nearby barred spiral galaxy NGC~2903 using an archival $^{12}$CO($J=2-1$) map combined with $^{12}$CO($J=1-0$) and $^{13}$CO($J=1-0$) maps newly obtained by the CO Multi-line Imaging of Nearby Galaxies (COMING) project with the NRO 45-m telescope. NGC~2903 is a gas-rich galaxy exhibiting bright nuclear star formation (e.g., \cite{wynn1985}; \cite{simons1988}; \cite{alonso2001}; \cite{yukita2012}). The distance to NGC~2903 is estimated to be 8.9 Mpc \citep{drozdovsky2000}; thus the effective angular resolution of 20$''$ for the on-the-fly (OTF) mapping with the NRO 45-m corresponds to 870 pc. This enables us to resolve the major structures within NGC~2903, such as the center, bar, and spiral arms, although its inclination of \timeform{65D} \citep{deblok2008} is rather high. In addition, NGC~2903 has a rich archival multi-wavelength data set; i.e., not only the $^{12}$CO($J=2-1$) map to examine $R_{2-1/1-0}$ but also H$\alpha$ and infrared images to calculate SFRs are available. Thus this galaxy is a favorable target for examining the cause of the variation in SFE in terms of molecular gas density. Basic parameters of NGC~2903 are summarized in table~1. The structure of this paper is as follows: We give an overview of the COMING project and describe the details of the CO observations and data reduction for NGC~2903 in section 2. We then present the results of the observations, i.e., spectra and velocity-integrated intensity maps of $^{12}$CO($J=1-0$) and $^{13}$CO($J=1-0$) emission, in section 3.
We obtain averaged spectra of $^{12}$CO($J=1-0$), $^{13}$CO($J=1-0$), and $^{12}$CO($J=2-1$) emission for nine representative regions, and measure averaged $R_{13/12}$ and $R_{2-1/1-0}$ for each region in section 4.1. We determine molecular gas density and kinetic temperature for the center, bar, bar-ends, and spiral arms using $R_{13/12}$ and $R_{2-1/1-0}$ based on the LVG approximation in section 4.2. Finally, we investigate the cause of the variation in SFE by examining the dependence of SFE on molecular gas density and kinetic temperature.
\subsection{Velocity-axis alignment stacking of CO spectra} As described in section 3.3, the spatial distribution of $R_{13/12}$ appears noisy and unclear due to the poor S/N, although we could obtain the spatial distribution of $I_{\rm 13CO(1-0)}$. In order to improve the S/N of weak emission such as $^{13}$CO($J=1-0$), a stacking analysis of CO spectra with velocity-axis alignment is a promising method. The stacking technique for CO spectra in external galaxies was originally demonstrated by Schruba et al. (2011, 2012). Since the observed velocities at different positions within a galaxy differ due to its rotation, simple stacking smears the spectrum. In order to overcome this difficulty, \citet{schruba2011} demonstrated the velocity-axis alignment of CO spectra from different regions of a galaxy disk according to the mean H\emissiontype{I} velocity. They stacked the velocity-axis-aligned CO spectra, and successfully confirmed very weak $^{12}$CO($J=2-1$) emission ($<$ 1 K km s$^{-1}$) with high significance in H\emissiontype{I}-dominated outer-disk regions of nearby spiral galaxies. In addition, \citet{schruba2012} applied this stacking technique to perform a sensitive search for weak $^{12}$CO($J=2-1$) emission in dwarf galaxies. Furthermore, \citet{morokuma-matsui2015} applied the stacking technique to $^{13}$CO($J=1-0$) emission in the optical disk of the nearby barred spiral galaxy NGC~3627. By stacking with velocity-axis alignment based on the mean $^{12}$CO($J=1-0$) velocity, they obtained high-S/N $^{13}$CO($J=1-0$) spectra, improved by a factor of up to 3.2 compared to normal (without velocity-axis alignment) stacking. These earlier studies clearly show that the stacking analysis is very useful for detecting weak molecular lines. In this section, we employ the same stacking technique as \citet{morokuma-matsui2015} to improve the S/N of $^{13}$CO($J=1-0$) emission and to obtain more accurate line ratios.
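The velocity-axis alignment described above amounts to shifting each spectrum by its local mean velocity before averaging, so that all lines land at $v=0$. A minimal sketch (function and variable names are ours, not from the COMING pipeline):

```python
import numpy as np

def stack_aligned(spectra, velocity, mean_velocities):
    """Velocity-axis-aligned stacking: shift each spectrum so that its
    local mean velocity (e.g. from the 12CO(1-0) velocity field) lands
    at v = 0, then average.  Without the shift, galaxy rotation smears
    the stacked line over the full range of rotation velocities.

    spectra         : (N_pos, N_chan) brightness temperatures
    velocity        : (N_chan,) common velocity axis [km/s]
    mean_velocities : (N_pos,) mean line-of-sight velocity per position
    """
    shifted = [np.interp(velocity, velocity - v0, spec,
                         left=0.0, right=0.0)
               for spec, v0 in zip(spectra, mean_velocities)]
    return np.mean(shifted, axis=0)

# toy demo: identical Gaussian lines at different rotation velocities
v = np.linspace(-300.0, 300.0, 601)
centers = [-150.0, 0.0, 150.0]
spectra = [np.exp(-0.5 * ((v - c) / 15.0) ** 2) for c in centers]
stacked = stack_aligned(spectra, v, centers)
# the aligned stack peaks at v = 0 with the single-line amplitude,
# whereas a naive average of the unshifted spectra would be smeared
```

The same idea also explains the flat-topped stacked profiles seen in the bar: where the velocity field changes rapidly within a region, the single mean velocity per position aligns the lines imperfectly.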
Based on our $I_{\rm 12CO(1-0)}$ image (figure~3), we separated NGC~2903 into nine regions according to its major structures; i.e., center, northern bar, southern bar, northern bar-end, southern bar-end, northern arm, southern arm, inter-arm, and outer-disk. The left panel of figure~8 shows the separation of the regions overlaid on the grey-scale map of $I_{\rm 12CO(1-0)}$. For each region, we stacked the $^{12}$CO($J=1-0$), $^{13}$CO($J=1-0$), and $^{12}$CO($J=2-1$) spectra with velocity-axis alignment based on the intensity-weighted mean velocity field calculated from our $^{12}$CO($J=1-0$) data (right panel of figure~8). We successfully obtained the stacked CO spectra shown in figure~9. The S/N of each CO line is dramatically improved, and thus we could confirm significant $^{13}$CO($J=1-0$) emission in all the regions. We found differences in the line shapes of the stacked CO spectra among the regions. In particular, the stacked $^{12}$CO spectra in the bar show a flat peak over a velocity width of 100 -- 150 km s$^{-1}$. This is presumably due to the rapid velocity change in the bar, which makes the velocity-axis alignment difficult. We summarize the averaged line intensities and line ratios for each region in table~2. The averaged $R_{2-1/1-0}$ shows the highest value of 0.92 at the center, and moderate values of 0.7 -- 0.8 at both bar-ends and in the northern arm. A slightly lower $R_{2-1/1-0}$ of 0.6 -- 0.7 is observed in the bar, southern arm, inter-arm, and outer-disk. Such a variation in $R_{2-1/1-0}$, ranging from 0.6 to 1.0 within NGC~2903, is quite consistent with that observed in nearby galaxies (e.g., \cite{leroy2009}). However, the highest $R_{13/12}$ of 0.19 is observed not at the center but in the northern arm. The central $R_{13/12}$ of 0.11 is similar to those in the other regions (0.08 -- 0.13) except for the northern arm and outer-disk ($\sim$ 0.04).
The typical $R_{13/12}$ of $\sim$ 0.1 is frequently observed in nearby galaxies (e.g., \cite{paglione2001}; \cite{vila2015}), but slightly higher than the averaged $R_{13/12}$ of 0.04 -- 0.09 in representative regions of NGC~3627 \citep{morokuma-matsui2015}. \subsection{Derivation of physical properties and their comparison with star formation} \subsubsection{LVG calculation for stacked CO spectra} Using $R_{2-1/1-0}$ and $R_{13/12}$, we derive the averaged physical properties of molecular gas, namely its density ($n_{\rm H_2}$) and kinetic temperature ($T_{\rm K}$), in seven regions (center, northern bar, southern bar, northern bar-end, southern bar-end, northern arm, and southern arm) of NGC~2903 based on the LVG approximation. Some assumptions are required to perform the LVG calculation: the molecular abundances $Z$($^{12}$CO) = [$^{12}$CO]/[H$_2$] and [$^{12}$CO]/[$^{13}$CO], and the velocity gradient $dv/dr$. First, we fix $Z$($^{12}$CO) at $1.0 \times 10^{-5}$ and $dv/dr$ at 1.0 km s$^{-1}$ pc$^{-1}$; i.e., the $^{12}$CO abundance per unit velocity gradient $Z$($^{12}$CO)/($dv/dr$) was assumed to be $1.0 \times 10^{-5}$ (km s$^{-1}$ pc$^{-1}$)$^{-1}$. This is the same $Z$($^{12}$CO)/($dv/dr$) as assumed for GMCs in M~33 \citep{muraoka2012}. Next, we determine the [$^{12}$CO]/[$^{13}$CO] abundance ratio to be assumed in this study by considering earlier studies. \citet{langer1990} found a systematic gradient in the $^{12}$C/$^{13}$C isotopic ratio across our Galaxy, from $\sim$ 30 in the inner part at 5 kpc to $\sim$ 70 at 12 kpc, with a galactic-center value of 24. For external galaxies, the reported $^{12}$C/$^{13}$C isotopic ratios in the central regions are 40 for NGC~253 \citep{henkel1993}, 50 for NGC~4945 \citep{henkel1994}, $> 40$ for M~82, and $> 30$ for IC~342 \citep{henkel1998}. \citet{mao2000} reported a higher [$^{12}$CO]/[$^{13}$CO] abundance ratio of 50 -- 75 in the central region of M~82.
\citet{martin2010} also reported higher $^{12}$C/$^{13}$C isotopic ratios of $>$ 50 -- 100 in the central regions of M~82 and NGC~253. In summary, the reported $^{12}$C/$^{13}$C isotopic (and [$^{12}$CO]/[$^{13}$CO] abundance) ratios in nearby galaxy centers (30 -- 100) are typically higher than that in the inner 5 kpc of our Galaxy (24 -- 30), but the cause of such discrepancies in $^{12}$C/$^{13}$C and [$^{12}$CO]/[$^{13}$CO] between our Galaxy and external galaxies is still unresolved. Here, we assumed an intermediate [$^{12}$CO]/[$^{13}$CO] abundance ratio of 50 in NGC~2903, without any gradient across its disk, for our LVG calculation. Note that we perform an additional LVG calculation for the center of NGC~2903 assuming [$^{12}$CO]/[$^{13}$CO] abundance ratios of 30 and 70 to evaluate how the variation in the assumed [$^{12}$CO]/[$^{13}$CO] abundance ratio affects the results of the LVG calculation. Figure~10 shows the results of the LVG calculation for each region in NGC~2903. The thin line indicates a curve of constant $R_{2-1/1-0}$ as a function of $n_{\rm H_2}$ and $T_{\rm K}$, and the thick line indicates that of constant $R_{13/12}$. We can determine both $n_{\rm H_2}$ and $T_{\rm K}$ at the point where the two curves intersect. Under the assumption of a [$^{12}$CO]/[$^{13}$CO] abundance ratio of 50, the derived $n_{\rm H_2}$ ranges from $\sim$1000 cm$^{-3}$ (in the disk; i.e., bar, bar-ends, and spiral arms) to 3700 cm$^{-3}$ (at the center), and the derived $T_{\rm K}$ ranges from 10 K (in the spiral arms) to 30 K (at the center). Note that both $n_{\rm H_2}$ and $T_{\rm K}$ vary depending on the assumed [$^{12}$CO]/[$^{13}$CO] abundance ratio; at the center of NGC~2903, an abundance ratio of 30 yields a lower $n_{\rm H_2}$ of 1800 cm$^{-3}$ and a higher $T_{\rm K}$ of 38 K, whereas an abundance ratio of 70 yields a higher $n_{\rm H_2}$ of 5900 cm$^{-3}$ and an intermediate $T_{\rm K}$ of 29 K.
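In code, the graphical solution of figure~10 amounts to a grid search for the $(n_{\rm H_2}, T_{\rm K})$ point where both modeled ratios match the observed ones. The sketch below is schematic: `model_r21` and `model_r1312` are placeholders for a full LVG radiative-transfer calculation (e.g., a code such as RADEX), and all names are ours, not from this paper.

```python
import numpy as np

def solve_lvg_grid(r21_obs, r1312_obs, model_r21, model_r1312, n_grid, t_grid):
    """Return the (n_H2, T_K) grid point minimizing the mismatch between
    observed and modeled R_{2-1/1-0} and R_{13/12}.

    model_r21, model_r1312 : callables (n_H2, T_K) -> line ratio,
    standing in here for a full LVG calculation.
    """
    nn, tt = np.meshgrid(n_grid, t_grid, indexing="ij")
    chi2 = ((model_r21(nn, tt) - r21_obs) ** 2
            + (model_r1312(nn, tt) - r1312_obs) ** 2)
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return n_grid[i], t_grid[j]
```

In practice one would weight each term by the observational uncertainty of the corresponding ratio and check that the two iso-ratio curves actually cross within the searched grid.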
$n_{\rm H_2}$ appears to increase in proportion to the [$^{12}$CO]/[$^{13}$CO] abundance ratio. This trend in $n_{\rm H_2}$ can be naturally explained by considering the optical depths of the $^{12}$CO and $^{13}$CO emission. $^{12}$CO is always optically thick, and thus its emission emerges from the diffuse envelopes of dense gas clouds, while $^{13}$CO emission emerges from further within these clouds due to its lower abundance. Since an increase in the assumed [$^{12}$CO]/[$^{13}$CO] abundance ratio means that $^{13}$CO becomes more optically thin, the $^{13}$CO emission emerges from deeper within the dense gas clouds and thus probes denser gas. The derived physical properties, $n_{\rm H_2}$ and $T_{\rm K}$, are summarized in table~3. We compare the derived $n_{\rm H_2}$ and $T_{\rm K}$ in NGC~2903 with those determined in other external galaxies. \citet{muraoka2012} determined $n_{\rm H_2}$ and $T_{\rm K}$ for GMCs associated with the giant H\emissiontype{II} region NGC~604 in M~33 at a spatial resolution of 100 pc using three molecular lines, $^{12}$CO($J=1-0$), $^{13}$CO($J=1-0$), and $^{12}$CO($J=3-2$), based on the LVG approximation. The derived $n_{\rm H_2}$ and $T_{\rm K}$ are 800 -- 2500 cm$^{-3}$ and 20 -- 30 K, respectively, similar to our results for NGC~2903 in spite of the difference in spatial resolution. However, \citet{schinnerer2010} obtained different physical properties for GMCs in the spiral arms of M~51. They performed an LVG analysis using $R_{13/12}$ and $R_{2-1/1-0}$ at a spatial resolution of 120 -- 180 pc. For the case of a constant $dv/dr$ = 1.0 km s$^{-1}$ pc$^{-1}$, the derived $T_{\rm K}$ ranges from 10 to 50 K, similar to our results for NGC~2903, whereas $n_{\rm H_2}$ ranges from 100 to 400 cm$^{-3}$, 5 -- 10 times lower than that in the disk of NGC~2903, even though the values of $R_{2-1/1-0}$ and $R_{13/12}$ in M~51 are not so different from those in NGC~2903.
This is presumably due to the differences in the assumed $Z$($^{12}$CO) and [$^{12}$CO]/[$^{13}$CO] abundance ratio. The authors assumed a $Z$($^{12}$CO) of 8.0 $\times 10^{-5}$, which is higher than that assumed in our study, and a lower [$^{12}$CO]/[$^{13}$CO] abundance ratio of 30. Under the LVG approximation with $Z$($^{12}$CO) = 8.0 $\times 10^{-5}$, we found that the derived $n_{\rm H_2}$ is typically $\sim$3 times lower than that with $Z$($^{12}$CO) = 1.0 $\times 10^{-5}$. Physically, a high $Z$($^{12}$CO) means abundant $^{12}$CO molecules within the molecular gas. In this condition, the optical depth of the $^{12}$CO line also increases, and thus the photon-trapping effect in molecular clouds becomes effective. Since this effect contributes to the excitation of the $^{12}$CO molecule, the effective critical density of the $^{12}$CO line decreases. In other words, since $^{12}$CO is easily excited to upper $J$ levels even at low molecular gas density, $n_{\rm H_2}$ at a given $R_{2-1/1-0}$ decreases. As a result, the LVG analysis with $Z$($^{12}$CO) = 8.0 $\times 10^{-5}$ yields a lower $n_{\rm H_2}$. In addition, the low [$^{12}$CO]/[$^{13}$CO] abundance ratio of 30 leads to a lower derived molecular gas density, as described above. Therefore, the difference in the derived $n_{\rm H_2}$ between NGC~2903 and M~51 can be explained by the difference in the assumed $Z$($^{12}$CO) and [$^{12}$CO]/[$^{13}$CO] abundance ratio. \subsubsection{Comparison of SFE with density and kinetic temperature of molecular gas} As described in section 1, SFEs often differ between galaxy centers and disks. Since NGC~2903 has a bright star-forming region at its center, the SFE there is expected to be higher than those in other regions.
Here, we calculate SFEs for the seven regions where the averaged physical properties of molecular gas are obtained, and compare SFE with $n_{\rm H_2}$ and $T_{\rm K}$ in each region to examine what parameter controls SFE in galaxies. SFE is expressed using the surface density of SFR ($\Sigma_{\rm SFR}$) and that of molecular hydrogen ($\Sigma_{\rm H_2}$) as follows: \begin{eqnarray} \left[ \frac{\rm SFE}{\rm yr^{-1}} \right]= \left( \frac{\Sigma_{\rm SFR}}{M_{\odot}\,{\rm yr^{-1}\,pc^{-2}}} \right) {\displaystyle \biggl/} \left( \frac{\Sigma_{\rm H_2}}{M_{\odot}\,{\rm pc^{-2}}} \right) \end{eqnarray} We calculated extinction-corrected SFRs from a linear combination of H$\alpha$ and $Spitzer$/MIPS 24 $\micron$ luminosities using the following formula \citep{kennicutt1998a, kennicutt1998b, calzetti2007}: \begin{eqnarray} {\Sigma_{\rm SFR}} = 7.9 \times 10^{-42} \left( \frac{L_{{\rm H} \alpha } + 0.031 \times L_{24 \mu {\rm m}}}{{\rm erg} \,\, {\rm s}^{-1}} \right) \frac{{\rm cos} \ i}{\Omega} \,\,\,\,\,\,\,\, M_{\odot} \,\, {\rm yr}^{-1} \,\, {\rm pc}^{-2}, \end{eqnarray} where $L_{{\rm H} \alpha}$ and $L_{24 \mu {\rm m}}$ denote the H$\alpha$ and 24 $\micron$ luminosities, respectively, $i$ is the inclination of \timeform{65D} for NGC~2903, and $\Omega$ is the covered area of each region (in units of pc$^{2}$). We used archival continuum-subtracted H$\alpha$ and 24 $\micron$ images of NGC~2903 obtained by \citet{hoopes2001} and the Local Volume Legacy survey project \citep{kennicutt2008, dale2009}, respectively. In addition, we calculated $\Sigma_{\rm H_2}$ using $I_{\rm 12CO(1-0)}$ as follows: \begin{eqnarray} \left[ \frac{\Sigma_{\rm H_2}}{M_{\odot}\,{\rm pc^{-2}}} \right] &=& 2.89 \times {\rm cos} \ i \left( \frac{I_{\rm 12CO(1-0)}}{{\rm K\,\,km\,\,s^{-1}}} \right) \times \left\{ \frac{X_{\rm CO}}{1.8 \times 10^{20}\,{\rm cm}^{-2}\,({\rm K\,\,km\,\,s^{-1}})^{-1}} \right\} . 
\end{eqnarray} Here, we adopted a constant $X_{\rm CO}$ value of $1.8 \times 10^{20}$ ${\rm cm}^{-2}$ (K km s$^{-1}$)$^{-1}$ \citep{dame2001}. We found that the SFE at the center, $6.8 \times 10^{-9}$ yr$^{-1}$, is 2 -- 4 times higher than those in the other regions. The calculated SFEs are listed in table~3. We examined the dependence of SFE on $n_{\rm H_2}$ and $T_{\rm K}$ as shown in figure~11. We found that SFE positively correlates with both $n_{\rm H_2}$ and $T_{\rm K}$. However, the trend of these correlations might change, because variations in the [$^{12}$CO]/[$^{13}$CO] abundance ratio and $X_{\rm CO}$ can affect the estimates of $n_{\rm H_2}$, $T_{\rm K}$, and SFE. In fact, both the [$^{12}$CO]/[$^{13}$CO] abundance ratio and $X_{\rm CO}$ often differ between galaxy centers and disks. Therefore, we examine how variations in the [$^{12}$CO]/[$^{13}$CO] abundance ratio and $X_{\rm CO}$ alter the estimates of $n_{\rm H_2}$, $T_{\rm K}$, and SFE at the center of NGC~2903. We first consider the effect of the variation in the [$^{12}$CO]/[$^{13}$CO] abundance ratio on the estimates of $n_{\rm H_2}$ and $T_{\rm K}$. As described in section 4.2.1, the $^{12}$C/$^{13}$C abundance ratio in our Galaxy is reported to increase with galactocentric radius \citep{langer1990}. Thus we examine the case of a lower [$^{12}$CO]/[$^{13}$CO] abundance ratio at the center of NGC~2903. If we adopt a [$^{12}$CO]/[$^{13}$CO] abundance ratio of 30 at the center, $n_{\rm H_2}$ and $T_{\rm K}$ are estimated to be 1800 cm$^{-3}$ and 38 K, respectively. This $n_{\rm H_2}$ value is slightly lower than that in the northern arm, but the positive correlation between SFE and $n_{\rm H_2}$ is preserved. Similarly, the $T_{\rm K}$ of 38 K does not break the positive correlation between SFE and $T_{\rm K}$. Next, we consider the effect of the variation in $X_{\rm CO}$ on the estimate of SFE.
In the central regions of disk galaxies, $X_{\rm CO}$ drops (i.e., CO emission becomes more luminous at a given gas mass) by a factor of 2 -- 3 or more (e.g., \cite{nakai1995}; \cite{regan2000}), including in the Galactic Center (e.g., \cite{oka1998}; \cite{dahmen1998}). Such a trend is presumably applicable to NGC~2903 considering the relationship between $X_{\rm CO}$ and metallicity, 12 + log(O/H). In general, $X_{\rm CO}$ decreases with increasing metallicity because the CO abundance should be proportional to the carbon and oxygen abundances (e.g., \cite{arimoto1996}; \cite{boselli2002}). In addition, metallicity is reported to decrease with galactocentric distance in NGC~2903 (e.g., \cite{dack2012}; \cite{pilyugin2014}). These observational facts suggest an $X_{\rm CO}$ smaller by a factor of 1.5 -- 2 at the center than in the disk of NGC~2903, which yields a smaller gas mass and thus a higher SFE than the present one shown in table~3 and figure~11. However, even if an SFE higher by a factor of 2 is adopted for the center, the global trend of the correlations shown in figure~11 does not change much, because the original SFE at the center is already the highest in NGC~2903. Therefore, we conclude that variations in the [$^{12}$CO]/[$^{13}$CO] abundance ratio and $X_{\rm CO}$ do \textit{not} affect the correlations of SFE with $n_{\rm H_2}$ and $T_{\rm K}$ in NGC~2903. Note that a smaller $X_{\rm CO}$ corresponds to a larger $Z$($^{12}$CO) at the center of NGC~2903, but a larger $dv/dr$ is also suggested, because the typical velocity width at the center (250 -- 300 km s$^{-1}$) is wider than those in other regions (150 -- 200 km s$^{-1}$) due to the rapid rotation of molecular gas near the galaxy center. Thus we consider that $Z$($^{12}$CO)/($dv/dr$) itself does not differ between the center and the disk of NGC~2903, even if $Z$($^{12}$CO) at the center is larger than that in the disk.
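As a concrete sketch, equations (1)--(3) can be combined as below (the function names and the toy numbers are ours, for illustration only). Note that the inclination factor $\cos i$ enters both $\Sigma_{\rm SFR}$ and $\Sigma_{\rm H_2}$ and therefore cancels in the SFE, while SFE scales inversely with the adopted $X_{\rm CO}$.

```python
import numpy as np

def sigma_sfr(l_halpha, l_24um, incl_deg, area_pc2):
    """Sigma_SFR [Msun/yr/pc^2] from eq. (2): extinction-corrected
    H-alpha + 24 um calibration (Calzetti et al. 2007)."""
    return (7.9e-42 * (l_halpha + 0.031 * l_24um)
            * np.cos(np.radians(incl_deg)) / area_pc2)

def sigma_h2(i_co10, incl_deg, x_co=1.8e20):
    """Sigma_H2 [Msun/pc^2] from eq. (3), for a given X_CO factor."""
    return 2.89 * np.cos(np.radians(incl_deg)) * i_co10 * (x_co / 1.8e20)

def sfe(l_halpha, l_24um, i_co10, incl_deg, area_pc2, x_co=1.8e20):
    """SFE [1/yr] from eq. (1); independent of inclination."""
    return (sigma_sfr(l_halpha, l_24um, incl_deg, area_pc2)
            / sigma_h2(i_co10, incl_deg, x_co))
```

Halving $X_{\rm CO}$ at the center, as suggested by the metallicity gradient, doubles the SFE there, which is the factor-of-2 correction discussed above.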
Finally, we examine the coefficient of determination $R^2$ of the least-squares power-law fit between SFE and $n_{\rm H_2}$ and that between SFE and $T_{\rm K}$, shown in figure~11. We found that the former is 0.50 and the latter is 0.08. The significant correlation between SFE and $n_{\rm H_2}$, with an $R^2$ of 0.50, suggests that molecular gas density governs the spatial variations in SFE. This conclusion is consistent with earlier studies based on HCN emission (e.g., \cite{gao2004}; \cite{gao2007}; \cite{muraoka2009}; \cite{usero2015}). In order to confirm whether such a relationship between SFE and $n_{\rm H_2}$ is applicable to other galaxies, we will perform further analysis toward other COMING sample galaxies, considering variations in the [$^{12}$CO]/[$^{13}$CO] abundance ratio, $X_{\rm CO}$, and $Z$($^{12}$CO)/($dv/dr$), in forthcoming papers.
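The quoted $R^2$ values correspond to a least-squares linear fit in log-log space, i.e., a power law ${\rm SFE} \propto n_{\rm H_2}^{b}$. A minimal sketch of that computation (our own implementation, not the authors' code):

```python
import numpy as np

def powerlaw_r2(x, y):
    """Coefficient of determination R^2 of a least-squares power-law fit
    y = a * x**b, obtained as a linear fit in log-log space."""
    lx, ly = np.log10(x), np.log10(y)
    slope, intercept = np.polyfit(lx, ly, 1)
    resid = ly - (intercept + slope * lx)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((ly - ly.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

An exact power law returns $R^2 = 1$; the fitted slope $b$ would give the power-law index of the SFE--$n_{\rm H_2}$ relation.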
In this paper, we investigate the relationship between star formation and structure, using a mass-complete sample of 27,893 galaxies at 0.5$<$$z$$<$2.5 selected from 3D-HST. We confirm that star-forming galaxies are larger than quiescent galaxies at fixed stellar mass (M$_{\star}$). However, in contrast with some simulations, there is only a weak relation between star formation rate (SFR) and size within the star-forming population: when dividing into quartiles based on residual offsets in SFR, we find that the sizes of star-forming galaxies in the lowest quartile are 0.27$\pm$0.06 dex smaller than those in the highest quartile. We show that 50\% of star formation in galaxies at fixed M$_{\star}$ takes place within a narrow range of sizes (0.26 dex). Taken together, these results suggest that there is an abrupt cessation of star formation after galaxies attain particular structural properties. Confirming earlier results, we find that central stellar density within a 1 kpc fixed physical radius is the key parameter connecting galaxy morphology and star formation histories: galaxies with high central densities are red and have increasingly lower SFR/M$_{\star}$, whereas galaxies with low central densities are blue and have a roughly constant (higher) SFR/M$_{\star}$ at a given redshift. We find remarkably little scatter in the average trends and a strong evolution of $>$0.5 dex in the central density threshold correlated with quiescence from $z$$\sim$0.7-2.0. Neither a compact size nor a high $n$ is sufficient to assess the likelihood of quiescence for the average galaxy; rather, the combination of these two parameters together with M$_{\star}$ results in a unique quenching threshold in central density/velocity.
\label{sec:intro} Despite decades of deep and wide extragalactic surveys, we still do not understand the astrophysics behind the empirical relationship linking the star formation histories of galaxies and their morphologies. Observations show that galaxies with evolved stellar populations, so-called ``quiescent'' galaxies, have significantly smaller sizes and more concentrated light profiles than actively star-forming galaxies with a similar stellar mass and redshift \citep[e.g.,][]{Shen03, Trujillo07, Cimatti08, Kriek09b, Williams10, Wuyts11b, vanderWel14}. Although we know that galaxies must shut down their star formation and migrate from the star-forming to quiescent population, there is much to be learned about the physical process(es) that are primarily responsible for this structural evolution and the quenching of star formation. One way to study the connection between these two populations of galaxies is through correlations between specific star formation rate (sSFR$\equiv$SFR/M$_{\star}$) and parameters describing various physical properties of galaxies, such as stellar mass \citep[e.g.,][]{Whitaker14b,Schreiber15}, surface density \citep[e.g.,][]{Franx08,Barro13}, bulge mass \citep[e.g.,][]{Lang14,Bluck14,Schreiber16}, or environment \citep[e.g.,][]{Elbaz07}. The inverse of the sSFR defines a timescale for the formation of the stellar population of a galaxy, where lower sSFRs correspond to older stellar populations for a constant or single-burst star formation history. In this sense, sSFR is a relatively straightforward diagnostic of quiescence that can be directly linked to other physical properties of galaxies. With a sample of galaxies selected from the Sloan Digital Sky Survey (SDSS), \citet{Brinchmann04} were the first to show that there is a turnover in the sSFR of galaxies at higher stellar surface mass densities \citep[also studied in the context of a turnover in D$_{\mathrm{n}}$(4000) by][]{Kauffmann03c}.
The redshift evolution of this correlation was later presented in \citet{Franx08} \citep[see also][]{Maier09}. Both works identified a threshold surface density at each redshift interval: below this threshold the sSFRs are high with little variation, and above the threshold density galaxies have low sSFRs. \citet{Franx08} reported that the density threshold increases with redshift, at least out to $z$=3. As stellar density and velocity dispersion are closely related \citep[e.g.,][]{Wake12, Fang13}, observations therefore indicate that galaxies are statistically more likely to be quiescent once they have surpassed a threshold in either density or velocity dispersion. Studies of early-type galaxies at $z$$\sim$0 further show that at fixed stellar mass, velocity dispersion is strongly correlated with other physical properties: galaxies with increased velocity dispersion and thereby more compact sizes are on average older, more metal-rich, with lower molecular gas fractions, and more alpha-enhanced than their larger, lower velocity dispersion counterparts \citep{Thomas05, Cappellari13, McDermid15}. \citet{Bezanson09} showed that distant compact galaxies have similar densities to the central regions of these local early-type galaxies by comparing their average stellar density profiles within a constant physical radius of 1 kpc \citep[see also][]{vanDokkum10,Saracco12,vanDokkum14,Tacchella15b}. This study was the first to present a plausible link between these high redshift galaxies and where they end up in the local universe, but it is yet unclear what causes the quenching in the first place. This work does, however, suggest that it may be more robust to define a quenching threshold in surface density within the \emph{central} 1 kpc, as opposed to the half-light radius.
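For reference, the projected stellar mass (and hence the mean surface density) within a fixed 1 kpc aperture follows analytically from a S\'{e}rsic profile. The sketch below uses the Ciotti \& Bertin (1999) approximation for the S\'{e}rsic $b_n$ coefficient and assumes light traces stellar mass; the function name and this simplification are ours (the paper discussed here works with deprojected three-dimensional central densities).

```python
import numpy as np
from scipy.special import gammainc

def sigma_1kpc(m_star, r_e_kpc, n):
    """Mean projected stellar surface density within r = 1 kpc
    [Msun / kpc^2] for a Sersic(n) profile of total mass m_star and
    effective (half-light) radius r_e_kpc."""
    # Ciotti & Bertin (1999) approximation to the Sersic b_n coefficient
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    # Regularized lower incomplete gamma: mass fraction inside 1 kpc
    frac = gammainc(2.0 * n, b_n * (1.0 / r_e_kpc) ** (1.0 / n))
    return m_star * frac / np.pi  # divide by pi * (1 kpc)^2
```

By construction half the mass lies inside $r_e$, so a galaxy with $r_e = 1$ kpc has $\Sigma_1 \simeq 0.5\,{\rm M}_{\star}/\pi$; at fixed mass, more compact or higher-$n$ profiles give a higher $\Sigma_1$, which is why size, S\'{e}rsic index, and stellar mass together map onto the central density.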
\citet{Fang13} find that this central density threshold increases with stellar mass through a study of the correlation between galaxy structure and the quenching of star formation using a sample of SDSS central galaxies. Furthermore, studies that push the analysis of central density out to $z$=3 corroborate this picture \citep[e.g.,][]{Cheung12,Saracco12,Barro13,Barro15}, supporting the idea that the innermost structure of galaxies is most physically linked with quenching. Where the earlier work of \citet{Franx08} found an evolving effective surface density threshold with redshift, \citet{Barro15} do not find a strong redshift evolution in the central surface density threshold. However, as there still exist star-forming galaxies above this quenching threshold, results in the literature conclude that a dense bulge is a necessary but insufficient condition to fully quench galaxies \citep[see also][]{Bell12}. While most studies have focused on the stellar mass dependence of the central density alone, it is perhaps unsurprising that there is a tight correlation: the central density is a byproduct of the combined stellar mass and light profile. The key comparison instead should be with sSFR, normalizing out the stellar mass dependence of SFR \citep[as also studied in][]{Barro15}, where total sSFRs can be measured largely independent of the central density. While the dynamic range in stellar mass enabled by the deep high resolution near-infrared (NIR) imaging from the Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey \citep[CANDELS;][]{Grogin11, Koekemoer11} improves dramatically from earlier multi-wavelength extragalactic surveys \citep[e.g.,][]{Wuyts08, Whitaker12b}, the depth of the Spitzer/MIPS 24$\mu$m imaging used to derive the IR SFR indicator has remained unchanged.
Therefore, to leverage the full range in stellar mass and galaxy structure probed by these \emph{Hubble Space Telescope} legacy programs, we must perform detailed stacking analyses of the 24$\mu$m imaging to probe the SFR properties of the complete unbiased sample of galaxies using a single robust SFR indicator \citep[e.g.,][]{Whitaker15}. By combining the high resolution photometry from CANDELS with the accurate spectroscopic information provided by the 3D-HST treasury program \citep{Brammer12,Momcheva15} and a stacking analysis of the unobscured (UV) and obscured (IR) SFRs, we are in a unique position to perform a census across most of cosmic time of the simultaneous evolution of galaxy structure and star formation. While earlier results from this treasury dataset have shown that all quiescent galaxies have a dense stellar core, and the formation of such cores is a requirement for quenching \citep{vanDokkum14, Whitaker15, Barro15}, there are several open questions that we aim to answer in this paper. Specifically, we ask (1) how star formation rate depends on galaxy size, (2) whether there is a preferential galaxy size scale where star formation occurs, (3) whether there is a physical parameter that uniquely predicts quiescence, and (4) whether the quenching threshold in surface density and velocity evolves with redshift. There are a few differences that together separate the present analysis from earlier studies: the inclusion of accurate grism redshifts from 3D-HST improves the stellar population parameters, we derive the three dimensional deprojected central density and circular velocity instead of the surface density, and we stack the 24$\mu$m imaging to robustly measure total SFRs for more extended, lower stellar mass, or low SFR galaxies. The paper is outlined as follows.
In Section \ref{sec:data}, we introduce the data and sample selection, describing the details of the stellar masses, redshifts, rest-frame colors, structural parameters, total star formation rates, central densities and circular velocities used herein. We present the correlations between galaxy size, stellar mass and sSFR for the overall population in Section~\ref{sec:size}. In Section~\ref{sec:sizescale}, we determine at which galaxy size scale the most star formation occurs from $z$=0.5 to $z$=2.5. In Section~\ref{sec:sfalone}, we then analyze the residual offsets in SFR and size for star-forming galaxies alone, after removing the well-known correlations between log(SFR)--log(M$_{\star}$) and log($r_{e}$)--log(M$_{\star}$). In the second half of the paper, we explore the physical parameters that best predict quiescence. First, we consider the role of galaxy size and S\'{e}rsic index in predicting quiescence in Section~\ref{sec:sersic}. Next, in Section~\ref{sec:density}, we study the dependence of sSFR on stellar mass density, parameterizing the redshift evolution in Section~\ref{sec:evolution}, and the density and velocity quenching thresholds in Section~\ref{sec:evolve_quiescence}. As this paper touches on a relatively wide range of topics, we integrate the discussion and implications of the results throughout the relevant sections, with further discussion in Section~\ref{sec:discussion}. While we choose to place these empirical results in the context of current theoretical models, we note that many of the correlations that we discuss can be interpreted in a different way \citep[e.g.,][]{Lilly16, Abramson16}. We caution that it is yet unclear if there is truly an evolutionary sequence causally linking galaxy structure with star formation. We conclude the paper with a summary in Section~\ref{sec:summary} of the results presented herein, in the context of current and future studies of galaxy formation and evolution.
In this paper, we use a \citet{Chabrier} initial mass function (IMF) and assume a $\Lambda$CDM cosmology with $\Omega_{\mathrm{M}}=0.3$, $\Omega_{\Lambda}=0.7$, and $\mathrm{H_{0}}=70$ km s$^{-1}$ Mpc$^{-1}$. All magnitudes are given in the AB system.
\label{sec:discussion} Theoretical predictions for the interplay between galaxy structures and their star formation histories are far from reaching a consensus. In this section we summarize some of the key predictions and compare them to the empirical results from this paper. We place emphasis on the quenching process, which must both truncate star formation and structurally transform galaxies as they migrate from a star-forming population to a quiescent one. Although the current analysis does not suggest an overall residual correlation between SFR and galaxy effective radius, we have identified a population of compact, intermediate-mass star-forming galaxies with depressed SFRs. Focusing specifically on this population of compact, likely quenching galaxies, we discuss whether theoretical studies predict their existence. There are two main channels in cosmological simulations to form massive compact galaxy populations: (1) the galaxies have very early formation times when the Universe was far denser \citep{Khochfar06, Wellons15}, or (2) they are the result of a central starburst driven by violent disk instabilities \citep{Zolotov15,Ceverino15} or gas-rich mergers \citep{Wellons15}. However, it may also be that galaxies do not undergo such ``compaction'' events, and that compact galaxies simply evolved from lower mass, slightly smaller galaxies \citep{vanDokkum15}. In cosmological simulations by \citet{Tacchella16}, where a central starburst drives structural evolution, it is the gas and young stars in galaxies with high sSFRs (above the average star formation sequence) that are predicted to be compact with short gas depletion timescales. \citet{Tacchella16} do not however find any gradients in the stellar mass distribution, tracing the older stellar distribution. If we consider only the galaxies with compact rest-frame 5000\AA\ sizes in this study, we similarly do not see evidence that they have higher than average sSFRs. 
If anything, we see the opposite trend, at least amongst the most compact intermediate stellar mass galaxies (log(M$_{\star}$/M$_{\odot}$)$\sim$10.0-10.6). Such fading galaxies appear to not be present in the Tacchella simulations, based on their mass-weighted sizes. Results from the EAGLE simulation (at $z$=0), on the other hand, predict a stronger dependence of galaxy size on sSFR than our higher redshift observations \citep[Figure 2 in][]{Furlong15}, with $\Delta$log(sSFR)/$\Delta$log($r_{e}$) at fixed stellar mass ranging between $\sim$0.6--1.4 compared to typical values of $\sim$0.1--0.5 in the observations. Using semi-analytic models, \citet{Brennan17} show a weak(er) trend amongst the most compact galaxies at $z$=0--2.5 that falls between these two extremes, though the comparison cannot be made directly as the stellar mass dependence has not been factored out. In summary, theoretical results predict a range from no residual dependence of galaxy size on SFR to moderately strong trends. One key trend that the EAGLE simulations do not reproduce amongst the star-forming population is the lack of variation in sSFR at lower stellar masses. While this model shows variations of order 0.3--0.5 dex, the variation in sSFR within the five extragalactic fields included in the 3D-HST dataset is $<$0.2 dex. Unfortunately, \citet{Furlong15} do not present their higher redshift results, so a more direct comparison at the equivalent epochs is not possible. Similarly, this information cannot be reconstructed from the results of \citet{Tacchella16} and \citet{Brennan17}. Future such comparisons between the observations and theoretical models will prove illuminating. Returning to the issue of gas depletion in relation to compaction, a recent study by \citet{Spilker16} finds extremely low (CO) gas fractions in a pilot sample of compact star-forming galaxies, suggesting short gas depletion timescales.
As these compact star-forming galaxies exist in very small numbers, they would need to quench rapidly ($<$0.5 Gyr timescales) in order to produce the required number of compact quiescent galaxies \citep{vanDokkum15}. Indeed, the early results on the gas depletion timescales suggest timescales of order 100 Myr or less \citep{Spilker16, Barro16}. \citet{Saintonge12} also show that galaxies undergoing mergers or showing signs of morphological disruptions have the shortest molecular gas depletion times. These results hint that the timescale for galaxies to pass through this compact high sSFR phase is short, and this is why the observational evidence is lacking when considering the average trends presented herein. Next, we turn our focus back to the full galaxy population. Both the remarkably small scatter and the evolution of the average relation between sSFR and central density are interesting in the context of recent arguments in the literature regarding the nature of the most recently quenched galaxies and their role in the evolution of the size-mass relation of quiescent galaxies. Although it has been shown that quiescent galaxies will experience growth through minor mergers and accretion \citep{Bezanson09, Newman12}, the simplest explanation of their size growth is the continuous addition of (larger) recently quenched galaxies \citep{vanderWel09a}. Galaxies that quench at later times are expected to have larger sizes because the Universe was less dense, and therefore gas-rich dissipative processes were less efficient \citep{Khochfar06}. Indeed, observations at $z$$<$1 find that the most recently quenched galaxies are the largest \citep{Carollo13}. There is a mixed bag of size measurements at $z$$\sim$1.5 \citep{Belli15}, with recently quenched galaxies exhibiting a range of sizes, and results at $z$$>$1.5 find that the most recently quenched galaxies are similar, if not more compact, than older quiescent galaxies at the same epoch \citep{Whitaker12b, Yano16}.
\citet{Furlong15} further predict a trend (at $z$=0) for higher sSFR (suggesting more recent assembly) in larger quiescent galaxies at fixed stellar mass. We do not see any strong trends amongst our quiescent observations at $z$$>$0.5, although, as the 24$\mu$m-derived SFRs are likely upper limits for quiescent galaxies, it is possible that we are washing out a stronger intrinsic trend within the observations. It may be that recently quenched galaxies are more compact at high redshift, whereas they are larger at later times. At $z$$>$1, we show here that the central density threshold for quenching is higher relative to the already quenched population in Figure~\ref{fig:bestfit_sigma_quenched} (shown as greyscale, with the running median in red). We therefore see evidence that quiescent galaxies at higher redshift required significantly higher central densities to quench, which may alleviate some of the tension in the observations between low- and higher-redshift analyses. When considering where massive galaxies populate the $\log$(sSFR)--$\log$($\rho_{1}$) plane (e.g., Figure~\ref{fig:sigma1_mass}), we see that they tend to have the densest central concentrations of stellar mass. It is important to note, however, that the trend itself between central density and sSFR does not vary strongly with stellar mass at a given epoch. In other words, while different stellar mass regimes tend to populate the upper and lower ends of this relation, it is the entire relation itself that evolves with redshift. Although there is no straightforward way to plot the results of \citet{Barro15} in the $\Delta\log$(SFR)--$\Delta\log$($\Sigma_{1}$) plane of our Figure~\ref{fig:bestfit_sigma_quenched}, they are in good qualitative agreement given the definition of $\Delta\log$($\Sigma_{1}$).
The stellar mass dependence of this quenching threshold in central density observed by \citet{Fang13} at $z$=0 may also be explained in part by more massive galaxies having formed earlier in the Universe \citep{Kauffmann03}. There is a thorough discussion of this downsizing in quenching in \citet{Dekel14}. As more massive dark matter halos will tend to cross a threshold halo mass for virial shock heating earlier in the Universe, halo quenching will occur preferentially in more massive galaxies \citep[e.g.,][]{Neistein06, Bouche10}. Similarly, violent disk instabilities will also have a natural downsizing, which \citet{Dekel14} argue is the result of higher gas fractions in lower mass galaxies. Whether galaxies quench rapidly through, e.g., violent disk instabilities, or slowly through halo quenching, more massive galaxies will tend to cross this quenching threshold earlier in time. The predicted redshift evolution of the quenching threshold for central galaxies from \citet{Voit15} is shown as a thin black line in Figure~\ref{fig:bestfit_sigma_quenched}. Their paper presents an argument for self-regulated feedback that links a galaxy's star-formation history directly with the circular velocity of its potential well. \citet{Voit15} hypothesize that feedback leads to quenching when the halo mass reaches a critical value that allows supernovae and/or AGN to push away the rest of the circumgalactic gas. This critical circular velocity is set to 300 km/s at $z$=2 here, resulting in a quenching threshold that tracks the upper envelope of the central density observed for the quiescent population. In Figure~\ref{fig:bestfit_sigma_quenched}, we show that the quenching threshold differs from the characteristic turnover value. This difference may imply that galaxies keep growing their centers after star formation starts actively diminishing. 
Alternatively, there may be significant scatter in this quenching threshold, such that galaxies with central densities in between the characteristic turnover and the quenching threshold have an intermediate probability of being quenched. Although we observe that a high central density appears to predict quiescence on average, we caution that this does not necessarily imply causation. \citet{Lilly16} propose instead that galaxies quench their star formation according to empirical probabilistic laws that depend solely on the total mass of the galaxy, not on the surface mass density or size. With their simple model they can broadly reproduce all of the trends between galaxy structure and sSFR that we observe here, including the evolving quenching threshold. \citet{Lilly16} argue that because galaxies form their stars inside-out and passive galaxies will have formed them at earlier epochs (higher redshifts), passive galaxies will always have smaller sizes than their star-forming counterparts of the same stellar mass at any given redshift. Determining the cause of quenching is beyond the scope of this paper, but we note that, even without causation, a high central density holds unique predictive power in identifying the population of galaxies that, on average, will be quiescent. The aim of this paper is to connect rest-frame optical measurements of the size-mass and density-mass relations with robust measurements of the total specific SFRs from a purely empirical standpoint. We can thereby connect galaxy structure and star formation to better understand the observed bimodal distribution of galaxies across cosmic time and the quenching of star formation. This current study extends the original work by \citet{Franx08} on a smaller field that included 1155 galaxies at 0.2$<$$z$$<$3.5 to now consider a mass-complete sample of 27,893 galaxies at 0.5$<$$z$$<$2.5. 
The sample is selected in five extragalactic fields from the 3D-HST photometric catalogs presented in \citet{Skelton14}, combining high spatial resolution \emph{HST} NIR imaging from the CANDELS treasury program with total UV+IR SFRs derived from a median stacking analysis of \emph{Spitzer}/MIPS 24$\mu$m imaging. The main results presented within this paper are summarized as follows: \begin{enumerate} \item We find that 50\% of new star formation in the overall population occurs in galaxies within $\pm$0.13 dex of the average size-mass relation. Extremely compact or extended galaxies do not significantly contribute to the total stellar mass budget. \item We show a flattening in the size-mass relation of quiescent galaxies at stellar masses below 10$^{10}$ M$_{\odot}$ at 0.5$<$$z$$<$1.0. These lower mass quiescent galaxies exhibit slightly higher sSFRs relative to more massive galaxies at the same epoch, suggesting more recent assembly. However, the sSFRs of quiescent galaxies at fixed stellar mass do not show significant variations. \item After removing the well-known correlations of star formation rate and galaxy size with stellar mass, we show that star-forming galaxies exhibit only a weak dependence of their star formation rates on galaxy size. The residual size offset for star-forming galaxies in the lowest quartile when rank ordered by sSFR is 0.27$\pm$0.06 dex smaller than that of the highest sSFR quartile. Similarly, when instead rank ordering by the residual size offsets, the smallest galaxies have sSFRs 0.11$\pm$0.02 dex lower than those of the largest galaxy quartile. Similar trends are found amongst massive galaxies in simulations \citep[e.g.,][]{Furlong15}, though greatly amplified relative to the observations. \item We find that the weak dependence of star formation rate on galaxy size is not sensitive to the timescale on which the star formation rate is probed, with dust-corrected H$\alpha$ sSFRs yielding similar trends. 
\item We confirm earlier studies \citep[e.g.,][]{Franx08} showing that the central stellar density is a key parameter connecting galaxy morphology and star formation histories: stacks of galaxies with high central densities are red and have increasingly lower sSFRs, whereas stacks of galaxies with low central densities are blue and have a roughly constant (higher) sSFR at a given redshift interval. \item We use a broken power law to parameterize the correlation between log(sSFR) and central density, log($\rho_{1}$), showing remarkably little scatter between the average measurements. \item We find strong evolution in the central density threshold for quenching, as defined by both a constant and an evolving threshold in sSFR, decreasing by $>$0.5 dex from $z$$\sim$2 to $z$$\sim$0.7. Similarly, while the threshold in central circular velocity above which most galaxies are quenched is $>$300 km/s at $z$$\sim$2, it decreases to $\sim$150 km/s by $z$$\sim$0.7. \end{enumerate} We show that neither a high $n$ nor a compact galaxy size will uniquely predict quiescence, whereas a threshold in central density (or velocity) may be a more robust and unique observable signature when considering the overall galaxy population. However, we emphasize that correlations between structure and star formation do not prove a causal effect. For example, it remains to be seen whether small-scale structure (at the scale of the stars) or large-scale parameters (at the scale of the dark matter halo) dominate the physical processes that quench galaxies. 
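The broken power-law parameterization of the log(sSFR)--log($\rho_{1}$) relation can be fit with standard tools; a minimal sketch follows. The functional form, parameter names, and all numerical values below are illustrative assumptions, not the fitted values reported in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(log_rho1, s0, rho_b, alpha, beta):
    """log(sSFR) as a broken power law in central density log(rho_1).

    s0    : log(sSFR) at the break
    rho_b : location of the break (characteristic turnover) in log(rho_1)
    alpha : slope below the break (shallow for star-forming galaxies)
    beta  : slope above the break (steep decline towards quiescence)
    """
    x = np.asarray(log_rho1, dtype=float)
    return np.where(x < rho_b,
                    s0 + alpha * (x - rho_b),
                    s0 + beta * (x - rho_b))

# Synthetic stacked measurements (illustrative values only)
true_params = (-9.2, 9.5, -0.1, -1.0)
x = np.linspace(8.0, 10.5, 25)
y = broken_power_law(x, *true_params)

# Recover the parameters with a least-squares fit
popt, _ = curve_fit(broken_power_law, x, y, p0=(-9.0, 9.0, 0.0, -0.5))
```

With noiseless synthetic data the fit recovers the input parameters; on real stacked measurements one would additionally propagate the measurement uncertainties through the `sigma` argument of `curve_fit`.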
While we have presented the average global trends of the sSFR with structural parameters ($r_{e}$, $n$, $\Sigma_{e}$, $\rho_{1}$, and $v_{\mathrm{circ},1}$) amongst a mass-complete sample of galaxies using high resolution \emph{HST}/WFC3 imaging and deep \emph{Spitzer}/MIPS 24$\mu$m imaging, future studies exploiting the mid-IR spectroscopic and photometric capabilities of the James Webb Space Telescope will yield robust measurements of SFR for \emph{individual} galaxies across the star-formation sequence. Such studies will allow us to resolve the detailed trends within the star-forming population as a function of structure.
Fairall~9 is one of several type 1 active galactic nuclei for which it has been claimed that the angular momentum (or spin) of the supermassive black hole can be robustly measured, using the \fekalfa emission line and Compton-reflection continuum in the X-ray spectrum. The method rests upon the interpretation of the \fekalfa line profile and associated Compton-reflection continuum in terms of relativistic broadening in the strong gravity regime in the innermost regions of an accretion disc, within a few gravitational radii of the black hole. Here, we re-examine a \suzaku X-ray spectrum of Fairall~9 and show that a face-on toroidal X-ray reprocessor model involving only nonrelativistic and mundane physics provides an excellent fit to the data. The \fekalfa line emission and Compton reflection continuum are calculated self-consistently, the iron abundance is solar, and an equatorial column density of $\sim 10^{24} \ \rm cm^{-2}$ is inferred. In this scenario, neither the \fekalfa line, nor the Compton-reflection continuum provide any information on the black-hole spin. Whereas previous analyses have assumed an infinite column density for the distant-matter reprocessor, the shape of the reflection spectrum from matter with a finite column density eliminates the need for a relativistically broadened \fekalfa line. We find a 90 per cent confidence range in the \fekalfa line FWHM of $1895$--$6205 \ \rm km \ s^{-1}$, corresponding to a distance of $\sim 3100$ to $33,380$ gravitational radii from the black hole, or $0.015$--$0.49$~pc for a black-hole mass of $\sim 1-3 \times 10^{8} \ M_{\odot}$.
\label{intro} Fairall~9 is a bright, nearby ($z=0.047016$) Seyfert~1 galaxy that has been the subject of several studies that attempt, using X-ray spectroscopy, to obtain robust measurements of the angular momentum, or spin, of the putative supermassive central black hole (Schmoll \etal 2009; Emmanoulopoulos \etal 2011; Patrick \etal 2011a, 2011b, 2013; Lohfink \etal 2012a; Walton \etal 2013; Lohfink \etal 2016). According to the ``no hair'' theorem, black holes have only three measurable physical attributes, one of these being spin (mass and charge being the other two). In addition to its importance for fundamental physics, black-hole spin affects the energetics and evolution of the environment in which the black hole resides. Thus, the spin properties of black holes in a population of sources, such as active galactic nuclei (AGN), could provide clues pertaining to the formation and growth of supermassive black holes (e.g. see Volonteri \& Begelman 2010). The reason why Fairall~9 has been of particular interest for black-hole spin measurements via X-ray spectroscopy is that it is a member of a subset of AGN that have been established to exhibit few or no signatures of line-of-sight absorption that could complicate modeling of the X-ray spectral features that are thought to carry signatures of black-hole spin (Gondoin \etal 2001; Emmanoulopoulos \etal 2011). Otherwise known as ``bare Seyfert galaxies'', AGN such as Fairall~9 provide a ``clean'' direct view of the accreting black-hole system (e.g. Patrick \etal 2011a; Tatum \etal 2012; Walton \etal 2013). It is the X-ray spectrum from the innermost regions of the accretion disc that is thought to convey the signatures of relativistic effects in the vicinity of the black hole, including its spin. 
The basic method for constraining the black-hole spin involves fitting a model of the X-ray reflection spectrum from the accretion disc, and the associated \fekalfa emission line, with the broadening, or ``blurring'' effects of Doppler and gravitational energy shifts being the key drivers (e.g., see Reynolds 2014; Middleton 2015, and references therein). The disc-reflection model invariably used in the literature for AGN in general is REFLIONX (Ross, Fabian \& Young 1999; Ross \& Fabian 2005), combined with one of several alternatives for the ``blurring'' kernel (e.g. Laor 1991; Dov\v{c}iak, Karas \& Yaqoob 2004; Brenneman \& Reynolds 2006; Dauser \etal 2010). Other model components often need to be included to account for features that may or may not be directly related to the accreting black-hole system (for example, additional or alternative soft X-ray emission, and/or narrow emission lines from distant matter at tens of thousands of gravitational radii from the black hole). Clearly, the fewer the number of model components that are needed, the less ambiguity and/or degeneracy there will be in constraining the black-hole spin. The reason why \suzaku spectra for Fairall~9 have received the attention of several studies is that the good energy resolution of the CCD detectors for measuring the crucial \fekalfa line profile, combined with the simultaneous sensitive coverage above 10~keV, provides a significant advance over previous capabilities for constraining the X-ray reflection spectrum and fluorescent line emission. Early observations of Fairall~9 with \asca (Reynolds 1997) and \xmm (Gondoin \etal 2001; Emmanoulopoulos \etal 2011) lacked the high-energy coverage. 
Nevertheless, a broad \fekalfa line (the principal feature required for constraining black-hole spin) was reported for the \asca observation (Reynolds 1997) and for a 2009 \xmm observation (Emmanoulopoulos \etal 2011), but for an earlier \xmm observation in 2000, Gondoin \etal (2001) reported no detection of a broad \fekalfa line. More recently, Fairall~9 was targeted by a \swift monitoring campaign and three new \xmm observations, one of them simultaneous with \nustar (Lohfink \etal 2016). Although it was stated by the authors that they ``clearly detect blurred ionised reflection,'' detection of a relativistically broadened line alone (as opposed to the joint detection of such a line and the reflection continuum as a single spectral component) was not claimed. The result of all the X-ray spectroscopy studies of Fairall~9 that attempt to measure the black-hole spin is a wide range of inferred values, from zero to 0.998, the theoretical maximum for accretion (Thorne 1974). Analyses of different data sets, as well as of the same data set with different models, yield different black-hole spin measurements, some of which are inconsistent within the given errors. In some cases, the inferred inclination angle of the accretion disc and the iron abundance are also inconsistent amongst different data sets and models applied. Lohfink \etal (2012a) tried to resolve these serious conflicts by adding an additional soft X-ray continuum to their model and forcing the black-hole spin, disc inclination angle, and iron abundance to be invariant amongst different \xmm and \suzaku data sets. However, this procedure involved adding nine free parameters in total to an already complex model, so it is not surprising that a consistent value of the black-hole spin could be made to fit the data. Moreover, Lohfink \etal (2012a) explained that the two different models of the additional soft X-ray continuum both left unresolved problems. 
In one case the additional soft X-ray continuum was a thermal Comptonisation component that implied a colossal iron overabundance, with a lower limit of 8.2 relative to solar. In the other case, the additional component was an ionised reflection spectrum (using the REFLIONX model), but this solution implied a radial ionisation gradient in the accretion disc that predicts atomic features in the spectrum that are not observed. Even if a viable consistent solution for the black-hole spin can be found for different data sets, the problem of the model-dependence of the black-hole spin based on fitting the same data with different models still remains. Patrick \etal (2011b) studied this model dependence for a small sample of AGN observed by \suzaku (including Fairall~9), and the severity of the model dependence led them to conclude that ``zero spin cannot be ruled out at a particularly high confidence level in all objects.'' The principal difference in the applied models pertained to the exact model used for relativistic broadening of the \fekalfa line (including the option of omitting that component altogether). Several choices are available for such a model in popular X-ray spectral-fitting packages (e.g. Dov\v{c}iak \etal 2004; Dauser \etal 2010; Patrick \etal 2011a; Middleton 2015). It is also known that partial covering models can sometimes eliminate the need for a relativistically broadened \fekalfa emission line (e.g. Miller, Turner \& Reeves 2008; Iso \etal 2016). In such a scenario, there is degeneracy between the spectral curvature due to relativistic broadening of the \fekalfa line and the spectral curvature due to absorption by matter that partially covers the X-ray source. However, only one-dimensional partial covering models are typically applied, which may be incomplete because only line-of-sight absorption is considered, and Compton scattering and fluorescent line emission from any globally distributed matter are not included. 
Recently, a more sophisticated partial covering model has been applied to Mkn~335 (Gallo \etal 2015), but the spectra were so complicated that both the blurred reflection model and the partial covering model left unmodeled residuals. It is therefore important to first understand the much simpler spectra of bare Seyfert galaxies such as Fairall~9. Another type of model that eliminates the need for a relativistically broadened \fekalfa line whilst also accounting for matter out of the line of sight invokes a Compton-thick accretion disc wind (Tatum \etal 2012), and has also been successfully applied to Fairall~9. A different study involving a disc wind model, applied to the Narrow Line Seyfert galaxy 1H~0707--495 (Hagino \etal 2016), also led to the conclusion that the X-ray spectrum in this source can be modeled with absorption in the wind rather than invoking relativistic effects in strong gravity to produce a broad \fekalfa line. The implication of these studies is that if there is no relativistically broadened \fekalfa line emission from the innermost regions of the accretion disc, then there is no signature of black-hole spin in the X-ray spectrum. In this scenario, measurements of black-hole spin obtained from applying models that do have relativistically broadened \fekalfa line emission are then artifacts of incorrect modeling. One feature that is common to all of the above-mentioned models fitted to the X-ray spectra of Fairall~9 is the method of modeling the narrow core of the \fekalfa line, which is ubiquitous in both type~1 and type~2 AGN (e.g. Yaqoob \& Padmanabhan 2004; Nandra 2006; Shu, Yaqoob \& Wang 2010, 2011; Fukazawa \etal 2011; Ricci \etal 2014). Hereafter, it will be referred to as the distant-matter \fekalfa line, since it has a width that is typically less than a few thousand ${\rm km \ s^{-1}}$ full width half maximum (FWHM), corresponding to an origin in matter located at tens of thousands of gravitational radii from the black hole. 
Shu \etal (2011) measured a mean FWHM of $\sim 2000 \pm 160 \ {\rm km \ s^{-1}}$ from a sample of AGN observed by the \chandra high energy grating (HEG). They showed that in some AGN the location of the distant-matter \fekalfa line emitter is consistent with the classical broad line region (BLR), yet in other AGN the location is further from the black hole, possibly at the site of a putative obscuring torus structure that is a principal ingredient of AGN unification schemes (e.g., Antonucci 1993; Urry \& Padovani 1995). The peak energy of the narrow \fekalfa line is tightly distributed around 6.4~keV for both type~1 and type~2 AGN (e.g. Sulentic \etal 1998; Yaqoob \& Padmanabhan 2004; Nandra 2006; Shu \etal 2010, 2011), providing a strong observational constraint on the neutrality of the matter producing the line, and any associated Compton-scattered continuum. Any attempt to measure the relativistically broadened \fekalfa line in AGN X-ray spectra must properly account for the narrow distant-matter \fekalfa line because it can carry a luminosity that is comparable to, or even greater than, that in the broad component. The models must also account for the Compton-scattered (reflection) continuum in the matter that produces the narrow \fekalfa line. Without exception, the previous studies of Fairall~9 described above all use a model for the narrow \fekalfa line and the associated reflection continuum that is based on matter with an infinite column density, a flat (disc) geometry, and an infinite size, with the inclination angle of the disc relative to the observer fixed at an arbitrary value. In some cases the narrow \fekalfa line emission and the associated reflection continuum are calculated self-consistently using the model PEXMON (Nandra \etal 2007) or even a second ionised disc-reflection spectrum (e.g. REFLIONX) with the relativistic blurring turned off (e.g. Gallo \etal 2015). 
In other cases the \fekalfa line emission is modeled with a simple Gaussian, the reflection continuum is modeled with PEXRAV (Magdziarz \& Zdziarski 1995), and the two components (which are in reality not independent), are given \adhoc (floating) normalizations. In yet another variant, the narrow \fekalfa line emission is modeled with a Gaussian component and the associated reflection continuum is completely ignored and omitted (e.g. Patrick \etal 2013). The assumption in the latter scenario is that the narrow \fekalfa line core originates in Compton-thin matter and that the reflection continuum from that Compton-thin matter is negligible and can be ignored. However, there is no basis for the omission of the reflection continuum from Compton-thin matter and indeed, it was shown in the case of Mkn~3 that it cannot be neglected (Yaqoob \etal 2015). Moreover, several recent studies have shown that the Compton-scattered continuum from matter with a finite column density is different to that from matter with an infinite column density, and exhibits a rich variation in spectral shape (Murphy \& Yaqoob 2009; Ikeda \etal 2009; Yaqoob 2012; Liu \& Li 2014; Yaqoob \etal 2015; Furui \etal 2016). The so-called ``Compton-hump'' (the peak in the reflection spectrum) is no longer located at $\sim 30$~keV but depends on the column density (as well as geometry and inclination angle), and can actually be located below the energy of the core of the \fekalfa line for column densities $<10^{24} \ {\rm cm^{-2}}$. The complexity of the reflection continuum from matter with a finite column density can potentially model features that are traditionally interpreted as relativistically broadened \fekalfa line emission. 
In this paper, we make no assumptions about the column density of the matter producing the narrow \fekalfa line in Fairall~9 and use the \mytorus model of a toroidal X-ray reprocessor (Murphy \& Yaqoob 2009) to let the data determine the global column density of the reflecting material from spectral fitting. In the \mytorus model, the \fekalfa line shape and flux are calculated self-consistently with respect to the associated reflection continuum. A Compton-thick reflection spectrum similar to that from infinite column density disc-reflection models can be recovered as a limiting case in the \mytorus model, which is based on a more realistic geometry for the circumnuclear matter than the disc geometry that has been used in all of the previous studies of Fairall~9 (based on models such as PEXRAV and PEXMON). In type~2 AGN, column densities in the line of sight have been observed over a wide range, from Compton-thin to Compton-thick, with some sources exhibiting transitions between Compton-thin and Compton-thick states (e.g. Matt, Guainazzi \& Maiolino 2003; Risaliti \etal 2010, and references therein). It should therefore not be surprising if this range of column densities is also appropriate for the circumnuclear matter in type~1 AGN. We note that from extended \rxte monitoring data of Fairall~9, Lohfink \etal (2012b) presented evidence of clumps of matter transiting across the X-ray source that did not completely extinguish the X-ray emission. The column density of the clumps was crudely estimated by Lohfink \etal (2012b) to be ``a few $\times 10^{24} \ \rm cm^{-2}$''. However, Markowitz, Krumpe, \& Nikutta (2014) pointed out that the \rxte monitoring data are also consistent with intrinsic spectral variability, an interpretation also given in later work by Lohfink \etal (2016), based on a \swiftp/\xmmp/\nustar campaign on Fairall~9 in 2014. In this paper we apply the \mytorus model to \suzaku data for Fairall~9. 
The \suzaku data are well-suited for studying the X-ray reprocessor because \suzaku provides simultaneous coverage of the critical Fe~K band ($\sim 6-8$~keV) with good spectral resolution and throughput (due to the XIS CCD detectors), and of the hard X-ray band above 10~keV with good sensitivity. The simultaneous broadband coverage is important for reconciling the absorbed and reflected continua with the \fekalfa line emission. The scope and intent of the present work were to focus on \suzakup/\chandra HEG data in the short term and to provide a framework and basis for additional studies in the long term, e.g. with \nustarp. We will show that our model, which includes no relativistically broadened \fekalfa line or disc reflection, gives an excellent fit to the \suzaku data. The X-ray spectrum of Fairall~9, which can be explained by \fekalfa line emission and X-ray reflection from the distant matter alone, would then carry no information on black-hole spin. The paper is organized as follows. In \S\ref{obsdata} we describe the basic data and reduction procedures. In \S\ref{analysisstrategy} we describe the analysis strategy, including the procedures for setting up the X-ray reprocessing models for spectral fitting. In \S\ref{spfitting} we give the detailed results from spectral fitting. In \S\ref{summary} we summarize our results and conclusions.
\label{summary} We have re-analysed \suzaku data for the ``bare'' Seyfert~1 galaxy, \srcnamep, applying the \mytorus model of Murphy \& Yaqoob (2009) that self-consistently calculates the \fekalfa line emission and X-ray reflection spectrum from a toroidal distribution of neutral matter with solar iron abundance. We find that an excellent fit is obtained for the detailed \fekalfa line profile including the red and blue wings, the \fekbeta line, the Fe~K edge, and the ``Compton hump'' associated with the X-ray reflection spectrum. The same best-fitting parameters that determine the magnitude and shape of the X-ray reflection continuum also determine the \fekalfa line flux and profile with no additional freedom. The data require no \adhoc adjustment of, or interference with, this self-consistency. We obtained an equatorial column density of the toroidal X-ray reprocessor of $N_{\rm H} = 0.902^{+0.206}_{-0.216} \ \rm \times 10^{24} \ cm^{-2}$ from a baseline fit, for the case that the torus is observed face-on and the cross-normalization between the PIN and XIS data is allowed to be free. Other lines of sight that do not intercept the torus also provide excellent fits to the data, yielding similar constraints on the column density. A face-on fit with the PIN data removed altogether also provides consistent constraints on $N_{\rm H}$. On the other hand, fixing the cross-normalization between the PIN and XIS spectra at the default value gives a column density that is $\sim 60$ per cent higher than that from the baseline fit. The baseline, face-on toroidal model gives a velocity broadening of the \fekalfa line of $4475^{+1730}_{-2065} \ \rm km \ s^{-1}$~FWHM. Variations on the baseline model also gave consistent values for the FWHM of the \fekalfa line. The FWHM measured here does not confuse broadening due to the Compton shoulder of the \fekalfa line with velocity broadening since the broadening due to the Compton shoulder is self-consistently modeled. 
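The conversion from line width to emitter distance can be sketched with a simple virial estimate, taking $v = (\sqrt{3}/2)\,\mathrm{FWHM}$ and $r = GM/v^{2} = r_{g}\,(c/v)^{2}$. This convention and the constants below are assumptions made here for illustration; the paper's exact prescription may differ slightly, so this is a rough consistency check rather than the authors' calculation.

```python
import numpy as np

C_KMS = 299_792.458   # speed of light [km/s]
G_CGS = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
MSUN_G = 1.989e33     # solar mass [g]
PC_CM = 3.086e18      # parsec [cm]

def distance_in_rg(fwhm_kms):
    """Emitter distance in gravitational radii, assuming an isotropic
    virial velocity v = (sqrt(3)/2) * FWHM and r = G M / v^2 = r_g (c/v)^2."""
    v = (np.sqrt(3.0) / 2.0) * fwhm_kms
    return (C_KMS / v) ** 2

def rg_in_pc(mbh_msun):
    """Gravitational radius r_g = G M / c^2, in parsecs."""
    return G_CGS * mbh_msun * MSUN_G / (C_KMS * 1e5) ** 2 / PC_CM

# FWHM range 1895-6205 km/s; black-hole mass range ~1-3 x 10^8 Msun
r_min_rg = distance_in_rg(6205.0)       # ~3.1e3 r_g (largest FWHM)
r_max_rg = distance_in_rg(1895.0)       # ~3.3e4 r_g (smallest FWHM)
d_min_pc = r_min_rg * rg_in_pc(0.99e8)  # ~0.015 pc
d_max_pc = r_max_rg * rg_in_pc(3.05e8)  # ~0.49 pc
```

Under these assumptions the quoted FWHM and mass ranges indeed reproduce the $\sim 3.1$--$33.4 \times 10^{3}$ gravitational radii and $\sim 0.015$--$0.49$~pc quoted in the text.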
The \fekalfa line width places the distance of the X-ray reprocessor at $\sim 3.1$ to $33.4 \times 10^{3}$ gravitational radii from the putative central black hole. This corresponds to $\sim 0.015$ to $0.49{\rm \ pc}$ for a black-hole mass in the range $0.99-3.05 \times 10^{8} \ M_{\odot}$ (from historical reverberation measurements). The statistical errors quoted here for the FWHM are for one parameter, 90 per cent confidence, but we found that the \fekalfa line is not resolved at a confidence level of 99 per cent, for two parameters. Thus, a location for the \fekalfa line emitter that is further out than $\sim 0.5$~pc cannot be ruled out. We also detected narrow, unresolved \fexxvres and \feklya emission lines from highly ionised matter in a region distinct from the \fekalfa line emitter, further out from the central engine. The \mytorus models, using only mundane, nonrelativistic physics, give such good fits to the \srcname data using only narrow \fekalfa line emission and X-ray reflection from distant matter that there is no need for any more complexity in the models. In contrast, previous analyses of the same \suzaku data for \srcname have applied relativistic disc-reflection models that produce broad \fekalfa line emission and X-ray reflection from within a few gravitational radii of the black hole (Lohfink \etal 2012a, and references therein). Such fits have been used to directly derive constraints on the black-hole spin, or angular momentum. In our interpretation of the data, there are no signatures in the X-ray spectrum of the strong gravity regime, no broad \fekalfa line, and therefore no measure of black-hole spin based on the \fekalfa line and reflection spectrum. From a theoretical perspective, there is no requirement that emission features from within a few gravitational radii of the black hole leave imprints on the X-ray spectrum, since there are several ways to suppress such features. 
For example, the inner accretion disc may be truncated, or it may be too highly ionised (e.g. see Patrick \etal 2013 for a recent discussion). We showed in detail how the spectral shape of the \mytorus model in the Fe~K band is able to replace the relativistically broadened \fekalfa line, but we also obtained an upper limit to the magnitude of a relativistically blurred accretion disc-reflection spectrum if it is included in addition to the \mytorus model. Relative to the baseline \mytorus model, the 2--10~keV flux in the blurred reflection spectrum is $<1.2$ per cent of the total observed 2--10~keV flux. An important factor in our \mytorus spectral fits is the break from the common assumption that the X-ray reflector and fluorescent line emitter has an infinite column density. The X-ray reflection continuum from matter with a finite column density has a greater variety of spectral shapes than that from matter with an infinite column density. In particular, reflection spectra from matter with a finite column density can produce spectral structure around the \fekalfa line that might otherwise be interpreted as the effects of relativistic smearing. Moreover, we note that in previous analyses of the X-ray spectra of \srcname that derived black-hole spin measurements, the infinite column density disc model was not only applied to what was thought to be the relativistic components, but it was also used to model the distant-matter (narrow) \fekalfa line and reflection spectrum. In future work we will apply the finite column density reprocessor models described here to other AGN and other observations of Fairall~9 in order to investigate whether our conclusions have a broader relevance. \vspace{5mm} \noindent Acknowledgments \\ The authors thank the anonymous referee for helping to improve the paper. The authors acknowledge support for this work from NASA grants NNX09AD01G, NNX10AE83G, and NNX14AE62G. 
This research has made use of data and software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC and the High Energy Astrophysics Division of the Smithsonian Astrophysical Observatory.
We analyze cosmological hydrodynamical simulations of galaxy clusters to study scaling relations between the cluster total mass and observable quantities such as the gas luminosity, gas mass, temperature, and $Y_X$, i.e., the product of the gas mass and temperature. Our simulations are performed with the Smoothed-Particle-Hydrodynamics GADGET-3 code and include different physical processes. The twofold aim of our study is to compare our simulated scaling relations with observations at low ($z\thickapprox0$) and intermediate ($z\thickapprox0.5$) redshifts and to explore their evolution over the redshift range $z=0-2$. The comparison shows good agreement between our numerical models and real data. We find that AGN feedback significantly affects low-mass haloes at the highest redshifts, resulting in a reduction of the slope of the mass $-$ gas mass relation $(\sim13\%)$ and of the mass $- Y_X$ relation $(\sim10\%)$ at $z=2$ in comparison to $z=0$. The drop in the slope of the mass $-$ temperature relation at $z=2$ $(\sim14\%)$ is, instead, caused by early mergers. We investigate the impact of the slope variation on the study of the evolution of the normalization. We conclude that cosmological studies based on the observed scaling relations should be limited to the redshift range $z=0-1$, because in that redshift range the slope, the scatter, and the covariance matrix of the relations do not exhibit significant evolution. The mass$-Y_X$ relation remains the most suitable relation for this goal. Extending the analysis to the redshift range between 1 and 2 will be crucial to evaluate the impact of AGN feedback.
\label{sec:intro} Clusters of galaxies are the most recent gravitationally bound structures to form. They begin their collapse at around redshift $z \sim 2$, when the dark-energy component starts to become relevant (even if not yet dominant) and just after the baryonic Universe has experienced an extremely energetic phase driven by the high star formation level \citep{madau&dickinson} and the intense AGN (Active Galactic Nuclei) activity \citep{aird.etal.2015}. Because cluster formation happens at such a peculiar time, these massive objects are an invaluable laboratory for the study of astrophysical processes and for the derivation of cosmological parameters. High-redshift investigations and the comparison with local observations are particularly advantageous. One of the most powerful cosmological measures is, indeed, the {\it evolution} of the mass function \citep{borgani_guzzo,voit2005,2009ApJ...692.1033V,allen.etal.2011,2011ASL.....4..204B,2015SSRv..188...93P}. As another example, two direct probes of the AGN-activity history are $(i)$ the entropy profile of local and {\it high-z} objects \citep{dubois.etal.2011,chaudhuri.etal.2012,ettori.etal.2013,pointecouteau.etal.2013} and $(ii)$ the relation between the cluster X-ray luminosity and temperature and its evolution \citep{2012MNRAS.424.2086H,takey.etal.2013,2015arXiv151203833G}. To decode such important measurements and to extend high-redshift studies of galaxy clusters, the X-ray community is planning and working on future missions such as e-ROSITA\footnote{http://www.mpe.mpg.de/eROSITA} \citep{2012arXiv1209.3114M} and Athena\footnote{http://www.the-athena-x-ray-observatory.eu} \citep{nandra.etal.2013}. These will be complementary to millimetric surveys, e.g. SPT-3G \citep{benson.etal.2014}, or optical ones, e.g. EUCLID\footnote{http://www.euclid-ec.org} \citep{laureijs.etal.2011} or LSST\footnote{http://www.lsst.org} \citep{Ivezic.etal.2008}. 
Both X-ray missions will build copious samples of clusters and groups of galaxies at $z \sim 1$ or above to advance high-z investigations that are currently limited mostly to $z \leq 0.7$ objects, with sparse data from the most distant epochs (\citealt{2011A&A...535A...4R,2012MNRAS.421.1583M, 2013SSRv..177..247G}). The simplest approach to measuring the mass of a large sample of objects is to use {\it scaling relations} between the total mass and observable quantities that are easier to derive. In X-ray studies, the most common mass proxies are the {\it gas mass}, $M_{\rm gas}$, which can be extracted from the surface brightness profile, the {\it temperature}, $T$, which is robustly estimated from X-ray spectra with at least one thousand counts, and their combination, $Y_X=\mg \times T$ \citep[e.g., ][ for recent and detailed studies on X-ray scaling relations]{2014MNRAS.437.1171M,mantz.etal.2016}. The $Y_X$ parameter was first introduced as a proxy for pressure by \cite{2006ApJ...650..128K}, who demonstrated its advantages through the analysis of hydrodynamical simulations. Since then, it has been widely adopted in X-ray cosmological studies. The biggest benefit of this parameter lies in the opposite responses of the gas mass and the temperature in out-of-equilibrium situations, such as a powerful AGN burst \citep{2011MNRAS.416..801F} that can be triggered by gravitational mergers. The AGN reduces the gas mass by expelling some gas from the core and, at the same time, heats the ICM. Thanks to this behavior, the $Y_X$ parameter does not substantially depend on the dynamical state or on the central AGN activity. \vspace{0.2cm} One of the X-ray measurements most represented in scaling-relation studies is the {\it X-ray luminosity}, because it can easily and immediately be obtained from shallow observations. 
Scaling relations involving the luminosity, in particular the $L-M$ relation, are used to determine the connection between the flux limit of a planned survey and the minimum mass that can be observed. This relation, therefore, plays a crucial role in establishing the selection function of X-ray samples \citep[e.g.][]{nord.etal.2008}. The history of the $L-T$ relation has been equally important because, from its first determination, it was clear that this relation provides information on the physics of the cluster core \citep[e.g.,][]{fabian.etal.1994} and on the phenomena of feedback by stars or AGN \citep[e.g.,][]{maxim98,2012MNRAS.421.1583M}. \vspace{0.2cm} Over the last decade, a significant number of realistic hydrodynamical simulations enabled the study of scaling relations (see \citealt{2011ASL.....4..204B} for a review). Special attention was dedicated to the effects of feedback from stars \citep{nagai.etal.2007} and AGN \citep{short_thomas_2009,puchwein.etal.2008,2010MNRAS.408.2213S,2014MNRAS.438..195P,lebrun.etal.2014,pike.etal.2014}, to the evolution of the relations up to $z\sim1$ \citep{2012ApJ...758...74B,2011MNRAS.416..801F,planelles.etal.2016,lebrun.etal.2016}, and to the development of a theoretical framework to exploit the simultaneous analysis of multiple signals \citep{2010ApJ...715.1508S,evrard.etal.2014}. This paper, based on realistic models of ICM quantities obtained through a new BH accretion model, AGN feedback, and hydro scheme, extends the analysis to $z=2$ in view of upcoming observational results. Most importantly, this work analyzes a simulated sample that was recently shown to naturally form cool-core (CC) and non-cool-core (NCC) clusters in a cosmological context \citep{2015ApJ...813L..17R}. The diversity of the core entropy level in the centermost regions is mostly ascribed to the dynamical history of the system and to its overall AGN activity. 
We expect that in our simulated clusters the balance between radiative cooling, which forms stars, and AGN feedback, which heats the gas, is realistically reached as the systems evolve in and interact with the embedding cosmological environment. Therefore, it is compelling to study the relation between the ICM properties in the epoch right after the maximum level of AGN activity. \vspace{0.3cm} The paper is arranged in the following way: in Section 2 we provide a short description of the simulated sample and refer to companion papers for a more detailed explanation of the code. Section 3 presents the computation of the ICM structural quantities, the mass-proxy relations, the luminosity-based relations, and the fitting methods. In Section 4, we test the validity of our simulated data by comparing them to observations at low ($z\leq0.25$) and intermediate ($0.25\leq z \leq 0.6$) redshifts. Section 5 is dedicated to exploring the theoretical evolution of the scaling relations from $z=0$ to $z=2$. Finally, a summary of the obtained results and conclusions is given in Section 6. All the quantities entering the scaling relations are evaluated at $R_{500}$, defined as the radius of the sphere whose mean density is 500 times the critical density of the Universe at the considered redshift. Throughout the paper, the symbol $\log$ indicates the {\it decimal} logarithm and the 1-$\sigma$ uncertainty on the best-fit parameters represents the 68.4 per cent confidence maximum-probability interval. \label{sec:cmrr} \vspace{0.3cm}
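The $R_{500}$ definition above can be inverted directly to obtain the radius from a given mass. Below is a minimal sketch, assuming an illustrative flat $\Lambda$CDM cosmology ($H_0=70$ km/s/Mpc, $\Omega_m=0.3$); the parameter values actually adopted in the simulations may differ.

```python
import math

# Illustrative flat LCDM parameters; the cosmology actually adopted
# by the simulations may differ.
H0 = 70.0        # Hubble constant [km/s/Mpc]
OMEGA_M, OMEGA_L = 0.3, 0.7
G = 4.301e-9     # Newton's constant [Mpc Msun^-1 (km/s)^2]

def rho_crit(z):
    """Critical density of the Universe at redshift z [Msun/Mpc^3]."""
    hz2 = H0 ** 2 * (OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)
    return 3.0 * hz2 / (8.0 * math.pi * G)

def r500(m500, z):
    """Radius [Mpc] of the sphere whose mean density is 500 times the
    critical density, inverted from M500 = 500 rho_c(z) (4/3) pi R500^3."""
    return (3.0 * m500 / (4.0 * math.pi * 500.0 * rho_crit(z))) ** (1.0 / 3.0)
```

For a $5\times10^{14}\,M_\odot$ cluster this gives $R_{500}\approx1.2$ Mpc at $z=0$; at fixed mass the radius shrinks with redshift because the critical density grows.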
In this paper, we address the evolution of scaling relations in simulated galaxy clusters. Our study is based on a large sample of clusters, simulated in a cosmological context. The simulations are carried out with an upgraded implementation of SPH and with an improved version of the AGN physics. We first examine the reliability of our newly performed \agn\ runs by comparing our predictions to those derived from local ($z<0.25$) and intermediate ($0.25<z< 0.6$) observations. Subsequently, we investigate the evolution of six scaling relations: $M-\mg$, $M-\tmw$, $M-\tsl$, $M-Y_X$, $L-\tsl$, and $L-M$, by comparing the \agn\ model with two other parallel runs performed with non-radiative physics or with a radiative ICM but without AGN feedback. We characterize how the scaling-relation features (namely the slope, intrinsic scatter, and normalization) and the covariance matrix between pairs of signals change as a function of redshift. We summarize our main results in the following. \begin{itemize} \item The simulated scaling relations at low and intermediate redshifts reproduce the observations well. In particular, the \agn\ runs reproduce the slope of the $L-T$ relation over a mass range from small groups to clusters, as well as the observed separation between CC and NCC clusters. \item Between $z=0$ and $z=1$, we do not detect any appreciable change in the slopes of the relations. At higher redshifts, however, all the relations exhibit some degree of evolution, with the only exception of the luminosity-temperature slope, which remains unchanged. In the \agn\ runs, the gas slope $\beta_{\mg}$ at $z=2$ is reduced by $\sim13\%$ with respect to the present-time value. This is caused by the intense high-z AGN activity, which has more impact on the lowest-mass systems. At $z=2$, a shallower slope is also found for the $M-T$ relation, which declines by $\sim15\%$ with respect to $z=0$. 
At these high redshifts, the smallest groups have systematically low temperatures due to their incomplete thermalization. The decrease of the $Y_X$ slope ($\sim10\%$) and the increase of the $L-M$ slope ($\sim20\%$) can be explained by analytically decomposing them into their two contributors: $\beta_{\mg}$ and $\beta_{\tsl}$. \item We do not find any significant trend with redshift for the scatter of any of the mass-proxy relations. Consistent with previous theoretical studies, the $M-\mg$ relation has the smallest scatter, around $2-3\%$. Instead, the scatter of the two luminosity relations, $L-\tsl$ and $L-M$, is the largest, around $15-20\%$ over the redshift range $[0-2]$. The $L-M$ scatter increases with decreasing redshift, highlighting the significant impact of recent major mergers on the X-ray luminosity \citep{torri.etal.2004}. When a merger occurs, the luminosity registers a permanent increase \citep{rowley.etal.2004}. The scatter of all relations is well described by a log-normal distribution whose width is mostly constant in redshift up to $z\sim1.5$. \item No correlation is evident between the deviations in $\mg$ and $\tsl$ at fixed mass, at any redshift. Similarly, no correlation is registered between $L$ and $\tsl$ at $z=2$. In all other cases, positive correlations are found, with Pearson coefficients always greater than 0.4. Interestingly, the correlation coefficients are almost constant from $z=0$ to $z=1$, indicating that the mechanisms causing coupled deviations from the scaling relations are already in place at high redshift. \item Regarding the study of the evolution of the normalization, we stress that in situations where the slopes vary with redshift the evolution of the normalization cannot be established. 
On the other hand, we find that the $M-\tmw$ and $L-\tsl$ relations, whose slopes do not evolve, exhibit a negative and a positive evolution of the normalization, respectively, over the redshift range $[0-2]$, with values of the evolution parameters in line with recent observational studies and close to the self-similar predictions. \item Overall, we confirm that the $M-\yx$ relation evaluated from $z=0$ to $z=1$ is the best suited for cosmological studies thanks to the combination of its properties: in that redshift range, the slope does not vary, the evolution of the normalization can be robustly determined, the scatter is small and constant over time and, most importantly, the relation is robust and independent of the source of feedback, whether from stars, supernovae, or AGN. \end{itemize} On the basis of our analysis of the intrinsic variations of simulated clusters, we also conclude that pushing the study of the scaling relations to higher redshifts does not seem to be an advantage, because the intense AGN activity, peaking at $z \sim 2$, could have a significant impact and produce deviations from the self-similar behavior at redshifts $z>1$. This cosmic epoch is still an almost unexplored territory, where the predictions of higher-resolution simulations can help in designing observational strategies for future missions such as e-ROSITA and Athena. From an observational perspective, the scaling relations are expected to be calibrated by measuring the mass from weak-lensing analyses \citep{marrone.etal.2012,hoekstra.etal.2015, sereno_ettori2015,mantz.etal.2016}. Even if this procedure is not expected to introduce a mass bias \citep{meneghetti.etal.2010, becker_kravtsov,rasia.etal.2012}, it will likely enlarge the scatter of the relations \citep{sereno_ettori.2015}, and attention will need to be paid to clusters close to the flux-limit threshold \citep{nord.etal.2008}. 
Certainly, more efforts will need to be devoted to reducing the uncertainties on the mass calibration to the few-percent level.
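The power-law scaling relations and their log-normal scatter, central to the analysis above, can be emulated with an ordinary least-squares fit in log space on synthetic data. This is an illustrative stand-in only: the paper's actual fitting procedure is described in its Section 3, and the slope and scatter values below are arbitrary.

```python
import math
import random

random.seed(42)

# Synthetic sample mimicking a hypothetical M - Y_X-like relation,
# log M = alpha + beta * log Y, with log-normal intrinsic scatter.
# The parameter values below are arbitrary illustrative choices.
TRUE_ALPHA, TRUE_BETA, TRUE_SCATTER = 0.5, 0.57, 0.04
log_y = [random.uniform(13.0, 15.0) for _ in range(500)]
log_m = [TRUE_ALPHA + TRUE_BETA * y + random.gauss(0.0, TRUE_SCATTER)
         for y in log_y]

def fit_powerlaw(x, y):
    """Ordinary least squares in log space, y = alpha + beta * x.
    Returns (normalization, slope, rms residual scatter in dex)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    beta = sxy / sxx
    alpha = my - beta * mx
    resid = [yi - alpha - beta * xi for xi, yi in zip(x, y)]
    sigma = math.sqrt(sum(r * r for r in resid) / (n - 2))
    return alpha, beta, sigma

alpha, beta, sigma = fit_powerlaw(log_y, log_m)
```

The recovered slope and residual scatter approach the injected values as the sample grows; the rms residual in log space corresponds directly to the "scatter in dex" quoted in the text.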
The bulge appears to be a chemically distinct component of the Galaxy; at {\it b}$=$-$4^{\circ}$ the average [Fe/H] and [Mg/H] values are $+$0.06 and $+$0.17 dex respectively, roughly 0.2 dex higher than the solar neighborhood thin disk, and $\sim$0.7 dex greater than the local thick disk. This high average metallicity suggests a larger {\it effective yield} for the bulge compared to the solar neighborhood, perhaps due to more efficient retention of supernova ejecta. A vertical metallicity gradient in the bulge, at $\sim$0.5 dex/kpc, is attributed to the changing mixture of metal-rich and metal-poor sub-populations (at [Fe/H] $+$0.3 and $-$0.3 dex from Hill et al. 2011; but $+$0.15, $-$0.25 and $-$0.7 dex from Ness et al. 2013), where the metal-poor sub-populations have a larger scale height than the metal-rich population. Abundances of O, Mg, Si, Ca, Ti, and Al are enhanced in the bulge compared to solar composition, with [$\alpha$/Fe]=$+$0.15 dex at solar [Fe/H]; below [Fe/H]$\sim$$-$0.5 dex, the bulge and local thick disk [$\alpha$/Fe] ratios are very similar. Small enhancements in [Mg/Fe] and possibly [$<$SiCaTi$>$/Fe] relative to the thick disk trends are apparent, suggesting slightly higher SFR in the bulge. This is supported by low [s-/r-] process ratios, as measured by [La/Eu], and dramatically enhanced [Cu/Fe] ratios compared to the thick disk. However, the differences between thick disk and bulge composition trends could, conceivably, be due to measurement errors and non-LTE effects. Unfortunately, the comparison of bulge with solar neighborhood thick disk composition may be confused by uncertainties in the identification of local thick disk stars; in particular, the local thick disk [$\alpha$/Fe] trend is not well defined above [Fe/H]$\sim$$-$0.3 dex. 
The unusual zig-zag abundance trends of [Cu/Fe] and [Na/Fe] are qualitatively consistent with the Type~Ia supernova time-delay scenario of Tinsley (1979) and Matteucci \& Brocato (1990) for elements made principally by core-collapse supernovae, but with metallicity-dependent yields. The metallicities, [$\alpha$/Fe] ratios and kinematics of the metal-poor and metal-rich bulge sub-populations resemble the solar neighborhood thick and thin disks, respectively, but with higher [Fe/H] than at the solar circle. If these sub-populations really represent the inner thin and thick disks, but at higher [Fe/H], then both the thin and thick disks possess a radial [Fe/H] gradient, within the solar circle, near $\sim$$-$0.04 to $-$0.05 dex/kpc. In the secular bulge scenario, the bulge was built from entrained inner disk stars driven by a stellar bar. Thus, it appears that the inner thin and thick disk stars retained vertical scale heights characteristic of their kinematic origin, resulting in the vertical [Fe/H] gradient seen today.
The Galactic bulge is a major component of the Milky Way (MW) Galaxy, morphologically distinct from the disk and halo, composed of mostly old stars with an embedded bar. It is the closest bulge and bar to us, and we can study it in greater detail than that of any other galaxy, down to individual stars. Not only does the MW bulge provide a way to understand bulges and bars in extragalactic spiral galaxies, but its population is also similar to that of distant giant elliptical galaxies. We would, naturally, like to know how the bulge came to be: how did it evolve? Because the chemical element abundance patterns contain a fossil record of past star formation, much could be learned from a study of the bulge chemical composition. However, an impediment to reading this fossil record is that we do not fully understand the nucleogenesis of all the elements. Thus, we must try to simultaneously understand both the mechanisms and astrophysical sites of element synthesis and the star formation history of the bulge. Because the bulge is situated in a deep gravitational well, compared to the MW disk and halo, and because its stars seem to be mostly old, chemical evolution occurred under different environmental conditions in the bulge. Thus, a comparison of the bulge chemical properties to those in other locations offers a way to understand how environment can affect chemical evolution. This should inform us about the sites of nucleosynthesis and provide clues to how the bulge evolved. Certainly, the chemical evolution models developed to explain the composition of stars near the Sun should work everywhere. To address these questions and issues we must first measure the bulge's chemical properties; good measurements are the basis for understanding. Once we have good measurements we need to compare them to something. It would be ideal to compare with the output of chemical evolution models, but at the present time it is more informative to compare to other chemically evolving systems. 
Here, we compare the bulge chemical composition with the Milky Way thin and thick disks, and then ask how the evolution of these systems could have produced the measured composition differences.
The bulge shows a vertical [Fe/H] gradient, at $\sim$0.5 dex/kpc, with more metal-rich stars concentrated toward the plane. The average and median [Fe/H] in the Baade's Window bulge field, at {\it b}=$-$3.9$^{\circ}$, are $+$0.06 dex and $+$0.15 dex, respectively. Hill et al. (2011) identified two sub-populations, centered at [Fe/H] of $-$0.30 dex and $+$0.32 dex, at {\it b}=$-$3.9$^{\circ}$, while Ness et al. (2013) suggest three sub-populations in the main bulge MDF, with [Fe/H] in their {\it b}=$-$5$^{\circ}$ field of $+$0.12 dex, $-$0.26 dex and $-$0.66 dex. Both studies suggest that the vertical gradient is due to changing proportions of these sub-populations. The higher mean and median [Fe/H] values for the bulge, compared to the local thin and thick disks, indicate a higher yield for the bulge. This could easily be due to more efficient retention of SN ejecta in the bulge, especially from SNIa. Other possible explanations include: 1. an IMF deficient in the lowest-mass stars in the bulge, thus locking up less gas, or 2. radial gas inflow into the bulge from the inner disk, although this should result in a deficit of metal-poor stars, which is not seen. If these bulge sub-populations originated from inner thick and thin disk stars entrained into a secular bulge through bar formation, then the high mean [Fe/H] values of the bulge sub-populations suggest radial [Fe/H] gradients from the bulge to the solar neighborhood of $-$0.04 to $-$0.05 dex/kpc for both the thin and thick disks. The [$\alpha$/Fe] ratios in the bulge, below [Fe/H]$\sim$$-$0.5 dex, are much like the local thick disk trends, suggesting similar IMF and SFR; but the thick disk is metal deficient, compared to the bulge, by more than 0.7 dex. A possible slight enhancement of the bulge [Mg/Fe] and [$<$SiCaTi$>$/Fe] compared to the thick disk may indicate SFR differences, but could be due to measurement errors, or to systematic uncertainties in the comparison of abundances for bulge giants with thick disk dwarfs. 
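The radial-gradient estimate quoted above is simple arithmetic; the following sketch assumes a solar galactocentric distance of 8 kpc and illustrative local-disk mean metallicities (thin disk $\approx 0.0$ dex, thick disk $\approx -0.6$ dex), all round numbers chosen only to reproduce the order of magnitude.

```python
R0 = 8.0  # kpc; assumed solar galactocentric distance (round number)

def radial_gradient(feh_bulge, feh_local, r0=R0):
    """Mean radial [Fe/H] gradient [dex/kpc] implied by the difference
    between a bulge sub-population and its assumed local-disk
    counterpart, spread over the bulge-to-Sun distance."""
    return -(feh_bulge - feh_local) / r0

# Illustrative inputs: Hill et al. (2011) bulge sub-populations versus
# assumed local means (thin disk ~ 0.0 dex, thick disk ~ -0.6 dex).
thin_grad = radial_gradient(+0.32, 0.0)     # metal-rich vs. thin disk
thick_grad = radial_gradient(-0.30, -0.60)  # metal-poor vs. thick disk
```

Both numbers come out near $-$0.04 dex/kpc, matching the range quoted in the text.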
Above [Fe/H]=$-$0.5 dex, the kinematically identified thick disk stars merge into the thin disk [$\alpha$/Fe] trends by solar [Fe/H], whereas the bulge [$\alpha$/Fe] is enhanced compared to the thin disk, by $\sim$$+$0.15 dex, indicating a higher SFR in the bulge than in the thin disk, at least. On the other hand, a small number of kinematically identified local thin disk stars seem to extend the slope established by metal-poor thick disk stars to solar [Fe/H] and beyond. The status of these high-$\alpha$ thin disk stars should be investigated further. It is remarkable that the [$\alpha$/Fe] trends of the thick disk and bulge are so close, suggesting similar SFR, even though their metallicities differ enormously. The [La/Eu] ratio, indicating the onset of the s-process and the presence of ejecta from 2--3 M$_{\odot}$ AGB stars, is consistent with a slightly higher SFR and shorter formation timescale for the bulge, compared to the MW disks. The [Cu/Fe] trend, measured by only one study, J14, is stunningly different from that of the thick or thin disk. However, the zig-zag [Cu/Fe] trend is qualitatively consistent with the combination of a high bulge SFR and metallicity-dependent Cu yields from massive stars (as expected) in the presence of the SNIa time-delay scenario. Curiously, the [Na/Fe] ratios in the bulge, while close to the solar ratio, show a small-amplitude zig-zag trend, similar to [Cu/Fe], suggesting the presence of metallicity-dependent Na yields from massive stars. The trend of LTE [Mn/Fe] ratios is the same in the bulge as in the MW thick and thin disks, to within measurement uncertainty. This is contrary to the expectation that SNIa over-produce Mn. Since the ratio of SNIa/SNII material is lower in the bulge than in the thin disk, as evidenced by the [$\alpha$/Fe] ratios, lower [Mn/Fe] would be expected in the bulge, but is not seen. Predicted non-LTE corrections to the LTE Mn abundances suggest that the trend of this element could be seriously affected by non-LTE effects.
We calculate the absorption efficiencies of composite silicate grains with inclusions of graphite and silicon carbide in the spectral range 5--25$\rm \mu m$. We study the variation in the absorption profiles with the volume fraction of inclusions. In particular, we study the variation in the wavelength of peak absorption at 10 and 18$\rm \mu m$. We also study the variation of the absorption of porous silicate grains. We use the absorption efficiencies to calculate the infrared flux at various dust temperatures and compare it with the observed infrared emission flux from the circumstellar dust around some M-type and AGB stars obtained from the IRAS satellite, and around a few stars observed by the Spitzer satellite. We interpret the observed data in terms of the circumstellar dust grain sizes, shapes, composition, and dust temperature.
In general, the composition of circumstellar dust around evolved stars is studied from observations in the near- and mid-infrared spectroscopy of absorption and emission features. Emission from the characteristic 10 and 18$\rm \mu m$ features, which arise from the bending and stretching modes of silicate grains, was first identified in the spectra of oxygen-rich giants and supergiants, \cite{woolf69}, \cite{woolf73}, \cite{bode88}. \cite{little} have analyzed about 450 IRAS-LRS spectra of M Mira variables to determine the morphology of the emission feature found near 10$\rm \mu m$ and to correlate the shape of this feature with the period, mass loss rate, and other parameters of the stars. \cite{simp} has analyzed the shape of the silicate dust features of 117 stars using spherical dust shell models. Evolved stars have distinctive IR spectra according to their C/O abundance ratio, e.g. \cite{aitken}, \cite{cohen}. It is to be noted, however, that of the principal condensates in the two environments, amorphous carbon and silicates, only the silicates have been detected directly. Amorphous carbon lacks strong IR resonances, although most of the continuum emission from carbon stars is presumably attributed to amorphous carbon, \cite{groen}. Silicon carbide (SiC) emission at 11.3$\rm \mu m$ is the only spectral feature of dust commonly observed in normal C-type red giants. However, as noted by \cite{lorenz}, SiC contributes only 10\% or less of the dust in such objects. Although the correlation between the C/O abundance ratio and the form of the IR spectrum is not perfect, the dust features are sometimes used as diagnostics of the C/O abundance ratios in stars. In some cases, silicate emission features are detected in stars classified as C-rich, e.g. \cite{little86}, \cite{waters}. 
Their analysis indicates that the peak wavelength, strength, and shape of the silicate features are very important for obtaining the exact composition, sizes, and shapes of the silicate grains. Thus, in order to interpret the observed silicate emission, we must compare the observed data with various silicate-based models. In this paper, we have systematically analyzed the spectra of about 700 IRAS-LRS stars and compared them with composite dust grain models. We have also analyzed four other M-type and AGB stars observed by the Spitzer satellite. The grains flowing out of the stars are most likely to be non-spherical, inhomogeneous, porous, fluffy composites of many very small particles, owing to grain-grain collisions, dust-gas interactions, and various other processes. Further, observations from space and balloon probes show that, in general, dust grains are porous, fluffy composites of many very small grains glued together; see \cite{brown}; \cite{kohler}; \cite{lasue} and \cite{levas}. Since there is no exact theory for the scattering properties of such composite grains, various approximate methods are used to model electromagnetic scattering by these grains, such as the EMA (Effective Medium Approximation), the DDA (Discrete Dipole Approximation), etc. In the EMA, the optical properties (refractive index, dielectric constant) of a small composite particle, comprising a mixture of two or more materials, are approximated by a single averaged optical constant, and then Mie theory or the T-matrix method is used to calculate the absorption cross sections for spherical/non-spherical particles. Basically, the inhomogeneous particle is replaced by a homogeneous one with some average effective dielectric function. The effects related to fluctuations of the dielectric function within the inhomogeneous structures cannot be treated by this approach. For details on the EMA, refer to \cite{bohren}. 
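As a concrete illustration of the EMA idea just described, one widely used mixing rule is the Maxwell-Garnett formula for spherical inclusions embedded in a host matrix (shown here as an example only; the text does not specify which mixing rule is adopted):

```python
def maxwell_garnett(eps_matrix, eps_incl, f):
    """Maxwell-Garnett effective dielectric function for spherical
    inclusions of volume fraction f embedded in a host matrix:
        (eps_eff - eps_m)/(eps_eff + 2 eps_m)
            = f (eps_i - eps_m)/(eps_i + 2 eps_m)
    One of several EMA mixing rules; purely illustrative here.
    Works for real or complex dielectric functions."""
    b = f * (eps_incl - eps_matrix) / (eps_incl + 2.0 * eps_matrix)
    return eps_matrix * (1.0 + 2.0 * b) / (1.0 - b)
```

The rule correctly reduces to the host value at $f=0$ and to the inclusion value at $f=1$; the resulting averaged dielectric function can then be passed to a Mie or T-matrix code, exactly as the EMA-T-matrix approach in the text prescribes.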
On the other hand, the DDA takes into account irregular shape effects, surface roughness, and the internal structure of dust grains. The DDA is computationally more rigorous than the EMA (for a discussion and comparison of the DDA and EMA methods, including the limitations of the EMA, see \cite{bazel}, \cite{perrin90a}, \cite{perrin90b}, \cite{ossen} and \cite{wolff1994}). The DDA, which was first proposed by \cite{purcell}, represents a composite grain of arbitrary shape as a finite array of dipole elements. Each dipole has an oscillating polarization in response to both the incident radiation and the electric fields of the other dipoles in the array, and the superposition of the dipole polarizations leads to the extinction and scattering cross sections. For a detailed description of the DDA, see \cite{draine1988}. In an earlier paper, \cite{vaidya2011} studied the effects of inclusions and porosity on the 10 and 18$\rm \mu m$ features for the average observed IRAS spectra. In this paper, we use both the DDA and the EMA-T-matrix method to study the absorption properties of composite grains consisting of a host silicate spheroid with inclusions of graphite or silicon carbide (SiC). The effects of inclusions and porosity, grain size, and axial ratio (AR) on the absorption efficiencies of the grains in the wavelength range 5--25 $\rm \mu m$ have been studied. In particular, we have systematically studied the 10$\rm \mu m$ silicate feature as a function of the volume fraction of the inclusions. Using the absorption efficiencies of these composite grains for a power-law grain size distribution (\cite{mathis1977}), the infrared fluxes for these grain models were calculated at various dust temperatures (T=200--400K). The infrared flux curves obtained from the models were then compared with the observed infrared emission curves of the circumstellar dust around 700 oxygen-rich M-type and AGB stars, obtained by the IRAS and Spitzer satellites. 
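The model flux computation just described, absorption efficiencies weighted by an MRN-type power-law size distribution and a Planck function, can be sketched as follows. The `q_abs` argument is a placeholder for tabulated DDA or EMA-T-matrix efficiencies, and the size limits are illustrative values, not those of the paper:

```python
import math

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def planck_lambda(lam, temp):
    """Planck function B_lambda(T) [W m^-3 sr^-1]; lam in metres."""
    x = H * C / (lam * KB * temp)
    return 2.0 * H * C ** 2 / lam ** 5 / math.expm1(x)

def model_flux(lam, temp, q_abs, a_min=5e-9, a_max=250e-9, n=200):
    """Relative IR flux for an MRN-like size distribution n(a) ~ a^-3.5
    (Mathis et al. 1977):
        F(lam) propto  B_lambda(T) * int Q_abs(a, lam) pi a^2 a^-3.5 da
    q_abs(a, lam) is a user-supplied absorption efficiency; midpoint
    rule is used for the size integral."""
    da = (a_max - a_min) / n
    total = 0.0
    for i in range(n):
        a = a_min + (i + 0.5) * da
        total += q_abs(a, lam) * math.pi * a ** 2 * a ** -3.5 * da
    return total * planck_lambda(lam, temp)
```

With a grey placeholder efficiency (`q_abs = lambda a, lam: 1.0`) the model reduces to a size-weighted blackbody, peaking near 10 $\mu$m for $T\approx300$ K; realistic silicate efficiencies reshape it into the 10 and 18 $\mu$m features.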
\cite{kessler2006} have used opacities for a distribution of hollow spheres (DHS) of silicate shells \cite{min05}, and these models have been compared with the circumstellar dust around a few stars observed by the Spitzer satellite. The DHS method averages the scattering and absorption/emission cross sections over the set of hollow spheres. \cite{smolder} have used silicates and gehlenite to study the 10 and 18$\rm \mu m$ peaks in the circumstellar dust around S-type stars observed by the Spitzer satellite. Very recently, \cite{siber} have used a mixture of amorphous carbon and silicate dust models to interpret interstellar extinction, absorption, emission, and polarization in the diffuse interstellar medium. \cite{mathis} and \cite{vosh2006} have used amorphous carbon with silicate in their composite grain models. \cite{zubko} have reviewed various dust models, including the composite silicate-graphite grain model, and have used the EMA-T-matrix method to calculate the extinction efficiencies of composite grains. Using a T-matrix based method, \cite{iati} have studied the optical properties of composite interstellar grains. \cite{draine-li} have used a silicate-graphite-PAH model to study the infrared emission from interstellar dust in the post-Spitzer era. Using a radiative transfer model, \cite{kirch} have studied the effect of dust porosity on the appearance of proto-planetary disks. A description of the composite grain models used is given in Section 2. In Section 3, the results of the studies on the absorption efficiencies of the grain models are presented. Section 4 provides the results of the comparison of the model curves with the observed IR fluxes obtained by the IRAS and Spitzer satellites. The conclusions are presented in Section 5.
We investigate two recent parameterizations of the galactic magnetic field with respect to their impact on cosmic nuclei traversing the field. We present a comprehensive study of the size of angular deflections, the dispersion in the arrival probability distributions, the multiplicity in the images of arrival on Earth, the variance in field transparency, and the influence of the turbulent field components. To remain restricted to ballistic deflections, a cosmic nucleus with energy $E$ and charge $Z$ should have a rigidity above $E/Z=6$ EV. In view of the differences resulting from the two field parameterizations, taken as a measure of the current knowledge of the galactic field, this rigidity threshold may have to be increased. For a point source search with $E/Z\ge 60$ EV, field uncertainties increase the number of signal events required for discovery moderately for sources in the northern and southern regions, but substantially for sources near the galactic disk.
The origin of cosmic rays still remains an unanswered fundamental research question. Cosmic ray distributions of various aspects have been measured, most notably the steeply falling spectrum up to the ultra-high energy regime with cosmic ray energies even exceeding $E=100$~EeV \cite{Abraham:2010mj, AbuZayyad:2013}. For ultra-high energy cosmic rays, deflections in magnetic fields should diminish with increasing energy, such that directional correlations should lead to a straightforward identification of accelerating sites. However, even at the highest energies the arrival distributions of cosmic rays appear to be rather isotropic. Only hints of departures from isotropic distributions have been reported, e.g., a so-called hot spot \cite{Abbasi2014} and a dipole signal \cite{ThePierreAuger:2014nja}. From the apparent isotropy, limits on the density of extragalactic sources were derived which depend on the cosmic ray energy \cite{Abreu:2013kif}. A recent determination of ultra-high energy cosmic ray composition from measurements of the shower depth in the atmosphere revealed contributions of heavy nuclei above $\sim 5$~EeV \cite{Aab2014a,Aab:2014aea}. This observation may explain the seemingly isotropic arrival distributions, as deflections of nuclei in magnetic fields scale with their nuclear charges $Z$. When searching for cosmic ray sources, a key role is therefore attributed to magnetic fields. The galactic field in particular is strong enough to displace arrival directions of protons with energy $E=60$~EeV by several degrees from their original arrival directions outside the galaxy \cite{Stanev:1996qj}. The displacement angles for nuclei even reach tens of degrees \cite{Giacinti:2010a}. Knowledge of the extragalactic magnetic fields is much less certain, but these fields are likely to be less important than the galactic field \cite{Hackstein:2016pwa} and are not studied in this contribution.
To identify sources of cosmic rays, rather precise corrections for the propagation within the galactic magnetic field are needed, which in turn can be used to constrain the field \cite{Golup:2009}. Beyond this, lensing effects caused by the galactic field have been studied which influence the visibility of sources and the number of images appearing from a single source \cite{Golup:2011}. The influence of turbulent contributions to the galactic field has also been studied in the context of lensing \cite{Harari:2002} and nuclear deflections \cite{Giacinti:2011}. In previous directional correlation analyses of measured cosmic rays, only the overall magnitude of deflections was taken into account, e.g., \cite{Aartsen:2015dml}, or corrections for cosmic ray deflections were applied using analytic magnetic field expressions reflecting the spiral structure of our galaxy \cite{Tinyakov:2001ir}. Recently, parameterizations of the galactic magnetic field have been developed which are based on numerous measurements of Faraday rotation \cite{Pshirkov2011,Jansson2012a} and, in the case of the second reference, additionally on polarized synchrotron radiation. Based on the directional characteristics and the field strength of the parameterizations, deflections of cosmic rays are predicted to depend strongly on their arrival direction, charge and energy. In the following we will refer to the regular field with the bisymmetric disk model of the first reference as the PT11 field parameterization, and to the regular field of the latter as the JF12 field parameterization.
Angular distributions of cosmic rays in these galactic field parameterizations have been studied before, e.g., with respect to general properties of the JF12 parameterization \cite{Farrar:2014hma}, specific source candidates \cite{Keivani:2014kua}, general properties of deflections and magnifications \cite{Farrar:2015dza, Farrar:inprep}, and to the potential of revealing correlations between cosmic rays and their sources \cite{emu2015}. In this work we investigate whether cosmic ray deflections in the galactic magnetic field can be reliably corrected for, given the current knowledge of the field. To simplify discussions of energy and nuclear dependencies we will define rigidity as the ratio of the cosmic ray energy and number $Z$ of elementary charges $e$ \begin{equation} R=\frac{E}{Z\;e} \;. \end{equation} In our investigations we use galactic coordinates as our reference system, with longitude $l$ and latitude $b$. For a number of visualizations we use Cartesian coordinates alternatively with height $z$ above the galactic plane, with the Earth being located at $(x_E, y_E, z_E)=(-8.5, 0, 0)$~kpc. Based on the two field parameterizations PT11 and JF12 we initially discuss key distributions of cosmic ray deflection, dispersion effects in arrival distributions, directional variance in field transparency, and the influence of random field components. From the rigidity dependencies of these distributions, we recommend a minimum rigidity threshold above which cosmic ray deflection may be controlled in terms of probability distributions. Furthermore, we take the different results of the two galactic field parameterizations as a measure of our current knowledge of the galactic field. We compare their cosmic ray angular deflections and study differences in the dispersion of arrival distributions. 
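The rigidity defined in the equation above is what controls the deflection scale. A minimal numerical sketch (the helper function is ours, for illustration only; the numbers are taken from the text):

```python
# Rigidity R = E/(Z e): with E in EeV and Z the charge number, R comes out
# directly in EV.

def rigidity_EV(energy_EeV, Z):
    """Rigidity in EV for a nucleus of energy E [EeV] and charge number Z."""
    return energy_EeV / Z

# A 60 EeV proton (Z = 1) has R = 60 EV, comfortably above the 6 EV threshold,
# while a 60 EeV neon nucleus (Z = 10) sits exactly at R = 6 EV.
print(rigidity_EV(60.0, 1))   # 60.0
print(rigidity_EV(60.0, 10))  # 6.0
```

This makes explicit why a rigidity cut translates into composition-dependent energy cuts: the same rigidity threshold corresponds to a $Z$-times higher energy for a nucleus of charge $Z$.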
Finally, we study the practical consequences of galactic field corrections and their uncertainties by performing simulated point source searches and by quantifying the field impact in terms of discovery potential.
Corrections for deflections in the galactic magnetic field using the two parameterizations PT11 and JF12 can be meaningfully considered for cosmic ray rigidities $R>6$~EV. Above this rigidity, deflections can be distinguished from a diffusive random walk. This has strong implications for analyses using cosmic ray data with mixed composition. For protons this rigidity corresponds to energies above $E=6$~EeV. However, when analyzing, e.g., neon nuclei with charge $Z=10$, meaningful corrections can be performed only for energies above $E=60$~EeV. When quantifying uncertainties in the galactic field from comparisons of the two field parameterizations PT11 and JF12, the rigidity threshold needs to be raised substantially. Then both fields give similar predictions for cosmic ray deflections in the northern and southern regions with galactic latitudes $\vert b \vert > 19.5$~deg. In the disk region $\vert b \vert < 19.5$~deg, however, the differences in the predictions remain large. Consequently, in our simulated search for cosmic ray origins the arising uncertainties are substantial for sources near the galactic disk, and may be considered acceptable for sources away from the disk emitting cosmic rays with rigidities $R\ge 20$~EV.
The Canadian Hydrogen Intensity Mapping Experiment (CHIME) Pathfinder radio telescope is currently surveying the northern hemisphere between 400 and 800 MHz. By mapping the large-scale structure of neutral hydrogen through its redshifted 21 cm line emission between $z\sim0.8$ and $2.5$, CHIME will contribute to our understanding of Dark Energy. Bright astrophysical foregrounds must be separated from the neutral hydrogen signal, a task which requires precise characterization of the polarized telescope beams. Using the DRAO John A. Galt 26 m telescope, we have developed a holography instrument and technique for mapping the CHIME Pathfinder beams. We report the status of the instrument and initial results of this effort.
\label{intro} The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is a new cylindrical transit interferometer currently being deployed at the Dominion Radio Astrophysical Observatory (DRAO) in Penticton, British Columbia. A smaller, two cylinder test-bed -- the CHIME Pathfinder -- has been built and instrumented with 128 dual polarisation dipole antennas and a custom FX correlator and is currently surveying the Northern hemisphere in 1024 frequency bands between 400 and 800 MHz. The Pathfinder correlator performs the full $N^2$ operation of correlating each of its 256 inputs at each frequency channel. See Ref. \citenum{chimepath1} for details of the design of the Pathfinder, Refs. \citenum{xeng1, xeng2, xeng3} for details on the GPU based X-engine, and Ref. \citenum{chimepath2} for a description of the calibration methodology. As a transit interferometer, CHIME monitors the entire Northern sky visible from the DRAO each night. The telescope is optimized for 21 cm intensity mapping at redshifts $0.8-2.5$ where tomography of the large-scale distribution of neutral hydrogen (HI) will allow for a time-dependent measurement of the Baryon Acoustic Oscillations (BAO). The result will provide constraints on the time evolution of Dark Energy, including the epoch where it begins to dominate the energy density of the universe and so influences its expansion \cite{furl, moraleswyithe, bao1}. To do so, we must contend with astrophysical foregrounds, notably the synchrotron emission of the Milky Way, which are some five orders of magnitude brighter than the HI signal \cite{santoscoorayknox}. Removal of foregrounds is possible due to their smooth spectral nature versus the 21 cm signal, which should be relatively uncorrelated in frequency \cite{santoscoorayknox, shaw1, shaw2}. Foreground filtering is only possible with precise instrument characterization. Uncertainty in the primary beam leads to mode mixing, converting small-scale angular power into frequency structure. 
Uncertainty in the polarized response of the telescope leads to leakage of polarised signal into total intensity. Both of these effects can easily overwhelm the 21 cm signal. In Ref.~\citenum{shaw2}, these statements are made quantitative via fully polarized end-to-end simulations of a CHIME-like cylinder telescope. By varying the full width at half power of the illuminating dipole feed, the authors set the specification required for an unbiased estimate of the 21 cm power spectrum to $0.1\%$ of this parameter. In these proceedings, we describe progress in mapping the full two-dimensional primary beam of each feed and frequency of the CHIME pathfinder array through a technique known as point-source holography \cite{radio1, radio2}. Holography is a well-known technique in radio astronomy and has been used with success in the near \cite{hol1} and far field \cite{hol2} to obtain high-resolution measurements on single-dish telescopes and dish arrays. Holographic techniques have further been used to map direction dependent polarisation leakages \cite{holpol}. In holography we track a bright point source with one telescope as a reference beam and correlate the signal with another telescope that is stationary. As the source transits, we measure a one-dimensional track through the stationary antenna beam. To serve as our tracking dish, we have equipped the John A. Galt 26 m telescope\cite{wolleben1, wolleben2} (hereafter 26 m), an equatorially mounted 26 meter diameter parabolic telescope also located at the DRAO, with a separate $400-800$ MHz receiver chain which is fed into the CHIME Pathfinder correlator. This allows correlation of its signal with the Pathfinder array. Since the Pathfinder is a fixed transit telescope, we observe sources at multiple declinations to obtain information on the North-South (NS) response of the beam. 
By averaging multiple transits from each source, we obtain high signal-to-noise, good angular resolution measurements of all 256 beams in both amplitude and phase. We have collected a preliminary data set, which allows for development and validation of the holographic analysis pipeline and for the initial results that we present here. The document is organized as follows. In Section \ref{sec1}, we describe the Pathfinder and 26 m instruments and the observations and data included here. In Section \ref{sec2}, we outline the holographic data analysis method. In Section \ref{sec3}, we discuss the processed results. Finally, in Section \ref{sec4} we discuss the results of full-sky simulations of the measurement, which we have conducted to assess the effect of various systematics on the holographic reconstruction of the beam, notably the effect of background contamination.
\label{conclusion} In this document, we have described and validated our technique of radio holography of bright astronomical point sources for obtaining high signal-to-noise (S/N) measurements, with good angular resolution, of the two-dimensional primary beams of the CHIME Pathfinder array across its frequency band. We have reported our progress in equipping the John A. Galt 26 m telescope with custom instrumentation for the purpose, and displayed the output of the method for a preliminary data set of 7 sources of minimal depth. It is clear from the data that more integration time is necessary for the low S/N sources to begin to measure the sidelobe structure seen in the best sources. From Figure \ref{full2dbeam} we see that the seven sources for which we have holography do not fully sample the two-dimensional structure in our model of the CHIME beams. The basic program is to use these holographic measurements to test and refine our beam models. Additionally, we plan to augment our holographic observations with additional sources which were not favorably located during the period these data were collected. However, our analysis of Section \ref{sec4} suggests that there is a minimum primary source flux required to obtain a reasonable beam trace. In addition to the holography method presented here, we are pursuing other methods of filling in the NS beam structure, notably with satellites\cite{hol2, sat2}, drones \cite{drone}, and pulsar holography. In its current form the data presented here serve as a basis for an understanding of the primary beam of a realistic cylindrical telescope array, such as CHIME and its Pathfinder.
We present a detailed analysis of the pre-main-sequence (PMS) population of the young star cluster Westerlund~2 (Wd2), the central ionizing cluster of the \ion{H}{2} region RCW~49, using data from a high-resolution multi-band survey with the \textit{Hubble} Space Telescope. The data were acquired with the Advanced Camera for Surveys in the $F555W$, $F814W$, and $F658N$ filters and with the Wide Field Camera 3 in the $F125W$, $F160W$, and $F128N$ filters. We find a mean age of the region of $1.04\pm0.72$~Myr. Dereddened $F555W$ and $F814W$ photometry in combination with $F658N$ photometry allows us to identify and study stars with H$\alpha$ excess emission. With a careful selection of 240 bona-fide PMS H$\alpha$ excess emitters we were able to determine their H$\alpha$ luminosity, which has a mean value $L(\rm{H}\alpha)=1.67 \cdot 10^{31}~\rm{erg}~\rm{s}^{-1}$. Using the PARSEC 1.2S isochrones to obtain the stellar parameters of the PMS stars we determined a mean mass accretion rate $\dot M_{\rm{acc}}=4.43 \cdot 10^{-8}~M_\odot~\rm{yr}^{-1}$ per star. A careful analysis of the spatial dependence of the mass-accretion rate suggests that this rate is $\sim 25\%$ lower in the centers of the two density peaks of Wd2, in close proximity to the luminous OB stars, compared to the Wd2 average. The rate increases with increasing distance from the OB stars, indicating that the PMS accretion disks are being rapidly destroyed by the far-ultraviolet radiation emitted by the OB population.
\label{sec:introduction} With a stellar mass of M~$\ge 10^4$~M$_\odot$ \citep{Ascenso_07} the young Galactic star cluster \object{Westerlund~2} \citep[hereafter Wd2;][]{Westerlund_61} is one of the most massive young clusters in the Milky Way (MW). It is embedded in the \ion{H}{2} region \object{RCW~49} \citep{Rodgers_60}, located in the Carina-Sagittarius spiral arm $(\alpha,\delta)=(10^h23^m58^s.1,-57^\circ45'49'')$(J2000), $(l,b)=(284.3^\circ,-0.34^\circ)$. There is general agreement in the literature that Wd2 is younger than 3~Myr and that its core might be younger than 2~Myr \citep{Ascenso_07,Carraro_13}. In our first paper \citep[][hereafter Paper I]{Zeidler_15} we confirmed the cluster distance of \citet{Vargas_Alvarez_13} of 4.16~kpc, using \textit{Hubble} Space Telescope (HST) photometry and our high-resolution 2D extinction map. We estimated the age of the cluster core to be between 0.5 and 2.0~Myr. Using two-color diagrams (TCDs), we found a total-to-selective extinction $R_V=3.95 \pm 0.135$ (Paper I). This value was confirmed by an independent, numerical study of \citet{Mohr-Smith_15}. Their best-fitting parameter is $R_V=3.96^{+0.12}_{-0.14}$, which is in very good agreement with our result. Furthermore, we found that Wd2 contains a rich population of pre-main-sequence (PMS) stars. Over the past decades studies showed that during the PMS phase, low-mass stars grow in mass through accretion of matter from their circumstellar disk \citep[e.g.,][and references therein]{Lynden-Bell_74,Calvet_00}. These disks form due to the conservation of angular momentum following infall of mass onto the star, tracing magnetic field lines connecting the stars and their disks. It is believed that this infall leads to the strong excess emission in the infrared in contrast to the flux distribution of a normal black-body. 
This excess emission is observed for many PMS stars and probably originates through gravitational energy being radiated away and exciting the surrounding gas. As a result, this excess can be used to measure accretion rates for these classical T-Tauri stars \citep[especially via H$\alpha$ and Pa$\beta$ emission lines, e.g.,][]{Muzerolle_98c,Muzerolle_98b}. The accretion luminosity ($L_{acc}$) can then be used to calculate the mass accretion rate ($\dot M$). Studies of different star formation regions \citep[e.g., Taurus, Ophiuchus,][]{Sicilia-Aguilar_06} showed that these accretion rates decrease steadily from $\sim 10^{-8} \rm M_\odot \rm{yr}^{-1}$ to less than $10^{-9} \rm M_\odot \rm{yr}^{-1}$ within the first 10~Myr of the PMS star lifetime \citep[e.g.,][]{Muzerolle_00,Sicilia-Aguilar_06}. This is in good agreement with the expected evolution of viscous disks as described by \citet{Hartmann_98}. These studies all agree that the mass accretion rate decreases with the stellar mass. Understanding these accretion processes plays an important role in understanding disk evolution as well as the PMS cluster population as a whole \citep{Calvet_00}. The ``standard'' way to quantify the mass accretion is through spectroscopy. Usually, one studies the intensity and profile of emission lines such as H$\alpha$, Pa$\beta$, or Br$\gamma$, which requires medium- to high-resolution spectra. This approach has the disadvantage of long integration times and, therefore, only a small number of stars can usually be observed. H$\alpha$ filters have long been used to identify H$\alpha$ emission-line objects in combination with additional broadband or intermediate-band colors \citep[e.g.,][]{Underhill_82}. For panoramic CCD detectors, the technique was first applied by \citet{Grebel_92} and then developed further for different filter combinations and to quantify the H$\alpha$ emission \citep[e.g.,][]{Grebel_93b,Grebel_97}.
\citet{deMarchi_10} used this photometric method to estimate the accretion luminosity of PMS stars. Normally the R-band is used as the continuum for the H$\alpha$ filter. \citet{deMarchi_10} showed for the field around SN~1987A \citep{Romaniello_98,Panagia_00,Romaniello_02} that the Advanced Camera for Surveys \citep[ACS,][]{ACS} filters $F555W$ and $F814W$ can be similarly used to obtain the continuum for the H$\alpha$ filter. Up to now, this method \citep{deMarchi_15} has proven successful in studies of different clusters, such as NGC~346 in the Small Magellanic Cloud \citep[SMC, ][]{deMarchi_11a} and NGC~3603 in the MW \citep{Beccari_10}. Due to its young age, Wd2 is a perfect target to study accretion processes of the PMS stars in the presence of a large number \citep[$\sim 80$, see][]{Moffat_91} of O and B stars. In close proximity to OB stars, the disks may be expected to be destroyed faster by the external UV radiation originating from these massive stars. This would lead to a lower excess of H$\alpha$ emission in the direct neighborhood of the OB stars \citep{Anderson_13,Clarke_07}. Our high-resolution multi-band observations of Wd2 in the optical and near-infrared (Paper I) give us the opportunity to study the PMS population and the signatures of accretion in detail in a spatially resolved, cluster-wide sample down to a stellar mass of 0.1~M$_\odot$. In Paper I, we showed that the stellar population of RCW~49 mainly consists of PMS stars and massive OB main-sequence (MS) stars. These objects are not found in one single, centrally concentrated cluster but are mostly located in two sub-clusters of Wd2, namely its main concentration of stars, which we term the ``main cluster'' (MC), and a secondary, less pronounced concentration, which we call the ``northern clump'' (NC). This paper is a continuation of the study presented in Paper I with an emphasis on the characterization of the PMS population. In Sect.
\ref{sec:catalog} we give a short overview of the photometric catalog presented in Paper I. In Sect. \ref{sec:stellar_pop} we look in more detail into the stellar population of RCW~49. We analyze the color-magnitude diagrams (CMDs) for the region as a whole as well as for individual sub-regions. In Sect. \ref{sec:Halpha} we provide a detailed analysis of the determination of the H$\alpha$ excess emission stars. In Sect. \ref{sec:acc_L_and_M} we use H$\alpha$ excess emission to derive the accretion luminosity as well as the mass accretion rate. Furthermore, we provide a detailed analysis of the change of the mass accretion rate with the stellar age and the location relative to the OB stars. In Sect.~\ref{sec:uncertainties} we give an overview and summary of the contribution of the different sources of uncertainty. In Sect. \ref{sec:summary} we summarize the results derived in this paper and we discuss how they further our understanding of this region.
\label{sec:summary} In this paper we examined the PMS population of RCW~49 using our recent optical and near-infrared HST dataset of Wd2, obtained in 6 filters ($F555W$, $F658N$, $F814W$, $F125W$, $F128N$, and $F160W$; for more details see Paper I). To analyze the PMS population of Wd2 we determined the stellar parameters ($T_{\rm{eff}}$, $L_{\rm{bol}}$, and $M_\star$) using the PARSEC 1.2S \citep{Bressan_12} stellar evolution models. We estimated the ages of the PMS stars using the $F814W_0$ vs. $(F814W-F160W)_0$ CMD in combination with the PARSEC 1.2S isochrones. The full sample of 5404 PMS stars (cluster members detected in $F814W$ and $F160W$) has a mean age of $1.04 \pm 0.71$~Myr with $\sim60\%$ of all stars being between 1.0--2.0~Myr old. The full sample age is representative for the Wd2 cluster age (see Sect.~\ref{sec:stellar_ages}). The cluster age is also in good agreement with the age estimated by \citet[][1.5--2Myr]{Ascenso_07} and the theoretical MS lifetime of massive O stars of 2--5~Myr \citep[see Tab 1.1 in][]{Sparke_07}. Therefore, Wd2 has the same age or is even younger than other very young star clusters like \object{NGC~3603} \citep[1~Myr,][]{Pang_13}, \object{Trumpler~14} \citep[$\le 2$~Myr,][]{Carraro_04_Tr14} in the \object{Carina Nebula} \citep{Smith_08}, \object{R136} in the \object{Large Magellanic Cloud} \citep[1--4~Myr,][]{Hunter_95,Walborn_97,Sabbi_12}, \object{NGC~602} \citep{Cignoni_09} and \object{NGC~346} \citep{Cignoni_10ApJ} both in the SMC, or the \object{Arches} cluster \citep{Figer_02,Figer_05}. It is also younger than \object{Westerlund~1} ($5.0 \pm 1.0$~Myr), the most massive young star cluster known in the MW \citep{Clark_05,Gennaro_11,Lim_13}. Comparing the $F814W_0$ vs. $(F814W-F160W)_0$ CMDs of the four different regions MC, NC, the Wd2 cluster outskirts, and the periphery of RCW~49, we do not find any significant age difference between the regions (see Tab.~\ref{tab:spatial_distribution}). 
It appears that the MC and the NC are coeval. Following the method applied in \citet{deMarchi_10} we used the individually extinction-corrected $F555W$, $F814W$, and $F658N$ photometry to select 240 H$\alpha$ excess emission stars in the RCW~49 region. We used the ATLAS9 model atmospheres \citep{Castelli_Kurucz_03} and the Stellar Spectral Flux Library by \citet{Pickles_98} to obtain interpolated $R$-band photometry from the $F555W_0$ and $F814W_0$ filters to get a reference template (see Appendix~\ref{sec:R_band}). Using TCDs we selected as H$\alpha$ excess emission stars all stars that are located at least $5\sigma$ above the continuum emission. Additionally, all stars must have an H$\alpha$ emission line $\rm{EW}>10\rm{\AA}$. A $(F555W-F814W)_0 > 0.2$~mag criterion is used to exclude possible Ae/Be candidates (see Sect. \ref{sec:Halpha_emission}). This yields 24 Ae/Be candidates (see Sect.~\ref{sec:AeBe_candiates}), mainly located in the TO and MS region of the $F814W_0$ vs. $(F814W-F160W)_0$ CMD (see Fig.~\ref{fig:F814W-F160W_F814W}), and 240 H$\alpha$ excess emission stars with a mean H$\alpha$ luminosity $L(\rm{H}\alpha)=\left(1.67 \pm 0.45\right) \cdot 10^{31}~\rm{erg}~\rm{s}^{-1}$ and a mass accretion rate of $\dot M_{\rm{acc}}=\left(4.43 \pm 1.68 \right) \cdot 10^{-8}~M_\odot~\rm{yr}^{-1}$. The mean age is $0.62 \pm 0.57$~Myr. The MC and NC host at least 36 and 26 H$\alpha$ excess emission stars, respectively, while the remaining part of the Wd2 cluster contains at least 106. The remaining 72 are located in the periphery (see Tab.~\ref{tab:spatial_distribution}). The mean mass accretion rate in Wd2 is $\sim 70\%$ higher than in the SN~1987~A field \citep[$\dot M_{\rm{acc}}=2.6 \cdot 10^{-8}~M_\odot~\rm{yr}^{-1}$,][]{deMarchi_10}, $\sim 77\%$ higher than in NGC~602 \citep[$\sim 2.5 \cdot 10^{-8}~M_\odot~\rm{yr}^{-1}$,][]{deMarchi_13a}, and $\sim 14\%$ higher than in NGC~346 \citep[$3.9 \cdot 10^{-8}~M_\odot~\rm{yr}^{-1}$,][]{deMarchi_11a}.
With a mean age of $\sim 1$~Myr Wd2 is younger than the PMS populations investigated by the other studies, which explains the higher mass accretion rate. Taking the younger age and the uncertainty range into account, the mass accretion rates determined in this paper are consistent with the theoretical studies of \citet{Hartmann_98} and the collected data of \citet{Calvet_00} for a number of star-forming regions. \citet{Hartmann_98} showed in their theoretical study of the evolution of viscous disks that the mass accretion rate decreases with increasing age ($\dot M \propto t^{-\eta}$). This was confirmed in many observational studies for different regions inside and outside the MW \citep[e.g.,][]{Calvet_00,Sicilia-Aguilar_06,Fang_09,deMarchi_13a}, yet the slope is poorly constrained. We analyzed our bona-fide sample of 240 mass-accreting stars and determined a decreasing slope of $\eta=0.44 \pm 0.04$, which is in agreement with other studies, taking into account the large uncertainty. The FUV flux emitted by the luminous OB stars can lead to a shorter disk lifetime due to erosion \citep[e.g.,][]{Clarke_07}. \citet{Anderson_13} studied the effects of photoevaporation in the close vicinity (0.1--0.5~pc) of OB stars. Most of their disks were completely dispersed within 0.5--3.0~Myr. In our study of Wd2 we used the centers of the MC and NC and calculated the projected geometric center of all known OB stars within 0.5~pc (red crosses in Fig.~\ref{fig:Halpha_spatial_dist}). We then calculated the mean mass accretion rate in annuli of $15''$ or 0.3~pc going outwards from the respective centers (see Fig.~\ref{fig:radial_Halpha_dist}). The median mass accretion rate in the Wd2 cluster is $4.43~\cdot 10^{-8}~M_\odot \rm{yr}^{-1}$ and thus $\sim 25$--$30\%$ higher than in the MC ($3.32~\cdot 10^{-8}~M_\odot \rm{yr}^{-1}$) and NC ($3.12~\cdot 10^{-8}~M_\odot \rm{yr}^{-1}$). 
With increasing distance from the respective centers of the two density concentrations the mass accretion rate steeply increases by 60\% in the MC and 68\% in the NC within the innermost $30''$ (0.6~pc) and $45''$ (0.9~pc), respectively. With an increasing number of OB stars the mass accretion rate drops by 5--22\% (see Fig.~\ref{fig:radial_Halpha_dist}). Far away ($\gtrapprox 0.5$~pc) from the OB stars the mass accretion rate rises to a peak value of $5.9~\cdot 10^{-8}~M_\odot \rm{yr}^{-1}$. Despite the large uncertainty in the mass accretion rate, the effect of the increased rate of disk destruction is visible. This effect was also seen in other massive star-forming regions, e.g., by \citet{deMarchi_10} for the region around SN~1987~A and by \citet{Stolte_04} for NGC~3603, and supports the theoretical scenario of \citet{Clarke_07} and \citet{Anderson_13}. In \citet{Zeidler_16c} we will provide completeness tests and a more sophisticated analysis of the spatial distribution of the stellar population in Wd2 than in Paper~I. Furthermore, we will determine the present-day mass function, as well as the mass of the Wd2 cluster as a whole and of its sub-clusters.
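The viscous-disk decline $\dot M \propto t^{-\eta}$ discussed above lets one scale accretion rates between ages. A small sketch using the fitted slope $\eta = 0.44$ (the helper function and the chosen age pair are illustrative, not from the paper):

```python
def mdot_scale(t1_myr, t2_myr, eta=0.44):
    """Ratio Mdot(t2)/Mdot(t1) for a power-law decline Mdot ∝ t**(-eta)."""
    return (t2_myr / t1_myr) ** (-eta)

# With eta = 0.44, the accretion rate declines by a factor of about 2.8
# between 1 Myr and 10 Myr:
print(round(1.0 / mdot_scale(1.0, 10.0), 1))  # 2.8
```

This factor of a few per decade in age is consistent with the qualitative statement that the younger Wd2 population shows higher mean accretion rates than the older comparison regions.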
Low-resolution (4.5 to 5 \AA) spectra of 58 blue supergiant stars distributed over the disk of the Magellanic spiral galaxy NGC\,55 in the Sculptor group are analyzed by means of non-LTE techniques to determine stellar temperatures, gravities and metallicities (from iron-peak and $\alpha$-elements). A metallicity gradient of $-0.22 \pm0.06$ dex/R$_{25}$ is detected. The central metallicity on a logarithmic scale relative to the Sun is [Z] = $-0.37 \pm 0.03$. A chemical evolution model using the observed distribution of stellar and interstellar medium gas mass column densities reproduces the observed metallicity distribution well and reveals a recent history of strong galactic mass accretion and wind outflows, with accretion and mass-loss rates of the order of the star formation rate. There is an indication of spatial inhomogeneity in metallicity. In addition, the relatively high central metallicity of the disk confirms that two extra-planar metal-poor HII regions detected in previous work, 1.13 to 2.22 kpc above the galactic plane, are ionized by massive stars formed in-situ outside the disk. For a sub-sample of supergiants, for which Hubble Space Telescope photometry is available, the flux-weighted gravity--luminosity relationship is used to determine a distance modulus of $26.85 \pm 0.10$ mag.
The galaxies in the Sculptor group seem to form a filament extended along the line of sight \citep{jerjen1998, karachentsev2003} with three distinct subgroups, one around the starburst galaxy NGC\,253 at about 4 Mpc, the other around NGC\,7793 at about the same distance, and the third much closer at half of this distance around NGC\,300. While NGC\,253, as the largest and most massive galaxy, seems to form the dynamical center of the group, the subgroup around NGC\,300 appears to be attracted by the gravitational field of the Local Group. NGC\,55 is member of the nearby subgroup. The near-IR Cepheid study of the Araucaria collaboration places NGC\,55 and NGC\,300 at roughly the same distance of 1.9 Mpc \citep{gieren2005b, gieren2008}. These two galaxies have roughly comparable near-IR magnitudes \citep[K$_{tot}$ = 6.25 and 6.38, respectively, see][]{jarrett2003} and mid-IR fluxes \citep[F$_{3.6 \mu}$ = 2.02 and 1.63 Jy, F$_{4.5\mu}$ = 1.39 and 1.20 Jy, see][]{dale2009} indicating comparable stellar masses. However, while NGC\,300 is a regular spiral galaxy of morphological type Scd with a moderate inclination angle (i = 39.9 degrees), NGC\,55 is an almost edge-on (i = 78 degrees) barred spiral of type SB(s)m with the bar apparently oriented along the line of sight \citep{devaucouleurs1961, devaucouleurs1991}, which closely resembles the Large Magellanic Cloud \citep{westmeier2013, robinson1964}. NGC\,300 has been subject to many very detailed studies of its stellar populations, the ISM (atomic and molecular gas distribution, HII regions, planetary nebulae, supernovae remnants, dust content) and the very extended faint stellar disk (see \citealt{bresolin2009}, \citealt{vlajic2009}, \citealt{westmeier2011}, \citealt{stasinska2013}, \citealt{kang2016}, \citealt{toribio2016}, and references therein). 
In particular, the metallicity of the ISM and the young stellar population has been investigated by detailed quantitative spectroscopic studies of blue supergiant stars \citep{kudritzki2008}, red supergiants \citep{gazak2015}, and HII regions \citep{bresolin2009}. While these three investigations used entirely independent methods for metallicity diagnostics, the results with respect to central metallicity and metallicity gradient agreed extremely well. On the other hand, only a handful of HII regions in the disk of NGC\,55 have been studied to date, with an uncertain range in metallicity between [Z] = $-0.6$ and $-0.2$\footnote{We transform the nebular oxygen abundances (O/H) to metallicity relative to solar [Z] adopting 12\,+\,log(O/H)$_\odot$ = 8.69 from \citet{asplund2009}} \citep{webster1983, stasinska1986, zaritsky1994, tuellmann2003, castro2012, pilyugin2014}. \citet{tuellmann2003} also investigated two extra-planar HII regions located 0.8 and 1.5 kpc above the disk and found a metallicity almost a factor of ten smaller than solar. \citet{castro2008} described spectra and spectral morphology of a large sample of hot massive stars, mostly blue supergiants, which were obtained within the Araucaria collaboration \citep[see][]{gieren2005a}, but so far only a small fraction of these objects, 12 supergiants of early-B spectral type, have been subject to a quantitative spectral analysis \citep{castro2012}. This work indicated an average metallicity very similar to the LMC, but remained inconclusive about the possibility of a spatial trend in metallicity, in particular a radial metallicity gradient. NGC\,55, a galaxy with significant star formation, is very likely subject to mass accretion and gas outflows \citep{westmeier2013, tuellmann2003} and, thus, accurate information about metallicity and a potential metallicity gradient might make it possible to constrain the rates of matter inflow and outflow \citep[see][]{kudritzki2015}. 
We have, therefore, resumed the analysis of the spectra obtained by \citet{castro2008}, this time focussing on the supergiants of spectral type B8 to A5, for which the signal-to-noise ratio was sufficient for a quantitative spectral analysis. Because of the many metal lines in their spectra and because of their enormous intrinsic brightness, supergiants of these spectral types are ideal for extragalactic metallicity studies (see \citealt{kudritzki2008,kudritzki2012,kudritzki2014}, and references therein). The selection of targets resulted in a sample of 46 objects distributed over a large range of galactocentric distances. These objects were then analyzed in detail with respect to their effective temperatures, gravities and metallicities and the results are presented in this paper. After a brief description in Section 2 of the observations, the analysis method and the geometrical model used for a de-projection of the location of the targets in the galactic disk, we summarize the results in Section 3. Section 4 focusses on the metallicity and the metallicity gradient and applies a chemical evolution model. Blue supergiants, as the brightest stars in the Universe at optical wavelengths, are also excellent distance indicators, because their ``flux-weighted gravity'' $g_F\,\equiv\,g$/{\teffq} (\teff\ in units of 10$^{4}$~K) is tightly correlated with their absolute bolometric magnitude, leading to the ``Flux-weighted Gravity -- Luminosity Relationship (FGLR)'' \citep[see][]{kudritzki2003, kudritzki2008}. 
Since the distance moduli to NGC\,55 obtained with different methods appear to be somewhat controversial, ranging from 26.4 mag \citep[Cepheids:][]{gieren2008} to 26.6 mag \citep[EDD database, http://edd.ifa.hawaii.edu, see][]{tully2009} to 26.8 mag \citep[PNLF:][]{vansteene2006}, we select a sub-sample of 13 supergiants, for which HST/ACS photometry is available, and determine an independent distance in Section 5 using the most recent calibration of the FGLR method by \citet{urbaneja2016}. Section 6 presents a final discussion.
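Both quantities entering the distance analysis reduce to simple formulae: the flux-weighted gravity $g_F \equiv g$/{\teffq} defined above and the standard relation between distance modulus and distance, $\mu = 5\log_{10}(d/10\,\mathrm{pc})$. The sketch below (illustrative only; the FGLR calibration coefficients themselves are not reproduced) converts the distance moduli discussed in the text into physical distances.

```python
import math

# Illustrative helpers (not the paper's code): flux-weighted gravity as
# defined in the text, and the standard distance-modulus conversion.

def log_gF(log_g, teff_K):
    """log10 of the flux-weighted gravity g_F = g / (Teff / 10^4 K)^4."""
    return log_g - 4.0 * math.log10(teff_K / 1.0e4)

def distance_Mpc(mu):
    """Distance in Mpc for a distance modulus mu = 5 log10(d / 10 pc)."""
    return 10.0 ** (mu / 5.0 + 1.0) / 1.0e6

# The distance moduli discussed in the text:
for mu in (26.4, 26.6, 26.8, 26.85):
    print(mu, round(distance_Mpc(mu), 2))  # 1.91, 2.09, 2.29, 2.34 Mpc
```

The FGLR distance modulus of 26.85 mag quoted in the abstract thus corresponds to about 2.34 Mpc.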
The comprehensive spectroscopic study of blue supergiant stars distributed over the disk of NGC\,55 has led to the first detection of a metallicity gradient in this almost edge-on late-type spiral galaxy. The application of a chemical evolution model indicates the effects of intensive infall and outflow, in agreement with recent radio observations of the galaxy, which conclude that the disk is very likely stirred up and disturbed by infalling and outflowing gas. We also find indications of chemical inhomogeneities, which support this picture. The significant difference between the central metallicity of the disk and the metallicity of two extra-planar HII regions above the central disk provides strong evidence for in-situ star formation outside the galactic plane. A distance determination using the FGLR method leads to a distance which is larger than the distance to the Sculptor group neighbor galaxy NGC\,300. Further HST photometry will be needed to settle the issue of distance determination to this galaxy.
[arXiv:1607.04325, July 2016]
In recent years there have been significant improvements in the sensitivity and the angular resolution of the instruments dedicated to the observation of the Cosmic Microwave Background (CMB). ACTPol is the first polarization receiver for the Atacama Cosmology Telescope (ACT) and is observing the CMB sky with arcmin resolution over $\sim$2000 sq. deg. Its upgrade, Advanced ACTPol (AdvACT), will observe the CMB in five frequency bands and over a larger area of the sky. We describe the optimization and implementation of the ACTPol and AdvACT surveys. The selection of the observed fields is driven mainly by the science goals, that is, small angular scale CMB measurements, B-mode measurements and cross-correlation studies. For the ACTPol survey we have observed patches of the southern galactic sky with low galactic foreground emissions which were also chosen to maximize the overlap with several galaxy surveys to allow unique cross-correlation studies. A wider field in the northern galactic cap ensured significant additional overlap with the BOSS spectroscopic survey. The exact shapes and footprints of the fields were optimized to achieve uniform coverage and to obtain cross-linked maps by observing the fields with different scan directions. We have maximized the efficiency of the survey by implementing a close to 24 hour observing strategy, switching between daytime and nighttime observing plans and minimizing the telescope idle time. We describe the challenges represented by the survey optimization for the significantly wider area observed by AdvACT, which will observe roughly half of the low-foreground sky. The survey strategies described here may prove useful for planning future ground-based CMB surveys, such as the Simons Observatory and CMB Stage IV surveys.
The Cosmic Microwave Background (CMB) remains one of the most valuable sources of cosmological information. Temperature anisotropy measurements have reached the cosmic variance limit at angular scales $\gtrsim 0.1^{\circ}$ with the Planck satellite\cite{Ade:2015xua}. Polarization measurements can provide important additional information able to break degeneracies between cosmological parameters and constrain extensions of the $\Lambda$CDM model, such as the tensor-to-scalar ratio $r$ and the neutrino mass sum $\sum m_{\nu}$. Several ground-based experiments, such as the Atacama Cosmology Telescope Polarimeter (ACTPol) \cite{2010SPIE.7741E..1SN}, POLARBEAR \cite{2010arXiv1011.0763T}, SPTpol \cite{2012SPIE.8452E..1EA}, BICEP2 \cite{Ade:2014gua}, Keck-array \cite{2012JLTP..167..827S} and CLASS \cite{2014SPIE.9153E..1IE}, are measuring the E-mode and B-mode polarization signals \cite{Hanson:2013hsb, Ade:2014xna, 2014ApJ...794..171T, 2014JCAP...10..007N}. In this paper we focus on the survey strategy implemented by ACTPol between 2013 and 2015 and the upgraded instrument AdvACT \cite{Henderson:2015nzj}, which started its first observation campaign in 2016. AdvACT will observe approximately half of the sky in five frequency bands, from 28 GHz to 230 GHz, with the high angular resolution of 1.4 arcmin at 150 GHz already achieved by ACTPol. The expected map noise in temperature and polarization will be significantly reduced with respect to the ACTPol survey thanks to the nearly doubled number of detectors. AdvACT plans to use half-wave plates (HWP) that modulate the polarized signal at several Hz to improve polarization measurements at the largest angular scales. One of the unique advantages of the ACTPol and AdvACT surveys is the large overlap with optical surveys like BOSS\cite{2013AJ....145...10D}, HSC\cite{HSC2006}, DES\cite{DES}, DESI\cite{Levi:2013gra} and LSST\cite{Ivezic:2008fe}. 
This overlap allows for powerful cross-correlation studies and new probes of dark energy and the neutrino mass sum. The observation plans for ACTPol and AdvACT are designed to maximize the scientific potential of the surveys. The choice of the observed fields and of the relative priority for different fields (i.e. which field is observed when more than one field is visible at the same time) takes into account several constraints and scientific objectives that often compete with each other. Observing regions that maximize overlap with optical surveys is an important goal for cross-correlation studies. Galactic dust emission is a significant limitation, especially for measurements of large scale B-mode polarization, which thus favor observations of low foreground regions. Scanning the fields while they rise and set at different elevations (cross-linking) reduces systematic effects in the map-making process\cite{Sutton:2008zh}. For the daytime measurements, sun avoidance introduces an additional constraint that must be accounted for, or significant data loss can result from data acquired with the sun in the sidelobes. Moon avoidance is less concerning: we have not yet accounted for it in the ACTPol strategy or in the 2016 AdvACT strategy, but we plan to include it in future observing plans. We take these constraints into account and maximize the efficiency of the observing plan by minimizing the idle time of the telescope and switching between different observing plans for daytime and nighttime. In section \ref{sec_actpol} we describe the three seasons of observations with ACTPol from 2013 to 2015, while in section \ref{sec_advactpol} we focus on the survey strategy for AdvACT, discussing the challenges represented by the much wider area covered by the AdvACT survey.
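As a rough consistency check on the 1.4 arcmin beam quoted in the introduction, the sketch below compares it with the diffraction limit $\theta \approx 1.22\,\lambda/D$. The $\sim$6 m primary-mirror diameter used here is an assumption of this example (ACT's aperture size), not a number stated in the text.

```python
import math

# Back-of-envelope diffraction-limit check for the quoted 1.4 arcmin
# resolution at 150 GHz. The ~6 m aperture is an assumption of this
# sketch, not a number taken from the text.

C = 299792458.0  # speed of light, m/s

def beam_arcmin(freq_GHz, aperture_m):
    lam = C / (freq_GHz * 1.0e9)       # observing wavelength in metres
    theta = 1.22 * lam / aperture_m    # diffraction limit in radians
    return math.degrees(theta) * 60.0  # convert to arcminutes

print(round(beam_arcmin(150.0, 6.0), 1))  # ~1.4 arcmin
```

The quoted resolution is thus consistent with a diffraction-limited beam at 150 GHz for an aperture of this size.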
\begin{table}
\caption{Approximate areas of the ACTPol survey fields by observing season.}
\begin{center}
\begin{tabular}{ l | c | c | c | c }
 & deep56 & BOSS-N & deep8 & deep9 \\
\hline
Season 2 (2014) & 700 sq. deg. & 2000 sq. deg. & - & - \\
\hline
Season 3 (2015) & 700 sq. deg. & 2000 sq. deg. & 190 sq. deg. & 700 sq. deg. \\
\hline
\end{tabular}
\end{center}
\end{table}
[arXiv:1607.02120, July 2016]
We present an analysis of the first {\it Kepler} K2 mission observations of a rapidly oscillating Ap (roAp) star, HD\,24355 ($V=9.65$). The star was discovered in SuperWASP broadband photometry with a frequency of 224.31\,\cd\, (2596.18\,$\muup$Hz; $P = 6.4$\,min) and an amplitude of 1.51\,mmag, with later spectroscopic analysis of low-resolution spectra showing HD\,24355 to be an A5\,Vp\,SrEu star. The high precision K2 data allow us to identify 13 rotationally split sidelobes to the main pulsation frequency of HD\,24355. This number of sidelobes combined with an unusual rotational phase variation shows this star to be the most distorted quadrupole roAp pulsator yet observed. In modelling this star, we are able to reproduce well the amplitude modulation of the pulsation, and find a close match to the unusual phase variations. We show this star to have a pulsation frequency higher than the critical cut-off frequency. This is currently the only roAp star observed with the {\it Kepler} spacecraft in Short Cadence mode that has a photometric amplitude detectable from the ground, thus allowing comparison between the mmag amplitude ground-based targets and the $\muup$mag space-based discoveries. No further pulsation modes are identified in the K2 data, showing this star to be a single-mode pulsator.
\label{sec:intro} The rapidly oscillating Ap (roAp) stars are a rare subclass of the chemically peculiar, magnetic, Ap stars. They show pulsations in the range of $6-23$\,min with amplitudes up to 18\,mmag in Johnson $B$ \citep{holdsworth15}, and are found at the base of the classical instability strip on the Hertzsprung-Russell (HR) diagram, from the zero-age main-sequence to the terminal-age main-sequence in luminosity. Since their discovery by \citet{kurtz82}, only 61 of these objects have been identified (see \citealt{smalley15} for a catalogue). The pulsations are high-overtone pressure modes (p~modes) thought to be driven by the $\kappa$-mechanism acting in the H\,{\sc{i}} ionisation zone \citep{balmforth01}. However, \citet{cunha13} have shown that turbulent pressure in the convective zone may excite some of the modes seen in a selection of roAp stars. The pulsation axis of these stars is inclined to the rotation axis, and closely aligned with the magnetic one, leading to the oblique pulsator model \citep{kurtz82,ss85a,ss85b,dg85,st93,ts94,ts95,bigot02,bigot11}. Oblique pulsation allows the pulsation modes to be viewed from varying aspects over the rotation cycle of the star, giving constraints on the pulsation geometry that are not available for any other type of pulsating star (other than the Sun, which is uniquely resolved). The mean magnetic field modulus in Ap stars is strong, of the order of a few kG to 34\,kG \citep{babcock60}. A strong magnetic field suppresses convection and provides stability to allow radiative levitation of some elements -- most spectacularly singly and doubly ionised rare earth elements -- producing a stratified atmosphere with surface inhomogeneities. These inhomogeneities, or spots, are long lasting (decades in many known cases) on the surface of Ap stars, thus allowing for an accurate determination of the rotation period of the star. 
In the spots rare earth elements such as La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy and Ho, may be overabundant by up to a million times the solar value, leading to spectral line strength variations over the rotation period \citep[e.g.][]{lueftinger10}. Because of the complex atmospheres of the Ap stars, the roAp stars provide the best laboratory, beyond the Sun, to study the interactions between pulsations, rotation, and chemical abundances in the presence of magnetic fields. Early photometric campaigns targeted known Ap stars in the search for oscillations \citep[e.g.][]{martinez91,martinez94}, with later studies using high-resolution spectroscopy to detect line profile variations in Ap stars caused by pulsational velocity shifts \citep[e.g.][]{savanov99,koch01,hatzes04,mkr08,elkin10,elkin11,kochukhov13}. Most recently, the use of the SuperWASP (Wide Angle Search for Planets) ground-based photometric survey led to the identification of 11 roAp stars \citep{holdsworth14a,holdsworth15}. With the launch of the {\it Kepler} space telescope, the ability to probe to $\muup$mag precision has enabled the detection of four roAp stars with amplitudes below the ground-based detection limit: KIC\,8677585 was a known A5p star observed during the 10-d commissioning phase of the {\it Kepler} mission and was shown to pulsate by \citet{balona11a}; KIC\,10483436 \citep{balona11b} and KIC\,10195926 \citep{kurtz11} were identified as roAp stars through analysis of their light curves and subsequent spectra; KIC\,4768731 was identified as an Ap star by \citet{niemczura15} and was later shown to be a roAp star \citep{smalley15}. Finally, {\it Kepler} observations also allowed the analysis of one roAp star, KIC\,7582608, identified in the SuperWASP survey with an amplitude of 1.45\,mmag \citep{holdsworth14b}, albeit in the super-Nyquist regime \citep{murphy13}. 
There is an obvious difference between the roAp stars discovered with ground-based photometry, and those first detected by space-based observations: the amplitudes of the pulsations in the ground-based discoveries are generally in the range $0.5 - 10$\,mmag, whereas the {\it Kepler} observations did not detect variations above 0.5\,mmag. Ground-based observations are usually made in the $B$-band where the pulsation amplitude is greatest for the roAp stars \citep{medupe98}, and {\it Kepler} observations are made in a broadband, essentially white, filter where the amplitudes may be a factor of two to three lower. This accounts for some of the difference between the two groups. Further to this, ground-based observations are limited by white noise from sky-transparency variations in the frequency range in which roAp stars are observed to pulsate, thus affecting the minimum amplitude that can be detected and suggesting this may be an observational bias. However, the question remains as to whether there is a fundamental difference between the two groups. Are the differences in amplitude solely due to selection effects, or do the stars show differences in their abundance anomalies, magnetic field strengths, ages or rotation rates? The observations at $\muup$mag precision of a roAp star discovered with ground-based photometry may begin to provide insight into this disparity. Ground-based projects in the search for transiting exoplanets produce vast amounts of data on millions of stars (e.g. WASP, \citealt{pollacco06}; HATnet, \citealt{bakos04}; ASAS, \citealt{pojmanski97}; OGLE, \citealt{udalski92}; KELT, \citealt{pepper07}). These data can provide an excellent source of information on many thousands of variable stars. Indeed, many of these projects have been employed for that purpose \citep[e.g.][]{pepper08,hartman11,ulaczyk13,holdsworth14a}. 
The ability of these surveys to achieve mmag precision provides an extensive all-sky database in which to search for low-amplitude stellar variability, which can then be observed at much higher precision by space-based missions such as K2 \citep{howell14} and TESS \citep{ricker15}. One of the leading ground-based projects in the search for transiting exoplanets, which provides data for many millions of stars, is the SuperWASP survey. This project is a two-site, wide-field survey, with instruments located at the Observatorio del Roque de los Muchachos on La Palma (WASP-N) and the Sutherland Station of the South African Astronomical Observatory \citep[WASP-S;][]{pollacco06}. Each instrument has a field-of-view of $\sim$64\,deg$^2$, with a pixel size of 13.7\,arcsec. Observations are made through a broadband filter covering a wavelength range of $4000-7000$\,\AA\, and consist of two consecutive $30$-s integrations at a given pointing, with pointings being revisited, typically, every $10$\,min. The data are reduced with a custom reduction pipeline \citep[see][]{pollacco06} resulting in a `WASP $V$' magnitude which is comparable to the Tycho-$2$ $V_t$\, passband. Aperture photometry is performed at stellar positions provided by the USNO-B$1.0$\, input catalogue \citep{monet03} for stars in the magnitude range $5<V<15$. As previously mentioned, one of the space-based missions which ground-based surveys can inform is the K2 mission. After the failure of a second of four reaction wheels, the {\it Kepler} spacecraft could no longer maintain its precise pointing towards the original single field-of-view. The loss of the reaction wheels now means that the spacecraft needs to account for the solar radiation pressure in a new way. This has been achieved by pointing the telescope in the orbital plane. This new configuration requires semi-regular $\sim$5.9-hr drift corrections, as well as momentum dumps through thruster firings every few days \citep{howell14}. 
To avoid sunlight entering the telescope, a new field is selected every approximately 80\,d. Such a procedure has led to the fields being labelled as `Campaigns'. Due to the shutter-less exposures of the {\it Kepler} spacecraft, the pointing drift leads to changes in brightness of an observed star as it moves across the CCD. There exist several routines to perform corrections on a large scale \citep[e.g.][]{vanderburg14,handberg14} which aim to reduce the systematic noise in the light curve and resultant periodogram. With careful reduction of the raw K2 data, the science data gives a photometric precision within a factor of $3-4$ of that of {\it Kepler} for a 12$^{\rm th}$ magnitude G star \citep{howell14}. The re-purposing of the {\it Kepler} spacecraft has opened up a host of new possibilities to observe variable stars at $\muup$mag precision. The changing field-of-view now allows the study of O-type stars \citep{buysschaert15}; the prospect of detecting semi-convection in B-type stars \citep{moravveji15}; observations of variable stars in nearby open clusters \citep{nardiello15}; observations of RR Lyrae stars beyond the Galaxy \citep{molnar15}; and now the first observations, in Short Cadence, of a classical, high-amplitude, roAp star. HD\,24355 is a bright ($V=9.65$) rapidly oscillating Ap star, discovered by \citet{holdsworth14a}. Their data show a pulsation at 224.31\,\cd\, (2596.18\,$\muup$Hz; $P = 6.4$\,min) with an amplitude of 1.51\,mmag\, in the WASP broadband filter. In this paper, we present a detailed spectral classification followed by an in-depth discussion of the SuperWASP discovery data, alongside further ground-based observations. We then present an analysis of the K2 Campaign 4 data.
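The frequency and period figures quoted for HD\,24355 follow from simple unit conversions between cycles per day, microhertz and minutes; the Python sketch below (illustrative only, not part of the original analysis) reproduces them.

```python
# Illustrative unit conversions for the pulsation figures quoted in the
# text (not part of the original analysis).

def cd_to_muHz(nu_cd):
    """Convert a frequency in cycles per day to microhertz."""
    return nu_cd / 86400.0 * 1.0e6

def period_min(nu_cd):
    """Pulsation period in minutes for a frequency in cycles per day."""
    return 24.0 * 60.0 / nu_cd

print(round(cd_to_muHz(224.31), 2))        # 2596.18 muHz, as quoted
print(round(period_min(224.31), 1))        # 6.4 min, as quoted
# Rotational splitting of the sidelobes for P_rot = 27.9158 d:
print(round(cd_to_muHz(1.0 / 27.9158), 3))  # ~0.415 muHz
```

The last line shows the frequency spacing expected between consecutive rotationally split sidelobes for the rotation period determined from the K2 data.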
We have presented the first {\it Kepler} spacecraft observations of the high-amplitude roAp star HD\,24355 alongside a detailed analysis of the ground-based discovery and follow-up photometric data. The K2 data have allowed us to unambiguously determine the rotation period of the star to be $27.9158\pm0.0043$\,d, a parameter which was uncertain when considering the ground-based SuperWASP data alone. Classification dispersion spectra allowed us to classify this star as an A5\,Vp\,SrEu star. Abundances derived from high-resolution spectra show HD\,24355 to be slightly enhanced compared to some other roAp stars when considering the rare earth elements. However, a full and detailed abundance analysis is required to confirm its place amongst the roAp and noAp stars. The high-resolution spectra also allowed us to estimate a mean magnetic field strength of $2.64\pm0.49$\,kG; however, we take this to be an upper limit on the value due to the lack of Zeeman splitting in the spectra, and the method used to derive that value. There is a discrepancy between the $T_{\rm eff}$ of HD\,24355 when using different methods to derive the parameter. Values from the literature, SED fitting, abundance analysis, and line fitting provide a wide range of $T_{\rm eff}$ values. However, we get the best agreement in results using solely the Balmer lines of both the low-resolution and high-resolution spectra, deriving $8200\pm200$\,K, placing HD\,24355 amongst the hotter roAp stars. Analysis of the pulsation mode as detected in the K2 data has shown the characteristic signatures of a roAp pulsator as predicted by the oblique pulsator model \citep{kurtz82,bigot02,bigot11}. The pulsational amplitude is modulated with the rotation period of the star, with the extremes in light variations and amplitude occurring at the same phase, indicative that the magnetic and pulsation poles lie in the same plane. The behaviour of the pulsation phase is not as expected, however. 
The very small phase change at quadrature is a surprise for a quadrupole pulsator. Examples from the literature (cf. Fig.\,\ref{fig:other_phases}) show a clear $\pi$-rad phase change when a different pulsation pole rotates into view, but this is not the case for HD\,24355. Here we see a shift of only $\sim$1\,rad at most. This small `blip' in the phase suggests that HD\,24355 is pulsating in a very distorted mode, the most extreme case yet observed. The rotationally split pulsation has provided us with the amplitudes to test the geometry of the star. As such, we modelled the system following the method of \citet{saio05}. Changing values of the inclination and obliquity angles and the polar magnetic field strength, and searching the parameter space surrounding our observational constraints, we conclude that HD\,24355 is a distorted quadrupolar pulsator, with a magnetic field strength of about 1.4\,kG. The model accurately matches the observed amplitude modulation of the pulsation, and the amplitudes of the rotationally split sidelobes. The pulsational phase variations are a stronger function of the evolution of the star, and as such provide a slightly greater challenge to model. We believe, however, that the model presented is a satisfactory match to the data, given our current observational constraints on the evolutionary stage of the star. We determine that the pulsation seen in HD\,24355 is super-critical, making it the most precisely observed super-critical roAp star to date. The driving mechanism for such a pulsation is currently unknown, thus making HD\,24355 a highly important target in understanding how some roAp stars can pulsate with frequencies well above the critical cutoff frequency.
[arXiv:1607.03853, July 2016]

[arXiv:1607.07369]
Dormant comets in the near-Earth object (NEO) population are thought to be involved in the terrestrial accretion of water and organic materials. Identification of dormant comets is difficult as they are observationally indistinguishable from their asteroidal counterparts; however, they may have produced dust during their final active stages, which is potentially detectable today as weak meteor showers at the Earth. Here we present the result of a reconnaissance survey looking for dormant comets using 13~567~542 meteor orbits measured by the Canadian Meteor Orbit Radar (CMOR). We simulate the dynamical evolution of the hypothetical meteoroid streams originating from 407 near-Earth asteroids in cometary orbits (NEACOs) that resemble the orbital characteristics of Jupiter-family comets (JFCs). Out of the 44 hypothetical showers that are predicted to be detectable by CMOR, we identify 5 positive detections that are statistically unlikely to be chance associations, including 3 previously known associations. This translates to a lower limit to the dormant comet fraction of $2.0\pm1.7\%$ in the NEO population and a dormancy rate of $\sim 10^{-5}~\mathrm{yr^{-1}}$ per comet. The low dormancy rate confirms disruption and dynamical removal as the dominant end state for near-Earth JFCs. We also predict the existence of a significant number of meteoroid streams whose parents have already been disrupted or dynamically removed.
Dormant comets are comets that have depleted their volatiles and are no longer ejecting dust\footnote{We note that the term ``extinct comet'' is also frequently used in the literature. Strictly speaking, ``dormant comet'' is usually associated with comets that only temporarily lose the ability to actively sublimate, while the term ``extinct comet'' usually refers to the cometary nuclei that have permanently lost the ability to sublimate \citep[c.f.][for a more comprehensive discussion]{Weissman2002}. However, in practice, it is difficult to judge whether the comet is temporarily or permanently inactive. In this work we use the general term ``dormant comet'' which can mean either scenario.}. Due to their inactive nature, dormant comets cannot be easily distinguished from their asteroidal counterparts by current observing techniques \citep[e.g.][]{Luu1990b}. As the physical lifetime of a comet is typically shorter than its dynamical lifetime, it is logical that a large number of defunct or dormant comets exist \citep{Wiegert1999, DiSisto2009}. Dormant comets in the near-Earth object (NEO) population are of particular interest, as they can impact the Earth and contribute to the terrestrial accretion of water and organic materials just as active comets do \citep[e.g.][and the references therein]{Hartogh2011f}. It has long been known that the dust produced by Earth-approaching comets can be detected as meteor showers at the Earth \citep[e.g.][]{Schiaparelli1866, Schiaparelli1867}. Dormant comets, though no longer being currently active, may have produced dust during their final active phases, which are potentially still detectable as weak meteor showers. This has significant implications for the investigation of dormant comets, as any cometary features of these objects are otherwise no longer telescopically observable. 
Past asteroid-stream searches have revealed some possible linkages, the most notable being (3200) Phaethon and the Geminids \citep[e.g.][and many others]{Williams1993e, deLeon2010, Jewitt2013} as well as (196256) 2003~EH$_1$ and the Quadrantids \citep{Jenniskens2004, Abedin2015}, both involving meteor showers that are exceptional in terms of activity. However, most showers are weak in activity, making parent identification difficult. Radar was introduced into meteor astronomy in the 1940s and has developed into a powerful meteor observing technique \citep[c.f.][]{Ceplecha1998}. Radar detects meteors through the reflection of transmitted radio pulses from the ionized meteor trail formed during meteor ablation. Radar observations are not limited by weather and/or sunlit conditions and are able to detect very faint meteors. The Canadian Meteor Orbit Radar (CMOR), for example, has recorded about 14 million meteor orbits as of May 2016, which is currently the largest dataset for meteor orbits and hence a powerful tool to investigate weak meteor showers. Efforts to search for dormant comets have been made for several decades. Among the early attempts, \citet{Kresak1979e} discussed the use of the Tisserand parameter \citep{Tisserand1891} as a simple dynamical indicator for the identification of dormant comets. Assuming Jupiter as the perturbing planet, the Tisserand parameter is defined as \begin{equation} T_\mathrm{J} = \frac{a_\mathrm{J}}{a} + 2 \sqrt{\frac{a(1-e^2)}{a_\mathrm{J}}} \cos{i} \end{equation} \noindent where $a_\mathrm{J}$ is the semi-major axis of Jupiter, and $a$, $e$, and $i$ are the semi-major axis, eccentricity, and inclination of the orbital plane of the small body. A small body is considered dynamically comet-like if $T_\mathrm{J}\lesssim3$. An asteroid with $T_\mathrm{J}\lesssim3$ is classified as an asteroid in cometary orbit (ACO). 
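The Tisserand parameter defined above is straightforward to evaluate from osculating orbital elements; the sketch below is illustrative only, with Jupiter's semi-major axis $a_\mathrm{J} \approx 5.204$ au assumed here (its numerical value is not quoted in the text).

```python
import math

# Sketch of the Tisserand parameter defined above. Jupiter's semi-major
# axis a_J ~ 5.204 au is an assumed value, not quoted in the text.

def tisserand_J(a, e, i_deg, a_J=5.204):
    """Tisserand parameter w.r.t. Jupiter; a in au, inclination in degrees."""
    return a_J / a + 2.0 * math.sqrt(a * (1.0 - e * e) / a_J) \
        * math.cos(math.radians(i_deg))

def is_ACO(a, e, i_deg):
    """Dynamically comet-like (asteroid in cometary orbit) if T_J <= 3."""
    return tisserand_J(a, e, i_deg) <= 3.0

# A circular, coplanar orbit at Jupiter's own semi-major axis gives T_J = 3:
print(round(tisserand_J(5.204, 0.0, 0.0), 3))  # 3.0
# A typical JFC-like orbit falls below the threshold:
print(round(tisserand_J(3.1, 0.7, 12.0), 2))   # 2.76
```

The second example uses hypothetical JFC-like elements chosen for illustration; any orbit with $T_\mathrm{J}\lesssim3$ would be flagged as an ACO by this criterion.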
Note that not all ACOs are physically dormant comets originating from the Kuiper belt, as a fraction of ACOs might originate from the main asteroid belt \citep[e.g.][]{Binzel2004a}. Separation of main belt interlopers is difficult, but attempts have been made both dynamically \citep[e.g.][]{Fernandez2002ad,Tancredi2014d} and spectroscopically to separate possible cometary nuclei from asteroidal bodies \citep[e.g.][]{Fernandez2005b,DeMeo2008b,Licandro2016}. However, few attempts have been made to link ACOs with meteor showers. \citet{Jenniskens2008} provided a comprehensive review of meteoroid streams possibly associated with dormant comets based on the similarity between their orbits, but a comprehensive contemporary ``cued'' survey to look for all possible weak streams from the large number of recently discovered ACOs/NEOs that may have had weak past activity, including formation of early meteoroid trails, is yet to be performed. In this work, we present a survey for dormant comets in the ACO component in the NEO population through the meteoroid streams they might have produced during their active phase, using the most complete CMOR dataset available to date. The survey is performed in a ``cued search'' manner rather than a commonly-used blind search: we first identify eligible ACOs (i.e. with well-determined orbits suitable for long-term integration) in the NEO population (\S~2), then simulate the formation and evolution of the meteoroid trails produced by such ACOs assuming they have recently been active (\S~3), and then search the CMOR data using the virtual shower characteristics to identify ``real'' streams now visible at the Earth (\S~4). Our survey thus simulates \textit{all} near-Earth ACOs (NEACOs) which are now known and which would have produced meteor showers at the Earth if they were recently active. This approach accounts for orbital evolution of the parent \textit{and} the subsequent evolution of the virtual meteoroid stream.
We conducted a direct survey for dormant comets in the ACO component of the NEO population by looking for meteor activity originating from each of the 407 NEOs as predicted by meteoroid stream models. This sample represents $\sim80\%$ and $\sim 46\%$ of known NEOs in JFC-like orbits in the $H<18$ and $H<22$ populations, respectively. To look for the virtual meteoroid streams predicted by the model, we analyzed 13~567~542 meteoroid orbits measured by the Canadian Meteor Orbit Radar (CMOR) in the interval 2002--2016 using the wavelet technique developed by \citet{Galligan2000} and \citet{Brown2008}, and tested the statistical significance of any detected association using a Monte Carlo subroutine. Among the 407 starting parent bodies, we found 36 virtual showers that would be detectable by CMOR. Of these, we identify 5 positive detections that are statistically unlikely to be chance associations. These include 3 previously known asteroid--stream associations [(196256) 2003 EH$_1$ -- Quadrantids, 2004 TG$_{10}$ -- Taurids, and 2009 WN$_{25}$ -- November i Draconids], 1 new association (2012 BU$_{61}$ -- Daytime $\xi$ Sagittariids) and 1 new outburst detection [(139359) 2001 ME$_1$]. Except for (139359) 2001 ME$_1$, which displayed only a single outburst in 2006, all other shower detections take the form of annual activity. We also examined 32 previously proposed asteroid--shower associations. These associations were first checked with the Monte Carlo subroutine, from which we find that only 8 are statistically significant. Excluding 3 associations that involve observational circumstances unfavorable for CMOR detection (e.g. southerly radiant or low arrival speed), 4 of the remaining 5 associations involve showers that have only been reported by a single study, while the last association [$\psi$ Cassiopeiids -- (5496) 1973 NA] involves some observation--model discrepancy. We leave these questions for future studies.
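The Monte Carlo significance test mentioned above can be caricatured in a few lines; this is a hedged sketch with a stand-in null distribution (`background_draw`), whereas the actual analysis draws on wavelet coefficients of CMOR radiant densities:

```python
import random

def mc_pvalue(observed_stat, background_draw, n_trials=10000, seed=1):
    """Monte Carlo significance: estimate the probability that a chance
    association produces a test statistic at least as extreme as the
    observed one, by repeatedly sampling the null distribution."""
    rng = random.Random(seed)
    exceed = sum(background_draw(rng) >= observed_stat
                 for _ in range(n_trials))
    return exceed / n_trials
```

For a uniform null statistic, an observed value of 0.9 yields $p\approx0.1$; only associations with very small $p$ would be flagged as statistically unlikely chance alignments.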
Based on the results above, we derive a lower limit to the dormant comet fraction of $2.0\pm1.7\%$ among all NEOs, slightly lower than previous estimates derived from dynamical and physical considerations of the parent bodies. This number must be taken with caution, as we assume a median dust production based on \textit{known} JFC comets; the typical dust production of already-dead comets is not truly known. A dormant comet fraction of $\sim 8\%$, as concluded by other studies, would require a characteristic dust production about $10\%$ of the median model. Another caveat is the possibility of overestimating the number of visible showers (hence reducing the derived dormant comet fraction) due to a very steep dust size distribution ($q\gg3.6$), but this is not supported by cometary observations. We also derive a dormancy rate of $\sim 10^{-5}~\mathrm{yr^{-1}}$ per comet, consistent with previous model predictions and significantly lower than the observed and predicted disruption probability. This confirms disruption and dynamical removal as the dominant end states for near-Earth JFCs, while dormancy is relatively uncommon. We predict the existence of a significant number of ``orphan'' meteoroid streams whose parents have been disrupted or dynamically removed. While it is challenging to investigate the formation of these streams in the absence of an observable parent, it might be possible to retrieve some knowledge of the parent from meteor data alone.
1607.07369
1607.08658_arXiv.txt
We present results of an optical search for Cepheid variable stars using the {\it Hubble Space Telescope (HST)} in 19 hosts of Type Ia supernovae (SNe~Ia) and the maser-host galaxy NGC 4258, conducted as part of the SH0ES project (Supernovae and {\ho} for the Equation of State of dark energy). The targets include 9 newly imaged {\snh} using a novel strategy based on a long-pass filter that minimizes the number of {\it HST} orbits required to detect and accurately determine Cepheid properties. We carried out a homogeneous reduction and analysis of all observations, including new universal variability searches in all {\snh}, that yielded a total of {\ncep} variables with well-defined selection criteria, the largest such sample identified outside the Local Group. These objects are used in a companion paper to determine the local value of {\ho} with a total uncertainty of 2.4\%.
} The Cepheid period-luminosity relation (hereafter, PLR) or ``Leavitt Law'' \citep{leavitt12} is one of the most widely used primary distance indicators and has played a central role in many efforts to determine the local expansion rate of the Universe or Hubble constant \citep[\ho;][]{hubble29}. Six decades' worth of efforts on the extragalactic distance scale \citep[summarized in the reviews by][]{madore91,jacoby92} led to $\sigma\!\approx\!10\%$ determinations of this key cosmological parameter by \citet{freedman01} and \citet{sandage06} using the {\it Hubble Space Telescope (HST)}. The discovery of the acceleration of cosmic expansion \citep{riess98,perlmutter99} motivated the continued development of increasingly more robust and precise distance ladders to better constrain the nature of dark energy. Building on the discovery of a large sample of Cepheids in NGC 4258 \citep[N4258;][hereafter M06]{macri06} and the promising geometric distance to this galaxy \citep{herrnstein99}, the SH0ES project (Supernovae and H$_0$ for the Equation of State of dark energy) focused on reducing sources of systematic uncertainty that yielded $\sigma(\textrm{H}_0)=5$\%, 3.3\%, and 2.4\% \citep[][hereafter R09a, R11, and R16, respectively]{riess09a,riess11,riess16}. The most recent of these determinations benefits from many improvements to the distance scale over the past decade, including but not limited to high signal-to-noise ratio (S/N) parallaxes to Milky Way Cepheids \citep{benedict07,riess14,casertano16}, larger samples of Cepheids in the Large Magellanic Cloud (LMC) with homogeneous optical and near-infrared light curves \citep{soszynski08,macri15}, and robust distances to the LMC \citep{pietrzynski13} and \ngal\ \citep{humphreys13}. R16 ties these improvements on the ``first rung'' of the ladder to a sample of 281 Type Ia supernovae ({\snia}) in the Hubble flow through Cepheid-based distances to 19 host galaxies of ``ideal'' {\snia}. 
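The logic of a Cepheid-based distance can be sketched in a few lines; the slope and zero point below are illustrative assumptions for the sketch, not the SH0ES calibration:

```python
import math

# Illustrative PLR coefficients (assumed values for this sketch).
PLR_SLOPE = -2.43   # mag per dex in period
PLR_ZP = -4.05      # absolute magnitude at P = 10 d

def plr_absolute_mag(period_days):
    """Leavitt-law absolute magnitude M(P) = ZP + slope*(log10 P - 1)."""
    return PLR_ZP + PLR_SLOPE * (math.log10(period_days) - 1.0)

def distance_pc(apparent_mag, period_days):
    """Distance from the distance modulus mu = m - M: d = 10**(mu/5 + 1) pc."""
    mu = apparent_mag - plr_absolute_mag(period_days)
    return 10.0 ** (mu / 5.0 + 1.0)
```

With these coefficients, a 10-day Cepheid observed at $m=10.95$ mag has $\mu=15.0$ mag, i.e. a distance of 10 kpc; averaging over many Cepheids per host is what beats down the statistical error.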
The aim of this publication is to present the details of the optical observations, data reduction and analysis, and selection of the Cepheid variables in these {\snh} and the anchor {\ngal}. Near-infrared follow-up observations of these Cepheids are presented in our companion paper (R16). The rest of the paper is organized as follows. \S\ref{sec:obsdata} describes the {\it HST} observations and data reduction. Details of the point-spread function (PSF) photometry and calibration steps are given in \S\ref{sec:psfphot}. In \S\ref{sec:idceph} we discuss the Cepheid search and selection criteria, and in \S\ref{sec:results} we address systematic corrections. Our results are summarized in \S\ref{sec:sum}. \vfill\pagebreak\newpage
\label{sec:sum} We presented the results of a homogeneous search for Cepheids using {\it HST} at optical wavelengths in 19 {\snh} and {\ngal}, one of the anchors of the extragalactic distance scale. Our efforts yielded a sample of {\ncep} variables, the largest to date outside the Local Group. We discussed our methodology for data processing, photometry, variability searches, and identification of Cepheids, as well as the systematic corrections required to enable a determination of {\ho} in our companion publication \citep{riess16}.
1607.08658
1607.01779_arXiv.txt
Using the Atacama Large Millimeter/submillimeter Array, we have made the first high spatial and spectral resolution observations of the molecular gas and dust in the prototypical blue compact dwarf galaxy \iizw. The \cotwo\ and \cothree\ emission is clumpy and distributed throughout the central star-forming region. Only one of eight molecular clouds has associated star formation. The continuum spectral energy distribution is dominated by free-free and synchrotron emission; at 870\micron, only 50\% of the emission is from dust. We derive a \coh\ conversion factor using several methods, including a new method that uses simple photodissociation models and resolved CO line intensity measurements to derive a relationship that uniquely predicts \aco\ for a given metallicity. We find that the \coh\ conversion factor is 4 to 35 times that of the Milky Way (18.1 to 150.5~\MsunKkmspc). The star formation efficiency of the molecular gas is at least 10 times higher than that found in normal spiral galaxies, which is likely due to the burst-dominated star formation history of \iizw\ rather than an intrinsically higher efficiency. The molecular clouds within \iizw\ resemble those in other strongly interacting systems like the Antennae: overall they have high size--linewidth coefficients and molecular gas surface densities. These properties appear to be due to the high molecular gas surface densities produced in this merging system rather than to increased external pressure. Overall, these results paint a picture of \iizw\ as a complex, rapidly evolving system whose molecular gas properties are dominated by the large-scale gas shocks from its ongoing merger.
\label{sec:introduction} The high star formation rate surface densities and low metallicities found in blue compact dwarf galaxies represent one of the most extreme environments for star formation in the local universe, one more akin to that found in high redshift galaxies than in local spirals \citep{2009MNRAS.399.1191C,2011ApJ...728..161I}. These global properties result in increased disruption of the interstellar medium by newly formed young massive stars \citep{2006ApJ...653..361K,2010ApJ...709..191M}, higher and harder radiation fields \citep{2006A&A...446..877M}, and reduced dust content \citep{2013A&A...557A..95R}, all of which may significantly change how the molecular gas within these galaxies transforms into stars. To date, however, the molecular gas fueling the starbursts within blue compact dwarfs remains poorly understood due to the intrinsically faint emission from the most common molecular gas tracers (CO and dust continuum). By quantifying the properties of molecular gas in blue compact dwarfs, we can determine how the physical conditions in these galaxies influence their molecular gas, and thus the formation of young massive stars, as well as gain insight into star formation in high redshift galaxies, where detailed observations are difficult. Previous low-resolution studies of molecular gas in low-metallicity galaxies, including blue compact dwarfs, have shown that these galaxies have extremely high star formation rates compared to their CO luminosity -- a key tracer of the bulk molecular gas -- and that this ratio increases as the metallicity of a galaxy decreases \citep{1998AJ....116.2746T,2012AJ....143..138S}. 
Taken by itself, this trend suggests that low-metallicity galaxies either have increased molecular star formation efficiency (i.e., less molecular gas is necessary to form a given amount of stars) or that CO emission is not as effective a tracer of molecular gas because of the decreased dust shielding and reduced abundance of molecules in low-metallicity environments (i.e., less CO emission for a given amount of molecular gas). Distinguishing between these two scenarios, however, requires higher spatial resolution to directly measure the \coh\ conversion factor (and thus determine the total molecular gas mass) and to link the young massive stars within the galaxy to the giant molecular clouds from which they presumably form (although see \citealp{2012ApJ...759....9K} and \citealp{2012MNRAS.421....9G} for arguments that neutral hydrogen may play an increasing role in star-forming clouds at low metallicity). Resolved giant molecular cloud observations in low-metallicity galaxies have historically been very difficult because of the faint nature of the CO emission in these systems. One of the few resolved studies of giant molecular clouds in a sample of dwarf galaxies showed that the giant molecular clouds in these galaxies have similar sizes, linewidths, and \coh\ conversion factors to those in more massive spiral galaxies like the Milky Way, M33, and M31, which supports the idea of higher star formation efficiencies \citep{2008ApJ...686..948B}. In contrast, estimates of the \coh\ conversion factor from resolved dust observations find systematically higher values for low-metallicity galaxies, suggesting that dwarf galaxies have lower CO luminosities for a given amount of molecular gas \citep{2011ApJ...737...12L}.
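The role of the \coh\ conversion factor in these arguments amounts to a one-line mass estimate; a sketch, where the commonly quoted Milky Way value of 4.3~\MsunKkmspc\ (including helium) and the factors 4--35 quoted in the abstract are the assumed inputs:

```python
ALPHA_CO_MW = 4.3  # Msun (K km s^-1 pc^2)^-1, assumed Milky Way value

def molecular_gas_mass(L_co, alpha_co=ALPHA_CO_MW):
    """Molecular gas mass M_H2 = alpha_CO * L_CO,
    with L_CO in K km s^-1 pc^2."""
    return alpha_co * L_co
```

Scaling the Milky Way value by the factors 4 and 35 recovers the quoted range ($\approx17$ to 150.5~\MsunKkmspc): the same CO luminosity thus implies up to $\sim35$ times more molecular gas in \iizw\ than in a spiral galaxy.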
These two apparently contradictory sets of observations can be reconciled if the reduced dust shielding for CO in low metallicity environments pushes the CO emission to the densest portion of the molecular cloud, while the H$_2$ remains distributed throughout the molecular cloud because it can self-shield. Therefore, the CO observations only trace the central regions of the molecular clouds, while the infrared observations trace the dust, which is well-mixed with the surrounding envelope of molecular gas \citep{2008ApJ...686..948B,2011ApJ...737...12L,2013ApJ...777....5S}. The sensitivity of the previous generation of millimeter interferometers limited the sample in the most comprehensive study to date \citep{2008ApJ...686..948B} to nearby galaxies ($\lesssim 4$ Mpc) with relatively high metallicities; only one galaxy has a metallicity less than 12+log(O/H)=8.2. Therefore, it is not surprising that these authors see little variation in the sizes, linewidths, and \coh\ conversion factors of low metallicity galaxies. Intriguingly, they do see hints that the lowest metallicity galaxy included in their sample (the Small Magellanic Cloud) may deviate from the fiducial trends seen in normal galaxies, although the deviations are relatively weak. This result suggests that expanding resolved molecular gas studies to lower metallicities and more extreme systems than possible with the previous generation of millimeter interferometers may uncover more variations in molecular cloud properties. Fortunately, today we have access to the Atacama Large Millimeter/submillimeter Array (ALMA), whose excellent sensitivity and resolution allow us to do just this. 
The blue compact dwarf galaxy \iizw\ represents a key test case for understanding how the properties of molecular clouds vary with metallicity and star formation rate surface density: this galaxy bridges the gap between ultra-low metallicity ($\sim 1/50 Z_\odot$) starburst galaxies like SBS 0335-052 and I Zw 18 and starbursting galaxies with normal solar metallicities. Although \iizw\ has only a moderate metallicity (12+log(O/H)=8.09; \citealp{2000ApJ...531..776G}), roughly comparable to the SMC, its central star-forming region has an extraordinarily high star formation rate surface density of 520~$\Msun \, {\rm yr^{-1} \, kpc^{-2}}$ \citep{2014AJ....147...43K}, comparable to that found in more massive starburst galaxies. Crucially for our purposes, this galaxy is also only 10~Mpc away \citep{1988cng..book.....T}, which is two to five times closer than other comparable galaxies like I~Zw~18 and SBS0335-052. At this distance, we can resolve the giant molecular cloud fueling the starburst within \iizw\ using only moderate angular resolution -- 0.5\arcsec\ corresponds to 24~pc linear resolution -- allowing us to quantify the resolved properties of its molecular gas. By comparing these properties to those in other starburst galaxies of varying metallicity, we can begin to disentangle the relative effects of metallicity and high star formation rate surface density on the star formation efficiencies and CO luminosities in blue compact dwarf galaxies. In this paper, we present new, high spatial and spectral resolution ALMA observations of the molecular gas and dust content of \iizw. Our goal is to understand how the interplay of intense star formation and low metallicity within this galaxy shapes its molecular gas and dust, and whether dust and molecular gas in this galaxy differs in key ways from that in other metal-rich star-forming galaxies. To do this, we measure the properties of the dust and giant molecular clouds within \iizw. 
We use these measurements to derive the \coh\ conversion factor (\aco) -- which underlies most of what we know about star formation beyond the Local Group -- in \iizw\ and compare it to other observational and theoretical estimates for this factor. Then we compare the molecular cloud properties in \iizw\ to the cloud properties in other systems to see if there are systematic differences in cloud properties with metallicity and/or star formation rate surface density. Finally, we use these observations to place the properties of the molecular gas and star formation within \iizw\ in the larger context of star formation within galaxies.
1607.01779
1607.03898_arXiv.txt
{Very high-energy (VHE) $\gamma$-ray measurements of distant TeV blazars can be nicely explained by TeV spectra induced by ultra high-energy cosmic rays.} {We develop a model for a plausible origin of hard spectra in distant TeV blazars.} {In the model, the TeV emission in distant TeV blazars is dominated by two mixed components. The first is the internal component, with photon energies around 1 TeV, produced by inverse Compton scattering of the relativistic electrons on the synchrotron photons (SSC) with a correction for extragalactic background light absorption; the other is the external component, with photon energies above 1 TeV, produced by the cascade emission from high-energy protons propagating through intergalactic space.} {Assuming suitable model parameters, we apply the model to the observed spectrum of the distant TeV blazar 1ES 0229+200. Our results show that 1) the observed spectral properties of 1ES 0229+200, especially the TeV $\gamma$-ray tail of the observed spectra, can be reproduced in our model and 2) the expected TeV $\gamma$-ray spectrum with photon energies $>$1 TeV of 1ES 0229+200 should be comparable with the 50-hour sensitivity goal of the Cherenkov Telescope Array (CTA) and the differential sensitivity curve for a one-year observation with the Large High Altitude Air Shower Observatory (LHAASO).} {We argue that strong evidence for the Bethe-Heitler cascades along the line of sight as a plausible origin of hard spectra in distant TeV blazars could be obtained from VHE observations with CTA, LHAASO, HAWC, and HiSCORE.}
A blazar is a special class of active galactic nucleus (AGN) with non-thermal continuum emission that arises from a jet whose axis is closely aligned with the observer's line of sight (Urry \& Padovani 1995). Blazars are dominated by rapid and large-amplitude variability (e.g., Raiteri et al. 2012; Sobolewska et al. 2014). Multi-wavelength observations show that their broad spectral energy distributions (SEDs) from the radio to the $\gamma$-ray bands generally exhibit two humps, indicating two components. It is generally accepted that the low-energy component, which extends from the radio up to the ultraviolet, or in some extreme cases to a few keV X-rays (Costamante et al. 2001), is produced by synchrotron radiation from relativistic electrons in the jet (Urry 1998), though the origin of the high-energy component, which covers the X-ray and $\gamma$-ray energy regime, remains an open issue. There are two kinds of theoretical models describing the high-energy photon emission in these blazars, the leptonic and the hadronic models. In the leptonic model scenarios, the high-energy component is probably produced by inverse Compton (IC) scattering of the relativistic electrons either on the synchrotron photons (e.g., Maraschi et al. 1992; Bloom \& Marscher 1996; Mastichiadis \& Kirk 1997; Konopelko et al. 2003) and/or on some other photon populations (e.g., Dermer et al. 1992; Dermer \& Schlickeiser 1993; Sikora et al. 1994; Ghisellini \& Madau 1996; B$\rm \ddot{o}$ttcher \& Dermer 1998). In contrast, the hadronic model argues that high-energy $\gamma$ rays are produced either by proton synchrotron radiation in sufficiently strong magnetic fields (Aharonian 2000; M$\ddot{\rm u}$cke \& Protheroe 2001; M$\ddot{\rm u}$cke et al.
2003; Petropoulou 2014), or by mesons and leptons through cascades initiated by proton-proton or proton-photon interactions (e.g., Mannheim \& Biermann 1992; Mannheim 1993; Pohl \& Schlickeiser 2000; Atoyan \& Dermer 2001). The imaging atmospheric Cherenkov telescopes (IACTs) have so far detected about 50 very high-energy (VHE; $E_{\gamma}>$100 GeV) $\gamma$-ray blazars with redshifts up to $z\sim 0.6$\footnote{http://tevcat.uchicago.edu}. It is believed that the primary TeV photons propagating through intergalactic space should be attenuated by their interactions with the extragalactic background light (EBL), producing electron-positron ($e^{\pm}$) pairs (e.g., Nikishov 1962; Gould \& Schreder 1966; Stecker et al. 1992; Ackermann et al. 2012; Abramowski et al. 2013; Dwek \& Krennrich 2013; Sanchez et al. 2013). However, the observed spectra of distant blazars do not show a sharp cutoff at energies around 1 TeV, which would be expected from simple $\gamma$-ray emission models with a correction for EBL absorption (e.g., Stecker et al. 2006; Aharonian et al. 2006a; Costamante et al. 2008; Acciari et al. 2009; Abramowski et al. 2012). Excluding large uncertainties in the measured redshifts and spectral indices (Costamante 2013) and excluding lower levels of the EBL (Aharonian et al. 2006b; Mazin \& Raue 2007; Finke \& Razzaque 2009), the observed spectral hardening requires either axion-like particles (de Angelis et al. 2007; Simet et al. 2008; Sanchez-Conde et al. 2009) or Lorentz invariance violation (Kifune 1999; Protheroe \& Meyer 2000). Alternatively, given that AGN jets are believed to be among the most powerful sources of cosmic rays, as long as the intergalactic magnetic fields (IGMF) deep in the voids are weaker than a femtogauss, point images of distant blazars, produced by the interaction of high-energy protons with background photons along the line of sight, should be observed by IACTs (Essey et al. 2011a).
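The EBL attenuation at the heart of this tension is simple to state; a minimal sketch, where $\tau(E_\gamma, z)$ is treated as a given number (it must come from an EBL model), together with an illustrative helper for the effective spectral softening:

```python
import math

def attenuated_flux(intrinsic_flux, tau):
    """Primary gamma-ray flux after EBL absorption:
    F_obs = F_int * exp(-tau)."""
    return intrinsic_flux * math.exp(-tau)

def observed_index_softening(gamma_int, tau_lo, tau_hi, e_lo, e_hi):
    """Effective photon index between energies e_lo and e_hi when the
    optical depth grows from tau_lo to tau_hi, for an intrinsic power
    law dN/dE ~ E**(-gamma_int): Gamma_obs ~ Gamma_int + dtau/dlnE."""
    return gamma_int + (tau_hi - tau_lo) / math.log(e_hi / e_lo)
```

For example, an intrinsic index of 1.5 plus one extra e-folding of optical depth per $\ln 10$ in energy already yields an observed index of 2.5, illustrating why EBL-corrected leptonic spectra tend to be soft.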
In this scenario, the hard TeV spectra can be produced by the cascade emission from high-energy protons propagating through intergalactic space (Essey \& Kusenko 2010; Essey et al. 2010; 2011b; Razzaque et al. 2012; Aharonian et al. 2013; Takami et al. 2013; Zheng \& Kang 2013). In this paper, we study the possible TeV emission of distant TeV blazars. We argue that the TeV emission in distant TeV blazars is dominated by two components: the internal component, with photon energies around 1 TeV, produced by IC scattering of the relativistic electrons on the synchrotron photons (SSC) with a correction for EBL absorption, and the external component, with photon energies above 1 TeV, produced by the cascade emission from high-energy protons propagating through intergalactic space. Generally, the external photons are generated in two types of photohadronic interaction processes along the line of sight. In the first, proton interactions with cosmic microwave background (CMB) photons produce $e^{\pm}$ pairs, and the pairs give rise to electromagnetic cascades; this is the Bethe-Heitler pair production ($pe$) process. In the second, proton interactions with EBL photons produce pions, whose decay is accompanied by photons; this is the photopion production ($p\pi$) process. Although the $pe$ process contribution to the production of secondary photons has been illustrated at the source (Dimitrakoudis et al. 2012; Murase 2012; Murase et al. 2012; Petropoulou 2014; Petropoulou \& Mastichiadis 2015), the high-energy astrophysical interest has focused on the $p\pi$ process along the line of sight because the $pe$ process is not associated with any neutrinos or neutrons (e.g., Inoue et al. 2013; Kalashev et al. 2013). The aim of the present work is to study in more detail the contribution of pairs injected by the $pe$ process along the line of sight to the TeV spectra of distant blazars.
Throughout the paper, we assume the Hubble constant $H_{0}=75$ km s$^{-1}$ Mpc$^{-1}$, the matter energy density $\Omega_{\rm M}=0.27$, the radiation energy density $\Omega_{\rm r}=0$, and the dimensionless cosmological constant $\Omega_{\Lambda}=0.73$.
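For these parameters, distances follow from the standard flat-universe integral; a sketch using simple trapezoidal integration (the radiation term is dropped since $\Omega_{\rm r}=0$):

```python
import math

C_KM_S = 299792.458                        # speed of light [km/s]
H0, OMEGA_M, OMEGA_L = 75.0, 0.27, 0.73    # values adopted in this paper

def luminosity_distance_mpc(z, n=20000):
    """d_L = (1+z) * (c/H0) * int_0^z dz'/E(z') for a flat universe,
    with E(z) = sqrt(Omega_M (1+z)^3 + Omega_Lambda);
    trapezoidal rule with n steps."""
    inv_E = lambda zp: 1.0 / math.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L)
    dz = z / n
    s = 0.5 * (inv_E(0.0) + inv_E(z)) + sum(inv_E(i * dz) for i in range(1, n))
    return (1.0 + z) * (C_KM_S / H0) * s * dz
```

For the redshift of 1ES 0229+200 ($z\approx0.14$) this gives a luminosity distance of roughly 620 Mpc in this cosmology.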
\label{sec:discussion} As an open issue, very high-energy $\gamma$-ray measurements of distant TeV blazars can be explained by TeV spectra induced by ultra high-energy cosmic rays (Essey \& Kusenko 2010; Essey et al. 2010; 2011b; Murase 2012; Murase et al. 2012; Razzaque et al. 2012; Takami et al. 2013; Zheng et al. 2013). In this paper, we develop a model for the possible TeV emission of distant TeV blazars. The aim of the present work is to study in greater detail the contribution of pairs injected by the $pe$ process along the line of sight to the TeV spectra of distant blazars. In the model, the TeV emission in distant TeV blazars is dominated by two mixed components: the internal component, with photon energies around 1 TeV, produced by IC scattering of the relativistic electrons on the synchrotron photons (SSC) with a correction for EBL absorption, and the external component, with photon energies above 1 TeV, produced by the cascade emission from high-energy protons propagating through intergalactic space. Assuming suitable model parameters, we apply the model to the observed spectrum of the distant TeV blazar 1ES 0229+200. Our results show that 1) the observed spectral properties of 1ES 0229+200, especially the TeV $\gamma$-ray tail of the observed spectra, can be reproduced in our model and 2) the expected TeV $\gamma$-ray spectrum with photon energies $>$1 TeV of 1ES 0229+200 should be comparable with the 50-hour sensitivity goal of the CTA and the differential sensitivity curve for a one-year observation with LHAASO. We argue that strong evidence for the Bethe-Heitler cascades along the line of sight as a plausible origin of hard spectra in distant TeV blazars could be obtained from VHE observations with CTA and LHAASO. The present work differs from earlier studies that assume that the pair cascade process induced by ultra high-energy cosmic rays occurs at the source (e.g., Murase 2012; Murase et al.
2012; Petropoulou \& Mastichiadis 2015). We concentrate on protons with energies below the GZK cutoff, which can propagate over cosmological distances. We argue that the outflows of the jets from AGNs are likely to contain coherent magnetic fields aligned with the jet, so that the accelerated protons remain within the scope of the initial jet rather than being deflected. Since the $pe$ process takes place outside the galaxy clusters of both the observer and the source, the cluster magnetic fields are irrelevant to this issue. Although we expect larger fields in the filaments and walls, only the IGMF present deep in the voids along the line of sight is important (Essey \& Kusenko 2010). Within the host galaxy, the propagation directions of the protons could be changed by the galactic magnetic fields, but the broadening of the image due to deflections there should be less than $\Delta\theta_{max}\sim r/D_{source}$ (Essey et al. 2010), where $r$ is the size of the host galaxy and $D_{source}$ is the distance to the host galaxy. Furthermore, the possible thin walls of magnetic fields that might intersect the line of sight cannot cause a deflection of more than $\Delta\theta\sim h/D_{wall}$ (Essey et al. 2010), where $h$ is the wall thickness and $D_{wall}$ is the distance to the wall. The model also does not take into account either the $\gamma$-ray photon spectrum or the pair cascade from the decay of the pions $\pi^{0}$, $\pi^{+}$, and $\pi^{-}$. We note that the characteristic energy of both the decay-induced photons, $E_{\gamma}\sim0.1E_{p}>10^{3}$ TeV, and of the photons from the pair cascade following pion decay, $E_{\gamma}\sim(0.05E_{p}/m_{e}c^{2})^{2}E_{CMB}>10^{4}$ TeV, with IGMF $B_{IG}=10^{-15}$ G and correlation length of the random fields $l_{c}=1$ Mpc, is far from the TeV energy band. It is noted that small photon indices are not easy to achieve in traditional leptonic scenarios, although the stochastic acceleration model (Lefa et al.
2011) and the leptohadronic model (Cerruti et al. 2015) can also explain the spectral hardening of TeV blazars, because radiative cooling tends to produce particle energy distributions that are always steeper than $E^{-2}$. Such a distribution results in a TeV photon index $\Gamma_{TeV,int}\geq1.5$, and even steeper at VHE due to the Klein-Nishina suppression of the cross-section (e.g., Chiang \& B$\rm \ddot{o}$ttcher 2002). Even when the absorption effect of the lowest-level EBL is used, the emitted spectra still tend to be steeper, with an observed photon index $\Gamma_{TeV,obs}\geq 2.5$ (e.g., Aharonian et al. 2007; Dwek \& Krennrich 2012). Because the Bethe-Heitler pair-creation rate is smoothed in the model, we argue that the spectral shape of the external component is not very sensitive to the proton injection spectrum (Essey et al. 2010); it is determined primarily by the spectrum of the CMB photons and the Bethe-Heitler pair energy-loss process, which results in hard TeV photon spectra. Alternatively, the secondary $e^{\pm}$ pairs that are produced by $\gamma+\gamma\to e^{+}+e^{-}$ pair creation generate a new $\gamma$-ray component through IC scattering of these $e^{\pm}$ pairs on target photons of the CMB, initiating an electromagnetic cascade if the produced $\gamma$ ray is subsequently absorbed (e.g., Dai et al. 2002; Fan et al. 2004; Yang et al. 2008; Neronov et al. 2012). In order to reproduce a hard spectrum, we include an external $\gamma$-ray component with photon energies around $\sim10-100$ TeV. Using the photon energy of the external $\gamma$-ray component, we can estimate the boosted energy through the IC process as $E_{\gamma}\sim\gamma_{e^{\pm}}^{2}\epsilon_{CMB}\sim0.01-1.0$ TeV. In this view, the resultant secondary photons should contribute to the TeV $\gamma$-ray flux, and we expect to find a complex spectrum around 1 TeV.
However, ultra high-energy cosmic rays with $E_{p}\sim10^{19}$ eV have energy-loss paths $\lambda_{p\gamma,e}\sim$1 Gpc for Bethe-Heitler pair production, whereas 10-100 TeV $\gamma$-rays travel only $\lambda_{\gamma,eff}\sim$3-200 Mpc before being absorbed by $\gamma\gamma$ pair production. Given that we assume a cascade emission region $D\sim\lambda_{p\gamma,e}$, a source at a redshift of $z<1.0$ allows the Bethe-Heitler pairs to be injected and to cascade to such energies far from the source. Thus, some TeV photons can reach the Earth before being attenuated even when $\tau(E_{\gamma}, z)\gg1$, without any spectral shape transformation (Takami et al. 2013). On the other hand, although we focus on spectral information for the possible TeV emission of distant blazars, variability is another important clue. Murase et al. (2012) argue that the cascade components would not have short variability timescales, since the shortest timescales are $\sim1.0(E_{\gamma}/10~\rm GeV)^{-2}(B_{IG}/10^{-18}~\rm G)^{2}$ yr in the $\gamma$-ray induced cascade case and $\sim10(E_{\gamma}/10~\rm GeV)^{-2}(B_{IG}/10^{-18}~\rm G)^{2}$ yr in the ultra high-energy cosmic-ray induced cascade case. This suggests that strong variability is possible in the $\gamma$-ray induced cascade case, which means that the cascaded $\gamma$-rays should be regarded as a mixture of attenuated and cascade components, and the cascade component could then be suppressed. Instead, in the ultra high-energy cosmic-ray induced cascade case, no variability would be found (Takami et al. 2013). The lack of strong evidence for variability in 1ES 0229+200 above the HESS energy band (Aharonian et al. 2007; Aliu et al. 2014) suggests that the ultra high-energy cosmic-ray induced cascade component might dominate the TeV $\gamma$-ray spectrum.
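The variability argument can be restated numerically; a sketch that simply encodes the two timescale scalings quoted above (normalizations as given in the text):

```python
def cascade_delay_yr(E_gamma_GeV, B_IG_gauss, uhecr_induced=False):
    """Characteristic cascade time delay, following the scalings quoted
    in the text: ~1 yr (gamma-ray induced) or ~10 yr (UHECR induced)
    at E_gamma = 10 GeV and B_IG = 1e-18 G."""
    t0 = 10.0 if uhecr_induced else 1.0
    return t0 * (E_gamma_GeV / 10.0) ** -2 * (B_IG_gauss / 1e-18) ** 2
```

For the $B_{IG}=10^{-15}$ G field adopted in this paper, the UHECR-induced delay at 10 GeV is $10\times(10^{3})^{2}=10^{7}$ yr, washing out any short-term variability, which is consistent with the steady TeV emission of 1ES 0229+200.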
In our scenario, the TeV emission of distant TeV blazars is dominated by two components; the external TeV photons can largely compensate for the EBL attenuation, leaving a hard spectrum up to the 100 TeV energy band. Clearly, the jet power in protons plays an important role in determining the emission intensity. In our results, in order to reproduce the TeV emission of 1ES 0229+200, we adopt a proton jet power $L_{p}=0.3\times10^{46}~\rm erg~s^{-1}$. It is well known that the jets of AGN are powered by the accretion of matter onto a central black hole (e.g., Urry \& Padovani 1995). On the assumption that the radiation escapes isotropically from the black hole, balancing the gravitational and radiation forces leads to the maximum possible accretion luminosity $L_{edd}\sim 1.26\times10^{38}M/M_{\odot}~\rm erg~s^{-1}$ (e.g., Dermer \& Menon 2009). When the total emission of an AGN is not super-Eddington, the Eddington luminosity is the maximum power available for the two jets, $P_{jet} \leq L_{edd}/2$. In this view, the required proton power can easily be provided by a source with mass $M=1.44\times10^{9}M_{\odot}$ (Wagner 2008). A potential drawback of the model is that the shape of the model spectra in the 1-10 TeV energy range depends strongly on the level of the EBL. As a check, we examined how the shape of the model spectra depends on the adopted EBL model. On the basis of the model results, we argue that the predicted TeV spectral properties of the above-mentioned model should be testable in the near future, since in the secondary-emission scenario CTA is expected to detect more than 80 TeV blazars above the 1 TeV energy band (Inoue et al. 2013). We note that neutrinos are expected from $p\gamma$ interactions, which would give the model a clearer predictive character testable with IceCube observations; the $pe$ process, however, does not produce any neutrinos. We defer this possibility to future work.
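The Eddington argument above is simple arithmetic and can be checked explicitly; the snippet below uses only numbers quoted in the text (the $1.26\times10^{38}$ erg/s per solar mass normalization is the standard one).

```python
# Check that the adopted proton jet power is sub-Eddington for the quoted
# black-hole mass of 1ES 0229+200 (M = 1.44e9 M_sun, Wagner 2008).

def eddington_luminosity(m_bh_msun):
    """Eddington luminosity in erg/s for a black-hole mass in solar masses."""
    return 1.26e38 * m_bh_msun

M_BH = 1.44e9     # black-hole mass [M_sun]
L_P = 0.3e46      # adopted proton jet power [erg/s]

l_edd = eddington_luminosity(M_BH)
print(f"L_Edd = {l_edd:.2e} erg/s, L_Edd/2 = {l_edd/2:.2e} erg/s, "
      f"L_p / (L_Edd/2) = {L_P / (l_edd/2):.3f}")
```

The adopted jet power comes out at a few per cent of $L_{edd}/2$, consistent with the claim that it is easily provided.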
Although our model focuses on the contribution of pairs injected along the line of sight by the $pe$ process to the TeV spectra, the $\gamma$-ray-induced cascade is another important scenario for the possible TeV emission in distant blazars (e.g., Vovk et al. 2012; Takami et al. 2013). We leave this possibility to future observations with CTA (Actis et al. 2011), LHAASO (Cao 2010; Cui et al. 2014), HAWC (Sandoval et al. 2009), and HiSCORE (Hampf et al. 2011).
[arXiv:1607.03898, July 2016]
[arXiv:1607.01065, July 2016]
{We investigate the characterization of highly ionized outflows in Seyfert galaxies, the so-called warm absorbers, using spectral-timing analysis. Here, we present our results on the extensive $\sim$600 ks of XMM-Newton archival observations of the bright and highly variable Seyfert 1 galaxy NGC 4051, whose spectrum has revealed a complex multicomponent wind. Making use of both RGS and EPIC-pn data, we performed a detailed analysis through a time-dependent photoionization code in combination with spectral and Fourier spectral-timing techniques. The source light curves and the warm-absorber parameters obtained from the data were used to simulate the response of the gas to variations in the ionizing flux of the central source. The resulting time-variable spectra were employed to predict the effects of the warm absorber on the time lags and coherence of the energy-dependent light curves. We have found that, in the absence of any other lag mechanisms, a warm absorber with the characteristics of the one observed in NGC 4051 is able to produce soft lags of up to 100 s on timescales of $\sim \text{hours}$. The time delay is associated with the response of the gas to changes in the ionizing source, either by photoionization or radiative recombination, which depends on its density. The range of radial distances that, under our assumptions, yields the longest time delays is $r\sim0.3-1.0 \times 10^{16}$ cm, corresponding to gas densities $n\sim0.4-3.0\times10^{7}\ \text{cm}^{-3}$. Since these ranges are comparable to the existing estimates of the location of the warm absorber in NGC 4051, we suggest that the observed X-ray time lags likely carry a signature of the warm-absorber response time to changes in the ionizing continuum.
Our results show that the warm absorber in NGC 4051 does not introduce lags on the short timescales associated with reverberation, but will likely modify the hard continuum lags seen on longer timescales, which in this source have been measured to be on the order of $\sim 50$ s. Hence, these results highlight the importance of understanding the contribution of the warm absorber to the AGN X-ray time lags, since it is also vital information for interpreting the lags associated with propagation and reverberation effects in the inner emitting regions.}
Active galactic nuclei (AGN) are powered by accretion onto a supermassive black hole ($10^{6}-10^{9}\ \text{M}_\odot$). Outflowing events are often also associated with AGN. This ejection of matter and energy, if powerful enough, may affect the surrounding environment of the AGN and even disturb the evolution of the host galaxy or the cluster hosting the AGN, a phenomenon often termed AGN feedback \citep[e.g.][]{dimatteo2005,hardcastle2007,fabian2012,crenshaw2012}. The impact of the outflows on the surrounding environment depends on their distance from the central source and on the column density of the outflowing gas, and is highly dependent on the outflow velocities. Hence, it is crucial to investigate the physical properties of the gas in order to assess its importance for feedback. However, while the column density and the outflow velocity of the gas are inferred from observations, it is not possible to directly estimate the distance of the gas from the central source. \par In the case of Seyfert 1 galaxies, $60\%$ show the presence of highly ionized outflowing absorbing gas, rich in metals \citep{crenshaw1999}. The outflowing material, usually called a warm absorber, is remarkably complex in its structure, spanning a wide range in ionization parameter, outflow velocity, and column density. These outflows have been detected both in the UV and in the X-ray spectra of Seyfert 1 galaxies, through a composite set of absorption lines \citep[for a review see][]{crenshaw2003}. Determining the radial location of the outflows yields valuable information for the study of AGN feedback. Yet characterizing the spatial location of the warm absorbers is not trivial. A common approach to determine the distance of the absorber from the central source is to measure the density $n$ of the gas using density-sensitive absorption lines and to derive the distance $r$ through the ionization parameter $\xi$, where $\xi=L_\text{ion}/nr^{2}$ \citep[e.g.][]{kraemer2006,arav2008}.
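The last relation can be inverted to estimate the distance once a density is known, $r=\sqrt{L_\text{ion}/(n\xi)}$. The sketch below illustrates this inversion with placeholder values; the ionizing luminosity and $\xi$ used are illustrative assumptions, not measurements from this paper.

```python
import math

# Sketch of the density-to-distance conversion via the ionization parameter:
#   xi = L_ion / (n * r^2)   =>   r = sqrt(L_ion / (n * xi)).
# Input values below are illustrative placeholders.

def radial_distance_cm(l_ion_erg_s, n_cm3, xi_erg_cm_s):
    """Radial distance of the absorber from the ionizing source, in cm."""
    return math.sqrt(l_ion_erg_s / (n_cm3 * xi_erg_cm_s))

# e.g. an assumed ionizing luminosity of 1e41 erg/s, n = 3e7 cm^-3, xi = 30:
r = radial_distance_cm(1e41, 3e7, 30.0)
print(f"r ~ {r:.2e} cm")
```

With these placeholder inputs the result lands around $10^{16}$ cm, i.e. in the same ballpark as the distances discussed in the abstract.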
This method is usually successful for UV data, where such lines are more commonly found, but it is not very effective for studying warm absorbers in the X-rays, where the instrumental sensitivity necessary for such measurements is lacking \citep[e.g.][]{kaastra2004}. Alternatively, monitoring the response of the gas to changes in the ionizing continuum leads to an estimate of the recombination timescale, which is a function of the electron density. This approach has been applied through time-resolved spectroscopy studies and time-dependent photoionization models \citep{behar2003,reeves2004,krongold2007,steenbrugge2009,kaastra2012}. Time-resolved spectroscopy in the X-rays suffers from the problem that the involved timescales may be on the order of minutes to hours, which yields limited photon counts per time and energy bin. The low signal-to-noise issue can be avoided by studying the statistical properties of variability instead, through the use of Fourier spectral-timing techniques. \par Fourier spectral-timing techniques have been applied to the study of AGN X-ray light curves for more than a decade. It has been found in many sources that soft and hard X-ray photons behave differently on different timescales. On long timescales, the hard photons arrive with a time delay compared to the soft photons. Conversely, on short timescales the soft photons lag behind the hard photons. The complex timescale-dependent lags are associated with different physical processes, and disentangling them is essential to understand the innermost emitting regions in AGN. The soft lag associated with short timescales has been explained through reverberation from reflected emission \citep{fabian2009}. This scenario is supported by the recently found Fe K$\alpha$ lags \citep{zoghbi2012,kara2013}.
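The recombination timescale underlying this approach scales inversely with electron density, $t_{\rm rec}\sim1/(\alpha\,n_{e})$. The sketch below illustrates only this scaling; the recombination-rate coefficient $\alpha$ is an assumed order-of-magnitude value, not one fitted to NGC 4051.

```python
# Rough recombination-timescale sketch: t_rec ~ 1 / (alpha * n_e).
# The delay with which the gas responds to a drop in the ionizing flux
# scales inversely with electron density.

ALPHA_CM3_S = 2e-12    # assumed recombination coefficient [cm^3/s], illustrative

def t_rec_s(n_e_cm3, alpha=ALPHA_CM3_S):
    """Recombination timescale in seconds for electron density n_e [cm^-3]."""
    return 1.0 / (alpha * n_e_cm3)

for n_e in (1e6, 1e7, 1e8):
    print(f"n_e = {n_e:.0e} cm^-3 -> t_rec ~ {t_rec_s(n_e) / 3600.0:.1f} h")
```

For densities around $10^{7}$-$10^{8}$ cm$^{-3}$ this gives response times of order hours, matching the minutes-to-hours regime quoted in the text.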
Furthermore, it has been suggested that fluctuations in the accretion flow propagate inwards, so that the outermost soft X-rays respond faster than the innermost hard X-rays, which could explain the hard lag seen on long timescales \citep{kotov2001,arevalo2006}. Most of the AGN with such timing properties are Seyfert 1 galaxies. Since most Seyfert 1 galaxies show the presence of a warm absorber, our goal in this paper is to explore the possible contribution of the warm absorber to the observed X-ray time lags, due to the delayed response of the gas to the continuum variations. \par In this work we expand on the method of \cite{kaastra2012}, using a time-dependent photoionization model to examine the response of the gas to changes in the ionizing continuum. NGC 4051, a relatively nearby Seyfert 1 galaxy ($z\approx0.002$), is an ideal candidate for this study. This source is not only bright (with a luminosity in the 2-10 keV band of $L_\text{X}\sim2.1\times10^{41}\ \text{erg}\ \text{s}^{-1}$ and a corresponding observed flux of $f_\text{X}\sim1.6\times10^{-11}\ \text{erg}\ \text{s}^{-1}\ \text{cm}^{-2}$ in the data used here), but also highly variable \citep{mchardy2004}, and it has been associated with a complex multicomponent warm absorber \citep{steenbrugge2009,pounds2011_1}. NGC 4051 also shows the characteristic X-ray time lags found in other AGN \citep{alston2013}, with a hard lag at low frequencies and a soft lag at high frequencies. Furthermore, NGC 4051 has an extensive series of XMM-Newton archival observations. In section \ref{section2}, we present the methods used in the reduction and processing of the raw data products, as well as the procedures used to extract the light curves and spectra. In section \ref{section3}, we simulate the response of the complex warm absorber observed in the energy spectra of NGC 4051 to changes in the luminosity of the central source, and its dependence on radial distance and density.
In section \ref{section4} we further investigate whether a non-equilibrium gas phase, which results in a delayed response of the gas to the variations of the central source, could produce a time delay of the most absorbed X-ray bands relative to the broad X-ray ionizing continuum. We finally compare our simulations to the real data in section \ref{section5} and present our conclusions in section \ref{section6}.\par Throughout this work we use a flat cosmological model with $\Omega_{m}=0.3$, $\Omega_{\Lambda}=0.7$ and $\Omega_{r}=0.0$, together with a Hubble constant $\text{H}_{0}=70\ \text{km}\ \text{s}^{-1}\ \text{Mpc}^{-1}$. For the spectral modelling in this paper we have assumed a Galactic column density of $N_\text{H}=1.15\times10^{20}\ \text{cm}^{-2}$ \citep{kalberla2005}. The errors quoted in this paper are $1\sigma$ errors, unless otherwise stated.
\label{section6} Working simultaneously with RGS and EPIC-pn data, we performed a detailed analysis using a time-dependent photoionization code in combination with spectral and Fourier spectral-timing techniques. We applied this method to the extensive XMM-Newton archival observations of the bright and highly variable Seyfert 1 galaxy NGC 4051, whose spectrum has revealed a complex multicomponent wind. As a result, we have shown that warm absorbers have the potential to introduce time lags between the most highly absorbed bands and the continuum, for a certain range of gas densities and/or distances. The time delay is produced by the response of the gas to changes in the ionizing source, either by photoionization or radiative recombination.\par We found that, in the absence of any other lag mechanisms, a soft (negative) lag, on the order of $\sim$ 100 s, is detected when computing the spectral-timing products between the more absorbed energy bands from simulated RGS spectra and a broad continuum band from simulated EPIC-pn spectra, on timescales of hours. Furthermore, we also found that the absorbing gas can likewise produce a time delay between the broader soft and hard bands, both belonging to simulations of EPIC-pn spectra. This happens because the soft band is generally more absorbed, so that even without the higher spectral resolution provided by the RGS we can see the soft band lagging behind the hard band on long timescales. A direct consequence of our results is that understanding the contribution of the recombining gas to the X-ray lags is vital for interpreting the continuum lags associated with propagation and reflection effects in the inner emitting regions. We have shown that the effects of the warm absorber in this source are negligible on short timescales, where the reverberation lag is found.
However, on long timescales ($\sim$ hours) the response time of the absorber causes the soft photons to lag behind the hard photons by up to hundreds of seconds. This has implications for the modelling of the observed hard lags, which have been measured to be only $\sim 50$ s in NGC 4051, since the effect of the warm absorber is to dilute them or even produce dominant soft lags at low frequencies. \par The range of gas distances that, under our assumptions, yields the strongest warm-absorber effect is comparable to the existing estimates for the location of the warm absorber in NGC 4051. In light of this, we note that the warm absorber likely plays a role in the observed X-ray time lags in this source. If this is the case, such effects can also help explain the observed soft lags at low frequencies for the low-flux segments of the 2009 dataset of NGC 4051. \par The results we present in this paper are specific to a case study we performed on NGC 4051 and its warm absorber. Assessing the effects of the warm absorber on the lags through future studies, in which we will explore the parameter space, will then allow us to disentangle the contributions from continuum processes and from the warm-absorber response in the lag-frequency and lag-energy spectra of AGN. As mentioned earlier, one of the problems in studying these systems through time-resolved spectroscopy is the timescales involved ($\sim$ minutes to hours), which result in limited photon counts per time and energy bin. Moreover, there are also other processes playing a role on these timescales, namely the continuum processes that produce the hard lag. When doing time-resolved spectroscopy, the gas-response lags are not easily distinguished from the continuum lags, which may cause spurious measurements of the response time. Evaluating the level of uncertainty on these estimates goes beyond the scope of this paper.
We stress that using spectral-timing analysis together with a time-dependent photoionization model is an extremely powerful method, not only to assess the contribution of the warm absorber to the X-ray time lags, but also to provide important diagnostics on the warm-absorber location and gas density, which we will explore in future work. Furthermore, the method can be applied to other sources and warm-absorber configurations, allowing for a wide range of studies. Indeed, recent work by \cite{kara2016} shows that some AGN with soft lags at low Fourier frequencies are also highly absorbed, highlighting a possible connection between variable absorption and low-frequency lags in other sources. These spectral-timing methods will allow the study of the warm-absorber response on even shorter timescales and at higher spectral resolution than was previously possible. With the current dataset on NGC 4051 it is not yet possible to determine whether the lag is associated with the warm absorber; however, higher signal-to-noise grating or calorimeter data may enable this test. Looking further ahead, ATHENA \citep{nandra2013} will allow these methods to be routinely used to study the detailed time response of individual absorption components, allowing us to map AGN outflows in exquisite detail.
[arXiv:1607.03060, July 2016]
Motivated by holographic models of the (pseudo)conformal Universe, we carry out a complete analysis of linearized metric perturbations in the time-dependent two-brane setup of the Lykken-Randall type. We present the equations of motion for the scalar, vector and tensor perturbations and identify the light modes in the spectrum, which are the scalar radion and the transverse-traceless graviton. We show that there are no other modes in the discrete part of the spectrum. We pay special attention to the properties of the light modes and show, in particular, that the radion has a red power spectrum at late times, as anticipated on holographic grounds. Unlike the graviton, the radion survives in the single-brane limit, when one of the branes is sent to the adS boundary. These properties imply that potentially observable features characteristic of the 4d (pseudo)conformal cosmology, such as statistical anisotropy and specific shapes of non-Gaussianity, are inherent also in holographic conformal models as well as in brane-world inflation.
Some time ago it was pointed out that conformal symmetry $SO(4,2)$ broken down to de Sitter $SO(4,1)$ in the early Universe may be responsible for the generation of the (nearly) flat spectrum of scalar cosmological perturbations~\cite{Rubakov:2009np, Creminelli:2010ba, Hinterbichler:2011qk, Hinterbichler:2012fr} (see Ref.~\cite{Libanov:2015iwa} for a review). The main ingredient of the (pseudo)conformal scenarios is the expectation value of a scalar operator $\mathcal{ O}$ of non-zero conformal weight $\triangle$ which depends on time $\tau$ and gives rise to symmetry breaking, \begin{equation} \langle \mathcal{ O}\rangle \propto \frac{1}{(-\tau )^{\triangle}}\,, \label{Eq/Pg1/1:dr} \end{equation} where $\tau <0$. It is also assumed that: (i) space-time is effectively Minkowskian during the rolling stage (\ref{Eq/Pg1/1:dr}); (ii) there is another scalar field of zero effective conformal weight in this background, whose perturbations automatically have a flat power spectrum\footnote{Weak explicit breaking of conformal invariance yields a small tilt in this spectrum~\cite{Osipov:2010ee}.}; (iii) the perturbations of the latter field are converted into the adiabatic scalar perturbations at some later stage. A peculiarity inherent in the (pseudo)conformal mechanism is that the perturbations of $\mathcal{ O}$ have a red power spectrum, \begin{equation} \mathcal{ P}_{\delta \mathcal{ O}} \propto p^{-2}\;. \label{Eq/Pg1/2:dr} \end{equation} This feature leads to potentially observable predictions, such as specific shapes of non-Gaussianity~\cite{Libanov:2011hh, Creminelli:2012qr, nongauss} and statistical anisotropy~\cite{Libanov:2011hh, Creminelli:2012qr, anisotropy, constraniso}. It is worth emphasizing that many of these properties are direct consequences of the symmetry breaking pattern $SO(4,2)\to SO(4,1)$~\cite{Creminelli:2012qr, Hinterbichler:2012mv}. Further development of the (pseudo)conformal scenario involves holography.
It has been pointed out that conformal rolling (\ref{Eq/Pg1/1:dr}) in the boundary theory is dual to the motion of a domain wall in the adS$_5$ background~\cite{Hinterbichler:2014tka, Libanov:2014nla}. This motion corresponds to a spatially homogeneous transition from a false vacuum to a true one. One generalizes this construction further and considers the nucleation and subsequent growth, in adS$_{5}$, of a bubble of the true scalar-field vacuum surrounded by the false vacuum. From the viewpoint of the boundary CFT, this process corresponds to the (spatially inhomogeneous) Fubini--Lipatov tunneling transition and the subsequent real-time development of an instability of a conformally invariant vacuum~\cite{Libanov:2015mha}. In the holographic approach the position of the moving domain wall plays the role of the operator $\mathcal{ O}$, whose perturbations again have a red power spectrum (\ref{Eq/Pg1/2:dr}). It is worth noting that the analysis of perturbations in these holographic constructions has so far not included the effects of dynamical 5d gravity: the back reaction of the domain-wall perturbations on the background adS$_5$ has been neglected. Clearly, it is of interest to understand whether or not the power spectrum (\ref{Eq/Pg1/2:dr}) gets modified by the effects of dynamical 5d gravity; this is one of the issues we address in this paper (within the thin-brane approximation). In fact, various brane-gravity systems in the adS$_5$ background have been studied in the context of brane-world models with large and infinite extra dimensions (for a review see, e.g., Ref.~\cite{Rubakov:2001kp}). In particular, the linearized metric perturbations have been analyzed in the framework of the static Randall-Sundrum I (RS1) model with an $S^{1}/\mathbb{Z}_{2}$ orbifold extra dimension and two 3-branes (one with positive and another with negative tension) residing at its boundaries~\cite{Randall:1999ee}.
It has been shown~\cite{Charmousis:1999rg} that apart from the massless four-dimensional graviton (whose wave function is peaked at the positive-tension brane) and the corresponding Kaluza-Klein tower, the perturbations contain a massless four-dimensional scalar field, the radion, which corresponds to the relative motion of the branes. The radion wave function is peaked at the negative-tension brane. In Ref.~\cite{PRZ} the metric perturbations have been studied in a more general static setup~\cite{Lykken:1999nb}, where the assumption of the $\mathbb{Z}_{2}$ symmetry across the visible brane has been dropped. It has been shown that the radion becomes a ghost in some region of the parameter space which, in particular, includes the setup of Refs.~\cite{Charmousis:1999rg, Gregory:2000jc}, where the graviton is quasi-localized due to the warped geometry of the bulk. Similar results were obtained in Ref.~\cite{Dubovsky:2003pn}, where effects of the induced Einstein term on the brane(s) have been considered. It is worth recalling that static brane-world setups are possible only if certain fine-tuning relation(s) between the bulk cosmological constant(s) and the brane tension(s) are satisfied. If these conditions are not met, the background in general depends on time. In the simple one-brane setup, in a frame where an observer is at rest with respect to the bulk, the bulk geometry is (locally) static and anti de Sitter while the brane moves along the extra dimension, and the brane induced metric corresponds to de Sitter space~\cite{Kraus:1999it, Ida:1999ui, Cvetic:1999ec, Mukohyama:1999wi, Bowcock:2000cq, Gorbunov:2001ge}. On the other hand, from the viewpoint of an observer located on the brane, the induced geometry of the brane is still de Sitter, while the bulk metric becomes time-dependent.
The above discussion suggests that in the dynamical background, the radion (which is a massless scalar field in the case of the static background) becomes a scalar field with a red power spectrum (\ref{Eq/Pg1/2:dr}). The radion in the RS1 setup with a slice of adS$_5$ bounded by two dS$_4$ branes (one with positive tension and another with negative tension) was studied in Refs.~\cite{Gen:2000nu, Binetruy:2001tc, Chacko:2001em, Gen:2002rb} (see also Ref.~\cite{Chiba:2000rr}), with the result that the radion perturbations indeed have a red power spectrum. In this paper we consider the linearized metric perturbations in a more general spatially homogeneous thin-brane setup of the Lykken-Randall type~\cite{Lykken:1999nb} with relaxed fine-tuning conditions, and hence with a time-dependent background. Although we consider for completeness the case when one of the branes has negative tension, our primary interest is the model with both branes having positive tensions. This setup is more reminiscent of the holographic description of the conformal vacuum decay, although it is spatially homogeneous and does not involve a scalar field in the bulk. We pay special attention to the radion and show that its equation of motion indeed leads to a red power spectrum which has precisely the form~(\ref{Eq/Pg1/2:dr}). Importantly, there are no other scalar modes bound to any of the branes: all other modes belong to the continuous spectrum. A similar situation occurs in the tensor sector, which contains one mode bound to the UV brane (essentially the Randall--Sundrum graviton) and modes from the continuum. One of our main purposes is to see what happens in a model with a {\it single} brane, which generalizes the model of Ref.~\cite{Libanov:2014nla} in the sense that it includes effects of the 5d gravity. In this context, the Lykken--Randall UV brane is viewed as a regularization tool, so we send it to the adS$_5$ boundary in the end.
We find that the radion perturbations do not decouple in this limit and still have the power spectrum~(\ref{Eq/Pg1/2:dr}). Thus, the potentially observable features of the (pseudo)conformal universe~\cite{Libanov:2015iwa} hold for the de Sitter brane moving in the 5d bulk. This paper is organized as follows. In Sec.~\ref{Section/Pg1/1:dyn_radion/The Setup} we describe the two-brane setup. In Sec.~\ref{Section/Pg4/1:dyn_radion/Perturbations} we consider general metric perturbations and fix the gauge. We also identify the radion mode, which corresponds to the relative brane fluctuation. In Secs.~\ref{Section/Pg8/1:dyn_radion/Einstein equations} and \ref{Section/Pg12/1:dyn_radion/Linearized Israel conditions} we present the linearized Einstein equations and the Israel junction conditions. In Sec.~\ref{Section/Pg11/1:dyn_radion_aDs4/The Einstein equations solution} we solve the full set of equations in the scalar, vector and tensor sectors of the metric perturbations. In Sec.~\ref{Section/Pg19/1:dyn_radion_adS4_2_branes/Light modes effective action} we construct the effective actions for the light modes, the radion and the graviton. We discuss the properties of the radion and show that its perturbations have a red power spectrum. We consider the single-brane limit and show that the radion does not decouple and that the spectrum of its perturbations remains red. We conclude in Sec.~\ref{Section/Pg32/1:dr/Conclusion}.
\label{Section/Pg32/1:dr/Conclusion} To conclude, in this paper we have performed an analysis of the linearized metric perturbations in the dynamical Lykken-Randall type model. We have derived the equations of motion for the scalar, vector and tensor modes and have shown that, in general, the radion and the graviton are the only light modes. However, in the single-brane regime, depending on the behaviour of the warp factor in the ``$-$'' region, either the graviton or the radion decouples from the physical spectrum: if the warp factor grows outward from the visible brane ($k_{-}>0$) and there is the adS boundary, only the radion is present in the physical spectrum while the graviton decouples, and vice versa in the opposite case. We have also shown that if the visible brane has negative tension, the radion is a ghost. Although these features of the metric perturbations are interesting in themselves, we think our main result is the radion equation of motion. This equation leads to the red power spectrum, as one could have anticipated from the holographic picture. This means that the potentially observable features of the (pseudo)conformal Universe~\cite{Libanov:2015iwa} hold also for the de Sitter brane moving in the adS background.
[arXiv:1607.03112, July 2016]
We report on the discovery of extended Ly$\alpha$ nebulae at $z\simeq3.3$ in the Hubble Ultra Deep Field (HUDF, $\simeq$ 40 kpc $\times$ 80 kpc) and behind the Hubble Frontier Field galaxy cluster MACSJ0416 ($\simeq 40$ kpc), spatially associated with groups of star-forming galaxies. VLT/MUSE integral field spectroscopy reveals a complex structure with spatially varying double-peaked Ly$\alpha$ emission. Overall, the spectral profiles of the two Ly$\alpha$ nebulae are remarkably similar, both showing a prominent blue emission, more intense and slightly broader than the red peak. No X-ray emission has been detected from the first nebula, located in the HUDF, disfavoring the presence of an AGN. Spectroscopic redshifts have been derived for 11 galaxies within $2\arcsec$ of the nebula, spanning the redshift range $1.037<z< 5.97$. The second nebula, behind MACSJ0416, shows three aligned star-forming galaxies plausibly associated with the emitting gas. In both systems, the associated galaxies reveal possible intense rest-frame optical nebular emission lines \oiiidoub\ +H$\beta$ with equivalent widths as high as 1500\AA~rest-frame and star formation rates ranging from a few to tens of solar masses per year. A possible scenario is that of a group of young star-forming galaxies whose escaping ionising radiation induces Ly$\alpha$ fluorescence, thereby revealing the kinematics of the surrounding gas. Ly$\alpha$ emission powered by star formation and/or cooling radiation may also reproduce the double-peaked spectral profile and the morphology observed here. If the intense blue emission is associated with inflowing gas, then we may be witnessing an early phase of galaxy, proto-cluster, or group formation.
Exchanges of gas between galaxies and the ambient intergalactic medium play an important role in the formation and evolution of galaxies. Circumgalactic gas at high redshift ($z>3$) has been observed through absorption-line studies using background sources close to foreground galaxies \citep[e.g.,][]{lanzetta95, chen01,adelberger03,steidel10,giavalisco11,turner14}. The presence of a significant amount of circumgalactic gas has also been revealed through the detection of extended Ly$\alpha$ emission on scales of several tens of kpc around single galaxies \citep[$L_{\alpha} \simeq 10^{42}~\rm erg~s^{-1}$, e.g.,][]{steidel10,caminha16,patricio16,wisotzki16} and on scales up to hundreds of kpc around QSOs and/or high-redshift radio galaxies \citep[$L_{\alpha} \simeq 10^{44}~\rm erg~s^{-1}$, e.g.,][]{borisova16,cantalupo14,swinbank15}. While the origin of the extended Ly$\alpha$ emission is still debated, it is clear that the circumgalactic gas must be at least partly neutral. Extended Ly$\alpha$ emission is therefore a viable tool to investigate the presence, state and dynamics of the hydrogen gas surrounding single galaxies or galaxy groups. Various processes can be investigated: e.g., (1) the search for outflowing/inflowing material provides insights into the feeding mechanisms of galaxy formation, the regulation of galactic baryon/metal budgets, and the connection with the IGM; and (2) the indirect signature of escaping ionizing radiation that illuminates inflowing/outflowing neutral hydrogen gas and shapes the Ly$\alpha$ emission profile. The latter is connected with the ionizing effect of sources on their local environment \citep{rauch11,rauch16}. The extent of a Ly$\alpha$ nebula is found to be strongly related to the luminosity of a central source, with the largest nebulae extending to hundreds of kpc around luminous AGN \citep[e.g.,][]{borisova16,swinbank15,cantalupo14,hennawi15}.
The shape of these nebulae is often found to be symmetrical or filamentary around a central source \citep[e.g.,][]{hayes11,wisotzki16,patricio16}, where more luminous sources show more circular morphologies. The presence of a central source is in agreement with the proposed mechanisms responsible for the extended emission; however, there are nebulae where no central source is detected \citep{prescott12}. The diverse origin of LABs can also be seen in the dynamics of Ly$\alpha$ nebulae. Most of the studied LABs have a chaotic distribution of Ly$\alpha$ emission \citep[e.g.,][]{christensen04,caminha16,prescott12,patricio16,francis13}, which is in agreement with Ly$\alpha$ scattering or an ionising central source as the origin of the extended emission. The discovery of rotating LABs \citep[e.g.,][]{prescott15,martin15} indicates that cold accretion flows can also be responsible for extended Ly$\alpha$ emission. In this work we report on two very similar systems at approximately the same redshift ($z=3.3$) discovered in two different fields: one recently observed with long-slit spectroscopy by \citet{rauch11,rauch16} in the Hubble Ultra Deep Field (HUDF, hereafter), and a second one discovered as a multiply imaged system in the Hubble Frontier Fields cluster MACSJ0416. Both of these systems have been observed with VLT/MUSE integral field spectroscopy, which revealed extended Ly$\alpha$ emission coincident with a group of star-forming galaxies. We present a study of the morphology and spectral profile of these systems, possibly tracing outflowing and inflowing gas. In the following discussion we assume a flat cosmology with $\Omega_{m}=0.3$, $\Omega_{\Lambda}=0.7$ and $H_{0}=70 \, {\rm km\, s^{-1}\, Mpc^{-1}}$, corresponding to 7.6 kpc proper for a 1\arcsec\ separation at $z=3.3$. \begin{figure*} \centering \includegraphics[width=14cm]{Fig1.pdf} \caption{Panel {\bf A}: HST F105W-band image of the region covering the Ly$\alpha$ nebula. Galaxies are highlighted with contours (only for eye guidance).
Panel {\bf B}: Ly$\alpha$ nebula as the sum of the two Ly$\alpha$ peaks, blue and red. Contours indicate the positions of galaxies; the green contour marks the Ly$\alpha$ emission above 2$\sigma$ of the background. Panel {\bf C} shows the color image derived from the HST/ACS F435W, F606W, and z850LP bands. Panel {\bf D}: summed (red + blue) one-dimensional Ly$\alpha$ spectral profile integrated within the green contour, where two peaks are evident. The Ly$\alpha$ spatial maps of the blue and red components are shown in panels {\bf E} and {\bf F}, respectively. These are computed by collapsing the signal over velocity intervals of ${\rm d}v=1300$ and 840 km~s$^{-1}$ (marked with blue and red segments). Clearly, the blue and red peaks of the Ly$\alpha$ emission originate from two different, spatially separated regions. The Ly$\alpha$ one-dimensional spectra extracted over these two regions are shown in panels {\bf G} and {\bf H}. These spectra are extracted adopting circular apertures of $1.2\arcsec$ diameter (blue and red dotted circles).
Dotted ellipses mark two regions over which each of the peaks of the Ly$\alpha$ emission either dominates or is depressed.} \label{nebula} \end{figure*} % % \begin{table} \footnotesize \caption{List of parameters} \begin{tabular}{ l r } \hline Ly$\alpha$ nebula - HUDF& {}\\ \hline Right ascension(J2000): & {\bf $03h32m39.0s$}\\ Declination(J2000): & {\bf $-27^{\circ}46^{'}17.0''$}\\ Redshift(blue,red): & 3.3172,3.3266 ($\pm 0.0006$)\\ L(Ly$\alpha$) blue (\ergs): & $(5.5 \pm 0.1)\times10^{42} $\ergs \\ L(Ly$\alpha$) red (\ergs): & $(4.0 \pm 0.1) \times10^{42} $\ergs \\ \hline Possible galaxy counterparts & {}\\ \#2-16373(SFR; Mass) & 4\msunyr ; $2.9\times10^{7}$\msun \\ \#3-16376(SFR; Mass) & 46\msunyr ; $1.2\times10^{9}$\msun \\ \#4-16148(SFR; Mass) & 0.1\msunyr ;$5.5\times10^{8}$\msun \\ \#6-16330(SFR; Mass) & 2\msunyr ; $3.2\times10^{7}$\msun \\ \#10-16506(SFR; Mass) & 100\msunyr ; $9.0\times10^{8}$\msun \\ \hline \hline Ly$\alpha$ nebula - MACSJ0416& {}\\ \hline Right ascension(J2000)[A]: & $04h16m10.9s$\\ Declination(J2000)[A]: & $-24^{\circ}04^{'}20.7''$\\ Right ascension(J2000)[B]: & $04h16m09.6s$\\ Declination(J2000)[B]: & $-24^{\circ}03^{'}59.7''$\\ Redshift(blue,red): & 3.2840,3.2928 ($\pm 0.0006$)\\ L(Ly$\alpha$) blue (\ergs): & $(4.4 \pm 0.1)\times10^{42} $\ergs \\ L(Ly$\alpha$) red (\ergs): & $(3.7 \pm 0.1) \times10^{42} $\ergs \\ \hline Galaxy counterparts & {}\\ \#1439(SFR; Mass) & 1.5\msunyr ; $4.4\times10^{8}$\msun \\ \#1443(SFR; Mass) & 1.2\msunyr ; $3.0\times10^{9}$\msun \\ \#1485(SFR; Mass) & 3.4\msunyr ; $1.5\times10^{10}$\msun \\ \hline \hline \end{tabular} \label{tab:valori} \end{table}
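The angular-to-physical scale quoted above ($\simeq 7.6$ kpc proper per arcsecond at $z=3.3$ for the adopted flat cosmology) can be checked with a short stdlib-only sketch; \texttt{astropy.cosmology} would give equivalent numbers. The small residual difference from the quoted value is at the percent level (rounding and integration conventions).

```python
import math

# Proper transverse scale at z = 3.3 for the flat cosmology adopted in the
# text (Omega_m = 0.3, Omega_L = 0.7, H0 = 70 km/s/Mpc).
C_KM_S = 299792.458
H0, OM, OL = 70.0, 0.3, 0.7

def inv_E(z):
    """1/E(z) for a flat LambdaCDM cosmology."""
    return 1.0 / math.sqrt(OM * (1.0 + z) ** 3 + OL)

def comoving_distance_mpc(z, n=2000):
    """Comoving distance: (c/H0) * integral of dz'/E(z'), via Simpson's rule."""
    h = z / n
    s = inv_E(0.0) + inv_E(z)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * inv_E(i * h)
    return (C_KM_S / H0) * s * h / 3.0

def kpc_per_arcsec(z):
    """Proper kpc subtended by 1 arcsec at redshift z."""
    d_a_mpc = comoving_distance_mpc(z) / (1.0 + z)  # angular diameter distance
    arcsec_rad = math.pi / (180.0 * 3600.0)
    return d_a_mpc * 1000.0 * arcsec_rad

print(round(kpc_per_arcsec(3.3), 2))  # ~7.5 kpc per arcsec
```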
We have discovered and discussed two extended Ly$\alpha$ systems at redshift $\simeq 3.3$. The prominent blue peak in their Ly$\alpha$ spectra, accompanied by a fainter, slightly narrower red peak, is remarkable. Ly$\alpha$ has usually been observed with dominant red tails indicative of outflows, both in galaxies \citep[e.g.,][]{shapley03,vanzella09} and in many Ly$\alpha$ blobs \citep[e.g.,][]{matsuda06}. Here we observe the opposite. This is even more relevant when intergalactic absorption is considered, which would tend to preferentially suppress the bluer peak. In particular, an IGM transmission on the blue side of the Ly$\alpha$ line ranging between 20\% and 95\% (68\% interval), with a mean of $\sim 80$\%, has been proposed by \citet{laursen11}. The Ly$\alpha$ nebulae described in this work benefit from integral field spectroscopy (MUSE), which is more informative than previous long-slit studies \citep[e.g.,][]{rauch16}. Despite that, the complexity of the system still prevents us from deriving firm conclusions, so we can at best test the plausibility of the processes involved and discuss the most likely scenarios. As mentioned above, the nebulae described in this work show a quite complex structure, with spatially dependent Ly$\alpha$ emission (Figures~\ref{lyablue} and \ref{lyared}) and varying sub-spectral profiles (Figure~\ref{lyaprofile}); however, two clear spectral features are present in both systems: (1) the broad double-peaked line profile with prominent blue emission, and (2) the `trough' separating the two peaks, which appears to occur mostly at the same frequency throughout the nebula (this is best illustrated by the {\it left panel} of Figure~\ref{lyaprofile}). These two observations combined can be naturally explained by scattering, and support the idea that radiative transfer effects are likely responsible for shaping the spectra emerging from these nebulae. The simplest version of the scattering medium consists of a static gas cloud of uniform density.
For such a medium, the emerging Ly$\alpha$ spectrum is double peaked, with the peak separation set by the total HI column density of the cloud \citep[e.g.,][]{harrington73,neufeld90,dijkstra14}: \begin{equation} N_{HI} =5.3\times10^{20} \left( \frac{T}{10^{4}\,{\rm K}} \right) ^{-1/2} \left( \frac{{\rm dv}}{670\hspace{1mm} {\rm km}\hspace{1mm} {\rm s}^{-1}} \right) ^{3} {\rm cm}^{-2}. \end{equation} However, this possibility was excluded by \citet{rauch11} on the basis of spectral properties similar to those we find here: the relatively stronger intensity of the blue versus the red emission and the different widths of the two peaks suggest that we are not observing a static configuration. Radiative transfer through clumpy/multiphase media can also explain double-peaked spectra \citep[see][]{gronke16}. Clumpy media generally give rise to a wide variety of broad, multi-peaked spectral line profiles. The trough at a constant frequency then reflects a non-negligible opacity either from residual HI in the hot inter-clump gas \citep{gronke16}, or from the cold clumps that reside in the hot halo gas (the presence of such clumps inside massive dark matter halos has been proposed by, e.g., \citealt{cantalupo14} and \citealt{hennawi15}). The width of the trough reflects either the thermal broadening of the Ly$\alpha$ absorption cross-section for residual HI in the hot gas, and/or the velocity dispersion of the cold clumps.\\ The origin of the Ly$\alpha$ emission in these nebulae is also not easily identifiable. {\bf Star Formation.} The association of the brightest Ly$\alpha$ spots with galaxies (in the HUDF nebula) and the three aligned star-forming galaxies in the lensed nebula suggest that star formation is at least powering the Ly$\alpha$ emission from the high-surface-brightness spots.
The enhancement of the blue and/or red peak in these brighter spots may reflect the kinematics of the clumps surrounding these galaxies, where an enhanced blue (red) peak is indicative of clumps falling onto (flowing away from) the galaxies \citep[see][]{zheng02,dijkstra06a,dijkstra06b,verhamme06}. Alternatively, enhancements in either of the two peaks could be due to the galaxies' bulk motion with respect to the Ly$\alpha$-emitting nebula (i.e., these could be star-forming galaxies that are moving onto the more massive halo hosting the Ly$\alpha$ nebula).\\ {\bf Cooling.} For massive dark matter halos ($M_{\rm halo}\sim 1-5 \times 10^{12}$M$_{\odot}$), the cooling luminosity can reach $L \sim 10^{43}$ erg s$^{-1}$ \citep[e.g.,][]{dijkstra09,giguere10,goerdt10,rosdahl12}, which is sufficient to explain the luminosities of these nebulae. The overall double-peaked spectrum is reminiscent of that predicted by \citet{giguere10} for cooling radiation (though see \citealt{trebitsch16}). In particular, the observed velocity difference ($\simeq 650$\kms) and the total Ly$\alpha$ luminosity are compatible with those reported by \citet{giguere10}. There are also similarities in the spatial emission, such as extensions of several tens of kpc and emission arising in different places. The spatial extent results from a combination of some of the cooling radiation being emitted in the accretion streams far from the galaxy(ies), and of spatial diffusion owing to resonant scattering. The presence of a well-developed blue component slightly broader than the red one is also among the outputs of the \citet{giguere10} prescriptions (see their Figure~5), and is indicative of systematic infall, in which the velocity of the inflowing gas tends to shift (in the frequency domain) the red Ly$\alpha$ photons toward the resonance, while making it easier for the blue Ly$\alpha$ photons to escape on the blue side.
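The static-slab column-density relation above can be evaluated directly for the peak separations of interest; a minimal sketch, using the $\simeq 650$ \kms\ velocity difference quoted above (illustrative only, since the observed nebulae are clearly not static):

```python
# HI column density implied by the Ly-alpha double-peak separation for a
# static, uniform slab (the relation given in the text). Order-of-magnitude
# consistency check only; the observed systems are not static.
def n_hi_static_slab(dv_kms, T_K=1e4):
    """Column density in cm^-2 from the peak separation dv (km/s)."""
    return 5.3e20 * (T_K / 1e4) ** -0.5 * (dv_kms / 670.0) ** 3

# Observed peak separation ~650 km/s (see text):
print(f"{n_hi_static_slab(650.0):.2e}")  # ~4.8e20 cm^-2
```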
\\ {\bf Fluorescence.} It is also possible that ionising photons escape from galaxies, which would cause the cold clouds to fluoresce in Ly$\alpha$ \citep[e.g.,][]{mas16}. Fluorescent Ly$\alpha$ emission also gives rise to a double-peaked spectrum, in which infalling/outflowing material diminishes the red/blue wing of the spectrum. However, fluorescence tends to produce a smaller peak separation than what is observed here \citep[e.g.,][]{gould96,cantalupo05}. In other words, if fluorescence is the source of the Ly$\alpha$ emission, then we would still need additional scattering to explain the width of the Ly$\alpha$ line. Note that this explanation also relies on star formation powering the Ly$\alpha$ emission. Given the number of galaxies that are possibly associated with each system, this explanation is energetically compatible with the observed star formation activity \citep[see][]{rauch11}. Moreover, as mentioned above, the strong optical rest-frame nebular emission (\oiiidoub\ and H$\beta$) traced by the K-band for some of the galaxies (e.g., \#3, \#2, and \#6 in the HUDF nebula and in the lensed nebula) is intriguingly similar to what has recently been observed in a $z=3.2$ galaxy with equivalent widths of EW(\oiiidoub)=1500\AA\ \citep{debarros16}, showing a remarkable amount of escaping ionizing radiation (higher than 50\%, \citealt{vanzella16}). Among them it is worth noting the presence of the extremely blue (and in practice dust-free), compact, low-mass galaxy (\#6), which might further contribute to the ionisation budget. The resonant Ly$\alpha$ emission from the nebulae may be an indirect probe of the ionising radiation escaping from the embedded sources (even those fainter than the detection limit), along transverse directions not accessible to the observer.
While the direct detection of Lyman continuum emission is in principle possible, it might be precluded in the present data because the galaxies are surrounded by the same (circum-galactic) medium that plausibly produces the Ly$\alpha$ nebula, preventing us from easily detecting the ionising flux. In addition, the intergalactic opacity in the Lyman continuum also affects these measurements \citep[e.g.,][]{vanzella12}, requiring a larger sample of similar systems to average over the stochastic IGM attenuation. {\bf AGN Activity.} AGN can inject large amounts of energy into the surrounding gas \citep[e.g.,][]{debuhr12}, which could radiate even after the nucleus has shut off. Such processes can modify the kinematic and thermal properties of the circum-galactic medium, and therefore its Ly$\alpha$ signatures. The key observable distinguishing between the different sources of Ly$\alpha$ emission would be the Balmer emission lines (like H$\alpha$ or H$\beta$). In the case of cooling radiation, the H$\alpha$ flux associated with Ly$\alpha$ should be $\sim 100$ times weaker \citep{dijkstra14}, and would likely be undetectable. However, for the `star formation' and `fluorescence' models, the H$\alpha$ flux should be significantly stronger. If H$\alpha$ emission can be observed and is confined to the galaxies, then the Ly$\alpha$ emission is likely powered by nebular emission inside the galaxies, while fluorescence would give rise to partially extended H$\alpha$. The search for galaxies that illuminate themselves through some fortuitous release of Ly$\alpha$ or ionizing radiation into their environment offers promising prospects for future MUSE observations and will provide our main direct insights into the in- and outflows of gas \citep{rauch16}. If the scenario in which the gas is inflowing toward a star-forming region is correct, then we may be witnessing an early phase of galaxy, proto-cluster, or group formation.
Searches for asymmetric Ly$\alpha$ halos or offsets between stellar populations and Ly$\alpha$ emission may reveal further objects where the escape of ionizing radiation can be studied. \bigskip %
The CONT14 campaign, with state-of-the-art VLBI data, observed the source 0642+449 with about one thousand observables each day during a continuous fifteen-day observing period, providing tens of thousands of closure delays---the sums of the delays around closed loops of baselines. The closure delay is independent of the instrumental and propagation delays and provides valuable additional information about the source structure. We demonstrate the use of this new ``observable'' for the determination of the structure of the radio source 0642+449. This source, one of the defining sources in the second realization of the International Celestial Reference Frame (ICRF2), is found to have two point-like components with a relative position offset of $-$426 microarcseconds ($\mu$as) in right ascension and $-$66 $\mu$as in declination. The two components are almost equally bright, with a flux-density ratio of 0.92. The standard deviation of the closure delays for source 0642+449 was reduced from 139 ps to 90 ps by using this two-component model. Closure delays larger than one nanosecond are found to be related to the source structure, demonstrating that structure effects could reach tens of nanoseconds even for a source with such simple structure. The method described in this paper does not rely on a priori source structure information, such as knowledge of source structure determined from direct (Fourier) imaging of the same observations or observations at other epochs. We anticipate our study to be a starting point for more effective determination of the structure effect in VLBI observations.
Radio galaxies and quasars have radio-emitting structure that can be conveniently divided into two categories: extended structure, with dimensions ranging from $10^{3}$ pc up to $10^{6}$ pc, and compact structure, with dimensions typically ranging from 1 pc to 100 pc \citep{kel88}. Extragalactic radio sources with compact structure are used to realize the fundamental Celestial Reference Frame, with axis stability at the level of ten microarcseconds ($\mu$as), through very long baseline interferometry (VLBI) observations \citep{ma98, fey15}. Given that the typical distance to these sources is at the level of $10^{9}$~pc, the compact structure should have angular dimensions of 0.2--20 milliarcseconds (mas), as shown in images of astrometric sources from astrophysical imaging studies \citep[e.g.,][]{cha90a, ojh04, ojh05, pin07, lis09, cha10, lis13}. For example, survey images of 91 compact sources obtained from VLBI observations at 5 GHz by \citet{tay94} showed that only eight sources had a structure smaller than one milliarcsecond. The effects of source structure on source positions determined from VLBI observations have been demonstrated in a series of studies \citep[e.g.,][]{whi71, fey97, fey00, fei03, mac07, mal08, moo11}. Recently, by observing four close radio sources in the second realization of the International Celestial Reference Frame (ICRF2) five times over one year, \citet{fom11} found that the radio flux intensity maximum could follow a jet component rather than stay close to the radio core. Their study suggests that if the jet component gets fainter than the radio core, or if the two become completely separated at some time, significant position variations will occur at the level of 0.1 mas yr$^{-1}$ or even larger. The study of the source structure effect on VLBI observables was pioneered by \citet{tho80}.
A significant effort was made by \citet{cha90b}, who modeled the source structure corrections for VLBI group delay and phase rate observables based on the brightness distributions of the sources. Many studies then attempted to introduce this theoretical model of the structure effect into astrometric VLBI data analysis based on images of sources \citep[e.g.,][]{cam88, cha88, tan88, ulv88, zep91, cha93, gon93, fey96, pet07}. For example, \citet{sov02} applied it to a series of ten Research and Development VLBI (RDV) sessions, and the results showed that the weighted delay residuals could be reduced. An application of the theoretical model to the European geodetic VLBI sessions was tested by \citet{tor07}. There are, however, several points that presently limit the application of this model for correcting the structure effect. First, the source structure effect is very sensitive to slight changes in the brightness distribution. Unfortunately, the time histories of available images for most sources are quite sparse, and in the foreseeable future it is almost impossible to make images on a regular basis at intervals of much less than a year for the many sources in the astrometric catalog, unless astrometric/geodetic observations themselves are scheduled in a suitable way and sufficient effort is devoted to imaging. Secondly, even when images made several months apart are available, the stationary reference point in these images can be hard to recognize if the radio flux intensity maximum observed by VLBI is dominated by a jet component. Consequently, in standard astrometric/geodetic VLBI data analysis, the source structure effect has not actually been handled so far. The source structure effect remains very important and challenging for astrometric VLBI, as shown in simulation studies \citep{sha15, pla16}.
If VLBI is to achieve its full potential of realizing the extragalactic Celestial Reference Frame with microarcsecond-level accuracy and the Terrestrial Reference Frame with millimeter-level accuracy, it is necessary to study and handle the source structure effect more effectively based on the astrometric observations themselves. These are the purposes of this paper. In this paper we perform an initial analysis to determine how well source structure can be determined directly from the geodetic VLBI observables themselves\footnote{Geodetic/astrometric VLBI observables are the baseline-based group delays and phase rates determined per scan within a geodetic VLBI experiment. The International VLBI Service for Geodesy and Astrometry (IVS) coordinates archives of geodetic VLBI experiment observables (see http://ivscc.gsfc.nasa.gov/products-data/data.html), but visibility datasets are normally not made available for analysis.}. We aim to develop an alternative method for studying the structure effect that is simple, easy to implement, and applicable to general historical and future geodetic VLBI observations, including many of the oldest observations (back to the 1970s) for which the visibility datasets are no longer available. Although a self-calibration and Fourier imaging analysis of the visibility data can give superior results for determining source structure, that approach is time- and computing-resource intensive; it requires a large amount of software not currently implemented in geodetic analysis packages; it requires that the observations were conducted in a manner suitable for imaging, which is frequently not the case for historical geodetic VLBI sessions; and it will be difficult, yield sub-optimal results, or even be impossible for the historical experiments that no longer have archived visibility datasets.
Therefore we defer our structure analysis based on imaging to a future publication, and instead make use of the closure delay---the sum of the delays around a closed loop of baselines---as a new observable, proposing a method to use it for the determination of the source structure effect on astrometric VLBI observables. We calculate the closure delays, investigate the characteristics of the source structure, and then solve for the source structure effect on each observable; the source structure itself can finally be obtained and its effect determined. The source 0642+449, one of the ICRF2 defining sources, is selected as a demonstration case for this method. The systematic analysis of closure delays requires a consistent definition and a careful discussion of the closure delay, which are presented along with its calculation model in Section 2. The data used here, the CONT14 observations, and the overall statistics of the closure delays of source 0642+449 are introduced in Section 3. Section 4 discusses the method used to solve for the source structure effect on each observable based on the knowledge from Section 3. The results, describing the structure of this source, are shown in Section 5, and the final model is presented in Section 6. Conclusions and discussion are given in the last section.
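The key cancellation that makes the closure delay independent of instrumental and propagation delays can be sketched in a toy calculation: any station-based contribution enters a baseline delay as a difference $t_j - t_i$ and therefore sums to zero around a closed triangle, leaving only the baseline-based structure terms. Station labels and delay values below are invented for illustration.

```python
import random

# Toy closure-delay sketch: the baseline delay tau_ij decomposes into
# station-based terms (clocks, atmosphere: t_j - t_i) plus a baseline-based
# structure term s_ij. Around a closed triangle the station terms cancel
# exactly, leaving only the structure contribution.
random.seed(1)
stations = ["A", "B", "C"]
t = {s: random.gauss(0.0, 1e-9) for s in stations}         # station delays (s)
s_struct = {("A", "B"): 30e-12, ("B", "C"): -12e-12, ("A", "C"): 45e-12}

def tau(i, j):
    """Observed baseline delay: station difference plus structure term."""
    return (t[j] - t[i]) + s_struct[(i, j)]

closure = tau("A", "B") + tau("B", "C") - tau("A", "C")
struct_only = s_struct[("A", "B")] + s_struct[("B", "C")] - s_struct[("A", "C")]
print(abs(closure - struct_only) < 1e-18)  # True: station terms cancel
```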
Closure delay analysis has several advantages for geodetic VLBI. First, it directly studies structure effects from the geodetic observables---the multiband group delays---themselves, the quantities actually used to determine station positions and motions, Earth orientation parameters, and other astrometric/geometric parameters. Second, closure delay analysis can serve as an indicator to evaluate the performance of any structure model, whether that model is determined by fitting closure delays or through an imaging analysis. Third, model fitting of closure delays can identify the strongest components of a source while ignoring weak components that do not significantly alter the group delay, simplifying the analysis procedure. In contrast to many astrophysical studies where the weak components are of interest, geodetic VLBI analysis can often ignore such components. Fourth, a closure delay analysis can save time and effort in both processing and software development for geodetic purposes compared to standard VLBI imaging. We showed that closure delays can determine the magnitude of the measurement noise in the geodetic VLBI observables, and thereby also the magnitude of structure effects in those observables. We also showed that large closure delays, even closure delays in excess of 10~ns, are related to source structure effects, and that the underlying delay measurements are not caused by simple measurement errors. We demonstrated for the first time that source structure can be obtained from closure \emph{delays}, as opposed to closure phases or closure amplitudes from visibility data. For sources that are reasonably compact on short VLBI baselines, we can simply and directly solve for (not fit) structure effects for the entire VLBI network of baselines without any additional a priori information.
We also apply model fitting to determine source structure, showing how closure delays can yield structure information without the need for sources to be unresolved on short baselines. This method is relatively simple to implement within existing geodetic analysis tools, uses input data from the standard geodetic database files, and does not require significant computational resources. For example, we can compute structure models for all sources in the 15-day CONT14 campaign in a fraction of an hour, whereas our current imaging analysis from the raw visibility data requires about 16~hours to process one 24-hour segment of the CONT14 campaign on a similar computer. In an array of $N$ antennas, with $N(N-1)/2$ interferometer baselines, there are $N(N-1)/2$ unknown structure effects to be determined and $(N-1)(N-2)/2$ independent closure delays as observables. Therefore, the fraction of the total structure delay information available in the closure delays is $(N-2)/N$. This ratio shows the benefit to be gained by increasing the number of antennas in the array: with only 4 antennas, 50\% of the structure delay information is available, while for 15 antennas, as in the case of the CONT14 observations of source 0642+449, the ratio increases to 87\%. From these observations, the source structure effect is demonstrated at the level of each individual VLBI group delay for the first time. The study reveals that at X band (8.4 GHz) during the CONT14 sessions the source had two point-like components with a flux-density ratio of 0.92, that is, almost equally bright. The position of the weaker component with respect to the stronger one is estimated to be $-$426 $\pm$ 12 $\mu$as in right ascension and $-$66 $\pm$ 19 $\mu$as in declination. Finally, the standard deviation of the corrected closure delays was reduced by 36$\%$.
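The antenna-counting argument above can be reproduced directly; a minimal sketch of the baseline, closure, and information-fraction counts:

```python
# Counting exercise from the text: N(N-1)/2 baseline structure delays vs
# (N-1)(N-2)/2 independent closure delays, so closures carry a fraction
# (N-2)/N of the structure-delay information.
def baselines(n):
    return n * (n - 1) // 2

def closures(n):
    return (n - 1) * (n - 2) // 2

def info_fraction(n):
    return (n - 2) / n

for n in (4, 15):
    print(n, baselines(n), closures(n), round(info_fraction(n), 2))
# 4  -> 6 baselines, 3 closures, fraction 0.50
# 15 -> 105 baselines, 91 closures, fraction 0.87
```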
This structure model agrees very well with the estimated source structure effect on baselines shorter than 10~000~km, but does not fit some of the longest baselines. There are four main reasons for this: (1) source structure effects on these longest baselines with $R \ga 0.7$ are much more sensitive to the relative position of the two components; (2) such long baselines have only a few observations over one cycle of $uv$ position angle, making it statistically difficult to identify the variation caused by source structure; (3) there may be structure on smaller scales that produces a structure effect on the longest baselines but none on baselines shorter than 10~000~km; and (4) the model of Equation (\ref{eq_structure}) is the derivative of the structure phase with respect to the observing frequency, while the multiband group delay is derived from a linear fit of the observed phase over 8 channels spanning about 0.7 GHz---applying this model to multiband group delays introduces errors in the structure effect, which may have larger impacts when the baseline length exceeds 11~000~km with $R \ga 0.7$. Due to this inadequacy of the model for multiband group delays, the flux-density ratio $K$ may have been underestimated. In 1992 and 1995, this source was observed to have a compact ``core-jet'' morphology with a resolution of several milliarcseconds by \citet{gur92} and \citet{xu95}, respectively. Recently, space VLBI (RadioAstron) observations of this source at 1.6 GHz in 2013, with a resolution of 0.8 mas, $\sim$ 4 times better than that of ground VLBI images at this wavelength, found that this source has two compact cores separated by 0.76 mas at a position angle of 81$\degr$ in the sky plane \citep{lob15}.
Since the space VLBI observations were made fourteen months before the CONT14 observations, one may not expect them to have observed the same blob, but the position angle of the two components should lie approximately along the same direction. Our result demonstrates that the two components lie at a position angle of about 261.2$\degr$, along the same axis as that detected by space VLBI. The source 0642+449 did not exhibit a significant structure effect due to a frequency dependence of the flux densities of the two components, which would have a completely different pattern, such as more peaks over 24 hours for baselines with $R \approx 0.5$. Our study shows a similar structure for this source with a resolution comparable to that of space VLBI, demonstrating the feasibility of applying astrometric observables to the study of source structure with this method. From the study by \citet{ber11}, we expect polarization leakage to affect the multiband group delay by less than 1.6 ps for 90~\% of the observations. General leakage of LCP into RCP in the geodetic observations will result in a baseline-dependent bias. For the LCP part of the Stokes I emission that leaks into RCP, these biases are constant in time and baseline orientation for a given station pair, and do not explain the large, systematic, and source-dependent effects. \citet{hom06} showed that at VLBI resolutions, the fractional circular polarization of AGN core and jet components is typically less than about 1~\%. Supposing that different baseline orientations constructively add/subtract the phases from two components that are circularly polarized at the 1~\% level, the change in delay caused by circularly polarized source structure would be only 2~\% of the change in delay for the Stokes I emission. We therefore expect that polarization effects are negligible for this study, although they may become important for reaching picosecond accuracies.
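Reason (4) above---that the structure model is the frequency derivative of the structure phase, while the multiband group delay is a linear fit of phase over 8 channels spanning $\sim$0.7 GHz---can be illustrated with a toy two-component source. The ratio $K$ matches the fitted value in the text, but the component delay \texttt{TAU12} and the band setup are assumed for illustration, not taken from the analysis.

```python
import math

# Toy comparison: derivative-of-phase structure delay vs a linear 8-channel
# phase fit (mimicking a multiband group delay) for a two-component source.
# K is the flux-density ratio from the text; TAU12 (the geometric delay of
# the component separation on the baseline) is an assumed illustrative value.
K = 0.92
TAU12 = 70e-12  # seconds; assumed, roughly a long-baseline scale

def struct_phase(nu):
    """Phase (rad) of the visibility V = 1 + K*exp(-2j*pi*nu*TAU12)."""
    x = 2.0 * math.pi * nu * TAU12
    return math.atan2(-K * math.sin(x), 1.0 + K * math.cos(x))

def derivative_delay(nu, dnu=1e4):
    """Structure delay as (1/2pi) dphi/dnu, via a central difference."""
    return (struct_phase(nu + dnu) - struct_phase(nu - dnu)) / (4 * math.pi * dnu)

def multiband_delay(nu0=8.4e9, span=0.7e9, nchan=8):
    """Structure delay from a linear least-squares fit of phase vs frequency."""
    nus = [nu0 - span / 2 + span * k / (nchan - 1) for k in range(nchan)]
    phis = [struct_phase(nu) for nu in nus]
    nbar, pbar = sum(nus) / nchan, sum(phis) / nchan
    slope = (sum((n - nbar) * (p - pbar) for n, p in zip(nus, phis))
             / sum((n - nbar) ** 2 for n in nus))
    return slope / (2.0 * math.pi)

d1 = derivative_delay(8.4e9)
d2 = multiband_delay()
print(f"derivative model: {d1*1e12:.1f} ps, 8-channel linear fit: {d2*1e12:.1f} ps")
```

With these assumed numbers the two estimates differ at the picosecond level, showing how the derivative model can misrepresent a multiband group delay when the structure phase is nonlinear across the band.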
The large closure delays have also been effectively traced, revealing that most observables erroneously identified as outliers in VLBI data analysis are in fact affected by the source structure effect, and that this effect can on occasion reach the level of tens of ns even for a radio source with a rather flat spectral index. This cannot yet be explained by the model and needs to be studied in the future from an astrophysical perspective; for astrometric VLBI, meanwhile, we should schedule routine observations more effectively so as to exclude this kind of radio source, or observe it without such long baselines, which is viable only if the two components do not move with respect to each other. It is still challenging to implement the identified source structure as a correction in VLBI data analysis. First, an accurate model for multiband group delays at the level of at least 10 ps needs to be derived. This model should be able to reduce the magnitudes of the closure delays of triangles with the longest baselines to the level of those of small triangles, a few tens of ps. Second, a careful re-study of the linear combination of S and X band data in the presence of source structure would be essential for an accurate correction of the source structure effect on the combined S-X observable. Moreover, astrometric/geodetic VLBI observables at S band are one order of magnitude noisier than those at X band, which makes the source structure at S band almost unrecoverable, and the structures at S and X bands are different: in general, structures at S band are much more resolved than those at X band. How could our method be improved for the study of source structure? First, there should be more effective ways of deriving the structure effects on the $(N-1)$ baselines. Due to the limitation imposed by the assumption used for the connection, our method is confined to a small fraction of radio sources; if a way can be developed to overcome this limitation, the method may allow us to directly correct the structure delay on each single delay observable in geodetic VLBI data analysis. Second, one might develop a new method of image reconstruction in an iterative way, rather than modeling the structure delay; a non-linear estimation of the structure parameters from the closure delays would then have to be developed. The method could thus be extended to more general cases, i.e., complex or resolved sources. The rigorous way to correct the structure effect is to make images from the same observations using standard VLBI imaging and to correct the raw visibility phases for source structure in the geodetic VLBI analysis software prior to the multiband group delay fitting. Even though this will require more work and resources compared to the current routine VLBI data analysis procedure, geodetic VLBI should move in this direction in the near future. We are working on making images of the CONT14 observations, and the results will be presented in another paper. This method could be of great help in monitoring the performance of radio sources for historical VLBI observations and for the VLBI Global Observing System \citep[VGOS;][]{pet09}. In VGOS, there is a global network of well-distributed stations, including several twin telescopes. A wider range of baseline lengths, starting from hundreds of meters, will be available, which will allow source structure with wide separations of compact components to be detected.
Moreover, if the point-like sources preferentially observed in astrometric VLBI begin to show structure, it should be possible to model their structures approximately as sets of compact components rather than as a flat brightness distribution. The source structure effect, one of the main and inevitable problems for the goals of VGOS, can thus be expected to be handled by this method to some extent. Beyond astrometric VLBI, the method can also benefit astrophysical studies of source structure, much as imaging does, using continuous observations within VGOS.
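The closure-delay diagnostic underlying this discussion can be illustrated with a toy two-component source model. All numbers below (station geometry, flux ratio, component separation, observing band) are made-up illustrative values, not taken from the observations discussed in the paper: the structure phase of each baseline is the argument of the two-component visibility, the structure group delay is its derivative with respect to frequency, and these delays do not close on a station triangle.

```python
# Toy sketch (not the paper's code): structure group delay of a hypothetical
# two-component source and its non-closing contribution on a station triangle.
import numpy as np

C = 299792458.0  # speed of light [m/s]

def structure_phase(b_east, b_north, nu, r=0.3, sep_mas=(0.5, 0.2)):
    """Visibility phase (rad) of a point core plus a secondary component
    with flux ratio r, offset by sep_mas (east, north) in milliarcseconds.
    Baseline components are projected lengths in metres."""
    mas = np.pi / 180 / 3600 / 1000          # mas -> rad
    u = b_east * nu / C                       # baseline in wavelengths
    v = b_north * nu / C
    arg = -2 * np.pi * (u * sep_mas[0] * mas + v * sep_mas[1] * mas)
    vis = 1.0 + r * np.exp(1j * arg)
    return np.angle(vis)

def structure_delay(b_east, b_north, nu=8.4e9, dnu=1e6, r=0.3):
    """Structure group delay tau = (1/2pi) dphi/dnu by finite differences."""
    dphi = (structure_phase(b_east, b_north, nu + dnu, r=r)
            - structure_phase(b_east, b_north, nu - dnu, r=r))
    return dphi / (2 * np.pi) / (2 * dnu)

# Made-up ground-projected station coordinates (east, north) in metres.
stations = {"A": (0.0, 0.0), "B": (6.0e6, 1.0e6), "C": (2.0e6, 5.0e6)}

def baseline(s1, s2):
    return (stations[s2][0] - stations[s1][0],
            stations[s2][1] - stations[s1][1])

# Closure delay around the triangle: AB + BC - AC.
closure = (structure_delay(*baseline("A", "B"))
           + structure_delay(*baseline("B", "C"))
           - structure_delay(*baseline("A", "C")))
print(f"closure delay = {closure * 1e12:.2f} ps")  # non-zero: structure only
```

For a point source (flux ratio zero) the structure phase vanishes and the closure delay is identically zero, so any non-zero closure delay isolates the structure contribution, which is the property exploited in the text.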
We present the first image of the thermal Sunyaev-Zel'dovich effect (SZE) obtained by the Atacama Large Millimeter/submillimeter Array (ALMA). Combining 7-m and 12-m arrays in Band 3, we create an SZE map toward a galaxy cluster RX~J1347.5--1145 with 5 arc-second resolution (corresponding to the physical size of 20$h^{-1}$kpc), the highest angular and physical spatial resolutions achieved to date for imaging the SZE, while retaining extended signals out to 40 arc-seconds. The 1$\sigma$ statistical sensitivity of the image is 0.017 mJy/beam or 0.12 mK$_{\rm CMB}$ at the 5 arc-second full width at half maximum. The SZE image shows a good agreement with an electron pressure map reconstructed independently from the X-ray data and offers a new probe of the small-scale structure of the intracluster medium. Our results demonstrate that ALMA is a powerful instrument for imaging the SZE in compact galaxy clusters with unprecedented angular resolution and sensitivity. As the first report on the detection of the SZE by ALMA, we present detailed analysis procedures including corrections for the missing flux, to provide guiding methods for analyzing and interpreting future SZE images by ALMA.
The Sunyaev-Zel'dovich effect (SZE, \cite{Sunyaev72}), inverse Compton scattering of the cosmic microwave background (CMB) photons off hot electrons, offers a powerful probe of cosmic plasma up to high redshifts (see \cite{Rephaeli95, Birkinshaw99, Carlstrom02, Kitayama14} for reviews). The surface brightness of the SZE is independent of the source redshift $z$ for given electron density $n_{\rm e}$ and temperature $T_{\rm e}$ and is proportional to $n_{\rm e} T_{\rm e}$, whereas that of X-rays varies as $n_{\rm e}^2(1+z)^{-4}$ with only weak dependence on $T_{\rm e}$. The SZE can hence be a unique tool for studying the physics of the intracluster medium, e.g., by detecting shocks (pressure gaps) and very hot gas associated with subcluster mergers \citep{Komatsu01,Kitayama04,Korngut11}. The advent of large-area surveys by the South Pole Telescope (SPT) (e.g., \cite{Spt09,Spt10,Spt11,Spt13a}), the Atacama Cosmology Telescope (ACT) (e.g., \cite{Act10,Act11,Act13a}), and the Planck satellite (e.g., \cite{Planck_earlysz,Planck13a}) has enhanced the sample of galaxy clusters observed via the SZE by more than an order of magnitude over the past decade. The caveats of existing observations are their limited angular resolution ($>1'$ in the above-mentioned surveys) and sensitivity. Single-dish measurements by the MUSTANG bolometer array have achieved the currently highest angular resolution of $9''$ full width at half maximum (FWHM) for SZE maps (e.g., \cite{Mason10,Korngut11,Romero15,Young15}), while they are still challenged by point source and atmospheric contamination. Interferometers offer a complementary tool with good control of systematic noise and the capability of separating compact sources from the SZE, albeit with reduced sensitivity to sources more extended than the baseline coverage (e.g., \cite{Jones93, Carlstrom96, AMI06, Muchovej07, Wu09}). A recent SZE map obtained by CARMA has a synthesized beam of $10.6'' \times 16.9''$ \citep{Plagge13}. 
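The scaling contrast stated above can be made concrete with a two-line toy model (arbitrary normalisations, purely illustrative): for fixed $n_{\rm e}$ and $T_{\rm e}$, the SZE surface brightness does not change with redshift, while the X-ray surface brightness dims as $(1+z)^{-4}$.

```python
# Toy illustration of the quoted scalings; normalisations are arbitrary.
def sze_brightness(n_e, T_e, z):
    """SZE surface brightness ~ n_e * T_e, independent of redshift."""
    return n_e * T_e

def xray_brightness(n_e, z):
    """X-ray surface brightness ~ n_e^2 (1+z)^-4 (cosmological dimming)."""
    return n_e**2 * (1 + z)**-4

for z in (0.1, 0.451, 1.0, 2.0):
    print(z, sze_brightness(1.0, 1.0, z), xray_brightness(1.0, z))
```

Between $z=0.1$ and $z=2$ the X-ray brightness of identical gas drops by a factor of roughly 55, while the SZE signal is unchanged, which is why the SZE is attractive for high-redshift clusters.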
With a combination of 7-m and 12-m arrays in Band 3, the Atacama Large Millimeter/submillimeter Array (ALMA) serves as the first instrument to resolve the SZE with an angular resolution of $5''$, as predicted by detailed imaging simulations \citep{Yamada12} using hydrodynamic simulation data \citep{Takizawa05,Akahori12}. Among the currently available frequency bands of ALMA, Band 3 is the most suitable for SZE imaging owing to the largest field-of-view, the lowest system temperature, and minimal contamination by synchrotron and dust emission. Given that the Total Power Array is still unavailable for continuum observations by ALMA, viable targets are limited to compact distant galaxy clusters. In this paper, we present the first measurement of the SZE by ALMA. The target is the galaxy cluster RX J1347.5--1145 at $z=0.451$. Owing to its brightness and compactness, RX J1347.5--1145 is a prime target for imaging observations by the current configuration of ALMA. A number of SZE measurements have been made for this cluster in the past \citep{Komatsu99,Pointecouteau99,Komatsu01,Pointecouteau01,Reese02, Carlstrom02,Kitayama04,Benson04,Zemcov07,Mason10,Korngut11,Zemcov12, Plagge13,Adam14,Sayers16}. In particular, the Nobeyama Bolometer Array (NOBA; \cite{Komatsu01}) detected a prominent substructure that was not expected from the regular morphology of this cluster in the soft-band X-ray image by ROSAT \citep{Schindler97}. The presence of the substructure was confirmed with subsequent X-ray data by Chandra \citep{Allen02,Johnson12} and XMM-Newton \citep{Gitti04} as well as SZE maps by MUSTANG \citep{Mason10,Korngut11}, CARMA \citep{Plagge13}, and NIKA \citep{Adam14}. The inferred temperature of the substructure exceeds 20 keV and is appreciably higher than the mean temperature of the cluster, $\sim 13$ keV \citep{Kitayama04,Ota08}; this accounts for the fact that the substructure was more obvious in the SZE map than in the X-ray surface brightness image. 
The disturbed feature was also seen in radio synchrotron observations \citep{Gitti07,Ferrari11} and gravitational lensing maps (e.g., \cite{Miranda08,Bradac08,Koehlinger14}). These previous results indicate that the cluster is undergoing a merger, but its exact nature, such as the geometry and dynamics of the collision, is still unclear \citep{Johnson12,Kreisch16}. The ALMA Band 3 observation of RX J1347.5--1145 is crucial not only for better understanding this particular galaxy cluster but also for testing the capability of ALMA in observing the SZE against a range of independent datasets available for this well-studied system. This paper is organized as follows. Section \ref{sec-obs} describes the observations and calibration. Section \ref{sec-image} presents details of the imaging analysis including point source subtraction and deconvolution. The results are validated against the X-ray data, realistic imaging simulations, and previous high-significance SZE measurements in Section \ref{sec-implication}. Finally, our conclusions are summarized in Section \ref{sec-conc}. Throughout the paper, we adopt a standard set of cosmological density parameters, $\Omega_{\rm M}=0.3$ and $\Omega_{\rm \Lambda}=0.7$. We use the dimensionless Hubble constant $h\equiv H_0/(100 \mbox{km/s/Mpc})$; given controversial results on the value of $h$ (e.g., \cite{Planck15,Riess16}), we do not fix it unless stated otherwise. In this cosmology, the angular size of 1$''$ corresponds to the physical size of 4.04 $h^{-1}$kpc at the source redshift $z=0.451$. The errors are given in 1$\sigma$ and the coordinates are given in J2000.
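As a quick cross-check of the quoted conversion, the following sketch (standard flat-$\Lambda$CDM formulas, not code from the paper) integrates the Friedmann equation numerically for $\Omega_{\rm M}=0.3$, $\Omega_{\rm \Lambda}=0.7$ and reproduces $1'' \approx 4.04\,h^{-1}$ kpc at $z=0.451$.

```python
# Numerical check of the angular-to-physical scale conversion quoted above,
# for a flat universe with Omega_M = 0.3 and Omega_L = 0.7.
import numpy as np
from scipy.integrate import quad

OM, OL = 0.3, 0.7
HUBBLE_DIST = 2997.92458          # c/H0 in h^-1 Mpc

def angular_diameter_distance(z):
    """Angular diameter distance in h^-1 Mpc (flat LCDM)."""
    integrand = lambda zp: 1.0 / np.sqrt(OM * (1 + zp)**3 + OL)
    d_c, _ = quad(integrand, 0.0, z)          # comoving distance / (c/H0)
    return HUBBLE_DIST * d_c / (1 + z)

ARCSEC = np.pi / 180 / 3600                   # 1 arcsec in radians
kpc_per_arcsec = angular_diameter_distance(0.451) * 1000 * ARCSEC
print(f"1 arcsec = {kpc_per_arcsec:.2f} h^-1 kpc at z = 0.451")
```

The result agrees with the 4.04 $h^{-1}$ kpc quoted in the text to better than one percent.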
\label{sec-conc} In this paper, we have presented the first image of the thermal SZE obtained by ALMA. The resulting angular resolution of $5''$ corresponds to $20 h^{-1}$kpc for our target galaxy cluster RX~J1347.5--1145 at $z=0.451$. The present dataset achieves the highest angular and physical spatial resolutions to date for imaging the SZE. The ALMA image has clearly resolved the bright central AGN, the cool core, and the offset SZE peak in this cluster. It is in good agreement with an electron pressure map reconstructed independently from the X-ray data as well as with the previous SZE observations of this cluster by NOBA \citep{Komatsu01,Kitayama04} and MUSTANG \citep{Mason10,Korngut11}. The statistical significance of the measurement has also been improved considerably; the achieved 1$\sigma$ sensitivity of the image is 0.017 mJy/beam or 0.12 mK$_{\rm CMB}$ at $5''$ FWHM. The accuracy of the map is limited primarily by missing flux arising from the lack of short-spacing data in the current configuration of ALMA. We have presented detailed analysis procedures including corrections for the missing flux based on realistic imaging simulations for RX~J1347.5--1145. We have shown that the structures up to the spatial scale of $40''$ are faithfully recovered in the ALMA map. Our results demonstrate that ALMA is a powerful instrument for imaging the SZE in compact galaxy clusters with unprecedented angular resolution and sensitivity. They will also serve as guiding methods for analyzing and interpreting future SZE images by ALMA. Completion of the Total Power Array for continuum observations as well as Band 1 receivers will significantly strengthen the capability of ALMA for imaging the SZE. Further implications of the present results on the physics of galaxy clusters will be explored separately in our forthcoming papers. 
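The two quoted sensitivity figures are mutually consistent, as the following back-of-the-envelope sketch shows. This is an illustration rather than the paper's calibration pipeline, and the effective Band 3 observing frequency of 92 GHz used here is our assumption.

```python
# Cross-check: convert 0.017 mJy/beam at a 5" FWHM Gaussian beam into CMB
# thermodynamic temperature, assuming an effective frequency of 92 GHz.
import numpy as np

H = 6.62607e-34      # Planck constant [J s]
KB = 1.38065e-23     # Boltzmann constant [J/K]
C = 2.99792458e8     # speed of light [m/s]
T_CMB = 2.725        # CMB temperature [K]

def mjy_per_beam_to_mk_cmb(s_mjy, nu, fwhm_arcsec):
    theta = fwhm_arcsec * np.pi / 180 / 3600
    omega = np.pi * theta**2 / (4 * np.log(2))   # Gaussian beam solid angle
    rj_per_k = 2 * KB * nu**2 / C**2 * omega     # W m^-2 Hz^-1 per K_RJ
    x = H * nu / (KB * T_CMB)
    g = x**2 * np.exp(x) / (np.exp(x) - 1)**2    # RJ -> thermodynamic factor
    jy_per_k_cmb = rj_per_k * g / 1e-26          # Jy/beam per K_CMB
    return s_mjy / jy_per_k_cmb                  # mJy over (mJy/mK) -> mK

dT = mjy_per_beam_to_mk_cmb(0.017, 92e9, 5.0)
print(f"0.017 mJy/beam ~ {dT:.2f} mK_CMB")       # close to the quoted 0.12 mK
```

The conversion reproduces the quoted 0.12 mK$_{\rm CMB}$ to within a few percent, lending confidence that the two numbers describe the same noise level.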
\begin{ack} We thank Brian Mason and Charles Romero for providing the MUSTANG map and helpful comments on the manuscript; Akiko Kawamura, Hiroshi Nagai, and Kazuya Saigo for their support on the ALMA data reduction. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2013.1.00246.S. The scientific results of this paper are based in part on data obtained from the Chandra Data Archive: ObsID 506, 507, 3592, 13516, 13999, and 14407. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This work was supported by the Grants-in-Aid for Scientific Research by the Japan Society for the Promotion of Science with grant numbers 24340035 (Y.S.), 25400236 (T.K.), 26400218 (M.T.), 15H02073 (R.K.), 15H03639 (T.A.), 15K17610 (S.U.), and 15K17614 (T.A.). T.K. was supported by the ALMA Japan Research Grant of NAOJ Chile Observatory, NAOJ-ALMA-0150. \end{ack} \bigskip
{\subsection*{\bf{Abstract}} We apply a particular form of inverse scattering theory to turbulent magnetic fluctuations in a plasma. In the present note we develop the theory, formulate the magnetic fluctuation problem in terms of its electrodynamic turbulent response function, and reduce it to the solution of a special form of the famous Gel$'$fand-Levitan-Marchenko equation of quantum mechanical scattering theory. The latter applies to transmission and reflection in an active medium. The theory of turbulent magnetic fluctuations does not refer to such quantities and requires a somewhat different formulation. We reduce the theory to the measurement of the low-frequency electromagnetic fluctuation spectrum, which is not the turbulent spectral energy density. The inverse theory in this form enables one to obtain information about the turbulent response function of the medium, in which the dynamic causes of the electromagnetic fluctuations are implicit. It is therefore of vital interest for low-frequency magnetic turbulence. The theory is developed up to the presentation of the equations in a form applicable to observations of turbulent electromagnetic fluctuations as input. The final integral equation should be solved by standard numerical methods based on iteration. We point out the possibility of treating power-law fluctuation spectra as an example. A formulation of the problem that includes observations of turbulent spectral power densities is not attempted; it leads to severe mathematical problems and requires a reformulation of inverse scattering theory. One particular aspect of the present inverse theory of turbulent fluctuations is that its structure naturally yields spatial information obtained from the temporal information inherent to the observation of time series; the Taylor assumption is not needed here. This is a consequence of Maxwell's equations, which couple the evolution in space and time. 
The inversion procedure takes advantage of a particular mapping from the time to the space domain. Though the theory is developed for homogeneous, stationary, non-flowing media, its extension to include flows, anisotropy, non-stationarity, and the presence of spectral lines, i.e. plasma eigenmodes such as those present in the foreshock or the magnetosheath, is straightforward. } \vspace{0.5cm}
\label{intro} As far as it concerns the fluctuations of the magnetic field, magnetic turbulence \citep{goldstein1995,biskamp2003,zhou2004,brown2015} is a branch of classical electrodynamics, with the electromagnetic field described by Maxwell's equations. Its coupling to the dynamics of charged particles, ions and electrons, is contained in a set of separate dynamical equations. Depending on the spatial and temporal scales of the turbulence, these equations are subject to increasing simplifications. On the shortest scales $\ell< r_{ce}=v_e/\omega_{ce}$, below the electron gyroradius $r_{ce}$, any turbulence is almost purely electric/electrostatic, as long as the plasma is not subject to self-magnetisation via excitation of either Weibel-like modes or nonlinear ion and electron holes, or when spontaneous reconnection in electron-scale current filaments comes into play. The latter is believed to contribute an ultimate dissipation mechanism for collisionless turbulence \citep[cf., e.g.,][]{treumann2015}. The dynamic part of the turbulence is described by electrostatic kinetic equations, and one speaks of plasma turbulence. On longer scales $r_{ce}<\ell< r_{ci}=v_i/\omega_{ci}$, between the electron and ion gyroradii, electrons magnetise and thus contribute to magnetic turbulence. The magnetically active turbulent frequencies in this range lie below the electron cyclotron frequency, $\omega<\omega_{ce}$. The coupling of electrons to the unmagnetised ions, however, connects the magnetic fluctuations with electrostatic ion fluctuations, as is, for instance, the case in the presence of kinetic Alfv\'en waves. In this range of scales electrons carry the magnetic field and also form the narrow turbulent current filaments. The system in this range is highly nonlinear and too complex to account for the plasma dynamics in the generation of the turbulence. On the other hand, measuring the magnetic fluctuations in a plasma is comparatively easy. 
One would thus like to infer the turbulent plasma dynamics from the magnetic fluctuations alone, if possible. This is usually done from observations of the magnetic power spectra of the turbulence and the determination of the spectral index in several ranges of scales, from magnetohydrodynamic scales down into the dissipative range. This procedure mainly provides power-law indices of the magnetic turbulence and distinguishes between different spectral ranges and between inertial and dissipation scales, while no information about the state of the plasma can be obtained. In the following we attempt a different approach by formulating a so-called ``inverse problem" for the particular case of magnetic turbulence. This is possible when recognising that, as noted above, magnetic turbulence is in fact just a branch of classical electrodynamics. It can thus be formulated in terms of purely electrodynamic quantities, with the dynamics included only implicitly. In a first step of such an approach, we demonstrate how the problem of magnetic fluctuations can be reduced to the solution of an inverse problem, whose solution is, of course, nontrivial. We develop the theory up to the formulation of the final integral equation, whose input is the experimentally obtained field fluctuation spectrum. (This is not the power spectral density usually used in turbulence studies and inferred from measurements; instead, it is the full spectrum of electromagnetic fluctuations that is at stake, quite different from the magnetic power spectral densities used in ordinary low-frequency turbulence.) This integral equation will have to be solved for any given observed fluctuation spectrum. This reformulation of magnetic fluctuation theory might provide a new path {in the investigation of turbulence, as it gives access to the dynamics of the plasma which leads to the generation of the turbulent fluctuations. 
In a subsequent step, its relation to the observed turbulent spectral power densities should be investigated.} Here we just develop the inverse magnetic fluctuation theory. A similar approach should, of course, also be possible for genuinely kinetic plasma turbulent fluctuations, including electron scales.
This is the maximum that can be achieved at present in the inverse problem of turbulent magnetic fluctuations in a plasma. As noted earlier, it requires knowledge of the turbulent convolution function $c_\xi(0)$, which enters the last integral and, through it, the kernel $G(\zeta\pm y)$. This function requires measurement of the magnetic \emph{and} electric fluctuations. Usually only the turbulent magnetic fluctuations are available, though, in principle, methods could be developed to measure the electric fluctuation field by injecting dilute ion beams into the plasma and monitoring their return fluxes, which provide direct information about the low-frequency electric fluctuations. Such measurements have occasionally been performed using electrons but are polluted by the enormous sensitivity of electrons to the presence of electric and magnetic fluctuation fields; they also suffer from the difficulty of distinguishing the injected from the ambient electrons. Another, more promising possibility is the injection of low-energy ion beams, measuring their distribution function and calculating from it the fluctuations of the velocity field. In the absence of either of these one cannot proceed further. Magnetic field observations alone are insufficient: they cover only half of the information stored in the electromagnetic field. It is easy to see that without an independent determination of the turbulent convolution (response) function $c_\xi(0)$ one cannot proceed. The role of $c_\xi(0)$ can be expressed by a differential equation for the vector potential, respectively the electric field component, \begin{equation} A'_\xi(\zeta)+A_\xi(\zeta)/c_\xi(0)=0 \end{equation} whose solution at fixed frequency $\xi$, \begin{equation} A_\xi(\zeta)=A_\xi(0)\exp[-\zeta/c_\xi(0)] \end{equation} shows that $c_\xi(0)$ is the typical scale of variation of the electric field, respectively the vector potential, in $\zeta$. 
The observations provide instead the \emph{frequency spectrum} of the magnetic field $B_\xi(0)$, which is the spatial derivative in $x$ of the vector potential at the location $x=\zeta=0$. This derivative contains the unknown function $u_\xi(\zeta)$: \begin{equation} B_\xi(0)=\partial_x A_\xi(0)= u^2_\xi(0)A'_\xi(0) \end{equation} When using $A_\xi (\zeta)= u_\xi(\zeta) f(\zeta)$ this becomes \begin{equation} u^2_\xi(0)A'_\xi(0)= u_\xi(0)f'_\xi(0)-u'_\xi(0)f_\xi(0) \end{equation} The initial or boundary conditions $f_\xi(0)=1$ and (\ref{eq-bound1}) imply that \begin{equation} B_\xi(0)= {\textstyle\frac{1}{2}}{[u_\xi(0)-1]^2}'-u_\xi(0)/c_\xi(0) \end{equation} which still contains the unknown function $u_\xi(0)$. This simply expresses the obvious fact, noted above, that the reduction to magnetic measurements alone, lacking the electric field or otherwise the velocity field, implies the loss of one half of the electromagnetic information needed to solve the inverse scattering problem. This resembles the inverse scattering case where, without knowledge of the reflection and transmission coefficients which couple the incoming and outgoing waves, no solution exists. Hence, in solving the inverse problem of turbulent magnetic fluctuations, \emph{knowledge of either the electric field or velocity fluctuations in addition to the magnetic fluctuations is obligatory}. With the reduction of the inverse problem of turbulent magnetic fluctuations to the Gel$'$fand-Levitan-Marchenko equation, the formal problem of the inversion of magnetic fluctuations in a turbulent plasma has been solved. It has been reduced to the determination of the dissipative convolution function from observations of the fluctuations of the electromagnetic fields at the observation point $x_0=0$. 
In practice, the full solution of the inverse problem, which aims at the determination of the dissipative response function $\epsilon^T_\xi$, requires providing the data in a treatable form, solving the integral equation, and afterwards calculating the response function. These three tasks remain open. Reduction of the inverse problem to the Gel$'$fand-Levitan-Marchenko integral equation is thus an important and necessary step, yet only an intermediate and not the ultimate one. The form of the convolution function is not known a priori. Its spatial dependence is not required, however; necessary is just its temporal spectrum, i.e. its Fourier transform with respect to $\xi$. Though this is not known, from some analogy to the magnetic frequency spectra and the models of magnetic turbulence one may expect that the convolution function has similar formal properties. We already noted that the turbulence is not expected to contain emission or absorption lines corresponding to eigenmodes of the Schr\"odinger equation. This would imply the presence of distinct plasma waves or turbulent energy losses at some particular frequency, as might occur in inhomogeneous plasmas, such as near a shock wave in a restricted region of space. 
Examples are the narrow upstream and downstream regions of shocks in the solar wind, the foreshock and magnetosheath regions, where turbulence prevails while distinct plasma modes are excited by some energy source related to the shock. Moreover, if the turbulence is not fully developed, intermittency might play a role, leading to additional structure in the dissipative response function. These problems are all very interesting and important. However, as noted several times, their inclusion in the inverse problem requires precise measurements of both electromagnetic field components, magnetic and electric. Formally, they introduce discrete eigenvalues leading to poles in the complex $\zeta$ plane which generate residues; these should appear as additional terms in the Gel$'$fand-Levitan-Marchenko equation. They also cause a modification of the data function $g_\xi(0)$ which enters the kernel of the integral equation. \subsection{Power law fluctuation spectra} In the absence of discrete fluctuation modes, we may try an unstructured power-law distribution of the dissipative convolution function of the kind \begin{equation} c_\xi(0)= a_0\, \xi^{-\alpha} \qquad \mathrm{for}\quad \xi_0 < \xi <\xi_d \end{equation} with some dimensional factor $a_0$. The limits of the frequency range are assumed arbitrarily, with $\xi_0$ some low-frequency cut-off of the spectral power-law range and $\xi_d$ some high-frequency cut-off which can possibly be related to the onset of strong dissipation that breaks the power law. Such a power law may be justified by assuming that both the electric and magnetic fluctuation fields obey unstructured power-law spectra in frequency space, $E_\xi(0)\sim \xi^{-\alpha_E}, B_\xi(0)\sim \xi^{-\alpha_B}$. Since $c_\xi(0)=-E_\xi(0)/(\xi^2c\,B_\xi(0))$, the ratio of the two power laws yields another power law with $\alpha=\alpha_E-\alpha_B+2$, with the various constant factors and normalisations combined into the constant $a_0$. 
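The index relation $\alpha=\alpha_E-\alpha_B+2$ stated above can be checked numerically with synthetic spectra; the amplitudes and indices below are of our own choosing, purely for illustration.

```python
# Sanity check of the index relation: for synthetic power laws
# E ~ xi^-aE and B ~ xi^-aB, the magnitude of the convolution function
# |c| ~ E/(xi^2 B) is a power law with index a = aE - aB + 2.
import numpy as np

a_E, a_B = 1.9, 1.6                       # illustrative spectral indices
xi = np.logspace(0, 3, 200)               # frequency grid (arbitrary units)
E = 2.0 * xi**(-a_E)                      # synthetic electric spectrum
B = 5.0 * xi**(-a_B)                      # synthetic magnetic spectrum
c_conv = E / (xi**2 * B)                  # |c_xi(0)| up to constant factors

# Fit the log-log slope and compare with the predicted -(a_E - a_B + 2).
slope = np.polyfit(np.log(xi), np.log(c_conv), 1)[0]
print(f"fitted index {-slope:.3f} vs predicted {a_E - a_B + 2:.3f}")
```

The fitted index agrees with the predicted one to machine precision, as it must for exact power laws; the constants combine into the overall factor, as stated in the text.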
It should be stressed that these power laws are not power laws of the spectral energy densities inferred in turbulence theory; rather, they are simply the frequency spectra of the fluctuations, if they exist in this form.{\footnote{{Reformulation of the convolution function in terms of turbulent spectral energy densities would require a complete reformulation of the inverse problem, which we do not attempt in this work. Such a reformulation suppresses the use of Jost functions, which are the solutions of the Schr\"odinger equation for the mapped fields, not for their spectral energy densities. Presumably this inhibits reference to the Gel$'$fand-Levitan-Marchenko theory.}}} Actually, the Fourier transforms map the electric and magnetic fluctuation fields from time into frequency space. The mapped field spectra necessarily possess some phases $\phi_\xi^{E,B}(0)$. The convolution function is thus itself a function of the phase difference between the electric and magnetic Fourier spectra. In homogeneous turbulence their phases, and thus also their phase difference, can be assumed to be randomly distributed. They can, in principle, be averaged out in this case, just contributing to the factor that multiplies the assumed power law. This assumption is adopted below. {The assumption of a power law for the turbulent convolution function allows us to write \begin{equation} g_\xi(0)={\textstyle\frac{1}{2}}\Big[1-a_0\ (\xi-\xi_0)^{-\alpha}\Big] \qquad \mathrm{with}\quad \Re\,{\alpha} > 0 \end{equation} which, inserted into the inverse transform of $g_\xi$, yields the kernel function \begin{equation}\label{eq-kernel-g} G(\zeta) =\pi\delta(\zeta)-\frac{a_0}{4\pi i}\frac{\partial}{\partial\zeta} \int_{0}^{\xi_d} \mathrm{d}\xi (\xi-\xi_0)^{-\alpha}\mathrm{e}^{i\xi\zeta} \end{equation} Since $\xi=\kappa\mathrm{e}^{\pm i\pi/4}$ is a complex wavenumber, we have $\mathrm{d}\xi=\mathrm{d}\kappa\ \mathrm{e}^{\pm i\pi/4}$. 
The integral becomes a contour integral in the complex $\xi$ plane, limited by $\xi_d=\kappa_d$, a real frequency, respectively wavenumber, corresponding to the power-law range of the observations in frequency. Integration is thus along the real axis between zero and this limit, closed by a large circle over the upper (lower) half of the complex $\xi$-plane up to an angle $\pm\pi/4$ and returning on the ray along this angle to zero, depending on the sign of $\zeta$. One may note that there is a singularity on the real axis at $\xi=\xi_0$, which is of fractional order $\alpha$, giving rise to fractional Riemann branches, as is seen when writing $(\xi-\xi_0)^{-\alpha}=\exp[-\alpha\:\ln|\xi-\xi_0|-\alpha i\theta]$. However, for an expected value $\alpha>1$ the complex integration contour lies completely in the principal branch, making integration around this pole possible without caring about the branch cuts. It gives rise to a residuum $2\pi i\:\exp(i\kappa_0\zeta)$, which would be the value of the full contour integral. What is wanted is the principal value along the real axis across the singularity at the point $\xi_0$. This could be determined once the remaining parts of the contour integral are found. Calculating the remaining circular section at $\xi_d$ and the ray at angle $\pi/4$ back to the origin is, however, difficult and cannot be done in closed analytical form, even when shifting the upper limit $\xi_d \to \infty$; the problem is the integral along the ray. Thus the determination of the integral in Eq. (\ref{eq-kernel-g}), i.e. the principal value of the integral, is not easily done.} {If we, for simplicity, assume that the power-law spectrum extends over several orders of magnitude in $\xi$, we can put $\kappa_0\ll\kappa_d$. Then the limits of integration can be pushed to their extremes $0$ and $\infty$. The value of $\alpha$ is not known; one expects $0<\alpha<3$, possibly a fraction corresponding to some root. 
Turbulent spectra of both the electric and magnetic fields are smooth. Consequently, the power law of the turbulent convolution function is smooth as well, and the integral does not contain any poles other than $\xi_0$. Shifting the origin to $\xi_0$ yields the residual factor $\exp(i\kappa_0\zeta)$, and one can solve the remaining integral by the method of steepest descent. } {We write the integral with respect to $\kappa$ and the integrand as an exponential $\exp \psi(\kappa,\zeta)$ with \begin{equation} \psi_\pm(\kappa,\zeta) = \mp\frac{i\pi\alpha}{4}-\alpha\ln\kappa+i\kappa\zeta\mathrm{e}^{\pm i\pi/4} \end{equation} Setting its first derivative with respect to $\kappa$ to zero yields the fixed point \begin{equation} \bar\kappa_\pm= (\alpha/\zeta)\mathrm{e}^{\mp3i\pi/4} \end{equation} The second derivative taken at the fixed point is \begin{equation} \psi''_\pm(\bar\kappa,\zeta)=\alpha/\bar\kappa_\pm^2=\mp i\zeta^2/\alpha \end{equation} with both $\zeta$ and $\alpha$ real. The exponent can now be expanded around the fixed point up to second order, which yields for the integral \begin{equation} \int\limits_{\xi_0}^{\xi_d}\mathrm{d}\xi\,\xi^{-\alpha}\mathrm{e}^{i\xi\zeta}\ =\ \mathrm{e}^{\psi_\pm(\bar\kappa,\zeta)}\int\limits_{\kappa_0}^{\kappa_d}\mathrm{d}\kappa\,\mathrm{e}^{\frac{1}{2}\psi''_\pm(\bar\kappa,\zeta)(\kappa-\bar\kappa)^2} \end{equation} The formal solution of this Gaussian integral is the difference between two error functions: \begin{eqnarray} \int\limits_{\xi_0}^{\xi_d}\mathrm{d}\xi\,\xi^{-\alpha}\mathrm{e}^{i\xi\zeta}\ &=&\ \sqrt{\frac{\pi/2}{\psi''_\pm(\bar\kappa,\zeta)}}\bigg\{\mathrm{erf}\bigg[\sqrt{{\textstyle\frac{1}{2}}\psi''_\pm(\bar\kappa,\zeta)}(\kappa_d-\bar\kappa_\pm)\bigg]\nonumber\\ &-&\mathrm{erf}\bigg[\sqrt{{\textstyle\frac{1}{2}}\psi''_\pm(\bar\kappa,\zeta)}(\kappa_0-\bar\kappa_\pm)\bigg]\bigg\} \end{eqnarray} The error functions are functions of complex arguments whose convergence properties with respect to the variable $\zeta$ must be 
taken into account when inserting into the kernel function $G(\zeta)$, in particular when the boundaries are allowed to take their extremal values $\kappa_0=0,\kappa_d=\infty$. (One may note that turbulent spectra consisting of several piecewise parts obeying different power laws, which connect smoothly via spectral break points, can be included. In this case the above sum of error functions is multiplied by the number of such ranges, each with a different power $\alpha$, i.e. a different $\bar\kappa_\pm$.) It moreover turns out that the dependence of the kernel on the power $\alpha$ of the power law is very complicated. One therefore expects that the solution of the inverse problem, i.e. the solution of the Gel$'$fand-Levitan-Marchenko equation, cannot be constructed analytically, even in the simple case of power-law spectra. In the limit of an extended spectrum the integral becomes \begin{equation} \int\limits_{0}^{\infty}\mathrm{d}\xi\ \xi^{-\alpha}\mathrm{e}^{i\xi\zeta}\ =\ \sqrt{\frac{\pi/2}{\psi''_\pm(\bar\kappa,\zeta)}}\ =\ \sqrt{\frac{\pi\alpha}{2}}\frac{\mathrm{e}^{\pm i\pi/4}}{\zeta} \end{equation} This has to be multiplied by $\exp\psi_\pm(\bar\kappa,\zeta)$, the factor in front of the integral, which is another complicated function \begin{equation} \mathrm{e}^{\psi_\pm(\bar\kappa,\zeta)}=\bigg(\frac{\zeta}{\alpha}\bigg)^\alpha\exp\bigg[\frac{i\pi}{4}(\alpha\pm\alpha)+\frac{(i\mp 1)\alpha^2}{2\sqrt{2}\: \zeta}\bigg] \end{equation} Combining all these expressions, the data function $G(\zeta)$ is determined for use in the Gel$'$fand-Levitan-Marchenko equation (\ref{eq-glm}). As before, the signs $\pm$ apply to the signs of the argument in $\zeta$, which map to the arguments in $G(\zeta\pm y)$ when accounting for the integration with respect to $y$. This by itself becomes a major analytical and numerical effort. 
We leave the solution of the Gel$'$fand-Levitan-Marchenko equation in the particular case of a power-law turbulent-convolution function for a separate investigation. } {Use of a power law spectrum for the temporal fluctuations is justified by observations, which provide the turbulent electromagnetic fluctuations. The relation to any kind of Kolmogorov spectra \citep{kolmogorov1941a} and its apparent observation in space plasmas \citep{goldstein1995,zhou2004} is, however, not clear. The assumption of a power law convolution function $c_\xi$ has been purely artificial. Observations provide spectral energy densities, while we used the fluctuation spectra. There is clearly a relation between the two (see before); the inverse turbulence problem is, however, not defined in terms of the spectral energy density. } {Observations in the solar wind suggest that power laws are realised in the spectral energy densities only over a number of ranges of very limited extension in frequency. Observed spectra contain break points which connect spectral ranges exhibiting different power laws. They also contain more or less well expressed energy injection as well as dissipation ranges of various shapes, ranging from exponential decay to exponentials of more complicated arguments or even algebraic decays. Hence, assuming a simple power law in the turbulent convolution function and extending the range of integration over the entire frequency interval from zero frequency to infinity somehow violates the observational input. The solution for a power law turbulent convolution function can therefore serve only as an example and has little to do with reality.} {In fact, to apply the inverse procedure to power spectral energy densities one must refer to Poynting's theorem in electrodynamics, i.e. use the heat transfer equation for the electromagnetic field. This equation is nonlinear, containing the product $\vec{j\cdot E}$ and the divergence of the Poynting vector.
It seems improbable that this equation can easily be transformed into a linear Schr\"odinger-like form suitable for application of the Gel$'$fand-Levitan theorem, which is reserved for the linear Schr\"odinger equation only. One may conclude that though the inverse problem of turbulent fluctuations can well be treated by this approach, the inverse problem of spectral power densities resists such an approach for the reasons mentioned above. Moreover, in any case it will be necessary to include measurements of the electric power spectrum, or fluctuations, because only the full electromagnetic field contains the complete electromagnetic information about the dynamics.} Doing justice to the observations necessarily implies a numerical treatment of the inverse problem of magnetic turbulence. It requires, in addition, the measurement of both the electric and magnetic fluctuation time series and the subsequent determination of their spectral equivalents. \subsection{Conclusions} The possibility of an application of the inverse problem of scattering to turbulence and fluctuations has not been obvious. It required the transformation of the electromagnetic turbulent fluctuation problem into the Sturm-Liouville-Schr\"odinger form. This is an interesting turn that might bring a new view on magnetic fluctuation and turbulence theory, possibly opening a path to infer the properties under which magnetic fluctuations in a broad spectral range in plasma develop. Solving the new integral equation for a given data set is actually not an easy task. Though the solution of the Gel$'$fand-Levitan-Marchenko equation should not present insurmountable hurdles, it is not particularly simple. One starts from any reasonable initial guess for the unknown function $F(\zeta,y)$ and iterates.
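As an illustration of such an iteration, the following minimal numerical sketch solves a Marchenko-type equation $K(x,y)+G(x+y)+\int_x^\infty K(x,z)\,G(z+y)\,\mathrm{d}z=0$ by fixed-point iteration on a grid. The toy kernel $G$, the grid, and the tolerances are assumptions for illustration; they are not the actual turbulence data function discussed above.

```python
import numpy as np

def G(s):
    # toy data function: small amplitude and rapidly decaying (assumed),
    # so the iteration is a contraction and the domain can be truncated
    return 0.1 * np.exp(-s**2)

# grid on [0, L]; G is negligible beyond the truncation length
L, n = 6.0, 121
x = np.linspace(0.0, L, n)
dz = x[1] - x[0]
Gxy = G(x[:, None] + x[None, :])   # Gxy[i, j] = G(x_i + x_j)

def apply_integral(K):
    # trapezoid approximation of \int_x^\infty K(x, z) G(z + y) dz
    out = np.zeros_like(K)
    for i in range(n - 1):
        w = np.zeros(n)
        w[i:] = dz
        w[i] = w[-1] = dz / 2.0
        out[i] = (K[i] * w) @ Gxy
    return out

# fixed-point iteration K <- -G - \int K G, starting from K = 0
K = np.zeros((n, n))
for _ in range(200):
    K_new = -Gxy - apply_integral(K)
    if np.max(np.abs(K_new - K)) < 1e-12:
        K = K_new
        break
    K = K_new

# residual of the discretized integral equation at the converged solution
residual = np.max(np.abs(K + Gxy + apply_integral(K)))
```

For the small toy kernel the iteration converges geometrically in about a dozen steps; a real application would instead build $G$ from the measured fluctuation spectra and would need a much finer grid.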
In many cases an approach of this kind leads to fractional chains which, depending on the first intelligent guess, should converge rapidly, so that only a few steps should be necessary. Of course the result and procedure depend on the resolution of the time series of the magnetic and electric fluctuations and their fluctuation spectra. The general method is not restricted to a simple power law in the fluctuations covering just the limited inertial range, as used in the last example above. It depends on the precision of the time resolution of the observations and the related spectral representation of the turbulent magnetic fluctuations. This spectrum may well extend from the lowest injection frequencies up to deep into the dissipation range \citep{alexandrova2009,sahraoui2009,sahraoui2013}. It thus should reproduce the turbulent response function over the accessible range of frequencies, merely excluding the genuinely kinetic regime at the highest frequencies, i.e. the regime where turbulent magnetic fluctuations do not provide any direct information about the turbulent electrostatic contributions generated by the plasma dynamics. These have been excluded by the assumption of purely transverse magnetic fluctuations and turbulence. Otherwise all magnetically active dynamical contributions to the evolution of turbulence will contribute and are formally included in the theory in the definition of the response function. Moreover, since the theory is based on temporal spectra as ingredients of the observational function $G(\zeta)$, no reference to the Taylor hypothesis is required. This also implies that it is not required to map the spectrum into the wavenumber range. The inversion theory includes this transformation anyway since, as noted above, $\xi$ is the Fourier conjugate of the spatial coordinate $\zeta$. The method does, in this way, provide information about the way the full turbulent spectrum is generated.
This is the physically interesting part of the theory. The only restrictions on the validity of the Gel$'$fand-Levitan-Marchenko approach, in the form that makes it available to magnetic turbulent fluctuations, are our simplifying assumptions of one-dimensionality, homogeneity of turbulence, restriction to non-expanding turbulence, and the uncertainty of observations with the resulting incomplete coverage of the frequency domain. Some of these assumptions may prevent application to fast expanding plasma streams like the solar wind, where the observations are performed at one particular spatial point that is approximately fixed in space rather than to the stream, thus violating one of our assumptions. Though the relation between the observations in our one-dimensional approach and the inverse theory is striking, it is obvious from the last expression that even the solution of the Fourier integral, the input to the kernel of the Gel$'$fand-Levitan-Marchenko equation, cannot be provided in a sufficiently simple form that would allow for an analytical solution. This does not prevent the application of the theory; it only suggests that any application requires not only numerical work to establish the observed turbulence power spectrum but also a subsequent numerical treatment of the inverse problem. Whether or not this will be advantageous in investigating turbulence is hard to estimate. The effort in formulating and solving the inverse problem is large. Its outcome is the maximum available information about the turbulent response function at maximum effect of the fluctuating fields on it. This function will subsequently have to be interpreted physically in view of the conditions under which turbulent magnetic fluctuations have evolved. A reformulation of this theory to include the turbulent power spectral densities on which current investigations of magnetic turbulence rely is, however, currently not in sight.
Its formulation would require the use of Poynting's theorem in turbulence and an attempt to transform it into Schr\"odinger form, which probably cannot be done.
We use the cellular automaton model described in L\'opez Fuentes \& Klimchuk (2015, ApJ, 799, 128) to study the evolution of coronal loop plasmas. The model, based on the idea of a critical misalignment angle in tangled magnetic fields, produces nanoflares of varying frequency with respect to the plasma cooling time. We compare the results of the model with active region (AR) observations obtained with the Hinode/XRT and SDO/AIA instruments. The comparison is based on the statistical properties of synthetic and observed loop lightcurves. Our results show that the model reproduces the main observational characteristics of the evolution of the plasma in AR coronal loops. The typical intensity fluctuations have an amplitude of 10 to 15\% both for the model and the observations. The sign of the skewness of the intensity distributions indicates the presence of cooling plasma in the loops. We also study the emission measure (EM) distribution predicted by the model and obtain slopes in log(EM) versus log(T) between 2.7 and 4.3, in agreement with published observational values.
\label{intro} Coronal loops are the basic observable building blocks of the magnetically structured solar atmosphere. Any single theory proposed to explain the problem of coronal heating must be able to reproduce a series of diverse, and sometimes apparently contradictory, observed loop properties (see Klimchuk 2006, 2009; Reale 2010). Among other things, such a theory should predict hot loops (2 to 4 MK) with apparently quasi-static evolutions, as observed in soft X-rays (Rosner et al. 1978), and highly dynamic cooler ($\approx$ 1 MK) loops that are too overdense to be in hydrostatic equilibrium, as observed in some EUV channels (Aschwanden et al. 2001). The multi-thermality of loops inferred from observations (Schmelz et al. 2009, and references therein) suggests that they might actually be structured in sub-resolution filaments or strands. It is possible, then, that some of the observable properties of loops may be due to the collective contribution of thinner unresolved strands that follow more or less independent evolutions. Regarding the heating process and consequent evolution of the plasma in the strands, one thoroughly considered possibility is that they are heated by short-duration, small-scale impulsive events called nanoflares. This possibility corresponds to the scenario proposed during the 1980s by E. Parker (1988). The idea is that photospheric convective motions displace the strand footpoints, translating into magnetic stress between adjacent strands. When the stress due to misalignment between strands reaches a threshold, reconnection rapidly occurs, releasing energy and heating the plasma. One or more of these reconnection events can combine to produce a nanoflare (see Klimchuk 2015). The presence of heating events like these applies naturally to impulsively evolving 1 MK loops. However, if nanoflares can occur at different frequencies, the described mechanism could also explain quasi-statically evolving hot loops.
If the frequency is high enough, the heating rate would be indistinguishable from a steady source (Cargill \& Klimchuk, 1997, 2004). In search of a plasma diagnostic tool to discriminate the role of high versus low frequency heating, recent works have focused on the emission measure (EM) distribution of the plasma at different locations in active regions (AR), in particular in AR cores (see e.g., Bradshaw et al. 2012, Warren et al. 2012, Schmelz \& Pathak 2012, Reep et al. 2013). It has long been recognized that the left or ``colder'' side of the EM distributions follows a power law of the type $EM(T)~\propto~T^{\alpha}$ (Dere \& Mason 1993). The value of the $\alpha$ index can be used to estimate the relative contribution of the low and high frequency components of the heating. In the low frequency scenario the plasma in the strands is able to cool enough so that at any time there is a significant contribution of the colder plasma to the EM distribution. That would correspond to ``flatter'' distributions and smaller $\alpha$ values. On the other hand, if most of the plasma is in the high frequency heating regime the distribution is more peaked around the highest temperatures and $\alpha$ takes higher values. Through a series of hydrodynamic simulations, Bradshaw et al. (2012) found that the dividing line for determining the relative importance of the low frequency contribution is at $\alpha \approx 3$; above this threshold steady heating or high frequency nanoflares dominate. Values of the $\alpha$ index obtained from observations at different wavelengths range from 2 to more than 4 (Tripathi et al. 2011, Schmelz \& Pathak 2012, Warren et al. 2011, Cargill 2014, Cargill et al. 2015). Warren et al. (2011) found that a simple mixture of 90\% high frequency and 10\% low frequency nanoflares could explain the observations at both high and low temperatures for many studied spectral lines.
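In practice the $\alpha$ diagnostic amounts to a straight-line fit in log-log space. The following sketch illustrates this with synthetic data; the true index, temperature range, noise level, and normalization are assumptions chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "cold side" of an EM distribution, EM(T) ∝ T^alpha (alpha assumed)
alpha_true = 3.0
T = np.logspace(5.8, 6.4, 30)                          # K, below the EM peak
EM = 1e27 * (T / 1e6)**alpha_true                      # arbitrary normalization
EM_obs = EM * np.exp(rng.normal(0.0, 0.05, T.size))    # 5% log-normal scatter

# alpha is the least-squares slope of log10(EM) versus log10(T)
alpha_fit, intercept = np.polyfit(np.log10(T), np.log10(EM_obs), 1)

# by the Bradshaw et al. (2012) criterion, alpha >~ 3 points to steady or
# high-frequency heating dominating over the low-frequency component
high_frequency_dominated = alpha_fit >= 3.0
```

With real data the fitting range must be restricted to the power-law segment below the EM peak, and the scatter in the recovered $\alpha$ grows quickly with the noise in the inferred EM values.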
It has also been found that the relative contribution of both frequency components could be related to AR age (Ugarte-Urra \& Warren 2012). In a preliminary theoretical paper that accompanies this one (L\'opez Fuentes \& Klimchuk 2015, henceforth Paper I) we study a cellular automaton model for the evolution of loop plasmas that is based on Parker's (1988) nanoflare scheme described above. In Paper I we analyze scaling laws of the plasma properties as a function of the parameters of the model and the presence of power laws in the nanoflare distribution. We provide a description of the model in Section~\ref{model}. In this paper we use the model to create synthetic loop lightcurves that we compare with real observations from Hinode/XRT and SDO/AIA. We base the comparison on the statistical properties of synthetic and observed lightcurves. We also obtain the power-law indices of the model's differential emission measure (DEM) distribution and compare them with observations from the works cited above. The paper is organized as follows. In Section~\ref{observations} we describe the Hinode/XRT and SDO/AIA data used in the analysis. In Section~\ref{model} we provide a brief description of the model (a complete description can be found in Paper I). In Section~\ref{results} we describe our results: the statistical properties of modeled and observed lightcurves (Sub-section~\ref{lightcurves}) and the DEM distributions obtained from the model (Sub-section~\ref{DEM_slope}). Finally, we discuss and conclude in Section~\ref{conclusions}.
\label{conclusions} In this paper we compare coronal observations from the X-ray Telescope (XRT) on board Hinode and the Atmospheric Imaging Assembly (AIA) on board SDO with synthetic data created using a Cellular Automaton (CA) model studied in an accompanying paper (L\'opez Fuentes \& Klimchuk 2015, Paper I). The model is based on the idea that loops are made of elementary strands that are shuffled and tangled by photospheric motions at their footpoints, producing magnetic stress which is released in the form of reconnection and plasma heating (Parker 1988). The output of the model is a series of nanoflares that heat the different strands in the loop. We compute the response of the plasma to this heating using the hydrodynamic code Enthalpy Based Thermal Evolution of Loops (EBTEL, Klimchuk et al. 2008, Cargill et al. 2012). From the known response of the instruments and the temperature and density obtained with EBTEL we compute synthetic lightcurves that we compare to the observations. We selected lightcurves from a series of loops observed in AR 11147 on 2011 January 18 with Hinode/XRT and with SDO/AIA in the 211~\AA~and 171~\AA~channels. For the comparison we computed the main statistical properties of both observed and synthetic lightcurves. Our results show that, using reasonable solar parameters, the model can reproduce the statistical properties of observed lightcurves. One of the studied statistical properties is the standard deviation, which we use as an indicator of the relative amplitude of the signal fluctuations. We found that the typical fluctuations have an amplitude of 10 to 15\% both for the model and the observations. This is also consistent with observations reported by other authors (see e.g., Warren et al. 2010). From Figures~\ref{xrt_lc} to~\ref{aia171_lc} it can be seen that the typical duration of the fluctuations in both observed and modeled lightcurves is around 1000 s.
Although the temporal span of the observations is too short to perform a reliable frequency analysis, doing so for the synthetic lightcurves confirms that this is the dominant timescale of the fluctuations. This is not a surprising result, since 1000 s is the duration of the CA model time step, which corresponds to the typical turnover time of photospheric granules. It was also found in Paper I that the typical waiting time between consecutive nanoflares is one or two time steps. The fluctuation in the synthetic lightcurves is due to the superposition of the emission evolution from different strands, so there must be some degree of coherence among the strands. The similarity of the durations in the observed and modeled lightcurves shown in Figures~\ref{xrt_lc} to~\ref{aia171_lc} suggests that this could also be the case for real loops. Similar durations have been reported recently by Ugarte-Urra \& Warren (2014). The shape of the intensity distribution can provide information about how the plasma in the loops evolves. In search of asymmetries in the distributions, we computed the skewness and found that it is positive in all the selected observed lightcurves and the corresponding modeled cases. This indicates that the right ``tail'' of the distributions is more spread out than the left one, meaning that there is a larger weight of the low intensity part of the fluctuations. In other words, there are more intensity counts below the mean of the distribution than above it. Terzo et al. (2011) argued that this is related to the widespread presence of cooling plasma in the loops. As a proxy of the distribution asymmetries they used the difference between the mean and the median. They found that the median is systematically smaller than the mean. Here we computed both the mean and the median for our observed and modeled cases and confirmed that result. A very recent work by Tajfirouze et al.
(2015), also based on modeling of observed lightcurves, confirms the widespread presence of hot plasma cooling. Using the EBTEL code we are able to compute the differential emission measure (DEM) distribution of the strands in the model. The relative amount of plasma emitting at different temperatures provides clues about how often the strands are reheated. As shown in Figure~\ref{single_strand}, nanoflares do not occur regularly in our model. The interval between successive events can correspond to high or low frequency relative to the plasma cooling time. When high frequency dominates, the strands do not reach low temperatures very often, and the cold part of the distribution is not very prominent. In that case, the slope of the log-log distribution of the DEM versus temperature is large. In the opposite case, when low frequency dominates, most of the plasma reaches lower temperatures before being reheated, and the distribution is correspondingly flatter. Whether high or low frequency dominates depends on the model parameters. The range of slopes obtained here is consistent with the observational results by other authors (Bradshaw et al. 2012, Schmelz \& Pathak 2012, Warren et al. 2012, among others). In Paper I we analyzed the dependence of the nanoflare frequency on the different parameters of the model. We found that in our simple CA model the frequency of the nanoflares is independent of the parameters $B_v$, $L$, and $\theta_c$ (the distance, $d$, and duration $\delta t$ of the footpoint displacement were fixed, as in this paper). However, the thermal evolution of the strands depends strongly on $L$. The initial cooling after the nanoflare is due primarily to thermal conduction, with a cooling rate that scales as $L^{-2}$. Shorter strands cool faster than longer ones. Therefore, for a given nanoflare frequency, the strands in a short loop will reach lower temperatures faster and more often than in a long loop.
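The $L^{-2}$ scaling of the conductive cooling rate can be made concrete with an order-of-magnitude estimate, $\tau_c \sim 3 n k_B T / (\kappa_0 T^{7/2}/L^2) = 3 n k_B L^2/(\kappa_0 T^{5/2})$. The density, temperature, and lengths below are illustrative assumptions, not values fitted to the data:

```python
# Rough Spitzer-conductivity cooling-time estimate (order-of-magnitude sketch):
#   tau_c ~ 3 n k_B T / (kappa_0 T^{7/2} / L^2) = 3 n k_B L^2 / (kappa_0 T^{5/2})
k_B = 1.38e-16        # Boltzmann constant, erg/K
kappa_0 = 1.0e-6      # Spitzer conductivity coefficient, erg cm^-1 s^-1 K^-7/2

def tau_cond(n, T, L):
    """Conductive cooling time [s] for electron density n [cm^-3],
    temperature T [K], and strand half-length L [cm]."""
    return 3.0 * n * k_B * L**2 / (kappa_0 * T**2.5)

# typical post-nanoflare conditions (assumed for illustration)
n, T = 1e9, 3e6
tau_short = tau_cond(n, T, 2.5e9)   # "short" strand
tau_long = tau_cond(n, T, 5.0e9)    # twice as long: cools four times slower
```

For these numbers the long strand cools conductively in roughly ten minutes, and doubling the length quadruples the cooling time, which is the $\tau \propto L^2$ behavior behind the trend discussed above.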
It is expected, then, that the DEM distributions in shorter loops will have a larger contribution of plasma on the cold side and thus smaller slopes. This is borne out in our models, as shown in Figure~\ref{slope_length}. Note that this does not imply that short/long lengths produce exclusively cooling/hot strands. All the simulated loops produced by the model contain a mixture of both types. This is clearly evident in Figure~\ref{single_strand}. The key point is that there is a larger contribution of the cooling plasma (low frequency) component in shorter loops than in longer loops. The percentages of the two contributions as a function of loop length were thoroughly analyzed in Paper I. Visual inspection of Figures 1 and 2 reveals little evidence for correlation among the three observing channels, both spatially and temporally. This may seem surprising given that low frequency nanoflares should produce such a correlation. Visual impressions can be deceiving, however. Viall and Klimchuk (2012, 2013, 2015) have performed a rigorous time-lag analysis of light curves observed in different AIA channels and find widespread evidence of cooling plasma. The distribution of nanoflare frequencies is yet to be determined, however. An important point is that the cooling plasma is contained in spatially unresolved strands, as in our models, so variations in the light curves are generally very subtle. Given the lack of visual correlation in our data, we have chosen to treat the loops identified in a given channel as unique to that channel. Each loop of course produces some emission in all channels, but a quantitative comparison is hampered by the presence of uncertain background emission. We have adopted a conservative approach and subtracted a constant background that is the minimum intensity along the vertical line in Figure 2. The actual background at any location or any time could be significantly stronger.
Furthermore, the ratio of background intensities in the different channels could be variable. One possibility regarding the apparent lack of correlation between the hot (XRT) and cooler (AIA) emissions is that the hot component has a predominance of high frequency nanoflares, but still there is some low frequency contribution such that only a small fraction of the plasma is cooled to the AIA temperatures (e.g., Fig. 4). If this low frequency component accounts for only 10\% of the hot emission (see e.g., Warren et al. 2011), its presence and any observable correlation with its evolved cool counterpart will be masked by the non-cooling 90\% high frequency contribution. Whether this component is able to explain all the cool emission observed remains to be determined. It is worth noting that the above discussion refers only to a possible interpretation of the apparent lack of correlation between XRT and AIA observations. It does not imply that in its present form our model is able to provide these differentiated high and low frequency components. As we discussed in Paper I, the CA model has a frequency distribution which is not substantially affected by parameter variations. When it is combined with EBTEL, changes of parameters such as the loop length and nanoflare duration affect the cooling times, so the nanoflare frequency measured with respect to the cooling time is parameter dependent. However, these changes affect all the nanoflares monolithically and are not able to produce different proportions of high and low frequency events. We hope that future, more sophisticated versions of the model can account for multi-component nanoflare frequency distributions. The CA-EBTEL model combination developed and studied in Paper I and applied here is a useful tool for testing the nanoflare scenario in relation to the evolution of observed loops.
One of the main objectives of the present version of the model is to keep it simple so that its implications can be studied carefully. In future work, we plan to develop modified versions to account for more detailed strand interaction rules and for the evolution of loops in different parts of ARs (i.e., the core or the periphery).
In this paper we study the consequences of relaxing the hypothesis of the pressureless nature of the dark matter component when determining constraints on dark energy. To this aim we consider simple generalized dark matter models with constant equation of state parameter. We find that present-day low-redshift probes (type-Ia supernovae and baryonic acoustic oscillations) lead to a complete degeneracy between the dark energy and the dark matter sectors. However, adding the cosmic microwave background (CMB) high-redshift probe restores constraints similar to those on the standard $\Lambda$CDM model. We then examine the anticipated constraints from the galaxy clustering probe of the future Euclid survey on the same class of models, using a Fisher forecast estimation. We show that the Euclid survey allows us to break the degeneracy between the dark sectors, although the constraints on dark energy are much weaker than with standard dark matter. The use of CMB in combination allows us to restore the high precision on the dark energy sector constraints.
The $\Lambda$CDM framework offers a simple description of the properties of our Universe with a very small number of free parameters. It reproduces remarkably well a wealth of high-quality observations, which allow us to determine with high precision the few parameters describing the model (typically below 5\% in most of the parameters~\cite{Planck2015}). Therefore, more than fifteen years after the establishment of the accelerated expansion of the Universe~\cite{Riess,Perlmutter}, the $\Lambda$CDM cosmological model remains the current standard model in cosmology. However, the dark contents of the Universe remain largely unidentified: no direct detection of a dark matter particle has been achieved, and the theoretical basis of the observed cosmological constant is not clearly established, especially with respect to the issue of gravitational effects of quantum vacuum energy (the cosmological constant problem; see \cite{JMartin} for an extended review). In this context, a large variety of explanations have been proposed beyond a simple Einstein cosmological constant: scalar field domination known as quintessence~\cite{quintessence}, generalized gravity theories beyond general relativity~\cite{MG} or even inhomogeneous cosmological models~\cite{inhomogeneous}. An extensive review on constraints on cosmological models with the Euclid satellite can be found in~\cite{Amendola2013}. In addition, it has already been noticed that the pressureless (cold) nature of dark matter itself is not firmly established~\cite{GDM}. Finally, even the separation of the dark sector in physically independent components such as a dark matter component and a dark energy may not be relevant with cosmological observations alone~\cite{kunz}. In this paper, we examine the consequences of considering non-pressureless dark matter when constraining the dark energy sector, with present-day observations and in the context of the future Euclid survey mission.
We focus on the simplest models, both for the dark matter and the dark energy sectors. Namely, we assume a constant equation of state for each sector: $P=w_{DM}\rho$ for the dark matter sector ($w_{DM}\neq0$ implies that the dark matter component has some pressure), and $P=w_{DE}\rho$ for the dark energy sector. Section~\ref{section:2} summarizes the constraints obtained on the previous two parameters using present-day data, while Sec.~\ref{section:3} shows the improvements that are expected with the Euclid survey.
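At the background level, a component with constant equation of state $P=w\rho$ redshifts as $\rho \propto (1+z)^{3(1+w)}$, so the expansion rate of this class of models can be sketched as follows. Flat geometry is assumed, radiation is neglected, and baryons are lumped together with the dark matter component, all simplifying assumptions made only for illustration:

```python
def E2(z, Omega_m=0.3, w_dm=0.0, w_de=-1.0):
    """Squared dimensionless Hubble rate H^2/H0^2 for a flat universe with
    generalized dark matter (P = w_dm * rho) and dark energy (P = w_de * rho).
    w_dm = 0 and w_de = -1 recover flat LambdaCDM."""
    a3 = (1.0 + z)**3
    return Omega_m * a3**(1.0 + w_dm) + (1.0 - Omega_m) * a3**(1.0 + w_de)

# a slightly "warm" dark matter component (w_dm > 0) raises E(z) at high z,
# which a shift in w_de can partly compensate at low z; this is the kind of
# dark-sector degeneracy probed by the SNIa+BAO data
```

Distance-based probes constrain integrals of $1/\sqrt{E^2(z)}$, which is why low-redshift data alone cannot separate the two equation of state parameters.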
We have investigated consequences on cosmological constraints relaxing the pressureless nature of dark matter ($w_{DM}\neq0$). We restricted ourselves to the simple case of constant equation of state parameter for both dark sectors. Even if not fully theoretically motivated, these simple models allow us to ascertain the maximum values that the equation of state parameters are allowed to take~\cite{Kunz2016}. We have found that cosmological constraints from present-day SNIa and BAO data are strongly degraded, revealing a complete degeneracy between the equations of state of matter and dark energy. The constraints are essentially restored by the inclusion of CMB data thanks to its leverage. We have then studied the anticipated accuracy from the Euclid redshift galaxy survey. We have found that Euclid is expected to break the above degeneracy between dark matter and dark energy, but the high accuracy on the dark energy equation of state parameter is lost. Combining with the CMB allows us to restore constraints at a similar level to the $w_{DM}=0$ forecast in the specific model we investigated. We expect even better performance from the full exploitation of the future Euclid survey data, but the remaining correlation between dark matter and dark energy equation of state parameter deserves further investigation. \begin{table*}[t] \caption{Cosmological parameter constraints for the different models and the different probes considered (Euclid GC stands for the galaxy clustering probe of the Euclid survey). The errors are given at the 1$\sigma$ confidence level on one parameter ($\Delta \chi^2=1$). The $\Lambda$CDM model is included for comparison. The stars in some reduced baryon densities stand for fixed values. 
The dash in the $\epsilon w$CDM model using SNIa+BAO data stands for the extreme degeneracies which do not allow us to obtain significant constraints on the cosmological parameters.}\label{table1} \begin{center} \begin{tabular}{cc|c|c|c|c|} \cline{3-6} & & SNIa+BAO & Euclid GC & SNIa+BAO+CMB & Euclid GC + CMB\\ \cline{1-6} \multicolumn{1}{|c}{\multirow{3}{*}{$\Lambda$CDM}} & \multicolumn{1}{|c|}{$\Omega_m$} & $0.288^{+0.032}_{-0.031}$ & $0.2984^{+0.0015}_{-0.0015}$ & $0.2984^{+0.0096}_{-0.0092}$ & $0.2984^{+0.0015}_{-0.0015}$\\ \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{$H_0$} & $67.6^{+2.7}_{-2.4}$ & $ 68.80^{+0.10}_{-0.10}$ & $68.80^{+0.75}_{-0.74}$ & $68.80^{+0.10}_{-0.10}$\\ \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{$\omega_b$} & $0.02262^*$ & $0.02257^*$ & $0.02257^{+0.00024}_{-0.00024}$ & $0.022574^{+0.000098}_{-0.000098}$\\ \hline \multicolumn{1}{|c}{\multirow{4}{*}{$w$CDM}} & \multicolumn{1}{|c|}{$\Omega_m$} & $\leq 0.28$ & $0.299^{+0.022}_{-0.022}$ & $0.299^{+0.012}_{-0.011}$ & $0.2990^{+0.0021}_{-0.0021}$\\ \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{$w$} & $-0.72^{+0.18}_{-0.25}$ & $-0.995^{+0.026}_{-0.026}$ & $-0.995^{+0.052}_{-0.054}$ & $-0.994^{+0.022}_{-0.022}$\\ \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{$H_0$} & $53.0^{+13.3}_{-5.5}$ & $ 68.70^{+0.45}_{-0.45}$ & $68.7^{+1.3}_{-1.3}$ & $68.68^{+0.39}_{-0.40}$\\ \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{$\omega_b$} & $0.02262^*$ & $0.02259^*$ & $0.02259^{+0.00026}_{-0.00026}$ & $0.022581^{+0.000098}_{-0.000098}$\\ \hline \multicolumn{1}{|c}{\multirow{4}{*}{$\epsilon$CDM}} & \multicolumn{1}{|c|}{$\Omega_m$} & $\geq 0.31$ & $0.301^{+0.010}_{-0.010}$ & $0.301^{+0.014}_{-0.013}$ & $0.3001^{+0.0030}_{-0.0030}$\\ \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{$\epsilon$} & $-0.49^{+0.44}_{-0.20}$ & $-0.0003^{+0.0092}_{-0.0092}$ & $-0.0003^{+0.0011}_{-0.0011}$ & $-0.00024^{+0.00065}_{-0.00066}$\\ \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{$H_0$} & $50.00^{+3.83}_{-0.90}$ & $ 
68.60^{+0.27}_{-0.27}$ & $68.6^{+1.2}_{-1.2}$ & $68.62^{+0.12}_{-0.12}$\\ \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{$\omega_b$} & $0.02262^*$ & $0.02262^*$ & $0.02262^{+0.00029}_{-0.00029}$ & $0.02262^{+0.00029}_{-0.00029}$\\ \hline \multicolumn{1}{|c}{\multirow{5}{*}{$\epsilon w$CDM}} & \multicolumn{1}{|c|}{$\Omega_m$} & & $0.301^{+0.041}_{-0.041}$ & $0.301^{+0.014}_{-0.013}$ & $0.3011^{+0.0038}_{-0.0037}$\\ \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{$w$} & & $-1.01^{+0.13}_{-0.13}$ & $-1.010^{+0.075}_{-0.077}$ & $-1.010^{+0.023}_{-0.023}$\\ \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{$\epsilon$} & $-$ & $0.000^{+0.046}_{-0.046}$ & $-0.0004^{+0.0016}_{-0.0016}$ & $-0.00045^{+0.00065}_{-0.00066}$\\ \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{$H_0$} & & $ 68.6^{+1.0}_{-1.0}$ & $68.6^{+1.3}_{-1.3}$ & $68.60^{+0.44}_{-0.43}$\\ \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{$\omega_b$} & & $0.02262^*$ & $0.02262^{+0.00029}_{-0.00029}$ & $0.02262^{+0.00029}_{-0.00029}$\\ \hline \end{tabular} \end{center} \end{table*} \vspace{10pt}
arXiv:1607.08016 (2016-07)
1607.03903_arXiv.txt
M82~X-1 is one of the brightest ultraluminous X-ray sources (ULXs) known, which, under the assumption of Eddington-limited accretion among other considerations, makes it one of the best intermediate-mass black hole (IMBH) candidates. However, the ULX may still be explained by super-Eddington accretion onto a stellar-remnant black hole. We present simultaneous \nustar, \chandra\ and \swiftxrt\ observations taken during the peak of a flaring episode, with the aim of modeling the emission of M82~X-1 and yielding insights into its nature. We find that thin-accretion-disk models all require accretion rates at or above the Eddington limit in order to reproduce the spectral shape, given a range of black hole masses and spins. Since at these high Eddington ratios the thin-disk model breaks down due to radial advection in the disk, we discard the results of the thin-disk models as unphysical. We find that the temperature profile as a function of disk radius ($T(r)\propto r^{-p}$) is significantly flatter ($p=0.55^{+ 0.07}_{- 0.04}$) than expected for a standard thin disk ($p=0.75$). A flatter profile is instead characteristic of a slim disk, which is highly suggestive of super-Eddington accretion. Furthermore, radiation hydrodynamical simulations of super-Eddington accretion have shown that the predicted spectra of such systems are very similar to what we observe for M82~X-1. We therefore conclude that M82~X-1 is a super-Eddington accretor. Our mass estimates inferred from the inner disk radius imply a stellar-remnant black hole (\mbh=$26^{+9}_{-6}$~\msol) when assuming zero spin, or an IMBH (\mbh=$125^{+45}_{-30}$~\msol) when assuming maximal spin.
The ultraluminous X-ray source M82~X-1 is one of the best candidates for an intermediate-mass black hole ($100<M_{\rm BH}<10000$~\msol) based on several indirect factors. These include the source's high luminosity, which can reach $\sim10^{41}$~\ergs\ \citep[e.g.][]{ptak99b, rephaeli02, kaaret06}, far greater than the Eddington limit of a stellar-remnant black hole of mass $\sim$10~\msol\ typical of X-ray binaries in our own Galaxy ($\sim10^{39}$~\ergs); the detection of low-frequency quasi-periodic oscillations (QPOs) in the power spectrum \citep[54 mHz,][]{strohmayer03, dewangan06, mucciarelli06}, indicative of a compact, unbeamed source; and twin-peaked QPOs at 3.3 and 5.1 Hz, which lead to a mass estimate using scaling laws between QPO frequencies and mass \citep{pasham14}. The mass estimates for X-1 vary considerably, however, and have large uncertainties, so its status as an IMBH is not yet firmly established. The most recent estimate comes from the twin-peak QPOs, which give a mass of 428$\pm$105~\msol\ \citep{pasham14}. On the other hand, modeling of the accretion disk emission by \cite{okajima06} instead found that the source can be explained by a $\sim30$~\msol\ stellar-remnant black hole radiating at several times its Eddington limit. At moderate Eddington ratios (\lamedd$\equiv L/L_{\rm Edd}\ll1$), accretion onto a black hole can be described by the standard ``thin'' accretion disk model \citep[][SS73]{shakura73}. In the standard disk model, the accretion disk is geometrically thin and optically thick, viscous heating in the disk is balanced by radiative cooling, and the local temperature of the disk, $T$, decreases with radius $r$ as $T(r)\propto r^{-0.75}$. Under the assumption that the disk extends down to the innermost stable circular orbit \citep[ISCO, e.g.][]{steiner10}, spectral modeling yields the temperature of the disk at the ISCO, which in turn yields the inner radius.
The inner radius is directly proportional to the mass of the black hole, albeit with a large degeneracy with the black hole's spin, and can therefore be used for mass estimates. However, as the mass accretion rate increases, advective cooling dominates over radiative cooling and the thin-disk model breaks down. The scale height of the disk increases, and the resulting regime is described by the ``slim'' disk model \citep{abramowicz88}. For a slim disk, the local temperature of the disk has a flatter temperature profile as a function of radius, with $T(r)\propto r^{-0.5}$ \citep{watarai00}. Slim disks have been proposed as mechanisms to explain ULXs as super-Eddington stellar-remnant black hole accretors rather than IMBHs \citep[e.g.][]{kato98,poutanen07}. In addition to the modified disk spectrum, the emission from super-Eddington accretion is expected to produce winds/outflows \citep{king03}, which may also modify the emission spectrum \citep[e.g. via Compton scattering,][]{kawashima09}. Indeed, high-velocity, ionized outflows have recently been detected in the high-resolution X-ray grating spectra of NGC~1313~X-1 and NGC~5408~X-1 \citep{pinto16} and confirmed in a follow-up study with CCD resolution data for NGC~1313~X-1 \citep{walton16}. Therefore, spectral modeling of the emission from ULXs and testing for a departure from the thin-disk model can yield important information regarding their nature. For M82~X-1, however, modeling of the disk emission has yielded conflicting results. \cite{feng10} observed the source with \xmm\ and \chandra\ over the course of a flaring episode and fitted the spectra with the standard thin-disk model. They found that the luminosity of the disk, $L$, scaled with inner temperature as $L\propto T^4$, which is expected from a thin accretion disk with a constant inner radius. From this they inferred a black hole mass in the range $300-810$~\msol, assuming that the black hole is rapidly spinning in order to avoid extreme violations of the Eddington limit.
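The proportionality between inner radius and mass can be made concrete. A hedged sketch follows, assuming only that the disk truncates at the ISCO, for which $R_{\rm ISCO}=6GM/c^2$ at zero spin and $GM/c^2$ for a maximally spinning prograde Kerr black hole; the 230~km radius used in the example is illustrative, not a value fitted in the paper:

```python
# Hedged sketch: black-hole mass implied by a measured inner disk radius,
# assuming the disk truncates at the ISCO. For zero spin R_isco = 6 GM/c^2;
# for a maximally spinning prograde Kerr hole R_isco = GM/c^2, so the same
# fitted radius implies a 6x larger mass at maximal spin.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def mass_from_inner_radius(r_in_km, spin="zero"):
    """Return the black-hole mass (solar masses) implied by an ISCO radius."""
    r_m = r_in_km * 1e3
    factor = 6.0 if spin == "zero" else 1.0  # anything but "zero" -> maximal spin
    return r_m * C ** 2 / (factor * G) / M_SUN

# An illustrative (not fitted) inner radius of 230 km gives roughly the
# zero-spin mass quoted in the abstract (~26 Msun):
m_zero = mass_from_inner_radius(230.0, "zero")
m_max = mass_from_inner_radius(230.0, "max")
print(m_zero, m_max)
```

The factor-of-six spread between the spin extremes is the origin of the stellar-remnant versus IMBH ambiguity in the mass estimates quoted in the abstract.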
However, using a different \xmm\ dataset, \cite{okajima06} modeled the emission of the accretion disk from M82~X-1, instead finding that the temperature profile of the disk was too flat ($T(r)\propto r^{-0.61}$) to be consistent with the standard thin accretion disk, and concluded that it was in the slim-disk condition. Applying a theoretical slim-disk model, they estimated the mass of the black hole to be \mbh$\approx19-32$~\msol. Considering the broader band data afforded by \suzaku/XIS and HXD-PIN allowed \cite{miyawaki09} (M09) to better distinguish between thin and slim-disk models. While M09 also consider a slim-disk conclusion based on a high inferred Eddington ratio, they instead prefer the power-law state interpretation, finding the spectrum to be too hard to be explained by emission from an optically thick accretion disk. These conflicting results may stem from the fact that spectral studies of X-1 are complicated by the presence of another ultraluminous X-ray source only 5\arcsec\ from X-1, which was recently identified as an ultraluminous X-ray pulsar \citep{bachetti14}. Since this source can reach luminosities of $10^{40}$~\ergs\ \citep{feng07, kong07,brightman16}, and is only resolvable from X-1 with \chandra, its contribution to the X-ray spectrum of M82 must be taken into account when modeling the spectrum of X-1 with other X-ray instruments. Furthermore, X-1 and X-2 are embedded in bright diffuse emission \citep{ranalli08}, which further complicates analysis. X-1 is also bright enough to cause pile-up effects on the \chandra\ detectors, which can severely distort the spectrum. In this paper we report on simultaneous observations of M82 with \nustar, \chandra\ and \swiftxrt\ during an episode of flaring activity from X-1. We aim to improve upon previous works with the combination of \chandra, to spatially resolve X-1 from X-2 and the diffuse emission below 8 keV, and \nustar, to gain sensitive broadband spectral coverage, especially above 10 keV.
Our goal is to determine if the emission from the disk is indeed consistent with a standard thin accretion disk, which would support the IMBH scenario, or if it shows a significant departure from this model that would indicate a super-Eddington accretor of lower mass. In section \ref{sec_obs} we describe our observations, including details of the \swiftxrt\ monitoring that showed the increased flux from M82 and triggered our \nustar\ and \chandra\ Director's Discretionary Time (DDT) requests, together with details of the data reduction, while in section \ref{sec_spec} we describe our spectral analysis, where we test various emission models for X-1. In section \ref{sec_x1} we describe the results from the disk models and the mass estimation of X-1. In section \ref{sec_comp} we discuss our results with respect to previous analyses, and we finish with a discussion of alternative interpretations of the high-energy spectrum in section \ref{sec_alt}. We conclude and summarize in section \ref{sec_conc}. A distance of 3.3 Mpc to M82 is assumed throughout \citep{foley14}.
\label{sec_conc} In this paper we have presented analysis of simultaneous \nustar, \chandra\ and \swiftxrt\ observations of the ultraluminous X-ray source M82~X-1 during a period of flaring activity. The \chandra\ data have allowed us to spatially resolve the source from the other bright sources of X-rays in the galaxy, specifically the nearby ultraluminous pulsar, X-2, and the bright diffuse emission. Combined with the \nustar\ and \swiftxrt\ data, this provides a sensitive measurement of the 0.5$-$30~keV spectrum of the source. We have fitted standard thin accretion disk models for sub-Eddington accretion to the spectrum, finding that they require super-Eddington accretion rates in order to reproduce the observed spectrum. Since the thin accretion disk models do not hold at high Eddington ratios, we discard the thin-disk models as unphysical. We directly test for a departure from the thin-disk model using a disk model that allows for a variable temperature profile as a function of disk radius ({\tt diskpbb}), finding a temperature profile $T(r)\propto r^{-0.55}$, which is significantly flatter than expected for a thin disk and is instead characteristic of a slim disk, as expected at high Eddington ratios. At high accretion rates outflows and geometric collimation are also expected to influence the observed emission, which our simple model does not account for; however, radiation hydrodynamics simulations of super-Eddington accretion have shown that the predicted spectra are very similar to what we observe for M82~X-1. We therefore conclude that the ULX is a super-Eddington accretor. Our mass estimates inferred from the inner disk radius imply a stellar-remnant black hole (\mbh=$26^{+9}_{-6}$~\msol) when assuming zero spin, or an IMBH (\mbh=$125^{+45}_{-30}$~\msol) when assuming maximal spin.
arXiv:1607.03903 (2016-07)
1607.03418_arXiv.txt
In this study, we probe cosmic homogeneity with the BOSS CMASS galaxy sample in the redshift region $0.43 < z < 0.7$. We use the normalised counts-in-spheres estimator $\mathcal{N}(<r)$ and the fractal correlation dimension $\mathcal{D}_{2}(r)$ to assess the homogeneity scale of the universe. We verify that the universe becomes homogeneous on scales greater than $\mathcal{R}_{H} \simeq 64.3\pm1.6\ h^{-1}Mpc$, consolidating the Cosmological Principle with a consistency test of the $\Lambda$CDM model at the percent level. Finally, we explore the evolution of the homogeneity scale with redshift.
\paragraph{}The standard model of cosmology, known as the $\Lambda$CDM model, is based on solutions of the equations of General Relativity for isotropic and homogeneous universes, where the matter is mainly composed of Cold Dark Matter (CDM) and $\Lambda$ corresponds to a cosmological constant. This model shows excellent agreement with current data, be it from Type Ia supernovae, temperature and polarisation anisotropies in the Cosmic Microwave Background, or Large Scale Structure. The main assumption of this model is the Cosmological Principle, which states that the universe is homogeneous and isotropic on large scales~\cite{CP}. Isotropy is well tested through various probes at different redshifts, such as the Cosmic Microwave Background temperature anisotropies at $z\approx1100$, corresponding to density fluctuations of the order of $10^{-5}$~\cite{CMB-Cobe}. In later cosmic epochs, the hypothesis of isotropy is strongly supported by the distribution of sources in X-ray~\cite{X-ray-ref1} and radio~\cite{Radio} surveys, while large spectroscopic galaxy surveys show no evidence for anisotropies in volumes of a few $\mathrm{Gpc}^3$~\cite{boss2011}. As a result, it is strongly motivated to probe the homogeneity of our universe. In this study, we use a fiducial flat $\Lambda$CDM cosmological model with the parameters estimated by Planck~\cite{Params-Planck}: \be\label{fid-cosmo} p_{cosmo}=(\omega_{cdm},\omega_{b},h,n_{s},\ln\left[10^{10}A_{s}\right]) = (0.1198,0.02225,0.6727,0.9645,3.094) \ee
\paragraph{}In this study, we have measured the homogeneity scale of our universe with the DR12 BOSS CMASS galaxy sample. We measured the \textit{fractal correlation dimension}, $\mathcal{D}_{2}(r)$, showing its scale dependence. $\mathcal{D}_{2}(r)$ asymptotically reaches the homogeneous value, behaving as a homogeneous distribution on scales greater than $\mathcal{R}_{H} = 64.3\pm1.6h^{-1}Mpc$ at $z=0.538-0.592$, while on smaller scales the distribution behaves as a fractal. Moreover, we have shown the consistency of our homogeneity scale measurement for different cuts of our data in the North and South galactic caps, and we have improved on the precision of previous studies at the $3\%$ level. Additionally, we present the redshift evolution of cosmic homogeneity and its accordance with the $\Lambda$CDM prediction at the percent level. Finally, since we assume the $\Lambda$CDM model to infer distances and to correct for redshift-space distortions, we can only conclude with a consistency-test validation of the Cosmological Principle.
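The counts-in-spheres logic behind $\mathcal{D}_{2}(r) = \mathrm{d}\ln\mathcal{N}(<r)/\mathrm{d}\ln r$ can be illustrated with a toy example; this is a minimal sketch on a mock homogeneous (Poisson) sample, not the paper's survey pipeline, and for such a sample $\mathcal{D}_{2}$ should approach 3:

```python
# Minimal sketch: fractal correlation dimension D2(r) = d ln N(<r) / d ln r
# from counts-in-spheres, applied to a homogeneous Poisson point sample.
import numpy as np

rng = np.random.default_rng(42)
points = rng.uniform(0.0, 100.0, size=(20000, 3))  # mock box, arbitrary units

# Use only interior centres so no counting sphere leaks outside the box.
interior = points[np.all((points > 30.0) & (points < 70.0), axis=1)]

def mean_count(radius):
    """Mean number of neighbours within `radius`, excluding the centre itself."""
    total = 0
    for c in interior:
        dist2 = np.sum((points - c) ** 2, axis=1)
        total += np.count_nonzero(dist2 <= radius ** 2) - 1
    return total / len(interior)

radii = np.array([8.0, 12.0, 17.0, 25.0])
counts = np.array([mean_count(r) for r in radii])

# Logarithmic slope of N(<r); np.gradient handles the uneven ln-r spacing.
d2 = np.gradient(np.log(counts), np.log(radii))
print(d2)  # each entry should lie close to 3
```

In the paper's application the counts are normalised against a random catalogue with the survey geometry, and the homogeneity scale is defined where $\mathcal{D}_{2}$ comes within a fixed tolerance of 3; the sketch above only shows the bare estimator.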
arXiv:1607.03418 (2016-07)
1607.04712_arXiv.txt
The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the instability of frequency standards. Over the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics of geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. Besides, some physically connected scalar time series naturally form series of multi-dimensional vectors. For example, the three station coordinate time series $X$, $Y$, and $Z$ can be combined to analyze 3D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multi-dimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multi-dimensional AVAR (MAVAR), and weighted multi-dimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astro-geodetic time series.
Noise assessment in time series of physical measurements is an important part of characterizing their statistical properties and overall quality. Among the most effective approaches to analyzing measurement noise (scatter) is the Allan variance (AVAR), which was originally introduced to estimate the instability of frequency standards \cite{Allan1966}. AVAR has since proved to be a powerful statistical tool for time series analysis, particularly for the analysis of geodetic and astronomical observations. AVAR has been used for quality assessment and improvement of the celestial reference frame (CRF) \cite{Feissel2000a,Gontier2001,Feissel2003a,Sokolova2007,Malkin2008j,LeBail2010a,LeBail2014a,Malkin2013b,Malkin2015b}, the time series analysis of station positions and baseline lengths \cite{Malkin2001n,Roberts2002,LeBail2006,Feissel2007,LeBail2007,Gorshkov2012b,Malkin2013b,Khelifa2014}, and studies of the Earth rotation and geodynamics \cite{Feissel1980,Gambis2002,Feissel2006a,LeBail2012,Malkin2013b,Bizouard2014a}. AVAR estimates of noise characteristics have important advantages over classical variance estimates such as the standard deviation (STD) and the weighted root-mean-square (WRMS) residual. The latter cannot distinguish between different significant types of noise, which is important in several astro-geodetic tasks. Another advantage of AVAR is that it is practically independent of long-term systematic components in the investigated time series. AVAR can also be used to investigate the spectral characteristics of the signal \cite{Allan1981,Allan1987}, which is actively used for the analysis of astrometric and geodetic data \cite{Feissel2000a,Feissel2003a,Feissel2006a,Feissel2007}. However, the application of the original AVAR to the time series analysis of astro-geodetic measurements may not yield satisfactory results. Unlike clock comparison, geodetic and astrometric measurements mostly consist of data points with unequal uncertainties.
This requires proper weighting of the measurements during the data analysis. Moreover, one often deals with multi-dimensional quantities in geodesy and astronomy. For example, the station coordinates $X$, $Y$, and $Z$ form a 3D vector of the geocentric station position (although this example is more complicated, because the vertical and horizontal station displacements caused by geophysical processes may have different statistical characteristics, including AVAR estimates; see \cite{Malkin2001n,Malkin2013c} and references therein). The coordinates of a celestial object, right ascension and declination, also form a 2D position vector. To analyze such data types, AVAR modifications were proposed in \cite{Malkin2008j}: weighted AVAR (WAVAR), multi-dimensional AVAR (MAVAR), and weighted multi-dimensional AVAR (WMAVAR). These modifications should be distinguished from the classical modified AVAR introduced in \cite{Allan1981}. The rest of the paper is organized as follows. Section~\ref{sect:overview} introduces AVAR and its modifications, and gives several practical illustrations of their basic features. In Section~\ref{sect:results}, a brief overview is provided of the works that employ AVAR in geodesy and astrometry, and basic results obtained with the AVAR technique are presented. Additional details and discussion on the use of AVAR in space geodesy and astrometry can be found in \cite{LeBail2004t,Malkin2011c,Malkin2013b}.
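For an evenly spaced series $x_i$, the classical Allan variance is $\sigma^2_A = \frac{1}{2(N-1)}\sum_{i=1}^{N-1}(x_{i+1}-x_i)^2$. The sketch below implements this together with a weighted variant; the specific weighting form $p_i = 1/(s_i^2 + s_{i+1}^2)$ is our assumption, in the spirit of the WAVAR modification discussed in the text rather than its exact published definition:

```python
import numpy as np

def avar(x):
    """Classical (lag-1) Allan variance of an evenly spaced 1-D series."""
    d = np.diff(np.asarray(x, dtype=float))
    return 0.5 * np.mean(d * d)

def wavar(x, s):
    """Weighted Allan variance (sketch). Each first difference is weighted
    by p_i = 1/(s_i^2 + s_{i+1}^2), where s_i are the measurement
    uncertainties; this weighting form is an assumption."""
    x = np.asarray(x, dtype=float)
    s = np.asarray(s, dtype=float)
    d = np.diff(x)
    p = 1.0 / (s[:-1] ** 2 + s[1:] ** 2)
    return 0.5 * np.sum(p * d * d) / np.sum(p)

# For white noise of unit variance, AVAR estimates that variance (~1.0),
# since E[(x_{i+1} - x_i)^2] = 2 sigma^2.
rng = np.random.default_rng(0)
noise = rng.standard_normal(100_000)
print(avar(noise))
# With equal uncertainties, the weighted variant reduces to the classical one:
print(wavar(noise, np.ones(noise.size)))
```

The factor $1/2$ is what makes AVAR coincide with the ordinary variance for white noise, while the use of first differences is what suppresses the long-term systematic components mentioned above.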
AVAR is an effective statistical tool for analyzing time series of observational data in astronomy and geodesy, as well as other time series. Important independent characteristics of the noise component in the studied signal can be obtained through this tool. The main application of AVAR to time series analysis is in determining the signal scatter level and in spectral analysis, with the primary aim of identifying the dominant noise type in a time series. This information can be applied to refined data analysis, such as computing realistic uncertainties of station velocities \cite{Mao1999,Williams2003} and regularization of EOP series \cite{LeBail2012}. AVAR modifications, namely WAVAR, MAVAR, and WMAVAR, were proposed for processing weighted and/or multi-dimensional data \cite{Malkin2008j}. These modifications serve as effective and convenient tools for data analysis in geodesy and astronomy. WMAVAR is the most general estimator and encompasses AVAR, WAVAR, and MAVAR as special cases. In particular, 2D WMAVAR can be used for complex data processing. Again, the AVAR modifications we proposed should not be confused with the ``original'' modified AVAR definition \cite{Allan1981}. An important advantage of AVAR and its more refined versions over WRMS in practical applications is their weak sensitivity to low-frequency signal variations. By contrast, WRMS depends heavily on the model used to eliminate the systematic component of the studied signal. Our study showed that WAVAR is more robust to outliers than the classical AVAR is; however, both AVAR and WAVAR may estimate the noise level erroneously when jumps occur in a time series. AVAR is also widely used to investigate the spectral characteristics of a time series and is a powerful tool for noise type identification through log-log representation. In particular, AVAR facilitates the effective analysis of signals with different types of noise at various frequency bands.
AVAR is supposed to be more computationally efficient than classical spectral methods, such as the Fourier transform; in our opinion, however, this advantage is no longer significant at present. A detailed comparison of the two methods for estimating spectral noise characteristics may nevertheless be interesting and useful. It must be noted that the AVAR method needs further investigation and development for some applications. First, many geodetic and astronomical series are unevenly spaced, as discussed in Section~\ref{sect:overview}. Frequent examples include station position time series with gaps, radio source position time series, and VLBI-derived session-wise EOP series. Another open issue regarding the application of AVAR to the analysis of geodetic and astronomical time series stems from possible correlations between the measurements, which may substantially distort the results of statistical analysis. Finally, we can conclude that despite its limitations and some unresolved issues, AVAR remains one of the most powerful tools for analyzing a wide range of physical measurement time series.
arXiv:1607.04712 (2016-07)
1607.01889_arXiv.txt
We present a near-infrared (NIR) imaging study of barred low surface brightness (LSB) galaxies using the TIFR near-infrared Spectrometer and Imager (TIRSPEC). LSB galaxies are dark matter dominated, late type spirals that have low luminosity stellar disks but large neutral hydrogen (HI) gas disks. Using SDSS images of a very large sample of LSB galaxies derived from the literature, we found that the barred fraction is only 8.3\%. We imaged twenty five barred LSB galaxies in the J, H, K$_S$ wavebands and twenty nine in the K$_S$ band. Most of the bars are much brighter than their stellar disks, which appear to be very diffuse. Our image analysis gives deprojected mean bar sizes of $R_{b}/R_{25}$~=~0.40 and ellipticities e~$\approx$~0.45, which are similar to bars in high surface brightness galaxies. Thus, although bars are rare in LSB galaxies, they appear to be just as strong as bars found in normal galaxies. There is no correlation of $R_{b}/R_{25}$ or $e$ with the relative HI or stellar masses of the galaxies. In the (J-K$_S$) color images most of the bars have no significant color gradient which indicates that their stellar population is uniformly distributed and confirms that they have low dust content.
Low Surface Brightness galaxies are extreme late type spiral galaxies that are optically dim and have a central disk surface brightness fainter than 22 magnitudes/arcsec$^{2}$ in the B band \citep{ImpeyBothun97}. They have diffuse stellar disks that are low in metallicity \citep{McGaugh.1994} and dust content (\citealt{Rahman.etal.2007}; \citealt{Hinz.etal.2007}). They are rich in neutral hydrogen (HI) gas \citep{O'Neil.etal.2004} but have low star formation rates (\citealt{Boissier.etal.2008}; \citealt{O'Neil.etal.2004}). They can be broadly classified into LSB spirals and LSB dwarf or irregular galaxies. Of the LSB spirals, a significant fraction have very large disks and HI gas masses; these galaxies are often referred to as giant LSB (GLSB) galaxies, of which UGC~6614, Malin~1 and Malin~2 are very good examples (\citealt{Pickering.etal.1997}; \citealt{Sprayberry.etal.1995}). GLSB galaxies are usually seen in isolated environments \citep{Rosenbaum.etal.2009}, but the smaller LSB dwarf and irregular galaxies are found in both underdense regions \citep{Pustilink.etal.2011} and more crowded environments (\citealt{Merritt.etal.2014}; \citealt{Javanmardi.etal.2016}; \citealt{Davies.etal.2016}). \par One of the distinguishing features of LSB galaxies is their very large dark matter content \citep{deblok.etal.2001}. Their dominant dark matter halos suppress the formation of both global and local disk instabilities \citep{Mihos.etal.1997}, thus hampering the formation of bars and strong spiral arms in these galaxies \citep{Wadsley.etal.2004}. Thus it is not surprising that bars are relatively rare in LSB galaxies and their spiral arms are thin compared to those found in high surface brightness (HSB) galaxies. However, although barred LSB galaxies are rare, they are the best systems in which to understand the formation and evolution of bars in dark matter dominated disks.
Although there have been many simulation studies of bars in halo dominated disks (\citealt{long.etal.2014}; \citealt{saha.naab.2013}; \citealt{villa-vargas.etal.2010}), there are surprisingly no near-infrared (NIR) or optical studies of bars in halo dominated disk galaxies. One of the main aims of this paper is to provide a deep NIR study of bars in LSB galaxies and see how they differ from those in normal bright galaxies. \par Bars play an important role in disk evolution through several dynamical effects. First, they drive gas into the centers of galaxies, resulting in the buildup of central mass concentrations that can lead to bulge growth (\citealt{norman.etal.1996}; \citealt{bournaud.etal.2005}; \citealt{fanali.etal.2015}) as well as disk star formation \citep{ellison.etal.2011}. During this process, gas may collect at the resonance radii in the disks - such as the corotation radii at the bar ends, circumnuclear rings within the bars or resonance radii in the outer disks \citep{sellwood.wilkinson.1993}. If the gas surface density in these rings is large enough, local instabilities can result in the formation of bright star forming rings at the resonance radii \citep{Buta1986}. Such rings are clearly seen in the larger LSB galaxies such as UGC~6614 \citep{mapelli.etal.2008}. Second, bars can themselves also evolve into rounder bars or boxy bulges in disks. This change in bar morphology can occur rapidly due to bending instabilities in bars \citep{raha.etal.1991}, or alternatively there may be a slow dissolution of the bar structure caused by gas infall and the buildup of a central mass concentration (\citealt{das.etal.2003}; \citealt{das.etal.2008}). This slow internal evolution of bars is often referred to as the secular evolution of barred galaxies (\citealt{Kormendy.etal.2004}; \citealt{Combes.etal.1990}; \citealt{Combes.etal.1981}; \citealt{sheth.etal.2005}).
LSB galaxies are usually isolated and are hence ideal systems in which to study secular evolution of bars into boxier bulges. As our study shows, LSB bulges are not always classical bulges; a definite indication that bulges evolve even in the most isolated environments. \par In this paper we present near infrared (NIR) imaging in the J, H, K$_S$ bands of a sample of barred LSB galaxies using the TIFR NIR Spectrometer and Imager (TIRSPEC) that is mounted on the Himalayan Chandra Telescope (HCT). Our aim is to study bar morphologies in LSB galaxies; their sizes, shapes, colors and correlation with other galaxy properties such as stellar and HI gas masses. We have also examined the K$_S$ band isophotes for signatures of bar evolution (boxy shapes), interactions or nested bars (twisted isophotes). In the following sections we describe our sample selection and our estimate of bar fraction in LSB galaxies. We then describe our observations, results and discuss the implications of our study. \\
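The deprojected bar sizes quoted in the abstract rely on correcting observed bar lengths for disk inclination. A minimal sketch of the standard analytic 2D deprojection follows; this is our illustrative assumption of the procedure, not necessarily the exact method used in the paper:

```python
import math

def deproject_bar_length(a_obs, phi_deg, incl_deg):
    """Deproject a bar's observed semi-major axis (sketch).

    phi_deg: sky-plane angle between the bar and the disk major axis
    (line of nodes); incl_deg: disk inclination. The component of the
    bar perpendicular to the line of nodes is stretched by 1/cos(i).
    """
    phi = math.radians(phi_deg)
    i = math.radians(incl_deg)
    return a_obs * math.sqrt(math.cos(phi) ** 2
                             + (math.sin(phi) / math.cos(i)) ** 2)

# A bar along the line of nodes is unaffected by inclination:
print(deproject_bar_length(10.0, 0.0, 60.0))   # 10.0
# Perpendicular to it, the length doubles at i = 60 deg (1/cos 60 = 2):
print(deproject_bar_length(10.0, 90.0, 60.0))  # ~20.0
```

The same stretch applied to the bar's minor axis gives the deprojected ellipticity; both corrections grow rapidly toward edge-on disks, which is why bar parameters are usually quoted only for moderately inclined galaxies.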
{\bf 1.}~The fraction of bars in LSB galaxies is very low, as expected from simulations of bar formation in halo dominated galaxies. We examined the SDSS images of a sample of 856 LSB galaxies and found that only 8.3\% have bars. This low fraction supports the simulations that show that massive dark matter halos suppress the formation of disk instabilities such as bars in spiral galaxies. \\ {\bf 2.~}About half our sample of 29 barred LSB galaxies hosts large, classical bulges that are very bright in the NIR. The remaining half have either boxy bulges or are bulgeless. The bars are also much brighter in the NIR than the diffuse stellar disks. This indicates that both the bars and bulges have either an older stellar population and/or a higher stellar surface density than the LSB disks. The K$_s$ isophotes do not show any twisting in the selected sample and are instead generally aligned along the bar major axes. \\ {\bf 3.~}Although the fraction of barred candidates among the dark matter dominated LSB galaxies is very small, the bar parameters such as bar lengths and ellipticities have a range of values that are similar to those found in normal galaxies. Our results clearly show that halo dominated galaxies can host strong bars. \\ {\bf 4.~}For more than half of the sample (25/29) the color parameter J-K$_s$ shows practically no variation between the bar and bulge regions. This indicates that they have similar stellar populations, metallicities and dust content. Some candidates have bulges that are significantly dimmer than the bar; these galaxies may have an older, cold stellar population in the bulge. \\ {\bf 5.~}The plots of J-K$_s$ against the bar length $D_{bar}/D_{25}$ show a weak but significant correlation, which suggests that the bar may cause some local disk star formation that makes J-K$_s$ bluer.
But the plots of J-K$_s$ against the ratio of HI content to stellar mass (M$_{HI}$/M$_{stellar}$) do not show any correlation, which clearly shows that the star formation triggered by the bar is only local and not on global disk scales. {\bf \sc Acknowledgments} The optical observations were done at the Indian Optical Observatory (IAO) at Hanle. We thank the staff of IAO, Hanle and CREST, Hosakote, that made these observations possible. The facilities at IAO and CREST are operated by the Indian Institute of Astrophysics, Bangalore. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We acknowledge the usage of the HyperLeda database\footnote{http://leda.univ-lyon1.fr/} \citep{Makarov2014}. Our work has also used SDSS-III data. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III website is http://www.sdss3.org/.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research made use of Montage. It is funded by the National Science Foundation under Grant Number ACI-1440620, and was previously funded by the National Aeronautics and Space Administration's Earth Science Technology Office, Computation Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology. The color plots were generated using the two-dimensional graphics environment Matplotlib \citep{Hunter2007}. We thank the anonymous referee for valuable comments and suggestions on the draft. \begin{figure*} \centering \caption{The J, H, K$_s$ band images of Low Surface Brightness galaxies. In all figures, north is up and east is to the left.
We have used logarithmic scaling. The isophotal contours are overlaid on the K$_s$ band image. Most of the isophotes are above the 5$\sigma$ level, except for the galaxies IC 742, NGC 5905, PGC 68495, UM 163, LSBC F568-08, UGC 5035 and LSBC F568-09, for which the last contour level is 4$\sigma$, and the galaxies CGCG 381-048, UGC 1920, UGC 9927 and UGC 9634, for which the last contour level is 3$\sigma$. For the galaxies UGC 3968, LSBC F580-01 and 1300+0144 the H band observations were affected by bad sky conditions.} \includegraphics[trim = 10mm 17mm 20mm 8mm, clip,scale=0.37]{CGCG381_J_nov.pdf}\includegraphics[trim = 10mm 17mm 20mm 8mm, clip,scale=0.37]{CGCG381_H_nov.pdf}\includegraphics[trim = 10mm 17mm 20mm 8mm, clip,scale=0.37]{CGCG381_K_con_nov.pdf} \includegraphics[trim = 5mm 17mm 20mm 15mm, clip,scale=0.37]{UGC1920_J_nov.pdf}\includegraphics[trim = 10mm 17mm 20mm 15mm, clip,scale=0.37]{UGC1920_H_nov.pdf}\includegraphics[trim = 25mm 23mm 20mm 15mm, clip,scale=0.45]{UGC1920_K_con_nov.pdf} \includegraphics[trim = 45mm 20mm 60mm 20mm, clip,width=5.5cm]{UGC1455_J_nov.pdf}\includegraphics[trim = 45mm 20mm 60mm 20mm, clip,width=5.5cm]{UGC1455_H_nov.pdf}\includegraphics[trim = 50mm 20mm 60mm 20mm, clip,scale=0.46]{UGC1455_Ks_con_nov.pdf} \includegraphics[trim = 30mm 17mm 45mm 15mm, clip,scale=0.37]{ngc5905_J_nov.pdf}\includegraphics[trim = 30mm 17mm 45mm 15mm, clip,scale=0.37]{ngc5905_H_nov.pdf}\includegraphics[trim = 30mm 17mm 45mm 15mm, clip,scale=0.37]{ngc5905_Ks_con_nov.pdf} \end{figure*} \begin{figure*} \includegraphics[trim = 45mm 30mm 45mm 20mm, clip,scale=0.49]{UM163_J_nov.pdf}\includegraphics[trim = 30mm 25mm 30mm 8mm, clip,scale=0.49]{UM163_H_nov.pdf}\includegraphics[trim = 35mm 17mm 20mm 8mm, clip,scale=0.405]{UM163_Ks_con_nov.pdf}\\ \includegraphics[trim = 35mm 17mm 40mm 15mm, clip,scale=0.44]{UGC11754_J_nov.pdf}\includegraphics[trim = 40mm 25mm 50mm 20mm, clip,scale=0.49]{UGC11754_H_radecnov.pdf}\includegraphics[trim = 35mm 17mm 35mm 15mm, clip,scale=0.44]{UGC11754_Ks_con_nov.pdf} 
\includegraphics[trim = 35mm 17mm 45mm 15mm, clip,scale=0.41]{pgc68495_J_nov.pdf}\includegraphics[trim = 35mm 17mm 45mm 15mm, clip,scale=0.41]{pgc68495_H_nov.pdf}\includegraphics[trim = 35mm 17mm 45mm 15mm, clip,scale=0.41]{pgc68495_Ks_con_nov.pdf} \includegraphics[trim = 40mm 17mm 40mm 8mm, clip,scale=0.425]{UGC2936_J_nov.pdf}\includegraphics[trim = 40mm 17mm 40mm 8mm, clip,scale=0.425]{UGC2936_H_nov.pdf}\includegraphics[trim = 40mm 17mm 40mm 8mm, clip,scale=0.425]{UGC2936_Ks_con_nov.pdf} \end{figure*} \begin{figure*} \includegraphics[trim = 40mm 17mm 40mm 8mm, clip,scale=0.44]{UGC10405_J_nov.pdf}\includegraphics[trim = 40mm 17mm 40mm 8mm, clip,scale=0.44]{UGC10405_H_nov.pdf}\includegraphics[trim = 40mm 17mm 40mm 8mm, clip,scale=0.44]{UGC10405_Ks_con_nov.pdf} \includegraphics[trim = 10mm 17mm 20mm 8mm, clip,scale=0.4]{0223-0033_J_nov.pdf}\includegraphics[trim = 17mm 17mm 20mm 8mm, clip,scale=0.4]{0223-0033_H_nov.pdf}\includegraphics[trim = 17mm 17mm 20mm 8mm, clip,scale=0.4]{0223-0033_Ks_con_nov.pdf} \includegraphics[trim = 25mm 17mm 30mm 8mm, clip,scale=0.46]{UGC5035_J_nov.pdf}\includegraphics[trim = 25mm 17mm 30mm 8mm, clip,scale=0.46]{UGC5035_H_nov.pdf}\includegraphics[trim = 25mm 17mm 30mm 8mm, clip,scale=0.46]{UGC5035_Ks_con_nov.pdf} \includegraphics[trim = 25mm 17mm 30mm 8mm, clip,scale=0.46]{UGC9087_J_nov.pdf}\includegraphics[trim = 25mm 17mm 30mm 8mm, clip,scale=0.46]{UGC9087_H_nov.pdf}\includegraphics[trim = 25mm 17mm 30mm 8mm, clip,scale=0.46]{UGC9087_K_con_nov.pdf} \end{figure*} \begin{figure*} \includegraphics[trim = 25mm 17mm 30mm 8mm, clip,scale=0.44]{LSBCF568-8_J_nov.pdf}\includegraphics[trim = 25mm 22mm 30mm 8mm, clip,scale=0.45]{LSBCF568-8_H_nov.pdf}\includegraphics[trim = 20mm 17mm 30mm 8mm, clip,scale=0.44]{LSBCF568-8_K_con_nov.pdf} \includegraphics[trim = 58mm 17mm 60mm 8mm, clip,scale=0.46]{LSBCF568-9_J_nov.pdf}\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,scale=0.39]{LSBCF568-9_H_may.pdf}\includegraphics[trim = 48mm 17mm 40mm 8mm, 
clip,scale=0.46]{LSBCF568-9_K_con_nov.pdf} \includegraphics[trim = 45mm 17mm 50mm 8mm, clip,scale=0.39]{UGC9634_J_nov.pdf}\includegraphics[trim = 45mm 17mm 50mm 8mm, clip,scale=0.39]{UGC9634_H_nov.pdf}\includegraphics[trim = 45mm 17mm 50mm 8mm, clip,scale=0.39]{UGC9634_KS_con_nov.pdf} \includegraphics[trim = 40mm 17mm 55mm 8mm, clip,scale=0.39]{IC742_J_nov.pdf}\includegraphics[trim = 40mm 17mm 55mm 8mm, clip,scale=0.39]{IC742_H_nov.pdf}\includegraphics[trim = 40mm 17mm 55mm 8mm, clip,scale=0.39]{IC742_KS_con_nov.pdf} \end{figure*} \begin{figure*} \includegraphics[trim = 45mm 17mm 65mm 8mm, clip,scale=0.42]{UGC8794_J_nov.pdf}\includegraphics[trim = 48mm 17mm 60mm 8mm, clip,scale=0.42]{UGC8794_H_nov.pdf}\includegraphics[trim = 55mm 17mm 60mm 8mm, clip,scale=0.42]{UGC8794_KS_con_nov.pdf} \includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{UGC9927_J_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{UGC9927_H_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{UGC9927_K_con_nov.pdf} \includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{LSBCF584-1_J_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{LSBCF584-1_H_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{LSBCF584-1_KS_con_nov.pdf} \includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{LSBCF580-02_J_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{LSBCF580-02_H_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{LSBCF580-02_KS_con_nov.pdf} \end{figure*} \begin{figure*} \includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{UGC3968_J_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{UGC3968_H_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{UGC3968_KS_con_nov.pdf} \includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{1300+0144_J_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, 
clip,scale=0.42]{1300+0144_H_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{1300+0144_KS_con_nov.pdf} \includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{PGC60365_J_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{PGC60365_H_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{PGC60365_KS_con_nov.pdf} \includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{CGCG006-023_J_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{CGCG006-023_H_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{CGCG006-023_KS_con_nov.pdf} \end{figure*} \begin{figure*} \includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{1252+0230_J_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{1252+0230_H_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.42]{1252+0230_KS_con_nov.pdf} \includegraphics[trim = 50mm 25mm 65mm 15mm, clip,scale=0.45]{LSBCF570-1_KS_con_nov.pdf}\includegraphics[trim = 50mm 17mm 60mm 8mm, clip,scale=0.4]{1442+0137_KS_con_nov.pdf}\includegraphics[trim = 50mm 17mm 65mm 8mm, clip,scale=0.4]{LSBCF675-01_KS_con_nov.pdf} \includegraphics[trim = 50mm 17mm 60mm 8mm, clip,scale=0.36]{IC2423_K_con_nov.pdf} \\ \end{figure*} \begin{figure*} \caption{The J-K$_s$ images of Low Surface Brightness galaxies} \includegraphics[scale=0.26]{CGCG006-023_JK_WCS_added_mar6.pdf}\includegraphics[scale=0.26]{UGC1920_JK_added_mar6.pdf} \includegraphics[scale=0.26]{UGC1455_JK_mar_mar6.pdf}\includegraphics[scale=0.26]{NGC5905_J-K_mar6.pdf} \includegraphics[scale=0.26]{UM163_JK_WCS_mar6.pdf}\includegraphics[scale=0.26]{UGC1175_JK_mar6.pdf} \includegraphics[scale=0.26]{PG68495_JK_mar.pdf}\includegraphics[scale=0.26]{UGC2936_JK_mar6.pdf} \end{figure*} \begin{figure*} \includegraphics[scale=0.28]{UGC10405_JK_added_mar6.pdf}\includegraphics[scale=0.28]{0223-0033_JK_mar6.pdf} 
\includegraphics[scale=0.28]{UGC5035_JK_WCS_added_mar6.pdf}\includegraphics[scale=0.28]{UGC9087_JK_WCS_added_mar6.pdf} \includegraphics[scale=0.28]{LSBCf568-08_JK_WCS_added_mar6.pdf}\includegraphics[scale=0.28]{F568_9-JK-WCS_added_mar6.pdf} \includegraphics[scale=0.28]{ugc9634-JK_WCS_added_mar6.pdf}\includegraphics[scale=0.28]{IC742_JK_WCS_added_mar6.pdf} \end{figure*} \begin{figure*} \includegraphics[scale=0.28]{UGC8794_JK_WCS_mar6.pdf}\includegraphics[scale=0.28]{UGC9927_JK_WCS_jan_mar6.pdf} \includegraphics[scale=0.28]{LSBCF584-01_JK_WCS_added1_mar6.pdf}\includegraphics[scale=0.28]{LSBCF580-2_JK_WCS_mar6.pdf} \includegraphics[scale=0.28]{UGC3968_JK_WCS_mar6.pdf}\includegraphics[scale=0.28]{1300_JK_WCS_added_mar6.pdf} \includegraphics[scale=0.28]{PGC60365_JK_WCS_added_mar6.pdf}\includegraphics[scale=0.28]{CGCG006-023_JK_WCS_added_mar6.pdf} \end{figure*} \begin{figure*} \includegraphics[scale=0.28]{1252+0230_JK_WCS_mar6.pdf} \end{figure*} \begin{figure*} \caption{The R-band images of the galaxies UM 163 (left) and UGC 11754 (right). A logarithmic scale is used in both images.} \label{f:b-band} \includegraphics[trim = 0mm 18mm 28mm 0mm, clip,scale=0.45]{um163_red_band.pdf}\includegraphics[trim = 18mm 18mm 0mm 0mm, clip,scale=0.45]{ugc11754_red_band.pdf} \end{figure*} \begin{center} \begin{table*} \label{tab:barlength} \centering \caption{The bar parameters of the LSB galaxies from the ellipse fit. The parameters a \& b are the semi-major and semi-minor axes of the galaxy in RC3 or the SDSS r-band, depending on which one covers the galaxy disk out to the greater distance. The angle of inclination is given as $i$. L$_{obs}$, e$_{obs}$ and PA$_{bar}$ are the projected bar semi-major axis length, the ellipticity and the bar position angle obtained from the ELLIPSE fit. PA$_{gal}$ is the position angle of the galaxy. The parameters b/a, $i$ and PA$_{gal}$ are taken from NED (the position angles are measured from north towards east and given in degrees). The difference between PA$_{bar}$ and PA$_{gal}$ is taken as $\alpha$. 
L$_{dep}$ is the deprojected bar semi-major axis length calculated in arcsec. e$_{dep}$ is the deprojected ellipticity. Using the scale given in Table 1, L$_{dep}$ is converted to physical units (kpc), given as L$_{bar}$. The galaxies 0223-0033, UGC 2936, UGC 9634, UGC 8794 and 1300+0144 have inclination angles greater than 60$^{\circ}$, so we exclude them from the plots even though they are listed in the table.} \begin{tabular}{l|l|l|l|l|l|l|l|l|l|l|l} \hline Galaxy & b/a & $i$ & L$_{obs}$ & e$_{obs}$ & PA$_{bar}$ & PA$_{gal}$ & $\alpha$ & L$_{dep}$ & L$_{bar}$ & e$_{dep}$ \\ name & & ( $^{\circ}$)& (arcsec) & & ( $^{\circ}$) & ( $^{\circ}$) & ( $^{\circ}$) & (arcsec) & (kpc) & \\ \hline CGCG381-048 & 0.78 & 38 & 11.39$\pm$1.14 & 0.38 $\pm$0.03 & 67 $\pm$3.1 & 42 & 25 & 12.00 $\pm$1.21 & 3.83 $\pm$0.39 & 0.32$\pm$0.04 \\ UGC1920 & 0.72 & 44 & 9.41 $\pm$0.94 & 0.50 $\pm$0.02 & 166$\pm$1.3 & 10 & 24 & 10.11 $\pm$1.01 & 3.93 $\pm$0.39 & 0.42$\pm$0.02 \\ UGC1455 & 1.00 & 0 & 16.68$\pm$1.67 & 0.44 $\pm$0.03 & 28 $\pm$3.0 & 132 & 76 & 16.68 $\pm$1.67 & 5.27 $\pm$0.53 & 0.44$\pm$0.03 \\ NGC5905 & 0.66 & 49 & 25.67$\pm$2.57 & 0.54 $\pm$0.03 & 21 $\pm$2.0 & 135 & 66 & 37.21 $\pm$3.73 & 8.41 $\pm$0.84 & 0.67$\pm$0.02 \\ UM163 & 0.69 & 46 & 22.20$\pm$2.22 & 0.41 $\pm$0.04 & 115$\pm$3.5 & 99 & 16 & 23.08 $\pm$2.34 & 14.24 $\pm$1.44 & 0.26$\pm$0.05 \\ UGC11754 & 0.89 & 27 & 8.56 $\pm$0.86 & 0.47 $\pm$0.01 & 145$\pm$2.3 & 157 & 12 & 8.61 $\pm$0.86 & 2.53 $\pm$0.25 & 0.42$\pm$0.02 \\ PGC68495 & 0.65 & 50 & 7.08 $\pm$0.71 & 0.56 $\pm$0.01 & 174$\pm$0.6 & 117 & 57 & 10.00 $\pm$1.00 & 7.79 $\pm$0.78 & 0.66$\pm$0.01 \\ UGC2936 & 0.27 & 74 & 10.35$\pm$1.04 & 0.43 $\pm$0.01 & 26 $\pm$0.7 & 30 & 4 & 10.66 $\pm$1.07 & 2.58 $\pm$0.26 & 0.52$\pm$0.01 \\ UGC10405 & 0.74 & 42 & 5.32 $\pm$0.53 & 0.44 $\pm$0.02 & 136$\pm$2.1 & 21 & 65 & 6.86 $\pm$0.69 & 4.77 $\pm$0.48 & 0.56$\pm$0.02 \\ 0223-0033 & 0.46 & 63 & 7.78 $\pm$0.78 & 0.26 $\pm$0.03 & 176$\pm$3.4 & 39 & 43 & 
12.98 $\pm$1.40 & 5.19 $\pm$0.56 & 0.57$\pm$0.02 \\ UGC5035 & 0.84 & 33 & 13.78$\pm$1.38 & 0.44 $\pm$0.01 & 77 $\pm$1.3 & 170 & 87 & 16.43 $\pm$1.64 & 11.86 $\pm$1.19 & 0.53$\pm$0.01 \\ UGC9087 & 0.72 & 44 & 19.15$\pm$1.91 & 0.54 $\pm$0.02 & 28.5$\pm$2.0 & 17 & 11.5 & 19.50 $\pm$1.95 & 6.81 $\pm$0.68 & 0.39$\pm$0.03 \\ LSBCF568-08 & 0.78 & 39 & 7.07 $\pm$0.71 & 0.20 $\pm$0.01 & 121$\pm$1.6 & 63 & 58 & 8.58 $\pm$0.86 & 5.76 $\pm$0.58 & 0.33$\pm$0.01 \\ LSBCF568-09 & 0.96 & 16 & 12.53$\pm$1.25 & 0.39 $\pm$0.01 & 60 $\pm$1.1 & 13 & 47 & 12.80 $\pm$1.28 & 6.90 $\pm$0.69 & 0.40$\pm$0.01 \\ UGC9634 & 0.47 & 62 & 10.36$\pm$1.04 & 0.63 $\pm$0.01 & 89 $\pm$0.5 & 100 & 11 & 10.98 $\pm$1.10 & 9.00 $\pm$0.90 & 0.35$\pm$0.01 \\ IC742 & 0.95 & 17 & 16.68$\pm$1.67 & 0.64 $\pm$0.01 & 117.6$\pm$0.7& 0 &62.4& 17.28 $\pm$1.73 & 7.53 $\pm$0.75 & 0.65$\pm$0.01 \\ UGC8794 & 0.31 & 72 & 11.39$\pm$1.14 & 0.48 $\pm$0.01 & 42.1$\pm$0.8 & 68 &25.9& 19.05 $\pm$1.93 & 10.78 $\pm$1.09 & 0.63$\pm$0.01 \\ UGC9927 & 0.88 & 28 & 13.78$\pm$1.38 & 0.42 $\pm$0.01 & 109$\pm$0.8 & 4 &75 & 15.49 $\pm$1.55 & 4.49 $\pm$0.45 & 0.48$\pm$0.01 \\ LSBCF584-01 & 0.75 & 41 & 9.41 $\pm$0.94 & 0.60 $\pm$0.02 & 87$\pm$1.7 & 76 & 11 & 9.54 $\pm$0.96 & 7.31 $\pm$0.73 & 0.48$\pm$0.03 \\ LSBCF580-02 & 0.8 & 37 & 4.83 $\pm$0.48 & 0.57 $\pm$0.03 & 115$\pm$2.0 & 130 & 15 & 4.92 $\pm$0.49 & *** & 0.48$\pm$0.04 \\ UGC3968 & 0.85 & 32 & 16.68$\pm$1.67 & 0.63 $\pm$0.02 & 136$\pm$1.1 & 26 & 70 & 19.34 $\pm$1.93 & 8.53 $\pm$0.85 & 0.68$\pm$0.01 \\ 1300+0144 & 0.24 & 76 & 7.07 $\pm$0.71 & 0.63 $\pm$0.01 & 82.6$\pm$0.7 & 82 & 0.6& 7.08 $\pm$0.71 & 5.63 $\pm$0.56 & 0.35$\pm$0.02 \\ PGC60365 & 0.71 & 44 & 9.41 $\pm$0.94 & 0.41 $\pm$0.02 & 169$\pm$1.9 & 120 & 49 & 11.64 $\pm$1.17 & *** & 0.49$\pm$0.02 \\ CGCG006-023 & 0.63 & 51 & 8.56 $\pm$0.86 & 0.37 $\pm$0.01 & 30$\pm$1.3 & 109 & 79 & 13.44 $\pm$1.34 & 10.02 $\pm$1.00 & 0.60$\pm$0.01 \\ 1252+0230 & 0.88 & 33 & 7.07 $\pm$0.71 & 0.29 $\pm$0.03 & 88.3$\pm$3.1 & 99 &10.7& 7.12 
$\pm$0.71 & 6.57 $\pm$0.66 & 0.18$\pm$0.03 \\ LSBCF570-01 & 0.60 & 53 & 7.78 $\pm$0.78 & 0.17 $\pm$0.01 & 29$\pm$1.4 & 108 & 79 & 12.77 $\pm$1.28 & 6.65 $\pm$0.67 & 0.50$\pm$0.00 \\ 1442+0137 & 0.9 & 26 & 6.43 $\pm$0.64 & 0.42 $\pm$0.02 & 31 $\pm$1.8 & 8 & 23 & 6.55 $\pm$0.65 & 4.33 $\pm$0.43 & 0.38$\pm$0.02 \\ LSBCF675-01 & 0.75 & 41 & 6.43 $\pm$0.64 & 0.57 $\pm$0.02 & 79 $\pm$1.8 & 72 & 7 & 6.47 $\pm$0.65 & 4.39 $\pm$0.44 & 0.44$\pm$0.03 \\ IC2423 & 0.81 & 36 & 9.41 $\pm$0.94 & 0.44 $\pm$0.01 & 128 $\pm$1.1 & 100 & 28 & 9.95 $\pm$0.99 & 5.97 $\pm$0.60 & 0.39$\pm$0.01 \\ \hline \end{tabular} \end{table*} \end{center} \begin{center} \begin{table*} \label{tab:stellarmass} \centering \caption{The model magnitudes in the g and r bands are taken from the SDSS database and are used to determine the B-V color of the galaxy. f$_{\nu}$ represents the total flux density in the r-band. For the calculations we have used the luminosity distance taken from NED. L$_r$, M/L and M$_{stellar}$ denote the r-band luminosity, the M/L ratio of the galaxy for that band and the stellar mass, respectively.} \begin{tabular}{l|c|c|c|c|c|c|c} \hline Galaxy & g-r & B-V & f$_{\nu}$ & Distance& L$_r$ & M/L & M$_{stellar}$\\ & & & (ergs cm$^{-2}$ s$^{-1}$ Hz$^{-1}$)& (Mpc) & (ergs s$^{-1}$ Hz$^{-1}$)& &(10$^{10}$M$\odot$)\\ \hline CGCG 381-048 & 0.629$\pm$0.003 & 0.836 & 8.47E-12 & 68.0 & 4.43E+42 & 2.78 & 0.62$\pm$0.01\\ UGC 1920 & 0.862$\pm$0.003 & 1.065 & 7.22E-12 & 83.4 & 5.68E+42 & 5.66 & 1.61$\pm$0.02\\ UGC 1455 & 0.889$\pm$0.002 & 1.091 & 2.60E-11 & 67.3 & 1.33E+43 & 6.14 & 4.09$\pm$0.03\\ NGC 5905 & 0.823$\pm$0.002 & 1.027 & 4.27E-11 & 47.8 & 1.10E+43 & 5.02 & 2.77$\pm$0.02\\ UM 163 & 0.728$\pm$0.003 & 0.933 & 1.25E-11 & 136.0 & 2.61E+43 & 3.76 & 4.89$\pm$0.04\\ UGC 11754 & 0.385$\pm$0.004 & 0.597 & 8.48E-12 & 62.5 & 3.75E+42 & 1.33 & 0.25$\pm$0.003\\ PGC 68495 & 0.633$\pm$0.004 & 0.840 & 5.40E-12 & 174.0 & 1.85E+43 & 2.82 & 2.61$\pm$0.03\\ UGC 2936 & *** & *** & *** & 51.2 & *** & *** & *** \\ UGC 10405 & 
0.651$\pm$0.004 & 0.858 & 7.60E-12 & 154.0 & 2.04E+43 & 2.98 & 3.03$\pm$0.03\\ 0223-0033 & 0.990$\pm$0.003 & 1.191 & 2.03E-11 & 86.0 & 1.70E+43 & 8.35 & 7.11$\pm$0.07\\ UGC 5035 & 0.800$\pm$0.003 & 1.004 & 7.82E-12 & 161.0 & 2.29E+43 & 4.68 & 5.37$\pm$0.05\\ UGC 9087 & 0.939$\pm$0.003 & 1.140 & 1.41E-11 & 74.5 & 8.87E+42 & 7.15 & 3.17$\pm$0.03\\ LSBC F568-8 & 0.843$\pm$0.003 & 1.047 & 8.25E-12 & 148.0 & 2.04E+43 & 5.35 & 5.46$\pm$0.05\\ LSBC F568-9 & 0.682$\pm$0.003 & 0.888 & 6.97E-12 & 117.0 & 1.08E+43 & 3.27 & 1.76$\pm$0.02\\ UGC 9634 & 0.824$\pm$0.004 & 1.027 & 4.25E-12 & 184.0 & 1.63E+43 & 5.03 & 4.09$\pm$0.05\\ IC 742 & 0.934$\pm$0.003 & 1.135 & 8.69E-12 & 94.1 & 8.70E+42 & 7.04 & 3.06$\pm$0.03\\ UGC 8794 & 0.703$\pm$0.003 & 0.909 & 9.25E-12 & 124.0 & 1.61E+43 & 3.49 & 2.80$\pm$0.03\\ UGC 9927 & 0.772$\pm$0.003 & 0.976 & 1.52E-11 & 61.7 & 6.54E+42 & 4.30 & 1.41$\pm$0.01\\ LSBC F584-01 & 0.843$\pm$0.004 & 1.046 & 4.88E-12 & 171.0 & 1.61E+43 & 5.33 & 4.31$\pm$0.05\\ LSBC F580-2 & 0.608$\pm$0.004 & 0.816 & 5.52E-12 & *** & *** & *** & *** \\ UGC 3968 & 0.722$\pm$0.003 & 0.928 & 1.17E-11 & 95.2 & 1.20E+43 & 3.69 & 2.21$\pm$0.02\\ 1300+0144 & 0.730$\pm$0.005 & 0.935 & 2.99E-12 & 178.0 & 1.07E+43 & 3.78 & 2.03$\pm$0.03\\ PGC 60365 & 0.827$\pm$0.003 & 1.030 & 5.73E-12 & *** & *** & *** & *** \\ CGCG 006-023 & 0.548$\pm$0.004 & 0.757 & 4.21E-12 & 166.0 & 1.31E+43 & 2.18 & 1.43$\pm$0.02\\ 1252+0230 & 0.596$\pm$0.005 & 0.804 & 3.42E-12 & 209.0 & 1.69E+43 & 2.52 & 2.13$\pm$0.04\\ LSBC F570-01 & 0.812$\pm$0.003 & 1.016 & 8.93E-12 & 113.0 & 1.29E+43 & 4.86 & 3.13$\pm$0.03\\ 1442+0137 & 0.587$\pm$0.005 & 0.795 & 2.74E-12 & 146.0 & 6.61E+42 & 2.45 & 0.81$\pm$0.01\\ LSBC F675-01 & 0.881$\pm$0.006 & 1.083 & 1.24E-12 & 150.0 & 3.15E+42 & 5.99 & 0.94$\pm$0.02\\ IC 2423 & 0.609$\pm$0.003 & 0.817 & 1.35E-11 & 132.0 & 2.65E+43 & 2.62 & 3.47$\pm$0.03\\ \hline \end{tabular} \end{table*} \end{center} \begin{center} \begin{table*} \label{tab:HI} \caption{The J and K$_s$ values are 
taken from the 2MASS extended object catalogue. J and K$_s$ denote the magnitudes in the J and K$_s$ bands. The HI values are taken from the HYPERLEDA archival database. The columns m21, S$_{\nu}$ and M$_{HI}$ represent the HI magnitude, the flux and the neutral hydrogen mass, which corresponds to the gaseous component of the galaxy.} \begin{tabular}{l|c|c|c|c|c} \hline Galaxy & J & K$_s$ & m21 & S$_{\nu}$ (Jy km s$^{-1}$) & M$_{HI}$(10$^9$M$\odot$)\\ \hline & & & & & \\ CGCG 381-048 & 12.180$\pm$0.038 & 11.353 $\pm$0.068 & 16.20$\pm$0.15 &$ 3.03\substack{+0.45\\-0.39}$&$3.30\substack{+0.49\\-0.43}$ \\ & & & & & \\ UGC 1920 & 12.273$\pm$0.038 & 11.081 $\pm$0.051 & 15.56$\pm$0.12 &$ 5.45\substack{+0.64\\-0.57}$&$8.94\substack{+1.04\\-0.94}$ \\ & & & & & \\ UGC 1455 & 10.516$\pm$0.023 & 9.502 $\pm$0.040 & 15.05$\pm$0.17 &$ 8.71\substack{+1.26\\-1.48}$&$9.31\substack{+1.58\\-1.35}$ \\ & & & & & \\ NGC 5905 & 10.467$\pm$0.017 & 9.514 $\pm$0.027 & 13.61$\pm$0.12 &$ 32.8\substack{+3.43\\-3.83}$&$17.7\substack{+2.07\\-1.85}$ \\ & & & & & \\ UM 163 & 11.357$\pm$0.034 & 10.327 $\pm$0.054 & 16.37$\pm$0.09 &$ 2.58\substack{+0.21\\-0.22}$&$11.3\substack{+0.97\\-0.90}$ \\ & & & & & \\ UGC 11754 & 12.823$\pm$0.075 & 11.853 $\pm$0.106 & 15.06$\pm$0.13 &$ 8.63\substack{+0.97\\-1.10}$&$7.96\substack{+1.01\\-0.90}$ \\ & & & & & \\ PGC 68495 & 12.581$\pm$0.070 & 11.436 $\pm$0.111 & 16.38$\pm$0.19 &$ 2.56\substack{+0.41\\-0.49}$&$18.3\substack{+3.50\\-2.93}$ \\ & & & & & \\ UGC 2936 & 10.160$\pm$0.026 & 8.914 $\pm$0.029 & 14.97$\pm$0.14 &$ 9.38\substack{+1.13\\-1.29}$&$5.80\substack{+0.80\\-0.70}$ \\ & & & & & \\ UGC 10405 & *** & *** & 15.57$\pm$0.13 &$ 5.40\substack{+0.61\\-0.69}$&$30.20\substack{+3.84\\-3.41}$ \\ & & & & & \\ 0223-0033 & 11.059$\pm$0.023 & 10.115 $\pm$0.045 & 15.52$\pm$0.09 &$ 5.65\substack{+0.45\\-0.49}$&$9.86\substack{+0.85\\-0.78}$ \\ & & & & & \\ UGC 5035 & 12.168$\pm$0.040 & 11.144 $\pm$0.049 & *** & *** & *** \\ & & & & & \\ UGC 9087 & 11.673$\pm$0.027 & 10.811 
$\pm$0.046 & *** & *** & *** \\ & & & & & \\ LSBC F568-8 & 12.232$\pm$0.061 & 11.177 $\pm$0.072 & *** & *** & *** \\ & & & & & \\ LSBC F568-9 & ** & ** & 16.89$\pm$0.09 &$ 1.60\substack{+0.13\\-0.14}$&$5.17\substack{+0.44\\-0.41}$ \\ & & & & & \\ UGC 9634 & 12.761$\pm$0.044 & 12.068 $\pm$0.084 & 15.84$\pm$0.22 &$ 4.22\substack{+0.77\\-0.95}$&$33.6\substack{+7.55\\-6.17}$ \\ & & & & & \\ IC 742 & 11.654$\pm$0.029 & 11.199 $\pm$0.082 & 17.58$\pm$0.25 &$ 0.85\substack{+0.17\\-0.22}$&$1.77\substack{+0.46\\-0.36}$ \\ & & & & & \\ UGC 8794 & 11.893$\pm$0.028 & 10.893 $\pm$0.039 & 16.41$\pm$0.15 &$ 2.49\substack{+0.32\\-0.37}$&$9.03\substack{+1.34\\-1.17}$ \\ & & & & & \\ UGC 9927 & 11.559$\pm$0.031 & 10.825 $\pm$0.034 & *** & *** & *** \\ & & & & & \\ LSBC F584-01 & 12.551$\pm$0.041 & 11.844 $\pm$0.079 & 17.26$\pm$0.16 &$ 1.14\substack{+0.16\\-0.18}$&$7.85\substack{+1.25\\-1.08}$ \\ & & & & & \\ LSBC F580-2 & ** & ** & *** & *** & *** \\ & & & & & \\ UGC 3968 & 12.100$\pm$0.039 & 11.073 $\pm$0.059 & 15.79$\pm$0.12 &$ 4.41\substack{+0.46\\-0.51}$&$9.42\substack{+1.10\\-0.99}$ \\ & & & & & \\ 1300+0144 & 13.044$\pm$0.058 & 12.158 $\pm$0.106 & 16.92$\pm$0.27 &$ 1.56\substack{+0.34\\-0.44}$&$11.63\substack{+3.28\\-2.56}$ \\ & & & & & \\ PGC 60365 & 12.473$\pm$0.047 & 11.472 $\pm$0.080 & *** & *** & *** \\ & & & & & \\ CGCG 006-023 & 13.286$\pm$0.068 & 12.378 $\pm$0.116 & *** & *** & *** \\ & & & & & \\ 1252+0230 & 13.698$\pm$0.083 & 12.752 $\pm$0.145 & *** & *** & *** \\ & & & & & \\ LSBC F570-01 & 12.116$\pm$0.031 & 11.235 $\pm$0.039 & *** & *** & *** \\ & & & & & \\ 1442+0137 & 13.737$\pm$0.080 & 12.770 $\pm$0.139 & *** & *** & *** \\ & & & & & \\ LSBC F675-01 & 13.954$\pm$0.066 & 13.294 $\pm$0.148 & 16.91$\pm$0.42 & $1.57\substack{+0.53\\-0.74}$&$8.34\substack{+3.94\\-2.68}$ \\ & & & & & \\ IC 2423 & 11.811$\pm$0.028 & 10.866 $\pm$0.044 & 16.47$\pm$0.42 & $2.36\substack{+0.76\\-1.11}$&$9.68\substack{+4.57\\-3.11}$ \\ & & & & & \\ \hline \end{tabular} \end{table*} 
\end{center} \begin{figure*} \centering \caption{The fitted elliptical isophotes are overlaid on the K$_s$ band images of the galaxies UGC 3968, PGC 60365 and UGC 11754. A logarithmic scale is used in all images. The case of the galaxy UGC 3968 is shown in detail, illustrating how parameters such as the surface brightness, ellipticity, position angle and b4 parameter change with the semi-major axis.} \includegraphics[trim =0mm 17mm 5mm 0mm, clip,scale=0.44]{UGC3968_e2.pdf} \includegraphics[trim = 0mm 17mm 5mm 0mm, clip,scale=0.44]{PGC60365_e2.pdf} \includegraphics[trim = 0mm 4mm 0mm 0mm, clip,scale=.47]{UGC11754_e2.pdf} \vspace{2cm} \includegraphics[scale=0.5]{ref2_surface_b_3968.pdf}\includegraphics[scale=0.5]{ref_ellip_3968.pdf} \includegraphics[scale=0.5]{ref_pa_3968.pdf}\includegraphics[scale=0.5]{ref_b4_3968.pdf} \end{figure*} \begin{figure*} \centering \caption{The histograms of the deprojected bar length and the deprojected ellipticity are shown below. The third panel shows the distribution of the ratio of the bar length to D$_{25}$.} \includegraphics[scale=0.5]{barlength_histogram_gadotty_1.pdf} \includegraphics[scale=0.5]{ellip_histogram.pdf} \includegraphics[scale=0.5]{fraction_bar_histogram.pdf} \end{figure*} \begin{figure*} \centering \caption{The ellipticity is plotted against the ratio of the bar length to D$_{25}$. The points are very scattered and do not show any correlation.} \includegraphics{ref_ellip_barfraction.pdf} \end{figure*} \begin{figure*} \centering \caption{The bar parameters, the ratio of the bar length to D$_{25}$ and the ellipticity, are plotted against the gas mass fraction of the galaxy. 
The ratio of the bar length to D$_{25}$ and the ellipticity do not show any correlation with M$_{HI}$/(M$_{HI}$+M$_{stellar}$); the ratio of the bar length to D$_{25}$ plotted against the total baryonic mass is also scattered.} \includegraphics[scale=0.9]{ref_bar_fraction_MH_fraction.pdf} \includegraphics[scale=0.9]{ref_ellip_MH_fraction.pdf} \end{figure*} \clearpage \begin{figure*} \includegraphics[scale=0.9]{ref_bar_fraction_Mtotal.pdf} \end{figure*} \clearpage \begin{figure*} \caption{To check the variation of J-K$_s$ with the bar parameters, we plot J-K$_s$ against the ratio of the bar length to D$_{25}$ in the first panel and against the ellipticity in the second panel. Both the ratio of the bar length to D$_{25}$ and the ellipticity show only a weak relation with the J-K$_s$ color. The fits do not include the errors.} \includegraphics[scale=0.9]{ref_JK_barfraction.pdf} \includegraphics[scale=0.9]{ref_JK_ellip.pdf} \end{figure*} \begin{figure*} \centering \caption{The J-K$_s$ color is plotted against the ratio of the gaseous mass to the stellar mass of the galaxy.} \includegraphics{ref_JK_MH_fraction.pdf} \end{figure*} \clearpage
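For reference, the tabulated quantities follow from standard relations; the exact conventions below are our reconstruction (they reproduce the tabulated values, e.g. for CGCG 381-048, $L_{\rm obs}=11.39''$, $\alpha=25^{\circ}$, $i=38^{\circ}$ gives $L_{\rm dep}\simeq 12.0''$, and $D=68$~Mpc with $S_{\nu}=3.03$~Jy~km~s$^{-1}$ gives $M_{HI}\simeq 3.3\times10^{9}\,M_{\odot}$):

```latex
% Deprojection of the observed bar semi-major axis, assuming the bar is a
% linear feature in a thin disc inclined at angle i, with alpha the angle
% between the bar and the galaxy major axis:
\begin{equation}
  L_{\mathrm{dep}} = L_{\mathrm{obs}}
  \sqrt{\cos^{2}\alpha + \frac{\sin^{2}\alpha}{\cos^{2} i}} .
\end{equation}
% Neutral hydrogen mass from the integrated 21-cm line flux S_nu and the
% distance D (standard relation):
\begin{equation}
  M_{HI} = 2.36\times10^{5}
  \left(\frac{D}{\mathrm{Mpc}}\right)^{2}
  \left(\frac{S_{\nu}}{\mathrm{Jy\,km\,s^{-1}}}\right) M_{\odot} .
\end{equation}
```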
arXiv:1607.01889 (July 2016)

1607.04148_arXiv.txt
We present a novel approach for setting initial conditions on the mode functions of the Mukhanov-Sasaki equation. These conditions are motivated by minimisation of the renormalised stress-energy tensor, and are valid for setting a vacuum state even in a context where the spacetime is changing rapidly. Moreover, these alternative conditions are potentially observationally distinguishable. We apply this to the kinetically dominated universe, and compare with the more traditional approach.
Traditionally, quantum initial conditions for inflation are set using the Bunch-Davies vacuum. This approach is valid in de Sitter space and other asymptotically static spacetimes. Rapidly evolving spacetimes, however, do not admit such an easy quantisation. In a recent work~\cite{Handley+2014}, we showed that the classical equations of motion suggest that the universe in fact emerged in a rapidly evolving state, with the kinetic energy of the inflaton dominating the potential in a pre-inflationary phase. This can be used to set initial conditions on the background variables such as the inflaton value and Hubble parameter. In order to make contact with real observations, the effect that this phase has on the primordial power spectrum requires a semi-classical quantum mechanical treatment of the comoving curvature perturbation. Hamiltonian diagonalisation is the simplest approach for setting quantum initial conditions in a general spacetime, and derives the vacuum from the minimisation of the Hamiltonian density. This approach has been criticised in the past as it does not admit a consistent interpretation in terms of particles~\cite{Fulling+1989,Fulling_HD}. Other approaches such as the adiabatic vacuum go some way to rescuing the particle concept, but have additional theoretical issues. The issue of the particle interpretation stems from an attempt to apply a Minkowski spacetime concept outside the region of its validity. We postulate that the minimisation of an energy density is still an appropriate way to define a vacuum. In order to avoid the issues raised against Hamiltonian diagonalisation, we motivate our initial conditions from the minimisation of the {\em renormalised\/} stress-energy density. Indeed, if one takes care to minimise the correct quantity (using the theory of quantum fields in curved spacetime), then novel initial conditions can be derived which differ from the traditional Hamiltonian diagonalisation conditions. 
After the relevant background material is reviewed, we develop a generic mechanism for setting initial conditions. These reduce to the Bunch-Davies case in asymptotically static spacetimes (such as de Sitter space), but yield different results otherwise. The aim is that these should be more theoretically robust. Additionally, these conditions are potentially distinguishable using observational data. We then apply this procedure to the kinetically dominated universe, but delay the observational analysis to a later work.
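For orientation, we recall the standard definitions involved (none of this is specific to the present work): in conformal time $\tau$, each Fourier mode of the Mukhanov-Sasaki variable obeys

```latex
\begin{equation}
  v_k'' + \left(k^{2} - \frac{z''}{z}\right) v_k = 0 ,
  \qquad z \equiv \frac{a\,\dot{\phi}}{H} ,
\end{equation}
% and the Bunch-Davies prescription selects the positive-frequency
% Minkowski-like solution in the far past:
\begin{equation}
  v_k(\tau) \longrightarrow \frac{e^{-ik\tau}}{\sqrt{2k}}
  \qquad (k\tau \to -\infty) .
\end{equation}
```

The alternative conditions discussed here replace the second prescription, fixing $v_k$ and $v_k'$ at a finite initial time by minimising the renormalised stress-energy density instead.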
We have presented a novel procedure for setting the initial conditions on the Mukhanov-Sasaki equation. We define the vacuum state via the instantaneous minimisation of the renormalised stress-energy tensor. This procedure is valid for any background cosmology, independent of the thorny issue of a particle-type concept. It reduces to the Bunch-Davies vacuum in an asymptotically static region. Further, it makes theoretical predictions that may be observationally testable.
arXiv:1607.04148 (July 2016)

1607.03845_arXiv.txt
In 2015, Anderson et al~\cite{Anderson15a} claimed to find evidence for periodic sinusoidal variations (period = 5.9 years) in measurements of Newton's gravitational constant. These claims have been disputed by Pitkin~\cite{Pitkin15}. Using Bayesian model comparison, he argues that a model with an unknown Gaussian noise component is favored over any periodic variations by a Bayes factor of more than $e^{30}$. We re-examine the claims of Anderson et al~\cite{Anderson15a} using frequentist model comparison tests, both with and without errors in the measurement times. Our findings lend support to Pitkin's claim that a constant term along with an unknown systematic offset provides a better fit to the measurements of Newton's constant, compared to any sinusoidal variations. \PACS{04.20.Cv \and 04.80.-y \and 02.50.-r }
\label{intro} In 2015, Anderson et al~\cite{Anderson15a} found evidence for periodicities in the measured values of Newton's gravitational constant ($G$) (using data compiled in~\cite{Schlamminger}) with a period of 5.9 years. They also noted that similar variations have been seen in the length of a day~\cite{Holme}, and hence there is a possible correlation between the two. However, these results have been disputed by Pitkin~\cite{Pitkin15} (hereafter P15). In P15, he examined four different models for the observed values of $G$ and showed using Bayesian model comparison tests that the logarithm of the Bayes factor for a model with constant offset and Gaussian noise compared to sinusoidal variations in $G$ is about 30 (See Table 1 of P15). The analysis was done by considering both a uniform prior and a Jeffreys prior for the parameters. Therefore, from his analysis the data is better fit by a constant offset and an unknown Gaussian noise component. Thereafter, Anderson et al responded to this in a short note~\cite{Anderson15b}, pointing out that they were unable to replicate the claims in P15 using minimization of the L1 norm, and they stand by their original claims of sinusoidal variations in $G$. Another difference between the analyses of P15 and Anderson et al is that P15 marginalized over the errors in the measurement times of $G$, whereas these errors in the measurement times were ignored by Anderson et al. A periodicity search was also done by Schlamminger et al~\cite{Schlamminger}, which involved minimization of both the L1 and L2 norms. They also argue that a sinusoidal variation with a period of 5.9 years provides a better fit than a straight line. However, the chi-square probabilities for all their models are very small (See Table III of ~\cite{Schlamminger}). 
To resolve this imbroglio, we re-analyze the same data using maximum likelihood analysis (both with and without the errors in the measurement times of $G$) and perform frequentist model comparison tests between different models. Our analysis is therefore complementary to that of Anderson et al~\cite{Anderson15a} and Schlamminger et al~\cite{Schlamminger}.
The current consensus among the physics and astrophysics community is that measurements of Newton's gravitational constant ($G$) have no time dependence or any correlations with environmental parameters. However, this paradigm has recently been challenged by Anderson and collaborators~\cite{Anderson15a}. They have detected sinusoidal variations in the measurements of Newton's constant ($G$) (compiled over the last 35 years from 1981) with a period of 5.9 years~\cite{Anderson15a,Anderson15b}. Similar periodic variations have also been observed in measurements of the length of the day~\cite{Holme}. Therefore, Anderson et al have argued that there is some systematic effect in measurements of $G$ that is connected with the same mechanism that causes variations in the length of the day. However, these results have been disputed by Pitkin~\cite{Pitkin15}, who has shown, using Bayesian model comparison and a suitable choice of priors for the different model parameters, that a model with a constant offset and an unknown systematic uncertainty fits the data better than any sinusoidal variations, with the Bayes factor between the two hypotheses having a value equal to $e^{30}$. Therefore, the analysis by Pitkin contradicts the claims by Anderson et al. In this letter, we have carried out a complementary analysis of the same dataset to resolve the above conflicting claims between the two groups of authors. We have performed frequentist model comparison tests of the same measurements, both with and without the errors in the measurement times. We examined four hypotheses similar to those in Ref.~\cite{Pitkin15}: a constant offset; a constant offset augmented by an unknown systematic uncertainty; sinusoidal variations; and sinusoidal variations with an unknown systematic uncertainty. For each of these models, we found the best-fit parameters and then calculated the chi-square probabilities for each of these and chose the best model as the one with the largest chi-square probability. 
This is the standard procedure followed in frequentist model comparison, which is complementary to the Bayesian model comparison analysis done by Pitkin. We find in agreement with Pitkin that the best model is the one with a constant offset in measurements of $G$ along with an unknown systematic offset. Therefore, there is no evidence for any sinusoidal variations in the measurements of $G$.
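As a sketch of this frequentist procedure, the example below fits two of the four hypotheses (a constant, and a sinusoid with the period fixed at 5.9 years) to synthetic $G$-like data and ranks them by chi-square probability. The data here are randomly generated and purely illustrative; they are not the actual compilation of $G$ measurements.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

rng = np.random.default_rng(42)

# Synthetic stand-in for the compilation of G measurements:
# times in years; values in units of 1e-11 m^3 kg^-1 s^-2.
t = np.sort(rng.uniform(1981.0, 2016.0, 25))
G0, sigma = 6.674, 3.0e-4
G = G0 + rng.normal(0.0, sigma, t.size)
err = np.full(t.size, sigma)

def constant(t, a):
    return np.full_like(t, a)

def sinusoid(t, a, A, phi, P=5.9):  # period held fixed at 5.9 yr
    return a + A * np.sin(2.0 * np.pi * t / P + phi)

def gof_probability(model, p0):
    """Chi-square probability of the best fit for one hypothesis."""
    popt, _ = curve_fit(model, t, G, p0=p0, sigma=err,
                        absolute_sigma=True, maxfev=20000)
    chisq = np.sum(((G - model(t, *popt)) / err) ** 2)
    dof = t.size - len(popt)
    return chi2.sf(chisq, dof)  # larger p = better fit

p_const = gof_probability(constant, [6.674])
p_sine = gof_probability(sinusoid, [6.674, 1.0e-4, 0.0])
```

The two hypotheses with an unknown systematic uncertainty would be handled by adding that uncertainty in quadrature to `err` and refitting; the preferred model is simply the one with the largest chi-square probability.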
arXiv:1607.03845
1607.06467_arXiv.txt
Motivated by the claimed detection of a large population of faint active galactic nuclei (AGN) at high redshift, recent studies have proposed models in which AGN contribute significantly to the $z>4$ \HI\ ionizing background. In some models, AGN are even the chief sources of reionization. If correct, these models would necessitate a complete revision of the standard view that galaxies dominated the high-redshift ionizing background. It has been suggested that AGN-dominated models can better account for two recent observations that appear to be in conflict with the standard view: (1) large opacity variations in the $z\sim 5.5$ \HI\ Ly$\alpha$ forest, and (2) slow evolution in the mean opacity of the \HeII\ Ly$\alpha$ forest. Large spatial fluctuations in the ionizing background, arising from the brightness and rarity of AGN, may account for the former, while the earlier onset of \HeII\ reionization in these models may account for the latter. Here we show that models in which AGN emissions source $\gtrsim 50\%$ of the ionizing background generally provide a better fit to the observed \HI\ Ly$\alpha$ forest opacity variations than standard galaxy-dominated models. However, we argue that these AGN-dominated models are in tension with constraints on the thermal history of the intergalactic medium (IGM). Under standard assumptions about the spectra of AGN, we show that the earlier onset of \HeII\ reionization heats the IGM well above recent temperature measurements. We further argue that the slower evolution of the mean opacity of the \HeII\ Ly$\alpha$ forest relative to simulations may reflect deficiencies in current simulations rather than favor AGN-dominated models, as has been suggested.
Over the course of several decades, a standard view has emerged in which galaxies produced the dominant contribution of \HI\ ionizing photons at redshifts $z>4$ \citep[e.g.][]{1987ApJ...321L.107S,2009ApJ...703.1416F,2012ApJ...746..125H,2013MNRAS.436.1023B}. This view is based on numerous measurements of the AGN luminosity function which show a steep decline in the AGN abundance at $z>3$ \citep[e.g.][]{2006AJ....132..117F,2010AJ....139..906W,2013ApJ...768..105M,2015MNRAS.453.1946G}. Recently, the standard view has been challenged by some authors citing a large sample of faint AGN candidates reported by \citet[][G2015 hereafter]{2015A&A...578A..83G}. These objects were selected by searching for X-ray flux coincident with $z>4$ galaxy candidates in the CANDELS GOODS-South field -- a technique that in principle allows the detection of fainter AGN compared to prior selection methods. If confirmed, this large population of faint AGN could necessitate a substantial revision of our understanding of the high-$z$ ionizing background, and possibly even of cosmological reionization. Indeed, \citet{2015ApJ...813L...8M} showed that AGN with ionizing emissivities consistent with the G2015 measurements could reionize intergalactic \HI\ by $z\approx6$ without any contribution from galaxies (see also \citealt{2016MNRAS.457.4051K} and \citealt{2016arXiv160204407Y}). A distinguishing feature of their model is that \HeII\ reionization ends by $z \approx 4$, at least 500 million years earlier than in the standard scenario in which it ends at $z\approx 3$ \citep[see e.g.][and references therein]{2015arXiv151200086M}. An AGN-sourced ionizing background at $z>4$ potentially explains three puzzling observations of the IGM. First, \HI\ Ly$\alpha$ forest\footnote{From here on, the term ``Ly$\alpha$ forest'' refers to \HI.} measurements show that the \HI\ photoionization rate, $\GammaHI$, is remarkably flat over the redshift range $2<z<5$ \citep[see e.g.][]{2013MNRAS.436.1023B}.
This flatness is traditionally explained by invoking a steep increase in the escape fraction of galaxies with redshift, coinciding with the decline of the AGN abundance at $z>3$ \citep[e.g.][]{2012ApJ...746..125H}. \citet{2015ApJ...813L...8M} showed that the slower evolution of the AGN emissivity claimed by G2015 can more naturally account for the observed flatness of $\GammaHI$ without appealing to a coincidental transition between the AGN and galaxy populations. Second, measurements of the mean opacity of the \HeII\ Ly$\alpha$ forest at $z=3.1-3.3$ are lower than predictions from existing simulations of \HeII\ reionization, which use standard quasar emissivity models with \HeII\ reionization ending at $z\approx3$ \citep{2014arXiv1405.7405W}. \citet{2015ApJ...813L...8M} suggested that such low opacities are more consistent with an earlier onset of \HeII\ reionization driven by a large population of high-$z$ AGN. Lastly, recent observations by \citet{2015MNRAS.447.3402B} show that the dispersion of opacities among coeval $50h^{-1}\Mpc$ segments of the Ly$\alpha$ forest increases rapidly above $z>5$, significantly exceeding the dispersion predicted by models that assume a uniform ionizing background. In a companion paper \citep[][Paper I hereafter]{2016arXiv161102711D}, we show that accounting for this dispersion with spatial fluctuations in the ionizing background, under the standard assumption that galaxies are the dominant sources, requires that the mean free path of \HI\ ionizing photons be significantly shorter than observations and simulations indicate \citep[see also][]{2015MNRAS.447.3402B,2015arXiv150907131D}. Alternatively, models in which AGN source the ionizing background naturally lead to large fluctuations owing to the brightness and rarity of AGN.
\citet{2015arXiv150501853C} showed with a ``proof-of-concept'' model that rare sources with a space density of $\sim 10^{-6}~\Mpc^{-3}$, similar to the space density of $>L_*$ AGN in G2015, could generate large-scale ($\sim 50h^{-1}~\Mpc$) opacity variations substantial enough to account for the observed dispersion at $z=5.8$. During the final preparation of this manuscript, \citet{2016arXiv160608231C} used more realistic models to show that a $\gtrsim 50\%$ contribution from AGN to the ionizing background is sufficient to account for the observed dispersion. We reach a similar conclusion in this paper. The purpose of this paper is to further elucidate the implications of a large AGN population at high redshifts. To this end we will discuss three observational probes of AGN-dominated models of the high-$z$ ionizing background: (1) We will develop empirically motivated models of the Ly$\alpha$ forest in scenarios where AGN constitute a significant fraction of the background. We will then use these models to assess the possible contribution of AGN to the $z>5$ Ly$\alpha$ forest opacity fluctuations; (2) We will quantify the implications of these models for \HeII\ reionization and for the thermal history of the IGM -- a facet that has yet to be discussed in the literature; (3) We will discuss the interpretation of recent \HeII\ Ly$\alpha$ forest opacity measurements in the context of these models. Foreshadowing our conclusions, we will show that, while AGN-dominated models are indeed a viable explanation for the Ly$\alpha$ forest opacity measurements of \citet{2015MNRAS.447.3402B}, the models that best match the measurements are qualitatively inconsistent with constraints on the thermal history of the IGM under standard assumptions about the spectra of faint AGN.
We will further argue that the discrepancy between the opacities observed in the $z \approx3.1 - 3.3$ \HeII\ Ly$\alpha$ forest and those in current \HeII\ reionization simulations may reflect deficiencies in the simulations rather than favor AGN-dominated models. The remainder of this paper is organized as follows. In Section \ref{SEC:lumfunc} we present a comparison of the AGN luminosity function of G2015 to other measurements in the literature. Section \ref{SEC:opacityflucs} is dedicated to models of the Ly$\alpha$ forest opacity fluctuations, while \S \ref{SEC:thermalhistory} explores the impact of high-$z$ AGN on \HeII\ reionization and the thermal history of the IGM. Section \ref{SEC:HeIIforest} discusses the interpretation of recent \HeII\ Ly$\alpha$ forest opacity measurements. Finally, in \S \ref{SEC:conclusion} we offer closing remarks. All distances are reported in comoving units unless otherwise noted. We assume a flat $\Lambda$CDM cosmology with $\Omega_m=0.31$, $\Omega_b=0.048$, $h=0.68$, $\sigma_8=0.82$, $n_s=0.97$, and $Y_{\mathrm{He}}=0.25$, consistent with recent measurements \citep{2015arXiv150201589P}.
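The segment-opacity statistic at the heart of probe (1) is simple to state: divide each sightline's normalized flux into $50h^{-1}\Mpc$ pieces and compute the effective optical depth $\tau_{\rm eff}=-\ln\langle F\rangle$ of each. A minimal sketch, using a toy lognormal optical-depth field rather than the calibrated forest models of the paper:

```python
import numpy as np

def tau_eff_segments(flux, pix_per_seg):
    """Effective optical depth tau_eff = -ln<F> of consecutive
    segments of a normalized Ly-alpha forest flux array."""
    n_seg = flux.size // pix_per_seg
    seg = flux[: n_seg * pix_per_seg].reshape(n_seg, pix_per_seg)
    return -np.log(seg.mean(axis=1))

rng = np.random.default_rng(0)
# Toy sightline: lognormal pixel optical depths, a crude stand-in
# for a simulated forest (no radiative transfer, no UVB fluctuations).
tau_pix = rng.lognormal(mean=1.0, sigma=1.0, size=40_000)
flux = np.exp(-tau_pix)
taus = tau_eff_segments(flux, pix_per_seg=400)  # 100 segments
scatter = taus.std()
```

A fluctuating (AGN-dominated) background would be mimicked by modulating `tau_pix` segment by segment with a spatially varying photoionization rate ($\tau\propto 1/\GammaHI$), which broadens the distribution of `taus`.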
\label{SEC:conclusion} Several recent studies have proposed models in which AGN contribute significantly to (or even dominate) the $z\gtrsim 5$ \HI\ ionizing background. Among the main motivations for these models is that they may reconcile recent observations showing large opacity fluctuations in the $z\approx5.5$ \HI\ Ly$\alpha$ forest \citep{2015MNRAS.447.3402B}, and a slow evolution in the mean opacity of the $z\approx 3.1-3.3$ \HeII\ Ly$\alpha$ forest. Here we have considered several facets of AGN-driven models of the ionizing background: (1) We have quantified the amplitude of $z\approx 5.5$ \HI\ Ly$\alpha$ forest opacity fluctuations in models with varying levels of contribution from AGN; (2) We have investigated the implications of these models for cosmological \HeII\ reionization and for the thermal history of the IGM; (3) We have discussed the interpretation of recent $z\approx 3.1-3.3$ \HeII\ Ly$\alpha$ forest opacity measurements in the context of these models. We found that a model in which $\approx 50\%$ of the \HI\ ionizing background is sourced by AGN generally provides a better fit to the observed Ly$\alpha$ forest opacity fluctuation distribution across $z\approx 5-5.7$ compared to a model in which only galaxies source the background. However, even this $50\%$ AGN model struggles to account for the highest opacity measurements. We found that doing so requires that essentially all ($\gtrsim 90\%$) of the ionizing background be produced by AGN, in a similar vein to the model proposed recently by \citet{2015ApJ...813L...8M}. These results are generally consistent with the findings of \citet{2016arXiv160608231C}, which appeared during the final preparations of this manuscript. An important caveat to this work is that we have adopted a model for the mean free path that is consistent with the recent measurements of \citet{2014MNRAS.445.1745W}.
In Paper I, we argued that these measurements may be biased significantly by the enhanced ionizing flux in quasar proximity zones. We note that such a bias could reduce the contribution from AGN that is required to account for the observed amplitude of Ly$\alpha$ opacity fluctuations. Since AGN are expected to have harder spectra than galaxies at \HeII\ ionizing energies of $\geq 4$ Ry, a unique prediction of AGN-driven models is that cosmological \HeII\ reionization occurs earlier than in the standard scenario \citep{2015ApJ...813L...8M}. Thus the impact of this reionization event on the thermal state of the IGM provides an independent way to constrain these models. We showed that, under standard assumptions about the spectra of AGN, the earlier \HeII\ reionization heats the IGM well above the most recent temperature measurements, even for models in which AGN contribute modestly ($\approx 25\%$) to the \HI\ ionizing background. We are thus led to conclude that AGN-dominated models are disfavored by the temperature measurements. Our results strongly disfavor the scenario proposed by \citet{2015ApJ...813L...8M} in which AGN emissions alone are enough to reionize intergalactic \HI\ by $z\approx5.5$. Finally, some authors have argued that the slow evolution in the mean opacity of the $z\approx 3.1-3.3$ \HeII\ Ly$\alpha$ forest implies that \HeII\ reionization ended earlier or was more extended than is expected in standard quasar source models \citep{2014arXiv1405.7405W,2015ApJ...813L...8M}. We argued that the \HeII\ reionization simulations that were used to establish this discrepancy should over-predict the opacity at high redshifts. We also argued that the opacity found in these simulations should be most dependent on the assumed quasar lightcurve model. Thus the measurements of \citet{2014arXiv1405.7405W} are not necessarily in conflict with standard quasar models (and a late \HeII\ reionization).
Therefore, they should not yet be interpreted as evidence in favor of the abundant high-$z$ AGN models considered here. Combined with direct searches to better characterize the high-$z$ faint AGN population, future measurements of the Ly$\alpha$ forest opacity and of the IGM temperature will more tightly constrain the contribution of AGN to the ionizing background, as well as their contribution to reionization.
arXiv:1607.06467
1607.04654_arXiv.txt
A supernova remnant (SNR), the aftermath of a supernova explosion, is an important phenomenon of study in astrophysics. The typical $10^{51}$ erg of energy released in the explosion is transferred primarily into the interstellar medium during the course of the evolution of a SNR. SNRs are also valuable as tools to study the evolution of stars, of the Galaxy, and of the interstellar medium. A SNR emits in X-rays from its hot shocked gas, in the infrared from heated dust, and in the radio continuum, the latter via synchrotron emission from relativistic electrons accelerated at the SNR shock. The evolution of a single SNR can be studied and calculated using a hydrodynamics code. However, to study the physical conditions of large numbers of SNRs, it is desirable to have analytic methods to obtain the input parameters needed to run a detailed hydrodynamic simulation. This short paper describes the basic ideas behind the analytic methods, the creation of software to carry out the calculations, and some new results of the calculations.
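As an illustration of the kind of analytic estimate the paper alludes to (not its actual software), the adiabatic Sedov-Taylor stage gives the blast-wave radius $R=\xi\,(E t^2/\rho)^{1/5}$, with $\xi\simeq1.15$ for $\gamma=5/3$; the explosion energy of $10^{51}$ erg and ambient density of 1 cm$^{-3}$ below are standard fiducial values assumed for the example:

```python
# CGS constants (standard values)
M_H = 1.6726e-24   # hydrogen mass, g
YR = 3.156e7       # year, s
PC = 3.086e18      # parsec, cm

def sedov_radius(t_yr, E51=1.0, n0=1.0, xi=1.15):
    """Sedov-Taylor blast-wave radius R = xi (E t^2 / rho)^(1/5), in pc.
    E51: explosion energy in units of 1e51 erg; n0: ambient density in cm^-3."""
    E = E51 * 1.0e51
    rho = n0 * M_H
    t = t_yr * YR
    return xi * (E * t**2 / rho) ** 0.2 / PC

def shock_velocity(t_yr, **kw):
    """Shock speed v = dR/dt = (2/5) R/t during the Sedov stage, in km/s."""
    r_cm = sedov_radius(t_yr, **kw) * PC
    return 0.4 * r_cm / (t_yr * YR) / 1.0e5
```

For the fiducial values this yields a radius of roughly 5 pc and a shock speed of roughly 2000 km/s at an age of 1000 yr, the sort of quick estimate used to seed a detailed hydrodynamic run.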
arXiv:1607.04654
1607.03242_arXiv.txt
{Dense cloud cores present chemical differentiation due to the different distribution of C-bearing and N-bearing molecules, the latter being less affected by freeze-out onto dust grains. In this letter we show that two C-bearing molecules, CH$_3$OH and $c$-C$_3$H$_2$, present strikingly different (complementary) morphologies while showing the same kinematics toward the prestellar core L1544. After comparing their distribution with the large-scale H$_2$ column density map, $N$(H$_2$), from the {\em Herschel} satellite, we find that these two molecules trace different environmental conditions in the surroundings of L1544: the $c$-C$_3$H$_2$ distribution peaks close to the southern part of the core, where the surrounding molecular cloud has a sharp edge in $N$(H$_2$), while CH$_3$OH mainly traces the northern part of the core, where $N$(H$_2$) presents a shallower tail. We conclude that this is evidence of chemical differentiation driven by different amounts of illumination from the interstellar radiation field: in the South, photochemistry maintains more C atoms in the gas phase, allowing carbon-chain (such as $c$-C$_3$H$_2$) production; in the North, C is mainly locked in CO, and methanol traces the zone where CO starts to freeze out significantly. During the process of cloud contraction, different gas and ice compositions are thus expected to mix toward the central regions of the core, where a potential Solar-type system will form. An alternative view on carbon-chain chemistry in star-forming regions is also provided.}
Prestellar cores represent the initial conditions of star formation and are precious laboratories to study physical and chemical processes away from the complications due to protostellar feedback. Their chemical composition provides the starting point out of which the future protoplanetary disk and stellar system will form. Molecular studies also provide important clues on the dynamical status and evolution. Based on spectroscopic data, \cite{ket15} in fact concluded that the prestellar core L1544 is slowly contracting and that its kinematics is consistent neither with singular-isothermal-sphere \citep{shu77} nor with Larson-Penston \citep{lar69, pen69} contraction. \cite{cas99, cas02a} showed that CO is heavily ($>$99\%) frozen onto the surface of dust grains in the central 6500\,AU of L1544, while N$_2$H$^+$ and, in particular, N$_2$D$^+$ better follow the millimetre dust continuum emission, showing no clear signs of depletion (see also Crapsi et al. 2007 for similar conclusions on NH$_2$D). \cite{taf02, taf04, taf06} extended this study to more starless cores and more molecular species. They found a systematic molecular differentiation, pointing out that C-bearing species, such as CO, CS, C$_2$S, CH$_3$OH, C$_3$H$_2$, and HC$_3$N, are severely affected by freeze-out, showing a sharp central hole (with some differences in hole size depending on the particular molecule), while N$_2$H$^+$ and NH$_3$ seem present in the gas phase at the core centres. Carbon-chain molecules are known to preferentially trace starless and less evolved cores \citep{suz92}, where C atoms have not yet been mainly locked in CO. CCS has also been found at the outer edge of L1544 \citep{oha99} and, in the case of L1551, with a clear velocity shift with respect to the core LSR velocity \citep{swi06}, indicating possible accretion of material from the more diffuse molecular cloud in which the dense core is embedded.
Although gas-grain chemical models (of spherically symmetric starless cores) can explain the observed differential distribution of C-bearing and N-bearing molecules based on the chemical/dynamical evolution (e.g. Aikawa et al. 2001), observational and theoretical studies have so far not discussed possible differences in the distribution of C-bearing molecules affected by central freeze-out. Here we report on the distribution of methanol (CH$_3$OH ) and cyclopropenylidene ($c$-C$_3$H$_2$) toward L1544 and show that their complementary distribution can be understood if the non-homogeneous environmental conditions are taken into account. The recent CH$_3$OH map of \cite{biz14} shows an asymmetric structure, consistent with central depletion and preferential release in the region where CO starts to significantly freeze out (CH$_3$OH is produced on the surface of dust grains via successive hydrogenation of CO; e.g. Watanabe $\&$ Kouchi 2002). In particular, the CH$_3$OH column density map presents a clear peak toward the North of the millimetre dust continuum peak (see Figure 1 of Bizzocchi et al. 2014), thus revealing a non-uniform distribution around the ring-like structure. With the aim of exploring chemical processes across L1544, we mapped the emission of another carbon-bearing molecule, $c$-C$_3$H$_2$, representative of carbon-chain chemistry \citep{sak08}. In this paper we investigate the effects of physical parameters on the distribution of methanol and cyclopropenylidene in L1544, and the influence of environmental effects on the gas- and grain-phase chemistry.
\subsection{Chemical differentiation in L1544} In Figure \ref{fig:methanol}, the integrated intensities of both the 2$_{1,2}$-1$_{1,1}$ ($E_2$) transition of methanol from \cite{biz14} (red contours), and the 3$_{2,2}$-3$_{1,3}$ transition of cyclopropenylidene (black contours), are superimposed on the H$_2$ column density map derived from far-infrared images observed by {\em Herschel} \citep{Hershel}. The white box defines the area mapped with the 30m telescope. The $N$(H$_2$) map shows a sharp and straight edge toward the South and South-West part of the cloud, which marks the edge of the filamentary cloud within which L1544 is embedded (see also Tafalla et al. 1998). Therefore, this side of the dense core should be more affected by the photochemistry activated by the interstellar radiation field (ISRF). In fact, methanol shows a complementary distribution with respect to cyclopropenylidene. Is this expected based on our understanding of the chemistry of these two molecules? \begin{figure} \centering \includegraphics [width=0.45\textwidth]{scatter_prova.png} \caption{Scatter plot of the integrated intensities of the 2$_{1,2}$-1$_{1,1}$ ($E_2$) CH$_3$OH line (blue circles) and the 3$_{2,2}$-3$_{1,3}$ $c$-C$_3$H$_2$ line (green circles) with respect to the H$_2$ column density inferred from the SPIRE observations (see text). The average error bars are shown in the upper part of the plot. The average errors are 14 mK km s$^{-1}$ for CH$_3$OH and 4 mK km s$^{-1}$ for $c$-C$_3$H$_2$. Only pixels where the ratio of the integrated intensity to the average error is larger than 3 are plotted. } \label{fig:scatter} \end{figure} \begin{figure} \centering \includegraphics [width=0.45\textwidth]{Figure2.png} \caption{Column densities of H$_2$, $c$-C$_3$H$_2$ and CH$_3$OH extracted along the dotted line present in Figure \ref{fig:methanol}, as well as the $N$(H$_2$) calculated with the model of L1544 described in \cite{ket10} smoothed at 40\arcsec.
While the $N$(H$_2$) presents a sharp drop towards the South-West, its decrease is not as steep towards the North-East. The resulting different illumination on the two sides of L1544 is likely to cause the different distribution of cyclopropenylidene and methanol within the core. } \label{fig:plot} \end{figure} Methanol is believed to be formed on dust grains \citep{wat02} by subsequent hydrogenation of carbon monoxide, and its detection towards prestellar cores is already a challenge for current models given the absence of efficient desorption processes in these sources. Thermal desorption is out of the question, because of the low dust temperature. Recent laboratory studies show that the photo-desorption of methanol from ices is also negligible \citep{ber16,cru16}: the main desorption products when irradiating pure and mixed methanol ices are photo-fragments of methanol. An alternative route to explain the presence of methanol in the gas phase is the reactive/chemical desorption that has been theoretically proposed by \cite{gar07} and \cite{vas13} and experimentally studied by \cite{dul13} and \cite{min16}. On the other hand, $c$-C$_3$H$_2$ is mainly formed in the gas phase (e.g. Spezzano et al. 2013) and it should preferentially trace dense and ``chemically young'' gas, i.e. gas where C atoms have not yet been mainly locked into CO. This C-atom-rich gas is expected in the outer envelope of an externally illuminated dense core (e.g. Aikawa et al. 2001). However, toward L1544, $c$-C$_3$H$_2$ only appears to trace one side of the core, the one closer to the sharp $N$(H$_2$) edge and away from the CH$_3$OH peak. This indicates that photo-chemistry is not uniformly active around L1544, most likely due to the fact that the distribution of the envelope material (belonging to the filament within which L1544 is embedded) is not uniform, as clearly shown by the {\em Herschel} map in Figure 1.
Figure \ref{fig:methanol} shows that methanol traces a region farther away from the Southern sharp edge of the $N$(H$_2$) map, possibly more shielded from the ISRF and where most of the carbon is locked in CO. CH$_3$OH is preferentially found at the northern edge of L1544 because here photochemistry does not play a major role (so C is locked in CO) and the density is low enough to maintain a higher fraction of CH$_3$OH in the gas phase, while still being above the threshold value for CO freeze-out, a few $\times$ 10$^4$ cm$^{-3}$ \citep{cas99}. Based on the \cite{ket10} model (updated by Keto et al. 2014), the volume density at the distance of the methanol peak is predicted to be 8$\times$10$^4$ cm$^{-3}$, which is just above the threshold value. In contrast, cyclopropenylidene has its most prominent peak toward the Southern sharp edge of the H$_2$ column density and extends along the semi-major axis of the core, almost parallel to the South-West edge of the $N$(H$_2$) map. This behaviour is also clearly shown in Figure \ref{fig:scatter}, where the integrated intensities of both methanol and cyclopropenylidene are plotted against $N$(H$_2$). $c$-C$_3$H$_2$ is in fact present also at lower $N$(H$_2$) values with respect to methanol, and it maintains a flat intensity profile, suggestive of a layer-like structure, with no significant increase toward the core center. The CH$_3$OH intensity instead shows a sharp rise up to column densities of about 1.6$\times$10$^{22}$ cm$^{-2}$ and declines at higher values. Figure \ref{fig:plot} shows the column densities of H$_2$, $c$-C$_3$H$_2$ and CH$_3$OH extracted along the dotted line present in Figure \ref{fig:methanol}, as well as the $N$(H$_2$) calculated with the model of L1544 described in \cite{ket10} assuming a beam size of 40\arcsec. The column densities of $c$-C$_3$H$_2$ and CH$_3$OH have been calculated assuming that the lines are optically thin, using the formula given in Equation (1) of \cite{spe16}.
We assumed a T$_{ex}$ of 6 K for $c$-C$_3$H$_2$ and 6.5 K for CH$_3$OH, as done in \cite{spe13} and \cite{biz14}, respectively. In the same works it is reported that both lines present moderate optical depths ($\tau < 0.4$). This plot shows that the decrease of $N$(H$_2$) towards the South-West, where cyclopropenylidene is more abundant, is much steeper than towards the North-East, where methanol is more abundant. A full map of the $N$($c$-C$_3$H$_2$)/$N$(CH$_3$OH) column density ratio can be seen in Figure \ref{fig:ratio}, showing a clear peak toward the South-East side of L1544. Figure \ref{fig:plot} also shows that both $c$-C$_3$H$_2$ and CH$_3$OH belong to the same dense core (identified by the brightest peak in $N$(H$_2$)), while tracing different parts of it. We obtain the same result by comparing the line-width and velocity maps of both molecules, see Fig.~\ref{fig:vlsr}. Despite the different spatial distributions, the two molecules trace the same kinematic patterns (velocity gradient and amount of non-thermal motions). This indicates that the velocity fields are dominated by the bulk motions (gravitational contraction and rotation) of the prestellar core, which similarly affect the two sides of the core and do not depend on the chemical composition of the gas. In summary, both $c$-C$_3$H$_2$ and CH$_3$OH trace different parts of the same dense core and no velocity shift is present (unlike in the L1551 case; Swift et al. 2006). We note that the observed transitions of $c$-C$_3$H$_2$ and CH$_3$OH have relatively high critical densities (between a few $\times$ 10$^4$ and 10$^6$\,cm$^{-3}$), so that these lines are not expected to trace the more diffuse filamentary material surrounding the prestellar core.
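The optically thin column-density estimate described above (an integrated intensity plus an assumed $T_{ex}$) can be sketched as follows, in the Rayleigh-Jeans limit and neglecting the background term. This is a generic version of such formulas, not necessarily Equation (1) of \cite{spe16}, and all molecular constants in the example call (frequency, $A_{ul}$, $g_u$, $E_u$, partition function) are illustrative placeholders rather than the actual $c$-C$_3$H$_2$ values.

```python
import numpy as np

K_B = 1.3807e-16   # Boltzmann constant, erg/K
H = 6.6261e-27     # Planck constant, erg s
C = 2.9979e10      # speed of light, cm/s

def column_density(W, nu, A_ul, g_u, E_u_K, Q_Tex, T_ex):
    """Total column density (cm^-2) from one optically thin line.

    Assumes the Rayleigh-Jeans limit and neglects the background
    term.  W: integrated intensity (K km/s); nu: rest frequency (Hz);
    A_ul: Einstein A coefficient (s^-1); E_u_K: upper-level energy (K);
    Q_Tex: partition function evaluated at T_ex (K)."""
    W_cgs = W * 1.0e5  # K km/s -> K cm/s
    # Upper-level column density from the integrated intensity
    N_u = 8.0 * np.pi * K_B * nu**2 * W_cgs / (H * C**3 * A_ul)
    # Boltzmann correction from the upper level to the total column
    return N_u * (Q_Tex / g_u) * np.exp(E_u_K / T_ex)

# Example with placeholder spectroscopic constants (T_ex = 6 K as in the text)
N = column_density(W=1.0, nu=84.7e9, A_ul=2.0e-5,
                   g_u=7, E_u_K=16.0, Q_Tex=30.0, T_ex=6.0)
```

The estimate scales linearly with the integrated intensity, so relative spatial variations of the maps translate directly into relative column-density variations as long as the lines stay optically thin.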
\subsection{Carbon-Chain Chemistry} The above results show that Carbon-Chain Chemistry (CCC) is active in the chemically and dynamically evolved prestellar core L1544 in the direction of its maximum exposure to the ISRF (witnessed by the sharp edge in the H$_2$ column density map; see Fig. 1). Based on our observations, we conclude that CCC can also be present in slowly contracting clouds such as L1544 (Keto et al. 2015) and that the C atoms driving the CCC can partially deplete onto dust grain surfaces, producing solid CH$_4$, which is the origin of the so-called Warm-Carbon-Chain-Chemistry (WCCC; Sakai \& Yamamoto 2013) when this CH$_4$ is evaporated in the proximity of newly born young stellar objects. This apparently contradicts the suggestion that WCCC becomes active when the parent core experiences a fast contraction on a time scale close to that of free fall \citep{sak08, sak09}. In these previous papers, fast contraction is invoked to allow C atoms to deplete onto dust grains before they are converted to CO in the gas phase \citep{sak13}, so that a substantial amount of solid CH$_4$ is produced. Based on the results of our observations, we propose an alternative view of the WCCC origin, one that does not depend on the dynamical evolution: cores embedded in low-density (and low $A_V$) environments, where the ISRF maintains a large fraction of the carbon in atomic form in most of the surrounding envelope, become rich in solid CH$_4$ and carbon chains, the precursors to WCCC. Cores such as L1544, which are embedded in non-uniform clouds, with non-uniform amounts of illumination from the ISRF, should have mixed ices, in which CH$_4$ and CH$_3$OH ices coexist. It is interesting to note that CH$_3$OH has recently been found to correlate with the C-chain molecule C$_4$H toward embedded protostars \citep{gra16}, suggesting that these two classes of molecules can indeed coexist. We suggest that the sources studied by Graninger et al.
(2016) had non-uniform environments similar to that around L1544, whereas the WCCC sources formed within prestellar cores with most of their envelope affected by photochemistry (due to a lower overall H$_2$ column density or extinction toward their outer edges). We are now extending our study of CH$_3$OH and $c$-C$_3$H$_2$ to a larger sample of starless cores and linking the results to {\em Herschel} $N$(H$_2$) maps, to see whether the conclusion reached for L1544 can be extended to other regions.
arXiv:1607.03242
1607.01247_arXiv.txt
General features of spontaneous baryogenesis are studied. The relation between the time derivative of the (pseudo-)Goldstone field and the baryonic chemical potential is revisited. It is shown that this relation essentially depends upon the representation chosen for the fermionic fields carrying non-zero baryonic number (quarks). The calculations of the cosmological baryon asymmetry are based on a kinetic equation generalized to the case of a non-stationary background. The effects of a finite interval of integration over time are also taken into account. All these effects combined lead to a noticeable deviation of the magnitude of the baryon asymmetry from the canonical results.
The usual approach to cosmological baryogenesis is based on the three well-known Sakharov conditions~\cite{ads}: a)~nonconservation of baryonic number; b) breaking of C and CP invariance; c) deviation from thermal equilibrium. There are, however, some interesting scenarios of baryogenesis for which one or several of the above conditions are not fulfilled. A very popular scenario is the so-called spontaneous baryogenesis (SBG) proposed in refs~\cite{spont-BG-1,spont-BG-2,spont-BG-3}; for reviews see e.g.~\cite{BG-rev,AD-30}. The term ``spontaneous'' refers to the spontaneous breaking of an underlying symmetry of the theory. It is assumed that in the unbroken phase the Lagrangian is invariant with respect to a global $U(1)$-symmetry, which ensures conservation of the total baryonic number, including that of the Higgs-like field, $\Phi$, and of the matter fields (quarks). This symmetry is supposed to be spontaneously broken, and in the broken phase the Lagrangian density acquires the term \be {\cal L}_{SB} = (\partial_{\mu} \theta) J^{\mu}_B\, , \label{L-SB} \ee where $\theta$ is the Goldstone field, i.e.\ the phase of the field $\Phi$, and $J^{\mu}_B$ is the baryonic current of the matter fields (quarks). Depending upon the form of the interaction of $\Phi$ with the matter fields, the spontaneous symmetry breaking (SSB) may lead to nonconservation of the baryonic current of matter. If this is not so and $J^{\mu}_B$ is conserved, then integrating eq.~(\ref{L-SB}) by parts we obtain a vanishing expression, and hence the interaction~(\ref{L-SB}) is unobservable. The next step in the implementation of the SBG scenario is the conjecture that the Hamiltonian density corresponding to ${\cal L}_{SB}$ is simply the Lagrangian density taken with the opposite sign: \be {\cal H}_{SB} = - {\cal L}_{SB} = - (\partial_{\mu} \theta) J^{\mu}_B\, .
\label{H-SB} \ee This could be true, however, if the Lagrangian depended only on the field variables but not on their derivatives, as it is argued below. For the time being we neglect the complications related to the questionable identification (\ref{H-SB}) and proceed further in description of the SBG logic. For the spatially homogeneous field $\theta = \theta (t)$ the Hamiltonian (\ref{H-SB}) is reduced to ${\cal H}_{SB} = - \dot \theta\, n_B$, where $n_B\equiv J^4_B$ is the baryonic number density of matter, so it is tempting to identify $\dot \theta$ with the chemical potential, $\mu$, of the corresponding system, see e.g.~\cite{LL-stat}. If this is the case, then in thermal equilibrium with respect to the baryon non-conserving interaction the baryon asymmetry would evolve to: \be n_B =\frac{g_S B_Q}{6} \left({\mu\, T^2} + \frac{\mu^3}{ \pi^2}\right) \rar \frac{g_S B_Q}{6} \left({\dot \theta\, T^2} + \frac{\dot \theta ^3}{ \pi^2}\right)\,, \label{n-B-of-theta} \ee where $T$ is the cosmological plasma temperature, $g_S$ and $B_Q$ are respectively the number of the spin states and the baryonic number of quarks, which are supposed to be the bearers of the baryonic number. It is interesting that for successful SBG two of the Sakharov's conditions for the generation of the cosmological baryon asymmetry, namely, breaking of thermal equilibrium and a violation of C and CP symmetries are unnecessary. This scenario is analogous the baryogenesis in absence of CPT invariance, if the masses of particles and antiparticles are different. In the latter case the generation of the cosmological baryon asymmetry can also proceed in thermal equilibrium~\cite{ad-yab-cpt,ad-cpt}. In the SBG scenario the role of CPT "breaker" plays the external field $\theta (t)$. However, in contrast with the usual saying, the identification $\dot\theta = \mu_B$ is incorrect. 
Indeed, if $\dot \theta (t)$ is constant or slowly varying, then according to eq.~(\ref{H-SB}) it shifts the energies of baryons with respect to those of antibaryons at the same spatial momentum by $\dot\theta$. Thus the number densities of baryons and antibaryons in the plasma would differ even if the corresponding chemical potential vanished. In this case the baryon asymmetry is determined by the effective chemical potential $\mu_{eff} = \mu - \dot\theta$, to be substituted into eq.~(\ref{n-B-of-theta}) instead of $\mu$. The detailed arguments are presented in sec.~\ref{s-kin-eq-canon}. It is also shown there that the baryonic chemical potential tends to zero as the system evolves towards the thermal equilibrium state. So in equilibrium the baryon asymmetry would be non-zero with a vanishing chemical potential. The picture becomes different if we use another representation of the quark fields. Redefining the quark fields by the phase transformation $ Q \rar \exp (i\theta/3 ) Q$, we can eliminate the term (\ref{L-SB}) from the Lagrangian, but it then appears instead in the interaction term which violates B-conservation, see eq.~(\ref{L-of-theta-1}). Clearly in this case $\dot\theta$ is not simply connected to the chemical potential. However, as shown in the present paper, the baryonic chemical potential in this formulation of the theory tends in equilibrium to $c\, \dot\theta$ with a constant coefficient $c$. In any case, as we see from the solution of the kinetic equation presented below, the physically meaningful expression for the baryon asymmetry, $n_B$, expressed through $\theta$, is the same in both of the formulations of the theory mentioned above, though the values of the chemical potentials are quite different. This difference seemingly originates from the inaccurate transition from the Lagrangian ${\cal L}_{SB}$ to the Hamiltonian ${\cal H}_{SB}$ made according to Eq.~(\ref{H-SB}).
Such an identification is true only if the Lagrangian does not depend on the time derivative of the corresponding field, $\theta (t)$ in the case under scrutiny. Related criticism of spontaneous baryogenesis can be found in Ref.~\cite{AD-KF}, see also the review~\cite{AD-30}. Recently the gravitational baryogenesis scenario was suggested~\cite{gravBG-1}, see also \cite{gravBG-papers}. In these works the original SBG model was modified by substituting the curvature scalar $R$ for the Goldstone field $\theta$. With the advent of the $F(R)$-theories of modified gravity, gravitational baryogenesis was studied in their framework~\cite{gravBG-F-of-R} as well. In this paper the classical version of SBG is studied. We present an accurate derivation of the Hamiltonian for a Lagrangian which depends upon the field derivatives. For a constant $\dot\theta$ and a sufficiently large interval of integration over time the results are essentially the same as those obtained in previous considerations. With the account of finite-time effects, which effectively break energy conservation, the outcome of SBG becomes significantly different. We have also considered the impact of a nonlinear time evolution of the Goldstone field: \be \theta = \dot\theta_0 t + \ddot\theta_0 \, t^2/2 \label{theta-Taylor2} \ee and have found that there can be significant deviations from the standard scenario with $ \dot\theta \approx const$. A strong deviation from the standard results is also found for a pseudo-Goldstone field oscillating near the minimum of the potential $U(\theta)$. The paper is organized as follows. In section~\ref{s-ssb} the general features of the spontaneous breaking of the baryonic $U(1)$-symmetry are described, and the (pseudo)Goldstone mode, its equation of motion, as well as the equations of motion of the quarks are introduced. In sec.~\ref{s-H-v-L} the construction of the Hamiltonian density from a known Lagrangian is considered.
Next, in sec.~\ref{s-kin-eq-canon} the standard kinetic equation in a stationary background is presented. Sec.~\ref{s-evol-theta-G} is devoted to the generation of the cosmological baryon asymmetry with an out-of-equilibrium purely Goldstone field. The pseudo-Goldstone case is studied in sec.~\ref{s-psevdo}. In sec.~\ref{s-kin-eq} we derive the kinetic equation in a time-dependent external field and/or for the case when energy is not conserved because of the finite limits of integration over time. Several examples for which such a kinetic equation is relevant are presented in sec.~\ref{s-examples}. Lastly, in sec.~\ref{s-conclude} we conclude.
} To summarize, we have clarified the relation between the Lagrangian and the Hamiltonian in the SBG scenario. We argue that in the standard description $\dot\theta$ is not formally the chemical potential, though in thermal equilibrium the chemical potential may tend to $\dot \theta$ up to a numerical coefficient which depends upon the model. However, this result is not always true, but depends upon the chosen representation of the ``quark'' fields. In the theory described by the Lagrangian (\ref{L-of-theta-1}), which appears ``immediately'' after the spontaneous symmetry breaking, $\theta (t)$ directly enters the interaction term, and in equilibrium $\mu_B \sim \dot\theta$ indeed. On the other hand, if we transform the quark field so that the dependence on $\theta$ is shifted to the bilinear product of the quark fields (\ref{L-of-theta-2}), then the chemical potential in equilibrium does not tend to $\dot\theta$, but to zero. Still, the magnitude of the baryon asymmetry in equilibrium is always proportional to $\dot\theta$. According to the equation of motion of the Goldstone field, $\dot\theta/T$ drops in the course of cosmological cooling as $T^2$, so the baryon number density in the comoving volume decreases in the same way. Hence, to avoid the complete vanishing of $n_B$, the baryo-violating interaction should switch off at some non-zero and not very small temperature. The dependence of the baryon asymmetry on the interaction strength is non-monotonic: both too strong and too weak interactions lead to a small baryon asymmetry, as is presented in Fig.~\ref{fig-1}. The assumption of a constant or slowly varying $\dot\theta$, which is usually made in the SBG scenario, may not be fulfilled; to include the effects of an arbitrary variation of $\theta (t)$, as well as the effects of the finite time integration, we transformed the kinetic equation in such a way that it becomes operative in a non-stationary background.
A shift of the equilibrium value of the baryonic chemical potential due to this effect is calculated numerically. In spite of these corrections to the standard SBG scenario, it remains a viable mechanism for the creation of the observed cosmological excess of matter over antimatter. However, the mechanism is not particularly efficient in the case of pure spontaneous symmetry breaking, when the potential of the $\theta$-field is absent. A non-zero potential $U(\theta)$, which can appear as a result of an explicit breaking of the baryonic $U(1)$-symmetry in addition to the spontaneous breaking, may grossly enhance the efficiency of spontaneous baryogenesis. The evaluation of the efficiency demands a numerical solution of the ordinary differential equation of motion for the $\theta$-field together with the integral kinetic equation. In the case of thermal equilibrium the kinetic equation reduces to an algebraic one and the system is trivially investigated. The out-of-equilibrium situation is technically much more complicated and will be studied elsewhere. We assumed that the symmetry-breaking phase transition in the early universe occurred instantly. This may be a reasonable approximation, but the corrections can still be significant; this can also be a subject of future work. There remains the problem of the proper definition of the fermionic Hamiltonian, but presumably it does not have an important impact on the problems considered here and is thus neglected. \\[3mm] {\bf Acknowledgement} We thank A.I. Vainshtein for stimulating criticism. The work of E.A. and A.D. was supported by the RNF Grant N 16-12-10037. V.N. acknowledges the support of the RFBR Grant 16-02-00342.
{Several issues regarding the nature of dust at high redshift remain unresolved: its composition, its production and growth mechanisms, and its effect on background sources. } {We aim to provide a more accurate relation between dust depletion levels and the dust-to-metals ratio (DTM), and to use the DTM to investigate the origin and evolution of dust in the high-redshift Universe via gamma-ray burst damped Lyman-alpha absorbers (GRB-DLAs).} {We use absorption-line measured metal column densities for a total of 19 GRB-DLAs, including five new GRB afterglow spectra from VLT/X-shooter. We use the latest linear models to calculate the dust depletion strength factor in each DLA. Using these values we calculate total dust and metal column densities to determine a DTM. We explore the evolution of the DTM with metallicity and compare it to previous trends in the DTM measured with different methods. } {We find significant dust depletion in 16 of our 19 GRB-DLAs, yet 18 of the 19 have a DTM significantly lower than the Milky Way value. We find that the DTM is positively correlated with metallicity, which supports a dominant ISM grain-growth mode of dust formation. We find a substantial discrepancy between the dust content measured from depletion and that derived from the total $V$-band extinction, $A_V$, measured by fitting the afterglow SED. We advise against using a measurement from one method to estimate that from the other until the discrepancy can be resolved.} {}
The abundances and compositions of the dust and metals in the interstellar medium (ISM) can reveal important information about local environmental conditions. Despite the wealth of information on our doorstep regarding the ISM of the Milky Way (MW) and Local Group galaxies, it is also necessary to investigate the ISM in the distant Universe in order to trace its properties in very different environments, as well as its evolution over cosmic history. One of the key constituents of the ISM is dust. Dust is produced in a range of environments, from stellar sources (the outer envelopes of post-asymptotic giant branch (AGB) stars and the expanding and cooling ejecta of supernovae) to grain growth and accretion in the ISM. It reveals itself via emission in the far-infrared and sub-mm wavelength range and through absorption and scattering of visible and ultraviolet (UV) light from background sources, and its effect must be corrected for when studying sources that shine through it. For example, everything outside the Galaxy must be observed through the dust of the MW, which has a complex topography \citep{Schlafly2011}. It is estimated that up to 30\% of all light in the Universe has been reprocessed by dust grains \citep{Bernstein2002}. Dust is also necessary for, and traces, star formation across the Universe \citep{Sanders1996,Genzel1998,Peeters2004,McKee2007}. Conversely, star formation also destroys dust at differing rates (\citealt{Draine1979b,Draine1979a,McKee1989,Jones1996,Dwek1998,Bianchi2005,Yamasawa2011}). Along with the ISM, dust is present in substantial quantities alongside gas and metals in the circum-galactic medium (CGM; \citealt{Bouche2007,Peeples2014,Peek2015}). It is therefore of fundamental importance to the theory of star formation and thus galaxy evolution to understand the nature of all dust processes, such as formation, composition, evolution, and destruction, as well as its observational characteristics, both in the local and distant Universe.
Since dust is intimately connected to the conditions of the ISM and the properties of gas \citep{Draine2003}, the dust-to-gas ratio (DTG; \citealt{Bohlin1978}) is a good indicator of the dust content of a galaxy or gas cloud. The dust-to-metals ratio (DTM; \citealt{Predehl1995,Guever2009,Watson2011}), which is the DTG corrected for the metallicity of the gas, thus describing the fraction of the total metals that are in the solid dust phase, can reveal more about the nature of the dust itself, its production mechanisms, and the processes by which it evolves. The evolution of the DTM over cosmic time is a tracer of the history of the interplay between gas and dust in the ISM of galaxies, and its distribution in comparison to metallicity can be used to infer clues about the origin of interstellar dust. If all dust and metals were to be produced in and ejected from stars, one would expect the DTM to remain constant in both time and metallicity (e.g. \citealt{Franco1986}). In models, this is often assumed (e.g. \citealt{Edmunds1998}), especially in the local Universe \citep{Inoue2003}, and a fairly constant dust-to-metals ratio is indeed observed \citep{Issa1990, Watson2011}. At higher redshift, \citet{Zafar2013} found that the DTM in a sample of foreground absorbers to gamma-ray bursts (GRBs) and quasars tends not to vary significantly over a wide range of redshifts, metallicities, and hydrogen column densities, proposing a universally constant DTM. \citet{Chen2013} find a slow redshift evolution of the DTM in lensed galaxies. These findings suggest that most of the dust is produced `instantaneously' in the ejecta of core-collapse supernovae (CCSNe), a result supported by recent models by \citet{McKinnon2016}, who find that roughly two-thirds of the dust in MW-like galaxies at $z=0$ is produced in Type II SNe.
These authors all use the traditional method of measuring the DTM: the extinction, $A_V$, is compared to the equivalent metal column density, $\log N(\mathrm{H}) + $[M/H], where [M/H] is the logarithmic metallicity of the gas (see Eq. \ref{eq:relative}). Other studies use a different definition of the DTM, namely determining the dust fraction $\mathcal{F}_d$ from the dust depletion (Sect. \ref{sec:depletion}) of metals observed in damped Lyman-$\alpha$ absorbers (DLAs) on sight lines to quasars \citep{Vladilo2004} and GRBs \citep{DeCia2013}. These studies, unlike those using $A_V$ as their dust quantifier, report an increase of the DTM with metallicity. This would suggest that the majority of the dust is formed by growth onto grains in the ISM \citep{Draine2009} rather than simultaneously with the metals formed in CCSNe and post-AGB star envelopes. \citet{Tchernyshyov2015} use depletion observations in the Small Magellanic Cloud (SMC) to suggest the trend between DTM and metallicity only occurs below a certain metallicity threshold that depends on gas density. \citet{Mattsson2014} provide a comprehensive discussion on the debate from a theoretical standpoint, suggesting that selection effects or statistical fluctuations could explain the differing observed trends, and \citet{Feldmann2015} attempts to model the observed evolution of dust and metal parameters via production, accretion, destruction, as well as gas infall and outflow from the galaxy, and also reproduces an evolution of the DTM at low metallicities. \citet{McKinnon2016} include stellar production and accretion along with destruction by SN shocks and winds driven by star formation in models that predict the DTM of MW-like galaxies. GRBs are useful tools with which to study trends in the DTM in the distant Universe.
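The "traditional" bookkeeping described above amounts to taking the ratio of $A_V$ to the equivalent metal column density and normalising it to a reference value. A minimal sketch of this calculation (our own construction; the Milky Way reference values below are illustrative placeholders, and the actual calibration is set by Eq. \ref{eq:relative}):

```python
def equivalent_metal_column(logNH, M_over_H):
    """Equivalent metal column density: log N(H) + [M/H]."""
    return logNH + M_over_H

def dtm_relative_to_mw(A_V, logNH, M_over_H, A_V_mw=0.5, logNH_mw=21.0):
    """DTM normalised to the Milky Way, using the 'traditional' definition
    in which the DTM is proportional to A_V divided by the (linear)
    equivalent metal column. The MW reference values are placeholders."""
    log_metals = equivalent_metal_column(logNH, M_over_H)
    log_metals_mw = equivalent_metal_column(logNH_mw, 0.0)  # solar metallicity
    return (A_V / A_V_mw) * 10.0 ** (log_metals_mw - log_metals)
```

A sight line with the same extinction per metal column as the reference gives a relative DTM of unity; lowering the metallicity at fixed $A_V$ raises it.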
They are extremely bright, allowing their detection even at very high redshift \citep{Tanvir2009}, and occur in galaxies with a wide range of dust content and metallicities (e.g. \citealt{Fynbo2008,Mannucci2011,Kruehler2015,Cucchiara2015}). GRBs are massive stellar explosions (e.g. \citealt{Galama1998}), the afterglows of which are observed to have featureless synchrotron spectra \citep{Meszaros1997}. This means that any absorption lines or changes to the shape of the spectrum must originate from an absorbing medium between the explosion site and the observer. Typically they manifest themselves in the form of DLAs in the host galaxy of the GRB. A DLA is defined as an absorbing system with $\log(N(\ion{H}{I}))>20.3$ \citep{Wolfe2005}, and it has been found that a large proportion of GRB afterglow spectra at redshifts for which the Ly-$\alpha$ transition falls into the atmospheric transmission window ($z\gtrsim1.7$) do indeed fulfil this criterion (e.g. \citealt{Kruehler2013,Sparre2014,Friis2015}). With such a large pool of neutral gas, the ionization fraction is so small that the dominant state of the elements used in this analysis is the singly ionized one \citep{Wolfe2005,Viegas1995,Peroux2007}, and the measurements of singly ionized metal species are taken to be representative of the total gas phase abundance of these metals in the DLA. We do commonly detect highly ionized species such as $\ion{C}{IV}$ and $\ion{Si}{IV}$, both often saturated, which might call the above assumption into question, and \citet{Fox2004} do indeed use the ratio [$\ion{C}{IV}/\ion{O}{VI}$] as proportional to the total [C/O]. However, these lines often show broader velocity structure and/or offsets in central velocity relative to the low-ionization lines (e.g. \citealt{Fox2007}), suggesting that the gas with a higher ionization state does not trace the same structure as the low-ionization lines.
This issue is also addressed in \citet{Ellison2010}, and while they suggest that there may be some ionization corrections for $\log(N(\ion{H}{I}))<21$, these corrections are still low, and only two of the objects in our sample have a neutral hydrogen column density below this value. We thus make no ionization corrections throughout the paper, and take the low-ionization abundances to be representative. In this paper, we present spectral analysis of five previously unpublished GRBs, and we combine them with 14 more GRB-DLAs from the literature, all but three of which have mid- to high-resolution spectroscopy. We compute dust depletion curves using all of the available metals, which we then use to calculate average DTM values, and investigate their relation with metallicity and redshift in order to investigate the evolution of the DTM. The structure of this paper is as follows. In Sect. \ref{sec:depletion} we describe the background and updated methods available to parameterize dust depletion. The initial sample is presented in Sect. \ref{sec:sample}. In Sect. \ref{sec:method} we introduce our method of fitting for depletions in GRB-DLAs, and in Sect. \ref{sec:results} we present the results; we discuss the results in Sect. \ref{sec:discussion} and conclude in Sect. \ref{sec:conc}. Throughout the paper we assume the solar abundances from \citet{Asplund2009}.
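The "linear models" used to calculate the depletion strength factor express each element's depletion as a linear function of a single parameter, which can then be fitted by weighted least squares across the observed metals. A schematic version of such a fit (the coefficients below are illustrative placeholders, not the published calibration, and the function is our own construction):

```python
import numpy as np

# Linear depletion model: delta_X = A_X + B_X * Fstar, where Fstar is the
# dust depletion strength factor. The coefficients are ILLUSTRATIVE
# placeholders, not the published values.
coeffs = {            # element: (A_X, B_X)
    "Zn": (0.00, -0.27),
    "Si": (-0.26, -0.51),
    "Fe": (-0.95, -1.26),
}

def fit_fstar(observed_delta, errors):
    """Weighted least-squares estimate of Fstar from the observed depletions
    delta_X = [X/H]_obs - [M/H], for the elements listed in `coeffs`."""
    els = list(observed_delta)
    A = np.array([coeffs[el][0] for el in els])
    B = np.array([coeffs[el][1] for el in els])
    d = np.array([observed_delta[el] for el in els])
    w = 1.0 / np.array([errors[el] for el in els]) ** 2
    # Minimise sum_i w_i (d_i - A_i - B_i * Fstar)^2 -> closed-form solution.
    return float(np.sum(w * B * (d - A)) / np.sum(w * B * B))
```

Given the fitted factor, the per-element depletions follow from the linear model, from which dust and metal columns can be accumulated.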
} Gamma-ray bursts are a unique if somewhat biased probe of the dust-to-metals ratio in the high-redshift Universe. GRBs occur only within certain types of galaxies \citep{Kruehler2015,Perley2016a,Perley2016d}, and thus are not totally unbiased probes, although this effect is reduced at redshifts above about 2 \citep{Perley2013,Greiner2015a,Schulze2015}. They are also complementary to QSO-DLAs, and this work expands our observational knowledge of the DTM into the inner regions of galaxies in the distant Universe. We have used optical/NIR spectroscopy from a sample of 19 GRB afterglows in order to measure the metal and dust content of the DLAs in their host galaxies, including previously unpublished metal column densities and metallicities for five objects. By using dust depletion models based on the MW, as well as QSO-DLAs, we have used a thorough method to determine the column densities of dust and of metals in order to calculate a dust-to-metals ratio. We find that the DTM follows a positive trend with metallicity, supporting the theory that a significant amount of dust is formed in situ in the ISM. We have investigated the discrepancy between the results of \citet{DeCia2013} and \citet{Zafar2013}, concluding that $A_{V;\mathrm{SED}}$ and depletion are not analogous measurements of dust. We see the common trend that $A_{V;\mathrm{DTM}}$ is often higher than $A_{V;\mathrm{SED}}$, which we tentatively suggest could be due to the scaling between depletion-measured DTM and $A_V$ being different in GRB host galaxies from that in the MW. We also note a significant number of objects whose $A_{V;\mathrm{DTM}}$ values are underpredictions compared to $A_{V;\mathrm{SED}}$, and despite seeing what looks like two distinct populations, we are unable to satisfactorily reconcile the two using theories such as grey dust or intervening systems.
We thus suggest that, given the large scatter between the two, the DTM measured from depletion should not be used as a proxy for $A_V$, and encourage further work with larger samples to investigate the problem further.
{The interpretation of dark matter direct detection experiments is complicated by the fact that neither the astrophysical distribution of dark matter nor the properties of its particle physics interactions with nuclei are known in detail. To address both of these issues in a very general way we develop a new framework that combines the full formalism of non-relativistic effective interactions with state-of-the-art halo-independent methods. This approach makes it possible to analyse direct detection experiments for arbitrary dark matter interactions and quantify the goodness-of-fit independent of astrophysical uncertainties. We employ this method in order to demonstrate that the degeneracy between astrophysical uncertainties and particle physics unknowns is not complete. Certain models can be distinguished in a halo-independent way using a single ton-scale experiment based on liquid xenon, while other models are indistinguishable with a single experiment but can be separated using combined information from several target elements.}
Over the past decade dark matter (DM) direct detection experiments have improved their sensitivity by an order of magnitude every two years and this trend is expected to continue for the near future~\cite{Cushman:2013zza,Baudis:2014naa,Akerib:2015cja,Aprile:2015uzo,Aalbers:2016jon,PICO250,Shields:2015wka,Calkins:2016pnm,Arnaud:2016tpa}. While there is at present no clear evidence for the interactions of DM particles in nuclear recoil detectors~\cite{Akerib:2015rjg,Aprile:2012nq,Amole:2016pye,Agnese:2014aze,Angloher:2015ewa}, it is perfectly conceivable (and in fact predicted by many models for DM) that hundreds of events will be observed by 2020. Once a signal is seen in one or several direct detection experiments, the challenge will be to identify those models of DM that allow for a good fit of the experimental data and to determine the preferred values of the underlying parameters, such as the mass of the DM particle. Answering these questions is complicated by the fact that event rates in direct detection experiments depend in complicated ways on the velocity distribution of DM particles in the Galactic halo, which is subject to large uncertainties~\cite{Kuhlen:2009vh,Lisanti:2010qx,McCabe:2010zh,Mao:2012hf}. Many studies have investigated the impact of these uncertainties on our ability to employ the results of future direct detection experiments in order to infer the mass of the DM particle or to discriminate between different DM models (e.g.\ by distinguishing between spin-independent and spin-dependent interactions or by determining separately the DM-proton and the DM-neutron coupling)~\cite{Strigari:2009zb,Peter:2009ak,Green:2010gw,Peter:2011eu,Pato:2012fw,Kavanagh:2012nr,Frandsen:2013cna,Kavanagh:2013wba,Peter:2013aha,Strigari:2013iaa,Kavanagh:2014rya,Feldstein:2014gza,Feldstein:2014ufa}. 
In particular, a range of so-called halo-independent methods have been developed to reduce or even eliminate completely the impact of astrophysical uncertainties on the DM properties that can be inferred from direct detection experiments~\cite{Drees:2008bv,Fox:2010bu,Fox:2010bz,McCabe:2011sr,Frandsen:2011gi,Gondolo:2012rs,HerreroGarcia:2012fu,DelNobile:2013cva,Fox:2014kua,Feldstein:2014gza,Feldstein:2014ufa,Bozorgnia:2014gsa,Cherry:2014wia,Anderson:2015xaa,Gelmini:2015voa,Ferrer:2015bta,Scopel:2015baa,DelNobile:2015rmp,Gelmini:2016pei}. At the same time, however, it has become clear that the interactions of DM particles with nuclei can be significantly more complicated than suggested by the simple division into spin-independent and spin-dependent interactions. In general the scattering cross section can also depend on the momentum transfer and the relative velocity between the DM particle and the nucleus. A full classification of all possible DM interactions in the non-relativistic limit requires no less than 28 different scattering operators as well as a large number of nuclear response functions~\cite{Fitzpatrick:2012ix,Anand:2013yka}, which can significantly affect the expected signals in direct detection experiments~\cite{Fitzpatrick:2012ib,Catena:2014uqa,Gresham:2014vja,Catena:2014hla,Catena:2014epa,Gluscevic:2014vga,Catena:2015uua,Gluscevic:2015sqa,Scopel:2015baa,Dent:2015zpa,Catena:2016hoj}. The possible importance of the additional DM-nucleon scattering operators makes the issue of astrophysical uncertainties even more pressing, because a non-standard velocity distribution can potentially mimic non-standard DM interactions. Fortunately, the degeneracy between astrophysical uncertainties and particle physics unknowns is not complete. For example, standard spin-independent interactions induce a differential event rate that decreases monotonically with increasing recoil energy for \emph{any} DM velocity distribution~\cite{Fox:2010bu}. 
The observation of a non-monotonic differential event rate could hence not be attributed to astrophysical uncertainties and would instead point strongly towards non-standard interactions. It should therefore always be possible to obtain at least some basic information on the nature of DM interactions even when accounting for astrophysical uncertainties. In the present paper we develop a framework to combine the full formalism of non-relativistic effective interactions with state-of-the-art halo-independent methods. Our approach is based on the idea of decomposing the velocity distribution into a finite sum of streams with velocity $v_j$ and then varying the normalisation of each stream~\cite{Feldstein:2014gza,Feldstein:2014ufa}. We will show that, even for non-standard interactions, it is always possible to calculate a matrix $D_{ij}$ such that the number of events $N_i$ in the $i$th bin of a given experiment can be calculated via simple matrix multiplication $N_i = D_{ij} \, g_j$, where the $g_j$ are determined by the normalizations of the streams. This simple relation allows us to determine the velocity distribution that best describes a given set of data for assumed particle physics properties. We can repeat this procedure for different particle physics assumptions in order to study whether changes in the velocity distribution can compensate for changes in the particle properties of DM and thus reduce our ability to determine these properties unambiguously. The aim is to quantify what information can be inferred on the coupling structure of DM in a halo-independent way. For example, if no good fit to a given set of data can be found for any DM velocity distribution, the corresponding particle physics assumptions can be disfavoured without the need to make assumptions on the astrophysical distribution of DM.
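The relation $N_i = D_{ij}\,g_j$ reduces the halo-independent fit to constrained linear algebra: the only physical requirement on the stream normalisations is $g_j \geq 0$. A toy illustration of this structure (the matrix $D$ below is a random placeholder, not a real detector response, and non-negative least squares stands in for the Poisson likelihood maximisation actually used):

```python
import numpy as np
from scipy.optimize import nnls

# Toy version of the halo-independent fit N_i = D_ij g_j, where the g_j >= 0
# are the normalisations of a finite set of DM streams and D_ij encodes the
# particle-physics and detector inputs.
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, size=(5, 8))        # 5 energy bins, 8 streams
g_true = np.array([0.0, 2.0, 1.0, 0.0, 0.5, 0.0, 0.0, 0.3])
N_obs = D @ g_true                            # noiseless mock data

# Best-fit stream normalisations under the constraint g_j >= 0:
g_fit, residual = nnls(D, N_obs)
N_fit = D @ g_fit
```

If the fit residual cannot be made small for any non-negative $g_j$, the assumed particle physics scenario is disfavoured independently of the halo.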
This approach makes it possible to analyse direct detection experiments for arbitrary DM interactions, in particular DM interactions with non-standard momentum and velocity dependence, independent of astrophysical uncertainties. To illustrate the general formalism we study a representative set of DM-nucleon interactions. These consist of the standard spin-independent and spin-dependent scattering scenarios, interactions induced by an anapole or a magnetic dipole moment of DM, as well as a dipole interaction involving a new heavy mediator. This set of interactions, though certainly not exhaustive, covers all the central aspects relevant for a halo-independent investigation of non-standard interactions between DM and nucleons. We discuss the scattering rates induced by these models in the context of three future direct detection experiments based respectively on xenon, germanium and iodine targets. We first study whether a single (xenon-based) experiment can distinguish the different models of DM in a halo-independent way and then focus on the question whether the complementarity of several different target materials can improve the distinction. A similar analysis of the possibility to distinguish different DM models using future direct detection experiments has been performed in~\cite{Gluscevic:2015sqa}. While that analysis considers a larger set of DM interactions and a larger number of experiments, it makes much more specific assumptions about the velocity distribution of DM. Our results extend the findings from~\cite{Gluscevic:2015sqa} to arbitrary DM velocity distributions (and furthermore take into account effects from finite energy resolution). This paper is structured as follows. In section~\ref{sec:haloindependent} we review the basic formalism for direct detection, including the central ideas underlying the halo-independent methods for DM-nucleon interactions with standard velocity dependence.
We then explain how to generalise such a framework to more complicated DM interactions and derive the central formulas used to calculate the matrix $D_{ij}$. In section~\ref{sec:generalised}, after defining the set of interaction scenarios discussed in this work, we provide a qualitative illustration of the halo-independent interpretation of direct detection data in the context of these models. We then introduce methods for quantifying the goodness of halo-independent fits to experimental data. In sections~\ref{sec:1exp} and \ref{sec:2exp} we apply the method to various combinations of direct detection experiments, and discuss in particular which of the above-mentioned interaction scenarios can be distinguished in a halo-independent way. Section~\ref{sec:conclusions} provides a summary of our findings. Additional details on the parametrization of non-relativistic effective interactions and the calculation of event rates are given in the appendices.
\label{sec:conclusions} Near-future experiments for the direct detection of DM promise significant improvements in sensitivity over existing searches. In anticipation of these improvements it is timely to develop and refine strategies for extracting the particle physics properties of DM from data. Because of the strong dependence of experimental event rates on the essentially unknown Galactic velocity distribution of DM it is of particular importance to devise \emph{halo-independent} methods, i.e.\ methods that enable us to deduce properties of the DM particle from a positive detection without the need to make any assumptions on the astrophysical distributions. In this work we have presented a very general framework for analysing the data from one or several direct detection experiments independent of astrophysical assumptions and for quantifying the goodness-of-fit of a given particle physics scenario. Following~\cite{Feldstein:2014gza,Feldstein:2014ufa}, we perform a halo-independent fit to a given set of data by parametrizing the velocity integral as a piecewise constant function with a sufficiently large number of steps. As we determine the best-fit velocity distribution directly from data instead of fixing it to e.g.~a Maxwell-Boltzmann distribution, our approach can be employed in order to test whether there exists \emph{any} velocity distribution for which a given particle physics scenario of DM is compatible with the data. Importantly, we have shown that this formalism can be applied universally to any combination of non-relativistic operators describing the interactions of DM with nuclei, in particular to models with non-standard dependence of the scattering cross section on the momentum exchange and the DM velocity. This enables us to consider a much broader set of particle physics models compared to many existing halo-independent analyses in the literature. 
To demonstrate how this method can be applied to the case of a positive detection of DM in one or several experiments, we have studied mock data generated for realistic near-future direct detection experiments. To this end we have considered a range of different possibilities for the true particle physics nature of DM, including the standard spin-independent and spin-dependent interaction as well as scenarios involving a non-trivial velocity and momentum dependence of the DM-nucleon scattering cross section. For each mock data set we then perform halo-independent fits to the data for various different assumptions on the properties of the DM particle (see figures~\ref{fig:vminspace1} and~\ref{fig:vminspace2}). By repeating this fitting procedure for a large number of Poisson realisations of mock data sets, we consistently take into account the statistical noise expected in the observed data. We can then determine which of the interaction scenarios can be distinguished without specifying the velocity distribution. To quantify the similarity and distinguishability of different DM interaction scenarios we have developed a simplified procedure based on the \emph{typical predictions} of the fitted model (figure~\ref{fig:illustration}). We have confirmed the validity of this approximation using explicit Monte Carlo simulations (figure~\ref{fig:pvalues}). Interestingly we find that already a single experiment like XENON1T with a moderate exposure of two ton-years can be employed for inferring non-trivial information about the particle physics properties of DM in a fully halo-independent way (figures~\ref{fig:xe_panel} and~\ref{fig:eventrates}). For example, our results show that a model of DM predicting a non-monotonic recoil spectrum (realized e.g.~for a dark magnetic dipole moment) can be clearly distinguished from the standard spin-independent or spin-dependent interaction scenario with a XENON1T-like experiment. 
Also, if DM interacts via the standard spin-independent or spin-dependent coupling, it is possible to rule out in a halo-independent way long-range interactions of DM with nucleons, which could be induced e.g.~by a magnetic dipole moment of DM. Lastly we have studied to what extent a detection of DM in more than one experiment helps in discriminating different particle physics scenarios. We have found that adding a germanium target to the xenon-based experiment does not significantly increase the halo-independent distinguishability of the scenarios discussed in this work (figure~\ref{fig:xege_panel}). We have illuminated the numerical results with a simplified analytical argument explaining why the complementarity of a xenon and a germanium target is strongly limited. An exception to this statement is the case in which the data of both experiments are fitted assuming a spin-independent interaction with variable neutron-to-proton coupling ratio $f_n/f_p$, which can be constrained much more tightly by two experiments with different target nuclei (figures~\ref{fig:fpfn} and~\ref{fig:fpfn_XeGe}). On the other hand, we have shown that the presence of a DM signal in an iodine-based experiment in addition to the xenon-based detector yields additional discrimination power (figure~\ref{fig:xei_panel}). In particular, if scattering is induced by an anapole or magnetic dipole moment of DM, it should be possible to rule out the standard assumption of spin-independent or spin-dependent interactions by combining the information from both experiments, again without referring to a particular velocity distribution of DM in the galaxy. Our analysis is based on rather modest assumptions on the exposures of upcoming experiments. Clearly, stronger discrimination power can be achieved by more ambitious experiments and by combining the information from more than two experiments.
Nevertheless, a hundred DM events in a liquid xenon experiment and ten events in an additional experiment may already be sufficient to extract highly non-trivial information on the particle physics nature of DM in a completely halo-independent way. While the conclusion may simply be that some more exotic models of DM can be ruled out, future experiments may also point in the opposite direction and tell us that the interactions of DM are much more complicated than what is usually assumed.
The Scheduled Relaxation Jacobi (SRJ) method is an extension of the classical Jacobi iterative method to solve linear systems of equations ($Au=b$) associated with elliptic problems. It inherits its robustness and accelerates its convergence rate by computing a set of $P$ relaxation factors that result from a minimization problem. In a typical SRJ scheme, this set of factors is employed in cycles of $M$ consecutive iterations until a prescribed tolerance is reached. We present the analytic form for the optimal set of relaxation factors for the case in which all of them are strictly different, and find that the resulting algorithm is equivalent to a non-stationary generalized Richardson's method where the matrix of the system of equations is preconditioned by multiplying it by $D=\mathrm{diag}(A)$. Our method to estimate the weights has the advantage that the explicit computation of the maximum and minimum eigenvalues of the matrix $A$ (or the corresponding iteration matrix of the underlying weighted Jacobi scheme) is replaced by the (much easier) calculation of the maximum and minimum frequencies derived from a von Neumann analysis of the continuous elliptic operator. This set of weights is also the optimal one for the general problem, resulting in the fastest convergence of all possible SRJ schemes for a given grid structure. The amplification factor of the method can be found analytically and allows for the exact estimation of the number of iterations needed to achieve a desired tolerance. We also show that with the set of weights computed for the optimal SRJ scheme for a fixed cycle size it is possible to estimate numerically the optimal value of the parameter $\omega$ in the Successive Overrelaxation (SOR) method in some cases. Finally, we demonstrate with practical examples that our method also works very well for Poisson-like problems in which a high-order discretization of the Laplacian operator is employed (e.g., a $9$- or $17$-point discretization).
This is of interest since these discretizations do not yield consistently ordered $A$ matrices and, hence, the theory of Young cannot be used to predict the optimal value of the SOR parameter. Furthermore, the optimal SRJ schemes deduced here are advantageous over existing SOR implementations for high-order discretizations of the Laplacian operator inasmuch as they do not need to resort to multi-coloring schemes for their parallel implementation.
\label{sec1} The Jacobi method \cite{Jacobi1845} is an iterative method to solve systems of linear equations. Due to its simplicity and its convergence properties it is a popular choice as a preconditioner, in particular when solving elliptic partial differential equations. However, its slow rate of convergence, compared to other iterative methods (e.g. Gauss-Seidel, SOR, conjugate gradient, GMRES), makes it a poor choice to solve linear systems. The scheduled relaxation Jacobi method \cite{Yang2014}, SRJ hereafter, is an extension of the classical Jacobi method, which increases the rate of convergence in the case of linear problems that arise in the finite difference discretization of elliptic equations. It consists of executing a series of weighted Jacobi steps with carefully chosen values for the weights in the sequence. Indeed, the SRJ method can be expressed for a linear system, $Au=b$, as \begin{equation} u^{n+1} = u^n + \omega_{n} D^{-1}( b - A u^n), \label{eq:SRJ} \end{equation} where $D$ is the diagonal of the matrix $A$. If we consider a set of $P$ different relaxation factors, $\omega _{n}, \,\, n=1,\dots,P$, such that $\omega _{n} > \omega _{n+1}$ and we apply each relaxation factor $q_n$ times, the {\it total amplification factor} after $M:=\sum_{n=1}^P q_n$ iterations is \begin{equation} G_M(\kappa) = \prod_{n=1}^{P} (1 - \omega _{n} \kappa)^{q_n}, \label{eq:gm} \end{equation} which is an estimation of the reduction of the residual during one cycle ($M$ iterations). In this expression $\kappa$ is a function of the wave-numbers obtained from a von Neumann analysis of the system of linear equations resulting from the discretization of the original elliptic problem by finite differences (for more details see \cite{Yang2014,Adsuetal15}).
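As a concrete illustration (not taken from the paper; the function name and the 1D test problem are our own choices), one SRJ cycle of Eq.~(\ref{eq:SRJ}) can be sketched in Python for the Poisson problem $-u''=b$ discretized with the standard 3-point Laplacian:

```python
import numpy as np

def srj_cycle(u, b, h, weights):
    """One SRJ cycle for the 1D Poisson problem -u'' = b on a uniform
    grid of spacing h.  Each pass is a weighted Jacobi step
        u <- u + w * D^{-1} (b - A u),
    where A is the 3-point Laplacian and D = diag(A) = 2/h^2.  The end
    points of u hold the (fixed, homogeneous) Dirichlet boundary values.
    `weights` is the schedule of relaxation factors for the cycle."""
    d = 2.0 / h**2                                     # diagonal of A
    for w in weights:                                  # one sweep per factor
        Au = (-u[:-2] + 2.0 * u[1:-1] - u[2:]) / h**2  # A u at interior points
        u[1:-1] += w * (b[1:-1] - Au) / d              # weighted Jacobi update
    return u
```

With `weights = [1.0] * M` the cycle reduces to $M$ sweeps of the classical Jacobi method; scheduling distinct factors as discussed below is what accelerates the residual reduction.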
Yang \& Mittal \cite{Yang2014} argued that, for a fixed number $P$ of different weights, there is an optimal choice of the weights $\omega _{n}$ and repetition numbers $q_n$ that minimizes the maximum {\it per-iteration amplification factor}, $\Gamma(\kappa) = |G(\kappa)|^{1/M}$, in the interval $\kappa \in [\kmin,\kmax]$ and therefore also the number of iterations needed for convergence. The boundaries of the interval in $\kappa$ correspond to the minimum and the maximum wave numbers allowed by the discretization mesh and boundary conditions used to solve the elliptic problem under consideration. In the aforementioned paper, \cite{Yang2014} computed numerically the optimal weights for $P\le 5$ and Adsuara et al.\,\cite{Adsuetal15} extended the calculations up to $P=15$. The main properties of the SRJ, obtained by \cite{Yang2014} and confirmed by \cite{Adsuetal15}, are the following: \begin{enumerate} \item Within the range of $P$ studied, increasing the number of weights $P$ improves the rate of convergence. \item The resulting SRJ schemes converge significantly faster than the classical Jacobi method, by factors exceeding $100$ in the methods presented by \cite{Yang2014} and $\sim 1000$ in those presented by \cite{Adsuetal15}. Increasing grid sizes, i.e. decreasing $\kmin$, results in larger acceleration factors. \item The optimal schemes found use each of the weights multiple times, resulting in a total number of iterations $M$ per cycle significantly larger than $P$, e.g. for $P=2$, \cite{Yang2014} found an optimal scheme with $M=16$ for the smallest grid size they considered ($N=16$), while for larger grids $M$ notably increases (e.g., $M=1173$ for $N=1024$). \end{enumerate} The optimization procedure outlined by \cite{Yang2014} has a caveat though.
Even if the amplification factor were to decrease monotonically with increasing $P$, for sufficiently high values of $P$, the number of iterations per cycle $M$ may be comparable to the total number of iterations needed to solve a particular problem for a prescribed tolerance. At this point, using a method with higher $P$, and thus higher $M$, would increase the number of iterations to converge, even if $\Gamma(\kappa)$ is nominally smaller. With this limitation in mind, we outline a procedure to obtain optimal SRJ schemes, minimizing the total number of iterations needed to reduce the residual by an amount sufficient to reach convergence or, equivalently, to minimize $|G_M(\kappa)|$. Note that the total number of iterations can be chosen to be equal to $M$ without loss of generality, i.e. one cycle of $M$ iterations is needed to reach convergence. To follow this procedure one should find the optimal scheme for fixed values of $M$, and then choose $M$ such that the maximum value of $|G_M(\kappa)|$ is similar to the residual reduction needed to solve a particular problem. The first step, the minimization problem, is in general difficult to solve, since fixing $M$ gives an enormous freedom in the choice of the number of weights $P$, which can range from $1$ to $M$. However, the numerical results of \cite{Yang2014} and \cite{Adsuetal15} seem to suggest that in general increasing the number of weights $P$ will always lead to better convergence rates. This leads us to conjecture that the optimal SRJ scheme, for fixed $M$, is the one with $P=M$, i.e. all weights are different and each weight is used once per cycle, $q_i=1,\, (i=1,\ldots,M)$.
In terms of the total amplification factor $G_M(\kappa)$, it is quite reasonable to think that if one maximizes the number of different roots by choosing $P=M$, the resulting function is, on average, closer to zero than in methods with a smaller number of roots, $P<M$, and one might therefore expect smaller maxima for the optimal set of coefficients. One of the aims of this work is to compute the optimal coefficients for this particular case and demonstrate that $P=M$ is indeed the optimal case. Another goal of this paper is to show the performance of optimal SRJ methods compared with optimal SOR algorithms applied to a number of different discretizations of the Laplacian operator in two-dimensional (2D) and three-dimensional (3D) applications (Sect.\,\ref{sec:numexamples}). We will show that optimal SRJ methods applied to high-order discretizations of the Laplacian, which yield iteration matrices that cannot be consistently ordered, perform very similarly to optimal SOR schemes (when an optimal SOR weight can be computed). We will further discuss that the trivial parallelization of the SRJ methods outbalances the slightly better scalar performance of SOR in some cases (Sect.\,\ref{sec:9-17discret}). Also, we will show that the optimal weight of the SOR method can be suitably approximated by functions related to the geometric mean of the set of weights obtained for optimal SRJ schemes. This is of particular relevance when the iteration matrix is non-consistently ordered and hence, the analytic calculation of the optimal SOR weight is extremely intricate. \section {Optimal $P=M$ SRJ scheme} \label{sec:OptimalCheb} Let us consider a SRJ method with $P=M$ and hence $q_n=1,\, (n=1,\ldots,M)$. For this particular choice, the amplification factor $G_M(\kappa)$ is a polynomial of degree $M$ in $\kappa$ with $M$ different roots.
In this case, the set of weights $\omega _{n}$ that minimizes the value of the maximum of $|G_M(\kappa)|$, given by Eq.~(\ref{eq:gm}), in the interval $\kappa \in [\kmin,\kmax]$, $0<\kmin\le \kmax$, can be determined by the following $M$ conditions: \begin{equation} G_M(0) = 1 \quad ; \quad G_M(\kappa_n) = - G_M(\kappa_{n+1}), \quad n=0,\ldots,M-1, \label{eq:conditions} \end{equation} where $\kappa_0 = \kmin$, $\kappa_M = \kmax$, and $\kappa_n, \text{ } n=1,\ldots,M-1$ are the relative extrema of the function $G_M(\kappa)$. To simplify further we rescale $\kappa$ as follows: \begin{equation} \tilde{\kappa} = 2 \frac{\kappa-\kmin}{\kmax-\kmin} - 1. \end{equation} As a function of $\tilde{\kappa}$ the amplification factor is $\tilde{G}_M(\tilde{\kappa}) = G_M(\kappa(\tilde{\kappa}))$. In the resulting interval, $\tilde{\kappa}\in[-1,1]$, there is a unique polynomial of degree $M$ such that the absolute value of $\tilde{G}_M(\tilde{\kappa})$ at the extrema $\tilde{\kappa}_i$ is the same (fulfilling the last $M-1$ Eqs.~(\ref{eq:conditions})) and such that $\tilde{G}_M(\tilde{\kappa}(0))=1$. This polynomial is proportional to the Chebyshev polynomial of the first kind of degree $M$, $T_M(\tilde{\kappa})$, which can be defined through the identity $T_M (\cos \theta) = \cos (M \,\theta)$. This polynomial satisfies \begin{gather} |T_M(-1)| = |T_M(\tilde{\kappa}_n)| = |T_M(+1)| = 1, \quad n=1,\ldots,M-1, \end{gather} with $\tilde{\kappa}_i$ being the local extrema of $T_M(\tilde{\kappa})$ in $[-1,1]$. The constant of proportionality can be determined by requiring (Eq.\,\ref{eq:conditions}) $G_M(0)=1$, and the amplification factor reads in this case \begin{gather} \tilde{G}_M(\tilde{\kappa}) = \frac{T_M(\tilde{\kappa})}{T_M(\tilde{\kappa}(0))} \quad; \quad \tilde{\kappa}(0) = -\frac{(1+\kmin/\kmax)}{(1-\kmin/\kmax)} < -1.
\label{eq:Gequ} \end{gather} This result is equivalent to Markoff's theorem\footnote{For an accessible proof of the original theorem \cite{Markoff:1916}, see Young's textbook \cite{Young:1971}, Theorem 9-3.1.}. Note that the value of $\tilde{\kappa}(0)$ does not depend on the actual values of $\kmin$ and $\kmax$, but only on the ratio $\kmin/\kmax$. The roots and local extrema of the polynomial $T_M(\tilde{\kappa})$ are located, respectively, at \begin{gather} \tilde{\omega}_{n}^{-1} =-\cos\left(\pi \frac{2n-1}{2M}\right), \;\; n=1,\ldots,M, \\ \tilde{\kappa}_{n} = \cos\left(\pi\frac{n}{M}\right), \;\; n=1,\ldots,M-1, \end{gather} which coincide with those of $\tilde{G}_M(\tilde{\kappa})$. Therefore, the set of weights \begin{gather} \omega_{n} = 2 \left[ \kmax+\kmin -\left(\kmax-\kmin\right) \cos\left(\pi\frac{2n-1}{2M}\right) \right ]^{-1},\; n=1,\ldots,M, \label{eq:omegan} \end{gather} corresponds to the optimal SRJ method for $P=M$. We have found with the simple analysis of this section that the optimal SRJ scheme when $P=M$ is fixed turns out to be closely related to a Chebyshev iteration or Chebyshev semi-iteration for the solution of systems of linear equations (see, for instance, \cite{Gutknecht:2002} for a review). This is especially easy to realize if we consider the original formulation of this kind of method, which appeared in the literature as special implementations of the non-stationary or semi-iterative Richardson's method (RM, hereafter; see, e.g., \cite{Young:1953,Frank:1960} for generic systems of linear equations, or \cite{Shortley:1953} for the application to boundary-value problems). \cite{Yang2014} argued that, for a uniform grid, Eq.\,\ref{eq:SRJ} is identical to that of the RM \cite{Richardson11}.
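For reference, the closed-form weights of Eq.~(\ref{eq:omegan}) are straightforward to evaluate numerically; a minimal Python sketch (the function name is our own choice, and $\kmin$, $\kmax$ are assumed to come from the von Neumann analysis of the discretized operator):

```python
import numpy as np

def cjm_weights(M, kmin, kmax):
    """Optimal P = M relaxation factors of Eq. (omega_n): the reciprocals
    of the roots of the degree-M Chebyshev polynomial of the first kind,
    mapped from [-1, 1] to [kmin, kmax].  Returned in decreasing order,
    omega_n > omega_{n+1}, matching the convention in the text."""
    n = np.arange(1, M + 1)
    return 2.0 / (kmax + kmin
                  - (kmax - kmin) * np.cos(np.pi * (2 * n - 1) / (2 * M)))
```

The equioscillation property can then be checked directly: sampling $G_M(\kappa)=\prod_n(1-\omega_n\kappa)$ over $[\kmin,\kmax]$, its maximum modulus equals $|T_M(\tilde{\kappa}(0))|^{-1}$, where $T_M$ is evaluated through $T_M(x)=\cosh(M\,\mathrm{arccosh}\,|x|)$ for $|x|>1$.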
There is, nevertheless, a minor difference between Eq.\,\ref{eq:SRJ} of the SRJ method and the RM as it has been traditionally written \cite{Young:1954b}, which, using our notation, would be $u^{n+1} = u^n + \hat{\omega}_{n} ( b - A u^n )$; this gives the obvious relation $\hat{\omega}_{n} =\omega_{n} d^{-1}$ in the case in which all elements in $D$ are the same and equal to $d$. We note that this difference disappears in more modern formulations of the RM (e.g., \cite{Opfer:1984}), in which the RM is also written as a fixed-point iteration of the form $u^{n+1}=Tu^n+c$, with $T=I-M^{-1}A$, $c=M^{-1}b$ and $M$ any non-singular matrix. Unlike the RM as defined by Young \cite{Young:1954b}, our method in the case $M=1$ would fall in the category of stationary Generalized Richardson's (GRF) methods according to the textbook of Young \citep[][chap.\,3]{Young:1971}. GRF methods are defined by the updating formula \begin{gather} u^{n+1} = u^n + P(Au^n-b) \end{gather} where $P$ is any non-singular matrix (in our case, $P=-\omega_{n}D^{-1}$). In the original work of Richardson \cite{Richardson11}, all the values of $\hat{\omega}_{n}$ were set either equal or evenly distributed in $[a,b]$, where $a$ and $b$ are, respectively, lower and upper bounds to the minimum and maximum eigenvalues, $\lambda_i$, of the matrix $A$ (optimally, $a=\min{(\lambda_i)}$, $b=\max{(\lambda_i)}$). If a single weight is used throughout the iteration procedure, a convenient choice is $\hat{\omega}=2/(b+a)$.\footnote{In the case of SRJ schemes with $P=M$, it is easy to demonstrate (see \ref{sec:properties}) that the harmonic mean of the weights $\omega_n$ very approximately equals the value of the inverse weight of the stationary RM ($2d^{-1}/(\kmax+\kmin)\simeq 2/(b+a)$).} Yang \& Mittal \cite{Yang2014} state that the SRJ approach to maximizing convergence is fundamentally different from that of the stationary RM.
They argue that the RM aims to reduce $\Gamma(\kappa)$ uniformly over the range $[\kmin,\kmax]$ by generating equally spaced nodes of $\Gamma$ in this interval, while SRJ methods set a min-max problem whose goal is to minimize $|\Gamma|_{\rm max}$.\footnote{We note that this argument does not hold in the implementation of the non-stationary RM method made by Young \cite{Young:1953}, since in this case one also attempts to minimize $|\Gamma|_{\rm max}$.} As a result, SRJ methods require computing a set of weights yielding two differences with respect to the non-stationary RM in its original formulation \cite{Yang2014}: \begin{enumerate} \item the nodes in the SRJ method are not evenly distributed in the range $[\kmin,\kmax]$; \item optimal SRJ schemes naturally have many repetitions of the same relaxation factor, whereas the RM generated distinct values of $\hat{\omega}_{n}$ in each iteration of a cycle. \label{point2} \end{enumerate} From these two main differences, \cite{Yang2014} conclude that while optimal SRJ schemes actually gain in convergence rate over the Jacobi method as grids get larger, the convergence rate gain for Richardson's procedure (in its original formulation) never produces acceleration factors larger than 5 with respect to the Jacobi method. This result was supported by Young in his Ph.D. thesis~\cite[][p.\,4]{Young:1950}, but on the basis of employing orderings of the weights which led to a pile-up of roundoff errors, preventing faster convergence of the method (see point 2 below). The difference outlined in point 1 above is non-existent for GRF methods, where the eigenvalues of $A$ are not necessarily evenly distributed in the spectral range of matrix $A$ (i.e., in the interval $[a,b]$).
We note that Young \cite{Young:1953} attempted to choose the $\hat{\omega}_{n}$ parameters of the RM to be the reciprocals of the roots of the corresponding Chebyshev polynomials in $[a,b]$, which resulted in a method that is {\em almost the same} as ours, but with two differences: {\em First}, we do not need to compute the maximum and minimum eigenvalues of the matrix $A$; instead, we compute $\kmax$ and $\kmin$, which are related to the maximum and minimum frequencies that can be developed on the grid of choice employing a straightforward von Neumann analysis. Indeed, this procedure to estimate the maximum and minimum frequencies for the elliptic operators (e.g., the Laplacian) in the continuum limit allows applying it to matrices that are not necessarily consistently ordered, like, e.g., the ones resulting from the 9-point discretization of the Laplacian \cite{Adams:1988}. In Sect.\,\ref{sec:9-17discret} we show how our method can be straightforwardly prescribed in this case and in other more involved (high-order) discretizations of the Laplacian. {\it Second}, in Young's method \cite{Young:1953} the two-term recurrence relation given by Eq.\,\ref{eq:SRJ} turned out to be unstable. Young found that the reason for the instability was the build-up of roundoff errors in the evaluation of the amplification factor (Eq.\,\ref{eq:gm}), which resulted from the fact that many of the values of $\omega_{n}$ can be much larger than one. Somewhat unsuccessfully, Young \cite{Young:1953} tried different orderings of the sequence of weights $\omega_{n}$, and concluded that, though they ameliorated the problem for small values of $M$, they did not cure it when $M$ was sufficiently large. Later, Young \cite{Young:1954b,Young:1956} examined a number of orderings and concluded that some gave better results than others.
However, the existence of orderings for which the RM defines a stable numerical algorithm amenable to a practical implementation was not demonstrated until the work of Anderssen \& Golub \cite{Anderssen:1972}. These authors showed that employing the ordering developed by Lebedev \& Finogenov \cite{Lebedev:1971} for the iteration parameters in the Chebyshev cyclic iteration method, the RM devised by Young \cite{Young:1953} was stable against the pile-up of round-off errors. However, Anderssen \& Golub \cite{Anderssen:1972} left open the question of whether other orderings are possible. In our case, numerical stability is brought about by the ordering of the weights in the iteration procedure. This ordering is directly inherited from the SRJ schemes of \cite{Yang2014}, and notably differs from the prescriptions given for two- or three-term iteration relations in Chebyshev semi-iterations \cite{Gutknecht:2002} and from those suggested by \cite{Young:1953}. Indeed, the ordering we use differs from that of \cite{Lebedev:1971,Nikolaev:1972,Lebedev:2002} (see \ref{sec:ordering}). Thus, though we do not have a theoretical proof for it, we empirically confirm that alternative orderings also work. Taking advantage of the analysis made by \cite{Young:1953}, we point out that the average rate of convergence of the method in a cycle of $M$ iterations is \begin{equation} R_M = \frac{1}{M} \log{|T_M(\tilde\kappa(0))|}, \label{eq:Rp} \end{equation} and it is trivial to prove that for $\kappa\in[\kmin,\kmax]$ \begin{equation} |G_M(\kappa)| \leq \left | \frac{1}{T_M(\tilde{\kappa}(0))} \right | < 1, \end{equation} providing a simple way to compute an upper bound for the amplification factor for the optimal scheme. This condition also guarantees the convergence of the optimal SRJ method.
Therefore, if we aim to reduce the initial residual of the method by a factor $\sigma$, we have to select a sufficiently large $M$ such that \begin{equation} \sigma \geq |T_M(\tilde\kappa(0))|^{-1}. \label{eq:Pmax} \end{equation} It only remains to demonstrate that the optimal SRJ scheme with $P=M$ is also the optimal SRJ scheme for any $P\le M$. Markoff's theorem states that for any polynomial $Q(x)$ of degree smaller than or equal to $M$, such that $\exists x_0\in\mathbb{R}, x_0<-1$, with $Q(x_0)=1$, and $Q(x) \neq T_M(x)/T_M(x_0)$, then \begin{align} \max_{x \in [-1,1]}{|Q(x)|} > \max_{x \in [-1,1]}{\left|\frac{T_{M}(x)}{T_M(x_0)}\right|}. \end{align} This theorem implies that any other polynomial of order $P\le M$, different from Eq.~(\ref{eq:Gequ}), is a poorer choice as amplification factor. The first implication is that the maximum of $|G_M(\kappa)|$ in $[\kmin,\kmax]$, equal to $|T_M(\tilde{\kappa}(0))|^{-1}$, decreases monotonically as $M$ increases. As a consequence, the per-iteration amplification factor $\Gamma_M (\kappa)$ also decreases with increasing $M$. The second consequence is that the case $P<M$ results in an amplification factor with larger extrema than the optimal $P=M$ case, and hence proves that our numerical scheme leads to the optimal set of weights for any SRJ method with $M$ steps. This confirms our intuition that adding additional roots to the polynomial would decrease the value of its maxima, resulting in faster numerical methods. Though the SRJ algorithm with $P=M$ we have presented here turns out to be nearly equivalent to the non-stationary RM of Young \cite{Young:1953}, in order to single it out as the optimum among the SRJ schemes, we will refer to it as the {\em Chebyshev-Jacobi} method (CJM) henceforth.
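The selection rule of Eq.~(\ref{eq:Pmax}) can be inverted in closed form using $T_M(x)=\cosh(M\,\mathrm{arccosh}\,x)$ for $x>1$; a small Python sketch (the function name is ours), shown here only as an illustration of the estimate:

```python
import numpy as np

def cycle_length(sigma, kmin, kmax):
    """Smallest M satisfying sigma >= |T_M(kappa_tilde(0))|^{-1}, i.e. the
    number of iterations of one optimal cycle needed to cut the residual
    by a factor sigma, using T_M(x) = cosh(M arccosh(x)) for x > 1."""
    x0 = (kmax + kmin) / (kmax - kmin)   # |kappa_tilde(0)| > 1
    return int(np.ceil(np.arccosh(1.0 / sigma) / np.arccosh(x0)))
```

Note that for $\kmin \ll \kmax$ one has $\mathrm{arccosh}\,|\tilde{\kappa}(0)| \simeq 2\sqrt{\kmin/\kmax}$, so the required cycle length grows only like $\sqrt{\kmax/\kmin}$.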
In this work we have obtained the optimal coefficients for the SRJ method to solve linear systems arising in the finite difference discretization of elliptic problems in the case $P=M$, i.e., using each weight only once per cycle. We have proven that these are the optimal coefficients for the general case, where we fix $P$ but allow for repetitions of the coefficients ($P\le M$). Furthermore, we have provided a simple estimate to compute the optimal value of $M$ to reduce the initial residual by a prescribed factor. We have tested the performance of the method with two simple examples (in 2 and 3 dimensions), showing that the analytically derived amplification factors can be obtained in practice. When comparing the optimal $P=M$ set of coefficients with those in the literature~\cite{Yang2014,Adsuetal15}, our method always gives better results, i.e., it achieves a larger reduction of the residual for the same number of iterations $M$. Additionally, the new coefficients can be computed analytically, as a function of $M$, $\kmax$, and $\kmin$, which avoids the numerical resolution of the minimization problem involved in previous works on the SRJ. The result is a numerical method that is easy to implement, and where all necessary coefficients can easily be calculated given the grid size, boundary conditions and tolerance of the elliptic problem at hand {\em before} the actual iteration procedure is even started. We have found that following the same philosophy that inspired the development of SRJ methods, the case $P=M$ results in an iterative method nearly equivalent to the non-stationary Richardson method as implemented by Young \cite{Young:1953}; namely, where the coefficients $\omega_{n}$ are taken to be the reciprocals of the roots of the corresponding Chebyshev polynomials in the interval bounding the spectrum of eigenvalues of the matrix ($A$) of the linear system. 
Furthermore, inspired by the same ideas as in the original SRJ methods, the actual minimum and maximum eigenvalues of $A$ do not need to be explicitly computed. Instead, we resort to a (much simpler) von Neumann analysis of the linear system, which yields the values of $\kmin$ and $\kmax$ that {\em replace} the (larger) values of the minimum and maximum eigenvalues of $A$. The key to our success in the practical implementation of the Chebyshev-Jacobi methods stems from a suitable ordering (or scheduling) of the weights $\omega_{n}$ in the algorithm. Though other orderings have also been shown to work, our choice clearly limits the growth of round-off errors when the number of iterations is large. This ordering is inherited from the SRJ schemes. We have also tested the performance of the CJM for higher than second-order discretizations of the elliptic Laplacian operator. These cases are especially involved since the iteration matrix cannot be consistently ordered. Thus, Young's theory cannot be employed to find the value of the optimal weight of an SOR scheme applied to the resulting problems. For the particular case of the 9-point discretization of the Laplacian, even though the iteration matrix cannot be consistently ordered, Adams~\cite{Adams:1988} found the optimal weight for the corresponding SOR scheme in a rather involved derivation. Comparing, for the same 9-point discretization of the Laplacian, the numerical solution of a simple Poisson-like problem obtained with the SOR method derived by Adams and with the CJM presented here, it is evident that both methods perform quite similarly (though the optimal SOR scheme is still slightly better). However, the SOR method requires a multi-coloring parallelization strategy with up to 72 four-color orderings (each with different performance) when applied to the 9-point discretization of the Laplacian operator.
The parallelization strategy is even more intricate when a 17-point discretization of the Laplacian is used. In contrast, CJM methods are trivially parallelizable and do not require any multi-coloring strategy. Thus, we conclude that the slightly lower performance of the CJM relative to the SOR method in sequential applications is easily outbalanced in parallel implementations of the former method.
{The Ward identities for conformal symmetries in single field models of inflation are studied in more detail in momentum space. For a class of generalized single field models, where the inflaton action contains arbitrary powers of the scalar and its first derivative, we find that the Ward identities are valid. We also study a one-parameter family of vacua, called $\alpha$-vacua, which preserve conformal invariance in de Sitter space. We find that the Ward identities, up to contact terms, are met for the three point function of a scalar field in the probe approximation in these vacua. Interestingly, the corresponding non-Gaussian term in the wave function does not satisfy the operator product expansion. For scalar perturbations in inflation, in the $\alpha$-vacua, we find that the Ward identities are not satisfied. We argue that this is because the back-reaction on the metric of the full quantum stress tensor has not been self-consistently incorporated. We also present a calculation, drawing on techniques from the AdS/CFT correspondence, for the three point function of scalar perturbations in inflation in the Bunch-Davies vacuum.} \begin{flushright} \small{TIFR/TH/16-26} \end{flushright}
\label{intro} Inflation is a successful paradigm which explains the observed approximate homogeneity and isotropy of the universe. It also gives rise to perturbations due to quantum effects, which lead to the anisotropy of the CMB and seed the growth of large scale structure in the universe. During the inflationary epoch, the universe was approximately de Sitter space, which is a maximally symmetric solution to Einstein's equations with a positive cosmological constant. The symmetry group of four dimensional de Sitter space is $O(1,4)$, with ten generators\footnote{For our analysis, we will be interested only in the connected subgroup of $O(1,4)$.}. It is interesting to ask what constraints are imposed by this large symmetry group, which is approximately preserved during inflation, on the quantum fluctuations generated during the inflationary epoch. Such an analysis has been carried out by a number of authors, see \cite{Antoniadis:1996dj, Larsen:2002et, Larsen:2003pf, McFadden:2010vh, Antoniadis:2011ib, McFadden:2011kk, Creminelli:2011mw, Bzowski:2011ab, Kehagias:2012pd, Kehagias:2012td, Schalm:2012pi, Bzowski:2012ih, McFadden:2014nta, Kehagias:2015jha}. The symmetry algebra of $O(1,4)$ is the same as the symmetry algebra of a three dimensional Euclidean Conformal Field Theory. In \cite{Mata:2012bx, Ghosh:2014kba, Kundu:2014gxa}, following the seminal works \cite{Maldacena:2002vr} and \cite{Maldacena:2011nz}, single field slow-roll inflation was studied and it was shown that the symmetry constraints on the correlation functions of scalar and tensor perturbations can be expressed in terms of the Ward identities of conformal invariance. These Ward identities give rise to the Maldacena consistency condition, and to additional similar constraints arising from the special conformal transformations. In fact, further study showed, \cite{Kundu:2015xta}, that these identities follow from the constraints of reparametrization invariance and should be generally valid.
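The best known of these constraints is the squeezed-limit consistency condition. For orientation we quote its standard form, with $P_\zeta(k)$ the scalar power spectrum, $n_s$ the spectral tilt, and the prime denoting that the overall momentum-conserving delta function has been stripped off:

```latex
\begin{equation}
\lim_{k_3 \to 0} \langle \zeta_{k_1} \zeta_{k_2} \zeta_{k_3} \rangle'
  = -(n_s - 1)\, P_\zeta(k_1)\, P_\zeta(k_3).
\end{equation}
```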
This allows the breaking of conformal invariance during inflation, due to the evolution of the inflaton, to be incorporated systematically even beyond leading order in the slow-roll parameters. Related references where the constraints are often thought of as following from non-linear realization of conformal symmetries include \cite{Weinberg:2003sw, Creminelli:2004yq, Cheung:2007sv, Weinberg:2008nf, Creminelli:2011sq, Bartolo:2011wb, Creminelli:2012ed, Hinterbichler:2012nm, Senatore:2012wy, Assassi:2012zq, Creminelli:2012qr, Goldberger:2013rsa, Hinterbichler:2013dpa, Creminelli:2013cga, Pimentel:2013gza, Berezhiani:2013ewa, Sreenath:2014nka, Mirbabayi:2014zpa, Joyce:2014aqa, Sreenath:2014nca, Binosi:2015obq, Chowdhury:2016yrh}. The situation is analogous to what happens in a field theory which is not scale invariant. The correlations in such a theory still satisfy the Callan-Symanzik equation, which now involves contributions due to the non-vanishing of the beta functions. In the conformally invariant limit, the beta functions vanish and the Ward identities simplify and constrain the correlators in a more powerful way. In the near conformal limit, where the beta functions are small, there can still be significant constraints from the Ward identities. In the same way, for inflation it was found in \cite{Kundu:2015xta} that the Ward identities are generally valid since they arise from the constraints of spatial reparametrization invariance, which is a gauge symmetry of general relativity, and must hold very generally. In the slow-roll limit, where there is approximate conformal invariance, these conditions can impose significant constraints on the correlation functions for the scalar and tensor perturbations. An important aspect of this symmetry based analysis is that it is model independent. Constraints which arise, for example, in the approximately conformally invariant limit, probe basic features of the inflationary model in a model independent way.
These constraints can have significant observational consequences, and can therefore give rise to model independent tests for the inflationary paradigm. In this paper, we continue to explore these symmetry properties of the correlation functions for perturbations produced during inflation. In section \ref{sound}, we consider a class of models which are not of the standard slow-roll type. Instead, in these models, called generalized single field models, the inflaton can roll quickly in units of the Hubble parameter, $H$, while the spacetime is still approximately de Sitter space. Using the earlier calculations of three and four point correlations for scalar perturbations in these models, \cite{Chen:2006nt} and \cite{Chen:2009bc}, we explicitly check that the Ward identities derived in \cite{Kundu:2015xta} are in fact valid. Some checks of these Ward identities were carried out earlier in \cite{Creminelli:2012ed}. It is usually assumed in inflation that the initial state of the universe, when inflation commenced, was the Bunch-Davies vacuum. This assumption is well motivated. It corresponds to taking the modes with wavelength much smaller than $H^{-1}$ to be in their ``ground state'', i.e. in the state they would occupy in Minkowski space. The physical picture here is that at length scales much smaller than the Hubble scale the universe should be well approximated by Minkowski space. One positive feature of this choice is that the back-reaction due to the quantum stress tensor in this vacuum is small, and this makes the calculations, where only classical effects are included to leading order, self-consistent and well justified. However, since one of our purposes here is to examine various inflationary possibilities in a more model independent manner, we turn next to examining the Ward identities mentioned above in a class of vacua called $\alpha$-vacua, which are different from the Bunch-Davies vacuum. 
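For orientation, recall the standard Mottola--Allen description of these vacua (quoted in common conventions; phase conventions vary in the literature): the $\alpha$-vacuum mode functions are Bogoliubov transforms of the Bunch--Davies ones,

```latex
\begin{equation}
u_k^{(\alpha)}(\eta) = \cosh\alpha \; u_k^{\rm BD}(\eta)
  + e^{i\beta} \sinh\alpha \, \big[ u_k^{\rm BD}(\eta) \big]^* ,
\end{equation}
```

so that every mode, however short its wavelength, carries an admixture of the conjugate solution; $\alpha=0$ recovers the Bunch--Davies vacuum, and this excitation of arbitrarily short modes is the origin of the large quantum stress tensor.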
For a suitable choice of parameters these vacua preserve conformal invariance in de Sitter space, see \cite{PhysRevD.32.3136}. It is therefore quite interesting to ask if the Ward identities hold in these vacua as well, since the approximate conformal invariance of these vacua in inflation could then also lead to significant constraints on correlation functions. There is one drawback to these vacua, however. Modes of arbitrarily short wavelength in these vacua can be viewed as being highly excited as compared to their ground state in the Minkowski vacuum. As a result, the quantum stress tensor in these vacua is not small, and in fact is expected to diverge. This raises the question of whether such vacua can in fact arise in a self-consistent way in any inflationary model. Since the thrust of our analysis is a model independent one, tied only to symmetry considerations, we set this worry aside at the outset. In section \ref{intml}, we begin by studying a probe scalar field, for which the quantum back-reaction can indeed be self-consistently neglected by taking the $M_{Pl}\rightarrow \infty$ limit (while keeping $H$ fixed). We verify that the $\alpha$-vacua do preserve the full conformal symmetry\footnote{This is true barring some subtleties involving zero modes for a massless field which we do not address, see \cite{PhysRevD.32.3136, PhysRevD.35.3771}.}. In an interacting theory obtained by adding a cubic term, we find that the resulting three point function does satisfy the Ward identities of conformal invariance, as expected. This analysis in the probe case serves as a test of some of the basic issues involving the $\alpha$-vacua. Next, we turn to the more complicated case of inflation. Working with the same vacua, we calculate the three point function for scalar perturbations and find that the Ward identities now do not hold. For example, we find that the Maldacena consistency condition is violated.
We argue that this violation is because the back-reaction has not been included consistently for the analysis in these vacua. Unlike the probe case, we cannot set $M_{Pl}$ to be infinite here because the perturbations involved arise from gravity itself. It is therefore not consistent to neglect the back-reaction of the quantum stress tensor while incorporating quantum effects also suppressed in $M_{Pl}$ for the calculation of the three point function. This inconsistency, we argue, is why the Ward identities do not hold. To elaborate on this some more, the Ward identities, as we have mentioned above, arise because of spatial reparametrization invariance. The conditions ensuring this invariance are in fact part of the Einstein equations. By neglecting the back-reaction we do not meet the Einstein equations consistently. It is therefore not surprising that the Ward identities, which are consequences of these equations, are also not met. We end the paper in section \ref{bulk3pt} by discussing the scalar three point function in slow-roll inflation in some detail. This correlation function, which is observationally most significant in the study of non-Gaussianity, was first calculated in \cite{Maldacena:2002vr}. The Ward identities suggest a somewhat different way to calculate this correlation function. These identities relate the three point function to a scalar four point function in a particular limit, with the coefficient of the four point function being suppressed by a power of the slow-roll parameter ${\dot {\bar \phi}}/H$. This suggests that the leading slow-roll result for the three point function can be calculated from the four point function in the de Sitter approximation (where the slow-roll parameters can be set to vanish). We make this explicit in section \ref{bulk3pt} by carrying out the calculation along these lines. 
We show that replacing one of the legs in the four point calculation in de Sitter space with a factor of the slow-roll parameter ${\dot {\bar \phi}}/H$ does give the correct result for the three point function. This way of thinking about the three and higher point correlators, motivated by the AdS/CFT correspondence, and the resulting discussion of the Ward identities was implicit in some of the earlier literature, \cite{Ghosh:2014kba}, and has also played an important role in the recent discussions in \cite{Arkani-Hamed:2015bza}. The paper also includes appendices \ref{chenres}-\ref{bulkdet} which give additional important details. \textbf{Notation:} The Planck mass is given by $M_{Pl} = 1/\sqrt{8\pi G}$. We denote the conformal time coordinate by $\eta$. Spatial three vectors are denoted by boldface letters, e.g. $\vec[x], \vec[k]$ etc. $\vecs[k,a], a= 1,2, \ldots$ denotes the momentum vectors $\vecs[k,1], \vecs[k,2], \ldots$ etc, whereas $k^i, i = 1,2,3$ denotes the components of $\vec[k]$. The magnitude of a vector is denoted by the corresponding ordinary letter, e.g. $x \equiv |\vec[x]|$. A dot above a quantity denotes ordinary time derivative, e.g. $\dot{f} \equiv df/dt$.
\label{conclusion} We have studied the Ward identities for scale and special conformal transformations in the context of inflation and de Sitter space in this paper. It was argued earlier, \cite{Kundu:2015xta}, that these Ward identities follow from the coordinate reparametrization symmetries of the system. The coordinate reparametrization invariance can be used to set the perturbation in the inflaton to vanish, $\delta \phi=0$, at late times. The resulting perturbations in single field models then correspond to scalar perturbations $\zeta$, and tensor perturbations $\widehat\gamma_{ij}$ in the metric. The residual spatial reparametrization symmetries present give rise to Ward identities for the correlation functions of these perturbations. See \cite{Kundu:2015xta} for details. For generalized models of single field inflation, it was shown here that the Ward identities are indeed valid, as would be expected from the general nature of the arguments leading to these identities. We should mention that some of these Ward identities were checked in an earlier work \cite{Creminelli:2012ed}. We also explored a class of vacua, called $\alpha$-vacua, which preserve conformal invariance. For these vacua, we found that the scalar three point function $\langle\zeta\zeta\zeta\rangle$ did not meet the Maldacena consistency condition, which is the Ward identity for scale invariance. We argued that this is because the background inflationary solution, about which the perturbations have been computed, is itself not self-consistent. In the $\alpha$-vacua, the quantum stress tensor diverges, and thus the back-reaction of the quantum stress tensor cannot be neglected. The background solution though neglects this effect and only incorporates the classical potential and small corrections due to the rolling of the inflaton. We also explored the nature of the $\alpha$-vacua in some detail for a probe scalar field. 
We showed directly, by constructing the conserved charges, that these vacua preserve conformal invariance in the interacting theory, up to subtleties having to do with zero modes and possible surface terms, see appendix \ref{isogen}. We also calculated the late time three point function in the $\alpha$-vacua for a probe massless scalar field. We found that the result is conformally invariant, up to contact terms. However, interestingly, the corresponding non-Gaussian term in the wave function does not satisfy the operator product expansion. The implications for a possible dS/CFT correspondence are left for the future. Finally, we described an alternate calculation of the three point function for scalar perturbations in standard slow-roll inflation in the Bunch-Davies vacuum. This calculation is motivated by techniques drawn from the AdS/CFT correspondence and is related to other recent papers, including \cite{Ghosh:2014kba, Arkani-Hamed:2015bza, Lee:2016vti}, and could be useful in thinking about the implications of additional fields during inflation, including those with higher spin.
{In this paper, we study analytically the process of external generation and subsequent free evolution of the lepton chiral asymmetry and helical magnetic fields in the early hot universe. This process is known to be affected by the Abelian anomaly of the electroweak gauge interactions. As a consequence, chiral asymmetry in the fermion distribution generates magnetic fields of non-zero helicity, and vice versa. We take into account the presence of a thermal bath, which serves as a seed for the development of a magnetic-field instability in the presence of an externally generated lepton chiral asymmetry. The developed helical magnetic field and lepton chiral asymmetry support each other, considerably prolonging their mutual existence, with the `inverse cascade' transferring magnetic-field power from small to large spatial scales. For cosmologically interesting initial conditions, the chiral asymmetry and the energy density of helical magnetic field are shown to evolve by scaling laws, effectively depending on a single combined variable. In this case, the late-time asymptotics of the conformal chiral chemical potential reproduces the universal scaling law previously found in the literature for the system under consideration. This regime is terminated at lower temperatures because of scattering of electrons with chirality change, which exponentially washes out the chiral asymmetry. We derive an expression for the termination temperature as a function of the chiral asymmetry and energy density of helical magnetic field.}
Observations have established the presence of magnetic field of various magnitudes and on various spatial scales in our universe. Galaxies such as the Milky Way contain regular magnetic fields of the order of $\mu$G, while coherent fields of the order of $100~\mu$G are detected in distant galaxies \cite{Bernet:2008qp, Wolfe:2008nk}. There is strong evidence for the presence of magnetic field in the intergalactic medium, including voids \cite{Tavecchio:2010mk, Ando:2010rb, Neronov:1900zz, Dolag:2010}, with strengths exceeding $\sim 10^{-15}$~G\@. This supports the idea of a cosmological origin of magnetic fields, which are subsequently amplified in galaxies, probably by the dynamo mechanism (see reviews \cite{Widrow:2002ud, Kandus:2010nw, Durrer:2013pga, Subramanian:2015lua}). The origin of cosmological magnetic field is a problem yet to be solved, with several possible mechanisms under discussion. These can broadly be classified into inflationary and post-inflationary scenarios. Both types still face problems to overcome: inflationary magnetic fields are constrained to be rather weak, while those produced after inflation typically have too small coherence lengths (see \cite{Widrow:2002ud, Kandus:2010nw, Durrer:2013pga, Subramanian:2015lua} for a review of these mechanisms and an assessment of these difficulties). It should also be noted that generation of helical hypermagnetic field prior to the electroweak phase transition may explain the observed baryon asymmetry of the universe \cite{Fujita:2016igl, Kamada:2016eeb}. One of the mechanisms of generation of cosmological magnetic fields which is currently under scrutiny is based on the Abelian anomaly of the electroweak interactions \cite{Joyce:1997uy, Frohlich:2000en, Frohlich:2002fg}.
If the difference between the number densities of right-handed and left-handed charged fermions in the early hot universe happens to be non-zero (as in the leptogenesis scenario involving physics beyond the standard model; see \cite{Davidson:2008bu, Fong:2013wr} for reviews), then a specific instability arises with respect to generation of helical (hypercharge) magnetic field. The generated helical magnetic field, in turn, is capable of supporting the fermion chiral asymmetry, thus prolonging its own existence to cosmological temperatures as low as tens of MeV \cite{Boyarsky:2011uy}. In this process, magnetic-field power is permanently transferred from small to large spatial scales (the phenomenon known as `inverse cascade'). Further investigation of the general properties of the regime of inverse cascade revealed certain universal scaling laws in its late-time asymptotics \cite{Hirono:2015rla, Xia:2016any, Yamamoto:2016xtu}. In this paper, we study analytically the process of generation of helical magnetic field in the early hot universe by an unspecified external source of lepton chiral asymmetry. Helical magnetic field is produced due to the presence of thermal background, which we extrapolate to all spatial scales, including the super-horizon scales.\footnote{The spectral properties of magnetic fields on superhorizon spatial scales depend on a concrete model of generation of primordial magnetic fields (see \cite{Kandus:2010nw, Durrer:2013pga, Subramanian:2015lua} for recent reviews).} We consider a simple model of generation of magnetic field which assumes that the source of chiral anomaly maintains a constant value of the (conformal) chiral chemical potential of charged leptons. After generation of magnetic field of near maximal helicity, its evolution is traced in the absence of the external source of lepton chiral asymmetry. 
In this case, the helical magnetic field and the lepton chiral asymmetry are mutually sustained (decaying slowly) by the quantum anomaly until temperatures of the order of tens of MeV, with magnetic-field power being permanently transferred from small to large spatial scales in the regime of inverse cascade. We obtain analytic expressions describing the evolution of the lepton chiral chemical potential and magnetic-field energy density. The evolution of both these quantities exhibits a certain scaling behavior, effectively depending on a single combined variable. In this case, the late-time asymptotics of the chiral chemical potential reproduces the universal scaling law previously found in the literature for the system under investigation \cite{Hirono:2015rla, Xia:2016any, Yamamoto:2016xtu}. As the temperature drops down because of the cosmological expansion, the processes of lepton scattering with the change of chirality (the so-called chirality-flipping processes) start playing an important role, eventually leading to a rapid decay of the lepton chiral asymmetry. We give an analytic expression for the temperature at which this happens, depending on the initially generated values of the magnetic-field energy density and lepton chiral asymmetry.
We provided an analytic treatment of the process of generation of helical magnetic field in the early hot universe in the presence of externally induced lepton chiral asymmetry, and of the subsequent mutual evolution of the chiral asymmetry and magnetic field. Helical magnetic field is generated from the thermal initial spectrum (extrapolated to all scales, including the super-horizon ones) owing to the effects of the quantum chiral anomaly. The thermal bath also serves as a medium of relaxation of magnetic field to its thermal state. The generated helical magnetic field and the lepton chiral asymmetry are capable of supporting each other, thus prolonging their existence to cosmological temperatures as low as tens of MeV, with spectral power being permanently transferred from small to large spatial scales (the so-called `inverse cascade') \cite{Boyarsky:2011uy}. Our main results are summarized as follows. We obtained analytic expressions describing the evolution of the lepton chiral chemical potential and magnetic-field energy density. For a developed maximally helical magnetic field, both the chiral chemical potential $\Delta \mu$ and the relative fraction of magnetic-field energy density $r_B$ depend on their initial values and on time through a single variable $\phi$ introduced in (\ref{phi}). This scaling property is encoded in equations (\ref{dzeta})--(\ref{muevol}) and (\ref{NB})--(\ref{interpol-1}), and depicted in figures \ref{fig:muevol} and \ref{fig:interpol-1}. The late-time asymptotics for $\Delta \mu$ reproduces the scaling law $\Delta \mu \propto \eta^{-1/2} \log^{1/2} \eta$ [see equation (\ref{muas})] previously found in this system in \cite{Hirono:2015rla, Xia:2016any}. By numerical interpolation, we find that the relative fraction $r_B$ of the magnetic-field energy density in this regime decays as $r_B \propto \eta^{-5/9}$ all through the relevant part of the cosmological history.
Since the conformal time $\eta$ in our units is related to the temperature $T$ as $\eta = M_* / T$, this also describes the evolution of these quantities with temperature. As the temperature drops to sufficiently low values due to the cosmological expansion, the chirality-flipping lepton scattering processes take control of the evolution of the chiral asymmetry, leading to its rapid decay (\ref{expodec}). We derived a simple expression (\ref{Tf}) for the temperature at which this happens, depending on the initially generated values of the energy density of magnetic field and of the lepton chiral asymmetry. The analytic expressions obtained in this paper are sufficiently general and may be used for a preliminary evaluation of scenarios of cosmological magnetogenesis by lepton chiral asymmetry.
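As a purely qualitative illustration of the mechanism summarized above — mutual support of the asymmetry and the helical field through the anomaly, terminated by chirality flips — one can write a deliberately simplified single-mode toy model (our own construction for illustration; it is not the set of kinetic equations solved in this paper). The helicity $h$ of a mode $k$ grows when the chiral chemical potential $\mu$ exceeds $k$, while the anomaly forces $\mu + c\,h$ to be conserved up to the flipping term:

```python
def evolve_toy(mu0, h0, k=1.0, c=0.5, gamma_flip=0.0, dt=1e-4, steps=200000):
    """Toy single-mode model of anomaly-coupled evolution:
        dh/dt  = 2*k*(mu - k)*h            (helical instability for mu > k)
        dmu/dt = -c*dh/dt - gamma_flip*mu  (anomaly + chirality flips)
    With gamma_flip = 0, mu + c*h is conserved by construction."""
    mu, h = mu0, h0
    for _ in range(steps):
        dh = 2.0 * k * (mu - k) * h * dt
        mu += -c * dh - gamma_flip * mu * dt   # anomaly feed plus flipping decay
        h += dh
    return mu, h
```

With the flipping rate switched off, $\mu$ relaxes to $k$ and the helicity saturates (mutual support); with a non-zero flipping rate, both quantities are eventually washed out, mimicking the termination of the regime.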
We perform three-dimensional smoothed particle hydrodynamics (SPH) simulations of gas accretion onto the seeds of binary stars to investigate their short-term evolution. To account for a dynamically evolving envelope with a non-uniform distribution of gas density and angular momentum in the accreting flow, our initial condition includes a seed binary and a surrounding gas envelope, modelling the core-collapse phase of a gas cloud after fragmentation has already occurred. We run multiple simulations with different values of the initial mass ratio $q_0$ (the ratio of secondary over primary mass) and gas temperature. For our simulation setup, we find a critical value of $\qc=0.25$ which distinguishes the later evolution of the mass ratio $q$ as a function of time. If $q_0 \ga \qc$, the secondary seed grows faster and $q$ increases monotonically towards unity. If $q_0 \la \qc$, on the other hand, the primary seed grows faster and $q$ is lower than $q_0$ at the end of the simulation. Based on our numerical results, we analytically calculate the long-term evolution of the seed binary including the growth of the binary by gas accretion. We find that the seed binary with $q_0 \ga \qc$ evolves towards an equal-mass binary star, and that with $q_0 \la \qc$ evolves to a binary with an extreme value of $q$. Binary separation is a monotonically increasing function of time for any $q_0$, suggesting that binary growth by accretion does not lead to the formation of close binaries.
\label{sec:discussion} \subsection{Categorising the Accreting gas}\label{subsec:categorization} Focusing on the short-term evolution while $\Delta \Mb(t) \lid \Mb$, the gas accreting onto the seed binary can be categorised into four different modes as we describe below. To characterise the properties of accreting gas in each mode, it is useful to plot the relation between the initial specific angular momentum of the gas and $q_0$. In Fig.~\ref{fig:AngMomCriterion}, $j_{\rm in}$ (thick black dotted line) and $j_{\rm out}$ (thick black dot-dashed line) denote the initial gas specific angular momentum at $R_{\rm in}$ and $R_{\rm out}$, respectively, and $j_{\rm M_b}$ (thick black dashed line) denotes the initial gas specific angular momentum at $r_{\rm M_b}$, inside which the gas mass is equal to $\Mb$. The specific angular momenta of the secondary and the primary are defined as $j_s$ (thick red dashed line) and $j_p$ (thick red solid line). The specific angular momentum of the L1 point is defined as $j_{\rm L1}$ (blue solid line). The specific angular momentum of the circum-binary disc, $j_{\rm cb}$ (green dashed line), is defined as \begin{equation} j_{\rm cb} = \sqrt{ \frac{2}{1+q_0} }j_{\rm circ}, \label{eq:j_cb} \end{equation} such that the centrifugal potential ${j_{\rm cb}}^2/(2{r_{\rm cyl}}^2)$ equals the gravitational potential $G \Mb /r_{\rm cyl}$ at $r_{\rm cyl}=a_0/(1+q_0)$, which is the distance of the secondary from the mass centre \citep{Ochi_etal_05}. Since the initial specific angular momentum of the gas is determined by equation~(\ref{eq:j_des}), gas with $j_{\rm in}$ is expected to fall first onto the seed binary. At the end of the short-term evolution, gas with $j_{\rm M_b}$ is expected to fall onto the circum-stellar discs if we ignore the complex dynamics before the gas falls.
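The balance behind equation (\ref{eq:j_cb}) — centrifugal potential equal to the gravitational potential of the binary at the secondary's distance from the mass centre — can be verified mechanically (a sketch in arbitrary units; the choice $G=\Mb=a_0=1$ is an assumption of the snippet):

```python
import math

def j_cb(q0, G=1.0, Mb=1.0, a0=1.0):
    """Specific angular momentum of the circum-binary disc,
    j_cb = sqrt(2/(1+q0)) * j_circ, with j_circ = sqrt(G*Mb*a0)."""
    return math.sqrt(2.0 / (1.0 + q0)) * math.sqrt(G * Mb * a0)

def potentials_balance(q0, G=1.0, Mb=1.0, a0=1.0):
    """Return (centrifugal, gravitational) potentials evaluated at
    r_cyl = a0/(1+q0), the secondary's distance from the mass centre."""
    r = a0 / (1.0 + q0)
    j = j_cb(q0, G, Mb, a0)
    return j * j / (2.0 * r * r), G * Mb / r
```

The two returned potentials agree for any $q_0$, which is exactly the defining condition quoted above.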
In all our simulations, at the end of the short-term evolution, more than $80\%$ of gas in the circum-stellar discs comes from the gas whose initial angular momentum is $j_{\rm in}<j<j_{\rm M_b}$. Thus $j_{\rm M_b}$ adequately represents the specific angular momentum of accreted gas at the end of the short-term evolution. Here we define the specific angular momentum of accreted gas as $j_{\rm acc}$, which is in the region filled by backslash, mesh, and single slash in Fig.~\ref{fig:AngMomCriterion}. The first mode of accreting gas is the ``{\it circum-primary disc mode}" (the region filled by backslash in Fig.~\ref{fig:AngMomCriterion}). We can see this mode when \begin{equation} j_{\rm acc}<j_{\rm L1}. \label{eq:type1} \end{equation} For example, when $q_0=0.1$, all accreted gas satisfies equation~(\ref{eq:type1}), indicating that the gas easily enters inside the L1 point, where the Jacobi constant of the gas is dissipated by the shock as discussed in Subsection \ref{subsec:results_disc}. Since the L1 point and the mass centre of the seed binary are in the Roche lobe of the primary, the gas forms a circum-primary disc (Fig.~\ref{fig:q01}b,e), the primary seed grows, and the mass ratio decreases monotonically (Fig.~\ref{fig:Evo}). The second mode of accreting gas is the ``{\it marginal mode}" (the region filled by mesh in Fig.~\ref{fig:AngMomCriterion}). This mode is seen when \begin{equation} j_{\rm L1} < j_{\rm acc} < j_{\rm s}. \label{eq:type2} \end{equation} When $q_0=0.2$, for example, all accreted gas satisfies equation~(\ref{eq:type2}). In this case, most of the gas is trapped by the primary, similarly to the {\it circum-primary disc mode}. In the end, the mass ratio decreases. However, the gas that satisfies equation~(\ref{eq:type2}) enters the secondary's Roche lobe more easily than in the {\it circum-primary disc mode}. Inside the secondary's Roche lobe, the Jacobi constant of the gas is dissipated by the shock. As a result, $M_{\rm acc,s}$ becomes non-negligible in the end.
The third mode of accreting gas is the ``{\it circum-stellar discs mode}" (the region filled by single slash in Fig.~\ref{fig:AngMomCriterion}). We can see this mode when \begin{equation} j_{\rm s} < j_{\rm acc} < j_{\rm cb}. \label{eq:type3} \end{equation} In this case, a circum-secondary disc is formed. Once the circum-secondary disc is formed, $\Delta M_{\rm s}/q_0$ dominates, and the mass ratio increases monotonically. We can see this mode when $q_0=0.7$, for example (see Fig.~\ref{fig:q07}). The fourth mode is the ``{\it circum-binary disc mode}" (the region filled by double slash in Fig.~\ref{fig:AngMomCriterion}). We can see this mode when the specific angular momentum of gas is larger than $j_{\rm cb}$ (equation~\ref{eq:j_cb}), \begin{equation} j > j_{\rm cb}. \end{equation} In this mode, the majority of the gas cannot enter either Roche lobe because of the centrifugal barrier, and the gas first settles down into the circum-binary disc. Then, the gas enters each Roche lobe through the L2 or L3 point, and falls onto the circum-stellar discs. This behaviour is seen at $t>6\pi$ for $q_0=0.7$ (see Fig.~\ref{fig:q07}b,e). Since $j_{\rm M_b}$ is lower than $j_{\rm cb}$ for any $q_0$ in our simulations (Fig.~\ref{fig:AngMomCriterion}), the gas in the {\it circum-binary disc mode} is not accreted by the end of the short-term evolution. Therefore, the {\it circum-binary disc mode} is irrelevant for the $q$-evolution in the short term. To investigate the $q$-evolution in this mode, we need to simulate the long-term evolution. In our simulations, the time evolution of the mass ratio qualitatively changes at $q_{\rm c,hot}=0.23$ (hot case) or $q_{\rm c,cold}=0.26$ (cold case). The values of $q_{\rm c,hot}$ and $q_{\rm c,cold}$ roughly correspond to the intersection point of $j_{\rm s}$ and $j_{\rm {M_b}}$ in Fig.~\ref{fig:AngMomCriterion}. Therefore, we define a critical initial mass ratio $\qc$ at this intersection point, and we find $\qc=0.25$ from Fig.~\ref{fig:AngMomCriterion}.
The value of $q_{\rm c,cold}$ is somewhat closer to $\qc$ than $q_{\rm c,hot}$. This is because the gas flow is closer to a ballistic motion in the cold limit than in the hot case. With a finite gas temperature, the pressure gradient force pushes the gas outward in the radial direction. Therefore, even if $j_{\rm M_b}<j_{\rm L1}$, the rotation radius of gas with $j_{\rm M_b}$ can reach ${j_{\rm L1}}^2/G \Mb$. Since $j_{\rm M_b}$ is a monotonically increasing function of $q_0$, $q_{\rm c,hot}$ is somewhat lower than $q_{\rm c}$. The difference between $q_{\rm c,hot}$ and $q_{\rm c,cold}$ is small since this push-out effect is expected to be weak when $c_s/v_{\rm K}<1$. Here we emphasize that the critical value $\qc=0.25$ was derived only for a particular distribution of angular momentum and density (equation~\ref{eq:rel_j_Mqum}), and that it was evaluated when $\Delta M_{\rm b}(t) = M_{\rm b}$. In summary, gas accretion onto the primary dominates in the {\it circum-primary disc mode} and the {\it marginal mode}. In the {\it circum-stellar discs mode}, by contrast, a circum-secondary disc is formed and accretion onto the secondary becomes significant enough to increase the mass ratio. The gas in the {\it circum-binary disc mode} forms a circum-binary disc. \begin{figure*} \begin{center} \includegraphics[scale=0.6]{fig_7.eps} \end{center} \caption{Relation between the initial mass ratio $q_0$ and the specific angular momentum $j$ of the envelope. Each line shows the specific angular momentum of the secondary seed (thick red dashed), the primary seed (thick red solid), the L1 point (thin blue solid), the circum-binary disc (thin green dashed), and the initial gas specific angular momentum at $R_{\rm in}$ (thick black dotted), at $R_{\rm out}$ (thick black dot-dashed), and at $r_{\rm Mb}$ (thick black dashed). The black open circle at the intersection of $j_{\rm p}$ and $j_{\rm M_b}$ indicates the critical value $\qc=0.25$.
Each shaded region indicates a different mode of gas accretion: {\it circum-primary disc mode} (backslash), {\it marginal mode} (mesh), {\it circum-stellar discs mode} (single slash), and {\it circum-binary disc mode} (double slash). } \label{fig:AngMomCriterion} \end{figure*} \subsection{Analytic Estimate of Long-term Evolution}\label{subsec:analytic} In our numerical simulations, we focus on the short-term evolution until $\Delta \Mb(t) = \Mb$ assuming an isolated binary with no self-gravity. In this subsection, we discuss the long-term evolution of binary separation analytically including binary growth by accretion. There are two effects which change the binary separation by accretion. One is the increase of binary mass. When the binary mass becomes larger and if the angular momentum is conserved, then the binary separation becomes smaller because of stronger gravitational force. The other is the increase of binary angular momentum, which increases the binary separation. The evolution of binary separation is determined by the competition between above two effects. These effects become especially important when $\Delta \Mb(t)>\Mb$. First, we formulate the time evolution of binary in our model. Then, we discuss one possibility in which the long-term evolution can be predicted based on our numerical results of short-term evolution. As for the binary, we define the time-dependent binary mass $\Mb(t)$, binary separation $a(t)$, mass ratio $q(t)$. The reference specific orbital angular momentum can be written as \begin{equation} j_{\rm circ}(t)=\sqrt{G\Mb(t)a(t)} \label{eq:j_circ(t)}. \end{equation} Then the time-dependent orbital angular momentum of binary $J_{\rm b}(t)$ is written by \begin{equation} J_{\rm b}(t) = \frac{2q(t)}{(1+q(t))^2} \Mb(t) j_{\rm circ}(t). 
\label{eq:J_b(t)} \end{equation} We introduce the following dimensionless variables: \begin{eqnarray} \tilde{M}(t) &=& \frac{M_{\rm b}(t)}{M_{\rm b}} \label{eq:tilde_M},\\ \tilde{J}(t) &=& \frac{J_{\rm b}(t)}{J_{\rm b}} \label{eq:tilde_J},\\ \tilde{a}(t) &=& \frac{a(t)}{a_0} \label{eq:tilde_a}, \end{eqnarray} so that $j_{\rm circ}(t)$ can be written as \begin{equation} j_{\rm circ}(t)=\frac{(1+q(t))^2}{q(t)}\frac{q_0}{(1+q_0)^2}\frac{{\tilde J}(t)}{{\tilde M}(t)}j_{\rm circ}. \end{equation} Note that we stop our simulations when $\tilde{M}=2$. As for the envelope, in our model (equations~\ref{eq:rho_des} and \ref{eq:j_des}), the specific angular momentum of the gas, $j$, and the gas mass inside radius $r$, $M_{\rm gas}$, satisfy the relation \begin{equation} j\propto M_{\rm gas} \propto r. \label{eq:rel_j_Mqum} \end{equation} From equations~(\ref{eq:tilde_M}) and (\ref{eq:rel_j_Mqum}), $j_{\rm in}$ as a function of time is given by \begin{equation} j_{\rm in}(t) = \tilde{M}j_{\rm in} \label{eq:j_in(t)}. \end{equation} From equations~(\ref{eq:j_0}), ~(\ref{eq:j_circ(t)}) and (\ref{eq:j_in(t)}), we have \begin{eqnarray} j_{\rm in} &=& \frac{2q_0}{(1+q_0)^2}j_{\rm circ},\label{eq:j_0_again}\\ j_{\rm in}(t) &=& \frac{2q(t)}{(1+q(t))^2}\frac{{\tilde M}^2(t)}{{\tilde J(t)}}j_{\rm circ}(t). \label{eq:j_0_t} \end{eqnarray} Equations~(\ref{eq:j_0_again}) and (\ref{eq:j_0_t}) represent the specific angular momentum at the inner edge of the envelope. The power indices of ${\tilde M}$ and ${\tilde J}$ in equation~(\ref{eq:j_0_t}) reflect the spatial distribution of density and angular momentum in the envelope. If the relation \begin{equation} \frac{{\tilde M}^2(t)}{{\tilde J}(t)}=1\label{eq:self_similar} \end{equation} holds and if $q(t)=q_0$, equations~(\ref{eq:j_0_again}) and (\ref{eq:j_0_t}) are identical in units of $M_{\rm b}(t)=a(t)=1$ and $M_{\rm b}=a_0=1$.
This indicates that the evolution of the binary system is self-similar when equation~(\ref{eq:self_similar}) holds and $q(t)=q_0$. Note that, in equations~(\ref{eq:j_in(t)}) and (\ref{eq:j_0_t}), it is implicitly assumed that all the angular momentum and mass of the envelope are converted into the orbital angular momentum and mass of the binary. After the above preparation, we can now discuss the time evolution of the binary separation. From equations~(\ref{eq:J_b}) and (\ref{eq:J_b(t)}), we have \begin{equation} \tilde{a}(t) = \left(\frac{q(t)}{q_0}\right)^{-2} \left(\frac{1+q(t)}{1+q_0} \right)^{4}\frac{{\tilde J}^2(t)}{{\tilde M}^3(t)}. \label{eq:a_evo} \end{equation} From equation~(\ref{eq:a_evo}), we can see that the separation becomes larger with increasing orbital angular momentum of the binary, and smaller with increasing mass. Moreover, the separation also depends on $q(t)$, and this dependence originates from equation~(\ref{eq:J_b(t)}): for given $J_{\rm b}(t)$ and $M_{\rm b}(t)$, equation~(\ref{eq:J_b(t)}) shows that $a(t)$, which enters through $j_{\rm circ}(t)$, depends on $q(t)$. If equation~(\ref{eq:self_similar}) holds and $q(t)=q_0$, the binary separation is proportional to the accreted mass in our model: \begin{equation} \tilde{a}(t) = {\tilde M}(t).\label{eq:a_self} \end{equation} The analytic result of equation~(\ref{eq:a_self}) is consistent with the numerical work by \cite{Bate_00}. Here, we discuss one possibility in which the long-term evolution can be predicted by reusing the result of the short-term evolution. From equations~(\ref{eq:j_0_again}) and (\ref{eq:j_0_t}), we see that the difference between $j_{\rm in}/j_{\rm circ}$ and $j_{\rm in}(t)/j_{\rm circ}(t)$ is caused only by the mass ratio, if equation~(\ref{eq:self_similar}) always holds. According to our simulations (Fig.~\ref{fig:Evo}), in the hot case with $q_0=0.5$ we find $q \approx 0.7$ when $\tilde{M}=2$.
Under the above assumptions, we can reuse the former result to predict that the mass ratio would be $q \approx 0.9$ when the binary reaches $\tilde{M}=3$. Repeating this procedure, we can predict the long-term evolution of a seed binary. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{fig_8.eps} \end{center} \caption{Binary separation at the end of the short-term evolution, $\tilde{a}(t_{\rm end})$, in the hot (thick red line) and cold (thin blue line) cases. The black horizontal line denotes $\tilde{a}(t_{\rm end}) = 2$.} \label{fig:a_evo} \end{figure} We saw in Fig.~\ref{fig:Evo} that, in the short-term evolution, $q(t)$ increases monotonically if $q_0>\qc$, and vice versa. Based on this result and the argument in the previous paragraph, we argue that the long-term evolution of $q(t)$ is qualitatively determined by $q_0$. Fig.~\ref{fig:a_evo} plots equation~(\ref{eq:a_evo}) at the end of the short-term evolution (i.e., the binary separation at $\tilde{M} = 2$) using our numerical results for $q(t)$ and equation~(\ref{eq:self_similar}). Fig.~\ref{fig:a_evo} shows that the separation reaches $\tilde{a}(t_{\rm end})=2$ in the cases with $q_0 \rightarrow 1.0$ and $q_0\simeq q_{\rm c}$, indicating that the time evolution of the binary is self-similar in these cases (equation~\ref{eq:a_self}). Fig.~\ref{fig:a_evo} also shows that $\tilde{a}(t_{\rm end})>1$ for any $q_0$, which suggests that the binary separation is a monotonically increasing function of time, and therefore that close binaries are difficult to form. Here, we note again that these analytic results are based on the assumption that all the angular momentum of the envelope is converted into the orbital angular momentum of the binary. In other words, we are disregarding the division of the gas angular momentum between the orbital angular momentum of the binary and that of the circum-stellar discs. In order to investigate the growth of the separation more properly, a direct calculation of the binary orbit is needed.
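The separation evolution of equation~(\ref{eq:a_evo}) and its self-similar limit can be checked numerically; the short sketch below (our own helper in the notation of the text, not code from the simulations) evaluates $\tilde{a}(t)$ from $q(t)$, $\tilde{J}(t)$ and $\tilde{M}(t)$:

```python
# Dimensionless binary separation of equation (eq:a_evo):
#   a~ = (q/q0)^-2 * ((1+q)/(1+q0))^4 * J~^2 / M~^3
# Illustrative sketch in the notation of the text.

def separation_tilde(q_t, q0, J_tilde, M_tilde):
    return (q_t / q0) ** -2 * ((1.0 + q_t) / (1.0 + q0)) ** 4 \
        * J_tilde ** 2 / M_tilde ** 3

# In the self-similar limit q(t) = q0 with J~ = M~^2 (equation
# eq:self_similar), the separation grows linearly with the accreted
# mass, a~ = M~ (equation eq:a_self).
```

For example, with $q(t)=q_0$ and $\tilde{J}=\tilde{M}^2=4$ at $\tilde{M}=2$, the sketch returns $\tilde{a}=2$, reproducing equation~(\ref{eq:a_self}).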
\label{sec:conclusion} In the present work, we investigate the short-term evolution of a seed binary using the SPH code {\tt GADGET-3} in three dimensions. Our simulation setup includes a non-uniform distribution of gas density and angular momentum, with $\rho \propto r^{-2}$ and $j \propto r$, respectively. In the initial condition, the seed binary is assumed to have formed around the mass centre of the binary by fragmentation, conserving angular momentum and mass. The seed binary is isolated, and the self-gravity of the gas is ignored. With this setup, we compute the accretion of the gaseous envelope onto the seed binary until the accreted mass exceeds the initial binary mass, surveying the parameter range $0.1<q_0<1.0$ and the sound speeds $c_s/\sqrt{GM_{\rm b}/a_0} = 0.05$ (cold) and $0.25$ (hot). As a result, we categorise the gas accretion into four different modes as follows: \begin{enumerate} \item {\it ``Circum-primary disc mode"} is seen when the specific angular momentum of the accreting gas is lower than that of the L1 point, i.e., $j_{\rm acc} < j_{\rm L1}$. Because its specific angular momentum is small enough, gas with $j_{\rm acc} < j_{\rm L1}$ enters the primary's Roche lobe; most of the gas therefore falls onto the primary and the circum-primary disc, and $q(t)$ decreases monotonically. \item {\it ``Marginal mode"} is seen when $j_{\rm L1}<j_{\rm acc}<j_{\rm s}$. In this case, although most of the gas is trapped by the primary, as in the {\it circum-primary disc mode}, some gas is able to enter the secondary's Roche lobe, and the secondary starts to accrete. As a result, $q(t)$ becomes smaller than $q_0$ after the short-term evolution. \item {\it ``Circum-stellar discs mode"} is seen when $j_{\rm s} < j_{\rm acc} < j_{\rm cb}$. If the specific angular momentum of the gas exceeds that of the secondary, the gas starts to rotate around the secondary, and a circum-secondary disc is also formed.
Once the circum-secondary disc is formed, $q(t)$ monotonically increases. \item {\it ``Circum-binary disc mode"} is seen when $j_{\rm cb} < j$. In this case, the gas cannot fall onto the circum-stellar discs directly because of its large angular momentum. Therefore, the gas falls onto the circum-binary disc first and only later enters the Roche lobes through the L2 or L3 point. \end{enumerate} We find that the short-term evolution of the mass ratio is qualitatively different depending on its initial value $q_0$. If $q_0> \qc = 0.25$, the final mass ratio exceeds $q_0$. This critical value $\qc$ is determined by the condition $j_{\rm s} = j_{\rm M_{\rm b}}$ in Fig.~\ref{fig:AngMomCriterion}. The critical value $\qc=0.25$ was derived only for a particular distribution of angular momentum and density (equation~\ref{eq:rel_j_Mqum}), and was evaluated when $\Delta M_{\rm b} = M_{\rm b}$. In the {\it circum-primary disc mode} or the {\it marginal mode}, the dominant accretion onto the primary decreases the $q$-value. However, once the circum-secondary disc is formed, the accretion onto the secondary becomes significant enough to increase the mass ratio. The value of $\qc$ does not depend strongly on the gas temperature as long as $c_s/v_{\rm K}<1$. We also estimate the long-term evolution of a seed binary analytically. Assuming that equation~(\ref{eq:self_similar}) holds, we argue that the evolution of the binary system would be self-similar, and that the short-term evolution of $q(t)$ from our simulations can be reused just by updating the initial mass ratio. As a result, we find that the binary separation is a monotonically increasing function of time for any $q_0$. This result suggests that close binaries are difficult to form. In the future, we will include direct computations of the binary orbit in our simulations in order to investigate the effect of binary growth by accretion.
We present a measurement of the linear growth rate of structure, \textit{f}, from the Sloan Digital Sky Survey III (SDSS III) Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 12 (DR12), using Convolution Lagrangian Perturbation Theory (CLPT) with Gaussian Streaming Redshift-Space Distortions (GSRSD) to model the two-point statistics of BOSS galaxies in DR12. The BOSS DR12 dataset includes 1,198,006 massive galaxies spread over the redshift range $0.2 < z < 0.75$, divided into three redshift bins. Applying CLPT-GSRSD to the combined sample, we report measurements of $f \sigma_8$ in each of the three redshift bins. We find $f \sigma_8 = 0.430 \pm 0.054$ at $z_{\rm eff} = 0.38$, $f \sigma_8 = 0.452 \pm 0.057$ at $z_{\rm eff} = 0.51$ and $f \sigma_8 = 0.457 \pm 0.052$ at $z_{\rm eff} = 0.61$. Our results are consistent with the predictions of Planck $\Lambda$CDM-GR. Our constraints on the growth rate of structure in the Universe at different redshifts serve as a useful probe that can help distinguish between a model of the Universe based on dark energy and models based on modified theories of gravity. This paper is part of a set that analyses the final galaxy clustering dataset from BOSS. The measurements and likelihoods presented here are combined with others in \citet{Acacia2016} to produce the final cosmological constraints from BOSS.
The theory of General Relativity (GR) gives us a relation between the expansion rate of the Universe and its matter and energy content \citep[][]{Einstein1915, Einstein1916}. At the same time, cosmological observations have also given us a glimpse into the Universe's dark sector. The observation of the accelerated expansion of the Universe is a landmark discovery in cosmology \citep[][]{Riess1998, Perlmutter1999}. The acceleration of the expansion of the Universe is most commonly explained by a framework which suggests that our Universe is dominated by a `dark energy' field with negative pressure. This dark energy behaves like the cosmological constant ($\Lambda$) in Einstein's theory of General Relativity \citep[][]{Padmanabhan2007}. The $\Lambda$CDM-GR model, which accommodates the accelerated expansion of the Universe, is consistent with probes such as the Cosmic Microwave Background (CMB) \citep[][]{Bennett2013, Planck2014a} and Baryon Acoustic Oscillations (BAO) \citep[][]{Eisenstein2005, Cole2005, Hutsi2006, Percival2007, Kazin2010, Percival2010, Reid2010}. The accelerated expansion of the Universe can also be explained by the possibility of `dark gravity' \citep[][]{HenryCouannier2005a, HenryCouannier2005b, Bludman2007, Durrer2007, HenryCouannier2007, Heavens2009, Lobo2011, Lobo2012}, which suggests that General Relativity is incorrect on the largest scales and is a limit of a more complete theory of gravity. Such a possibility opens the door to explaining the accelerated expansion of the Universe with frameworks that reproduce cosmological observations by modifying the form of the equations of GR.
The cause of the acceleration of the expansion of the Universe remains a mystery. One cannot settle on a preferred candidate to explain measurements of the expansion history \textit{H(z)} from spectroscopic surveys and probes like Type Ia supernovae and BAO, since modified theories of gravity \citep[][]{Carroll2004, Kolb2006, Carroll2006, Cardone2012} and theories based on dark energy explain the observations equally well. In other words, measurement of the expansion rate alone cannot distinguish between a model based on dark energy and modified theories of gravity. One way of resolving this conundrum lies in the investigation of the growth rate of structure inside the Universe \citep[][]{Peacock2006, Albrecht2007, Pouri2013, Pouri2014, Alam2015a, Mohammad2015}. The growth rate of structure in the Universe is decided by the competing effects of the gravitational collapse of density fluctuations, which accelerates their growth, and the expansion rate, which inhibits it. Since the theory of General Relativity gives us a relation between the growth rate of cosmological structure and the expansion history of the Universe, measurements of the growth rate give us a handle on the underlying theory of gravity. In cosmological observations, the positions of galaxies are mapped by redshift, which corresponds to the true distance according to the Hubble law. Probes which look at the growth of structure in the Universe also include the peculiar velocities of galaxies. The observed redshift (\textit{z}) is, in fact, a combination of the Hubble recession velocity and the peculiar velocity caused by gravitational dynamics. Peculiar velocities, which are deviations of galaxy velocities from pure Hubble flow, thus combine with the Hubble flow to give rise to distortions in the reconstructed spatial distribution of the observed objects.
The ensuing distortions manifest themselves as an anisotropy in the distribution of objects along the radial direction in redshift space \citep[][]{Kaiser1987, Hamilton1992, Cole1995, Guzzo1997, Peacock2001, Scoccimarro2004, Tegmark2006, White2009, Percival2009, Yoo2009, McDonaldP2009, McDonaldPSeljak2009, Percival2011, Reid2011, Yoo2012, Samushia2012, McQuinn2013, Beutler2014, White2015, Simpson2016}. These distortions are referred to as `redshift space distortions' (RSD). Redshift space distortions can be used to reveal information about the motion of galaxies and the underlying matter distribution in the Universe. The distinctive features of RSD are revealed in the two-point correlation statistics of galaxy distributions, which are obtained as functions of variables representing distances parallel and perpendicular to the line of sight ($s_{||}$ and $s_{\perp}$, respectively). On small spatial scales, where galaxies with high velocities dominate, RSD manifests as an elongation in redshift-space maps with the axis of elongation pointing towards the observer (i.e., along $s_{||}$). This phenomenon is referred to as the ``Fingers of God'' effect \citep[][]{Jackson1972, Tegmark2004}. On larger scales, one observes the ``Kaiser effect'', where coherent peculiar velocities cause an apparent contraction of structure along the line of sight in redshift space. As a result, we see two distinct effects, $viz.$ the non-linear and the linear effects due to small-scale elongation and large-scale flattening in redshift-space maps. Measuring the growth rate of structure from RSD is intricate. \citet{Kaiser1987} tackled this problem by introducing a prescription for the redshift-space power spectrum based on a modification of the linear theory of large-scale structure.
In his landmark work, Kaiser related the power spectrum in redshift space, $P_s(\mathbf{k})$, to its counterpart in real space, $P_r(\mathbf{k})$, by the following relation: \begin{align} \label{eqn:Kaiser} P_s(\mathbf{k}) = \left( 1 + \beta \mu_k^2 \right)^2 P_r(\mathbf{k}) \end{align} where $\mu_k$ is the cosine of the angle between $\mathbf{k}$ and the line of sight and $\beta = \Omega_m^{0.55}/b$ is the linear distortion parameter. Here, $\Omega_m$ is the mass density parameter and $b$ denotes the linear bias parameter. Exploration of the concept of peculiar velocities on non-linear scales using ideas of the ``streaming model'' was presented in \citet{Peebles1980, Davis1983, Fisher1995}. \cite{Peebles1980} showed that the factor $\Omega_m^{0.55}$ relates peculiar velocities to density fluctuations. The real-space counterpart of the Fourier-space formalism given by Kaiser was introduced by \citet{Hamilton1992}. Extensions of the linear model, called the ``dispersion model'', have been used to determine the growth rate from the two-point galaxy correlation function $\xi(s_{||},s_{\perp})$ \citep[][]{Peacock2001, Hawkins2003}. However, measurements of the growth rate parameter from the dispersion model have been found to suffer from systematic errors \citep[][]{Taruya2010, Bianchi2012}. An important advance on the dispersion model was made by \citet{Taruya2010}, who proposed a new model of redshift-space distortions that includes correction terms arising from the non-linear coupling between the velocity and density fields.
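The linear Kaiser mapping of equation~(\ref{eqn:Kaiser}) is simple enough to illustrate directly; the sketch below (our own, with illustrative parameter values) shows how the anisotropy enters through $\mu_k$:

```python
# Linear Kaiser relation: P_s(k, mu) = (1 + beta * mu^2)^2 * P_r(k),
# with beta = Omega_m^0.55 / b. Parameter values are illustrative only.

def kaiser_power(P_r, mu, beta):
    """Redshift-space power from the real-space power for a mode
    whose angle to the line of sight has cosine mu."""
    return (1.0 + beta * mu ** 2) ** 2 * P_r

# Transverse modes (mu = 0) are unchanged, while line-of-sight modes
# (mu = 1) are boosted by the factor (1 + beta)^2.
```

This purely angular boost is what produces the large-scale flattening (Kaiser effect) discussed above.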
RSD analyses from data releases 9 \citep[DR09;][]{Ahn2012} and 10 \citep[DR10;][]{Ahn2014} of the Sloan Digital Sky Survey III \citep[SDSS III;][]{Eisenstein2011}, which includes the Baryon Oscillation Spectroscopic Survey \citep[BOSS;][]{Dawson2013}, employed the Lagrangian Perturbation Theory discussed by \citet{Matsubara2008a, Matsubara2008b} and the Gaussian Streaming Model to measure the linear growth rate of structure in the Universe \citep[][]{Reid2011, Reid2012, Samushia2013}. Other measurements of the linear growth rate of the Universe ($f\sigma_8$) include \citet{Cooray2004, Percival2004, Narikawa2010, Blake2011, Giovannini2011, Okumura2011, Beutler2012, Gupta2012, Hirano2012, Hudson2012, Nusser2012, Samushia2012, Shi2012, Contreras2013, delaTorre2013, Macaulay2013, Sanchez2013, Sanchez2014, Reid2014, Avsajanishvili2014, Alam2015a, Feix2015, Marulli2015, Hamaus2016}. In the work presented here, we follow \citet{Alam2015a} to study galaxies in the final data release \citep[DR12;][]{Alam2015b}. We use the MultiDark Patchy mock catalogs \citep[][]{Kitaura2014, Kitaura2015} and the BOSS DR12 galaxy dataset \citep[][]{Reid2015} in our analysis to recover information about the growth rate of the Universe at different redshifts. Our paper is a support paper; the final cosmological analysis is discussed in \citet{Acacia2016}. In addition to ours, there are other companion papers to \citet{Acacia2016} that analyze the full shape of the anisotropic two-point correlation function. \citet{Acacia2016} use the methodology presented in \citet{Sanchez2016b} to combine the results of all these companion papers into a final set of BOSS consensus constraints and explore their cosmological implications. In section~\ref{sec:BOSSDR12}, we provide a brief summary of three different full-shape analyses of galaxy clustering for the BOSS DR12 sample using different models \citep[][]{Beutler2016a, Grieb2016, Sanchez2016a}, which are support papers to \citet{Acacia2016}.
Other companion papers where the BAO scale is measured using the anisotropic two-point correlation function include \citet{Beutler2016b, Ross2016, Vargas2016}. Our paper is organized as follows. In section~\ref{sec:CLPT}, we review the Convolution Lagrangian Perturbation Theory and the Gaussian Streaming Model, which we use as the theoretical basis of our investigation. In section~\ref{sec:Data}, we sketch the details of the BOSS DR12 galaxy dataset and the mock galaxy catalogs that we use in our analysis. We discuss details of the approach adopted in our analysis in section~\ref{sec:Analysis}. Our results from the mocks and the galaxy data are discussed in section~\ref{sec:Results}. We conduct a critical analysis and present a summary of the obtained results for cosmological parameters in sections~\ref{sec:Discussion} and~\ref{sec:Summary}.
\label{sec:Discussion} Our measurements of $f \sigma_8$, $D_{\rm A}$ and $H$ using CLPT-GSRSD are consistent with the predictions of the $\Lambda$CDM model for all three redshift bins. Furthermore, our results for $f \sigma_8$, $D_{\rm A}$ and $H$ obtained from CLPT-GSRSD agree very well with measurements of the same parameters obtained from other approaches based on full-shape analyses of the SDSS-III BOSS DR12 combined sample \citep[][]{Beutler2016a, Grieb2016, Sanchez2016a}, even though the theoretical models used by the four analyses are very different. A comparison of the different results is presented in Figure~\ref{fig:Comb}. The black and red lines in the plot of $ f \sigma_8$ vs $z$ in Figure~\ref{fig:Comb} show the predictions of \citet{Planck2015} and \citet{Planck2016}, respectively. The difference between the predictions of Planck 2015 and Planck 2016 is no more than $0.5\,\sigma$. The agreement of the $f \sigma_8$ results at multiple redshifts obtained from different theoretical models can be considered a useful probe of the theory of gravity. It also holds the promise of letting us place model-independent constraints on other models of gravity. One of the challenges in RSD analysis is to use the smaller scales, as they have higher signal-to-noise by virtue of sampling large numbers of two-point modes. However, perturbation-theory-based models find it difficult to describe measurements at smaller scales due to non-linear clustering. The model used in our analysis has been validated using various approximate mocks and N-body mocks. We fit scales down to 25 $h^{-1}$Mpc in our final analysis. In order to understand the contribution from quasi-linear scales, and to further look for biases in our analysis, we have also run our analysis using only linear scales, $s>40\ h^{-1}$Mpc.
Figure~\ref{fig: CosmoMcLikelihood} shows the comparison between results obtained from fitting only linear scales and results obtained while including the quasi-linear scales. In Figure~\ref{fig: CosmoMcLikelihood}, we show $1 \sigma$ ($68 \%$) and $2 \sigma$ ($95 \%$) confidence intervals for $F_{\rm AP}$ and $f \sigma_8$ on the left and $D_{\rm V} / r_{\rm d}$ and $f \sigma_8$ on the right. The grey contours show constraints from larger scales ($s > 40 \ h^{-1}$Mpc), while the red contours depict constraints from all scales ($s > 25 \ h^{-1}$Mpc) used in our RSD analysis. The blue contours show the constraints from the Planck 2015 results. The top, middle, and bottom rows correspond to the three redshift bins, $viz.$ $z_{\rm eff}=0.38, \ 0.51$ and $0.61$, respectively. The inclusion of the quasi-linear scales improves the constraints without introducing any statistically significant shifts in the measurements. The improvement in $F_{\rm AP}$ and $f\sigma_8$ is larger than that in $D_{\rm V} / r_{\rm d}$, because most of the information in $D_{\rm V} / r_{\rm d}$ is contained in the BAO peak. Figure~\ref{fig:WorldCompilation} presents a compilation of $f\sigma_8$ measurements at different redshifts from different surveys and research studies. We expect our results to provide a robust test of the underlying theory of gravity at large distance scales. \begin{figure} \includegraphics[width=\columnwidth]{Plot_WorldCompilation/fs8.png} \caption{Measurements of $f\sigma_8$ from different surveys and research studies, spanning the redshift range $0.06 < z < 0.80$. The dark green and light green shaded regions represent the $1 \sigma$ and $2 \sigma$ spreads of the Planck $\Lambda$CDM prediction for the evolution of $f\sigma_8$ with redshift, respectively.} \label{fig:WorldCompilation} \end{figure} We decided not to push for measurements of the linear growth rate of structure at scales below the minimum fitting scale chosen here because of the unreliable behavior of the model at small distance scales. From the perspective of a comprehensive RSD analysis, it would be invaluable to estimate cosmological parameters at distance scales smaller than $20 \ h^{-1}$Mpc. However, an analysis of smaller cosmological scales with presently available theoretical resources presents a significant challenge: the theoretical models currently available to us are unable to model small distance scales. It is worth investigating whether this inability is due to the presence of the non-linear Fingers of God effect, and whether the contributions from non-linear clustering (which are distinct from the Fingers of God effect) are modeled accurately. We would like to investigate the efficacy of the Gaussian Streaming Model in explaining non-linear clustering at small scales. As an outlook for the future, we also plan to explore the feasibility of designing and using new estimators to probe scales smaller than $20 \ h^{-1}$Mpc and to test the effectiveness of CLPT-GSRSD at such scales. We have used CLPT-GSRSD to measure cosmological parameters, including the linear growth rate of structure $f$, from the SDSS III BOSS DR12 combined galaxy sample. The BOSS DR12 combined galaxy dataset includes over a million massive galaxies encompassing the redshift range $0.2<z<0.75$.
We divide this sample into three partially overlapping redshift bins with effective redshifts of 0.38, 0.51 and 0.61, and we work with multipole moments of the two-point galaxy correlation function in these three redshift bins. We use the measured and best-fit multipole moments to place constraints on cosmological parameters, including the linear growth rate of structure in the Universe. The fitting scale that we choose in this work is dictated by the performance and reliability of the MD-P mocks and of our theoretical model at small scales. Our measurements of the growth rate of structure, $f \sigma_8(z)$, the angular diameter distance, $D_A(z)$, and the Hubble expansion rate, $H(z)$, are in agreement with the results for the same parameters obtained by different groups \citep[][]{Beutler2016a, Grieb2016, Sanchez2016a}. Furthermore, our results are combined with other BAO \citep[][]{Beutler2016b, Ross2016, Vargas2016} and full-shape methods in a set of final consensus constraints in \citet{Acacia2016}. Our results are consistent with the predictions of the Planck $\Lambda$CDM model. We expect the results of our work to shed more light on the evolution of the linear growth rate of structure and to contribute towards lifting the ambiguity in the choice between dark energy and modified theories of gravity. The measurements we report in this work can contribute to constraining cosmological parameters in different models of gravity. Through our work, we also raise the question of whether it is possible to model non-linearities at small distance scales in the Universe.
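The Planck $\Lambda$CDM-GR prediction curve against which these measurements are compared can be sketched with the standard growth-rate approximation $f(z) \simeq \Omega_m(z)^{0.55}$ and a numerical linear growth factor. The parameter values below ($\Omega_m = 0.31$, $\sigma_8 = 0.82$) are illustrative Planck-like numbers, not the exact values used in our fits:

```python
import math

# Flat LambdaCDM sketch of f*sigma_8(z), assuming f = Omega_m(z)^0.55
# and the standard integral form of the linear growth factor D(a).
OMEGA_M, SIGMA_8 = 0.31, 0.82  # assumed Planck-like values

def hubble_E(a):
    """Dimensionless Hubble rate H(a)/H0."""
    return math.sqrt(OMEGA_M / a ** 3 + 1.0 - OMEGA_M)

def growth_D(a, n=2000):
    """Unnormalized growth factor, D(a) ~ E(a) * int_0^a da'/(a' E(a'))^3."""
    total, da = 0.0, a / n
    for i in range(1, n + 1):
        ap = (i - 0.5) * da  # midpoint rule; integrand -> 0 as a' -> 0
        total += da / (ap * hubble_E(ap)) ** 3
    return hubble_E(a) * total

def f_sigma8(z):
    a = 1.0 / (1.0 + z)
    omega_m_z = OMEGA_M / (a ** 3 * hubble_E(a) ** 2)
    return omega_m_z ** 0.55 * SIGMA_8 * growth_D(a) / growth_D(1.0)
```

With these assumed parameters the curve varies only gently over $0.2 < z < 0.75$, and the three measured $f\sigma_8$ values quoted above lie within roughly $1\sigma$ of it.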
We present spectra of 14 A-type supergiants in the metal-rich spiral galaxy M83. We derive stellar parameters and metallicities, and measure a spectroscopic distance modulus $\rm \mu = 28.47 \pm 0.10$ ($4.9 \pm 0.2$~Mpc), in agreement with other methods. We use the stellar characteristic metallicity of M83 and other systems to discuss a version of the galaxy mass-metallicity relation that is independent of the analysis of nebular emission lines and the associated systematic uncertainties. We reproduce the radial metallicity gradient of M83, which flattens at large radii, with a chemical evolution model, constraining gas inflow and outflow processes. We carry out a comparative analysis of the metallicities we derive from the stellar spectra and published \hii\ region line fluxes, utilizing both the direct, \te-based method and different strong-line abundance diagnostics. The direct abundances are in relatively good agreement with the stellar metallicities, once we apply a modest correction to the nebular oxygen abundance due to depletion onto dust. Popular empirically calibrated strong-line diagnostics tend to provide nebular abundances that underestimate the stellar metallicities above the solar value by $\sim$0.2 dex. This result could be related to difficulties in selecting calibration samples at high metallicity. The O3N2 method calibrated by Pettini and Pagel gives the best agreement with our stellar metallicities. We confirm that metal recombination lines yield nebular abundances that agree with the stellar abundances for high metallicity systems, but find evidence that in more metal-poor environments they tend to underestimate the stellar metallicities by a significant amount, opposite to the behavior of the direct method.
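As a quick sanity check, the spectroscopic distance modulus quoted above converts to a physical distance via $\mu = 5\log_{10}(d/10\,{\rm pc})$; a short sketch (ours, for illustration):

```python
# Convert a distance modulus mu into a distance in Mpc:
#   mu = 5 log10(d / 10 pc)  =>  d = 10**(mu/5 + 1) pc.

def modulus_to_mpc(mu):
    return 10.0 ** (mu / 5.0 + 1.0) / 1.0e6

# mu = 28.47 +/- 0.10 gives ~4.9 Mpc, matching the distance quoted in
# the abstract; the +/-0.10 mag uncertainty maps to roughly +/-0.2 Mpc.
```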
\label{sec:intro} Measuring extragalactic chemical abundances is the key to deciphering a wide variety of physical and evolutionary processes occurring inside and between galaxies. For star-forming systems the investigation of the present-day abundances of the interstellar medium (ISM), photoionized by young massive stars, holds a prominent place in modern astronomy, laying the foundations of our understanding of the chemical evolution of the Universe. Regrettably, despite decades of observational and theoretical work, we still lack an absolute abundance scale, which is necessary for a complete and coherent picture of how the chemical elements are processed and moved around by galactic flows. The gas-phase metallicity, identified with the abundance of oxygen, the most common heavy element in the ISM, can be derived from forbidden, collisionally excited lines (\cel s) present in \hii\ region optical spectra. Such an evaluation depends critically on the knowledge of the physical conditions of the gas, in particular the electron temperature \te, because of the strong temperature sensitivity of the metal line emissivities (see the monograph by \citealt{Stasinska:2012} for a review). In the so-called {\em direct} method, \te\ is obtained by the classical technique (\citealt{Menzel:1941}) that utilizes \cel s originating from transitions involving different energy levels of the same ions. The intensity ratio of the auroral \oiii\lin4363 line to the nebular \oiii\llin4959,\,5007 lines can be used to measure the temperature of the high-excitation zone, especially at low metallicities, where the weak auroral lines are more easily observed. The \nii\lin 5755/\lin6584 ratio is generally used for the low-excitation zone.
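The temperature sensitivity that underpins the direct method can be illustrated with the textbook low-density approximation for the [O~III] line ratio (e.g. Osterbrock \& Ferland), $j(\lambda4959+\lambda5007)/j(\lambda4363) \simeq 7.90\,\exp(3.29\times10^{4}/T_e)$; the sketch below is purely pedagogical and is not the analysis pipeline of the papers discussed here:

```python
import math

# Electron temperature from the [O III] (4959+5007)/4363 intensity ratio
# in the low-density limit of the textbook five-level-atom formula:
#   ratio ~= 7.90 * exp(3.29e4 / T_e).

def te_oiii(nebular_to_auroral_ratio):
    """T_e in K; neglects the collisional de-excitation (density) term."""
    return 3.29e4 / math.log(nebular_to_auroral_ratio / 7.90)

# A larger ratio (weaker auroral 4363 line, as at high metallicity where
# cooling is efficient) implies a lower inferred temperature -- which is
# why the auroral line becomes hard to detect near solar metallicity.
```

Inverting the exponential makes the strong \te\ dependence of the inferred abundances explicit: a modest bias in the measured ratio translates into a sizeable bias in temperature, and hence in metallicity.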
Around the solar metallicity and above, as the increased gas cooling quenches the auroral lines, statistical methods, first introduced by \citet{Pagel:1979} and \citet{Alloin:1979} and relying on easily observed strong emission lines, complement or supplant altogether the use of the direct technique. As is well known, different strong-line diagnostics and calibration methodologies (\eg\ photoionization models \vs\ empirical \te\ derivations) yield substantial systematic offsets in the inferred gas metallicities (\citealt{Kennicutt:2003a, Moustakas:2010, Lopez-Sanchez:2012}), reaching values up to 0.7~dex (\citealt{Kewley:2008}). Methods calibrated from \te\ measurements tend to occupy the bottom of the abundance scale. Small-scale departures from homogeneity in the thermal (\citealt{Peimbert:1967}) and chemical abundance (\citealt{Tsamis:2003}) structure of photoionized nebulae, combined with the pronounced temperature dependence of \cel s, can bias the results obtained from the direct method to low values. A similar effect can also originate at high metallicity from large-scale temperature gradients (\citealt{Stasinska:2005}). Estimates of temperature fluctuations, parameterized by the mean square value $t^2$ (\citealt{Peimbert:1967}), indicate that optical \cel s underestimate the oxygen abundances by 0.2--0.3 dex (\citealt{Esteban:2004, Peimbert:2005}). This effect is usually regarded as responsible for the systematic oxygen abundance offset of the same magnitude found between measurements from the direct method and the \oiir\ recombination lines (\rl s; \citealt{Peimbert:1993a, Garcia-Rojas:2007, Esteban:2009}). Discrepancies of comparable size are also obtained when the \te-based nebular abundances are compared to a theoretical analysis of the emission-line spectra (\citealt{Blanc:2015, Vale-Asari:2016}). Such differences can also result from a non-thermal distribution of electron energies (\citealt{Nicholls:2012}).
For the widely-used direct method the crux of the matter remains the fact that, in the presence of these effects, the \te\ values we measure from optical \cel s tend to overestimate the nebular temperatures, leading to systematically underestimated gas-phase metallicities. While the situation described above seems to spell doom for the direct method and its ability to produce correct nebular abundances, at least at high metallicity, there are various considerations that warrant further investigation involving \te-based abundances. These include the existence of still poorly understood systematic uncertainties in photoionization models (\citealt{Blanc:2015}), the lack of clearly identified causes for temperature fluctuations in ionized nebulae (although several processes have been proposed, see \citealt{Peimbert:2006a}), and the possibility that recombination lines overestimate gas-phase oxygen abundances (\citealt{Ercolano:2007, Stasinska:2007a}). We also note that theoretical and observational considerations argue against the $\kappa$ electron velocity distribution (\citealt{Nicholls:2012, Nicholls:2013}) as a solution for the abundance discrepancies observed in photoionized nebulae (\citealt{Zhang:2016, Ferland:2016}). \smallskip In light of these difficulties, a complementary approach for the investigation of present-day abundances in galaxies is the analysis of the surface chemical composition of early-type (OBA) stars, which by virtue of their young ages share the same initial chemical composition as their parent gas clouds and associated \hii\ regions. This is true in particular for elements, such as oxygen and iron, whose surface abundances are not significantly altered by evolutionary processes during most of the stellar lifetimes.
Oxygen abundance comparisons between nearby B stars and \hii\ regions, as in the well-studied case of the Orion nebula (\citealt{Simon-Diaz:2011}), offer support for the nebular abundance scale defined by \rl s rather than \cel s. A salient consideration is that the systematic chemical abundance uncertainty for B- and A-type stars is on the order of 0.1~dex (\citealt{Przybilla:2006, Nieva:2012}), much smaller than for the analysis of nebular spectra. For more than a decade our collaboration has focused on a project of stellar spectroscopy in nearby star-forming galaxies, with distances up to a few Mpc, selected for a long-term investigation of the distance scale (\citealt{Gieren:2005}), in order to measure the metal content of bright blue supergiant stars and their distances (see \citealt{Kudritzki:2016} and \citealt{Urbaneja:2016} for the most recent results and references). In comparing stellar with nebular abundances we found a varying degree of agreement, ranging from excellent (\eg\ in the case of NGC~300, \citealt{Bresolin:2009a}) to modest (with offsets $\sim$0.2 dex, as in the case of NGC~3109, \citealt{Hosek:2014}). There are also indications that especially for systems of relatively high metallicity, such as M31 (\citealt{Zurita:2012}) and the solar neighbourhood (\citealt{Simon-Diaz:2011, Garcia-Rojas:2014}), the \te\ method underestimates the stellar abundances. \smallskip In this paper we analyze new stellar spectra of blue supergiant stars obtained in the spiral galaxy M83 (NGC~5236), at a distance of 4.9~Mpc (\citealt[$1'' = 23.8$~pc]{Jacobs:2009}). Our main motivation is to extend our stellar work to a galactic environment characterized by a high level of chemical enrichment, \ie~super-solar in the central regions, as already indicated by work on \hii\ regions (\citealt{Bresolin:2002, Bresolin:2005}) and a single super star cluster (\citealt{Gazak:2014}). 
This is the metallicity regime where the systematic biases of the direct method should be more evident. We thus compare stellar and nebular metallicities using the \te\ method and a variety of strong line diagnostics, aiming to clarify how abundances inferred from the latter relate to the metallicities measured in young stars. In a nutshell, we find that \te-based abundances fare reasonably well in comparison with stellar metallicities across a wide range of abundances, but nevertheless that existing empirical calibrations of strong line methods can significantly underestimate the stellar abundances in the high-metallicity regime. We describe our observational material and the data reduction in Sect.~2, and the spectral analysis in Sect.~3. We derive a spectroscopic distance to M83 in Sect.~4. In Sect.~5 the stellar metallicities are used to discuss the mass-metallicity relation for nearby galaxies and to compare with a variety of nebular abundance diagnostics. We develop a chemical evolution model to reproduce the radial metallicity distribution in M83 in Sect.~6. In our discussion in Sect.~7 we focus on the comparison of metallicities derived from the direct method, the blue supergiants and \rl s in a number of nearby galaxies, based on results published in the literature. In Sect.~8 we summarize our main conclusions.
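As a quick consistency check on the numbers quoted in the abstract, the spectroscopic distance modulus converts to a linear distance through the standard relation (with the uncertainty propagated to first order):

```latex
d = 10^{\,1 + \mu/5}~\mathrm{pc}, \qquad
\sigma_d = \frac{\ln 10}{5}\, d\, \sigma_\mu ;
\qquad
\mu = 28.47 \pm 0.10
\;\Rightarrow\;
d \simeq 4.9 \pm 0.2~\mathrm{Mpc}.
```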
\label{Sec:discussion} \subsection{Strong-line methods} The comparison we carried out in Sect.~\ref{sec:hii} reveals that most of the nebular diagnostics we considered yield abundances that do not agree with the metallicities of the blue supergiant stars in M83. This appears to be true for both empirically- and theoretically-calibrated diagnostics. The potential perils of systematic uncertainties, although difficult to estimate, should be kept in mind. For example, abundance offsets could result from a significant mismatch in physical properties between the nebulae in M83 and the calibrating samples or models used for the abundance diagnostics. In this regard, we do note that the nebular N/O ratio in M83 appears to be higher than average (\citealt{Bresolin:2005}), although the uncertainties are large, and this could affect the abundances derived from diagnostics involving the nitrogen lines. However, a higher N/O ratio would lead to overestimating the nebular O/H ratio (\citealt{Perez-Montero:2009a}), opposite to what the comparison with the stellar metallicities suggests. For systematics concerning stellar abundances, we refer to \citet{Przybilla:2006} and \citet{Nieva:2012} and references therein. Panels a--d in Fig.~\ref{fig:hii} indicate that at the highest metallicities considered in this work on M83 (nearly 2$\times$ solar) some of the theoretical calibrations can produce nebular abundances in good agreement with the stellar metallicities we measured, in particular the O3N2 method (panel c), whose calibration by \citet{Pettini:2004} at the high-metallicity end relies on photoionization models (this holds also after adding 0.1 dex to the nebular abundances to account for oxygen depletion onto dust grains). On the other hand, panels e--h show that empirical, \te-based calibrations of strong-line methods yield results that, approximately above the solar O/H value, lie $\sim$0.2~dex below the stellar metallicities.
At the same time, some of the auroral line-based nebular abundances appear to agree with the stellar metallicities even very close to the center of M83, where the metallicity is highest. \medskip At face value, and considering the blue supergiant surface chemical abundances to be representative of the `true' metallicity of the young populations of M83, these results suggest the existence of a problem with the empirical calibrations, \ie\ that they progressively underestimate O/H with increasing metallicity, by $\sim$0.1--0.2 dex around 2$\times$ the solar value (correcting for 0.1 dex due to dust depletion). If in the following we assume this to be correct, the discrepancy could result from the well-known difficulty the empirical methods face in establishing calibrating samples of high-metallicity \hii\ regions, which rely on the detection of faint auroral lines and on somewhat uncertain relationships used to infer, for example, the temperature of the \oiii-emitting nebular zone from the temperature measured for the \siii- or \nii-emitting zones (\eg\ \citealt{Garnett:1992}). It is thus possible that the empirical calibrations are affected by a selection bias, whereby the \hii\ regions with the strongest auroral lines (corresponding to higher gas electron temperatures and lower metallicities) are preferentially measured at high oxygen abundances. While a few high-metallicity \hii\ regions could still be providing reliable abundances, as seen also in the case of M83, more generally the calibrating samples could be biased to low abundances. A completely different interpretation is that we might be detecting the bias predicted by \citet{Stasinska:2005} to occur due to \hii\ region temperature stratification. According to this work, the direct method could underestimate the abundance by 0.2~dex or more above the solar value.
Nebular abundances that are systematically higher than those derived from the direct method are also obtained from the use of recombination lines (as formalized by the presence of an abundance discrepancy factor ADF, \citealt{Garcia-Rojas:2007}), by an amount that is comparable to the difference we observe in the central regions of M83. The most popular interpretation for this discrepancy is given in terms of temperature fluctuations (\citealt{Peimbert:1967, Peimbert:2013}), but other explanations have also been proposed, such as the presence of metal-rich inclusions (\citealt{Tsamis:2003, Stasinska:2007a}). Alternatively, deviations from the thermal electron velocity distribution commonly assumed for ionized nebulae have been invoked (\citealt{Nicholls:2012, Nicholls:2013}). \medskip \subsubsection{A recommended strong-line method?} Returning to the initial motivation of our work, we try to identify which, if any, of the strong-line methods we looked at can be recommended for extragalactic emission-line abundance studies in order to obtain metallicities that are in agreement, in an absolute sense, with current and published results based on stellar spectroscopy. We emphasize again that such an approach is encouraged by the relatively small systematic uncertainties in the stellar abundances, and the good agreement for the metallicities determined independently for massive hot and cool stars, from analyses carried out in different wavelength regimes (\citealt{Gazak:2014, Gazak:2015, Davies:2015}), which boosts our confidence in the metallicity scale defined by massive stars. From our discussion in Sect.~\ref{sec:hii} the O3N2 diagnostic calibrated by \citet{Pettini:2004} stands out as the only one providing \hii\ region abundances that are consistent with our stellar metallicities in M83, which are all but one above the solar value.
In the similarly high metallicity (\eo\ $>$ 8.6) environment of the galaxy M81 we reach the same conclusion, analyzing the supergiant data from \citet{Kudritzki:2012} and the nebular emission fluxes from \citet{Patterson:2012} and \citet{Arellano-Cordova:2016}. Keeping in mind the statistical nature of strong-line diagnostics (\ie\ the fact that they can fail on individual objects) we can extend this statement to include lower metallicities by looking, for example, at our study of NGC~300 (\citealt{Bresolin:2009a}). We find that in this case (\eo\ $<$ 8.6) the radial trend of the stellar metallicities is equally well reproduced by O3N2 (PP4), the ONS and the $R$ methods, if a modest dust depletion factor is introduced. In summary, the use of O3N2 (PP4) for extragalactic \hii\ regions provides \eo\ values that are consistent with the metallicity scale defined by our stellar work across a wide metallicity range, 8.1 $\lesssim$ \eo\ $\lesssim$ 9. \floattable \begin{deluxetable}{lccccccccc} \tablecolumns{10} \tablewidth{0pt} \tablecaption{Abundance data for objects with stellar and nebular abundance information.\label{table:celrl}} \tablehead{ \colhead{Object} & \multicolumn{3}{c}{$\rm\epsilon(O)$: R\,=\,0} & \multicolumn{3}{c}{$\rm\epsilon(O)$: R\,=\,0.4~\rtf} & \multicolumn{3}{c}{References}\\[0.5mm] \colhead{} & \colhead{stars} & \multicolumn{2}{c}{\hii\ regions} & \colhead{stars} & \multicolumn{2}{c}{\hii\ regions} & \colhead{stars} & \colhead{\cel} & \colhead{\rl}\\[0.5mm] \colhead{} & \colhead{} & \colhead{\cel} & \colhead{\rl} & \colhead{} & \colhead{\cel} & \colhead{\rl} & \colhead{} & \colhead{} & \colhead{} } \startdata \\[-4mm] Sextans A & $7.70 \pm 0.07$ & $7.49 \pm 0.06$ & \nodata & \nodata & \nodata & \nodata & K04 & K05 & \\ WLM & $7.82 \pm 0.06$ & $7.82 \pm 0.09$ & \nodata & \nodata & \nodata & \nodata & U08 & L05 & \\ IC~1613 & $7.90 \pm 0.08$ & $7.78 \pm 0.07$ & \nodata & \nodata & \nodata & \nodata & B07 & B07 & \\ NGC~3109 & $8.02 \pm 0.13$ & $7.81 
\pm 0.08$ & \nodata & \nodata & \nodata & \nodata & H14 & P07 & \\ ~~~~~'' & $7.76 \pm 0.07$ & $7.81 \pm 0.08$ & \nodata & \nodata & \nodata & \nodata & E07 & P07 & \\ NGC~6822 & $8.08 \pm 0.21$ & $8.14 \pm 0.08$ & $8.37 \pm 0.09$ & \nodata & \nodata & \nodata & P15 & L06 & P05 \\ SMC & $8.06 \pm 0.10$ & $8.05 \pm 0.09$ & $8.24 \pm 0.16$ & \nodata & \nodata & \nodata & H07 & B07 & PG12 \\ LMC & $8.33 \pm 0.08$ & $8.40 \pm 0.10$ & $8.54 \pm 0.05$ & \nodata & \nodata & \nodata & H07 & B07 & P03 \\ NGC~55 & $8.32 \pm 0.06$ & $8.21 \pm 0.10$ & \nodata & \nodata & \nodata & \nodata & K16 & T03 & \\ NGC~300 & $8.59 \pm 0.05$ & $8.59 \pm 0.02$ & $8.71 \pm 0.10$ & $8.42 \pm 0.06$ & $8.43 \pm 0.02$ & $8.65 \pm 0.12$ & K08 & B09 & T16 \\ M33 & $8.78 \pm 0.04$ & $8.51 \pm 0.04$ & $8.76 \pm 0.07$ & $8.49 \pm 0.05$ & $8.36 \pm 0.05$ & $8.63 \pm 0.09$ & U09 & B11 & T16 \\ M31 & $8.99 \pm 0.10$ & $8.74 \pm 0.20$ & $8.94 \pm 0.03$ & $8.74 \pm 0.10$ & $8.51 \pm 0.21$ & $8.69 \pm 0.03$ & Z12 & Z12 & E09 \\ M81 & $8.98 \pm 0.06$ & $8.86 \pm 0.13$ & \nodata & $8.81 \pm 0.07$ & $8.72 \pm 0.13$ & \nodata & K12 & P12 & \\ M42 & $8.74 \pm 0.04$ & $8.53 \pm 0.01$ & $8.65 \pm 0.03$ & \nodata & \nodata & \nodata & S11 & E04 & S11 \\ M83 & $9.04 \pm 0.04$ & $8.90 \pm 0.19$ & \nodata & $8.78 \pm 0.07$ & $8.73 \pm 0.27$ & \nodata & This & work & \\ [1mm] \enddata \tablerefs{{\sc Stars:} K04: \citet{Kaufer:2004}; U08: \citet{Urbaneja:2008}; B07: \citet{Bresolin:2007a}; H14: \citet{Hosek:2014}; E07: \citet{Evans:2007}; P15: \citet{Patrick:2015}; H07: \citet{Hunter:2007}; K16: \citet{Kudritzki:2016}; K08: \citet{Kudritzki:2008}; U09: \citet{U:2009}; Z12: \citet{Zurita:2012}; K12: \citet{Kudritzki:2012}; S11: \citet{Simon-Diaz:2011}. 
~---{\cel:} K05: \citet{Kniazev:2005}; L05: \citet{Lee:2005}; B07: \citet{Bresolin:2007a}; P07: \citet{Pena:2007}; L06: \citet{Lee:2006}; T03: \citet{Tullmann:2003}; B09: \citet{Bresolin:2009a}; B11: \citet{Bresolin:2011a}; Z12: \citet{Zurita:2012}; P12: \citet{Patterson:2012}; E04: \citet{Esteban:2004}. ~---{\rl:} P05: \citet{Peimbert:2005}; PG12: \citet{Pena-Guerrero:2012}; P03: \citet{Peimbert:2003}; T16: \citet{Toribio-San-Cipriano:2016}; E09: \citet{Esteban:2009}; S11: \citet{Simon-Diaz:2011}.} \tablecomments{All \cel-based abundances redetermined with consistent and updated atomic data (see text).} \end{deluxetable} \subsection{Stellar \vs\ nebular abundances: auroral and recombination lines} Despite the complexity of the physics of ionized nebulae, which hinders the resolution of issues related to their temperature and density structure, and in view of the urgency to understand how to select the correct absolute abundance scale, it is worthwhile to test empirically whether the difference between stellar and nebular direct abundances remains constant with metallicity, as is the case for the difference obtained using \cel s and \rl s ($\sim$0.2 dex, \citealt{Garcia-Rojas:2007}). For this purpose, we have assembled published abundance data for young stars and \hii\ regions in nearby galaxies and the Milky Way, as summarized in Table~\ref{table:celrl}. The nebular oxygen abundances refer to \cel-based determinations and, for seven objects, \rl-based results. The latter refer mostly to single \hii\ regions in different galaxies, while \cel\ measurements are typically available for several \hii\ regions. For irregular galaxies, due to their spatially homogeneous abundance distribution or their flat/very shallow metallicity gradients, we report mean abundance values, while for spirals we use the available radial gradient information to obtain the metallicity both at the center and at 0.4~\rtf.
For several of the galaxies reported in Table~\ref{table:celrl} we used the data compilation from \citet{Bresolin:2011}, who re-analyzed published emission line fluxes in order to homogenize the derived abundances, using a set of atomic data consistent with the work on NGC~300 by \citet{Bresolin:2009a}. For the present work we re-determined all the \te-based abundances using IRAF's {\em nebular} package, with the atomic parameters used in \citet[Table~5]{Bresolin:2009a} but updating the \ion{O}{3} collision strengths from \citet{Palay:2012}, and re-deriving radial gradients when necessary. The updated \ion{O}{3} collision strengths led to an increase in \eo\ of typically 0.02--0.04~dex. It is worth pointing out that our comparison is mostly of a statistical nature, because the ideal situation in which stellar and nebular abundances are simultaneously available for young stars and their parent gas cloud, as in the case of the Orion nebula in the Milky Way, is still not realized with current data in extragalactic systems. \begin{figure*} \centering \includegraphics[width=1.7\columnwidth]{f11}\medskip \caption{Difference in metallicity between young stars and ionized gas for a sample extracted from the literature and the M83 data presented here. We have added 0.1 dex to the gas metallicities reported in Table~\ref{table:celrl} to account for dust depletion. For spiral galaxies the metallicities correspond to the central values. We use blue circles and orange squares for nebular oxygen abundances determined from the direct method and from recombination lines, respectively. The adopted solar O/H value is shown by the vertical line.}\label{fig:hiistars} \end{figure*} \medskip In Fig.~\ref{fig:hiistars} we show the difference between stellar and nebular abundances as a function of stellar metallicity. We added 0.1~dex to the \hii\ region abundances included in Table~\ref{table:celrl} to account for the effect of depletion onto dust grains.
For spiral galaxies we use the central metallicity values (our main conclusions do not change if we use the characteristic metallicity at 0.4~\rtf). The blue dots refer to the quantity $\rm \Delta\epsilon(O)_{CEL}$, the (stars$-$gas) metallicity difference, using direct abundances for \hii\ regions. The orange open square symbols are used for the corresponding quantity $\rm \Delta\epsilon(O)_{RL}$, using the nebular \rl s instead to estimate the gaseous abundances. In order to support our interpretation, we comment on the following objects: \let\origdescription\description \renewenvironment{description}{ \setlength{\leftmargini}{0em} \origdescription \setlength{\itemindent}{0em} } \begin{description} \item[\rm Sextans~A] The spectral data we used for the nebular abundance of three \hii\ regions, from \citet{Kniazev:2005}, do not cover the \oii\lin3727 line, and the resulting $\rm O^+/H^+$ abundance relies on the \oii\llin7320--7330 auroral lines instead; as such, we suspect that it is subject to a higher level of uncertainty than reported (see \citealt{Kennicutt:2003}). \item[\rm NGC~3109] There is a discrepancy between the metallicities of B- and A-type supergiants from \citet{Evans:2007} and \citet{Hosek:2014}, respectively. We use both measurements in Fig.~\ref{fig:hiistars}, using the stellar type (B or A) as a subscript to the galaxy name. \item[\rm NGC~6822] We use the mean metallicity of the 11 red supergiants studied by \citet{Patrick:2015}, with a $-0.086$~dex correction to account for the difference in the adopted solar metallicity value (see Sect.~\ref{sec:hii} with respect to the MARCS model atmospheres used for red supergiants). Although we are not using blue supergiants for this galaxy, we point out that red supergiants have been shown by \citet{Gazak:2015} to provide chemical abundances that are in excellent agreement with blue supergiants.
\item[\rm SMC] We use the \rl\ measurements from \citet{Pena-Guerrero:2012} for the two \hii\ regions NGC~456 and NGC~460, taking the weighted average of the published, gas-phase results. We do not include the study of N66 by \citet{Tsamis:2003}, which is highly discrepant relative to the stellar and \cel-based metallicities, with \eo\,=\,8.47, but without an estimate of the uncertainty. \item[\rm M31] The abundance gradient in the Andromeda Galaxy is still quite uncertain. For the estimation of the quantities in Table~\ref{table:celrl} we relied on the gradient determined from \cel\ by \citet{Zurita:2012}, and used the same slope to estimate the values for \rl s and stars. Based on \citet{Zurita:2012} and \citet{Esteban:2009} we adopted $\rm \Delta\epsilon(O)$ values relative to the \cel s of +0.25~dex and +0.2~dex for stars and \rl s, respectively. \item[\rm M42] We include data for the Orion nebula and the Orion OB1 stellar association in the Milky Way. The abundance results for this object are consistent with other measurements of the chemical abundances in the local neighbourhood (\eg\ \citealt{Nieva:2012, Garcia-Rojas:2014}), not included in the figure for clarity. We re-derived the \cel-based nebular oxygen abundance using the data by \citet{Esteban:2004}, and following the same procedure as in \citet{Simon-Diaz:2011}, \ie\ using the \nii\ temperature for the $\rm O^+$ region, and the electron density from the \oii\ 3726/3729~\AA\ line ratio. The effect of the updated \ion{O}{3} collision strengths on the final oxygen abundance is minor ($\sim 0.01$~dex). \item[\rm M83] As we mentioned earlier, the auroral line-based gradient, which we used to estimate the central abundance, is quite uncertain. Nevertheless, the central abundance that we adopt is close to the value we measure for the central \hii\ region. \end{description} \bigskip Focusing on $\rm \Delta\epsilon(O)_{CEL}$ first, we note that this quantity appears to be largely independent of metallicity.
Fig.~\ref{fig:hiistars} suggests that the direct method yields metallicities that could lie, on average, below the stellar ones at high metallicity, but this does not seem to hold for all objects. We divided (arbitrarily) the sample at \eo\,=\,8.7 and computed weighted means for the different metallicity ranges, as summarized below: \smallskip \begin{tabular}{l c} Range & $\rm \Delta\epsilon(O)_{CEL}$ -- Weighted mean \\[2pt] $\rm \epsilon(O) < 8.7$ & $-0.05 \pm 0.09$ \\ $\rm \epsilon(O) > 8.7$ & $+0.12 \pm 0.04$ \\ All & $+0.03 \pm 0.11$ \\ \end{tabular} \medskip The difference between high and low metallicity is marginally significant ($\sim 1 \sigma$). The point remains that for some objects with small observational errors (M33, M42 and other Galactic objects not included in Fig.~\ref{fig:hiistars}, \eg\ the Cocoon Nebula, \citealt{Garcia-Rojas:2014}) the direct method underestimates the stellar metallicity by $\sim$0.1~dex, even considering the dust depletion correction. Turning to $\rm \Delta\epsilon(O)_{RL}$, as shown by the seven open square symbols in Fig.~\ref{fig:hiistars}, we notice a somewhat opposite behavior. The agreement with the stellar metallicities is excellent in the high-abundance regime, a result that has been pointed out already by several authors (\eg\ \citealt{Simon-Diaz:2011}). At lower metallicities, however, the \rl-based nebular abundances tend to diverge from the stellar ones. The mean offset for the four data points at \eo\,$<8.7$ is $-0.28 \pm 0.05$, after the 0.1~dex correction for dust depletion. To our knowledge, this is the first time that this effect has been identified or emphasized.
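The weighted means quoted above are standard inverse-variance averages; a minimal sketch of the computation is given below (the input offsets and errors are made up for illustration and are not the values from our sample).

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its formal uncertainty:
    w_i = 1/sigma_i^2, mean = sum(w*x)/sum(w), err = 1/sqrt(sum(w))."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, 1.0 / math.sqrt(wsum)

# Made-up (stars - gas) offsets and their errors, for illustration only:
m, s = weighted_mean([-0.05, 0.10, 0.02], [0.09, 0.05, 0.07])
print(f"{m:+.2f} +/- {s:.2f}")  # → +0.05 +/- 0.04
```

Note that the formal uncertainty above only reflects the individual measurement errors; if the object-to-object scatter dominates, the quoted uncertainty of the mean should be estimated from that scatter instead.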
We examine here briefly the four data points in Fig.~\ref{fig:hiistars} that indicate a significant difference between stellar and \rl-based metallicities.\\ \noindent SMC and LMC: The stellar metallicities and mean \eo\ values of the Small and Large Magellanic Clouds are known to quite good precision from the VLT-FLAMES survey (\citealt{Hunter:2007}), in which the chemical abundances of B-type stars are obtained with the same non-LTE {\sc fastwind} code (\citealt{Puls:2005}) utilized for other objects included in Fig.~\ref{fig:hiistars} (\eg\ M42, NGC~300, WLM, NGC~3109), which ensures some level of homogeneity in our analysis. We also note that for the LMC the \citet{Hunter:2007} metallicity agrees very well with the most recent study of 90 blue supergiants by \citet{Urbaneja:2016}. The \rl s have been studied in the two SMC nebulae mentioned earlier and in 30~Dor for the LMC. \\ \noindent NGC~300: \citet{Bresolin:2009a} found very good agreement between the absolute abundances determined from A and B supergiants, which rely upon different diagnostic lines as well as stellar models. Moreover, \citet{Urbaneja:2016} demonstrated the absence of systematic effects when the spectral analysis is carried out from spectra of high (as used in the LMC/SMC) or medium (as used in NGC~300) resolution.
The $\rm \Delta\epsilon(O)_{RL}$ value we used for this galaxy does not depend on the use of central abundances only, as can be seen from the work on the metal \rl s by \citet{Toribio-San-Cipriano:2016}.\\ \noindent NGC~6822: We have used the recent metallicities for 11 red supergiants from \citet{Patrick:2015}, which are in good agreement with the overall metallicity obtained from B-type supergiants by \citet{Muschielok:1999} and from two A-type supergiants by \citet{Venn:2001}.\\ \smallskip We note that the mean difference between \rl- and \cel-based abundances is 0.16\,$\pm$\,0.05 for the seven objects included in Fig.~\ref{fig:hiistars}, consistent with the value for the oxygen ADF\,=\,0.26\,$\pm$\,0.09 measured by \citet{Esteban:2009} for a sample of extragalactic \hii\ regions and with other determinations in the Milky Way (\eg\ \citealt{Garcia-Rojas:2007}). \smallskip An in-depth discussion of our results within the context of the non-equilibrium $\kappa$ electron energy distribution lies outside the scope of this paper. However, it is worth recalling that the assumption of a $\kappa$ distribution has a profound impact on the abundances derived from \cel s, due to the strong sensitivity of these lines to the gas temperature (see \citealt{Nicholls:2012, Nicholls:2013} for details). In fact, the assumption of even a moderate deviation from the Maxwellian energy distribution can explain the ADF observed in Galactic and extragalactic \hii\ regions, and similarly the abundance offset between theoretically-calibrated strong line abundance determination methods and the direct method. We do note however that the photoionization models presented by \citet[see their Fig.~32]{Dopita:2013}, calculated for $\kappa = 20$, predict that this offset, which is roughly constant with metallicity below the solar value, increases rapidly for higher metallicities.
\citet[Fig.~9]{Blanc:2015} also illustrated a difference between \rl\ abundances and those derived from photoionization models that increases with metallicity. We suggest that this effect, which appears to be on the order of 0.2 dex, mirrors the behavior of $\rm \Delta\epsilon(O)_{RL}$ seen in Fig.~\ref{fig:hiistars}.
{ We report the spectroscopic confirmation of 22 new multiply lensed sources behind the {\it Hubble Frontier Field} (HFF) galaxy cluster MACS~J0416.1$-$2403 (MACS~0416), using archival data from the Multi Unit Spectroscopic Explorer (MUSE) on the VLT. Combining with previous spectroscopic measurements of 15 other multiply imaged sources, we have obtained a sample of 102 secure multiple images with measured redshifts, the largest to date in a single strong lensing system. The newly confirmed sources are largely low-luminosity Lyman-$\alpha$ emitters with redshift in the range $[3.08-6.15]$. With such a large number of secure constraints, and a significantly improved sample of galaxy members in the cluster core, we have improved our previous strong lensing model and obtained a robust determination of the projected total mass distribution of MACS 0416. We find evidence of three cored dark-matter halos, adding to the known complexity of this merging system. The total mass density profile, as well as the sub-halo population, are found to be in good agreement with previous works. We update and make public the redshift catalog of MACS 0416 from our previous spectroscopic campaign with the new MUSE redshifts. We also release lensing maps (convergence, shear, magnification) in the standard HFF format. }
The use of gravitational lensing by galaxy clusters has intensified in recent years and has led to significant progress in our understanding of the mass distribution in clusters, as well as to the discovery of some of the most distant galaxies \citep[e.g.,][]{2013ApJ...762...32C, 2014ApJ...795..126B} thanks to the magnification provided by selected cluster lenses. Key to this progress has been the combination of homogeneous multi-band surveys of a sizeable number of massive clusters with the Hubble Space Telescope (HST), primarily with the Cluster Lensing And Supernova survey with Hubble \citep[CLASH,][]{2012ApJS..199...25P}, with wide-field imaging \citep[e.g.,][]{2014ApJ...795..163U, 2016ApJ...821..116U} and spectroscopic follow-up work from the ground and space. Studies with HST have inevitably focused on the cluster cores, where a variety of strong lensing models have been developed to cope with the increasing data quality and to deliver the precision needed to determine the physical properties of background lensed galaxies (such as stellar masses, sizes and star formation rates), which critically depend on the magnification measurement across the cluster cores. Following the CLASH project, which has provided a panchromatic, relatively shallow imaging of 25 massive clusters, the Hubble Frontier Fields (HFF) program \citep{2016arXiv160506567L} has recently targeted six clusters (three in common with CLASH) to much greater depth ($\sim2$ mag) in seven optical and near-IR bands with the ACS and WFC3 cameras.
This has provided a very rich legacy data set to investigate the best methodologies to infer mass distributions of the inner ($R\lesssim 300$ kpc) regions of galaxy clusters, and is stimulating a transition to precision strong lensing modeling with parametric \citep[e.g.,][]{2014MNRAS.444..268R, 2015MNRAS.452.1437J, M2016,2016ApJ...819..114K} and non-parametric lens models \citep[e.g.,][]{2014ApJ...797...98L, 2016MNRAS.459.3447D, 2015ApJ...811...29W, 2016arXiv160300505H}. Spectroscopic follow-up information on a large number of multiply lensed sources is critical to achieving high-precision cluster mass reconstruction through strong lensing modeling. Early works relied heavily on photometric redshifts or color information to identify multiple images. While this method has been shown to be adequate for determining robust mass density profiles \citep[e.g.,][]{2015ApJ...801...44Z}, it is prone to systematics due to possible misidentifications of multiple images and degeneracies between angular diameter distances and the cluster mass distribution. This typically leads to root-mean-square offsets ($\Delta_{\rm rms}$) between the observed and lens model-predicted positions of $\Delta_{\rm rms} \gtrsim 1\arcsec$ \citep[see][for the CLASH sample]{2015ApJ...801...44Z}. Using extensive redshift measurements for both cluster member galaxies and background lensed galaxies, high-fidelity mass maps can be obtained with $\Delta_{\rm rms}\approx 0\arcsec.3$, as shown, for example, in the studies of the HFF clusters MACS~J0416.1$-$2403 (hereafter MACS~0416) \citep[][hereafter Gr15]{2015ApJ...800...38G} and MACS~J1149.5$+$2223, with the successful prediction of the lensed supernova Refsdal \citep{2016ApJ...817...60T, 2016ApJ...822...78G}. 
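The $\Delta_{\rm rms}$ figure of merit quoted above is simply the root-mean-square of the angular offsets between observed and model-predicted image positions. A minimal sketch, with made-up, purely illustrative coordinates (not real image positions from this cluster), might look like:

```python
import math

def delta_rms(observed, predicted):
    """Root-mean-square angular offset (arcsec) between observed and
    model-predicted multiple-image positions, both given as (x, y)
    pairs in arcsec on a common tangent-plane grid."""
    sq = [(xo - xp) ** 2 + (yo - yp) ** 2
          for (xo, yo), (xp, yp) in zip(observed, predicted)]
    return math.sqrt(sum(sq) / len(sq))

# Illustrative (hypothetical) positions: each image offset by 0.3" in x
obs = [(10.0, 5.0), (-3.2, 7.1), (0.5, -6.4)]
pred = [(10.3, 5.0), (-2.9, 7.1), (0.8, -6.4)]
print(delta_rms(obs, pred))  # ~0.3 arcsec, by construction
```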
Exploiting these new high-quality spectroscopic data sets in clusters that are relatively free from other intervening line-of-sight structures, strong lensing modeling even becomes sensitive to the adopted cosmology \citep[][hereafter Ca16]{2016A&A...587A..80C}. In addition, new large spectroscopic samples of cluster member galaxies over a sufficiently wide area allow the cluster total mass to be derived based on galaxy dynamics \citep[e.g.,][]{2013A&A...558A...1B}. This provides an independent, complementary probe of the cluster mass out to large radii, which, when combined with high-quality weak-lensing determinations, can in principle be used to infer dark-matter properties \citep{2014ApJ...783L..11S} or to test modified theories of gravity \citep{2016JCAP...04..023P}. The combination of photometric and spectroscopic data now available for MACS~0416, from extensive HST and VLT observations, makes it one of the best data sets with which to investigate the dark-matter distribution in the central region of a massive merging cluster through strong lensing techniques and to unveil high-redshift magnified galaxies owing to its large magnification area. The high-precision strong lensing model of MACS~0416 presented by Gr15 was based on CLASH imaging data and spectroscopic information obtained as part of the CLASH-VLT survey, presented in \citet{2015arXiv151102522B}. MACS~0416 is a massive and X-ray luminous \citep[$M_{200} \approx 0.9 \times 10^{15} \rm M_{\odot}$ and $L_X \approx 10^{45}$ erg s$^{-1}$,][]{2015arXiv151102522B} galaxy cluster at $z= 0.396$, originally selected as one of the five clusters with high magnification in the CLASH sample. This system was readily identified as a merger, given its unrelaxed X-ray morphology and the observed projected separation ($\sim 200$ kpc) of the two brightest cluster galaxies (BCGs) \citep[see][]{2012MNRAS.420.2120M}. 
\citet{2013ApJ...762L..30Z} performed the first strong lensing analysis using the available CLASH HST photometry, which revealed a rather elongated projected mass distribution in the cluster core ($\sim 250$ kpc). In subsequent works, \citet{2014MNRAS.443.1549J,2015MNRAS.446.4132J} combined weak and strong lensing analyses, detecting two main central mass concentrations. When comparing their mass reconstruction with shallow Chandra observations, they were not able to unambiguously discern between a pre-collisional and a post-collisional merger. The CLASH-VLT spectroscopic sample of about 800 cluster member galaxies out to $\sim 4$ Mpc has recently allowed detailed dynamical and phase-space distribution analyses, which revealed a very complex structure in the cluster core \citep{2015arXiv151102522B}. The most likely scenario, supported also by deep X-ray Chandra observations and VLA radio data, is a merger composed of two main subclusters observed in a pre-collisional phase. In this work, we present a further improved strong lensing model of MACS~0416, which exploits a new, unprecedented sample of more than 100 spectroscopically confirmed multiple images (corresponding to 37 multiply imaged sources) and $\sim\! 200$ cluster member galaxies in the cluster core. In Section 2, we describe the MUSE spectroscopic data set, the data reduction procedure, and the method used for redshift measurements. In Section 3, we describe the strong lensing model and discuss the results of our strong lensing analysis. In Section 4, we summarize our conclusions. Throughout this article, we adopt a flat $\rm \Lambda CDM$ cosmology with $\Omega_m = 0.3$ and $H_0 = 70\, {\rm km/s/Mpc}$. In this cosmology, $1''$ corresponds to a physical scale of $5.34\, {\rm kpc}$ at the cluster redshift ($z_{lens} = 0.396$). All magnitudes are given in the AB system. 
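The quoted angular scale follows directly from the adopted cosmology. As a cross-check, a short numerical sketch of the angular diameter distance (flat $\Lambda$CDM, neglecting radiation, with simple trapezoidal quadrature) reproduces the $5.34$ kpc/arcsec figure:

```python
import math

def kpc_per_arcsec(z, H0=70.0, Om=0.3):
    """Proper transverse scale (kpc/arcsec) at redshift z in a flat
    LambdaCDM cosmology, via the angular diameter distance
    D_A = (c/H0) * int_0^z dz'/E(z') / (1+z), E(z) = sqrt(Om(1+z)^3 + OL)."""
    c = 299792.458  # speed of light in km/s
    E = lambda zz: math.sqrt(Om * (1 + zz) ** 3 + (1 - Om))
    n = 1000
    dz = z / n
    # trapezoidal integration of 1/E(z') from 0 to z
    integral = sum((1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * 0.5 * dz
                   for i in range(n))
    d_a_mpc = (c / H0) * integral / (1 + z)
    arcsec_in_rad = math.pi / (180.0 * 3600.0)
    return d_a_mpc * 1000.0 * arcsec_in_rad  # Mpc -> kpc, per arcsec

print(kpc_per_arcsec(0.396))  # ~5.34 kpc/arcsec, as quoted in the text
```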
\begin{figure*} \centering \includegraphics[width = 1.0\textwidth]{MACS0416_images_with_zoom_model_predicted.pdf} \caption{Color composite image of MACS~0416 from Hubble Frontier Fields data. Blue, green and red channels are the combination of filters F435W, F606W+F814W and F105W+F125W+F140W+F160W, respectively. White circles mark the positions of the 59 multiple images belonging to 22 families with new spectroscopic confirmation in this work, while red circles show multiple images previously known in spectroscopic families. Magenta circles show the model-predicted positions of multiple images not included in our model, lacking secure identifications (see Table \ref{tab:multiple_images}). The inset is a blow-up of the region around family 14, around two galaxy cluster members, G1 and G2, with total mass density profile parameters free to vary in our model (see Section \ref{sec:sl_modelling}). The blue circles indicate the positions of the BCGs (BCG,N and BCG,S).} \label{fig:arcs} \end{figure*}
In this article, we have significantly extended the panoramic VIMOS spectroscopic campaign of MACS 0416, presented in Gr15 and \citet{2015arXiv151102522B}, with data from the MUSE integral-field spectrograph on the VLT, which has yielded 208 new secure redshift measurements in the central 2 arcmin$^2$ region of the cluster. Notably, a new large set of multiply lensed sources was identified using two MUSE archival pointings, extending the work of Gr15 and \citet{2016arXiv160300505H} and bringing the number of spectroscopically identified multiple-image systems from 15 to 37. This was made possible by measuring 59 new redshifts at very faint magnitudes, thanks to the sensitivity of MUSE to line fluxes as faint as $10^{-19}\, {\rm erg}\, {\rm s}^{-1}\, {\rm cm}^{-2}\, \AA^{-1}$ \citep[see][for the study of a similar set of low-luminosity Lyman-$\alpha$ emitters with MUSE observations of the HFF cluster AS1063]{2016arXiv160601471K}. This new sample also extends the redshift range of known multiple images, with five additional systems at $z>5$, one of which is at $z=6.145$ (13 images with measured redshifts at $z>5$). The MUSE observations also allowed us to secure redshifts of 144 member galaxies over an area of $\sim\! 0.2\, {\rm Mpc}^2$. Three-quarters of the cluster galaxies selected down to $mag_{F160W}=24$ (corresponding to $M_*\approx 3\times 10^8 \,\rm M_\odot$) are now spectroscopically confirmed. With such a large set of 102 spectroscopic multiple images and a much improved sample of galaxy members in the cluster core, we have built a new strong lensing model and obtained an accurate determination of the projected total mass distribution of MACS 0416. 
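As a consistency check on the quoted redshift range of the Lyman-$\alpha$ emitters, the observed wavelength $(1+z)\times 1215.67$~\AA\ falls within the MUSE optical coverage for all confirmed systems. The bandpass limits below are approximate, indicative values assumed for illustration, not taken from this paper:

```python
LYA_REST = 1215.67  # Lyman-alpha rest wavelength in Angstrom
# Approximate MUSE wavelength coverage (assumed here), in Angstrom
MUSE_MIN, MUSE_MAX = 4750.0, 9350.0

def lya_observed(z):
    """Observed Lyman-alpha wavelength (Angstrom) at redshift z."""
    return (1 + z) * LYA_REST

# Redshift extremes of the newly confirmed multiple-image systems
for z in (3.08, 6.145):
    lam = lya_observed(z)
    print(z, round(lam, 1), MUSE_MIN <= lam <= MUSE_MAX)
```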
The main results of this study can be summarized as follows: \begin{enumerate} \item We can reproduce the observed multiple-image positions with an accuracy of $\Delta_{\rm rms}=0\arcsec.59$, which is somewhat larger than that obtained by Gr15 ($0\arcsec.36$), who, however, used fewer than one-third of the multiple images. \item The large-scale component of the total mass distribution was initially modeled with two cored elliptical pseudo-isothermal profiles around the two BCGs, as in Gr15; however, larger positional offsets $\Delta$ in the NE portion of the cluster led us to introduce a third floating cored halo in the model. We find it interesting that, besides significantly reducing the $\Delta_{\rm rms}$, the best-fit position of this third halo is very close to a peak of the convergence map obtained by \citet{2016arXiv160300505H} with an independent free-form lensing model, which also exploits the weak-lensing shear. Although this third halo is centered on a relatively small overdensity of cluster galaxies, it could not be identified in the phase-space analysis of \citet{2015arXiv151102522B}, most probably because of the combination of projection effects and the absence of a clear separation in the projected velocity space. \item The new best-fitting centers of the two main halos are now found within $\sim\! 2\arcsec$ of the respective BCGs, further reducing the halo-BCG offset when compared with the Gr15 model. As described in \citet{2015arXiv151102522B}, such a concentric distribution of light and dark-matter mass, when compared with the distribution of the X-ray emitting gas, whose main peak is at the position of the northern BCG, is consistent with a pre-merging scenario. \item The cumulative projected total mass profile is found to be in excellent agreement with that of Gr15, and in good agreement with the dynamical and X-ray masses, which were, however, obtained with the simple approximation of a single spherical halo \citep[see][]{2015arXiv151102522B}. 
Together with point 2 above, this suggests that, owing to a significant enhancement of constraints in the strong lensing model, we are now able to better resolve the mass distribution of the smooth cluster halo. \item The overall scaling of the total mass-to-light ratio for the sub-halo population, traced by the new highly complete and pure sample of cluster galaxies, is found to be consistent with that of Gr15. Our new model therefore corroborates the evidence found in Gr15 that the sub-halo mass function is significantly suppressed when compared to simulations, particularly at the high-mass end. A similar result has recently been obtained in an independent study \citep{2016arXiv160701023M} of the Abell 2142 galaxy cluster with SDSS data. A detailed analysis of the sub-halo population and the different mass components in the core of MACS 0416, which takes advantage of the internal velocity dispersions of cluster galaxies \citep[see e.g.,][]{2015MNRAS.447.1224M,2016arXiv160208491M}, is deferred to a future paper, where we also plan a detailed comparison with the study of \citet{2016arXiv160300505H}. \end{enumerate} Remarkably, the new spectroscopic identifications with MUSE observations of MACS 0416 in some cases reach the continuum magnitude limit of the HFF data for Lyman-$\alpha$ emitters \citep[see also][]{2016arXiv160601471K}, and complement the HST NIR GRISM spectroscopy of the GLASS survey. Not surprisingly, this cluster now becomes one of the best test benches for strong lensing modeling (see Figure~\ref{fig:mag_specz}), which we argue needs to rely largely, or entirely, on spectroscopically confirmed multiple-image systems for high-precision modeling. The accuracy we have reached in reproducing the observed multiple-image positions with this new model, on the other hand, suggests that it will be challenging to further improve on these results by simply introducing more mass components in parametric models. 
Interestingly, the large number of constraints for this cluster should allow free-form models to become more effective, for example in discovering extra mass clumps with unusual total mass-to-light ratios. As already noted in Ca16 \citep[see also][]{2016ApJ...817...60T}, with the current high-quality set of strong lensing constraints we seem to have hit the limit of the single-plane lensing approximation, so that the next step in precision strong-lensing modeling will inevitably have to properly take into account the effects of the structure along the line of sight, adequately sampled by spectroscopic data. As previously done with CLASH-VLT VIMOS observations of HFF clusters, we make public the new extended redshift catalog\footnote{The full redshift catalog, including VIMOS and MUSE measurements, can be found in the electronic journal and at the link: \url{https://sites.google.com/site/vltclashpublic/data-release}}, which includes secure redshift determinations from the MUSE data, in an effort to add further value to the entire HFF dataset.
{} {Hydrodynamical instabilities and shocks are ubiquitous in astrophysical scenarios. Therefore, an accurate numerical simulation of these phenomena is mandatory to correctly model and understand many astrophysical events, such as supernovae, stellar collisions, or planetary formation. In this work, we attempt to address many of the problems that a commonly used technique, smoothed particle hydrodynamics (SPH), has when dealing with subsonic hydrodynamical instabilities or shocks. To that aim, we built a new SPH code named SPHYNX, which includes many of the recent advances in the SPH technique together with some new ones, which we present here.} {SPHYNX is of Newtonian type and grounded in the Euler-Lagrange formulation of the smoothed-particle hydrodynamics technique. Its distinctive features are: the use of an integral approach to estimating the gradients; the use of a flexible family of interpolators called $sinc$ kernels, which suppress the pairing instability; and the incorporation of a new type of volume element which provides a better partition of unity. Unlike other modern formulations, which consider volume elements linked to pressure, our volume element choice relies on density. SPHYNX is, therefore, a density-based SPH code.} {A novel computational hydrodynamics code oriented to astrophysical applications is described, discussed, and validated in the following pages. The resulting code conserves mass, linear and angular momentum, energy, and entropy, and preserves kernel normalization even in strong shocks. In our proposal, the estimation of gradients is enhanced using an integral approach. Additionally, we introduce a new family of volume elements which reduce the so-called tensile instability. Both features help to suppress the damping which often prevents the growth of hydrodynamic instabilities in regular SPH codes.} {On the whole, SPHYNX has passed the verification tests described below. 
For identical particle settings and initial conditions, the results were similar to (or, in some particular cases, better than) those obtained with other SPH schemes, such as GADGET-2 and PSPH, or with the recent density-independent formulation (DISPH) and conservative reproducing kernel (CRKSPH) techniques.}
\label{sec:introduction} Many interesting problems in astrophysics involve the evolution of fluids and plasmas coupled with complex physics. For example, in core-collapse supernovae, magnetohydrodynamics meets general relativity, nuclear processes, and radiation transport. Other scenarios, such as neutron star mergers, Type Ia supernovae, and planet or star formation, face similar challenges in terms of complexity. Besides that, these phenomena often have a strong dependence on dimensionality and must be studied in three dimensions. This requires accurate numerical tools, which translates into rather sophisticated hydrodynamic codes. Because of its adaptability to complex geometries and good conservation properties, the Smoothed Particle Hydrodynamics (SPH) method is a popular alternative to grid-based codes in the astrophysics community. SPH is a fully Lagrangian method, born forty years ago \citep{luc77,gin77}, that has since undergone sustained development \citep{mon92,mon05,ros15,spr10,pri12}. Recent years have witnessed a wide range of improvements, especially aimed at reducing the numerical errors inherent to the technique. These errors are known as $E_0$ errors \citep{rea10} and mainly appear due to the conversion of the integrals, representing locally averaged magnitudes of the fluid, into finite summations. The simplest and most naive way to get rid of them would be to work closer to the continuum limit, which implies working with a number of particles and neighbors as large as possible (ideally, $N\to \infty$ and $N_{nb}\to \infty$, see \citealp{zhu15}). Unfortunately, this is not feasible in common applications of the technique because the total number of particles is limited by the available computing power (both speed and storage). Moreover, the number of neighbors of a given particle cannot be arbitrarily increased without suffering the pairing instability \citep{sch81}. 
This is a numerical instability that acts as an attractive force appearing at scales slightly shorter than the smoothing length $h$, provoking artificial particle clumping and effectively decreasing the quality of the discretization, which eventually leads to unrealistic results. In order to reduce the $E_0$~errors, another more practical possibility has been studied in recent years: finding interpolating functions that are less prone to particle clustering than the widely used $M_4$, or cubic-spline, kernel \citep{mon85}. Among the various candidates, the most used (pairing-resistant) kernels come either from an extension of the $M_n$ family to higher-order polynomials \citep{sch46} or from the Wendland functions \citep{wendland1995}. In particular, the Wendland family is especially well suited to cope with the pairing instability \citep{deh12}. Another possibility is the $sinc$ family of kernels \citep{cabezon2008}, which are functions of the type $S(x)=C(n)(\sin x/x)^n$ and add the capability of dynamically modifying their shape simply by changing the exponent $n$. That adaptability of the $sinc$ kernels makes the SPH technique even more flexible and can be used, in particular, to prevent particle clustering, as shown in Sect.~\ref{Sec.sinc}. Historically, the growth of subsonic hydrodynamical instabilities has been problematic for SPH simulations, as the technique damps them significantly. The Rayleigh-Taylor (RT) instability is a ubiquitous phenomenon that serves as a paradigmatic example. It appears wherever a cold, dense fluid lies on top of a hot, dilute one in the presence of gravity (or any inertial force, by virtue of the equivalence principle). The entropy inversion leads to the rapid overturn of the fluid layers. In the real world, the overturn is triggered by small perturbations at the separation layer between the light and dense fluids. 
The RT instability is one of the most important agents driving the thermonuclear explosion of a white dwarf, which gives rise to Type Ia supernova (SNIa) explosions. Its correct numerical description is also crucial to understanding the structure of supernova remnants (SNR) and to modeling core-collapse supernova (CCSN) explosions. Additionally, the RT instability is also the source of other interesting phenomena, such as the Kelvin-Helmholtz (KH) instability and turbulence. The numerical simulation of the Rayleigh-Taylor instability using SPH has traditionally been a drawback for the technique, especially for low-amplitude initial perturbations in the presence of a weak gravitational force. At present, for a similar level of resolution, the best SPH codes cannot yet compete with state-of-the-art grid-based methods. For example, the finite-volume/difference Godunov methods such as ATHENA and PLUTO, AMR codes such as FLASH, the Meshless Finite Mass (MFM) and Volume (MFV) methods, and especially the moving-mesh methods based on Voronoi tessellations, as in the code AREPO, provide a good approach to the RT instability. This problem is partially overcome by using a large number of neighbors and by adding an artificial heat diffusion term to the energy equation, as in the PSPH proposal by \cite{sai13} and \cite{hop13}. However, these problems still persist when either the size of the initial perturbation or the gravity value is reduced \citep{val12}, meaning that they are a symptom of another source of numerical error in SPH named the {\it tensile instability}. This is an artificial surface tension that appears at contact discontinuities because of an insufficient smoothness of the pressure between both sides of the discontinuity \citep{mon00}. As a consequence, the integration of the momentum equation gives incorrect results. An excess of that tension provokes the damping of fluid instabilities, especially those with short wavelengths. 
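For reference, the damping described here can be gauged against the classical linear growth rate of the incompressible, inviscid RT instability, $\sigma = \sqrt{A g k}$, with Atwood number $A = (\rho_2-\rho_1)/(\rho_2+\rho_1)$ and wavenumber $k = 2\pi/\lambda$. This is a textbook result rather than something specific to the paper, and the numbers below are purely illustrative:

```python
import math

def rt_growth_rate(rho_heavy, rho_light, g, wavelength):
    """Linear growth rate sigma = sqrt(A*g*k) of the incompressible,
    inviscid Rayleigh-Taylor instability (classical linear theory).
    A = (rho2 - rho1)/(rho2 + rho1), k = 2*pi/lambda."""
    atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    k = 2.0 * math.pi / wavelength
    return math.sqrt(atwood * g * k)

# Example: density contrast 2:1, weak gravity g = 0.1 (code units),
# perturbation wavelength 0.5; short wavelengths grow fastest.
print(rt_growth_rate(2.0, 1.0, 0.1, 0.5))  # ~0.647
```

Short-wavelength modes have the largest $\sigma$, which is why an artificial surface tension that damps them preferentially is so harmful to the simulated growth.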
Several techniques have been proposed to treat this problem, such as averaging the pressure by means of the interpolating kernel itself (the PSPH scheme, \citealp{hop15}), linking the volume element estimation to pressure (the density-independent scheme, DISPH, \citealp{hop13,sai13,sai16}), or adding an artificial diffusion of heat to the energy equation, which helps to balance the pressure across the discontinuity \citep{pri07}. These techniques have paved the road that has led the SPH technique to a new standard within the last few years, and have helped to overcome this long-lasting problem. In particular, it has been shown that it is fundamental to increase the accuracy of the gradient estimation across contact discontinuities by reducing its numerical noise. To achieve that, \cite{garciasenz2012} used an integral scheme to calculate spatial derivatives, which proved to be especially efficient at handling fluid instabilities. The validity of that approach was assessed in subsequent works by \cite{cabezon2012} and \cite{ros15}. In the latter case, the integral approach to the derivatives (IAD) was used to extend the SPH scheme to the special-relativistic regime (see also \citealp{ros15b}). See also the recent work of \cite{val16}, where the efficiency of the IAD scheme at reducing the $E_0$~errors is studied in detail. Finally, a recent breakthrough in SPH was the emergence of the concept of generalized volume elements \citep{rit01,sai13,hop13}. In these works, it was shown that a clever choice of the volume element (VE) can reduce the tensile instability, leading to a better description of hydrodynamic instabilities. In this paper, we present a novel estimator of the VE that preserves the normalization of the kernel. We show that having a good partition of unity is also connected to the tensile-instability problem. The calculations of the growth of the Kelvin-Helmholtz and Rayleigh-Taylor instabilities using these new VE are encouraging. 
In this work, we also introduce the hydrodynamics code SPHYNX, which gathers together the latest advances in the SPH technique, including the new ones presented here. SPHYNX has already been used in production runs simulating Type Ia and core-collapse supernovae and is publicly accessible\footnote{\url{astro.physik.unibas.ch/sphynx}}. The organization of this paper is as follows. In Section~\ref{sec:generalities} we review the main properties of the $sinc$~kernels as well as the integral approach to the derivatives, which are at the heart of SPHYNX. Section~\ref{sec:preconditioning} is devoted to the choice of the optimal volume element and to the update of the smoothing length $h$ and of the kernel index $n$. Section~\ref{sec:sphynx} describes the structure of the hydrodynamics code SPHYNX: momentum and energy equations and the included physics. Sections~\ref{sec:2Dtests} and \ref{sec:3Dtests} are devoted to describing and analyzing a variety of tests carried out in two and three dimensions, respectively. Finally, we present our main conclusions and prospects for the future in Section~\ref{sec:conclusions}.
\label{sec:conclusions} In this paper, we present a new density-based SPH code, named SPHYNX, and test it in a series of traditionally problematic simulations for SPH codes in 2D and 3D. In particular, we have been able to perform a Rayleigh-Taylor simulation in a weak gravitational field, $g=0.1$. Additionally, the shock-blob interaction test showed that SPHYNX can efficiently suppress the tensile instability that prevents the rise of hydrodynamical instabilities and mixing in many scenarios simulated with SPH. Moreover, the outcome of other tests, such as the hydrostatic square, Kelvin-Helmholtz instability, Gresho-Chan vortex, Sedov explosion, Noh wall-shock, Evrard collapse, and triple-point shock, shows that our implementation provides results competitive with other state-of-the-art calculations. For these problems, SPHYNX produces better results than many of the extant density-based SPH codes, being qualitatively similar to those obtained with the recently developed CRKSPH scheme \citep{fro17}. However, unlike the CRKSPH method, our approach ensures angular momentum conservation from the outset. To achieve this, SPHYNX benefits from recent advances in the field and gathers together the latest methodologies to perform numerical simulations of astrophysical scenarios via the smoothed particle hydrodynamics technique. These methodologies include, as a novelty, a new generalized volume element estimator and a consistent update of the smoothing length and the sharpness of the interpolating kernel along with the particle density. Additionally, the code incorporates an integral approach to calculate gradients and a pairing-resistant family of interpolators. These features are summarized and discussed in the following. The choice of non-standard volume elements to approximate the Euler integral equations as finite summations has a significant impact on the simulations. 
Following the works by \cite{sai13} and \cite{hop13}, who generalized the VE so that they are not necessarily the trivial $m/\rho$ choice, we postulate a new volume element which enhances the normalization of the kernel. As discussed in Sect.~\ref{choice_volel}, the VE assigned to a particle is $V_a= X_a/\sum_b X_b W_{ab}$, where $X_a = (m_a/\rho_a)^p$ is the weighting estimator of the kernel and $0\le p\le 1$ is a parameter chosen by the user. The value $p=0$ reduces the VE to $1/\sum_b W_{ab}$, which is the standard VE when the mass of the particles is the same. For $p=1$, we have $V_a = (m_a/\rho_a)/\sum_b (m_b/\rho_b) W_{ab}$, which is simply the re-normalized traditional volume element. As expected, a better kernel normalization (by a factor of between 2 and 5) is obtained when these VE are used. A negative feature of the proposed VE is their tendency to overshoot the density estimation in the presence of sharp gradients when $p\simeq 1$. That is, in fact, the fundamental reason for not taking $p= 1$ in the estimator $X_a= (m_a/\rho_a)^p$. The optimal value of $p$ depends on the particular problem at hand, but the range $0\le p\le 0.7$ explored in this work seems to be safe. Nevertheless, a more robust implementation that allows taking $p=1$ is to consider $X_a= (\langle m_a/\rho_a\rangle)^p$, where $\langle \cdot \rangle$ is the SPH average of the magnitude. Although this last procedure requires the computation of the averages $\langle m_a/\rho_a\rangle$, it is the recommended default choice because of its robustness and its ability to keep track of strong shocks and instabilities in the presence of sharp density gradients. Another important feature is the dynamical choice of the interpolating kernel function. A large body of calculations carried out with SPH in the past made use of the $M_4$ cubic-spline function to perform interpolations. 
The $M_4$ polynomial has, however, a serious drawback: it is prone to the pairing instability when the number of neighbors increases (e.g., exceeding $n_b\simeq 60$ in 3D calculations that use the $M_4$~kernel). This is clearly a limitation, because in practical applications it is advisable to take as many neighbors as possible to reduce the $E_0$ errors in the SPH equations. A growing number of kernel candidates have been proposed during the last decade to alleviate this problem. For example, one option is to consider the natural extension of the $M_n$ family to higher polynomial degrees, such as the quartic ($M_5$) or quintic ($M_6$) kernels. More recently, a different family of interpolators has been proposed based on the Wendland functions, as discussed in \cite{deh12}, which shows strong resistance to the pairing instability. A third family of interpolators, called the $sinc$ (harmonic-like) kernels, was introduced by \cite{cabezon2008}; these are also implemented in SPHYNX. As mentioned in Sect.~\ref{Sec.sinc}, the definition of the $sinc$ kernels is directly linked to that of the Dirac-$\delta$ function. Unlike the $M_n$ family, which is discrete in the index $n\in \mathbf{Z}^{+}$, the $sinc$ kernels form a continuous family, which depends on a leading exponent $n\in \mathbf{R}^{+}$. Actually, the $M_n$ family can be considered a subset of the $sinc$ family \citep{garciasenz2014}. Using the $sinc$ family of kernels endows the SPH technique with a flexible engine, as the shape of the kernel can be changed dynamically, in a continuous way, during run-time. This feature can be used, for example, to suppress the pairing instability (see Sect.~\ref{pairingtest}) or to equalize the resolution behind a shock wave (as shown in Fig.~\ref{sedov_5}). 
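A minimal numerical sketch can illustrate the role of the $sinc$ exponent $n$. The compact support $q\in[0,2)$ (with $q=r/h$) and the argument $x=\pi q/2$ used below are common conventions for compact SPH kernels but are assumptions here, not taken from the paper, and the normalization constant is obtained by simple quadrature rather than from any published fitting formula:

```python
import math

def sinc_kernel_shape(q, n):
    """Un-normalized sinc kernel profile (sin(x)/x)^n with x = pi*q/2,
    compact support q in [0, 2), where q = r/h (assumed convention)."""
    if q >= 2.0:
        return 0.0
    x = math.pi * q / 2.0
    if x == 0.0:
        return 1.0
    return (math.sin(x) / x) ** n

def normalization_3d(n, steps=2000):
    """Numerical normalization constant C(n) such that the 3D integral
    C(n) * int 4*pi*q^2 * S(q) dq over the support equals 1 (h = 1)."""
    dq = 2.0 / steps
    integral = sum(4.0 * math.pi * ((i + 0.5) * dq) ** 2
                   * sinc_kernel_shape((i + 0.5) * dq, n) * dq
                   for i in range(steps))
    return 1.0 / integral

# Raising n sharpens the central peak relative to the wings, which is
# the property exploited dynamically against the pairing instability:
for n in (3, 5, 7):
    print(n, sinc_kernel_shape(0.5, n) / sinc_kernel_shape(1.0, n))
```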
Additionally, SPHYNX estimates gradients by an integral approximation (\emph{IAD}$_0$), which is more accurate than the traditional procedure based on the analytic derivative of the kernel function, and reduces the $E_0$ errors caused by the particle sampling of the fluid. We fully confirm in this work the importance of this new approach, especially for handling hydrodynamic instabilities, in agreement with previous publications \citep{garciasenz2012,cabezon2012,ros15,ros15b,val16}. SPHYNX has been validated with several standard tests in two and three dimensions, ranging from strong shocks and subsonic fluid instabilities in boxes to larger systems where the gravitational force takes over. From the analysis of these test cases we draw the following conclusions. The use of the integral approach to calculate gradients, along with the traditional volume elements, $V_a= m_a/\rho_a$ ($p=0$ in Eq.~\ref{estimatorXrho}), and a $sinc$ kernel with $n=5$, improves the simulation of hydrodynamic instabilities subject to small initial perturbations with respect to standard SPH. The quantitative amplitude growth-rate of the KH instability is closer to the correct growth-rate (as computed with state-of-the-art Eulerian codes) than that of current density-based SPH codes (with L$_1$ errors smaller by a factor of $1.5-4$), and is similar to the results of the modern PSPH formulation. The scheme is also able to reproduce the KH instability in stratified fluids with high density contrasts ($\rho_2/\rho_1\simeq 8$). In the case of the RT instability, the scheme is also able to cope with small perturbations ($w_0=0.0025$) and tiny gravity values ($g=-0.1$), although in the latter case the non-linear evolution scarcely shows structure. In shocks, the results are similar to those provided by the standard method in identical conditions. When the new VE are switched on there is, in general, an increase in the quality of the simulations. 
We have monitored the volume normalization condition $\sum_b V_b W_{ab}=1$ in all calculated models and, without exception, it is better fulfilled (usually 10-20\% closer to unity) with the new volume elements. This change has an impact on the overall evolution, considerably improving the results of the simulations. A paradigmatic case is the RT instability, where the use of the VE leads to an increase in the growth-rate of the instability and to a richer evolution in the linear stage, even for the low-gravity simulation. In shock waves, the front of the blast becomes steeper and the density peak is 10-25\% higher, even in 3D. Regarding the Sedov test, the post-shock evolution of density and pressure is 5-10\% closer to the analytic expectations. It is also worth noting that the VE also improve the condition $\vert\Delta\mathbf{r}\vert=0$ which, according to Eq.~(\ref{approxI}), is a necessary condition to exactly compute the gradient of linear functions when the $IAD_0$ scheme is used. During the course of the simulations we did not see any sign of the pairing instability, even when working with $\simeq 50$ neighbors in the 2D tests. In any case, to avoid the instability it is enough to raise the exponent of the $sinc$ kernel above the adopted default value $n=5$. We stress that, unlike other recent SPH schemes, the simulations of the KH and RT instabilities were carried out without including any artificial flux of heat or any other procedure to smooth the pressure. Among the several improvements left for future work, we plan to improve the calculation of gravity by including a better treatment of the gravitational softening at short distances. The best way to do that is to include gravity in the discretized SPH Lagrangian, as described in \cite{pri07} and \cite{spr10}. 
The implementation and validation of switches that ensure the AV is added only in regions where there are shocks \citep{cul10}, as well as of noise triggers to control the velocity in subsonic flows \citep{ros15}, could also be done with moderate effort. A more ambitious goal would be to calculate the volume elements directly by implicitly solving the equation $\sum_b V_b W_{ab}=1$ for each particle of the system. Even though the strong coupling between particles renders any implicit calculation computationally expensive, this would probably solve the density overshooting problem seen in our explicit approach.
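The volume-normalization diagnostic discussed above is easy to reproduce numerically. The sketch below (in Python; an illustrative setup, not SPHYNX code) evaluates $\sum_b V_b W_{ab}$ for a particle on a uniform 2D lattice with the standard volume elements $V_b=m_b/\rho_b$, using a $sinc$-family kernel of the form $W\propto[\mathrm{sinc}(\pi q/2)]^n$ with compact support $q<2$; the normalization constant is obtained here by quadrature (production codes precompute it):

```python
import numpy as np

def sinc_kernel_2d(r, h, n=5):
    """2D sinc-family kernel W(r, h) = (C_n / h^2) [sinc(pi q / 2)]^n,
    q = r / h, with compact support q < 2.  C_n is computed numerically
    so that the kernel integrates to 1 (illustrative sketch)."""
    q = np.asarray(r, dtype=float) / h
    x = 0.5 * np.pi * q
    safe_x = np.where(x > 0.0, x, 1.0)          # avoid 0/0 at the origin
    s = np.where(x > 0.0, np.sin(safe_x) / safe_x, 1.0)
    w = np.where(q < 2.0, s ** n, 0.0)
    # normalization: C_n * 2 pi * int_0^2 sinc(pi q / 2)^n q dq = 1
    qs = np.linspace(1e-8, 2.0, 8001)
    dq = qs[1] - qs[0]
    ss = (np.sin(0.5 * np.pi * qs) / (0.5 * np.pi * qs)) ** n
    C_n = 1.0 / (2.0 * np.pi * np.sum(ss * qs) * dq)
    return C_n * w / h ** 2

# uniform 2D lattice: V_b = m_b / rho_b is just the cell area dx^2
dx = 1.0
h = 2.0 * dx                      # ~50 neighbours inside the support r < 2h
g = np.arange(-6, 7) * dx         # covers the kernel support 2h = 4 dx
X, Y = np.meshgrid(g, g)
partition = np.sum(dx ** 2 * sinc_kernel_2d(np.hypot(X, Y), h))
```

On this regular particle distribution the sum comes out very close to unity; in a real simulation the deviation from unity is exactly the quantity monitored above, and the new volume elements reduce it.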
16
7
1607.01698
1607
1607.05072_arXiv.txt
{The intensity of the \oiv~2s$^{2}$2p $^{2}$P--2s2p$^{2}$ $^{4}$P and \siv~3s$^{2}$3p $^{2}$P--3s3p$^{2}$ $^{4}$P intercombination lines around 1400~\AA~observed with the \textit{Interface Region Imaging Spectrograph} (IRIS) provides a useful tool to diagnose the electron number density ($N_\textrm{e}$) in the solar transition region plasma. We measure the electron number density in a variety of solar features observed by IRIS, including an active region (AR) loop, a plage region and a brightening, and the ribbon of the 22 June 2015 M 6.5 class flare. By using the emissivity ratios of \oiv\ and \siv\ lines, we find that our observations are consistent with the emitting plasma being near isothermal (log$T$[K] $\approx$ 5) and iso-density ($N_\textrm{e}$ $\approx$~10$^{10.6}$ cm$^{-3}$) in the AR loop. Moreover, high electron number densities ($N_\textrm{e}$ $\approx$~10$^{13}$ cm$^{-3}$) are obtained during the impulsive phase of the flare by using the \siv\ line ratio. We note that the \siv\ lines provide a higher range of density sensitivity than the \oiv\ lines. Finally, we investigate the effects of high densities ($N_\textrm{e}$ $\gtrsim$ 10$^{11}$ cm$^{-3}$) on the ionization balance. In particular, the fractional ion abundances are found to be shifted towards lower temperatures for high densities compared to the low-density case. We also explore the effects of a non-Maxwellian electron distribution on our diagnostic method. }
\label{Sect:1} The intercombination lines from \oiv\ and \siv\ around 1400~\AA~provide useful electron density ($N_\mathrm{e}$) diagnostics in a variety of solar features and astrophysical plasmas \citep[see, e.g.,][]{Flower75,Feldman79,Bhatia80}. These transitions are particularly suitable for performing density measurements as their ratios are known to be largely independent of the electron temperature, and only weakly dependent on the electron distribution \citep{Dudik14}. Another advantage is that these lines are close in wavelength, minimizing any instrumental calibration effects. However, discrepancies between theoretical ratios and observed values have been reported in the past by several authors. For instance, \citet{Cook95} calculated emission line ratios for different \oiv\ and \siv\ line pairs by using solar observations from the \textit{High Resolution Telescope Spectrograph} (HRTS) and the SO82B spectrograph on board \textit{Skylab}, as well as stellar observations from the \textit{Hubble Space Telescope}. They found that the observed ratios from \oiv\ and \siv\ would imply electron densities which differed significantly from each other (by up to an order of magnitude). Some of the discrepancies were subsequently identified by \citet{Keenan02} as due to line blends and low accuracy in the atomic data calculations. They obtained more consistent density diagnostics from \oiv\ and \siv\ ratios by using updated atomic calculations together with observations from SOHO/SUMER. Nevertheless, some inconsistencies still remained \citep{DelZanna02}. There is now renewed interest in the literature concerning these transitions, because some of the \oiv\ and \siv\ intercombination lines, together with the \siiv\ resonance lines, are routinely observed with the \textit{Interface Region Imaging Spectrograph} \citep[IRIS;][]{DePontieu14} at much higher spectral, spatial and temporal resolution than previously.
For example, \cite{peter_etal:2014} used the intensities of the \oiv\ vs. \siiv\ lines to propose that very high densities, on the order of 10$^{13}$ cm$^{-3}$ or higher, are present in the so-called IRIS plasma `bombs'. Line ratios involving an \oiv\ forbidden transition and a \siiv\ allowed transition have been used in the past to provide electron densities during solar flares and transient brightenings \citep[e.g.,][]{Cheng81,Hanssen81}. However, the validity of using \oiv\ to \siiv\ ratios has been hotly debated because these ratios gave very high densities compared to the more reliable ones obtained from the \oiv\ ratios alone \citep[see, e.g.,][]{hayes_shine:1987}. In addition, \cite{judge:2015} recalled several issues that should be taken into account when considering the \siiv /\oiv\ density diagnostic. The main ones were: 1) \oiv\ and \siiv\ ions are formed at quite different temperatures in equilibrium, and hence a change in the \oiv\ / \siiv\ ratio could imply a change in the temperature rather than in the plasma density; 2) the chemical abundances of O and Si are not known with any great accuracy and could vary during the observed events; 3) density effects on the ion populations could increase the \siiv\ / \oiv\ relative intensities by a factor of roughly 3--4. \cite{judge:2015} also mentioned the well-known problem of the ``anomalous ions'', e.g., the observed high intensities of the Li- and Na-like (such as \siiv) ions \citep[see also][]{DelZanna02}. Another important aspect to take into account is the effect of non-equilibrium conditions on the observed plasma diagnostics. It is well known that strong variations in the line intensities are obtained when non-equilibrium ionization is included in the numerical calculations \citep[see, e.g.,][]{Shen13,Raymond78,Mewe80,bradshaw_etal:04}.
In particular, \cite{Doyle13} and \cite{Olluri13} investigated the consequences of time-dependent ionization on the formation of the \oiv\ and \siiv\ transition region lines observed by IRIS. In addition, \cite{Dudik14} showed that non-Maxwellian electron distributions in the plasma can substantially affect the formation temperatures and intensity ratios of the IRIS \siiv\ and \oiv\ lines. These authors also suggested that the observing window used by IRIS should be extended to include \siv. Recent IRIS observation sequences have indeed included the \siv\ line near 1406~\AA. The \siv\ line ratios have a higher density-sensitivity limit than the \oiv\ line ratios and are thus particularly useful for diagnosing the high densities which might occur in flares. Previous flare studies have in fact reported line ratios involving O ions that lay above the density sensitivity range, indicating an electron density in excess of 10$^{12}$ cm$^{-3}$ \citep[e.g.,][]{Cook95,Polito16}. We present here the analysis of several IRIS observational datasets where the \siiv, \oiv, and \siv\ lines were observed. We focus on the diagnostics based on the \oiv\ and \siv\ lines, which we believe to be more reliable than those involving the \siiv\ to \oiv\ line ratios, because of the issues described above. The observations used in this work were obtained from a variety of solar features. Small spatial elements were selected in order to reduce multi-thermal and multi-density effects. Discrepancies in density diagnostics can in fact also arise if regions of plasma at different temperatures and densities are observed along the line of sight \citep{Doschek84,Almleaky89}. We discuss in some detail the various factors that affect the density measurements and their uncertainties, as well as the different methods to obtain densities. We start by describing in Sect. \ref{Sect:2} the spectral lines and atomic data used in our analysis.
We refer the reader to Appendix \ref{Sect:A1} for a detailed review of the issues related to the atomic data and wavelengths for these lines. In Sect.~\ref{Sect:3} we briefly discuss the diagnostic methods. In Sect.~\ref{Sect:4} we analyse a loop spectrum observed in Active Region (AR) NOAA 12356 on 1 June 2015 and a spectrum acquired at the footpoints of the M 6.5 class flare on 22 June 2015. The analysis of additional spectra observed in the AR can be found in Appendix \ref{Sect:A2}. Sect.~\ref{Sect:5} presents a discussion of some of the physical processes which can affect the formation temperature of the ions studied in this work. Finally, the results of our analysis are discussed and summarized in Sect. \ref{Sect:6}. \begin{figure}[!htbp] \centering \includegraphics[width=0.45\textwidth]{th_ratios.eps} \caption{Theoretical ratios (continuous lines) of \oiv~1399/1401\AA, 1401/1404\AA~and \siv~1404/1406\AA~obtained by using the atomic data described in the text. The dotted curves show a $\pm$~10\% error for the theoretical ratios. The vertical lines indicate the high-density limit of 10$^{12}$ cm$^{-3}$ for the \oiv\ ratios and 10$^{13}$ cm$^{-3}$ for the \siv\ ratios.} \label{Fig:ratios} \end{figure}
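As a concrete illustration of how a density-sensitive line ratio is turned into a density, the sketch below inverts a monotonic theoretical ratio curve $R(\log N_e)$ by interpolation. The curve used here is a hypothetical two-level-like sigmoid standing in for the atomic-data curves of Fig.~\ref{Fig:ratios} (the numbers are illustrative, not the real \oiv\ or \siv\ data); outside the density-sensitive range the inversion only returns NaN, i.e. a limit:

```python
import numpy as np

def invert_ratio(ratio_obs, logn_grid, ratio_theory):
    """Invert a monotonic theoretical line-ratio curve R(log Ne) for an
    observed ratio via linear interpolation.  Returns NaN when the
    observed ratio falls outside the density-sensitive range."""
    if ratio_theory[0] > ratio_theory[-1]:      # make the curve increasing
        logn_grid = logn_grid[::-1]
        ratio_theory = ratio_theory[::-1]
    if not (ratio_theory[0] <= ratio_obs <= ratio_theory[-1]):
        return float("nan")
    return float(np.interp(ratio_obs, ratio_theory, logn_grid))

# hypothetical ratio curve: low- and high-density limits 0.18 and 1.90,
# turning over at Ne ~ 10^10.5 cm^-3 (NOT real O IV / S IV atomic data)
logn = np.linspace(8.0, 13.0, 501)
ne = 10.0 ** logn
n_crit = 10.0 ** 10.5
R = 0.18 + (1.90 - 0.18) * ne / (ne + n_crit)

# "observed" ratio generated at log Ne = 10.6, then inverted
r_obs = 0.18 + (1.90 - 0.18) * 10.0 ** 10.6 / (10.0 ** 10.6 + n_crit)
logne_fit = invert_ratio(r_obs, logn, R)        # recovers ~10.6
# a +10% ratio uncertainty maps into an upper density uncertainty:
logne_hi = invert_ratio(min(1.1 * r_obs, R[-1]), logn, R)
```

Propagating a $\pm$10\% ratio error in this way is what the dotted error curves in the figure above represent.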
\label{Sect:6} In this work we have investigated the use of the \oiv\ and \siv\ emission lines near 1400~\AA~observed by IRIS as electron density diagnostics of the plasma from which they are emitted. These ions are formed at similar temperatures and are therefore expected to provide similar density diagnostics (within a factor of around two). Density diagnostics are usually based on the intensity ratio of two lines from the same ion. We have applied an emissivity ratio method to obtain a value of the electron density which can reproduce the relative intensities of all the \oiv\ and \siv\ lines in the observed spectra. In our analysis, we have selected different plasma regions within an AR (a loop, a bright point and a plage region) and at the ribbon of the 22 June 2015 flare, where the lines were observed to be more intense. The \oiv\ and \siv\ lines (in particular the \oiv\ line at 1399.77~\AA) are usually very weak and cannot easily be detected in quiet Sun regions with IRIS. In all the features we analysed, we find that the \oiv\ and \siv\ lines give consistent density diagnostic results when we assume that the plasma has a near-isothermal distribution rather than different formation temperatures for the two ions. In all cases, the results are consistent with the plasma being at a temperature of about log$T$[K]=5. This temperature is lower than the peak formation temperature of \oiv\ calculated in CHIANTI v.8 assuming ionization equilibrium. However, a significant amount of \oiv\ is still formed at a temperature of log$T$[K]\,$\approx$\,5.0, as shown in the fractional abundance plot in Fig. \ref{Fig:ioneq} (continuous lines), and even more so when the high-density effects are taken into account (dotted lines).
The hypothesis of iso-thermality for the \oiv\ and \siv\ plasma could be explained by the fact that all the features we analysed are either cool loop structures or loop footpoints, which are often observed to be dominated by plasma with a very narrow thermal distribution \citep[e.g.][]{delzanna:03, Warren08, Schmelz07,Schmelz14}. We also emphasize that we are estimating average values of the temperature and density of the emitting plasma. Using the emissivity ratio method, we find electron number densities of log$N_\textrm{e}$ [cm$^{-3}$] $\approx$ 10.6 and 11.0 in the AR loop and bright point, respectively. The density variation among different plasma features in the AR under study can also be estimated qualitatively by comparing the spectra shown in the right panel of Fig. \ref{Fig:spectra}. For instance, in the bright point spectrum (red), formed in a higher density plasma, the intensity of the \siv\ 1406.16~\AA~spectral line is enhanced compared to the intensity of the \oiv+\siv~blend at 1404.82~\AA. In contrast, in the loop spectrum the \siv\ 1406.16~\AA~line is weaker, indicating that the plasma is at a lower density. In addition, we note that the densities obtained by using the \siiv /\oiv\ line ratio $R_{4}$ are much higher than the values obtained by using the \oiv\ $R_{1}$ and \siv\ $R_3$ ratios, as shown in Tab. \ref{tab:densityAR} and as previously noted by \cite{hayes_shine:1987}. This could be due to a number of issues, as outlined in Sect. \ref{Sect:1} and \ref{Sect:5.2}, and in particular to the anomalous behaviour of the Na-like ions. In the flare case presented in Sect. \ref{Sect:4.2}, the \oiv\ line ratio indicates a very high electron number density, above the high-density limit of $\approx$ 10$^{12}$ cm$^{-3}$. The \siv\ line ratio is sensitive to higher electron densities and has been used to measure densities of $\approx$ 10$^{13}$ cm$^{-3}$ at the TR footpoints of the flare during the impulsive phase.
Indications of high electron densities in the TR plasma during flares have been reported by some authors in the past. In particular, \cite{Keenan95} obtained 10$^{12}$ cm$^{-3}$ using the \ov\ line ratio, while \cite{Cook94} reported density values of 10$^{12.6}$ cm$^{-3}$ using ratios of allowed and intersystem \oiv\ lines. However, most of the other studies \citep[e.g.,][]{Cook95} were based on the use of line ratios, such as the \oiv\ ones used in this work, which are density-sensitive up to a high-density limit of 10$^{12}$ cm$^{-3}$, and could therefore only provide lower limits on the electron density. Other authors have investigated electron number densities in high temperature plasma during flares. For instance, \cite{Doschek81} measured densities of 10$^{12}$~cm$^{-3}$ in the \ovii\ coronal plasma at 2 MK. Of particular importance is the study of \cite{Phillips96}, who showed for the first time very high electron densities (up to 10$^{13}$ cm$^{-3}$) from ions formed at $\approx$ 10~MK. Those high densities were observed 1 minute after the peak of an M-class flare. By using the ratio of the \siv\ lines observed by IRIS, we obtain accurate electron density estimates for the TR plasma which are almost everywhere within the sensitivity range of the line ratio. Fig. \ref{Fig:density_time} shows that very high densities, close to or above 10$^{13}$ cm$^{-3}$, are only reached over a short period of time during the peak of the flare, before dropping dramatically by more than an order of magnitude at the same footpoint position within 2 minutes. To the best of our knowledge, this is the first time that such high electron number densities have been diagnosed directly in the TR plasma during a flare.
Density diagnostics based on \siv\ line ratios with previous instruments were complicated by the presence of line blends which could not be properly resolved and by problems in the atomic data \citep{Dufton82,Cook95}, as also pointed out by \cite{keenan_etal:2002}. In this work we have shown that the \siv\ 1404.85~\AA~line can be accurately de-blended from the \oiv\ 1404.85~\AA~line in the high-density interval above 10$^{12}$ cm$^{-3}$. In fact, in this case the ratio $R_2$ remains constant, reducing the uncertainty associated with estimating the \oiv\ 1404.85~\AA~contribution to the blend by using the \oiv\ 1401.16~\AA~line intensity (see middle panel of Fig. \ref{Fig:ratios}). One might think that at such high densities opacity effects could become important. This is not the case for the \oiv\ and \siv\ intercombination lines, owing to the low $A$-values of these transitions. The optical depth can in fact be easily estimated by using the classical formula given, for instance, in \cite{Buchlin09}. We found that at a density of $\approx$~10$^{13}$ cm$^{-3}$ the \oiv\ lines reach an opacity of 1 over an emitting layer of the order of 10$^{5}$ km, which is much larger than the source size ($\approx$ the size of the IRIS pixel, i.e., around 200 km). In contrast, the \siiv\ line reaches an opacity of 1 over a considerably smaller layer, of the order of $\approx$~20~km. This implies that opacity effects might be important for this line in the flare case study. In particular, a decrease in the \siiv\ intensity due to opacity might result in incorrect density estimates based on the \siiv /\oiv\ ratio. Moreover, we emphasize the importance of including the effect of high electron number densities (above 10$^{10}$ cm$^{-3}$) in the calculations of the fractional ion abundances. In Sect.
\ref{Sect:5.1} we show that including fractional ion abundances for \oiv\ and \siv\ calculated at higher electron number densities does not significantly affect the results of the density diagnostics from the emissivity ratio method. In contrast, the formation temperature of the ions, and therefore the temperature estimated from the emissivity ratio method, is shifted to lower values, as shown in Fig. \ref{Fig:ioneq}. Similarly, the presence of a non-Maxwellian electron distribution in the plasma shifts the formation temperature of the ions to lower values. It is not possible to unambiguously detect signatures of non-thermal plasma conditions in the present study. A possible signature might arise from the analysis of the spectral line profiles. These profiles provide information regarding the velocity distribution of the ions, and it is reasonable to assume that this distribution is the same as the electron velocity distribution at these plasma densities and on the timescale of our observations. However, this analysis is quite involved and requires further investigation which is beyond the scope of this paper. In this work, we are interested in estimating how the possible presence of non-thermal electron distributions might affect our density diagnostics. We therefore provide a range of possible density and temperature diagnostics for the AR loop obtained by using the emissivity ratio method, assuming that $\kappa$-distributions with different $\kappa$ values were present. The results are presented in Fig. \ref{Fig:k_values} and Tab. \ref{tab:densityAR_kappa}, showing that the density and temperature diagnostics would indeed differ significantly from the values obtained in Sect. \ref{Sect:4.2}, where we assumed Maxwellian electron distributions.\\ We have shown that combining \oiv\ and \siv\ observations from the recent IRIS satellite provides a useful tool to measure the electron number density in a variety of plasma environments.
In particular, thanks to the very high spatial resolution of IRIS ($\approx$ 200 km), it is now possible to select small spatial elements, reducing the problem of observing emission from plasma regions with very different densities and temperatures, as pointed out in the past by, e.g., \cite{Doschek84}. \\ In this work, we have strongly emphasized the importance of including the \siv\ lines in IRIS observational studies. Simultaneous, high-cadence observations of several spectral lines formed in the transition region can be used as direct density and temperature diagnostics of the emitting plasma. These diagnostics provide crucial information which can be compared with the predictions of theoretical models of energetic events in the solar atmosphere.
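The effect of a $\kappa$-distribution on the suprathermal electron tail, which drives the shifts in ion formation temperature discussed above, can be illustrated numerically. The sketch below uses one common parameterization of the isotropic $\kappa$ energy distribution, $f(E)\propto\sqrt{E}\,[1+E/((\kappa-3/2)k_BT)]^{-(\kappa+1)}$, normalized by quadrature (an illustrative stand-in; the actual diagnostics in the paper use full atomic models):

```python
import numpy as np

def energy_pdf(E, kT, kappa=None):
    """Electron energy distribution on the grid E: Maxwellian when kappa
    is None, otherwise a kappa-distribution
    f(E) ~ sqrt(E) * (1 + E / ((kappa - 1.5) * kT))**-(kappa + 1).
    Normalized numerically on the grid (illustrative sketch)."""
    if kappa is None:
        f = np.sqrt(E) * np.exp(-E / kT)
    else:
        f = np.sqrt(E) * (1.0 + E / ((kappa - 1.5) * kT)) ** (-(kappa + 1.0))
    return f / (np.sum(f) * (E[1] - E[0]))

E = np.linspace(0.0, 400.0, 400001)      # energies in units of kT
dE = E[1] - E[0]
f_mx = energy_pdf(E, 1.0)                # Maxwellian
f_k2 = energy_pdf(E, 1.0, kappa=2.0)     # strongly non-thermal case

tail = E >= 10.0                         # electrons above 10 kT
frac_mx = np.sum(f_mx[tail]) * dE
frac_k2 = np.sum(f_k2[tail]) * dE        # far larger suprathermal fraction
```

For $\kappa=2$ the fraction of electrons above $10\,k_BT$ is enhanced by roughly two orders of magnitude over the Maxwellian value, which is the qualitative reason the ion formation temperatures shift downward.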
16
7
1607.05072
1607
1607.05591_arXiv.txt
Supernova (SN) neutrinos can excite the nuclei of various detector materials beyond their neutron emission thresholds through charged current (CC) and neutral current (NC) interactions. The emitted neutrons, if detected, can be a signal of the supernova event. Here we present the results of our study of SN neutrino detection through the neutron channel in $\Pb$ and $\Fe$ detectors for realistic neutrino fluxes and energies given by the recent Basel/Darmstadt simulations for an 18 solar mass progenitor SN at a distance of 10 kpc. We find that, in general, the numbers of neutrons emitted per kTon of detector material for the neutrino luminosities and average energies of the different neutrino species given by the Basel/Darmstadt simulations are significantly lower than those estimated in previous studies based on the results of earlier SN simulations. At the same time, we highlight the fact that, although the total number of neutrons produced per kTon in a $\Fe$ detector is more than an order of magnitude lower than that for $\Pb$, the dominance of the flavor-blind NC events in the case of $\Fe$, as opposed to the dominance of $\nue$-induced CC events in the case of $\Pb$, offers a complementarity between the two detector materials. Simultaneous detection of SN neutrinos in a $\Pb$ detector and a sufficiently large $\Fe$ detector suitably instrumented for neutron detection may therefore allow the fraction of the total $\mu$ and $\tau$ flavored neutrinos in the SN neutrino flux to be estimated, thereby probing the emission mechanism as well as flavor oscillation scenarios of the SN neutrinos.
\label{sec:intro} Detection of the neutrinos emitted during core collapse supernova (SN) explosion events is important for two reasons. Firstly, these neutrinos carry information about the core of the exploding star, from which no other particle or radiation can escape because of its very high density. Secondly, the properties of neutrinos, such as their mass hierarchy and flavor mixing, and their charged and neutral current interactions with matter inside the supernova, may leave imprints on the number of neutrinos detected and their temporal structure \cite{Raffelt:1999tx,duan-09}, thereby allowing those neutrino properties as well as the core collapse supernova explosion mechanism to be probed. For these reasons a number of detectors capable of detecting SN neutrinos have come into operation during the twenty-five years or so since the detection of neutrinos from supernova 1987A, located in the Large Magellanic Cloud at a distance of $\sim 50\kpc$~\cite{Kamioka-II,IMB,Baksan,Mont-Blanc}. For a recent review of the capabilities and detection methods of currently operating as well as near-future and proposed future SN neutrino detectors, see Ref.~\cite{Scholberg:2012id}. In this paper we study the possibility of detecting SN neutrinos with iron or lead as the detector material, through detection of the neutrons emitted from nuclei excited by the SN neutrinos. The use of such heavy-nuclei materials for detection of SN neutrinos through the neutron channel has been discussed by a number of authors in the past~\cite{SN-nu-heavy-nuclei-detectors,Kolbe-Langanke-01,Engel-etal-03}. In general, neutron-rich nuclei offer good sensitivity to $\nue$'s through the charged current (CC) process $\nue+n\to p+e^-$, in contrast to water Cherenkov or organic scintillator based detectors, which are primarily sensitive to $\anue$'s through the CC inverse beta decay process $\anue + p \to n + e^+$.
Further, the CC cross section for $\nue$ interactions with high-$Z$ nuclei receives a significant enhancement due to the Coulomb effect on the emitted electron, and correlated-nucleon effects also amplify the $\nu$-nucleus cross section relative to the $\nu$-nucleon cross section as a function of $A$. In particular, $\Pb$ --- being both a highly neutron-rich ($N=126$) and a high-$Z$ ($Z=82$) nucleus --- is considered a good material for detection of the $\nue$'s from a SN through the CC reaction $\nue + \Pb \to e^- + {\Bi}^*$, with the excited ${\Bi}^*$ nucleus ($N=125, Z=83$) subsequently decaying by emitting one or more neutrons. For a recent detailed study of the effectiveness of $\Pb$ as a SN neutrino detector material, done within the context of the currently operating HALO~\cite{HALO_detector} detector, see Ref.~\cite{Vaananen-Volpe-11}. A $\Pb$ detector would, of course, also be sensitive to all six $\nu$ and $\bar{\nu}$ species, including the $\numu\,,\anumu\,,\nutau\,,$ and $\anutau$ components, through the neutral current (NC) interaction $\nu (\anu) + \Pb \to \nu (\anu) + \Pb^*$, with the excited $\Pb^*$ nucleus subsequently decaying by emitting one or more neutrons. However, the $\nu\,$-$\Pb$ NC cross section in the SN neutrino energy range of interest is typically a factor of 20 or so smaller than the $\nue\,$-$\Pb$ CC cross section~\cite{Kolbe-Langanke-01}, and even assuming equal contributions from all six $\nu$ plus $\anu$ species, the total number of interactions would be expected to be dominated by the $\nue$ CC interactions; see, e.g., Ref.~\cite{Vaananen-Volpe-11}. Indeed, our calculations below show that the neutrons from NC interactions would comprise $\sim$ 20\% or less of all events in a $\Pb$ detector.
On the other hand, for a material with $N\approx Z$, such as $\Fe$ ($N=30, Z=26$), which is significantly less neutron rich than $\Pb$ and thus has a $\nue$ CC cross section more than an order of magnitude smaller than that of $\Pb$ in the relevant SN neutrino energy range, the flavor-blind $\nu\,$-$\Fe$ NC cross section is smaller than the corresponding $\nue\,$-$\Fe$ CC cross section only by a factor of $\sim$ 4--5. With the six species of $\nu$ plus $\anu$ contributing roughly equally, the total number of interactions in a $\Fe$ detector may be expected to be dominated by NC interactions. Indeed, this expectation is borne out by our calculations below, which show that $\sim$ 60\% or more of the total number of neutrons in a $\Fe$ detector would come from NC interactions, as compared to $\sim$ 20\% or less in a $\Pb$ detector. Thus, an appropriately large $\Fe$ detector can be a good NC detector for SN neutrinos. In this respect, in the absence of separate identification of the CC events, simultaneous detection of SN neutrinos in a $\Pb$ and a $\Fe$ detector, for example, can in principle provide an estimate of the fraction of the $\numu$, $\anumu$, $\nutau$ and $\anutau$ components in the total SN $\nu$ flux, thereby probing the emission as well as flavor oscillation scenarios of SN neutrinos. Motivated by the above considerations, in this paper we make a comparative study of the efficacies of the two materials, $\Fe$ and $\Pb$, as detector materials for SN neutrinos. In doing this, differing from previous studies, we use the results of the most recent state-of-the-art Basel/Darmstadt (B/D) simulations~\cite{Basel-Darmstadt-10} for the supernova neutrino fluxes and average energies, which typically yield more similar fluxes among the different neutrino flavors and lower average energies than earlier simulations (see, e.g., \cite{Totani-etal-98,Gava-etal-09}).
The B/D models are based on spherically symmetric general relativistic hydrodynamics including spectral three-flavor Boltzmann neutrino transport. These simulations are much more realistic than the earlier simulations based on simple leakage schemes~\cite{Totani-etal-98} without full Boltzmann neutrino transport. The lower average energies of the different neutrino species in the B/D simulations are related to the significantly larger neutrinosphere radii of the different neutrino species found in the new simulations as compared to the previous ones. Indeed, several recent investigations employing the full Boltzmann transport equation, and their successive upgrades (e.g., Ref.~\cite{Mueller-etal-12ab}), have also consistently shown colder neutrino fluxes than the earlier SN simulations. As a consequence, as we shall see below, our results for the number of neutrons emitted are, in general, significantly lower than those obtained previously. We note here in passing that recently an additional avenue for flavor-independent detection of all six species of SN neutrinos has opened up with the advent of very low threshold Dark Matter (DM) detectors, which are primarily designed to detect the Weakly Interacting Massive Particle (WIMP) candidates of DM through nuclear recoil events caused by the scattering of WIMPs off the nuclei of the chosen detector materials (see, for example, the recent review~\cite{DM-detection-review-16}). Because of their capability to detect very low ($\sim$ keV) energy nuclear recoils, such detectors would be sensitive to nuclear recoils caused by coherent elastic neutrino-nucleus scattering (CE$\nu$NS) of the relatively low energy ($\sim$ few MeV) SN neutrinos of all flavors~\cite{Horowitz-etal-03}, with the cross section for the process roughly proportional to $N^2$~\cite{Freedman-74-77}, where $N$ is the neutron number of the detector material.
These DM detectors, being also potential NC detectors of SN neutrinos of all flavors, may thus provide important information about the SN neutrino flux complementary to that derived from conventional (mostly CC) SN neutrino detectors. For recent studies on this topic, see, e.g., \cite{Sovan-PB-KK-14,Abe-etal-XMASSS-16,Lang-etal-16}. This, however, is beyond the scope of the present paper and will not be discussed further here. The rest of this paper is organized as follows: In Section \ref{sec:ccSN-nu} we briefly describe neutrino emission from core collapse supernovae and the basic results of the B/D simulations for a typical explosion of an $18\msun$ star. Section \ref{sec:xsecs-n-emission-etc} discusses the CC and NC cross sections for the interaction of neutrinos with lead and iron nuclei and the process of neutron emission from these nuclei. The number of neutrons emitted as a function of the neutrino energy is calculated by folding the one-, two- and three-neutron emission probabilities (as functions of the excitation energy of the nucleus under consideration) with the differential cross section for neutrino-induced excitation of the nucleus to different excitation energies. Section \ref{sec:results} gives the results for the number of neutrons emitted and presents a comparative study of lead and iron as detector materials. The paper ends with a summary and conclusions in Section \ref{sec:summary}.
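Schematically, the folding described above reduces, per species, to $N_n = N_T\,\Phi_{\rm tot}\int f(E_\nu)\,\sigma(E_\nu)\,\bar n(E_\nu)\,dE_\nu$, where $f$ is the unit-normalized spectrum, $\Phi_{\rm tot}$ the time-integrated number fluence, and $\bar n$ the mean neutron multiplicity. The sketch below sets this up with a zero-chemical-potential Fermi-Dirac spectrum (for which $\langle E\rangle \simeq 3.15\,T$) and a purely hypothetical effective cross section times multiplicity; the real calculation uses the tabulated $\nu$-Pb and $\nu$-Fe cross sections and the one-, two- and three-neutron emission probabilities:

```python
import numpy as np

MEV_ERG = 1.602e-6          # erg per MeV
KPC_CM = 3.086e21           # cm per kpc
N_AVOGADRO = 6.022e23

def fd_spectrum(E, E_avg):
    """Unit-normalized Fermi-Dirac spectrum f(E) ~ E^2 / (exp(E/T) + 1),
    with T = E_avg / 3.151 so that the mean energy equals E_avg."""
    T = E_avg / 3.1514
    f = E ** 2 / (np.exp(E / T) + 1.0)
    return f / (np.sum(f) * (E[1] - E[0]))

def fluence_per_species(E_tot_erg, E_avg_MeV, d_kpc):
    """Time-integrated number fluence [cm^-2] of one neutrino species."""
    d = d_kpc * KPC_CM
    return E_tot_erg / (4.0 * np.pi * d ** 2 * E_avg_MeV * MEV_ERG)

E = np.linspace(0.0, 100.0, 100001)           # neutrino energy grid [MeV]
dE = E[1] - E[0]
f = fd_spectrum(E, 12.0)                      # illustrative <E> = 12 MeV
phi = fluence_per_species(5.0e52, 12.0, 10.0) # ~5e52 erg/species, 10 kpc

# HYPOTHETICAL sigma(E) * nbar(E): quadratic rise above a 10 MeV
# threshold; NOT the real nu-Pb cross-section data
sigma_nbar = 1.0e-42 * np.clip(E - 10.0, 0.0, None) ** 2    # cm^2
n_targets = 1.0e9 / 208.0 * N_AVOGADRO        # Pb nuclei per kTon (1e9 g)
neutrons_per_kton = n_targets * phi * np.sum(f * sigma_nbar) * dE
```

With these illustrative inputs the fluence per species comes out at a few $\times 10^{11}$ cm$^{-2}$, the familiar order of magnitude for a Galactic SN at 10 kpc; colder spectra (lower $\langle E\rangle$) suppress the yield because the cross section rises steeply with energy.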
\label{sec:summary} We have presented the results of our study of the possibility of detecting SN neutrinos with $\Fe$ and $\Pb$ detectors through detection of the neutrons emitted by the excited nuclei resulting from the interaction of SN neutrinos with the nuclei of these detector materials. In doing this, we have used the results of the most recent state-of-the-art Basel/Darmstadt simulations~\cite{Basel-Darmstadt-10} for the supernova neutrino fluxes and average energies, which typically yield more similar fluxes among the different neutrino flavors and lower average energies than earlier simulations~\cite{Totani-etal-98,Gava-etal-09}. Specifically, we have used the Basel/Darmstadt simulation results for an $18\msun$ progenitor SN at a distance of 10 kpc. Our results for the numbers of neutron events per kTon of detector material are found to be significantly lower than those estimated in previous studies, which were based on earlier simulations of SN neutrino emission. It will be of interest to study the implications of this result for the possibility of effectively distinguishing between the neutrino mass hierarchies using SN neutrino detection (see, e.g., Refs.~\cite{Vale-14,Vale-etal-16}). We also observe that, while $\Pb$ would be a better detector material than $\Fe$ in terms of the total number of neutrons produced per kTon of detector mass, $\sim$ 80\% or more of the neutrons produced in $\Pb$ arise from CC interactions of $\nue$, whereas the neutrons produced in $\Fe$ are dominated by those produced through NC interactions of all six $\nu$ plus $\anu$ species. Thus, a sufficiently large $\Fe$ detector --- large enough to compensate for the overall smaller $\nu\,$-$\Fe$ cross sections compared to the $\nu\,$-$\Pb$ cross sections --- can be a good NC detector for SN neutrinos.
For example, the proposed 50 kTon iron calorimeter (ICAL)~\cite{ICAL-15} detector, though primarily designed for studying neutrino properties using the relatively higher energy (multi-GeV) atmospheric neutrinos, can also be a good NC detector of SN neutrinos if it can be appropriately instrumented with suitable neutron detectors. Thus, simultaneous detection of SN neutrinos in a $\Pb$ and a $\Fe$ detector can, in principle, provide an estimate of the relative fractions of $\nue$ and the other five neutrino species in the SN neutrino flux, which would be a good probe of SN neutrino production as well as of flavor oscillation scenarios. It will be interesting to carry out a more detailed analysis involving the relevant statistical and systematic uncertainties --- the latter including those due to the (currently somewhat large) uncertainties in the neutrino cross sections on lead and iron --- to derive estimates of the minimum lead and iron detector sizes that would allow extraction of statistically significant information on the $\numu$ and $\nutau$ components of the SN neutrino flux. \noindent{\bf Acknowledgment:} One of us (SC) thanks Tobias Fischer for providing the numerical data for the temporal profiles of the luminosities and average energies of neutrinos of different flavors for the Basel/Darmstadt SN simulations used in this paper. SC also acknowledges partial support by the Deutsche Forschungsgemeinschaft through Grant No.~EXC 153 (``Excellence Cluster Universe") and by the European Union through the ``Initial Training Network Invisibles," Grant No.~PITN-GA-2011-289442.
arXiv:1607.05591 (July 2016)

arXiv:1607.02187
We derive constraints on the dark matter (DM) annihilation cross section and decay lifetime from cross-correlation analyses of data from Fermi-LAT and weak lensing surveys that cover a wide area of $\sim660$ square degrees in total. We improve upon our previous analyses by using updated extragalactic $\gamma$-ray background data reprocessed with the Fermi Pass 8 pipeline, and by using well-calibrated shape measurements of about twelve million galaxies in the Canada-France-Hawaii Lensing Survey (CFHTLenS) and the Red-Cluster-Sequence Lensing Survey (RCSLenS). We generate a large set of full-sky mock catalogs from cosmological $N$-body simulations and use them to estimate statistical errors accurately. The measured cross correlation is consistent with a null detection, which we then use to place strong cosmological constraints on annihilating and decaying DM. For leptophilic DM, the constraints are improved by a factor of $\sim100$ in the mass range of $O(1)$ TeV when contributions from secondary $\gamma$ rays due to the inverse-Compton upscattering of background photons are included. Annihilation cross sections of $\langle \sigma v \rangle \sim 10^{-23}\, {\rm cm}^3/{\rm s}$ are excluded for TeV-scale DM, depending on the channel. Lifetimes of $\sim 10^{25}$ sec are also excluded for decaying TeV-scale DM. Finally, we apply this analysis to wino DM and exclude wino masses around 200 GeV. These constraints will be further tightened, and all of the interesting wino DM parameter region can be tested, with data from future wide-field cosmology surveys.
An array of astronomical observations over a wide range of redshifts and length scales consistently supports the existence of cosmic dark matter (DM). Recent observations include the statistical analysis of cosmic microwave background (CMB) anisotropies (e.g., Refs.~\cite{Hinshaw:2012aka, Ade:2013zuv}), the spatial clustering of galaxies (e.g., Ref.~\cite{Eisenstein:2005su}), galaxy rotation curves (e.g., Ref.~\cite{Persic:1995ru}), and direct mapping of the matter distribution through gravitational lensing (e.g., Ref.~\cite{Clowe:2006eq}). Gravitational lensing is a direct and highly promising probe of the matter density distribution in the Universe. A foreground gravitational field causes small distortions of the images of distant background galaxies. These small distortions collectively contain rich information on the foreground matter distribution and its growth over cosmic time. In the past decades, the coherent lensing effect between galaxy pairs with angular separations of $\sim$1 degree has been successfully detected in wide-area surveys (e.g., Refs.~\cite{Bacon:2000sy, Kilbinger:2012qz, Becker:2015ilr}). Most importantly, the large angular scale signals, called cosmic shear, probe the matter distribution in an {\it unbiased} manner. However, cosmic shear alone does not provide, by definition, any information on possible electromagnetic signatures from DM, and thus it cannot be used to probe particle properties of DM such as the annihilation cross section and decay lifetime. The extragalactic $\gamma$-ray background is thought to be a potential probe of DM, if DM annihilates or decays to produce high-energy photons. Weakly interacting massive particles (WIMPs) are promising DM candidates that can naturally explain the observed abundance of cosmic DM if the WIMP mass ranges from 10 GeV to 10 TeV and the self-annihilation cross section is around the weak-interaction scale \cite{1996PhR...267..195J}.
The DM decay lifetime remains largely unknown, and, in fact, there is {\it no} strong cosmological or astrophysical evidence for absolutely stable DM; the possibility of very long-lived particles with a lifetime longer than the age of the Universe of 13.8 Gyr remains viable. DM annihilation or decay produces a variety of cascade products and thus leaves characteristic imprints in, for example, the cosmic $\gamma$-ray background. The isotropic $\gamma$-ray background (IGRB) is a promising target in searches for DM annihilation or decay~\cite{Funk:2015ena}. Although the mean IGRB intensity can be explained by (extrapolating) unresolved astrophysical sources (e.g., \cite{2015ApJ...800L..27A}), substantial uncertainties remain and thus there is room for contributions from other unknown sources. The anisotropies in the diffuse $\gamma$-ray background should in principle contain rich information about DM contributions at small and large length scales (e.g., see Ref.~\cite{Fornasa:2015qua} for a review). It has been proposed that the cross-correlation of the IGRB with large-scale structure provides a novel probe of the microscopic properties of DM \cite{Camera:2012cj, 2014FrP.....2....6F, 2014PhRvD..90b3514A, Ando:2014aoa, 2015JCAP...06..029C}. Positive correlations with actual galaxy survey data \cite{2015ApJS..217...15X} have been reported, and implications for the nature of DM have been discussed \cite{2015PhRvL.114x1301R, Cuoco:2015rfa}. In this paper, we search for indirect DM signals through the cross-correlation of the IGRB and cosmic shear. We improve the cross-correlation measurement over our previous analysis \cite{Shirasaki:2014noa} by using the latest $\gamma$-ray data taken by the Fermi-LAT and two publicly available galaxy catalogs, the Canada-France-Hawaii Lensing Survey (CFHTLenS) and the Red-Cluster-Sequence Lensing Survey (RCSLenS), which provide precise galaxy shape measurements.
We apply a set of Galactic $\gamma$-ray emission models to characterize the foreground emission from our own Galaxy, and also utilize full-sky simulations of cosmic shear to construct realistic mock galaxy catalogs specifically for CFHTLenS and RCSLenS. In order to make the best use of the cross-correlation signals over a wide range of angular separations, we calculate the statistical uncertainties associated with the intrinsic galaxy shapes, the Poisson photon noise, and the sample variance of cosmic shear. To this end, we make use of our large mock catalogs in a manner closely following the actual observations. The methodology presented in this paper is readily applicable to cross-correlation analyses of the IGRB and cosmic shear with ongoing and future surveys, such as the Hyper Suprime-Cam, the Dark Energy Survey, the Large Synoptic Survey Telescope, and the Cherenkov Telescope Array. The rest of the paper is organized as follows. In Section~\ref{sec:DM}, we summarize the basics of the two observables of interest: the IGRB and cosmic shear. We also present a theoretical model of the cross-correlation of the IGRB and cosmic shear in annihilating or decaying DM scenarios. In Section~\ref{sec:data}, we describe the $\gamma$-ray data and the galaxy imaging surveys used for shape measurement. The details of the cross-correlation analysis are provided in Section~\ref{sec:cross}. In Section~\ref{sec:res}, we show the results of our cross-correlation analysis and derive constraints on particle DM. Concluding remarks and discussions are given in Section~\ref{sec:con}. Throughout the paper, we adopt the standard $\Lambda$CDM model with the following parameters: matter density $\Omega_{\rm m0}=0.279$, dark energy density $\Omega_{\Lambda}=0.721$, density fluctuation amplitude $\sigma_{8}=0.823$, dark energy equation-of-state parameter $w_{0} = -1$, Hubble parameter $h=0.700$, and scalar spectral index $n_s=0.972$.
These parameters are consistent with the WMAP nine-year results \citep{Hinshaw:2012aka}.
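The adopted background cosmology enters the predicted cross-correlation through distances along the line of sight. As a minimal illustration (the function names and the example redshift are ours, not part of the papers' analysis pipelines), the dimensionless Hubble rate and the comoving distance for these parameters can be evaluated as:

```python
import numpy as np
from scipy.integrate import quad

# Adopted flat Lambda-CDM parameters from the text (w0 = -1, so the
# dark energy term is constant).
OMEGA_M0, OMEGA_L, H0 = 0.279, 0.721, 70.0   # H0 in km/s/Mpc (h = 0.700)
C_LIGHT = 299792.458                          # speed of light, km/s

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for the adopted parameters."""
    return np.sqrt(OMEGA_M0 * (1.0 + z) ** 3 + OMEGA_L)

def comoving_distance(z):
    """Line-of-sight comoving distance in Mpc, as enters the lensing kernel."""
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return C_LIGHT / H0 * integral

print(round(float(E(0.0)), 6))   # -> 1.0 by construction (flatness)
print(comoving_distance(0.5))    # roughly 1900 Mpc for these parameters
```

The same kernel, weighted by the source redshift distribution, is what connects the shear maps to the $\gamma$-ray emission in the theoretical model.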
arXiv:1607.02187 (July 2016)

arXiv:1607.02839
Nonlinear evolution of magnetic reconnection is investigated by means of magnetohydrodynamic simulations including uniform resistivity, uniform viscosity, and anisotropic thermal conduction. When viscosity exceeds resistivity (the magnetic Prandtl number $Pr_m > 1$), viscous dissipation dominates the outflow dynamics and leads to a decrease in the plasma density inside the current sheet. The low-density current sheet supports the excitation of a vortex. The vortex layer is thicker than the current layer for $Pr_m > 1$. The broader vortex flow more efficiently carries the upstream magnetic flux toward the reconnection region, and consequently boosts the reconnection. The reconnection rate increases with viscosity provided that thermal conduction is fast enough to carry away the thermal energy generated by the viscous dissipation (the fluid Prandtl number $Pr < 1$). These results suggest that the Prandtl numbers, which are ignored in the conventional resistive model, need to be taken into account when modeling reconnection.
\label{sec:introduction} Magnetic reconnection is one of the most fundamental processes in plasma physics, in which stored magnetic energy is rapidly released and converted into kinetic and internal energies through a change of magnetic field topology. It is widely believed to play a major role in explosive phenomena such as magnetospheric substorms and stellar flares. Reconnection intrinsically contains a hierarchical structure ranging from the fully kinetic scale to the magnetohydrodynamic (MHD) scale. In order to identify the essential physics necessary to model reconnection, \citet{2001JGR...106.3715B} conducted numerical simulations with a variety of codes, from kinetic codes to conventional resistive MHD codes. They showed that only the MHD simulation with uniform resistivity fails to trigger fast reconnection, indicating that resistive MHD is insufficient to model it. The role of resistive dissipation in reconnection has been extensively investigated in the framework of MHD. The classical Sweet-Parker model predicts a reconnection rate proportional to the square root of resistivity, which is too slow to account for observed phenomena. Subsequent studies have demonstrated that the Sweet-Parker-type current sheet undergoes secondary instabilities for sufficiently small resistivity (large Lundquist number) \citep{1986PhFl...29.1520B,2005PhRvL..95w5003L,2007PhPl...14j0703L,2009PhRvL.103j5004S}. The resulting reconnection rate appears to be independent of resistivity \citep{2008PhRvL.100w5001L,2009PhPl...16k2102B,2012PhPl...19d2303L,2015PhPl...22j0706S}. Meanwhile, the impact of other dissipation processes should be discussed. We focus on viscosity and heat transfer. Viscosity might support the nonlinear evolution of reconnection \citep{2009PhPl...16f0701B}, whereas it suppresses the linear growth \citep{1987PhFl...30.1734P}.
The ratio of kinematic viscosity to resistivity is defined as the magnetic Prandtl number, which relates the dissipation scale of the vortex to that of the current. Resistive MHD assumes this number to be zero, meaning that the vortex scale is negligibly small compared with the current scale. However, this is not necessarily true in actual plasma environments \citep{2015ApJ...801..145T}; the number can be much larger than unity in the classical Spitzer model for hot tenuous plasmas \citep{1962pfig.book.....S}. Numerical simulations have demonstrated that it affects the nonlinear evolution of MHD phenomena such as small-scale turbulence and dynamo action \citep{2004ApJ...612..276S,2007MNRAS.378.1471L,2014ApJ...791...12B,2015ApJ...808..54M}. It may also affect reconnection, in which small-scale dissipation processes eventually result in large-scale evolution. In kinetic reconnection, composed of collisionless ions and electrons, the reconnection region has a two-scale structure of a broad ion diffusion region with a narrow electron diffusion region embedded in it \citep{1998JGR...103.9165S,2001JGR...106.3721H}. This structure may be measured as broad vortex and narrow current layers from the viewpoint of MHD, because the momentum and the current are predominantly sustained by the ions and the electrons, respectively. Heat transfer is also associated with reconnection. High-energy particles are produced in the vicinity of a reconnection site and stream along magnetic field lines during collisionless reconnection \citep{2001JGR...10625979H,2006JGRA..111.9216F,2007JGRA..112.3202I}. In solar flares, thermal conduction is effective along magnetic field lines and may affect the evolution of collisional reconnection \citep{1997ApJ...474L..61Y,1999ApJ...513..516C,2012ApJ...758...20N}. Including heat transfer increases the compressibility, which can enhance reconnection \citep{2011PhPl...18d2104H,2011PhPl...18k1202B,2012PhPl...19h2109B}.
The fluid Prandtl number, the ratio of kinematic viscosity to thermal diffusivity, is around $10^{-3}$ in the Spitzer model. Therefore, actual plasmas can satisfy the following inequality for the timescales of the three diffusion processes: $\tau_{\rm heat} < \tau_{\rm viscous} < \tau_{\rm resistive}$. In order to ascertain the effect of viscosity and heat transfer on the nonlinear evolution of reconnection, we conduct two-dimensional MHD simulations including viscous dissipation and anisotropic thermal conduction as well as resistive dissipation.
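The timescale ordering quoted above can be illustrated with simple diffusive estimates $\tau = L^2/D$; the coefficient values below are arbitrary placeholders chosen only to satisfy $Pr_m > 1$ and $Pr < 1$, not the actual simulation inputs:

```python
# Illustrative check of the diffusion-timescale ordering quoted above,
# using tau = L^2 / D for each process. The coefficient values are
# arbitrary placeholders (not the simulation inputs), chosen only to
# give Pr_m > 1 and Pr < 1 as in the Spitzer estimates.
eta, nu, alpha = 1.0e-4, 1.0e-3, 1.0   # resistivity, viscosity, conductivity
L = 1.0                                 # characteristic length scale

Pr_m = nu / eta     # magnetic Prandtl number (> 1 here)
Pr = nu / alpha     # fluid Prandtl number    (< 1 here)

tau_res, tau_vis, tau_heat = L**2 / eta, L**2 / nu, L**2 / alpha
assert tau_heat < tau_vis < tau_res     # the ordering stated in the text
print(Pr_m, Pr)
```

Any combination of coefficients with $\eta < \nu < \alpha$ reproduces the same ordering, since the three timescales share the same $L^2$ factor.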
\label{sec:discussion} Based on the visco-resistive MHD simulation coupled with anisotropic thermal conduction, we propose the viscosity-dominated reconnection model in Figure \ref{fig:schematic_rep}. Viscosity and thermal conduction can be a key to boosting reconnection in the MHD regime. The reconnection rate is found to increase with viscosity within the explored range, provided that thermal conduction is fast enough. However, the reconnection rate in the present model is still slower than that in kinetic models \citep{2011PhPl...18l2108Z,2015CoPhC.187..137M}. The Hall-MHD model is thought to be a minimal model for fast reconnection \citep{2001JGR...106.3737B}. \citet{1983GeoRL..10..475T} carried out a linear analysis of the resistive tearing instability including the Hall effect, and argued that the vortex layer is thicker than the current layer and that the broad vortex flow is expected to enhance the reconnection. One of the differences between the two models is the presence or absence of dissipation. The Hall effect is a purely dispersive mode without dissipation. It should more efficiently convert magnetic energy (current) into kinetic energy (vortex) than the viscosity-dominated reconnection. The inflow speed is characterized by the whistler speed rather than the {\Alfven} speed in the kinetic regime, $u_{\rm in} \sim V_{\rm A,in} d_{\rm i}/L$, where $d_{\rm i}$ is the ion inertial length \citep{1995PhRvL..75.3850B,1998GeoRL..25.3759S,1999GeoRL..26.2163S}. {If we assume $u_{\rm out} \sim V_{\rm A,in}$ and relate the ratio $V_{\rm A,in}/u_{\rm in}$ to the aspect ratio of the reconnection region, the viscosity-dominated model (eq. (\ref{eq:16})) gives the effective viscosity for the kinetic reconnection as $\nu_{\rm eff} \sim u_{\rm in}L (u_{\rm in} / V_{\rm A, in}) \sim V_{\rm A,in}d_{\rm i} (d_{\rm i}/L)$.
It also gives the effective resistivity as $\eta_{\rm eff} \sim u_{\rm in} \delta \sim V_{\rm A,in}d_{\rm i} (\delta/L)$.} The effective magnetic Prandtl number $Pr_{m,{\rm eff}}\sim d_{\rm i}/\delta$ may be larger than unity in kinetic reconnection, in which the thickness of the current sheet becomes thinner than the ion inertial length (down to the electron inertial length). {The transition from slow resistive MHD to fast Hall-MHD reconnection can be observed when the current sheet thickness falls below the ion inertial length \citep{2010PhRvL.105a5004S}.} A density decreasing from the upstream region toward the current sheet is not a situation observed only in the viscosity-dominated reconnection. It has been extensively studied that ad hoc localized resistivity triggers fast reconnection in the MHD regime, the so-called Petschek-type reconnection. A decreasing density distribution is observed in the Petschek-type reconnection due to fast magnetosonic rarefaction waves emanating from the reconnection site \citep{2001ApJ...549.1160Y}. The rarefaction wave drives the upstream plasma toward the reconnection site since $\nabla \cdot \vect{u} \sim -(u_y/\rho) \partial \rho/\partial y > 0$. The dilatation in the upstream region is also seen in the viscosity-dominated reconnection. Furthermore, thermal conduction facilitates the Petschek-type reconnection \citep{1999ApJ...513..516C}. Observations of solar flares support these theoretical models \citep{1996ApJ...456..840T}. Currently, it remains unclear whether the viscosity-dominated reconnection is controlled by the diffusion coefficients $(\eta,\nu,\alpha)$ or by their ratios $(Pr_m,Pr)$ (or both), because we have fixed the resistivity value. Subsequent work will investigate the dependence on resistivity (Lundquist number), which is a key parameter for classifying the reconnection dynamics \citep{2011PhPl...18k1207J}.
We may anticipate that the Prandtl numbers control the dynamics to some extent, because $Pr_m > 1$ leads to the two-scale structure of the reconnection region and $Pr < 1$ is required to sustain the boost. This implies that one cannot ignore viscosity and thermal conduction whenever they exceed resistivity, even if their absolute values are small. The conditions $Pr_m \gg 1$ and $Pr \ll 1$ can be expected in actual plasma environments such as the solar atmosphere and the interstellar medium \citep{2015ApJ...801..145T}. This result indicates the importance of viscosity and heat transfer for reconnection, in contrast to the conventional resistive MHD model in which the Prandtl numbers are not explicitly defined.
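The effective-coefficient scalings discussed above can be illustrated numerically; all input values here are arbitrary examples (not measured quantities), chosen only so that the current-sheet thickness $\delta$ is smaller than the ion inertial length $d_{\rm i}$:

```python
# Numerical illustration of the effective-coefficient scalings above.
# All input values are arbitrary examples (not measured quantities),
# chosen only so that delta < d_i, as expected in kinetic reconnection.
def effective_coefficients(V_A_in, d_i, delta, L):
    """nu_eff ~ V_A,in d_i (d_i/L) and eta_eff ~ V_A,in d_i (delta/L)."""
    nu_eff = V_A_in * d_i * (d_i / L)
    eta_eff = V_A_in * d_i * (delta / L)
    return nu_eff, eta_eff, nu_eff / eta_eff   # the ratio equals d_i/delta

nu_eff, eta_eff, Pr_m_eff = effective_coefficients(
    V_A_in=1.0, d_i=0.1, delta=0.02, L=10.0)
print(round(Pr_m_eff, 6))   # -> 5.0, i.e. d_i/delta > 1
```

Note that the common factor $V_{\rm A,in} d_{\rm i}$ cancels in the ratio, so $Pr_{m,{\rm eff}} \sim d_{\rm i}/\delta$ regardless of the inflow speed or system size.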
arXiv:1607.02839 (July 2016)

arXiv:1607.05905
{} {Observations of the solar atmosphere have shown that magnetohydrodynamic waves are ubiquitous throughout it. Improvements in instrumentation and in the techniques used for measuring the waves now enable subtleties of competing theoretical models to be confronted with the observed wave behaviour. Some studies have already begun to undertake this process. However, the techniques employed for model comparison have generally been unsuitable and can lead to erroneous conclusions about the best model. The aim here is to introduce some robust statistical techniques for model comparison to the solar waves community, drawing on experience from other areas of astrophysics. In the process, we also aim to investigate the physics of coronal loop oscillations.} {The methodology exploits least-squares fitting to compare models to observational data. We demonstrate that the residuals between the model and the observations contain significant information about the ability of the model to describe the observations, and show how they can be assessed using various statistical tests. In particular, we discuss the Kolmogorov-Smirnov one- and two-sample tests, as well as the runs test. We also highlight the importance of including any observational trend line in the model-fitting process.} {To demonstrate the methodology, an observation of a coronal loop undergoing a standing kink oscillation is used. The model comparison techniques provide evidence that a Gaussian damping profile gives a better description of the observed wave attenuation than the often-used exponential profile. This supports previous analysis from \cite{PASetal2016}. Further, we use the model comparison to provide evidence of time-dependent wave properties of a kink oscillation, attributing the behaviour to the thermodynamic evolution of the local plasma.} {}
The Sun's atmosphere is known to be replete with magnetohydrodynamic (MHD) wave phenomena, and current instrumentation has enabled the measurement of the waves and their properties. In particular, the transverse kink motions of magnetic structures in the chromosphere and corona have received a great deal of attention, mainly due to their suitability for energy transfer and their ability to provide diagnostics of the local plasma environment via coronal or solar magnetoseismology. Current instrumentation has demonstrated the capabilities required to accurately measure the motions of the fine-scale magnetic structure, with high spatial and temporal resolution and high signal-to-noise levels. Further, the almost continuous coverage of the {Solar Dynamics Observatory} (SDO), supplemented with numerous ground-based data sets and {Hinode} data, has led to a large catalogue of wave events. The result of these fortuitous circumstances is that there is plenty of high-quality data on kink waves, which can be used for probing the physics of the waves and the local plasma. However, these resources have yet to be exploited effectively, with only simple modelling of individual observed wave events. By modelling, we refer to using the observational data to test theoretical ideas of wave behaviour by fitting an expected model. In a recent study by \cite{PASetal2016}, the authors attempt to exploit the catalogue of events to do just this. Observations of standing kink waves are utilised to test different models of the damping profile of the oscillatory motion. The authors attempt to determine whether the observed oscillatory motion is best described by an exponential or a Gaussian damping profile, after recent analytical studies of the resonant absorption of kink modes suggested that a Gaussian profile is more suitable (\citealp{HOOetal2013}; \citealp{PASetal2013}).
However, the method used by the authors to obtain the time-series data for model fitting and the technique used for model comparison are unfortunately inadequate. The technique used by the authors ignores known problems with, and associated uncertainties of, the $\chi^2$ statistic. As such, there is the potential to end up with erroneous values for model parameters, as well as underestimated associated uncertainties. More importantly, this methodology is not suitable for distinguishing between the two non-linear models. { The study of \cite{PASetal2016} is not the first to fit non-exponential profiles to damped kink oscillations. \cite{DEMetal2002b} measured the decay in amplitude of a kink oscillation from wavelet analysis of a displacement time-series. A least-squares fit of a function of the form $\exp\left(-\epsilon t^n\right)$ was performed for three cases, finding values of $n=\{1.79, 2.83, 0.42\}$. While the fits visually appear to describe the data well, the uncertainties are not displayed. The amplitude spectrum obtained from Fourier techniques is not a consistent statistical estimator of the true amplitude spectrum of the oscillatory signal, largely because of the noise present in the signal, but also because the choice of windowing functions and mother wavelets can cause problems owing to, for example, spectral leakage. A discussion of the significance of Fourier-based methods would take us away from the central theme of this manuscript, so we do not expand on it any further.} {\cite{VERetal2004} also investigate whether the damping profile of kink waves observed in a coronal loop arcade is exponential. A trend is subtracted from the displacement time-series before a weighted non-linear least-squares fit of damped sinusoids is performed. The damping term has the same form as in \cite{DEMetal2002b}, but the values of $n$ were fixed, namely $n=1,2,3$.
The authors state that they cannot distinguish between the different damping profiles from the fits, although it is not evident how this comparison was made. }\\ In the following, we outline a methodology for comparing models to observations, providing statistical estimators that can be used to assess the suitability of different models. The pitfalls of the methodology used in previous studies that perform least-squares fitting are highlighted. We demonstrate that, for a particular example of an oscillating coronal loop, the evidence suggests that the Gaussian damping is the better profile to explain the observed attenuation. Further, the model comparison also reveals that the oscillatory signal contains signatures of the dynamic evolution of the background plasma, signified by time-dependent behaviour of the wave period. \begin{figure*}[!tp] \centering \includegraphics[scale=0.9, clip=true, viewport=0.cm 0.cm 21.cm 7.5cm]{aa28613_fig1.eps} \caption{Transverse displacement of a quiescent chromospheric fibril. The time-series of 141 data points represents the location of the centre of the fibril's cross-sectional flux profile, with the uncertainties in the location given by the error bars (left panel). The overplotted red dashed line shows the best fit of the model. The distribution of the $\chi^2$ statistic for 500 realisations of the random noise in the time-series (right panel). }\label{fig:ts_chrom} \end{figure*}
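A minimal sketch of the kind of least-squares model comparison advocated here, on synthetic data: the two envelope forms follow the text, but the parameter values, noise level, and the use of scipy are illustrative assumptions rather than the actual analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

# Competing damping envelopes for a kink oscillation (parameter names ours).
def exp_damped(t, A, tau, P, phi):
    return A * np.exp(-t / tau) * np.cos(2 * np.pi * t / P + phi)

def gauss_damped(t, A, tau, P, phi):
    return A * np.exp(-t**2 / (2 * tau**2)) * np.cos(2 * np.pi * t / P + phi)

# Synthetic "observed" displacement: Gaussian damping plus Gaussian noise.
t = np.linspace(0.0, 1500.0, 150)   # time in seconds (illustrative cadence)
err = 30.0                           # per-point uncertainty in km
y = gauss_damped(t, 1000.0, 600.0, 280.0, 0.0) + rng.normal(0.0, err, t.size)

chi2 = {}
for name, model in [("exp", exp_damped), ("gauss", gauss_damped)]:
    popt, _ = curve_fit(model, t, y, p0=[1000.0, 600.0, 280.0, 0.0],
                        sigma=np.full(t.size, err))
    chi2[name] = np.sum(((y - model(t, *popt)) / err) ** 2)

# With 150 points and 4 parameters, a good model should give chi^2
# close to the 146 degrees of freedom.
print(chi2["gauss"] < chi2["exp"])   # Gaussian envelope fits these data better
```

Comparing the two single $\chi^2$ values is only the first step; as argued in the text, the distribution of the statistic over noise realisations and the structure of the residuals carry the real discriminating power.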
The observation of wave phenomena throughout the solar atmosphere now occurs with regularity, and techniques have been developed to provide high-quality measurements of the observed wave properties. The quality of these measurements can be utilised to probe the local plasma of the wave guide through seismology (e.g., \citealp{MOR2014}, \citealp{WANetal2015}). Significantly, this enables models of oscillatory phenomena to be tested against observed cases, with the quality of available data enabling us to move beyond very basic models, e.g., a simple sinusoid, and probe increasingly complex physics. However, in order to decide whether the chosen models describe the observed behaviour well, robust techniques for model comparison are required. These techniques give an indication of the probability that the observed behaviour would arise, assuming the chosen model is correct. A range of such techniques has already been developed and is used widely in astrophysics, and in many other areas. Here, we demonstrate the benefit of using these techniques for the modelling of oscillations of fine-scale magnetic structure in the corona. The unsuitability of single-value $\chi^2$ and $\chi^2_\nu$ statistics for comparing models is highlighted, as discussed in \cite{ANDetal2010}, and we demonstrate how greater confidence in the comparison of competing models can be achieved by using robust statistical techniques. In particular, the techniques were applied to the kink oscillations of a coronal loop. It was found that the amplitude profile of the kink wave was best described by a Gaussian damping profile rather than an exponential damping profile. This result had already been suggested in \cite{PASetal2016}, although the methodology used in that study to reach this conclusion is based upon comparison of single values of $\chi^2_\nu$, and as such, no confidence intervals were associated with the conclusions.
Moreover, in a number of cases, \cite{PASetal2016} suggest the exponential profile is a better fit to the data than a Gaussian profile. These cases could potentially be false results, due to either: (i) the subtraction of a spline profile that distorts the true amplitude envelope; (ii) reliance upon a single value of $\chi^2$ (or, more precisely, $\chi_\nu^2$); or (iii) not including the trend line in the assessment of uncertainties.\\ { Similar comments can be made regarding previous results that use trend subtraction and then try to estimate other parameters. For example, \cite{VANetal2007} aim to measure multiple harmonics of a kink oscillation associated with a coronal loop. The displacement time-series is trend subtracted and fit with a single-frequency sinusoid. The residuals between the trend-subtracted data and the single-frequency model are then analysed by an additional least-squares fit of a secondary sinusoidal component. We suggest that the uncertainties associated with the parameters from the fitting of these residuals would be significantly greater than those given by the authors. The subtraction of the trend alone is seen to underestimate the uncertainties (Section 3.1), but an additional fit to the residuals will only exacerbate this effect on the parameter estimates of the secondary sinusoidal model. To ensure statistical significance, we suggest that a single model incorporating all the physics should be fit to the original time-series. } {It is then unclear whether the secondary harmonic found in \cite{VANetal2007} is statistically significant. The lack of error bars on the data and residual plots also does not enable a visual assessment of the situation. Applying both the runs test and the KS test would help elucidate whether any additional structure was contained within the residuals above the uncertainties.
}\\ Further, we extended the Gaussian damping model to include the effects of time-dependence, namely through a simple modification that permits the period to vary as a function of time. Such a model naturally arises when the background plasma that supports the oscillation is subject to thermodynamic evolution, e.g., heating or cooling (cf. \citealp{MORERD2009b}, \citealp{RUD2010,RUD2011}). We note that the amplitude of the oscillation is also influenced by the time-dependent background plasma, indicating that the amplitude envelope described by the Gaussian damping contains the influence of both the dynamics and the attenuation by, e.g., resonant absorption. The analysis performed here suggests that the time-dependent model describes the data better than the static model. We believe this is evidence of the dynamic coronal loop plasma influencing the properties of the oscillation. {A change in periodicity over the observed displacement time-series has previously been reported by \cite{DEMetal2002b} and \cite{WHIetal2013}.}\\ Further work is needed to untangle the information that the fitted model provides about the plasma evolution and the attenuation due to resonant absorption, and should be complemented with analysis of the plasma and magnetic field, e.g., differential emission measure. Moreover, while the time-dependent model performs much better than the time-independent model, it is still clear that something is missing from the current analysis (e.g., as demonstrated by the non-random residuals in Figure~\ref{fig:resid} and the KS statistic in Figure~\ref{fig:ks_tdp}). This could potentially be due to underestimates of the uncertainty, which would imply that our model describes all the physics occurring during this event. However, we believe it is more likely that some physics in the data is not captured by the model, and this is supported by the lack of randomness in the residuals (Figure~\ref{fig:resid}). It is unknown what this may be at present.
One possibility is that the data contain the signature of higher harmonics of the kink wave, which could also have been excited along with the fundamental mode (e.g., \citealp{VERetal2004}; \citealp{VANetal2007}; \citealp{DEMBRA2007}; \citealp{OSHetal2007}; \citealp{VERERDJES2008}). In order to find answers to some of these questions, an extended study will be required. \begin{figure}[!tp] \centering \includegraphics[scale=0.53, clip=true, viewport=0.5cm 0.0cm 17.cm 11.2cm]{aa28613_fig9.eps} \caption{Distribution of normalised residuals for all the fitted models. The black line shows a normal distribution. The legend for the different distributions is as follows: quartic Gaussian (purple); quartic exponential (yellow); cubic Gaussian (green); cubic exponential (blue); time-dependent (red). }\label{fig:norm_resid} \end{figure} \begin{table*} \caption{Parameter estimates and statistics from the model comparison.} \centering \begin{tabular}{lccccc} \hline\hline Model & $\chi^2$ & Mean KS & Amplitude (km) & Period (s) & $\tau$ (s) \\[0.2ex] & & test p-value & & & \\ \hline \\ Cubic Exponential & 488$\pm$43 & $4\times10^{-4}$ & 1183$\pm$30 & 284$\pm$1 & 811$\pm$34\\ Cubic Gaussian & 369$\pm$36 & 0.003 & 1044$\pm$22 & 282$\pm$1 & 572$\pm$25\\ Quartic Exponential & 371$\pm$36 & 0.005 & 1236$\pm$32 & 281$\pm$1 & 795$\pm$38\\ Quartic Gaussian & 310$\pm$37 & 0.01 & 1070$\pm$48 & 280$\pm$1 & 574$\pm$15\\ Time-dependent & 207$\pm$31 & 0.1 & 1061$\pm$30 & 247$\pm$3 & 600$\pm$19\\ \hline \end{tabular} \label{tab:meas} \end{table*}
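The residual diagnostics discussed above (a one-sample KS test against a normal distribution, and a runs test on the residual signs) can be sketched on synthetic residuals as follows; the sample size and the form of the leftover structure are illustrative assumptions, not the observed values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Normalised residuals from a hypothetical good fit (consistent with N(0,1))
# and from a poor fit whose residuals retain a sinusoidal trend. The sample
# size and trend amplitude are illustrative choices, not the paper's values.
good = rng.standard_normal(500)
poor = rng.standard_normal(500) + 2.0 * np.sin(np.linspace(0.0, 8 * np.pi, 500))

# One-sample KS test of the residuals against a standard normal distribution.
p_good = stats.kstest(good, "norm").pvalue
p_poor = stats.kstest(poor, "norm").pvalue
print(p_poor < 0.05)   # leftover structure is flagged as non-normal residuals

# Runs test (normal approximation) on residual signs: too few sign changes
# indicate systematic structure left in the residuals above the noise.
def runs_test_z(resid):
    signs = resid > 0
    runs = 1 + np.count_nonzero(signs[1:] != signs[:-1])
    n1, n2 = signs.sum(), (~signs).sum()
    mu = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    var = (mu - 1.0) * (mu - 2.0) / (n1 + n2 - 1.0)
    return (runs - mu) / np.sqrt(var)

structured = np.concatenate([np.ones(250), -np.ones(250)])  # two long runs
print(runs_test_z(structured) < -3)   # -> True: far too few runs
```

Used together, the two tests are complementary: the KS test checks the distribution of the residual amplitudes, while the runs test is sensitive to their ordering in time.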
arXiv:1607.05905 (July 2016)

arXiv:1607.06322
We present long-term photometric observations of the young open cluster IC~348 with a baseline time-scale of 2.4\,yr. Our study was conducted with several telescopes of the Young Exoplanet Transit Initiative (YETI) network in the Bessel $R$ band to search for periodic variability of young stars. We identified 87 stars in IC~348 as periodically variable; 33 of them were unreported before. Additionally, we detected 61 periodic non-members, of which 41 are new discoveries. Our wide field of view was the key to these numerous newly found variable stars. The distribution of rotation periods in IC~348 has always been of special interest. We investigate it further with our newly detected periods, but we cannot find a statistically significant bimodality. We also report the detection of a close eclipsing binary in IC~348 composed of a low-mass stellar component ($M \gtrsim 0.09\,\mathrm{M}_{\sun}$) and a K0 pre-main-sequence star ($M \approx 2.7\,\mathrm{M}_{\sun}$). Furthermore, we discovered three detached binaries among the background stars in our field of view and confirmed the period of a fourth one.
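A minimal sketch of the kind of period search used to identify such periodic variables, here a plain least-squares periodogram on a synthetic light curve; all signal parameters are illustrative, and the survey's actual period-detection method may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic R-band light curve of a spotted young star; the 4.3 d period,
# amplitude, noise, and sampling are illustrative values, not survey data.
t = np.sort(rng.uniform(0.0, 200.0, 300))                        # days
y = 0.05 * np.sin(2 * np.pi * t / 4.3) + rng.normal(0.0, 0.01, t.size)

def periodogram(t, y, periods):
    """Least-squares power of a sinusoid-plus-constant fit per trial period."""
    tss = np.sum((y - y.mean()) ** 2)
    power = np.empty(periods.size)
    for i, P in enumerate(periods):
        X = np.column_stack([np.sin(2 * np.pi * t / P),
                             np.cos(2 * np.pi * t / P),
                             np.ones_like(t)])
        _, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        power[i] = 1.0 - rss[0] / tss
    return power

periods = np.linspace(1.0, 20.0, 4000)
best = periods[np.argmax(periodogram(t, y, periods))]
print(round(best, 1))   # recovers a period close to the injected 4.3 d
```

The linear least-squares fit at each trial period handles the irregular sampling of multi-site campaigns naturally; the long baseline is what sets the achievable period resolution.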
We describe the \texttt{redmonster} automated redshift measurement and spectral classification software designed for the extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey IV (SDSS-IV). We describe the algorithms, the template standard and requirements, and the newly developed galaxy templates to be used on eBOSS spectra. We present results from testing on early data from eBOSS, where we have found a 90.5\% automated redshift and spectral classification success rate for the luminous red galaxy sample (redshifts 0.6~$\lesssim$~$z$~$\lesssim$~1.0). The \texttt{redmonster} performance meets the eBOSS cosmology requirements for redshift classification and catastrophic failures, and represents a significant improvement over the previous pipeline. We describe the empirical processes used to determine the optimum number of additive polynomial terms in our models and an acceptable $\Delta\chi_r^2$ threshold for declaring statistical confidence. Statistical errors on redshift measurement due to photon shot noise are assessed, and we find typical values of a few tens of km~s$^{-1}$. An investigation of redshift differences in repeat observations scaled by error estimates yields a distribution with a Gaussian mean and standard deviation of $\mu\sim$~0.01 and $\sigma\sim$~0.65, respectively, suggesting the reported statistical redshift uncertainties are over-estimated by $\sim$~54\%. We assess the effects of object magnitude, signal-to-noise ratio, fiber number, and fiber head location on the pipeline's redshift success rate. Finally, we describe directions of ongoing development.
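The repeat-observation consistency check quoted above can be illustrated numerically: if the reported errors were correct, redshift differences divided by the combined error would form a unit normal, and a measured width of $\sim$0.65 implies the errors are too large by $1/0.65 - 1 \approx 54\%$. In the synthetic sketch below the error distribution is invented and the 0.65 width is imposed by construction, purely to mirror the quoted arithmetic:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
sig1 = rng.uniform(10.0, 40.0, n)   # reported errors, km/s (illustrative)
sig2 = rng.uniform(10.0, 40.0, n)

# Synthetic repeat-observation differences whose true scatter is 0.65
# times the reported combined error, mimicking the quoted result.
dz = rng.normal(0.0, 0.65 * np.hypot(sig1, sig2))

width = np.std(dz / np.hypot(sig1, sig2))    # should come out near 0.65
overestimate = 1.0 / width - 1.0             # fractional error overestimate
```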
\setcounter{footnote}{0} Redshift surveys are a fundamental tool in modern observational astronomy. These surveys aim to measure redshifts of galaxies, galaxy clusters, and quasars to map the 3-dimensional distribution of matter. These observations allow measurements of the statistical properties of the large-scale structure of the universe. In conjunction with observations of the cosmic microwave background, redshift surveys can also be used to place constraints on cosmological parameters, such as the Hubble constant (e.g., \citealp{beu2011}) and the dark energy equation of state through measurements of the baryon acoustic oscillation (BAO) peak, first detected in the clustering of galaxies (\citealp{eis2005}, \citealp{col2005}). The first systematic redshift survey was the CfA Redshift Survey \citep{dav1982}, measuring redshifts for approximately 2,200 galaxies. Such early surveys were limited in scale due to single object spectroscopy. The development of fiber-optic and multi-slit spectrographs enabled the simultaneous observations of hundreds or thousands of spectra, making possible much larger surveys, such as the DEEP2 Redshift Survey \citep{new2013}, the 6dF Galaxy Survey (6dFGS; \citealp{jon2004}), Galaxy and Mass Assembly (GAMA; \citealp{lis2015}), and the VIMOS Public Extragalactic Survey (VIPERS; \citealp{gar2014}), measuring redshifts for approximately 50,000, 136,000, 300,000, and 55,000 objects, respectively. The Sloan Digital Sky Survey (SDSS; \citealp{yor2000}) is the largest redshift survey undertaken to date. At the conclusion of SDSS-III, the third iteration of SDSS \citep{eis2011}, a total of 4,355,200 spectra had been obtained. Of these, 2,497,484 were taken as part of the Baryon Oscillation Spectroscopic Survey (BOSS; \citealp{daw2013}), containing 1,480,945 galaxies, 350,793 quasars, and 274,811 stars. 
The ``constant mass'' (CMASS) subset of the BOSS sample is composed of massive galaxies over the approximate redshift range of $0.4 < z < 0.8$ and typical S/N values of $\sim 5$/pixel. The automated redshift measurement and spectral classification of such large numbers of objects presents a challenge, inspiring refinements of the \texttt{spectro1d} pipeline \citep{bol2012}. This software models each co-added spectrum as a linear combination of principal component analysis (PCA) basis vectors and polynomial nuisance vectors and adopts the combination that produces the minimum $\chi^2$ as the output classification and redshift. PCA-reconstructed models were chosen due to their close ties to the data, allowing the PCA eigenspectra to potentially capture any intrinsic populations within the training sample. This pipeline was able to achieve an automated classification success rate of 98.7\% on the CMASS sample (and 99.9\% on the lower-redshift, higher-S/N LOWZ sample). However, the software was only able to successfully classify 79\% of the quasar sample, which resulted in the need for the entire sample to be visually inspected \citep{par2012}. The Sloan Digital Sky Survey IV (SDSS-IV; \citealp{bla2016}) is the fourth iteration of the SDSS. Within SDSS-IV, the Extended Baryon Oscillation Spectroscopic Survey (eBOSS; \citealp{daw2016}) will precisely measure the expansion history of the Universe throughout eighty percent of cosmic time through observations of galaxies and quasars in a range of redshifts left unexplored by previous redshift surveys. Ultimately, eBOSS plans to use approximately 300,000 luminous red galaxies (LRGs; 0.6~$<$~$z$~$<$~1.0), 200,000 emission line galaxies (ELGs; 0.7~$<$~$z$~$<$~1.1), and 700,000 quasars (0.9~$<$~$z$~$<$~3.5) to measure the clustering of matter. 
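The minimum-$\chi^2$ machinery described above can be sketched as a linear fit at each trial redshift: the observed spectrum is modelled as a redshifted template plus polynomial nuisance vectors, and the redshift with the lowest $\chi^2$ is kept. The single-line toy template, noise model, and grid below are illustrative assumptions, not the eBOSS templates or the \texttt{spectro1d}/\texttt{redmonster} implementations:

```python
import numpy as np

def template(lam_rest):
    """Toy rest-frame template: flat continuum plus one emission line
    at 4000 A (purely illustrative)."""
    return 1.0 + 4.0 * np.exp(-0.5 * ((lam_rest - 4000.0) / 5.0) ** 2)

lam_obs = np.linspace(5000.0, 7000.0, 800)    # observed wavelengths, A
noise = 0.05
ivar = np.full(lam_obs.size, 1.0 / noise**2)  # inverse variance

rng = np.random.default_rng(3)
z_true = 0.62
flux = template(lam_obs / (1 + z_true)) + rng.normal(0, noise, lam_obs.size)

x = (lam_obs - 6000.0) / 1000.0               # scaled polynomial coordinate

def chi2_at(z):
    # linear fit: redshifted template + quadratic nuisance polynomial
    A = np.column_stack([template(lam_obs / (1 + z)),
                         np.ones_like(x), x, x**2])
    w = np.sqrt(ivar)
    coef, *_ = np.linalg.lstsq(A * w[:, None], flux * w, rcond=None)
    r = (flux - A @ coef) * w
    return np.sum(r * r)

zgrid = np.linspace(0.4, 0.9, 501)
chi2 = np.array([chi2_at(z) for z in zgrid])
z_best = zgrid[np.argmin(chi2)]               # minimum-chi^2 redshift
```

The flexibility problem noted above is visible even here: nothing in this toy fit prevents a negative template coefficient, which is one motivation for restricting fits to non-negative, physically-motivated models.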
The primary science goal of eBOSS is to measure the length scale of the BAO feature in the spatial correlation function in four discrete redshift intervals to $1-2$\% precision, thereby constraining the nature of the dark energy that drives the accelerated expansion of the present-day universe. A set of requirements for the redshift and classification pipeline was established to meet these goals. As given in \citet{daw2016}, these requirements are: (1) redshift accuracy $c\sigma_z/(1+z)<$~300 km~s$^{-1}$ for all tracers at redshift $z<1.5$ and $<(300+400(z-1.5))$ km~s$^{-1}$ RMS for quasars at $z>1.5$; and (2) fewer than 1\% unrecognized redshift errors of $>1000$ km~s$^{-1}$ for LRGs and $>3000$ km~s$^{-1}$ for quasars (referred to in this paper as ``catastrophic redshift failures''). Additionally, the pipeline should return confident redshift measurements and classifications for $>90$\% of spectra. The higher-redshift, lower-S/N (typically $\sim 2$/pixel for galaxies) targets in eBOSS present a new challenge for automated redshift measurement and classification software. Initial tests with the \texttt{spectro1d} PCA basis vectors predicted success rates of $\sim 70$\% for the LRG sample, which is well below the specified science requirements. This is due, in part, to the flexibility in fitting PCA components to a spectrum, which allows non-physical combinations of basis vectors to pollute the redshift measurements and statistical confidences thereof. Additionally, while possible (e.g., \citealp{che2012}), mapping PCA coefficients onto physical properties is a difficult task. It requires the use of a transformation matrix, and confidence in the results is unintuitive at best and possibly uncertain. To meet these challenges, we have developed an archetype-based software system for redshift measurement and spectral classification named \texttt{redmonster}. We have developed a set of theoretical templates from which spectra can be classified.
\texttt{redmonster} is written in the Python programming language. The project is open source, and is maintained on the first author's GitHub account\footnote{https://github.com/timahutchinson/redmonster}. The analysis performed in this paper uses tagged version \texttt{v1\_0\_0}. The development of this software was driven by the following goals: \begin{enumerate} \item Redshift measurement and classification on the basis of discrete, non-negative, and physically-motivated model spectra; \item Robustness against unphysical PCA solutions likely to arise for low-S/N ELG and LRG spectra in eBOSS, particularly in the presence of imperfect sky-subtraction; \item Determination of joint likelihood functions over redshift and physical parameters; \item Self-consistent determination and application of hierarchical redshift priors; \item Self-consistent incorporation of photometry and spectroscopy in performing redshift constraints; \item Simultaneous redshift and parameter fits to each individual exposure in multi-exposure data; \item Custom configurability of spectroscopic templates for different target classes; \item Automated identification of multi-object superposition spectra. \end{enumerate} The software as described in this paper meets design goals 1, 2, 3, and 7. We have chosen to enumerate the full list of design goals to provide a forward-looking vision of new and interesting possibilities. In this paper, we describe \texttt{redmonster} and its application to the eBOSS LRG sample. The organization is as follows: Section~\ref{sec:software} describes automated redshift and classification algorithms and procedures of \texttt{redmonster}. We include the requirements and standardized format for templates and a description of the eBOSS galaxy and star templates in Section~\ref{sec:templates}. The core redshift measurement algorithm is described in Section~\ref{sec:fitting} and Section~\ref{sec:interpretation}. 
Section~\ref{sec:parameters} gives an overview of the spectroscopic data sample of eBOSS and an analysis of the tuning and performance of the software on eBOSS data, including completeness and purity. Section~\ref{sec:classification} provides a description of the classification of eBOSS LRG spectra, including redshift success dependence, effects on the final redshift distribution in eBOSS, and precision and accuracy. Finally, Section~\ref{sec:conclusion} provides a summary and conclusion. The content and structure of the output files of \texttt{redmonster} are described in Appendix~\ref{sec:output}.
\label{sec:conclusion} We have described the \redmonster~software that provides automated redshift measurement and spectral classification and its performance on the SDSS-IV eBOSS LRG sample, comprising 99,449 spectra. This software provides a new algorithm and new sets of templates that restrict all spectral fitting to only physically-motivated models. The advantages over the current algorithm include robustness against unphysical solutions likely to arise for low signal-to-noise spectra (particularly in the presence of imperfect sky-subtraction), determination of joint likelihood functions over redshift and physical parameters, and custom configurability of spectroscopic templates for different target classes. The redshift success rate of the \redmonster~software on eBOSS LRGs is 90.5\%, meeting the eBOSS scientific requirement of 90\% and providing a significant improvement over the previous redshift pipeline, \texttt{spectro1d}. The improvement translates to a 23.9\% increase in the surface density of tracers that can now be used to constrain cosmology through clustering measurements. We have shown catastrophic failure rates for \texttt{redmonster} of 0.98\%, in agreement with the scientific requirements of $<1$\%. The software also provides robust estimates of statistical redshift errors that are Gaussian distributed, typically a few tens of km~s$^{-1}$, well below the specified maximum of 300 km~s$^{-1}$. Looking forward, using $\Delta\chi_\mathrm{threshold}^2=0.0015$ would give \texttt{redmonster} a completeness of 95.7\% and a catastrophic failure rate of 1.9\%, which would very nearly meet the DESI science requirements of at least 95\% completeness and a maximum of 5\% catastrophic failures on eBOSS data. The raw S/N in DESI will be comparable to that in eBOSS, though the image quality in eBOSS two-dimensional spectra is degraded relative to the image quality we expect in the bench-mounted DESI system. 
eBOSS also shows a failure rate that increases to $\sim25\%$ near the edges of the focal plane. This is due to imperfect optics towards the edges of the spectrographs, which will be less significant in DESI. Therefore, we expect improved \texttt{redmonster} performance on the better-behaved DESI spectra. Development work is ongoing for eBOSS, both in the calibration and extraction of spectra and on \redmonster\ itself. The next priority for \redmonster\ development is to build and test templates for ELG and quasar spectra. Additionally, we will incorporate simultaneous fitting to the individual exposures at their native resolution to remove covariances between neighboring pixels introduced by the co-adding process. Subsequent eBOSS data releases will be accompanied by catalogues of redshift measurements and spectral classifications produced by \redmonster.
In the conference presentation we have reviewed the theory of non-Gaussian geometrical measures for the 3D Cosmic Web of the matter distribution in the Universe and for 2D sky data, such as Cosmic Microwave Background (CMB) maps, that was developed in a series of our papers. The theory leverages the symmetry of isotropic statistics such as Minkowski functionals and extrema counts to develop a post-Gaussian expansion of the statistics in orthogonal polynomials of invariant descriptors of the field, its first and second derivatives. The application of the approach to 2D fields defined on a spherical sky was suggested, but never rigorously developed. In this paper we present such a development, treating the effects of the curvature and finiteness of the spherical space $S_2$ exactly, without relying on the flat-sky approximation. We present Minkowski functionals, including the Euler characteristic and extrema counts, to the first non-Gaussian correction, suitable for weakly non-Gaussian fields on a sphere, of which the CMB is the prime example.
The statistics of Minkowski functionals, including the Euler number, as well as extrema counts requires the knowledge of the one-point joint probability distribution function (JPDF) $P(x,x_i,x_{ij})$ of the field $x$ (assumed to have zero mean), its first, $x_i$, and second, $x_{ij}$, derivatives. Let us consider a random field $x$ defined on a 2D sphere $S_2$ of radius $R$, represented as an expansion in spherical harmonics \begin{equation} x(\theta,\phi) = \sum_{l=0}^\infty \sum_{m=-l}^l a_{lm} Y_{lm}(\theta,\phi) \end{equation} where, for a Gaussian, statistically homogeneous and isotropic field, the random coefficients $a_{lm}$ are uncorrelated, with $m$-independent variances $C_l$ for each harmonic \begin{equation} \langle a_{lm} a^*_{l^\prime m^\prime}\rangle = C_l \delta_{l l^\prime} \delta_{m m^\prime} \end{equation} The variance of the field is then given by \begin{equation} \sigma^2 \equiv \left\langle x^2 \right\rangle = \frac{1}{4\pi} \sum_l C_l (2l+1) \end{equation} When considering derivatives in the curved space, we use the covariant derivatives $x_{;\theta}$, $x_{;\phi}$, ${x^{;\theta}}_{;\theta}$, ${x^{;\phi}}_{;\phi}$, ${x^{;\theta}}_{;\phi}$, where it will be seen immediately that the mixed version of the second derivatives is the most appropriate choice. The 2D rotation-invariant combinations of derivatives are \begin{equation} q^2 = x_{;\phi} x^{;\phi} + x_{;\theta} x^{;\theta} ~,~ J_1 = x^{;\theta}_{;\theta}+x^{;\phi}_{;\phi} ~,~ J_2 = \left(x^{;\theta}_{;\theta}-x^{;\phi}_{;\phi}\right)^2 + 4 x^{;\theta}_{;\phi} x^{;\phi}_{;\theta} \end{equation} where $J_1$ is linear in the field and $q^2$ and $J_2$ are quadratic, always positive, quantities.
The derivatives are also random Gaussian variables, whose variances are easily computed \begin{eqnarray} \sigma_1^2 &\equiv& \langle q^2 \rangle = \frac{1}{4\pi R^2} \sum_l C_l l (l+1) (2l+1) \\ \sigma_2^2 &\equiv& \langle J_1^2 \rangle = \frac{1}{4\pi R^4} \sum_l C_l l^2 (l+1)^2 (2l+1) \\ \sigma_2^{\prime 2} &\equiv& \langle J_2 \rangle = \frac{1}{4 \pi R^4} \sum_l C_l (l-1) l (l+1) (l+2) (2l+1) \end{eqnarray} where the fundamental difference between a sphere and the 2D Cartesian space is in the fact that $\sigma_2^\prime \ne \sigma_2$. Among the cross-correlations the only non-zero one is between the field and its Laplacian, $\left\langle x \left(x^{;\theta}_{;\theta}+x^{;\phi}_{;\phi}\right)\right\rangle = - \sigma_1^2 $. From now on we rescale all random quantities by their variances, so that the rescaled variables have $\langle x^2\rangle = \langle J_1^2 \rangle = \langle q^2 \rangle = \langle J_2 \rangle =1$. Introducing $\zeta=(x+\gamma J_1)/\sqrt{1-\gamma^2}$ (where the spectral parameter $\gamma= - \langle x J_1 \rangle = \sigma_1^2/(\sigma\sigma_2)$) leads to the following simple JPDF for Gaussian 2D fields \begin{equation} G_{\rm 2D} = \frac{1}{2 \pi} \exp\left[-\frac{1}{2} \zeta^2 - q^2 - \frac{1}{2} J_1^2 - J_2 \right] \,. \label{eq:2DG} \end{equation} In \cite{PGP} we have observed that for a non-Gaussian JPDF the invariant approach immediately suggests a Gram-Charlier expansion in terms of the orthogonal polynomials defined by the kernel $G_{\rm 2D}$. Since $\zeta$, $q^2$, $J_1$ and $J_2$ are uncorrelated variables in the Gaussian limit, the resulting expansion is \begin{eqnarray} P_{\rm 2D}(\zeta, q^2, J_1, J_2) &=& G_{\rm 2D} \left[ \vphantom{ \frac{(-1)^{j+l}}{i!\;j!\; k!\; l!}} 1 + \right. \nonumber \\ \sum_{n=3}^\infty \sum_{i,j,k,l=0}^{i+2 j+k+2 l=n} && \left.
\frac{(-1)^{j+l}}{i!\;j!\; k!\; l!} \left\langle \zeta^i {q^2}^j {J_1}^k {J_2}^l \right\rangle_{\rm GC} H_i\left(\zeta\right) L_j\left(q^2\right) H_k\left(J_1\right) L_l\left(J_2\right) \right], \label{eq:2DP_general} \end{eqnarray} where terms are sorted in the order of the field power $n$ and $\sum_{i,j,k,l=0}^{i+2 j+k+2 l=n} $ stands for summation over all combinations of non-negative $i,j,k,l$ such that $i+2j+k+2l$ adds to the order of the expansion term $n$. $H_i$ are ({\it probabilists'}) Hermite and $L_j$ are Laguerre polynomials. The coefficients of expansion \begin{equation} \left\langle \zeta^i {q^2}^j J_1^k J_2^l \right\rangle_{\scriptscriptstyle{\mathrm{GC}}} \!\! = \frac{j! \; l!}{(-1)^{j+l}} \left\langle \vphantom{ \zeta^i {q^2}^j J_1^k J_2^l } \! H_i\left(\zeta\right) L_j\left(q^2\right) H_k\left(J_1\right) L_l\left(J_2\right) \!\right\rangle. \label{eq:GCtomoments2D} \end{equation} are related (and for the first non-Gaussian order $n=3$ are equal) to the moments of the field and its derivatives (see \cite{Gay12} for details). Up to now our considerations are practically identical to the theory in the Cartesian space, which facilitates using many of the Cartesian calculations. We stress again the only, but important, difference being $\sigma_2^\prime \ne \sigma_2$. We shall see in the next sections how this difference plays out. Here we introduce the spectral parameter $\beta$ that describes this difference \begin{equation} \beta \equiv 1 - \frac{ \sigma_2^{\prime 2}}{ \sigma_2^2} = 2 \frac{ \sum_l C_l l (l+1) (2l+1) }{ \sum_l C_l l^2 (l+1)^2 (2l+1)} \label{eq:beta} \end{equation} Let us review the scales and parameters that the theory has. As in the flat space, we have two scales $R_0 = \sigma/\sigma_1$ and $R_* = \sigma_1/\sigma_2$ and the spectral parameter $\gamma = R_*/R_0$ (which also describes correlation between the field and its second derivatives). On a sphere we have a third scale, the curvature radius $R$. 
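The orthogonality structure underlying the Gram-Charlier expansion above can be checked numerically: in the Gaussian limit $\zeta$ and $J_1$ are unit normal while $q^2$ and $J_2$ are exponentially distributed with unit mean, so probabilists' Hermite and Laguerre polynomials are the natural orthogonal sets. A minimal Monte Carlo sketch, with illustrative sample sizes:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.special import eval_laguerre

def He(n, t):
    """Probabilists' Hermite polynomial H_n evaluated at t."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(t, c)

rng = np.random.default_rng(4)
z = rng.normal(0.0, 1.0, 2_000_000)     # zeta or J_1 in the Gaussian limit
x = rng.exponential(1.0, 2_000_000)     # q^2 or J_2 in the Gaussian limit

m22 = np.mean(He(2, z) ** 2)            # <H_2 H_2> -> 2! = 2
m23 = np.mean(He(2, z) * He(3, z))      # <H_2 H_3> -> 0
l11 = np.mean(eval_laguerre(1, x) ** 2) # <L_1 L_1> -> 1 under Exp(1)
l12 = np.mean(eval_laguerre(1, x) * eval_laguerre(2, x))  # -> 0
```

This is why the expansion coefficients reduce to the polynomial averages in Eq.~(\ref{eq:GCtomoments2D}): each term can be projected out independently under the Gaussian kernel.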
The meaning of the additional spectral parameter $\beta$ becomes clear if we notice that $\sigma_2^2 - \sigma_2^{\prime 2} = 2 \sigma_1^2/R^2$, thus $\beta = 2 R_*^2/R^2$, i.e., it describes the (squared) ratio of the correlation scale $R_*$ to the curvature radius of the sphere. As with $\gamma$, $\beta$ varies from $0$ to $1$, with $\beta=0$ corresponding to the flat-space limit. From Eq.~(\ref{eq:beta}) we find that $\beta=1$ is achieved when the field has only the monopole and the dipole in its spectral decomposition.
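The definitions above translate directly into code. The following sketch evaluates $\sigma$, $\sigma_1$, $\sigma_2$, $\sigma_2^\prime$, $\gamma$, and $\beta$ for an illustrative power spectrum (a placeholder, not a CMB spectrum) and checks the identities $\sigma_2^2-\sigma_2^{\prime 2}=2\sigma_1^2/R^2$ and $\beta=2R_*^2/R^2$:

```python
import numpy as np

R = 1.0                                    # sphere radius (arbitrary units)
l = np.arange(2, 1501)
Cl = 1.0 / (l * (l + 1.0))                 # illustrative spectrum only

w = (2 * l + 1) / (4 * np.pi)
sig0_sq  = np.sum(w * Cl)                                           # <x^2>
sig1_sq  = np.sum(w * Cl * l * (l + 1)) / R**2                      # <q^2>
sig2_sq  = np.sum(w * Cl * l**2 * (l + 1)**2) / R**4                # <J_1^2>
sig2p_sq = np.sum(w * Cl * (l - 1) * l * (l + 1) * (l + 2)) / R**4  # <J_2>

gamma = sig1_sq / np.sqrt(sig0_sq * sig2_sq)   # sigma_1^2 / (sigma sigma_2)
beta = 1.0 - sig2p_sq / sig2_sq                # definition of beta

# cross-check against the geometric interpretation: beta = 2 R_*^2 / R^2
R_star_sq = sig1_sq / sig2_sq                  # R_* = sigma_1 / sigma_2
```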
{ We perform a comprehensive cosmological study of the $H_0$ tension between the direct local measurement and the model-dependent value inferred from the Cosmic Microwave Background. With the recent measurement of $H_0$ this tension has risen to more than $3 \,\sigma$. We consider changes in the early time physics without modifying the late time cosmology. We also reconstruct the late time expansion history in a model independent way with minimal assumptions using distance measurements from Baryon Acoustic Oscillations and Type Ia Supernovae, finding that at $z<0.6$ the recovered shape of the expansion history differs by less than 5\% from that of a standard $\Lambda$CDM model. These probes also provide a model insensitive constraint on the low-redshift standard ruler, measuring directly the combination $r_{\rm s}h$, where $H_0=h \times 100$ Mpc$^{-1}$km/s and $r_{\rm s}$ is the sound horizon at radiation drag (the standard ruler), traditionally constrained by CMB observations. Thus $r_{\rm s}$ and $H_0$ provide absolute scales for distance measurements (anchors) at opposite ends of the observable Universe. We calibrate the cosmic distance ladder and obtain a model-independent determination of the acoustic-scale standard ruler, $r_{\rm s}$. The tension in $H_0$ reflects a mismatch between our determination of $r_{\rm s}$ and its standard, CMB-inferred value. Without including high-$\ell$ Planck CMB polarization data (i.e., only considering the ``recommended baseline'' low-$\ell$ polarisation and temperature and the high-$\ell$ temperature data), a modification of the early-time physics to include a component of dark radiation with an effective number of species around 0.4 would reconcile the CMB-inferred constraints with the local $H_0$ and standard ruler determinations. The inclusion of the ``preliminary'' high-$\ell$ Planck CMB polarisation data disfavours this solution.}
\label{sec:Introduction} In the last few years, the determination of cosmological parameters has reached astonishing and unprecedented precision. Within the standard $\Lambda$-Cold Dark Matter ($\Lambda$CDM) cosmological model some parameters are constrained at or below the percent level. This model assumes a spatially flat cosmology and matter content dominated by cold dark matter but with total energy density dominated by a cosmological constant, which drives a late time accelerated expansion. Such precision has been driven by a major observational effort. This is especially true in the case of Cosmic Microwave Background (CMB) experiments, where WMAP \citep{HinshawWMAP_13,BennettWMAP_13} and Planck \citep{Planckparameterspaper} have played a key role, but also in the measurements of Baryon Acoustic Oscillations (BAO) \citep{Anderson14_bao,Cuesta16_bao}, where the evolution of the cosmic distance scale is now measured with a $\sim 1\%$ uncertainty. The Planck Collaboration 2015 \citep{Planckparameterspaper} presents the strongest constraints so far on key parameters, such as geometry, the predicted Hubble constant, $H_0$, and the sound horizon at the radiation drag epoch, $r_{\rm s}$. These last two quantities provide an absolute scale for distance measurements at opposite ends of the observable Universe (see e.g., \cite{Cuesta:2014asa, Aubourg_2015}), which makes them essential to build the distance ladder and model the expansion history of the Universe. However, they are {\it indirect} measurements and as such they are model-dependent. Whereas the $H_0$ constraint assumes an expansion history model (which heavily relies on late time physics assumptions such as the details of late-time cosmic acceleration, or equivalently, the properties of dark energy), $r_{\rm s}$ is a derived parameter which relies on early time physics (such as the density and equation of state parameters of the different species in the early universe).
This is why having model-independent, direct measurements of these same quantities is of utmost importance. In the absence of significant systematic errors, if the standard cosmological model is the correct model, indirect (model-dependent) and direct (model-independent) constraints on these parameters should agree. If they are significantly inconsistent, this will provide evidence of physics beyond the standard model (or unaccounted systematic errors). Direct measurements of $H_0$ rely on the ability to measure absolute distances to $>100$ Mpc, usually through the use of coincident geometric and relative distance indicators. $H_0$ can be interpreted as the normalization of the Hubble parameter, $H(z)$, which describes the expansion rate of the Universe as a function of redshift. Previous constraints on $H_0$ (e.g., \citep{Riess:2011yx}) are consistent with the final results from the WMAP mission, but are in $2$--$2.5\,\sigma$ tension with Planck when the $\Lambda$CDM model is assumed \citep{MarraH0_2013, Verde_tension2d,BennettH0_2014}. The low value of $H_0$ found, within the $\Lambda$CDM model, by the Planck Collaboration since its first data release \citep{Planck13_param}, and confirmed by the latest data release \citep{Planckparameterspaper}, has attracted a lot of attention. Re-analyses of the direct measurements of $H_0$ have been performed (\citep{EfstathiouH0_2014} including the recalibration of distances of \cite{HumphreyH0}); physics beyond the standard model has been advocated to alleviate the tension, especially a higher number of effective relativistic species, dynamical dark energy and non-zero curvature \citep{Wyman14_nu,Dvorkin14, Leistedt14, Aubourg_2015, Planck15_MGDE, DiValentino:2016hlg}. In some of these model extensions, by allowing the extra parameter to vary, the tension is reduced, but this is mainly due to weaker constraints on $H_0$ (because of the increased number of model parameters), rather than an actual shift in the central value.
In many cases, non-standard values of the extra parameter appear disfavoured by other data sets. Recent improvements in the process of measuring $H_0$ (an increase in the number of SNeIa calibrated by Cepheids from 8 to 19, new parallax measurements, stronger constraints on the Hubble flow and a refined computation of the distance to NGC4258 from maser data) have made possible a $2.4\%$ measurement of $H_0$: $H_0 = 73.24\pm 1.74$ ${\rm Mpc}^{-1}{\rm km/s}$ \citep{RiessH0_2016}. This new measurement increases the tension with respect to the latest Planck-inferred value \citep{Planck_newHFI} to $\sim 3.4 \sigma$. This calibration of $H_0$ has been successfully tested with recent Gaia DR1 parallax measurements of cepheids in \cite{Casertano16_Gaia}. Time-delay cosmography of strongly lensed quasars is another way to set independent constraints on $H_0$. Effort in this direction is represented by the H0LiCOW project \citep{Suyu_holicow}. Using three strong lenses, they find $H_0 = 71.9^{+2.4}_{-3.0}$ ${\rm Mpc}^{-1}{\rm km/s}$, within flat $\Lambda$CDM with free matter and energy density \citep{H0_holicow}. Fixing $\Omega_M= 0.32$ (motivated by the Planck results \citep{Planckparameterspaper}) yields a value $H_0 = 72.8\pm 2.4$ ${\rm Mpc}^{-1}{\rm km/s}$. These results are in $1.7\sigma$ and $2.5\sigma$ tension with respect to the most recent CMB-inferred value, while being perfectly consistent with the local measurement of \citep{RiessH0_2016}. In addition, in \citep{Addison_2016}, it is shown that the value of $H_0$ depends strongly on the CMB multipole range analysed. Analysing only the temperature power spectrum, a 2.3$\sigma$ tension between the $H_0$ from $\ell < 1000$ and from $\ell \geq 1000$ is found, the former being consistent with the direct measurement of \citep{RiessH0_2016}. However, Ref.
\citep{Planck_shifts} finds that the shifts in the cosmological parameter values inferred from low versus high multipoles are not highly improbable in a $\Lambda$CDM model (consistent with expectations at the 10$\%$ level). These shifts appear because, when considering only multipoles $\ell < 800$ (approximately the range explored by WMAP), the cosmological parameters are more strongly affected by the well-known $\ell < 10$ power deficit. Explanations for this tension in $H_0$ include internal inconsistencies in the Planck data, systematics in the local determination of $H_0$, or physics beyond the standard model. These recent results clearly motivate a detailed study of possible extensions of the $\Lambda$CDM model and an inspection of the current cosmological data sets, checking for inconsistencies. In figure \ref{fig:H0_values}, we summarize the current constraints on $H_0$ tied to the CMB and low-redshift measurements. We show results from the public posterior samples provided by the Planck Collaboration 2015 \citep{Planckparameterspaper}, WMAP9 \citep{HinshawWMAP_13} (analysed with the same assumptions as Planck)\footnote{The values of $r_{\rm s}$ in WMAP's public posterior samples were computed using the approximation of \cite{EH98}, which differs from the values computed by current Boltzmann codes and used in Planck's analysis by several percent, as pointed out in appendix B of Ref. \cite{Hamann10}. As WMAP's data have been re-analysed by the Planck Collaboration, the values reported here are all computed with the same definition.}, the results of the work of Addison et al. \citep{Addison_2016} and the quasar time-delay cosmography measurements of $H_0$ \citep{H0_holicow}, along with the local measurement of \cite{RiessH0_2016}. CMB constraints are shown for two models: a standard flat $\Lambda$CDM and a model where the effective number of relativistic species $N_{\rm eff}$ is varied in addition to the standard $\Lambda$CDM parameters.
Of all the popular $\Lambda$CDM model extensions, this is the most promising one to reduce the tension. Assuming $\Lambda$CDM, the CMB-inferred $H_0$ is consistent with the local measurement only when multipoles $\ell < 1000$ are considered (the work of Addison et al. and WMAP9). However, when BAO measurements are added to WMAP9 data, the tension reappears, but at a lower level ($2.8\sigma$). \begin{figure}[t] \minipage{0.86\textwidth} \begin{center} %\includegraphics[width=0.8\textwidth]{Figures/legend_values.pdf} \includegraphics[width=0.75\textwidth]{H0_values_holicow.pdf} \end{center} \endminipage \minipage{0.14\textwidth} \hspace{-2cm} \includegraphics[width=1.05\textwidth]{legend_H0.pdf} \endminipage\hfill \caption{\footnotesize Marginalised 68\% and 95\% constraints on $H_0$ from different analyses of CMB data, obtained from Planck Collaboration 2015 public chains \citep{Planckparameterspaper}, WMAP9 \citep{HinshawWMAP_13} (analysed with the same assumptions as Planck) and the results of the work of Addison et al. \citep{Addison_2016} and Bonvin et al. \cite{H0_holicow}. We show the constraints obtained in a $\Lambda$CDM context in blue, $\Lambda$CDM+$N_{\rm eff}$ in red, quasar time-delay cosmography results (taken from the H0LiCOW project \cite{H0_holicow}, for a $\Lambda$CDM model, with and without relying on a CMB prior for $\Omega_{\rm M}$) in green and the constraints of the independent direct measurement of \citep{RiessH0_2016} in black. We report in parentheses the tension with respect to the direct measurement. } \label{fig:H0_values} \end{figure} On the other hand, $r_{\rm s}$ is the standard ruler which calibrates the distance scale measurements of BAO.
Since BAO measure $D_V/r_{\rm s}$ (or $D_A/r_{\rm s}$ and $Hr_{\rm s}$ in the anisotropic analysis), the only way to constrain $r_{\rm s}$ without making assumptions about the early universe physics is to combine the BAO measurement with other probes of the expansion rate (such as $H_0$, cosmic clocks \citep{Jimenez_C} or gravitational lensing time delays \cite{Suyu_holicow}). When no cosmological model is assumed, $H_0$ and $r_{\rm s}$ are understood as anchors of the cosmic distance ladder and the inverse cosmic distance ladder, respectively. As BAO measurements always depend on the product $H_0r_{\rm s}$ (see Equations \eqref{comovdist}, \eqref{DA} and \eqref{Dv}), when the Universe expansion history is probed by BAO, the two anchors are related by $H_0r_{\rm s}=$ constant. This was illustrated in \citep{Heavens:2014rja} and more recently in \citep{StandardQuantities}, where only weak assumptions are made on the shape of $H(z)$, and in \citep{Cuesta:2014asa}, where the normal and inverse distance ladder are studied in the context of $\Lambda$CDM and typical extensions. While the model-independent measurement of $r_{\rm s}$ \citep{Heavens:2014rja} is consistent with Planck, the model-dependent value of \citep{Cuesta:2014asa} is in $2\sigma$ tension with it. Both of these measurements use $H_0\approx 73.0\pm 2.4 {\rm Mpc}^{-1}{\rm km/s}$, so this modest tension is expected to increase with the new constraint on $H_0$. In this paper we quantify the tension in $H_0$ and explore how it could be resolved --without invoking systematic errors in the measurements-- by studying separately changes in the early-time physics and in the late-time physics. We follow three avenues. Firstly, we allow the early cosmology (probed mostly by the CMB) to deviate from the standard $\Lambda$CDM assumptions, leaving unaltered the late cosmology (e.g., the expansion history at redshift below $z\sim 1000$ is given by the $\Lambda$CDM model).
Secondly, we allow for changes in the late-time cosmology, in particular in the expansion history at $z\leq 1.3$, while assuming standard early cosmology (i.e., physics is standard until recombination, but the late-time expansion history is allowed to be non-standard). Finally, we reconstruct, in a model-independent way, the late-time expansion history without making any assumption about early-time physics, beyond assuming that the BAO scale corresponds to a standard ruler (of unknown length). By combining BAO with SNeIa and $H_0$ measurements we are able to measure this standard ruler in a model-independent way. Comparison with the Planck-derived determination of the sound horizon at radiation drag then allows us to assess the consistency of the two measurements within the assumed cosmological model. In section \ref{sec:Data} we present the data sets used in this work, and in section \ref{sec:Methods} we describe the methodology. We explore modifications of early-time physics beyond the standard $\Lambda$CDM (leaving late-time physics unaltered) in section \ref{sec:Earlyuniverse}, while changes in the late-time cosmology are explored in section \ref{sec:H_recon}; there we present the findings both assuming standard early-time physics and in a way that is independent of it. Finally, we summarize the conclusions of this work in section \ref{sec:Conclusions}.
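The $H_0 r_{\rm s}$ degeneracy discussed above can be illustrated with a short numerical sketch (not from the paper; a flat $\Lambda$CDM background with $\Omega_{\rm M}=0.3$ and illustrative anchor values are assumptions here):

```python
import numpy as np

C = 299792.458  # speed of light [km/s]

def E(z, om=0.3):
    """Dimensionless Hubble rate H(z)/H0 for flat LCDM (matter + Lambda)."""
    return np.sqrt(om * (1.0 + z)**3 + (1.0 - om))

def comoving_distance(z, h0, om=0.3):
    """Comoving distance [Mpc] via trapezoidal integration of c dz / H(z)."""
    zgrid = np.linspace(0.0, z, 2001)
    f = 1.0 / E(zgrid, om)
    dz = zgrid[1] - zgrid[0]
    return (C / h0) * dz * (f.sum() - 0.5 * (f[0] + f[-1]))

def dv_over_rs(z, h0, rs, om=0.3):
    """Isotropic BAO observable D_V(z)/r_s in flat LCDM."""
    dc = comoving_distance(z, h0, om)
    dv = (dc**2 * C * z / (h0 * E(z, om)))**(1.0 / 3.0)
    return dv / rs

# Two cosmologies with different (H0, r_s) but the same product H0 * r_s:
a = dv_over_rs(0.57, h0=67.3, rs=147.5)
b = dv_over_rs(0.57, h0=73.0, rs=147.5 * 67.3 / 73.0)
print(a, b)  # identical: BAO alone cannot break the H0-r_s degeneracy
```

Since $D_V \propto 1/H_0$ at fixed $\Omega_{\rm M}$, the observable depends only on the combination $H_0 r_{\rm s}$, which is why an external anchor is needed at one end of the ladder.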
\label{sec:Conclusions} The standard $\Lambda$CDM model, with only a handful of parameters, provides an excellent description of a host of cosmological observations, with remarkably few exceptions. The most notable and persistent one is the local determination of the Hubble constant $H_0$, which, with the recent improvement by \citep{RiessH0_2016}, presents a $\sim 3 \sigma$ tension with respect to the value inferred by the Planck Collaboration (assuming $\Lambda$CDM). The CMB is mostly sensitive to early-Universe physics, and the CMB-inferred $H_0$ measurement thus depends on assumptions about both early-time and late-time physics. A related quantity that the CMB can measure in a way that does not depend on late-time physics is the sound horizon at radiation drag, $r_{\rm s}$. This measurement, however, is still model-dependent in that it relies on standard assumptions about early-time physics. On the other hand, the local measurement of $H_0$ is model-independent, as it does not depend on cosmological assumptions. As this work was nearing completion, new quasar time-delay cosmography data became available \cite{H0_holicow}. Within the $\Lambda$CDM model these provide an $H_0$ constraint centered around 72~${\rm km\,s^{-1}\,Mpc^{-1}}$ with a $\sim 4$\% error, and thus show reduced tension. The two parameters $r_{\rm s}$ and $H_0$ become strictly related when BAO observations are also considered. Expansion history probes such as BAO and SNeIa can provide a model-independent estimate of the low-redshift standard ruler, directly constraining the combination $r_{\rm s}h$ (with $H_0=h \times 100~{\rm km\,s^{-1}\,Mpc^{-1}}$). Thus $r_{\rm s}$ and $H_0$ provide absolute scales for distance measurements (anchors) at opposite ends of the observable Universe. In the absence of systematic errors in the measurements, if the standard cosmological model is the correct one, indirect (model-dependent) and direct (model-independent) constraints on these parameters should agree.
The tension could thus provide evidence of physics beyond the standard model (or of unaccounted-for systematic errors). We have performed a complete cosmological study of the current tension between the inferred value of $H_0$ from the latest CMB data (as provided by the Planck satellite) \citep{Planckparameterspaper} and its direct measurement, with the recent update from \citep{RiessH0_2016}. This translates into a tension between cosmological model-dependent and model-independent constraints on $r_{\rm s}$. We have first explored models that deviate from the standard $\Lambda$CDM in the early-Universe physics. When including CMB data alone (or in combination with geometric measurements that do not rely on the $H_0$ anchor, such as BAO) we find no evidence for deviations from the standard $\Lambda$CDM model, and in particular no evidence for extra effective relativistic species beyond three active neutrinos. This conclusion is unchanged if we allow additional freedom in the behaviour of the perturbations, either in all relativistic species or only in the additional ones. We therefore put limits on the possible presence of a component of the Universe whose mean energy density scales like radiation with the expansion of the Universe, but whose perturbations could behave like radiation, a perfect fluid, a scalar field, or anything else in between. On the other hand, the value of the Hubble constant inferred by these analyses, and by other promising modifications of early-time physics, is always significantly lower than the local measurement of~\cite{RiessH0_2016}. Should the low-level systematics present in the high-$\ell$ ``preliminary" Planck polarisation data be found to be non-negligible, the TE and EE data should not be included in the analysis.
In this case, including only the ``recommended" baseline of low-$\ell$ temperature and polarisation data and only temperature at high $\ell$, the tight limits relax and the tension disappears for a cosmological model with extra dark radiation corresponding to $\Delta N_{\rm eff} \sim 0.4$. However, the tension reappears (though at an acceptable level) when BAO data are included. The constraints on the effective parameters describing the perturbation behaviour of the extra radiation are too weak to discriminate among the different candidates. Another possible way to reconcile the CMB-derived $H_0$ value and the local measurement is to allow deviations from the standard late-time expansion history of the Universe. Rather than invoking specific models, we have reconstructed the expansion history in a model-independent, minimally parametric way. Our method to reconstruct $H(z)$ does not rely on any model and only requires minimal assumptions. These are: SNeIa form a homogeneous group and can be used as standard candles; $r_{\rm s}$ is a standard ruler for BAO corresponding to the sound horizon at radiation drag; the expansion history is smooth and continuous; and the Universe is spatially flat. When using only BAO and the $H_0$ measurement, with an early-Universe prior on $r_{\rm s}$, the reconstructed $H(z)$ shows a sharp increase in acceleration at low redshift, such as would be provided by a phantom equation-of-state parameter for dark energy. However, when SNeIa are included, the shape of $H(z)$ cannot deviate significantly from that of $\Lambda$CDM, therefore disfavouring the phantom dark energy solution. When the CMB prior on $r_{\rm s}$ is removed, this procedure yields a model-independent determination of $r_{\rm s}$ (and of the expansion history) without any assumption about the early Universe. The $r_{\rm s}$ value so obtained is significantly lower than that obtained from the CMB assuming standard early-time physics (a 2.6$\sigma$ tension).
When we relax the assumption of spatial flatness, the curvature remains largely unconstrained and the errors on the other parameters grow slightly; we do not find significant shifts in the remaining parameters. Of course, this all hinges on identifying the BAO standard ruler with the sound horizon at radiation drag. Several processes have been proposed that could displace the BAO feature, the most important being non-linearities and bias, e.g., \cite{Angulo, Rasera}, and a non-zero baryon--dark matter relative velocity \cite{TH2010, D2010, Slepian}. These effects, however, have been found to be below current errors \cite{PB09, Blazek, Slepian2} and below the 1\% level. It is therefore hard to see how they could introduce the $\sim 5$--$7$\% shift required to resolve the tension. In summary, because the shape of the expansion history is tightly constrained by current data in a model-independent way, the $H_0$ tension can be restated as a mismatch in the normalisation of the cosmic distance ladder between the two anchors: $H_0$ at low redshift and $r_{\rm s}$ at high redshift. In the absence of systematic errors, especially in the high-$\ell$ CMB polarisation data and/or in the local $H_0$ measurement, the mismatch suggests reconsidering the standard assumptions about early-time physics. Should the ``preliminary" high-$\ell$ CMB polarisation data be found to be affected by significant systematics and be excluded from the analysis, the mismatch could be resolved by allowing an extra component behaving like dark radiation at the background level, with $\Delta N_{\rm eff} \sim 0.4$. Other new physics in the early Universe that reduces the CMB-inferred sound horizon at radiation drag by $\sim 10$ Mpc (6\%) would have the same effect.
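The size of the normalisation mismatch can be checked with back-of-the-envelope arithmetic. The numbers below are illustrative assumptions (a Planck-like $r_{\rm s}\approx 147.3$~Mpc and round anchor values), and this naive estimate ignores the correlations of a full combined fit, so it slightly overshoots the $\sim 5$--$7$\% shift quoted in the text:

```python
# BAO fix the product r_s * h, so reconciling the two anchors at fixed
# r_s * h requires r_s to shift by the same fraction as H0 (opposite sign).
h0_cmb   = 67.3   # CMB-inferred value, standard early physics [km/s/Mpc]
h0_local = 73.2   # local distance-ladder value [km/s/Mpc]
rs_cmb   = 147.3  # sound horizon at radiation drag from the CMB [Mpc]

rs_needed = rs_cmb * h0_cmb / h0_local   # r_s implied by the local anchor
shift_mpc = rs_cmb - rs_needed
shift_pct = 100.0 * shift_mpc / rs_cmb
print(f"r_s must drop to {rs_needed:.1f} Mpc "
      f"(a shift of {shift_mpc:.1f} Mpc, i.e. {shift_pct:.1f}%)")
```

The required shift of order 10 Mpc matches the magnitude of the early-Universe modification quoted above.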
1607.01788_arXiv.txt
Using a set of high resolution hydrodynamical simulations run with the \cholla code, we investigate how mass and momentum couple to the multiphase components of galactic winds. The simulations model the interaction between a hot wind driven by supernova explosions and a cooler, denser cloud of interstellar or circumgalactic media. By resolving scales of $\Delta x<0.1$~pc over $>100$~pc distances, our calculations capture how the cloud disruption leads to a distribution of densities and temperatures in the resulting multiphase outflow, and quantify the mass and momentum associated with each phase. We find that the multiphase wind contains comparable mass and momenta in phases over a wide range of densities and temperatures, extending from the hot wind ($n \approx 10^{-2.5}$~$\mathrm{cm}^{-3}$, $T \approx 10^{6.5}$~K) to the coldest components ($n \approx 10^2$~$\mathrm{cm}^{-3}$, $T \approx 10^2$~K). We further find that the momentum distributes roughly in proportion to the mass in each phase, and that the mass-loading of the hot phase by the destruction of cold, dense material is an efficient process. These results provide new insight into the physical origin of observed multiphase galactic outflows, and inform galaxy formation models that include coarser treatments of galactic winds. Our results confirm that cool gas observed in outflows at large distances from the galaxy ($\gtrsim 1$~kpc) likely does not originate through the entrainment of cold material near the central starburst.
\label{sec:introduction} Star-forming galaxies commonly feature a multiphase galactic wind, observed at a wide variety of densities, temperatures, and velocities \citep[e.g.][]{Lehnert96, Martin05, Rupke05, Strickland07, Tripp11, Rubin14}, and over a large range of redshifts \citep[e.g.][]{Weiner09, Coil11, Nestor11, Bouche12, Kornei12, Bordoloi16}. Despite their ubiquity, fully characterizing these winds can prove difficult. Spatially-resolved observations of the wind's many phases remain challenging, even for the nearest star-forming systems \citep{Shopbell98, Westmoquette09, Rich10, Leroy15}. Different observational techniques and instruments are required for different phases, so amassing a complete picture for even a single galaxy represents a large coordinated effort. At higher redshifts, absorption line studies that trace outflowing gas in and around star-forming galaxies can be challenging to interpret as they require making assumptions about the wind's geometry \citep[e.g.][]{Rubin11, Bouche12}. While much progress has been made in recent years thanks to the installation of the Cosmic Origins Spectrograph on the \textit{Hubble Space Telescope}, large uncertainties still exist regarding the contributions of different phases of winds to the net mass, momentum, and energy content of outflows \citep{Heckman15}. Winds also play an important role in theoretical studies of galaxy evolution. Supernova-driven winds provide an attractive method of feedback in cosmological simulations, allowing galaxies to regulate their star formation rates and gas supply over cosmic time \citep[e.g.,][]{Oppenheimer08, Dave11, FaucherGiguere11, DallaVecchia12, Muratov15}. Recent simulations have successfully reproduced the galaxy stellar mass function across a wide range of redshifts by including phenomenologically-motivated wind models \citep{Vogelsberger14, Schaye15, Dave16}. 
However, the processes that launch winds and govern their evolution as they escape galaxies remain unresolved on the scale of cosmological simulations. We currently must turn to smaller-scale, higher-resolution simulations to learn more about the physical nature of the winds themselves. On these smaller physical scales, idealized simulations of galactic winds have also presented a theoretical challenge. Both analytic studies and hydrodynamic simulations of winds have had difficulty accelerating cool gas to the velocities observed in winds, because the dense phases get destroyed by hydrodynamic instabilities too quickly \citep[e.g.,][]{Zhang15, Scannapieco15, Bruggen16}. Magnetic fields may play an important role in stabilizing the cool gas \citep{McCourt15}, but without realistic comparisons to observations the most important physical processes at play in multiphase winds are difficult to ascertain. A detailed analysis of the momentum and energy budget of gas in different phases in these hydrodynamic simulations has not yet been conducted. These data would be valuable both for improving sub-grid prescriptions of winds in cosmological simulations and for comparing with observations to better determine where our theoretical understanding of winds fails. However, such a study requires high resolution across a large simulation volume in order to track the gas in different phases for significant periods of time. In this work, we aim to improve our theoretical understanding of multiphase galactic winds via high resolution, idealized simulations. Using the recently released Graphics Processor Unit (GPU)-based code \cholla\footnote{A public version of the \cholla code is available at: http://github.com/cholla-hydro/cholla} \citep{Schneider15}, we can perform hydrodynamic simulations of the interaction between cool and hot phases of a starburst-driven wind at high resolution ($<0.1$~pc) over a large volume ($>100$~pc).
The code performs well enough to compute such simulations on a static mesh, and thus to capture the interaction between the different phases of gas across a much larger region than any previous study \citep[e.g.][]{Cooper09, Scannapieco15, Banda-Barragan16}. The ability to track gas in each phase over long periods of time allows a direct probe of the momentum coupling between the hot and cool phases of the wind. In addition, the calculations add an element of physical realism to the cool gas by changing the initial density structure of the multiphase clouds to better match the features seen in spatially-resolved outflows of dense gas. Our simulations model a multiphase galactic wind as cold, dense interstellar or circumgalactic medium clouds embedded within a hot, rarefied background flow driven by supernovae. Because the cool material starts at rest with respect to the background wind, the initial interaction between the two phases drives a shock into the dense cloud. While the current work focuses on the cloud densities, shock Mach numbers, and physical scales relevant to galactic winds, the adopted numerical setup allows for comparisons with previous investigations of cloud-shock interactions. Because of its ubiquity in the ISM, the shock-cloud interaction problem has been studied by many authors. Early numerical work by \cite{Klein94} investigated the case of a planar shock interacting with a spherical cloud using two-dimensional, adiabatic simulations. Their work indicated that clouds encountering a shock typically survive for a few ``cloud crushing times," roughly the timescale for the initial shock to propagate through the cloud. For strong shocks, the cloud crushing time depends on the density contrast between the cloud and the ambient medium, the size of the cloud, and the speed of the shock. Earlier under-resolved numerical work came to similar conclusions \citep{Bedogni90, Nittmann82}.
These studies found that shocked clouds travel $\sim8$ cloud radii before mixing with the ambient medium as a result of hydrodynamic instabilities. Adiabatic three-dimensional simulations \citep{Stone92, Xu95} corroborated the two-dimensional results, and additionally attempted to account for different cloud geometries. Cloud geometry and orientation in those simulations did not affect the timescale for cloud fragmentation, but did substantially affect the late-time morphology of the clouds before they were destroyed. These early studies could reasonably ignore radiative cooling effects by limiting their scope to small clouds. In larger-scale problems where the cooling timescale is shorter than the dynamical timescale, thermal energy losses must be included. Many authors have investigated this regime \citep[e.g.,][]{Mellema02, Fragile04, Melioli05, Cooper09}, and demonstrated that radiative cooling inhibits destruction of the dense material and extends the lifetime of the cloud relative to the adiabatic case. Rather than efficiently mixing with the hot post-shock wind, radiatively-cooling clouds tend to get strung out into filaments containing individual ``cloudlets" of dense gas that can survive much longer. Other authors have investigated the effects of conduction \cite[e.g.,][]{Marcolini05, Orlando05, Bruggen16, Armillotta16} and magnetic fields \cite[e.g.,][]{MacLow94, Fragile05, Shin08, McCourt15, Banda-Barragan16} on the cloud-shock interaction, with varying results for the stabilization of the cloud. While multiple previous works studied a range of potentially-important physics, few explored the impact of the initial structure of the cloud on the results of cloud-shock interactions. Early work focused on modeling supernova remnants in the ISM, and a simple spherical cloud provided a sufficient approximation for the initial conditions.
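For concreteness, the cloud-crushing timescale invoked above is conventionally written $t_{\rm cc} = \chi^{1/2} R_{\rm cl} / v_{\rm shock}$ (the \citealp{Klein94} definition, with $\chi$ the cloud-to-ambient density contrast); a minimal sketch, with illustrative wind parameters rather than values from the simulations:

```python
import numpy as np

PC_IN_KM = 3.0857e13  # kilometres per parsec
MYR_IN_S = 3.156e13   # seconds per megayear

def cloud_crushing_time(chi, r_cloud_pc, v_shock_kms):
    """t_cc = sqrt(chi) * R_cl / v_shock, returned in Myr.
    For a strong shock, the shock driven into the cloud moves at
    roughly v_shock / sqrt(chi), so t_cc is its cloud-crossing time."""
    t_s = np.sqrt(chi) * r_cloud_pc * PC_IN_KM / v_shock_kms
    return t_s / MYR_IN_S

# e.g. a cloud 1000x denser than the wind, 5 pc radius, 1000 km/s hot wind
t_cc = cloud_crushing_time(chi=1000.0, r_cloud_pc=5.0, v_shock_kms=1000.0)
print(f"t_cc = {t_cc:.2f} Myr")
```

For these parameters $t_{\rm cc}$ is of order 0.1 Myr, which sets the clock against which cloud survival is measured.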
In radiatively-cooling galactic winds, however, the initial morphology of the cloud may have a profound effect on its evolution. Only \citet{Cooper09} previously studied how the internal structure might influence the cloud destruction, using a fractal cloud as a proxy for a realistic cloud in a galactic wind. They found that fractal clouds survived for less time than initially spherical clouds. More recently, \citet{Schneider15} examined how a turbulent interior cloud structure can alter the cloud crushing timescale in adiabatic simulations. Our current study aims to better quantify the differences in the physical picture for inhomogeneous clouds, and more broadly describe the way the gas phases in the outflow evolve. Specifically, we attempt to capture the region of parameter space relevant for the cool ($\sim10^4$ K) clouds observed in galactic winds near the disks of star-forming galaxies. In this regime, the wind can be adequately modeled as a hot ($\sim10^6$ K), supersonic fluid containing a population of embedded clouds of denser, cooler, initially stationary material. Depending on the exact density contrast between the cool and hot phases, the cooling timescale may fall below the local dynamic timescale and the simulations therefore should include radiative cooling. Other potentially relevant effects, such as conduction and magnetic fields, we leave for future study. An outline of our paper follows. We describe in Section~\ref{sec:wind_model} the model used to study the interaction between the multiple phases of the wind. In Section~\ref{sec:simulations} we explain the setup of our wind simulations. Section~\ref{sec:cloud_evolution} presents the qualitative evolution of the wind-cloud interaction, including the impact of the initial surface density of the cool gas on the cloud evolution. In Section~\ref{sec:phase_structure}, we describe in detail the density and temperature structure of the multiphase outflow. 
In Section~\ref{sec:momentum_coupling} we study the velocities of the gas and describe how momentum distributes between different phases of the wind. Section~\ref{sec:resolution} presents a resolution study focused on increasingly small-scale features in \turb clouds. Section~\ref{sec:discussion} contains our interpretation of these results, including a discussion of our findings in relation to previous work, possible effects of incorporating additional physical processes, and an analysis of the fate of dense gas within a gravitational potential. We summarize in Section~\ref{sec:summary}.
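The cooling-time criterion mentioned above (radiative cooling matters when the cooling timescale falls below the dynamical timescale) can be sketched with order-of-magnitude numbers. The constant cooling rate $\Lambda \approx 10^{-22}$ erg cm$^3$ s$^{-1}$ and the chosen $n$, $T$, $R$ and $v$ are illustrative assumptions, not values taken from the simulations:

```python
K_B = 1.380649e-16   # Boltzmann constant [erg/K]
PC_IN_CM_PER_KM = 3.0857e13  # km per parsec (for a km/s velocity)
MYR_IN_S = 3.156e13          # seconds per megayear

def t_cool_myr(n_cm3, T_K, lam=1e-22):
    """Isochoric cooling time ~ (3/2) n k T / (n^2 Lambda), in Myr,
    for an assumed constant cooling rate lam [erg cm^3 s^-1]."""
    return 1.5 * K_B * T_K / (n_cm3 * lam) / MYR_IN_S

def t_dyn_myr(r_pc, v_kms):
    """Crossing time R / v, in Myr."""
    return r_pc * PC_IN_CM_PER_KM / v_kms / MYR_IN_S

# Shocked cloud gas at n ~ 10 cm^-3, T ~ 1e5 K, in a 1000 km/s wind
# flowing past a 5 pc cloud:
tc, td = t_cool_myr(10.0, 1e5), t_dyn_myr(5.0, 1000.0)
print(f"t_cool = {tc:.1e} Myr, t_dyn = {td:.1e} Myr")
# t_cool < t_dyn -> radiative losses must be included
```

When the inequality holds, as here, the adiabatic approximation of the early shock-cloud studies breaks down.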
\label{sec:summary} In this work, we have modeled the hydrodynamic evolution of radiatively cooling clouds in the context of galactic winds with very high numerical resolution. Our study investigated two main parameters relevant to cold cloud survival: the initial structure of the cool gas and the median density of the cloud. We varied the cloud structure in our simulations between a lognormal density distribution with large-scale structure as set by turbulent processes and an idealized spherical distribution of gas. The median densities of our clouds ranged from $\tilde{n} = 0.1$--$1.0~\mathrm{cm}^{-3}$. The median density affects the overall destruction time of the cool gas via the cloud crushing time, as well as the efficiency of cooling within the cloud. We find that clouds with a \turb density structure are destroyed more quickly than clouds with a homogeneous spherical density distribution. This efficient destruction results in faster mass-loading of the hot wind, as intermediate- and low-density regions of \turb clouds are quickly heated, rarefied, and accelerated to the hot wind velocity. The entrainment of dense gas within cool \turb clouds proves extremely inefficient, and much less efficient than for idealized spherical initial conditions. The varying column densities present in \turb clouds result in very little acceleration of the densest regions, which are the only regions that survive for many cloud-crushing times. These effects are amplified as the resolution of the simulations is increased and the clouds are allowed to become increasingly realistic. We therefore conclude that entrainment of \turb ISM clouds in hot supernova winds does not explain the neutral gas observed at large distances from starburst galaxies, unless other physical processes (such as magnetic fields) substantially alter the results from the hydrodynamic case. We have also provided an extensive description of the phase structure of the gas in the wind.
Shortly after being shocked, the gas associated with the \turb clouds spreads over a large range of densities and temperatures, with the densest regions cooling down to temperatures of $T \sim 100$~K. Each phase of gas remains close to thermal pressure equilibrium with the hot ($\gg 10^6$~K) wind. Interestingly, though the majority of the mass remains in the densest phases ($n > 20$~$\mathrm{cm}^{-3}$) for much of the cloud evolution, the total momentum distributes fairly evenly across densities. Roughly the same amount of momentum transfers to cold neutral (hundreds of K), cool ionized ($\sim 10^4$~K), and warm ionized ($\sim 10^5$~K) gas.
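The pressure-equilibrium statement can be checked directly against the extreme phases quoted in the abstract, using only the quoted $n$ and $T$:

```python
# Phases quoted for the multiphase wind (n in cm^-3, T in K)
phases = {
    "hot wind":     (10**-2.5, 10**6.5),
    "cold neutral": (10**2.0,  10**2.0),
}
for name, (n, T) in phases.items():
    print(f"{name:12s}  P/k = n*T = {n * T:.3g} K cm^-3")
# Both phases give P/k = 1e4 K cm^-3: the quoted endpoints of the
# density-temperature distribution share the same thermal pressure.
```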
1607.06816_arXiv.txt
The presence of short-lived radioisotopes (SLRs) in Solar system meteorites has been interpreted as evidence that the Solar system was exposed to a supernova shortly before or during its formation. Yet results from hydrodynamical models of SLR injection into the proto-solar cloud or disc suggest that gas-phase mixing may not be efficient enough to reproduce the observed abundances. As an alternative, we explore the injection of SLRs via dust grains as a way to overcome the mixing barrier. We numerically model the interaction of a supernova remnant containing SLR-rich dust grains with a nearby molecular cloud. The dust grains are subject to drag forces and both thermal and non-thermal sputtering. We confirm that the expanding gas shell stalls upon impact with the dense cloud and that gas-phase SLR injection occurs slowly due to hydrodynamical instabilities at the cloud surface. In contrast, dust grains of sufficient size ($\gtrsim 1~\micron$) decouple from the gas and penetrate into the cloud within 0.1 Myr. Once inside the cloud, the dust grains are destroyed by sputtering, releasing SLRs and rapidly enriching the dense (potentially star-forming) regions. Our results suggest that SLR transport on dust grains is a viable mechanism to explain SLR enrichment.
\subsection{Short-lived radioisotopes} Calcium--aluminium-rich inclusions (CAIs) in chondritic meteorites are the oldest known Solar system solids, with ages over 4.567~Gyr \citep{2002Sci...297.1678A,2010E&PSL.300..343A}. Spectroscopic analyses of CAIs reveal isotopic excesses due to the {\em in situ} decay of short-lived radioisotopes (SLRs) \citep{1977ApJ...211L.107L}, so named because of their half-lives of $\lesssim$ a few Myr \citep{2001RSPTA.359.1991R,2003TrGeo...1..431M}. The radioactive decay of these SLRs, particularly $^{26}$Al, was an important source of heat during the first 10~Myr of Solar system evolution \citep{1955PNAS...41..127U}, fueling the differentiation of planetesimals \citep{2007M&PS...42.1529S} and the internal melting of ice in rocky bodies \citep{2005E&PSL.240..234T}. The sustained aqueous state due to SLRs in these bodies may have allowed the synthesis of amino acids -- the biomolecular precursors for life \citep{2014ApJ...783..140C}. The initial abundances of some SLRs in the early Solar system (ESS) may be enhanced above the Galactic background level \citep[][however, see \citealp{2013ApJ...775L..41J}]{2006Natur.439...45D}. The presence of `live' SLRs in the ESS seems remarkable; SLRs rapidly decay and must therefore either be produced locally or be quickly transported through the interstellar medium (ISM) from a nearby massive nucleosynthetic source \citep{1977ApJ...211L.107L}. In the latter case, the presence of a nearby massive star provides constraints on the birth environment of the Solar system, such as cluster size \citep{2010ARA&A..48...47A} and dynamical evolution \citep{2014MNRAS.437..946P, 2013A&A...549A..82P}. However, the conditions leading to enrichment are uncertain. The initial SLR abundances in other planet forming systems are unknown, but conditions similar to those in the ESS may be common \citep{2013ApJ...769L...8V,2013ApJ...775L..41J,2014E&PSL.392...16Y}.
The origin scenarios and initial abundances for SLRs are still a matter of debate, but it seems likely that both solar and extra-solar enrichment sources are required to explain the observed variety. Local mechanisms such as solar radiation-induced spallation reactions can produce some SLRs (e.g. $^{10}$Be) but not all (e.g. $^{60}$Fe) \citep{1976Sci...191...79H,2008ApJ...680..781G}. Although recent estimates of the initial $^{60}$Fe$/^{56}$Fe ratio argue against significant $^{60}$Fe enrichment \citep{2012E&PSL.359..248T}, the enhanced $^{26}$Al$/^{27}$Al ratio probably requires external sources \citep{2013GeCoA.110..190M}. Asymptotic giant branch (AGB) star winds \citep{1994ApJ...424..412W}, Wolf--Rayet (WR) winds \citep{1986ApJ...307..324P}, or Type II (core-collapse) supernova (SN) shock waves \citep{1977Icar...30..447C} could transport SLRs and contaminate the ESS at some phase of its evolution (e.g. pre-solar molecular cloud, pre-stellar core, or proto-planetary disc). \subsection{Supernova enrichment} Among the various enrichment sources, Type II supernovae (SNe) have received the most attention in the literature \citep{1977Icar...30..447C,1997ApJ...489..346F,2005ASPC..341..527O,2012ApJ...756..102P}. SNe are naturally associated with star-forming regions, and predicted SLR yields from SNe match reasonably well with ESS abundance estimates \citep{2000SSRv...92..133M}. Additional evidence is provided by the anomalous ratio of oxygen isotopes ([$^{18}$O]/[$^{17}$O]) in the Solar system, which is best explained by enrichment from Type II SNe \citep{2011ApJ...729...43Y}. Following the discovery of $^{26}$Al in CAIs, \citet{1977Icar...30..447C} suggested that a nearby SN could have simultaneously injected SLRs and triggered the collapse of the ESS. In this scenario, a single SN shock wave rapidly transports and deposits SLRs into an isolated marginally-stable pre-stellar core. 
The impinging shock wave compresses the core and triggers gravitational collapse, while at the same time generating Rayleigh--Taylor (RT) instabilities at the core surface that lead to mixing of SLRs with the solar gas. \citet{1997ApJ...489..346F} first demonstrated the plausibility of this scenario with hydrodynamical simulations, and subsequent iterations of the experiment \citep{2010ApJ...708.1268B, 2012ApJ...756L...9B, 2013ApJ...770...51B, 2014ApJ...788...20B, 2015ApJ...809..103B} have defined a range of acceptable shock wave parameters (e.g. speed, width, density) for enrichment. This `triggered collapse' scenario requires nearly perfect timing and choreography. The SN must be close to the pre-stellar core ($\lesssim$ 0.1--4~pc) at the time of explosion to prevent significant SLR radioactive decay during transit; yet the SN shock must slow considerably (from $\gtrsim 2000$~km~s$^{-1}$ at ejection to $\lesssim 70$~km~s$^{-1}$ at impact) to prevent destruction of the core, requiring either large separation ($\gtrsim 10$~pc) or very dense intervening gas ($\gtrsim 100$~cm$^{-3}$). \citet{2012ApJ...745...22G} demonstrated that injection at higher velocities (up to 270~km~s$^{-1}$) may be possible, but this is yet to be confirmed in three-dimensional models. The amount of SLRs injected in the `triggered collapse' scenario is typically below observed values; both \citet{2014ApJ...788...20B} and \citet{2012ApJ...745...22G} find SLR injection efficiencies $\lesssim 0.01$, compatible with only the lowest estimates for ESS values \citep{2008ApJ...688.1382T}. Enrichment relies on hydrodynamical mixing of the ejecta into the pre-stellar gas, primarily via RT fingers \citep{2012ApJ...756L...9B}. However, the (linear) growth rates of the involved fluid instabilities depend on the square root of the density contrast \citep{1961hhs..book.....C}, resulting in an inevitable impedance mismatch between the hot, diffuse stellar ejecta and the cold, dense pre-solar core.
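The decay-during-transit constraint above can be quantified with a one-line decay law, $N/N_0 = 2^{-t/t_{1/2}}$. The adopted half-lives (0.717 Myr for $^{26}$Al, 2.62 Myr for $^{60}$Fe) and the 10 pc / 100 km s$^{-1}$ transit below are illustrative assumptions, not values from the paper:

```python
HALF_LIFE_MYR = {"al26": 0.717, "fe60": 2.62}  # adopted half-lives [Myr]

def surviving_fraction(isotope, t_myr):
    """Fraction of an SLR surviving after time t: N/N0 = 2^(-t / t_half)."""
    return 2.0 ** (-t_myr / HALF_LIFE_MYR[isotope])

# Transit time for ejecta moving at 100 km/s across 10 pc:
t_transit = 10 * 3.0857e13 / 100.0 / 3.156e13  # ~0.1 Myr
print(t_transit, surviving_fraction("al26", t_transit))
```

Most of the $^{26}$Al survives a prompt $\sim$0.1 Myr transit, so the timing constraint is driven less by the crossing time itself than by any delay between ejection and injection.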
One possible solution to this mixing barrier problem is to concentrate the SN ejecta into dense clumps that can breach the cloud surface. The inner ejecta of Type II SNe are found to be clumpy and anisotropic in both observations \citep{2014Natur.506..339G,2015Sci...348..670B} and simulations \citep{2015A&A...577A..48W}. \citet{2012ApJ...756..102P} explore injection and mixing of clumpy SN ejecta into molecular clouds. The authors find that an over-dense clump can penetrate up to 1~pc into the target cloud, leaving a swath of enriched gas in its wake. Depending on the degree of clumpiness, the resulting enrichment can be comparable to ESS abundances. Here, we explore an alternative mechanism to overcome the mixing barrier: the injection of SLRs via dust grains. The ejecta from both stellar winds and SNe have been predicted to condense and form dust grains \citep{1979Ap&SS..65..179C,1981ApJ...251..820E,1989ApJ...344..325K}. This prediction is supported by observations that find some SNe produce large amounts of dust ($\gtrsim 0.1~{\rm M}_\odot$) soon after explosion \citep{2014ApJ...782L...2I,2015ApJ...800...50M}. In addition, meteorites contain pre-solar grains that originated in massive stars, including SNe \citep{2004ARA&A..42...39C}. Numerous authors \citep{1975ApJ...199..765C,2005ASPC..341..527O,2009ApJ...696.1854G} have suggested that these dust grains will contain SLRs, and in fact some pre-solar grains show evidence for {\em in situ} decay of $^{26}$Al \citep{2015ApJ...809...31G}. If the dust grains survive transport to the pre-solar cloud, they can dynamically decouple from the stalled shock front and penetrate into the dense gas, possibly delivering SLRs \citep{1981ApJ...251..820E,1997ApJ...489..346F}. \citet{2010ApJ...711..597O} have examined the role of dust grains in enrichment, considering injection into an already-formed proto-planetary disc.
Although the disc's small cross-section places strong constraints on the SN distance, the authors found that over 70 per cent of dust grains with radii greater than $0.4~\micron$ can survive the passage into the inner disc, where they are either stopped or destroyed. Both fates contribute SLRs to the forming star, suggesting dust grains may substantially enhance enrichment. However, injection at the disc phase may be too late; CAIs containing SLRs probably formed within the first 300,000 years of Solar system formation \citep{2005Sci...308..223Y}, prior to the proto-planetary disc phase. Injecting dust grains at the pre-stellar core phase may be more difficult. For grains impacting a dense pre-stellar core of number density $n \gtrsim 10^5~{\rm cm}^{-3}$, only grains with radii $a \ge 30~\micron$ are able to penetrate the stalled shock front and deposit SLRs into the core \citep{2010ApJ...717L...1B}. This $30~\micron$ threshold exceeds both simulated \citep{2015A&A...575A..95S} and meteoritic \citep{2004ARA&A..42...39C} SN grain radii (typically $a \lesssim 1~\micron$). Therefore, if injection via dust grains is to be a viable scenario, it must occur at an even earlier phase. Enriching the pre-solar molecular cloud prior to core formation has been suggested by several authors \citep{2009ApJ...696.1854G,2009ApJ...694L...1G,2014E&PSL.392...16Y} but remains largely untested with simulations. In this scenario, one to several massive stars, possibly across multiple generations, contribute SLRs to a large star-forming region. The Solar system then forms from the enriched gas, eliminating the need for injection into a dense core. To our knowledge, the only numerical simulations of this scenario are presented by \citet{2013ApJ...769L...8V}, with a follow-up by \citet{2016ApJ...826...22K}. The authors follow the enrichment of a massive ($\gtrsim 10^5~{\rm M}_\odot$) star-forming region over 20~Myr. A turbulent periodic box is allowed to evolve subject to star formation and SN feedback. 
The combined effect of numerous explosions leads to an overall enrichment of $^{26}$Al and $^{60}$Fe in star-forming gas. The authors use passive particles to track SLRs and rely on numerical diffusion to mimic the mixing between SN ejecta and cold gas. While the resulting enrichment is broadly consistent with observed ESS values, a more detailed understanding of the injection mechanism is still needed. \subsection{Motivation} We attempt to bridge the gap between the small-scale injection scenario of \citeauthor{2010ApJ...708.1268B} and the global, large-scale approach of \citeauthor{2013ApJ...769L...8V} by studying the interaction of a single SN remnant with a large, clumpy molecular cloud. We focus on the details of the injection mechanism, investigating in particular the role of SLR-rich dust grains. We use hydrodynamical simulations to follow the evolution of the gas and dust over 0.3~Myr. The dust grains are decelerated by drag forces and destroyed by thermal and non-thermal sputtering, releasing SLRs into the gas phase. We estimate the amount of SLRs injected into the cloud and determine the dust grain radii needed for successful injection to occur. We conclude from our simulations that sufficiently large ($a \gtrsim 1~\micron$) dust grains can rapidly penetrate the cloud surface and deposit SLRs within the cloud, long before any gas can hydrodynamically mix at the cloud surface. Nearly half of all incident dust grains sputter or stop within the cloud, enriching the dense (eventually star-forming) gas. Our results suggest that dust grains offer a viable mechanism to deposit SLRs in dense star-forming gas and may be the key to reproducing the canonical Solar system SLR abundances. We outline the numerical methods, including initial conditions and dust grain physics, in Section \ref{s:methods}. We describe measures and analytic estimates for the injection efficiency in Section \ref{s:estimates}. 
We present the results of our simulations in Section \ref{s:results} and discuss the implications for enrichment scenarios in Section \ref{s:discussion}. Finally, we summarize our conclusions in Section \ref{s:conclusions}.
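The grain-size thresholds discussed in this introduction can be rationalized with a simple geometric stopping-length argument: a grain decelerates once it has swept up roughly its own mass in gas, so $L_{\rm stop} \sim (4/3)(\rho_{\rm grain}/\rho_{\rm gas})\,a$. In the sketch below the grain density and mean molecular weight are assumed values; only the core density is taken from the text.

```python
# Rough geometric stopping length for a dust grain: a grain stops after
# sweeping up roughly its own mass in gas, L ~ (4/3)(rho_grain/rho_gas)*a.
# rho_grain and mu are assumed values, not taken from the text.
M_H = 1.67e-24           # g, hydrogen mass
PC = 3.086e18            # cm per parsec
rho_grain = 3.0          # g/cm^3, assumed silicate-like grain density
mu = 2.3                 # assumed mean molecular weight of molecular gas
n_gas = 1.0e5            # cm^-3, dense pre-stellar core (from the text)
rho_gas = n_gas * mu * M_H

for a_um in (0.1, 1.0, 30.0):
    a = a_um * 1e-4                                  # grain radius in cm
    L = (4.0 / 3.0) * (rho_grain / rho_gas) * a
    print(f"a = {a_um:5.1f} micron -> stopping length ~ {L / PC:.2e} pc")
```

Under these assumptions a $30~\micron$ grain travels $\sim 0.01$~pc through core-density gas, a non-negligible fraction of a pre-stellar core radius, while sub-micron grains stop almost immediately.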
\section{Conclusions} \label{s:conclusions} A nearby SN remains a possible candidate as the source of SLRs in the early Solar system. The main challenge in this `direct injection' scenario is overcoming the impedance mismatch between the hot, diffuse SNR gas and the cold, dense pre-solar gas, as demonstrated amply in the literature \citep{2012ApJ...756L...9B,2012ApJ...745...22G,2012ApJ...756..102P}. We explore whether dust grains formed from the SN ejecta and carrying SLRs can overcome the mixing barrier and enrich dense (potentially star-forming) gas. Using hydrodynamical simulations, we model the interaction of a SNR carrying dust grains with the pre-solar molecular cloud. We follow dust grains of varying initial radius ($a = 0.01$--$10~\micron$) subject to drag forces and sputtering. Our main findings are as follows: \begin{enumerate} \item Sufficiently large dust grains ($a \ge 1~\micron$) entrained in the SN ejecta will decouple from the shock front and survive entry into the molecular cloud. They will then be either completely stopped or sputtered, enriching the dense gas with SLRs within 0.1 Myr of the SN explosion. \item Smaller dust grains ($a \le 0.1~\micron$) formed in the SN ejecta will be either stopped or sputtered before impacting the molecular cloud. The sputtered SLRs will contribute to the enrichment through subsequent gas-phase mixing. \item Gas-phase SN ejecta will enrich the leading edge of the molecular cloud only after instabilities develop at the cloud surface. The degree of mixing depends strongly on the inclusion of radiative cooling. \end{enumerate} While it is still unknown what fraction of dust grains survive passage of the reverse shock and emerge from the SNR, we show that any surviving dust will contribute favorably to the typical SN enrichment scenario. Indeed, if a significant amount of large ($a \gtrsim 1~\micron$) grains survive, dust may be the dominant source of SLR enrichment in nearby molecular clouds. 
Most notably, the dust grain enrichment occurs rapidly, in contrast to typical gas-phase mixing, which relies on the growth of hydrodynamical instabilities at the cloud surface. A shorter time delay between production and injection of the SLRs prevents substantial radioactive decay. Finally, if the various SLRs condense into different-sized dust grains, drag and sputtering will lead to a spatial stratification of SLRs within the pre-solar cloud. This could explain the large discrepancy in the $^{60}$Fe/$^{26}$Al mass ratio between SN predictions and meteoritic measurements. We conclude that dust grains can be a viable mechanism for the transport of SLRs into the pre-solar cloud.
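The benefit of a short production-to-injection delay follows directly from the decay law $f = 2^{-t/t_{1/2}}$. The half-lives used below are standard literature values rather than numbers from the text.

```python
# Surviving fraction of 26Al and 60Fe after a given production-to-
# injection delay, f = 2**(-t / t_half). Half-lives are standard
# literature values (not from the text).
half_life = {"26Al": 0.717, "60Fe": 2.62}    # Myr

for delay in (0.1, 1.0):                     # delay in Myr
    surv = {s: 2.0 ** (-delay / t) for s, t in half_life.items()}
    print(f"delay {delay:3.1f} Myr: " +
          ", ".join(f"{s}: {f:.0%}" for s, f in surv.items()))
```

With a 0.1~Myr delay both isotopes retain $\gtrsim 90$ per cent of their initial abundance, whereas a $\sim 1$~Myr delay already destroys more than half of the $^{26}$Al.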
We present a theoretical analysis of formation and kinetics of hot OH molecules in the upper atmosphere of Mars produced in reactions of thermal molecular hydrogen and energetic oxygen atoms. Two major sources of energetic O considered are the photochemical production, via dissociative recombination of O$_{2}^{+}$ ions, and energizing collisions with fast atoms produced by the precipitating Solar Wind (SW) ions, mostly H$^+$ and He$^{2+}$, and energetic neutral atoms (ENAs) originating in the charge-exchange collisions between the SW ions and atmospheric gases. Energizing collisions of O with atmospheric secondary hot atoms, induced by precipitating SW ions and ENAs, are also included in our consideration. The non-thermal reaction O + H$_2(v,j) \rightarrow$ H + OH$(v',j')$ is described using recent quantum-mechanical state-to-state cross sections, which allow us to predict non-equilibrium distributions of excited rotational and vibrational states $(v',j')$ of OH and expected emission spectra. A fraction of the produced translationally hot OH is sufficiently energetic to overcome Mars' gravitational potential and escape into space, contributing to the hot corona. We estimate the total escape flux from the dayside of Mars for low solar activity conditions at about $5\times10^{22}$ s$^{-1}$, or about 0.1\% of the total escape rate of atomic O and H. The described non-thermal OH production mechanism is general and expected to contribute to the evolution of atmospheres of planets, satellites, and exoplanets with similar atmospheric compositions.
The escape of volatile atmospheric species to space is important for understanding the evolution of Mars' atmosphere and climate as it transitioned from the conditions that supported liquid water into the cold, dry, low-pressure climate that we witness today \citep{1972Sci...177..986M,2008SSRv..139..355J,2013SSRv..174..113L,2015SSRv..195..357L}. While it is well established that the evaporation of the Martian atmosphere is driven by the interaction with the solar radiation and interplanetary plasma, with the absence of an intrinsic planetary magnetic field and Mars' lower mass accelerating the process \citep{2004P&SS...52.1039C,2008SSRv..139..355J}, the detailed physical mechanisms and their mutual interactions are still not fully understood, and 3D global atmospheric models cannot simultaneously explain all observed effects \citep{2015SSRv..195..357L,2015GeoRL..42.9015L,2015JGRE..120.1880L}. Attempting to resolve the remaining unanswered questions and shed light on the water inventory in the early history of Mars is the main scientific objective of NASA's ongoing Mars Atmosphere and Volatile Evolution (MAVEN) mission \citep{2015SSRv..195....3J,2015SSRv..195..423B}. At the present time, the atmospheric escape from Mars comprises thermal (Jeans) escape and various non-thermal mechanisms, including photo-chemical escape of neutrals \citep{2004P&SS...52.1039C,2008SSRv..139..355J,2013SSRv..174..113L,2015JGRE..120.1880L} and escape of ions governed by the interplay of the solar wind with the induced Martian magnetosphere and crustal magnetic fields \citep{acuna1999global,2004SSRv..111...33N,2015JGRA..120.7857D,2015GeoRL..42.8870R}. Major escaping species include atomic hydrogen, oxygen, and carbon, of which the first two directly affect the estimates of water abundance on primordial Mars \citep{2013SSRv..174..113L}. 
A major photochemical process responsible for the escape of neutrals heavier than hydrogen is dissociative recombination (DR) of O$_{2}^{+}$, which serves as a major source of hot O atoms that either directly escape to space or form a hot oxygen corona \citep{1988Icar...76..135I,1993GeoRL..20.1747F,2005SoSyR..39...22K,2015JGRE..120.1880L,2015GeoRL..42.9009D}. The nascent suprathermal O atoms can collide with thermal background gases in the upper atmosphere and transfer sufficient kinetic energy to eject them to space. This non-thermal escape mechanism of light elements, also known as collisional ejection, was studied for He atoms \citep{2011GeoRL..3802203B}, and for H$_2$ and HD molecules \citep{2012GeoRL..3910203G}, which were found to produce significant fluxes and possibly affect the H/D ratio in Mars' upper atmosphere by 5--10\%. One of the major goals of our research, reported in this article, is to develop a consistent model of the production of non-thermal atoms and molecules in the Martian atmosphere and describe non-equilibrium atmospheric reactions caused by hot particles. In this study, we explore reactive collisions of hot O atoms with H$_2$ molecules, leading to the formation of rotationally-vibrationally (RV) excited OH molecules in Mars' upper atmosphere. The description of the reaction is based on quantum-mechanical state-to-state reactive cross sections at high temperatures \citep{2014JChPh.141p4324G}, while kinetic theory is used to calculate the energy transfer to translational and internal degrees of freedom of the products. We use a 1D model of the Martian atmosphere to estimate altitude profiles of the total formation and escape rates of OH molecules and non-thermal RV distributions for a selected orbital geometry and solar activity. In addition to DR, secondary hot O atoms energized in collisions with energetic neutral atoms (ENAs) \citep{2014ApJ...790...98L} are considered as an efficient source of hot O atoms in our model.
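The translational energy scale required for nascent OH to escape can be sketched from first principles. The exobase altitude assumed below ($\sim 200$~km) is an illustrative choice, not a value taken from the text.

```python
# Back-of-envelope: minimum translational energy for an OH molecule to
# escape Mars from ~200 km altitude (assumed exobase height; not a value
# from the text).
G = 6.674e-11            # m^3 kg^-1 s^-2, gravitational constant
M_MARS = 6.417e23        # kg
R_MARS = 3.39e6          # m, mean radius
AMU = 1.661e-27          # kg
EV = 1.602e-19           # J per eV

r = R_MARS + 200e3                       # assumed exobase radius
v_esc = (2 * G * M_MARS / r) ** 0.5      # local escape speed
m_oh = 17 * AMU                          # mass of OH
E_esc = 0.5 * m_oh * v_esc**2
print(f"v_esc ~ {v_esc / 1e3:.1f} km/s, E_esc(OH) ~ {E_esc / EV:.1f} eV")
```

Only collisions that leave OH with roughly 2~eV or more of translational energy contribute to escape, consistent with only a small fraction of the produced OH being able to leave the planet.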
We report the first theoretical model of non-thermal formation of excited OH molecules in the upper atmosphere of Mars in reactions of translationally hot O atoms and atmospheric H$_2$. The produced OH is very energetic, capable of escaping into space, and expected to contribute a small fraction of rotationally and vibrationally excited hydroxyl molecules to the Martian hot corona. These OH molecules are expected to have lifetimes of up to several hours, provided they are not destroyed in collisions. We estimate that the process contributes up to about 0.1\% of the total rate of escape of the Martian atmosphere to space. Our model is based on state-to-state energy-dependent reactive cross sections for the O + H$_2$ reaction and a 1D model of transport in the Martian atmosphere. The hot O atoms produced by dissociative recombination of O$_2^+$ and by collisions of thermal oxygen atoms with H and He ENAs were considered. The ENAs are found to contribute less than 1\% to the total formation rates for the present-day solar flux. This non-thermal process could be identified by its characteristic emission profile from high rotational and vibrational states of OH (vibrational states up to $v'=6$ could be excited). Recently, Meinel emission bands up to $(3-2)$ and $(3-1)$ were detected in limb observations of Mars' atmosphere with the CRISM instrument on MRO and taken as a signature of the presence of OH in the Martian atmosphere \citep{2013Icar..226..272T}. The emissions are likely to originate from altitudes between 45 and 55 km, where the concentration of OH is the highest, and are described well by existing models that include the relevant chemical processes in the middle atmosphere. While low densities of OH in the upper atmosphere present a significant difficulty for observations, identification of emissions from high Meinel OH bands, with intensities following the predicted altitude profile, would confirm the presence of this non-thermal process. 
Moreover, the described non-thermal mechanism and detailed information about populated excited states may be helpful in interpreting high-resolution spectra of neutrals in the upper atmosphere of Mars, leading to more accurate estimates of total escape rates of H$_2$ and O from Mars. We expect that the developed model can be adapted to other planetary atmospheres where the described high-temperature reactions can take place, including comets.
We provide an analytic framework for interpreting observations of multiphase circumgalactic gas that is heavily informed by recent numerical simulations of thermal instability and precipitation in cool-core galaxy clusters. We start by considering the local conditions required for the formation of multiphase gas via two different modes: (1) uplift of ambient gas by galactic outflows, and (2) condensation in a stratified stationary medium in which thermal balance is explicitly maintained. Analytic exploration of these two modes provides insights into the relationships between the local ratio of the cooling and freefall time scales (i.e., $t_{\rm cool} / t_{\rm ff}$), the large-scale gradient of specific entropy, and development of precipitation and multiphase media in circumgalactic gas. We then use these analytic findings to interpret recent simulations of circumgalactic gas in which global thermal balance is maintained. We show that long-lasting configurations of gas with $5 \lesssim \min (t_{\rm cool} / t_{\rm ff}) \lesssim 20$ and radial entropy profiles similar to observations of local cool-core galaxy cluster cores are a natural outcome of precipitation-regulated feedback. We conclude with some observational predictions that follow from these models. This work focuses primarily on precipitation and AGN feedback in galaxy cluster cores, because that is where the observations of multiphase gas around galaxies are most complete. However, many of the physical principles that govern condensation in those environments apply to circumgalactic gas around galaxies of all masses.
\setcounter{footnote}{0} The relationship between thermal instability and galaxy formation is a classic topic in theoretical astrophysics that has recently come back into fashion. Its reemergence has been driven by the need to understand how accretion onto supermassive black holes regulates cooling and star formation in galaxy-cluster cores. Both observational and theoretical evidence is accumulating in support of the idea that development of a multiphase medium through condensation in the vicinity of a supermassive black hole triggers strong feedback that limits further condensation \citep[e.g.,][]{ps05,Soker06,Cavagnolo+08,ps10,McCourt+2012MNRAS.419.3319M,Sharma_2012MNRAS.420.3174S,Gaspari+2012ApJ...746...94G,Gaspari+2013MNRAS.432.3401G,Gaspari_2015A&A...579A..62G,Voit_2015Natur.519..203V,Li_2015ApJ...811...73L,Tremblay2016}. Perhaps most intriguingly, if there is a similar link between feedback heating and condensation of circumgalactic gas around smaller galaxies, then this regulation mechanism has much broader implications for galaxy evolution \citep[e.g.,][]{Soker10_galform,Sharma+2012MNRAS.427.1219S,Voit_PrecipReg_2015ApJ...808L..30V}. \subsection{Heritage of the Topic} All modern discussions of thermal instability in the context of galaxy formation are rooted in the classic work of \citet{ReesOstriker1977MNRAS.179..541R}, \citet{Binney1977ApJ...215..483B}, and \citet{Silk1977ApJ...211..638S}, which themselves owe a debt to \citet{Hoyle_1953ApJ...118..513H}. These landmark papers derived the maximum stellar mass of a galaxy ($\sim 10^{12} \, M_\odot$) by comparing the time for gas to fall through a galaxy's potential well to the time required to cool from the potential's virial temperature. If the cooling time is less than the freefall time, then infalling gas can potentially condense and fragment into star-forming clouds via thermal instability. 
That can happen relatively easily in galaxy-scale objects with virial temperatures $\lesssim 10^7$~K but is more difficult to arrange in hotter, more massive systems, leading to a natural division between the mass scales of individual galaxies and those of galaxy groups and clusters. Many subsequent papers have made interesting use of the cooling-time to freefall-time ratio to analyze how galaxies form and evolve \citep[e.g.,][]{BFPR_1984Natur.311..517B,FallRees1985ApJ...298...18F,MallerBullock_2004MNRAS.355..694M}. However, some form of negative feedback is necessary to explain the inefficient transformation of a galaxy's gas supply into stars \citep[e.g.,][]{Larson_1974MNRAS.169..229L,wr78,DekelSilk1986ApJ...303...39D,WhiteFrenk1991ApJ...379...52W,Baugh_1998ApJ...498..504B,SomervillePrimack_1999MNRAS.310.1087S,KauffmannHaehnelt_2000MNRAS.311..576K}. A complete understanding of the relationship between thermal instability and galaxy formation must therefore account for the interplay between radiative cooling and the energetic feedback that opposes cooling. \citet{Field65} worked out many of the fundamental features of astrophysical thermal instability but did not consider the complications that arise when thermal instability couples with buoyancy. The most comprehensive analytical treatment of that coupling is by \citet{bs89}, who astutely summarized much of the preceding work. Balbus \& Soker were primarily concerned with the development of inhomogeneity in galaxy-cluster cores. At the time, hot gas in the cores of many galaxy clusters was suspected to condense at rates $\sim 10^{2-3} \, M_\odot \, {\rm yr}^{-1}$ \citep[e.g.,][]{Fabian94}, but models of homogeneous cooling flows into the cluster's central galaxy produced X-ray surface-brightness profiles with central peaks far greater than were observed. 
This conundrum led to speculation that the mass inflow rate could decline inward because of thermal instability and spatially distributed condensation, which would reduce the radiative losses required to maintain a steady state at the center of the flow \citep[e.g.,][]{Thomas_1987MNRAS.228..973T,WhiteSarazin_analytical_1987ApJ...318..612W,WhiteSarazin_data_1987ApJ...318..621W,WhiteSarazin_numerical_1987ApJ...318..629W}. Upon closer examination, this speculation was found to be problematic, because buoyancy generally tends to suppress the development of thermal instability \citep[e.g.,][]{Cowie_1980MNRAS.191..399C,Nulsen_1986MNRAS.221..377N,bs89}. The attention of the field therefore gradually shifted away from steady-state cooling-flow models in favor of models in which feedback from a central active galactic nucleus compensates for cooling \citep[e.g.,][]{TaborBinney1993MNRAS.263..323T,BinneyTabor_1995MNRAS.276..663B,Soker+01,mn07,McNamaraNulsen2012NJPh...14e5023M}. \subsection{Renaissance of the Topic} More than two decades later, the coupling between buoyancy and thermal instability is being re-examined, because the presence of inhomogeneous gas in galaxy-cluster cores consisting of multiple phases that are orders of magnitude cooler and denser than the ambient medium now appears closely linked with the ratio of cooling time to freefall time \citep[e.g.,][]{McCourt+2012MNRAS.419.3319M,Gaspari+2012ApJ...746...94G,VoitDonahue2015ApJ...799L...1V,Voit_2015Natur.519..203V}. In those studies, the cooling time is typically defined with respect to the specific heat at constant volume, so that $t_{\rm cool} \equiv [ 3kT / n_e \Lambda (T) ] (n/2n_i)$, where $\Lambda (T)$ is the usual cooling function at temperature $T$, and the number densities of electrons, ions, and gas particles are $n_e$, $n_i$, and $n$, respectively. The freefall time $t_{\rm ff} \equiv (2r/g)^{1/2}$ is defined with respect to the local gravitational acceleration $g$ at radius $r$. 
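For orientation, these two definitions can be evaluated for a representative cool-core cluster. All of the plasma and potential parameters below are assumed illustrative values, not numbers taken from the paper.

```python
# Illustrative evaluation of t_cool / t_ff from the definitions above:
# t_cool = [3kT / (n_e * Lambda)] * (n / 2 n_i) and t_ff = (2r/g)^{1/2}.
# All input numbers are assumed cool-core-like values, not from the text.
K_B = 1.381e-16          # erg/K, Boltzmann constant
KPC = 3.086e21           # cm per kpc
MYR = 3.156e13           # s per Myr

T = 3e7                  # K, ambient temperature (~2.6 keV), assumed
n_e = 0.1                # cm^-3, electron density, assumed
Lam = 1e-23              # erg cm^3 s^-1, rough cooling function, assumed
n_over_2ni = 1.045       # n / (2 n_i) for a fully ionized H/He plasma

r = 10 * KPC
v_c = 7.0e7              # cm/s, assumed circular velocity (~700 km/s)
g = v_c**2 / r           # local gravitational acceleration

t_cool = 3 * K_B * T / (n_e * Lam) * n_over_2ni
t_ff = (2 * r / g) ** 0.5
print(f"t_cool ~ {t_cool / MYR:.0f} Myr, t_ff ~ {t_ff / MYR:.0f} Myr, "
      f"ratio ~ {t_cool / t_ff:.0f}")
```

With these assumptions the ratio comes out at a few tens, i.e. within striking distance of the $t_{\rm cool} / t_{\rm ff} \approx 10$ floor discussed next; modestly higher densities or lower temperatures push it onto the floor.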
Given these definitions, a floor appears to be present near $t_{\rm cool} / t_{\rm ff} \approx 10$ in the radial cooling-time profiles of the ambient hot gas in galaxy clusters. A large majority of the cluster cores known to contain multiphase gas have minimum values of $t_{\rm cool} / t_{\rm ff}$ within a factor of 2 of this floor. Conversely, almost all the clusters without multiphase gas have $\min (t_{\rm cool} / t_{\rm ff}) > 20$. \citet{McCourt+2012MNRAS.419.3319M} and \citet{Sharma_2012MNRAS.420.3174S} interpreted this relationship between multiphase gas and $t_{\rm cool} / t_{\rm ff}$ as resulting from amplification of initially small perturbations by thermal instability. Under conditions of global thermal balance, numerical simulations of thermal instability in a plane-parallel potential by \citet{McCourt+2012MNRAS.419.3319M} showed that $t_{\rm cool} / t_{\rm ff} \lesssim 1$ was required for thermal instability resulting in condensation, but the simulations of \citet{Sharma_2012MNRAS.420.3174S} in a spherically symmetric potential indicated that condensation could happen in spherical systems with $t_{\rm cool} / t_{\rm ff} \lesssim 10$. The threshold value of $t_{\rm cool} / t_{\rm ff}$ was therefore assumed to be geometry dependent. Subsequent simulations implementing more sophisticated treatments of feedback appeared to corroborate that assumption because they showed that condensation-fueled accretion of cold gas onto a central black hole can lead to long-lasting self-regulation with $\min (t_{\rm cool} / t_{\rm ff}) \approx 10$ \citep[e.g.,][]{Gaspari+2012ApJ...746...94G,LiBryan2014ApJ...789...54L,Li_2015ApJ...811...73L,Prasad_2015ApJ...811..108P}. In the meantime, it has become clear that the critical ratio of $t_{\rm cool} / t_{\rm ff}$ is not a geometry-dependent manifestation of local thermal instability. 
For example, numerical simulations by \citet{Meece_2015ApJ...808...43M} of thermal instability in a plane-parallel potential under conditions seemingly quite similar to those adopted by \citet{McCourt+2012MNRAS.419.3319M} showed that condensation could occur at the midplane of systems with any value of $t_{\rm cool} / t_{\rm ff}$. Also, \citet{ChoudhurySharma_2016MNRAS.457.2554C} have presented a detailed thermal stability analysis of systems in global thermal balance showing that the growth rates of linear perturbations are largely independent of the gravitational potential's geometry. So why, then, do both real and simulated galaxy-cluster cores appear to self-regulate at $\min (t_{\rm cool} / t_{\rm ff}) \approx 10$? \subsection{Precipitation-Regulated Feedback} The most general answer seems to involve a phenomenon that we have come to call precipitation. As feedback acts on a galaxy-scale system, the outflows it drives can promote condensation of the hot ambient medium by raising some of it to greater altitudes \citep[e.g.,][]{Revaz_2008A&A...477L..33R,LiBryan2014ApJ...789..153L,McNamara_2016arXiv160404629M}. Adiabatic uplift promotes condensation by lowering the $t_{\rm cool} / t_{\rm ff}$ ratio of the uplifted gas (see \S \ref{sec-tc_isoK}). This process is loosely analogous to the production of raindrops during adiabatic cooling of uplifted humid gas in a thunderstorm---hence, the name ``precipitation.'' As in a thunderstorm, the condensates rain down toward the bottom of the potential well after they form. In simulations, this rain of cold gas into the galaxy at first provides additional fuel for feedback and temporarily boosts the strength of the outflows, but eventually those strengthening outflows add enough heat to the ambient medium to raise $t_{\rm cool} / t_{\rm ff}$ high enough to stop the condensation. Precipitation is therefore naturally self-regulating. 
A prescient series of ``cold feedback'' papers by Soker \& Pizzolato anticipated many features of the precipitation mechanism now seen in simulations \citep{ps05,Soker06,Soker08,ps10}. They proposed a feedback cycle in cluster cores in which energetic AGN outbursts produce a wealth of non-linear density perturbations, the densest of which cool, condense, fall back toward the black hole, and provide more fuel for accretion. In this scenario, the cooling times of the blobs must be short enough that $t_{\rm cool} \lesssim t_{\rm ff}$ and also shorter than the time interval between AGN heating outbursts. They argued that this source of accretion fuel could potentially provide much more fuel than Bondi accretion from the hot medium alone while responding to changes in the ambient cooling time far more quickly. They also noted that a shallow central entropy gradient would promote condensation. Shortly thereafter, Gaspari and collaborators produced sets of numerical simulations in which such a cold feedback loop was realized \citep[][but see also Sharma et al. 2012b]{Gaspari+2012ApJ...746...94G,Gaspari+2013MNRAS.432.3401G,Gaspari_2015A&A...579A..62G}. In these simulations, cold clouds form through thermal instability and accrete toward the center of the simulation volume while the heating mechanism needed to maintain approximate global thermal balance stimulates turbulence. This turbulence is critical, because it ensures a steady supply of cold gas blobs with low specific angular momentum, which can plunge to the center through a process the authors call ``chaotic cold accretion.'' The proliferation of terminology has a way of making these mechanisms seem more different than they really are. 
Precipitation-regulated feedback, as described in this paper, is a ``cold feedback'' mechanism that fuels a central black hole through ``chaotic cold accretion.'' Here we are proceeding with the term ``precipitation,'' despite the prior existence of these other terms, because the processes that promote thermal instability and condensation in circumgalactic gas may be responsible for more than just the feeding of black holes. Therefore, they merit a more general term. \subsection{Implications for Galaxy Evolution} Precipitation is potentially of broader interest because it may link a galaxy's time-averaged star-formation rate with the multiphase structure of its circumgalactic gas. Observations of circumgalactic absorption lines are showing that the masses of gas and metals within a few hundred kpc of a galaxy are at least as great as the galaxy's mass in the form of stars \citep[e.g.,][]{Tumlinson_2011Sci...334..948T}. Also, the amount of circumgalactic gas at intermediate temperatures ($10^5$-$10^6$~K) appears closely linked with a galaxy's star-formation rate. Observations of the circumgalactic medium at other wavelengths likewise show rich multiphase structure \citep[e.g.,][]{Putman_2012ARA&A..50..491P}, which has been challenging for simulations to reproduce \citep[e.g.,][]{Hummels_2013MNRAS.430.1548H,Ford_2016MNRAS.459.1745F}. \citet{Sharma+2012MNRAS.427.1219S} proposed that thermal instability, through precipitation-regulated feedback, places a lower limit of $t_{\rm cool} / t_{\rm ff} \approx 10$ on the ambient density of circumgalactic gas \citep[but see also][]{Meece_2015ApJ...808...43M}. \citet{Voit_PrecipReg_2015ApJ...808L..30V} built on that idea to show how precipitation-regulated feedback could be responsible for governing not only the relationships between stellar mass, metallicity, and stellar baryon fraction observed among galaxies but also the relationship between a galaxy's stellar velocity dispersion and the mass of its central black hole. 
However, they did so without having a satisfactory explanation for the crucial assumption that $\min(t_{\rm cool} / t_{\rm ff}) \approx 10$ or a complete model for the global structure of precipitation-regulated systems. \subsection{Purpose of the Paper} This paper's purpose is to propose a global context for interpreting observations of multiphase gas around galaxies, as well as the numerical simulations that strive to reproduce those observations, in terms of the long legacy of theoretical papers on astrophysical thermal instability. Many of the theoretical results derived here were published decades ago by others, most notably by \citet{Defouw_1970ApJ...160..659D,Cowie_1980MNRAS.191..399C,Nulsen_1986MNRAS.221..377N,Malagoli_1987ApJ...319..632M,Loewenstein_1989MNRAS.238...15L}; and \citet{bs89}. Our re-derivations of them are intended to provide a common conceptual framework for a meta-analysis of simulations in which precipitation occurs. We are focusing on simulations of precipitation in galaxy-cluster cores, because that is where the observations of multiphase gas around galaxies and interactions of outflows with the circumgalactic medium are most complete. However, many of the physical principles that govern condensation in those environments apply to circumgalactic gas around galaxies of all masses. \begin{figure*}[t] \begin{center} \includegraphics[width=6.75in, trim = 0.0in 0.0in 0.0in 0.0in]{f1.pdf} \\ \end{center} \caption{ \footnotesize This schematic cartoon outlines the main ideas presented in the paper. On the left is a diagram of a galactic environment in which feedback is active. Accretion of condensed gas onto the central black hole releases feedback energy, and a bipolar outflow distributes that energy over a large volume. 
In order for the system to develop a well-regulated feedback loop, it must separate into two zones, an inner ``isentropic zone'' and an outer ``power-law zone'' in which the specific entropy ($K \equiv kTn_e^{-2/3}$) follows $d \ln K / d \ln r \approx 1$, as observed in both real and simulated galaxy-cluster cores (\S \S \ref{sec-GlobalBalance},\ref{sec-Illustrations}). The power-law entropy gradient allows buoyancy to limit the growth of thermal instability (\S \S\ref{sec-GeneralConsiderations},\ref{sec-TI_NumericalSimulations}), implying that condensation in the power-law zone requires uplift of lower-entropy gas (\S \ref{sec-tc_isoK}). In contrast, buoyancy cannot suppress thermal instability in the isentropic zone, which proceeds to develop multiphase structure, as indicated by the dashed lines showing the dispersion in $K$ at each $r$ (\S \S \ref{sec-TI_NumericalSimulations},\ref{sec-Illustrations}). Phenomenologically, the ratio of cooling time to free-fall time in the ambient medium is observed to reach a minimum value in the range $5 \lesssim t_{\rm cool} / t_{\rm ff} \lesssim 20$ at the boundary between these zones in cluster cores with multiphase gas (\S \ref{sec-Illustrations}). Such a system cannot remain in a steady feedback-regulated state with a central cooling time $\lesssim 1$~Gyr unless a large proportion of the feedback energy is thermalized {\em outside} of the central isentropic region (\S\ref{sec-GlobalBalance}). \vspace*{1em} \label{fig-CartoonVersion}} \end{figure*} \subsection{A Readers' Guide} \label{sec-ReadersGuide} Busy readers may wish to be selective in deciding which sections of this long paper will reward their close attention. For them, we have prepared this guide, along with a cartoon (Figure \ref{fig-CartoonVersion}) that sketches out the main ideas. \begin{itemize} \item The next three sections (\S \S \ref{sec-tc_isoK}-\ref{sec-TI_NumericalSimulations}) consider the local conditions required for condensation and precipitation. 
Two different modes of precipitation emerge from those considerations. One is analogous to rain that is stimulated by uplift of humid gas in Earth's atmosphere, because of the role that adiabatic cooling plays in bringing on condensation. The other is analogous to drizzle or fog, in that condensation happens without uplift, when the conditions are right. \begin{itemize} \item Section \ref{sec-tc_isoK} initiates the discussion with a brief reminder about the deep connections between adiabatic uplift and condensation. It also presents a short calculation showing that ambient gas uplifted at speeds comparable to a halo's circular velocity is likely to condense if it initially has $t_{\rm cool} / t_{\rm ff} \lesssim 10$. This finding suggests that the ambient medium around a galaxy cannot persist in a state with $t_{\rm cool} / t_{\rm ff} \ll 10$ if there is significant vertical circulation. Galactic outflows in which the energy source is fueled by condensation therefore tend to drive the ambient medium toward $t_{\rm cool} / t_{\rm ff} \approx 10$. \item Section \ref{sec-GeneralConsiderations} outlines the general conditions necessary for thermal instability without uplift to progress to condensation in a hydrostatic medium that is thermally balanced within each equipotential layer. The main result is that the condition for condensation to occur depends not only on $t_{\rm cool} / t_{\rm ff}$ but also on the slope of the entropy gradient: Thermal instability leads to condensation only if a low-entropy perturbation can cool faster than it sinks to a layer of equivalent entropy. This is not a new result, but its significance is often not fully appreciated. \item Section \ref{sec-TI_NumericalSimulations} uses the results of \S \ref{sec-GeneralConsiderations} to interpret recent simulations of thermal instability in circumgalactic gas. 
In particular, it calls attention to the critical role of the global entropy gradient in determining where condensation can occur and where it is suppressed by buoyancy. The main result is that media in thermal balance are prone to condensation in regions where the large-scale entropy gradient is flat. However, buoyancy tends to delay the onset of condensation if $t_{\rm cool} / t_{\rm ff} \gg 1$. \end{itemize} \item The following two sections (\S \S \ref{sec-GlobalBalance},\ref{sec-Illustrations}) apply the findings from the first part of the paper to interpret the global evolution of simulated galactic systems in which condensation fuels feedback. Our objective is to understand why those systems end up in long-lasting configurations with $5 \lesssim \min (t_{\rm cool} / t_{\rm ff}) \lesssim 20$ and with radial entropy profiles in agreement with observations of multiphase galaxy-cluster cores. \begin{itemize} \item Section \ref{sec-GlobalBalance} examines recent simulations of condensation in globally balanced but locally unstable galactic systems in light of the findings summarized in \S \ref{sec-TI_NumericalSimulations}. It points out that feedback is {\em required} for the development of a multiphase medium, because phase separation cannot happen in a globally balanced medium without a flow of free energy through the system. Condensation also cannot happen through linear thermal instability in regions with a significant entropy gradient and $t_{\rm cool} \gg t_{\rm ff}$. Steady self-regulation of a precipitating system therefore favors a global configuration with a shallow inner entropy gradient and a steeper outer entropy gradient (Figure \ref{fig-CartoonVersion}). The shallow inner gradient promotes the precipitation needed for fuel, while the steeper outer entropy gradient prevents condensation from running away into a cooling catastrophe. 
The boundary between these regions tends to be where $t_{\rm cool} / t_{\rm ff}$ reaches a minimum value $\sim 10$, for reasons outlined in \S \ref{sec-tc_isoK}. In order to ensure long-term global stability with a central cooling time $\lesssim 1$~Gyr, much of the feedback energy must propagate beyond the isentropic zone before thermalizing. This finding has deep implications for implementations of black-hole feedback in numerical simulations. \item Section~\ref{sec-Illustrations} puts all these pieces together to interpret the time-dependent behavior of precipitation-regulated feedback in a numerical simulation from \citet{Li_2015ApJ...811...73L}. It shows that unopposed cooling leads to a power-law entropy gradient at the center of the system, which focuses condensation onto the central black hole. The feedback response disrupts that central gradient out to where $t_{\rm cool} / t_{\rm ff} \approx 10$ in the ambient medium. Much of the gas uplifted from that central region can be induced to condense, particularly if it is inhomogeneous. The system then settles into a long-lasting steady state in which condensed gas fuels the outflow and feedback maintains the isentropic central region at a level corresponding to $5 \lesssim \min (t_{\rm cool} / t_{\rm ff}) \lesssim 20$. Catastrophic cooling is prevented because much of the outflow's energy is thermalized outside of the isentropic zone. When the condensed gas is depleted, the outflow shuts down. Cooling then proceeds almost homogeneously, because there is no source of free energy to promote phase separation. However, thermal instability eventually initiates condensation near the outer edge of the isentropic region, at the minimum of $t_{\rm cool} / t_{\rm ff}$ in the ambient medium. Newly condensed gas subsequently reignites feedback, and the cycle repeats. 
\end{itemize} \end{itemize} \newpage \noindent The paper concludes with two sections that acknowledge the loose ends (\S \ref{sec-LooseEnds}) and present some concluding thoughts about how the overall model can be tested (\S \ref{sec-Summary}), followed by an appendix that constructs a useful toy model for global configuration changes of precipitation-regulated systems.
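The ratio $t_{\rm cool}/t_{\rm ff}$ that organizes much of this discussion is straightforward to estimate numerically. The sketch below is purely illustrative and is not drawn from the paper: it assumes free-free cooling with $\Lambda(T) \approx 2.1\times10^{-27}\,T^{1/2}\,{\rm erg\,cm^3\,s^{-1}}$ (adequate only for $T \gtrsim 2\times10^7$~K), an isothermal potential with $t_{\rm ff} = \sqrt{2}\,r/v_c$, and fiducial cluster-core numbers of our own choosing.

```python
import numpy as np

k_B = 1.380649e-16   # Boltzmann constant [erg/K]
KPC = 3.086e21       # kpc in cm

def t_cool(n_e, T):
    """Isochoric cooling time [s] for a fully ionized plasma.

    n_e in cm^-3, T in K.  Pure free-free cooling is assumed,
    which underestimates line cooling below ~2e7 K.
    """
    Lam = 2.1e-27 * np.sqrt(T)   # cooling function [erg cm^3 / s]
    n_H = n_e / 1.2              # hydrogen density in an ionized plasma
    n_tot = 2.3 * n_H            # electrons + ions
    return 1.5 * n_tot * k_B * T / (n_e * n_H * Lam)

def t_ff(r_kpc, v_c_kms):
    """Free-fall time [s] from radius r in an isothermal potential."""
    return np.sqrt(2.0) * (r_kpc * KPC) / (v_c_kms * 1e5)

# Fiducial cluster-core values (our choices, for illustration only):
# n_e = 0.1 cm^-3 and T = 3e7 K at r = 20 kpc, with v_c = 1000 km/s.
ratio = t_cool(0.1, 3e7) / t_ff(20.0, 1000.0)
print(f"t_cool/t_ff ~ {ratio:.0f}")
```

With these fiducial numbers the ratio comes out of order ten, which illustrates how modest changes in ambient density or temperature can move a cluster core across the precipitation threshold discussed above.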
\label{sec-Summary} A summary of the paper's main findings can be found in \S \ref{sec-ReadersGuide}. Instead of repeating that summary here, we will conclude with some suggestions about how our global model for the ambient circumgalactic medium may be tested with observations and numerical simulations: \begin{itemize} \item {\bf Buoyancy Damping.} The model proposes that buoyancy damping can suppress condensation in circumgalactic media with a significant entropy gradient but allows it to proceed if the median entropy profile is nearly isentropic. This proposal can be tested with observations of correlations between homogeneity of the ambient medium and the slope of its entropy gradient. Cluster cores with short central cooling times are ideal for such tests, because both their entropy gradients and their levels of inhomogeneity are observable with X-ray telescopes. One expects to detect an increasing dispersion in gas entropy and temperature as the large-scale entropy gradient flattens. \item {\bf Two Precipitation Modes.} Multiphase gas can precipitate out of the circumgalactic medium in two different ways: (1) through growth of thermal instability in the isentropic zone, or (2) through uplift of low-entropy ambient gas. Isolated clumps of multiphase gas outside of the isentropic zone must therefore originate at lower altitudes and can result from condensation if they are uplifted from regions with $t_{\rm ff} / t_{\rm cool} \lesssim 10$. Condensation in the power-law zone of gas with greater initial values of $t_{\rm ff} / t_{\rm cool}$ requires either an uplift velocity exceeding $\approx 1.5 v_c$ or some combination of drag and turbulence that slows the infall speed as the uplifted gas falls back toward its original altitude. These should be the two most prevalent modes of condensation in more complex simulations of galaxy evolution with sufficiently high spatial resolution. 
\item {\bf Lower Limit on $t_{\rm cool}/t_{\rm ff}$ in the Ambient Medium.} Feedback outbursts tend to produce a lower limit of $\min (t_{\rm cool} / t_{\rm ff}) \approx 10$ in the ambient medium, because ambient gas with lower values of $t_{\rm cool} / t_{\rm ff}$ is vulnerable to uplift-driven condensation. X-ray observations show that galactic systems with $v_c > 300 \, {\rm km \, s^{-1}}$ generally adhere to this limit. It should also apply to ambient circumgalactic gas near the virial temperature in lower-mass systems, in which the conditions can be probed by {\em Hubble}-COS observations \citep[e.g.,][]{Stocke_2013ApJ...763..148S,Werk_2014ApJ...792....8W}. \item {\bf Consequences of Central Thermal Feedback.} Central thermal feedback destabilizes the circumgalactic medium because it expands the isentropic zone and suppresses buoyancy damping. Condensation can proceed there until feedback raises its cooling time to approximately the current age of the universe. Simulations implementing this mode of feedback cannot produce realistic galaxy-cluster cores, many of which have central cooling times $\lesssim 1$~Gyr. Instead, heat deposition by feedback must extend beyond the isentropic zone. Feedback energy is probably transported there by bipolar outflows, and this is likely to be why kinetic feedback reproduces the characteristics of galaxy cluster cores with much greater success than pure thermal feedback. Our analysis predicts that any numerical simulation of massive-galaxy evolution in which feedback is centrally injected and purely thermal requires heat input to raise the central cooling time to several Gyr before it succeeds in stopping star formation. \item {\bf Quenching and the CGM Entropy Gradient.} Buoyancy damping, when coupled with kinetic feedback, can explain how star formation remains quenched in massive elliptical galaxies with short central cooling times. 
If feedback from a bipolar outflow can maintain a significant entropy gradient in the ambient medium, then buoyancy damping will suppress thermal instability and condensation. Episodic feedback outbursts may occasionally induce condensation through uplift, but the total amount of condensed gas should remain small, as long as the mass of ambient gas in the isentropic zone remains small. More generally, one expects the time-averaged supply rate of condensed gas from the circumgalactic medium to be similar to the gas mass of the isentropic zone divided by the cooling time of the isentropic zone, as long as that cooling time is significantly less than the age of the universe. Quenched galaxies with short central cooling times should therefore be observed to have strong entropy gradients and small isentropic cores \citep[as in][]{Werner+2012MNRAS.425.2731W,Werner+2014MNRAS.439.2291W}. \item {\bf Midplane Condensation in Galactic Disks.} Buoyancy damping may allow the ambient circumgalactic medium to supply a galactic disk with condensing gas via midplane condensation without any obvious ``rainfall.'' This can happen if there is an entropy gradient outside of the midplane and sufficient rotation to inhibit buoyancy damping in the radial direction. Given enough time, condensation will happen near the midplane (see \S \ref{sec-Midplane}), in a region with a thickness that depends on $( t_{\rm ff} / t_{\rm cool})^2$. A steady supply of hot ambient gas then settles into the midplane to replenish what is lost to condensation, without any need for infall of cold clouds at large distances from the galactic disk. This mode of gas supply should be present in numerical simulations of sufficiently high spatial resolution. \item {\bf Episodic Line Emission from Cooling Gas.} Condensation and star formation in a precipitation-regulated system do not necessarily happen simultaneously, in a nearly steady state (see Figure~\ref{fig-OutburstHistory}). 
Line emission from intermediate-temperature ($10^5$-$10^6$~K) condensing gas is expected to be most luminous during the uplift events that trigger rapid condensation. In between these events, the star-formation rate may exceed the condensation rate for much of the duty cycle, causing a steady decline in the amount of cold gas. \end{itemize} \vspace*{0.5em} We are indebted to A. Babul, J. Bregman, A. Crocker, A. Evrard, M. Fall, M. Gaspari, A. Kravtsov, M. McDonald, B. McNamara, P. Nulsen, P. Oh, M. Peeples, M. Ruszkowski, P. Sharma, D. Silvia, N. Soker, M. Sun, G. Tremblay, J. Tumlinson, S. White, and an insightful but anonymous referee for discussions that have shaped our thinking. Some of this work was performed at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1066293. GMV, BWO, and MD acknowledge NSF for support through grant AST-0908819 and NASA for support through grants NNX12AC98G, NNX15AP39G, HST-AR-13261.01-A, and HST-AR-14315.001-A. GLB acknowledges financial support from NSF grants AST-1312888, and NASA grants NNX12AH41G and NNX15AB20G, as well as computational resources from NSF XSEDE, and Columbia University's Yeti cluster. Simulations were run on the NASA Pleiades supercomputer through allocation SMD-15-6514 and were performed using the publicly-available Enzo code (http://enzo-project.org) and analyzed with the {\tt yt} package \citep{Turk_yt_2011ApJS..192....9T}. Enzo is the product of a collaborative effort of many independent scientists from numerous institutions around the world. Their commitment to open science has helped make this work possible. \appendix
We consider K-mouflage models, which are K-essence theories coupled to matter. We analyze their quantum properties and in particular the quantum corrections to the classical Lagrangian. We set up the renormalization program for these models and show that, contrary to renormalizable field theories where renormalization by infinite counterterms can be performed in one step, K-mouflage theories involve a recursive construction whereby each set of counterterms introduces new divergent quantum contributions which in turn must be subtracted by new counterterms. This tower of counterterms can in principle be constructed step by step by recursion and allows one to calculate the finite renormalized action of the model. In particular, it can be checked that the classical action is not renormalized and that the finite corrections to the renormalized action contain only higher-derivative operators. We then concentrate on the regime where calculability is ensured, i.e., when the corrections to the classical action are negligible. We establish an operational criterion for classicality and show that this is satisfied in cosmological and astrophysical situations for (healthy) K-mouflage models which pass the Solar System tests. These results rely on perturbation theory around a background and are only valid when the background configuration is quantum stable. We analyze the quantum stability of astrophysical and cosmological backgrounds and find that models that pass the Solar System tests are quantum stable. We then consider the possible embedding of the K-mouflage models in an Ultra-Violet completion. We find that the healthy models which pass the Solar System tests all violate the positivity constraint which would follow from the unitarity of the putative UV completion, implying that these healthy K-mouflage theories have no UV completion.
We then analyze their behavior at high energy, and we find that the classicality criterion is satisfied in the vicinity of a high-energy collision, implying that the classical K-mouflage theory can be applied in this context. Moreover, the classical description becomes more accurate as the energy increases, in a way compatible with the classicalization concept.
\label{sec:Introduction} Scalar-tensor theories motivated by the discovery of the acceleration of the expansion \cite{Riess:1998cb,Perlmutter:1998np} of the Universe suffer from severe gravitational problems in the Solar System \cite{Will:2001mx} unless the scalar interaction is screened \cite{Khoury:2010xi}. Four mechanisms are now known, and the list seems complete for conformally coupled scalar fields: the chameleon \cite{Khoury:2003aq,Khoury:2003rn,Brax:2004qh} \footnote{On quasi-linear scales ultra-local models behave as chameleon models with large mass and coupling, although they have a different behavior on subgalactic scales \cite{Brax:2016}.}, Damour-Polyakov \cite{Damour:1994zq}, Vainshtein \cite{Vainshtein:1972sx}, and K-mouflage mechanisms \cite{Babichev:2009ee}. Their classification follows from the requirement of preserving second-order equations of motion for the scalar field, and they can be seen as restrictions on the second derivatives, first derivatives, and the value itself of the Newton potential in the presence of matter \cite{Khoury:2013tda,Brax:2014a}. All these theories involve nonlinearities in either their scalar potential or the (generalized) kinetic terms. Locally, Newtonian gravity is retrieved thanks to the relevant role played by the nonlinearities. This property is also a drawback of these models as the quantum corrections are not guaranteed to preserve the form of the nonlinearities required to screen the scalar field locally. For instance, in chameleon models, the scalar potential can be largely modified by quantum corrections in dense environments \cite{Upadhye:2012vh}. For K-mouflage and theories with the Vainshtein property like Galileons, the nonlinear kinetic terms become dominant when the scalar field is screened. This may cast a doubt on the validity of these models as nonrenormalizable operators play a fundamental role there. 
Fortunately, in the K-mouflage and Galileon cases nonrenormalization theorems \cite{Nicolis:2008in, deRham2014} have been obtained whereby the quantum corrections in a background where the scalar field is screened are under control. In fact, they do not affect the classical Lagrangian and only add finite corrections to the classical action that involve higher-order derivatives. In this paper, we reconsider this issue and set up the renormalization program for K-mouflage theories. Starting from the classical Lagrangian, we construct a first set of infinite counterterms which cancel the quantum divergences obtained in perturbation theory around a classical background. In renormalizable theories, this is all there is to do, and this yields the renormalized action. For K-mouflage, the counterterms introduce new vertices in the perturbative series, which in turn lead to new divergences which require the introduction of other counterterms. The whole procedure carries on recursively. Using this approach, we find that indeed the classical Lagrangian is not renormalized, because the quantum corrections involve higher-order derivatives than the original Lagrangian. This whole process is unwieldy and is not guaranteed to converge. What we find is that there is a classicality criterion which ensures that the quantum corrections are negligible. We then focus on ``healthy'' K-mouflage theories which pass the Solar System tests \cite{Barreira2015}, i.e., models with no ghosts and no gradient instability \cite{Brax:2014a} that also satisfy the stringent quantitative constraints of the Solar System (this implies that when the argument of the kinetic function goes to $-\infty$ the kinetic function becomes very close to linear). We find that the quantum corrections calculated in the cosmological or astrophysical backgrounds are negligible. This whole approach relies on perturbation theory and is only valid when the background is quantum stable.
The healthy K-mouflage theories which pass the Solar System tests show no such quantum instabilities. Our approach should be compared to Ref.~\cite{deRham2014}, where the nonrenormalization theorem was first stated. The classicality criterion that we obtain is similar to the conditions obtained there, which were then applied to the Dirac-Born-Infeld (DBI) models, although our approach is different. Our explicit construction of the renormalization procedure allows us to highlight how the renormalized action can be constructed step by step. Here we do not dwell on the DBI models, as they are either ruled out by local tests or fail to screen the effects of the scalar field in the early Universe. Instead, we focus on ``realistic'' models, which pass the Solar System tests while giving a realistic cosmology. Even though the K-mouflage theories are not renormalizable quantum field theories, in the classical regime they are calculable theories with negligible quantum corrections. One may wonder what happens when one pushes the K-mouflage theories outside their classical regime at sufficiently high energy. The natural expectation would be that some kind of UV completion ought to exist. It turns out that for healthy K-mouflage theories which pass the Solar System tests this is not the case, as they violate the positivity criterion of scattering amplitudes which follows from unitarity \cite{Adams:2006aa,Joyce:2014kja,Goon:2016ihr}. This implies that these K-mouflage models, contrary to the example given in \cite{Kaloper:2014vqa}, cannot be UV completed in a traditional way. This justifies our approach of treating the classical K-mouflage Lagrangian as our fundamental field theory that we renormalize step by step.
Indeed another procedure used in \cite{deRham2014} would be to start from a UV completed theory at high energy and study the naturality of the K-mouflage action at low energy using the functional renormalization group \cite{Wetterich1993}. In the cases that we consider, this top-down approach is inoperative as the starting point of the renormalization group at high energy, i.e., the UV completion, does not exist. All that is left is the bottom-up approach that we present here, where the classical Lagrangian is renormalized at low energy. Moreover, in the classical regime, the quantum corrections are negligible, and one can deduce reliable low-energy predictions from such models. The absence of UV completion raises the obvious question of what happens when the theory is probed at higher and higher energies. We then consider the high-energy regime of the theory, and we find that at high energy in scalar collisions there is a large region of space around the interaction point where the classicality criterion applies and the K-mouflage theory behaves classically. Moreover, the classicality region grows with the energy in agreement with the classicalization picture \cite{Dvali:2010jz}. We also consider the fermion - antifermion annihilation processes via a scalar intermediate state, and we find that the cross section can always be calculated in the classical regime up to very high energy. In section II, we recall facts about K-mouflage theories, and then in section III we discuss the quantum corrections of such models. In section IV, we consider the quantum stability of the background configurations. In section V, we analyze the UV completion of these theories and their behavior at high energy for scalar scattering. In section VI, we study the interactions between fermions and the scalar. We then conclude. We have added two Appendices on the one-loop quantum corrections and on matter loops.
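To make the positivity argument concrete: for a K-essence Lagrangian of the schematic form $\mathcal{L} = \mathcal{M}^4 K(\chi)$ expanded about the trivial background, $K(\chi) = -1 + \chi + \frac{1}{2} K''(0)\, \chi^2 + \dots$, the tree-level $2\to2$ scattering amplitude in the forward limit grows as
\begin{equation}
\mathcal{A}(s, t \to 0) \;\propto\; K''(0)\, \frac{s^2}{\mathcal{M}^8}\,,
\end{equation}
so the standard analyticity and unitarity arguments of \cite{Adams:2006aa} require $K''(0) > 0$ for a Lorentz-invariant UV completion to exist. (This is the textbook form of the constraint; the normalization of $\chi$ and of the amplitude is schematic here.) The finding of this paper is that the healthy models which pass the Solar System tests fall on the wrong side of this inequality.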
\label{sec:conclusion} In this work, we have considered K-mouflage models from a quantum field-theoretic point of view. In a traditional sense, these theories are not renormalizable as they involve arbitrary powers of the scalar kinetic terms, i.e., an arbitrary function of the kinetic terms. All these higher-order operators are not renormalizable, and one may worry that quantum corrections could alter the classical Lagrangian in an uncontrollable way. To have a better handle on this issue, we set up the renormalization program for these theories and provide a recursive algorithm to construct the renormalized effective action. We do this using background quantization around a solution of the classical equations of motion, which could be, for instance, either a static or a cosmological time-dependent configuration, and perturbation theory, where Feynman diagrams are regularized by dimensional regularization. In usual renormalizable theories, starting from a bare Lagrangian containing bare (and formally infinite) couplings, the renormalized action can be obtained in one step as the bare action contains counterterms that keep the same form as the bare Lagrangian while exactly canceling the divergences obtained by calculating the quantum corrections using Feynman diagrams. For K-mouflage, this one-step procedure fails, as the original classical action does not contain all the operators which are generated quantum mechanically. So, one must proceed stepwise by first constructing a set of counterterms to cancel all divergences induced quantum mechanically from the classical Lagrangian, and then recalculating the divergences that the new vertices brought by the counterterms entail. This generates a new set of counterterms, and {\it a priori} this recursive construction must be carried on indefinitely.
Of course, there is no guarantee that the associated series of recursively constructed Lagrangians converges at all, and therefore one may doubt the validity of K-mouflage models. Nevertheless, one can confirm that the corrections to the classical Lagrangian always involve extra derivatives of the kinetic terms and that therefore the classical action is not renormalized \cite{deRham2014}. We then define a calculability criterion whereby all the quantum corrections in the renormalized effective action are negligible compared to the classical action. In this ``classical regime'', it is sufficient to consider K-mouflage theories at the classical level, and quantum corrections can be safely neglected. We apply this criterion to healthy K-mouflage models, i.e., those with no ghosts and no gradient instabilities, that pass the tight tests of gravity in the Solar System, and we show that in all astrophysical and cosmological situations of interest the classicality criterion is satisfied. Moreover, we find that for such theories the quantum regime is never reached, even in the early inflationary Universe or in the very short-distance regime on Earth. These results rely on the assumption that perturbation theory is well defined around a given background, and this could be violated when the mass squared of the scalar becomes negative, because this may render the path integral ill defined. We examine two situations: the astrophysical one around a static spherical source and the cosmological one up to very early times. In the former, stability is guaranteed when the Sturm-Liouville problem associated with the linear perturbations around the static background has no negative eigenvalues. We confirm that this is indeed the case for the healthy models that pass the Solar System tests. The latter is more subtle, as negative square masses for linear perturbations are a standard feature of cosmological perturbations, and we use a different criterion there.
We impose that the extra instability induced by the negative mass squared due to the K-mouflage Lagrangian (and not only the $\frac{a''}{a}$ term of cosmological perturbations) does not lead to an explosion of the energy density of the K-mouflage field. Indeed, this would disrupt the evolution of the Universe, and we confirm that this is far from being the case for the healthy models passing the Solar System tests. Despite the existence of the classicality regime, one may wonder what happens at very high energies, well beyond the cutoff scale of the theory. Traditionally, in a top-down approach, one may advocate that some type of UV completion must exist at high energy and that the K-mouflage models should emerge naturally from the renormalization group evolution. This approach was followed in particular in \cite{deRham2014}. Here, we find that positivity and unitarity constraints imposed on scattering amplitudes \cite{Adams:2006aa} cannot be met by healthy models that pass the Solar System tests, and that therefore no such UV completion can be constructed. This invalidates the approach of \cite{deRham2014} in our case, and one is left with the only prospect of dealing with K-mouflage in a bottom-up approach, using the renormalization program that we have set up in order to study the high-energy properties of such models. This negative result might seem to invalidate the usefulness of K-mouflage models beyond the realm of very low-energy astrophysics and cosmology. To tackle this issue, we consider two relevant situations. The first one concerns the high-energy collision of scalars as may happen in colliders. In this case, we show that the collision occurs more and more in the classical regime as the energy increases. Hence, the K-mouflage models ``classicalize'' in this high-energy setting, and we can always use the classical Lagrangian. We also consider the interaction of the K-mouflage scalar with fermions.
We find that free fermions are dressed by a negligible classical scalar cloud. Fermions also radiate scalars when they are accelerated, as for standard bremsstrahlung, but this is again negligible as compared with the standard electromagnetic bremsstrahlung because of the coupling factor $m_{\psi}/M_{\rm Pl}$. In a similar fashion, the scattering $f\bar{f} \to \varphi \to f \bar{f}$, which corresponds to the annihilation of a fermion pair into another one via an intermediate scalar, comes with a factor $(m_{\psi}/M_{\rm Pl})^4$ that yields a negligible cross section. As for scalar collisions, this process can be described from the bare K-mouflage Lagrangian, both at low and high energies. Hence, we have seen that healthy K-mouflage models that pass the stringent tests of gravity in the Solar System have remarkable properties quantum mechanically. They are not renormalized, and the finite corrections of higher order can be neglected in a ``classical regime'' which applies to astrophysical and cosmological systems of interest. Moreover, even pushed to high energy such as in collider experiments, the classicality criterion still applies, implying that trustworthy calculations for associated cross sections can be performed. K-mouflage models of the type considered here are most likely to be tested by cosmological and astrophysical means in the near future, and it is reassuring that such nonlinear models of dark energy/modified gravity can be simply used at the classical level.
We consider a minimally-coupled inflationary theory with a general scalar potential $V(f(\varphi))= V(\xi\sum_{k=1}^{n}\lambda_k \varphi^k)$ containing a stationary point of maximal order $m$. We show that asymptotically flat potentials can be associated to stationary points of infinite order and discuss the relation of our approach to the theory of $\alpha$-attractors.
\label{sec:Introduction} Cosmic inflation \cite{Starobinsky:1980te,Mukhanov:1981xt,Guth:1980zm,Linde:1981mu,Albrecht:1982wi,Linde:1983gd} is nowadays a well-established paradigm able to solve most of the hot Big Bang puzzles and to explain the generation of the almost scale-invariant spectrum of coherent primordial perturbations giving rise to structure formation \cite{Ade:2015lrj} (for a review, see for instance~\cite{Lyth:1998xn,Mazumdar:2010sa}). The vast majority of inflationary models assume the early domination of a scalar field $\varphi$ with a sufficiently flat potential $V(\varphi)$. For canonically normalized fields, the inflationary observables can be parametrized in terms of the so-called slow-roll parameters \begin{equation}\label{inf_cond} \epsilon = \frac{1}{2}\left(\frac{V'}{V}\right)^2 \,, \qquad\qquad \eta = \frac{V''}{V} \, , \end{equation} where the primes denote derivatives with respect to the inflaton field $\varphi$. For inflation to take place, the \textit{slow-roll} conditions $\epsilon, |\eta| \ll 1$ must be satisfied. Note that these requirements should be understood as \textit{local} conditions on an arbitrary potential, which can in principle contain a large number of extrema and slopes. Locally flat regions in the potential appear generically in the vicinity of stationary points. Even though saddle-point models of inflation have been shown to be inconsistent with the data \cite{Allahverdi:2006we}, one should not exclude the appearance of higher-order stationary points on the inflationary potential~\cite{Hamada:2015wea}. In this paper we will take a model-building perspective. Rather than asking about the origin of $V(\varphi)$, we will require it to \textit{locally} satisfy some particular flatness conditions. We will ask for the existence of a single stationary point without imposing any further restrictions on the shape of the potential. Similar studies have been performed in the context of modified gravity theories.
In particular, it was shown in Refs.~\cite{Artymowski:2015pna,Artymowski:2016mlh} that the requirement of vanishing derivatives in a $f(R)$ model gives rise to an inflationary plateau in the Einstein frame formulation of the theory. The main purpose of this short letter is to extend this analysis to the scalar sector and to make explicit the equivalence between the stationary point picture and the $\alpha$-attractor formulation~\cite{Carrasco:2015rva,Kallosh:2014rga,Carrasco:2015pla,Kallosh:2015lwa,Linde:2015uga}. The structure of this paper is as follows. In Sec.~\ref{sec:general} we construct a general scalar inflationary theory containing a stationary point of a given order $m$. The equivalence of these theories and the $\alpha$-attractor formulation is presented in Sec.~\ref{sec:alpha}. Finally, we summarize in Sec.~\ref{sec:Summary}.
\label{sec:Summary} In this paper we considered a general scalar potential $V(f(\varphi))$ with $f(\varphi) = \xi\sum_{k=1}^n \lambda_k\varphi^k$. By requiring the existence of an $m$-th order stationary point at a field value $\varphi = \varphi_s$, we obtained a specific form for $f(\varphi)$ containing three free parameters $\lambda\equiv\lambda_n$, $\xi$ and $n$. We showed that around the stationary point $\varphi_s$, the inflationary potential is flat and suitable for inflation provided that $V(\varphi_s) > 0$. Nevertheless, not all of the resulting flat potentials allow for a graceful inflationary exit. The relation between our results and the theory of $\alpha$-attractors was also considered. We explicitly showed that using $f$ as a scalar field it is possible to obtain a non-canonical kinetic term, which, after a trivial field redefinition, takes the form of the $\alpha$-attractor kinetic term. There is therefore a deep connection between the two approaches: asking for the existence of a stationary point in a scalar potential with a canonical kinetic term is equivalent to asking for the existence of a kinetic term with a pole. Both formulations lead to flat regions within general potentials which can be responsible for inflation.
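The stationary-point/kinetic-pole equivalence can be checked symbolically. In this sketch (our own illustration, with $n=4$ and the stationary point placed at $\varphi_s=1$), rewriting a canonical theory $\mathcal{L} = \frac{1}{2}(\partial\varphi)^2 - V(f(\varphi))$ in terms of $f$ produces a kinetic coefficient $(\mathrm{d}\varphi/\mathrm{d}f)^2$ that develops a pole where $f'(\varphi)$ vanishes:

```python
import sympy as sp

phi = sp.symbols('phi', real=True)
f = sp.symbols('f', positive=True)

n = 4
f_of_phi = (phi - 1)**n              # f'(phi) vanishes at phi_s = 1
# kinetic coefficient for f in L = (1/2) K(f) (df)^2 - V(f):
K_phi = sp.diff(f_of_phi, phi)**(-2)
K_f = sp.simplify(K_phi.subs(phi, 1 + f**sp.Rational(1, n)))
# K(f) = f^{-2(n-1)/n} / n^2: a pole at f = 0 whose order tends to 2
# as n -> infinity, which is the alpha-attractor form of the kinetic term.
assert sp.simplify(K_f * n**2 * f**sp.Rational(2*(n - 1), n) - 1) == 0
```

The exponent $2(n-1)/n \to 2$ for large $n$, mirroring the text's statement that asymptotically flat (infinite-order) cases reproduce the second-order pole of the $\alpha$-attractor kinetic term.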
arXiv:1607.00398 (2016-07)

1607.03487_arXiv.txt
We present observations of the Pisces A and B galaxies with the \acl{ACS} on the \acl{HST}. Photometry from these images clearly resolves a \acl{RGB} for both objects, demonstrating that they are nearby dwarf galaxies. We describe a Bayesian inferential approach to determining the distance to such galaxies using the magnitude of the \acl{TRGB}, and then apply this approach to both objects. This yields distances of $5.64^{+0.13}_{-0.15} \, {\rm Mpc}$ and $8.89^{+0.75}_{-0.85} \, {\rm Mpc}$ for Pisces A and B, respectively, placing both within the Local Volume but not the Local Group. We estimate the \aclp{SFH} of these galaxies, which suggests that they have recently undergone an increase in their star formation rates. Together these yield luminosities for Pisces A and B of $M_V=-11.57^{+0.06}_{-0.05}$ and $-12.9 \pm 0.2$, respectively, and estimated stellar masses of $\log(M_*/M_{\odot})= 7.0^{+0.4}_{-1.7}$ and $7.5^{+0.3}_{-1.8}$. We further show that these galaxies are likely at the boundary between nearby voids and higher-density filamentary structure. This suggests that they are entering a higher-density region from voids, where they would have experienced delayed evolution, consistent with their recent increased star formation rates. If this is indeed the case, they are useful for study as proxies of the galaxies that later evolved into typical \acl{LG} satellite galaxies.
Faint dwarf galaxies provide constraints on dark matter and cosmology \citep[e.g.][]{kl99ms, krav10satrev, BKBK11} as well as tests of the complex physics behind galaxy formation \citep[e.g.,][]{dekelandsilk, geha06, weisz11b, brooks14}. A major reason for this is that the nearest dwarfs can be studied in resolved stars because they are both faint and nearby enough that crowding issues can be overcome for a relatively large sample. By characterizing their resolved stellar populations, it becomes possible both to obtain present-day structural parameters for these galaxies \emph{and} characterize their \acp{SFH}, providing constraints on their past \citep[e.g.,][]{lgtime}. While this is powerful, it does not provide a way to determine their past structural properties or gas content. Furthermore, such faint galaxies are primarily observable only in the very nearby universe, where the bright galaxies of the \ac{LG} seem to quench the dwarfs \citep[e.g.,][]{einasto74, fill15, wetzel15}. This suggests that searching for faint dwarf galaxies in other environments might provide further clues to dwarf galaxy formation and evolution. Recently, such objects have been identified from compact 21 cm \ion{H}{1} clumps at velocities consistent with nearby dwarf galaxies, which are then followed up with optical imaging to confirm the presence of a dwarf galaxy. The Leo P dwarf was found with this approach using data from ALFALFA \citep{leop}, and compact clouds in the GALFA-HI Survey led to the detection of the Pisces A and B galaxies \citep{T15} and two others \citep{Sand15}. These GALFA-HI galaxies overlap in properties with some of the faintest already-known dwarfs \citep[e.g., the sample from][]{SHIELD}, but without firm distance measures it is difficult to place them in a wider galaxy formation context. Hence, here we present \ac{HST} \ac{ACS}/\ac{WFC} imaging of Pisces A and B.
The unmatched resolution available from \ac{HST} allows resolving their stellar populations, identifying an \ac{RGB}, and thereby obtaining distances to these galaxies. This paper is organized as follows: in Section \ref{sec:obs}, we describe the \ac{HST} observations of the Pisces A and B dwarf galaxies. In Section \ref{sec:dist}, we describe our method for determining \ac{RGB} distances, as well as the resulting distance estimates. In Section \ref{sec:sparams}, we discuss the structural parameters of the galaxies. In Section \ref{sec:sfhs}, we fit the \acp{CMD} to determine \acp{SFH} of these dwarfs. In Section \ref{sec:disc}, we provide context for these galaxies and in Section \ref{sec:conc} we conclude. To aid reproducibility, the analysis software used for this paper is available at \url{https://github.com/eteq/piscdwarfs_hst}; full Markov Chain Monte Carlo (MCMC) chains are also made available as an online data set at \url{http://dx.doi.org/10.5281/zenodo.51375}. In Table \ref{tab:res} we provide an overview of the key properties of Pisces A and B. While we describe the methods used to derive these values in the remainder of this paper, we provide the table here for easy reference. When describing uncertainties of the quantities in this table and other parts of this paper, we will use the median and 84th/16th percentiles (i.e., the $1\sigma$ confidence region). However, it is important to recognize that some of these quantities are non-Gaussian. Hence, we provide samples from these distributions in Table \ref{tab:quantities} to allow better modeling of any of these quantities in other contexts or follow-on work. \begin{table}[htbp] \begin{center} \caption{Key properties of Pisces A and B.} \label{tab:res} \begin{tabular}{l l c c } \hline \hline & & Pisces A & Pisces B \\ \hline (1) & R.A.
(J2000) & $00^{\rm h}14^{\rm m}46\fras0 $ & $01^{\rm h}19^{\rm m}11\fras7 $ \\ (2) & Dec (J2000) & $+10^{\circ}48^{\prime}47\farcsec01$ & $+11^{\circ}07^{\prime}18\farcsec22$ \\ (3) & $b$ ($^\circ$) & -51.03 & -51.16 \\ (4) & $l$ ($^\circ$) & 108.52 & 133.83 \\ (5) & Distance (Mpc) & $5.64^{+0.15}_{-0.13}$ & $8.89^{+0.75}_{-0.85}$ $\dagger$ \\ (6) & Distance modulus (mag) & $28.76^{+0.05}_{-0.06}$ & $29.75^{+0.19}_{-0.20}$ $\dagger$ \\ (7) & $\mu_{\rm F814W}$ (mag) & $24.62 \pm 0.05$ & $25.61^{+0.19}_{-0.18}$ $\dagger$ \\ (8) & $F606W_0$ (mag) & $-11.67^{+0.06}_{-0.05}$ & $-12.9 \pm 0.2$ $\dagger$ \\ (9) & $F814W_0$ (mag) & $-12.31 \pm 0.06$ & $-13.4 \pm 0.2$ $\dagger$ \\ (10) & $M_V$ (mag) & $-11.57^{+0.06}_{-0.05}$ & $-12.9 \pm 0.2$ $\dagger$ \\ (11) & $V-I$ (mag) & $0.78 \pm 0.01$ & $0.57 \pm 0.01$ \\ (12) & $R_{\rm eff, major}$ ($^{\prime\prime}$) & $9.1 \pm 0.1$ & $10.37 \pm 0.03$ \\ (13) & $r_{\rm eff}$ (pc) & $145^{+5}_{-6}$ & $323^{+27}_{-30}$ $\dagger$ \\ (14) & $n$ & $0.47^{+0.02}_{-0.01}$ & $0.64 \pm 0.01$ \\ (15) & $e$ & $0.34 \pm 0.01$ & $0.52 \pm 0.01$ \\ (16) & $\theta$ ($^\circ$ E of N) & $136 \pm 1.3$ & $139 \pm 0.1$ \\ (17) & $\log{(M_{*,SFH}/M_{\odot})}$ & $7.0^{+0.4}_{-1.7}$ & $7.5^{+0.3}_{-1.8}$ \\ (18) & $M_{\rm HI}$ ($10^6 M_{\odot}$) & $8.9 \pm 0.8$ & $30^{+6}_{-7}$ $\dagger$ \\ (19) & $v_{\rm helio, HI}$ (${\rm km \; s}^{-1}$) & $236 \pm 0.5$ & $615 \pm 1$ \\ (20) & $W50_{\rm HI}$ (${\rm km \; s}^{-1}$) & $22.5 \pm 1.3$ & $43 \pm 3$ \\ (21) & $v_{\rm helio, opt}$ (${\rm km \; s}^{-1}$) & $240 \pm 34$ & $607 \pm 35$ \\ \hline \end{tabular} \end{center} Row (1)-(2) On-sky equatorial coordinates. (3)-(4) Galactic latitude/longitude. (5)-(6) Distance/distance modulus from \ac{TRGB} as described in \S \ref{sec:dist}. (7) F814W magnitude of the \ac{TRGB}. (8)-(9) Absolute F606W and F814W magnitudes (\S \ref{sec:sparams}). 
(10)-(11) VI magnitude and color, transformed from the \ac{HST} bands following the prescription of \citet{acstrans05}, \S 8.3. (12) On-sky major axis half-light radius from F606W imaging (\S \ref{sec:sparams}). (13) Physical half-light radius (circularized) from F606W imaging. (14) S{\'e}rsic index from F606W imaging (\S \ref{sec:sparams}). (15) Ellipticity ($1-b/a$) from F606W imaging (\S \ref{sec:sparams}). (16) Position angle of major axis from F606W imaging (\S \ref{sec:sparams}). (17) Total stellar mass as inferred from the \ac{CMD} from fitting the \ac{SFH} following the procedure described in \S \ref{sec:sfhs}. (18) \ion{H}{1} gas mass from GALFA-HI \citep{Peek11galfadr1} as described in \citet{T15}, assuming the distance inferred in \S \ref{sec:dist}. (19) systemic velocity of \ion{H}{1} gas. (20) W50 of \ion{H}{1} gas. (21) optical velocity, from the H$\alpha$ emission lines reported in \citet{T15}. $\dagger$: A quantity that has a significantly non-Gaussian distribution. Such quantities are better represented using the samples from the distributions given in Table \ref{tab:quantities}. \end{table}
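The distance and magnitude rows of the table are tied together by the standard distance modulus relation $\mu = 5\log_{10}(d/10\,{\rm pc})$. A minimal consistency check (our own sketch, using the central values quoted in rows (5)-(7)):

```python
def dist_mpc(mu):
    """Distance in Mpc from a distance modulus mu = 5 log10(d / 10 pc)."""
    return 10.0 ** ((mu - 25.0) / 5.0)

d_A = dist_mpc(28.76)      # ~5.6 Mpc, matching row (5) for Pisces A
d_B = dist_mpc(29.75)      # ~8.9 Mpc for Pisces B
# Absolute TRGB magnitude implied by rows (6)-(7) for Pisces A:
M_trgb_A = 24.62 - 28.76   # ~ -4.1, the expected F814W tip luminosity
```

The recovered tip magnitude near $-4.1$ is the physical anchor of the \ac{TRGB} method: the tip is a near-standard candle in F814W.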
\label{sec:conc} In this paper, we: \begin{itemize} \item Describe observations of the Pisces A and B dwarf galaxies with the \ac{HST} \ac{ACS}, and clearly identify a resolved \ac{RGB} in each. \item Describe a Bayesian inferential method of determining distances to and structural parameters of Local Volume galaxies with resolved stellar populations, and apply this approach to our Pisces A and B data set. From this we infer the distance and photometric parameters of these galaxies along with the various parameter covariances. \item Conclude that Pisces A and B are Local Volume galaxies (at $5.6 \pm 0.2$ and $9.2 \pm 1.1$ Mpc, respectively). Moreover, with the newly-constrained distances and an estimate of their \acp{SFH} derived from the same data, we find they are plausibly galaxies from the Local Void infalling onto filamentary structure in the Local Volume. \end{itemize} These galaxies (and others like them) thus represent a potentially valuable tool as ``initial conditions'' for dwarf galaxies of the \ac{LG} or other higher-density environments. \vspace*{1.5 \baselineskip} We acknowledge Frank van den Bosch and Anil Seth for helpful discussion about this work. We also thank the anonymous referee for feedback that improved this paper. This research made use of Astropy, a community-developed core Python package for Astronomy \citep{astropy}. It also used the \ac{MCMC} fitting code {\it emcee} \citep{emcee}. It further made use of the open-source software tools Numpy, Scipy, Matplotlib, IPython, and corner.py \citep{numpyscipy, matplotlib, ipython, cornerpy}. This research has made use of NASA's Astrophysics Data System. Support for EJT and DRW was provided by NASA through Hubble Fellowship grants \#51316.01 and \#51331.01 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555.
\begin{table} \begin{center} \caption{Distributions of Quantities} \label{tab:quantities} \begin{tabular}{c c c c} \hline \hline Distance (A) & $\mu_{814}$ (A) & $({\rm F606W}-{\rm F814W})_0$ (A) & $r_{\rm eff}$ (A) \\ Mpc & mag & mag & kpc \\ \hline 5.62 & 24.61 & 0.841 & 2.21\\ ... & ... & ... & ... \\ \hline \end{tabular} \end{center} Posteriors of quantities inferred as described in the text. The rows are aligned so as to preserve correlations when they are present between two quantities. This is a sample table with a limited number of columns to demonstrate the format. The full table is available in the electronic edition of the journal. It includes the following quantities (for both Pisces A and B): Distance, F814W magnitude of the \ac{TRGB}, $({\rm F606W}-{\rm F814W})_0$ color of the \ac{TRGB}, $\alpha$, $\beta$, $f$ (from Equation \ref{eqn:rgbmodel}), F606W (total), F814W (total), S{\'e}rsic index (F606W), ellipticity (F606W), position angle (F606W), $R_{\rm eff}$ (on-sky half-light radius in F606W), $r_{\rm eff}$ (physical half-light radius in F606W). \end{table}
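Equation (rgbmodel), referenced in the table notes above, is not reproduced in this excerpt. As a hedged illustration of how a tip model with parameters $\alpha$, $\beta$, and $f$ can be structured, the sketch below uses a common broken-exponential luminosity function; the exact functional form here is our assumption, not necessarily the paper's:

```python
import numpy as np

def rgb_lf(m, m_tip, alpha, beta, f):
    """Toy RGB luminosity function: an exponential rise fainter than the
    tip magnitude m_tip, and a suppressed (e.g. AGB-contaminant) component
    brighter than it. alpha, beta, f play roles analogous to the paper's
    Equation (rgbmodel), but this parametrization is illustrative only."""
    m = np.asarray(m, dtype=float)
    faint = 10.0 ** (alpha * (m - m_tip))
    bright = f * 10.0 ** (beta * (m - m_tip))
    return np.where(m >= m_tip, faint, bright)

# The tip appears as a discontinuity of amplitude ~1/f in the counts:
lf = rgb_lf([24.61, 24.63], m_tip=24.62, alpha=0.3, beta=0.3, f=0.2)
```

Fitting such a model to the observed magnitudes with an MCMC sampler yields the joint posterior over $(m_{\rm tip}, \alpha, \beta, f)$, from which the distance posterior follows.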
arXiv:1607.03487 (2016-07)

1607.01357_arXiv.txt
We present new Herschel PACS observations of 32 T Tauri stars in the young ($\sim$3 Myr) $\sigma$ Ori cluster. Most of our objects are K \& M stars with large excesses at 24 $\mu$m. We used the irradiated accretion disk models of \citet{dalessio06} to compare their spectral energy distributions with our observational data. We arrive at the following six conclusions. (i) The observed disks are consistent with irradiated accretion disk systems. (ii) Most of our objects (60\%) can be explained by significant dust depletion from the upper disk layers. (iii) Similarly, 61\% of our objects can be modeled with large disk sizes ($\rm R_{\rm d} \geq$ 100 AU). (iv) The masses of our disks range from 0.03 to 39 $\rm M_{Jup}$, where 35\% of our objects have disk masses lower than 1 Jupiter mass. Although these are lower limits, high-mass ($>$ 0.05 M$_{\odot}$) disks, which are present, e.g., in Taurus, are missing. (v) By assuming a uniform distribution of objects around the brightest stars at the center of the cluster, we found that 80\% of our disks are exposed to external FUV radiation of $300 \leq G_{0} \leq 1000$, which can be strong enough to photoevaporate the outer edges of the closer disks. (vi) Within 0.6 pc from \SOri we found forbidden emission lines of [NII] in the spectrum of one of our large disks (SO662), but no emission in any of our small ones. This suggests that this object may be an example of a photoevaporating disk.\\
} \indent Due to angular momentum conservation, the collapse of rotating cloud cores leads to the formation of stars surrounded by disks. These disks evolve because they are accreting mass onto the star and because the dust grains tend to settle towards the midplane where they collide and grow (e.g. Hartmann et al. 1998, Hartmann 2009). The material in the disk is subject to irradiation from the host star and from the high energy fields produced in accretion shocks on the stellar surface, in the stellar active regions, and in the environment, if the star is immersed in the radiation field of nearby OB stars in a stellar cluster \citep{dalessio01,dalessio06, adams04,anderson13}. These high energy fields heat the gas, eventually leading to its dissipation, while the solids grow to planetesimal and planet sizes. Still, many open questions remain on how these processes happen and interact with each other.\\ \indent Previous studies using Spitzer data of different star-forming regions with ages between 1 and 10 Myr show a decrease of disk fraction as a function of age of the clusters (Hern\'andez et al. 2007a, henceforward H07a). The decrease of disk frequency is reflected as a clear drop in the mid-IR excess, indicating that only 20$\%$ of the stars retain their original disks by 5 Myr \citep{hernandez07b}. It is therefore essential to observe disks in the crucial age range between 2 and 10 Myr in which the agents driving the evolution of protoplanetary disks are most active. The decrease of the IR excess can be explained by grain growth and by settling of dust to the disk midplane, reducing the flaring of the disk and thus its emitting flux. This interpretation is confirmed by the analysis made by \citet{dalessio06} using irradiated accretion disk models. These models simulate the settling process by introducing the $\epsilon$ parameter, which represents the dust-to-gas mass ratio at the disk atmosphere compared to that of the ISM.
In this sense, a depletion of small grains in the upper layer of the disks will be reflected as a small value of $\epsilon$. Unevolved disks, on the other hand, will have $\epsilon$ values close to unity. Once the dust has settled, large bodies present in the disk will interact with its local environment, creating more complex radial structures like inner clearings or gaps. The most probable mechanism responsible for this effect is an orbiting companion, either stellar or planetary, that cleared out the material in the inner disk \citep{calvet02,espaillat14}. This mechanism can explain some of the so-called transitional and pre-transitional disks (TDs and PTDs hereafter). Also important is disk truncation via mass loss. Besides an orbiting planet, truncation of the {\it inner} disk may result from the dissipation of gas being heated by high-energy radiation fields coming from the host star \citep{hollenbach00,alexander06b,clarke07,dullemond07}. Evidence of mass loss in disks comes from forbidden emission lines of ionized species like [SII], [OI], [NeII] and [NII]. The low velocity component of these lines has been associated with photoevaporative winds that might be able to explain some of the TDs and PTDs observed \citep{pascucci09,gorti11}. The truncation of the {\it outer} parts of the disks, on the other hand, may be the result of environmental effects, like mass loss due to high energy photons from nearby massive stars impinging on the surface of the disk and heating the less tightly bound material. Expected mass loss rates in externally-illuminated disks can be substantial \citep{adams04,facchini16}, and when incorporated into viscous evolution models \citep{clarke07,anderson13,kalyaan15} can have a strong impact on the disk structure and lifetime.
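The strength of this external illumination is conventionally quoted in Habing units ($G_0$, with $1\,G_0 = 1.6\times10^{-3}$ erg cm$^{-2}$ s$^{-1}$). A minimal sketch of the geometric-dilution estimate follows; the FUV luminosity used is an assumed, illustrative value for a late-O star, not a number taken from this paper:

```python
import math

HABING = 1.6e-3   # erg cm^-2 s^-1, the Habing FUV field
PC_CM = 3.086e18  # cm per parsec

def g0(L_fuv_erg_s, d_pc):
    """External FUV field, in Habing units, at distance d_pc from a star
    of FUV luminosity L_fuv_erg_s (pure inverse-square dilution, no
    extinction)."""
    flux = L_fuv_erg_s / (4.0 * math.pi * (d_pc * PC_CM) ** 2)
    return flux / HABING

# With an assumed L_FUV ~ 1e38 erg/s, G_0 falls in the few-hundred range
# at ~1 pc, comparable to the 300-1000 regime discussed for these disks.
g0_1pc = g0(1e38, 1.0)
```

Because of the inverse-square scaling, only disks within roughly a parsec of the ionizing stars see fields strong enough to drive significant external photoevaporation.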
Externally-illuminated disks, known as proplyds, have been well characterized in the Orion Nebula Cluster \citep[hereafter ONC;][]{odell94,johnstone98,henney99,storzer99,garcia01,smith05,williams05,eisner08,mann14} where the radiation from the Trapezium stars photoevaporates the disks. Evidence of outer photoevaporation in other star-forming regions has also been found \citep{rigliaco09,rigliaco13,natta14}. Multiplicity can also produce truncated disks. The fraction of binary companions in young regions can be $\sim$30\% or larger, where close ($<$ 100 AU) binaries can affect the evolution of protoplanetary and circumbinary disks by significantly reducing their lifetime \citep{daemgen15}. In the Taurus star-forming region, the disk population affected by multiplicity consists of close binaries with separations $<$40 AU \citep{kraus12}. In order to understand what physical processes cause the disks to evolve, many multiband observations of different regions within a wide range of ages and environments have been made. Many of these studies have used data from the Spitzer Space Telescope to describe the state of gas and dust within the first AU from the central object. In the $\sim$5 Myr old \citep{dezeeuw99} Upper Scorpius OB association \citet{dahm09} examined, among others, 7 late-type disk-bearing (K+M) members using the Infrared Spectrograph (IRS). They found a lack of sub-micron dust grains in the inner regions of the disks and that the strength of silicate emission is spectral-type dependent. In a disk census performed with Spitzer and WISE photometry, \citet{luhman12} found that late-type members have a higher inner disk fraction than early types. The $\sim$10 Myr old \citep{uchida04} TW Hydrae (TW Hya) association has also been the target of several disk evolution studies. \citet{uchida04} analyzed two objects with IR excesses in their IRS spectra.
They found signs of significant grain growth and dust processing and also evidence of dust clearing in the inner ($\sim$4 AU) disks, possibly due to the presence of orbiting planets. Similar studies performed in other regions like Ophiuchus \cite[$\sim$1 Myr]{mcclure10}, Taurus \cite[1-2 Myr]{furlan06}, and Chamaeleon I \cite[$\sim$2 Myr]{manoj11}, using IRS spectra to analyze the strength and shape of the 10 $\mu m$ and 20 $\mu m$ silicate features, have shown that disks in these regions are highly settled and exhibit signs of significant dust processing. In order to describe the distribution of gas and dust in circumstellar disks around young stars, many studies have been carried out using the Photodetector Array Camera \& Spectrometer (PACS) instrument on board the Herschel Space Observatory. The main idea of these studies has been the description of disk structures as well as the estimation of gas and dust masses in different star-forming regions (Riviere-Marichalar et al. 2013: TW Hya association; Mathews et al. 2013: Upper Scorpius; Olofsson et al. 2013: Chamaeleon I). Additionally, \citet{howard13} modeled PACS detections in Taurus, and found that the region probed by their observations constitutes the inner part (5-50 AU) of their disks.\\ \indent The $\sim$3 Myr old $\sigma$ Ori cluster (H07a) is an excellent laboratory for studies of disk evolution for two reasons: first, the large number of stars still harboring disks allows us to obtain results with statistical significance and second, given its intermediate age, one can expect the first traces of disk evolution to become apparent. We present here new Herschel PACS 70 and 160 $\mu m$ photometry of 32 TTSs in the cluster, with B, V, R and I magnitudes, 2MASS, Spitzer IRAC and MIPS photometry from H07a and spectral types from \citet{hernandez14}, H14 hereafter.
Our main goal is to describe the state of the dust in our sample by analyzing the infrared properties of the stars and by modeling their SEDs with irradiated accretion disk models. In \S\ref{sec:obs} we describe the observational data and a few details about the reduction process; in \S\ref{sec:SEDs} we present the SEDs of our objects; in \S\ref{sec:SOri_pro} we characterize our PACS sources; our results are shown in \S\ref{sec:results} where we characterize our PACS disks using spectral indices (\S\ref{sec:spec_ind}) and modeling the SEDs of individual objects (\S\ref{sec:modelling}); the discussion is presented in \S\ref{sec:disscusion} and the conclusions are shown in \S\ref{sec:conclusions}.
} \indent We analyzed the IR emission of 32 TTSs (mostly class II stars) with PACS detections belonging to the \SOri cluster located in the Ori OB1b subassociation. We modeled 18 sources using the irradiated accretion disk models of \citet{dalessio06}. Our main conclusions are as follows: \begin{enumerate} \item PACS detections are consistent with stars surrounded by optically thick disks with high 24-{\micron} excesses and spectral types between K2.0 and M5.0. \item Detailed modeling indicates that most of our objects (60\%) can be explained by $\epsilon$ = 0.01, indicative of significant dust settling and possible grain growth. This is consistent with previous studies of other young star-forming regions \citep{furlan09,mcclure10,manoj11}. \item 61\% of our disks can be modeled with large sizes (R$_{\rm d} \geq$ 100 AU). The rest have dust disk radii of less than 80 AU. These disks may have been subject to photoevaporation. We estimated that 80\% of our disks are exposed to FUV fluxes of 300 $\lesssim$ G$_{0} \lesssim$ 1000. These values may be high enough to photoevaporate the outer edges of the closer disks. Additionally, within the first 0.6 pc from the central ionizing sources we found forbidden emission lines of [NII] in SO662 ($\rm R_d = 200$ AU) while none of the small disks exhibit any features. This suggests that the region producing the lines is located in the outer disk. Therefore, SO662 may be a photoevaporative disk in its initial phase while the small disks have already photoevaporated most of their material and hence cannot produce the [NII] lines. \item The masses of our disks range from 0.03 to $\sim$39 M$_{\rm Jup}$, with 35\% of the disks having masses lower than 0.001 M$_{\odot}$, i.e., 1 Jupiter mass. These low masses suggest that the formation of giant planets is probably over in the cluster.
If this is the case, then time scales for giant planet formation should be less than 3 Myr, or giant planets are difficult to form in clustered environments. \end{enumerate} This work was supported by UNAM-PAPIIT grant number IN110816 to JBP. Model calculations were performed on the supercomputer Miztli at DGTIC-UNAM. KM acknowledges a scholarship from CONACYT and financial support from CONACyT grant number 168251. JH acknowledges UNAM-DGAPA's PREI program for visiting research at IRyA. We have made extensive use of the NASA-ADS database. \newpage
arXiv:1607.01357 (2016-07)

1607.03508_arXiv.txt
We analyse configurations of neutron stars in the so-called $R$-squared gravity in the Palatini formalism. Using a realistic equation of state we show that the mass-radius configurations are lighter than their counterparts in General Relativity. We also obtain the internal profiles, which run in strong correlation with the derivatives of the equation of state, leading to regions where the mass parameter decreases with the radial coordinate in a counter-intuitive way. In order to analyse such correlation, we introduce a parametrisation of the equation of state given by multiple polytropes, which allows us to explicitly control its derivatives. We show that, even in a limiting case where hard phase transitions in matter are allowed, the internal profile of the mass parameter still presents strange features and the calculated $M-R$ configurations also yield neutron stars lighter than those obtained in General Relativity.
\label{intro} The so-called Extended Theories of Gravity (ETGs) are generalisations of General Relativity (GR) conceived to deal with theoretical and observational issues arising from astrophysical and cosmological scenarios (see \cite{Capozziello2011} for an extended review). A particular class of them, namely $f(R)$ theories, is obtained by replacing the Einstein-Hilbert Lagrangian density with a function of the Ricci scalar curvature $R$. In the low-curvature regime, one of the strongest motivations to study $f(R)$ theories is to describe cosmological observations without the necessity of invoking a dark energy component in the current epoch of the evolution of the universe \cite{Sotiriou2006,Sotiriou2010,deFelice2010,Capozziello2011,Nojiri2011}. In this vein, there are several $f(R)$ models that successfully account for the succession of different cosmological eras, and satisfy the current Solar System and laboratory constraints \cite{Nojiri2006,Starobinsky2007,HuSawicki2007,Cognola2008,Miranda2009,Jaime2011}. A different motivation to consider $f(R)$ theories comes from the fact that the scarce data available from phenomena in the strong-curvature regime are compatible not only with GR, but also with $f(R)$ and other modified theories (see for instance \cite{Konoplya2016,Vainio2016}). In this context, Neutron Stars (NSs) may offer the possibility of testing deviations from GR through astrophysical observations. The internal structure of such compact objects is described in GR by the solutions of the Tolman-Oppenheimer-Volkoff (TOV) equations, together with a suitable Equation of State (EoS). In the framework of $f(R)$-theories in the metric formalism \cite{deFelice2010}, the internal structure of NSs has been previously studied by several authors. Since the modified TOV equations have derivatives of the metric up to the fourth order, different approaches have been developed to deal with the numerical integration.
One of the first attempts was to consider the solution inside the star as a perturbation of the GR case, and match it with the external solution characterised by the Schwarzschild metric. This perturbative method was used in \cite{Cooney2010,Arapoglu2011,Orellana2013,Astashenok2013} to analyse the internal structure of NSs using polytropic and realistic EoSs to describe the matter content inside the compact object. The structure of NSs using a perturbative approach and including hyperon and/or quark EoSs was also explored in \cite{Astashenok2014}. However, as was pointed out in \cite{Yazadjiev2014}, the use of a perturbative method to investigate the strong field regime in $f(R)$ theories may lead to unphysical results. Self-consistent models of NSs are then required to solve simultaneously for the internal and external regions, assuming appropriate boundary conditions at the centre of the star and at infinity. This new approach was explored by introducing a scalar field and working in the so-called Jordan frame \cite{Yazadjiev2014}, by recasting the field equations without mapping the original $f(R)$ theory to any scalar-tensor counterpart \cite{Salgado2011}, and by using self-consistent numerical methods to solve simultaneously the internal structure of the star and the external metric \cite{Astasheno2015b,Capozziello2016,Alvaro2016}.\footnote{More sophisticated models of NSs were also considered in the framework of $f(R)$, such as those including rotation \cite{Staykov2014,Yazadjiev2015} and strong magnetic mean fields \cite{Astashenok2015}.}
The modified TOV equations in this case were first derived in \cite{Kainulainen2007} by matching the interior solution with the exterior Schwarzschild-de Sitter solution. Let us remark that, differently from the above-mentioned metric approach, in Palatini gravity the unique solution of static and spherically symmetric vacuum configurations is the Schwarzschild-de Sitter metric, in which the value of the effective cosmological constant is calculated using the well-known equivalence of $f(R)$ and Brans-Dicke theories (see for instance \cite{Capozziello2011}). In the case of a null cosmological constant (which is precisely that of $R$-squared gravity), the mass parameter coincides with the Schwarzschild mass (\emph{i.e.}, the value of $m(r)$ at the surface of the star). The structure of static and spherically-symmetric compact stars in the context of the Palatini formalism was studied in \cite{Barausse2008a} assuming both polytropic and realistic EoSs. In the first case, the authors showed that the matching between the interior and exterior solutions at the surface of the star can yield divergences in the curvature invariants near the surface of the star when polytropic EoSs with $3/2< \Gamma < 2$ are used for generic $f(R)$. The no-go theorem related to the issue of the singularity at the surface of some polytropic NSs in these models was carefully analysed in \cite{Barausse2008c}. It was claimed there that the origin of the singularity does not lie in the fluid approximation or in the specifics of the approach followed to solve the internal structure of the star, but is related to the intrinsic features of Palatini $f(R)$ gravity.
The authors of \cite{Barausse2008c} argued that the root of the problem lies in the differential structure of the field equations, in which the matter field derivatives are of higher order than the metric derivatives.\footnote{In fact, this feature induces corrections to the standard model of particle physics at the MeV energy scale, see \cite{Flanagan2004,Flanagan2004b,Iglesias2007}.} This peculiarity introduces non-cumulative effects and makes the metric sensitive to the local characteristics of matter. A possible resolution to the singularity problem in this context, namely the addition of terms quadratic in the derivatives of the connection to the gravitational action, was also discussed in \cite{Barausse2008c}.\footnote{The existence of singularities at the surface of the star in the context of Eddington-inspired Born-Infeld (EiBI) theory was proved in \cite{Pani2012}, while a possible resolution to this problem, due to gravitational back-reaction on the particles was presented in \cite{Kim2014}.}$^{,}$\footnote{It was shown in \cite{Olmo2008} that the surface singularities are not physical in the case of Planck-scale modified Lagrangians, in which they are instead an artifact of the idealised equation of state used.} If more realistic EoSs (which take into account the fundamental microphysics of the matter that composes the star) are used along with the modified TOV equations, compact stars present another unappealing feature in Palatini $f(R)$ gravity. In \cite{Barausse2008a}, the structure of NSs was calculated for the choice $f(R)=R+\alpha R^2$, using an analytic approximation of the realistic FPS EoS \cite{Haensel2004}. In spite of the fact that such EoS yields a regular solution at the surface, the interior metric strongly depends on the first and second derivatives of the function $\rho(p)$. 
As a consequence, the radial profiles of the mass parameter are not smooth functions as in GR, but develop bumps where there are rapid changes in the derivatives of the EoS \cite{Barausse2008c}. The above results show that the modelling of NSs in $f(R)$ theories in the Palatini formalism involves some extra considerations compared with the GR case, due to the strong correlation between the metric and the derivatives of the EoS. It is important to note that these derivatives are poorly constrained, since realistic EoSs are constructed to fit only the zeroth-order relation between $\rho$ and $p$, which is enough to calculate the structure of NSs in GR. Thus, special care must be taken if higher-order derivatives (e.g. ${\rm d}p/{\rm d}\rho$, ${\rm d}^2p/{\rm d}\rho^2$) are used in the calculation, as in the case we are interested in here. The main goal of this work is to check whether the non-smoothness of the mass parameter reported in \cite{Barausse2008a} is actually a feature of $f(R)$ theories in the Palatini formalism, or whether it may be due to the details of the EoS chosen there. For this purpose, we calculate the structure of a star in the Palatini formalism with the choice $f(R)=R+\alpha R^{2}$ in two different ways. First, we use the SLY EoS (instead of the FPS EoS used in \cite{Barausse2008a}). Second, we employ an approximation to the EoS based on the connection of multiple polytropes. The polytropes represent the state of matter at the core and crust of the NS, and allow us to control the derivatives of the EoS through a set of parameters. We find that in both cases the internal profiles run in strong correlation with the derivatives of the EoS, leading to regions where the mass parameter decreases with the radial coordinate in a counter-intuitive way, even when hard phase transitions in the EoS are allowed. We also find that mass-radius configurations in this theory do not allow heavier NSs than in GR for any plausible $\alpha > 0$. 
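The sensitivity to EoS derivatives can be made concrete with a toy piecewise-polytrope sketch in Python. This is not the paper's actual PLYT parametrisation; the function names, parameter values, and the simple pressure-continuity matching below are our own illustrative assumptions. The point is that a core-crust transition makes ${\rm d}p/{\rm d}\rho$ jump, and in the Palatini structure equations such jumps feed directly into the metric.

```python
def polytrope(rho, K, gamma):
    """Pressure p = K*rho**gamma and its first two density derivatives."""
    p = K * rho**gamma
    dp = K * gamma * rho**(gamma - 1.0)                    # dp/drho
    d2p = K * gamma * (gamma - 1.0) * rho**(gamma - 2.0)   # d2p/drho2
    return p, dp, d2p

def two_zone_eos(rho, rho_t, K_crust, g_crust, g_core):
    """Toy crust/core piecewise polytrope, continuous in p at the
    transition density rho_t.  The first derivative dp/drho generally
    jumps there, mimicking a core-crust phase transition; it is exactly
    such jumps that imprint bumps on the Palatini mass profile."""
    K_core = K_crust * rho_t**(g_crust - g_core)  # enforce p-continuity at rho_t
    if rho < rho_t:
        return polytrope(rho, K_crust, g_crust)
    return polytrope(rho, K_core, g_core)
```

Evaluating just below and just above $\rho_t$ shows that $p$ is continuous while ${\rm d}p/{\rm d}\rho$ jumps, which is the behaviour whose effect on $m(r)$ is examined in Section~4.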
The paper is organised as follows. In Section~2, we present the modified TOV equations in the Palatini formalism. Realistic EoSs and the integration of the stellar structure are described in Section~3, focusing on the mass-radius relations and the correlation between the features of the internal profile and the first and second derivatives of the EoSs. In Section~4 we introduce a parametrisation for the EoS based on the connection of multiple polytropes, and examine the stellar structure obtained for this EoS. Final remarks are presented in Section~5.
In order to investigate whether $f(R)$ theories in the Palatini formalism can be used to describe astrophysical scenarios in the strong curvature regime, we studied the internal structure of NSs in the theory defined by $f(R)=R+\alpha R^2$. In contrast to the metric formalism, the modified TOV equations contain derivatives of the metric only up to second order, as in the GR case. However, in spite of this advantage, the integration involves some extra considerations since derivatives of the EoS are present in the structure equations. Considering the SLY EoS commonly used to compute NSs, we obtained results consistent with previous studies (in which the FPS EoS was used \cite{Barausse2008a}) regarding the static mass-radius configurations and internal mass profiles. Concerning the mass-radius relations, lower maximum masses than in the GR case are obtained, although the differences are not large enough to fully constrain the parameter $\alpha$ by observational evidence of the most massive NSs. A more serious problem is found when the internal structure of these models is analysed. A counter-intuitive behaviour is observed in the mass profiles, which include regions where ${\rm d}m/{\rm d}r<0$. It was claimed in \cite{Barausse2008a} that this feature is a natural consequence of theories of gravity involving higher-order derivatives of the matter fields than of the metric. Assuming the validity of the realistic EoS, it may be possible to limit the parameter $\alpha$ to values lower than $10^9$ cm$^2$ if ${\rm d}m/{\rm d}r>0$ is required throughout the interior of the star. However, EoSs for matter in the extremely high density regime are usually constrained by fitting the structure of NSs in GR, where only the zeroth-order relation between $\rho$ and $p$ is relevant. Thus, it seems inappropriate to use such an EoS to constrain alternative theories of gravity, since its construction already carries the bias $\alpha=0$. 
This is of course an intricate problem because NSs are actually the only natural laboratories where properties of high density matter can be tested. Thus, in this work we also studied an alternative parametrisation for the EoS, namely PLYT, that simply accounts for the core and crust regions of the NS. This is achieved by means of polytropic relations connected continuously and analytically, which mimic the phase transition between both regions. This parametrisation of the EoS allows us to control its first and second derivatives. The trends of mass-radius configurations found using an analytic approximation to the realistic SLY and FPS EoSs are recovered, as well as their internal profiles. We found that even in the limiting case representing a hard phase transition between the core and the crust of the compact star, the peculiar behaviour of the mass parameter profile is unavoidable, and lighter NSs than those calculated with GR are obtained. Our results also indicate that in the limit $\Delta = 0$ there will be a curvature singularity, due to the discontinuity in $m(r)$. These features suggest that the problems claimed to be characteristic of NSs in Palatini $f(R)$ theories are indeed rooted in the nature of the field equations, and that core-crust phase transitions in the EoS are not capable of counteracting this dependence. To conclude, we would like to mention two lines of research that are a natural extension of this work. The first is the possible existence of wormhole-like solutions that may arise from particular choices of the EoS and the function $f(R)$. The second is the study of the stability of the calculated NSs, which would be very important to ensure that configurations using different parametrisations of the EoS can be realised in $R$-squared gravity. Such studies are left for future work.
Recent observations have detected molecular outflows in a few nearby starburst nuclei. We discuss the physical processes at work in such an environment in order to outline a scenario that can explain the observed parameters of the phenomenon, such as the molecular mass, speed and size of the outflows. We show that outflows triggered by OB associations, with $N_{OB}\ge 10^5$ (corresponding to a star formation rate (SFR) $\ge 1$ M$_{\odot}$ yr$^{-1}$ in the nuclear region), in a stratified disk with mid-plane density $n_0\sim 200\hbox{--}1000$ cm$^{-3}$ and scale height $z_0\ge 200\, (n_0/10^2 \, {\rm cm}^{-3})^{-3/5}$ pc, can form molecules in a cool, dense and expanding shell. The associated molecular mass is $\ge 10^7$ M$_\odot$ at a distance of a few hundred pc, with a speed of several tens of km s$^{-1}$. We show that a SFR surface density of $10 \le \Sigma_{SFR} \le 50$ M$_\odot$ yr$^{-1}$ kpc$^{-2}$ favours the production of molecular outflows, consistent with observed values.
\label{intro} Observations show that outflows from starburst galaxies contain gas in different phases, which manifest through different emission mechanisms and are probed at different wavelengths. The fully ionised component usually shows up through free-free emission and is probed by X-ray observations (\cite{strickland2004}, \cite{heckman1990}). The partially ionised/atomic component is clumpier than the fully ionised gas, and is probed by line emission from various ions, e.g. NaI, MgII \citep{heckman00}. Outflows from some nearby starburst galaxies have also been observed to contain a molecular component. Understanding the dynamics of this molecular component has become an important issue, in light of recent observations with {\it ALMA} and further observations in the future. \cite{bolatto2013} observed a molecular outflow in the central region of NGC 253 with a rate of $\ge 3 $ M$_\odot$ yr$^{-1}$ (likely as large as $9$ M$_\odot$ yr$^{-1}$), with a mass loading factor of $1\hbox{--}3$. Four expanding shells with radii $60\hbox{--}90$ pc have velocities of $\simeq 23\hbox{--}42$ km s$^{-1}$, suggesting a dynamical age of $\sim 1.4\hbox{--}4$ Myr. The inferred molecular mass is $(0.3-1) \times 10^7$ M$_\odot$, and the energy $\sim (2-20)\times 10^{52}$ erg. These shells likely outline a larger shell around the central starburst region. \cite{tsai2012} observed a molecular outflow in NGC 3628 with the CO (J=1-0) line. The outflow shows an almost structureless morphology, with a very weak bubble breaking through in the north part of the central outflow. Its size of $\sim 370\hbox{--}450$ pc, inferred molecular mass of $\sim 2.8 \times 10^7$ M$_\odot$, and outflow speed of $\sim 90\pm 10$ km s$^{-1}$ suggest a total kinetic energy of molecular gas of $\sim 3 \times 10^{54}$ erg. 
More recently, \cite{salak2016} observed dust lanes above the galactic plane in NGC 1808, along with NaI, NII and CO(1-0) emission lines tracing extraplanar gas close (within 2 kpc) to the galactic centre, with a mass of $10^8~M_\odot$ and a nuclear star formation rate of $\sim 1~M_\odot$ yr$^{-1}$. The velocity along the minor axis varies in the range $48\hbox{--}128$ km s$^{-1}$ and most likely indicates a gas outflow off the disk, with an estimated mass loss rate of $(1\hbox{--}10)~M_\odot$ yr$^{-1}$. The molecular outflow observed in M82 has a complex morphology. The part of it outlined by CO emission is at larger radii than the part seen with HCN and HCO$^+$ lines. The CO (J=1-0) observations show diffuse molecular gas in a nearly spherical region of radius $\sim 0.75$ kpc, with a total molecular mass of $3.3 \times 10^8$ M$_\odot$ and an average outflow velocity of $\sim 100$ km s$^{-1}$ \citep{walter2002}. The corresponding kinetic energy of the CO outflow is $\sim 3\times 10^{55}$ erg. More recently, \cite{salak2013} re-estimated the mass and kinetic energy of the CO gas to be larger by factors of 3 and 3-10, respectively. Notably, the molecular outflow morphology is similar to that of the dust halo described by \cite{alton1999}. The morphology of the region of the outflow observed in HCN/HCO$^+$ is similar to that of the CO outflow -- it is amorphous and nearly spherical, with a slightly smaller length scale: the radius of the HCN region is $400 \hbox{--}450$ pc, and around $600$ pc for HCO$^+$; both HCN and HCO$^+$ emissions show clumpy structure with a characteristic size of 100 pc \citep{salas2014}. The kinematics and energetics differ slightly from those inferred for the CO outflow: the mean de-projected outflow velocity for HCO$^+$ is 64 km s$^{-1}$, while for HCN it is 43 km s$^{-1}$. 
The total molecular mass contained in the HCN (HCO$^+$) outflow is $\ge 7\,(21)\times 10^6$ M$_\odot$, which in total is an order of magnitude lower than that of the molecular outflow associated with CO \citep{walter2002}. The kinetic energy of the outflow associated with HCN/HCO$^+$ emission ranges between $5\hbox{--}30 \times 10^{52}$ erg. The molecular outflow rate is $\ge 0.3$ M$_\odot$ yr$^{-1}$. They also inferred a SFR of $\sim 4\hbox{--}7$ M$_\odot$ yr$^{-1}$ from free-free emission. These observations pose a number of questions that we address in this paper: are the molecules formed {\it in situ} in the flow, are they entrained in the flow, or are they residues of the parent molecular cloud (in which the superbubble has gone off)? What are the typical length scales, time scales, molecular mass and speed? How are these related to the SFR, or to disk parameters (e.g., gas density, scale height)? In this paper we outline a model which includes the basic physical processes for producing a molecular outflow in starburst nuclei, and addresses some of these issues. We have kept our model simple enough to be general, but it has the essential ingredients needed to explain some of the observed parameters mentioned above, namely the length scales and velocities, as well as an estimate of the molecular mass. Our results can serve as base models for more sophisticated numerical simulations, which would be able to address finer details of this complex phenomenon. We use a model of a shell propagating in a stratified ISM in our calculation. Such an outflow is inherently 2-dimensional, with the dense shell pushed out to a roughly constant stand-off radius in the plane of the disk, while the top of the bubble is blown off by the Rayleigh-Taylor instability. 
In steady state, a dense shell (in which molecules can form) exists in a dynamically young ($r/v\sim $ few Myr) conical shell confined within a few times the scale-height (see Figure \ref{fig:schem} for a cartoon; for numerical simulations, see Figs 2, 3 of \cite{sarkar2015}). For analytical tractability, we consider the formation and survival of molecules in the dense shell expanding in a stratified disk. All starburst nuclei discussed in the paper show a CO disk and biconical outflows emanating from them. We expect our simple estimates to apply, at least to an order of magnitude, for the realistic scenario. We begin with a discussion of the phase space of molecular and ionic components of outflows from starbursts, and after eliminating various possibilities we arrive at a basic scenario (\S 2). In the later part of this section, we study various physical constraints on the parameters of the starburst and the disk galaxy for producing molecular outflows. Next we discuss the physical processes involved in the formation and destruction of molecules in these outflows (\S 3) and present our results in \S 4.
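The parameter constraints just outlined can be condensed into a quick numerical check. The Python sketch below encodes the threshold values quoted in the abstract (mid-plane density $n_0\sim 200\hbox{--}1000$ cm$^{-3}$, scale height $z_0 \ge 200\,(n_0/10^2\,{\rm cm}^{-3})^{-3/5}$ pc, and $10\le\Sigma_{SFR}\le 50$ M$_\odot$ yr$^{-1}$ kpc$^{-2}$); the function name and packaging are our own, not from the paper.

```python
def favours_molecular_outflow(n0_cm3, z0_pc, sigma_sfr):
    """True if the disk/starburst parameters satisfy the conditions
    quoted in the abstract for molecule formation in the expanding
    superbubble shell (illustrative packaging of the stated thresholds)."""
    z0_min = 200.0 * (n0_cm3 / 100.0)**(-3.0 / 5.0)    # pc; scale-height threshold
    disk_ok = 200.0 <= n0_cm3 <= 1000.0 and z0_pc >= z0_min
    sfr_ok = 10.0 <= sigma_sfr <= 50.0                 # Msun/yr/kpc^2 window
    return disk_ok and sfr_ok
```

For NGC 253-like parameters ($n_0\sim 200$ cm$^{-3}$, $z_0\sim 200$ pc, $\Sigma_{SFR}\sim 20$ M$_\odot$ yr$^{-1}$ kpc$^{-2}$) the check passes, consistent with the detection of a molecular outflow there.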
We summarise our main findings as follows: \begin{itemize} \item We have considered a simple 1-D model of molecule formation in expanding superbubble shells triggered by star formation activity in the nuclei of starburst galaxies. We have determined a threshold condition (eqn 5) on the disk parameters (gas density and scale height) for the formation of molecules in superbubble shells breaking out of disk galaxies. This threshold condition implies a gas surface density of $ \ge 2000$ M$_{\odot}$ pc$^{-2}$, which translates to a SFR of $\ge 3$ M$_{\odot}$ yr$^{-1}$ within a nuclear region of radius $\sim 300$ pc, consistent with the observed SFR of galaxies hosting molecular outflows. We also show that there is a range in the surface density of SFR that is most conducive to the formation of molecular outflows, given by $10\le \Sigma_{SFR} \le 50$ M$_\odot$ yr$^{-1}$ kpc$^{-2}$, consistent with observations. \item Consideration of molecule formation in these expanding superbubble shells predicts molecular outflows with velocities $\sim 30\hbox{--}40$ km s$^{-1}$ at distances $\sim 100\hbox{--}200$ pc with a molecular mass $\sim 10^6\hbox{--}10^7$ M$_{\odot}$, which tally with the recent ALMA observations of NGC 253. \item We have considered different combinations of disk parameters, and the predicted velocities of molecule-bearing shells in the range $\sim 30\hbox{--}100$ km s$^{-1}$, with length scales $\ge 100$ pc, are in rough agreement with the observations of molecules in NGC 3628 and M82. \end{itemize} \textbf{ACKNOWLEDGEMENT} We would like to thank Eve Ostriker for valuable discussions and an anonymous referee for useful comments. The paper is supported partly (YS) by RFBR (project codes 15-02-08293 and 15-52-45114-IND). YS is also thankful to the Grant of the President of the Russian Federation for Support of the Leading Scientific Schools NSh-4235.2014.2 \footnotesize{
A rapidly spinning, strongly magnetized neutron star is invoked as the central engine for some gamma-ray bursts (GRBs), especially those with an ``internal plateau'' feature in the X-ray afterglow. However, for these ``internal plateau'' GRBs, how the prompt emission is produced remains an open question. Two different physical processes have been proposed in the literature: (1) a new-born neutron star is surrounded by a hyper-accreting and neutrino-cooled disk, and the GRB jet can be powered by neutrino annihilation along the spin axis; (2) a differentially rotating millisecond pulsar is formed, owing to the different angular velocities of the interior core and outer shell of the neutron star, which can power an episodic GRB jet. In this paper, by analyzing the data of the peculiar GRB 070110 (which shows an internal plateau), we try to test which model is favored. By deriving the physical parameters of the magnetar from the observational data, the inferred ranges of the initial period ($P_{0\rm }$) and surface polar-cap magnetic field ($B_{\rm p}$) of the central NS are $(0.96\sim 1.2 )~\rm ms$ and $(2.4\sim 3.7)\times 10^{14}~\rm G$, respectively. The radiative efficiency of the prompt emission is about $\eta_{\gamma} \sim 6\%$. However, the radiative efficiency of the internal plateau ($\eta_{\rm X}$) is larger than $31\%$, assuming $M_{\rm NS}\sim1.4 M_{\odot}$ and $P_{0\rm }\sim1.2 ~\rm ms$. The clear difference between the radiative efficiencies of the prompt emission and the internal plateau implies that they may originate from different components (e.g. the prompt emission from a relativistic jet powered by neutrino annihilation, and the internal plateau from the magnetically driven wind).
So far, more than 120 GRBs have been observed with a shallow (or plateau) decay segment in the X-ray afterglow. However, if the plateau is followed by a normal decay, one cannot confidently conclude that the shallow decay originates from internal dissipation of the magnetar spin-down (Panaitescu et al. 2006). The magnetar signature instead typically invokes a shallow decay phase (or plateau) followed by a steeper decay segment (steeper than $t^{-3}$). We therefore require three independent criteria to define our sample. First, the burst displays an ``internal plateau''. Second, after the sharp decay following the plateau, another power-law component appears with a decay index less than 1.5, which is contributed by the external shock emission. Third, the redshift of the burst needs to be measured, in order to estimate the gamma-ray energy and kinetic energy. We systematically processed the XRT data of more than 1250 GRBs observed between 2005 January and 2016 March. Only GRB 070110, with duration $T_{90}\sim 88$ s, satisfies those three requirements in our entire sample. We next perform a temporal fit to the plateau behavior of GRB 070110 with a smooth broken power law \begin{eqnarray} F = F_{0} \left[\left(\frac{t}{t_b}\right)^{\omega\alpha_1}+ \left(\frac{t}{t_b}\right)^{\omega\alpha_2}\right]^{-1/\omega}, \end{eqnarray} plus a single power-law function \begin{eqnarray} F = F_{1}t^{-\alpha_3} , \end{eqnarray} where $t_{b}$ is the break time, $F_b=F_0 \cdot 2^{-1/\omega}$ is the flux at the break time $t_b$, $\alpha_1$, $\alpha_2$ and $\alpha_3$ are decay indices, and $\omega$ describes the sharpness of the break. The larger the $\omega$ parameter, the sharper the break. We also collect the optical observational data from Troja et al. (2007). Both the X-ray and optical light curves are shown in Figure 1, and the fitting result is presented in Table 1. 
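The fitting model above (smoothly broken power law plus a single power law) is straightforward to code. The Python sketch below (our own packaging, not the authors' fitting code) also makes the quoted break-flux relation $F_b = F_0\cdot 2^{-1/\omega}$ explicit: at $t=t_b$ both terms inside the bracket equal one.

```python
def plateau_model(t, F0, tb, a1, a2, omega, F1, a3):
    """Smoothly broken power law (internal plateau) plus a single
    power law (external-shock component), as in the two equations above."""
    bpl = F0 * ((t / tb)**(omega * a1) + (t / tb)**(omega * a2))**(-1.0 / omega)
    return bpl + F1 * t**(-a3)
```

A larger $\omega$ makes the transition between the $\alpha_1$ and $\alpha_2$ segments sharper, which is why a sharp internal-plateau drop is fitted with a large sharpness parameter.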
Another two important parameters are the isotropic gamma-ray energy ($E_{\rm \gamma,iso}$) and the kinetic energy ($E_{\rm K,iso}$). $E_{\rm \gamma,iso}$ is measured from the observed fluence and distance as \begin{eqnarray} E_{\rm \gamma,iso}&=&4\pi k D^{2}_{L} S_{\gamma} (1+z)^{-1}\nonumber \\ &=&(3.09\pm2.51)\times 10^{52}~ {\rm erg} \end{eqnarray} where $z=2.352$ is the redshift, $D_{L}$ is the luminosity distance, $S_{\rm \gamma}=(1.8\pm0.2)\times 10^{-6} \rm erg~ cm^{-2}$ is the gamma-ray fluence in the BAT band, and $k$ is the $k$-correction factor from the observed band to $1-10^4$ keV in the burst rest frame (e.g. Bloom et al. 2001). For more details, please refer to L\"{u} \& Zhang (2014). $E_{\rm K,iso}$ is the isotropic kinetic energy of the fireball. It can be estimated with the standard forward-shock afterglow model (Sari, Piran \& Narayan 1998; Fan \& Piran 2006). For the late-time X-ray afterglow data ($t> 5\times10^4$ s), one has a decay slope $\alpha_3\sim 0.82$ and a spectral index $\beta_{\rm X}\sim 1.12$ in the normal decay segment. These approximately satisfy $2\alpha_3\simeq 3\beta_{\rm X}-1$ in the spectral regime $\nu > max(\nu_{\rm m},\nu_{\rm c})$, where $\nu_{\rm m}$ and $\nu_{\rm c}$ are the typical and cooling frequencies of synchrotron radiation, respectively. Following the equations and methods of Yost et al. (2003), the flux recorded by XRT (0.3 keV - 10 keV) is \begin{eqnarray} \rm Flux &=& 1.2\times10^{-12} ~{\rm erg~s^{-1}~cm^{-2}}(\frac{1+z}{2})^{(p+2)/4}D_{L,28}^{-2} \nonumber \\& \times & \epsilon_{B,-2}^{(p-2)/4}\epsilon_{e,-1}^{p-1}E_{\rm K,iso,53}^{(p+2)/4}(1+Y)^{-1}t_{d}^{(2-3p)/4}, \end{eqnarray} where the Compton parameter $Y$ is assigned a typical value $Y=1$. Combining this with the observational data, one obtains $E_{\rm K, iso}\sim 5\times 10^{53} \rm ~erg$; the physical parameters of the forward shock model are shown in Table 1, and the fitting result is presented in Figure 1.
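With both energies in hand, the prompt radiative efficiency can be checked numerically. The sketch below assumes the standard definition $\eta_\gamma = E_{\gamma,\rm iso}/(E_{\gamma,\rm iso}+E_{\rm K,iso})$ (our assumption of the definition used in the text) and plugs in the quoted central values:

```python
def radiative_efficiency(E_gamma_iso, E_K_iso):
    """Standard definition: eta = E_gamma / (E_gamma + E_K)."""
    return E_gamma_iso / (E_gamma_iso + E_K_iso)

# Central values quoted above: E_gamma,iso ~ 3.09e52 erg, E_K,iso ~ 5e53 erg
eta_gamma = radiative_efficiency(3.09e52, 5e53)
```

This gives $\eta_\gamma \approx 0.058$, consistent with the $\eta_{\gamma}\sim 6\%$ quoted for the prompt emission.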
An internal dissipation process of a magnetar with a Poynting-flux-dominated outflow has been invoked to interpret the internal plateau phase of GRB afterglows. We suggest that comparing the radiative efficiencies of the prompt emission and of the internal plateau phase can help to investigate the composition of the GRB jet. We focus on analyzing the data of GRB 070110, which exhibits an internal plateau feature followed by a normal decay. We first estimate the physical parameters of the magnetar based on the observational features of the internal plateau; the inferred ranges of the initial period ($P_{\rm 0}$) and surface polar-cap magnetic field ($B_{\rm p}$) of the NS are $(0.96\sim 1.2)\rm~ ms$ and $(2.4\sim 3.7)\times 10^{14}\rm~ G$, respectively. In this case, the radiative efficiency of the prompt emission would be $\eta_{\gamma} \sim (6\pm 4)\%$ if the GRB jet was powered by neutrino annihilation. On the other hand, the lower limit on the radiative efficiency of the internal plateau is estimated as $\eta_{\rm X}=31\%$ with $M_{\rm NS}\sim1.4 M_{\odot}$ and $P_{0\rm }\sim1.2 \rm ~ms$. The standard internal shock model and the magnetic dissipation model for the prompt emission predict low and high radiative efficiencies, respectively (Kumar 1999; Panaitescu et al. 1999; Usov 1992; Zhang \& Yan 2011). Moreover, it is widely accepted that the internal plateau phase arises from a magnetar wind dissipation process. Since the prompt-emission radiative efficiency ($\eta_{\gamma} \sim 6\%$) is much less than the minimum efficiency of the internal plateau ($\eta_{\rm X}=31\%$), the prompt emission and the later internal plateau of GRB 070110 may have different origins, e.g., a new-born neutron star surrounded by a hyper-accreting disk generates the prompt emission, while magnetic dipole dissipation accounts for the later internal plateau. One remaining question is whether neutrino annihilation from the cooling NS can power the prompt emission of GRB 070110. 
If a neutron star surrounded by a hyper-accreting disk is invoked to power the GRB jet, the neutrino annihilation luminosity ($L_{\rm \nu\bar{\nu}}$) is contributed by neutrinos emitted from both the disk and the neutron star surface layer. Following the method of Zhang \& Dai (2009), there is no analytical solution for $L_{\rm \nu\bar{\nu}}$; it depends on several parameters, e.g. the accretion rate ($\dot{M}$), outflow index ($s$), viscosity ($\alpha$), energy parameter ($\varepsilon$) and the efficiency factor measuring the surface emission ($\eta_{\rm s}$). Therefore, we have to use a numerical method to obtain the solution with appropriate parameters, to compare with the observed prompt-emission luminosity. Since $L_{\rm \nu\bar{\nu}}$ does not depend sensitively on $\alpha$, $\varepsilon$ and $s$ (see Figures 7 and 8 in Zhang \& Dai 2009), we fix the typical values $\alpha=0.1$, $\varepsilon=0.5$ and $s=0.2$. Assuming $\eta_{\rm s}=0.5$ and $\dot{M}=0.03~\rm M_{\odot}~s^{-1}$, one has $L_{\rm \nu\bar{\nu}}\sim 3\times 10^{48}~\rm erg~s^{-1}$. However, the observed prompt-emission energy is an isotropic-equivalent value rather than the true energy. 
Due to the lack of an observed jet-break feature, one can estimate a lower limit on the jet opening angle from the last observed point ($t_{j}\sim 25$ days) in the X-ray afterglow, \begin{eqnarray} \label{theta_j} \theta_j &=& 0.057 ~\rm rad ~\left(\frac{t_j}{1 ~\rm day}\right)^{3/8}\left(\frac{1+z}{2}\right)^{-3/8} \nonumber \\& \times & \left(\frac{E_{\rm K,iso}}{10^{53}~\rm ergs}\right)^{-1/8}\left(\frac{n}{0.1 ~\rm cm^{-3}}\right)^{-3/8}\nonumber \\&=& 7.4^{\circ} \end{eqnarray} The beaming-corrected prompt emission energy of the GRB jet is \begin{eqnarray} E_{\rm \gamma}=E_{\rm \gamma, iso}\cdot f_{b}\simeq 2.5\times 10^{50} \rm~erg \end{eqnarray} where $f_b$ is the beaming factor of GRB 070110, \begin{eqnarray} \label{fb} f_b = 1-\cos \theta_j \simeq (1/2) \theta_j^2, \end{eqnarray} and the luminosity of the prompt emission is $L_{\rm jet}\sim E_{\rm \gamma}/T_{90}\sim 2.8\times 10^{48} ~\rm erg~s^{-1}$. One has $L_{\nu\bar{\nu}}> L_{\rm jet}$ with typical values of the parameters and $\dot{M}=0.03~\rm M_{\odot}~s^{-1}$; namely, neutrino annihilation from the NS can provide enough energy to power the GRB jet.
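The chain of numbers in this estimate can be reproduced directly. The Python sketch below (our own packaging) evaluates the opening-angle formula with the quoted inputs ($t_j=25$ d, $z=2.352$, $E_{\rm K,iso}=5\times10^{53}$ erg, $n=0.1$ cm$^{-3}$) and propagates it through the beaming correction:

```python
import math

def jet_opening_angle(t_j_day, z, E_K_iso, n):
    """Lower limit on the jet half-opening angle (radians), using the
    scaling relation quoted in the text."""
    return (0.057 * t_j_day**(3.0 / 8.0)
            * ((1.0 + z) / 2.0)**(-3.0 / 8.0)
            * (E_K_iso / 1e53)**(-1.0 / 8.0)
            * (n / 0.1)**(-3.0 / 8.0))

theta_j = jet_opening_angle(25.0, 2.352, 5e53, 0.1)   # ~0.13 rad, i.e. ~7.4 deg
f_b = 1.0 - math.cos(theta_j)                         # beaming factor
E_gamma = 3.09e52 * f_b                               # ~2.5e50 erg
L_jet = E_gamma / 88.0                                # T90 = 88 s -> ~2.9e48 erg/s
```

The numbers recovered this way ($\theta_j\approx 7.4^\circ$, $E_\gamma\approx 2.5\times10^{50}$ erg, $L_{\rm jet}\approx 3\times10^{48}$ erg s$^{-1}$) match those quoted in the text, confirming that $L_{\nu\bar\nu}\sim 3\times10^{48}$ erg s$^{-1}$ can exceed $L_{\rm jet}$.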
We present a highly-parallel multi-frequency hybrid radiation hydrodynamics algorithm that combines a spatially-adaptive long characteristics method for the radiation field from point sources with a moment method that handles the diffuse radiation field produced by a volume-filling fluid. Our Hybrid Adaptive Ray-Moment Method (\harm) operates on patch-based adaptive grids, is compatible with asynchronous time stepping, and works with any moment method. In comparison to previous long characteristics methods, we have greatly improved the parallel performance of the adaptive long-characteristics method by developing a new completely asynchronous and non-blocking communication algorithm. As a result of this improvement, our implementation achieves near-perfect scaling up to $\mathcal{O}(10^3)$ processors on distributed memory machines. We present a series of tests to demonstrate the accuracy and performance of the method.
Radiation-hydrodynamics (RHD) is a challenging numerical problem, but it is a crucial component in modeling several physical phenomena in the fields of astrophysics, laser physics, and plasma physics. Accurate solution of the radiative transfer (RT) equation, which governs the evolution of radiation interacting with matter, is difficult because of its high dimensionality. This equation depends on six independent variables: three spatial, two angles describing the direction of propagation of photons, and one frequency dimension. For time-dependent RHD calculations, this solution must be obtained at every time step, and then coupled to the hydrodynamics. Even on parallel supercomputers, direct solution of the RT equation at each time step of a time-dependent calculation is prohibitively expensive; because of this, most numerical RHD codes use approximations to treat the evolution of the radiation field and its interaction with matter. One common approach to solving the RHD equations is to reduce the dimensionality of the problem. This class of approximations is known as moment methods because they take moments of the radiative transfer equation in direct analogy to the Chapman-Enskog procedure used to derive the hydrodynamic equations from the kinetic theory of gases \citep{krumholz2011b, teyssier2015a}. This method averages over the angular dependence, and thus is a good approximation for smooth, diffuse radiation fields such as those present in optically thick media when the radiation is tightly coupled to the matter. The accuracy with which moment methods recover the angular dependence of the true solution depends on the order at which the moments are closed, and on the closure relation adopted. 
Common approximations include flux-limited diffusion (FLD; closure at the first moment) \citep{levermore1981a,krumholz2007a,commercon11a}, the M1 method (closure at the second moment using a minimum entropy closure) \citep{gonzalez2005a,rosdahl15a}, and the Variable Eddington Tensor method (VET; closure at the second moment using an approximate solution of the full transfer equation) \citep{dykema1996a, jiang2012a, davis2012a}. Regardless of the order and closure relation, the computational cost of these methods usually scales as $N$ or $N \log{N}$, where $N$ is the number of cells, and the technique is highly parallelizable \citep{krumholz2011b}. An alternative technique used to solve the RT equation numerically is characteristics-based ray tracing, which solves the equation directly along specific rays. With this method, the directionality of the radiative flux is highly accurate, but the accuracy depends on the sampling of rays. Two widely used schemes for ray tracing in grid-based codes are \textit{long} and \textit{hybrid} characteristics. Long characteristics traces rays on a cell-by-cell basis, and provides the maximum possible accuracy. Hybrid characteristics is a combination of long characteristics within individual grids and short characteristics between grids (i.e., in which only neighboring grid cells are used to interpolate incoming intensities) \citep{rijkhorst2006a, buntemeyer2016a}. The method of short characteristics is faster but more diffusive than long characteristics. The computational cost of both methods scales linearly with the number of sources, rays traced, and grid cells with which the rays interact, making these methods prohibitively expensive for treating diffuse radiation fields where every computational cell is a source. Instead, they are ideal for treating the radial radiation fields of point sources. 
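The per-ray work behind this linear cost scaling can be illustrated with a minimal, absorption-only long-characteristics update (a schematic Python sketch, not the actual \harm\ implementation): each cell a ray crosses attenuates the intensity by $e^{-\kappa\rho\,\Delta s}$, and the absorbed energy is deposited locally, where a moment solver can later handle its diffuse re-emission.

```python
import math

def trace_ray(I0, kappa_rho, ds):
    """Attenuate a point-source ray cell by cell (pure absorption):
    I_{i+1} = I_i * exp(-(kappa*rho)_i * ds_i).
    kappa_rho: opacity*density per crossed cell; ds: path length per cell.
    Returns the emergent intensity and the energy deposited in each cell."""
    I = I0
    deposited = []
    for kr, s in zip(kappa_rho, ds):
        I_next = I * math.exp(-kr * s)
        deposited.append(I - I_next)   # absorbed in this cell -> source term
        I = I_next
    return I, deposited
```

The work is one exponential per crossed cell, so the total cost grows linearly with the number of sources, rays, and intersected cells, which is exactly why these methods suit point sources but not volume-filling emitters.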
Even for this application, however, one major drawback of ray tracing methods, especially long characteristics, is that they are difficult to parallelize in a code where the hydrodynamics is parallelized by domain decomposition. In such a configuration, each ray will usually cross multiple processor domains, creating significant communications overheads and serial bottlenecks. In summary, moment methods are better at approximating the diffuse radiation field from a fluid but are poor at modeling the propagation of radiation from point sources where the direction of the field is important. Characteristics methods, in contrast, are good at approximating the direction-dependent radiation fields from point sources but are too computationally expensive for practical use in simulating a diffuse radiating fluid. When both point and diffuse radiation sources are present, therefore, a natural approach is to combine both techniques by using long characteristics to model the propagation of radiation from a point source and its subsequent interaction (e.g., absorption) with the fluid and then use a moment method to follow the subsequent diffuse re-emission. This technique has been developed in several numerical codes in the past 20 years, but these codes typically have been limited to cases where a geometric symmetry simplifies the long characteristics solution. \citet{wolfire86a, wolfire87a} introduced a formal decomposition between the direct and dust-reprocessed radiation fields for a calculation in 1D spherical geometry. The first published 2D simulation using such a method is \citet{murray1994a}, who coupled long characteristics to FLD to model the direct (ray tracer) and scattered (FLD) radiation field in accretion disk coronae. \citet{kuiper2010a} incorporated a similar hybrid approach in the 3D grid based code Pluto, but again limiting the problem to a special geometry: in this case a single point source at the origin of a spherical computational grid. 
Most recently, \citet{klassen2014a} developed a hybrid scheme in the FLASH adaptive mesh refinement (AMR) code, but using FLD plus hybrid characteristics, which, although faster, is less accurate than long characteristics methods. The reason that many authors have resorted to special geometries or abandoned long characteristics is the difficulty of parallelizing long characteristics in a general geometry, particularly in the case of adaptive grids. The problem is difficult because it is unknown \textit{a priori} how far rays will travel and what grids they will interact with in an adaptive grid framework. In a distributed memory paradigm where different grids may be stored in memory on different processors, this can easily result in a complex communication pattern with numerous serial bottlenecks. Indeed, all implementations of long characteristics on adaptive grids published to date use synchronous communication algorithms in which processors must wait for other processors to receive ray information \citep{wise2011a}, leading to exactly this problem. In this paper we present our Hybrid Adaptive Ray-Moment Method (\harm), which uses long characteristics to treat radiation from point sources coupled to a moment method to handle the diffuse radiation field from the fluid. \harm\ works on adaptive grids with asynchronous time stepping. We have greatly improved the parallelism of the long characteristics solve in a distributed memory framework through a new, completely asynchronous, non-blocking communication method. The rest of this paper is organized as follows. We begin with a formal derivation of our method for decomposing the radiation field into two components in section \ref{sec:theory}. Section \ref{sec:num} describes our numerical implementation of our hybrid radiation scheme in the astrophysical AMR code \orion.
Next we confirm the robustness of our method by providing validation and performance tests in sections \ref{sec:valid} and \ref{sec:perform}, respectively. Finally, we summarize our methods and results in section {\ref{sec:sum}}.
\label{sec:sum} In this paper, we have presented our implementation of \harm\ -- a new highly-parallel multi-frequency hybrid radiation hydrodynamics module that combines an adaptive long characteristics method for the (direct) radial radiation field from point sources with a moment method that handles the (thermal) diffuse radiation field produced by a volume-filling fluid. Our new method is designed to be used with adaptive grids and is not limited to specific geometries. We have coupled \harm\ to the hydrodynamics in the astrophysical AMR code \orion, which includes flux-limited diffusion, but our method can be applied to any AMR hydrodynamics code that has asynchronous time stepping and can incorporate any moment method. Although our implementation is not the first hybrid radiation scheme implemented in an AMR code, it is more accurate than previous methods because it uses long rather than hybrid characteristics. Furthermore, our new algorithm can be used in a variety of radiation hydrodynamics problems in which both the radiation from point sources and the diffuse radiation field from the fluid must be modelled. Examples include the formation of isolated high-mass stars and clustered star formation in the dusty interstellar medium. One of the major difficulties with incorporating a long characteristics method in an AMR code that allows for a general geometry, where the hydrodynamics is parallelized by domain decomposition, is the parallel communication of rays. This is because ray tracing is a highly serial process and each ray will usually cross multiple processor domains. In order to avoid the significant communication overheads and serial bottlenecks that often occur with long characteristics methods, we have implemented a new completely asynchronous and non-blocking communication algorithm for ray communication.
We performed a variety of weak and strong scaling tests of this method, and found that its performance is dramatically improved compared to previous long characteristics methods. In idealized tests without adaptive grids we obtain near-perfect weak scaling out to $>1000$ cores, and, in problems where the characteristic trace covers the entire computational domain, near-perfect strong scaling as well. Previous implementations became communications-bound at processor counts a factor of $\sim 4$ smaller than this. In a realistic, demanding research application with a complex, adaptive grid geometry, and using 10 frequency bins for the characteristic trace, we find excellent scaling as long as there are at least $\sim 3-4$ grids per CPU, and we find that the cost of adaptive ray tracing is smaller than or comparable to hydrodynamics, and significantly cheaper than flux limited diffusion. Since \harm\ works for adaptive grids in a general geometry, it can be used in a variety of high-resolution simulations that require radiative transfer. Our implementation in \orion\ will be made public in an upcoming release of the \orion\ code, and the \harm\ source code will be made available immediately upon request to any developers who are interested in implementing \harm\ in their own AMR codes.
In the last fifteen years, radio detection has made it back to the list of promising techniques for extensive air showers, firstly, due to the installation and successful operation of digital radio experiments and, secondly, due to the quantitative understanding of the radio emission from atmospheric particle cascades. The radio technique has an energy threshold of about $100\,$PeV, which coincides with the energy at which a transition from the highest-energy galactic sources to the even more energetic extragalactic cosmic rays is assumed. Thus, radio detectors are particularly useful to study the highest-energy galactic particles and ultra-high-energy extragalactic particles of all types. Recent measurements by various antenna arrays like LOPES, CODALEMA, AERA, LOFAR, Tunka-Rex, and others have shown that radio measurements can compete in precision with other established techniques, in particular for the arrival direction, the energy, and the position of the shower maximum, which is one of the best estimators for the composition of the primary cosmic rays. The scientific potential of the radio technique appears to be greatest in combination with particle detectors, because this combination of complementary detectors can significantly increase the total accuracy for air-shower measurements. This increase in accuracy is crucial for a better separation of different primary particles, like gamma-ray photons, neutrinos, or different types of nuclei, because showers initiated by these particles differ in average depth of the shower maximum and in the ratio between the amplitude of the radio signal and the number of muons. In addition to air-shower measurements, the radio technique can be used to measure particle cascades in dense media, which is a promising technique for detection of ultra-high-energy neutrinos. Several pioneering experiments like ARA, ARIANNA, and ANITA are currently searching for the radio emission by neutrino-induced particle cascades in ice.
In the next years these two sub-fields of radio detection of cascades in air and in dense media will likely merge, because several future projects aim at the simultaneous detection of both high-energy cosmic rays and neutrinos. SKA will search for neutrino and cosmic-ray initiated cascades in the lunar regolith and simultaneously provide unprecedented detail for air-shower measurements. Moreover, detectors with huge exposure like GRAND, SWORD or EVA are being considered to study the highest energy cosmic rays and neutrinos. This review provides an introduction to the physics of radio emission by particle cascades, an overview of the various experiments and their instrumental properties, and a summary of methods for reconstructing the most important air-shower properties from radio measurements. Finally, potential applications of the radio technique in high-energy astroparticle physics are discussed.
In recent years, interest in the radio technique has greatly increased for both cosmic-ray and neutrino detection at high energies starting around $100\,$PeV. Current experiments detect cosmic rays by the radio emission of particle cascades in air, and neutrinos are searched for by the radio emission of particle cascades in dense media such as the Antarctic ice or the lunar regolith. In both cases the principles of the radio emission by the particle cascades are the same, with some differences due to the length scales of the shower development, which are set by the density of the medium. The different length scales are the reason that air-shower detection is mostly done at frequencies below $100\,$MHz, compared to a few hundred MHz for radio detection in dense media. Moreover, they make the situation more complex for atmospheric showers, because two emission mechanisms contribute significantly to the radio signal, which are the geomagnetic and the Askaryan effects, while in dense media the geomagnetic effect is negligible. From this point of view air showers constitute the more general case, and the radio emission by showers in dense media can be seen as a simplified case. This is one of the reasons why the radio signal of air showers is the main focus of this review; differences to the situation in dense media will be mentioned where important. The focus on air showers also reflects the progress of current experiments, since several antenna arrays are successfully measuring cosmic-ray air showers, while radio experiments searching for neutrinos are mostly in the prototype phase aiming at a first detection. In any case, the separation between air-shower detection for cosmic rays and dense media for neutrino detection will become less strict in the future.
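The frequency bands quoted above track the physical size of the emitting region, since coherent emission requires wavelengths comparable to or larger than the shower dimensions. The shower sizes in the comments below are rough assumed numbers for illustration, not values taken from this review.

```python
# Wavelengths corresponding to the quoted detection bands.
C = 299792458.0  # speed of light in vacuum [m/s]

def wavelength_m(freq_hz):
    """Vacuum wavelength in meters for a given frequency in Hz."""
    return C / freq_hz

lam_air = wavelength_m(100e6)  # ~3 m; air showers span ~100 m laterally (assumed)
lam_ice = wavelength_m(300e6)  # ~1 m; cascades in ice span ~0.1 m (assumed)
```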
Planned projects aim at air showers for neutrino detection and at particle cascades in the lunar regolith for cosmic-ray detection, respectively, but this foreseen merger of the two fields of cosmic-ray and neutrino detection might still take a few years. For cosmic rays, radio detection is competitive already now. Due to the availability of digital electronics and the feasibility of computing-intensive analysis techniques, current radio arrays achieve similar precision as other technologies for air-shower detection. Several air-shower arrays already feature radio antennas in addition to optical and particle detectors, and others will likely follow. This combination of radio and other complementary detection techniques can be used to maximize the accuracy for the properties of the primary cosmic rays, in particular the arrival direction, the energy, and the particle type. In this sense the radio technique for extensive air showers has just crossed the threshold from prototype experiments and proof-of-principle demonstrations to application for serious cosmic-ray science. This has been possible not only because of the technological advances, but also because the radio emission of air showers is finally understood on a quantitative level: current air-shower-simulation tools can predict the absolute value of the radio amplitude in agreement with measurements. In summary, this review gives an extensive overview of recent developments in the radio-detection technique for extensive cosmic-ray air showers, and also includes related topics, in particular the search for ultra-high-energy neutrinos. The article covers the various experimental setups used for detection, the instrumental properties and methods, and the results achieved by air-shower experiments which have successfully detected cosmic rays.
In addition to providing an overview for the community of this research field of astroparticle physics, this review will hopefully be of help to anybody who wants to start experimental work or analysis in this field. In comparison with other reviews on the radio detection of air showers \cite{HuegeReview2016, FilonenkoRadioReview2015}, this review is more extensive on practical experimental aspects, such as the design of radio experiments, the treatment of background, or analysis methods for measurements by antenna arrays. Ref.~\cite{BrayReview2016} provides an extensive review on the search for particle cascades initiated in the lunar regolith by high-energy neutrinos using radio telescopes, and a summary of the situation of neutrino detection in ice can be found in Ref.~\cite{ConnollyNeutrinoReview2016}. Other reviews and papers may also serve as overviews of related topics, like the general situation in ultra-high-energy cosmic-ray physics \cite{Bluemer2009, Antoine2011}, or other detection techniques for air showers \cite{Haungs2003}. \clearpage
Significant progress has been achieved in the last years regarding the digital radio technique for high-energy cosmic rays and neutrinos: in several regions of the world antenna arrays have been constructed and successfully measure cosmic-ray air showers. Moreover, prototypes for large-scale radio arrays aiming at neutrinos have been deployed in Antarctica, and radio emission by particle cascades was measured under laboratory conditions in a variety of accelerator measurements. The mechanisms of the radio emission now seem to be sufficiently well understood to apply the radio technique for serious measurements in astroparticle physics. This is a substantial advance compared to the situation in the 1970s, when analog radio experiments measured air showers but were not able to interpret the measurements with sufficient accuracy. This is also a clear advance over the situation just a few years ago, when the digital measurements started but the interpretation was hampered by a lack of theoretical understanding. Nonetheless, optical detection techniques are still leading in high-energy astroparticle physics: in particular the detection of Cherenkov light in water and ice for neutrino detection and in air for photon detection, and the combination of particle detector arrays with air-fluorescence telescopes for extensive air showers initiated by ultra-high-energy cosmic rays. However, this might change soon: the current generation of antenna arrays for air showers has demonstrated that digital radio detection together with sophisticated data processing can compete with the established techniques in accuracy. Even though the accuracy recently achieved for radio detection is still slightly worse than that of air-fluorescence detection when taking into account all systematic uncertainties, this is compensated by the higher duty cycle, since the radio technique is not limited to clear nights. So can radio detection completely replace the established techniques?
The answer likely is no, but certainly there are some aspects for which the radio technique is taking over: for the search for neutrinos above $100\,$PeV energy, radio detection already is the most promising option. For cosmic rays, there is a competition between different techniques when aiming at huge exposures, and radio will have a realistic chance to become the technique of choice, since there are at least three ideas for how radio detection can provide huge exposure. All ideas need further investigation, but might be realizable with a sensible amount of resources: first, huge antenna arrays of several $10,000\,$km\textsuperscript{2} for inclined air showers \cite{GRAND_ICRC2015}; second, the observation of particle showers in the lunar regolith \cite{SKAlunar_ICRC2015}; third, the observation of the atmosphere with antennas from space \cite{SWORDarxiv2013}. In other aspects, radio detection will not replace existing techniques. Simply because of the high threshold, it makes no sense to use radio detection for photon or neutrino detection below the PeV energy range. For air showers the true potential of the radio technique is not in the replacement of, but in the combination with, other techniques. This is especially interesting for enabling further progress in this research field without the need for significant new resources, because radio extensions are relatively economical compared to other air-shower detectors. The additional information provided by radio measurements of air showers can increase the total accuracy of the reconstructed energy and mass of the primary particle, which is of utmost importance in order to find and understand the sources of ultra-high-energy cosmic rays. Therefore, the addition of radio antennas to air-shower observatories might bring the necessary step in accuracy to distinguish between competing scenarios for the origin of cosmic rays.
While the principal advantages of radio detection are clear now, a lot of work still has to be done on the details for further advancing the technique. For neutrinos, a successful proof-of-principle is needed, which likely requires further extending the size of existing experiments by at least an order of magnitude. For cosmic-ray air showers, the principal issues are solved, so one has to go deeper in order to improve. This means a better study of systematic uncertainties, a better absolute calibration of antennas, and subsequently a more accurate testing of simulation codes representing our understanding of air-shower physics and the associated radio emission. If this is done, there is a reasonable chance that radio measurements can become even more accurate for the shower energy and for $X_\mathrm{max}$ than the established air-fluorescence and air-Cherenkov techniques, because current antenna arrays have already reached equal precision, but are not yet at the theoretical limit. Moreover, accurate radio measurements of the energy scale and of the shower development will help to improve our understanding of particle cascades at energies beyond the range of the LHC, and consequently be valuable input to high-energy particle and astroparticle physics in general. Concluding, it seems highly advisable to equip any future cosmic-ray observatory with additional radio antennas, and to spend the minimal additional resources required to observe air showers commensally with any astronomical radio observatories operating in the frequency range of a few MHz to a few GHz. \clearpage
\shorttitle{New Constraints on Prototypical HyLIRGs} \shortauthors{Jones et al.} \begin{document} \received{} \revised{} \accepted{} \pagenumbering{arabic} \title {New Constraints on the Molecular Gas in the Prototypical HyLIRGs BRI1202--0725 \& BRI1335--0417} \author{G. C. Jones\altaffilmark{1,2}, C. L. Carilli\altaffilmark{2,3}, E. Momjian\altaffilmark{2}, J. Wagg\altaffilmark{4}, D. A. Riechers\altaffilmark{5}, F. Walter\altaffilmark{6}, R. Decarli\altaffilmark{6}, K. Ota\altaffilmark{7,3}, R. G. McMahon\altaffilmark{8}} \altaffiltext{1}{Physics Department, New Mexico Institute of Mining and Technology, Socorro, NM 87801, USA; gcjones@nrao.edu} \altaffiltext{2}{National Radio Astronomy Observatory, 1003 Lopezville Road, Socorro, NM 87801, USA} \altaffiltext{3}{Cavendish Astrophysics Group, University of Cambridge, Cambridge, CB3 0HE, UK} \altaffiltext{4}{Square Kilometre Array Organization, Jodrell Bank Observatory, Lower Withington, Macclesfield, Cheshire SK11 9DL, UK} \altaffiltext{5}{Department of Astronomy, Cornell University, 220 Space Sciences Building, Ithaca, NY 14853, USA} \altaffiltext{6}{Max-Planck Institut f\"ur Astronomie, K\"onigstuhl 17, D-69117 Heidelberg, Germany} \altaffiltext{7}{Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK} \altaffiltext{8}{Institute of Astronomy, University of Cambridge, Cambridge CB3 0HA, UK} \begin{abstract} We present Karl G. Jansky Very Large Array (VLA) observations of CO($J=2\rightarrow1$) line emission and rest-frame 250$\,$GHz continuum emission of the Hyper-Luminous IR Galaxies (HyLIRGs) BRI1202--0725 ($z=4.69$) and BRI1335--0417 ($z=4.41$), with an angular resolution as high as 0.15$''$. Our low order CO observations delineate the cool molecular gas, the fuel for star formation in the systems, in unprecedented detail.
For BRI1202--0725, line emission is seen from both extreme starburst galaxies: the quasar host and the optically obscured submm galaxy (SMG), in addition to one of the Ly$\alpha$ emitting galaxies in the group. Line emission from the SMG shows an east-west extension of about $0.6''$. For Ly$\alpha$-2, the CO emission is detected at the same velocity as [CII] and [NII], indicating a total gas mass $\sim4.0\times10^{10}\,$M$_{\odot}$. The CO emission from BRI1335--0417 peaks at the nominal quasar position, with a prominent northern extension ($\sim 1''$, a possible tidal feature). The gas depletion timescales are $\sim 10^7$ years for the three HyLIRGs, consistent with extreme starbursts, while that of Ly$\alpha$-2 may be consistent with main sequence galaxies. We interpret these sources as major star formation episodes in the formation of massive galaxies and supermassive black holes (SMBHs) via gas rich mergers in the early Universe.
It is well established that massive galaxies form most of their stars at early times, and the more massive, the earlier (e.g., \citealt{renz06,shap11}). Hyper-luminous infrared galaxies (HyLIRGs), or galaxies with L$_{IR}(8-1000\mu$m$) >10^{13}$ L$_\odot$ \citep{sand96} at high redshift discovered in wide field submm surveys, play an important role in the study of the early formation of massive galaxies, corresponding to perhaps the dominant star formation episode in the formation of massive elliptical galaxies (e.g., \citealt{case14}). Star formation rates over 1000 M$_\odot$ year$^{-1}$, occurring for timescales up to 10$^8$ years, can form the majority of stars in a large elliptical (e.g., \citealt{nara15}). These systems are typically highly dust-obscured, and best studied at IR through radio wavelengths. A second important finding in galaxy evolution is the correlation between the masses of supermassive black holes (SMBHs) and their host spheroidal galaxies (e.g., \citealt{korm13}). While the exact nature of this correlation remains under investigation, including its redshift evolution (e.g., \citealt{walt04,wang13,will15,kimb15}), and certainly counter examples exist (e.g., \citealt{vand12}), the general implication of such a correlation would be that there exists some sort of co-evolution of supermassive black holes and their host galaxies. Submm galaxies (SMGs) and other ultra-luminous infrared galaxies (ULIRGs), powered by either star formation or active galactic nuclei (AGN), have clustering properties that imply they reside in the densest cosmic environments (proto-clusters) in the early Universe (e.g., \citealt{blai02,chap09,capa11}). The mechanism driving the extreme star formation rates remains uncertain, or at least multivariate. Some systems are clearly in the process of a major gas rich merger, in which nuclear starbursts are triggered by tidal torques driving gas to the galaxy centers (e.g., \citealt{enge10,tacc08,riec11L31}).
However, some SMGs show clear evidence for smoothly rotating disk galaxies, with little indication of a major disturbance (e.g., \citealt{hodg12,kari16}). There is some evidence that the most extreme luminosity systems, in particular powerful AGN in HyLIRGs (quasars and powerful radio galaxies), are preferentially involved in active, gas rich major mergers leading to compact, nuclear starbursts (e.g., \citealt{riec08,riec11L32,mile08,ivis12}). These merging systems may indicate a major accretion event in the formation of the SMBH, coeval with the major star formation episode of the host galaxy. Wide field surveys have identified thousands of extreme starbursts in the early Universe, and the statistical properties and demographics are reasonably well determined (e.g., \citealt{case12}). Studies of such systems are now turning to the detailed physical processes driving extreme starbursts in the early Universe, enabled by the advent of sensitive, wide-band interferometers such as the Atacama Large Millimeter/submillimeter Array (ALMA), the Very Large Array (VLA), and the NOrthern Extended Millimeter Array (NOEMA). These facilities allow for deep, very high resolution imaging of the dust, gas, star formation, and dynamics in extreme starbursts, unhindered by dust obscuration. Key questions can now be addressed, such as: What is the relationship between gas mass and star formation (i.e., the `star formation law')? What are the interstellar medium (ISM) physical conditions that drive the extreme star formation? What dominates the gas supply? What role does the local environment play (proto-cluster, group harassment)? What is the role of feedback in mediating galaxy formation, driven by AGN and/or starbursts? The BRI1202--0725 ($z\sim$4.7) and BRI1335--0417 ($z\sim$4.4) systems were among the first HyLIRG systems discovered at very high redshift (\citealt{irwi91,mcma94,omonA96}), and they remain two of the brightest unlensed submm sources known at $z > 4$.
These two systems are the archetypes for coeval extreme starbursts and luminous AGN within 1.4 Gyr of the Big Bang. We have undertaken an extensive study of these systems, using ALMA, the VLA, NOEMA, and other telescopes, to determine the dominant physical processes driving the extreme starbursts and their evolution. In this paper, we present our latest VLA observations of the CO($J=2\rightarrow1$) emission from these two systems, at a resolution as high as $0.15''=1\,$kpc. Imaging of low order CO emission is crucial to understand the distribution and dynamics of the cool molecular gas fueling star formation in the systems. We will assume ($\Omega_{\Lambda}$,$\Omega_m$,h)=(0.682,0.308,0.678) \citep{plan15} throughout. At this distance, 1 arcsecond corresponds to 6.63$\,$kpc at $z$=4.69 (BRI1202--0725) and 6.82$\,$kpc at $z$=4.41 (BRI1335--0417).
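The quoted angular scales follow directly from the adopted cosmology. The short script below is a hypothetical re-derivation (the function name and the simple trapezoidal integration are our own, not from the paper): it computes the comoving distance by integrating $1/E(z)$, converts to the angular diameter distance, and expresses it in kpc per arcsecond.

```python
import math

# Flat LambdaCDM with the paper's parameters (Omega_L, Omega_m, h) = (0.682, 0.308, 0.678)
C_KM_S = 299792.458   # speed of light [km/s]
H0 = 67.8             # Hubble constant [km/s/Mpc]
OM, OL = 0.308, 0.682

def kpc_per_arcsec(z, n=10000):
    """Proper transverse scale [kpc] subtended by 1 arcsec at redshift z."""
    e = lambda zp: math.sqrt(OM * (1.0 + zp)**3 + OL)
    dz = z / n
    # composite trapezoidal rule for the comoving-distance integral
    integral = sum(1.0 / e(i * dz) for i in range(1, n)) * dz
    integral += 0.5 * dz * (1.0 / e(0.0) + 1.0 / e(z))
    d_c = C_KM_S / H0 * integral        # comoving distance [Mpc]
    d_a = d_c / (1.0 + z)               # angular diameter distance [Mpc]
    return d_a * 1e3 * math.pi / (180.0 * 3600.0)  # Mpc -> kpc, arcsec -> rad
```

Evaluating at $z=4.69$ and $z=4.41$ reproduces the $\sim6.6$ and $\sim6.8\,$kpc per arcsecond scales quoted in the text.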
We have imaged the archetypal HyLIRGs BRI1202--0725 and BRI1335--0417 with the VLA B--configuration in the rest-frame 250$\,$GHz continuum and CO($J=2\rightarrow1$) at high resolution. These observations allow us to determine sizes for the 44$\,$GHz continuum and the CO emission on scales down to $\sim 1\,$kpc. The 44$\,$GHz continuum emission in all three sources appears extended on a scale of 1 to 2$\,$kpc, although only marginally so for the QSO host in BRI1202--0725. Based on radio through FIR SED fitting, the observed 44$\,$GHz continuum emission is thermal emission from cold dust \citep{wagg14}. For the BRI1202--0725 SMG, the 44$\,$GHz size roughly agrees with the nonthermal 1.4$\,$GHz size from the VLBI observations of \citet{momj05}, while the 44$\,$GHz size of BRI1335--0417 is about twice as large as the VLBI 1.4$\,$GHz extent of \citet{momj07}. Assuming these 44$\,$GHz continuum extents correspond to that of the starbursting regions, we derive SFR surface densities of $\Sigma_{SFR}=(4\pm3)\times10^3\,$M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$ for the BRI1202--0725 SMG and $\Sigma_{SFR}=(3\pm2)\,$M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$ for BRI1335--0417. While estimates of the Eddington fractions of these sources vary, evidence from one test suggests that both of these objects are possibly radiating above their stable limit. Using both the standard ULIRG conversion factor for CO luminosity to H$_2$ mass and the more flexible factor of \citet{nara12}, we derive surface densities for the gas mass for these systems, based on the CO($J=2\rightarrow1$) source sizes. Using metallicities of 1$\,$Z$_{\odot}$ and 0.02$\,$Z$_{\odot}$, the values span $10^{4-5}$ M$_\odot$ pc$^{-2}$. When plotted in a Kennicutt-Schmidt diagram, the low metallicity assumption places the HyLIRGs closer to the trend at lower SFR, but all three assumptions mark the HyLIRGs as strong starbursts, i.e. well above main sequence galaxies.
A possible limit on object metallicity was found by setting the flexible conversion factor to that of ULIRGs, yielding $Z\sim Z_{\odot}/3$. A tentative east-west linear structure is seen in the CO($J=2\rightarrow1$) image of the SMG in BRI1202--0725, possibly a disk of size $\sim 0.8''$. This molecular disk is consistent with the east-west velocity gradient seen in the [CII] ALMA observations \citep{carn13}. For the QSO host galaxy in BRI1202--0725, the CO($J=2\rightarrow1$) emission is clearly extended, although the signal-to-noise of the emission in the B--configuration provides only a lower limit to the size of $0.5'' (\sim 3.3\,$kpc). For BRI1335--0417, we confirm the very extended CO emission to the north of the QSO host galaxy, as seen in \citet{riec08}. This extended emission has a narrower velocity dispersion than that of the main galaxy. The negligible velocity offset with respect to the main galaxy and the lower velocity dispersion of this extended gas suggest that it may be a remnant tidal feature from a previous major merger of gas rich galaxies. This is based on radio observations, as past optical and submm observations lacked the necessary resolution to distinguish the northern extension. In all three sources, the extent of the 44$\,$GHz continuum emission appears smaller than that of the CO($J=2\rightarrow1$) line emission, i.e., the area of active star formation is smaller than that of its fuel supply. This suggests a varying level of excitation and star formation activity across each source. The variation can be seen in BRI1335--0417, which shows a southern core in 44$\,$GHz emission, but no northern extension. Since both are seen in CO emission, this suggests that the northern area is not starbursting. This size discrepancy between low order CO and star formation has been noted for other high redshift SFGs (star forming galaxies; e.g., as compiled by \citealt{spil15}).
Alternatively, the different sizes could simply be a result of low signal-to-noise continuum observations not detecting the lower optical depth outer sections of the star forming regions. Higher resolution, more sensitive ALMA observations of the dust continuum emission are planned. These should answer the question of the relative distributions of gas and star formation. We also detect CO($J=2\rightarrow1$) emission from Ly$\alpha$-2 in the BRI1202--0725 system. The total gas mass is $(3.2\pm0.5)\times (\alpha_{CO}/0.8)\times10^{10}\,$M$_{\odot}$, and the gas depletion timescale is $2\times 10^8\times (\alpha_{CO}/0.8)\,$yr, where $\alpha_{CO}$ has units of M$_{\odot}$K$^{-1}$ km$^{-1}$ s pc$^{-2}$. Even assuming a low value of $\alpha_{CO}$ (compared to $\alpha_{CO}\sim4$ for normal galaxies; \citealt{bola13}), this galaxy has a gas depletion timescale comparable to main sequence galaxies at low and high redshift (e.g., $7\times10^8\,$years for $z=1-3$ MS SFGs, \citealt{tacc13}), and not the extremely short timescales applicable to starbursts. \citet{klam04} suggested that the extreme aspects of the BRI1202--0725 system might relate to star formation induced by a strong radio jet from the QSO. While no such radio jet has yet been seen, the presence of Ly$\alpha$-2 in the direction of the outflow from the QSO seen in [CII] \citep{cari13} is circumstantially suggestive of a hydrodynamic interaction as well as a gravitational one. We have used our CO($J=2-1$) luminosities in conjunction with previous [CII] luminosities (\citealt{wagg12,wagg10,cari13}) and FIR luminosities (\citealt{carn13,cari02,will14}) to constrain the density and FUV radiation field of photodissociation regions (PDRs) in each source with the diagnostic plot of \citet{stac10}. Assuming thermalized ratios (L$_{CO(2-1)}$/L$_{CO(1-0)}$=4), this showed that PDRs in the three HyLIRGs have similar densities and radiation fields to local ULIRGs and $z>2.3$ objects.
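The quoted gas mass and depletion timescale for Ly$\alpha$-2 together imply a star formation rate of about $160\,$M$_{\odot}\,$yr$^{-1}$; this SFR is our inference from the quoted numbers, not a value stated in the text, and it is $\alpha_{CO}$-independent because both quantities scale linearly with $\alpha_{CO}$. A trivial consistency check:

```python
# Consistency check on the quoted Lya-2 numbers (M_gas and t_dep from the
# text for alpha_CO = 0.8; the ~160 Msun/yr SFR is our own inference).
def depletion_time_yr(m_gas_msun, sfr_msun_per_yr):
    """Gas depletion timescale t_dep = M_gas / SFR."""
    return m_gas_msun / sfr_msun_per_yr

M_GAS = 3.2e10              # Msun, for alpha_CO = 0.8
SFR_IMPLIED = M_GAS / 2e8   # Msun/yr, from t_dep = 2e8 yr
```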
When comparing the density and FUV level of PDRs in BRI1202--0725 Ly$\alpha$-2 to other objects, an ambiguity arises: they are either similar to normal galaxies in the level of FUV emission but denser, or similar to ULIRGs in density but with weaker radiation fields. Current size constraints do not allow for concrete conclusions here. Our data confirm that these HyLIRGs are extreme starbursts with short depletion timescales, as shown in the Kennicutt-Schmidt relation. They are consistent with models that show SMG formation in short time periods \citep{nara15}, although they are at slightly higher redshift than expected. The data for these two HyLIRG systems suggest that they represent two different stages in the evolution of extreme starbursts driven by major gas rich mergers in the early Universe. BRI1202--0725 appears to be a relatively early stage merger, with a number of distinct galaxies still observed in stars and gas. There are clear indications of strong gravitational interaction between the galaxies likely driving the extreme starbursts, as well as possible evidence for a strong QSO driven outflow assisting in quenching the star formation in the QSO host. BRI1335--0417 appears to be a later stage merger, with just one galaxy seen (the QSO host), plus what may be tidal remnants of the merger seen in the extended cold gas. The high luminosities, disrupted morphologies, evidence of gravitational interaction, and short gas depletion timescales of these objects suggest that they represent a transient, but highly star forming phase of early galaxy evolution. \vspace{2mm} G. C. J. is grateful for support from NRAO through the Grote Reber Doctoral Fellowship Program. R. G. M. acknowledges the UK Science and Technology Facilities Council (STFC). K. O. acknowledges the Kavli Institute Fellowship at the Kavli Institute for Cosmology in the University of Cambridge supported by the Kavli Foundation.
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. We thank all those involved in the VLA project for making these observations possible (project code 13A-012).
We investigate the origin of the evolution of the population-averaged size of quenched galaxies (QGs) through a spectroscopic analysis of their stellar ages. The two most favoured scenarios for this evolution are either the size growth of individual galaxies through a sequence of dry minor merger events, or the addition of larger, newly quenched galaxies to the pre-existing population (i.e., a progenitor bias effect). We use the 20k zCOSMOS-bright spectroscopic survey to select \textit{bona fide} quiescent galaxies at $0.2<z<0.8$. We stack their spectra in bins of redshift, stellar mass and size to compute stellar population parameters in these bins through fits to the rest-frame optical spectra and through Lick spectral indices. We confirm a change of behaviour in the size-age relation below and above the $\sim10^{11} \mathrm{M}_\odot$ stellar mass scale: in our $10.5 < \log \mathrm{M_*/M_\odot} < 11$ mass bin, over the entire redshift window, the stellar populations of the largest galaxies are systematically younger than those of their smaller counterparts, pointing at progenitor bias as the main driver of the observed average size evolution at sub-$10^{11} \mathrm{M}_\odot$ masses. In contrast, at higher masses, there is no clear trend in age as a function of galaxy size, supporting a substantial role of dry mergers in increasing the sizes of these most massive QGs with cosmic time. Within the errors, the [$\alpha$/Fe] abundance ratios of QGs are $(i)$ above solar over the entire redshift range of our analysis, hinting at universally short timescales for the buildup of the stellar populations of QGs, and $(ii)$ similar at all masses and sizes, suggesting similarly short timescales for the whole QG population and strengthening the role of mergers in the buildup of the most massive QGs in the Universe.
The observed evolution with cosmic time in the population-averaged size of Quenched Galaxies (QGs, here often also referred to as `passive' or `quiescent' galaxies, as opposed to `star-forming' galaxies) at fixed stellar mass has received a lot of attention in the past decade (e.g., \citealt{daddi2005, trujillo2007, cimatti2008, vandokkum2008, cassata2011, carollo2013}, hereafter C13, \citealt{poggianti2013, vanderwel2014}). The median half-light radius of QGs is about a factor $\sim$3--5 larger in the local universe than at redshift $z\sim 2$ \citep{newman2012}. The size growth scales roughly as $(1+z)^{-1}$, comparable to, though somewhat steeper than, the rate of growth of the sizes of dark matter halos. This has sparked an intense debate concerning the physical mechanism behind this size evolution. There are two main scenarios to which the evolution of the size-mass relation has been ascribed: the growth of individual QGs through a series of dry minor merger events, or the continuous addition of larger, recently quenched, galaxies at later epochs. The latter effect is an example of so-called `progenitor bias', in the sense that the population changes because of a change in membership rather than through changes in individual members (e.g., \citealt{franx2008, newman2012}; C13; \citealt{poggianti2013, belli2015}). In the individual growth scenario, the compact cores of QGs would remain constant in mass within a few kiloparsecs, but would accrete extended stellar envelopes around them (\citealt{cimatti2008,hopkins2009, naab2009, cappellari2013a}). In contrast to major mergers, minor gas-poor (dubbed `dry') mergers could have a key role: for a given amount of added mass, mergers with higher mass ratios (i.e. minor mergers) result in a larger size increase (\citealt{villumsen1983, hilz2012}; see also \citealt{taylor2010, feldmann2010, szomoru2011, mclure2013, vanderwel2014}).
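A minimal check of the quoted scaling: if the median size at fixed mass evolves as $R_e \propto (1+z)^{-1}$, the growth factor between two epochs follows directly.

```python
# Minimal check of the quoted scaling: if the median size at fixed mass
# evolves as R_e ~ (1+z)^-1, the growth factor between two epochs is
# (1 + z_early) / (1 + z_late).
def size_growth_factor(z_early, z_late=0.0):
    return (1.0 + z_early) / (1.0 + z_late)

print(size_growth_factor(2.0))   # -> 3.0, within the quoted factor ~3-5
```

The factor of 3 from $z=2$ to today sits at the lower end of the observed factor $\sim$3--5, consistent with the quoted scaling being somewhat shallower than some measurements.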
This scheme would require $\sim 10$ dry mergers with $\sim 1:10$ mass ratio to account for the observed growth in size \citep{naab2009,vandesande2013}. The mergers are required to be dry since `wet' mergers, involving gas-rich galaxies, are expected to lead to central star formation and therefore to a reduction of the half-light radius of the primary galaxy. At a mass ratio of 1:10, the companion of a $10^{11}\mathrm{M}_\odot$ galaxy is a $10^{10}\mathrm{M}_\odot$ galaxy. Galaxies of this mass are generally gas-rich systems (e.g., \citealt{santini2014, genzel2015}). Therefore, the sequence and number of the required dry mergers, without a substantial contribution of wet mergers, is quite problematic, an aspect which has been largely ignored so far. Regardless of whether the merger scenario can explain the observed effect, the possible effects of progenitor bias must in any case be taken into consideration. An implicit assumption of the individual size growth view is that galaxies which are being quenched at different epochs have similar properties. If this is not the case, a progenitor bias effect could arise. In the context of the evolution of the average size-mass relation, the addition of newly quenched galaxies to a pre-existing population of QGs could lead to an observed growth of the average size of the population even if individual early-type galaxies do not grow at all. This is particularly important in light of the observed increase by about one order of magnitude of the comoving number density of massive (i.e., $\gtrsim10^{11}\mathrm{M}_\odot$) QGs from $z=2$ to the present epoch (e.g. \citealt{ilbert2010, cassata2013, muzzin2013}). Tracing the evolution of the number density of QGs of different sizes offers clues towards discriminating between the two scenarios.
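The efficiency of minor versus major mergers at growing sizes can be sketched with the simple energy-conservation argument of Naab et al. (2009), cited above; the formula below is that virial estimate, not the authors' full analysis.

```python
# Hedged sketch of the virial/energy-conservation argument (Naab et al.
# 2009): accreting a mass fraction eta = M_added/M_initial of stars with
# mean squared velocity eps = <v_a^2>/<v_i^2> relative to the host changes
# the gravitational radius by r_f/r_i = (1 + eta)^2 / (1 + eta * eps).
def size_ratio(eta, eps):
    return (1.0 + eta) ** 2 / (1.0 + eta * eps)

# Doubling the mass in one equal-mass ('major') merger: size doubles.
print(size_ratio(eta=1.0, eps=1.0))   # -> 2.0
# Doubling the mass through many loosely bound 1:10 ('minor') mergers
# (eps -> 0): the size roughly quadruples for the same added mass.
print(size_ratio(eta=1.0, eps=0.0))   # -> 4.0
```

This is why, for a given amount of added mass, minor mergers produce roughly twice the size growth of a single major merger.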
Different studies agree well on the evolution of the number densities of the smallest and densest QGs at stellar masses above $\sim10^{11} \mathrm{M}_\odot$, where a steady decrease is observed with cosmic time. At lower masses, however, different authors report different results. For example, C13 did not find any change in the number density of their `compact' galaxies at masses $10.5 < \log\mathrm{M}_*/\mathrm{M}_\odot < 11$; they report instead a substantial increase in the number density of large QGs. The constancy of the compact population and the increase in the large population led those authors to advance the progenitor bias interpretation. At similar $10.5 < \log\mathrm{M}_*/\mathrm{M}_\odot < 11$ masses, however, \cite{vanderwel2014} report a strong decrease in the number density of compact QGs since $z=1.5$, and therefore interpret the observed disappearance of these objects at lower redshifts as an indication of a growth in size of individual QGs. In comparing results from different studies, it is important to note that the adopted definition of stellar mass is an important factor when discussing the evolution of the size-mass relation. In C13, and also in this paper, we define the stellar masses to be the integral of the star formation rate (SFR). These are about 0.2 dex larger than the commonly used definition which subtracts the mass returned to the interstellar medium, i.e. the mass of surviving stars plus compact stellar remnants. The former has the feature of remaining constant after the galaxy ceases star formation, whereas the latter continually decreases. Thus, when comparing the properties of quenched galaxies in a given mass bin across cosmic time, one should clearly use the former.
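The $\sim$0.2 dex offset between the two mass definitions follows directly from the stellar return fraction. A hedged illustration, where the return fraction $R\sim0.4$ is an assumed, Chabrier-like IMF value not specified in the text:

```python
import math

# Hedged illustration of the ~0.2 dex offset between the two stellar mass
# definitions. With M_formed the time-integral of the SFR and
# M_surviving = (1 - R) * M_formed, the offset in dex is log10(1/(1-R)).
# The return fraction R ~ 0.4 is an assumed, Chabrier-like IMF value.
R = 0.40
offset_dex = math.log10(1.0 / (1.0 - R))
print(f"offset ~ {offset_dex:.2f} dex")   # ~0.22 dex, i.e. about 0.2 dex
```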
This effect explains part of the discrepancies found in the different number density analyses: high-redshift galaxies are effectively assigned a spuriously high mass, which makes them appear too small and inflates their apparent number density at high redshift. Another factor that leads to different estimates for the evolution of the number densities of small QGs is the definition of the bins in which the densities are computed, in particular whether a single size threshold is used to compare number densities at different redshifts, or whether the bins are defined along the size-mass relation at each given redshift (which, due to its evolution, implies a comparison between populations of different sizes). Number densities alone, however, are not conclusive. C13 and \citet{damjanov2015a} agree that, at masses below the $\sim10^{11} \mathrm{M}_\odot$ scale, the number densities of compact QGs have remained constant since at least $z \sim 1$; these authors reached however different conclusions on the origin of this constancy. \citet{damjanov2015a} proposed that the compact QG population is continuously replenished with younger members, so as to compensate for the shift towards larger sizes of individual galaxies due to mergers. In contrast, C13 argued that the compact population remains stable since $z\sim1$ and that the newly-accreted members of the population have increasingly larger sizes at steadily lower redshifts. These two interpretations can be easily tested through the average ages of the populations involved. If the increase of the median size is due to the addition of newly-quenched galaxies that are progressively larger towards lower redshifts, then, at any epoch, the stellar populations of larger QGs should be \emph{younger} than those of smaller QGs of similar mass.
On the other hand, if individual QGs grow their sizes through mergers and the number density of compact QGs remains more or less constant due to the continuous production of compact QGs, then, at any epoch, \emph{smaller} QGs should be younger on average than their larger relatives of similar mass. Therefore, the stellar ages of the galaxies offer a powerful discriminant between these two scenarios (see e.g. also \citealt{onodera2012, belli2014b, keating2015, yano2016}). C13 studied the colors of compact and large $<10^{11} \mathrm{M}_\odot$ QGs at different redshifts and found that, at any epoch, larger QGs appear to be bluer than their smaller counterparts; it is this result that led those authors to conclude that, at these masses, the stellar populations of larger QGs are younger than those of smaller QGs, and thus that the evolution in size of the whole population is to a large extent ascribable to the addition of recently quenched, larger QGs. Galaxies quenched at later epochs are indeed expected to have larger sizes than the ones quenched earlier, as (progenitor) star-forming galaxies also experience an evolution in their average size with cosmic time (e.g., \citealt{newman2012}). Stellar ages determined on the basis of a single rest-frame optical color (as done in C13), however, heavily suffer from the well-known degeneracy between age, metallicity and also, possibly most problematically, dust effects \citep{worthey1994}. We therefore push the analysis of the stellar ages of small and large QGs below and above the evidently important mass threshold of $10^{11} \mathrm{M}_\odot$ using more robust spectroscopic measurements of stellar population properties.
Our primary goal is to test whether and to what extent progenitor bias is driving the increase of the average size of passive galaxies as a function of stellar mass; specifically, we use two mass bins with boundaries $10.5 < \log\mathrm{M}_*/\mathrm{M}_\odot < 11$ and $11 < \log\mathrm{M}_*/\mathrm{M}_\odot < 11.5$. We also use the spectroscopic diagnostics to study the abundance ratios of different elements in an attempt to constrain the timescales of the buildup of the stellar populations of quenched galaxies of different masses and sizes. Even spectroscopically, however, residual degeneracies between the effects of age and metallicity continue to afflict age determinations, which are therefore not straightforward to obtain. In the last few years, a number of `full spectral fitting' codes (e.g., \citealp{ocvirk2006b, ocvirk2006a, koleva2009, cappellari2004}, STECKMAP, ULySS and pPXF, respectively) have been developed in order to address this issue. In the full spectral fitting technique, a set of templates is used to fit the observed spectrum. The most recent codes do not rely on the overall shape of the continuum, thereby avoiding common problems with flux calibration and extinction; instead, a polynomial function is used to absorb the continuum shape. Full spectrum fitting codes are good at handling the impact of the age-metallicity degeneracy as they maximize the information used from the whole observed spectrum (\citealt{koleva2008, sanchezblazquez2011, beasley2015, ruizlara2015}). We therefore adopt this methodology to derive our fiducial stellar population ages in this paper. Besides the full spectral fitting analysis, we have however also used the Lick line-strength indices to get independent estimates of ages and metallicities.
The Lick system of spectral line indices is a commonly used method to determine ages and metallicities of stellar populations (e.g., \citealt[][]{burstein1984, gonzalez1993, carollo1994, worthey1994, worthey1997, trager1998, trager2000, trager2005, korn2005, poggianti2001, thomas2003a, thomas2003b, schiavon2007, thomas2011, onodera2012, onodera2014}). The system consists of a set of 25 optical absorption line indices, spanning a wavelength range from $\sim$ 4080 to $\sim$ 6400 \AA{}. The absorption features are particularly useful because they are largely insensitive to dust attenuation \citep{macarthur2005}. Even so, age-dating based on the Lick indices is not free from degeneracy effects. Its main pitfall is that most indices are sensitive to all the basic population parameters, namely age, metallicity and the ratio of $\alpha$-elements to iron. Correlations among the errors can generate spurious correlations or anticorrelations \citep{kuntschner2001,thomas2005,renzini2006}. For example, an underestimate in the strength of a Balmer line (mainly sensitive to age), due e.g. to partial filling in by an emission line, may lead to an overestimate of the age; with an overestimated age, the procedure is then forced to underestimate the metallicity in order to match the strength of the metal lines. In this way, a spurious age-metallicity anti-correlation can be generated. A strength of our analysis is thus the attempt to mitigate the intrinsic degeneracies by using and comparing, for cross-validation, both methodologies, i.e., the full spectral fitting approach and the Lick indices approach. We include in our study a brief investigation of the $\alpha$-element to Fe-element abundance ratios (i.e., [$\alpha$/Fe]) in QGs of different masses and sizes.
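The mechanics of a Lick-style index measurement can be sketched in a few lines: a straight pseudo-continuum is interpolated between the mean fluxes of two sidebands, and the equivalent width is integrated over the central feature band. The band limits below are illustrative, not the exact Lick definitions, and the spectrum is a toy Gaussian absorption line:

```python
import math

def lick_index_ew(wave, flux, blue, feat, red):
    """Equivalent width (Angstrom) of an absorption feature, Lick-style:
    a straight pseudo-continuum is interpolated between the mean fluxes of
    the blue and red sidebands, and EW = integral of (1 - F/F_c) over the
    central feature band (trapezoidal rule). Bands are (lo, hi) tuples."""
    def band_mean(lo, hi):
        pts = [(w, f) for w, f in zip(wave, flux) if lo <= w <= hi]
        return (sum(w for w, _ in pts) / len(pts),
                sum(f for _, f in pts) / len(pts))

    wb, fb = band_mean(*blue)
    wr, fr = band_mean(*red)
    seg = [(w, f) for w, f in zip(wave, flux) if feat[0] <= w <= feat[1]]
    # integrand: 1 - F/F_c with a linear pseudo-continuum F_c(w)
    vals = [1.0 - f / (fb + (fr - fb) * (w - wb) / (wr - wb)) for w, f in seg]
    ws = [w for w, _ in seg]
    return sum(0.5 * (vals[i] + vals[i + 1]) * (ws[i + 1] - ws[i])
               for i in range(len(ws) - 1))

# Toy spectrum: flat continuum with a Gaussian absorption line near H-beta.
wave = [4800.0 + 0.1 * i for i in range(1201)]
flux = [1.0 - 0.5 * math.exp(-0.5 * ((w - 4861.0) / 3.0) ** 2) for w in wave]
ew = lick_index_ew(wave, flux,
                   blue=(4815.0, 4845.0), feat=(4848.0, 4877.0),
                   red=(4880.0, 4910.0))
print(f"toy H-beta-like EW ~ {ew:.2f} A")   # ~3.76 A (= 0.5*3*sqrt(2*pi))
```

The example also makes the pitfall discussed above concrete: any emission infill reducing the Gaussian depth would lower the measured EW, and hence bias the inferred age upward.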
The [$\alpha$/Fe] ratio is a well-known diagnostic for constraining formation timescales (\citealt{matteucci1986, pagel1995}), since $\alpha$-elements such as O, Ne, Mg, Si, S, Ar, Ca, and Ti (i.e., nuclei that are built up with $\alpha$-particles) are delivered mainly by core collapse (CC) supernova explosions of massive stars, and thus on much shorter timescales than elements such as Fe and Cr, which come predominantly from the delayed explosions of Type Ia supernovae (e.g., \citealt{nomoto1984, woosley1995, thielemann1996}). Enhanced values of the [$\alpha$/Fe] ratio indicate a short formation timescale. \\ The paper is organized as follows. In Section \ref{dataset} we describe the data set, the zCOSMOS-bright 20k catalog, and its features. Section \ref{sampselmeasurements} summarizes the basic measurements and presents the spectroscopic sample selection in detail. Section \ref{analysis} presents the steps we took in the course of our analysis. Section \ref{sizemass} describes the binning in mass, size and redshift, and Section \ref{stacking} describes the stacking procedure that was used to obtain average spectra as a function of redshift, mass and size. The fitting used to correct for the emission line contribution is described in Section \ref{emcorrection}. Section \ref{fullspectralfitting} describes how we derived ages with full spectral fitting using \texttt{pPXF}, and Section \ref{lickmeasurements} describes how we measured the Lick strengths and derived the stellar population parameters from them. In Section \ref{results} we present our results, followed by a discussion in Section \ref{discussion}. In Section \ref{conclusion} we summarize our paper and present the conclusions.\\ Throughout this paper we adopt a $\Lambda$-dominated Cold Dark Matter ($\Lambda$CDM) cosmology, with $\Omega_m\,=\,0.3$, $\Omega_\Lambda\,=\,0.7$ and $H_0\,=\,70$ km\,s$^{-1}\,$Mpc$^{-1}$. All magnitudes are given in the AB system.
We use `dex' to refer to the anti-logarithm, so that 0.3 dex represents a factor of 2.
The observed average size of QGs is $\sim3-5$ times larger today than at $z\sim2$. Two main scenarios have been proposed to explain this evolution: the size growth of individual galaxies, or the progenitor bias introduced if newly formed members of the population are larger than the pre-existing members. In this work, we measure the stellar ages of QGs in order to distinguish between these two scenarios, which make quite different predictions for the variation of stellar population age with size. If the driver of this evolution is the addition of large newly-quenched objects, then larger QGs will be younger. If the driver is the size growth of individual galaxies, with small galaxies being replaced by newly quenched objects, then the larger galaxies will be older. In light of the fact that, at any epoch and stellar mass, star-forming galaxies are on average larger than passive galaxies, purity (and completeness) of the QG samples is clearly crucial when attempting to measure their stellar ages (see e.g., \citealt{keating2015}). The polluting presence of star-forming galaxies in QG samples would bias the latter towards larger sizes, while also biasing their age estimates towards younger values. In this work we have selected our QG samples using galaxies securely identified as quiescent. Starting from the 20k zCOSMOS-bright catalog, we selected galaxies with absent, or very weak, emission lines. We stacked the spectra in bins defined in size, stellar mass and redshift, in order to study the average stellar population properties of the sample galaxies. Two binning schemes were used in size, a relative one normalised to the evolving mean mass-size relation, and an absolute one constructed at fixed physical size. We then used \texttt{pPXF} to derive best-fit ages from BC03 solar SSP templates.
To further check our age results, we also computed absorption line strengths following the wavelength definitions of the features and pseudo-continua of the Lick system of spectral lines, and derived values for age, [Z/H] and [$\alpha$/Fe] by comparing our results to the \cite{thomas2011} models. Our robust spectroscopic selection of quenched galaxies is much cleaner than the more frequently adopted color-color selections; in addition, we carefully checked the spectrum of each object by visual inspection, to ensure the absence of star-formation tracers. We are thus confident that contamination by star-forming galaxies is negligible in the results that we have discussed above. Reassuringly, the average age of QGs (not discriminating in size) increases with cosmic time. Turning to the ages as a function of size, we find that the $10^{11} \mathrm{M}_\odot$ mass scale is a `threshold' above and below which the size-age relation changes behavior, as already pointed out in C13. Below $10^{11}{\mathrm{M}}_\odot$, larger galaxies have systematically younger ages than smaller ones. The $\Delta$age between small and large galaxies becomes more significant towards lower redshift. The $\Delta$age from the highest to the lowest redshift bin in the small-size QG population is in good agreement with passive evolution of its stellar populations. The younger ages of the larger galaxies at each redshift argue that newly quenched objects are systematically larger at later epochs. This trend is visible in both of our size binning schemes. We conclude that progenitor bias is a major, and possibly the dominant, component of the observed evolution in the average sizes of QGs at these masses. Above $10^{11}{\mathrm{M}}_\odot$, where dry mergers are expected to play a major role in imprinting the well-known `dissipationless' features that are observed at $z=0$ in this ultra-massive population, there is indeed no clear trend between ages and sizes.
Size growth of individual galaxies through dry mergers is the most likely channel for the observed growth of the average size of the QG population at this top-mass end. The confirmation of a `transition' mass around $10^{11}{\mathrm{M}}_\odot$ for the size-age behaviour -- and thus for the dominant role of progenitor bias at low masses and of dry mergers at high masses in driving the observed average size growth of QGs with time -- highlights the fundamental importance of sample selection and of tuning the interpretation of the data to the specific sample selection. For example, \citet{zanella2016} report that the size-age relation in their sample of nominally $\mathrm{M} >4.5\times10^{10}{\mathrm{M}}_\odot$ QGs at $z \sim 1.5$ supports the merger interpretation. A quick inspection of their analysis shows however that $\sim80\%$ of their galaxies actually have masses above $10^{11} \mathrm{M}_\odot$. Therefore, their result is better regarded as holding for this top-mass-end population, which brings it into agreement with our work. Interestingly, the $\alpha$-to-iron abundance ratio of the stellar populations of QGs at all masses within the $10^{10.5-11.5} \mathrm{M}_\odot$ window has been rather constant since $z=0.6$. This ratio should reflect the formation timescales of the stellar populations in these systems. The constancy of the measured $[\alpha/\mathrm{Fe}]$ ratio thus suggests similar such timescales, independent of galaxy size, across the whole $10^{10.5-11.5} \mathrm{M}_\odot$ mass range for the galaxy population that had already quenched by our lowest redshift bin at $z \sim 0.3$, consistent with the idea that the most massive galaxies above $10^{11} \mathrm{M}_\odot$ are formed by mergers of lower mass galaxies.
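The link between $\alpha$-enhancement and formation timescale can be made quantitative with the approximate empirical calibration of Thomas et al. (2005); this is an illustrative relation quoted from that work, not a fit to the present data:

```python
import math

# Hedged illustration, using the approximate empirical calibration of
# Thomas et al. (2005): [alpha/Fe] ~ 1/5 - (1/6) * log10(Delta_t / Gyr),
# relating the alpha-enhancement to the star formation timescale Delta_t.
def alpha_fe(delta_t_gyr):
    return 0.2 - (1.0 / 6.0) * math.log10(delta_t_gyr)

print(f"{alpha_fe(1.0):.2f}")    # Delta_t = 1 Gyr    -> [alpha/Fe] ~ 0.20
print(f"{alpha_fe(0.25):.2f}")   # Delta_t = 0.25 Gyr -> [alpha/Fe] ~ 0.30
```

In this calibration, above-solar [$\alpha$/Fe] values correspond to star formation timescales of order a Gyr or less, consistent with the short buildup timescales argued for above.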
A new covariant generalization of Einstein's general relativity is developed which allows the existence of a term proportional to $T_{\alpha\beta}T^{\alpha\beta}$ in the action functional of the theory ($T_{\alpha\beta}$ is the energy-momentum tensor). Consequently, the relevant field equations differ from those of general relativity only in the presence of matter sources. In the case of a charged black hole, we find exact solutions for the field equations. Applying this theory to a homogeneous and isotropic space-time, we find that there is a maximum energy density $\rho_{\text{max}}$, and correspondingly a minimum length $a_{\text{min}}$, in the early universe. This means that there is a bounce at early times, so that the theory avoids an early time singularity. Moreover, we show that this theory possesses a true sequence of cosmological eras. Finally, we argue that although in the context of the standard cosmological model the cosmological constant $\Lambda$ does not play any important role at early times and becomes important only after the matter dominated era, in this theory the ``repulsive'' nature of the cosmological constant plays a crucial role at early times in resolving the singularity.
Attempts to modify gravitational theories date back to the late 1800s, when efforts modeled on Maxwell's electrodynamics were made to modify Newtonian gravity. Since Einstein developed his general relativity (GR) in 1915, various attempts with different motivations have been made to generalize it \cite{will}. Some motivations are theoretical in character and some observational. Einstein himself modified the original field equations by adding a term including the cosmological constant. He also proposed the Palatini formulation of GR \cite{ae}. Eddington proposed an interesting alternative to GR in 1924 \cite{edd}. Brans-Dicke theory \cite{bd} and the Einstein-Cartan theory \cite{cartan} are two other examples of a very broad variety of alternatives. Currently, observations attributed to dark matter and dark energy provide one of the main motivations for extending GR (for a review of modified gravity theories see e.g. \cite{capo}). One of the intriguing enigmas in GR is that it predicts the existence of a space-time singularity at some finite time in the past. However, GR itself is no longer valid at the singularity because of the expected quantum effects. On the other hand, a precise formulation of quantum gravity is still lacking. Nevertheless, there are some classical models in which this kind of singularity can be resolved. For example, Eddington-inspired Born-Infeld (EiBI) theory is a modified theory of gravity which is equivalent to GR only in vacuum and can resolve the singularity \cite{EiBI}. For other examples, and also for other motivations behind this kind of modification, we refer the reader to the review article \cite{bounce}. Here we propose a new model which, despite its simple appearance, possesses interesting features.
Let us start with the following action \begin{equation} S=\frac{1}{2\kappa}\int \sqrt{-g}\left(R-2\Lambda-\eta \mathbf{T}^2\right)d^4x+S_M \label{action} \end{equation} where $\mathbf{T}^2=T_{\alpha\beta}T^{\alpha\beta}$, $T_{\alpha\beta}$ is the energy-momentum tensor, $R$ is the Ricci scalar, $\kappa=8\pi G$, $\Lambda$ is the cosmological constant and $S_M$ is the matter action. Also, $\eta$ is a coupling constant whose value can be constrained by observations. For a somewhat similar approach we refer the reader to \cite{mahmet}. In general $\eta$ can be negative or positive. However, as we will show in this paper, a positive $\eta$ leads to a bounce in the early universe and to satisfactory cosmological behavior after the bounce. This bounce avoids the early time singularity. On the other hand, as we will see in section \ref{cosmology}, a negative $\eta$ leads to unsatisfactory cosmological behavior; more specifically, there is no stable late time accelerated phase in the case of $\eta<0$. Therefore our main purpose in this paper is to study the $\eta>0$ case. The situation here is somewhat reminiscent of the appearance of the cosmological constant in the standard cosmological model, where $\Lambda$ is postulated to be positive. A negative cosmological constant leads to completely different consequences which are inconsistent with the cosmological observations: a positive $\Lambda$ accelerates the universe while a negative $\Lambda$ decelerates it. The standard Einstein-Hilbert action can be recovered by setting $\eta=0$. Because of the correction term $\mathbf{T}^2$, we refer to this theory as Energy-Momentum Squared Gravity (EMSG). Throughout the paper, we use units with $c=1$ and adopt the metric signature $(-,+,+,+)$. It is natural to expect this correction term to be important only in high energy regimes such as the early universe or within black holes.
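As a quick consistency check of the notation (assuming, as in the cosmological application considered later, a perfect-fluid source with $T^{\alpha}{}_{\beta}=\mathrm{diag}(-\rho,p,p,p)$), the correction scalar evaluates to

```latex
\begin{equation}
\mathbf{T}^2 = T_{\alpha\beta}T^{\alpha\beta} = \rho^2 + 3p^2 ,
\end{equation}
```

so the new term in the action grows quadratically with the energy density and pressure, and becomes negligible at low densities.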
Therefore there are no departures from GR in the low curvature regime. The outline of the paper is as follows. In section \ref{fe} we derive the field equations of EMSG by varying the action \eqref{action} with respect to the metric. In section \ref{cosmology} we derive the modified Friedmann equations and show that there is a maximum energy density and a minimum length in the early universe (when $\eta>0$). Also, using the dynamical system method, we study the cosmological consequences of EMSG; more specifically, we show that this theory possesses a true sequence of cosmological epochs. In section \ref{cbh}, we find an exact charged black hole solution in EMSG. Finally, conclusions are drawn in section \ref{conc}.
\label{conc} In this paper a new covariant generalization of GR is developed. This theory allows the existence of a term proportional to $T_{\alpha\beta}T^{\alpha\beta}$ in the action; we therefore refer to it as Energy-Momentum Squared Gravity (EMSG). EMSG differs from GR only in the presence of matter sources. In this theory the correction term can be defined only when the Lagrangian density of the matter content is specified. Therefore, in order to find the field equations, one must first vary the matter action with respect to the gravitational degrees of freedom. Although this is not the case in GR, it is a common feature of theories which introduce correction terms including the energy-momentum tensor in the generic action. Applying this theory to a homogeneous and isotropic space-time, we found that there is a maximum energy density $\rho_{\text{max}}$, and correspondingly a minimum length scale $a_{\text{min}}$, in the early universe. In other words, we showed that there is a bounce at early times and consequently the early time singularity is avoided. We found the exact value of $\rho_{\text{max}}$, and estimated the minimum value of the cosmic scale factor. Moreover, the dynamical system method has been used to investigate the cosmological behavior of EMSG. It turned out that EMSG possesses a true sequence of cosmological eras (or fixed points). Compared to the $\Lambda$CDM model, there is an extra duty for the cosmological constant in this theory: a positive $\Lambda$ is necessary for the existence of a regular bounce in the early universe. Also, an exact solution for a charged black hole has been found. We recall that the Schwarzschild and Kerr metrics are also solutions of the EMSG field equations. However, the charged black hole solution in EMSG is different from the standard Reissner-Nordstr\"{o}m space-time.
As a further study, it is necessary to check the existence of stable compact stars in EMSG; for such a study in the context of EiBI see \cite{compact}. It is also necessary to investigate the consequences of the rapid decrease of $\Omega_{\text{r}}$ and the accelerated expansion right after the bounce. Finally, one may expect quantum effects to become important at ultra-short distances and ultra-high energy densities. In order to avoid these effects one may require $\rho_{\text{max}} < \rho_{\text{p}}$ and $a_{\text{min}}> l_{\text{p}}$, where $\rho_{\text{p}}$ is the Planck density and $l_{\text{p}}$ the Planck length. Using the current value of the radiation energy density and scaling $a_0=1$, one can easily show that if $\eta> \hbar G^3$ then both conditions are satisfied. If this constraint is consistent with the cosmological observations, then, in EMSG, the universe may not enter a quantum era during its evolution.
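For concreteness, the Planck-scale quantities entering the conditions $\rho_{\text{max}} < \rho_{\text{p}}$ and $a_{\text{min}}> l_{\text{p}}$ can be evaluated numerically (here in SI units, using standard values of the fundamental constants, rather than the $c=1$ units of the text):

```python
import math

# Numerical values of the Planck length and Planck density that enter the
# rho_max < rho_p and a_min > l_p requirements (SI units; the text itself
# works in c = 1 units, where the condition reads eta > hbar * G^3).
hbar = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s

l_p = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
rho_p = c**5 / (hbar * G**2)       # Planck density, ~5.2e96 kg/m^3
print(f"l_p ~ {l_p:.2e} m, rho_p ~ {rho_p:.2e} kg/m^3")
```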
M87 is arguably the best supermassive black hole (BH) for exploring jet and/or accretion physics, due to its proximity and the wealth of high-resolution multi-waveband observations. We model the multi-wavelength spectral energy distribution (SED) of the M87 core observed at a scale of 0.4 arcsec ($\sim 10^5R_{\rm g}$, where $R_{\rm g}$ is the gravitational radius), as recently presented by Prieto et al. Similar to Sgr A*, we find that the millimeter bump observed by the Atacama Large Millimeter/submillimeter Array (ALMA) can be modeled by the synchrotron emission of the thermal electrons in an advection dominated accretion flow (ADAF), while the low-frequency radio emission and X-ray emission may dominantly come from the jet. The millimeter radiation from the ADAF comes dominantly from the region within $10R_{\rm g}$, which is roughly consistent with recent very long baseline interferometry observations at 230\,GHz. We further calculate the Faraday rotation measure (RM) from both the ADAF and jet models, and find that the RM predicted by the ADAF is roughly consistent with the measured value, while the RM predicted by the jet is much higher if the jet velocity close to the BH is low or moderate (e.g., $v_{\rm jet}\lesssim0.6\,c$). With the constraints from the SED modeling and the RM, we find that the accretion rate close to the BH horizon is $\sim (0.2-1)\times10^{-3}\msun \rm yr^{-1}\ll\dot{\it M}_{\rm B}\sim 0.2\it \msun \rm yr^{-1}$ ($\dot{M}_{\rm B}$ is the Bondi accretion rate), and the electron density profile in the accretion flow, $n_{\rm e}\propto r^{\sim -1}$, is consistent with that determined from X-ray observations inside the Bondi radius and with recent numerical simulations.
The giant radio galaxy M87 is one of the best-known radio-loud low-luminosity active galactic nuclei (AGNs). It is an excellent laboratory for investigating accretion and jet physics because of its proximity, with a distance of $D=16.7\pm0.6$ Mpc \citep[]{jord05,blak09}, and its large estimated black hole (BH) mass of $3-6.6\times10^9 \msun$ \citep{macc97,gebh11,wals13}. The bolometric luminosity of the core is estimated to be $L_{\rm bol}\sim2.7\times10^{42} \ergs \sim 3.6\times10^{-6} L_{\rm Edd}$ \citep[$L_{\rm Edd}$ is the Eddington luminosity,][]{pri16}, which is several orders of magnitude less than those of Seyferts and quasars. The quite low Eddington ratio of M87 suggests that it most probably accretes through a radiatively inefficient accretion flow \citep[see][for a recent review and references therein]{yn14}. Recent high-spatial-resolution $Chandra$ X-ray observations have resolved the Bondi radius, $R_{\rm Bondi} \approx$ 0.2\ kpc $\approx 8\times 10^{5} R_{\rm g}$, where $R_{\rm g}=GM_{\rm BH}/c^{2}$ is the gravitational radius \citep[]{russ15}. In combination with the inferred gas density of about 0.3 $\rm cm^{-3}$ at the Bondi radius, the Bondi accretion rate is estimated to be $\dot{M}_{\rm B} \approx 0.2 M_\odot \rm yr^{-1}$ \citep[e.g.,][]{russ15}, which indicates that either the radiative efficiency of the accretion flow is very low ($\eta\sim L_{\rm bol}/\dot{M}_{\rm B}c^2 \approx 10^{-4}$), or most of the matter at the Bondi radius is not captured by the BH, or both. The Galactic center BH (Sgr A*) and the supermassive BH in the center of the Virgo cluster (M87) are the two largest BHs on the sky, with putative event horizons subtending $\sim$ 53 and 38 microarcseconds ($\mu$as), respectively \citep[e.g.,][]{rd15}.
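The quoted radiative efficiency is simple bookkeeping with the numbers above; a minimal sketch (Python, cgs units) confirms $\eta\sim L_{\rm bol}/\dot{M}_{\rm B}c^{2}\approx 10^{-4}$:

```python
M_sun = 1.989e33           # g
yr = 3.156e7               # s
c = 2.998e10               # cm/s

L_bol = 2.7e42             # erg/s, core bolometric luminosity quoted above
Mdot_B = 0.2 * M_sun / yr  # Bondi accretion rate, converted to g/s

eta = L_bol / (Mdot_B * c**2)  # implied radiative efficiency
print(f"eta ~ {eta:.1e}")      # ~2e-4, i.e. of order 1e-4
```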
The Event Horizon Telescope (EHT), a planned Earth-sized array at millimeter (mm) and submillimeter (submm) wavebands, provides well-matched horizon-scale resolution for Sgr A* and M87 \citep[e.g.,][]{doe09}, which greatly helps to study the accretion and/or jet physics in both sources \citep[e.g.,][]{doe08,hua09,mos09,dex10,fish11,bro11,lu14}. In particular, M87 is the best source for exploring the jet physics near a BH, because its strong jet has been observed at multiple wavebands, and multi-wavelength studies have been carried out from radio to $\gamma$-ray~\citep[e.g.,][]{rei89,jun99,per05,har06,ly07,kov07,doe12,ak15,had16}. Recently, it has become possible to explore the inner jet physics with high-resolution EHT observations at 230\,GHz, which resolve the jet base at $\sim 10 R_{\rm g}$ \citep[]{doe12,ak15}. \citet{asa12} investigated the structure of the M87 jet from milliarcsec (mas) to arcsec scales by utilizing multi-frequency very long baseline interferometry (VLBI) images, where they found that the jet follows a parabolic shape, $Z\propto R_{\rm j}^{1.73\pm0.05}$, over a deprojected distance of $\sim 10^2-10^5 R_{\rm g}$ ($R_{\rm j}$ is the radius of the jet emission and $Z$ is the axial distance from the core). The acceleration zone of the M87 jet may be co-spatial with this parabolic region, where the intrinsic jet velocity increases from $\sim 0.1\ c$ at $\sim 10^2R_{\rm g}$ to 1 $c$ at $\sim 10^5 R_{\rm g}$ \citep[]{asa14}. The multi-wavelength nuclear SED of M87 has been extensively explored by both pure jet models \citep[e.g.,][]{dex12,de15,pri16} and ADAF+jet models \citep[e.g.,][]{dim03,yua09,br09,li09,nem14,moc16}, where the radio emission is produced by the jet in both models, while the millimeter/sub-millimeter and X-ray emission can come either from the jet or from the ADAF. Apart from the continuum spectrum, linear polarization can be a diagnostic of the relativistic jets and accretion flows associated with BH systems.
In particular, millimeter/submillimeter polarimetry provides an important tool to study the magnetized plasma near a BH through the Faraday rotation of the polarized light. Based on Faraday rotation measure studies \citep[RM, an integral of the product of the thermal electron density and the magnetic field component along the line of sight,][]{bow03,mac06,mar06}, it was found that the accretion rate close to the BH ($\lesssim 10 R_{\rm g}$) in Sgr A* is several orders of magnitude lower than the accretion rate at the Bondi radius ($R_{\rm B}\sim 10^{5-6} R_{\rm g}$). \citet{kuo14} presented the first constraint on the Faraday RM at millimeter wavelengths for the nucleus of M87 and found that the best-fit RM is $-(2.1\pm1.8)\times10^5\rm rad\ m^{-2}$ (1$\sigma$ uncertainty). Using the same method as for Sgr A* \citep[][]{mar06}, \citet{kuo14} found that the accretion rate should be below $9.2\times10^{-4}\msun \rm yr^{-1}$ at a distance of 21 Schwarzschild radii from the BH, which suggests that most of the matter at the Bondi radius is not actually accreted by the BH. \begin{figure*} \centering \includegraphics[width=90mm]{f1.eps} \caption{A cartoon picture of our ADAF-jet model, where a geometrically thick, optically thin ADAF and a parabolic jet are considered. The jet inclination angle is assumed to be $15^{\rm o}$, and the disk is perpendicular to the jet. Here, we consider the two possibilities that the polarized emission passes through the ADAF itself along LOS-1 and that the polarized emission of the ADAF passes through the plasma in the jet along LOS-2 (the thick solid lines).} \end{figure*} Recently, \citet{pri16} presented the high-resolution quasi-simultaneous multi-waveband SED at a scale of $\sim 0.4$ arcsec for M87, which is very helpful for exploring the accretion-jet physics.
In particular, the evident millimeter bump in the SED of M87 is quite similar to the sub-millimeter bump of Sgr A* \citep[e.g.,][]{yu03}, which may be contributed by the synchrotron emission from the thermal electrons in an ADAF. If this is the case, it can be used to constrain the accretion rate near the BH, since most former works assumed that the multi-waveband emission of the M87 core is dominated by the jet, which prevents us from learning about the underlying accretion physics. Furthermore, the recently reported Faraday rotation measure puts another constraint on the accretion and jet model. We present the ADAF-jet model in Section 2, and show the main results in Section 3. Discussion and conclusions are given in Section 4. Throughout this work, we adopt a BH mass of $6.6\times10^9 \msun$ and a distance of 16.7 Mpc, where 1 mas = 0.08 pc = 280 $R_{\rm g}$. \begin{table*}[t] \centering \begin{minipage}{180mm} \centering \footnotesize \centerline{\bf Table 1. M87 core SED in quiescent phase with aperture radius of $\sim 0.4^{''}$.} \tabcolsep 1.500mm \begin{tabular}{llllc}\hline\hline \tablecolumns{16} Frequency &Flux & Telescope & Date &References\\ \hline $5.0\times10^{9}$Hz &$3.10\pm0.06 $ Jy &VLA-A & 1999-09 & 1 \\ $8.4\times10^{9}$Hz &$3.02\pm0.02 $ Jy &VLA-A & 2003-06\&2003-08 & 2 \\ $8.4\times10^{9}$Hz &$3.15\pm0.16 $ Jy &VLA-A & 2004-12-31 & 2 \\ $15.0\times10^{9}$Hz &$2.7\pm0.1 $ Jy &VLA-A & 2003-06\&2003-08 & 2 \\ $22.0\times10^{9}$Hz &$2.0\pm0.1 $ Jy &VLA-A & 2003-06 & 2 \\ $93.7\times10^{9}$Hz &$1.82\pm0.06 $ Jy &ALMA & 2012-6-3 & 2 \\ $108.0\times10^{9}$Hz &$1.91\pm 0.05$ Jy &ALMA & 2012-6-3 & 2 \\ $221.0\times10^{9}$Hz &$1.63\pm0.03 $ Jy &ALMA & 2012-6-3 & 2 \\ $252.0\times10^{9}$Hz &$1.42\pm0.02 $ Jy &ALMA & 2012-6-3 & 2 \\ $286.0\times10^{9}$Hz &$1.28\pm0.02 $ Jy &ALMA & 2012-6-3 & 2 \\ $350.0\times10^{9}$Hz &$0.96\pm0.02 $ Jy &ALMA & 2012-6-3 & 2 \\ $635.0\times10^{9}$Hz &$0.43\pm0.09 $ Jy &ALMA & 2012-6-3 & 2 \\ $2.6\times10^{13}$Hz
&$(1.3\pm0.2)\times10^{-2}$ Jy &Keck & 2000-01-18 & 3 \\ $2.8\times10^{13}$Hz &$(1.67\pm0.09)\times10^{-2}$ Jy &Gemini & 2001-05 & 4 \\ $1.37\times10^{14}$Hz &$(3.3\pm0.6)\times10^{-3}$ Jy &HST & 1998-1-16 & 2 \\ $1.81\times10^{14}$Hz &$(3.1\pm0.8)\times10^{-3}$ Jy &HST & 1999-1-16 & 2 \\ $2.47\times10^{14}$Hz &$(2.06\pm0.18)\times10^{-3}$ Jy &HST & 1997-11-10 & 2 \\ $3.32\times10^{14}$Hz &$(1.38\pm0.01)\times10^{-3}$ Jy &HST & 2003-1-19 & 2 \\ $3.70\times10^{14}$Hz &$(9.5\pm1.9)\times10^{-4}$ Jy &HST & 2003-11-29 & 2 \\ $4.99\times10^{14}$Hz &$(6.33\pm0.63)\times10^{-4}$ Jy &HST & 2003-11-29 & 2 \\ $6.32\times10^{14}$Hz &$(4.13\pm0.12)\times10^{-4}$ Jy &HST & 2003-11-29 & 2 \\ $8.93\times10^{14}$Hz &$(2.10\pm0.04)\times10^{-4}$ Jy &HST & 2003-5-10 & 2 \\ $8.93\times10^{14}$Hz &$(2.16\pm0.04)\times10^{-4}$ Jy &HST & 2003-3-31 & 2 \\ $1.11\times10^{15}$Hz &$(1.55\pm0.03)\times10^{-4}$ Jy &HST & 2003-05-10 & 2 \\ $1.27\times10^{15}$Hz &$(1.05\pm0.03)\times10^{-4}$ Jy &HST & 2003-7-27 & 2 \\ $1.36\times10^{15}$Hz &$(1.33\pm0.04)\times10^{-4}$ Jy &HST & 2003-11-29 & 2 \\ $2.06\times10^{15}$Hz &$(4.73\pm0.47)\times10^{-5}$ Jy &HST & 1999-5-17 & 2 \\ 2-10 keV & $(0.70\pm0.04)\times10^{-12}\rm erg\ cm^{-2}\ s^{-1}$ & Chandra & 2000-07-30 & 5 \\ \hline \end{tabular} \end{minipage} \begin{minipage}{170mm} \centering References: 1) \citet{naga01}; 2) \citet{pri16}; 3) \citet{whys04}; 4) \citet{per01}; 5) \citet{russ15}. \end{minipage} \end{table*} \begin{figure*} \epsscale{1.0} \plotone{f2.eps} \caption{ADAF-jet model result compared with the M87 SED within a 0.4 arcsec aperture radius in the quiescent state. The dotted line represents the ADAF spectrum with $a_{*}=0.9$, $s=0.52$ and $\beta=0.5$. The dashed lines show the jet spectrum with $v_{\rm jet}=0.6\ c$, $\dot{m}_{\rm jet}=1.5\times10^{-6}$ and $p=2.38$. The solid line is the sum of the ADAF and jet contributions.} \end{figure*}
The multi-wavelength SED of M87 has been widely modeled in the literature with ADAF models \citep[][]{re96,dim03,wa08,li09}, jet models \citep[][]{de15,pri16}, or a combination of the two~\citep[][]{yu11,nem14}. Normally, it is believed that the radio emission of M87 comes predominantly from the jet, while the origin of the millimeter/sub-millimeter and X-ray emission is controversial \citep[either from the ADAF or from the jet,][]{wa08,li09,yu11,pri16}. With ALMA observations, it is found that the radio spectrum becomes much steeper at $\sim 100$ GHz compared to the low-frequency radio band, and the spectrum turns over at $\sim 200$ GHz \citep[][]{pri16}. A similar spectrum is also found in Sgr A* and M 81 \citep[][]{fal00,yu03,an05,ma08,ma09}. \citet{yu03} proposed that the sub-millimeter bump of Sgr A* can be naturally reproduced by the synchrotron emission from the high-temperature electrons in an ADAF. In this work, we reach a similar conclusion for M87, which will help us to learn about the underlying accretion process. Once constrained by the millimeter bump, we find that the ADAF cannot simultaneously reproduce the X-ray emission well, while the X-ray emission and low-frequency radio emission are better explained by the jet. This conclusion is similar to \citet{yc05,wu07,yua09}, where the X-ray emission should be dominated by the jet, not the ADAF, if the Eddington ratio is less than a critical value. It should be noted that our ADAF-jet model cannot explain the optical-UV emission, which may be contributed by the host galaxy \citep[][]{nem14,de15}; multi-waveband flux variations may help to test this issue. We find that different BH spin parameters ($a_{*}=0-0.99$) and magnetic parameters ($\beta=0.5-0.9$) yield equivalent fits of the SED, but the accretion rate has to be decreased if a higher BH spin and a stronger magnetic field (lower $\beta$) are adopted.
The jet velocity also cannot be constrained from our SED modeling, and we find that it does not affect our conclusion above, since it is degenerate with $\dot{m}_{\rm jet}$: different jet-velocity parameters lead to different Doppler factors. In the ADAF model, the density profile is $\rho\propto r^{-1.5+s}$, i.e., $\rho\propto r^{\sim -1}$ for $s\sim0.4-0.5$, which is quite consistent with that determined by $Chandra$ within the Bondi radius of M87 \citep[][]{russ15}. It was also found that the density profile is quite shallow, $\rho\propto r^{-(0.5-1)}$, in Sgr A* and NGC 3115 \citep[e.g.,][]{wan13,won14}, much shallower than that predicted by the ``old'' ADAF model ($\rho\propto r^{-1.5}$). These results suggest that only a small fraction of the material captured at the Bondi radius reaches the SMBH, which is quite consistent with recent numerical simulations of hot flows \citep[e.g.,][and references therein]{yua12}. The Faraday RM has been used to constrain the accretion rate in both Sgr A* and M87 \citep[][]{mar06,kuo14}, where a spherical accretion flow surrounding the BH was simply assumed. This may be acceptable for Sgr A*, since our LOS possibly lies close to the ADAF plane; however, the disk-like ADAF is roughly perpendicular to our LOS in M87 if the jet is assumed to be perpendicular to the disk (see the cartoon in Figure 1). In this case, the RM cannot be calculated in the same way as for Sgr A*. We calculate the RM along LOS-1 based on the disk-like ADAF (see Figure 1). We find that the RM is around $(0.8-15)\times10^5\rm rad\ m^{-2}$ for different parameters $a_{*}$ and $\beta$, where the wind parameter $s\simeq0.4-0.5$ has been constrained from the SED modeling. Our result is roughly consistent with the observed value of $-(2.1\pm1.8)\times10^5\rm rad\ m^{-2}$ if, in particular, the BH in M87 is fast rotating (e.g., $a_{*}\sim0.9$) \citep[e.g.,][]{wu07}.
It should be noted that our conclusion above does not change if the RM is calculated along LOS-1 within a smaller radius of the ADAF (e.g., $2R_{\rm g}<R<10R_{\rm g}$), where the RM values decrease by a factor of 2. Besides the ADAF model, we also explore the possibility of the jet. Because the jet emission at the mm waveband is much larger than that observed (see Figure 3), we only consider the case of the jet as an external origin of the RM (e.g., the polarized source passes through the jet). The RM should be $<7.5\times10^5\rm rad\ m^{-2}$ if the jet velocity is $>0.99\ c$, where a lower $\dot{m}_{\rm jet}$ is needed for modeling the SED with a higher jet velocity. The intrinsic velocity of the core jet in M87 is still not known; the jet may have a complex structure, e.g., a fast spine surrounded by a slower layer, with the observed low velocity measured from the slower layer \citep[e.g.,][]{gi08,gr09,xie12,na14,wa14,moc16}. Furthermore, the RM will become lower if the magnetic field is strongly dominated by the toroidal field in the innermost part of the jet, or if the magnetic field undergoes many reversals along the LOS. In our model, the ADAF can naturally reproduce the observed RM, and we cannot exclude the possibility of the jet model. Future constraints on the intrinsic velocity of the spine jet (if the jet has a spine-layer structure) will help to further understand this issue. Similar to Sgr A* \citep[][]{yu03}, we model the millimeter bump of M87 using a thermal disk component. It should be noted that the millimeter/sub-millimeter bump of both M87 and Sgr A* can also be reproduced by the jet component associated with the jet launching region close to the BH \citep[the so-called ``jet nozzle'',][]{fal00,pri16}. In this work, we use a simple jet model with the shape constrained directly from the observations, which does not include such a nozzle.
Recently, \citet{li15} calculated the RM of Sgr A* based on the jet nozzle model of \citet{fal00} and found that the predicted RM is two orders of magnitude less than the observed value, suggesting that this model cannot explain the observed RM even though it can reproduce the sub-millimeter bump. Whether this model can explain the RM of M87 is still unknown; this question is beyond the scope of the present work.
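The RM estimates discussed above follow from the standard line-of-sight integral, ${\rm RM}\simeq 0.81\int n_{\rm e}B_{\parallel}\,{\rm d}l~{\rm rad\,m^{-2}}$ (with $n_{\rm e}$ in ${\rm cm^{-3}}$, $B_{\parallel}$ in $\mu$G, and $l$ in pc). The sketch below evaluates this integral numerically for illustrative power-law profiles $n_{\rm e}\propto r^{-1}$ and $B_{\parallel}\propto r^{-1}$; the normalizations ($n_0$, $B_0$, $r_0$) are hypothetical round numbers chosen for illustration, not the fitted values of this work:

```python
# Toy Faraday rotation measure for power-law profiles.
# RM = 0.81 * integral( n_e [cm^-3] * B_par [uG] dl [pc] )  in rad m^-2
# Assumed (illustrative) profiles: n_e = n0*(r0/r), B_par = B0*(r0/r).

def rm_powerlaw(n0, B0, r0, r_in, r_out, n_steps=100000):
    """Trapezoidal integration of 0.81 * n_e(r) * B_par(r) over [r_in, r_out] in pc."""
    dr = (r_out - r_in) / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        r = r_in + i * dr
        w = 0.5 if i in (0, n_steps) else 1.0
        total += w * n0 * (r0 / r) * B0 * (r0 / r) * dr
    return 0.81 * total

# Hypothetical numbers: n0 = 1e4 cm^-3 and B0 = 10 uG at r0 = 1e-3 pc.
rm = rm_powerlaw(n0=1e4, B0=10.0, r0=1e-3, r_in=1e-3, r_out=1e-1)

# Analytic check: 0.81 * n0 * B0 * r0^2 * (1/r_in - 1/r_out)
analytic = 0.81 * 1e4 * 10.0 * (1e-3)**2 * (1 / 1e-3 - 1 / 1e-1)
print(rm, analytic)  # both ~80 rad m^-2 for these toy numbers
```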
We study the stability of the Vainshtein screening solution of massive/bi-gravity, based on the massive nonlinear sigma model as the effective action inside the Vainshtein radius. The effective action is obtained by taking the $\Lambda_2$ decoupling limit around a curved spacetime. First we derive a general consequence: any Ricci flat Vainshtein screening solution is unstable when we take into account the excitation of the scalar graviton only. This instability suggests that the nonlinear excitation of the scalar graviton is not sufficient to obtain successful Vainshtein screening in massive/bi-gravity. Then, to see the role of the excitation of the vector graviton, we explicitly study perturbations around the static and spherically symmetric solution obtained in bigravity. As a result, we find that linear excitations of the vector graviton cannot help, and the solution still suffers from a ghost and/or a gradient instability for any parameters of the theory on this background.
The current acceleration of the Universe is one of the biggest problems in modern cosmology. It has been proposed to explain this acceleration by modifying the theory of gravity from general relativity (GR) in the infrared regime (see \cite{Clifton:2011jh,Joyce:2014kja,Koyama:2015vza} for reviews). However, modifications of gravity are strongly constrained by Solar System tests, which agree with the predictions of GR. Hence the effect of the modification of gravity must be screened in the Solar System. One natural theory with such a screening mechanism is massive gravity with a tiny graviton mass. On short scales the massive graviton may behave as a massless graviton, so one expects the predictions of GR to be recovered. However, the linear massive gravity \cite{Fierz:1939ix} does not reduce to linearized GR, because of the non-vanishing fifth force mediated by the scalar graviton \cite{vanDam:1970vg,Zakharov:1970cc}. Vainshtein then proposed that the linear approximation is no longer valid inside the Vainshtein radius and that the fifth force could be screened by nonlinear interactions; this is called the Vainshtein mechanism \cite{Vainshtein:1972sx}. The Vainshtein mechanism plays an important role not only in massive gravity but also in some classes of scalar-tensor theories with nonlinear interactions \cite{Nicolis:2008in,Deffayet:2009wt,Deffayet:2009mn,Deffayet:2011gz,Kobayashi:2011nu,Horndeski:1974wa,Gleyzes:2014dya,Gleyzes:2014qga}. However, because of these nonlinear interactions, a general analysis of the Vainshtein mechanism is quite complicated. Hence, to discuss the Vainshtein mechanism, it is useful to construct an effective theory from the original theory and discuss the Vainshtein mechanism based on the effective theory \cite{Kimura:2011dc,Koyama:2013paa,Kobayashi:2014ida,Saito:2015fza}.
In this paper, we discuss an effective theory for the Vainshtein mechanism in the nonlinear massive gravity \cite{deRham:2010ik,deRham:2010kj} and the bigravity \cite{Hassan:2011zd} (see \cite{Hinterbichler:2011tt,Babichev:2013usa,deRham:2014zqa,Schmidt-May:2015vnx} for reviews). The Vainshtein mechanism in massive/bi-gravity has been discussed in \cite{Babichev:2009us,Babichev:2009jt,Babichev:2010jd,Koyama:2011xz,Nieuwenhuizen:2011sq,Koyama:2011yg,Chkareuli:2011te,Gruzinov:2011mm,Babichev:2011iz,Comelli:2011wq,Berezhiani:2011mt,Sjors:2011iv,Volkov:2012wp,Sbisa:2012zk,Volkov:2013roa,Babichev:2013pfa,Kaloper:2014vqa,Renaux-Petel:2014pja,Enander:2015kda,Aoki:2015xqa,Aoki:2016eov}. It is known that the nonlinear massive gravity can be reduced to a scalar-tensor theory with Galileon interactions by taking the $\Lambda_3$ decoupling limit when the vector graviton is not excited, in which there still exist scalar-tensor interactions \cite{ArkaniHamed:2002sp,deRham:2010ik} (see also \cite{Ondo:2013wka,Fasiello:2013woa}). On the other hand, one can take a more direct decoupling limit, called the $\Lambda_2$ decoupling limit \cite{deRham:2015ijs,deRham:2016plk}, in which the tensor fluctuations are decoupled from the scalar and vector gravitons and the effective action for the scalar and vector gravitons is given by the massive gravity nonlinear sigma model. The papers \cite{deRham:2015ijs,deRham:2016plk} discussed the $\Lambda_2$ decoupling limit around the Minkowski spacetime. In contrast, in the present paper, we discuss the $\Lambda_2$ decoupling limit around a curved spacetime and obtain an effective theory inside the Vainshtein radius. Indeed, the solution obtained from the effective theory gives an approximate solution inside the Vainshtein radius in the bigravity theory \cite{Aoki:2016eov}. We then study the stability of the Vainshtein screening solution based on the effective theory. The paper is organized as follows.
We derive the effective theory for the Vainshtein mechanism in massive/bi-gravity in Section \ref{sec_ET_Vainshtein}. In Section \ref{sec_scalar_instability} we then study the dynamics of the scalar graviton around general backgrounds and find that a Ricci flat spacetime generally suffers from a ghost and/or a gradient instability. This instability is found when we ignore the vector graviton; however, perturbations of the scalar and vector gravitons are, in general, coupled to each other. Hence, to complete the stability analysis of the solution, we must include perturbations of the vector graviton, which is done in Section \ref{sec_instability_SSS}. We explicitly show that the static and spherically symmetric solution is unstable. We give a summary and some discussions in Section \ref{summary}.
\label{summary} In this paper, we showed that the massive gravity nonlinear sigma model gives an effective theory of the vector and scalar gravitons inside the Vainshtein radius for general massive/bi-gravity. We obtained the effective action by taking the $\Lambda_2$ decoupling limit around a curved spacetime, and it can be used as long as Vainshtein screening solutions exist. Making use of the massive gravity nonlinear sigma model as the effective action inside the Vainshtein radius, we studied the stability of the Vainshtein screening solutions in massive/bi-gravity. First we derived a general consequence: in any Ricci flat background spacetime, the scalar graviton generally suffers from a ghost and/or a gradient instability as long as the vector graviton is not excited. Since the spacetime is given by a solution in GR, the Ricci flat region corresponds to the vacuum region of the spacetime; thus the instability arises outside the source. However, since massive/bi-gravity contains the vector graviton and the perturbations of the scalar and vector gravitons are coupled, one cannot directly conclude that the Ricci flat Vainshtein screening background spacetime is indeed unstable. Hence we next studied perturbations around the static and spherically symmetric solution obtained in Ref.~\cite{Aoki:2016eov}. We clarified the stability conditions for both odd-parity and even-parity perturbations, which depend on $\beta_2$ and $\beta_3$, model parameters of the massive gravity nonlinear sigma model, as well as on $\epsilon$, a parameter depending on the asymptotic behavior of the background solution. As a result, for any parameters ($\beta_2, \beta_3, \epsilon$), we found that the perturbations suffer from some of these instabilities and confirmed that the Vainshtein screening background solution is unstable. We have shown the (local) instability of the spherically symmetric solution in the space region outside the star.
In addition, the instability of a black hole solution was shown in \cite{Babichev:2013una,Brito:2013wya} (see also \cite{Kodama:2013rea,Babichev:2014oua,Babichev:2015zub,Babichev:2015xha}). Note that our background solution completely differs from the background solution discussed in \cite{Babichev:2013una,Brito:2013wya}: for the black hole solution, both metrics are given by the same Schwarzschild (or Kerr) metric, in which the St\"uckelberg fields are not excited, i.e., $\phi^a=x^a$. One may expect that there exists a stable hairy black hole supported by hair of the St\"uckelberg fields. However, our result suggests that scalar graviton hair is not helpful for supporting astrophysical objects. In particular, the existence of a spherically symmetric hairy black hole is unlikely, as numerically shown in \cite{Brito:2013xaa}. The instability implies a difficulty in constructing viable astrophysical objects in the context of massive/bi-gravity. The universality of the instability suggests that Vainshtein screening cannot be realized by the scalar graviton alone. To obtain a stable solution with Vainshtein screening, the vector graviton has to be nonlinearly excited in the vacuum region of the spacetime. Therefore it is also important to study the properties of the vector graviton in more general spacetimes for the Vainshtein mechanism in massive/bi-gravity.
The Sausage radio relic is the arc-like radio structure in the cluster CIZA J2242.8+5301, whose observed properties can be best understood as synchrotron emission from relativistic electrons accelerated at a merger-driven shock. However, there remain a few puzzles that cannot be explained by the shock acceleration model with in-situ injection only. In particular, the Mach number inferred from the observed radio spectral index is $M_{\rm radio}\approx 4.6$, while the Mach number estimated from X-ray observations is $M_{\rm X-ray}\approx 2.7$. In an attempt to resolve this discrepancy, here we consider the re-acceleration model, in which a shock of $M_s\approx 3$ sweeps through the intracluster gas with a pre-existing population of relativistic electrons. We find that the observed brightness profiles at multiple frequencies provide strong constraints on the spectral shape of the pre-existing electrons. Models with a power-law momentum spectrum with slope $s\approx 4.1$ and cutoff Lorentz factor $\gamma_{e,c}\approx 3-5\times 10^4$ can reproduce reasonably well the observed spatial profiles of the radio flux and the integrated radio spectrum of the Sausage relic. The possible origins of such relativistic electrons in the intracluster medium remain to be investigated further.
Giant radio relics such as the Sausage and the Toothbrush relics exhibit elongated morphologies, spectral steepening across the relic width, integrated radio spectra of a power-law form with spectral curvature above $\sim 2 $~GHz, and high polarization levels \citep{vanweeren10,vanweeren12, feretti12, stroe16}. They are thought to be synchrotron radiation emitted by GeV electrons, which are (re-)accelerated at structure formation shocks in the intracluster medium (ICM) \citep[e.g.,][]{ensslin98, brug12, brunetti2014}. It is now well established that nonthermal particles can be (re-)accelerated at such shocks via the diffusive shock acceleration (DSA) process \citep[e.g.,][]{ryu03,vazza09,skill11,kangryu11}. In the simple DSA model of a steady planar shock, the synchrotron radiation spectrum at the shock becomes a power law, $j_{\nu}(r_s)\propto \nu^{-\alpha_{\rm sh}}$, with the {\it shock index} $\alpha_{\rm sh} = (M_s^2+3)/[2(M_s^2-1)]$, while the volume-integrated radio spectrum becomes $J_{\nu} \propto \nu^{-\alpha_{\rm int}}$ with the integrated index $\alpha_{\rm int}=\alpha_{\rm sh}+0.5$ above the break frequency $\nu_{\rm br}$ \citep[e.g.][]{dru83,ensslin98,kang11}. Here $M_s$ is the shock sonic Mach number. If the shock acceleration duration is less than $\sim 100$ Myr, however, the break frequency, $\nu_{\rm br}\sim 1$ GHz, falls within the typical observation frequencies, and the integrated spectrum steepens gradually over the frequency range $(0.1-10) \nu_{\rm br}$ \citep{kang15b}. Moreover, additional spectral curvature can be introduced in the case of a spherically expanding shock \citep{kang15a}.
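The test-particle DSA relation above is invertible, $M_s^{2}=(3+2\alpha_{\rm sh})/(2\alpha_{\rm sh}-1)$ for $\alpha_{\rm sh}>0.5$; the short sketch below checks both directions, e.g., $\alpha_{\rm sh}\approx0.6$ corresponds to $M_s\approx4.6$ and $\alpha_{\rm sh}\approx0.77$ to $M_s\approx2.9$:

```python
import math

def alpha_sh(M_s):
    """Shock spectral index for a steady planar shock in test-particle DSA."""
    return (M_s**2 + 3) / (2 * (M_s**2 - 1))

def mach_from_alpha(alpha):
    """Invert alpha_sh(M_s); valid for alpha > 0.5."""
    return math.sqrt((3 + 2 * alpha) / (2 * alpha - 1))

print(alpha_sh(4.6))          # ~0.60
print(mach_from_alpha(0.77))  # ~2.9
print(alpha_sh(4.6) + 0.5)    # integrated index alpha_int above the break
```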
On the other hand, in the re-acceleration model, in which the upstream gas contains a pre-existing electron population, for example, $f_{\rm up} \propto \gamma_e^{-s} \exp [ - ({\gamma_e /\gamma_{e,c}})^2 ]$ (where $\gamma_e$ is the Lorentz factor), the re-accelerated electron spectrum and the ensuing radio spectrum depend on the slope $s$ and the cutoff energy $\gamma_{e,c}$ as well as on $M_s$. Recently, \citet{kang16} (Paper I) explored the observed properties of the Toothbrush relic, and also reviewed some puzzles regarding the DSA origin of radio {\it gischt} relics: (1) the discrepancy between $M_{\rm radio}$ inferred from the radio index $\alpha_{\rm sh}$ and $M_{\rm X-ray}$ estimated from the X-ray temperature discontinuities in some relics, (2) the low DSA efficiency expected for weak shocks with $M_s\lesssim 3$ that form in the hot ICM, (3) the low frequency of merging clusters with detected radio relics, compared to the expected occurrence of ICM shocks, and (4) shocks detected in X-ray observations without associated radio emission. In Paper I, it was suggested that most of these puzzles can be explained by the re-acceleration model, in which a radio relic lights up only when a shock propagates in ICM thermal plasma that contains a pre-existing population of electrons \citep[see also][]{kang12,pinzke13, shimwell15,kangryu16}. The so-called Sausage relic is a giant radio relic in the outskirts of the merging cluster CIZA J2242.8+5301, first detected by \citet{vanweeren10}. They interpreted the observed radio spectrum from 150~MHz to 2.3 GHz as power-law-like synchrotron radiation emitted by shock-accelerated relativistic electrons. They thus inferred the shock Mach number, $M_{\rm radio}\approx 4.6$, from the spectral index at the hypothesized shock location, $\alpha_{\rm sh}\approx 0.6$, and the magnetic field strength, $B_2\approx 5$ or $1.2 \muG$, from the relic width of 55~kpc.
Although this shock interpretation was strongly supported by the observed downstream spectral aging and high polarization levels, the requirement of a relatively high Mach number of $M_s=4.6$ in the ICM raised some concerns. Based on structure formation simulations, shocks in the ICM are expected to have low Mach numbers, typically $M_s<3$ \citep[e.g.,][]{ryu03}. \citet{stroe14b} reported that the integrated spectrum of the Sausage relic steepens toward 16~GHz, with the integrated index increasing from $\alpha_{\rm int}\approx 1.06$ to $\alpha_{\rm int}\approx 1.33$ above 2.3~GHz. They noted that such a curved integrated spectrum is not consistent with the simple DSA model for a steady plane shock with $M_s\approx4.6$ suggested by \citet{vanweeren10}. Later, \citet{stroe14a} suggested, using spatially resolved spectral fitting, that the injection index could be larger, i.e.,~$\alpha_{\rm sh}\approx 0.77$, implying $M_s\approx 2.9$. In fact, this lower value of $M_s$ is more consistent with the temperature discontinuities detected in X-ray observations by \citet{ogrean14} and \citet{akamatsu15}. In order to understand the spectral curvature in the integrated spectrum reported by \citet{stroe14b}, \citet{kangryu15} considered various shock models, including both the {\it in-situ injection} model without pre-existing electrons and the {\it re-acceleration} model with pre-existing electrons of a power-law spectrum with an exponential cutoff. It was shown that shock models with $M_s\approx 3$, either the {\it in-situ injection} or the {\it re-acceleration} model, can reproduce reasonably well the radio brightness profile at 600~MHz and the curved integrated spectrum of the Sausage relic, except for the abrupt increase of the spectral index above 2~GHz. The authors concluded that such a steep increase of the spectral index cannot be explained by simple radiative cooling of the postshock electrons.
On the other hand, it was pointed out that the Sunyaev-Zeldovich effect may induce such spectral steepening by reducing the radio flux at high frequencies by a factor of two or so \citep{basu15}. Recently, \citet{stroe16} presented the integrated spectrum of the Sausage relic spanning from 150~MHz up to 30~GHz, which exhibits a spectral steepening from $\alpha_{\rm int}\approx 0.9$ at low frequencies to $\alpha_{\rm int}\approx 1.71$ above 2.5~GHz. \citet{kangryu16} attempted to reproduce this observed spectrum with the re-acceleration model, in which a spherical shock sweeps through and then exits a finite-size region with pre-existing relativistic electrons. Since the re-acceleration stops after the shock crosses the region with pre-existing electrons, the ensuing integrated radio spectrum steepens much more than predicted for aging postshock electrons alone, resulting in a better match to the observed spectrum. We suggested that a shock of $M_s \approx 2.7-3.0$ and $u_s \approx 2.5-2.8\times 10^3 \kms$ that swept up a cloud of $\sim 130$~kpc with pre-existing electrons about 10 Myr ago could reproduce the observed radio flux profile at 600~MHz \citep{vanweeren10} and the observed integrated spectrum \citep{stroe16}. The required spectral shape of the pre-existing electrons is a power-law spectrum with slope $s=4.2$ and an exponential cutoff at $\gamma_{e,c}\approx 10^4$. On the other hand, \citet{donnert16} proposed an alternative approach to explain the spectral steepening of the Sausage relic. In order to match the observed brightness profiles, it was assumed that behind the shock the magnetic field strength first increases, peaks around 40~kpc from the shock, and then decreases exponentially at larger distances. In this model, the magnetic field strength is lower in the immediate postshock region containing the highest-energy electrons, compared to a model with constant postshock magnetic field.
As a result, the integrated radio spectrum steepens at high frequencies, leading to a curved spectrum consistent with the observation by \citet{stroe16}. That paper presented beam-convolved brightness profiles, $S_{\nu}(R)$, at several radio frequencies from 153~MHz to 30~GHz in their Figure 5, and the spectral index $\alpha_{153}^{608}$ between 153 and 608~MHz in their Figure 6. We notice that $S_{\nu}$ at $153-323$~MHz extends well beyond 150~kpc away from the relic edge, and $\alpha_{153}^{608}$ increases from $\sim 0.6$ at the position of the putative shock to $\sim 1.9$ at 200~kpc south of the shock. Considering the shock compression ratio of $\sigma\approx 3$, these downstream length scales imply that the shock has swept through a region of at least $450$~kpc in the case of a plane shock. These observations therefore cannot be explained by the model of \citet{kangryu16}, which assumed a cloud of pre-existing electrons 130~kpc in width. As pointed out in Paper I, the ubiquitous presence of radio galaxies, AGN relics, and radio phoenixes implies that the ICM may contain radio-quiet {\it fossil} electrons ($\gamma_{e,c}\lesssim 10^2$) or radio-loud {\it live} electrons ($\gamma_{e,c}\lesssim 10^4$) \citep[e.g.,][]{slee01}. In the re-acceleration model, fossil electrons with $\gamma_e\sim 100$ provide seed electrons that can be injected into the DSA process and enhance the acceleration efficiency at weak ICM shocks. On the other hand, radio-loud electrons with a power-law spectrum and a cutoff at $\gamma_{e,c}\sim 7-8\times 10^4$ are required to explain the broad relic width of $\sim 150-200$~kpc in the case of the Toothbrush relic (Paper I). In our re-acceleration model of the Sausage relic, considered in \citet{kangryu15, kangryu16}, the shock propagates into the thermal ICM gas with pre-existing relativistic electrons whose pressure is dynamically insignificant.
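The factor-of-$\sigma$ argument above follows from mass conservation: for a steady plane shock, shocked gas now spanning a length $L_d$ downstream was swept up from an upstream column $\sigma L_d$. A two-line sketch (helper names are ours):

```python
GAMMA = 5.0 / 3.0  # adiabatic index of the thermal ICM gas

def compression_ratio(M):
    """Rankine-Hugoniot density compression ratio for sonic Mach number M."""
    return (GAMMA + 1.0) * M**2 / ((GAMMA - 1.0) * M**2 + 2.0)

def swept_length_kpc(downstream_extent_kpc, M):
    """Upstream column swept up by a steady plane shock whose shocked gas
    currently spans `downstream_extent_kpc` behind the shock front."""
    return compression_ratio(M) * downstream_extent_kpc
```

With $M_s\approx 3$ (so $\sigma\approx 3$), a 150~kpc downstream extent implies a swept-up column of $\approx 450$~kpc, as quoted in the text.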
Note that here the preshock medium is not a bubble of hot buoyant relativistic plasma, unlike in the models studied previously by \citet{ensslin01}, \citet{ensslin02}, and \citet{pfrommer11}. Thus the presence of pre-existing electrons does not affect the dynamics of the shock; instead, it only provides the seed electrons that can be injected effectively into the DSA process. However, it is not clear how relativistic electrons can be mixed with the thermal ICM gas if they were to originate from radio jets and lobes ejected from AGNs. On the other hand, such a mixture of thermal gas and relativistic electrons can be understood more naturally if they were produced by previous episodes of shocks and turbulence generated by merger-driven activities in the ICM \citep[e.g.,][]{brunetti2014}. In this study, we attempt to explain the observed properties of the Sausage relic, reported by \citet{stroe16} and \citet{donnert16}, with the re-acceleration model in which a low Mach number shock sweeps through the ICM gas with pre-existing relativistic electrons, as we did for the Toothbrush relic in Paper I. In the next section, we explain some basic physics of the DSA model and review the observed properties of the Sausage relic. In Section 3 the numerical simulations and the shock models are described. The comparison of our results with observations is presented in Section 4, followed by a brief summary in Section 5. \begin{figure}[t!] \centering \includegraphics[trim=1mm 4mm 4mm 8mm, clip, width=84mm]{f1.eps} \caption{The factor $Q(B,z)$ for $z=0.188$ given in Equation~(\ref{qfactor}). 
} \end{figure}

\begin{table*}
\begin{center}
{\bf Table 1.}~~Model Parameters for the Sausage Relic Shock\\
\vskip 0.3cm
\begin{tabular}{ lrrrrrrrrrrr }
\hline\hline
Model & $M_{\rm s,i}$ & $kT_1$ & $B_1$ & $s$ & $\gamma_{\rm e,c}$ & $t_{\rm exit}$ & $L_{\rm cloud}$ & $t_{\rm obs}$ & $M_{\rm s,obs}$ & $kT_{\rm 2,obs}$ & $u_{\rm s,obs}$ \\
 & & (keV) & ($\muG$) & & & (Myr) & (kpc) & (Myr) & & (keV) & (${\rm km~s^{-1}}$) \\
\hline
M3.3 & 3.3 & 3.4 & 1 & 4.1 & $5\times10^4$ & 124 & 367 & 144 & 2.7 & 10.7 & $2.6\times10^3$ \\
M3.8 & 3.8 & 3.4 & 1 & 4.1 & $3\times10^4$ & 125 & 419 & 143 & 3.1 & 13.1 & $2.9\times10^3$ \\
\hline
\end{tabular}
\end{center}
{$M_{\rm s,i}$: initial shock Mach number at the onset of the simulations}\\
{$kT_1$: preshock temperature}\\
{$B_1$: preshock magnetic field strength}\\
{$s$: power-law slope in Equation (2)}\\
{$\gamma_{e,c}$: exponential cutoff in Equation (2)}\\
{$t_{\rm exit}$: time when the shock exits the cloud with pre-existing electrons}\\
{$L_{\rm cloud}$: size of the cloud with pre-existing electrons}\\
{$t_{\rm obs}$: shock age when the simulated results match the observations}\\
{$M_{\rm s,obs}$: shock Mach number at $t_{\rm obs}$}\\
{$kT_{\rm 2,obs}$: postshock temperature at $t_{\rm obs}$}\\
{$u_{\rm s,obs}$: shock speed at $t_{\rm obs}$}\\
\end{table*}
The Sausage radio relic is unique in several aspects. Its thin arc-like morphology and uniform surface brightness along the relic length over 2~Mpc could be explained by the re-acceleration model in which a spherical shock sweeps through an elongated cloud of the ICM gas with pre-existing relativistic electrons \citep{kangryu15}. Moreover, the re-acceleration model can resolve the discrepancy between $M_{\rm radio}\approx 4.6$ inferred from the radio spectral index \citep{vanweeren10} and $M_{\rm X-ray}\approx 2.7$ estimated from X-ray temperature discontinuities \citep{akamatsu15}. Note that in this model the spectral index at the relic edge, $\alpha_{\rm sh}\approx 0.6$, can be controlled by the power-law index of the pre-existing electron population, $s\approx 4.1-4.2$, independent of the shock Mach number. The abrupt spectral steepening above $\sim2$~GHz \citep{stroe16} could be understood if we assume that the cloud of pre-existing electrons has a finite width and the shock exited the cloud about $10-20$~Myr ago \citep{kangryu16}. In this study, we attempt to reproduce the observed profiles of the surface brightness at 153 and 608~MHz and the spectral index between the two frequencies presented in \citet{donnert16}, using the same re-acceleration model but with a set of shock parameters different from \citet{kangryu16}. In particular, the observational facts that $S_{\rm 153MHz}$ and $\alpha_{153}^{608}$ extend beyond 150~kpc downstream of the shock and the degree of spectral steepening of the integrated spectrum at high frequencies provide strong constraints on the model parameters, which are listed in Table 1. Since the re-accelerated electron spectrum depends on the pre-existing electron population, we find that the cutoff Lorentz factor should be fine-tuned to $\gamma_{e,c}\approx 3-5\times 10^4$ in order to match the observations. 
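As a rough consistency check of this exit-time scenario, the cloud-crossing time can be estimated as $L_{\rm cloud}/u_s$. The sketch below (unit constants and helper name are ours) assumes a constant shock speed; since the simulated shock decelerates, the tabulated $t_{\rm exit}$ is somewhat shorter than this estimate:

```python
KPC_KM = 3.086e16   # kilometres per kiloparsec
MYR_S = 3.156e13    # seconds per megayear

def crossing_time_myr(L_kpc, u_kms):
    """Time for a shock moving at constant speed u_kms [km/s] to cross
    a cloud of size L_kpc [kpc], in Myr."""
    return L_kpc * KPC_KM / u_kms / MYR_S
```

For $L_{\rm cloud}=367$~kpc and $u_s\approx 2.6\times10^3\kms$ this gives $\approx 140$~Myr, comparable to the $t_{\rm exit}$ and $t_{\rm obs}$ values in Table 1.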
This study illustrates that it is possible to explain most of the observed properties of the Sausage relic, including the surface brightness profiles and the integrated spectrum, by the shock acceleration model with pre-existing electrons. If the shock speed and Mach number are specified by X-ray temperature discontinuities, the other model parameters, such as the magnetic field strength and the spectral shape of pre-existing electrons, can be constrained by the radio brightness profiles at multiple frequencies. Moreover, the degree of spectral steepening in the integrated spectrum at high frequencies can be modeled with a finite-sized cloud of pre-existing electrons. We assume the shock exited the cloud of pre-existing electrons at $t_{\rm exit}\approx 124-125$~Myr, after crossing the cloud length $L_{\rm cloud}=367$~kpc in the M3.3 model and $L_{\rm cloud}=419$~kpc in the M3.8 model. Although both the M3.3 and M3.8 models produce results comparable to the observations, as shown in Figures 5 and 6, the M3.3 model seems more consistent with X-ray observations: $M_{\rm s,obs}=2.7$, $kT_{\rm 2,obs}=10.7$~keV, and $u_{\rm s,obs}=2.6\times10^3\kms$ at the time of observation, $t_{\rm obs}\approx 144$~Myr. However, it is not well understood how an elongated cloud of thermal gas with such pre-existing relativistic electrons could be generated in the ICM of CIZA J2242.8+5301. It could be produced by strong accretion shocks or infall shocks ($M_s\gtrsim 5$) in the cluster outskirts \citep{hong14} or by turbulence induced by merger-driven activities \citep{brunetti2014}. Alternatively, it could originate from nearby radio galaxies, such as radio galaxy H at the eastern edge of the relic or radio galaxies B, C, and D downstream of the relic. Again, it is not clear how relativistic electrons contained in the jets/lobes of radio galaxies are mixed with the background gas instead of forming a bubble of hot buoyant plasma. 
Note that the shock passage through such relativistic plasma is expected to result in a filamentary or toroidal structure \citep{ensslin02,pfrommer11}, which is inconsistent with the thin arc-like morphology of the Sausage relic. In conclusion, despite the success of the re-acceleration model in explaining many observed properties of the Sausage relic, the origin of the pre-existing relativistic electrons needs to be investigated further. On the other hand, the in-situ injection model for radio relics has its own puzzles: (1) $M_{\rm radio}>M_{\rm X-ray}$ in some relics, (2) the low DSA efficiency expected for weak shocks with $M_s<3$, (3) the relatively low fraction of merging clusters with detected radio relics, compared to the theoretically expected frequency of shocks in the ICM, and (4) some observed X-ray shocks without associated radio emission. In particular, the generation of suprathermal electrons via wave-particle interactions, and the ensuing enhancement of injection into the DSA process in high-beta ICM plasmas, should be studied in detail by fully kinetic plasma simulations.
We have developed a new method of data processing for radio telescope observation data to measure time-dependent temporal coherence, which we named cross-correlation spectrometry (XCS). XCS is an autocorrelation procedure that extends time lags over the integration time and is applied to data obtained from single-dish observations. The temporal coherence property of the received signals is enhanced by XCS. We tested the XCS technique using data on strong H$_{2}$O masers in W3 (H$_2$O), W49N and W75N. We obtained temporal coherence times of the maser emission of 17.95 $\pm$ 0.33 $\mu$s, 26.89 $\pm$ 0.49 $\mu$s and 15.95 $\pm$ 0.46 $\mu$s for W3 (H$_2$O), W49N and W75N, respectively. These results may indicate the existence of a coherent astrophysical maser.
Historically, the Michelson interferometer (\citealt{1887SidM....6..306M}) was first used to measure the temporal coherence of light. Light input to the Michelson interferometer is split into two paths by a half-mirror. The split light beams are then recombined, after reflection by total-reflection mirrors, to form interference fringes. The temporal coherence can be measured by changing the position of the total-reflection mirrors, so that the correlation is taken between light signals originating from the same source but arriving at the detector at slightly different epochs. We consider applying this technique to the measurement of the temporal coherence of an astronomical radio signal using a radio telescope as follows. The observed data are recorded digitally on hard disks, and the temporal coherence of the observed signal is then measured by autocorrelation while applying time lags over the integration time. Instead of taking the simple autocorrelation, a single time-series of data is divided into small chunks corresponding to short time slots, and each chunk of data is converted to a frequency spectrum by Fourier transform. The time-series of the complex frequency spectrum data is then used for the coherence analysis. We named this algorithm cross-correlation spectrometry (XCS). This paper describes the XCS algorithm and examples of its application to the exploration of the coherence of interstellar water (H$_2$O) maser emission. \h2o\ masers are important tools in astrophysics and astrometry because of their limited association with specific evolutionary stages or physical conditions of astronomical objects and their extreme brightness and compactness (e.g., \citealt{elitzur}; \citealt{Gray2012}). Astronomical \h2o\ masers have been considered to be ``incoherent masers'', which produce electromagnetic waves originating from different volumes of the maser region and with random wave phases. 
However, in an extreme case, where the maser emission is generated in a much smaller region ($l\ll 0.1$~AU), in a much sharper beam ($\theta \ll 0.01$~rad), and at a much higher brightness temperature ($T_{\rm b}\gg 10^{14}$~K), ``coherent maser'' emission is expected, in which electromagnetic waves with synchronized phases are greatly enhanced (e.g., \citealt{1991SvAL...17..250I}; \citealt{elitzur}). Without extremely high angular resolution, it is impossible to spatially resolve such coherent emission regions in observed astronomical masers. We demonstrate that the XCS technique can be used to distinguish coherent emission even in single-dish observations.
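The XCS procedure described above (chunking the voltage stream, Fourier transforming each chunk, and correlating the complex spectra across chunk lags) can be sketched as follows. This is our own minimal illustration, not the production pipeline, and the normalization shown is one plausible choice:

```python
import numpy as np

def xcs(voltage, chunk_len, max_lag):
    """Minimal cross-correlation spectrometry: split a sampled voltage stream
    into chunks, Fourier transform each chunk, and correlate the complex
    spectra of chunk pairs separated by 0..max_lag chunk lags.  A slow decay
    of the normalized correlation with lag indicates temporal coherence."""
    n_chunks = len(voltage) // chunk_len
    spectra = np.fft.rfft(
        voltage[:n_chunks * chunk_len].reshape(n_chunks, chunk_len), axis=1)
    corr = []
    for lag in range(max_lag + 1):
        a, b = spectra[:n_chunks - lag], spectra[lag:]
        num = np.abs(np.sum(a * np.conj(b)))
        den = np.sqrt(np.sum(np.abs(a) ** 2) * np.sum(np.abs(b) ** 2))
        corr.append(num / den)
    return np.array(corr)
```

For a perfectly coherent tone, the chunk spectra keep a deterministic phase relation and the correlation stays near unity at nonzero lags, whereas for noise-like (incoherent) emission it drops rapidly once the lag exceeds the coherence time.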
The derived coherence time is much shorter than the light travel time across the expected size of a water maser spot ($\sim$1 AU, e.g., \citealt{1981ARA&A..19..231R}). This is consistent with the expectation that observed astronomical masers are usually incoherent and that coherent maser regions, even if they exist, are very tiny. Nevertheless, if one supposes such a coherent maser, it must have uniform physical conditions in order to maintain coherency, namely an equal maser gain per gas volume. In addition, the maser amplification path lengths in a bundle should be equal to within a certain threshold across the coherent maser region. In practice, the coherent region covers only a limited volume of the whole maser region within the antenna beam, which also contains incoherent maser emission. In order to explain the very tiny area of the observed coherent maser within the whole maser region, one can suppose a locally spherical morphology of the coherent maser region, in which only the maser radiation transferred through the geometrical center of the convexity attains the maximum maser amplification length. Taking into account a maser beaming angle, $\theta_{beam}$, we consider the difference in maser path lengths between the center and the convex boundary of the coherent maser region, as shown in figure \ref{fig.region}. For a maser region with length $2R$, the difference in the maser path length, $\Delta R$, is derived as a function of $\theta_{beam}$ to be \begin{align} \Delta R &= 2R - 2R \cos \theta_{beam} \nonumber \\ &\simeq R \theta^{2}_{beam}. \label{eqn:radial} \end{align} This path length difference corresponds to a coherence time $\Delta t_{coh}$ (typically about 20 $\mu$s based on our measurements), \begin{equation} \Delta t_{coh} \sim \frac{\Delta R}{c}, \end{equation} where $c$ is the speed of light [ms$^{-1}$]. Thus $\theta_{beam}$ is derived to be \begin{equation} \theta_{beam} \simeq \left(\frac{\Delta R}{R}\right)^\frac{1}{2} . 
\label{eqn:maserbeamingangle} \end{equation} If $2R$ = 1 AU (about 500 s of light travel time), we roughly obtain the beaming angle $\theta_{beam}$ and the beaming solid angle $\Omega_{coh}$ to be about $(\frac{20 \times 10^{-6}}{250})^\frac{1}{2} = 2.8 \times 10^{-4}$ rad and $8.0 \times 10^{-8}$ sr, respectively. \\ According to \cite{elitzur}, coherent maser action is anticipated if the brightness temperature $T_b$ of a maser satisfies \begin{equation} T_b \gg T_0 \frac{4\pi}{\Omega_m}, \label{eqn:tb} \end{equation} where $\Omega_m$ is the beaming solid angle ($\approx 10^{-2} - 10^{-4}$) and \begin{equation} T_0 \equiv \frac{2 \pi h \nu^2\Delta\upsilon}{kcA}, \end{equation} where $h$ is Planck's constant [Js], $k$ is Boltzmann's constant [JK$^{-1}$], $\nu$ is the observation frequency [Hz], $\Delta\upsilon$ is the bandwidth in velocity units [ms$^{-1}$] and $A$ is the Einstein A-coefficient [s$^{-1}$]. $T_0$ is $3 \times 10^{14}$ K for a 22~GHz water vapor maser with a bandwidth of 1 km/s. Thus, the right-hand side of equation (\ref{eqn:tb}) takes values from $3.7\times10^{17}$ K to $3.7\times10^{19}$ K. We therefore estimate the brightness temperatures $T_b$ of W3 (H$_2$O), W49N and W75N on the basis of our observations to check whether they exceed this threshold. Table \ref{tbl:para} shows the estimated parameters, calculated for the peak flux densities of the three masers and the longest coherence time of W49N. Using the Rayleigh-Jeans approximation, $T_b$ can be written as \begin{equation} T_b = \frac{c^2}{2k \nu^2}I_{\nu}, \label{eqn:tb2} \end{equation} where \begin{equation} I_{\nu} = \frac{S_{\nu}}{\Omega_a}, \label{eqn:iv} \end{equation} in which $S_{\nu}$ is the flux density [Js$^{-1}$m$^{-2}$Hz$^{-1}$] and $\Omega_a$ is the solid angle of the maser region. 
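The beaming-angle arithmetic above, based on equation (\ref{eqn:maserbeamingangle}) with $\Delta R = c\,\Delta t_{coh}$, can be reproduced numerically (the helper name is ours; $\Delta t_{coh}\approx 20$ $\mu$s and $2R = 1$ AU as in the text):

```python
import math

C_KMS = 2.998e5    # speed of light [km/s]
AU_KM = 1.496e8    # astronomical unit [km]

def beaming_angle(coh_time_s, region_au=1.0):
    """theta_beam ~ sqrt(Delta R / R), with Delta R = c * Delta t_coh and
    R half of the assumed coherent-region size 2R (the relation above)."""
    R_light_s = (region_au * AU_KM / C_KMS) / 2.0  # light-crossing time of R [s]
    return math.sqrt(coh_time_s / R_light_s)
```

For $\Delta t_{coh}=20$ $\mu$s this yields $\theta_{beam}\approx 2.8\times10^{-4}$ rad, the value quoted above.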
By scaling the peak amplitudes of the masers in figures \ref{fig.w3oh.spec} to \ref{fig.w75n.spec} by the estimated SEFD (1025 Jy), the total-power flux densities of the three masers W3 (H$_2$O), W49N, and W75N are estimated. Scaling these by the correlation coefficients at a time shift of 5 $\mu$s in figures \ref{fig.w3oh.xcs} to \ref{fig.w75n.xcs}, the flux densities of the three masers are obtained under the assumption that they are coherent masers. The maser beaming angle, assuming $2R = 1$~AU, is obtained from the measured coherence times of the three masers via equation (\ref{eqn:maserbeamingangle}). Moreover, the maser beam cross-sections are obtained by multiplying the maser beaming angle by the size of 1 AU. Since the distances to W3 (H$_2$O), W49N and W75N are 1.95 $\pm$ 0.04 kpc (\citealt{Xu}), 11.11$^{+0.79}_{-0.69}$ kpc (\citealt{2013ApJ...775...79Z}) and 1.30 $\pm$ 0.07 kpc (\citealt{Rygl}), respectively, the angular sizes of the maser beam cross-sections, obtained by dividing by the distances to the masers, are of order $10^{-13}$ rad. Finally, from equations (\ref{eqn:tb2}) and (\ref{eqn:iv}) we obtain brightness temperatures of (8.47 $\pm$ 0.41) $\times10^{18}$ K for the peak flux density of W3 (H$_2$O), (8.91 $\pm$ 1.28) $\times10^{20}$ K for the longest coherence time of W49N, (6.21 $\pm$ 1.15) $\times10^{22}$ K for the peak flux density of W49N and (4.77 $\pm$ 0.54) $\times 10^{18}$ K for the peak flux density of W75N. The brightness temperatures of W3 (H$_2$O) and W75N are comparable to the threshold value for a coherent maser, but that for W49N clearly exceeds the threshold. Consequently, a coherent maser should be observable, especially toward W49N, if our hypothesis is true. 
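The Rayleigh-Jeans conversion of equations (\ref{eqn:tb2}) and (\ref{eqn:iv}) is straightforward to evaluate. The sketch below uses illustrative round numbers of the order quoted in the text (not the measured values):

```python
C_M = 2.998e8      # speed of light [m/s]
K_B = 1.381e-23    # Boltzmann constant [J/K]
JY = 1.0e-26       # 1 Jy in W m^-2 Hz^-1

def brightness_temperature(S_jy, omega_sr, nu_hz):
    """Rayleigh-Jeans brightness temperature, T_b = c^2 I_nu / (2 k nu^2),
    with specific intensity I_nu = S_nu / Omega_a."""
    I_nu = S_jy * JY / omega_sr
    return C_M ** 2 * I_nu / (2.0 * K_B * nu_hz ** 2)
```

A $\sim10^3$ Jy maser confined to a $\sim10^{-13}$ rad beam at 22 GHz yields $T_b$ of order $10^{21}-10^{22}$ K, in line with the W49N estimates above.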
\\ Note that masers may realistically become observational targets of the RadioAstron project (\citealt{kardashev}), which operates the 10 m space radio telescope Spektr-R, launched by the Russian Astro Space Center in July 2011, and can form an interferometer of extremely high angular resolution together with ground radio telescopes. In the project, astronomical H$_2$O and OH masers have been detected using baselines of up to 10 Earth diameters (\citealt{kardashev2}). For example, the angular resolution of 36 microarcseconds achieved in mapping the W3~IRS5 H$_2$O maser emission corresponds to a linear size of 10$^{7}$ km at a distance of $\sim$2~kpc. On the other hand, a supposed coherent maser region of 1 AU would form a maser beam cross-section of $4.2 \times 10^{4}$ km with the supposed beaming angle of $2.8 \times 10^{-4}$ radian. If a coherent maser region is as long as 240 AU, then the maser beam cross-section of the coherent maser will have a size of 10$^{7}$ km. Although the coherent maser region is likely limited to a tiny fraction of the volume of the whole maser gas clump, such a large coherent maser region would be detectable by the space-ground interferometer. Therefore, further observations in the RadioAstron project to trace the temporal variation of the detected maser emission are crucial for identifying maser emission with much smaller sizes and extremely high brightness temperatures, as discussed in this paper. Single-dish observations employing the XCS technique will be useful for finding maser sources exhibiting very bright and small structures with coherent properties.
There is substantial evidence for disk formation taking place during the early stages of star formation and for most stars being born in multiple systems; however, protostellar multiplicity and disk searches have been hampered by low resolution, sample bias, and variable sensitivity. We have conducted an unbiased, high-sensitivity Karl G. Jansky Very Large Array (VLA) survey toward all known protostars (n = 94) in the Perseus molecular cloud (d $\sim$ 230 pc), with a resolution of $\sim$15 AU (0.06$^{\prime\prime}$) at $\lambda$ = 8 mm. We have detected candidate protostellar disks toward 17 sources (12 of them in the Class 0 stage) and have found substructure on $<$50 AU scales for three Class 0 disk candidates, possibly evidence for disk fragmentation. We have discovered 16 new multiple systems (or new components) in this survey; the new systems have separations $<$500 AU, with three closer than 30 AU. We also found a bi-modal distribution of separations, with peaks at $\sim$75 AU and $\sim$3000 AU, suggestive of formation through two distinct mechanisms: disk fragmentation and turbulent fragmentation. The results from this survey demonstrate the necessity and utility of uniform, unbiased surveys of protostellar systems at millimeter and centimeter wavelengths.
Stars form due to the gravitational collapse of dense cores within molecular clouds. Conservation of angular momentum in this infalling material causes the formation of a rotationally-supported disk around the nascent protostar. However, this picture may be complicated by the presence of magnetic fields that can remove angular momentum from the infalling material (Allen et al. 2003). In a similar vein, wide multiple protostellar systems can form via rotational breakup of the collapsing cloud (e.g., Burkert and Bodenheimer 1993). Turbulent fragmentation (Padoan et al. 2007) has recently become a favored route for the formation of both wide and close multiples; the close multiples migrate inward from initially larger separations (e.g., Offner et al. 2010). Close multiples may also form by fragmentation of a massive disk via gravitational instability (e.g., Adams et al. 1989); however, it is unknown whether disks of sufficient mass and radius form in young protostellar systems. Thus far, sub/millimeter studies of Class 0 protostars have lacked the resolution to probe the scales of disks and close multiples, and most samples have been small and/or biased. To make a substantial leap in our knowledge of both protostellar disks and multiplicity, we have conducted the VLA Nascent Disk and Multiplicity (VANDAM) Survey toward all known protostars in the Perseus molecular cloud. The survey was conducted in the A and B configurations of the VLA at 8 mm, 1 cm, 4 cm, and 6.4 cm, observed only in wide-band continuum and reaching a high spatial resolution of 0\farcs065 (15 AU) at 8 mm. We will focus only on the 8 mm and 1 cm results in this contribution. Our sample is drawn primarily from the \spitzer\ survey by Enoch et al. (2009) as well as all known candidate first hydrostatic core objects and other deeply embedded sources.
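The quoted spatial scales follow from the small-angle relation $s[{\rm AU}] = \theta[{\rm arcsec}] \times d[{\rm pc}]$; a one-line sketch (the function name is ours):

```python
def projected_separation_au(theta_arcsec, distance_pc):
    """Small-angle relation: 1 arcsec at 1 pc subtends 1 AU (the definition
    of the parsec), so s[AU] = theta[arcsec] * d[pc]."""
    return theta_arcsec * distance_pc
```

At d $\sim$ 230 pc, the 0\farcs065 beam corresponds to $\approx$15 AU, which sets the smallest separations at which the survey can resolve multiplicity.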
The key to the success of the VANDAM survey was in its unbiased nature and the superb sensitivity of the upgraded VLA, obtaining as complete a characterization of protostellar disks and multiple systems as possible.
{Extragalactic jets originating from the central supermassive black holes of active galaxies are powerful, highly relativistic plasma outflows, emitting light from the radio up to the $\gamma$-ray regime. The details of their formation, composition and emission mechanisms are still not completely clear. The combination of high-resolution observations using very long baseline interferometry (VLBI) and multiwavelength monitoring provides the best insight into these objects. Here, such a combined study of sources of the TANAMI sample is presented, investigating the parsec-scale and high-energy properties. The TANAMI program is a multiwavelength monitoring program of a sample of the radio and $\gamma$-ray brightest extragalactic jets in the Southern sky, below $-30^\circ$\,declination. We obtain the first-ever VLBI images for most of the sources, providing crucial information on the jet kinematics and brightness distribution at milliarcsecond resolution. Two particular sources are discussed in detail: \pmn, which can be classified either as an atypical blazar or a $\gamma$-ray loud (young) radio galaxy, and Centaurus~A, the nearest radio-loud active galaxy. The VLBI kinematics of the innermost parsec of Centaurus~A's jet result in a consistent picture of an accelerated jet flow with a spine-sheath like structure. }
\label{sec:intro} Active galactic nuclei (AGN) emit light across the whole electromagnetic spectrum, often dominating the emission of their host galaxy. Due to accretion onto the central supermassive black hole (SMBH), they can produce so-called ``jets'', highly relativistic plasma outflows. They are among the most fascinating objects in the Universe, but the underlying physics is still not fully understood. This knowledge is crucial in the context of AGN feedback and multimessenger astronomy. Multiwavelength observations are useful tools to address open questions concerning the formation, acceleration, and the mechanism(s) behind the broadband emission up to the highest energies. Blazars are a subset of radio-loud AGN in which the jet is observed at a small angle to the line of sight, such that the jet emission is strongly Doppler boosted. They are among the most luminous and highly variable sources \citep{Urry1996variability}, typically showing superluminal motion in the pc-scale radio jet \citep[e.g.,][]{Lister2013}. With the detection of $\gamma$-ray emission of AGN jets by \textsl{EGRET} \citep{Hartman1999}, various models were considered in order to explain the broadband emission \citep[e.g.,][and many more]{Marscher1985,Mannheim1993,Sikora1994,Dermer1997,Dermer2012,Boettcher2013}. A typical radio to $\gamma$-ray SED of a blazar shows a double-humped spectral shape from the radio up to the $\gamma$-ray regime \citep{Fossati1998}. While the low-energy peak can be well explained by synchrotron emission, it is still discussed which emission processes are responsible for the high-energy peak. It is contentious whether it is due to synchrotron self-Compton up-scattering and/or inverse Compton scattering of external photons. Furthermore, the composition of the ejected plasma, leptons or hadrons or a combination of both, plays an important role in modeling the broadband emission. 
Single-zone leptonic models have been very successful in describing the broadband spectrum; however, they fail to explain observations revealing rapid flaring and multiple emission zones. In that case, models need to take the jet geometry into account \citep[like a spine-sheath configuration, e.g.,][]{Tavecchio2008}. Hadronic models, on the other hand, attempt to explain the high-energy hump by accelerated hadrons inducing photo-pion production, resulting in an electromagnetic cascade. The combination of simultaneous broadband data allows us to study the spectral energy distribution (SED) and the variability across the bands. This provides information on the different radiating components, e.g., the disk, the broad line region, or the jet, which together make up the overall spectrum. Since these sources show strong variability across all wavelengths, simultaneity of the data is essential, i.e., contemporaneous monitoring at different wavelengths is required. In addition to the broadband spectral data, we use high-resolution radio data from Very Long Baseline Interferometry (VLBI). It is a unique tool to probe the innermost regions of extragalactic jets at milliarcsecond (mas) scales. It provides the highest angular resolution and insight into regions close to the jet base, where the high-energy emission is thought to be produced. VLBI images reveal the morphology of the jets on (sub-)pc scales. Typical blazar jet morphologies are compact or one-sided, while for larger jet inclination angles, where relativistic beaming effects are small, both the jet and the counterjet can be detected \citep[see, e.g.,][]{Kadler2004}. Most objects detected with the Large Area Telescope (LAT) onboard the \textsl{Fermi} Gamma-ray Space Telescope are classified as blazars \citep{2fgl,3fgl}. Only a few of the so-called ``misaligned'' objects (radio galaxies) with jets seen edge-on \citep{Abdo2010_misaligned} are bright in $\gamma$-rays. 
However, these objects are of particular interest and challenge theoretical jet emission models, which typically explain the high-energy spectral component with high beaming factors. Their study can help to determine the $\gamma$-ray emission region(s) and to constrain emission models, because the broadband emission is less dominated by the beamed jet emission \citep{Abdo2010_cenacore}. In the radio regime, these misaligned objects can be divided into evolved (e.g., as Centaurus~A or M\,87) and young radio galaxies. The jets of the former have sizes up to several hundred kiloparsecs, while the latter are typically more compact and smaller than 1\,kpc. Therefore, these sources are also called Compact Symmetric Objects \citep[CSO,][]{ODea1998,Readhead1996a,Readhead1996b}. Because of their intrinsic power, theoretical models predicted $\gamma$-ray emission from CSOs \citep{Stawarz2008,Kino2007,Kino2009}, but no detection is confirmed yet. Here, the multiwavelength and VLBI study of extragalactic jets on the Southern Hemisphere is presented. This work was performed in the framework of the multiwavelength monitoring program TANAMI (Sect.~\ref{sec:tanami}). After a short introduction to the project and the sample results (Sect.~\ref{sec:tanamisources}), the properties of two particular sources are discussed, \pmn (Sect.~\ref{sec:pmn}) and Centaurus~A (Sect.~\ref{sec:cena}).
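The Doppler boosting and superluminal motion invoked above follow from two standard kinematic formulas; a sketch with illustrative values of the bulk Lorentz factor and viewing angle (not fitted values for any TANAMI source):

```python
import math

def doppler_factor(gamma, theta_deg):
    """Relativistic Doppler factor delta = 1 / (Gamma (1 - beta cos theta))."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

def apparent_speed(gamma, theta_deg):
    """Apparent transverse speed beta_app = beta sin(theta) / (1 - beta cos(theta)),
    in units of c; values > 1 appear superluminal."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    th = math.radians(theta_deg)
    return beta * math.sin(th) / (1.0 - beta * math.cos(th))
```

For $\Gamma = 10$ viewed at $5^\circ$, the emission is strongly boosted ($\delta \approx 11$) and the motion appears superluminal, while the same jet viewed edge-on is deboosted ($\delta = 1/\Gamma$), which is why $\gamma$-ray bright misaligned objects challenge strongly beamed emission models.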
It has been discussed how combined multiwavelength and VLBI studies of extragalactic jets can shed light on the physics of these powerful objects. These observations provide both monitoring of source activity and spectral changes and highly resolved images of the innermost regions, where the power is thought to be released. The monitoring of the TANAMI program is set up to address open questions in jet physics. Two TANAMI sources have been studied in great detail, namely \pmn and Cen~A. Both sources are ideal objects for studying the high-energy emission and the formation of jets. \pmn is one of the brightest sources in the $\gamma$-ray sky, but shows no major flaring activity. Its unusual broadband properties call its classification as a blazar into question and leave room for an alternative interpretation. Future observations can confirm the CSO classification. Only recently, \citet{Migliori2016} presented the first $\gamma$-ray detection of a confirmed CSO (PKS\,1718-649). Since PMN\,1603-4904 has a hard $\gamma$-ray spectrum, it is a likely candidate source for TeV instruments like H.E.S.S. or, in the future, CTA, and therefore it could play an important role in investigating the high-energy properties of misaligned sources. The sub-pc scale imaging of Cen~A provides unprecedented insights into the properties of the inner region of an AGN jet. We observe complex jet dynamics, which, together with long-term light curves, can help to constrain SED model parameters. The overall jet structure can be well explained by a spine-sheath configuration. Connecting our results for the pc-scale jet with the observations at hundreds of parsecs requires intrinsic acceleration between these scales. Individual jet features can be studied in detail. The jet widening at a distance of $\sim$0.4\,pc from the core could arise from a jet-star interaction. 
Thanks to recent developments in VLBI at millimeter wavelengths (mm-VLBI), we will be able to further study southern extragalactic jets at even higher angular resolution. Future mm-VLBI observations will include the Atacama Large Millimeter Array (ALMA) in Chile, providing for the first time enough sensitivity and suitable $(u,v)$-coverage to image sources below $-30^\circ$ declination at millimeter wavelengths. In particular, Cen~A presents an ideal target due to its proximity, such that we can obtain insights into regions that are self-absorbed at longer wavelengths and located even closer to the jet base.
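To give a sense of the scales involved (an illustrative estimate, not a number from this work), the diffraction-limited resolution of a VLBI array is roughly $\theta \approx \lambda/B_{\rm max}$; at millimeter wavelengths with Earth-scale baselines this reaches a few tens of microarcseconds. The wavelength and baseline below are assumed round numbers:

```python
import math

def vlbi_resolution_uas(wavelength_m: float, max_baseline_m: float) -> float:
    """Diffraction-limited angular resolution theta ~ lambda/B, in microarcseconds."""
    theta_rad = wavelength_m / max_baseline_m
    return theta_rad * (180.0 / math.pi) * 3600.0 * 1e6  # rad -> arcsec -> uas

# Illustrative values: 1.3 mm observing wavelength, ~10,000 km Earth-scale baseline
theta = vlbi_resolution_uas(1.3e-3, 1.0e7)
print(f"~{theta:.0f} microarcseconds")  # a few tens of uas
```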
We report the discovery of a transiting exoplanet, KELT-11b, orbiting the bright ($V=8.0$) subgiant HD 93396. A global analysis of the system shows that the host star is an evolved subgiant star with $\teff = 5370\pm51$ K, $M_{*} = 1.438_{-0.052}^{+0.061}$\msun, $R_{*} = 2.72_{-0.17}^{+0.21}$\rsun, \logg$= 3.727_{-0.046}^{+0.040}$, and \feh$ = 0.180\pm0.075$. The planet is a low-mass gas giant in a $P = 4.736529\pm0.00006$ day orbit, with $M_{P} = 0.195\pm0.018$\mj, $R_{P}= 1.37_{-0.12}^{+0.15}$\rj, $\rho_{P} = 0.093_{-0.024}^{+0.028}$ g cm$^{-3}$, surface gravity $\log{g_{P}} = 2.407_{-0.086}^{+0.080}$, and equilibrium temperature $T_{eq} = 1712_{-46}^{+51}$ K. KELT-11 is the brightest known transiting exoplanet host in the southern hemisphere by more than a magnitude, and is the 6th brightest transit host to date. The planet is one of the most inflated planets known, with an exceptionally large atmospheric scale height (2763 km), and an associated size of the expected atmospheric transmission signal of 5.6\%. These attributes make the KELT-11 system a valuable target for follow-up and atmospheric characterization, and it promises to become one of the benchmark systems for the study of inflated exoplanets.
The discovery of transiting exoplanets has been marked by two eras. The first era began with observations showing that the planet HD 209458b, first discovered by the radial velocity (RV) method, transited its host star \citep{Henry:2000, Charbonneau:2000}, and with the discovery of the first planet with the transit method, OGLE-TR-56b \citep{Udalski:2002, Konacki:2003}. That began a period of rapid discovery of new transiting exoplanets using small, automated, and dedicated telescopes, most notably by the HATNet \citep{Bakos:2004}, SuperWASP \citep{Pollacco:2006}, TrES \citep{Alonso:2004}, and XO \citep{McCullough:2006} projects. The second era was marked by the 2007 launch of the CoRoT mission \citep{Rouan:1998}, and then in 2009 by the launch of the Kepler mission \citep{Borucki:2010}. Space-based detection of transiting planets was a huge leap forward, especially with the ability to detect smaller and longer-period transiting planets. Although the Kepler mission has been tremendously fruitful with the number and variety of detected planets, especially for the determination of the underlying population and demographics of exoplanets, small ground-based telescopes have continued to make many important discoveries. Notably, the population of transiting planets discovered by the ground-based surveys tends to consist of large planets with short orbital periods orbiting bright stars, due to selection and observational biases \citep{Pepper:2003,Pepper:2005,Gaudi:2005,Pont:2006,Fressin:2007}. These planets, unlike the vast majority of the Kepler planets, offer great potential for detailed characterization of the atmospheres of exoplanets. The bulk of our understanding of exoplanetary atmospheres comes from observations of planets with host stars with $V < 13$ \citep{Sing:2016,Seager:2010}.
For that reason, the ongoing discoveries from ground-based transit surveys will continue to provide great value, at least until the launch of the Transiting Exoplanet Survey Satellite (TESS) mission \citep{Ricker:2015}. Among these projects is the KELT survey. KELT (the Kilodegree Extremely Little Telescope) fills a niche in planet discovery space by observing stars generally brighter than those observed by the other ground-based surveys, with a target magnitude regime of $7.5 < V < 10.5$. The KELT-North telescope \citep{Pepper:2007} has been operating since 2006, and has discovered seven exoplanets to date. The KELT-South telescope \citep{Pepper:2012} has been operating since 2009. It is located in Sutherland, South Africa, and surveys a large fraction of the southern hemisphere, where no transiting planets have been discovered with a host star brighter than $V=9.2$. KELT-South has discovered or co-discovered three transiting planets to date: KELT-10b \citep{Kuhn:2015}, WASP-122b/KELT-14b, and KELT-15b \citep{Rodriguez:2015}. In this paper, we report the discovery of a new exoplanet, KELT-11b. The discovery of KELT-11b was enabled by a collaboration between the KELT team and the Retired A-star Program of the California Planet Search (CPS) team, an RV survey that has discovered 34 exoplanets \citep{Johnson:2011}. This discovery attests to the value of combining data from multiple surveys. This planet has one of the brightest host stars in the sky for a transiting planet, and is by far the brightest transit host in the southern hemisphere. It is also extraordinarily inflated, and one of the lowest-density planets known.
\label{sec:sum} KELT-11b is an extremely inflated planet (Figure \ref{fig:ScaleHeight}), with a density of just $0.093^{+0.028}_{-0.024}$ g cm$^{-3}$. This makes KELT-11b the third-lowest-density planet ever discovered with a precisely measured mass and radius (i.e., with parameter uncertainties $<$20\%). The only comparable planets are WASP-94Ab \citep{Neveu-VanMalle:2014} and Kepler-12b \citep{Fortney:2011}, but both orbit significantly fainter hosts. Given its mass and level of irradiation ($1.94\times10^9$ erg s$^{-1}$ cm$^{-2}$), KELT-11b has a measured radius about twice as large as predicted by the mass-radius-incident flux relation of \citet{Weiss:2013}. Another way of placing this planet in context is to note that currently only a handful of hot Jupiters transit bright stars (Figure \ref{fig:ScaleHeight}). Of these, KELT-11b has by far the largest atmospheric scale height, at 2763 km, assuming uniform heat redistribution and calculating the scale height along the lines of \citet{Winn:2010b}. The ratio of scale height to planet radius is 2.8\%, with an expected size of the signal from transmission spectroscopy of 5.6\%, making KELT-11b particularly amenable to atmospheric characterization via transmission or emission spectroscopy. The expected depth of the secondary eclipse is $1.2\alpha_{\rm therm} \times 10^{-3}$, following the calculations of \citet{Siverd:2012}. Ultimately, the bright host star, the inflated radius, and the high equilibrium temperature make KELT-11b one of the best targets discovered to date for transmission spectroscopy. For example, detailed studies of KELT-11b will allow its chemical composition to be determined, which in turn will constrain models of its formation and evolution \cite[e.g.,][]{Madhusudhan:2014}. Furthermore, the source of inflation in hot Jupiters can be investigated in the extreme case of KELT-11b.
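The quoted scale height can be roughly reproduced from the system parameters via the usual $H = k_B T_{\rm eq}/(\mu g)$. A minimal sketch, assuming a hydrogen-dominated atmosphere with mean molecular weight $\mu \approx 2$ amu (our assumption; the paper follows \citet{Winn:2010b}):

```python
# Back-of-envelope scale height for KELT-11b from the quoted fit parameters.
K_B = 1.380649e-23   # Boltzmann constant, J/K
AMU = 1.66054e-27    # atomic mass unit, kg
R_JUP = 7.1492e7     # Jupiter radius, m

T_eq = 1712.0         # K, quoted equilibrium temperature
log_g_cgs = 2.407     # quoted planetary surface gravity (cgs)
mu = 2.0              # amu, assumed H2-dominated atmosphere
R_p = 1.37 * R_JUP    # quoted planet radius

g = 10.0 ** log_g_cgs / 100.0    # cm/s^2 -> m/s^2
H = K_B * T_eq / (mu * AMU * g)  # scale height, m

print(f"H ~ {H/1e3:.0f} km")                                 # near the quoted 2763 km
print(f"H/R_p ~ {H/R_p*100:.1f}%")                           # ~2.8%
print(f"transmission signal 2H/R_p ~ {2*H/R_p*100:.1f}%")    # near the quoted 5.6%
```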
Future observations of KELT-11b with facilities like the Hubble Space Telescope, Spitzer, and JWST will reveal the structure and content of its atmosphere, and will establish KELT-11b as the benchmark sub-Saturn exoplanet. \begin{figure} \centering \includegraphics[width=0.9\columnwidth,angle=0,trim={0 0 0 0},clip]{s_height.eps} \caption{Estimated atmospheric scale height of known transiting hot Jupiters versus the $V$-band brightness of the host star. KELT-11b is highlighted by a filled green circle, while other discoveries from the KELT survey are marked by filled red circles. The filled blue circles mark the two well-studied benchmark hot Jupiters HD 209458b and HD 189733b. The open circles mark other known transiting exoplanets from the NASA Exoplanet Archive (accessed on 2016 May 28).} \label{fig:ScaleHeight} \end{figure} It is also noteworthy that this planet has the shallowest transit depth (2.69 mmag) of any planet discovered by a ground-based transit survey, with the next-shallowest such planet having a transit depth of 3.3 mmag \citep[HAT-P-11b;][]{Bakos:2010}. Surveys like HAT, KELT, and SuperWASP are still increasing their photometric precision, and although the TESS survey will provide higher photometric precision over the entire sky, the long time baselines of the ground-based surveys with high-quality photometry can help confirm planets with periods longer than the duration of the TESS observations. As described in \S \ref{sec:evol}, the KELT-11 system exists in a very brief stage of the host star's evolution. The star has exhausted its core hydrogen and is contracting, such that it is about to begin (or may already have begun) shell hydrogen fusion. This stage is very short-lived (roughly 60 Myr), and since transiting planets are already rare, finding one with a host star in such a stage is quite fortunate.
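The record-shallow depth quoted above follows almost directly from the fitted radius ratio; a rough geometric estimate (our simplification, ignoring limb darkening) lands close to the fitted 2.69 mmag:

```python
import math

R_SUN = 6.957e8   # solar radius, m
R_JUP = 7.1492e7  # Jupiter radius, m

R_star = 2.72 * R_SUN   # quoted stellar radius
R_p = 1.37 * R_JUP      # quoted planet radius

depth = (R_p / R_star) ** 2                       # fractional flux drop (R_p/R_*)^2
depth_mmag = -2.5 * math.log10(1.0 - depth) * 1e3 # flux drop -> millimagnitudes
print(f"depth ~ {depth_mmag:.2f} mmag")           # geometric estimate, near 2.69 mmag
```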
Once KELT-11 reaches the base of the giant branch, it will engulf KELT-11b, possibly producing a spectacular transient signal \citep{Metzger:2012}. Thus, the detection of this one system (because it occupies such a special and short-lived period in the evolution of the star) provides an example of a direct precursor to such planet-engulfment events and transients. It also provides an estimate of the frequency of such transient events, which can be used as a prediction for, e.g., LSST. The recent discovery of another transiting giant planet around a subgiant star, K2-39b, is an additional contribution to this small sample \citep{VanEylen:2016}. In addition to the potential value of KELT-11b for the characterization of exoplanet atmospheres and the frequency of planets orbiting higher-mass stars, this discovery illustrates certain aspects of the current state of transit discovery. This planet was discovered through the combination of both transit and RV survey data. The KELT survey observations (\S \ref{sec:ks}) and follow-up photometry (\S \ref{sec:Follow-up_Photometry}) enabled us to identify this target as a good candidate, but with such a low-mass planet, our typical follow-up RV methods would have been hard-pressed to achieve dynamical confirmation on their own. However, the addition of the CPS survey data provided the evidence that this was a real planet, prompting us to gather the additional APF observations that enabled reliable confirmation. Furthermore, the CPS RV observations by themselves were not sufficient to verify HD 93396 as a planet host without the accompanying transit evidence from KELT. We believe that this synergy between multiple types of survey data will be of great value over the next several years, especially with the expected launch of the TESS mission and the availability of nearly all-sky high-precision photometry.
Isomerism is one of the oldest and most important concepts in chemistry, dating back to the 1820s when Liebig and W\"ohler first demonstrated that silver fulminate and silver cyanate -- two compounds with the same elemental formula -- have different physical properties. These findings led Berzelius to propose the concept of ``isomer'' in 1831.~\cite{esteban:1201} Because it is fundamentally linked to molecular structure and chemical bonding, isomerism --- particularly of small molecules --- has long fascinated experimentalists and theoreticians alike. Astrochemistry is one of the applied disciplines where structural isomers are of great importance because chemistry in the interstellar medium (ISM) is kinetically, rather than thermodynamically, controlled.\cite{hirota:717} Consequently, the abundances of isomers (e.g., HCN vs.~HNC)\cite{Loison:2014br} in astronomical sources often provide a sensitive probe of the chemical evolution and physical conditions that are operative there. In many cases, isomeric abundance ratios deviate significantly from predictions based on thermodynamic considerations alone; in some astronomical sources, a higher-energy isomer may even be more abundant than the most stable isomer.\cite{Lovas:2010ew,Zaleski:2013bc} A recent illustration of this point is provided by studies of three stable singlet H$_2$C$_3$O isomers in the Sagittarius B2(N), hereafter Sgr B2(N), star-forming region: Loomis et al.\cite{Loomis:2015jh} found no evidence for $l$-propadienone (H$_2$CCCO) in this source, but rotational lines of the higher energy isomer, cyclopropenone (calculated to lie 6~kcal/mol above ground),\cite{Zhou:2008un} are readily observed. Their results show that $l$-propadienone is at least an order of magnitude less abundant than cyclopropenone.
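To see why kinetic control matters, note that thermal equilibrium at typical cloud temperatures would leave essentially no population in an isomer lying 6~kcal/mol above ground; a back-of-the-envelope Boltzmann factor (the temperature below is our illustrative choice, not a value from this work) makes the point:

```python
import math

R = 8.314                 # gas constant, J/(mol K)
delta_E = 6.0 * 4184.0    # 6 kcal/mol (quoted energy above ground), in J/mol
T = 100.0                 # K, an illustrative cloud temperature (our assumption)

# Equilibrium population of the higher-energy isomer relative to ground
ratio = math.exp(-delta_E / (R * T))
print(f"equilibrium abundance ratio ~ {ratio:.1e}")  # vanishingly small
```

Yet cyclopropenone is readily observed while $l$-propadienone is not, so grain-surface or gas-phase kinetics, not thermodynamics, must set the observed ratios.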
Propynal [HC(O)CCH], a low-lying isomer of comparable stability to $l$-propadienone,\cite{komornicki:1652,maclagan:185,ekern:16109,Zhou:2008un,karton:22} is more than an order of magnitude more abundant than cyclopropenone. The authors suggest that these large variations in abundance could arise from different formation pathways on the surface of interstellar dust grains, for which some supporting laboratory evidence has been found from surface reaction experiments.\cite{Zhou:2008un} \footnotetext{$^a$~National Radio Astronomy Observatory, 520 Edgemont Rd, Charlottesville, VA USA 22903.} \footnotetext{$^b$~Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA USA 02138.} \footnotetext{$^c$~I. Physikalisches Institut, Universit\"{a}t K\"{o}ln, K\"{o}ln Germany 50937.} \footnotetext{$^d$~Max-Planck-Institut f\"{u}r Extraterrestrische Physik, Garching, Bayern Germany.} \footnotetext{$^e$~Department of Chemistry, University of Virginia, 759 Madison Ave, Charlottesville, VA 22903.} \footnotetext{$^f$~Aerodyne Research, Inc., Billerica, MA USA.} \footnotetext{$^g$~School of Engineering and Applied Sciences, Harvard University, 29 Oxford Street, Cambridge, MA USA 02138.} \footnotetext{$^{\ddag}$~B.A.M. is a Jansky Fellow of the National Radio Astronomy Observatory.} Because nearly all well-known interstellar species have high-energy isomers with significant barriers to either isomerization or dissociation, there is much applied interest in identifying and precisely characterizing the rotational spectra of these metastable forms.
Isocyanic acid (HNCO) was one of the first polyatomic molecules detected in space,\cite{snyder:619,buhl:625} but owing to the lack of laboratory rest frequencies, it was not until fairly recently that its higher energy isomers cyanic acid, HOCN (24.7~kcal/mol above ground),\cite{Schuurman:2004je} and fulminic acid, HCNO (70.7~kcal/mol), were observed in interstellar molecular clouds.\cite{brunken:880,marcelino:l27} Soon after its initial astronomical identification,\cite{brunken:880} rotational lines of HOCN were reported in at least five other sources,\cite{brunken:2010hp,marcelino:a105} findings which suggest that this isomer and HCNO are common constituents of the ISM. Abundance ratios and formation pathways of the HNCO isomers as a function of visual extinction, density, and temperature have now been the subject of several chemical modeling studies.\cite{marcelino:a105,quan:2101,jimenez:19} Among the four singlet isomers, only isofulminic acid (HONC) has yet to be detected in space. Apart from their astronomical interest, systematic studies of the structure, properties, and formation pathways of isomers are of fundamental importance. Such studies provide insight into a wide variety of bonding preferences (e.g., bond order), enable comparative studies of isovalent systems, and provide stringent tests for quantum chemical calculations. For example, rotational spectroscopy measurements of the elusive HONC (84~kcal/mol) -- the highest energy singlet isomer of HNCO\cite{mladenovic:174308} -- are consistent with a structure containing a polar C-N triple bond\cite{poppinger:7806} and a significant HOC bending angle (105$^{\circ}$), in good agreement with a high-level coupled cluster calculation performed in conjunction with the experimental work. 
Furthermore, simultaneous detection of several isomers under similar experimental conditions may yield insight into isomerization pathways, and provide estimates of relative and absolute abundances, so follow-up experiments can be undertaken at other wavelengths. Like their isovalent oxygen counterparts, thiocyanates (R-SCN) and isothiocyanates (R-NCS) have a rich history which is closely linked to isomerization. In one of the first investigations of these compounds, Hofmann established in 1868 that methyl thiocyanate rearranges to form methyl isothiocyanate at high temperature.\cite{hofmann:201} Several years later, Billeter\cite{billeter:462} and Gerlich\cite{gerlich:80} independently observed that allyl thiocyanate (H$_2$C=CH-CH$_2$-NCS) thermally rearranges into allyl isothiocyanate, known as mustard oil,\cite{smith:3076} the compound largely responsible for the characteristic hot, pungent flavor of vegetables such as horseradish and mustard leaves. Today, the chemistry of thiocyanates and their derivatives is extensive and highly varied; compounds containing this chromophore are used in applications ranging from pharmaceuticals, dyes, and synthetic fiber to fungicides and photography, among others. Isothiocyanic acid (HNCS), the simplest isothiocyanate, is calculated to be the most stable molecule in the [H, N, C, S] family, followed by thiocyanic acid (HSCN), lying about 6~kcal/mol above HNCS.\cite{bak:666,Wierzejewska:2003dd} These are followed by thiofulminic acid (HCNS) and isothiofulminic acid (HSNC), which are comparably stable to each other at 35.1~kcal/mol and 36.6~kcal/mol above HNCS, respectively (this work). Each isomer is calculated to possess a singlet ground state with a nearly linear heavy atom backbone, a planar equilibrium structure, and a large permanent electric dipole moment. Fig.~\ref{structures} summarizes a number of the salient properties of the four singlet HNCS isomers. 
\begin{figure*} \centering \includegraphics[width=\textwidth]{EgyDiagram3.pdf} \caption{Relative energies and structures of the [H, N, C, S] isomer family. Energies are calculated at the ae-CCSD(T)/cc-pwCVQZ level and corrected for zero-point vibrational contributions calculated at the fc-CCSD(T)/cc-pV(Q+$d$)Z level. Semi-experimental equilibrium ($r_e^{SE}$) structures, obtained in this work, are indicated, along with associated uncertainties (1$\sigma$) derived from a least-squares optimization. Bond lengths are in \r{A}, bond angles are in degrees. Square brackets indicate the structural parameter was fixed to the calculated value.} \label{structures} \end{figure*} With the exception of HNCS itself, our knowledge of the [H, N, C, S] isomeric system is limited. HNCS was first detected more than 60 years ago by Beard and Dailey,~\cite{Beard:1950ez} who determined its molecular structure and dipole moment from an analysis of its rotational spectrum and those of its more abundant isotopologues. Since then, HNCS and its isotopic species have been the subject of many high-resolution studies, from the microwave to the far-infrared region;\cite{yamada:189,Yamada:1980ky} much of this work was aimed at understanding how large-amplitude bending vibrations affect structural rigidity. HNCS has also long been known as a constituent of the ISM; it was detected nearly 40 years ago via several $a$-type, $K_a=0$ rotational transitions toward the Sgr B2 region.\cite{Frerking:1979yd} Until quite recently, there was little spectroscopic information on HSCN, HCNS, or HSNC. HSCN and HSNC were first characterized experimentally at low spectral resolution by matrix-IR spectroscopy,\cite{Wierzejewska:227} in which both isomers were formed by UV photolysis of HNCS in solid argon and nitrogen matrices, but HCNS does not appear to have been studied at any wavelength.
The microwave spectrum of HSCN was recently reported by several co-authors of this study,\cite{Brunken:2009fh} and soon after it was detected in the ISM.\cite{Halfen:2009it,adande:561} Owing to the absence of rotational spectra for HCNS and HSNC, astronomical searches had not been possible. Since sulfur is less electronegative and has a larger atomic radius than oxygen, the energetics, structure, and bonding of analogous [H, N, C, S] and [H, N, C, O] isomers are predicted to differ. While the [H, N, C, S] isomers have the same energy ordering as their oxygen counterparts, the spread in energy is more than two times smaller (37 vs.~84~kcal/mol).\cite{Schuurman:2004je,Wierzejewska:2003dd} Because the reservoir of sulfur in space is not well established in dense molecular clouds,\cite{wakelam:159} the abundances of the higher-energy [H, N, C, S] isomers may differ significantly from those of the HNCO isomers. In this paper we report a comprehensive laboratory study of the microwave spectra of HCNS and HSNC, the two remaining singlet isomers of HNCS, along with detection of their singly-substituted isotopic species and a number of rare isotopologues of HSCN, using both chirped-pulse Fourier-transform microwave (CP-FTMW) spectroscopy and cavity-FTMW spectroscopy. Because all four isomers can be simultaneously observed under the same experimental conditions, it has been possible to derive abundances relative to ground state HNCS, and consequently to infer the dominant chemical reaction that yields HSCN. The relatively low abundance found for HSNC, with respect to the isoenergetic HCNS, may indicate a low barrier to isomerization. By correcting the experimental rotational constants of each isotopic species for the effects of zero-point vibration calculated theoretically, precise semi-experimental equilibrium structures ($r_e^{\rm SE}$) have been derived for each isomer.
Finally, the results of an astronomical search using observations toward Sgr B2(N) from the Green Bank Telescope (GBT) Prebiotic Interstellar Molecular Survey (PRIMOS) project are reported. Although the recent millimeter study by Halfen et al.\cite{Halfen:2009it} toward this source found that HNCS and HSCN are present in nearly equal column density, in the PRIMOS survey, firm evidence is found for HSCN alone, with only a tentative detection of HNCS. Lines of HSCN are observed in both absorption and emission, an indication that the excitation of this isomer is not well described by a single excitation temperature.
In the present investigation, the pure rotational spectra of HSCN, HCNS, and HSNC were recorded by a combination of chirped-pulse and cavity FTMW spectroscopy. Rotational constants were obtained from fits to these spectra, and experimental $r_0$ structures were derived. Using high-level quantum-chemical calculations, these structures were corrected for zero-point vibrational effects, and semi-experimental equilibrium $r_e^\text{SE}$ structures were determined for these three species, as well as for HNCS. Now that four isomers of the [H, N, C, S] system up to 37\,kcal/mol above ground have been characterized experimentally, even more energetic isomers may be within reach. On the singlet potential energy surface, Wierzejewska and Moc \cite{Wierzejewska:2003dd} calculate three ring molecules, $c$-C(H)NS, $c$-S(H)CN, and $c$-N(H)CS (i.e., where the hydrogen atom is bound to either of the heavy atoms forming a three-membered ring), to energetically follow the four chains characterized here at roughly 45, 54, and 72\,kcal/mol above ground, respectively. Given that isomers of [H, N, C, O] as high as 84 kcal/mol above ground have already been observed,\cite{mladenovic:174308} any of the three [H, N, C, S] ring isomers seem plausible candidates for future laboratory microwave searches. Even triplet species might be amenable to detection: the lowest triplet species, branched C(H)NS, is predicted at 63 kcal/mol, followed by the bent chains HNCS and HCNS at 67 and 80\,kcal/mol, respectively.\cite{Wierzejewska:2003dd} In the course of the present study, we have also searched the publicly available PRIMOS centimeter-wave survey of Sgr B2(N), and find no evidence for a cold population of HCNS or HSNC, and only a tentative detection of weak emission from HNCS. Lines of HSCN are clearly observed, and evidence is found for weak maser activity in its $1_{0,1} - 0_{0,0}$ transition near 11469 MHz.
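The connection between fitted rotational constants and the line frequencies searched for astronomically is direct: for a near-prolate asymmetric rotor such as HSCN, the $1_{0,1} - 0_{0,0}$ transition falls at $B+C$ (neglecting centrifugal distortion). A minimal sketch with illustrative constants, chosen only so their sum reproduces the $\sim$11469 MHz line quoted above (they are not the published fit values):

```python
# Near-prolate asymmetric top: nu(1_{01} <- 0_{00}) = B + C (distortion neglected)
B = 5795.0  # MHz, illustrative value, not the published HSCN constant
C = 5674.0  # MHz, illustrative value

nu = B + C
print(f"1_01 - 0_00 near {nu:.0f} MHz")  # ~11469 MHz
```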
Future astronomical searches for HCNS and HSNC in molecule-rich sources, however, are clearly warranted in the millimeter-wave regime. While the data obtained here are not sufficient to predict the millimeter-wave spectra of these two species accurately enough for astronomical searches, they are indispensable in guiding laboratory searches at still higher frequencies. To further explore structural isomerism in systems analogous to the [H, N, C, O] family, comprehensive studies of molecules in which carbon and/or nitrogen are replaced with their heavier counterparts, such as the [H, Si, N, O] and [H, C, P, O] families, may be worthwhile.\cite{raghunath_JPCA_107_11497_2003,fu_CPL_361_62_2002} As a first step in this direction, the corresponding lowest-energy silicon and phosphorus analogs, HNSiO and HPCO, were recently detected by their pure rotational spectra.\cite{thorwirth_HNSiO_2015} Owing to the relatively small energy separations between isomers, and because there are very few experimental studies of these heavier analogs, members of these (seemingly simple) four-atom molecular systems should provide a fertile testbed for further experimental study of molecular isomerism.